{"_id": "632589828c8b9fca2c3a59e97451fde8fa7d188d", "title": "A hybrid of genetic algorithm and particle swarm optimization for recurrent network design", "text": "An evolutionary recurrent network which automates the design of recurrent neural/fuzzy networks using a new evolutionary learning algorithm is proposed in this paper. This new evolutionary learning algorithm is based on a hybrid of genetic algorithm (GA) and particle swarm optimization (PSO), and is thus called HGAPSO. In HGAPSO, individuals in a new generation are created, not only by crossover and mutation operation as in GA, but also by PSO. The concept of elite strategy is adopted in HGAPSO, where the upper-half of the best-performing individuals in a population are regarded as elites. However, instead of being reproduced directly to the next generation, these elites are first enhanced. The group constituted by the elites is regarded as a swarm, and each elite corresponds to a particle within it. In this regard, the elites are enhanced by PSO, an operation which mimics the maturing phenomenon in nature. These enhanced elites constitute half of the population in the new generation, whereas the other half is generated by performing crossover and mutation operation on these enhanced elites. HGAPSO is applied to recurrent neural/fuzzy network design as follows. For recurrent neural network, a fully connected recurrent neural network is designed and applied to a temporal sequence production problem. For recurrent fuzzy network design, a Takagi-Sugeno-Kang-type recurrent fuzzy network is designed and applied to dynamic plant control. The performance of HGAPSO is compared to both GA and PSO in these recurrent networks design problems, demonstrating its superiority."} {"_id": "86e87db2dab958f1bd5877dc7d5b8105d6e31e46", "title": "A Hybrid EP and SQP for Dynamic Economic Dispatch with Nonsmooth Fuel Cost Function", "text": "Dynamic economic dispatch (DED) is one of the main functions of power generation operation and control. It determines the optimal settings of generator units with predicted load demand over a certain period of time. The objective is to operate an electric power system most economically while the system is operating within its security limits. This paper proposes a new hybrid methodology for solving DED. The proposed method is developed in such a way that a simple evolutionary programming (EP) is applied as a based level search, which can give a good direction to the optimal global region, and a local search sequential quadratic programming (SQP) is used as a fine tuning to determine the optimal solution at the final. Ten units test system with nonsmooth fuel cost function is used to illustrate the effectiveness of the proposed method compared with those obtained from EP and SQP alone."} {"_id": "2a047d8c4c2a4825e0f0305294e7da14f8de6fd3", "title": "Genetic Fuzzy Systems - Evolutionary Tuning and Learning of Fuzzy Knowledge Bases", "text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the genetic fuzzy systems evolutionary tuning and learning of fuzzy knowledge bases. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. 
{"_id": "506172b0e0dd4269bdcfe96dda9ea9d8602bbfb6", "title": "A modified particle swarm optimizer", "text": "In this paper, we introduce a new parameter, called inertia weight, into the original particle swarm optimizer. Simulations have been done to illustrate the significant and effective impact of this new parameter on the particle swarm optimizer."} {"_id": "51317b6082322a96b4570818b7a5ec8b2e330f2f", "title": "Identification and control of dynamic systems using recurrent fuzzy neural networks", "text": "This paper proposes a recurrent fuzzy neural network (RFNN) structure for identifying and controlling nonlinear dynamic systems. The RFNN is inherently a recurrent multilayered connectionist network for realizing fuzzy inference using dynamic fuzzy rules. Temporal relations are embedded in the network by adding feedback connections in the second layer of the fuzzy neural network (FNN). The RFNN expands the basic ability of the FNN to cope with temporal problems. In addition, results for the FNN fuzzy inference engine, universal approximation, and convergence analysis are extended to the RFNN. For the control problem, we present the direct and indirect adaptive control approaches using the RFNN. Based on the Lyapunov stability approach, rigorous proofs are presented to guarantee the convergence of the RFNN by choosing appropriate learning rates. Finally, the RFNN is applied in several simulations (time series prediction, identification, and control of nonlinear systems). The results confirm the effectiveness of the RFNN."} {"_id": "857a8c6c46b0a85ed6019f5830294872f2f1dcf5", "title": "Separate face and body selectivity on the fusiform gyrus.", "text": "Recent reports of a high response to bodies in the fusiform face area (FFA) challenge the idea that the FFA is exclusively selective for face stimuli. We examined this claim by conducting a functional magnetic resonance imaging experiment at both standard (3.125 x 3.125 x 4.0 mm) and high resolution (1.4 x 1.4 x 2.0 mm). In both experiments, regions of interest (ROIs) were defined using data from blocked localizer runs. Within each ROI, we measured the mean peak response to a variety of stimulus types in independent data from a subsequent event-related experiment. Our localizer scans identified a fusiform body area (FBA), a body-selective region reported recently by Peelen and Downing (2005) that is anatomically distinct from the extrastriate body area. The FBA overlapped with and was adjacent to the FFA in all but two participants. Selectivity of the FFA to faces and FBA to bodies was stronger for the high-resolution scans, as expected from the reduction in partial volume effects. When new ROIs were constructed for the high-resolution experiment by omitting the voxels showing overlapping selectivity for both bodies and faces in the localizer scans, the resulting FFA* ROI showed no response above control objects for body stimuli, and the FBA* ROI showed no response above control objects for face stimuli. These results demonstrate strong selectivities in distinct but adjacent regions in the fusiform gyrus for only faces in one region (the FFA*) and only bodies in the other (the FBA*)."} {"_id": "12f107016fd3d062dff88a00d6b0f5f81f00522d", "title": "Scheduling for Reduced CPU Energy", "text": "The energy usage of computer systems is becoming more important, especially for battery-operated systems. Displays, disks, and cpus, in that order, use the most energy.
Reducing the energy used by displays and disks has been studied elsewhere; this paper considers a new method for reducing the energy used by the cpu. We introduce a new metric for cpu energy performance, millions-of-instructions-per-joule (MIPJ). We examine a class of methods to reduce MIPJ that are characterized by dynamic control of system clock speed by the operating system scheduler. Reducing clock speed alone does not reduce MIPJ, since to do the same work the system must run longer. However, a number of methods are available for reducing energy with reduced clock-speed, such as reducing the voltage [Chandrakasan et al 1992][Horowitz 1993] or using reversible [Younis and Knight 1993] or adiabatic logic [Athas et al 1994].\n What are the right scheduling algorithms for taking advantage of reduced clock-speed, especially in the presence of applications demanding ever more instructions-per-second? We consider several methods for varying the clock speed dynamically under control of the operating system, and examine the performance of these methods against workstation traces. The primary result is that by adjusting the clock speed at a fine grain, substantial CPU energy can be saved with a limited impact on performance."} {"_id": "1ae0ac5e13134df7a0d670fc08c2b404f1e3803c", "title": "A data mining approach for location prediction in mobile environments", "text": "Mobility prediction is one of the most essential issues that need to be explored for mobility management in mobile computing systems. In this paper, we propose a new algorithm for predicting the next inter-cell movement of a mobile user in a Personal Communication Systems network. In the first phase of our three-phase algorithm, user mobility patterns are mined from the history of mobile user trajectories. In the second phase, mobility rules are extracted from these patterns, and in the last phase, mobility predictions are accomplished by using these rules. The performance of the proposed algorithm is evaluated through simulation as compared to two other prediction methods. The performance results obtained in terms of Precision and Recall indicate that our method can make more accurate predictions than the other methods."} {"_id": "7d3c9c4064b588d5d8c7c0cb398118aac239c71b", "title": "$\\mathsf {pSCAN}$: Fast and Exact Structural Graph Clustering", "text": "We study the problem of structural graph clustering, a fundamental problem in managing and analyzing graph data. Given an undirected unweighted graph, structural graph clustering is to assign vertices to clusters, and to identify the sets of hub vertices and outlier vertices as well, such that vertices in the same cluster are densely connected to each other while vertices in different clusters are loosely connected. In this paper, we develop a new two-step paradigm for scalable structural graph clustering based on our three observations. Then, we present a $\\mathsf {pSCAN}$ approach, within the paradigm, aiming to reduce the number of structural similarity computations, and propose optimization techniques to speed up checking whether two vertices are structure-similar. $\\mathsf {pSCAN}$ outputs exactly the same clusters as the existing approaches $\\mathsf {SCAN}$ and $\\mathsf {SCAN\\text{++}}$, and we prove that $\\mathsf {pSCAN}$ is worst-case optimal.
Moreover, we propose efficient techniques for updating the clusters when the input graph dynamically changes, and we also extend our techniques to other similarity measures, e.g., Jaccard similarity. Performance studies on large real and synthetic graphs demonstrate the efficiency of our new approach and our dynamic cluster maintenance techniques. Notably, for the Twitter graph with 1 billion edges, our approach takes 25 minutes while the state-of-the-art approach cannot finish even after 24 hours."} {"_id": "305c45fb798afdad9e6d34505b4195fa37c2ee4f", "title": "Synthesis, properties, and applications of iron nanoparticles.", "text": "Iron, the most ubiquitous of the transition metals and the fourth most plentiful element in the Earth's crust, is the structural backbone of our modern infrastructure. It is therefore ironic that as a nanoparticle, iron has been somewhat neglected in favor of its own oxides, as well as other metals such as cobalt, nickel, gold, and platinum. This is unfortunate, but understandable. Iron's reactivity is important in macroscopic applications (particularly rusting), but is a dominant concern at the nanoscale. Finely divided iron has long been known to be pyrophoric, which is a major reason that iron nanoparticles have not been more fully studied to date. This extreme reactivity has traditionally made iron nanoparticles difficult to study and inconvenient for practical applications. Iron however has a great deal to offer at the nanoscale, including very potent magnetic and catalytic properties. Recent work has begun to take advantage of iron's potential, and work in this field appears to be blossoming."} {"_id": "9f234867df1f335a76ea07933e4ae1bd34eeb48a", "title": "Automatic Machine Translation Evaluation: A Qualitative Approach", "text": "NOTICE. Access to this thesis is conditioned on acceptance of the following terms of use: distribution of this thesis via the TDX service (www.tdx.cat) and the Dip\u00f2sit Digital de la UB (diposit.ub.edu) has been authorized by the holders of the intellectual property rights solely for private uses within research and teaching activities. Its reproduction for profit is not authorized, nor is its distribution or availability from a site other than the TDX service or the Dip\u00f2sit Digital de la UB. Presenting its content in a window or frame external to TDX or the Dip\u00f2sit Digital de la UB (framing) is not authorized. This reservation of rights covers both the thesis presentation summary and its contents. When using or citing parts of the thesis, the name of the author must be indicated."} {"_id": "5ebfcd50c56e51aada28ccecd041db5e002f5862", "title": "Gualzru's Path to the Advertisement World", "text": "This paper describes the genesis of Gualzru, a robot commissioned by a large Spanish technological company to provide advertisement services in open public spaces. Gualzru has to stand by at an interactive panel observing the people passing by and, at some point, select a promising candidate and approach her to initiate a conversation. After a small verbal interaction, the robot is supposed to convince the passerby to walk back to the panel, leaving the rest of the selling task to interactive software embedded in it.
The whole design and building process took less than three years for a team composed of five groups at different geographical locations. We describe here the lessons learned during this period of time, from different points of view including the hardware, software, architectural decisions and team collaboration issues."} {"_id": "73a7144e072356b5c9512bd4a87b22457d33760c", "title": "Treatment-Response Models for Counterfactual Reasoning with Continuous-time, Continuous-valued Interventions", "text": "Treatment effects can be estimated from observational data as the difference in potential outcomes. In this paper, we address the challenge of estimating the potential outcome when treatment-dose levels can vary continuously over time. Further, the outcome variable may not be measured at a regular frequency. Our proposed solution represents the treatment response curves using linear time-invariant dynamical systems\u2014this provides a flexible means for modeling response over time to highly variable dose curves. Moreover, for multivariate data, the proposed method uncovers shared structure in treatment response and the baseline across multiple markers, and flexibly models challenging correlation structure both across and within signals over time. For this, we build upon the framework of multiple-output Gaussian Processes. On simulated data and a challenging clinical dataset, we show significant gains in accuracy over state-of-the-art models."} {"_id": "c2aa3c7fd59a43c949844e98569429261dba36e6", "title": "Planar Helical Antenna of Circular Polarization", "text": "A planar helical antenna is presented for achieving wideband end-fire radiation of circular polarization while maintaining a very low profile. The helix is formed using printed strips with straight-edge connections implemented by plated via-holes. The currents flowing on the strips and along via-holes of the helix contribute to the horizontal and vertical polarizations, respectively. Besides, the current on the ground plane is utilized to weaken the strong amplitude of the horizontal electric field generated by the one on the strips. Thus, a good circular polarization can be achieved. Furthermore, a tapered helix and conducting side-walls are employed to broaden the axial ratio (AR) bandwidth as well as to improve the end-fire radiation pattern. The designed antenna operates at the center frequency of 10 GHz. Simulated results show that the planar helical antenna achieves wide-impedance bandwidth (|S11| < -10 dB) from 7.4 to 12.8 GHz (54%) and 3-dB AR bandwidth from 8.2 to 11.6 GHz (34%), while retaining a thickness of only 0.11\u03bb0 at the center frequency. A prototype of the proposed antenna is fabricated and tested. Measured results are in good agreement with simulated ones."} {"_id": "befdf0eb1a3d2e0d404e7fbdb43438be7ae607e5", "title": "Body Composition Changes After Very-Low-Calorie Ketogenic Diet in Obesity Evaluated by 3 Standardized Methods.", "text": "Context\nCommon concerns when using low-calorie diets as a treatment for obesity are the reduction in fat-free mass, mostly muscular mass, that occurs together with the fat mass (FM) loss, and determining the best methodologies to evaluate body composition changes.\n\n\nObjective\nThis study aimed to evaluate the very-low-calorie ketogenic (VLCK) diet-induced changes in body composition of obese patients and to compare 3 different methodologies used to evaluate those changes.\n\n\nDesign\nTwenty obese patients followed a VLCK diet for 4 months.
Body composition assessment was performed by dual-energy X-ray absorptiometry (DXA), multifrequency bioelectrical impedance (MF-BIA), and air displacement plethysmography (ADP) techniques. Muscular strength was also assessed. Measurements were performed at 4 points matched with the ketotic phases (basal, maximum ketosis, ketosis declining, and out of ketosis).\n\n\nResults\nAfter 4 months the VLCK diet induced a -20.2 \u00b1 4.5 kg weight loss, at the expense of reductions in fat mass (FM) of -16.5 \u00b1 5.1 kg (DXA), -18.2 \u00b1 5.8 kg (MF-BIA), and -17.7 \u00b1 9.9 kg (ADP). A substantial decrease was also observed in the visceral FM. The mild but marked reduction in fat-free mass occurred at maximum ketosis, primarily as a result of changes in total body water, and was recovered thereafter. No changes in muscle strength were observed. A strong correlation was evidenced between the 3 methods of assessing body composition.\n\n\nConclusion\nThe VLCK diet-induced weight loss was mainly at the expense of FM and visceral mass; muscle mass and strength were preserved. Of the 3 body composition techniques used, the MF-BIA method seems more convenient in the clinical setting."} {"_id": "506d4ca228f81715946ed1ad8d9205fad20fddfe", "title": "Measuring pictorial balance perception at first glance using Japanese calligraphy", "text": "According to art theory, pictorial balance acts to unify picture elements into a cohesive composition. For asymmetrical compositions, balancing elements is thought to be similar to balancing mechanical weights in a framework of symmetry axes. Assessment of preference for balance (APB), based on the symmetry-axes framework suggested in Arnheim R, 1974 Art and Visual Perception: A Psychology of the Creative Eye (Berkeley, CA: University of California Press), successfully matched subject balance ratings of images of geometrical shapes over unlimited viewing time. We now examine pictorial balance perception of Japanese calligraphy during first fixation, isolated from later cognitive processes, comparing APB measures with results from balance-rating and comparison tasks. Results show high between-task correlation, but low correlation with APB. We repeated the rating task, expanding the image set to include five rotations of each image, comparing balance perception of artist and novice participant groups. Rotation has no effect on APB balance computation but dramatically affects balance rating, especially for art experts. We analyze the variety of rotation effects and suggest that, rather than depending on element size and position relative to symmetry axes, first fixation balance processing derives from global processes such as grouping of lines and shapes, object recognition, preference for horizontal and vertical elements, closure, and completion, enhanced by vertical symmetry."} {"_id": "772205182fbb6ad842df4a6cd937741145eeece0", "title": "Smoking and cervical cancer: pooled analysis of the IARC multi-centric case\u2013control study", "text": "Background: Smoking has long been suspected to be a risk factor for cervical cancer. However, not all previous studies have properly controlled for the effect of human papillomavirus (HPV) infection, which has now been established as a virtually necessary cause of cervical cancer. To evaluate the role of smoking as a cofactor of progression from HPV infection to cancer, we performed a pooled analysis of 10 previously published case\u2013control studies.
This analysis is part of a series of analyses of cofactors of HPV in the aetiology of cervical cancer. Methods: Data were pooled from eight case\u2013control studies of invasive cervical carcinoma (ICC) and two of carcinoma in situ (CIS) from four continents. All studies used a similar protocol and questionnaires and included a PCR-based evaluation of HPV DNA in cytological smears or biopsy specimens. Only subjects positive for HPV DNA were included in the analysis. A total of 1463 squamous cell ICC cases were analyzed, along with 211 CIS cases, 124 adeno- or adeno-squamous ICC cases and 254 control women. Pooled odds ratios (OR) and 95% confidence intervals (CI) were estimated using logistic regression models controlling for sexual and non-sexual confounding factors. Results: There was an excess risk for ever smoking among HPV positive women (OR 2.17, 95%CI 1.46\u20133.22). When results were analyzed by histological type, an excess risk was observed among cases of squamous cell carcinoma for current smokers (OR 2.30, 95%CI 1.31\u20134.04) and ex-smokers (OR 1.80, 95%CI 0.95\u20133.44). No clear pattern of association with risk was detected for adenocarcinomas, although the number of cases with this histologic type was limited. Conclusions: Smoking increases the risk of cervical cancer among HPV positive women. The results of our study are consistent with the few previously conducted studies of smoking and cervical cancer that have adequately controlled for HPV infection. Recent increasing trends of smoking among young women could have a serious impact on cervical cancer incidence in the coming years."} {"_id": "d2018e51b772aba852e54ccc0ba7f0b7c2792115", "title": "Breathing Detection: Towards a Miniaturized, Wearable, Battery-Operated Monitoring System", "text": "This paper analyzes the main challenges associated with noninvasive, continuous, wearable, and long-term breathing monitoring. The characteristics of an acoustic breathing signal from a miniature sensor are studied in the presence of sources of noise and interference artifacts that affect the signal. Based on these results, an algorithm has been devised to detect breathing. It is possible to implement the algorithm on a single integrated circuit, making it suitable for a miniature sensor device. The algorithm is tested in the presence of noise sources on five subjects and shows an average success rate of 91.3% (combined true positives and true negatives)."} {"_id": "cc76f5d348ab6c3a20ab4adb285fc1ad96d3c009", "title": "Speech-driven 3D Facial Animation with Implicit Emotional Awareness: A Deep Learning Approach", "text": "We introduce a long short-term memory recurrent neural network (LSTM-RNN) approach for real-time facial animation, which automatically estimates head rotation and facial action unit activations of a speaker from just her speech. Specifically, the time-varying contextual non-linear mapping between audio stream and visual facial movements is realized by training a LSTM neural network on a large audio-visual data corpus. In this work, we extract a set of acoustic features from input audio, including Mel-scaled spectrogram, Mel frequency cepstral coefficients and chromagram that can effectively represent both contextual progression and emotional intensity of the speech. Output facial movements are characterized by 3D rotation and blending expression weights of a blendshape model, which can be used directly for animation.
Thus, even though our model does not explicitly predict the affective states of the target speaker, her emotional manifestation is recreated via expression weights of the face model. Experiments on an evaluation dataset of different speakers across a wide range of affective states demonstrate promising results of our approach in real-time speech-driven facial animation."} {"_id": "1b2a0e8af5c1f18e47e71244973ce4ace4ac6034", "title": "Compressed Nonparametric Language Modelling", "text": "Hierarchical Pitman-Yor Process priors are compelling methods for learning language models, outperforming point-estimate based methods. However, these models remain unpopular due to computational and statistical inference issues, such as memory and time usage, as well as poor mixing of the sampler. In this work we propose a novel framework which represents the HPYP model compactly using compressed suffix trees. Then, we develop an efficient approximate inference scheme in this framework that has a much lower memory footprint compared to full HPYP and is fast at inference time. The experimental results illustrate that our model can be built on significantly larger datasets compared to previous HPYP models, while being several orders of magnitude smaller, fast for training and inference, and outperforming the perplexity of the state-of-the-art Modified Kneser-Ney count-based LM smoothing by up to 15%."} {"_id": "c9d41f115eae5e03c5ed45c663d9435cb66ec942", "title": "Dissecting and Reassembling Color Correction Algorithms for Image Stitching", "text": "This paper introduces a new compositional framework for classifying color correction methods according to their two main computational units. The framework was used to dissect fifteen of the best color correction algorithms, and the computational units so derived, with the addition of four new units specifically designed for this work, were then reassembled in a combinatorial way to originate about one hundred distinct color correction methods, most of which were never considered before. The above color correction methods were tested on three different existing datasets, including both real and artificial color transformations, plus a novel dataset of real image pairs categorized according to the kind of color alterations induced by specific acquisition setups. Differently from previous evaluations, special emphasis was given to effectiveness in real world applications, such as image mosaicing and stitching, where robustness with respect to strong image misalignments and light scattering effects is required. Experimental evidence is provided for the first time in terms of the most recent perceptual image quality metrics, which are known to be the closest to human judgment. Comparative results show that combinations of the new computational units are the most effective for real stitching scenarios, regardless of the specific source of color alteration. On the other hand, in the case of accurate image alignment and artificial color alterations, the best performing methods either use one of the new computational units, or are made up of fresh combinations of existing units."} {"_id": "b579366db457216b0548220bf369ab9eb183a0cc", "title": "An analysis on the significance of ticket analytics and defect analysis from software quality perspective", "text": "Software, even though intangible, should undergo evolution to fit into ever-changing real-world scenarios. Each issue faced by the development and service team directly reflects in the quality of the software product.
According to the related work, very little research is being done in the field of tickets and their related incidents, a part of corrective maintenance. In-depth research on incident tickets should be viewed as critical, since it provides information about the kind of maintenance activities performed at any timestamp. Therefore, classifying and analyzing tickets becomes a critical task in managing the operations of the service, since each incident will have a service level agreement associated with it. Further, incident analysis is essential to identify the patterns associated. Due to the huge population of software products in each organization, and the millions of incidents reported per software product every year, it is practically impossible to manually analyze all the tickets. This paper focuses on projecting the importance of tickets in maintaining the quality of software products, and on distinguishing them from the defects associated with a software system. It projects the importance of identifying defects in software as well as of handling incident-related tickets and resolving them, when viewed from the perspective of quality. It also gives an overview of the scope that defect analysis and ticket analytics provide to researchers."} {"_id": "f69253e97f487b9d77b72553a9115fc814e3ed51", "title": "Clickbait Convolutional Neural Network", "text": "With the development of online advertisements, clickbait has spread wider and wider. Clickbait dissatisfies users because the article content does not match their expectation. Thus, clickbait detection has attracted more and more attention recently. Traditional clickbait-detection methods rely on heavy feature engineering and fail to distinguish clickbait from normal headlines precisely because of the limited information in headlines. A convolutional neural network is useful for clickbait detection, since it utilizes pretrained Word2Vec to understand the headlines semantically, and employs different kernels to find various characteristics of the headlines. However, different types of articles tend to use different ways to draw users\u2019 attention, and a pretrained Word2Vec model cannot distinguish these different ways. To address this issue, we propose a clickbait convolutional neural network (CBCNN) to consider not only the overall characteristics but also specific characteristics from different article types. Our experimental results show that our method outperforms traditional clickbait-detection algorithms and the TextCNN model in terms of precision, recall and accuracy."} {"_id": "6c9bd4bd7e30470e069f8600dadb4fd6d2de6bc1", "title": "A Database of Narrative Schemas", "text": "This paper describes a new language resource of events and semantic roles that characterize real-world situations. Narrative schemas contain sets of related events (edit and publish), a temporal ordering of the events (edit before publish), and the semantic roles of the participants (authors publish books). This type of world knowledge was central to early research in natural language understanding. Scripts were one of the main formalisms, representing common sequences of events that occur in the world. Unfortunately, most of this knowledge was hand-coded and time-consuming to create. Current machine learning techniques, as well as a new approach to learning through coreference chains, have allowed us to automatically extract rich event structure from open domain text in the form of narrative schemas.
The narrative schema resource described in this paper contains approximately 5000 unique events combined into schemas of varying sizes. We describe the resource, how it is learned, and a new evaluation of the coverage of these schemas over unseen documents."} {"_id": "a72daf1fc4b1fc16d3c8a2e33f9aac6e17461d9a", "title": "User-Oriented Context Suggestion", "text": "Recommender systems have been used in many domains to assist users' decision making by providing item recommendations and thereby reducing information overload. Context-aware recommender systems go further, incorporating the variability of users' preferences across contexts, and suggesting items that are appropriate in different contexts. In this paper, we present a novel recommendation task, \"Context Suggestion\", whereby the system recommends contexts in which items may be selected. We introduce the motivations behind the notion of context suggestion and discuss several potential solutions. In particular, we focus specifically on user-oriented context suggestion which involves recommending appropriate contexts based on a user's profile. We propose extensions of well-known context-aware recommendation algorithms such as tensor factorization and deviation-based contextual modeling and adapt them as methods to recommend contexts instead of items. In our empirical evaluation, we compare the proposed solutions to several baseline algorithms using four real-world data sets."} {"_id": "585da6b6355f3536e1b12b30ef4c3ea54b955f2d", "title": "Brand followers' retweeting behavior on Twitter: How brand relationships influence brand electronic word-of-mouth", "text": "Twitter, the popular microblogging site, has received increasing attention as a unique communication tool that facilitates electronic word-of-mouth (eWOM). To gain greater insight into this potential, this study investigates how consumers\u2019 relationships with brands influence their engagement in retweeting brand messages on Twitter. Data from a survey of 315 Korean consumers who currently follow brands on Twitter show that those who retweet brand messages outscore those who do not on brand identification, brand trust, community commitment, community membership intention, Twitter usage frequency, and total number of postings."} {"_id": "d18cc66f7f87e041dec544a0b843496085ab54e1", "title": "Memory, navigation and theta rhythm in the hippocampal-entorhinal system", "text": "Theories on the functions of the hippocampal system are based largely on two fundamental discoveries: the amnestic consequences of removing the hippocampus and associated structures in the famous patient H.M. and the observation that spiking activity of hippocampal neurons is associated with the spatial position of the rat. In the footsteps of these discoveries, many attempts were made to reconcile these seemingly disparate functions. Here we propose that mechanisms of memory and planning have evolved from mechanisms of navigation in the physical world and hypothesize that the neuronal algorithms underlying navigation in real and mental space are fundamentally the same.
We review experimental data in support of this hypothesis and discuss how specific firing patterns and oscillatory dynamics in the entorhinal cortex and hippocampus can support both navigation and memory."} {"_id": "22fc3af1fb55d48f3c03cd96f277503e92541c60", "title": "Predictive Control of Power Converters: Designs With Guaranteed Performance", "text": "In this work, a cost function design based on Lyapunov stability concepts for finite control set model predictive control is proposed. This predictive controller design allows one to characterize the performance of the controlled converter, while providing sufficient conditions for local stability for a class of power converters. Simulation and experimental results on a buck dc-dc converter and a two-level dc-ac inverter are conducted to validate the effectiveness of our proposal."} {"_id": "4114c89bec92ebde7c20d12d0303281983ed1df8", "title": "Design and Implementation of a Fast Dynamic Packet Filter", "text": "This paper presents Swift, a packet filter for high-performance packet capture on commercial off-the-shelf hardware. The key features of Swift include: 1) extremely low filter update latency for dynamic packet filtering, and 2) gigabits-per-second high-speed packet processing. Based on complex instruction set computer (CISC) instruction set architecture (ISA), Swift achieves the former with an instruction set design that avoids the need for compilation and security checking, and the latter by mainly utilizing single instruction, multiple data (SIMD). We implement Swift in the Linux 2.6 kernel for both i386 and x86-64 architectures and extensively evaluate its dynamic and static filtering performance on multiple machines with different hardware setups. We compare Swift to BPF (the BSD packet filter)--the de facto standard for packet filtering in modern operating systems--and hand-coded optimized C filters that are used for demonstrating possible performance gains. For dynamic filtering tasks, Swift is at least three orders of magnitude faster than BPF in terms of filter update latency. For static filtering tasks, Swift outperforms BPF up to three times in terms of packet processing speed and achieves much closer performance to the optimized C filters. We also show that Swift can harness the processing power of hardware SIMD instructions by virtue of its SIMD-capable instruction set."} {"_id": "8e508720cdb495b7821bf6e43c740eeb5f3a444a", "title": "Learning Scalable Deep Kernels with Recurrent Structure", "text": "Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel functions for Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the inductive biases of long short-term memory (LSTM) recurrent networks, while retaining the non-parametric probabilistic advantages of Gaussian processes. We learn the properties of the proposed kernels by optimizing the Gaussian process marginal likelihood using a new provably convergent semi-stochastic gradient procedure, and exploit the structure of these kernels for scalable training and prediction. This approach provides a practical representation for Bayesian LSTMs.
We demonstrate state-of-the-art performance on several benchmarks, and thoroughly investigate a consequential autonomous driving application, where the predictive uncertainties provided by GP-LSTM are uniquely valuable."} {"_id": "110599f48c30251aba60f68b8484a7b0307bcb87", "title": "SemEval-2015 Task 11: Sentiment Analysis of Figurative Language in Twitter", "text": "This report summarizes the objectives and evaluation of the SemEval 2015 task on the sentiment analysis of figurative language on Twitter (Task 11). This is the first sentiment analysis task wholly dedicated to analyzing figurative language on Twitter. Specifically, three broad classes of figurative language are considered: irony, sarcasm and metaphor. Gold standard sets of 8000 training tweets and 4000 test tweets were annotated using workers on the crowdsourcing platform CrowdFlower. Participating systems were required to provide a fine-grained sentiment score on an 11-point scale (-5 to +5, including 0 for neutral intent) for each tweet, and systems were evaluated against the gold standard using both a Cosine-similarity and a Mean-Squared-Error measure."} {"_id": "4b53f660eb6cfe9180f9e609ad94df8606724a3d", "title": "Text mining of news-headlines for FOREX market prediction: A Multi-layer Dimension Reduction Algorithm with semantics and sentiment", "text": "In this paper a novel approach is proposed to predict intraday directional-movements of a currency-pair in the foreign exchange market based on the text of breaking financial news-headlines. The motivation behind this work is twofold: First, although market-prediction through text-mining is shown to be a promising area of work in the literature, the text-mining approaches utilized in it at this stage are not much beyond basic ones as it is still an emerging field. This work is an effort to put more emphasis on the text-mining methods and tackle some specific aspects thereof that are weak in previous works, namely: the problem of high dimensionality as well as the problem of ignoring sentiment and semantics in dealing with textual language. This research assumes that addressing these aspects of text-mining has an impact on the quality of the achieved results. The proposed system proves this assumption to be right. The second part of the motivation is to research a specific market, namely, the foreign exchange market, which seems not to have been researched in the previous works based on predictive text-mining. Therefore, results of this work also successfully demonstrate a predictive relationship between this specific market-type and the textual data of news. Besides the above two main components of the motivation, there are other specific aspects that make the setup of the proposed system and the conducted experiment unique, for example, the use of news article-headlines only and not news article-bodies, which enables usage of short pieces of text rather than long ones; or the use of general financial breaking news without any further filtration. In order to accomplish the above, this work produces a multi-layer algorithm that tackles each of the mentioned aspects of the text-mining problem at a designated layer. The first layer is termed the Semantic Abstraction Layer and addresses the problem of co-reference in text mining, which contributes to sparsity. Co-reference occurs when two or more words in a text corpus refer to the same concept.
This work produces a custom approach, by the name of Heuristic-Hypernyms Feature-Selection, which creates a way to recognize words with the same parent word so that they are regarded as one entity. As a result, prediction accuracy increases significantly at this layer, which is attributed to appropriate noise reduction from the feature space. The second layer is termed the Sentiment Integration Layer, which integrates sentiment analysis capability into the algorithm by proposing a sentiment weight, by the name of SumScore, that reflects investors\u2019 sentiment. Additionally, this layer reduces the dimensions by eliminating those that are of zero value in terms of sentiment and thereby improves prediction accuracy. The third layer encompasses a dynamic model creation algorithm, termed Synchronous Targeted Feature Reduction (STFR). It is suitable for the challenge at hand, whereby the mining of a stream of text is concerned. It updates the models with the most recent information available and, more importantly, it ensures that the dimensions are reduced to the absolute minimum. The algorithm and each of its layers are extensively evaluated using real market data and news content across multiple years and have proven to be solid and superior to any other comparable solution. The proposed techniques implemented in the system result in significantly high directional-accuracies of up to 83.33%. On top of a well-rounded multifaceted algorithm, this work contributes a much-needed research framework for this context with a test-bed of data that must make future research endeavors more convenient. The produced algorithm is scalable and its modular design allows improvement in each of its layers in future research. This paper provides ample details to reproduce the entire system and the conducted experiments."} {"_id": "7f90ef42f22d4f9b86d33b0ad7f16261273c8612", "title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", "text": "We present an automatic approach to the construction of BabelNet, a very large, wide-coverage multilingual semantic network. Key to our approach is the integration of lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition, Machine Translation is applied to enrich the resource with lexical information for all languages. We first conduct in vitro experiments on new and existing gold-standard datasets to show the high quality and coverage of BabelNet. We then show that our lexical resource can be used successfully to perform both monolingual and cross-lingual Word Sense Disambiguation: thanks to its wide lexical coverage and novel semantic relations, we are able to achieve state-of-the-art results on three different SemEval evaluation tasks."} {"_id": "033b62167e7358c429738092109311af696e9137", "title": "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews", "text": "This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs.
A phrase has a positive semantic orientation when it has good associations (e.g., \u201csubtle nuances\u201d) and a negative semantic orientation when it has bad associations (e.g., \u201cvery cavalier\u201d). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word \u201cexcellent\u201d minus the mutual information between the given phrase and the word \u201cpoor\u201d. A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews."} {"_id": "105a0b3826710356e218685f87b20fe39c64c706", "title": "Opinion observer: analyzing and comparing opinions on the Web", "text": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. A prototype system called Opinion Observer is also implemented. The system is such that with a single glance of its visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. For a potential customer, he/she can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him/her to decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. Experimental results show that the technique is highly effective and outperforms existing methods significantly."} {"_id": "9ea16bc34448ca9d713f4501f1a6215a26746372", "title": "A survey of software testing practices in Alberta", "text": "Software organizations have typically de-emphasized the importance of software testing. In this paper, the results of a regional survey of software testing and software quality assurance techniques are described. Researchers conducted the study during the summer and fall of 2002 by surveying software organizations in the Province of Alberta. Results indicate that Alberta-based organizations tend to test less than their counterparts in the United States. The results also indicate that Alberta software organizations tend to train fewer personnel on testing-related topics. This practice has the potential for a two-fold impact: first, the ability to detect trends that lead to reduced quality and to identify the root causes of reductions in product quality may suffer from the lack of testing. This consequence is serious enough to warrant consideration, since overall quality may suffer from the reduced ability to detect and eliminate process or product defects. Second, the organization may have a more difficult time adopting methodologies such as extreme programming.
This is significant because other industry studies have concluded that many software organizations have tried, or will try in the next few years, some form of agile method. Newer approaches to software development like extreme programming increase the extent to which teams rely on testing skills. Organizations should consider their testing skill level as a key indication of their readiness for adopting software development techniques such as test-driven development, extreme programming, agile modelling, or other agile methods."} {"_id": "746cafc676374114198c414d6426ec2f50e0ff80", "title": "Analysis and Design of Average Current Mode Control Using a Describing-Function-Based Equivalent Circuit Model", "text": "This paper proposes a small-signal model for average current mode control based on an equivalent circuit. The model uses a three-terminal equivalent circuit model based on a linearized describing function method to include the feedback effect of the sideband frequency components of the inductor current. The model extends the results obtained in peak current mode control to average current mode control. The proposed small-signal model is accurate up to half the switching frequency, predicting the subharmonic instability. The proposed model is verified using SIMPLIS simulation and hardware experiments, which show good agreement with the measurement results. Based on the proposed model, new feedback design guidelines are presented. The proposed design guidelines are compared with several conventional, widely used design criteria. By designing the external ramp following the proposed design guidelines, the quality factor of the double poles at half of the switching frequency in the control-to-output transfer function can be precisely controlled. This helps the feedback loop design to achieve wide control bandwidth and proper damping."} {"_id": "2b337d6a72c8c2b1d97097dc24ec0e9a8d4c2186", "title": "Using deep learning for short text understanding", "text": "Classifying short texts to one category or clustering semantically related texts is challenging, and the importance of both is growing due to the rise of microblogging platforms, digital news feeds, and the like. We can accomplish this classifying and clustering with the help of a deep neural network which produces compact binary representations of a short text, and can assign the same category to texts that have similar binary representations. But problems arise when there is little contextual information on the short texts, which makes it difficult for the deep neural network to produce similar binary codes for semantically related texts. We propose to address this issue using semantic enrichment. This is accomplished by taking the nouns and verbs used in the short texts and generating the concepts and co-occurring words with the help of those terms. The nouns are used to generate concepts within the given short text, whereas the verbs are used to prune the ambiguous context (if any) present in the text. The enriched text then goes through a deep neural network to produce a prediction label for that short text representing its category."} {"_id": "1d53a898850b8d055db80ba99c59c89b080dfc4c", "title": "MVOR: A Multi-view RGB-D Operating Room Dataset for 2D and 3D Human Pose Estimation", "text": "Person detection and pose estimation are a key requirement to develop intelligent context-aware assistance systems.
To foster the development of human pose estimation methods and their applications in the Operating Room (OR), we release the Multi-View Operating Room (MVOR) dataset, the first public dataset recorded during real clinical interventions. It consists of 732 synchronized multi-view frames recorded by three RGB-D cameras in a hybrid OR. It also includes the visual challenges present in such environments, such as occlusions and clutter. We provide camera calibration parameters, color and depth frames, human bounding boxes, and 2D/3D pose annotations. In this paper, we present the dataset, its annotations, as well as baseline results from several recent person detection and 2D/3D pose estimation methods. Since we need to blur some parts of the images to hide identity and nudity in the released dataset, we also present a comparative study of how the baselines have been impacted by the blurring. Results show a large margin for improvement and suggest that the MVOR dataset can be useful to compare the performance of the different methods."} {"_id": "954d0346b5cdf3f1ec0fcc74ae5aadc5b733adc0", "title": "Beyond engagement analytics: which online mixed-data factors predict student learning outcomes?", "text": "This mixed-method study focuses on online learning analytics, a research area of importance. Several important student attributes and their online activities are examined to identify what seems to work best to predict higher grades. The purpose is to explore the relationships between student grade and key learning engagement factors using a large sample from an online undergraduate business course at an accredited American university (n\u00a0=\u00a0228). Recent studies have discounted the ability to predict student learning outcomes from big data analytics but a few significant indicators have been found by some researchers. Current studies tend to use quantitative factors in learning analytics to forecast outcomes. This study extends that work by testing the common quantitative predictors of learning outcome, but qualitative data is also examined to triangulate the evidence. Pre- and post-testing of information technology understanding is done at the beginning of the course. First, quantitative data is collected; then, depending on the hypothesis test results, qualitative data is collected and analyzed with text analytics to uncover patterns. Moodle engagement analytics indicators are tested as predictors in the model. Data is also taken from the Moodle system logs. Qualitative data is collected from student reflection essays. The result was a significant General Linear Model with four online interaction predictors that captured 77.5\u00a0% of grade variance in an undergraduate business course."} {"_id": "483b94374944293d2a6d36cc1c97f0544ce3c79c", "title": "Which Hotel attributes Matter? A review of previous and a framework for future research", "text": "A lot of effort has been made in the last decades to reveal which hotel attributes guests care about. Due to the high costs that are typically involved with investments in the hotel industry, it makes a lot of sense to study which product components travellers appreciate. This study reveals that hotel attribute research turns out to be a wide and extremely heterogeneous field of research.
The authors review empirical studies investigating the importance of hotel attributes, provide attribute rankings and suggest a framework for past and future research projects in the field, based on the dimensions \u201cfocus of research\u201d, \u201crisk versus utility\u201d and \u201ctrade-off versus no trade-off questioning situation\u201d."} {"_id": "54c377407242e74e7c08e4a49e61837fd9ce2b25", "title": "On Power Quality of Variable-Speed Constant-Frequency Aircraft Electric Power Systems", "text": "In this paper, a comprehensive model of the variable-speed constant-frequency aircraft electric power system is developed to study the performance characteristics of the system and, in particular, the system power quality over a frequency range of operation of 400 Hz to 800 Hz. A fully controlled active power filter is designed to regulate the load terminal voltage, eliminate harmonics, correct supply power factor, and minimize the effect of unbalanced loads. The control algorithm for the active power filter (APF) is based on the perfect harmonic cancellation method which provides a three-phase reference supply current in phase with its positive-sequence fundamental voltage. The proposed APF is integrated into the model of a 90-kVA advanced aircraft electric power system under VSCF operation. The performance characteristics of the system are studied with the frequency of the generator's output voltage varied from 400 Hz to 800 Hz under different loading conditions. Several case studies are presented including dc loads as well as passive and dynamic ac loads. The power quality characteristics of the studied aircraft electric power system with the proposed active filter are shown to be in compliance with the most recent military aircraft electrical standards MIL-STD-704F as well as with the IEEE Std. 519."} {"_id": "9d1940f843c448cc378214ff6bad3c1279b1911a", "title": "Shape-aware Instance Segmentation", "text": "We address the problem of instance-level semantic segmentation, which aims at jointly detecting, segmenting and classifying every individual object in an image. In this context, existing methods typically propose candidate objects, usually as bounding boxes, and directly predict a binary mask within each such proposal. As a consequence, they cannot recover from errors in the object candidate generation process, such as too small or shifted boxes. In this paper, we introduce a novel object segment representation based on the distance transform of the object masks. We then design an object mask network (OMN) with a new residual-deconvolution architecture that infers such a representation and decodes it into the final binary object mask. This allows us to predict masks that go beyond the scope of the bounding boxes and are thus robust to inaccurate object candidates. We integrate our OMN into a Multitask Network Cascade framework, and learn the resulting shape-aware instance segmentation (SAIS) network in an end-to-end manner. Our experiments on the PASCAL VOC 2012 and the CityScapes datasets demonstrate the benefits of our approach, which outperforms the state-of-the-art in both object proposal generation and instance segmentation."} {"_id": "4d0130e95925b00a2d1ecba931a1a05a74370f3f", "title": "RT-Mover: a rough terrain mobile robot with a simple leg-wheel hybrid mechanism", "text": "There is a strong demand in many fields for practical robots, such as a porter robot and a personal mobility robot, that can move over rough terrain while carrying a load horizontally.
We have developed a robot, called RT-Mover, which shows adequate mobility performance on targeted types of rough terrain. It has four drivable wheels and two leg-like axles but only five active shafts. A strength of this robot is that it realizes both a leg mode and a wheel mode in a simple mechanism. In this paper, the mechanical design concept is discussed. With an emphasis on minimizing the number of drive shafts, a mechanism is designed for a four-wheeled mobile body that is widely used in practical locomotive machinery. Also, strategies for moving on rough terrain are proposed. The kinematics, stability, and control of RT-Mover are also described in detail. Some typical cases of rough terrain for wheel mode and leg mode are selected, and the robot\u2019s locomotion ability is assessed through simulations and experiments. In each case, the robot is able to move over rough terrain while maintaining the horizontal orientation of its platform."} {"_id": "0651f838d918586ec1df66450c3d324602c9f59e", "title": "Privacy attacks in social media using photo tagging networks: a case study with Facebook", "text": "Social-networking users unknowingly reveal certain kinds of personal information that malicious attackers could profit from to perpetrate significant privacy breaches. This paper quantitatively demonstrates how the simple act of tagging pictures on the social-networking site of Facebook could reveal private user attributes that are extremely sensitive. Our results suggest that photo tags can be used to help predict some, but not all, of the analyzed attributes. We believe our analysis makes users aware of significant breaches of their privacy and could inform the design of new privacy-preserving ways of tagging pictures on social-networking sites."} {"_id": "4c9774c5e57a4b7535eb19f6584f75c8b9c2cdcc", "title": "A framework based on RSA and AES encryption algorithms for cloud computing services", "text": "Cloud computing is an emerging computing model in which resources of the computing communications are provided as services over the Internet. Privacy and security of cloud storage services are very important and have become a challenge in cloud computing due to loss of control over data and its dependence on the cloud computing provider. As huge amounts of data are transferred in cloud systems, the risk of attackers accessing the data rises. Considering the problem of building a secure cloud storage service, a scheme is proposed based on a combination of the RSA and AES encryption methods to share data among users in a secure cloud system. The proposed method makes attacks more difficult while reducing the time of information transmission between the user and cloud data storage."} {"_id": "e645cbd3aaeab56858f1e752677b8792d7377d14", "title": "BCSAT : A Benchmark Corpus for Sentiment Analysis in Telugu Using Word-level Annotations", "text": "The presented work aims at generating a systematically annotated corpus that can support the enhancement of sentiment analysis tasks in Telugu using word-level sentiment annotations. From OntoSenseNet, we extracted 11,000 adjectives, 253 adverbs, and 8483 verbs, and sentiment annotation was done by language experts. We discuss the methodology followed for the polarity annotations and validate the developed resource. This work aims at developing a benchmark corpus, as an extension to SentiWordNet, and a baseline accuracy for a model where lexeme annotations are applied for sentiment predictions. 
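A minimal sketch, assuming a toy lexicon and whitespace tokenization rather than the BCSAT resource itself, of how word-level polarity annotations can be aggregated into a sentence-level prediction of the kind this baseline describes:

```python
# Hedged sketch: aggregating word-level polarity annotations into a
# sentence-level label. The tiny lexicon and the tokenizer are hypothetical
# placeholders, not the BCSAT resource or the authors' model.

# toy polarity lexicon: word -> score in [-1, 1]
LEXICON = {"good": 0.8, "bad": -0.7, "excellent": 1.0, "poor": -0.8}

def predict_sentiment(sentence, lexicon=LEXICON):
    """Label a sentence positive/negative/neutral from word-level scores."""
    tokens = sentence.lower().split()  # a real system would use a Telugu tokenizer
    scores = [lexicon[t] for t in tokens if t in lexicon]
    if not scores:
        return "neutral"
    mean = sum(scores) / len(scores)
    return "positive" if mean > 0 else "negative" if mean < 0 else "neutral"

print(predict_sentiment("excellent service but poor food"))
```

A real system would substitute the annotated Telugu lexicon and a proper tokenizer for the placeholders above.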
The fundamental aim of this paper is to validate and study the possibility of utilizing machine learning algorithms and word-level sentiment annotations in the task of automated sentiment identification. Furthermore, accuracy is improved by annotating the bi-grams extracted from the target corpus."} {"_id": "40f5430ef326838d5b7ce018f62e51c188d7cdd7", "title": "Effects of quiz-style information presentation on user understanding", "text": "This paper proposes quiz-style information presentation for interactive systems as a means to improve user understanding in educational tasks. Since the nature of quizzes can highly motivate users to stay voluntarily engaged in the interaction and keep their attention on receiving information, it is expected that information presented as quizzes can be better understood by users. To verify the effectiveness of the approach, we implemented read-out and quiz systems and performed comparison experiments using human subjects. In the task of memorizing biographical facts, the results showed that user understanding for the quiz system was significantly better than that for the read-out system, and that the subjects were more willing to use the quiz system despite the long duration of the quizzes. This indicates that quiz-style information presentation promotes engagement in the interaction with the system, leading to improved user understanding."} {"_id": "24bbff699187ad6bf37e447627de1ca25267a770", "title": "Research on continuous auditing: A bibliometric analysis", "text": "This paper presents the results of a bibliometric study about the evolution of research on Continuous Auditing. The main motivation of this study is to find reasons for the very slow evolution of research on this topic. In addition, Continuous Auditing is one of the features of the emerging concept of Continuous Assurance. Thus, considering that Continuous Assurance offers numerous advantages for organizational performance, this study also intends to understand whether there is a relation between the evolution of research on Continuous Auditing and the still very low maturity levels of continuous assurance solutions. This study shows that the number of publications is considerably reduced and that the slow growth of research on Continuous Auditing may be contributing to the lack of maturity of Continuous Assurance."} {"_id": "abd0478f1572d8ecdca4738df3e4b3bd116d7b42", "title": "Dispositional Factors in Internet Use: Personality Versus Cognitive Style", "text": "This study directly tests the effect of personality and cognitive style on three measures of Internet use. The results support the use of personality\u2014but not cognitive style\u2014as an antecedent variable. After controlling for computer anxiety, self-efficacy, and gender, including the \u201cBig Five\u201d personality factors in the analysis significantly adds to the predictive capabilities of the dependent variables. Including cognitive style does not. The results are discussed in terms of the role of personality and cognitive style in models of technology adoption and use."} {"_id": "a64f48f9810c4788236f31dc2a9b87dd02977c3e", "title": "Voice quality evaluation of recent open source codecs", "text": "\u2022 Averaged frequency responses were measured at different sampling rates (16 and 24 kHz); the external sampling rate does not reveal the internal sampling rate. \u2022 Supported signal bandwidth depends on bitrate, but since no documentation exists, bandwidths were determined experimentally. \u2022 We tested 32 kHz sampling with a 16 ms frame length. 
There is also an 8 ms lookahead. \u2022 The results show that bitrates below 32 kbit/s are not usable for voice applications; the voice quality is much worse than with SILK at the bitrates shown in steady state."} {"_id": "76eea8436996c7e9c8f7ad3dac34a12865edab24", "title": "Chain Replication for Supporting High Throughput and Availability", "text": "Chain replication is a new approach to coordinating clusters of fail-stop storage servers. The approach is intended for supporting large-scale storage services that exhibit high throughput and availability without sacrificing strong consistency guarantees. Besides outlining the chain replication protocols themselves, simulation experiments explore the performance characteristics of a prototype implementation. Throughput, availability, and several object-placement strategies (including schemes based on distributed hash table routing) are discussed."} {"_id": "522a7178e501018e442c03f4b93e29f62ae1eb64", "title": "Deep Voice 2 : Multi-Speaker Neural Text-to-Speech", "text": "We introduce a technique for augmenting neural text-to-speech (TTS) with low-dimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-of-the-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a pipeline similar to that of Deep Voice 1, but constructed with higher performance building blocks, and demonstrates a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality synthesis and preserving the speaker identities almost perfectly."} {"_id": "ccbcaf528a222d04f40fd03b3cb89d5f78acbdc6", "title": "A Literature Review on Kidney Disease Prediction using Data Mining Classification Technique", "text": "The huge amounts of data generated by healthcare transactions are too complex and voluminous to be processed and analyzed by traditional methods. Data mining provides the methodology and technology to transform these mounds of data into useful information for decision making. The healthcare industry is generally \u201cinformation rich\u201d, yet this information is not feasible to handle manually. These large amounts of data are very important in the field of data mining for extracting useful information and generating relationships amongst the attributes. Diagnosing kidney disease is a complex task that requires much experience and knowledge. Kidney disease is a silent killer in developed countries and one of the main contributors to disease burden in developing countries. In the healthcare industry, data mining is mainly used for predicting diseases from datasets. The data mining classification techniques, namely decision trees, ANN, and naive Bayes, are analyzed on a kidney disease dataset, as sketched below. 
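A hedged sketch of the kind of classifier comparison such reviews describe, using randomly generated placeholder features in place of any real clinical dataset:

```python
# Hedged sketch: comparing a decision tree and naive Bayes under
# cross-validation on a kidney-disease-style feature table. The data here is
# synthetic placeholder input, not a real clinical dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 24))          # 24 numeric attributes per patient
y = rng.integers(0, 2, size=400)        # 1 = chronic kidney disease, 0 = not

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f} (5-fold CV accuracy)")
```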
Keywords--Data Mining, Kidney Disease, Decision tree, Naive Bayes, ANN, K-NN, SVM, Rough Set, Logistic Regression, Genetic Algorithms (GAs) / Evolutionary Programming (EP), Clustering"} {"_id": "30f46fdfe1fdab60bdecaa27aaa94526dfd87ac1", "title": "Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera", "text": "We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data."} {"_id": "892fea843d58852a835f38087bc3b5102123f567", "title": "Multiple ramp schemes", "text": "A (t; k; n; S) ramp scheme is a protocol to distribute a secret s chosen in S among a set P of n participants in such a way that: 1) sets of participants of cardinality greater than or equal to k can reconstruct the secret s; 2) sets of participants of cardinality less than or equal to t have no information on s, whereas 3) sets of participants of cardinality greater than t and less than k might have \u201csome\u201d information on s. In this correspondence we analyze multiple ramp schemes, which are protocols to share many secrets among a set P of participants, using different ramp schemes. In particular, we prove a tight lower bound on the size of the shares held by each participant and on the dealer\u2019s randomness in multiple ramp schemes."} {"_id": "ce148df015fc488ac6fc022dac3da53c141e0ea8", "title": "Protein function in precision medicine: deep understanding with machine learning.", "text": "Precision medicine and personalized health efforts propose leveraging complex molecular, medical and family history, along with other types of personal data toward better life. We argue that this ambitious objective will require advanced and specialized machine learning solutions. Simply skimming some low-hanging results off the data wealth might have limited potential. Instead, we need to better understand all parts of the system to define medically relevant causes and effects: how do particular sequence variants affect particular proteins and pathways? How do these effects, in turn, cause the health or disease-related phenotype? Toward this end, deeper understanding will not simply diffuse from deeper machine learning, but from more explicit focus on understanding protein function, context-specific protein interaction networks, and impact of variation on both."} {"_id": "38d34b02820020aac7f060e84bb6c01b4dee665a", "title": "The impact of design management and process management on quality : an empirical investigation", "text": "Design management and process management are two important elements of total quality management (TQM) implementation. 
They are drastically different in their targets of improvement, visibility, and techniques. In this paper, we establish a framework for identifying the synergistic linkages of design and process management to the operational quality outcomes during the manufacturing process (internal quality) and upon the field usage of the products (external quality). Through a study of quality practices in 418 manufacturing plants from multiple industries, we empirically demonstrate that both design and process management efforts have an equal positive impact on internal quality outcomes (such as scrap, rework, defects, and performance) and on external quality outcomes (such as complaints, warranty, litigation, and market share). A detailed contingency analysis shows that the proposed model of synergies between design and process management holds true for large and small firms; for firms with different levels of TQM experience; and in different industries with varying levels of competition, logistical complexity of production, or production process characteristics. Finally, the results also suggest that organizational learning enables mature TQM firms to implement both design and process efforts more rigorously and their synergy helps these firms to attain better quality outcomes. These findings indicate that, to attain superior quality outcomes, firms need to balance their design and process management efforts and persevere with long-term implementation of these efforts. Because the study spans all of the manufacturing sectors (SIC 20 through 39), these conclusions should help firms in any industry revisit their priorities in terms of the relative efforts in design management and process management."} {"_id": "b09b43cacd45fd922f7f85b1f8514cb4a775ca5d", "title": "A Web Service Discovery Approach Based on Common Topic Groups Extraction", "text": "Web services have attracted much attention from distributed application designers and developers because of their roles in abstraction and interoperability among heterogeneous software systems, and a growing number of distributed software applications have been published as Web services on the Internet. Faced with the increasing numbers of Web services and service users, researchers in the services computing field have attempted to address a challenging issue, i.e., how to quickly find the suitable ones according to user queries. Many previous studies have been reported towards this direction. In this paper, a novel Web service discovery approach based on topic models is presented. The proposed approach mines common topic groups from the service-topic distribution matrix generated by topic modeling, and the extracted common topic groups can then be leveraged to match user queries to relevant Web services, so as to make a better trade-off between the accuracy of service discovery and the number of candidate Web services. 
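A hedged sketch of the common-topic-group idea, assuming a toy corpus and scikit-learn's LDA rather than the authors' exact pipeline: services are grouped by their dominant topic, and a query is matched only against services in its topic group, shrinking the candidate set.

```python
# Hedged sketch: group Web services by dominant LDA topic and match a query
# only within its topic group (an approximation of the common-topic-group
# idea described above, not the authors' exact algorithm).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

services = ["send email message notification",
            "currency exchange rate conversion",
            "email spam filter service",
            "convert dollar euro exchange"]

vec = CountVectorizer()
X = vec.fit_transform(services)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)          # service-topic distribution matrix
groups = theta.argmax(axis=1)         # dominant topic = common topic group

query = "euro dollar rate"
q_topic = lda.transform(vec.transform([query])).argmax()
candidates = [s for s, g in zip(services, groups) if g == q_topic]
print(candidates)  # only services sharing the query's topic group are matched
```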
Experimental results conducted on two publicly-available data sets demonstrate that, compared with several widely used approaches, the proposed approach can maintain the performance of service discovery at an elevated level by greatly decreasing the number of candidate Web services, thus leading to faster response time."} {"_id": "c108437a57bd8f8eaed9e26360ee100074e3f3fc", "title": "Computational Capabilities of Graph Neural Networks", "text": "In this paper, we will consider the approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function \u03c4(G, n) \u2208 R^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries, in which case unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model."} {"_id": "28d3ec156472c35ea8e1b7acad969b725111fe56", "title": "Hipikat: a project memory for software development", "text": "Sociological and technical difficulties, such as a lack of informal encounters, can make it difficult for new members of noncollocated software development teams to learn from their more experienced colleagues. To address this situation, we have developed a tool, named Hipikat, that provides developers with efficient and effective access to the group memory for a software development project that is implicitly formed by all of the artifacts produced during the development. This project memory is built automatically with little or no change to existing work practices. After describing the Hipikat tool, we present two studies investigating Hipikat's usefulness in software modification tasks. One study evaluated the usefulness of Hipikat's recommendations on a sample of 20 modification tasks performed on the Eclipse Java IDE during the development of release 2.1 of the Eclipse software. We describe the study, present quantitative measures of Hipikat's performance, and describe in detail three cases that illustrate a range of issues that we have identified in the results. In the other study, we evaluated whether software developers who are new to a project can benefit from the artifacts that Hipikat recommends from the project memory. We describe the study, present qualitative observations, and suggest implications of using project memory as a learning aid for project newcomers."} {"_id": "334c4806912d851ef2117e67728cfa624dbec9a3", "title": "A Metrics Suite for Object Oriented Design", "text": "Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. 
This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics, with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field\u2019s understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber, the theoretical base chosen for the metrics was the ontology of Bunge. Six design metrics are developed, and then analytically evaluated against Weyuker\u2019s proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement."} {"_id": "383ca85aaca9f306ea7ae04fb0b6b76f1e393395", "title": "Two case studies of open source software development: Apache and Mozilla", "text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine data from two major open source projects, the Apache web server and the Mozilla browser. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects. We develop several hypotheses by comparing the Apache project with several commercial projects. We then test and refine several of these hypotheses, based on an analysis of Mozilla data. We conclude with thoughts about the prospects for high-performance commercial/open source process hybrids."} {"_id": "3ea9cd35f39e8c128f39f13148e91466715f4ee2", "title": "A File Comparison Program", "text": "A file comparison program produces a list of differences between two files. These differences can be couched in terms of lines, e.g. by telling which lines must be inserted, deleted or moved to convert the first file to the second. Alternatively, the list of differences can identify individual bytes. Byte-oriented comparisons are useful with non-text files, such as compiled programs, that are not divided into lines. The approach adopted here is to generate only instructions to insert or delete entire lines. Since lines are treated as indivisible objects, files can be treated as containing lines consisting of a single symbol. In other words, an n-line file is modelled by a string of n symbols. In more formal terms, the file comparison problem can be rephrased as follows. The edit distance between two strings of symbols is the length of a shortest sequence of insertions and deletions that will convert the first string to the second. 
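As an aside, this insert/delete edit distance admits a textbook dynamic-programming computation; the sketch below illustrates that standard formulation, not the particular algorithm of the program described here.

```python
# Hedged sketch: insert/delete-only edit distance over line-symbols, computed
# by standard dynamic programming (a textbook formulation).
def edit_distance(a, b):
    """Minimum number of insertions and deletions turning a into b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all remaining symbols of a
    for j in range(n + 1):
        d[0][j] = j                      # insert all remaining symbols of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                d[i][j] = d[i - 1][j - 1]        # symbols match: no edit
            else:
                d[i][j] = 1 + min(d[i - 1][j],   # delete a[i-1]
                                  d[i][j - 1])   # insert b[j-1]
    return d[m][n]

print(edit_distance(list("abcab"), list("acbab")))  # -> 2
```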
The goal, then, is to write a program that computes the edit distance between two arbitrary strings of symbols. In addition, the program must explicitly produce a shortest possible edit script (i.e. sequence of edit commands) for the given strings. Other approaches have been tried. For example, Tichy discusses a file-comparison tool that determines how one file can be constructed from another by copying blocks of lines and appending lines. However, the ability to economically generate shortest-possible edit scripts depends critically on the repertoire of instructions that are allowed in the scripts. File comparison algorithms have a number of potential uses besides merely producing a set of edit commands to be read by someone trying to understand the evolution of a program or document. For example, the edit scripts might be text editor instructions that are saved to avoid the expense of storing nearly identical files. Rather than storing"} {"_id": "508119a50e3d4e8b7116c1b56a002de492b2270b", "title": "Object Detection Featuring 3D Audio Localization for Microsoft HoloLens - A Deep Learning based Sensor Substitution Approach for the Blind", "text": "Finding basic objects on a daily basis is a difficult but common task for blind people. This paper demonstrates the implementation of a wearable, deep-learning-backed object detection approach in the context of visual impairment or blindness. The prototype aims to substitute the impaired eye of the user and replace it with technical sensors. By scanning its surroundings, the prototype provides a situational overview of objects around the device. Object detection has been implemented using a near real-time, deep learning model named YOLOv2. The model supports the detection of 9000 objects. The prototype can display and read out the names of augmented objects, which can be selected by voice commands and used as directional guides for the user, using 3D audio feedback. A distance announcement of a selected object is derived from the HoloLens\u2019s spatial model. The wearable solution offers the opportunity to efficiently locate objects to support orientation without extensive training of the user. Preliminary evaluation covered the detection rate of speech recognition and the response times of the server."} {"_id": "c0a39b1b64100b929ec77d33232513ec72089a2e", "title": "English as a Formal Specification Language", "text": "PENG is a computer-processable controlled natural language designed for writing unambiguous and precise specifications. PENG covers a strict subset of standard English and is precisely defined by a controlled grammar and a controlled lexicon. In contrast to other controlled languages, the author does not need to know the grammatical restrictions explicitly. ECOLE, a look-ahead text editor, indicates the restrictions while the specification is written. The controlled lexicon contains domain-specific content words that can be defined by the author on the fly and predefined function words. Specifications written in PENG can be deterministically translated into discourse representation structures to cope with anaphora and presuppositions and also into first-order predicate logic. 
To test the formal properties of PENG, we reformulated Schubert\u2019s steamroller puzzle in PENG, translated the resulting specification via discourse representation structures into first-order predicate logic with equality, and proved the steamroller\u2019s conclusion with OTTER, a standard theorem prover."} {"_id": "f9cf246008d745f883914d925567bb36df806613", "title": "Automatic Retraction and Full-Cycle Operation for a Class of Airborne Wind Energy Generators", "text": "Airborne wind energy systems aim to harvest the power of winds blowing at altitudes higher than what conventional wind turbines reach. They employ a tethered flying structure, usually a wing, and exploit the aerodynamic lift to produce electrical power. In the case of ground-based systems, where the traction force on the tether is used to drive a generator on the ground, a two-phase power cycle is carried out: one phase to produce power, where the tether is reeled out under high traction force, and a second phase where the tether is recoiled under lower load. The problem of controlling a tethered wing in this second phase, the retraction phase, is addressed here, by proposing two possible control strategies. Theoretical analyses, numerical simulations, and experimental results are presented to show the performance of the two approaches. Finally, the experimental results of complete autonomous power generation cycles are reported and compared with those in first-principle models."} {"_id": "53c544145d2fe5fe8c44584f44f36f74393b983e", "title": "Simulation of object and human skin formations in a grasping task", "text": "This paper addresses the problem of simulating deformations between objects and the hand of a synthetic character during a grasping process. A numerical method based on finite element theory allows us to take into account the active forces of the fingers on the object and the reactive forces of the object on the fingers. The method improves control of synthetic human behavior in a task level animation system because it provides information about the environment of a synthetic human and so can be compared to the sense of touch. Finite element theory currently used in engineering seems to be one of the best approaches for modeling both elastic and plastic deformation of objects, as well as shocks with or without penetration between deformable objects. We show that intrinsic properties of the method based on composition/decomposition of elements have an impact in computer animation. We also state that the use of the same method for modeling both objects and human bodies improves the modeling of the contacts between them. Moreover, it allows a realistic envelope deformation of the human fingers comparable to existing methods. To show what we can expect from the method, we apply it to the grasping and pressing of a ball. Our solution to the grasping problem is based on displacement commands instead of the force commands used in robotics and human behavior."} {"_id": "0eaa75861d9e17f2c95bd3f80f48db95bf68a50c", "title": "Electromigration and its impact on physical design in future technologies", "text": "Electromigration (EM) is one of the key concerns going forward for interconnect reliability in integrated circuit (IC) design. Although analog designers have been aware of the EM problem for some time, digital circuits are also being affected now. This talk addresses basic design issues and their effects on electromigration during interconnect physical design. 
The intention is to increase current density limits in the interconnect by adopting electromigration-inhibiting measures, such as short-length and reservoir effects. Exploitation of these effects at the layout stage can provide partial relief of EM concerns in IC design flows in the future."} {"_id": "24ff5027e7042aeead47ef3071f1a023243078bb", "title": "Optimizing Space-Air-Ground Integrated Networks by Artificial Intelligence", "text": "It is widely acknowledged that the development of traditional terrestrial communication technologies cannot provide all users with fair and high quality services due to the scarce network resource and limited coverage areas. To complement the terrestrial connection, especially for users in rural, disaster-stricken, or other difficult-to-serve areas, satellites, unmanned aerial vehicles (UAVs), and balloons have been utilized to relay the communication signals. On this basis, Space-Air-Ground Integrated Networks (SAGINs) have been proposed to improve the users\u2019 Quality of Experience (QoE). However, compared with existing networks such as ad hoc networks and cellular networks, SAGINs are much more complex due to the various characteristics of the three network segments. To improve the performance of SAGINs, researchers are facing many unprecedented challenges. In this paper, we propose using Artificial Intelligence (AI) techniques to optimize SAGINs, as AI techniques have shown their predominant advantages in many applications. We first analyze several main challenges of SAGINs and explain how these problems can be solved by AI. Then, we consider the satellite traffic balance as an example and propose a deep learning based method to improve the traffic control performance. Simulation results show that the deep learning technique can be an efficient tool to improve the performance of SAGINs."} {"_id": "2c6835e8bdb8c70a9c3aa9bd2578b01dd1b93114", "title": "TRY-ON WITH CLOTHING REGION", "text": "We propose a virtual try-on method based on generative adversarial networks (GANs). By considering clothing regions, this method enables us to reflect the pattern of clothes better than Conditional Analogy GAN (CAGAN), an existing virtual try-on method based on GANs. Our method first obtains the clothing region on a person by using a human parsing model learned with a large-scale dataset. Next, using the acquired region, the clothing part is removed from a human image. A desired clothing image is added to the blank area. The network learns how to apply new clothing to the area of people\u2019s clothing. Results demonstrate the possibility of reflecting a clothing pattern. Furthermore, an image of the clothes that the person is originally wearing becomes unnecessary during testing. In experiments, we generate images using images gathered from Zalando (a fashion e-commerce site)."} {"_id": "38a70884a93dd6912404519a779cc497965feff1", "title": "Stereotypes of individuals with learning disabilities: views of college students with and without learning disabilities.", "text": "To explore possible reasons for low self-identification rates among undergraduates with learning disabilities (LD), we asked students (38 with LD, 100 without LD) attending two large, public, research-intensive universities to respond to a questionnaire designed to assess stereotypes about individuals with LD and conceptions of ability. 
Responses were coded into six categories of stereotypes about LD (low intelligence, compensation possible, process deficit, nonspecific insurmountable condition, working the system, and other), and into three categories of conceptions of intelligence (entity, incremental, neither). Consistent with past findings, the most frequent metastereotype reported by individuals in both groups related to generally low ability. In addition, students with LD were more likely to espouse views of intelligence as a fixed trait. As a whole, the study's findings have implications for our understanding of factors that influence self-identification and self-advocacy at the postsecondary level."} {"_id": "cc6dc5a3e8a18a0aaab7cbe8cee22bf3ac92f0bf", "title": "Concurrency control methods in distributed database: A review and comparison", "text": "In recent years, remarkable improvements have been made in the performance of distributed database systems. A distributed database is composed of several sites which are connected to each other through network connections. In such a system, if transactions are not well coordinated, database incoherence may result. Nowadays, because of the complexity of many sites and their connection methods, it is difficult to extend different models in distributed databases serially. The principal goal of concurrency control in distributed databases is to ensure that concurrent access to the common database by different sites does not cause interference. Different concurrency control algorithms have been suggested for use in distributed database systems. In this paper, some available methods for concurrency control in distributed databases are introduced and compared."} {"_id": "45e2e2a327ea696411b212492b053fd328963cc3", "title": "Health App Use Among US Mobile Phone Users: Analysis of Trends by Chronic Disease Status", "text": "BACKGROUND\nMobile apps hold promise for serving as a lifestyle intervention in public health to promote wellness and attenuate chronic conditions, yet little is known about how individuals with chronic illness use or perceive mobile apps.\n\n\nOBJECTIVE\nThe objective of this study was to explore behaviors and perceptions about mobile phone-based apps for health among individuals with chronic conditions.\n\n\nMETHODS\nData were collected from a national cross-sectional survey of 1604 mobile phone users in the United States that assessed mHealth use, beliefs, and preferences. This study examined health app use, reason for download, and perceived efficacy by chronic condition.\n\n\nRESULTS\nAmong participants, having between 1 and 5 apps was reported by 38.9% (314/807) of respondents without a condition and by 6.6% (24/364) of respondents with hypertension. Use of health apps was reported 2 times or more per day by 21.3% (172/807) of respondents without a condition, 2.7% (10/364) with hypertension, 13.1% (26/198) with obesity, 12.3% (20/163) with diabetes, 12.0% (32/267) with depression, and 16.6% (53/319) with high cholesterol. Results of the logistic regression did not indicate a significant difference in health app download between individuals with and without chronic conditions (P>.05). Compared with individuals with poor health, health app download was more likely among those with self-reported very good health (odds ratio [OR] 3.80, 95% CI 2.38-6.09, P<.001) and excellent health (OR 4.77, 95% CI 2.70-8.42, P<.001). 
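As a hedged aside on how odds ratios of this kind are typically derived, the sketch below fits a logistic regression on synthetic placeholder data and exponentiates the coefficients and confidence bounds; the variables and effect sizes are assumptions for illustration, not the survey's actual model.

```python
# Hedged sketch: odds ratios and 95% CIs from a fitted logistic regression,
# on synthetic placeholder data (not the survey dataset).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
health = rng.integers(0, 5, size=n)              # self-reported health, 0-4
age = rng.normal(45, 15, size=n)
logit = -1.0 + 0.35 * health - 0.01 * age        # assumed true model
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # download flag

X = sm.add_constant(np.column_stack([health, age]))
res = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(res.params)                 # exponentiated coefficients
ci = np.exp(res.conf_int())                      # 95% CI on the OR scale
print(odds_ratios, ci, sep="\n")
```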
Similarly, compared with individuals who report never or rarely engaging in physical activity, health app download was more likely among those who report exercising 1 day per week (OR 2.47, 95% CI 1.6-3.83, P<.001), 2 days per week (OR 4.77, 95% CI 3.27-6.94, P<.001), 3 to 4 days per week (OR 5.00, 95% CI 3.52-7.10, P<.001), and 5 to 7 days per week (OR 4.64, 95% CI 3.11-6.92, P<.001). All logistic regression results controlled for age, sex, and race or ethnicity.\n\n\nCONCLUSIONS\nResults from this study suggest that individuals with poor self-reported health and low rates of physical activity, arguably those who stand to benefit most from health apps, were least likely to report downloading and using these health tools."} {"_id": "71795f9f511f6948dd67aff7e9725c08ff1a4c94", "title": "Hadoop+Aparapi: Making heterogenous MapReduce programming easier", "text": "Lately, programmers have started to take advantage of the GPU capabilities of cloud-based machines. Using the GPUs can decrease the number of nodes required to perform the computation by increasing the productivity per node. We combine Hadoop, a widely-used MapReduce framework, with Aparapi, a new Java-to-OpenCL conversion tool from AMD. We propose an easy-to-use API which allows easy implementation of MapReduce algorithms that make use of the GPU. Our API improves upon Hadoop by further hiding the complexity of GPU programming, thus allowing the programmer to concentrate on her algorithm. We also propose an accompanying refactoring that allows the programmer to specify the GPU part of their map computation by using very lightweight annotation."} {"_id": "8cfb12304856268ee438ccb16e4b87960c7349e0", "title": "A Review on Internet of Things (IoT)", "text": "The Internet, a revolutionary invention, is always transforming into new kinds of hardware and software, making it unavoidable for anyone. The form of communication that we see now is either human-human or human-device, but the Internet of Things (IoT) promises a great future for the internet where the type of communication is machine-machine (M2M). This paper aims to provide a comprehensive overview of the IoT scenario and reviews its enabling technologies and the sensor networks. Also, it describes a six-layered architecture of IoT and points out the related key challenges."} {"_id": "a39faa00248abb3984317f2d6830f485cb5e1a0d", "title": "Wearable and Implantable Sensors for Biomedical Applications", "text": "Mobile health technologies offer great promise for reducing healthcare costs and improving patient care. Wearable and implantable technologies are contributing to a transformation in the mobile health era in terms of improving healthcare and health outcomes and providing real-time guidance on improved health management and tracking. In this article, we review the biomedical applications of wearable and implantable medical devices and sensors, ranging from monitoring to prevention of diseases, as well as the materials used in the fabrication of these devices and the standards for wireless medical devices and mobile applications. We conclude by discussing some of the technical challenges in wearable and implantable technology and possible solutions for overcoming these difficulties."} {"_id": "e749e6311e25eb8081672742e78c427ce5979552", "title": "Effective application of process improvement patterns to business processes", "text": "Improving the operational effectiveness and efficiency of processes is a fundamental task of business process management (BPM). 
There exist many proposals of process improvement patterns (PIPs) as practices that aim at supporting this goal. Selecting and implementing relevant PIPs is therefore an important prerequisite for establishing process-aware information systems in enterprises. Nevertheless, there is still a gap regarding the validation of PIPs with respect to their actual business value for a specific application scenario before implementation investments are incurred. Based on empirical research as well as experiences from BPM projects, this paper proposes a method to tackle this challenge. Our approach toward the assessment of process improvement patterns considers real-world constraints such as the role of senior stakeholders or the cost of adapting available IT systems. In addition, it outlines process improvement potentials that arise from the information technology infrastructure available to organizations, particularly regarding the combination of enterprise resource planning with business process intelligence. Our approach is illustrated along a real-world business process from human resource management. The latter covers a transactional volume of about 29,000 process instances over a period of 1\u00a0year. Overall, our approach enables both practitioners and researchers to reasonably assess PIPs before taking any process implementation decision."} {"_id": "d5ecb372f6cbdfb52588fbb4a54be21d510009d0", "title": "A Study of Birth Order , Academic Performance , and Personality", "text": "This study aimed to investigate the birth order effect on personality and academic performance amongst 120 Malaysians. Besides, it also aimed to examine the relationship between personality and academic achievement. Thirty firstborns, 30 middle children, 30 lastborns, and 30 only children, who shared the mean age of 20.0 years (SD = 1.85), were recruited into this study. Participants\u2019 Sijil Pelajaran Malaysia (SPM) results were recorded and their personality was assessed by the Ten Item Personality Inventory (TIPI). Results indicated that participants of different birth positions did not differ significantly in terms of personality and academic performance. However, Pearson\u2019s correlation showed that extraversion correlated positively with academic performance. Keywords--birth order; personality; academic achievement"} {"_id": "6193ece762c15b7d8a958dc64c37e858cd873b8a", "title": "A compact printed log-periodic antenna with loaded stub", "text": "A compact printed log-periodic dipole antenna (LPDA) with distributed inductive load is presented in this paper. By adding a stub on the top of each element, the dimension of the LPDA can be reduced by 60%. The antenna achieves an impedance bandwidth of 10 GHz (8 GHz-18 GHz). According to the simulation results, the designed structure achieves stable radiation patterns throughout the operating frequency band."} {"_id": "2d9416485091e6af3619c4bc9323a0887d450c8a", "title": "LSDA: Large Scale Detection Through Adaptation", "text": "A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. 
It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at lsda.berkeleyvision.org."} {"_id": "0f28cbfe0674e0af4899d21dd90f6f5d0d5c3f1b", "title": "The Parable of Google Flu: Traps in Big Data Analysis", "text": "In February 2013, Google Flu Trends (GFT) made headlines but not for a reason that Google executives or the creators of the flu tracking system would have hoped. Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) than the Centers for Disease Control and Prevention (CDC), which bases its estimates on surveillance reports from laboratories across the United States (1, 2). This happened despite the fact that GFT was built to predict CDC reports. Given that GFT is often held up as an exemplary use of big data (3, 4), what lessons can we draw from this error?"} {"_id": "cc43c080340817029fd497536cc9bd39b0a76dd2", "title": "Human Mobility in a Continuum Approach", "text": "Human mobility is investigated using a continuum approach that allows one to calculate the probability of observing a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework, in which previously proposed mobility models like the gravity model, the intervening opportunities model, and the recently introduced radiation model naturally result as special cases. A new form of the radiation model is derived and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape."} {"_id": "15e8961e8f9d1fb5060c3284a5bdcc09f2fc1ba6", "title": "Automated Diagnosis of Glaucoma Using Digital Fundus Images", "text": "Glaucoma is a disease of the optic nerve caused by an increase in the intraocular pressure of the eye. Glaucoma mainly affects the optic disc by increasing the cup size. It can lead to blindness if it is not detected and treated in time. The detection of glaucoma through Optical Coherence Tomography (OCT) and Heidelberg Retinal Tomography (HRT) is very expensive. This paper presents a novel method for glaucoma detection using digital fundus images. 
Digital image processing techniques, such as preprocessing, morphological operations, and thresholding, are widely used for the automatic detection of the optic disc and blood vessels and for the computation of features. We have extracted features such as the cup-to-disc (c/d) ratio, the ratio of the distance between the optic disc center and the optic nerve head to the diameter of the optic disc, and the ratio of blood vessel area on the inferior-superior side to that on the nasal-temporal side. These features are validated by classifying normal and glaucoma images using a neural network classifier. The results presented in this paper indicate that the features are clinically significant in the detection of glaucoma. Our system is able to classify glaucoma automatically with a sensitivity and specificity of 100% and 80%, respectively."} {"_id": "36b3865f944c74c6d782c26dfe7be04ef9664a67", "title": "Conditional Generative Adversarial Networks for Speech Enhancement and Noise-Robust Speaker Verification", "text": "Improving speech system performance in noisy environments remains a challenging task, and speech enhancement (SE) is one of the effective techniques to solve the problem. Motivated by the promising results of generative adversarial networks (GANs) in a variety of image processing tasks, we explore the potential of conditional GANs (cGANs) for SE, and in particular, we make use of the image processing framework proposed by Isola et al. [1] to learn a mapping from the spectrogram of noisy speech to an enhanced counterpart. The SE cGAN consists of two networks, trained in an adversarial manner: a generator that tries to enhance the input noisy spectrogram, and a discriminator that tries to distinguish between enhanced spectrograms provided by the generator and clean ones from the database using the noisy spectrogram as a condition. We evaluate the performance of the cGAN method in terms of perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and equal error rate (EER) of speaker verification (an example application). Experimental results show that the cGAN method overall outperforms the classical short-time spectral amplitude minimum mean square error (STSA-MMSE) SE algorithm, and is comparable to a deep neural network-based SE approach (DNN-SE)."} {"_id": "bb192e0208548831de1475b11859f2114121c662", "title": "Direct and Indirect Discrimination Prevention Methods", "text": "Along with privacy, discrimination is a very important issue when considering the legal and ethical aspects of data mining. It is more than obvious that most people do not want to be discriminated against because of their gender, religion, nationality, age and so on, especially when those attributes are used for making decisions about them like giving them a job, loan, insurance, etc. Discovering such potential biases and eliminating them from the training data without harming their decision-making utility is therefore highly desirable. For this reason, anti-discrimination techniques including discrimination discovery and prevention have been introduced in data mining. Discrimination prevention consists of inducing patterns that do not lead to discriminatory decisions even if the original training datasets are inherently biased. In this chapter, focusing on discrimination prevention, we present a taxonomy for classifying and examining discrimination prevention methods. 
Then, we introduce a group of pre-processing discrimination prevention methods, specify the distinguishing features of each approach, and describe how these approaches deal with direct or indirect discrimination. A presentation of the metrics used to evaluate the performance of those approaches is also given. Finally, we conclude our study by enumerating interesting future directions in this research body."} {"_id": "1935e0986939ea6ef2afa01eeef94dbfea6fb6da", "title": "Markowitz Revisited: Mean-Variance Models in Financial Portfolio Analysis", "text": "Mean-variance portfolio analysis provided the first quantitative treatment of the tradeoff between profit and risk. We describe in detail the interplay between objective and constraints in a number of single-period variants, including semivariance models. Particular emphasis is laid on avoiding the penalization of overperformance. The results are then used as building blocks in the development and theoretical analysis of multiperiod models based on scenario trees. A key property is the possibility of removing surplus money in future decisions, yielding approximate downside risk minimization."} {"_id": "1ea03bc28a14ade633d5a7fe9af71328834d45ab", "title": "Federated Learning for Keyword Spotting", "text": "We propose a practical approach based on federated learning to solve out-of-domain issues with continuously running embedded speech-based models such as wake word detectors. We conduct an extensive empirical study of the federated averaging algorithm for the \u201cHey Snips\u201d wake word based on a crowdsourced dataset that mimics a federation of wake word users. We empirically demonstrate that using an adaptive averaging strategy inspired by Adam in place of standard weighted model averaging highly reduces the number of communication rounds required to reach our target performance. The associated upstream communication costs per user are estimated at 8 MB, which is reasonable in the context of smart home voice assistants. Additionally, the dataset used for these experiments is being open sourced with the aim of fostering further transparent research in the application of federated learning to speech data."} {"_id": "55ca165fa6091973674b12ea8fa3f1a3a1e50a6d", "title": "Sample-Based Tree Search with Fixed and Adaptive State Abstractions", "text": "Sample-based tree search (SBTS) is an approach to solving Markov decision problems based on constructing a lookahead search tree using random samples from a generative model of the MDP. It encompasses Monte Carlo tree search (MCTS) algorithms like UCT as well as algorithms such as sparse sampling. SBTS is well-suited to solving MDPs with large state spaces due to the relative insensitivity of SBTS algorithms to the size of the state space. The limiting factor in the performance of SBTS tends to be the exponential dependence of sample complexity on the depth of the search tree. The number of samples required to build a search tree is O((|A|B)^d), where |A| is the number of available actions, B is the number of possible random outcomes of taking an action, and d is the depth of the tree. State abstraction can be used to reduce B by aggregating random outcomes together into abstract states. Recent work has shown that abstract tree search often performs substantially better than tree search conducted in the ground state space. This paper presents a theoretical and empirical evaluation of tree search with both fixed and adaptive state abstractions. 
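A toy calculation, with illustrative numbers not taken from the paper, of how aggregating the B random outcomes into fewer abstract outcomes shrinks the O((|A|B)^d) sample complexity quoted above:

```python
# Hedged toy calculation: outcome aggregation and the O((|A| * B)^d) bound.
# B_abs abstract outcomes stand in for B ground outcomes; all numbers are
# illustrative assumptions, not values from the paper.
A, B, B_abs, d = 4, 10, 3, 5

ground = (A * B) ** d        # (4*10)^5 = 102,400,000 samples
abstract = (A * B_abs) ** d  # (4*3)^5  = 248,832 samples
print(f"ground tree:   {ground:,} samples")
print(f"abstract tree: {abstract:,} samples")
print(f"reduction factor: {ground / abstract:.0f}x")
```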
We derive a bound on regret due to state abstraction in tree search that decomposes abstraction error into three components arising from properties of the abstraction and the search algorithm. We describe versions of popular SBTS algorithms that use fixed state abstractions, and we introduce the Progressive Abstraction Refinement in Sparse Sampling (PARSS) algorithm, which adapts its abstraction during search. We evaluate PARSS as well as sparse sampling with fixed abstractions on 12 experimental problems, and find that PARSS outperforms search with a fixed abstraction and that search with even highly inaccurate fixed abstractions outperforms search without abstraction. These results establish progressive abstraction refinement as a promising basis for new tree search algorithms, and we propose directions for future work within the progressive refinement framework."} {"_id": "2ae40898406df0a3732acc54f147c1d377f54e2a", "title": "Query by Committee", "text": "We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms."} {"_id": "49e85869fa2cbb31e2fd761951d0cdfa741d95f3", "title": "Adaptive Manifold Learning", "text": "Manifold learning algorithms seek to find a low-dimensional parameterization of high-dimensional data. They heavily rely on the notion of what can be considered as local, how accurately the manifold can be approximated locally, and, last but not least, how the local structures can be patched together to produce the global parameterization. In this paper, we develop algorithms that address two key issues in manifold learning: 1) the adaptive selection of the local neighborhood sizes when imposing a connectivity structure on the given set of high-dimensional data points and 2) the adaptive bias reduction in the local low-dimensional embedding by accounting for the variations in the curvature of the manifold as well as its interplay with the sampling density of the data set. We demonstrate the effectiveness of our methods for improving the performance of manifold learning algorithms using both synthetic and real-world data sets."} {"_id": "bf07d60ba6d6c6b8cabab72dfce06f203782df8f", "title": "Manifold-Learning-Based Feature Extraction for Classification of Hyperspectral Data: A Review of Advances in Manifold Learning", "text": "Advances in hyperspectral sensing provide new capability for characterizing spectral signatures in a wide range of physical and biological systems, while inspiring new methods for extracting information from these data. HSI data often lie on sparse, nonlinear manifolds whose geometric and topological structures can be exploited via manifold-learning techniques. In this article, we focused on demonstrating the opportunities provided by manifold learning for classification of remotely sensed data. 
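A minimal sketch, assuming scikit-learn's stock implementations and a synthetic manifold rather than hyperspectral imagery, of the kind of nonlinear embedding these manifold-learning methods compute:

```python
# Hedged sketch: two standard manifold-learning methods recovering a 2D
# parameterization of a synthetic manifold (illustrative only; not the
# adaptive algorithms or remote sensing pipelines discussed here).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)

for name, method in [("Isomap", Isomap(n_neighbors=10, n_components=2)),
                     ("LLE", LocallyLinearEmbedding(n_neighbors=10,
                                                    n_components=2))]:
    Y = method.fit_transform(X)          # low-dimensional parameterization
    print(name, Y.shape)
```

Both methods depend on the neighborhood size (n_neighbors), which is exactly the kind of locality parameter the adaptive approaches above aim to select automatically.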
However, limitations and opportunities remain both for research and applications. Although these methods have been demonstrated to mitigate the impact of physical effects that affect electromagnetic energy traversing the atmosphere and reflecting from a target, nonlinearities are not always exhibited in the data, particularly at lower spatial resolutions, so users should always evaluate the inherent nonlinearity in the data. Manifold learning is data driven, and as such, results are strongly dependent on the characteristics of the data, and one method will not consistently provide the best results. Nonlinear manifold-learning methods require parameter tuning, although experimental results are typically stable over a range of values, and have higher computational overhead than linear methods, which is particularly relevant for large-scale remote sensing data sets. Opportunities for advancing manifold learning also exist for analysis of hyperspectral and multisource remotely sensed data. Manifolds are assumed to be inherently smooth, an assumption that some data sets may violate, and data often contain classes whose spectra are distinctly different, resulting in multiple manifolds or submanifolds that cannot be readily integrated with a single manifold representation. Developing appropriate characterizations that exploit the unique characteristics of these submanifolds for a particular data set is an open research problem for which hierarchical manifold structures appear to have merit. To date, most work in manifold learning has focused on feature extraction from single images, assuming stationarity across the scene. Research is also needed in joint exploitation of global and local embedding methods in dynamic, multitemporal environments and integration with semisupervised and active learning."} {"_id": "01996726f44253807537cec68393f1fce6a9cafa", "title": "Stochastic Neighbor Embedding", "text": "We describe a probabilistic approach to the task of placing objects, described by high-dimensional vectors or by pairwise dissimilarities, in a low-dimensional space in a way that preserves neighbor identities. A Gaussian is centered on each object in the high-dimensional space and the densities under this Gaussian (or the given dissimilarities) are used to define a probability distribution over all the potential neighbors of the object. The aim of the embedding is to approximate this distribution as well as possible when the same operation is performed on the low-dimensional \u201cimages\u201d of the objects. A natural cost function is a sum of Kullback-Leibler divergences, one per object, which leads to a simple gradient for adjusting the positions of the low-dimensional images. Unlike other dimensionality reduction methods, this probabilistic framework makes it easy to represent each object by a mixture of widely separated low-dimensional images. This allows ambiguous objects, like the document count vector for the word \u201cbank\u201d, to have versions close to the images of both \u201criver\u201d and \u201cfinance\u201d without forcing the images of outdoor concepts to be located close to those of corporate concepts."} {"_id": "0e1431fa42d76c44911b07078610d4b9254bd4ce", "title": "Nonlinear Component Analysis as a Kernel Eigenvalue Problem", "text": "A new method for performing a nonlinear form of principal component analysis is proposed. 
By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 \u00d7 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition."} {"_id": "40cfac582cafeadb0e09e5f020e2febf5cbd4986", "title": "Leveraging graph topology and semantic context for pharmacovigilance through twitter-streams", "text": "Adverse drug events (ADEs) constitute one of the leading causes of post-therapeutic death and their identification constitutes an important challenge of modern precision medicine. Unfortunately, the onset and effects of ADEs are often underreported complicating timely intervention. At over 500 million posts per day, Twitter is a commonly used social media platform. The ubiquity of day-to-day personal information exchange on Twitter makes it a promising target for data mining for ADE identification and intervention. Three technical challenges are central to this problem: (1) identification of salient medical keywords in (noisy) tweets, (2) mapping drug-effect relationships, and (3) classification of such relationships as adverse or non-adverse. We use a bipartite graph-theoretic representation called a drug-effect graph (DEG) for modeling drug and side effect relationships by representing the drugs and side effects as vertices. We construct individual DEGs on two data sources. The first DEG is constructed from the drug-effect relationships found in FDA package inserts as recorded in the SIDER database. The second DEG is constructed by mining the history of Twitter users. We use dictionary-based information extraction to identify medically-relevant concepts in tweets. Drugs, along with co-occurring symptoms are connected with edges weighted by temporal distance and frequency. Finally, information from the SIDER DEG is integrated with the Twitter DEG and edges are classified as either adverse or non-adverse using supervised machine learning. We examine both graph-theoretic and semantic features for the classification task. The proposed approach can identify adverse drug effects with high accuracy with precision exceeding 85% and F1 exceeding 81%. When compared with leading methods at the state-of-the-art, which employ un-enriched graph-theoretic analysis alone, our method leads to improvements ranging between 5 and 8% in terms of the aforementioned measures. Additionally, we employ our method to discover several ADEs which, though present in medical literature and Twitter-streams, are not represented in the SIDER databases. We present a DEG integration model as a powerful formalism for the analysis of drug-effect relationships that is general enough to accommodate diverse data sources, yet rigorous enough to provide a strong mechanism for ADE identification."} {"_id": "292eee24017356768f1f50b72701ea636dba7982", "title": "IMPLICIT SHAPE MODELS FOR OBJECT DETECTION IN 3D POINT CLOUDS", "text": "We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use.
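The kernel eigenvalue formulation summarized in the kernel PCA record above reduces to a few matrix operations. A minimal sketch, assuming a polynomial kernel (x·y)^5 to mirror the five-pixel-product example; this is illustrative, not the paper's code:

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=5):
    """Minimal kernel PCA: PCA in the feature space induced by a
    polynomial kernel, computed from the kernel (Gram) matrix only.

    k(x, y) = (x . y)^degree corresponds to the space of all degree-5
    monomials (e.g., five-pixel products) without forming it explicitly.
    """
    n = X.shape[0]
    K = (X @ X.T) ** degree
    # Double-center the Gram matrix (centering in feature space).
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose; eigh returns eigenvalues in ascending order.
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Scale eigenvectors so feature-space components have unit norm.
    alphas = vecs[:, :n_components] / np.sqrt(
        np.maximum(vals[:n_components], 1e-12))
    # Projections of the training points onto the principal components.
    return Kc @ alphas

X = np.random.default_rng(1).normal(size=(50, 8))
print(kernel_pca(X, n_components=2).shape)  # (50, 2)
```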
We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and get significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m\u00b2 of urban area in total."} {"_id": "ffd7ac9b4fff641d461091d5237321f83bae5216", "title": "Multi-task Learning for Maritime Traffic Surveillance from AIS Data Streams", "text": "In a world of global trading, maritime safety, security and efficiency are crucial issues. We propose a multi-task deep learning framework for vessel monitoring using Automatic Identification System (AIS) data streams. We combine recurrent neural networks with latent variable modeling and an embedding of AIS messages to a new representation space to jointly address key issues to be dealt with when considering AIS data streams: massive amount of streaming data, noisy data and irregular time sampling. We demonstrate the relevance of the proposed deep learning framework on real AIS datasets for a three-task setting, namely trajectory reconstruction, anomaly detection and vessel type identification."} {"_id": "6385cd92746386c82a69ffdc3bc0a9da9f64f721", "title": "The Dysphagia Outcome and Severity Scale", "text": "The Dysphagia Outcome and Severity Scale (DOSS) is a simple, easy-to-use, 7-point scale developed to systematically rate the functional severity of dysphagia based on objective assessment and make recommendations for diet level, independence level, and type of nutrition. Intra- and interjudge reliabilities of the DOSS were established by four clinicians on 135 consecutive patients who underwent a modified barium swallow procedure at a large teaching hospital. Patients were assigned a severity level, independence level, and nutritional level based on three areas most associated with final recommendations: oral stage bolus transfer, pharyngeal stage retention, and airway protection. Results indicate high interrater (90%) and intrarater (93%) agreement with this scale. Implications are suggested for use of the DOSS in documenting functional outcomes of swallowing and diet status based on objective assessment."} {"_id": "1d18fba47004a4cf2643c41ca82f6b04904bb134", "title": "Depth Map Super-Resolution Considering View Synthesis Quality", "text": "Accurate and high-quality depth maps are required in lots of 3D applications, such as multi-view rendering, 3D reconstruction and 3DTV. However, the resolution of captured depth image is much lower than that of its corresponding color image, which affects its application performance. In this paper, we propose a novel depth map super-resolution (SR) method by taking view synthesis quality into account. The proposed approach mainly includes two technical contributions. First, since the captured low-resolution (LR) depth map may be corrupted by noise and occlusion, we propose a credibility based multi-view depth maps fusion strategy, which considers the view synthesis quality and inter-view correlation, to refine the LR depth map. Second, we propose a view synthesis quality based trilateral depth-map up-sampling method, which considers depth smoothness, texture similarity and view synthesis quality in the up-sampling filter.
Experimental results demonstrate that the proposed method outperforms state-of-the-art depth SR methods for both super-resolved depth maps and synthesized views. Furthermore, the proposed method is robust to noise and achieves promising results under noise-corruption conditions."} {"_id": "922b5eaa5ca03b12d9842b7b84e0e420ccd2feee", "title": "A New Approach to Linear Filtering and Prediction Problems", "text": "AN IMPORTANT class of theoretical and practical problems in communication and control is of a statistical nature. Such problems are: (i) Prediction of random signals; (ii) separation of random signals from random noise; (iii) detection of signals of known form (pulses, sinusoids) in the presence of random noise. In his pioneering work, Wiener [1] showed that problems (i) and (ii) lead to the so-called Wiener-Hopf integral equation; he also gave a method (spectral factorization) for the solution of this integral equation in the practically important special case of stationary statistics and rational spectra. Many extensions and generalizations followed Wiener\u2019s basic work. Zadeh and Ragazzini solved the finite-memory case [2]. Concurrently and independently of Bode and Shannon [3], they also gave a simplified method [2] of solution. Booton discussed the nonstationary Wiener-Hopf equation [4]. These results are now in standard texts [5-6]. A somewhat different approach along these main lines has been given recently by Darlington [7]. For extensions to sampled signals, see, e.g., Franklin [8], Lees [9]. Another approach based on the eigenfunctions of the Wiener-Hopf equation (which applies also to nonstationary problems whereas the preceding methods in general don\u2019t), has been pioneered by Davis [10] and applied by many others, e.g., Shinbrot [11], Blum [12], Pugachev [13], Solodovnikov [14]. In all these works, the objective is to obtain the specification of a linear dynamic system (Wiener filter) which accomplishes the prediction, separation, or detection of a random signal. (Of course, in general these tasks may be done better by nonlinear filters. At present, however, little or nothing is known about how to obtain, both theoretically and practically, these nonlinear filters.)"} {"_id": "9ee859bec32fa8240de0273faff6f20e16e21cc6", "title": "Detection of exudates in fundus photographs using convolutional neural networks", "text": "Diabetic retinopathy is one of the leading causes of preventable blindness in the developed world. Early diagnosis of diabetic retinopathy enables timely treatment and in order to achieve it a major effort will have to be invested into screening programs and especially into automated screening programs. Detection of exudates is very important for early diagnosis of diabetic retinopathy.
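The linear filtering record above introduced what is now the textbook state-space recursion. A minimal predict/update sketch for a linear-Gaussian model follows; the F, H, Q, R notation is the modern convention, not taken from the abstract:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the (textbook) Kalman filter.

    x, P : state estimate and its covariance
    z    : new measurement
    F, H : state-transition and observation matrices
    Q, R : process and measurement noise covariances
    """
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: 1-D constant-velocity model observed through position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)  # estimated position and velocity
```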
Deep neural networks have proven to be a very promising machine learning technique, and have shown excellent results in different computer vision problems. In this paper we show that convolutional neural networks can be effectively used in order to detect exudates in color fundus photographs."} {"_id": "13d4c2f76a7c1a4d0a71204e1d5d263a3f5a7986", "title": "Random Forests", "text": "Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, ***, 148\u2013156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression."} {"_id": "aeb7eaf29e16c82d9c0038a10d5b12d32b26ab60", "title": "Multi-task Learning Of Deep Neural Networks For Audio Visual Automatic Speech Recognition", "text": "Multi-task learning (MTL) involves the simultaneous training of two or more related tasks over shared representations. In this work, we apply MTL to audio-visual automatic speech recognition (AV-ASR). Our primary task is to learn a mapping between audio-visual fused features and frame labels obtained from acoustic GMM/HMM model. This is combined with an auxiliary task which maps visual features to frame labels obtained from a separate visual GMM/HMM model. The MTL model is tested at various levels of babble noise and the results are compared with a baseline hybrid DNN-HMM AV-ASR model. Our results indicate that MTL is especially useful at higher levels of noise. Compared to the baseline, up to 7% relative improvement in WER is reported at -3 dB SNR."} {"_id": "45fe9fac928c9e64c96b5feda318a09f1b0228dd", "title": "Ethnicity estimation with facial images", "text": "We have advanced an effort to develop vision based human understanding technologies for realizing human-friendly machine interfaces. Visual information, such as gender, age, ethnicity, and facial expression, plays an important role in face-to-face communication. This paper addresses a novel approach for ethnicity classification with facial images. In this approach, the Gabor wavelets transformation and retina sampling are combined to extract key facial features, and support vector machines are used for ethnicity classification. Our system, based on this approach, has achieved approximately 94% for ethnicity estimation under various lighting conditions."} {"_id": "2dbaedea8ac09b11d9da8767eb73b6b821890661", "title": "Bullying and victimization in adolescence: concurrent and stable roles and psychological health symptoms.", "text": "From an initial sample of 1278 Italian students, the authors selected 537 on the basis of their responses to a self-report bully and victim questionnaire. Participants' ages ranged from 13 to 20 years (M = 15.12 years, SD = 1.08 years).
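The Random Forests record above names the two sources of randomness precisely: a bootstrap-sampled random vector per tree and random feature selection at each split. A teaching sketch using scikit-learn's decision trees as building blocks; the class name `TinyForest` is ours, not from the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class TinyForest:
    """Bagged trees with random feature selection at each split -- the
    two ingredients named in the abstract. A teaching sketch, not a
    replacement for sklearn.ensemble.RandomForestClassifier."""

    def __init__(self, n_trees=50, seed=0):
        self.n_trees, self.seed = n_trees, seed
        self.trees = []

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        for _ in range(self.n_trees):
            idx = rng.integers(0, len(X), len(X))   # bootstrap sample
            tree = DecisionTreeClassifier(
                max_features="sqrt",                # random split features
                random_state=int(rng.integers(1 << 30)))
            self.trees.append(tree.fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        votes = np.stack([t.predict(X) for t in self.trees])
        # Majority vote over trees, column by column.
        return np.apply_along_axis(
            lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)

X = np.random.default_rng(2).normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(TinyForest().fit(X, y).predict(X[:5]))
```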
The authors compared the concurrent psychological symptoms of 4 participant groups (bullies, victims, bully/victims [i.e., bullies who were also victims of bullying], and uninvolved students). Of participants, 157 were in the bullies group, 140 were in the victims group, 81 were in the bully/victims group, and 159 were in the uninvolved students group. The results show that bullies reported a higher level of externalizing problems, victims reported more internalizing symptoms, and bully/victims reported both a higher level of externalizing problems and more internalizing symptoms. The authors divided the sample into 8 groups on the basis of the students' recollection of their earlier school experiences and of their present role. The authors classified the participants as stable versus late bullies, victims, bully/victims, or uninvolved students. The authors compared each stable group with its corresponding late group and found that stable victims and stable bully/victims reported higher degrees of anxiety, depression, and withdrawal than did the other groups. The authors focus their discussion on the role of chronic peer difficulties in relation to adolescents' symptoms and well-being."} {"_id": "d26ce29a109f8ccb42d7b0d312c70a6a750018ce", "title": "Side gate HiGT with low dv/dt noise and low loss", "text": "This paper presents a novel side gate HiGT (High-conductivity IGBT) that incorporates historical changes of gate structures for planar and trench gate IGBTs. Side gate HiGT has a side-wall gate, and the opposite side of channel region for side-wall gate is covered by a thick oxide layer to reduce Miller capacitance (Cres). In addition, side gate HiGT has no floating p-layer, which causes the excess Vge overshoot. The proposed side gate HiGT has 75% smaller Cres than the conventional trench gate IGBT. The excess Vge overshoot during turn-on is effectively suppressed, and Eon + Err can be reduced by 34% at the same diode's recovery dv/dt. Furthermore, side gate HiGT has sufficiently rugged RBSOA and SCSOA."} {"_id": "0db78d548f914135ad16c0d6618890de52c0c065", "title": "Incremental Dialogue System Faster than and Preferred to its Nonincremental Counterpart", "text": "Current dialogue systems generally operate in a pipelined, modular fashion on one complete utterance at a time. Evidence from human language understanding shows that human understanding operates incrementally and makes use of multiple sources of information during the parsing process, including traditionally \u201clater\u201d components such as pragmatics. In this paper we describe a spoken dialogue system that understands language incrementally, provides visual feedback on possible referents during the course of the user\u2019s utterance, and allows for overlapping speech and actions. We further present findings from an empirical study showing that the resulting dialogue system is faster overall than its nonincremental counterpart. Furthermore, the incremental system is preferred to its nonincremental counterpart \u2013 beyond what is accounted for by factors such as speed and accuracy. These results indicate that successful incremental understanding systems will improve both performance and usability."} {"_id": "eec44862b2d58434ca7706224bc0e9437a2bc791", "title": "The balanced scorecard: a foundation for the strategic management of information systems", "text": "The balanced scorecard (BSC) has emerged as a decision support tool at the strategic management level.
Many business leaders now evaluate corporate performance by supplementing financial accounting data with goal-related measures from the following perspectives: customer, internal business process, and learning and growth. It is argued that the BSC concept can be adapted to assist those managing business functions, organizational units and individual projects. This article develops a balanced scorecard for information systems (IS) that measures and evaluates IS activities from the following perspectives: business value, user orientation, internal process, and future readiness. Case study evidence suggests that a balanced IS scorecard can be the foundation for a strategic IS management system provided that certain development guidelines are followed, appropriate metrics are identified, and key implementation obstacles are overcome. \u00a9 1999 Elsevier Science B.V. All rights reserved."} {"_id": "4828a00f623651a9780b945980530fb6b3cb199a", "title": "Adversarial Machine Learning at Scale", "text": "Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model\u2019s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet (Russakovsky et al., 2014). Our contributions include: (1) recommendations for how to successfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a \u201clabel leaking\u201d effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process."} {"_id": "f7d3b7986255f2e5e2402e84d7d7c5e583d7cb05", "title": "A Combined Model- and Learning-Based Framework for Interaction-Aware Maneuver Prediction", "text": "This paper presents a novel online-capable interaction-aware intention and maneuver prediction framework for dynamic environments. The main contribution is the combination of model-based interaction-aware intention estimation with maneuver-based motion prediction based on supervised learning. The advantages of this framework are twofold. On one hand, expert knowledge in the form of heuristics is integrated, which simplifies the modeling of the interaction. On the other hand, the difficulties associated with the scalability and data sparsity of the algorithm due to the so-called curse of dimensionality can be reduced, as a reduced feature space is sufficient for supervised learning. The proposed algorithm can be used for highly automated driving or as a prediction module for advanced driver assistance systems without the need of intervehicle communication. At the start of the algorithm, the motion intention of each driver in a traffic scene is predicted in an iterative manner using the game-theoretic idea of stochastic multiagent simulation.
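The adversarial training loop discussed in the record above is easy to sketch for a toy model. The following assumes a single-step, FGSM-style attack on a numpy logistic regression; the paper itself operates on ImageNet-scale networks, so this is a shape-of-the-idea sketch only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Single-step attack: perturb x along the sign of the loss gradient.
    For logistic loss l = -log sigmoid(y * w.x),
    grad_x l = -(1 - p) * y * w, with p = sigmoid(y * w.x)."""
    p = sigmoid(y * (x @ w))
    grad_x = -(1.0 - p) * y * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=100, seed=0):
    # Train on a mix of clean and adversarial examples -- the basic
    # recipe the abstract scales up to large models and datasets.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]) * 0.01
    for _ in range(epochs):
        X_adv = np.array([fgsm(x, t, w, eps) for x, t in zip(X, y)])
        for Xb in (X, X_adv):
            p = sigmoid(y * (Xb @ w))
            w -= lr * (-(1.0 - p) * y) @ Xb / len(X)  # logistic gradient
    return w

X = np.random.default_rng(3).normal(size=(100, 4))
y = np.sign(X[:, 0] + 0.1)
w = adversarial_train(X, y)
print("clean accuracy:", np.mean(np.sign(X @ w) == y))
```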
This approach provides an interpretation of what other drivers intend to do and how they interact with surrounding traffic. By incorporating this information into a Bayesian network classifier, the developed framework achieves a significant improvement in terms of reliable prediction time and precision compared with other state-of-the-art approaches. By means of experimental results in real traffic on highways, the validity of the proposed concept and its online capability are demonstrated. Furthermore, its performance is quantitatively evaluated using appropriate statistical measures."} {"_id": "7616624dd230c42f6397a9a48094cf4611c02ab8", "title": "Frequency Tracking Control for a Capacitor-Charging Parallel Resonant Converter with Phase-Locked Loop", "text": "This study investigates a phase-locked loop (PLL) controlled parallel resonant converter (PRC) for a pulse power capacitor charging application. The dynamic nature of the capacitor charging is such that it causes a shift in the resonant frequency of the PRC. Using the proposed control method, the PRC can be optimized to operate with its maximum power capability and guarantee ZVS operation, even when the input voltage and resonant tank parameters vary. The detailed implementation of the PLL controller, as well as the determination of dead-time and leading time, is presented in this paper. Simulation and experimental results verify the performance of the proposed control method."} {"_id": "4902805fe1e2f292f6beed7593154e686d7f6dc2", "title": "A Novel Connectionist System for Unconstrained Handwriting Recognition", "text": "Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance."} {"_id": "e48df18774fbaff8b70b0231a02c3ccf1ebdf784", "title": "Enhancing computer vision to detect face spoofing attack utilizing a single frame from a replay video attack using deep learning", "text": "Recently, automatic face recognition has been applied in many web and mobile applications. Developers integrate and implement face recognition as an access control in these applications.
However, face recognition authentication is vulnerable to several attacks, especially when an attacker presents a 2-D printed image or recorded video frames in front of the face sensor system to gain access as a legitimate user. This paper introduces a non-intrusive method to detect face spoofing attacks that utilizes a single frame of the sequenced frames. We propose a specialized deep convolution neural network to extract complex and high-level features of the input diffused frame. We tested our method on the Replay Attack dataset which consists of 1200 short videos of both real-access and spoofing attacks. An extensive experimental analysis was conducted that demonstrated better results when compared to the results of previous static algorithms."} {"_id": "1663f1b811c0ea542c1d128ff129cdf5cd7f9c44", "title": "ISAR - radar imaging of targets with complicated motion", "text": "ISAR imaging is described for general motion of a radar target. ISAR imaging may be seen as a 3D to 2D projection, and the importance of the ISAR image projection plane is stated. For general motion, ISAR images are often smeared when using FFT processing. Time frequency methods are used to analyze such images, and to form sharp images. A given smeared image is shown to be the result of changes both in scale and in the projection plane orientation."} {"_id": "792e9c588e3426ec55f630fffefa439fc17e0406", "title": "Closing the Loop: Evaluating a Measurement Instrument for Maturity Model Design", "text": "To support the systematic improvement of business intelligence (BI) in organizations, we have designed and refined a BI maturity model (BIMM) and a respective measurement instrument (MI) in prior research. In this study, we devise an evaluation strategy, and evaluate the validity of the designed measurement artifact. Through cluster analysis of maturity assessments of 92 organizations, we identify characteristic BI maturity scenarios and representative cases for the relevant scenarios. For evaluating the designed instrument, we compare its results with insights obtained from in-depth interviews in the respective companies. A close match between our model's quantitative maturity assessments and the maturity levels from the qualitative analyses indicates that the MI correctly assesses BI maturity. The applied evaluation approach has the potential to be re-used in other design research studies where the validity of utility claims is often hard to prove."} {"_id": "2907cde029f349948680a3690500d4cf09b5be96", "title": "An architecture for scalable, universal speech recognition", "text": "This thesis describes MultiSphinx, a concurrent architecture for scalable, low-latency automatic speech recognition. We first consider the problem of constructing a universal \u201ccore\u201d speech recognizer on top of which domain and task specific adaptation layers can be constructed. We then show that when this problem is restricted to that of expanding the search space from a \u201ccore\u201d vocabulary to a superset of this vocabulary across multiple passes of search, it allows us to effectively \u201cfactor\u201d a recognizer into components of roughly equal complexity. We present simple but effective algorithms for constructing the reduced vocabulary and associated statistical language model from an existing system.
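The face spoofing record above feeds a single diffused frame to a deep convolutional network. A minimal PyTorch sketch of such a two-class (live vs. attack) classifier follows; the layer sizes and 64x64 input are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SpoofNet(nn.Module):
    """Minimal single-frame spoof/live classifier. The layer sizes are
    illustrative assumptions, not the architecture from the paper; the
    input is one (diffused) RGB frame resized to 64x64."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)  # live vs. attack

    def forward(self, x):
        h = self.features(x)          # (N, 64, 8, 8) for 64x64 input
        return self.classifier(h.flatten(1))

model = SpoofNet()
frames = torch.randn(4, 3, 64, 64)   # a batch of diffused frames
logits = model(frames)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()
print(logits.shape)                  # torch.Size([4, 2])
```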
Finally, we describe the MultiSphinx decoder architecture, which allows multiple passes of recognition to operate concurrently and incrementally, either in multiple threads in the same process, or across multiple processes on separate machines, and which allows the best possible partial results, including confidence scores, to be obtained at any time during the recognition process."} {"_id": "86436e9d0c98e7133c7d00d8875bcf0720ad3882", "title": "\u2018SMART\u2019 CANE FOR THE VISUALLY IMPAIRED: DESIGN AND CONTROLLED FIELD TESTING OF AN AFFORDABLE OBSTACLE DETECTION SYSTEM", "text": "(USA) respectively. Since graduation, they have been associated with this project in voluntary and individual capacity. Sandeep Singh Gujral, an occupational therapist, was an intern at the IIT Delhi from August-November 2009 and conducted user training for the trials."} {"_id": "ee548bd6b96cf1d748def335c1517c2deea1b3f5", "title": "Forecasting Nike's sales using Facebook data", "text": "This paper tests whether accurate sales forecasts for Nike are possible from Facebook data and how events related to Nike affect the activity on Nike's Facebook pages. The paper draws from the AIDA sales framework (Awareness, Interest, Desire, and Action) from the domain of marketing and employs the method of social set analysis from the domain of computational social science to model sales from Big Social Data. The dataset consists of (a) a selection of Nike's Facebook pages with the number of likes, comments, posts etc. that have been registered for each page per day and (b) business data in terms of quarterly global sales figures published in Nike's financial reports. An event study is also conducted using the Social Set Visualizer (SoSeVi). The findings suggest that Facebook data does have informational value. Some of the simple regression models have a high forecasting accuracy. The multiple regressions have a lower forecasting accuracy and cause analysis barriers due to data set characteristics such as perfect multicollinearity. The event study found abnormal activity around several Nike-specific events, but whether those activity spikes are purely event-related or coincidental can only be determined after detailed case-by-case text analysis. Our findings help assess the informational value of Big Social Data for a company's marketing strategy, sales operations and supply chain."} {"_id": "dcbebb8fbd3ebef2816ebe0f7da12340a5725a8b", "title": "Wiktionary-Based Word Embeddings", "text": "Vectorial representations of words have grown remarkably popular in natural language processing and machine translation. The recent surge in deep learning-inspired methods for producing distributed representations has been widely noted even outside these fields. Existing representations are typically trained on large monolingual corpora using context-based prediction models. In this paper, we propose extending pre-existing word representations by exploiting Wiktionary. This process results in a substantial extension of the original word vector representations, yielding a large multilingual dictionary of word embeddings. We believe that this resource can enable numerous monolingual and cross-lingual applications, as evidenced in a series of monolingual and cross-lingual semantic experiments that we have conducted."} {"_id": "356f5f4224ee090a11e83a7e3cc130b2fdb0e612", "title": "Concept Based Query Expansion", "text": "Query expansion methods have been studied for a long time - with debatable success in many instances.
In this paper we present a probabilistic query expansion model based on a similarity thesaurus which was constructed automatically. A similarity thesaurus reflects domain knowledge about the particular collection from which it is constructed. We address the two important issues with query expansion: the selection and the weighting of additional search terms. In contrast to earlier methods, our queries are expanded by adding those terms that are most similar to the concept of the query, rather than selecting terms that are similar to the query terms. Our experiments show that this kind of query expansion results in a notable improvement in the retrieval effectiveness when measured using both recall-precision and usefulness."} {"_id": "37b0f219c1f2fbc4b432b24a5fe91dd733f19b7f", "title": "An Association Thesaurus for Information Retrieval", "text": "Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, and thesauri constructed for a variety of collections."} {"_id": "ca56018ed7042d8528b5a7cd8f332c5737b53b1f", "title": "Experiments in Automatic Statistical Thesaurus Construction", "text": "A well constructed thesaurus has long been recognized as a valuable tool in the effective operation of an information retrieval system. This paper reports the results of experiments designed to determine the validity of an approach to the automatic construction of global thesauri (described originally by Crouch in [1] and [2]) based on a clustering of the document collection. The authors validate the approach by showing that the use of thesauri generated by this method results in substantial improvements in retrieval effectiveness in four test collections. The term discrimination value theory, used in the thesaurus generation algorithm to determine a term's membership in a particular thesaurus class, is found not to be useful in distinguishing a \u201cgood\u201d from an \u201cindifferent\u201d or \u201cpoor\u201d thesaurus class. In conclusion, the authors suggest an alternate approach to automatic thesaurus construction which greatly simplifies the work of producing viable thesaurus classes. Experimental results show that the alternate approach described herein in some cases produces thesauri which are comparable in retrieval effectiveness to those produced by the first method at much lower cost."} {"_id": "e50a316f97c9a405aa000d883a633bd5707f1a34", "title": "Term-Weighting Approaches in Automatic Text Retrieval", "text": "The experimental evidence accumulated over the past 20 years indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results that are superior to those obtainable with other more elaborate text representations. These results depend crucially on the choice of effective term-weighting systems.
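The concept-based selection rule in the query expansion record above, scoring candidates against the whole query rather than against individual query terms, can be sketched directly. The similarity-thesaurus matrix `term_vectors` is assumed to be precomputed; this is an illustration, not the authors' system:

```python
import numpy as np

def expand_query(query_terms, vocab, term_vectors, k=5):
    """Concept-based expansion sketch: score candidate terms against the
    query *as a whole* (sum of its term vectors), not term by term.

    `term_vectors` is an assumed precomputed matrix (one row per vocab
    term) playing the role of the similarity thesaurus."""
    idx = {t: i for i, t in enumerate(vocab)}
    concept = term_vectors[[idx[t] for t in query_terms]].sum(axis=0)
    # Cosine similarity of every vocabulary term to the query concept.
    sims = term_vectors @ concept / (
        np.linalg.norm(term_vectors, axis=1) * np.linalg.norm(concept)
        + 1e-12)
    order = np.argsort(-sims)
    expansion = [vocab[i] for i in order if vocab[i] not in query_terms][:k]
    # Weight added terms by their similarity score (weighted expansion).
    return [(t, float(sims[idx[t]])) for t in expansion]

vocab = ["bank", "river", "finance", "money", "water"]
vecs = np.array([[1, 1], [0, 1], [1, 0], [0.9, 0.1], [0.1, 1.0]])
print(expand_query(["bank", "money"], vocab, vecs, k=2))
```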
This article summarizes the insights gained in automatic term weighting, and provides baseline single-term-indexing models with which other more elaborate content analysis procedures can be compared. 1. AUTOMATIC TEXT ANALYSIS. In the late 1950s, Luhn [1] first suggested that automatic text retrieval systems could be designed based on a comparison of content identifiers attached both to the stored texts and to the users\u2019 information queries. Typically, certain words extracted from the texts of documents and queries would be used for content identification; alternatively, the content representations could be chosen manually by trained indexers familiar with the subject areas under consideration and with the contents of the document collections. In either case, the documents would be represented by term vectors of the form D = (ti, tj, ..., tp) (1) where each tk identifies a content term assigned to some sample document D. Analogously, the information requests, or queries, would be represented either in vector form, or in the form of Boolean statements. Thus, a typical query Q might be formulated as Q = (qa, qb, ..., qr) (2)"} {"_id": "fac511cc5079d432c7e010eee2d5a8d67136ecdd", "title": "Build-to-order supply chain management : a literature review and framework for development", "text": "The build-to-order supply chain management (BOSC) strategy has recently attracted the attention of both researchers and practitioners, given its successful implementation in many companies including Dell Computers, Compaq, and BMW. The growing number of articles on BOSC in the literature is an indication of the importance of the strategy and of its role in improving the competitiveness of an organization. The objective of a BOSC strategy is to meet the requirements of individual customers by leveraging the advantages of outsourcing and information technology. There are not many research articles that provide an overview of BOSC, despite the fact that this strategy is being promoted as the operations paradigm of the future. The main objective of this research is to (i) review the concepts of BOSC, (ii) develop definitions of BOSC, (iii) classify the literature based on a suitable classification scheme, leading to some useful insights into BOSC and some future research directions, (iv) review the selected articles on BOSC for their contribution to the development and operations of BOSC, (v) develop a framework for BOSC, and (vi) suggest some future research directions. The literature has been reviewed based on the following four major areas of decision-making: organizational competitiveness, the development and implementation of BOSC, the operations of BOSC, and information technology in BOSC. Some of the important observations are: (a) there is a lack of adequate research on the design and control of BOSC, (b) there is a need for further research on the implementation of BOSC, (c) human resource issues in BOSC have been ignored, (d) issues of product commonality and modularity from the perspective of partnership or supplier development require further attention and (e) the trade-off between responsiveness and the cost of logistics needs further study. The paper ends with concluding remarks. \u00a9 2004 Elsevier B.V. All rights reserved."} {"_id": "6ac15e819701cd0d077d8157711c4c402106722c", "title": "Team MIT Urban Challenge Technical Report", "text": "This technical report describes Team MIT's approach to the DARPA Urban Challenge.
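The term-vector representation D = (ti, tj, ..., tp) in the term-weighting record above pairs naturally with the classic tf-idf weights the article surveys. A minimal sketch (our helper names, standard tf-idf weighting, cosine matching):

```python
import math
from collections import Counter

def build_index(docs):
    """Weighted term vectors D = (t1, ..., tp) using classic tf-idf."""
    N = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    idf = {t: math.log(N / df[t]) for t in df}
    vectors = [{t: c * idf[t] for t, c in Counter(d.split()).items()}
               for d in docs]
    return vectors, idf

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["retrieval of stored texts",
        "automatic text retrieval systems",
        "weighted single terms for indexing"]
vectors, idf = build_index(docs)
# Weight the query with the same idf values as the documents.
q = {t: idf.get(t, 0.0) for t in "automatic retrieval".split()}
best = max(range(len(docs)), key=lambda i: cosine(q, vectors[i]))
print(docs[best])  # "automatic text retrieval systems"
```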
We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprised of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local frame perception strategy, obstacle avoidance using kino-dynamic RRT path planning, U-turns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios."} {"_id": "035e9cd81d5dd3b1621fb14e00b452959daffddf", "title": "Platforms in healthcare innovation ecosystems: The lens of an innovation intermediary", "text": "Healthcare innovation has made progressive strides. Innovative solutions now tend to incorporate device integration, data collection and data analysis linked across a diverse range of actors building platform-centric healthcare ecosystems. The interconnectedness and inter-disciplinarity of the ecosystems bring with them a number of vital issues around how to strategically manage such a complex system.
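Among the components the Team MIT report above names, pure-pursuit lane following has a particularly compact textbook form. A sketch of the standard steering law, not the team's code; the 2.7 m wheelbase in the usage example is an assumed value:

```python
import math

def pure_pursuit_curvature(pose, goal, lookahead):
    """Standard pure-pursuit steering law (textbook form, not Team
    MIT's implementation): chase a goal point at distance `lookahead`
    along the path.

    pose = (x, y, heading); returns curvature kappa = 2*sin(alpha)/L_d,
    where alpha is the bearing to the goal in the vehicle frame."""
    x, y, theta = pose
    alpha = math.atan2(goal[1] - y, goal[0] - x) - theta
    return 2.0 * math.sin(alpha) / lookahead

# Toy usage: vehicle at origin heading +x, goal slightly to the left.
kappa = pure_pursuit_curvature((0.0, 0.0, 0.0), (4.0, 1.0), lookahead=5.0)
steer = math.atan(kappa * 2.7)  # bicycle model, 2.7 m wheelbase (assumed)
print(kappa, steer)
```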
This paper highlights the importance of innovation intermediaries, particularly in a platform-centric ecosystem such as the healthcare industry. It serves as a reminder of why it is important for healthcare technologists to consider proactive ways to contribute to the innovation ecosystem by creating devices with the platform perspective in mind."} {"_id": "f4968d7a9a330edf09f95c45170ad67d2f340fc7", "title": "Clustering Sensors in Wireless Ad Hoc Networks Operating in a Threat Environment", "text": "Sensors in a data fusion environment over hostile territory are geographically dispersed and change location with time. In order to collect and process data from these sensors an equally flexible network of fusion beds (i.e., clusterheads) is required. To account for the hostile environment, we allow communication links between sensors and clusterheads to be unreliable. We develop a mixed integer linear programming (MILP) model to determine the clusterhead location strategy that maximizes the expected data covered minus the clusterhead reassignments, over a time horizon. A column generation (CG) heuristic is developed for this problem. Computational results show that CG performs much faster than a standard commercial solver and the typical optimality gap for large problems is less than 5%. Improvements to the basic model in the areas of modeling link failure, consideration of bandwidth capacity, and clusterhead changeover cost estimation are also discussed."} {"_id": "e5a837696c7a5d329a6a540a0d2de912a010ac47", "title": "Tableau Methods for Modal and Temporal Logics", "text": "This document is a complete draft of a chapter by Rajeev Gor\u00e9 on \u201cTableau Methods for Modal and Temporal Logics\u201d, which is part of the \u201cHandbook of Tableau Methods\u201d, edited"} {"_id": "f0fcb064acefae20d0a7293a868c1a9822f388bb", "title": "General Spectral Camera Lens Simulation", "text": "We present a camera lens simulation model capable of producing advanced photographic phenomena in a general spectral Monte Carlo image rendering system. Our approach incorporates insights from geometrical diffraction theory, from optical engineering and from glass science. We show how to efficiently simulate all five monochromatic aberrations: spherical and coma aberration, astigmatism, field curvature and distortion. We also consider chromatic aberration, lateral colour and aperture diffraction. The inclusion of Fresnel reflection generates correct lens flares and we present an optimized sampling method for path generation."} {"_id": "412b93429b7b7b307cc64702cdfa91630210a52e", "title": "An optimal technology mapping algorithm for delay optimization in lookup-table based FPGA designs", "text": "In this paper we present a polynomial time technology mapping algorithm, called Flow-Map, that optimally solves the LUT-based FPGA technology mapping problem for depth minimization for general Boolean networks. This theoretical breakthrough makes a sharp contrast with the fact that the conventional technology mapping problem in library-based designs is NP-hard. A key step in Flow-Map is to compute a minimum height K-feasible cut in a network, solved by network flow computation. Our algorithm also effectively minimizes the number of LUTs by maximizing the volume of each cut and by several postprocessing operations.
We tested the Flow-Map algorithm on a set of benchmarks and achieved reductions in both the network depth and the number of LUTs in mapping solutions as compared with previous algorithms."} {"_id": "10727ad1dacea19dc813e50b014ca08e27dcd065", "title": "A gender-specific behavioral analysis of mobile device usage data", "text": "Mobile devices provide a continuous data stream of contextual behavioral information which can be leveraged for a variety of user services, such as in personalizing ads and customizing home screens. In this paper, we aim to better understand gender-related behavioral patterns found in application, Bluetooth, and Wi-Fi usage. Using a dataset which consists of approximately 19 months of data collected from 189 subjects, gender classification is performed using 1,000 features related to the frequency of events yielding up to 91.8% accuracy. Then, we present a behavioral analysis of application traffic using techniques commonly used for web browsing activity as an alternative data exploration approach. Finally, we conclude with a discussion on impersonation attacks, where we aim to determine if one gender is less vulnerable to unauthorized access on their mobile device."} {"_id": "e275f643c97ca1f4c7715635bb72cf02df928d06", "title": "From Databases to Big Data", "text": ""} {"_id": "5f85b28af230828675997aea48e939576996e16e", "title": "Educational game design for online education", "text": "The use of educational games in learning environments is an increasingly relevant trend. The motivational and immersive traits of game-based learning have been deeply studied in the literature, but the systematic design and implementation of educational games remain an elusive topic. In this study some relevant requirements for the design of educational games in online education are analyzed, and a general game design method that includes adaptation and assessment features is proposed. Finally, a particular implementation of that design is described in light of its applicability to other implementations and environments. \u00a9 2008 Elsevier Ltd. All rights reserved."} {"_id": "328bfd1d0229bc4973277f893abd1eb288159fc9", "title": "A review of the literature on the aging adult skull and face: implications for forensic science research and applications.", "text": "This paper is a summary of findings of adult age-related craniofacial morphological changes. Our aims are two-fold: (1) through a review of the literature we address the factors influencing craniofacial aging, and (2) the general ways in which a head and face age in adulthood. We present findings on environmental and innate influences on face aging, facial soft tissue age changes, and bony changes in the craniofacial and dentoalveolar skeleton. We then briefly address the relevance of this information to forensic science research and applications, such as the development of computer facial age-progression and face recognition technologies, and contributions to forensic sketch artistry."} {"_id": "7ecaca8db190608dc4482999e19b1593cc6ad4e5", "title": "Feature-based survey of model transformation approaches", "text": "[Figure 5. Features of the body of a domain: (A) patterns and (B) logic.] ... operating on one model from the parts operating on other models.
For example, classical rewrite rules have an LHS operating on the source model and an RHS operating on the target model. In other approaches, such as a rule implemented as a Java program, there might not be any such syntactic distinction. Multidirectionality: Multidirectionality refers to the ability to execute a rule in different directions (see Figure 4A). Rules supporting multidirectionality are usually defined over in/out-domains. Multidirectional rules are available in MTF and QVT Relations. Application condition: Transformation rules in some approaches may have an application condition (see Figure 4A) that must be true in order for the rule to be executed. An example is the when-clause in QVT Relations (Example 1). Intermediate structure: The execution of a rule may require the creation of some additional intermediate structures (see Figure 4A) which are not part of the models being transformed. These structures are often temporary and require their own metamodel. A particular example of intermediate structures is traceability links. In contrast to other intermediate structures, traceability links are usually persisted. Even if traceability links are not persisted, some approaches, such as AGG and VIATRA, rely on them to prevent multiple \u201cfirings\u201d of a rule for the same input element. Parameterization: The simplest kind of parameterization is the use of control parameters that allow passing values as control flags (Figure 7). Control parameters are useful for implementing policies. For example, a transformation from class models to relational schemas could have a control parameter specifying which of the alternative patterns of object-relational mapping should be used in a given execution [7]. Generics allow passing data types, including model element types, as parameters. Generics can help make transformation rules more reusable. Generic transformations have been described by Varr\u00f3 and Pataricza [17]. Finally, higher-order rules take other rules as parameters and may provide even higher levels of reuse and abstraction. Stratego [64] is an example of a term rewriting language for program transformation supporting higher-order rules. We are currently not aware of any model transformation approaches with first-class support for higher-order rules. Reflection and aspects: Some authors advocate the support for reflection and aspects (Figure 4) in transformation languages. Reflection is supported in ATL by allowing reflective access to transformation rules during the execution of transformations. An aspect-oriented extension of MTL was proposed by Silaghi et al. [65]. Reflection and aspects can be used to express concerns that crosscut several rules, such as custom traceability management policies [66]. Rule application control: Location determination. A rule needs to be applied to a specific location within its source scope. As there may be more than one match for a rule within a given source scope, we need a strategy for determining the application locations (Figure 8A). The strategy could be deterministic, nondeterministic, or interactive. For example, a deterministic strategy could exploit some standard traversal strategy (such as depth first) over the containment hierarchy in the source. Stratego [64] is an example of a term rewriting language with a rich mechanism for expressing traversal in tree structures.
Examples of nondeterministic strategies include one-point application, where a rule is applied to one nondeterministically selected location, and concurrent application, where one rule is \u2026 [Figure 6. Typing: untyped, syntactically typed, semantically typed. Figure 7. Parameterization: control parameters, generics, higher-order rules. Figure 8. Model transformation approach features: (A) location determination, (B) rule scheduling, (C) rule organization, (D) source-target relationship, (E) incrementality, (F) directionality, and (G) tracing.]"} {"_id": "4f0a21f94152f68f102d90c6f63f2eb8638eacc6", "title": "Robotic versus Open Partial Nephrectomy: A Systematic Review and Meta-Analysis", "text": "OBJECTIVES\nTo critically review the currently available evidence of studies comparing robotic partial nephrectomy (RPN) and open partial nephrectomy (OPN).\n\n\nMATERIALS AND METHODS\nA comprehensive review of the literature from Pubmed, Web of Science and Scopus was performed in October 2013. All relevant studies comparing RPN with OPN were included for further screening. A cumulative meta-analysis of all comparative studies was performed and publication bias was assessed by a funnel plot.\n\n\nRESULTS\nEight studies were included for the analysis, including a total of 3418 patients (757 patients in the robotic group and 2661 patients in the open group). Although RPN procedures had a longer operative time (weighted mean difference [WMD]: 40.89; 95% confidence interval [CI], 14.39-67.40; p = 0.002), patients in this group benefited from a lower perioperative complication rate (19.3% for RPN and 29.5% for OPN; odds ratio [OR]: 0.53; 95% CI, 0.42-0.67; p<0.00001), shorter hospital stay (WMD: -2.78; 95% CI, -3.36 to -1.92; p<0.00001), and less estimated blood loss (WMD: -106.83; 95% CI, -176.4 to -37.27; p = 0.003). Transfusions, conversion to radical nephrectomy, ischemia time and estimated GFR change, margin status, and overall cost were comparable between the two techniques. The main limitation of the present meta-analysis is the non-randomization of all included studies.\n\n\nCONCLUSIONS\nRPN appears to be an efficient alternative to OPN with the advantages of a lower rate of perioperative complications, shorter length of hospital stay and less blood loss. Nevertheless, high quality prospective randomized studies with longer follow-up period are needed to confirm these findings."} {"_id": "d464735cc3f8cb515a96f1ac346d42e8d7a28671", "title": "Black-Box Calibration for ADCs With Hard Nonlinear Errors Using a Novel INL-Based Additive Code: A Pipeline ADC Case Study", "text": "This paper presents a digital nonlinearity calibration technique for ADCs with strong input\u2013output discontinuities between adjacent codes, such as pipeline, algorithmic, and SAR ADCs with redundancy. In this kind of converter, the ADC transfer function often involves multivalued regions, where conventional integral-nonlinearity (INL)-based calibration methods tend to miscalibrate, negatively affecting the ADC\u2019s performance. As a solution to this problem, this paper proposes a novel INL-based calibration which incorporates information from the ADC\u2019s internal signals to provide a robust estimation of static nonlinear errors for multivalued ADCs.
The method is fully generalizable and can be applied to any existing design as long as there is access to internal digital signals. In pipeline or subranging ADCs, this implies access to partial subcodes before digital correction; for algorithmic or SAR ADCs, conversion bit/bits per cycle are used. As a proof-of-concept demonstrator, the experimental results for a 1.2 V 23 mW 130 nm-CMOS pipeline ADC with a SINAD of 58.4 dBc (in nominal conditions without calibration) are considered. In a stressed situation with 0.95 V of supply, the ADC has SINAD values of 47.8 dBc and 56.1 dBc, respectively, before and after calibration (total power consumption, including the calibration logic, being 15.4 mW)."} {"_id": "b1be4ea2ce9edcdcb4455643d0d21094f4f0a772", "title": "WebSelF: A Web Scraping Framework", "text": "We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We conducted an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping and our experimental results indicate that composition of previous and our new techniques achieve a higher degree of accuracy, precision and specificity than existing techniques."} {"_id": "a7e2814ec5db800d2f8c4313fd436e9cf8273821", "title": "KNN Model-Based Approach in Classification", "text": "The k-Nearest-Neighbours (kNN) is a simple but effective method for classification. The major drawbacks with respect to kNN are (1) its low efficiency as a lazy learning method, which prohibits its use in many applications such as dynamic web mining for a large repository, and (2) its dependency on the selection of a \u201cgood value\u201d for k. In this paper, we propose a novel kNN type method for classification that is aimed at overcoming these shortcomings. Our method constructs a kNN model for the data, which replaces the data to serve as the basis of classification. The value of k is automatically determined, is varied for different data, and is optimal in terms of classification accuracy. The construction of the model reduces the dependency on k and makes classification faster. Experiments were carried out on some public datasets collected from the UCI machine learning repository in order to test our method. The experimental results show that the kNN based model compares well with C5.0 and kNN in terms of classification accuracy, but is more efficient than the standard kNN."} {"_id": "0fafcc5fc916ac93e73f3a708f4023720bbe2faf", "title": "A new retexturing method for virtual fitting room using Kinect 2 camera", "text": "This research work proposes a new method for garment retexturing using a single static image along with depth information obtained using the Microsoft Kinect 2 camera. First, the garment is segmented out from the image and texture domain coordinates are computed for each pixel of the shirt using 3D information. After that, shading is applied to the new colours from the texture image by applying linear stretching of the luminance of the segmented garment. The proposed method is colour and pattern invariant and results in visually realistic retexturing. 
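The shading step just described, i.e. modulating the new texture by a linearly stretched luminance of the segmented garment, reduces to a few array operations; a rough sketch under that reading (array shapes and constants are illustrative, not taken from the paper):

```python
# Rough sketch of retexturing by luminance transfer: recover shading from
# the garment's luminance, linearly stretch it, and modulate the new
# texture with it. Weights are Rec. 601 luma; everything else illustrative.
import numpy as np

def retexture(garment_rgb: np.ndarray, texture_rgb: np.ndarray) -> np.ndarray:
    lum = garment_rgb @ np.array([0.299, 0.587, 0.114])         # per-pixel luminance
    shade = (lum - lum.min()) / (lum.max() - lum.min() + 1e-8)  # linear stretch
    return np.clip(texture_rgb * shade[..., None], 0.0, 1.0)

garment = np.random.rand(8, 8, 3)   # segmented garment pixels (toy data)
texture = np.random.rand(8, 8, 3)   # new texture mapped to the same pixels
print(retexture(garment, texture).shape)  # -> (8, 8, 3)
```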
The proposed method has been tested on various images and it is shown that it generally performs better and produces more realistic results compared to the state-of-the-art methods. The proposed method can be applied in a virtual fitting room."} {"_id": "b323c4d8f284dd27b9bc8c8be5bee3cd30e2c8ca", "title": "Classification of Various Neighborhood Operations for the Nurse Scheduling Problem", "text": "Since the nurse scheduling problem (NSP) is a problem of finding a feasible solution, the solution space must include infeasible solutions to solve it using a local search algorithm. However, the solution space consisting of all the solutions is so large that the search requires much CPU time. In the NSP, some constraints have higher priority. Thus, we can define the solution space to be the set of solutions satisfying some of the important constraints, which are called the elementary constraints. The connectivity of the solution space is also important for the performance. However, the connectivity is not obvious when the solution space consists only of solutions satisfying the elementary constraints and is composed of small neighborhoods. This paper gives theoretical support for using 4-opt-type neighborhood operations by discussing the connectivity of its solution space and the size of the neighborhood. Another interesting point in our model is that a special case of the NSP corresponds to the bipartite transportation problem, and our result also applies to it."} {"_id": "00dbf46a7a4ba6222ac5d44c1a8c09f261e5693c", "title": "Efficient Sparse Matrix-Vector Multiplication on CUDA", "text": "The massive parallelism of graphics processing units (GPUs) offers tremendous performance in many high-performance computing applications. While dense linear algebra readily maps to such platforms, harnessing this potential for sparse matrix computations presents additional challenges. Given its role in iterative methods for solving sparse linear systems and eigenvalue problems, sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In this paper we discuss data structures and algorithms for SpMV that are efficiently implemented on the CUDA platform for the fine-grained parallel architecture of the GPU. Given the memory-bound nature of SpMV, we emphasize memory bandwidth efficiency and compact storage formats. We consider a broad spectrum of sparse matrices, from those that are well-structured and regular to highly irregular matrices with large imbalances in the distribution of nonzeros per matrix row. We develop methods to exploit several common forms of matrix structure while offering alternatives which accommodate greater irregularity. On structured, grid-based matrices we achieve performance of 36 GFLOP/s in single precision and 16 GFLOP/s in double precision on a GeForce GTX 280 GPU. For unstructured finite-element matrices, we observe performance in excess of 15 GFLOP/s and 10 GFLOP/s in single and double precision respectively. These results compare favorably to prior state-of-the-art studies of SpMV methods on conventional multicore processors. 
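The kernel at the heart of that work is easiest to see in scalar form; a plain-Python CSR reference implementation (the CUDA versions parallelize this row loop, e.g. one thread or warp per row, and optimize its memory access pattern):

```python
# Reference CSR sparse matrix-vector multiply y = A @ x. GPU kernels such
# as those discussed above keep the same arithmetic but focus on coalesced
# access to values/col_idx and on load balance across irregular rows.
def spmv_csr(row_ptr, col_idx, values, x):
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

# The 2x3 matrix [[10, 0, 2], [0, 3, 0]] in CSR form:
print(spmv_csr([0, 2, 3], [0, 2, 1], [10.0, 2.0, 3.0], [1.0, 1.0, 1.0]))
# -> [12.0, 3.0]
```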
Our double precision SpMV performance is generally two and a half times that of a Cell BE with 8 SPEs and more than ten times greater than that of a quad-core Intel Clovertown system."} {"_id": "0cab85c646bc4572101044cb22d944e3685732b5", "title": "High-speed VLSI architectures for the AES algorithm", "text": "This paper presents novel high-speed architectures for the hardware implementation of the Advanced Encryption Standard (AES) algorithm. Unlike previous works which rely on look-up tables to implement the SubBytes and InvSubBytes transformations of the AES algorithm, the proposed design employs combinational logic only. As a direct consequence, the unbreakable delay incurred by look-up tables in the conventional approaches is eliminated, and the advantage of subpipelining can be further explored. Furthermore, composite field arithmetic is employed to reduce the area requirements, and different implementations for the inversion in subfield GF(2/sup 4/) are compared. In addition, an efficient key expansion architecture suitable for the subpipelined round units is also presented. Using the proposed architecture, a fully subpipelined encryptor with 7 substages in each round unit can achieve a throughput of 21.56 Gbps on a Xilinx XCV1000 e-8 bg560 device in non-feedback modes, which is faster and is 79% more efficient in terms of equivalent throughput/slice than the fastest previous FPGA implementation known to date."} {"_id": "29e65e682764c6dcc33cdefd8150521893fc2c94", "title": "Improving Real-Time Captioning Experiences for Deaf and Hard of Hearing Students", "text": "We take a qualitative approach to understanding deaf and hard of hearing (DHH) students' experiences with real-time captioning as an access technology in mainstream university classrooms. We consider both existing human-based captioning as well as new machine-based solutions that use automatic speech recognition (ASR). We employed a variety of qualitative research methods to gather data about students' captioning experiences including in-class observations, interviews, diary studies, and usability evaluations. We also conducted a co-design workshop with 8 stakeholders after our initial research findings. Our results show that accuracy and reliability of the technology are still the most important issues across captioning solutions. However, we additionally found that current captioning solutions tend to limit students' autonomy in the classroom and present a variety of user experience shortcomings, such as complex setups, poor feedback and limited control over caption presentation. Based on these findings, we propose design requirements and recommend features for real-time captioning in mainstream classrooms."} {"_id": "704885bae7e9c5a37528be854b8bd2f24d1e641c", "title": "An Empirical Examination of the Effects of Web Personalization at Different Stages of Decision-Making", "text": "Personalization agents are incorporated in many websites to tailor content and interfaces for individual users. But in contrast to the proliferation of personalized web services worldwide, empirical research studying the effects of web personalization is scant. How does exposure to personalized offers affect subsequent product consideration and choice outcome? Drawing on the literature in HCI and consumer behavior, this research examines the effects of web personalization on users\u2019 information processing and expectations through different decision-making stages. A study using a personalized ring-tone download website was conducted. 
Our findings provide empirical evidence of the effects of web personalization. In particular, when consumers are forming their consideration sets, the agents have the capacity to help them discover new products and generate demand for unfamiliar products. Once the users have arrived at their choice, however, the persuasive effects from the personalization agent diminish. These results establish the adaptive role of personalization agents at different stages of decision-making."} {"_id": "6e91e18f89027bb01c791ce54b2f7a1fc8d8ce9c", "title": "Pacific kids DASH for health (PacDASH) randomized, controlled trial with DASH eating plan plus physical activity improves fruit and vegetable intake and diastolic blood pressure in children.", "text": "BACKGROUND\nPacific Kids DASH for Health (PacDASH) aimed to improve child diet and physical activity (PA) level and prevent excess weight gain and elevation in blood pressure (BP) at 9 months.\n\n\nMETHODS\nPacDASH was a two-arm, randomized, controlled trial (ClinicalTrials.gov: NCT00905411). Eighty-five 5- to 8-year-olds in the 50th-99th percentile for BMI were randomly assigned to treatment (n=41) or control (n=44) groups; 62 completed the 9-month trial. Sixty-two percent were female. Mean age was 7.1\u00b10.95 years. Race/ethnicity was Asian (44%), Native Hawaiian or Other Pacific Islander (28%), white (21%), or other race/ethnicity (7%). Intervention was provided at baseline and 3, 6 and 9 months, with monthly supportive mailings between intervention visits, and a follow-up visit at 15 months to observe maintenance. Diet and PA were assessed by 2-day log. Body size, composition, and BP were measured. The intervention effect on diet and PA, body size and composition, and BP by the end of the intervention was tested using an F test from a mixed regression model, after adjustment for sex, age, and ethnic group.\n\n\nRESULTS\nFruit and vegetable (FV) intake decreased less in the treatment than control group (p=0.04). Diastolic BP (DBP) was 12 percentile units lower in the treatment than control group after 9 months of intervention (p=0.01). There were no group differences in systolic BP (SBP) or body size/composition.\n\n\nCONCLUSIONS\nThe PacDASH trial enhanced FV intake and DBP, but not SBP or body size/composition."} {"_id": "18d5fe1bd87db26b7037d81f4170cf4ebe320e00", "title": "Efficient Time-Domain Image Formation with Precise Topography Accommodation for General Bistatic SAR Configurations", "text": "Due to the lack of an appropriate symmetry in the acquisition geometry, general bistatic synthetic aperture radar (SAR) cannot benefit from the two main properties of low-to-moderate resolution monostatic SAR: azimuth-invariance and topography-insensitivity. The precise accommodation of azimuth-variance and topography is a real challenge for efficient image formation algorithms working in the Fourier domain, but can be quite naturally handled by time-domain approaches. We present an efficient and practical implementation of a generalised bistatic SAR image formation algorithm with an accurate accommodation of these two effects. The algorithm has a common structure with the monostatic fast-factorised backprojection (FFBP), and is therefore based on subaperture processing. The images computed over the different subapertures are displayed in an advantageous elliptical coordinate system capable of incorporating the topographic information of the imaged scene in an analogous manner to topography-dependent monostatic SAR algorithms. 
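The elliptical coordinate system mentioned above follows directly from the bistatic range equation; in standard notation, with $\mathbf{p}_T$ and $\mathbf{p}_R$ the transmitter and receiver positions:

```latex
% Bistatic range of a scene point x: the sum of its distances to the
% transmitter and receiver. Iso-range surfaces are therefore ellipsoids
% with foci p_T and p_R, which is what makes elliptical coordinates a
% natural display grid for the subaperture images.
r_b(\mathbf{x}) = \lVert \mathbf{x} - \mathbf{p}_T \rVert
                + \lVert \mathbf{x} - \mathbf{p}_R \rVert
```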
Analytical expressions for the Nyquist requirements using this coordinate system are derived. The overall discussion includes practical implementation hints and a realistic computational burden estimation. The algorithm is tested with both simulated and actual bistatic SAR data. The actual data correspond to the spaceborne-airborne experiment between TerraSAR-X and F-SAR performed in 2007 and to the DLR-ONERA airborne experiment carried out in 2003. The presented approach proves its suitability for the precise SAR focussing of the data acquired in general bistatic configurations."} {"_id": "2f585530aea2968d799a9109aeb202b03ba977f4", "title": "Machine Comprehension Based on Learning to Rank", "text": "Machine comprehension plays an essential role in NLP and has been widely explored with datasets like MCTest. However, this dataset is too simple and too small for learning true reasoning abilities. Hermann et al. (2015) therefore released a large-scale news article dataset and proposed a deep LSTM reader system for machine comprehension. However, the training process is expensive. We therefore try a feature-engineered approach with semantics on the new dataset to see how traditional machine learning techniques and semantics can help with machine comprehension. Meanwhile, our proposed L2R reader system achieves good performance with efficiency and less training data."} {"_id": "5dd2b359fa6a0f7ebe3ca123df01b852bef08421", "title": "The challenges of building mobile underwater wireless networks for aquatic applications", "text": "The large-scale mobile underwater wireless sensor network (UWSN) is a novel networking paradigm to explore aqueous environments. However, the characteristics of mobile UWSNs, such as low communication bandwidth, large propagation delay, floating node mobility, and high error probability, are significantly different from ground-based wireless sensor networks. The novel networking paradigm poses interdisciplinary challenges that will require new technological solutions. In particular, in this article we adopt a top-down approach to explore the research challenges in mobile UWSN design. Along the layered protocol stack, we proceed roughly from the top application layer to the bottom physical layer. At each layer, a set of new design intricacies is studied. The conclusion is that building scalable mobile UWSNs is a challenge that must be answered by interdisciplinary efforts of acoustic communications, signal processing, and mobile acoustic network protocol design."} {"_id": "0435ef56d5ba8461021cf9e5d811a014ed4e98ac", "title": "A transmission control scheme for media access in sensor networks", "text": "We study the problem of media access control in the novel regime of sensor networks, where unique application behavior and tight constraints in computation power, storage, energy resources, and radio technology have shaped this design space to be very different from that found in the traditional mobile computing regime. Media access control in sensor networks must not only be energy efficient but should also allow fair bandwidth allocation to the infrastructure for all nodes in a multihop network. 
We propose an adaptive rate control mechanism aiming to support these two goals and find that such a scheme is most effective in achieving our fairness goal while being energy efficient for both low and high duty cycles of network traffic."} {"_id": "11897106a75d5ba3bf5fb4677943bd6f33daadf7", "title": "Challenges for efficient communication in underwater acoustic sensor networks", "text": "Ocean bottom sensor nodes can be used for oceanographic data collection, pollution monitoring, offshore exploration and tactical surveillance applications. Moreover, Unmanned or Autonomous Underwater Vehicles (UUVs, AUVs), equipped with sensors, will find application in exploration of natural undersea resources and gathering of scientific data in collaborative monitoring missions. Underwater acoustic networking is the enabling technology for these applications. Underwater Networks consist of a variable number of sensors and vehicles that are deployed to perform collaborative monitoring tasks over a given area. In this paper, several fundamental key aspects of underwater acoustic communications are investigated. Different architectures for two-dimensional and three-dimensional underwater sensor networks are discussed, and the underwater channel is characterized. The main challenges for the development of efficient networking solutions posed by the underwater environment are detailed at all layers of the protocol stack. Furthermore, open research issues are discussed and possible solution approaches are outlined."} {"_id": "1e7e4d2b2be08c01f1347dee53e99473d2a9e83d", "title": "The WHOI micro-modem: an acoustic communications and navigation system for multiple platforms", "text": "The micro-modem is a compact, low-power, underwater acoustic communications and navigation subsystem. It has the capability to perform low-rate frequency-hopping frequency-shift keying (FH-FSK), variable rate phase-coherent keying (PSK), and two different types of long base line navigation, narrow-band and broadband. The system can be configured to transmit in four different bands from 3 to 30 kHz, with a larger board required for the lowest frequency. The user interface is based on the NMEA standard, which is a serial port specification. The modem also includes a simple built-in networking capability which supports up to 16 units in a polled or random-access mode and has an acknowledgement capability which supports guaranteed delivery transactions. The paper contains a detailed system description, and results from several tests are also presented."} {"_id": "93074961701ca6983a6c0fdc2964d11e282b8bae", "title": "The state of the art in underwater acoustic telemetry", "text": "Progress in underwater acoustic telemetry since 1982 is reviewed within a framework of six current research areas: (1) underwater channel physics, channel simulations, and measurements; (2) receiver structures; (3) diversity exploitation; (4) error control coding; (5) networked systems; and (6) alternative modulation strategies. Advances in each of these areas as well as perspectives on the future challenges facing them are presented. 
A primary thesis of this paper is that increased integration of high-fidelity channel models into ongoing underwater telemetry research is needed if the performance envelope (defined in terms of range, rate, and channel complexity) of underwater modems is to expand."} {"_id": "5e296b53252db11667011b55e78bd4a67cd3e547", "title": "Performance of Store Brands: A Cross-Country Analysis of Consumer Store Brand Preferences, Perceptions, and Risk", "text": "This paper empirically studies consumer choice behavior in regard to store brands in the US, UK and Spain. Store brand market shares differ by country and they are usually much higher in Europe than in the US. However, there is surprisingly little work in marketing that empirically studies the reasons that underlie higher market shares associated with store brands in Europe over the US. In this paper, we empirically study the notion that the differential success of store brands in the US versus in Europe is due to the higher brand equity that store brands command in Europe over the US. We use a framework based on previous work to conduct our analysis: consumer brand choice under uncertainty, and brands as signals of product positions. More specifically, we examine whether uncertainty about quality (or, the positioning of the brand in the product space), perceived quality of store brands versus national brands, consistency in store brand offerings over time, as well as consumer attitudes towards risk, quality and price underlie the differential success of store brands at least partially in the US versus Europe. We propose and estimate a model that explicitly incorporates the impact of uncertainty on consumer behavior. We compare 1) levels of uncertainty associated with store brands versus national brands, 2) consistency in product positions over time for both national and store brands, 3) relative quality levels of national versus store brands, and 4) consumer sensitivity to price, quality and risk across the three countries we study. The model is estimated on scanner panel data on detergent in the US, UK and Spanish markets, and on toilet paper and margarine in the US and Spain. We find that consumer learning and perceived risk (and the associated brand equity), as well as consumer attitude towards risk, quality and price, play an important role in consumers\u2019 store versus national brand choices and contribute to the differences in relative success of store brands across the countries we study."} {"_id": "f5dbb5a0aeb5823d704077b372d0dd989d4ebe1c", "title": "Approximate queuing analysis for IEEE 802.15.4 sensor network", "text": "Wireless sensor networks (WSNs) have attracted much attention in recent years for their unique characteristics and wide use in many different applications. In military networks especially, all sensor motes are deployed randomly and densely. How can we optimize the number of deployed nodes (sensor nodes and sinks) with a QoS guarantee (minimal end-to-end delay and packet drop)? In this paper, using the M/M/1 queuing model we propose a deployment optimization model for non-beacon-mode 802.15.4 sensor networks. 
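As a worked illustration of the M/M/1 model underlying such a deployment analysis, the mean sojourn time per queue is $W = 1/(\mu - \lambda)$; a quick numeric check with invented rates:

```python
# M/M/1 sojourn time W = 1/(mu - lambda) per node, summed over relay hops
# to approximate end-to-end delay. The rates and hop count below are
# invented for illustration; they are not taken from the paper.
def mm1_sojourn(arrival_rate: float, service_rate: float) -> float:
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    return 1.0 / (service_rate - arrival_rate)

lam, mu, hops = 40.0, 50.0, 3                  # pkt/s, pkt/s, relay hops
delay = hops * mm1_sojourn(lam, mu)            # identical queues assumed
print(f"end-to-end delay ~ {delay * 1e3:.0f} ms")   # -> ~300 ms
```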
The simulation results show that the proposed model is a promising approach for deploying the sensor network."} {"_id": "1fbb66a9407470e1da332c4ef69cdc34e169a3d7", "title": "A Baseline for General Music Object Detection with Deep Learning", "text": "Deep learning is bringing breakthroughs to many computer vision subfields including Optical Music Recognition (OMR), which has seen a series of improvements to musical symbol detection achieved by using generic deep learning models. However, so far, each such proposal has been based on a specific dataset and different evaluation criteria, which made it difficult to quantify the new deep learning-based state-of-the-art and assess the relative merits of these detection models on music scores. In this paper, a baseline for general detection of musical symbols with deep learning is presented. We consider three datasets of heterogeneous typology but with the same annotation format, three neural models of different nature, and establish their performance in terms of a common evaluation standard. The experimental results confirm that direct music object detection with deep learning is indeed promising, but at the same time illustrate some of the domain-specific shortcomings of the general detectors. A qualitative comparison then suggests avenues for OMR improvement, based both on properties of the detection model and how the datasets are defined. To the best of our knowledge, this is the first time that competing music object detection systems from the machine learning paradigm are directly compared to each other. We hope that this work will serve as a reference to measure the progress of future developments of OMR in music object detection."} {"_id": "4b07c5cda3dab4e48bd39bee91c5eba2642037b2", "title": "System for the Measurement of Cathodic Currents in Electrorefining Processes That Employ Multicircuital Technology", "text": "This paper presents a measurement system of cathodic currents for copper electrorefining processes using multicircuital technology with optibar intercell bars. The proposed system is based on current estimation using 55 magnetic field sensors per intercell bar. Current values are sampled and stored every 5 min for seven days in a compatible SQL database. The method does not affect the normal operation of the process and does not require any structural modifications. The system for online measurement of 40 cells involving 2090 sensors is in operation in an electrorefinery site."} {"_id": "a2c1cf2de9e822c2d662d520083832853ac39f9d", "title": "An Inductive 2-D Position Detection IC With 99.8% Accuracy for Automotive EMR Gear Control System", "text": "In this paper, the analog front end (AFE) for an inductive position sensor in automotive electromagnetic resonance (EMR) gear control applications is presented. To improve the position detection accuracy, a coil driver with an automatic two-step impedance calibration is proposed which, despite the load variation, provides the desired driving capability by controlling the main driver size. Also, a time-shared analog-to-digital converter (ADC) is proposed to convert eight-phase signals while reducing the current consumption and area to 1/8 of the conventional structure. A relaxation oscillator with temperature compensation is proposed to generate a constant clock frequency in vehicle temperature conditions. This chip is fabricated using a 0.18-$\mu \text{m}$ CMOS process and the die area is 2 mm $\times$ 1.5 mm. 
The power consumption of the AFE is 23.1 mW from the supply voltage of 3.3 V to drive one transmitter (Tx) coil and eight receiver (Rx) coils. The measured position detection accuracy is greater than 99.8%. The measurement of the Tx shows a driving capability higher than 35 mA with respect to the load change."} {"_id": "2cfac4e999538728a8e11e2b7784433e80ca6a38", "title": "Transcriptional regulatory networks in Saccharomyces cerevisiae.", "text": "We have determined how most of the transcriptional regulators encoded in the eukaryote Saccharomyces cerevisiae associate with genes across the genome in living cells. Just as maps of metabolic networks describe the potential pathways that may be used by a cell to accomplish metabolic processes, this network of regulator-gene interactions describes potential pathways yeast cells can use to regulate global gene expression programs. We use this information to identify network motifs, the simplest units of network architecture, and demonstrate that an automated process can use motifs to assemble a transcriptional regulatory network structure. Our results reveal that eukaryotic cellular functions are highly connected through networks of transcriptional regulators that regulate other transcriptional regulators."} {"_id": "526ece284aacc3ab8e3d4e839a9512dbbd27867b", "title": "Online crowdsourcing: Rating annotators and obtaining cost-effective labels", "text": "Labeling large datasets has become faster, cheaper, and easier with the advent of crowdsourcing services like Amazon Mechanical Turk. How can one trust the labels obtained from such services? We propose a model of the labeling process which includes label uncertainty, as well as a multi-dimensional measure of the annotators' ability. From the model we derive an online algorithm that estimates the most likely value of the labels and the annotator abilities. It finds and prioritizes experts when requesting labels, and actively excludes unreliable annotators. Based on labels already obtained, it dynamically chooses which images will be labeled next, and how many labels to request in order to achieve a desired level of confidence. Our algorithm is general and can handle binary, multi-valued, and continuous annotations (e.g. bounding boxes). Experiments on a dataset containing more than 50,000 labels show that our algorithm reduces the number of labels required, and thus the total cost of labeling, by a large factor while keeping error rates low on a variety of datasets."} {"_id": "55b4b9b303da1b0e8f938a80636ec95f7af235fd", "title": "Neural Question Answering at BioASQ 5B", "text": "This paper describes our submission to the 2017 BioASQ challenge. We participated in Task B, Phase B, which is concerned with biomedical question answering (QA). We focus on factoid and list questions, using an extractive QA model, that is, we restrict our system to output substrings of the provided text snippets. At the core of our system, we use FastQA, a state-of-the-art neural QA system. We extended it with biomedical word embeddings and changed its answer layer to be able to answer list questions in addition to factoid questions. We pre-trained the model on a large-scale open-domain QA dataset, SQuAD, and then fine-tuned the parameters on the BioASQ training set. 
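The two-stage procedure just described (pre-train on SQuAD, then fine-tune on BioASQ) has a simple shape; the sketch below uses hypothetical stand-ins for the model and data loaders, with only the staging order taken from the text:

```python
# Shape of the SQuAD-pretrain / BioASQ-finetune procedure. FastQAModel,
# load_squad and load_bioasq are hypothetical stand-ins, not the authors'
# actual code; only the two-stage training order reflects the description.
class FastQAModel:                      # hypothetical extractive QA model
    def __init__(self, embeddings: str) -> None:
        self.embeddings = embeddings
    def step(self, batch) -> None:      # hypothetical gradient step
        pass

def load_squad():  return [{"question": "...", "answer_span": "..."}]
def load_bioasq(): return [{"question": "...", "answer_span": "..."}]

def train(model, dataset, epochs: int):
    for _ in range(epochs):
        for batch in dataset:
            model.step(batch)
    return model

model = FastQAModel(embeddings="biomedical")
model = train(model, load_squad(), epochs=5)    # stage 1: open-domain QA
model = train(model, load_bioasq(), epochs=3)   # stage 2: biomedical QA
```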
With our approach, we achieve state-of-the-art results on factoid questions and competitive results on list questions."} {"_id": "6653956ab027cadb035673055c91ce1c7767e140", "title": "Compact Planar Microstrip Branch-Line Couplers Using the Quasi-Lumped Elements Approach With Nonsymmetrical and Symmetrical T-Shaped Structure", "text": "A class of the novel compact-size branch-line couplers using the quasi-lumped elements approach with symmetrical or nonsymmetrical T-shaped structures is proposed in this paper. The design equations have been derived, and two circuits using the quasi-lumped elements approach were realized for physical measurements. This novel design occupies only 29% of the area of the conventional approach at 2.4 GHz. In addition, a third circuit was designed by using the same formula implementing a symmetrical T-shaped structure and occupied both the internal and external area of the coupler. This coupler achieved 500-MHz bandwidth while the phase difference between S21 and S31 is within 90\u00b0 \u00b1 1\u00b0. Thus, the coupler not only achieves a bandwidth 25% wider than that of the conventional coupler, but also occupies only 70% of the circuit area of the conventional design. All three proposed couplers can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements, bonding wires, and via-holes, making them very useful for wireless communication systems."} {"_id": "fa27e993f88c12ef3cf5bbc6eb3b0b4f9de15e86", "title": "The Anatomy of Motivation: An Evolutionary-Ecological Approach", "text": "There have been few attempts to bring evolutionary theory to the study of human motivation. From this perspective motives can be considered psychological mechanisms to produce behavior that solves evolutionarily important tasks in the human niche. From the dimensions of the human niche we deduce eight human needs: optimize the number and survival of gene copies; maintain bodily integrity; avoid external threats; optimize sexual, environmental, and social capital; and acquire reproductive and survival skills. These needs then serve as the foundation for a necessary and sufficient list of 15 human motives, which we label: lust, hunger, comfort, fear, disgust, attract, love, nurture, create, hoard, affiliate, status, justice, curiosity, and play. We show that these motives are consistent with evidence from the current literature. This approach provides us with a precise vocabulary for talking about motivation, the lack of which has hampered progress in behavioral science. Developing testable theories about the structure and function of motives is essential to the project of understanding the organization of animal cognition and learning, as well as for the applied behavioral sciences."} {"_id": "6de09e48d42d14c4079e5d8a6a58485341b41cad", "title": "Students\u2019 opinions on blended learning and its implementation in terms of their learning styles", "text": "The purpose of this article is to examine students\u2019 views on the blended learning method and its use in relation to the students\u2019 individual learning style. The study was conducted with 31 senior students. Web-based media together with face-to-face classroom settings were used in the blended learning framework. A scale of Students\u2019 Views on Blended Learning and its implementation, Kolb\u2019s Learning Style Inventory, Pre-Information Form and open-ended questions were used to gather data. 
The majority of the students fell into the assimilator, accommodator and converger learning styles. Results revealed that students\u2019 views on the blended learning method and its use are quite positive."} {"_id": "b4d4a78ecc68fd8fe9235864e0b1878cb9e9f84b", "title": "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (Reprint)", "text": "An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key.\nA message can be \u201csigned\u201d using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in \u201celectronic mail\u201d and \u201celectronic funds transfer\u201d systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * d = 1 (mod (p - 1) * (q - 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n."} {"_id": "628c2bcfbd6b604e2d154c7756840d3a5907470f", "title": "Blockchain Platform for Industrial Internet of Things", "text": "Internet of Things (IoT) technologies are being adopted for industrial and manufacturing applications such as manufacturing automation, remote machine diagnostics, prognostic health management of industrial machines and supply chain management. Cloud-Based Manufacturing is a recent on-demand model of manufacturing that is leveraging IoT technologies. While Cloud-Based Manufacturing enables on-demand access to manufacturing resources, a trusted intermediary is required for transactions between the users who wish to avail themselves of manufacturing services. We present a decentralized, peer-to-peer platform called BPIIoT for Industrial Internet of Things based on Blockchain technology. With the use of Blockchain technology, the BPIIoT platform enables peers in a decentralized, trustless, peer-to-peer network to interact with each other without the need for a trusted intermediary."} {"_id": "0fd0e3854ee696148e978ec33d5c042554cd4d23", "title": "Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions", "text": "This document provides additional details about the experiments described in (Heilman and Smith, 2010). Note that while this document provides information about the datasets and experimental methods, it does not provide further results. If you have any further questions, please feel free to contact the first author. The preprocessed datasets (i.e., tagged and parsed) will be made available for research purposes upon request."} {"_id": "3e27e210c9a41d9901164fd4c24e549616d1958a", "title": "CompNet: Neural networks growing via the compact network morphism", "text": "It is often the case that the performance of a neural network can be improved by adding layers. 
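The function-preserving intuition behind such deepening can be checked in a few lines: a new linear layer initialized to the identity leaves the outputs unchanged. This shows the generic network-morphism property only; the compact, sparsified variant described below is not reproduced here.

```python
# Identity-initialized layer addition preserves the network function, so
# training can continue instead of restarting. Generic morphism idea only;
# the paper's sparse compaction of the added layer is deliberately omitted.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))       # existing trained weight matrix
x = rng.standard_normal(4)             # an arbitrary input

y_before = x @ W1                      # original (shallower) network
W_added = np.eye(8)                    # new layer, identity initialization
y_after = (x @ W1) @ W_added           # deepened network

print(np.allclose(y_before, y_after))  # -> True: function preserved
```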
In real-world practice, we often train dozens of neural network architectures in parallel, which is a wasteful process. We explored CompNet, in which we morph a well-trained neural network into a deeper one such that the network function is preserved and the added layer is compact. The paper makes two contributions: (a) the modified network converges fast and keeps the same functionality, so we do not need to train from scratch again; (b) the size of the added layer is controlled by removing redundant parameters with sparse optimization. This differs from previous network morphism approaches, which tend to add more neurons or channels beyond the actual requirements and result in redundancy in the model. The method is illustrated using several neural network structures on different data sets including MNIST and CIFAR10."} {"_id": "c11de783900e118e3d3de74efca5435c98b11e7c", "title": "Driver Drowsiness Detection System", "text": "Sleepiness or fatigue in drivers driving for long hours is the major cause of accidents on highways worldwide. International statistics show that a large number of road accidents are caused by driver fatigue. Therefore, a system that can detect oncoming driver fatigue and issue timely warning could help in preventing many accidents, and consequently save money and reduce personal suffering. The authors have made an attempt to design a system that uses a video camera that points directly towards the driver\u2019s face in order to detect fatigue. If fatigue is detected, a warning signal is issued to alert the driver. The authors have worked on the video files recorded by the camera. Each video file is converted into frames. Once the eyes are located in each frame, by determining the energy value of each frame one can determine whether the eyes are open or closed. A particular condition is set for the energy values of open and closed eyes. If the average energy value for 5 consecutive frames falls within a given condition, the driver is detected as drowsy and a warning signal is issued. The algorithm is proposed, implemented, tested, and found to work satisfactorily."} {"_id": "1e55bb7c095d3ea15bccb3df920c546ec54c86b5", "title": "Enterprise resource planning: A taxonomy of critical factors", "text": ""} {"_id": "23a3dc2af47b13fe63189df63dbdda068b854cdd", "title": "Computable Elastic Distances Between Shapes", "text": "We define distances between geometric curves by the square root of the minimal energy required to transform one curve into the other. The energy is formally defined from a left-invariant Riemannian distance on an infinite dimensional group acting on the curves, which can be explicitly computed. The obtained distance boils down to a variational problem for which an optimal matching between the curves has to be computed. An analysis of the distance when the curves are polygonal leads to a numerical procedure for the solution of the variational problem, which can efficiently be implemented, as illustrated by experiments."} {"_id": "37726c30352bd235c2a832b6c16633c2b11b8913", "title": "Supervised Locally Linear Embedding", "text": "Locally linear embedding (LLE) is a recently proposed method for unsupervised nonlinear dimensionality reduction. It has a number of attractive features: it does not require an iterative algorithm, and just a few parameters need to be set. Two extensions of LLE to supervised feature extraction were independently proposed by the authors of this paper. 
Here, both methods are unified in a common framework and applied to a number of benchmark data sets. Results show that they perform very well on high-dimensional data which exhibits a manifold structure."} {"_id": "8213dc79a49f6bd8e1f396e66db1d8503d85f566", "title": "Differentially Private Histogram Publication for Dynamic Datasets: an Adaptive Sampling Approach", "text": "Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on \"one-time\" release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods."} {"_id": "d318b1cbb00282eea7fc5789f97d859181fc165e", "title": "LBP based recursive averaging for babble noise reduction applied to automatic speech recognition", "text": "Improved automatic speech recognition (ASR) in babble noise conditions continues to pose major challenges. In this paper, we propose a new local binary pattern (LBP) based speech presence indicator (SPI) to distinguish speech and non-speech components. Babble noise is subsequently estimated using recursive averaging. In the speech enhancement system, the optimally-modified log-spectral amplitude (OMLSA) estimator uses the estimated noise spectrum obtained from LBP-based recursive averaging (LRA). The performance of the LRA speech enhancement system is compared to the conventional improved minima controlled recursive averaging (IMCRA). Segmental SNR improvements and perceptual evaluations of speech quality (PESQ) scores show that LRA offers superior babble noise reduction compared to the IMCRA system. Hidden Markov model (HMM) based word recognition results show a corresponding improvement."} {"_id": "f7b3544567c32512be1bef4093a78075e59bdc11", "title": "Using flipped classroom approach to teach computer programming", "text": "The flipped classroom approach has been increasingly adopted in higher education institutions. Although this approach has many advantages, there are also many challenges that should be considered. In this paper, we discuss the suitability of this approach to teach computer programming, and we report on our pilot experience of using this approach at Qatar University to teach one subject of a computer programming course. It is found that students have a positive attitude towards this approach and that it improves their learning. 
However, the main challenge was how to involve some of the students in online learning activities."} {"_id": "ba965d68ea7a58f6a5676c47a5c81fad21959ef6", "title": "Towards a New Structural Model of the Sense of Humor: Preliminary Findings", "text": "In this article some formal, content-related and procedural considerations towards the sense of humor are articulated and the analysis of both everyday humor behavior and of comic styles leads to the initial proposal of a four-factor model of humor (4FMH). This model is tested in a new dataset and it is also examined whether two forms of comic styles (benevolent humor and moral mockery) do fit in. The model seems to be robust but further studies on the structure of the sense of humor as a personality trait are required."} {"_id": "3e7aabaffdb4b05e701c544451dce55ad96e9401", "title": "Wi-Fi Fingerprint-Based Indoor Positioning: Recent Advances and Comparisons", "text": "The growing commercial interest in indoor location-based services (ILBS) has spurred recent development of many indoor positioning techniques. Due to the absence of Global Positioning System (GPS) signal, many other signals have been proposed for indoor usage. Among them, Wi-Fi (802.11) emerges as a promising one due to the pervasive deployment of wireless LANs (WLANs). In particular, Wi-Fi fingerprinting has been attracting much attention recently because it does not require line-of-sight measurement of access points (APs) and achieves high applicability in complex indoor environment. This survey overviews recent advances on two major areas of Wi-Fi fingerprint localization: advanced localization techniques and efficient system deployment. Regarding advanced techniques to localize users, we present how to make use of temporal or spatial signal patterns, user collaboration, and motion sensors. Regarding efficient system deployment, we discuss recent advances on reducing offline labor-intensive survey, adapting to fingerprint changes, calibrating heterogeneous devices for signal collection, and achieving energy efficiency for smartphones. We study and compare the approaches through our deployment experiences, and discuss some future directions."} {"_id": "8acaebdf9569adafb03793b23e77bf4ac8c09f83", "title": "Design of spoof surface plasmon polariton based terahertz delay lines", "text": "We present the analysis and design of fixed physical length, spoof Surface Plasmon Polariton based waveguides with adjustable delay at terahertz frequencies. The adjustable delay is obtained using Corrugated Planar Goubau Lines (CPGL) by changing their corrugation depth without changing the total physical length of the waveguide. Our simulation results show that electrical lengths of 237.9\u00b0, 220.6\u00b0, and 310.6\u00b0 can be achieved by physical lengths of 250 \u03bcm and 200 \u03bcm at 0.25, 0.275, and 0.3 THz, respectively, for demonstration purposes. These simulation results are also consistent with our analytical calculations using the physical parameters and material properties. When we combine pairs of same-length delay lines as the two branches of a terahertz phase shifter, we achieved a relative phase shift estimation error better than 5.8%. To the best of our knowledge, this is the first-time demonstration of adjustable spoof Surface Plasmon Polariton based CPGL delay lines. 
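The electrical lengths quoted above map to time delays via the phase-delay relation $\tau = (\phi/360^{\circ})/f$; checking the quoted cases numerically:

```python
# Phase delay of a line with electrical length phi (degrees) at frequency
# f: tau = (phi / 360) / f. Applied to the figures quoted in the abstract.
def phase_delay_s(phi_deg: float, freq_hz: float) -> float:
    return (phi_deg / 360.0) / freq_hz

for phi, f in [(237.9, 0.25e12), (220.6, 0.275e12), (310.6, 0.3e12)]:
    print(f"{phi:6.1f} deg at {f/1e12:.3f} THz -> "
          f"{phase_delay_s(phi, f) * 1e12:.2f} ps")
# 237.9 deg at 0.25 THz corresponds to ~2.64 ps of delay.
```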
The idea can be used for obtaining tunable delay lines with fixed lengths and phase shifters for terahertz band circuitry."} {"_id": "10d710c01acb10c4aea702926d21697935656c3d", "title": "Infrared Colorization Using Deep Convolutional Neural Networks", "text": "This paper proposes a method for transferring the RGB color spectrum to near-infrared (NIR) images using deep multi-scale convolutional neural networks. A direct and integrated transfer between NIR and RGB pixels is trained. The trained model does not require any user guidance or a reference image database in the recall phase to produce images with a natural appearance. To preserve the rich details of the NIR image, its high frequency features are transferred to the estimated RGB image. The presented approach is trained and evaluated on a real-world dataset containing a large amount of road scene images in summer. The dataset was captured by a multi-CCD NIR/RGB camera, which ensures a perfect pixel-to-pixel registration."} {"_id": "325d145af5f38943e469da6369ab26883a3fd69e", "title": "Colorful Image Colorization", "text": "Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a \u201ccolorization Turing test,\u201d asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32% of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks."} {"_id": "326a0914dcdf7f42b5e1c2887174476728ca1b9d", "title": "Wasserstein GAN", "text": "The problem this paper is concerned with is that of unsupervised learning. Mainly, what does it mean to learn a probability distribution? The classical answer to this is to learn a probability density. This is often done by defining a parametric family of densities $(P_\theta)_{\theta \in \mathbb{R}^d}$ and finding the one that maximizes the likelihood on our data: if we have real data examples $\{x^{(i)}\}_{i=1}^{m}$, we would solve the problem"} {"_id": "5287d8fef49b80b8d500583c07e935c7f9798933", "title": "Generative Adversarial Text to Image Synthesis", "text": "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. 
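The text-conditional extension of the GAN minimax game that such architectures build on can be written as follows, with $\varphi(t)$ a learned embedding of the text $t$; this is the standard conditional-GAN form, given as background rather than the paper's exact formulation:

```latex
% Text-conditional GAN objective: the discriminator D sees an image
% together with the text embedding phi(t); the generator G maps noise z
% plus phi(t) to an image.
\min_G \max_D \;
  \mathbb{E}_{(x,t) \sim p_{\mathrm{data}}}\!\left[\log D(x, \varphi(t))\right]
+ \mathbb{E}_{z \sim p_z,\, t \sim p_{\mathrm{data}}}\!\left[\log\!\left(1 - D(G(z, \varphi(t)), \varphi(t))\right)\right]
```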
In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions."} {"_id": "57bbbfea63019a57ef658a27622c357978400a50", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation", "text": ""} {"_id": "6ec02fb5bfc307911c26741fb3804f16d8ad299c", "title": "Active learning for on-road vehicle detection: a comparative study", "text": "In recent years, active learning has emerged as a powerful tool in building robust systems for object detection using computer vision. Indeed, active learning approaches to on-road vehicle detection have achieved impressive results. While active learning approaches for object detection have been explored and presented in the literature, few studies have been performed to comparatively assess costs and merits. In this study, we provide a cost-sensitive analysis of three popular active learning methods for on-road vehicle detection. The generality of active learning findings is demonstrated via learning experiments performed with detectors based on histogram of oriented gradient features and SVM classification (HOG\u2013SVM), and Haar-like features and Adaboost classification (Haar\u2013Adaboost). Experimental evaluation has been performed on static images and real-world on-road vehicle datasets. Learning approaches are assessed in terms of the time spent annotating, data required, recall, and precision."} {"_id": "c2b6755543c3f7c71adb3e14eb06179f27b6ad5d", "title": "HyFlex nickel-titanium rotary instruments after clinical use: metallurgical properties.", "text": "AIM\nTo analyse the type and location of defects in HyFlex CM instruments after clinical use in a graduate endodontic programme and to examine the impact of clinical use on their metallurgical properties.\n\n\nMETHODOLOGY\nA total of 468 HyFlex CM instruments discarded from a graduate endodontic programme were collected after use in three teeth. The incidence and type of instrument defects were analysed. The lateral surfaces of the defective instruments were examined by scanning electron microscopy. New and clinically used instruments were examined by differential scanning calorimetry (DSC) and x-ray diffraction (XRD). Vickers hardness was measured with a 200-g load near the flutes for new and clinically used axially sectioned instruments. Data were analysed using one-way ANOVA or Tukey's multiple comparison test.\n\n\nRESULTS\nOf the 468 HyFlex instruments collected, no fractures were observed and 16 (3.4%) revealed deformation. Of all the unwound instruments, size 20, .04 taper unwound the most often (n\u00a0=\u00a05) followed by size 25, .08 taper (n\u00a0=\u00a04). The trends of the DSC plots of the new and clinically used (with and without defects) instrument groups were very similar. The DSC analyses showed that HyFlex instruments had an austenite transformation completion or austenite-finish (Af) temperature exceeding 37\u00a0\u00b0C. The Af temperatures of HyFlex instruments (with or without defects) after multiple clinical use were much lower than in new instruments (P\u00a0<\u00a00.05). The enthalpy values for the transformation from martensitic to austenitic on deformed instruments were smaller than in the new instruments at the tip region (P\u00a0<\u00a00.05). 
XRD results showed that both new and used HyFlex instruments had austenite and martensite structure at room temperature. No significant difference in microhardness was detected between new and used instruments (with and without defects).\n\n\nCONCLUSIONS\nThe risk of HyFlex instrument fracture in the canal is very low when instruments are discarded after three cases of clinical use. New HyFlex instruments were a mixture of martensite and austenite structure at body temperature. Multiple clinical use caused significant changes in the microstructural properties of HyFlex instruments. Smaller instruments should be considered as single-use."} {"_id": "a25f6d05c8191be01f736073fa2bc20c03ad7ad8", "title": "Integrated control of a multi-fingered hand and arm using proximity sensors on the fingertips", "text": "In this study, we propose integrated control of a robotic hand and arm using only proximity sensing from the fingertips. An integrated control scheme for the fingers and for the arm enables quick control of the position and posture of the arm by placing the fingertips adjacent to the surface of an object to be grasped. The arm control scheme enables adjustments based on errors in hand position and posture that would be impossible to achieve by finger motions alone, thus allowing the fingers to grasp an object in a laterally symmetric grasp. This can prevent grasp failures such as a finger pushing the object out of the hand or knocking the object over. The proposed control of the arm and hand allowed correction of position errors on the order of several centimeters. For example, for an object on a workbench whose positional relation to the robot is uncertain, an inexpensive optical sensor such as a Kinect, which only provides coarse image data, would be sufficient for grasping the object."} {"_id": "13317a497f4dc5f62a15dbdc135dd3ea293474df", "title": "Do Multi-Sense Embeddings Improve Natural Language Understanding?", "text": "Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while \u2018multi-sense\u2019 methods have been proposed and tested on artificial word-similarity tasks, we don\u2019t know if they improve real natural language understanding tasks. In this paper we introduce a multi-sense embedding model based on Chinese Restaurant Processes that achieves state-of-the-art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding."} {"_id": "7ec6b06b0f421b80ca25994c7aa106106c7bfb50", "title": "Design and Simulation of Bridgeless PFC Boost Rectifiers", "text": "This work presents new three-level unidirectional single-phase PFC rectifier topologies well-suited for applications targeting high efficiency and/or high power density. The characteristics of a selected novel rectifier topology, including its principles of operation, modulation strategy, and PID control scheme, are presented, together with an analysis related to the power circuit design. 
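As textbook background for the boost-type PFC stages discussed here (not a relation stated in the abstract): in continuous conduction a boost converter obeys $V_{out} = v_{in}/(1-d)$, so forcing the input current to track the rectified mains requires the duty cycle to follow

```latex
% Duty-cycle law for a boost PFC stage in continuous conduction mode,
% obtained from V_out = v_in / (1 - d): the modulator tracks the rectified
% input voltage so that the drawn current can be shaped sinusoidally.
d(t) = 1 - \frac{\lvert v_{\mathrm{in}}(t) \rvert}{V_{\mathrm{out}}}
```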
Finally, a 220-V/3-kW laboratory prototype is constructed and used in order to verify the characteristics of the new converter, which include remarkably low switching losses and a single ac-side boost inductor, allowing for a 98.6% peak efficiency with a switching frequency of 140 kHz."} {"_id": "c62ba57869099f20c8bcefd9b38ce5d8b4b3db56", "title": "Computational models of trust and reputation: agents, evolutionary games, and social networks", "text": "Many recent studies of trust and reputation are made in the context of commercial reputation or rating systems for online communities. Most of these systems have been constructed without a formal rating model or much regard for our sociological understanding of these concepts. We first provide a critical overview of the state of research on trust and reputation. We then propose a formal quantitative model for the rating process. Based on this model, we formulate two personalized rating schemes and demonstrate their effectiveness at inferring trust experimentally using a simulated dataset and a real world movie-rating dataset. Our experiments show that the popular global rating scheme widely used in commercial electronic communities is inferior to our personalized rating schemes when sufficient ratings among members are available. The level of sufficiency is then discussed. In comparison with other models of reputation, we quantitatively show that our framework provides significantly better estimations of reputation. \"Better\" is discussed with respect to a rating process and specific games as defined in this work. Secondly, we propose a mathematical framework for modeling trust and reputation that is rooted in findings from the social sciences. In particular, our framework makes explicit the importance of social information (i.e., indirect channels of inference) in helping members of a social network choose whom they want to partner with or to avoid. Rating systems that make use of such indirect channels of inference are necessarily personalized in nature, catering to the individual context of the rater. Finally, we have extended our trust and reputation framework toward addressing a fundamental problem for social science and biology: evolution of cooperation. We show that by providing an indirect inference mechanism for the propagation of trust and reputation, cooperation among selfish agents can be explained for a set of game theoretic simulations. For these simulations in particular, our proposal is shown to have provided more cooperative agent communities than existing schemes are able to."} {"_id": "61df37b2c1f731e2b6bcb1ae2c2b7670b917284c", "title": "Surface Management System Field Trial Results", "text": "NASA Ames Research Center, in cooperation with the FAA, has completed research and development of a proof-of-concept Surface Management System (SMS). This paper reports on two recent SMS field tests as well as final performance and benefits analyses. Field tests and analysis support the conclusion that substantial portions of SMS technology are ready for transfer to the FAA and deployment throughout the National Airspace System (NAS). Other SMS capabilities were accepted in concept but require additional refinement for inclusion in subsequent development spirals.
SMS is a decision support tool that helps operational specialists at Air Traffic Control (ATC) and NAS user facilities to collaboratively manage the movements of aircraft on the surface of busy airports, thereby improving capacity, efficiency, and flexibility. SMS provides accurate predictions of the future demand and how that demand will affect airport resources \u2013 information that is not currently available. The resulting shared awareness enables the Air Traffic Control Tower (ATCT), Terminal Radar Approach Control (TRACON), Air Route Traffic Control Center (ARTCC), and air carriers to coordinate traffic management decisions. Furthermore, SMS uses its ability to predict how future demand will play out on the surface to evaluate the effect of various traffic management decisions in advance of implementing them, to plan and advise surface operations. The SMS concept, displays, and algorithms were evaluated through a series of field tests at Memphis International Airport (MEM). An operational trial in September 2003 evaluated SMS traffic management components, such as runway configuration change planning; shadow testing in January 2004 tested tactical components (e.g., Approval Request (APREQ) coordination, sequencing for departure, and Expected Departure Clearance Time (EDCT) compliance). Participants in these evaluations rated the SMS concept and many of the traffic management displays very positively. Local and Ground controller displays will require integration with other automation systems. Feedback from FAA and NAS user participants supports the conclusion that SMS algorithms currently provide information that has acceptable and beneficial accuracy for traffic management applications. Performance analysis results document the current accuracy of SMS algorithms. Benefits/cost analysis of delay cost reduction due to SMS provides the business case for SMS deployment."} {"_id": "dcff311940942dcf81db5073e551a87e1710e52a", "title": "Recognizing Malicious Intention in an Intrusion Detection Process", "text": "Generally, the intruder must perform several actions, organized in an intrusion scenario, to achieve his or her malicious objective. We argue that intrusion scenarios can be modelled as a planning process and we suggest modelling a malicious objective as an attempt to violate a given security requirement. Our proposal is then to extend the definition of attack correlation presented in [CM02] to correlate attacks with intrusion objectives. This notion is useful to decide if a sequence of correlated actions can lead to a security requirement violation. This approach provides the security administrator with a global view of what happens in the system. In particular, it controls unobserved actions through hypothesis generation, clusters repeated actions in a single scenario, recognizes intruders that are changing their intrusion objectives and is efficient to detect variations of an intrusion scenario. This approach can also be used to eliminate a category of false positives that correspond to false attacks, that is actions that are not further correlated to an intrusion objective."} {"_id": "7ffdf4d92b4bc5690249ed98e51e1699f39d0e71", "title": "Reconfigurable RF MEMS Phased Array Antenna Integrated Within a Liquid Crystal Polymer (LCP) System-on-Package", "text": "For the first time, a fully integrated phased array antenna with radio frequency microelectromechanical systems (RF MEMS) switches on a flexible, organic substrate is demonstrated above 10 GHz.
A low noise amplifier (LNA), MEMS phase shifter, and 2 \u00d7 2 patch antenna array are integrated into a system-on-package (SOP) on a liquid crystal polymer substrate. Two antenna arrays are compared; one implemented using a single-layer SOP and the second with a multilayer SOP. Both implementations are low-loss and capable of 12\u00b0 of beam steering. The design frequency is 14 GHz and the measured return loss is greater than 12 dB for both implementations. The use of an LNA allows for a much higher radiated power level. These antennas can be customized to meet almost any size, frequency, and performance needed. This research furthers the state-of-the-art for organic SOP devices."} {"_id": "3aa41f8fdb6a4523e2cd95365bb6c7499ad29708", "title": "iCanTrace : Avatar Personalization through Selfie Sketches", "text": "This paper introduces a novel system that allows users to generate customized cartoon avatars through a sketching interface. The rise of social media and personalized gaming has created a need for personalized virtual appearances. Avatars, self-curated and customized images to represent oneself, have become a common means of expressing oneself in these new media. Avatar creation platforms face the dual challenge of granting users significant control over the avatar creation while not encumbering them with too many choices in their avatar customization. This paper demonstrates a sketch-guided avatar customization system and its potential to simplify the avatar creation process."} {"_id": "ba02b6125ba47ff3629f1d09d1bada28169c2b32", "title": "Teaching Syntax by Adversarial Distraction", "text": "Existing entailment datasets mainly pose problems which can be answered without attention to grammar or word order. Learning syntax requires comparing examples where different grammar and word order change the desired classification. We introduce several datasets based on synthetic transformations of natural entailment examples in SNLI or FEVER, to teach aspects of grammar and word order. We show that without retraining, popular entailment models are unaware that these syntactic differences change meaning. With retraining, some but not all popular entailment models can learn to compare the syntax properly."} {"_id": "cb25c33ba56db92b7da4d5080f73fba07cb914a3", "title": "A large-stroke flexure fast tool servo with new displacement amplifier", "text": "With the rapid progress of science and technology, free-form surface optical components have come to play an important role in spaceflight, aviation, national defense, and other areas of technology. The fast tool servo (FTS) is the most promising technology for machining such components, but the short stroke of existing FTS devices has constrained the development of free-form surface optics. To address this problem, a new large-stroke flexible FTS device is proposed in this paper. A series of mechanism modeling and optimal design steps are carried out via compliance matrix theory, pseudo-rigid body theory, and the Particle Swarm Optimization (PSO) algorithm, respectively. The mechanism performance of the large-stroke FTS device is verified by the Finite Element Analysis (FEA) method. For this study, a piezoelectric (PZT) actuator P-840.60 that can travel to 90 \u00b5m under open-loop control is employed; the experimental results indicate that the maximum output displacement reaches 258.3 \u00b5m and the bandwidth is around 316.84 Hz.
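The fast-tool-servo abstract above uses Particle Swarm Optimization for its optimal design step. As a reference point only, here is a minimal global-best PSO in Python; the inertia weight and acceleration coefficients are common textbook defaults, not the paper's settings, and the toy objective is hypothetical.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Global-best PSO; parameter values are generic defaults, not tuned."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=4)  # toy objective
```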
Both the theoretical analysis and the prototype test results verify that the presented FTS device can meet the demands of actual microstructure processing."} {"_id": "3d4bae33c2ccc0a6597f80e27cbeed64990b95bd", "title": "Mindfulness practice leads to increases in regional brain gray matter density", "text": "Therapeutic interventions that incorporate training in mindfulness meditation have become increasingly popular, but to date little is known about neural mechanisms associated with these interventions. Mindfulness-Based Stress Reduction (MBSR), one of the most widely used mindfulness training programs, has been reported to produce positive effects on psychological well-being and to ameliorate symptoms of a number of disorders. Here, we report a controlled longitudinal study to investigate pre-post changes in brain gray matter concentration attributable to participation in an MBSR program. Anatomical magnetic resonance (MR) images from 16 healthy, meditation-na\u00efve participants were obtained before and after they underwent the 8-week program. Changes in gray matter concentration were investigated using voxel-based morphometry, and compared with a waiting list control group of 17 individuals. Analyses in a priori regions of interest confirmed increases in gray matter concentration within the left hippocampus. Whole brain analyses identified increases in the posterior cingulate cortex, the temporo-parietal junction, and the cerebellum in the MBSR group compared with the controls. The results suggest that participation in MBSR is associated with changes in gray matter concentration in brain regions involved in learning and memory processes, emotion regulation, self-referential processing, and perspective taking."} {"_id": "217af49622a4e51b6d1b9b6c75726eaf1355a903", "title": "Animating pictures with stochastic motion textures", "text": "In this paper, we explore the problem of enhancing still pictures with subtly animated motions. We limit our domain to scenes containing passive elements that respond to natural forces in some fashion. We use a semi-automatic approach, in which a human user segments the scene into a series of layers to be individually animated. Then, a \"stochastic motion texture\" is automatically synthesized using a spectral method, i.e., the inverse Fourier transform of a filtered noise spectrum. The motion texture is a time-varying 2D displacement map, which is applied to each layer. The resulting warped layers are then recomposited to form the animated frames. The result is a looping video texture created from a single still image, which has the advantages of being more controllable and of generally higher image quality and resolution than a video texture created from a video source. We demonstrate the technique on a variety of photographs and paintings."} {"_id": "10d79507f0f2e2d2968bf3a962e1daffc8bd44f0", "title": "Modeling the statistical time and angle of arrival characteristics of an indoor multipath channel", "text": "Most previously proposed statistical models for the indoor multipath channel include only time of arrival characteristics. However, in order to use statistical models in simulating or analyzing the performance of systems employing spatial diversity combining, information about angle of arrival statistics is also required. Ideally, it would be desirable to characterize the full space-time nature of the channel. In this paper, a system is described that was used to collect simultaneous time and angle of arrival data at 7 GHz.
Data processing methods are outlined, and results obtained from data taken in two different buildings are presented. Based on the results, a model is proposed that employs the clustered \"double Poisson\" time-of-arrival model proposed by Saleh and Valenzuela (1987). The observed angular distribution is also clustered with uniformly distributed clusters and arrivals within clusters that have a Laplacian distribution."} {"_id": "d00ef607a10e5be00a9e05504ab9771c0b05d4ea", "title": "Analysis and Comparison of a Fast Turn-On Series IGBT Stack and High-Voltage-Rated Commercial IGBTS", "text": "High-voltage-rated solid-state switches such as insulated-gate bipolar transistors (IGBTs) are commercially available up to 6.5 kV. Such voltage ratings are attractive for pulsed power and high-voltage switch-mode converter applications. However, as the IGBT voltage ratings increase, the rates of current rise and fall are generally reduced. This tradeoff is difficult to avoid as IGBTs must maintain a low resistance in the epitaxial or drift region layer. For high-voltage-rated IGBTs with thick drift regions to support the reverse voltage, the required high carrier concentrations are injected at turn on and removed at turn off, which slows the switching speed. An option for faster switching is to series-connect multiple, lower voltage-rated IGBTs. An IGBT-stack prototype with six, 1200 V rated IGBTs in series has been experimentally tested. The six-series IGBT stack consists of individual, optically isolated, gate drivers and aluminum cooling plates for forced air cooling which results in a compact package. Each IGBT is overvoltage protected by transient voltage suppressors. The turn-on current rise time of the six-series IGBT stack and a single 6.5 kV rated IGBT has been experimentally measured in a pulsed resistive-load, capacitor discharge circuit. The IGBT stack has also been compared to two IGBT modules in series, each rated at 3.3 kV, in a boost circuit application switching at 9 kHz and producing an output of 5 kV. The six-series IGBT stack results in improved turn-on switching speed, and significantly higher power boost converter efficiency due to a reduced current tail during turn off. The experimental test parameters and the results of the comparison tests are discussed in this paper"} {"_id": "b63a60e666c4c0335d8de4581eaaa3f71e8e0e54", "title": "A Nonlinear-Disturbance-Observer-Based DC-Bus Voltage Control for a Hybrid AC/DC Microgrid", "text": "DC-bus voltage control is an important task in the operation of a dc or a hybrid ac/dc microgrid system. To improve the dc-bus voltage control dynamics, traditional approaches attempt to measure and feedforward the load or source power in the dc-bus control scheme. However, in a microgrid system with distributed dc sources and loads, the traditional feedforward-based methods need remote measurement with communications. In this paper, a nonlinear disturbance observer (NDO) based dc-bus voltage control is proposed, which does not need the remote measurement and enables the important \u201cplug-and-play\u201d feature. Based on this observer, a novel dc-bus voltage control scheme is developed to suppress the transient fluctuations of dc-bus voltage and improve the power quality in such a microgrid system. Details on the design of the observer, the dc-bus controller and the pulsewidth-modulation (PWM) dead-time compensation are provided in this paper. The effects of possible dc-bus capacitance variation are also considered.
The performance of the proposed control strategy has been successfully verified in a 30 kVA hybrid microgrid including ac/dc buses, a battery energy storage system, and a photovoltaic (PV) power generation system."} {"_id": "6b557c35514d4b6bd75cebdaa2151517f5e820e2", "title": "Prediction, operations, and condition monitoring in wind energy", "text": "Recent developments in wind energy research, including wind speed prediction, wind turbine control, operations of hybrid power systems, and condition monitoring and fault detection, are surveyed. Approaches based on statistics, physics, and data mining for wind speed prediction at different time scales are reviewed. Comparative analysis of prediction results reported in the literature is presented. Studies of classical and intelligent control of wind turbines involving different objectives and strategies are reported. Models for planning operations of different hybrid power systems including wind generation for various objectives are addressed. Methodologies for condition monitoring and fault detection are discussed. Future research directions in wind energy are proposed."} {"_id": "e81f115f2ac725f27ea6549f4de0a71b3a3f6a5c", "title": "NEUROPSI: a brief neuropsychological test battery in Spanish with norms by age and educational level.", "text": "The purpose of this research was to develop, standardize, and test the reliability of a short neuropsychological test battery in the Spanish language. This neuropsychological battery was named \"NEUROPSI,\" and was developed to assess briefly a wide spectrum of cognitive functions, including orientation, attention, memory, language, visuoperceptual abilities, and executive functions. The NEUROPSI includes items that are relevant for Spanish-speaking communities. It can be applied to illiterates and low educational groups. Administration time is 25 to 30 min. Normative data were collected from 800 monolingual Spanish-speaking individuals, ages 16 to 85 years. Four age groups were used: (1) 16 to 30 years, (2) 31 to 50 years, (3) 51 to 65 years, and (4) 66 to 85 years. Data are also analyzed and presented within 4 different educational levels that were represented in this sample: (1) illiterates (zero years of school); (2) 1 to 4 years of school; (3) 5 to 9 years of school; and (4) 10 or more years of formal education. The effects of age and education, as well as the factor structure of the NEUROPSI, are analyzed. The NEUROPSI may fulfill the need for brief, reliable, and objective evaluation of a broad range of cognitive functions in Spanish-speaking populations."} {"_id": "382c057c0be037340e7d6494fc3a580b9d6b958c", "title": "Should TED talks be teaching us something?", "text": "The nonprofit phenomenon \u201cTED,\u201d the brand name for the concepts of Technology, Entertainment, and Design, was born in 1984. It launched into pop culture stardom in 2006 when the organization\u2019s curators began offering short, free, unrestricted, and educational video segments. Known as \u201cTEDTalks,\u201d these informational segments are designed to be no longer than 18 minutes in length and provide succinct, targeted enlightenment on various topics or ideas that are deemed \u201cworth spreading.\u201d TED Talks, often delivered in sophisticated studios with trendy backdrops, follow a format that focuses learners on the presenter and limited, extremely purposeful visual aids. Topics range from global warming to running to the developing world.
Popular TED Talks, such as Sir Ken Robinson\u2019s \u201cSchools Kill Creativity\u201d or Dan Gilbert\u2019s \u201cWhy Are We Happy?\u201d can easily garner well over a million views. TED Talks are a curious phenomenon for educators to observe. They are in many ways the antithesis of traditional lectures, which are typically 60-120 minutes in length and delivered in cavernous halls by faculty members engaged in everyday academic lives. Perhaps the formality of the lecture is the biggest superficial difference in comparison to casual TEDTalks (Table 1). However, TED Talks are not as unstructured as they may appear. Presenters are well coached and instructed to follow a specific presentation formula, which maximizes storyboarding and highlights passion for the subject. While learning is not formally assessed, TED Talks do seem to accomplish their goals of spreading ideas while sparking curiosity within the learner. The fact that some presentations have been viewed more than 16 million times points to the effectiveness of the platform in at least reaching learners and stimulating a desire to click, listen, and learn. Moreover, the TEDTalks website is the fourth most popular technology website and the single most popular conference and events website in the world. The TED phenomenon may have both direct and subliminal messages for academia. Perhaps an initial question to ponder is whether the TED phenomenon is a logical grassroots educational evolution or a reaction to the digital generation and their preference for learning that occurs \u201cwherever, whenever.\u201d The diverse cross-section of TED devotees ranging in background and age would seem to provide evidence that the platform does not solely appeal to younger generations of learners. Instead, it suggests that adult learners are either more drawn to digital learning than they think they are or than they are likely to admit. The perceived efficacy of TED once again calls into question the continued reliance of academia on the lecture as the primary currency of learning. TED Talks do not convey large chunks of information but rather present grander ideas. Would TED-like educational modules or blocks of 18-20 minutes be more likely to pique student curiosity across a variety of pharmacy topics, maintain attention span, and improve retention? Many faculty members who are recognized as outstanding teachers or lecturers might confess that they already teach through a TED-like lens. Collaterally, TED Talks or TED-formatted learning experiences might be ideal springboards for incorporation into inverted or flipped classroom environments where information is gathered and learned at home, while ideas are analyzed, debated, and assimilated within the classroom. Unarguably, TED Talks have given scientists and other researchers a real-time, mass media driven opportunity to disseminate their research, ideas, and theories that might otherwise have gone unnoticed. Similar platforms or approaches may be able to provide opportunities for the academy to further transmit research to the general public. The TED approach to idea dissemination is not without its critics. Several authors have criticized TED for flattening or dumbing down ideas so they fit into a preconceived, convenient format that is primarily designed to entertain. Consequently, the oversimplified ideas and concepts may provoke little effort from the learner to analyze data, theory, or controversy.
"} {"_id": "36eff99a7f23cec395e4efc80ff7f937934c7be6", "title": "Geometry and Meaning", "text": "Geometry and Meaning is an interesting book about a relationship between geometry and logic defined on certain types of abstract spaces and how that intimate relationship might be exploited when applied in computational linguistics. It is also about an approach to information retrieval, because the analysis of natural language, especially negation, is applied to problems in IR, and indeed illustrated throughout the book by simple examples using search engines. It is refreshing to see IR issues tackled from a different point of view than the standard vector space (Salton, 1968). It is an enjoyable read, as intended by the author, and succeeds as a sort of tourist guide to the subject in hand. The early part of the book concentrates on the introduction of a number of elementary concepts from mathematics: graph theory, linear algebra (especially vector spaces), lattice theory, and logic. These concepts are well motivated and illustrated with good examples, mostly of a classificatory or taxonomic kind. One of the major goals of the book is to argue that non-classical logic, in the form of a quantum logic, is a candidate for analyzing language and its underlying logic, with a promise that such an approach could lead to improved search engines. The argument for this is aided by copious references to early philosophers, scientists, and mathematicians, creating the impression that when Aristotle, Descartes, Boole, and Grassmann were laying the foundations for taxonomy, analytical geometry, logic, and vector spaces, they had a more flexible and broader view of these subjects than is current. This is especially true of logic. Thus the historical approach taken to introducing quantum logic (chapter 7) is to show that this particular kind of logic and its interpretation in vector space were inherent in some of the ideas of these earlier thinkers. Widdows claims that Aristotle was never respected for his mathematics and that Grassmann\u2019s Ausdehnungslehre was largely ignored and left in obscurity. Whether Aristotle was never admired for his mathematics I am unable to judge, but certainly Alfred North Whitehead (1925) was not complimentary when he said:"} {"_id": "f0d82cbac15c4379677d815c9d32f7044b19d869", "title": "Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity.", "text": "Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems from micro- to macroscales.
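The node-and-edge framing in the network neuroscience abstract above translates directly into simple graph code. A toy sketch, assuming a made-up 3-region weighted connectivity matrix, that extracts an edge list and node strength (weighted degree), one of the most basic network measures:

```python
# Toy connectivity graph: 3 hypothetical brain regions, invented edge weights.
conn = [
    [0.0, 0.8, 0.1],
    [0.8, 0.0, 0.4],
    [0.1, 0.4, 0.0],
]
n = len(conn)
# Edge list of the graph: (node_i, node_j, weight) over the upper triangle.
edges = [(i, j, conn[i][j]) for i in range(n)
         for j in range(i + 1, n) if conn[i][j] > 0]
# Node strength (weighted degree): sum of edge weights incident to each node.
strength = [sum(row) for row in conn]
print(edges, strength)
```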
We present examples of how human brain imaging data are being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers and emphasize their utility in informing diagnosis and monitoring, brain-machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights that are critical for the neuroengineer's tool kit."} {"_id": "f7d5f8c60972c18812925715f685ce8ae5d5659d", "title": "A new exact method for the two-dimensional orthogonal packing problem", "text": "The two-dimensional orthogonal packing problem (2OPP) consists of determining if a set of rectangles (items) can be packed into one rectangle of fixed size (bin). In this paper we propose two exact algorithms for solving this problem. The first algorithm is an improvement on a classical branch-and-bound method, whereas the second algorithm is based on a two-step enumerative method. We also describe reduction procedures and lower bounds which can be used within the branch-and-bound method. We report computational experiments for randomly generated benchmarks, which demonstrate the efficiency of both methods."} {"_id": "90c1104142203c8ead18882d49bfea8aec23e758", "title": "Sensitivity and diagnosticity of NASA-TLX and simplified SWAT to assess the mental workload associated with operating an agricultural sprayer.", "text": "The objectives of the present study were: a) to investigate three continuous variants of the NASA-Task Load Index (TLX) (standard NASA (CNASA), average NASA (C1NASA) and principal component NASA (PCNASA)) and five different variants of the simplified subjective workload assessment technique (SSWAT) (continuous standard SSWAT (CSSWAT), continuous average SSWAT (C1SSWAT), continuous principal component SSWAT (PCSSWAT), discrete event-based SSWAT (D1SSWAT) and discrete standard SSWAT (DSSWAT)) in terms of their sensitivity and diagnosticity to assess the mental workload associated with agricultural spraying; b) to compare and select the best variants of NASA-TLX and SSWAT for future mental workload research in the agricultural domain. A total of 16 male university students (mean 30.4 +/- 12.5 years) participated in this study. All the participants were trained to drive an agricultural spraying simulator. Sensitivity was assessed by the ability of the scales to report the maximum change in workload ratings due to the change in illumination and difficulty levels. In addition, the factor loading method was used to quantify sensitivity. The diagnosticity was assessed by the ability of the scale to diagnose the change in task levels from single to dual. Among all the variants of NASA-TLX and SSWAT, PCNASA and discrete variants of SSWAT showed the highest sensitivity and diagnosticity. Moreover, among all the variants of NASA and SSWAT, the discrete variants of SSWAT showed the highest sensitivity and diagnosticity but also high between-subject variability. The continuous variants of both scales had relatively low sensitivity and diagnosticity and also low between-subject variability. Hence, when selecting a scale for future mental workload research in the agricultural domain, a researcher should decide what to compromise: 1) between-subject variability or 2) sensitivity and diagnosticity. STATEMENT OF RELEVANCE: The use of subjective workload scales is very popular in mental workload research.
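For context on how NASA-TLX (one of the scales investigated in the sprayer study above) turns subscale ratings into a single workload number: a sketch of the standard weighted scoring, in which six 0-100 subscale ratings are combined with weights derived from 15 pairwise comparisons; all numbers below are invented for illustration.

```python
# Sketch of standard NASA-TLX weighted scoring (all values invented).
# Six subscales rated 0-100; weights come from 15 pairwise comparisons,
# so they sum to 15 and the weighted sum is divided by 15.
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 35}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}
assert sum(weights.values()) == 15
overall = sum(ratings[k] * weights[k] for k in ratings) / 15.0  # 0-100 scale
```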
The present study investigated the different variants of two popular workload rating scales (i.e. NASA-TLX and SSWAT) in terms of their sensitivity and diagnosticity and selected the best variants of each scale for future mental workload research."} {"_id": "b1cbfd6c1e7f8a77e6c1e6db6cd0625e3bd785ef", "title": "Stadium Hashing: Scalable and Flexible Hashing on GPUs", "text": "Hashing is one of the most fundamental operations that provides a means for a program to obtain fast access to large amounts of data. Despite the emergence of GPUs as many-threaded general purpose processors, high performance parallel data hashing solutions for GPUs are yet to receive adequate attention. Existing hashing solutions for GPUs not only impose restrictions (e.g., inability to concurrently execute insertion and retrieval operations, limitation on the size of key-value data pairs) that limit their applicability, but their performance also does not scale to large hash tables that must be kept out-of-core in the host memory. In this paper we present Stadium Hashing (Stash) that is scalable to large hash tables and practical as it does not impose the aforementioned restrictions. To support large out-of-core hash tables, Stash uses a compact data structure named ticket-board that is separate from hash table buckets and is held inside GPU global memory. The ticket-board locally resolves a significant portion of insertion and lookup operations and hence, by reducing accesses to the host memory, it accelerates the execution of these operations. The split design of the ticket-board also enables arbitrarily large keys and values. Unlike existing methods, Stash naturally supports concurrent insertions and retrievals due to its use of double hashing as the collision resolution strategy. Furthermore, we propose Stash with collaborative lanes (clStash) that enhances GPU's SIMD resource utilization for batched insertions during hash table creation. For concurrent insertion and retrieval streams, Stadium hashing can be up to 2 and 3 times faster than GPU Cuckoo hashing for in-core and out-of-core tables, respectively."} {"_id": "20f5b475effb8fd0bf26bc72b4490b033ac25129", "title": "Real time detection of lane markers in urban streets", "text": "We present a robust and real time approach to lane marker detection in urban streets. It is based on generating a top view of the road, filtering using selective oriented Gaussian filters, using RANSAC line fitting to give initial guesses to a new and fast RANSAC algorithm for fitting Bezier Splines, which is then followed by a post-processing step. Our algorithm can detect all lanes in still images of the street in various conditions, while operating at a rate of 50 Hz and achieving comparable results to previous techniques."} {"_id": "27edbcf8c6023905db4de18a4189c2093ab39b23", "title": "Robust Lane Detection and Tracking in Challenging Scenarios", "text": "A lane-detection system is an important component of many intelligent transportation systems. We present a robust lane-detection-and-tracking algorithm to deal with challenging scenarios such as lane curvature, worn lane markings, lane changes, and emerging, ending, merging, and splitting lanes. We first present a comparative study to find a good real-time lane-marking classifier. Once detection is done, the lane markings are grouped into lane-boundary hypotheses. We group left and right lane boundaries separately to effectively handle merging and splitting lanes.
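Several of the lane-detection abstracts here lean on RANSAC fitting. As a point of reference, a minimal RANSAC line-fitting sketch (not the papers' spline or tracking variants; the iteration count, inlier tolerance, and data are arbitrary illustrative choices):

```python
import random

def ransac_line(points, iters=500, tol=1.0):
    """Fit y = a*x + b by RANSAC; iters and tol are illustrative defaults."""
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue                                   # vertical line; skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):           # keep best consensus
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Noisy line plus two gross outliers.
pts = [(x, 0.5 * x + 1 + random.gauss(0, 0.2)) for x in range(50)]
pts += [(10.0, 40.0), (20.0, -30.0)]
model, inliers = ransac_line(pts)
```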
A fast and robust algorithm, based on random-sample consensus and particle filtering, is proposed to generate a large number of hypotheses in real time. The generated hypotheses are evaluated and grouped based on a probabilistic framework. The suggested framework effectively combines a likelihood-based object-recognition algorithm with a Markov-style process (tracking) and can also be applied to general part-based object-tracking problems. An experimental result on local streets and highways shows that the suggested algorithm is very reliable."} {"_id": "4d2cd0b25c5b0f69b6976752ebca43ec5f04a461", "title": "Lane detection and tracking using B-Snake", "text": "In this paper, we propose a B-Snake based lane detection and tracking algorithm without any camera parameters. Compared with other lane models, the B-Snake based lane model is able to describe a wider range of lane structures since B-Spline can form any arbitrary shape by a set of control points. The problems of detecting both sides of lane markings (or boundaries) have been merged here as the problem of detecting the mid-line of the lane, by using the knowledge of the perspective parallel lines. Furthermore, a robust algorithm, called CHEVP, is presented for providing a good initial position for the B-Snake. Also, a minimum error method by Minimum Mean Square Error (MMSE) is proposed to determine the control points of the B-Snake model by the overall image forces on two sides of lane. Experimental results show that the proposed method is robust against noise, shadows, and illumination variations in the captured road images. It is also applicable to the marked and the unmarked roads, as well as the dash and the solid paint line roads."} {"_id": "1c0f7854c14debcc34368e210568696a01c40573", "title": "Using vanishing points for camera calibration", "text": "In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras are estimated from an image stereo pair of a suitable planar pattern. Firstly, by matching the corresponding vanishing points in the two images the rotation matrix can be computed, then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence."} {"_id": "235aff8bdb65654163110b35f268de6933814c49", "title": "Realtime lane tracking of curved local road", "text": "A lane detection system is an important component of many intelligent transportation systems. We present a robust realtime lane tracking algorithm for a curved local road. First, we present a comparative study to find a good realtime lane marking classifier.
Once lane markings are detected, they are grouped into many lane boundary hypotheses represented by constrained cubic spline curves. We present a robust hypothesis generation algorithm using a particle filtering technique and a RANSAC (random sample consensus) algorithm. We introduce a probabilistic approach to group lane boundary hypotheses into left and right lane boundaries. The proposed grouping approach can be applied to general part-based object tracking problems. It incorporates a likelihood-based object recognition technique into a Markov-style process. An experimental result on local streets shows that the suggested algorithm is very reliable."} {"_id": "514ee2a4d6dec51d726012bd74b32b1e05f13271", "title": "The Ontological Foundation of REA Enterprise Information Systems", "text": "Philosophers have studied ontologies for centuries in their search for a systematic explanation of existence: \u201cWhat kind of things exist?\u201d Recently, ontologies have emerged as a major research topic in the fields of artificial intelligence and knowledge management where they address the content issue: \u201cWhat kind of things should we represent?\u201d The answer to that question differs with the scope of the ontology. Ontologies that are subject-independent are called upper-level ontologies, and they attempt to define concepts that are shared by all domains, such as time and space. Domain ontologies, on the other hand, attempt to define the things that are relevant to a specific application domain. Both types of ontologies are becoming increasingly important in the era of the Internet where consistent and machine-readable semantic definitions of economic phenomena become the language of e-commerce. In this paper, we propose the conceptual accounting framework of the Resource-Event-Agent (REA) model of McCarthy (1982) as an enterprise domain ontology, and we build upon the initial ontology work of Geerts and McCarthy (2000) which explored REA with respect to the ontological categorizations of John Sowa (1999). Because of its conceptual modeling heritage, REA already resembles an established ontology in many declarative (categories) and procedural (axioms) respects, and we also propose here to extend formally that framework both (1) vertically in terms of entrepreneurial logic (value chains) and workflow detail, and (2) horizontally in terms of type and commitment images of enterprise economic phenomena. A strong emphasis throughout the paper is given to the microeconomic foundations of the category definitions."} {"_id": "944692d5d33fbc5f42294a8310380e0b057a1320", "title": "Dual- and Multiband U-Slot Patch Antennas", "text": "A wide band patch antenna fed by an L-probe can be designed for dual- and multi-band application by cutting U-slots on the patch. Simulation and measurement results are presented to illustrate this design."} {"_id": "6800fbe3314be9f638fb075e15b489d1aadb3030", "title": "Advances in Collaborative Filtering", "text": "The collaborative filtering (CF) approach to recommenders has recently enjoyed much interest and progress. The fact that it played a central role within the recently completed Netflix competition has contributed to its popularity. This chapter surveys the recent progress in the field. Matrix factorization techniques, which became a first choice for implementing CF, are described together with recent innovations. We also describe several extensions that bring competitive accuracy into neighborhood methods, which used to dominate the field.
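To make the matrix factorization techniques surveyed in that chapter concrete: a toy SGD factorization of (user, item, rating) triples; the latent dimension, learning rate, and regularization below are generic choices, not the chapter's, and biases and implicit feedback are deliberately omitted.

```python
import random

def train_mf(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=20):
    """Toy SGD matrix factorization: predict rating as dot(P[u], Q[i])."""
    P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:              # (user, item, rating) triples
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred                     # prediction error
            for f in range(k):               # regularized gradient step
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)
                Q[i][f] += lr * (e * pu - reg * qi)
    return P, Q

P, Q = train_mf([(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)], n_users=2, n_items=2)
```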
The chapter demonstrates how to utilize temporal models and implicit feedback to extend model accuracy. In passing, we include detailed descriptions of some of the central methods developed for tackling the challenge of the Netflix Prize competition."} {"_id": "12bbec48c8fde83ea276402ffedd2e241e978a12", "title": "VirtualTable: a projection augmented reality game", "text": "VirtualTable is a projection augmented reality installation where users are engaged in an interactive tower defense game. The installation runs continuously and is designed to attract people to a table, which the game is projected onto. Any number of players can join the game for an optional period of time. The goal is to prevent the virtual stylized soot balls, spawning on one side of the table, from reaching the cheese. To stop them, the players can place any kind of object on the table, which will then become part of the game. Depending on the object, it will become either a wall, an obstacle for the soot balls, or a tower that eliminates them within a physical range. The number of enemies is dependent on the number of objects in the field, forcing the players to use strategy and collaboration and not the sheer number of objects to win the game."} {"_id": "ffd76d49439c078a6afc246e6d0638a01ad563f8", "title": "A Context-Aware Usability Model for Mobile Health Applications", "text": "Mobile healthcare is a fast growing area of research that capitalizes on mobile technologies and wearables to provide realtime and continuous monitoring and analysis of vital signs of users. Yet, most of the current applications are developed for the general population without taking into consideration the context and needs of different user groups. Designing and developing mobile health applications and diaries according to the user context can significantly improve the quality of user interaction and encourage application use. In this paper, we propose a user context model and a set of usability attributes for developing mobile applications in healthcare. The proposed model and the selected attributes are integrated into a mobile application development framework to provide user-centered and context-aware guidelines. To validate our framework, a mobile diary was implemented for patients undergoing Peritoneal Dialysis (PD) and tested with real users."} {"_id": "8deafc34941a79b9cfc348ab63ec51752c7b1cde", "title": "New approach for clustering of big data: DisK-means", "text": "The exponential growth in the amount of data gathered from various sources has resulted in the need for more efficient algorithms to quickly analyze large datasets. Clustering techniques, like K-Means, are useful in analyzing data in a parallel fashion. K-Means largely depends upon a proper initialization to produce optimal results. The K-means++ initialization algorithm provides a solution by supplying a good initial set of centres to the K-Means algorithm. However, its inherent sequential nature makes it suffer from various limitations when applied to large datasets. For instance, it makes k iterations to find k centres. In this paper, we present an algorithm that attempts to overcome the drawbacks of previous algorithms.
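For reference, the sequential k-means++ seeding whose k-pass nature the DisK-means abstract criticizes looks roughly like this (a plain-Python sketch of the standard D^2-weighted sampling; the example points are invented):

```python
import random

def kmeanspp_seeds(points, k):
    """Standard k-means++ seeding: note the k sequential passes."""
    centers = [random.choice(points)]
    for _ in range(k - 1):
        # Squared distance from each point to its nearest chosen center.
        d2 = [min(sum((p[j] - c[j]) ** 2 for j in range(len(p)))
                  for c in centers) for p in points]
        r, acc = random.uniform(0, sum(d2)), 0.0
        for p, w in zip(points, d2):   # D^2-weighted roulette-wheel pick
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

seeds = kmeanspp_seeds([(0.0, 0.0), (1.0, 1.0), (8.0, 8.0), (9.0, 9.0)], k=2)
```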
Our work provides a method to select a good initial seeding in less time, facilitating fast and accurate cluster analysis over large datasets."} {"_id": "455d562bf02dcb5161c98668a5f5e470d02b70b8", "title": "A probabilistic constrained clustering for transfer learning and image category discovery", "text": "Neural network-based clustering has recently gained popularity, and in particular a constrained clustering formulation has been proposed to perform transfer learning and image category discovery using deep learning. The core idea is to formulate a clustering objective with pairwise constraints that can be used to train a deep clustering network; therefore the cluster assignments and their underlying feature representations are jointly optimized end-to-end. In this work, we provide a novel clustering formulation to address scalability issues of previous work in terms of optimizing deeper networks and larger numbers of categories. The proposed objective directly minimizes the negative log-likelihood of cluster assignment with respect to the pairwise constraints, has no hyper-parameters, and demonstrates improved scalability and performance on both supervised learning and unsupervised transfer learning."} {"_id": "e6bef595cb78bcad4880aea6a3a73ecd32fbfe06", "title": "Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach", "text": "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, whereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains."} {"_id": "d77d2ab03f891d8f0822083020486a6de1f2900f", "title": "EEG Classification of Different Imaginary Movements within the Same Limb", "text": "The task of discriminating the motor imagery of different movements within the same limb using electroencephalography (EEG) signals is challenging because these imaginary movements have close spatial representations on the motor cortex area. There is, however, a pressing need to succeed in this task. The reason is that the ability to classify different same-limb imaginary movements could increase the number of control dimensions of a brain-computer interface (BCI). In this paper, we propose a 3-class BCI system that discriminates EEG signals corresponding to rest, imaginary grasp movements, and imaginary elbow movements. In addition, the differences between simple motor imagery and goal-oriented motor imagery in terms of their topographical distributions and classification accuracies are also investigated. To the best of our knowledge, both problems have not been explored in the literature. Based on the EEG data recorded from 12 able-bodied individuals, we have demonstrated that same-limb motor imagery classification is possible.
For the binary classification of imaginary grasp and elbow (goal-oriented) movements, the average accuracy achieved is 66.9%. For the 3-class problem of discriminating rest against imaginary grasp and elbow movements, the average classification accuracy achieved is 60.7%, which is greater than the random classification accuracy of 33.3%. Our results also show that goal-oriented imaginary elbow movements lead to a better classification performance compared to simple imaginary elbow movements. This proposed BCI system could potentially be used in controlling a robotic rehabilitation system, which can assist stroke patients in performing task-specific exercises."} {"_id": "de0f84359078ec9ba79f4d0061fe73f6cac6591c", "title": "Single-Stage Single-Switch Four-Output Resonant LED Driver With High Power Factor and Passive Current Balancing", "text": "A resonant single-stage single-switch four-output LED driver with high power factor and passive current balancing is proposed. By controlling one output current, the other output currents of the four-output LED driver can be controlled via passive current balancing, which makes its control simple. When the magnetizing inductor current operates in critical conduction mode, unity power factor is achieved. The proposed LED driver uses only one active switch and one magnetic component, thus it benefits from low cost, small volume, and light weight. Moreover, high-efficiency performance is achieved due to single-stage power conversion and soft-switching characteristics. The characteristics of the proposed LED driver are studied in this paper and experimental results of two 110-W four-output isolated LED drivers are provided to verify the studied results."} {"_id": "1924ae6773f09efcfc791454d42a3ec53207a815", "title": "Flexible Ambiguity Resolution and Incompleteness Detection in Requirements Descriptions via an Indicator-Based Configuration of Text Analysis Pipelines", "text": "Natural language software requirements descriptions enable end users to formulate their wishes and expectations for a future software product without much prior knowledge in requirements engineering. However, these descriptions are susceptible to linguistic inaccuracies such as ambiguities and incompleteness that can harm the development process. There are a number of software solutions that can detect deficits in requirements descriptions and partially solve them, but they are often hard to use and not suitable for end users. For this reason, we develop a software system that helps end-users to create unambiguous and complete requirements descriptions by combining existing expert tools and controlling them using automatic compensation strategies. In order to recognize the necessity of individual compensation methods in the descriptions, we have developed linguistic indicators, which we present in this paper. Based on these indicators, the whole text analysis pipeline is ad-hoc configured and thus adapted to the individual circumstances of a requirements description."} {"_id": "727774c3a911d45ea6fe2d4ad66fd3b453a18c99", "title": "Correlating low-level image statistics with users\u2019 rapid aesthetic and affective judgments of web pages", "text": "In this paper, we report a study that examines the relationship between image-based computational analyses of web pages and users' aesthetic judgments about the same image material.
Web pages were iteratively decomposed into quadrants of minimum entropy (quadtree decomposition) based on low-level image statistics, to permit a characterization of these pages in terms of their respective organizational symmetry, balance and equilibrium. These attributes were then evaluated for their correlation with human participants' subjective ratings of the same web pages on four aesthetic and affective dimensions. Several of these correlations were quite large and revealed interesting patterns in the relationship between low-level (i.e., pixel-level) image statistics and design-relevant dimensions."} {"_id": "21c76cc8ebfb9c112c2594ce490b47e458b50e31", "title": "American Sign Language Recognition Using Leap Motion Sensor", "text": "In this paper, we present an American Sign Language recognition system using a compact and affordable 3D motion sensor. The palm-sized Leap Motion sensor provides a much more portable and economical solution than CyberGlove or Microsoft Kinect used in existing studies. We apply k-nearest neighbor and support vector machine to classify the 26 letters of the English alphabet in American Sign Language using the derived features from the sensory data. The experimental results show that the highest average classification rates of 72.78% and 79.83% were achieved by k-nearest neighbor and support vector machine, respectively. We also provide detailed discussions on the parameter setting in machine learning methods and accuracy of specific alphabet letters in this paper."} {"_id": "519f5892938d4423cecc999b6e489b72fc0d0ca7", "title": "Cognitive, emotional, and behavioral considerations for chronic pain management in the Ehlers-Danlos syndrome hypermobility-type: a narrative review.", "text": "BACKGROUND\nEhlers-Danlos syndrome (EDS) hypermobility-type is the most common hereditary disorder of the connective tissue. The tissue fragility characteristic of this condition leads to multi-systemic symptoms in which pain, often severe, chronic, and disabling, is the most commonly experienced. Clinical observations suggest that the complex patient with EDS hypermobility-type is refractory toward several biomedical and physical approaches. In this context and in accordance with the contemporary conceptualization of pain (biopsychosocial perspective), the identification of psychological aspects involved in the pain experience can be useful to improve interventions for this under-recognized pathology.\n\n\nPURPOSE\nReview of the literature on joint hypermobility and EDS hypermobility-type concerning psychological factors linked to pain chronicity and disability.\n\n\nMETHODS\nA comprehensive search was performed using scientific online databases and reference lists, encompassing publications reporting quantitative and qualitative research as well as unpublished literature.\n\n\nRESULTS\nDespite scarce research, psychological factors associated with EDS hypermobility-type that potentially affect pain chronicity and disability were identified. These are cognitive problems and attention to body sensations, negative emotions, and unhealthy patterns of activity (hypo/hyperactivity).\n\n\nCONCLUSIONS\nAs in other chronic pain conditions, these aspects should be more explored in EDS hypermobility-type, and integrated into chronic pain prevention and management programs.
Implications for Rehabilitation: Clinicians should be aware that joint hypermobility may be associated with other health problems, and in its presence suspect a heritable disorder of connective tissue such as the Ehlers-Danlos syndrome (EDS) hypermobility-type, in which chronic pain is one of the most frequent and invalidating symptoms. It is necessary to explore the psychosocial functioning of patients as part of the overall chronic pain management in the EDS hypermobility-type, especially when they do not respond to biomedical approaches, as psychological factors may be operating against rehabilitation. Further research on the psychological factors linked to pain chronicity and disability in the EDS hypermobility-type is needed."} {"_id": "7d2fda30e52c39431dbb90ae065da036a55acdc7", "title": "A brief review: factors affecting the length of the rest interval between resistance exercise sets.", "text": "Research has indicated that multiple sets are superior to single sets for maximal strength development. However, whether maximal strength gains are achieved may depend on the ability to sustain a consistent number of repetitions over consecutive sets. A key factor that determines the ability to sustain repetitions is the length of rest interval between sets. The length of the rest interval is commonly prescribed based on the training goal, but may vary based on several other factors. The purpose of this review was to discuss these factors in the context of different training goals. When training for muscular strength, the magnitude of the load lifted is a key determinant of the rest interval prescribed between sets. For loads less than 90% of 1 repetition maximum, 3-5 minutes rest between sets allows for greater strength increases through the maintenance of training intensity. However, when testing for maximal strength, 1-2 minutes rest between sets might be sufficient between repeated attempts. When training for muscular power, a minimum of 3 minutes rest should be prescribed between sets of repeated maximal effort movements (e.g., plyometric jumps). When training for muscular hypertrophy, consecutive sets should be performed prior to when full recovery has taken place. Shorter rest intervals of 30-60 seconds between sets have been associated with higher acute increases in growth hormone, which may contribute to the hypertrophic effect. When training for muscular endurance, an ideal strategy might be to perform resistance exercises in a circuit, with shorter rest intervals (e.g., 30 seconds) between exercises that involve dissimilar muscle groups, and longer rest intervals (e.g., 3 minutes) between exercises that involve similar muscle groups. In summary, the length of the rest interval between sets is only 1 component of a resistance exercise program directed toward different training goals.
Prescribing the appropriate rest interval does not ensure a desired outcome if other components such as intensity and volume are not prescribed appropriately."} {"_id": "fe0643f3405c22fe7ca0b7d1274a812d6e3e5a11", "title": "Silicon carbide power MOSFETs: Breakthrough performance from 900 V up to 15 kV", "text": "Since Cree, Inc.'s 2nd generation 4H-SiC MOSFETs were commercially released with a specific on-resistance (RON,SP) of 5 m\u03a9\u00b7cm2 for a 1200 V-rating in early 2013, we have further optimized the device design and fabrication processes as well as greatly expanded the voltage ratings from 900 V up to 15 kV for a much wider range of high-power, high-frequency, and high-voltage energy-conversion and transmission applications. Using these next-generation SiC MOSFETs, we have now achieved new breakthrough performance for voltage ratings from 900 V up to 15 kV with a RON,SP as low as 2.3 m\u03a9\u00b7cm2 for a breakdown voltage (BV) of 1230 V and 900 V-rating, 2.7 m\u03a9\u00b7cm2 for a BV of 1620 V and 1200 V-rating, 3.38 m\u03a9\u00b7cm2 for a BV of 1830 V and 1700 V-rating, 10.6 m\u03a9\u00b7cm2 for a BV of 4160 V and 3300 V-rating, 123 m\u03a9\u00b7cm2 for a BV of 12 kV and 10 kV-rating, and 208 m\u03a9\u00b7cm2 for a BV of 15.5 kV and 15 kV-rating. In addition, due to the lack of current tailing during the bipolar device switching turn-off, the SiC MOSFETs reported in this work exhibit incredible high-frequency switching performance compared with their silicon counterparts."} {"_id": "011d4ccb74f32f597df54ac8037a7903bd95038b", "title": "The evolution of human skin coloration.", "text": "Skin color is one of the most conspicuous ways in which humans vary and has been widely used to define human races. Here we present new evidence indicating that variations in skin color are adaptive, and are related to the regulation of ultraviolet (UV) radiation penetration in the integument and its direct and indirect effects on fitness. Using remotely sensed data on UV radiation levels, hypotheses concerning the distribution of the skin colors of indigenous peoples relative to UV levels were tested quantitatively in this study for the first time. The major results of this study are: (1) skin reflectance is strongly correlated with absolute latitude and UV radiation levels. The highest correlation between skin reflectance and UV levels was observed at 545 nm, near the absorption maximum for oxyhemoglobin, suggesting that the main role of melanin pigmentation in humans is regulation of the effects of UV radiation on the contents of cutaneous blood vessels located in the dermis. (2) Predicted skin reflectances deviated little from observed values. (3) In all populations for which skin reflectance data were available for males and females, females were found to be lighter skinned than males. (4) The clinal gradation of skin coloration observed among indigenous peoples is correlated with UV radiation levels and represents a compromise solution to the conflicting physiological requirements of photoprotection and vitamin D synthesis. The earliest members of the hominid lineage probably had a mostly unpigmented or lightly pigmented integument covered with dark black hair, similar to that of the modern chimpanzee. The evolution of a naked, darkly pigmented integument occurred early in the evolution of the genus Homo. A dark epidermis protected sweat glands from UV-induced injury, thus ensuring the integrity of somatic thermoregulation.
Of greater significance to individual reproductive success was that highly melanized skin protected against UV-induced photolysis of folate (Branda & Eaton, 1978, Science 201, 625-626; Jablonski, 1992, Proc. Australas. Soc. Hum. Biol. 5, 455-462; 1999, Med. Hypotheses 52, 581-582), a metabolite essential for normal development of the embryonic neural tube (Bower & Stanley, 1989, The Medical Journal of Australia 150, 613-619; Medical Research Council Vitamin Research Group, 1991, The Lancet 338, 31-37) and spermatogenesis (Cosentino et al., 1990, Proc. Natn. Acad. Sci. U.S.A. 87, 1431-1435; Mathur et al., 1977, Fertility Sterility 28, 1356-1360). As hominids migrated outside of the tropics, varying degrees of depigmentation evolved in order to permit UVB-induced synthesis of previtamin D(3). The lighter color of female skin may be required to permit synthesis of the relatively higher amounts of vitamin D(3) necessary during pregnancy and lactation. Skin coloration in humans is adaptive and labile. Skin pigmentation levels have changed more than once in human evolution. Because of this, skin coloration is of no value in determining phylogenetic relationships among modern human groups."} {"_id": "d87d70ecd0fdf0976cebbeaeacf25ad9872ffde1", "title": "Robust and false positive free watermarking in IWT domain using SVD and ABC", "text": "Watermarking is used to protect copyrighted materials from being misused and helps to establish the lawful ownership. The security of any watermarking scheme is always a prime concern for the developer. In this work, the robustness and security issues of IWT (integer wavelet transform) and SVD (singular value decomposition) based watermarking are explored. Generally, SVD-based watermarking techniques suffer from the false positive problem, which can even lead to authenticating the wrong owner. We propose a novel solution to the false positive problem that arises in the SVD-based approach. Firstly, IWT is employed on the host image and then SVD is performed on this transformed host. The properties of IWT and SVD help in achieving a high degree of robustness. Singular values are used for the watermark embedding. In order to further improve the quality of watermarking, the optimization of the scaling factor (mixing ratio) is performed with the help of the artificial bee colony (ABC) algorithm. A comparison with other schemes is performed to show the superiority of the proposed scheme. \u00a9 2015 Elsevier Ltd. All rights reserved."}
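The embedding step of the watermarking abstract above can be sketched as follows. Two loudly flagged assumptions: a plain Haar DWT from PyWavelets stands in for the integer wavelet transform, and a fixed scaling factor `alpha` stands in for the value the paper tunes with the artificial bee colony algorithm.

```python
# Minimal sketch: embed a watermark into the singular values of the LL
# subband, then invert the transforms. Haar DWT approximates the IWT here.
import numpy as np
import pywt

def embed(host, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    S_marked = S + alpha * watermark        # embed in the singular values
    LL_marked = (U * S_marked) @ Vt         # rebuild the LL subband
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

host = np.random.default_rng(1).integers(0, 256, size=(64, 64))
wm = np.random.default_rng(2).random(32)    # sized to the LL spectrum (32x32)
marked = embed(host, wm)
print("embedding distortion (RMSE):", np.sqrt(np.mean((marked - host) ** 2)))
```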
{"_id": "ae3ebe6c69fdb19e12d3218a5127788fae269c10", "title": "A Literature Survey of Benchmark Functions For Global Optimization Problems", "text": "Test functions are important to validate and compare the performance of optimization algorithms. There have been many test or benchmark functions reported in the literature; however, there is no standard list or set of benchmark functions. Ideally, test functions should have diverse properties so that they can be truly useful to test new algorithms in an unbiased way. For this purpose, we have reviewed and compiled a rich set of 175 benchmark functions for unconstrained optimization problems with diverse properties in terms of modality, separability, and valley landscape. This is by far the most complete set of functions in the literature, and it can be expected that this complete set of functions will be used for validating new optimization algorithms in the future."}
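For concreteness, a few of the classic functions from such benchmark sets are shown below in vectorized form; the three chosen here (well-known in the literature, though the survey's own list is far larger) illustrate the modality and valley-landscape properties the abstract mentions.

```python
# Sphere is unimodal and separable, Rastrigin is highly multimodal and
# separable, and Rosenbrock has a narrow curved valley; all minima equal 0.
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

print(sphere(np.zeros(5)), rastrigin(np.zeros(5)), rosenbrock(np.ones(5)))
```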
{"_id": "d28235adc2c8c6fdfaa474bc2bab931129149fd6", "title": "Approaches to Measuring the Difficulty of Games in Dynamic Difficulty Adjustment Systems", "text": "In this article, three approaches are proposed for measuring difficulty that can be useful in developing Dynamic Difficulty Adjustment (DDA) systems in different game genres. Our analysis of the existing DDA systems shows that there are three ways to measure the difficulty of the game: using the formal model of gameplay, using the features of the game, and direct examination of the player. These approaches are described in this article and supplemented by appropriate examples of DDA implementations. In addition, the article describes the distinction between task complexity and task difficulty in DDA systems. It is suggested to separate task complexity (especially structural complexity), which is an objective characteristic of the task, from task difficulty, which is related to the interaction between the task and the task performer."} {"_id": "5c881260bcc64070b2b33c10d28f23f793b8344f", "title": "A low-voltage, low quiescent current, low drop-out regulator", "text": "The demand for low voltage, low drop-out (LDO) regulators is increasing because of the growing demand for portable electronics, i.e., cellular phones, pagers, laptops, etc. LDOs are used in conjunction with dc-dc converters as well as standalone parts. In power supply systems, they are typically cascaded onto switching regulators to suppress noise and provide a low noise output. The need for low voltage is innate to portable low power devices and corroborated by lower breakdown voltages resulting from reductions in feature size. Low quiescent current in a battery operated system is an intrinsic performance parameter because it partially determines battery life. This paper discusses some techniques that enable the practical realizations of low quiescent current LDOs at low voltages and in existing technologies. The proposed circuit exploits the frequency response dependence on load-current to minimize quiescent current flow. Moreover, the output current capabilities of MOS power transistors are enhanced and drop-out voltages are decreased for a given device size. Other applications, like dc-dc converters, can also reap the benefits of these enhanced MOS devices. An LDO prototype incorporating the aforementioned techniques was fabricated. The circuit was operable down to input voltages of 1 V with a zero-load quiescent current flow of 23 \u03bcA. Moreover, the regulator provided 18 and 50 mA of output current at input voltages of 1 and 1.2 V, respectively."} {"_id": "950ff860dbc8a24fc638ac942ce9c1f51fb24899", "title": "Where to Go Next: A Spatio-temporal LSTM model for Next POI Recommendation", "text": "Next Point-of-Interest (POI) recommendation is of great value for both location-based service providers and users. Recently, Recurrent Neural Networks (RNNs) have proved effective on sequential recommendation tasks. However, existing RNN solutions rarely consider the spatio-temporal intervals between neighboring check-ins, which are essential for modeling user check-in behaviors in next POI recommendation. In this paper, we propose a new variant of LSTM, named ST-LSTM, which implements time gates and distance gates into LSTM to capture the spatio-temporal relation between successive check-ins. Specifically, one time gate and one distance gate are designed to control the short-term interest update, and another time gate and distance gate are designed to control the long-term interest update. Furthermore, to reduce the number of parameters and improve efficiency, we further integrate coupled input and forget gates with our proposed model. Finally, we evaluate the proposed model using four real-world datasets from various location-based social networks. Our experimental results show that our model significantly outperforms the state-of-the-art approaches for next POI recommendation."}
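The gating idea in the ST-LSTM abstract above can be illustrated with a single cell step. This is a loose numpy sketch under stated assumptions: the extra time gate and distance gate here are simple sigmoids of the elapsed time `dt` and distance `dd`, whereas the paper's exact parameterization (including the coupled-gate variant) differs.

```python
# One step of an LSTM-like cell where a time gate T and a distance gate D
# modulate how strongly the current check-in updates the cell state.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def st_lstm_step(x, h, c, dt, dd, W):
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z)                              # input gate
    f = sigmoid(W["f"] @ z)                              # forget gate
    o = sigmoid(W["o"] @ z)                              # output gate
    g = np.tanh(W["g"] @ z)                              # candidate state
    T = sigmoid(W["t"] @ z + W["dt"] * np.log1p(dt))     # time gate
    D = sigmoid(W["d"] @ z + W["dd"] * np.log1p(dd))     # distance gate
    c_new = f * c + i * T * D * g                        # spatio-temporal update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

dim, hid = 8, 16
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(hid, dim + hid)) for k in "ifogtd"}
W["dt"], W["dd"] = 0.5, 0.5          # scalar weights on elapsed time/distance
h, c = np.zeros(hid), np.zeros(hid)
h, c = st_lstm_step(rng.normal(size=dim), h, c, dt=3600.0, dd=1.2, W=W)
print(h.shape, c.shape)
```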
{"_id": "f99a50ce62845c62d9fcdec277e0857350534cc9", "title": "Absorptive Frequency-Selective Transmission Structure With Square-Loop Hybrid Resonator", "text": "A novel design of an absorptive frequency-selective transmission structure (AFST) is proposed. This structure is based on the design of a frequency-dependent lossy layer with square-loop hybrid resonator (SLHR). The parallel resonance provided by the hybrid resonator is utilized to bypass the lossy path and improve the insertion loss. Meanwhile, the series resonance of the hybrid resonator is used for expanding the upper absorption bandwidth. Furthermore, the absorption for out-of-band frequencies is achieved by using four metallic strips with lumped resistors, which are connected with the SLHR. The quantity of lumped elements required in a unit cell can be reduced by at least 50% compared to previous structures. The design guidelines are explained with the aid of an equivalent circuit model. Both simulation and experiment results are presented to demonstrate the performance of our AFST. It is shown that an insertion loss of 0.29 dB at 6.1 GHz and a 112.4% 10 dB reflection reduction bandwidth are obtained under normal incidence."} {"_id": "26f70336acf7247a35d3c0be6308fe29f25d2872", "title": "Implementation of AES-GCM encryption algorithm for high performance and low power architecture Using FPGA", "text": "Evaluation of the Advanced Encryption Standard (AES) algorithm in FPGA is proposed here. This evaluation is compared with other works to show the efficiency. Here we are concerned with two major purposes. The first is to define some of the terms and concepts behind basic cryptographic methods, and to offer a way to compare the myriad cryptographic schemes in use today. The second is to provide some real examples of cryptography in use today. The design uses an iterative looping approach with block and key size of 128 bits, and a lookup table implementation of the S-box. This gives a low complexity architecture and easily achieves low latency as well as high throughput. Simulation and performance results are presented and compared with previously reported designs. Since its acceptance as the adopted symmetric-key algorithm, the Advanced Encryption Standard (AES) and its recently standardized authentication Galois/Counter Mode (GCM) have been utilized in various security-constrained applications. Many of the AES-GCM applications are power and resource constrained and require efficient hardware implementations. In this project, AES-GCM algorithms are evaluated and optimized to identify the high-performance and low-power architectures. The Advanced Encryption Standard (AES) is a specification for the encryption of electronic data. The Cipher Block Chaining (CBC) mode is a confidentiality mode whose encryption process features the combining (\u201cchaining\u201d) of the plaintext blocks with the previous ciphertext blocks. The CBC mode requires an IV to combine with the first plaintext block. The IV need not be secret, but it must be unpredictable. Also, the integrity of the IV should be protected. Galois/Counter Mode (GCM) is a block cipher mode of operation that uses universal hashing over a binary Galois field to provide authenticated encryption. Galois Hash is used for authentication, and the Advanced Encryption Standard (AES) block cipher is used for encryption in counter mode of operation. To obtain the least-complexity S-box, the formulations for the Galois Field (GF) sub-field inversions in GF(2^4) are optimized. By conducting exhaustive simulations for the input transitions, we analyze the synthesis of the AES S-boxes considering the switching activities, gate-level net lists, and parasitic information. Finally, the implementation of the high-performance GF(2^128) multiplier architectures for AES-GCM is presented along with detailed performance information. An optimized coding for the implementation of the Advanced Encryption Standard-Galois Counter Mode has been developed. The speed factor of the algorithm implementation has been targeted and a software code in Verilog HDL has been developed. This implementation is useful in wireless security like military communication and mobile telephony where there is a greater emphasis on the speed of communication."}
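As a software counterpart to the AES-GCM mode described above (not the paper's Verilog implementation), the pyca/cryptography library exposes the same authenticated-encryption primitive: AES in counter mode for confidentiality plus a Galois-hash tag for integrity.

```python
# AES-GCM round trip: encrypt() returns ciphertext || 16-byte tag, and
# decrypt() raises InvalidTag if the ciphertext, nonce, or AAD was tampered.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                    # 96-bit IV; must never repeat per key
aad = b"header authenticated but not encrypted"

aesgcm = AESGCM(key)
ct = aesgcm.encrypt(nonce, b"secret payload", aad)
pt = aesgcm.decrypt(nonce, ct, aad)
assert pt == b"secret payload"
```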
{"_id": "03f64a5989e4d2ecab989d9724ad4cc58f976daf", "title": "Multi-Document Summarization using Sentence-based Topic Models", "text": "Most of the existing multi-document summarization methods decompose the documents into sentences and work directly in the sentence space using a term-sentence matrix. However, the knowledge on the document side, i.e. the topics embedded in the documents, can help the context understanding and guide the sentence selection in the summarization procedure. In this paper, we propose a new Bayesian sentence-based topic model for summarization by making use of both the term-document and term-sentence associations. An efficient variational Bayesian algorithm is derived for model parameter estimation. Experimental results on benchmark data sets show the effectiveness of the proposed model for the multi-document summarization task."} {"_id": "9a1b3247fc7f0abf892a40884169e0ed10d3b684", "title": "Intrusion detection by machine learning: A review", "text": "The popularity of the Internet brings with it risks of network attacks. Intrusion detection is one major research problem in network security, whose aim is to identify unusual access or attacks to secure internal networks. In the literature, intrusion detection systems have been approached by various machine learning techniques. However, there is no review paper that examines and assesses the current status of using machine learning techniques to solve the intrusion detection problems. This chapter reviews 55 related studies in the period between 2000 and 2007 focusing on developing single, hybrid, and ensemble classifiers. Related studies are compared by their classifier design, datasets used, and other experimental setups. Current achievements and limitations in developing intrusion detection systems by machine learning are presented and discussed. A number of future research directions are also provided. 2009 Elsevier Ltd. All rights reserved."} {"_id": "a10d128fd95710308dfee83953c5b26293b9ede7", "title": "Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechanism on SDN environments", "text": "Software Defined Networks (SDNs) based on the OpenFlow (OF) protocol export control-plane programmability of switched substrates. As a result, rich functionality in traffic management, load balancing, routing, firewall configuration, etc. that may pertain to specific flows they control, may be easily developed. In this paper we extend these functionalities with an efficient and scalable mechanism for performing anomaly detection and mitigation in SDN architectures. Flow statistics may reveal anomalies triggered by large scale malicious events (typically massive Distributed Denial of Service attacks) and subsequently assist networked resource owners/operators to raise mitigation policies against these threats. First, we demonstrate that OF statistics collection and processing overloads the centralized control plane, introducing scalability issues. Second, we propose a modular architecture for the separation of the data collection process from the SDN control plane with the employment of sFlow monitoring data. We then report experimental results that compare its performance against native OF approaches that use standard flow table statistics. Both alternatives are evaluated using an entropy-based method on high volume real network traffic data collected from a university campus network. The packet traces were fed to hardware and software OF devices in order to assess flow-based data gathering and related anomaly detection options. We subsequently present experimental results that demonstrate the effectiveness of the proposed sFlow-based mechanism compared to the native OF approach, in terms of overhead imposed on usage of system resources. Finally, we conclude by demonstrating that once a network anomaly is detected and identified, the OF protocol can effectively mitigate it via flow table modifications. 2013 Elsevier B.V. All rights reserved."}
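The entropy-based detection step used in the sFlow/OpenFlow abstract above reduces to a small computation: track the Shannon entropy of a traffic feature distribution per monitoring window and flag windows where it collapses. The sketch below abstracts away the data source (sFlow samples vs. OpenFlow counters) and uses a threshold that is an illustrative assumption.

```python
# Flag windows whose destination-IP entropy drops well below a baseline,
# as happens when a DDoS concentrates traffic onto one target.
import math
from collections import Counter

def window_entropy(dst_ips):
    counts = Counter(dst_ips)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

baseline = window_entropy([f"10.0.0.{i % 50}" for i in range(1000)])  # diverse
attack = window_entropy(["10.0.0.7"] * 950 + [f"10.0.0.{i}" for i in range(50)])

THRESHOLD = 0.5 * baseline   # assumption: alert at half the baseline entropy
for name, h in [("baseline", baseline), ("attack", attack)]:
    print(name, round(h, 2), "ALERT" if h < THRESHOLD else "ok")
```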
{"_id": "c84b10c01a84f26fe8a1c978c919fbe5a9f9a661", "title": "Software-Defined Networking: A Comprehensive Survey", "text": "The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network\u2019s control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms with a focus on aspects such as resiliency, scalability, performance, security, and dependability, as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined"} {"_id": "1821fbfc03a45af816a8d7aef50321654b0aeec0", "title": "Revisiting Traffic Anomaly Detection Using Software Defined Networking", "text": "Despite their exponential growth, home and small office/home office networks continue to be poorly managed. Consequently, security of hosts in most home networks is easily compromised and these hosts are in turn used for large-scale malicious activities without the home users\u2019 knowledge. We argue that the advent of Software Defined Networking (SDN) provides a unique opportunity to effectively detect and contain network security problems in home and home office networks. We show how four prominent traffic anomaly detection algorithms can be implemented in an SDN context using OpenFlow-compliant switches and NOX as a controller. Our experiments indicate that these algorithms are significantly more accurate in identifying malicious activities in the home networks as compared to the ISP. Furthermore, the efficiency analysis of our SDN implementations on a programmable home network router indicates that the anomaly detectors can operate at line rates without introducing any performance penalties for the home network traffic."} {"_id": "3192a953370bc8bf4b906261e8e2596355d2b610", "title": "A clean slate 4D approach to network control and management", "text": "Today's data networks are surprisingly fragile and difficult to manage. We argue that the root of these problems lies in the complexity of the control and management planes--the software and protocols coordinating network elements--and particularly the way the decision logic and the distributed-systems issues are inexorably intertwined. We advocate a complete refactoring of the functionality and propose three key principles--network-level objectives, network-wide views, and direct control--that we believe should underlie a new architecture. Following these principles, we identify an extreme design point that we call "4D," after the architecture's four planes: decision, dissemination, discovery, and data. The 4D architecture completely separates an AS's decision logic from protocols that govern the interaction among network elements. The AS-level objectives are specified in the decision plane, and enforced through direct configuration of the state that drives how the data plane forwards packets.
In the 4D architecture, the routers and switches simply forward packets at the behest of the decision plane, and collect measurement data to aid the decision plane in controlling the network. Although 4D would involve substantial changes to today's control and management planes, the format of data packets does not need to change; this eases the deployment path for the 4D architecture, while still enabling substantial innovation in network control and management. We hope that exploring an extreme design point will help focus the attention of the research and industrial communities on this crucially important and intellectually challenging area."} {"_id": "883e3a3950968ebf8d03d3281076671538660c7c", "title": "Sensing spatial distribution of urban land use by integrating points-of-interest and Google Word2Vec model", "text": "Urban land use information plays an essential role in a wide variety of urban planning and environmental monitoring processes. During the past few decades, with the rapid technological development of remote sensing (RS), geographic information systems (GIS) and geospatial big data, numerous methods have been developed to identify urban land use at a fine scale. Points-of-interest (POIs) have been widely used to extract information pertaining to urban land use types and functional zones. However, it is difficult to quantify the relationship between spatial distributions of POIs and regional land use types due to a lack of reliable models. Previous methods may ignore abundant spatial features that can be extracted from POIs. In this study, we establish an innovative framework that detects urban land use distributions at the scale of traffic analysis zones (TAZs) by integrating Baidu POIs and a Word2Vec model. This framework is implemented using Word2Vec, an open-source deep-learning language model released by Google in 2013. First, data for the Pearl River Delta (PRD) are transformed into a TAZ-POI corpus using a greedy algorithm by considering the spatial distributions of TAZs and inner POIs. Then, high-dimensional characteristic vectors of POIs and TAZs are extracted using the Word2Vec model. Finally, to validate the reliability of the POI/TAZ vectors, we implement a K-Means-based clustering model to analyze correlations between the POI/TAZ vectors and deploy TAZ vectors to identify urban land use types using a random forest algorithm (RFA) model. Compared with some state-of-the-art probabilistic topic models (PTMs), the proposed method can efficiently obtain the highest accuracy (OA = 0.8728, kappa = 0.8399). Moreover, the results can be used to help urban planners monitor dynamic urban land use and evaluate the impact of urban planning schemes."}
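The embedding-and-clustering pipeline of the land-use abstract above can be sketched with gensim and scikit-learn. Assumptions flagged up front: the tiny corpus below is invented, and TAZ vectors are taken as the mean of their POI vectors, a simplification of how the paper derives them.

```python
# Each TAZ contributes a "sentence" of POI tokens; skip-gram Word2Vec learns
# POI vectors, mean-pooled TAZ vectors are then clustered with K-Means.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

taz_corpus = [
    ["restaurant", "mall", "cinema", "restaurant"],    # commercial-looking TAZ
    ["school", "residence", "clinic", "residence"],    # residential-looking TAZ
    ["factory", "warehouse", "factory", "logistics"],  # industrial-looking TAZ
]

model = Word2Vec(sentences=taz_corpus, vector_size=32, window=5,
                 min_count=1, sg=1, epochs=200, seed=0)

taz_vectors = np.array([np.mean([model.wv[t] for t in taz], axis=0)
                        for taz in taz_corpus])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(taz_vectors)
print("TAZ cluster labels:", labels)
```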
{"_id": "b0f7423f93e7c6e506c115771ef82440077a732a", "title": "Full virtualization based ARINC 653 partitioning", "text": "As the number of electronic components of avionics systems is significantly increasing, it is desirable to run several avionics applications on a single computing device. In such a system, providing a seamless way to integrate separate applications on a computing device is a very critical issue, as the Integrated Modular Avionics (IMA) concept addresses. In this context, the ARINC 653 standard defines resource partitioning of avionics application software. The virtualization technology has very high potential for providing an optimal implementation of the partition concept. In this paper, we study support for full virtualization based ARINC 653 partitioning. This support includes an extension of the XML-based configuration file format and a hierarchical scheduler for temporal partitioning. We show that our implementation can support well-known VMMs, such as VirtualBox and VMware, and we present basic performance numbers."} {"_id": "5fa463ad51c0fda19cf6a32d851a12eec5e872b1", "title": "Human Identification From Freestyle Walks Using Posture-Based Gait Feature", "text": "With the increase of terrorist threats around the world, human identification research has become a sought-after area of research. Unlike standard biometric recognition techniques, gait recognition is a non-intrusive technique. Both data collection and classification processes can be done without a subject\u2019s cooperation. In this paper, we propose a new model-based gait recognition technique called posture-based gait recognition. It consists of two elements: posture-based features and posture-based classification. Posture-based features are composed of displacements of all joints between current and adjacent frames and center-of-body (CoB) relative coordinates of all joints, where the coordinates of each joint come from its relative position to four joints: hip-center, hip-left, hip-right, and spine joints, from the front forward. The CoB relative coordinate system is a critical part to handle the different observation angle issue. In posture-based classification, posture-based gait features of all frames are considered. The dominant subject becomes the classification result. The posture-based gait recognition technique outperforms the existing techniques in both fixed direction and freestyle walk scenarios, where turning around and changing directions are involved. This suggests that a set of postures and quick movements are sufficient to identify a person. The proposed technique also performs well under the gallery-size test and the cumulative match characteristic test, which implies that the posture-based gait recognition technique is not gallery-size sensitive and is a good potential tool for forensic and surveillance use."} {"_id": "602f775577a5458e8b6c5d5a3cdccc7bb183662c", "title": "Comparing comprehension measured by multiple-choice and open-ended questions.", "text": "This study compared the nature of text comprehension as measured by multiple-choice format and open-ended format questions. Participants read a short text while explaining preselected sentences. After reading the text, participants answered open-ended and multiple-choice versions of the same questions based on their memory of the text content. The results indicated that performance on open-ended questions was correlated with the quality of self-explanations, but performance on multiple-choice questions was correlated with the level of prior knowledge related to the text. These results suggest that open-ended and multiple-choice format questions measure different aspects of comprehension processes. The results are discussed in terms of dual process theories of text comprehension."} {"_id": "ebeca41ac60c2151137a45fcc5d1a70a419cad65", "title": "Current location-based next POI recommendation", "text": "The availability of large volumes of community-contributed location data enables many location-based services, and the importance of these services has attracted many industries and academic researchers. In this paper we propose a new recommender system that recommends a new POI for the next hour.
First, we find users with similar check-in sequences and depict their check-in sequences as a directed graph, and we then find the user's current location. To recommend a new POI for the next hour, we refer to the directed graph we have created. Our algorithm considers both the temporal factor (i.e., the recommendation time) and the spatial factor (distance) at the same time. We conduct an experiment on random data collected from Foursquare and Gowalla. Experimental results show that our proposed model outperforms state-of-the-art collaborative-filtering-based recommender techniques."}
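A toy version of that graph-based recommendation is sketched below. The scoring rule, which combines a temporal weight for visits near the target hour with a distance penalty, is an assumption for illustration; the paper's exact weighting may differ, and the check-in data here is invented.

```python
# Build a directed transition graph from similar users' check-in sequences,
# then score candidate next POIs reachable from the current location.
import math
from collections import defaultdict

sequences = [                                  # (poi, check-in hour) sequences
    [("cafe", 8), ("office", 9), ("gym", 18)],
    [("cafe", 9), ("park", 10), ("gym", 19)],
    [("office", 9), ("restaurant", 12)],
]
coords = {"cafe": (0, 0), "office": (1, 0), "park": (0, 2),
          "gym": (2, 2), "restaurant": (1, 1)}

graph = defaultdict(list)                      # edge: current -> (next, hour)
for seq in sequences:
    for (a, _), (b, hb) in zip(seq, seq[1:]):
        graph[a].append((b, hb))

def recommend(current, hour):
    scores = {}
    for nxt, h in graph[current]:
        temporal = math.exp(-abs(h - hour))              # prefer matching hours
        dist = math.dist(coords[current], coords[nxt])   # penalize far POIs
        scores[nxt] = scores.get(nxt, 0) + temporal / (1 + dist)
    return max(scores, key=scores.get)

print(recommend("cafe", hour=9))   # "office" wins at commute time
```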
{"_id": "08952d434a9b6f1dc9281f2693b2dd855edcda6b", "title": "SiRiUS: Securing Remote Untrusted Storage", "text": "This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file level sharing. Key management and revocation are simple with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations."} {"_id": "adeca3a75008d92cb52f5f2561dda7005a8814a4", "title": "Calibrated fuzzy AHP for current bank account selection", "text": "Fuzzy AHP is a hybrid method that combines Fuzzy Set Theory and AHP. It has been developed to take into account uncertainty and imprecision in the evaluations. Fuzzy Set Theory requires the definition of a membership function. At present, there are no indications of how these membership functions can be constructed. In this paper, a way to calibrate the membership functions with comparisons given by the decision-maker on alternatives with known measures is proposed. This new technique is illustrated in a study measuring the most important factors in selecting a student current account. \u00a9 2012 Elsevier Ltd. All rights reserved."} {"_id": "539b15c0215582d12e2228d486374651c21ac75d", "title": "Lane-Change Fuzzy Control in Autonomous Vehicles for the Overtaking Maneuver", "text": "The automation of the overtaking maneuver is considered to be one of the toughest challenges in the development of autonomous vehicles. This operation involves two vehicles (the overtaking and the overtaken) cooperatively driving, as well as the surveillance of any other vehicles that are involved in the maneuver. This operation consists of two lane changes: one from the right to the left lane of the road, and the other to return to the right lane after passing. Lane-change maneuvers have been used to move into or out of a circulation lane or platoon; however, overtaking operations have not received much coverage in the literature. In this paper, we present an overtaking system for autonomous vehicles equipped with path-tracking and lane-change capabilities. The system uses fuzzy controllers that mimic human behavior and reactions during overtaking maneuvers. The system is based on the information that is supplied by a high-precision Global Positioning System and a wireless network environment. It is able to drive an automated vehicle and overtake a second vehicle that is driving in the same lane of the road."} {"_id": "a306754e556446a5199e258f464fd6e26be547fe", "title": "Safety and Efficacy of Selective Neurectomy of the Gastrocnemius Muscle for Calf Reduction in 300 Cases", "text": "Liposuction alone is not always sufficient to correct the shape of the lower leg, and muscle reduction may be necessary. To assess the outcomes of a new technique of selective neurectomy of the gastrocnemius muscle to correct calf hypertrophy. Between October 2007 and May 2010, 300 patients underwent neurectomy of the medial and lateral heads of the gastrocnemius muscle at the Department of\u00a0Cosmetic and Plastic Surgery, the Second People\u2019s Hospital of Guangdong Province (Guangzhou, China) to correct the shape of their lower legs. Follow-up data from these 300 patients were analyzed retrospectively. Cosmetic results were evaluated independently by the surgeon, the patient, and a third party. Preoperative and postoperative calf circumferences were compared. The Fugl-Meyer motor function assessment was evaluated 3\u00a0months after surgery. The average reduction in calf circumference was 3.2\u00a0\u00b1\u00a01.2\u00a0cm. The Fugl-Meyer scores were normal in all patients both before and 3\u00a0months after surgery. A normal calf shape was achieved in all patients. Six patients complained of fatigue while walking and four of scar pigmentation, but in all cases, this resolved within 6\u00a0months. Calf asymmetry was observed in only two patients. The present series suggests that neurectomy of the medial and lateral heads of the gastrocnemius muscle may be safe and effective for correcting the shape of the calves."} {"_id": "f31e0932a2f35a6d7feff20977ce08b5b5398c60", "title": "Structure of the tendon connective tissue.", "text": "Tendons consist of collagen (mostly type I collagen) and elastin embedded in a proteoglycan-water matrix with collagen accounting for 65-80% and elastin approximately 1-2% of the dry mass of the tendon. These elements are produced by tenoblasts and tenocytes, which are the elongated fibroblasts and fibrocytes that lie between the collagen fibers, and are organized in a complex hierarchical scheme to form the tendon proper. Soluble tropocollagen molecules form cross-links to create insoluble collagen molecules which then aggregate progressively into microfibrils and then into electronmicroscopically clearly visible units, the collagen fibrils. A bunch of collagen fibrils forms a collagen fiber, which is the basic unit of a tendon. A fine sheath of connective tissue called endotenon invests each collagen fiber and binds fibers together. A bunch of collagen fibers forms a primary fiber bundle, and a group of primary fiber bundles forms a secondary fiber bundle. A group of secondary fiber bundles, in turn, forms a tertiary bundle, and the tertiary bundles make up the tendon. The entire tendon is surrounded by a fine connective tissue sheath called epitenon. The three-dimensional ultrastructure of tendon fibers and fiber bundles is complex.
Within one collagen fiber, the fibrils are oriented not only longitudinally but also transversely and horizontally. The longitudinal fibers do not run only parallel but also cross each other, forming spirals. Some of the individual fibrils and fibril groups form spiral-type plaits. The basic function of the tendon is to transmit the force created by the muscle to the bone, and, in this way, make joint movement possible. The complex macro- and microstructure of tendons and tendon fibers make this possible. During various phases of movements, the tendons are exposed not only to longitudinal but also to transversal and rotational forces. In addition, they must be prepared to withstand direct contusions and pressures. The above-described three-dimensional internal structure of the fibers forms a buffer medium against forces of various directions, thus preventing damage and disconnection of the fibers."} {"_id": "6939327c1732e027130f0706b6279f78b8ecd2b7", "title": "Flexible Container-Based Computing Platform on Cloud for Scientific Workflows", "text": "Cloud computing is expected to be a promising solution for scientific computing. In this paper, we propose a flexible container-based computing platform to run scientific workflows on cloud. We integrate Galaxy, a popular biology workflow system, with four famous container cluster systems. Preliminary evaluation shows that container cluster systems introduce negligible performance overhead for data intensive scientific workflows, meanwhile, they are able to solve tool installation problem, guarantee reproducibility and improve resource utilization. Moreover, we implement four ways of using Docker, the most popular container tool, for our platform. Docker in Docker and Sibling Docker, which run everything within containers, both help scientists easily deploy our platform on any clouds in a few minutes."} {"_id": "545dd72cd0357995144bb19bef132bcc67a52667", "title": "Voiced-Unvoiced Classification of Speech Using a Neural Network Trained with LPC Coefficients", "text": "Voiced-Unvoiced classification (V-UV) is a well understood but still not perfectly solved problem. It tackles the problem of determining whether a signal frame contains harmonic content or not. This paper presents a new approach to this problem using a conventional multi-layer perceptron neural network trained with linear predictive coding (LPC) coefficients. LPC is a method that results in a number of coefficients that can be transformed to the envelope of the spectrum of the input frame. As a spectrum is suitable for determining the harmonic content, so are the LPC-coefficients. The proposed neural network works reasonably well compared to other approaches and has been evaluated on a small dataset of 4 different speakers."}
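The V-UV pipeline in the preceding abstract, LPC coefficients per frame feeding a small multi-layer perceptron, can be sketched as follows. Synthetic harmonic and noise frames stand in for labeled speech, so the printed accuracy is only illustrative; the LPC order of 12 is likewise an assumption.

```python
# Train an MLP on LPC coefficients to separate voiced (harmonic) from
# unvoiced (noise-like) frames.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sr, n = 16000, 512

def make_frame(voiced):
    t = np.arange(n) / sr
    if voiced:                      # harmonic stack approximating voiced speech
        f0 = rng.uniform(80, 300)
        return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 5))
    return rng.normal(size=n)       # white noise approximating unvoiced speech

X, y = [], []
for label in (0, 1):
    for _ in range(200):
        frame = make_frame(bool(label)).astype(float)
        X.append(librosa.lpc(frame, order=12)[1:])  # drop the leading 1.0
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("V-UV accuracy:", clf.score(X_te, y_te))
```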
{"_id": "89cbcc1e740a4591443ff4765a6ae8df0fdf5554", "title": "Piaget\u2019s Constructivism, Papert\u2019s Constructionism: What\u2019s the difference?", "text": "What is the difference between Piaget's constructivism and Papert\u2019s \u201cconstructionism\u201d? Beyond the mere play on the words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget\u2019s constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children\u2019s ways of doing and thinking evolve over time, and under which circumstance children are more likely to let go of\u2014or hold onto\u2014 their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they\u2019re wrong. Papert\u2019s constructionism, in contrast, focuses more on the art of learning, or \u2018learning to learn\u2019, and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people\u2019s] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world"} {"_id": "19c05a149bb20f27dd0eca0ec3ac847390b2d100", "title": "Microphone array processing for distant speech recognition: Towards real-world deployment", "text": "Distant speech recognition (DSR) holds out the promise of providing a natural human computer interface in that it enables verbal interactions with computers without the necessity of donning intrusive body- or head-mounted devices. Recognizing distant speech robustly, however, remains a challenge. This paper provides an overview of DSR systems based on microphone arrays. In particular, we present recent work on acoustic beamforming for DSR, along with experimental results verifying the effectiveness of the various algorithms described here; beginning from a word error rate (WER) of 14.3% with a single microphone of a 64-channel linear array, our state-of-the-art DSR system achieved a WER of 5.3%, which was comparable to that of 4.2% obtained with a lapel microphone. Furthermore, we report the results of speech recognition experiments on data captured with a popular device, the Kinect [1]. Even for speakers at a distance of four meters from the Kinect, our DSR system achieved acceptable recognition performance on a large vocabulary task, a WER of 24.1%, beginning from a WER of 42.5% with a single array channel."}
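The simplest form of the acoustic beamforming discussed in the DSR abstract above is delay-and-sum: align each microphone signal on the steering delay for a target direction and average. The sketch below assumes a far-field source and a 4-element linear array; the systems in the paper use considerably more advanced processing.

```python
# Delay-and-sum beamformer; fractional delays applied in the frequency domain.
import numpy as np

c, sr = 343.0, 16000                      # speed of sound (m/s), sample rate
mic_x = np.arange(4) * 0.05               # linear array, 5 cm element spacing

def delay_and_sum(signals, theta):
    """signals: (n_mics, n_samples); theta: arrival angle in radians."""
    delays = mic_x * np.sin(theta) / c    # per-microphone steering delay (s)
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / sr)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau), n)
    return out / len(signals)

rng = np.random.default_rng(0)
sigs = rng.normal(size=(4, sr))           # stand-in for captured channel data
enhanced = delay_and_sum(sigs, theta=np.deg2rad(30))
print(enhanced.shape)
```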
{"_id": "142bd1d4e41e5e29bdd87e0d5a145f3c708a3f44", "title": "Ford Campus vision and lidar data set", "text": "This paper describes a data set collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS-LV) and consumer (Xsens MTi-G) inertial measurement unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research campus and downtown Dearborn, Michigan during November-December 2009. The vehicle path trajectory in these data sets contains several large and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and simultaneous localization and mapping (SLAM) algorithms."} {"_id": "1de3c8ddf30b9d6389aebc3bfa8a02a169a7368b", "title": "Mining frequent closed graphs on evolving data streams", "text": "Graph mining is a challenging task by itself, and even more so when processing data streams which evolve in real-time. Data stream mining faces hard constraints regarding time and space for processing, and also needs to provide for concept drift detection. In this paper we present a framework for studying graph pattern mining on time-varying streams. Three new methods for mining frequent closed subgraphs are presented. All methods work on coresets of closed subgraphs, compressed representations of graph sets, and maintain these sets in a batch-incremental manner, but use different approaches to address potential concept drift. An evaluation study on datasets comprising up to four million graphs explores the strength and limitations of the proposed methods. To the best of our knowledge this is the first work on mining frequent closed subgraphs in non-stationary data streams."} {"_id": "31ea3186aa7072a9e25218efe229f5ee3cca3316", "title": "Reinforced Video Captioning with Entailment Rewards", "text": "Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset."} {"_id": "4b944d518b88beeb9b2376975400cabd6e919957", "title": "SDN and Virtualization Solutions for the Internet of Things: A Survey", "text": "The imminent arrival of the Internet of Things (IoT), which consists of a vast number of devices with heterogeneous characteristics, means that future networks need a new architecture to accommodate the expected increase in data generation. Software defined networking (SDN) and network virtualization (NV) are two technologies that promise to cost-effectively provide the scale and versatility necessary for IoT services. In this paper, we survey the state of the art on the application of SDN and NV to IoT. To the best of our knowledge, we are the first to provide a comprehensive description of every possible IoT implementation aspect for the two technologies. We start by outlining the ways of combining SDN and NV. Subsequently, we present how the two technologies can be used in the mobile and cellular context, with emphasis on forthcoming 5G networks. Afterward, we move to the study of wireless sensor networks, arguably the current foremost example of an IoT network. Finally, we review some general SDN-NV-enabled IoT architectures, along with real-life deployments and use-cases. We conclude by giving directions for future research on this topic."} {"_id": "fa16642fe405382cbd407ce1bc22213561185aba", "title": "Non-Invasive Glucose Meter for Android-Based Devices", "text": "This study helps in monitoring the blood glucose level of a patient non-invasively with the aid of an Android device. Diabetes is a metabolic disease characterized by high levels of sugar in the blood, and is considered the fastest growing long-term disease affecting millions of people globally.
The study measures the blood glucose level using a sensor patch through diffused reflectance spectra on the inner side of the forearm. The Arduino microcontroller processes the information from the sensor patch, while the Bluetooth module wirelessly transmits the measured glucose level to the Android device for storing, interpreting and displaying. Results showed that there is no significant difference between the values measured with the commercially available glucose meter and with the created device. Based on the ISO 15197 standard, 39 of the 40 trials conducted, or 97.5%, fell within the acceptable range."} {"_id": "a360a526794df3aa8de96f83df171769a4022642", "title": "Joint Text Embedding for Personalized Content-based Recommendation", "text": "Learning a good representation of text is key to many recommendation applications. Examples include news recommendation where texts to be recommended are constantly published every day. However, most existing recommendation techniques, such as matrix factorization based methods, mainly rely on interaction histories to learn representations of items. While latent factors of items can be learned effectively from user interaction data, in many cases, such data is not available, especially for newly emerged items. In this work, we aim to address the problem of personalized recommendation for completely new items with text information available. We cast the problem as a personalized text ranking problem and propose a general framework that combines text embedding with personalized recommendation. Users and textual content are embedded into latent feature space. The text embedding function can be learned end-to-end by predicting user interactions with items. To alleviate sparsity in interaction data, and leverage large amounts of text data with little or no user interactions, we further propose a joint text embedding model that incorporates unsupervised text embedding with a combination module. Experimental results show that our model can significantly improve the effectiveness of recommendation systems on real-world datasets."} {"_id": "1aa60b5ae893cd93a221bf71b6b264f5aa5ca6b8", "title": "Why Not?", "text": "As humans, we have expectations for the results of any action, e.g. we expect at least one student to be returned when we query a university database for student records. When these expectations are not met, traditional database users often explore datasets via a series of slightly altered SQL queries. Yet most database access is via limited interfaces that deprive end users of the ability to alter their query in any way to garner better understanding of the dataset and result set. Users are unable to question why a particular data item is Not in the result set of a given query. In this work, we develop a model for answers to WHY NOT? queries. We show through a user study the usefulness of our answers, and describe two algorithms for finding the manipulation that discarded the data item of interest. Moreover, we work through two different methods for tracing the discarded data item that can be used with either algorithm.
Using our algorithms, it is feasible for users to find the manipulation that excluded the data item of interest, which can eliminate the need for exhaustive debugging."} {"_id": "f39e21382458bf723e207d0ac649680f9b4dde4a", "title": "Recognition of Offline Handwritten Chinese Characters Using the Tesseract Open Source OCR Engine", "text": "Due to their complex structure and handwriting deformation, offline handwritten Chinese character recognition has been one of the most challenging problems. In this paper, an offline handwritten Chinese character recognition tool has been developed based on the Tesseract open source OCR engine. The tool mainly contributes on the following two points: First, a handwritten Chinese character feature library is generated, which is independent of a specific user's writing style. Second, by preprocessing the input image and adjusting the Tesseract engine, multiple candidate recognition results are output based on weight ranking. The recognition accuracy rate of this tool is above 88% for both the known-user test set and the unknown-user test set. It has been shown that the Tesseract engine is feasible for offline handwritten Chinese character recognition to a certain degree."}
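For reference, this is how the Tesseract engine is typically driven from Python via pytesseract. The simplified-Chinese model name `chi_sim` and the input path are assumptions, and the paper's own contributions, a retrained handwritten feature library and weight-ranked candidate output, are not reproduced here; per-symbol confidences merely hint at how candidate ranking could be supported.

```python
# Run Tesseract on a grayscale image and inspect per-token confidences.
from PIL import Image
import pytesseract

img = Image.open("handwritten_sample.png").convert("L")  # hypothetical input
print(pytesseract.image_to_string(img, lang="chi_sim"))

data = pytesseract.image_to_data(img, lang="chi_sim",
                                 output_type=pytesseract.Output.DICT)
for token, conf in zip(data["text"], data["conf"]):
    if str(token).strip():
        print(token, conf)
```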
{"_id": "3cc0c9a9917f9ed032376fa467838e720701e783", "title": "Gal4 in the Drosophila female germline", "text": "The modular Gal4 system has proven to be an extremely useful tool for conditional gene expression in Drosophila. One limitation has been the inability of the system to work in the female germline. A modified Gal4 system that works throughout oogenesis is presented here. To achieve germline expression, it was critical to change the basal promoter and 3'-UTR in the Gal4-responsive expression vector (generating UASp). Basal promoters and heterologous 3'-UTRs are often considered neutral, but as shown here, can endow qualitative tissue-specificity to a chimeric transcript. The modified Gal4 system was used to investigate the role of the Drosophila FGF homologue branchless, ligand for the FGF receptor breathless, in border cell migration. FGF signaling guides tracheal cell migration in the embryo. However, misexpression of branchless in the ovary had no effect on border cell migration. Thus border cells and tracheal cells appear to be guided differently."} {"_id": "e79b34f6779095a73ba4604291d84bc26802b35e", "title": "Improving Relation Extraction by Pre-trained Language Representations", "text": "Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code."} {"_id": "bad43ffc1c7d07db5990f631334bfa3157a6b134", "title": "Plate-laminated corporate-feed slotted waveguide array antenna at 350-GHz band by silicon process", "text": "A corporate feed slotted waveguide array antenna with broadband characteristics in terms of gain in the 350 GHz band is achieved by measurement for the first time. The etching accuracy for thin laminated plates of the diffusion bonding process with conventional chemical etching is limited to \u00b120 \u03bcm. This limits the use of this process for antenna fabrication in the submillimeter wave band where the fabrication tolerances are very severe. To improve the etching accuracy of the thin laminated plates, a new fabrication process has been developed. Each silicon wafer is etched by DRIE (deep reactive ion etcher) and is plated by gold on the surface. This new fabrication process provides better fabrication tolerances of about \u00b15 \u03bcm using a wafer bond aligner. The thin laminated wafers are then bonded with the diffusion bonding process under high temperature and high pressure. To validate the proposed antenna concepts, an antenna prototype has been designed and fabricated in the 350 GHz band. The 3dB-down gain bandwidth is about 44.6 GHz with this silicon process, while it was about 15 GHz with the conventional process using metal plates, in measurements."} {"_id": "0dacd4593ba6bce441bae37fc3ff7f3b70408ee1", "title": "Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds", "text": "Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (\u03b5, 0)- and (\u03b5, \u03b4)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contributions of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates.
In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median."} {"_id": "24d800e6681a129b7787cbb05d0e224acad70e8d", "title": "Exposure: A Passive DNS Analysis Service to Detect and Report Malicious Domains", "text": "A wide range of malicious activities rely on the domain name service (DNS) to manage their large, distributed networks of infected machines. As a consequence, the monitoring and analysis of DNS queries has recently been proposed as one of the most promising techniques to detect and blacklist domains involved in malicious activities (e.g., phishing, spam, botnets command-and-control, etc.). EXPOSURE is a system we designed to detect such domains in real time, by applying 15 unique features grouped in four categories.\n We conducted a controlled experiment with a large, real-world dataset consisting of billions of DNS requests. The extremely positive results obtained in the tests convinced us to implement our techniques and deploy them as a free, online service. In this article, we present the Exposure system and describe the results and lessons learned from 17 months of its operation. Over this amount of time, the service detected over 100K malicious domains. The statistics about the time of usage, number of queries, and target IP addresses of each domain are also published on a daily basis on the service Web page."} {"_id": "32334506f746e83367cecb91a0ab841e287cd958", "title": "Practical privacy: the SuLQ framework", "text": "We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to {0, 1}. The true answer is \u03a3i\u2208S f(di), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large. We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k-means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the statistical query learning model [11]."} {"_id": "49934d08d42ed9e279a82cbad2086377443c8a75", "title": "Differentially Private Empirical Risk Minimization", "text": "Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). 
These algorithms are private under the \u03b5-differential privacy definition due to Dwork et al. (2006). First, we apply the output perturbation ideas of Dwork et al. (2006) to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance."} {"_id": "61efdc56bc6c034e9d13a0c99d0b651a78bfc596", "title": "Differentially Private Distributed Constrained Optimization", "text": "Many resource allocation problems can be formulated as an optimization problem whose constraints contain sensitive information about participating users. This paper concerns a class of resource allocation problems whose objective function depends on the aggregate allocation (i.e., the sum of individual allocations); in particular, we investigate distributed algorithmic solutions that preserve the privacy of participating users. Without privacy considerations, existing distributed algorithms normally consist of a central entity computing and broadcasting certain public coordination signals to participating users. However, the coordination signals often depend on user information, so that an adversary who has access to the coordination signals can potentially decode information on individual users and put user privacy at risk. We present a distributed optimization algorithm that preserves differential privacy, which is a strong notion that guarantees user privacy regardless of any auxiliary information an adversary may have. The algorithm achieves privacy by perturbing the public signals with additive noise, whose magnitude is determined by the sensitivity of the projection operation onto user-specified constraints. By viewing the differentially private algorithm as an implementation of stochastic gradient descent, we are able to derive a bound for the suboptimality of the algorithm. We illustrate the implementation of our algorithm via a case study of electric vehicle charging. Specifically, we derive the sensitivity and present numerical simulations for the algorithm. Through numerical simulations, we are able to investigate various aspects of the algorithm when being used in practice, including the choice of step size, number of iterations, and the trade-off between privacy level and suboptimality."} {"_id": "c7788c34ba1387f1e437a2f83e1931f0c64d8e4e", "title": "The role of transparency in recommender systems", "text": "Recommender Systems act as personalized decision guides, aiding users in decisions on matters related to personal taste. 
Most previous research on Recommender Systems has focused on the statistical accuracy of the algorithms driving the systems, with little emphasis on interface issues and the user's perspective. The goal of this research was to examine the role of transparency (user understanding of why a particular recommendation was made) in Recommender Systems. To explore this issue, we conducted a user study of five music Recommender Systems. Preliminary results indicate that users like and feel more confident about recommendations that they perceive as transparent."} {"_id": "7731c8a1c56fdfa149759a8bb7b81464da0b15c1", "title": "Recognizing Abnormal Heart Sounds Using Deep Learning", "text": "The work presented here applies deep learning to the task of automated cardiac auscultation, i.e. recognizing abnormalities in heart sounds. We describe an automated heart sound classification algorithm that combines the use of time-frequency heat map representations with a deep convolutional neural network (CNN). Given the cost-sensitive nature of misclassification, our CNN architecture is trained using a modified loss function that directly optimizes the trade-off between sensitivity and specificity. We evaluated our algorithm at the 2016 PhysioNet Computing in Cardiology challenge where the objective was to accurately classify normal and abnormal heart sounds from single, short, potentially noisy recordings. Our entry to the challenge achieved a final specificity of 0.95, sensitivity of 0.73 and overall score of 0.84. We achieved the greatest specificity score out of all challenge entries and, using just a single CNN, our algorithm differed in overall score by only 0.02 compared to the top place finisher, which used an ensemble approach."} {"_id": "17a00f26b68f40fb03e998a7eef40437dd40e561", "title": "The Tire as an Intelligent Sensor", "text": "Active safety systems are based upon the accurate and fast estimation of the value of important dynamical variables such as forces, load transfer, actual tire-road friction (kinetic friction) \u03bck, and maximum tire-road friction available (potential friction) \u03bcp. Measuring these parameters directly from tires offers the potential for significantly improving the performance of active safety systems. We present a distributed architecture for a data-acquisition system that is based on a number of complex intelligent sensors inside the tire that form a wireless sensor network with coordination nodes placed on the body of the car. The design of this system has been extremely challenging due to the very limited available energy combined with strict application requirements for data rate, delay, size, weight, and reliability in a highly dynamical environment. Moreover, it required expertise in multiple engineering disciplines, including control-system design, signal processing, integrated-circuit design, communications, real-time software design, antenna design, energy scavenging, and system assembly."} {"_id": "190dcdb71a119ec830d6e7e6e01bb42c6c10c2f3", "title": "Surgical precision JIT compilers", "text": "Just-in-time (JIT) compilation of running programs provides more optimization opportunities than offline compilation. Modern JIT compilers, such as those in virtual machines like Oracle's HotSpot for Java or Google's V8 for JavaScript, rely on dynamic profiling as their key mechanism to guide optimizations. 
While these JIT compilers offer good average performance, their behavior is a black box and the achieved performance is highly unpredictable.\n In this paper, we propose to turn JIT compilation into a precision tool by adding two essential and generic metaprogramming facilities: First, allow programs to invoke JIT compilation explicitly. This enables controlled specialization of arbitrary code at run-time, in the style of partial evaluation. It also enables the JIT compiler to report warnings and errors to the program when it is unable to compile a code path in the demanded way. Second, allow the JIT compiler to call back into the program to perform compile-time computation. This lets the program itself define the translation strategy for certain constructs on the fly and gives rise to a powerful JIT macro facility that enables \"smart\" libraries to supply domain-specific compiler optimizations or safety checks.\n We present Lancet, a JIT compiler framework for Java bytecode that enables such a tight, two-way integration with the running program. Lancet itself was derived from a high-level Java bytecode interpreter: staging the interpreter using LMS (Lightweight Modular Staging) produced a simple bytecode compiler. Adding abstract interpretation turned the simple compiler into an optimizing compiler. This fact provides compelling evidence for the scalability of the staged-interpreter approach to compiler construction.\n In the case of Lancet, JIT macros also provide a natural interface to existing LMS-based toolchains such as the Delite parallelism and DSL framework, which can now serve as accelerator macros for arbitrary JVM bytecode."} {"_id": "f5888af5e5353eb74d37ec50e9840e58b1992953", "title": "An LDA-Based Approach to Scientific Paper Recommendation", "text": "Recommendation of scientific papers is a task aimed at supporting researchers in accessing relevant articles from a large pool of unseen articles. When writing a paper, a researcher focuses on the topics related to her/his scientific domain, using technical language. The core idea of this paper is to exploit the topics related to the researcher's scientific production (authored articles) to formally define her/his profile; in particular we propose to employ topic modeling to formally represent the user profile, and language modeling to formally represent each unseen paper. The recommendation technique we propose relies on the assessment of the closeness of the language used in the researcher's papers and the one employed in the unseen papers. The proposed approach exploits a reliable knowledge source for building the user profile, and it alleviates the cold-start problem, typical of collaborative filtering techniques. We also present a preliminary evaluation of our approach on the DBLP dataset."} {"_id": "1f8be49d63c694ec71c2310309cd02a2d8dd457f", "title": "Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning", "text": "In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) The privacy budget consumption is totally independent of the number of training steps; (2) It has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) It could be applied in a variety of different deep neural networks. To achieve this, we develop a way to perturb the affine transformations of neurons and the loss functions used in deep neural networks. 
In addition, our mechanism intentionally adds "more noise" into features which are "less relevant" to the model output, and vice-versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions."} {"_id": "31e9d9458471b4a0cfc6cf1de219b10af0f37239", "title": "Why do you play World of Warcraft? An in-depth exploration of self-reported motivations to play online and in-game behaviours in the virtual world of Azeroth", "text": "Massively multiplayer online role-playing games (MMORPGs) are video games in which players create an avatar that evolves and interacts with other avatars in a persistent virtual world. Motivations to play MMORPGs are heterogeneous (e.g. achievement, socialisation, immersion in virtual worlds). This study investigates in detail the relationships between self-reported motives and actual in-game behaviours. We recruited a sample of 690 World of Warcraft players (the most popular MMORPG) who agreed to have their avatar monitored for 8 months. Participants completed an initial online survey about their motives to play. Their actual in-game behaviours were measured through the game\u2019s official database (the Armory website). Results showed specific associations between motives and in-game behaviours. Moreover, longitudinal analyses revealed that teamwork- and competition-oriented motives are the most accurate predictors of fast progression in the game. In addition, although specific associations exist between problematic use and certain motives (e.g. advancement, escapism), longitudinal analyses showed that high involvement in the game is not necessarily associated with a negative impact upon daily living."} {"_id": "33127e014cf537192c33a5b0e4b62df2a7b1869f", "title": "Policy ratification", "text": "It is not sufficient to merely check the syntax of new policies before they are deployed in a system; policies need to be analyzed for their interactions with each other and with their local environment. That is, policies need to go through a ratification process. We believe policy ratification becomes an essential part of system management as the number of policies in the system increases and as the system administration becomes more decentralized. In this paper, we focus on the basic tasks involved in policy ratification. To a large degree, these basic tasks can be performed independent of policy model and language and require little domain-specific knowledge. We present algorithms from constraint, linear, and logic programming disciplines to help perform ratification tasks. We provide an algorithm to efficiently assign priorities to the policies based on relative policy preferences indicated by policy administrators. Finally, with an example, we show how these algorithms have been integrated with our policy system to provide feedback to a policy administrator regarding potential interactions of policies with each other and with their deployment environment."} {"_id": "c6b5c1cc565c878db50ad20aafd804284558ad02", "title": "Centrality in valued graphs : A measure of betweenness based on network flow", "text": "A new measure of centrality, C_F, is introduced. It is based on the concept of network flows. While conceptually similar to Freeman\u2019s original measure, C_B, the new measure differs from the original in two important ways. 
First, C_F is defined for both valued and non-valued graphs. This makes C_F applicable to a wider variety of network datasets. Second, the computation of C_F is not based on geodesic paths as is C_B, but on all the independent paths between all pairs of points in the network."} {"_id": "2ccca721c20ad1d8503ede36fe310626070de640", "title": "Distributed Energy Resources Topology Identification via Graphical Modeling", "text": "Distributed energy resources (DERs), such as photovoltaic, wind, and gas generators, are connected to the grid more than ever before, which introduces tremendous changes in the distribution grid. Due to these changes, it is important to understand where these DERs are connected in order to sustainably operate the distribution grid. But the exact distribution system topology is difficult to obtain due to frequent distribution grid reconfigurations and insufficient knowledge about new components. In this paper, we propose a methodology that utilizes new data from sensor-equipped DER devices to obtain the distribution grid topology. Specifically, a graphical model is presented to describe the probabilistic relationship among different voltage measurements. With power flow analysis, a mutual information-based identification algorithm is proposed to deal with tree and partially meshed networks. Simulation results show highly accurate connectivity identification in the IEEE standard distribution test systems and Electric Power Research Institute test systems."} {"_id": "8eb3ebd0a1d8a26c7070543180d233f841b79850", "title": "Performance of Reliable Transport Protocol over IEEE 802.11 Wireless LAN: Analysis and Enhancement", "text": "IEEE 802.11 Medium Access Control (MAC) is proposed to support asynchronous and time-bounded delivery of radio data packets in infrastructure and ad hoc networks. The basis of the IEEE 802.11 WLAN MAC protocol is the Distributed Coordination Function (DCF), which is a Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme with binary slotted exponential back-off. Since IEEE 802.11 MAC has its own characteristics that are different from other wireless MAC protocols, the performance of reliable transport protocols over 802.11 needs further study. This paper proposes a scheme named DCF+, which is compatible with DCF, to enhance the performance of reliable transport protocols over WLAN. To analyze the performance of DCF and DCF+, this paper also introduces an analytical model to compute the saturated throughput of WLAN. Compared with other models, this model is shown to be able to predict the behavior of 802.11 more accurately. Moreover, DCF+ is able to improve the performance of TCP over WLAN, which is verified by modeling and elaborate simulation results."} {"_id": "5574763d870bae0fd3fd6d3014297942a045f60a", "title": "Utilization of Data mining Approaches for Prediction of Life Threatening Diseases Survivability", "text": "Data mining nowadays plays an important role in the prediction of diseases in the health care industry. The health care industry utilizes data mining techniques to find information hidden in medical data sets. Many diagnostic studies have been carried out for predicting diseases. Without profound medical knowledge and clinical experience, treatment can go wrong. The time taken to recover from a disease depends on its severity in the patient. To identify a disease, a patient needs to undergo a number of tests. In most cases, not all of these tests are effective, and ultimately this can lead to the death of the patient. 
Many experiments have been conducted comparing the performance of predictive data mining techniques, with the aim of indirectly reducing the number of tests a patient must undergo. This paper presents a survey on predicting the presence of life-threatening diseases and lists the various classification algorithms that have been used, together with the number of attributes used for prediction."} {"_id": "6273df9def7c011bc21cd42a4029d4b7c7c48c2e", "title": "A 45GHz Doherty power amplifier with 23% PAE and 18dBm output power, in 45nm SOI CMOS", "text": "A 45GHz Doherty power amplifier is implemented in 45nm SOI CMOS. Two-stack FET amplifiers are used as main and auxiliary amplifiers, allowing a supply voltage of 2.5V and high output power. The use of slow-wave coplanar waveguides (CPW) improves the PAE and gain by approximately 3% and 1dB, and reduces the die area by 20%. This amplifier exhibits more than 18dBm saturated output power, with peak power gain of 7dB. It occupies 0.64mm2 while achieving a peak PAE of 23%; at 6dB back-off the PAE is 17%."} {"_id": "1e396464e440e6032be3f035a9a6837c32c9d2c0", "title": "Review of Micro Thermoelectric Generator", "text": "Used for thermal energy harvesting, a thermoelectric generator (TEG) can convert heat into electricity directly. Structurally, the main part of a TEG is the thermopile, which consists of thermocouples connected in series electrically and in parallel thermally. Benefiting from massive progress in microelectromechanical systems (MEMS) technology, the micro TEG ($\mu$-TEG), with its advantages of small volume and high output voltage, has attracted attention over the past 20 years. This review gives a comprehensive survey of the development and current status of the $\mu$-TEG. First, the principle of operation is introduced and some key parameters used for characterizing the performance of $\mu$-TEGs are highlighted. Next, $\mu$-TEGs are classified from the perspectives of structure, material, and fabrication technology. Then, almost all the relevant works are summarized for the convenience of comparison and reference. The summarized information includes the structure, material properties, fabrication technology, output performance, and so on. This will provide readers with an overall evaluation of the different studies and guide them in choosing suitable $\mu$-TEGs for their applications. In addition, the existing and potential applications of $\mu$-TEGs are shown, especially applications in the Internet of Things. Finally, we summarize the challenges encountered in improving the output power of $\mu$-TEGs and predict that more researchers will focus their efforts on flexible-structure $\mu$-TEGs and the combination of $\mu$-TEGs with other energy harvesting methods. With the emergence of more low-power devices and the gradual improvement of the ZT value of thermoelectric materials, $\mu$-TEGs are promising for applications in various fields."} {"_id": "4c11a7b668dee651cc2d8eb2eaf8665449b1738f", "title": "Modern Release Engineering in a Nutshell -- Why Researchers Should Care", "text": "The release engineering process is the process that brings high-quality code changes from a developer's workspace to the end user, encompassing code change integration, continuous integration, build system specifications, infrastructure-as-code, deployment and release. 
Recent practices of continuous delivery, which bring new content to the end user in days or hours rather than months or years, have generated a surge of industry-driven interest in the release engineering pipeline. This paper argues that the involvement of researchers is essential, by providing a brief introduction to the six major phases of the release engineering pipeline, a roadmap of future research, and a checklist of three major ways that the release engineering process of a system under study can invalidate the findings of software engineering studies. The main take-home message is that, while release engineering technology has flourished tremendously due to industry, empirical validation of best practices and the impact of the release engineering process on (amongst others) software quality is largely missing and provides major research opportunities."} {"_id": "9f6db3f5809a9d1b9f1c70d9d30382a0bd8be8d0", "title": "A Review on Performance Analysis of Cloud Computing Services for Scientific Computing", "text": "Cloud computing has emerged as a very important commercial infrastructure that promises to reduce the need for organizations and institutes to maintain costly computing facilities. Through the use of virtualization and time sharing of resources, clouds serve a large user base with altogether different needs using a single set of physical resources. Thus, clouds have the promise to provide their owners the benefits of an economy of scale and, at the same time, to become an alternative for scientists to clusters, grids, and parallel production environments. However, present commercial clouds have been built to support web and small database workloads, which are very different from common scientific computing workloads. Furthermore, the use of virtualization and resource time sharing may introduce significant performance penalties for demanding scientific computing workloads. In this paper, we analyze the performance of cloud computing services for scientific computing workloads. We evaluate the presence in real scientific computing workloads of Many-Task Computing users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific goals. Our method is shown to yield comparable and even better results than more complex state-of-the-art techniques, while having the advantage of being appropriate for real-time applications."} {"_id": "6fcccd6def46a4dd50f85df4d4c011bd9f1855af", "title": "Cedalion: a language for language oriented programming", "text": "Language Oriented Programming (LOP) is a paradigm that puts domain specific programming languages (DSLs) at the center of the software development process. Currently, there are three main approaches to LOP: (1) the use of internal DSLs, implemented as libraries in a given host language; (2) the use of external DSLs, implemented as interpreters or compilers in an external language; and (3) the use of language workbenches, which are integrated development environments (IDEs) for defining and using external DSLs. In this paper, we contribute: (4) a novel language-oriented approach to LOP for defining and using internal DSLs. While language workbenches adapt internal DSL features to overcome some of the limitations of external DSLs, our approach adapts language workbench features to overcome some of the limitations of internal DSLs. We introduce Cedalion, an LOP host language for internal DSLs, featuring static validation and projectional editing. 
To validate our approach, we present a case study in which Cedalion was used by biologists in designing a DNA microarray for molecular biology research."} {"_id": "7cbbe0025b71a265c6bee195b5595cfad397a734", "title": "Health chair: implicitly sensing heart and respiratory rate", "text": "People interact with chairs frequently, making them a potential location to perform implicit health sensing that requires no additional effort by users. We surveyed 550 participants to understand how people sit in chairs and inform the design of a chair that detects heart and respiratory rate from the armrests and backrests of the chair, respectively. In a laboratory study with 18 participants, we evaluated a range of common sitting positions to determine when heart rate and respiratory rate detection was possible (32% of the time for heart rate, 52% for respiratory rate) and evaluate the accuracy of the detected rate (83% for heart rate, 73% for respiratory rate). We discuss the challenges of moving this sensing to the wild by evaluating an in-situ study totaling 40 hours with 11 participants. We show that, as an implicit sensor, the chair can collect vital signs data from its occupant through natural interaction with the chair."} {"_id": "a00a757b26d5c4f53b628a9c565990cdd0e51876", "title": "The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings", "text": "We motivate and describe a new freely available human-human dialogue data set for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-character variant of the DiET chat tool (Healey et al., 2003; Mills and Healey, submitted) with a novel task, where a Learner needs to learn invented visual attribute words (such as \u201cburchak\u201d for square) from a tutor. As such, the text-based interactions closely resemble face-to-face conversation and thus contain many of the linguistic phenomena encountered in natural, spontaneous dialogue. These include self- and other-correction, mid-sentence continuations, interruptions, overlaps, fillers, and hedges. We also present a generic n-gram framework for building user (i.e. tutor) simulations from this type of incremental data, which is freely available to researchers. We show that the simulations produce outputs that are similar to the original data (e.g. 78% turn match similarity). Finally, we train and evaluate a Reinforcement Learning dialogue control agent for learning visually grounded word meanings, trained from the BURCHAK corpus. The learned policy shows comparable performance to a rule-based system built previously."} {"_id": "b49e31fe5948b3ca4552ac69dd7a735607467f1c", "title": "GUSS: Solving Collections of Data Related Models Within GAMS", "text": "In many applications, optimization of a collection of problems is required where each problem is structurally the same, but in which some or all of the data defining the instance is updated. Such models are easily specified within modern modeling systems, but have often been slow to solve due to the time needed to regenerate the instance, and the inability to use advance solution information (such as basis factorizations) from previous solves as the collection is processed. 
We describe a new language extension, GUSS, that gathers data from different sources/symbols to define the collection of models (called scenarios), updates a base model instance with this scenario data, solves the updated model instance, and scatters the scenario results to symbols in the GAMS database. We demonstrate the utility of this approach in three applications, namely data envelopment analysis, cross validation and stochastic dual dynamic programming. The language extensions are available for general use in all versions of GAMS starting with release 23.7."} {"_id": "5914781bde18606e55e8f7683f55889df91576ec", "title": "30 + years of research and practice of outsourcing \u2013 Exploring the past and anticipating the future", "text": "Outsourcing is a phenomenon that as a practice originated in the 1950s, but it was not until the 1980s that the strategy became widely adopted in organizations. Since then, the strategy has evolved from a strictly cost-focused approach towards a more cooperative nature, in which cost is only one, often secondary, decision-making criterion. In the development of the strategy, three broad and somewhat overlapping, yet distinct phases can be identified: the era of the Big Bang, the era of the Bandwagon, and the era of Barrierless Organizations. This paper illustrates that the evolution of the practice has caused several contradictions among researchers, and has led to a situation where the theoretical background of the phenomenon has recently become much richer. Through examining existing research, this paper identifies the development of the outsourcing strategy from both a practical and a theoretical perspective, from its birth up to today. In addition, through providing insights from managers in the information technology industry, this paper aims at providing a glimpse of the future \u2013 that is, what may be the future directions and research issues in this complex phenomenon?"} {"_id": "423455ad8afb9b2534c0954a5e61c95bea611801", "title": "Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor", "text": "Virtual machines were developed by IBM in the 1960s to provide concurrent, interactive access to a mainframe computer. Each virtual machine is a replica of the underlying physical machine and users are given the illusion of running directly on the physical machine. Virtual machines also provide benefits like isolation and resource sharing, and the ability to run multiple flavors and configurations of operating systems. VMware Workstation brings such mainframe-class virtual machine technology to PC-based desktop and workstation computers. This paper focuses on VMware Workstation\u2019s approach to virtualizing I/O devices. PCs have a staggering variety of hardware, and are usually pre-installed with an operating system. Instead of replacing the pre-installed OS, VMware Workstation uses it to host a user-level application (VMApp) component, as well as to schedule a privileged virtual machine monitor (VMM) component. The VMM directly provides high-performance CPU virtualization while the VMApp uses the host OS to virtualize I/O devices and shield the VMM from the variety of devices. 
A crucial question is whether virtualizing devices via such a hosted architecture can meet the performance required of high-throughput, low-latency devices. To this end, this paper studies the virtualization and performance of an Ethernet adapter on VMware Workstation. Results indicate that with optimizations, VMware Workstation\u2019s hosted virtualization architecture can match native I/O throughput on standard PCs. Although a straightforward hosted implementation is CPU-limited due to virtualization overhead on a 733 MHz Pentium III system on a 100 Mb/s Ethernet, a series of optimizations targeted at reducing CPU utilization allows the system to match native network throughput. Further optimizations are discussed both within and outside a hosted architecture."} {"_id": "c5788be735f3caadc7d0d3147aa52fd4a6036ec4", "title": "Detecting epistasis in human complex traits", "text": "Genome-wide association studies (GWASs) have become the focus of the statistical analysis of complex traits in humans, successfully shedding light on several aspects of genetic architecture and biological aetiology. Single-nucleotide polymorphisms (SNPs) are usually modelled as having additive, cumulative and independent effects on the phenotype. Although evidently a useful approach, it is often argued that this is not a realistic biological model and that epistasis (that is, the statistical interaction between SNPs) should be included. The purpose of this Review is to summarize recent directions in methodology for detecting epistasis and to discuss evidence of the role of epistasis in human complex trait variation. We also discuss the relevance of epistasis in the context of GWASs and potential hazards in the interpretation of statistical interaction terms."} {"_id": "d3569f184b7083c0433bf00fa561736ae6f8d31e", "title": "Interactive Entity Resolution in Relational Data: A Visual Analytic Tool and Its Evaluation", "text": "Databases often contain uncertain and imprecise references to real-world entities. Entity resolution, the process of reconciling multiple references to underlying real-world entities, is an important data cleaning process required before accurate visualization or analysis of the data is possible. In many cases, in addition to noisy data describing entities, there is data describing the relationships among the entities. This relational data is important during the entity resolution process; it is useful both for the algorithms which determine likely database references to be resolved and for visual analytic tools which support the entity resolution process. In this paper, we introduce a novel user interface, D-Dupe, for interactive entity resolution in relational data. D-Dupe effectively combines relational entity resolution algorithms with a novel network visualization that enables users to make use of an entity's relational context for making resolution decisions. Since resolution decisions often are interdependent, D-Dupe facilitates understanding this complex process through animations which highlight combined inferences and a history mechanism which allows users to inspect chains of resolution decisions. 
An empirical study with 12 users confirmed the benefits of the relational context visualization on the performance of entity resolution tasks in relational data in terms of time as well as users' confidence and satisfaction."} {"_id": "c630196c34533903b48e546897d46df27c844bc2", "title": "High-power-transfer-density capacitive wireless power transfer system for electric vehicle charging", "text": "This paper introduces a large air-gap capacitive wireless power transfer (WPT) system for electric vehicle charging that achieves a power transfer density exceeding the state-of-the-art by more than a factor of four. This high power transfer density is achieved by operating at a high switching frequency (6.78 MHz), combined with an innovative approach to designing matching networks that enable effective power transfer at this high frequency. In this approach, the matching networks are designed such that the parasitic capacitances present in a vehicle charging environment are absorbed and utilized as part of the wireless power transfer mechanism. A new modeling approach is developed to simplify the complex network of parasitic capacitances into equivalent capacitances that are directly utilized as the matching network capacitors. A systematic procedure to accurately measure these equivalent capacitances is also presented. A prototype capacitive WPT system with 150 cm2 coupling plates, operating at 6.78 MHz and incorporating matching networks designed using the proposed approach, is built and tested. The prototype system transfers 589 W of power across a 12-cm air gap, achieving a power transfer density of 19.6 kW/m2."} {"_id": "1750a3716a03aaacdfbb0e25214beaa5e1e2b6ee", "title": "Ontology Development 101 : A Guide to Creating Your First Ontology", "text": "Why develop an ontology? In recent years the development of ontologies\u2014explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)\u2014has been moving from the realm of Artificial Intelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web. The ontologies on the Web range from large taxonomies categorizing Web sites (such as on Yahoo!) to categorizations of products for sale and their features (such as on Amazon.com). The WWW Consortium (W3C) is developing the Resource Description Framework (Brickley and Guha 1999), a language for encoding knowledge on Web pages to make it understandable to electronic agents searching for information. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the W3C, is developing DARPA Agent Markup Language (DAML) by extending RDF with more expressive constructs aimed at facilitating agent interaction on the Web (Hendler and McGuinness 2000). Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Price and Spackman 2000) and the semantic network of the Unified Medical Language System (Humphreys and Lindberg 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology which provides terminology for products and services (www.unspsc.org). An ontology defines a common vocabulary for researchers who need to share information in a domain. 
It includes machine-interpretable definitions of basic concepts in the domain and relations among them. Why would someone want to develop an ontology? Some of the reasons are:"} {"_id": "7c459c36e19629ff0dfb4bd0e541cc5d2d3f03e0", "title": "Generic Taxonomy of Social Engineering Attack", "text": "Social engineering is a type of attack that allows unauthorized access to a system in order to achieve a specific objective. Commonly, the purpose is for the social engineer to obtain information. Some successful social engineering attacks obtain victims\u2019 information via human-based retrieval approaches, for example techniques termed dumpster diving or shoulder surfing attacks to gain access to passwords. Alternatively, victims\u2019 information can also be stolen using technical methods, such as pop-up windows, email or web sites, to obtain passwords or other sensitive information. This research performed a preliminary analysis of social engineering attack taxonomy, emphasizing the types of technical-based social engineering attacks. Results from the analysis serve as a guideline for proposing a new generic taxonomy of Social Engineering Attacks (SEA)."} {"_id": "bf003bb2d52304fea114d824bc0bf7bfbc7c3106", "title": "Dissecting social engineering", "text": ""} {"_id": "10466df2b511239674d8487101229193c011a657", "title": "The urgency for effective user privacy-education to counter social engineering attacks on secure computer systems", "text": "Trusted people can fail to be trustworthy when it comes to protecting their aperture of access to secure computer systems due to inadequate education, negligence, and various social pressures. People are often the weakest link in an otherwise secure computer system and, consequently, are targeted for social engineering attacks. Social Engineering is a technique used by hackers or other attackers to gain access to information technology systems by getting the needed information (for example, a username and password) from a person rather than breaking into the system through electronic or algorithmic hacking methods. Such attacks can occur on both a physical and psychological level. The physical setting for these attacks occurs where a victim feels secure: often the workplace, the phone, the trash, and even on-line. Psychology is often used to create a rushed or officious ambiance that helps the social engineer to cajole information about accessing the system from an employee.\n Data privacy legislation in the United States and other countries that imposes privacy standards and fines for negligent or willful non-compliance increases the urgency to measure the trustworthiness of people and systems. One metric for determining compliance is to simulate, by audit, a social engineering attack upon an organization required to follow data privacy standards. Such an organization commits to protect the confidentiality of personal data with which it is entrusted.\n This paper presents the results of an approved social engineering audit made without notice within an organization where data security is a concern. Areas emphasized include experiences between the Social Engineer and the audited users, techniques used by the Social Engineer, and other findings from the audit. 
Possible steps to mitigate exposure to the dangers of Social Engineering through improved user education are reviewed."} {"_id": "24b4076e2f58325f5d86ba1ca1f00b08a56fb682", "title": "Ontologies: principles, methods and applications", "text": "This paper is intended to serve as a comprehensive introduction to the emerging field concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools, and techniques are a major barrier to effective communication among people, organisations, and/or software systems. We show how the development and implementation of an explicit account of a shared understanding (i.e. an `ontology') in a given subject area, can improve such communication, which in turn, can give rise to greater reuse and sharing, inter-operability, and more reliable software. After motivating their need, we clarify just what ontologies are and what purposes they serve. We outline a methodology for developing and evaluating ontologies, first discussing informal techniques, concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions. We then consider the benefits of, and describe, a more formal approach. We re-visit the scoping phase, and discuss the role of formal languages and techniques in the specification, implementation and evaluation of ontologies. Finally, we review the state of the art and practice in this emerging field, considering various case studies, software tools for ontology development, key research issues and future prospects."} {"_id": "90b16f97715a18a52b6a00b69411083bdb0460a0", "title": "Highly Sensitive, Flexible, and Wearable Pressure Sensor Based on a Giant Piezocapacitive Effect of Three-Dimensional Microporous Elastomeric Dielectric Layer.", "text": "We report a flexible and wearable pressure sensor based on the giant piezocapacitive effect of a three-dimensional (3-D) microporous dielectric elastomer, which is capable of highly sensitive and stable pressure sensing over a large tactile pressure range. Due to the presence of micropores within the elastomeric dielectric layer, our piezocapacitive pressure sensor is highly deformable by even very small amounts of pressure, leading to a dramatic increase in its sensitivity. Moreover, the gradual closure of micropores under compression increases the effective dielectric constant, thereby further enhancing the sensitivity of the sensor. The 3-D microporous dielectric layer with serially stacked springs of elastomer bridges can cover a much wider pressure range than those of previously reported micro-/nanostructured sensing materials. We also investigate the applicability of our sensor to wearable pressure-sensing devices as an electronic pressure-sensing skin in robotic fingers as well as a bandage-type pressure-sensing device for pulse monitoring at the human wrist. Finally, we demonstrate a pressure sensor array pad for the recognition of spatially distributed pressure information on a plane. 
Our sensor, with its excellent pressure-sensing performance, marks the realization of a true tactile pressure sensor presenting highly sensitive responses to the entire tactile pressure range, from ultralow-force detection to high weights generated by human activity."} {"_id": "8f24560a66651fdb94eef61339527004fda8283b", "title": "Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning", "text": "Humans are able to understand and perform complex tasks by strategically structuring the tasks into incremental steps or subgoals. For a robot attempting to learn to perform a sequential task with critical subgoal states, such states can provide a natural opportunity for interaction with a human expert. This paper analyzes the benefit of incorporating a notion of subgoals into Inverse Reinforcement Learning (IRL) with a Human-In-The-Loop (HITL) framework. The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states. These subgoal states define a set of subtasks for the learning agent to complete in order to achieve the final goal. The learning agent queries for partial demonstrations corresponding to each subtask as needed when the agent struggles with the subtask. The proposed Human Interactive IRL (HI-IRL) framework is evaluated on several discrete path-planning tasks. We demonstrate that subgoal-based interactive structuring of the learning task results in significantly more efficient learning, requiring only a fraction of the demonstration data needed for learning the underlying reward function with the baseline IRL model."} {"_id": "747a58918524d15aca29885af3e1bc87313eb312", "title": "A step toward irrationality: using emotion to change belief", "text": "Emotions have a powerful impact on behavior and beliefs. The goal of our research is to create general computational models of this interplay of emotion, cognition and behavior to inform the design of virtual humans. Here, we address an aspect of emotional behavior that has been studied extensively in the psychological literature but largely ignored by computational approaches: emotion-focused coping. Rather than motivating external action, emotion-focused coping strategies alter beliefs in response to strong emotions. For example, an individual may alter beliefs about the importance of a goal that is being threatened, thereby reducing their distress. We present a preliminary model of emotion-focused coping and discuss how coping processes, in general, can be coupled to emotions and behavior. The approach is illustrated within a virtual reality training environment where the models are used to create virtual human characters in high-stress social situations."} {"_id": "332648a09d6ded93926829dbd81ac9dddf31d5b9", "title": "Perching and takeoff of a robotic insect on overhangs using switchable electrostatic adhesion", "text": "For aerial robots, maintaining a high vantage point for an extended time is crucial in many applications. However, available on-board power and mechanical fatigue constrain their flight time, especially for smaller, battery-powered aircraft. Perching on elevated structures is a biologically inspired approach to overcome these limitations. Previous perching robots have required specific material properties for the landing sites, such as surface asperities for spines, or ferromagnetism. 
We describe a switchable electroadhesive that enables controlled perching and detachment on nearly any material while requiring approximately three orders of magnitude less power than required to sustain flight. These electroadhesives are designed, characterized, and used to demonstrate a flying robotic insect able to robustly perch on a wide range of materials, including glass, wood, and a natural leaf."} {"_id": "1facb3308307312789e1db7f0a0904ac9c9e7179", "title": "Key parameters influencing the behavior of Steel Plate Shear Walls (SPSW)", "text": "The complex behavior of Steel Plate Shear Walls (SPSW) is investigated herein through nonlinear FE simulations. A 3D detailed FE model is developed and validated utilizing experimental results available in the literature. The influence of key parameters on the structural behavior is investigated. The considered parameters are: the infill plate thickness, the beam size, the column size, the infill plate material grade and the frame material grade. Several structural responses are used as criteria to quantify their influence on the SPSW behavior. The evaluated structural responses are: yield strength, yield displacement, ultimate strength, initial stiffness and secondary stiffness. The results show that, overall, the most influential parameter is the infill plate thickness, followed by the beam size. Also, it was found that the least influential parameter is the frame material grade."} {"_id": "236f183be06d824122da59ffb79e501d1a537486", "title": "Design for Reliability of Low-voltage, Switched-capacitor Circuits", "text": "Analog, switched-capacitor circuits play a critical role in mixed-signal, analog-to-digital interfaces. They implement a large class of functions, such as sampling, filtering, and digitization. Furthermore, their implementation makes them suitable for integration with complex, digital-signal-processing blocks in a compatible, low-cost technology\u2013particularly CMOS. Even as an increasingly larger amount of signal processing is done in the digital domain, this critical, analog-to-digital interface is fundamentally necessary. Examples of some integrated applications include camcorders, wireless LAN transceivers, digital set-top boxes, and others. Advances in CMOS technology, however, are driving the operating voltage of integrated circuits increasingly lower. As device dimensions shrink, the applied voltages will need to be proportionately scaled in order to guarantee long-term reliability and manage power density. The reliability constraints of the technology dictate that the analog circuitry operate at the same low voltage as the digital circuitry. Furthermore, in achieving low-voltage operation, the reliability constraints of the technology must not be violated. This work examines the voltage limitations of CMOS technology and how analog circuits can maximize the utility of MOS devices without degrading reliability."} {"_id": "83834cd33996ed0b00e3e0fca3cda413d7ed79ff", "title": "DWCMM: The Data Warehouse Capability Maturity Model", "text": "Data Warehouses and Business Intelligence have become popular fields of research in recent years. 
Unfortunately, in daily practice many Data Warehouse and Business Intelligence solutions still fail to help organizations make better decisions and increase their profitability, due to opaque complexities and project interdependencies. In addition, emerging application domains such as Mobile Learning & Analytics heavily depend on a well-structured data foundation with a longitudinally prepared architecture. Therefore, this research presents the Data Warehouse Capability Maturity Model (DWCMM), which encompasses both the technical and organizational aspects involved in developing a Data Warehouse environment. The DWCMM can be used to help organizations assess their current Data Warehouse solution and provide them with guidelines for future improvements. The DWCMM consists of a maturity matrix and a maturity assessment questionnaire with 60 questions. The DWCMM has been evaluated empirically through expert interviews and case studies. We conclude that the DWCMM can be successfully applied in practice and that organizations can use the DWCMM as a quick-scan instrument to jumpstart their Data Warehouse and Business Intelligence improvement processes."} {"_id": "89a9ad85d8343a622aaa8c072beacaf8df1f0464", "title": "Multiple-resonator-based bandpass filters", "text": "This article describes a class of recently developed multiple-mode-resonator-based bandpass filters for ultra-wide-band (UWB) transmission systems. These filters have many attractive features, including a simple design, compact size, low loss and good linearity in the UWB, enhanced out-of-band rejection, and easy integration with other circuits/antennas. In this article, we present a variety of multiple-mode resonators with stepped-impedance or stub-loaded nonuniform configurations and analyze their properties based on the transmission line theory. Along with the frequency dispersion of parallel-coupled transmission lines, we design and implement various filter structures on planar, uniplanar, and hybrid transmission line geometries."} {"_id": "1ca75a68d6769df095ac3864d86bca21e9650985", "title": "Enhanced ARP: preventing ARP poisoning-based man-in-the-middle attacks", "text": "In this letter, an enhanced version of Address Resolution Protocol (ARP) is proposed to prevent ARP poisoning-based Man-in-the-Middle (MITM) attacks. The proposed mechanism is based on the following concept. When a node knows the correct Media Access Control (MAC) address for a given IP address, if it retains the IP/MAC address mapping while that machine is alive, then an MITM attack is impossible for that IP address. In order to prevent MITM attacks even for a new IP address, a voting-based resolution mechanism is proposed. The proposed scheme is backward compatible with existing ARP and incrementally deployable."} {"_id": "9c13e54760455a50482cda070c70448ecf30d68c", "title": "Time series classification with ensembles of elastic distance measures", "text": "Several alternative distance measures for comparing time series have recently been proposed and evaluated on time series classification (TSC) problems. These include variants of dynamic time warping (DTW), such as weighted and derivative DTW, and edit distance-based measures, including longest common subsequence, edit distance with real penalty, time warp with edit, and move\u2013split\u2013merge. These measures have the common characteristic that they operate in the time domain and compensate for potential localised misalignment through some elastic adjustment. 
Our aim is to experimentally test two hypotheses related to these distance measures. Firstly, we test whether there is any significant difference in accuracy for TSC problems between nearest neighbour classifiers using these distance measures. Secondly, we test whether combining these elastic distance measures through simple ensemble schemes gives significantly better accuracy. We test these hypotheses by carrying out one of the largest experimental studies ever conducted into time series classification. Our first key finding is that there is no significant difference between the elastic distance measures in terms of classification accuracy on our data sets. Our second finding, and the major contribution of this work, is to define an ensemble classifier that significantly outperforms the individual classifiers. We also demonstrate that the ensemble is more accurate than approaches not based in the time domain. Nearly all TSC papers in the data mining literature cite DTW (with warping window set through cross validation) as the benchmark for comparison. We believe that our ensemble is the first ever classifier to significantly outperform DTW and as such raises the bar for future work in this area."} {"_id": "8c76872375aa79acb26871c93da76d90dfb0a950", "title": "Recovering punctuation marks for automatic speech recognition", "text": "This paper shows results of recovering punctuation over speech transcriptions for a Portuguese broadcast news corpus. The approach is based on maximum entropy models and uses word, part-of-speech, time and speaker information. The contribution of each type of feature is analyzed individually. Separate results for each focus condition are given, making it possible to analyze the differences in performance between planned and spontaneous speech."} {"_id": "7e2eb3402ea7eacf182bccc3f8bb685636098d2c", "title": "Optical character recognition of the Orthodox Hellenic Byzantine Music notation", "text": "In this paper we present, for the first time, the development of a new system for the off-line optical recognition of the characters used in the Orthodox Hellenic Byzantine Music Notation, which has been established since 1814. We describe the structure of the new system and propose algorithms for the recognition of the 71 distinct character classes, based on Wavelets, 4-projections and other structural and statistical features. Using a Nearest Neighbor classifier, combined with a post-classification schema and a tree-structured classification philosophy, an accuracy of 99.4% was achieved, in a database of about 18,000 Byzantine character patterns that have been developed for the needs of the system. Keywords: optical music recognition, off-line character recognition, Byzantine Music, Byzantine Music Notation, Wavelets, Projections, Neural Networks, Contour processing, Nearest Neighbor Classifier, Byzantine Music Data Base."} {"_id": "26d4ab9b60b91bb610202b58fa1766951fedb9e9", "title": "DRAW: A Recurrent Neural Network For Image Generation", "text": "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. 
The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye."} {"_id": "4805aee558489b5413ce5434737043148537f62f", "title": "A Comparison of Features for Android Malware Detection", "text": "With the increase in mobile device use, there is a greater need for increasingly sophisticated malware detection algorithms. The research presented in this paper examines two types of features of Android applications, permission requests and system calls, as a way to detect malware. We are able to differentiate between benign and malicious apps by applying a machine learning algorithm. The model presented here achieved a classification accuracy of around 80% using permissions and 60% using system calls for a relatively small dataset. In the future, different machine learning algorithms will be examined to see if there is a more suitable algorithm. More features will also be taken into account and the training set will be expanded."} {"_id": "608ec914e356ff5e5782c908016958bf650a946f", "title": "CogALex-V Shared Task: GHHH - Detecting Semantic Relations via Word Embeddings", "text": "This paper describes our system submission to the CogALex-2016 Shared Task on Corpus-Based Identification of Semantic Relations. Our system won first place for Task-1 and second place for Task-2. The evaluation results of our system on the test set are 88.1% (79.0% for TRUE only) f-measure for Task-1 on detecting semantic similarity, and 76.0% (42.3% when excluding RANDOM) for Task-2 on identifying fine-grained semantic relations. In our experiments, we try word analogy, linear regression, and multi-task Convolutional Neural Networks (CNNs) with word embeddings from publicly available word vectors. We found that linear regression performs better in the binary classification (Task-1), while CNNs have better performance in the multi-class semantic classification (Task-2). We assume that word analogy is more suited to deterministic answers than to handling the ambiguity of one-to-many and many-to-many relationships. We also show that classifier performance could benefit from balancing the distribution of labels in the training data."} {"_id": "5ecbb84d51e2a23dadd496d7c6ab10cf277d4452", "title": "A 5-DOF rotation-symmetric parallel manipulator with one unconstrained tool rotation", "text": "This paper introduces a novel 5-DOF parallel manipulator with a rotation-symmetric arm system. The manipulator is unorthodox since one degree of freedom of its manipulated platform is unconstrained. Such a manipulator is still useful in a wide range of applications utilizing a rotation-symmetric tool. The manipulator workspace is analyzed for singularities and collisions. The rotation-symmetric arm system leads to a large positional workspace in relation to the footprint of the manipulator. With careful choice of structural parameters, the rotational workspace of the tool is also sizeable."} {"_id": "c79ddcef4bdf56c5467143b32e53b23825c17eff", "title": "A Framework based on SDN and Containers for Dynamic Service Chains on IoT Gateways", "text": "In this paper, we describe a new approach for managing service function chains in scenarios where data from Internet of Things (IoT) devices is partially processed at the network edge. 
Our framework is enabled by two emerging technologies, Software-Defined Networking (SDN) and container-based virtualization, which ensure several benefits in terms of flexibility, easy programmability, and versatility. These features are well suited to the increasingly stringent requirements of IoT applications, and allow dynamic and automated network service chaining. An extensive performance evaluation, which has been carried out by means of a testbed, seeks to understand how our proposed framework performs in terms of computational overhead, network bandwidth, and energy consumption. By accounting for the constraints of typical IoT gateways, our evaluation tries to shed light on the actual deployability of the framework on low-power nodes."} {"_id": "489555f05e316015d24d2a1fdd9663d4b85eb60f", "title": "Diagnostic Accuracy of Clinical Tests for Morton's Neuroma Compared With Ultrasonography.", "text": "The aim of the present study was to assess the diagnostic accuracy of 7 clinical tests for Morton's neuroma (MN) compared with ultrasonography (US). Forty patients (54 feet) were diagnosed with MN using predetermined clinical criteria. These patients were subsequently referred for US, which was performed by a single, experienced musculoskeletal radiologist. The clinical test results were compared against the US findings. MN was confirmed on US at the site of clinical diagnosis in 53 feet (98%). The operational characteristics of the clinical tests performed were as follows: thumb index finger squeeze (96% sensitivity, 96% accuracy), Mulder's click (61% sensitivity, 62% accuracy), foot squeeze (41% sensitivity, 41% accuracy), plantar percussion (37% sensitivity, 36% accuracy), dorsal percussion (33% sensitivity, 26% accuracy), and light touch and pin prick (26% sensitivity, 25% accuracy). No correlation was found between the size of MN on US and the positive clinical tests, except for Mulder's click. The size of MN was significantly larger in patients with a positive Mulder's click (10.9 versus 8.5 mm, p = .016). The clinical assessment was comparable to US in diagnosing MN. The thumb index finger squeeze test was the most sensitive screening test for the clinical diagnosis of MN."} {"_id": "206723950b10580ced733cbacbfc23c85b268e13", "title": "Why Lurkers Lurk", "text": "The goal of this paper is to address the question: \u2018why do lurkers lurk?\u2019 Lurkers reportedly make up the majority of members in online groups, yet little is known about them. Without insight into lurkers, our understanding of online groups is incomplete. Ignoring, dismissing, or misunderstanding lurking distorts knowledge of life online and may lead to inappropriate design of online environments. To investigate lurking, the authors carried out a study of lurking using in-depth, semi-structured interviews with ten members of online groups. 79 reasons for lurking and seven lurkers\u2019 needs are identified from the interview transcripts. The analysis reveals that lurking is a strategic activity involving more than just reading posts. Reasons for lurking are categorized and a gratification model is proposed to explain lurker behavior."} {"_id": "22d185c7ba066468f9ff1df03f1910831076e943", "title": "Learning Better Embeddings for Rare Words Using Distributional Representations", "text": "There are two main types of word representations: low-dimensional embeddings and high-dimensional distributional vectors, in which each dimension corresponds to a context word. 
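As a concrete illustration of the second type, the sketch below builds such high-dimensional distributional vectors from windowed co-occurrence counts over a toy corpus. The corpus, window size, and raw-count weighting are illustrative assumptions; the paper's actual corpora and weighting scheme are not specified here.

```python
from collections import Counter
import itertools

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
vocab = sorted(set(itertools.chain.from_iterable(corpus)))
index = {w: i for i, w in enumerate(vocab)}

def distributional_vector(target, window=2):
    # one dimension per context word, as described above
    counts = Counter()
    for sent in corpus:
        for pos, word in enumerate(sent):
            if word != target:
                continue
            lo, hi = max(0, pos - window), pos + window + 1
            counts.update(w for w in sent[lo:hi] if w != target)
    vec = [0] * len(vocab)
    for w, c in counts.items():
        vec[index[w]] = c
    return vec

print(distributional_vector("sat"))
```

Each vector has one dimension per vocabulary word, which is exactly what makes these representations high-dimensional compared with learned embeddings.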
In this paper, we initialize an embedding-learning model with distributional vectors. Evaluation on word similarity shows that this initialization significantly increases the quality of embeddings for rare words."} {"_id": "f1b3400e49a929d9f5bd1b15081a13120abc3906", "title": "Text comparison using word vector representations and dimensionality reduction", "text": "This paper describes a technique to compare large text sources using word vector representations (word2vec) and dimensionality reduction (t-SNE) and how it can be implemented using Python. The technique provides a bird\u2019s-eye view of text sources, e.g. text summaries and their source material, and enables users to explore text sources like a geographical map. Word vector representations capture many linguistic properties such as gender, tense, plurality and even semantic concepts like \"capital city of\". Using dimensionality reduction, a 2D map can be computed where semantically similar words are close to each other. The technique uses the word2vec model from the gensim Python library and t-SNE from scikit-learn."} {"_id": "e49d662652885e9b71622713838c840cca9d33ed", "title": "Engineering Quality and Reliability in Technology-Assisted Review", "text": "The objective of technology-assisted review (\"TAR\") is to find as much relevant information as possible with reasonable effort. Quality is a measure of the extent to which a TAR method achieves this objective, while reliability is a measure of how consistently it achieves an acceptable result. We are concerned with how to define, measure, and achieve high quality and high reliability in TAR. When quality is defined using the traditional goal-post method of specifying a minimum acceptable recall threshold, the quality and reliability of a TAR method are both, by definition, equal to the probability of achieving the threshold. Assuming this definition of quality and reliability, we show how to augment any TAR method to achieve guaranteed reliability, for a quantifiable level of additional review effort. We demonstrate this result by augmenting the TAR method supplied as the baseline model implementation for the TREC 2015 Total Recall Track, measuring reliability and effort for 555 topics from eight test collections. While our empirical results corroborate our claim of guaranteed reliability, we observe that the augmentation strategy may entail disproportionate effort, especially when the number of relevant documents is low. To address this limitation, we propose stopping criteria for the model implementation that may be applied with no additional review effort, while achieving empirical reliability that compares favorably to the provably reliable method. We further argue that optimizing reliability according to the traditional goal-post method is inconsistent with certain subjective aspects of quality, and that optimizing a Taguchi quality loss function may be more apt."} {"_id": "e75cb14344eaeec987aa571d0009d0e02ec48a63", "title": "Design of Highly Integrated Mechatronic Gear Selector Levers for Automotive Shift-by-Wire Systems", "text": "Requirements regarding ergonomic comfort, limited space, weight reduction, and electronic automation of functions and safety features are on the rise for future automotive gear levers. At the same time, current mechanical gear levers are limited in their ability to meet these requirements. In this paper, we present a monostable, miniaturized mechatronic gear lever to fulfill these requirements for automotive applications. 
The solution described here is a gear lever positioned in the center console of a car to achieve optimal ergonomics for dynamic driving, which enables both automatic and manual gear switching. In this paper, we describe the sensor and actuator concept, safety concept, recommended shift pattern, mechanical design, and the electronic integration of this shift-by-wire system in a typical automotive bus communication network. The main contribution of this paper is a successful system design and the integration of a mechatronic system in new applications for optimizing the human-machine interface inside road vehicles."} {"_id": "66c410a2567e96dcff135bf6582cb26c9df765c4", "title": "Batch Identification Game Model for Invalid Signatures in Wireless Mobile Networks", "text": "Secure access is one of the fundamental problems in wireless mobile networks. Digital signatures are a widely used technique to protect messages\u2019 authenticity and nodes\u2019 identities. From the practical perspective, to ensure the quality of services in wireless mobile networks, ideally the process of signature verification should introduce minimum delay. Batch cryptography is a powerful tool to reduce verification time. However, most of the existing works focus on designing batch verification algorithms for wireless mobile networks without sufficiently considering the impact of invalid signatures, which can lead to verification failures and performance degradation. In this paper, we propose a Batch Identification Game Model (BIGM) in wireless mobile networks, enabling nodes to find invalid signatures with reasonable delay, whether the game scenario involves complete or incomplete information. Specifically, we analyze and prove the existence of Nash Equilibria (NEs) in both scenarios, to select the dominant algorithm for identifying invalid signatures. To optimize the identification algorithm selection, we propose a self-adaptive auto-match protocol which estimates the strategies and states of attackers based on historical information. Comprehensive simulation results in terms of NE reasonability, algorithm selection accuracy, and identification delay are provided to demonstrate that BIGM can identify invalid signatures more efficiently than existing algorithms."} {"_id": "9a59a3719bf08105d4632898ee178bd982da2204", "title": "Design of a Control System for an Autonomous Vehicle Based on Adaptive-PID", "text": "The autonomous vehicle is a mobile robot integrating multi\u2010sensor navigation and positioning, intelligent decision making and control technology. This paper presents the control system architecture of the autonomous vehicle, called \u201cIntelligent Pioneer\u201d, and discusses the path tracking and stability of motion needed to navigate effectively in unknown environments. In this approach, a two degree\u2010of\u2010freedom dynamic model is developed to formulate the path\u2010tracking problem in state space format. For controlling the instantaneous path error, traditional controllers have difficulty in guaranteeing performance and stability over a wide range of parameter changes and disturbances. Therefore, a newly developed adaptive\u2010PID controller is used. By using this approach, the flexibility of the vehicle control system is increased, yielding significant advantages. 
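The abstract does not publish the controller equations, so the following is only a rough sketch of the general idea: a PID loop whose gains are adapted online from the instantaneous path error. The gradient-style update rule below is an illustrative assumption, not the authors' published law.

```python
class AdaptivePID:
    """PID path-tracking controller with a simple online gain adaptation.

    The adaptation rule is illustrative only; the paper does not give
    its exact update law in the abstract."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.2, gamma=0.01, dt=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.gamma = gamma          # adaptation rate
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        # adapt each gain in proportion to its term's correlation with the error
        self.kp += self.gamma * error * error
        self.ki += self.gamma * error * self.integral
        self.kd += self.gamma * error * derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = AdaptivePID()
path_error = 0.8                    # hypothetical lateral path error in metres
steering_command = controller.step(path_error)
```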
Throughout, we provide examples and results from Intelligent Pioneer, the autonomous vehicle that used this approach to compete in the 2010 and 2011 Future Challenge of China. Intelligent Pioneer finished all of the competition programmes and won first position in 2010 and third position in 2011."} {"_id": "17ebe1eb19655543a6b876f91d41917488e70f55", "title": "Random synaptic feedback weights support error backpropagation for deep learning", "text": "The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning."} {"_id": "57b199e1d22752c385c34191c1058bcabb850d9f", "title": "Getting Formal with Dopamine and Reward", "text": "Recent neurophysiological studies reveal that neurons in certain brain structures carry specific signals about past and future rewards. Dopamine neurons display a short-latency, phasic reward signal indicating the difference between actual and predicted rewards. The signal is useful for enhancing neuronal processing and learning behavioral reactions. It is distinctly different from dopamine's tonic enabling of numerous behavioral processes. Neurons in the striatum, frontal cortex, and amygdala also process reward information but provide more differentiated information for identifying and anticipating rewards and organizing goal-directed behavior. The different reward signals have complementary functions, and the optimal use of rewards in voluntary behavior would benefit from interactions between the signals. Addictive psychostimulant drugs may exert their action by amplifying the dopamine reward signal."} {"_id": "7592f8a1d4fa2703b75cad6833775da2ff72fe7b", "title": "Deep Big Multilayer Perceptrons for Digit Recognition", "text": "The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent advancement by others dates back 8 years (error rate 0.4%). Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark with a single MLP and 0.31% with a committee of seven MLPs. 
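The random synaptic feedback result summarized above amounts to a one-line change to exactly this kind of on-line backpropagation: the backward pass multiplies the error by a fixed random matrix instead of the transpose of the forward weights. A minimal single-hidden-layer sketch, with illustrative dimensions and learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 100, 10
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))    # fixed random feedback weights

def train_step(x, target, lr=0.01):
    global W1, W2
    h = np.tanh(W1 @ x)                    # forward pass
    y = W2 @ h
    e = y - target
    # standard backprop would use W2.T @ e here; feedback alignment uses B @ e
    delta_h = (B @ e) * (1 - h ** 2)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return float(e @ e)

x = rng.normal(size=n_in)                  # stand-in for one normalized image
t = np.eye(n_out)[3]                       # one-hot target for class 3
print(train_step(x, t))
```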
All we needed to achieve this best result, as of 2011, were many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning."} {"_id": "9539a0c4f8766c08dbaf96561cf6f1f409f5d3f9", "title": "Feature-based attention influences motion processing gain in macaque visual cortex", "text": "Changes in neural responses based on spatial attention have been demonstrated in many areas of visual cortex, indicating that the neural correlate of attention is an enhanced response to stimuli at an attended location and reduced responses to stimuli elsewhere. Here we demonstrate non-spatial, feature-based attentional modulation of visual motion processing, and show that attention increases the gain of direction-selective neurons in visual cortical area MT without narrowing the direction-tuning curves. These findings place important constraints on the neural mechanisms of attention and we propose to unify the effects of spatial location, direction of motion and other features of the attended stimuli in a \u2018feature similarity gain model\u2019 of attention."} {"_id": "cbcd9f32b526397f88d18163875d04255e72137f", "title": "Gradient-based learning applied to document recognition", "text": ""} {"_id": "19e2ad92d0f6ad3a9c76e957a0463be9ac244203", "title": "Condition monitoring of helicopter drive shafts using quadratic-nonlinearity metric based on cross-bispectrum", "text": "Based on cross-bispectrum, quadratic-nonlinearity coupling between two vibration signals is proposed and used to assess health conditions of rotating shafts in an AH-64D helicopter tail rotor drive train. Vibration data are gathered from two bearings supporting the shaft in an experimental helicopter drive train simulating different shaft conditions, namely, baseline, misalignment, imbalance, and combination of misalignment and imbalance. The proposed metric shows better capabilities in distinguishing different shaft settings than the conventional linear coupling based on cross-power spectrum."} {"_id": "7b24aa024ca2037b097cfcb2ea73a60ab497b80e", "title": "Internet security architecture", "text": "Fear of security breaches has been a major reason for the business world\u2019s reluctance to embrace the Internet as a viable means of communication. A widely adopted solution consists of physically separating private networks from the rest of the Internet using firewalls. This paper discusses the current cryptographic security measures available for the Internet infrastructure as an alternative to physical segregation. First, the IPsec architecture, including security protocols in the Internet Layer and the related key management proposals, is introduced. The transport layer security protocol and security issues in network control and management are then presented. The paper is addressed to readers with a basic understanding of common security mechanisms, including encryption, authentication and key exchange techniques."} {"_id": "525dc4242b21df23ba4e1ec0748cf46de0e8f5c0", "title": "Client attachment, attachment to the therapist and client-therapist attachment match: how do they relate to change in psychodynamic psychotherapy?", "text": "OBJECTIVE\nWe examined the associations between client attachment, client attachment to the therapist, and symptom change, as well as the effects of client-therapist attachment match on outcome. 
Clients (n = 67) and their therapists (n = 27) completed the ECR to assess attachment.\n\n\nMETHOD\nClients also completed the Client Attachment to Therapist scale three times (early, middle, and late sessions) and the OQ-45 at intake and four times over the course of a year of psychodynamic psychotherapy.\n\n\nRESULTS\nClients characterized by avoidant attachment and by avoidant attachment to their therapist showed the least improvement. A low-avoidant client-therapist attachment match led to a greater decrease in symptom distress than when a low-avoidant therapist treated a high-avoidant client.\n\n\nCONCLUSIONS\nThese findings suggest the importance of considering client-therapist attachment matching and the need to pay attention to the special challenges involved in treating avoidant clients in order to facilitate progress in psychotherapy."} {"_id": "476edaffb4e613303012e7321dd319ba23abd0c3", "title": "Prioritized multi-task compliance control of redundant manipulators", "text": "We propose a new approach for dynamic control of redundant manipulators to deal with multiple prioritized tasks at the same time by utilizing null space projection techniques. The compliance control law is based on a new representation of the dynamics wherein specific null space velocity coordinates are introduced. These allow the kinematic redundancy to be exploited efficiently according to the task hierarchy and lead to a dynamics formulation with a block-diagonal inertia matrix. The compensation of velocity-dependent coupling terms between the tasks by an additional passive feedback action facilitates a stability analysis for the complete hierarchy based on semi-definite Lyapunov functions. No external forces have to be measured. Finally, the performance of the control approach is evaluated in experiments on a torque-controlled robot."} {"_id": "fdb1a478c6c566729a82424b3d6b37ca76c8b85e", "title": "The Concept of Flow", "text": "What constitutes a good life? Few questions are of more fundamental importance to a positive psychology. Flow research has yielded one answer, providing an understanding of experiences during which individuals are fully involved in the present moment. Viewed through the experiential lens of flow, a good life is one that is characterized by complete absorption in what one does. In this chapter, we describe the flow model of optimal experience and optimal development, explain how flow and related constructs have been measured, discuss recent work in this area, and identify some promising directions for future research."} {"_id": "7817db7b898a3458035174d914a7570d0b0efb7b", "title": "Corporate social responsibility and bank customer satisfaction A research agenda", "text": "Purpose \u2013 The purpose of this paper is to explore the relationship between corporate social responsibility (CSR) and customer outcomes. Design/methodology/approach \u2013 This paper reviews the literature on CSR effects and satisfaction, noting gaps in the literature. Findings \u2013 A series of propositions is put forward to guide future research endeavours. Research limitations/implications \u2013 By understanding the likely impact on customer satisfaction of CSR initiatives vis-\u00e0-vis customer-centric initiatives, the academic research community can assist managers to understand how to best allocate company resources in situations of low customer satisfaction. Such endeavours are managerially relevant and topical. 
Researchers seeking to test the propositions put forward in this paper would be able to gain links with, and possibly attract funding from, banks to conduct their research. Such endeavours may assist researchers to redefine the stakeholder view by placing customers at the centre of a network of stakeholders. Practical implications \u2013 An understanding of how to best allocate company resources to increase the proportion of satisfied customers will allow bank marketers to reduce customer churn and hence increase market share and profits. Originality/value \u2013 Researchers have not previously conducted a comparative analysis of the effects of different CSR initiatives on customer satisfaction, nor considered whether more customer-centric initiatives are likely to be more effective in increasing the proportion of satisfied customers."} {"_id": "60cc377d4d2b885594906d58bacb5732e8a04eb9", "title": "Essential Layers, Artifacts, and Dependencies of Enterprise Architecture", "text": "After a period in which implementation speed was more important than integration, consistency and reduction of complexity, architectural considerations have again become a key issue of information management in recent years. Enterprise architecture is widely accepted as an essential mechanism for ensuring agility and consistency, compliance and efficiency. Although standards like TOGAF and FEAF have been developed, there is no common agreement on which architecture layers, which artifact types and which dependencies constitute the essence of enterprise architecture. This paper contributes to the identification of essential elements of enterprise architecture by (1) specifying enterprise architecture as a hierarchical, multilevel system comprising aggregation hierarchies, architecture layers and views, (2) discussing enterprise architecture frameworks with regard to essential elements, (3) proposing interfacing requirements of enterprise architecture with other architecture models and (4) matching these findings with current enterprise architecture practice in several large companies."} {"_id": "46e0faacf50c8053d38fb3cf2da7fbbfb2932977", "title": "Agent-based control for decentralised demand side management in the smart grid", "text": "Central to the vision of the smart grid is the deployment of smart meters that will allow autonomous software agents, representing the consumers, to optimise their use of devices and heating in the smart home while interacting with the grid. However, without some form of coordination, the population of agents may end up with overly-homogeneous optimised consumption patterns that may generate significant peaks in demand in the grid. These peaks, in turn, reduce the efficiency of the overall system, increase carbon emissions, and may even, in the worst case, cause blackouts. Hence, in this paper, we introduce a novel model of a Decentralised Demand Side Management (DDSM) mechanism that allows agents, by adapting the deferment of their loads based on grid prices, to coordinate in a decentralised manner. Specifically, using average UK consumption profiles for 26M homes, we demonstrate that, through an emergent coordination of the agents, the peak demand of domestic consumers in the grid can be reduced by up to 17% and carbon emissions by up to 6%. 
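A toy rendition of the deferment idea makes the coordination problem concrete: if every agent deterministically moves its deferrable load to the cheapest slot, the peak simply relocates, whereas randomised deferment spreads it out. The price signal, agent count, and probabilistic rule below are illustrative assumptions, not the paper's adaptive mechanism or its UK profile data.

```python
import random

random.seed(1)
prices = [5, 5, 8, 12, 15, 12, 8, 5]       # illustrative price per time slot
n_agents = 1000

def aggregate_load(defer_prob):
    """Each agent has one deferrable unit of load initially scheduled at the
    peak-price slot; with probability defer_prob it moves the load to a
    uniformly chosen off-peak slot, so agents do not all pile into one slot."""
    load = [0] * len(prices)
    peak = max(range(len(prices)), key=prices.__getitem__)
    off_peak = [s for s in range(len(prices)) if s != peak]
    for _ in range(n_agents):
        slot = random.choice(off_peak) if random.random() < defer_prob else peak
        load[slot] += 1
    return load

print("peak demand, no deferment :", max(aggregate_load(0.0)))
print("peak demand, randomised   :", max(aggregate_load(0.9)))
```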
We also show that our DDSM mechanism is robust to the increasing electrification of heating in UK homes (i.e., it exhibits a similar efficiency)."} {"_id": "3d9e919a4de74089f94f5a1b2a167c66c19a241d", "title": "Maxillary length at 11-14 weeks of gestation in fetuses with trisomy 21.", "text": "OBJECTIVE\nTo determine the value of measuring maxillary length at 11-14 weeks of gestation in screening for trisomy 21.\n\n\nMETHODS\nIn 970 fetuses, ultrasound examination was carried out for measurement of crown-rump length (CRL), nuchal translucency and maxillary length, and to determine if the nasal bone was present or absent, immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In 60 cases the maxillary length was measured twice by the same operator to calculate the intraobserver variation in measurements.\n\n\nRESULTS\nThe median gestation was 12 (range, 11-14) weeks. The maxilla was successfully examined in all cases. The mean difference between paired measurements of maxillary length was -0.012 mm and the 95% limits of agreement were -0.42 (95% CI, -0.47 to -0.37) to 0.40 (95% CI, 0.35 to 0.44) mm. The fetal karyotype was normal in 839 pregnancies and abnormal in 131, including 88 cases of trisomy 21. In the chromosomally normal group the maxillary length increased significantly with CRL from a mean of 4.8 mm at a CRL of 45 mm to 8.3 mm at a CRL of 84 mm. In the trisomy 21 fetuses the maxillary length was significantly shorter than normal by 0.7 mm and in the trisomy 21 fetuses with absent nasal bone the maxilla was shorter than in those with present nasal bone by 0.5 mm. In fetuses with other chromosomal defects there were no significant differences from normal in the maxillary length.\n\n\nCONCLUSION\nAt 11-14 weeks of gestation, maxillary length in trisomy 21 fetuses is significantly shorter than in normal fetuses."} {"_id": "db9531c2677ab3eeaaf434ccb18ca354438560d6", "title": "From e-commerce to social commerce: A close look at design features", "text": "E-commerce is undergoing an evolution through the adoption of Web 2.0 capabilities to enhance customer participation and achieve greater economic value. This new phenomenon is commonly referred to as social commerce; however, it is not yet fully understood. In addition to the lack of a stable and agreed-upon definition, there is little research on social commerce and no significant research dedicated to the design of social commerce platforms. This study offers a literature review to explain the concept of social commerce, tracks its nascent state-of-the-art, and discusses relevant design features as they relate to e-commerce and Web 2.0. We propose a new model and a set of principles for guiding social commerce design. We also apply the model and guidelines to two leading social commerce platforms, Amazon and Starbucks on Facebook. The findings indicate that, for any social commerce website, it is critical to achieve a minimum set of social commerce design features. These design features must cover all the layers of the proposed model, including the individual, conversation, community and commerce levels."} {"_id": "9d420ad78af7366384f77b29e62a93a0325ace77", "title": "A spectrogram-based audio fingerprinting system for content-based copy detection", "text": "This paper presents a novel audio fingerprinting method that is highly robust to a variety of audio distortions. It is based on an unconventional audio fingerprint generation scheme. 
The robustness is achieved by generating different versions of the spectrogram matrix of the audio signal by using a threshold based on the average of the spectral values to prune this matrix. We transform each version of this pruned spectrogram matrix into a 2-D binary image. Multiple versions of these 2-D images suppress noise to a varying degree. This varying degree of noise suppression improves the likelihood of one of the images matching a reference image. To speed up matching, we convert each image into an n-dimensional vector, and perform a nearest neighbor search based on this n-dimensional vector. We give results with two different feature parameters and their combination. We test this method on the TRECVID 2010 content-based copy detection evaluation dataset, and also validate the performance on the TRECVID 2009 dataset. Experimental results show the effectiveness of these features even when the audio is distorted. We compare the proposed method to two state-of-the-art audio copy detection systems, namely the NN-based and Shazam systems. Our method far outperforms the Shazam system for all audio transformations (or distortions) in terms of detection performance, number of missed queries and localization accuracy. Compared to the NN-based system, our approach reduces minimal Normalized Detection Cost Rate (min NDCR) by 23\u00a0% and improves localization accuracy by 24\u00a0%."} {"_id": "8c15753cbb921f1b0ce4cd09b83415152212dbef", "title": "More than Just Two Sexes: The Neural Correlates of Voice Gender Perception in Gender Dysphoria", "text": "Gender dysphoria (also known as \"transsexualism\") is characterized as a discrepancy between anatomical sex and gender identity. Research points towards neurobiological influences. Due to the sexually dimorphic characteristics of the human voice, voice gender perception provides a biologically relevant function, e.g. in the context of mating selection. There is evidence for a better recognition of voices of the opposite sex and a differentiation of the sexes in its underlying functional cerebral correlates, namely the prefrontal and middle temporal areas. This fMRI study investigated the neural correlates of voice gender perception in 32 male-to-female gender dysphoric individuals (MtFs) compared to 20 non-gender dysphoric men and 19 non-gender dysphoric women. Participants indicated the sex of 240 voice stimuli modified in semitone steps in the direction of the other gender. Compared to men and women, MtFs showed differences in a neural network including the medial prefrontal gyrus, the insula, and the precuneus when responding to male vs. female voices. With increased voice morphing, men recruited more prefrontal areas compared to women and MtFs, while MtFs revealed a pattern more similar to women. On a behavioral and neuronal level, our results support the feeling of MtFs reporting they cannot identify with their assigned sex."} {"_id": "9281495c7ffc4d6d6e5305281c200f9b02ba70db", "title": "Security and compliance challenges in complex IT outsourcing arrangements: A multi-stakeholder perspective", "text": "Complex IT outsourcing arrangements promise numerous benefits such as increased cost predictability and reduced costs, higher flexibility and scalability upon demand. Organizations trying to realize these benefits, however, face several security and compliance challenges. In this article, we investigate the pressure to take action with respect to such challenges and discuss avenues toward promising responses. 
We collected perceptions on security and compliance challenges from multiple stakeholders by means of a series of interviews and an online survey, first, to analyze the current and future relevance of the challenges as well as potential adverse effects on organizational performance and, second, to discuss the nature and scope of potential responses. The survey participants confirmed the current and future relevance of the six challenges: auditing clouds, managing heterogeneity of services, coordinating involved parties, managing relationships between clients and vendors, localizing and migrating data, and coping with a lack of security awareness. Additionally, they perceived these challenges as affecting organizational performance adversely if they are not properly addressed. Responses in the form of organizational measures were considered more promising than technical ones concerning all challenges except localizing and migrating data, for which the opposite was true. Balancing relational and contractual governance as well as employing specific client and vendor capabilities are essential for the success of IT outsourcing arrangements, yet do not seem sufficient to overcome the investigated challenges. Innovations connecting the technical perspective of utility software with the business perspective of application software relevant for security and compliance management, however, nourish the hope that the benefits associated with complex IT outsourcing arrangements can be realized in the foreseeable future whilst addressing the security and compliance challenges."} {"_id": "919fa5c3a4f9c3c1c7ba407ccbac8ab72ba68566", "title": "Detection of high variability in gene expression from single-cell RNA-seq profiling", "text": "The advancement of next-generation sequencing technology enables mapping gene expression at the single-cell level, capable of tracking cell heterogeneity and determination of cell subpopulations using single-cell RNA sequencing (scRNA-seq). Unlike the objectives of conventional RNA-seq, where differential expression analysis is the integral component, the most important goal of scRNA-seq is to identify highly variable genes across a population of cells, to account for the discrete nature of single-cell gene expression and the uniqueness of the sequencing library preparation protocol for single-cell sequencing. However, there is a lack of a generic expression variation model for different scRNA-seq data sets. Hence, the objective of this study is to develop a gene expression variation model (GEVM), utilizing the relationship between coefficient of variation (CV) and average expression level to address the over-dispersion of single-cell data, and its corresponding statistical significance to quantify the variably expressed genes (VEGs). We have built a simulation framework that generated scRNA-seq data with different numbers of cells, model parameters, and variation levels. We implemented our GEVM and demonstrated its robustness by using a set of simulated scRNA-seq data under different conditions. We evaluated the regression robustness using root-mean-square error (RMSE) and assessed the parameter estimation process by varying initial model parameters that deviated from homogeneous cell population. 
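The abstract does not give the functional form of the CV-mean relationship, so the sketch below uses a common over-dispersion baseline that captures the same idea: fit CV^2 = a/mean + b across genes and flag genes whose CV^2 sits far above the fitted curve. The functional form and the synthetic counts are assumptions, not necessarily the paper's exact GEVM.

```python
import numpy as np

def fit_cv_mean(counts):
    """counts: genes x cells matrix. Fit CV^2 = a/mean + b across genes
    (an assumed baseline form) and report each gene's excess CV^2
    above the fitted curve; large positive excess -> candidate VEG."""
    mu = counts.mean(axis=1)
    keep = mu > 0
    cv2 = counts.var(axis=1)[keep] / mu[keep] ** 2
    # linear least squares in x = 1/mean
    X = np.column_stack([1.0 / mu[keep], np.ones(keep.sum())])
    (a, b), *_ = np.linalg.lstsq(X, cv2, rcond=None)
    excess = cv2 - (a / mu[keep] + b)
    return a, b, excess

rng = np.random.default_rng(0)
true_rates = rng.gamma(2.0, 2.0, size=(200, 1))     # per-gene expression level
counts = rng.poisson(true_rates, size=(200, 50)).astype(float)
a, b, excess = fit_cv_mean(counts)
print(np.argsort(excess)[-5:])   # indices of the most variably expressed genes
```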
We also applied the GEVM on real scRNA-seq data to test the performance under distinct cases. In this paper, we proposed a gene expression variation model that can be used to determine significant variably expressed genes. Applying the model to the simulated single-cell data, we observed robust parameter estimation under different conditions with minimal root-mean-square errors. We also examined the model on two distinct scRNA-seq data sets using different single-cell protocols and determined the VEGs. Obtaining VEGs allowed us to observe possible subpopulations, providing further evidence of cell heterogeneity. With the GEVM, we can easily find significant variably expressed genes in different scRNA-seq data sets."} {"_id": "d4caec47eeabb2eca3ce9e39b1fae5424634c731", "title": "Design and control of underactuated tendon-driven mechanisms", "text": "Many robotic hands or prosthetic hands have been developed in the last several decades, and many use tendon-driven mechanisms for their transmissions. Robotic hands are now built with underactuated mechanisms, which have fewer actuators than degrees of freedom, to reduce mechanical complexity or to realize a biomimetic motion such as flexion of an index finger. The design is heuristic, and it is useful to develop design methods for underactuated mechanisms. This paper classifies mechanisms driven by tendons into three classes and proposes a design method for them. Two of these classes are underactuated tendon-driven mechanisms, and they have so far been used without distinction. An index finger robot, which has four active tendons and two passive tendons, is developed and controlled with the proposed method."} {"_id": "8d6ca2dae1a6d1e71626be6167b9f25d2ce6dbcc", "title": "Semi-Supervised Learning with the Deep Rendering Mixture Model", "text": "Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning. Deep Convolutional Networks (DCNs) have achieved great success in supervised tasks and as such have been widely employed in semi-supervised learning. In this paper we leverage the recently developed Deep Rendering Mixture Model (DRMM), a probabilistic generative model that models latent nuisance variation, and whose inference algorithm yields DCNs. We develop an EM algorithm for the DRMM to learn from both labeled and unlabeled data. Guided by the theory of the DRMM, we introduce a novel nonnegativity constraint and a variational inference term. We report state-of-the-art performance on MNIST and SVHN and competitive results on CIFAR10. We also probe deeper into how a DRMM trained in a semi-supervised setting represents latent nuisance variation using synthetically rendered images. Taken together, our work provides a unified framework for supervised, unsupervised, and semi-supervised learning."} {"_id": "9bfc34ca3d3dd17ecdcb092f2a056da6cb824acd", "title": "Visual analytics of spatial interaction patterns for pandemic decision support", "text": "
Population mobility, i.e. the movement and contact of individuals across geographic space, is one of the essential factors that determine the course of a pandemic disease spread. This research views both individual-based daily activities and a pandemic spread as spatial interaction problems, where locations interact with each other via the visitors that they share or the virus that is transmitted from one place to another. The research proposes a general visual analytic approach to synthesize very large spatial interaction data and discover interesting (and unknown) patterns. The proposed approach involves a suite of visual and computational techniques, including (1) a new graph partitioning method to segment a very large interaction graph into a moderate number of spatially contiguous subgraphs (regions); (2) a reorderable matrix, with regions 'optimally' ordered on the diagonal, to effectively present a holistic view of major spatial interaction patterns; and (3) a modified flow map, interactively linked to the reorderable matrix, to enable pattern interpretation in a geographical context. The implemented system is able to visualize both people's daily movements and a disease spread over space in a similar way. The discovered spatial interaction patterns provide valuable insight for designing effective pandemic mitigation strategies and supporting decision-making in time-critical situations."} {"_id": "0428c79e5be359ccd13d63205b5e06037404967b", "title": "On Bayesian Upper Confidence Bounds for Bandit Problems", "text": "Stochastic bandit problems have been analyzed from two different perspectives: a frequentist view, where the parameter is a deterministic unknown quantity, and a Bayesian approach, where the parameter is drawn from a prior distribution. We show in this paper that methods derived from this second perspective prove optimal when evaluated using the frequentist cumulated regret as a measure of performance. We give a general formulation for a class of Bayesian index policies that rely on quantiles of the posterior distribution. For binary bandits, we prove that the corresponding algorithm, termed Bayes-UCB, satisfies finite-time regret bounds that imply its asymptotic optimality. More generally, Bayes-UCB appears as a unifying framework for several variants of the UCB algorithm addressing different bandit problems (parametric multi-armed bandits, Gaussian bandits with unknown mean and variance, linear bandits). But the generality of the Bayesian approach makes it possible to address more challenging models. In particular, we show how to handle linear bandits with sparsity constraints by resorting to Gibbs sampling."} {"_id": "e030aa1ea57ee47d3f3a0ce05b7e983f95115f1a", "title": "Psychometric Properties of Physical Activity and Leisure Motivation Scale in Farsi: an International Collaborative Project on Motivation for Physical Activity and Leisure.", "text": "BACKGROUND\nGiven the importance of regular physical activity, it is crucial to evaluate the factors favoring participation in physical activity. 
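For the binary case, the quantile-based index of Bayes-UCB above has a particularly small implementation. The sketch below assumes uniform Beta(1, 1) priors and a 1 - 1/t quantile level, which is a simplification of the quantile schedule analyzed in the paper.

```python
import numpy as np
from scipy.stats import beta

def bayes_ucb(arms, horizon):
    """Bernoulli bandit: pull the arm whose posterior quantile is highest."""
    rng = np.random.default_rng(0)
    successes = np.zeros(len(arms))
    failures = np.zeros(len(arms))
    for t in range(1, horizon + 1):
        # index = posterior quantile at level 1 - 1/t under Beta(1+s, 1+f)
        q = beta.ppf(1 - 1.0 / t, successes + 1, failures + 1)
        a = int(np.argmax(q))
        reward = rng.random() < arms[a]       # Bernoulli draw
        successes[a] += reward
        failures[a] += 1 - reward
    return successes, failures

s, f = bayes_ucb(arms=[0.3, 0.5, 0.7], horizon=2000)
print(s + f)   # pull counts; the best arm should dominate
```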
We aimed to report the psychometric analysis of the Farsi version of the Physical Activity and Leisure Motivation Scale (PALMS).\n\n\nMETHODS\nThe Farsi version of the PALMS was completed by 406 healthy adult individuals to test its factor structure, concurrent validity and reliability.\n\n\nRESULTS\nConducting the exploratory factor analysis revealed nine factors that accounted for 64.6% of the variance. The PALMS reliability was supported with a high internal consistency of 0.91 and a high test-retest reliability of 0.97 (95% CI: 0.97-0.98). The association between the PALMS and its predecessor, the Recreational Exercise Motivation Measure, was strongly significant (r = 0.86, P < 0.001).\n\n\nCONCLUSION\nWe have shown that the Farsi version of the PALMS appears to be a valuable instrument to measure motivation for physical activity and leisure."} {"_id": "6570489a6294a5845adfd195a50a226f78a139c1", "title": "An extended online purchase intention model for middle-aged online users", "text": "This article focuses on examining the determinants and mediators of the purchase intention of non-online purchasers between ages 31 and 60, who mostly have strong purchasing power. It proposes a new online purchase intention model by integrating the technology acceptance model with additional determinants and adding habitual online usage as a new mediator. Based on a sample of more than 300 middle-aged non-online purchasers, beyond some situationally-specific predictor variables, online purchasing attitude and habitual online usage are key mediators. Personal awareness of security only affects habitual online usage, indicating a concern of middle-aged users. Habitual online usage is a"} {"_id": "1eb92d883dab2bc6a408245f4766f4c5d52f7545", "title": "Maximum Complex Task Assignment: Towards Tasks Correlation in Spatial Crowdsourcing", "text": "Spatial crowdsourcing has gained emerging interest from both research communities and industries. Most current spatial crowdsourcing frameworks assume independent and atomic tasks. However, there are cases in which one needs to crowdsource a spatial complex task which consists of spatial sub-tasks (i.e., tasks related to a specific location). The spatial complex task's assignment requires assignments of all of its sub-tasks. The currently available frameworks are inapplicable to such kinds of tasks. In this paper, we introduce a novel approach to crowdsource spatial complex tasks. We first formally define the Maximum Complex Task Assignment (MCTA) problem and propose alternative solutions. Subsequently, we perform various experiments using both real and synthetic datasets to investigate and verify the usability of our proposed approach."} {"_id": "44298a4cf816fe8d55c663337932724407ae772b", "title": "A survey on policy search algorithms for learning robot controllers in a handful of trials", "text": "Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word \u201cbig-data\u201d, we refer to this challenge as \u201cmicro-data reinforcement learning\u201d. We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). 
A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time."} {"_id": "35318f1dcc88c8051911ba48815c47d424626a92", "title": "Visual Analysis of TED Talk Topic Trends", "text": "TED Talks are short, powerful talks given by some of the world's brightest minds - scientists, philanthropists, businessmen, artists, and many others. Funded by members and advertising, these talks are free to access by the public on the TED website and TED YouTube channel, and many videos have become viral phenomena. In this research project, we perform a visual analysis of TED Talk videos and playlists to gain a good understanding of the trends and relationships between TED Talk topics."} {"_id": "a42569c671b5f9d0fe2007af55199d668dae491b", "title": "Fine-grained Concept Linking using Neural Networks in Healthcare", "text": "To unlock the wealth of healthcare data, we often need to link real-world text snippets to the referred medical concepts described by canonical descriptions. However, existing healthcare concept linking methods, such as dictionary-based and simple machine learning methods, are not effective due to the word discrepancy between the text snippet and the canonical concept description, and the overlapping concept meaning among fine-grained concepts. To address these challenges, we propose a Neural Concept Linking (NCL) approach for accurate concept linking using systematically integrated neural networks. We call the novel neural network architecture the COMposite AttentIonal encode-Decode neural network (COM-AID). COM-AID performs an encode-decode process that encodes a concept into a vector and decodes the vector into a text snippet with the help of two devised contexts. On the one hand, it injects the textual context into the neural network through the attention mechanism, so that the word discrepancy can be overcome from the semantic perspective. On the other hand, it incorporates the structural context into the neural network through the attention mechanism, so that minor concept meaning differences can be enlarged and effectively differentiated. Empirical studies on two real-world datasets confirm that NCL produces accurate concept linking results and significantly outperforms state-of-the-art techniques."} {"_id": "30f2b6834d6f2322da204f36ad24ddf43cc45d33", "title": "Structural XML Classification in Concept Drifting Data Streams", "text": "Classification of large, static collections of XML data has been intensively studied in the last several years. Recently, however, the data processing paradigm is shifting from static to streaming data, where documents have to be processed online using limited memory and class definitions can change with time in an event called concept drift. As most existing XML classifiers are capable of processing only static data, there is a need to develop new approaches dedicated to streaming environments. In this paper, we propose a new classification algorithm for XML data streams called XSC. 
The algorithm uses incrementally mined frequent subtrees and a tree-subtree similarity measure to classify new documents in an associative manner. The proposed approach is experimentally evaluated against eight state-of-the-art stream classifiers on real and synthetic data. The results show that XSC performs significantly better than competitive algorithms in terms of accuracy and memory usage."} {"_id": "14829636fee5a1cf8dee9737849a8e2bdaf9a91f", "title": "Bitter to Better - How to Make Bitcoin a Better Currency", "text": "Bitcoin is a distributed digital currency which has attracted a substantial number of users. We perform an in-depth investigation to understand what made Bitcoin so successful, while decades of research on cryptographic e-cash has not led to a large-scale deployment. We ask also how Bitcoin could become a good candidate for a long-lived stable currency. In doing so, we identify several issues and attacks of Bitcoin, and propose suitable techniques to address them."} {"_id": "35fe18606529d82ce3fc90961dd6813c92713b3c", "title": "SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies", "text": "Bitcoin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bitcoin grew to comprise billions of dollars of economic value despite only cursory analysis of the system's design. Since then a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. Meanwhile a large and vibrant open-source community has proposed and deployed numerous modifications and extensions. We provide the first systematic exposition of Bitcoin and the many related cryptocurrencies or 'altcoins.' Drawing from a scattered body of knowledge, we identify three key components of Bitcoin's design that can be decoupled. This enables a more insightful analysis of Bitcoin's properties and future stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We survey anonymity issues in Bitcoin and provide an evaluation framework for analyzing a variety of privacy-enhancing proposals. Finally we provide new insights on what we term disintermediation protocols, which absolve the need for trusted intermediaries in an interesting set of applications. We identify three general disintermediation strategies and provide a detailed comparison."} {"_id": "3d16ed355757fc13b7c6d7d6d04e6e9c5c9c0b78", "title": "Majority Is Not Enough: Bitcoin Mining Is Vulnerable", "text": ""} {"_id": "5e86853f533c88a1996455d955a2e20ac47b3878", "title": "Information propagation in the Bitcoin network", "text": "Bitcoin is a digital currency that, unlike traditional currencies, does not rely on a centralized authority. Instead, Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic of inconsistencies among the replicas in the network. 
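The conjecture ties the fork rate to how long a block spends propagating relative to the block interval. A toy Monte Carlo check of that relationship is sketched below; exponential block inter-arrival times and a single network-wide delay are simplifying assumptions, not the paper's measured distributions.

```python
import random

def fork_rate(block_interval=600.0, prop_delay=12.0, n_blocks=200_000, seed=7):
    """Estimate how often a competing block is found while the previous
    block is still propagating through the network."""
    random.seed(seed)
    forks = 0
    for _ in range(n_blocks):
        # time until the next block is found anywhere in the network
        if random.expovariate(1.0 / block_interval) < prop_delay:
            forks += 1
    return forks / n_blocks

# roughly prop_delay / block_interval, i.e. about 2% for these toy numbers
print(f"{fork_rate():.4%}")
```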
We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior."} {"_id": "5fb1285e05bbd78d0094fe8061c644ea09d9da8d", "title": "Double-spending fast payments in bitcoin", "text": "Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to verify payments. Nowadays, Bitcoin is increasingly used in a number of fast payment scenarios, where the time between the exchange of currency and goods is short (on the order of a few seconds). While the Bitcoin payment verification scheme is designed to prevent double-spending, our results show that the system requires tens of minutes to verify a transaction and is therefore inappropriate for fast payments. An example of this use of Bitcoin was recently reported in the media: Bitcoins were used as a form of fast payment in a local fast-food restaurant. Until now, the security of fast Bitcoin payments has not been studied. In this paper, we analyze the security of using Bitcoin for fast payments. We show that, unless appropriate detection techniques are integrated into the current Bitcoin implementation, double-spending attacks on fast payments succeed with overwhelming probability and can be mounted at low cost. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast payments are not always effective in detecting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we propose and implement a modification to the existing Bitcoin implementation that ensures the detection of double-spending attacks against fast payments."} {"_id": "d2920567fb66bc69d92ab2208f6455e37ce6138b", "title": "Disruptive Innovation: Removing the Innovators' Dilemma", "text": "The objectives of this research are to co-create understanding and knowledge on the phenomenon of disruptive innovation in order to provide pragmatic clarity on the term's meaning, impact and implications. This will address the academic audience's gap in knowledge and provide help to practitioners wanting to understand how disruptive innovation can be fostered as part of a major competitive strategy. This paper reports on the first eighteen months of a three-year academic and industrial investigation. It presents a new pragmatic definition drawn from the literature and an overview of the conceptual framework for disruptive innovation that was co-created via the collaborative efforts of academia and industry. The barriers to disruptive innovation are presented and a best practice case study of how one company is overcoming these barriers is described. The remainder of the research, which is supported by a European Commission co-sponsored project called Disrupt-it, will focus on developing and validating tools to help overcome these barriers. 1.0. Introduction and Background. In his groundbreaking book "The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail", Clayton Christensen first coined the phrase 'disruptive technologies'.
He showed that time and again almost all the organisations that have 'died' or been displaced from their industries because of a new paradigm of customer offering could see the disruption coming but did nothing until it was too late (Christensen, 1997). They assess the new approaches or technologies and frame them as either deficient or an unlikely threat, much to the managers' regret and the organisation's demise (Christensen, 2002). In the early 1990s, major airlines such as British Airways decided that the opportunities afforded by a low-cost, point-to-point, no-frills strategy such as the one introduced by the newly formed Ryanair were an unlikely threat. By the mid-1990s, other newcomers such as easyJet had embraced Ryanair's foresight and before long, the 'low cost' approach had captured a large segment of the market. The low-cost, no-frills approach proved a hit with European travellers but not with the established airlines, who had either ignored the threat or failed to capitalise on the approach. Today, DVD technology and Charles Schwab are seen to be having a similar impact upon the VHS industry and Merrill Lynch respectively. Disruption, however, is not just a recent phenomenon: it has firm foundations in the past and will inevitably occur again in the future. Examples of past disruptive innovations would include the introduction of the telegraph and its impact upon businesses like the Pony Express, and the transistor's impact upon the companies that produced cathode ray tubes. Future predictions include the impact of Light Emitting Diode (L.E.D.) technology and its potential to completely disrupt the traditional light bulb sector and its supporting industries. More optimistically, Christensen (2002) further shows that the process of disruptive innovation has been one of the fundamental causal mechanisms through which access to life-improving products and services has been increased and the basis on which long-term organisational survival could be ensured (Christensen, 1997). In spite of the proclaimed importance of disruptive innovation and the ever increasing interest from both the business and academic press alike, there still appears to be a disparity between rhetoric and reality. To date, the multifaceted and interrelated issues of disruptive innovation have not been investigated in depth. The phenomenon has been described, with examples, by a number of authors (Christensen, 1997; Moore, 1995; Gilbert and Bower, 2002), and practitioner-orientated writers have begun to offer strategies for responding to disruptive change (Charitou and Markides, 2003; Rigby and Corbett, 2002; Rafi and Kampas, 2002). However, a deep, integrated understanding of the entire subject is missing. In particular, there is an industrial need and an academic knowledge gap in the pragmatic comprehension of how organisations can understand and foster disruptive innovation as part of a major competitive strategy. The objectives of this research are to co-create understanding and knowledge on the phenomenon of disruptive innovation in order to provide pragmatic clarity on the term's meaning, impact and implications.
This will address the academic audience's gap in knowledge and provide help to practitioners wanting to understand how disruptive innovation can be fostered as part of a major competitive strategy. The current paper reports on the first eighteen months of a three-year academic and industrial investigation. It presents a new pragmatic definition drawn from the literature and an overview of the conceptual framework for disruptive innovation that was co-created via the collaborative efforts of academia and industry. The barriers to disruptive innovation are presented and a best practice case study of how one company is overcoming these barriers is described. The research contributes to "Disrupt-it", a €3 million project for the Information Society Technologies Commission under the 5th Framework Program of the European Union, which will focus on developing and validating tools to help organisations foster disruptive innovation. 2.0 Understanding the Phenomenon of Disruptive Innovation. 'Disruptive Innovation', 'Disruptive Technologies' and 'Disruptive Business Strategies' are emerging and increasingly prominent business terms that are used to describe a form of revolutionary change. They are receiving ever more academic and industrial attention, yet these terms are still poorly defined and not well understood. A key objective of this research is to improve the understanding of disruptive innovation by drawing together multiple perspectives on the topic, as shown in Figure 1, into a more holistic and comprehensive definition. Much of the past investigation into discontinuous and disruptive innovation has been path-dependent upon the researchers' investigative history. For example, Hamel's strategy background leads him to see disruptive innovation through the lens of the 'business model'; whereas Christensen's technologically orientated past leads to a focus on 'disruptive technologies'. What many researchers share is the view that firms need to periodically engage in the process of revolutionary change for long-term survival, and this is not a new phenomenon (Christensen, 1997; Christensen and Rosenbloom, 1995; Hamel, 2000; Schumpeter, 1975; Tushman and Anderson, 1986; Tushman and Nadler, 1986; Gilbert and Bower, 2002; Rigby and Corbett, 2002; Charitou and Markides, 2003; Foster and Kaplan, 2001; Thomond and Lettice, 2002). Disruptive innovation has also been defined as "a technology, product or process that creeps up from below an existing business and threatens to displace it. Typically, the disrupter offers lower performance and less functionality... The product or process is good enough for a meaningful number of customers – indeed some don't buy the older version's higher functionality and welcome the disruption's simplicity. And gradually, the new product or process improves to the point where it displaces the incumbent." (Rafi and Kampas, 2002, p. 8). This definition borrows heavily from the work of Christensen (1997), which in turn has some of its origins in the findings of Dosi (1982). For example, each of the cases of disruptive innovation mentioned thus far represents a new paradigm of customer offering.
Dosi (1982) claims that these can be represented as discontinuities in trajectories of progress as defined within earlier paradigms, where a technological paradigm is a pattern of solutions for selected technological problems. In fact, new paradigms redefine the future meaning of progress and a new class of problems becomes the target of normal incremental innovation (Dosi, 1982). Therefore, disruptive innovations appear to typify a particular type of 'discontinuous innovation' (a term which has received much more academic attention). The same characteristics are found, except that disruptive innovations first establish their commercial footing in new or simple market niches by enabling customers to do things that only specialists could do before (e.g. low cost European airlines are opening up air travel to thousands that did not fly before) and that these new offerings, through a period of exploitation, migrate upmarket and eventually redefine the paradigms and value propositions on which the existing industry is based (Christensen, 1997, 2002; Moore, 1995; Charitou & Markides, 2003; Hamel, 2000)."} {"_id": "5e90e57fccafbc78ecbac1a78c546b7db9a468ce", "title": "Finding new terminology in very large corpora", "text": "Most technical and scientific terms are complex, multi-word noun phrases, but certainly not all noun phrases are technical or scientific terms. The distinction of specific terminology from common non-specific noun phrases can be based on the observation that terms reveal a much lesser degree of distributional variation than non-specific noun phrases. We formalize the limited paradigmatic modifiability of terms and, subsequently, test the corresponding algorithm on bigram, trigram and quadgram noun phrases extracted from a 104-million-word biomedical text corpus. Using an already existing and community-wide curated biomedical terminology as an evaluation gold standard, we show that our algorithm significantly outperforms standard term identification measures and, therefore, qualifies as a high-performance building block for any terminology identification system. We also provide empirical evidence that the superiority of our approach, beyond a 10-million-word threshold, is essentially domain- and corpus-size-independent."} {"_id": "991891e3aa226766dcb4ad7221045599f8607685", "title": "Review of axial flux induction motor for automotive applications", "text": "Hybrid and electric vehicles have been the focus of many academic and industrial studies to reduce transport pollution; they are now established products. In hybrid and electric vehicles, the drive motor should have high torque density, high power density, high efficiency, a strong physical structure and a variable speed range. An axial flux induction motor is an interesting solution, where the motor is a double-sided axial flux machine. This can significantly increase torque density. In this paper a review of the axial flux motor for automotive applications, and the different possible topologies for the axial field motor, are presented."} {"_id": "11b111cbe79e5733fea28e4b9ff99fe7b4a4585c", "title": "Generalized vulnerability extrapolation using abstract syntax trees", "text": "The discovery of vulnerabilities in source code is key to securing computer systems. While specific types of security flaws can be identified automatically, in the general case the process of finding vulnerabilities cannot be automated and vulnerabilities are mainly discovered by manual analysis.
In this paper, we propose a method for assisting a security analyst during auditing of source code. Our method proceeds by extracting abstract syntax trees from the code and determining structural patterns in these trees, such that each function in the code can be described as a mixture of these patterns. This representation enables us to decompose a known vulnerability and extrapolate it to a code base, such that functions potentially suffering from the same flaw can be suggested to the analyst. We evaluate our method on the source code of four popular open-source projects: LibTIFF, FFmpeg, Pidgin and Asterisk. For three of these projects, we are able to identify zero-day vulnerabilities by inspecting only a small fraction of the code bases."} {"_id": "0dbed89ea3296f351eb986cc02678c7a33d50945", "title": "A Combinatorial Noise Model for Quantum Computer Simulation", "text": "Quantum computers (QCs) have many potential hardware implementations ranging from solid-state silicon-based structures to electron-spin qubits on liquid helium. However, all QCs must contend with gate infidelity and qubit state decoherence over time. Quantum error correcting codes (QECCs) have been developed to protect program qubit states from such noise. Previously, Monte Carlo noise simulators have been developed to model the effectiveness of QECCs in combating decoherence. The downside to this random sampling approach is that it may take days or weeks to produce enough samples for an accurate measurement. We present an alternative noise modeling approach that performs combinatorial analysis rather than random sampling. This model tracks the progression of the most likely error states of the quantum program through its course of execution. This approach has the potential for enormous speedups versus the previous Monte Carlo methodology. We have found speedups with the combinatorial model on the order of 100X-1,000X over the Monte Carlo approach when analyzing applications utilizing the [[7,1,3]] QECC. The combinatorial noise model has significant memory requirements, and we analyze its scaling properties relative to the size of the quantum program. Due to its speedup, this noise model is a valuable alternative to traditional Monte Carlo simulation."} {"_id": "47f0455d65a0823c70ce7cce9749f3abd826e0a7", "title": "Random Walk with Restart on Large Graphs Using Block Elimination", "text": "Given a large graph, how can we calculate the relevance between nodes fast and accurately? Random walk with restart (RWR) provides a good measure for this purpose and has been applied to diverse data mining applications including ranking, community detection, link prediction, and anomaly detection. Since calculating RWR from scratch takes a long time, various preprocessing methods, most of which are related to inverting adjacency matrices, have been proposed to speed up the calculation. However, these methods do not scale to large graphs because they usually produce large dense matrices that do not fit into memory. In addition, the existing methods are inappropriate when graphs dynamically change because the expensive preprocessing task needs to be computed repeatedly.\n In this article, we propose Bear, a fast, scalable, and accurate method for computing RWR on large graphs. Bear has two versions: a preprocessing method BearS for static graphs and an incremental update method BearD for dynamic graphs. BearS consists of the preprocessing step and the query step. 
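For reference, the quantity Bear computes is the RWR score vector, the fixed point of r = (1 - c)·Ã·r + c·q, where Ã is the column-normalized adjacency matrix, q is the one-hot restart vector of the query node, and c is the restart probability (this notation is one common convention, assumed for the sketch). A plain power-iteration baseline, i.e., the slow path that Bear's preprocessing is designed to beat, might look like this (my illustration, not the paper's code):

```python
import numpy as np

def rwr_power_iteration(A: np.ndarray, seed: int, c: float = 0.15,
                        tol: float = 1e-9, max_iter: int = 1000) -> np.ndarray:
    """Random walk with restart by power iteration on a dense adjacency matrix."""
    n = A.shape[0]
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0            # avoid division by zero on sinks
    A_norm = A / col_sums                    # column-stochastic transition matrix
    q = np.zeros(n)
    q[seed] = 1.0                            # restart distribution at the query node
    r = q.copy()
    for _ in range(max_iter):
        r_new = (1 - c) * A_norm @ r + c * q
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Tiny example: path graph 0-1-2-3; relevance decays with distance from node 0.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
print(rwr_power_iteration(A, seed=0).round(4))
```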
In the preprocessing step, BearS reorders the adjacency matrix of a given graph so that it contains a large and easy-to-invert submatrix, and precomputes several matrices including the Schur complement of the submatrix. In the query step, BearS quickly computes the RWR scores for a given query node using a block elimination approach with the matrices computed in the preprocessing step. For dynamic graphs, BearD efficiently updates the changed parts in the preprocessed matrices of BearS based on the observation that only small parts of the preprocessed matrices change when a few edges are inserted or deleted. Through extensive experiments, we show that BearS significantly outperforms other state-of-the-art methods in terms of preprocessing and query speed, space efficiency, and accuracy. We also show that BearD quickly updates the preprocessed matrices and immediately computes queries when the graph changes."} {"_id": "239222aead65a66be698036d04e4af6eaa24b77b", "title": "An energy-efficient unequal clustering mechanism for wireless sensor networks", "text": "Clustering provides an effective way of prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, to distribute the energy consumption among nodes in each cluster and extend the network lifetime. However, they rarely consider the hot spots problem in multihop wireless sensor networks. When cluster heads cooperate with each other to forward their data to the base station, the cluster heads closer to the base station are burdened with heavy relay traffic and tend to die early, leaving areas of the network uncovered and causing network partition. To address the problem, we propose an energy-efficient unequal clustering (EEUC) mechanism for periodical data gathering in wireless sensor networks. It partitions the nodes into clusters of unequal size, and clusters closer to the base station have smaller sizes than those farther away from the base station. Thus cluster heads closer to the base station can preserve some energy for the inter-cluster data forwarding. We also propose an energy-aware multihop routing protocol for the inter-cluster communication. Simulation results show that our unequal clustering mechanism balances the energy consumption well among all sensor nodes and achieves a clear improvement in network lifetime."} {"_id": "d19f938c790f0ffd8fa7fccc9fd7c40758a29f94", "title": "Art-Bots: Toward Chat-Based Conversational Experiences in Museums", "text": ""} {"_id": "7a5ae36df3f08df85dfaa21fead748f830d5e4fa", "title": "Learning Bound for Parameter Transfer Learning", "text": "We consider a transfer-learning problem by using the parameter transfer approach, where a suitable parameter of feature mapping is learned through one task and applied to another objective task. Then, we introduce the notion of the local stability and parameter transfer learnability of parametric feature mapping, and thereby derive a learning bound for parameter transfer algorithms. As an application of parameter transfer learning, we discuss the performance of sparse coding in self-taught learning. Although self-taught learning algorithms with plentiful unlabeled data often show excellent empirical performance, their theoretical analysis has not been studied.
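The sparse-coding pipeline referred to here follows a simple recipe: learn a dictionary (the transferred "parameter of feature mapping") on the plentiful unlabeled data, then re-encode the scarce labeled task through it and train a classifier on the codes. A minimal sketch of that setting with scikit-learn (an illustration only, not the paper's algorithm or its bound; all data and hyperparameters are invented):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_unlabeled = rng.randn(500, 20)             # plentiful unlabeled source data
X_train = rng.randn(40, 20)                  # scarce labeled target-task data
y_train = (X_train[:, 0] > 0).astype(int)    # toy labels

# Parameter transfer: the dictionary is learned on the unlabeled source,
# not on the target task, and is then applied unchanged to the target.
dico = DictionaryLearning(n_components=15, alpha=1.0, max_iter=20,
                          random_state=0)
dico.fit(X_unlabeled)

Z_train = dico.transform(X_train)            # sparse codes as features
clf = LogisticRegression().fit(Z_train, y_train)
print("train accuracy:", clf.score(Z_train, y_train))
```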
In this paper, we also provide the first theoretical learning bound for self-taught learning."} {"_id": "2b695f4060e78f9977a3da1c01a07a05a3f94b28", "title": "Analyzing Posture and Affect in Task-Oriented Tutoring", "text": "Intelligent tutoring systems research aims to produce systems that meet or exceed the effectiveness of one-on-one expert human tutoring. Theory and empirical study suggest that affective states of the learner must be addressed to achieve this goal. While many affective measures can be utilized, posture offers the advantages of non-intrusiveness and ease of interpretation. This paper presents an accurate posture estimation algorithm applied to a computer-mediated tutoring corpus of depth recordings. Analyses of posture and session-level student reports of engagement and cognitive load identified significant patterns. The results indicate that disengagement and frustration may coincide with closer postural positions and more movement, while focused attention and less frustration occur with more distant, stable postural positions. It is hoped that this work will lead to intelligent tutoring systems that recognize a greater breadth of affective expression through channels of posture and gesture."} {"_id": "c28bcaab43e57b9b03f09fd2237669634da8a741", "title": "Contributions of the prefrontal cortex to the neural basis of human decision making", "text": "The neural basis of decision making has been an elusive concept largely due to the many subprocesses associated with it. Recent efforts involving neuroimaging, neuropsychological studies, and animal work indicate that the prefrontal cortex plays a central role in several of these subprocesses. The frontal lobes are involved in tasks ranging from making binary choices to making multi-attribute decisions that require explicit deliberation and integration of diverse sources of information. In categorizing different aspects of decision making, a division of the prefrontal cortex into three primary regions is proposed. (1) The orbitofrontal and ventromedial areas are most relevant to deciding based on reward values and contribute affective information regarding decision attributes and options. (2) Dorsolateral prefrontal cortex is critical in making decisions that call for the consideration of multiple sources of information, and may recruit separable areas when making well-defined versus poorly-defined decisions. (3) The anterior and ventral cingulate cortex appear especially relevant in sorting among conflicting options, as well as signaling outcome-relevant information. This topic is broadly relevant to cognitive neuroscience as a discipline, as it generally comprises several aspects of cognition and may involve numerous brain regions depending on the situation. The review concludes with a summary of how these regions may interact in deciding and possible future research directions for the field."} {"_id": "3ec40e4f549c49b048cd29aeb0223e709abc5565", "title": "Image-based Airborne LiDAR Point Cloud Encoding for 3D Building Model Retrieval", "text": "With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query.
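The encoding described in the remainder of this record starts by rasterizing roof points into a top-view depth image. A minimal sketch of that step (the grid resolution and max-height aggregation are assumptions of this sketch, not values taken from the paper):

```python
import numpy as np

def topview_depth_image(points: np.ndarray, cell: float = 0.5) -> np.ndarray:
    """Rasterize an (N, 3) LiDAR point cloud into a top-view depth image.

    Each pixel stores the maximum z (roof height) of the points falling
    into that ground cell; empty cells stay at 0.
    """
    xy = points[:, :2] - points[:, :2].min(axis=0)     # shift to the origin
    ij = np.floor(xy / cell).astype(int)               # grid indices per point
    h, w = ij.max(axis=0) + 1
    img = np.zeros((h, w))
    for (i, j), z in zip(ij, points[:, 2]):
        img[i, j] = max(img[i, j], z)                  # keep the highest return
    return img

# Toy cloud: a flat 2 m roof with a 3 m ridge along one edge.
pts = np.array([[x, y, 2.0 + (1.0 if x < 1 else 0.0)]
                for x in np.arange(0, 4, 0.25) for y in np.arange(0, 4, 0.25)])
print(topview_depth_image(pts).shape)
```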
The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To efficiently retrieve models, the models in the database are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors can be extracted via spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods."} {"_id": "2433254a9df37729159daa5eeec56123e122518e", "title": "THE ROLE OF DIGITAL AND SOCIAL MEDIA MARKETING IN CONSUMER BEHAVIOR", "text": "This article reviews recently published research about consumers in digital and social media marketing settings. Five themes are identified: (i) consumer digital culture, (ii) responses to digital advertising, (iii) effects of digital environments on consumer behavior, (iv) mobile environments, and (v) online word of mouth (WOM). Collectively these articles shed light from many different angles on how consumers experience, influence, and are influenced by the digital environments in which they are situated as part of their daily lives. Much is still to be understood, and existing knowledge tends to be disproportionately focused on WOM, which is only part of the digital consumer experience. Several directions for future research are advanced to encourage researchers to consider a broader range of phenomena."} {"_id": "399bc455dcbaf9eb0b4144d0bc721ac4bb7c8d59", "title": "A Spreadsheet Algebra for a Direct Data Manipulation Query Interface", "text": "A spreadsheet-like "direct manipulation" interface is more intuitive for many non-technical database users compared to traditional alternatives, such as visual query builders. The construction of such a direct manipulation interface may appear straightforward, but there are some significant challenges. First, individual direct manipulation operations cannot be too complex, so expressive power has to be achieved through composing (long) sequences of small operations. Second, all intermediate results are visible to the user, so grouping and ordering are material after every small step. Third, users often find the need to modify previously specified queries. Since manipulations are specified one step at a time, there is no actual query expression to modify.
Suitable means must be provided to address this need. Fourth, the order in which manipulations are performed by the user should not affect the results obtained, to avoid user confusion. We address the aforementioned challenges by designing a new spreadsheet algebra that: i) operates on recursively grouped multi-sets, ii) contains a selectively designed set of operators capable of expressing at least all single-block SQL queries and can be intuitively implemented in a spreadsheet, iii) enables query modification by the notion of modifiable query state, and iv) requires no ordering in unary data manipulation operators since they are all designed to commute. We built a prototype implementation of the spreadsheet algebra and show, through user studies with non-technical subjects, that the resultant query interface is easier to use than a standard commercial visual query builder."} {"_id": "1eff385c88fd1fdd1c03fd3fb573de2530b73f99", "title": "OBJECTIVE SELF-AWARENESS THEORY: RECENT PROGRESS AND ENDURING PROBLEMS", "text": "Objective self-awareness theory has undergone fundamental changes in the 3 decades since Duval and Wicklund's (1972) original formulation. We review new evidence that bears on the basic tenets of the theory. Many of the assumptions of self-awareness theory require revision, particularly how expectancies influence approach and avoidance of self-standard discrepancies; the nature of standards, especially when they are changed; and the role of causal attribution in directing discrepancy reduction. However, several unresolved conceptual issues remain; future theoretical and empirical directions are discussed. Article: The human dilemma is that which arises out of a man's capacity to experience himself as both subject and object at the same time. Both are necessary--for the science of psychology, for therapy, and for gratifying living. (May, 1967, p. 8) Although psychological perspectives on the self have a long history (e.g., Cooley, 1902; James, 1890; Mead, 1934), experimental research on the self has emerged only within the last 40 years. One of the earliest "self theories" was objective self-awareness (OSA) theory (Duval & Wicklund, 1972). OSA theory was concerned with the self-reflexive quality of consciousness. Just as people can apprehend the existence of environmental stimuli, they can be aware of their own existence: "When attention is directed inward and the individual's consciousness is focused on himself, he is the object of his own consciousness--hence 'objective' self awareness" (Duval & Wicklund, 1972, p. 2). This is contrasted with "subjective self-awareness" that results when attention is directed away from the self and the person "experiences himself as the source of perception and action" (Duval & Wicklund, 1972, p. 3). By this, Duval and Wicklund (1972, chap. 3) meant consciousness of one's existence on an organismic level, in which such existence is undifferentiated as a separate and distinct object in the world.
OSA theory has stimulated a lot of research and informed basic issues in social psychology, such as emotion (Scheier & Carver, 1977), attribution (Duval & Wicklund, 1973), attitude--behavior consistency (Gibbons, 1983), self-standard comparison (Duval & Lalwani, 1999), prosocial behavior (Froming, Nasby, & McManus, 1998), deindividuation (Diener, 1979), stereotyping (Macrae, Bodenhausen, & Milne, 1998), self-assessment (Silvia & Gendolla, in press), terror management (Arndt, Greenberg, Simon, Pyszczynski, & Solomon, 1998; Silvia, 2001), and group dynamics (Duval, 1976; Mullen, 1983). Self-focused attention is also fundamental to a host of clinical and health phenomena (Hull, 1981; Ingram, 1990; Pyszczynski, Hamilton, Greenberg, & Becker, 1991; Wells & Matthews, 1994). The study of self-focused attention continues to be a dynamic and active research area. A lot of research relevant to basic theoretical issues has been conducted since the last major review (Gibbons, 1990). Recent research has made progress in understanding links between self-awareness and causal attribution, the effects of expectancies on self-standard discrepancy reduction, and the nature of standards--the dynamics of self-awareness are now viewed quite differently. We review these recent developments[1] and hope that a conceptual integration of new findings will further stimulate research on self-focused attention. However, there is still much conceptual work left to be done, and many basic issues remain murky and controversial. We discuss these unresolved issues and sketch the beginnings of some possible solutions. Original Theory The original statement of OSA theory (Duval & Wicklund, 1972) employed only a few constructs, relations, and processes. The theory assumed that the orientation of conscious attention was the essence of self-evaluation. Focusing attention on the self brought about objective self-awareness, which initiated an automatic comparison of the self against standards. The self was defined very broadly as the person's knowledge of the person. A standard was "defined as a mental representation of correct behavior, attitudes, and traits ... All of the standards of correctness taken together define what a 'correct' person is" (Duval & Wicklund, 1972, pp. 3, 4). This simple system consisting of self, standards, and attentional focus was assumed to operate according to gestalt consistency principles (Heider, 1960). If a discrepancy was found between self and standards, negative affect was said to arise. This aversive state then motivated the restoration of consistency. Two behavioral routes were proposed. People could either actively change their actions, attitudes, or traits to be more congruent with the representations of the standard or could avoid the self-focusing stimuli and circumstances. Avoidance effectively terminates the comparison process and hence all self-evaluation. Early research found solid support for these basic ideas (Carver, 1975; Gibbons & Wicklund, 1976; Wicklund & Duval, 1971). Duval and Wicklund (1972) also assumed that objective self-awareness would generally be an aversive state--the probability that at least one self-standard discrepancy exists is quite high. This was the first assumption to be revised. Later work found that self-awareness can be a positive state when people are congruent with their standards (Greenberg & Musham, 1981; Ickes, Wicklund, & Ferris, 1973). New Developments OSA theory has grown considerably from the original statement (Duval & Wicklund, 1972).
Our review focuses primarily on core theoretical developments since the last review (Gibbons, 1990). Other interesting aspects, such as interpersonal processes and interoceptive accuracy, have not changed significantly since previous reviews (Gibbons, 1990; Silvia & Gendolla, in press). We will also overlook the many clinical consequences of self-awareness; these have been exhaustively reviewed elsewhere (Pyszczynski et al., 1991; Wells & Matthews, 1994). Expectancies and Avoiding Self-Awareness Reducing a discrepancy or avoiding self-focus are equally effective ways of reducing the negative affect resulting from a discrepancy. When do people do one or the other? The original theory was not very specific about when approach versus avoidance would occur. Duval and Wicklund (1972) did, however, speculate that two factors should be relevant. The first was whether people felt they could effectively reduce the discrepancy; the second was whether the discrepancy was small or large. In their translation of OSA theory into a \"test--operate--test--exit\" (TOTE) feedback system, Carver, Blaney, and Scheier (1979a, 1979b) suggested that expectancies regarding outcome favorability determine approach versus avoidance behavior. When a self-standard discrepancy is recognized, people implicitly appraise their likelihood of reducing the discrepancy (cf. Bandura, 1977; Lazarus, 1966). If eventual discrepancy reduction is perceived as likely, people will try to achieve the standard. When expectations regarding improvement are unfavorable, however, people will try to avoid self-focus. Later research and theory (Duval, Duval, & Mulilis, 1992) refined Duval and Wicklund's (1972) speculations and the notion of outcome favorability. Expectancies are not simply and dichotomously favorable or unfavorable--they connote a person's rate of progress in discrepancy reduction relative to the magnitude of the discrepancy. More specifically, people will try to reduce a discrepancy to the extent they believe that their rate of progress is sufficient relative to the magnitude of the problem. Those who believe their rate of progress to be insufficient will avoid. To test this hypothesis, participants were told they were either highly (90%) or mildly (10%) discrepant from an experimental standard (Duval et al., 1992, Study 1). Participants were then given the opportunity to engage in a remedial task that was guaranteed by the experimenter to totally eliminate their deficiency provided that they worked on the task for 2 hr and 10 min. However, the rate at which working on the task would reduce the discrepancy was varied. In the low rate of progress conditions individuals were shown a performance curve indicating no progress until the last 20 min of working on the remedial task. During the last 20 min discrepancy was reduced to zero. In the constant rate of progress condition, participants were shown a performance curve in which progress toward discrepancy reduction began immediately and continued throughout such efforts with 30% of the deficiency being reduced in the first 30 min of activity and totally eliminated after 2 hr and 10 min. Results indicated that persons who believed the discrepancy to be mild and progress to be constant worked on the remedial task; those who perceived the problem to be mild but the rate of progress to be low avoided this activity. 
However, participants who thought that the discrepancy was substantial and the rate of progress only constant avoided working on the remedial task relative to those in the mild discrepancy and constant rate of progress condition. These results were conceptually replicated in a second experiment (Duval et al., 1992) using participants' time to complete the total 2 hr and 10 min of remedial work as the dependent measure. This pattern suggests that the rate of progress was sufficient relative to the magnitude of the discrepancy in the mild discrepancy and constant rate of progress condition; this in turn promoted approaching the problem. In the high discrepancy and constant rate of progress condition and the high and mild discrepancy and low rate of progress conditions, rate of progress was insufficient and promoted avoiding the problem. In a third experiment (Duval et al., 1992), people were again led to believe that they were either highly or mildly discrepant from a standard on an intellectual dimension and then given the opportunity to reduce that deficiency by working on a remedial task. A"} {"_id": "0341cd2fb49a56697edaf03b05734f44d0e41f89", "title": "An empirical study on dependence clusters for effort-aware fault-proneness prediction", "text": "A dependence cluster is a set of mutually inter-dependent program elements. Prior studies have found that large dependence clusters are prevalent in software systems. It has been suggested that dependence clusters have potentially harmful effects on software quality. However, little empirical evidence has been provided to support this claim. The study presented in this paper investigates the relationship between dependence clusters and software quality at the function level with a focus on effort-aware fault-proneness prediction. The investigation first analyzes whether or not larger dependence clusters tend to be more fault-prone. Second, it investigates whether the proportion of faulty functions inside dependence clusters is significantly different from the proportion of faulty functions outside dependence clusters. Third, it examines whether or not functions inside dependence clusters playing a more important role than others are more fault-prone. Finally, based on two groups of functions (i.e., functions inside and outside dependence clusters), the investigation considers a segmented fault-proneness prediction model. Our experimental results, based on five well-known open-source systems, show that (1) larger dependence clusters tend to be more fault-prone; (2) the proportion of faulty functions inside dependence clusters is significantly larger than the proportion of faulty functions outside dependence clusters; (3) functions inside dependence clusters that play more important roles are more fault-prone; (4) our segmented prediction model can significantly improve the effectiveness of effort-aware fault-proneness prediction in both ranking and classification scenarios. These findings help us better understand how dependence clusters influence software quality."} {"_id": "b0f16acfa4efce9c24100ec330b82fb8a28feeec", "title": "Reinforcement Learning in Continuous State and Action Spaces", "text": "Many traditional reinforcement-learning algorithms have been designed for problems with small finite state and action spaces. Learning in such discrete problems can be difficult, due to noise and delayed reinforcements.
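As a concrete instance of the value-function updates this chapter surveys, a gradient-based TD(0) update with linear function approximation, V(s) = w·φ(s), looks as follows (a generic textbook sketch, not code from the chapter):

```python
import numpy as np

def td0_linear(features, rewards, alpha=0.05, gamma=0.95):
    """TD(0) with linear value approximation V(s) = w . phi(s).

    `features[t]` is phi(s_t); `rewards[t]` is the reward on the
    transition s_t -> s_{t+1}. Returns the learned weight vector w.
    """
    w = np.zeros(features.shape[1])
    for t in range(len(rewards)):
        phi, phi_next = features[t], features[t + 1]
        delta = rewards[t] + gamma * w @ phi_next - w @ phi   # TD error
        w += alpha * delta * phi                              # gradient step
    return w

# Toy 3-state chain with one-hot features; reward 1 on reaching s2.
phi = np.eye(3)[[0, 1, 2, 2]]          # trajectory s0 -> s1 -> s2 (absorbing)
r = np.array([0.0, 1.0, 0.0])
print(td0_linear(phi, r))
```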
However, many real-world problems have continuous state or action spaces, which can make learning a good decision policy even more involved. In this chapter we discuss how to automatically find good decision policies in continuous domains. Because analytically computing a good policy from a continuous model can be infeasible, in this chapter we mainly focus on methods that explicitly update a representation of a value function, a policy or both. We discuss considerations in choosing an appropriate representation for these functions and discuss gradient-based and gradient-free ways to update the parameters. We show how to apply these methods to reinforcement-learning problems and discuss many specific algorithms. Amongst others, we cover gradient-based temporal-difference learning, evolutionary strategies, policy-gradient algorithms and (natural) actor-critic methods. We discuss the advantages of different approaches and compare the performance of a state-of-the-art actor-critic method and a state-of-the-art evolutionary strategy empirically."} {"_id": "2bf8acb0bd8b0fde644b91c5dd4bef2e8119e61e", "title": "Decision Support based on Bio-PEPA Modeling and Decision Tree Induction: A New Approach, Applied to a Tuberculosis Case Study", "text": "The problem of selecting determinant features generating appropriate model structure is a challenge in epidemiological modelling. Disease spread is highly complex, and experts develop their understanding of its dynamics over years. There is an increasing variety and volume of epidemiological data which adds to the potential confusion. The authors propose here to make use of that data to better understand disease systems. Decision tree techniques have been extensively used to extract pertinent information and improve decision making. In this paper, the authors propose an innovative structured approach combining decision tree induction with Bio-PEPA computational modelling, and illustrate the approach through application to tuberculosis. By using decision tree induction, the enhanced Bio-PEPA model shows considerable improvement over the initial model with regard to the simulated results matching observed data. The key finding is that the developer expresses a realistic predictive model using relevant features; considering this approach as decision support thus empowers the epidemiologist in policy decision making. KEYWORDS: Bio-PEPA Modelling, Data Mining, Decision Support, Decision Tree Induction, Epidemiology, Modelling and Simulation, Optimisation, Refinement, Tuberculosis"} {"_id": "e3ab7a95af2c0efc92f146f8667ff95e46da84f1", "title": "On Optimizing VLC Networks for Downlink Multi-User Transmission: A Survey", "text": "The evolving explosion in high data rate services and applications will soon require the use of the untapped, abundant, unregulated spectrum of visible light for communications to adequately meet the demands of fifth-generation (5G) mobile technologies. Radio-frequency (RF) spectrum is proving too scarce to cover the escalating demand for high data rate services. Visible light communication (VLC) has emerged as a great potential solution, either as a replacement for, or a complement to, existing RF networks, to support the projected traffic demands. Despite the many advantages of VLC networks, VLC faces challenges that must be resolved in the near future to achieve full standardization and to be integrated into future wireless systems.
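A building block that recurs in the capacity and optimization topics reviewed below is the line-of-sight channel gain of a Lambertian LED link, H(0) = (m+1)A/(2πd²)·cos^m(φ)·Ts(ψ)·g(ψ)·cos(ψ), with Lambertian order m = -ln 2 / ln(cos Φ½). A small sketch of this standard model from the VLC literature (the parameter values are illustrative assumptions):

```python
import math

def lambertian_gain(d, phi, psi, semi_angle_deg=60.0, area=1e-4,
                    Ts=1.0, g=1.0, fov_deg=70.0):
    """Line-of-sight DC channel gain of an LED -> photodiode VLC link.

    d: distance (m); phi: irradiance angle (rad); psi: incidence angle (rad);
    area: photodiode area (m^2); Ts, g: optical filter and concentrator gains.
    """
    if psi > math.radians(fov_deg):
        return 0.0                                    # outside the receiver FOV
    m = -math.log(2) / math.log(math.cos(math.radians(semi_angle_deg)))
    return ((m + 1) * area / (2 * math.pi * d * d) *
            math.cos(phi) ** m * Ts * g * math.cos(psi))

# LED 2.5 m above the desk, receiver directly underneath.
print(lambertian_gain(d=2.5, phi=0.0, psi=0.0))
```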
Here, we review the new, emerging research in the field of VLC networks and lay out the challenges, technological solutions, and future work predictions. Specifically, we first review the VLC channel capacity derivation and discuss the performance metrics and the associated variables; the optimization of VLC networks is also discussed, including resource and power allocation techniques, user-to-access-point (AP) association and APs-to-clustered-users association, AP coordination techniques, non-orthogonal multiple access (NOMA) VLC networks, simultaneous energy harvesting and information transmission using visible light, and security issues in VLC networks. Finally, we propose several open research problems to optimize the various VLC networks by maximizing either the sum rate, fairness, energy efficiency, secrecy rate, or harvested energy."} {"_id": "b1b5646683557b38468344dff09ae921a5a4b345", "title": "Comparison of CoAP and MQTT Performance Over Capillary Radios", "text": "The IoT protocols used in the application layer, namely the Constrained Application Protocol (CoAP) and Message Queue Telemetry Transport (MQTT), have dependencies on the transport layer. The choice of transport, Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP), in turn has an impact on Internet of Things (IoT) application-level performance, especially over a wireless medium. The motivation of this work is to look at the impact of the protocol stack on performance over two different wireless medium realizations, namely Bluetooth Low Energy and Wi-Fi. The use case studied is infrequent small reports sent from the sensor device to a central cloud storage over a last-mile radio access link. We find that while CoAP/UDP-based transport performs consistently better in terms of both latency and power consumption over both links, MQTT/TCP may also work when the use requirements allow for longer latency in exchange for better reliability. All in all, the full connectivity stack needs to be considered when designing an IoT deployment."} {"_id": "cd5b7d8fb4f8dc3872e773ec24460c9020da91ed", "title": "Design of a compact high power phased array for 5G FD-MIMO system at 29 GHz", "text": "This paper presents a new design concept of a beam-steerable, high-gain phased array antenna based on WR28 waveguide at 29 GHz for fifth-generation (5G) full-dimension multiple-input multiple-output (FD-MIMO) systems. The 8×8 planar phased array is fed by a three-dimensional beamformer to obtain volumetric beam scanning ranging from −60 to +60 degrees in both azimuth and elevation. The beamforming network (BFN) is designed using 16 sets of 8×8 Butler matrix beamformers to get 64 beam states, which control the horizontal and vertical angles. This is a new concept for designing a waveguide-based, high-power, three-dimensional beamformer for volumetric multibeam in the Ka band for 5G applications. The maximum gain of the phased array is 28.5 dBi over the 28.9 to 29.4 GHz band."} {"_id": "b4cbe50b8988e7c9c1a7b982bfb6c708bb3ce3e8", "title": "Development and evaluation of low cost game-based balance rehabilitation tool using the microsoft kinect sensor", "text": "The use of commercial video games as rehabilitation tools, such as the Nintendo WiiFit, has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance.
Additionally, users can figure out how to "cheat" inaccurate trackers by performing minimal movement (e.g. wrist twisting a Wiimote instead of a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low cost depth sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury."} {"_id": "6a2311d02aea97f7fe4e78c8bd2a53091364dc3b", "title": "Aesthetics and Entropy III. Aesthetic Measures", "text": "We examined a series of real-world, pictorial photographs with varying characteristics, along with their modification by noise addition and unsharp masking. As response metrics we used three different versions of the aesthetic measure originally proposed by Birkhoff. The first aesthetic measure, which has been used in other studies, and which we used in our previous work as well, showed a preference for the least complex of the images. It provided no justification for noise addition, but did reveal enhancement on unsharp masking. The optimum level of unsharp masking varied with the image, but was predictable from the individual image's GIF compressibility. We expect this result to be useful for guiding the processing of pictorial photographic imagery. The second aesthetic measure, that of informational aesthetics based on entropy alone, failed to provide useful discrimination among the images or the conditions of their modification. A third measure, derived from the concepts of entropy maximization, as well as the hypothesized preference of observers for "simpler", i.e., more compressible, images, yielded qualitatively the same results as the more traditional version of the measure. Differences among the photographs and the conditions of their modification were more clearly defined with this metric, however."} {"_id": "97aef787d63aef75e6f8055cdac3771f8649f21a", "title": "A Syllable-based Technique for Word Embeddings of Korean Words", "text": "Word embedding has become a fundamental component of many NLP tasks such as named entity recognition and machine translation. However, popular models that learn such embeddings are unaware of the morphology of words, so they are not directly applicable to highly agglutinative languages such as Korean. We propose a syllable-based learning model for Korean using a convolutional neural network, in which word representation is composed of trained syllable vectors. Our model successfully produces morphologically meaningful representations of Korean words compared to the original Skip-gram embeddings. The results also show that it is quite robust to the Out-of-Vocabulary problem."} {"_id": "d9d8aafe6856025f2c2b7c70f5e640e03b6bcd46", "title": "Anti-phishing based on automated individual white-list", "text": "In phishing and pharming, users could be easily tricked into submitting their usernames/passwords to fraudulent web sites whose appearances look similar to the genuine ones.
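The white-list defense this record goes on to describe can be pictured with a tiny sketch: each familiar login page is stored together with the IP addresses recorded as legitimate for it, and a submission to an unfamiliar page, or to a familiar page whose IP has changed, triggers a warning. The entries and logic below are this edit's illustration of the idea, not the AIWL implementation:

```python
# Hypothetical white-list of familiar Login User Interfaces (LUIs):
# domain -> IP addresses recorded as legitimate for that login page.
white_list = {
    "www.example-bank.com": {"93.184.216.34"},
}

def check_submission(domain: str, resolved_ip: str) -> str:
    """Warn unless the login page's domain and the IP it resolves to
    (as seen by the browser) are both on the white-list."""
    if domain not in white_list:
        return "WARN: unfamiliar login page - possible phishing"
    if resolved_ip not in white_list[domain]:
        return "WARN: IP changed for a familiar page - possible pharming"
    return "OK: familiar login page with its recorded IP"

print(check_submission("www.example-bank.com", "93.184.216.34"))
print(check_submission("www.example-bank.com", "203.0.113.9"))    # pharming case
print(check_submission("www.evil-look-alike.com", "198.51.100.7"))
```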
The traditional blacklist approach for anti-phishing is only partially effective due to its partial list of global phishing sites. In this paper, we present a novel anti-phishing approach named Automated Individual White-List (AIWL). AIWL automatically tries to maintain a white-list of all the user's familiar Login User Interfaces (LUIs) of web sites. Once a user tries to submit his/her confidential information to an LUI that is not in the white-list, AIWL will alert the user to the possible attack. Next, AIWL can efficiently defend against pharming attacks, because AIWL will alert the user when the legitimate IP is maliciously changed; the legitimate IP addresses, as one of the contents of the LUI, are recorded in the white-list, and our experiment shows that popular web sites' IP addresses are basically stable. Furthermore, we use a Naïve Bayesian classifier to automatically maintain the white-list in AIWL. Finally, we conclude through experiments that AIWL is an efficient automated tool specializing in detecting phishing and pharming."} {"_id": "34feeafb5ff7757b67cf5c46da0869ffb9655310", "title": "Perpetual environmentally powered sensor networks", "text": "Environmental energy is an attractive power source for low power wireless sensor networks. We present Prometheus, a system that intelligently manages energy transfer for perpetual operation without human intervention or servicing. Combining positive attributes of different energy storage elements and leveraging the intelligence of the microprocessor, we introduce an efficient multi-stage energy transfer system that reduces the common limitations of single energy storage systems to achieve near perpetual operation. We present our design choices, tradeoffs, circuit evaluations, performance analysis, and models. We discuss the relationships between system components and identify optimal hardware choices to meet an application's needs. Finally we present our implementation of a real system that uses solar energy to power Berkeley's Telos Mote. Our analysis predicts the system will operate for 43 years under 1% load, 4 years under 10% load, and 1 year under 100% load. Our implementation uses a two-stage storage system consisting of supercapacitors (primary buffer) and a lithium rechargeable battery (secondary buffer). The mote has full knowledge of power levels and intelligently manages energy transfer to maximize lifetime."} {"_id": "3689220c58f89e9e19cc0df51c0a573884486708", "title": "AmbiMax: Autonomous Energy Harvesting Platform for Multi-Supply Wireless Sensor Nodes", "text": "AmbiMax is an energy harvesting circuit and a supercapacitor-based energy storage system for wireless sensor nodes (WSN). Previous WSNs attempt to harvest energy from various sources, and some also use supercapacitors instead of batteries to address the battery aging problem. However, they either waste much available energy due to impedance mismatch, or they require active digital control that incurs overhead, or they work with only one specific type of source. AmbiMax addresses these problems by first performing maximum power point tracking (MPPT) autonomously, and then charging supercapacitors at maximum efficiency. Furthermore, AmbiMax is modular and enables composition of multiple energy harvesting sources including solar, wind, thermal, and vibration, each with a different optimal size.
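The MPPT step named above is classically implemented as hill-climbing, often called perturb and observe: nudge the operating voltage, keep going if the power rose, reverse if it fell. AmbiMax itself performs MPPT autonomously in analog hardware; the sketch below only illustrates the principle against a toy PV curve:

```python
def pv_power(v: float) -> float:
    """Toy PV curve: concave in v, with its maximum power point at 10 V."""
    return v * max(0.0, 5.0 - 0.25 * v)

def perturb_and_observe(v0: float = 5.0, step: float = 0.2, iters: int = 60):
    """Hill-climbing MPPT: perturb the voltage and follow the power gradient."""
    v = v0
    p_prev = pv_power(v)
    direction = +1
    for _ in range(iters):
        v += direction * step              # perturb the operating point
        p = pv_power(v)
        if p < p_prev:                     # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"settled near V = {v_mpp:.2f} V, P = {p_mpp:.2f} W")
```

The same loop keeps tracking if the curve drifts (clouds, temperature), which is why the technique is popular despite its steady-state oscillation around the maximum.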
Experimental results on a real WSN platform, Eco, show that AmbiMax successfully manages multiple power sources simultaneously and autonomously at several times the efficiency of the current state-of-the-art for WSNs."} {"_id": "4833d690f7e0a4020ef48c1a537dbb5b8b9b04c6", "title": "Integrated photovoltaic maximum power point tracking converter", "text": "A low-power, low-cost, highly efficient maximum power point tracker (MPPT) to be integrated into a photovoltaic (PV) panel is proposed. This can result in a 25% energy enhancement compared to a standard photovoltaic panel, while performing functions like battery voltage regulation and matching of the PV array with the load. Instead of using an externally connected MPPT, it is proposed to use an integrated MPPT converter as part of the PV panel. It is proposed that this integrated MPPT uses a simple controller in order to be cost-effective. Furthermore, the converter has to be very efficient, in order to transfer more energy to the load than a directly coupled system. This is achieved by using a simple soft-switched topology. A much higher conversion efficiency at lower cost will then result, making the MPPT an affordable solution for small PV energy systems."} {"_id": "61c1d66defb225eda47462d1bc393906772c9196", "title": "Hardware design experiences in ZebraNet", "text": "The enormous potential for wireless sensor networks to make a positive impact on our society has spawned a great deal of research on the topic, and this research is now producing environment-ready systems. Current technology limits coupled with widely varying application requirements lead to a diversity of hardware platforms for different portions of the design space. In addition, the unique energy and reliability constraints of a system that must function for months at a time without human intervention mean that demands on sensor network hardware are different from the demands on standard integrated circuits. This paper describes our experiences designing sensor nodes and low level software to control them.\n In the ZebraNet system we use GPS technology to record fine-grained position data in order to track long term animal migrations [14]. The ZebraNet hardware is composed of a 16-bit TI microcontroller, 4 Mbits of off-chip flash memory, a 900 MHz radio, and a low-power GPS chip. In this paper, we discuss our techniques for devising efficient power supplies for sensor networks, methods of managing the energy consumption of the nodes, and methods of managing the peripheral devices including the radio, flash, and sensors. We conclude by evaluating the design of the ZebraNet nodes and discussing how it can be improved. Our lessons learned in developing this hardware can be useful both in designing future sensor nodes and in using them in real systems."} {"_id": "576803b930ef44b79028048569e7ea321c1cecb0", "title": "Adaptive Computer-Based Training Increases on the Job Performance of X-Ray Screeners", "text": "Due to severe terrorist attacks in recent years, aviation security issues have moved into the focus of politicians as well as the general public. Effective screening of passenger bags using state-of-the-art X-ray screening systems is essential to prevent terrorist attacks. The performance of the screening process depends critically on the security personnel, because they decide whether bags are OK or whether they might contain a prohibited item.
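Detection performance in screening studies of this kind is often summarized with the signal-detection index d′, computed from hit and false-alarm rates; both the choice of metric and the numbers below are illustrative assumptions of this edit, not figures from the paper:

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A 0.5 correction is applied to each cell (log-linear rule) so the
    z-transform stays finite even with perfect scores.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Illustrative numbers: screener performance before vs. after training.
print("before:", round(d_prime(60, 40, 20, 80), 2))
print("after: ", round(d_prime(85, 15, 10, 90), 2))
```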
Screening X-ray images of passenger bags for dangerous and prohibited items effectively and efficiently is a demanding object recognition task. The effectiveness of computer-based training (CBT) on X-ray detection performance was assessed using computer-based tests and on-the-job performance measures using threat image projection (TIP). It was found that adaptive CBT is a powerful tool to increase detection performance and efficiency of screeners in X-ray image interpretation. Moreover, the results of training could be generalized to the real-life situation, as shown in the increased detection performance in TIP not only for trained items, but also for new (untrained) items. These results illustrate that CBT is a very useful tool to increase airport security from a human factors perspective."} {"_id": "6c1ccc66420136488cf34c1ffe707afefd8b00b9", "title": "Rotation-discriminating template matching based on Fourier coefficients of radial projections with robustness to scaling and partial occlusion", "text": "We consider brightness/contrast-invariant and rotation-discriminating template matching that searches an image to analyze A for a query image Q. We propose to use the complex coefficients of the discrete Fourier transform of the radial projections to compute new rotation-invariant local features. These coefficients can be efficiently obtained via FFT. We classify templates into \u201cstable\u201d and \u201cunstable\u201d ones and argue that any local feature-based template matching may fail to find unstable templates. We extract several stable sub-templates of Q and find them in A by comparing the features. The matchings of the sub-templates are combined using the Hough transform. As the features of A are computed only once, the algorithm can quickly find many different sub-templates in A, and it is suitable for: finding many query images in A; multi-scale searching and partial occlusion-robust template matching."} {"_id": "3370784dacf9df1e54384190dad40b817520ba3a", "title": "Haswell: The Fourth-Generation Intel Core Processor", "text": "Haswell, Intel's fourth-generation core processor architecture, delivers a range of client parts, a converged core for the client and server, and technologies used across many products. It uses an optimized version of Intel 22-nm process technology. Haswell provides enhancements in power-performance efficiency, power management, form factor and cost, core and uncore microarchitecture, and the core's instruction set."} {"_id": "146da74cd886acbd4a593a55f0caacefa99714a6", "title": "Working model of Self-driving car using Convolutional Neural Network, Raspberry Pi and Arduino", "text": "The evolution of Artificial Intelligence has served as the catalyst in the field of technology. We can now develop things that were once just imagination. One such creation is the self-driving car. The day has come when one can work or even sleep in the car and, without touching the steering wheel or accelerator, still reach the target destination safely. This paper proposes a working model of a self-driving car capable of driving from one location to another on different types of tracks, such as curved tracks, straight tracks, and straight tracks followed by curves. A camera module mounted on top of the car, along with a Raspberry Pi, sends images from the real world to a Convolutional Neural Network, which then predicts one of the following directions, i.e.,
right, left, forward, or stop; a signal is then sent from the Arduino to the controller of the remote-controlled car, and as a result the car moves in the desired direction without any human intervention."} {"_id": "6fd62c67b281956c3f67eb53fafaea83b2f0b4fb", "title": "Taking perspective into account in a communicative task", "text": "Previous neuroimaging studies of spatial perspective taking have tended not to activate the brain's mentalising network. We predicted that a task that requires the use of perspective taking in a communicative context would lead to the activation of mentalising regions. In the current task, participants followed auditory instructions to move objects in a set of shelves. A 2x2 factorial design was employed. In the Director factor, two directors (one female and one male) either stood behind or next to the shelves, or were replaced by symbolic cues. In the Object factor, participants needed to use the cues (position of the directors or symbolic cues) to select one of three possible objects, or only one object could be selected. Mere presence of the Directors was associated with activity in the superior dorsal medial prefrontal cortex (MPFC) and the superior/middle temporal sulci, extending into the extrastriate body area and the posterior superior temporal sulcus (pSTS), regions previously found to be responsive to human bodies and faces respectively. The interaction between the Director and Object factors, which requires participants to take into account the perspective of the director, led to additional recruitment of the superior dorsal MPFC, a region activated when thinking about dissimilar others' mental states, and the middle temporal gyri, extending into the left temporal pole. Our results show that using perspective taking in a communicative context, which requires participants to think not only about what the other person sees but also about his/her intentions, leads to the recruitment of superior dorsal MPFC and parts of the social brain network."} {"_id": "30b1447fbfdbd887a9c896a2b0d80177fc17c94e", "title": "3-Axis Magnetic Sensor Array System for Tracking Magnet's Position and Orientation", "text": "In medical diagnoses and treatments, e.g., endoscopy or dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we present a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as the excitation source. It requires no connecting wire or power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around it, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities at some pre-determined spatial points can be detected, and the magnet's position and orientation parameters can be calculated based on an appropriate algorithm. Here, we propose a real-time tracking system built from Honeywell HMC1053 3-axis magnetic sensors and a computer sampling circuit.
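The position and orientation computation described above can be sketched as a nonlinear least-squares fit of a magnetic dipole model to the sensor readings; the abstract does not give the exact algorithm, so the formulation, sensor layout, and noise-free synthetic readings below are illustrative assumptions.

```python
# Fit a magnetic dipole's position and moment to 3-axis field readings
# taken at known sensor locations (synthetic, noise-free data).
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4*pi), SI units

def dipole_field(pos, moment, sensor):
    r = sensor - pos
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_4PI * (3.0 * np.dot(moment, rhat) * rhat - moment) / d**3

def residuals(params, sensors, readings):
    pos, moment = params[:3], params[3:]
    model = np.array([dipole_field(pos, moment, s) for s in sensors])
    return (model - readings).ravel()

sensors = np.array([[x, y, 0.0] for x in (0.0, 0.1, 0.2)
                                for y in (0.0, 0.1, 0.2)])  # 3x3 grid, meters
truth = np.array([0.08, 0.12, 0.15, 0.0, 0.0, 0.5])         # position + moment
readings = np.array([dipole_field(truth[:3], truth[3:], s) for s in sensors])

fit = least_squares(residuals, x0=[0.1, 0.1, 0.1, 0.0, 0.0, 1.0],
                    args=(sensors, readings))
print("estimated position:", fit.x[:3])
```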
The results show that satisfactory tracking accuracy (average localization error is 3.3 mm) can be achieved using a sensor array with a sufficient number of 3-axis magnetic sensors."} {"_id": "b551feaa696da1ba44c31e081555e50358c6eca9", "title": "A Polymer-Based Capacitive Sensing Array for Normal and Shear Force Measurement", "text": "In this work, we present the development of a polymer-based capacitive sensing array. The proposed device is capable of measuring normal and shear forces, and can be easily realized by using micromachining techniques and flexible printed circuit board (FPCB) technologies. The sensing array consists of a polydimethylsiloxane (PDMS) structure and an FPCB. Each shear sensing element comprises four capacitive sensing cells arranged in a 2 \u00d7 2 array, and each capacitive sensing cell has two sensing electrodes and a common floating electrode. The sensing electrodes as well as the metal interconnect for signal scanning are implemented on the FPCB, while the floating electrodes are patterned on the PDMS structure. This design can effectively reduce the complexity of the capacitive structures, and thus makes the device highly manufacturable. The characteristics of the devices with different dimensions were measured and discussed. A scanning circuit was also designed and implemented. The measured maximum sensitivity is 1.67%/mN. The minimum resolvable force is 26 mN measured by the scanning circuit. The capacitance distributions induced by normal and shear forces were also successfully captured by the sensing array."} {"_id": "bb17e8858b0d3a5eba2bb91f45f4443d3e10b7cd", "title": "The Balanced Scorecard: Translating Strategy Into Action", "text": ""} {"_id": "01cac0a7c2a3240cb77a1e090694a104785f78f5", "title": "Workflow Automation: Overview and Research Issues", "text": "Workflow management systems, a relatively recent technology, are designed to make work more efficient, integrate heterogeneous application systems, and support interorganizational processes in electronic commerce applications. In this paper, we introduce the field of workflow automation, the subject of this special issue of Information Systems Frontiers. In the first part of the paper, we provide basic definitions and frameworks to aid understanding of workflow management technologies. In the remainder of the paper, we discuss technical and management research opportunities in this field and discuss the other contributions to the special issue."} {"_id": "4cdad9b059b5077fcce00fb8bcb4e381edd353bd", "title": "A Novel Anomaly Detection Scheme Based on Principal Component Classifier", "text": "This paper proposes a novel scheme that uses a robust principal component classifier in intrusion detection problems where the training data may be unsupervised. Assuming that anomalies can be treated as outliers, an intrusion predictive model is constructed from the major and minor principal components of the normal instances. A measure of the difference of an anomaly from the normal instance is the distance in the principal component space. The distance based on the major components that account for 50% of the total variation and the minor components whose eigenvalues are less than 0.20 is shown to work well.
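A small sketch of this principal-component scoring rule, assuming the 50%-of-variance and 0.20-eigenvalue thresholds quoted above and synthetic stand-in data in place of the intrusion records:

```python
# Score a sample by its standardized distance in the major principal
# components (top ~50% of variance) and in the minor components
# (eigenvalues below 0.20), following the scheme described above.
import numpy as np

def fit_pca(X):
    mu = X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X - mu, rowvar=False))
    order = np.argsort(eigvals)[::-1]            # descending eigenvalues
    return mu, eigvals[order], eigvecs[:, order]

def anomaly_scores(x, mu, eigvals, eigvecs):
    y = eigvecs.T @ (x - mu)                     # principal-component projections
    frac = np.cumsum(eigvals) / eigvals.sum()
    major = frac <= 0.50                         # components covering ~50% variance
    major[0] = True                              # keep at least the first component
    minor = eigvals < 0.20
    comp = y ** 2 / eigvals
    return comp[major].sum(), comp[minor].sum()  # compare each against a threshold

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 5))             # stand-in for normal instances
mu, lam, V = fit_pca(X_normal)
print(anomaly_scores(X_normal[0] + 5.0, mu, lam, V))
```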
The experiments with KDD Cup 1999 data demonstrate that the proposed method achieves 98.94% in recall and 97.89% in precision with a false alarm rate of 0.92%, and outperforms the nearest neighbor method, the density-based local outliers (LOF) approach, and the outlier detection algorithm based on the Canberra metric."} {"_id": "ca20f466791f4b051ef3b8d2bf63789d33c562c9", "title": "CredFinder: A real-time tweets credibility assessing system", "text": "Lately, Twitter has grown to be one of the most favored ways of disseminating information to people around the globe. However, the main challenge faced by the users is how to assess the credibility of information posted through this social network in real time. In this paper, we present a real-time content credibility assessment system named CredFinder, which is capable of measuring the trustworthiness of information through user analysis and content analysis. The proposed system is capable of providing a credibility score for each user's tweets. Hence, it provides users with the opportunity to judge the credibility of information faster. CredFinder consists of two parts: a frontend in the form of an extension to the Chrome browser that collects tweets in real time from a Twitter search or a user-timeline page and a backend that analyzes the collected tweets and assesses their credibility."} {"_id": "9923edf7815c720aa0d6d58a28332806ae91b224", "title": "Design of an Ultra-Wideband Pulse Generator Based on Avalanche Transistor", "text": "Based on the avalanche effect of the avalanche transistor, an ultra-wideband nanosecond pulse circuit has been designed whose frequency, pulse width, and amplitude are tunable. In this paper, the principle, structure, and selection of component parameters in the circuit are analyzed in detail. The circuit generates periodic negative pulses with a full width of 890 ps and an amplitude of -11.2 V in simulation. In experiments, by setting up the circuit and tuning parameters properly, an ultra-wideband pulse with a width of 2.131 ns and an amplitude of -9.23 V is achieved. With features such as a simple structure, stable and reliable performance, and low cost, this pulse generator is applicable to ultra-wideband wireless communication systems."} {"_id": "33b424698c2b7602dcb579513c34fe20cc3ae669", "title": "A 0.5ps 1.4mW 50MS/s Nyquist bandwidth time amplifier based two-step flash-\u0394\u03a3 time-to-digital converter", "text": "We propose a 50-MS/s two-step flash-\u0394\u03a3 time-to-digital converter (TDC) using stable time amplifiers (TAs). The TDC demonstrates low levels of shaped quantization noise. The system is simulated in 40-nm CMOS and consumes 1.3 mA from a 1.1 V supply. The bandwidth is broadened to the Nyquist rate. At frequencies below 25 MHz, the integrated TDC error is as low as 143 fs (rms), which is equal to an equivalent TDC resolution of 0.5 ps."} {"_id": "7fabd0639750563e0fb09df341e0e62ef4d6e1fb", "title": "Brain-computer interfaces: communication and restoration of movement in paralysis.", "text": "The review describes the status of brain-computer or brain-machine interface research. We focus on non-invasive brain-computer interfaces (BCIs) and their clinical utility for direct brain communication in paralysis and motor restoration in stroke.
A large gap between the promises of invasive animal and human BCI preparations and the clinical reality characterizes the literature: while intact monkeys learn to execute more or less complex upper limb movements with spike patterns from motor brain regions alone, without concomitant peripheral motor activity, usually after extensive training, clinical applications in human diseases such as amyotrophic lateral sclerosis and paralysis from stroke or spinal cord lesions show only limited success, with the exception of verbal communication in paralysed and locked-in patients. BCIs based on electroencephalographic potentials or oscillations are ready to undergo large clinical studies and commercial production as an adjunct or a major assisted communication device for paralysed and locked-in patients. However, attempts to train completely locked-in patients with BCI communication after entering the complete locked-in state with no remaining eye movement failed. We propose that a lack of contingencies between goal-directed thoughts and intentions may be at the heart of this problem. Experiments with chronically curarized rats support our hypothesis; operant conditioning and voluntary control of autonomic physiological functions turned out to be impossible in this preparation. In addition to assisted communication, BCIs consisting of operant learning of EEG slow cortical potentials and sensorimotor rhythm were demonstrated to be successful in drug-resistant focal epilepsy and attention deficit disorder. First studies of non-invasive BCIs using sensorimotor rhythm of the EEG and MEG in restoration of paralysed hand movements in chronic stroke and single cases of high spinal cord lesions show some promise, but need extensive evaluation in well-controlled experiments. Invasive BMIs based on neuronal spike patterns, local field potentials or electrocorticogram may constitute the strategy of choice in severe cases of stroke and spinal cord paralysis. Future directions of BCI research should include the regulation of brain metabolism and blood flow and electrical and magnetic stimulation of the human brain (invasive and non-invasive). A series of studies using BOLD response regulation with functional magnetic resonance imaging (fMRI) and near infrared spectroscopy demonstrated a tight correlation between voluntary changes in brain metabolism and behaviour."} {"_id": "a31b795f8defb59889df8f13321e057192d64f73", "title": "iCONCUR: informed consent for clinical data and bio-sample use for research", "text": "Background\nImplementation of patient preferences for use of electronic health records for research has been traditionally limited to identifiable data. Tiered e-consent for use of de-identified data has traditionally been deemed unnecessary or impractical for implementation in clinical settings.\n\n\nMethods\nWe developed a web-based tiered informed consent tool called informed consent for clinical data and bio-sample use for research (iCONCUR) that honors granular patient preferences for use of electronic health record data in research. We piloted this tool in 4 outpatient clinics of an academic medical center.\n\n\nResults\nOf patients offered access to iCONCUR, 394 agreed to participate in this study, among whom 126 patients accessed the website to modify their records according to data category and data recipient. The majority consented to share most of their data and specimens with researchers.
Willingness to share was greater among participants from a Human Immunodeficiency Virus (HIV) clinic than among those from internal medicine clinics. The number of items declined was higher for for-profit institution recipients. Overall, participants were most willing to share demographics and body measurements and least willing to share family history and financial data. Participants indicated that having granular choices for data sharing was appropriate, and that they liked being informed about who was using their data for what purposes, as well as about outcomes of the research.\n\n\nConclusion\nThis study suggests that a tiered electronic informed consent system is a workable solution that respects patient preferences, increases satisfaction, and does not significantly affect participation in research."} {"_id": "7aa1866adbc2b4758c04d8484e5bf22e4cce9cc9", "title": "Automotive radome design - fishnet structure for 79 GHz", "text": "Metamaterials are considered an option for quasi-optical matching of automotive radar radomes, lowering transmission loss and minimizing reflections. This paper shows a fishnet structure design for the 79 GHz band which is suitable for this type of matching and which exhibits a negative index of refraction. The measured transmission loss is 0.9 dB at 79 GHz. A tolerance study concerning copper plating, substrate permittivity, oblique incidence, and polarization is shown. Quasi-optical measurements were done in the range of 60\u201390 GHz and agree with simulated results."} {"_id": "b95dd9e28f2126aac27da8b0378d3b9487d8b73d", "title": "Avatar-independent scripting for real-time gesture animation", "text": "When animation of a humanoid figure is to be generated at run-time, instead of by replaying precomposed motion clips, some method is required of specifying the avatar\u2019s movements in a form from which the required motion data can be automatically generated. This form must be of a more abstract nature than raw motion data: ideally, it should be independent of the particular avatar\u2019s proportions, and both writable by hand and suitable for automatic generation from higher-level descriptions of the required actions. We describe here the development and implementation of such a scripting language for the particular area of sign languages of the deaf, called SiGML (Signing Gesture Markup Language), based on the existing HamNoSys notation for sign languages. We conclude by suggesting how this work may be extended to more general animation for interactive virtual reality applications."} {"_id": "267718d3b9399a5eab90a1b1701e78369696e8fe", "title": "Analyzing multiple configurations of a C program", "text": "Preprocessor conditionals are heavily used in C programs since they allow the source code to be configured for different platforms or capabilities. However, preprocessor conditionals, as well as other preprocessor directives, are not part of the C language. They need to be evaluated and removed, and so a single configuration selected, before parsing can take place. Most analysis and program understanding tools run on this preprocessed version of the code so their results are based on a single configuration. This paper describes the approach of CRefactory, a refactoring tool for C programs. A refactoring tool cannot consider only a single configuration: changing the code for one configuration may break the rest of the code. CRefactory analyses the program for all possible configurations simultaneously.
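The configuration space such a tool must cover can be illustrated with a toy enumerator that collects the macros tested by conditional directives and generates every truth assignment; this is only a sketch of the idea (handling #ifdef, #ifndef, and defined() tests), whereas the tool itself reasons about all configurations simultaneously rather than expanding this exponential set.

```python
# Enumerate all preprocessor configurations of a C snippet by assigning
# every combination of defined/undefined to the conditional macros.
import itertools
import re

def conditional_macros(source):
    macros = set(re.findall(r"#\s*ifn?def\s+(\w+)", source))
    for cond in re.findall(r"#\s*(?:el)?if\s+(.+)", source):
        macros.update(re.findall(r"defined\s*\(?\s*(\w+)", cond))
    return macros

def configurations(source):
    macros = sorted(conditional_macros(source))
    for values in itertools.product([False, True], repeat=len(macros)):
        yield dict(zip(macros, values))

code = """
#ifdef WIN32
const char *eol = "\\r\\n";
#elif defined(UNIX)
const char *eol = "\\n";
#endif
"""
for cfg in configurations(code):
    print(cfg)   # four configurations for the two macros
```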
CRefactory also preserves preprocessor directives and integrates them into its internal representations. The paper also presents metrics from two case studies to show that CRefactory's program representation is practical."} {"_id": "63abfb7d2d35d60a5dc2cc884251f9fee5d46963", "title": "The contribution of electrophysiology to functional connectivity mapping", "text": "A powerful way to probe brain function is to assess the relationship between simultaneous changes in activity across different parts of the brain. In recent years, the temporal activity correlation between brain areas has frequently been taken as a measure of their functional connections. Evaluating 'functional connectivity' in this way is particularly popular in the fMRI community, but has also drawn interest among electrophysiologists. Like hemodynamic fluctuations observed with fMRI, electrophysiological signals display significant temporal fluctuations, even in the absence of a stimulus. These neural fluctuations exhibit a correlational structure over a wide range of spatial and temporal scales. Initial evidence suggests that certain aspects of this correlational structure bear a high correspondence to so-called functional networks defined using fMRI. The growing family of methods to study activity covariation, combined with the diverse neural mechanisms that contribute to the spontaneous fluctuations, has somewhat blurred the operational concept of functional connectivity. What is clear is that spontaneous activity is a conspicuous, energy-consuming feature of the brain. Given its prominence and its practical applications for the functional connectivity mapping of brain networks, it is of increasing importance that we understand its neural origins as well as its contribution to normal brain function."} {"_id": "1a1f0d0abcbdaa2d487f0a46dba1ca097774012d", "title": "Practical Backscatter Communication Systems for Battery-Free Internet of Things: A Tutorial and Survey of Recent Research", "text": "Backscatter presents an emerging ultralow-power wireless communication paradigm. The ability to offer submilliwatt power consumption makes it a competitive core technology for Internet of Things (IoT) applications. In this article, we provide a tutorial of backscatter communication from the signal processing perspective as well as a survey of the recent research activities in this domain, primarily focusing on bistatic backscatter systems. We also discuss the unique real-world applications empowered by backscatter communication and identify open questions in this domain. We believe this article will shed light on the low-power wireless connectivity design toward building and deploying IoT services in the wild."} {"_id": "6fd78d20e6f51d872f07cde9350f4d31078ff723", "title": "OF A TUNABLE STIFFNESS COMPOSITE LEG FOR DYNAMIC LOCOMOTION", "text": "Passively compliant legs have been instrumental in the development of dynamically running legged robots. Having properly tuned leg springs is essential for stable, robust and energetically efficient running at high speeds. Recent simulation studies indicate that having variable stiffness legs, as animals do, can significantly improve the speed and stability of these robots in changing environmental conditions. However, to date, the mechanical complexities of designing usefully robust tunable passive compliance into legs have precluded their implementation on practical running robots.
This paper describes a new design of a \u201cstructurally controlled variable stiffness\u201d leg for a hexapedal running robot. This new leg improves on previous designs\u2019 performance and enables runtime modification of leg stiffness in a small, lightweight, and rugged package. Modeling and leg test experiments are presented that characterize the improvement in stiffness range, energy storage, and dynamic coupling properties of these legs. We conclude that this variable stiffness leg design is now ready for implementation and testing on a dynamical running robot."} {"_id": "d1c4907b1b225f61059915a06a3726706860c71e", "title": "An in-depth study of the promises and perils of mining GitHub", "text": "With over 10 million git repositories, GitHub is becoming one of the most important sources of software artifacts on the Internet. Researchers mine the information stored in GitHub\u2019s event logs to understand how its users employ the site to collaborate on software, but so far there have been no studies describing the quality and properties of the available GitHub data. We document the results of an empirical study aimed at understanding the characteristics of the repositories and users in GitHub; we see how users take advantage of GitHub\u2019s main features and how their activity is tracked on GitHub and in related datasets, pointing out misalignments between the real and the mined data. Our results indicate that while GitHub is a rich source of data on software development, mining GitHub for research purposes should take various potential perils into consideration. For example, we show that the majority of the projects are personal and inactive, and that almost 40% of all pull requests do not appear as merged even though they were. Also, approximately half of GitHub\u2019s registered users do not have public activity, while the activity of GitHub users in repositories is not always easy to pinpoint. We use our identified perils to see if they can pose validity threats; we review selected papers from the MSR 2014 Mining Challenge and see if there are potential impacts to consider. We provide a set of recommendations for software engineering researchers on how to approach the data in GitHub."} {"_id": "9c1ebae0eea2aa27fed13c71dc98dc0f67dd52a0", "title": "Unsupervised Segmentation of 3D Medical Images Based on Clustering and Deep Representation Learning", "text": "This paper presents a novel unsupervised segmentation method for 3D medical images. Convolutional neural networks (CNNs) have brought significant advances in image segmentation. However, most of the recent methods rely on supervised learning, which requires large amounts of manually annotated data. Thus, it is challenging for these methods to cope with the growing volume of medical images. This paper proposes a unified approach to unsupervised deep representation learning and clustering for segmentation. Our proposed method consists of two phases. In the first phase, we learn deep feature representations of training patches from a target image using joint unsupervised learning (JULE) that alternately clusters representations generated by a CNN and updates the CNN parameters using cluster labels as supervisory signals. We extend JULE to 3D medical images by utilizing 3D convolutions throughout the CNN architecture. In the second phase, we apply k-means to the deep representations from the trained CNN and then project cluster labels to the target image in order to obtain the fully segmented image.
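A compact sketch of this second phase, with a stand-in feature extractor in place of the trained CNN and scikit-learn's k-means (the patch size, cluster count, and toy volume are assumptions):

```python
# Cluster per-patch representations with k-means and paint the cluster
# labels back into the volume to obtain a segmentation.
import numpy as np
from sklearn.cluster import KMeans

def segment(volume, extract_features, patch=8, k=3):
    coords, feats = [], []
    for z in range(0, volume.shape[0] - patch + 1, patch):
        for y in range(0, volume.shape[1] - patch + 1, patch):
            for x in range(0, volume.shape[2] - patch + 1, patch):
                coords.append((z, y, x))
                feats.append(extract_features(
                    volume[z:z + patch, y:y + patch, x:x + patch]))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(np.array(feats))
    seg = np.zeros(volume.shape, dtype=int)
    for (z, y, x), lab in zip(coords, labels):
        seg[z:z + patch, y:y + patch, x:x + patch] = lab
    return seg

vol = np.random.rand(32, 32, 32)                     # toy 3D image
print(np.unique(segment(vol, lambda p: p.ravel())))  # raw patch as "feature"
```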
We evaluated our methods on three images of lung cancer specimens scanned with micro-computed tomography (micro-CT). The automatic segmentation of pathological regions in micro-CT could further contribute to the pathological examination process. Hence, we aim to automatically divide each image into the regions of invasive carcinoma, noninvasive carcinoma, and normal tissue. Our experiments show the potential of unsupervised deep representation learning for medical image segmentation."} {"_id": "27a693acee22752fa66f442b8d52b7f3c83134c7", "title": "Optimal Multiserver Configuration for Profit Maximization in Cloud Computing", "text": "As cloud computing becomes more and more popular, understanding the economics of cloud computing becomes critically important. To maximize the profit, a service provider should understand both service charges and business costs, and how they are determined by the characteristics of the applications and the configuration of a multiserver system. The problem of optimal multiserver configuration for profit maximization in a cloud computing environment is studied. Our pricing model takes into consideration such factors as the amount of a service, the workload of an application environment, the configuration of a multiserver system, the service-level agreement, the satisfaction of a consumer, the quality of a service, the penalty of a low-quality service, the cost of renting, the cost of energy consumption, and a service provider's margin and profit. Our approach is to treat a multiserver system as an M/M/m queuing model, such that our optimization problem can be formulated and solved analytically. Two server speed and power consumption models are considered, namely, the idle-speed model and the constant-speed model. The probability density function of the waiting time of a newly arrived service request is derived. The expected service charge to a service request is calculated. The expected net business gain in one unit of time is obtained. Numerical calculations of the optimal server size and the optimal server speed are demonstrated."} {"_id": "e8217edd7376c26c714757a362724f81f3afbee0", "title": "Overview on Additive Manufacturing Technologies", "text": "This paper provides an overview on the main additive manufacturing/3D printing technologies suitable for many satellite applications and, in particular, radio-frequency components. In fact, nowadays they have become capable of producing complex net-shaped or nearly net-shaped parts in materials that can be directly used as functional parts, including polymers, metals, ceramics, and composites. These technologies represent the solution for low-volume, high-value, and highly complex parts and products."} {"_id": "738f4d2137fc767b1802963b5e45a2216c27b77c", "title": "Churn Prediction : Does Technology Matter ?", "text": "The aim of this paper is to identify the most suitable model for churn prediction based on three different techniques. The paper identifies the variables that affect churn with reference to customer complaints data and provides a comparative analysis of neural networks, regression trees, and regression in their capabilities of predicting customer churn.
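A sketch of such a three-way comparison on synthetic stand-in data (the paper's customer-complaints features are not public, and logistic regression is assumed here as the "regression" baseline):

```python
# Compare a neural network, a decision tree, and logistic regression on a
# synthetic, imbalanced binary churn-like task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=12, weights=[0.8],
                           random_state=0)   # churners as the minority class
models = {
    "regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```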
"} {"_id": "ae341ad66824e1f30a2675fd50742b97794c8f57", "title": "Learning from imbalanced data sets with boosting and data generation: the DataBoost-IM approach", "text": "Learning from imbalanced data sets, where the number of examples of one (majority) class is much higher than the others, presents an important challenge to the machine learning community. Traditional machine learning algorithms may be biased towards the majority class, thus producing poor predictive accuracy over the minority class. In this paper, we describe a new approach that combines boosting, an ensemble-based learning algorithm, with data generation to improve the predictive power of classifiers against imbalanced data sets consisting of two classes. In the DataBoost-IM method, hard examples from both the majority and minority classes are identified during execution of the boosting algorithm. Subsequently, the hard examples are used to separately generate synthetic examples for the majority and minority classes. The synthetic data are then added to the original training set, and the class distribution and the total weights of the different classes in the new training set are rebalanced. The DataBoost-IM method was evaluated, in terms of the F-measures, G-mean and overall accuracy, against seventeen highly and moderately imbalanced data sets using decision trees as base classifiers. Our results are promising and show that the DataBoost-IM method compares well with a base classifier, a standard benchmarking boosting algorithm and three advanced boosting-based algorithms for imbalanced data sets. Results indicate that our approach does not sacrifice one class in favor of the other, but produces high prediction accuracy for both minority and majority classes."} {"_id": "090a6772a1d69f07bfe7e89f99934294a0dac1b9", "title": "Two Modifications of CNN", "text": ""} {"_id": "0df013671e9e901a9126deb4957e22e3d937b1a5", "title": "Prototype and Feature Selection by Sampling and Random Mutation Hill Climbing Algorithms", "text": "With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term \u201cprototypes\u201d refers to the reference instances used in a nearest neighbor computation \u2014 the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously.
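A minimal sketch of random mutation hill climbing for prototype selection, assuming a 1-NN wrapper evaluation on a held-out split; the dataset and iteration budget are illustrative:

```python
# Keep a bit vector over training instances, flip one bit at a time, and
# accept the flip only if held-out 1-NN accuracy does not drop.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def rmhc_prototypes(Xtr, ytr, Xval, yval, n_iters=300, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(len(Xtr)) < 0.1            # small random initial subset
    def score(m):
        if m.sum() == 0:
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=1).fit(Xtr[m], ytr[m])
        return knn.score(Xval, yval)
    best = score(mask)
    for _ in range(n_iters):
        i = rng.integers(len(Xtr))
        mask[i] = ~mask[i]                       # mutate one membership bit
        new = score(mask)
        if new >= best:
            best = new
        else:
            mask[i] = ~mask[i]                   # revert a harmful mutation
    return mask, best

X, y = load_iris(return_X_y=True)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)
mask, acc = rmhc_prototypes(Xtr, ytr, Xval, yval)
print(f"{mask.sum()} prototypes, validation accuracy {acc:.3f}")
```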
Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes."} {"_id": "32f7aef5c13c715b00b966eaaba5dd2fe35df1a4", "title": "Bayesian network classifiers for identifying the slope of the customer lifecycle of long-life customers", "text": "Undoubtedly, Customer Relationship Management (CRM) has gained importance from the observation that acquiring a new customer is several times more costly than retaining existing customers and selling them additional products. Consequently, marketing practitioners are currently often focusing on retaining customers for as long as possible. However, recent findings in relationship marketing literature have shown that large differences exist within the group of long-life customers in terms of spending and spending evolution. Therefore, this paper focuses on introducing a measure of a customer\u2019s future spending evolution that might improve relationship marketing decision making. In this study, from a marketing point of view, we focus on predicting whether a newly acquired customer will increase or decrease his/her future spending from initial purchase information. This is essentially a classification task. The main contribution of this study lies in comparing and evaluating several Bayesian network classifiers with statistical and other artificial intelligence techniques for the purpose of classifying customers in the binary classification problem at hand. Certain Bayesian network classifiers have been recently proposed in the artificial"} {"_id": "25519ce6a924f5890180eacfa6e66203048f5dd1", "title": "Big Data: New Tricks for Econometrics", "text": "Nowadays computers are in the middle of most economic transactions. These \u201ccomputer-mediated transactions\u201d generate huge amounts of data, and new tools can be used to manipulate and analyze this data. This essay offers a brief introduction to some of these tools and methods. Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well but there are issues unique to big data sets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large data sets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning and so on may allow for more effective ways to model complex relationships. In this essay I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists.
In fact, my standard advice to graduate students these days is \u201cgo to the computer science department and take a class in machine learning.\u201d There have been very fruitful collaborations between computer scientists and statisticians in the last decade or so, and I expect collaborations between computer scientists and econometricians will also be productive in the future. 1. Tools to manipulate big data: Economists have historically dealt with data that fits in a spreadsheet, but that is changing as new, more detailed data becomes available; see Einav and Levin [2013] for several examples and discussion. If you have more than a million or so rows in a spreadsheet, you probably want to store it in a relational database, such as MySQL. Relational databases offer a flexible way to store, manipulate and retrieve data using a Structured Query Language (SQL), which is easy to learn and very useful for dealing with medium-sized data sets. However, if you have several gigabytes of data or several million observations, standard relational databases become unwieldy. Databases to manage data of this size are generically known as \u201cNoSQL\u201d databases. The term is used rather loosely, but is sometimes interpreted as meaning \u201cnot only SQL.\u201d NoSQL databases are more primitive than SQL databases in terms of data manipulation capabilities but can handle larger amounts of data. Due to the rise of computer mediated transactions, many companies have found it necessary to develop systems to process billions of transactions per"} {"_id": "634aa5d051512ee4b831e6210a234fb2d9b9d623", "title": "Cooperative Intersection Management: A Survey", "text": "Intersection management is one of the most challenging problems within the transport system. Traffic light-based methods have been efficient but are not able to deal with the growing mobility and social challenges. On the other hand, the advancements of automation and communications have enabled cooperative intersection management, where road users, infrastructure, and traffic control centers are able to communicate and coordinate the traffic safely and efficiently. Major techniques and solutions for cooperative intersections are surveyed in this paper for both signalized and nonsignalized intersections, with a focus on the latter. Cooperative methods, including time slots and space reservation, trajectory planning, and virtual traffic lights, are discussed in detail. Vehicle collision warning and avoidance methods are discussed to deal with uncertainties. Concerning vulnerable road users, pedestrian collision avoidance methods are discussed. In addition, an introduction to major projects related to cooperative intersection management is presented. A further discussion of the presented works is given with highlights of future research topics. This paper serves as a comprehensive survey of the field, aiming at stimulating new methods and accelerating the advancement of automated and cooperative intersections."} {"_id": "593b0a74211460f424d471ab7155a0a05c5fd342", "title": "Sequential Mining: Patterns and Algorithms Analysis", "text": "This paper presents and analyzes the common existing sequential pattern mining algorithms. It classifies sequential pattern-mining algorithms into five broad classes.
The first is based on Apriori-style algorithms, the second on breadth-first-search strategies, the third on depth-first-search strategies, the fourth on closed sequential pattern algorithms, and the fifth on incremental pattern mining algorithms. Finally, a comparative analysis is made on the basis of the key features supported by the various algorithms. This study enhances the understanding of the approaches of sequential"} {"_id": "b5e04a538ecb428c4cfef9784fe1f7d1c193cd1a", "title": "Microstrip-fed circular substrate integrated waveguide (SIW) cavity resonator and antenna", "text": "Substrate integrated waveguide (SIW) cavity resonators are among the emerging group of SIW-based circuit components that are gaining popularity and increasingly employed in integrated microwave and mm-wave circuits. The SIW cavities offer a significantly enhanced performance in comparison to the previously available planar microwave resonators. The high quality factor of the waveguide-based cavity resonators enables designing microwave oscillators with very low phase noise as well as compact high-gain antennas [1\u20134]. The SIW-cavity-based antennas also show much promise for implementation of low-cost lightweight fully-integrated high-gain antennas that find application in ultra-light communication satellites, low payload spacecrafts, high-frequency radar, and sensors. In this paper, a circular SIW cavity resonator, which is fed by a microstrip and via probe, is presented. The microstrip feed is optimized to achieve a return loss of better than 20 dB from both simulation and measurements. A resonance frequency of 16.79 GHz and quality factor of 76.3 are determined from vector network analyzer (VNA) measurements. Due to the folded slot on the upper conductor layer, the resonator is an efficient cavity backed antenna. A maximum gain of over 7 dB is measured in an anechoic chamber at the resonance frequency of 16.79 GHz."} {"_id": "35de4258058f02a31cd0a0882b5bcc14d7a06697", "title": "Quality driven web services composition", "text": "The process-driven composition of Web services is emerging as a promising approach to integrate business applications within and across organizational boundaries. In this approach, individual Web services are federated into composite Web services whose business logic is expressed as a process model. The tasks of this process model are essentially invocations to functionalities offered by the underlying component services. Usually, several component services are able to execute a given task, although with different levels of pricing and quality. In this paper, we advocate that the selection of component services should be carried out during the execution of a composite service, rather than at design-time. In addition, this selection should consider multiple criteria (e.g., price, duration, reliability), and it should take into account global constraints and preferences set by the user (e.g., budget constraints). Accordingly, the paper proposes a global planning approach to optimally select component services during the execution of a composite service. Service selection is formulated as an optimization problem which can be solved using efficient linear programming methods.
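The runtime selection step can be illustrated as a small integer linear program: choose exactly one candidate per task so that total price is minimized under a global duration budget. The numbers below are made up, and the paper's full model covers further QoS criteria such as reliability.

```python
# Toy service selection: one binary variable per (task, candidate) pair.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

price    = np.array([3.0, 5.0, 2.0, 4.0, 1.0, 6.0])  # 2 tasks x 3 candidates
duration = np.array([8.0, 2.0, 9.0, 3.0, 9.0, 1.0])
tasks = [[0, 1, 2], [3, 4, 5]]                        # candidate indices per task

constraints = [LinearConstraint(
                   np.isin(np.arange(6), t).astype(float).reshape(1, -1), 1, 1)
               for t in tasks]                        # exactly one per task
constraints.append(LinearConstraint(duration.reshape(1, -1), 0, 10.0))  # budget

res = milp(c=price, integrality=np.ones(6), bounds=Bounds(0, 1),
           constraints=constraints)
chosen = np.flatnonzero(res.x > 0.5)
print("selected services:", chosen, "total price:", price[chosen].sum())
```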
Experimental results show that this global planning approach outperforms approaches in which the component services are selected individually for each task in a composite service."} {"_id": "87d696d7dce4fed554430f100d0f2aaee9f73bc5", "title": "Navigating the massive world of reddit: using backbone networks to map user interests in social media", "text": "In the massive online worlds of social media, users frequently rely on organizing themselves around specific topics of interest to find and engage with like-minded people. However, navigating these massive worlds and finding topics of specific interest often proves difficult because the worlds are mostly organized haphazardly, leaving users to find relevant interests by word of mouth or using a basic search feature. Here, we report on a method using the backbone of a network to create a map of the primary topics of interest in any social network. To demonstrate the method, we build an interest map for the social news web site reddit and show how such a map could be used to navigate a social media world. Moreover, we analyze the network properties of the reddit social network and find that it has a scale-free, small-world, and modular community structure, much like other online social networks such as Facebook and Twitter. We suggest that the integration of interest maps into popular social media platforms will assist users in organizing themselves into more specific interest groups, which will help alleviate the overcrowding effect often observed in large online communities."} {"_id": "1d145b63fd065c562ed2fecb3f34643fc9653b60", "title": "Examining the Technology Acceptance Model Using Physician Acceptance of Telemedicine Technology", "text": "The rapid growth of investment in information technology (IT) by organizations worldwide has made user acceptance an increasingly critical technology implementation and management issue. While such acceptance has received fairly extensive attention from previous research, additional efforts are needed to examine or validate existing research results, particularly those involving different technologies, user populations, and/or organizational contexts. In response, this paper reports research that examined the applicability of the Technology Acceptance Model (TAM) in explaining physicians' decisions to accept telemedicine technology in the health-care context. The technology, the user group, and the organizational context are all new to IT acceptance/adoption research. The study also addressed a pragmatic technology management need resulting from millions of dollars invested by healthcare organizations in developing and implementing telemedicine programs in recent years. The model's overall fit, explanatory power, and the individual causal links that it postulates were evaluated by examining the acceptance of telemedicine technology among physicians practicing at public tertiary hospitals in Hong Kong. Our results suggested that TAM was able to provide a reasonable depiction of physicians' intention to use telemedicine technology. Perceived usefulness was found to be a significant determinant of attitude and intention but perceived ease of use was not.
The relatively low R-square of the model suggests both the limitations of the parsimonious model and the need for incorporating additional factors or integrating with other IT acceptance models in order to improve its specificity and explanatory utility in a health-care context. Based on the study findings, implications for user technology acceptance research and telemedicine management are discussed."} {"_id": "d7ab41adebaec9272c2797512a021482a594d040", "title": "DevOps for Developers", "text": "ed descriptions of machines by using a DSL while enjoying the full power of scripting languages (in both Puppet and Chef, you can describe behavior in the Ruby language (a dynamic, general-purpose object-oriented programming language), see http://www.ruby-lang.org/en/). Declarative descriptions of target behavior (i.e., what the system must be). Thus, running the scripts will always lead to the same end result. Management of code in version control. By using a version control system as the leading medium, you do not need to adjust the machines manually (which is not reproducible). Synchronization of environments by using a version control system and automatic provisioning of environments. Continuous integration servers, such as Jenkins, simply have to listen to the path in the version control system to detect changes. Then the configuration management tool (e.g., Puppet) ensures that the corresponding machines apply the behavior that is described in version control. Using tools such as Jenkins (see Chapter 8) and Puppet and Vagrant (see Chapter 9), complete setups, including virtualizations, can be managed automatically. Sharing of scripts (e.g., Puppet manifests). A cross-functional team that includes development and operations can develop this function. Sharing the scripts in the version control system enables all parties, particularly development and operations, to use those scripts to set up their respective environments: test environments (used by development) and production environments (managed by operations). Automation is an essential backbone of DevOps (see Chapter 3 and Chapter 8 for more information on automation). Automation is the use of solutions to reduce the need for human work. Automation can ensure that the software is built the same way each time, that the team sees every change made to the software, and that the software is tested and reviewed in the same way every day so that no defects slip through or are introduced through human error. In software development projects, a high level of automation is a prerequisite for quickly delivering the best quality and for obtaining feedback from stakeholders early and often. Automating aspects of DevOps helps to make parts of the process transparent for the whole team and also helps deploy software to different target environments in the same way. You can best improve what you measure; and to measure something usefully, you need a process that delivers results in a reproducible way. DevOps addresses aspects similar to those tackled by Agile development, but the former focuses on breaking down the walls between developers and operations workers. The challenge is to communicate the benefits of DevOps to both development and operations teams. Both groups may be reluctant to start implementing the shift toward DevOps because their day is already full of activities. So why should they be concerned with the work of others?
Why should operations want to use unfamiliar tools and adjust their daily routines when their self-made, isolated solutions have worked just fine for years? Because of this resistance, the incentives and commitment provided by upper management are important. Incentives alone are not enough: unified processes and tool chains are also important. Upper management will also resist by questioning the wisdom of implementing DevOps if the concrete benefits are not visible. Better cash flow and improved time to market are hard to measure. Management asks questions that address the core problems of software engineering while ignoring the symptoms: how can the company achieve maximal earnings in a short period of time? How can requirements be made stable and delivered to customers quickly? These results and visions should be measured with metrics that are shared by development and operations. Existing metrics can be further used or replaced by metrics that accurately express business value. One example of an end-to-end metric is the cycle time, which we will discuss in detail in Chapter 3."} {"_id": "bca1bf790987bfb8fccf4e158a5c9fab3ab371ac", "title": "Naive Bayes Word Sense Induction", "text": "We introduce an extended naive Bayes model for word sense induction (WSI) and apply it to a WSI task. The extended model incorporates the idea that words closer to the target word are more relevant in predicting its sense. The proposed model is very simple yet effective when evaluated on SemEval-2010 WSI data."} {"_id": "53f3edfeb22de82c7a4b4a02209d296526eee38c", "title": "Dispatching optimization and routing guidance for emergency vehicles in disaster", "text": "Disasters have occurred frequently all over the world in recent years. This paper aims to develop dispatching optimization and dynamic routing guidance techniques for emergency vehicles under disaster conditions, so as to reduce emergency response time and avoid further possible deterioration of the disaster situation. For dispatching, casualties are first classified into several regions by an adaptive spectral clustering method, based on pickup locations and the quantity and severity of casualties; dispatching strategies for emergency vehicles are then worked out by a k-means clustering method based on the distances among casualty regions, emergency supply stations, and hospitals. For routing guidance, a centrally dynamic route guidance system based on parallel computing technology is presented to offer safe, reliable, and fast routes for emergency vehicles, subject to the network's impedance function based on real-time forecasted travel times. Finally, the algorithms presented in this paper are validated on the ArcGIS platform by randomly generating casualties in random areas and randomly damaging the simulated network of Changchun city."} {"_id": "ef51ff88c525751e2d09f245a3bedc40cf364961", "title": "Breaking CAPTCHAs on the Dark Web, 11 February 2018", "text": "On the Dark Web, several websites inhibit automated scraping attempts by employing CAPTCHAs. Scraping important content from a website is possible if these CAPTCHAs are solved by a web scraper. For this purpose, a machine learning tool, TensorFlow, and an optical character recognition tool, Tesseract, are used to solve simple CAPTCHAs. Two sets of CAPTCHAs, which are also used on some Dark Web websites, were generated for testing purposes.
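A baseline of the Tesseract kind evaluated below can be reproduced with the pytesseract wrapper, assuming a local Tesseract installation and a captcha.png file; the thresholding step is a common preprocessing heuristic, not a detail taken from the paper.

```python
# OCR a CAPTCHA image with Tesseract, restricted to lowercase
# alphanumerics and single-word page segmentation.
import pytesseract
from PIL import Image

img = Image.open("captcha.png").convert("L")        # grayscale
img = img.point(lambda p: 255 if p > 128 else 0)    # crude binarization
guess = pytesseract.image_to_string(
    img,
    config="--psm 8 -c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyz0123456789")
print(guess.strip())
```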
Tesseract achieved a success rate of 27.6% and 13.7% for sets 1 and 2, respectively. A total of three models were created for TensorFlow: one model per set of CAPTCHAs and one model with the two sets mixed together. TensorFlow achieved a success rate of 94.6%, 99.7%, and 70.1% for the first, second, and mixed set, respectively. The initial investment to train TensorFlow can take up to two days for a single type of CAPTCHA, depending on implementation efficiency and hardware. The CAPTCHA images, including the answers, are also required for training TensorFlow, whereas Tesseract can be used on demand without prior training."} {"_id": "0d4fca03c4748fcac491809f0f73cde401972e28", "title": "Business Intelligence", "text": "Business intelligence systems combine operational data with analytical tools to present complex and competitive information to planners and decision makers. The objective is to improve the timeliness and quality of inputs to the decision process. Business Intelligence is used to understand the capabilities available in the firm; the state of the art, trends, and future directions in the markets, the technologies, and the regulatory environment in which the firm competes; and the actions of competitors and the implications of these actions. The emergence of the data warehouse as a repository, advances in data cleansing, increased capabilities of hardware and software, and the emergence of the web architecture all combine to create a richer business intelligence environment than was available previously. Although business intelligence systems are widely used in industry, research about them is limited. This paper, in addition to being a tutorial, proposes a BI framework and potential research topics. The framework highlights the importance of unstructured data and discusses the need to develop BI tools for its acquisition, integration, cleanup, search, analysis, and delivery. In addition, this paper explores a matrix for BI data types (structured vs. unstructured) and data sources (internal and external) to guide research."} {"_id": "22b22af6c27e6d4348ed9d131ec119ba48d8301e", "title": "Automated API Property Inference Techniques", "text": "Frameworks and libraries offer reusable and customizable functionality through Application Programming Interfaces (APIs). Correctly using large and sophisticated APIs can represent a challenge due to hidden assumptions and requirements. Numerous approaches have been developed to infer properties of APIs, intended to guide their use by developers. With each approach come new definitions of API properties, new techniques for inferring these properties, and new ways to assess their correctness and usefulness. This paper provides a comprehensive survey of over a decade of research on automated property inference for APIs. Our survey provides a synthesis of this complex technical field along different dimensions of analysis: properties inferred, mining techniques, and empirical results.
In particular, we derive a classification and organization of over 60 techniques into five different categories based on the type of API property inferred: unordered usage patterns, sequential usage patterns, behavioral specifications, migration mappings, and general information."} {"_id": "e66efb82b1c3982c6451923d73e870e95339c3a6", "title": "Trajectory pattern mining: Exploring semantic and time information", "text": "With the development of GPS and the popularity of smart phones and wearable devices, users can easily log their daily trajectories. Prior works have elaborated on mining trajectory patterns from raw trajectories. Trajectory patterns consist of hot regions and the sequential relationships among them, where hot regions refer to spatial regions with a higher density of data points. Note that some hot regions do not have any meaning for users. Moreover, trajectory patterns do not have explicit time information or semantic information. To enrich trajectory patterns, we propose semantic trajectory patterns, which are referred to as the moving patterns with spatial, temporal, and semantic attributes. Given a user trajectory, we aim at mining frequent semantic trajectory patterns. Explicitly, we extract the three attributes from a raw trajectory, and convert it into a semantic mobility sequence. Given such a semantic mobility sequence, we propose two algorithms to discover frequent semantic trajectory patterns. The first algorithm, MB (standing for matching-based algorithm), is a naive method to find frequent semantic trajectory patterns. It generates all possible patterns and extracts the occurrence of the patterns from the semantic mobility sequence. The second algorithm, PS (standing for PrefixSpan-based algorithm), is developed to efficiently mine semantic trajectory patterns. Algorithm PS fully utilizes the efficiency advantage of PrefixSpan. Since the semantic mobility sequence contains three attributes, we need to further transform it into a raw sequence before using algorithm PrefixSpan. Therefore, we propose the SS algorithm (standing for sequence symbolization algorithm) to achieve this purpose. To evaluate our proposed algorithms, we conducted experiments on the real datasets of Google Location History, and the experimental results show the effectiveness and efficiency of our proposed algorithms."} {"_id": "331cd0d53df0254213557cee2d9f0a2109ba16d8", "title": "Performance Analysis of Modified LLC Resonant Converter", "text": "In this paper a modified form of the most efficient resonant LLC series parallel converter configuration is proposed. The proposed system comprises an additional LC circuit synchronized with the existing resonant tank of the LLC configuration (LLC-LC configuration). With the development of power electronics devices, resonant converters have proved to be more efficient than conventional converters as they employ soft switching techniques. Among the three basic configurations of resonant converter, Series Resonant Converter (SRC), Parallel Resonant Converter (PRC) and Series Parallel Resonant Converter (SPRC), the LLC configuration under SPRC is the most efficient, providing a narrow switching-frequency range for a wide range of load variation, improved efficiency, and ZVS capability even under no-load conditions. The modified LLC configuration, i.e., the LLC-LC configuration, offers better efficiency as well as better output voltage and gain.
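The resonant tank at the heart of both configurations is characterized by two resonant frequencies; a small worked example with illustrative component values (not taken from the paper):

```python
# Series resonance f_r1 = 1/(2*pi*sqrt(Lr*Cr)) and the lower resonance
# f_r2 including the magnetizing inductance Lm; values are assumed.
import math

Lr = 60e-6   # resonant inductance, H (assumed)
Cr = 24e-9   # resonant capacitance, F (assumed)
Lm = 240e-6  # magnetizing inductance, H (assumed)

fr1 = 1.0 / (2.0 * math.pi * math.sqrt(Lr * Cr))
fr2 = 1.0 / (2.0 * math.pi * math.sqrt((Lr + Lm) * Cr))
print(f"fr1 = {fr1 / 1e3:.1f} kHz, fr2 = {fr2 / 1e3:.1f} kHz")
```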
The efficiency tends to increase with increasing input voltage, and hence these converters are suitable for high-input-voltage operation. The simulation and analysis have been done for a full-bridge configuration of the switching circuit, and the results are presented."} {"_id": "85fc21452fe92532ec89444055880aadb0eacf4c", "title": "Deep Recurrent Neural Networks for seizure detection and early seizure detection systems", "text": "Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100%. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration."} {"_id": "8f6c14f6743d8f9a8ab0be99a50fb51a123ab62c", "title": "Document Image Binarization Using Recurrent Neural Networks", "text": "In the context of document image analysis, image binarization is an important preprocessing step for other document analysis algorithms, but also relevant on its own by improving the readability of images of historical documents. While historical document image binarization is challenging due to common image degradations, such as bleedthrough, faded ink or stains, achieving good binarization performance in a timely manner is a worthwhile goal to facilitate efficient information extraction from historical documents. In this paper, we propose a recurrent neural network based algorithm using Grid Long Short-Term Memory cells for image binarization, as well as a pseudo F-Measure based weighted loss function. We evaluate the binarization and execution performance of our algorithm for different choices of footprint size, scale factor and loss function. Our experiments show a significant trade-off between binarization time and quality for different footprint sizes. However, we see no statistically significant difference when using different scale factors and only limited differences for different loss functions. Lastly, we compare the binarization performance of our approach with the best performing algorithm in the 2016 handwritten document image binarization contest and show that both algorithms perform equally well."} {"_id": "61dad02743d5333e942677836052b814bef4bad8", "title": "A collaborative filtering framework based on fuzzy association rules and multiple-level similarity", "text": "The rapid development of Internet technologies in recent decades has imposed a heavy information burden on users. This has led to the popularity of recommender systems, which provide advice to users about items they may like to examine.
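For the resonant-tank discussion above, the series resonant frequency follows the usual relation f_r = 1/(2π√(L_r·C_r)). A worked numeric example with invented component values, not taken from the paper:

```python
# Worked example (illustrative values): series resonant frequency of an
# LLC tank, f_r = 1 / (2*pi*sqrt(Lr*Cr)). The component values below are
# invented for illustration, not taken from the paper.
import math

Lr = 100e-6   # resonant inductance in henries (hypothetical)
Cr = 100e-9   # resonant capacitance in farads (hypothetical)

f_r = 1.0 / (2.0 * math.pi * math.sqrt(Lr * Cr))
print(f"series resonant frequency: {f_r / 1e3:.1f} kHz")  # ~50.3 kHz
```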
Collaborative Filtering (CF) is the most promising technique in recommender systems, providing personalized recommendations to users based on their previously expressed preferences and those of other similar users. This paper introduces a CF framework based on Fuzzy Association Rules and Multiple-level Similarity (FARAMS). FARAMS extends existing techniques by using fuzzy association rule mining and takes advantage of product similarities in taxonomies to address data sparseness and nontransitive associations. Experimental results show that FARAMS improves prediction quality, as compared to similar approaches."} {"_id": "3ddac15bd47bc0745db4297d30be71af43adf0bb", "title": "Greed is good: Approximating independent sets in sparse and bounded-degree graphs", "text": "The minimum-degree greedy algorithm, or Greedy for short, is a simple and well-studied method for finding independent sets in graphs. We show that it achieves a performance ratio of (Δ+2)/3 for approximating independent sets in graphs with degree bounded by Δ. The analysis yields a precise characterization of the size of the independent sets found by the algorithm as a function of the independence number, as well as a generalization of Turán’s bound. We also analyze the algorithm when run in combination with a known preprocessing technique, and obtain an improved $$(2\bar d + 3)/5$$ performance ratio on graphs with average degree $$\bar d$$, improving on the previous best $$(\bar d + 1)/2$$ of Hochbaum. Finally, we present an efficient parallel and distributed algorithm attaining the performance guarantees of Greedy."} {"_id": "bb6e6e3251bbb80587bdb5064e24b55d728529b1", "title": "Mixed Methods Research : A Research Paradigm Whose Time Has Come", "text": "The purposes of this article are to position mixed methods research (mixed research is a synonym) as the natural complement to traditional qualitative and quantitative research, to present pragmatism as offering an attractive philosophical partner for mixed methods research, and to provide a framework for designing and conducting mixed methods research. In doing this, we briefly review the paradigm “wars” and incompatibility thesis, we show some commonalities between quantitative and qualitative research, we explain the tenets of pragmatism, we explain the fundamental principle of mixed research and how to apply it, we provide specific sets of designs for the two major types of mixed methods research (mixed-model designs and mixed-method designs), and, finally, we explain mixed methods research as following (recursively) an eight-step process. A key feature of mixed methods research is its methodological pluralism or eclecticism, which frequently results in superior research (compared to monomethod research). Mixed methods research will be successful as more investigators study and help advance its concepts and as they regularly practice it."} {"_id": "d8e682b2f33b7c765d879717d68cdf262dba871e", "title": "Development Emails Content Analyzer: Intention Mining in Developer Discussions (T)", "text": "Written development communication (e.g. mailing lists, issue trackers) constitutes a precious source of information to build recommenders for software engineers, for example aimed at suggesting experts, or at redocumenting existing source code.
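The Greedy rule analysed above is short enough to transcribe directly: repeatedly take a minimum-degree vertex, add it to the independent set, and delete it together with its neighbors. A sketch on a plain adjacency dict with an invented toy graph:

```python
# Sketch of the minimum-degree greedy rule: repeatedly take a vertex of
# minimum degree, add it to the independent set, and delete it together
# with its neighbors. Pure-Python adjacency dict.
def greedy_independent_set(adj):
    adj = {v: set(ns) for v, ns in adj.items()}  # local mutable copy
    independent = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        independent.append(v)
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for ns in adj.values():
            ns -= removed
    return independent

# Toy example (invented): a path 1-2-3-4 plus a pendant vertex 5 on 3.
graph = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3}, 5: {3}}
print(greedy_independent_set(graph))  # e.g. [1, 4, 5]
```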
In this paper we propose a novel, semi-supervised approach named DECA (Development Emails Content Analyzer) that uses Natural Language Parsing to classify the content of development emails according to their purpose (e.g., feature request, opinion asking, problem discovery, solution proposal, information giving), identifying email elements that can be used for specific tasks. A study based on data from Qt and Ubuntu highlights a high precision (90%) and recall (70%) of DECA in classifying email content, outperforming traditional machine learning strategies. Moreover, we successfully used DECA for re-documenting source code of Eclipse and Lucene, improving the recall, while keeping high precision, of a previous approach based on ad-hoc heuristics."} {"_id": "4e56ab1afd8a515a0a0b351fbf1b1d08624d0cc2", "title": "Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction", "text": "We study the classic problem of choosing a prior distribution for a location parameter β = (β1, . . . , βp) as p grows large. First, we study the standard “global-local shrinkage” approach, based on scale mixtures of normals. Two theorems are presented which characterize certain desirable properties of shrinkage priors for sparse problems. Next, we review some recent results showing how Lévy processes can be used to generate infinite-dimensional versions of standard normal scale-mixture priors, along with new priors that have yet to be seriously studied in the literature. This approach provides an intuitive framework both for generating new regularization penalties and shrinkage rules, and for performing asymptotic analysis on existing models."} {"_id": "64eb627f5e2048892edbeab44567516af4a43b2e", "title": "A randomized trial of a low-carbohydrate diet for obesity.", "text": "BACKGROUND\nDespite the popularity of the low-carbohydrate, high-protein, high-fat (Atkins) diet, no randomized, controlled trials have evaluated its efficacy.\n\n\nMETHODS\nWe conducted a one-year, multicenter, controlled trial involving 63 obese men and women who were randomly assigned to either a low-carbohydrate, high-protein, high-fat diet or a low-calorie, high-carbohydrate, low-fat (conventional) diet. Professional contact was minimal to replicate the approach used by most dieters.\n\n\nRESULTS\nSubjects on the low-carbohydrate diet had lost more weight than subjects on the conventional diet at 3 months (mean [+/-SD], -6.8+/-5.0 vs. -2.7+/-3.7 percent of body weight; P=0.001) and 6 months (-7.0+/-6.5 vs. -3.2+/-5.6 percent of body weight, P=0.02), but the difference at 12 months was not significant (-4.4+/-6.7 vs. -2.5+/-6.3 percent of body weight, P=0.26). After three months, no significant differences were found between the groups in total or low-density lipoprotein cholesterol concentrations. The increase in high-density lipoprotein cholesterol concentrations and the decrease in triglyceride concentrations were greater among subjects on the low-carbohydrate diet than among those on the conventional diet throughout most of the study. Both diets significantly decreased diastolic blood pressure and the insulin response to an oral glucose load.\n\n\nCONCLUSIONS\nThe low-carbohydrate diet produced a greater weight loss (absolute difference, approximately 4 percent) than did the conventional diet for the first six months, but the differences were not significant at one year. The low-carbohydrate diet was associated with a greater improvement in some risk factors for coronary heart disease.
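As a loose illustration of the global-local scale-mixture construction discussed above, one can sample coefficients with a global scale times heavy-tailed local scales; the half-Cauchy locals below give the familiar horseshoe behavior and are an assumption for illustration, not the paper's own estimator:

```python
# Sketch: draw coefficients from a global-local scale mixture of normals,
# beta_j ~ N(0, tau^2 * lambda_j^2), with half-Cauchy local scales
# (the horseshoe is one familiar instance of this family). This
# illustrates the prior's behavior; it is not the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)
p, tau = 1000, 0.1                       # dimension and global scale (illustrative)
lam = np.abs(rng.standard_cauchy(p))     # local scales lambda_j ~ C+(0, 1)
beta = rng.normal(0.0, tau * lam)        # beta_j | lambda_j ~ N(0, tau^2 lambda_j^2)

# Most draws are shrunk near zero while a few remain large (heavy tails).
print(np.quantile(np.abs(beta), [0.5, 0.9, 0.99]))
```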
Adherence was poor and attrition was high in both groups. Longer and larger studies are required to determine the long-term safety and efficacy of low-carbohydrate, high-protein, high-fat diets."} {"_id": "0dadc024bb2e9cb675165fdc7a13d55f5c732636", "title": "Overall C as a measure of discrimination in survival analysis: model specific population value and confidence interval estimation.", "text": "The assessment of the discrimination ability of a survival analysis model is a problem of considerable theoretical interest and important practical applications. This issue is, however, more complex than evaluating the performance of a linear or logistic regression. Several different measures have been proposed in the biostatistical literature. In this paper we investigate the properties of the overall C index introduced by Harrell as a natural extension of the ROC curve area to survival analysis. We develop the overall C index as a parameter describing the performance of a given model applied to the population under consideration and discuss the statistic used as its sample estimate. We discover a relationship between the overall C and the modified Kendall's tau and construct a confidence interval for our measure based on the asymptotic normality of its estimate. Then we investigate via simulations the length and coverage probability of this interval. Finally, we present a real life example evaluating the performance of a Framingham Heart Study model."} {"_id": "9afa9c1c650d915c1b6f56b458ff3759bc26bf09", "title": "The apnea-ECG database", "text": "Sleep apnea is a sleep disorder with a high prevalence in the adult male population. Sleep apnea is regarded as an independent risk factor for cardiovascular sequelae such as ischemic heart attacks and stroke. The diagnosis of sleep apnea requires polysomnographic studies in sleep laboratories with expensive equipment and attending personnel. Sleep apnea can be treated effectively using nasal ventilation therapy (nCPAP). Early recognition and selection of patients with sleep-related breathing disorders is an important task. Although it has been suggested that this can be done on the basis of the ECG, careful quantitative studies of the accuracy of such techniques are needed. An annotated database with 70 nighttime ECG recordings has been created to support such studies. The annotations were based on visual scoring of disordered breathing during sleep."} {"_id": "1d70f0d7bd782c65273bc689b6ada8723e52d7a3", "title": "Empirical comparison of algorithms for network community detection", "text": "Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as a set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that \"look like\" good communities for the application of interest.\n In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions.
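The sample estimate of the overall C discussed above is the usual concordance statistic. A small sketch for right-censored data, counting comparable pairs (ties in time are skipped for brevity; the data are invented):

```python
# Sketch: Harrell's concordance statistic for right-censored survival data.
# A pair is comparable when the shorter observed time is an event; it is
# concordant when the higher risk score belongs to the earlier failure.
# Ties in risk scores count 1/2. Data below are invented for illustration.
def harrell_c(times, events, risk_scores):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so that (a) has the smaller observed time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # not comparable under right censoring
            comparable += 1
            if risk_scores[a] > risk_scores[b]:
                concordant += 1.0
            elif risk_scores[a] == risk_scores[b]:
                concordant += 0.5
    return concordant / comparable

times = [5, 8, 3, 9, 6]
events = [1, 0, 1, 1, 0]            # 1 = failure observed, 0 = censored
scores = [0.9, 0.3, 0.8, 0.2, 0.5]  # higher = predicted higher risk
print(round(harrell_c(times, events, scores), 3))  # 0.857
```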
In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior."} {"_id": "312a2edbec5fae34beaf33faa059d37d04cb7235", "title": "Community detection algorithms: a comparative analysis.", "text": "Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and/or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], Blondel [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems."} {"_id": "3e656e08d2b8d1bf84db56090f4053316b01c10f", "title": "Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities.", "text": "Many complex networks display a mesoscopic structure with groups of nodes sharing many links with the other nodes in their group and comparatively few with nodes of different groups. This feature is known as community structure and encodes precious information about the organization and the function of the nodes. Many algorithms have been proposed but it is not yet clear how they should be tested. Recently we have proposed a general class of undirected and unweighted benchmark graphs, with heterogeneous distributions of node degree and community size. Increasing attention has recently been devoted to developing algorithms able to consider the direction and the weight of the links, which require suitable benchmark graphs for testing. In this paper we extend the basic ideas behind our previous benchmark to generate directed and weighted networks with built-in community structure. We also consider the possibility that nodes belong to multiple communities, a feature occurring in real systems, such as social networks. As a practical application, we show how modularity optimization performs on our benchmark."} {"_id": "56ff48f2b22014d5f59fd2db2b0fc0c651038de1", "title": "Link communities reveal multiscale complexity in networks", "text": "Networks have become a key approach to understanding systems of interacting objects, unifying the study of diverse phenomena including biological organisms and human society.
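The modularity score that the last benchmark study above optimizes can be computed directly from its definition, Q = Σ_c [e_c/m − (d_c/2m)²]. A sketch on an invented two-community graph:

```python
# Sketch: Newman-Girvan modularity Q = sum_c [ e_c/m - (d_c / 2m)^2 ],
# where e_c is the number of intra-community edges, d_c the total degree
# of community c, and m the number of edges. Undirected, unweighted.
def modularity(edges, community_of):
    m = len(edges)
    intra = {}   # e_c per community
    degree = {}  # d_c per community
    for u, v in edges:
        cu, cv = community_of[u], community_of[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2.0 * m)) ** 2
               for c, d in degree.items())

# Two invented triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, community_of), 3))  # ~0.357
```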
One crucial step when studying the structure and dynamics of networks is to identify communities: groups of related nodes that correspond to functional subunits such as protein complexes or social spheres. Communities in networks often overlap such that nodes simultaneously belong to several groups. Meanwhile, many networks are known to possess hierarchical organization, where communities are recursively grouped into a hierarchical structure. However, the fact that many real networks have communities with pervasive overlap, where each and every node belongs to more than one group, has the consequence that a global hierarchy of nodes cannot capture the relationships between overlapping groups. Here we reinvent communities as groups of links rather than nodes and show that this unorthodox approach successfully reconciles the antagonistic organizing principles of overlapping communities and hierarchy. In contrast to the existing literature, which has entirely focused on grouping nodes, link communities naturally incorporate overlap while revealing hierarchical organization. We find relevant link communities in many networks, including major biological networks such as protein–protein interaction and metabolic networks, and show that a large social network contains hierarchically organized community structures spanning inner-city to regional scales while maintaining pervasive overlap. Our results imply that link communities are fundamental building blocks that reveal overlap and hierarchical organization in networks to be two aspects of the same phenomenon."} {"_id": "62bb7ce6ae6ed38f0ae4d304d56e8edfba1870d0", "title": "Topic-link LDA: joint models of topic and author community", "text": "Given a large-scale linked document collection, such as a collection of blog posts or a research literature archive, there are two fundamental problems that have generated a lot of interest in the research community. One is to identify a set of high-level topics covered by the documents in the collection; the other is to uncover and analyze the social network of the authors of the documents. So far these problems have been viewed as separate problems and considered independently from each other. In this paper we argue that these two problems are in fact inter-dependent and should be addressed together. We develop a Bayesian hierarchical approach that performs topic modeling and author community discovery in one unified framework. The effectiveness of our model is demonstrated on two blog data sets in different domains and one research paper citation data from CiteSeer."} {"_id": "b7543053ab5e44a6e0fdfc6f9b9d3451011569b6", "title": "The role of brand logos in firm performance", "text": "Keywords: Brand logos, Brand management, Aesthetics, Commitment, Brand extensions, Firm performance. This research demonstrates that the positive effects of brand logos on customer brand commitment and firm performance derive not from enabling brand identification, as is currently understood, but primarily from facilitating customer self-identity/expressiveness, representing a brand's functional benefits, and offering aesthetic appeal.
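A common way to make the link-community idea above concrete is the similarity between links sharing a node: the Jaccard index of the inclusive neighborhoods of the two unshared endpoints. A sketch with an invented path graph (this follows the usual link-clustering formulation, not necessarily the paper's exact variant):

```python
# Sketch: similarity between two links that share a node k, as used in
# link-community clustering: the Jaccard index of the inclusive
# neighborhoods of the two unshared endpoints i and j.
def link_similarity(adj, edge_a, edge_b):
    shared = set(edge_a) & set(edge_b)
    if len(shared) != 1:
        raise ValueError("links must share exactly one node")
    (i,) = set(edge_a) - shared
    (j,) = set(edge_b) - shared
    ni = adj[i] | {i}  # inclusive neighborhood of i
    nj = adj[j] | {j}
    return len(ni & nj) / len(ni | nj)

# Invented path graph 1-2-3-4: edges (1,2) and (2,3) share node 2.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(link_similarity(adj, (1, 2), (2, 3)))  # 0.25
```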
This study examines whether brand names or visual symbols as logos are more effective at creating these benefits and whether or not the impact of the three aforementioned brand logo benefits on customer brand commitment and firm performance is contingent on the extent to which a firm leverages its brand (i.e., employs brand extensions to different product categories)."} {"_id": "f07fd927971c40261dd7cef1ad6d2360b23fe294", "title": "A greedy approach to sparse canonical correlation analysis", "text": "We consider the problem of sparse canonical correlation analysis (CCA), i.e., the search for two linear combinations, one for each multivariate, that yield maximum correlation using a specified number of variables. We propose an efficient numerical approximation based on a direct greedy approach which bounds the correlation at each stage. The method is specifically designed to cope with large data sets and its computational complexity depends only on the sparsity levels. We analyze the algorithm’s performance through the tradeoff between correlation and parsimony. The results of numerical simulation suggest that a significant portion of the correlation may be captured using a relatively small number of variables. In addition, we examine the use of sparse CCA as a regularization method when the number of available samples is small compared to the dimensions of the multivariates. I. INTRODUCTION Canonical correlation analysis (CCA), introduced by Harold Hotelling [1], is a standard technique in multivariate data analysis for extracting common features from a pair of data sources [2], [3]. Each of these data sources generates a random vector that we call a multivariate. Unlike classical dimensionality reduction methods which address one multivariate, CCA takes into account the statistical relations between samples from two spaces of possibly different dimensions and structure. In particular, it searches for two linear combinations, one for each multivariate, in order to maximize their correlation. It is used in different disciplines as a stand-alone tool or as a preprocessing step for other statistical methods. Furthermore, CCA is a generalized framework which includes numerous classical methods in statistics, e.g., Principal Component Analysis (PCA), Partial Least Squares (PLS) and Multiple Linear Regression (MLR) [4]. CCA has recently regained attention with the advent of kernel CCA and its application to independent component analysis [5], [6]. The last decade has witnessed a growing interest in the search for sparse representations of signals and sparse numerical methods. Thus, we consider the problem of sparse CCA, i.e., the search for linear combinations with maximal correlation using a small number of variables. The quest for sparsity can be motivated through various reasonings. First is the ability to interpret and visualize the results. A small number of variables allows us to get the “big picture”, while sacrificing some of the small details. Moreover, sparse representations enable the use of computationally efficient numerical methods, compression techniques, as well as noise reduction algorithms. The second motivation for sparsity is regularization and stability. One of the main vulnerabilities of CCA is its sensitivity to a small number of observations. Thus, regularized methods such as ridge CCA [7] must be used. In this context, sparse CCA is a subset selection scheme which allows us to reduce the dimensions of the vectors and obtain a stable solution. To the best of our knowledge the first reference to sparse CCA appeared in [2] where backward and stepwise subset selection were proposed. This discussion was of qualitative nature and no specific numerical algorithm was proposed. Recently, increasing demands for multidimensional data processing and decreasing computational cost have caused the topic to rise to prominence once again [8]–[13]. The main disadvantage of these current solutions is that there is no direct control over the sparsity and it is difficult (and nonintuitive) to select their optimal hyperparameters. In addition, the computational complexity of most of these methods is too high for practical applications with high dimensional data sets. Sparse CCA has also been implicitly addressed in [9], [14] and is intimately related to the recent results on sparse PCA [9], [15]–[17]. Indeed, our proposed solution is an extension of the results in [17] to CCA. The main contribution of this work is twofold. First, we derive CCA algorithms with direct control over the sparsity in each of the multivariates and examine their performance. Our computationally efficient methods are specifically aimed at understanding the relations between two data sets of large dimensions. We adopt a forward (or backward) greedy approach which is based on sequentially picking (or dropping) variables. At each stage, we bound the optimal CCA solution and bypass the need to resolve the full problem. Moreover, the computational complexity of the forward greedy method does not depend on the dimensions of the data but only on the sparsity parameters. Numerical simulation results show that a significant portion of the correlation can be efficiently captured using a relatively low number of non-zero coefficients. Our second contribution is investigation of sparse CCA as a regularization method. Using empirical simulations we examine the use of the different algorithms when the dimensions of the multivariates are larger than (or of the same order of) the number of samples and demonstrate the advantage of sparse CCA. In this context, one of the advantages of the greedy approach is that it generates the full sparsity path in a single run and allows for efficient parameter tuning using"} {"_id": "cbf796150ff01714244f09ccc16f16ffc471ffdb", "title": "Matching Web Tables with Knowledge Base Entities: From Entity Lookups to Entity Embeddings", "text": "Web tables constitute valuable sources of information for various applications, ranging from Web search to Knowledge Base (KB) augmentation. An underlying common requirement is to annotate the rows of Web tables with semantically rich descriptions of entities published in Web KBs. In this paper, we evaluate three unsupervised annotation methods: (a) a lookup-based method which relies on the minimal entity context provided in Web tables to discover correspondences to the KB, (b) a semantic embeddings method that exploits a vectorial representation of the rich entity context in a KB to identify the most relevant subset of entities in the Web table, and (c) an ontology matching method, which exploits schematic and instance information of entities available both in a KB and a Web table.
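A minimal sketch of the forward greedy idea described above: grow the support of one multivariate by adding, at each stage, the variable that maximizes the leading canonical correlation on the current support. This restates the greedy loop only; the paper's contribution is bounding the correlation so the full problem need not be re-solved at each stage:

```python
# Sketch: forward greedy support selection for sparse CCA. At each step,
# add to the support of X the variable whose inclusion maximizes the
# leading canonical correlation with Y, computed by brute force as the
# largest singular value of the whitened cross-covariance. The paper's
# method instead bounds the correlation to avoid refitting.
import numpy as np

def leading_canonical_corr(X, Y, eps=1e-8):
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))  # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)[0]

def greedy_sparse_cca(X, Y, k):
    support = []
    for _ in range(k):
        rest = [j for j in range(X.shape[1]) if j not in support]
        best = max(rest, key=lambda j: leading_canonical_corr(X[:, support + [j]], Y))
        support.append(best)
    return support

rng = np.random.default_rng(1)
Y = rng.normal(size=(200, 3))
X = rng.normal(size=(200, 10))
X[:, 4] += Y[:, 0]                    # plant one truly correlated variable
print(greedy_sparse_cca(X, Y, k=2))   # variable 4 should be picked first
```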
Our experimental evaluation is conducted using two existing benchmark data sets in addition to a new large-scale benchmark created using Wikipedia tables. Our results show that: 1) our novel lookup-based method outperforms state-of-the-art lookup-based methods, 2) the semantic embeddings method outperforms lookup-based methods in one benchmark data set, and 3) the lack of a rich schema in Web tables can limit the ability of ontology matching tools in performing high-quality table annotation. As a result, we propose a hybrid method that significantly outperforms individual methods on all the benchmarks."} {"_id": "a742d81416a1da97af53e0a9748c16f37fd61b40", "title": "Blood Code: The History and Future of Video Game Censorship", "text": "INTRODUCTION; I. FIRST AMENDMENT BACKGROUND; II. THE ANALOGOUS HISTORIES OF FILMS AND VIDEO GAMES (A. Film Controversy and the Formation of the MPAA; B. Early Video Game Controversy and the Formation of the ESRB; C. Doom and Columbine; D. Jack Thompson and Grand Theft Auto); III. WHY VIDEO GAMES SHOULD NOT BE TREATED DIFFERENTLY THAN FILMS (A. Violent and Sexual Content in Video Games is Distinguishable from Pornography and Obscenity; B. Violent Game Content is Similar to Violent Film Content; C. Positive Social Aspects of Violent Gaming; D. Desensitization Will Lead to a Decrease in Political Outrage); IV. EXISTING VIDEO GAME JURISPRUDENCE; V. RATINGS AND LABELS AS UNCONSTITUTIONAL CENSORSHIP; CONCLUSION"} {"_id": "0814694a247a9b6e38dde34ab95067a63f67e458", "title": "Characterizing the life cycle of online news stories using social media reactions", "text": "This paper presents a study of the life cycle of news articles posted online. We describe the interplay between website visitation patterns and social media reactions to news content. We show that we can use this hybrid observation method to characterize distinct classes of articles. We also find that social media reactions can help predict future visitation patterns early and accurately. We validate our methods using qualitative analysis as well as quantitative analysis on data from a large international news network, for a set of articles generating more than 3,000,000 visits and 200,000 social media reactions. We show that it is possible to model accurately the overall traffic articles will ultimately receive by observing the first ten to twenty minutes of social media reactions. Achieving the same prediction accuracy with visits alone would require waiting for three hours of data.
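A sketch of the kind of early-prediction model suggested above: a log-linear fit from early social-media reaction counts to final article traffic, in the spirit of log-linear early predictors; the paper's actual model is richer and the numbers below are invented:

```python
# Sketch: predict an article's final traffic from its early social-media
# reaction count with a log-linear fit, log(final) ~ a + b*log(early).
# This mirrors the spirit of the early-prediction result above; the
# paper's actual model is richer, and the data here are invented.
import numpy as np

early = np.array([120, 40, 300, 15, 80, 500])                 # reactions in first 20 min
final = np.array([30000, 9000, 90000, 3000, 20000, 150000])   # total visits

b, a = np.polyfit(np.log(early), np.log(final), 1)  # slope, intercept

def predict_final(early_reactions):
    return float(np.exp(a) * early_reactions ** b)

print(round(predict_final(200)))  # projected total visits for a new article
```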
We also describe significant improvements in the accuracy of early shelf-life prediction for news stories."} {"_id": "275f66e845043217d5c37328b5e71a178302469f", "title": "Improving www proxies performance with greedy-dual-size-frequency caching policy", "text": "Keywords: Web, HTTP, WWW proxies, caching policies, replacement algorithms, performance. Web proxy caches are used to improve performance of the WWW. Since the majority of Web documents are static documents, caching them at WWW proxies reduces both network traffic and response time. One of the keys to better proxy cache performance is an efficient caching policy which keeps popular documents in the cache and replaces rarely used ones. This paper introduces the Greedy-Dual-Size-Frequency caching policy to maximize hit and byte hit rates for WWW proxies. The proposed caching strategy incorporates, in a simple way, the most important characteristics of a file and its accesses, such as file size, file access frequency and recency of the last access. Greedy-Dual-Size-Frequency is an improvement of the Greedy-Dual-Size algorithm – the current champion among the replacement strategies proposed for Web proxy caches."} {"_id": "1c8ea4d0687eae94871ee1916da4445e08f29076", "title": "Modeling Review Argumentation for Robust Sentiment Analysis", "text": "Most text classification approaches model text at the lexical and syntactic level only, lacking domain robustness and explainability. In tasks like sentiment analysis, such approaches can result in limited effectiveness if the texts to be classified consist of a series of arguments. In this paper, we claim that even a shallow model of the argumentation of a text allows for an effective and more robust classification, while providing intuitive explanations of the classification results. Here, we apply this idea to the supervised prediction of sentiment scores for reviews. We combine existing approaches from sentiment analysis with novel features that compare the overall argumentation structure of the given review text to a learned set of common sentiment flow patterns. Our evaluation in two domains demonstrates the benefit of modeling argumentation for text classification in terms of effectiveness and robustness."} {"_id": "f698467f1dd781f652c9839379ccc548a9aa4af1", "title": "Accounting for the effects of accountability.", "text": "This article reviews the now extensive research literature addressing the impact of accountability on a wide range of social judgments and choices. It focuses on 4 issues: (a) What impact do various accountability ground rules have on thoughts, feelings, and action? (b) Under what conditions will accountability attenuate, have no effect on, or amplify cognitive biases? (c) Does accountability alter how people think or merely what people say they think? and (d) What goals do accountable decision makers seek to achieve? In addition, this review explores the broader implications of accountability research.
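The Greedy-Dual-Size-Frequency policy above is commonly formulated with the priority K(f) = Clock + Freq(f) × Cost(f)/Size(f), where Clock is inflated to the priority of each evicted object. A sketch with the cost term fixed to 1 (a hit-rate-oriented variant):

```python
# Sketch of Greedy-Dual-Size-Frequency eviction. Priority of an object f
# is commonly formulated as K(f) = Clock + Freq(f) * Cost(f) / Size(f);
# Clock is inflated to the priority of each evicted object, which ages
# older entries. Cost is fixed to 1 here (hit-rate-oriented variant).
class GDSFCache:
    def __init__(self, capacity):
        self.capacity = capacity   # in bytes
        self.used = 0
        self.clock = 0.0
        self.entries = {}          # url -> (priority, freq, size)

    def _priority(self, freq, size):
        return self.clock + freq * 1.0 / size

    def access(self, url, size):
        if url in self.entries:
            _, freq, size = self.entries[url]
            self.entries[url] = (self._priority(freq + 1, size), freq + 1, size)
            return "hit"
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda u: self.entries[u][0])
            self.clock = self.entries[victim][0]  # age the cache
            self.used -= self.entries[victim][2]
            del self.entries[victim]
        self.entries[url] = (self._priority(1, size), 1, size)
        self.used += size
        return "miss"

cache = GDSFCache(capacity=100)
for url, size in [("a", 40), ("b", 30), ("a", 40), ("c", 50)]:
    print(url, cache.access(url, size))  # a miss, b miss, a hit, c miss
```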
It highlights the utility of treating thought as a process of internalized dialogue; the importance of documenting social and institutional boundary conditions on putative cognitive biases; and the potential to craft empirical answers to such applied problems as how to structure accountability relationships in organizations."} {"_id": "49afbe880b8bd419605beb84d3382647bf8e50ea", "title": "An Effective Gated and Attention-Based Neural Network Model for Fine-Grained Financial Target-Dependent Sentiment Analysis", "text": ""} {"_id": "c1927578a61df3c5a33f6bca9f9bd5c181e1d5ac", "title": "Security Issues in the Internet of Things (IoT): A Comprehensive Study", "text": "Wireless communication networks are highly prone to security threats. The major applications of wireless communication networks are in military, business, healthcare, retail, and transportation. These systems use wired, cellular, or ad hoc networks. Wireless sensor networks, actuator networks, and vehicular networks have received great attention in society and industry. In recent years, the Internet of Things (IoT) has received considerable research attention. The IoT is considered the future of the Internet. In future, the IoT will play a vital role and will change our living styles, standards, as well as business models. The usage of IoT in different applications is expected to rise rapidly in the coming years. The IoT allows billions of devices, people, and services to connect with others and exchange information. Due to the increased usage of IoT devices, IoT networks are prone to various security attacks. The deployment of efficient security and privacy protocols in IoT networks is urgently needed to ensure confidentiality, authentication, access control, and integrity, among others. In this paper, an extensive and comprehensive study on security and privacy issues in IoT networks is provided. Keywords—Internet of Things (IoT); security issues in IoT; security; privacy"} {"_id": "b5f6ee9baa07301bba6e187bd9380686a72866c6", "title": "Networks of the Brain", "text": "Models of Network Growth: All networks, whether they are social, technological, or biological, are the result of a growth process. Many of these networks continue to grow for prolonged periods of time, continually modifying their connectivity structure throughout their entire existence. For example, the World Wide Web has grown from a small number of cross-linked documents in the early 1990s to an estimated 30 billion indexed web pages in 2009. The extraordinary growth of the Web continues unabated and has occurred without any top-down design, yet the topology of its hyperlink structure exhibits characteristic statistical patterns (Pastor-Satorras and Vespignani, 2004). Other technological networks such as the power grid, global transportation networks, or mobile communication networks continue to grow and evolve, each displaying characteristic patterns of expansion and elaboration. Growth and change in social and organizational"} {"_id": "d0eab53a6b20bfca924b85fcfb0ee76bfde6d4ef", "title": "Comparative Analysis of ADS-B Verification Techniques", "text": "ADS-B is one of many Federal Aviation Administration (FAA) regulated technologies used to monitor air traffic with high precision, while reducing dependencies on dated and costly radar equipment [1].
The FAA hopes to decrease the separation between aircraft, reduce risk of collision as air traffic density increases, save fuel costs, and increase situational awareness of both commercial and general aviation aircraft within United States airspace. Several aviation technology experts have expressed concern over the security of the ADS-B protocol [2], [3]. ADS-B has an open and well-known data format, which is broadcast on known frequencies. This means that the protocol is highly susceptible to radio frequency (RF) attacks such as eavesdropping, jamming, and spoofing. Eavesdropping and jamming will be reviewed in Section 3.4. While eavesdropping and jamming attacks are well studied, due to their applicability in many radio technologies, spoofing attacks against ADS-B are particular to this system. As such, the latter is the focus of our research. This paper evaluates so-called Kalman Filtering and Group Validation techniques (described below) in order to assess which would be a better position verification method for ADS-B signals. The parameters for the comparative analysis include both technical feasibility and practical implementation of each position verification technique. The goal is to offer a practical position verification process which could be implemented with limited government funding within the next 10 years."} {"_id": "4c2fedecddcae64514ad99b7301ad6e04654f10d", "title": "Deep Learning and Its Applications to Signal and Information Processing [Exploratory DSP]", "text": "The purpose of this article is to introduce the readers to the emerging technologies enabled by deep learning and to review the research work conducted in this area that is of direct relevance to signal processing. We also point out, in our view, the future research directions that may attract interests of and require efforts from more signal processing researchers and practitioners in this emerging area for advancing signal and information processing technology and applications."} {"_id": "d0c9acb277da76aebf56c021cb02b51cdfbb56b8", "title": "The Anti-vaccination Movement: A Regression in Modern Medicine", "text": "There have been recent trends of parents in Western countries refusing to vaccinate their children due to numerous reasons and perceived fears. While opposition to vaccines is as old as the vaccines themselves, there has been a recent surge in the opposition to vaccines in general, specifically against the MMR (measles, mumps, and rubella) vaccine, most notably since the rise in prominence of the notorious British ex-physician, Andrew Wakefield, and his works. This has caused multiple measles outbreaks in Western countries where the measles virus was previously considered eliminated. This paper evaluates and reviews the origins of the anti-vaccination movement, the reasons behind the recent strengthening of the movement, role of the internet in the spread of anti-vaccination ideas, and the repercussions in terms of public health and safety."} {"_id": "e170ca6dad1221f4bb2e4fc3d42a182e23026b80", "title": "Emotion control in collaborative learning situations: do students regulate emotions evoked by social challenges?", "text": "BACKGROUND\nDuring recent decades, self-regulated learning (SRL) has become a major research field. SRL successfully integrates the cognitive and motivational components of learning. Self-regulation is usually seen as an individual process, with the social aspects of regulation conceptualized as one aspect of the context.
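A toy sketch of the Kalman-filtering verification idea named above: track one coordinate with a constant-velocity model and gate ADS-B position reports by their innovation. All parameters and the spoofed report are invented:

```python
# Toy sketch of Kalman-filter-based position verification: track one
# coordinate with a constant-velocity model and flag reported positions
# whose innovation exceeds a gate. All parameters are invented.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.diag([0.1, 0.1])                 # process noise covariance
R = np.array([[25.0]])                  # measurement noise covariance

x = np.array([[0.0], [50.0]])           # initial state: pos 0, vel 50
P = np.eye(2) * 100.0

reports = [52.0, 101.0, 148.0, 400.0, 251.0]  # 4th report is spoofed
for z in reports:
    x, P = F @ x, F @ P @ F.T + Q       # predict
    y = z - (H @ x)[0, 0]               # innovation
    S = (H @ P @ H.T + R)[0, 0]         # innovation covariance
    if y * y / S > 9.0:                 # ~3-sigma gate
        print(f"report {z}: rejected (possible spoof)")
        continue                        # do not update with a bad report
    K = P @ H.T / S                     # Kalman gain
    x = x + K * y
    P = (np.eye(2) - K @ H) @ P
print("final position estimate:", round(x[0, 0], 1))
```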
However, recent research has begun to investigate whether self-regulation processes are complemented by socially shared regulation processes.\n\n\nAIMS\nThe presented study investigated what kind of socio-emotional challenges students experience during collaborative learning and whether the students regulate the emotions evoked during these situations. The interplay of the emotion regulation processes between the individual and the group was also studied.\n\n\nSAMPLE\nThe sample for this study was 63 teacher education students who studied in groups of three to five during three collaborative learning tasks.\n\n\nMETHOD\nStudents' interpretations of experienced social challenges and their attempts to regulate emotions evoked by these challenges were collected following each task using the Adaptive Instrument for the Regulation of Emotions.\n\n\nRESULTS\nThe results indicated that students experienced a variety of social challenges. Students also reported the use of shared regulation in addition to self-regulation. Finally, the results suggested that intrinsic group dynamics are derived from both individual and social elements of collaborative situations.\n\n\nCONCLUSION\nThe findings of the study support the assumption that students can regulate emotions collaboratively as well as individually. The study contributes to our understanding of the social aspects of emotional regulation in collaborative learning contexts."} {"_id": "6f8a13a1a7eba8966627775c32ae59dafd91cedc", "title": "PAPILLON: expressive eyes for interactive characters", "text": "PAPILLON is a technology for designing highly expressive animated eyes for interactive characters, robots and toys. Expressive eyes are essential in any form of face-to-face communication [2] and designing them has been a critical challenge in robotics, as well as in interactive character and toy development."} {"_id": "e33b9ee3c575c6a38b873888e796d29f59d98e04", "title": "Heart-Brain Neurodynamics : The Making of Emotions", "text": "As pervasive and vital as they are in human experience, emotions have long remained an enigma to science. This monograph explores recent scientific advances that clarify central controversies in the study of emotion, including the relationship between intellect and emotion, and the historical debate on the source of emotional experience. Particular attention is given to the intriguing body of research illuminating the critical role of ascending input from the body to the brain in the generation and perception of emotions. This discussion culminates in the presentation of a systems-oriented model of emotion in which the brain functions as a complex pattern-matching system, continually processing input from both the external and internal environments. From this perspective it is shown that the heart is a key component of the emotional system, thus providing a physiological basis for the long-acknowledged link between the heart and our emotional life."} {"_id": "19b7e0786d9e093fdd8c8751dac0c4eb0aea0b74", "title": "The James-Lange theory of emotions: a critical examination and an alternative theory. By Walter B. Cannon, 1927.", "text": ""} {"_id": "224f751d4691515b3f8010d12660c70dd62336b8", "title": "DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features", "text": "We propose DeepHand to estimate the 3D pose of a hand using depth data from commercial 3D sensors. We discriminatively train convolutional neural networks to output a low dimensional activation feature given a depth map. 
This activation feature vector is representative of the global or local joint angle parameters of a hand pose. We efficiently identify 'spatial' nearest neighbors to the activation feature, from a database of features corresponding to synthetic depth maps, and store some 'temporal' neighbors from previous frames. Our matrix completion algorithm uses these 'spatio-temporal' activation features and the corresponding known pose parameter values to estimate the unknown pose parameters of the input feature vector. Our database of activation features supplements large viewpoint coverage and our hierarchical estimation of pose parameters is robust to occlusions. We show that our approach compares favorably to state-of-the-art methods while achieving real time performance (≈ 32 FPS) on a standard computer."} {"_id": "486eb944129e0d90a7d2ef8d6085fd482c9be6c5", "title": "Correctness By Construction: Better Can Also Be Cheaper", "text": "In December 1999 CrossTalk [3], David Cook provided a well-reasoned historical analysis of programming language development and considered the role languages play in the software development process. The article was valuable because it showed that programming language developments are not sufficient to ensure success; however, it would be dangerous to conclude from this that they are not necessary for success. Cook rightly identifies other issues such as requirements capture, specifications, and verification and validation (V&V) that need to be addressed. Perhaps we need to look at programming languages not just in terms of their ability to code some particular design but in the influence the language has on some of these other vital aspects of the development process. The key notion is that of the benefit of a precise language or language subset. If the term subset has set anyone thinking \"oh no, not another coding standard,\" then read on, the topic is much more interesting and useful than that! Language Issues: Programming languages have evolved in three main ways. First came improvements in structure; then attempts at improving compile-time error detection through such things as strong typing; and, most significantly, facilities to improve our ability to express abstractions. All of these have shaped the way we think about problem solving. However, programming languages have not evolved in their precision of expression. In fact, they may have actually gotten worse since the meaning of a sample of machine code is exact and unequivocal, whereas the meaning of the constructs of typical modern high-order languages is substantially less certain. The evolution of C into C++ certainly improved its ability to express design abstractions but, if anything, the predictability of the compiled code decreased. These ambiguities arise either from deficiencies in the original language definition or from implementation freedoms given to the compiler writer for ease of implementation or efficiency reasons. None of this may look like a very serious problem. We can still do code walk-throughs and reviews and, after all, we still have to do dynamic testing that should flush out any remaining ambiguities. In fact the evidence is quite strong that it does matter because it creates an environment where we are encouraged to make little attempt to reason about the software we are producing at each stage of its development.
Since we typically do not have formal mathematical specifications and we use imprecise …"} {"_id": "15fc05b9da56764192d56036721a6a19239c07fc", "title": "A lifelong learning perspective for mobile robot control", "text": "Designing robots that learn by themselves to perform complex real-world tasks is a still-open challenge for the field of Robotics and Artificial Intelligence. In this paper we present the robot learning problem as a lifelong problem, in which a robot faces a collection of tasks over its entire lifetime. Such a scenario provides the opportunity to gather general-purpose knowledge that transfers across tasks. We illustrate a particular learning mechanism, explanation-based neural network learning, that transfers knowledge between related tasks via neural network action models. The learning approach is illustrated using a mobile robot, equipped with visual, ultrasonic and laser sensors. In less than 10 minutes of operation time, the robot is able to learn to navigate to a marked target object in a natural office environment."} {"_id": "5a3e2899deed746f1513708f1f0f24a25f4a0750", "title": "3D Point Cloud Registration for Localization Using a Deep Neural Network Auto-Encoder", "text": "We present an algorithm for registration between a large-scale point cloud and a close-proximity scanned point cloud, providing a localization solution that is fully independent of prior information about the initial positions of the two point cloud coordinate systems. The algorithm, denoted LORAX, selects super-points–local subsets of points–and describes the geometric structure of each with a low-dimensional descriptor. These descriptors are then used to infer potential matching regions for an efficient coarse registration process, followed by a fine-tuning stage. The set of super-points is selected by covering the point clouds with overlapping spheres, and then filtering out those of low-quality or nonsalient regions. The descriptors are computed using state-of-the-art unsupervised machine learning, utilizing the technology of deep neural network based auto-encoders. This novel framework provides a strong alternative to the common practice of using manually designed key-point descriptors for coarse point cloud registration. Utilizing super-points instead of key-points allows the available geometrical data to be better exploited to find the correct transformation. Encoding local 3D geometric structures using a deep neural network auto-encoder instead of traditional descriptors continues the trend seen in other computer vision applications and indeed leads to superior results. The algorithm is tested on challenging point cloud registration datasets, and its advantages over previous approaches as well as its robustness to density changes, noise, and missing data are shown."} {"_id": "89a3c09e0a4c54f89a7237be92d3385116030efc", "title": "Phishing Detection Taxonomy for Mobile Device", "text": "Phishing is a social engineering attack that has now reached mobile devices. According to a security report by Lookout [1], 30% of Lookout users click on an unsafe link each year while using a mobile device. Few phishing detection techniques have been applied on mobile devices, and a review of the existing detection techniques is still needed. This paper addresses current trends in phishing detection for mobile devices and identifies significant criteria for improving phishing detection techniques on mobile devices.
To this end, existing research on phishing detection techniques for computers and mobile devices is compared and analysed, and the outcome of the analysis serves as a guideline for proposing a generic phishing detection taxonomy for mobile devices."} {"_id": "c59f39796e0f8e733f44b1cfe374cfe76834dcf8", "title": "Bandwidth formula for Linear FMCW radar waveforms", "text": "The International Telecommunications Union provides recommendations regarding spectral emission bounds for primary radar systems. These bounds are currently in review and are defined in terms of spectral occupancy, necessary bandwidth, 40 dB bandwidth and out-of-band roll-off rates. Here we derive out-of-band domain spectral envelopes, bandwidth formula and roll-off rates for various Linear FMCW radar waveforms including sawtooth (LFMCW), Quadratic Phase Coded LFMCW, LFM Pulse Train, and Hann amplitude tapered LFMCW."} {"_id": "585654039c441a15cdda936902f0f1f9b7498a89", "title": "Gold Fingers: 3D Targets for Evaluating Capacitive Readers", "text": "With capacitive fingerprint readers being increasingly used for access control as well as for smartphone unlock and payments, there is a growing interest among metrology agencies (e.g., the National Institute of Standards and Technology) to develop standard artifacts (targets) and procedures for repeatable evaluation of capacitive readers. We present our design and fabrication procedures to create conductive 3D targets (gold fingers) for capacitive readers. Wearable 3D targets with known feature markings (e.g., fingerprint ridge flow and ridge spacing) are first fabricated using a high-resolution 3D printer. A sputter coating process is subsequently used to deposit a thin layer (~300 nm) of conductive materials (titanium and gold) on 3D printed targets. The wearable gold finger targets are used to evaluate a PIV-certified single-finger capacitive reader as well as small-area capacitive readers embedded in smartphones and access control terminals. In addition, we show that a simple procedure to create 3D printed spoofs with conductive carbon coating is able to successfully spoof a PIV-certified single-finger capacitive reader as well as a capacitive reader embedded in an access control terminal."} {"_id": "7e0f013e85eff9b089f58d9a3e98605ae1a7ba18", "title": "On Training Targets for Supervised Speech Separation", "text": "Formulation of speech separation as a supervised learning problem has shown considerable promise. In its simplest form, a supervised learning algorithm, typically a deep neural network, is trained to learn a mapping from noisy features to a time-frequency representation of the target of interest. Traditionally, the ideal binary mask (IBM) is used as the target because of its simplicity and large speech intelligibility gains. The supervised learning framework, however, is not restricted to the use of binary targets. In this study, we evaluate and compare separation results by using different training targets, including the IBM, the target binary mask, the ideal ratio mask (IRM), the short-time Fourier transform spectral magnitude and its corresponding mask (FFT-MASK), and the Gammatone frequency power spectrum. Our results in various test conditions reveal that the two ratio mask targets, the IRM and the FFT-MASK, outperform the other targets in terms of objective intelligibility and quality metrics. In addition, we find that masking-based targets, in general, are significantly better than spectral-envelope-based targets.
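The ideal ratio mask compared above is commonly written as IRM = (S²/(S² + N²))^β with β = 0.5, computed per time-frequency unit. A tiny numpy sketch on random spectrogram-shaped arrays:

```python
# Sketch: ideal ratio mask (IRM) on time-frequency power spectra,
# commonly defined as IRM = (S^2 / (S^2 + N^2))**beta with beta = 0.5.
# Arrays stand in for |STFT|^2 spectrograms; values are random here.
import numpy as np

rng = np.random.default_rng(0)
speech_pow = rng.random((257, 100))   # freq bins x frames (illustrative)
noise_pow = rng.random((257, 100))

beta = 0.5
irm = (speech_pow / (speech_pow + noise_pow + 1e-12)) ** beta

# Applying the mask to the mixture magnitude attenuates noise-dominated
# time-frequency units while keeping speech-dominated ones.
mixture_mag = np.sqrt(speech_pow + noise_pow)
enhanced_mag = irm * mixture_mag
print(irm.min(), irm.max())  # values lie in [0, 1]
```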
We also present comparisons with recent methods in non-negative matrix factorization and speech enhancement, which show clear performance advantages of supervised speech separation."} {"_id": "c1cd441dad61b9d9d294a19a7043adb1582f786b", "title": "Speech recognition using factorial hidden Markov models for separation in the feature space", "text": "This paper proposes an algorithm for the recognition and separation of speech signals in non-stationary noise, such as another speaker. We present a method to combine hidden Markov models (HMMs) trained for the speech and noise into a factorial HMM to model the mixture signal. Robustness is obtained by separating the speech and noise signals in a feature domain, which discards unnecessary information. We use mel-cepstral coefficients (MFCCs) as features, and estimate the distribution of mixture MFCCs from the distributions of the target speech and noise. A decoding algorithm is proposed for finding the state transition paths and estimating gains for the speech and noise from a mixture signal. Simulations were carried out using speech material where two speakers were mixed at various levels, and even for high noise level (9 dB above the speech level), the method produced relatively good (60% word recognition accuracy) results. Audio demonstrations are available at www.cs.tut.fi/~tuomasv."} {"_id": "ecd4bc32bb2717c96f76dd100fcd1255a07bd656", "title": "Roles of Pre-Training and Fine-Tuning in Context-Dependent DBN-HMMs for Real-World Speech Recognition", "text": "Recently, deep learning techniques have been successfully applied to automatic speech recognition tasks: first to phonetic recognition with context-independent deep belief network (DBN) hidden Markov models (HMMs) and later to large vocabulary continuous speech recognition using context-dependent (CD) DBN-HMMs. In this paper, we report our most recent experiments designed to understand the roles of the two main phases of DBN learning, pre-training and fine-tuning, in the recognition performance of a CD-DBN-HMM based large-vocabulary speech recognizer. As expected, we show that pre-training can initialize weights to a point in the space where fine-tuning can be effective and thus is crucial in training deep structured models. However, a moderate increase of the amount of unlabeled pre-training data has an insignificant effect on the final recognition results as long as the original training size is sufficiently large to initialize the DBN weights. On the other hand, with additional labeled training data, the fine-tuning phase of DBN training can significantly improve the recognition accuracy."} {"_id": "0b3cfbf79d50dae4a16584533227bb728e3522aa", "title": "Long Short-Term Memory", "text": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1).
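A minimal numpy sketch of a single LSTM step, with multiplicative gates controlling access to the cell state (the constant error carousel). The weights are random, and the forget gate is a later standard addition rather than part of the original formulation:

```python
# Minimal sketch of a single LSTM step in numpy: multiplicative gates
# control access to the cell state ("constant error carousel"). Weights
# are random; note the forget gate is a later standard addition (the
# original formulation used only input and output gates).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    z = W @ np.concatenate([x, h]) + b          # all four gates at once
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g                       # gated carousel update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for t in range(5):                              # unroll over a toy sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.round(3))
```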
Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."} {"_id": "0c7d7b4c546e38a4097a97bf1d16a60012916758", "title": "The Kaldi Speech Recognition Toolkit", "text": "We describe the design of Kaldi, a free, open-source toolkit for speech recognition research. Kaldi provides a speech recognition system based on finite-state transducers (using the freely available OpenFst), together with detailed documentation and scripts for building complete recognition systems. Kaldi is written in C++, and the core library supports modeling of arbitrary phonetic-context sizes, acoustic modeling with subspace Gaussian mixture models (SGMM) as well as standard Gaussian mixture models, together with all commonly used linear and affine transforms. Kaldi is released under the Apache License v2.0, which is highly nonrestrictive, making it suitable for a wide community of users."} {"_id": "9eb67ca57fecc691853636507e2b852de3f56fac", "title": "Analysis of the Paragraph Vector Model for Information Retrieval", "text": "Previous studies have shown that semantically meaningful representations of words and text can be acquired through neural embedding models. In particular, paragraph vector (PV) models have shown impressive performance in some natural language processing tasks by estimating a document (topic) level language model. Integrating the PV models with traditional language model approaches to retrieval, however, produces unstable performance and limited improvements. In this paper, we formally discuss three intrinsic problems of the original PV model that restrict its performance in retrieval tasks. We also describe modifications to the model that make it more suitable for the IR task, and show their impact through experiments and case studies. The three issues we address are (1) the unregulated training process of PV is vulnerable to short document over-fitting that produces length bias in the final retrieval model; (2) the corpus-based negative sampling of PV leads to a weighting scheme for words that overly suppresses the importance of frequent words; and (3) the lack of word-context information makes PV unable to capture word substitution relationships."} {"_id": "6c7e55e3b53029296097ee07ba75b6b6a98b14e5", "title": "Intrusion detection system based on the analysis of time intervals of CAN messages for in-vehicle network", "text": "The Controller Area Network (CAN) bus in vehicles is a de facto standard for serial communication, providing an efficient, reliable and economical link between Electronic Control Units (ECUs). However, the CAN bus does not have enough security features to protect itself from inside or outside attacks. An Intrusion Detection System (IDS) is one of the best ways to enhance the vehicle security level. Unlike traditional IDSs for network security, an IDS for vehicles requires a light-weight detection algorithm because of the limited computing power of the electronic devices residing in cars. In this paper, we propose a light-weight intrusion detection algorithm for in-vehicle networks based on the analysis of time intervals of CAN messages.
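A rough sketch of the time-interval idea described above: learn each CAN ID's typical inter-arrival time from attack-free traffic, then flag an ID whose observed interval shrinks abnormally, as it would under message injection. The fixed threshold factor below is an illustrative assumption, not the paper's exact decision rule.

```python
from collections import defaultdict

def learn_intervals(clean_log):
    """clean_log: iterable of (timestamp_seconds, can_id) from attack-free traffic."""
    last, sums, counts = {}, defaultdict(float), defaultdict(int)
    for t, cid in clean_log:
        if cid in last:
            sums[cid] += t - last[cid]
            counts[cid] += 1
        last[cid] = t
    return {cid: sums[cid] / counts[cid] for cid in counts}

def detect(stream, mean_interval, factor=0.5):
    """Flag a CAN ID when its inter-arrival time drops below factor * learned mean."""
    last, alerts = {}, []
    for t, cid in stream:
        if cid in last and cid in mean_interval:
            if (t - last[cid]) < factor * mean_interval[cid]:
                alerts.append((t, cid))
        last[cid] = t
    return alerts

clean = [(i * 0.01, 0x316) for i in range(1000)]   # 10 ms period, normal traffic
model = learn_intervals(clean)
attack = [(i * 0.002, 0x316) for i in range(100)]  # injected frames at 2 ms
print(len(detect(attack, model)))  # nearly all injected frames flagged
```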
We captured CAN messages from cars made by a well-known manufacturer and performed three kinds of message injection attacks. As a result, we find that the time interval is a meaningful feature for detecting attacks in CAN traffic. Our intrusion detection system detects all of the message injection attacks without producing false positives."} {"_id": "b3ff14e4c9b939841dec4d877256e47d12817638", "title": "Automatic Gloss Finding for a Knowledge Base using Ontological Constraints", "text": "While there has been much research on automatically constructing structured Knowledge Bases (KBs), most of it has focused on generating facts to populate a KB. However, a useful KB must go beyond facts. For example, glosses (short natural language definitions) have been found to be very useful in tasks such as Word Sense Disambiguation. However, the important problem of Automatic Gloss Finding, i.e., assigning glosses to entities in an initially gloss-free KB, is relatively unexplored. We address that gap in this paper. In particular, we propose GLOFIN, a hierarchical semi-supervised learning algorithm for this problem which makes effective use of limited amounts of supervision and available ontological constraints. To the best of our knowledge, GLOFIN is the first system for this task. Through extensive experiments on real-world datasets, we demonstrate GLOFIN's effectiveness. It is encouraging to see that GLOFIN outperforms other state-of-the-art SSL algorithms, especially in low supervision settings. We also demonstrate GLOFIN's robustness to noise through experiments on a wide variety of KBs, ranging from user contributed (e.g., Freebase) to automatically constructed (e.g., NELL). To facilitate further research in this area, we have made the datasets and code used in this paper publicly available."} {"_id": "6eb662ef35ec514429c5ba533b212a3a512c3517", "title": "Monocular-SLAM-based navigation for autonomous micro helicopters in GPS-denied environments", "text": "Autonomous micro aerial vehicles (MAVs) will soon play a major role in tasks such as search and rescue, environment monitoring, surveillance, and inspection. They allow us to easily access environments to which no humans or other vehicles can get access. This reduces the risk for both the people and the environment. For the above applications, it is, however, a requirement that the vehicle is able to navigate without using GPS, without relying on a preexisting map, and without specific assumptions about the environment. This will allow operations in unstructured, unknown, and GPS-denied environments. We present a novel solution for the task of autonomous navigation of a micro helicopter through a completely unknown environment by using solely a single camera and inertial sensors onboard. Many existing solutions suffer from the problem of drift in the xy plane or from the dependency on a clean GPS signal. The novelty of the approach presented here is to use a monocular simultaneous localization and mapping (SLAM) framework to stabilize the vehicle in six degrees of freedom. In this way, we overcome both the drift problem and the GPS dependency. The pose estimated by the visual SLAM algorithm is used in a linear optimal controller that allows us to perform all basic maneuvers such as hovering, set point and trajectory following, vertical takeoff, and landing. All calculations including SLAM and controller are running in real time and online while the helicopter is flying. No offline processing or preprocessing is done.
We show real experiments that demonstrate that the vehicle can fly autonomously in an unknown and unstructured environment. To the best of our knowledge, the work presented here describes the first aerial vehicle that uses onboard monocular vision as a main sensor to navigate through an unknown GPS-denied environment, independently of any external artificial aids. © 2011 Wiley Periodicals, Inc."} {"_id": "0286bef6d6da7990a2c50aefd5543df2ce481fbb", "title": "Combinatorial Pure Exploration of Multi-Armed Bandits", "text": "We study the combinatorial pure exploration (CPE) problem in the stochastic multi-armed bandit setting, where a learner explores a set of arms with the objective of identifying the optimal member of a decision class, which is a collection of subsets of arms with certain combinatorial structures such as size-K subsets, matchings, spanning trees or paths, etc. The CPE problem represents a rich class of pure exploration tasks which covers not only many existing models but also novel cases where the object of interest has a nontrivial combinatorial structure. In this paper, we provide a series of results for the general CPE problem. We present general learning algorithms which work for all decision classes that admit offline maximization oracles in both fixed confidence and fixed budget settings. We prove problem-dependent upper bounds of our algorithms. Our analysis exploits the combinatorial structures of the decision classes and introduces a new analytic tool. We also establish a general problem-dependent lower bound for the CPE problem. Our results show that the proposed algorithms achieve the optimal sample complexity (within logarithmic factors) for many decision classes. In addition, applying our results back to the problems of top-K arms identification and multiple bandit best arms identification, we recover the best available upper bounds up to constant factors and partially resolve a conjecture on the lower bounds."} {"_id": "3f6f45584d7f71e47118bdcd12826995998871d1", "title": "How to Use the SOINN Software: User's Guide (Version 1.0)", "text": "The Self-Organizing Incremental Neural Network (SOINN) is an unsupervised classifier that is capable of online incremental learning. Studies have been performed not only for improving the SOINN, but also for applying it to various problems. Furthermore, using the SOINN, more intelligent functions are achieved, such as association, reasoning, and so on. In this paper, we show how to use the SOINN software and to apply it to the above problems."} {"_id": "185f3fcc78ea32ddbad3a5ebdaefa9504bfb3f5e", "title": "Pattern Recognition and Computer Vision", "text": "Chinese painting is distinct from other art in that the painting elements are exhibited by complex water-and-ink diffusion, showing gray, white and black visual effects. Rendering such a water-and-ink painting in a polychrome style is a challenging problem. In this paper, we propose a novel style transfer method for Chinese painting. We first decompose the Chinese painting into adaptive patches based on its structure, and locally colorize the painting. Then, the colorized image is used to guide the process of texture transfer, which is modeled as a Markov Random Field (MRF). More precisely, we improve the classic texture transfer algorithm by modifying the compatibility functions for searching the optimal matching, using the chromatism information.
The experimental results show that the proposed adaptive patches preserve the original content well while matching the example style. Moreover, we present transfer results from our method alongside recent style transfer algorithms for comparison."} {"_id": "0b6aab9ce7910938e0d60c0764dc1c09d3219b05", "title": "Combining document representations for known-item search", "text": "This paper investigates the pre-conditions for successful combination of document representations formed from structural markup for the task of known-item search. As this task is very similar to work in meta-search and data fusion, we adapt several hypotheses from those research areas and investigate them in this context. To investigate these hypotheses, we present a mixture-based language model and also examine many of the current meta-search algorithms. We find that compatible output from systems is important for successful combination of document representations. We also demonstrate that combining low performing document representations can improve performance, but not consistently. We find that the techniques best suited for this task are robust to the inclusion of poorly performing document representations. We also explore the role of variance of results across systems and its impact on the performance of fusion, with the surprising result that the correct documents have higher variance across document representations than highly ranking incorrect documents."} {"_id": "a57623e6f0de3775513b436510b2d6cd9343dc5f", "title": "Text Segmentation with Topic Modeling and Entity Coherence", "text": "This paper describes a system which uses entity and topic coherence for improved Text Segmentation (TS) accuracy. First, the Latent Dirichlet Allocation (LDA) algorithm was used to obtain topics for sentences in the document. We then performed entity mapping across a window in order to discover the transition of entities within sentences. We used the information obtained to support our LDA-based boundary detection for proper boundary adjustment. We report the significance of the entity coherence approach as well as the superiority of our algorithm over existing works."} {"_id": "cd472598052666440b8063e7259b35b78a45d757", "title": "Factors Affecting Online Impulse Buying : Evidence from Chinese Social Commerce Environment", "text": "First, the purpose of this study is to examine the impact of the situational variables scarcity and serendipity on online impulse buying (OIB) in the Chinese social commerce (SC) environment. Second, the study further assesses the moderating role of five dimensions of hedonic shopping value. Data were gathered from 671 online shoppers in two metropolitan cities of China, Beijing and Shanghai. Structural equation modeling, performed with AMOS 23, was used to test the study hypotheses. The results confirm that situational factors positively influence online impulse buying among Chinese online shoppers in the SC environment. Four dimensions of hedonic shopping value (social shopping, relaxation shopping, adventure shopping and idea shopping) positively moderate the relationship between serendipity and OIB; value shopping shows no significant moderating effect. The findings are helpful to online retailers and SC web developers, recommending that they take scarcity and serendipity into consideration. These factors have the potential to trigger consumers' hedonic shopping tendencies and the urge to buy impulsively.
Unlike previous work, which did not succeed in incorporating all of these factors into one study, this study incorporates irrational and unplanned consumption alongside rational and planned consumption in the same research."} {"_id": "75225142acc421a15cb1cd5b633f6de5fc036586", "title": "Short Tamil sentence similarity calculation using knowledge-based and corpus-based similarity measures", "text": "Sentence similarity calculation plays an important role in text processing-related research. Many unsupervised techniques such as knowledge-based techniques, corpus-based techniques, string similarity based techniques, and graph alignment techniques are available to measure sentence similarity. However, none of these techniques had been evaluated on Tamil. In this paper, we present the first-ever system to measure semantic similarity for Tamil short phrases using a hybrid approach that makes use of knowledge-based and corpus-based techniques. We tested this system with 2000 general sentence pairs and 100 mathematical sentence pairs. For the dataset of 2000 sentence pairs, this approach achieved a Mean Squared Error of 0.195 and a Pearson Correlation factor of 0.815. For the 100 mathematical sentence pairs, this approach achieved 85% accuracy."} {"_id": "b8759b1ea437802d9a1c2a99d22932a960b7beec", "title": "Firewall Security: Policies, Testing and Performance Evaluation", "text": "This paper explores the relationship between firewall security and performance for distributed systems. Experiments are conducted to set firewall security to seven different levels and to quantify their performance impacts. These firewall security levels are formulated, designed, implemented, and tested phase by phase under an experimental environment in which all performed tests are evaluated and compared. Based on the test results, the impacts of the various firewall security levels on system performance with respect to transaction time and latency are measured and analyzed. It is interesting to note that the intuitive belief that more security results in less performance does not always hold in firewall testing. The results reveal that a significant impact of enhanced security on performance is only observed under particular scenarios, and thus the two are not necessarily inversely related. We also discuss the tradeoff between security and performance."} {"_id": "4df321947a2ac4365584a01d78a780913b171cf5", "title": "Datasets for Aspect-Based Sentiment Analysis in French", "text": "Aspect Based Sentiment Analysis (ABSA) is the task of mining and summarizing opinions from text about specific entities and their aspects. This article describes two datasets for the development and testing of ABSA systems for French which comprise user reviews annotated with relevant entities, aspects and polarity values. The first dataset contains 457 restaurant reviews (2365 sentences) for training and testing ABSA systems, while the second contains 162 museum reviews (655 sentences) dedicated to out-of-domain evaluation. Both datasets were built as part of SemEval-2016 Task 5 \u201cAspect-Based Sentiment Analysis\u201d where seven different languages were represented, and are publicly available for research purposes. This article provides examples and statistics by annotation type, summarizes the annotation guidelines and discusses their cross-lingual applicability.
It also explains how the data was used for evaluation in the SemEval ABSA task and briefly presents the results obtained for French."} {"_id": "dd8da9a4a0a5ef1f19ca71caf5eba11192dd2c41", "title": "A survey of domain adaptation for statistical machine translation", "text": "Differences in domains of language use between training data and test data have often been reported to result in performance degradation for phrase-based machine translation models. Throughout the past decade or so, a large body of work has aimed at exploring domain-adaptation methods to improve system performance in the face of such domain differences. This paper provides a systematic survey of domain-adaptation methods for phrase-based machine-translation systems. The survey starts out by outlining the sources of errors in various components of phrase-based models due to domain change, including lexical selection, reordering and optimization. Subsequently, it outlines the different lines of research on domain adaptation in the literature, and surveys the existing work within these lines, discussing how these approaches differ and how they relate to each other."} {"_id": "5fbe9d4e616632972e86c31fbb4b1dff4897e59e", "title": "Design of adaptive hypermedia learning systems: A cognitive style approach", "text": "In the past decade, a number of adaptive hypermedia learning systems have been developed. However, most of these systems tailor presentation content and navigational support solely according to students\u2019 prior knowledge. On the other hand, previous research suggested that cognitive styles significantly affect student learning because they refer to how learners process and organize information. To this end, the study presented in this paper developed an adaptive hypermedia learning system tailored to students\u2019 cognitive styles, with an emphasis on Pask\u2019s Holist\u2013Serialist dimension. How students react to this adaptive hypermedia learning system, including both learning performance and perceptions, was examined in this study. Forty-four undergraduate and postgraduate students participated in the study. The findings indicated that, in general, adapting to cognitive styles improves student learning. The results also showed that the adaptive hypermedia learning system has a greater effect on students\u2019 perceptions than on their performance. The implications of these results for the design of adaptive hypermedia learning systems are discussed."} {"_id": "242b5b545bb17879a73161134bc84d5ba3e3cf35", "title": "Memory resource management in VMware ESX server", "text": "VMware ESX Server is a thin software layer designed to multiplex hardware resources efficiently among virtual machines running unmodified commodity operating systems. This paper introduces several novel ESX Server mechanisms and policies for managing memory. A ballooning technique reclaims the pages considered least valuable by the operating system running in a virtual machine. An idle memory tax achieves efficient memory utilization while maintaining performance isolation guarantees. Content-based page sharing and hot I/O page remapping exploit transparent page remapping to eliminate redundancy and reduce copying overheads.
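The content-based page sharing mechanism mentioned above reduces to a hash-then-verify match over page contents. The following toy Python sketch shows only that matching logic; the page size, hash choice, and in-memory table are illustrative assumptions (a real hypervisor would remap matched pages copy-on-write rather than return them).

```python
import hashlib

PAGE_SIZE = 4096  # assumed x86 page size

class PageSharer:
    """Toy content-based page dedup: hash -> candidate match -> full compare."""
    def __init__(self):
        self.by_hash = {}   # digest -> canonical page bytes
        self.shared = 0

    def insert(self, page: bytes) -> bytes:
        assert len(page) == PAGE_SIZE
        digest = hashlib.sha1(page).digest()
        canon = self.by_hash.get(digest)
        if canon is not None and canon == page:  # verify: hashes can collide
            self.shared += 1                     # would map copy-on-write here
            return canon
        self.by_hash[digest] = page
        return page

sharer = PageSharer()
zero_page = bytes(PAGE_SIZE)
for _ in range(10):
    sharer.insert(zero_page)   # first call stores, the next 9 are shared
print(sharer.shared)           # 9
```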
These techniques are combined to efficiently support virtual machine workloads that overcommit memory."} {"_id": "596785ca2d338ebcdeac1fc29bf5357045574b2b", "title": "An architectural pattern language of cloud-based applications", "text": "The properties of clouds -- elasticity, pay-per-use, and standardization of the runtime infrastructure -- enable cloud providers and users alike to benefit from economies of scale, faster provisioning times, and reduced runtime costs. However, to achieve these benefits, application architects and developers have to respect the characteristics of the cloud environment.\n To reduce the complexity of cloud application architectures, we propose a pattern-based approach for cloud application design and development. We defined a pattern format to describe the principles of cloud computing, available cloud offerings, and cloud application architectures. Based on this format we developed an architectural pattern language of cloud-based applications: through the interrelation of patterns for cloud offering descriptions and cloud application architectures, developers are guided during the identification of cloud environments and architecture patterns applicable to their problems. We describe how we identified patterns in various information sources and in existing, productively used applications, give an overview of previously discovered patterns, and introduce one new pattern. Further, we propose a framework for the organization of patterns and the guidance of developers during pattern instantiation."} {"_id": "c70ad19c90491e2de8de686b6a49f9bbe44692c0", "title": "Seeing with Humans: Gaze-Assisted Neural Image Captioning", "text": "Gaze reflects how humans process visual scenes and is therefore increasingly used in computer vision systems. Previous work demonstrated the potential of gaze for object-centric tasks, such as object localization and recognition, but it remains unclear if gaze can also be beneficial for scene-centric tasks, such as image captioning. We present a new perspective on gaze-assisted image captioning by studying the interplay between human gaze and the attention mechanism of deep neural networks. Using a public large-scale gaze dataset, we first assess the relationship between state-of-the-art object and scene recognition models, bottom-up visual saliency, and human gaze. We then propose a novel split attention model for image captioning. Our model integrates human gaze information into an attention-based long short-term memory architecture, and allows the algorithm to allocate attention selectively to both fixated and non-fixated image regions. Through evaluation on the COCO/SALICON datasets we show that our method improves image captioning performance and that gaze can complement machine attention for semantic scene understanding tasks."} {"_id": "0d836a0461e9c21fa7a25622115de55b81ceb446", "title": "Estimation of Presentations Skills Based on Slides and Audio Features", "text": "This paper proposes a simple method for estimating the quality of student oral presentations. It is based on the study and analysis of features extracted from the audio and digital slides of 448 presentations. The main goal of this work is to automatically predict the values assigned by professors to different criteria in a presentation evaluation rubric. Machine Learning methods were used to create several models that classify students into two clusters: high and low performers. The models created from slide features were accurate up to 65%.
The most relevant features for the slide-based models were the number of words, images, and tables, and the maximum font size. The audio-based models reached up to 69% accuracy, with pitch- and filled-pause-related features being the most significant. The relatively high accuracy obtained with these very simple features encourages the development of automatic estimation tools for improving presentation skills."} {"_id": "1627ce7f1429366829df3d49e28b8ecd7f7597b5", "title": "Diplomat: Using Delegations to Protect Community Repositories", "text": "Community repositories, such as Docker Hub, PyPI, and RubyGems, are bustling marketplaces that distribute software. Even though these repositories use common software signing techniques (e.g., GPG and TLS), attackers can still publish malicious packages after a server compromise. This is mainly because a community repository must have immediate access to signing keys in order to certify the large number of new projects that are registered each day. This work demonstrates that community repositories can offer compromise-resilience and real-time project registration by employing mechanisms that disambiguate trust delegations. This is done through two delegation mechanisms that provide flexibility in the amount of trust assigned to different keys. Using this idea we implement Diplomat, a software update framework that supports security models with different security / usability tradeoffs. By leveraging Diplomat, a community repository can achieve near-perfect compromise-resilience while allowing real-time project registration. For example, when Diplomat is deployed and configured to maximize security on Python\u2019s community repository, less than 1% of users will be at risk even if an attacker controls the repository and is undetected for a month. Diplomat is being integrated by Ruby, CoreOS, Haskell, OCaml, and Python, and has already been deployed by Flynn, LEAP, and Docker."} {"_id": "69613390ca76bf103791ef251e1568deb5fe91dd", "title": "Satellite Image Classification Methods and Techniques: A Review", "text": "The satellite image classification process involves grouping image pixel values into meaningful categories. Several satellite image classification methods and techniques are available. Satellite image classification methods can be broadly classified into three categories: 1) automatic, 2) manual, and 3) hybrid. All three methods have their own advantages and disadvantages. The majority of satellite image classification methods fall under the first category. Satellite image classification requires the selection of an appropriate classification method based on the requirements. The current research work is a study of satellite image classification methods and techniques. The research work also compares various researchers' comparative results on satellite image classification methods."} {"_id": "456f85fb61fa5f137431e6d12c5fc73cc2ebaced", "title": "Biometric Gait Authentication Using Accelerometer Sensor", "text": "This paper presents biometric user authentication based on a person\u2019s gait. Unlike most previous gait recognition approaches, which are based on machine vision techniques, in our approach gait patterns are extracted from a physical device attached to the lower leg. From the output of the device, accelerations in three directions (vertical, forward-backward, and sideways motion of the lower leg) are obtained. A combination of these accelerations is used for authentication.
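To illustrate one way such a combination of accelerations can be used before the results that follow, here is a hedged NumPy sketch: fuse the three axes into a resultant magnitude signal and score two recordings by histogram similarity. The bin count and the L1 histogram distance are illustrative assumptions; the paper's exact combination and metric may differ.

```python
import numpy as np

def resultant(ax, ay, az):
    """Combine vertical / forward-backward / sideways accelerations."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def histogram_similarity(sig_a, sig_b, bins=40):
    """Score two gait recordings by overlap of normalized amplitude histograms."""
    lo = min(sig_a.min(), sig_b.min())
    hi = max(sig_a.max(), sig_b.max())
    ha, _ = np.histogram(sig_a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(sig_b, bins=bins, range=(lo, hi))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return 1.0 - 0.5 * np.abs(ha - hb).sum()  # 1.0 means identical histograms

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
walk1 = resultant(np.sin(2*t), 0.5*np.cos(2*t), 0.1*rng.standard_normal(1000))
walk2 = resultant(np.sin(2*t + 0.1), 0.5*np.cos(2*t), 0.1*rng.standard_normal(1000))
print(histogram_similarity(walk1, walk2))  # close to 1 for the same walker
```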
Applying two different methods, histogram similarity and cycle length, we achieved equal error rates (EERs) of 5% and 9%, respectively."} {"_id": "4e116ae01d873ad67fb2ab6da5cb4feeb24bbcb5", "title": "Decision making regarding Smith-Petersen vs. pedicle subtraction osteotomy vs. vertebral column resection for spinal deformity.", "text": "STUDY DESIGN\nAuthor experience and literature review.\n\n\nOBJECTIVES\nTo investigate and discuss decision-making on when to perform a Smith-Petersen osteotomy as opposed to a pedicle subtraction procedure and/or a vertebral column resection.\n\n\nSUMMARY OF BACKGROUND DATA\nArticles have been published regarding Smith-Petersen osteotomies, pedicle subtraction procedures, and vertebral column resections. Expectations and complications have been reviewed. However, decision-making regarding which of the 3 procedures is most useful for a particular spinal deformity case is not clearly investigated.\n\n\nMETHODS\nDiscussed in this manuscript is the author's experience and the literature regarding the operative options for a fixed coronal or sagittal deformity.\n\n\nRESULTS\nThere are roles for Smith-Petersen osteotomy, pedicle subtraction, and vertebral column resection. Each has specific applications and potential complications.\n\n\nCONCLUSION\nAs the magnitude of resection increases, the ability to correct deformity improves, but also the risk of complication increases. Therein, an understanding of potential applications and complications is helpful."} {"_id": "80ca5505a9ba00e91283519a75e01840a15a74bf", "title": "dCompaction: Delayed Compaction for the LSM-Tree", "text": "Key-value (KV) stores have become a backbone of large-scale applications in today\u2019s data centers. Write-optimized data structures like the Log-Structured Merge-tree (LSM-tree) and their variants are widely used in KV storage systems like BigTable and RocksDB. A conventional LSM-tree organizes KV items into multiple, successively larger components, and uses compaction to push KV items from one smaller component to another adjacent larger component until the KV items reach the largest component. Unfortunately, the current compaction scheme incurs significant write amplification due to repeated KV item reads and writes, and thus results in poor throughput. We propose a new compaction scheme, delayed compaction (dCompaction), that decreases write amplification. dCompaction postpones some compactions and gathers them into the following compaction. In this way, it avoids KV item reads and writes during compaction, and consequently improves the throughput of LSM-tree based KV stores. We implement dCompaction on RocksDB, and conduct extensive experiments. Validation using the YCSB framework shows that, compared with RocksDB, dCompaction achieves about 30% higher write performance with comparable read performance."} {"_id": "131e4a4a40c29737f39e8cb0f4e59864ca1a1b34", "title": "LaneQuest: An accurate and energy-efficient lane detection system", "text": "Current outdoor localization techniques fail to provide the required accuracy for estimating the car's lane. In this paper, we present LaneQuest: a system that leverages the ubiquitous and low-energy inertial sensors available in commodity smart-phones to provide an accurate estimate of the car's current lane. LaneQuest leverages hints from the phone sensors about the surrounding environment to detect the car's lane.
For example, a car making a right turn will most probably end up in the right-most lane, a car passing over a pothole will be in a specific lane, and the car's angular velocity when driving through a curve reflects its lane. Our investigation shows that there are ample opportunities in the environment, i.e. lane \u201canchors\u201d, that provide cues about the car's lane. To handle ambiguous locations, sensor noise, and fuzzy lane anchors, LaneQuest employs a novel probabilistic lane estimation algorithm. Furthermore, it uses an unsupervised crowd-sourcing approach to learn the position and lane-span distribution of the different lane-level anchors. Our evaluation results, from implementations on different Android devices and 260 km of driving traces by 13 drivers in different cities, show that LaneQuest can detect the different lane-level anchors with an average precision and recall of more than 90%. This leads to accurate detection of the car's exact lane position 80% of the time, increasing to 89% of the time to within one lane. This comes with a low energy footprint, allowing LaneQuest to be implemented on energy-constrained mobile devices."} {"_id": "3cbf0dcbc36a8f70e9b2b2f46b16e5057cbd9a7d", "title": "Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance", "text": "Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation. In this paper, we present a method for reducing the granularity of the WordNet sense inventory based on the mapping to a manually crafted dictionary encoding sense hierarchies, namely the Oxford Dictionary of English. We assess the quality of the mapping and the induced clustering, and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task."} {"_id": "528fa9bb03644ba752fb9491be49b9dd1bce1d52", "title": "SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity", "text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two texts. This paper presents the results of the STS pilot task in Semeval. The training data contained 2000 sentence pairs from previously existing paraphrase datasets and machine translation evaluation resources. The test data also comprised 2000 sentences pairs for those datasets, plus two surprise datasets with 400 pairs from a different machine translation evaluation corpus and 750 pairs from a lexical resource mapping exercise. The similarity of pairs of sentences was rated on a 0-5 scale (low to high similarity) by human judges using Amazon Mechanical Turk, with high Pearson correlation scores, around 90%. 35 teams participated in the task, submitting 88 runs. The best results scored a Pearson correlation > 80%, well above a simple lexical baseline that only scored a 31% correlation. This pilot task opens an exciting way ahead, although there are still open issues, especially the evaluation metric."} {"_id": "d87ceda3042f781c341ac17109d1e94a717f5f60", "title": "WordNet : an electronic lexical database", "text": "WordNet is perhaps the most important and widely used lexical resource for natural language processing systems up to now. WordNet: An Electronic Lexical Database, edited by Christiane Fellbaum, discusses the design of WordNet from both theoretical and historical perspectives, provides an up-to-date description of the lexical database, and presents a set of applications of WordNet.
The book contains a foreword by George Miller, an introduction by Christiane Fellbaum, seven chapters from the Cognitive Sciences Laboratory of Princeton University, where WordNet was produced, and nine chapters contributed by scientists from elsewhere. Miller's foreword offers a fascinating account of the history of WordNet. He discusses the presuppositions of such a lexical database, how the top-level noun categories were determined, and the sources of the words in WordNet. He also writes about the evolution of WordNet from its original incarnation as a dictionary browser to a broad-coverage lexicon, and the involvement of different people during its various stages of development over a decade. It makes very interesting reading for casual and serious users of WordNet and anyone who is grateful for the existence of WordNet. The book is organized in three parts. Part I is about WordNet itself and consists of four chapters: \"Nouns in WordNet\" by George Miller, \"Modifiers in WordNet\" by Katherine Miller, \"A semantic network of English verbs\" by Christiane Fellbaum, and \"Design and implementation of the WordNet lexical database and search software\" by Randee Tengi. These chapters are essentially updated versions of four papers from Miller (1990). Compared with the earlier papers, the chapters in this book focus more on the underlying assumptions and rationales behind the design decisions. The description of the information contained in WordNet, however, is not as detailed as in Miller (1990). The main new additions in these chapters include an explanation of sense grouping in George Miller's chapter, a section about adverbs in Katherine Miller's chapter, observations about autohyponymy (one sense of a word being a hyponym of another sense of the same word) and autoantonymy (one sense of a word being an antonym of another sense of the same word) in Fellbaum's chapter, and Tengi's description of the Grinder, a program that converts the files the lexicographers work with to searchable lexical databases. The three papers in Part II are characterized as \"extensions, enhancements and"} {"_id": "1528def1ddbd2deb261ebb873479f27f48251031", "title": "Clustering WordNet word senses", "text": "This paper presents the results of a set of methods to cluster WordNet word senses. The methods rely on different information sources: confusion matrices from Senseval-2 Word Sense Disambiguation systems, translation similarities, hand-tagged examples of the target word senses and examples obtained automatically from the web for the target word senses. The clustering results have been evaluated using the coarse-grained word senses provided for the lexical sample in Senseval-2. We have used Cluto, a general clustering environment, in order to test different clustering algorithms. The best results are obtained for the automatically obtained examples, yielding purity values up to 84% on average over 20 nouns."} {"_id": "2445089d4277ccbec3727fecfe73eaa4cc57e414", "title": "(Meta-) Evaluation of Machine Translation", "text": "This paper evaluates the translation quality of machine translation systems for 8 language pairs: translating French, German, Spanish, and Czech to English and back. We carried out an extensive human evaluation which allowed us not only to rank the different MT systems, but also to perform higher-level analysis of the evaluation process. We measured timing and intra- and inter-annotator agreement for three types of subjective evaluation.
We measured the correlation of automatic evaluation metrics with human judgments. This meta-evaluation reveals surprising facts about the most commonly used methodologies."} {"_id": "3764e0bfcc4a196eb020d483d8c2f1822206a444", "title": "LSTM-Based Hierarchical Denoising Network for Android Malware Detection", "text": "Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning models heavily rely on expert knowledge for manual feature engineering, and such features still struggle to fully describe malware. In this paper, we present the LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to directly learn from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequences are too long for LSTM to train on due to the gradient vanishing problem. Hence, HDN uses a hierarchical structure, whose first-level LSTM computes in parallel over opcode subsequences (which we call method blocks) to learn dense representations; the second-level LSTM can then learn and detect malware through method block sequences. Considering that malicious behavior appears only in partial sequence segments, HDN uses a method block denoise module (MBDM) for data denoising, with an adaptive gradient scaling strategy based on a loss cache. We evaluate and compare HDN with the latest mainstream research on three datasets. The results show that HDN outperforms these Android malware detection methods, is able to capture longer sequence features, and has better detection efficiency than N-gram-based malware detection, which is similar to our method."} {"_id": "41bb7d4546fbc95d55342b621f95edad3b06717e", "title": "A Study on False Channel Condition Reporting Attacks in Wireless Networks", "text": "Wireless networking protocols are increasingly being designed to exploit a user's measured channel condition; we call such protocols channel-aware. Each user reports the measured channel condition to a manager of wireless resources and a channel-aware protocol uses these reports to determine how resources are allocated to users. In a channel-aware protocol, each user's reported channel condition affects the performance of every other user. The deployment of channel-aware protocols increases the risks posed by false channel-condition feedback. In this paper, we study what happens in the presence of an attacker that falsely reports its channel condition. We perform case studies on channel-aware network protocols to understand how an attack can use false feedback and how much the attack can affect network performance. The results of the case studies show that we need a secure channel condition estimation algorithm to fundamentally defend against the channel-condition misreporting attack. We design such an algorithm and evaluate our algorithm through analysis and simulation. Our evaluation quantifies the effect of our algorithm on system performance as well as the security and the performance of our algorithm."} {"_id": "fe025433b702bf6e946610e0dba77f7dd16ae821", "title": "Extreme-angle broadband metamaterial lens.", "text": "For centuries, the conventional approach to lens design has been to grind the surfaces of a uniform material in such a manner as to sculpt the paths that rays of light follow as they transit through the interfaces.
Refractive lenses formed by this procedure of bending the surfaces can be of extremely high quality, but are nevertheless limited by geometrical and wave aberrations that are inherent to the manner in which light refracts at the interface between two materials. Conceptually, a more natural--but usually less convenient--approach to lens design would be to vary the refractive index throughout an entire volume of space. In this manner, far greater control can be achieved over the ray trajectories. Here, we demonstrate how powerful emerging techniques in the field of transformation optics can be used to harness the flexibility of gradient index materials for imaging applications. In particular, we design and experimentally demonstrate a lens that is broadband (more than a full decade bandwidth), has a field-of-view approaching 180 degrees and zero f-number. Measurements on a metamaterial implementation of the lens illustrate the practicality of transformation optics to achieve a new class of optical devices."} {"_id": "9b9da80c186d8f6e7fa35747a6543d78e36f17e8", "title": "HMOG: New Behavioral Biometric Features for Continuous Authentication of Smartphone Users", "text": "We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16% (walking) and 10.05% (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved an EER of 15.1% using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7% and 34.2%. We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9% without sacrificing authentication accuracy.
Two points distinguish our work from the current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions, and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones)."} {"_id": "4e0e664450094cc786898a5e1ef3727135ecfcd8", "title": "Accurate nonrigid 3D human body surface reconstruction using commodity depth sensors", "text": "Department of Computer Science, School of Engineering and Applied Science; Department of Epidemiology and Biostatistics, Milken Institute School of Public Health; and Department of Pediatrics, School of Medicine and Health Sciences, The George Washington University, Washington, DC 20052, USA"} {"_id": "de0597313056b05fd7dd6b2d5e031cfb96564920", "title": "Transductive Adversarial Networks (TAN)", "text": "Transductive Adversarial Networks (TAN) is a novel domain-adaptation machine learning framework that is designed for learning a conditional probability distribution on unlabelled input data in a target domain, while also only having access to: (1) easily obtained labelled data from a related source domain, which may have a different conditional probability distribution than the target domain, and (2) a marginalised prior distribution on the labels for the target domain. TAN leverages a fully adversarial training procedure and a unique generator/encoder architecture which approximates the transductive combination of the available source- and target-domain data. A benefit of TAN is that it allows the distance between the source- and target-domain label-vector marginal probability distributions to be greater than 0 (i.e. different tasks across the source and target domains) whereas other domain-adaptation algorithms require this distance to equal 0 (i.e. a single task across the source and target domains). TAN can, however, still handle the latter case and is a more generalised approach to this case. Another benefit of TAN is that due to being a fully adversarial algorithm, it has the potential to accurately approximate highly complex distributions. Theoretical analysis demonstrates the viability of the TAN framework."} {"_id": "620c1821f67fd39051fe0863567ac702ce27a72a", "title": "Modeling, simulation, and development of a robotic dolphin prototype", "text": "The abilities of sea animals and the efficiency of fish swimming are among nature's impressive solutions. In this paper, design, modeling, simulation and development studies of a robotic dolphin prototype, entirely inspired by the bottlenose dolphin (Tursiops truncatus), are presented. The first section focuses on the design principles and core design features of the prototype. In the second section, modeling and simulation studies, which consist of hydrodynamic, kinematic and dynamical analyses of the robotic dolphin, are presented. Dynamical simulations of the underwater behavior of the prototype are included in this section. The third section focuses on the general prototype development, from mechanical construction to the control system structure.
Finally, in the last section, experimental results obtained during the development of the prototype are discussed."} {"_id": "7da851a9d67b6c8ee7e985928f91ee577c529f2e", "title": "An Empirical Comparison of Topics in Twitter and Traditional Media", "text": "Twitter as a new form of social media can potentially contain much useful information, but content analysis on Twitter has not been well studied. In particular, it is not clear whether, as an information source, Twitter can simply be regarded as a faster news feed that covers mostly the same information as traditional news media. In this paper we empirically compare the content of Twitter with a traditional news medium, the New York Times, using unsupervised topic modeling. We use a Twitter-LDA model to discover topics from a representative sample of the entire Twitter. We then use text mining techniques to compare these Twitter topics with topics from the New York Times, taking topic categories and types into consideration. We find that although Twitter and the New York Times cover similar categories and types of topics, the distributions of topic categories and types are quite different. Furthermore, there are Twitter-specific topics and NYT-specific topics, and they tend to belong to certain topic categories and types. We also study the relation between the proportions of opinionated tweets and retweets and topic categories and types, and find some interesting dependence. To the best of our knowledge, ours is the first comprehensive empirical comparison between Twitter and traditional news media."} {"_id": "0b3983a6ad65f7f480d63d5a8d3e9f5c9c57e06a", "title": "Binary Rewriting of an Operating System Kernel \u2217", "text": "This paper deals with some of the issues that arise in the context of binary rewriting and instrumentation of an operating system kernel. OS kernels are very different from ordinary application code in many ways, e.g., they contain a significant amount of hand-written assembly code. Binary rewriting is an attractive approach for processing OS kernel code for several reasons, e.g., it provides a uniform way to handle heterogeneity in code due to a combination of source code, assembly code and legacy code such as in device drivers. However, because of the many differences between ordinary application code and OS kernel code, binary rewriting techniques that work for application code do not always carry over directly to kernel code. This paper describes some of the issues that arise in this context, and the approaches we have taken to address them. A key goal when developing our system was to deal in a systematic manner with the various peculiarities seen in low-level systems code, and reason about the safety and correctness of code transformations, without requiring significant deviations from the regular developmental path. For example, a precondition we assumed was that no compiler or linker modifications should be required to use it and the tool should be able to process kernel binaries in the same way as it does ordinary applications."} {"_id": "c4cfdcf19705f9095fb60fb2e569a9253a475f11", "title": "Towards Context-Aware Interaction Recognition for Visual Relationship Detection", "text": "Recognizing how objects interact with each other is a crucial task in visual recognition.
If we define the context of the interaction to be the objects involved, then most current methods can be categorized as either: (i) training a single classifier on the combination of the interaction and its context; or (ii) aiming to recognize the interaction independently of its explicit context. Both methods suffer limitations: the former scales poorly with the number of combinations and fails to generalize to unseen combinations, while the latter often leads to poor interaction recognition performance due to the difficulty of designing a context-independent interaction classifier. To mitigate those drawbacks, this paper proposes an alternative, context-aware interaction recognition framework. The key to our method is to explicitly construct an interaction classifier which combines the context and the interaction. The context is encoded via word2vec into a semantic space, and is used to derive a classification result for the interaction. The proposed method still builds one classifier for one interaction (as per type (ii) above), but the classifier built is adaptive to context via weights which are context dependent. The benefit of using the semantic space is that it naturally leads to zero-shot generalizations in which semantically similar contexts (subject-object pairs) can be recognized as suitable contexts for an interaction, even if they were not observed in the training set. Our method also scales with the number of interaction-context pairs since our model parameters do not increase with the number of interactions. Thus our method avoids the limitations of both approaches. We demonstrate experimentally that the proposed framework leads to improved performance for all investigated interaction representations and datasets."} {"_id": "39cc0e6c1b85d052c998a1c5949fe51baa96f0c5", "title": "Effective resource management for enhancing performance of 2D and 3D stencils on GPUs", "text": "GPUs are an attractive target for data parallel stencil computations prevalent in scientific computing and image processing applications. Many tiling schemes, such as overlapped tiling and split tiling, have been proposed in the past to improve the performance of stencil computations. While effective for 2D stencils, these techniques do not achieve the desired improvements for 3D stencils due to the hardware constraints of the GPU.\n A major challenge in optimizing stencil computations is to effectively utilize all resources available on the GPU. In this paper we develop a tiling strategy that makes better use of resources like the shared memory and register file available on the hardware. We present a systematic methodology to reason about which strategy should be employed for a given stencil and also discuss implementation choices that have a significant effect on the achieved performance. Applying these techniques to various 2D and 3D stencils gives a performance improvement of 200-400% over existing tools that target such computations."} {"_id": "a98242e420179d2a080069f5a02c6603fb5cfe3d", "title": "Zero-Current Switching Switched-Capacitor Zero-Voltage-Gap Automatic Equalization System for Series Battery String", "text": "An automatic equalization system for series battery strings or supercapacitor strings, based on a quasi-resonant switched-capacitor converter, is presented in this paper. It realizes a zero-voltage gap between cells and allows maximum energy recovery in a series battery system or supercapacitor system.
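Returning briefly to the stencil-tiling abstract above: overlapped tiling is easy to illustrate on the host side. Each tile is loaded together with a one-cell halo (the overlap), computed independently, and only its interior is written back, which is what makes tiles independent at the cost of redundant halo loads. The tile size and 5-point Jacobi stencil below are illustrative assumptions, not the paper's GPU implementation.

```python
import numpy as np

def jacobi_tiled(grid, tile=8):
    """One 5-point Jacobi sweep using overlapped tiles with a 1-cell halo."""
    n, m = grid.shape
    out = grid.copy()
    for i0 in range(1, n - 1, tile):
        for j0 in range(1, m - 1, tile):
            i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
            halo = grid[i0-1:i1+1, j0-1:j1+1]      # tile plus overlap region
            interior = 0.25 * (halo[:-2, 1:-1] + halo[2:, 1:-1] +
                               halo[1:-1, :-2] + halo[1:-1, 2:])
            out[i0:i1, j0:j1] = interior           # write back interior only
    return out

g = np.random.default_rng(3).random((34, 34))
ref = g.copy()
ref[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:])
print(np.allclose(jacobi_tiled(g), ref))  # True: tiling preserves the result
```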
It not only inherits the advantages of the conventional switched-capacitor battery cell balancing system, but also overcomes its drawbacks of conduction loss, switching loss, and finite voltage difference among battery cells. All switches are MOSFETs, controlled by just a pair of complementary signals in a synchronous trigger pattern, and the resonant tanks operate alternately between the two states of charging and discharging. Zero-current switching and a zero-voltage gap are achieved in this work. Different resonant tank designs can meet different balancing-time requirements for different energy storage devices. Experimental results indicate that the efficiency of the system is high, exceeding 98%. The system is very suitable for balancing in battery management systems."} {"_id": "dcaeb29ad3307e2bdab2218416c81cb0c4e548b2", "title": "End-to-end Speech Recognition Using Lattice-free MMI", "text": "We present our work on end-to-end training of acoustic models using the lattice-free maximum mutual information (LF-MMI) objective function in the context of hidden Markov models. By end-to-end training, we mean flat-start training of a single DNN in one stage without using any previously trained models, forced alignments, or building state-tying decision trees. We use full biphones to enable context-dependent modeling without trees, and show that our end-to-end LF-MMI approach can achieve comparable results to regular LF-MMI on well-known large vocabulary tasks. We also compare with other end-to-end methods such as CTC in character-based and lexicon-free settings and show 5 to 25 percent relative reduction in word error rates on different large vocabulary tasks while using significantly smaller models."} {"_id": "6b4593128ddfcbe006b51de0549596f24e724ff0", "title": "Counting of cigarettes in cigarette packets using LabVIEW", "text": "The proposed work presents a technique for automated counting of cigarettes in cigarette packets. It is based on the application of image processing techniques on the LabVIEW platform. The objective of the proposed work is to count the number of cigarettes in a packet. National Instruments' Smart Camera is used to capture images of cigarette packets moving along a packaging line and to process the data to fulfill the above objective. The technique was subjected to offline testing on more than 50 cigarette packets and the results obtained were found to be satisfactory in all cases."} {"_id": "5d7e3fb23f2a14ffdab31f18051c9b8ff573db4e", "title": "LQR, double-PID and pole placement stabilization and tracking control of single link inverted pendulum", "text": "This paper presents the dynamic behaviour of a nonlinear single-link inverted pendulum-on-cart system based on the Lagrange equation. The linearization of the nonlinear model is presented based on a Taylor series approximation. LQR, double-PID and simple pole placement control techniques were proposed for upright stabilization and tracking control of the system. Simulation results for the various control techniques, subjected to a unity-magnitude pulse input torque with and without disturbance, were compared. The performances of the proposed controllers were investigated based on response time specifications and the level of disturbance rejection. Overall, the performance of LQR was found to be the most reliable and satisfactory.
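For readers wanting to see how an LQR gain for such a pendulum-on-cart comparison is typically obtained: linearize about the upright equilibrium, solve the continuous algebraic Riccati equation, and form K = R^-1 B^T P. A hedged SciPy sketch follows; the physical parameters and the Q, R weights are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed parameters: cart mass M, pendulum mass m, length l, gravity g
M, m, l, g = 0.5, 0.2, 0.3, 9.81

# Linearized state [x, x_dot, theta, theta_dot] about the upright equilibrium
# (point-mass pendulum on a cart, massless rod)
A = np.array([[0, 1, 0,             0],
              [0, 0, -m*g/M,        0],
              [0, 0, 0,             1],
              [0, 0, (M+m)*g/(M*l), 0]])
B = np.array([[0], [1/M], [0], [-1/(M*l)]])

Q = np.diag([10.0, 1.0, 100.0, 1.0])  # state weights (assumed)
R = np.array([[1.0]])                 # input weight (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # optimal state feedback u = -K x

# All closed-loop eigenvalues should have negative real parts
print(np.linalg.eigvals(A - B @ K).real.max() < 0)
```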
Finally, suggestions for future work were made."} {"_id": "15fc8ce6630616cce1681f049391bdb4e186192b", "title": "Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level", "text": "This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image and allowing more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms are proposed depending on the type of texture."} {"_id": "29fbadeb12389a2fee4f0739cfc62c0ea9399de1", "title": "The influence of futures markets on real time price stabilization in electricity markets", "text": "Markets can interact with power systems in ways that can turn an otherwise stable market and an otherwise stable power system into an unstable overall system. This unstable system will be characterized not only by fluctuating prices that do not settle to constant values but, more worrisome, by the possibility of inducing slow electromechanical oscillations if left unchecked. This will tend to happen as a result of \"price chasing\" on the part of suppliers that can react (and over-react) to changing system prices. This paper examines the role that futures markets may have on clearing prices and on altering the volatility and potential instability of real-time prices and generator output."} {"_id": "ecfc4792299f58390d85753b60ee227b6282ccbc", "title": "Ubiquitous enterprise service adaptations based on contextual user behavior", "text": "Recent advances in mobile technologies and infrastructures have created the demand for ubiquitous access to enterprise services from mobile handheld devices. Further, with the invention of new interaction devices, the context in which the services are being used becomes an integral part of the activity carried out with the system. Traditional human\u2013computer interface (HCI) theories are now inadequate for developing these context-aware applications, as we believe that the notion of context should be extended to different categories: computing contexts, user contexts, and physical contexts for ubiquitous computing. This demands a new paradigm for system requirements elicitation and design in order to make good use of such extended context information captured from mobile user behavior.
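A rough sketch of entropy-guided selection of a wavelet decomposition level, in the spirit of the EADL method described in the defect-detection abstract above; the minimum-entropy rule and the histogram entropy estimator are simplifying assumptions (the paper combines NABS with entropy), and the wavelet and level count are arbitrary.

```python
import numpy as np
import pywt

def shannon_entropy(coeffs, bins=64):
    """Histogram-based Shannon entropy of a coefficient array."""
    hist, _ = np.histogram(np.abs(coeffs).ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_level(image, wavelet="db2", max_level=4):
    """Pick the decomposition level whose detail subimages have the
    lowest summed entropy (one plausible reading of the criterion)."""
    coeffs = pywt.wavedec2(image, wavelet, level=max_level)
    # coeffs[0] is the approximation; coeffs[i] = (cH, cV, cD) details,
    # ordered from the coarsest level (max_level) down to level 1.
    scores = {max_level - i + 1: sum(shannon_entropy(d) for d in details)
              for i, details in enumerate(coeffs[1:], start=1)}
    return min(scores, key=scores.get), scores

# Usage on a random stand-in texture.
img = np.random.rand(128, 128)
level, scores = select_level(img)
print(level, scores)
```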
Instead of redesigning or adapting existing enterprise services in an ad hoc manner, we introduce a methodology for the elicitation of context-aware adaptation requirements and the matching of context-awareness features to the target context by capability matching. For the implementation of such adaptations, we propose the use of three tiers of views: user interface views, data views, and process views. This approach centers on a novel notion of process views for ubiquitous service adaptation, where mobile users may execute a more concise version or modified procedure of the original process according to their behavior under different contexts. The process view also serves as the key mechanism for integrating user interface views and data views. Based on this model, we analyze the design and implementation issues of some common ubiquitous access situations and show how to adapt them systematically into a context-aware application by considering the requirements of a ubiquitous enterprise information system."} {"_id": "edd6b6cd62d4c3b5d288721510e579be62c941d6", "title": "Conditional image generation using feature-matching GAN", "text": "The Generative Adversarial Net is a leading approach to generative models for images, audio, and video. In this paper, we focus on conditional image generation and introduce the conditional Feature-Matching Generative Adversarial Net to generate images from category labels. By visualizing state-of-the-art discriminative conditional generative models, we find that these networks do not acquire clear semantic concepts. Thus we design the loss function in the light of metric learning to measure semantic distance. The proposed model is evaluated on several well-known datasets. It is shown to be of higher perceptual quality and better diversity than existing generative models."} {"_id": "7a050d2f0c83a65996e1261b52e6523f24d0bac2", "title": "Reactance-domain ESPRIT algorithm for a hexagonally shaped seven-element ESPAR antenna", "text": "A direction-of-arrival (DoA) method that combines the reactance-domain (RD) technique and the ESPRIT algorithm is proposed for use with the 7-element electronically steerable parasitic array radiator (ESPAR) for the estimation of noncoherent sources. Simulations show that the method could resolve up to three incoming signals with an estimation performance that depends on the signal's angle of arrival. Moreover, the method is compared with the Cramer-Rao lower bound (CRB) and the MUSIC asymptotic error variance, both modified for the RD technique. Numerical comparison between this lower bound and the MUSIC algorithm confirmed that the proposed method can achieve the CRB and provide high-precision DoA estimation with a level of performance that is sufficient for many DoA-finding applications. The proposed method was demonstrated by means of experiments on DoA estimation conducted in an anechoic chamber."} {"_id": "f512a4ae0f6b2d8d03c54a6405d2697a74f7256a", "title": "A quick MST-based algorithm to obtain Pathfinder networks (\u221e,n-1)", "text": "Network scaling algorithms such as the Pathfinder algorithm are used to prune many different kinds of networks, including citation networks, random networks, and social networks. However, this algorithm suffers from run-time problems for large networks and online processing due to its O(n^4) time complexity.
In this article, we introduce a new alternative, the MST-Pathfinder algorithm, which will allow us to prune the original network to get its PFNET(\u221e, n\u22121) in just O(n^2 \u00b7 log n) time. The underlying idea comes from the fact that the union (superposition) of all the Minimum Spanning Trees extracted from a given network is equivalent to the PFNET resulting from the Pathfinder algorithm parameterized by a specific set of values (r = \u221e and q = n\u22121), those usually considered in many different applications. Although this property is well known in the literature, it seems that no algorithm based on it has been proposed, up to now, to decrease the high computational cost of the original Pathfinder algorithm. We also present a mathematical proof of the correctness of this new alternative and test its efficiency in two different case studies: one dedicated to the post-processing of large random graphs, and the other to a real-world case in which medium-sized networks obtained by a cocitation analysis of the scientific domains in different countries are pruned."} {"_id": "b49af9c4ab31528d37122455e4caf5fdeefec81a", "title": "Smart homes and their users: a systematic analysis and key challenges", "text": "Published research on smart homes and their users is growing exponentially, yet a clear understanding of who these users are and how they might use smart home technologies is missing from a field being overwhelmingly pushed by technology developers. Through a systematic analysis of peer-reviewed literature on smart homes and their users, this paper takes stock of the dominant research themes and the linkages and disconnects between them. Key findings within each of nine themes are analysed, grouped into three: (1) views of the smart home\u2014functional, instrumental, socio-technical; (2) users and the use of the smart home\u2014prospective users, interactions and decisions, using technologies in the home; and (3) challenges for realising the smart home\u2014hardware and software, design, domestication. These themes are integrated into an organising framework for future research that identifies the presence or absence of cross-cutting relationships between different understandings of smart homes and their users. The usefulness of the organising framework is illustrated in relation to two major concerns\u2014privacy and control\u2014that have been narrowly interpreted to date, precluding deeper insights and potential solutions. Future research on smart homes and their users can benefit by exploring and developing cross-cutting relationships between the research themes identified."} {"_id": "c4a3da33ac6bcc9acd962f3bbb92d2387a62aed2", "title": "Mobile application for Indonesian medicinal plants identification using Fuzzy Local Binary Pattern and Fuzzy Color Histogram", "text": "This research proposes a new mobile application based on the Android operating system for identifying Indonesian medicinal plant images based on texture and color features of digital leaf images. In the experiments we used 51 species of Indonesian medicinal plants, each species consisting of 48 images, for a total of 2,448 images. This research investigates the effectiveness of the fusion between the Fuzzy Local Binary Pattern (FLBP) and the Fuzzy Color Histogram (FCH) in order to identify medicinal plants. The FLBP method is used for extracting leaf image texture. The FCH method is used for extracting leaf image color.
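A sketch of the property behind the MST-Pathfinder abstract above: the union of all MSTs can be computed with a disjoint-set structure by admitting every edge whose endpoints are not already connected by strictly lighter edges. This is a generic illustration of the stated equivalence, not the authors' exact implementation.

```python
from itertools import groupby

class DSU:
    """Disjoint-set union with path halving."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.p[ra] = rb

def mst_pathfinder(n, edges):
    """Union of all MSTs of an n-node weighted graph, i.e. PFNET(inf, n-1).
    edges: list of (u, v, w). Runs in O(m log m) for m edges."""
    dsu, kept = DSU(n), []
    for _, group in groupby(sorted(edges, key=lambda e: e[2]),
                            key=lambda e: e[2]):
        group = list(group)
        # An edge belongs to some MST iff its endpoints are not connected
        # by strictly lighter edges (checked before merging this group).
        kept += [(u, v, w) for u, v, w in group
                 if dsu.find(u) != dsu.find(v)]
        for u, v, _ in group:
            dsu.union(u, v)
    return kept

# Tiny usage example: an equal-weight triangle plus a heavy chord;
# all four edges survive, matching the union of all MSTs.
print(mst_pathfinder(4, [(0, 1, 1), (1, 2, 1), (0, 2, 1), (2, 3, 5)]))
```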
The fusion of FLBP and FCH is done using the Product Decision Rule (PDR) method. This research used a Probabilistic Neural Network (PNN) classifier for classifying medicinal plant species. The experimental results show that the fusion between FLBP and FCH can improve the average accuracy of medicinal plant identification. The accuracy of identification using the fusion of FLBP and FCH is 74.51%. This application is very useful for helping people identify and find information about Indonesian medicinal plants."} {"_id": "de704a0347322014abc4b3ecc27e86bdc5fac2fd", "title": "KNOWLEDGE ACQUISITION AND CYBER SICKNESS: A COMPARISON OF VR DEVICES IN VIRTUAL TOURS", "text": ""} {"_id": "b648d73edd1a533decd22eec2e7722b96746ceae", "title": "weedNet: Dense Semantic Weed Classification Using Multispectral Images and MAV for Smart Farming", "text": "Selective weed treatment is a critical step in autonomous crop management as related to crop health and yield. However, a key challenge is reliable and accurate weed detection to minimize damage to surrounding plants. In this letter, we present an approach for dense semantic weed classification with multispectral images collected by a micro aerial vehicle (MAV). We use the recently developed encoder\u2013decoder cascaded convolutional neural network, SegNet, that infers dense semantic classes while allowing any number of input image channels and class balancing with our sugar beet and weed datasets. To obtain training datasets, we established an experimental field with varying herbicide levels resulting in field plots containing only either crop or weed, enabling us to use the normalized difference vegetation index as a distinguishable feature for automatic ground truth generation. We train six models with different numbers of input channels and condition (fine-tune) them to achieve ~0.8 F1-score and 0.78 area under the curve classification metrics. For the model deployment, an embedded Graphics Processing Unit (GPU) system (Jetson TX2) is tested for MAV integration. The dataset used in this letter is released to support the community and future work."} {"_id": "7d712ea3803485467a46b46b71242477560c18f0", "title": "A Novel Differential-Fed Patch Antenna on Stepped-Impedance Resonator With Enhanced Bandwidth Under Dual-Resonance", "text": "A novel design concept to enhance the bandwidth of a differential-fed patch antenna using the dual-resonant radiation of a stepped-impedance resonator (SIR) is proposed. The SIR is composed of two distinctive portions: the radiating patch and a pair of open stubs. Initially, based on the transmission line model, the first and second odd-order radiative resonant modes, i.e., TM10 and TM30, of this SIR-type patch antenna are extensively investigated. It is demonstrated that the frequency ratio between the dual-resonant modes can be fully controlled by the electrical length and the impedance ratios between the open stub and radiating patch. After that, the SIR-type patch antenna is reshaped with a stepped ground plane in order to increase the impedance ratio as required for wideband radiation. With this arrangement, these two radiative modes are merged with each other, resulting in a wide impedance bandwidth with a stable radiation pattern under dual-resonant radiation. Finally, the proposed antenna is designed, fabricated, and measured.
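A minimal sketch of a product decision rule such as the one used to fuse FLBP and FCH scores in the medicinal-plant abstract above; the per-channel posteriors are illustrative stand-ins for the outputs of the texture and color classifiers.

```python
import numpy as np

def product_rule_fusion(p_texture, p_color):
    """Combine per-class scores from two feature channels (e.g., FLBP
    and FCH) by elementwise product, then renormalize."""
    fused = np.asarray(p_texture) * np.asarray(p_color)
    return fused / fused.sum()

# Hypothetical posteriors over three plant species from each channel.
p_flbp = np.array([0.6, 0.3, 0.1])
p_fch  = np.array([0.2, 0.5, 0.3])
print(product_rule_fusion(p_flbp, p_fch).round(3))  # second class wins
```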
It is verified experimentally that the impedance bandwidth (|Sdd11| < -10 dB) of the proposed antenna is greatly increased, up to 10% (0.85-0.94 GHz), with two attenuation poles. Most importantly, the antenna achieves a stable gain varying from 7.4 to 8.5 dB over the whole operating band, while maintaining low cross-polarization."} {"_id": "9f7b1b9d4f7dc2dcf850dcb311ecd309be380226", "title": "Design and Analysis of a Low-Profile and Broadband Microstrip Monopolar Patch Antenna", "text": "A new microstrip monopolar patch antenna is proposed and analyzed. The antenna has a wide bandwidth and a monopole-like radiation pattern. The antenna is constructed from a circular patch that is shorted concentrically with a set of conductive vias. The antenna is analyzed using a cavity model. The cavity model analysis not only distinguishes each resonating mode and gives physical insight into each mode of the antenna, but also provides a guideline to design a broadband monopolar patch antenna that utilizes two modes (TM01 and TM02). Both modes provide a monopole-like radiation pattern. The proposed antenna has a simple structure with a low profile of 0.024 wavelengths, and yields a wide impedance bandwidth of 18% and a maximum gain of 6 dBi."} {"_id": "1965a7d9a3eb0727c054fb235b1758c8ffbb8e22", "title": "Circularly Polarized U-Slot Antenna", "text": "A circularly polarized single-layer U-slot microstrip patch antenna is proposed. The suggested asymmetrical U-slot can generate the two orthogonal modes for circular polarization without chamfering any corner of the probe-fed square patch microstrip antenna. A parametric study has been carried out to investigate the effects caused by different arm lengths of the U-slot. The thickness of the foam substrate is about 8.5% of the wavelength at the operating frequency. The 3 dB axial ratio bandwidth of the antenna is 4%. Both experimental and theoretical results of the antenna have been presented and discussed."} {"_id": "4f86fdb8312794929b9a11770fba271c5bf886fa", "title": "A Broadband Center-Fed Circular Patch-Ring Antenna With a Monopole Like Radiation Pattern", "text": "A center-fed circular microstrip patch antenna with a coupled annular ring is presented. This antenna has a low-profile configuration with a monopole-like radiation pattern. Compared to the center-fed circular patch antenna (CPA), the proposed antenna has a larger bandwidth and a similar radiation pattern. The proposed antenna is fabricated and tested. It resonates at 5.8 GHz; the corresponding impedance bandwidth and gain are 12.8% and 5.7 dBi, respectively. Very good agreement between the measurement and simulation for the return loss and radiation patterns is achieved."} {"_id": "9462cd1ec2e404b22f76c88b6149d1e84683acb7", "title": "Printed Meandering Probe-Fed Circularly Polarized Patch Antenna With Wide Bandwidth", "text": "In this letter, a wideband compact circularly polarized (CP) patch antenna is proposed. This patch antenna consists of a printed meandering probe (M-probe) and truncated patches that excite orthogonal resonant modes to generate a wideband CP operation. A stacked patch is employed to further improve the axial-ratio (AR) bandwidth to fit the 5G Wi-Fi application. The proposed antenna achieves a 42.3% impedance bandwidth and a 16.8% AR bandwidth. The average gain within the AR bandwidth is 6.6 dBic with less than 0.5 dB variation.
This work demonstrates a bandwidth-broadening technique for an M-probe-fed CP patch antenna. It is the first study to show that the M-probe can also provide wideband characteristics in a dielectric-loaded patch antenna. Potential applications of the antenna include 5G Wi-Fi and satellite communication systems."} {"_id": "c9ed18a4a52503ede1f50691ff77efdf26acedd5", "title": "A PFC based BLDC motor drive using a Bridgeless Zeta converter", "text": "This paper deals with a PFC (Power Factor Corrected) bridgeless Zeta converter-based, VSI (Voltage Source Inverter)-fed BLDC (Brushless DC) motor drive. The speed control is achieved by controlling the voltage at the DC bus of the VSI using a single voltage sensor. This facilitates the operation of the VSI in fundamental-frequency switching mode (electronic commutation of the BLDC motor) in place of high-frequency PWM (Pulse Width Modulation) switching for speed control. This leads to low switching losses in the VSI and thus improves the efficiency of the drive. Moreover, a bridgeless configuration is used to reduce the conduction losses of the DBR (Diode Bridge Rectifier). The bridgeless Zeta converter working in DCM (Discontinuous Conduction Mode) is used, which utilizes a voltage-follower approach, thus requiring a single voltage sensor for speed control and PFC operation. The proposed drive is designed to operate over a wide speed range and under wide variation in supply voltages with high power factor and low harmonic distortion in the supply current at AC mains. Improved power quality is achieved, with performance indices satisfying international PQ (Power Quality) standards such as IEC 61000-3-2."} {"_id": "d6002a6cc8b5fc2218754aed970aac91c8d8e7e9", "title": "Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach", "text": "In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D/LINEMOD representation introduced recently by Hinterstoisser et al., yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10 fps using a single CPU core. We outperform the state-of-the-art both in terms of speed and accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images."} {"_id": "aaf0a8925f8564e904abc46b3ad1f9aa2b120cf6", "title": "Vector filtering for color imaging", "text": "Vector processing operations use essential spectral and spatial information to remove noise and localize microarray spots.
The proposed fully automated vector technique can be easily implemented in either hardware or software, and incorporated into any existing microarray image analysis and gene expression tool."} {"_id": "89c59ac45c267a1c12b9d0ed4af88f6d6c619683", "title": "Entity Extraction, Linking, Classification, and Tagging for Social Media: A Wikipedia-Based Approach", "text": "Many applications that process social data, such as tweets, must extract entities from tweets (e.g., \u201cObama\u201d and \u201cHawaii\u201d in \u201cObama went to Hawaii\u201d), link them to entities in a knowledge base (e.g., Wikipedia), classify tweets into a set of predefined topics, and assign descriptive tags to tweets. Few solutions exist today to solve these problems for social data, and they are limited in important ways. Further, even though several industrial systems such as OpenCalais have been deployed to solve these problems for text data, little if anything has been published about them, and it is unclear if any of the systems has been tailored for social media. In this paper we describe in depth an end-to-end industrial system that solves these problems for social data. The system has been developed and used heavily in the past three years, first at Kosmix, a startup, and later at WalmartLabs. We show how our system uses a Wikipedia-based global \u201creal-time\u201d knowledge base that is well suited for social data, how we interleave the tasks in a synergistic fashion, how we generate and use contexts and social signals to improve task accuracy, and how we scale the system to the entire Twitter firehose. We describe experiments that show that our system outperforms current approaches. Finally, we describe applications of the system at Kosmix and WalmartLabs, and lessons learned."} {"_id": "0b632048208c9c6b48b636f9f7ef8a5466325488", "title": "A Scenario-Adaptive Driving Behavior Prediction Approach to Urban Autonomous Driving", "text": "Driving through dynamically changing traffic scenarios is a highly challenging task for autonomous vehicles, especially on urban roadways. Prediction of surrounding vehicles\u2019 driving behaviors plays a crucial role in autonomous vehicles. Most traditional driving behavior prediction models work only for a specific traffic scenario and cannot be adapted to different scenarios. In addition, a priori driving knowledge was never considered sufficiently. This study proposes a novel scenario-adaptive approach to solve these problems. A novel ontology model was developed to model traffic scenarios. Continuous features of driving behavior were learned by Hidden Markov Models (HMMs). Then, a knowledge base was constructed to specify the model adaptation strategies and store a priori probabilities based on the scenario\u2019s characteristics. Finally, the target vehicle\u2019s future behavior was predicted considering both a posteriori probabilities and a priori probabilities. The proposed approach was thoroughly evaluated with a real autonomous vehicle. The application scope of traditional models can be extended to a variety of scenarios, while the prediction performance can be improved by the consideration of a priori knowledge. For lane-changing behaviors, the prediction time horizon can be extended by up to 56% (0.76 s) on average.
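A classical vector median filter, the canonical example of the vector processing operations named in the vector filtering abstract above: each output pixel is the window pixel minimizing the summed distance to all others, so the three color channels are treated jointly rather than separately. The window size and L2 distance are assumptions for illustration.

```python
import numpy as np

def vector_median_filter(img, k=3):
    """Vector median filter for a color image of shape (H, W, 3)."""
    h, w, _ = img.shape
    r = k // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + k, j:j + k].reshape(-1, 3).astype(float)
            # Pairwise L2 distances between all pixels in the window.
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[i, j] = win[d.sum(axis=1).argmin()]
    return out

# Usage: denoise a random RGB patch.
noisy = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
clean = vector_median_filter(noisy)
```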
Meanwhile, long-term prediction precision can be enhanced by over 26%."} {"_id": "41d103f751d47f0c140d21c5baa4981b3d4c9a76", "title": "Commonsense Causal Reasoning Using Millions of Personal Stories", "text": "The personal stories that people write in their Internet weblogs include a substantial amount of information about the causal relationships between everyday events. In this paper we describe our efforts to use millions of these stories for automated commonsense causal reasoning. Casting the commonsense causal reasoning problem as a Choice of Plausible Alternatives, we describe four experiments that compare various statistical and information retrieval approaches to exploit causal information in story corpora. The top-performing system in these experiments uses a simple co-occurrence statistic between words in the causal antecedent and consequent, calculated as the Pointwise Mutual Information between words in a corpus of millions of personal stories."} {"_id": "c9d1bcdb95aa748940b85508fd7277622f74c0a4", "title": "Rigor in Information Systems Positivist Case Research: Current Practices", "text": "Case research has commanded respect in the information systems (IS) discipline for at least a decade. Notwithstanding the relevance and potential value of case studies, this methodological approach was once considered to be one of the least systematic. Toward the end of the 1980s, the issue of whether IS case research was rigorously conducted was first raised. Researchers from our field (e.g., Benbasat et al. 1987; Lee 1989) and from other disciplines (e.g., Eisenhardt 1989; Yin 1994) called for more rigor in case research and, through their recommendations, contributed to the advancement of the case study methodology. Considering these contributions, the present study seeks to determine the extent to which the field of IS has advanced in its operational use of the case study method. Specifically, it investigates the level of methodological rigor in positivist IS case research conducted over the past decade. To fulfill this objective, we identified and coded 183 case articles from seven major IS journals. Evaluation attributes or criteria considered in the present review focus on three main areas, namely, design issues, data collection, and data analysis. While the level of methodological rigor has experienced modest progress with respect to some specific attributes, the overall assessed rigor is somewhat equivocal and there are still significant areas for improvement. One of the keys is to include better documentation, particularly regarding issues related to the data collection and"} {"_id": "30584b8d5bf99e51139e7ca9a8c04637480b73ca", "title": "Design of Compact Wide Stopband Microstrip Low-pass Filter using T-shaped Resonator", "text": "In this letter, a compact microstrip low-pass filter (LPF) using a T-shaped resonator with a wide stopband is presented. The proposed LPF has the capability to suppress harmonics up to the eighth and has a low insertion loss of 0.12 dB. A bandstop structure using a stepped-impedance resonator and two open-circuit stubs is used to design a wide stopband with an attenuation level better than \u221220 dB from 3.08 up to 22 GHz. The proposed filter, with a \u22123-dB cutoff frequency of 2.68 GHz, has been designed, fabricated, and measured. The operation of the LPF is investigated based on an equivalent circuit model.
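The PMI statistic behind the top-performing system in the causal-reasoning abstract above can be sketched as follows; the toy corpus and the document-level co-occurrence window are simplifying assumptions, standing in for millions of weblog stories.

```python
import math
from collections import Counter
from itertools import product

def pmi_causal_score(antecedent, consequent, docs):
    """Score a (cause, effect) pair by summed PMI between word pairs,
    estimated from a corpus given as a list of token lists."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))   # document frequencies
    co = Counter()
    for d in docs:
        s = set(d)
        for a, c in product(set(antecedent), set(consequent)):
            if a in s and c in s:
                co[(a, c)] += 1
    score = 0.0
    for (a, c), n_ac in co.items():
        score += math.log((n_ac * n) / (df[a] * df[c]))
    return score

# Toy corpus: "rain" should look more causally related to "wet" than "sun".
docs = [["rain", "wet", "street"], ["rain", "umbrella"],
        ["sun", "dry", "street"]]
print(pmi_causal_score(["rain"], ["wet"], docs) >
      pmi_causal_score(["sun"], ["wet"], docs))  # True
```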
Simulation results are verified by measurements, and excellent agreement between them is observed."} {"_id": "bb9936dc85acfc794636140f02644f4f29a754c9", "title": "On-Line Analytical Processing", "text": "On-line analytical processing (OLAP) describes an approach to decision support, which aims to extract knowledge from a data warehouse, or more specifically, from data marts. Its main idea is providing navigation through data to non-expert users, so that they are able to interactively generate ad hoc queries without the intervention of IT professionals. This name was introduced in contrast to on-line transactional processing (OLTP), reflecting the different requirements and characteristics of these two classes of use. The concept falls in the area of business intelligence."} {"_id": "566199b865312f259d0cf694d71d6a51462e0fb8", "title": "Mining Roles with Multiple Objectives", "text": "With the growing adoption of Role-Based Access Control (RBAC) in commercial security and identity management products, how to facilitate the process of migrating a non-RBAC system to an RBAC system has become a problem with significant business impact. Researchers have proposed to use data mining techniques to discover roles to complement the costly top-down approaches for RBAC system construction. An important problem is how to construct RBAC systems with low complexity. In this article, we define the notion of a weighted structural complexity measure and propose a role mining algorithm that mines RBAC systems with low structural complexity. Another key problem that has not been adequately addressed by existing role mining approaches is how to discover roles with semantic meanings. In this article, we study the problem in two primary settings with different information availability. When the only information is the user-permission relation, we propose to discover roles whose semantic meaning is based on formal concept lattices. We argue that the theory of formal concept analysis provides a solid theoretical foundation for mining roles from a user-permission relation. When user-attribute information is also available, we propose to create roles that can be explained by expressions of user attributes. Since an expression of attributes describes a real-world concept, the corresponding role represents a real-world concept as well. Furthermore, the algorithms we propose balance the semantic guarantee of roles with system complexity. Finally, we indicate how to create a hybrid approach combining top-down candidate roles. Our experimental results demonstrate the effectiveness of our approaches."} {"_id": "0f0133873e0ddf9db8e190ccf44a07249c16ba10", "title": "Data Fusion by Matrix Factorization", "text": "For most problems in science and engineering we can obtain data sets that describe the observed system from various perspectives and record the behavior of its individual components. Heterogeneous data sets can be collectively mined by data fusion. Fusion can focus on a specific target relation and exploit directly associated data together with contextual data and data about the system's constraints. In this paper we describe a data fusion approach with penalized matrix tri-factorization (DFMF) that simultaneously factorizes data matrices to reveal hidden associations. The approach can directly consider any data that can be expressed in a matrix, including those from feature-based representations, ontologies, associations and networks.
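The navigation-oriented flavor of OLAP described in the abstract above can be hinted at with a tiny roll-up; pandas stands in for a data mart here, purely as an illustration, with a made-up fact table.

```python
import pandas as pd

# Toy fact table standing in for a data mart.
sales = pd.DataFrame({
    "region":  ["EU", "EU", "US", "US"],
    "product": ["A", "B", "A", "B"],
    "revenue": [100, 150, 200, 50],
})

# A pivot with margins behaves like a small OLAP roll-up: totals per
# region, per product, and overall, ready for ad hoc slicing and dicing.
cube = pd.pivot_table(sales, values="revenue", index="region",
                      columns="product", aggfunc="sum", margins=True)
print(cube)
```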
We demonstrate the utility of DFMF for the gene function prediction task with eleven different data sources and for the prediction of pharmacologic actions by fusing six data sources. Our data fusion algorithm compares favorably to alternative data integration approaches and achieves higher accuracy than can be obtained from any single data source alone."} {"_id": "34c15e94066894d92b5b0ec6817b8a246c009aaa", "title": "Cloud Service Negotiation: A Research Roadmap", "text": "Cloud services are Internet-based XaaS (X as a Service) services, where X can be hardware, software or applications. As Cloud consumers value QoS (Quality of Service), Cloud providers should make certain service level commitments in order to achieve business success. This paper argues for Cloud service negotiation. It outlines a research roadmap, reviews the state of the art, and reports our work on Cloud service negotiation. Three research problems that we formulate are QoS measurement, QoS negotiation, and QoS enforcement. To address QoS measurement, we pioneer a quality model named CLOUDQUAL for Cloud services. To address QoS negotiation, we propose a tradeoff negotiation approach for Cloud services, which can achieve a higher utility. We also give some ideas for addressing QoS enforcement and for balancing utility and success rate in QoS negotiation."} {"_id": "08d6c0f860378a8c56b4ba7f347429970f70e3bd", "title": "Classification of brain disease in magnetic resonance images using two-stage local feature fusion", "text": "BACKGROUND\nMany classification methods have been proposed based on magnetic resonance images. Most methods rely on measures such as volume, the cerebral cortical thickness and grey matter density. These measures are sensitive to the performance of registration and limited in their representation of anatomical structure. This paper proposes a two-stage local feature fusion method, in which deformable registration is not required and anatomical information is represented at a moderate scale.\n\n\nMETHODS\nKeypoints are first extracted from scale space to represent anatomical structure. Then, two kinds of local features are calculated around the keypoints, one for correspondence and the other for representation. Scores are assigned to keypoints to quantify their effect in classification. The sum of scores for all effective keypoints is used to determine which group the test subject belongs to.\n\n\nRESULTS\nWe apply this method to magnetic resonance images of Alzheimer's disease and Parkinson's disease. The advantages of local features in correspondence and representation contribute to the final classification. With the help of the local feature used for correspondence (Scale-Invariant Feature Transform, SIFT), performance improves. The local feature used for representation (Histogram of Oriented Gradients, HOG) extracted from 16\u00d716 cell blocks obtains better results than 4\u00d74 and 8\u00d78 cell blocks.\n\n\nDISCUSSION\nThis paper presents a method which combines the effect of the SIFT descriptor in correspondence and the representation ability of the HOG descriptor for anatomical structure. This method has potential for distinguishing patients with brain disease from controls."} {"_id": "e3bb879045c5807a950047d91e65c15e7f087313", "title": "Community Detection in Social Networks Based on Influential Nodes", "text": "Large-scale social networks have emerged rapidly in recent years and have become complex networks. The structure of social networks is an important research area and has attracted much scientific interest.
Community structure is an important feature of social networks. In this paper, we propose a community detection algorithm based on influential nodes. First, we introduce how to find influential nodes based on random walks. Then we combine the algorithm with order statistics theory to find community structure. We apply our algorithm to three classical data sets and compare it with other algorithms. Our community detection algorithm is shown to be effective in the experiments. Our algorithm also has applications in data mining and recommendation."} {"_id": "0414c4cc1974e6d3e69d9f2986e5bb9fb1af4701", "title": "Natural Language Processing using NLTK and WordNet", "text": "Natural Language Processing is a theoretically motivated range of computational techniques for analysing and representing naturally occurring texts at one or more levels of linguistic analysis for the purpose of achieving human-like language processing for a range of tasks or applications [1]. To perform natural language processing, a variety of tools and platforms have been developed; in our case we discuss NLTK for Python. The Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python programming language [2]. It provides easy-to-use interfaces to many corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning. In this paper we discuss different approaches for natural language processing using NLTK."} {"_id": "7beabda9986a546cbf4f6c66de9c9aa7ea92a7ac", "title": "Arabic Automatic Speech Recognition Enhancement", "text": "In this paper, we propose three approaches for Arabic automatic speech recognition. For pronunciation modeling, we propose pronunciation variant generation with a decision tree. For acoustic modeling, we propose a hybrid approach that adapts the native acoustic model using another native acoustic model. Regarding the language model, we improve the language model using processed text. The experimental results show that the proposed pronunciation modeling approach yields a WER reduction of around 1%. The acoustic modeling reduces the WER by 1.2%, and the adapted language modeling shows a WER reduction of 1.9%."} {"_id": "2a1c06ea40469064f2419df481a6e6aab7b09cdf", "title": "Driving Hotzenplotz: A Hybrid Interface for Vehicle Control Aiming to Maximize Pleasure in Highway Driving", "text": "A prerequisite to foster proliferation of automated driving is common system acceptance. However, different user groups (novices, enthusiasts) decline automation, which could be, in turn, problematic for a successful market launch. We see a feasible solution in the combination of the advantages of manual (autonomy) and automated (increased safety) driving. Hence, we've developed the Hotzenplotz interface, combining possibility-driven design with psychological user needs. A simulator study (N=30) was carried out to assess user experience with subjective criteria (Need Scale, PANAS/-X, HEMA, AttrakDiff) and quantitative measures (driving behavior, HR/HRV) in different conditions. Our results confirm that pure AD is significantly less able to satisfy user needs compared to manual driving and makes people feel bored or out of control. In contrast, the Hotzenplotz interface has proven to reduce the negative effects of AD.
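A minimal usage sketch of the NLTK facilities named in the NLTK abstract above (tokenization, part-of-speech tagging, WordNet lookup); the example sentence is arbitrary, and the download names reflect standard NLTK resource packages.

```python
import nltk
from nltk.corpus import wordnet

# One-time downloads of the required corpora and models.
for pkg in ["punkt", "averaged_perceptron_tagger", "wordnet"]:
    nltk.download(pkg, quiet=True)

text = "NLTK provides easy interfaces to corpora and lexical resources."
tokens = nltk.word_tokenize(text)          # tokenization
tags = nltk.pos_tag(tokens)                # part-of-speech tagging
synsets = wordnet.synsets("resource")      # WordNet lexical lookup

print(tags[:4])
print([s.definition() for s in synsets[:2]])
```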
The implication is that drivers should be provided with different control options to secure acceptance and avoid deskilling."} {"_id": "715250b1a76178cbc057de701524db7b2695234c", "title": "The Design and Implementation of CoGaDB: A Column-oriented GPU-accelerated DBMS", "text": "Nowadays, the performance of processors is primarily bound by a fixed energy budget, the power wall. This forces hardware vendors to optimize processors for specific tasks, which leads to an increasingly heterogeneous hardware landscape. Although efficient algorithms for modern processors such as GPUs are heavily investigated, we also need to prepare the database optimizer to handle computations on heterogeneous processors. GPUs are an interesting base for case studies, because they already offer many difficulties we will face tomorrow. In this paper, we present CoGaDB, a main-memory DBMS with built-in GPU acceleration, which is optimized for OLAP workloads. CoGaDB uses the self-tuning optimizer framework HyPE to build a hardware-oblivious optimizer, which learns cost models for database operators and efficiently distributes a workload on available processors. Furthermore, CoGaDB implements efficient algorithms on both CPU and GPU and efficiently supports star joins. We show in this paper how these novel techniques interact with each other in a single system. Our evaluation shows that CoGaDB quickly adapts to the underlying hardware by increasing the accuracy of its cost models at runtime."} {"_id": "e4c98cd116c0b86387354208bc97c1dc3c79d16d", "title": "Understanding Stories with Large-Scale Common Sense", "text": "Story understanding systems need to be able to perform commonsense reasoning, specifically regarding characters\u2019 goals and their associated actions. Some efforts have been made to form large-scale commonsense knowledge bases, but integrating that knowledge into story understanding systems remains a challenge. We have implemented the Aspire system, an application of large-scale commonsense knowledge to story understanding. Aspire extends Genesis, a rule-based story understanding system, with tens of thousands of goal-related assertions from the commonsense semantic network ConceptNet. Aspire uses ConceptNet\u2019s knowledge to infer plausible implicit character goals and story causal connections at a scale unprecedented in the space of story understanding. Genesis\u2019s rule-based inference enables precise story analysis, while ConceptNet\u2019s relatively inexact but widely applicable knowledge provides a significant breadth of coverage difficult to achieve solely using rules. Genesis uses Aspire\u2019s inferences to answer questions about stories, and these answers were found to be plausible in a small study. Though we focus on Genesis and ConceptNet, demonstrating the value of supplementing precise reasoning systems with large-scale, scruffy commonsense knowledge is our primary contribution."} {"_id": "79b4723a010c66c2f3fdcd3bd79dba1c3a3e2d28", "title": "A comprehensive model of PMOS NBTI degradation", "text": "Negative bias temperature instability has become an important reliability concern for ultra-scaled silicon IC technology with significant implications for both analog and digital circuit design. In this paper, we construct a comprehensive model for NBTI phenomena within the framework of the standard reaction\u2013diffusion model.
We demonstrate how to solve the reaction\u2013diffusion equations in a way that emphasizes the physical aspects of the degradation process and allows easy generalization of the existing work. We also augment this basic reaction\u2013diffusion model by including the temperature and field dependence of the NBTI phenomena so that reliability projections can be made under arbitrary circuit operating conditions."} {"_id": "8e5ba7d60a4d9f425deeb3a05f3124fe6686b29a", "title": "Neural correlates of experimentally induced flow experiences", "text": "Flow refers to a positive, activity-associated, subjective experience under conditions of a perceived fit between skills and task demands. Using functional magnetic resonance perfusion imaging, we investigated the neural correlates of flow in a sample of 27 human subjects. Experimentally, in the flow condition participants worked on mental arithmetic tasks at a challenging task difficulty that was automatically and continuously adjusted to the individual's skill level. Experimental settings of \"boredom\" and \"overload\" served as comparison conditions. The experience of flow was associated with relative increases in neural activity in the left anterior inferior frontal gyrus (IFG) and the left putamen. Relative decreases in neural activity were observed in the medial prefrontal cortex (MPFC) and the amygdala (AMY). Subjective ratings of the flow experience were significantly associated with changes in neural activity in the IFG, AMY, and, with a trend towards significance, in the MPFC. We conclude that neural activity changes in these brain regions reflect psychological processes that map onto the characteristic features of flow: coding of increased outcome probability (putamen), deeper sense of cognitive control (IFG), decreased self-referential processing (MPFC), and decreased negative arousal (AMY)."} {"_id": "23e4844f33adaf2e26195ffc2a7514a2e45fd33d", "title": "Discovering Structure in the Universe of Attribute Names", "text": "Recently, search engines have invested significant effort in answering entity\u2013attribute queries from structured data, but have focused mostly on queries for frequent attributes. In parallel, several research efforts have demonstrated that there is a long tail of attributes, often thousands per class of entities, that are of interest to users. Researchers are beginning to leverage these new collections of attributes to expand the ontologies that power search engines and to recognize entity\u2013attribute queries. Because of the sheer number of potential attributes, such tasks require us to impose some structure on this long and heavy tail of attributes. This paper introduces the problem of organizing the attributes by expressing the compositional structure of their names as a rule-based grammar. These rules offer a compact and rich semantic interpretation of multi-word attributes, while generalizing from the observed attributes to new unseen ones. The paper describes an unsupervised learning method to generate such a grammar automatically from a large set of attribute names. Experiments show that our method can discover a precise grammar over 100,000 attributes of Countries while providing a 40-fold compaction over the attribute names. Furthermore, our grammar enables us to increase the precision of attributes from 47% to more than 90% with only a minimal curation effort.
Thus, our approach provides an efficient and scalable way to expand ontologies with attributes of user interest."} {"_id": "025cdba37d191dc73859c51503e91b0dcf466741", "title": "Fingerprint image enhancement using GWT and DMF", "text": "Fingerprint image enhancement is an essential preprocessing step in fingerprint recognition applications. In this paper we introduce an approach that simultaneously extracts the orientation and frequency of local ridges in the fingerprint image using a Gabor wavelet filter bank and uses them in Gabor filtering of the image. Furthermore, we describe a robust approach to fingerprint image enhancement, which is based on the integration of Gabor filters and a Directional Median Filter (DMF). In fact, Gaussian-distributed noise is reduced effectively by the Gabor filters and impulse noise by the DMF. The proposed DMF not only performs its original tasks, but can also join broken fingerprint ridges, fill holes in fingerprint images, smooth irregular ridges, and remove small artifacts between ridges. Experimental results show our method to be superior to those described in the literature."} {"_id": "cccb533cd14259b92ec7cd71b1d3f679ef251394", "title": "Combining haptic human-machine interaction with predictive path planning for lane-keeping and collision avoidance systems", "text": "This paper presents a first approach for a haptic human-machine interface combined with a novel lane-keeping and collision-avoidance assistance system approach, as well as the results of a first exploration study with human test drivers. The assistance system approach is based on a potential field predictive path planning algorithm that incorporates the driver's wishes commanded by the steering wheel angle, the brake pedal or throttle, and the intended maneuver. For the design of the haptic human-machine interface, the assistance torque characteristic at the handwheel is shaped and the path planning parameters are held constant. In the exploration, both driving data and questionnaires are evaluated. The results show good acceptance for the lane-keeping assistance, while the collision-avoidance assistance needs to be improved."} {"_id": "fbbe5bf055f997cbd1ed1f3b72b1d630771e358e", "title": "Learning patterns of university student retention", "text": "Learning predictors for student retention is very difficult. After reviewing the literature, it is evident that there is considerable room for improvement in the current state of the art. As shown in this paper, improvements are possible if we (a) explore a wide range of learning methods; (b) take care when selecting attributes; (c) assess the efficacy of the learned theory not just by its median performance, but also by the variance in that performance; (d) study the delta of student factors between those who leave and those who are retained. Using these techniques, for the goal of predicting if students will remain for the first three years of an undergraduate degree, the following factors were found to be informative: family background and family\u2019s socio-economic status, high school GPA"} {"_id": "03dbbd987ea1fd5307ba5ae2f56d88e4f465b88c", "title": "Anonymizing sequential releases", "text": "An organization makes a new release as new information becomes available, releases a tailored view for each data request, or releases sensitive information and identifying information separately.
The availability of related releases sharpens the identification of individuals by a global quasi-identifier consisting of attributes from related releases. Since it is not an option to anonymize previously released data, the current release must be anonymized to ensure that a global quasi-identifier is not effective for identification. In this paper, we study the sequential anonymization problem under this assumption. A key question is how to anonymize the current release so that it cannot be linked to previous releases yet remains useful for its own release purpose. We introduce the lossy join, a negative property in relational database design, as a way to hide the join relationship among releases, and propose a scalable and practical solution."} {"_id": "3dfce4601c3f413605399267b3314b90dc4b3362", "title": "Protecting Respondents' Identities in Microdata Release", "text": "Today's globally networked society places great demand on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call today for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and infer information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of the respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not to distort the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal"} {"_id": "5c15b11610d7c3ee8d6d99846c276795c072eec3", "title": "Generalizing Data to Provide Anonymity when Disclosing Information (Abstract)", "text": "The proliferation of information on the Internet and access to fast computers with large storage capacities has increased the volume of information collected and disseminated about individuals. The existence of these other data sources makes it much easier to re-identify individuals whose private information is released in data believed to be anonymous. At the same time, increasing demands are made on organizations to release individualized data rather than aggregate statistical information. Even when explicit identifiers, such as name and phone number, are removed or encrypted when releasing individualized data, other characteristic data, which we term quasi-identifiers, can exist which allow the data recipient to re-identify individuals to whom the data refer.
In this paper, we provide a computational disclosure technique for releasing information from a private table such that the identity of any individual to whom the released data refer cannot be definitively recognized. Our approach protects against linking to other data. It is based on the concepts of generalization, by which stored values can be replaced with semantically consistent and truthful but less precise alternatives, and of k-anonymity. A table is said to provide k-anonymity when the contained data do not allow the recipient to associate the released information to a set of individuals smaller than k. We introduce the notions of generalized table and of minimal generalization of a table with respect to a k-anonymity requirement. As an optimization problem, the objective is to minimally distort the data while providing adequate protection. We describe an algorithm that, given a table, efficiently computes a preferred minimal generalization to provide anonymity."} {"_id": "9492fb8d3b3ce09451fc1df46d5e3c200095f5eb", "title": "Bottom-Up Generalization : A Data Mining Solution to Privacy Protection", "text": "In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. This paper investigates data mining as a technique for masking data, therefore termed data mining based privacy protection. This approach incorporates partially the requirement of a targeted data mining task into the process of masking data so that essential structure is preserved in the masked data. The following privacy problem is considered in this paper: a data holder wants to release a version of data for building classification models, but wants to protect against linking the released data to an external source for inferring sensitive information. An iterative bottom-up generalization is adapted from data mining to generalize the data. The generalized data remains useful to classification but becomes difficult to link to other sources. The generalization space is specified by a hierarchical structure of generalizations. The key is to identify the best generalization to climb up the hierarchy at each iteration."} {"_id": "b30706600c01e23e11b303842fe548d62bf3ecf8", "title": "M-invariance: towards privacy preserving re-publication of dynamic datasets", "text": "The previous literature on privacy-preserving data publication has focused on performing \"one-time\" releases. Specifically, none of the existing solutions supports re-publication of the microdata after it has been updated with insertions and deletions. This is a serious drawback, because currently a publisher cannot provide researchers with the most recent dataset continuously.\n This paper remedies the drawback. First, we reveal the characteristics of the re-publication problem that invalidate the conventional approaches leveraging k-anonymity and l-diversity. Based on rigorous theoretical analysis, we develop a new generalization principle m-invariance that effectively limits the risk of privacy disclosure in re-publication. We accompany the principle with an algorithm, which computes privacy-guarded relations that permit retrieval of accurate aggregate information about the original microdata.
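A toy sketch of generalization for k-anonymity in the spirit of the anonymization abstracts above: quasi-identifier values are coarsened step by step (here only ZIP codes, by truncation) until every quasi-identifier combination occurs at least k times. Real systems search a generalization hierarchy and minimize distortion; the table, attribute names, and single-attribute strategy are illustrative assumptions.

```python
from collections import Counter

def is_k_anonymous(rows, qi, k):
    """True if every quasi-identifier combination occurs >= k times."""
    counts = Counter(tuple(r[a] for a in qi) for r in rows)
    return all(c >= k for c in counts.values())

def generalize_zip(rows, k, qi=("zip", "age_band")):
    """Truncate ZIP codes one digit at a time (a trivial generalization
    hierarchy) until the table is k-anonymous or ZIPs are exhausted."""
    rows = [dict(r) for r in rows]
    while not is_k_anonymous(rows, qi, k):
        if all(r["zip"].rstrip("*") == "" for r in rows):
            break  # fully suppressed; give up
        for r in rows:
            z = r["zip"].rstrip("*")
            r["zip"] = z[:-1] + "*" * (len(r["zip"]) - len(z) + 1)
    return rows

table = [{"zip": "47677", "age_band": "20-29"},
         {"zip": "47602", "age_band": "20-29"},
         {"zip": "47678", "age_band": "20-29"}]
print(generalize_zip(table, k=3))  # all ZIPs become "476**"
```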
Our theoretical results are confirmed by extensive experiments with real data."} {"_id": "43d7f7a090630db3ed0d7fca9d649c8562aeaaa9", "title": "Evaluating Sketchiness as a Visual Variable for the Depiction of Qualitative Uncertainty", "text": "We report on results of a series of user studies on the perception of four visual variables that are commonly used in the literature to depict uncertainty. To the best of our knowledge, we provide the first formal evaluation of the use of these variables to facilitate an easier reading of uncertainty in visualizations that rely on line graphical primitives. In addition to blur, dashing and grayscale, we investigate the use of 'sketchiness' as a visual variable because it conveys visual impreciseness that may be associated with data quality. Inspired by work in non-photorealistic rendering and by the features of hand-drawn lines, we generate line trajectories that resemble hand-drawn strokes of various levels of proficiency, ranging from child to adult strokes, where the amount of perturbation in the line corresponds to the level of uncertainty in the data. Our results show that sketchiness is a viable alternative for the visualization of uncertainty in lines and is as intuitive as blur, although people subjectively prefer dashing over blur, grayscale, and sketchiness. We discuss advantages and limitations of each technique and conclude with design considerations on how to deploy these visual variables to effectively depict various levels of uncertainty for line marks."} {"_id": "99aba9948302a8bd2cdf46a809a59e34d1b57d86", "title": "An Architecture for Local Energy Generation, Distribution, and Sharing", "text": "The United States electricity grid faces significant problems resulting from fundamental design principles that limit its ability to handle the key energy challenges of the 21st century. We propose an innovative electric power architecture, rooted in lessons learned from the Internet and microgrids, which addresses these problems while interfacing gracefully with the current grid to allow for non-disruptive incremental adoption. Such a system, which we term a \"Local\" grid, is controlled by intelligent power switches (IPS), and can consist of loads, energy sources, and energy storage. The desired result of the proposed architecture is to produce a grid network designed for distributed renewable energy, prevalent energy storage, and stable autonomous systems. We will describe organizing principles of such a system that ensure well-behaved operation, such as requirements for communication and energy transfer protocols, regulation and control schemes, and market-based rules of operation."} {"_id": "30b32f4a6341b5809428df1271bdb707f2418362", "title": "A Sequential Neural Encoder With Latent Structured Description for Modeling Sentences", "text": "In this paper, we propose a sequential neural encoder with latent structured description (SNELSD) for modeling sentences. This model introduces latent chunk-level representations into conventional sequential neural encoders, i.e., recurrent neural networks with long short-term memory (LSTM) units, to consider the compositionality of languages in semantic modeling. An SNELSD model has a hierarchical structure that includes a detection layer and a description layer. The detection layer predicts the boundaries of latent word chunks in an input sentence and derives a chunk-level vector for each word.
The description layer utilizes modified LSTM units to process these chunk-level vectors in a recurrent manner and produces sequential encoding outputs. These output vectors are further concatenated with word vectors or the outputs of a chain LSTM encoder to obtain the final sentence representation. All the model parameters are learned in an end-to-end manner without a dependency on additional text chunking or syntax parsing. A natural language inference task and a sentiment analysis task are adopted to evaluate the performance of our proposed model. The experimental results demonstrate the effectiveness of the proposed SNELSD model on exploring task-dependent chunking patterns during the semantic modeling of sentences. Furthermore, the proposed method achieves better performance than conventional chain LSTMs and tree-structured LSTMs on both tasks."} {"_id": "36cd10d9afacdc26a019d44ff3d39a8cf3fd4a9a", "title": "Advanced Gate Drive Unit With Closed-Loop $di_{C}/dt$ Control", "text": "This paper describes the design and the experimental investigation of a gate drive unit with closed-loop control of the collector current slope diC/dt for multichip insulated-gate bipolar transistors (IGBTs). Compared to a pure resistive gate drive, the proposed diC/dt control offers the ability to adjust the collector current slope freely, which helps to find an optimized relation between switching losses and secure operation of the freewheeling diode for every type of IGBT. Based on the description of the IGBT's switching behavior, the design and the realization of the gate drive are presented. The test setup and the comparison of switching tests with and without the proposed diC/dt control are discussed."} {"_id": "efa2be90d7f48e11da39c3ed3fa14579fd367f9c", "title": "3-Dimensional Localization via RFID Tag Array", "text": "In this paper, we propose 3DLoc, which performs 3-dimensional localization on tagged objects by using RFID tag arrays. 3DLoc deploys three arrays of RFID tags on three mutually orthogonal surfaces of each object. When performing 3D localization, 3DLoc continuously moves the RFID antenna and scans the tagged objects in a 2-dimensional space right in front of the tagged objects. It then estimates the object's 3D position according to the phases from the tag arrays. By referring to the fixed layout of the tag array, we use Angle of Arrival-based schemes to accurately estimate the tagged objects' orientation and 3D coordinates in the 3D space. To suppress the localization errors caused by the multipath effect, we use the linear relationship of the AoA parameters to remove the unexpected outliers from the estimated results. We have implemented a prototype system and evaluated the actual performance in a real complex environment. The experimental results show that 3DLoc achieves a mean accuracy of 10cm in free space and 15.3cm in the multipath environment for the tagged object."} {"_id": "3d676791081e7b16a4ead9924fc03bac587d181d", "title": "Inductive Learning of Answer Set Programs from Noisy Examples", "text": "In recent years, non-monotonic Inductive Logic Programming has received growing interest. Specifically, several new learning frameworks and algorithms have been introduced for learning under the answer set semantics, allowing the learning of common-sense knowledge involving defaults and exceptions, which are essential aspects of human reasoning. In this paper, we present a noise-tolerant generalisation of the learning from answer sets framework.
We evaluate our ILASP3 system, both on synthetic and on real datasets, represented in the new framework. In particular, we show that on many of the datasets ILASP3 achieves a higher accuracy than other ILP systems that have previously been applied to the datasets, including a recently proposed differentiable learning framework."} {"_id": "7d46c3648f76b5542d8ecd582f01f155e6b248d5", "title": "On Formal and Cognitive Semantics for Semantic Computing", "text": "Semantics is the meaning of symbols, notations, concepts, functions, and behaviors, as well as their relations that can be deduced onto a set of predefined entities and/or known concepts. Semantic computing is an emerging computational methodology that models and implements computational structures and behaviors at semantic or knowledge level beyond that of symbolic data. In semantic computing, formal semantics can be classified into the categories of to be, to have, and to do semantics. This paper presents a comprehensive survey of formal and cognitive semantics for semantic computing in the fields of computational linguistics, software science, computational intelligence, cognitive computing, and denotational mathematics. A set of novel formal semantics, such as deductive semantics, concept-algebra-based semantics, and visual semantics, is introduced that forms a theoretical and cognitive foundation for semantic computing. Applications of formal semantics in semantic computing are presented in case studies on semantic cognition of natural languages, semantic analyses of computing behaviors, behavioral semantics of human cognitive processes, and visual semantic algebra for image and visual object manipulations."} {"_id": "927aa1bd7c0e122ae77224e92821f2eaafc96cf9", "title": "GENDER RECOGNITION FROM FACES USING BANDLET TRANSFORMATION", "text": "Gender recognition is important in different commercial and law enforcement applications. In this paper we propose a gender recognition system based on facial images. We use a different technique that involves the Bandlet Transform instead of the previously used Wavelet Transform; this multi-resolution technique more efficiently captures the edges of images, and the mean is then combined to create the feature vectors of the images. To classify the images by gender, we use fuzzy c-means clustering. Experimental results show that an average accuracy of 97.1% was achieved with this technique on the SUMS database and 93.3% on the FERET database. Keywords: Bandlet, Gender Recognition, Fuzzy C-means, Multi-Resolution."} {"_id": "86cd8da6c6b35d99b75aaaaf7be55c78a31948eb", "title": "Evolution of the internal fixation of long bone fractures. The scientific basis of biological internal fixation: choosing a new balance between stability and biology.", "text": "The advent of 'biological internal fixation' is an important development in the surgical management of fractures. Locked nailing has demonstrated that flexible fixation without precise reduction results in reliable healing. While external fixators are mainly used today to provide temporary fixation in fractures after severe injury, the internal fixator offers flexible fixation, maintaining the advantages of the external fixator but allowing long-term treatment. The internal fixator resembles a plate but functions differently. It is based on pure splinting rather than compression. The resulting flexible stabilisation induces the formation of callus.
With the use of locked threaded bolts, the application of the internal fixator foregoes the need to adapt the shape of the splint to that of the bone during surgery. Thus, it is possible to apply the internal fixator as a minimally invasive percutaneous osteosynthesis (MIPO). Minimal surgical trauma and flexible fixation allow prompt healing when the blood supply to bone is maintained or can be restored early. The scientific basis of the fixation and function of these new implants has been reviewed. The biomechanical aspects principally address the degree of instability which may be tolerated by fracture healing under different biological conditions. Fractures may heal spontaneously in spite of gross instability while minimal, even non-visible, instability may be deleterious for rigidly fixed small fracture gaps. The theory of strain offers an explanation for the maximum instability which will be tolerated and the minimal degree required for induction of callus formation. The biological aspects of damage to the blood supply, necrosis and temporary porosity explain the importance of avoiding extensive contact of the implant with bone. The phenomenon of bone loss and stress protection has a biological rather than a mechanical explanation. The same mechanism of necrosis-induced internal remodelling may explain the basic process of direct healing."} {"_id": "59ae5541e1dc71ff33b4b6e14cbbc5c3d46fc506", "title": "A Survey on Clustering Algorithms for Wireless Sensor Networks", "text": "A wireless sensor network (WSN) consisting of a large number of tiny sensors can be an effective tool for gathering data in diverse kinds of environments. The data collected by each sensor is communicated to the base station, which forwards the data to the end user. Clustering is introduced to WSNs because it has proven to be an effective approach to provide better data aggregation and scalability for large WSNs. Clustering also conserves the limited energy resources of the sensors. This paper synthesises existing clustering algorithms and highlights the challenges in clustering."} {"_id": "ce7f81e898f1c78468df2480294806864899549e", "title": "80\u00b0C, 50-Gb/s Directly Modulated InGaAlAs BH-DFB Lasers", "text": "Direct modulation at 50 Gb/s of 1.3-\u03bcm InGaAlAs DFB lasers operating at up to 80\u00b0C was experimentally demonstrated by using a ridge-shaped buried heterostructure."} {"_id": "c955e319595b8ae051fe18f193490ba56c4f0254", "title": "Platform-as-a-Service (PaaS): The Next Hype of Cloud Computing", "text": "Cloud computing is expected to become the driving force of information technology to revolutionize the future. Presently a number of companies are trying to adopt this new technology either as service providers, enablers or vendors. In this way the cloud market is expected to grow at a remarkable rate. Under the whole cloud umbrella, PaaS seems to have a relatively small market share. However, it is expected to offer much more when compared with its counterparts SaaS and IaaS. This paper aims to assess and analyze the future of PaaS technology. The year 2015 was named \u201cthe year of PaaS\u201d, meaning that PaaS technology has established strong roots and is ready to hit the market with better technology services. This research will discuss future PaaS market trends, growth and business competitors. In the current dynamic era, several companies in the market are offering PaaS services.
This research will also outline some of the top service providers (proprietary & open source) to discuss their current technology status and present a futuristic look into their services and business strategies. Analysis of the present and future PaaS technology infrastructure will also be a major discussion in this paper."} {"_id": "e1a7cf4e4760bb580dd67255fbe872bac33ae28b", "title": "Hybrid CMOS/memristor circuits", "text": "This is a brief review of recent work on the prospective hybrid CMOS/memristor circuits. Such hybrids combine the flexibility, reliability and high functionality of the CMOS subsystem with the very high density of nanoscale thin film resistance switching devices operating on different physical principles. Simulation and initial experimental results demonstrate that the performance of CMOS/memristor circuits for several important applications is well beyond the scaling limits of the conventional VLSI paradigm."} {"_id": "91d513af1f667f64c9afc55ea1f45b0be7ba08d4", "title": "Automatic Face Image Quality Prediction", "text": "Face image quality can be defined as a measure of the utility of a face image to automatic face recognition. In this work, we propose (and compare) two methods for automatic face image quality based on target face quality values from (i) human assessments of face image quality (matcher-independent), and (ii) quality values computed from similarity scores (matcher-dependent). A support vector regression model trained on face features extracted using a deep convolutional neural network (ConvNet) is used to predict the quality of a face image. The proposed methods are evaluated on two unconstrained face image databases, LFW and IJB-A, which both contain facial variations with multiple quality factors. Evaluation of the proposed automatic face image quality measures shows we are able to reduce the FNMR at 1% FMR by at least 13% for two face matchers (a COTS matcher and a ConvNet matcher) by using the proposed face quality to select subsets of face images and video frames for matching templates (i.e., multiple faces per subject) in the IJB-A protocol. To our knowledge, this is the first work to utilize human assessments of face image quality in designing a predictor of unconstrained face quality that is shown to be effective in cross-database evaluation."} {"_id": "2e3522d8d9c30f34e18161246c2f6dac7a9ae04d", "title": "The Neural Architecture of the Language Comprehension Network: Converging Evidence from Lesion and Connectivity Analyses", "text": "While traditional models of language comprehension have focused on the left posterior temporal cortex as the neurological basis for language comprehension, lesion and functional imaging studies indicate the involvement of an extensive network of cortical regions. However, the full extent of this network and the white matter pathways that contribute to it remain to be characterized. In an earlier voxel-based lesion-symptom mapping analysis of data from aphasic patients (Dronkers et al., 2004), several brain regions in the left hemisphere were found to be critical for language comprehension: the left posterior middle temporal gyrus, the anterior part of Brodmann's area 22 in the superior temporal gyrus (anterior STG/BA22), the posterior superior temporal sulcus (STS) extending into Brodmann's area 39 (STS/BA39), the orbital part of the inferior frontal gyrus (BA47), and the middle frontal gyrus (BA46).
Here, we investigated the white matter pathways associated with these regions using diffusion tensor imaging from healthy subjects. We also used resting-state functional magnetic resonance imaging data to assess the functional connectivity profiles of these regions. Fiber tractography and functional connectivity analyses indicated that the left MTG, anterior STG/BA22, STS/BA39, and BA47 are part of a richly interconnected network that extends to additional frontal, parietal, and temporal regions in the two hemispheres. The inferior occipito-frontal fasciculus, the arcuate fasciculus, and the middle and inferior longitudinal fasciculi, as well as transcallosal projections via the tapetum were found to be the most prominent white matter pathways bridging the regions important for language comprehension. The left MTG showed a particularly extensive structural and functional connectivity pattern which is consistent with the severity of the impairments associated with MTG lesions and which suggests a central role for this region in language comprehension."} {"_id": "a58fe54d2677fb01533c4dc2d8ef958a2cb921cb", "title": "Distribution of Eigenvalues for 2x2 MIMO Channel Capacity Based on Indoor Measurements", "text": "Based on the non-line-of-sight (NLOS) indoor measurements performed in the corridors on the third floor of the Electronics and Telecommunications Research Institute (ETRI) building in Daejeon city, Republic of Korea, we investigated the distribution of the eigenvalues of HH*, where H denotes a 2\u00d72 multiple-input multiple-output (MIMO) channel matrix. Using the observation that the distribution of the measured eigenvalues matches well with the Gamma distribution, we propose a model of the eigenvalues as Gamma distributed random variables that prominently features both transmitting and receiving correlations. Using the model with positive integer k_i, i=1, 2, which is the shape parameter of a Gamma distribution, we derive the closed-form ergodic capacity of the 2\u00d72 MIMO channel. Validation results show that the proposed model enables the evaluation of the outage and ergodic capacities of the correlated 2\u00d72 MIMO channel in the NLOS indoor corridor environment."} {"_id": "d4b29432c3e7bd4e2d06935169c91f781f441160", "title": "Speech recognition of Malayalam numbers", "text": "Digit speech recognition is important in many applications such as automatic data entry, PIN entry, voice dialing, automated banking systems, etc. This paper presents a speaker-independent speech recognition system for Malayalam digits. The system employs Mel frequency cepstrum coefficients (MFCC) as features for signal processing and a Hidden Markov model (HMM) for recognition. The system was trained with 21 male and female voices in the age group of 20 to 40 years, and 98.5% word recognition accuracy (94.8% sentence recognition accuracy) was achieved on a continuous digit recognition test set."} {"_id": "517a461a8839733e34c9025154de3d6275543642", "title": "Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval", "text": "We present a non-traditional retrieval problem we call subtopic retrieval. The subtopic retrieval problem is concerned with finding documents that cover many different subtopics of a query topic. In such a problem, the utility of a document in a ranking is dependent on other documents in the ranking, violating the assumption of independent relevance made in most traditional retrieval methods.
Subtopic retrieval poses challenges for evaluating performance, as well as for developing effective algorithms. We propose a framework for evaluating subtopic retrieval which generalizes the traditional precision and recall metrics by accounting for intrinsic topic difficulty as well as redundancy in documents. We propose and systematically evaluate several methods for performing subtopic retrieval using statistical language models and a maximal marginal relevance (MMR) ranking strategy. A mixture model combined with query likelihood relevance ranking is shown to modestly outperform a baseline relevance ranking on a data set used in the TREC interactive track."} {"_id": "6b3abd1a6bf9c9564147cfda946c447955d01804", "title": "Kaleido: You Can Watch It But Cannot Record It", "text": "Recently a number of systems have been developed to implement and improve visual communication over screen-camera links. In this paper we study an opposite problem: how to prevent unauthorized users from videotaping a video played on a screen, such as in a theater, while not affecting the viewing experience of legitimate audiences. We propose and develop a light-weight hardware-free system, called Kaleido, that ensures these properties by taking advantage of the limited disparities between the screen-eye channel and the screen-camera channel. Kaleido does not require any extra hardware and is purely based on re-encoding the original video frame into multiple frames used for displaying. We extensively test our system Kaleido using a variety of smartphone cameras. Our experiments confirm that Kaleido preserves the high-quality screen-eye channel while reducing the secondary screen-camera channel quality significantly."} {"_id": "528407a9c5b41f81366bbe5cf8058dadcb139fea", "title": "A new design of XOR-XNOR gates for low power application", "text": "XOR and XNOR gates play an important role in digital systems, including arithmetic and encryption circuits. This paper proposes a combined XOR-XNOR gate using six transistors for low power applications. A comparison with the best existing XOR-XNOR design has been made by simulating the proposed and other designs using 65nm CMOS technology in a Cadence environment. The simulation results show the delay, power consumption and power-delay product (PDP) at different supply voltages ranging from 0.6V to 1.2V. The results show that the proposed design has lower power dissipation and a full voltage swing."} {"_id": "17580101d22476be5ad1655abb48d967c78a3875", "title": "Twenty-Five Years of Research on Women Farmers in Africa: Lessons and Implications for Agricultural Research Institutions with an Annotated Bibliography", "text": "Contents: Acknowledgments; Introduction; Labor (Gender Division of Labor; Household Labor Availability; Agricultural Labor Markets; Conclusions: Labor and Gender); Land (Access to Land; Security of Land; Changing Access to Land); Access to Other Inputs (Access to Credit; Access to Fertilizer; Access to Extension and Information; Access to Mechanization; Gender Issues in Access to Inputs: Summary); Outputs; Household Decision-Making (Cooperative Bargaining and Collective Models; Noncooperative Bargaining Models); Conclusions; References; Annotated Bibliography."} {"_id": "205917e8e885b3c2c6c31c90f9de7f411a24ec22", "title": "Sensor Based PUF IoT Authentication Model for a Smart Home with Private Blockchain", "text": "With ubiquitous adoption, connected sensors, actuators and smart devices are finding inroads into daily life. Internet of Things (IoT) authentication is rapidly transforming from classical cryptographic user-centric knowledge-based approaches to device-signature-based automated methodologies that corroborate identity between a claimant and a verifier. Physical Unclonable Function (PUF) based IoT authentication mechanisms are gaining widespread interest as users are required to access IoT devices in real time while also expecting execution of sensitive (even physical) IoT actions immediately. This paper delineates a combination of BlockChain and Sensor-based PUF authentication mechanisms for real-time but non-repudiable access to IoT devices in a Smart Home, utilizing a mining-less consensus mechanism for the provision of immutable assurance to users' and IoT devices' transactions, i.e. commands, status alerts, actions, etc."} {"_id": "112f07f90d4395623a51725e90bf9d91b89e559a", "title": "Defending the morality of violent video games", "text": "The effect of violent video games is among the most widely discussed topics in media studies, and for good reason. These games are immensely popular, but many seem morally objectionable.
Critics attack them for a number of reasons, ranging from their capacity to teach players weapons skills to their ability to directly cause violent actions. This essay shows that many of these criticisms are misguided. Theoretical and empirical arguments against violent video games often suffer from a number of significant shortcomings that make them ineffective. This essay argues that video games are defensible from the perspective of Kantian, Aristotelian, and utilitarian moral theories."} {"_id": "cdbb15ee448ee4dc6db16a540a40fbb035d1f4ca", "title": "Education-specific Tag Recommendation in CQA Systems", "text": "Systems for Community Question Answering (CQA) are well-known on the open web (e.g. Stack Overflow or Quora). They have recently been adopted also for use in the educational domain (mostly in MOOCs) to mediate communication between students and teachers. As students are only novices in the topics they learn about, they may need various scaffoldings to achieve effective question answering. In this work, we focus specifically on the automatic recommendation of tags classifying students' questions. We propose a novel method that can automatically analyze the text of a question and suggest appropriate tags to an asker. The method takes the specifics of the educational domain into consideration through a two-step recommendation process in which tags reflecting course structure are recommended first and subsequently supplemented with additional related tags.\n Evaluation of the method on data from the CS50 MOOC on the Stack Exchange platform showed that the proposed method achieved higher performance in comparison with a baseline method (tag recommendation without taking educational specifics into account)."} {"_id": "598caa7f431930892a78f1083453b1c0ba29e725", "title": "Teaching a Machine to Read Maps With Deep Reinforcement Learning", "text": "The ability to use a 2D map to navigate a complex 3D environment is quite remarkable, and even difficult for many humans. Localization and navigation is also an important problem in domains such as robotics, and has recently become a focus of the deep reinforcement learning community. In this paper we teach a reinforcement learning agent to read a map in order to find the shortest way out of a random maze it has never seen before. Our system combines several state-of-the-art methods such as A3C and incorporates novel elements such as a recurrent localization cell. Our agent learns to localize itself based on 3D first-person images and an approximate orientation angle. The agent generalizes well to bigger mazes, showing that it learned useful localization and navigation."} {"_id": "13082af1fd6bb9bfe63e73cf007de1655b7f9ae0", "title": "Machine learning in automated text categorization", "text": "The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert labor power, and straightforward portability to different domains.
This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation."} {"_id": "6eb69624732a589e91816cc7722be0b0cdd28767", "title": "Discovering actionable patterns in event data", "text": "Applications such as those for systems management and intrusion detection employ an automated real-time operation system in which sensor data are collected and processed in real time. Although such a system effectively reduces the need for operation staff, it requires constructing and maintaining correlation rules. Currently, rule construction requires experts to identify problem patterns, a process that is time-consuming and error-prone. In this paper, we propose reducing this burden by mining historical data that are readily available. Specifically, we first present efficient algorithms to mine three types of important patterns from historical event data: event bursts, periodic patterns, and mutually dependent patterns. We then discuss a framework for efficiently mining events that have multiple attributes. Last, we present Event Correlation Constructor\u2014a tool that validates and extends correlation knowledge."} {"_id": "cd866d4510e397dbc18156f8d840d7745943cc1a", "title": "A tutorial on hidden Markov models and selected applications in speech recognition", "text": ""} {"_id": "0a283fb395343cd26984425306ca24c85b09ccdb", "title": "Automatic Indexing Based on Bayesian Inference Networks", "text": "In this paper, a Bayesian inference network model for automatic indexing with index terms (descriptors) from a prescribed vocabulary is presented. It requires an indexing dictionary with rules mapping terms of the respective subject field onto descriptors, and inverted lists for terms occurring in a set of documents of the subject field and descriptors manually assigned to these documents. The indexing dictionary can be derived automatically from a set of manually indexed documents. An application of the network model is described, followed by an indexing example and some experimental results about the indexing performance of the network model."} {"_id": "1ac5b0628ff249c388ff5ca934a9ccbec577cbd7", "title": "Beyond Market Baskets: Generalizing Association Rules to Correlations", "text": "One of the most well-studied problems in data mining is mining for association rules in market basket data. Association rules, whose significance is measured via support and confidence, are intended to identify rules of the type, \u201cA customer purchasing item A often also purchases item B.\u201d Motivated by the goal of generalizing beyond market baskets and the association rules used with them, we develop the notion of mining rules that identify correlations (generalizing associations), and we consider both the absence and presence of items as a basis for generating rules. We propose measuring significance of associations via the chi-squared test for correlation from classical statistics. This leads to a measure that is upward closed in the itemset lattice, enabling us to reduce the mining problem to the search for a border between correlated and uncorrelated itemsets in the lattice. We develop pruning strategies and devise an efficient algorithm for the resulting problem.
We demonstrate its effectiveness by testing it on census data and finding term dependence in a corpus of text documents, as well as on synthetic data."} {"_id": "16afeecd0f4dbccdd8e281e0e7e443bd08681da1", "title": "Measuring the Impact of Network Performance on Cloud-Based Speech Recognition Applications", "text": "Cloud-based speech recognition systems enhance Web surfing, transportation, health care, etc. For example, using voice commands helps drivers search the Internet without compromising traffic safety. User frustration with network traffic problems can affect the usability of these applications. The performance of these types of applications should be robust in difficult network conditions. We evaluate the performance of several client-server speech recognition applications under various network conditions. We measure the transcription delay and accuracy of each application under different packet loss and jitter values. Results of our study show that the performance of client-server speech recognition systems is affected by jitter and packet loss, which commonly occur in WiFi and cellular networks."} {"_id": "41f3b1aebde4c342211185d2b5e339a60ceff9e2", "title": "Friction modeling in linear chemical-mechanical planarization", "text": "In this article, we develop an analytical model of the relationship between the wafer/pad friction and process configuration. We also provide experimental validation of this model for in situ process monitoring. CMP thus demonstrates that the knowledge and methodologies developed for friction modeling and control can be used to advance the understanding, monitoring, and control of semiconductor manufacturing processes. Meanwhile, relevant issues and challenges in real-time monitoring of CMP are presented as sources of future development."} {"_id": "b972f638b7c4ed22e1bcb573520bb232ea88cda5", "title": "Efficient, Safe, and Probably Approximately Complete Learning of Action Models", "text": "In this paper we explore the theoretical boundaries of planning in a setting where no model of the agent's actions is given. Instead of an action model, a set of successfully executed plans is given and the task is to generate a plan that is safe, i.e., guaranteed to achieve the goal without failing. To this end, we show how to learn a conservative model of the world in which actions are guaranteed to be applicable. This conservative model is then given to an off-the-shelf classical planner, resulting in a plan that is guaranteed to achieve the goal. However, this reduction from model-free planning to model-based planning is not complete: in some cases a plan will not be found even when one exists. We analyze the relation between the number of observed plans and the likelihood that our conservative approach will indeed fail to solve a solvable problem. Our analysis shows that the number of trajectories needed scales gracefully."} {"_id": "e38b9f339e858c8ac95679737a0852d21c48d89c", "title": "Pressure characteristics at the stump/socket interface in transtibial amputees using an adaptive prosthetic foot.", "text": "BACKGROUND\nThe technological advances that have been made in developing highly functional prostheses are promising for very active patients, but we do not yet know whether they cause an increase in biomechanical load along with possibly negative consequences for pressure conditions in the socket.
Therefore, this study monitored the socket pressure at specific locations of the stump when using a microprocessor-controlled adaptive prosthetic ankle under different walking conditions.\n\n\nMETHODS\nTwelve unilateral transtibial amputees between 43 and 59 years of age were provided with the Proprio-Foot (Ossur) and underwent an instrumented 3D gait analysis in level, stair, and incline walking, including synchronous data capturing of socket pressure. Peak pressures and pressure time integrals (PTI) at three different locations were compared for five walking conditions with and without using the device's ankle adaptation mode.\n\n\nFINDINGS\nThe highest peak pressures of 2.4 kPa/kg were found for incline ascent at the calf muscle, as compared to 2.1 kPa/kg in level walking, with large inter-individual variance. In stair ascent a strong correlation was found between maximum knee moment and socket pressure. The most significant pressure changes relative to level walking were seen in ramp descent anteriorly towards the stump end, with PTI values being almost twice as high as those in level walking. Adapting the angle of the prosthesis on stairs and ramps modified the pressure data such that they were closer to those in level walking.\n\n\nINTERPRETATION\nPressure at the stump depends on the knee moments involved in each walking condition. Adapting the prosthetic ankle angle is a valuable means of modifying joint kinetics and thereby the pressure distribution at the stump. However, large inter-individual differences in local pressures underline the importance of individual socket fitting."} {"_id": "6c237c3638eefe1eb39212f801cd857bedc004ee", "title": "Exploiting Electronic Health Records to Mine Drug Effects on Laboratory Test Results", "text": "The proliferation of Electronic Health Records (EHRs) challenges data miners to discover potential and previously unknown patterns from a large collection of medical data. One of the tasks that we address in this paper is to reveal previously unknown effects of drugs on laboratory test results. We propose a method that leverages drug information to find a meaningful list of drugs that have an effect on the laboratory result. We formulate the problem as a convex nonsmooth function and develop a proximal gradient method to optimize it. The model has been evaluated on two important use cases: lowering low-density lipoproteins and glycated hemoglobin test results. The experimental results provide evidence that the proposed method is more accurate than the state-of-the-art method, rediscovers drugs that are known to lower the levels of laboratory test results, and, most importantly, discovers additional potential drugs that may also lower these levels."} {"_id": "ebae5af27aafd39358a46c83c1409885773254dd", "title": "A Survey of Vehicular Ad hoc Networks Routing Protocols", "text": "In recent years, vehicular ad hoc networks (VANETs) have become an interesting research area; a VANET is a mobile ad hoc network considered a special case of the mobile ad hoc network (MANET). Similar to MANETs, VANETs are characterized as autonomous and self-configured wireless networks. However, VANETs have very dynamic topology, large and variable network size, and constrained mobility; these characteristics have led to the need for efficient routing and resource-saving VANET protocols that fit different VANET environments. These differences render traditional MANET protocols unsuitable for VANETs.
The aim of this work is to survey VANET routing mechanisms. This paper gives an overview of vehicular ad hoc networks (VANETs) and the existing VANET routing protocols, focusing mainly on vehicle-to-vehicle (V2V) communication and protocols. The paper also presents the general outlines and goals of VANETs, investigates different routing schemes that have been developed for VANETs, provides classifications of VANET routing protocols (focusing on two classification forms), and gives summarized comparisons between different classes in terms of the methodologies used and the strengths and limitations of each class scheme compared to other classes. Finally, it extracts the current trends and the challenges for efficient routing mechanisms in VANETs."} {"_id": "d012519a924e41aa7ff49d9b6be58033bd60fd9c", "title": "Predicting hospital admission at emergency department triage using machine learning", "text": "OBJECTIVE\nTo predict hospital admission at the time of ED triage using patient history in addition to information collected at triage.\n\n\nMETHODS\nThis retrospective study included all adult ED visits between March 2014 and July 2017 from one academic and two community emergency rooms that resulted in either admission or discharge. A total of 972 variables were extracted per patient visit. Samples were randomly partitioned into training (80%), validation (10%), and test (10%) sets. We trained a series of nine binary classifiers using logistic regression (LR), gradient boosting (XGBoost), and deep neural networks (DNN) on three dataset types: one using only triage information, one using only patient history, and one using the full set of variables. Next, we tested the potential benefit of additional training samples by training models on increasing fractions of our data. Lastly, variables of importance were identified using information gain as a metric to create a low-dimensional model.\n\n\nRESULTS\nA total of 560,486 patient visits were included in the study, with an overall admission risk of 29.7%. Models trained on triage information yielded a test AUC of 0.87 for LR (95% CI 0.86-0.87), 0.87 for XGBoost (95% CI 0.87-0.88) and 0.87 for DNN (95% CI 0.87-0.88). Models trained on patient history yielded an AUC of 0.86 for LR (95% CI 0.86-0.87), 0.87 for XGBoost (95% CI 0.87-0.87) and 0.87 for DNN (95% CI 0.87-0.88). Models trained on the full set of variables yielded an AUC of 0.91 for LR (95% CI 0.91-0.91), 0.92 for XGBoost (95% CI 0.92-0.93) and 0.92 for DNN (95% CI 0.92-0.92). All algorithms reached maximum performance at 50% of the training set or less. A low-dimensional XGBoost model built on ESI level, outpatient medication counts, demographics, and hospital usage statistics yielded an AUC of 0.91 (95% CI 0.91-0.91).\n\n\nCONCLUSION\nMachine learning can robustly predict hospital admission using triage information and patient history.
The addition of historical information improves predictive performance significantly compared to using triage information alone, highlighting the need to incorporate these variables into prediction models."} {"_id": "89a523135fc9cb3b0eb6cade2d1eab1b17ea42f4", "title": "Stress sensitivity of fault seismicity: a comparison between limited-offset oblique and major strike-slip faults", "text": "We present a new three-dimensional inventory of the southern San Francisco Bay area faults and use it to calculate stress applied principally by the 1989 M=7.1 Loma Prieta earthquake, and to compare fault seismicity rates before and after 1989. The major high-angle right-lateral faults exhibit a different response to the stress change than do minor oblique (right-lateral/thrust) faults. Seismicity on oblique-slip faults in the southern Santa Clara Valley thrust belt increased where the faults were unclamped. The strong dependence of seismicity change on normal stress change implies a high coefficient of static friction. In contrast, we observe that faults with significant offset (> 50-100 km) behave differently; microseismicity on the Hayward fault diminished where right-lateral shear stress was reduced, and where it was unclamped by the Loma Prieta earthquake. We observe a similar response on the San Andreas fault zone in southern California after the Landers earthquake sequence. Additionally, the offshore San Gregorio fault shows a seismicity rate increase where right-lateral/oblique shear stress was increased by the Loma Prieta earthquake despite also being clamped by it. These responses are consistent with either a low coefficient of static friction or high pore fluid pressures within the fault zones. We can explain the different behavior of the two styles of faults if those with large cumulative offset become impermeable through gouge buildup; coseismically pressurized pore fluids could be trapped and negate imposed normal stress changes, whereas in more limited offset faults fluids could rapidly escape. The difference in behavior between minor and major faults may explain why frictional failure criteria that apply intermediate coefficients of static friction can be effective in describing the broad distributions of aftershocks that follow large earthquakes, since many of these events occur both inside and outside major fault zones."} {"_id": "5d6959c6d37ed3cc910cd436865a4c2a73284c7c", "title": "Photoplethysmography and its detailed pulse waveform analysis for arterial stiffness", "text": "Arterial stiffness index is one of the biomechanical indices of vascular healthiness. These indexes are based on the detailed pulse waveform analysis presented here. After photoplethysmographic (PPG) pulse wave measurement, we decompose the pulse waveform for the estimation and determination of arterial elasticity. Firstly, the electro-optically measured PPG signal and the signal measured by electromechanical film (EMFi) are analyzed and investigated by dividing each wave into five logarithmic normal function components. For both the PPG and EMFi waveforms we can easily find a good fit between the original and the overlapped and summed wave components. Each wave component is assumed to resemble a certain phenomenon in the arteries, and certain indexes can be calculated, for example, based on the mutual timing of the components. Several studies have demonstrated that these kinds of indexes, calculated based on actual biomechanical processes, can predict future cardiovascular events.
Many dynamic factors, e.g., arterial stiffness, depend on fixed structural features of the arterial wall. For a more accurate description, arterial stiffness is estimated based on pulse wave decomposition analysis of the radial arterial wall, measured by the EMFi and PPG methods in parallel, and of the tibial arterial wall, measured by the PPG method. Elucidation of the precise relationship between endothelial function and arterial stiffness can be done through biomechanics. However, arterial wall elasticity still awaits further biomechanical studies relating clinical factors and the influence of arterial flexibility, resistance and ageing to the radial pulse waveform."} {"_id": "cb0c85c4eb75016a7098ca0c452e13812b9c95e9", "title": "Iterated learning and the evolution of language", "text": "Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins."} {"_id": "dc53c638f58bf3982c5a6ed82002d56c955763c2", "title": "Effective and efficient correlation analysis with application to Market Basket Analysis and Network Community Detection", "text": "Finding the most interesting correlations among items is essential for problems in many commercial, medical, and scientific domains. For example, what kinds of items should be recommended with regard to what has been purchased by a customer? How to arrange the store shelf in order to increase sales? How to partition the whole social network into several communities for successful advertising campaigns? Which set of individuals on a social network should we target to convince in order to trigger a large cascade of further adoptions? When conducting correlation analysis, traditional methods have both effectiveness and efficiency problems, which will be addressed in this dissertation. Here, we explore the effectiveness problem in three ways. First, we expand the set of desirable properties and study the property satisfaction for different correlation measures. Second, we study different techniques to adjust the original correlation measure, and propose two new correlation measures: the Simplified \u03c72 with Continuity Correction and the Simplified \u03c72 with Support. Third, we study the upper and lower bounds of different measures and categorize them by the bound differences. Combining the above three directions, we provide guidelines for users to choose the proper measure according to their situations. With the proper correlation measure, we start to solve the efficiency problem for large datasets. Here, we propose a fully-correlated itemset (FCI) framework to decouple the correlation measure from the need for efficient search. By wrapping the desired measure in our FCI framework, we take advantage of the desired measure's superiority in evaluating itemsets, eliminate itemsets with irrelevant items, and achieve good computational performance.
In addition, we identify a 1-dimensional monotone property of the upper bound of any good correlation measure, and different 2-dimensional"} {"_id": "afad9773e74db1927cff4c284dee8afbc4fb849d", "title": "Circularly polarized array antenna with corporate-feed network and series-feed elements", "text": "In this paper, corporate-feed circularly polarized microstrip array antennas are studied. The antenna element is a series-feed slot-coupled structure. Series feeding causes a sequential rotation effect at the element level. Antenna elements are then used to form the subarray by applying sequential rotation to their feeding. Arrays having 4, 16, and 64 elements were made. The maximum achieved gains are 15.3, 21, and 25.4 dBic, respectively. All arrays have less than 15 dB return loss and 3 dB axial ratio from 10 to 13 GHz. The patterns are all quite symmetrical."} {"_id": "5a131856df045cf27a2d5056cea2d2401e2d81b2", "title": "When Social Bots Attack: Modeling Susceptibility of Users in Online Social Networks", "text": "Social bots are automatic or semi-automatic computer programs that mimic humans and/or human behavior in online social networks. Social bots can attack users (targets) in online social networks to pursue a variety of latent goals, such as to spread information or to influence targets. Without a deep understanding of the nature of such attacks or the susceptibility of users, the potential of social media as an instrument for facilitating discourse or democratic processes is in jeopardy. In this paper, we study data from the Social Bot Challenge 2011, an experiment conducted by the WebEcologyProject during 2011, in which three teams implemented a number of social bots that aimed to influence user behavior on Twitter. Using this data, we aim to develop models to (i) identify susceptible users among a set of targets and (ii) predict users' level of susceptibility. We explore the predictiveness of three different groups of features (network, behavioral and linguistic features) for these tasks. Our results suggest that susceptible users tend to use Twitter for a conversational purpose and tend to be more open and social since they communicate with many different users, use more social words and show more affection than non-susceptible users."} {"_id": "618c47bc44c3b6fc27067211214afccce6f7cd2c", "title": "Speed Estimation and Abnormality Detection from Surveillance Cameras", "text": "Motivated by the increasing industry trends towards autonomous driving, vehicles, and transportation, we focus on developing a traffic analysis framework for the automatic exploitation of a large pool of available data relative to traffic applications. We propose a cooperative detection and tracking algorithm for the retrieval of vehicle trajectories in video surveillance footage based on deep CNN features that is ultimately used for two separate traffic analysis modalities: (a) vehicle speed estimation based on a state-of-the-art fully automatic camera calibration algorithm and (b) the detection of possibly abnormal events in the scene using robust optical flow descriptors of the detected vehicles and Fisher vector representations of spatiotemporal visual volumes. Finally, we measure the performance of our proposed methods on the NVIDIA AI CITY challenge evaluation dataset."} {"_id": "2230381a078241c4385bb9c20b385e8f0da70b9b", "title": "Blind detection of photomontage using higher order statistics", "text": "We investigate the prospect of using bicoherence features for blind image splicing detection.
Image splicing is an essential operation for digital photomontaging, which in turn is a technique for creating image forgery. We examine the properties of bicoherence features on a data set which contains image blocks of diverse image properties. We then demonstrate the limitation of the baseline bicoherence features for image splicing detection. Our investigation has led to two suggestions for improving the performance of bicoherence features, i.e., estimating the bicoherence features of the authentic counterpart and incorporating features that characterize the variance of the feature performance. The features derived from the suggestions are evaluated with support vector machine (SVM) classification and are shown to improve the image splicing detection accuracy from 62% to about 70%."} {"_id": "1f0396c08e8485358b3d1e13f451ed12ecfc1a77", "title": "Building a Question-Answering Chatbot using Forum Data in the Semantic Space", "text": "We build a conversational agent whose knowledge base is an online forum for parents of autistic children. We collect about 35,000 threads totalling some 600,000 replies, and label 1% of them for usefulness using Amazon Mechanical Turk. We train a Random Forest Classifier using sent2vec features to label the remaining thread replies. Then, we use word2vec to match user queries conceptually with a thread, and then a reply with a predefined context window."} {"_id": "ade7178613e4db90d6a551cb372aebae4c4fa0bf", "title": "Time series forecasting of cyber attack intensity", "text": "Cyber attacks occur on a near daily basis and are becoming exponentially more common. While some research aims to detect the characteristics of an attack, little focus has been given to patterns of attacks in general. This paper aims to exploit temporal correlations in the number of attacks per day in order to predict the future intensity of cyber incidents. Through analysis of attack data collected from Hackmageddon, correlation was found among reported attack volume in consecutive days. This paper presents a forecasting system that aims to predict the number of cyber attacks on a given day based only on a set of historical attack count data. Our system conducts ARIMA time series forecasting on all previously collected incidents to predict the expected number of attacks on a future date. Our tool is able to use only a subset of data relevant to a specific attack method. Prediction models are dynamically updated over time as new data is collected to improve accuracy. Our system outperforms naive forecasting methods by 14.1% when predicting attacks of any type, and by up to 21.2% when forecasting attacks of a specific type. Our system also produces a model which more accurately predicts future cyber attack intensity behavior."} {"_id": "c906c9b2daddf67ebd949c71fc707d697065c6a0", "title": "Semantics-Aware Machine Learning for Function Recognition in Binary Code", "text": "Function recognition in program binaries serves as the foundation for many binary instrumentation and analysis tasks. However, as binaries are usually stripped before distribution, function information is indeed absent in most binaries. So far, identifying functions in stripped binaries remains a challenge. Recent research work proposes to recognize functions in binary code through machine learning techniques. The recognition model, including typical function entry point patterns, is automatically constructed through learning.
However, we observed that as previous work only leverages syntax-level features to train the model, binary obfuscation techniques can undermine the pre-learned models in real-world usage scenarios. In this paper, we propose FID, a semantics-based method to recognize functions in stripped binaries. We leverage symbolic execution to generate semantic information and learn the function recognition model through well-performing machine learning techniques. FID extracts semantic information from binary code and, therefore, is effectively adapted to different compilers and optimizations. Moreover, we demonstrate that FID has high recognition accuracy on binaries transformed by widely-used obfuscation techniques. We evaluate FID with over four thousand test cases. Our evaluation shows that FID is comparable with previous work on normal binaries and notably outperforms existing tools on obfuscated code."} {"_id": "df6b604d1352d4bd81604730f9000d7a29574384", "title": "Beyond virtual museums: Experiencing immersive virtual reality in real museums", "text": "Contemporary museums are much more than places devoted to the placement and the exhibition of collections and artworks; indeed, they are nowadays considered a privileged means for communication and play a central role in making culture accessible to the mass audience. One of the keys to approaching the general public is the use of new technologies and novel interaction paradigms. These means, which bring with them an undeniable appeal, allow curators to modulate the cultural proposal by structuring different courses for different user profiles. Immersive virtual reality (VR) is probably one of the most appealing and potentially effective technologies to serve this purpose; nevertheless, it is still quite uncommon to find immersive installations in museums. Starting from our 10 years' experience in this topic, and following an in-depth survey of these technologies and their use in cultural contexts, we propose a classification of VR installations, specifically oriented to cultural heritage applications, based on their features in terms of interaction and immersion. On the basis of this classification, aiming to provide a tool for framing VR systems which would hopefully suggest indications related to costs, usability and quality of the sensorial experience, we analyze a series of live examples of which we point out strengths and weak points. We then summarize the current state and the near future, identifying the major issues that prevent these technologies from being actually widespread, and outline proposals for a more pervasive and effective use of them."} {"_id": "9aade3d26996ce7ef6d657130464504b8d812534", "title": "Face Alignment With Deep Regression", "text": "In this paper, we present a deep regression approach for face alignment. The deep regressor is a neural network that consists of a global layer and multistage local layers. The global layer estimates the initial face shape from the whole image, while the following local layers iteratively update the shape with local image observations. Combining standard derivations and numerical approximations, we make all layers able to backpropagate error differentials, so that we can apply the standard backpropagation to jointly learn the parameters from all layers.
We show that the resulting deep regressor gradually and evenly approaches the true facial landmarks stage by stage, avoiding a tendency that often occurs in cascaded regression methods and deteriorates overall performance: early-stage regressors yield high alignment accuracy gains, while later-stage regressors yield low gains. Experimental results on standard benchmarks demonstrate that our approach brings significant improvements over previous cascaded regression algorithms."} {"_id": "f85ccab7173e543f2bfd4c7a81fb14e147695740", "title": "A method to infer emotions from facial Action Units", "text": "We present a robust method to map detected facial Action Units (AUs) to six basic emotions. Automatic AU recognition is prone to errors due to illumination, tracking failures and occlusions. Hence, traditional rule-based methods to map AUs to emotions are very sensitive to false positives and misses among the AUs. In our method, a set of chosen AUs is mapped to the six basic emotions using a learned statistical relationship and a suitable matching technique. Relationships between the AUs and emotions are captured as template strings comprising the most discriminative AUs for each emotion. The template strings are computed using a concept called discriminative power. The Longest Common Subsequence (LCS) distance, an approach for approximate string matching, is applied to calculate the closeness of a test string of AUs to the template strings, and hence infer the underlying emotions. LCS is found to be efficient in handling practical issues like erroneous AU detection and helps to reduce false predictions. The proposed method is tested with various databases like CK+, ISL, FACS, JAFFE, MindReading and many real-world video frames. We compare our performance with rule-based techniques, and show clear improvement on both benchmark databases and real-world datasets."} {"_id": "7539293eaadec85917bcfcf4ecc53e7bdd41c227", "title": "Using phrases and document metadata to improve topic modeling of clinical reports", "text": "Probabilistic topic models provide an unsupervised method for analyzing unstructured text and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata in a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the behavior of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents.
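A rough stand-in for the phrase-aware clinical topic modeling described above can be assembled from off-the-shelf parts: n-gram counting approximates the chained n-gram phrases, and plain LDA replaces the metadata-weighted Dirichlet prior, which this sketch does not reproduce.

```python
# Rough stand-in for phrase-aware topic modeling of clinical notes:
# n-gram counts approximate phrase discovery; plain LDA replaces the
# authors' metadata-weighted Dirichlet prior, which is not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "patient reports chest pain and shortness of breath",
    "chest x ray shows no acute disease",
    "follow up for diabetes mellitus type two",
]  # hypothetical de-identified snippets

# Unigrams and bigrams let multiword concepts ("chest pain") surface as features.
vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(reports)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-5:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```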
The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports."} {"_id": "291c3f4393987f67cded328e984dbae84af643cb", "title": "Faster Dynamic Programming for Markov Decision Processes", "text": "Markov decision processes (MDPs) are a general framework used by Artificial Intelligence (AI) researchers to model decision-theoretic planning problems. Solving real-world MDPs has been a major and challenging research topic in the AI literature. This paper discusses two main groups of approaches to solving MDPs. The first group combines the strategies of heuristic search and dynamic programming to expedite the convergence process. The second makes use of graphical structures in MDPs to decrease the effort of classic dynamic programming algorithms. Two new algorithms proposed by the author, MBLAO* and TVI, are described here."} {"_id": "370063c5491147d88d57bbcd865eb5004484c1eb", "title": "A Review of Technical Approaches to Realizing Near-Field Communication Mobile Payments", "text": "This article describes and compares four approaches to storing payment keys and executing payment applications on mobile phones via near-field communication at the point of sale. Even though the comparison hinges on security--specifically, how well the keys and payment application are protected against misuse--other criteria such as hardware requirements, availability, management complexity, and performance are also identified and discussed."} {"_id": "21e480ad39c52d8e770810f8319750a34f8bc091", "title": "Exploiting geographic dependencies for real estate appraisal: a mutual perspective of ranking and clustering", "text": "It is traditionally a challenge for home buyers to understand, compare and contrast the investment values of real estates. While a number of estate appraisal methods have been developed to value real property, the performance of these methods has been limited by the traditional data sources for estate appraisal. However, with the development of new ways of collecting estate-related mobile data, there is a potential to leverage geographic dependencies of estates for enhancing estate appraisal. Indeed, the geographic dependencies of the value of an estate can be from the characteristics of its own neighborhood (individual), the values of its nearby estates (peer), and the prosperity of the affiliated latent business area (zone). To this end, in this paper, we propose a geographic method, named ClusRanking, for estate appraisal by leveraging the mutual enforcement of ranking and clustering power. ClusRanking is able to exploit geographic individual, peer, and zone dependencies in a probabilistic ranking model. Specifically, we first extract the geographic utility of estates from geography data, estimate the neighborhood popularity of estates by mining taxicab trajectory data, and model the influence of latent business areas via ClusRanking. Also, we use a linear model to fuse these three influential factors and predict estate investment values. Moreover, we simultaneously consider individual, peer and zone dependencies, and derive an estate-specific ranking likelihood as the objective function.
Finally, we conduct a comprehensive evaluation with real-world estate-related data, and the experimental results demonstrate the effectiveness of our method."} {"_id": "044a9cb24e2863c6bcaaf39b7a210fbb11b381e9", "title": "A Low-Bandwidth Network File System", "text": "Users rarely consider running network file systems over slow or wide-area networks, as the performance would be unacceptable and the bandwidth consumption too high. Nonetheless, efficient remote file access would often be desirable over such networks---particularly when high latency makes remote login sessions unresponsive. Rather than run interactive programs such as editors remotely, users could run the programs locally and manipulate remote files through the file system. To do so, however, would require a network file system that consumes less bandwidth than most current file systems. This paper presents LBFS, a network file system designed for low-bandwidth networks. LBFS exploits similarities between files or versions of the same file to save bandwidth. It avoids sending data over the network when the same data can already be found in the server's file system or the client's cache. Using this technique in conjunction with conventional compression and caching, LBFS consumes over an order of magnitude less bandwidth than traditional network file systems on common workloads."} {"_id": "964b6997a9c7852deff71d34894dbdc38d34fbdf", "title": "Algorithms for Delta Compression and Remote File Synchronization", "text": "Delta compression and remote file synchronization techniques are concerned with efficient file transfer over a slow communication link in the case where the receiving party already has a similar file (or files). This problem arises naturally, e.g., when distributing updated versions of software over a network or synchronizing personal files between different accounts and devices. More generally, the problem is becoming increasingly common in many network-based applications where files and content are widely replicated, frequently modified, and cut and reassembled in different contexts and packagings. In this chapter, we survey techniques, software tools, and applications for delta compression, remote file synchronization, and closely related problems. We first focus on delta compression, where the sender knows all the similar files that are held by the receiver. In the second part, we survey work on the related, but in many ways quite different, problem of remote file synchronization, where the sender does not have a copy of the files held by the receiver."} {"_id": "148ec401da7d5859a9488c0f9a34200de71cc824", "title": "Leases: An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency", "text": "Caching introduces the overhead and complexity of ensuring consistency, reducing some of its performance benefits. In a distributed system, caching must deal with the additional complications of communication and host failures.\nLeases are proposed as a time-based mechanism that provides efficient consistent access to cached data in distributed systems. Non-Byzantine failures affect performance, not correctness, with their effect minimized by short leases. An analytic model and an evaluation for file access in the V system show that leases of short duration provide good performance.
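The lease mechanism summarized above can be sketched as a small client/server protocol: the server grants time-limited read rights, and the client trusts its cache only while the lease is unexpired. Durations and structure are illustrative, not the V system implementation; writes, which must revoke or wait out outstanding leases, are omitted.

```python
# Minimal sketch of lease-based cache consistency: a server grants
# time-limited leases, and a client may serve cached data only while
# its lease is unexpired. Illustrative only; the write path is omitted.
import time

LEASE_SECONDS = 10.0  # short leases bound the cost of host failures

class Server:
    def __init__(self):
        self.data = {}

    def read_with_lease(self, key):
        # Grant the caller the right to cache this value until expiry.
        return self.data.get(key), time.monotonic() + LEASE_SECONDS

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}  # key -> (value, lease_expiry)

    def read(self, key):
        value, expiry = self.cache.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value                   # lease still valid: use cache
        value, expiry = self.server.read_with_lease(key)
        self.cache[key] = (value, expiry)  # renew lease on miss/expiry
        return value
```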
The impact of leases on performance grows more significant in systems of larger scale and higher processor performance."} {"_id": "41f1abe566060e53ad93d8cfa8c39ac582256868", "title": "Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial", "text": "The state machine approach is a general method for implementing fault-tolerant services in distributed systems. This paper reviews the approach and describes protocols for two different failure models\u2014Byzantine and fail stop. System reconfiguration techniques for removing faulty components and integrating repaired components are also discussed."} {"_id": "46f766c11df69808453e14c900bcb3f4e081fcae", "title": "Copy Detection Mechanisms for Digital Documents", "text": "In a digital library system, documents are available in digital form and therefore are more easily copied and their copyrights are more easily violated. This is a very serious problem, as it discourages owners of valuable information from sharing it with authorized users. There are two main philosophies for addressing this problem: prevention and detection. The former actually makes unauthorized use of documents difficult or impossible while the latter makes it easier to discover such activity. In this paper we propose a system for registering documents and then detecting copies, either complete copies or partial copies. We describe algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security). We also describe a working prototype, called COPS, describe implementation issues, and present experimental results that suggest the proper settings for copy detection parameters."} {"_id": "6db68f27bcb7c9c001bb0c144c1d0ac5d69a3f3a", "title": "Formal Analysis of Graphical Security Models", "text": "The increasing usage of computer-based systems in almost every aspect of our daily life makes the threat posed by potential attackers more and more dangerous, and a successful attack more and more rewarding. Moreover, the complexity of these systems is also increasing, including physical devices, software components and human actors interacting with each other to form so-called socio-technical systems. The importance of socio-technical systems to modern societies requires verifying their security properties formally, while their inherent complexity makes manual analyses impracticable.
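A common building block behind document copy detection systems such as COPS above is chunk-based overlap measurement. Below is a generic shingling sketch (hashed word n-grams plus a containment-style overlap score); it illustrates the idea only and is not the specific algorithm used in COPS.

```python
# Generic shingling sketch for document copy detection: hash word n-grams
# and compare overlap. Illustrates the idea behind systems such as COPS;
# the actual COPS algorithms and parameter settings differ.
def shingles(text, n=4):
    words = text.lower().split()
    return {hash(" ".join(words[i:i + n])) for i in range(len(words) - n + 1)}

def overlap(doc_a, doc_b, n=4):
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))  # containment-style score

registered = "the quick brown fox jumps over the lazy dog near the river bank"
suspect = "near the river bank the quick brown fox jumps over the lazy dog"
print(overlap(registered, suspect))  # a high score flags a possible copy
```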
Graphical models for security offer an unrivalled opportunity to describe socio-technical systems, for they allow representing different aspects like human behaviour, computation and physical phenomena in an abstract yet uniform manner. Moreover, these models can be assigned a formal semantics, thereby allowing formal verification of their properties. Finally, their appealing graphical notations make it possible to communicate security concerns in an understandable way even to non-experts, who are often in charge of the decision making. This dissertation argues that automated techniques can be developed on graphical security models to evaluate qualitative and quantitative security properties of socio-technical systems and to synthesise optimal attack and defence strategies. In support of this claim we develop analysis techniques for widely-used graphical security models such as attack trees and attack-defence trees. Our analyses cope with the optimisation of multiple parameters of an attack and defence scenario. Improving on the literature, in case of conflicting parameters such as probability and cost we compute the set of optimal solutions \u2026"} {"_id": "e88eec15946dd19bdcf69db882f204386e05ff48", "title": "Robust techniques for background subtraction in urban traffic video", "text": "Identifying moving objects from a video sequence is a fundamental and critical task in many computer-vision applications. A common approach is to perform background subtraction, which identifies moving objects from the portion of a video frame that differs significantly from a background model. There are many challenges in developing a good background subtraction algorithm. First, it must be robust against changes in illumination. Second, it should avoid detecting non-stationary background objects such as swinging leaves, rain, snow, and shadow cast by moving objects. Finally, its internal background model should react quickly to changes in background such as starting and stopping of vehicles. In this paper, we compare various background subtraction algorithms for detecting moving vehicles and pedestrians in urban traffic video sequences. We consider approaches varying from simple techniques such as frame differencing and adaptive median filtering, to more sophisticated probabilistic modeling techniques. While complicated techniques often produce superior performance, our experiments show that simple techniques such as adaptive median filtering can produce good results with much lower computational complexity."} {"_id": "2232416778783616736149c870a69beb13cda743", "title": "Face Recognition in a Meeting Room", "text": "In this paper, we investigate recognition of human faces in a meeting room. The major challenges of identifying human faces in this environment include low quality of input images, poor illumination, unrestricted head poses and continuously changing facial expressions and occlusion. In order to address these problems we propose a novel algorithm, Dynamic Space Warping (DSW). The basic idea of the algorithm is to combine local features under certain spatial constraints.
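One of the simple techniques highlighted in the background subtraction survey above, adaptive median filtering, fits in a few lines. The update nudges each background pixel by one gray level per frame, so the background converges toward the temporal median of the sequence; the step size and threshold below are illustrative choices.

```python
# Minimal sketch of adaptive median background subtraction, one of the
# simple techniques compared in the urban traffic survey above.
import numpy as np

def update_background(background, frame, step=1):
    # Nudge each background pixel toward the current frame by one step:
    # the background converges to the temporal median of the sequence.
    return background + step * np.sign(frame - background)

def foreground_mask(background, frame, threshold=25):
    return np.abs(frame.astype(int) - background.astype(int)) > threshold

# Usage on a stream of grayscale frames (uint8 arrays):
# background = frames[0].astype(int)
# for frame in frames[1:]:
#     mask = foreground_mask(background, frame)
#     background = update_background(background, frame)
```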
We compare DSW with the eigenface approach on data collected from various meetings. We have tested both front and profile face images and images with two stages of occlusion. The experimental results indicate that the DSW approach outperforms the eigenface approach in both cases."} {"_id": "74c24d7454a2408f766e4d9e507a0e9c3d80312f", "title": "A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks", "text": "A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes."} {"_id": "853860b6472b2c883be4085a3460042fc8b1af3e", "title": "Laplacian Auto-Encoders: An explicit learning of nonlinear data manifold", "text": "A key factor contributing to the success of many auto-encoder-based deep learning techniques is the implicit consideration of the underlying data manifold in their training criteria. In this paper, we aim to make this consideration more explicit by training auto-encoders completely from the manifold learning perspective. We propose a novel unsupervised manifold learning method termed Laplacian Auto-Encoders (LAEs). Starting from a general regularized function learning framework, LAE regularizes the training of auto-encoders so that the learned encoding function has the locality-preserving property for data points on the manifold. By exploiting the analogy between the graph Laplacian and the Laplace\u2013Beltrami operator on the continuous manifold, we derive discrete approximations of the first- and higher-order auto-encoder regularizers that can be applied in practical scenarios, where only data points sampled from the distribution on the manifold are available. Our proposed LAE has potentially better generalization capability, due to its explicit respect of the underlying data manifold. Extensive experiments on benchmark visual classification datasets show that LAE consistently outperforms alternative auto-encoders recently proposed in deep learning literature, especially when training samples are relatively scarce."} {"_id": "81fd1a1f72963ce16ddebacea82e71dab3d6992a", "title": "Interactive surface design with interlocking elements", "text": "We present an interactive tool for designing physical surfaces made from flexible interlocking quadrilateral elements of a single size and shape.
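The locality-preserving regularizer at the heart of the Laplacian Auto-Encoder abstract above can be written down directly. A minimal sketch, assuming hidden codes `H` and a k-NN adjacency matrix `W` are already available; the autoencoder and its reconstruction loss are omitted.

```python
# Sketch of the locality-preserving regularizer behind Laplacian
# Auto-Encoders: penalize encodings of neighboring points that differ.
# H holds hidden codes; W is a (hypothetical) k-NN adjacency matrix.
import numpy as np

def laplacian_penalty(H, W):
    # H: (n, d) hidden representations; W: (n, n) symmetric weights.
    D = np.diag(W.sum(axis=1))
    L = D - W                      # unnormalized graph Laplacian
    return np.trace(H.T @ L @ H)   # = 0.5 * sum_ij W_ij * ||h_i - h_j||^2

# total_loss = reconstruction_loss + lam * laplacian_penalty(H, W)
```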
With the element shape fixed, the design task becomes one of finding a discrete structure---i.e., element connectivity and binary orientations---that leads to a desired geometry. In order to address this challenging problem of combinatorial geometry, we propose a forward modeling tool that allows the user to interactively explore the space of feasible designs. Paralleling principles from conventional modeling software, our approach leverages a library of base shapes that can be instantiated, combined, and extended using two fundamental operations: merging and extrusion. In order to assist the user in building the designs, we furthermore propose a method to automatically generate assembly instructions. We demonstrate the versatility of our method by creating a diverse set of digital and physical examples that can serve as personalized lamps or decorative items."} {"_id": "90a5393b72b85ec21fae9a108ed5dd3e99837701", "title": "The Role of Instructional Design in Persuasion: A Comics Approach for Improving Cybersecurity", "text": "Although computer security technologies are the first line of defence to secure users, their success is dependent on individuals\u2019 behaviour. It is therefore necessary to persuade users to practice good computer security. Our interview analysis of users\u2019 conceptualization of security (password guessing attacks, antivirus protection, and mobile online privacy) shows that poor understanding of security threats influences users\u2019 motivation and ability to practice safe behaviours. We designed and developed an online interactive comic series called Secure Comics based on instructional design principles to address this problem. An eye-tracking experiment suggests that the graphical and interactive components of the comics direct users\u2019 attention and facilitate comprehension of the information. In our evaluations of Secure Comics, results from several user studies show that the comics improve understanding and motivate positive changes in security management behaviour. We discuss the implications of the findings to better understand the role of instructional design and persuasion in education technology."} {"_id": "4c9f9f228390d1370e0df91d2565a2559444431d", "title": "SimRT: an automated framework to support regression testing for data races", "text": "Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected.
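The selection-plus-prioritization pipeline in the SimRT abstract above amounts to two small steps, sketched below with hypothetical data structures: rerun only tests whose coverage intersects the changed program elements, ordered by a race-related coverage count.

```python
# Toy sketch of race-directed regression test selection in the spirit of
# SimRT: rerun only tests that cover program elements affected by a change,
# ordered by how many shared-memory accesses they exercise. The data
# structures here are hypothetical placeholders.
def select_and_prioritize(tests, coverage, race_related, changed):
    # coverage: test -> set of program elements it executes
    # race_related: test -> count of shared-memory accesses exercised
    selected = [t for t in tests if coverage[t] & changed]
    return sorted(selected, key=lambda t: race_related[t], reverse=True)

tests = ["t1", "t2", "t3"]
coverage = {"t1": {"lock_a", "buf"}, "t2": {"ui"}, "t3": {"buf"}}
race_related = {"t1": 12, "t2": 0, "t3": 5}
print(select_and_prioritize(tests, coverage, race_related, changed={"buf"}))
# -> ['t1', 't3']
```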
Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance."} {"_id": "9da15b932df57a8f959471ebc977d620efb18cc1", "title": "PicToSeek: combining color and shape invariant features for image retrieval", "text": "We aim at combining color and shape invariants for indexing and retrieving images. To this end, color models are proposed independent of the object geometry, object pose, and illumination. From these color models, color invariant edges are derived from which shape invariant features are computed. Computational methods are described to combine the color and shape invariants into a unified high-dimensional invariant feature set for discriminatory object retrieval. Experiments have been conducted on a database consisting of 500 images taken from multicolored man-made objects in real world scenes. From the theoretical and experimental results it is concluded that object retrieval based on composite color and shape invariant features provides excellent retrieval accuracy. Object retrieval based on color invariants provides very high retrieval accuracy whereas object retrieval based entirely on shape invariants yields poor discriminative power. Furthermore, the image retrieval scheme is highly robust to partial occlusion, object clutter and a change in the object's pose. Finally, the image retrieval scheme is integrated into the PicToSeek system on-line at http://www.wins.uva.nl/research/isis/PicToSeek/ for searching images on the World Wide Web."} {"_id": "3923c0deee252ba10562a4378fc2bbc4885282b3", "title": "Fake Colorized Image Detection", "text": "Image forensics aims to detect the manipulation of digital images. Currently, splicing detection, copy-move detection, and image retouching detection are attracting significant attention from researchers. However, image editing techniques develop over time. An emerging image editing technique is colorization, in which grayscale images are colorized with realistic colors. Unfortunately, this technique may also be intentionally applied to certain images to confound object recognition algorithms. To the best of our knowledge, no forensic technique has yet been invented to identify whether an image is colorized. We observed that, compared with natural images, colorized images, which are generated by three state-of-the-art methods, possess statistical differences for the hue and saturation channels. Besides, we also observe statistical inconsistencies in the dark and bright channels, because the colorization process will inevitably affect the dark and bright channel values. Based on our observations, i.e., potential traces in the hue, saturation, dark, and bright channels, we propose two simple yet effective detection methods for fake colorized images: Histogram-based fake colorized image detection and feature encoding-based fake colorized image detection. Experimental results demonstrate that both proposed methods exhibit a decent performance against multiple state-of-the-art colorization approaches."} {"_id": "63db36cb0b5c8dad17a0c02ab95fde805d585513", "title": "Assessing patient suitability for short-term cognitive therapy with an interpersonal focus", "text": "In the current study, the development and initial validation of the Suitability for Short-Term Cognitive Therapy (SSCT) interview procedure is reported. 
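A simplified rendering of the histogram-based detector described in the fake colorized image abstract above: hue and saturation histograms as features, an SVM as the classifier. The dark/bright channel features the paper also exploits are omitted, and the random arrays below merely stand in for labeled training images.

```python
# Simplified version of histogram-based fake-colorized-image detection:
# hue/saturation histograms as features, an SVM as the classifier. The
# random arrays are stand-ins for real labeled HSV images.
import numpy as np
from sklearn.svm import SVC

def hs_histogram(hsv_image, bins=16):
    # hsv_image: (H, W, 3) float array with channels in [0, 1]
    h = np.histogram(hsv_image[..., 0], bins=bins, range=(0, 1), density=True)[0]
    s = np.histogram(hsv_image[..., 1], bins=bins, range=(0, 1), density=True)[0]
    return np.concatenate([h, s])

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64, 3))   # stand-in HSV images
labels = rng.integers(0, 2, size=40)   # stand-in natural (0) / colorized (1)
X = np.stack([hs_histogram(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```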
The SSCT is an interview and rating procedure designed to evaluate the potential appropriateness of patients for short-term cognitive therapy with an interpersonal focus. It consists of a 1-hour, semistructured interview, focused on eliciting information from the patient relevant to nine selection criteria. The procedures involved in the development of this scale are described in detail, and preliminary evidence suggesting that the selection criteria can be rated reliably is presented. In addition, data indicating that scores on the SSCT scale predict the outcome of short-term cognitive therapy on multiple dependent measures, including both therapist and patient perspectives, are reported. It is concluded that the SSCT is a potentially useful scale for identifying patients who may be suitable, or unsuitable, for the type of short-term cognitive therapy administered in the present study."} {"_id": "2dc18f661b400033abd1086b917c451d3358aef2", "title": "Visible Machine Learning for Biomedicine", "text": "A major ambition of artificial intelligence lies in translating patient data to successful therapies. Machine learning models face particular challenges in biomedicine, however, including handling of extreme data heterogeneity and lack of mechanistic insight into predictions. Here, we argue for \"visible\" approaches that guide model structure with experimental biology."} {"_id": "09e941ab733b2c6c26261cb85f00f145d9063b0b", "title": "Automated Text Summarization In SUMMARIST", "text": "SUMMARIST is an attempt to create a robust automated text summarization system, based on the \u2018equation\u2019: summarization = topic identification + interpretation + generation. Each of these stages contains several independent modules, many of them trained on large corpora of text. We describe the system\u2019s architecture and provide details of some of its modules."} {"_id": "03ce7c63eea901962dfae539b3ca6c77d65c5c38", "title": "Spike-Based Synaptic Plasticity in Silicon: Design, Implementation, Application, and Challenges", "text": "The ability to carry out signal processing, classification, recognition, and computation in artificial spiking neural networks (SNNs) is mediated by their synapses. In particular, through activity-dependent alteration of their efficacies, synapses play a fundamental role in learning. The mathematical prescriptions under which synapses modify their weights are termed synaptic plasticity rules. These learning rules can be based on abstract computational neuroscience models or on detailed biophysical ones. As these rules are being proposed and developed by experimental and computational neuroscientists, engineers strive to design and implement them in silicon and en masse in order to employ them in complex real-world applications. In this paper, we describe analog very large-scale integration (VLSI) circuit implementations of multiple synaptic plasticity rules, ranging from phenomenological ones (e.g., based on spike timing, mean firing rates, or both) to biophysically realistic ones (e.g., calcium-dependent models). 
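Among the phenomenological rules surveyed in the synaptic plasticity abstract above, spike-timing-dependent plasticity is the most common; a textbook pair-based form is sketched below with illustrative constants.

```python
# Sketch of a pair-based spike-timing-dependent plasticity (STDP) rule,
# the kind of phenomenological learning rule surveyed in the neuromorphic
# VLSI paper above. The amplitudes and time constant are illustrative.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate
        return A_PLUS * np.exp(-dt / TAU)
    else:         # post before (or with) pre: depress
        return -A_MINUS * np.exp(dt / TAU)

print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))
```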
We discuss the application domains, weaknesses, and strengths of various representative approaches proposed in the literature, and provide insight into the challenges that engineers face when designing and implementing synaptic plasticity rules in VLSI technology for utilizing them in real-world applications."} {"_id": "bc9f5d844ea6b989feb989cc6c8fc34f721a6b06", "title": "Networks versus markets in international trade", "text": "I propose a network/search view of international trade in differentiated products. I present evidence that supports the view that proximity and common language/colonial ties are more important for differentiated products than for products traded on organized exchanges in matching international buyers and sellers, and that search barriers to trade are higher for differentiated than for homogeneous products. I also discuss alternative explanations for the findings."} {"_id": "3973e14770350ed54ba1272aa3e19b4d21f5dad3", "title": "Obstacle Detection and Tracking for the Urban Challenge", "text": "This paper describes the obstacle detection and tracking algorithms developed for Boss, which is Carnegie Mellon University's winning entry in the 2007 DARPA Urban Challenge. We describe the tracking subsystem and show how it functions in the context of the larger perception system. The tracking subsystem gives the robot the ability to understand complex scenarios of urban driving to safely operate in the proximity of other vehicles. The tracking system fuses sensor data from more than a dozen sensors with additional information about the environment to generate a coherent situational model. A novel multiple-model approach is used to track the objects based on the quality of the sensor data. Finally, the architecture of the tracking subsystem explicitly abstracts each of the levels of processing. The subsystem can easily be extended by adding new sensors and validation algorithms."} {"_id": "6a694487451957937adddbd682d3851fabd45626", "title": "Question answering passage retrieval using dependency relations", "text": "State-of-the-art question answering (QA) systems employ term-density ranking to retrieve answer passages. Such methods often retrieve incorrect passages as relationships among question terms are not considered. Previous studies attempted to address this problem by matching dependency relations between questions and answers. They used strict matching, which fails when semantically equivalent relationships are phrased differently. We propose fuzzy relation matching based on statistical models. We present two methods for learning relation mapping scores from past QA pairs: one based on mutual information and the other on expectation maximization. Experimental results show that our method significantly outperforms state-of-the-art density-based passage retrieval methods by up to 78% in mean reciprocal rank. Relation matching also brings about a 50% improvement in a system enhanced by query expansion."} {"_id": "a3676ecae39afd35b1f7075fc630e28cfbb5a188", "title": "Nitro: Hardware-Based System Call Tracing for Virtual Machines", "text": "Virtual machine introspection (VMI) describes the method of monitoring and analyzing the state of a virtual machine from the hypervisor level. This lends itself well to security applications, though the hardware virtualization support from Intel and AMD was not designed with VMI in mind. This results in many challenges for developers of hardware-supported VMI systems.
This paper describes the design and implementation of our prototype framework, Nitro, for system call tracing and monitoring. Since Nitro is a purely VMI-based system, it remains isolated from attacks originating within the guest operating system and is not directly visible from within the guest. Nitro is extremely flexible as it supports all three system call mechanisms provided by the Intel x86 architecture and has been proven to work in Windows, Linux, 32-bit, and 64-bit environments. The high performance of our system allows for real-time capturing and dissemination of data without hindering usability. This is supported by extensive testing with various guest operating systems. In addition, Nitro is resistant to circumvention attempts due to a construction called hardware rooting. Finally, Nitro surpasses similar systems in both performance and functionality."} {"_id": "4b31ec67990a5fa81e7c1cf9fa2dbebcb91ded59", "title": "Adapting Naive Bayes to Domain Adaptation for Sentiment Analysis", "text": "In the community of sentiment analysis, supervised learning techniques have been shown to perform very well. When transferred to another domain, however, a supervised sentiment classifier often performs extremely badly. This is the so-called domain-transfer problem. In this work, we attempt to attack this problem by making the maximum use of both the old-domain data and the unlabeled new-domain data. To leverage knowledge from the old-domain data, we propose an effective measure, i.e., Frequently Co-occurring Entropy (FCE), to pick out generalizable features that occur frequently in both domains and have similar occurring probability. To gain knowledge from the new-domain data, we propose Adapted Na\u00efve Bayes (ANB), a weighted transfer version of the Naive Bayes Classifier. The experimental results indicate that the proposed approach could improve the performance of the base classifier dramatically, and even provide much better performance than the transfer-learning baseline, i.e. the Na\u00efve Bayes Transfer Classifier (NTBC)."} {"_id": "654952a9cc4f3526dda8adf220a50a27a5c91449", "title": "DendroPy: a Python library for phylogenetic computing", "text": "UNLABELLED\nDendroPy is a cross-platform library for the Python programming language that provides for object-oriented reading, writing, simulation and manipulation of phylogenetic data, with an emphasis on phylogenetic tree operations. DendroPy uses a splits-hash mapping to perform rapid calculations of tree distances, similarities and shape under various metrics. It contains rich simulation routines to generate trees under a number of different phylogenetic and coalescent models.
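The feature-selection idea behind FCE in the sentiment domain adaptation abstract above can be approximated with a toy criterion: keep words that are frequent in both domains and have similar relative frequencies. This is an illustrative stand-in, not the paper's exact entropy formula.

```python
# Toy sketch of picking "generalizable" sentiment features across domains,
# in the spirit of the FCE measure above: keep words frequent in both the
# old and new domain with similar occurrence probabilities. This is an
# illustrative criterion, not the paper's exact formula.
from collections import Counter

def generalizable_features(old_docs, new_docs, min_count=2, max_ratio=2.0):
    old = Counter(w for d in old_docs for w in d.split())
    new = Counter(w for d in new_docs for w in d.split())
    n_old, n_new = sum(old.values()), sum(new.values())
    keep = []
    for w in set(old) & set(new):
        if old[w] < min_count or new[w] < min_count:
            continue
        p_old, p_new = old[w] / n_old, new[w] / n_new
        if max(p_old, p_new) / min(p_old, p_new) <= max_ratio:
            keep.append(w)   # similar occurring probability in both domains
    return keep

print(generalizable_features(
    ["great product works great", "bad battery bad"],
    ["great movie great plot", "bad acting bad"]))
```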
DendroPy's data simulation and manipulation facilities, in conjunction with its support of a broad range of phylogenetic data formats (NEXUS, Newick, PHYLIP, FASTA, NeXML, etc.), allow it to serve a useful role in various phyloinformatics and phylogeographic pipelines.\n\n\nAVAILABILITY\nThe stable release of the library is available for download and automated installation through the Python Package Index site (http://pypi.python.org/pypi/DendroPy), while the active development source code repository is available to the public from GitHub (http://github.com/jeetsukumaran/DendroPy)."} {"_id": "6a69cab99a68869b2f6361c6a3004657e2deeae4", "title": "Ground plane segmentation for mobile robot visual navigation", "text": "We describe a method of mobile robot monocular visual navigation, which uses multiple visual cues to detect and segment the ground plane in the robot\u2019s field of view. Corner points are tracked through an image sequence and grouped into coplanar regions using a method which we call an H-based tracker. The H-based tracker employs planar homographies and is initialised by 5-point planar projective invariants. This allows us to detect ground plane patches and the colour within such patches is subsequently modelled. These patches are grown by colour classification to give a ground plane segmentation, which is then used as an input to a new variant of the artificial potential field algorithm."} {"_id": "13c48b8c10022b4b2262c5d12f255e21f566cecc", "title": "Practical design considerations for a LLC multi-resonant DC-DC converter in battery charging applications", "text": "In this paper, a resonant tank design procedure and practical design considerations are presented for a high performance LLC multi-resonant dc-dc converter in a two-stage smart battery charger for neighborhood electric vehicle applications. The multi-resonant converter has been analyzed and its performance characteristics are presented. It eliminates both low- and high-frequency current ripple on the battery, thus maximizing battery life without penalizing the volume of the charger. Simulation and experimental results are presented for a prototype unit converting 390 V from the input dc link to an output voltage range of 48 V to 72 V dc at 650 W. The prototype achieves a peak efficiency of 96%."} {"_id": "27229aff757b797d0cae7bead5a236431b253b91", "title": "Predictive State Temporal Difference Learning", "text": "We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning.
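The two ingredients of PSTD named above, an SSID-style linear compression and a TD value estimate, can be sketched schematically. Everything below (the SVD of a feature/future cross-covariance, the plain TD(0) sweep) is a simplified rendering of the idea, not the authors' estimator.

```python
# Schematic rendering of the two PSTD ingredients described above:
# (1) compress a large feature set to a few predictive directions via the
# SVD of a feature/future cross-covariance; (2) run linear TD(0) in the
# compressed space. Simplified; not the authors' statistical estimator.
import numpy as np

def compress(Phi, Psi, k):
    # Phi: (T, n) current features; Psi: (T, m) future observations.
    C = Phi.T @ Psi / len(Phi)           # cross-covariance
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    return U[:, :k]                      # top-k predictive directions

def td_weights(Phi, rewards, U, gamma=0.9, lr=0.1, sweeps=50):
    X = Phi @ U                          # compressed features
    w = np.zeros(X.shape[1])
    for _ in range(sweeps):
        for t in range(len(X) - 1):
            delta = rewards[t] + gamma * X[t + 1] @ w - X[t] @ w
            w += lr * delta * X[t]       # Bellman/TD update
    return w
```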
As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem."} {"_id": "21ff1d20dd7b3e6b1ea02036c0176d200ec5626d", "title": "Loss Max-Pooling for Semantic Image Segmentation", "text": "We introduce a novel loss max-pooling concept for handling imbalanced training data distributions, applicable as an alternative loss layer in the context of deep neural networks for semantic image segmentation. Most real-world semantic segmentation datasets exhibit long tail distributions with few object categories comprising the majority of data and consequently biasing the classifiers towards them. Our method adaptively re-weights the contributions of each pixel based on their observed losses, targeting under-performing classification results as often encountered for under-represented object classes. Our approach goes beyond conventional cost-sensitive learning attempts through adaptive considerations that allow us to indirectly address both inter- and intra-class imbalances. We provide a theoretical justification of our approach, complementary to experimental analyses on benchmark datasets. In our experiments on the Cityscapes and Pascal VOC 2012 segmentation datasets we find consistently improved results, demonstrating the efficacy of our approach."} {"_id": "87982ff47c0614cf40204970208312abe943641f", "title": "Comparing and evaluating the sentiment on newspaper articles: A preliminary experiment", "text": "Recent years have brought considerable growth in the volume of research in Sentiment Analysis, mostly on highly subjective text types like movie or product reviews. The main difference these texts have with news articles is that their target is apparently defined and unique across the text. Hence, while dealing with news articles, we performed three subtasks: identifying the target, separating good and bad news content from the good and bad sentiment expressed on the target, and analyzing clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. On completing these tasks, we present our work on mining opinions about three different Indian political parties during elections in the year 2009. We built a corpus of 689 opinion-rich instances from three different English dailies, namely The Hindu, Times of India and Economic Times, extracted from 02/01/2009 to 05/01/2009 (MM/DD/YY), in which (a) we tested the relative suitability of various sentiment analysis methods (both machine learning and lexical based) and (b) we attempted to separate positive or negative opinion from good or bad news. Evaluation includes a comparison of three sentiment analysis methods (two machine learning based and one lexical based) and an analysis of the choice of certain words used in political text which influence public sentiment in polls.
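A much-simplified instance of the loss max-pooling idea above: instead of averaging the per-pixel losses uniformly, average only the largest ones so that under-performing pixels dominate the gradient. The keep ratio is an illustrative hyper-parameter, and keeping the top-k losses is only one simple case of the re-weighting the paper formalizes.

```python
# Simplified instance of loss max-pooling: concentrate the training
# signal on the highest-loss pixels instead of averaging uniformly.
# Top-k selection is one simple case of the paper's re-weighting.
import numpy as np

def loss_max_pool(pixel_losses, keep_ratio=0.25):
    flat = pixel_losses.ravel()
    k = max(1, int(keep_ratio * flat.size))
    top = np.partition(flat, -k)[-k:]    # k largest per-pixel losses
    return top.mean()                    # under-performing pixels dominate

losses = np.random.default_rng(0).gamma(2.0, 1.0, size=(4, 8, 8))
print(loss_max_pool(losses))
```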
This preliminary experiment should help in predicting and forecasting the winning party in the forthcoming 2014 Indian elections."} {"_id": "2538e3eb24d26f31482c479d95d2e26c0e79b990", "title": "Natural Language Processing (almost) from Scratch", "text": "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements."} {"_id": "317deb87586baa4ee7c7b5dfc603ebed94d1da07", "title": "Deep Learning for Efficient Discriminative Parsing", "text": "We propose a new fast purely discriminative algorithm for natural language parsing, based on a \u201cdeep\u201d recurrent convolutional graph transformer network (GTN). Assuming a decomposition of a parse tree into a stack of \u201clevels\u201d, the network predicts a level of the tree taking into account predictions of previous levels. Using only a few basic text features which leverage word representations from Collobert and Weston (2008), we show similar performance (in F1 score) to existing pure discriminative parsers and existing \u201cbenchmark\u201d parsers (like the Collins parser and probabilistic context-free grammar based parsers), with a huge speed advantage."} {"_id": "0354210007fbe92385acf407549b5cacb41b5835", "title": "Distributed and overlapping representations of faces and objects in ventral temporal cortex.", "text": "The functional architecture of the object vision pathway in the human brain was investigated using functional magnetic resonance imaging to measure patterns of response in ventral temporal cortex while subjects viewed faces, cats, five categories of man-made objects, and nonsense pictures. A distinct pattern of response was found for each stimulus category. The distinctiveness of the response to a given category was not due simply to the regions that responded maximally to that category, because the category being viewed also could be identified on the basis of the pattern of response when those regions were excluded from the analysis. Patterns of response that discriminated among all categories were found even within cortical regions that responded maximally to only one category. These results indicate that the representations of faces and objects in ventral temporal cortex are widely distributed and overlapping."} {"_id": "04cc04457e09e17897f9256c86b45b92d70a401f", "title": "A latent factor model for highly multi-relational data", "text": "Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to break down when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations.
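The bilinear structure referred to at the start of the latent factor abstract above scores a triple (subject, relation, object) as e_s^T W_r e_o. A toy scoring function with random, untrained parameters and illustrative dimensions:

```python
# Sketch of bilinear scoring for multi-relational data: each entity gets
# an embedding, each relation a matrix, and a triple is scored as
# e_s^T W_r e_o. Dimensions and random initialization are illustrative;
# the paper's sparse shared factors are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, d = 100, 12, 8
E = rng.normal(size=(n_entities, d))         # entity embeddings
W = rng.normal(size=(n_relations, d, d))     # one bilinear form per relation

def score(subject, relation, obj):
    return E[subject] @ W[relation] @ E[obj]

print(score(3, 5, 42))   # higher = more plausible triple (after training)
```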
Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we match or outperform state-of-the-art results. Finally, an NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations."} {"_id": "052b1d8ce63b07fec3de9dbb583772d860b7c769", "title": "Learning representations by back-propagating errors", "text": "We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal \u2018hidden\u2019 units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure."} {"_id": "50a6b2b84a9d11ed168ce6380ff17e76136cdfe7", "title": "A memory insensitive technique for large model simplification", "text": "In this paper we propose three simple, but significant improvements to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [20] which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of compute systems. The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the requirement of having enough memory to hold (even) the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses. Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the \"representative vertex\" of a grid cell to an optimal position inside the cell."} {"_id": "22630a79f1c50603c1356f6ac9dc8524a18d4061", "title": "SecondNet: a data center network virtualization architecture with bandwidth guarantees", "text": "In this paper, we propose virtual data center (VDC) as the unit of resource allocation for multiple tenants in the cloud. VDCs are more desirable than physical data centers because the resources allocated to VDCs can be rapidly adjusted as tenants' needs change. To enable the VDC abstraction, we design a data center network virtualization architecture called SecondNet. SecondNet achieves scalability by distributing all the virtual-to-physical mapping, routing, and bandwidth reservation state in server hypervisors. Its port-switching based source routing (PSSR) further makes SecondNet applicable to arbitrary network topologies using commodity servers and switches.
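The weight-adjustment procedure described in the back-propagation abstract above is easy to show end to end on a toy problem. Below is a miniature two-layer sigmoid network trained on XOR with squared error; layer sizes, learning rate, and iteration count are illustrative, and convergence to [0, 1, 1, 0] holds for typical random seeds.

```python
# Worked miniature of back-propagation: a two-layer sigmoid network
# trained to reduce squared error on XOR. All hyper-parameters are
# illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
Xb = np.hstack([X, np.ones((4, 1))])        # constant input acts as a bias

W1 = rng.normal(size=(3, 4))                # input+bias -> 4 hidden units
W2 = rng.normal(size=(5, 1))                # hidden+bias -> 1 output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(Xb @ W1)                    # forward pass: hidden units
    hb = np.hstack([h, np.ones((4, 1))])
    out = sigmoid(hb @ W2)                  # forward pass: output
    # Backward pass: propagate error differentials through each layer.
    g2 = (out - y) * out * (1 - out)
    g1 = (g2 @ W2[:4].T) * h * (1 - h)
    W2 -= 0.5 * hb.T @ g2                   # gradient-descent weight updates
    W1 -= 0.5 * Xb.T @ g1

print(np.round(out.ravel(), 2))             # typically approaches [0, 1, 1, 0]
```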
SecondNet introduces a centralized VDC allocation algorithm for bandwidth guaranteed virtual to physical mapping. Simulations demonstrate that our VDC allocation achieves high network utilization and low time complexity. Our implementation and experiments show that we can build SecondNet on top of various network topologies, and SecondNet provides bandwidth guarantee and elasticity, as designed."} {"_id": "5af83b56353c5fba0518c203d192ffb6375cd986", "title": "A deep multiple instance model to predict prostate cancer metastasis from nuclear morphology", "text": "We consider the problem of identifying the patients who are diagnosed with high-grade prostate cancer using the histopathology of tumor in a prostate needle biopsy and are at a very high risk of lethal cancer progression. We hypothesize that the morphology of tumor cell nuclei in digital images from the biopsy can be used to predict tumor aggressiveness and posit the presence of metastasis as a surrogate for disease-specific mortality. For this purpose, we apply a compositional multi-instance learning approach which encodes images of nuclei through a convolutional neural network, then predicts the presence of metastasis from sets of encoded nuclei. Through experiments on prostate needle biopsies (PNBX) from a patient cohort with known presence (M1 stage, n = 85) or absence (M0 stage, n = 86) of metastatic disease, we obtained an average area under the receiver operating characteristic curve of 0.71 \u00b1 0.08 for predicting metastatic cases. These results support our hypothesis that information related to the metastatic capacity of prostate cancer cells can be obtained through analysis of nuclei, and establish a baseline for future research aimed at predicting the risk of future metastatic disease at a time when it might be preventable."} {"_id": "d055b799c521b28bd4d6bf2fc905819d8e88207c", "title": "Design of a dual circular polarization microstrip patch array antenna", "text": "Design of a microstrip array antenna to achieve dual circular polarization is proposed in this paper. The proposed antenna is a 2\u00d72 array antenna where each patch element is circularly polarized. The feed network has microstrip lines, cross slot lines and air-bridges. The array antenna can excite both right-hand circular polarization (RHCP) and left-hand circular polarization (LHCP) without using any 90\u00b0 hybrid circuit or PIN diode. \u201cBoth-sided MIC Technology\u201d is used to design the feed network as it provides flexibility to place several types of transmission lines on both sides of the dielectric substrate. The design frequency of the proposed array antenna is 10 GHz. The simulated return loss exhibits an impedance bandwidth of greater than 5% and the 3-dB axial ratio bandwidths for both RHCP and LHCP are approximately 1.39%. The structure and the basic operation along with the simulation results of the proposed dual circularly polarized array antenna are demonstrated in this paper."} {"_id": "2b72dd0d33e0892436394ef7642c6b517a1c71fd", "title": "Matching Visual Saliency to Confidence in Plots of Uncertain Data", "text": "Conveying data uncertainty in visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. This paper proposes a methodology for efficiently generating density plots of uncertain multivariate data sets that draws viewers to preattentively identify values of high certainty while not calling attention to uncertain values.
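The compositional multi-instance model described in the prostate cancer abstract above encodes nuclei individually and classifies an aggregate of the encodings. A schematic PyTorch rendering with a tiny stand-in encoder; mean pooling is one simple choice of aggregator, not necessarily the authors'.

```python
# Schematic multiple-instance aggregation: encode each nucleus image with
# a CNN, pool the encodings over the biopsy, classify the pooled vector.
# The encoder below is a tiny stand-in, not the authors' trained network.
import torch
import torch.nn as nn

class NucleiMIL(nn.Module):
    def __init__(self, encoder, feat_dim=64):
        super().__init__()
        self.encoder = encoder                  # CNN over single nuclei
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, nuclei):                  # (n_nuclei, C, H, W)
        feats = self.encoder(nuclei)            # (n_nuclei, feat_dim)
        bag = feats.mean(dim=0)                 # pool the set of nuclei
        return self.classifier(bag)             # metastasis logit

encoder = nn.Sequential(nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1),
                        nn.Flatten(), nn.Linear(8, 64))
model = NucleiMIL(encoder)
print(model(torch.randn(50, 3, 32, 32)).shape)  # torch.Size([1])
```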
We demonstrate how to augment scatter plots and parallel coordinates plots to incorporate statistically modeled uncertainty and show how to integrate them with existing multivariate analysis techniques, including outlier detection and interactive brushing. Computing high quality density plots can be expensive for large data sets, so we also describe a probabilistic plotting technique that summarizes the data without requiring explicit density plot computation. These techniques have been useful for identifying brain tumors in multivariate magnetic resonance spectroscopy data and we describe how to extend them to visualize ensemble data sets."} {"_id": "7d9facefffc720079d837aa421ab79d4856e2c88", "title": "Lightweight, High-Force Gripper Inspired by Chuck Clamping Devices", "text": "In this letter, we present a novel gripper, whose design was inspired by chuck clamping devices, for transferring heavy objects and assembling parts precisely in industrial applications. The developed gripper is lightweight (0.9 kg), can manipulate heavy payloads (over 23 kgf), and can automatically align its position and posture via a grasping motion. A fingertip design criterion is presented for the position alignment, while a control strategy is presented for the posture alignment. With one actuator, this gripper realized the above features. This letter describes the mathematical analyses and experiments used to validate these key metrics."} {"_id": "29f07c86886af63f9bf43d089373ac1f7a95ea0e", "title": "A Multiarmed Bandit Incentive Mechanism for Crowdsourcing Demand Response in Smart Grids", "text": "Demand response is a critical part of renewable integration and energy cost reduction goals across the world. Motivated by the need to reduce costs arising from electricity shortage and renewable energy fluctuations, we propose a novel multiarmed bandit mechanism for demand response (MAB-MDR) which makes monetary offers to strategic consumers who have unknown response characteristics, to incentivize reductions in demand. Our work is inspired by a novel connection we make to crowdsourcing mechanisms. The proposed mechanism incorporates realistic features of the demand response problem including a time-varying and quadratic cost function. The mechanism marries auctions, which allow users to report their preferences, with online algorithms, which allow distribution companies to learn user-specific parameters. We show that MAB-MDR is dominant strategy incentive compatible, individually rational, and achieves sublinear regret. Such mechanisms can be effectively deployed in smart grids using new information and control architecture innovations and lead to welcome savings in energy costs."} {"_id": "8e7fdb9d3fc0fef1f82f126072fc675e01ce5873", "title": "Clarifying Hypotheses by Sketching Data", "text": "Discussions between data analysts and colleagues or clients with no statistical background are difficult, as the analyst often has to teach and explain their statistical and domain knowledge. We investigate work practices of data analysts who collaborate with non-experts, and report findings regarding types of analysis, collaboration and availability of data. Based on these, we have created a tool to enhance collaboration between data analysts and their clients in the initial stages of the analytical process. Sketching time series data allows analysts to discuss expectations for later analysis.
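The bandit view in the demand response abstract above can be illustrated with plain UCB1 standing in for the paper's mechanism (which additionally handles bids and incentive compatibility): the utility repeatedly offers incentives to consumers with unknown response, balancing exploration and exploitation.

```python
# Toy sketch of the bandit view of demand response: UCB1 stands in for
# the paper's mechanism, which also handles bids and incentive
# compatibility. Consumer responses below are made-up numbers.
import math, random

def ucb1_select(rounds, pulls, mean_reduction):
    # pulls[i]: times consumer i was offered; mean_reduction[i]: observed
    # average demand reduction (kWh) when consumer i accepted an offer.
    for i, n in enumerate(pulls):
        if n == 0:
            return i                       # try everyone once
    ucb = [mean_reduction[i] + math.sqrt(2 * math.log(rounds) / pulls[i])
           for i in range(len(pulls))]
    return max(range(len(pulls)), key=ucb.__getitem__)

random.seed(0)
true_response = [0.2, 0.8, 0.5]            # hidden per-consumer reductions
pulls, means = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 501):
    i = ucb1_select(t, pulls, means)
    reward = random.gauss(true_response[i], 0.1)
    pulls[i] += 1
    means[i] += (reward - means[i]) / pulls[i]
print(pulls)   # most offers go to the most responsive consumer
```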
We propose function composition rather than freehand sketching, in order to structure the analyst-client conversation by independently expressing expected features in the data. We evaluate the usability of our prototype through two small studies, and report on user feedback for future iterations."} {"_id": "0567283bc9affd475eae7cebaae658692a64d5a4", "title": "Intelligent Widgets for Intuitive Interaction and Coordination in Smart Home Environments", "text": "The intelligent home environment is a well-established example of the Ambient Intelligence application domain. A variety of sensors and actuators can be used to have the home environment adapt towards changing circumstances and user preferences. However, the complexity of how these intelligent home automation systems operate is often beyond the comprehension of non-technical users, and adding new technology to an existing infrastructure is often a burden. In this paper, we present a home automation framework built around smart widgets, with a model-driven methodology that raises the level of abstraction at which home automation equipment is configured. It aims to simplify user-level home automation management by mapping high-level home automation concepts onto a low-level composition and configuration of the automation building blocks, with a reverse mapping to simplify the integration of new equipment into existing home automation systems. Experiments have shown that the proposed mappings are sufficient to represent household appliances to the end user in a simple way and that new mappings can easily be added to our framework."} {"_id": "a79f43246bed540084ca2d1fcf99a68c69820747", "title": "A Hybrid Approach to Detect and Localize Texts in Natural Scene Images", "text": "Text detection and localization in natural scene images is important for content-based image analysis. This problem is challenging due to the complex background, the non-uniform illumination, and the variations of text font, size and line orientation. In this paper, we present a hybrid approach to robustly detect and localize texts in natural scene images. A text region detector is designed to estimate the text existing confidence and scale information in an image pyramid, which helps segment candidate text components by local binarization. To efficiently filter out the non-text components, a conditional random field (CRF) model considering unary component properties and binary contextual component relationships with supervised parameter learning is proposed. Finally, text components are grouped into text lines/words with a learning-based energy minimization method. Since all three stages are learning-based, there are very few parameters requiring manual tuning. Experimental results evaluated on the ICDAR 2005 competition dataset show that our approach yields higher precision and recall performance compared with state-of-the-art methods. We also evaluated our approach on a multilingual image dataset with promising results."} {"_id": "1b9de2d1e74fbe49bf852fa495f63c31bb038a31", "title": "A Pneumatic-Driven Haptic Glove with Force and Tactile Feedback", "text": "The advent of Oculus Rift indicates the start of a booming era of virtual reality. In order to increase the immersive feeling of interaction with the virtual world, haptic devices allow us to touch and manipulate virtual objects in an intuitive way. In this paper, we introduce a portable and low-cost haptic glove that provides both force and tactile feedback using a direct-control pneumatic concept.
To produce force feedback, two inlet ports of a double-acting pneumatic cylinder are opened and closed via solenoid DC valves using a pulse-width modulation (PWM) technique. For tactile feedback, an air bladder is actuated using a diaphragm pump via a PWM-operated solenoid valve. Experiments on a single-finger prototype validated that the glove can provide force and tactile feedback with sufficient moving range of the finger joints. The maximum continuous force is 9 N and the response time is less than 400 ms. The glove is lightweight and easy to mount on the index finger. The proposed glove could be potentially used for virtual reality grasping scenarios and for teleoperation of a robotic hand for handling hazardous objects."} {"_id": "c6504fbbfcf32854e0bd35eb70539cafbecf332f", "title": "Client-side rate adaptation scheme for HTTP adaptive streaming based on playout buffer model", "text": "HTTP Adaptive Streaming (HAS) is an adaptive bitrate streaming technique which is able to adapt to the network conditions using conventional HTTP web servers. An HAS player periodically requests pre-encoded video chunks by sending an HTTP GET message. When the download of a video chunk finishes, the player estimates the network bandwidth by calculating the goodput and adjusts the video quality based on its estimates. However, bandwidth estimation at the application layer is quite inaccurate due to architectural limitations. We show that inaccurate bandwidth estimation in rate adaptation may incur serious rate oscillations, which degrade users' quality of experience. In this paper, we propose a buffer-based rate adaptation scheme which eliminates the bandwidth estimation step in rate adaptation to provide a smooth playback of HTTP-based streaming. We evaluate the performance of the HAS player implemented in the ns-3 network simulator. Our simulation results show that the proposed scheme significantly improves stability by replacing bandwidth estimation with buffer occupancy estimation."} {"_id": "03dc771ebf5b7bc3ccf8c4689d918924da524fe4", "title": "Approximating dynamic global illumination in image space", "text": "Physically plausible illumination at real-time framerates is often achieved using approximations. One popular example is ambient occlusion (AO), for which very simple and efficient implementations are used extensively in production. Recent methods approximate AO between nearby geometry in screen space (SSAO). The key observation described in this paper is that screen-space occlusion methods can be used to compute many more types of effects than just occlusion, such as directional shadows and indirect color bleeding. The proposed generalization has only a small overhead compared to classic SSAO, approximates direct and one-bounce light transport in screen space, can be combined with other methods that simulate transport for macro structures and is visually equivalent to SSAO in the worst case without introducing new artifacts. Since our method works in screen space, it does not depend on the geometric complexity. Plausible directional occlusion and indirect lighting effects can be displayed for large and fully dynamic scenes at real-time frame rates."} {"_id": "1b656883bed80fdec1d109ae04873540720610fa", "title": "Development and validation of the childhood narcissism scale.", "text": "In this article, we describe the development and validation of a short (10 item) but comprehensive self-report measure of childhood narcissism.
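The buffer-based adaptation idea above can be sketched as a pure function from playout-buffer occupancy to a rung of the bitrate ladder, with no bandwidth estimation anywhere in the loop. The ladder and thresholds below are hypothetical, loosely in the spirit of reservoir/cushion schemes, not the paper's exact mapping:

```python
# Hypothetical bitrate ladder (kbps) and buffer thresholds (seconds).
LADDER = [400, 1000, 2500, 5000]
BUF_LOW, BUF_HIGH = 5.0, 20.0   # "reservoir" and "cushion" boundaries

def pick_bitrate(buffer_level_s):
    """Map buffer occupancy to a bitrate: lowest rung below the reservoir,
    highest rung above the cushion, and a linear ramp in between."""
    if buffer_level_s <= BUF_LOW:
        return LADDER[0]
    if buffer_level_s >= BUF_HIGH:
        return LADDER[-1]
    frac = (buffer_level_s - BUF_LOW) / (BUF_HIGH - BUF_LOW)
    return LADDER[int(frac * (len(LADDER) - 1))]

print(pick_bitrate(3.0), pick_bitrate(12.0), pick_bitrate(25.0))  # 400 1000 5000
```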
The Childhood Narcissism Scale (CNS) is a 1-dimensional measure of stable individual differences in childhood narcissism with strong internal consistency reliability (Studies 1-4). The CNS is virtually unrelated to conventional measures of self-esteem but is positively related to self-appraised superiority, social evaluative concern and self-esteem contingency, agentic interpersonal goals, and emotional extremity (Study 5). Furthermore, the CNS is negatively related to empathic concern and positively related to aggression following ego threat (Study 6). These results suggest that childhood narcissism has psychological and interpersonal correlates similar to those of adult narcissism. The CNS provides researchers with a convenient tool for measuring narcissism in children and young adolescents with strong preliminary psychometric characteristics."} {"_id": "92c1f538613ff4923a8fa3407a16bed4aed361ac", "title": "Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis", "text": "Exploring data requires a fast feedback loop from the analyst to the system, with a latency below about 10 seconds because of human cognitive limitations. When data becomes large or analysis becomes complex, sequential computations can no longer be completed in a few seconds and data exploration is severely hampered. This article describes a novel computation paradigm called Progressive Computation for Data Analysis or more concisely Progressive Analytics, which provides a low-latency guarantee at the programming language level by performing computations in a progressive fashion. Moving this progressive computation to the language level relieves the programmer of exploratory data analysis systems from implementing the whole analytics pipeline in a progressive way from scratch, streamlining the implementation of scalable exploratory data analysis systems. This article describes the new paradigm through a prototype implementation called ProgressiVis, and explains the requirements it implies through examples."} {"_id": "bb6508fb4457f09b5e146254220247bc4ea7b71c", "title": "Multiclass Classification of Driver Perceived Workload Using Long Short-Term Memory based Recurrent Neural Network", "text": "Human sensing enables intelligent vehicles to provide driver-adaptive support by classifying perceived workload into multiple levels. The objective of this study is to classify driver workload associated with traffic complexity into five levels. We conducted driving experiments under systematically varied traffic complexity levels in a simulator. We recorded driver physiological signals including electrocardiography, electrodermal activity, and electroencephalography. In addition, we integrated driver performance and subjective workload measures. Deep learning based models outperform statistical machine learning methods when dealing with dynamic time-series data with variable sequence lengths. We show that our long short-term memory based recurrent neural network model can classify driver perceived workload into five classes with an accuracy of 74.5%.
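A many-to-one LSTM classifier of the kind used in the workload study fits in a few lines of PyTorch. This is a structural sketch only; the feature count, window length, and hidden size are placeholders rather than the paper's configuration, and variable-length sequences would additionally need packing.

```python
import torch
import torch.nn as nn

class WorkloadLSTM(nn.Module):
    """Many-to-one LSTM: physiological feature sequence -> 5 workload classes."""
    def __init__(self, n_features=8, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden)
        return self.head(h_n[-1])      # logits: (batch, n_classes)

model = WorkloadLSTM()
x = torch.randn(4, 120, 8)             # e.g. four 120-step windows
print(model(x).shape)                  # torch.Size([4, 5])
```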
Since perceived workload differs between individual drivers for the same traffic situation, our results further highlight the significance of including driver characteristics such as driving style and workload sensitivity to achieve higher classification accuracy."} {"_id": "c8e7c2a9201eb6217e62266c9d8c061b5394866e", "title": "Data mining for censored time-to-event data: a Bayesian network model for predicting cardiovascular risk from electronic health record data", "text": "Models for predicting the risk of cardiovascular (CV) events based on individual patient characteristics are important tools for managing patient care. Most current and commonly used risk prediction models have been built from carefully selected epidemiological cohorts. However, the homogeneity and limited size of such cohorts restrict the predictive power and generalizability of these risk models to other populations. Electronic health data (EHD) from large health care systems provide access to data on large, heterogeneous, and contemporaneous patient populations. The unique features and challenges of EHD, including missing risk factor information, non-linear relationships between risk factors and CV event outcomes, and differing effects from different patient subgroups, demand novel machine learning approaches to risk model development. In this paper, we present a machine learning approach based on Bayesian networks trained on EHD to predict the probability of having a CV event within 5 years. In such data, event status may be unknown for some individuals, as the event time is right-censored due to disenrollment and incomplete follow-up. Since many traditional data mining methods are not well-suited for such data, we describe how to modify both modeling and assessment techniques to account for censored observation times. We show that our approach can lead to better predictive performance than the Cox proportional hazards model (i.e., a regression-based approach commonly used for censored, time-to-event data) or a Bayesian network with ad hoc approaches to right-censoring. Our techniques are motivated by and illustrated on data from a large US Midwestern health care system."} {"_id": "fc5a530ea80a3295d0872b85c3991a4d81336a61", "title": "Voice Activated Virtual Assistants Personality Perceptions and Desires- Comparing Personality Evaluation Frameworks", "text": "Currently, Voice Activated Virtual Assistants and Artificial Intelligence technologies are not just about performance or the functionalities they can carry out; they are also about the associated personality. This empirical multi-country study explores the personality perceptions of current VAVA users regarding these technologies. Since this is a rather unexplored territory for research, this study has identified two well-established personality evaluation methodologies, Aaker\u2019s traits approach and Jung\u2019s archetypes, to investigate current perceived personality and future desired personality of the four main Voice Activated Virtual Assistants: Siri, Google Assistant, Cortana and Alexa. Following is a summary of results for each methodology, and an analysis of the commonalities found between the two methodologies."} {"_id": "f64d18d4bad30ea544aa828eacfa83208f2b7815", "title": "Conceptualizing Context for Pervasive Advertising", "text": "Profile-driven personalization based on socio-demographics is currently regarded as the most convenient base for successful personalized advertising.
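Censoring-aware assessment of the kind the Bayesian-network paper argues for starts from the standard product-limit (Kaplan-Meier) estimator. A self-contained sketch of that building block follows; the paper's own modeling and assessment go well beyond this.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. times: observed follow-up times;
    events: 1 if the event occurred, 0 if the time is right-censored."""
    data = sorted(zip(times, events))
    n_at_risk, surv, curve, i = len(data), 1.0, [], 0
    while i < len(data):
        t, deaths, leaving = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:
            leaving += 1
            deaths += data[i][1]
            i += 1
        if deaths:                       # survival drops only at event times
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= leaving             # events and censored both leave
    return curve

# One time-3 subject and the time-8 subject are censored
# (e.g., disenrollment / incomplete follow-up).
print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))
```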
However, signs point to the dormant power of context recognition: Advertising systems that can adapt to the situational context of a consumer will rapidly gain importance. While technologies that can sense the environment are increasingly advanced, questions such as what to sense and how to adapt to a consumer\u2019s context are largely unanswered. In this chapter, we analyze the purchase context of a retail outlet and conceptualize it such that adaptive pervasive advertising applications really deliver on their potential: showing the right message at the right time to the right recipient. full version published as: Bauer, Christine & Spiekermann, Sarah (2011). Conceptualizing Context for Pervasive Advertising. In M\u00fcller, J\u00f6rg, Alt, Florian, & Michelis, Daniel (Eds.), Pervasive Advertising (pp. 159-183). London: Springer."} {"_id": "2e92ddcf2e7a9d6c27875ec442637e13753f21a2", "title": "Self-Soldering Connectors for Modular Robots", "text": "The connection mechanism between neighboring modules is the most critical subsystem of each module in a modular robot. Here, we describe a strong, lightweight, and solid-state connection method based on heating a low melting point alloy to form reversible soldered connections. No external manipulation is required for forming or breaking connections between adjacent connectors, making this method suitable for reconfigurable systems such as self-reconfiguring modular robots. Energy is only consumed when switching connectivity, and the ability to transfer power and signal through the connector is inherent to the method. Soldering connectors have no moving parts, are orders of magnitude lighter than other connectors, and are readily mass manufacturable. The mechanical strength of the connector is measured as 173 N, which is enough to support many robot modules, and hundreds of connection cycles are performed before failure."} {"_id": "459fbc416eb9a55920645c741b1e4cce95f39786", "title": "The Numerics of GANs", "text": "In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) presence of eigenvalues of the Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train."} {"_id": "40c8e3894314581d0241162374602d68a6d1f38c", "title": "Culture and Institutions", "text": "A growing body of empirical work measuring different types of cultural traits has shown that culture matters for a variety of economic outcomes. This paper focuses on one specific aspect of the relevance of culture: its relationship to institutions. We review work with a theoretical, empirical, and historical bent to assess the presence of a two-way causal effect between culture and institutions. 
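Both failure modes identified in "The Numerics of GANs" can be reproduced on a toy smooth two-player game, min over x, max over y of f(x, y) = xy: the Jacobian of the gradient vector field has purely imaginary eigenvalues, and simultaneous gradient steps spiral outward. A small NumPy check (a textbook illustration, not the paper's experiments):

```python
import numpy as np

# Simultaneous gradient descent/ascent on f(x, y) = x * y follows the
# vector field v(x, y) = (-y, x), whose Jacobian [[0, -1], [1, 0]] has
# purely imaginary eigenvalues +/- i.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
print("eigenvalues:", np.linalg.eigvals(J))        # [0.+1.j 0.-1.j]

eta, z = 0.1, np.array([1.0, 1.0])
for _ in range(100):
    x, y = z
    z = z + eta * np.array([-y, x])                # simultaneous step
print("norm after 100 steps:", np.linalg.norm(z))  # grows: divergence
```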
"} {"_id": "6b894324281bd4b0251549c0e40802d6ca3d0b8f", "title": "Challenges and prospects of lithium-sulfur batteries.", "text": "Electrical energy storage is one of the most critical needs of 21st century society. Applications that depend on electrical energy storage include portable electronics, electric vehicles, and devices for renewable energy storage from solar and wind. Lithium-ion (Li-ion) batteries have the highest energy density among the rechargeable battery chemistries. As a result, Li-ion batteries have proven successful in the portable electronics market and will play a significant role in large-scale energy storage. Over the past two decades, Li-ion batteries based on insertion cathodes have reached a cathode capacity of \u223c250 mA h g(-1) and an energy density of \u223c800 W h kg(-1), which do not meet the requirement of \u223c500 km between charges for all-electric vehicles. With a goal of increasing energy density, researchers are pursuing alternative cathode materials such as sulfur and O2 that can offer capacities that exceed those of conventional insertion cathodes, such as LiCoO2 and LiMn2O4, by an order of magnitude (>1500 mA h g(-1)). Sulfur, one of the most abundant elements on earth, is an electrochemically active material that can accept up to two electrons per atom at \u223c2.1 V vs Li/Li(+). As a result, sulfur cathode materials have a high theoretical capacity of 1675 mA h g(-1), and lithium-sulfur (Li-S) batteries have a theoretical energy density of \u223c2600 W h kg(-1). Unlike conventional insertion cathode materials, sulfur undergoes a series of compositional and structural changes during cycling, which involve soluble polysulfides and insoluble sulfides. As a result, researchers have struggled with the maintenance of a stable electrode structure, full utilization of the active material, and sufficient cycle life with good system efficiency. Although researchers have made significant progress on rechargeable Li-S batteries in the last decade, these cycle life and efficiency problems prevent their use in commercial cells. To overcome these persistent problems, researchers will need new sulfur composite cathodes with favorable properties and performance and new Li-S cell configurations. In this Account, we first focus on the development of novel composite cathode materials including sulfur-carbon and sulfur-polymer composites, describing the design principles, structure and properties, and electrochemical performances of these new materials. We then cover new cell configurations with carbon interlayers and Li/dissolved polysulfide cells, emphasizing the potential of these approaches to advance capacity retention and system efficiency. Finally, we provide a brief survey of efficient electrolytes. The Account summarizes improvements that could bring Li-S technology closer to mass commercialization."} {"_id": "e8691980eeb827b10cdfb4cc402b3f43f020bc6a", "title": "Segmentation Guided Attention Networks for Visual Question Answering", "text": "In this paper we propose to solve the problem of Visual Question Answering by using a novel segmentation guided attention based network which we call SegAttendNet.
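The sulfur capacity quoted above follows from Faraday's law. With n = 2 electrons per sulfur atom, F = 96485 C mol^(-1), M_S of about 32.06 g mol^(-1), and 1 mA h = 3.6 C, the theoretical capacity works out as

$$ Q_{\mathrm{th}} \;=\; \frac{n\,F}{3.6\,M_{\mathrm{S}}} \;=\; \frac{2 \times 96485\ \mathrm{C\,mol^{-1}}}{3.6\ \mathrm{C\,(mA\,h)^{-1}} \times 32.06\ \mathrm{g\,mol^{-1}}} \;\approx\; 1672\ \mathrm{mA\,h\,g^{-1}}, $$

consistent with the ~1675 mA h g(-1) figure in the abstract (the exact value depends on the atomic mass used for sulfur).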
We use image segmentation maps, generated by a Fully Convolutional Deep Neural Network, to refine our attention maps and use these refined attention maps to make the model focus on the relevant parts of the image to answer a question. The refined attention maps are used by the LSTM network to learn to produce the answer. We presently train our model on the visual7W dataset and do a category-wise evaluation of the 7 question categories. We achieve state-of-the-art results on this dataset and beat the previous benchmark on this dataset by a 1.5% margin, improving the question answering accuracy from 54.1% to 55.6%, and demonstrate improvements in each of the question categories. We also visualize our generated attention maps and note their improvement over the attention maps generated by the previous best approach."} {"_id": "07f3f736d90125cb2b04e7408782af411c67dd5a", "title": "Convolutional Neural Network Architectures for Matching Natural Language Sentences", "text": "Semantic matching is of central importance to many natural language tasks [2, 28]. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models."} {"_id": "0af737eae02032e66e035dfed7f853ccb095d6f5", "title": "ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs", "text": "How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence\u2019s representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection."} {"_id": "1c059493904b2244d2280b8b4c0c7d3ca115be73", "title": "node2vec: Scalable Feature Learning for Networks", "text": "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms.
Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations.\n We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks."} {"_id": "468b9055950c428b17f0bf2ff63fe48a6cb6c998", "title": "A Neural Attention Model for Abstractive Sentence Summarization", "text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines."} {"_id": "81eb0a1ea90a6f6d5e7f14cb3397a4ee0f77824a", "title": "Question/Answer Matching for CQA System via Combining Lexical and Sequential Information", "text": "Community-based Question Answering (CQA) has become popular in knowledge sharing sites since it allows users to get answers to complex, detailed, and personal questions directly from other users. Large archives of historical questions and associated answers have been accumulated. Retrieving relevant historical answers that best match a question is an essential component of a CQA service. Most state of the art approaches are based on bag-of-words models, which have been proven successful in a range of text matching tasks, but are insufficient for capturing the important word sequence information in short text matching. In this paper, a new architecture is proposed to more effectively model the complicated matching relations between questions and answers. It utilises a similarity matrix which contains both lexical and sequential information. Afterwards the information is put into a deep architecture to find potentially suitable answers. The experimental study shows its potential in improving matching accuracy of question and answer."} {"_id": "81ff60a35e57e150875cfdde735fe69d19e9fdc4", "title": "Development of attentional networks in childhood", "text": "Recent research in attention has involved three networks of anatomical areas that carry out the functions of orienting, alerting and executive control (including conflict monitoring). 
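The biased second-order walk at the core of node2vec is compact enough to sketch directly; in the full pipeline, the walks are then fed to a skip-gram model (word2vec) to produce the embeddings. The graph and the p, q values below are toy choices, not the paper's experimental setup:

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0):
    """One 2nd-order biased walk. Unnormalized transition weights:
    1/p to return to the previous node, 1 to a common neighbor of the
    previous and current node, 1/q to move outward."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = sorted(adj[cur])
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = [1.0 / p if x == prev else
                   1.0 if x in adj[prev] else
                   1.0 / q for x in nbrs]
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(node2vec_walk(adj, 0, 10, p=0.25, q=4.0))  # low p, high q: stays local
```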
There have been extensive cognitive and neuroimaging studies of these networks in adults. We developed an integrated Attention Network Test (ANT) to measure the efficiency of the three networks with adults. We have now adapted this test to study the development of these networks during childhood. The test is a child-friendly version of the flanker task with alerting and orienting cues. We studied the development of the attentional networks in a cross-sectional experiment with four age groups ranging from 6 through 9 (Experiment 1). In a second experiment, we compared children (age 10 years) and adult performance in both child and adult versions of the ANT. Reaction time and accuracy improved at each age interval and positive values were found for the average efficiency of each of the networks. Alertness showed evidence of change up to and beyond age 10, while conflict scores appear stable after age seven and orienting scores do not change in the age range studied. A final experiment with forty 7-year-old children suggested that children, like adults, showed independence between the three networks under some conditions."} {"_id": "6c1cabe3f5980cbc50d290c2ed60b9aca624eab8", "title": "Mathematical modelling of infectious diseases.", "text": "INTRODUCTION\nMathematical models allow us to extrapolate from current information about the state and progress of an outbreak, to predict the future and, most importantly, to quantify the uncertainty in these predictions. Here, we illustrate these principles in relation to the current H1N1 epidemic.\n\n\nSOURCES OF DATA\nMany sources of data are used in mathematical modelling, with some forms of model requiring vastly more data than others. However, a good estimation of the number of cases is vitally important.\n\n\nAREAS OF AGREEMENT\nMathematical models, and the statistical tools that underpin them, are now a fundamental element in planning control and mitigation measures against any future epidemic of an infectious disease. Well-parameterized mathematical models allow us to test a variety of possible control strategies in computer simulations before applying them in reality.\n\n\nAREAS OF CONTROVERSY\nThe interaction between modellers and public-health practitioners and the level of detail needed for models to be of use.\n\n\nGROWING POINTS\nThe need for stronger statistical links between models and data.\n\n\nAREAS TIMELY FOR DEVELOPING RESEARCH\nGreater appreciation by the medical community of the uses and limitations of models and a greater appreciation by modellers of the constraints on public-health resources."} {"_id": "5e22c4362df3b0accbe04517c41848a2b229efd1", "title": "Predicting sports events from past results Towards effective betting on football matches", "text": "A system for predicting the results of football matches that beats the bookmakers\u2019 odds is presented. The predictions for the matches are based on previous results of the teams involved."} {"_id": "de93c4f886bdf55bfc1bcaefad648d5996ed3302", "title": "Modern Intrusion Detection, Data Mining, and Degrees of Attack Guilt", "text": "This chapter examines the state of modern intrusion detection, with a particular emphasis on the emerging approach of data mining. The discussion parallels two important aspects of intrusion detection: general detection strategy (misuse detection versus anomaly detection) and data source (individual hosts versus network traffic).
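For the ANT described above, the three network efficiencies are plain reaction-time subtractions. A sketch of the standard scoring with hypothetical median RTs (illustrative numbers, not data from the study):

```python
# Hypothetical median reaction times (ms) per cue/flanker condition.
rt = {"no_cue": 610, "double_cue": 570,
      "center_cue": 580, "spatial_cue": 540,
      "congruent": 540, "incongruent": 610}

alerting  = rt["no_cue"] - rt["double_cue"]        # benefit of a warning cue
orienting = rt["center_cue"] - rt["spatial_cue"]   # benefit of a spatial cue
conflict  = rt["incongruent"] - rt["congruent"]    # executive/conflict cost
print(alerting, orienting, conflict)               # 40 40 70
```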
Misuse detection attempts to match known patterns of intrusion, while anomaly detection searches for deviations from normal behavior. Between the two approaches, only anomaly detection has the ability to detect unknown attacks. A particularly promising approach to anomaly detection combines association mining with other forms of machine learning such as classification. Moreover, the data source that an intrusion detection system employs significantly impacts the types of attacks it can detect. There is a tradeoff between the level of detailed information available and the breadth of coverage each data source provides."} {"_id": "df25eaf576f55c09bb460d67134646fcb422b2ac", "title": "AGA: Attribute-Guided Augmentation", "text": "We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attribute-guided augmentation (AGA), which learns a mapping that allows us to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems."} {"_id": "b52abc5f401a6dec62d650f5a2a500f469b9a7c0", "title": "A Case Study on Barriers and Enhancements of the PET Bottle-to-Bottle Recycling Systems in Germany and Sweden", "text": "Problem: The demand for beverages in PET bottles is constantly increasing. In this context, environmental, technological and regulatory aspects set a stronger focus on recycling. Generally, the reuse of recycled material from post-consumer PET bottles in bottle-to-bottle applications is seen as least environmentally harmful. However, closed-loop systems are not widely implemented in Europe. Previous research mainly focuses on open-loop recycling systems and generally lacks discussion about the current German and Swedish systems and their challenges. Furthermore, previous studies lack theoretical and practical enhancements for bottle-to-bottle recycling from a managerial perspective. Purpose: The purpose of this study is to compare the PET bottle recycling systems in Germany and Sweden, analyse the main barriers and develop enhancements for closed-loop systems. Method: This qualitative study employs a case study strategy about the two cases of Germany and Sweden. In total, 14 semi-structured interviews are conducted with respondents from different industry sectors within the PET bottle recycling systems. The empirical data is categorised and then analysed by pattern matching with the developed theoretical framework.
Conclusion: Due to the theoretical and practical commitment to closed-loop recycling, the Swedish PET bottle recycling system outperforms the German system. In Germany, bottle-to-bottle recycling is currently performed on a smaller scale without a unified system. The main barriers for bottle-to-bottle recycling are grouped into (1) quality and material factors, (2) regulatory and legal factors, (3) economic and market factors and (4) factors influenced by consumers. The enhancements for the systems are (1) quality and material factors, (2) regulatory and legal factors, (3) recollection factors and (4) expanding factors. Lastly, the authors provide further recommendations, which are (1) a recycling content symbol on bottle labels, (2) a council for bottle quality in Germany, (3) a quality seal for the holistic systems, (4) a reduction of transportation in Sweden and (5) an increase in consumer awareness of PET bottle consumption."} {"_id": "9e00005045a23f3f6b2c9fca094930f8ce42f9f6", "title": "Managing Portfolios of Development Projects in a Complex Environment How the UN assign priorities to Programs at the Country", "text": ""} {"_id": "2ec2f8cd6cf1a393acbc7881b8c81a78269cf5f7", "title": "Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics", "text": "We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images."} {"_id": "94d1a665c0c7fbd017c9f3c50d35992e1c0c1ed0", "title": "Molecular and Morphological Characterization of Aphelenchoides fuchsi sp. n. (Nematoda: Aphelenchoididae) Isolated from Pinus eldarica in Western Iran.", "text": "Aphelenchoides fuchsi sp. n. is described and illustrated from bark and wood samples of a weakened Mondell pine in Kermanshah Province, western Iran. The new species has a body length of 332 to 400 \u00b5m (females) and 365 to 395 \u00b5m (males). The lip region is set off from the body contour. The cuticle is weakly annulated, and there are four lines in the lateral field. The stylet is 8 to 10 \u03bcm long and has small basal swellings. The excretory pore is located ca one body diam. posterior to the metacorpus valve or 51 to 62 \u03bcm from the head. The postuterine sac is well developed (60-90 \u00b5m). Spicules are relatively short (15-16 \u03bcm in dorsal limb) with apex and rostrum rounded, well developed, and the end of the dorsal limb clearly curved ventrad like a hook. The male tail has the usual three pairs of caudal papillae (2+2+2) and a well-developed mucro. The female tail is conical, terminating in a complicated step-like projection, usually with many tiny nodular protuberances. The new species belongs to Group 2 (sensu Shahina) of Aphelenchoides species.
Phylogenetic analysis based on small subunit (SSU) and partial large subunit (LSU) sequences of rRNA supported the morphological results."} {"_id": "4d40a715a51bcca554915ecc5d88005fd56dc1e5", "title": "The future of seawater desalination: energy, technology, and the environment.", "text": "In recent years, numerous large-scale seawater desalination plants have been built in water-stressed countries to augment available water resources, and construction of new desalination plants is expected to increase in the near future. Despite major advancements in desalination technologies, seawater desalination is still more energy intensive compared to conventional technologies for the treatment of fresh water. There are also concerns about the potential environmental impacts of large-scale seawater desalination plants. Here, we review the possible reductions in energy demand by state-of-the-art seawater desalination technologies, the potential role of advanced materials and innovative technologies in improving performance, and the sustainability of desalination as a technological solution to global water shortages."} {"_id": "6180482e02eb79eca6fd2e9b1ee9111d749d5ca2", "title": "A bidirectional soft pneumatic fabric-based actuator for grasping applications", "text": "This paper presents the development of a bidirectional fabric-based soft pneumatic actuator requiring low fluid pressurization for actuation, which is incorporated into a soft robotic gripper to demonstrate its utility. The bidirectional soft fabric-based actuator is able to provide both flexion and extension. Fabrication of the fabric actuators is simple compared with the traditional silicone-based approach. In addition, the fabric actuators are able to generate comparably larger vertical grip resistive force at lower operating pressure than elastomeric actuators and 3D-printed actuators, being able to generate resistive grip force up to 20 N at 120 kPa. Five of the bidirectional soft fabric-based actuators are deployed within a five-fingered soft robotic gripper, complete with five casings and a base. It is capable of grasping a variety of objects with maximum width or diameter closer to its bending curvature. A cutting task involving bimanual manipulation was demonstrated successfully with the gripper. To incorporate intelligent control for such a task, a soft force sensor made completely of compliant material was attached to the gripper, which allows determination of whether the cutting task is completed. To the authors' knowledge, this work is the first study which incorporates two soft robotic grippers for bimanual manipulation with one of the grippers sensorized to provide closed-loop control."} {"_id": "3f0924241a7deba2b40b0c1ea57a2e3d10c57ae0", "title": "Principles of GNSS, inertial, and multisensor integrated navigation systems, 2nd edition [Book review]", "text": "This second edition of Dr. Grove's book (the original was published in 2008) could arguably be considered a new work. At just under 1,000 pages (including the 11 appendices on the DVD), the second edition is 80% longer than the original. Frankly, the word \"book\" hardly seems adequate, considering the wide range of topics covered. \"Mini-encyclopedia\" seems more appropriate. The hardcover portion of the book comprises 18 chapters, and the DVD includes the aforementioned appendices plus 20 fully worked examples, 125 problems or exercises (with answers), and MATLAB routines for the simulation of many of the algorithms discussed in the main text.
Here is a brief overview of the contents: \u25b8 Chapters 1\u20133: an overview of the diversity of positioning techniques and navigation systems; fundamentals of coordinate frames, kinematics and earth models; introduction to Kalman filtering \u25b8 Chapters 4\u20136: inertial sensors, inertial navigation, and lower-cost dead reckoning systems \u25b8 Chapters 7\u201312: principles of radio positioning, short-, medium-, and long-range radio navigation, as well as extensive coverage of global navigation satellite systems (GNSS) \u25b8 Chapter 13: environmental feature matching. \u25b8 Chapters 14\u201316: various integration topics, including inertial navigation system (INS)/GNSS integration, alignment, zero-velocity updates, and multisensor integration \u25b8 Chapter 17: fault detection. \u25b8 Chapter 18: applications and trends. In summary, this book is an excellent reference (with numerous nuggets of wisdom) that should be readily handy on the shelf of every practicing navigation engineer. In the hands of an experienced instructor, the book will also serve students as a great textbook. However, the lack of examples integrated in the main text makes it difficult for the book to serve as a self-study guide for those who are new to the field."} {"_id": "b0e7d36c94935fadf3c514903e4340eaa415e4ee", "title": "True self-configuration for the IoT", "text": "For the Internet of Things to finally become a reality, obstacles on different levels need to be overcome. This is especially true for the upcoming challenge of leaving the domain of technical experts and scientists. Devices need to connect to the Internet and be able to offer services. They have to announce and describe these services in machine-understandable ways so that user-facing systems are able to find and utilize them. They have to learn about their physical surroundings, so that they can serve sensing or acting purposes without explicit configuration or programming. Finally, it must be possible to include IoT devices in complex systems that combine local and remote data, from different sources, in novel and surprising ways. We show how all of that is possible today. Our solution uses open standards and state-of-the-art protocols to achieve this. It is based on 6LowPAN and CoAP for the communications part, semantic web technologies for meaningful data exchange, autonomous sensor correlation to learn about the environment, and software built around the Linked Data principles to be open for novel and unforeseen applications."} {"_id": "a8e656fe16825c47a41df9b28e0c97d4bc8fa58f", "title": "From turtles to Tangible Programming Bricks: explorations in physical language design", "text": "This article provides a historical overview of educational computing research at MIT from the mid-1960s to the present day, focusing on physical interfaces. It discusses some of the results of this research: electronic toys that help children develop advanced modes of thinking through free-form play. In this historical context, the article then describes and discusses the author\u2019s own research into tangible programming, culminating in the development of the Tangible Programming Bricks system\u2014a platform for creating microworlds for children to explore computation and scientific thinking."} {"_id": "f83a207712fd4cf41aded79e9e6c4345ba879128", "title": "Ray: A Distributed Framework for Emerging AI Applications", "text": "The next generation of AI applications will continuously interact with the environment and learn from these interactions.
These applications impose new and demanding systems requirements, both in terms of performance and flexibility. In this paper, we consider these requirements and present Ray\u2014a distributed system to address them. Ray implements a unified interface that can express both task-parallel and actor-based computations, supported by a single dynamic execution engine. To meet the performance requirements, Ray employs a distributed scheduler and a distributed and fault-tolerant store to manage the system\u2019s control state. In our experiments, we demonstrate scaling beyond 1.8 million tasks per second and better performance than existing specialized systems for several challenging reinforcement learning applications."} {"_id": "aa2213a9f39736f80ccc54b9096e414682afa082", "title": "Wave-front Transformation with Gradient Metasurfaces", "text": "Relying on abrupt phase discontinuities, metasurfaces characterized by a transversely inhomogeneous surface impedance profile have been recently explored as an ultrathin platform to generate arbitrary wave fronts over subwavelength thicknesses. Here, we outline fundamental limitations of passive gradient metasurfaces in molding the impinging wave and show that local phase compensation is essentially insufficient to realize arbitrary wave manipulation, but full-wave designs should be considered. These findings represent a critical step towards realistic and highly efficient conformal wave manipulation beyond the scope of ray optics, enabling unprecedented nanoscale light molding."} {"_id": "8641be8daff5b24e98a0d68138a61456853aef82", "title": "Adaptation impact and environment models for architecture-based self-adaptive systems", "text": "Self-adaptive systems have the ability to adapt their behavior to dynamic operating conditions. In reaction to changes in the environment, these systems determine the appropriate corrective actions based in part on information about which action will have the best impact on the system. Existing models used to describe the impact of adaptations are either unable to capture the underlying uncertainty and variability of such dynamic environments, or are not compositional and described at a level of abstraction too low to scale in terms of specification effort required for non-trivial systems. In this paper, we address these shortcomings by describing an approach to the specification of impact models based on architectural system descriptions, which at the same time allows us to represent both variability and uncertainty in the outcome of adaptations, hence improving the selection of the best corrective action. The core of our approach is a language equipped with a formal semantics defined in terms of Discrete Time Markov Chains that enables us to describe both the impact of adaptation tactics, as well as the assumptions about the environment. To validate our approach, we show how employing our language can improve the accuracy of predictions used for decision-making in the Rainbow framework for architecture-based self-adaptation."} {"_id": "a65e815895bed510c0549957ce6baa129c909813", "title": "Induction of Root and Pattern Lexicon for Unsupervised Morphological Analysis of Arabic", "text": "We propose an unsupervised approach to learning non-concatenative morphology, which we apply to induce a lexicon of Arabic roots and pattern templates. The approach is based on the idea that roots and patterns may be revealed through mutually recursive scoring based on hypothesized pattern and root frequencies. 
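One way to read the "mutually recursive scoring" above is as an iterative credit-assignment loop in which candidate roots are scored by the patterns they co-occur with and vice versa. The sketch below is deliberately toy: the fixed three-letter split is a placeholder (real Arabic roots and patterns interdigitate rather than split), and the scoring loop is only loosely modeled on the paper.

```python
from collections import defaultdict

def score_roots_patterns(words, n_iter=5):
    """Mutually recursive scoring over hypothesized (root, pattern) analyses."""
    # Toy candidate analysis: first three letters as "root", rest as "pattern".
    candidates = {w: [(w[:3], w[3:])] for w in words}
    root_score = defaultdict(lambda: 1.0)
    patt_score = defaultdict(lambda: 1.0)
    for _ in range(n_iter):
        new_root, new_patt = defaultdict(float), defaultdict(float)
        for analyses in candidates.values():
            for r, t in analyses:
                new_root[r] += patt_score[t]   # roots credited by their patterns
                new_patt[t] += root_score[r]   # patterns credited by their roots
        root_score, patt_score = new_root, new_patt
    return root_score, patt_score

roots, patterns = score_roots_patterns(["katab", "katib", "kutub", "darasa"])
print(max(roots, key=roots.get))               # "kat": shared by two tokens
```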
After a further iterative refinement stage, morphological analysis with the induced lexicon achieves a root identification accuracy of over 94%. Our approach differs from previous work on unsupervised learning of Arabic morphology in that it is applicable to naturally-written, unvowelled text."} {"_id": "5da41b7d7b1963cd1e86d99b4d9b86ad6d7a227a", "title": "An Unequal Wilkinson Power Divider for a Frequency and Its First Harmonic", "text": "This letter presents a Wilkinson power divider operating at a frequency and its first harmonic with an unequal power-dividing ratio. To obtain the unequal property, four groups of 1/6-wavelength transmission lines with different characteristic impedances are needed to match all ports. Theoretically, closed-form equations for the design are derived based on transmission line theory. Experimental results have indicated that all the features of this novel power divider can be fulfilled at f0 and 2f0 simultaneously."} {"_id": "6cd700af0b7953345d831c129a5a4e0d927bfa19", "title": "Adaptive Haptic Feedback Steering Wheel for Driving Simulators", "text": "Controlling a virtual vehicle is a sensory-motor activity with a specific rendering methodology that depends on the hardware technology and the software in use. We propose a method that computes haptic feedback for the steering wheel. It is best suited for low-cost, fixed-base driving simulators but can be ported to any driving simulator platform. The goal of our method is twofold. 1) It provides an efficient yet simple algorithm to model the steering mechanism using a quadri-polar representation. 2) This model is used to compute the haptic feedback on top of which a tunable haptic augmentation is adjusted to overcome the lack of presence and the unavoidable simulation loop latencies. This algorithm helps the driver to laterally control the virtual vehicle. We also discuss the experimental results that demonstrate the usefulness of our haptic feedback method."} {"_id": "3f4e71d715fce70c89e4503d747aad11fcac8a43", "title": "Competing Values in the Era of Digitalization", "text": "This case study examines three different digital innovation projects within Auto Inc -- a large European automaker. By using the competing values framework as a theoretical lens we explore how dynamic capabilities occur in a firm trying to meet increasing demands in originating and innovating from digitalization. In this digitalization process, our study indicates that established socio-technical congruences are being challenged. More so, we pinpoint the need for organizations to find ways to embrace new experimental learning processes in the era of digitalization. While such a change requires long-term commitment and vision, this study presents three informal enablers for such experimental processes; these enablers are timing, persistence, and contacts."} {"_id": "215b4c25ad34557644b1a177bd5aeac8b2e66bc6", "title": "Why Your Encrypted Database Is Not Secure", "text": "Encrypted databases, a popular approach to protecting data from compromised database management systems (DBMS's), use abstract threat models that capture neither realistic databases, nor realistic attack scenarios.
In particular, the \"snapshot attacker\" model used to support the security claims for many encrypted databases does not reflect the information about past queries available in any snapshot attack on an actual DBMS.\n We demonstrate how this gap between theory and reality causes encrypted databases to fail to achieve their \"provable security\" guarantees."} {"_id": "84cf1178a7526355f323ce0442458de3b3744358", "title": "A high performance parallel algorithm for 1-D FFT", "text": "In this paper we propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. We use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. We show that the multidimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. We implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine."} {"_id": "1f7594d3be7f5c32e117bc669ed898dd0af88aa3", "title": "Dual-Band Textile MIMO Antenna Based on Substrate-Integrated Waveguide (SIW) Technology", "text": "A dual-band textile antenna for multiple-input-multiple-output (MIMO) applications, based on substrate-integrated waveguide (SIW) technology, is designed. The fundamental SIW cavity mode is designed to resonate at 2.4 GHz. Meanwhile, the second and third modes are modified and combined by careful placement of a via within the cavity to enable wideband coverage in the 5-GHz WLAN band. The simple antenna topology can be fabricated fully using textiles in a planar form, ensuring reliability and comfort. Numerical and experimental results indicate satisfactory antenna performance when worn on body in terms of impedance bandwidth, radiation efficiency, and specific absorption ratio (SAR). In order to validate its potential for MIMO applications, two elements of the proposed SIW antenna are arranged in six configurations to study the performance in terms of mutual coupling and envelope correlation. It is observed that the placement of the shorted edges of the two elements adjacent to each other produces the lowest mutual coupling and consequently the best envelope correlation."} {"_id": "a2204b1ae6109db076a2b3c8d0db8cf390008812", "title": "Low self-esteem during adolescence predicts poor health, criminal behavior, and limited economic prospects during adulthood.", "text": "Using prospective data from the Dunedin Multidisciplinary Health and Development Study birth cohort, the authors found that adolescents with low self-esteem had poorer mental and physical health, worse economic prospects, and higher levels of criminal behavior during adulthood, compared with adolescents with high self-esteem. The long-term consequences of self-esteem could not be explained by adolescent depression, gender, or socioeconomic status. Moreover, the findings held when the outcome variables were assessed using objective measures and informant reports; therefore, the findings cannot be explained by shared method variance in self-report data. 
The findings suggest that low self-esteem during adolescence predicts negative real-world consequences during adulthood."} {"_id": "02bb762c3bd1b3d1ad788340d8e9cdc3d85f33e1", "title": "Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web", "text": "We describe a family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP/IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and/or quorum systems."} {"_id": "155ca30ef360d66af571eee47c7f60f300e154db", "title": "In Search of an Understandable Consensus Algorithm", "text": "Raft is a consensus algorithm for managing a replicated log. It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos, but its structure is different from Paxos; this makes Raft more understandable than Paxos and also provides a better foundation for building practical systems. In order to enhance understandability, Raft separates the key elements of consensus, such as leader election, log replication, and safety, and it enforces a stronger degree of coherency to reduce the number of states that must be considered. Results from a user study demonstrate that Raft is easier for students to learn than Paxos. Raft also includes a new mechanism for changing the cluster membership, which uses overlapping majorities to guarantee safety."} {"_id": "2a0d27ae5c82d81b4553ea44e81eb986be5fd126", "title": "Paxos Made Simple", "text": "The Paxos algorithm, when presented in plain English, is very simple."} {"_id": "3593269a4bf87a7d0f7aba639a50bc74cb288fb1", "title": "Space/Time Trade-offs in Hash Coding with Allowable Errors", "text": "In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency.\nThe new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods.
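The consistent hashing idea described above fits in a short ring implementation: each cache is hashed to many points on a circle, a key is served by the first cache clockwise from its hash, and adding or removing a cache remaps only about 1/n of the keys. A minimal sketch; the replica count and hash function here are arbitrary choices, not the paper's construction:

```python
import bisect
import hashlib

class ConsistentHash:
    """Hash ring with virtual nodes ("replicas") to smooth the load."""
    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas
        self.ring = []                        # sorted list of (hash, node)
        for node in nodes:
            self.add(node)

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):
            bisect.insort(self.ring, (self._h(f"{node}#{i}"), node))

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def lookup(self, key):
        i = bisect.bisect(self.ring, (self._h(key), "")) % len(self.ring)
        return self.ring[i][1]                # first node clockwise

ring = ConsistentHash(["cacheA", "cacheB", "cacheC"])
print(ring.lookup("http://example.com/page"))
```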
The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods.\nIn such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to \u201ccatch\u201d the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods.\nAnalysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time."} {"_id": "691564e0f19d5f62597adc0720d0e51ddbce9b89", "title": "Web Caching with Consistent Hashing", "text": "A key performance measure for the World Wide Web is the speed with which content is served to users. As traffic on the Web increases, users are faced with increasing delays and failures in data delivery. Web caching is one of the key strategies that has been explored to improve performance. An important issue in many caching systems is how to decide what is cached where at any given time. Solutions have included multicast queries and directory schemes. In this paper, we offer a new Web caching strategy based on consistent hashing. Consistent hashing provides an alternative to multicast and directory schemes, and has several other advantages in load balancing and fault tolerance. Its performance was analyzed theoretically in previous work; in this paper we describe the implementation of a consistent-hashing-based system and experiments that support our thesis that it can provide performance improvements."} {"_id": "215ac9b23a9a89ad7c8f22b5f9a9ad737204d820", "title": "An Empirical Investigation into Programming Language Syntax", "text": "Recent studies in the literature have shown that syntax remains a significant barrier to novice computer science students in the field. While this syntax barrier is known to exist, whether and how it varies across programming languages has not been carefully investigated. For this article, we conducted four empirical studies on programming language syntax as part of a larger analysis into the so-called programming language wars. We first present two surveys conducted with students on the intuitiveness of syntax, which we used to garner formative clues on what words and symbols might be easy for novices to understand. We followed up with two studies on the accuracy rates of novices using a total of six programming languages: Ruby, Java, Perl, Python, Randomo, and Quorum. Randomo was designed by randomly choosing some keywords from the ASCII table (a metaphorical placebo). To our surprise, we found that languages using a more traditional C-style syntax (both Perl and Java) did not afford accuracy rates significantly higher than a language with randomly generated keywords, but that languages which deviate (Quorum, Python, and Ruby) did.
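The hash-coding scheme with allowable errors analyzed above is what is now called a Bloom filter. A compact sketch using the standard sizing formulas m = -n ln(p)/(ln 2)^2 and k = (m/n) ln 2; the double-hashing trick below is a modern convenience, not the paper's original construction:

```python
import hashlib
import math

class BloomFilter:
    """Approximate membership: false positives possible, no false negatives."""
    def __init__(self, n_items, fp_rate):
        self.m = math.ceil(-n_items * math.log(fp_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _hashes(self, item):
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1       # double hashing
        return ((h1 + i * h2) % self.m for i in range(self.k))

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h // 8] |= 1 << (h % 8)

    def __contains__(self, item):
        return all(self.bits[h // 8] & (1 << (h % 8)) for h in self._hashes(item))

bf = BloomFilter(n_items=1000, fp_rate=0.01)
bf.add("message-42")
print("message-42" in bf, "message-99" in bf)   # True, (almost surely) False
```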
These results, including the specifics of syntax that are particularly problematic for novices, may help teachers of introductory programming courses in choosing appropriate first languages and in helping students to overcome the challenges they face with syntax."} {"_id": "e4edc414773e709e8eb3eddd77b519637f26f9a5", "title": "Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train", "text": "For the past 5 years, the ILSVRC competition and the ImageNet dataset have attracted a lot of interest from the Computer Vision community, allowing for state-of-the-art accuracy to grow tremendously. This should be credited to the use of deep artificial neural network designs. As these became more complex, the storage, bandwidth, and compute requirements increased. This means that with a non-distributed approach, even when using the most high-density server available, the training process may take weeks, making it prohibitive. Furthermore, as datasets grow, the representation learning potential of deep networks grows as well by using more complex models. This synchronicity triggers a sharp increase in the computational requirements and motivates us to explore the scaling behaviour on petaflop scale supercomputers. In this paper we will describe the challenges and novel solutions needed in order to train ResNet-50 in this large scale environment. We demonstrate above 90% scaling efficiency and a training time of 28 minutes using up to 104K x86 cores. This is supported by software tools from Intel\u2019s ecosystem. Moreover, we show that with regular 90\u2013120 epoch train runs we can achieve a top-1 accuracy as high as 77% for the unmodified ResNet-50 topology. We also introduce the novel Collapsed Ensemble (CE) technique that allows us to obtain a 77.5% top-1 accuracy, similar to that of a ResNet-152, while training an unmodified ResNet-50 topology for the same fixed training budget. All ResNet-50 models as well as the scripts needed to replicate them will be posted shortly. Keywords\u2014deep learning, scaling, convergence, large minibatch, ensembles."} {"_id": "154d62d97d43243d73352b969b2335caaa6c2b37", "title": "Ensemble learning for free with evolutionary algorithms?", "text": "Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning for the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing the classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-EEL) or incrementally along evolution (On-EEL).
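One plausible reading of the Off-EEL extraction step just described is a greedy, margin-based selection from the final population; the voting scheme and margin definition below are illustrative assumptions, not the paper's exact criterion:

    import numpy as np

    def off_eel_select(P, y, k=5):
        # P: (n_classifiers, n_samples) predictions in {-1, +1}; y: true labels in {-1, +1}.
        chosen, votes = [], np.zeros(len(y))
        for _ in range(k):
            best, best_margin = None, -np.inf
            for i in range(len(P)):
                if i in chosen:
                    continue
                margin = np.mean(y * (votes + P[i]) / (len(chosen) + 1))  # mean voting margin
                if margin > best_margin:
                    best, best_margin = i, margin
            chosen.append(best)       # greedily grow the ensemble
            votes += P[best]
        return chosen

    rng = np.random.default_rng(0)
    y = rng.choice([-1, 1], size=200)
    P = np.where(rng.random((20, 200)) < 0.65, y, -y)  # 20 weak classifiers, ~65% accurate
    print(off_eel_select(P, y))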
Experiments on a set of benchmark problems show that Off-EEL outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting and generates smaller classifier ensembles."} {"_id": "3146fabd5631a7d1387327918b184103d06c2211", "title": "Person-Independent 3D Gaze Estimation Using Face Frontalization", "text": "Person-independent and pose-invariant estimation of eye-gaze is important for situation analysis and for automated video annotation. We propose a fast cascade regression based method that first estimates the location of a dense set of markers and their visibility, then reconstructs face shape by fitting a part-based 3D model. Next, the reconstructed 3D shape is used to estimate a canonical view of the eyes for 3D gaze estimation. The model operates in a feature space that naturally encodes local ordinal properties of pixel intensities leading to photometric invariant estimation of gaze. To evaluate the algorithm in comparison with alternative approaches, three publicly-available databases were used, Boston University Head Tracking, Multi-View Gaze and CAVE Gaze datasets. Precision for head pose and gaze averaged 4 degrees or less for pitch, yaw, and roll. The algorithm outperformed alternative methods in all datasets."} {"_id": "39773ed3c249a731224b77783a1c1e5f353d5429", "title": "End-to-End Radio Traffic Sequence Recognition with Deep Recurrent Neural Networks", "text": "We investigate sequence machine learning techniques on raw radio signal time-series data. By applying deep recurrent neural networks we learn to discriminate between several application layer traffic types on top of a constant envelope modulation without using an expert demodulation algorithm. We show that complex protocol sequences can be learned and used for both classification and generation tasks using this approach. Keywords\u2014Machine Learning, Software Radio, Protocol Recognition, Recurrent Neural Networks, LSTM, Protocol Learning, Traffic Classification, Cognitive Radio, Deep Learning"} {"_id": "fb7f39d7d24b30df7b177bca2732ff8c3ade0bc0", "title": "Homography estimation using one ellipse correspondence and minimal additional information", "text": "In sport scenarios like football or basketball, we often deal with central views where only the central circle and some additional primitives, like the central line and the central point or a touch line, are visible. In this paper we first characterize, from a mathematical point of view, the set of homographies that project a given ellipse into the unit circle; next, using minimal additional information, such as the position in the image of the central line and central point or a touch line, we show a method to fully determine the plane homography. We present some experiments in sport scenarios to show the ability of the proposed method to properly recover the plane homography."} {"_id": "591b52d24eb95f5ec3622b814bc91ac872acda9e", "title": "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.", "text": "Multilayer connectionist models of memory based on the encoder model using the backpropagation learning rule are evaluated. The models are applied to standard recognition memory procedures in which items are studied sequentially and then tested for retention. Sequential learning in these models leads to 2 major problems. First, well-learned information is forgotten rapidly as new information is learned.
Second, discrimination between studied items and new items either decreases or is nonmonotonic as a function of learning. To address these problems, manipulations of the network within the multilayer model and several variants of the multilayer model were examined, including a model with prelearned memory and a context model, but none solved the problems. The problems discussed provide limitations on connectionist models applied to human memory and in tasks where information to be learned is not all available during learning."} {"_id": "639f02d25eab3794e35b757ef64c6815a8929f84", "title": "A self-boost charge pump topology for a gate drive high-side power supply", "text": "A self-boost charge pump topology is presented for a floating high-side gate drive power supply that features high voltage and current capabilities for use in integrated power electronic modules (IPEMs). The transformerless topology uses a small capacitor to transfer energy to the high-side switch from a single power supply referred to the negative rail. Unlike conventional bootstrap power supplies, no switching of the main phase-leg switches is required to provide power continuously to the high-side gate drive, even if the high-side switch is permanently on. Additional advantages include low parts-count and simple control requirements. A piecewise linear model of the self-boost charge pump is derived and the circuit's operating characteristics are analyzed. Simulation and experimental results are provided to verify the desired operation of the new charge pump circuit. Guidelines are provided to assist with circuit component selection in new applications."} {"_id": "a4bf5c295f0bf4f7f8d5c1e702b62018cca9bc58", "title": "The long-term sequelae of child and adolescent abuse: a longitudinal community study.", "text": "The purpose of the present study was to examine the relationship between childhood and adolescent physical and sexual abuse before the age of 18 and psychosocial functioning in mid-adolescence (age 15) and early adulthood (age 21) in a representative community sample of young adults. Subjects were 375 participants in an ongoing 17-year longitudinal study. At age 21, nearly 11% reported physical or sexual abuse before age 18. Psychiatric disorders based on DSM-III-R criteria were assessed utilizing the NIMH Diagnostic Interview Schedule, Revised Version (DIS-III-R). Approximately 80% of the abused young adults met DSM-III-R criteria for at least one psychiatric disorder at age 21. Compared to their nonabused counterparts, abused subjects demonstrated significant impairments in functioning both at ages 15 and at 21, including more depressive symptomatology, anxiety, psychiatric disorders, emotional-behavioral problems, suicidal ideation, and suicide attempts. While abused individuals were functioning significantly more poorly overall at ages 15 and 21 than their nonabused peers, gender differences and distinct patterns of impaired functioning emerged. These deficits underscore the need for early intervention and prevention strategies to forestall or minimize the serious consequences of child abuse."} {"_id": "16e39000918a58e0755dc42abed368b2215c2aed", "title": "A radio resource management framework for TVWS exploitation under an auction-based approach", "text": "This paper elaborates on the design, implementation and performance evaluation of a prototype Radio Resource Management (RRM) framework for TV white spaces (TVWS) exploitation, under an auction-based approach.
The proposed RRM framework is applied in a centralised Cognitive Radio (CR) network architecture, where exploitation of the available TVWS by Secondary Systems is orchestrated via a Spectrum Broker. Efficient RRM framework performance, in terms of maximum possible resource utilization and Spectrum Broker benefit, is achieved by proposing and evaluating an auction-based algorithm. This auction-based algorithm considers both the frequency and time domains during the TVWS allocation process, which is defined as an optimization problem where the maximum payoff of the Spectrum Broker is the optimization goal. Experimental tests carried out in a controlled-conditions environment verified the validity of the proposed framework and identified fields for further research."} {"_id": "d6619b3c0523f0a12168fbce750edeee7b6b8a53", "title": "High power and high efficiency GaN-HEMT for microwave communication applications", "text": "Microwaves have been widely used for modern communication systems, which have advantages in high bit rate transmission and the ease of compact circuit and antenna design. Gallium Nitride (GaN), featured with high breakdown and high saturation velocity, is one of the promising materials for high power and high frequency devices, and a kW-class output power has already been achieved [1]. We have developed high power and high efficiency GaN HEMTs [2\u20135], targeting amplifiers for the base transceiver station (BTS). This presentation summarizes our recent works, focusing on the developments for efficiency boosting and robustness in high power RF operation."} {"_id": "d2f210e3f34d65e3ae66b60e98d9c3a740b3c52a", "title": "Coloring-based coalescing for graph coloring register allocation", "text": "Graph coloring register allocation tries to minimize the total cost of spilled live ranges of variables. Live-range splitting and coalescing are often performed before the coloring to further reduce the total cost. Coalescing of split live ranges, called sub-ranges, can decrease the total cost by lowering the interference degrees of their common interference neighbors. However, it can also increase the total cost because the coalesced sub-ranges can become uncolorable. In this paper, we propose coloring-based coalescing, which first performs trial coloring and next coalesces all copy-related sub-ranges that were assigned the same color. The coalesced graph is then colored again with the graph coloring register allocation. The rationale is that coalescing of differently colored sub-ranges could result in spilling because there are some interference neighbors that prevent them from being assigned the same color. Experiments on Java programs show that the combination of live-range splitting and coloring-based coalescing reduces the static spill cost by more than 6% on average, compared to the baseline coloring without splitting. In contrast, well-known iterated and optimistic coalescing algorithms, when combined with splitting, increase the cost by more than 20%. Coloring-based coalescing improves the execution time by up to 15% and 3% on average, while the existing algorithms improve by up to 12% and 1% on average."} {"_id": "4bbd31803e900aebcdb984523ef3770de3641981", "title": "Mathematics Learning through Computational Thinking Activities: A Systematic Literature Review", "text": "Computational Thinking represents a terminology that embraces the complex set of reasoning processes involved in stating and solving problems by means of a computational tool.
The ability to systematize problems and solve them by these means is currently considered a skill to be developed by all students, together with Language, Mathematics and Sciences. Considering that Computer Science has many of its roots in Mathematics, it is reasonable to ponder if and how Mathematics learning can be influenced by offering activities related to Computational Thinking to students. In this sense, this article presents a Systematic Literature Review on reported evidence of Mathematics learning in activities aimed at developing Computational Thinking skills. Forty-two articles published from 2006 to 2017 that presented didactic activities together with an experimental design to evaluate learning outcomes were analyzed. The majority of identified activities used a software tool or hardware device for their development. In these papers, a wide variety of mathematical topics has been addressed, with some emphasis on Planar Geometry and Algebra. Conversion of models and solutions between different semiotic representations is a high-level cognitive skill that is most frequently associated with educational outcomes. This review indicated that more recent articles present a higher level of rigor in methodological procedures to assess learning effects. However, joint analysis of evidence from more than one data source is still not frequently used as a validation procedure."} {"_id": "e9b87d8ba83281d5ea01e9b9fab14c73b0ae75eb", "title": "Partially overlapping neural networks for real and imagined hand movements.", "text": "Neuroimaging findings have shown similar cerebral networks associated with imagination and execution of a movement. On the other hand, neuropsychological studies of parietal-lesioned patients suggest that these networks may be at least partly distinct. In the present study, normal subjects were asked to either imagine or execute auditory-cued hand movements. Compared with rest, imagination and execution showed overlapping networks, including bilateral premotor and parietal areas, basal ganglia and cerebellum. However, direct comparison between the two experimental conditions showed that specific cortico-subcortical areas were more engaged in mental simulation, including bilateral premotor, prefrontal, supplementary motor and left posterior parietal areas, and the caudate nuclei. These results suggest that a specific neuronal substrate is involved in the processing of hand motor representations."} {"_id": "8a718fccc947750580851f10698de1f41f5991f4", "title": "Disconnected aging: Cerebral white matter integrity and age-related differences in cognition", "text": "Cognition arises as a result of coordinated processing among distributed brain regions and disruptions to communication within these neural networks can result in cognitive dysfunction. Cortical disconnection may thus contribute to the declines in some aspects of cognitive functioning observed in healthy aging. Diffusion tensor imaging (DTI) is ideally suited for the study of cortical disconnection as it provides indices of structural integrity within interconnected neural networks. The current review summarizes results of previous DTI aging research with the aim of identifying consistent patterns of age-related differences in white matter integrity, and of relationships between measures of white matter integrity and behavioral performance as a function of adult age. We outline a number of future directions that will broaden our current understanding of these brain-behavior relationships in aging.
Specifically, future research should aim to (1) investigate multiple models of age-brain-behavior relationships; (2) determine the tract-specificity versus global effect of aging on white matter integrity; (3) assess the relative contribution of normal variation in white matter integrity versus white matter lesions to age-related differences in cognition; (4) improve the definition of specific aspects of cognitive functioning related to age-related differences in white matter integrity using information processing tasks; and (5) combine multiple imaging modalities (e.g., resting-state and task-related functional magnetic resonance imaging; fMRI) with DTI to clarify the role of cerebral white matter integrity in cognitive aging."} {"_id": "d53432934fa78151e7b75c95093c9b0be94b4b9a", "title": "Evolving computational intelligence systems", "text": "A new paradigm of the evolving computational intelligence systems (ECIS) is introduced in a generic framework of knowledge and data integration (KDI). This generalization of the recent advances in the development of evolving fuzzy and neuro-fuzzy models, and the more analytical angle of consideration through the prism of knowledge evolution as opposed to the usually used data-centred approach, marks the novelty of the present paper. ECIS constitutes a suitable paradigm for adaptive modeling of continuous dynamic processes and tracing the evolution of knowledge. The elements of evolution, such as inheritance and structure development, are related to the knowledge and data pattern dynamics and are considered in the context of an individual system/model. Another novelty of this paper consists of a conceptual-level comparison between the evolution of models and of the knowledge captured by these models, and the well-known paradigm of evolutionary computation. Although ECIS differs from the concept of evolutionary (genetic) computing, both paradigms heavily borrow from the same source \u2013 nature and human evolution. As the origin of knowledge, humans are the best model of an evolving intelligent system. Instead of considering the evolution of populations of species or genes, as evolutionary computation algorithms do, ECIS concentrates on the evolution of a single intelligent system. The aim is to develop the intelligence/knowledge of this system through an evolution using inheritance and modification, upgrade and reduction. This approach is also suitable for the integration of new data and existing models into new models that can be incrementally adapted to future incoming data. This powerful new concept has been recently introduced by the authors in a series of parallel works and is still under intensive development. It forms the conceptual basis for the development of truly intelligent systems. This paper also brings together the two working examples of ECIS, namely ECOS and EFS. The ideas are supported by illustrative examples (a synthetic non-linear function for the ECOS case and a benchmark problem of house price modelling from the UCI repository for the case of EFS)."} {"_id": "7bdec3d91d8b649f892a779da78428986d8c5e3b", "title": "CCVis : Visual Analytics of Student Online Learning Behaviors Using Course Clickstream Data", "text": "As more and more college classrooms utilize online platforms to facilitate teaching and learning activities, analyzing student online behaviors becomes increasingly important for instructors to effectively monitor and manage student progress and performance.
In this paper, we present CCVis, a visual analytics tool for analyzing the course clickstream data and exploring student online learning behaviors. Targeting a large college introductory course with over two thousand student enrollments, our goal is to investigate student behavior patterns and discover the possible relationships between student clickstream behaviors and their course performance. We employ higher-order network and structural identity classification to enable visual analytics of behavior patterns from the massive clickstream data. CCVis includes four coordinated views (the behavior pattern, behavior breakdown, clickstream comparative, and grade distribution views) for user interaction and exploration. We demonstrate the effectiveness of CCVis through case studies along with an ad-hoc expert evaluation. Finally, we discuss the limitation and extension of this work."} {"_id": "104829c56a7f1236a887a6993959dd52aebd86f5", "title": "Modeling the global freight transportation system: A multi-level modeling perspective", "text": "The interconnectedness of different actors in the global freight transportation industry has rendered such a system as a large complex system where different sub-systems are interrelated. On such a system, policy-related exploratory analyses with predictive capacity are difficult to perform. Although there are many global simulation models for various large complex systems, there is unfortunately very little research aimed at developing a global freight transportation model. In this paper, we present a multi-level framework to develop an integrated model of the global freight transportation system. We employ a system view to incorporate different relevant sub-systems and categorize them in different levels. The four-step model of freight transport is used as the basic foundation of the proposed framework. In addition, we also present the computational framework, which adheres to the high-level modeling framework, to provide a conceptualization of the discrete-event simulation model that will be developed."} {"_id": "c22366074e3b243f2caaeb2f78a2c8d56072905e", "title": "A broadband slotted ridge waveguide antenna array", "text": "A longitudinally-slotted ridge waveguide antenna array with a compact transverse dimension is presented. To broaden the bandwidth of the array, it is separated into two subarrays fed by a novel compact convex waveguide divider. A 16-element uniform linear array at X-band was fabricated and measured to verify the validity of the design. The measured bandwidth of S11 \u2264 -15 dB is 14.9% and the measured cross-polarization level is less than -36 dB over the entire bandwidth. This array can be combined with the edge-slotted waveguide array to build a two-dimensional dual-polarization antenna array for the synthetic aperture radar (SAR) application"} {"_id": "09c5b100f289a3993d91a66116e35ee95e99acc0", "title": "Segmenting cardiac MRI tagging lines using Gabor filter banks", "text": "Abstract\u2014This paper describes a new method for the segmentation and extraction of cardiac MRI tagging lines. Our method is based on the novel use of a 2D Gabor filter bank. By convolving the tagged input image with these filters, the tagging lines are automatically enhanced and extracted out. We design the Gabor filter bank based on the image\u2019s spatial and frequency characteristics. The output is a combination of each filter\u2019s response in the bank.
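A minimal sketch of such a filter bank (the frequency, envelope width and max-combination rule below are illustrative guesses; the paper derives them from the tag spacing and orientation of the image):

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(freq, theta, sigma=4.0, size=21):
        # An oriented sinusoid under a Gaussian envelope.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_rot = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x_rot)

    def enhance_tags(image, freq=0.15, n_orient=8):
        # Combine the bank by keeping the strongest response at each pixel.
        responses = [convolve2d(image, gabor_kernel(freq, t), mode="same")
                     for t in np.linspace(0, np.pi, n_orient, endpoint=False)]
        return np.max(responses, axis=0)

    image = np.random.rand(64, 64)  # stand-in for one tagged MRI slice
    print(enhance_tags(image).shape)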
We demonstrate that compared to bandpass methods such as HARP, this method results in robust and accurate segmentation of the tagging lines."} {"_id": "41e4eb8fbb335ae70026f4216069f33f8f9bbe53", "title": "Stepfather Involvement and Stepfather-Child Relationship Quality: Race and Parental Marital Status as Moderators.", "text": "Stepparent-child relationship quality is linked to stepfamily stability and children's well-being. Yet, the literature offers an incomplete understanding of factors that promote high-quality stepparent-child relationships, especially among socio-demographically diverse stepfamilies. In this study, we explore the association between stepfather involvement and stepfather-child relationship quality among a racially diverse and predominately low-income sample of stepfamilies with preadolescent children. Using a subsample of 467 mother-stepfather families from year 9 of the Fragile Families and Child Wellbeing Study, results indicate that stepfather involvement is positively associated with stepfather-child relationship quality. This association is statistically indistinguishable across racial groups, although the association is stronger among children in cohabiting stepfamilies compared to children in married stepfamilies."} {"_id": "45063cf2e0116e700da5ca2863c8bb82ad4d64c2", "title": "Conceptual and Database Modelling of Graph Databases", "text": "Compared with traditional, e.g., relational, databases, graph databases often lack some important database features. Particularly, a graph database schema, including integrity constraints, is not explicitly defined, and conceptual modelling is not used at all. It is hard to check the consistency of a graph database, because almost no integrity constraints are defined. In the paper, we discuss these issues and present current possibilities and challenges in graph database modelling. A conceptual level of graph database design is also considered. We propose a sufficient conceptual model and show its relationship to a graph database model. We focus also on integrity constraints modelling, particularly functional dependencies between entity types, which recalls the modelling of functional dependencies known from relational databases, and extend them to conditional functional dependencies."} {"_id": "6733017c5a01b698cc07b57fa9c9b9207b85cfbc", "title": "Accurate reconstruction of image stimuli from human fMRI based on the decoding model with capsule network architecture", "text": "In neuroscience, all kinds of computation models were designed to answer the open question of how sensory stimuli are encoded by neurons and, conversely, how sensory stimuli can be decoded from neuronal activities. In particular, functional Magnetic Resonance Imaging (fMRI) studies have made many great achievements with the rapid development of deep network computation. However, compared with the goal of decoding orientation, position and object category from activities in visual cortex, accurate reconstruction of image stimuli from human fMRI is still a challenging task. In this paper, the capsule network (CapsNet) architecture based visual reconstruction (CNAVR) method is developed to reconstruct image stimuli. A capsule contains a group of neurons to perform the better organization of feature structure and representation, inspired by the structure of the cortical minicolumn, which includes several hundred neurons, in primates. The high-level capsule features in the CapsNet include diverse features of image stimuli such as semantic class, orientation, location and so on.
We used these features to bridge between human fMRI and image stimuli. We first employed the CapsNet to train the nonlinear mapping from image stimuli to high-level capsule features, and from high-level capsule features to image stimuli again, in an end-to-end manner. After estimating the serviceability of each voxel by its encoding performance in order to select voxels, we then trained the nonlinear mapping from dimension-reduced fMRI data to high-level capsule features. Finally, we can predict the high-level capsule features with fMRI data, and reconstruct image stimuli with the CapsNet. We evaluated the proposed CNAVR method on a dataset of handwritten digit images, and exceeded the accuracy of all existing state-of-the-art methods by about 10% on the structural similarity index (SSIM)."} {"_id": "f8be08195b1a7e9e45028eee4844ea2482170a3e", "title": "Gut microbiota functions: metabolism of nutrients and other food components", "text": "The diverse microbial community that inhabits the human gut has an extensive metabolic repertoire that is distinct from, but complements the activity of mammalian enzymes in the liver and gut mucosa and includes functions essential for host digestion. As such, the gut microbiota is a key factor in shaping the biochemical profile of the diet and, therefore, its impact on host health and disease. The important role that the gut microbiota appears to play in human metabolism and health has stimulated research into the identification of specific microorganisms involved in different processes, and the elucidation of metabolic pathways, particularly those associated with metabolism of dietary components and some host-generated substances. In the first part of the review, we discuss the main gut microorganisms, particularly bacteria, and microbial pathways associated with the metabolism of dietary carbohydrates (to short chain fatty acids and gases), proteins, plant polyphenols, bile acids, and vitamins. The second part of the review focuses on the methodologies, existing and novel, that can be employed to explore gut microbial pathways of metabolism. These include mathematical models, omics techniques, isolated microbes, and enzyme assays."} {"_id": "7ec5f9694bc3d061b376256320eacb8ec3566b77", "title": "The CN2 Induction Algorithm", "text": "Systems for inducing concept descriptions from examples are valuable tools for assisting in the task of knowledge acquisition for expert systems. This paper presents a description and empirical evaluation of a new induction system, CN2, designed for the efficient induction of simple, comprehensible production rules in domains where problems of poor description language and/or noise may be present. Implementations of the CN2, ID3, and AQ algorithms are compared on three medical classification tasks."} {"_id": "0d57ba12a6d958e178d83be4c84513f7e42b24e5", "title": "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour", "text": "Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size.
In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves \u223c90% scaling efficiency when moving from 8 to 256 GPUs. This system enables us to train visual recognition models on internet-scale data with high efficiency."} {"_id": "22ba26e56fc3e68f2e6a96c60d27d5f721ea00e9", "title": "RMSProp and equilibrated adaptive learning rates for non-convex optimization", "text": "Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent."} {"_id": "27da8d31b23f15a8d4feefe0f309dfaad745f8b0", "title": "Understanding deep learning requires rethinking generalization", "text": "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth-two neural networks already have perfect finite-sample expressivity as soon as the number of parameters exceeds the number of data points, as it usually does in practice.
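The linear scaling rule and gradual warmup from the large-minibatch SGD abstract above condense into a few lines; the base values and warmup length here are illustrative (the paper warms up over roughly the first five epochs):

    def lr_schedule(iteration, batch_size, base_lr=0.1, base_batch=256, warmup_iters=25000):
        # Linear scaling rule: the target learning rate grows with the minibatch size.
        target = base_lr * batch_size / base_batch
        if iteration < warmup_iters:
            # Gradual warmup: ramp from the base rate up to the target rate.
            return base_lr + (target - base_lr) * iteration / warmup_iters
        return target

    print(lr_schedule(0, 8192), lr_schedule(25000, 8192))  # 0.1 at the start, 3.2 afterwards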
We interpret our experimental findings by comparison with traditional models."} {"_id": "8e0eacf11a22b9705a262e908f17b1704fd21fa7", "title": "Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin", "text": "We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech\u2014two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system [26]. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale."} {"_id": "bcdce6325b61255c545b100ef51ec7efa4cced68", "title": "An overview of gradient descent optimization algorithms", "text": "Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent."} {"_id": "907149ace088dad97fe6a6cadfd0c9260bb75795", "title": "Expressing emotion through posture and gesture Introduction", "text": "Emotion and its physical expression are an integral part of social interaction, informing others about how we are feeling and affecting social outcomes (Vosk, Forehand, and Figueroa 1983). Studies on the physical expression of emotion can be traced back to the 19th century with Darwin\u2019s seminal book \u201cThe Expression of the Emotions in Man and Animals\u201d that reveals the key role of facial expressions and body movement in communicating status and emotion (Darwin 1872)."} {"_id": "eaa6537b640e744216c8ec1272f6db5bbc53e0fe", "title": "Robust and Computationally Lightweight Autonomous Tracking of Vehicle Taillights and Signal Detection by Embedded Smart Cameras", "text": "An important aspect of collision avoidance and driver assistance systems, as well as autonomous vehicles, is the tracking of vehicle taillights and the detection of alert signals (turns and brakes). In this paper, we present the design and implementation of a robust and computationally lightweight algorithm for a real-time vision system, capable of detecting and tracking vehicle taillights, recognizing common alert signals using a vehicle-mounted embedded smart camera, and counting the cars passing on both sides of the vehicle. The system is low-power and processes scenes entirely on the microprocessor of an embedded smart camera.
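The most common update rules covered by the gradient-descent overview above each fit in a few lines of NumPy; the hyperparameter defaults are the usual textbook choices:

    import numpy as np

    def sgd_momentum(w, g, v, lr=0.01, mu=0.9):
        v = mu * v - lr * g                    # accumulate a velocity
        return w + v, v

    def rmsprop(w, g, s, lr=0.001, rho=0.9, eps=1e-8):
        s = rho * s + (1 - rho) * g**2         # running mean of squared gradients
        return w - lr * g / (np.sqrt(s) + eps), s

    def adam(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * g              # first moment
        v = b2 * v + (1 - b2) * g**2           # second moment
        m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)   # bias correction, t >= 1
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v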
In contrast to most existing work that addresses either daytime or nighttime detection, the presented system provides the ability to track vehicle taillights and detect alert signals regardless of lighting conditions. The mobile vision system has been tested in actual traffic scenes and the results obtained demonstrate the performance and the lightweight nature of the algorithm."} {"_id": "dd18d4a30cb1f516b62950db44f73589f8083c3e", "title": "Role of the Immune system in chronic pain", "text": "During the past two decades, an important focus of pain research has been the study of chronic pain mechanisms, particularly the processes that lead to the abnormal sensitivity \u2014 spontaneous pain and hyperalgesia \u2014 that is associated with these states. For some time it has been recognized that inflammatory mediators released from immune cells can contribute to these persistent pain states. However, it has only recently become clear that immune cell products might have a crucial role not just in inflammatory pain, but also in neuropathic pain caused by damage to peripheral nerves or to the CNS."} {"_id": "5592c7e0225c956419a9a315718a87190b33f4c2", "title": "An Energy-Efficient Architecture for Binary Weight Convolutional Neural Networks", "text": "Binary weight convolutional neural networks (BCNNs) can achieve near state-of-the-art classification accuracy and have far less computation complexity compared with traditional CNNs using high-precision weights. Due to their binary weights, BCNNs are well suited for vision-based Internet-of-Things systems that are sensitive to power consumption. BCNNs make it possible to achieve very high throughput with moderate power dissipation. In this paper, an energy-efficient architecture for BCNNs is proposed. It fully exploits the binary weights and other hardware-friendly characteristics of BCNNs. A judicious processing schedule is proposed so that off-chip I/O access is minimized and activations are maximally reused. To significantly reduce the critical path delay, we introduce optimized compressor trees and approximate binary multipliers with two novel compensation schemes. The latter is able to save significant hardware resources, and almost no computational accuracy is compromised. Taking advantage of the error resiliency of BCNNs, an innovative approximate adder is developed, which significantly reduces the silicon area and data path delay. Thorough error analysis and extensive experimental results on several data sets show that the approximate adders in the data path cause negligible accuracy loss. Moreover, algorithmic transformations for certain layers of BCNNs and a memory-efficient quantization scheme are incorporated to further reduce the energy cost and on-chip storage requirement. Finally, the proposed BCNN hardware architecture is implemented with the SMIC 130-nm technology.
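The core saving in a binary-weight CNN comes from replacing real-valued filter weights with their signs, so multiplications degenerate into additions and subtractions. A toy sketch (the per-filter scaling factor follows the XNOR-Net convention, which is an assumption here; the paper's exact quantization scheme may differ):

    import numpy as np
    from scipy.signal import convolve2d

    def binarize_filter(w):
        # w is approximated as alpha * sign(w), with one real scalar per filter.
        alpha = np.abs(w).mean()
        return alpha, np.sign(w)

    def binary_conv(x, w):
        alpha, b = binarize_filter(w)
        # Convolving with a {-1, +1} kernel needs no multipliers in hardware;
        # a single multiply by alpha remains per output map.
        return alpha * convolve2d(x, b, mode="valid")

    x, w = np.random.randn(8, 8), np.random.randn(3, 3)
    print(np.abs(binary_conv(x, w) - convolve2d(x, w, mode="valid")).mean())  # approximation error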
The post-layout results demonstrate that our design can achieve an energy efficiency over 2.0 TOp/s/W when scaled to 65 nm, which is more than two times better than the prior art."} {"_id": "55ea7bb4e75608115b50b78f2fea6443d36d60cc", "title": "Application of ordinal logistic regression analysis in determining risk factors of child malnutrition in Bangladesh", "text": "BACKGROUND\nThe study attempts to develop an ordinal logistic regression (OLR) model to identify the determinants of child malnutrition instead of developing a traditional binary logistic regression (BLR) model using the data of Bangladesh Demographic and Health Survey 2004.\n\n\nMETHODS\nBased on the weight-for-age anthropometric index (Z-score), child nutrition status is categorized into three groups: severely undernourished (< -3.0), moderately undernourished (-3.0 to -2.01) and nourished (\u2265 -2.0). Since nutrition status is ordinal, an OLR model, the proportional odds model (POM), can be developed instead of two separate BLR models to find predictors of both malnutrition and severe malnutrition, if the proportional odds assumption is satisfied. The assumption is satisfied with low p-value (0.144) due to violation of the assumption for one co-variate. So a partial proportional odds model (PPOM) and two BLR models have also been developed to check the applicability of the OLR model. A graphical test has also been adopted for checking the proportional odds assumption.\n\n\nRESULTS\nAll the models determine that age of child, birth interval, mothers' education, maternal nutrition, household wealth status, child feeding index, and incidence of fever, ARI & diarrhoea were the significant predictors of child malnutrition; however, results of the PPOM were more precise than those of the other models.\n\n\nCONCLUSION\nThese findings clearly justify that OLR models (POM and PPOM) are appropriate to find predictors of malnutrition instead of BLR models."} {"_id": "32f6c0b6f801da365ed39f50a4966cf241bb905e", "title": "Why Sleep Matters-The Economic Costs of Insufficient Sleep: A Cross-Country Comparative Analysis.", "text": "The Centers for Disease Control and Prevention (CDC) in the United States has declared insufficient sleep a \"public health problem.\" Indeed, according to a recent CDC study, more than a third of American adults are not getting enough sleep on a regular basis. However, insufficient sleep is not exclusively a US problem, and equally concerns other industrialised countries such as the United Kingdom, Japan, Germany, or Canada. According to some evidence, the proportion of people sleeping less than the recommended hours of sleep is rising and associated with lifestyle factors related to a modern 24/7 society, such as psychosocial stress, alcohol consumption, smoking, lack of physical activity and excessive electronic media use, among others. This is alarming as insufficient sleep has been found to be associated with a range of negative health and social outcomes, including success at school and in the labour market. Over the last few decades, for example, there has been growing evidence suggesting a strong association between short sleep duration and elevated mortality risks. Given the potential adverse effects of insufficient sleep on health, well-being and productivity, the consequences of sleep-deprivation have far-reaching economic consequences.
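The proportional odds model used in the malnutrition study above can be fitted with off-the-shelf tools. A sketch on synthetic stand-in data (the variable names are hypothetical, and statsmodels' OrderedModel is assumed to be available, i.e., statsmodels >= 0.12):

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(1)
    X = pd.DataFrame({"mother_edu_years": rng.integers(0, 12, 500),
                      "wealth_index": rng.normal(size=500)})
    # Ordered outcome 0 < 1 < 2: severely undernourished < moderately < nourished.
    latent = 0.3 * X["mother_edu_years"] + 0.8 * X["wealth_index"] + rng.logistic(size=500)
    y = pd.cut(latent, bins=[-np.inf, 1.0, 3.0, np.inf], labels=[0, 1, 2]).astype(int)

    pom = OrderedModel(y, X, distr="logit")   # one set of slopes, ordered thresholds
    print(pom.fit(method="bfgs", disp=False).params)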
Hence, in order to raise awareness of the scale of insufficient sleep as a public-health issue, comparative quantitative figures need to be provided for policy- and decision-makers, as well as recommendations and potential solutions that can help tackle the problem."} {"_id": "506277ae84149b82d215f76bc4f7135400f65b1d", "title": "User-defined Interface Gestures: Dataset and Analysis", "text": "We present a video-based gesture dataset and a methodology for annotating video-based gesture datasets. Our dataset consists of user-defined gestures generated by 18 participants from a previous investigation of gesture memorability. We design and use a crowd-sourced classification task to annotate the videos. The results are made available through a web-based visualization that allows researchers and designers to explore the dataset. Finally, we perform an additional descriptive analysis and quantitative modeling exercise that provide additional insights into the results of the original study. To facilitate the use of the presented methodology by other researchers we share the data, the source of the human intelligence tasks for crowdsourcing, a new taxonomy that integrates previous work, and the source code of the visualization tool."} {"_id": "aa6da71c3099cd394b9af663cfadce1ef77cb37b", "title": "Decision Support for Handling Mismatches between COTS Products and System Requirements", "text": "In the process of selecting commercial off-the-shelf (COTS) products, it is inevitable to encounter mismatches between COTS products and system requirements. Mismatches occur when COTS attributes do not exactly match our requirements. Many of these mismatches are resolved after selecting a COTS product in order to improve its fitness with the requirements. This paper proposes a decision support approach that aims at addressing COTS mismatches during and after the selection process. Our approach can be integrated with existing COTS selection methods at two stages: (1) When evaluating COTS candidates: our approach is used to estimate the anticipated fitness of the candidates if their mismatches are resolved. This helps to base our COTS selection decisions on the fitness that the COTS candidates will eventually have if selected. (2) After selecting a COTS product: the approach suggests alternative plans for resolving the most appropriate mismatches using suitable actions, such that the most important risk, technical, and resource constraints are met. A case study from the e-services domain is used to illustrate the method and to discuss its added value"} {"_id": "58bd0411bce7df96c44aa3579136eff873b56ac5", "title": "Multimodal Classification of Remote Sensing Images: A Review and Future Directions", "text": "Earth observation through remote sensing images allows the accurate characterization and identification of materials on the surface from space and airborne platforms. Multiple and heterogeneous image sources can be available for the same geographical region: multispectral, hyperspectral, radar, multitemporal, and multiangular images can today be acquired over a given scene. These sources can be combined/fused to improve the classification of the materials on the surface. Even if this type of system is generally accurate, the field is about to face new challenges: the upcoming constellations of satellite sensors will acquire large amounts of images of different spatial, spectral, angular, and temporal resolutions.
In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems. In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images. We also highlight the most recent advances, which exploit synergies with machine learning and signal processing: sparse methods, kernel-based fusion, Markov modeling, and manifold alignment. Then, we illustrate the different approaches in seven challenging remote sensing applications: 1) multiresolution fusion for multispectral image classification; 2) image downscaling as a form of multitemporal image fusion and multidimensional interpolation among sensors of different spatial, spectral, and temporal resolutions; 3) multiangular image classification; 4) multisensor image fusion exploiting physically-based feature extractions; 5) multitemporal image classification of land covers in incomplete, inconsistent, and vague image sources; 6) spatiospectral multisensor fusion of optical and radar images for change detection; and 7) cross-sensor adaptation of classifiers. The adoption of these techniques in operational settings will help to monitor our planet from space in the very near future."} {"_id": "9b69889c7d762c04a2d13b112d0b37e4f719ca34", "title": "Interface engineering of highly efficient perovskite solar cells", "text": "Advancing perovskite solar cell technologies toward their theoretical power conversion efficiency (PCE) requires delicate control over the carrier dynamics throughout the entire device. By controlling the formation of the perovskite layer and careful choices of other materials, we suppressed carrier recombination in the absorber, facilitated carrier injection into the carrier transport layers, and maintained good carrier extraction at the electrodes. When measured via reverse bias scan, cell PCE is typically boosted to 16.6% on average, with the highest efficiency of ~19.3% in a planar geometry without antireflective coating. The fabrication of our perovskite solar cells was conducted in air and from solution at low temperatures, which should simplify manufacturing of large-area perovskite devices that are inexpensive and perform at high levels."} {"_id": "159f32e0d91ef919e94d9b6f1ef13ce9be62155c", "title": "Concatenate text embeddings for text classification", "text": "Text embedding has gained a lot of interest in the text classification area. This paper investigates the popular neural document embedding method Paragraph Vector as a source of evidence in document ranking. We focus on the effects of combining knowledge-based with knowledge-free document embeddings for the text classification task. We concatenate these two representations so that the classification can be done more accurately. The results of our experiments show that this approach achieves better performance on a popular dataset."} {"_id": "8db81373f22957d430dddcbdaebcbc559842f0d8", "title": "Limits of predictability in human mobility.", "text": "A range of applications, from predicting the spread of human and electronic viruses to city planning and resource management in mobile communications, depend on our ability to foresee the whereabouts and mobility of individuals, raising a fundamental question: To what degree is human behavior predictable? Here we explore the limits of predictability in human dynamics by studying the mobility patterns of anonymized mobile phone users.
By measuring the entropy of each individual's trajectory, we find a 93% potential predictability in user mobility across the whole user base. Despite the significant differences in the travel patterns, we find a remarkable lack of variability in predictability, which is largely independent of the distance users cover on a regular basis."} {"_id": "2bbe9735b81e0978125dad005656503fca567902", "title": "Reusing Hardware Performance Counters to Detect and Identify Kernel Control-Flow Modifying Rootkits", "text": "Kernel rootkits are formidable threats to computer systems. They are stealthy and can have unrestricted access to system resources. This paper presents NumChecker, a new virtual machine (VM) monitor based framework to detect and identify control-flow modifying kernel rootkits in a guest VM. NumChecker detects and identifies malicious modifications to a system call in the guest VM by measuring the number of certain hardware events that occur during the system call's execution. To automatically count these events, NumChecker leverages the hardware performance counters (HPCs), which exist in modern processors. By using HPCs, the checking cost is significantly reduced and the tamper-resistance is enhanced. We implement a prototype of NumChecker on Linux with the kernel-based VM. An HPC-based two-phase kernel rootkit detection and identification technique is presented and evaluated on a number of real-world kernel rootkits. The results demonstrate its practicality and effectiveness."} {"_id": "e7317fd7bd4f31e70351ca801f41d0040558ad83", "title": "Development and investigation of efficient artificial bee colony algorithm for numerical function optimization", "text": "The artificial bee colony algorithm (ABC), which is inspired by the foraging behavior of honey bee swarms, is a biologically-inspired optimization method. It has been shown to be more effective than the genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To address these insufficiencies, we propose an improved ABC algorithm called I-ABC. In I-ABC, the best-so-far solution, inertia weight and acceleration coefficients are introduced to modify the search process. Inertia weight and acceleration coefficients are defined as functions of the fitness. In addition, to further balance the search processes, the modification forms of the employed bees and the onlooker ones are different in the second acceleration coefficient. Experiments show that, for most functions, I-ABC has a faster convergence speed and better performance than both ABC and the gbest-guided ABC (GABC). However, I-ABC still could not substantially achieve the best solution for all optimization problems. In a few cases, it could not find better results than ABC or GABC. In order to inherit the strengths of ABC, GABC and I-ABC, a high-efficiency hybrid ABC algorithm, called PS-ABC, is proposed. PS-ABC owns the abilities of prediction and selection. Results show that PS-ABC has a faster convergence speed, like I-ABC, and better search ability than other relevant methods."} {"_id": "7401611a24f86dffb5b0cd39cf11ee55a4edb32b", "title": "Comparative Evaluation of Anomaly Detection Techniques for Sequence Data", "text": "We present a comparative evaluation of a large number of anomaly detection techniques on a variety of publicly available as well as artificially generated data sets.
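The entropy-based bound behind the 93% figure in the mobility abstract above is obtained by estimating the entropy rate of each trajectory and inverting Fano's inequality. A sketch of both steps (the Lempel-Ziv estimator and the bisection are standard choices, assumed here rather than taken from the paper):

    import numpy as np

    def lz_entropy(symbols):
        # Lempel-Ziv estimate of the entropy rate (bits/symbol); small integer alphabet assumed.
        s = "".join(chr(65 + v) for v in symbols)
        n, total = len(s), 0
        for i in range(n):
            k = 1
            while i + k <= n and s[i:i + k] in s[:i]:   # shortest substring not seen before i
                k += 1
            total += k
        return (n / total) * np.log2(n)

    def max_predictability(S, N):
        # Solve S = H(p) + (1 - p) * log2(N - 1) for p (Fano's inequality, p > 1/N branch).
        H = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p)
        lo, hi = 1.0 / N, 1.0 - 1e-9
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if H(mid) + (1 - mid) * np.log2(N - 1) > S else (lo, mid)
        return (lo + hi) / 2

    rng = np.random.default_rng(2)
    trajectory = rng.choice(3, size=500, p=[0.8, 0.1, 0.1])   # a fairly regular visit sequence
    S = lz_entropy(trajectory)
    print(S, max_predictability(S, N=3))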
Many of these are existing techniques while some are slight variants and/or adaptations of traditional anomaly detection techniques to sequence data."} {"_id": "d7988bb266bc6653efa4b83dda102e1fc464c1f8", "title": "Flexible and Stretchable Electronics Paving the Way for Soft Robotics", "text": "Planar and rigid wafer-based electronics are intrinsically incompatible with curvilinear and deformable organisms. Recent development of organic and inorganic flexible and stretchable electronics enabled sensing, stimulation, and actuation of/for soft biological and artificial entities. This review summarizes the enabling technologies of soft sensors and actuators, as well as power sources based on flexible and stretchable electronics. Examples include artificial electronic skins, wearable biosensors and stimulators, electronics-enabled programmable soft actuators, and mechanically compliant power sources. Their potential applications in soft robotics are illustrated in the framework of a five-step human\u2013robot interaction loop. Outlooks of future directions and challenges are provided at the end."} {"_id": "a3d638ab304d3ef3862d37987c3a258a24339e05", "title": "CycleGAN, a Master of Steganography", "text": "CycleGAN [Zhu et al., 2017] is one recent successful approach to learn a transformation between two image distributions. In a series of experiments, we demonstrate an intriguing property of the model: CycleGAN learns to \u201chide\u201d information about a source image into the images it generates in a nearly imperceptible, highfrequency signal. This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic. We connect this phenomenon with adversarial attacks by viewing CycleGAN\u2019s training procedure as training a generator of adversarial examples and demonstrate that the cyclic consistency loss causes CycleGAN to be especially vulnerable to adversarial attacks."} {"_id": "5b54b6aa8288a1e9713293cec0178e8f3db3de2d", "title": "A Novel Variable Reluctance Resolver for HEV/EV Applications", "text": "In order to simplify the manufacturing process of variable reluctance (VR) resolvers for hybrid electric vehicle/electric vehicle (HEV/EV) applications, a novel VR resolver with nonoverlapping tooth-coil windings is proposed in this paper. A comparison of the winding configurations is first carried out between the existing and the proposed designs, followed by the description of the operating principle. Furthermore, the influence of actual application conditions is investigated by finite-element (FE) analyses, including operating speed and assembling eccentricity. In addition, identical stator and windings of the novel design can be employed in three resolvers of different rotor saliencies. The voltage difference among the three rotor combinations, as well as the detecting accuracy, is further investigated. Finally, prototypes are fabricated and tested to verify the analyses."} {"_id": "355f9782e9667c19144e137761a7d44977c7a5c2", "title": "A content analysis of depression-related tweets", "text": "This study examines depression-related chatter on Twitter to glean insight into social networking about mental health. We assessed themes of a random sample (n=2,000) of depression-related tweets (sent 4-11 to 5-4-14). Tweets were coded for expression of DSM-5 symptoms for Major Depressive Disorder (MDD). 
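The cycle-consistency requirement that drives the steganographic behavior described in the CycleGAN entry above is just an L1 penalty between a sample and its round-trip translation. A minimal sketch (the stand-in generators G and F below are hypothetical closed-form maps, not trained networks, and the adversarial terms are omitted):

    import numpy as np

    def cycle_consistency_loss(G, F, x, y, lam=10.0):
        # lam * ( ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 )
        return lam * (np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean())

    # A generator that hides its input in a low-amplitude high-frequency signal can
    # drive this loss toward zero while its output still looks like the target domain.
    G = lambda x: x + 0.01 * np.sin(50 * x)
    F = lambda y: y - 0.01 * np.sin(50 * (y - 0.01 * np.sin(50 * y)))  # rough inverse of G
    x, y = np.random.rand(8, 8), np.random.rand(8, 8)
    print(cycle_consistency_loss(G, F, x, y))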
Supportive or helpful tweets about depression were the most common theme (n=787, 40%), closely followed by disclosing feelings of depression (n=625; 32%). Two-thirds of tweets revealed one or more symptoms for the diagnosis of MDD and/or communicated thoughts or ideas that were consistent with struggles with depression after accounting for tweets that mentioned depression trivially. Health professionals can use our findings to tailor and target prevention and awareness messages to those Twitter users in need."} {"_id": "69393d1fe9d68b7aeb5dd57741be392d18385e13", "title": "A Meta-Analysis of Methodologies for Research in Knowledge Management, Organizational Learning and Organizational Memory: Five Years at HICSS", "text": "The Task Force on Organizational Memory presented a report at the Hawaii International Conference for System Sciences in January 1998. The report included perspectives on knowledge-oriented research, conceptual models for organizational memory, and research methodologies for researchers considering work in organizational memory. This paper builds on the ideas originally presented in the 1998 report by examining research presented at HICSS in the general areas of knowledge management, organizational memory and organizational learning in the five years since the original task force report."} {"_id": "c171faac12e0cf24e615a902e584a3444fcd8857", "title": "The Satisfaction With Life Scale.", "text": ""} {"_id": "5a14949bcc06c0ae9eecd29b381ffce22e1e75b2", "title": "Organizational Learning and Management Information Systems", "text": "The articles in this issue of DATA BASE were chosen by Anthony G. Hopwood, who is a professor of accounting and financial reporting at the London Graduate School of Business Studies. The articles contain important ideas, Professor Hopwood wrote, of significance to all interested in information systems, be they practitioners or academics. The authors, with their professional affiliations at the time, were Chris Argyris, Graduate School of Education, Harvard University; Bo Hedberg and Sten Jonsson, Department of Business Administration, University of Gothenburg; J. Frisco den Hertog, N. V. Philips' Gloeilampenfabrieken, The Netherlands, and Michael J. Earl, Oxford Centre for Management Studies. The articles appeared originally in Accounting, Organizations and Society, a publication of which Professor Hopwood is editor-in-chief. AOS exists to monitor emerging developments and to actively encourage new approaches and perspectives."} {"_id": "ae4bb38eaa8fecfddbc9afefa33188ba3cc2282b", "title": "Missing Data Estimation in High-Dimensional Datasets: A Swarm Intelligence-Deep Neural Network Approach", "text": "In this paper, we examine the problem of missing data in high-dimensional datasets by taking into consideration the Missing Completely at Random and Missing at Random mechanisms, as well as the Arbitrary missing pattern. Additionally, this paper employs a methodology based on Deep Learning and Swarm Intelligence algorithms in order to provide reliable estimates for missing data. The deep learning technique is used to extract features from the input data via an unsupervised learning approach by modeling the data distribution based on the input. 
This deep learning technique is then used as part of the objective function for the swarm intelligence technique in order to estimate the missing data after a supervised fine-tuning phase by minimizing an error function based on the interrelationship and correlation between features in the dataset. The investigated methodology in this paper therefore has longer running times; however, the promising outcomes justify the trade-off. Also, basic knowledge of statistics is presumed."} {"_id": "349119a443223a45dabcda844ac41e37bd1abc77", "title": "Spatio-Temporal Join on Apache Spark", "text": "Effective processing of extremely large volumes of spatial data has led to many organizations employing distributed processing frameworks. Apache Spark is one such open-source framework that is enjoying widespread adoption. Within this data space, it is important to note that most of the observational data (i.e., data collected by sensors, either moving or stationary) has a temporal component, or timestamp. In order to perform advanced analytics and gain insights, the temporal component becomes as important as the spatial and attribute components. In this paper, we detail several variants of a spatial join operation that address spatial, temporal, and attribute-based joins. Our spatial join technique differs from other approaches in that it combines spatial, temporal, and attribute predicates in the join operator.\n In addition, our spatio-temporal join algorithm and implementation differs from others in that it runs in a commercial off-the-shelf (COTS) application. The users of this functionality are assumed to be GIS analysts with little if any knowledge of the implementation details of spatio-temporal joins or distributed processing. They are comfortable using simple tools that do not provide the ability to tweak the configuration of the"} {"_id": "0161e4348a7079e9c37434c5af47f6372d4b412d", "title": "Class segmentation and object localization with superpixel neighborhoods", "text": "We propose a method to identify and localize object classes in images. Instead of operating at the pixel level, we advocate the use of superpixels as the basic unit of a class segmentation or pixel localization scheme. To this end, we construct a classifier on the histogram of local features found in each superpixel. We regularize this classifier by aggregating histograms in the neighborhood of each superpixel and then refine our results further by using the classifier in a conditional random field operating on the superpixel graph. Our proposed method exceeds the previously published state-of-the-art on two challenging datasets: Graz-02 and the PASCAL VOC 2007 Segmentation Challenge."} {"_id": "02227c94dd41fe0b439e050d377b0beb5d427cda", "title": "Reading Digits in Natural Images with Unsupervised Feature Learning", "text": "Detecting and reading text from natural images is a hard computer vision task that is central to a variety of emerging applications. Related problems like document character recognition have been widely studied by computer vision and machine learning researchers and are virtually solved for practical applications like reading handwritten digits. Reliably recognizing characters in more complex scenes like photographs, however, is far more difficult: the best existing methods lag well behind human performance on the same tasks. 
In this paper we attack the problem of recognizing digits in a real application using unsupervised feature learning methods: reading house numbers from street level photos. To this end, we introduce a new benchmark dataset for research use containing over 600,000 labeled digits cropped from Street View images. We then demonstrate the difficulty of recognizing these digits when the problem is approached with hand-designed features. Finally, we employ variants of two recently proposed unsupervised feature learning methods and find that they are convincingly superior on our benchmarks."} {"_id": "081651b38ff7533550a3adfc1c00da333a8fe86c", "title": "How transferable are features in deep neural networks?", "text": "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset."} {"_id": "17facd6efab9d3be8b1681bb2c1c677b2cb02628", "title": "Transfer Feature Learning with Joint Distribution Adaptation", "text": "Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution difference. 
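A standalone sketch of the kind of distribution-difference statistic that JDA-style methods minimize for the marginal distributions. The linear-kernel Maximum Mean Discrepancy below only illustrates the quantity being reduced; the paper's full procedure couples it with conditional-distribution matching inside a dimensionality-reduction loop:

```python
import numpy as np

def linear_mmd(source, target):
    """Empirical MMD with a linear kernel: squared distance between
    the source and target feature means (illustrative sketch)."""
    delta = source.mean(axis=0) - target.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(200, 16))   # source-domain features
xt = rng.normal(0.5, 1.0, size=(200, 16))   # shifted target-domain features
print(linear_mmd(xs, xt))                   # larger domain shift -> larger MMD
```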
Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems."} {"_id": "1c734a14c2325cb76783ca0431862c7f04a69268", "title": "Deep Domain Confusion: Maximizing for Domain Invariance", "text": "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task."} {"_id": "1e21b925b65303ef0299af65e018ec1e1b9b8d60", "title": "Unsupervised Cross-Domain Image Generation", "text": "We study the ecological use of analogies in AI. Specifically, we address the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given representation function f, which accepts inputs in either domains, would remain unchanged. Other than f, the training data is unsupervised and consist of a set of samples from each domain, without any mapping between them. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f preserving component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity."} {"_id": "6918fcbf5c5a86a7ffaf5650080505b95cd6d424", "title": "Hierarchical organization versus self-organization", "text": "In this paper the difference between hierarchical organization and self-organization is investigated. Organization is defined as a structure with a function. But how does the structure affect the function? I will start to examine this by doing two simulations. The idea is to have a given network of agents, which influence their neighbors. How the result differs in three different types of networks is then explored. In the first simulation, agents try to align with their neighbors. The second simulation is inspired by the ecosystem. Agents take certain products from their neighbors, and transform them into products their neighbors can use."} {"_id": "891d443dc003ed5f8762373395aacfa9ff895fd4", "title": "Moving object detection, tracking and classification for smart video surveillance", "text": "Video surveillance has long been in use to monitor security-sensitive areas such as banks, department stores, highways, crowded public places and borders. 
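One textbook scheme for the moving-object detection step the surveillance thesis above builds on, sketched for a stationary camera. The thesis itself uses more elaborate models; the running-average update and the alpha/thresh tuning parameters here are assumptions for illustration:

```python
import numpy as np

def update_and_detect(frame, background, alpha=0.05, thresh=25.0):
    """Running-average background subtraction (illustrative sketch).

    Pixels that differ from the slowly adapted background by more than
    thresh are flagged as moving; the background then absorbs a small
    fraction of the current frame to track illumination changes.
    """
    frame = frame.astype(np.float64)
    mask = np.abs(frame - background) > thresh          # foreground mask
    background = (1 - alpha) * background + alpha * frame
    return mask, background

bg = np.zeros((4, 4))
mask, bg = update_and_detect(np.full((4, 4), 40.0), bg)
print(mask.all())   # True: everything moved relative to an empty background
```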
Advances in computing power, the availability of large-capacity storage devices and high-speed network infrastructure paved the way for cheaper, multi-sensor video surveillance systems. Traditionally, the video outputs are processed online by human operators and are usually saved to tapes for later use only after a forensic event. The increase in the number of cameras in ordinary surveillance systems overloaded both the human operators and the storage devices with high volumes of data and made it infeasible to ensure proper monitoring of sensitive areas for long periods. In order to filter out redundant information generated by an array of cameras, and increase the response time to forensic events, assisting the human operators with identification of important events in video by the use of \u201csmart\u201d video surveillance systems has become a critical requirement. The making of video surveillance systems \u201csmart\u201d requires fast, reliable and robust algorithms for moving object detection, classification, tracking and activity analysis. In this thesis, a smart visual surveillance system with real-time moving object detection, classification and tracking capabilities is presented. The system operates on both color and gray scale video imagery from a stationary camera. It can handle object detection in indoor and outdoor environments and under changing illumination conditions. The classification algorithm makes use of the shape of the detected objects and temporal tracking results to successfully categorize objects into pre-defined classes like human, human group and vehicle. The system is also able to reliably detect fire in various scenes. The proposed tracking algorithm successfully tracks video objects even in full occlusion cases. In addition to these, some important needs of a robust"} {"_id": "38a08fbe5eabbd68db495fa38f4ee506d82095d4", "title": "IMGPU: GPU-Accelerated Influence Maximization in Large-Scale Social Networks", "text": "Influence Maximization aims to find the top-K influential individuals to maximize the influence spread within a social network, which remains an important yet challenging problem. Proven to be NP-hard, the influence maximization problem has attracted tremendous study. Though there exist basic greedy algorithms which may provide a good approximation to the optimal result, they mainly suffer from low computational efficiency and excessively long execution time, limiting the application to large-scale social networks. In this paper, we present IMGPU, a novel framework to accelerate influence maximization by leveraging the parallel processing capability of graphics processing unit (GPU). We first improve the existing greedy algorithms and design a bottom-up traversal algorithm with GPU implementation, which contains inherent parallelism. To best fit the proposed influence maximization algorithm with the GPU architecture, we further develop an adaptive K-level combination method to maximize the parallelism and reorganize the influence graph to minimize the potential divergence. 
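For reference, a sketch of the baseline greedy influence maximization that frameworks like IMGPU accelerate. This CPU version with Monte-Carlo independent-cascade simulation ignores all of the paper's GPU-specific restructuring; the edge probability p and round count are assumed parameters:

```python
import random

def simulate_spread(graph, seeds, p=0.1, rounds=200, rng=random.Random(0)):
    """Monte-Carlo estimate of independent-cascade spread from a seed set."""
    total = 0
    for _ in range(rounds):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                if nbr not in active and rng.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / rounds

def greedy_im(graph, k):
    """Pick k seeds, each maximizing the marginal estimated spread."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds + [n]))
        seeds.append(best)
    return seeds

g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}   # toy adjacency list
print(greedy_im(g, 2))
```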
We carry out comprehensive experiments with both real-world and synthetic social network traces and demonstrate that with the IMGPU framework, we are able to outperform the state-of-the-art influence maximization algorithm by up to a factor of 60, and show potential to scale up to extraordinarily large-scale networks."} {"_id": "1459a6fc833e60ce0f43fe0fc9a48f8f74db77cc", "title": "Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization", "text": "We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point for constant minibatches. Furthermore, using a variant of these algorithms, we obtain provably faster convergence than batch proximal gradient descent. Our results are based on the recent variance reduction techniques for convex optimization but with a novel analysis for handling nonconvex and nonsmooth functions. We also prove global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, which subsumes several recent works."} {"_id": "19229afbce15d62bcf8d3afe84a2d47a0b6f1939", "title": "Participatory design and \"democratizing innovation\"", "text": "Participatory design has become increasingly engaged in public spheres and everyday life and is no longer solely concerned with the workplace. This is not only a shift from work oriented productive activities to leisure and pleasurable engagements, but also a new milieu for production and innovation and entails a reorientation from \"democracy at work\" to \"democratic innovation\". What democratic innovation entails is currently defined by management and innovation research, which claims that innovation has been democratized through easy access to production tools and lead-users as the new experts driving innovation. We sketch an alternative \"democratizing innovation\" practice more in line with the original visions of participatory design based on our experience of running Malm\u00f6 Living Labs - an open innovation milieu where new constellations, issues and ideas evolve from bottom-up long-term collaborations amongst diverse stakeholders. Two cases and controversial matters of concern are discussed. The fruitfulness of the concepts \"Things\" (as opposed to objects), \"infrastructuring\" (as opposed to projects) and \"agonistic public spaces\" (as opposed to consensual decision-making) are explored in relation to participatory innovation practices and democracy."} {"_id": "853331d5c2e4a5c29ff578c012bff7fec7ebd7bc", "title": "Study on Virtual Control of a Robotic Arm via a Myo Armband for the Self-Manipulation of a Hand Amputee", "text": "This paper uses a Myo device, which has electromyography (EMG) sensors for detecting electrical activities from different parts of the forearm muscles; it also has a gyroscope and an accelerometer. EMG sensors detect and provide very clear and important data from muscles compared with other types of sensors. The Myo armband sends data from EMG, gyroscope, and accelerometer sensors to a computer via Bluetooth and uses these data to control a virtual robotic arm which was built in Unity 3D. 
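The generic proximal step analyzed in the nonsmooth nonconvex finite-sum entry above, sketched for an L1 regularizer. The paper's contribution is the variance-reduced variants and their convergence theory; this only shows the prox-step those methods share:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd_step(w, minibatch_grad, lr, lam):
    """One proximal stochastic gradient step for f(w) + lam * ||w||_1,
    with f smooth (possibly nonconvex) and the L1 term convex."""
    return soft_threshold(w - lr * minibatch_grad, lr * lam)

w = np.array([0.5, -0.2, 1.5])
g = np.array([0.1, -0.3, 0.2])
print(prox_sgd_step(w, g, lr=0.1, lam=0.5))  # small coordinates get zeroed
```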
Virtual robotic arms based on EMG, gyroscope, and accelerometer sensors have different features. A robotic arm based on EMG is controlled by using the tension and relaxation of muscles. Consequently, a virtual robotic arm based on EMG is preferred for a hand amputee to a virtual robotic arm based on a gyroscope and an accelerometer."} {"_id": "21786e6ca30849f750656277573ee11fa4d469c5", "title": "Physical Demands of Different Positions in FA Premier League Soccer.", "text": "The purpose of this study was to evaluate the physical demands of English Football Association (FA) Premier League soccer for three different positional classifications (defender, midfielder and striker). Computerised time-motion video-analysis using the Bloomfield Movement Classification was undertaken on the purposeful movement (PM) performed by 55 players. Recognition of PM had a good inter-tester reliability strength of agreement (\u03ba = 0.7277). Players spent 40.6 \u00b1 10.0% of the match performing PM. Position had a significant influence on %PM time spent sprinting, running, shuffling, skipping and standing still (p < 0.05). However, position had no significant influence on the %PM time spent performing movement at low, medium, high or very high intensities (p > 0.05). Players spent 48.7 \u00b1 9.2% of PM time moving in a directly forward direction, 20.6 \u00b1 6.8% not moving in any direction and the remainder of PM time moving in backward, lateral, diagonal and arced directions. The players performed the equivalent of 726 \u00b1 203 turns during the match; 609 \u00b1 193 of these being of 0\u00b0 to 90\u00b0 to the left or right. Players were involved in the equivalent of 111 \u00b1 77 on-the-ball movement activities per match with no significant differences between the positions for total involvement in on-the-ball activity (p > 0.05). This study has provided an indication of the different physical demands of different playing positions in FA Premier League match-play through assessment of movements performed by players. Key points: Players spent ~40% of the match performing Purposeful Movement (PM). Position had a significant influence on %PM time spent performing each motion class except walking and jogging. Players performed >700 turns in PM, most of these being of 0\u00b0-90\u00b0. Strikers performed the most high to very high intensity activity and the most contact situations. Defenders also spent a significantly greater %PM time moving backwards than the other two positions. Different positions could benefit from more specific conditioning programs."} {"_id": "76737d93659b31d5a6ce07a4e9e5107bc0c39adf", "title": "A CNS-permeable Hsp90 inhibitor rescues synaptic dysfunction and memory loss in APP-overexpressing Alzheimer\u2019s mouse model via an HSF1-mediated mechanism", "text": "Induction of neuroprotective heat-shock proteins via pharmacological Hsp90 inhibitors is currently being investigated as a potential treatment for neurodegenerative diseases. Two major hurdles for therapeutic use of Hsp90 inhibitors are systemic toxicity and limited central nervous system permeability. We demonstrate here that chronic treatment with a proprietary Hsp90 inhibitor compound (OS47720) not only elicits a heat-shock-like response but also offers synaptic protection in symptomatic Tg2576 mice, a model of Alzheimer\u2019s disease, without noticeable systemic toxicity. 
Despite a short half-life of OS47720 in mouse brain, a single intraperitoneal injection induces rapid and long-lasting (>3 days) nuclear activation of the heat-shock factor, HSF1. A mechanistic study indicates that the remedial effects of OS47720 depend upon HSF1 activation and the subsequent HSF1-mediated transcriptional events on synaptic genes. Taken together, this work reveals a novel role of HSF1 in synaptic function and memory, which likely occurs through modulation of the synaptic transcriptome."} {"_id": "d65c2cbc0980d0840b88b569516ae9c277d9d200", "title": "Credit card fraud detection using machine learning techniques: A comparative analysis", "text": "Financial fraud is an ever-growing menace with far-reaching consequences in the financial industry. Data mining has played an important role in the detection of credit card fraud in online transactions. Credit card fraud detection, which is a data mining problem, becomes challenging due to two major reasons \u2014 first, the profiles of normal and fraudulent behaviours change constantly and secondly, credit card fraud data sets are highly skewed. The performance of fraud detection in credit card transactions is greatly affected by the sampling approach on the dataset, selection of variables and detection technique(s) used. This paper investigates the performance of na\u00efve Bayes, k-nearest neighbor and logistic regression on highly skewed credit card fraud data. The dataset of credit card transactions is sourced from European cardholders and contains 284,807 transactions. A hybrid technique of under-sampling and oversampling is carried out on the skewed data. The three techniques are applied on the raw and preprocessed data. The work is implemented in Python. The performance of the techniques is evaluated based on accuracy, sensitivity, specificity, precision, Matthews correlation coefficient and balanced classification rate. The results show optimal accuracies for the na\u00efve Bayes, k-nearest neighbor and logistic regression classifiers of 97.92%, 97.69% and 54.86%, respectively. The comparative results show that k-nearest neighbour performs better than the na\u00efve Bayes and logistic regression techniques."} {"_id": "fc4bd8f4db91bbb4053b8174544f79bf67b96b3b", "title": "Bangladeshi Number Plate Detection: Cascade Learning vs. Deep Learning", "text": "This work investigated two different machine learning techniques: Cascade Learning and Deep Learning, to find out which algorithm performs better to detect the number plate of vehicles registered in Bangladesh. To do this, we created a dataset of about 1000 images collected from a security camera of Independent University, Bangladesh. Each image in the dataset was then labelled manually by selecting the Region of Interest (ROI). In the Cascade Learning approach, a sliding window technique was used to detect objects. Then a cascade classifier was employed to determine if the window contained an object of interest or not. In the Deep Learning approach, the CIFAR-10 dataset was used to pre-train a 15-layer Convolutional Neural Network (CNN). Using this pretrained CNN, a Regions with CNN (R-CNN) detector was then trained using our dataset. 
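A compact sketch of the classifier comparison in the credit-card fraud entry above. The synthetic data stands in for the 284,807 European-cardholder transactions, and the hybrid under-/over-sampling step is omitted for brevity; only the sensitivity/specificity bookkeeping is shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                                   # synthetic transactions
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=2000) > 1.0).astype(int)

for name, clf in [("naive Bayes", GaussianNB()),
                  ("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X[:1500], y[:1500])
    tn, fp, fn, tp = confusion_matrix(y[1500:], clf.predict(X[1500:])).ravel()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0           # recall on the fraud class
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    print(f"{name}: sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```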
We found that the Deep Learning approach (maximum accuracy 99.60% using 566 training images) outperforms the detector constructed using Cascade classifiers (maximum accuracy 59.52% using 566 positive and 1022 negative training images) for 252 test images."} {"_id": "049c15a106015b287fec6fc3e8178d4c3f4adf67", "title": "Combining Poisson singular integral and total variation prior models in image restoration", "text": "In this paper, a novel Bayesian image restoration method based on a combination of priors is presented. It is well known that the Total Variation (TV) image prior preserves edge structures while imposing smoothness on the solutions. However, it tends to oversmooth textured areas. To alleviate this problem we propose to combine the TV and the Poisson Singular Integral (PSI) models, which, as we will show, preserves the image textures. The PSI prior depends on a parameter that controls the shape of the filter. A study on the behavior of the filter as a function of this parameter is presented. Our restoration model utilizes a bound for the TV image model based on the majorization\u2013minimization principle, and performs maximum a posteriori Bayesian inference. In order to assess the performance of the proposed approach, in the experimental section we compare it with other restoration methods."} {"_id": "ebf35073e122782f685a0d6c231622412f28a53b", "title": "A High-Quality Denoising Dataset for Smartphone Cameras", "text": "The last decade has seen an astronomical shift from imaging with DSLR and point-and-shoot cameras to imaging with smartphone cameras. Due to the small aperture and sensor size, smartphone images have notably more noise than their DSLR counterparts. While denoising for smartphone images is an active research area, the research community currently lacks a denoising image dataset representative of real noisy images from smartphone cameras with high-quality ground truth. We address this issue in this paper with the following contributions. We propose a systematic procedure for estimating ground truth for noisy images that can be used to benchmark denoising performance for smartphone cameras. Using this procedure, we have captured a dataset - the Smartphone Image Denoising Dataset (SIDD) - of ~30,000 noisy images from 10 scenes under different lighting conditions using five representative smartphone cameras and generated their ground truth images. We used this dataset to benchmark a number of denoising algorithms. We show that CNN-based methods perform better when trained on our high-quality dataset than when trained using alternative strategies, such as low-ISO images used as a proxy for ground truth data."} {"_id": "156e7730b8ba8a08ec97eb6c2eaaf2124ed0ce6e", "title": "THE CONTROL OF THE FALSE DISCOVERY RATE IN MULTIPLE TESTING UNDER DEPENDENCY By", "text": "Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. 
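For concreteness, a sketch of the Benjamini-Hochberg step-up procedure whose FDR control the entry above extends to positively dependent test statistics:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up procedure; returns a boolean rejection mask.

    q is the target false discovery rate. Reject all hypotheses up to
    the largest rank k with p_(k) <= (k/m) * q.
    """
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])    # index of the largest qualifying rank
        reject[order[:k + 1]] = True
    return reject

# thresholds here are 0.01, 0.02, 0.03, 0.04, 0.05 -> first two rejected
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.57], q=0.05))
```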
This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with a positive correlation matrix and multivariate t. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased."} {"_id": "7f47767d338eb39664844c94833b52ae73d964ef", "title": "Gesture Recognition with a Convolutional Long Short-Term Memory Recurrent Neural Network", "text": "Inspired by the adequacy of convolutional neural networks in implicit extraction of visual features and the efficiency of Long Short-Term Memory Recurrent Neural Networks in dealing with long-range temporal dependencies, we propose a Convolutional Long Short-Term Memory Recurrent Neural Network (CNNLSTM) for the problem of dynamic gesture recognition. The model is able to successfully learn gestures varying in duration and complexity and proves to be a significant base for further development. Finally, the new TsironiGR dataset of gesture commands for human-robot interaction is presented for the evaluation of CNNLSTM."} {"_id": "a9b533329845d5d1a31c3ff2821ce9865c440158", "title": "Mirroring others' emotions relates to empathy and interpersonal competence in children", "text": "The mirror neuron system (MNS) has been proposed to play an important role in social cognition by providing a neural mechanism by which others' actions, intentions, and emotions can be understood. Here functional magnetic resonance imaging was used to directly examine the relationship between MNS activity and two distinct indicators of social functioning in typically-developing children (aged 10.1 years \u00b1 7 months): empathy and interpersonal competence. Reliable activity in pars opercularis, the frontal component of the MNS, was elicited by observation and imitation of emotional expressions. Importantly, activity in this region (as well as in the anterior insula and amygdala) was significantly and positively correlated with established behavioral measures indexing children's empathic behavior (during both imitation and observation) and interpersonal skills (during imitation only). These findings suggest that simulation mechanisms and the MNS may indeed be relevant to social functioning in everyday life during typical human development."} {"_id": "64887b38c382e331cd2b045f7a7edf05f17586a8", "title": "Genetic and environmental influences on sexual orientation and its correlates in an Australian twin sample.", "text": "We recruited twins systematically from the Australian Twin Registry and assessed their sexual orientation and 2 related traits: childhood gender nonconformity and continuous gender identity. Men and women differed in their distributions of sexual orientation, with women more likely to have slight-to-moderate degrees of homosexual attraction, and men more likely to have high degrees of homosexual attraction. Twin concordances for nonheterosexual orientation were lower than in prior studies. Univariate analyses showed that familial factors were important for all traits, but were less successful in distinguishing genetic from shared environmental influences. Only childhood gender nonconformity was significantly heritable for both men and women. 
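A minimal model in the spirit of the CNNLSTM gesture entry above: a small CNN extracts per-frame features, an LSTM integrates them over time, and the last step is classified. The layer sizes and nine gesture classes are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Minimal CNN + LSTM gesture classifier (sketch)."""
    def __init__(self, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))               # -> (16, 4, 4) per frame
        self.lstm = nn.LSTM(16 * 4 * 4, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, clips):                       # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        f = self.features(clips.flatten(0, 1))     # fold time into the batch for the CNN
        f = f.flatten(1).reshape(b, t, -1)          # restore the time axis
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                # classify from the last time step

print(CNNLSTM()(torch.randn(2, 10, 1, 64, 64)).shape)   # torch.Size([2, 9])
```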
Multivariate analyses suggested that the causal architecture differed between men and women, and, for women, provided significant evidence for the importance of genetic factors to the traits' covariation."} {"_id": "4f2d62eaf7559b91b97bab3076fcd5f306da57f2", "title": "A texture-based method for modeling the background and detecting moving objects", "text": "This paper presents a novel and efficient texture-based method for modeling the background and detecting moving objects from a video sequence. Each pixel is modeled as a group of adaptive local binary pattern histograms that are calculated over a circular region around the pixel. The approach provides us with many advantages compared to the state-of-the-art. Experimental results clearly justify our model."} {"_id": "23ddae93514a47b56dcbeed80e67fab62e8b5ec9", "title": "Retro: Targeted Resource Management in Multi-tenant Distributed Systems", "text": "In distributed systems shared by multiple tenants, effective resource management is an important pre-requisite to providing quality of service guarantees. Many systems deployed today lack performance isolation and experience contention, slowdown, and even outages caused by aggressive workloads or by improperly throttled maintenance tasks such as data replication. In this work we present Retro, a resource management framework for shared distributed systems. Retro monitors per-tenant resource usage both within and across distributed systems, and exposes this information to centralized resource management policies through a high-level API. A policy can shape the resources consumed by a tenant using Retro\u2019s control points, which enforce sharing and rate-limiting decisions. We demonstrate Retro through three policies providing bottleneck resource fairness, dominant resource fairness, and latency guarantees to high-priority tenants, and evaluate the system across five distributed systems: HBase, Yarn, MapReduce, HDFS, and Zookeeper. Our evaluation shows that Retro has low overhead, and achieves the policies\u2019 goals, accurately detecting contended resources, throttling tenants responsible for slowdown and overload, and fairly distributing the remaining cluster capacity."} {"_id": "8f81d1854da5f6254780f00966d0c00d174b9881", "title": "Significant Change Spotting for Periodic Human Motion Segmentation of Cleaning Tasks Using Wearable Sensors", "text": "The proportion of the aging population is rapidly increasing around the world, which will cause stress on society and healthcare systems. In recent years, advances in technology have created new opportunities for automatic activities of daily living (ADL) monitoring to improve the quality of life and provide adequate medical service for the elderly. Such automatic ADL monitoring requires reliable ADL information on a fine-grained level, especially for the status of interaction between body gestures and the environment in the real world. In this work, we propose a significant change spotting mechanism for periodic human motion segmentation during cleaning task performance. A novel approach is proposed based on the search for a significant change of gestures, which can manage critical technical issues in activity recognition, such as continuous data segmentation, individual variance, and category ambiguity. Three typical machine learning classification algorithms are utilized for the identification of the significant change candidate, including a Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Naive Bayesian (NB) algorithm. 
Overall, the proposed approach achieves an F1-score of 96.41% using the SVM classifier. The results show that the proposed approach can fulfill the requirement of fine-grained human motion segmentation for automatic ADL monitoring."} {"_id": "6c8d5d5eee5967958a2e03a84bcc00f1f81f4d9e", "title": "Decontaminating eukaryotic genome assemblies with machine learning", "text": "High-throughput sequencing has made it theoretically possible to obtain high-quality de novo assembled genome sequences, but in practice DNA extracts are often contaminated with sequences from other organisms. Currently, there are few existing methods for rigorously decontaminating eukaryotic assemblies. Those that do exist filter sequences based on nucleotide similarity to contaminants and risk eliminating sequences from the target organism. We introduce a novel application of an established machine learning method, a decision tree, that can rigorously classify sequences. The major strength of the decision tree is that it can take any measured feature as input and does not require a priori identification of significant descriptors. We use the decision tree to classify de novo assembled sequences and compare the method to published protocols. A decision tree performs better than existing methods when classifying sequences in eukaryotic de novo assemblies. It is efficient, readily implemented, and accurately identifies target and contaminant sequences. Importantly, a decision tree can be used to classify sequences according to measured descriptors and has potentially many uses in distilling biological datasets."} {"_id": "6588070d6578bc8a9d1f284f766340587501d620", "title": "SAX-EFG: an evolutionary feature generation framework for time series classification", "text": "A variety of real world applications fit into the broad definition of time series classification. Traditional machine learning approaches, such as treating the time series sequences as high-dimensional vectors, have faced the well-known \"curse of dimensionality\" problem. Recently, the field of time series classification has seen success by using preprocessing steps that discretize the time series using a Symbolic Aggregate ApproXimation technique (SAX) and using recurring subsequences (\"motifs\") as features.\n In this paper we explore a feature construction algorithm based on genetic programming that uses SAX-generated motifs as the building blocks for the construction of more complex features. The research shows that the constructed complex features improve the classification accuracy in a statistically significant manner for many applications."} {"_id": "6c201c1ded432c98178f1d35410b8958decc884a", "title": "Conserving Energy Through Neural Prediction of Sensed Data", "text": "The constraint of energy consumption is a serious problem in wireless sensor networks (WSNs). In this regard, many solutions for this problem have been proposed in recent years. In one line of research, scholars suggest data-driven approaches to help conserve energy by reducing the amount of required communication in the network. This paper is an attempt in this area and proposes that sensors be powered on intermittently. A neural network will then simulate sensors\u2019 data during their idle periods. The success of this method relies heavily on a high correlation between the points making up a time series of sensed data. To demonstrate the effectiveness of the idea, we conduct a number of experiments. 
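A sketch of the prediction idea in the sensed-data entry above: an autoregressive model over lagged readings substitutes for real measurements while the sensor sleeps. The synthetic signal and the small MLP are assumed simplifications of the paper's NAR network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags=4):
    """Turn a sensor time series into (lag-window, next-value) pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

# synthetic temperature-like signal standing in for real humidity/temperature data
t = np.arange(2000)
temp = 20 + 5 * np.sin(2 * np.pi * t / 288) + np.random.default_rng(0).normal(0, 0.2, t.size)

X, y = make_lagged(temp)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
# During a sensor's idle period, predictions stand in for real readings.
print(float(np.mean(np.abs(model.predict(X[1500:]) - y[1500:]))))  # mean abs error
```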
In doing so, we train a NAR network against various datasets of sensed humidity and temperature in different environments. By testing on actual data, it is shown that the predictions by the device greatly obviate the need for sensed data during sensors\u2019 idle periods and save over 65 percent of energy."} {"_id": "2c17972edee8cd41f344009dc939cf51260f425a", "title": "Appropriate blood pressure control in hypertensive and normotensive type 2 diabetes mellitus: a summary of the ABCD trial", "text": "The hypertensive and normotensive Appropriate Blood Pressure Control in Diabetes (ABCD) studies were prospective, randomized, interventional clinical trials with 5 years of follow-up that examined the role of intensive versus standard blood pressure control in a total of 950 patients with type 2 diabetes mellitus. In the hypertensive ABCD study, a significant decrease in mortality was detected in the intensive blood pressure control group when compared with the standard blood pressure control group. There was also a marked reduction in the incidence of myocardial infarction when patients were randomly assigned to initial antihypertensive therapy with angiotensin-converting-enzyme inhibition rather than calcium channel blockade. The results of the normotensive ABCD study included associations between intensive blood pressure control and significant slowing of the progression of nephropathy (as assessed by urinary albumin excretion) and retinopathy, and fewer strokes. In both the hypertensive and normotensive studies, mean renal function (as assessed by 24 h creatinine clearance) remained stable during 5 years of either intensive or standard blood pressure intervention in patients with normoalbuminuria (<30 mg/24 h) or microalbuminuria (30\u2013300 mg/24 h) at baseline. By contrast, the rate of creatinine clearance in patients with overt diabetic nephropathy (>300 mg/24 h; albuminuria) at baseline decreased by an average of 5 ml/min/year in spite of either intensive or standard blood pressure control. Analysis of the results of 5 years of follow-up revealed a highly significant correlation of all-cause and cardiovascular mortality with left ventricular mass and severity of albuminuria."} {"_id": "4e4c9d1d7893795d386fa6e62385faa6c5eff814", "title": "Gaussian process factorization machines for context-aware recommendations", "text": "Context-aware recommendation (CAR) can lead to significant improvements in the relevance of the recommended items by modeling the nuanced ways in which context influences preferences. The dominant approach in context-aware recommendation has been the multidimensional latent factors approach in which users, items, and context variables are represented as latent features in low-dimensional space. An interaction between a user, item, and a context variable is typically modeled as some linear combination of their latent features. However, given the many possible types of interactions between users, items and contextual variables, it may seem unrealistic to restrict the interactions among them to linearity.\n To address this limitation, we develop a novel and powerful non-linear probabilistic algorithm for context-aware recommendation using Gaussian processes. The method, which we call Gaussian Process Factorization Machines (GPFM), is applicable to both the explicit feedback setting (e.g. numerical ratings as in the Netflix dataset) and the implicit feedback setting (i.e. purchases, clicks). We derive stochastic gradient descent optimization to allow scalability of the model. 
We test GPFM on five different benchmark contextual datasets. Experimental results demonstrate that GPFM outperforms state-of-the-art context-aware recommendation methods."} {"_id": "e083df3577b3231d16678aaf7a020767bdc9c3a0", "title": "Single-layered complex-valued neural network for real-valued classification problems", "text": "This paper presents two models of complex-valued neurons (CVNs) for real-valued classification problems, incorporating two newly-proposed activation functions, and presents their abilities as well as differences between them on benchmark problems. In both models, each real-valued input is encoded into a phase between 0 and \u03c0 of a complex number of unity magnitude, and multiplied by a complex-valued weight. The weighted sum of inputs is fed to an activation function. Activation functions of both models map complex values into real values, and their role is to divide the net-input (weighted sum) space into multiple regions representing the classes of input patterns. The gradient-based learning rule is derived for each of the activation functions. The ability of such CVNs is discussed and tested on two-class problems, such as two- and three-input Boolean problems and symmetry detection in binary sequences. We show here that both models can form proper decision boundaries for these linear and nonlinear problems. For solving n-class problems, a complex-valued neural network (CVNN) consisting of n CVNs is also considered in this paper. We tested such single-layered CVNNs on several real-world benchmark problems. The results show that the classification and generalization abilities of single-layered CVNNs are comparable to those of conventional real-valued neural networks (RVNNs) with one hidden layer. Moreover, convergence of CVNNs is much faster than that of RVNNs in most of the cases."} {"_id": "0b8651737442ec30052724a68e85fefc4c941970", "title": "Visual firewall: real-time network security monitor", "text": "Networked systems still suffer from poor firewall configuration and monitoring. VisualFirewall seeks to aid in the configuration of firewalls and monitoring of networks by providing four simultaneous views that display varying levels of detail and time-scales as well as correctly visualizing firewall reactions to individual packets. The four implemented views, real-time traffic, visual signature, statistics, and IDS alarm, provide the levels of detail and temporality that system administrators need to properly monitor their systems in a passive or an active manner. We have visualized several attacks, and we feel that even individuals unfamiliar with networking concepts can quickly distinguish between benign and malignant traffic patterns with a minimal amount of introduction."} {"_id": "9fe265de1cfab7a97b5efd81d7d42b386b15f2b9", "title": "IDS rainStorm: visualizing IDS alarms", "text": "The massive amount of alarm data generated from intrusion detection systems is cumbersome for network system administrators to analyze. Often, important details are overlooked and it is difficult to get an overall picture of what is occurring in the network by manually traversing textual alarm logs. We have designed a novel visualization to address this problem by showing alarm activity within a network. Alarm data is presented in an overview where system administrators can get a general sense of network activity and easily detect anomalies. They then have the option of zooming and drilling down for details. 
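A numpy sketch of the phase-encoded forward pass described in the complex-valued neuron entry above. Using the magnitude of the complex net input as the real-valued activation is an assumption; the paper proposes two specific activation functions not reproduced here:

```python
import numpy as np

def cvn_forward(x_real, weights):
    """Forward pass of a phase-encoded complex-valued neuron (sketch).

    Inputs in [0, 1] map to phases in [0, pi] on the unit circle, are
    multiplied by complex weights, and the complex weighted sum is
    mapped back to a real value.
    """
    z = np.exp(1j * np.pi * np.asarray(x_real))   # unity-magnitude encoding
    net = np.sum(weights * z)                     # complex weighted sum
    return np.abs(net)                            # complex -> real mapping (assumed)

rng = np.random.default_rng(0)
w = rng.normal(size=3) + 1j * rng.normal(size=3)  # complex-valued weights
print(cvn_forward([0.0, 0.5, 1.0], w))
```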
The information is presented with local network IP (Internet Protocol) addresses plotted over multiple y-axes to represent the location of alarms. Time on the x-axis is used to show the pattern of the alarms, and variations in color encode the severity and number of alarms. Based on our system administrator requirements study, this graphical layout addresses what system administrators need to see, is faster and easier than analyzing text logs, and uses visualization techniques to effectively scale and display the data. With this design, we have built a tool that effectively uses operational alarm log data generated on the Georgia Tech campus network. The motivation and background of our design is presented along with examples that illustrate its usefulness."} {"_id": "0c90a3d183dc5f467c692fb7cbf60303729c8078", "title": "Clustering intrusion detection alarms to support root cause analysis", "text": "It is a well-known problem that intrusion detection systems overload their human operators by triggering thousands of alarms per day. This paper presents a new approach for handling intrusion detection alarms more efficiently. Central to this approach is the notion that each alarm occurs for a reason, which is referred to as the alarm's root causes. This paper observes that a few dozen rather persistent root causes generally account for over 90% of the alarms that an intrusion detection system triggers. Therefore, we argue that alarms should be handled by identifying and removing the most predominant and persistent root causes. To make this paradigm practicable, we propose a novel alarm-clustering method that supports the human analyst in identifying root causes. We present experiments with real-world intrusion detection alarms to show how alarm clustering helped us identify root causes. Moreover, we show that the alarm load decreases quite substantially if the identified root causes are eliminated so that they can no longer trigger alarms in the future."} {"_id": "3c1f860a678f3f6f33f2cbfdfa7dfc7119a57a00", "title": "Aggregation and Correlation of Intrusion-Detection Alerts", "text": "This paper describes an aggregation and correlation algorithm used in the design and implementation of an intrusion-detection console built on top of the Tivoli Enterprise Console (TEC). The aggregation and correlation algorithm aims at acquiring intrusion-detection alerts and relating them together to expose a more condensed view of the security issues raised by intrusion-detection systems."} {"_id": "844f1a88efc648b5c604c0a098b5c49f3fea4139", "title": "MieLog: A Highly Interactive Visual Log Browser Using Information Visualization and Statistical Analysis", "text": "System administration has become an increasingly important function, with the fundamental task being the inspection of computer log-files. It is not, however, easy to perform such tasks for two reasons. One is the high recognition load of log contents due to the massive amount of textual data. It is a tedious, time-consuming and often error-prone task to read through them. The other problem is the difficulty in extracting unusual messages from the log. If an administrator does not have the knowledge or experience, he or she cannot readily recognize unusual log messages. 
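One simple statistical criterion of the kind MieLog-style browsers use to surface unusual log entries without domain knowledge, sketched below. The digit-masking templating and the -log frequency score are assumptions for illustration; the tool's actual statistics differ:

```python
from collections import Counter
from math import log

def rare_message_scores(log_lines):
    """Score log messages by rarity of their rough template (sketch)."""
    # Collapse variable fields (numbers, hex ids) so that messages which
    # differ only in such fields share a template.
    templates = ["".join("#" if c.isdigit() else c for c in line) for line in log_lines]
    counts = Counter(templates)
    total = len(templates)
    # Rare templates get high scores: -log of empirical frequency.
    return [(line, -log(counts[tpl] / total)) for line, tpl in zip(log_lines, templates)]

logs = ["login ok user=4123", "login ok user=9921", "kernel panic at 0xfff"]
for line, score in sorted(rare_message_scores(logs), key=lambda p: -p[1]):
    print(f"{score:5.2f}  {line}")   # the panic line surfaces first
```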
To help address these issues, we have developed a highly interactive visual log browser called \u201cMieLog\u201d. MieLog uses two techniques for manual log inspection tasks: information visualization and statistical analysis. Information visualization is helpful in reducing the recognition load because it provides an alternative method of interpreting textual information without reading. Statistical analysis enables the extraction of unusual log messages without domain-specific knowledge. We will give three examples that illustrate the ability of the MieLog system to isolate unusual messages more easily than before."} {"_id": "7f481f1a5fac3a49ee8b2f1bfa7e5f2f8eda3085", "title": "Bit Error Tolerance of a CIFAR-10 Binarized Convolutional Neural Network Processor", "text": "Deployment of convolutional neural networks (ConvNets) in always-on Internet of Everything (IoE) edge devices is severely constrained by the high memory energy consumption of hardware ConvNet implementations. Leveraging the error resilience of ConvNets by accepting bit errors at reduced voltages presents a viable option for energy savings, but few implementations utilize this due to the limited quantitative understanding of how bit errors affect performance. This paper demonstrates the efficacy of SRAM voltage scaling in a 9-layer CIFAR-10 binarized ConvNet processor, achieving memory energy savings of 3.12\u00d7 with minimal accuracy degradation (\u223c99% of nominal). Additionally, we quantify the effect of bit error accumulation in a multi-layer network and show that further energy savings are possible by splitting weight and activation voltages. Finally, we compare the measured error rates for the CIFAR-10 binarized ConvNet against MNIST networks to demonstrate the difference in bit error requirements across varying complexity in network topologies and classification tasks."} {"_id": "2aa5aaddbb367e477ba3bed67ce780f30e055279", "title": "A Simple Model for Intrinsic Image Decomposition with Depth Cues", "text": "We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images."} {"_id": "504054b182fc4d028c430e74a51b2d6ac2c43f64", "title": "Indirectly Encoding Neural Plasticity as a Pattern of Local Rules", "text": "Biological brains can adapt and learn from past experience. In neuroevolution, i.e. evolving artificial neural networks (ANNs), one way that agents controlled by ANNs can evolve the ability to adapt is by encoding local learning rules. However, a significant problem with most such approaches is that local learning rules for every connection in the network must be discovered separately. This paper aims to show that learning rules can be effectively indirectly encoded by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method. 
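A sketch of the generalized Hebbian plasticity rule that adaptive HyperNEAT-style methods indirectly encode. In the real method the (A, B, C, D) parameters would be produced by a CPPN queried with the connection's geometric coordinates, so the pattern of learning rules is a function of network geometry; the fixed numbers below are purely illustrative:

```python
def hebbian_abcd_update(w, pre, post, params, lr=0.1):
    """Generalized Hebbian ABCD weight update (illustrative sketch)."""
    A, B, C, D = params
    return w + lr * (A * pre * post + B * pre + C * post + D)

w = 0.2
for _ in range(3):  # correlated pre/post activity strengthens the connection
    w = hebbian_abcd_update(w, pre=1.0, post=1.0, params=(1.0, 0.0, 0.0, 0.0))
print(w)            # 0.5 after three purely Hebbian steps
```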
Adaptive HyperNEAT is introduced to allow not only patterns of weights across the connectivity of an ANN to be generated by a function of its geometry, but also patterns of arbitrary learning rules. Several such adaptive models with different levels of generality are explored and compared. The long-term promise of the new approach is to evolve large-scale adaptive ANNs, which is a major goal for neuroevolution."} {"_id": "fa92af436a7d04fcccc025fdedde4039d19df42f", "title": "Breast cancer intrinsic subtype classification, clinical use and future trends.", "text": "Breast cancer is composed of multiple subtypes with distinct morphologies and clinical implications. The advent of microarrays has led to a new paradigm in deciphering breast cancer heterogeneity, based on which the intrinsic subtyping system using prognostic multigene classifiers was developed. Subtypes identified using different gene panels, though they overlap to a great extent, do not completely converge, and the availability of new information and perspectives has led to the emergence of novel subtypes, which complicates our understanding of breast tumor heterogeneity. This review explores and summarizes the existing intrinsic subtypes, patient clinical features and management, commercial signature panels, as well as various information used for tumor classification. Finally, two trends in breast cancer subtyping are pointed out, i.e., either diverging to more refined groups or converging to the major subtypes. This review improves our understanding of intrinsic breast cancer classification, the current status of clinical application, and future trends."} {"_id": "fc1f88e48ab29a1fa21f1e7d73f47c270353de59", "title": "A robust 2D point-sequence curve offset algorithm with multiple islands for contour-parallel tool path", "text": "An offset algorithm is important to the contour-parallel tool path generation process. Usually, it is necessary to offset with islands. In this paper a new offset algorithm for a 2D point-sequence curve (PS-curve) with multiple islands is presented. The algorithm consists of three sub-processes: the island bridging process, raw offset curve generation, and global invalid loop removal. The input of the algorithm is a set of PS-curves, in which one of them is the outer profile and the others are islands. The bridging process bridges all the islands to the outer profile with the Delaunay triangulation method, forming a single linked PS-curve. Since local problems are caused by intersections of adjacent bisectors, the concept of a stuck circle is proposed. Based on the stuck circle, local problems are fixed by updating the original profile with the proposed basic rule and append rule, so that a raw offset curve can be generated. The last process first reports all the self-intersections on the raw offset PS-curve, and then a procedure called tree analysis puts all the self-intersections into a tree. All the points between nodes at even depth and their immediate children are collected using the collecting rule. The collected points form the valid loops, which is the output of the proposed algorithm. Each sub-process can be completed in near-linear time, so the whole algorithm has near-linear time complexity, as demonstrated by the examples tested in the paper.
"} {"_id": "27266a1dd3854e4effe41b9a3c0e569d33004a33", "title": "Discovering facts with boolean tensor tucker decomposition", "text": "Open Information Extraction (Open IE) has gained increasing research interest in recent years. The first step in Open IE is to extract raw subject--predicate--object triples from the data. These raw triples are rarely usable per se, and need additional post-processing. To that end, we proposed the use of Boolean Tucker tensor decomposition to simultaneously find the entity and relation synonyms and the facts connecting them from the raw triples. Our method represents the synonym sets and facts using (sparse) binary matrices and tensors that can be efficiently stored and manipulated. We consider the presentation of the problem as a Boolean tensor decomposition as one of this paper's main contributions. To study the validity of this approach, we use a recent algorithm for scalable Boolean Tucker decomposition. We validate the results with empirical evaluation on a new semi-synthetic data set, generated to faithfully reproduce real-world data features, as well as with real-world data from an existing Open IE extractor. We show that our method obtains high precision while the low recall can easily be remedied by considering the original data together with the decomposition."} {"_id": "7dd434b3799a6c8c346a1d7ee77d37980a4ef5b9", "title": "Syntax-Directed Variational Autoencoder for Structured Data", "text": "Deep generative models have been enjoying success in modeling continuous data. However it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data still remains largely an open problem. Inspired by the theory of compilers, where the syntax and semantics check is done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Compared to state-of-the-art methods, our approach enforces constraints on the output space so that the output will be not only syntactically valid, but also semantically reasonable. We evaluate the proposed model with applications in programming language and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness in incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches."} {"_id": "f43c1aee5382bb6fe9c92c54767218d954016e0c", "title": "Color texture classification by integrative Co-occurrence matrices", "text": "Integrative Co-occurrence matrices are introduced as novel features for color texture classification. The extended Co-occurrence notation allows the comparison between integrative and parallel color texture concepts. The information profit of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classification experiments on two datasets. Applying them to the RGB and the LUV color spaces, the combined color and intensity textures are studied and the existence of intensity-independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis.
The novel features improve the classification results by up to 20% and 32% over the first and second baselines, respectively."} {"_id": "eb596f7a723c955387ef577ba6bf8817cd3ebdc1", "title": "Design of hybrid resistive-capacitive DAC for SAR A/D converters", "text": "While the hybrid capacitive-resistive D/A converter (DAC) has been known for many years, its potential for energy-efficient operation is sometimes overlooked. This paper investigates the utilization of hybrid DACs in successive-approximation register A/D converters. To improve the energy efficiency of SAR ADCs, a new hybrid DAC is introduced. In an exemplar 10-bit 100-MS/s ADC, simulation results show that the energy efficiency and chip area (of passive devices) can be improved by more than an order of magnitude."} {"_id": "4c790c71219f6be248a3d426347bf7c4e3a0a6c4", "title": "The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment.", "text": "OBJECTIVES\nTo develop a 10-minute cognitive screening tool (Montreal Cognitive Assessment, MoCA) to assist first-line physicians in detection of mild cognitive impairment (MCI), a clinical state that often progresses to dementia.\n\n\nDESIGN\nValidation study.\n\n\nSETTING\nA community clinic and an academic center.\n\n\nPARTICIPANTS\nNinety-four patients meeting MCI clinical criteria supported by psychometric measures, 93 patients with mild Alzheimer's disease (AD) (Mini-Mental State Examination (MMSE) score > or =17), and 90 healthy elderly controls (NC).\n\n\nMEASUREMENTS\nThe MoCA and MMSE were administered to all participants, and sensitivity and specificity of both measures were assessed for detection of MCI and mild AD.\n\n\nRESULTS\nUsing a cutoff score of 26, the MMSE had a sensitivity of 18% to detect MCI, whereas the MoCA detected 90% of MCI subjects. In the mild AD group, the MMSE had a sensitivity of 78%, whereas the MoCA detected 100%. Specificity was excellent for both MMSE and MoCA (100% and 87%, respectively).\n\n\nCONCLUSION\nMCI as an entity is evolving and somewhat controversial. The MoCA is a brief cognitive screening tool with high sensitivity and specificity for detecting MCI as currently conceptualized in patients performing in the normal range on the MMSE."} {"_id": "92bb6791f0e8bc2dd0874f09930b6a7ff990827d", "title": "Wind-Aware Emergency Landing Assistant Based on Dubins Curves", "text": "A total engine failure poses a major threat to passengers as well as the aircraft and requires a fast decision by the pilot. We develop an assistant system to support the pilot in this decision process. An aircraft is able to glide a certain distance without thrust power by converting the potential energy of the altitude into distance. The objective of our work is to calculate an approach route which allows the aircraft to reach a suitable landing field at an appropriate altitude. This is a non-trivial problem because of many free parameters like wind direction and wind velocity. Our solution computes an approach route with two tangents and two co-rotating circular segments. In this way, valid approach routes can be calculated for many general cases. The method has a constant complexity and can dispense with iterative approaches. The route is calculated entirely in the wind frame which avoids complex calculations with trochoids.
The central idea is to take the wind into account by moving the target towards the wind direction."} {"_id": "72663b8c32a4ee99da679366868ca4de863c3ba4", "title": "Dynamic Cascades for Face Detection", "text": "In this paper, we propose a novel method, called \"dynamic cascade\", for training an efficient face detector on massive data sets. There are three key contributions. The first is a new cascade algorithm called \"dynamic cascade\", which can train cascade classifiers on massive data sets and only requires a small number of training parameters. The second is the introduction of a new kind of weak classifier, called \"Bayesian stump\", for training boost classifiers. It produces more stable boost classifiers with fewer features. Moreover, we propose a strategy for using our dynamic cascade algorithm with multiple sets of features to further improve the detection performance without significant increase in the detector's computational cost. Experimental results show that all the new techniques effectively improve the detection performance. Finally, we provide the first large standard data set for face detection, so that future research on the topic can be compared on the same training and testing set."} {"_id": "43dcd8b78857cbfe58ae684c44dd57c8c72368c3", "title": "Providing Adaptive Courses in Learning Management Systems with Respect to Learning Styles", "text": "Learning management systems (LMS) are commonly used in e-learning but provide little, or in most cases, no adaptivity. However, courses which adapt to the individual needs of students make learning easier for them and lead to a positive effect in learning. In this paper, we introduce a concept for providing adaptivity based on learning styles in LMS. In order to show the effectiveness of our approach, Moodle was extended by an add-on and an experiment with 437 students was performed. From the analysis of the students\u2019 performance and behaviour in the course, we found out that students who learned from a course that matched their learning styles spent significantly less time in the course and achieved on average the same marks as students who got a course that either mismatched their learning styles or included all available learning objects. Therefore, providing adaptive courses in LMS according to the proposed concept can be seen as effective in supporting students in learning."} {"_id": "34001fa75ba229639dc251fb1714a6bc2dfb76b3", "title": "A Tutorial on Logistic Regression", "text": "Many procedures in SAS/STAT\u00ae can be used to perform logistic regression analysis: CATMOD, GENMOD, LOGISTIC, and PROBIT. Each procedure has special features that make it useful for certain applications. For most applications, PROC LOGISTIC is the preferred choice. It fits binary response or proportional odds models, provides various model-selection methods to identify important prognostic variables from a large number of candidate variables, and computes regression diagnostic statistics. This tutorial discusses some of the problems users encountered when they used the LOGISTIC procedure."} {"_id": "4f58366300e6031ece7b770d3cc7ecdd019ca440", "title": "Computational intelligence techniques for HVAC systems: A review", "text": "Buildings are responsible for 40% of global energy use and contribute towards 30% of the total CO2 emissions.
The drive to reduce energy use and associated greenhouse gas emissions from buildings has acted as a catalyst in the development of advanced computational methods for energy-efficient design, management and control of buildings and systems. Heating, ventilation and air-conditioning (HVAC) systems are the major source of energy consumption in buildings and ideal candidates for substantial reductions in energy demand. Significant advances have been made in the past decades on the application of computational intelligence (CI) techniques for HVAC design, control, management, optimization, and fault detection and diagnosis. This article presents a comprehensive and critical review on the theory and applications of CI techniques for prediction, optimization, control and diagnosis of HVAC systems. The analysis of trends reveals that the minimisation of energy consumption was the key optimization objective in the reviewed research, closely followed by the optimization of thermal comfort, indoor air quality and occupant preferences. Hardcoded Matlab programs were the most widely used simulation tools, followed by TRNSYS, EnergyPlus, DOE-2, HVACSim+ and ESP-r. Metaheuristic algorithms were the preferred CI method for solving HVAC-related problems and in particular genetic algorithms were applied in most of the studies. Despite the low number of studies focussing on multi-agent systems (MAS), as compared to the other CI techniques, interest in the technique is increasing due to their ability to divide and conquer an HVAC optimization problem with enhanced overall performance. The paper also identifies prospective future advancements and research directions."} {"_id": "6c4917342e7c81c09b28a4cd4e7575b4e9b176bf", "title": "Parallelization of DC Analysis through Multiport Decomposition", "text": "Physical problems offer scope for macro-level parallelization of solution by their essential structure. For parallelization of electrical network simulation, the most natural structure-based method is that of multiport decomposition. In this paper this method is used for the simulation of electrical networks consisting of resistances, voltage and current sources using a distributed cluster of weakly coupled processors. At the two levels in which equations are solved in this method the authors have used sparse LU for both levels in the first scheme and sparse LU in the inner level and conjugate gradient in the outer level in the second scheme. Results are presented for planar networks, for the cases where the numbers of slave processors are 1 and 2, and for circuit sizes up to 8.2 million nodes and 16.4 million edges using 8 slave processors. The authors use a cluster of Pentium IV processors linked through a 10/100 Mbps Ethernet switch."} {"_id": "2d6d67ec52505e890f777900e9de4e5fa827ebd7", "title": "Online Learning with Predictable Sequences", "text": "We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically, if the sequence encountered by the learner is described well by a known \u201cpredictable process\u201d, the algorithms presented enjoy tighter bounds as compared to the typical worst case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding prior knowledge about the sequence within the paradigm of online learning. The setting is shown to encompass partial and side information.
Variance and path-length bounds [11, 9] can be seen as particular examples of online learning with simple predictable sequences. We further extend our methods and results to include competing with a set of possible predictable processes (models), that is, \u201clearning\u201d the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback. Our results suggest a promising direction of further research with potential applications to stock market and time series prediction."} {"_id": "3eb54c38421009eac93c667b303afdd1e5544fce", "title": "d-Separation: From Theorems to Algorithms", "text": "An efficient algorithm is developed that identifies all independencies implied by the topology of a Bayesian network. Its correctness and maximality stem from the soundness and completeness of d-separation with respect to probability theory. The algorithm runs in time O(|E|), where |E| is the number of edges in the network."} {"_id": "70666e2ecb5fd35af9adeccae0e2ef765cf149fe", "title": "Histogram-based image hashing scheme robust against geometric deformations", "text": "In this paper, we propose a robust image hash algorithm by using the invariance of the image histogram shape to geometric deformations. Robustness and uniqueness of the proposed hash function are investigated in detail by representing the histogram shape as the relative relations in the number of pixels among groups of two different bins. It is found from extensive testing that the histogram-based hash function performs satisfactorily under various geometric deformations, and is also robust to most common signal processing operations thanks to the use of a Gaussian kernel low-pass filter in the preprocessing phase."} {"_id": "4daa0a41e8d3049cc40fb9804f22318d5302abc1", "title": "The Ever-Changing Social Perception of Autism Spectrum Disorders in the United States", "text": "This paper aims to examine the comprehensive social perception of autism spectrum disorders (ASDs) within the United States today. In order to study the broad public view of those with ASDs, this study investigates the evolution of the syndrome in both sociological and scientific realms. By drawing on the scientific progression of the syndrome and the mixture of this research with concurrent social issues and media representations, this study infers why such a significant amount of stigmatization has become attached to those with ASDs and how these stigmatizations have varied throughout history. After studying this evolving social perception of ASDs in the United States, the writer details suggestions for the betterment of this awareness, including boosted and specified research efforts, increased collaboration within those experts in autism, and positive visibility of those with ASDs and their families. Overall, the writer suggests that public awareness has increased and thus negative stigmatization has decreased in recent years; however, there remains much to be done to increase general social understanding of ASDs. \u201cAutism is about having a pure heart and being very sensitive... It is about finding a way to survive in an overwhelming, confusing world...
It is about developing differently, in a different pace and with different leaps.\u201d -Trisha Van Berkel The identification of autism, in both sociological and scientific terms, has experienced a drastic evolution since its original definition in the early 20th century. From its original designation by Leo Kanner (1943), public understanding of autism spectrum disorders (ASDs) has been shrouded in mystery and misperception. The basic core features of all ASDs include problems with basic socialization and communication, strange intonation and facial expressions, and intense preoccupations or repetitive behaviors; however, one important aspect of what makes autism so complex is the wide variation in expression of the disorder (Lord, 2011). When comparing individuals with the same autism diagnosis, one will undoubtedly encounter many different personalities, strengths and weaknesses. This wide variability between individuals diagnosed with autism, along with the lack of basic understanding of the general public, accounts for a significant amount of social stigma in our society today. Social stigma stemming from this lack of knowledge has been reported in varying degrees since the original formation of the diagnosis. Studies conducted over the past two centuries have shown perceived negative stigma from the view of both the autistic individual and the family or caretakers"} {"_id": "87a1273dea59e3748372c5ea69488d70e9125046", "title": "Finding and tracking road lanes using \"line-snakes\"", "text": "This paper presents a method for finding and tracking road lanes. The method extracts and tracks lane boundaries for vision-guided vehicle navigation by combining the Hough transform and the \u201cactive line model (ALM)\u201d. The Hough transform can extract vanishing points of the road, which can be used as a good estimation of the vehicle heading. For a curved road, however, the estimation may be too crude to be used for navigation. Therefore, the Hough transform is used to obtain an initial position estimation of the lane boundaries on the road. The line snake ALM then improves the initial approximation to an accurate configuration of the lane boundaries. Once the line snake is initialized in the first image, it tracks the road lanes using the external and internal forces computed from images and a proposed boundary refinement technique. Specifically, an image region is divided into a few subregions along the vertical direction. The Hough transform is then performed for each sub-region and candidate lines of road lanes in each sub-region are extracted. Among the candidate lines, the most prominent line is found by the ALM that minimizes a defined snake energy. The external energy of the ALM is a normalized sum of image gradients along the line. The internal deformation energy ensures the continuity of two neighboring lines by using the angle difference between two adjacent lines and the distance between the two lines. A search method based on dynamic programming reduces the computational cost. The proposed method gives a unified framework for detecting, refining and tracking the road lane. Experimental results using images of a real road scene are presented."} {"_id": "8045f9b48d3e848861620f86a4f7add0ad919556", "title": "A Book Dewarping System by Boundary-Based 3D Surface Reconstruction", "text": "Non-contact imaging devices such as digital cameras and overhead scanners can convert hardcopy books to digital images without cutting them to individual pages.
However, the captured images have distinct distortions. A book dewarping system is proposed to remove the perspective and geometric distortions automatically from single images. A book boundary model is extracted, and a 3D book surface is reconstructed. The horizontal and vertical metrics of each column are then restored from it. Experimental results show good dewarping and speed performance. Since no additional equipment is needed and there are no restrictions to specific book layouts or contents, the proposed system is very practical in real applications."} {"_id": "e7d3bd1df77c30b8db6a9a9c83692aa54d21e12a", "title": "Advantages of thesaurus representation using the Simple Knowledge Organization System (SKOS) compared with proposed alternatives", "text": "The concept of thesaurus has evolved from a list of conceptually interrelated words to today's controlled vocabularies, where terms form complex structures through semantic relationships. This term comes from the Latin and has in turn been derived from the Greek \"\u03b8\u03b7\u03c3\u03b1\u03c5\u03c1\u03cc\u03c2\", which means treasury according to the Spanish Royal Academy, in whose dictionary it is also defined as: 'name given by its authors to certain dictionaries, catalogues and anthologies'. The increase in scientific communication and productivity made it essential to develop keyword indexing systems. At that time, Howerton spoke of controlled lists to refer to concepts that were heuristically or intuitively related. According to Roberts (1984), Mooers was the first to relate thesauri to information retrieval systems; Taube established the foundations of post-coordination, while Luhn dealt, at a basic level, with the creation of thesauri using automatic techniques. Brownson (1957) was the first to use the term to refer to the issue of translating concepts and their relationships expressed in documents into a more precise language free of ambiguities in order to facilitate information retrieval. The ASTIA Thesaurus was published in the early 1960s (Curr\u00e1s, 2005), already bearing the characteristics of today's thesauri and taking on the need for a tool to administer a controlled vocabulary in terms of indexing, thereby giving rise to the concept of documentary language."} {"_id": "8c1351ff77f34a5a6e11be81a995e3faf862931b", "title": "Branching embedding: A heuristic dimensionality reduction algorithm based on hierarchical clustering", "text": "This paper proposes a new dimensionality reduction algorithm named branching embedding (BE). It converts a dendrogram to a two-dimensional scatter plot, and visualizes the inherent structures of the original high-dimensional data. Since the conversion part is not computationally demanding, the BE algorithm would be beneficial for the case where hierarchical clustering is already performed. Numerical experiments revealed that the outputs of the algorithm moderately preserve the original hierarchical structures."} {"_id": "1a68d1fb23fd11649c111a3c374f2c3d3ef4c09e", "title": "VizTree: a Tool for Visually Mining and Monitoring Massive Time Series Databases", "text": "Moments before the launch of every space vehicle, engineering discipline specialists must make a critical go/no-go decision. The cost of a false positive, allowing a launch in spite of a fault, or a false negative, stopping a potentially successful launch, can be measured in the tens of millions of dollars, not including the cost in morale and other more intangible detriments.
The Aerospace Corporation is responsible for providing engineering assessments critical to the go/no-go decision for every Department of Defense (DoD) launch vehicle. These assessments are made by constantly monitoring streaming telemetry data in the hours before launch. For this demonstration, we will introduce VizTree, a novel time-series visualization tool to aid the Aerospace analysts who must make these engineering assessments. VizTree was developed at the University of California, Riverside and is unique in that the same tool is used for mining archival data and monitoring incoming live telemetry. Unlike other time series visualization tools, VizTree can scale to very large databases, giving it the potential to be a generally useful data mining and database tool."} {"_id": "f1aa9bbf85aa1a0b3255b38032c08781f491f4d3", "title": "0.18um BCD technology with best-in-class LDMOS from 6 V to 45 V", "text": "We propose a novel nLDMOS structure and a design concept in BCD technology with best-in-class performance. The drift profile is optimized and a multi-oxide in the drift region is adopted to approach the RESURF limit of the on-resistance vs. BVdss characteristic (i.e., the 36V DMOS has a Ron_sp of 20m\u03a9-mm2 with a BVdss of 50V; the 45V DMOS has a Ron_sp of 28m\u03a9-mm2 with a BVdss of 65V). Moreover, this modification requires merely three extra masks in the Non-Epitaxy LV process to achieve this improvement. Therefore it is not only a high-performance but also a low-cost solution."} {"_id": "3f203c85493bce692da54648ef3f015db956a924", "title": "0.35\u03bcm, 30V fully isolated and low-Ron nLDMOS for DC-DC applications", "text": "In this paper, we present a new approach to integrate, in a 0.35 \u03bcm BCD technology, low-Ron LDMOS power transistors with a highly competitive Specific Resistance figure of merit (Rsp, defined as Ron*Area). The LDMOS are fully isolated in order to support applications which may bias the source/drain electrodes below the substrate potential, which is critical for devices used in high-current, high-frequency switching applications. The new devices are suitable for high-efficiency DC-DC converter products with operating voltage of up to 30V, such as mobile PMICs. For maximum performance, two different extended-drain LDMOS structures have been developed to cover the entire operating voltage range: for 16V and below, a planar-gate structure is used, and for 20V and above, a non-planar \u201coffset-LOCOS\u201d gate is used."} {"_id": "ad3324136e70d546e2744bda4e2b14e6b5311c55", "title": "Ultra-low on-resistance LDMOS implementation in 0.13\u00b5m CD and BiCD process technologies for analog power IC's", "text": "Toshiba's 5th generation BiCD/CD-0.13 is a new process platform for analog power applications based on 0.13\u00b5m CMOS technology. The process platform has six varieties of rated voltage: 5V, 6V, 18V, 25V, 40V, and 60V. The 5 to 18V CD-0.13 process uses a P-type silicon substrate. The 25 to 60V BiCD-0.13 process uses an N-Epi wafer with an N+/P+ buried layer on a P-type silicon substrate. Each LDMOS records ultra-low on-resistance compared with that of previous papers, and we will realize the highest-performance analog power IC's using this technology."} {"_id": "d0a5e8fd8ca4537e6c0b76e90ae32f86de3cb0fc", "title": "Advanced 0.13um smart power technology from 7V to 70V", "text": "This paper presents a BCD process integrating 7V to 70V power devices on a 0.13um CMOS platform for various power management applications.
BJTs, Zener diodes and Schottky diodes are available, and non-volatile memory is embedded as well. The LDMOS shows best-in-class specific Ron (RSP) vs. BVDSS characteristics (i.e., the 70V NMOS has an RSP of 69m\u03a9-mm2 with a BVDSS of 89V). A modular process scheme is used for flexibility to meet the various requirements of applications."} {"_id": "dd73a3d6682f3e1e70fd7b4d04971128fad5f27b", "title": "BCD8sP: An advanced 0.16 \u03bcm technology platform with state of the art power devices", "text": "An advanced 0.16 \u03bcm BCD technology platform offering dense logic transistors (1.8 V-5 V CMOS) and high-performance analog features has been developed. Thanks to dedicated field plate optimization, body and drain engineering, state-of-the-art power devices (8 V to 42 V rated) have been obtained ensuring large Safe Operating Areas with the best RON\u00d7AREA vs. BVDSS tradeoff."} {"_id": "b3e04836a8f1a1efda32d15296ff9435ab8afd86", "title": "HeNet: A Deep Learning Approach on Intel$^\\circledR$ Processor Trace for Effective Exploit Detection", "text": "This paper presents HeNet, a hierarchical ensemble neural network, applied to classify hardware-generated control flow traces for malware detection. Deep learning-based malware detection has so far focused on analyzing executable files and runtime API calls. Static code analysis approaches face challenges due to obfuscated code and adversarial perturbations. Behavioral data collected during execution is more difficult to obfuscate but recent research has shown successful attacks against API call based malware classifiers. We investigate control flow based characterization of a program execution to build robust deep learning malware classifiers. HeNet consists of a low-level behavior model and a top-level ensemble model. The low-level model is a per-application behavior model, trained via transfer learning on a time-series of images generated from the control flow trace of an execution. We use an Intel\u00ae Processor Trace enabled processor for low overhead execution tracing and design a lightweight image conversion and segmentation of the control flow trace. The top-level ensemble model aggregates the behavior classification of all the trace segments and detects an attack. The use of hardware trace adds portability to our system and the use of deep learning eliminates the manual effort of feature engineering. We evaluate HeNet against real-world exploitations of PDF readers. HeNet achieves 100% accuracy and 0% false positives on the test set, and higher classification accuracy compared to classical machine learning algorithms."} {"_id": "a8540ff90bc1cf9eb54a2ba1ad4125e726d1980d", "title": "Soft-Gated Warping-GAN for Pose-Guided Person Image Synthesis", "text": "Despite remarkable advances in image synthesis research, existing works often fail in manipulating images under the context of large geometric transformations. Synthesizing person images conditioned on arbitrary poses is one of the most representative examples where the generation quality largely relies on the capability of identifying and modeling arbitrary transformations on different body parts. Current generative models are often built on local convolutions and overlook the key challenges (e.g. heavy occlusions, different views or dramatic appearance changes) when distinct geometric changes happen for each part, caused by arbitrary pose manipulations.
This paper aims to resolve these challenges induced by geometric variability and spatial displacements via a new Soft-Gated Warping Generative Adversarial Network (Warping-GAN), which is composed of two stages: 1) it first synthesizes a target part segmentation map given a target pose, which depicts the region-level spatial layouts for guiding image synthesis with higher-level structure constraints; 2) the Warping-GAN equipped with a soft-gated warping-block learns feature-level mapping to render textures from the original image into the generated segmentation map. Warping-GAN is capable of controlling different transformation degrees given distinct target poses. Moreover, the proposed warping-block is lightweight and flexible enough to be injected into any network. Human perceptual studies and quantitative evaluations demonstrate the superiority of our Warping-GAN, which significantly outperforms all existing methods on two large datasets."} {"_id": "19c953d98323e6f56a26e9d4b6b46f5809793fe4", "title": "Vehicle Guidance for an Autonomous Vehicle", "text": "This paper describes the vehicle guidance system of an autonomous vehicle. This system is part of the control structure of the vehicle and consists of a path generator, a motion planning algorithm and a sensor fusion module. A stereo vision sensor and a DGPS sensor are used as position sensors. The trajectory for the vehicle motion is generated in a first step by using only information from a digital map. Object-detecting sensors such as the stereo vision sensor, three laser scanners and a radar sensor observe the vehicle environment and report detected objects to the sensor fusion module. This information is used to dynamically update the planned vehicle trajectory into the final vehicle motion."} {"_id": "43a050a0279c86baf842371c73b68b674061b579", "title": "A Tutorial on UAVs for Wireless Networks: Applications, Challenges, and Open Problems", "text": "The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing in a wide range of wireless networking applications. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. For instance, UAVs can be deployed to complement existing cellular systems by providing additional capacity to hotspot areas as well as to provide network coverage in emergency and public safety situations. On the other hand, UAVs can operate as flying mobile terminals within the cellular networks. Such cellular-connected UAVs can enable a wide range of key applications expanding from real-time video streaming to item delivery. Despite the several benefits and practical applications of using UAVs as aerial wireless devices, one must address many technical challenges. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as three-dimensional (3D) deployment, performance analysis, air-to-ground channel modeling, and energy efficiency are explored along with representative results.
Then, fundamental open problems and potential research directions pertaining to wireless communications and networking with UAVs are introduced. To cope with the open research problems, various analytical frameworks and mathematical tools such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems."} {"_id": "c139770aba13d62bf2f5a3a579da98a28148fb09", "title": "Data Change Exploration Using Time Series Clustering", "text": "Analysis of static data is one of the best studied research areas. However, data changes over time. These changes may reveal patterns or groups of similar values, properties, and entities. We study changes in large, publicly available data repositories by modelling them as time series and clustering these series by their similarity. In order to perform change exploration on real-world data we use the publicly available revision data of Wikipedia Infoboxes and weekly snapshots of IMDB. The changes to the data are captured as events, which we call change records. In order to extract temporal behavior we count changes in time periods and propose a general transformation framework that aggregates groups of changes to numerical time series of different resolutions. We use these time series to study different application scenarios of unsupervised clustering. Our explorative results show that changes made to collaboratively edited data sources can help find characteristic behavior, distinguish entities or properties and provide insight into the respective domains."} {"_id": "2f4bdb8f54ae1a7666c7be8e6b461e3c717a20cf", "title": "The survival of direct composite restorations in the management of severe tooth wear including attrition and erosion: A prospective 8-year study.", "text": "OBJECTIVES\nSurvival of directly placed composite to restore worn teeth has been reported in studies with small sample sizes, short observation periods and different materials. This study aimed to estimate survival for a hybrid composite placed by one clinician up to 8-years follow-up.\n\n\nMETHODS\nAll patients were referred and recruited for a prospective observational cohort study. One composite was used: Spectrum(\u00ae) (DentsplyDeTrey). Most restorations were placed on the maxillary anterior teeth using a Dahl approach.\n\n\nRESULTS\nA total of 1010 direct composites were placed in 164 patients. Mean follow-up time was 33.8 months (s.d. 27.7). 71 of 1010 restorations failed during follow-up. The estimated failure rate in the first year was 5.4% (95% CI 3.7-7.0%). Time to failure was significantly greater in older subjects (p=0.005) and when a lack of posterior support was present (p=0.003). Bruxism and an increase in the occlusal vertical dimension were not associated with failure. The proportion of failures was greater in patients with a Class 3 or edge-to-edge incisal relationship than in Class 1 and Class 2 cases but this was not statistically significant. 
More failures occurred in the lower arch (9.6%) compared to the upper arch (6%) with the largest number of composites having been placed on the maxillary incisors (n=519).\n\n\nCONCLUSION\nThe worn dentition presents a restorative challenge but composite is an appropriate restorative material.\n\n\nCLINICAL SIGNIFICANCE\nThis study shows that posterior occlusal support is necessary to optimise survival."} {"_id": "b1ad2a08dc6b9c598f2ac101b22c2de8cd23e47f", "title": "On the dimensionality of organizational justice: a construct validation of a measure.", "text": "This study explores the dimensionality of organizational justice and provides evidence of construct validity for a new justice measure. Items for this measure were generated by strictly following the seminal works in the justice literature. The measure was then validated in 2 separate studies. Study 1 occurred in a university setting, and Study 2 occurred in a field setting using employees in an automobile parts manufacturing company. Confirmatory factor analyses supported a 4-factor structure to the measure, with distributive, procedural, interpersonal, and informational justice as distinct dimensions. This solution fit the data significantly better than a 2- or 3-factor solution using larger interactional or procedural dimensions. Structural equation modeling also demonstrated predictive validity for the justice dimensions on important outcomes, including leader evaluation, rule compliance, commitment, and helping behavior."} {"_id": "b0fa515948682246559cebd1190cec39e87334c0", "title": "Music Synthesis with Reconstructive Phrase Modeling", "text": "This article describes a new synthesis technology called reconstructive phrase modeling (RPM). A goal of RPM is to combine the realistic sound quality of sampling with the performance interaction of functional synthesis. Great importance is placed on capturing the dynamics of note transitions: slurs, legato, bow changes, etc. Expressive results are achieved with conventional keyboard controllers. Mastery of special performance techniques is not needed. RPM is an analysis-synthesis system that is related to two important trends in computer music research. The first is a form of additive synthesis in which sounds are represented as a sum of time-varying harmonics plus noise elements. RPM creates expressive performances by searching a database of idiomatic instrumental phrases and combining modified fragments of these phrases to form a new expressive performance. This approach is related to another research trend called concatenative synthesis"} {"_id": "2b550251323d541dd5d3f72ab68073e05cd485c5", "title": "SAR image formation toolbox for MATLAB", "text": "While many synthetic aperture radar (SAR) image formation techniques exist, two of the most intuitive methods for implementation by SAR novices are the matched filter and backprojection algorithms. The matched filter and (non-optimized) backprojection algorithms are undeniably computationally complex. However, the backprojection algorithm may be successfully employed for many SAR research endeavors not involving considerably large data sets and not requiring time-critical image formation. Execution of both image reconstruction algorithms in MATLAB is explicitly addressed. In particular, a manipulation of the backprojection imaging equations is supplied to show how common MATLAB functions, ifft and interp1, may be used for straightforward SAR image formation.
In addition, limits for scene size and pixel spacing are derived to aid in the selection of an appropriate imaging grid to avoid aliasing. Example SAR images generated through use of the backprojection algorithm are provided given four publicly available SAR datasets. Finally, MATLAB code for SAR image reconstruction using the matched filter and backprojection algorithms is provided."} {"_id": "0392a3743157a87d0bdfe53dffa416a9a6b93dcc", "title": "Mitigating Sybils in Federated Learning Poisoning", "text": "Machine learning (ML) over distributed multiparty data is required for a variety of domains. Existing approaches, such as federated learning, collect the outputs computed by a group of devices at a central aggregator and run iterative algorithms to train a globally shared model. Unfortunately, such approaches are susceptible to a variety of attacks, including model poisoning, which is made substantially worse in the presence of sybils. In this paper we first evaluate the vulnerability of federated learning to sybil-based poisoning attacks. We then describe FoolsGold, a novel defense to this problem that identifies poisoning sybils based on the diversity of client updates in the distributed learning process. Unlike prior work, our system does not bound the expected number of attackers, requires no auxiliary information outside of the learning process, and makes fewer assumptions about clients"} {"_id": "786f730302115d19f5ea14c49332bba77bde75c2", "title": "An ensemble classifier with random projection for predicting multi-label protein subcellular localization", "text": "In protein subcellular localization prediction, a predominant scenario is that the number of available features is much larger than the number of data samples. Among the large number of features, many of them may contain redundant or irrelevant information, causing the prediction systems to suffer from overfitting. To address this problem, this paper proposes a dimensionality-reduction method that applies random projection (RP) to construct an ensemble multi-label classifier for predicting protein subcellular localization. Specifically, the frequencies of occurrences of gene-ontology terms are used as feature vectors, which are projected onto lower-dimensional spaces by random projection matrices whose elements conform to a distribution with zero mean and unit variance. The transformed low-dimensional vectors are classified by an ensemble of one-vs-rest multi-label support vector machine (SVM) classifiers, each corresponding to one of the RP matrices. The scores obtained from the ensemble are then fused for making the final decision. Experimental results on two recent datasets suggest that the proposed method can reduce the dimensionality six-fold and remarkably improve the classification performance."} {"_id": "ce83f8c2227ce387d7491f247d97f45e534fac24", "title": "Novel Control Method for Multimodule PV Microinverter With Multiple Functions", "text": "This paper presents a novel control method for a multimodule photovoltaic microinverter (MI). The proposed MI employs a two-stage topology with an active-clamped current-fed push\u2013pull converter cascaded with a full-bridge inverter. This system can operate in grid-connected mode to feed power to the grid with a programmable power factor. This system can also operate in line-interactive mode, i.e., share load power without feeding power to the grid. In the event of grid power failure, the MI can operate in a standalone mode to supply uninterruptible power to the load.
This paper presents a multiloop control scheme with power programmable capability for achieving the above multiple functions. In addition, the proposed control scheme embeds a multimodule parallel capability so that multiple MI modules can be paralleled to enlarge the capacity with autonomous control in all operation modes. Finally, three 250-W MI modules are adopted to demonstrate the effectiveness of the proposed control method in simulations as well as experiments."} {"_id": "fe0e8bd42fe96f67d73756a582d1d394b326eaac", "title": "A refined PWM scheme for voltage and current source converters", "text": "A pulse-width-modulation (PWM) scheme that uses the converter switching frequency to minimize unwanted load current harmonics is described. This results in the reduction of the number of switch commutations per cycle. The method is suitable for high-performance variable-speed AC-motor drives that require high-output switching frequencies for near-silent operation. It is also applicable, without change, to voltage or current-source inverters and to two and four-quadrant three-phase PWM rectifiers. Key predicted results have been verified experimentally on a 5 kVA inverter breadboard."} {"_id": "b2cc59430df4ff20e34c48d122ccb47b45b96f83", "title": "Path Planning of Unmanned Aerial Vehicles using B-Splines and Particle Swarm Optimization", "text": "Military operations are turning to more complex and advanced automation technologies for minimum risk and maximum efficiency. A critical piece of this strategy is unmanned aerial vehicles. Unmanned aerial vehicles require the intelligence to safely maneuver along a path to an intended target, avoiding obstacles such as other aircraft or enemy threats. This paper presents a unique three-dimensional path planning problem formulation and solution approach using particle swarm optimization. The problem formulation was designed with three objectives: 1) minimize risk owing to enemy threats, 2) minimize fuel consumption incurred by deviating from the original path, and 3) fly over defined reconnaissance targets. The initial design point is defined as the original path of the unmanned aerial vehicles. Using particle swarm optimization, alternate paths are generated using B-spline curves, optimized based on the three defined objectives. The resulting paths can be optimized with a preference toward maximum safety, minimum fuel consumption, or target reconnaissance. This method has been implemented in a virtual environment where the generated alternate paths can be visualized interactively to better facilitate the decision-making process. The problem formulation and solution implementation are described along with the results from several simulated scenarios demonstrating the effectiveness of the method."} {"_id": "7a48f36e567b3beda7866a25e37e4a3cc668d987", "title": "Diffusion Component Analysis: Unraveling Functional Topology in Biological Networks", "text": "Complex biological systems have been successfully modeled by biochemical and genetic interaction networks, typically gathered from high-throughput (HTP) data. These networks can be used to infer functional relationships between genes or proteins. Using the intuition that the topological role of a gene in a network relates to its biological function, local or diffusion-based \u201cguilt-by-association\u201d and graph-theoretic methods have had success in inferring gene functions.
Here we seek to improve function prediction by integrating diffusion-based methods with a novel dimensionality reduction technique to overcome the incomplete and noisy nature of network data. In this paper, we introduce diffusion component analysis (DCA), a framework that plugs in a diffusion model and learns a low-dimensional vector representation of each node to encode the topological properties of a network. As a proof of concept, we demonstrate DCA\u2019s substantial improvement over state-of-the-art diffusion-based approaches in predicting protein function from molecular interaction networks. Moreover, our DCA framework can integrate multiple networks from heterogeneous sources, consisting of genomic information, biochemical experiments and other resources, to even further improve function prediction. Yet another layer of performance gain is achieved by integrating the DCA framework with support vector machines that take our node vector representations as features. Overall, our DCA framework provides a novel representation of nodes in a network that can be used as a plug-in architecture to other machine learning algorithms to decipher topological properties of and obtain novel insights into interactomes. This paper was selected for oral presentation at RECOMB 2015 and an abstract is published in the conference proceedings. arXiv:1504.02719v1 [q-bio.MN] 10 Apr 2015"} {"_id": "45f3c92cfe87fa376dfe184cead765ff251b9b30", "title": "Experiments on deep learning for speech denoising", "text": "In this paper we present some experiments using a deep learning model for speech denoising. We propose a very lightweight procedure that can predict clean speech spectra when presented with noisy speech inputs, and we show how various parameter choices impact the quality of the denoised signal. Through our experiments we conclude that such a structure can perform better than some comparable single-channel approaches and that it is able to generalize well across various speakers, noise types and signal-to-noise ratios."} {"_id": "881a4de26b49a08608ae056b7688d64396bd62cd", "title": "Detection of drug-drug interactions through data mining studies using clinical sources, scientific literature and social media", "text": "Drug-drug interactions (DDIs) constitute an important concern in drug development and postmarketing pharmacovigilance. They are considered the cause of many adverse drug effects exposing patients to higher risks and increasing public health system costs. Methods to follow-up and discover possible DDIs causing harm to the population are a primary aim of drug safety researchers. Here, we review different methodologies and recent advances using data mining to detect DDIs with impact on patients. We focus on data mining of different pharmacovigilance sources, such as the US Food and Drug Administration Adverse Event Reporting System and electronic health records from medical institutions, as well as on the diverse data mining studies that use narrative text available in the scientific biomedical literature and social media. We pay attention to the strengths but also further explain challenges related to these methods.
Data mining has important applications in the analysis of DDIs, showing the impact of the interactions as a cause of adverse effects, extracting interactions to create knowledge data sets and gold standards, and in the discovery of novel and dangerous DDIs."} {"_id": "06a267a3b7a696c57d857c3ff1cf0e7711ff0dee", "title": "Fast Min-Sum Algorithms for Decoding of LDPC over GF(q)", "text": "In this paper, we present a fast min-sum algorithm for decoding LDPC codes over GF(q). Our algorithm is different from the one presented by David Declercq and Marc Fossorier (2005) only in the way of speeding up the horizontal scan in the min-sum algorithm. Declercq and Fossorier's algorithm speeds up the computation by reducing the number of configurations, while our algorithm uses dynamic programming instead. Compared with the configuration reduction algorithm, the dynamic programming one is simpler at the design stage because it has fewer parameters to tune. Furthermore, it does not have the performance degradation problem caused by the configuration reduction because it searches the whole configuration space efficiently through dynamic programming. Both algorithms have the same level of complexity and use simple operations which are suitable for hardware implementations"} {"_id": "a6355b13e74d2a11aba4bad75464bf721d7b61d4", "title": "Machine Learning Algorithms for Characterization of EMG Signals", "text": "\u2014In the last decades, researchers of human arm prostheses have been using different types of machine learning algorithms. This review article first gives a brief explanation of the types of machine learning methods. Secondly, some recent applications of myoelectric control of human arm prostheses using machine learning algorithms are compared. This study presents two different comparisons based on feature extraction methods, which are time series modeling and wavelet transform of the EMG signal. Finally, the results of EMG characterization for human arm prostheses are presented and discussed."} {"_id": "892c22460a1ef1da7d10d1cf007ff46c6c080f18", "title": "Generating Haptic Textures with a Vibrotactile Actuator", "text": "Vibrotactile actuation is mainly used to deliver buzzing sensations. But if vibrotactile actuation is tightly coupled to users' actions, it can be used to create much richer haptic experiences. It is not well understood, however, how this coupling should be done or which vibrotactile parameters create which experiences. To investigate how actuation parameters relate to haptic experiences, we built a physical slider with minimal native friction, a vibrotactile actuator and an integrated position sensor. By vibrating the slider as it is moved, we create an experience of texture between the sliding element and its track. We conducted a magnitude estimation experiment to map how granularity, amplitude and timbre relate to the experiences of roughness, adhesiveness, sharpness and bumpiness. We found that amplitude influences the strength of the perceived texture, while variations in granularity and timbre create distinct experiences. Our study underlines the importance of action in haptic perception and suggests strategies for deploying such tightly coupled feedback in everyday devices."} {"_id": "df5c0ae24bdf598a9fe8e85facf476f4903bf8aa", "title": "Characterizing Taxi Flows in New York City", "text": "We present an analysis of taxi flows in Manhattan (NYC) using a variety of data mining approaches.
The methods presented here can aid in the development of representative and accurate models of large-scale traffic flows with applications to many areas, including outlier detection and characterization."} {"_id": "265a32d3e5a55140389df0a0b666ac5c2dfaa0bd", "title": "Curriculum Learning Based on Reward Sparseness for Deep Reinforcement Learning of Task Completion Dialogue Management", "text": "Learning from sparse and delayed reward is a central issue in reinforcement learning. In this paper, to tackle the reward sparseness problem of task-oriented dialogue management, we propose a curriculum-based approach on the number of slots of user goals. This curriculum makes it possible to learn dialogue management for sets of user goals with a large number of slots. We also propose a dialogue policy based on progressive neural networks whose modules with parameters are appended with previous parameters fixed as the curriculum proceeds, and this policy improves performance over the one with a single set of parameters."} {"_id": "3c77afb5f21b4256f289371590fa539e074cc3aa", "title": "A system for understanding imaged infographics and its applications", "text": "Information graphics, or infographics, are visual representations of information, data or knowledge. Understanding of infographics in documents is a relatively new research problem, which becomes more challenging when infographics appear as raster images. This paper describes technical details and practical applications of the system we built for recognizing and understanding imaged infographics located in document pages. To recognize infographics in raster form, both graphical symbol extraction and text recognition need to be performed. The two kinds of information are then auto-associated to capture and store the semantic information carried by the infographics. Two practical applications of the system are introduced in this paper, including supplement to traditional optical character recognition (OCR) system and providing enriched information for question answering (QA). To test the performance of our system, we conducted experiments using a collection of downloaded and scanned infographic images. Another set of scanned document pages from the University of Washington document image database were used to demonstrate how the system output can be used by other applications. The results obtained confirm the practical value of the system."} {"_id": "0c85afa692a692da6e444b1098e59d11f0b07b83", "title": "Coupling design and performance analysis of rim-driven integrated motor propulsor", "text": "The coupling design of a rim-driven integrated motor propulsor for vessels and underwater vehicles is presented in this paper. The main characteristic of the integrated motor propulsor is that the motor is integrated in the duct of the propulsor, so the coupling design of the motor and propulsor is the key to the overall design. Considering the influence of the motor and duct size, the propeller was designed, and the CFD method was used to analyze the hydrodynamic performance of the propulsor. Based on the air-gap magnetic field of the permanent magnet motor and the equivalent magnetic circuit, the integrated motor electromagnetic model was proposed, and the finite element method was used to analyze the motor electromagnetic field.
Finally, the simulation of the integrated motor starting process with the load of the propulsor torque was carried out, and the results meet the design specifications."} {"_id": "cae3bc55809a531a933bf6071550eeb3a2632f55", "title": "Deep keyphrase generation with a convolutional sequence to sequence model", "text": "Keyphrases can provide highly condensed and valuable information that allows users to quickly acquire the main ideas. Most previous studies realize automatic keyphrase extraction by dividing the source text into multiple chunks and then ranking and selecting the most suitable ones. These approaches ignore the deep semantics behind the text and cannot predict keyphrases that do not appear in the source text. A sequence to sequence model that generates keyphrases from the vocabulary could solve the issues above. However, the traditional sequence to sequence model based on a recurrent neural network (RNN) suffers from a low efficiency problem. We propose an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations can be completely parallelized over all elements so as to better exploit the GPU hardware. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention model. Moreover, we incorporate a copying mechanism to handle out-of-vocabulary phrases. In experiments, we evaluate our model on six datasets, and our proposed model is demonstrated to outperform state-of-the-art baseline models consistently and significantly, both on extracting the keyphrases existing in the source text and generating the absent keyphrases based on the semantic meaning of the text."} {"_id": "280f9cc6ee7679d02a7b8b58d08173628057f3ea", "title": "Evolutionary timeline summarization: a balanced optimization framework via iterative substitution", "text": "Classic news summarization plays an important role with the exponential document growth on the Web. Many approaches have been proposed to generate summaries, but they seldom simultaneously consider the evolutionary characteristics of news in addition to traditional summary elements. Therefore, we present a novel framework for the web mining problem named Evolutionary Timeline Summarization (ETS). Given a massive collection of time-stamped web documents related to a general news query, ETS aims to return the evolution trajectory along the timeline, consisting of individual but correlated summaries of each date, emphasizing relevance, coverage, coherence and cross-date diversity. ETS greatly facilitates fast news browsing and knowledge comprehension and hence is a necessity. We formally formulate the task as an optimization problem via iterative substitution from a set of sentences to a subset of sentences that satisfies the above requirements, balancing coherence/diversity measurement and local/global summary quality. The optimized substitution is iteratively conducted by incorporating several constraints until convergence. We develop experimental systems to evaluate on 6 intrinsically different datasets which amount to 10251 documents. Performance comparisons between different system-generated timelines and manually created ones by human editors demonstrate the effectiveness of our proposed framework in terms of ROUGE metrics."} {"_id": "39c230241d51b1435472115aaa8c62b94ab9927d", "title": "Joint Inference for Event Timeline Construction", "text": "This paper addresses the task of constructing a timeline of events mentioned in a given text.
To accomplish that, we present a novel representation of the temporal structure of a news article based on time intervals. We then present an algorithmic approach that jointly optimizes the temporal structure by coupling local classifiers that predict associations and temporal relations between pairs of temporal entities with global constraints. Moreover, we present ways to leverage knowledge provided by event coreference to further improve the system performance. Overall, our experiments show that the joint inference model significantly outperformed the local classifiers, with a 9.2% relative improvement in F1. The experiments also suggest that good event coreference could make a remarkable contribution to a robust event timeline construction system."} {"_id": "6d823b8098ec2b13fb5dcbb02bb55d7030d37d5a", "title": "Automating Temporal Annotation with TARSQI", "text": "We present an overview of TARSQI, a modular system for automatic temporal annotation that adds time expressions, events and temporal relations to news texts."} {"_id": "0229c0eb39efed90db6691469daf0bb7244cf649", "title": "Text classification and named entities for new event detection", "text": "New Event Detection is a challenging task that still offers scope for great improvement after years of effort. In this paper we show how performance on New Event Detection (NED) can be improved by the use of text classification techniques as well as by using named entities in a new way. We explore modifications to the document representation in a vector space-based NED system. We also show that addressing named entities preferentially is useful only in certain situations. A combination of all the above results in a multi-stage NED system that performs much better than baseline single-stage NED systems."} {"_id": "19e8aaa1021f829c8ff0378158d9b69699ea4f83", "title": "Temporal Summaries of News Topics", "text": "We discuss technology to help a person monitor changes in news coverage over time. We define temporal summaries of news stories as extracting a single sentence from each event within a news topic, where the stories are presented one at a time and sentences from a story must be ranked before the next story can be considered. We explain a method for evaluation, and describe an evaluation corpus that we have built. We also propose several methods for constructing temporal summaries and evaluate their effectiveness in comparison to degenerate cases. We show that simple approaches are effective, but that the problem is far from solved."} {"_id": "a75b289951207f627cdba8580a65cf18f9188d62", "title": "A 2.6GHz band 537W peak power GaN HEMT asymmetric Doherty amplifier with 48% drain efficiency at 7dB", "text": "A 537W saturation output power (Psat) asymmetric Doherty amplifier for the 2.6GHz band was successfully developed. The main and peak amplifiers were implemented with GaN HEMTs with Psat of 210W and 320W. The newly developed 320W GaN HEMT consists of a single GaN die, both input and output partial match networks and a compact package. Its output matching network was tuned to inverse class F, and a single-ended 320W GaN HEMT achieved higher than 61.8% drain efficiency from 2.4GHz to 2.7GHz. The 210W and 320W GaN HEMT asymmetric Doherty amplifier exhibited 57.3dBm (537W) Psat and 48% drain efficiency with \u221250.6dBc ACLR at 50.3dBm (107W) average output power using a 4-carrier W-CDMA signal and a commercially available digital pre-distortion system.
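The new event detection record above builds on a vector-space document representation. As a hedged, single-stage sketch of the baseline it improves on (the paper's actual contributions, such as text classification stages and preferential named-entity handling, are omitted), each incoming story can be flagged as a new event when its best cosine match against earlier stories is weak; fitting TF-IDF on the whole corpus and the 0.2 threshold are illustrative simplifications, not values from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def detect_new_events(stories, threshold=0.2):
    """Return one boolean per story: True if it likely starts a new event."""
    X = TfidfVectorizer(stop_words="english").fit_transform(stories)
    flags = [True]                      # the first story always opens an event
    for i in range(1, X.shape[0]):
        best = cosine_similarity(X[i], X[:i]).max()
        flags.append(best < threshold)  # weak best match means a new event
    return flags
```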
These excellent results demonstrate its suitability for 2.6GHz band base stations."} {"_id": "90d0b469521883bf24d673457f080343b97902fb", "title": "A survey of automated material handling systems in 300-mm Semiconductor Fabs", "text": "The fast-paced developments and technological breakthroughs in the semiconductor manufacturing industry elevate the importance of optimum utilization of resources. The newer 300-mm wafer fabs place a high level of emphasis on increasing yield and reducing cycle times. Automated material handling systems are important tools that help us achieve these objectives. In addition, due to the increased weight and size of 300-mm wafers, an automated material handling system is a must for a 300-mm manufacturing facility. This paper discusses various approaches for automated materials handling in semiconductor manufacturing industries."} {"_id": "66baa370deca40f0928554a62cd8b0e4dd5985d3", "title": "Nuclear CDKs Drive Smad Transcriptional Activation and Turnover in BMP and TGF-\u03b2 Pathways", "text": "TGF-beta and BMP receptor kinases activate Smad transcription factors by C-terminal phosphorylation. We have identified a subsequent agonist-induced phosphorylation that plays a central dual role in Smad transcriptional activation and turnover. As receptor-activated Smads form transcriptional complexes, they are phosphorylated at an interdomain linker region by CDK8 and CDK9, which are components of transcriptional mediator and elongation complexes. These phosphorylations promote Smad transcriptional action, which in the case of Smad1 is mediated by the recruitment of YAP to the phosphorylated linker sites. An effector of the highly conserved Hippo organ size control pathway, YAP supports Smad1-dependent transcription and is required for BMP suppression of neural differentiation of mouse embryonic stem cells. The phosphorylated linker is ultimately recognized by specific ubiquitin ligases, leading to proteasome-mediated turnover of activated Smad proteins. Thus, nuclear CDK8/9 drive a cycle of Smad utilization and disposal that is an integral part of canonical BMP and TGF-beta pathways."} {"_id": "c9c30fe890c7b77b13d78c116cd80e046ae737b6", "title": "Antibacterial activity of large-area monolayer graphene film manipulated by charge transfer", "text": "Graphene has attracted increasing attention for potential applications in biotechnology due to its excellent electronic property and biocompatibility. Here we use both Gram-positive Staphylococcus aureus (S. aureus) and Gram-negative Escherichia coli (E. coli) to investigate the antibacterial actions of large-area monolayer graphene film on conductor Cu, semiconductor Ge and insulator SiO2. The results show that the graphene films on Cu and Ge can surprisingly inhibit the growth of both bacteria, especially the former. However, the proliferation of both bacteria cannot be significantly restricted by the graphene film on SiO2. The morphology of S. aureus and E. coli on graphene films further confirms that the direct contact of both bacteria with graphene on Cu and Ge can cause membrane damage and destroy membrane integrity, while no evident membrane destruction is induced by graphene on SiO2. From the viewpoint of charge transfer, a plausible mechanism is proposed here to explain this phenomenon.
This study may provide new insights for the better understanding of the antibacterial actions of graphene films and for the better design of graphene-based antibiotics or other biomedical applications."} {"_id": "87e9ed98e6b7d5c7bf3837f62f3af9d182224f3b", "title": "Industrial Management & Data Systems Transformational leadership and innovative work behavior", "text": "Purpose \u2013 The purpose of this paper is to explore the mediating role of psychological empowerment and the moderating role of self-construal (independent and interdependent) on the relationship between transformational leadership and employees\u2019 innovative work behavior (IWB). Design/methodology/approach \u2013 A total of 639 followers and 87 leaders filled out questionnaires from a cross-industry sample of five of the most innovative companies in China. Structural equation modeling was used to analyze the relations. Findings \u2013 Results revealed that psychological empowerment mediated the relationship between transformational leadership and IWB. The research established that transformational leadership positively influences IWB, which includes idea generation as well as idea implementation. The results also showed that the relationship between transformational leadership and IWB was stronger among employees with a higher interdependent self-construal and a lower independent self-construal. Originality/value \u2013 This study adds to the IWB literature by empirically testing the moderating role of self-construal and the mediating role of psychological empowerment on the transformational leadership-IWB link."} {"_id": "6ccd19770991f35f7f4a9b0af62e3ff771536ae4", "title": "Named Entity Recognition System for Urdu", "text": "Named Entity Recognition (NER) is a task which helps in finding out person names, location names, brand names, abbreviations, dates, times, etc. and classifies them into predefined categories. NER plays a major role in various Natural Language Processing (NLP) fields like Information Extraction, Machine Translation and Question Answering. This paper describes the problems of NER in the context of the Urdu language and provides relevant solutions. The system is developed to tag thirteen different Named Entities (NE): twelve NE proposed by IJCNLP-08, plus Izaafats. We have used a rule-based approach and developed various rules to extract the Named Entities in the given Urdu text."} {"_id": "4d26eb642175bcd01f0f67e55d73735bcfb13bab", "title": "A 27-mW 3.6-gb/s I/O transceiver", "text": "This paper describes a 3.6-Gb/s 27-mW transceiver for chip-to-chip applications. A voltage-mode transmitter is proposed that equalizes the channel while maintaining impedance matching. A comparator is proposed that achieves sampling bandwidth control and offset compensation. A novel timing recovery circuit controls the phase by mismatching the current in the charge pump. The architecture maintains high signal integrity while each port consumes only 7.5 mW/Gb/s. The entire design occupies 0.2 mm/sup 2/ in a 0.18-/spl mu/m 1.8-V CMOS technology."} {"_id": "4faa414044d97b8deb45c37b78f59f18e6886bc7", "title": "Recurrent Neural Network Embedding for Knowledgebase Completion", "text": "Knowledge can often be represented using entities connected by relations. For example, the fact that a tennis ball is round can be represented as \u201cTennisBall HasShape Round\u201d, where \u201cTennisBall\u201d is one entity, \u201cHasShape\u201d is a relation and \u201cRound\u201d is another entity.
A knowledge base is a way to store such structured information: it stores triples of the \u201centity-relation-entity\u201d form, and a real-world knowledge base often has millions or billions of such triples. There are several well-known knowledge bases including FreeBase [1], WordNet [2], YAGO [3], etc. They are important in fields like reasoning and question answering; for instance, if one asks \u201cwhat is the shape of a tennis ball\u201d, we can search the knowledge base for the triple \u201cTennisBall HasShape Round\u201d and output \u201cround\u201d as the answer."} {"_id": "9981e27f01960526ea68227c7f8120e0c3ffe87f", "title": "Golden section search over hyper-rectangle: a direct search method", "text": "Abstract: This paper generalises the golden section optimal search method to higher dimensional optimisation problems. The method is applicable to a strictly quasi-convex function of N variables over an N-dimensional hyper-rectangle. An algorithm is proposed in N dimensions. The algorithm is illustrated graphically in two dimensions and verified through several test functions in higher dimensions using MATLAB."} {"_id": "ac06b09e232fe9fd8c3fabef71e1fd7f6f752a7b", "title": "Honeypots: The Need of Network Security", "text": "Network forensics is basically used to detect attackers' activity and to analyze their behavior. Data collection is the major task of network forensics, and honeypots are used in network forensics to collect useful data. Honeypots are an exciting new technology with enormous potential for the security community. This review paper covers an introduction to honeypots, their importance in network security, types of honeypots, their advantages, disadvantages and the legal issues related to them. The paper also discusses the shortcomings of intrusion detection systems in network security and how honeypots improve the security architecture of the organizational network. Furthermore, the paper describes the different kinds of honeypots, their levels of interaction and the risks associated with them. Keywords\u2014honeyd; honeypots; nmap; network forensics"} {"_id": "5eea7073acfa8b946204ff681aca192571a1d6c2", "title": "Repeatable Reverse Engineering with PANDA", "text": "We present PANDA, an open-source tool that has been purpose-built to support whole system reverse engineering. It is built upon the QEMU whole system emulator, and so analyses have access to all code executing in the guest and all data. PANDA adds the ability to record and replay executions, enabling iterative, deep, whole system analyses. Further, the replay log files are compact and shareable, allowing for repeatable experiments. A nine billion instruction boot of FreeBSD, e.g., is represented by only a few hundred MB. PANDA leverages QEMU's support of thirteen different CPU architectures to make analyses of those diverse instruction sets possible within the LLVM IR. In this way, PANDA can have a single dynamic taint analysis, for example, that precisely supports many CPUs. PANDA analyses are written in a simple plugin architecture which includes a mechanism to share functionality between plugins, increasing analysis code re-use and simplifying complex analysis development.
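The golden section record above generalises the classic 1-D search to hyper-rectangles. The paper's exact N-dimensional scheme is not reproduced in this corpus, so the sketch below pairs the standard 1-D golden section search with a simple coordinate-descent wrapper over the box; the wrapper is one common way to extend the method, offered here only as an illustration.

```python
import math

INV_PHI = (math.sqrt(5) - 1) / 2  # inverse golden ratio, about 0.618

def golden_section_1d(f, lo, hi, tol=1e-6):
    """Classic golden section search for a unimodal f on [lo, hi]."""
    x1, x2 = hi - INV_PHI * (hi - lo), lo + INV_PHI * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:                     # minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - INV_PHI * (hi - lo)
            f1 = f(x1)
        else:                           # minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + INV_PHI * (hi - lo)
            f2 = f(x2)
    return (lo + hi) / 2

def golden_section_box(f, bounds, sweeps=50):
    """Coordinate-wise extension over an N-dimensional hyper-rectangle."""
    x = [(lo + hi) / 2 for lo, hi in bounds]
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(bounds):
            x[i] = golden_section_1d(lambda t: f(x[:i] + [t] + x[i + 1:]), lo, hi)
    return x

# Example: the minimum of (x-1)^2 + (y+2)^2 over [-5,5]^2 is near (1, -2).
print(golden_section_box(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                         [(-5, 5), (-5, 5)]))
```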
We demonstrate PANDA's effectiveness via a number of use cases, including enabling an old but legitimately purchased game to run despite a lost CD key, in-depth diagnosis of an Internet Explorer crash, and uncovering the censorship activities and mechanisms of an IM client."} {"_id": "34776d39fec2da8b10de1c2633bec5968341509f", "title": "Breaking 104 bit WEP in less than 60 seconds", "text": "We demonstrate an active attack on the WEP protocol that is able to recover a 104-bit WEP key using less than 40,000 frames with a success probability of 50%. In order to succeed in 95% of all cases, 85,000 packets are needed. The IV of these packets can be randomly chosen. This is an improvement in the number of required frames by more than an order of magnitude over the best known key-recovery attacks for WEP. On an IEEE 802.11g network, the number of frames required can be obtained by re-injection in less than a minute. The required computational effort is approximately 2^20 RC4 key setups, which on current desktop and laptop CPUs is negligible."} {"_id": "16b187a157ad1599bf785912ac7974e38198be7a", "title": "Brain-computer interface research at the Wadsworth Center.", "text": "Studies at the Wadsworth Center over the past 14 years have shown that people with or without motor disabilities can learn to control the amplitude of mu or beta rhythms in electroencephalographic (EEG) activity recorded from the scalp over sensorimotor cortex and can use that control to move a cursor on a computer screen in one or two dimensions. This EEG-based brain-computer interface (BCI) could provide a new augmentative communication technology for those who are totally paralyzed or have other severe motor impairments. Present research focuses on improving the speed and accuracy of BCI communication."} {"_id": "1ada7e038d9dd3030cdbc7b0fc1eb041c1e4fb6b", "title": "Unsupervised Induction of Semantic Roles within a Reconstruction-Error Minimization Framework", "text": "We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with the most accurate role induction methods on English and German, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the languages."} {"_id": "e4d3d316d29c6612593be1c9ce4736e3928213ed", "title": "A Two-Iteration Clustering Method to Reveal Unique and Hidden Characteristics of Items Based on Text Reviews", "text": "This paper presents a new method for extracting unique features of items based on their textual reviews. The method consists of two similar iterations of applying a weighting scheme and then clustering the resultant set of vectors. In the first iteration, restaurants of similar food genres are grouped together into clusters. The second iteration reduces the importance of common terms in each such cluster, and highlights those that are unique to each specific restaurant.
Clustering the restaurants again, now according to their unique features, reveals very interesting connections between the restaurants."} {"_id": "3981416d7c8f784db8d9bfff2216fb50af711dbe", "title": "Siphon-based deadlock prevention for a class of S4PR generalized Petri nets", "text": "This paper proposes a new deadlock control technique for a class of generalized Petri nets (PNs), the S4PR nets, based on the concept of control siphons. An important structural property of PNs, expressed in terms of siphons, allows deadlocks to be characterized and the synchronization subsystem that requires control places to be analyzed. An efficient siphon method is provided for calculating minimal siphons, which leads to the construction of an optimal supervisor ensuring that the liveness of the system is preserved. The experimental results illustrate the proposed methods using an S4PR net example drawn from a flexible manufacturing system (FMS). Simulation results are also provided, based on Petri net concepts and integration with MATLAB."} {"_id": "483a5429b8f027bb75a03943c3150846377e44f6", "title": "Software-Defined Networking Paradigms in Wireless Networks: A Survey", "text": "Software-defined networking (SDN) has generated tremendous interest from both academia and industry. SDN aims at simplifying network management while enabling researchers to experiment with network protocols on deployed networks. This article is a distillation of the state of the art of SDN in the context of wireless networks. We present an overview of the major design trends and highlight key differences between them."} {"_id": "75ebe1e0ae9d42732e31948e2e9c03d680235c39", "title": "Hello! My name is... Buffy'' -- Automatic Naming of Characters in TV Video", "text": "We investigate the problem of automatically labelling appearances of characters in TV or film material. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking; (iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks. Results are presented on episodes of the TV series \u201cBuffy the Vampire Slayer\u201d."} {"_id": "d85a7bef0adb2651353d6aa74d6e6a2e12c3d841", "title": "A survey and analysis of ontology-based software tools for semantic interoperability in IoT and WoT landscapes", "text": "The current Internet of Things (IoT) ecosystem consists of non-interoperable products and services. The Web of Things (WoT) advances the IoT by allowing consumers to interact with the IoT ecosystem through the open and standard web technologies. But the Web alone does not solve the interoperability issues. It is widely acknowledged that Semantic Web Technologies hold the potential of achieving data and platform interoperability in both the IoT and WoT landscapes.
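The two-iteration clustering record above alternates a weighting scheme with clustering to surface item-specific terms. A hedged sketch of that shape follows, with hypothetical concrete choices not taken from the paper: TF-IDF as the weighting scheme, KMeans for the genre clusters, and subtraction of the cluster's mean term profile standing in for the second-iteration reweighting.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def unique_features(review_docs, n_genres=5, top_k=10):
    """For each item (one concatenated review document per item), return
    the top terms left after damping what is common to its genre cluster."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(review_docs).toarray()
    genres = KMeans(n_clusters=n_genres, n_init=10).fit_predict(X)
    vocab = np.array(vec.get_feature_names_out())
    uniques = []
    for i, doc_vec in enumerate(X):
        cluster_profile = X[genres == genres[i]].mean(axis=0)
        damped = doc_vec - cluster_profile      # down-weight shared genre terms
        uniques.append(list(vocab[np.argsort(damped)[::-1][:top_k]]))
    return uniques
```

A second KMeans pass over the damped vectors would then group items by their unique features, which is the step the record says reveals interesting connections between restaurants.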
In this context, the paper attempts to review and analyze the current state of ontology-based software tools for semantic interoperability."} {"_id": "e4acbc3656424766e39a6fbb0ae758d90554111e", "title": "\"With most of it being pictures now, I rarely use it\": Understanding Twitter's Evolving Accessibility to Blind Users", "text": "Social media is an increasingly important part of modern life. We investigate the use and usability of Twitter by blind users, via a combination of surveys of blind Twitter users, large-scale analysis of tweets from and Twitter profiles of blind and sighted users, and analysis of tweets containing embedded imagery. While Twitter has traditionally been thought of as the most accessible social media platform for blind users, Twitter's increasing integration of image content and users' diverse uses for images have presented emergent accessibility challenges. Our findings illuminate the importance of the ability to use social media for people who are blind, while also highlighting the many challenges such media currently present this user base, including difficulty in creating profiles, in awareness of available features and settings, in controlling revelations of one's disability status, and in dealing with the increasing pervasiveness of image-based content. We propose changes that Twitter and other social platforms should make to promote fuller access to users with visual impairments."} {"_id": "1b3c1759c16fe3f0ea57140bb23489fbf104616b", "title": "Full dimension MIMO for LTE-Advanced and 5G", "text": "Elevation beamforming and Full Dimension MIMO (FD-MIMO) have been an active area of research and standardization in 3GPP LTE-Advanced. In an FD-MIMO system, a base station with a 2-dimensional (2D) active array supports multi-user joint elevation and azimuth beamforming (a.k.a. 3D beamforming), which results in much higher cell capacity compared to conventional systems. Recent studies have shown that with these new FD-MIMO technologies, we can achieve a promising 3-5\u00d7 gain in both cell capacity as well as cell-edge throughput. In this paper, we will provide a brief summary of recent 3GPP activities, including the recently completed 3D channel model, ongoing study on FD-MIMO scenarios, antenna/RF (radio frequency) transceiver architectures, as well as potential network performance benefits. In addition, we also discuss some methods for reducing CSI (channel state information) feedback overhead and ensuring efficient operation of large size FD-MIMO for both TDD and FDD systems."} {"_id": "77a78f27356d502425ad232bf5cc554b73b38897", "title": "AnySee: Peer-to-Peer Live Streaming", "text": "Efficient and scalable live-streaming overlay construction has become a hot topic recently. In order to improve the performance metrics, such as startup delay, source-to-end delay, and playback continuity, most previous studies focused on intra-overlay optimization. Such approaches have drawbacks including low resource utilization, high startup and source-to-end delay, and unreasonable resource assignment in global P2P networks.
AnySee is a peer-to-peer live streaming system that adopts an inter-overlay optimization scheme, in which resources can join multiple overlays, so as to (1) improve global resource utilization and distribute traffic to all physical links evenly; (2) assign resources based on their locality and delay; (3) guarantee streaming service quality by using the nearest peers, even when such peers might belong to different overlays; and (4) balance the load among the group members. We compare the performance of our design with existing approaches based on comprehensive trace driven simulations. Results show that AnySee outperforms previous schemes in resource utilization and the QoS of streaming services. AnySee has been implemented as an Internet based live streaming system, and was successfully released in the summer of 2004 in CERNET of China. Over 60,000 users enjoy massive entertainment programs, including TV programs, movies, and academic conferences. Statistics prove that this design is scalable and robust, and we believe that the wide deployment of AnySee will soon benefit many more Internet users."} {"_id": "e7dbd9ba29c59c68a9dae9f40dfc4040476c4624", "title": "Feeling and believing: the influence of emotion on trust.", "text": "The authors report results from 5 experiments that describe the influence of emotional states on trust. They found that incidental emotions significantly influence trust in unrelated settings. Happiness and gratitude--emotions with positive valence--increase trust, and anger--an emotion with negative valence--decreases trust. Specifically, they found that emotions characterized by other-person control (anger and gratitude) and weak control appraisals (happiness) influence trust significantly more than emotions characterized by personal control (pride and guilt) or situational control (sadness). These findings suggest that emotions are more likely to be misattributed when the appraisals of the emotion are consistent with the judgment task than when the appraisals of the emotion are inconsistent with the judgment task. Emotions do not influence trust when individuals are aware of the source of their emotions or when individuals are very familiar with the trustee."} {"_id": "d6f7c761fa64754d7d93601a4802da27b5858f8b", "title": "3 D model classification using convolutional neural network", "text": "Our goal is to classify 3D models directly using a convolutional neural network. Most existing approaches rely on a set of human-engineered features. We use a 3D convolutional neural network to let the network learn features over the 3D space so as to minimize classification error. We trained and tested on the ShapeNet dataset with data augmentation by applying random transformations. We performed various visual analyses to find out what the network has learned. We extended our work to extract additional information such as the pose of the 3D model."} {"_id": "00f51b60ef3929097ada76a16ff71badc2277165", "title": "Preliminary Guidelines for Empirical Research in Software Engineering", "text": "Empirical software engineering research needs research guidelines to improve the research and reporting processes. We propose a preliminary set of research guidelines aimed at stimulating discussion among software researchers. They are based on a review of research guidelines developed for medical researchers and on our own experience in doing and reviewing software engineering research.
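The 3D model classification record above learns features directly over 3D space. A minimal PyTorch sketch of such a network follows; the input resolution (32^3 occupancy grids) and the layer sizes are illustrative guesses, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Voxel3DNet(nn.Module):
    """Small 3-D CNN for voxelized shape classification."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, stride=2), nn.ReLU(),  # 32^3 -> 14^3
            nn.Conv3d(16, 32, kernel_size=3), nn.ReLU(),           # 14^3 -> 12^3
            nn.MaxPool3d(2),                                       # 12^3 -> 6^3
        )
        self.classifier = nn.Linear(32 * 6 * 6 * 6, n_classes)

    def forward(self, x):            # x: (batch, 1, 32, 32, 32) occupancy grid
        return self.classifier(self.features(x).flatten(1))

logits = Voxel3DNet()(torch.randn(4, 1, 32, 32, 32))  # shape (4, 10)
```

The random transformations the record mentions for data augmentation would be applied to the voxel grids before this forward pass.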
The guidelines are intended to assist researchers, reviewers and meta-analysts in designing, conducting and evaluating empirical studies. Editorial boards of software engineering journals may wish to use our recommendations as a basis for developing guidelines for reviewers and for framing policies for dealing with the design, data collection and analysis and reporting of empirical studies."} {"_id": "a023f6d6c383f4a3839036f07b1ea0aa04da9cbb", "title": "Measuring and predicting software productivity: A systematic map and review", "text": "Context: Software productivity measurement is essential in order to control and improve the performance of software development, for example by identifying role models (e.g. projects, individuals, tasks) when comparing productivity data. The prediction is of relevance to determine whether corrective actions are needed, and to discover which alternative improvement action would yield the best results. Objective: In this study we identify studies for software productivity prediction and measurement. Based on the identified studies we first create a classification scheme and map the studies into the scheme (systematic map). Thereafter, a detailed analysis and synthesis of the studies is conducted. Method: As a research method for systematically identifying and aggregating the evidence of productivity measurement and prediction approaches, systematic mapping and systematic review have been used. Results: In total 38 studies have been identified, resulting in a classification scheme for empirical research on software productivity. The mapping allowed us to identify the rigor of the evidence with respect to the different productivity approaches. In the detailed analysis the results were tabulated and synthesized to provide recommendations to practitioners. Conclusion: Risks with simple ratio-based measurement approaches were shown. In response to these problems, data envelopment analysis seems to be a strong approach to capture multivariate productivity measures, and allows identifying reference projects to which inefficient projects should be compared. Regarding simulation, no general prediction model can be identified. Simulation and statistical process control are promising methods for software productivity prediction. Overall, further evidence is needed to make stronger claims and recommendations. In particular, the discussion of validity threats should become standard, and models need to be compared with each other."} {"_id": "b58b3a1dd84fe44f91510df00905a1ed33c1525c", "title": "A Review of Productivity Factors and Strategies on Software Development", "text": "Since the late seventies, efforts to catalog factors that influence productivity, as well as actions to improve it, have been a major concern for both academia and the software development industry. Despite numerous studies, software organizations still do not know which are the most significant factors and what to do about them. Several studies present the factors in a very superficial way, some address only related factors, and others describe only a single factor. Actions to deal with the factors are scattered and frequently have not been mapped.
Through a literature review, this paper presents a consolidated view of the main factors that have affected productivity over the years, and the strategies used to deal with these factors nowadays. This research aims to support the software development industry in the selection of strategies to improve productivity by maximizing the positive factors and minimizing or avoiding the impact of the negative ones."} {"_id": "d7b9dde9a7d304b378079049a0c2af40454a13bb", "title": "The impact of agile practices on communication in software development", "text": "Agile software development practices such as eXtreme Programming (XP) and SCRUM have increasingly been adopted to respond to the challenges of volatile business environments, where the markets and technologies evolve rapidly and present the unexpected. In spite of the encouraging results so far, little is known about how agile practices affect communication. This article presents the results from a study which examined the impact of XP and SCRUM practices on communication within software development teams and within the focal organization. The research was carried out as a case study in F-Secure where two agile software development projects were compared from the communication perspective. The goal of the study is to increase the understanding of communication in the context of agile software development: internally among the developers and project leaders and in the interface between the development team and stakeholders (i.e. customers, testers, other development teams). The study shows that agile practices improve both informal and formal communication. However, it further indicates that, in larger development situations involving multiple external stakeholders, a mismatch of adequate communication mechanisms can sometimes even hinder the communication. The study highlights the fact that hurdles and improvements in the communication process can both affect the feature requirements and task-subtask dependencies as described in coordination theory. While the use of SCRUM and some XP practices facilitates team and organizational communication of the dependencies between product features and working tasks, the use of agile practices requires that the team and organization also use additional plan-driven practices to ensure the efficiency of external communication between all the actors of software development."} {"_id": "0ee47ca8e90f3dd2107b6791c0da42357c56f5bc", "title": "Agile Software Development: The Business of Innovation", "text": "The rise and fall of the dot-com-driven Internet economy shouldn't distract us from seeing that the business environment continues to change at a dramatically increasing pace. To thrive in this turbulent environment, we must confront the business need for relentless innovation and forge the future workforce culture. Agile software development approaches such as Extreme Programming, Crystal methods, Lean Development, Scrum, Adaptive Software Development (ASD), and others view change from a perspective that mirrors today's turbulent business and technology environment. In a recent study of more than 200 software development projects, QSM Associates' Michael Mah reported that the researchers couldn't find nearly half of the projects' original plans to measure against. Why? Conforming to plan was no longer the primary goal; instead, satisfying customers\u2014at the time of delivery, not at project initiation\u2014took precedence.
In many projects we review, major changes in the requirements, scope, and technology that are outside the development team's control often occur within a project's life span. Accepting that Barry Boehm's life cycle cost differentials theory\u2014the cost of change grows through the software's development life cycle\u2014remains valid, the question today is not how to stop change early in a project but how to better handle inevitable changes throughout its life cycle. Traditional approaches assumed that if we just tried hard enough, we could anticipate the complete set of requirements early and reduce cost by eliminating change. Today, eliminating change early means being unresponsive to business conditions\u2014in other words, business failure. Similarly, traditional process management\u2014by continuous measurement, error identification, and process refinements\u2014strove to drive variations out of processes. This approach assumes that variations are the result of errors. Today, while process problems certainly cause some errors, external environmental changes cause critical variations. Because we cannot eliminate these changes, driving down the cost of responding to them is the only viable strategy. Rather than eliminating rework, the new strategy is to reduce its cost. However, in not just accommodating change, but embracing it, we also must be careful to retain quality. Expectations have grown over the years. The market demands and expects innovative, high-quality software that meets its needs\u2014and soon. Agile methods are a response to this expectation. Their strategy is to reduce the cost of change throughout a project. Extreme Programming (XP), for example, calls for the software development team to \u2022 produce the first delivery in weeks, to achieve an early win and rapid \u2026"} {"_id": "12f3f9a5bbcaa3ab09eff325bd4554924ac1356d", "title": "The Critical Success Factors for Effective ICT Governance in Malaysian Public Sector: A Delphi Study", "text": "The fundamental issue in ICT Governance (ICTG) implementation for the Malaysian Public Sector (MPS) is how ICT can be applied to support improvements in productivity, management effectiveness and the quality of services offered to its citizens. Our main concern is to develop and adopt a common definition and framework to illustrate how ICTG can be used to better align ICT with government\u2019s operations and strategic focus. In particular, we want to identify and categorize factors that drive a successful ICTG process. This paper presents the results of an exploratory study to identify, validate and refine such Critical Success Factors (CSFs), and confirms seven CSFs and nineteen sub-factors as influential factors that fit the MPS after further validation and refinement. The Delphi method was applied in the validation and refinement process before the factors were endorsed as appropriate for the MPS. The identified CSFs reflect the focus areas that need to be considered strategically to strengthen ICT Governance implementation and ensure business success. Keywords\u2014IT Governance, Critical Success Factors."} {"_id": "427a03a0746f398340e8a7f95d56316dacd8d70c", "title": "The sil Locus in Streptococcus Anginosus Group: Interspecies Competition and a Hotspot of Genetic Diversity", "text": "The Streptococcus Invasion Locus (Sil) was first described in Streptococcus pyogenes and Streptococcus pneumoniae, where it has been implicated in virulence.
The two-component peptide signaling system consists of the SilA response regulator and SilB histidine kinase along with the SilCR signaling peptide and SilD/E export/processing proteins. The presence of an associated bacteriocin region suggests this system may play a role in competitive interactions with other microbes. Comparative analysis of 42 Streptococcus Anginosus/Milleri Group (SAG) genomes reveals this to be a hot spot for genomic variability. A cluster of bacteriocin/immunity genes is found adjacent to the sil system in most SAG isolates (typically 6-10 per strain). In addition, there were two distinct SilCR peptides identified in this group, denoted here as SilCRSAG-A and SilCRSAG-B, with corresponding alleles in silB. Our analysis of the 42 sil loci showed that SilCRSAG-A is only found in Streptococcus intermedius while all three species can carry SilCRSAG-B. In S. intermedius B196, a putative SilA operator is located upstream of bacteriocin gene clusters, implicating the sil system in regulation of microbe-microbe interactions at mucosal surfaces where the group resides. We demonstrate that S. intermedius B196 responds to its cognate SilCRSAG-A, and, less effectively, to SilCRSAG-B released by other Anginosus group members, to produce putative bacteriocins and inhibit the growth of a sensitive strain of S. constellatus."} {"_id": "7ba0aa88b813b8e7e47d5999cf4802bb9b30b86a", "title": "Towards Lexicalization of DBpedia Ontology with Unsupervised Learning and Semantic Role Labeling", "text": "Filling the gap between natural language expressions and ontology concepts or properties is the new trend in the Semantic Web. Ontology lexicalization introduces a new layer of lexical information for ontology properties and concepts. We propose a method based on unsupervised learning for the extraction of potential lexical expressions of DBpedia properties from the Wikipedia text corpus. It is a resource-driven approach that comprises three main steps. The first step consists of the extraction of DBpedia triples for the target property, followed by the extraction of Wikipedia articles describing the resources from these triples. In the second step, sentences mostly related to the property are extracted from the articles and analyzed with a Semantic Role Labeler, resulting in a set of SRL annotated trees. In the last step, clusters of expressions are built using spectral clustering based on the distances between the SRL trees. The clusters with the least variance are considered to be relevant for the lexical expressions of the property."} {"_id": "2d633db75b177aad6045c0469ba0696b905f314f", "title": "An objective and subjective study of the role of semantics and prosodic features in building corpora for emotional TTS", "text": "Building a text corpus suitable to be used in corpus-based speech synthesis is a time-consuming process that usually requires some human intervention to select the desired phonetic content and the necessary variety of prosodic contexts. If an emotional text-to-speech (TTS) system is desired, the complexity of the corpus generation process increases. This paper presents a study aiming to validate or reject the use of a semantically neutral text corpus for the recording of both neutral and emotional (acted) speech. The use of this kind of text would eliminate the need to include semantically emotional texts in the corpus. The study has been performed for the Basque language.
The study was carried out by performing subjective and objective comparisons between the prosodic characteristics of recorded emotional speech using both semantically neutral and emotional texts. At the same time, the performed experiments allow for an evaluation of the capability of prosody to carry emotional information in the Basque language. Prosody manipulation is the most common processing tool used in concatenative TTS. Experiments on automatic recognition of the emotions considered in this paper (the "Big Six emotions") show that prosody is an important emotional indicator, but cannot be the only manipulated parameter in an emotional TTS system, at least not for all the emotions. Resynthesis experiments transferring prosody from emotional to neutral speech have also been performed. They corroborate the results and support the use of a neutral-semantic-content text in databases for emotional speech synthesis."} {"_id": "516f412a76911a13c9128aac827b52b27b98fad9", "title": "Uncovering social network sybils in the wild", "text": "Sybil accounts are fake identities created to unfairly increase the power or resources of a single user. Researchers have long known about the existence of Sybil accounts in online communities such as file-sharing systems, but have not been able to perform large scale measurements to detect them or measure their activities. In this paper, we describe our efforts to detect, characterize and understand Sybil account activity in the Renren online social network (OSN). We use ground truth provided by Renren Inc. to build measurement based Sybil account detectors, and deploy them on Renren to detect over 100,000 Sybil accounts. We study these Sybil accounts, as well as an additional 560,000 Sybil accounts caught by Renren, and analyze their link creation behavior. Most interestingly, we find that contrary to prior conjecture, Sybil accounts in OSNs do not form tight-knit communities. Instead, they integrate into the social graph just like normal users. Using link creation timestamps, we verify that the large majority of links between Sybil accounts are created accidentally, unbeknownst to the attacker. Overall, only a very small portion of Sybil accounts are connected to other Sybils with social links. Our study shows that existing Sybil defenses are unlikely to succeed in today's OSNs, and we must design new techniques to effectively detect and defend against Sybil attacks."} {"_id": "6847de10b11501b26f6d35405d1b6436ef17c0b4", "title": "Query by Example of Speaker Audio Signals Using Power Spectrum and MFCCs", "text": "Search engine is the popular term for an information retrieval (IR) system. Typically, a search engine is based on full-text indexing. Moving from text data to multimedia data types makes the information retrieval process more complex, as in the retrieval of images or sounds from large databases. This paper introduces the use of language- and text-independent speech as input queries to a large sound database, using a speaker identification algorithm. The method consists of two main processing steps: first, vocal segments are separated from non-vocal ones; the vocal segments are then used for speaker identification to answer audio queries by speaker voice. For the speaker identification and audio query process, we estimate the similarity of the example signal and the samples in the queried database by calculating the Euclidean distance between the Mel frequency cepstral coefficients (MFCC) and energy spectrum acoustic features.
The simulations show good performance with a sustainable computational cost, achieving an average accuracy rate of more than 90%."} {"_id": "26433d86b9c215b5a6871c70197ff4081d63054a", "title": "Multimodal biometric fusion at feature level: Face and palmprint", "text": "Multimodal biometrics has recently attracted substantial interest for its high performance in biometric recognition systems. In this paper we introduce multimodal biometrics for face and palmprint images using fusion techniques at the feature level. Gabor-based image processing is utilized to extract discriminant features, while principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimension of each modality. The output features of LDA are serially combined and classified by a Euclidean distance classifier. The experimental results based on the ORL face and Poly-U palmprint databases show that this fusion technique is able to increase biometric recognition rates compared to those produced by single-modal biometrics."} {"_id": "4ca6307c991f8f4c28ebdd45a08239dfb9da1c0c", "title": "Improved Style Transfer by Respecting Inter-layer Correlations", "text": "A popular series of style transfer methods apply a style to a content image by controlling the mean and covariance of values in early layers of a feature stack. This is insufficient for transferring styles that have strong structure across spatial scales, e.g., textures where dots lie on long curves. This paper demonstrates that controlling inter-layer correlations yields visible improvements in style transfer methods. We achieve this control by computing cross-layer, rather than within-layer, gram matrices. We find that (a) cross-layer gram matrices are sufficient to control within-layer statistics, and inter-layer correlations improve style transfer and texture synthesis; the paper shows numerous examples on \u201chard\u201d real style transfer problems (e.g. long scale and hierarchical patterns); (b) a fast approximate style transfer method can control cross-layer gram matrices; and (c) multiplicative, rather than additive, style and content loss results in very good style transfer. Multiplicative loss produces a visible emphasis on boundaries, and means that one hyper-parameter can be eliminated."} {"_id": "58c87d2d678aab8bccd5cb20d04bc867682b07f2", "title": "The INTERSPEECH 2017 Computational Paralinguistics Challenge: Addressee, Cold & Snoring", "text": "The INTERSPEECH 2017 Computational Paralinguistics Challenge addresses three different problems for the first time in a research competition under well-defined conditions: In the Addressee sub-challenge, it has to be determined whether speech produced by an adult is directed towards another adult or towards a child; in the Cold sub-challenge, speech under a cold has to be told apart from \u2018healthy\u2019 speech; and in the Snoring sub-challenge, four different types of snoring have to be classified.
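The query-by-example record above ranks database recordings by Euclidean distance over MFCC-based features. The sketch below shows only that distance step, using librosa and a fixed-length mean/std MFCC signature; the record's vocal/non-vocal separation and energy-spectrum features are omitted, and the signature design is an assumption rather than the paper's.

```python
import numpy as np
import librosa

def mfcc_signature(path, n_mfcc=13):
    """Fixed-length signature: per-coefficient mean and std of MFCCs."""
    y, sr = librosa.load(path, sr=None)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

def query_by_example(query_path, db_paths):
    """Rank database files by Euclidean distance to the query signature."""
    q = mfcc_signature(query_path)
    scored = [(p, float(np.linalg.norm(q - mfcc_signature(p)))) for p in db_paths]
    return sorted(scored, key=lambda t: t[1])
```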
In this paper, we describe these sub-challenges, their conditions, and the baseline feature extraction and classifiers, which include data-learnt feature representations by end-to-end learning with convolutional and recurrent neural networks, and bag-of-audiowords for the first time in the challenge series."} {"_id": "1ae9b720b3b3e497ef6a2a3e97079c0acb8570f1", "title": "An Entity-Centric Approach for Privacy and Identity Management in Cloud Computing", "text": "Entities (e.g., users, services) have to authenticate themselves to service providers (SPs) in order to use their services. An entity provides personally identifiable information (PII) that uniquely identifies it to an SP. In the traditional application-centric Identity Management (IDM) model, each application keeps trace of identities of the entities that use it. In cloud computing, entities may have multiple accounts associated with different SPs, or one SP. Sharing PIIs of the same entity across services along with associated attributes can lead to mapping of PIIs to the entity. We propose an entity-centric approach for IDM in the cloud. The approach is based on: (1) active bundles\u2014each including a payload of PII, privacy policies and a virtual machine that enforces the policies and uses a set of protection mechanisms to protect themselves, (2) anonymous identification to mediate interactions between the entity and cloud services using entity\u2019s privacy policies. The main characteristics of the approach are: it is independent of third party, gives minimum information to the SP and provides ability to use identity data on untrusted hosts."} {"_id": "626a38a32e2255e5bef98880ebbddf6994840e9e", "title": "Multichannel Decoded Local Binary Patterns for Content-Based Image Retrieval", "text": "Local binary pattern (LBP) is widely adopted for efficient image feature description and simplicity. To describe the color images, it is required to combine the LBPs from each channel of the image. The traditional way of binary combination is to simply concatenate the LBPs from each channel, but it increases the dimensionality of the pattern. In order to cope with this problem, this paper proposes a novel method for image description with multichannel decoded LBPs. We introduce adder- and decoder-based two schemas for the combination of the LBPs from more than one channel. Image retrieval experiments are performed to observe the effectiveness of the proposed approaches and compared with the existing ways of multichannel techniques. The experiments are performed over 12 benchmark natural scene and color texture image databases, such as Corel-1k, MIT-VisTex, USPTex, Colored Brodatz, and so on. It is observed that the introduced multichannel adder- and decoder-based LBPs significantly improve the retrieval performance over each database and outperform the other multichannel-based approaches in terms of the average retrieval precision and average retrieval rate."} {"_id": "57a809faecdeb6c97160be4cab0d0b2f42ed3c6f", "title": "Mapping the world's photos", "text": "We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. 
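The multichannel LBP record above contrasts its adder/decoder combination schemas with the traditional concatenation of per-channel LBP histograms. The sketch below implements only that traditional baseline (the point of comparison, not the paper's contribution), using scikit-image's uniform LBP.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multichannel_lbp_hist(rgb, P=8, R=1):
    """Concatenate one uniform-LBP histogram per colour channel."""
    n_bins = P + 2                          # 'uniform' LBP yields P + 2 codes
    feats = []
    for c in range(rgb.shape[-1]):
        codes = local_binary_pattern(rgb[..., c], P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        feats.append(hist / max(hist.sum(), 1))  # normalise per channel
    return np.concatenate(feats)            # length 3 * (P + 2) for RGB
```

The record's adder and decoder schemes replace this concatenation with a joint encoding of the channel patterns, which keeps the descriptor compact while capturing cross-channel structure.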
We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale."} {"_id": "98997f81cc77f53ee84e9b6df1edc253f9f9d5f9", "title": "Personalized, interactive tag recommendation for flickr", "text": "We study the problem of personalized, interactive tag recommendation for Flickr: While a user enters/selects new tags for a particular picture, the system suggests related tags to her, based on the tags that she or other people have used in the past along with (some of) the tags already entered. The suggested tags are dynamically updated with every additional tag entered/selected. We describe a new algorithm, called Hybrid, which can be applied to this problem, and show that it outperforms previous algorithms. It has only a single tunable parameter, which we found to be very robust.\n Apart from this new algorithm and its detailed analysis, our main contributions are (i) a clean methodology which leads to conservative performance estimates, (ii) showing how classical classification algorithms can be applied to this problem, (iii) introducing a new cost measure, which captures the effort of the whole tagging process, (iv) clearly identifying when purely local schemes (using only a user's tagging history) can or cannot be improved by global schemes (using everybody's tagging history)."} {"_id": "424f92289a632f85f6ba9a611614d145c7d3393a", "title": "A review on software development security engineering using Dynamic System Method (DSDM)", "text": "Agile methodologies such as Scrum, Extreme Programming (XP), Feature Driven Development (FDD) and the Dynamic System Development Method (DSDM) have gained recognition as efficient development processes by delivering software fast even under time constraints. However, like other agile methods, DSDM has been criticized because of the unavailability of a security element in its four phases. In order to have a deeper look into the matter and discover more about the reality, we conducted a literature review. Our findings highlight that, in its current form, the DSDM does not support developing secure software. Although there are a few studies on this topic for Scrum, XP and FDD, based on our findings there is no research on developing secure software using DSDM. Thus, in our future work we intend to propose an enhanced DSDM that will cater for the security aspects in software development."} {"_id": "cbd8a90e809151b684e73fb3e31c2731874570c4", "title": "A systematic literature review of nurse shortage and the intention to leave.", "text": "AIM\nTo present the findings of a literature review regarding nurses' intention to leave their employment or the profession.\n\n\nBACKGROUND\nThe nursing shortage is a problem that is being experienced worldwide. It is a problem that, left unresolved, could have a serious impact on the provision of quality health care.
Understanding the reasons why nurses leave their employment or the profession is imperative if efforts to increase retention are to be successful.\n\n\nEVALUATION\nElectronic databases were systematically searched to identify English research reports about nurses' intention to leave their employment or the profession. Key results concerning the issue were extracted and synthesized.\n\n\nKEY ISSUES\nThe diversified measurement instruments, samples and levels of intention to leave caused difficulties in the attempt to compare or synthesize findings. The factors influencing nurses' intention to leave were identified and categorized into organizational and individual factors.\n\n\nCONCLUSIONS\nThe reasons that trigger nurses' intention to leave are complex and are influenced by organizational and individual factors. Further studies should be conducted to investigate how external factors such as job opportunities correlate with nurses' intention to leave.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nThe review provides insight that can be useful in designing and implementing strategies to maintain a sustainable workforce in nursing."} {"_id": "3df4b372489e734beeb70b320666354e53eb23c4", "title": "A model for play-based intervention for children with ADHD.", "text": "BACKGROUND/AIM\nThe importance of play in the social development of children is undisputed. Even though children with attention-deficit hyperactivity disorder (ADHD) experience serious social problems, there is limited research on their play. By integrating literature on ADHD with literature on play, we can postulate how play is influenced by the characteristics of ADHD. These postulations enabled us to propose a theoretical model (proposed model) to depict the interactive process between the characteristics of ADHD and factors that promote play. This paper presents the revised model and principles for intervention based on the results of a study investigating the play of children with ADHD (reported elsewhere).\n\n\nMETHODS\nWe tested the proposed model in a study comparing two groups of children (n = 350) between the ages of 5 and 11 years. One group consisted of children diagnosed with ADHD (n = 112) paired with playmates (n = 112) who were typically developing; the control group consisted of typically developing children paired with typically developing playmates (n = 126). The Test of Playfulness was administered, and the model was revised in line with the findings.\n\n\nRESULTS AND CONCLUSIONS\nThe findings suggest difficulties in the social play and lack of interpersonal empathy in the play of children with ADHD. We draw on the revised model to propose preliminary principles for play-based interventions for children with ADHD. The principles emphasise the importance of capturing the motivation of children with ADHD, counteracting the effects of lack of interpersonal empathy, and considerations for including playmates in the intervention process."} {"_id": "a864fbfd34426c98b2832a3c2aa9fbc7df8bb910", "title": "Role of affective self-regulatory efficacy in diverse spheres of psychosocial functioning.", "text": "This prospective study with 464 older adolescents (14 to 19 years at Time 1; 16 to 21 years at Time 2) tested the structural paths of influence through which perceived self-efficacy for affect regulation operates in concert with perceived behavioral efficacy in governing diverse spheres of psychosocial functioning. 
Self-efficacy to regulate positive and negative affect is accompanied by high efficacy to manage one's academic development, to resist social pressures for antisocial activities, and to engage oneself with empathy in others' emotional experiences. Perceived self-efficacy for affect regulation essentially operated mediationally through the latter behavioral forms of self-efficacy rather than directly on prosocial behavior, delinquent conduct, and depression. Perceived empathic self-efficacy functioned as a generalized contributor to psychosocial functioning. It was accompanied by prosocial behavior and low involvement in delinquency but increased vulnerability to depression in adolescent females."} {"_id": "f4b98dbd75c87a86a8bf0d7e09e3ebbb63d14954", "title": "Comparative Evaluation of Various MFCC Implementations on the Speaker Verification Task", "text": "Making no claim of being exhaustive, a review of the most popular MFCC (Mel Frequency Cepstral Coefficients) implementations is made. These differ mainly in the particular approximation of the nonlinear pitch perception of humans, the filter bank design, and the compression of the filter bank output. Then, a comparative evaluation of the presented implementations is performed on the task of text-independent speaker verification, by means of the well-known 2001 NIST SRE (speaker recognition evaluation) one-speaker detection database."} {"_id": "7b143616c637734d6c89f28723e2ceb7aabc0389", "title": "Personality, Culture, and System Factors - Impact on Affective Response to Multimedia", "text": "Whilst affective responses to various forms and genres of multimedia content have been well researched, precious few studies have investigated the combined impact that multimedia system parameters and human factors have on affect. Consequently, in this paper we explore the role that two primordial dimensions of human factors (personality and culture), in conjunction with the system factors frame rate, resolution, and bit rate, have on user affect and enjoyment of multimedia presentations. To this end, a two-site, cross-cultural study was undertaken, the results of which produced three predictive models. Personality and Culture traits were shown statistically to represent 5.6% of the variance in positive affect, 13.6% in negative affect and 9.3% in enjoyment. The correlation between affect and enjoyment was significant. Predictive modeling incorporating human factors showed about 8%, 7% and 9% improvement in predicting positive affect, negative affect and enjoyment respectively when compared to models trained only on system factors. Results and analysis indicate the significant role played by human factors in influencing affect that users experience while watching multimedia."} {"_id": "906dc85636c056408f13f0d24b6d6f92ffb63113", "title": "Towards a Simple but Useful Ontology Design Pattern Representation Language", "text": "The need for a representation language for ontology design patterns has long been recognized. However, the body of literature on the topic is still rather small and does not sufficiently reflect the diverse requirements on such a language.
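Since the MFCC review above turns on exactly how implementations differ (mel approximation, filter bank design, compression), a single design point can be sketched in a few lines. The constants below (HTK-style mel formula, 26 triangular filters, log compression, DCT-II) are one common choice for illustration, not the review's prescription.

```python
import numpy as np
from scipy.fft import dct

def mel(f):       # HTK-style approximation of nonlinear pitch perception
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr, n_filters=26, n_ceps=13, n_fft=512):
    """One MFCC design point: power spectrum -> mel filterbank -> log -> DCT."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2
    # triangular filters, equally spaced on the mel axis
    edges = mel_inv(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    energies = np.maximum(fbank @ spec, 1e-10)   # floor before log compression
    return dct(np.log(energies), type=2, norm="ortho")[:n_ceps]

x = np.random.randn(400)              # a hypothetical 25 ms frame at 16 kHz
print(mfcc_frame(x, sr=16000).shape)  # (13,)
```

Swapping the mel formula, the filter spacing, or the log/DCT compression stage reproduces the kinds of implementation variants the review compares.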
Herein, we propose a simple but useful and extendable approach which is fully compatible with the Web Ontology Language and should be easy to adopt by the community."} {"_id": "f3da8e33c90dc19a33d91a1b6b2ec4430f3b0315", "title": "Enhanced multisensory integration in older adults", "text": "Information from the different senses is seamlessly integrated by the brain in order to modify our behaviors and enrich our perceptions. It is only through the appropriate binding and integration of information from the different senses that a meaningful and accurate perceptual gestalt can be generated. Although a great deal is known about how such cross-modal interactions influence behavior and perception in the adult, there is little knowledge as to the impact of aging on these multisensory processes. In the current study, we examined the speed of discrimination responses of aged and young individuals to the presentation of visual, auditory or combined visual-auditory stimuli. Although the presentation of multisensory stimuli speeded response times in both groups, the performance gain was significantly greater in the aged. Most strikingly, multisensory stimuli restored response times in the aged to those seen in young subjects to the faster of the two unisensory stimuli (i.e., visual). The current results suggest that despite the decline in sensory processing that accompanies aging, the use of multiple sensory channels may represent an effective compensatory strategy to overcome these unisensory deficits."} {"_id": "404574efdb5193dc6b69ffcfbf2190212ebfa43f", "title": "Potential Vulnerabilities of Neuronal Reward, Risk, and Decision Mechanisms to Addictive Drugs", "text": "How do addictive drugs hijack the brain's reward system? This review speculates how normal, physiological reward processes may be affected by addictive drugs. Addictive drugs affect acute responses and plasticity in dopamine neurons and postsynaptic structures. These effects reduce reward discrimination, increase the effects of reward prediction error signals, and enhance neuronal responses to reward-predicting stimuli, which may contribute to compulsion. Addictive drugs steepen neuronal temporal reward discounting and create temporal myopia that impairs the control of drug taking. Tonically enhanced dopamine levels may disturb working memory mechanisms necessary for assessing background rewards and thus may generate inaccurate neuronal reward predictions. Drug-induced working memory deficits may impair neuronal risk signaling, promote risky behaviors, and facilitate preaddictive drug use. Malfunctioning adaptive reward coding may lead to overvaluation of drug rewards. Many of these malfunctions may result in inadequate neuronal decision mechanisms and lead to choices biased toward drug rewards."} {"_id": "64290c658d2f1c47ad4fd8757a87ac6c9a708f89", "title": "Work Teams Applications and Effectiveness", "text": "This article uses an ecological approach to analyze factors in the effectiveness of work teams--small groups of interdependent individuals who share responsibility for outcomes for their organizations. Applications include advice and involvement, as in quality control circles and committees; production and service, as in assembly groups and sales teams; projects and development, as in engineering and research groups; and action and negotiation, as in sports teams and combat units. An analytic framework depicts team effectiveness as interdependent with organizational context, boundaries, and team development.
Key context factors include (a) organizational culture, (b) technology and task design, (c) mission clarity, (d) autonomy, (e) rewards, (f) performance feedback, (g) training/consultation, and (h) physical environment. Team boundaries may mediate the impact of organizational context on team development. Current research leaves unanswered questions but suggests that effectiveness depends on organizational context and boundaries as much as on internal processes. Issues are raised for research and practice. The terms work team and work group appear often in today's discussions of organizations. Some experts claim that to be effective modern firms need to use small teams for an increasing variety of jobs. For instance, in an article subtitled \"The Team as Hero,\" Reich (1987) wrote, If we are to compete in today's world, we must begin to celebrate collective entrepreneurship, endeavors in which the whole of the effort is greater than the sum of individual contributions. We need to honor our teams more, our aggressive leaders and maverick geniuses less. (p. 78) Work teams occupy a pivotal role in what has been described as a management transformation (Walton, 1985), paradigm shift (Ketchum, 1984), and corporate renaissance (Kanter, 1983). In this management revolution, Peters (1988) advised that organizations use \"multi-function teams for all development activities\" (p. 210) and \"organize every function into ten- to thirty-person, largely self-managing teams\" (p. 296). Tornatzky (1986) pointed to new technologies that allow small work groups to take responsibility for whole products. Hackman (1986) predicted that, \"organizations in the future will rely heavily on member self-management\" (p. 90). Building blocks of such organizations are self-regulating work teams. But far from being revolutionary, work groups are traditional; \"the problem before us is not to invent more tools, but to use the ones we have\" (Kanter, 1983, p. 64). In this article, we explore applications of work teams and propose an analytic framework for team effectiveness. Work teams are defined as interdependent collections of individuals who share responsibility for specific outcomes for their organizations. In what follows, we first identify applications of work teams and then offer a framework for analyzing team effectiveness. Its facets make up topics of subsequent sections: organizational context, boundaries, and team development. We close with issues for research and practice. Applications of Work Teams Two watershed events called attention to the benefits of applying work teams beyond sports and military settings: the Hawthorne studies (Homans, 1950) and European experiments with autonomous work groups (Kelly, 1982). Enthusiasm has alternated with disenchantment (Bramel & Friend, 1987), but the 1980s have brought a resurgence of interest. Unfortunately, we have little evidence on how widely work teams are used or whether their use is expanding. Pasmore, Francis, Haldeman, and Shani (1982) reported that introduction of autonomous work groups was the most common intervention in 134 experiments in manufacturing firms. Production teams number among four broad categories of work team applications: (a) advice and involvement, (b) production and service, (c) projects and development, and (d) action and negotiation.
Advice and Involvement Decision-making committees, traditional in management, are now expanding to first-line employees. Quality control (QC) circles and employee involvement groups have been common in the 1980s, often as vehicles for employee participation (Cole, 1982). Perhaps several hundred thousand U.S. employees belong to QC circles (Ledford, Lawler, & Mohrman, 1988), usually first-line manufacturing employees who meet to identify opportunities for improvement. Some make and carry out proposals, but most have restricted scopes of activity and little working time, perhaps a few hours each month (Thompson, 1982). Employee involvement groups operate similarly, exploring ways to improve customer service (Peterfreund, 1982). QC circles and employee involvement groups at times may have been implemented poorly (Shea, 1986), but they have been used extensively in some companies"} {"_id": "16deaf3986d996a7bf5f6188d39607c2e406a1f8", "title": "Self-Adaptive Anytime Stream Clustering", "text": "Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering."} {"_id": "31cc80ffb56d7f82dcc44e78fbdea95bffe5028e", "title": "Depth-disparity calibration for augmented reality on binocular optical see-through displays", "text": "We present a study of depth-disparity calibration for augmented reality applications using binocular optical see-through displays. Two techniques were proposed and compared. The \"paired-eyes\" technique leverages Panum's fusional area to help the viewer find alignment between the virtual and physical objects. The \"separate-eyes\" technique eliminates the need for binocular fusion and involves using both eyes sequentially to check the virtual-physical object alignment on retinal images. We conducted a user study to measure the calibration results and assess the subjective experience of users with the proposed techniques."} {"_id": "0c4867f11c9758014d591381d8b397a1d38b04a7", "title": "Pattern Recognition and Machine Learning", "text": "C. Bishop, Pattern Recognition and Machine Learning."}
{"_id": "04c5268d7a4e3819344825e72167332240a69717", "title": "Action MACH a spatio-temporal Maximum Average Correlation Height filter for action recognition", "text": "In this paper we introduce a template-based method for recognizing human actions called action MACH. Our approach is based on a maximum average correlation height (MACH) filter. A common limitation of template-based methods is their inability to generate a single template using a collection of examples. MACH is capable of capturing intra-class variability by synthesizing a single Action MACH filter for a given action class. We generalize the traditional MACH filter to video (3D spatiotemporal volume), and vector valued data. By analyzing the response of the filter in the frequency domain, we avoid the high computational cost commonly incurred in template-based approaches. Vector valued data is analyzed using the Clifford Fourier transform, a generalization of the Fourier transform intended for both scalar and vector-valued data. Finally, we perform an extensive set of experiments and compare our method with some of the most recent approaches in the field by using publicly available datasets, and two new annotated human action datasets which include actions performed in classic feature films and sports broadcast television."} {"_id": "139a860b94e9b89a0d6c85f500674fe239e87099", "title": "Optimal Brain Damage", "text": "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application."} {"_id": "1c01e44df70d6fde616de1ef90e485b23a3ea549", "title": "A new class of upper bounds on the log partition function", "text": "We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem.
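The upper bound this abstract describes follows from convexity of the log partition function; stated compactly in standard exponential-family notation (ours for illustration, not necessarily the paper's own):

```latex
% A(\theta) = \log \sum_x \exp\langle \theta, \phi(x) \rangle is convex in \theta.
% Take tree-structured parameters \theta(T) and weights \rho(T) \ge 0 with
% \sum_T \rho(T) = 1 and \sum_T \rho(T)\,\theta(T) = \theta. Jensen's inequality gives
A(\theta) \;=\; A\!\Big(\sum_T \rho(T)\,\theta(T)\Big)
          \;\le\; \sum_T \rho(T)\, A\big(\theta(T)\big),
% an upper bound, where each tree term A(\theta(T)) is exactly computable
% by dynamic programming on the tree.
```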
As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants."} {"_id": "39a6cc80b1590bcb2927a9d4c6c8f22d7480fbdd", "title": "A 3-dimensional sift descriptor and its application to action recognition", "text": "In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data."} {"_id": "efaf07d40b9c5837639bed129794efc00f02e4c3", "title": "Continuous N-gram Representations for Authorship Attribution", "text": "This paper presents work on using continuous representations for authorship attribution. In contrast to previous work, which uses discrete feature representations, our model learns continuous representations for n-gram features via a neural network jointly with the classification layer. Experimental results demonstrate that the proposed model outperforms the state-of-the-art on two datasets, while producing comparable results on the remaining two."} {"_id": "6683426ca06560523fc7461152d4dd3b84a07854", "title": "Autotagger: A Model for Predicting Social Tags from Acoustic Features on Large Music Databases", "text": "Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of \u201cWeb 2.0\u201d recommender systems, allowing users to generate playlists based on use-dependent terms such as chill or jogging that have been applied to particular songs. In this paper, we propose a method for predicting these social tags directly from MP3 files. Using a set of 360 classifiers trained using the online ensemble learning algorithm FilterBoost, we map audio features onto social tags collected from the Web. The resulting automatic tags (or autotags) furnish information about music that is otherwise untagged or poorly tagged, allowing for insertion of previously unheard music into a social recommender. This avoids the \u201ccold-start problem\u201d common in such systems. Autotags can also be used to smooth the tag space from which similarities and recommendations are made by providing a set of comparable baseline tags for all tracks in a recommender system. Because the words we learn are the same as those used by people who label their music collections, it is easy to integrate our predictions into existing similarity and prediction methods based on web data."} {"_id": "d86e51d6e1215d792a9d00995d367b6161fc33e7", "title": "Does Your Model Know the Digit 6 Is Not a Cat? A Less Biased Evaluation of \"Outlier\" Detectors", "text": "In the real world, a learning system could receive an input that looks nothing like anything it has seen during training, and this can lead to unpredictable behaviour. We thus need to know whether any given input belongs to the population distribution of the training data to prevent unpredictable behaviour in deployed systems. 
A recent surge of interest in this problem has led to the development of sophisticated techniques in the deep learning literature. However, due to the absence of a standardized problem formulation or an exhaustive evaluation, it is not evident if we can rely on these methods in practice. What makes this problem different from a typical supervised learning setting is that we cannot model the diversity of out-of-distribution samples in practice. The distribution of outliers used in training may not be the same as the distribution of outliers encountered in the application. Therefore, classical approaches that learn inliers vs. outliers with only two datasets can yield optimistic results. We introduce OD-test, a three-dataset evaluation scheme, as a practical and more reliable strategy to assess progress on this problem. The OD-test benchmark provides a straightforward means of comparison for methods that address the out-of-distribution sample detection problem. We present an exhaustive evaluation of a broad set of methods from related areas on image classification tasks. Furthermore, we show that for realistic applications of high-dimensional images, the existing methods have low accuracy. Our analysis reveals areas of strength and weakness of each method."} {"_id": "158d62f4e3363495148cf16c7b800daab7765760", "title": "Privacy Aware Learning", "text": "We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner. In this local privacy framework, we establish sharp upper and lower bounds on the convergence rates of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy the data preserves and the utility, as measured by convergence rate, of any statistical estimator or learning procedure."} {"_id": "77071790f7e3a2ab7fb8b2a7e8d0a10e0debc5c1", "title": "Autonomous Object Manipulation Using a Soft Planar Grasping Manipulator", "text": "This article presents the development of an autonomous motion planning algorithm for a soft planar grasping manipulator capable of grasp-and-place operations by encapsulation, under uncertainty in the position and shape of the object. The end effector of the soft manipulator is fabricated in one piece without weakening seams using lost-wax casting instead of the commonly used multilayer lamination process. The soft manipulation system can grasp randomly positioned objects within its reachable envelope and move them to a desired location without human intervention. The autonomous planning system leverages the compliance and continuum bending of the soft grasping manipulator to achieve repeatable grasps in the presence of uncertainty. A suite of experiments is presented that demonstrates the system's capabilities."} {"_id": "3ce29949228103340391dbb57e38dd68d58e9b9e", "title": "Fetal left brachiocephalic vein in normal and abnormal conditions.", "text": "OBJECTIVES\nTo establish values of fetal left brachiocephalic vein (LBCV) dimensions during normal pregnancy and determine whether routine assessment of the LBCV may help in identifying fetuses with congenital abnormalities of this vessel.\n\n\nMETHODS\nFetal LBCV was assessed prospectively during ultrasound examinations in 431 normal singleton pregnancies. The visualization rate of the transverse view of the upper fetal chest at the level of drainage of the LBCV into the superior vena cava (SVC) by two-dimensional (2D) and 2D plus color Doppler ultrasound was evaluated.
Reference ranges of LBCV diameter during non-complicated pregnancies were established. Interobserver and intraobserver measurement variability was analyzed. In addition, a retrospective review of the hospital medical records of 91 pregnancies with fetuses diagnosed with LBCV abnormalities was performed.\n\n\nRESULTS\nSonographic assessment of the fetal LBCV was consistently achieved in the second and third trimesters and in some fetuses in the first trimester of pregnancy. In normal fetuses LBCV diameter increased significantly throughout pregnancy, with a mean value of 0.7 mm at 11 weeks and 4.9 mm at term. Dilation of the fetal LBCV was noted in five cases of intracranial arteriovenous malformation and six cases of supracardiac type total anomalous pulmonary venous connection. Abnormal course of the LBCV was noted in 12 fetuses. In 63 fetuses with a persistent left SVC and a right SVC the LBCV was absent.\n\n\nCONCLUSION\nThis is the first study describing an effective sonographic approach for the assessment of fetal LBCV dimensions during pregnancy. The normative data may provide an additional means of detecting rare anomalies of systemic and pulmonary veins during pregnancy."} {"_id": "9960994d81af14ae49a684883ea376a6bef41b2d", "title": "The importance of drawing in the mechanical design process", "text": "This paper is a study on the importance of drawing (both formal drafting and informal sketching) during the process of mechanical design. Five hypotheses, focused on the types of drawings, their necessity in mechanical problem solving, and their relation to the external representation medium, are presented and supported. Support is through referenced studies in other domains and the results of protocol studies performed on five mechanical designers. Videotapes of all the marks-on-paper made by designers in representative sections of the design process were studied in detail for their type and purpose. The resulting data is supportive of the hypotheses. These results also give requirements for future computer aided design tools and graphics education, and goals for further studies. I. INTRODUCTION The goal of this paper is to study the importance of drawing (both formal drafting and informal sketching) during the process of mechanical design. This goal can be extended to state that we intend to show the necessity of drawing during all the developmental stages of a mechanical design. Through the information presented here, the requirements for future computer aided design tools, graphics education, and further studies will be developed. All mechanical engineers are taught drafting. Thus, most engineers are skilled at making and interpreting these formal mechanical drawings. These drawings are representations of a final design (the end product of the design process) and they are intended to archive the completed design and communicate it to other designers and manufacturing personnel. Additionally, engineers are notorious for not being able to think without making \"back-of-the-envelope\" sketches of rough ideas. Sometimes these informal sketches serve to communicate a concept to a colleague, but more often they just help the idea take shape on paper. It is in considering how these sketches help an idea take form that gives a hint that drawing's role in engineering is more than just to archive a concept or to communicate with others.
Understanding the use of both drafting and sketching in design is important to help formulate the future development of Computer Aided Design or Drafting (CAD) systems. As CAD evolves and becomes more \"intelligent,\" the question of what attributes these systems must have becomes more important. In the past, CAD system attributes have primarily been driven by developments in the computer industry; it is only through understanding drawing's importance in the design process that these systems can be based on design needs. Additionally, the pressures of CAD tool development, faculty time demands, and course expenses cause academic institutions to reevaluate the content of their \"graphics\" courses. Understanding drawing's importance in the design process helps establish what skills need to be taught to engineers during their training. This paper is organized by first, in Section 2, clarifying the types of drawings used in mechanical design. The hypotheses to be addressed in this paper are given in Section 3. A discussion of research on the understanding of visual imagery to be used as a basis for arguments in support of the hypotheses is in Section 4. In Section 5 is a discussion of the results of data taken on how mechanical engineers use drawings during design. Lastly, in Section 6, is a discussion of how well the hypotheses have been supported and the implications of our findings on CAD development, educational requirements, and future research directions. 2. TYPES OF DRAWINGS USED IN DESIGN Engineers make many types of marks-on-paper. In research, to be described in Section 5, we have broken down these marks into two main groupings: support notation and graphic representations. Support notation includes textual notes, lists, dimensions (including leaders and arrows), and calculations. Graphic representations include drawings of objects and their functions, and plots and charts. Mechanical design graphic representations are often scale drawings made with mechanical instruments or CAD computer systems. These drawings, made in accordance with a set of widely accepted rules, are defined as having been drafted. Sketches, on the other hand, are defined as \"free-hand\" drawings. They are usually not to scale and may use shorthand notations to represent both objects and their function. A differentiation must be made between the act of graphic representation and the medium on which it occurs. The medium, whether it be paper and pencil, a computer stylus on a tablet, chalk on a blackboard, or another medium, may put interface restrictions on the representation. The following discussions are concerned with what is being represented, not with how the representation is made. However, the discussions point to the medium's restriction on representation and the need for improved interfaces. Another aspect of drawings to be considered is the level of abstraction of the information to be represented. During the design process, the design is refined
We are particularly interested in improving reactive methods that rely on human action recognition to activate the corresponding robot action. Action recognition invariably causes delay in the robot\u2019s response, and the goal of our method is to eliminate this delay by predicting the next human action. Prediction is achieved by using a lookup table containing variations of assembly sequences, previously demonstrated by different users. The method uses the nearest neighbor sequence in the table that matches the actual sequence of human actions. At the movement level, our method uses a probabilistic representation of interaction primitives to generate robot trajectories. The method is demonstrated using a 7 degree-of-freedom lightweight arm equipped with a 5-finger hand on an assembly task consisting of 17 steps."} {"_id": "6217d2c64b6f843b2078dd0cf4fdb8ab15f06d43", "title": "The Missing Link of Jewish European Ancestry: Contrasting the Rhineland and\nthe Khazarian Hypotheses", "text": "The question of Jewish ancestry has been the subject of controversy for over two centuries and has yet to be resolved. The \"Rhineland hypothesis\" depicts Eastern European Jews as a \"population isolate\" that emerged from a small group of German Jews who migrated eastward and expanded rapidly. Alternatively, the \"Khazarian hypothesis\" suggests that Eastern European Jews descended from the Khazars, an amalgam of Turkic clans that settled the Caucasus in the early centuries CE and converted to Judaism in the 8th century. Mesopotamian and Greco-Roman Jews continuously reinforced the Judaized empire until the 13th century. Following the collapse of their empire, the Judeo-Khazars fled to Eastern Europe. The rise of European Jewry is therefore explained by the contribution of the Judeo-Khazars. Thus far, however, the Khazars' contribution has been estimated only empirically, as the absence of genome-wide data from Caucasus populations precluded testing the Khazarian hypothesis. Recent sequencing of modern Caucasus populations prompted us to revisit the Khazarian hypothesis and compare it with the Rhineland hypothesis. We applied a wide range of population genetic analyses to compare these two hypotheses. Our findings support the Khazarian hypothesis and portray the European Jewish genome as a mosaic of Near Eastern-Caucasus, European, and Semitic ancestries, thereby consolidating previous contradictory reports of Jewish ancestry. We further describe a major difference among Caucasus populations explained by the early presence of Judeans in the Southern and Central Caucasus. Our results have important implications for the demographic forces that shaped the genetic diversity in the Caucasus and for medical studies."} {"_id": "8433bee637213243749bc3ef8bdbd61d9d3a0f3e", "title": "Energy efficient wearable sensor node for IoT-based fall detection systems", "text": "Falls can cause serious traumas such as brain injuries and bone fractures, especially among elderly people. Fear of falling might reduce physical activities resulting in declining social interactions and eventually causing depression. To lessen the effects of a fall, timely delivery of medical treatment can play a vital role. In such a scenario, an IoT-based wearable system can pave the most promising way to mitigate serious consequences of a fall while providing the convenience of usage.
However, to deliver a sufficient degree of monitoring and reliability, wearable devices working at the core of fall detection systems are required to work for a prolonged period of time. In this work, we focus on energy efficiency of a wearable sensor node in an Internet-of-Things (IoT) based fall detection system. We propose the design of a tiny, lightweight, flexible and energy efficient wearable device. We investigate different parameters (e.g. sampling rate, communication bus interface, transmission protocol, and transmission rate) impacting the energy consumption of the wearable device. In addition, we provide a comprehensive analysis of energy consumption of the wearable in different configurations and operating conditions. Furthermore, we provide hints (hardware and software) for system designers implementing the optimal wearable device for IoT-based fall detection systems in terms of energy efficiency and high quality of service. The results clearly indicate that the proposed sensor node is novel and energy efficient. In a critical condition, the wearable device can be used continuously for 76 h with a 1000 mAh li-ion battery."} {"_id": "27b7e8f3b11dfe12318f8ff10f1d4a60e144a646", "title": "Predicting function from sequence in venom peptide families", "text": "Toxins from animal venoms are small peptides that recognize specific molecular targets in the brains of prey or predators. Next generation sequencing has uncovered thousands of diverse toxin sequences, but the functions of these peptides are poorly understood. Here we demonstrate that the use of machine learning techniques on sequence-derived features enables high accuracy in the task of predicting a toxin\u2019s functionality using only its amino acid sequence. Comparison of the performance of several learning algorithms in this prediction task demonstrates that both physiochemical properties of the amino acid residues in a sequence as well as noncontiguous sequence motifs can be used independently to model the sequence dependence of venom function. We rationalize the observed model performance using unsupervised learning and make broad predictions about the distribution of toxin functions in the venome. Keywords\u2014Bioinformatics, machine learning, protein function prediction, venomics."} {"_id": "bd8d9e1b3a192fcd045c7a3389920ac98097e774", "title": "ParaPIM: a parallel processing-in-memory accelerator for binary-weight deep neural networks", "text": "Recent algorithmic progression has brought competitive classification accuracy despite constraining neural networks to binary weights (+1/-1). These findings show remarkable optimization opportunities to eliminate the need for computationally-intensive multiplications, reducing memory access and storage. In this paper, we present the ParaPIM architecture, which transforms current Spin Orbit Torque Magnetic Random Access Memory (SOT-MRAM) sub-arrays to massively parallel computational units capable of running inferences for Binary-Weight Deep Neural Networks (BWNNs). ParaPIM's in-situ computing architecture can be leveraged to greatly reduce energy consumption dealing with convolutional layers, accelerate BWNN inference, eliminate unnecessary off-chip accesses and provide ultra-high internal bandwidth.
The device-to-architecture co-simulation results indicate ~4x higher energy efficiency and 7.3x speedup over recent processing-in-DRAM acceleration, or roughly 5x higher energy efficiency and 20.5x speedup over recent ASIC approaches, while maintaining inference accuracy comparable to baseline designs."} {"_id": "7902e4fb3e30e085c0b88ea84c611be2b601f0d7", "title": "Automated Transplantation and Differential Testing for Clones", "text": "Code clones are common in software. When applying similar edits to clones, developers often find it difficult to examine the runtime behavior of clones. The problem is exacerbated when some clones are tested, while their counterparts are not. To reuse tests for similar but not identical clones, Grafter transplants one clone to its counterpart by (1) identifying variations in identifier names, types, and method call targets, (2) resolving compilation errors caused by such variations through code transformation, and (3) inserting stub code to transfer input data and intermediate output values for examination. To help developers examine behavioral differences between clones, Grafter supports fine-grained differential testing at both the test outcome level and the intermediate program state level. In our evaluation on three open source projects, Grafter successfully reuses tests in 94% of clone pairs without inducing build errors, demonstrating its automated code transplantation capability. To examine the robustness of Grafter, we systematically inject faults using a mutation testing tool, Major, and detect behavioral differences induced by seeded faults. Compared with a static cloning bug finder, Grafter detects 31% more mutants using the test-level comparison and almost 2X more using the state-level comparison. This result indicates that Grafter should effectively complement static cloning bug finders."} {"_id": "0c8ce51f3384208518c328bd0306507079102d55", "title": "Bone graphs: Medial shape parsing and abstraction", "text": "The recognition of 3-D objects from their silhouettes demands a shape representation which is stable with respect to minor changes in viewpoint and articulation. This can be achieved by parsing a silhouette into parts and relationships that do not change across similar object views. Medial descriptions, such as skeletons and shock graphs, provide part-based decompositions but suffer from instabilities. As a result, similar shapes may be represented by dissimilar part sets. We propose a novel shape parsing approach which is based on identifying and regularizing the ligature structure of a medial axis, leading to a bone graph, a medial abstraction which captures a more stable notion of an object\u2019s parts. Our experiments show that it offers improved recognition and pose estimation performance in the presence of within-class deformation over the shock graph."} {"_id": "697bbd2f32b0eeb10783d87503d37e1e56ec5e2e", "title": "A Discrete Model for Inelastic Deformation of Thin Shells", "text": "We introduce a method for simulating the inelastic deformation of thin shells: we model plasticity and fracture of curved, deformable objects such as light bulbs, egg-shells and bowls. Our novel approach uses triangle meshes yet evolves fracture lines unrestricted to mesh edges.
We present a novel measure of bending strain expressed in terms of surface invariants such as lengths and angles. We also demonstrate simple techniques to improve the robustness of standard timestepping as well as collision-response algorithms."} {"_id": "98431da7222ee3fe12d277facf5ca1561c56d4f3", "title": "The estimation of the gradient of a density function, with applications in pattern recognition", "text": "Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality problems, with the ultimate goal of providing further understanding of these problems in terms of density gradients."} {"_id": "721e64bfd3158a77c55d59dd6415570594a72e9c", "title": "NVIDIA Jetson Platform Characterization", "text": "This study characterizes the NVIDIA Jetson TK1 and TX1 Platforms, both built on an NVIDIA Tegra System on Chip and combining a quad-core ARM CPU and an NVIDIA GPU. Their heterogeneous nature, as well as their wide operating frequency range, makes it hard for application developers to reason about performance and determine which optimizations are worth pursuing. This paper attempts to inform developers\u2019 choices by characterizing the platforms\u2019 performance using Roofline models obtained through an empirical measurement-based approach as well as through a case study of a heterogeneous application (matrix multiplication). Our results highlight a difference of more than an order of magnitude in compute performance between the CPU and GPU on both platforms. Given that the CPU and GPU share the same memory bus, their Roofline models\u2019 balance points are also more than an order of magnitude apart. We also explore the impact of frequency scaling: we build CPU and GPU Roofline profiles and characterize both platforms\u2019 balance point variation, power consumption, and performance per watt as frequency is scaled. The characterization we provide can be used in three main ways. First, given an application, it can inform the choice and number of processing elements to use (i.e., CPU/GPU and number of cores) as well as the optimizations likely to lead to high performance gains. Second, this characterization indicates that developers can use frequency scaling to tune the Jetson Platform to suit the requirements of their applications. Third, given a required power/performance budget, application developers can identify the appropriate parameters to use to tune the Jetson platforms to their specific workload requirements. We expect that this optimization approach can lead to overall gains in performance and/or power efficiency without requiring application changes."} {"_id": "3972dc2d2306c48135b6dfa587a5433d0b75b1cd", "title": "Scan, Attend and Read: End-to-End Handwritten Paragraph Recognition with MDLSTM Attention", "text": "We present an attention-based model for end-to-end handwriting recognition. Our system does not require any segmentation of the input paragraph. The model is inspired by the differentiable attention models presented recently for speech recognition, image captioning or translation.
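The "simple mean-shift estimate" in the density-gradient abstract above iterates toward a mode of the estimated density by repeatedly moving a point to the kernel-weighted mean of the data. Below is a minimal Gaussian-kernel sketch on invented data; the bandwidth and starting point are arbitrary choices for illustration, not the paper's.

```python
import numpy as np

def mean_shift(points, x0, bandwidth=1.0, iters=50):
    """Gaussian-kernel mean shift: each step moves x to the weighted mean
    of the data, following the estimated density gradient uphill."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-6:   # converged to a mode
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (200, 2)),    # mode near (0, 0)
                  rng.normal(3, 0.3, (200, 2))])   # mode near (3, 3)
print(mean_shift(data, x0=[2.5, 2.5]))  # converges near the (3, 3) mode
```

Running the same iteration from every data point and grouping points that reach the same mode is the clustering application the abstract mentions.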
The main difference is the implementation of covert and overt attention with a multi-dimensional LSTM network. Our principal contribution towards handwriting recognition lies in the automatic transcription without a prior segmentation into lines, which was critical in previous approaches. Moreover, the system is able to learn the reading order, enabling it to handle bidirectional scripts such as Arabic. We carried out experiments on the well-known IAM Database and report encouraging results which bring hope of performing full paragraph transcription in the near future."} {"_id": "6cbb6db2561ecee3b24e22ee060b01068cba6b5a", "title": "Accidental displacement and migration of endosseous implants into \nadjacent craniofacial structures: A review and update", "text": "OBJECTIVES\nAccidental displacement of endosseous implants into the maxillary sinus is an unusual but potential complication in implantology procedures due to the special features of the posterior aspect of the maxillary bone; there is also a possibility of migration throughout the upper paranasal sinuses and adjacent structures. The aim of this paper is to review the published literature about accidental displacement and migration of dental implants into the maxillary sinus and other adjacent structures.\n\n\nSTUDY DESIGN\nA review has been done based on a search in the main on-line medical databases looking for papers about migration of dental implants published in major oral surgery, periodontal, dental implant and ear-nose-throat journals, using the keywords \"implant,\" \"migration,\" \"complication,\" \"foreign body\" and \"sinus.\"\n\n\nRESULTS\n24 articles showing displacement or migration to maxillary, ethmoid and sphenoid sinuses, orbit and cranial fossae, with different degrees of associated symptoms, were identified. Techniques found to solve these clinical issues include the Caldwell-Luc approach, a transoral endoscopic approach via the canine fossa, and transnasal functional endoscopic surgery.\n\n\nCONCLUSION\nBefore removing the foreign body, a correct diagnosis should be made in order to evaluate the functional status of the ostiomeatal complex and the degree of involvement of the paranasal sinuses and other affected structures, determining the size and the exact location of the foreign body. After a complete diagnosis, an indicated procedure for every case would be decided."} {"_id": "bc6f2144ab55022e10d623f94f3398595547be38", "title": "Language and life history: a new perspective on the development and evolution of human language.", "text": "It has long been claimed that Homo sapiens is the only species that has language, but only recently has it been recognized that humans also have an unusual pattern of growth and development. Social mammals have two stages of pre-adult development: infancy and juvenility. Humans have two additional prolonged and pronounced life history stages: childhood, an interval of four years extending between infancy and the juvenile period that follows, and adolescence, a stage of about eight years that stretches from juvenility to adulthood. We begin by reviewing the primary biological and linguistic changes occurring in each of the four pre-adult ontogenetic stages in human life history. Then we attempt to trace the evolution of childhood and juvenility in our hominin ancestors.
We propose that several different forms of selection applied in infancy and childhood; and that, in adolescence, elaborated vocal behaviors played a role in courtship and intrasexual competition, enhancing fitness and ultimately integrating performative and pragmatic skills with linguistic knowledge in a broad faculty of language. A theoretical consequence of our proposal is that fossil evidence of the uniquely human stages may be used, with other findings, to date the emergence of language. If important aspects of language cannot appear until sexual maturity, as we propose, then a second consequence is that the development of language requires the whole of modern human ontogeny. Our life history model thus offers new ways of investigating, and thinking about, the evolution, development, and ultimately the nature of human language."} {"_id": "8b0723fa5c33193386f1040ca9991abca969a827", "title": "Gender differences in the relationship between internet addiction and depression: A cross-lagged study in Chinese adolescents", "text": "The present study explored the role of gender in the association between Internet addiction and depression. Three-wave longitudinal panel data were collected from self-reported questionnaires that were completed by 1715 adolescents in grades 6-8 in China. Cross-lagged structural equation modeling was used to examine the relationship between Internet addiction and depression. In male adolescents, depression was found to significantly predict subsequent Internet addiction, suggesting that depression was the cause of Internet addiction and supporting the mood enhancement hypothesis. In female adolescents, Internet addiction was found to significantly predict subsequent depression, indicating that Internet addiction leads to depression and supporting the social displacement hypothesis. These results indicate that the relationship between Internet addiction and depression depended on gender. In addition, it was found that males and females exhibit different behavioral patterns and motivations of Internet usage. Males were more likely to use the Internet for pleasure and less likely to surf the Internet to search for information, compared with females. Although both males and females were prone to surfing the Internet alone, males were more likely to go online with friends compared with females. These findings suggest that gender-specific preventative and interventional strategies should be developed to reduce Internet addiction."} {"_id": "383f1f2ceb32557690b6a0abf6aab48cb98552ff", "title": "Facebook, Twitter and Google Plus for Breaking News: Is There a Winner?", "text": "Twitter is widely seen as being the go-to place for breaking news. Recently, however, competing social media have begun to carry news. Here we examine how Facebook, Google Plus and Twitter report on breaking news. We consider coverage (whether news events are reported) and latency (the time when they are reported). Using data drawn from three weeks in December 2013, we identify 29 major news events, ranging from celebrity deaths, plague outbreaks to sports events. We find that all media carry the same major events, but Twitter continues to be the preferred medium for breaking news, almost consistently leading Facebook or Google Plus.
Facebook and Google Plus largely repost newswire stories and their main research value is that they conveniently package multiple sources of information together."} {"_id": "c97901da440e70bb6085b118d5f3f3190fc5eaf0", "title": "A Compact Circularly Polarized Cross-Shaped Slotted Microstrip Antenna", "text": "A compact cross-shaped slotted microstrip patch antenna is proposed for circularly polarized (CP) radiation. A symmetric, cross-shaped slot is embedded along one of the diagonal axes of the square patch for CP radiation and antenna size reduction. The structure is asymmetric (unbalanced) along the diagonal axes. The overall size of the antenna with CP radiation can be reduced by increasing the perimeter of the symmetric cross-shaped slot within the first patch quadrant of the square patch. The performance of the CP radiation is also studied by varying the size and angle of the cross-shaped slot. A measured 3-dB axial-ratio (AR) bandwidth of around 6.0 MHz is achieved with the CP cross-shaped slotted microstrip antenna, with an 18.0 MHz 10-dB return-loss bandwidth. The measured boresight gain is more than 3.8 dBic over the operating band, while the overall antenna volume is 0.273\u03bbo \u00d7 0.273\u03bbo \u00d7 0.013\u03bbo (\u03bbo: the operating wavelength at 910 MHz)."} {"_id": "35c12a61ada36fd9b9f89176c927bb53af6f2466", "title": "Linkages between Depressive Symptomatology and Internet Harassment among Young Regular Internet Users", "text": "Recent reports indicate 97% of youth are connected to the Internet. As more young people have access to online communication, it is integrally important to identify youth who may be more vulnerable to negative experiences. Based upon accounts of traditional bullying, youth with depressive symptomatology may be especially likely to be the target of Internet harassment. The current investigation will examine the cross-sectional relationship between depressive symptomatology and Internet harassment, as well as underlying factors that may help explain the observed association. Youth between the ages of 10 and 17 (N = 1,501) participated in a telephone survey about their Internet behaviors and experiences. Subjects were required to have used the Internet at least six times in the previous 6 months to ensure a minimum level of exposure. The caregiver who self-identified as most knowledgeable about the young person's Internet behaviors was also interviewed. The odds of reporting an Internet harassment experience in the previous year were more than three times higher (OR: 3.38, CI: 1.78, 6.45) for youth who reported major depressive symptomatology compared to mild/absent symptomatology. When female and male respondents were assessed separately, the adjusted odds of reporting Internet harassment for males who also reported DSM IV symptoms of major depression were more than three times greater (OR: 3.64, CI: 1.16, 11.39) than for males who indicated mild or no symptoms of depression. No significant association was observed among otherwise similar females. Instead, the association was largely explained by differences in Internet usage characteristics and other psychosocial challenges. Internet harassment is an important public mental health issue affecting youth today. Among young, regular Internet users, those who report DSM IV-like depressive symptomatology are significantly more likely to also report being the target of Internet harassment.
Future studies should focus on establishing the temporality of events, that is, whether young people report depressive symptoms in response to the negative Internet experience, or whether symptomatology confers risks for later negative online incidents. Based on these cross-sectional results, gender differences in the odds of reporting an unwanted Internet experience are suggested, and deserve special attention in future studies."} {"_id": "c1b8ba97aa88210a02affe2f92826e059c729c8b", "title": "Exploration of adaptive gait patterns with a reconfigurable linkage mechanism", "text": "Legged robots are able to move across irregular terrains and some can be energy efficient, but they are often constrained by a limited range of gaits, which can restrict their locomotion capabilities considerably. This paper reports a reconfigurable design approach to robotic legged locomotion that produces a wide variety of gait cycles, opening new possibilities for innovative applications. In this paper, we present a distance-based formulation and its application to solve the position analysis problem of a standard Theo Jansen mechanism. Our objective in this study is to identify, by changing the configuration of a linkage, novel gait patterns of interest for a walking platform. The exemplary gait variations presented in this work demonstrate the feasibility of our approach, and considerably extend the capabilities of the original design to not only produce novel and useful gait patterns but also to realize behaviors beyond locomotion."} {"_id": "018300f5f0e679cee5241d9c69c8d88e00e8bf31", "title": "Neural Variational Inference and Learning in Belief Networks", "text": "\u2022 We introduce a simple, efficient, and general method for training directed latent variable models. \u2013 Can handle both discrete and continuous latent variables. \u2013 Easy to apply \u2013 requires no model-specific derivations. \u2022 Key idea: Train an auxiliary neural network to perform inference in the model of interest by optimizing the variational bound. \u2013 Was considered before for Helmholtz machines and rejected as infeasible due to high variance of inference net gradient estimates. \u2022 We make the approach practical using simple and general variance reduction techniques. \u2022 Promising document modelling results using sigmoid belief networks."} {"_id": "0a10d64beb0931efdc24a28edaa91d539194b2e2", "title": "Efficient Estimation of Word Representations in Vector Space", "text": "We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities."} {"_id": "32cbd065ac9405530ce0b1832a9a58c7444ba305", "title": "Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments", "text": "We address the problem of part-of-speech tagging for English data from the popular microblogging service Twitter. We develop a tagset, annotate data, develop features, and report tagging results nearing 90% accuracy. 
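As a side note to the word-representation record above, the similarity and analogy evaluations it mentions reduce to simple vector arithmetic; a toy sketch with made-up 3-dimensional vectors (real models use hundreds of dimensions learned from text):

```python
import numpy as np

vecs = {  # toy vectors, for illustration only
    "king":  np.array([0.8, 0.6, 0.1]),
    "man":   np.array([0.7, 0.1, 0.0]),
    "woman": np.array([0.6, 0.1, 0.5]),
    "queen": np.array([0.7, 0.6, 0.6]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# analogy task: king - man + woman should land nearest to queen
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max((w for w in vecs if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(target, vecs[w]))
print(best)  # queen
```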
The data and tools have been made available to the research community with the goal of enabling richer text analysis of Twitter and related social media data sets."} {"_id": "040522d17bb540726a2e8d45ee264442502723a0", "title": "The Helmholtz Machine", "text": "Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways."} {"_id": "3be23e51455b39a2819ecfd86b8bb5ba4716679f", "title": "A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion", "text": "In this paper, we present a flexible new technique for single viewpoint omnidirectional camera calibration. The proposed method only requires the camera to observe a planar pattern shown at a few different orientations. Either the camera or the planar pattern can be freely moved. No a priori knowledge of the motion is required, nor a specific model of the omnidirectional sensor. The only assumption is that the image projection function can be described by a Taylor series expansion whose coefficients are estimated by solving a two-step least-squares linear minimization problem. To test the proposed technique, we calibrated a panoramic camera having a field of view greater than 200\u00b0 in the vertical direction, and we obtained very good results. To investigate the accuracy of the calibration, we also used the estimated omni-camera model in a structure from motion experiment. We obtained a 3D metric reconstruction of a scene from two highly distorted omnidirectional images by using image correspondences only. Compared with classical techniques, which rely on a specific parametric model of the omnidirectional camera, the proposed procedure is independent of the sensor, easy to use, and flexible."} {"_id": "7eb7d3529adf3954a7704d0e502178ca10c79e0b", "title": "The effective use of Benford\u2019s Law to assist in detecting fraud in accounting data", "text": "Benford\u2019s law has been promoted as providing the auditor with a tool that is simple and effective for the detection of fraud. The purpose of this paper is to assist auditors in the most effective use of digital analysis based on Benford\u2019s law. The law is based on a peculiar observation that certain digits appear more frequently than others in data sets. For example, in certain data sets, it has been observed that more than 30% of numbers begin with the digit one. After discussing the background of the law and the development of its use in auditing, we show where digital analysis based on Benford\u2019s law can most effectively be used and where auditors should exercise caution. 
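The digit frequencies behind Benford's law, and the kind of goodness-of-fit screening an auditor would run, are easy to state precisely; a minimal Python sketch with hypothetical first-digit counts:

```python
import math

def benford_p(d):
    """Expected probability that the leading digit is d (d = 1..9)."""
    return math.log10(1 + 1 / d)

observed = [302, 176, 125, 97, 79, 67, 58, 51, 45]   # hypothetical counts
n = sum(observed)
chi2 = sum((obs - n * benford_p(d)) ** 2 / (n * benford_p(d))
           for d, obs in zip(range(1, 10), observed))
print(f"chi-square = {chi2:.2f} on 8 df")  # large values flag deviation from Benford
```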
Specifically, we identify data sets which can be expected to follow Benford\u2019s distribution, discuss the power of statistical tests, the types of fraud that would and would not be detected by such analysis, and the potential problems that arise when an account contains too few observations, as well as issues related to the base rate of fraud. An actual example is provided demonstrating where Benford\u2019s law proved successful in identifying fraud in a population of accounting data."} {"_id": "f2bce820b7f0f3ccf0554b105bfa2ded636db77a", "title": "Systematic Thinking Fostered by Illustrations in Scientific Text", "text": "In 2 experiments, students who lacked prior knowledge about car mechanics read a passage about vehicle braking systems that either contained labeled illustrations of the systems, illustrations without labels, labels without illustrations, or no labeled illustrations. Students who received passages that contained labeled illustrations of braking systems recalled more explanative than nonexplanative information as compared to control groups, and performed better on problem solving transfer but not on verbatim recognition as compared to control groups. Results support a model of meaningful learning in which illustrations can help readers to focus their attention on explanative information in text and to reorganize the information into useful mental models."} {"_id": "b688b830da148f1c3a86916a42d9dd1b1cccd5ff", "title": "Pixel-Wise Attentional Gating for Scene Parsing", "text": "To achieve dynamic inference in pixel labeling tasks, we propose Pixel-wise Attentional Gating (PAG), which learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily "plugged in" to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation (FLOPs) while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-of-the-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by 10% without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints."} {"_id": "84599b3defa3dfa9cfcd33c1339ce422aa5d2b68", "title": "Current Mode Control Integrated Circuit with High Accuracy Current Sensing Circuit for Buck Converter", "text": "A current-mode control integrated circuit with a high-accuracy current sensing circuit (CSC) for a buck converter is presented in this paper. The proposed accurately integrated current-sensing inductor with the internal ramp can be used for DC-DC converter feedback. The proposed CSC does not need an op amp to implement, and has been fabricated in a standard 0.35 \u03bcm CMOS process. Simulation results show that the switching converter can be operated up to 1 MHz. 
The supply is suitable for a single-cell lithium-ion battery. Power efficiency is over 85% for supply voltages from 2.5 V to 5 V at an output current of 200 mA. The performance of the proposed circuit compares favorably with other circuits."} {"_id": "1eb7ba50214d0a7bb3247a0055f5700b7833c17e", "title": "Cold-Start Recommendation with Provable Guarantees: A Decoupled Approach", "text": "Although the matrix completion paradigm provides an appealing solution to the collaborative filtering problem in recommendation systems, some major issues, such as data sparsity and cold-start problems, still remain open. In particular, when the rating data for a subset of users or items is entirely missing, commonly known as the cold-start problem, the standard matrix completion methods are inapplicable due to the non-uniform sampling of available ratings. In recent years, there has been considerable interest in dealing with cold-start users or items that are principally based on the idea of exploiting other sources of information to compensate for this lack of rating data. In this paper, we propose a novel and general algorithmic framework based on matrix completion that simultaneously exploits the similarity information among users and items to alleviate the cold-start problem. In contrast to existing methods, our proposed recommender algorithm, dubbed DecRec, decouples the following two aspects of the cold-start problem to effectively exploit the side information: (i) the completion of a rating sub-matrix, which is generated by excluding cold-start users/items from the original rating matrix; and (ii) the transduction of knowledge from existing ratings to cold-start items/users using side information. This crucial difference prevents the error propagation of completion and transduction, and also significantly boosts the performance when appropriate side information is incorporated. The recovery error of the proposed algorithm is analyzed theoretically and, to the best of our knowledge, this is the first algorithm that addresses the cold-start problem with provable guarantees on performance. Additionally, we also address the problem where both cold-start user and item challenges are present simultaneously. We conduct thorough experiments on real datasets that complement our theoretical results. These experiments demonstrate the effectiveness of the proposed algorithm in handling the cold-start users/items problem and mitigating the data sparsity issue."} {"_id": "17f537b9f39cdb37ec26100530f69c615d03fa3b", "title": "Automatic query-based keyword and keyphrase extraction", "text": "Extracting keywords and keyphrases, mainly for identifying the content of a document, has an important role in text processing tasks such as text summarization, information retrieval, and query expansion. In this research, we introduce a new keyword/keyphrase extraction approach in which both single-document and multi-document keyword/keyphrase extraction techniques are considered. The proposed approach is particularly practical when a user is interested in additional data such as keywords/keyphrases related to a topic or query. In the proposed approach, first a set of documents is retrieved based on the user's query, then a single-document keyword extraction method is applied to extract candidate keywords/keyphrases from each retrieved document. Finally, a new re-scoring scheme is introduced to extract the final keywords/keyphrases. 
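The record above does not spell out its re-scoring scheme, but the general shape of query-based re-scoring can be sketched: aggregate each candidate's per-document scores, weighted by how relevant each retrieved document is to the query. All names and scores below are illustrative assumptions, not the paper's method:

```python
from collections import defaultdict

retrieval_score = {"doc1": 0.9, "doc2": 0.6, "doc3": 0.3}   # doc relevance to query
candidates = {                                               # doc -> {keyword: score}
    "doc1": {"neural network": 0.8, "training": 0.5},
    "doc2": {"neural network": 0.6, "dropout": 0.7},
    "doc3": {"training": 0.9},
}

final = defaultdict(float)
for doc, kws in candidates.items():
    for kw, score in kws.items():
        final[kw] += retrieval_score[doc] * score   # reward query-relevant sources

for kw, score in sorted(final.items(), key=lambda kv: -kv[1]):
    print(f"{kw}: {score:.2f}")
```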
We have evaluated the proposed method based on the relationship between the final keywords/keyphrases and the initial user query, and on user satisfaction. Our experimental results show the extent to which the extracted keywords/keyphrases are relevant and well matched with the user's needs."} {"_id": "1d06bfa37282bda43a396dc99927b298d0288bfa", "title": "Category-aware Next Point-of-Interest Recommendation via Listwise Bayesian Personalized Ranking", "text": "Next Point-of-Interest (POI) recommendation has become an important task for location-based social networks (LBSNs). However, previous efforts suffer from high computational complexity, and the transition pattern between POIs has not been well studied. In this paper, we propose a twofold approach for next POI recommendation. First, the preferred next category is predicted by using a third-rank tensor optimized by a Listwise Bayesian Personalized Ranking (LBPR) approach. Specifically, we introduce two functions, namely the Plackett-Luce model and cross entropy, to generate the likelihood of a ranking list for posterior computation. Then POI candidates filtered by the predicted category are ranked based on the spatial influence and category ranking influence. The experiments on two real-world datasets demonstrate the significant improvements of our methods over several state-of-the-art methods."} {"_id": "49b3d71c415956a31b1031ae22920af6ea5bec9a", "title": "Crowdsourcing based spatial mining of urban emergency events using social media", "text": "With the advances of information and communication technologies, it is critical to improve the efficiency and accuracy of emergency management systems through modern data processing techniques. The past decade has witnessed tremendous technical advances in Sensor Networks, Internet/Web of Things, Cloud Computing, Mobile/Embedded Computing, Spatial/Temporal Data Processing, and Big Data, and these technologies have provided new opportunities and solutions to emergency management. GIS models and simulation capabilities are used to exercise response and recovery plans during non-disaster times. They help the decision-makers understand near real-time possibilities during an event. In this paper, a crowdsourcing based model for mining spatial information of urban emergency events is introduced. Firstly, basic definitions of the proposed method are given. Secondly, positive samples are selected to mine the spatial information of urban emergency events. Thirdly, location and GIS information are extracted from positive samples. Then, the real spatial information is determined based on address and GIS information. Finally, a case study on an urban emergency event is given."} {"_id": "a00bd22c2148fc0c2c32300742d9390431949f56", "title": "Attitudes towards following meat, vegetarian and vegan diets: an examination of the role of ambivalence", "text": "Vegetarianism within the U.K. is growing in popularity, with the current estimate of 7% of the population eating a vegetarian diet. This study examined differences between the attitudes and beliefs of four dietary groups (meat eaters, meat avoiders, vegetarians and vegans) and the extent to which attitudes influenced intentions to follow each diet. In addition, the role of attitudinal ambivalence as a moderator variable was examined. Completed questionnaires were obtained from 111 respondents (25 meat eaters, 26 meat avoiders, 34 vegetarians, 26 vegans). 
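For the Plackett-Luce model named in the next-POI record above, the likelihood of an observed ranking given per-item scores is a product of softmax terms over the not-yet-ranked items; a small sketch (category names and scores are hypothetical):

```python
import math

def plackett_luce_likelihood(scores, ranking):
    """scores: item -> real-valued score; ranking: items listed best-first."""
    remaining = list(ranking)
    prob = 1.0
    for item in ranking:
        denom = sum(math.exp(scores[j]) for j in remaining)
        prob *= math.exp(scores[item]) / denom
        remaining.remove(item)
    return prob

scores = {"food": 2.0, "shopping": 1.0, "gym": 0.5}
print(plackett_luce_likelihood(scores, ["food", "shopping", "gym"]))
```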
In general, predictions were supported, in that respondents displayed the most positive attitudes and beliefs towards their own diets, and the most negative attitudes and beliefs towards the diet most different from their own. Regression analyses showed that, as predicted by the Theory of Planned Behaviour, attitudes, subjective norm and perceived behavioural control were significant predictors of intention to follow each diet (apart from the vegetarian diet, where subjective norm was non-significant). In each case, attitudinal ambivalence was found to moderate the attitude-intention relationship, such that attitudes were found to be stronger predictors at lower levels of ambivalence. The results not only highlight the extent to which such alternative diets are an interesting focus for psychological research, but also lend further support to the argument that ambivalence is an important influence on attitude strength."} {"_id": "b07bfdebdf11b7ab3ea3d5f0087891c464c5e34d", "title": "A 29\u201330GHz 64-element Active Phased array for 5G Application", "text": "A 64-element 29\u201330 GHz active phased array for 5G millimeter wave applications is presented in this paper. The proposed phased array consists of 64 antenna elements, 64-channel T/R modules, 4 frequency conversion links, beam controlling circuitry, power management circuits and cooling fans, and is integrated in a very compact size (135 mm \u00d7 77 mm \u00d7 56 mm). Hybrid integration of GaAs and Si circuits is employed to achieve better RF performance. The architecture of the proposed phased array and the detailed design of the T/R modules and antennas are analyzed. In the OTA (over-the-air) measurement, the proposed phased array achieves a bandwidth of 1 GHz at the center frequency of 29.5 GHz, and the azimuth beamwidth is 12\u00b0 with a scanning range of \u00b145\u00b0. With the excitation of 800 MHz 64QAM signals, the transmitter beam achieves an EVM of 5.5% and an ACLR of \u221230.5 dBc with the PA working at \u221210 dB back-off, and the measured saturated EIRP is 63 dBm."} {"_id": "2472198a01624e6e398518929c88d8ead6a33473", "title": "mCloud: A Context-Aware Offloading Framework for Heterogeneous Mobile Cloud", "text": "Mobile cloud computing (MCC) has become a significant paradigm for bringing the benefits of cloud computing to mobile devices\u2019 proximity. Service availability along with performance enhancement and energy efficiency are primary targets in MCC. This paper proposes a code offloading framework, called mCloud, which consists of mobile devices, nearby cloudlets and public cloud services, to improve the performance and availability of MCC services. The effect of the mobile device context (e.g., network conditions) on offloading decisions is studied by proposing a context-aware offloading decision algorithm that provides code offloading decisions at runtime by selecting the wireless medium and the appropriate cloud resources for offloading. We also investigate failure detection and recovery policies for our mCloud system. We explain in detail the design and implementation of the mCloud prototype framework. We conduct real experiments on the implemented system to evaluate the performance of the algorithm. 
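The runtime decision mCloud makes can be illustrated with a toy cost model: pick the execution site with the lowest estimated completion time among those the current context says are reachable. The structure and numbers below are assumptions for illustration, not the paper's algorithm or API:

```python
def choose_site(task_cycles, input_bytes, ctx):
    # local execution needs no transfer; remote sites add transfer + RTT
    options = {"local": task_cycles / ctx["cpu_hz"]}
    for site in ("cloudlet", "cloud"):
        if ctx[site]["reachable"]:                     # context: current network state
            transfer = input_bytes / ctx[site]["bw_Bps"]
            compute = task_cycles / ctx[site]["cpu_hz"]
            options[site] = transfer + compute + ctx[site]["rtt_s"]
    best = min(options, key=options.get)
    return best, options

ctx = {"cpu_hz": 1e9,
       "cloudlet": {"reachable": True, "bw_Bps": 5e6, "cpu_hz": 8e9,  "rtt_s": 0.01},
       "cloud":    {"reachable": True, "bw_Bps": 1e6, "cpu_hz": 3e10, "rtt_s": 0.08}}
print(choose_site(task_cycles=5e9, input_bytes=2e6, ctx=ctx))  # -> cloudlet here
```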
Results indicate that the system and the embedded decision algorithm are able to provide decisions on selecting the wireless medium and cloud resources based on the different contexts of the mobile devices, and achieve a significant reduction in makespan and energy, with improved service availability, when compared with existing offloading schemes."} {"_id": "1613a9fe64fbc2228e52b021ad45041556cc77ef", "title": "A Review of Scar Scales and Scar Measuring Devices", "text": "OBJECTIVE\nPathologic scarring affects millions of people worldwide. Quantitative and qualitative measurement modalities are needed to effectively evaluate and monitor treatments.\n\n\nMETHODS\nThis article reviews the literature on available tools and existent assessment scales used to subjectively and objectively characterize scar.\n\n\nRESULTS\nWe describe the attributes and deficiencies of each tool and scale and highlight areas where further development is critical.\n\n\nCONCLUSION\nAn optimal, universal scar scoring system is needed in order to better characterize, understand and treat pathologic scarring."} {"_id": "9368b596fdc2af12a45defd3df6c94e39dd02d3a", "title": "WNtags: A Web-Based Tool For Image Labeling And Retrieval With Lexical Ontologies", "text": "The ever-growing number of image documents available on the Internet continuously motivates research in better annotation models and more efficient retrieval methods. Formal knowledge representation of objects and events in pictures, their interaction, as well as context complexity is no longer an option for a quality image repository, but a necessity. We present an ontology-based online image annotation tool, WNtags, and demonstrate its usefulness in several typical multimedia retrieval tasks using the International Affective Picture System emotionally annotated image database. WNtags is built around the WordNet lexical ontology but considers the Suggested Upper Merged Ontology as the preferred labeling formalism. WNtags uses sets of weighted WordNet synsets as high-level image semantic descriptors, and query matching is performed with word stemming and node distance metrics. We also elaborate on our near-future plans to expand image content description with induced affect, as in stimuli for research on human emotion and attention."} {"_id": "8443e3b50190f297874d2d76233f29dfb423069c", "title": "Paired Learners for Concept Drift", "text": "To cope with concept drift, we paired a stable online learner with a reactive one. A stable learner predicts based on all of its experience, whereas a reactive learner predicts based on its experience over a short, recent window of time. The method of paired learning uses differences in accuracy between the two learners over this window to determine when to replace the current stable learner, since the stable learner performs worse than does the reactive learner when the target concept changes. While the method uses the reactive learner as an indicator of drift, it uses the stable learner to predict, since the stable learner performs better than does the reactive learner when acquiring the target concept. Experimental results support these assertions. We evaluated the method by making direct comparisons to dynamic weighted majority, accuracy weighted ensemble, and streaming ensemble algorithm (SEA) using two synthetic problems, the Stagger concepts and the SEA concepts, and three real-world data sets: meeting scheduling, electricity prediction, and malware detection. 
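The replacement rule in the paired-learners record above is simple enough to sketch end to end; the majority-class learners below are toy stand-ins for arbitrary online classifiers, and the window and threshold values are illustrative, not the authors' settings:

```python
from collections import Counter, deque

class MajorityClass:                        # "stable": uses all experience
    def __init__(self): self.counts = Counter()
    def learn(self, x, y): self.counts[y] += 1
    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None

class WindowedMajority:                     # "reactive": recent window only
    def __init__(self, window): self.recent = deque(maxlen=window)
    def learn(self, x, y): self.recent.append(y)
    def predict(self, x):
        return Counter(self.recent).most_common(1)[0][0] if self.recent else None

class PairedLearner:
    def __init__(self, window=50, threshold=10):
        self.stable, self.reactive = MajorityClass(), WindowedMajority(window)
        self.diffs = deque(maxlen=window)   # 1 = reactive right, stable wrong
        self.threshold = threshold

    def predict(self, x):
        return self.stable.predict(x)       # the stable learner predicts

    def update(self, x, y):
        s_ok = self.stable.predict(x) == y
        r_ok = self.reactive.predict(x) == y
        self.diffs.append(int(r_ok and not s_ok))
        if sum(self.diffs) > self.threshold:    # drift suspected:
            self.stable = MajorityClass()       # replace the stable learner,
            for yy in self.reactive.recent:     # re-seeded from the recent window
                self.stable.learn(x, yy)
            self.diffs.clear()
        self.stable.learn(x, y)
        self.reactive.learn(x, y)
```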
Results suggest that, on these problems, paired learners outperformed or performed comparably to methods more costly in time and space."} {"_id": "0b2fe16ea31f59e44a0d244a12554d9554740b63", "title": "Intersection-priority based optimal RSU allocation for VANET", "text": "The Roadside Unit (RSU) is an essential unit in a vehicular ad-hoc network (VANET) for collecting and analyzing traffic data given from smart vehicles. Furthermore, RSUs can take part in controlling traffic flow for vehicles' secure driving by broadcasting locally analyzed data, forwarding some important messages, communicating with other RSUs, and so on. In order to maximize the availability of RSUs in the VANET, RSUs need to be fully distributed over an entire area, so that RSUs can make the best use of all traffic data gathered from every intersection. In this paper, we provide intersection-priority based RSU placement methods to find the optimal number and positions of RSUs for full distribution, providing maximal connectivity between RSUs while minimizing RSU setup costs. We propose three optimal algorithms: greedy, dynamic and hybrid algorithms. Finally, we provide simulated analyses of our algorithms using real urban roadmaps of JungGu/Jongrogu, YongsanGu, and GangnamGu in Seoul, each of which has a characteristic road style different from the others. We analyze how our algorithms work in such different types of roadways with real traffic data, and find the optimal number and positions of RSUs in these areas."} {"_id": "6c3c36fbc2cf24baf2301e80da57ed68cab97cd6", "title": "Repeated labeling using multiple noisy labelers", "text": "This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction of predictive models. With the outsourcing of small tasks becoming easier, for example via Amazon\u2019s Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i)\u00a0Repeated-labeling can improve label quality and model quality, but not always. (ii)\u00a0When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii)\u00a0As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv)\u00a0Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a set of robust techniques that combine different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial."} {"_id": "1d1da2ef88928cf6174c9c53e0543665bc285b68", "title": "Improving literacy in developing countries using speech recognition-supported games on mobile devices", "text": "Learning to read in a second language is challenging, but highly rewarding. 
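A worked example for the repeated-labeling record above: with N independent labelers, each correct with probability p, the majority-vote label is correct with probability given by the binomial tail (shown here for odd N), which quantifies when buying extra noisy labels pays off:

```python
import math

def majority_quality(p, n):
    """P(majority of n labels is correct) when each label is correct w.p. p."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 11):
    print(n, round(majority_quality(0.7, n), 3))
# 1: 0.7, 3: 0.784, 5: 0.837, 11: 0.922 -- quality climbs as labels are added
```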
For low-income children in developing countries, this task can be significantly more challenging because of a lack of access to high-quality schooling, but success can potentially improve economic prospects at the same time. A synthesis of research findings suggests that practicing recalling and vocalizing words for expressing an intended meaning could improve word reading skills - including reading in a second language - more than silent recognition of what the given words mean. Unfortunately, much language learning software does not support this instructional approach, owing to the technical challenges of incorporating speech recognition support to check that the learner is vocalizing the correct word. In this paper, we present results from a usability test and two subsequent experiments that explore the use of two speech recognition-enabled mobile games to help rural children in India read words with understanding. Through a working speech recognition prototype, we discuss two major contributions of this work: first, we give empirical evidence that shows the extent to which productive training (i.e. vocalizing words) is superior to receptive vocabulary training, and discuss the use of scaffolding hints to "unpack" factors in the learner's linguistic knowledge that may impact reading. Second, we discuss what our results suggest for future research in HCI."} {"_id": "742c42f5cd7c1f195fa83c8c1611ee7e62c9c81f", "title": "Making middleboxes someone else's problem: network processing as a cloud service", "text": "Modern enterprises almost ubiquitously deploy middlebox processing services to improve security and performance in their networks. Despite this, we find that today's middlebox infrastructure is expensive, complex to manage, and creates new failure modes for the networks that use them. Given the promise of cloud computing to decrease costs, ease management, and provide elasticity and fault-tolerance, we argue that middlebox processing can benefit from outsourcing to the cloud. Arriving at a feasible implementation, however, is challenging due to the need to achieve functional equivalence with traditional middlebox deployments without sacrificing performance or increasing network complexity.\n In this paper, we motivate, design, and implement APLOMB, a practical service for outsourcing enterprise middlebox processing to the cloud.\n Our discussion of APLOMB is data-driven, guided by a survey of 57 enterprise networks, the first large-scale academic study of middlebox deployment. We show that APLOMB solves real problems faced by network administrators, can outsource over 90% of middlebox hardware in a typical large enterprise network, and, in a case study of a real enterprise, imposes an average latency penalty of 1.1ms and median bandwidth inflation of 3.8%."} {"_id": "6b3c3e02cfc46ca94097934bec18333dde7cf77c", "title": "Improving Intrusion Detection System based on Snort rules for network probe attack detection", "text": "Data and network system security plays the most important role. An organization should find methods to protect its data and network systems to reduce the risk from attacks. The Snort Intrusion Detection System (Snort-IDS) is a network security tool. It has been widely used for protecting the networks of organizations. Snort-IDS utilizes rules to match the traffic of data packets. If a packet matches the rules, Snort-IDS generates alert messages. However, Snort-IDS contains many rules and it also generates a lot of false alerts. 
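Before the rule improvements described next, it helps to see the detection idea in plain code: a probe such as a port scan appears as one source touching many distinct destination ports within a short window. This is an illustrative Python sketch, not Snort rule syntax, and the thresholds are arbitrary:

```python
from collections import defaultdict

WINDOW_S, PORT_THRESHOLD = 5.0, 20   # illustrative values

def detect_probes(packets):
    """packets: time-ordered iterable of (timestamp, src_ip, dst_port)."""
    history = defaultdict(list)          # src_ip -> recent (time, port) pairs
    alerts = []
    for t, src, port in packets:
        recent = [(tt, pp) for tt, pp in history[src] if t - tt <= WINDOW_S]
        recent.append((t, port))
        history[src] = recent
        if len({pp for _, pp in recent}) > PORT_THRESHOLD:
            alerts.append((t, src, "possible port scan"))
    return alerts
```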
In this paper, we present a procedure to improve the Snort-IDS rules for network probe attack detection. For the performance evaluation, we utilized the data set from MIT-DARPA 1999, which includes normal and abnormal traffic. Firstly, we analyzed and explored the existing Snort-IDS rules in order to improve them. Secondly, we applied the Wireshark software to analyze the form of the attack data packets in the data set. Finally, the Snort-IDS rules were improved so that they can detect network probe attacks. We classified the attacks into several groups based on the nature of the network probe attack. In addition, we also compared the detection efficacy of the updated Snort-IDS rules with the Detection Scoring Truth. In the experimental results, the proposed Snort-IDS rules efficiently detected the network probe attacks compared to the Detection Scoring Truth, achieving higher accuracy. However, some alerts exceeded the attacks recorded in the Detection Scoring Truth, because some attacks occurred several times but the Detection Scoring Truth identified them as a single occurrence."} {"_id": "5f507abd8d07d3bee56820fd3a5dc2234d1c38ee", "title": "Risk management applied to software development projects in incubated technology-based companies: literature review, classification, and analysis", "text": ""} {"_id": "4dda236c57d9807d811384ffa714196c4999949d", "title": "Algebraic Distance on Graphs", "text": "Measuring the connection strength between a pair of vertices in a graph is one of the most important concerns in many graph applications. Simple measures such as edge weights may not be sufficient for capturing the effects associated with short paths of lengths greater than one. In this paper, we consider an iterative process that smooths an associated value for nearby vertices, and we present a measure of the local connection strength (called the algebraic distance, see [25]) based on this process. The proposed measure is attractive in that the process is simple, linear, and easily parallelized. An analysis of the convergence property of the process reveals that the local neighborhoods play an important role in determining the connectivity between vertices. We demonstrate the practical effectiveness of the proposed measure through several combinatorial optimization problems on graphs and hypergraphs."} {"_id": "2e7ebdd353c1de9e47fdd1cf0fce61bd33d87103", "title": "Comparing Speech Recognition Systems (Microsoft API, Google API And CMU Sphinx)", "text": "The idea of this paper is to design a tool that will be used to test and compare commercial speech recognition systems, such as the Microsoft Speech API and the Google Speech API, with open-source speech recognition systems such as Sphinx-4. The best way to compare automatic speech recognition systems in different environments is by using some audio recordings that were selected from different sources and calculating the word error rate (WER). Although the WER of the three aforementioned systems was acceptable, it was observed that the Google API is superior."} {"_id": "49ea217068781d3f3d07ef258b84a1fd4cae9528", "title": "Pellet: An OWL DL Reasoner", "text": "Reasoning capability is of crucial importance to many applications developed for the Semantic Web. Description Logics provide sound and complete reasoning algorithms that can effectively handle the DL fragment of the Web Ontology Language (OWL). 
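The word error rate used in the speech-recognition comparison above is the word-level edit distance (substitutions, insertions, deletions) divided by the reference length; a standard dynamic-programming sketch:

```python
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1): d[i][0] = i          # deletions
    for j in range(len(h) + 1): d[0][j] = j          # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2/6 ~ 0.33
```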
However, existing DL reasoners were implemented long before OWL came into existence and lack some features that are essential for Semantic Web applications, such as reasoning with individuals, querying capabilities, nominal support, elimination of the unique name assumption, and so forth. With these objectives in mind, we have implemented an OWL DL reasoner and deployed it in various kinds of applications."} {"_id": "1cb5dea2a8f6abf0ef61ce229ee866594b6c5228", "title": "Unsupervised Lesion Detection in Brain CT using Bayesian Convolutional Autoencoders", "text": "Normally, lesions are detected using supervised learning techniques that require labelled training data. We explore the use of Bayesian autoencoders to learn the variability of healthy tissue and detect lesions as unlikely events under the normative model. As a proof-of-concept, we test our method on registered 2D midaxial slices from CT imaging data. Our results indicate that our method achieves best performance in detecting lesions caused by bleeding compared to baselines."} {"_id": "1450296fb936d666f2f11454cc8f0108e2306741", "title": "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks", "text": "While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity."} {"_id": "639937b3a1b8bded3f7e9a40e85bd3770016cf3c", "title": "A 3D Face Model for Pose and Illumination Invariant Face Recognition", "text": "Generative 3D face models are a powerful tool in computer vision. They provide pose and illumination invariance by modeling the space of 3D faces and the imaging process. The power of these models comes at the cost of an expensive and tedious construction process, which has led the community to focus on more easily constructed but less powerful models. With this paper we publish a generative 3D shape and texture model, the Basel Face Model (BFM), and demonstrate its application to several face recognition tasks. We improve on previous models by offering higher shape and texture accuracy due to a better scanning device and fewer correspondence artifacts due to an improved registration algorithm. The same 3D face model can be fit to 2D or 3D images acquired under different situations and with different sensors using an analysis by synthesis method. The resulting model parameters separate pose, lighting, imaging and identity parameters, which facilitates invariant face recognition across sensors and data sets by comparing only the identity parameters. We hope that the availability of this registered face model will spur research in generative models. Together with the model we publish a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons."} {"_id": "6424b69f3ff4d35249c0bb7ef912fbc2c86f4ff4", "title": "Deep Learning Face Attributes in the Wild", "text": "Predicting face attributes in the wild is challenging due to complex face variations. 
We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art by a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts."} {"_id": "6b4da897dce4d6636670a83b64612f16b7487637", "title": "Learning from Simulated and Unsupervised Images through Adversarial Training", "text": "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data."} {"_id": "76705c60d9e41dddbb6e4e75c08dcb2b6fa23ed6", "title": "A Measure of Polarization on Social Media Networks Based on Community Boundaries", "text": "Polarization in social media networks is a fact in several scenarios such as political debates and other contexts such as same-sex marriage, abortion and gun control. Understanding and quantifying polarization is a long-term challenge for researchers from several areas, and it is also key information for tasks such as opinion analysis. In this paper, we perform a systematic comparison between social networks that arise from both polarized and non-polarized contexts. 
This comparison shows that the traditional polarization metric \u2013 modularity \u2013 is not a direct measure of antagonism between groups, since non-polarized networks may also be divided into fairly modular communities. To bridge this conceptual gap, we propose a novel polarization metric based on the analysis of the boundary of a pair of (potentially polarized) communities, which better captures the notions of antagonism and polarization. We then characterize polarized and non-polarized social networks according to the concentration of high-degree nodes in the boundary of communities, and find that polarized networks tend to exhibit a low concentration of popular nodes along the boundary. To demonstrate the usefulness of our polarization measures, we analyze opinions expressed on Twitter on the gun control issue in the United States, and conclude that our novel metrics help make sense of opinions expressed on online media."} {"_id": "8ae5dde36e2755fd9afcb8a62df8cc9e35c79cb1", "title": "Significant improvement in one-dimensional cursor control using Laplacian electroencephalography over electroencephalography.", "text": "OBJECTIVE\nBrain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control requires extensive user training. Furthermore, EEG has a low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements.\n\n\nAPPROACH\nEight human subjects were asked to imagine either 'left' or 'right' hand movement during one recording session to control the computer cursor using TCRE and disc electrodes.\n\n\nMAIN RESULTS\nThe obtained results show a significant improvement in accuracies using TCREs (44%-100%) compared to disc electrodes (30%-86%).\n\n\nSIGNIFICANCE\nThis study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training."} {"_id": "fdaaa830821d5693f709d3bfcdec1526f32d32af", "title": "Assessment of healthcare claims rejection risk using machine learning", "text": "Modern healthcare service records, called Claims, record the medical treatments by a Provider (Doctor/Clinic), medication advised, etc., along with the charges and the payments to be made by the patient and the Payer (insurance provider). Denial and rejection of healthcare claims is a significant administrative burden and source of loss to various healthcare providers and payers as well. Automating the identification of Claims prone to denial by reason, source, cause and other deciding factors is critical to lowering this burden of rework. We present classification methods based on Machine Learning (ML) to fully automate identification of such claims prone to rejection or denial with high accuracy, investigate the reasons for claims denial, and recommend methods to engineer features using Claim Adjustment Reason Codes (CARC) as features with high Information Gain. 
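Information gain, the feature-scoring criterion just mentioned, has a compact closed form for a binary feature against a binary denial label; a sketch with hypothetical claim counts (not the paper's data):

```python
import math

def entropy(pos, neg):
    out = 0.0
    for c in (pos, neg):
        if c:
            p = c / (pos + neg)
            out -= p * math.log2(p)
    return out

def info_gain(pos_with, neg_with, pos_without, neg_without):
    """Entropy of the label minus its expected entropy after splitting on the feature."""
    n = pos_with + neg_with + pos_without + neg_without
    parent = entropy(pos_with + pos_without, neg_with + neg_without)
    w = (pos_with + neg_with) / n
    return parent - w * entropy(pos_with, neg_with) \
                  - (1 - w) * entropy(pos_without, neg_without)

# hypothetical: 200 denied / 800 paid; a CARC code appears on 150 denied, 50 paid
print(round(info_gain(150, 50, 50, 750), 3))   # ~0.29 bits
```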
The ML engine reported is the first of its kind in claims risk identification and represents a novel, significant enhancement to the state of practice of using ML for automating and containing claims denial risks."} {"_id": "df39b32b8f2207c17ea19353591673244fda53eb", "title": "A 16 Gb/s receiver with DC wander compensated rail-to-rail AC coupling and passive linear-equalizer in 22 nm CMOS", "text": "A 16 Gb/s receiver implemented in 22 nm SOI CMOS technology is reported. The analog frontend accepts a rail-to-rail input common-mode imposed from the transmitter side. It consists of a baseline wander compensated passive linear equalizer that AC-couples the received signal to the subsequent active CTLE with a regulated common-mode level. The programmable passive linear equalizer features a frequency response suitable for low-frequency equalization such as for skin-effect losses. When its zero is programmed at 200 MHz minimum frequency, the measured maximum mid-band peaking is 7 dB. The receiver architecture is half-rate and comprises an 8-tap DFE and a baud-rate CDR. With no FFE at the transmitter, 0.9 Vppd PRBS31 NRZ data are recovered error-free (BER < 10^-12) across a copper channel with 34 dB attenuation at 8 GHz."} {"_id": "332c81b75c22ca272ccf0ca3237066b35ea81c3b", "title": "A Passive Gait-Based Weight-Support Lower Extremity Exoskeleton With Compliant Joints", "text": "This paper presents the design and analysis of a passive body weight (BW)-support lower extremity exoskeleton (LEE) with compliant joints to relieve compressive load in the knee. The biojoint-like mechanical knee decouples human gait into two phases, stance and swing, by a dual snap fit. The knee joint transfers the BW to the ground in the stance phases and is compliant to free the leg in the swing phases. Along with a leg dynamic model and a knee biomechanical model, the unmeasurable knee internal forces are simulated. The concept feasibility and dynamic models of the passive LEE design have been experimentally validated with measured plantar forces. The reduced knee forces confirm the effectiveness of the LEE in supporting human BW during walking and also provide a basis for computing the internal knee forces as a percentage of BW. Energy harvested from the hip spring reveals that the LEE can save human walking energy."} {"_id": "26d0b98825761cda7e1a79475dbf6dc140daffbb", "title": "A Second-Order Class-D Audio Amplifier", "text": "Class-D audio amplifiers are particularly efficient, and this efficiency has led to their ubiquity in a wide range of modern electronic appliances. Their output takes the form of a high-frequency square wave whose duty cycle (the ratio of on-time to the switching period) is modulated at low frequency according to the audio signal. A mathematical model is developed here for a second-order class-D amplifier design (i.e., containing one second-order integrator) with negative feedback. We derive exact expressions for the dominant distortion terms, corresponding to a general audio input signal, and confirm these predictions with simulations. We also show how the observed phenomenon of \u201cpulse skipping\u201d arises from an instability of the analytical solution upon which the distortion calculations are based, and we provide predictions of the circumstances under which pulse skipping will take place, based on a stability analysis. 
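The duty-cycle modulation described in the class-D record above can be illustrated in a few lines: compare the audio signal against a high-frequency triangle carrier and output the sign. Sampling rate and frequencies below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

fs, f_carrier, f_audio = 1_000_000, 250_000, 1_000   # Hz, illustrative
t = np.arange(0, 0.002, 1 / fs)

audio = 0.8 * np.sin(2 * np.pi * f_audio * t)          # audio signal in [-1, 1]
tri = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1    # triangle carrier in [-1, 1]
pwm = np.where(audio > tri, 1.0, -1.0)                 # class-D square-wave output

# After low-pass filtering, the mean of pwm over each carrier period tracks
# the audio signal; the duty cycle equals (audio + 1) / 2.
```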
These predictions are confirmed by simulations."} {"_id": "d2938415204bb6f99a069152cb954e4baa441bba", "title": "A Compact GPS Antenna for Artillery Projectile Applications", "text": "This letter presents a compact antenna suitable for the reception of GPS signals on artillery projectiles over 1.57-1.60 GHz. Four inverted-F-type elements are excited by a series feed network in equal magnitude and successive 90\u00b0 phase difference. The shape and form factor of the antenna are tailored so that the antenna can be easily installed inside an artillery fuze. Measurements show that the proposed antenna has a gain of 2.90-3.77 dBic, an axial ratio of 1.9-2.86 dB, and a reflection coefficient of less than -10 dB over 1.57-1.62 GHz."} {"_id": "a9c2ecffaf332d714a5c69adae1dad12031ee77a", "title": "A Graph Rewriting Approach for Converting Asynchronous ROMs into Synchronous Ones", "text": "Most FPGAs have Configurable Logic Blocks (CLBs) to implement combinational and sequential circuits and block RAMs to implement Random Access Memories (RAMs) and Read Only Memories (ROMs). Circuit design that minimizes the number of clock cycles is easy if we use asynchronous read operations. However, most FPGAs support synchronous read operations, but do not support asynchronous read operations. The main contribution of this paper is to provide a potent approach to resolving this problem. We assume that a circuit using asynchronous ROMs designed by a non-expert or quickly designed by an expert is given. Our goal is to convert this circuit with asynchronous ROMs into an equivalent circuit with synchronous ones. The resulting circuit with synchronous ROMs can be embedded into FPGAs. We also discuss several techniques to decrease the latency and increase the clock frequency of the resulting circuits. key words: FPGA, block RAMs, asynchronous read operations, rewriting algorithm"} {"_id": "c792d3aa4a0a2a93b6c443143588a19c645c66f4", "title": "Object-Oriented Design Process Model", "text": ""} {"_id": "289f1a3a127d0bc22b2abf4b897a03d934aec51b", "title": "Implementation of a Restricted Boltzmann Machine in a Spiking Neural Network", "text": "Restricted Boltzmann Machines (RBMs) have been demonstrated to perform efficiently on a variety of applications, such as dimensionality reduction and classification. Implementing RBMs on neuromorphic hardware has certain advantages, particularly from a concurrency and low-power perspective. This paper outlines some of the requirements involved for neuromorphic adaptation of an RBM and attempts to address these issues with suitably targeted modifications for sampling and weight updates. Results show the feasibility of such alterations, which will serve as a guide for future implementation of such algorithms in VLSI arrays of spiking neurons."} {"_id": "3c701a0fcf29817d3f22117b8b73993a4e0d303b", "title": "Citation-based bootstrapping for large-scale author disambiguation", "text": "We present a new, two-stage, self-supervised algorithm for author disambiguation in large bibliographic databases. In the first \u201cbootstrap\u201d stage, a collection of high-precision features is used to bootstrap a training set with positive and negative examples of coreferring authors. A supervised feature-based classifier is then trained on the bootstrap clusters and used to cluster the authors in a larger unlabeled dataset. 
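The two-stage bootstrap idea in the record above can be sketched as follows; the matching rule, features, and record fields are illustrative assumptions (with scikit-learn as a stand-in classifier), not the paper's exact design:

```python
from sklearn.linear_model import LogisticRegression

def high_precision_match(r1, r2):
    # stage 1 rule: same name string AND a shared coauthor -> likely same person
    return r1["name"] == r2["name"] and bool(set(r1["coauthors"]) & set(r2["coauthors"]))

def pair_features(r1, r2):
    return [float(r1["name"] == r2["name"]),
            len(set(r1["coauthors"]) & set(r2["coauthors"])),
            float(r2["id"] in r1.get("cited_ids", [])),   # cheap self-citation proxy
            len(set(r1["venues"]) & set(r2["venues"]))]

def train_disambiguator(records):
    # stage 2: train a pairwise classifier on bootstrap-labeled pairs
    X, y = [], []
    for i, r1 in enumerate(records):
        for r2 in records[i + 1:]:
            X.append(pair_features(r1, r2))
            y.append(int(high_precision_match(r1, r2)))
    return LogisticRegression().fit(X, y)
```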
Our self-supervised approach shares the advantages of unsupervised approaches (no need for expensive hand labels) as well as supervised approaches (a rich set of features that can be discriminatively trained). The algorithm disambiguates 54,000,000 author instances in Thomson Reuters\u2019 Web of Knowledge with a B3 F1 of .807. We analyze parameters and features, particularly those from citation networks, which have not been deeply investigated in author disambiguation. The most important citation feature is self-citation, which can be approximated without expensive extraction of the full network. For the supervised stage, the minor improvement due to other citation features (increasing F1 from .748 to .767) suggests they may not be worth the trouble of extracting from databases that don\u2019t already have them. A lean feature set without expensive abstract and title features performs 130 times faster with about equal F1."} {"_id": "17033fd4fff03228cd6a06518365b082b4b45f7f", "title": "The bright-side and the dark-side of CEO personality: examining core self-evaluations, narcissism, transformational leadership, and strategic influence.", "text": "This article reports on an examination of the relationships between chief executive officer (CEO) personality, transformational and transactional leadership, and multiple strategic outcomes in a sample of 75 CEOs of Major League Baseball organizations over a 100-year period. CEO bright-side personality characteristics (core self-evaluations) were positively related to transformational leadership, whereas dark-side personality characteristics (narcissism) of CEOs were negatively related to contingent reward leadership. In turn, CEO transformational and contingent reward leadership were related to 4 different strategic outcomes, including manager turnover, team winning percentage, fan attendance, and an independent rating of influence. CEO transformational leadership was positively related to ratings of influence, team winning percentage, and fan attendance, whereas contingent reward leadership was negatively related to manager turnover and ratings of influence."} {"_id": "ae15bbd7c206137fa8f8f5abc127e91bf59b7ddc", "title": "Early Detection and Quantification of Verticillium Wilt in Olive Using Hyperspectral and Thermal Imagery over Large Areas", "text": "Automatic methods for the early detection of plant diseases (i.e., visible symptoms at early stages of disease development) using remote sensing are critical for precision crop protection. Verticillium wilt (VW) of olive caused by Verticillium dahliae can be controlled only if detected at early stages of development. Linear discriminant analysis (LDA) and support vector machine (SVM) classification methods were applied to classify V. dahliae severity using remote sensing at large scale. High-resolution thermal and hyperspectral imagery were acquired with a manned platform which flew a 3000-ha commercial olive area. LDA reached an overall accuracy of 59.0% and a \u03ba of 0.487, while SVM obtained a higher overall accuracy, 79.2%, with a similar \u03ba, 0.495. However, LDA better classified trees at initial and low severity levels, reaching accuracies of 71.4% and 75.0%, respectively, in comparison with the 14.3% and 40.6% obtained by SVM. Normalized canopy temperature, chlorophyll fluorescence, structural, xanthophyll, chlorophyll, carotenoid and disease indices were found to be the best indicators for early and advanced stage infection by VW. 
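The LDA-versus-SVM comparison above follows a standard supervised-classification recipe; here is a runnable sketch on random stand-in features (real inputs would be the spectral and thermal indices named in the record, with severity labels from field assessment):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))        # 8 stand-in indices per tree
y = rng.integers(0, 4, size=400)     # severity levels 0..3 (random here)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for model in (LinearDiscriminantAnalysis(), SVC()):
    acc = model.fit(Xtr, ytr).score(Xte, yte)
    print(type(model).__name__, round(acc, 3))   # near chance on random data
```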
These results demonstrate that the methods developed in other studies at orchard scale are valid for flights in large areas comprising several olive orchards differing in soil and crop management characteristics."} {"_id": "3dc5d14cdcc0240cba9c571cac4360714ef18458", "title": "EMDR: a putative neurobiological mechanism of action.", "text": "Numerous studies have provided evidence for the efficacy of eye movement desensitization and reprocessing therapy (EMDR) in the treatment of posttraumatic stress disorder (PTSD), including recent studies showing it to be more efficient than therapist-directed flooding. But few theoretical explanations of how EMDR might work have been offered. Shapiro, in her original description of EMDR, proposed that its directed eye movements mimic the saccades of rapid eye movement sleep (REM), but provided no clear explanation of how such mimicry might lead to clinical improvement. We now revisit her original proposal and present a complete model for how EMDR could lead to specific improvement in PTSD and related conditions. We propose that the repetitive redirecting of attention in EMDR induces a neurobiological state, similar to that of REM sleep, which is optimally configured to support the cortical integration of traumatic memories into general semantic networks. We suggest that this integration can then lead to a reduction in the strength of hippocampally mediated episodic memories of the traumatic event as well as the memories' associated, amygdala-dependent, negative affect. Experimental data in support of this model are reviewed and possible tests of the model are suggested."} {"_id": "1e667b69915fef9070f063635ba01cdf229f5d8a", "title": "From ADHD to SAD: Analyzing the Language of Mental Health on Twitter through Self-Reported Diagnoses", "text": "Many significant challenges exist for the mental health field, but one in particular is a lack of data available to guide research. Language provides a natural lens for studying mental health \u2013 much existing work and therapy have strong linguistic components, so the creation of a large, varied, language-centric dataset could provide significant grist for the field of mental health research. We examine a broad range of mental health conditions in Twitter data by identifying self-reported statements of diagnosis. We systematically explore language differences between ten conditions with respect to the general population, and to each other. Our aim is to provide guidance and a roadmap for where deeper exploration is likely to be fruitful."} {"_id": "0e52fbadb7af607b4135189e722e550a0bd6e7cc", "title": "Camouflage of self-inflicted razor blade incision scars with carbon dioxide laser resurfacing and thin skin grafting.", "text": "BACKGROUND\nSelf-cutting using a razor blade is a type of self-mutilating behavior that leaves permanent and socially unacceptable scars with unique patterns, particularly on the upper extremities and anterior chest wall. These scars are easily recognized in the community and become a source of lifelong guilt, shame, and regret for the self-mutilators. In the presented clinical study, we aimed to investigate the effectiveness of carbon dioxide laser resurfacing and thin skin grafting in camouflaging self-inflicted razor blade incision scars.\n\n\nMETHODS\nA total of 26 anatomical sites (11 upper arm, 11 forearm, and four anterior chest) of 16 white male patients, whose ages ranged from 20 to 41 years (mean, 23.8 years), were treated between February of 2001 and August of 2003. 
Detailed psychiatric evaluation preoperatively; informing the patient that the procedure is a \"camouflage\" operation; trimming hypertrophic scars down to intact skin level; intralesional corticosteroid injection to hypertrophic scars; carbon dioxide laser resurfacing as a single unit; thin (0.2 to 0.3 mm) skin grafting; compressive dressing for 15 days; use of tubular bandage; and protection from sunlight for at least 6 months constituted the key points of the procedure.\n\n\nRESULTS\nThe scars were successfully camouflaged and converted to a socially acceptable appearance similar to a burn scar. Partial graft loss in one case and hyperpigmentation in another case were the complications. No new hypertrophic scar developed.\n\n\nCONCLUSIONS\nThe carbon dioxide laser resurfacing and thin skin grafting method is effective in camouflaging self-inflicted razor blade incision scars."} {"_id": "46946ff7eb0e0a42d33504690aad7ddbff83f31b", "title": "User modeling with personas", "text": "User demographic and behavior data obtained from real user observation provide valuable information for designers. Yet, such information can be misinterpreted if presented as statistical figures. Personas are fictitious user representations created in order to embody behaviors and motivations that a group of real users might express, representing them during the project development process. This article describes the persona as an effective tool for building a descriptive model of users."} {"_id": "65a2cb8a02795015b398856327bdccc36214cdc6", "title": "IOFlow: a software-defined storage architecture", "text": "In data centers, the IO path to storage is long and complex. It comprises many layers or \"stages\" with opaque interfaces between them. This makes it hard to enforce end-to-end policies that dictate a storage IO flow's performance (e.g., guarantee a tenant's IO bandwidth) and routing (e.g., route an untrusted VM's traffic through a sanitization middlebox). These policies require IO differentiation along the flow path and global visibility at the control plane. We design IOFlow, an architecture that uses a logically centralized control plane to enable high-level flow policies. IOFlow adds a queuing abstraction at data-plane stages and exposes this to the controller. The controller can then translate policies into queuing rules at individual stages. It can also choose among multiple stages for policy enforcement.\n We have built the queue and control functionality at two key OS stages-- the storage drivers in the hypervisor and the storage server. IOFlow does not require application or VM changes, a key strength for deployability. We have deployed a prototype across a small testbed with a 40 Gbps network and storage devices. We have built control applications that enable a broad class of multi-point flow policies that are hard to achieve today."} {"_id": "9f3b0bf6a3a083731d6c0fa0435fab5e68304dfc", "title": "Enterprise Architecture Characteristics in Context Enterprise Governance Based on COBIT 5 Framework", "text": "Enterprise architecture is a model-based attempt to manage and plan the evolution of information systems within an enterprise. In developing an enterprise architecture, several tools define the components in the system. Such a tool is known as an enterprise architecture (EA) framework.
In this paper, we present a method to build an enterprise architecture model in accordance with the needs of the organization by understanding the characteristics of EA frameworks such as Zachman, TOGAF, and FEAF. These frameworks are selected for determining the characteristics of an EA because they are the most widely used in industry and government. The COBIT 5 framework contains a process associated with enterprise architecture, APO03 Manage Enterprise Architecture. At this stage of the research, we describe the link between the characteristics of EA and this COBIT 5 process. The results contribute a recommendation on how to design an EA for an organization, based on the characteristics of EA, in the context of enterprise governance using the COBIT 5 framework."} {"_id": "76c44858b1a3f3add903a992f66b71f5cdcd18e3", "title": "MDig: Multi-digit Recognition using Convolutional Neural Network on Mobile", "text": "Multi-character recognition in arbitrary photographs on a mobile platform is difficult, in terms of both accuracy and real-time performance. In this paper, we focus on the domain of hand-written multi-digit recognition. Convolutional neural network (CNN) is the state-of-the-art solution for object recognition, and presents a workload that is both compute and data intensive. To reduce the workload, we train a shallow CNN offline, achieving 99.07% top-1 accuracy. We also utilize preprocessing and segmentation to reduce the input image size fed into the CNN. For CNN implementation on the mobile platform, we adopt and modify DeepBeliefSDK to support batching fully-connected layers. On an NVIDIA SHIELD tablet, the application processes a frame and extracts 32 digits in approximately 60 ms, and batching the fully-connected layers reduces the CNN runtime by another 12%."} {"_id": "2b0750d16db1ecf66a3c753264f207c2cb480bde", "title": "Mining Sequential Patterns", "text": "We are given a large database of customer transactions, where each transaction consists of customer-id, transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, albeit AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction."} {"_id": "8cc8c9ece3cd13beabbc07719f5ff694af59ba5b", "title": "Smart wheelchairs: A literature review.", "text": "Several studies have shown that both children and adults benefit substantially from access to a means of independent mobility. While the needs of many individuals with disabilities can be satisfied with traditional manual or powered wheelchairs, a segment of the disabled community finds it difficult or impossible to use wheelchairs independently. To accommodate this population, researchers have used technologies originally developed for mobile robots to create \"smart wheelchairs.\" Smart wheelchairs have been the subject of research since the early 1980s and have been developed on four continents.
This article presents a summary of the current state of the art and directions for future research."} {"_id": "7c09d08cdeb688aece28d41feb406dbbcda9c5ac", "title": "Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network", "text": "A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration."} {"_id": "9eeada49fc2cba846b4dad1012ba8a7ee78a8bb7", "title": "A New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA", "text": "This paper proposes a facial expression recognition system based on Gabor features using a novel local Gabor filter bank. Traditionally, a global Gabor filter bank with 5 frequencies and 8 orientations is often used to extract the Gabor feature. A lot of time is involved in this extraction, and the dimensions of such a Gabor feature vector are prohibitively high. A novel local Gabor filter bank with a part of the frequency and orientation parameters is proposed. In order to evaluate the performance of the local Gabor filter bank, we first employed a two-stage feature compression method, PCA plus LDA, to select and compress the Gabor feature, then adopted a minimum distance classifier to recognize facial expression. Experimental results show that the method is effective for dimension reduction and gives good recognition performance in comparison with the traditional entire Gabor filter bank. The best average recognition rate achieves 97.33% for the JAFFE facial expression database. Keywords: local Gabor filter bank, feature extraction, PCA, LDA, facial expression recognition. Facial expressions deliver rich information about human emotion and play an essential role in human communication. In order to facilitate a more intelligent and natural human machine interface of new multimedia products, automatic facial expression recognition [1][18][20] has been studied worldwide in recent years, and it has become a very active research area in computer vision and pattern recognition. Many approaches have been proposed for facial expression analysis from both static images and image sequences [12][18] in the literature. We focus on the recognition of facial expression from single digital images with Gabor feature extraction."} {"_id": "0c46b067779d7132c5fd097baadc261348afb261", "title": "Political Polarization on Twitter", "text": "In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations.
We examine two networks of political communication on Twitter, comprising more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. Using a combination of network clustering algorithms and manually-annotated data, we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. Surprisingly, this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically-opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically-opposed users. We conclude with statistical evidence in support of this hypothesis."} {"_id": "1ad8410d0ded269af4a0116d8b38842a7549f0ae", "title": "Measuring User Influence in Twitter: The Million Follower Fallacy", "text": "Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user\u2019s influence on others\u2014a concept that is crucial in sociology and viral marketing. In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveal very little about the influence of a user."} {"_id": "3531e0eb9ec8a6ac786146c71ead7f8d624fd2ca", "title": "TwitterRank: finding topic-sensitive influential twitterers", "text": "This paper focuses on the problem of identifying influential users of micro-blogging services. Twitter, one of the most notable micro-blogging services, employs a social-networking model called \"following\", in which each user can choose who she wants to \"follow\" to receive tweets from without requiring the latter to give permission first. In a dataset prepared for this study, it is observed that (1) 72.4% of the users in Twitter follow more than 80% of their followers, and (2) 80.5% of the users have 80% of users they are following follow them back. Our study reveals that the presence of \"reciprocity\" can be explained by the phenomenon of homophily. Based on this finding, TwitterRank, an extension of the PageRank algorithm, is proposed to measure the influence of users in Twitter. TwitterRank measures the influence taking both the topical similarity between users and the link structure into account.
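As a minimal sketch of the idea just described, assuming a toy following graph, an invented topical-similarity matrix, and uniform teleportation (a simplification of the paper's topic-specific formulation; none of these values come from the paper):

```python
# Sketch of a TwitterRank-style iteration: rank flows from follower i to
# followee j in proportion to their topical similarity, rather than being
# split uniformly as in the original PageRank. All numbers are toy values.
import numpy as np

follows = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # hypothetical graph
sim = np.array([[0.0, 0.9, 0.4, 0.1],              # sim[i, j]: topical
                [0.9, 0.0, 0.5, 0.2],              # similarity of i and j
                [0.4, 0.5, 0.0, 0.3],
                [0.1, 0.2, 0.3, 0.0]])
n = len(follows)

P = np.zeros((n, n))                               # transition matrix
for i, followees in follows.items():
    total = sum(sim[i, j] for j in followees)
    for j in followees:
        P[i, j] = sim[i, j] / total                # similarity-weighted

gamma = 0.85                                       # damping factor
rank = np.ones(n) / n
for _ in range(100):                               # power iteration
    rank = gamma * (rank @ P) + (1 - gamma) / n
print(rank.round(3))                               # influence scores
```

In this sketch the only departure from ordinary PageRank is the similarity-weighted transition matrix, which is the mechanism the abstract highlights.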
Experimental results show that TwitterRank outperforms the ranking Twitter currently uses, as well as other related algorithms, including the original PageRank and Topic-sensitive PageRank."} {"_id": "3f4558f0526a7491e2597941f99c14fea536288d", "title": "Author cocitation: A literature measure of intellectual structure", "text": ""} {"_id": "7e8a5e0a87fab337d71ce04ba02b7a5ded392421", "title": "Detecting and Tracking Political Abuse in Social Media", "text": "We study astroturf political campaigns on microblogging platforms: politically-motivated individuals and organizations that use multiple centrally-controlled accounts to create the appearance of widespread support for a candidate or opinion. We describe a machine learning framework that combines topological, content-based and crowdsourced features of information diffusion networks on Twitter to detect the early stages of viral spreading of political misinformation. We present promising preliminary results with better than 96% accuracy in the detection of astroturf content in the run-up to the 2010 U.S. midterm elections."} {"_id": "9da9786e6aba77c72516abb5cfa124402cfdb9f1", "title": "From content delivery today to information centric networking", "text": "Today, content delivery is a heterogeneous ecosystem composed of various independent infrastructures. The ever-increasing growth of Internet traffic has encouraged the proliferation of different architectures to serve content provider needs and user demand. Despite the differences among these technologies, their low-level implementations can be characterized by a few fundamental building blocks: network storage, request routing, and data transfer. Existing solutions are inefficient because they try to build an information centric service model over a network infrastructure which was designed to support host-to-host communications. The Information-Centric Networking (ICN) paradigm has been proposed as a possible solution to this mismatch. ICN integrates content delivery as a native network feature. The rationale is to architect a network that automatically interprets, processes, and delivers content (information) independently of its location. This paper makes the following contributions: 1) it identifies a set of building blocks for content delivery, 2) it surveys the most popular approaches to realize the above building blocks, 3) it compares content delivery solutions relying on the current Internet infrastructure with novel ICN approaches."} {"_id": "8afc4ead207602dee08c9482fcf65a947b15c5e9", "title": "\"I spy with my little eye!\": breadth of attention, inattentional blindness, and tactical decision making in team sports.", "text": "Failures of awareness are common when attention is otherwise engaged. Such failures are prevalent in attention-demanding team sports, but surprisingly no studies have explored the inattentional blindness paradigm in complex sport game-related situations. The purpose of this paper is to explore the link between breadth of attention, inattentional blindness, and tactical decision-making in team ball sports. A series of studies revealed that inattentional blindness exists in the area of team ball sports (Experiment 1). More tactical instructions can lead to a narrower breadth of attention, which increases inattentional blindness, whereas fewer tactical instructions widen the breadth of attention in the area of team ball sports (Experiment 2). Furthermore, meaningful exogenous stimuli reduce inattentional blindness (Experiment 3).
The results of all experiments are discussed in connection with consciousness and attention theories as well as creativity and training in team sports."} {"_id": "40dc541032e2a31313e5acdbde83d271d699f41c", "title": "State of the art of Trust and Reputation Systems in E-Commerce Context", "text": "This article proposes an in-depth comparative study of the most popular, most used and most analyzed Trust and Reputation Systems (TRS) according to the trust and reputation literature, in terms of specific trustworthiness criteria. This survey relies on a selection of trustworthiness criteria that analyze and evaluate the maturity and effectiveness of a TRS. These criteria describe the utility, the usability, the performance and the effectiveness of the TRS. We also provide a summary table comparing the TRS across a detailed and granular selection of trust and reputation aspects."} {"_id": "0d74e25ee00166b89792439556de170236a0c63f", "title": "Building a corpus of \"real\" texts for deception detection", "text": "Text-based deception detection is currently emerging as a vital multidisciplinary field due to its indisputable theoretical and practical value (police, security, and customs, including predatory communications, such as Internet scams). A very important issue associated with deception detection is designing valid text corpora. Most research has been carried out using texts produced in laboratory settings. There is a lack of \"real\" deceptive texts written when the stakes for deception are high, as such texts are obviously difficult to collect and label. In addition, studies in text-based deception detection have mostly been performed for Romance and Germanic languages. There are few studies dealing with deception detection in Slavic languages. In this paper one can find an overview of available text corpora used for studying text-based deception detection as well as the description of how the first corpus of \"real\" deceptive texts for Slavic languages was collected and labeled. We expect this corpus, once finished, to be a handy tool for developing and testing new deception detection techniques and for carrying out related cross-cultural studies."} {"_id": "98cef5c176f714a51ddfc8585b8985e968ea9d25", "title": "3D Human Activity Recognition with Reconfigurable Convolutional Neural Networks", "text": "Human activity understanding with 3D/depth sensors has received increasing attention in multimedia processing and interactions. This work targets developing a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structures for a category of activities, i.e., how the activities are to be decomposed for classification. Our model can be regarded as a structured deep architecture, as it extends the convolutional neural networks (CNNs) by incorporating structure alternatives. Specifically, we build the network consisting of 3D convolutions and max-pooling operators over the video segments, and introduce latent variables in each convolutional layer that manipulate the activation of neurons. Our model thus advances existing approaches in two aspects: (i) it acts directly on the raw inputs (grayscale-depth data) to conduct recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted accounting for the temporal variations of human activities, i.e.
the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters by using the back-propagation algorithm. Our approach is validated in challenging scenarios, and outperforms state-of-the-art methods. In addition, a large human activity database of RGB-D videos is presented."} {"_id": "2883a32fe32493c1519f404112cbdadd1fe90c7c", "title": "Formal analysis of privacy requirements specifications for multi-tier applications", "text": "Companies require data from multiple sources to develop new information systems, such as social networking, e-commerce and location-based services. Systems rely on complex, multi-stakeholder data supply-chains to deliver value. These data supply-chains have complex privacy requirements: privacy policies affecting multiple stakeholders (e.g. user, developer, company, government) regulate the collection, use and sharing of data over multiple jurisdictions (e.g. California, United States, Europe). Increasingly, regulators expect companies to ensure consistency between company privacy policies and company data practices. To address this problem, we propose a methodology to map policy requirements in natural language to a formal representation in Description Logic. Using the formal representation, we reason about conflicting requirements within a single policy and among multiple policies in a data supply chain. Further, we enable tracing data flows within the supply-chain. We derive our methodology from an exploratory case study of Facebook platform policy. We demonstrate the feasibility of our approach in an evaluation involving Facebook, Zynga and AOL-Advertising policies. Our results identify three conflicts that exist between Facebook and Zynga policies, and one conflict within the AOL Advertising policy."} {"_id": "d7377a8ac77be7041b189626a9dbef20ed37829c", "title": "e-health first impressions and visual evaluations: key design principles for attention and appeal", "text": "Design plays a critical role in the development of e-health, greatly impacting the outreach potential for pertinent health communication. Design influences viewers' initial evaluations of electronic displays of health information, as well as directly impacting the likelihood that one will attend to and favorably evaluate the information, both essential actions for processing the health concepts presented. Individuals with low health literacy, representing a hard-to-reach audience susceptible to worsened health outcomes, will benefit greatly from the application of theory-based design principles. Design principles that have been shown to appeal to and engage audiences are the necessary first step for effective message delivery. Design principles that directly increase attention, favorable evaluations, and information processing abilities include web aesthetics, visual complexity, affordances, prototypicality, and persuasive imagery. These areas of theory-driven design research should guide scholars in e-health investigation with research goals of broader outreach, reduction of disparities, and potential avenues for reduced health care costs.
Improving design by working with this hard-to-reach audience will simultaneously improve practice, as the applications of key design principles through theory-driven design research will allow practitioners to create effective e-health that will benefit people more broadly."} {"_id": "d88c3f2fa212e4ec1254e98b33380928299fa02b", "title": "Towards SVC-based Adaptive Streaming in information centric networks", "text": "HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for video streaming services. In HAS, each video is segmented and stored in different qualities. The client can dynamically select the most appropriate quality level to download, allowing it to adapt to varying network conditions. As the Internet was not designed to deliver such applications, optimal support for multimedia delivery is still missing. Information Centric Networking (ICN) is a recently proposed disruptive architecture that could solve this issue, where the focus is given to the content rather than to end-to-end connectivity. Due to the bandwidth unpredictability typical of ICN, standard AVC-based HAS performs quality selection sub-optimally, thus leading to a poor Quality of Experience (QoE). In this article, we propose to overcome this inefficiency by using Scalable Video Coding (SVC) instead. We identify the main advantages of SVC-based HAS over ICN and outline, both theoretically and via simulation, the research challenges to be addressed to optimize the delivered QoE."} {"_id": "f6c265af493c74cb7ef64b8ffe238e3f2487d133", "title": "A novel compact printed ACS fed dual-band antenna for Bluetooth/WLAN/WiMAX applications", "text": "In this research article, a compact dual-band asymmetric coplanar strip-fed printed antenna is designed and presented for Bluetooth, WLAN/WiMAX and public safety applications. The dual frequency operating bands (2.45 GHz and 5.25 GHz) have been achieved by attaching two simple meander-shaped radiating strips to the ACS feed line. The proposed antenna geometry is printed on a low-cost FR4 substrate with a thickness of 1.6 mm and overall dimensions of 13 \u00d7 21.3 mm, including the uniplanar ground plane. The -10 dB impedance bandwidth of the meandered ACS-fed dual-band monopole antenna is about 140 MHz (2.36-2.5 GHz) and 2500 MHz (4.5-7.0 GHz), respectively, which can cover the 2.4 GHz Bluetooth/WLAN, 5.2/5.8 GHz WLAN, 5.5 GHz WiMAX and 4.9 GHz US public safety bands. In addition to the simple geometry and wide impedance bandwidth features, the proposed structure exhibits bidirectional and omnidirectional radiation patterns in the E- and H-planes, respectively."} {"_id": "12d6bf07c3a530bfe5a962e807347ed7b3532d02", "title": "Table-top spatially-augmented reality: bringing physical models to life with projected imagery", "text": "Despite the availability of high-quality graphics systems, architects and designers still build scaled physical models of buildings and products. These physical models have many advantages; however, they are typically static in structure and surface characteristics. They are inherently lifeless. In contrast, high-quality graphics systems are tremendously flexible, allowing viewers to see alternative structures, facades, textures, cut-away views, and even dynamic effects such as changing lighting, moving automobiles, people, etc. We introduce a combination of these approaches that builds on our previously-published projector-based Spatially-Augmented Reality techniques.
The basic idea is to aim multiple ceiling-mounted light projectors inward to graphically augment table-top scaled physical models of buildings or products. This approach promises to provide very compelling hybrid visualizations that afford the benefits of both traditional physical models and modern computer graphics, effectively \"bringing to life\" table-top physical models."} {"_id": "4ab84a0dc95f51ea23f4a26178ceba4a7f04a38f", "title": "Visual Event Summarization on Social Media using Topic Modelling and Graph-based Ranking Algorithms", "text": "Due to the increasing popularity of microblogging platforms, the amount of messages (posts) related to public events, especially posts encompassing multimedia content, is steadily increasing. The inclusion of images can convey much more information about the event, compared to their text, which is typically very short (e.g., tweets). Although such messages can be quite informative regarding different aspects of the event, there is a lot of spam and redundancy making it challenging to extract pertinent insights. In this work, we describe a summarization framework that, given a set of social media messages about an event, aims to select a subset of images derived from them, that, at the same time, maximizes the relevance of the selected images and minimizes their redundancy. To this end, we propose a topic modeling technique to capture the relevance of messages to event topics and a graph-based algorithm to produce a diverse ranking of the selected high-relevance images. A user-centered evaluation on a large Twitter dataset around several real-world events demonstrates that the proposed method considerably outperforms a number of state-of-the-art summarization algorithms in terms of result relevance, while at the same time it is also highly competitive in terms of diversity. Namely, we get an improvement of 25% in terms of precision compared to the second best result, and 7% in terms of diversity."} {"_id": "8f242985963caaf265e921275f7ecc43d3381c89", "title": "Calibration of RGB camera with velodyne LiDAR", "text": "Calibration of the LiDAR sensor with an RGB camera finds its usage in many application fields, from enhancing image classification to environment perception and mapping. This paper presents a pipeline for mutual pose and orientation estimation of the mentioned sensors using a coarse-to-fine approach. Previously published methods use multiple views of a known chessboard marker for computing the calibration parameters, or they are limited to the calibration of sensors with a small mutual displacement only. Our approach presents a novel 3D marker for coarse calibration which can be robustly detected in both the camera image and the LiDAR scan. It also requires only a single pair of camera-LiDAR frames for estimating large sensor displacements. A subsequent refinement step searches for a more accurate calibration in a small subspace of the calibration parameters. The paper also presents a novel way to evaluate calibration precision using projection error."} {"_id": "e289f61e79f12738cca1bf0558e0e8c89219d928", "title": "Sketch2normal: deep networks for normal map generation", "text": "Normal maps are of great importance for many 2D graphics applications such as surface editing, re-lighting, texture mapping and 2D shading. Automatically inferring normal maps is highly desirable for graphics designers.
Many researchers have investigated the inference of normal maps from intuitive and flexible line drawings using traditional geometric methods, while our proposed deep network-based method shows more robustness and provides more plausible results."} {"_id": "e1cfd9fa3f55c89ff1a09bb8b5ae44485451fb99", "title": "Delayed enhancement of multitasking performance: Effects of anodal transcranial direct current stimulation on the prefrontal cortex", "text": "BACKGROUND\nThe dorsolateral prefrontal cortex (DLPFC) has been proposed to play an important role in neural processes that underlie multitasking performance. However, this claim is underexplored in terms of direct causal evidence.\n\n\nOBJECTIVE\nThe current study aimed to delineate the causal involvement of the DLPFC during multitasking by modulating neural activity with transcranial direct current stimulation (tDCS) prior to engagement in a demanding multitasking paradigm.\n\n\nMETHODS\nThe study is a single-blind, crossover, sham-controlled experiment. Anodal tDCS or sham tDCS was applied over left DLPFC in forty-one healthy young adults (aged 18-35 years) immediately before they engaged in a 3-D video game designed to assess multitasking performance. Participants were separated into three subgroups: real-sham (i.e., real tDCS in the first session, followed by sham tDCS in the second session 1\u00a0h later), sham-real (sham tDCS first session, real tDCS second session), and sham-sham (sham tDCS in both sessions).\n\n\nRESULTS\nThe real-sham group showed enhanced multitasking performance and decreased multitasking cost during the second session, compared to the first session, suggesting delayed cognitive benefits of tDCS. Interestingly, performance benefits were observed only for multitasking and not on a single-task version of the game. No significant changes were found between the first and second sessions for either the sham-real or the sham-sham groups.\n\n\nCONCLUSIONS\nThese results suggest a causal role of left prefrontal cortex in facilitating the simultaneous performance of more than one task, or multitasking. Moreover, these findings reveal that anodal tDCS may have delayed benefits that reflect an enhanced rate of learning."} {"_id": "1e9f908a7ff08e8c4317252896ae0e545b27f1c4", "title": "Binarization-free OCR for historical documents using LSTM networks", "text": "A primary preprocessing block of almost any typical OCR system is binarization, through which it is intended to remove unwanted parts of the input image and keep only a binarized and cleaned-up version for further processing. The binarization step does not, however, always perform perfectly, and it can happen that binarization artifacts result in important information loss, by, for instance, breaking or deforming character shapes. In historical documents, due to a more dominant presence of noise and other sources of degradations, the performance of binarization methods usually deteriorates; as a result, the performance of the recognition pipeline is hindered by such preprocessing phases. In this paper, we propose to skip the binarization step by directly training a 1D Long Short Term Memory (LSTM) network on gray-level text lines. We collect a large set of historical Fraktur documents, from publicly available online sources, and form train and test sets for performing experiments on both gray-level and binarized text lines. In order to observe the impact of resolution, the experiments are carried out on two identical sets of low and high resolutions.
Overall, using gray-level text lines, the 1D LSTM network can reach 24% and 19% lower error rates on the low- and high-resolution sets, respectively, compared to the case of using binarization in the recognition pipeline."} {"_id": "b802f6281eea6f049ca1686a5c6cc41e44f5748f", "title": "Joint effects of emotion and color on memory.", "text": "Numerous studies have shown that memory is enhanced for emotionally negative and positive information relative to neutral information. We examined whether emotion-induced memory enhancement is influenced by low-level perceptual attributes such as color. Because in everyday life red is often used as a warning signal, whereas green signals security, we hypothesized that red might enhance memory for negative information and green memory for positive information. To capture the signaling function of colors, we measured memory for words standing out from the context by color, and manipulated the color and emotional significance of the outstanding words. Making words outstanding by color strongly enhanced memory, replicating the well-known von Restorff effect. Furthermore, memory for colored words was further increased by emotional significance, replicating the memory-enhancing effect of emotion. Most intriguingly, the effects of emotion on memory additionally depended on color type. Red strongly increased memory for negative words, whereas green strongly increased memory for positive words. These findings provide the first evidence that emotion-induced memory enhancement is influenced by color and demonstrate that different colors can have different functions in human memory."} {"_id": "3f994e413846f9bef8a25b30da0665420fa2bd2a", "title": "Design of a Real-Time Micro-Winch Controller for Kite-Power Applications: Master\u2019s Thesis in Embedded", "text": "Airborne wind energy is a technology for extracting energy from high-altitude winds. This technology is under heavy development by several companies and universities. A current problem with the commercialization of the technology is the reliability and safety of the system. In this thesis, a real-time environment suitable for research on and further development of the prototype's steering and depower control is proposed. Additionally, overload prevention for the kite lines is researched. This thesis presents a method to estimate the tension on the kite control tapes using only one tension sensor, thus reducing the amount of hardware needed to protect the kite from overloads. The method relies on a characterization of the powertrain efficiency and can be used to estimate the tensions at high loads. An algorithm to limit the forces on the steering lines by depowering the kite is shown; it controls the depower state of the kite based on the desired depower state, the actual tension, and previous tensions on the KCU\u2019s tapes. The tension history is used to calculate a higher depower state to prevent future overloads; this reduces the amount of action needed by the motors and enables the system to use a brake to save energy. The limiter output is used as an input to a position controller, which allows the project to use off-the-shelf solutions to build the KCU prototype. The controller was implemented in a real-time system and is able to run as fast as 20 Hz, with the communication protocol being the execution-time bottleneck. The control algorithms were tested using a mathematical model of the kite, the environment, and trajectory control inputs from FreeKiteSim.
Three scenarios were considered for the model test: normal operation, overload operation without tension limitation, and overload operation with tension limitation. The apparent wind speed during the reel-out phase is approximately 30 m/s in the normal scenario and 35 m/s in the overload scenarios. During the overload scenario, the limiter spent roughly 22% more energy than in the normal operation scenario to counteract an increase of 5 m/s in apparent wind over 3.5 hours of operation, but 15% less energy than in the overload scenario without tension limitation."} {"_id": "3865d02552139862bf8dcc4942782af1d9eec17c", "title": "Analyzing News Media Coverage to Acquire and Structure Tourism Knowledge", "text": "Destination image significantly influences a tourist\u2019s decision-making process. The impact of news media coverage on destination image has attracted research attention and became particularly evident after catastrophic events such as the 2004 Indian Ocean earthquake that triggered a series of lethal tsunamis. Building upon previous research, this article analyzes the prevalence of tourism destinations among 162 international media sites. Term frequency captures the attention a destination receives\u2014from a general and, after contextual filtering, from a tourism perspective. Calculating sentiment estimates positive and negative media influences on destination image at a given point in time. Identifying semantic associations with the names of countries and major cities, the results of co-occurrence analysis reveal the public profiles of destinations, and the impact of current events on media coverage. These results allow national tourism organizations to assess how their destination is covered by news media in general, and in a specific tourism context. To guide analysts and marketers in this assessment, an iterative analysis of semantic associations extracts tourism knowledge automatically, and represents this knowledge as ontological structures."} {"_id": "2298ec2d7a34ce667319f7e5e88005c71c4ee142", "title": "Composite Connectors for Composing Software Components", "text": "In a component-based system, connectors are used to compose components. Connectors should have a semantics that makes them simple to construct and use. At the same time, their semantics should be rich enough to endow them with desirable properties such as genericity, compositionality and reusability. For connector construction, compositionality would be particularly useful, since it would facilitate systematic construction. In this paper we describe a hierarchical approach to connector definition and construction that allows connectors to be defined and constructed from sub-connectors. These composite connectors are indeed generic, compositional and reusable. They behave like design patterns, and provide powerful composition connectors."} {"_id": "0ea94e9c83d2e138fbccc1116b57d4f2a7ba6868", "title": "Measured Gene-Environment Interactions in Psychopathology: Concepts, Research Strategies, and Implications for Research, Intervention, and Public Understanding of Genetics.", "text": "There is much curiosity about interactions between genes and environmental risk factors for psychopathology, but this interest is accompanied by uncertainty. This article aims to address this uncertainty. First, we explain what is and is not meant by gene-environment interaction. Second, we discuss reasons why such interactions were thought to be rare in psychopathology, and argue instead that they ought to be common.
Third, we summarize emerging evidence about gene-environment interactions in mental disorders. Fourth, we argue that research on gene-environment interactions should be hypothesis driven, and we put forward strategies to guide future studies. Fifth, we describe potential benefits of studying measured gene-environment interactions for basic neuroscience, gene hunting, intervention, and public understanding of genetics. We suggest that information about nurture might be harnessed to make new discoveries about the nature of psychopathology."} {"_id": "ec76d5b32cd6f57a59d0c841f3ff558938aa6ddd", "title": "Oral stimulation for promoting oral feeding in preterm infants.", "text": "BACKGROUND\nPreterm infants (< 37 weeks' postmenstrual age) are often delayed in attaining oral feeding. Normal oral feeding is suggested as an important outcome for the timing of discharge from the hospital and can be an early indicator of neuromotor integrity and developmental outcomes. A range of oral stimulation interventions may help infants to develop sucking and oromotor co-ordination, promoting earlier oral feeding and earlier hospital discharge.\n\n\nOBJECTIVES\nTo determine the effectiveness of oral stimulation interventions for attainment of oral feeding in preterm infants born before 37 weeks' postmenstrual age (PMA). To conduct subgroup analyses for the following prespecified subgroups. \u2022 Extremely preterm infants born at < 28 weeks' PMA. \u2022 Very preterm infants born from 28 to < 32 weeks' PMA. \u2022 Infants breast-fed exclusively. \u2022 Infants bottle-fed exclusively. \u2022 Infants who were both breast-fed and bottle-fed.\n\n\nSEARCH METHODS\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE via PubMed (1966 to 25 February 2016), Embase (1980 to 25 February 2016) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 25 February 2016). We searched clinical trials databases, conference proceedings and the reference lists of retrieved articles.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised controlled trials comparing a defined oral stimulation intervention with no intervention, standard care, sham treatment or non-oral intervention in preterm infants and reporting at least one of the specified outcomes.\n\n\nDATA COLLECTION AND ANALYSIS\nOne review author searched the databases and identified studies for screening. Two review authors screened the abstracts of these studies and full-text copies when needed to identify trials for inclusion in the review. All review authors independently extracted the data and analysed each study for risk of bias across the five domains of bias. All review authors discussed and analysed the data and used the GRADE system to rate the quality of the evidence. Review authors divided studies into two groups for comparison: intervention versus standard care and intervention versus other non-oral or sham intervention. We performed meta-analysis using a fixed-effect model.\n\n\nMAIN RESULTS\nThis review included 19 randomised trials with a total of 823 participants. Almost all included trials had several methodological weaknesses.
Meta-analysis showed that oral stimulation reduced the time to transition to oral feeding compared with standard care (mean difference (MD) -4.81, 95% confidence interval (CI) -5.56 to -4.06 days) and compared with another non-oral intervention (MD -9.01, 95% CI -10.30 to -7.71 days), as well as the duration of initial hospitalisation compared with standard care (MD -5.26, 95% CI -7.34 to -3.19 days) and compared with another non-oral intervention (MD -9.01, 95% CI -10.30 to -7.71 days). Investigators reported shorter duration of parenteral nutrition for infants compared with standard care (MD -5.30, 95% CI -9.73 to -0.87 days) and compared with another non-oral intervention (MD -8.70, 95% CI -15.46 to -1.94 days). They could identify no effect on breast-feeding outcomes or on weight gain.\n\n\nAUTHORS' CONCLUSIONS\nAlthough the included studies suggest that oral stimulation shortens hospital stay, days to exclusive oral feeding and duration of parenteral nutrition, one must interpret results of these studies with caution, as risk of bias and poor methodological quality are high overall. Well-designed trials of oral stimulation interventions for preterm infants are warranted. Such trials should use reliable methods of randomisation while concealing treatment allocation, blinding caregivers to treatment when possible and paying particular attention to blinding of outcome assessors."} {"_id": "5eee46f39ae4cafd90c0c34b9996b96ac8f638a2", "title": "The Economics of State Fragmentation-Assessing the Economic Impact of Secession", "text": "This paper provides empirical evidence that declaring independence significantly lowers per capita GDP based on a large panel of countries covering the period 1950-2016. To do so, we rely on a semi-parametric identification strategy that controls for the confounding effects of past GDP dynamics, anticipation effects, unobserved heterogeneity, model uncertainty and effect heterogeneity. Our baseline results indicate that declaring independence reduces per capita GDP by around 20% in the long run. We subsequently propose a quadruple-difference procedure to demonstrate that the results are not driven by simulation and matching inaccuracies or spillover effects. A second methodological novelty consists of the development of a two-step estimator that relies on the control function approach to control for the potential endogeneity of the estimated independence payoffs and their potential determinants, to shed some light on the primary channels driving our results. We find tentative evidence that the adverse effects of independence decrease in territorial size, pointing to the presence of economies of scale, but that they are mitigated when newly independent states liberalize their trade regime or use their new-found political autonomy to democratize."} {"_id": "9e2af148acbf7d4623ca8a946be089a774ce5258", "title": "AlterEgo: A Personalized Wearable Silent Speech Interface", "text": "We present a wearable interface that allows a user to silently converse with a computing device without any voice or any discernible movements - thereby enabling the user to communicate with devices, AI assistants, applications or other people in a silent, concealed and seamless manner. A user's intention to speak and internal speech is characterized by neuromuscular signals in internal speech articulators that are captured by the AlterEgo system to reconstruct this speech.
We use this to facilitate a natural language user interface, where users can silently communicate in natural language and receive aural output (e.g., bone conduction headphones), thereby enabling a discreet, bi-directional interface with a computing device, and providing a seamless form of intelligence augmentation. The paper describes the architecture, design, implementation and operation of the entire system. We demonstrate robustness of the system through user studies and report 92% median word accuracy levels."} {"_id": "40e2d032a11b18bc470d47f8b0545db691c0d253", "title": "Be Quiet? Evaluating Proactive and Reactive User Interface Assistants", "text": "This research examined the ability of an anthropomorphic interface assistant to help people learn and use an unfamiliar text-editing tool, with a specific focus on assessing proactive assistant behavior. Participants in the study were introduced to a text editing system that used keypress combinations for invoking the different editing operations. Participants then were directed to make a set of prescribed changes to a document with the aid of either a paper manual, an interface assistant that would hear and respond to questions orally, or an assistant that responded to questions and additionally made proactive suggestions. Anecdotal evidence suggested that proactive assistant behavior would not enhance performance and would be viewed as intrusive. Our results showed that all three conditions performed similarly on objective editing performance (completion time, commands issued, and command recall), while the participants in the latter two conditions strongly felt that the assistant\u2019s help was valuable."} {"_id": "04f39720b9b20f8ab990228ae3fe4f473e750fe3", "title": "Probabilistic Graphical Models - Principles and Techniques", "text": ""} {"_id": "17fac85921a6538161b30665f55991f7c7e0f940", "title": "Calibrating Noise to Sensitivity in Private Data Analysis", "text": "We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = \u2211i g(xi), where xi denotes the ith row of the database and g maps database rows to [0, 1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts.
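The noisy-sum mechanism in this last abstract is easy to state concretely. A minimal sketch, assuming a toy database, a query g mapping rows into [0, 1] (so the sensitivity of the sum is 1), and an arbitrarily chosen epsilon:

```python
# Release sum_i g(x_i) perturbed with Laplace noise whose scale is
# calibrated to the query's sensitivity, as the abstract describes.
import numpy as np

rng = np.random.default_rng(0)

def noisy_sum(rows, g, epsilon):
    # Because g maps each row into [0, 1], changing any single row can
    # move the true sum by at most 1, so the sensitivity is 1.
    sensitivity = 1.0
    true_answer = sum(g(x) for x in rows)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_answer + noise

database = [0.2, 0.9, 0.4, 0.7, 1.0]   # toy rows, values already in [0, 1]
print(noisy_sum(database, g=lambda x: x, epsilon=0.5))
```

Smaller epsilon means a larger noise scale and stronger privacy, which is exactly the sensitivity-to-noise calibration the abstract summarizes.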
Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive ones."} {"_id": "2a622720d4021259a6f6d3c6298559d1b56e7e62", "title": "Scaling personalized web search", "text": "Recent web search techniques augment traditional text matching with a global notion of \"importance\" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques."} {"_id": "37c3303d173c055592ef923235837e1cbc6bd986", "title": "Learning Fair Representations", "text": "We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); second, we take a step toward learning a distance metric which can find important dimensions of the data for classification."} {"_id": "4556f3f9463166aa3e27b2bec798c0ca7316bd65", "title": "Three naive Bayes approaches for discrimination-free classification", "text": "In this paper, we investigate how to modify the naive Bayes classifier in order to perform classification that is restricted to be independent with respect to a given sensitive attribute. Such independence restrictions occur naturally when the decision process leading to the labels in the dataset was biased; e.g., due to gender or racial discrimination. This setting is motivated by many cases in which there exist laws that disallow a decision that is partly based on discrimination. Naive application of machine learning techniques would result in huge fines for companies.
We present three approaches for making the naive Bayes classifier discrimination-free: (i) modifying the probability of the decision being positive, (ii) training one model for every sensitive attribute value and balancing them, and (iii) adding a latent variable to the Bayesian model that represents the unbiased label and optimizing the model parameters for likelihood using expectation maximization. We present experiments for the three approaches on both artificial and real-life data."} {"_id": "72e8b0167757b9c8579972ee6972065ba5cb762d", "title": "Using Visual Text Mining to Support the Study Selection Activity in Systematic Literature Reviews", "text": "Background: A systematic literature review (SLR) is a methodology used to aggregate all relevant existing evidence to answer a research question of interest. Although crucial, the process used to select primary studies can be arduous and time-consuming, and must often be conducted manually. Objective: We propose a novel approach, known as 'Systematic Literature Review based on Visual Text Mining' or simply SLR-VTM, to support the primary study selection activity using visual text mining (VTM) techniques. Method: We conducted a case study to compare the performance and effectiveness of four doctoral students in selecting primary studies manually and using the SLR-VTM approach. To enable the comparison, we also developed a VTM tool that implemented our approach. We hypothesized that students using SLR-VTM would present improved selection performance and effectiveness. Results: Our results show that incorporating VTM in the SLR study selection activity reduced the time spent in this activity and also increased the number of studies correctly included. Conclusions: Our pilot case study presents promising results suggesting that the use of VTM may indeed be beneficial during the study selection activity when performing an SLR."} {"_id": "9cc995f368233a7dce30d5a48c3509b5b4ffa57b", "title": "Pineal cyst: normal or pathological?", "text": "A review of 500 consecutive MRI studies was undertaken to assess the frequency and the appearances of cystic pineal glands. Cysts were encountered in 2.4% of cases. Follow-up examination demonstrated no change in these cysts and they were considered to be a normal variant. Size, MRI appearances and signs associated with this condition are reported in order to establish criteria of normality."} {"_id": "48aef24024c979d70161cb8ca86793c2de8584ff", "title": "Fast algorithm for robust template matching with M-estimators", "text": "In this paper, we propose a fast algorithm for speeding up the process of template matching that uses M-estimators for dealing with outliers. We propose a particular image hierarchy called the p-pyramid that can be exploited to generate a list of ascending lower bounds of the minimal matching errors when a non-decreasing robust error measure is adopted. Then, the set of lower bounds can be used to prune the search of the p-pyramid, and a fast algorithm is thereby developed in this paper. This fast algorithm ensures finding the global minimum of the robust template matching problem in which a non-decreasing M-estimator serves as an error measure. Experimental results demonstrate the effectiveness of our method.
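To make the robust matching objective in the preceding abstract concrete, here is a brute-force sketch; the capped-quadratic rho and the toy images are assumptions for illustration, and the paper's actual contribution, the p-pyramid lower-bound pruning, is deliberately omitted:

```python
# Brute-force robust template matching: score each candidate offset by a
# sum of rho(residual) terms, where rho is a non-decreasing M-estimator
# error measure that caps the influence of outlier pixels.
import numpy as np

def rho(residual, c=30.0):
    return np.minimum(residual ** 2, c ** 2)   # capped quadratic

def robust_match(image, template):
    H, W = image.shape
    h, w = template.shape
    best_err, best_pos = np.inf, None
    for y in range(H - h + 1):                 # exhaustive search; the
        for x in range(W - w + 1):             # paper prunes this loop
            err = rho(image[y:y + h, x:x + w] - template).sum()
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(40, 40)).astype(float)
tpl = img[10:18, 5:13] + rng.normal(0, 5, size=(8, 8))  # noisy patch
print(robust_match(img, tpl))                  # expected: (10, 5)
```

Because rho is non-decreasing in the residual magnitude, partial sums of rho terms give exactly the kind of ascending lower bounds the paper exploits for pruning.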
"} {"_id": "c5ad3dc750cc9c8316ee5903141baa7e857e670e", "title": "RELIABLE ISLANDING DETECTION WITH ACTIVE MV NETWORK MANAGEMENT", "text": "Active management of distribution networks and different controllable resources will play a key role in future Smart Grids. This paper proposes that centralized active network management functionalities at MV level could also include an algorithm which in real time confirms reliable islanding detection with a local-measurements-based passive method. The proposed algorithm continuously ensures a sufficient reactive power unbalance, so that the operating point remains constantly outside the non-detection zone of the passive islanding detection method used. The effect of recent grid code requirements on the proposed scheme, such as the P/f-droop control of DG units, has also been considered."} {"_id": "77abdfbfe65af328f61c6c0ed30b388c6cde8d63", "title": "IRIS: A goal-oriented big data analytics framework on Spark for better Business decisions", "text": "Big data analytics is the hottest new practice in Business Analytics today. However, recent industrial surveys find that big data analytics may fail to meet business expectations because of a lack of business context, a lack of expertise to connect the dots, inaccurate scope, and batch-oriented Hadoop systems. In this paper, we present IRIS - a goal-oriented big data analytics framework using Spark for better business decisions, which consists of a conceptual model connecting the business side and the big data side by providing context information around the data, a claim-based evaluation method that enables focusing on the most effective solutions, a process for using the IRIS framework, and an assistant tool built on Spark, a real-time big data analytics platform. In this framework, problems against business goals of the current process and solutions for the future process are explicitly hypothesized in the conceptual model and validated on real big data using big queries or big data analytics. As an empirical study, a shipment decision process is used to show how IRIS can support better business decisions in terms of comprehensive understanding of both business and data analytics, high priority and fast decisions."} {"_id": "f5de0751d6d73f0496ac5842cc6ca84b2d0c2063", "title": "A Comprehensive Survey of Wireless Body Area Networks", "text": "Recent advances in microelectronics and integrated circuits, system-on-chip design, wireless communication and intelligent low-power sensors have allowed the realization of a Wireless Body Area Network (WBAN). A WBAN is a collection of low-power, miniaturized, invasive/non-invasive lightweight wireless sensor nodes that monitor human body functions and the surrounding environment. In addition, it supports a number of innovative and interesting applications such as ubiquitous healthcare, entertainment, interactive gaming, and military applications. In this paper, the fundamental mechanisms of WBAN including architecture and topology, wireless implant communication, low-power Medium Access Control (MAC) and routing protocols are reviewed. A comprehensive study of the proposed technologies for WBAN at Physical (PHY), MAC, and Network layers is presented and many useful solutions are discussed for each layer. 
Finally, numerous WBAN applications are highlighted."} {"_id": "104e29743b98ea97d73ce9558e839e0a6e90113f", "title": "Android ION Hazard: the Curse of Customizable Memory Management System", "text": "ION is a unified memory management interface for Android that is widely used on virtually all ARM-based Android devices. ION attempts to achieve several ambitious goals that have not been simultaneously achieved before (not even on Linux). Different from managing regular memory in the system, ION is designed to share and manage memory with special constraints, e.g., physically contiguous memory. Despite the great flexibility and performance benefits offered, such a critical subsystem, as we discover, unfortunately has flawed security assumptions and designs. In this paper, we systematically analyze ION-related vulnerabilities from conceptual root causes to detailed implementation decisions. Since ION is often customized heavily for different Android devices, the specific vulnerabilities often manifest themselves differently. By conducting a range of runtime testing as well as static analysis, we are able to uncover a large number of serious vulnerabilities on the latest Android devices (e.g., Nexus 6P running Android 6.0 and 7.0 preview) such as denial-of-service and dumping memory from the system and arbitrary applications (e.g., email content, passwords). Finally, we offer suggestions on how to redesign the ION subsystem to eliminate these flaws. We believe that the lessons learned can help guide the future design of similar memory management subsystems."} {"_id": "0ee6254ab9bd1bc52d851206a84ec5f3fab9a308", "title": "Study of S-box properties in block cipher", "text": "In the field of cryptography, the substitution box (S-box) has become one of the most widely used cipher components. The process of creating new and powerful S-boxes never ends. Various methods have been proposed to make the S-box stronger and harder to attack. The strength or weakness of an S-box is determined through the analysis of its properties. However, the analysis of S-box properties in block ciphers is still lacking because there are no specific guidelines and techniques based on S-box properties. Hence, a cipher is easier for an adversary to attack if the S-box properties are not robust. The purpose of this paper is to describe and review the S-box properties in block ciphers. As a result, for future work, a new model for analyzing S-box properties will be proposed. The model can be used to analyze the properties and determine the strengths and weaknesses of any S-box."} {"_id": "b8349a89ef37e5f741da19609b6aea777cbc39ca", "title": "Design of tri-band Planar Inverted F Antenna (PIFA) with parasitic elements for UMTS2100, LTE and WiMAX mobile applications", "text": "In this paper, a multiband Planar Inverted F Antenna (PIFA) for mobile communication applications has been designed and constructed by introducing two rectangular-shaped parasitic elements located under the main radiating patch of the PIFA in order to obtain triple-band operation. This triple-band PIFA antenna can be operated at three different operating frequencies: 2100 MHz for UMTS2100, 2600 MHz for LTE applications and 3500 MHz for WiMAX applications. The main radiating patch of this antenna controls the upper band frequency at 3500 MHz, while the first and second parasitic elements, located at the left and right edges of the ground plane, resonate at the middle band (2600 MHz) and the lower band (2100 MHz), respectively. 
The proposed triple-band antenna is fabricated and experimentally tested. Good agreement between the simulated and measured reflection coefficients of the antenna is achieved, with the experimental impedance bandwidth covering the three applications. In these frequency bands, the antenna has a nearly omni-directional radiation pattern with a peak gain between 2 dBi and 5 dBi."} {"_id": "872be69f66b12879d4741b0f0df02738452e3483", "title": "HGMF: Hierarchical Group Matrix Factorization for Collaborative Recommendation", "text": "Matrix factorization is one of the most powerful techniques in collaborative filtering, which models the (user, item) interactions behind historical explicit or implicit feedback. However, plain matrix factorization may not be able to uncover the structure correlations among users and items well, such as communities and taxonomies. As a response, we design a novel algorithm, i.e., hierarchical group matrix factorization (HGMF), in order to explore and model the structure correlations among users and items in a principled way. Specifically, we first define four types of correlations, including (user, item), (user, item group), (user group, item) and (user group, item group); we then extend plain matrix factorization with a hierarchical group structure; finally, we design a novel clustering algorithm to mine the hidden structure correlations. In the experiments, we study the effectiveness of our HGMF for both rating prediction and item recommendation, and find that it is better than some state-of-the-art methods on several real-world data sets."} {"_id": "2bcdc111f96df6b77a7d6be8c9a7e54eeda6d443", "title": "Sustainable Passenger Transportation : Dynamic RideSharing", "text": "AND"} {"_id": "d4ddf8e44d13ac198384819443d6170fb4428614", "title": "Space Telerobotics : Unique Challenges to Human \u2013 Robot Collaboration in Space", "text": "In this chapter, we survey the current state of the art in space telerobots. We begin by defining relevant terms and describing applications. We then examine the design issues for space telerobotics, including common requirements, operational constraints, and design elements. A discussion follows of the reasons space telerobotics presents unique challenges beyond terrestrial systems. We then present case studies of several different space telerobots, examining key aspects of design and human\u2013robot interaction. Next, we describe telerobots and concepts of operations for future space exploration missions. Finally, we discuss the various ways in which space telerobots can be evaluated in order to characterize and improve performance."} {"_id": "97bafb07cb97c7847b0b39bc0cafd1bafc28cca7", "title": "THE UBISENSE SMART SPACE PLATFORM", "text": "Ubisense has developed a platform for building Smart Space applications. The platform addresses the key requirements for building Smart Spaces: accurate 3D positioning, scalable real-time performance, and development and deployment tools. This paper elaborates on the key requirements and describes how the Ubisense platform components meet them. The demonstration exemplifies the smart space platform by tracking players in a game application. 
Our platform addresses the key requirements for building Smart Spaces: accurate 3D positioning supports applications that can perceive the physical relationships between people and objects in the environment; scalable real-time performance enables arbitrary numbers of applications, used by arbitrary numbers of people over an arbitrarily large area; and development and deployment tools make it easy to design, implement, and manage Smart Space applications. The demonstration shows a Smart Space containing applications that integrate with external software (a simple game that users control by moving around) and devices (a PTZ camera that keeps users in shot while they are playing the game). (Fig. 1: Smart space demonstration setup.)"} {"_id": "5786cb96af196a5e70660689fe6bd92d40e6d7ab", "title": "A 20 W/Channel Class-D Amplifier With Near-Zero Common-Mode Radiated Emissions", "text": "A class-D amplifier that employs a new modulation scheme and associated output stage to achieve true filter-less operation is presented. It uses a new type of BD modulation that keeps the output common-mode constant, thereby removing a major contributor to radiated emissions, typically an issue for class-D amplifiers. The amplifier meets the FCC class B standard for radiated emissions without any LC filtering. It can accomplish this without any degradation to audio performance and while retaining high efficiency. THD+N is 0.19% at 1 kHz while supplying 5 W to an 8 Ohm load from a 12 V supply. Efficiency is 90% while providing 10 W under the same supply and load conditions. The new output stage occupies 1.8 mm2 per channel using the high-voltage devices of a 0.25 \u00b5m BCD process."} {"_id": "7848c7c6a782c6941bdd67556521aa97569366a4", "title": "Design Optimal PID Controller for Quad Rotor System", "text": "Quad rotor aerial vehicles are one of the most flexible and adaptable platforms for undertaking aerial research. A quad rotor is, in essence, a rotorcraft with four lift-generating propellers (four rotors): two rotors rotate clockwise and the other two rotate anticlockwise, and the speed and direction of the quad rotor are controlled by varying the speeds of these rotors. This paper describes how PID controllers are used for controlling attitude in the roll, pitch and yaw directions. In addition, an optimal PID controller is obtained by using particle swarm optimization (PSO), Bacterial Foraging Optimization (BFO) and the hybrid BF-PSO optimization. Finally, the quad rotor model is simulated for several test scenarios."} {"_id": "64c7f6717d962254721c204e60ea106c0ff4acda", "title": "Memristor Model Comparison", "text": "Since the 2008 discovery of memristor behavior at the nano-scale, for which Hewlett Packard is credited, a great deal of effort has been spent in the research community to derive a suitable model able to capture the nonlinear dynamics of the nano-scale structures. Although a considerable number of models of different complexity have been proposed in the literature, there is an ongoing debate over which model should be universally adopted for the investigation of the unique opportunities memristors may offer in integrated circuit design. In order to shed some light on this passionate discussion, this paper compares some of the most noteworthy memristor models present in the literature. 
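To make the attitude control loop described in the quad-rotor abstract above concrete, a minimal discrete-time PID sketch follows; the gains shown are placeholder values, standing in for the PSO/BFO/BF-PSO-tuned gains the paper obtains.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per attitude axis (roll, pitch, yaw). The gains here are
# hypothetical placeholders; in the paper they would be tuned by PSO/BFO.
roll_pid = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.01)
motor_cmd = roll_pid.update(setpoint=0.0, measured=0.1)
```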
The strength of Pickett's model lies in its experiment-based development and in its ability to describe some of the physical mechanisms at the origin of memristor dynamics. Since its parameter values depend on the excitation of the memristor and/or on the circuit employing the memristor, it may be taken as a reference for comparison only in those scenarios for which its parameters were reported in the literature. In this work various noteworthy memristor models are fitted to Pickett's model under one such scenario. This study shows how three models, Biolek's model, the Boundary Condition Memristor model and the Threshold Adaptive Memristor model, outperform the others in replicating the dynamics observed in Pickett's model. In the second part of this work the models are used in a couple of basic circuits to study the variance between the dynamical behaviors they give rise to. This analysis intends to make circuit designers aware of the different behaviors which may occur in memristor-based circuits according to the memristor model in use."} {"_id": "af359fbcfc6d58a741ac0d597cd20eb9a68dfa51", "title": "An Interval-Based Representation of Temporal Knowledge", "text": "This paper describes a method for maintaining the relationships between temporal intervals in a hierarchical manner using constraint propagation techniques. The representation includes a notion of the present moment (i.e., \"now\"), and allows one to represent intervals that may extend indefinitely into the past/future."} {"_id": "2e4e83ec31b43595ee7160a7bdb5c3a7dc4a1db2", "title": "Ladder Variational Autoencoders", "text": "Variational autoencoders are powerful models for unsupervised learning. However, deep models with several layers of dependent stochastic variables are difficult to train, which limits the improvements obtained using these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data-dependent approximate likelihood in a process resembling the recently proposed Ladder Network. We show that this model provides state-of-the-art predictive log-likelihood and a tighter log-likelihood lower bound compared to the purely bottom-up inference in layered Variational Autoencoders and other generative models. We provide a detailed analysis of the learned hierarchical latent representation and show that our new inference model is qualitatively different and utilizes a deeper, more distributed hierarchy of latent variables. Finally, we observe that batch normalization and deterministic warm-up (gradually turning on the KL-term) are crucial for training variational models with many stochastic layers."} {"_id": "bebdd553058ab50d0cb19a1f65d7f4daeb7cda37", "title": "A Multi-Theoretical Literature Review on Information Security Investments using the Resource-Based View and the Organizational Learning Theory", "text": "The protection of information technology (IT) has become and is predicted to remain a key economic challenge for organizations. While research on IT security investment is fast growing, it lacks a theoretical basis for structuring research, explaining economic-technological phenomena, and guiding future research. 
We address this shortcoming by suggesting a new theoretical model emerging from a multi-theoretical perspective adopting the Resource-Based View and the Organizational Learning Theory. The joint application of these theories allows us to conceptualize in one theoretical model the organizational learning effects that occur when the protection of organizational resources through IT security countermeasures develops over time. We use this model of IT security investments to synthesize findings of a large body of literature and to derive research gaps. We also discuss managerial implications of (closing) these gaps by providing practical examples."} {"_id": "f7a0d42044b26be8d509310ba20fd8d665943eba", "title": "Template Attacks", "text": "We present template attacks, the strongest form of side channel attack possible in an information theoretic sense. These attacks can break implementations and countermeasures whose security is dependent on the assumption that an adversary cannot obtain more than one or a limited number of side channel samples. They require that an adversary has access to an identical experimental device that he can program to his choosing. The success of these attacks in such constraining situations is due to the manner in which noise within each sample is handled. In contrast to previous approaches which viewed noise as a hindrance that had to be reduced or eliminated, our approach focuses on precisely modeling noise, and using this to fully extract information present in a single sample. We describe in detail how an implementation of RC4, not amenable to techniques such as SPA and DPA, can easily be broken using template attacks with a single sample. Other applications include attacks on certain DES implementations which use DPA-resistant hardware and certain SSL accelerators which can be attacked by monitoring electromagnetic emanations from an RSA operation even from distances of fifteen feet."} {"_id": "5a2b6c6a7b9cb12554f660610526b22da8e070a7", "title": "Global aetiology and epidemiology of type 2 diabetes mellitus and its complications", "text": "Globally, the number of people with diabetes mellitus has quadrupled in the past three decades, and diabetes mellitus is the ninth major cause of death. About 1 in 11 adults worldwide now have diabetes mellitus, 90% of whom have type 2 diabetes mellitus (T2DM). Asia is a major area of the rapidly emerging T2DM global epidemic, with China and India the top two epicentres. Although genetic predisposition partly determines individual susceptibility to T2DM, an unhealthy diet and a sedentary lifestyle are important drivers of the current global epidemic; early developmental factors (such as intrauterine exposures) also have a role in susceptibility to T2DM later in life. Many cases of T2DM could be prevented with lifestyle changes, including maintaining a healthy body weight, consuming a healthy diet, staying physically active, not smoking and drinking alcohol in moderation. Most patients with T2DM have at least one complication, and cardiovascular complications are the leading cause of morbidity and mortality in these patients. This Review provides an updated view of the global epidemiology of T2DM, as well as dietary, lifestyle and other risk factors for T2DM and its complications."} {"_id": "1407b3363d9bd817b00e95190a95372d3cb3694a", "title": "Probabilistic Frame Induction", "text": "In natural-language discourse, related events tend to appear near each other to describe a larger scenario. 
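The profiling-then-classification flow of the template attack described above can be sketched compactly; this is an illustrative reconstruction (hypothetical data layout, multivariate-Gaussian noise model), not the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def build_templates(traces_by_key):
    """Profiling phase: fit a mean vector and covariance matrix to the
    side-channel traces recorded for each key/operation hypothesis."""
    return {k: (np.mean(t, axis=0), np.cov(np.asarray(t), rowvar=False))
            for k, t in traces_by_key.items()}

def classify(trace, templates):
    """Attack phase: pick the hypothesis whose Gaussian noise model gives
    the single observed trace the highest likelihood."""
    return max(templates,
               key=lambda k: multivariate_normal.logpdf(
                   trace, mean=templates[k][0], cov=templates[k][1],
                   allow_singular=True))
```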
Such structures can be formalized by the notion of a frame (a.k.a. template), which comprises a set of related events and prototypical participants and event transitions. Identifying frames is a prerequisite for information extraction and natural language generation, and is usually done manually. Methods for inducing frames have been proposed recently, but they typically use ad hoc procedures and are difficult to diagnose or extend. In this paper, we propose the first probabilistic approach to frame induction, which incorporates frames, events, and participants as latent topics and learns those frame and event transitions that best explain the text. The number of frames is inferred by a novel application of a split-merge method from syntactic parsing. In end-to-end evaluations from text to induced frames and extracted facts, our method produced state-of-the-art results while substantially reducing engineering effort."} {"_id": "1bf9a76c9d9838afc51983894b58790b14c2e3d3", "title": "An ambient assisted living framework for mobile environments", "text": "Ambient assisted living (AAL) delivers IT solutions that aim to facilitate and improve the lives of disabled, elderly, and chronically ill people. Mobility is a key issue for elderly people because physical activity, in general, improves their quality of life and helps maintain their health status. Thus, this paper presents an AAL framework for caregivers and elderly people that allows them to maintain an active lifestyle without limiting their mobility. This framework includes four AAL tools for mobility environments: i) a fall detection mobile application; ii) a biofeedback monitoring system through wearable sensors; iii) an outdoor location service through a shoe equipped with a Global Positioning System (GPS); and iv) a mobile application for caregivers who take care of several elders confined to a home environment. The proposal is evaluated and demonstrated and it is ready for use."} {"_id": "048c1752c56a64ee9883ee960b385835474b7fc0", "title": "Higher Lower Bounds from the 3SUM Conjecture", "text": "The 3SUM conjecture has proven to be a valuable tool for proving conditional lower bounds on dynamic data structures and graph problems. This line of work was initiated by P\u01cetra\u015fcu (STOC 2010) who reduced 3SUM to an offline SetDisjointness problem. However, the reduction introduced by P\u01cetra\u015fcu suffers from several inefficiencies, making it difficult to obtain tight conditional lower bounds from the 3SUM conjecture. In this paper we address many of the deficiencies of P\u01cetra\u015fcu\u2019s framework. We give new and efficient reductions from 3SUM to offline SetDisjointness and offline SetIntersection (the reporting version of SetDisjointness) which lead to polynomially higher lower bounds on several problems. Using our reductions, we are able to show the essential optimality of several algorithms, assuming the 3SUM conjecture. \u2022 Chiba and Nishizeki\u2019s O(m\u03b1)-time algorithm (SICOMP 1985) for enumerating all triangles in a graph with arboricity/degeneracy \u03b1 is essentially optimal, for any \u03b1. \u2022 Bj\u00f8rklund, Pagh, Williams, and Zwick\u2019s algorithm (ICALP 2014) for listing t triangles is essentially optimal (assuming the matrix multiplication exponent is \u03c9 = 2). \u2022 Any static data structure for SetDisjointness that answers queries in constant time must spend \u03a9(N^{2\u2212o(1)}) time in preprocessing, where N is the size of the set system. 
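For background on the results above: 3SUM asks whether some three input numbers sum to zero, and the conjecture asserts that no algorithm solves it in strongly subquadratic time. The classic quadratic two-pointer solution (standard material, not from the paper):

```python
def three_sum(nums):
    """Classic O(n^2) two-pointer solution: return a triple (a, b, c)
    with a + b + c == 0, or None if no such triple exists."""
    nums = sorted(nums)
    n = len(nums)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s == 0:
                return nums[i], nums[lo], nums[hi]
            if s < 0:
                lo += 1   # need a larger sum
            else:
                hi -= 1   # need a smaller sum
    return None

print(three_sum([3, -7, 10, 4, -2, 9]))  # (-7, -2, 9)
```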
These statements were unattainable via P\u01cetra\u015fcu\u2019s reductions. We also introduce several new reductions from 3SUM to pattern matching problems and dynamic graph problems. Of particular interest are new conditional lower bounds for dynamic versions of Maximum Cardinality Matching, which introduce a new technique for obtaining amortized lower bounds."} {"_id": "b90a3e7d80eeb1f320e2d548cf17e7d17718d1eb", "title": "Rigid-Body Dynamics with Friction and Impact", "text": "Rigid-body dynamics with unilateral contact is a good approximation for a wide range of everyday phenomena, from the operation of car brakes to walking to rock slides. It is also of vital importance for simulating robots, virtual reality, and realistic animation. However, correctly modeling rigid-body dynamics with friction is difficult due to a number of discontinuities in the behavior of rigid bodies and the discontinuities inherent in the Coulomb friction law. This is particularly crucial for handling situations with large coefficients of friction, which can result in paradoxical results known at least since Painlev\u00e9 [C. R. Acad. Sci. Paris, 121 (1895), pp. 112\u2013115]. This single example has been a counterexample and cause of controversy ever since, and only recently have there been rigorous mathematical results that show the existence of solutions to his example. The new mathematical developments in rigid-body dynamics have come from several sources: \u201csweeping processes\u201d and the measure differential inclusions of Moreau in the 1970s and 1980s, the variational inequality approaches of Duvaut and J.-L. Lions in the 1970s, and the use of complementarity problems to formulate frictional contact problems by L\u00f6tstedt in the early 1980s. However, it wasn\u2019t until much more recently that these tools were finally able to produce rigorous results about rigid-body dynamics with Coulomb friction and impulses."} {"_id": "2375f6d71ce85a9ff457825e192c36045e994bdd", "title": "Multilayer feedforward networks are universal approximators", "text": ""} {"_id": "60b7c281f3a677274b7126c67b7f4059c631b1ea", "title": "There exists a neural network that does not make avoidable mistakes", "text": "The authors show that a multiple-input, single-output, single-hidden-layer feedforward network with (known) hardwired connections from input to hidden layer, monotone squashing at the hidden layer and no squashing at the output embeds as a special case a so-called Fourier network, which yields a Fourier series approximation and inherits the approximation properties of Fourier series representations. In particular, approximation to any desired accuracy of any square integrable function can be achieved by such a network, using sufficiently many hidden units. In this sense, such networks do not make avoidable mistakes."} {"_id": "6bfa668b84ae5cd7e19dbda5d78688ee9dc4b25c", "title": "A massively parallel architecture for a self-organizing neural pattern recognition machine", "text": "A neural network architecture for the learning of recognition categories is derived. Real-time network dynamics are completely characterized through mathematical analysis and computer simulations. The architecture self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns. Top-down attentional and matching mechanisms are critical in self-stabilizing the code learning process. 
The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements which are derived from the set of all input patterns that are ever experienced. Four types of attentional"} {"_id": "ee7f0bc85b339d781c2e0c7e6db8e339b6b9fec2", "title": "Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions", "text": "K.M. Hornik, M. Stinchcombe, and H. White (Univ. of California at San Diego, Dept. of Economics Discussion Paper, June 1988; to appear in Neural Networks) showed that multilayer feedforward networks with as few as one hidden layer, no squashing at the output layer, and arbitrary sigmoid activation function at the hidden layer are universal approximators: they are capable of arbitrarily accurate approximation to arbitrary mappings, provided sufficiently many hidden units are available. The present authors obtain identical conclusions but do not require the hidden-unit activation to be sigmoid. Instead, it can be a rather general nonlinear function. Thus, multilayer feedforward networks possess universal approximation capabilities by virtue of the presence of intermediate layers with sufficiently many parallel processors; the properties of the intermediate-layer activation function are not so crucial. In particular, sigmoid activation functions are not necessary for universal approximation."} {"_id": "384dc2b005c5d8605c62d840cc851751c47c4055", "title": "Control methodologies in networked control systems", "text": "The use of a data network in a control loop has gained increasing attention in recent years due to its cost-effective and flexible applications. One of the major challenges in this so-called networked control system (NCS) is the network-induced delay effect in the control loop. Network delays degrade the NCS control performance and destabilize the system. A significant emphasis has been on developing control methodologies to handle the network delay effect in NCS. This survey paper presents recent NCS control methodologies. An overview of NCS structures and a description of network delays, including their characteristics and effects, are also covered."} {"_id": "0b33d8210d530fad72ce20bd6565ceaed792cbc0", "title": "In Defense of the Internet: The Relationship between Internet Communication and Depression, Loneliness, Self-Esteem, and Perceived Social Support", "text": "As more people connect to the Internet, researchers are beginning to examine the effects of Internet use on users' psychological health. Due in part to a study released by Kraut and colleagues in 1998, which concluded that Internet use is positively correlated with depression, loneliness, and stress, public opinion about the Internet has been decidedly negative. 
In contrast, the present study was designed to test the hypothesis that Internet usage can affect users beneficially. Participants engaged in five chat sessions with an anonymous partner. At three different intervals they were administered scales measuring depression, loneliness, self-esteem, and social support. Changes in their scores were tracked over time. Internet use was found to decrease loneliness and depression significantly, while perceived social support and self-esteem increased significantly."} {"_id": "0f9065db0193be42173be5546a3dfb839f694521", "title": "Distributed Representations for Compositional Semantics", "text": "The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches\u2014meaning distributed representations that exploit co-occurrence statistics of large corpora\u2014have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP. Part I focuses on distributed representations and their application. In particular, in Chapter 3 we explore the semantic usefulness of distributed representations by evaluating their use in the task of semantic frame identification. Part II describes the transition from semantic representations for words to compositional semantics. Chapter 4 covers the relevant literature in this field. Following this, Chapter 5 investigates the role of syntax in semantic composition. For this, we discuss a series of neural network-based models and learning mechanisms, and demonstrate how syntactic information can be incorporated into semantic composition. This study allows us to establish the effectiveness of syntactic information as a guiding parameter for semantic composition, and answer questions about the link between syntax and semantics. Following these discoveries regarding the role of syntax, Chapter 6 investigates whether it is possible to further reduce the impact of monolingual surface forms and syntax when attempting to capture semantics. Asking how machines can best approximate human signals of semantics, we propose multilingual information as one method for grounding semantics, and develop an extension to the distributional hypothesis for multilingual representations. 
Finally, Part III summarizes our findings and discusses future work."} {"_id": "854c6d52fe2bf888ab7cb33ed8115df8c422d552", "title": "Social-driven Internet of Connected Objects", "text": "Internet evolution has recently been related to some aspects of user empowerment, mostly in terms of content distribution, and this has been ultimately accelerated by the fast-paced introduction and expansion of wireless technologies. Hence, the Internet should start to be seen as a communications infrastructure able to support the integration of a myriad of embedded and personal wireless objects. This way a future Internet will support the interaction between users\u2019 social, physical and virtual spheres. This position paper aims to raise some discussion about the technology required to ensure an efficient interaction between the physical, social and virtual worlds by extending the Internet by means of interconnected objects. Namely, it is argued that an efficient interaction between the physical, social and virtual worlds requires the development of a data-centric architecture based on IP-driven opportunistic networking able to make useful data available to people when and where they really need it, augmenting their social and environmental awareness."} {"_id": "8f89f992fdcc37e302b9c5b9b25c06d7f1086cf9", "title": "RHex: A Biologically Inspired Hexapod Runner", "text": "RHex is an untethered, compliant-leg hexapod robot that travels at better than one body length per second over terrain few other robots can negotiate at all. Inspired by biomechanics insights into arthropod locomotion, RHex uses a clock-excited alternating tripod gait to walk and run in a highly maneuverable and robust manner. We present empirical data establishing that RHex exhibits a dynamical (\u201cbouncing\u201d) gait\u2014its mass center moves in a manner well approximated by trajectories from a Spring Loaded Inverted Pendulum (SLIP)\u2014characteristic of a large and diverse group of running animals, when its central clock, body mass, and leg stiffnesses are appropriately tuned. The SLIP template can function as a useful control guide in developing more complex autonomous locomotion behaviors such as registration via visual servoing, local exploration via visual odometry, obstacle avoidance, and, eventually, global mapping and localization."} {"_id": "c0c0b8558b17aa20debc4611275a4c69edd1e2a7", "title": "Facial Expression Recognition via a Boosted Deep Belief Network", "text": "A training process for facial expression recognition is usually performed sequentially in three individual stages: feature learning, feature selection, and classifier construction. Extensive empirical studies are needed to search for an optimal combination of feature representation, feature set, and classifier to achieve good recognition performance. This paper presents a novel Boosted Deep Belief Network (BDBN) for performing the three training stages iteratively in a unified loopy framework. Through the proposed BDBN framework, a set of features, which is effective for characterizing expression-related facial appearance/shape changes, can be learned and selected to form a boosted strong classifier in a statistical way. 
As learning continues, the strong classifier is improved iteratively and more importantly, the discriminative capabilities of selected features are strengthened as well according to their relative importance to the strong classifier via a joint fine-tune process in the BDBN framework. Extensive experiments on two public databases showed that the BDBN framework yielded dramatic improvements in facial expression analysis."} {"_id": "91c7fc5b47c6767632ba030167bb59d9d080fbed", "title": "Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning", "text": "We introduce a method for following high-level navigation instructions by mapping directly from images, instructions and pose estimates to continuous low-level velocity commands for real-time control. The Grounded Semantic Mapping Network (GSMN) is a fully-differentiable neural network architecture that builds an explicit semantic map in the world reference frame by incorporating a pinhole camera projection model within the network. The information stored in the map is learned from experience, while the local-to-world transformation is computed explicitly. We train the model using DAGGERFM, a modified variant of DAGGER that trades tabular convergence guarantees for improved training speed and memory use. We test GSMN in virtual environments on a realistic quadcopter simulator and show that incorporating an explicit mapping and grounding modules allows GSMN to outperform strong neural baselines and almost reach an expert policy performance. Finally, we analyze the learned map representations and show that using an explicit map leads to an interpretable instruction-following model."} {"_id": "0b98aa34c67031ae065c58b5ed8b269391db8368", "title": "Stationary common spatial patterns for brain-computer interfacing.", "text": "Classifying motion intentions in brain-computer interfacing (BCI) is a demanding task as the recorded EEG signal is not only noisy and has limited spatial resolution but it is also intrinsically non-stationary. The non-stationarities in the signal may come from many different sources, for instance, electrode artefacts, muscular activity or changes of task involvement, and often deteriorate classification performance. This is mainly because features extracted by standard methods like common spatial patterns (CSP) are not invariant to variations of the signal properties, thus should also change over time. Although many extensions of CSP were proposed to, for example, reduce the sensitivity to noise or incorporate information from other subjects, none of them tackles the non-stationarity problem directly. In this paper, we propose a method which regularizes CSP towards stationary subspaces (sCSP) and show that this increases classification accuracy, especially for subjects who are hardly able to control a BCI. We compare our method with the state-of-the-art approaches on different datasets, show competitive results and analyse the reasons for the improvement."} {"_id": "82a3311dc057343216f82efa09fd8c3df1ff9e51", "title": "Compact GaN MMIC T/R module front-end for X-band pulsed radar", "text": "An X-band Single-Chip monolithic microwave integrated circuit (MMIC) has been developed by using a European GaN HEMT technology. The very compact MMIC die occupying only an area of 3.0 mm \u00d7 3.0 mm, integrates a high power amplifier (HPA), a low-noise amplifier (LNA) and a robust asymmetrical absorptive/reflective SPDT switch. 
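The CSP features that the stationary-CSP abstract above builds on reduce to a generalized eigenvalue problem; a minimal sketch of plain CSP follows (the paper's stationarity regularizer is not included):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """Plain CSP: spatial filters w solving cov_a w = lambda (cov_a + cov_b) w.
    The sCSP variant in the paper adds a stationarity regularizer on top."""
    def mean_cov(trials):  # trials: list of (channels x samples) arrays
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)      # generalized eigenvalue problem
    order = np.argsort(vals)            # extreme eigenvalues discriminate best
    picks = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, picks]               # columns are spatial filters
```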
At the antenna RF pad in the frequency range from 8.6 to 11.2 GHz, nearly 8 W of output power and 22 dB of linear gain were measured when operated in transmit mode. When operated in receive mode, a noise figure of 2.5 dB with a gain of 15 dB was measured at the Rx RF pad in the same frequency range."} {"_id": "2f0ed0d45537853b04e711a9cfee0c294205acd3", "title": "Augmented reality in education: a meta-review and cross-media analysis", "text": "Augmented reality (AR) is an educational medium increasingly accessible to young users such as elementary school and high school students. Although previous research has shown that AR systems have the potential to improve student learning, the educational community remains unclear regarding the educational usefulness of AR and regarding contexts in which this technology is more effective than other educational mediums. This paper addresses these topics by analyzing 26 publications that have previously compared student learning in AR versus non-AR applications. It identifies a list of positive and negative impacts of AR experiences on student learning and highlights factors that are potentially underlying these effects. This set of factors is argued to cause differences in educational effectiveness between AR and other media. Furthermore, based on the analysis, the paper presents a heuristic questionnaire generated for judging the educational potential of AR experiences."} {"_id": "273ab36c41cc5175c9bdde2585a9f4d17e35c683", "title": "Nonparametric Canonical Correlation Analysis", "text": "Canonical correlation analysis (CCA) is a classical representation learning technique for finding correlated variables in multi-view data. Several nonlinear extensions of the original linear CCA have been proposed, including kernel and deep neural network methods. These approaches seek maximally correlated projections among families of functions, which the user specifies (by choosing a kernel or neural network structure), and are computationally demanding. Interestingly, the theory of nonlinear CCA, without functional restrictions, had been studied in the population setting by Lancaster already in the 1950s, but these results have not inspired practical algorithms. We revisit Lancaster\u2019s theory to devise a practical algorithm for nonparametric CCA (NCCA). Specifically, we show that the solution can be expressed in terms of the singular value decomposition of a certain operator associated with the joint density of the views. Thus, by estimating the population density from data, NCCA reduces to solving an eigenvalue system, superficially like kernel CCA but, importantly, without requiring the inversion of any kernel matrix. We also derive a partially linear CCA (PLCCA) variant in which one of the views undergoes a linear projection while the other is nonparametric. Using a kernel density estimate based on a small number of nearest neighbors, our NCCA and PLCCA algorithms are memory-efficient, often run much faster, and perform better than kernel CCA and comparably to deep CCA."} {"_id": "4c97f3c66236649f1723e210104833278fb7f84e", "title": "Language Independent Single Document Image Super-Resolution using CNN for improved recognition", "text": "Recognition of document images has important applications in restoring old and classical texts. 
The problem involves quality improvement before passing the image to a properly trained OCR to get accurate recognition of the text. Image enhancement and quality improvement constitute important steps, as subsequent recognition depends upon the quality of the input image. There are scenarios where high resolution images are not available, and our experiments show that OCR accuracy reduces significantly with a decrease in the spatial resolution of document images. Thus the only option is to improve the resolution of such document images. The goal is to construct a high resolution image, given a single low resolution binary image, which constitutes the problem of single image super-resolution. Most of the previous work in super-resolution deals with natural images, which have more information content than document images. Here, we use a Convolutional Neural Network to learn the mapping between low resolution images and the corresponding high resolution images. We experiment with different numbers of layers, parameter settings and non-linear functions to build a fast end-to-end framework for document image super-resolution. Our proposed model shows a very good PSNR improvement of about 4 dB on 75 dpi Tamil images, resulting in a 3% improvement of word-level accuracy by the OCR. It takes less time than the recent sparse-based natural image super-resolution technique, making it useful for real-time document recognition applications."} {"_id": "3e6d1864887c440b7d78b4745110645803577098", "title": "Dual-Band Omnidirectional Circularly Polarized Antenna", "text": "A dual-band omnidirectional circularly polarized antenna is proposed. The antenna comprises back-to-back microstrip patches fed by a coplanar waveguide. A very low frequency ratio of 1.182 has been achieved, which can be easily tuned by adjusting four lumped capacitors incorporated into the antenna. An analysis of the omnidirectional circular polarization mechanism as well as the dual-band operation is provided and confirmed by numerical and experimental data. Key parameters to tune the resonant frequencies and the axial ratio have been identified. The prototype antenna provides omnidirectional circular polarization in one plane with cross-polar isolation better than 12 dB for both frequency bands."} {"_id": "fb9f09a906ad75395020e9bae5b51449fe58d49f", "title": "Beyond Bitcoin - Part I: A critical look at blockchain-based systems", "text": "After more than six years from the launch of Bitcoin, it has become evident that the decentralized transaction ledger functionality implemented through the blockchain technology can be used not only for cryptocurrencies, but to register, confirm and transfer any kind of contract and property. In this work we analyze the most relevant functionalities and known issues of this technology, with the intent of pointing out the possible behaviours that are not as efficient as they should be when considered with a broader outlook. Our analysis is the starting point for the introduction of a new approach to blockchain creation and management, which will be the subject of a forthcoming paper."} {"_id": "65bfd757ebd1712b2ed0fa3a0529b8aae1427f33", "title": "Secure Medical Data Transmission Model for IoT-Based Healthcare Systems", "text": "Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security and integrity of medical data have become big challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. 
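The low-to-high-resolution mapping described in the document super-resolution abstract above can be sketched as a small convolutional network; the 9-1-5 layer pattern below follows the common SRCNN design and is an assumption, not the paper's reported configuration:

```python
import torch
import torch.nn as nn

class DocSRNet(nn.Module):
    """Minimal SRCNN-style model: takes a bicubic-upscaled low-resolution
    grayscale document image and predicts the high-resolution image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

model = DocSRNet()
upscaled = torch.rand(1, 1, 64, 64)   # bicubic-upscaled input, N x C x H x W
print(model(upscaled).shape)          # torch.Size([1, 1, 64, 64])
```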
The proposed model is developed through integrating either 2-D discrete wavelet transform 1 level (2D-DWT-1L) or 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption schema is built using a combination of Advanced Encryption Standard, and Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters; the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values were relatively varied from 50.59 to 57.44 in case of color images and from 50.52 to 56.09 with the gray scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray scale images. The BER values were zero for both images, while SSIM, SC, and correlation values were ones for both images. Compared with the state-of-the-art methods, the proposed model proved its ability to hide the confidential patient\u2019s data into a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image."} {"_id": "3632d60d1bb84cab4ba624bf9726712f5c4216c8", "title": "Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives", "text": "We develop a theory characterizing optimal stopping times for discrete-time ergodic Markov processes with discounted rewards. The theory differs from prior work by its view of per-stage and terminal reward functions as elements of a certain Hilbert space. In addition to a streamlined analysis establishing existence and uniqueness of a solution to Bellman's equation, this approach provides an elegant framework for the study of approximate solutions. In particular, we propose a stochastic approximation algorithm that tunes weights of a linear combination of basis functions in order to approximate a value function. We prove that this algorithm converges (almost surely) and that the limit of convergence has some desirable properties. We discuss how variations on this line of analysis can be used to develop similar results for other classes of optimal stopping problems, including those involving independent increment processes, finite horizons, and two-player zero-sum games. We illustrate the approximation method with a computational case study involving the pricing of a path-dependent financial derivative security that gives rise to an optimal stopping problem with a one-hundred-dimensional state space."} {"_id": "06895dd27e67a039c24fe06adf6391bcb245d7af", "title": "Image and video abstraction by multi-scale anisotropic Kuwahara filtering", "text": "The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that is adapted to the local structure of image features. In this work, two limitations of the anisotropic Kuwahara filter are addressed. 
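Two of the six evaluation metrics reported in the medical-data steganography abstract above, MSE and PSNR, are simple to compute; a minimal numpy sketch, assuming 8-bit images:

```python
import numpy as np

def mse(cover, stego):
    """Mean squared error between cover and stego images."""
    cover = cover.astype(np.float64)
    stego = stego.astype(np.float64)
    return np.mean((cover - stego) ** 2)

def psnr(cover, stego, max_val=255.0):
    """Peak signal-to-noise ratio in dB (assumes an 8-bit pixel range)."""
    m = mse(cover, stego)
    return float('inf') if m == 0 else 10.0 * np.log10(max_val**2 / m)
```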
First, it is shown that by adding thresholding to the weighting term computation of the sectors, artifacts are avoided and smooth results in noise-corrupted regions are achieved. Second, a multi-scale computation scheme is proposed that simultaneously propagates local orientation estimates and filtering results up a low-pass filtered pyramid. This allows for a strong abstraction effect and avoids artifacts in large low-contrast regions. The propagation is controlled by the local variances and anisotropies that are derived during the computation without extra overhead, resulting in a highly efficient scheme that is particularly suitable for real-time processing on a GPU."} {"_id": "48caac2f65bce47f6d27400ae4f60d8395cec2f3", "title": "Stochastic Gradient Boosting", "text": "Gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to current \"pseudo\"-residuals by least-squares at each iteration. The pseudo-residuals are the gradient of the loss functional being minimized, with respect to the model values at each training data point, evaluated at the current step. It is shown that both the approximation accuracy and execution speed of gradient boosting can be substantially improved by incorporating randomization into the procedure. Specifically, at each iteration a subsample of the training data is drawn at random (without replacement) from the full training data set. This randomly selected subsample is then used in place of the full sample to fit the base learner and compute the model update for the current iteration. This randomized approach also increases robustness against overcapacity of the base learner. 1. Gradient Boosting. In the function estimation problem one has a system consisting of a random \"output\" or \"response\" variable $y$ and a set of random \"input\" or \"explanatory\" variables $\mathbf{x} = \{x_1, \ldots, x_n\}$. Given a \"training\" sample $\{y_i, \mathbf{x}_i\}_1^N$ of known $(y, \mathbf{x})$-values, the goal is to find a function $F^*(\mathbf{x})$ that maps $\mathbf{x}$ to $y$, such that over the joint distribution of all $(y, \mathbf{x})$-values, the expected value of some specified loss function $\Psi(y, F(\mathbf{x}))$ is minimized: $F^*(\mathbf{x}) = \arg\min_{F(\mathbf{x})} E_{y,\mathbf{x}}\, \Psi(y, F(\mathbf{x}))$ (1). Boosting approximates $F^*(\mathbf{x})$ by an \"additive\" expansion of the form"} {"_id": "4fd01b9fad9502df657c578274818bc5f3cbe93f", "title": "Take a Look Around: Using Street View and Satellite Images to Estimate House Prices", "text": "When an individual purchases a home, they simultaneously purchase its structural features, its accessibility to work, and the neighborhood amenities. Some amenities, such as air quality, are measurable whilst others, such as the prestige or the visual impression of a neighborhood, are difficult to quantify. Despite the well-known impacts intangible housing features have on house prices, limited attention has been given to systematically quantifying these difficult-to-measure amenities. Two issues have led to this neglect. Not only do few quantitative methods exist that can measure the urban environment, but the collection of such data is also costly and subjective. We show that street image and satellite image data can capture these urban qualities and improve the estimation of house prices. We propose a pipeline that uses a deep neural network model to automatically extract visual features from images to estimate house prices in London, UK. 
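The randomized update at the heart of the stochastic gradient boosting abstract above fits in a few lines; a minimal sketch for least-squares loss, using scikit-learn trees as base learners (the paper's formulation is loss-generic, so this is an illustration rather than its exact procedure):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def stochastic_gradient_boost(X, y, n_rounds=100, lr=0.1, subsample=0.5):
    """Gradient boosting with L2 loss. The key randomization described
    above: each base learner is fit to the pseudo-residuals of a random
    subsample drawn without replacement. X, y are numpy arrays."""
    F = np.full(len(y), y.mean())        # initial constant model
    trees = []
    rng = np.random.default_rng(0)
    n_sub = int(subsample * len(y))
    for _ in range(n_rounds):
        residuals = y - F                # pseudo-residuals for L2 loss
        idx = rng.choice(len(y), size=n_sub, replace=False)
        tree = DecisionTreeRegressor(max_depth=3).fit(X[idx], residuals[idx])
        F += lr * tree.predict(X)        # model update on the full sample
        trees.append(tree)
    return y.mean(), trees               # constant term plus the ensemble
```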
We make use of traditional housing features such as age, size and accessibility as well as visual features from Google Street View images and Bing aerial images in estimating the house price model. We find encouraging results where learning to characterize the urban quality of a neighborhood improves house price prediction, even when generalizing to previously unseen London boroughs. We explore the use of non-linear vs. linear methods to fuse these cues with conventional models of house pricing, and show how the interpretability of linear models allows us to directly extract the visual desirability of neighborhoods as proxy variables that are both of interest in their own right, and could be used as inputs to other econometric methods. This is particularly valuable as once the network has been trained with the training data, it can be applied elsewhere, allowing us to generate vivid dense maps of the desirability of London streets."} {"_id": "4a2690b2f504e192b77615fdd7b55be8527d1486", "title": "CARDIAC: An Intelligent Conversational Assistant for Chronic Heart Failure Patient Heath Monitoring", "text": "We describe CARDIAC, a prototype for an intelligent conversational assistant that provides health monitoring for chronic heart failure patients. CARDIAC supports user initiative through its ability to understand natural language and connect it to intention recognition. The natural language interface allows patients to interact with CARDIAC without special training. The system is designed to understand information that arises spontaneously in the course of the interview. If the patient gives more detail than necessary for answering a question, the system updates the user model accordingly. CARDIAC is a first step towards developing cost-effective, customizable, automated in-home conversational assistants that help patients manage their care and monitor their health using natural language."} {"_id": "a05ad7eb010dcda96f4b76c56cdf1dce14ea49ed", "title": "A Queueing Network Model of Patient Flow in an Accident and Emergency Department", "text": "In many complex processing systems with limited resources, fast response times are demanded, but are seldom delivered. This is an especially serious problem in healthcare systems providing critical patient care. In this paper, we develop a multiclass Markovian queueing network model of patient flow in the Accident and Emergency department of a major London hospital. Using real patient timing data to help parameterise the model, we solve for moments and probability density functions of patient response time using discrete event simulation. We experiment with different patient handling priority schemes and compare the resulting response time moments and densities with real data."} {"_id": "20755e1889af2f39bc0ed6eeafe8469aeff3073f", "title": "A near-optimal algorithm for computing the entropy of a stream", "text": "We describe a simple algorithm for approximating the empirical entropy of a stream of m values in a single pass, using O(\u03b5^{-2} log(\u0394^{-1}) log m) words of space. Our algorithm is based upon a novel extension of a method introduced by Alon, Matias, and Szegedy [1]. We show a space lower bound of \u03a9(\u03b5^{-2} / log(\u03b5^{-1})), meaning that our algorithm is near-optimal in terms of its dependency on \u03b5. This improves over previous work on this problem [8, 13, 17, 5]. We show that generalizing to k-th order entropy requires close to linear space for all k \u2265 1, and give additive approximations using our algorithm. 
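The quantity that the streaming algorithm above approximates is the empirical entropy of the stream; an exact (linear-space) plug-in computation makes the target concrete, in contrast to the paper's single-pass, sublinear-space sketch:

```python
import math
from collections import Counter

def empirical_entropy(stream):
    """Plug-in estimate H = sum_i (m_i/m) * log2(m/m_i), where m_i is the
    count of value i and m the stream length. This exact version keeps a
    counter per distinct value; the paper approximates H in far less space."""
    counts = Counter(stream)
    m = sum(counts.values())
    return sum((c / m) * math.log2(m / c) for c in counts.values())

print(empirical_entropy("abracadabra"))  # ~2.04 bits
```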
Lastly, we show how to compute a multiplicative approximation to the entropy of a random walk on an undirected graph."} {"_id": "53ea5aa438e041820d2fd9413d2b3aaf87a95212", "title": "Decoding as Continuous Optimization in Neural Machine Translation", "text": "In this work, we propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation. The resulting optimisation problem can then be tackled using a whole range of continuous optimisation algorithms which have been developed and used in the literature mainly for training. Our approach is general and can be applied to other sequence-to-sequence neural models as well. We make use of this powerful decoding approach to intersect an underlying NMT with a language model, to intersect left-to-right and right-to-left NMT models, and to decode with soft constraints involving coverage and fertility of the source sentence words. The experimental results show the promise of the proposed framework."} {"_id": "908cb227cf6e3138157b47d057db0c700be4d7f2", "title": "Integrated virtual commissioning an essential activity in the automation engineering process: From virtual commissioning to simulation supported engineering", "text": "Production plants within manufacturing and process industries are becoming more and more complex systems. For safe and demand-sufficient production, the automation system of a production plant is mission critical. Therefore, the correct functioning and engineering of the automation system are essential. Nowadays a virtual commissioning step is used more often before deploying the automation software on the real production plant. Within virtual commissioning, the automation software is tested against a virtual plant model sufficient to check the correct functioning of the automation system. Usually virtual commissioning is used as a step at the end of the automation engineering process, as proposed by VDI 4499. Within this paper an integrated virtual commissioning is proposed, where the automation software is continuously tested against a virtual plant model during the overall engineering phase. The virtual plant grows together with the automation software and thus enables simulation-supported automation engineering. Benefits of the proposed workflow are that errors can be detected immediately after implementation and that the use of the virtual plant model can be extended to various stages of the engineering workflow."} {"_id": "42f49ab31c43f28ca5ee68a0cb19406e77321cc3", "title": "Agile Flight Control Techniques for a Fixed-Wing Aircraft by Frantisek Michal Sobolic", "text": "As unmanned aerial vehicles (UAVs) become more involved in challenging mission objectives, the need for agility-controlled flight becomes more of a necessity. The ability to navigate through constrained environments as well as quickly maneuver to each mission target is essential. Currently, individual vehicles are developed with a particular mission objective, whether it be persistent surveillance or fly-by reconnaissance. Fixed-wing vehicles with a high thrust-to-weight ratio are capable of performing maneuvers such as take-off or perch-style landing and switching between hover and conventional flight modes. Agile flight controllers enable a single vehicle to achieve multiple mission objectives. By utilizing the knowledge of the flight dynamics through all flight regimes, nonlinear controllers can be developed that control the aircraft in a single design.
This thesis develops a full six-degree-of-freedom model for a fixed-wing propeller-driven aircraft along with methods of control through nonconventional flight regimes. In particular, these controllers focus on transitioning between hover and level flight modes. This maneuver poses hardships for conventional linear control architectures because these flights involve regions of the post-stall regime, which is highly nonlinear due to separation of flow over the lifting surfaces. Using Lyapunov backstepping control stability theory as well as quaternion-based control methods, control strategies are developed that stabilize the aircraft through these flight regimes without the need to switch control schemes. The effectiveness of each control strategy is demonstrated in both simulation and flight experiments. Thesis Supervisor: Jonathan P. How Title: Professor"} {"_id": "1319bf6218cbcd85ac7512991447ecd9d776577d", "title": "Task constraints in visual working memory", "text": "This paper examines the nature of visual representations that direct ongoing performance in sensorimotor tasks. Performance of such natural tasks requires relating visual information from different gaze positions. To explore this we used the technique of making task-relevant display changes during saccadic eye movements. Subjects copied a pattern of colored blocks on a computer monitor, using the mouse to drag the blocks across the screen. Eye position was monitored using a dual-Purkinje eye tracker, and the color of blocks in the pattern was changed at different points in task performance. When the target of the saccade changed color during the saccade, the duration of fixations on the model pattern increased, depending on the point in the task that the change was made. Thus different fixations on the same visual stimulus served a different purpose. The results also indicated that the visual information that is retained across successive fixations depends on moment-by-moment task demands. This is consistent with previous suggestions that visual representations are limited and task dependent. Changes in blocks in addition to the saccade target led to greater increases in fixation duration. This indicated that some global aspect of the pattern was retained across different fixations. Fixation durations revealed effects of the display changes that were not revealed in perceptual report. This can be understood by distinguishing between processes that operate at different levels of description and different time scales. Our conscious experience of the world may reflect events over a longer time scale than those underlying the substructure of the perceptuo-motor machinery."} {"_id": "b68b04cb44f2d0a9762715fabb4005756926374f", "title": "The role of visual attention in saccadic eye movements.", "text": "The relationship between saccadic eye movements and covert orienting or visual spatial attention was investigated in two experiments. In the first experiment, subjects were required to make a saccade to a specified location while also detecting a visual target presented just prior to the eye movement. Detection accuracy was highest when the location of the target coincided with the location of the saccade, suggesting that subjects use spatial attention in the programming and/or execution of saccadic eye movements. In the second experiment, subjects were explicitly directed to attend to a particular location and to make a saccade to the same location or to a different one.
Superior target detection occurred at the saccade location regardless of attention instructions. This finding shows that subjects cannot move their eyes to one location and attend to a different one. The results of these experiments suggest that visuospatial attention is an important mechanism in generating voluntary saccadic eye movements."} {"_id": "b73f2d7b58bfc555d8037b3fdb673c4cec1aecf0", "title": "Modeling attention to salient proto-objects", "text": "Selective visual attention is believed to be responsible for serializing visual information for recognizing one object at a time in a complex scene. But how can we attend to objects before they are recognized? In the coherence theory of visual cognition, so-called proto-objects form volatile units of visual information that can be accessed by selective attention and subsequently validated as actual objects. We propose a biologically plausible model of forming and attending to proto-objects in natural scenes. We demonstrate that the suggested model can enable a model of object recognition in cortex to expand from recognizing individual objects in isolation to sequentially recognizing all objects in a more complex scene."} {"_id": "03406ec0118137ca1ab734a8b6b3678a35a43415", "title": "A Morphable Model for the Synthesis of 3D Faces", "text": "In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer-aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an \u201cunlikely\u201d appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness."} {"_id": "26e8f3cc46b710a013358bf0289df4de033446ba", "title": "Maximum likelihood from incomplete data via the EM - algorithm plus discussions on the paper", "text": "A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations, applications to grouped, censored or truncated data, finite mixture models, variance component estimation, hyperparameter estimation, iteratively reweighted least squares and factor analysis."} {"_id": "6b3d644ac7cd884961df181d3f4cab2452a6a217", "title": "Simple to complex cross-modal learning to rank", "text": "The heterogeneity-gap between different modalities brings a significant challenge to multimedia information retrieval.
Some studies formalize the cross-modal retrieval tasks as a ranking problem and learn a shared multi-modal embedding space to measure the cross-modality similarity. However, previous methods often establish the shared embedding space based on linear mapping functions which might not be sophisticated enough to reveal more complicated inter-modal correspondences. Additionally, current studies assume that the rankings are of equal importance, and thus all rankings are used simultaneously, or a small number of rankings are selected randomly to train the embedding space at each iteration. Such strategies, however, always suffer from outliers as well as reduced generalization capability due to their lack of insight into the procedure of human cognition. In this paper, we incorporate self-paced learning theory with diversity into cross-modal learning to rank and learn an optimal multi-modal embedding space based on non-linear mapping functions. This strategy enhances the model\u2019s robustness to outliers and achieves better generalization via training the model gradually from easy rankings by diverse queries to more complex ones. An efficient alternative algorithm is exploited to solve the proposed challenging problem with fast convergence in practice. Extensive experimental results on several benchmark datasets indicate that the proposed method achieves significant improvements over the state-of-the-art in this literature."} {"_id": "394dcd9679bd27023ea1d6eb0f62ef13353c347e", "title": "The Agile Requirements Refinery: Applying SCRUM Principles to Software Product Management", "text": "Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now. In this paper, this gap is filled by the introduction of a method for the application of SCRUM principles to software product management. For this purpose, the 'agile requirements refinery' is presented, an extension to the SCRUM process that enables product managers to cope with large requirements in an agile development environment. A real-life case study is presented to illustrate how agile methods can be applied to software product management. The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process."} {"_id": "7c135538e31fba64b798d3dd618a2ba01d11ca57", "title": "Determinants of Successful Knowledge Management Programs", "text": "The main objective of this paper is to investigate and identify the main determinants of successful knowledge management (KM) programs. We draw upon institutional theory and the theory of technology assimilation to develop an integrative model of KM success that clarifies the role of information technology (IT) in relation to other important KM infrastructural capabilities and to KM process capabilities. We argue that the role of IT cannot be studied in isolation and that the effect of IT on KM success is fully mediated by KM process capabilities. The research model is tested with a survey study involving 191 KM practitioners. The empirical results provided strong support for the model.
In addition to its theoretical contributions, this study also presents important practical implications through the identification of specific infrastructural capabilities leading to KM success."} {"_id": "1e50a3f7edd649a11acc22fef4c8b993f2ceba74", "title": "Potential of Estimating Soil Moisture Under Vegetation Cover by Means of PolSAR", "text": "In this paper, the potential of using polarimetric SAR (PolSAR) acquisitions for the estimation of volumetric soil moisture under agricultural vegetation is investigated. Soil-moisture estimation by means of SAR is a topic that is intensively investigated but not yet solved satisfactorily. The key problem is the presence of vegetation cover which biases soil-moisture estimates. In this paper, we discuss the problem of soil-moisture estimation in the presence of agricultural vegetation by means of L-band PolSAR images. SAR polarimetry allows the decomposition of the scattering signature into canonical scattering components and their quantification. We discuss simple canonical models for surface, dihedral, and vegetation scattering and use them to model and interpret scattering processes. The performance and modifications of the individual scattering components are discussed. The obtained surface and dihedral components are then used to retrieve surface soil moisture. The investigations cover, for the first time, the whole vegetation-growing period for three crop types using SAR data and ground measurements acquired in the frame of the AgriSAR campaign."} {"_id": "108200780ff50d766c1a13e2a5095d59b5a5421f", "title": "Can Twitter be used to predict county excessive alcohol consumption rates?", "text": "OBJECTIVES\nThe current study analyzes a large set of Twitter data from 1,384 US counties to determine whether excessive alcohol consumption rates can be predicted by the words being posted from each county.\n\n\nMETHODS\nData from over 138 million county-level tweets were analyzed using predictive modeling, differential language analysis, and mediating language analysis.\n\n\nRESULTS\nTwitter language data captures cross-sectional patterns of excessive alcohol consumption beyond that of sociodemographic factors (e.g. age, gender, race, income, education), and can be used to accurately predict rates of excessive alcohol consumption. Additionally, mediation analysis found that Twitter topics (e.g. 'ready gettin leave') can explain much of the association between socioeconomics and excessive alcohol consumption.\n\n\nCONCLUSIONS\nTwitter data can be used to predict public health concerns such as excessive drinking. Using mediation analysis in conjunction with predictive modeling allows for a high portion of the variance associated with socioeconomic status to be explained."} {"_id": "9704ce70f2013c7a1b0947222e68bf7da833b249", "title": "Effects of coffee on ambulatory blood pressure in older men and women: A randomized controlled trial.", "text": "This study assessed the effects of regular coffee drinking on 24-hour ambulatory blood pressure (ABP) in normotensive and hypertensive older men and women. Twenty-two normotensive and 26 hypertensive, nonsmoking men and women, with a mean age of 72.1 years (range, 54 to 89 years), took part in the study. After 2 weeks of a caffeine-free diet, subjects were randomized to continue with the caffeine-free diet and abstain from caffeine-containing drinks or drink instant coffee (5 cups per day, equivalent to 300 mg caffeine per day) in addition to the caffeine-free diet for a further 2 weeks.
Change in systolic and diastolic blood pressures (SBP, DBP) determined by 24-hour ambulatory BP monitoring showed significant interactions between coffee drinking and hypertension status. In the hypertensive group, rise in mean 24-hour SBP was greater by 4.8 (SEM, 1.3) mm Hg (P=0.031) and increase in mean 24-hour DBP was higher by 3.0 (1.0) mm Hg (P=0.010) in coffee drinkers than in abstainers. There were no significant differences between abstainers and coffee drinkers in the normotensive group for 24-hour, daytime, or nighttime SBP or DBP. In older men and women with treated or untreated hypertension, ABP increased in coffee drinkers and decreased in abstainers. Restriction of coffee intake may be beneficial in older hypertensive individuals."} {"_id": "aa9d4e6732320de060c4eccb6134fe579ad6428c", "title": "Efficient region search for object detection", "text": "We propose a branch-and-cut strategy for efficient region-based object detection. Given an oversegmented image, our method determines the subset of spatially contiguous regions whose collective features will maximize a classifier's score. We formulate the objective as an instance of the prize-collecting Steiner tree problem, and show that for a family of additive classifiers this enables fast search for the optimal object region via a branch-and-cut algorithm. Unlike existing branch-and-bound detection methods designed for bounding boxes, our approach allows scoring of irregular shapes \u2014 which is especially critical for objects that do not conform to a rectangular window. We provide results on three challenging object detection datasets, and demonstrate the advantage of rapidly seeking best-scoring regions rather than subwindow rectangles."} {"_id": "38cace070ec3151f4af4477da3798dd5196e1476", "title": "Reproducible Experiments for Comparing Apache Flink and Apache Spark on Public Clouds", "text": "Big data processing is a hot topic in today\u2019s computer science world. There is a significant demand for analysing big data to satisfy the requirements of many industries. Emergence of the Kappa architecture created a strong requirement for a highly capable and efficient data processing engine. Therefore data processing engines such as Apache Flink and Apache Spark emerged in the open source world to fulfill that requirement for efficient, high-performing data processing. There are many available benchmarks to evaluate those two data processing engines. But complex deployment patterns and dependencies make those benchmarks very difficult to reproduce on our own. This project has two main goals. They are to make a few community-accepted benchmarks easily reproducible on the cloud and to validate the performance claimed by those studies. Keywords\u2013 Data Processing, Apache Flink, Apache Spark, Batch processing, Stream processing, Reproducible experiments, Cloud"} {"_id": "31d2423a14db6f87bdeba678ad7b4304f07dd565", "title": ">1.3-Tb/s VCSEL-Based On-Board Parallel-Optical Transceiver Module for High-Density Optical Interconnects", "text": "This paper gives a detailed description of a >1.3-Tb/s VCSEL-based on-board parallel-optical transceiver module for high-density optical interconnects. The optical module integrates a 28-Gb/s \u00d7 24-channel transmitter and receiver into one package with a 1-in^2 footprint, thereby yielding a data rate density as high as 1.34\u00a0Tb/s/in^2. A unique module design is developed to utilize the whole top surface for thermal dissipation and the bottom surface for optical and electrical interfaces.
The heat dissipation characteristics are studied in simulations and experiments. A test system with a 24-channel optical loop-back link is built to evaluate the performance when operating all 24 channels by 28.05-Gb/s 2^31-1 PRBS bit streams simultaneously. With a total power consumption of 9.1\u00a0W, the optical module can be operated within the operating case temperature under a practical air cooling condition. When operating all 24 channels simultaneously, a total jitter margin at a BER of 10^-12 is larger than 0.4\u00a0U.I. at a monitor channel. The measured jitter margin is consistent regardless of any channel operation."} {"_id": "61ef28caf5f2f72a2f66d27e5710f94653e13d9c", "title": "Random projection-based multiplicative data perturbation for privacy preserving distributed data mining", "text": "This paper explores the possibility of using multiplicative random projection matrices for privacy preserving distributed data mining. It specifically considers the problem of computing statistical aggregates like the inner product matrix, correlation coefficient matrix, and Euclidean distance matrix from distributed privacy sensitive data possibly owned by multiple parties. This class of problems is directly related to many other data-mining problems such as clustering, principal component analysis, and classification. This paper makes primary contributions on two different grounds. First, it explores independent component analysis as a possible tool for breaching privacy in deterministic multiplicative perturbation-based models such as random orthogonal transformation and random rotation. Then, it proposes an approximate random projection-based technique to improve the level of privacy protection while still preserving certain statistical characteristics of the data. The paper presents extensive theoretical analysis and experimental results. Experiments demonstrate that the proposed technique is effective and can be successfully used for different types of privacy-preserving data mining applications."} {"_id": "667d26ab623fc8c578895b7e88326dd590d696de", "title": "Anusaaraka: Machine Translation in Stages", "text": "Fully-automatic general-purpose high-quality machine translation systems (FGH-MT) are extremely difficult to build. In fact, there is no system in the world for any pair of languages which qualifies to be called FGH-MT. The reasons are not far to seek. Translation is a creative process which involves interpretation of the given text by the translator. Translation would also vary depending on the audience and the purpose for which it is meant. This would explain the difficulty of building a machine translation system. Since the machine is not capable of interpreting a general text with sufficient accuracy automatically at present, let alone re-expressing it for a given audience, it fails to perform as FGH-MT. (Footnote: The major difficulty that the machine faces in interpreting a given text is the lack of general world knowledge or common-sense knowledge.)"} {"_id": "3bd350b0a5dc5d059707fc30ae524c8ff1b92e28", "title": "Automatic Audio Content Analysis", "text": "This paper describes the theoretical framework and applications of automatic audio content analysis. After explaining the basic properties of audio analysis, we present a toolbox that forms the basis for the development of audio analysis algorithms.
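Typical building blocks of such an audio analysis toolbox are short-time features computed over frames of the signal. The sketch below (my own illustration, not the paper's toolbox) computes RMS energy and zero-crossing rate, two features commonly used as low-level cues in content analysis:

```python
# Minimal short-time audio feature extraction: frame the signal, then compute
# RMS energy (loudness proxy) and zero-crossing rate (noisiness proxy) per frame.
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def short_time_features(x):
    frames = frame_signal(x)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return rms, zcr

sr = 16000
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz tone
rms, zcr = short_time_features(x)
print(rms.mean(), zcr.mean())
```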
We also describe new applications which can be developed using the toolset, among them music indexing and retrieval as well as violence detection in the soundtrack of videos."} {"_id": "d4e3795452bfc87f388e0c1a5fb519d1558effd0", "title": "LEARNING USING A SIAMESE CONVOLUTIONAL NEURAL NETWORK", "text": "In this paper we describe learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 Norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors for non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches good performance and achieves state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets."} {"_id": "1a25c7f8fadb01debaf574ba366c275d1a94dd73", "title": "Wearable/disposable sweat-based glucose monitoring device with multistage transdermal drug delivery module", "text": "Electrochemical analysis of sweat using soft bioelectronics on human skin provides a new route for noninvasive glucose monitoring without painful blood collection. However, sweat-based glucose sensing still faces many challenges, such as difficulty in sweat collection, activity variation of glucose oxidase due to lactic acid secretion and ambient temperature changes, and delamination of the enzyme when exposed to mechanical friction and skin deformation. Precise point-of-care therapy in response to the measured glucose levels is still very challenging. We present a wearable/disposable sweat-based glucose monitoring device integrated with a feedback transdermal drug delivery module. Careful multilayer patch design and miniaturization of sensors increase the efficiency of the sweat collection and sensing process. Multimodal glucose sensing, as well as its real-time correction based on pH, temperature, and humidity measurements, maximizes the accuracy of the sensing. The minimal layout design of the same sensors also enables a strip-type disposable device. Drugs for the feedback transdermal therapy are loaded on two different temperature-responsive phase change nanoparticles. These nanoparticles are embedded in hyaluronic acid hydrogel microneedles, which are additionally coated with phase change materials. This enables multistage, spatially patterned, and precisely controlled drug release in response to the patient\u2019s glucose level. The system provides a novel closed-loop solution for the noninvasive sweat-based management of diabetes mellitus."} {"_id": "a5d0083902cdd74a4e22805a01688b07e3d01883", "title": "Generalized neural-network representation of high-dimensional potential-energy surfaces.", "text": "The accurate description of chemical processes often requires the use of computationally demanding methods like density-functional theory (DFT), making long simulations of large systems unfeasible.
In this Letter we introduce a new kind of neural-network representation of DFT potential-energy surfaces, which provides the energy and forces as a function of all atomic positions in systems of arbitrary size and is several orders of magnitude faster than DFT. The high accuracy of the method is demonstrated for bulk silicon and compared with empirical potentials and DFT. The method is general and can be applied to all types of periodic and nonperiodic systems."} {"_id": "a82a54cfee4556191546b57d3fd94f00c03dc95c", "title": "A universal SNP and small-indel variant caller using deep neural networks", "text": "Despite rapid advances in sequencing technologies, accurately calling genetic variants present in an individual genome from billions of short, errorful sequence reads remains challenging. Here we show that a deep convolutional neural network can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships between images of read pileups around putative variant and true genotype calls. The approach, called DeepVariant, outperforms existing state-of-the-art tools. The learned model generalizes across genome builds and mammalian species, allowing nonhuman sequencing projects to benefit from the wealth of human ground-truth data. We further show that DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, including deep whole genomes from 10X Genomics and Ion Ampliseq exomes, highlighting the benefits of using more automated and generalizable techniques for variant calling."} {"_id": "6769f95f97db9d67fe3e5a1fa96195d5e7e03f64", "title": "Risk Management for Service-Oriented Systems", "text": "Web service technology can be used for integrating heterogeneous and autonomous applications into cross-organizational systems. A key problem is to support a high quality of service-oriented systems despite vulnerabilities caused by the use of external web services. One important aspect that has received little attention so far is risk management for such systems. This paper discusses risks peculiar to service-based systems, their impact and ways of mitigation. In the context of service-oriented design, risks can be reduced by selection of appropriate business partners, web service discovery, service composition and Quality of Service (QoS) management. Advisors. Vincenzo D\u2019Andrea."} {"_id": "a9fb580a2c961c4921709dd5d7425a12a7c3fc64", "title": "A Questionnaire Approach based on the Technology Acceptance Model for Mobile tracking on Patient progress Applications", "text": "Healthcare professionals spend much of their time wandering between patients and offices, while the supportive technology stays stationary. Therefore, mobile applications have been adapted for the healthcare industry. In spite of the advancement and variety of available mobile-based applications, there is an evident need to investigate the current state of acceptance of those mobile health applications that are tailored towards tracking patients' condition, sharing patient information and access. Consequently, in this study a Technology Acceptance Model has been designed to investigate user acceptance of mobile technology applications within the healthcare industry. The purpose of this study is to design a quantitative approach based on the technology acceptance model questionnaire as its primary research methodology. It utilized a quantitative approach based on a Technology Acceptance Model (TAM) to evaluate the mobile tracking system model.
The related constructs for evaluation are: Perceived Usefulness, Perceived Ease of Use, User Satisfaction and Attributes of Usability. All these constructs are modified to suit the context of the study. Moreover, this study outlines the details of each construct and its relevance toward the research issue. The outcome of the study represents a series of approaches that will be applied to check the suitability of a mobile patient-progress tracking application for the healthcare industry and how well it achieves the aims and objectives of the design."} {"_id": "e5a636c6942c5ea72cf62ed962bb7c2996efe251", "title": "Propulsion Drive Models for Full Electric Marine Propulsion Systems", "text": "Integrated full electric propulsion systems are being introduced across both civil and military marine sectors. Standard power systems analysis packages cover electrical and electromagnetic components, but have limited models of mechanical subsystems and their controllers. Hence electromechanical system interactions between the prime movers, power network and driven loads are poorly understood. This paper reviews available models of the propulsion drive system components: the power converter, motor, propeller and ship. Due to the wide range of time-constants in the system, reduced order models of the power converter are required. A new model using state-averaged models of the inverter and a hybrid model of the rectifier is developed to give an effective solution combining accuracy with speed of simulation and an appropriate interface to the electrical network model. Simulation results for a typical ship manoeuvre are presented."} {"_id": "b44c324af6b69c829ce2cc19de4daab3b1263ba0", "title": "Cooperative (rather than autonomous) vehicle-highway automation systems", "text": "The recent DARPA-sponsored automated vehicle \"Challenges\" have generated strong interest in both the research community and the general public, raising consciousness about the possibilities for vehicle automation. Driverless vehicles make good subjects for the visually-oriented media, and they pose enough interesting research challenges to occupy generations of graduate students. However, automated vehicles also have the potential to help solve a variety of real-world problems. Engineers need to think carefully about which of those problems we are actually solving. A well-engineered system should be designed to satisfy specific needs, and those needs should be reflected in the definition of system requirements. Alternative technological approaches can then be evaluated and traded off based on their ability to meet the requirements. The article describes the rather different needs of the public road transportation system and the military transportation system, and then shows how those needs influence the requirements for automated vehicle systems.
These requirements point toward significantly different technical approaches, but it is possible to find some limited areas of technical commonality."} {"_id": "a35d3d359f5cc8a307776acb9ee23c50175a964f", "title": "Transcending transmedia: emerging story telling structures for the emerging convergence platforms", "text": "Although the current paradigm for expanded participatory storytelling is the \"transmedia\" exploitation of the same storyworld on multiple platforms, it can be more productive to think of the digital medium as a single platform, combining all the functionalities we now associate with networked computers, game consoles, and conventionally delivered episodic television to enhance the traditional strengths and cultural functions of television. By looking at how the four affordances of the digital medium (the procedural, participatory, encyclopedic, and spatial) apply to three core characteristics of television (moving images, sustained storytelling, and individual viewing in the presence of a mass audience), we can identify areas of emerging conventions and widening opportunity for design innovation"} {"_id": "fb7ff16f840317c1f725f81f4f1cc8aafacb1516", "title": "A generic visual perception domain randomisation framework for Gazebo", "text": "The impressive results of applying deep neural networks in tasks such as object recognition, language translation, and solving digital games are largely attributed to the availability of massive amounts of high quality labelled data. However, despite numerous promising steps in incorporating these techniques in robotic tasks, the cost of gathering data with a real robot has halted the proliferation of deep learning in robotics. In this work, a plugin for the Gazebo simulator is presented, which allows rapid generation of synthetic data. By introducing variations in simulator-specific or irrelevant aspects of a task, one can train a model which exhibits some degree of robustness against those aspects, and ultimately narrow the reality gap between simulated and real-world data. To show a use-case of the developed software, we build a new dataset for detection and localisation of three object classes: box, cylinder and sphere. Our results in the object detection and localisation task demonstrate that with small datasets generated only in simulation, one can achieve comparable performance to that achieved when training on real-world images."} {"_id": "31c3790b30bc27c7e657f1ba5c5421713d72474a", "title": "Fast single image fog removal using the adaptive Wiener filter", "text": "We present in this paper a fast single image defogging method that uses a novel approach to refining the estimate of amount of fog in an image with the Locally Adaptive Wiener Filter. We provide a solution for estimating noise parameters for the filter when the observation and noise are correlated by decorrelating with a naively estimated defogged image. We demonstrate our method is 50 to 100 times faster than existing fast single image defogging methods and that our proposed method subjectively performs as well as the Spectral Matting smoothed Dark Channel Prior method."} {"_id": "e4d422ba556732bb90bb7b17636552af2cd3a26e", "title": "Probability Risk Identification Based Intrusion Detection System for SCADA Systems", "text": "As Supervisory Control and Data Acquisition (SCADA) systems control several critical infrastructures, they have been connected to the internet. Consequently, SCADA systems face different sophisticated types of cyber adversaries.
This paper suggests a Probability Risk Identification based Intrusion Detection System (PRI-IDS) technique based on analysing network traffic of Modbus TCP/IP for identifying replay attacks. It is acknowledged that Modbus TCP is usually vulnerable due to its unauthenticated and unencrypted nature. Our technique is evaluated using a simulation environment by configuring a testbed, which is a custom SCADA network that is cheap, accurate and scalable. The testbed is exploited when testing the IDS by sending individual packets from an attacker located on the same LAN as the Modbus master and slave. The experimental results demonstrated that the proposed technique can effectively and efficiently recognise replay attacks."} {"_id": "cc98157b70d7cf464b880668d7694edd12188157", "title": "An Implementation of Intrusion Detection System Using Genetic Algorithm", "text": "Nowadays it is very important to maintain a high level of security to ensure safe and trusted communication of information between various organizations. But secure data communication over the internet and any other network is always under threat of intrusions and misuse. Intrusion Detection Systems have thus become a necessary component of computer and network security. There are various approaches being utilized in intrusion detection, but unfortunately none of the systems so far is completely flawless. So the quest for improvement continues. In this progression, we present an Intrusion Detection System (IDS) that applies a genetic algorithm (GA) to efficiently detect various types of network intrusions. Parameters and evolution processes for the GA are discussed in detail and implemented. This approach applies evolution theory to information evolution in order to filter the traffic data and thus reduce the complexity. To implement and measure the performance of our system we used the KDD99 benchmark dataset and obtained a reasonable detection rate."} {"_id": "2f991be8d35e4c1a45bfb0d646673b1ef5239a1f", "title": "Model-Agnostic Interpretability of Machine Learning", "text": "Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating the machine learning models as black-box functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models.
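The perturbation-based idea behind model-agnostic explanations can be sketched in a few lines: query the black-box model around an instance and fit a proximity-weighted linear surrogate whose coefficients act as local feature attributions. This is a simplification of the general recipe (and of LIME, reviewed next), not a reference implementation; the kernel width and sample count below are arbitrary choices:

```python
# Minimal LIME-style local explanation: perturb an instance, query the black box,
# and fit a proximity-weighted linear surrogate as a local approximation.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_fn, x, n_samples=500, scale=0.3, rng=None):
    rng = rng or np.random.default_rng(0)
    Z = x + scale * rng.standard_normal((n_samples, x.size))  # local perturbations
    y = predict_fn(Z)                                         # black-box queries
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / scale ** 2)  # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_  # local feature attributions

# Toy black box: a nonlinear function of two features.
black_box = lambda Z: np.sin(Z[:, 0]) + Z[:, 1] ** 2
print(explain_locally(black_box, np.array([0.0, 1.0])))
```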
We also outline the main challenges for such methods, and review a recently-introduced model-agnostic explanation approach (LIME) that addresses these challenges."} {"_id": "546add32740ac350dda44bab06f56d4e206622ab", "title": "Safety Verification of Deep Neural Networks", "text": "Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on image manipulations, such as scratches or changes to camera angle or lighting conditions, and define safety for an image classification decision in terms of invariance of the classification with respect to manipulations of the original image within a region of images that are close to it. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness."} {"_id": "8db9df2eadea654f128c1887722c677c708e8a47", "title": "Deep Reinforcement Learning framework for Autonomous Driving", "text": "Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles. INTRODUCTION. A robot car that drives autonomously is a long-standing goal of Artificial Intelligence. Driving a vehicle is a task that requires a high level of skill, attention and experience from a human driver.
Although computers are more capable of sustained attention and focus than humans, fully autonomous driving requires a level of intelligence that surpasses that achieved so far by AI agents. The tasks involved in creating an autonomous driving agent can be divided into 3 categories, as shown in Figure 1: 1) Recognition: Identifying components of the surrounding environment. Examples of this are pedestrian detection, traffic sign recognition, etc. Although far from trivial, recognition is a relatively easy task nowadays thanks to advances in Deep Learning (DL) algorithms, which have reached human-level recognition or above at several object detection and classification problems [8] [2]. Deep learning models are able to learn complex feature representations from raw input data, omitting the need for handcrafted features [15] [2] [7]. In this regard, Convolutional Neural Networks (CNNs) are probably the most successful deep learning model, and have formed the basis of every winning entry on the ImageNet challenge since AlexNet [8]. This success has been replicated in lane & vehicle detection for autonomous driving [6]. 2) Prediction: It is not enough for an autonomous driving agent to recognize its environment; it must also be able to build internal models that predict the future states of the environment. Examples of this class of problem include building a map of the environment or tracking an object. To be able to predict the future, it is important to integrate past information. As such, Recurrent Neural Networks (RNNs) are essential to this class of problem. Long-Short Term Memory (LSTM) networks [5] are one such category of RNN that have been used in end-to-end scene labeling systems [14]. More recently, RNNs have also been used to improve object tracking performance in the DeepTracking model [13]. 3) Planning: The generation of an efficient model that incorporates recognition and prediction to plan the future sequence of driving actions that will enable the vehicle to navigate successfully. Planning is the hardest task of the three. The difficulty lies in integrating the ability of the model to understand the environment (recognition) and its dynamics (prediction) in a way that enables it to plan the future actions such that it avoids unwanted situations (penalties) and drives safely to its destination (rewards). Figure 1: High-level autonomous driving tasks. The Reinforcement Learning (RL) framework [17] [20] has been used for a long time in control tasks. The mixture of RL with DL was pointed out to be one of the most promising approaches to achieve human-level control in [9]. In [12] and [11] this human-level control was demonstrated on Atari games using the Deep Q Networks (DQN) model, in which RL is responsible for the planning part while DL is responsible for the representation learning part. Later, RNNs were integrated into the mixture to account for partially observable scenarios [4]. Autonomous driving requires the integration of information from multiple sensors. Some of them are low dimensional, like LIDAR, while others are high dimensional, like cameras. It is noteworthy in this particular example, however, that although raw camera images are high dimensional, the useful information needed to achieve the autonomous driving task is of much lower dimension. For example, the important parts of the scene that affect driving decisions are limited to the moving vehicle, free space on the road ahead, the position of kerbs, etc.
Even the fine details of vehicles are not important, as only their spatial location is truly necessary for the problem. Hence the memory bandwidth for relevant information is much lower. If this relevant information can be extracted, while the other non-relevant parts are filtered out, it would improve both the accuracy and efficiency of autonomous driving systems. Moreover, this would reduce the computation and memory requirements of the system, which are critical constraints on embedded systems that will contain the autonomous driving control unit. Attention models are a natural fit for such an information filtering process. Recently, these models were successfully deployed for image recognition in [23] and [10], wherein RL was mixed with RNNs to obtain the parts of the image to attend to. Such models are easily extended and integrated into the DQN [11] and Deep Recurrent Q Networks (DRQN) [4] models. This integration was performed in [16]. The success of attention models drives us to propose them for the extraction of low-level information from the raw sensory information to perform autonomous driving. In this paper, we propose a framework for an end-to-end autonomous driving model that takes in raw sensor inputs and outputs driving actions. The model is able to handle partially observable scenarios. Moreover, we propose to integrate the recent advances in attention models in order to extract only relevant information from the received sensors data, thereby making it suitable for real-time embedded systems. The main contributions of this paper: 1) presenting a survey of the recent advances of deep reinforcement learning and 2) introducing a framework for end-to-end autonomous driving using deep reinforcement learning to the automotive community. The rest of the paper is divided into two parts. The first part provides a survey of deep reinforcement learning algorithms, starting with the traditional MDP framework and Q-learning, followed by the DQN, DRQN and Deep Attention Recurrent Q Networks (DARQN). The second part of the paper describes the proposed framework that integrates the recent advances in deep reinforcement learning. Finally, we conclude and suggest directions for future work. REVIEW OF REINFORCEMENT LEARNING. For a comprehensive overview of reinforcement learning, please refer to the second edition of Rich Sutton\u2019s textbook [18]. We provide a short review of important topics in this section. The Reinforcement Learning framework was formulated in [17] as a model to provide the best policy an agent can follow (best action to take in a given state), such that the total accumulated rewards are maximized when the agent follows that policy from the current state until a terminal state is reached. Motivation for RL Paradigm. Driving is a multi-agent interaction problem. As a human driver, it is much easier to keep within a lane without any interaction with other cars than to change lanes in heavy traffic. The latter is more difficult because of the inherent uncertainty in behavior of other drivers. The number of interacting vehicles, their geometric configuration and the behavior of the drivers could have large variability and it is challenging to design a supervised learning dataset with exhaustive coverage of all scenarios. Human drivers employ some sort of online reinforcement learning to understand the behavior of other drivers such as whether they are defensive or aggressive, experienced or inexperienced, etc.
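The tabular Q-learning update at the root of the DQN line of work surveyed above can be sketched on a toy chain MDP. This is an illustration of the update rule Q(s,a) <- Q(s,a) + α[r + γ max_b Q(s',b) - Q(s,a)], not the paper's framework; the environment, reward and hyperparameters are invented for the example:

```python
# Tabular Q-learning on a toy chain MDP: states 0..4, goal at the right end.
import random

alpha, gamma, eps = 0.1, 0.95, 0.2   # learning rate, discount, exploration rate
n_states, actions = 5, [0, 1]        # action 0 = move left, 1 = move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
    return s2, reward, s2 == n_states - 1

for episode in range(500):
    s = 0
    for t in range(100):             # cap episode length
        if random.random() < eps:    # epsilon-greedy with random tie-breaking
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: (Q[(s, b)], random.random()))
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # TD update
        s = s2
        if done:
            break

print({s: max(Q[(s, b)] for b in actions) for s in range(n_states)})
```

DQN replaces the table Q with a neural network over raw observations, which is what makes the approach applicable to high-dimensional sensory input.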
Such online learning of other drivers' behavior is particularly useful in scenarios which need negotiation, namely entering a roundabout, navigating junctions without traffic lights, lane changes during heavy traffic, etc. The main challenge in autonomous driving is to deal with corner cases which are unexpected even for a human driver, like recovering from being lost in an unknown area without GPS or dealing with disaster situations like flooding or appearance of a sinkhole on the ground. The RL paradigm models uncharted territory and learns from its own experience by taking actions. Additionally, RL may be able to handle non-differentiable cost functions which can create challenges for supervised learning problems. Currently, the standard approach for autonomous driving is to decouple the system into isolated sub-problems, typically supervised-learning tasks like object detection and visual odometry, and then having a post-processing layer to combine all the results of the previous steps. There are two main issues with this approach: Firstly, the sub-problems which are solved may be more difficult than autonomous driving. For example, one might be solving object detection by semantic segmentation which is both challenging and unnecessary. Human drivers don\u2019t detect and classify all visible objects while driving, only the most relevant ones. Secondly, the isolated sub-problems may not combine coherently to achieve"} {"_id": "a4d513cfc9d4902ef1a80198582f29b8ba46ac28", "title": "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation", "text": "This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders."} {"_id": "b5a047dffc3d70dce19de61257605dfc8c69535c", "title": "Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks", "text": "Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu).
Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods."} {"_id": "a7680e975d395891522d3c10e3bf892f9b618048", "title": "Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-to-End Learning from Demonstration", "text": "We propose a technique for multi-task learning from demonstration that trains the controller of a low-cost robotic arm to accomplish several complex picking and placing tasks, as well as non-prehensile manipulation. The controller is a recurrent neural network using raw images as input and generating robot arm trajectories, with the parameters shared across the tasks. The controller also combines VAE-GAN-based reconstruction with autoregressive multimodal action prediction. Our results demonstrate that it is possible to learn complex manipulation tasks, such as picking up a towel, wiping an object, and depositing the towel to its previous position, entirely from raw images with direct behavior cloning. We show that weight sharing and reconstruction-based regularization substantially improve generalization and robustness, and training on multiple tasks simultaneously increases the success rate on all tasks."} {"_id": "a479ee6ba49a26e28033b8b9448cf9e058bfa913", "title": "Mechanical design of the Wheel-Leg hybrid mobile robot to realize a large wheel diameter", "text": "In this paper, a new category of wheel-leg hybrid robot is presented. The proposed mechanism can compose a larger wheel diameter than previous hybrid robots, realizing a greater ability to climb obstacles. A prototype model of one Wheel-Leg module of the proposed robot mechanism has been developed to illustrate the concept. The actual design and a mode-changing experiment with a test mechanical module are also presented. Basic movement tests and a test of the basic properties of the rotational fingertip are also shown, and the basic configuration of the retractable wheel-leg is considered. The integrated mode is also described."} {"_id": "9cc507dc1c86b7e22d384d200da49961fd0a7fa5", "title": "Development of GaN HEMT based High Power High Efficiency Distributed Power Amplifier for Military Applications", "text": "Implementing wide-bandgap GaN HEMT devices in broadband distributed power amplifiers creates a tremendous opportunity for RF designers to develop high power, high efficiency, very broadband power amplifiers for military applications. Several prototypes of 10-40 W GaN based distributed power amplifiers, including a MMIC distributed PA, are currently under development at Rockwell Collins, Inc. In this paper, we will discuss the results of a 10 W distributed power amplifier with a maximum power output of more than 40 dBm and a power-added efficiency of 30-70% over the bandwidth of 20-2000 MHz."} {"_id": "414b173b65bbb4d471134162acac6fd303d32313", "title": "IDENTIFICATION OF FAILURE MECHANISMS TO ENHANCE PROGNOSTIC OUTCOMES", "text": "Predicting the reliability of a system in its actual life-cycle conditions and estimating its time to failure is helpful in decision making to mitigate system risks. There are three approaches to prognostics: the physics-of-failure approach, the data-driven approach, and the fusion approach. A key requirement in all these approaches is identification of the appropriate parameter(s) to monitor to gather data that can be used to assess impending failure.
This paper presents a physics-of-failure approach, which uses failure modes, mechanisms and effects analysis (FMMEA) to enhance prognostics planning and implementation. This paper also presents the fusion approach to prognostics and the applicability of FMMEA to this approach. An example case of generating FMMEA information and using it to identify appropriate parameters to monitor is presented."} {"_id": "81cba545677a2fa3ce259ef5c540b03cc8ab6c03", "title": "A survey on Vehicular Cloud Computing and its security", "text": "Vehicular networking has significant advantages in the present era. It provides desirable features and some specific applications such as efficient traffic management, road safety and infotainment. Modern vehicles carry comparatively extensive communication and computing resources, such as on-board computing devices, storage, computing power and GPS, to provide Intelligent Transportation Systems (ITS). The new hybrid technology known as Vehicular Cloud Computing (VCC) has a great impact on ITS by using the resources of vehicles, such as GPS, storage, internet and computing power, for instant decision making and sharing information on the cloud. Moreover, the paper not only presents the concept of the vehicular cloud but also provides a brief overview of the applications, security issues, threats and security solutions for VCC."} {"_id": "b4bd9fab8439da4939a980a950838d1299a9b030", "title": "Tabu Search - Part II", "text": ""} {"_id": "8c9e07e4695028fd72b905eacb6c6766efa3c70d", "title": "Marahel: A Language for Constructive Level Generation", "text": "Marahel is a language and framework for constructive generation of 2D tile-based game levels. It is developed with the dual aim of making it easier for game developers to build level generators, and of helping to solve the general level generation problem by creating a generator space that can be searched using evolution. We describe the different sections of the level generators, and show examples of generated maps from 5 different generators. We analyze their expressive range on three dimensions: percentage of empty space, number of isolated elements, and cell-wise entropy of empty space. The results show that generators that have starkly different output from each other can easily be defined in Marahel."} {"_id": "ae18fa7080e85922fa916591bc73cd100ff5e861", "title": "Right nulled GLR parsers", "text": "The right nulled generalized LR parsing algorithm is a new generalization of LR parsing which provides an elegant correction to, and extension of, Tomita's GLR methods, whereby we extend the notion of a reduction in a shift-reduce parser to include right nulled items.
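A note on the expressive-range metrics named in the Marahel abstract above (percentage of empty space and cell-wise entropy of empty space): the sketch below is one plausible reading of those metrics, not code from the paper; the function names and the binary empty/wall tile encoding are our assumptions.

```python
import numpy as np

def pct_empty(level, empty_tile=0):
    """Fraction of a single 2D tile map occupied by empty space."""
    return float((np.asarray(level) == empty_tile).mean())

def cellwise_empty_entropy(levels, empty_tile=0):
    """Mean per-cell binary entropy of emptiness across a corpus of
    same-sized maps from one generator: cells that are always (or never)
    empty contribute 0 bits; maximally unpredictable cells contribute 1."""
    stack = np.stack([np.asarray(l) == empty_tile for l in levels])
    p = stack.mean(axis=0)              # per-cell probability of being empty
    eps = 1e-12                         # guard against log(0)
    h = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    return float(h.mean())

# Toy usage: 100 random 4x4 maps (0 = empty, 1 = wall)
maps = [np.random.randint(0, 2, (4, 4)) for _ in range(100)]
print(pct_empty(maps[0]), cellwise_empty_entropy(maps))
```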
The result is a parsing technique which runs in linear time on LR(1) grammars and whose performance degrades gracefully to a polynomial bound in the presence of non-LR(1) rules. Compared to other GLR-based techniques, our algorithm is simpler and faster."} {"_id": "0f0fc58f268d1055166276c3b74d36fd12f10c33", "title": "One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities", "text": "The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems. However, softmax estimation is very expensive for large scale inference because of the high cost associated with computing the normalizing constant. Here, we introduce an efficient approximation to softmax probabilities which takes the form of a rigorous lower bound on the exact probability. This bound is expressed as a product over pairwise probabilities and it leads to scalable estimation based on stochastic optimization. It allows us to perform doubly stochastic estimation by subsampling both training instances and class labels. We show that the new bound has interesting theoretical properties and we demonstrate its use in classification problems."} {"_id": "5c9b92f2d065211c3d08bab2083571543fb12469", "title": "Object Recognition using 3D SIFT in Complex CT Volumes", "text": "The automatic detection of objects within complex volumetric imagery is becoming of increased interest due to the use of dual energy Computed Tomography (CT) scanners as an aviation security deterrent. These devices produce a volumetric image akin to that encountered in prior medical CT work, but in this case we are dealing with a complex multi-object volumetric environment including significant noise artefacts. In this work we look at the application of the recent extension of the seminal SIFT approach to the 3D volumetric recognition of rigid objects within this complex volumetric environment. A detailed overview of the approach and results when applied to a set of exemplar CT volumetric imagery is presented."} {"_id": "d6054e77672a62708ea9f4be8d35c824cf25068d", "title": "Pulsating torque minimization techniques for permanent magnet AC motor drives-a review", "text": "Permanent magnet ac (PMAC) motor drives are finding expanded use in high-performance applications where torque smoothness is essential. This paper reviews a wide range of motor- and controller-based design techniques that have been described in the literature for minimizing the generation of cogging and ripple torques in both sinusoidal and trapezoidal PMAC motor drives. Sinusoidal PMAC drives generally show the greatest potential for pulsating torque minimization using well-known motor design techniques such as skewing and fractional slot pitch windings. In contrast, trapezoidal PMAC drives pose more difficult tradeoffs in both the motor and controller design which may require compromises in drive simplicity and cost to improve torque smoothness. Controller-based techniques for minimizing pulsating torque typically involve the use of active cancellation algorithms which depend on either accurate tuning or adaptive control schemes for effectiveness.
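The one-vs-each abstract above describes a lower bound expressed as a product over pairwise probabilities. Below is a minimal numerical check, assuming the bound takes its published form p(y=k) >= prod over j != k of sigma(f_k - f_j) for logits f; variable names are ours.

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())             # shift logits for numerical stability
    return e / e.sum()

def ove_lower_bound(f, k):
    """One-vs-each bound: product of pairwise sigmoids sigma(f_k - f_j)."""
    diffs = f[k] - np.delete(f, k)
    return float(np.prod(1.0 / (1.0 + np.exp(-diffs))))

f = np.array([2.0, 0.5, -1.0, 0.0])     # arbitrary logits
for k in range(len(f)):
    exact, bound = softmax(f)[k], ove_lower_bound(f, k)
    assert bound <= exact + 1e-12       # the bound never exceeds the exact value
    print(k, round(exact, 4), round(bound, 4))
```

Because each pairwise factor involves only two classes, the product can be subsampled over class labels, which is what makes the estimation doubly stochastic.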
In the end, successful suppression of pulsating torque ultimately relies on an orchestrated systems approach to all aspects of the PMAC machine and controller design, which often requires a carefully selected combination of minimization techniques."} {"_id": "46b92b61c908eb8809af6d0a3b7a4a2728792161", "title": "Commutation torque ripple reduction in brushless DC motor drives using a single DC current sensor", "text": "This paper presents a comprehensive study on reducing commutation torque ripples generated in brushless DC motor drives with only a single DC-link current sensor provided. In such drives, commutation torque ripple suppression techniques that are practically effective in low-speed as well as high-speed regions are scarcely found. The commutation compensation technique proposed here is based on the strategy that the current slopes of the incoming and the outgoing phases during the commutation interval can be equalized by a proper duty-ratio control. Directly linked with a deadbeat current control scheme, the proposed control method accomplishes suppression of the spikes and dips superimposed on the current and torque responses during the commutation intervals of the inverter. Effectiveness of the proposed control method is verified through simulations and experiments."} {"_id": "10b8bf301fbad86b83fabcbd8592adc67d33f8c6", "title": "Targeted Restoration of the Intestinal Microbiota with a Simple, Defined Bacteriotherapy Resolves Relapsing Clostridium difficile Disease in Mice", "text": "Relapsing C. difficile disease in humans is linked to a pathological imbalance within the intestinal microbiota, termed dysbiosis, which remains poorly understood. We show that mice infected with epidemic C. difficile (genotype 027/BI) develop highly contagious, chronic intestinal disease and persistent dysbiosis characterized by a distinct, simplified microbiota containing opportunistic pathogens and altered metabolite production. Chronic C. difficile 027/BI infection was refractory to vancomycin treatment, leading to relapsing disease. In contrast, treatment of C. difficile 027/BI infected mice with feces from healthy mice rapidly restored a diverse, healthy microbiota and resolved C. difficile disease and contagiousness. We used this model to identify a simple mixture of six phylogenetically diverse intestinal bacteria, including novel species, which can re-establish a health-associated microbiota and clear C. difficile 027/BI infection from mice. Thus, targeting a dysbiotic microbiota with a defined mixture of phylogenetically diverse bacteria can trigger major shifts in the microbial community structure that displaces C. difficile and, as a result, resolves disease and contagiousness. Further, we demonstrate a rational approach to harness the therapeutic potential of health-associated microbial communities to treat C. difficile disease and potentially other forms of intestinal dysbiosis."} {"_id": "f0bf87e3f74f8bd0336d0d3102dce9028882389e", "title": "Partitioning and Segment Organization Strategies for Real-Time Selective Search on Document Streams", "text": "The basic idea behind selective search is to partition a collection into topical clusters, and for each query, consider only a subset of the clusters that are likely to contain relevant documents. Previous work on web collections has shown that it is possible to retain high-quality results while considering only a small fraction of the collection.
These studies, however, assume static collections where it is feasible to run batch clustering algorithms for partitioning. In this work, we consider the novel formulation of selective search on document streams (specifically, tweets), where partitioning must be performed incrementally. In our approach, documents are partitioned into temporal segments and selective search is performed within each segment: these segments can be clustered using either batch or online algorithms, and at different temporal granularities. For efficiency, we take advantage of word embeddings to reduce the dimensionality of the document vectors. Experiments with test collections from the TREC Microblog Tracks show that we are able to achieve precision indistinguishable from exhaustive search while considering only around 5% of the collection. Interestingly, we observe no significant effectiveness differences between batch vs. online clustering and between hourly vs. daily temporal segments, despite them being very different index organizations. This suggests that architectural choices should be primarily guided by efficiency considerations."} {"_id": "e54dad471f831b64710367400dacef5a2d63731d", "title": "A novel three-phase DC/DC converter for high-power applications", "text": "This paper proposes a new three-phase series resonant converter. The working principle of the converter is described analytically in detail for switching frequencies equal to the resonant frequency and for switching frequencies greater than the resonant frequency. Additionally, detailed simulations are performed to analyse the converter. Based on the analysis, design criteria for a 5 kW breadboard are derived. The 5 kW breadboard version of the converter has been built to validate the analytical investigations and simulations experimentally. Moreover, the breadboard is used to investigate the ZVS and ZCS possibilities of the topology and the influence of the deadtime and the switching frequency on the overall efficiency."} {"_id": "0572e4cb844a1ca04bd3c555e38accab84e11c4b", "title": "Facebook\u00ae and academic performance", "text": "There is much talk of a change in modern youth \u2013 often referred to as digital natives or Homo Zappiens \u2013 with respect to their ability to simultaneously process multiple channels of information. In other words, kids today can multitask. Unfortunately for proponents of this position, there is much empirical documentation concerning the negative effects of attempting to simultaneously process different streams of information, showing that such behavior leads both to increased study time to achieve learning parity and to more mistakes than sequential or serial processing of the same information. This article presents the preliminary results of a descriptive and exploratory survey study involving Facebook use, often carried out simultaneously with other study activities, and its relation to academic performance as measured by self-reported Grade Point Average (GPA) and hours spent studying per week. Results show that Facebook users reported having lower GPAs and spending fewer hours per week studying than nonusers."} {"_id": "1c09aabb45e44685ae214d8c6b94ab6e9ce422de", "title": "Silk - Generating RDF Links while Publishing or Consuming Linked Data", "text": "The central idea of the Web of Data is to interlink data items using RDF links.
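Returning to the streaming selective-search abstract above: the sketch below shows its segment-then-cluster index organization in minimal form. The hourly bucketing, KMeans, and pre-computed document embeddings are illustrative assumptions, not the paper's exact pipeline.

```python
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

def build_segmented_index(docs, n_clusters=10):
    """docs: iterable of (hour, embedding) pairs from a stream.
    Partition documents into hourly temporal segments, then cluster each
    segment so a query only searches the most promising clusters."""
    segments = defaultdict(list)
    for hour, emb in docs:
        segments[hour].append(emb)
    index = {}
    for hour, embs in segments.items():
        X = np.vstack(embs)
        k = min(n_clusters, len(embs))
        index[hour] = KMeans(n_clusters=k, n_init=10).fit(X)
    return index

def route_query(index, query_emb, top=2):
    """Rank each segment's centroids by distance to the query and return
    the cluster ids to search (selective, rather than exhaustive)."""
    plan = {}
    for hour, km in index.items():
        d = np.linalg.norm(km.cluster_centers_ - query_emb, axis=1)
        plan[hour] = np.argsort(d)[:top].tolist()
    return plan
```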
However, in practice most data sources are not sufficiently interlinked with related data sources. The Silk Link Discovery Framework addresses this problem by providing tools to generate links between data items based on user-provided link specifications. It can be used by data publishers to generate links between data sets as well as by Linked Data consumers to augment Web data with additional RDF links. In this poster we present the Silk Link Discovery Framework and report on two usage examples in which we employed Silk to generate links between two data sets about movies as well as to find duplicate persons in a stream of data items that is crawled from the Web."} {"_id": "8b2ea3ecac8abd2357fbf8ca59ec31bad3191388", "title": "Towards End-to-End Lane Detection: an Instance Segmentation Approach", "text": "Modern cars are incorporating an increasing number of driver assist features, among them automatic lane keeping. The latter allows the car to properly position itself within the road lanes, which is also crucial for any subsequent lane departure or trajectory planning decision in fully autonomous cars. Traditional lane detection methods rely on a combination of highly-specialized, hand-crafted features and heuristics, usually followed by post-processing techniques, that are computationally expensive and prone to scalability issues due to road scene variations. More recent approaches leverage deep learning models trained for pixel-wise lane segmentation, which can handle lanes even when no markings are present in the image thanks to their large receptive field. Despite their advantages, these methods are limited to detecting a pre-defined, fixed number of lanes, e.g. ego-lanes, and cannot cope with lane changes. In this paper, we go beyond the aforementioned limitations and propose to cast the lane detection problem as an instance segmentation problem - in which each lane forms its own instance - that can be trained end-to-end. To parametrize the segmented lane instances before fitting the lane, we further propose to apply a learned perspective transformation, conditioned on the image, in contrast to a fixed \u201cbird\u2019s-eye view\u201d transformation. By doing so, we ensure a lane fitting which is robust against road plane changes, unlike existing approaches that rely on a fixed, predefined transformation. In summary, we propose a fast lane detection algorithm, running at 50 fps, which can handle a variable number of lanes and cope with lane changes. We verify our method on the tuSimple dataset and achieve competitive results."} {"_id": "a3ead91f63ca244359cfb4dbb9a22ab149d577eb", "title": "Information Extraction Using Web Usage Mining, Web Scrapping and Semantic Annotation", "text": "Extracting useful information from the web is the most significant issue of concern for the realization of the semantic web. This may be achieved in several ways, among which Web Usage Mining, Web Scraping and Semantic Annotation play an important role. Web mining enables finding relevant results from the web and is used to extract meaningful information from the discovered patterns kept in the servers. Web usage mining is a type of web mining which mines information about the access routes and behaviors of users visiting web sites. Web scraping, another technique, is a process of extracting useful information from HTML pages, which may be implemented using a scripting language known as Prolog Server Pages (PSP), based on Prolog.
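The information-extraction abstract above implements scraping with Prolog Server Pages; purely as a language-neutral illustration of what a scraper does, here is a Python sketch using the requests and BeautifulSoup libraries (our substitution, not the paper's tooling).

```python
import requests
from bs4 import BeautifulSoup

def scrape_headings(url):
    """Fetch a page and pull out heading text, the kind of structured
    fragment a scraper extracts from raw HTML."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [h.get_text(strip=True) for h in soup.find_all(["h1", "h2", "h3"])]

print(scrape_headings("https://example.com"))
```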
Third, semantic annotation is a technique which makes it possible to add semantics and a formal structure to unstructured textual documents, an important aspect of semantic information extraction, which may be performed by a tool known as KIM (Knowledge Information Management). In this paper, we revisit, explore and discuss some information extraction techniques on the web, such as web usage mining, web scraping and semantic annotation, for better and more efficient information extraction on the web, illustrated with examples."} {"_id": "6f0ab73a8d6494b0f8f50fafb32b9a214efbe4c9", "title": "Making Arabic PDF books accessible using gamification", "text": "Most online Arabic books are not accessible to Arab people with visual impairments. They cannot read online books because these are usually scanned images of the original ones. There is also a problem in the PDF encoding of some of the textual books. One of the solutions is to use an Arabic OCR to convert scanned books into text; however Arabic OCR is still in its early stages and suffers from many limitations. In this paper we propose the use of human recognition skills to overcome OCR limitations by incorporating the concepts of crowdsourcing and gamification. Our proposed system is in the form of a mobile recall game that presents players with word images segmented from the books to be converted into text. The players' answers are checked using techniques similar to those used in word spotting. We initially implemented two components of the system: the segmentation component, and the feature extraction and matching component. For the feature extraction and matching component, which is used to verify the player's answers, we performed four tests to choose a similarity measure threshold for accepting an entered word as a correct answer. Future work will consider other means of input correctness assurance."} {"_id": "bec079e846275997314be1415d8e5b3551fe09ca", "title": "Drug delivery and nanoparticles: Applications and hazards", "text": "The use of nanotechnology in medicine, and more specifically drug delivery, is set to spread rapidly. Currently many substances are under investigation for drug delivery, and more specifically for cancer therapy. Interestingly, pharmaceutical sciences are using nanoparticles to reduce toxicity and side effects of drugs, and until recently did not realize that carrier systems themselves may impose risks to the patient. The kinds of hazards that are introduced by using nanoparticles for drug delivery go beyond those posed by conventional chemicals in classical delivery matrices. For nanoparticles, the knowledge on particle toxicity as obtained in inhalation toxicology shows the way to investigate the potential hazards of nanoparticles. The toxicology of particulate matter differs from the toxicology of substances, as the composing chemical(s) may or may not be soluble in biological matrices, thus greatly influencing the potential exposure of various internal organs. This may vary from a rather high local exposure in the lungs to a low or negligible exposure for other organ systems after inhalation. However, absorbed species may also influence the potential toxicity of the inhaled particles. For nanoparticles the situation is different, as their size opens the potential for crossing the various biological barriers within the body. From a positive viewpoint, especially the potential to cross the blood-brain barrier may open new ways for drug delivery into the brain.
In addition, the nanosize also allows for access into the cell and various cellular compartments, including the nucleus. A multitude of substances are currently under investigation for the preparation of nanoparticles for drug delivery, ranging from biological substances like albumin, gelatine and phospholipids for liposomes, to substances of a chemical nature like various polymers and solid metal-containing nanoparticles. It is obvious that the potential interaction with tissues and cells, and the potential toxicity, greatly depend on the actual composition of the nanoparticle formulation. This paper provides an overview of some of the currently used systems for drug delivery. Besides the potential beneficial use, attention is also drawn to the question of how we should proceed with the safety evaluation of nanoparticle formulations for drug delivery. For such testing, the lessons learned from particle toxicity as applied in inhalation toxicology may be of use. Although for pharmaceutical use the current requirements seem to be adequate to detect most of the adverse effects of nanoparticle formulations, it cannot be expected that all aspects of nanoparticle toxicology will be detected. So additional, more specific testing would probably be needed."} {"_id": "161a4ba80d447f9f60fd1246e51b360ec78c13de", "title": "A Lightweight Component Caching Scheme for Satisfiability Solvers", "text": "We introduce in this paper a lightweight technique for reducing work repetition caused by non-chronological backtracking commonly practiced by DPLL-based SAT solvers. The presented technique can be viewed as a partial component caching scheme. Empirical evaluation of the technique reveals significant improvements on a broad range of in-"} {"_id": "37d5d2b0d392231d7deae1b2a4f13cbf54c9e184", "title": "Efficient mining of weighted association rules (WAR)", "text": "In this paper, we extend the traditional association rule problem by allowing a weight to be associated with each item in a transaction, to reflect the interest/intensity of the item within the transaction. This provides us in turn with an opportunity to associate a weight parameter with each item in the resulting association rule. We call it a weighted association rule (WAR). WARs not only improve the confidence of the rules, but also provide a mechanism to do more effective target marketing by identifying or segmenting customers based on their potential degree of loyalty or volume of purchases. Our approach mines WARs by first ignoring the weights and finding the frequent itemsets (via a traditional frequent itemset discovery algorithm), and then introducing the weights during rule generation. It is shown by experimental results that our approach not only results in shorter average execution times, but also produces higher quality results than the generalization of previously known methods on quantitative association rules."} {"_id": "4509d415e5801aa2c33544168aac572bc13406dd", "title": "Fuzzy Neural Network-Based Adaptive Control for a Class of Uncertain Nonlinear Stochastic Systems", "text": "This paper studies adaptive tracking control for a class of nonlinear stochastic systems with unknown functions. The considered systems are in the nonaffine pure-feedback form, and this is the first work to control this class of systems with stochastic disturbances. Fuzzy neural networks are used to approximate the unknown functions. Based on the backstepping design technique, the controllers and the adaptation laws are obtained.
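The WAR abstract above defers weights to the rule-generation step. The sketch below follows that split under an assumed weighting scheme (a rule's weighted support scales per-transaction support by the mean weight of its items); the paper's exact definitions may differ.

```python
from itertools import combinations

def weighted_rules(transactions, frequent_itemsets, min_wconf=0.5):
    """transactions: list of dicts mapping item -> weight.
    frequent_itemsets: frozensets from a weight-agnostic miner (e.g. Apriori).
    Weights enter only here, during rule generation."""
    def wsupport(itemset):
        total = 0.0
        for t in transactions:
            if itemset <= t.keys():                  # itemset present in t
                total += sum(t[i] for i in itemset) / len(itemset)
        return total / len(transactions)

    rules = []
    for itemset in frequent_itemsets:
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                wconf = wsupport(itemset) / max(wsupport(lhs), 1e-12)
                if wconf >= min_wconf:
                    rules.append((set(lhs), set(itemset - lhs), wconf))
    return rules
```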
Compared to most existing control schemes for stochastic systems, the proposed control algorithm has fewer adjustable parameters and can thus reduce the online computation load. By using Lyapunov analysis, it is proven that all the signals of the closed-loop system are semiglobally uniformly ultimately bounded in probability and the system output tracks the reference signal to a bounded compact set. A simulation example is given to illustrate the effectiveness of the proposed control algorithm."} {"_id": "320ea4748b1f7e808eabedbedb75cce660122d26", "title": "Detecting Avocados to Zucchinis: What Have We Done, and Where Are We Going?", "text": "The growth of detection datasets and the multiple directions of object detection research provide both an unprecedented need and a great opportunity for a thorough evaluation of the current state of the field of categorical object detection. In this paper we strive to answer two key questions. First, where are we currently as a field: what have we done right, what still needs to be improved? Second, where should we be going in designing the next generation of object detectors? Inspired by the recent work of Hoiem et al. on the standard PASCAL VOC detection dataset, we perform a large-scale study on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) data. First, we quantitatively demonstrate that this dataset provides many of the same detection challenges as the PASCAL VOC. Due to its scale of 1000 object categories, ILSVRC also provides an excellent test bed for understanding the performance of detectors as a function of several key properties of the object classes. We conduct a series of analyses looking at how different detection methods perform on a number of image-level and object-class-level properties such as texture, color, deformation, and clutter. We learn important lessons of the current object detection methods and propose a number of insights for designing the next generation object detectors."} {"_id": "5425f7109dca2022ff9bde0ed3f113080d0d606b", "title": "DFD: Efficient Functional Dependency Discovery", "text": "The discovery of unknown functional dependencies in a dataset is of great importance for database redesign, anomaly detection and data cleansing applications. However, as the nature of the problem is exponential in the number of attributes, none of the existing approaches can be applied on large datasets. We present a new algorithm DFD for discovering all functional dependencies in a dataset following a depth-first traversal strategy of the attribute lattice that combines aggressive pruning and efficient result verification. Our approach is able to scale far beyond existing algorithms for up to 7.5 million tuples, and is up to three orders of magnitude faster than existing approaches on smaller datasets."} {"_id": "c2f4c6d7e06da14c4b3ce3a9b97394a64708dc52", "title": "Database Dependency Discovery: A Machine Learning Approach", "text": "Database dependencies, such as functional and multivalued dependencies, express the presence of structure in database relations that can be utilised in the database design process. The discovery of database dependencies can be viewed as an induction problem, in which general rules (dependencies) are obtained from specific facts (the relation). This viewpoint has the advantage of abstracting away as much as possible from the particulars of the dependencies. The algorithms in this paper are designed such that they can easily be generalised to other kinds of dependencies.
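The two dependency-discovery abstracts above both reduce to one primitive: testing whether a candidate functional dependency holds on a relation. A minimal sketch of that test (X -> Y holds iff no two tuples agree on X but differ on Y); the dict-based relation encoding is an assumption.

```python
def fd_holds(rows, lhs, rhs):
    """rows: list of dicts, one per tuple of the relation.
    Returns True iff the functional dependency lhs -> rhs holds."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False                 # same LHS values, different RHS values
    return True

rows = [{"city": "NY", "zip": "10001"}, {"city": "NY", "zip": "10001"}]
print(fd_holds(rows, ["zip"], ["city"]))   # True on this toy relation
```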
As in current approaches to computational induction such as inductive logic programming, we distinguish between top-down algorithms and bottom-up algorithms. In a top-down approach, hypotheses are generated in a systematic way and then tested against the given relation. In a bottom-up approach, the relation is inspected in order to see what dependencies it may satisfy or violate. We give algorithms for both approaches."} {"_id": "28f840416cfe7aed3cda11e266119d247fcc3f9e", "title": "GORDIAN: Efficient and Scalable Discovery of Composite Keys", "text": "Identification of (composite) key attributes is of fundamental importance for many different data management tasks such as data modeling, data integration, anomaly detection, query formulation, query optimization, and indexing. However, information about keys is often missing or incomplete in many real-world database scenarios. Surprisingly, the fundamental problem of automatic key discovery has received little attention in the existing literature. Existing solutions ignore composite keys, due to the complexity associated with their discovery. Even for simple keys, current algorithms take a brute-force approach; the resulting exponential CPU and memory requirements limit the applicability of these methods to small datasets. In this paper, we describe GORDIAN, a scalable algorithm for automatic discovery of keys in large datasets, including composite keys. GORDIAN can provide exact results very efficiently for both real-world and synthetic datasets. GORDIAN can be used to find (composite) key attributes in any collection of entities, e.g., key column-groups in relational data, or key leaf-node sets in a collection of XML documents with a common schema. We show empirically that GORDIAN can be combined with sampling to efficiently obtain high quality sets of approximate keys even in very large datasets."} {"_id": "5288d14f6a3937df5e10109d4e23d79b7ddf080f", "title": "Fast Algorithms for Mining Association Rules in Large Databases", "text": ""} {"_id": "57fb4b0c63400dc984893461b1f5a73244b3e3eb", "title": "Logic and Databases: A Deductive Approach", "text": ""}
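For contrast with GORDIAN's pruned traversal, the brute-force key discovery it improves on fits in a few lines: scan column sets smallest-first, skip supersets of keys already found, and declare a key when the projection contains no duplicates. This is a naive baseline (not GORDIAN itself) that makes the exponential cost mentioned in the abstract concrete.

```python
from itertools import combinations

def minimal_keys(rows, attributes):
    """Exhaustive discovery of minimal (composite) keys: a column set is
    a key iff projecting the rows onto it yields no duplicate tuples."""
    keys = []
    for size in range(1, len(attributes) + 1):
        for cols in combinations(attributes, size):
            if any(set(k) <= set(cols) for k in keys):
                continue                 # superset of a known key: prune
            proj = [tuple(r[c] for c in cols) for r in rows]
            if len(set(proj)) == len(proj):
                keys.append(cols)
    return keys
```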
{"_id": "7d81977db644f56b3291546598d2f53165f76117", "title": "Regional cerebral blood flow correlates of visuospatial tasks in Alzheimer's disease.", "text": "This study investigated the role of visuospatial tasks in identifying cognitive decline in patients with Alzheimer's disease (AD), by correlating neuropsychological performance with cerebral perfusion measures. There were 157 participants: 29 neurologically healthy controls (age: 70.3 +/- 6.6, MMSE \u2265 27), 86 patients with mild AD (age: 69.18 +/- 8.28, MMSE \u2265 21) and 42 patients with moderate/severe AD (age: 68.86 +/- 10.69, MMSE 8-20). Single Photon Emission Computerized Tomography (SPECT) was used to derive regional perfusion ratios, which were correlated using partial least squares (PLS) with neuropsychological test scores from the Benton Line Orientation (BLO) and the Rey-Osterrieth Complex Figure (RO). Cross-sectional analysis demonstrated that mean scores differed in accordance with disease status: control group (BLO 25.5, RO 33.3); mild AD (BLO 20.1, RO 25.5); moderate/severe AD (BLO 10.7, RO 16). Correlations were observed between BLO/RO and right parietal SPECT regions in the AD groups. Visuospatial performance, often undersampled in cognitive batteries for AD, is clearly impaired even in mild AD and correlates with functional deficits as indexed by cerebral perfusion ratios on SPECT, implicating right hemisphere circuits. Furthermore, PLS reveals that visuospatial tasks probe a distributed brain network in both hemispheres including many areas targeted by early AD pathology."} {"_id": "ce2eb2cda28e8883d9475acd6f034733de38ae91", "title": "Articulatory, positional and coarticulatory characteristics for clear /l/ and dark /l/: evidence from two Catalan dialects", "text": "Electropalatographic and acoustic data reported in this study show differences in closure location and degree, dorsopalatal contact size, closure duration, relative timing of events and formant frequency between clear /l/ and dark /l/ in two dialects of Catalan (Valencian and Majorcan). The two Catalan dialects under investigation also differ regarding degree of darkness but essentially not regarding coarticulatory resistance at the word edges, i.e. the alveolar lateral is equally dark word-initially and word-finally in Majorcan, and clearer in the former position than in the latter in Valencian, and more resistant to vowel effects in the two positions than intervocalically in both dialects. With reference to data from the literature, it appears that languages and dialects may differ as to whether /l/ is dark or clear in all word positions or whether or not initial /l/ is clearer than final /l/, and that articulatory strengthening occurs not only word- and utterance-initially but word- and utterance-finally as well. These and other considerations confirm the hypothesis that degree of darkness in /l/ proceeds gradually rather than categorically from one language to another."} {"_id": "13179f0c9959a2bd357838aac3d0a97aea96d7b5", "title": "Risk factors for involvement in cyber bullying: Victims, bullies and bully\u2013victims", "text": "The use of online technology is exploding worldwide and is fast becoming a preferred method of interacting. While most online interactions are neutral or positive, the Internet provides a new means through which children and youth are bullied. The aim of this grounded theory approach was to explore technology, virtual relationships and cyber bullying from the perspectives of students.
Seven focus groups were held with 38 students between fifth and eighth grades. The participants considered cyber bullying to be a serious problem, and some characterized online bullying as more serious than \u2018traditional\u2019 bullying because of the associated anonymity. Although the students depicted anonymity as integral to cyber bullying, the findings suggest that much of the cyber bullying occurred within the context of their social groups and relationships. Findings revealed five major themes: technology embraced at younger ages and becoming the dominant medium for communication; definitions and views of cyber bullying; factors unique to cyber bullying; types of cyber bullying; and telling adults. The findings highlight the complexity of the perceived anonymity provided by the Internet and how this may impact cyber bullying. The study offers greater awareness of the meanings of online relationships for children and youth."} {"_id": "5cf701e4588067b52ead26188904005b59e71139", "title": "The Gut Microbiota and Autism Spectrum Disorders", "text": "Gastrointestinal (GI) symptoms are a common comorbidity in patients with autism spectrum disorder (ASD), but the underlying mechanisms are unknown. Many studies have shown alterations in the composition of the fecal flora and metabolic products of the gut microbiome in patients with ASD. The gut microbiota influences brain development and behaviors through the neuroendocrine, neuroimmune and autonomic nervous systems. In addition, an abnormal gut microbiota is associated with several diseases, such as inflammatory bowel disease (IBD), ASD and mood disorders. Here, we review the bidirectional interactions between the central nervous system and the gastrointestinal tract (brain-gut axis) and the role of the gut microbiota in the central nervous system (CNS) and ASD. Microbiome-mediated therapies might be a safe and effective treatment for ASD."} {"_id": "28c41a507c374c15443fa5ccf1209e1eec34f317", "title": "Compiling Knowledge into Decomposable Negation Normal Form", "text": "We propose a method for compiling propositional theories into a new tractable form that we refer to as decomposable negation normal form (DNNF). We show a number of results about our compilation approach. First, we show that every propositional theory can be compiled into DNNF and present an algorithm to this effect. Second, we show that if a clausal form has a bounded treewidth, then its DNNF compilation has a linear size and can be computed in linear time \u2014 treewidth is a graph-theoretic parameter which measures the connectivity of the clausal form. Third, we show that once a propositional theory is compiled into DNNF, a number of reasoning tasks, such as satisfiability and forgetting, can be performed in linear time. Finally, we propose two techniques for approximating the DNNF compilation of a theory when the size of such compilation is too large to be practical. One of the techniques generates a sound but incomplete compilation, while the other generates a complete but unsound compilation. Together, these approximations bound the exact compilation from below and above in terms of their ability to answer queries."} {"_id": "d62f6185ed2c3878d788582fdeabbb5423c04c01", "title": "Object volume estimation based on 3D point cloud", "text": "An approach to estimating the volume of an object based on a 3D point cloud is proposed in this paper.
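To make the DNNF abstract's linear-time satisfiability claim concrete, here is a sketch over a toy circuit encoding ('lit', 'and', 'or' nodes). Decomposability means an AND node's children share no variables, so each child can be satisfied independently and one bottom-up pass decides satisfiability; the tuple encoding is our own.

```python
def dnnf_sat(node):
    """node: ('lit', v) | ('and', children) | ('or', children)."""
    kind = node[0]
    if kind == 'lit':
        return True                      # a lone literal is always satisfiable
    if kind == 'and':                    # children are variable-disjoint (decomposability)
        return all(dnnf_sat(c) for c in node[1])
    if kind == 'or':
        return any(dnnf_sat(c) for c in node[1])
    raise ValueError(kind)

# (x or (y and not-z)): already decomposable; '-z' denotes the negative literal
circuit = ('or', [('lit', 'x'), ('and', [('lit', 'y'), ('lit', '-z')])])
print(dnnf_sat(circuit))                 # True
```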
Firstly, the 3D point cloud is cut into slices of equal thickness along the z-axis. Each slice is bisected along the y-axis and cut into sub-intervals along the x-axis. Using the minimum and maximum coordinates in each sub-interval, two surface curve functions are fitted for the bisected slices. Then, the curves are integrated to estimate the area of each slice, and the areas are further integrated along the z-axis for an estimated volume of the object."} {"_id": "6b50b1fa78307a7505eb35ef58d5f51bf977f2a3", "title": "Auto-Encoding User Ratings via Knowledge Graphs in Recommendation Scenarios", "text": "In the last decade, driven also by the availability of an unprecedented computational power and storage capabilities in cloud environments, we have witnessed the proliferation of new algorithms, methods, and approaches in two areas of artificial intelligence: knowledge representation and machine learning. On the one side, the generation of a high rate of structured data on the Web led to the creation and publication of the so-called knowledge graphs. On the other side, deep learning emerged as one of the most promising approaches in the generation and training of models that can be applied to a wide variety of application fields. More recently, autoencoders have proven their strength in various scenarios, playing a fundamental role in unsupervised learning. In this paper, we investigate how to exploit the semantic information encoded in a knowledge graph to build connections between units in a Neural Network, thus leading to a new method, SEM-AUTO, to extract and weight semantic features that can eventually be used to build a recommender system. As adding content-based side information may mitigate the cold-user problem, we tested how our approach behaves in the presence of a few ratings from a user on the Movielens 1M dataset and compared results with BPRSLIM."} {"_id": "6cd3ea4361e035969e6cf819422d0262f7c0a186", "title": "3D Deep Learning for Efficient and Robust Landmark Detection in Volumetric Data", "text": "Recently, deep learning has demonstrated great success in computer vision with the capability to learn powerful image features from a large training set. However, most of the published work has been confined to solving 2D problems, with a few limited exceptions that treated the 3D space as a composition of 2D orthogonal planes. The challenge of 3D deep learning is due to a much larger input vector, compared to 2D, which dramatically increases the computation time and the chance of over-fitting, especially when combined with limited training samples (hundreds to thousands), typical for medical imaging applications. To address this challenge, we propose an efficient and robust deep learning algorithm capable of full 3D detection in volumetric data. A two-step approach is exploited for efficient detection. A shallow network (with one hidden layer) is used for the initial testing of all voxels to obtain a small number of promising candidates, followed by more accurate classification with a deep network. In addition, we propose two approaches, i.e., separable filter decomposition and network sparsification, to speed up the evaluation of a network. To mitigate the over-fitting issue, thereby increasing detection robustness, we extract small 3D patches from a multi-resolution image pyramid. The deeply learned image features are further combined with Haar wavelet features to increase the detection accuracy.
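A discrete rendering of the slice-and-integrate volume estimation described in the point-cloud abstract above, with the paper's fitted surface curves simplified to raw per-bin y-extents (so sums stand in for the integrals); parameter names and bin sizes are illustrative.

```python
import numpy as np

def estimate_volume(points, dz=0.05, dx=0.05):
    """points: (N, 3) array. Slice along z; within each slice, bin along x
    and take the y-extent per bin; area = sum(width * dx), volume = sum(area * dz)."""
    points = np.asarray(points, dtype=float)
    volume = 0.0
    z0, z1 = points[:, 2].min(), points[:, 2].max()
    for z in np.arange(z0, z1, dz):
        s = points[(points[:, 2] >= z) & (points[:, 2] < z + dz)]
        if len(s) == 0:
            continue                      # empty slice contributes no area
        area = 0.0
        x0, x1 = s[:, 0].min(), s[:, 0].max()
        for x in np.arange(x0, x1, dx):
            b = s[(s[:, 0] >= x) & (s[:, 0] < x + dx)]
            if len(b):
                area += (b[:, 1].max() - b[:, 1].min()) * dx
        volume += area * dz
    return volume
```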
The proposed method has been quantitatively evaluated for carotid artery bifurcation detection on a head-neck CT dataset from 455 patients. Compared to the state-of-the-art, the mean error is reduced by more than half, from 5.97 mm to 2.64 mm, with a detection speed of less than 1 s/volume."} {"_id": "2ee7ee38745e9fcf89860dfb3d41c2155521e3a3", "title": "Residual Memory Networks in Language Modeling: Improving the Reputation of Feed-Forward Networks", "text": "We introduce the Residual Memory Network (RMN) architecture to language modeling. RMN is an architecture of feedforward neural networks that incorporates residual connections and time-delay connections, allowing us to naturally incorporate information from a substantial time context. As this is the first time RMNs are applied to language modeling, we thoroughly investigate their behaviour on the well-studied Penn Treebank corpus. We change the model slightly for the needs of language modeling, reducing both its time and memory consumption. Our results show that the RMN is a suitable choice for small-sized neural language models: with test perplexity 112.7 and as few as 2.3M parameters, it outperforms both a much larger vanilla RNN (PPL 124, 8M parameters) and a similarly sized LSTM (PPL 115, 2.08M parameters), while being less than 3 perplexity points worse than a twice-as-big LSTM."} {"_id": "772aa928a1bbc901795b78c1bee6d539aa2d1a36", "title": "Author Disambiguation in PubMed: Evidence on the Precision and Recall of Author-ity among NIH-Funded Scientists", "text": "We examined the usefulness (precision) and completeness (recall) of the Author-ity author disambiguation for PubMed articles by associating articles with scientists funded by the National Institutes of Health (NIH). In doing so, we exploited established unique identifiers \u2013 Principal Investigator (PI) IDs \u2013 that the NIH assigns to funded scientists. Analyzing a set of 36,987 NIH scientists who received their first R01 grant between 1985 and 2009, we identified 355,921 articles appearing in PubMed that would allow us to evaluate the precision and recall of the Author-ity disambiguation. We found that Author-ity identified the NIH scientists with 99.51% precision across the articles. It had a corresponding recall of 99.64%. Precision and recall, moreover, appeared stable across common and uncommon last names, across ethnic backgrounds, and across levels of scientist productivity."} {"_id": "c9946fedf333df0c6404765ba6ccbf8006779753", "title": "Motion planning based on learning models of pedestrian and driver behaviors", "text": "Autonomous driving has shown the capability of providing driver convenience and enhancing safety. While introducing autonomous driving into our current traffic system, one significant issue is to make the autonomous vehicle able to react in the same way as real human drivers. In order to ensure that an autonomous vehicle of the future will perform like human drivers, this paper proposes a vehicle motion planning model which can represent how drivers control vehicles based on the assessment of traffic environments in real signalized intersections. The proposed motion planning model comprises functions of pedestrian intention detection, gap detection and vehicle dynamic control. The three functions are constructed based on the analysis of actual data collected from real traffic environments.
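Precision and recall figures like Author-ity's 99.51%/99.64% come down to simple set counts over predicted versus true article-author assignments; a minimal worked example with toy data and hypothetical ids:

```python
def precision_recall(predicted, truth):
    """predicted, truth: sets of (article_id, author_id) assignments."""
    tp = len(predicted & truth)                      # correct assignments
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

pred = {(1, "A"), (2, "A"), (3, "B")}
true = {(1, "A"), (2, "A"), (4, "B")}
print(precision_recall(pred, true))                  # (0.666..., 0.666...)
```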
Finally, this paper demonstrates the performance of the proposed method by comparing the behaviors of our model with the behaviors of real pedestrians and human drivers. The experimental results show that our proposed model can achieve an 85% recognition rate for pedestrian crossing intention. Moreover, the vehicle controlled by the proposed motion planning model and the actual human-driven vehicle are highly similar with respect to gap acceptance in intersections."} {"_id": "ffe504583a03a224f4b14938aa9d7b625d326736", "title": "Facing prejudice: implicit prejudice and the perception of facial threat.", "text": "We propose that social attitudes, and in particular implicit prejudice, bias people's perceptions of the facial emotion displayed by others. To test this hypothesis, we employed a facial emotion change-detection task in which European American participants detected the offset (Study 1) or onset (Study 2) of facial anger in both Black and White targets. Higher implicit (but not explicit) prejudice was associated with a greater readiness to perceive anger in Black faces, but neither explicit nor implicit prejudice predicted anger perceptions regarding similar White faces. This pattern indicates that European Americans high in implicit racial prejudice are biased to perceive threatening affect in Black but not White faces, suggesting that the deleterious effects of stereotypes may take hold extremely early in social interaction."} {"_id": "e8cd182c70220f7c381a26c80c0a82b5c8e4d5c1", "title": "Recurrent networks with attention and convolutional networks for sentence representation and classification", "text": "In this paper, we propose a bi-attention mechanism, a multi-layer attention mechanism, and a text representation and classification model (ACNN) based on an attention mechanism and a convolutional neural network. The bi-attention uses two attention mechanisms to learn two context vectors: a forward RNN with attention learns the forward context vector $\overrightarrow{\mathbf{c}}$, a backward RNN with attention learns the backward context vector $\overleftarrow{\mathbf{c}}$, and the two are concatenated to obtain the context vector $\mathbf{c}$. The multi-layer attention is a stack of bi-attention layers. In the ACNN, the context vector $\mathbf{c}$ is obtained by the bi-attention, then a convolution operation is performed on $\mathbf{c}$, and a max-pooling operation is used to reduce the dimension. After the max-pooling operation, the text is converted to a low-dimensional sentence vector $\mathbf{m}$. Finally, a Softmax classifier is used for text classification. We test our model on 8 benchmark text classification datasets, and it achieves better or comparable performance compared with the state-of-the-art methods."} {"_id": "92fff676dc28d962e79c0450531ebba12a341896", "title": "Modeling changing dependency structure in multivariate time series", "text": "We show how to apply the efficient Bayesian changepoint detection techniques of Fearnhead in the multivariate setting. We model the joint density of vector-valued observations using undirected Gaussian graphical models, whose structure we estimate. We show how we can exactly compute the MAP segmentation, as well as how to draw perfect samples from the posterior over segmentations, simultaneously accounting for uncertainty about the number and location of changepoints, as well as uncertainty about the covariance structure.
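A compact sketch of the bi-attention described in the ACNN abstract above, in PyTorch (an assumption; the paper does not name its framework). For brevity, the convolution-plus-max-pooling stage over the context vector is folded into a single linear classifier, so the sketch shows only the two attentive context vectors and their concatenation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fwd_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.bwd_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fwd_score = nn.Linear(hidden_dim, 1)
        self.bwd_score = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def _attend(self, states, score):
        w = F.softmax(score(states).squeeze(-1), dim=-1)      # (batch, seq)
        return torch.bmm(w.unsqueeze(1), states).squeeze(1)   # weighted sum

    def forward(self, tokens):                 # tokens: (batch, seq) word ids
        x = self.embed(tokens)
        h_fwd, _ = self.fwd_rnn(x)                            # forward reading
        h_bwd, _ = self.bwd_rnn(torch.flip(x, dims=[1]))      # backward reading
        c = torch.cat([self._attend(h_fwd, self.fwd_score),
                       self._attend(h_bwd, self.bwd_score)], dim=-1)
        return self.out(c)                     # logits for the Softmax classifier

model = BiAttention(vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=5)
print(model(torch.randint(0, 1000, (2, 7))).shape)            # torch.Size([2, 5])
```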
We illustrate the technique by applying it to financial data and to bee tracking data."} {"_id": "c7473ab608ec26b799e10a387067d503cdcc4e7e", "title": "Habits \u2014 A Repeat Performance", "text": "Habits are response dispositions that are activated automatically by the context cues that co-occurred with responses during past performance. Experience-sampling diary studies indicate that much of everyday action is characterized by habitual repetition. We consider various mechanisms that could underlie the habitual control of action, and we conclude that direct cuing and motivated contexts best account for the characteristic features of habit responding\u2014in particular, for the rigid repetition of action that can be initiated without intention and that runs to completion with minimal conscious control. We explain the utility of contemporary habit research for issues central to psychology, especially for behavior prediction, behavior change, and self-regulation. KEYWORDS\u2014habit; automaticity; motivation; goals; behavior change; behavior prediction; self-regulation. From self-help guru Anthony Robbins to the religion of Zen Buddhism, received wisdom exhorts people to be mindful, deliberative, and conscious in all they do. In contrast, contemporary research in psychology shows that it is actually people\u2019s unthinking routines\u2014or habits\u2014that form the bedrock of everyday life. Without habits, people would be doomed to plan, consciously guide, and monitor every action, from making that first cup of coffee in the morning to sequencing the finger movements in a Chopin piano concerto. But what is a habit? The cognitive revolution radically reshaped the behaviorist view that habits rely on simple stimulus\u2013response associations devoid of mental representation. Emerging is a more nuanced construct that includes roles for consciousness, goals, and motivational states. Fundamental questions persist, however, especially when comparing evidence across neuropsychology, animal-learning, and social-cognition literatures. Data from these fields support three views of habit, which we term the direct-context-cuing, implicit-goal, and motivated-context models. In this article, we consider these models and explain the relevance for psychology of a reinvigorated habit construct. HABITS AFTER BEHAVIORISM Within current theorizing, habits are automated response dispositions that are cued by aspects of the performance context (i.e., environment, preceding actions). They are learned through a process in which repetition incrementally tunes cognitive processors in procedural memory (i.e., the memory system that supports the minimally conscious control of skilled action). The relatively primitive associative learning that promotes habits is shared in some form across mammalian species. Our own interest in habits has been fueled by the recognition that much of everyday action is characterized by repetition. In experience-sampling diary studies using both student and community samples, approximately 45% of everyday behaviors tended to be repeated in the same location almost every day (Quinn & Wood, 2005; Wood, Quinn, & Kashy, 2002). In these studies, people reported a heterogeneous set of actions that varied in habit strength, including reading the newspaper, exercising, and eating fast food.
Although a consensual perspective on habit mechanisms has yet to develop, common to all views is the idea that many behavioral sequences (e.g., one\u2019s morning coffee-making routine) are performed repeatedly in similar contexts. When responses and features of context occur in contiguity, the potential exists for associations to form between them, such that contexts come to cue responses. In what follows, we outline three views of habitual control that build on this understanding. Direct Context Cuing According to the direct-context-cuing model, repeated coactivation forges direct links in memory between context and response representations. Once these links are formed via associative learning, merely perceiving a context triggers associated responses. Supporting evidence comes from research in which merely activating a construct, such as the elderly stereotype, influences the performance of relevant behaviors, such as a slow speed of walking (e.g., Bargh, Chen, & Burrows, 1996). Readers might wonder if it is realistic that contexts cue responses through this simple mechanism in the absence of an implicit or explicit goal. The answer is not clear, given that social-cognition research has thus far demonstrated only a limited version of direct-cuing effects. For example, activating the elderly stereotype influences walking speed, but it remains to be demonstrated whether such activation can initiate walking itself. However, the direct cuing of repeated action by contexts is suggested by myriad findings in cognitive neuroscience that reveal reduced involvement of goal-related neural structures, such as the prefrontal cortex, when behaviors have come under habitual control (see Daw, Niv, & Dayan, 2005). Furthermore, animal-learning research using a clever paradigm in which reinforcers are devalued suggests direct control by context. When rats initially perform an instrumental behavior (e.g., pressing a bar for a food pellet), they appear to be guided by specific goal expectations; they cease the behavior if the reward is devalued (e.g., by pairing it with a toxin; Dickinson & Balleine, 1995). In contrast, when rats have extensively repeated a behavior, their responses appear to be cued directly by contextual stimuli (e.g., the bar); reward devaluation has little impact on continued performance. These data are commonly interpreted as indicating that habit formation involves a shift to direct context cuing. Implicit Goals Associative learning explains not only the direct binding of contexts and actions but also the binding of contexts and goals. In implicit-goal models, habits develop when people repeatedly pursue a goal via a specific behavior in a given context. An indirect association then forms between the context and behavior within the broader goal system. In support, Aarts and Dijksterhuis (2000) found in several experiments that the automatic activation of habitual responses (e.g., bicycle riding) only occurs when a relevant goal has first been made accessible (e.g., the thought of attending class). These studies did not measure people\u2019s real-world behavior, however, but focused instead on judgments about behavior. It remains to be seen whether such judgments tap the cognitive processes that actually drive behavior.
In addition, there is good reason to think that habit performance itself does not depend on goal activation. Goal-driven responses tend to be dynamic and flexible, as evidenced by people sometimes substituting behaviors that serve a common goal. In contrast, habits emerge in a rigid pattern such that, for example, a habitual runner is unlikely to substitute a cycling class for running. Thus, although implicit goals provide potentially powerful guides to action, they do not plausibly explain the context cuing of habits. Motivated Contexts In another framework for understanding context-cued responses, contexts can acquire diffuse motivational value when they have preceded rewards in the past. When contexts predict rewards in this way, they energize associated responses without activating specific goals. Evidence of the motivating quality of contexts comes from animal studies of the neurotransmitters that mediate reward learning. For example, when monkeys first learn that a feature of the environment (e.g., a light) predicts a reward (e.g., a drop of juice) when a response is made (e.g., a lever press), neurotransmitter activity (i.e., dopamine release) occurs just after the reward (see Schultz, Dayan, & Montague, 1997). After repeated practice, the animal reaches for the lever when the light is illuminated. Furthermore, the neurotransmitter response is no longer elicited by the juice but instead by the light. In this way, environmental cues can acquire motivational value. Reward-predicting environments are thought to signal the cached (or long-run future) value of an action without signaling a specific outcome (e.g., juice; Daw et al., 2005). This diffuse motivation may explain the rigid nature of context cuing, given that cached values do not convey a specific desired outcome that could be met by substitutable means. Contributing further to the rigidity of habits, neural evidence indicates that, with repetition, whole sequences of responses become chunked or integrated in memory with the contexts that predict them (Barnes, Kubota, Hu, Jin, & Graybiel, 2005). Chunked responses are cued and implemented as a unit, consistent with the idea that habits require limited conscious control to proceed to completion. This quality of habitual responding is frustratingly evident when, for example, trying to fix a well-practiced but badly executed golf swing or dance-step sequence. As yet, the motivated-context idea has been tested primarily with animals. Its promise as a model of human habits comes from evidence that reward-related neurotransmitter systems are shared across species (e.g., in humans, dopamine is elicited by monetary reward). Multiple Habit Mechanisms The high degree of repetition in daily life observed in the diary research of Wood et al. (2002) is likely to be a product of multiple habit-control mechanisms that draw, in various cases, on direct context associations as well as on diffuse motivations. Although we consider implicit goals to be an implausible mediator of habitual behavior, they undoubtedly contribute to some types of repetition. Whether habits are cued directly or are diffusely motivated, they are triggered automatically by contexts and performed in a relatively rigid way.
These features of responding have important implications for theories of behavior prediction and behavior change."} {"_id": "32578f50dfcd443505450c79b61d59cc71b8d685", "title": "Digital Enterprise Architecture - Transformation for the Internet of Things", "text": "Excellence in IT is both a driver and a key enabler of the digital transformation. The digital transformation changes the way we live, work, learn, communicate, and collaborate. The Internet of Things (IoT) fundamentally influences today's digital strategies with disruptive business operating models and fast changing markets. New business information systems are integrating emerging Internet of Things infrastructures and components. With the huge diversity of Internet of Things technologies and products, organizations have to leverage and extend previous Enterprise Architecture efforts to enable business value by integrating Internet of Things architectures. Both architecture engineering and management of current information systems and business models are complex and currently integrate, besides the Internet of Things, synergistic subjects like Enterprise Architecture in context with services & cloud computing, semantic-based decision support through ontologies and knowledge-based systems, big data management, as well as mobility and collaboration networks. To provide adequate decision support for complex business/IT environments, we have to make transparent the impact of business and IT changes over the integral landscape of affected architectural capabilities, like directly and transitively impacted IoT-objects, business categories, processes, applications, services, platforms and infrastructures. The paper describes a new metamodel-based approach for integrating Internet of Things architectural objects, which are semi-automatically federated into a holistic Digital Enterprise Architecture environment."} {"_id": "e8859b92af978e29cb3931b27bf781be6f98e3d0", "title": "A novel bridgeless buck-boost PFC converter", "text": "The conventional cascade buck-boost PFC (CBB-PFC) converter suffers from high conduction loss in the input rectifier bridge. To resolve this problem, a novel bridgeless buck-boost PFC topology is proposed in this paper. The proposed PFC converter, which removes the input rectifier bridge, has three conducting semiconductors at every moment. Compared with the CBB-PFC topology, the proposed topology uses fewer conducting semiconductors, effectively reduces conduction losses, improves converter efficiency, and is suitable for use over a wide input voltage range. In this paper, average current mode control is implemented with the UC3854, and the theoretical analysis and design of the detection circuits are presented. An experimental prototype with 400 V/600 W output and a line input voltage range of 220 VAC to 380 VAC was built. Experimental results show that the proposed converter improves efficiency by 0.8% compared with the CBB-PFC converter."} {"_id": "9a7ff2f35a9e874fcf7d3a6a3c8671406cd7829a", "title": "Scene text recognition and tracking to identify athletes in sport videos", "text": "We present an athlete identification module forming part of a system for the personalization of sport video broadcasts. The aim of this module is the localization of athletes in the scene, their identification through the reading of names or numbers printed on their uniforms, and the labelling of frames where athletes are visible. 
Building upon a previously published algorithm we extract text from individual frames and read these candidates by means of an optical character recognizer (OCR). The OCR-ed text is then compared to a known list of athletes\u2019 names (or numbers), to provide a presence score for each athlete. Text regions are tracked in subsequent frames using a template matching technique. In this way blurred or distorted text, normally unreadable by the OCR, is exploited to provide a denser labelling of the video sequences. Extensive experiments show that the method proposed is fast, robust and reliable, out-performing results of other systems in the literature."} {"_id": "0214778162cb1bb1fb15b6b8e5c5d669a3985fb5", "title": "A Knowledge Graph based Bidirectional Recurrent Neural Network Method for Literature-based Discovery", "text": "In this paper, we present a model which incorporates biomedical knowledge graph, graph embedding and deep learning methods for literature-based discovery. Firstly, the relations between entities are extracted from biomedical abstracts and then a knowledge graph is constructed by using these obtained relations. Secondly, the graph embedding technologies are applied to convert the entities and relations in the knowledge graph into a low-dimensional vector space. Thirdly, a bidirectional Long Short-Term Memory network is trained based on the entity associations represented by the pre-trained graph embeddings. Finally, the learned model is used for open and closed literature-based discovery tasks. The experimental results show that our method could not only effectively discover hidden associations between entities, but also reveal the corresponding mechanism of interactions. It suggests that incorporating knowledge graph and deep learning methods is an effective way for capturing the underlying complex associations between entities hidden in the literature."} {"_id": "e74949ae81efa58f562d44b86339571701674284", "title": "A system for generating and injecting indistinguishable network decoys", "text": "We propose a novel trap-based architecture for detecting passive, \u201csilent\u201d, attackers who are eavesdropping on enterprise networks. Motivated by the increasing number of incidents where attackers sniff the local network for interesting information, such as credit card numbers, account credentials, and passwords, we introduce a methodology for building a trap-based network that is designed to maximize the realism of bait-laced traffic. Our proposal relies on a \u201crecord, modify, replay\u201d paradigm that can be easily adapted to different networked environments. The primary contributions of our architecture are the ease of automatically injecting large amounts of believable bait, and the integration of different detection mechanisms in the back-end. We demonstrate our methodology in a prototype platform that uses our decoy injection API to dynamically create and dispense network traps on a subset of our campus wireless network. Our network traps consist of several types of monitored passwords, authentication cookies, credit cards, and documents containing beacons to alarm when opened. The efficacy of our decoys against a model attack program is also discussed, along with results obtained from experiments in the field. 
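The literature-based-discovery abstract above ("A Knowledge Graph based Bidirectional Recurrent Neural Network Method...") feeds pre-trained graph embeddings into a bidirectional LSTM. A rough sketch of how those pieces could fit together is given below; the paper's actual architecture, dimensions, and training details are not specified here, so every concrete choice (embedding size, hidden size, the binary "hidden association" head) is an assumption for illustration.

```python
import torch
import torch.nn as nn

class LBDClassifier(nn.Module):
    """Hypothetical sketch: score entity chains using pre-trained graph
    embeddings fed through a bidirectional LSTM."""

    def __init__(self, pretrained_emb: torch.Tensor, hidden: int = 128):
        super().__init__()
        # Entity vectors produced by a separate graph-embedding step
        # (e.g., TransE or node2vec); kept frozen in this sketch.
        self.emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=True)
        self.lstm = nn.LSTM(pretrained_emb.size(1), hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # "is there a hidden association?"

    def forward(self, entity_ids: torch.Tensor) -> torch.Tensor:
        x = self.emb(entity_ids)                # (batch, seq, dim)
        _, (h, _) = self.lstm(x)                # final hidden states
        h = torch.cat([h[-2], h[-1]], dim=-1)   # both directions, last layer
        return torch.sigmoid(self.head(h)).squeeze(-1)

# Usage with dummy data: 1000 entities, 64-d embeddings, chains of length 3
# (e.g., drug -> gene -> disease candidates).
model = LBDClassifier(torch.randn(1000, 64))
print(model(torch.randint(0, 1000, (8, 3))).shape)  # torch.Size([8])
```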
In addition, we present a user study that demonstrates the believability of our decoy traffic, and finally, we provide experimental results to show that our solution causes only negligible interference to ordinary users."} {"_id": "b8febc6422057c0476db7e64ac88df8fb0a619eb", "title": "Review, Reflect, and React: A Culturally Responsive Model for Pre-service Secondary Social Studies Teachers", "text": "The purpose of this qualitative study was to design and implement a model of cultural-responsiveness within a social studies teacher education program. Specifically, we sought to understand how pre-service grades 6-12 social studies practitioners construct culturally responsive teaching (CRT) in their lesson planning. In addition, we examined the professional barriers that prevented teacher-candidates from actualizing culturally responsive pedagogy. Incorporating a conceptual model of Review, Reflect, and React, 20 teacher candidates in a social studies methods course engaged CRT theory and practice. Thematic analysis of lesson plans and clinical reflections indicated successful proponents of CRT critically analyzed their curriculum, explored the diverse needs of their students, and engaged learners in culturally appropriate social studies pedagogy. Findings also showed that unsuccessful CRT was characterized by a lack of content knowledge, resistance from the cooperating teacher, and a reliance on the textbook materials."} {"_id": "0ed4df86b942e8b4979bac640a720db0187a32ef", "title": "Deep and Broad Learning on Content-Aware POI Recommendation", "text": "POI recommendation has attracted a lot of research attention recently. There are several key factors that need to be modeled towards effective POI recommendation - POI properties, user preference and sequential momentum of check-ins. The challenge lies in how to synergistically learn multi-source heterogeneous data. Previous work tries to model multi-source information in a flat manner, using either embedding based methods or sequential prediction models in a cross-related space, which cannot generate mutually reinforcing results. In this paper, a deep and broad learning approach based on a Deep Context-aware POI Recommendation (DCPR) model is proposed to structurally learn POI and user characteristics. The proposed DCPR model includes three collaborative layers, a CNN layer for POI feature mining, an RNN layer for sequential dependency and user preference modeling, and an interactive layer based on matrix factorization to jointly optimize the overall model. Experiments over three data sets demonstrate that the DCPR model achieves significant improvement over state-of-the-art POI recommendation algorithms and other deep recommendation models."} {"_id": "2d113bbeb6f2f393a09a82bc05fdff61b391d05d", "title": "A Rule Based Approach to Discourse Parsing", "text": "In this paper we present an overview of recent developments in discourse theory and parsing under the Linguistic Discourse Model (LDM) framework, a semantic theory of discourse structure. We give a novel approach to the problem of discourse segmentation based on discourse semantics and sketch a limited but robust approach to symbolic discourse parsing based on syntactic, semantic and lexical rules. 
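The DCPR abstract above describes three collaborative layers: a CNN for POI feature mining, an RNN for sequential preference, and a matrix-factorization-style interaction layer. The skeleton below only mirrors that structure; layer sizes, inputs, and the joint objective are invented for illustration and are not the paper's formulation.

```python
import torch
import torch.nn as nn

class DCPRSketch(nn.Module):
    """Structural sketch of a CNN + RNN + MF recommender, loosely following
    the three-layer DCPR description above (all dimensions assumed)."""

    def __init__(self, n_users: int, n_pois: int, feat: int = 32, hid: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(               # POI content-feature mining
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.poi_emb = nn.Embedding(n_pois, feat)
        self.rnn = nn.GRU(feat, hid, batch_first=True)  # check-in sequence model
        self.user_emb = nn.Embedding(n_users, hid)      # MF-style user factors
        self.proj = nn.Linear(hid + 8, hid)

    def forward(self, user, poi_seq, poi_content):
        _, h = self.rnn(self.poi_emb(poi_seq))              # (1, B, hid)
        c = self.cnn(poi_content.unsqueeze(1)).squeeze(-1)  # (B, 8)
        ctx = self.proj(torch.cat([h[0], c], dim=-1))       # fused context
        return (ctx * self.user_emb(user)).sum(-1)          # interaction score

# Dummy usage: batch of 4 users, check-in histories of length 5,
# 16-dimensional raw content features per candidate POI.
m = DCPRSketch(n_users=100, n_pois=500)
score = m(torch.randint(0, 100, (4,)), torch.randint(0, 500, (4, 5)),
          torch.randn(4, 16))
print(score.shape)  # torch.Size([4])
```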
To demonstrate the utility of the system in a real application, we briefly describe the architecture of the PALSUMM system, a symbolic summarization system being developed at FX Palo Alto Laboratory that uses discourse structures constructed using the theory outlined to summarize written English prose texts."} {"_id": "9778197c8b8a4a1c297edb180b63f4a29f612895", "title": "Emerging Principles of Gene Expression Programs and Their Regulation.", "text": "Many mechanisms contribute to regulation of gene expression to ensure coordinated cellular behaviors and fate decisions. Transcriptional responses to external signals can consist of many hundreds of genes that can be parsed into different categories based on kinetics of induction, cell-type and signal specificity, and duration of the response. Here we discuss the structure of transcription programs and suggest a basic framework to categorize gene expression programs based on characteristics related to their control mechanisms. We also discuss possible evolutionary implications of this framework."} {"_id": "a7ab6fe31ee11a6b59e4d0c15de9f81661ef0d58", "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "text": "OBJECTIVE\nBrain-computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible.\n\n\nAPPROACH\nIn this work we introduce EEGNet, a compact convolutional neural network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet, both for within-subject and cross-subject classification, to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR).\n\n\nMAIN RESULTS\nWe show that EEGNet generalizes across paradigms better than, and achieves comparably high performance to, the reference algorithms when only limited training data is available across all tested paradigms. In addition, we demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features.\n\n\nSIGNIFICANCE\nOur results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks. 
Our models can be found at: https://github.com/vlawhern/arl-eegmodels."} {"_id": "fdddcec753050d624a72d3de6f23c5b560f1cb25", "title": "Signal Processing Approaches to Minimize or Suppress Calibration Time in Oscillatory Activity-Based Brain\u2013Computer Interfaces", "text": "One of the major limitations of brain-computer interfaces (BCI) is their long calibration time, which limits their use in practice, both by patients and healthy users alike. Such long calibration times are due to the large between-user variability and thus to the need to collect numerous training electroencephalography (EEG) trials for the machine learning algorithms used in BCI design. In this paper, we first survey existing approaches to reduce or suppress calibration time, these approaches being notably based on regularization, user-to-user transfer, semi-supervised learning and a priori physiological information. We then propose new tools to reduce BCI calibration time. In particular, we propose to generate artificial EEG trials from the few EEG trials initially available, in order to augment the training set size. These artificial EEG trials are obtained by relevant combinations and distortions of the original trials available. We propose three different methods to do so. We also propose a new, fast and simple approach to perform user-to-user transfer for BCI. Finally, we study and compare offline different approaches, both old and new ones, on the data of 50 users from three different BCI data sets. This enables us to identify guidelines about how to reduce or suppress calibration time for BCI."} {"_id": "061356704ec86334dbbc073985375fe13cd39088", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "text": "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16\u201319 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve the state-of-the-art results. Importantly, we have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."} {"_id": "14318685b5959b51d0f1e3db34643eb2855dc6d9", "title": "Going deeper with convolutions", "text": "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. 
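Among the calibration-reduction tools described above is the generation of artificial EEG trials by "relevant combinations and distortions" of the few available ones. The paper proposes three specific methods that are not detailed here; the sketch below implements one generic variant (segment recombination across same-class trials plus mild noise) purely as an assumed illustration.

```python
import numpy as np

def augment_trials(trials, n_new, n_segments=4, noise_std=0.05, rng=None):
    """Create artificial EEG trials by splicing time segments drawn from
    different same-class trials, then adding a small distortion.
    trials: array of shape (n_trials, n_channels, n_samples)."""
    rng = rng or np.random.default_rng(0)
    n_trials, n_ch, n_samp = trials.shape
    bounds = np.linspace(0, n_samp, n_segments + 1, dtype=int)
    out = np.empty((n_new, n_ch, n_samp))
    for i in range(n_new):
        for a, b in zip(bounds[:-1], bounds[1:]):
            donor = rng.integers(n_trials)   # random donor trial per segment
            out[i, :, a:b] = trials[donor, :, a:b]
        out[i] += noise_std * out[i].std() * rng.standard_normal((n_ch, n_samp))
    return out

# Example: 10 real trials (22 channels, 500 samples) -> 40 artificial trials.
real = np.random.randn(10, 22, 500)
print(augment_trials(real, 40).shape)  # (40, 22, 500)
```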
One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection."} {"_id": "1827de6fa9c9c1b3d647a9d707042e89cf94abf0", "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "text": "Training Deep Neural Networks is complicated by the fact that the distribution of each layer\u2019s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters."} {"_id": "6e80768219b2ab5a3247444cfb280e8d33d369f0", "title": "DESIGN OF AN ULTRA-WIDEBAND POWER DIVIDER VIA THE COARSE-GRAINED PARALLEL MICRO-GENETIC ALGORITHM", "text": "An ultra-wideband (UWB) power divider is designed in this paper. The UWB performance of this power divider is obtained by using a tapered microstrip line that consists of exponential and elliptic sections. The coarse-grained parallel micro-genetic algorithm (PMGA) and CST Microwave Studio are combined to achieve an automated parallel design process. The method is applied to optimize the UWB power divider. The optimized power divider is fabricated and measured. The measured results show relatively low insertion loss, good return loss, and high isolation between the output ports across the whole UWB (3.1\u201310.6 GHz)."} {"_id": "12dc84ff8f98d6cb4b3ea0c1d4b444900d4bbb5c", "title": "Visual Path Prediction in Complex Scenes with Crowded Moving Objects", "text": "This paper proposes a novel path prediction algorithm for progressing one step further than the existing works focusing on single target path prediction. In this paper, we consider moving dynamics of co-occurring objects for path prediction in a scene that includes crowded moving objects. To solve this problem, we first suggest a two-layered probabilistic model to find major movement patterns and their co-occurrence tendency. By utilizing the unsupervised learning results from the model, we present an algorithm to find the future location of any target object. Through extensive qualitative/quantitative experiments, we show that our algorithm can find a plausible future path in complex scenes with a large number of moving objects."} {"_id": "90ff0f5eaed1ebb42a50da451f15cf39de53b681", "title": "NEED FOR HUMAN DIRECTION OF DATA MINING CROSS-INDUSTRY STANDARD PROCESS: CRISP\u2013DM CASE STUDY 1: ANALYZING AUTOMOBILE WARRANTY CLAIMS: EXAMPLE OF THE CRISP\u2013DM INDUSTRY STANDARD PROCESS IN ACTION FALLACIES OF DATA MINING", "text": "WHAT IS DATA MINING? WHY DATA MINING? 
NEED FOR HUMAN DIRECTION OF DATA MINING CROSS-INDUSTRY STANDARD PROCESS: CRISP\u2013DM CASE STUDY 1: ANALYZING AUTOMOBILE WARRANTY CLAIMS: EXAMPLE OF THE CRISP\u2013DM INDUSTRY STANDARD PROCESS IN ACTION FALLACIES OF DATA MINING WHAT TASKS CAN DATA MINING ACCOMPLISH? CASE STUDY 2: PREDICTING ABNORMAL STOCK MARKET RETURNS USING NEURAL NETWORKS CASE STUDY 3: MINING ASSOCIATION RULES FROM LEGAL DATABASES CASE STUDY 4: PREDICTING CORPORATE BANKRUPTCIES USING DECISION TREES CASE STUDY 5: PROFILING THE TOURISM MARKET USING k-MEANS CLUSTERING ANALYSIS"} {"_id": "25f08ef49357067dcb58cc3b8af416e138006737", "title": "Design and implementation of server cluster dynamic load balancing based on OpenFlow", "text": "Nowadays, the Internet is flooded with huge traffic and many applications have millions of users, so it is difficult for a single server to bear a large number of clients' accesses. Many application providers therefore group several servers into a computing unit to support a specific application, usually relying on distributed computing and load-balancing technology to complete the work. A typical load-balancing technique is to use a dedicated load balancer to forward client requests to different servers; this technique requires dedicated hardware support, and the hardware is expensive, lacks flexibility, and easily becomes a single point of failure. A new solution for load balancing based on OpenFlow is proposed. This paper mainly studies dynamic load-balancing technology in the OpenFlow environment: the controller collects the servers' running status through the SNMP protocol and calculates the aggregated load of the servers according to a dynamic load-balancing scheduling algorithm, and the OpenFlow switch forwards each client's request to the server whose aggregated load is smallest, thus minimizing the response time of the web server. In the OpenFlow network environment, this method brings high flexibility without requiring additional equipment."} {"_id": "48593450966066cd97cf5e1ad11dfd2c29ab13bc", "title": "Solving the 0-1 Knapsack Problem with Genetic Algorithms", "text": "This paper describes a research project on using Genetic Algorithms (GAs) to solve the 0-1 Knapsack Problem (KP). The Knapsack Problem is an example of a combinatorial optimization problem, which seeks to maximize the benefit of objects in a knapsack without exceeding its capacity. The paper contains three sections: brief description of the basic idea and elements of the GAs, definition of the Knapsack Problem, and implementation of the 0-1 Knapsack Problem using GAs. The main focus of the paper is on the implementation of the algorithm for solving the problem. In the program, we implemented two selection functions, roulette-wheel and group selection. The results from both of them differed depending on whether we used elitism or not. Elitism significantly improved the performance of the roulette-wheel function. Moreover, we tested the program with different crossover ratios and single and double crossover points but the results given were not that different."} {"_id": "b58ac39705ba8b6df6c87e7f662d4041eeb032b6", "title": "Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals", "text": "Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. 
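The 0-1 knapsack abstract above describes a GA with roulette-wheel selection, optional elitism, and single-point crossover. A compact, self-contained sketch of that setup follows; the toy instance, penalty scheme (infeasible solutions score zero), and all GA parameters are assumptions made for illustration, not the paper's exact configuration.

```python
import random

random.seed(1)
VALUES   = [60, 100, 120, 70, 90]       # toy instance (assumed data)
WEIGHTS  = [10, 20, 30, 15, 25]
CAPACITY = 60
POP, GENS, PC, PM = 30, 100, 0.8, 0.02  # assumed GA parameters

def fitness(ind):
    w = sum(wi for wi, g in zip(WEIGHTS, ind) if g)
    v = sum(vi for vi, g in zip(VALUES, ind) if g)
    return v if w <= CAPACITY else 0    # overweight solutions score zero

def roulette(pop, fits):
    r, acc = random.uniform(0, sum(fits) or 1), 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

pop = [[random.randint(0, 1) for _ in VALUES] for _ in range(POP)]
for _ in range(GENS):
    fits = [fitness(i) for i in pop]
    nxt = [max(pop, key=fitness)[:]]    # elitism: carry over the best individual
    while len(nxt) < POP:
        a, b = roulette(pop, fits), roulette(pop, fits)
        if random.random() < PC:        # single-point crossover
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
        else:
            child = a[:]
        nxt.append([1 - g if random.random() < PM else g for g in child])
    pop = nxt

best = max(pop, key=fitness)
print(best, fitness(best))              # e.g., items 0+1+2: weight 60, value 280
```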
Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection."} {"_id": "a5786a82cea67a46c45b18525fca9e0bb9254040", "title": "Floral dip: a simplified method for Agrobacterium-mediated transformation of Arabidopsis thaliana.", "text": "The Agrobacterium vacuum infiltration method has made it possible to transform Arabidopsis thaliana without plant tissue culture or regeneration. In the present study, this method was evaluated and a substantially modified transformation method was developed. The labor-intensive vacuum infiltration process was eliminated in favor of simple dipping of developing floral tissues into a solution containing Agrobacterium tumefaciens, 5% sucrose and 500 microliters per litre of surfactant Silwet L-77. Sucrose and surfactant were critical to the success of the floral dip method. Plants inoculated when numerous immature floral buds and few siliques were present produced transformed progeny at the highest rate. Plant tissue culture media, the hormone benzylamino purine and pH adjustment were unnecessary, and Agrobacterium could be applied to plants at a range of cell densities. Repeated application of Agrobacterium improved transformation rates and overall yield of transformants approximately twofold. Covering plants for 1 day to retain humidity after inoculation also raised transformation rates twofold. Multiple ecotypes were transformable by this method. 
The modified method should facilitate high-throughput transformation of Arabidopsis for efforts such as T-DNA gene tagging, positional cloning, or attempts at targeted gene replacement."} {"_id": "4e08d766f6f81501aba9a026db4b3d0b634ea535", "title": "Device-to-device communications: The physical layer security advantage", "text": "In systems that allow device-to-device (D2D) communications, user pairs in close proximity communicate directly without using an access point (AP) as an intermediary. D2D communications leads to improved throughput, reduced power consumption and interference, and more flexible resource allocation. We show that the D2D paradigm also provides significantly improved security at the physical layer, by reducing exposure of the information to eavesdroppers from two relatively high-power transmissions to a single low-power hop. We derive the secrecy outage probability (SOP) for the D2D and cellular systems, and compare performance for D2D scenarios in the presence of a multi-antenna eavesdropper. The cellular approach is only seen to have an advantage in certain cases when the AP has a large number of antennas and perfect channel state information."} {"_id": "79e78623f550a14fec5a05fcb31e641f347ccccd", "title": "Efficient Self-Interpretation in Lambda Calculus", "text": "We start by giving a compact representation schema for \u03bb-terms and show how this leads to an exceedingly small and elegant self-interpreter. We then define the notion of a self-reducer, and show how this too can be written as a small \u03bb-term. Both the self-interpreter and the self-reducer are proved correct. We finally give a constructive proof for the second fixed point theorem for the representation schema. All the constructions have been implemented on a computer, and experiments verify their correctness. Timings show that the self-interpreter and self-reducer are quite efficient, being about 35 and 50 times slower than direct execution using a call-by-need reduction strategy."} {"_id": "a520c83130857a8892a8233a3d8a1381a65d6ae3", "title": "A 10000 frames/s CMOS digital pixel sensor", "text": "A 352 \u00d7 288 pixel CMOS image sensor chip with per-pixel single-slope ADC and dynamic memory in a standard digital 0.18-\u03bcm CMOS process is described. The chip performs \u201csnapshot\u201d image acquisition, parallel 8-bit A/D conversion, and digital readout at continuous rate of 10 000 frames/s or 1 Gpixels/s with power consumption of 50 mW. Each pixel consists of a photogate circuit, a three-stage comparator, and an 8-bit 3T dynamic memory comprising a total of 37 transistors in 9.4 \u00d7 9.4 \u03bcm with a fill factor of 15%. The photogate quantum efficiency is 13.6%, and the sensor conversion gain is 13.1 \u03bcV/e\u2212. At 1000 frames/s, measured integral nonlinearity is 0.22% over a 1-V range, rms temporal noise with digital CDS is 0.15%, and rms FPN with digital CDS is 0.027%. When operated at low frame rates, on-chip power management circuits permit complete powerdown between each frame conversion and readout. The digitized pixel data is read out over a 64-bit (8-pixel) wide bus operating at 167 MHz, i.e., over 1.33 GB/s. 
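The D2D abstract above compares secrecy outage probability (SOP) across transmission modes. As a concrete reading of the basic quantity, the Monte Carlo sketch below estimates the SOP of a single Rayleigh-faded hop overheard by a single-antenna eavesdropper; the SNRs and target secrecy rate are toy values, and the paper's multi-antenna analysis and cellular comparison are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000                      # Monte Carlo samples
SNR_LEGIT, SNR_EVE = 10.0, 3.0     # assumed average SNRs (linear scale)
RATE_S = 1.0                       # assumed target secrecy rate (bits/s/Hz)

# Rayleigh fading: channel power gains are exponentially distributed.
g_legit = rng.exponential(1.0, N)
g_eve   = rng.exponential(1.0, N)

# Instantaneous secrecy capacity: max(0, C_legitimate - C_eavesdropper).
c_s = np.maximum(0.0, np.log2(1 + SNR_LEGIT * g_legit)
                      - np.log2(1 + SNR_EVE * g_eve))

print(f"estimated SOP: {np.mean(c_s < RATE_S):.4f}")  # P(C_s < target rate)
```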
The chip is suitable for general high-speed imaging applications as well as for the implementation of several still and standard video rate applications that benefit from high-speed capture, such as dynamic range enhancement, motion estimation and compensation, and image stabilization."} {"_id": "dd58d7c7083c1ee6e281fd2a5293653d4aecca5d", "title": "Adaptation of the Nomophobia Questionnaire (NMP-Q) to Spanish in a sample of adolescents.", "text": "INTRODUCTION\nNomophobia is the fear of being out of mobile phone contact. People suffering from this anxiety disorder have feelings of stress and nervousness when access to their mobiles or computers is not possible. This work is an adaptation and validation study of the Spanish version of the Nomophobia Questionnaire (NMP-Q).\n\n\nMETHODOLOGY\nThe study included 306 students (46.1% males and 53.9% females) with ages ranging from 13 to 19 years (Md=15.41\u00b11.22).\n\n\nRESULTS\nExploratory factor analysis revealed four dimensions that accounted for 64.4% of total variance. The ordinal \u03b1-value was 0.95, ranging from 0.75 to 0.92 across factors. Measure of stability was calculated by the test-retest method (r=0.823). Indicators of convergence with the Spanish versions of the \u201cMobile Phone Problem Use Scale\u201d (r=0.654) and the \u201cGeneralized Problematic Internet Use Scale\u201d (r=0.531) were identified. Problematic mobile phone use patterns were examined taking the 15P, 80P and 95P percentiles as cut-off points. Scores of 39, 87 and 116 on NMP-Q corresponded to occasional, at-risk and problematic users, respectively.\n\n\nCONCLUSIONS\nPsychometric analysis shows that the Spanish version of the NMP-Q is a valid and reliable tool for the study of nomophobia."} {"_id": "5baade37a1a5ff966fc1ffe3099056fae491a2c6", "title": "Novel air blowing control for balancing a unicycle robot", "text": "This paper presents the implementation of a novel control method of using air for balancing the roll angle of a unicycle robot. The unicycle robot is designed and built. The roll angle of the unicycle robot is controlled by blowing air while the pitch angle is controlled by DC motors. Successful balancing performances are demonstrated."} {"_id": "7c461664d3be5328d2b82b62f254513a8cfc3f69", "title": "REMAINING LIFE PREDICTION OF ELECTRONIC PRODUCTS USING LIFE CONSUMPTION MONITORING APPROACH", "text": "Various kinds of failures may occur in electronic products because of their life cycle environmental conditions including temperature, humidity, shock and vibration. Failure mechanism models are available to estimate the time to failure for most of these failures. Hence if the life cycle environment of a product can be determined, it is possible to assess the amount of damage induced and predict when the product might fail. This paper presents a life consumption monitoring methodology to determine the remaining life of a product. A battery powered data recorder is used to monitor the temperature, shock and vibration loads on a printed circuit board assembly placed under the hood of an automobile. The recorded data is used in conjunction with physics-of-failure models to determine the damage accumulation in the solder joints due to temperature and vibration loading. The remaining life of the solder joints of the test board is then obtained from the damage accumulation information. 
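The NMP-Q abstract above derives its user categories from the 15th, 80th, and 95th percentiles of total scores (39, 87, and 116 in the study's sample). The snippet below shows that kind of cutoff computation; the simulated scores are placeholders, not the study's data, so the printed cutoffs will differ from the published ones.

```python
import numpy as np

rng = np.random.default_rng(7)
# Placeholder NMP-Q total scores for 306 respondents (invented distribution).
scores = rng.normal(75, 25, 306).clip(20, 140)

p15, p80, p95 = np.percentile(scores, [15, 80, 95])
print(f"cut-offs: 15P={p15:.0f}, 80P={p80:.0f}, 95P={p95:.0f}")

def classify(score):
    """Mirror the abstract's mapping of percentile cut-offs to user types."""
    if score >= p95:
        return "problematic"
    if score >= p80:
        return "at-risk"
    if score <= p15:
        return "occasional"
    return "typical"

print(classify(120), classify(90), classify(30))
```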
Reliability Prediction Reliability is defined as the ability of a product to perform as intended (i.e., without failure and within specified performance limits) for a specified time, in its life cycle application environment. Over time, technology improvements have led to higher I/O counts in components and circuit cards of electronic products. This has resulted in significant downsizing of interconnect thicknesses, making them more vulnerable in field applications. At the same time, increases in warranties and severe liabilities of electronic product failure have compelled manufacturers to predict the reliability of their products in field applications. An efficient reliability prediction scheme can be used for many purposes including [1]: \u2022 Logistics support (e.g., forecast warranty and life cycle costs, spare parts provisioning, availability)"} {"_id": "5424a7cd6a6393ceb8759494ee6a7f7f16a0a179", "title": "You Can Run but You Can't Read: Preventing Disclosure Exploits in Executable Code", "text": "Code reuse attacks allow an adversary to impose malicious behavior on an otherwise benign program. To mitigate such attacks, a common approach is to disguise the address or content of code snippets by means of randomization or rewriting, leaving the adversary with no choice but guessing. However, disclosure attacks allow an adversary to scan a process - even remotely - and enable her to read executable memory on-the-fly, thereby allowing the just-in-time assembly of exploits on the target site. In this paper, we propose an approach that fundamentally thwarts the root cause of memory disclosure exploits by preventing the inadvertent reading of code while the code itself can still be executed. We introduce a new primitive we call Execute-no-Read (XnR) which ensures that code can still be executed by the processor, but at the same time code cannot be read as data. This ultimately forfeits the self-disassembly which is necessary for just-in-time code reuse attacks (JIT-ROP) to work. To the best of our knowledge, XnR is the first approach to prevent memory disclosure attacks of executable code and JIT-ROP attacks in general. Despite the lack of hardware support for XnR in contemporary Intel x86 and ARM processors, our software emulations for Linux and Windows have a run-time overhead of only 2.2% and 3.4%, respectively."} {"_id": "8415d972023133d0f3f731ec7d070663ba878624", "title": "TangiPaint: A Tangible Digital Painting System", "text": "TangiPaint is a digital painting application that provides the experience of working with real materials such as canvas and oil paint. Using fingers on the touchscreen of an iPad or iPhone, users can lay down strokes of thick, three-dimensional paint on a simulated canvas. Then using the Tangible Display technology introduced by Darling and Ferwerda [1], users can tilt the display screen to see the gloss and relief or \"impasto\" of the simulated surface, and modify it until they get the appearance they desire. Scene lighting can also be controlled through direct gesture-based interaction. A variety of \"paints\" with different color and gloss properties and substrates with different textures are available and new ones can be created or imported. The tangiPaint system represents a first step toward developing digital art media that look and behave like real materials. Introduction The development of computer-based digital art tools has had a huge impact on a wide range of creative fields. 
In commercial art, advertisements incorporating images, text, and graphic elements can be laid out and easily modified using digital illustration applications. In cinema, background matte elements can be digitally drawn, painted, and seamlessly integrated with live footage. In fine art, painters, printers, and engravers have also been embracing the new creative possibilities of computer-based art tools. The recent introduction of mobile, tablet-based computers with high-resolution displays, graphics processing units (GPUs) and multi-touch capabilities is also creating new possibilities for direct interaction in digital painting. However a significant limitation of most digital painting tools is that the final product is just a digital image (typically an array of RGB color values). All the colors, textures and lighting effects that we see when we look at the digital painting are \u201cbaked in\u201d to the image by the painter. In contrast, when a painter works with real tools and media, the color, gloss, and textural properties of the work are a natural byproduct of the creative process, and lighting effects such as highlights and shadows are produced directly through interactions of the surface with light in the environment. In this paper we introduce tangiPaint, a new tablet-based digital painting system that attempts to bridge the gap between the real and digital worlds. tangiPaint is a tangible painting application that allows artists to work with digital media that look and behave like real materials. Figure 1 shows screenshots from the tangiPaint application implemented on an Apple iPad2. In Figure 1a an artist has painted a number of brushstrokes on a blank canvas. Note that in addition to color, the strokes vary in gloss, thickness, and texture, and run out just as if they were real paint laid down with a real brush. The paints also layer and mix realistically as they would on a real canvas. Figure 1. Screenshots of paintings created using the tangiPaint system. Note the gloss and relief of the brushstrokes and the texture of the underlying canvas. The system allows direct interaction with the \u201cpainted\u201d surface both in terms of paint application and manipulation of surface orientation and"} {"_id": "0295f6b715b6ed73fb9fbb0da528220004eb892e", "title": "Estimating Data Integration and Cleaning Effort", "text": "Data cleaning and data integration have been the topic of intensive research for at least the past thirty years, resulting in a multitude of specialized methods and integrated tool suites. All of them require at least some and in most cases significant human input in their configuration, during processing, and for evaluation. For managers (and for developers and scientists) it would be therefore of great value to be able to estimate the effort of cleaning and integrating some given data sets and to know the pitfalls of such an integration project in advance. This helps deciding about an integration project using cost/benefit analysis, budgeting a team with funds and manpower, and monitoring its progress. Further, knowledge of how well a data source fits into a given data ecosystem improves source selection. We present an extensible framework for the automatic effort estimation for mapping and cleaning activities in data integration projects with multiple sources. 
It comprises a set of measures and methods for estimating integration complexity and ultimately effort, taking into account heterogeneities of both schemas and instances and regarding both integration and cleaning operations. Experiments on two real-world scenarios show that our proposal is two to four times more accurate than a current approach in estimating the time duration of an integration process, and provides a meaningful breakdown of the integration problems as well as the required integration activities. 1. COMPLEXITY OF INTEGRATION AND CLEANING Data integration and data cleaning remain among the most human-work-intensive tasks in data management. Both require a clear understanding of the semantics of schema and data \u2013 a notoriously difficult task for machines. Despite much research and development of supporting tools and algorithms, state-of-the-art integration projects involve significant human resource cost. In fact, Gartner reports that 10% of all IT cost goes into enterprise software for data integration and data quality, and it is well recognized that most of those expenses are for human labor. Thus, when embarking on a data integration and cleaning project, it is useful and important to estimate in advance the effort and cost of the project and to find out which particular difficulties cause these. Such estimations help in deciding whether to pursue the project in the first place, planning and scheduling the project using estimates about the duration of integration steps, budgeting in terms of cost or manpower, and finally monitoring the progress of the project. Cost estimates can also help integration service providers, IT consultants, and IT tool vendors to generate better price quotes for integration customers. Further, automatically generated knowledge of how well and how easily a data source fits into a given data ecosystem improves source selection. However, \u201cproject estimation for [. . . ] data integration projects is especially difficult, given the number of stakeholders involved across the organization as well as the unknowns of data complexity and quality\u201d [14]. Any integration project has several steps and tasks, including requirements analysis, selection of data sources, determining the appropriate target database, data transformation specifications, testing, deployment, and maintenance. In this paper, we focus on exploring the database-related steps of integration and cleaning and automatically estimate their effort. 1.1 Challenges There are simple approaches to estimate in isolation the complexity of individual mapping and cleaning tasks. For the mapping, evaluating its complexity can be done by counting the matchings (i.e., correspondences) among elements. For the cleaning problem, a natural solution is to measure its complexity by counting the number of constraints on the target schema. However, as several integration approaches have shown, the interactive nature of these two problems is particularly complex [5, 11, 13]. 
For example, a data exchange problem takes as input two relational schemas, a transformation between them (a mapping), and a set of target constraints, and answers two questions: whether it is possible to compute a valid solution for a given setting and how. Interestingly, to have a solution, certain conditions must hold on the target constraints, and extending the setting to more complex languages or data models brings tighter restrictions on the class of tractable cases [6, 12]. In our work, the main challenge is to estimate complexity and effort in a setting that goes beyond these ad-hoc studies while satisfying four main requirements: Generality: We require independence from the language used to express the data transformation. Furthermore, real cases often fail the existence-of-solution tests considered in formal frameworks (e.g., the weak acyclicity condition [11]), but an automatic estimation is still desirable for them in practice. Completeness: Only a subset of the constraints that hold on the data are specified over the schema. In fact, business rules are commonly enforced at the application level and are not reflected in the metadata of the schemas, but should nevertheless be considered. Granularity: Details about the integration issues are crucial for consumption of the estimation. For a real understanding and proper planning, it is important to know which source and/or target attributes are the cause of problems and how (e.g., phone attributes in source and target schema have different formats). Existing estimators do not reason over actual data structures and thus make no statements about the causes of integration effort. Configurability and extensibility: The actual effort depends on subjective factors like the capabilities of available tools and the desired quality of the output. Therefore, intuitive, yet rich configuration settings for the estimation process are crucial for its applicability. Moreover, users must be able to extend the range of problems covered by the framework. These challenges cannot be tackled with existing syntactical methods to test the existence of solutions, as they work only in specific settings (Generality), are restricted to declarative specifications over the schemas (Completeness), and do not provide details about the actual problems (Granularity). On the other hand, as systems that compute solutions require human interaction to finalize the process [8, 13], they cannot be used for estimation purposes and their availability is orthogonal to our problem (Configurability). 1.2 Approaching Effort Estimation Figure 1 presents our view on the problem of estimating the effort of data integration. The starting point is an integration scenario with a target database and one or more source databases. The right-hand side of the figure shows the actual integration process performed by an integration specialist, where the goal is to move all instances of the source databases into the target database. Typically, a set of integration tools are used by the specialist. These tools have access to the source and target and support her in the tasks. The process takes a certain effort, which can be measured, for instance, as an amount of work in hours or days or in a monetary unit. Our goal is to find that effort without actually performing the integration. Moreover, we want to find and present the problems that cause this effort. 
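Section 1.1 above mentions the "simple approaches" of counting matchings for the mapping side and constraints for the cleaning side. The snippet below makes those two counts concrete on an invented two-schema example; the schemas, the matcher, and the constraints are all assumptions for illustration and are not the framework's actual measures.

```python
# Naive complexity indicators in the spirit of Section 1.1: count schema
# matchings (mapping side) and target-constraint violations (cleaning side).

source_schema = {"name", "phone_no", "street", "zip"}
target_schema = {"name", "phone", "address", "zip", "email"}

# Toy matcher: exact-name matches plus hand-declared correspondences.
declared = {("phone_no", "phone"), ("street", "address")}
matchings = {(c, c) for c in source_schema & target_schema} | declared
print("matchings:", len(matchings))

rows = [
    {"name": "Ann", "phone": "555-0100", "zip": "12345", "email": "a@x.org"},
    {"name": None,  "phone": "n/a",      "zip": "1234",  "email": "b@x.org"},
]
constraints = [
    ("name NOT NULL",    lambda r: r["name"] is not None),
    ("zip has 5 digits", lambda r: r["zip"] is not None and len(r["zip"]) == 5),
]
violations = sum(1 for r in rows for _, ok in constraints if not ok(r))
print("constraint violations:", violations)
```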
To this end, we developed a two-phase process as shown on the left-hand side of Figure 1. The first phase, the complexity assessment, reveals concrete integration challenges for the scenario. To address generality, these problems are exclusively determined by the source and target schemas and instances; if and how an integration practitioner deals with them is not addressed at this point. Thus, this first phase is independent of external parameters, such as the level of expertise of the specialist or the available integration tools. However, it is aided by the results of schema matching and data profiling tools, which analyze the participating databases and produce metadata about them (to achieve completeness). The output of the complexity assessment is a set of clearly defined problems, such as number of violations for a constraint or number of different value representations. This detailed breakdown of the problems achieves granularity and is useful for several tasks, even if not interpreted as an input to calculate actual effort. Examples of application are source selection [9], i.e., given a set of integration candidates, find the source with the best \u2018fit\u2019; and support for data visualization [7], i.e., highlight parts of the schemas that are hard to integrate. Data"} {"_id": "fcf9086047002a6a04385a80e3f4cf25aa1d59a5", "title": "Bidirectional Decoder Networks for Attention-Based End-to-End Offline Handwriting Recognition", "text": "Recurrent neural networks that can be trained end-to-end on sequence learning tasks provide promising benefits over traditional recognition systems. In this paper, we demonstrate the application of an attention-based long short-term memory decoder network for offline handwriting recognition and analyze the segmentation, classification and decoding errors produced by the model. We further extend the decoding network by a bidirectional topology together with an integrated length estimation procedure and show that it is superior to unidirectional decoder networks. Results are presented for the word and text line recognition tasks of the RIMES handwriting recognition database. The software used in the experiments is freely available for academic research purposes."} {"_id": "e5ff81282457006da54e230dcff3a7ae45f7c278", "title": "A Piezoelectric Energy Harvester for Rotary Motion Applications: Design and Experiments", "text": "This paper investigates the analysis and design of a vibration-based energy harvester for rotary motion applications. The energy harvester consists of a cantilever beam with a tip mass and a piezoelectric ceramic attached along the beam that is mounted on a rotating shaft. Using this system, mechanical vibration energy is induced in the flexible beam due to the gravitational force applied to the tip mass while the hub is rotating. The piezoelectric transducer is used to convert the induced mechanical vibration energy into electricity. The equations of motion of the flexible structure are utilized along with the physical characteristics of the piezoelectric transducer to derive expressions for the electrical power. Furthermore, expressions for the optimum load resistance and maximum output power are obtained and validated experimentally using PVDF and PZT transducers. The results indicate that a maximum power of 6.4 mW at a shaft speed of 138 rad/s can be extracted by using a PZT transducer with dimensions 50.8 mm \u00d7 38.1 mm \u00d7 0.13 mm. 
This amount of power is sufficient to provide power for typical wireless sensors such as accelerometers and strain gauges."} {"_id": "741fd80f0a31fe77f91b1cce3d91c544d6d5b1b2", "title": "Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: a meta-analysis.", "text": "Virtual reality exposure therapy (VRET) is an increasingly common treatment for anxiety and specific phobias. Lacking is a quantitative meta-analysis that enhances understanding of the variability and clinical significance of anxiety reduction outcomes after VRET. Searches of electronic databases yielded 52 studies, and of these, 21 studies (300 subjects) met inclusion criteria. Although meta-analysis revealed large declines in anxiety symptoms following VRET, moderator analyses were limited due to inconsistent reporting in the VRET literature. This highlights the need for future research studies that report uniform and detailed information regarding presence, immersion, anxiety and/or phobia duration, and demographics."} {"_id": "0e762d93bcce174d13d5d682e702c5e3a28bed20", "title": "Hair Follicle Miniaturization in a Woolly Hair Nevus: A Novel \"Root\" Perspective for a Mosaic Hair Disorder.", "text": "Woolly hair nevus is a mosaic disorder characterized by unruly, tightly curled hair in a circumscribed area of the scalp. This condition may be associated with epidermal nevi. We describe an 11-year-old boy who initially presented with multiple patches of woolly hair and with epidermal nevi on his left cheek and back. He had no nail, teeth, eye, or cardiac abnormalities. Analysis of plucked hairs from patches of woolly hair showed twisting of the hair shaft and an abnormal hair cuticle. Histopathology of a woolly hair patch showed diffuse hair follicle miniaturization with increased vellus hairs."} {"_id": "32595517f623429cd323e6552a063b99ddb78766", "title": "On Using Twitter to Monitor Political Sentiment and Predict Election Results", "text": "The body of content available on Twitter undoubtedly contains a diverse range of political insight and commentary. But, to what extent is this representative of an electorate? Can we model political sentiment effectively enough to capture the voting intentions of a nation during an election campaign? We use the recent Irish General Election as a case study for investigating the potential to model political sentiment through mining of social media. Our approach combines sentiment analysis using supervised learning and volume-based measures. We evaluate against the conventional election polls and the final election result. We find that social analytics using both volume-based measures and sentiment analysis are predictive and we make a number of observations related to the task of monitoring public sentiment during an election campaign, including examining a variety of sample sizes, time periods as well as methods for qualitatively exploring the underlying content."} {"_id": "598552f19aa38e7d7921e074885aba9d18d22aa2", "title": "Robust Instance Recognition in Presence of Occlusion and Clutter", "text": "We present a robust learning based instance recognition framework from single view point clouds. Our framework is able to handle real-world instance recognition challenges, i.e., clutter, similar looking distractors and occlusion. Recent algorithms have separately tried to address the problem of clutter [9] and occlusion [16] but fail when these challenges are combined. In comparison we handle all challenges within a single framework. 
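The election-monitoring abstract above combines supervised sentiment analysis with volume-based measures. The simplest volume-based predictor assigns each party a vote share proportional to its (optionally sentiment-weighted) mention volume; the sketch below uses invented counts and is a generic reconstruction, not the paper's exact measure.

```python
# Volume-based vote-share estimates from (invented) Twitter mention counts.
mentions = {"PartyA": 42_000, "PartyB": 31_000, "PartyC": 9_500}
positive = {"PartyA": 0.55, "PartyB": 0.48, "PartyC": 0.61}  # toy sentiment rates

def shares(weights):
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

volume_only = shares(mentions)
weighted = shares({p: mentions[p] * positive[p] for p in mentions})

for party in mentions:
    print(f"{party}: volume {volume_only[party]:.1%}, "
          f"sentiment-weighted {weighted[party]:.1%}")
```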
Our framework uses a soft label Random Forest [5] to learn discriminative shape features of an object and uses them to classify both its location and pose. We propose a novel iterative training scheme for forests which maximizes the margin between classes to improve recognition accuracy, as compared to a conventional training procedure. The learnt forest outperforms template matching and DPM [7] in the presence of similar-looking distractors. Using occlusion information, computed from the depth data, the forest learns to emphasize the shape features from the visible regions, thus making it robust to occlusion. We benchmark our system against state-of-the-art recognition systems [9, 7] in challenging scenes drawn from the largest publicly available dataset. To complement the lack of occlusion tests in this dataset, we introduce our Desk3D dataset and demonstrate that our algorithm outperforms other methods in all settings."} {"_id": "2059e79de003a1a177405cee9d8cdb8b555d91e8", "title": "Indoor Localization Using Camera Phones", "text": "Indoor localization has long been a goal of pervasive computing research. In this paper, we explore the possibility of determining a user's location based on the camera images received from a smart phone. In our system, the smart phone is worn by the user as a pendant and images are periodically captured and transmitted over GPRS to a web server. The web server returns the location of the user by comparing the received images with images stored in a database. We tested our system inside the Computer Science department building. Preliminary results show that the user's location can be determined correctly with more than 80% probability of success. As opposed to earlier solutions for indoor localization, this approach does not have any infrastructure requirements. The only cost is that of building an image database."} {"_id": "b8c3586b0db6ab4a520e038fd04379944a93e328", "title": "Practical Processing of Mobile Sensor Data for Continual Deep Learning Predictions", "text": "We present a practical approach for processing mobile sensor time series data for continual deep learning predictions. The approach comprises data cleaning, normalization, capping, time-based compression, and finally classification with a recurrent neural network. We demonstrate the effectiveness of the approach in a case study with 279 participants. On the basis of sparse sensor events, the network continually predicts whether the participants would attend to a notification within 10 minutes. Compared to a random baseline, the classifier achieves a 40% performance increase (AUC of 0.702) on a withheld test set. This approach makes it possible to forgo resource-intensive, domain-specific, error-prone feature engineering, which may drastically increase the applicability of machine learning to mobile phone sensor data."} {"_id": "117a50fbdfd473e43e550c6103733e6cb4aecb4c", "title": "Maximum margin planning", "text": "Imitation learning of sequential, goal-directed behavior by standard supervised techniques is often difficult. We frame learning such behaviors as a maximum margin structured prediction problem over a space of policies. In this approach, we learn mappings from features to cost so an optimal policy in an MDP with these costs mimics the expert's behavior. Further, we demonstrate a simple, provably efficient approach to structured maximum margin learning, based on the subgradient method, that leverages existing fast algorithms for inference. 
Although the technique is general, it is particularly relevant where A* and dynamic programming approaches make learning policies tractable in problems beyond the limitations of a QP formulation. We demonstrate our approach applied to route planning for outdoor mobile robots, where the behavior a designer wishes a planner to execute is often clear, while specifying cost functions that engender this behavior is a much more difficult task."} {"_id": "11b6bdfe36c48b11367b27187da11d95892f0361", "title": "Maximum Entropy Inverse Reinforcement Learning", "text": "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories."} {"_id": "2532d0567c8334e4cadf282a73ffe399c1c32476", "title": "Learning Agents for Uncertain Environments (Extended Abstract)", "text": "This talk proposes a very simple \u201cbaseline architecture\u201d for a learning agent that can handle stochastic, partially observable environments. The architecture uses reinforcement learning together with a method for representing temporal processes as graphical models. I will discuss methods for learning the parameters and structure of such representations from sensory inputs, and for computing posterior probabilities. Some open problems remain before we can try out the complete agent; more arise when we consider scaling up. A second theme of the talk will be whether reinforcement learning can provide a good model of animal and human learning. To answer this question, we must do inverse reinforcement learning: given the observed behaviour, what reward signal, if any, is being optimized? This seems to be a very interesting problem for the COLT, UAI, and ML communities, and has been addressed in econometrics under the heading of structural estimation of Markov decision processes. 1 Learning in uncertain environments AI is about the construction of intelligent agents, i.e., systems that perceive and act effectively (according to some performance measure) in an environment. I have argued elsewhere (Russell and Norvig, 1995) that most AI research has focused on environments that are static, deterministic, discrete, and fully observable. What is to be done when, as in the real world, the environment is dynamic, stochastic, continuous, and partially observable?
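The maximum-entropy formulation above has a particularly clean gradient: the demonstrations' empirical feature counts minus the feature counts expected under the distribution the current reward induces. A hedged sketch, with the forward pass (`expected_feature_counts`, usually soft value iteration) left as an assumed callback:

```python
import numpy as np

# Hedged sketch of one gradient step of maximum-entropy IRL. The reward is
# linear in state features; `expected_feature_counts(reward)` is an assumed
# callback running the soft forward pass under the current reward.
def maxent_irl_step(theta, demo_feature_counts, expected_feature_counts,
                    features, lr=0.05):
    reward = features @ theta                     # per-state reward
    model_counts = expected_feature_counts(reward)
    grad = demo_feature_counts - model_counts     # gradient of the log-likelihood
    return theta + lr * grad
```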
In recent years, reinforcement learning (also called neurodynamic programming) has made rapid progress as an approach for building agents automatically (Sutton, 1988; Kaelbling et al., 1996; Bertsekas & Tsitsiklis, 1996). The basic idea is that the performance measure is made available to the agent in the form of a reward function specifying the reward for each state that the agent passes through. The performance measure is then the sum of the rewards obtained. For example, when a bumble bee forages, the reward function at each time step might be some combination of the distance flown (weighted negatively) and the nectar ingested. Reinforcement learning (RL) methods are essentially online algorithms for solving Markov decision processes (MDPs). An MDP is defined by the reward function and a model, that is, the state transition probabilities conditioned on each possible action. RL algorithms can be model-based, where the agent learns a model, or model-free, e.g., Q-learning (Watkins, 1989), which learns just a function Q(s, a) specifying the long-term value of taking action a in state s and acting optimally thereafter. Despite their successes, RL methods have been restricted largely to fully observable MDPs, in which the sensory input at each state is sufficient to identify the state. Obviously, in the real world, we must often deal with partially observable MDPs (POMDPs). Astrom (1965) proved that optimal decisions in POMDPs depend on the belief state b at each point in time, i.e., the posterior probability distribution over all possible actual states, given all evidence to date. The functions V and Q then become functions of b instead of s. Parr and Russell (1995) describe a very simple POMDP RL algorithm using an explicit representation of b as a vector of probabilities, and McCallum (1993) shows a way to approximate the belief state using recent percept sequences. Neither approach is likely to scale up to situations with large numbers of state variables and long-term temporal dependencies. What is needed is a way of representing the model compactly and updating the belief state efficiently given the model and each new observation. Dynamic Bayesian networks (Dean & Kanazawa, 1989) seem to have some of the required properties; in particular, they have significant advantages over other approaches such as Kalman filters and hidden Markov models. Our baseline architecture, shown in Figure 1, uses DBNs to represent and update the belief state as new sensor information arrives. Given a representation for b, the reward signal is used to learn a Q-function represented by some \u201cblack-box\u201d function approximator such as a neural network. Provided we can handle hybrid (dis-"} {"_id": "6f20506ce955b7f82f587a14301213c08e79463b", "title": "Algorithms for Inverse Reinforcement Learning", "text": ""} {"_id": "77024583e21d0cb7591900795f43f1a42dd6acf8", "title": "Learning to search: Functional gradient techniques for imitation learning", "text": "Programming robot behavior remains a challenging task. While it is often easy to abstractly define or even demonstrate a desired behavior, designing a controller that embodies the same behavior is difficult, time consuming, and ultimately expensive.
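The Q-learning description above corresponds to a few lines of tabular code. A minimal sketch, with the environment interface (`env.reset`, `env.step`) assumed rather than taken from the paper:

```python
import random

# Tabular Q-learning sketch: learn Q(s, a), the long-term value of taking
# action a in state s and acting optimally thereafter. `env.reset()` and
# `env.step(a) -> (next_state, reward, done)` are assumed interfaces.
def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    Q = {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:
                a = random.choice(actions)                          # explore
            else:
                a = max(actions, key=lambda x: Q.get((s, x), 0.0))  # exploit
            s2, r, done = env.step(a)
            best_next = max(Q.get((s2, x), 0.0) for x in actions)
            target = r + gamma * best_next * (not done)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            s = s2
    return Q
```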
The machine learning paradigm offers the promise of enabling \u201cprogramming by demonstration\u201d for developing high-performance robotic systems. Unfortunately, many \u201cbehavioral cloning\u201d (Bain & Sammut, 1995; Pomerleau, 1989; LeCun et al., 2006) approaches that utilize classical tools of supervised learning (e.g. decision trees, neural networks, or support vector machines) do not fit the needs of modern robotic systems. These systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in lieu of a supervised learning approach often leads to myopic and poor-quality robot performance. While planning algorithms have shown success in many real-world applications ranging from legged locomotion (Chestnutt et al., 2003) to outdoor unstructured navigation (Kelly et al., 2004; Stentz, 2009), such algorithms rely on fully specified cost functions that map sensor readings and environment models to quantifiable costs. Such cost functions are usually manually designed and programmed. Recently, a set of techniques has been developed that explore learning these functions from expert human demonstration. These algorithms apply an inverse optimal control approach to find a cost function for which planned behavior mimics an expert\u2019s demonstration. The work we present extends the Maximum Margin Planning (MMP) (Ratliff et al., 2006a) framework to admit learning of more powerful, non-linear cost functions. These algorithms, known collectively as LEARCH (LEArning to seaRCH), are simpler to implement than most existing methods, more efficient than previous attempts at non-linearization (Ratliff et al., 2006b), more naturally satisfy common constraints on the cost function, and better represent our prior beliefs about the function\u2019s form. We derive and discuss the framework both mathematically and intuitively, and demonstrate practical real-world performance with three applied case-studies including legged locomotion, grasp planning, and autonomous outdoor unstructured navigation. The latter study includes hundreds of kilometers of autonomous traversal through complex natural environments. These case-studies address key challenges in applying the algorithm in practical settings that utilize state-of-the-art planners, and which may be constrained by efficiency requirements and imperfect expert demonstration."} {"_id": "b3ec660a43361ea05b32b1d659210ece24361b6e", "title": "The benefits of hybridization", "text": "This article presents the impact of FC performance and control strategies on the benefits of hybridization of fuel cell/supercapacitor hybrid sources for vehicle applications. The storage device can then complement the main source to produce the compatibility and performance characteristics needed in a load. The studies of two hybrid power systems for vehicle applications, FC/battery and FC/supercapacitor hybrid power sources, are explained. Experimental results with small-scale devices (a PEMFC of 500 W, 40 A, and 13 V; a lead-acid battery module of 33 Ah and 48 V; and a supercapacitor module of 292 F, 500 A, and 30 V) in the laboratory illustrate the performance of the system during motor-drive cycles."} {"_id": "d14ddc01cff72066c6655aa39f3e207e34fb8591", "title": "RF-MEMS Switches for Reconfigurable Integrated Circuits", "text": "This paper deals with a relatively new area of radio-frequency (RF) technology based on microelectromechanical systems (MEMS).
RF MEMS provides a class of new devices and components which display superior high-frequency performance relative to conventional (usually semiconductor) devices, and which enable new system capabilities. In addition, MEMS devices are designed and fabricated by techniques similar to those of very large-scale integration, and can be manufactured by traditional batch-processing methods. In this paper, the only device addressed is the electrostatic microswitch\u2014perhaps the paradigm RF-MEMS device. Through its superior performance characteristics, the microswitch is being developed in a number of existing circuits and systems, including radio front-ends, capacitor banks, and time-delay networks. The superior performance combined with ultra-low-power dissipation and large-scale integration should enable new system functionality as well. Two possibilities addressed here are quasi-optical beam steering and electrically reconfigurable antennas."} {"_id": "0d9ec57e9bfd360c5fa7cee2c2ef149f735b649d", "title": "Modeling Complex Air Traffic Management Systems", "text": "In this work, we propose the use of multi-agent system (MAS) models as the basis for predictive reasoning about various safety conditions and the performance of Air Traffic Management (ATM) Systems. To this end, we describe the engineering of a domain-specific MAS model that provides constructs for creating scenarios related to ATM systems and procedures; we then instantiate the constructs in the ATM model for different scenarios. As a case study we generate a model for a concept that provides the ability to maximize departure throughput at La Guardia airport (LGA) without impacting the flow of the arrival traffic; the model consists of approximately 1.5 hours of real-time flight data. During this time, between 130 and 150 airplanes are managed by four en-route controllers, three TRACON controllers, and one tower controller at LGA who is responsible for departures and arrivals. The planes are landing at approximately 36 to 40 planes an hour. A key contribution of this work is that the model can be extended to various air-traffic management scenarios and can serve as a template for engineering large-scale models in other domains."} {"_id": "8281e847faa0afa1fa28a438f214ca94fddcbe7b", "title": "Union and Difference of Models, 10 years later", "text": "This paper contains a summary of the talk given by the author on the occasion of the MODELS 2013 most influential paper award. The talk discussed the original paper as published in 2003, the research work done by others afterwards and the author\u2019s personal reflection on the award. 1 Version Control of Software and System Models There are two main usage scenarios for design models in software and system development: models as sketches, that serve as a communication aid in informal discussions, and models as formal artifacts, to be analyzed, transformed into other artifacts, maintained and evolved during the whole software and system development process. In this second scenario, models are valuable assets that should be kept in a trusted repository. In a complex development project, these models will be updated often and concurrently by different developers. Therefore, there is a need for a version control system for models with optimistic locking. This is a system to compare, merge and store all versions of all models created within a development project. We can illustrate the use of a version control system for models as follows.
Let us assume that the original model shown at the top of Figure 1 is edited simultaneously by two developers. One developer has decided that the subclass B is no longer necessary in the model. Simultaneously, the other developer has decided that class C should have a subclass D. The problem is to combine the contributions of both developers into a single model. This is the model shown at the bottom of Fig. 1. We presented the basic algorithms to solve this problem in the original paper published in the proceedings of the UML 2003 conference [1]. The proposed solution is based on calculating the final model as the merge of the differences between the original and the edited models. Figure 2 shows an example of the difference of two models, in this case the difference between the models edited by the developers and the original model. The result of the difference is not always a model, in a similar way that the difference between two natural numbers is not always a natural number. An example of this is shown at the bottom of Fig. 2."} {"_id": "e8813fa43d641f70ede35477dd21599b9012cab7", "title": "Stakeholder Identification in the Requirements Engineering Process", "text": "Adequate, timely and effective consultation of relevant stakeholders is of paramount importance in the requirements engineering process. However, the thorny issue of making sure that all relevant stakeholders are consulted has received less attention than other areas which depend on it, such as scenario-based requirements, involving users in development, negotiating between different viewpoints and so on. The literature suggests examples of stakeholders, and categories of stakeholder, but does not provide help in identifying stakeholders for a specific system. In this paper, we discuss current work in stakeholder identification, propose an approach to identifying relevant stakeholders for a specific system, and propose future directions for the work. 1. What is a \u2018stakeholder\u2019? There is a large body of literature in the strategic management area which discusses organisations in terms of a stakeholder model. Stakeholder analysis, it is claimed, can be used to analyse an organisation\u2019s performance and determine its future strategic direction. An oft-quoted definition of \u2018stakeholder\u2019, taken from a key reference in this literature is: \u2018A stakeholder in an organisation is (by definition) any group or individual who can affect or is affected by the achievement of the organisation\u2019s objectives.\u2019 [10] A much broader definition, which has also been attributed to Freeman, is that a stakeholder is \u2018anything influencing or influenced by\u2019 the firm, but it has been claimed that this definition is problematic because it leads to the identification of a very broad set of stakeholders. It is important to distinguish between influencers and stakeholders because while some potential stakeholders may indeed be both stakeholders and influencers, some who have a real stake in an enterprise may have no influence, e.g. a job applicant, while some influencers may have no stake, e.g. the media [8].
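The union/difference merge described in the version-control discussion above can be shown with models flattened to sets of elements. This is an illustrative sketch of the set algebra only, under our own simplifications; identity matching and conflict handling, which the original paper treats, are ignored.

```python
# Sketch of merging two concurrent edits as base plus the two differences.
# Models are flattened to sets of named elements and is-a edges; this shows
# only the set algebra, not the full model-merge machinery of the paper.
def diff(base, edited):
    return edited - base, base - edited            # (additions, deletions)

def merge(base, edited1, edited2):
    add1, del1 = diff(base, edited1)
    add2, del2 = diff(base, edited2)
    return (base - del1 - del2) | add1 | add2

base = {"A", "B", "C", ("B", "is-a", "A"), ("C", "is-a", "A")}
dev1 = base - {"B", ("B", "is-a", "A")}            # developer 1 deletes B
dev2 = base | {"D", ("D", "is-a", "C")}            # developer 2 adds D under C
print(merge(base, dev1, dev2))                     # both edits survive
```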
Information systems (IS) researchers have also taken up the idea of stakeholders: \u2018We define stakeholders as these participants together with any other individuals, groups or organisations whose actions can influence or be influenced by the development and use of the system whether directly or indirectly.\u2019 [22] In software engineering, stakeholders have been defined as: \u2018The people and organisations affected by the application\u2019 [3] \u2018System stakeholders are people or organisations who will be affected by the system and who have a direct or indirect influence on the system requirements\u2019 [16] \u2018Stakeholders are people who have a stake or interest in the project\u2019 [4] A more explicit refinement of this definition is: \u2018\u2026 anyone whose jobs will be altered, who supplies or gains information from it, or whose power or influence within the organisation will increase or decrease.\u2019 [7] They go on to say that \u2018It will frequently be the case that the formal \u2018client\u2019 who orders the system falls very low on the list of those affected. Be very wary of changes which take power, influence or control from some stakeholders without returning something tangible in its place.\u2019 [7] When faced with the practical problem of how to identify the set of stakeholders relevant to a specific project, these definitions are not particularly helpful. The main concern is that, although such definitions are usually accompanied by example groups of stakeholders, they are vague and may lead to consideration of inappropriate or incomplete groups of stakeholders. Categories of stakeholder include end-users, managers and others involved in the organisational processes influenced by the system, engineers responsible for system development and maintenance, customers of the organisation who will use the system to provide a service, external bodies such as regulators, domain experts, and so on. They will each have different goals, and will try to satisfy their own without recourse to others [16]. Cotterell and Hughes [4] suggest that stakeholders might be in one of three categories: internal to the project team; external to the project team, but internal to the organisation; and external to both the project team and the organisation. Newman and Lamming [20] suggest a different division: into those who will use the system directly or indirectly, and those who will be involved in developing the system. This distinction is also taken up with respect to the development of knowledge-based systems [22]. The set of stakeholders in the knowledge acquisition process and the set of stakeholders in the use of a knowledge-based system, are not necessarily identical; they are likely to vary in membership, and for those members in common, the type and level of stake they have is likely to vary. The IS literature suggests a different division again, into \u2018hubs\u2019 or \u2018sponsors\u2019 and \u2018spokes\u2019 or \u2018adaptors\u2019, where the former are those initiating and sustaining the system, while the latter are those participating [21]. Macaulay [19] identifies four categories of stakeholder in any computer system: 1. Those responsible for design and development; 2. Those with a financial interest, responsible for its sale or purchase; 3. Those responsible for introduction and maintenance; 4. Those who have an interest in its use. But again, she offers no guidelines for identifying specific stakeholders for a given system.
So far, we have not distinguished between individuals or groups and roles. As with other situations, the mapping between stakeholders and individuals or groups is not one-to-one. It is therefore more appropriate to think of stakeholders as roles rather than as specific people. In subsequent discussion, we shall use \u2018stakeholder\u2019 to mean \u2018stakeholder role\u2019. 2. Identifying stakeholders There has been little focus on the participants of the requirements engineering (RE) process, for example in terms of how to trace participants in the RE process and how to identify stakeholders [13]. All of the references cited above emphasise the importance of identifying stakeholders, and although they provide examples, or broad guidance for identifying them, none describes a model or a concrete approach for identifying stakeholders for a specific project. This deficiency has been noted in the management literature, and in the IS literature [21], where the approaches have been criticised for either assuming that stakeholders are \u2018obvious\u2019, or for providing broad categories which are too generic to be of practical use. Expert identification for knowledge-based systems development has similarities with stakeholder identification in RE, although here too there has been an assumption that experts are readily identifiable [22]. Pouloudi and Whitley [21] suggest four principles of stakeholder identification, and describe an approach which is based on these principles, and which they used to identify stakeholders in the drug use management domain. Lyytinen and Hirschheim [17] also provide some guidance for stakeholder identification for IS, while acknowledging that \u2018the identification of the set of all stakeholders is far from trivial\u2019. Methods for requirements engineering, e.g. KAOS [5], do not directly support stakeholder identification; it seems to be assumed that it is straightforward. The task of identifying actors for use case development [15] has similarities with stakeholder identification, but is targeted only at a fraction of system stakeholders. Stakeholders are related to each other and interact with each other [22,11,17]. Interactions between them include: exchanging information, products, or instructions, or providing supporting tasks. Information about the stakeholders and the nature of their relationships and interactions needs to be identified and recorded. Dimensions of importance are: relationships between stakeholders, the relationship of each stakeholder to the system, and the priority to be given to each stakeholder\u2019s view. This information is needed to manage, interpret, balance and process stakeholder input. Reconciling different stakeholder views is beyond the scope of this paper, but is considered in [12]. 3. An approach to identifying stakeholders for requirements engineering Here, we propose an approach to discovering all relevant stakeholders of a specific system that we believe is domain-independent, effective and pragmatic. We draw on the literature cited above, and on aspects of ethical decision-making in software systems [18]. Our starting point is a set of stakeholders that we refer to as \u2018baseline\u2019 stakeholders. From these, we can recognise \u2018supplier\u2019 stakeholders and \u2018client\u2019 stakeholders: the former provides information or supporting tasks to the baseline, and the latter processes or inspects the products of the baseline.
Other stakeholders that we call \u2018satellites\u2019 interact with the baseline in a variety of ways. \u2018Interaction\u2019 may involve communicating, reading a set of rules or guidelines, searching for information and so on. Our approach focuses on interactions between stakeholders rather than relationships between the system and the stakeholder, because they are easier to follow. Figure 1 illustrates the main elements of stakeholder identification. Figure 1. The main elements of stakeholder identification addressed by our approach 3.1 Baseline stakeholders We have identified four groups of baseline stakeholder: users, developers, legislators and decision-makers. We have dubbed these \u2018baseline\u2019 because the web of stakeholders and their relationships can be identified from them. The nature of each group is explored below. 3.1.1 Users. The term \u2018user\u2019 has many interpretations. For example, Holtzblatt and Jones [14] include in their definition of \u2018users\u2019 those who manage direct users, those who receive products from the system, those who test the system, those who make the purchasing decision, and those who use competitive products. Eason [9] identifies three categories of user: primary, secondary and tertiary"} {"_id": "e1b9ffd685908be70165ce89b83f541ceaf71895", "title": "A Smart Cloud Robotic System Based on Cloud Computing Services", "text": "In this paper, we present a smart service robotic system based on cloud computing services. The design and implementation of infrastructure, computation components and communication components are introduced. The proposed system can offload the complex computation and storage load of robots to the cloud and provide various services to the robots. The computation components can dynamically allocate resources to the robots. The communication components allow easy access of the robots and provide flexible resource management. Furthermore, we modeled the task-scheduling problem and proposed a max-heaps algorithm. The simulation results demonstrate that the proposed algorithm minimized the overall task costs."} {"_id": "1471511609f7185544703f0e22777a64c6681f38", "title": "The spread of behavior in an online social network experiment.", "text": "How do social networks affect the spread of behavior? A popular hypothesis states that networks with many clustered ties and a high degree of separation will be less effective for behavioral diffusion than networks in which locally redundant ties are rewired to provide shortcuts across the social space. A competing hypothesis argues that when behaviors require social reinforcement, a network with more clustering may be more advantageous, even if the network as a whole has a larger diameter. I investigated the effects of network structure on diffusion by studying the spread of health behavior through artificially structured online communities. Individual adoption was much more likely when participants received social reinforcement from multiple neighbors in the social network. The behavior spread farther and faster across clustered-lattice networks than across corresponding random networks."} {"_id": "55eab388ca816eff2c9d4488ef5024840e444854", "title": "Voltage buffer compensation using Flipped Voltage Follower in a two-stage CMOS op-amp", "text": "In Miller and current buffer compensation techniques, the compensation capacitor often loads the output node.
If a voltage buffer is used in feedback, the compensation capacitor no longer loads the output node. In this paper, we introduce an implementation of a voltage buffer compensation using a Flipped Voltage Follower (FVF) for stabilizing a two-stage CMOS op-amp. The op-amps are implemented in a 180-nm CMOS process with a power supply of 1.8V while operating with a quiescent current of 110\u03bcA. Results indicate that the proposed voltage buffer compensation using FVF improves the Unity Gain Frequency from 5.5MHz to 12.2MHz compared to Miller compensation. Also, the proposed technique enhances the transient response while lowering the compensation capacitance by 47% and 17.7% compared to Miller and common-drain compensation topologies. Utilization of FVF or its variants as a voltage buffer in a feedback compensation network has wide potential applications in the analog design space."} {"_id": "3a4643a0c11a866e6902baa43e3bee9e3a68a3c6", "title": "Dynamic question generation system for web-based testing using particle swarm optimization", "text": "One aim of testing is to identify weaknesses in students\u2019 knowledge. Computerized tests are now one of the most important ways to judge learning, and selecting tailored questions for each learner is a significant part of such tests. Therefore, one current trend is that computerized adaptive tests (CATs) not only assist teachers in estimating the learning performance of students, but also facilitate understanding of problems in their learning process. These tests must effectively and efficiently select questions from a large-scale item bank, and to cope with this problem we propose a dynamic question generation system for web-based tests using the novel approach of particle swarm optimization (PSO). The dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. Furthermore, the proposed approach is able to efficiently generate near-optimal questions that satisfy multiple assessment criteria. With a series of experiments, we compare the efficiency and efficacy of the PSO approach with other approaches. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks."} {"_id": "f8ca1142602ce7b85b24f34c4de7bb2467d2c952", "title": "Deep Embedding for Spatial Role Labeling", "text": "This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description. The model is composed of a deep multilayer perceptron (MLP) stacked on the top of a Long Short Term Memory (LSTM) network, the latter being preceded by an embedding layer. The VIEW is applied to transferring multimodal background knowledge to Spatial Role Labeling (SpRL) algorithms, which recognize spatial relations between objects mentioned in the text. This work also contributes a new method to select complementary features and a fine-tuning method for MLP that improves the F1 measure in classifying the words into spatial roles.
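The PSO selection loop referenced in the question-generation abstract above follows the standard particle update with an inertia weight. A generic sketch; the placeholder fitness stands in for the multi-criteria assessment score, which is our assumption, and a real question-selection encoding would be discrete.

```python
import numpy as np

# Generic PSO sketch with inertia weight. In the question-selection setting,
# each particle would encode a candidate question set and `fitness` would
# score how well it meets the assessment criteria; the sphere fitness below
# is only a placeholder.
def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

print(pso(lambda p: float(np.sum(p ** 2)), dim=5))  # converges toward zero
```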
The VIEW is evaluated on Task 3 of the SemEval-2013 benchmark data set, SpaceEval."} {"_id": "1ad4d974e4732a9e0a3c857eb182275fb296e62d", "title": "To recognize shapes, first learn to generate images.", "text": "The uniformity of the cortical architecture and the ability of functions to move to different areas of cortex following early damage strongly suggest that there is a single basic learning algorithm for extracting underlying structure from richly structured, high-dimensional sensory data. There have been many attempts to design such an algorithm, but until recently they all suffered from serious computational weaknesses. This chapter describes several of the proposed algorithms and shows how they can be combined to produce hybrid methods that work efficiently in networks with many layers and millions of adaptive connections."} {"_id": "0177450c77baa6c48c3f1d3180b6683fd67f23b2", "title": "Design and optimization of thermo-mechanical reliability in wafer level packaging", "text": "In this paper, a variety of wafer level packaging (WLP) structures, including both fan-in and fan-out WLPs, are investigated for solder joint thermo-mechanical reliability performance, from a structural design point of view. The effects of redistribution layer (RDL), bump structural design/material selection, polymer-cored ball application, and PCB design/material selection are studied. The investigation focuses on four different WLP technologies: standard WLP (ball on I/O WLP), ball on polymer WLP without under bump metallurgy (UBM) layer, ball on polymer WLP with UBM layer, and encapsulated copper post WLP. Ball on I/O WLP, in which solder balls are directly attached to the metal pads on silicon wafer, is used as a benchmark for the analysis. 3-D finite element modeling is performed to investigate the effects of WLP structures, UBM layer, polymer film material properties (in ball on polymer WLP), and encapsulated epoxy material properties (in copper post WLP). Both ball on polymer and copper post WLPs have shown great reliability improvement in thermal cycling. For ball on polymer WLP structures, polymer film between silicon and solder balls creates a \u2018cushion\u2019 effect to reduce the stresses in solder joints. Such a cushion effect can be achieved either by an extremely compliant film or a \u2018hard\u2019 film with a large coefficient of thermal expansion. Encapsulated copper post WLP shows the best thermo-mechanical performance among the four WLP structures. Furthermore, for a fan-out WLP, it has been found that the critical solder balls are the outermost solder balls under the die area, where the maximum thermal mismatch takes place. In a fan-out WLP package, chip size, rather than package size, determines the limit of solder joint reliability. This paper also discusses the polymer-cored solder ball applications to enhance thermo-mechanical reliability of solder joints. Finally, both experimental and finite element analysis have demonstrated that making corner balls non-electrically connected can greatly improve the WLP thermomechanical reliability."}
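The Hinton chapter above does not name a single algorithm, but one-step contrastive divergence for a restricted Boltzmann machine is a representative member of the family of generative learning procedures it surveys; this choice, and the bias-free simplification below, are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-step contrastive divergence (CD-1) for a binary RBM, biases omitted
# for brevity. Positive-phase minus negative-phase statistics approximate
# the gradient of the data log-likelihood.
def cd1_update(W, v0, lr=0.1, rng=np.random.default_rng(0)):
    h_prob0 = sigmoid(v0 @ W)                       # infer hidden units
    h0 = (rng.random(h_prob0.shape) < h_prob0)      # sample hidden states
    v1 = sigmoid(h0 @ W.T)                          # reconstruct visibles
    h_prob1 = sigmoid(v1 @ W)                       # re-infer hidden units
    return W + lr * (v0.T @ h_prob0 - v1.T @ h_prob1) / v0.shape[0]
```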
{"_id": "9d5f36b92ac155fccdae6730660ab44d46ad501a", "title": "Introducing Expected Returns into Risk Parity Portfolios : A New Framework for Tactical and Strategic Asset Allocation", "text": "Risk parity is an allocation method used to build diversified portfolios that does not rely on any assumptions of expected returns, thus placing risk management at the heart of the strategy. This explains why risk parity became a popular investment model after the global financial crisis in 2008. However, risk parity has also been criticized because it focuses on managing risk concentration rather than portfolio performance, and is therefore seen as being closer to passive management than active management. In this article, we show how to introduce assumptions of expected returns into risk parity portfolios. To do this, we consider a generalized risk measure that takes into account both the portfolio return and volatility. However, the trade-off between performance and volatility contributions creates some difficulty, and the risk budgeting problem must be clearly defined. After deriving the theoretical properties of such risk budgeting portfolios, we apply this new model to asset allocation. First, we consider long-term investment policy and the determination of strategic asset allocation. We then consider dynamic allocation and show how to build risk parity funds that depend on expected returns."} {"_id": "b6bb01c270536933cb0778e7d14df32e207b979e", "title": "wradlib \u2013 An Open Source Library for Weather Radar Data Processing", "text": "Observation data from weather radars is deemed to have great potential in hydrology and meteorology for forecasting of severe weather or floods in small and urban catchments by providing high resolution measurements of precipitation. With the time series from operational weather radar networks attaining lengths suitable for the long term calibration of hydrological models, the interest in using this data is growing. There are, however, several challenges impeding a widespread use of weather radar data. The first is a multitude of different file formats for data storage and exchange. Although the OPERA project [1] has taken steps towards harmonizing the data exchange inside the European radar network [2], different dialects still exist in addition to a large variety of legacy formats. The second barrier is what we would like to describe as the product dilemma. A great number of potential applications also implies a great number of different and often competing requirements as to the quality of the data. As an example, while one user might be willing to accept more false alarms in a clutter detection filter, e.g. to avoid erroneous data assimilation results, another might want a more conservative product in order not to miss the small scale convection that leads to a hundred year flood in a small head catchment. A single product from a radar operator, even if created with the best methods currently available, will never be able to accommodate all these needs simultaneously. Thus the product will either be a compromise or it will accommodate one user more than the other. Often the processing chain needs to be in a specific order, where a change of a certain processing step is impossible without affecting the results of all following steps. Sometimes some post-processing of the product might be possible, but if not, the user\u2019s only choice is to either take the product as is or leave it.
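The risk budgeting condition behind risk parity above, that each asset's risk contribution matches its budget, admits a simple damped fixed-point sketch. This illustrates the pure risk-parity side only; the article's extension, folding expected returns into a generalized risk measure, would change the objective.

```python
import numpy as np

# Damped fixed-point sketch for risk budgeting: push each asset's risk
# contribution w_i * (cov @ w)_i toward its budget b_i. Pure risk side only;
# the expected-return extension discussed above is not modeled here.
def risk_budget_weights(cov, budgets, iters=500, damp=0.5):
    w = np.ones(len(budgets)) / len(budgets)
    for _ in range(iters):
        marginal = cov @ w                          # marginal risk per asset
        target = budgets / marginal
        target /= target.sum()
        w = damp * w + (1 - damp) * target          # damped update
    return w

cov = np.array([[0.04, 0.01], [0.01, 0.09]])
w = risk_budget_weights(cov, np.array([0.5, 0.5]))
print(w, w * (cov @ w))                             # weights, risk contributions
```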
If a user should decide that he would take a raw radar product and try to do customized corrections, he is faced with basically reinventing the wheel, writing routines for data I/O, error corrections, georeferencing and visualization, trying to find relevant literature and to extract algorithms from publications, which, often enough, do not offer all implementation details. This takes a lot of time and effort, which could be used much more effectively if standard algorithms were available in a well documented and easy to use manner. wradlib is intended to provide this collection of routines and algorithms in order to facilitate and foster the use of weather radar data in as many disciplines as possible including research, application development and teaching."} {"_id": "372d6dcd30f63a1268067f7db6de50c5a49bcf3c", "title": "A survey of hardware Trojan threat and defense", "text": "Hardware Trojans (HTs) can be implanted in security-weak parts of a chip with various means to steal the internal sensitive data or modify original functionality, which may lead to huge economic losses and great harm to society. Therefore, it is very important to analyze the specific HT threats existing in the whole life cycle of integrated circuits (ICs), and to provide protection against hardware Trojans. In this paper, we elaborate an IC market model to illustrate the potential HT threats faced by the parties involved in the model. Then we categorize the recent research advances in the countermeasures against HT attacks. Finally, the challenges and prospects for HT defense are illuminated."} {"_id": "596bcab1b62bf26d5f172154302f28d705020a1d", "title": "An overview of Fog computing and its security issues", "text": "Fog computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage and application services to end users. In this article, we elaborate the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Distinguished from other reviewing work of Fog computing, this paper further discloses the security and privacy issues according to current Fog computing paradigm. As an example, we study a typical attack, man-in-the-middle attack, for the discussion of system security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on Fog device. In addition, we discuss the authentication and authorization techniques that can be used in Fog computing. An example of authentication techniques is introduced to address the security scenario where the connection between Fog and Cloud is fragile."} {"_id": "4560491820e0ee49736aea9b81d57c3939a69e12", "title": "Investigating the Impact of Data Volume and Domain Similarity on Transfer Learning Applications", "text": "Transfer learning allows practitioners to recognize and apply knowledge learned in previous tasks (source task) to new tasks or new domains (target task), which share some commonality. The two important factors impacting the performance of transfer learning models are: (a) the size of the target dataset, and (b) the similarity in distribution between source and target domains.
Thus far, there has been little investigation into just how important these factors are. In this paper, we investigate the impact of target dataset size and source/target domain similarity on model performance through a series of experiments. We find that more data is always beneficial, and model performance improves linearly with the log of data size, until we are out of data. As source/target domains differ, more data is required and fine-tuning will render better performance than feature extraction. When source/target domains are similar and data size is small, fine-tuning and feature extraction render equivalent performance. Our hope is that by beginning this quantitative investigation on the effect of data volume and domain similarity in transfer learning we might inspire others to explore the significance of data in developing more accurate statistical models."} {"_id": "55065876334df2698da179898d2f1be7501beca1", "title": "Determining the effectiveness of prompts for self-regulated learning in problem-solving scenarios", "text": "Cognitive scientists have studied internal cognitive structures, processes, and systems for decades in order to understand how they function in human learning. In order to solve challenging tasks in problem situations, learners not only have to perform cognitive activities, e.g., activating existing cognitive structures or organizing new information, they also have to set specific goals, plan their activities, monitor their performance during the problem-solving process, and evaluate the efficiency of their actions. This paper reports an experimental study with 98 participants where effective instructional interventions for self-regulated learning within problem-solving processes are investigated. Furthermore, an automated assessment and analysis methodology for determining the quality of learning outcomes is introduced. The results indicate that generic prompts are an important aid for developing cognitive structures while solving problems."} {"_id": "91908a5b9e3df25a31112818485bd8a4b988cec3", "title": "Synchronising movements with the sounds of a virtual partner enhances partner likeability", "text": "Previous studies have demonstrated that synchronising movements with other people can influence affiliative behaviour towards them. While research has focused on synchronisation with visually observed movement, synchronisation with a partner who is heard may have similar effects. We replicate findings showing that synchronisation can influence ratings of likeability of a partner, but demonstrate that this is possible with virtual interaction, involving a video of a partner. Participants performed instructed synchrony in time to sounds instead of the observable actions of another person. Results show significantly higher ratings of likeability of a partner after moving at the same time as sounds attributed to that partner, compared with moving in between sounds. Objectively quantified synchrony also correlated with ratings of likeability. Belief that sounds were made by another person was manipulated in Experiment 2, and results demonstrate that when sounds are attributed to a computer, ratings of likeability are not affected by moving in or out of time.
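The two regimes compared in the transfer-learning study above, feature extraction versus fine-tuning, differ only in which parameters stay trainable. A PyTorch-flavoured sketch under assumptions (ResNet-18 backbone, `num_classes` head) that are ours, not the paper's:

```python
import torch
import torchvision

# Sketch of the two transfer regimes: freeze the pretrained backbone and
# train only a new head (feature extraction), or retrain everything
# (fine-tuning). Backbone and head size are illustrative assumptions.
def as_feature_extractor(model, num_classes):
    for p in model.parameters():
        p.requires_grad = False                     # freeze backbone
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    return model                                    # only the head trains

def as_fine_tuned(model, num_classes):
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    return model                                    # all layers keep training

backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
extractor = as_feature_extractor(backbone, num_classes=10)
```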
These findings demonstrate that interaction with sound can be experienced as social interaction in the absence of genuine interpersonal contact, which may help explain why people enjoy engaging with recorded music."} {"_id": "c467ae4d7209bb6bc06f2d7e8f923e1ebf39213d", "title": "KeyGraph: Automatic Indexing by Co-Occurrence Graph Based on Building Construction Metaphor", "text": "In this paper, we present an algorithm for extracting keywords representing the asserted main point in a document, without relying on external devices such as natural language processing tools or a document corpus. Our algorithm KeyGraph is based on the segmentation of a graph, representing the co-occurrence between terms in a document, into clusters. Each cluster corresponds to a concept on which the author's idea is based, and terms ranked highly by a statistic based on each term's relationship to these clusters are selected as keywords. This strategy comes from considering that a document is constructed like a building for expressing new ideas based on traditional concepts. The experimental results show that the extracted terms match the author's point quite accurately, even though KeyGraph does not use each term's average frequency in a corpus, i.e., KeyGraph is a content-sensitive, domain-independent device of indexing."} {"_id": "26108bc64c03282bd5ee230afe25306c91cc5cd6", "title": "Community detection using a neighborhood strength driven Label Propagation Algorithm", "text": "Studies of community structure and evolution in large social networks require a fast and accurate algorithm for community detection. As the size of analyzed communities grows, complexity of the community detection algorithm needs to be kept close to linear. The Label Propagation Algorithm (LPA) has the benefits of nearly-linear running time and easy implementation, thus it forms a good basis for efficient community detection methods. In this paper, we propose a new update rule and label propagation criterion in LPA to improve both its computational efficiency and the quality of communities that it detects. The speed is optimized by avoiding unnecessary updates performed by the original algorithm. This change reduces significantly (by an order of magnitude for large networks) the number of iterations that the algorithm executes. We also evaluate our generalization of the LPA update rule that takes into account, with varying strength, connections to the neighborhood of a node considering a new label. Experiments on computer generated networks and a wide range of social networks show that our new rule improves the quality of the detected communities compared to those found by the original LPA. The benefit of considering positive neighborhood strength is pronounced especially on real-world networks containing sufficiently large fraction of nodes with high clustering coefficient."} {"_id": "bda5d3785886f9ad9e966fa060aaaaf470436d44", "title": "Fast Supervised Discrete Hashing", "text": "Learning-based hashing algorithms are \u201chot topics\u201d because they can greatly increase the scale at which existing methods operate. In this paper, we propose a new learning-based hashing method called \u201cfast supervised discrete hashing\u201d (FSDH) based on \u201csupervised discrete hashing\u201d (SDH). Regressing the training examples (or hash code) to the corresponding class labels is widely used in ordinary least squares regression.
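The neighborhood-strength idea in the LPA paper above can be sketched as a weighted vote: a neighbor's label counts more when that neighbor shares much of the node's own neighborhood. The strength definition (shared-neighbor fraction) and the mixing weight m below are our assumptions, not the paper's exact rule.

```python
import random
from collections import defaultdict

# Label propagation with a neighborhood-strength bonus: each neighbor votes
# for its label with weight 1 + m * (shared-neighbor fraction). `adj` maps
# each node to a set of neighbors. The strength definition is an assumption.
def lpa_neighborhood(adj, m=0.5, max_iter=100, seed=0):
    rng = random.Random(seed)
    labels = {v: v for v in adj}                    # unique initial labels
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            score = defaultdict(float)
            for u in adj[v]:
                shared = len(adj[v] & adj[u]) / max(len(adj[v]), 1)
                score[labels[u]] += 1.0 + m * shared
            if score:
                best = max(score, key=score.get)
                if best != labels[v]:
                    labels[v], changed = best, True
        if not changed:                             # converged
            break
    return labels
```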
Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash code to accelerate the algorithm. To the best of our knowledge, this strategy has not previously been used for hashing. Traditional SDH decomposes the optimization into three sub-problems, with the most critical sub-problem - discrete optimization for binary hash codes - solved using iterative discrete cyclic coordinate descent (DCC), which is time-consuming. However, FSDH has a closed-form solution and only requires a single, rather than iterative, hash code-solving step, which is highly efficient. Furthermore, FSDH is usually faster than SDH for solving the projection matrix for least squares regression, making FSDH generally faster than SDH. For example, our results show that FSDH is about 12 times faster than SDH when the number of hashing bits is 128 on the CIFAR-10 database, and FSDH is about 151 times faster than FastHash when the number of hashing bits is 64 on the MNIST database. Our experimental results show that FSDH is not only fast, but also outperforms other comparative methods."} {"_id": "c78755a1aa972aa59c307ebc284f9e3cdea00784", "title": "Vascular structures in dermoscopy*", "text": "Dermoscopy is an auxiliary method for visualizing the epidermis and dermis. It is usually used to diagnose melanocytic lesions. In recent years, dermoscopy has increasingly been used to diagnose non-melanocytic lesions. Certain vascular structures, their patterns of arrangement and additional criteria may demonstrate lesion-specific characteristics. In this review, vascular structures and their arrangements are discussed separately in the light of conflicting views and an overview of recent literature."} {"_id": "65541ef6e41cd870eb2d8601c401ac03ac62aacb", "title": "Audio-visual emotion recognition : A dynamic , multimodal approach", "text": "Designing systems able to interact with students in a natural manner is a complex and far from solved problem. A key aspect of natural interaction is the ability to understand and appropriately respond to human emotions. This paper details our response to the continuous Audio/Visual Emotion Challenge (AVEC'12) whose goal is to predict four affective signals describing human emotions. The proposed method uses Fourier spectra to extract multi-scale dynamic descriptions of signals characterizing face appearance, head movements and voice. We perform a kernel regression with very few representative samples selected via a supervised weighted-distance-based clustering, which leads to a high generalization power. We also propose a particularly fast regressor-level fusion framework to merge systems based on different modalities. Experiments have proven the efficiency of each key point of the proposed method and our results on challenge data were the highest among 10 international research teams."} {"_id": "1c56a787cbf99f55b407bb1992d60fdfaf4a69b7", "title": "Lane detection using Fourier-based line detector", "text": "In this paper a new approach to the lane marker detection problem is introduced as a significant complement for a semi/fully autonomous driver assistance system. The method incorporates advance line detection using multilayer fractional Fourier transform (MLFRFT) and the state-of-the-art advance lane detector (ALD).
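FSDH's key move above, regressing class labels to hash codes, turns the hash-code sub-problem into a single sign operation instead of DCC iterations. The sketch below is a heavily simplified reading; the exact objective terms and the regression matrix G are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

# Heavily simplified sketch of a closed-form hash-code step in the spirit
# of FSDH: codes come from one sign() of a label-to-code regression plus the
# feature embedding, with no DCC iterations. G and nu are assumed terms.
def hash_codes(Y, FX, G, nu=1.0):
    """Y: one-hot labels (n x c); FX: embedded features (n x L); G: c x L."""
    return np.sign(Y @ G + nu * FX)
```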
Experimental results have shown a considerable reduction in computational complexity."} {"_id": "003715e5bda2dfd2373c937ded390e469e8d84b1", "title": "Directed diffusion: a scalable and robust communication paradigm for sensor networks", "text": "Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network."} {"_id": "16e6938256f8fd82de59aac2257805f692278f03", "title": "Adaptive Protocols for Information Dissemination in Wireless Sensor Networks", "text": "In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminates information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called metadata. They use metadata negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of two specific SPIN protocols, comparing them to other possible approaches and a theoretically optimal protocol. We find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches. We also find that, in terms of dissemination rate and energy usage, the SPIN protocols perform close to the theoretical optimum."} {"_id": "006df3db364f2a6d7cc23f46d22cc63081dd70db", "title": "Dynamic source routing in ad hoc wireless networks", "text": "An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host\u2019s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts.
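SPIN's metadata negotiation above is a three-message exchange (ADV, REQ, DATA) that suppresses redundant transmissions. A toy sketch; energy accounting and the adaptive variants are omitted.

```python
# Toy sketch of SPIN's ADV/REQ/DATA negotiation: a node advertises metadata
# and neighbors request only data they have not yet stored, so redundant
# transmissions are suppressed. Energy bookkeeping is omitted.
class SpinNode:
    def __init__(self, name):
        self.name, self.store = name, {}            # metadata -> data

    def advertise(self, meta, neighbors):           # ADV
        for n in neighbors:
            n.on_adv(meta, self)

    def on_adv(self, meta, sender):                 # REQ only if new
        if meta not in self.store:
            sender.on_req(meta, self)

    def on_req(self, meta, requester):              # DATA
        requester.on_data(meta, self.store[meta])

    def on_data(self, meta, data):
        self.store[meta] = data

a, b = SpinNode("a"), SpinNode("b")
a.store["temp@sensor1"] = 21.5
a.advertise("temp@sensor1", [b])
print(b.store)                                      # {'temp@sensor1': 21.5}
```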
In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal."} {"_id": "49ed15db181c74c7067ec01800fb5392411c868c", "title": "Epidemic Algorithms for Replicated Database Maintenance", "text": "When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency. An earlier version of this paper appeared in the Proceedings of the Sixth Annual ACM Symposium on Principles of Distributed Computing, Vancouver, August 1987, pages 1-12."} {"_id": "53b85e4066944b1753aae8e3418028a67d9372e1", "title": "The Chemical Basis of Morphogenesis", "text": "The paper discussed is by Alan Turing. It was published in 1952 and presents an idea of how periodic patterns could be formed in nature. Looking at periodic structures \u2013 like the stripes on tigers, the dots on leopards or the whirly leaves on woodruff \u2013 it is hard to imagine those patterns are formed by pure chance. On the other hand, thinking of the unbelievable multitude of possible realizations, the patterns cannot all be exactly encoded in the genes. The paper \u201cThe Chemical Basis of Morphogenesis\u201d proposes a possible mechanism due to an interaction of two \u201cmorphogens\u201d which react and diffuse through the tissue. Fulfilling some constraints regarding the diffusibilities and the behaviour of the reactions, this mechanism \u2013 called the Turing mechanism \u2013 can lead to a pattern of concentrations defining the structure we see."} {"_id": "6b60dd4cac27ac7b0f28012322146967c2a388ca", "title": "An Overview of 3D Software Visualization", "text": "Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects such as visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others.
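The two-morphogen mechanism summarized above is easy to reproduce numerically. The sketch below uses Gray-Scott kinetics, our choice of a concrete reaction term; Turing's paper gives the general linear analysis, not this specific system. With unequal diffusivities, a perturbed uniform state develops stationary spots or stripes.

```python
import numpy as np

# Two-morphogen reaction-diffusion sketch (Gray-Scott kinetics, our choice;
# the paper's analysis is more general). Unequal diffusivities Du > Dv let
# a small perturbation of the uniform state grow into stationary patterns.
def turing_pattern(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    U, V = np.ones((n, n)), np.zeros((n, n))
    c = n // 2
    U[c-5:c+5, c-5:c+5], V[c-5:c+5, c-5:c+5] = 0.5, 0.25   # perturbation
    lap = lambda Z: (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                     np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    for _ in range(steps):
        uvv = U * V * V
        U += Du * lap(U) - uvv + F * (1 - U)
        V += Dv * lap(V) + uvv - (F + k) * V
    return V                                        # V shows the spot pattern
```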
Finally, we conclude by identifying future research directions."} {"_id": "f49b78324e01d783a66b3b777965b08a7b904a74", "title": "Ultra-wideband, short-pulse ground-penetrating radar: simulation and measurement", "text": "Ultra-wideband (UWB), short-pulse (SP) radar is investigated theoretically and experimentally for the detection and identification of targets buried in and placed atop soil. The calculations are performed using a rigorous, three-dimensional (3-D) Method of Moments algorithm for perfectly conducting bodies of revolution. Particular targets investigated theoretically include anti-personnel mines, anti-tank mines, and a 55-gallon drum, for which we model the time-domain scattered fields and the buried-target late-time resonant frequencies. With regard to the latter, the computed resonant frequencies are utilized to assess the feasibility of resonance-based buried-target identification for this class of targets. The measurements are performed using a novel UWB, SP synthetic aperture radar (SAR) implemented on a mobile boom. Experimental and theoretical results are compared."} {"_id": "6714e4673b514ad9698d4949db763c66ad681d3d", "title": "Negotiation as a Metaphor for Distributed Problem Solving", "text": "We describe the concept of distributed problem solving and define it as the cooperative solution of problems by a decentralized and loosely coupled collection of problem solvers. This approach to problem solving offers the promise of increased performance and provides a useful medium for exploring and developing new problem-solving techniques. We present a framework called the contract net that specifies communication and control in a distributed problem solver. Task distribution is viewed as an interactive process, a discussion carried on between a node with a task to be executed and a group of nodes that may be able to execute the task. We describe the kinds of information that must be passed between nodes during the discussion in order to obtain effective problem-solving behavior. This discussion is the origin of the negotiation metaphor: Task distribution is viewed as a form of contract negotiation. We emphasize that protocols for distributed problem solving should help determine the content of the information transmitted, rather than simply provide a means of sending bits from one node to another. The use of the contract net framework is demonstrated in the solution of a simulated problem in area surveillance, of the sort encountered in ship or air traffic control. We discuss the mode of operation of a distributed sensing system, a network of nodes extending throughout a relatively large geographic area, whose primary aim is the formation of a dynamic map of traffic in the area. From the results of this preliminary study we abstract features of the framework applicable to problem solving in general, examining in particular transfer of control. Comparisons with PLANNER, CONNIVER, HEARSAY-II, and PUP6 are used to demonstrate that negotiation--the two-way transfer of information--is a natural extension to the transfer of control mechanisms used in earlier problem-"} {"_id": "491c6e5b40ecef186c232290f6ce5ced2fff1409", "title": "An application of analytic hierarchy process (AHP) to investment portfolio selection in the banking sector of the Nigerian capital market", "text": "The importance of investment to the individual, a nation and the world economy cannot be overemphasized.
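The contract-net abstract above frames task distribution as an announce-bid-award negotiation between a manager node and potential contractors. A minimal single-round sketch, assuming hypothetical node and task structures (the paper's protocol additionally covers task decomposition, eligibility specifications and result reporting):

```python
# Toy contract-net round: a manager announces a task, idle capable nodes bid,
# and the task is awarded to the best bidder. All field names are illustrative.

def contract_net_round(task, nodes):
    # 1. Task announcement: broadcast the task description to all nodes.
    bids = []
    for node in nodes:
        # 2. Bidding: each eligible node rates its own suitability.
        if node["idle"] and task["skill"] in node["skills"]:
            bids.append((node["rating"][task["skill"]], node))
    if not bids:
        return None
    # 3. Award: the manager selects the highest-rated bidder as contractor.
    _, winner = max(bids, key=lambda b: b[0])
    winner["idle"] = False
    return winner["name"]

nodes = [
    {"name": "n1", "idle": True,  "skills": {"sense"}, "rating": {"sense": 0.7}},
    {"name": "n2", "idle": True,  "skills": {"sense"}, "rating": {"sense": 0.9}},
    {"name": "n3", "idle": False, "skills": {"sense"}, "rating": {"sense": 1.0}},
]
print(contract_net_round({"skill": "sense"}, nodes))  # -> n2 (n3 is busy)
```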
Investment involves the sacrifice of immediate consumption to achieve greater consumption in the future. The Nigerian banking sector has achieved tremendous success in the recent past. All the banks that approached the capital market through public offers and rights issues to raise their capital base recorded huge success. Investors in bank stocks have enjoyed high returns on investment, despite the slow growth in the nation\u2019s economy. However, the recent financial crisis that started in America, which has caused economic meltdown in many nations of the world and a sudden fall in share prices, has brought about a higher risk than envisaged for investors, particularly those investing in bank stocks. This study underlines the importance of different criteria, factors and alternatives that are essential to successful investment decisions by applying the Analytic Hierarchy Process (AHP) to the selection of investment in bank stocks in the circumstances of financial crisis. The study provides recommendations on strategic investment decision options."} {"_id": "e918ab254c2cfd832dec13b5a2d98b7e0cc905d5", "title": "Suggestibility of the child witness: a historical review and synthesis.", "text": "The field of children's testimony is in turmoil, but a resolution to seemingly intractable debates now appears attainable. In this review, we place the current disagreement in historical context and describe psychological and legal views of child witnesses held by scholars since the turn of the 20th century. Although there has been consistent interest in children's suggestibility over the past century, the past 15 years have been the most active in terms of the number of published studies and novel theorizing about the causal mechanisms that underpin the observed findings. A synthesis of this research posits three \"families\" of factors--cognitive, social, and biological--that must be considered if one is to understand seemingly contradictory interpretations of the findings. We conclude that there are reliable age differences in suggestibility but that even very young children are capable of recalling much that is forensically relevant. Findings are discussed in terms of the role of expert witnesses."} {"_id": "cb9155bf684f9146da4605f07fed9224fd8b146b", "title": "The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?", "text": "A successful grasp requires careful balancing of the contact forces. Deducing whether a particular grasp will be successful from indirect measurements, such as vision, is therefore quite challenging, and direct sensing of contacts through touch sensing provides an appealing avenue toward more successful and consistent robotic grasping. However, in order to fully evaluate the value of touch sensing for grasp outcome prediction, we must understand how touch sensing can influence outcome prediction accuracy when combined with other modalities. Doing so using conventional model-based techniques is exceptionally difficult. In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch. To that end, we collected more than 9,000 grasping trials using a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger, and evaluated visuo-tactile deep neural network models to directly predict grasp outcomes from either modality individually, and from both modalities together.
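The AHP abstract above rests on deriving priority weights for criteria and alternatives from pairwise comparison judgements. A standard sketch of that core computation (principal eigenvector of the comparison matrix, plus Saaty's consistency ratio); the three criteria and the matrix entries are invented for illustration:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector = principal eigenvector of the pairwise comparison
    matrix, normalized to sum to 1; CR measures judgement consistency."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    return w, ci / ri

# Hypothetical criteria: return vs. risk vs. liquidity, on Saaty's 1-9 scale.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), f"CR={cr:.3f}")  # CR < 0.1 => acceptably consistent
```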
Our experimental results indicate that incorporating tactile readings substantially improves grasping performance."} {"_id": "08567f9b834bdc05a0cf6164ec4a6ab9c985429e", "title": "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations", "text": "In recent years, the interest in unsupervised learning of disentangled representations has significantly increased. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learning algorithms. A large number of unsupervised learning approaches based on auto-encoding and quantitative evaluation metrics of disentanglement have been proposed; yet, the efficacy of the proposed approaches and utility of proposed notions of disentanglement has not been challenged in prior work. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12 000 models covering the six most prominent methods, and evaluate them across six disentanglement metrics in a reproducible large-scale experimental study on seven different data sets. On the positive side, we observe that different methods successfully enforce properties \u201cencouraged\u201d by the corresponding losses. On the negative side, we observe that in our study (1) \u201cgood\u201d hyperparameters seemingly cannot be identified without access to ground-truth labels, (2) good hyperparameters neither transfer across data sets nor across disentanglement metrics, and (3) that increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. These results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets."} {"_id": "74a5e76a6e45dfecfb4f4542a6a8cb1c6be3c2cb", "title": "Contraction of the abdominal muscles associated with movement of the lower limb.", "text": "BACKGROUND AND PURPOSE\nActivity of the trunk muscles is essential for maintaining stability of the lumbar spine because of the unstable structure of that portion of the spine. A model involving evaluation of the response of the lumbar multifidus and abdominal muscles to leg movement was developed to evaluate this function.\n\n\nSUBJECTS\nTo examine this function in healthy persons, 9 male and 6 female subjects (mean age = 20.6 years, SD = 2.3) with no history of low back pain were studied.\n\n\nMETHODS\nFine-wire and surface electromyography electrodes were used to record the activity of selected trunk muscles and the prime movers for hip flexion, abduction, and extension during hip movements in each of those directions.\n\n\nRESULTS\nTrunk muscle activity occurring prior to activity of the prime mover of the limb was associated with hip movement in each direction. The transversus abdominis (TrA) muscle was invariably the first muscle that was active.
Although reaction time for the TrA and oblique abdominal muscles was consistent across movement directions, reaction time for the rectus abdominis and multifidus muscles varied with the direction of limb movement.\n\n\nCONCLUSION AND DISCUSSION\nResults suggest that the central nervous system deals with stabilization of the spine by contraction of the abdominal and multifidus muscles in anticipation of reactive forces produced by limb movement. The TrA and oblique abdominal muscles appear to contribute to a function not related to the direction of these forces."} {"_id": "4c34433231e209d0b444cd38dad788de2a973f45", "title": "Low-degree Graph Partitioning via Local Search with Applications to Constraint Satisfaction, Max Cut, and Coloring", "text": "We present practical algorithms for constructing partitions of graphs into a fixed number of vertex-disjoint subgraphs that satisfy particular degree constraints. We use this in particular to find k-cuts of graphs of maximum degree \u2206 that cut at least a ((k\u22121)/k)(1 + 1/(2\u2206+k\u22121)) fraction of the edges, improving previously known bounds. The partitions also apply to constraint networks, for which we give a tight analysis of natural local search heuristics for the maximum constraint satisfaction problem. These partitions also imply efficient approximations for several problems on weighted bounded-degree graphs. In particular, we improve the best performance ratio for the weighted independent set problem to 3/(\u2206+2), and obtain an efficient algorithm for coloring 3-colorable graphs with at most (3\u2206+2)/4 colors."} {"_id": "b55aafb62fb0d49ad7e0ed7ab5b936f985e1ac58", "title": "New cardinality estimation algorithms for HyperLogLog sketches", "text": "This paper presents new methods to estimate the cardinalities of data sets recorded by HyperLogLog sketches. A theoretically motivated extension to the original estimator is presented that eliminates the bias for small and large cardinalities. Based on the maximum likelihood principle a second unbiased method is derived together with a robust and efficient numerical algorithm to calculate the estimate. The maximum likelihood approach can also be applied to more than a single HyperLogLog sketch. In particular, it is shown that it gives more precise cardinality estimates for union, intersection, or relative complements of two sets that are both represented by HyperLogLog sketches compared to the conventional technique using the inclusion-exclusion principle. All the new methods are demonstrated and verified by extensive simulations."} {"_id": "977ad29fc117946aa37e4e9b69a7d6cc00cb7388", "title": "Analysis of YouTube\u2019s traffic adaptation to dynamic environments", "text": "The popular Internet service, YouTube, has adopted by default the HyperText Markup Language version 5 (HTML5). With this adoption, YouTube has moved to Dynamic Adaptive Streaming over HTTP (DASH) as its Adaptive BitRate (ABR) video streaming technology. Furthermore, rate adaptation in DASH is solely receiver-driven. This issue motivates this work to make a deep analysis of YouTube\u2019s particular DASH implementation. Firstly, this article provides a state of the art about DASH and adaptive streaming technology, as well as related work on YouTube traffic characterization.
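The HyperLogLog abstract above builds on the original register-based estimator. A bare-bones sketch of that baseline (the raw estimator, with no small- or large-range bias correction, which is exactly the bias the paper's new estimators eliminate); the hash choice and precision are illustrative:

```python
import hashlib

def hll_estimate(items, p=12):
    """Bare-bones HyperLogLog: p bits pick a register, the rest contribute
    the position of the leading 1-bit; the raw estimate is a normalized
    harmonic mean of the register values."""
    m = 1 << p
    regs = [0] * m
    for x in items:
        h = int.from_bytes(hashlib.sha1(str(x).encode()).digest()[:8], "big")
        idx = h >> (64 - p)                       # first p bits -> register
        rest = h & ((1 << (64 - p)) - 1)
        rho = (64 - p) - rest.bit_length() + 1    # leading zeros + 1
        regs[idx] = max(regs[idx], rho)
    alpha = 0.7213 / (1 + 1.079 / m)
    return alpha * m * m / sum(2.0 ** -r for r in regs)

print(round(hll_estimate(range(100_000))))  # close to 100000
```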
Secondly, this paper describes a new methodology and test-bed for traffic characterization and performance measurement of YouTube\u2019s DASH implementation. This methodology and test-bed do not make use of proxies and, moreover, they are able to cope with YouTube traffic redirections. Finally, a set of experimental results is provided, involving a dataset of 310 YouTube videos. The results show a characterization of YouTube\u2019s traffic patterns and a discussion of the allowed download bandwidth, YouTube\u2019s consumed bitrate and the quality of the video. Moreover, the obtained results are cross-validated with the analysis of HTTP requests performed by YouTube\u2019s video player. The outcomes of this article are applicable in the field of Quality of Service (QoS) and Quality of Experience (QoE) management. This is valuable information for Internet Service Providers (ISPs), because QoS management based on assured download bandwidth can be used in order to provide a target end-user\u2019s QoE when the YouTube service is being consumed."} {"_id": "25a26b86f4a2ebca2b154effbaf894aef690c03c", "title": "Analyzing the Effectiveness and Applicability of Co-training", "text": "Recently there has been significant interest in supervised learning algorithms that combine labeled and unlabeled data for text learning tasks. The co-training setting [1] applies to datasets that have a natural separation of their features into two disjoint sets. We demonstrate that when learning from labeled and unlabeled data, algorithms explicitly leveraging a natural independent split of the features outperform algorithms that do not. When a natural split does not exist, co-training algorithms that manufacture a feature split may outperform algorithms not using a split. These results help explain why co-training algorithms are both discriminative in nature and robust to the assumptions of their embedded classifiers."} {"_id": "3499f071b44d06d576e78c938ca7fc41edff510c", "title": "Exact finite-state machine identification from scenarios and temporal properties", "text": "Finite-state models, such as finite-state machines (FSMs), aid software engineering in many ways. They are often used in formal verification and also can serve as visual software models. The latter application is associated with the problems of software synthesis and automatic derivation of software models from specification. Smaller synthesized models are more general and are easier to comprehend, yet the problem of minimum FSM identification has received little attention in previous research. This paper presents four exact methods to tackle the problem of minimum FSM identification from a set of test scenarios and a temporal specification represented in linear temporal logic. The methods are implemented as an open-source tool. Three of them are based on translations of the FSM identification problem to SAT or QSAT problem instances. Accounting for temporal properties is done via counterexample prohibition. Counterexamples are either obtained from previously identified FSMs, or based on bounded model checking. The fourth method uses backtracking. The proposed methods are evaluated on several case studies and on a larger number of randomly generated instances of increasing complexity. The results show that the Iterative SAT-based method is the leader among the proposed methods.
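The co-training abstract above turns on two classifiers teaching each other across a feature split. A compact sketch with a manufactured split on synthetic data; the round count, batch size and classifier choice are arbitrary, and a real run would hold out a test set:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Two "views" = a manufactured split of the feature set, as in the
# abstract's no-natural-split setting.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
views = (X[:, :10], X[:, 10:])
rng = np.random.RandomState(0)
labeled = {i: y[i] for i in rng.choice(len(y), size=40, replace=False)}
pool = [i for i in range(len(y)) if i not in labeled]

clf1, clf2 = GaussianNB(), GaussianNB()
for _ in range(10):                       # co-training rounds
    idx = np.array(sorted(labeled))
    lab = np.array([labeled[i] for i in idx])
    for clf, view in ((clf1, views[0]), (clf2, views[1])):
        clf.fit(view[idx], lab)
    for clf, view in ((clf1, views[0]), (clf2, views[1])):
        if not pool:
            break
        probs = clf.predict_proba(view[pool])
        best = np.argsort(probs.max(axis=1))[-10:]      # most confident
        for j in sorted(best.tolist(), reverse=True):
            i = pool[j]
            labeled[i] = clf.predict(view[[i]])[0]      # pseudo-label
            pool.pop(j)

print(f"view-1 classifier accuracy: {(clf1.predict(views[0]) == y).mean():.3f}")
```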
The methods are also compared with existing inexact approaches, i.e., the ones which do not necessarily identify the minimum FSM, and these comparisons show encouraging results."} {"_id": "79780f551413667a6a69dc130dcf843516cda6aa", "title": "Real-Time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor", "text": "We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments. Existing methods typically fail for hand-object interactions in cluttered scenes imaged from egocentric viewpoints\u2014common for virtual or augmented reality applications. Our approach uses two subsequently applied Convolutional Neural Networks (CNNs) to localize the hand and regress 3D joint locations. Hand localization is achieved by using a CNN to estimate the 2D position of the hand center in the input, even in the presence of clutter and occlusions. The localized hand position, together with the corresponding input depth value, is used to generate a normalized cropped image that is fed into a second CNN to regress relative 3D hand joint locations in real time. For added accuracy, robustness and temporal stability, we refine the pose estimates using a kinematic pose tracking energy. To train the CNNs, we introduce a new photorealistic dataset that uses a merged reality approach to capture and synthesize large amounts of annotated data of natural hand interaction in cluttered scenes. Through quantitative and qualitative evaluation, we show that our method is robust to self-occlusion and occlusions by objects, particularly in moving egocentric perspectives."} {"_id": "3e7c554b435525b50b558a05486148376af04ccb", "title": "Enhanced belief propagation decoding of polar codes through concatenation", "text": "The bit-channels of finite-length polar codes are not fully polarized, and a proportion of such bit-channels are neither completely \u201cnoiseless\u201d nor completely \u201cnoisy\u201d. By using an outer low-density parity-check code for these intermediate channels, we show how the performance of belief propagation (BP) decoding of the overall concatenated polar code can be improved. A simple example reports an improvement in Eb /N0 of 0.3 dB with respect to the conventional BP decoder."} {"_id": "78beead3a05f7e8f2dc812298f813c5bacdc3061", "title": "From \"Bonehead\" to \"cLoNehEAd\": Nicknames, Play and Identity on Internet Relay Chat", "text": ""} {"_id": "2cb7835672029e97a20e006cd8cee918cb351d03", "title": "Same-day genomic and epigenomic diagnosis of brain tumors using real-time nanopore sequencing", "text": "Molecular classification of cancer has entered clinical routine to inform diagnosis, prognosis, and treatment decisions. At the same time, new tumor entities have been identified that cannot be defined histologically. For central nervous system tumors, the current World Health Organization classification explicitly demands molecular testing, e.g., for 1p/19q-codeletion or IDH mutations, to make an integrated histomolecular diagnosis. However, a plethora of sophisticated technologies is currently needed to assess different genomic and epigenomic alterations and turnaround times are in the range of weeks, which makes standardized and widespread implementation difficult and hinders timely decision making. Here, we explored the potential of a pocket-size nanopore sequencing device for multimodal and rapid molecular diagnostics of cancer. 
Low-pass whole genome sequencing was used to simultaneously generate copy number (CN) and methylation profiles from native tumor DNA in the same sequencing run. Single nucleotide variants in IDH1, IDH2, TP53, H3F3A, and the TERT promoter region were identified using deep amplicon sequencing. Nanopore sequencing yielded ~0.1X genome coverage within 6\u00a0h and resulting CN and epigenetic profiles correlated well with matched microarray data. Diagnostically relevant alterations, such as 1p/19q codeletion, and focal amplifications could be recapitulated. Using ad hoc random forests, we could perform supervised pan-cancer classification to distinguish gliomas, medulloblastomas, and brain metastases of different primary sites. Single nucleotide variants in IDH1, IDH2, and H3F3A were identified using deep amplicon sequencing within minutes of sequencing. Detection of TP53 and TERT promoter mutations shows that sequencing of entire genes and GC-rich regions is feasible. Nanopore sequencing allows same-day detection of structural variants, point mutations, and methylation profiling using a single device with negligible capital cost. It outperforms hybridization-based and current sequencing technologies with respect to time to diagnosis and required laboratory equipment and expertise, aiming to make precision medicine possible for every cancer patient, even in resource-restricted settings."} {"_id": "8cb45a5a03d2e8c9cc56030a99b9938cb2981087", "title": "TUT: a statistical model for detecting trends, topics and user interests in social media", "text": "The rapid development of online social media sites is accompanied by the generation of tremendous web contents. Web users are shifting from data consumers to data producers. As a result, topic detection and tracking without taking users' interests into account is not enough. This paper presents a statistical model that can detect interpretable trends and topics from document streams, where each trend (short for trending story) corresponds to a series of continuing events or a storyline. A topic is represented by a cluster of words frequently co-occurred. A trend can contain multiple topics and a topic can be shared by different trends. In addition, by leveraging a Recurrent Chinese Restaurant Process (RCRP), the number of trends in our model can be determined automatically without human intervention, so that our model can better generalize to unseen data. Furthermore, our proposed model incorporates user interest to fully simulate the generation process of web contents, which offers the opportunity for personalized recommendation in online social media. Experiments on three different datasets indicated that our proposed model can capture meaningful topics and trends, monitor rise and fall of detected trends, outperform baseline approach in terms of perplexity on held-out dataset, and improve the result of user participation prediction by leveraging users' interests to different trends."} {"_id": "232ce27451be9cc332969ce20659d28cfc0bbfec", "title": "Synchronous neural oscillations and cognitive processes", "text": "The central problem for cognitive neuroscience is to describe how cognitive processes arise from brain processes. This review summarizes the recent evidence that synchronous neural oscillations reveal much about the origin and nature of cognitive processes such as memory, attention and consciousness. Memory processes are most closely related to theta and gamma rhythms, whereas attention seems closely associated with alpha and gamma rhythms. 
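The nanopore abstract above derives copy-number profiles from low-pass whole genome coverage. A sketch of the underlying computation, which is essentially binning read positions, normalizing by the median bin count, and reporting log2 ratios; the bin size, read counts and synthetic gain region are illustrative, not the paper's pipeline:

```python
import numpy as np

def cn_profile(read_starts, genome_len, bin_size=1_000_000):
    """Copy-number profile from low-pass coverage: bin read starts,
    normalize by the genome-wide median, report log2 ratios
    (0 ~ baseline, ~+0.58 ~ single-copy gain on a diploid background)."""
    bins = np.arange(0, genome_len + bin_size, bin_size)
    counts, _ = np.histogram(read_starts, bins=bins)
    counts = counts.astype(float) + 0.5          # avoid log(0)
    return np.log2(counts / np.median(counts))

# Synthetic example: roughly uniform low-pass coverage of a 100 Mb
# "chromosome" with ~1.5x coverage between 30 and 40 Mb.
rng = np.random.default_rng(0)
reads = rng.uniform(0, 100e6, size=20_000)
gain = rng.uniform(30e6, 40e6, size=1_000)
profile = cn_profile(np.concatenate([reads, gain]), genome_len=int(100e6))
print(np.round(profile[25:45], 2))               # elevated bins 30-39
```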
Conscious awareness may arise from synchronous neural oscillations occurring globally throughout the brain rather than from the locally synchronous oscillations that occur when a sensory area encodes a stimulus. These associations between the dynamics of the brain and cognitive processes indicate progress towards a unified theory of brain and cognition."} {"_id": "64f256dd8b83e18bd6c5692f22ac821b622d820a", "title": "Medicinal strategies in the treatment of obesity", "text": "When prevention fails, medicinal treatment of obesity may become a necessity. Any strategic medicinal development must recognize that obesity is a chronic, stigmatized and costly disease that is increasing in prevalence. Because obesity can rarely be cured, treatment strategies are effective only as long as they are used, and combined therapy may be more effective than monotherapy. For a drug to have significant impact on body weight it must ultimately reduce energy intake, increase energy expenditure, or both. Currently approved drugs for long-term treatment of obesity include sibutramine, which inhibits food intake, and orlistat, which blocks fat digestion."} {"_id": "1d6889c44e11141cc82ef28bba1afe07f3c0a2b4", "title": "Identity Authentication and Capability Based Access Control (IACAC) for the Internet of Things", "text": "In the last few years the Internet of Things (IoT) has seen widespread application and can be found in every field. Authentication and access control are important and critical functionalities in the context of IoT to enable secure communication between devices. Mobility, dynamic network topology and weak physical security of low-power devices in IoT networks are possible sources of security vulnerabilities. It is therefore desirable to make authentication and access control attack-resistant and lightweight in a resource-constrained and distributed IoT environment. This paper presents the Identity Authentication and Capability based Access Control (IACAC) model with protocol evaluation and performance analysis. To protect IoT from man-in-the-middle, replay and denial-of-service (DoS) attacks, the concept of capability for access control is introduced. The novelty of this model is that it presents an integrated approach of authentication and access control for IoT devices. The results of other related studies have also been analyzed to validate and support our findings. Finally, the proposed protocol is evaluated using a security protocol verification tool, and the verification results show that IACAC is secure against the aforementioned attacks. This paper also discusses performance analysis of the protocol in terms of computational time compared to other existing solutions. Furthermore, this paper addresses challenges in IoT, and security attacks are modelled with use cases to give an actual view of IoT networks."} {"_id": "bf9fd38bdf1d19834f13b675a9052feee4abeeb0", "title": "A magnetostatic 2-axis MEMS scanner with I-section rib-reinforcement and slender permanent magnet patterns", "text": "This study demonstrates a 2-axis epitaxial silicon scanner driven by coil-less magnetostatic force using electroplated permanent magnet (CoNiMnP) film.
The present approach has four merits: (1) the process employs a cheap silicon wafer with an epitaxial layer, and the electrochemical etch-stop technique is used to precisely control the thickness of the scanner; (2) the I-section rib-reinforced structure is implemented to provide high stiffness of the mirror plate; (3) the magnetostatic driving force on the scanner is increased by an electroplated permanent magnet film with slender patterns; (4) the size of the packaged scanner is reduced since assembled permanent magnets are not required."} {"_id": "fc10cf04b774b7e1d4e6d6c6c666ae04a07329d9", "title": "Improvements in children's speech recognition performance", "text": "There are several reasons why conventional speech recognition systems modeled on adult data fail to perform satisfactorily on children's speech input. For instance, children's vocal characteristics differ significantly from those of adults. In addition, their choices of vocabulary and sentence construction modalities usually do not conform to adult patterns. We describe comparative studies demonstrating the performance gain realized by adapting to children's acoustic and language model data to construct a children's speech recognition system."} {"_id": "16207729492b760cd48bb4b9dd619a433b2e3d25", "title": "Social Media and Firm Equity Value", "text": "Companies have increasingly advocated social media technologies and platforms to improve and ultimately transform business performance. This study examines whether social media has a predictive relationship with firm equity value, which metric of social media has the strongest relationship, and the dynamics of the relationship. The results derived from the vector autoregressive models suggest that social media-based metrics (blogs and reviews) are significant leading indicators of firm stock market performance. We also find a faster predictive value of social media, i.e., a shorter \u201cwear-in\u201d effect compared with conventional online behavior metrics, such as web traffic and search. Interestingly, conventional digital user metrics (Google search and web traffic), which have been widely adopted to measure online consumer behavior, are found to have significant yet substantially weaker predictive relationships with firm equity value than social media. The results provide new insights for social media managers and firm equity valuations."} {"_id": "a7d47e32ee59f2f104ff6120fa82bae912a87c81", "title": "Byte Rotation Encryption Algorithm through parallel processing and multi-core utilization", "text": "Securing digital data has become a tedious task as technology advances. Existing encryption algorithms such as AES, DES and Blowfish ensure information security but consume a lot of time as the security level increases. In this paper, the Byte Rotation Encryption Algorithm (BREA) is implemented using parallel processing and multi-core utilization. BREA divides data into fixed-size blocks. Each block is processed in parallel using a random-number key, so the set of blocks is executed in parallel by utilizing all the available CPU cores. Finally, from the experimental analysis, it is observed that the proposed BREA algorithm reduces execution time as the number of cores increases."} {"_id": "310b72fbc3d384ca88ca994b33476b8a2be2e27f", "title": "Sentiment Analyzer: Extracting Sentiments about a Given Topic using Natural Language Processing Techniques", "text": "We present Sentiment Analyzer (SA) that extracts sentiment (or opinion) about a subject from online text documents.
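The BREA abstract above describes keyed byte rotation over fixed-size blocks executed across CPU cores. The abstract does not give the exact rotation schedule, so the per-byte schedule below is hypothetical; the point of the sketch is the block split plus multi-core fan-out (toy code, not cryptographically secure):

```python
from concurrent.futures import ProcessPoolExecutor

BLOCK = 16  # fixed-size blocks, as in the abstract


def rot_byte(b, r):
    """Rotate a single byte left by r bits."""
    r %= 8
    return ((b << r) | (b >> (8 - r))) & 0xFF


def encrypt_block(args):
    """Toy keyed rotation: each byte rotated by a key-derived amount."""
    block, key = args
    return bytes(rot_byte(b, key + i) for i, b in enumerate(block))


def encrypt(data, keys):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    with ProcessPoolExecutor() as ex:        # one worker per available core
        return b"".join(ex.map(encrypt_block, zip(blocks, keys)))


if __name__ == "__main__":
    data = b"attack at dawn!!" * 4
    keys = [7, 3, 11, 5]                     # per-block random-number keys
    print(encrypt(data, keys).hex())
```

Decryption would apply the complementary right rotation per block; because blocks are independent, it parallelizes the same way.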
Instead of classifying the sentiment of an entire document about a subject, SA detects all references to the given subject, and determines sentiment in each of the references using natural language processing (NLP) techniques. Our sentiment analysis consists of 1) topic-specific feature term extraction, 2) sentiment extraction, and 3) (subject, sentiment) association by relationship analysis. SA utilizes two linguistic resources for the analysis: the sentiment lexicon and the sentiment pattern database. The performance of the algorithms was verified on online product review articles (\u201cdigital camera\u201d and \u201cmusic\u201d reviews), and more general documents including general webpages and news articles."} {"_id": "59d9160780bf3eac8c621983a36ff332a3497219", "title": "Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis", "text": "Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral."} {"_id": "7c89cbf5d860819c9b5e5217d079dc8aafcba336", "title": "Recognizing subjectivity: a case study in manual tagging", "text": "In this paper, we describe a case study of a sentence-level categorization in which tagging instructions are developed and used by four judges to classify clauses from the Wall Street Journal as either subjective or objective. Agreement among the four judges is analyzed, and, based on that analysis, each clause is given a final classification. To provide empirical support for the classifications, correlations are assessed in the data between the subjective category and a basic semantic class posited by Quirk et al. (1985)."} {"_id": "9141d85998eadb1bca5cca027ae07670cfafb015", "title": "Determining the Sentiment of Opinions", "text": "Identifying sentiments (the affective parts of opinions) is a challenging problem. We present a system that, given a topic, automatically finds the people who hold opinions about that topic and the sentiment of each opinion.
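The sentiment abstracts above separate a word's prior polarity from its contextual polarity in the surrounding phrase. A deliberately tiny lexicon-plus-negation sketch of that distinction; the lexicon and the one-token negation window are invented, and real systems (like the phrase-level work above) learn over far richer features:

```python
# Tiny prior-polarity lexicon with a one-token negation window; purely
# illustrative of flipping prior polarity into contextual polarity.
LEXICON = {"good": 1, "great": 1, "excellent": 1,
           "bad": -1, "poor": -1, "disappointing": -1}
NEGATORS = {"not", "never", "no"}

def sentence_polarity(tokens):
    score = 0
    for i, tok in enumerate(tokens):
        prior = LEXICON.get(tok, 0)
        if prior and i > 0 and tokens[i - 1] in NEGATORS:
            prior = -prior                  # contextual polarity flips
        score += prior
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentence_polarity("the camera is not bad".split()))  # -> positive
```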
The system contains a module for determining word sentiment and another for combining sentiments within a sentence. We experiment with various models of classifying and combining sentiment at word and sentence levels, with promising results."} {"_id": "c2ac213982e189e4ad4c7f60608914a489ec9051", "title": "The Penn Arabic Treebank : Building a Large-Scale Annotated Arabic Corpus", "text": "From our three year experience of developing a large-scale corpus of annotated Arabic text, our paper will address the following: (a) review pertinent Arabic language issues as they relate to methodology choices, (b) explain our choice to use the Penn English Treebank style of guidelines, (requiring the Arabic-speaking annotators to deal with a new grammatical system) rather than doing the annotation in a more traditional Arabic grammar style (requiring NLP researchers to deal with a new system); (c) show several ways in which human annotation is important and automatic analysis difficult, including the handling of orthographic ambiguity by both the morphological analyzer and human annotators; (d) give an illustrative example of the Arabic Treebank methodology, focusing on a particular construction in both morphological analysis and tagging and syntactic analysis and following it in detail through the entire annotation process, and finally, (e) conclude with what has been achieved so far and what remains to be done."} {"_id": "f3221ec7074722e125fe9f0f12b5fee22a431254", "title": "Algorithms for continuous queries: A geometric approach", "text": "Algorithms for Continuous Queries: A Geometric"} {"_id": "52152dac5c7320a4818b48140bfcd396e4e965b7", "title": "Mapping the U.S. Political Blogosphere: Are Conservative Bloggers More Prominent?", "text": "Weblogs are now a key part of online culture, and social scientists are interested in characterising the networks formed by bloggers and measuring their extent and impact in areas such as politics. However, researchers wishing to conduct quantitative social science analysis of the blogging phenomenon are faced with the challenge of using new methods of data collection and analysis largely derived from fields outside of the social sciences, such as the information sciences. This paper presents an overview of one new approach for collecting and analysing weblog data, and illustrates this approach in the context of a preliminary quantitative analysis of online networks formed by a sample of North-American \u201cA-list\u201d political bloggers. There are two aims to this paper. First is to assess (using different data and methods) the conclusion of Adamic and Glance (2005) that there are significant differences in the behaviour of liberal and conservative bloggers, with the latter forming more dense patterns of linkages. We find broad support for this conclusion, and empirically assess the implications of differences in conservative/liberal linking behaviour for the online visibility of different political messages or ideologies. 
The second aim is to highlight the role of web mining and data visualisation in the analysis of weblogs, and the opportunities and challenges inherent in this new field of research."} {"_id": "743c18a2c7abca292b3e6060ecd4e4464cf66bcc", "title": "Semantic Facial Expression Editing using Autoencoded Flow", "text": "High-level manipulation of facial expressions in images \u2014 such as changing a smile to a neutral expression \u2014 is challenging because facial expression changes are highly non-linear, and vary depending on the appearance of the face. We present a fully automatic approach to editing faces that combines the advantages of flow-based face manipulation with the more recent generative capabilities of Variational Autoencoders (VAEs). During training, our model learns to encode the flow from one expression to another over a low-dimensional latent space. At test time, expression editing can be done simply using latent vector arithmetic. We evaluate our methods on two applications: 1) single-image facial expression editing, and 2) facial expression interpolation between two images. We demonstrate that our method generates images of higher perceptual quality than previous VAE and flow-based methods."} {"_id": "b3c346b3da022238301c1c95e17823b7d7ab2f49", "title": "Reliability of measurements of muscle tone and muscle power in stroke patients.", "text": "OBJECTIVES\nto establish the reliability of the modified Ashworth scale for measuring muscle tone in a range of muscle groups (elbow, wrist, knee and ankle; flexors and extensors) and of the Medical Research Council scale for measuring muscle power in the same muscle groups and their direct antagonists.\n\n\nDESIGN\na cross-sectional study involving repeated measures by two raters. We estimated reliability using the kappa statistic with quadratic weights (Kw).\n\n\nSETTING\nan acute stroke ward, a stroke rehabilitation unit and a continuing care facility.\n\n\nSUBJECTS\npeople admitted to hospital with an acute stroke-35 patients, median age 73 (interquartile range 65-80), 20 men and 15 women.\n\n\nRESULTS\ninter- and intra-rater agreement for the measurement of power was good to very good for all tested muscle groups (Kw = 0.84-0.96, Kw = 0.70-0.96). Inter- and intra-rater agreement for the measurement of tone in the elbow, wrist and knee flexors was good to very good (Kw = 0.73-0.96, Kw = 0.77-0.94). Inter- and intra-rater agreement for the measurement of tone in the ankle plantarflexors was moderate to good (Kw = 0.45-0.51, Kw = 0.59-0.64).\n\n\nCONCLUSIONS\nthe Medical Research Council scale was reliable in the tested muscle groups. The modified Ashworth scale demonstrated reliability in all tested muscle groups except the ankle plantarflexors. If reliable measurement of tone at the ankle is required for a specific purpose (e.g. to measure the effect of therapeutic intervention), further work will be necessary."} {"_id": "8925a583cad259d60f63f236302fbc5ac8277e14", "title": "Microservice Ambients: An Architectural Meta-Modelling Approach for Microservice Granularity", "text": "Isolating fine-grained business functionalities by boundaries into entities called microservices is a core activity underlying microservitization. We define microservitization as the paradigm shift towards microservices. Determining the optimal microservice boundaries (i.e. microservice granularity) is among the key microservitization design decisions that influence the Quality of Service (QoS) of the microservice application at runtime.
In this paper, we provide an architecture-centric approach to model this decision problem. We build on ambients \u2014 a modelling approach that can explicitly capture functional boundaries and their adaptation. We extend the aspect-oriented architectural meta-modelling approach of ambients \u2014 AMBIENT-PRISMA \u2014 with microservice ambients. A microservice ambient is a modelling concept that treats microservice boundaries as an adaptable first-class entity. We use a hypothetical online movie subscription-based system to capture a microservitization scenario using our aspect-oriented modelling approach. The results show the ability of microservice ambients to express the functional boundary of a microservice, the concerns of each boundary, the relationships across boundaries and the adaptations of these boundaries. Additionally, we evaluate the expressiveness and effectiveness of microservice ambients using criteria from Architecture Description Language (ADL) classification frameworks, since microservice ambients essentially support architecture description for microservices. The evaluation focuses on the fundamental modelling constructs of microservice ambients and how they support microservitization properties such as utility-driven design, tool heterogeneity and decentralised governance. The evaluation highlights how microservice ambients support analysis, evolution and mobility/location awareness, which are significant to quality-driven microservice granularity adaptation. The evaluation is general and irrespective of the particular application domain and the business competencies in that domain."} {"_id": "b3200539538eca54a85223bf0ec4f3ed132d0493", "title": "Action Anticipation with RBF Kernelized Feature Mapping RNN", "text": "We introduce a novel Recurrent Neural Network-based algorithm for future video feature generation and action anticipation called feature mapping RNN. Our novel RNN architecture builds upon three effective principles of machine learning, namely parameter sharing, Radial Basis Function kernels and adversarial training. Using only some of the earliest frames of a video, the feature mapping RNN is able to generate future features with a fraction of the parameters needed in traditional RNN. By feeding these future features into a simple multilayer perceptron facilitated with an RBF kernel layer, we are able to accurately predict the action in the video. In our experiments, we obtain 18% improvement on the JHMDB-21 dataset, 6% on UCF101-24 and 13% improvement on UT-Interaction datasets over prior state-of-the-art for action anticipation."} {"_id": "b0e92c3a7506a526a5130872f4dfc1a5d59c2ecd", "title": "Chip Level Techniques for EMI Reduction in LCD Panels", "text": "This paper presents chip-level techniques to improve the electro-magnetic interference (EMI) characteristics of LCD-TV panels. A timing controller (TCON) uses over-driving algorithms to improve the response time of liquid crystals (LC). Since this algorithm needs previous frame data, external memory such as double data rate synchronous DRAM (DDR SDRAM) is widely used as a frame buffer. A TTL interface between the TCON and memory is used for read and write operations, generating EMI noise. For reduction of this EMI, three methods are described. The first approach is to reduce the driving current of data I/O buffers. The second is to adopt spread spectrum clock generation (SSCG), and the third is to apply a proposed algorithm which minimizes data transitions.
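The TCON abstract above closes with an algorithm that minimizes data transitions on the memory interface, but the abstract does not give the algorithm itself. The sketch below shows bus-invert coding, a classic transition-minimizing scheme, purely as an illustration of the general idea (not the paper's proposed method):

```python
def bus_invert(words, width=8):
    """Bus-invert coding: if more than half the bus lines would toggle,
    transmit the complement and assert an invert flag, halving worst-case
    switching activity (and thus EMI) on the bus."""
    prev, out = 0, []
    for w in words:
        toggles = bin((w ^ prev) & ((1 << width) - 1)).count("1")
        if toggles > width // 2:
            w ^= (1 << width) - 1            # send the inverted word
            out.append((w, 1))
        else:
            out.append((w, 0))
        prev = w
    return out

data = [0x00, 0xFF, 0x0F, 0xF1]
print(bus_invert(data))  # -> [(0, 0), (0, 1), (15, 0), (14, 1)]
```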
EMI measurement of a 32\" LCD-TV panel shows that these approaches are very effective for reduction of EMI, achieving 20dB reduction at 175MHz."} {"_id": "16c8c2ead791636acdad5ca72c1938dd66fba6c5", "title": "A Two-Stage Iterative Decoding of LDPC Codes for Lowering Error Floors", "text": "In iterative decoding of LDPC codes, trapping sets often lead to high error floors. In this work, we propose a two-stage iterative decoding to break trapping sets. Simulation results show that the error floor performance can be significantly improved with this decoding scheme."} {"_id": "5b5909cd3757e2b9ab6f8784f4c7d936865076ea", "title": "Strategic Directions in Artificial Intelligence", "text": "\u2014constructing intelligent machines, whether or not these operate in the same way as people do; \u2014formalizing knowledge and mechanizing reasoning, both commonsense and refined expertise, in all areas of human endeavor; \u2014using computational models to understand the psychology and behavior of people, animals, and artificial agents; and \u2014making working with computers as easy and as helpful as working with skilled, cooperative, and possibly expert people."} {"_id": "e33a3487f9b656631159186db4b2aebaed230b36", "title": "The digital platform: a research agenda", "text": "As digital platforms are transforming almost every industry today, they are slowly finding their way into the mainstream information systems (IS) literature. Digital platforms are a challenging research object because of their distributed nature and intertwinement with institutions, markets and technologies. New research challenges arise as a result of the exponentially growing scale of platform innovation, the increasing complexity of platform architectures, and the spread of digital platforms to many different industries. This paper develops a research agenda for digital platforms research in IS. We recommend researchers seek to (1) advance conceptual clarity by providing clear definitions that specify the unit of analysis, degree of digitality and the sociotechnical nature of digital platforms; (2) define the proper scoping of digital platform concepts by studying platforms on different architectural levels and in different industry settings; and (3) advance methodological rigour by employing embedded case studies, longitudinal studies, design research, data-driven modelling and visualization techniques. Considering current developments in the business domain, we suggest six questions for further research: (1) Are platforms here to stay?; (2) How should platforms be designed?; (3) How do digital platforms transform industries?; (4) How can data-driven approaches inform digital platforms research?; (5) How should researchers develop theory for digital platforms?; and (6) How do digital platforms affect everyday life?"} {"_id": "3efcb76ba7a3709d63f9f04d62e54ee2ef1d9603", "title": "Theory and Application of Magnetic Flux Leakage Pipeline Detection", "text": "Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique which uses magnetic sensitive sensors to detect the magnetic leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles, measurement and processing of MFL data. As the key point of a quantitative analysis of MFL detection, the identification of the leakage magnetic signal is also discussed. 
In addition, the advantages and disadvantages of different identification methods are analyzed. Then the paper briefly introduces the expert systems used. At the end of this paper, future developments in pipeline MFL detection are predicted."} {"_id": "1be8cab8701586e751d6ed6d186ca0b6f58a54e7", "title": "Automatic detection of incomplete requirements via symbolic analysis", "text": "The usefulness of a system specification depends in part on the completeness of the requirements. However, enumerating all necessary requirements is difficult, especially when requirements interact with an unpredictable environment. A specification built with an idealized environmental view is incomplete if it does not include requirements to handle non-idealized behavior. Often incomplete requirements are not detected until implementation, testing, or worse, after deployment. Even when performed during requirements analysis, detecting incomplete requirements is typically an error prone, tedious, and manual task. This paper introduces Ares, a design-time approach for detecting incomplete requirements decomposition using symbolic analysis of hierarchical requirements models. We illustrate our approach by applying Ares to a requirements model of an industry-based automotive adaptive cruise control system. Ares is able to automatically detect specific instances of incomplete requirements decompositions at design-time, many of which are subtle and would be difficult to detect, either manually or with testing."} {"_id": "d1feb47dd448594d30517add168b7365b45688c1", "title": "Internet-based peer support for parents: a systematic integrative review.", "text": "OBJECTIVES\nThe Internet and social media provide various possibilities for online peer support. The aim of this review was to explore Internet-based peer-support interventions and their outcomes for parents.\n\n\nDESIGN\nA systematic integrative review.\n\n\nDATA SOURCES\nThe systematic search was carried out in March 2014 in PubMed, Cinahl, PsycINFO and Cochrane databases.\n\n\nREVIEW METHODS\nTwo reviewers independently screened the titles (n=1793), abstracts and full texts to decide which articles should be chosen. The inclusion criteria were: (1) an Internet-based community as an intervention, or at least as a component of an intervention; (2) the participants in the Internet-based community had to be mothers and/or fathers or pregnant women; (3) the parents had to interact and communicate with each other through the Internet-based community. The data was analysed using content analysis. When analysing peer-support interventions only interventions developed by researchers were included and when analysing the outcomes for the parents, studies that focused on mothers, fathers or both parents were separated.\n\n\nRESULTS\nIn total, 38 publications met the inclusion criteria. Most of the studies focused on Internet-based peer support between mothers (n=16) or both parents (n=15) and seven focused on fathers. In 16 studies, the Internet-based interventions had been developed by researchers and 22 studies used already existing Internet peer-support groups, in which any person using the Internet could participate. For mothers, Internet-based peer support provided emotional support, information and membership in a social community. For fathers, it provided support for the transition to fatherhood, information and humorous communication. Mothers were more active users of Internet-based peer-support groups than fathers. 
In general, parents were satisfied with Internet-based peer support. The evidence of the effectiveness of Internet-based peer support was inconclusive but no harmful effects were reported in these reviewed studies.\n\n\nCONCLUSIONS\nInternet-based peer support provided informational support for parents and was accessible despite geographical distance or time constraints. Internet-based peer support is a unique form of parental support, not replacing but supplementing support offered by professionals. Experimental studies in this area are needed."} {"_id": "9fcb152b8cc6e0c6d1c80d4a57a287a2515ee89c", "title": "A Rule-Based Framework of Metadata Extraction from Scientific Papers", "text": "Most scientific documents on the web are unstructured or semi-structured, and automatic document metadata extraction therefore becomes an important task. This paper describes a framework for automatic metadata extraction from scientific papers. Based on a spatial and visual knowledge principle, our system can extract title, authors and abstract from scientific papers. We utilize format information such as font size and position to guide the metadata extraction process. The experimental results show that our system achieves high accuracy in header metadata extraction, which can effectively assist automatic index creation for digital libraries."} {"_id": "1ea34fdde4d6818a5e6b6062b763d48091bb7400", "title": "Battery-Draining-Denial-of-Service Attack on Bluetooth Devices", "text": "We extend the Xen virtual machine monitor with the ability to host a hundred virtual machines on a single physical node. Similarly to demand paging of virtual memory, we page out idle virtual machines making them available on demand. Paging is transparent. An idle virtual machine remains fully operational. It is able to respond to external events with a delay comparable to a delay under a medium load. To achieve the desired degree of consolidation, we identify and leave in memory only a minimal working set of pages required to maintain the illusion of a running VM and respond to external events. To keep the number of active pages small without harming performance dramatically, we build a correspondence between every event and its working set. Reducing a working set further, we implement copy-on-write page sharing across virtual machines running on the same host. To decrease resources occupied by a virtual machine\u2019s file system, we implement a copy-on-write storage and golden imaging."} {"_id": "c812374d55b1deb54582cb3656bb0265522e7ec6", "title": "Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions", "text": "Currently autonomous or self-driving vehicles are at the heart of academia and industry research because of their multi-faceted advantages, which include improved safety, reduced congestion, lower emissions and greater mobility. Software is the key driving factor underpinning autonomy, within which planning algorithms that are responsible for mission-critical decision making hold a significant position. While transporting passengers or goods from a given origin to a given destination, motion planning methods incorporate searching for a path to follow, avoiding obstacles and generating the best trajectory that ensures safety, comfort and efficiency. A range of different planning approaches have been proposed in the literature.
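The metadata-extraction abstract above relies on font-size and position rules over the page layout. A sketch of one such rule (take the largest-font spans near the top of page one as the title), assuming text spans have already been parsed out of the PDF; the span tuples and thresholds here are hypothetical:

```python
# Font/position rule for title extraction over pre-parsed page-1 spans
# (e.g., from a PDF layout parser); values below are invented for the demo.
def extract_title(spans, top_fraction=0.4, page_height=792):
    top = [s for s in spans if s["y"] < top_fraction * page_height]
    if not top:
        return None
    biggest = max(s["font_size"] for s in top)
    # Title = the top-of-page spans set in the largest font.
    return " ".join(s["text"] for s in top if s["font_size"] == biggest)

spans = [
    {"text": "A Rule-Based Framework of", "font_size": 18, "y": 72},
    {"text": "Metadata Extraction",       "font_size": 18, "y": 92},
    {"text": "Jane Doe, John Roe",        "font_size": 11, "y": 120},
    {"text": "Abstract. Most scientific", "font_size": 9,  "y": 180},
]
print(extract_title(spans))
```

Author and abstract rules follow the same pattern: medium-sized spans directly below the title, and the block following an "Abstract" marker, respectively.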
The purpose of this paper is to review existing approaches and then compare and contrast the different methods employed for the motion planning of autonomous on-road driving, which consists of (1) finding a path, (2) searching for the safest manoeuvre and (3) determining the most feasible trajectory. Methods developed by researchers in each of these three levels exhibit varying levels of complexity and performance accuracy. This paper presents a critical evaluation of each of these methods, in terms of their advantages/disadvantages, inherent limitations, feasibility, optimality, handling of obstacles and testing operational environments. Based on a critical review of existing methods, research challenges to address current limitations are identified and future research directions are suggested so as to enhance the performance of planning algorithms at all three levels. Some promising areas of future focus have been identified as the use of vehicular communications (V2V and V2I) and the incorporation of transport engineering aspects in order to improve the look-ahead horizon of current sensing technologies that are essential for planning, with the aim of reducing the total cost of driverless vehicles. This critical review on planning techniques presented in this paper, along with the associated discussions on their constraints and limitations, seeks to assist researchers in accelerating development in the emerging field of autonomous vehicle research."} {"_id": "cc5a1cf7ad9d644f21a5df799ffbcb8d1e24abe1", "title": "MonoPerfCap: Human Performance Capture from Monocular Video", "text": "We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video. Our approach reconstructs articulated human skeleton motion as well as medium-scale non-rigid surface deformations in general scenes. Human performance capture is a challenging problem due to the large range of articulation, potentially fast motion, and considerable non-rigid deformations, even from multi-view data. Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem. We tackle these challenges by a novel approach that employs sparse 2D and 3D human pose detections from a convolutional neural network using a batch-based pose estimation strategy. Joint recovery of per-batch motion allows to resolve the ambiguities of the monocular reconstruction problem based on a low dimensional trajectory subspace. In addition, we propose refinement of the surface geometry based on fully automatically extracted silhouettes to enable medium-scale non-rigid alignment. We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free viewpoint video, previously infeasible from monocular video. Our qualitative and quantitative evaluation demonstrates that our approach significantly outperforms previous monocular methods in terms of accuracy, robustness and scene complexity that can be handled."} {"_id": "2243ca5af76cccee0347230abc05b029ddf0e16d", "title": "A linguistic theory of translation: an essay in applied linguistics", "text": "It's coming again, the new collection that this site has.
To complete your curiosity, we offer the favorite linguistic theory of translation an essay in applied linguistics book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. Well, when you are really dying of linguistic theory of translation an essay in applied linguistics, just pick it. You know, this book is always making the fans to be dizzy if not to find."} {"_id": "174f55ffd478659bb3eea7b9d5c0c64e1a6c2907", "title": "A theory of formal synthesis via inductive learning", "text": "Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent times, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods that learn programs from examples as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants where the types of counterexamples generated by the oracle vary. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis."} {"_id": "2e006307ea0fbc35ee318810bfa40dc8c055dea2", "title": "NAVSTAR: Global positioning system\u2014Ten years later", "text": "With the award of one of the first multiyear procurements to be authorized by the Congress, the Department of Defense celebrates the tenth anniversary of a major Joint-Service / Agency program that promises to revolutionize the art and science of navigation. This contract, totaling 1.17 billion dollars, will result in delivery by the end of this decade of 28 satellites that are to become the heart of the NAVSTAR Global Positioning System (GPS). This paper traces the history of this program from its inception, identifying and discussing the major technologies that made it all possible. In so doing, the concept of operations of the entire system, including the satellites, their associated ground-based control network, and the family of user equipment being developed and tested to meet the broad scope of Department of Defense positioning and navigation requirements is introduced."} {"_id": "53e082483bf1daa3dfe617be83eb3266425eede0", "title": "Bayesian Grasp Planning", "text": "We present a Bayesian framework for grasp planning that takes into account uncertainty in object shape or pose, as well as robot motion error. When trying to grasp objects based on noisy sensor data, a common problem is errors in perception, which cause the geometry or pose of the object to be uncertain. For each hypothesis about the geometry or pose of the object to be grasped, different sets of grasps can be planned. 
Of the resulting grasps, some are likely to work only if particular hypotheses are true, but some may work on most or even all hypotheses. Likewise, some grasps are easily broken by small errors in robot motion, but other grasps are robust to such errors. Our probabilistic framework takes into account all of these factors while trying to estimate the overall probability of success of each grasp, allowing us to select grasps that are robust to incorrect object recognition as well as motion error due to imperfect robot calibration. We demonstrate our framework while using the PR2 robot to grasp common household objects."} {"_id": "9b7741bafecf80bf4de8aaae0d5260f4a6706082", "title": "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization.", "text": "An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories."} {"_id": "155ed7834a8a44a195b80719985a8b4ca11e6fdc", "title": "Iterative Adaptive Approaches to MIMO Radar Imaging", "text": "Multiple-input multiple-output (MIMO) radar can achieve superior performance through waveform diversity over conventional phased-array radar systems. When a MIMO radar transmits orthogonal waveforms, the reflected signals from scatterers are linearly independent of each other. Therefore, adaptive receive filters, such as Capon and amplitude and phase estimation (APES) filters, can be directly employed in MIMO radar applications. High levels of noise and strong clutter, however, significantly worsen detection performance of the data-dependent beamformers due to a shortage of snapshots. The iterative adaptive approach (IAA), a nonparametric and user parameter-free weighted least-squares algorithm, was recently shown to offer improved resolution and interference rejection performance in several passive and active sensing applications. In this paper, we show how IAA can be extended to MIMO radar imaging, in both the negligible and nonnegligible intrapulse Doppler cases, and we also establish some theoretical convergence properties of IAA. In addition, we propose a regularized IAA algorithm, referred to as IAA-R, which can perform better than IAA by accounting for unrepresented additive noise terms in the signal model. 
Numerical examples are presented to demonstrate the superior performance of MIMO radar over single-input multiple-output (SIMO) radar, and further highlight the improved performance achieved with the proposed IAA-R method for target imaging."} {"_id": "fbe9e8467fcddb402d6cce9f515a74e3c5b8342a", "title": "You Cannot Sense My PINs: A Side-Channel Attack Deterrent Solution Based on Haptic Feedback on Touch-Enabled Devices", "text": "In this paper, we introduce a novel and secure solution to mitigate side-channel attacks that capture PINs, such as touchID, and other credentials of touch-enabled devices. Our approach can protect haptic feedback enabled devices from potential direct observation techniques such as cameras and motion sensing techniques such as accelerometers in smart-watches. Both attacks use the concept of shoulder surfing in social engineering and were published recently (CCS'14 and CCS'15). Hand-held devices universally employ small vibration motors as an inexpensive way to provide haptic feedback. The strength of the haptic feedback depends on the brand and the device manufacturer. They are usually strong enough to produce a sliding movement and make audible noises if the device is resting on the top of a desk when the vibration motor turns. However, when the device is held in the hand the vibration can only be sensed by the holder; it is usually impossible or uncertain for an observer to know when the vibration motor turns. Our proposed solution uses the haptic feedback to inform the internal state of the keypad to the user and takes advantage of the fact that the effect of haptic feedback can be easily cloaked in such a way that direct observation techniques and indirect sensing techniques will fail. We develop an application on Android cell phones to demonstrate it and invite users to test the code. Moreover, we use a real smart-watch to sense the vibration of Android cell phones. Our experimental results show that our approach can mitigate the probability of sensing 4-digit or 6-digit PINs using a smart-watch to below practical value. Our approach can also mitigate the probability of recognizing 4-digit or 6-digit PINs using a camera within 1 meter to below practical value because the user does not need to move his or her hand during the internal states to input different PINs."} {"_id": "0cfe588996f1bc319f87c6f75160d1cf1542d9a9", "title": "ON THE EMERGENCE OF GRAMMAR FROM THE LEXICON", "text": ""} {"_id": "162cd1259ad106990c6bfd36db19751f940274d3", "title": "Rapid word learning under uncertainty via cross-situational statistics.", "text": "There are an infinite number of possible word-to-word pairings in naturalistic learning environments. Previous proposals to solve this mapping problem have focused on linguistic, social, representational, and attentional constraints at a single moment. This article discusses a cross-situational learning strategy based on computing distributional statistics across words, across referents, and, most important, across the co-occurrences of words and referents at multiple moments. We briefly exposed adults to a set of trials that each contained multiple spoken words and multiple pictures of individual objects; no information about word-picture correspondences was given within a trial. Nonetheless, over trials, subjects learned the word-picture mappings through cross-trial statistical relations. Different learning conditions varied the degree of within-trial reference uncertainty, the number of trials, and the length of trials. 
Overall, the remarkable performance of learners in various learning conditions suggests that they calculate cross-trial statistics with sufficient fidelity and by doing so rapidly learn word-referent pairs even in highly ambiguous learning contexts."} {"_id": "53dd71dc5598d41c06d3eef1315e098dc4cbca28", "title": "Word Segmentation: The Role of Distributional Cues", "text": "One of the infant\u2019s first tasks in language acquisition is to discover the words embedded in a mostly continuous speech stream. This learning problem might be solved by using distributional cues to word boundaries\u2014for example, by computing the transitional probabilities between sounds in the language input and using the relative strengths of these probabilities to hypothesize word boundaries. The learner might be further aided by language-specific prosodic cues correlated with word boundaries. As a first step in testing these hypotheses, we briefly exposed adults to an artificial language in which the only cues available for word segmentation were the transitional probabilities between syllables. Subjects were able to learn the words of this language. Furthermore, the addition of certain prosodic cues served to enhance performance. These results suggest that distributional cues may play an important role in the initial word segmentation of language learners. \u00a9 1996 Academic Press, Inc."} {"_id": "7085126d3d21b559e38231f3fa283aae0ca50cd8", "title": "Statistical learning by 8-month-old infants.", "text": "Learners rely on a combination of experience-independent and experience-dependent mechanisms to extract information from the environment. Language acquisition involves both types of mechanisms, but most theorists emphasize the relative importance of experience-independent mechanisms. The present study shows that a fundamental task of language acquisition, segmentation of words from fluent speech, can be accomplished by 8-month-old infants based solely on the statistical relationships between neighboring speech sounds. Moreover, this word segmentation was based on statistical learning from only 2 minutes of exposure, suggesting that infants have access to a powerful mechanism for the computation of statistical properties of the language input."} {"_id": "a6ad17f9df9346c56bab090b35ef73ff94f56c01", "title": "A computational study of cross-situational techniques for learning word-to-meaning mappings", "text": "This paper presents a computational study of part of the lexical-acquisition task faced by children, namely the acquisition of word-to-meaning mappings. It first approximates this task as a formal mathematical problem. It then presents an implemented algorithm for solving this problem, illustrating its operation on a small example. This algorithm offers one precise interpretation of the intuitive notions of cross-situational learning and the principle of contrast applied between words in an utterance. It robustly learns a homonymous lexicon despite noisy multi-word input, in the presence of referential uncertainty, with no prior knowledge that is specific to the language being learned. 
Computational simulations demonstrate the robustness of this algorithm and illustrate how algorithms based on cross-situational learning and the principle of contrast might be able to solve lexical-acquisition problems of the size faced by children, under weak, worst-case assumptions about the type and quantity of data available."} {"_id": "d2880069513ee3cfbe1136cee6133a6eedb51c00", "title": "Substrate Integrated Waveguide Cavity-Backed Self-Triplexing Slot Antenna", "text": "In this letter, a novel substrate integrated waveguide (SIW) cavity-backed self-triplexing slot antenna is proposed for triple-band communication. It is realized on a single-layer printed circuit board. The proposed antenna consists of a pair of bow-tie slots etched on an SIW cavity, and three different resonant modes are excited by two microstrip lines and a coaxial probe to operate at three distinct frequencies (7.89, 9.44, and 9.87 GHz). The band around 9.44 GHz is obtained due to radiation from one bow-tie slot fed by its adjacent microstrip feedline, and the band around 9.87 GHz is due to radiation from the other bow-tie slot fed by its adjacent microstrip feedline. On the other hand, the lower band around 7.89 GHz is achieved because of radiation from both bow-tie slots excited by the coaxial probe feedline. The measured isolation between any two feedlines is better than 22.5 dB. The measured realized gain is more than 7.2 dB at all the bands. Cross polarization below 36.5, 29.3, and 24.45 dB in the broadside direction and a high front-to-back ratio of more than 17.3 dB at each resonant frequency are obtained."} {"_id": "9c703258eca64936838f700b6d2a0e2c33b12b72", "title": "Structural Health Monitoring Framework Based on Internet of Things: A Survey", "text": "Internet of Things (IoT) has recently received great attention due to its potential and capacity to be integrated into any complex system. As a result of the rapid development of sensing technologies such as radio-frequency identification, sensors and the convergence of information technologies such as wireless communication and Internet, IoT is emerging as an important technology for monitoring systems. This paper reviews and introduces a framework for structural health monitoring (SHM) using IoT technologies on intelligent and reliable monitoring. Specifically, technologies involved in IoT and SHM system implementation as well as data routing strategy in an IoT environment are presented. As the amount of data generated by sensing devices is voluminous and produced faster than ever, big data solutions are introduced to deal with the complex and large amount of data collected from sensors installed on structures."} {"_id": "e4284c6b3cab23dc6221a9b8383546810f5ecb6b", "title": "Comparative Analysis of Intelligent Transportation Systems for Sustainable Environment in Smart Cities", "text": "In recent works on the Internet of Vehicles (IoV), \u201cintelligent\u201d and \u201csustainable\u201d have been the buzzwords in the context of transportation. Maintaining sustainability in IoV is always a challenge. Sustainability in IoV can be achieved not only by the use of pollution-free vehicular systems, but also by maintenance of road traffic safety or prevention of accidents or collisions. With the aim of establishing an effective sustainable transportation planning system, this study performs a short analysis of existing sustainable transportation methods in the IoV. 
This study also analyzes various characteristics of sustainability and the advantages and disadvantages of existing transportation systems. Toward the end, this study provides a clear suggestion for effective sustainable transportation planning aimed at the maintenance of an eco-friendly environment and road traffic safety, which, in turn, would lead to a sustainable transportation system."} {"_id": "6505481758758ad1b4d98e7321801723d9773af2", "title": "An Improved DC-Link Voltage Fast Control Scheme for a PWM Rectifier-Inverter System", "text": "This paper presents an improved dc-link voltage fast control strategy based on energy balance for a pulse width modulation (PWM) rectifier-inverter system to reduce the fluctuation of dc-link voltage. A conclusion is drawn that the energy in the dc-link capacitor cannot be kept constant in the dynamic process when the system operates in rectifier mode, even in the ideal case. Meanwhile, the minimum dc-link voltage deviation is analyzed. Accordingly, a predictive dc-link voltage control scheme based on energy balance is proposed, while the grid current is regulated by deadbeat predictive control. A prediction method for output power is also introduced. Furthermore, a small-signal model with the control delay is adopted to analyze the stability and robustness of the control strategy. The simulation and experimental results demonstrate both good dynamic and steady-state performances of the rectifier-inverter system with the proposed control scheme."} {"_id": "adf434fe0bf7ff55edee25d5e50d3de94ad2c325", "title": "Comparative Study on Load Balancing Techniques in Cloud Computing", "text": "The present era has witnessed tremendous growth of the internet and various applications that are running over it. Cloud computing is an internet-based technology that emphasizes utility, follows a pay-as-you-go model, and hence has become popular for its highly demanded features. Load balancing is one of the interesting and prominent research topics in cloud computing, which has gained considerable attention recently. Users are demanding more services with better results. Many algorithms and approaches are proposed by various researchers throughout the world, with the aim of balancing the overall workload among given nodes, while attaining the maximum throughput and minimum time. In this paper, various proposed algorithms addressing the issue of load balancing in Cloud Computing are analyzed and compared to provide a gist of the latest approaches in this research area."} {"_id": "3e38e521ec579a6aad4e7ebc5f125123018b1683", "title": "Application of spiking neural networks and the bees algorithm to control chart pattern recognition", "text": "Statistical process control (SPC) is a method for improving the quality of products. Control charting plays a most important role in SPC. SPC control charts are used for monitoring and detecting unnatural process behaviour. Unnatural patterns in control charts indicate unnatural causes for variations. Control chart pattern recognition is therefore important in SPC. Past research shows that although certain types of charts, such as the CUSUM chart, might have powerful detection ability, they lack robustness and do not function automatically. In recent years, neural network techniques have been applied to automatic pattern recognition. Spiking Neural Networks (SNNs) belong to the third generation of artificial neural networks, with spiking neurons as processing elements. In SNNs, time is an important feature for information representation and processing. 
This thesis proposes the application of SNN techniques to control chart pattern recognition. It is designed to present an analysis of the existing learning algorithms of SNN for pattern recognition and to explain how and why spiking neurons have more computational power in comparison to the previous generation of neural networks. This thesis focuses on the architecture and the learning procedure of the network. Four new learning algorithms are presented with their specific architecture: Spiking Learning Vector Quantisation (S-LVQ), Enhanced-Spiking Learning Vector Quantisation (NS-LVQ), S-LVQ with Bees and NS-LVQ with Bees. The latter two algorithms employ a new intelligent swarm-based optimisation called the Bees Algorithm to optimise the LVQ pattern recognition networks. Overall, the aim of the research is to develop a simple architecture for the proposed network as well as to develop a network that is efficient for application to control chart pattern recognition. Experiments show that the proposed architecture and the learning procedure give high pattern recognition accuracies."} {"_id": "ab430d9e3e250ab5dc61e12f9e7e40c83227b8a0", "title": "On Different Facets of Regularization Theory", "text": "This review provides a comprehensive understanding of regularization theory from different perspectives, emphasizing smoothness and simplicity principles. Using the tools of operator theory and Fourier analysis, it is shown that the solution of the classical Tikhonov regularization problem can be derived from the regularized functional defined by a linear differential (integral) operator in the spatial (Fourier) domain. State-of-the-art research relevant to the regularization theory is reviewed, covering Occam's razor, minimum length description, Bayesian theory, pruning algorithms, informational (entropy) theory, statistical learning theory, and equivalent regularization. The universal principle of regularization in terms of Kolmogorov complexity is discussed. Finally, some prospective studies on regularization theory and beyond are suggested."} {"_id": "3e8d2a11a3ed6d9ebdd178a420b3c019c5356fae", "title": "Never-Ending Multiword Expressions Learning", "text": "This paper introduces NEMWEL, a system that performs Never-Ending MultiWord Expressions Learning. Instead of using a static corpus and classifier, NEMWEL applies supervised learning on automatically crawled news texts. Moreover, it uses its own results to periodically retrain the classifier, bootstrapping on its own results. In addition to a detailed description of the system\u2019s architecture and its modules, we report the results of a manual evaluation. It shows that NEMWEL is capable of learning new expressions over time with improved precision."} {"_id": "9860487cd9e840d946b93457d11605be643e6d4c", "title": "Text Clustering for Topic Detection", "text": "The world wide web represents vast stores of information. However, the sheer amount of such information makes it practically impossible for any human user to be aware of much of it. Therefore, it would be very helpful to have a system that automatically discovers relevant, yet previously unknown information, and reports it to users in human-readable form. As the first attempt to accomplish such a goal, we proposed a new clustering algorithm and compared it with existing clustering algorithms. The proposed method is motivated by constructive and competitive learning from neural network research. 
In the construction phase, it tries to find the optimal number of clusters by adding a new cluster when the intrinsic difference between the instance presented and the existing clusters is detected. Each cluster then moves toward the optimal cluster center according to the learning rate by adjusting its weight vector. From the experimental results on the three different real world data sets, the proposed method shows an even trend of performance across the different domains, while the performance of our algorithm on text domains was better than that reported in previous research."} {"_id": "c7df32e449f1ba9767763a122bbdc2fac310f958", "title": "Direct Desktop Printed-Circuits-on-Paper Flexible Electronics", "text": "There is currently no way to directly write out electronics, just like printing pictures on paper with an office printer. Here we show desktop printing of flexible circuits on paper via developing liquid metal ink and related working mechanisms. Through modifying adhesion of the ink, overcoming its high surface tension by a dispensing machine and designing a brush-like porous pinhead for printing alloy and identifying matched substrate materials among different papers, the slightly oxidized alloy ink was demonstrated to be flexibly printed on coated paper, which could compose various functional electronics and the concept of Printed-Circuits-on-Paper was thus presented. Further, RTV silicone rubber was adopted as isolating inks and packaging material to guarantee the functional stability of the circuit, which suggests an approach for printing 3D hybrid electro-mechanical devices. The present work paved the way for a low-cost and easygoing method in directly printing paper electronics."} {"_id": "aea427bcea46c83021e62f4fb10178557510ca5d", "title": "Latent Variable Dialogue Models and their Diversity", "text": "We present a dialogue generation model that directly captures the variability in possible responses to a given input, which reduces the \u2018boring output\u2019 issue of deterministic dialogue models. Experiments show that our model generates more diverse outputs than baseline models, and also generates more consistently acceptable output than sampling from a deterministic encoder-decoder model."} {"_id": "20efcba63a0d9f12251a5e5dda745ac75a6a84a9", "title": "Bio-Inspired Robotics Based on Liquid Crystalline Elastomers (LCEs) and Flexible Stimulators", "text": ""} {"_id": "2a6783ae51d7ee781d584ef9a3eb8ab1997d0489", "title": "A study of large-scale ethnicity estimation with gender and age variations", "text": "In this paper we study large-scale ethnicity estimation under variations of age and gender. The biologically-inspired features are applied to ethnicity classification for the first time. Through a large number of experiments on a large database with more than 21,000 face images, we systematically study the effect of gender and age variations on ethnicity estimation. Our finding is that ethnicity classification can have high accuracy in most cases, but an interesting phenomenon is observed that the ethnic classification accuracies could be reduced by 6\u223c8% on average when female faces are used for training while males for testing. The study results provide a guide for face processing on a multi-ethnic database, e.g., image collection from the Internet, and may inspire further psychological studies on ethnic grouping with gender and age variations. 
We also apply the methods to the whole MORPH-II database with more than 55,000 face images for ethnicity classification of five races. It is the first time that ethnicity estimation is performed on so large a database."} {"_id": "37caf807a77df2bf907b1e7561454419a328e34d", "title": "Feature Engineering for Predictive Modeling using Reinforcement Learning", "text": "Feature engineering is a crucial step in the process of predictive modeling. It involves the transformation of a given feature space, typically using mathematical functions, with the objective of reducing the modeling error for a given target. However, there is no well-defined basis for performing effective feature engineering. It involves domain knowledge, intuition, and most of all, a lengthy process of trial and error. The human attention involved in overseeing this process significantly influences the cost of model generation. We present a new framework to automate feature engineering. It is based on performance-driven exploration of a transformation graph, which systematically and compactly enumerates the space of given options. A highly efficient exploration strategy is derived through reinforcement learning on past examples."} {"_id": "e7ab1a736bb105df8678af5191640fd534c8c430", "title": "Power Transformer Economic Evaluation in Decentralized Electricity Markets", "text": "Owing to deregulation, privatization, and competition, estimating financial benefits of electrical power system projects is becoming increasingly important. In other words, it is necessary to assess the project profitability in the light of new developments in the electricity market. In this paper, a detailed methodology for the least-cost choice of a distribution transformer is proposed, showing how the higher price of a facility can be traded against its operational cost over its life span. The proposed method involves the incorporation of the discounted cost of transformer losses into their economic evaluation, providing the ability to take into account variable energy cost during the transformer operating lifetime. In addition, the influence of the variability in the energy loss cost is investigated, taking into account a potential policy intended to be adopted by distribution network operators. The method is combined with statistical and probabilistic assessment of electricity price volatility in order to derive its impact on the transformer purchasing policy."} {"_id": "fe8e3898e203086496ae33e355f450bd32b5daff", "title": "Use of opposition method in the test of high-power electronic converters", "text": "The test and the characterization of medium or high-power electronic converters, under nominal operating conditions, are made difficult by the requirement of a high-power electrical source and load. In addition, the energy lost during the test may be very significant. The opposition method, which consists of an association of two identical converters supplied by the same source, one operating as a generator, the other as a receptor, can be a better way to do these tests. Another advantage is the possibility to realize accurate measurements of the different losses in the converters under test. In the first part of this paper, the characteristics of the method concerning loss measurements are compared to those of the electrical or calorimetric methods, then it is shown how it can be applied to different types of power electronic converters, choppers, switched mode power supplies, and pulsewidth modulation inverters. 
In the second part, different examples of studies conducted by the authors using this method are presented. They have varying goals, from the test of soft-switching inverters to the characterization of integrated gate-commutated thyristor (IGCT) devices mounted into 2-MW choppers."} {"_id": "b9728a8279c12f93fff089b6fac96afd2d3bab04", "title": "Mollifying Networks", "text": "The optimization of deep neural networks can be more challenging than traditional convex optimization problems due to the highly non-convex nature of the loss function, e.g. it can involve pathological landscapes such as saddle-surfaces that can be difficult to escape for algorithms based on simple gradient descent. In this paper, we attack the problem of optimization of highly non-convex neural networks by starting with a smoothed \u2013 or mollified \u2013 objective function which becomes more complex as the training proceeds. Our proposition is inspired by the recent studies in continuation methods: similar to curriculum methods, we begin learning an easier (possibly convex) objective function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, objective function. The complexity of the mollified networks is controlled by a single hyperparameter which is annealed during the training. We show improvements on various difficult optimization tasks and establish a relationship between recent works on continuation methods for neural networks and mollifiers."} {"_id": "1a3fe9032cf48e877bf44e24f5d06d9d14e04c9e", "title": "Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke.", "text": "OBJECTIVE\nTo compare the effects of robot-assisted movement training with conventional techniques for the rehabilitation of upper-limb motor function after stroke.\n\n\nDESIGN\nRandomized controlled trial, 6-month follow-up.\n\n\nSETTING\nA Department of Veterans Affairs rehabilitation research and development center.\n\n\nPARTICIPANTS\nConsecutive sample of 27 subjects with chronic hemiparesis (>6mo after cerebrovascular accident) randomly allocated to group.\n\n\nINTERVENTIONS\nAll subjects received twenty-four 1-hour sessions over 2 months. Subjects in the robot group practiced shoulder and elbow movements while assisted by a robot manipulator. Subjects in the control group received neurodevelopmental therapy (targeting proximal upper limb function) and 5 minutes of exposure to the robot in each session.\n\n\nMAIN OUTCOME MEASURES\nFugl-Meyer assessment of motor impairment, FIM\u2122 instrument, and biomechanic measures of strength and reaching kinematics. Clinical evaluations were performed by a therapist blinded to group assignments.\n\n\nRESULTS\nCompared with the control group, the robot group had larger improvements in the proximal movement portion of the Fugl-Meyer test after 1 month of treatment (P<.05) and also after 2 months of treatment (P<.05). The robot group had larger gains in strength (P<.02) and larger increases in reach extent (P<.01) after 2 months of treatment. At the 6-month follow-up, the groups no longer differed in terms of the Fugl-Meyer test (P>.30); however, the robot group had larger improvements in the FIM (P<.04).\n\n\nCONCLUSIONS\nCompared with conventional treatment, robot-assisted movements had advantages in terms of clinical and biomechanical measures. 
Further research into the use of robotic manipulation for motor rehabilitation is justified."} {"_id": "552aa062d2895901bca03b26fa25766b3f64bf6f", "title": "A Brief survey of Data Mining Techniques Applied to Agricultural Data", "text": "As with many other sectors, the amount of agricultural data is increasing on a daily basis. However, the application of data mining methods and techniques to discover new insights or knowledge is a relatively novel research area. In this paper we provide a brief review of a variety of Data Mining techniques that have been applied to model data from or about the agricultural domain. The Data Mining techniques applied to agricultural data include k-means, biclustering, k-nearest neighbor, Neural Networks (NN), Support Vector Machines (SVM), Naive Bayes Classifier and Fuzzy c-means. As can be seen, the appropriateness of data mining techniques is to a certain extent determined by the different types of agricultural data or the problems being addressed. This survey summarizes the application of data mining techniques and predictive modeling applications in the agricultural field."} {"_id": "6ee0d9a40c60e50fab636cca74c6301853d42367", "title": "Stargazer: Automated regression-based GPU design space exploration", "text": "Graphics processing units (GPUs) are of increasing interest because they offer massive parallelism for high-throughput computing. While GPUs promise high peak performance, their challenge is a less-familiar programming model with more complex and irregular performance trade-offs than traditional CPUs or CMPs. In particular, modest changes in software or hardware characteristics can lead to large or unpredictable changes in performance. In response to these challenges, our work proposes, evaluates, and offers usage examples of Stargazer, an automated GPU performance exploration framework based on stepwise regression modeling. Stargazer sparsely and randomly samples parameter values from a full GPU design space and simulates these designs. Then, our automated stepwise algorithm uses these sampled simulations to build a performance estimator that identifies the most significant architectural parameters and their interactions. The result is an application-specific performance model which can accurately predict program runtime for any point in the design space. Because very few initial performance samples are required relative to the extremely large design space, our method can drastically reduce simulation time in GPU studies. For example, we used Stargazer to explore a design space of nearly 1 million possibilities by sampling only 300 designs. For 11 GPU applications, we were able to estimate their runtime with less than 1.1% average error. In addition, we demonstrate several usage scenarios of Stargazer."} {"_id": "ccaab0cee02fe1e5ffde33b79274b66aedeccc65", "title": "Ethical and Social Aspects of Self-Driving Cars", "text": "As an envisaged future of transportation, self-driving cars are being discussed from various perspectives, including social, economic, engineering, computer science, design, and ethics. On the one hand, self-driving cars present new engineering problems that are being gradually successfully solved. On the other hand, social and ethical problems are typically being presented in the form of an idealized unsolvable decision-making problem, the so-called trolley problem, which is grossly misleading. 
We argue that an applied engineering ethical approach for the development of new technology is what is needed; the approach should be applied, meaning that it should focus on the analysis of complex real-world engineering problems. Software plays a crucial role in the control of self-driving cars; therefore, software engineering solutions should seriously handle ethical and social considerations. In this paper we take a closer look at the regulative instruments, standards, design, and implementations of components, systems, and services and we present practical social and ethical challenges that have to be met, as well as novel expectations for software engineering."} {"_id": "bbd9b5e4d4761d923d21a060513e826bf5bfc620", "title": "Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations", "text": "Recent advances with Convolutional Networks (ConvNets) have shifted the bottleneck for many computer vision tasks to annotated data collection. In this paper, we present a geometry-driven approach to automatically collect annotations for human pose prediction tasks. Starting from a generic ConvNet for 2D human pose, and assuming a multi-view setup, we describe an automatic way to collect accurate 3D human pose annotations. We capitalize on constraints offered by the 3D geometry of the camera setup and the 3D structure of the human body to probabilistically combine per view 2D ConvNet predictions into a globally optimal 3D pose. This 3D pose is used as the basis for harvesting annotations. The benefit of the annotations produced automatically with our approach is demonstrated in two challenging settings: (i) fine-tuning a generic ConvNet-based 2D pose predictor to capture the discriminative aspects of a subject's appearance (i.e., personalization), and (ii) training a ConvNet from scratch for single view 3D human pose prediction without leveraging 3D pose ground truth. The proposed multi-view pose estimator achieves state-of-the-art results on standard benchmarks, demonstrating the effectiveness of our method in exploiting the available multi-view information."} {"_id": "66e3aa516a7124befa7d2a8b0872e9619acf7f58", "title": "JM: An R Package for the Joint Modelling of Longitudinal and Time-to-Event Data", "text": "In longitudinal studies measurements are often collected on different types of outcomes for each subject. These may include several longitudinally measured responses (such as blood values relevant to the medical condition under study) and the time at which an event of particular interest occurs (e.g., death, development of a disease or dropout from the study). These outcomes are often separately analyzed; however, in many instances, a joint modeling approach is either required or may produce a better insight into the mechanisms that underlie the phenomenon under study. In this paper we present the R package JM that fits joint models for longitudinal and time-to-event data."} {"_id": "4f57643b95e854bb05fa0c037cbf8898accdbdef", "title": "Technical evaluation of the Carolo-Cup 2014 - A competition for self-driving miniature cars", "text": "The Carolo-Cup competition, conducted for the eighth time this year, is an international student competition focusing on autonomous driving scenarios implemented on 1:10 scale car models. Three practical sub-competitions have to be realized in this context and represent a complex, interdisciplinary challenge. 
Hence, students have to cope with all core topics like mechanical development, electronic design, and programming as usually addressed by robotic applications. In this paper we introduce the competition challenges in detail and evaluate the results of all 13 participating teams from the 2014 competition. For this purpose, we analyze technical as well as non-technical configurations of each student group and derive best practices, lessons learned, and criteria as a precondition for a successful participation. Due to the comprehensive orientation of the Carolo-Cup, this knowledge can be applied to comparable projects and related competitions as well."} {"_id": "21fb86020f68bf2dd57cd1b8a0e8adead5d9a9ae", "title": "Data Mining: Concepts and Techniques", "text": "Association rule mining was first proposed by Agrawal, Imielinski, and Swami [AIS93]. The Apriori algorithm discussed in Section 5.2.1 for frequent itemset mining was presented in Agrawal and Srikant [AS94b]. A variation of the algorithm using a similar pruning heuristic was developed independently by Mannila, Toivonen, and Verkamo [MTV94]. A joint publication combining these works later appeared in Agrawal, Mannila, Srikant, Toivonen, and Verkamo [AMS96]. A method for generating association rules from frequent itemsets is described in Agrawal and Srikant [AS94a]."} {"_id": "4eae6ee36de5f9ae3c05c6ca385938de98cd5ef8", "title": "Combining Text and Linguistic Document Representations for Authorship Attribution", "text": "In this paper, we provide several alternatives to the classical Bag-Of-Words model for automatic authorship attribution. To this end, we consider linguistic and writing style information such as grammatical structures to construct different document representations. Furthermore, we describe two techniques to combine the obtained representations: combination vectors and ensemble based meta classification. Our experiments show the viability of our approach."} {"_id": "288c67457f09c0c30cadd7439040114e9c377bc3", "title": "Finding Interesting Rules from Large Sets of Discovered Association Rules", "text": "Association rules, introduced by Agrawal, Imielinski, and Swami, are rules of the form \u201cfor 90% of the rows of the relation, if the row has value 1 in the columns in set W, then it has 1 also in column B\u201d. Efficient methods exist for discovering association rules from large collections of data. The number of discovered rules can, however, be so large that browsing the rule set and finding interesting rules from it can be quite difficult for the user. We show how a simple formalism of rule templates makes it possible to easily describe the structure of interesting rules. We also give examples of visualization of rules, and show how a visualization tool interfaces with rule templates."} {"_id": "384bb3944abe9441dcd2cede5e7cd7353e9ee5f7", "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "text": ""} {"_id": "49fa97db6b7f3ab2b3a623c3552aa680b80c8dd2", "title": "Automatically Categorizing Written Texts by Author Gender", "text": "The problem of automatically determining the gender of a document's author would appear to be a more subtle problem than those of categorization by topic or authorship attribution. Nevertheless, it is shown that automated text categorization techniques can exploit combinations of simple lexical and syntactic features to infer the gender of the author of an unseen formal written document with approximately 80% accuracy. 
The same techniques can be used to determine if a document is fiction or non-fiction with approximately 98% accuracy."} {"_id": "883224c3b28b0563a393746066738f52e6fcc70d", "title": "To Create What You Tell: Generating Videos from Captions", "text": "We are creating multimedia content every day and everywhere. While automatic content generation has posed a fundamental challenge to the multimedia community for decades, recent advances in deep learning have made this problem feasible. For example, Generative Adversarial Networks (GANs) are a rewarding approach to synthesize images. Nevertheless, it is not trivial when capitalizing on GANs to generate videos. The difficulty originates from the intrinsic structure where a video is a sequence of visually coherent and semantically dependent frames. This motivates us to explore semantic and temporal coherence in designing GANs to generate videos. In this paper, we present a novel Temporal GANs conditioning on Captions, namely TGANs-C, in which the input to the generator network is a concatenation of a latent noise vector and caption embedding, and then is transformed into a frame sequence with 3D spatio-temporal convolutions. Unlike the naive discriminator which only judges pairs as fake or real, our discriminator additionally notes whether the video matches the correct caption. In particular, the discriminator network consists of three discriminators: a video discriminator that classifies realistic videos from generated ones and optimizes video-caption matching, a frame discriminator that discriminates between real and fake frames and aligns frames with the conditioning caption, and a motion discriminator that emphasizes the philosophy that adjacent frames in the generated videos should be smoothly connected as in real ones. We qualitatively demonstrate the capability of our TGANs-C to generate plausible videos conditioning on the given captions on two synthetic datasets (SBMG and TBMG) and one real-world dataset (MSVD). Moreover, quantitative experiments on MSVD are performed to validate our proposal via Generative Adversarial Metric and human study."} {"_id": "f4b3804f052f15f8a25af78db24e32dc25254722", "title": "BIM for Infrastructure: An Overall Review and Constructor Perspective", "text": "The subject of Building Information Modelling (BIM) has become a central topic to the improvement of the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry around the world, to the point where the concept is being expanded into domains it was not originally conceived to address. Transitioning BIM into the domain of infrastructure projects has provided challenges and emphasized the constructor perspective of BIM. Therefore, this study aims to collect the relevant literature regarding BIM within the Infrastructure domain and its use from the constructor perspective to review and analyse the current industry positioning and research state of the art, with regards to the set criteria. The review highlighted a developing base of BIM for infrastructure. From the analysis, the related research gaps were identified regarding information integration, alignment of BIM processes to constructor business processes & the effective governance and value of information. 
From this a unique research strategy utilising a framework for information governance coupled with a graph-based distributed data environment is outlined to further progress the integration and efficiency of AECOO Infrastructure projects."} {"_id": "36d167397872c36713e8a274113b30ea5cd3ad7d", "title": "Enterprise Database Applications and the Cloud: A Difficult Road Ahead", "text": "There is considerable interest in moving DBMS applications from inside enterprise data centers to the cloud, both to reduce cost and to increase flexibility and elasticity. Some of these applications are \"green field\" projects (i.e., new applications), others are existing legacy systems that must be migrated to the cloud. In another dimension, some are decision support applications while others are update-oriented. In this paper, we discuss the technical and political challenges that these various enterprise applications face when considering cloud deployment. In addition, a requirement for quality-of-service (QoS) guarantees will generate additional disruptive issues. In some circumstances, achieving good DBMS performance on current cloud architectures and future hardware technologies will be non-trivial. In summary, there is a difficult road ahead for enterprise database applications."} {"_id": "f61ca00abf165ea5590f67942c9bd7538187752d", "title": "Laughbot: Detecting Humor in Spoken Language with Language and Audio Cues", "text": "We propose detecting and responding to humor in spoken dialogue by extracting language and audio cues and subsequently feeding these features into a combined recurrent neural network (RNN) and logistic regression model. In this paper, we parse Switchboard phone conversations to build a corpus of punchlines and unfunny lines where punchlines precede laughter tokens in Switchboard transcripts. We create a combined RNN and logistic regression model that uses both acoustic and language cues to predict whether a conversational agent should respond to an utterance with laughter. Our model achieves an F1-score of 63.2 and accuracy of 73.9. This model outperforms our logistic language model (F1-score 56.6) and RNN acoustic model (59.4) as well as the final RNN model of D. Bertero, 2016 (52.9). Using our final model, we create a \u201claughbot\u201d that audibly responds to a user with laughter when their utterance is classified as a punchline. A conversational agent outfitted with a humor-recognition system such as the one we present in this paper would be valuable as these agents gain utility in everyday life. 
"} {"_id": "4c27eef7fa83900ef8f2e48a523750d035830342", "title": "Reconstruction of dorsal and/or caudal nasal septum deformities with septal battens or by septal replacement: an overview and comparison of techniques.", "text": "OBJECTIVES\nThe objectives of this study were to describe and compare two techniques used to correct nasal septum deviations located in the dorsal and/or caudal septum.\n\n\nSTUDY DESIGN\nThe authors conducted a retrospective clinical chart review.\n\n\nMETHODS\nThe authors conducted a comparison of functional and technical results between surgery in the L-strut of the septum in 114 patients with septal battens or by septal replacement by subjective self-evaluation and by examination of the position of the septum during follow-up.\n\n\nRESULTS\nThere was subjective improvement in nasal breathing in 86% of the septal batten group and in 94% of the septal replacement group. This difference was not statistically significant. The technical result was judged by examining the position of the septum during follow-up as midline, slightly deviated, or severely deviated. The septum was significantly more often located in the midline during follow-up in the septal replacement group than in the septal batten group.\n\n\nCONCLUSION\nTreatment of deformities located in the structurally important L-strut of the septum may be technically challenging and many functional, structural, and esthetic considerations must be taken into account. On the basis of this series, both septal battens and septal replacement techniques may be considered for correction of deviations in this area. The functional improvement rates were not significantly different between the techniques, although during follow-up, the septum appeared to be significantly more often located in the midline in the septal replacement group. The techniques are described and their respective advantages and potential drawbacks are discussed."} {"_id": "1b5f18498b42e464b81e3d81b8d32237aea4a234", "title": "DroidTrace: A ptrace based Android dynamic analysis system with forward execution capability", "text": "Android, being an open source smartphone operating system, enjoys a large community of developers who create new mobile services and applications. However, it also attracts malware writers to exploit Android devices in order to distribute malicious apps in the wild. In fact, Android malware are becoming more sophisticated and they use advanced \u201cdynamic loading\u201d techniques like Java reflection or native code execution to bypass security detection. To detect dynamic loading, one has to use dynamic analysis. Currently, there are only a handful of Android dynamic analysis tools available, and they all have shortcomings in detecting dynamic loading. The aim of this paper is to design and implement a dynamic analysis system which allows analysts to perform systematic analysis of dynamic payloads with malicious behaviors. We propose \u201cDroidTrace\u201d, a ptrace based dynamic analysis system with forward execution capability. Our system uses ptrace to monitor selected system calls of the target process which is running the dynamic payloads, and classifies the payloads' behaviors through the system call sequence, e.g., behaviors such as file access, network connection, inter-process communication and even privilege escalation. 
Also, DroidTrace performs \u201cphysical modification\u201d to trigger different dynamic loading behaviors within an app. Using DroidTrace, we carry out a large scale analysis on 36,170 dynamic payloads in 50,000 apps and 294 malware in 10 families (four of them are zero-day) with various dynamic loading behaviors."} {"_id": "ef9473055dd96e5d146c88ae3cc88d06e7adfd07", "title": "Understanding the Dynamic Interplay of Social Buzz and Contribution Behavior within and between Online Platforms - Evidence from Crowdfunding", "text": "Motivated by the growing interconnection between online platforms, we examine the dynamic interplay between social buzz and contribution behavior in the crowdfunding context. Since the utility of crowdfunding projects is usually difficult to ascertain, prospective backers draw on quality signals, such as social buzz and prior-contribution behavior, to make their funding decisions. We employ the panel vector autoregression (PVAR) methodology to investigate both intra- and cross-platform effects based on data collected from three platforms: Indiegogo, one of the largest crowdfunding platforms on the web, Twitter and Facebook. Our results show a positive influence of social buzz on project backing, but a negative relationship in the reverse direction. Furthermore, we observe strong positive feedback cycles within each platform. Our results are supplemented by split-sample analyses for project orientation (Social, Cause and Entrepreneurial) and project success (Winners vs. Losers), in which Facebook shares were identified as a critical success factor."} {"_id": "6e5d8a30531680beb200cd6f0de91a7919381520", "title": "Comparing exploration strategies for Q-learning in random stochastic mazes", "text": "Balancing the ratio between exploration and exploitation is an important problem in reinforcement learning. This paper evaluates four different exploration strategies combined with Q-learning using random stochastic mazes to investigate their performances. We will compare: UCB-1, softmax, \u03b5-greedy, and pursuit. For this purpose we adapted the UCB-1 and pursuit strategies to be used in the Q-learning algorithm. The mazes consist of a single optimal goal state and two suboptimal goal states that lie closer to the starting position of the agent, which makes efficient exploration an important part of the learning agent. Furthermore, we evaluate two different kinds of reward functions, a normalized one with rewards between 0 and 1, and an unnormalized reward function that penalizes the agent for each step with a negative reward. We have performed an extensive grid-search to find the best parameters for each method and used the best parameters on novel randomly generated maze problems of different sizes. The results show that softmax exploration outperforms the other strategies, although it is harder to tune its temperature parameter. The worst performing exploration strategy is \u03b5-greedy."} {"_id": "a22aa5a7e98fe4fad1ec776df8d423b1c8b373ef", "title": "Character-based movie summarization", "text": "A decent movie summary is helpful for movie producers to promote the movie as well as for audiences to capture the theme of the movie before watching the whole movie. Most existing automatic movie summarization approaches heavily rely on video content only, which may not deliver ideal results due to the semantic gap between computer-calculated low-level features and human-used high-level understanding. 
In this paper, we incorporate the script into movie analysis and propose a novel character-based movie summarization approach, which is supported by modern film theory holding that what actually catches audiences' attention is the character. We first segment scenes in the movie by analysis and alignment of the script and the movie. Then we conduct substory discovery and content attention analysis based on the scene analysis and character interaction features. Given the obtained movie structure and content attention values, we calculate movie attraction scores at both shot and scene levels and adopt this as the criterion to generate the movie summary. The promising experimental results demonstrate that character analysis is effective for movie summarization and movie content understanding."} {"_id": "7af4c0e2899042aea87d4a37aadd9f60b53cf272", "title": "Distributed Denial-of-Service (DDoS) Threat in Collaborative Environment - A Survey on DDoS Attack Tools and Traceback Mechanisms", "text": "Collaborative applications are feasible nowadays and are becoming more popular due to the advancement in internetworking technology. Typical collaborative applications in India include space research, military applications, higher learning in universities and satellite campuses, state and central government sponsored projects, e-governance, e-healthcare systems, etc. In such applications, computing resources for a particular institution/organization are spread across districts and states, and communication is achieved through internetworking. Therefore the computing and communication resources must be protected against security attacks, as any compromise on these resources would jeopardize the entire application/mission. A collaborative environment is prone to various threats, of which Distributed Denial of Service (DDoS) attacks are of major concern. A DDoS attack prevents legitimate access to critical resources. A survey by Arbor Networks reveals that approximately 1,200 DDoS attacks occur per day. As a DDoS attack is coordinated, the defense against it has to be collaborative as well. To counter DDoS attacks in a collaborative environment, all the routers need to work collaboratively by exchanging caveat messages with their neighbors. This paper analyses the security measures in a collaborative environment, identifies the popular DDoS attack tools, and surveys the existing traceback mechanisms to trace the real attacker."} {"_id": "34977babbdc735c56b04668c19da31d89161a2b9", "title": "Geolocation of RF Emitters by Many UAVs", "text": "This paper presents an approach to using a large team of UAVs to find radio frequency (RF) emitting targets in a large area. Small, inexpensive UAVs that can collectively and rapidly determine the approximate location of intermittently broadcasting and mobile RF emitters have a range of applications in both military (e.g., finding SAM batteries) and civilian (e.g., finding lost hikers) domains. Received Signal Strength Indicator (RSSI) sensors on board the UAVs measure the strength of RF signals across a range of frequencies. The signals, although noisy and ambiguous due to structural noise (e.g., multipath effects), overlapping signals and sensor noise, allow estimates to be made of emitter locations. Generating a probability distribution over emitter locations requires integrating multiple signals from different UAVs into a Bayesian filter, hence requiring cooperation between the UAVs.
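To make the fusion step concrete, here is a minimal sketch of a gridded Bayesian filter that integrates RSSI readings from several UAVs into a posterior over emitter location. The log-distance path-loss model and every numeric parameter (transmit power, path-loss exponent, noise sigma, grid size) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def update_emitter_belief(belief, grid_x, grid_y, uav_pos, rssi_dbm,
                          tx_power=-30.0, path_loss_exp=2.5, sigma=6.0):
    """One Bayesian update of a gridded emitter-location belief from a
    single RSSI reading, assuming a log-distance path-loss model with
    Gaussian measurement noise (all parameters are placeholders)."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    dist = np.hypot(xx - uav_pos[0], yy - uav_pos[1]) + 1e-6
    expected_rssi = tx_power - 10.0 * path_loss_exp * np.log10(dist)
    likelihood = np.exp(-0.5 * ((rssi_dbm - expected_rssi) / sigma) ** 2)
    posterior = belief * likelihood           # Bayes rule, then normalize
    return posterior / posterior.sum()

# Fusing measurements from several cooperating UAVs is a sequence of
# updates against the shared belief grid:
gx = gy = np.linspace(0.0, 1000.0, 200)
belief = np.full((200, 200), 1.0 / 200**2)    # uniform prior over the area
for uav_xy, rssi in [((120.0, 40.0), -85.0), ((640.0, 710.0), -92.0)]:
    belief = update_emitter_belief(belief, gx, gy, uav_xy, rssi)
```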
Once likely target locations are identified, EO-camera equipped UAVs must be tasked to provide a video stream of the area to allow a user to identify the emitter."} {"_id": "cb32e2100a853e7ea491b1ac17b941f64f8720df", "title": "75\u201385 GHz flip-chip phased array RFIC with simultaneous 8-transmit and 8-receive paths for automotive radar applications", "text": "This paper presents the first 75-85 GHz phased-array RFIC with simultaneous 8-transmit and 8-receive paths for FMCW automotive radars. The receive path has two separate I/Q mixers, each connected to 4-element phased arrays for RF and digital beamforming. The chip also contains a built-in self-test (BIST) system for the transmit and receive paths. Measurements on a flip-chip prototype show a gain >24 dB at 77 GHz, -25 dB coupling between adjacent channels in the transmit and receive paths (<-45 dB between non-adjacent channels), and <-50 dB coupling between the transmit and receive portions of the chip."} {"_id": "d21ebaab3f715dc7178966ff146711882e6a6fee", "title": "Globally and locally consistent image completion", "text": "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces."} {"_id": "aae7bde6972328de6c23a510fe59254854163308", "title": "DEFO-NET: Learning Body Deformation Using Generative Adversarial Networks", "text": "Modelling the physical properties of everyday objects is a fundamental prerequisite for autonomous robots. We present a novel generative adversarial network (DEFO-NET), able to predict body deformations under external forces from a single RGB-D image. The network is based on an invertible conditional Generative Adversarial Network (IcGAN) and is trained on a collection of different objects of interest generated by a physical finite element model simulator. DEFO-NET inherits the generalisation properties of GANs. This means that the network is able to reconstruct the whole 3-D appearance of the object given a single depth view of the object and to generalise to unseen object configurations. Contrary to traditional finite element methods, our approach is fast enough to be used in real-time applications. We apply the network to the problem of safe and fast navigation of mobile robots carrying payloads over different obstacles and floor materials.
Experimental results in real scenarios show how a robot equipped with an RGB-D camera can use the network to predict terrain deformations under different payload configurations and use this to avoid unsafe areas."} {"_id": "651adaa058f821a890f2c5d1053d69eb481a8352", "title": "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples", "text": "We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers."} {"_id": "33eb066178d7ec2307f1db171f90c8338108bcc6", "title": "Graphical SLAM - a self-correcting map", "text": "We describe an approach to simultaneous localization and mapping, SLAM. This approach has the highly desirable property of robustness to data association errors. Another important advantage of our algorithm is that non-linearities are computed exactly, so that global constraints can be imposed even if they result in large shifts to the map. We represent the map as a graph and use the graph to find an efficient map update algorithm. We also show how topological consistency can be imposed on the map, such as closing a loop. The algorithm has been implemented on an outdoor robot and we have experimental validation of our ideas. We also explain how the graph can be simplified, leading to linear approximations of sections of the map. This reduction gives us a natural way to connect local map patches into a much larger global map."} {"_id": "711d59dba9bd4284170ccae24fdc2a14519cf941", "title": "A novel approach to American sign language recognition using MAdaline neural network", "text": "Sign language interpretation is gaining a lot of research attention because of its social contributions, which are extremely beneficial for people with hearing or speaking disabilities. This paper proposes a novel image-processing sign language detection framework that employs a MAdaline network for classification. The paper highlights two novel aspects. First, it introduces an advanced feature set comprising seven distinct features that have not been widely used for sign language interpretation; moreover, utilization of such features removes the cumbersome step of cropping away the irrelevant background image, thus reducing system complexity. Second, it suggests that a possible solution to the problem can be obtained using an extension of the traditional Adaline network, formally termed the MAdaline network. Although the concept of the MAdaline network originated much earlier, applying this framework in this domain helps in designing an improved sign language interpreting interface.
The newly formulated framework has been implemented to recognize standardized American Sign Language covering the 26 English letters from \u2018A\u2019 to \u2018Z\u2019. The performance of the proposed algorithm has also been compared with standard algorithms, and in each case it outperformed the contender algorithms by a large margin, establishing its efficiency."} {"_id": "8f9a6313e525e33d88bb6f756e22bfec5272aab3", "title": "Design and Optimization a Circular Shape Network Antenna Micro Strip for Some Application", "text": "To meet the high-speed demands of recent generations of mobile communication, one solution is to increase the number of antennas at the transmission and reception ends of the wireless link; this is called MIMO (Multiple Input, Multiple Output) technology. However, the integration of multiple antennas on the same PCB is delicate because of the small volume that some applications require and because of the electromagnetic coupling between antennas, a phenomenon that cannot be neglected. Indeed, strong isolation between the antennas must be achieved to reduce fading of the signal caused by electromagnetic coupling and to maximize the overall gain. In this article we are interested in the integration, on the same printed circuit, of eight MIMO antennas that do not operate in the same frequency band. The first antenna operates at 2.4 GHz; the other antennas have resonance frequencies that follow each other with a 20 MHz offset. This device is characterized by its original form, which keeps its antennas highly isolated from the point of view of electromagnetic coupling. INDEX TERMS MIMO, Technology Micro-strip, Microwave, Network Antenna"} {"_id": "2ad08da69a014691ae76cf7f53534b40b412c0e4", "title": "Network Traffic Anomaly Detection", "text": "This paper presents a tutorial for network anomaly detection, focusing on non-signature-based approaches. Network traffic anomalies are unusual and significant changes in the traffic of a network. Networks play an important role in today\u2019s social and economic infrastructures. The security of the network becomes crucial, and network traffic anomaly detection constitutes an important part of network security. In this paper, we present three major approaches to non-signature-based network detection: PCA-based, sketch-based, and signal-analysis-based. In addition, we introduce a framework that subsumes the three approaches and a scheme for network anomaly extraction. We believe network anomaly detection will become more important in the future because of the increasing importance of network security."} {"_id": "3cbb64df30f2581016542d2c0441f35e8a8c2147", "title": "Forward-Private Dynamic Searchable Symmetric Encryption with Efficient Search", "text": "Dynamic Searchable Symmetric Encryption (DSSE) allows delegation of keyword search and file update over an encrypted database via encrypted indexes, and therefore provides opportunities to mitigate the data privacy and utilization dilemma in cloud storage platforms. Despite its merits, recent works have shown that efficient DSSE schemes are vulnerable to statistical attacks due to the lack of forward privacy, whereas forward-private DSSE schemes suffer from practicality concerns as a result of their extreme computation overhead. Due to the significant practical impact of statistical attacks, there is a critical need for new DSSE schemes that can achieve forward privacy in a more practical and efficient manner.
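Before the scheme itself, a toy sketch of the forward-privacy idea at stake (a generic illustration, not the FS-DSSE construction described next): each update for a keyword is stored under a fresh pseudorandom address derived from a per-keyword key and a counter, so the server cannot link new updates to previously searched keywords.

```python
import hmac
import hashlib
import os

class ToyForwardPrivateIndex:
    """Minimal forward-private index: update addresses are HMAC outputs
    keyed per keyword and chained by a counter, so each new update is
    unlinkable by the server until the keyword is searched again."""

    def __init__(self):
        self.master = os.urandom(32)   # client-side master secret
        self.counters = {}             # keyword -> number of updates so far
        self.server_store = {}         # address -> file id (would be encrypted in practice)

    def _keyword_key(self, keyword):
        return hmac.new(self.master, keyword.encode(), hashlib.sha256).digest()

    def add(self, keyword, file_id):
        c = self.counters.get(keyword, 0)
        addr = hmac.new(self._keyword_key(keyword),
                        c.to_bytes(8, "big"), hashlib.sha256).hexdigest()
        self.server_store[addr] = file_id
        self.counters[keyword] = c + 1

    def search(self, keyword):
        # The client reveals (key, count); the server re-derives all addresses.
        key, n = self._keyword_key(keyword), self.counters.get(keyword, 0)
        return [self.server_store[hmac.new(key, c.to_bytes(8, "big"),
                                           hashlib.sha256).hexdigest()]
                for c in range(n)]
```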
We propose a new DSSE scheme that we refer to as Forward-private Sublinear DSSE (FS-DSSE). FS-DSSE harnesses special secure update strategies and a novel caching strategy to reduce the computation cost of repeated queries. Therefore, it achieves forward privacy, sublinear search complexity, low end-to-end delay, and parallelization capability simultaneously. We fully implemented our proposed method and evaluated its performance on a real cloud platform. Our experimental evaluation showed that the proposed scheme is highly secure and highly efficient compared with state-of-the-art DSSE techniques. Specifically, FS-DSSE is up to three orders of magnitude faster than forward-secure DSSE counterparts, depending on the frequency of the searched keyword in the database."} {"_id": "18d5fc8a3f2c7e9bac55fff40e0ecf3112196813", "title": "Performance Analysis of Classification Algorithms on Medical Diagnoses-a Survey", "text": "Corresponding Author: Vanaja, S., Research Scholar and Research guide, Research and Development, Bharathiar University, Coimbatore, Tamil Nadu, India Email: vanajasha@yahoo.com Abstract: The aim of this research paper is to study and discuss the various classification algorithms applied to different kinds of medical datasets and to compare their performance. The classification algorithms with maximum accuracies on various kinds of medical datasets are taken for performance analysis. The result of the performance analysis shows the most frequently used algorithms on particular medical datasets and the best classification algorithm for analysing a specific disease. This study gives the details of different classification algorithms and feature selection methodologies. The study also discusses the data constraints, such as volume and dimensionality problems. This research paper also discusses the new features of the C5.0 classification algorithm over C4.5 and the performance of classification algorithms on high-dimensional datasets. This research paper summarizes various reviews and technical articles which focus on current research on medical diagnosis."} {"_id": "132b3bd259bf10a41c00330a49de701c4e59a7ca", "title": "Semantic MEDLINE: An advanced information management application for biomedicine", "text": "Semantic MEDLINE integrates information retrieval, advanced natural language processing, automatic summarization, and visualization into a single Web portal. The application is intended to help manage the results of PubMed searches by condensing core semantic content in the citations retrieved. Output is presented as a connected graph of semantic relations, with links to the original MEDLINE citations. The ability to connect salient information across documents helps users keep up with the research literature and discover connections which might otherwise go unnoticed. Semantic MEDLINE can make an impact on biomedicine by supporting scientific discovery and the timely translation of insights from basic research into advances in clinical practice and patient care. Marcelo Fiszman has an M.D. from the State University of Rio de Janeiro and a Ph.D. in biomedical informatics from the University of Utah. He was awarded a postdoctoral fellowship in biomedical informatics at the National Library of Medicine (NLM) and is currently a research scientist there. His work focuses on natural language processing algorithms that exploit symbolic, rule-based techniques for semantic interpretation of biomedical text.
He is also interested in using extracted semantic information for automatic abstraction summarization and literature-based discovery. These efforts underpin Semantic MEDLINE, which is currently under development at NLM. This innovative biomedical information management application combines document retrieval, semantic interpretation, automatic summarization, and knowledge visualization into a single application."} {"_id": "49b6601bd93f4cfb606c6c9d6be2ae7d4da7e5ac", "title": "Effects of Professional Development on Teachers ' Instruction : Results from a Three-Year Longitudinal Study", "text": "This article examines the effects of professional development on teachers' instruction. Using a purposefully selected sample of about 207 teachers in 30 schools, in 10 districts in five states, we examine features of teachers' professional development and its effects on changing teaching practice in mathematics and science from 1996-1999. We found that professional development focused on specific instructional practices increases teachers' use of those practices in the classroom. Furthermore, we found that specific features, such as active learning opportunities, increase the effect of the professional development on teachers' instruction. What are the characteristics of professional development that affect teaching practice? This study adds to the knowledge base on effective professional development. The success of standards-based reform depends on teachers' ability to foster both basic knowledge and advanced thinking and problem solving among their students (Loucks-Desimone et al. Education 1999a), and surveys of teachers about their preservice preparation and in-service professional development experiences (e.g., Carey & Frechtling, 1997). In addition, there is a considerable amount of literature describing "best practices" in professional development, drawing on expert experiences (e.g., Loucks-Horsley et al., 1998). A professional consensus is emerging about particular characteristics of "high quality" professional development. These characteristics include a focus on content and how students learn content; in-depth, active learning opportunities; links to high standards; opportunities for teachers to engage in leadership roles; extended duration; and the collective participation of groups of teachers from the same school, grade, or department. Although lists of characteristics such as these commonly appear in the literature on effective professional development, there is little direct evidence on the extent to which these characteristics are related to better teaching and increased student achievement. Some studies conducted over the past decade suggest that professional development experiences that share all or most of these characteristics can have a substantial, positive influence on teachers' classroom practice and student achievement. A few recent studies have begun to examine the relative importance of specific characteristics of professional development.
Several studies have found that the intensity and \u2026"} {"_id": "853ba021b20e566a632a3c6e047b06c8914ec37d", "title": "\"It's alive, it's magic, it's in love with you\": opportunities, challenges and open questions for actuated interfaces", "text": "Actuated Interfaces are receiving a great deal of interest from the research community. The field can now present a range of point designs, illustrating the potential design space of Actuated Interfaces. However, despite the increasing interest in Actuated Interfaces, the research carried out is nevertheless primarily preoccupied with the technical challenges and potential application areas, rather than how users actually approach, experience, interpret and understand Actuated Interfaces. Based on three case studies investigating how people experience Actuated Interfaces, we point to magic, movement and ambiguity as fruitful perspectives for understanding users' experiences with Actuated Interfaces. The three perspectives are employed to reflect upon opportunities and challenges, as well as to point to open questions and relevant areas for future research on Actuated Interfaces."} {"_id": "8c5ca158d90b3b034db872e5a82986af0146abf3", "title": "Automatic Airspace Sectorisation: A Survey", "text": "Airspace sectorisation provides a partition of a given airspace into sectors, subject to geometric constraints and workload constraints, so that some cost metric is minimised. We survey the algorithmic aspects of methods for automatic airspace sectorisation, for an intended readership of experts on air traffic management."} {"_id": "8ed0ee88e811aaaad56d1a42e2cfce02edcc90ff", "title": "Brain structures differ between musicians and non-musician", "text": "From an early age, musicians learn complex motor and auditory skills (e.g., the translation of visually perceived musical symbols into motor commands with simultaneous auditory monitoring of output), which they practice extensively from childhood throughout their entire careers. Using a voxel-by-voxel morphometric technique, we found gray matter volume differences in motor, auditory, and visual-spatial brain regions when comparing professional musicians (keyboard players) with a matched group of amateur musicians and non-musicians. Although some of these multiregional differences could be attributable to innate predisposition, we believe they may represent structural adaptations in response to long-term skill acquisition and the repetitive rehearsal of those skills. This hypothesis is supported by the strong association we found between structural differences, musician status, and practice intensity, as well as the wealth of supporting animal data showing structural changes in response to long-term motor training. However, only future experiments can determine the relative contribution of predisposition and practice."} {"_id": "cc54251f84c8577ca862fec41a1766c9a0d4a7b8", "title": "Updating P300: An integrative theory of P3a and P3b", "text": "The empirical and theoretical development of the P300 event-related brain potential (ERP) is reviewed by considering factors that contribute to its amplitude, latency, and general characteristics. The neuropsychological origins of the P3a and P3b subcomponents are detailed, and how target/standard discrimination difficulty modulates scalp topography is discussed.
The neural loci of P3a and P3b generation are outlined, and a cognitive model is proffered: P3a originates from stimulus-driven frontal attention mechanisms during task processing, whereas P3b originates from temporal-parietal activity associated with attention and appears related to subsequent memory processing. Neurotransmitter actions associating P3a to frontal/dopaminergic and P3b to parietal/norepinephrine pathways are highlighted. Neuroinhibition is suggested as an overarching theoretical mechanism for P300, which is elicited when stimulus detection engages memory operations."} {"_id": "6d3f38ea64c84d5ca1569fd73497e34525baa215", "title": "Increased auditory cortical representation in musicians", "text": "Acoustic stimuli are processed throughout the auditory projection pathway, including the neocortex, by neurons that are aggregated into \u2018tonotopic\u2019 maps according to their specific frequency tunings. Research on animals has shown that tonotopic representations are not statically fixed in the adult organism but can reorganize after damage to the cochlea or after training the intact subject to discriminate between auditory stimuli. Here we used functional magnetic source imaging (single dipole model) to measure cortical representations in highly skilled musicians. Dipole moments for piano tones, but not for pure tones of similar fundamental frequency (matched in loudness), were found to be enlarged by about 25% in musicians compared with control subjects who had never played an instrument. Enlargement was correlated with the age at which musicians began to practise and did not differ between musicians with absolute or relative pitch. These results, when interpreted with evidence for modified somatosensory representations of the fingering digits in skilled violinists, suggest that use-dependent functional reorganization extends across the sensory cortices to reflect the pattern of sensory input processed by the subject during development of musical skill."} {"_id": "781ee20481f4e2e4accfa2b1cd6d70d5854bb171", "title": "Navigation-related structural change in the hippocampi of taxi drivers.", "text": "Structural MRIs of the brains of humans with extensive navigation experience, licensed London taxi drivers, were analyzed and compared with those of control subjects who did not drive taxis. The posterior hippocampi of taxi drivers were significantly larger relative to those of control subjects. A more anterior hippocampal region was larger in control subjects than in taxi drivers. Hippocampal volume correlated with the amount of time spent as a taxi driver (positively in the posterior and negatively in the anterior hippocampus). These data are in accordance with the idea that the posterior hippocampus stores a spatial representation of the environment and can expand regionally to accommodate elaboration of this representation in people with a high dependence on navigational skills. It seems that there is a capacity for local plastic change in the structure of the healthy adult human brain in response to environmental demands."} {"_id": "3d7a8f5557b6a219e44c0c9fbb81aa0b668e65f9", "title": "Extended Kalman filtering for battery management systems of LiPB-based HEV battery packs Part 1 . Background", "text": "Battery management systems (BMS) in hybrid-electric-vehicle (HEV) battery packs must estimate values descriptive of the pack\u2019s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. 
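As a deliberately simplified illustration of the EKF machinery this series develops (a single-state coulomb-counting model with a linear placeholder OCV curve — not the cell model identified in the papers), one predict/update step might look like this:

```python
# Toy cell model: state x = SOC in [0, 1]
# dynamics:    x[k+1] = x[k] - i[k] * DT / Q_AS          (coulomb counting)
# measurement: v[k]   = ocv(x[k]) - R0 * i[k]            (terminal voltage)
Q_AS, R0, DT = 5.0 * 3600.0, 0.01, 1.0   # capacity [A s], resistance [ohm], step [s]

def ocv(soc):
    return 3.2 + 0.9 * soc               # placeholder open-circuit-voltage curve

def d_ocv(soc):
    return 0.9                           # its slope: the measurement Jacobian

def ekf_step(x, P, current, v_meas, q=1e-8, r=1e-3):
    # Predict: propagate state and variance (state Jacobian A = 1)
    x_pred = x - current * DT / Q_AS
    P_pred = P + q
    # Update: linearize the voltage model around the prediction
    C = d_ocv(x_pred)
    K = P_pred * C / (C * P_pred * C + r)            # Kalman gain
    innovation = v_meas - (ocv(x_pred) - R0 * current)
    return x_pred + K * innovation, (1.0 - K * C) * P_pred

soc, var = 0.8, 1e-2                                  # initial guess
soc, var = ekf_step(soc, var, current=2.0, v_meas=3.90)
```

Real BMS filters track additional states (diffusion voltages, hysteresis) and adapt the model as the cell ages, which is exactly why the series argues for the heavier EKF machinery.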
The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium-ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. This first paper investigates the estimation requirements for HEV BMS in some detail, in parallel to the requirements for other battery-powered applications. The comparison leads us to understand that the HEV environment is very challenging on batteries and the BMS, and that precise estimation of some parameters will improve performance and robustness, and will ultimately lengthen the useful lifetime of the pack. This conclusion motivates the use of more complex algorithms than might be used in other applications. Our premise is that EKF then becomes a very attractive approach. This paper introduces the basic method, gives some intuitive feel to the necessary computational steps, and concludes by presenting an illustrative example as to the type of results that may be obtained using EKF. \u00a9 2004 Elsevier B.V. All rights reserved."} {"_id": "ed0034302ecba18ca29a5e23baa39b23409add7b", "title": "Price-Based Global Market Segmentation for Services", "text": "In business-to-business marketing, managers are often tasked with developing effective global pricing strategies for customers characterized by different cultures and different utilities for product attributes. The challenges of formulating international pricing schedules are especially evident in global markets for service offerings, where intensive customer contact, extensive customization requirements, and reliance on extrinsic cues for service quality make pricing particularly problematic. The purpose of this article is to develop and test a model of the antecedents of business customers\u2019 price elasticities of demand for services in an international setting. The article begins with a synthesis of the services, pricing, and global marketing literature streams and then identifies factors that account for differences in business customers\u2019 price elasticities for service offerings across customers in Asia Pacific, Europe, and North America. The findings indicate that price elasticities depend on service quality, service type, and level of service support and that horizontal segments do exist, which provides support for pricing strategies transcending national borders. The article concludes with a discussion of the managerial implications of these results for effective segmentation of global markets for services."} {"_id": "e3b17a245dce9a2189a8a4f7538631b69c93812e", "title": "Adversarial Patch", "text": "We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. 
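A minimal sketch of the patch-application step only (random placement on a host image; a full attack would also sample rotation and scale and would train the patch pixels by gradient ascent on the target-class probability, none of which is shown here):

```python
import numpy as np

def apply_patch(image, patch, rng=None):
    """Paste a square patch at a random location -- one of the random
    transformations a universal patch is trained to survive."""
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    ph, pw, _ = patch.shape
    y = int(rng.integers(0, h - ph + 1))
    x = int(rng.integers(0, w - pw + 1))
    out = image.copy()
    out[y:y + ph, x:x + pw] = patch      # the patch replaces pixels outright
    return out

image = np.zeros((224, 224, 3), dtype=np.float32)   # placeholder scene
patch = np.ones((50, 50, 3), dtype=np.float32)      # placeholder patch
attacked = apply_patch(image, patch)
```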
These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class."} {"_id": "d5b023d962e15c519993eed913a73f44c595a7ad", "title": "Efficacy and safety of Meriva\u00ae, a curcumin-phosphatidylcholine complex, during extended administration in osteoarthritis patients.", "text": "In a previous three-month study of Meriva, a proprietary curcumin-phosphatidylcholine phytosome complex, decreased joint pain and improvement in joint function were observed in 50 osteoarthritis (OA) patients. Since OA is a chronic condition requiring prolonged treatment, the long-term efficacy and safety of Meriva were investigated in a longer (eight months) study involving 100 OA patients. The clinical end points (Western Ontario and McMaster Universities [WOMAC] score, Karnofsky Performance Scale Index, and treadmill walking performance) were complemented by the evaluation of a series of inflammatory markers (interleukin [IL]-1beta, IL-6, soluble CD40 ligand [sCD40L], soluble vascular cell adhesion molecule (sVCAM)-1, and erythrocyte sedimentation rate [ESR]). This represents the most ambitious attempt, to date, to evaluate the clinical efficacy and safety of curcumin as an anti-inflammatory agent. Significant improvements of both the clinical and biochemical end points were observed for Meriva compared to the control group. This, coupled with an excellent tolerability, suggests that Meriva is worth considering for the long-term complementary management of osteoarthritis."} {"_id": "d82d9cb8411cc8d39a26b821f98cda80b08124d7", "title": "An Ontology based Dialog Interface to Database", "text": "In this paper, we extend the state-of-the-art NLIDB system and present a dialog interface to relational databases. A dialog interface enables users to automatically exploit the semantic context of the conversation while asking natural language queries over an RDBMS, thereby making it simpler to express complex questions in a natural, piece-wise manner. We propose novel ontology-driven techniques for addressing each of the dialog-specific challenges such as co-reference resolution, ellipsis resolution, and query disambiguation, and use them in determining the overall intent of the user query. We demonstrate the applicability and usefulness of the dialog interface over two different domains, viz. finance and healthcare."} {"_id": "7e2fbad4fa4877ea3fd8d197950e335d59ebeedf", "title": "Consumers \u2019 Trust in Electronic Commerce Transactions : The Role of Perceived Privacy and Perceived Security 1 Introduction", "text": "Acknowledgement I am greatly indebted to Omar El Sawy and Ann Majchrzak for their guidance and suggestions, and I would like to thank Ricky Lim and Raymond for their help with the data analysis. Abstract Consumers' trust in their online transactions is vital for the sustained progress and development of electronic commerce. Our paper proposes that, in addition to known factors of trust such as a vendor's reputation, consumers' perceptions of privacy and security influence their trust in online transactions. Our research shows that consumers exhibit variability in their perceptions of privacy, security and trust between online and offline transactions, even if the transaction is conducted with the same store.
We build upon this finding to develop and validate measures of consumers' perceived privacy and perceived security of their online transactions, which are then theorized to influence their trust in EC transactions. We propose that the perceptions of privacy and security are factors that affect the consumers' trust in the institutional governance mechanisms underlying the Internet. We conduct two distinct empirical studies and, through successive refinement and analysis using the Partial Least Squares technique, we test our hypothesized relationships while verifying the excellent measurement properties associated with our instrument. Our study finds that consumers' perceived privacy and perceived security are indeed distinct constructs, but the effect of perceived privacy on trust in EC transactions is strongly mediated by perceived security. A major implication of our findings is that while the much studied determinants of trust, such as the reputation of the transacting firm, should not be neglected, vendors should also engage in efforts to positively influence consumer perceptions of privacy and security. We discuss the significance of this observation in the context of the increasing importance of acquiring customer information for personalization and other online strategies."} {"_id": "2e36ea91a3c8fbff92be2989325531b4002e2afc", "title": "Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models", "text": "Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence from its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic, e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison."} {"_id": "422df80a96c5417302067f5ad7ba7b071a73e156", "title": "DC Motor Drive with P, PI, and Particle Swarm Optimization Speed Controllers", "text": "This paper implements a Particle Swarm Optimization (PSO) speed controller for controlling the speed of a DC motor. Traditional Proportional (P) and Proportional-Integral (PI) controllers have also been developed and simulated using MATLAB/SIMULINK. The simulation results indicate that the PI controller has a large overshoot and is sensitive to controller gains. The optimization with particle swarm verified the reduction of oscillations as well as improvements in the steady-state error, rise time and overshoot of the speed response.
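To make the PSO loop concrete, here is a small self-contained sketch (plain Python/NumPy rather than MATLAB/SIMULINK) of inertia-weight PSO tuning PI gains against a toy first-order motor model; the plant constants, swarm settings, and integral-of-squared-error cost are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def ise_cost(gains, t_end=2.0, dt=1e-3, ref=1.0):
    """Integral of squared error for a PI speed loop around a toy
    first-order motor model dw/dt = (K*u - w) / tau."""
    kp, ki = gains
    K, tau = 2.0, 0.5                      # placeholder plant constants
    w = integ = cost = 0.0
    for _ in range(int(t_end / dt)):
        e = ref - w
        integ += e * dt
        u = kp * e + ki * integ            # PI control law
        w += (K * u - w) / tau * dt        # Euler step of the plant
        cost += e * e * dt
    return cost

def pso_minimize(f, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Basic PSO with an inertia weight w on the velocity update."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_gains, best_cost = pso_minimize(ise_cost, bounds=[(0.0, 10.0), (0.0, 10.0)])
```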
The simulation results confirmed that the optimized system is less sensitive to gain variations."} {"_id": "217abac938808f5f89e72fce7cfad1f61be8bcff", "title": "Modeling crowdsourcing systems: design and analysis of incentive mechanism and rating system", "text": "Over the past few years, we have seen an increasing popularity of crowdsourcing services [5]. Many companies are now providing such services, e.g., Amazon Mechanical Turk [1], Google Helpouts [3], and Yahoo! Answers [8]. Briefly speaking, crowdsourcing is an online, distributed problem solving paradigm and business production platform. It uses the power of today\u2019s Internet to solicit the collective intelligence of a large number of users. Relying on the wisdom of the crowd to solve posted tasks (or problems), crowdsourcing has become a promising paradigm to obtain \u201csolutions\u201d which can have higher quality or lower costs than the conventional method of solving problems via specialized employees or contractors in a company. Typically, a crowdsourcing system operates with three basic components: users, tasks and rewards. Users are classified into requesters and workers. A user can be a requester or a worker, and in some cases, a user can be a requester and worker at the same time. Requesters outsource tasks to workers and associate each task with certain rewards, which will be granted to the workers who solve the task. Workers, on the other hand, solve the assigned tasks and reply to requesters with solutions, and then take the reward, which can be in the form of money [1], entertainment [7] or altruism [8], etc. To have a successful crowdsourcing website, it is pertinent to attract a high volume of participation of users (requesters and workers), and at the same time, solutions by workers have to be of high quality. In this paper we design a rating system and a mechanism to encourage users to participate and to incentivize workers to produce high-quality solutions. First, we develop a game-theoretic model to characterize workers\u2019 strategic behavior. We then design a class of effective incentive mechanisms which consist of a task bundling scheme and a rating system, and pay workers according to solution ratings from requesters. We develop a model to characterize the design space of a class of commonly used rating systems: threshold-based rating systems. We quantify the impact of such rating systems and the bundling scheme on reducing requesters\u2019 reward payment while guaranteeing high quality solutions. We find that the simplest rating system, e.g., one with two rating points, is an effective system in which requesters only need to provide binary feedback to indicate whether or not they are satisfied with a solution."} {"_id": "eebdf35becd03454304c4ba8fde1d07ede465b8d", "title": "An analysis of player affect transitions in survival horror games", "text": "The trend of multimodal interaction in interactive gaming has grown significantly, as demonstrated for example by the wide acceptance of the Wii Remote and the Kinect as tools not just for commercial games but for game research as well. Furthermore, using the player\u2019s affective state as an additional input for game manipulation has opened the realm of affective gaming. In this paper, we analyzed the affective states of players prior to and after witnessing a scary event in a survival horror game. Player affect data were collected through our own affect annotation tool that allows the player to report his affect labels while watching his recorded gameplay and facial expressions.
The affect data were then used for training prediction models with the player\u2019s brainwave and heart rate signals, as well as keyboard\u2013mouse activities collected during gameplay. Our results show that (i) players are likely to get more fearful of a scary event when they are in the suspense state and that (ii) heart rate is a good candidate for detecting player affect. Using our results, game designers can maximize the fear level of the player by slowly building tension until the suspense state and showing a scary event after that. We believe that this approach can be applied to the analyses of different sets of emotions in other games as well."} {"_id": "5491b73d751c16c703eccbc67286f2d802594438", "title": "Asymptotic Convergence in Online Learning with Unbounded Delays", "text": "We study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayes-optimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting."} {"_id": "ba906034290287d1527ea5bb90271bb25aaca84c", "title": "Degradation of methyl orange using short-wavelength UV irradiation with oxygen microbubbles.", "text": "A novel wastewater treatment technique using 8 W low-pressure mercury lamps in the presence of uniform-sized microbubbles (diameter = 5.79 microm) was investigated for the decomposition of methyl orange as a model compound in aqueous solution. Photodegradation experiments were conducted with a BLB black light blue lamp (365 nm), a UV-C germicidal lamp (254 nm) and an ozone lamp (185 nm+254 nm), both with and without oxygen microbubbles. The results show that the oxygen microbubbles accelerated the decolorization rate of methyl orange under 185+254 nm irradiation. In contrast, the microbubbles had no effect on the decolorization of methyl orange under 365 and 254 nm irradiation. It was found that the pseudo-zero-order decolorization reaction constant in the microbubble system is 2.1 times higher than that in a conventional large-bubble system. The total organic carbon (TOC) reduction rate of methyl orange was greatly enhanced by oxygen microbubbles under 185+254 nm irradiation; however, the TOC reduction rate with nitrogen microbubbles was much slower than that with 185+254 nm irradiation only.
Possible reaction mechanisms for the decolorization and mineralization of methyl orange with both oxygen and nitrogen microbubbles were proposed in this study."} {"_id": "05504fc6ff64fb5060ef24b16d978fac3fd96337", "title": "Cost Models for Future Software Life Cycle Processes: COCOMO 2.0", "text": "Current software cost estimation models, such as the 1981 Constructive Cost Model (COCOMO) for software cost estimation and its 1987 Ada COCOMO update, have been experiencing increasing difficulties in estimating the costs of software developed to new life cycle processes and capabilities. These include non-sequential and rapid-development process models; reuse-driven approaches involving commercial off the shelf (COTS) packages, reengineering, applications composition, and applications generation capabilities; object-oriented approaches supported by distributed middleware; and software process maturity initiatives. This paper summarizes research in deriving a baseline COCOMO 2.0 model tailored to these new forms of software development, including rationales for the model decisions. The major new modeling capabilities of COCOMO 2.0 are a tailorable family of software sizing models, involving Object Points, Function Points, and Source Lines of Code; nonlinear models for software reuse and reengineering; an exponent-driver approach for modeling relative software diseconomies of scale; and several additions, deletions, and updates to previous COCOMO effort-multiplier cost drivers. This model is serving as a framework for an extensive current data collection and analysis effort to further refine and calibrate the model\u2019s estimation capabilities."} {"_id": "47f0f6a2fd518932734cc90936292775cc95aa5d", "title": "OCCUPATIONAL THERAPY FOR THE POLY- TRAUMA CASUALTY WITH LIMB LOSS", "text": ""} {"_id": "c7cf37c3609051dbd4084fcc068427fbb7a1a0e1", "title": "Blob Enhancement and Visualization for Improved Intracranial Aneurysm Detection", "text": "Several studies have established that the sensitivity of visual assessment of smaller intracranial aneurysms is not satisfactory. Computer-aided diagnosis based on volume rendering of the response of blob enhancement filters may shorten visual inspection and increase detection sensitivity by directing a diagnostician to suspicious locations in the cerebral vasculature. We propose a novel blob enhancement filter based on a modified volume ratio of Hessian eigenvalues that has a more uniform response inside blob-like structures compared to state-of-the-art filters. Because the response of the proposed filter is independent of the size and intensity of structures, it is especially sensitive for detecting small blob-like structures such as aneurysms. We also propose a novel volume rendering method, which is sensitive to signal energy along the viewing ray and which visually enhances the visualization of true positives and suppresses the usually sharp false positive responses. The proposed and state-of-the-art methods were quantitatively evaluated on a synthetic dataset and 42 clinical datasets of patients with aneurysms.
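The volume-ratio idea can be sketched generically (an illustrative Hessian-eigenvalue blobness map for bright 3-D blobs, not the paper's exact modified filter): for an ideal blob the three eigenvalues are equally large and negative, so the ratio of their product to the cube of their mean approaches 1, while plates and tubes score near 0.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blob_response(vol, sigma=2.0, eps=1e-10):
    """Volume-ratio-style blobness for a float 3-D volume: build the
    scale-normalized Hessian per voxel, take its eigenvalues, and
    compare their product with the cube of their mean."""
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1                  # second (or mixed) partial derivative
            H[..., i, j] = sigma**2 * gaussian_filter(vol, sigma, order=order)
    lam = np.linalg.eigvalsh(H)            # eigenvalues per voxel, ascending
    bright = np.all(lam < 0, axis=-1)      # bright blobs: all eigenvalues negative
    prod = np.prod(-lam, axis=-1)
    mean_cubed = (np.sum(-lam, axis=-1) / 3.0) ** 3
    return np.where(bright, prod / (mean_cubed + eps), 0.0)
```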
Because of the capability to accurately enhance the aneurysm's boundary, and due to a low number of visualized false positive responses, the combined use of the proposed filter and visualization method ensures a reliable detection of (small) intracranial aneurysms."} {"_id": "cdf35a8d61d38659527b0f52f6d3655778c165c1", "title": "Spatio-Temporal Recurrent Convolutional Networks for Citywide Short-term Crowd Flows Prediction", "text": "With the rapid development of urban traffic, forecasting the flows of crowds plays an increasingly important role in traffic management and public safety. However, it is very challenging as it is affected by many complex factors, including spatio-temporal dependencies of regions and other external factors such as weather and holidays. In this paper, we propose a deep-learning-based approach, named STRCNs, to forecast both the inflow and outflow of crowds in every region of a city. STRCNs combines Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network structures to capture spatio-temporal dependencies simultaneously. More particularly, our model can be decomposed into four components: Closeness captures the changes of instantaneous flows; Daily influence detects regularly recurring daily flow patterns; Weekly influence reflects weekly patterns of flows; and External influence captures the effect of external factors. For the first three properties (Closeness, Daily influence and Weekly influence), we use a branch of recurrent convolutional network units to learn both spatial and temporal dependencies in crowd flows. External factors are fed into a two-layer fully connected neural network. STRCNs assigns different weights to different branches, and then merges the outputs of the four parts together. Experimental results on two data sets (MobileBJ and TaxiBJ) demonstrate that STRCNs outperforms classical time series and other deep-learning-based prediction methods."} {"_id": "b336f946d34cb427452517f503ada4bbe0181d3c", "title": "Diagnosing Error in Temporal Action Detectors", "text": "Despite the recent progress in video understanding and the continuous rate of improvement in temporal action localization throughout the years, it is still unclear how far (or close?) we are to solving the problem. To this end, we introduce a new diagnostic tool to analyze the performance of temporal action detectors in videos and compare different methods beyond a single scalar metric. We exemplify the use of our tool by analyzing the performance of the top rewarded entries in the latest ActivityNet action localization challenge. Our analysis shows that the most impactful areas to work on are: strategies to better handle temporal context around the instances, improving the robustness with respect to the instance's absolute and relative size, and strategies to reduce the localization errors. Moreover, our experimental analysis finds that the lack of agreement among annotators is not a major roadblock to attaining progress in the field. Our diagnostic tool is publicly available to keep fueling the minds of other researchers with additional insights about their algorithms."} {"_id": "085a5657303249f8a7d2ffd5397cd9e4c70d8dbe", "title": "Cloud computing paradigms for pleasingly parallel biomedical applications", "text": "Cloud computing offers exciting new approaches for scientific computing that leverage the hardware and software investments in large-scale data centers by major commercial players.
Loosely coupled problems are very important in many scientific fields and are on the rise with the ongoing move towards data-intensive computing. There exist several approaches to leverage clouds and cloud-oriented data processing frameworks to perform pleasingly parallel computations. In this paper we present two pleasingly parallel biomedical applications, 1) assembly of genome fragments and 2) dimension reduction in the analysis of chemical structures, implemented utilizing the cloud infrastructure service based utility computing models of Amazon AWS and Microsoft Windows Azure, as well as the MapReduce-based data processing frameworks Apache Hadoop and Microsoft DryadLINQ. We review and compare each of the frameworks and perform a comparative study among them based on performance, efficiency, cost and usability. The cloud-service-based utility computing model and managed parallelism (MapReduce) exhibited comparable performance and efficiencies for the applications we considered. We analyze the variations in cost between the different platform choices (e.g., EC2 instance types), highlighting the need to select the appropriate platform based on the nature of the computation."} {"_id": "5b6535f9aa0f5afd4a33c70da2a8d10ca1832342", "title": "The psychometric properties and utility of the Short Sadistic Impulse Scale (SSIS).", "text": "Sadistic personality disorder (SPD) has been underresearched and often misunderstood in forensic settings. Furthermore, personality disorders in general are the subject of much controversy in terms of their classification (i.e., whether they should be categorical or dimensional). The Sadistic Attitudes and Behaviors Scale (SABS; Davies & Hand, 2003; O'Meara, Davies, & Barnes-Holmes, 2004) is a recently developed scale for measuring sadistic inclinations. Derived from this is the Short Sadistic Impulse Scale (SSIS), which has proved to be a strong unidimensional measure of sadistic inclination. Through cumulative scaling, it was investigated whether the SSIS could measure sadism on a continuum of interest, thus providing a dimensional view of the construct. Further, the SSIS was administered along with a number of other measures related to sadism in order to assess the validity of the scale. Results showed that the SSIS has strong construct and discriminant validity and may be useful as a screening measure for sadistic impulse."} {"_id": "a9d916bd42b3298ae22185a10f7e1e365b4acdbe", "title": "Cyber military strategy for cyberspace superiority in cyber warfare", "text": "In this paper, we propose a robust and operational cyber military strategy for cyberspace superiority in cyber warfare. We consider cyber forces manpower, cyber intelligence capability, and the organization of cyber forces as levers for improving cyber military strategy. In the cyber forces manpower field, personnel should be cultivated who can perform computer network operations and master cyber security technologies such as cyber intelligence collection, cyber-attack, cyber defence, and cyber forensics. Cyber intelligence capability includes cyber surveillance/reconnaissance, cyber order of battle, pre-CTO, and cyber damage assessment. The organization of cyber forces has to change from a tree structure to a network structure and has to be organized into task- or function-centric units.
Our proposed cyber military strategy provides a basis for deciding courses of action to achieve desired effects in cyberspace."} {"_id": "160404fb0d05a1a2efa593c448fcb8796c24b873", "title": "The emulation theory of representation: motor control, imagery, and perception.", "text": "The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language."} {"_id": "7ca4d3c70a5f87f9b5d0dbbda0d644aeb6b485da", "title": "DC Bus Voltage Control for a Distributed Power System", "text": "This paper addresses voltage control of distributed dc power systems. DC power systems have been discussed as a result of the introduction of renewable, small-scale power generation units. Also, telecommunication power systems featuring UPS properties might benefit from a broader introduction of dc power systems. Droop control is utilized to distribute the load between the source converters. In order to make the loading of the source converters equal, in per unit, the voltage control algorithm for each converter has to be designed to act similarly. The dc side capacitor of each converter, needed for filtering, is also determined as a consequence. The root locus is investigated for varying dc bus impedance. It is found that the risk of entering converter over-modulation is a stronger limitation than stability, at least for reasonable dc bus cable parameters. The stationary and dynamic properties during load variations are also investigated."} {"_id": "678680169e8b75430c1239e7e5289f656e009809", "title": "REFRAMING SUPPLY CHAIN MANAGEMENT : A SERVICE-DOMINANT", "text": "Shifting the dominant thinking of supply chain management toward the concepts of service, value cocreation, value propositions, operant resources, networks, service ecosystems and learning opens up many research opportunities and strategies for improved organizational performance.
The emerging thought world of service-dominant logic is presented as a means to reframe supply chain scholarship and practice for increased relevance and impact."} {"_id": "9bf6a003ed9dab85b8d22b2dd50b4325c6ab67ef", "title": "The Intersecting Roles of Consumer and Producer : A Critical Perspective on Co-production , Co-creation and Prosumption", "text": "The terms \u2018co-creation\u2019, \u2018co-production\u2019, and \u2018prosumption\u2019 refer to situations in which consumers collaborate with companies or with other consumers to produce things of value. These situations sometimes appear to blur the traditional roles of \u2018producer\u2019 and \u2018consumer\u2019. Building on Marx\u2019s distinction between \u2018use value\u2019 and \u2018exchange value\u2019, we argue that, when consumers perform tasks normally handled by the company, this does not necessarily represent a fundamental change in exchange roles or economic organization. We then argue that, when individuals who are traditionally defined as \u2018consumers\u2019 produce exchange value for companies, this does represent a fundamental change. Thanks to recent advances in information technology, customers today are contributing to organizational processes in ways that would not have been possible 10 years ago. For example, at Threadless.com, customers can not only vote for the clothing designs that they would like to see produced but also submit their own designs for voting (Walker 2007). Similarly, at National Instruments, a company that makes measurement software and sensors, nearly half of the company\u2019s research and development activity is done by an on-line community of customers who collaborate to discover how NI\u2019s products can solve members\u2019 problems (Seybold 2007). Another example is Proctor & Gamble\u2019s \u2018Vocalpoint\u2019 program, an on-line community of product enthusiasts who are rewarded with coupons and samples in return for talking about products with their friends. The program has more than 500,000 members and is being used by other companies such as WD40 and the Discovery Channel to promote their products (Neff 2006). Some have argued that examples like these are symptomatic of a fundamental and dramatic shift that is occurring in relationships between organizations and their customers \u2013 a change that calls into question previously clear distinctions between consumers and producers and between customers and employees. For instance, some have suggested 2 Intersecting Roles of Consumer and Producer \u00a9 2008 The Authors Sociology Compass 2 (2008): 10.1111/j.1751-9020.2008.00112.x Journal Compilation \u00a9 2008 Blackwell Publishing Ltd that customers who help with product design or assist with product marketing are part of a \u2018consumer-as-creator revolution\u2019 (Nadeau 2006, 105) and represent a \u2018new paradigm of message creation and delivery\u2019 (McConnell and Huba 2006, 42). Others have written that \u2018the gap between producers and consumers is blurring\u2019 (Tapscott and Williams 2006, 125) and that \u2018in the emerging ... paradigm, a person can seamlessly shift from consumer to contributor and creator\u2019 (143). Still others have argued that in today\u2019s customer-organization relationships \u2018power and control are radically decentralized and heterarchical: producers and consumers coalesce\u2019 (Pitt et al. 2006, 118). 
We have also recently seen a resurgence of the word \u2018prosumer\u2019, which was first coined by cultural critic Alvin Toffler in 1980, to emphasize the novelty of asking individuals to simultaneously play the role of consumer and producer (Kotler 1986; Tapscott and Williams 2006). In this article, we offer a critical analysis of the ways in which the role of the consumer may be changing and therefore also an investigation of this role in the broader capitalist system, a system in which consumers have traditionally served a central function. We argue that although consumers are increasingly performing tasks normally handled by the company, this role redefinition may be, at least in some cases, illusory. We also find that although the employee/customer distinction can be blurred on many dimensions, one dimension on which the distinction remains is between those who create use value and those who create exchange value (Marx 1867 [2001]). Increasingly, individuals who have traditionally been defined as \u2018consumers\u2019 are producing exchange value for companies, and this, we argue, is where the so-called \u2018prosumer\u2019 does indeed represent a fundamental change in economic organization. We then use the distinction between use value and exchange value to further explore the normative and ethical implications of consumer production. Consumer versus producer The consumer\u2013producer relationship has traditionally been conceived of as an exchange relationship in which each party trades one kind of value for another (Bagozzi 1975). In this article, we focus on the exchange relationship between an end user (such as a person buying coffee beans for home use) and the organization from which this end user buys a product or service (such as a supermarket or coffee shop). Before an end user buys something, a series of transformations are usually applied to it to make the product or service usable to the end user and to therefore enhance its value. To make a cup of coffee, for example, someone must grow and harvest the coffee beans, roast and grind them, transport and package them, offer them for retail sale, and finally brew the cup of coffee. These are all steps in what Michael Porter (1985) calls the \u2018value chain\u2019, the series of transformations required to make a product for an end user."} {"_id": "bd1c85bf52295adad0b1a59d79d7429091beeb22", "title": "Evolving to a New Dominant Logic for Marketing", "text": "Marketing inherited a model of exchange from economics, which had a dominant logic based on the exchange of \u201cgoods,\u201d which usually are manufactured output. The dominant logic focused on tangible resources, embedded value, and transactions. Over the past several decades, new perspectives have emerged that have a revised logic focused on intangible resources, the cocreation of value, and relationships. The authors believe that the new perspectives are converging to form a new dominant logic for marketing, one in which service provision rather than goods is fundamental to economic exchange. The authors explore this evolving logic and the corresponding shift in perspective for marketing scholars, marketing practitioners, and marketing educators."} {"_id": "030467d08fb735b2855aa9e71185126ba389f9d9", "title": "The Emerging Role of Electronic Marketplaces on the Internet", "text": "Markets play a central role in the economy, facilitating the exchange of information, goods, services and payments. 
In the process, they create economic value for buyers, sellers, market intermediaries and for society at large. Recent years have seen a dramatic increase in the role of information technology in markets, both in traditional markets and in the emergence of electronic marketplaces, such as the multitude of Internet-based online auctions."} {"_id": "314617f72bff343c7a9f0d550dc8f918a691f2bd", "title": "Information systems in supply chain integration and management", "text": "Supply chain management (SCM) is the 21st century global operations strategy for achieving organizational competitiveness. Companies are attempting to find ways to improve their flexibility and responsiveness, and in turn their competitiveness, by changing their operations strategy, methods and technologies, including the implementation of the SCM paradigm and information technology (IT). However, a thorough and critical review of the literature is yet to be carried out with the objective of bringing out pertinent factors and useful insights into the role and implications of IT in SCM. In this paper, the literature available on IT in SCM has been classified using suitable criteria and then critically reviewed to develop a framework for studying the applications of IT in SCM. Based on this review and analysis, recommendations have been made regarding the application of IT in SCM and some future research directions are indicated."} {"_id": "65c85498be307ee940976db668dae4546943a4c8", "title": "The American Economic Review VOLUME XLV MARCH , 1955 NUMBER ONE ECONOMIC GROWTH AND INCOME INEQUALITY", "text": ""} {"_id": "f9b79f7222658cc6670292f57547731a54a015f8", "title": "A Three-Phase Current-Fed DC/DC Converter With Active Clamp for Low-DC Renewable Energy Sources", "text": "This paper focuses on a new three-phase high-power current-fed DC/DC converter with an active clamp. A three-phase DC/DC converter with high efficiency and voltage boosting capability is designed for use in the interface between a low-voltage fuel-cell source and a high-voltage DC bus for inverters. Zero-voltage switching in all active switches is achieved by using a common active clamp branch, and zero-current switching in the rectifier diodes is achieved through discontinuous current conduction in the secondary side. Further, the converter is capable of increased power transfer due to its three-phase power configuration, and it reduces the RMS current per phase, thus reducing conduction losses. Moreover, a delta-delta connection on the three-phase transformer provides parallel current paths and reduces conduction losses in the transformer windings. An efficiency of above 93% is achieved through both improvements in the switching and through reducing conduction losses. A high voltage ratio is achieved by combining the inherent voltage boost characteristics of the current-fed converter and the transformer turns ratio. The proposed converter and three-phase PWM strategy are analyzed, simulated, and implemented in hardware. Experimental results are obtained on a 500-W prototype unit, with all of the design verified and analyzed."} {"_id": "16c7cb0515c235f6ab6e2d2fdfac0920ed1f7588", "title": "Par4: very high speed parallel robot for pick-and-place", "text": "This paper introduces a four-degree-of-freedom parallel manipulator dedicated to pick-and-place. It has been developed with the goal of reaching very high speed. This paper shows that its architecture is particularly well adapted to high dynamics.
Indeed, it is an evolution of the Delta, H4 and I4 robot architectures: it keeps the advantages of these existing robots while overcoming their drawbacks. In addition, a velocity-based optimization method using the Adept motion has been developed and applied to this high-speed parallel robot. All these considerations led to experiments that proved we can reach high accelerations (13 G) and obtain a cycle time of 0.28 s."} {"_id": "0123a29af69b6aa92856be8b48c834f0d2483640", "title": "Towards Controlled Transformation of Sentiment in Sentences", "text": "An obstacle to the development of many natural language processing products is the vast amount of training examples necessary to get satisfactory results. The generation of these examples is often a tedious and time-consuming task. This paper proposes a method to transform the sentiment of sentences in order to limit the work necessary to generate more training data. This means that one sentence can be transformed into a sentence of the opposite sentiment, which should halve the work required in the generation of text. The proposed pipeline consists of a sentiment classifier with an attention mechanism to highlight the short phrases that determine the sentiment of a sentence. Then, these phrases are changed to phrases of the opposite sentiment using a baseline model and an autoencoder approach. Experiments are run on the separate parts of the pipeline as well as on the end-to-end model. The sentiment classifier is tested on its accuracy and is found to perform adequately. The autoencoder is tested on how well it is able to change the sentiment of an encoded phrase, and it was found that such a task is possible. We use human evaluation to judge the performance of the full (end-to-end) pipeline, which reveals that a model using word vectors outperforms the encoder model. Numerical evaluation shows that a success rate of 54.7% is achieved on the sentiment change."} {"_id": "ec28e43d33c838921d7e68fc7fd6a0cd521336a3", "title": "Design, Analysis, and Test of a Novel 2-DOF Nanopositioning System Driven by Dual Mode", "text": "Piezodriven flexure-based motion stages, with a large workspace and high positioning precision, are attractive for the realization of high-performance atomic force microscope (AFM) scanning. In this paper, a modified lever displacement amplifier is proposed for the mechanism design of a novel compliant two-degree-of-freedom (2-DOF) nanopositioning stage, which can be driven in dual modes. In addition, modified double four-bar parallelogram P joints (P denotes prismatic) are adopted in designing the flexure limbs. The established models for the mechanical performance evaluation of the stage, in terms of kinetostatics, dynamics, and workspace, are validated by finite-element analysis. After a series of dimension optimizations carried out through the particle swarm optimization algorithm, a novel active disturbance rejection controller, including the tracking differentiator, the extended state observer, and the nonlinear state error feedback, is proposed to automatically estimate and suppress plant uncertainties arising from the hysteresis nonlinearity, creep effect, sensor noises, and unknown disturbances.
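The active disturbance rejection controller in the nanopositioning entry above couples a tracking differentiator, an extended state observer (ESO), and nonlinear state error feedback. As a rough sketch of the ESO idea alone, here is a generic linear ESO for a second-order plant; the plant, gains, time step, and disturbance are all invented for illustration and are not the authors' design:

```python
import numpy as np

# Minimal linear extended state observer (ESO) for a second-order plant
#   x1' = x2,  x2' = f(t) + b*u   (f lumps unknown dynamics/disturbances)
# The ESO estimates x1, x2 and the extended state x3 ~ f from the measured
# output y = x1 alone. All numbers below are illustrative.

dt, b = 1e-4, 1.0
beta = np.array([3e3, 3e6, 1e9])      # gains placing observer poles at -1000

z = np.zeros(3)                        # observer states: [x1_hat, x2_hat, f_hat]

def eso_step(z, y, u):
    e = y - z[0]                       # output estimation error
    dz = np.array([
        z[1] + beta[0] * e,
        z[2] + b * u + beta[1] * e,
        beta[2] * e,
    ])
    return z + dt * dz

# Toy plant with an unmodelled sinusoidal disturbance; the control input
# simply cancels the estimated disturbance, the core ADRC move.
x = np.zeros(2)
for k in range(20000):
    t = k * dt
    f = 2.0 * np.sin(5 * t)            # "unknown" disturbance
    u = -z[2] / b                      # cancel the estimated disturbance
    x += dt * np.array([x[1], f + b * u])
    z = eso_step(z, x[0], u)

print("disturbance estimate vs. truth:", z[2], 2.0 * np.sin(5 * 20000 * dt))
```

Because the observer bandwidth (about 1000 rad/s here) far exceeds the disturbance frequency, the extended state converges to the disturbance and the cancellation leaves only a small residual on the plant.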
The simulation and prototype test results indicate that the first natural frequency of the proposed stage is approximately 831 Hz, the amplification ratio in the two axes is about 4.2, and the workspace is 119.7 \u03bcm \u00d7 121.4 \u03bcm, while the cross coupling between the two axes is kept within 2%. All the results prove that the developed stage possesses good properties for high-performance AFM scanning."} {"_id": "b84c91f48e62506f733530ef2a046d74ff5e2d64", "title": "Large Receptive Field Networks for High-Scale Image Super-Resolution", "text": "Convolutional Neural Networks have been the backbone of recent rapid progress in Single-Image Super-Resolution. However, existing networks are very deep with many network parameters, thus having a large memory footprint and being challenging to train. We propose Large Receptive Field Networks which strive to directly expand the receptive field of Super-Resolution networks without increasing depth or parameter count. In particular, we use two different methods to expand the network receptive field: 1-D separable kernels and atrous convolutions. We conduct considerable experiments to study the performance of various arrangement schemes of the 1-D separable kernels and atrous convolution in terms of accuracy (PSNR / SSIM), parameter count, and speed, while focusing on the more challenging high upscaling factors. Extensive benchmark evaluations demonstrate the effectiveness of our approach."} {"_id": "761f2288b1b0cea385b0b9a89bb068593d94d6bd", "title": "3D face recognition: a survey", "text": "3D face recognition has become a trending research direction in both industry and academia. It inherits advantages from traditional 2D face recognition, such as the natural recognition process and a wide range of applications. Moreover, 3D face recognition systems could accurately recognize human faces even under dim lighting and with varying facial positions and expressions, conditions under which 2D face recognition systems would have immense difficulty operating. This paper summarizes the history of and the most recent progress in the 3D face recognition research domain. The frontier research results are introduced in three categories: pose-invariant recognition, expression-invariant recognition, and occlusion-invariant recognition. To promote future research, this paper collects information about publicly available 3D face databases. This paper also lists important open problems."} {"_id": "16f67094b8e0ec70d48d1f0f01a1f204a89b0e12", "title": "Hybrid distribution transformer: Concept development and field demonstration", "text": "Today's distribution system is expected to supply power to loads for which it was not designed. Moreover, high penetration of distributed generation units is redefining the requirements for the design, control and operation of the electric distribution system. A Hybrid Distribution Transformer is a potential cost-effective alternative solution to various distribution grid control devices. The Hybrid Distribution Transformer is realized by augmenting a regular transformer with a fractionally rated power electronic converter, which provides the transformer with additional control capabilities. The Hybrid Distribution Transformer concept can provide dynamic ac voltage regulation, reactive power compensation and, in future designs, form an interface with energy storage devices.
Other potential functionalities that can be realized from the Hybrid Distribution Transformer include voltage phase angle control, harmonic compensation and voltage sag compensation. This paper presents the concept of a Hybrid Distribution Transformer and the status of our efforts towards a 500 kVA, 12.47 kV/480 V field demonstrator."} {"_id": "4b81b16778c5d2dbf2cee511f7822fa6ae3081bf", "title": "Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation", "text": "Background Since Big Data mainly aims to explore the correlation between surface features but not their underlying causality relationship, the Big Mechanism program has been proposed by DARPA to find out the \u201cwhy\u201d behind the \u201cBig Data\u201d. However, the prerequisite for it is that the machine can read each document and learn its associated knowledge, which is the task of Machine Reading (MR). Since a domain-independent MR system is complicated and difficult to build, the math word problem (MWP) [1] is frequently chosen as the first test case to study MR (as it usually uses less complicated syntax and requires a smaller amount of domain knowledge). According to how the decision is made when there are several candidates, previous MWP algebra solvers can be classified into: (1) Rule-based approaches with logic inference [2-7], which apply rules to get the answer (via identifying entities, quantities, operations, etc.) with a logic inference engine. (2) Rule-based approaches without logic inference [8-13], which apply rules to get the answer without a logic inference engine. (3) Statistics-based approaches [14, 15], which use statistical models to identify entities, quantities, operations, and get the answer. To our knowledge, none of the statistics-based approaches adopts logic inference. The main problem of the rule-based approaches mentioned above is that the coverage rate problem is serious, as rules with wide coverage are difficult and expensive to construct. Also, since they adopt a Go/No-Go approach (unlike statistical approaches, which can adopt a large Top-N to have high inclusion rates), the error accumulation problem would be severe. On the other hand, the main problem of those approaches without adopting logic inference is that they usually need to implement a new handling procedure for each new type of problem (as the general logic inference mechanism is not adopted). Also, as there is no inference engine to generate the reasoning chain [16], additional effort would be required for"} {"_id": "bd2d7c7f0145028e85c102fe52655c2b6c26aeb5", "title": "Attribute-based People Search: Lessons Learnt from a Practical Surveillance System", "text": "We address the problem of attribute-based people search in real surveillance environments. The system we developed is capable of answering user queries such as \"show me all people with a beard and sunglasses, wearing a white hat and a patterned blue shirt, from all metro cameras in the downtown area, from 2pm to 4pm last Saturday\". In this paper, we describe the lessons we learned from practical deployments of our system, and how we made our algorithms achieve the accuracy and efficiency required by many police departments around the world. In particular, we show that a novel set of multimodal integral filters and proper normalization of attribute scores are critical to obtain good performance.
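On the score-normalization point just made, a generic sketch of per-attribute z-normalization over a gallery followed by additive score fusion may help; the attribute names, raw score distributions, and the fusion rule below are all hypothetical, not the deployed pipeline:

```python
import numpy as np

# Hypothetical gallery: raw detector scores per attribute, one entry per person.
# Raw scales differ per attribute, so direct summation would let one detector
# dominate the fused ranking.
rng = np.random.default_rng(0)
raw = {
    "beard":      rng.normal(5.0, 2.0, size=1000),
    "sunglasses": rng.normal(0.1, 0.05, size=1000),
    "white_hat":  rng.normal(-3.0, 1.0, size=1000),
}

def znorm(scores):
    # Map each attribute's scores to zero mean / unit variance over the gallery.
    return (scores - scores.mean()) / scores.std()

# Fuse by summing the normalized scores of the attributes named in the query.
query = ["beard", "sunglasses", "white_hat"]
fused = sum(znorm(raw[a]) for a in query)

top10 = np.argsort(fused)[::-1][:10]   # best-matching gallery identities
print(top10)
```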
We conduct a comprehensive experimental analysis on video footage captured from a large set of surveillance cameras monitoring metro chokepoints, in both crowded and normal activity periods. Moreover, we show impressive results using images from the recent Boston marathon bombing event, where our system can rapidly retrieve the two suspects based on their attributes from a database containing more than one thousand people present at the event."} {"_id": "3f3b05dab6d9f734d40755a12cbe56c34cfb28cc", "title": "Why does the microbiome affect behaviour?", "text": "Growing evidence indicates that the mammalian microbiome can affect behaviour, and several symbionts even produce neurotransmitters. One common explanation for these observations is that symbionts have evolved to manipulate host behaviour for their benefit. Here, we evaluate the manipulation hypothesis by applying evolutionary theory to recent work on the gut\u2013brain axis. Although the theory predicts manipulation by symbionts under certain conditions, these appear rarely satisfied by the genetically diverse communities of the mammalian microbiome. Specifically, any symbiont investing its resources to manipulate host behaviour is expected to be outcompeted within the microbiome by strains that do not manipulate and redirect their resources into growth and survival. Moreover, current data provide no clear evidence for manipulation. Instead, we show how behavioural effects can readily arise as a by-product of natural selection on microorganisms to grow within the host and natural selection on hosts to depend upon their symbionts. We argue that understanding why the microbiome influences behaviour requires a focus on microbial ecology and local effects within the host. The microbiota can influence host behaviour through the gut\u2013brain axis. In this Opinion, Johnson and Foster explore the evolution of this relationship and propose that adaptations of competing gut microorganisms may affect behaviour as a by\u2011product, leading to host dependence."} {"_id": "8b40b159c2316dbea297a301a9c561b1d9873c4a", "title": "Monolingual and Cross-Lingual Information Retrieval Models Based on (Bilingual) Word Embeddings", "text": "We propose a new unified framework for monolingual (MoIR) and cross-lingual information retrieval (CLIR) which relies on the induction of dense real-valued word vectors known as word embeddings (WE) from comparable data. To this end, we make several important contributions: (1) We present a novel word representation learning model called Bilingual Word Embeddings Skip-Gram (BWESG) which is the first model able to learn bilingual word embeddings solely on the basis of document-aligned comparable data; (2) We demonstrate a simple yet effective approach to building document embeddings from single word embeddings by utilizing models from compositional distributional semantics. BWESG induces a shared cross-lingual embedding vector space in which both words, queries, and documents may be presented as dense real-valued vectors; (3) We build novel ad-hoc MoIR and CLIR models which rely on the induced word and document embeddings and the shared cross-lingual embedding space; (4) Experiments for English and Dutch MoIR, as well as for English-to-Dutch and Dutch-to-English CLIR using benchmarking CLEF 2001-2003 collections and queries demonstrate the utility of our WE-based MoIR and CLIR models. The best results on the CLEF collections are obtained by the combination of the WE-based approach and a unigram language model. 
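As an aside on contribution (2) of the bilingual-embeddings entry above, the compositional step from word embeddings to document embeddings can be sketched generically as mean pooling followed by cosine-similarity ranking; the vectors and vocabulary here are random stand-ins, not BWESG output:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50
vocab = {w: rng.normal(size=dim) for w in
         "the cat sat on mat dog ran park bilingual retrieval".split()}

def embed(tokens):
    # Average the embeddings of in-vocabulary tokens (simple additive composition).
    vecs = [vocab[t] for t in tokens if t in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

docs = [["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "ran", "in", "the", "park"]]
query = ["cat", "mat"]

q = embed(query)
ranked = sorted(range(len(docs)), key=lambda i: -cosine(q, embed(docs[i])))
print(ranked)   # document indices, most similar first
```

In a shared cross-lingual space the same pooling works regardless of which language the query and documents come from, which is what makes the MoIR and CLIR models above uniform.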
We also report on significant improvements in ad-hoc IR tasks of our WE-based framework over the state-of-the-art framework for learning text representations from comparable data based on latent Dirichlet allocation (LDA)."} {"_id": "c2c03bd11ae5c58b3b7c8e10f325e2a253868e45", "title": "Easily Add Significance Testing to your Market Basket Analysis in SAS \u00ae Enterprise Miner TM", "text": "Market Basket Analysis is a popular data mining tool that can be used to search through data to find patterns of co-occurrence among objects. It is an algorithmic process that generates business rules and several metrics for each business rule such as support, confidence and lift that help researchers identify \u201cinteresting\u201d patterns. Although useful, these popular metrics do not incorporate traditional significance testing. This paper describes how to easily add a well-known statistical significance test, the Pearson\u2019s Chi Squared statistic, to the existing output generated by SAS\u00ae Enterprise Miner\u2019s Association Node. The addition of this significance test enhances the ability of data analysts to make better decisions about which business rules are likely to be more useful."} {"_id": "3f9df5c77af49d5b1b19eac9b82cb430b50f482d", "title": "Leveraging social media networks for classification", "text": "Social media has reshaped the way in which people interact with each other. The rapid development of participatory web and social networking sites like YouTube, Twitter, and Facebook, also brings about many data mining opportunities and novel challenges. In particular, we focus on classification tasks with user interaction information in a social network. Networks in social media are heterogeneous, consisting of various relations. Since the relation-type information may not be available in social media, most existing approaches treat these inhomogeneous connections homogeneously, leading to an unsatisfactory classification performance. In order to handle the network heterogeneity, we propose the concept of social dimension to represent actors\u2019 latent affiliations, and develop a classification framework based on that. The proposed framework, SocioDim, first extracts social dimensions based on the network structure to accurately capture prominent interaction patterns between actors, then learns a discriminative classifier to select relevant social dimensions. SocioDim, by differentiating different types of network connections, outperforms existing representative methods of classification in social media, and offers a simple yet effective approach to integrating two types of seemingly orthogonal information: the network of actors and their attributes."} {"_id": "bf9db8ca2dce7386cbed1ae0fd6465148cdb2b98", "title": "From Word To Sense Embeddings: A Survey on Vector Representations of Meaning", "text": "Over the past years, distributed semantic representations have proved to be effective and flexible keepers of prior knowledge to be integrated into downstream applications. This survey focuses on the representation of meaning. We start from the theoretical background behind word vector space models and highlight one of their major limitations: the meaning conflation deficiency, which arises from representing a word with all its possible meanings as a single vector. Then, we explain how this deficiency can be addressed through a transition from the word level to the more fine-grained level of word senses (in its broader acceptation) as a method for modelling unambiguous lexical meaning. 
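Returning briefly to the Market Basket Analysis entry above, a small worked example of adding Pearson's chi-squared test to an association rule A => B, computed from its 2x2 contingency table; the counts are invented for the illustration:

```python
# Pearson chi-squared for an association rule A => B, from the 2x2 table of
# (A present / absent) x (B present / absent). Counts are hypothetical.
n11, n10 = 400, 100   # A & B,      A & not-B
n01, n00 = 300, 200   # not-A & B,  not-A & not-B
n = n11 + n10 + n01 + n00

chi2 = 0.0
for observed, row, col in [(n11, n11 + n10, n11 + n01),
                           (n10, n11 + n10, n10 + n00),
                           (n01, n01 + n00, n11 + n01),
                           (n00, n01 + n00, n10 + n00)]:
    expected = row * col / n          # expected count under independence
    chi2 += (observed - expected) ** 2 / expected

# With 1 degree of freedom, chi2 > 3.84 rejects independence at the 5% level.
print(round(chi2, 2), chi2 > 3.84)    # -> 47.62 True for these counts
```

This is the statistic the entry proposes layering on top of support, confidence, and lift, so that a rule is flagged as "interesting" only when the co-occurrence is statistically significant.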
We present a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based. Finally, this survey covers the main evaluation procedures and applications for this type of representation, and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains and compositionality."} {"_id": "9d2d8b7b683e0ad66fb014956ddfdfcd292bdc55", "title": "Demographic Attributes Prediction on the Real-World Mobile Data", "text": "The deluge of data generated by mobile phone devices imposes new challenges on the data mining community. User activities recorded by mobile phones could be useful for uncovering behavioral patterns. An interesting question is: can patterns in mobile phone usage reveal the demographic characteristics of the user? Demographic information about gender, age, marital status, job type, etc. is key for applications with customer-centric strategies. In this paper, we describe our approach to feature extraction from raw data and to building predictive models for the task of demographic attribute prediction. We experimented with graph-based representations of users inferred from the similarity of their feature vectors, with feature selection, and with classification algorithms. Our work contributes to the Nokia Mobile Data Challenge (MDC) in the endeavor of exploring real-world mobile data."} {"_id": "a5bfe7d4c00ed9c9922ebf9e29692eedb6add060", "title": "The Effects of Profile Pictures and Friends' Comments on Social Network Site Users' Body Image and Adherence to the Norm", "text": "This study sought to explore the effects of exposure to Facebook body ideal profile pictures and norm conforming comments on users' body image. In addition, the social identity and self-categorization theoretical frameworks were used to explore users' endorsement of a body ideal norm. A mock Facebook page was used to conduct a pretest posttest 2\u2009\u00d7\u20092 between-group web-based experiment that featured body ideal profile pictures (body ideal vs. no body) and body ideal comments (conforming vs. nonconforming). Five hundred and one participants completed the experiment and passed all manipulation checks. Participants viewed pictures and comments on the status page and were able to leave their own comment before exiting. Results demonstrated no significant main effects. However, predispositional body satisfaction significantly moderated the relationship between body ideal pictures and body satisfaction. Most comments supported the body ideal norm. However, in support of self-categorization theory, participants exposed to nonconforming comments made nonconforming comments themselves significantly more than those exposed to conforming comments. The findings demonstrated the importance of continued body image research in social network sites, as well as the potential for self-categorization theory to guide such research."} {"_id": "d8c42a63826842f99cf357c0af52beceb2d9e563", "title": "(Invited) Nanoimprinted perovskite for optoelectronics", "text": "Organic-inorganic hybrid perovskites have recently emerged as promising materials for optoelectronics. Here we show successful patterning of hybrid perovskite into nanostructures with cost-effective nanoimprint technology. Photodetectors are fabricated on nanoimprinted perovskite with improved responsivity. A nanoimprinted perovskite metasurface is formed, exhibiting significantly enhanced photoluminescence.
Lasing is expected on nanoimprinted perovskite with an optimized cavity design and process."} {"_id": "c7ce1e88f41e771329fc4101252f61ff2aa9de6a", "title": "A 0.7V Fully-on-Chip Pseudo-Digital LDO Regulator with 6.3\u03bcA Quiescent Current and 100mV Dropout Voltage in 0.18-\u03bcm CMOS", "text": "This paper presents an NMOS pseudo-digital low-dropout (PD-LDO) regulator that supports low-voltage operation by eliminating the amplifier of an analog LDO. The proposed pseudo-digital control loop consists of a latched comparator, a 2X charge pump and an RC network. It detects the output voltage and provides a continuous gate control signal for the power transistor by charging and discharging the RC network. Fast transient response is achieved due to the source-follower structure of the power NMOS, with a small output capacitor and small occupied chip area, and without consuming a large quiescent current. The proof-of-concept design of the proposed PD-LDO is implemented in a 0.18-\u03bcm CMOS process. The minimum supply voltage is 0.7 V, with a dropout voltage of 100 mV and a maximum load current of 100 mA. Using only 20 pF of on-chip output capacitor and a 10 MHz comparator clock frequency, the undershoot is 106 mV with a 90 mA load current step and 150 ns edge time. The quiescent current is only 6.3 \u03bcA and the active chip area is 0.08 mm2."} {"_id": "f45cfbb9377a7da8be3f0a09d7291a7b01bb79d2", "title": "Heart rate variability and psychometric responses to overload and tapering in collegiate sprint-swimmers.", "text": "OBJECTIVES\nThe purpose of this study was to evaluate cardiac-parasympathetic and psychometric responses to competition preparation in collegiate sprint-swimmers. Additionally, we aimed to determine the relationship between average vagal activity and its daily fluctuation during each training phase.\n\n\nDESIGN\nObservational.\n\n\nMETHODS\nTen Division-1 collegiate sprint-swimmers performed heart rate variability recordings (i.e., log transformed root mean square of successive RR intervals, lnRMSSD) and completed a brief wellness questionnaire with a smartphone application daily after waking. Mean values for psychometrics and lnRMSSD (lnRMSSDmean) as well as the coefficient of variation (lnRMSSDcv) were calculated from 1 week of baseline (BL) followed by 2 weeks of overload (OL) and 2 weeks of tapering (TP) leading up to a championship competition.\n\n\nRESULTS\nCompetition preparation resulted in improved race times (p<0.01). Moderate decreases in lnRMSSDmean, and Large to Very Large increases in lnRMSSDcv, perceived fatigue and soreness were observed during the OL and returned to BL levels or peaked during TP (p<0.05). Inverse correlations between lnRMSSDmean and lnRMSSDcv were Very Large at BL and OL (p<0.05) but only Moderate at TP (p>0.05).\n\n\nCONCLUSIONS\nOL training is associated with a reduction and greater daily fluctuation in vagal activity compared with BL, concurrent with decrements in perceived fatigue and muscle soreness. These effects are reversed during TP where these values returned to baseline or peaked leading into successful competition.
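As a side note on the two vagal metrics used in the swimming study above, lnRMSSD for a single recording and the week-level mean and coefficient of variation could be computed as follows; the RR intervals are fabricated and this is not the study's analysis code:

```python
import numpy as np

def ln_rmssd(rr_ms):
    """Natural log of the root mean square of successive RR-interval differences."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return np.log(np.sqrt(np.mean(diffs ** 2)))

# One week of fabricated morning recordings (RR intervals in milliseconds).
rng = np.random.default_rng(7)
week = [rng.normal(1000, 50, size=300) for _ in range(7)]

daily = np.array([ln_rmssd(rr) for rr in week])
ln_rmssd_mean = daily.mean()
ln_rmssd_cv = 100 * daily.std(ddof=1) / daily.mean()   # percent CV over the week

print(f"lnRMSSDmean={ln_rmssd_mean:.2f}, lnRMSSDcv={ln_rmssd_cv:.1f}%")
```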
The strong inverse relationship between average vagal activity and its daily fluctuation weakened during TP."} {"_id": "3cbf5c2b9833207aa7c18cbbb6257e25057bd99c", "title": "\u57fa\u65bc\u5c0d\u7167\u8868\u4ee5\u53ca\u8a9e\u8a00\u6a21\u578b\u4e4b\u7c21\u7e41\u5b57\u9ad4\u8f49\u63db (Chinese Characters Conversion System based on Lookup Table and Language Model) [In Chinese]", "text": "The character sets used in China and Taiwan are both Chinese, but they are divided into simplified and traditional Chinese characters. There is a large amount of information exchange between China and Taiwan through books and the Internet. To provide readers with a convenient reading environment, character conversion between simplified and traditional Chinese is necessary. The conversion between simplified and traditional Chinese characters has two problems: one-to-many ambiguity and term usage problems. Since there are many traditional Chinese characters that have only one corresponding simplified character, when converting simplified Chinese into traditional Chinese, the system will face the one-to-many ambiguity. Also, there are many terms that have different usages between the two Chinese societies. This paper focuses on designing an extensible conversion system that can take advantage of community knowledge, by accumulating lookup tables through Wikipedia to tackle the term usage problem, and that can integrate a language model to disambiguate the one-to-many ambiguity. The system can reduce the proofreading cost of character conversion for books, e-books, or online publications. The extensible architecture makes it easy to improve the system with new training data."} {"_id": "b3318d66069cc164ac085d15dc25cafac82c9d6b", "title": "Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?", "text": "Qualitative research methods are enjoying unprecedented popularity. Although checklists have undoubtedly contributed to the wider acceptance of such methods, these can be counterproductive if used prescriptively. The uncritical adoption of a range of \u201ctechnical fixes\u201d (such as purposive sampling, grounded theory, multiple coding, triangulation, and respondent validation) does not, in itself, confer rigour. In this article I discuss the limitations of these procedures and argue that there is no substitute for systematic and thorough application of the principles of qualitative research. Technical fixes will achieve little unless they are embedded in a broader understanding of the rationale and assumptions behind qualitative research."} {"_id": "3455c5ed8dac3cb7a8a6c5cbe28dff96cc123a68", "title": "Yet Another Visit to Paxos", "text": "This paper presents a modular introduction to crash-tolerant and Byzantine-tolerant protocols for reaching consensus that use the method introduced by the Paxos algorithm of Lamport and by the viewstamped replication algorithm of Oki and Liskov. The consensus protocol runs a sequence of epoch abstractions as governed by an epoch-change abstraction. Implementations of epoch and epoch-change that tolerate crash faults yield the consensus algorithm in Paxos and in viewstamped replication. Implementations of epoch and epoch-change that tolerate Byzantine faults yield the consensus algorithm in the PBFT protocol of Castro and Liskov."} {"_id": "13bf13f019632a4edb967635e72e3e140f89e90e", "title": "Internet inter-domain traffic", "text": "In this paper, we examine changes in Internet inter-domain traffic demands and interconnection policies.
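Returning to the Chinese character conversion entry above, its lookup-table-plus-language-model idea can be sketched in miniature: each simplified character maps to one or more traditional candidates, and a language model scores the alternatives. The mapping table and bigram scores below are toy stand-ins for the Wikipedia-derived tables and trained model:

```python
import itertools

# Toy lookup table: simplified character -> traditional candidates.
# "发" is a real one-to-many case (發 "emit" vs. 髮 "hair"); others are 1-to-1 here.
table = {"头": ["頭"], "发": ["發", "髮"], "理": ["理"]}

# Toy bigram scores standing in for a trained language model.
bigram = {("頭", "髮"): 0.9, ("頭", "發"): 0.1, ("理", "髮"): 0.8, ("理", "發"): 0.2}

def convert(simplified):
    candidates = [table.get(ch, [ch]) for ch in simplified]
    best, best_score = None, float("-inf")
    for seq in itertools.product(*candidates):   # enumerate all conversions
        score = sum(bigram.get(pair, 0.01) for pair in zip(seq, seq[1:]))
        if score > best_score:
            best, best_score = "".join(seq), score
    return best

print(convert("头发"))  # -> 頭髮 ("hair"), not the wrong 頭發
print(convert("理发"))  # -> 理髮 ("haircut")
```

A real system would replace the exhaustive enumeration with a Viterbi-style search, but the division of labor is the same: the table proposes, the language model disposes.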
We analyze more than 200 Exabytes of commercial Internet traffic over a two year period through the instrumentation of 110 large and geographically diverse cable operators, international transit backbones, regional networks and content providers. Our analysis shows significant changes in inter-AS traffic patterns and an evolution of provider peering strategies. Specifically, we find the majority of inter-domain traffic by volume now flows directly between large content providers, data center / CDNs and consumer networks. We also show significant changes in Internet application usage, including a global decline of P2P and a significant rise in video traffic. We conclude with estimates of the current size of the Internet by inter-domain traffic volume and rate of annualized inter-domain traffic growth."} {"_id": "0a65bc1de050f544648977e786c499a0166fa22a", "title": "Strategies and struggles with privacy in an online social networking community", "text": "Online social networking communities such as Facebook and MySpace are extremely popular. These sites have changed how many people develop and maintain relationships through posting and sharing personal information. The amount and depth of these personal disclosures have raised concerns regarding online privacy. We expand upon previous research on users\u2019 under-utilization of available privacy options by examining users\u2019 current strategies for maintaining their privacy, and where those strategies fail, on the online social network site Facebook. Our results demonstrate the need for mechanisms that provide awareness of the privacy impact of users\u2019 daily interactions."} {"_id": "21b07e146c9b56dbe3d516b59b69d989be2998b7", "title": "Moving beyond untagging: photo privacy in a tagged world", "text": "Photo tagging is a popular feature of many social network sites that allows users to annotate uploaded images with those who are in them, explicitly linking the photo to each person's profile. In this paper, we examine privacy concerns and mechanisms surrounding these tagged images. Using a focus group, we explored the needs and concerns of users, resulting in a set of design considerations for tagged photo privacy. We then designed a privacy enhancing mechanism based on our findings, and validated it using a mixed methods approach. Our results identify the social tensions that tagging generates, and the needs of privacy tools to address the social implications of photo privacy management."} {"_id": "2d2b1f9446e9b4cdb46327cda32a8d9621944e29", "title": "Information revelation and privacy in online social networks", "text": "Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. 
We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences."} {"_id": "2dff0f21a23f9e3b6e0c50ce3fec75de4ff00359", "title": "Persona: an online social network with user-defined privacy", "text": "Online social networks (OSNs) are immensely popular, with some claiming over 200 million users. Users share private content, such as personal information or photographs, using OSN applications. Users must trust the OSN service to protect personal information even as the OSN provider benefits from examining and sharing that information. We present Persona, an OSN where users dictate who may access their information. Persona hides user data with attribute-based encryption (ABE), allowing users to apply fine-grained policies over who may view their data. Persona provides an effective means of creating applications in which users, not the OSN, define policy over access to private data. We demonstrate new cryptographic mechanisms that enhance the general applicability of ABE. We show how Persona provides the functionality of existing online social networks with additional privacy benefits. We describe an implementation of Persona that replicates Facebook applications and show that Persona provides acceptable performance when browsing privacy-enhanced web pages, even on mobile devices."} {"_id": "2e1fa859dc31677895276a6b26d90ec70a4861c5", "title": "Estimating the Customer-Level Demand for Electricity Under Real-Time Market Prices", "text": "This paper presents estimates of the customer-level demand for electricity by industrial and commercial customers purchasing electricity according to the half-hourly energy prices from the England and Wales (E&W) electricity market. These customers also face the possibility of a demand charge on their electricity consumption during the three half-hour periods that are coincident with E&W system peaks. Although energy charges are largely known by 4:00 P.M. the day prior to consumption, a fraction of the energy charge and the identity of the half-hour periods when demand charges occur are only known with certainty ex post of consumption. Four years of data from a Regional Electricity Company (REC) in the United Kingdom is used to quantify the half-hourly customer-level demands under this real-time pricing program. The econometric model developed and estimated here quantifies the extent of intertemporal substitution in electricity consumption across pricing periods within the day due to changes in all of the components of the day-ahead E&W electricity prices, the level of the demand charge and the probability that a demand charge will be imposed. The results of this modeling framework can be used by retail companies supplying consumers purchasing electricity according to real-time market prices to construct demand-side bids into an electricity market open to competition. The paper closes with several examples of how this might be done."} {"_id": "446d5a4496d7dc201263b888c4f0ae65833a25eb", "title": "Application of Spreading Activation Techniques in Information Retrieval", "text": "This paper surveys the use of Spreading Activation techniques on Semantic Networks in Associative Information Retrieval. The major Spreading Activation models are presented and their applications to IR are surveyed.
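A minimal sketch of the core loop that the surveyed Spreading Activation models share may be useful here; the graph, decay factor, and firing threshold are illustrative, and individual models in the survey differ in the constraints they add:

```python
# Minimal spreading activation over a weighted semantic network.
# Activation flows from seed nodes along weighted edges, attenuated by a
# decay factor, until it falls below a firing threshold.
graph = {
    "dog":    [("animal", 0.8), ("bark", 0.6)],
    "animal": [("cat", 0.7)],
    "bark":   [("tree", 0.3)],
    "cat":    [],
    "tree":   [],
}
decay, threshold = 0.8, 0.2

activation = {"dog": 1.0}            # seed node(s), e.g. query terms
frontier = ["dog"]
while frontier:
    node = frontier.pop()
    pulse = activation[node] * decay
    for neighbor, weight in graph[node]:
        spread = pulse * weight
        # Fire the neighbor only if the incoming activation is strong enough
        # and improves on what it already received.
        if spread >= threshold and spread > activation.get(neighbor, 0.0):
            activation[neighbor] = spread
            frontier.append(neighbor)

print(sorted(activation.items(), key=lambda kv: -kv[1]))
```

In associative IR, the nodes reached with high residual activation are the candidate documents or index terms related to the seed query terms.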
A number of works in this area are critically analyzed in order to study the relevance of Spreading Activation for associative IR."} {"_id": "5882b1409dc3d6d9beebb0fd3ab149f864e8c8d3", "title": "Combined digital signature and digital watermark scheme for image authentication", "text": "Conventional digital signature schemes for image authentication encode the signature in a file separate from the original image, thus requiring extra bandwidth to transmit it. Meanwhile, the watermarking technique embeds some information in the host image. In this paper, a combined digital watermark and digital signature scheme for image authentication is proposed. The scheme extracts a signature from the original image and embeds it back into the image as a watermark, avoiding an additional signature file. Since images are always compressed before transmission over the Internet, the proposed scheme is tested for compression tolerance and shows good robustness against JPEG coding. Furthermore, the scheme not only can verify the authenticity and integrity of images, but can also locate illegal modifications."} {"_id": "f2e9f869a9fc1f07887866be5f70a37b6c31411b", "title": "Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction", "text": "Stock trend prediction plays a critical role in seeking maximized profit from stock investment. However, precise trend prediction is very difficult given the highly volatile and non-stationary nature of the stock market. Exploding information on the Internet, together with the advancing development of natural language processing and text mining techniques, has enabled investors to unveil market trends and volatility from online content. Unfortunately, the quality, trustworthiness, and comprehensiveness of online content related to the stock market vary drastically, and a large portion consists of low-quality news, comments, or even rumors. To address this challenge, we imitate the learning process of human beings facing such chaotic online news, driven by three principles: sequential content dependency, diverse influence, and effective and efficient learning. In this paper, to capture the first two principles, we designed a Hybrid Attention Network (HAN) to predict the stock trend based on the sequence of recent related news. Moreover, we apply the self-paced learning mechanism to imitate the third principle. Extensive experiments on real-world stock market data demonstrate the effectiveness of our framework. A further simulation illustrates that a straightforward trading strategy based on our proposed framework can significantly increase the annualized return."} {"_id": "adbe1565649a4f547a68030da8b6a0814f228bbc", "title": "FinDroidHR: Smartwatch Gesture Input with Optical Heartrate Monitor", "text": "We present FinDroidHR, a novel gesture input technique for off-the-shelf smartwatches. Our technique is designed to detect 10 hand gestures on the hand wearing a smartwatch. The technique is enabled by analysing features of the Photoplethysmography (PPG) signal that optical heart-rate sensors capture. In a study with 20 participants, we show that FinDroidHR achieves 90.55% accuracy and 90.73% recall. Our work is the first study to explore the feasibility of using optical sensors on off-the-shelf wearable devices to recognise gestures.
Without requiring bespoke hardware, FinDroidHR can be readily used on existing smartwatches."} {"_id": "9f4a856aee19e6cddbace27be817770214a6fa4a", "title": "Humanoid robotics platforms developed in HRP", "text": "This paper presents a humanoid robotics platform that consists of a humanoid robot and an open architecture software platform developed in METI\u2019s Humanoid Robotics Project (HRP). The final version of the robot, called HRP-2, has a height of 1540 mm, a weight of 58 kg and 30 degrees of freedom. The software platform includes a dynamics simulator and motion controllers of the robot."} {"_id": "51077ea4ac322d8b6438d5acfb6c0012cb1bcfb1", "title": "Using web security scanners to detect vulnerabilities in web services", "text": "Although web services are becoming business-critical components, they are often deployed with critical software bugs that can be maliciously exploited. Web vulnerability scanners allow detecting security vulnerabilities in web services by stressing the service from the point of view of an attacker. However, research and practice show that different scanners have different performance on vulnerability detection. In this paper we present an experimental evaluation of security vulnerabilities in 300 publicly available web services. Four well-known vulnerability scanners have been used to identify security flaws in web service implementations. A large number of vulnerabilities have been observed, which confirms that many services are deployed without proper security testing. Additionally, the differences in the vulnerabilities detected, the high number of false positives (35% and 40% in two cases) and the low coverage (less than 20% for two of the scanners) observed highlight the limitations of web vulnerability scanners in detecting security vulnerabilities in web services."} {"_id": "2fddc5cdada7b44a598b3a4a76de52825350dd5d", "title": "Development of Generalized Photovoltaic Model Using MATLAB / SIMULINK", "text": "Taking sunlight irradiance and cell temperature into consideration, the output current and power characteristics of the PV model are simulated and optimized using the proposed model. This enables the dynamics of the PV power system to be easily simulated, analyzed, and optimized."} {"_id": "9270f2bf533897ab5711c65eb269561b39a2055e", "title": "Social-Aware Video Recommendation for Online Social Groups", "text": "Group recommendation plays a significant role in today's social media systems, where users form social groups to receive multimedia content together and interact with each other, instead of consuming the online content individually. Limitations of traditional group recommendation approaches are as follows. First, they usually infer group members\u2019 preferences by their historical behaviors, failing to capture inactive users\u2019 preferences from the sparse historical data. Second, relationships between group members are not studied by these approaches, which fail to capture the inherent personality of members in a group. To address these issues, we propose a social-aware group recommendation framework that jointly utilizes both social relationships and social behaviors to not only infer a group's preference, but also model the tolerance and altruism characteristics of group members. Based on the observation that the following relationship in the online social network reflects common interests of users, we propose a group preference model based on external experts of group members.
Furthermore, we model users\u2019 tolerance (willingness to receive content not preferred) and altruism (willingness to receive content preferred by friends). Finally, based on the group preference model, we design recommendation algorithms for users under different social contexts. Experimental results demonstrate the effectiveness of our approach, which significantly improves the recommendation accuracy against traditional approaches, especially in the cases of inactive group members."} {"_id": "427afd1b0ecbea2ffb1feaec7ef55234394c9814", "title": "ScreenerNet: Learning Curriculum for Neural Networks", "text": "We propose to learn a curriculum or a syllabus for supervised learning with deep neural networks. Specifically, we learn weights for each sample in training by an attached neural network, called ScreenerNet, to the original network and jointly train them in an end-to-end fashion. We show the networks augmented with our ScreenerNet achieve early convergence with better accuracy than the state-of-the-art rule-based curricular learning methods in extensive experiments using three popular vision datasets including MNIST, CIFAR10 and Pascal VOC2012, and a Cartpole task using Deep Q-learning."} {"_id": "4ab868acf51fd6d78ba2d15357de673f8ec0bad1", "title": "ICTs for Improving Patients Rehabilitation Research Techniques", "text": "The world population is rapidly aging and becoming a burden to health systems around the world. In this work we present a conceptual framework to encourage the research community to develop more comprehensive and adaptive ICT solutions for prevention and rehabilitation of chronic conditions in the daily life of the aging population and beyond health facilities. We first present an overview of current international standards in human functioning and disability, and how chronic conditions are interconnected in older age. We then describe innovative mobile and sensor technologies, predictive data analysis in healthcare, and game-based prevention and rehabilitation techniques. We then set forth a multidisciplinary approach for the personalized prevention and rehabilitation of chronic conditions using unobtrusive and pervasive sensors, interactive activities, and predictive analytics, which also eases the tasks of health-related researchers, caregivers and providers. Our proposal represents a conceptual basis for future research, in which much remains to be done in terms of standardization of technologies and health terminology, as well as data protection and privacy legislation."} {"_id": "78be1d50e1eb2c526f5e35f5820e195eec313101", "title": "An Evaluation of Parser Robustness for Ungrammatical Sentences", "text": "For many NLP applications that require a parser, the sentences of interest may not be well-formed. If the parser can overlook problems such as grammar mistakes and produce a parse tree that closely resembles the correct analysis for the intended sentence, we say that the parser is robust. This paper compares the performances of eight state-of-the-art dependency parsers on two domains of ungrammatical sentences: learner English and machine translation outputs. We have developed an evaluation metric and conducted a suite of experiments. 
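One plausible robustness score in the spirit of the parser-evaluation entry above, though not necessarily the authors' metric, compares the dependency parse of an ungrammatical sentence against the parse of its corrected counterpart, counting shared head attachments over aligned tokens:

```python
def attachment_overlap(parse_ungrammatical, parse_corrected, alignment):
    """Fraction of aligned tokens whose heads agree across the two parses.

    Each parse maps token index -> head index (0 = root); `alignment` maps
    token indices in the ungrammatical sentence to indices in the correction.
    """
    agree = 0
    for u_tok, c_tok in alignment.items():
        u_head = parse_ungrammatical[u_tok]
        c_head = parse_corrected[c_tok]
        # Heads agree if they are aligned to each other; the root (index 0)
        # of one parse only matches the root of the other.
        if alignment.get(u_head, 0 if u_head == 0 else None) == c_head:
            agree += 1
    return agree / len(alignment)

# "He go home" vs. corrected "He goes home": identity alignment, same structure.
ungram = {1: 2, 2: 0, 3: 2}      # He <- go, go = root, home <- go
correct = {1: 2, 2: 0, 3: 2}
print(attachment_overlap(ungram, correct, {1: 1, 2: 2, 3: 3}))  # -> 1.0
```

A parser scoring high on this kind of overlap produces, for the flawed input, nearly the analysis it would have produced for the intended sentence, which is what the entry calls robustness.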
Our analyses may help practitioners to choose an appropriate parser for their tasks, and help developers to improve parser robustness against ungrammatical sentences."} {"_id": "c7062c80bc4b13da80d4d10f9d9b273aa316f0fc", "title": "Microglia-mediated recovery from ALS-relevant motor neuron degeneration in a mouse model of TDP-43 proteinopathy", "text": "Though motor neurons selectively degenerate in amyotrophic lateral sclerosis, other cell types are likely involved in this disease. We recently generated rNLS8 mice in which human TDP-43 (hTDP-43) pathology could be reversibly induced in neurons and expected that microglia would contribute to neurodegeneration. However, only subtle microglial changes were detected during disease in the spinal cord, despite progressive motor neuron loss; microglia still reacted to inflammatory triggers in these mice. Notably, after hTDP-43 expression was suppressed, microglia dramatically proliferated and changed their morphology and gene expression profiles. These abundant, reactive microglia selectively cleared neuronal hTDP-43. Finally, when microgliosis was blocked during the early recovery phase using PLX3397, a CSF1R and c-kit inhibitor, rNLS8 mice failed to regain full motor function, revealing an important neuroprotective role for microglia. Therefore, reactive microglia exert neuroprotective functions in this amyotrophic lateral sclerosis model, and definition of the underlying mechanism could point toward novel therapeutic strategies. Using an inducible mouse model of sporadic ALS, Spiller et al. show that spinal microgliosis is not a major feature of TDP-43-triggered disease. Instead, microglia mediate TDP-43 clearance and motor recovery, suggesting a neuroprotective role in ALS."} {"_id": "0fce6f27385af907e0751bb1d0781eb9bc1e5359", "title": "MazeBase: A Sandbox for Learning from Games", "text": "This paper introduces an environment for simple 2D maze games, designed as a sandbox for machine learning approaches to reasoning and planning. Within it, we create 10 simple games based on algorithmic tasks (e.g. embodying simple if-then statements). We deploy a range of neural models (fully connected, convolutional network, memory network) on these games, with and without a procedurally generated curriculum. We show that these architectures can be trained with reinforcement to respectable performance on these tasks, but are still far from optimal, despite their simplicity. We also apply these models to games involving combat, including StarCraft, demonstrating their ability to learn non-trivial tactics which enable them to consistently beat the in-game AI."} {"_id": "157099d6ffd3ffca8cfca7955aff7c5f1a979ac9", "title": "Multilabel Text Classification for Automated Tag Suggestion", "text": "The increased popularity of tagging during the last few years can be mainly attributed to its embracing by most of the recently thriving user-centric content publishing and management Web 2.0 applications. However, tagging systems have some limitations that have led researchers to develop methods that assist users in the tagging process, by automatically suggesting an appropriate set of tags. 
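One generic way to realize such automatic suggestion, in line with the multilabel framing this entry adopts, is a one-vs-rest linear classifier over TF-IDF features; the corpus, tags, and threshold below are an illustrative sketch, not the challenge submission:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus: each post carries a *set* of tags (multilabel, not multiclass).
posts = ["how to train a neural network",
         "baking sourdough bread at home",
         "choosing learning rates for gradient descent",
         "tips for proofing bread dough"]
tags = [{"ml", "neural-nets"}, {"baking", "bread"}, {"ml"}, {"baking", "bread"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)              # one binary indicator column per tag

vec = TfidfVectorizer()
X = vec.fit_transform(posts)

# One independent binary classifier per tag.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

new_post = vec.transform(["training neural networks quickly"])
scores = clf.predict_proba(new_post)[0]
suggested = [tag for tag, p in zip(mlb.classes_, scores) if p >= 0.3]
print(suggested)                         # tags whose probability clears a threshold
```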
We have tried to model the automated tag suggestion problem as a multilabel text classification task in order to participate in the ECML/PKDD 2008 Discovery Challenge."} {"_id": "1edae9efb4f84352bb66095295cd94b2cddce00d", "title": "A taxonomy for user-healthcare robot interaction", "text": "This paper evaluates existing taxonomies aimed at characterizing the interaction between robots and their users and modifies them for health care applications. The modifications are based on existing robot technologies and user acceptance of robotics. Characterization of the user, or in this case the patient, is a primary focus of the paper, as they present a unique new role as robot users. While therapeutic and monitoring-related applications for robots are still relatively uncommon, we believe they will begin to grow, and thus it is important that the emerging relationship between robot and patient is well understood."} {"_id": "c91fed8c6bf32fccd4fc9148fb7b912191b8d962", "title": "Multiple Physical Layer Pipes performance for DVB-T2", "text": "The DVB-T2 terrestrial television standard is becoming increasingly important, and has been extensively studied and developed to provide many types of services with higher spectral efficiency and better performance. The Physical Layer Pipes in DVB-S2 are logical channels carrying one or more services, each with its own modulation scheme and robustness. The main changes are found in the physical layer, where DVB-T2 incorporates a new physical layer pipe (PLP). Each physical layer pipe contains an individual configuration of modulation, coding and interleaving. This new concept allows a transmission with multiple physical layer pipes where each service can be transmitted with a different physical layer configuration. The Advanced Television Systems Committee (ATSC3.0) standard will add value to broadcasting services, allowing extended reach by adding new business models, providing higher quality, improved accessibility, personalization and interactivity, and more flexible and efficient use of the spectrum."} {"_id": "ab4bcc8e1db4f80fa2a31905cf332c00379602b0", "title": "Precision CMOS current reference with process and temperature compensation", "text": "This paper presents a new first-order temperature compensated CMOS current reference. To achieve a compact architecture able to operate under low voltage with low power consumption, it is based on a self-biasing beta multiplier current generator. Temperature compensation is achieved by using, instead of an ordinary resistor, two triode transistors in parallel, acting as a negative and a positive temperature coefficient resistor; these generate a proportional-to-absolute-temperature current and a complementary-to-absolute-temperature current that can be added directly to attain a temperature-compensated current. Programmability is included to adjust the temperature coefficient and the reference current magnitude over process variations. Results for a 0.18 \u03bcm CMOS implementation show that the proposed 500 nA reference operates with supplies down to 1.2 V, achieving temperature drifts below 120 ppm/\u00b0C over the -40 to +120\u00b0C range."} {"_id": "403b8448589e0405ce50356d0c6be3916bd12ce1", "title": "Modified post-filter to recover modulation spectrum for HMM-based speech synthesis", "text": "This paper proposes a modified post-filter to recover a Modulation Spectrum (MS) in HMM-based speech synthesis.
To alleviate the over-smoothing effect, which is one of the major problems in HMM-based speech synthesis, the MS-based post-filter has been proposed. It recovers the utterance-level MS of the generated speech trajectory, and we have reported its benefit to quality improvement. However, this post-filter is not applicable to various lengths of speech parameter trajectories, such as phrases or segments, which are shorter than an utterance. To address this problem, we propose two modified post-filters: (1) a time-invariant filter with a simplified conversion form and (2) a segment-level post-filter that is applicable to a short-term parameter sequence. Furthermore, we also propose (3) a post-filter to recover the phoneme-level MS of HMM-state durations. Experimental results show that the modified post-filters also yield significant quality improvements in synthetic speech, as the conventional post-filter does."} {"_id": "e9c525679fed4dad85699d09b5ce1ccaffe8f11d", "title": "Fully convolutional network and sparsity-based dictionary learning for liver lesion detection in CT examinations", "text": ""} {"_id": "7d91d2944ed5b846739256029f2c9c79090fd0ca", "title": "TABLA: A unified template-based framework for accelerating statistical machine learning", "text": "A growing number of commercial and enterprise systems increasingly rely on compute-intensive Machine Learning (ML) algorithms. While the demand for these compute-intensive applications is growing, the performance benefits from general-purpose platforms are diminishing. Field Programmable Gate Arrays (FPGAs) provide a promising path forward to accommodate the needs of machine learning algorithms and represent an intermediate point between the efficiency of ASICs and the programmability of general-purpose processors. However, acceleration with FPGAs still requires long development cycles and extensive expertise in hardware design. To tackle this challenge, instead of designing an accelerator for a machine learning algorithm, we present TABLA, a framework that generates accelerators for a class of machine learning algorithms. The key is to identify the commonalities across a wide range of machine learning algorithms and utilize this commonality to provide a high-level abstraction for programmers. TABLA leverages the insight that many learning algorithms can be expressed as a stochastic optimization problem. Therefore, learning becomes solving an optimization problem using stochastic gradient descent that minimizes an objective function over the training data. The gradient descent solver is fixed while the objective function changes for different learning algorithms. TABLA provides a template-based framework to accelerate this class of learning algorithms. Therefore, a developer can specify the learning task by only expressing the gradient of the objective function using our high-level language. TABLA then automatically generates the synthesizable implementation of the accelerator for FPGA realization using a set of hand-optimized templates. We use TABLA to generate accelerators for ten different learning tasks targeted at a Xilinx Zynq FPGA platform. We rigorously compare the benefits of FPGA acceleration to multi-core CPUs (ARM Cortex A15 and Xeon E3) and many-core GPUs (Tegra K1, GTX 650 Ti, and Tesla K40) using real hardware measurements. TABLA-generated accelerators provide 19.4x and 2.9x average speedup over the ARM and Xeon processors, respectively.
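The TABLA abstract above rests on the premise that the stochastic-gradient-descent solver is fixed while only the objective's gradient changes per learning task; a minimal software sketch of that separation (the learning rate, epoch count, and the logistic-regression gradient are illustrative assumptions):

```python
import numpy as np

def sgd(grad, w0, data, lr=0.01, epochs=10):
    """Generic SGD loop: this solver stays fixed across learning tasks;
    only the task-specific gradient function `grad` changes."""
    w = np.array(w0, dtype=float)
    for _ in range(epochs):
        for x, y in data:
            w -= lr * grad(w, x, y)
    return w

# Example task-specific piece: the logistic-regression gradient.
logit_grad = lambda w, x, y: (1.0 / (1.0 + np.exp(-w @ x)) - y) * x
```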
These accelerators provide 17.57x, 20.2x, and 33.4x higher Performance-per-Watt in comparison to Tegra, GTX 650 Ti and Tesla, respectively. These benefits are achieved while programmers write fewer than 50 lines of code."} {"_id": "f5a1eeab058b91503d7ebb25fd3615480d96a639", "title": "Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review.", "text": "BACKGROUND\nSystematic literature reviews identify, select, appraise, and synthesize relevant literature on a particular topic. Typically, these reviews examine primary studies based on similar methods, e.g., experimental trials. In contrast, interest in a new form of review, known as mixed studies review (MSR), which includes qualitative, quantitative, and mixed methods studies, is growing. In MSRs, reviewers appraise studies that use different methods allowing them to obtain in-depth answers to complex research questions. However, appraising the quality of studies with different methods remains challenging. To facilitate systematic MSRs, a pilot Mixed Methods Appraisal Tool (MMAT) has been developed at McGill University (a checklist and a tutorial), which can be used to concurrently appraise the methodological quality of qualitative, quantitative, and mixed methods studies.\n\n\nOBJECTIVES\nThe purpose of the present study is to test the reliability and efficiency of a pilot version of the MMAT.\n\n\nMETHODS\nThe Center for Participatory Research at McGill conducted a systematic MSR on the benefits of Participatory Research (PR). Thirty-two PR evaluation studies were appraised by two independent reviewers using the pilot MMAT. Among these, 11 (34%) involved nurses as researchers or research partners. Appraisal time was measured to assess efficiency. Inter-rater reliability was assessed by calculating a kappa statistic based on dichotomized responses for each criterion. An appraisal score was determined for each study, which allowed the calculation of an overall intra-class correlation.\n\n\nRESULTS\nOn average, it took 14 min to appraise a study (excluding the initial reading of articles). Agreement between reviewers was moderate to perfect with regard to MMAT criteria, and substantial with respect to the overall quality score of appraised studies.\n\n\nCONCLUSION\nThe MMAT is unique, thus the reliability of the pilot MMAT is promising, and encourages further development."} {"_id": "23355a1cb7bb226654e2319dd5ce9443284a694b", "title": "YAGO2: exploring and querying world knowledge in time, space, context, and many languages", "text": "We present YAGO2, an extension of the YAGO knowledge base with focus on temporal and spatial knowledge. It is automatically built from Wikipedia, GeoNames, and WordNet, and contains nearly 10 million entities and events, as well as 80 million facts representing general world knowledge. An enhanced data representation introduces time and location as first-class citizens. The wealth of spatio-temporal information in YAGO can be explored either graphically or through a special time- and space-aware query language."} {"_id": "1920f1e482a7971b6e168df0354744c2544e4658", "title": "Stretchable Heater Using Ligand-Exchanged Silver Nanowire Nanocomposite for Wearable Articular Thermotherapy.", "text": "Thermal therapy is one of the most popular physiotherapies and it is particularly useful for treating joint injuries.
Conventional devices adapted for thermal therapy, including heat packs and wraps, have often caused discomfort to their wearers because of their rigidity and heavy weight. In our study, we developed a soft, thin, and stretchable heater by using a nanocomposite of silver nanowires and a thermoplastic elastomer. A ligand exchange reaction enabled the formation of a highly conductive and homogeneous nanocomposite. By patterning the nanocomposite with serpentine-mesh structures, conformal lamination of devices on curvilinear joints and effective heat transfer even during motion were achieved. The combination of homogeneous conductive elastomer, stretchable design, and a custom-designed electronic band created a novel wearable system for long-term, continuous articular thermotherapy."} {"_id": "09d8995d289fd31a15df47c824a9fdb79114a169", "title": "Manifold Gaussian Processes for regression", "text": "Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness assumptions on the structure of the function to be modeled. To model complex and non-differentiable functions, these smoothness assumptions are often too restrictive. One way to alleviate this limitation is to find a different representation of the data by introducing a feature space. This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task. In this paper, we propose Manifold Gaussian Processes, a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space. The Manifold GP is a full GP and allows learning data representations that are useful for the overall regression task. As a proof-of-concept, we evaluate our approach on complex non-smooth functions where standard GPs perform poorly, such as step functions and robotics tasks with contacts."} {"_id": "192687300b76bca25d06744b6586f2826c722645", "title": "Deep Gaussian Processes", "text": "In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples."} {"_id": "2cac0942a692c3dbb46bcf826d71d202ab0f2e02", "title": "Variational Auto-encoded Deep Gaussian Processes", "text": "We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron.
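To make the "GP regression on a learned feature space" idea from the Manifold Gaussian Processes abstract above concrete, here is a minimal mean-prediction sketch in which the feature map is a fixed placeholder; the paper learns the map jointly with the GP, and the RBF kernel, noise level, and `np.tanh` map below are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-feature matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(Xtr, ytr, Xte, feature_map, noise=1e-2):
    """GP posterior mean computed on transformed inputs M(x)."""
    Ftr, Fte = feature_map(Xtr), feature_map(Xte)
    K = rbf(Ftr, Ftr) + noise * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)
    return rbf(Fte, Ftr) @ alpha

# e.g. mean = gp_predict(Xtr, ytr, Xte, feature_map=np.tanh)
```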
The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization."} {"_id": "3bae80eca92a6e607cacdf03da393a1059c0d062", "title": "A Unifying View of Sparse Approximate Gaussian Process Regression", "text": "We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear, theoretically justified ranking of the closeness of the known approximations to the corresponding full GPs. Finally we point directly to designs of new, better sparse approximations, combining the best of the existing strategies, within attractive computational constraints."} {"_id": "722fcc35def20cfcca3ada76c8dd7a585d6de386", "title": "Caffe: Convolutional Architecture for Fast Feature Embedding", "text": "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments.\n Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia."} {"_id": "f801630d99835bea79d587cce0ee6b816bbaafd6", "title": "Multilayer networks: an architecture framework", "text": "We present an architecture framework for the control and management of multilayer networks and associated advanced network services. This material is identified as an \"architecture framework\" to emphasize its role in providing guidance and structure to our subsequent detailed architecture, design, and implementation activities. Our work is motivated by requirements from the Department of Energy science application community for real-time on-demand science-domain-specific network services and resource provisioning. We also summarize the current state of deployments and use of network services based on this multilayer network architecture framework."} {"_id": "d3acf7f37c003cc77e9d51577ce5dce3a6700ad3", "title": "IoT for Healthcare", "text": "The Internet of Things (IoT) is the network of physical objects or things embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data.
The Internet of Things allows objects to be sensed and controlled remotely across existing network infrastructure. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. Smart healthcare plays a significant role in healthcare applications through embedding sensors and actuators in patients and their medicine for monitoring and tracking purposes. In clinical care, the IoT is used to monitor patients\u2019 physiological status through sensors, collecting and analyzing their information and then sending the analyzed data remotely to processing centers, which take suitable actions. It is useful not only for patients but also for healthy people, who can check their health status using wearable devices with sensors."} {"_id": "d6de518c6f78b406ed5e3996246acfc33b4a3918", "title": "User Experience in Service Design: A Case Study from Algeria", "text": "The design of crisis management services is crucial for emerging countries such as Algeria. It must take into account the experiences of diverse stakeholders. The authors investigate user experience (UX) practices from a service design perspective and describe a case study from Algeria exploring UX-driven service design for crisis management."} {"_id": "2de974b0e20c9dd5215fb3d20698eb0bcdf0995b", "title": "Towards Accurate Predictions of Customer Purchasing Patterns", "text": "A range of algorithms was used to classify online retail customers of a UK company using historical transaction data. The predictive capabilities of the classifiers were assessed using linear regression, Lasso and regression trees. Unlike most related studies, classifications were based upon specific and marketing focused customer behaviours. Prediction accuracy on untrained customers was generally better than 80%. The models implemented (and compared) for classification were: Logistic Regression, Quadratic Discriminant Analysis, Linear SVM, RBF SVM, Gaussian Process, Decision Tree, Random Forest and Multi-layer Perceptron (Neural Network). Postcode data was then used to classify solely on demographics derived from the UK Land Registry and similar public data sources. Prediction accuracy remained better than 60%."} {"_id": "1512f2115eaeeb96d05d67c694e4d0e9e77af23e", "title": "Discriminative learning of visual words for 3D human pose estimation", "text": "This paper addresses the problem of recovering 3D human pose from a single monocular image, using a discriminative bag-of-words approach. In previous work, the visual words are learned by unsupervised clustering algorithms. They capture the most common patterns and are good features for coarse-grain recognition tasks like object classification. But for those tasks which deal with subtle differences such as pose estimation, such representation may lack the needed discriminative power. In this paper, we propose to jointly learn the visual words and the pose regressors in a supervised manner. More specifically, we learn an individual distance metric for each visual word to optimize the pose estimation performance.
The learned metrics rescale the visual words to suppress unimportant dimensions such as those corresponding to background. Another contribution is that we design an appearance and position context (APC) local descriptor that achieves both selectivity and invariance while requiring no background subtraction. We test our approach on both a quasi-synthetic dataset and a real dataset (HumanEva) to verify its effectiveness. Our approach also achieves fast computational speed thanks to the integral histograms used in APC descriptor extraction and fast inference of pose regressors."} {"_id": "859be8c9a179cb0d231e62ca07b9f2569035487f", "title": "A graph-based recommender system for digital library", "text": "Research shows that recommendations comprise a valuable service for users of a digital library [11]. While most existing recommender systems rely either on a content-based approach or a collaborative approach to make recommendations, there is potential to improve recommendation quality by using a combination of both approaches (a hybrid approach). In this paper, we report how we tested the idea of using a graph-based recommender system that naturally combines the content-based and collaborative approaches. Due to the similarity between our problem and a concept retrieval task, a Hopfield net algorithm was used to exploit high-degree book-book, user-user and book-user associations. Sample hold-out testing and preliminary subject testing were conducted to evaluate the system, by which it was found that the system gained improvement with respect to both precision and recall by combining content-based and collaborative approaches. However, no significant improvement was observed by exploiting high-degree associations."} {"_id": "38d4dce2c40a329aef2a82b003bd17551ac8439f", "title": "Real-time people counting system using video camera", "text": "In this MTech thesis, experiments are carried out on a people counting system in an effort to enhance accuracy when separating and counting groups of people and non-human objects. This system features automatic color equalization, adaptive background subtraction, a shadow detection algorithm and Kalman tracking. The aim is to develop a reliable and accurate computer vision alternative to sensor- or contact-based mechanisms. The problem for many computer-vision-based systems is making a good separation between background and foreground, and teaching the computer what parts make up a scene. We also want to find features to classify the foreground moving objects, an easy task for a human, but a complex task for a computer. Video has been captured with a bird's-eye view close to one of the entrances at the school, about ten meters above the floor. From this video, troublesome parts have been selected to test the changes made to the algorithms and program code."} {"_id": "4d1d499963f37f8fb60654ccb8a26bb3f50f37e3", "title": "Bicycle dynamics and control: adapted bicycles for education and research", "text": "In this paper, the dynamics of bicycles is analyzed from the perspective of control. Models of different complexity are presented, starting with simple ones and ending with more realistic models generated from multibody software. Models that capture essential behavior such as self-stabilization as well as models that demonstrate difficulties with rear wheel steering are considered. Experiences using bicycles in control education along with suggestions for fun and thought-provoking experiments with proven student attraction are presented.
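A rough sketch of the pipeline stages named in the people-counting abstract above, built from OpenCV's stock components; the thresholds, minimum blob area, and the omission of the Kalman tracking stage are simplifying assumptions:

```python
import cv2

def count_foreground_blobs(video_path, min_area=500):
    """Adaptive background subtraction + contour counting per frame."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    counts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Shadows are marked 127 by MOG2; keep only confident foreground.
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        counts.append(sum(cv2.contourArea(c) > min_area for c in contours))
    cap.release()
    return counts
```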
Finally, bicycles and clinical programs designed for children with disabilities are described."} {"_id": "0aafa4593b8bb496032a5b7bc7ab25d1a383d764", "title": "A neostriatal habit learning system in humans.", "text": "Amnesic patients and nondemented patients with Parkinson's disease were given a probabilistic classification task in which they learned which of two outcomes would occur on each trial, given the particular combination of cues that appeared. Amnesic patients exhibited normal learning of the task but had severely impaired declarative memory for the training episode. In contrast, patients with Parkinson's disease failed to learn the probabilistic classification task, despite having intact memory for the training episode. This double dissociation shows that the limbic-diencephalic regions damaged in amnesia and the neostriatum damaged in Parkinson's disease support separate and parallel learning systems. In humans, the neostriatum (caudate nucleus and putamen) is essential for the gradual, incremental learning of associations that is characteristic of habit learning. The neostriatum is important not just for motor behavior and motor learning but also for acquiring nonmotor dispositions and tendencies that depend on new associations."} {"_id": "76365e761078ce82463988df59a8a0bce0d8d4d3", "title": "On Removing Routing Protocol from Future Wireless Networks: A Real-time Deep Learning Approach for Intelligent Traffic Control", "text": "Recently, deep learning has appeared as a breakthrough machine learning technique for various areas in computer science as well as other disciplines. However, the application of deep learning for network traffic control in wireless/heterogeneous networks is a relatively new area. With the evolution of wireless networks, efficient network traffic control such as routing methodology in the wireless backbone network appears as a key challenge. This is because the conventional routing protocols do not learn from their previous experiences regarding network abnormalities such as congestion and so forth. Therefore, an intelligent network traffic control method is essential to avoid this problem. In this article, we address this issue and propose a new, real-time deep learning based intelligent network traffic control method, exploiting deep Convolutional Neural Networks (deep CNNs) with uniquely characterized inputs and outputs to represent the considered Wireless Mesh Network (WMN) backbone. Simulation results demonstrate that our proposal achieves significantly lower average delay and packet loss rate compared to those observed with the existing routing methods. We particularly focus on our proposed method's independence from existing routing protocols, which makes it a potential candidate to remove routing protocol(s) from future wired/ wireless networks."} {"_id": "24f01f041a3bbc5ffa8e013398808e6ac2b63763", "title": "Development of a 6-DOF manipulator actuated with a straight-fiber-type artificial muscle", "text": "Robots have become an integral part of human life, and the relationship between humans and robots has grown closer. Thus, it is desired that robots have characteristics similar to humans. In this context, we paid attention to an artificial muscle actuator. We used straight-fiber-type artificial muscles, derived from the McKibben type, which have excellent characteristics with respect to the contraction rate and force. We developed a 6-DOF manipulator actuated by a straight fiber artificial muscle. 
Furthermore, we tried to control the manipulator position by considering its characteristics."} {"_id": "5c642f42f7c4057a63de64a9f9c1feeb9eac6a50", "title": "From smart to smarter cities: Bridging the dimensions of technology and urban planning", "text": "This paper discusses the importance of urban form in smart cities. Development of urban form has been a major concern for urban planners, designers and policy makers. Sprawl is one mode of urban development, and it is not considered conducive to a good living standard. The compact form arose in opposition to urban sprawl. Form-based codes (FBCs) are a tool for urban planning and design that attempts to mitigate the problem of urban sprawl, whereas conventional zoning reinforces it. This paper highlights the importance of physical place in the smart city and how FBCs attempt to create a better physical place than conventional zoning. Our study shows that FBCs can lead to smart growth, which can be a solution to bridge technology and urban planning together in a single platform."} {"_id": "118e54f8f100564d0309c97e1062b2f7809186e8", "title": "How to learn a graph from smooth signals", "text": "We propose a framework that learns the graph structure underlying a set of smooth signals. Given $X \in \mathbb{R}^{m \times n}$ whose rows reside on the vertices of an unknown graph, we learn the edge weights $w \in \mathbb{R}_+^{m(m-1)/2}$ under the smoothness assumption that $\mathrm{tr}(X^\top L X)$ is small. We show that the problem is a weighted $\ell_1$ minimization that leads to naturally sparse solutions. We point out how known graph learning or construction techniques fall within our framework and propose a new model that performs better than the state of the art in many settings. We present efficient, scalable primal-dual based algorithms for both our model and the previous state of the art, and evaluate their performance on artificial and real data."} {"_id": "557e5e38a4c5b95e2bc86f491b03e5c8c7add857", "title": "Thin-Slicing for Pose: Learning to Understand Pose without Explicit Pose Estimation", "text": "We address the problem of learning a pose-aware, compact embedding that projects images with similar human poses to be placed close-by in the embedding space. The embedding function is built on a deep convolutional network, and trained with triplet-based rank constraints on real image data. This architecture allows us to learn a robust representation that captures differences in human poses by effectively factoring out variations in clothing, background, and imaging conditions in the wild. For a variety of pose-related tasks, the proposed pose embedding provides a cost-efficient and natural alternative to explicit pose estimation, circumventing challenges of localizing body joints. We demonstrate the efficacy of the embedding on pose-based image retrieval and action recognition problems."} {"_id": "fd50fa6954e1f6f78ca66f43346e7e86b196b137", "title": "Regions, Periods, Activities: Uncovering Urban Dynamics via Cross-Modal Representation Learning", "text": "With the ever-increasing urbanization process, systematically modeling people\u2019s activities in the urban space is being recognized as a crucial socioeconomic task. This task was nearly impossible years ago due to the lack of reliable data sources, yet the emergence of geo-tagged social media (GTSM) data sheds new light on it. Recently, there have been fruitful studies on discovering geographical topics from GTSM data.
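The smoothness criterion in the graph-learning abstract above can be checked numerically: $\mathrm{tr}(X^\top L X)$ equals a weighted sum of squared differences across edges. A small sketch of the two equivalent forms, following the abstract's convention of one signal row per vertex:

```python
import numpy as np

def laplacian_smoothness(X, W):
    """tr(X^T L X) with L = D - W the combinatorial graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(X.T @ L @ X)

def pairwise_form(X, W):
    """Equivalent form: 0.5 * sum_ij W_ij * ||x_i - x_j||^2."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return 0.5 * (W * d2).sum()
```

Both functions return the same value for any symmetric non-negative weight matrix, which is why small smoothness values favor strong edges only between vertices whose signal rows are similar.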
However, their high computational costs and strong distributional assumptions about the latent topics hinder them from fully unleashing the power of GTSM. To bridge the gap, we present CrossMap, a novel cross-modal representation learning method that uncovers urban dynamics with massive GTSM data. CrossMap first employs an accelerated mode seeking procedure to detect spatiotemporal hotspots underlying people\u2019s activities. Those detected hotspots not only address spatiotemporal variations, but also largely alleviate the sparsity of the GTSM data. With the detected hotspots, CrossMap then jointly embeds all spatial, temporal, and textual units into the same space using two different strategies: one is reconstruction-based and the other is graph-based. Both strategies capture the correlations among the units by encoding their co-occurrence and neighborhood relationships, and learn low-dimensional representations to preserve such correlations. Our experiments demonstrate that CrossMap not only significantly outperforms state-of-the-art methods for activity recovery and classification, but also achieves much better efficiency."} {"_id": "57b7af14a6aff0a942755abd3a935bf18f19965b", "title": "A 3.4 W Digital-In Class-D Audio Amplifier in 0.14 $\mu $m CMOS", "text": "In this paper a class-D audio amplifier for mobile applications is presented, realized in a 0.14 \u03bcm CMOS technology tailored for mobile applications. The amplifier has a simple PDM-based digital interface for audio and control that requires only two pins and enables assembly in 9-bump WL-CSP. The complete audio path is discussed that consists of a Parser, Digital PWM controller, 1-bit DA-converters, analog feedback loop and the Class-D power stage. A reconfigurable gate driver is used that reduces quiescent current consumption and radiated emission."} {"_id": "ce8d99e5b270d15dc09422c08c500c5d86ed3703", "title": "A new paradigm of human gait analysis with Kinect", "text": "Analysis of human gait helps to find an intrinsic gait signature through which ubiquitous human identification and medical disorder problems can be investigated in a broad spectrum. The gait biometric provides an unobtrusive feature by which video gait data can be captured at a larger distance without prior awareness of the subject. In this paper, a new technique is presented for studying human gait with the Kinect Xbox device. It enables us to minimize segmentation errors with an automated background subtraction technique. A closely similar human skeleton model can be generated from background-subtracted gait images, even when altered by covariate conditions such as changes in walking speed and variations in clothing type. The gait signatures are captured from joint angle trajectories of the left hip, left knee, right hip and right knee of the subject's skeleton model. Experimental results on Kinect gait data have been compared with our in-house sensor-based biometric suit, the Intelligent Gait Oscillation Detector (IGOD). An endeavor has been made to investigate whether this sensor-based biometric suit can be replaced with a Kinect device for the development of a robust gait identification system. Fisher discriminant analysis has been applied to the training gait signatures to examine the discriminatory power of the feature vector.
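As an illustration of the joint-angle gait signatures described in the Kinect abstract above, a minimal sketch of the angle at one joint; the hip-knee-ankle triple and degree units are assumptions for illustration:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, e.g., the knee angle given 3-D
    positions of hip (a), knee (b), and ankle (c) from a skeleton frame."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Evaluating this per frame over a walking sequence yields the joint angle trajectories that serve as the gait signature.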
The Na\u00efve Bayes classifier demonstrates encouraging classification results, with error estimates, on the limited dataset captured by the Kinect sensor."} {"_id": "484036c238df3645c038546df722d80a0fd4f642", "title": "Employee Engagement From a Self-Determination Theory Perspective", "text": "Macey and Schneider (2008) draw on numerous theories to explain what engagement is and how it is similar to and different from related constructs in the organizational behavior literature. As a result, we now have a better understanding of some of the key \u2018\u2018components\u2019\u2019 of engagement. What appears to be missing, however, is a strong unifying theory to guide research and practice. We believe that such a theory exists in the form of self-determination theory (SDT; Deci & Ryan, 1985; Ryan & Deci, 2000) and its various corollaries, self-concordance theory (SCT; Sheldon & Elliot, 1999), hierarchical theory (Vallerand, 1997), and passion theory (Vallerand et al., 2003). Although Macey and Schneider acknowledged the relevance of SDT and SCT, we believe that much greater use of these theories could be made to justify and extend their conceptual model."} {"_id": "3f51289c8d4246525bc27be17c4e69b924e8ad1c", "title": "Cyber and Physical Security Vulnerability Assessment for IoT-Based Smart Homes", "text": "The Internet of Things (IoT) is an emerging paradigm focusing on the connection of devices, objects, or \"things\" to each other, to the Internet, and to users. IoT technology is anticipated to become an essential requirement in the development of smart homes, as it offers convenience and efficiency to home residents so that they can achieve better quality of life. Application of the IoT model to smart homes, by connecting objects to the Internet, poses new security and privacy challenges in terms of the confidentiality, authenticity, and integrity of the data sensed, collected, and exchanged by the IoT objects. These challenges make smart homes extremely vulnerable to different types of security attacks, resulting in IoT-based smart homes being insecure. Therefore, it is necessary to identify the possible security risks to develop a complete picture of the security status of smart homes. This article applies the operationally critical threat, asset, and vulnerability evaluation (OCTAVE) methodology, known as OCTAVE Allegro, to assess the security risks of smart homes. The OCTAVE Allegro method focuses on information assets and considers different information containers such as databases, physical papers, and humans. The key goals of this study are to highlight the various security vulnerabilities of IoT-based smart homes, to present the risks to home inhabitants, and to propose approaches to mitigating the identified risks. The research findings can be used as a foundation for improving the security requirements of IoT-based smart homes."} {"_id": "f9f5f97f3ac2c4d071bbf6cff1ee2d6ceaf9338b", "title": "Handshaking, gender, personality, and first impressions.", "text": "Although people's handshakes are thought to reflect their personality and influence our first impressions of them, these relations have seldom been formally investigated. One hundred twelve participants had their hand shaken twice by 4 trained coders (2 men and 2 women) and completed 4 personality measures. The participants' handshakes were stable and consistent across time and coders. There were also gender differences on most of the handshaking characteristics.
A firm handshake was related positively to extraversion and emotional expressiveness and negatively to shyness and neuroticism; it was also positively related to openness to experience, but only for women. Finally, handshake characteristics were related to the impressions of the participants formed by the coders. These results demonstrate that personality traits, assessed through self-report, can predict specific behaviors assessed by trained observers. The pattern of relations among openness, gender, handshaking, and first impressions suggests that a firm handshake may be an effective form of self-promotion for women."} {"_id": "5c6f2becfb309e5b1323857bddc2c8fbc0c0ace5", "title": "Large-Scale Network Dysfunction in Major Depressive Disorder: A Meta-analysis of Resting-State Functional Connectivity.", "text": "IMPORTANCE\nMajor depressive disorder (MDD) has been linked to imbalanced communication among large-scale brain networks, as reflected by abnormal resting-state functional connectivity (rsFC). However, given variable methods and results across studies, identifying consistent patterns of network dysfunction in MDD has been elusive.\n\n\nOBJECTIVE\nTo investigate network dysfunction in MDD through a meta-analysis of rsFC studies.\n\n\nDATA SOURCES\nSeed-based voxelwise rsFC studies comparing individuals with MDD with healthy controls (published before June 30, 2014) were retrieved from electronic databases (PubMed, Web of Science, and EMBASE) and authors contacted for additional data.\n\n\nSTUDY SELECTION\nTwenty-seven seed-based voxel-wise rsFC data sets from 25 publications (556 individuals with MDD and 518 healthy controls) were included in the meta-analysis.\n\n\nDATA EXTRACTION AND SYNTHESIS\nCoordinates of seed regions of interest and between-group effects were extracted. Seeds were categorized into seed-networks by their location within a priori functional networks. Multilevel kernel density analysis of between-group effects identified brain systems in which MDD was associated with hyperconnectivity (increased positive or reduced negative connectivity) or hypoconnectivity (increased negative or reduced positive connectivity) with each seed-network.\n\n\nRESULTS\nMajor depressive disorder was characterized by hypoconnectivity within the frontoparietal network, a set of regions involved in cognitive control of attention and emotion regulation, and hypoconnectivity between frontoparietal systems and parietal regions of the dorsal attention network involved in attending to the external environment. Major depressive disorder was also associated with hyperconnectivity within the default network, a network believed to support internally oriented and self-referential thought, and hyperconnectivity between frontoparietal control systems and regions of the default network. Finally, the MDD groups exhibited hypoconnectivity between neural systems involved in processing emotion or salience and midline cortical regions that may mediate top-down regulation of such functions.\n\n\nCONCLUSIONS AND RELEVANCE\nReduced connectivity within frontoparietal control systems and imbalanced connectivity between control systems and networks involved in internal or external attention may reflect depressive biases toward internal thoughts at the cost of engaging with the external world. Meanwhile, altered connectivity between neural systems involved in cognitive control and those that support salience or emotion processing may relate to deficits regulating mood. 
These findings provide an empirical foundation for a neurocognitive model in which network dysfunction underlies core cognitive and affective abnormalities in depression."} {"_id": "582ea307db25c5764e7d2ed82c4846757f4e95d7", "title": "Greedy Function Approximation: A Gradient Boosting Machine", "text": "Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent \"boosting\" paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least-absolute-deviation, and Huber-M loss functions for regression, and multi-class logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are decision trees, and tools for interpreting such \"TreeBoost\" models are presented. Gradient boosting of decision trees produces competitive, highly robust, interpretable procedures for regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire (1996), and Friedman, Hastie, and Tibshirani (1998) are discussed. 1 Function estimation. In the function estimation problem one has a system consisting of a random \"output\" or \"response\" variable $y$ and a set of random \"input\" or \"explanatory\" variables $x = \{x_1, \ldots, x_n\}$. Given a \"training\" sample $\{y_i, x_i\}_1^N$ of known $(y, x)$-values, the goal is to find a function $F^*(x)$ that maps $x$ to $y$, such that over the joint distribution of all $(y, x)$-values, the expected value of some specified loss function $\Psi(y, F(x))$ is minimized: $F^*(x) = \arg\min_{F(x)} E_{y,x}\,\Psi(y, F(x)) = \arg\min_{F(x)} E_x\,[E_y(\Psi(y, F(x))) \mid x]$. (1) Frequently employed loss functions $\Psi(y, F)$ include squared-error $(y - F)^2$ and absolute error $|y - F|$ for $y \in \mathbb{R}$ (regression), and negative binomial log-likelihood, $\log(1 + e^{-2yF})$, when $y \in \{-1, 1\}$ (classification). A common procedure is to take $F(x)$ to be a member of a parameterized class of functions $F(x; P)$, where $P = \{P_1, P_2, \ldots\}$ is a set of parameters. In this paper we focus on \"additive\" expansions of the form"} {"_id": "561f750a5a168283b14271abfb4331566f1813f4", "title": "FPGA implementations of fast fourier transforms for real-time signal and image processing", "text": "Applications based on Fast Fourier Transform (FFT) such as signal and image processing require high computational power, plus the ability to experiment with algorithms. Reconfigurable hardware devices in the form of Field Programmable Gate Arrays (FPGAs) have been proposed as a way of obtaining high performance at an economical price. At present, however, users must program FPGAs at a very low level and have a detailed knowledge of the architecture of the device being used. To try to reconcile the dual requirements of high performance and ease of development, this paper reports on the design and realisation of a High Level framework for the implementation of 1-D and 2-D FFTs for real-time applications. Results show that the parallel implementation of 2-D FFT achieves virtually linear speed-up and real-time performance for large matrix sizes.
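A compact numerical sketch of the least-squares gradient-boosting recipe formalized in the Friedman abstract above, using small regression trees as the additive components; the stage count, shrinkage factor, and tree depth are assumed hyperparameters:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_ls(X, y, n_stages=100, learning_rate=0.1, max_depth=2):
    """Each stage fits a tree to the residuals y - F, i.e., the negative
    gradient of the squared-error loss 0.5 * (y - F)^2 at the current F."""
    F = np.full(len(y), y.mean())  # F_0: constant minimizing squared error
    trees = []
    for _ in range(n_stages):
        residuals = y - F
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        F += learning_rate * tree.predict(X)  # steepest-descent step in function space
        trees.append(tree)
    return y.mean(), trees

def predict(F0, trees, X, learning_rate=0.1):
    return F0 + learning_rate * sum(t.predict(X) for t in trees)
```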
Finally, an FPGA-based parametrisable environment based on the developed parallel 2-D FFT architecture is presented as a solution for frequency-domain image filtering applications."} {"_id": "7911ab33185a881a673e2c9218ec2e7ebf86cf62", "title": "Priors for people tracking from small training sets", "text": "We advocate the use of scaled Gaussian process latent variable models (SGPLVM) to learn prior models of 3D human pose for 3D people tracking. The SGPLVM simultaneously optimizes a low-dimensional embedding of the high-dimensional pose data and a density function that both gives higher probability to points close to training data and provides a nonlinear probabilistic mapping from the low-dimensional latent space to the full-dimensional pose space. The SGPLVM is a natural choice when only small amounts of training data are available. We demonstrate our approach with two distinct motions, golfing and walking. We show that the SGPLVM sufficiently constrains the problem such that tracking can be accomplished with straightforward deterministic optimization."} {"_id": "70e16c6565e6e9debbeea09c1631d88cc52b807e", "title": "Three dual polarized 2.4GHz microstrip patch antennas for active antenna and in-band full duplex applications", "text": "This paper presents the design, implementation and interport isolation performance evaluation of three dual port, dual polarized 2.4GHz microstrip patch antennas. Input matching and interport isolation (both DC and RF) performance of the implemented antennas has been compared by measuring the input return losses (S11, S22) and interport isolation (S12) respectively at 2.4GHz. Two implemented single layer antennas provide around 40dB RF interport isolation, while the multilayer antenna has 60dB isolation between transmit and receive ports at centre frequency with DC isolated ports. The multilayer antenna provides more than 55dB interport isolation over the antenna's 10 dB input impedance bandwidth of 50MHz."} {"_id": "1aa5a8ad5b7031ba39e1dc0537484694364a1312", "title": "Evaluating Color Descriptors for Object and Scene Recognition", "text": "Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific.
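To illustrate the frequency-domain image filtering application named in the FPGA FFT abstract above, a NumPy sketch of a 2-D FFT low-pass filter; the circular mask and cutoff radius are assumptions, and an FPGA implementation would pipeline the same transform in hardware:

```python
import numpy as np

def fft_lowpass(image, cutoff):
    """Low-pass filter an image: forward 2-D FFT, zero out high
    frequencies outside a circular mask, inverse FFT back."""
    F = np.fft.fftshift(np.fft.fft2(image))
    r, c = image.shape
    Y, X = np.ogrid[:r, :c]
    mask = (Y - r // 2) ** 2 + (X - c // 2) ** 2 <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```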
Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge."} {"_id": "21cbb7e58e8f3e5912b32df3e75b004e5a0c00cc", "title": "Automatic annotation of human actions in video", "text": "This paper addresses the problem of automatic temporal annotation of realistic human actions in video using minimal manual supervision. To this end we consider two associated problems: (a) weakly-supervised learning of action models from readily available annotations, and (b) temporal localization of human actions in test videos. To avoid the prohibitive cost of manual annotation for training, we use movie scripts as a means of weak supervision. Scripts, however, provide only implicit, noisy, and imprecise information about the type and location of actions in video. We address this problem with a kernel-based discriminative clustering algorithm that locates actions in the weakly-labeled training data. Using the obtained action samples, we train temporal action detectors and apply them to locate actions in the raw video data. Our experiments demonstrate that the proposed method for weakly-supervised learning of action models leads to significant improvement in action detection. We present detection results for three action classes in four feature length movies with challenging and realistic video data."} {"_id": "4fa2b00f78b2a73b63ad014f3951ec902b8b24ae", "title": "Semi-supervised hashing for scalable image retrieval", "text": "Large scale image search has recently attracted considerable attention due to easy availability of huge amounts of data. Several hashing methods have been proposed to allow approximate but highly efficient search. Unsupervised hashing methods show good performance with metric distances but, in image search, semantic similarity is usually given in terms of labeled pairs of images. There exist supervised hashing methods that can handle such semantic similarity but they are prone to overfitting when labeled data is small or noisy. Moreover, these methods are usually very slow to train. In this work, we propose a semi-supervised hashing method that is formulated as minimizing empirical error on the labeled data while maximizing variance and independence of hash bits over the labeled and unlabeled data. The proposed method can handle both metric as well as semantic similarity. The experimental results on two large datasets (up to one million samples) demonstrate its superior performance over state-of-the-art supervised and unsupervised methods."} {"_id": "6a7c63a73724c0ca68b1675e256bb8b9a35c94f4", "title": "Investigating Causal Relations by Econometric Models and Cross-spectral Methods", "text": ""}
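A sketch of the kind of objective described in the semi-supervised hashing abstract above: an empirical fit term over labeled similar/dissimilar pairs plus a variance term over all data. The linear hash family, trade-off weight, and the spectral-style relaxation of the variance term are assumptions; the paper's exact formulation may differ:

```python
import numpy as np

def ssh_objective(W, X, pairs, labels, eta=1.0):
    """J(W) = sum over labeled pairs (i, j) of l_ij * h(x_i) . h(x_j)
    (l_ij = +1 similar, -1 dissimilar) + eta * relaxed bit variance;
    a semi-supervised hashing method would maximize this over W."""
    H = np.sign(X @ W)  # hash codes in {-1, +1}, one row per sample
    fit = sum(l * (H[i] @ H[j]) for (i, j), l in zip(pairs, labels))
    variance = np.trace(W.T @ (X.T @ X) @ W)  # sign() relaxed away
    return fit + eta * variance
```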
{"_id": "d71c84fbaff0d7f2cbcf2f03630e189c58939a4a", "title": "Semantic analysis of soccer video using dynamic Bayesian network", "text": "Video semantic analysis is formulated based on the low-level image features and the high-level knowledge which is encoded in abstract, nongeometric representations. This paper introduces a semantic analysis system based on Bayesian network (BN) and dynamic Bayesian network (DBN). It is validated in the particular domain of soccer game videos. Based on BN/DBN, it can identify the special events in soccer games such as goal event, corner kick event, penalty kick event, and card event. The video analyzer extracts the low-level evidences, whereas the semantic analyzer uses BN/DBN to interpret the high-level semantics. Different from previous shot-based semantic analysis approaches, the proposed semantic analysis is frame-based: for each input frame, it provides the current semantics of the event nodes as well as the hidden nodes. Another contribution is that the BN and DBN are automatically generated by the training process instead of being determined ad hoc. The last contribution is that we introduce a so-called temporal intervening network to improve the accuracy of the semantics output."} {"_id": "430fb2aab098ea8ba737d1f1b98a31344d152308", "title": "A Flexible Model-Driven Game Development Approach", "text": "Game developers are facing an increasing demand for new games every year. Game development tools can be of great help, but require highly specialized professionals. Also, just as any software development effort, game development has some challenges. Model-Driven Game Development (MDGD) is suggested as a means to solve some of these challenges, but with a loss in flexibility. We propose an MDGD approach that combines multiple domain-specific languages (DSLs) with design patterns to provide flexibility and allow generated code to be integrated with manual code. After experimentation, we observed that, with the approach, less experienced developers can create games faster and more easily, and the product of code generation can be customized with manually written code, providing flexibility. However, with MDGD, developers become less familiar with the code, making manual codification more difficult."} {"_id": "e83c5de4aa7ab7cd71c5fe39e64c828fba6ae645", "title": "Matchmaking in multi-player on-line games: studying user traces to improve the user experience", "text": "Designing and implementing a quality matchmaking service for Multiplayer Online Games requires an extensive knowledge of the habits, behaviors and expectations of the players. Gathering and analyzing traces of real games offers insight on these matters, but game server providers are very protective of such data in order to deter possible reuse by the competition and to prevent cheating. We circumvented this issue by gathering public data from a League of Legends server (information over more than 28 million game sessions).
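The entry above on investigating causal relations by econometric models is the classic source of the Granger causality test; a toy sketch with statsmodels, where the synthetic series and lag order are assumptions:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(500)  # y tracks x with a one-step lag

# Tests whether the second column (x) helps predict the first (y)
# beyond y's own past, via F-tests on lag-augmented regressions.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```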
In this paper, we present our database which is freely available online, and we detail the analysis and conclusions we draw from this data regarding the expected requirements for the matchmaking service."} {"_id": "d982c24c969f5b4042dabc4469c4842462d7b0dc", "title": "Knowledge sharing in organisational contexts: a motivation-based perspective", "text": "Purpose \u2013 Facilitating knowledge sharing within organisations is a difficult task: the willingness of individuals to share and integrate their knowledge is one of the central barriers. This paper aims to develop a motivation-based perspective to explore how organisations resolve the social dilemma of knowledge sharing. Design/methodology/approach \u2013 The analysis builds on a three-category taxonomy of motivation, adding \u2018\u2018hedonic\u2019\u2019 motivation to the traditional dichotomy of \u2018\u2018extrinsic\u2019\u2019 and \u2018\u2018intrinsic\u2019\u2019 motivation. It uses case studies gleaned from the literature to explore the interactive effects between the different motivators in two different types of knowledge-intensive organisations: professional bureaucracy and operating adhocracy. Findings \u2013 Within a professional bureaucracy, the social dilemma of knowledge sharing may be overcome through normative motivation, with provision of hedonic motivation through extrinsic incentives such as training and career progression. In an operating adhocracy where interdependent teamwork is vital, it may be overcome through normative alignment reinforced by intensive socialisation. Extrinsic motivators that align with hedonic motivation may also reinforce the propensity for knowledge sharing. In both organisational types, financial extrinsic incentives do not appear to be relevant on their own, and may \u2018\u2018crowd out\u2019\u2019 other motivators. Research limitations/implications \u2013 The cases reported were chosen from the existing literature and, although many were not designed specifically to address motivational issues, suggestive conclusions are drawn. Most of the cases were drawn from organisations rooted in the Anglo-American context and thus care would be needed in generalising the findings to organisations in other contexts. Originality/value \u2013 The paper represents the first attempt to apply a three-category taxonomy of motivation to examine knowledge-sharing behaviour in organisations. It highlights the interaction between the different motivators and provides a basis to integrate further the work of social psychologists and socio-economists on incentives and motivation in the context of knowledge sharing."} {"_id": "8eca169f19425c76fa72078824e6a91a5b37f470", "title": "A versatile FMCW Radar System Simulator for Millimeter-Wave Applications", "text": "For the successful design of low-cost and high-performance radar systems accurate and efficient system simulation is a key requirement. In this paper we present a new versatile simulation environment for frequency-modulated continuous-wave radar systems. Besides common hardware simulation it covers integrated system simulation and concept analysis from signal synthesis to baseband. It includes a flexible scenario generator, accurate noise modeling, and efficiently delivers simulation data for development and testing of signal processing algorithms. 
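As background to the FMCW simulator abstract above, the basic range-to-beat-frequency relationship that such a simulator reproduces; this is the textbook linear-sweep formula, and the 77-GHz sweep parameters below are illustrative assumptions:

```python
def range_from_beat(f_beat, sweep_bw, sweep_time, c=3e8):
    """FMCW ranging: a target at range R produces a beat frequency
    f_b = 2 * R * B / (c * T) for a linear sweep of bandwidth B over time T,
    so R = c * T * f_b / (2 * B)."""
    return c * sweep_time * f_beat / (2.0 * sweep_bw)

# e.g. 1 GHz sweep over 1 ms: a 100 kHz beat corresponds to 15 m.
print(range_from_beat(100e3, 1e9, 1e-3))
```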
A comparison of simulations and measurement results for an integrated 77-GHz radar prototype shows the capabilities of the simulator in two different scenarios."} {"_id": "276bf4c952671f6b435081924b955717ce1dc78a", "title": "Automatic multilabel classification for Indonesian news articles", "text": "Problem transformation and algorithm adaptation are the two main approaches in machine learning to solve the multilabel classification problem. The purpose of this paper is to investigate both approaches in multilabel classification for Indonesian news articles. Since this classification deals with a large number of features, we also employ some feature selection methods to reduce feature dimension. Four factors are the focus of this paper, i.e., the feature weighting method, feature selection method, multilabel classification approach, and single-label classification algorithm. These factors will be combined to determine the best combination. The experiments show that the best performer for multilabel classification of Indonesian news articles is the combination of the TF-IDF feature weighting method, the Symmetrical Uncertainty feature selection method, Calibrated Label Ranking (which belongs to the problem transformation approach), and the SVM algorithm. This best combination achieves an F-measure of 85.13% in 10-fold cross-validation, but the F-measure decreases to 76.73% in testing because of out-of-vocabulary (OOV) words."} {"_id": "1a8a32e1946cc8f5f45c7b5f121cdf0a2ac08eff", "title": "Topic Models for Mortality Modeling in Intensive Care Units", "text": "Mortality prediction is an important problem in the intensive care unit (ICU) because it is helpful for understanding patients\u2019 evolving severity, assessing quality of care, and comparing treatments. Most ICU mortality models primarily consider structured data and physiological waveforms (Le Gall et al., 1993). An important limitation of these structured data approaches is that they miss a lot of vital information captured in providers\u2019 free text notes and reports. In this paper, we propose an approach to mortality prediction that incorporates the information from free text notes using topic modeling."} {"_id": "71337276460b50a2cb37959a2d843e593dc4fdcc", "title": "A non-isolated three-port converter for stand-alone renewable power system", "text": "A novel non-isolated three-port converter (NI-TPC) is proposed, interfacing one PV port, one bidirectional battery port and one load port. Single-stage power conversion between any two of the three ports is achieved. The topology is derived by decoupling the bidirectional power flow path of the conventional structure into two unidirectional ones. Two of the three ports can be tightly regulated to achieve maximum power harvesting for the PV or charge control for the battery, while keeping the load voltage constant; the third port is left flexible to compensate for the power imbalance of the converter. Operation states are analyzed. The multi-regulator competition control strategy is presented to achieve autonomous and smooth state switching when the PV input power fluctuates. The analysis is verified by the experimental results."} {"_id": "9fa1eadcbc10b91d91d1c5aa669631e6388aa7b4", "title": "A Framework for Coordinated Surface Operations Planning at Dallas-Fort Worth International Airport", "text": "An Integer Programming formulation is developed for optimizing surface operations at Dallas-Fort Worth airport, with the goal of assessing the potential benefits of taxi route planning.
The model is based on operations in the eastern half of the airport under the most frequently used configuration. The focus is on operational concepts that optimize taxi routes by utilizing different control points on the airport surface. The benefits of two different concepts for optimizing taxiway operations, namely controlled pushback and taxi reroutes, are analyzed for both current data and a projected data set with approximately twice the traffic density. The analysis estimates that: (1) for current traffic densities, controlled pushback would reduce the average departure taxi time by 17% without altering runway schedule conformance, while the benefits of taxi reroutes would be minimal; and (2) for high-density operations, controlled pushback would reduce the average departure taxi time by 18%, while incorporating taxi reroutes would reduce the average arrival taxi time by 14%. Other benefits analyzed for these control strategies include a decrease in the average time spent in runway crossing queues."} {"_id": "835e510fcf22b4b9097ef51b8d0bb4e7b806bdfd", "title": "Unsupervised Learning of Sequence Representations by Autoencoders", "text": "Sequence data is challenging for machine learning approaches, because the lengths of the sequences may vary between samples. In this paper, we present an unsupervised learning model for sequence data, called the Integrated Sequence Autoencoder (ISA), to learn a fixed-length vectorial representation by minimizing the reconstruction error. Specifically, we propose to integrate two classical mechanisms for sequence reconstruction which take into account both the global silhouette information and the local temporal dependencies. Furthermore, we propose a stop feature that serves as a temporal stamp to guide the reconstruction process, which results in a higher-quality representation. The learned representation is able to effectively summarize not only the apparent features, but also the underlying and high-level style information. Take, for example, a speech sequence sample: our ISA model can not only recognize the spoken text (apparent feature), but can also identify the speaker who utters the audio (more high-level style). One promising application of the ISA model is that it can be readily used in the semi-supervised learning scenario, in which a large amount of unlabeled data is leveraged to extract high-quality sequence representations and thus to improve the performance of the subsequent supervised learning tasks on limited labeled data."} {"_id": "617c3919c78d7cc0f596a17ea149eab1bf651c6f", "title": "A practical path loss model for indoor WiFi positioning enhancement", "text": "Positioning within a local area refers to technology whereby each node is self-aware of its position. Based on an empirical study, this paper proposes an enhancement to the path loss model in the indoor environment for improved accuracy in the relationship between distance and received signal strength.
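A widely used baseline for such RSS-to-distance models is the log-distance path loss form RSS = P0 - 10*n*log10(d/d0). The paper's enhanced model and its fitted parameters are not reproduced here, so the constants in this minimal sketch are illustrative assumptions only:

```python
# Minimal sketch of the classic log-distance path loss model often used as
# the baseline for WiFi ranging; P0, n and d0 below are assumed values,
# not the parameters fitted in the paper.
def rss_to_distance(rss_dbm, p0_dbm=-40.0, n=3.0, d0=1.0):
    """Invert RSS = P0 - 10*n*log10(d/d0) to estimate distance in metres."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

# Example: a -70 dBm reading with path loss exponent 3
print(round(rss_to_distance(-70.0), 2))  # ~10.0 m
```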
We further demonstrate the potential of our model for the WiFi positioning system, where the mean errors in the distance estimation are 2.3 m and 2.9 m for line-of-sight and non-line-of-sight environments, respectively."} {"_id": "cb7ff548490bbdbb3da4cc6eab8a3429f61f618d", "title": "xLED: Covert Data Exfiltration from Air-Gapped Networks via Router LEDs", "text": "In this paper we show how attackers can covertly leak data (e.g., encryption keys, passwords and files) from highly secure or air-gapped networks via the row of status LEDs that exists in networking equipment such as LAN switches and routers. Although it is known that some network equipment emanates optical signals correlated with the information being processed by the device ('side-channel'), intentionally controlling the status LEDs to carry any type of data ('covert-channel') has never been studied before. Malicious code is executed on the LAN switch or router, allowing full control of the status LEDs. Sensitive data can be encoded and modulated over the blinking of the LEDs. The generated signals can then be recorded by various types of remote cameras and optical sensors. We provide the technical background on the internal architecture of switches and routers (at both the hardware and software level) which enables this type of attack. We also present amplitude and frequency based modulation and encoding schemas, along with a simple transmission protocol. We implement a prototype of an exfiltration malware and discuss its design and implementation. We evaluate this method with a few routers and different types of LEDs. In addition, we tested various receivers including remote cameras, security cameras, smartphone cameras, and optical sensors, and also discuss different detection and prevention countermeasures. Our experiment shows that sensitive data can be covertly leaked via the status LEDs of switches and routers at bit rates of 10 bit/s to more than 1 Kbit/s per LED. Keywords\u2014exfiltration; air-gap; network; optical; covert-channel"} {"_id": "65bc85b79306de2ffe607b71d21f3ef240c1d005", "title": "Steganalysis of QIM Steganography in Low-Bit-Rate Speech Signals", "text": "Steganalysis of quantization index modulation (QIM) steganography in a low-bit-rate encoded speech stream is conducted in this research. According to the speech generation theory and the phoneme distribution properties in language, we first point out that the correlation characteristics of split vector quantization (VQ) codewords of linear predictive coding filter coefficients are changed after QIM steganography. Based on this observation, we construct a model called the quantization codeword correlation network (QCCN), based on split VQ codewords from adjacent speech frames. Furthermore, the QCCN model is pruned to yield a stronger correlation network. After quantifying the correlation characteristics of vertices in the pruned correlation network, we obtain feature vectors that are sensitive to steganalysis. Finally, we build a high-performance detector using a support vector machine (SVM) classifier.
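The final classification stage described above reduces to training a binary SVM on the extracted feature vectors. A sketch of that stage only, with synthetic stand-in features (deriving real QCCN features from G.723.1/G.729 streams is beyond this snippet):

```python
# Sketch of the detector's final stage: an SVM trained on feature vectors
# quantifying codeword-correlation characteristics. X below is a synthetic
# placeholder, not real QCCN features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))          # stand-in QCCN feature vectors
y = np.repeat([0, 1], 200)              # 0 = cover speech, 1 = QIM stego
X[y == 1] += 0.5                        # synthetic separation for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("detection accuracy: %.2f" % clf.score(X_te, y_te))
```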
Experimental results show that the proposed QCCN steganalysis method can effectively detect QIM steganography in encoded speech streams when applied to low-bit-rate speech codecs such as G.723.1 and G.729."} {"_id": "fbcef6f098edb22a183a8161628beec6ff234ac5", "title": "Relation Extraction : A Survey", "text": "With the advent of the Internet, a large amount of digital text is generated every day in the form of news articles, research publications, blogs, question answering forums and social media. It is important to develop techniques for extracting information automatically from these documents, as a lot of important information is hidden within them. This extracted information can be used to improve access and management of knowledge hidden in large text corpora. Several applications such as Question Answering and Information Retrieval would benefit from this information. Entities like persons and organizations form the most basic unit of the information. Occurrences of entities in a sentence are often linked through well-defined relations; e.g., occurrences of person and organization in a sentence may be linked through relations such as employed at. The task of Relation Extraction (RE) is to identify such relations automatically. In this paper, we survey several important supervised, semi-supervised and unsupervised RE techniques. We also cover the paradigms of Open Information Extraction (OIE) and Distant Supervision. Finally, we describe some of the recent trends in RE techniques and possible future research directions. This survey would be useful for three kinds of readers: i) newcomers in the field who want to quickly learn about RE; ii) researchers who want to know how the various RE techniques evolved over time and what the possible future research directions are; and iii) practitioners who just need to know which RE technique works best in various settings."} {"_id": "4172f558219b94f850c6567f93fa60dee7e65139", "title": "The Treatment of Missing Values and its Effect on Classifier Accuracy", "text": "The presence of missing values in a dataset can affect the performance of a classifier constructed using that dataset as a training sample. Several methods have been proposed to treat missing data, and the one used most frequently deletes instances containing at least one missing value of a feature. In this paper we carry out experiments with twelve datasets to evaluate the effect on the misclassification error rate of four methods for dealing with missing values: the case deletion method, mean imputation, median imputation, and the KNN imputation procedure. The classifiers considered were Linear Discriminant Analysis (LDA) and the KNN classifier. The first one is a parametric classifier whereas the second one is a nonparametric classifier."} {"_id": "c4f3d64454d7166f9c2395dbb550bbfe1ab2b0cc", "title": "Designing Media Architecture: Tools and Approaches for Addressing the Main Design Challenges", "text": "Media Architecture is reaching a level of maturity at which we can identify tools and approaches for addressing the main challenges for HCI practitioners working in this field.
While previous influential contributions within Media Architecture have identified challenges for designers and offered case studies of specific approaches, here, we (1) provide guidance on how to tackle the domain-specific challenges of Media Architecture design -- pertaining to the interface, integration, content, context, process, prototyping, and evaluation -- on the basis of the development of numerous installations over the course of seven years and thorough studies of related work, and (2) present five categories of tools and approaches -- software tools, projection, 3D models, hardware prototyping, and evaluation tools -- developed to address these challenges in practice, exemplified through six concrete examples from real-life cases."} {"_id": "77fea2c3831cb690b333d997be48f816dc5be3b6", "title": "More is Less: On the End-to-End Security of Group Chats in Signal, WhatsApp, and Threema", "text": "Secure instant messaging is utilized in two variants: one-to-one communication and group communication. While the first variant has received much attention lately (Frosch et al., EuroS&P; Cohn-Gordon et al., EuroS&P; Kobeissi et al., EuroS&P17), little is known about the cryptographic mechanisms and security guarantees of secure group communication in instant messaging. To approach an investigation of group instant messaging protocols, we first provide a comprehensive and realistic security model. This model combines security and reliability goals from various related literature to capture relevant properties for communication in dynamic groups. The definitions also consider satisfiability with respect to the instant delivery of messages. To show its applicability, we analyze three widely used real-world protocols: Signal, WhatsApp, and Threema. By applying our model, we reveal several shortcomings with respect to the security definition. Therefore we propose generic countermeasures to enhance the protocols regarding the required security and reliability goals. Our systematic analysis reveals that (1) the communications' integrity \u2013 represented by the integrity of all exchanged messages \u2013 and (2) the groups' closeness \u2013 represented by the members' ability to manage the group \u2013 are not end-to-end protected. We additionally show that strong security properties, such as Future Secrecy, which is a core part of the one-to-one communication in the Signal protocol, do not hold for its group communication."} {"_id": "fc0b575b84412e5bd90700cfe519615eb1f2eab5", "title": "Existential meaning's role in the enhancement of hope and prevention of depressive symptoms.", "text": "The authors confirmed that existential meaning has a unique relationship with, and can prospectively predict, levels of hope and depressive symptoms within a population of college students. Baseline measures of explicit meaning (i.e., an individual's self-reported experience of a sense of coherence and purpose in life) and implicit meaning (i.e., an individual's self-reported embodiment of the factors that are normatively viewed as comprising a meaningful life) explained significant amounts of variance in hope and depressive symptoms 2 months later beyond the variance explained by baseline levels of hope/depression, neuroticism, conscientiousness, agreeableness, openness to experience, extraversion, and social desirability.
The authors discuss implications of these findings for the field of mental health treatment and suggest ways of influencing individuals' experience of existential meaning."} {"_id": "e87b7eeae1f34148419be37900cf358069a152f1", "title": "Why software fails [software failure]", "text": "Most IT experts agree that software failures occur far more often than they should, despite the fact that, for the most part, they are predictable and avoidable. It is unfortunate that most organizations don't see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Because software failure has tremendous implications for business and society, it is important to understand why this attitude persists."} {"_id": "bea20c34bc3aa04c4c49361197e415b6a09e06b8", "title": "Transfer learning for class imbalance problems with inadequate data", "text": "A fundamental problem in data mining is to effectively build robust classifiers in the presence of skewed data distributions. Class imbalance classifiers are trained specifically for skewed distribution datasets. Existing methods assume an ample supply of training examples as a fundamental prerequisite for constructing an effective classifier. However, when sufficient data are not readily available, the development of a representative classification algorithm becomes even more difficult due to the unequal distribution between classes. We provide a unified framework that will potentially take advantage of auxiliary data using a transfer learning mechanism and simultaneously build a robust classifier to tackle this imbalance issue in the presence of few training samples in a particular target domain of interest. Transfer learning methods use auxiliary data to augment learning when training examples are not sufficient, and in this paper we develop a method that is optimized to simultaneously augment the training data and induce balance into skewed datasets. We propose a novel boosting-based instance transfer classifier with a label-dependent update mechanism that simultaneously compensates for class imbalance and incorporates samples from an auxiliary domain to improve classification. We provide theoretical and empirical validation of our method and apply it to healthcare and text classification applications."} {"_id": "3020dea141c2c81e9eb6e0a1a6c05740dc2ff30f", "title": "Player Type Models: Towards Empirical Validation", "text": "Player type models -- such as the BrainHex model -- are popular approaches for personalizing digital games towards individual preferences of players. Although several player type models have been developed and are currently used in game design projects, there is still a lack of data on their validity. To close this research gap, we are investigating the psychometric properties (factor structure, reliability, stability) and predictive validity (whether player type scores can predict player experience) of the player type model BrainHex in an ongoing project. Results of two online studies (n1=592, n2=243) show that the psychometric properties of the BrainHex model could be improved.
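Reliability in such psychometric validation studies is commonly quantified with Cronbach's alpha. A minimal sketch of that computation on synthetic (non-BrainHex) responses:

```python
# Illustrative only: Cronbach's alpha, a standard internal-consistency
# estimate of the kind used when judging a player-type scale's reliability.
# The response matrix is synthetic, not BrainHex data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                 # latent preference
responses = trait + rng.normal(scale=0.8, size=(200, 4))
print("alpha = %.2f" % cronbach_alpha(responses))
```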
We suggest improvements to the corresponding questionnaire and sketch how the predictive validity could be investigated in future studies."} {"_id": "fc7e37bd4d0e79ab0df0ee25260a180984e2a456", "title": "Scalable Metadata Management Techniques for Ultra-Large Distributed Storage Systems - A Systematic Review", "text": "The provisioning of an efficient ultra-large scalable distributed storage system for expanding cloud applications has been a challenging job for researchers in academia and industry. In such an ultra-large-scale storage system, data are distributed on multiple storage nodes for performance, scalability, and availability. The access to this distributed data is through its metadata, maintained by multiple metadata servers. The metadata carries information about the physical address of data and access privileges. The efficiency of a storage system highly depends on effective metadata management. This research presents an extensive systematic literature analysis of metadata management techniques in storage systems. It will help researchers see the significance of metadata management and the important parameters of metadata management techniques for storage systems. A methodical examination of metadata management techniques developed by various industry and research groups is described. The different metadata distribution techniques lead to various taxonomies. Furthermore, the article investigates techniques based on distribution structures and key parameters of metadata management. It also presents the strengths and weaknesses of individual existing techniques, which will help researchers select the most appropriate technique for specific applications. Finally, it discusses existing challenges and significant research directions in metadata management for researchers."} {"_id": "b0449156130e0de3a48fad6ef1149325d48d44f9", "title": "An Integral Model to Provide Reactive and Proactive Services in an Academic CSIRT Based on Business Intelligence", "text": "Cyber-attacks have increased in severity and complexity. This requires CERTs/CSIRTs to research and develop new security tools. Our study therefore focuses on the design of an integral model based on Business Intelligence (BI), which provides reactive and proactive services in a CSIRT in order to alert on and reduce suspicious or malicious activity on information systems and data networks. To achieve this purpose, a solution has been assembled that generates information stores compiled from continuous network transmissions from several internal and external sources of an organization. It includes a data warehouse that acts as a log correlator, formed from the information of feeds with diverse formats. Furthermore, it analyzes attack-detection and port-scanning data obtained from sensors such as Snort and Passive Vulnerability Scanner, which are stored in a database holding the logs generated by the systems. With such inputs, we designed and implemented BI systems using the phases of the Ralph Kimball methodology and ETL and OLAP processes. In addition, a software application has been implemented using the SCRUM methodology, which allows the obtained logs to be linked to the BI system for visualization in dynamic dashboards, with the purpose of generating early alerts and constructing complex queries through object structures in the user interface.
The results demonstrate that this solution generates early warnings based on the criticality and sensitivity levels of malware and vulnerabilities, as well as monitoring efficiency, increasing the security level of member institutions."} {"_id": "f049a8c4d08321b12276208687b6dc35586eecf7", "title": "Extensive Analysis and Prediction of Optimal Inventory levels in supply chain management based on Particle Swarm Optimization Algorithm", "text": "Efficient inventory management is a complex process which entails managing the inventory across the whole supply chain. The dynamic nature of the excess stock level and shortage level from one period to another is a serious issue. In addition, consideration of multiple products and more supply chain members leads to a very complex inventory management process. Moreover, the supply chain cost increases because of the influence of lead times for supplying the stocks as well as the raw materials. A better optimization methodology would consider all these factors in the prediction of the optimal stock levels to be maintained in order to minimize the total supply chain cost. Here, we propose an optimization methodology that utilizes the Particle Swarm Optimization algorithm, a well-established optimization technique, to overcome the impasse in maintaining the optimal stock levels at each member of the supply chain."} {"_id": "ac8877b0e87625e26f52ab75e84c534a576b1e77", "title": "CIO and Business Executive Leadership Approaches to Establishing Company-wide Information Orientation", "text": "In the digital world, business executives have a heightened awareness of the strategic importance of information and information management to their companies\u2019 value creation. This presents both leadership opportunities and challenges for CIOs. To prevent the CIO position from being marginalized and to enhance CIOs\u2019 contribution to business value creation, they must move beyond being competent IT utility managers and play an active role in helping their companies build a strong information usage culture. The purpose of this article is to provide a better understanding of the leadership approaches that CIOs and business executives can adopt to improve their companies\u2019 information orientation. Based on our findings from four case studies, we have created a four-quadrant leadership-positioning framework. This framework is constructed from the CIO\u2019s perspective and indicates that a CIO may act as a leader, a follower or a nonplayer in developing the company\u2019s information orientation to achieve its strategic focus. The article concludes with guidelines that CIOs can use to help position their leadership challenges in introducing or sustaining their companies\u2019 information orientation initiatives and recommends specific leadership approaches depending on CIOs\u2019 particular situations."} {"_id": "fd560f862dad46152e98abcab9ab79451b4324b8", "title": "Applying Machine Learning Techniques to Analysis of Gene Expression Data: Cancer Diagnosis", "text": "Classification of patient samples is a crucial aspect of cancer diagnosis. DNA hybridization arrays simultaneously measure the expression levels of thousands of genes, and it has been suggested that gene expression may provide the additional information needed to improve cancer classification and diagnosis. This paper presents methods for analyzing gene expression data to classify cancer types.
Machine learning techniques, such as Bayesian networks, neural trees, and radial basis function (RBF) networks, are used for the analysis of CAMDA Data Set 2. These techniques have their own properties, including the ability to find important genes for cancer classification, to reveal relationships among genes, and to classify cancer. This paper reports on a comparative evaluation of the experimental results of these methods."} {"_id": "1677bd5219b40cff4db32b03dea230e0afc4896e", "title": "SignalSLAM: Simultaneous localization and mapping with mixed WiFi, Bluetooth, LTE and magnetic signals", "text": "Indoor localization typically relies on measuring a collection of RF signals, such as Received Signal Strength (RSS) from WiFi, in conjunction with spatial maps of signal fingerprints. A new technology for localization could arise with the use of 4G LTE telephony small cells, with limited range but with rich signal strength information, namely Reference Signal Received Power (RSRP). In this paper, we propose to combine an ensemble of available sources of RF signals to build multi-modal signal maps that can be used for localization or for network deployment optimization. We primarily rely on Simultaneous Localization and Mapping (SLAM), which provides a solution to the challenge of building a map of observations without knowing the location of the observer. SLAM has recently been extended to incorporate signal strength from WiFi in the so-called WiFi-SLAM. In parallel to WiFi-SLAM, other localization algorithms have been developed that exploit the inertial motion sensors and a known map of either WiFi RSS or of magnetic field magnitude. In our study, we use all the measurements that can be acquired by an off-the-shelf smartphone and crowd-source the data collection from several experimenters walking freely through a building, collecting time-stamped WiFi and Bluetooth RSS, 4G LTE RSRP, magnetic field magnitude, GPS reference points when outdoors, Near-Field Communication (NFC) readings at specific landmarks, and pedestrian dead reckoning based on inertial data. We resolve the locations of all the users using a modified version of Graph-SLAM optimization of the user poses with a collection of absolute location and pairwise constraints that incorporates multi-modal signal similarity. We demonstrate that we can recover the user positions and thus simultaneously generate dense signal maps for each WiFi access point and 4G LTE small cell, \u201cfrom the pocket\u201d. Finally, we demonstrate the localization performance using selected single modalities, such as only WiFi and the WiFi signal maps that we generated."} {"_id": "30f67b7275cec21a94be945dfe4beff08c7e004a", "title": "Simultaneous localization and mapping: part I", "text": "This paper describes the simultaneous localization and mapping (SLAM) problem and the essential methods for solving it, and summarizes key implementations and demonstrations of the method. While there are still many practical issues to overcome, especially in more complex outdoor environments, the general SLAM method is now a well understood and established part of robotics.
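The essence of the SLAM formulation these tutorials describe is the joint estimation of vehicle and landmark states from relative observations. A deliberately tiny 1-D Kalman-filter illustration (assumed noise parameters, not from the papers) shows how the relative map converges even though absolute positions remain uncertain:

```python
# Toy 1-D illustration of the SLAM idea (assumed parameters, not from the
# papers): a Kalman filter jointly estimates vehicle position x and one
# landmark position m from the relative observation z = m - x, building
# up vehicle-map correlation as it goes.
import numpy as np

F = np.eye(2)                       # landmark is static; motion added below
H = np.array([[-1.0, 1.0]])         # z = m - x
Q = np.diag([0.1, 0.0])             # motion noise only on the vehicle
R = np.array([[0.05]])

x = np.array([0.0, 5.0])            # initial guesses for [vehicle, landmark]
P = np.diag([0.0, 100.0])           # landmark initially unknown

true_x, true_m, rng = 0.0, 4.0, np.random.default_rng(2)
for _ in range(50):
    true_x += 1.0                                   # vehicle moves 1 m
    x = x + np.array([1.0, 0.0])                    # predict
    P = F @ P @ F.T + Q
    z = true_m - true_x + rng.normal(scale=R[0, 0] ** 0.5)
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()               # update
    P = (np.eye(2) - K @ H) @ P

# Only the relative quantity m - x is observable, so the relative map error
# shrinks while absolute uncertainty remains bounded by the initial one.
print("relative map error:", (x[1] - x[0]) - (true_m - true_x))
```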
Another part of the tutorial summarizes more recent work addressing some of the remaining issues in SLAM, including computation, feature representation, and data association."} {"_id": "3303b29b10ce7cd76c799ad0c796521751347f9f", "title": "A solution to the simultaneous localization and map building (SLAM) problem", "text": "The simultaneous localisation and map building (SLAM) problem asks if it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and then to incrementally build a map of this environment while simultaneously using this map to compute absolute vehicle location. Starting from the estimation-theoretic foundations of this problem developed in [1], [2], [3], this paper proves that a solution to the SLAM problem is indeed possible. The underlying structure of the SLAM problem is first elucidated. A proof that the estimated map converges monotonically to a relative map with zero uncertainty is then developed. It is then shown that the absolute accuracy of the map and the vehicle location reach a lower bound defined only by the initial vehicle uncertainty. Together, these results show that it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and, using relative observations only, incrementally build a perfect map of the world and to compute simultaneously a bounded estimate of vehicle location. This paper also describes a substantial implementation of the SLAM algorithm on a vehicle operating in an outdoor environment using millimeter-wave (MMW) radar to provide relative map observations. This implementation is used to demonstrate how some key issues such as map management and data association can be handled in a practical environment. The results obtained are cross-compared with absolute locations of the map landmarks obtained by surveying. In conclusion, this paper discusses a number of key issues raised by the solution to the SLAM problem, including sub-optimal map-building algorithms and map management."} {"_id": "5a6a91287d06016ccce00c1c35e04af5d46c2b16", "title": "Walk detection and step counting on unconstrained smartphones", "text": "Smartphone pedometry offers the possibility of ubiquitous health monitoring, context awareness and indoor location tracking through Pedestrian Dead Reckoning (PDR) systems. However, there is currently no detailed understanding of how well pedometry works when applied to smartphones in typical, unconstrained use.\n This paper evaluates common walk detection (WD) and step counting (SC) algorithms applied to smartphone sensor data. Using a large dataset (27 people, 130 walks, 6 smartphone placements), optimal algorithm parameters are provided and applied to the data. The results favour the use of standard deviation thresholding (WD) and windowed peak detection (SC) with error rates of less than 3%. Of the six different placements, only the back trouser pocket is found to degrade the step counting performance significantly, resulting in undercounting for many algorithms."} {"_id": "0e278a94018c3bbe464c3747cd155af10dc4fdab", "title": "Analyzing features for activity recognition", "text": "Human activity is one of the most important ingredients of context information. In wearable computing scenarios, activities such as walking, standing and sitting can be inferred from data provided by body-worn acceleration sensors. In such settings, most approaches use a single set of features, regardless of which activity is to be recognized.
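As a concrete illustration of the two algorithms the pedometry study above favours, a minimal sketch of standard-deviation-threshold walk detection and windowed peak-detection step counting follows; the thresholds are assumptions, not the tuned parameters reported in the paper:

```python
# Minimal sketch of the two favoured algorithms (illustrative thresholds):
# standard-deviation thresholding for walk detection and windowed peak
# detection for step counting on the acceleration magnitude signal.
import numpy as np

def is_walking(accel_mag, std_threshold=0.3):
    """Walk detection: flag a window whose std exceeds a threshold."""
    return np.std(accel_mag) > std_threshold

def count_steps(accel_mag, fs=50, min_gap_s=0.3):
    """Count local maxima above the mean, at least min_gap_s apart."""
    mean = np.mean(accel_mag)
    steps, last = 0, -min_gap_s * fs
    for i in range(1, len(accel_mag) - 1):
        if (accel_mag[i] > mean
                and accel_mag[i] > accel_mag[i - 1]
                and accel_mag[i] >= accel_mag[i + 1]
                and i - last >= min_gap_s * fs):
            steps, last = steps + 1, i
    return steps

# Synthetic 10 s of 2 Hz walking sampled at 50 Hz -> roughly 20 steps
t = np.arange(0, 10, 1 / 50)
sig = (1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)
       + np.random.default_rng(3).normal(scale=0.05, size=t.size))
print(is_walking(sig), count_steps(sig))
```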
In this paper we show that recognition rates can be improved by careful selection of individual features for each activity. We present a systematic analysis of features computed from a real-world data set and show how the choice of feature and the window length over which the feature is computed affect the recognition rates for different activities. Finally, we give recommendations of suitable features and window lengths for a set of common activities."} {"_id": "56e362c661d575b908e8a9f9bbb48f535a9312a5", "title": "On Managing Very Large Sensor-Network Data Using Bigtable", "text": "Recent advances and innovations in smart sensor technologies, energy storage, data communications, and distributed computing paradigms are enabling technological breakthroughs in very large sensor networks. There is an emerging surge of next-generation sensor-rich computers in consumer mobile devices as well as tailor-made field platforms wirelessly connected to the Internet. Billions of such sensor computers are posing both challenges and opportunities in relation to scalable and reliable management of the peta- and exa-scale time series being generated over time. This paper presents a Cloud-computing approach to this issue based on two well-known data storage and processing paradigms: Bigtable and MapReduce."} {"_id": "bcfca73fd9a210f9a4c78a0e0ca7e045c5495250", "title": "A Bayesian Nonparametric Approach for Multi-label Classification", "text": "Many real-world applications require multi-label classification, where multiple target labels are assigned to each instance. In multi-label classification, there exist intrinsic correlations between the labels and features. These correlations are beneficial for the multi-label classification task since they reflect the coexistence of the input and output spaces and can be exploited for prediction. Traditional classification methods have attempted to reveal these correlations in different ways. However, existing methods demand expensive computation to find such correlation structures. Furthermore, these approaches cannot identify the suitable number of label-feature correlation patterns. In this paper, we propose a Bayesian nonparametric (BNP) framework for multi-label classification that can automatically learn and exploit the unknown number of multi-label correlations. We utilize recent techniques in stochastic inference to derive a cheap (but efficient) posterior inference algorithm for the model. In addition, our model can naturally exploit the useful information from samples with missing labels. Furthermore, we extend the model to update parameters in an online fashion, which highlights the flexibility of our model against the existing approaches. We compare our method with the state-of-the-art multi-label classification algorithms on real-world datasets using both complete and missing label settings. Our model achieves better classification accuracy while being consistently faster than the baselines by an order of magnitude."} {"_id": "9bc55cc4590caf827060fe677645e11242f28e4f", "title": "Complexity leadership theory : An interactive perspective on leading in complex adaptive systems", "text": "Traditional, hierarchical views of leadership are less and less useful given the complexities of our modern world.
Leadership theory must transition to new perspectives that account for the complex adaptive needs of organizations. In this paper, we propose that leadership (as opposed to leaders) can be seen as a complex dynamic process that emerges in the interactive \"spaces between\" people and ideas. That is, leadership is a dynamic that transcends the capabilities of individuals alone; it is the product of interaction, tension, and exchange rules governing changes in perceptions and understanding. We label this a dynamic of adaptive leadership, and we show how this dynamic provides important insights about the nature of leadership and its outcomes in organizational fields. We define a leadership event as a perceived segment of action whose meaning is created by the interactions of actors involved in producing it, and we present a set of innovative methods for capturing and analyzing these contextually driven processes. We provide theoretical and practical implications of these ideas for organizational behavior and organization and management theory."} {"_id": "5a56e57683c3d512363d3032368941580d136f7e", "title": "Practical Option Pricing with Support Vector Regression and MART by", "text": "Support Vector Regression and MART by Ian I-En Choo Stanford University"} {"_id": "14ec5729375dcba5c1712ab8df0f8970c3415d8d", "title": "Exome sequencing in sporadic autism spectrum disorders identifies severe de novo mutations", "text": "Evidence for the etiology of autism spectrum disorders (ASDs) has consistently pointed to a strong genetic component complicated by substantial locus heterogeneity. We sequenced the exomes of 20 individuals with sporadic ASD (cases) and their parents, reasoning that these families would be enriched for de novo mutations of major effect. We identified 21 de novo mutations, 11 of which were protein altering. Protein-altering mutations were significantly enriched for changes at highly conserved residues. We identified potentially causative de novo events in 4 out of 20 probands, particularly among more severely affected individuals, in FOXP1, GRIN2B, SCN1A and LAMC3. In the FOXP1 mutation carrier, we also observed a rare inherited CNTNAP2 missense variant, and we provide functional support for a multi-hit model for disease risk. Our results show that trio-based exome sequencing is a powerful approach for identifying new candidate genes for ASDs and suggest that de novo mutations may contribute substantially to the genetic etiology of ASDs."} {"_id": "05ee231749c9ce97f036c71c1d2d599d660a8c81", "title": "GhostVLAD for set-based face recognition", "text": "The objective of this paper is to learn a compact representation of image sets for template-based face recognition. We make the following contributions: first, we propose a network architecture which aggregates and embeds the face descriptors produced by deep convolutional neural networks into a compact fixed-length representation. This compact representation requires minimal memory storage and enables efficient similarity computation. Second, we propose a novel GhostVLAD layer that includes ghost clusters that do not contribute to the aggregation. We show that a quality weighting on the input faces emerges automatically such that informative images contribute more than those with low quality, and that the ghost clusters enhance the network\u2019s ability to deal with poor quality images. Third, we explore how input feature dimension, number of clusters and different training techniques affect the recognition performance.
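The aggregation just described can be sketched in numpy: descriptors are softly assigned to K real plus G ghost clusters, and the residuals accumulated for the ghost clusters are simply discarded before normalization. The dimensions and the distance-based soft assignment are assumptions for illustration (the paper learns the assignment with a trained layer):

```python
# numpy sketch of GhostVLAD-style aggregation (illustrative, not the
# trained layer from the paper): residuals for ghost clusters are dropped.
import numpy as np

def ghostvlad(descriptors, centers, n_ghost=1):
    """descriptors: (N, D); centers: (K + n_ghost, D). Returns (K * D,)."""
    # Soft assignment via softmax over negative squared distances
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    logits = -d2
    a = np.exp(logits - logits.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)                        # (N, K + G)

    k = centers.shape[0] - n_ghost
    resid = descriptors[:, None, :] - centers[None, :k, :]   # (N, K, D)
    vlad = (a[:, :k, None] * resid).sum(axis=0)              # ghosts dropped
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12  # intra-norm
    v = vlad.ravel()
    return v / (np.linalg.norm(v) + 1e-12)                   # L2 normalize

faces = np.random.default_rng(4).normal(size=(7, 128))   # 7 face descriptors
centers = np.random.default_rng(5).normal(size=(8 + 1, 128))
print(ghostvlad(faces, centers, n_ghost=1).shape)         # (1024,)
```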
Given this analysis, we train a network that far exceeds the state-of-the-art on the IJB-B face recognition dataset. This is currently one of the most challenging public benchmarks, and we surpass the state-of-the-art on both the identification and verification protocols."} {"_id": "1acc5e2788743b46272d977045868b937fb317f6", "title": "Similarity Breeds Proximity: Pattern Similarity within and across Contexts Is Related to Later Mnemonic Judgments of Temporal Proximity", "text": "Experiences unfold over time, but little is known about the mechanisms that support the formation of coherent episodic memories for temporally extended events. Recent work in animals has provided evidence for signals in hippocampus that could link events across temporal gaps; however, it is unknown whether and how such signals might be related to later memory for temporal information in humans. We measured patterns of fMRI BOLD activity as people encoded items that were separated in time and manipulated the presence of shared or distinct context across items. We found that hippocampal pattern similarity in the BOLD response across trials predicted later temporal memory decisions when context changed. By contrast, pattern similarity in lateral occipital cortex was related to memory only when context remained stable. These data provide evidence in humans that representational stability in hippocampus across time may be a mechanism for temporal memory organization."} {"_id": "70bb8edcac8802816fbfe90e7c1643d67419dd34", "title": "Conspicuous Consumption and Household Indebtedness", "text": "Using a novel, large dataset of consumer transactions in Singapore, we study how conspicuous consumption affects household indebtedness. The coexistence of private housing (condominiums) and subsidized public housing (HDB) allows us to identify conspicuous consumers. Conditional on the same income and other socioeconomic characteristics, those who choose to reside in condominiums\u2014considered a status good\u2014are likely to be more conspicuous than their counterparts living in HDB units. We consistently find that condominium residents spend considerably more (by up to 44%) on conspicuous goods but not differently on inconspicuous goods. Compared with their matched HDB counterparts, more conspicuous consumers have 13% more credit card debt and 151% more delinquent credit card debt. Furthermore, the association between conspicuous consumption and credit card debt is concentrated among younger, male, single individuals. These results suggest that status-seeking-induced conspicuous consumption is an important determinant of household indebtedness."} {"_id": "9b5bc029f386e51d5eee91cf6a1f921f1404549f", "title": "Wireless Network Virtualization: A Survey, Some Research Issues and Challenges", "text": "Since wireless network virtualization enables abstraction and sharing of infrastructure and radio spectrum resources, the overall expenses of wireless network deployment and operation can be reduced significantly. Moreover, wireless network virtualization can provide easier migration to newer products or technologies by isolating part of the network. 
Despite this potential vision, several significant research challenges remain to be addressed before the widespread deployment of wireless network virtualization, including isolation, control signaling, resource discovery and allocation, mobility management, network management and operation, and security, as well as non-technical issues such as governance regulations. In this paper, we provide a brief survey of some of the work that has already been done to achieve wireless network virtualization, and discuss some research issues and challenges. We identify several important aspects of wireless network virtualization: overview, motivations, framework, performance metrics, enabling technologies, and challenges. Finally, we explore some broader perspectives in realizing wireless network virtualization."} {"_id": "6b3e788bba1176c6c47b37213461cf72a4fb666c", "title": "Unfolding the role of protein misfolding in neurodegenerative diseases", "text": "Recent evidence indicates that diverse neurodegenerative diseases might have a common cause and pathological mechanism \u2014 the misfolding, aggregation and accumulation of proteins in the brain, resulting in neuronal apoptosis. Studies from different disciplines strongly support this hypothesis and indicate that a common therapy for these devastating disorders might be possible. The aim of this article is to review the literature on the molecular mechanism of protein misfolding and aggregation, its role in neurodegeneration and the potential targets for therapeutic intervention in neurodegenerative diseases. Many questions still need to be answered and future research in this field will result in exciting new discoveries that might impact other areas of biology."} {"_id": "fabb31da278765454c102086995cfaeda4f4df7b", "title": "Vehicle Number Plate Detection System for Indian Vehicles", "text": "An exponential increase in the number of vehicles necessitates the use of automated systems to maintain vehicle information. This information is needed both for traffic management and for crime reduction. Number plate recognition is an effective means of automatic vehicle identification. Some existing learning-based algorithms take a lot of time and expertise before delivering satisfactory results, but even then lack accuracy. The proposed algorithm provides an efficient method for recognizing Indian vehicle number plates. It addresses the problems of scaling and of recognizing the position of characters, with a good accuracy rate of 98.07%."} {"_id": "1c1bebfd526b911e3865a20ffd9748117029993a", "title": "Generalized Thompson Sampling for Contextual Bandits", "text": "Thompson Sampling, one of the oldest heuristics for solving multi-armed bandits, has recently been shown to demonstrate state-of-the-art performance. The empirical success has led to great interest in theoretical understanding of this heuristic. In this paper, we approach this problem in a way very different from existing efforts. In particular, motivated by the connection between Thompson Sampling and exponentiated updates, we propose a new family of algorithms called Generalized Thompson Sampling in the expert-learning framework, which includes Thompson Sampling as a special case. Similar to most expert-learning algorithms, Generalized Thompson Sampling uses a loss function to adjust the experts\u2019 weights.
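The exponentiated-update view mentioned above can be illustrated with a generic expert-weighting loop: each round, every expert's weight is multiplied by exp(-eta * loss). This toy sketch is not the paper's exact algorithm; the experts, the square loss on the played arm, and all parameters are assumptions:

```python
# Generic exponentiated-update expert-weighting loop in the spirit of
# Generalized Thompson Sampling (a hedged toy, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(6)
n_experts, n_arms, eta = 5, 3, 0.5
w = np.ones(n_experts)                      # "prior" weights over experts

def expert_probs(expert, context):          # toy experts: fixed arm prefs
    p = np.full(n_arms, 0.1)
    p[expert % n_arms] = 0.8
    return p / p.sum()

for t in range(1000):
    context = rng.normal(size=4)            # unused by the toy experts
    i = rng.choice(n_experts, p=w / w.sum())    # sample expert ~ weights
    arm = rng.choice(n_arms, p=expert_probs(i, context))
    reward = rng.random() < (0.2 + 0.3 * (arm == 2))   # arm 2 is best
    # Square-loss update of every expert's weight on the played arm
    for j in range(n_experts):
        loss = (expert_probs(j, context)[arm] - reward) ** 2
        w[j] *= np.exp(-eta * loss)
    w /= w.max()                            # rescale for numerical stability

print("posterior-like weights:", np.round(w / w.sum(), 3))
```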
General regret bounds are derived, which are also instantiated for two important loss functions: square loss and logarithmic loss. In contrast to existing bounds, our results apply to quite general contextual bandits. More importantly, they quantify the effect of the \u201cprior\u201d distribution on the regret bounds."} {"_id": "2067ad16e1991b77d53d527334e23b97d37f5ec1", "title": "Analysis of users and non-users of smartphone applications", "text": "Purpose: Smartphones facilitate the potential adoption of new mobile applications. The purpose of this research is to study users and non-users of three selected mobile applications, and find out what really drives the intention to use these applications across users and non-users. Design/methodology/approach: The authors measured actual usage of mobile applications in a panel study of 579 Finnish smartphone users, using in-device measurements as an objective way to identify users and non-users. A web-based survey was used in collecting data to test an extended TAM model in explaining intention to use. Findings: Perceived technological barriers negatively affect behavioural control, reflecting people\u2019s assessment of their own capability to use the services without trouble. Behavioural control is directly linked to perceived usefulness (except for games) and perceived enjoyment, as hypothesized. Perceived enjoyment and usefulness were found to explain intention to use applications for both users and non-users. Research limitations/implications: With regard to the impact of social norms, the study finds that further research needs to be done to explore their impact more thoroughly. The dataset of the research, consisting mostly of young male smartphone users, makes generalization of the results difficult. Practical implications: There are differences regarding what drives the usage of different kinds of mobile applications. In this study, map applications and mobile Internet are driven by more utilitarian motivations, whereas games are more hedonic. It is also clear that not everybody is using applications facilitated by smartphones, and therefore the presented approach of studying users and non-users separately provides a new way to analyze adoption on a practical level. Originality/value: This research proves that models like TAM should not treat mobile services as a generic concept, but should instead address individual mobile services specifically. The research also demonstrates the unique value of combining objective usage measurements (reflecting actual behaviour) with traditional survey data in more comprehensively modelling service adoption."} {"_id": "8b6f2c853875f99f5e130aaac606ec85bb6ef801", "title": "Retransmission steganography and its detection", "text": "The paper presents a new steganographic method called RSTEG (retransmission steganography), which is intended for a broad class of protocols that utilise retransmission mechanisms. The main innovation of RSTEG is to not acknowledge a successfully received packet in order to intentionally invoke retransmission. The retransmitted packet carries a steganogram instead of user data in the payload field. RSTEG is presented in the broad context of network steganography, and the utilisation of RSTEG for TCP (transmission control protocol) retransmission mechanisms is described in detail.
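A toy model of the RSTEG mechanism is useful for reasoning about the trade-off between covert capacity and the induced retransmission level; all rates below are assumptions, and no real TCP stack is involved:

```python
# Toy model of the RSTEG idea (assumed parameters, not a real TCP stack):
# for a fraction p_steg of segments the covert receiver withholds the ACK
# even though the segment arrived, and the resulting "retransmission"
# carries a steganogram chunk in its payload field.
import random

def simulate(n_segments, p_steg, p_natural_loss=0.02, chunk_bits=8, seed=7):
    rng = random.Random(seed)
    stego_bits, retransmissions = 0, 0
    for _ in range(n_segments):
        lost = rng.random() < p_natural_loss        # genuine network loss
        invoked = (not lost) and rng.random() < p_steg
        if lost or invoked:
            retransmissions += 1                    # missing ACK -> resend
        if invoked:
            stego_bits += chunk_bits                # resend carries stego
    return stego_bits, retransmissions / n_segments

bits, retx = simulate(n_segments=10_000, p_steg=0.05)
print("covert bits: %d, retransmission level: %.1f%%" % (bits, 100 * retx))
```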
Simulation results are also presented with the main aim of measuring and comparing the steganographic bandwidth of the proposed method for different TCP retransmission mechanisms, as well as determining the influence of RSTEG on the network retransmission level."} {"_id": "ef4bc332442674a437a704e0fbee98fe36676986", "title": "Verification of an Implementation of Tomasulo's Algorithm by Compositional Model Checking", "text": "An implementation of an out-of-order processing unit based on Tomasulo\u2019s algorithm is formally verified using compositional model checking techniques. This demonstrates that finite-state methods can be applied to such algorithms, without recourse to higher-order proof systems. The paper introduces a novel compositional system that supports cyclic environment reasoning and multiple environment abstractions per signal. A proof of Tomasulo\u2019s algorithm is outlined, based on refinement maps, and relying on the novel features of the compositional system. This proof is fully verified by the SMV verifier, using symmetry to reduce the number of assertions that must be verified."} {"_id": "b9f80615f15ced168c45a4cdbaa7ea0c37734029", "title": "Many-to-Many Geographically-Embedded Flow Visualisation: An Evaluation", "text": "Showing flows of people and resources between multiple geographic locations is a challenging visualisation problem. We conducted two quantitative user studies to evaluate different visual representations for such dense many-to-many flows. In our first study we compared a bundled node-link flow map representation and OD Maps [37] with a new visualisation we call MapTrix. Like OD Maps, MapTrix overcomes the clutter associated with a traditional flow map while providing geographic embedding that is missing in standard OD matrix representations. We found that OD Maps and MapTrix had similar performance, while bundled node-link flow map representations did not scale at all well. Our second study compared participant performance with OD Maps and MapTrix on larger data sets. Again, performance was remarkably similar."} {"_id": "806ce938fca2c5d0989a599461526d9062dfaf51", "title": "Estimating Smart City sensors data generation", "text": "Nowadays, Smart Cities are positioned as one of the most challenging and important research topics, highlighting major changes in people's lifestyles. Technologies such as smart energy, smart transportation or smart health are being designed to improve citizens' quality of life. Smart Cities leverage the deployment of a network of devices - sensors and mobile devices - all connected through different means and/or technologies, according to their network availability and capacities, setting a novel framework feeding end-users with an innovative set of smart services. Aligned to this objective, a typical Smart City architecture is organized into layers, including a sensing layer (generates data), a network layer (moves the data), a middleware layer (manages all collected data and makes it ready for usage) and an application layer (provides the smart services benefiting from this data). In this paper a real Smart City is analyzed, corresponding to the city of Barcelona, with special emphasis on the layers responsible for collecting the data generated by the deployed sensors. The amount of daily sensor data transmitted through the network has been estimated, and a rough projection has been made assuming an exhaustive deployment that fully covers the whole city.
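The kind of back-of-envelope estimate described can be reproduced in a few lines; every figure below is an illustrative assumption, not a Barcelona measurement:

```python
# Back-of-envelope sketch of a city-wide sensor data volume estimate.
# All counts, rates and payload sizes are illustrative assumptions.
sensor_classes = {
    # name: (device count, messages per day, payload bytes per message)
    "parking":       (10_000,   288,  64),   # one reading every 5 min
    "air quality":   (   500, 1_440, 256),   # one reading every minute
    "waste bins":    ( 5_000,    24,  32),   # hourly fill level
    "traffic loops": ( 2_000, 8_640, 128),   # one reading every 10 s
}

total = sum(count * msgs * size
            for count, msgs, size in sensor_classes.values())
print("estimated daily volume: %.2f GB/day" % (total / 1e9))
```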
Finally, we discuss some solutions to both reduce data transmission and improve data management."} {"_id": "4354f1c8b058a5da4b30ffba97131edcf4fd79e7", "title": "Supervised Extraction of Diagnosis Codes from EMRs: Role of Feature Selection, Data Selection, and Probabilistic Thresholding", "text": "Extracting diagnosis codes from medical records is a complex task carried out by trained coders by reading all the documents associated with a patient's visit. With the popularity of electronic medical records (EMRs), computational approaches to code extraction have been proposed in recent years. Machine learning approaches to multi-label text classification provide an important methodology in this task, given that each EMR can be associated with multiple codes. In this paper, we study the role of feature selection, training data selection, and probabilistic threshold optimization in improving different multi-label classification approaches. We conduct experiments based on two different datasets: a recent gold standard dataset used for this task and a second larger and more complex EMR dataset we curated from the University of Kentucky Medical Center. While conventional approaches achieve results comparable to the state-of-the-art on the gold standard dataset, on our complex in-house dataset, we show that feature selection, training data selection, and probabilistic thresholding provide significant gains in performance."} {"_id": "5c6b51bb44c9b2297733b58daaf26af01c98fe09", "title": "A Comparative Study of Feature Extraction Algorithms in Customer Reviews", "text": "The paper systematically compares two feature extraction algorithms to mine product features commented on in customer reviews. The first approach [17] identifies candidate features by applying a set of POS patterns and pruning the candidate set based on the log likelihood ratio test. The second approach [11] applies association rule mining for identifying frequent features and a heuristic based on the presence of sentiment terms for identifying infrequent features. We evaluate the performance of the algorithms on five product-specific document collections regarding consumer electronic devices. We perform an analysis of errors and discuss advantages and limitations of the algorithms."} {"_id": "bf6dd4e8ed48ea7be9654f521bfd45ba67c210e2", "title": "Culture care theory, research, and practice.", "text": "Today nurses are facing a world in which they are almost forced to use transculturally-based nursing theories and practices in order to care for people of diverse cultures. The author, who in the mid-50s pioneered the development of the first transcultural nursing theory with a care focus, discusses the relevance, assumptions, and predictions of the culture care theory along with the ethnonursing research method. The author contends that transcultural nursing findings are gradually transforming nursing practice and are providing a new paradigm shift from traditional medical and unicultural practice to multiculturally congruent and specific care modalities. A few research findings are presented to show the importance of being attentive to cultural care diversities and universalities as the major tenets of the theory.
In addition, some major contributions of the theory are cited along with major challenges for the immediate future."} {"_id": "ea5e4becc7f7a533c3685d89e5f087aa4e25cba7", "title": "Decision making, the P3, and the locus coeruleus-norepinephrine system.", "text": "Psychologists and neuroscientists have had a long-standing interest in the P3, a prominent component of the event-related brain potential. This review aims to integrate knowledge regarding the neural basis of the P3 and to elucidate its functional role in information processing. The authors review evidence suggesting that the P3 reflects phasic activity of the neuromodulatory locus coeruleus-norepinephrine (LC-NE) system. They discuss the P3 literature in the light of empirical findings and a recent theory regarding the information-processing function of the LC-NE phasic response. The theoretical framework emerging from this research synthesis suggests that the P3 reflects the response of the LC-NE system to the outcome of internal decision-making processes and the consequent effects of noradrenergic potentiation of information processing."} {"_id": "41f9d67c031f8b9ec51745eb5a02d826d7b04539", "title": "OBFS: A File System for Object-Based Storage Devices", "text": "The object-based storage model, in which files are made up of one or more data objects stored on self-contained Object-Based Storage Devices (OSDs), is emerging as an architecture for distributed storage systems. The workload presented to the OSDs will be quite different from that of general-purpose file systems, yet many distributed file systems employ general-purpose file systems as their underlying file system. We present OBFS, a small and highly efficient file system designed for use in OSDs. Our experiments show that our user-level implementation of OBFS outperforms Linux Ext2 and Ext3 by a factor of two or three, and while OBFS is 1/25 the size of XFS, it provides only slightly lower read performance and 10%\u201340% higher write performance."} {"_id": "6f48d05e532254e2b7d429db97cb5ad6841b0812", "title": "Are effective teachers like good parents? Teaching styles and student adjustment in early adolescence.", "text": "This study examined the utility of parent socialization models for understanding teachers' influence on student adjustment in middle school. Teachers were assessed with respect to their modeling of motivation and to Baumrind's parenting dimensions of control, maturity demands, democratic communication, and nurturance. Student adjustment was defined in terms of their social and academic goals and interest in class, classroom behavior, and academic performance. Based on information from 452 sixth graders from two suburban middle schools, results of multiple regressions indicated that the five teaching dimensions explained significant amounts of variance in student motivation, social behavior, and achievement. High expectations (maturity demands) were a consistent positive predictor of students' goals and interests, and negative feedback (lack of nurturance) was the most consistent negative predictor of academic performance and social behavior. The role of motivation in mediating relations between teaching dimensions and social behavior and academic achievement also was examined; evidence for mediation was not found. Relations of teaching dimensions to student outcomes were the same for African American and European American students, and for boys and girls.
The implications of parent socialization models for understanding effective teaching are discussed."} {"_id": "e7e061731c5e623dcf9f8c62a5a17041c229bf68", "title": "Compact second harmonic-suppressed bandstop and bandpass filters using open stubs", "text": "Integration of bandstop filters with the bandstop or bandpass filter is presented in this paper. By replacing the series quarter-wavelength connecting lines of conventional open-stub bandpass/bandstop filters with the equivalent T-shaped lines, one could have compact open-stub bandstop/bandpass filters with second harmonic suppression. Transmission-line model calculation is used to derive the design equations of the equivalent T-shaped lines. Experiments have also been done to validate the design concept. Compared with the conventional open-stub bandpass/bandstop filters, over 30-dB improvement of the second harmonic suppression and 28.6% size reduction are achieved."} {"_id": "afb772d0361d6ca85b78ba166b960317b9a87943", "title": "The Blockchain as a Software Connector", "text": "Blockchain is an emerging technology for decentralized and transactional data sharing across a large network of untrusted participants. It enables new forms of distributed software architectures, where components can find agreements on their shared states without trusting a central integration point or any particular participating components. Considering the blockchain as a software connector helps make explicit important architectural considerations about the resulting performance and quality attributes (for example, security, privacy, scalability and sustainability) of the system. Based on our experience in several projects using blockchain, in this paper we provide rationales to support the architectural decision on whether to employ a decentralized blockchain as opposed to other software solutions, like traditional shared data storage. Additionally, we explore specific implications of using the blockchain as a software connector including design trade-offs regarding quality attributes."} {"_id": "03af306bcb882da089453fa57539f62aa7b5289e", "title": "Conceptual modeling for ETL processes", "text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we focus on the problem of the definition of ETL activities and provide formal foundations for their conceptual representation. The proposed conceptual model is (a) customized for the tracing of inter-attribute relationships and the respective ETL activities in the early stages of a data warehouse project; (b) enriched with a 'palette' of frequently used ETL activities, such as the assignment of surrogate keys, the check for null values, etc.; and (c) constructed in a customizable and extensible manner, so that the designer can enrich it with his own re-occurring patterns for ETL activities."} {"_id": "5b30ca7b3cbd38380c23333310f673f835b2dd3e", "title": "Beyond data warehousing: what's next in business intelligence?", "text": "During the last ten years the approach to business management has deeply changed, and companies have understood the importance of enforcing achievement of the goals defined by their strategy through metrics-driven management. The DW process, though supporting bottom-up extraction of information from data, fails at top-down enforcement of the company strategy.
A new approach to BI, called Business Performance Management (BPM), is emerging from this framework: it includes DW but it also requires a reactive component capable of monitoring the time-critical operational processes to allow tactical and operational decision-makers to tune their actions according to the company strategy. The aim of this paper is to encourage the research community to acknowledge the coming of a second era in BI, to propose a general architecture for BPM, and to lay the premises for investigating the most challenging of the related issues."} {"_id": "ca966559c378c0d22d17b1bbce7f55b41cbbfba5", "title": "A UML Based Approach for Modeling ETL Processes in Data Warehouses", "text": "Data warehouses (DWs) are complex computer systems whose main goal is to facilitate the decision making process of knowledge workers. ETL (Extraction-Transformation-Loading) processes are responsible for the extraction of data from heterogeneous operational data sources, their transformation (conversion, cleaning, normalization, etc.) and their loading into DWs. ETL processes are a key component of DWs because incorrect or misleading data will produce wrong business decisions, and therefore, a correct design of these processes at early stages of a DW project is absolutely necessary to improve data quality. However, not much research has dealt with the modeling of ETL processes. In this paper, we present our approach, based on the Unified Modeling Language (UML), which allows us to accomplish the conceptual modeling of these ETL processes together with the conceptual schema of the target DW in an integrated manner. We provide the necessary mechanisms for an easy and quick specification of the common operations defined in these ETL processes such as, the integration of different data sources, the transformation between source and target attributes, the generation of surrogate keys and so on. Moreover, our approach allows the designer a comprehensive tracking and documentation of entire ETL processes, which enormously facilitates the maintenance of these processes. Another advantage of our proposal is the use of the UML (standardization, ease-of-use and functionality) and the seamless integration of the design of the ETL processes with the DW conceptual schema. Finally, we show how to use our integrated approach by using a well-known modeling tool such as Rational Rose."} {"_id": "1126ceee34acd741396c493c84d8b6072a18bfd7", "title": "Potter's Wheel: An Interactive Data Cleaning System", "text": "Cleaning data of errors in structure and content is important for data warehousing and integration. Current solutions for data cleaning involve many iterations of data \u201cauditing\u201d to find errors, and long-running transformations to fix them. Users need to endure long waits, and often write complex transformation scripts. We present Potter\u2019s Wheel, an interactive data cleaning system that tightly integrates transformation and discrepancy detection. Users gradually build transformations to clean the data by adding or undoing transforms on a spreadsheet-like interface; the effect of a transform is shown at once on records visible on screen. These transforms are specified either through simple graphical operations, or by showing the desired effects on example data values. In the background, Potter\u2019s Wheel automatically infers structures for data values in terms of user-defined domains, and accordingly checks for constraint violations. 
Thus users can gradually build a transformation as discrepancies are found, and clean the data without writing complex programs or enduring long delays."} {"_id": "25e0853ae37c2200de5d3597ae9e86131ce25aee", "title": "Modeling ETL activities as graphs", "text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we focus on the logical design of the ETL scenario of a data warehouse. Based on a formal logical model that includes the data stores, activities and their constituent parts, we model an ETL scenario as a graph, which we call the Architecture Graph. We model all the aforementioned entities as nodes and four different kinds of relationships (instance-of, part-of, regulator and provider relationships) as edges. In addition, we provide simple graph transformations that reduce the complexity of the graph. Finally, in order to support the engineering of the design and the evolution of the warehouse, we introduce specific importance metrics, namely dependence and responsibility, to measure the degree to which entities are bound to each other."} {"_id": "0d8070ff416deb43fc9eae352d04b752dabba82f", "title": "A survey of B-tree locking techniques", "text": "B-trees have been ubiquitous in database management systems for several decades, and they are used in other storage systems as well. Their basic structure and basic operations are well and widely understood including search, insertion, and deletion. Concurrency control of operations in B-trees, however, is perceived as a difficult subject with many subtleties and special cases. The purpose of this survey is to clarify, simplify, and structure the topic of concurrency control in B-trees by dividing it into two subtopics and exploring each of them in depth."} {"_id": "470a828d5e3962f2917a0092cc6ba46ccfe41a2a", "title": "Data Preparation for Data Mining", "text": "Senior Editor: Diane D. Cerra Director of Production & Manufacturing: Yonie Overton Production Editor: Edward Wade Editorial Assistant: Belinda Breyer Cover Design: Wall-To-Wall Studios Cover Photograph: © 1999 PhotoDisc, Inc. Text Design & Composition: Rebecca Evans & Associates Technical Illustration: Dartmouth Publishing, Inc. Copyeditor: Gary Morris Proofreader: Ken DellaPenta Indexer: Steve Rath Printer: Courier Corp."} {"_id": "1774bec4155de16e4ed5328e44012d991f1509cd", "title": "RefSeer: A citation recommendation system", "text": "Citations are important in academic dissemination. To help researchers check the completeness of citations while authoring a paper, we introduce a citation recommendation system called RefSeer. Researchers can use it to find related works to cite while authoring papers. It can also be used by reviewers to check the completeness of a paper's references. RefSeer presents both topic-based global recommendation and citation-context-based local recommendation. By evaluating the quality of recommendation, we show that such a recommendation system can recommend citations with good precision and recall. We also show that our recommendation system is very efficient and scalable."} {"_id": "9091d1c5aa7e07d022865b4800ec1684211d2045", "title": "A robust-coded pattern projection for dynamic 3D scene measurement", "text": "This paper presents a new coded structured light pattern which permits solving the correspondence problem with a single shot and without using geometrical constraints.
The pattern is formed by projecting a grid of coloured slits such that each slit, together with its two neighbours, appears only once in the pattern. The proposed technique permits rapid and robust 3D scene measurement, even with moving objects."} {"_id": "623fd6adaa5585707d8d7339b5125185af6e3bf1", "title": "A Comparative Study of Psychosocial Interventions for Internet Gaming Disorder Among Adolescents Aged 13\u201317\u00a0Years", "text": "The present study is a quasi-experimental, prospective study of interventions for internet gaming disorder (IGD). One hundred four parents and their adolescent children were enrolled and allocated to one of four treatment groups: 7-day Siriraj Therapeutic Residential Camp (S-TRC) alone, 8-week Parent Management Training for Game Addiction (PMT-G) alone, combined S-TRC and PMT-G, and basic psychoeducation (control). The severity of IGD was measured by the Game Addiction Screening Test (GAST). The mean difference among groups in GAST scores was statistically significant, with P values of <\u20090.001, 0.002, and 0.005 at 1, 3, and 6\u00a0months post-intervention, respectively. All groups showed improvement over the control group. The percentage of adolescents who remained in the addicted or probably addicted groups was less than 50% in the S-TRC, PMT-G, and combined groups. In conclusion, both S-TRC and PMT-G were effective psychosocial interventions for IGD and were superior to basic psychoeducation alone."} {"_id": "a8820b172e7bc02406f1948d8bc75b7f4bdfb4ef", "title": "Paper-Based Synthetic Gene Networks", "text": "Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides an alternate, versatile venue for synthetic biologists to operate and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze dried onto paper, enabling the inexpensive, sterile, and abiotic distribution of synthetic-biology-based technologies for the clinic, global health, industry, research, and education. For field use, we create circuits with colorimetric outputs for detection by eye and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small-molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors."} {"_id": "60ac421dfd1e55c8649f2cb7d122f60e04821127", "title": "From centrality to temporary fame: Dynamic centrality in complex networks", "text": "We develop a new approach to the study of the dynamics of link utilization in complex networks using records of communication in a large social network. Counter to the perspective that nodes have particular roles, we find roles change dramatically from day to day. “Local hubs” have a power law degree distribution over time, with no characteristic degree value. Our results imply a significant reinterpretation of the concept of node centrality in complex networks, and among other conclusions suggest that interventions targeting hubs will have significantly less effect than previously thought.
"} {"_id": "72417ba72a69b9d2e84fb4228a6398c79a16e11a", "title": "HERZBERG'S MOTIVATION-HYGIENE THEORY AND JOB SATISFACTION IN THE MALAYSIAN RETAIL SECTOR: THE MEDIATING EFFECT OF LOVE OF MONEY", "text": "This paper examines what motivates employees in the retail industry and assesses their level of job satisfaction, using Herzberg's hygiene factors and motivators. In this study, convenience sampling was used to select sales personnel from women's clothing stores in Bandar Sunway shopping mall in the state of Selangor. The results show that hygiene factors were the dominant motivators of sales personnel job satisfaction. Working conditions were the most significant in motivating sales personnel. Recognition was second, followed by company policy and salary. There is a need to delve more deeply into why salespeople place such a high importance on money. Further analysis was performed to assess how much the love of money mediates the relationship between salary and job satisfaction. Based on the general test for mediation, the love of money could explain the relationship between salary and job satisfaction. The main implication of this study is that sales personnel who value money highly are satisfied with their salary and job when they receive a raise."} {"_id": "e58636e67665bd7c6ac1b441c353ab6842a65482", "title": "Evaluation of mobile app paradigms", "text": "The explosion of mobile applications both in number and variety raises the need to shed light on their architecture, composition and quality. Indeed, it is crucial to understand which mobile application paradigm best fits what type of application and usage. Such understanding has direct consequences for the user experience, the development cost and sale revenues of mobile apps. In this paper, we identified four main mobile application paradigms and evaluated them from the developer, user and service provider viewpoints. To ensure objectivity and soundness, we start by defining high-level criteria and then break them down into finer-grained criteria. After a theoretical evaluation, an implementation was carried out as practical verification. The selected application is an object recognition app, which is both exciting and challenging to develop."} {"_id": "b0bd79de12dcc892c7ed3750fa278d14158cab26", "title": "Extraction of Information Related to Adverse Drug Events from Electronic Health Record Notes: Design of an End-to-End Model Based on Deep Learning", "text": "BACKGROUND\nPharmacovigilance and drug-safety surveillance are crucial for monitoring adverse drug events (ADEs), but the main ADE-reporting systems such as the Food and Drug Administration Adverse Event Reporting System face challenges such as underreporting. Therefore, as complementary surveillance, data on ADEs are extracted from electronic health record (EHR) notes via natural language processing (NLP). As NLP develops, many up-to-date machine-learning techniques are introduced in this field, such as deep learning and multi-task learning (MTL). However, only a few studies have focused on employing such techniques to extract ADEs.\n\n\nOBJECTIVE\nWe aimed to design a deep learning model for extracting ADEs and related information such as medications and indications.
Since extraction of ADE-related information includes two steps, named entity recognition and relation extraction, our second objective was to improve the deep learning model using multi-task learning between the two steps.\n\n\nMETHODS\nWe employed the dataset from the Medication, Indication and Adverse Drug Events (MADE) 1.0 challenge to train and test our models. This dataset consists of 1089 EHR notes of cancer patients and includes 9 entity types such as Medication, Indication, and ADE and 7 types of relations between these entities. To extract information from the dataset, we proposed a deep-learning model that uses a bidirectional long short-term memory (BiLSTM) conditional random field network to recognize entities and a BiLSTM-Attention network to extract relations. To further improve the deep-learning model, we employed three typical MTL methods, namely, hard parameter sharing, parameter regularization, and task relation learning, to build three MTL models, called HardMTL, RegMTL, and LearnMTL, respectively.\n\n\nRESULTS\nSince extraction of ADE-related information is a two-step task, the result of the second step (i.e., relation extraction) was used to compare all models. We used microaveraged precision, recall, and F1 as evaluation metrics. Our deep learning model achieved state-of-the-art results (F1=65.9%), which is significantly higher than that (F1=61.7%) of the best system in the MADE 1.0 challenge. HardMTL further improved the F1 by 0.8%, boosting the F1 to 66.7%, whereas RegMTL and LearnMTL failed to boost the performance.\n\n\nCONCLUSIONS\nDeep learning models can significantly improve the performance of ADE-related information extraction. MTL may be effective for named entity recognition and relation extraction, but it depends on the methods, data, and other factors. Our results can facilitate research on ADE detection, NLP, and machine learning."} {"_id": "a5858131642b069c74cf49ebde3bb0e6149ee183", "title": "Measuring level of cuteness of baby images: a supervised learning scheme", "text": "The attractiveness of a baby face image depends on the perception of the perceiver. However, several recent studies advocate the idea that human perceptual analysis can be approximated by statistical models. We believe that the cuteness of baby faces depends on the low-level facial features extracted from different parts (e.g., mouth, eyes, nose) of the faces. In this paper, we introduce a new problem of classifying baby face images based on their cuteness level using supervised learning techniques. The proposed learning model explores the potential of a deep learning technique for measuring the level of cuteness of baby faces. Since no datasets are available to validate the proposed technique, we construct a dataset of baby face images downloaded from the internet. The dataset presents several challenges, such as differences in viewpoint, orientation, lighting conditions, contrast, and background. We annotate the data using some well-known statistical tools inherited from Reliability theory.
The experiments are conducted with several well-known image features, such as Speeded Up Robust Feature (SURF), Histogram of Oriented Gradient (HOG), Convolutional Neural Network (CNN) on Gradient and CNN on Laplacian, and the results are presented and discussed."} {"_id": "5f3291e73ee204afad2eebd4bc1458248e45eabd", "title": "An empirical analysis of smart contracts: platforms, applications, and design patterns", "text": "Smart contracts are computer programs that can be consistently executed by a network of mutually distrusting nodes, without the arbitration of a trusted authority. Because of their resilience to tampering, smart contracts are appealing in many scenarios, especially in those which require transfers of money to respect certain agreed rules (like in financial services and in games). Over the last few years many platforms for smart contracts have been proposed, and some of them have been actually implemented and used. We study how the notion of smart contract is interpreted in some of these platforms. Focussing on the two most widespread ones, Bitcoin and Ethereum, we quantify the usage of smart contracts in relation to their application domain. We also analyse the most common programming patterns in Ethereum, where the source code of smart contracts is available."} {"_id": "b9f56b82698c7b2036a98833a670d9a8f96d4474", "title": "A 5.2mW, 0.0016% THD up to 20kHz, ground-referenced audio decoder with PSRR-enhanced class-AB 16Ω headphone amplifiers", "text": "A low-power ground-referenced audio decoder with PSRR-enhanced class-AB headphone amplifiers maintains <0.0016% THD over the whole audio band in the presence of supply ripple by means of a negative charge pump. Realized in a 40nm CMOS process, the fully-integrated stereo decoder achieves 91dB SNDR and 100dB dynamic range while driving a 16Ω headphone load, and consumes 5.2mW from a 1.8V power supply. The core area is only 0.093mm2 per channel."} {"_id": "e4ae7b14d03e4ac5a432e1226dbd8613affd3e71", "title": "An operational measure of information leakage", "text": "Given two discrete random variables X and Y, an operational approach is undertaken to quantify the “leakage” of information from X to Y. The resulting measure ℒ(X→Y) is called maximal leakage, and is defined as the multiplicative increase, upon observing Y, of the probability of correctly guessing a randomized function of X, maximized over all such randomized functions. It is shown to be equal to the Sibson mutual information of order infinity, giving the latter operational significance. Its resulting properties are consistent with an axiomatic view of a leakage measure; for example, it satisfies the data processing inequality, it is asymmetric, and it is additive over independent pairs of random variables. Moreover, it is shown that the definition is robust in several respects: allowing for several guesses or requiring the guess to be only within a certain distance of the true function value does not change the resulting measure."} {"_id": "29374eed47527cdaf14aa55fdb3935fc2de78c96", "title": "Vehicle Detection and Tracking in Car Video Based on Motion Model", "text": "This paper aims at real-time in-car video analysis to detect and track vehicles ahead for safety, autonomous driving, and target tracing. It describes a comprehensive approach to localizing target vehicles in video under various environmental conditions. The extracted geometry features from the video are continuously projected onto a 1-D profile and are constantly tracked.
We rely on temporal information of features and their motion behaviors for vehicle identification, which compensates for the complexity in recognizing vehicle shapes, colors, and types. We probabilistically model the motion in the field of view according to the scene characteristic and the vehicle motion model. The hidden Markov model (HMM) is used to separate target vehicles from the background and track them probabilistically. We have investigated videos of day and night on different types of roads, showing that our approach is robust and effective in dealing with changes in environment and illumination and that real-time processing becomes possible for vehicle-borne cameras."} {"_id": "9eea99f11ff7417ba8821e84b531e1d16f6683fc", "title": "The Weak Byzantine Generals Problem", "text": "The Byzantine Generals Problem requires processes to reach agreement upon a value even though some of them may fail. It is weakened by allowing them to agree upon an \"incorrect\" value if a failure occurs. The transaction commit problem for a distributed database is a special case of the weaker problem. It is shown that, like the original Byzantine Generals Problem, the weak version can be solved only if fewer than one-third of the processes may fail. Unlike the original problem, an approximate solution exists that can tolerate arbitrarily many failures. Categories and Subject Descriptors: D.4.5 [Operating Systems]: Reliability, F.2.m [Analysis of Algorithms and Problem Complexity]: Miscellaneous General Terms: Reliability Additional"} {"_id": "748260579dc2fb789335a88ae3f63c114795d047", "title": "Action and Interaction Recognition in First-Person Videos", "text": "In this work, we evaluate the performance of the popular dense trajectories approach on first-person action recognition datasets. A person moving around with a wearable camera will actively interact with humans and objects and also passively observe others interacting. Hence, in order to represent real-world scenarios, the dataset must contain actions from first-person perspective as well as third-person perspective. For this purpose, we introduce a new dataset which contains actions from both the perspectives captured using a head-mounted camera. We employ a motion pyramidal structure for grouping the dense trajectory features. The relative strengths of motion along the trajectories are used to compute different bag-of-words descriptors and concatenated to form a single descriptor for the action. The motion pyramidal approach performs better than the baseline improved trajectory descriptors. The method achieves 96.7% on the JPL interaction dataset and 61.8% on our NUS interaction dataset. The same is used to detect actions in long video sequences and achieves average precision of 0.79 on the JPL interaction dataset."} {"_id": "1282e1e89b5c969ea26caa88106b186ed84f20d5", "title": "Wearable Electronics and Smart Textiles: A Critical Review", "text": "Electronic Textiles (e-textiles) are fabrics that feature electronics and interconnections woven into them, presenting physical flexibility and typical size that cannot be achieved with other existing electronic manufacturing techniques. Components and interconnections are intrinsic to the fabric and thus are less visible and not susceptible to becoming tangled or snagged by surrounding objects. E-textiles can also more easily adapt to fast changes in the computational and sensing requirements of any specific application, a useful feature for power management and context awareness.
The vision behind wearable computing foresees future electronic systems as an integral part of our everyday outfits. Such electronic devices have to meet special requirements concerning wearability. Wearable systems will be characterized by their ability to automatically recognize the activity and the behavioral status of their own user as well as of the situation around her/him, and to use this information to adjust the systems' configuration and functionality. This review focuses on recent advances in the field of Smart Textiles and pays particular attention to the materials and their manufacturing process. Each technique shows advantages and disadvantages and our aim is to highlight a possible trade-off between flexibility, ergonomics, low power consumption, integration and eventually autonomy."} {"_id": "6d2b1acad1aa0d93782bf2611be9d5a3c31898a8", "title": "Customers’ Behavior Prediction Using Artificial Neural Network", "text": "In this paper, customer restaurant preference is predicted based on social media location check-ins. Historical preferences of the customer and the influence of the customer’s social network are used in combination with the customer’s mobility characteristics as inputs to the model. As the popularity of social media increases, more and more customer comments and feedback about products and services are available online. It not only becomes a way of sharing information among friends in the social network but also forms a new type of survey which can be utilized by businesses to improve their existing products, services, and market analysis. Approximately 121,000 foursquare restaurant check-ins in the Greater New York City area are used in this research. Artificial neural networks (ANN) and support vector machine (SVM) are developed to predict the customers’ behavior regarding restaurant preferences. ANN provides 93.13% average accuracy across investigated customers, compared to only 54.00% for SVM with a sigmoid kernel function."} {"_id": "aca437e9e2a453c84a38d716ca9a7a7683ae58b6", "title": "Scene Understanding by Reasoning Stability and Safety", "text": "This paper presents a new perspective for 3D scene understanding by reasoning object stability and safety using intuitive mechanics. Our approach utilizes a simple observation that, by human design, objects in static scenes should be stable in the gravity field and be safe with respect to various physical disturbances such as human activities. This assumption is applicable to all scene categories and poses useful constraints for the plausible interpretations (parses) in scene understanding. Given a 3D point cloud captured for a static scene by depth cameras, our method consists of three steps: (i) recovering solid 3D volumetric primitives from voxels; (ii) reasoning stability by grouping the unstable primitives to physically stable objects by optimizing the stability and the scene prior; and (iii) reasoning safety by evaluating the physical risks for objects under physical disturbances, such as human activity, wind or earthquakes. We adopt a novel intuitive physics model and represent the energy landscape of each primitive and object in the scene by a disconnectivity graph (DG). We construct a contact graph with nodes being 3D volumetric primitives and edges representing the supporting relations. Then we adopt a Swendsen–Wang Cuts algorithm to partition the contact graph into groups, each of which is a stable object.
In order to detect unsafe objects in a static scene, our method further infers hidden and situated causes (disturbances) in the scene, and then introduces intuitive physical mechanics to predict possible effects (e.g., falls) as consequences of the disturbances. In experiments, we demonstrate that the algorithm achieves a substantially better performance for (i) object segmentation, (ii) 3D volumetric recovery, and (iii) scene understanding with respect to other state-of-the-art methods. We also compare the safety prediction from the intuitive mechanics model with human judgement."} {"_id": "7baecdf494fd6e352add6993f12cc1554ee5b645", "title": "Association fibre pathways of the brain: parallel observations from diffusion spectrum imaging and autoradiography.", "text": "Understanding the long association pathways that convey cortical connections is a critical step in exploring the anatomic substrates of cognition in health and disease. Diffusion tensor imaging (DTI) is able to demonstrate fibre tracts non-invasively, but present approaches have been hampered by the inability to visualize fibres that have intersecting trajectories (crossing fibres), and by the lack of a detailed map of the origins, course and terminations of the white matter pathways. We therefore used diffusion spectrum imaging (DSI) that has the ability to resolve crossing fibres at the scale of single MRI voxels, and identified the long association tracts in the monkey brain. We then compared the results with available expositions of white matter pathways in the monkey using autoradiographic histological tract tracing. We identified 10 long association fibre bundles with DSI that match the observations in the isotope material: emanating from the parietal lobe, the superior longitudinal fasciculus subcomponents I, II and III; from the occipital-parietal region, the fronto-occipital fasciculus; from the temporal lobe, the middle longitudinal fasciculus and from rostral to caudal, the uncinate fasciculus, extreme capsule and arcuate fasciculus; from the occipital-temporal region, the inferior longitudinal fasciculus; and from the cingulate gyrus, the cingulum bundle. We suggest new interpretations of the putative functions of these fibre bundles based on the cortical areas that they link. These findings using DSI and validated with reference to autoradiographic tract tracing in the monkey represent a considerable advance in the understanding of the fibre pathways in the cerebral white matter. By replicating the major features of these tracts identified by histological techniques in monkey, we show that DSI has the potential to cast new light on the organization of the human brain in the normal state and in clinical disorders."} {"_id": "c86c37f76dc5c95d84a141e6313e5608ad7638e9", "title": "The role of the self in mindblindness in autism", "text": "Since its inception the 'mindblindness' theory of autism has greatly furthered our understanding of the core social-communication impairments in autism spectrum conditions (ASC). However, one of the more subtle issues within the theory that needs to be elaborated is the role of the 'self'. In this article, we expand on mindblindness in ASC by addressing topics related to the self and its central role in the social world and then review recent research in ASC that has yielded important insights by contrasting processes relating to both self and other. 
We suggest that new discoveries lie ahead in understanding how self and other are interrelated and/or distinct, and how understanding atypical self-referential and social-cognitive mechanisms may lead to novel ideas as to how to facilitate social-communicative abilities in ASC."} {"_id": "28890189c8fb5a8082a2a0445eabaa914ea72bae", "title": "A review of functional Near-Infrared Spectroscopy measurements in naturalistic environments", "text": "The development of novel miniaturized wireless and wearable functional Near-Infrared Spectroscopy (fNIRS) devices has paved the way to new functional brain imaging that can revolutionize the cognitive research fields. Over the past few decades, several studies have been conducted with conventional fNIRS systems that have demonstrated the suitability of this technology for a wide variety of populations and applications, to investigate both the healthy brain and the diseased brain. However, what makes wearable fNIRS even more appealing is its capability to allow more ecologically-valid measurements in everyday life scenarios that are not possible with other gold-standard neuroimaging modalities, such as functional Magnetic Resonance Imaging. This can have"} {"_id": "afd45a78b319032b19afd5553ee8504ff8319852", "title": "Programming with Abstract Data Types", "text": "The motivation behind the work in very-high-level languages is to ease the programming task by providing the programmer with a language containing primitives or abstractions suitable to his problem area. The programmer is then able to spend his effort in the right place; he concentrates on solving his problem, and the resulting program will be more reliable as a result. Clearly, this is a worthwhile goal.\n Unfortunately, it is very difficult for a designer to select in advance all the abstractions which the users of his language might need. If a language is to be used at all, it is likely to be used to solve problems which its designer did not envision, and for which the abstractions embedded in the language are not sufficient.\n This paper presents an approach which allows the set of built-in abstractions to be augmented when the need for a new data abstraction is discovered. This approach to the handling of abstraction is an outgrowth of work on designing a language for structured programming. Relevant aspects of this language are described, and examples of the use and definitions of abstractions are given."} {"_id": "d593a0d4682012312354797938fbaa053e652f0d", "title": "The influence of user-generated content on traveler behavior: An empirical investigation on the effects of e-word-of-mouth to hotel online bookings", "text": "The increasing use of web 2.0 applications has generated numerous online user reviews. Prior studies have revealed the influence of user-generated reviews on the sales of products such as CDs, books, and movies. However, the influence of online user-generated reviews in the tourism industry is still largely unknown both to tourism researchers and practitioners. To bridge this knowledge gap in tourism management, we conducted an empirical study to identify the impact of online user-generated reviews on business performance using data extracted from a major online travel agency in China. The empirical findings show that traveler reviews have a significant impact on online sales, with a 10 percent increase in traveler review ratings boosting online bookings by more than five percent.
Our results highlight the importance of online user-generated reviews to business performance in tourism."} {"_id": "44b05f00f032c0d59df4b712b83ff82a913e950a", "title": "Designing for Exploratory Search on Touch Devices", "text": "Exploratory search confronts users with challenges in expressing search intents, because current search interfaces require investigating result listings to identify search directions, iterative typing, and reformulating queries. We present the design of Exploration Wall, a touch-based search user interface that allows incremental exploration and sense-making of large information spaces by combining entity search, flexible use of result entities as query parameters, and spatial configuration of search streams that are visualized for interaction. Entities can be flexibly reused to modify and create new search streams, and manipulated to inspect their relationships with other entities. Data from task-based experiments comparing Exploration Wall with a conventional search user interface indicate that Exploration Wall achieves significantly improved recall for exploratory search tasks while preserving precision. Subjective feedback supports our design choices and indicates improved user satisfaction and engagement. Our findings can help to design user interfaces that can effectively support exploratory search on touch devices."} {"_id": "4b4afd45404d2fd994c5bd0fb79181e702594b61", "title": "The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations", "text": "A useful starting point for designing advanced graphical user interfaces is the Visual Information-Seeking Mantra: Overview first, zoom and filter, then details-on-demand. But this is only a starting point in trying to understand the rich and varied set of information visualizations that have been proposed in recent years. This paper offers a task by data type taxonomy with seven data types (1-, 2-, 3-dimensional data, temporal and multi-dimensional data, and tree and network data) and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extract)."} {"_id": "7e9507924ceebd784503fd25128218a7119ff722", "title": "TopicPanorama: A Full Picture of Relevant Topics", "text": "This paper presents a visual analytics approach to analyzing a full picture of relevant topics discussed in multiple sources, such as news, blogs, or micro-blogs. The full picture consists of a number of common topics covered by multiple sources, as well as distinctive topics from each source. Our approach models each textual corpus as a topic graph. These graphs are then matched using a consistent graph matching method. Next, we develop a level-of-detail (LOD) visualization that balances both readability and stability. Accordingly, the resulting visualization enhances the ability of users to understand and analyze the matched graph from multiple perspectives. By incorporating metric learning and feature selection into the graph matching algorithm, we allow users to interactively modify the graph matching result based on their information needs. We have applied our approach to various types of data, including news articles, tweets, and blog data.
Quantitative evaluation and real-world case studies demonstrate the promise of our approach, especially in support of examining a topic-graph-based full picture at different levels of detail."} {"_id": "a96898490180c86b63eee2e801de6e25de5aa71d", "title": "A user-centric evaluation framework for recommender systems", "text": "This research was motivated by our interest in understanding the criteria for measuring the success of a recommender system from the users' point of view. Even though existing work has suggested a wide range of criteria, the consistency and validity of the combined criteria have not been tested. In this paper, we describe a unifying evaluation framework, called ResQue (Recommender systems' Quality of user experience), which aims at measuring the qualities of the recommended items, the system's usability, usefulness, interface and interaction qualities, users' satisfaction with the systems, and the influence of these qualities on users' behavioral intentions, including their intention to purchase the products recommended to them and return to the system. We also show the results of applying psychometric methods to validate the combined criteria using data collected from a large user survey. The outcomes of the validation are able to 1) support the consistency, validity and reliability of the selected criteria; and 2) explain the quality of user experience and the key determinants motivating users to adopt the recommender technology. The final model consists of thirty-two questions and fifteen constructs, defining the essential qualities of an effective and satisfying recommender system, as well as providing practitioners and scholars with a cost-effective way to evaluate the success of a recommender system and identify important areas in which to invest development resources."} {"_id": "054fc7be5084e0f2cec80e6d708c3eafbffc6497", "title": "Interaction Design for Recommender Systems", "text": "Recommender systems act as personalized decision guides for users, aiding them in decision making about matters related to personal taste. Research has focused mostly on the algorithms that drive the system, with little understanding of design issues from the user’s perspective. The goal of our research is to study users’ interactions with recommender systems in order to develop general design guidelines. We have studied users’ interactions with 11 online recommender systems. Our studies have highlighted the role of transparency (understanding of system logic), familiar recommendations, and information about recommended items in the user’s interaction with the system. Our results also indicate that there are multiple models for successful recommender systems."} {"_id": "44c299893ac76287e37d4d3ee8bf4b5f81cf37e9", "title": "Driving cycle-based design optimization of interior permanent magnet synchronous motor drives for electric vehicle application", "text": "The paper discusses the influence of driving cycles on the design optimization of permanent magnet synchronous machines. A bi-objective design optimization is presented for synchronous machines with V-shaped buried magnets. The machine length and the loss energy over a given driving cycle are defined as objectives. A total of 14 parameters defining geometry and winding layout are chosen as design parameters. Additionally, some constraints, such as speed-torque requirements and minimal stray-field bridge widths, are set.
The optimization problem is solved using 2D finite element analysis and a high-performance differential evolution algorithm. The analyses are performed for the ARTEMIS driving cycle due to its more realistic driving behavior in comparison to the most commonly used New European Driving Cycle. Furthermore, a reference design optimization against the rated point loss energy is presented. The results show a much better performance of the driving cycle optimized machines in comparison to the rated point optimized machines in terms of the cycle-based loss energy. Loss savings depend strongly on the machine length and are approximately between 15% and 45%."} {"_id": "a154e688baa929c335dd9a673592797ec3c27281", "title": "Learning From Positive and Unlabeled Data: A Survey", "text": "Learning from positive and unlabeled data or PU learning is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples. This setting has attracted increasing interest within the machine learning literature as this type of data naturally arises in applications such as medical diagnosis and knowledge base completion. This article provides a survey of the current state of the art in PU learning. It proposes seven key research questions that commonly arise in this field and provides a broad overview of how the field has tried to address them."} {"_id": "454b7a0806de86a95a7b7df8ed3f196aff66532d", "title": "Machine Learning Techniques in ADAS: A Review", "text": "What machine learning (ML) technique is used for system intelligence implementation in ADAS (advanced driving assistance system)? This paper tries to answer this question. This paper analyzes ADAS and ML independently and then relates which ML technique is applicable to what ADAS component and why. The paper gives a good grasp of the current state-of-the-art. Sample works in supervised, unsupervised, deep, and reinforcement learning are presented; their strengths and room for improvement are also discussed. This forms part of the basics in understanding autonomous vehicles. This work is a contribution to the ongoing research in ML aimed at reducing road traffic accidents and fatalities and at promoting safe driving."} {"_id": "b5e26adc2dfb7ad04f8f9286b7eac5f12c4a6714", "title": "Radian: Visual Exploration of Traceroutes", "text": "Several projects deploy probes in the Internet. Probes are systems that continuously perform traceroutes and other networking measurements (e.g., ping) towards selected targets. Measurements can be stored and analyzed to gain knowledge on several aspects of the Internet, but making sense of such data requires suitable methods and tools for exploration and visualization. We present Radian, a tool that allows users to visualize traceroute paths at different levels of detail and to animate their evolution during a selected time interval. We also describe extensive tests of the tool using traceroutes performed by RIPE Atlas Internet probes."} {"_id": "4a49ca9bc897fd1d8205371faca4bbaefcd2248e", "title": "DifNet: Semantic Segmentation by Diffusion Networks", "text": "Deep Neural Networks (DNNs) have recently shown state-of-the-art performance on semantic segmentation tasks; however, they still suffer from poor boundary localization and spatially fragmented predictions.
The difficulties lie in the requirement of making dense predictions from a long path model all at once, since details are hard to keep when data goes through deeper layers. Instead, in this work, we decompose this difficult task into two relatively simple sub-tasks: seed detection, which makes initial predictions without the need for completeness or precision, and similarity estimation, which measures the likelihood that any two nodes belong to the same class without needing to know which class they are. We use one branch network for each sub-task, and apply a cascade of random walks based on hierarchical semantics to approximate a complex diffusion process which propagates seed information to the whole image according to the estimated similarities. The proposed DifNet consistently produces improvements over the baseline models with the same depth and the equivalent number of parameters, and also achieves promising performance on the Pascal VOC and Pascal Context datasets. Our DifNet is trained end-to-end without complex loss functions."} {"_id": "1c6ee895c202a91a808de59445e3dbde2f4cda0e", "title": "Any domain parsing: automatic domain adaptation for natural language parsing", "text": "Abstract of “Any Domain Parsing: Automatic Domain Adaptation for Natural Language Parsing” by David McClosky, Ph.D., Brown University, May, 2010. Current efforts in syntactic parsing are largely data-driven. These methods require labeled examples of syntactic structures to learn statistical patterns governing these structures. Labeled data typically requires expert annotators, which makes it both time consuming and costly to produce. Furthermore, once training data has been created for one textual domain, portability to similar domains is limited. This domain-dependence has inspired a large body of work, since syntactic parsing aims to capture syntactic patterns across an entire language rather than just a specific domain. The simplest approach to this task is to assume that the target domain is essentially the same as the source domain. No additional knowledge about the target domain is included. A more realistic approach assumes that only raw text from the target domain is available. This assumption lends itself well to semi-supervised learning methods since these utilize both labeled and unlabeled examples. This dissertation focuses on a family of semi-supervised methods called self-training. Self-training creates semi-supervised learners from existing supervised learners with minimal effort. We first show results on self-training for constituency parsing within a single domain. While self-training has failed here in the past, we present a simple modification which allows it to succeed, producing state-of-the-art results for English constituency parsing. Next, we show how self-training is beneficial when parsing across domains and helps further when raw text is available from the target domain. One of the remaining issues is that one must choose a training corpus appropriate for the target domain or performance may be severely impaired. Humans can do this in some situations, but this strategy becomes less practical as we approach larger data sets. We present a technique, Any Domain Parsing, which automatically detects useful source domains and mixes them together to produce a customized parsing model. The resulting models perform almost as well as the best seen parsing models (oracle) for each target domain. As a result, we have a fully automatic syntactic constituency parser which can produce high-quality parses for all types of text, regardless of domain."} {"_id": "bd75508d29a69fbf3bf1454c06554e6850d76bd5", "title": "Walking > Walking-in-Place > Flying, in Virtual Environments", "text": "A study by Slater et al. [1995] indicated that naive subjects in an immersive virtual environment experience a higher subjective sense of presence when they locomote by walking-in-place (virtual walking) than when they push-button-fly (along the floor plane). We replicated their study, adding real walking as a third condition. Our study confirmed their findings. We also found that real walking is significantly better than both virtual walking and flying in ease (simplicity, straightforwardness, naturalness) as a mode of locomotion. The greatest difference in subjective presence was between flyers and both kinds of walkers. In addition, subjective presence was higher for real walkers than virtual walkers, but the difference was statistically significant only in some models. Follow-on studies show virtual walking can be substantially improved by detecting footfalls with a head accelerometer. As in the Slater study, subjective presence significantly correlated with subjects’ degree of association with their virtual bodies (avatars). This, our strongest statistical result, suggests that substantial potential presence gains can be had from tracking all limbs and customizing avatar appearance. An unexpected by-product was that real walking through our enhanced version of Slater’s visual-cliff virtual environment (Figure 1) yielded a strikingly compelling virtual experience—the strongest we and most of our visitors have yet experienced. The most needed system improvement is the substitution of wireless technology for all links to the user."} {"_id": "2cac9d4521fb51f1644a6a318df5ed84e8a1878d", "title": "NL2KB: Resolving Vocabulary Gap between Natural Language and Knowledge Base in Knowledge Base Construction and Retrieval", "text": "Words that express relations in natural language (NL) statements may be different from those that represent properties in knowledge bases (KB). This vocabulary gap becomes a barrier for knowledge base construction and retrieval. With the demo system called NL2KB in this paper, users can browse which properties on the KB side may be mapped to a given relational pattern on the NL side. Besides, they can retrieve the sets of relational patterns on the NL side for a given property on the KB side. We describe how the mapping is established in detail. Although the mined patterns are used for Chinese knowledge base applications, the methodology can be extended to other languages."} {"_id": "c80a825c336431658efb2cf4c82d6797eb4054fe", "title": "Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl", "text": "We present DEPCC, the largest-to-date linguistically analyzed corpus in English, including 365 million documents composed of 252 billion tokens and 7.5 billion named entity occurrences in 14.3 billion sentences from a web-scale crawl of the COMMON CRAWL project. The sentences are processed with a dependency parser and with a named entity tagger and contain provenance information, enabling various applications ranging from training syntax-based word embeddings to open information extraction and question answering.
We built an index of all sentences and their linguistic meta-data, enabling quick search across the corpus. We demonstrate the utility of this corpus on the verb similarity task by showing that a distributional model trained on our corpus yields better results than models trained on smaller corpora, like Wikipedia; on the SimVerb3500 dataset it outperforms state-of-the-art verb similarity models trained on those smaller corpora."} {"_id": "0a22c2d4a7a05db1afa1b702bcb1faa3ff63b6e8", "title": "Universal Blind Quantum Computation", "text": "We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation."} {"_id": "4774b4f88a26eff2d22372acc17168b2d1c58b0d", "title": "The evolution to 4G cellular systems: LTE-Advanced", "text": "This paper provides an in-depth view on the technologies being considered for Long Term Evolution-Advanced (LTE-Advanced). First, the evolution from third generation (3G) to fourth generation (4G) is described in terms of performance requirements and main characteristics. The new network architecture developed by the Third Generation Partnership Project (3GPP), which supports the integration of current and future radio access technologies, is highlighted. Then, the main technologies for LTE-Advanced are explained, together with possible improvements, their associated challenges, and some approaches that have been considered to tackle those challenges."} {"_id": "8e0bace83c69e81cf7c68ce007347c4775204cd0", "title": "A closer look at GPUs", "text": "As the line between GPUs and CPUs begins to blur, it's important to understand what makes GPUs tick."}
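A minimal sketch of the verb-similarity evaluation used in the DEPCC abstract above: cosine similarity between distributional vectors, scored against human judgments with Spearman correlation. The embedding lookup here is a hypothetical dict; SimVerb3500 supplies the (verb, verb, score) triples:

```python
import numpy as np
from scipy.stats import spearmanr

def eval_verb_similarity(vectors, gold_triples):
    """vectors: dict mapping verb -> np.ndarray (hypothetical embeddings).
    gold_triples: iterable of (verb1, verb2, human_score)."""
    system, gold = [], []
    for v1, v2, score in gold_triples:
        if v1 in vectors and v2 in vectors:   # skip out-of-vocabulary pairs
            a, b = vectors[v1], vectors[v2]
            system.append(float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            gold.append(score)
    rho, _ = spearmanr(system, gold)
    return rho   # higher rank correlation = better similarity model
```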
{"_id": "c8dce54deaaa3bc13585ab458720d1e08f4d5d89", "title": "MRR: an unsupervised algorithm to rank reviews by relevance", "text": "The automatic detection of relevant reviews plays a major role in tasks such as opinion summarization, opinion-based recommendation, and opinion retrieval. Supervised approaches for ranking reviews by relevance rely on the existence of a significant, domain-dependent training data set. In this work, we propose MRR (Most Relevant Reviews), a new unsupervised algorithm that identifies relevant reviews based on the concept of graph centrality. The intuition behind MRR is that central reviews highlight aspects of a product that many other reviews frequently mention, with similar opinions, as expressed in terms of ratings. MRR constructs a graph where nodes represent reviews, which are connected by edges when a minimum similarity between a pair of reviews is observed, and then employs PageRank to compute the centrality. The minimum similarity is graph-specific, and takes into account how reviews are written in specific domains. The similarity function does not require extensive pre-processing, thus reducing the computational cost. Using reviews from books and electronics products, our approach has outperformed the two unsupervised baselines and shown a comparable performance with two supervised regression models in a specific setting. MRR has also achieved a significantly superior run-time performance in a comparison with the unsupervised baselines."} {"_id": "10e70e16e5a68d52fa2c9d0a452db9ed2f9403aa", "title": "A generalized uncertainty principle and sparse representation in pairs of bases", "text": "An elementary proof of a basic uncertainty principle concerning pairs of representations of R^N vectors in different orthonormal bases is provided. The result, slightly stronger than stated before, has a direct impact on the uniqueness property of the sparse representation of such vectors using pairs of orthonormal bases as overcomplete dictionaries. The main contribution in this paper is the improvement of an important result due to Donoho and Huo concerning the replacement of the l0 optimization problem by a linear programming minimization when searching for the unique sparse representation."} {"_id": "6059b76d64bcd03332f8e5dd55ea5c61669d42dd", "title": "Happiness unpacked: positive emotions increase life satisfaction by building resilience.", "text": "Happiness-a composite of life satisfaction, coping resources, and positive emotions-predicts desirable life outcomes in many domains. The broaden-and-build theory suggests that this is because positive emotions help people build lasting resources. To test this hypothesis, the authors measured emotions daily for 1 month in a sample of students (N = 86) and assessed life satisfaction and trait resilience at the beginning and end of the month. Positive emotions predicted increases in both resilience and life satisfaction. Negative emotions had weak or null effects and did not interfere with the benefits of positive emotions. Positive emotions also mediated the relation between baseline and final resilience, but life satisfaction did not. This suggests that it is in-the-moment positive emotions, and not more general positive evaluations of one's life, that form the link between happiness and desirable life outcomes. Change in resilience mediated the relation between positive emotions and increased life satisfaction, suggesting that happy people become more satisfied not simply because they feel better but because they develop resources for living well."}
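The MRR abstract above fully specifies the pipeline: build a review graph thresholded on pairwise similarity, then rank by PageRank centrality. A minimal sketch with networkx, assuming a caller-supplied similarity function (the paper's domain-tuned similarity and threshold selection are not reproduced):

```python
import itertools
import networkx as nx

def mrr_rank(review_ids, similarity, min_sim):
    """review_ids: hashable review identifiers.
    similarity(a, b) -> float in [0, 1] (assumed, caller-supplied)."""
    g = nx.Graph()
    g.add_nodes_from(review_ids)
    for a, b in itertools.combinations(review_ids, 2):
        s = similarity(a, b)
        if s >= min_sim:                      # edge only above the threshold
            g.add_edge(a, b, weight=s)
    scores = nx.pagerank(g, weight="weight")  # centrality of each review
    return sorted(review_ids, key=scores.get, reverse=True)
```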
{"_id": "b04a503487bc6505aa8972fd690da573f771badb", "title": "Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention", "text": "Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network\u2019s output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network\u2019s output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network\u2019s behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving."} {"_id": "43f7c23ff80f8d96cc009bf3a0b030658ff9ad8b", "title": "An exploratory study of backtracking strategies used by developers", "text": "Developers frequently backtrack while coding. They go back to an earlier state by removing inserted code or by restoring removed code for various reasons. However, little is known about when and how the developers backtrack, and modern IDEs do not provide much assistance for backtracking. As a first step towards gathering baseline knowledge about backtracking and designing more robust backtracking assistance tools in modern IDEs, we conducted an exploratory study with 12 professional developers and a follow-up online survey. Our study revealed several barriers they faced while backtracking. Subjects often manually commented and uncommented code, and often had difficulty finding relevant parts to backtrack. Backtracking was reported to be needed by 3/4 of the developers at least \"sometimes\"."} {"_id": "baee37b414e52bbbcbd3a30fe9112cfcc70e0c88", "title": "Scientific and Pragmatic Challenges for Bridging Education and Neuroscience", "text": "Neuroscience has experienced rapid growth in recent years, spurred in part by the U.S. government\u2019s designation of the 1990s as \u201cThe Decade of the Brain\u201d (Jones & Mendell, 1999). The rapid development of functional neuroimaging techniques has given researchers unprecedented access to the behaving brains of healthy children and adults. The result has been a wave of new insights into thinking, emotion, motivation, learning, and development. As these insights suffuse the social sciences, they sometimes inspire reconsideration of existing explanations. This is most true in psychology, as marked by the births of cognitive neuroscience (Gazzaniga, Ivry, & Mangun, 2002), developmental neuroscience (Johnson, Munakata, & Gilmore, 2001), and social neuroscience (Cacioppo, Visser, & Pickett, 2005). It is increasingly true in economics, where the rapid rise of neuroeconomics (Camerer, Loewenstein, & Prelec, 2005) has caught the attention of the popular press (Cassidy, 2006).
Other social sciences, including communication (Anderson et al., 2006), political science (McDermott, 2004), and sociology (Wexler, 2006), are just beginning to confront the question of whether their research can be informed by neuroscience. Education is somewhere between the two poles of early adopters and tentative newcomers. A decade ago, in this journal, Bruer (1997) forcefully considered the relevance of neuroscience to education. His conclusion\u2014that neuroscience is \u201ca bridge too far\u201d\u2014was noteworthy because Bruer was then director of the McDonnell Foundation, which was actively funding research in both disciplines. Although it was in his best interests to find connections between the disciplines, he found instead poorly drawn extrapolations that inflated neuroscience findings into educational neuromyths. Since Bruer\u2019s cautionary evaluation, a number of commentators have considered the prospects for educational neuroscience. Many sound a more optimistic note (Ansari & Coch, 2006; Byrnes & Fox, 1998; Geake & Cooper, 2003; Goswami, 2006; Petitto & Dunbar, in press), and a textbook has even appeared (Blakemore & Frith, 2005). In this article, we negotiate the middle ground between the pessimism of Bruer and the optimism of those who followed. Table 1 summarizes eight concerns about connecting education and neuroscience. Some are drawn from Bruer (1997) and the ensuing commentaries. Others come from conversations with colleagues in both disciplines, and still others from our own experiences. These concerns do not seem to represent a blanket dismissal but rather a genuine curiosity (tempered by a healthy skepticism) about the implications of neuroscience for education. We begin by articulating the concerns along with some facts about neuroscience that make the concerns more concrete. We voice them in the strong tone in which we have heard them espoused. We then revisit the concerns, reinterpreting them as potential opportunities (also in Table 1). This approach permits us to review a selection of neuroscience studies relevant to content learning. We focus on recent functional magnetic resonance imaging (fMRI), or neuroimaging, studies for reasons of space and because these are the findings that have captured the most attention, both in the academy and in the popular press. Ideally, our review illustrates some elements of neuroscience so that education researchers can think more specifically about the prospects of educational neuroscience. We conclude with two reflections on moving from armchair arguments of a philosophical nature to scientific action on the ground. First, we argue that education and neuroscience can be bridged if (and only if) researchers collaborate across disciplinary lines on tractable problems of common interest. It is the success"} {"_id": "4a85b7fa5e81ae4cefc963c897ed6cf734a15fbc", "title": "Animal-assisted therapy and loneliness in nursing homes: use of robotic versus living dogs.", "text": "Loneliness is a common problem in long-term care facilities (LTCF) and previous work has shown that animal-assisted therapy (AAT) can to some degree reverse loneliness. Here, we compared the ability of a living dog (Dog) and a robotic dog (AIBO) to treat loneliness in elderly patients living in LTCF. In comparison with a control group not receiving AAT, both the Dog and AIBO groups had statistically significant improvements in their levels of loneliness. 
As measured by a modified Lexington Attachment to Pets Scale (MLAPS), residents showed high levels of attachment to both the dog and AIBO. Subscale analysis showed that the AIBO group scored lower than the living dog on \"animal rights/animal welfare\" but not on \"general attachment\" or \"people substituting.\" However, MLAPS measures did not correlate with changes in loneliness, showing that attachment was not the mechanism by which AAT decreases loneliness. We conclude that interactive robotic dogs can reduce loneliness in residents of LTCF and that residents become attached to these robots. However, level of attachment does not explain the decrease in loneliness associated with AAT conducted with either a living or robotic dog."} {"_id": "24f4c9ea98592710dc22d12de43903a74b29335b", "title": "Modelling and Simulation of Four Quadrant Operation of Three Phase Brushless DC Motor With Hysteresis Current", "text": "Brushless DC (BLDC) motor drives are becoming more popular in industrial and traction applications. This makes the control of the BLDC motor in all four quadrants very vital. The motor is operated in four steady state operating modes of the torque-speed plane. To control a BLDC machine it is generally required to measure the speed and position of the rotor with a sensor, because the inverter phases, acting at any time, must be commutated depending on the rotor position. Simulation of the proposed model was done using MATLAB/SIMULINK."} {"_id": "a3c6aa7949501e1bea1380bfcdd5344dcd152e30", "title": "3D Soft Tissue Analysis \u2013 Part 1: Sagittal Parameters", "text": "The aim of this study was to develop a reliable three-dimensional (3D) analysis of facial soft tissues. We determined the mean sagittal 3D values and relationships between sagittal skeletal parameters and digitally recorded 3D soft tissue parameters. A total of 100 adult patients (n\u00aa = 53, n\u00ba = 47) of Caucasian ethnic origin were included in this study. Patients with syndromes, cleft lip and palate, noticeable asymmetry or anomalies in the number of teeth were excluded. Arithmetic means for seven sagittal 3D soft tissue parameters were calculated. The parameters were analyzed biometrically in terms of their reliability and gender-specific differences. The 3D soft tissue values were further analyzed for any correlations with sagittal cephalometric values. Reproducible 3D mean values were defined for 3D soft tissue parameters. We detected gender-specific differences among the parameters. Correlations between the sagittal 3D soft tissue and cephalometric measurements were statistically significant. 3D soft tissue analysis provides additional information on the sagittal position of the jaw bases and on intermaxillary sagittal relations. Further studies are needed to integrate 3D soft tissue analysis in future treatment planning and assessment as a supportive diagnostic tool."}
{"_id": "9e0d8f69eccaee03e20020b7278510e3e41e1c47", "title": "Binary partition tree as an efficient representation for image processing, segmentation, and information retrieval", "text": "This paper discusses the interest of binary partition trees as a region-oriented image representation. Binary partition trees concentrate in a compact and structured representation a set of meaningful regions that can be extracted from an image. They offer a multiscale representation of the image and define a translation invariant 2-connectivity rule among regions. As shown in this paper, this representation can be used for a large number of processing goals such as filtering, segmentation, information retrieval and visual browsing. Furthermore, the processing of the tree representation leads to very efficient algorithms. Finally, for some applications, it may be interesting to compute the binary partition tree once and to store it for subsequent use for various applications. In this context, the paper shows that the amount of bits necessary to encode a binary partition tree remains moderate."} {"_id": "d6590674fe5fb16cf93c9ead436555d4d984d870", "title": "Making sense of social media streams through semantics: A survey", "text": "Using semantic technologies for mining and intelligent information access to social media is a challenging, emerging research area. Traditional search methods are no longer able to address the more complex information seeking behaviour in media streams, which has evolved towards sense making, learning, investigation, and social search. Unlike carefully authored news text and longer web context, social media streams pose a number of new challenges, due to their large-scale, short, noisy, context-dependent, and dynamic nature. This paper defines five key research questions in this new application area, examined through a survey of state-of-the-art approaches for mining semantics from social media streams; user, network, and behaviour modelling; and intelligent, semantic-based information access. The survey includes key methods not just from the Semantic Web research field, but also from the related areas of natural language processing and user modelling. In conclusion, key outstanding challenges are discussed and new directions for research are proposed."}
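The binary-partition-tree abstract two entries above describes a bottom-up, region-merging construction. A minimal 1-D sketch, assuming greedy merging of adjacent regions by mean-intensity distance (real implementations use 2-D region adjacency and richer region models):

```python
def build_bpt(samples):
    """Build a binary partition tree over a 1-D signal.

    Leaves are single samples; each internal node merges the two
    adjacent regions with the most similar mean intensity.
    Returns the node list; the last entry is the root.
    """
    regions = [{"id": i, "mean": float(v), "size": 1, "children": None}
               for i, v in enumerate(samples)]
    tree, next_id = list(regions), len(regions)
    while len(regions) > 1:
        # pick the most similar pair of adjacent regions
        i = min(range(len(regions) - 1),
                key=lambda k: abs(regions[k]["mean"] - regions[k + 1]["mean"]))
        a, b = regions[i], regions[i + 1]
        merged = {"id": next_id, "size": a["size"] + b["size"],
                  "mean": (a["mean"] * a["size"] + b["mean"] * b["size"])
                          / (a["size"] + b["size"]),
                  "children": (a["id"], b["id"])}
        tree.append(merged)
        regions[i:i + 2] = [merged]
        next_id += 1
    return tree

# e.g. build_bpt([1, 1, 2, 9, 9]) merges the two 1s and the two 9s first
```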
{"_id": "c86736a80661552937ed77bb4bffe7e5e246cc87", "title": "A Document Skew Detection Method Using the Hough Transform", "text": "Document image processing has become an increasingly important technology in the automation of office documentation tasks. Automatic document scanners such as text readers and OCR (Optical Character Recognition) systems are an essential component of systems capable of those tasks. One of the problems in this field is that the document to be read is not always placed correctly on a flatbed scanner. This means that the document may be skewed on the scanner bed, resulting in a skewed image. This skew has a detrimental effect on document analysis, document understanding, and character segmentation and recognition. Consequently, detecting the skew of a document image and correcting it are important issues in realising a practical document reader. In this paper we describe a new algorithm for skew detection. We then compare the performance and results of this skew detection algorithm to other published methods from O'Gorman, Hinds, Le, Baird, Postl and Akiyama. Finally, we discuss the theory of skew detection and the different approaches taken to solve the problem of skew in documents. The skew correction algorithm we propose has been shown to be extremely fast, with run times averaging under 0.25 CPU seconds to calculate the angle on the DEC 5000/20 workstation."} {"_id": "36927265f588ed093c2cbdbf7bf95ddd72f000a9", "title": "Performance Evaluation of Bridgeless PFC Boost Rectifiers", "text": "In this paper, a systematic review of bridgeless power factor correction (PFC) boost rectifiers, also called dual boost PFC rectifiers, is presented. Performance comparison between the conventional PFC boost rectifier and a representative member of the bridgeless PFC boost rectifier family is performed. Loss analysis and experimental efficiency evaluation for both CCM and DCM/CCM boundary operations are provided."} {"_id": "77c87f82a73edab2c46d600fc3d7821cdb15359a", "title": "State-of-the-art, single-phase, active power-factor-correction techniques for high-power applications - an overview", "text": "A review of high-performance, state-of-the-art, active power-factor-correction (PFC) techniques for high-power, single-phase applications is presented. The merits and limitations of several PFC techniques that are used in today's network-server and telecom power supplies to maximize their conversion efficiencies are discussed. These techniques include various zero-voltage-switching and zero-current-switching, active-snubber approaches employed to reduce reverse-recovery-related switching losses, as well as techniques for the minimization of the conduction losses. Finally, the effect of recent advancements in semiconductor technology, primarily silicon-carbide technology, on the performance and design considerations of PFC converters is discussed."}
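For the Hough-transform skew detection described a few entries above, a minimal sketch using scikit-image: near-horizontal text lines vote in the Hough accumulator, and the dominant angle gives the skew (the paper's own, speed-optimized algorithm is not reproduced):

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def estimate_skew_degrees(binary_img, max_skew=15.0):
    """binary_img: 2-D bool array, True on foreground (ink) pixels.
    Searches +/- max_skew degrees around horizontal text lines."""
    # theta near +90 degrees corresponds to near-horizontal lines
    thetas = np.deg2rad(np.linspace(90.0 - max_skew, 90.0 + max_skew, 361))
    accumulator, angles, dists = hough_line(binary_img, theta=thetas)
    _, best_angles, _ = hough_line_peaks(accumulator, angles, dists, num_peaks=1)
    return np.rad2deg(best_angles[0]) - 90.0   # signed skew estimate
```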
{"_id": "864e1700594dfdf46a4981b5bc07a54ebeab11ba", "title": "Bridgeless PFC implementation using one cycle control technique", "text": "Conventional boost PFC suffers from the high conduction loss in the input rectifier-bridge. Higher efficiency can be achieved by using the bridgeless boost topology. This new circuit has issues such as voltage sensing, current sensing and EMI noise. In this paper, the one cycle control technique is used to solve the issues of voltage sensing and current sensing. Experimental results show improvements in both efficiency and EMI performance."} {"_id": "c0b7e09f212ec85da22974c481e7b93efeba1504", "title": "Common mode noise modeling and analysis of dual boost PFC circuit", "text": "To achieve a high efficiency PFC front stage in a switching mode power supply (SMPS), the dual boost PFC (DBPFC) topology shows superior characteristics compared with the traditional boost PFC, but it by nature brings higher EMI noise, especially common mode (CM) noise. This paper deals with the modeling and analysis of DBPFC CM noise based on and compared with boost PFC; noise propagation equivalent circuits of both topologies are deduced, and theoretical analysis illustrates the difference. Experiments are performed to validate the EMI model and analysis."} {"_id": "60ba158cb1a619726db31b684a7bf818e2f8256b", "title": "Common mode EMI noise suppression in bridgeless boost PFC converter", "text": "The bridgeless boost PFC converter achieves high efficiency by eliminating the input diode bridge. However, common mode (CM) conducted EMI becomes a great issue. The goal of this paper is to study the possibility of minimizing the CM noise in this converter. First the noise model is studied. Then a balance concept is introduced and applied to cancel the CM noise. Two approaches to minimize CM noise are introduced and compared. Experiments verify the effectiveness of both approaches."} {"_id": "6e81f26db102e3e261f4fd35251e2f1315709258", "title": "A design methodology of chip-to-chip wireless power transmission system", "text": "A design methodology to transmit power using a chip-to-chip wireless interface is proposed. The proposed power transmission system is based on magnetic coupling, and a power transmission of 5 mW/mm2 was verified. The transmission efficiency trade-off with the transmitted power is also discussed."} {"_id": "4155199a98a1c618d2c73fe850f582483addd7ff", "title": "Options for Control of Reactive Power by Distributed Photovoltaic Generators", "text": "High-penetration levels of distributed photovoltaic (PV) generation on an electrical distribution circuit present several challenges and opportunities for distribution utilities. Rapidly varying irradiance conditions may cause voltage sags and swells that cannot be compensated by slowly responding utility equipment, resulting in a degradation of power quality. Although not permitted under current standards for interconnection of distributed generation, fast-reacting, VAR-capable PV inverters may provide the necessary reactive power injection or consumption to maintain voltage regulation under difficult transient conditions. As a side benefit, the control of reactive power injection at each PV inverter provides an opportunity and a new tool for distribution utilities to optimize the performance of distribution circuits, e.g., by minimizing thermal losses. We discuss and compare via simulation various design options for control systems to manage the reactive power generated by these inverters. An important design decision that weighs on the speed and quality of communication required is whether the control should be centralized or distributed (i.e., local). In general, we find that local control schemes are able to maintain voltage within acceptable bounds.
We consider the benefits of choosing different local variables on which to control, and how the control system can be continuously tuned between robust voltage control, suitable for daytime operation when circuit conditions can change rapidly, and loss minimization, better suited for nighttime operation."} {"_id": "1e6123e778caa1e7d77292ffc40920b78d769ce7", "title": "Deep Convolutional Neural Network Design Patterns", "text": "Recent research in the deep learning field has produced a plethora of new architectures. At the same time, a growing number of groups are applying deep learning to new applications. Some of these groups are likely to be composed of inexperienced deep learning practitioners who are baffled by the dizzying array of architecture choices and therefore opt to use an older architecture (i.e., AlexNet). Here we attempt to bridge this gap by mining the collective knowledge contained in recent deep learning research to discover underlying principles for designing neural network architectures. In addition, we describe several architectural innovations, including Fractal of FractalNet network, Stagewise Boosting Networks, and Taylor Series Networks (our Caffe code and prototxt files are available at https://github.com/iPhysicist/CNNDesignPatterns). We hope others are inspired to build on our preliminary work."} {"_id": "88c3904153d831f6d854067f2ad69c5a330f4af3", "title": "A linear regression analysis of the effects of age related pupil dilation change in iris biometrics", "text": "Medical studies have shown that average pupil size decreases linearly throughout adult life. Therefore, on average, the longer the time between acquisition of two images of the same iris, the larger the difference in dilation between the two images. Several studies have shown that increased difference in dilation causes an increase in the false nonmatch rate for iris recognition. Thus, increased difference in pupil dilation is one natural mechanism contributing to an iris template aging effect. We present an experimental analysis of the change in genuine match scores in the context of dilation differences due to aging."} {"_id": "bbaee955dd1280cb50ee26040f0e8c939369cf78", "title": "Controlling Robot Morphology From Incomplete Measurements", "text": "Mobile robots with complex morphology are essential for traversing rough terrains in Urban Search & Rescue missions. Since teleoperation of the complex morphology causes high cognitive load of the operator, the morphology is controlled autonomously. The autonomous control measures the robot state and surrounding terrain which is usually only partially observable, and thus the data are often incomplete. We marginalize the control over the missing measurements and evaluate an explicit safety condition. If the safety condition is violated, tactile terrain exploration by the body-mounted robotic arm gathers the missing data."} {"_id": "40213ebcc1e03c25ba97f4110c0b2030fd2e79b6", "title": "Computational imaging with multi-camera time-of-flight systems", "text": "Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human-computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized.
Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating."} {"_id": "627c05039285cee961f9802c41db1a5eaa550b13", "title": "Merging What's Cracked, Cracking What's Merged: Adaptive Indexing in Main-Memory Column-Stores", "text": "Adaptive indexing is characterized by the partial creation and refinement of the index as side effects of query execution. Dynamic or shifting workloads may benefit from preliminary index structures focused on the columns and specific key ranges actually queried \u2014 without incurring the cost of full index construction. The costs and benefits of adaptive indexing techniques should therefore be compared in terms of initialization costs, the overhead imposed upon queries, and the rate at which the index converges to a state that is fully-refined for a particular workload component. Based on an examination of database cracking and adaptive merging, which are two techniques for adaptive indexing, we seek a hybrid technique that has a low initialization cost and also converges rapidly. We find the strengths and weaknesses of database cracking and adaptive merging complementary. One has a relatively high initialization cost but converges rapidly. The other has a low initialization cost but converges relatively slowly. We analyze the sources of their respective strengths and explore the space of hybrid techniques. We have designed and implemented a family of hybrid algorithms in the context of a column-store database system. Our experiments compare their behavior against database cracking and adaptive merging, as well as against both traditional full index lookup and scan of unordered data. We show that the new hybrids significantly improve over past methods while at least two of the hybrids come very close to the \u201cideal performance\u201d in terms of both overhead per query and convergence to a final state."} {"_id": "bcc18022ba80d2e2a6d41c64fb6c8a9e3a898140", "title": "Positioning for the Internet of Things: A 3GPP Perspective", "text": "Many use cases in the Internet of Things (IoT) will require or benefit from location information, making positioning a vital dimension of the IoT. The 3GPP has dedicated a significant effort during its Release 14 to enhance positioning support for its IoT technologies to further improve the 3GPP-based IoT ecosystem. In this article, we identify the design challenges of positioning support in LTE-M and NB-IoT, and overview the 3GPP's work in enhancing the positioning support for LTE-M and NB-IoT. We focus on OTDOA, which is a downlink-based positioning method.
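OTDOA locates a device from observed time (equivalently, range) differences to several base stations. A minimal sketch of that multilateration principle via Gauss-Newton least squares on synthetic anchors (none of the 3GPP signal design is modeled):

```python
import numpy as np

def tdoa_locate(anchors, range_diffs, x0, iters=20):
    """anchors: (n, 2) positions; range_diffs[i-1] is the observed
    ||x - anchors[i]|| - ||x - anchors[0]|| for i = 1..n-1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)      # distance to each anchor
        residual = (d[1:] - d[0]) - range_diffs
        # Jacobian of each range difference w.r.t. the position x
        jac = (x - anchors[1:]) / d[1:, None] - (x - anchors[0]) / d[0]
        x = x - np.linalg.lstsq(jac, residual, rcond=None)[0]
    return x

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
truth = np.array([30.0, 55.0])
d = np.linalg.norm(anchors - truth, axis=1)
print(tdoa_locate(anchors, d[1:] - d[0], x0=[50.0, 50.0]))  # ~ [30, 55]
```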
We provide an overview of the OTDOA architecture and protocols, summarize the designs of OTDOA positioning reference signals, and present simulation results to illustrate the positioning performance."} {"_id": "24fe8b910028522424f2e8dd5ecb9dc2bd3e9153", "title": "Autonomous Takeoff and Flight of a Tethered Aircraft for Airborne Wind Energy", "text": "A control design approach to achieve fully autonomous takeoff and flight maneuvers with a tethered aircraft is presented and demonstrated in real-world flight tests with a small-scale prototype. A ground station equipped with a controlled winch and a linear motion system accelerates the aircraft to takeoff speed and controls the tether reeling in order to limit the pulling force. This setup corresponds to airborne wind energy (AWE) systems with ground-based energy generation and rigid aircraft. A simple model of the aircraft\u2019s dynamics is introduced and its parameters are identified from experimental data. A model-based, hierarchical feedback controller is then designed, whose aim is to manipulate the elevator, aileron, and propeller inputs in order to stabilize the aircraft during the takeoff and to achieve figure-of-eight flight patterns parallel to the ground. The controller operates in a fully decoupled mode with respect to the ground station. Parameter tuning and stability/robustness aspects are discussed, too. The experimental results indicate that the controller is able to achieve satisfactory performance and robustness, notwithstanding its simplicity, and confirm that the considered takeoff approach is technically viable and solves the issue of launching this kind of AWE system in a compact space and at low additional cost."} {"_id": "7d6968a25c81e4bc0fb958173a61ea82362cbd03", "title": "STOCK MARKET PREDICTION USING BIO-INSPIRED COMPUTING : A SURVEY", "text": "Bio-inspired evolutionary algorithms are probabilistic search methods that mimic natural biological evolution. They show the behavior of the biological entities interacting locally with one another or with their environment to solve complex problems. This paper aims to analyze the most predominantly used bio-inspired optimization techniques that have been used for stock market prediction and hence present a comparative study between them."} {"_id": "717bba8eec2457f1af024a008929fbe4c7ad0612", "title": "Security slicing for auditing XML, XPath, and SQL injection vulnerabilities", "text": "XML, XPath, and SQL injection vulnerabilities are among the most common and serious security issues for Web applications and Web services. Thus, it is important for security auditors to ensure that the implemented code is, to the extent possible, free from these vulnerabilities before deployment. Although existing taint analysis approaches could automatically detect potential vulnerabilities in source code, they tend to generate many false warnings. Furthermore, the produced traces, i.e., dataflow paths from input sources to security-sensitive operations, tend to be incomplete or to contain a great deal of irrelevant information. Therefore, it is difficult to identify real vulnerabilities and determine their causes. One suitable approach to support security auditing is to compute a program slice for each security-sensitive operation, since it would contain all the information required for performing security audits (Soundness).
A limitation, however, is that such slices may also contain information that is irrelevant to security (Precision), thus raising scalability issues for security audits. In this paper, we propose an approach to assist security auditors by defining and experimenting with pruning techniques to reduce original program slices to what we refer to as security slices, which contain sound and precise information. To evaluate the proposed pruning mechanism by using a number of open source benchmarks, we compared our security slices with the slices generated by a state-of-the-art program slicing tool. On average, our security slices are 80% smaller than the original slices, thus suggesting significant reduction in auditing costs."} {"_id": "bf277908733127ada3acf7028947a5eb0e9be38b", "title": "A fully integrated multi-CPU, GPU and memory controller 32nm processor", "text": "This paper describes the 32nm Sandy Bridge processor that integrates up to 4 high performance Intel Architecture (IA) cores, a power/performance optimized graphic processing unit (GPU) and memory and PCIe controllers in the same die. The Sandy Bridge architecture block diagram is shown in Fig. 15.1.1 and the floorplan of a four IA-core version is shown in Fig. 15.1.2. The Sandy Bridge IA core implements an improved branch prediction algorithm, a micro-operation (Uop) cache, a floating point Advanced Vector Extension (AVX), a second load port in the L1 cache and bigger register files in the out-of-order part of the machine; all these architecture improvements boost the IA core performance without increasing the thermal power dissipation envelope or the average power consumption (to preserve battery life in mobile systems). The CPUs and GPU share the same 8MB level-3 cache memory. The data flow is optimized by a high performance on die interconnect fabric (called \u201cring\u201d) that connects between the CPUs, the GPU, the L3 cache and the system agent (SA) unit that houses a 1600MT/s, dual channel DDR3 memory controller, a 20-lane PCIe gen2 controller, a two parallel pipe display engine, the power management control unit and the testability logic. An on die EPROM is used for configurability and yield optimization."} {"_id": "daac2e9298d513de49d9e6d01c6ec78daf8feb47", "title": "Position-based impedance control of an industrial hydraulic manipulator", "text": "This paper addresses the problem of impedance control in hydraulic robots. Whereas most impedance and hybrid force/position control formulations have focussed on electrically-driven robots with controllable actuator torques, torque control of hydraulic actuators is a difficult task. Motivated by the previous works [2,9, lo], a position-based impedance controller is proposed and demonstrated on an existing industrial hydraulic robot (a Unimate MKII-2000). A nonlinear proportional-integral (NPI) controller is first developed to meet the accurate positioning requirements of this impedance control formulation. The NPI controller is shown to make the manipulator match a range of second-order target impedances. Various experiments in free space and in environmental contact, including a simple impedance modulation experiment, demonstrate the feasibility and the promise of the technique."} {"_id": "b57a4f80f2b216105c6c283e18236c2474927590", "title": "Clustered Collaborative Filtering Approach for Distributed Data Mining on Electronic Health Records", "text": "Distributed Data Mining (DDM) has become one of the promising areas of Data Mining. 
DDM techniques include the classifier approach and the agent approach. The classifier approach plays a vital role in mining distributed data, with homogeneous and heterogeneous variants depending on the data sites. The homogeneous classifier approach involves ensemble learning, distributed association rule mining, meta-learning, and knowledge probing. The heterogeneous classifier approach involves collective principal component analysis, distributed clustering, collective decision trees, and collective Bayesian learning models. In this paper, the classifier approach for DDM is summarized, and an architectural model based on clustered collaborative filtering for Electronic Health Records (EHR) is proposed."} {"_id": "9f2923984ca3f0bbcca4415f47bee061d848120e", "title": "Regulating the new information intermediaries as gatekeepers of information diversity", "text": "Purpose \u2013 The purposes of this paper are to deal with the following questions: because search engines, social networks and app-stores are often referred to as gatekeepers to diverse information access, what is the evidence to substantiate these gatekeeper concerns, and to what extent are existing regulatory solutions to control gatekeeper control suitable at all to address new diversity concerns? The paper also maps the different gatekeeper concerns about media diversity as evidenced in existing research against the background of network gatekeeping theory, critically analyses some of the currently discussed regulatory approaches, and develops the contours of a more user-centric approach towards gatekeeper control and media diversity."} {"_id": "8a3afde910fc3ebd95fdb51a157883b81bfc7e73", "title": "A hierarchical lstm model with multiple features for sentiment analysis of sina weibo texts", "text": "Sentiment analysis has long been a hot topic in natural language processing. With the development of social networks, sentiment analysis on social media such as Facebook, Twitter and Weibo becomes a new trend in recent years. Many different methods have been proposed for sentiment analysis, including traditional methods (SVM and NB) and deep learning methods (RNN and CNN). In addition, the latter always outperform the former. However, most existing methods only focus on local text information and ignore the user personality and content characteristics. In this paper, we propose an improved LSTM model that considers both user-based and content-based features. We first analyze the training dataset to extract artificial features, which consist of user-based and content-based features. Then we construct a hierarchical LSTM model, named LSTM-MF (a hierarchical LSTM model with multiple features), and introduce the features into the model to generate sentence and document representations. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-the-art methods."} {"_id": "933cba585a12e56a8f60511ebeb74b8cb42634b1", "title": "A Distribution-Based Clustering Algorithm for Mining in Large Spatial Databases", "text": "The problem of detecting clusters of points belonging to a spatial point process arises in many applications.
In this paper, we introduce the new clustering algorithm DBCLASD (Distribution Based Clustering of LArge Spatial Databases) to discover clusters of this type. The results of experiments demonstrate that DBCLASD, contrary to partitioning algorithms such as CLARANS, discovers clusters of arbitrary shape. Furthermore, DBCLASD does not require any input parameters, in contrast to the clustering algorithm DBSCAN, which requires two input parameters that may be difficult to provide for large databases. In terms of efficiency, DBCLASD is between CLARANS and DBSCAN, close to DBSCAN. Thus, the efficiency of DBCLASD on large spatial databases is very attractive when considering its nonparametric nature and its good quality for clusters of arbitrary shape."} {"_id": "c66dc716dc0ab674386255e64a743c291c3a1ab5", "title": "Impact of Depression and Antidepressant Treatment on Heart Rate Variability: A Review and Meta-Analysis", "text": "BACKGROUND\nDepression is associated with an increase in the likelihood of cardiac events; however, studies investigating the relationship between depression and heart rate variability (HRV) have generally focused on patients with cardiovascular disease (CVD). The objective of the current report is to examine with meta-analysis the impact of depression and antidepressant treatment on HRV in depressed patients without CVD.\n\n\nMETHODS\nStudies comparing 1) HRV in patients with major depressive disorder and healthy control subjects and 2) the HRV of patients with major depressive disorder before and after treatment were considered for meta-analysis.\n\n\nRESULTS\nMeta-analyses were based on 18 articles that met inclusion criteria, comprising a total of 673 depressed participants and 407 healthy comparison participants. Participants with depression had lower HRV (time frequency: Hedges' g = -.301, p < .001; high frequency: Hedges' g = -.293, p < .001; nonlinear: Hedges' g = -1.955, p = .05; Valsalva ratio: Hedges' g = -.712, p < .001) than healthy control subjects, and depression severity was negatively correlated with HRV (r = -.354, p < .001). Tricyclic medication decreased HRV, although serotonin reuptake inhibitors, mirtazapine, and nefazodone had no significant impact on HRV despite patient response to treatment.\n\n\nCONCLUSIONS\nDepression without CVD is associated with reduced HRV, which decreases with increasing depression severity, most apparent with nonlinear measures of HRV. Critically, a variety of antidepressant treatments do not resolve these decreases despite resolution of symptoms, highlighting that antidepressant medications might not have HRV-mediated cardioprotective effects and the need to identify individuals at risk among patients in remission."} {"_id": "2cae7a02082722145a6977469f74a0eb5bd10585", "title": "Temporal Sequence Learning, Prediction, and Control: A Review of Different Models and Their Relation to Biological Mechanisms", "text": "In this review, we compare methods for temporal sequence learning (TSL) across the disciplines machine-control, classical conditioning, neuronal models for TSL as well as spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward-based (e.g., TD learning) and correlation-based (Hebbian) learning related? and How do the different models correspond to possibly underlying biological mechanisms of synaptic plasticity?
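As a concrete illustration of the two DBSCAN input parameters that the DBCLASD abstract above contrasts itself with (DBCLASD has no widely available implementation, so only the DBSCAN side is sketched here, using scikit-learn):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 400)
ring = np.c_[np.cos(t), np.sin(t)] + rng.normal(0.0, 0.05, (400, 2))
blob = rng.normal([3.0, 0.0], 0.2, (300, 2))
noise = rng.uniform(-2.0, 5.0, (60, 2))
points = np.vstack([ring, blob, noise])

# eps (neighborhood radius) and min_samples are exactly the two
# user-supplied parameters DBCLASD is designed to avoid
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(points)
print(sorted(set(labels)))   # -1 marks noise; the rest are cluster ids
```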
We first compare the different models in an open-loop condition, where behavioral feedback does not alter the learning. Here we observe that reward-based and correlation-based learning are indeed very similar. Machine control is then used to introduce the problem of closed-loop control (e.g., actor-critic architectures). Here the problem of evaluative (rewards) versus nonevaluative (correlations) feedback from the environment will be discussed, showing that both learning approaches are fundamentally different in the closed-loop condition. In trying to answer the second question, we compare neuronal versions of the different learning architectures to the anatomy of the involved brain structures (basal-ganglia, thalamus, and cortex) and the molecular biophysics of glutamatergic and dopaminergic synapses. Finally, we discuss the different algorithms used to model STDP and compare them to reward-based learning rules. Certain similarities are found in spite of the strongly different timescales. Here we focus on the biophysics of the different calcium-release mechanisms known to be involved in STDP."} {"_id": "b83a6c77e61a38ada308992cc579c8cd49ee16f4", "title": "A Survey of Outlier Detection Methods in Network Anomaly Identification", "text": "The detection of outliers has gained considerable interest in data mining with the realization that outliers can be the key discovery to be made from very large databases. Outliers arise due to various reasons such as mechanical faults, changes in system behavior, fraudulent behavior, human error and instrument error. Indeed, for many applications the discovery of outliers leads to more interesting and useful results than the discovery of inliers. Detection of outliers can lead to identification of system faults so that administrators can take preventive measures before they escalate. It is possible that anomaly detection may enable detection of new attacks. Outlier detection is an important anomaly detection approach. In this paper, we present a comprehensive survey of well known distance-based, density-based and other techniques for outlier detection and compare them. We provide definitions of outliers and discuss their detection based on supervised and unsupervised learning in the context of network anomaly detection."} {"_id": "f0bb2eac780d33a2acc129a17e502a3aca28d3a3", "title": "Managing Risk in Software Process Improvement: An Action Research Approach", "text": "Many software organizations engage in software process improvement (SPI) initiatives to increase their capability to develop quality solutions at a competitive level. Such efforts, however, are complex and very demanding. A variety of risks makes it difficult to develop and implement new processes. We studied SPI in its organizational context through collaborative practice research (CPR), a particular form of action research. The CPR program involved close collaboration between practitioners and researchers over a three-year period to understand and improve SPI initiatives in four Danish software organizations. The problem of understanding and managing risks in SPI teams emerged in one of the participating organizations and led to this research. We draw upon insights from the literature on SPI and software risk management as well as practical lessons learned from managing SPI risks in the participating software organizations. Our research offers two contributions. First, we contribute to knowledge on SPI by proposing an approach to understand and manage risks in SPI teams. 
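Returning to the temporal sequence learning review above: its first question is how reward-based and correlation-based rules relate. In the open-loop case both are local, product-form updates; a minimal sketch contrasting TD(0) with a plain Hebbian rule over a linear value function (illustrative only, not any specific model from the review):

```python
import numpy as np

def td0_update(w, x_t, x_next, reward, alpha=0.1, gamma=0.9):
    # reward-based: the prediction error drives the weight change
    delta = reward + gamma * (w @ x_next) - (w @ x_t)
    return w + alpha * delta * x_t

def hebbian_update(w, pre, post, alpha=0.1):
    # correlation-based: co-activity of pre- and post-synaptic terms
    return w + alpha * post * pre

w = np.zeros(4)
x_t, x_next = np.eye(4)[0], np.eye(4)[1]
w = td0_update(w, x_t, x_next, reward=1.0)    # both rules share the
w = hebbian_update(w, pre=x_t, post=w @ x_t)  # same local product form
```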
This risk management approach consists of a framework for understanding risk areas and risk resolution strategies within SPI and a related process for managing SPI risks. Second, we contribute to knowledge on risk management within the information systems and software engineering disciplines. We propose an approach to tailor risk management to specific contexts. This approach consists of a framework for understanding and choosing between different forms of risk management and a process to tailor risk management to specific contexts."} {"_id": "2b1d83d6d7348700896088b34154eb9e0b021962", "title": "CONTEXT-CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS", "text": "We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods."} {"_id": "4edae1c443cd9bede2af016c23e13d6e664bfe7e", "title": "Ensemble methods for spoken emotion recognition in call-centres", "text": "Machine-based emotional intelligence is a requirement for more natural interaction between humans and computer interfaces and a basic level of accurate emotion perception is needed for computer systems to respond adequately to human emotion. Humans convey emotional information both intentionally and unintentionally via speech patterns. These vocal patterns are perceived and understood by listeners during conversation. This research aims to improve the automatic perception of vocal emotion in two ways. First, we compare two emotional speech data sources: natural, spontaneous emotional speech and acted or portrayed emotional speech. This comparison demonstrates the advantages and disadvantages of both acquisition methods and how these methods affect the end application of vocal emotion recognition. Second, we look at two classification methods which have not been applied in this field: stacked generalisation and unweighted vote. We show how these techniques can yield an improvement over traditional classification methods."} {"_id": "32c79c5a66e97469465875a31685864e0df8faee", "title": "Laser ranging : a critical review of usual techniques for distance measurement", "text": "We review some usual laser range finding techniques for industrial applications. After outlining the basic principles of triangulation and time of flight [pulsed, phase-shift and frequency modulated continuous wave (FMCW)], we discuss their respective fundamental limitations. Selected examples of traditional and new applications are also briefly presented.
"} {"_id": "2913553d5e1ff6447b555e458fe1e0de78c0d45a", "title": "Liquid cooled permanent-magnet traction motor design considering temporary overloading", "text": "This paper focuses on traction motor design in a variable speed drive of an Electric Mini Bus, aiming to deliver high mean mechanical power over the whole speed range, using single gear transmission. Specific design considerations aim to minimize oversizing, by utilizing temporary overloading capability using liquid cooled motor housing. A constant torque - constant power control strategy is adopted by appropriately programming the over-torque and field weakening operation areas of the motor. A quantitative analysis of the over-torque, rated, and field weakening operation areas is given, focusing on efficiency and paying particular attention to iron losses on both stator and rotor. The motor has been designed in order to meet traction application specifications and allow operation at higher specific electric loading to increase power density."} {"_id": "325be255b9eb19cb721b7ec0429e46570181370e", "title": "Nitroglycerin: a review of its use in the treatment of vascular occlusion after soft tissue augmentation.", "text": "BACKGROUND\nSkin necrosis after soft tissue augmentation with dermal fillers is a rare but potentially severe complication. Nitroglycerin paste may be an important treatment option for dermal and epidermal ischemia in cosmetic surgery.\n\n\nOBJECTIVES\nTo summarize the knowledge about nitroglycerin paste in cosmetic surgery and to understand its current use in the treatment of vascular compromise after soft tissue augmentation. To review the mechanism of action of nitroglycerin, examine its utility in the dermal vasculature in the setting of dermal filler-induced ischemia, and describe the facial anatomy danger zones in order to avoid vascular injury.\n\n\nMETHODS\nA literature review was conducted to examine the mechanism of action of nitroglycerin, and a treatment algorithm was proposed from clinical observations to define strategies for impending facial necrosis after filler injection.\n\n\nRESULTS AND CONCLUSIONS\nOur experience with nitroglycerin paste and our review of the medical literature supports the use of nitroglycerin paste on the skin to help improve flow in the dermal vasculature because of its vasodilatory effect on small-caliber arterioles."} {"_id": "3e8482443faa2319a6c2c04693402d9b5264bc0a", "title": "Factors contributing to the facial aging of identical twins.", "text": "BACKGROUND\nThe purpose of this study was to identify the environmental factors that contribute to facial aging in identical twins.\n\n\nMETHODS\nDuring the Twins Day Festival in Twinsburg, Ohio, 186 pairs of identical twins completed a comprehensive questionnaire, and digital images were obtained. A panel reviewed the images independently and recorded the differences in the perceived twins' ages and their facial features. The perceived age differences were then correlated with multiple factors.\n\n\nRESULTS\nFour-point higher body mass index was associated with an older appearance in twins younger than age 40 but resulted in a younger appearance after age 40 (p = 0.0001). Eight-point higher body mass index was associated with an older appearance in twins younger than age 55 but was associated with a younger appearance after age 55 (p = 0.0001). The longer the twins smoked, the older they appeared (p < 0.0001).
Increased sun exposure was associated with an older appearance and accelerated with age (p = 0.015), as was a history of outdoor activities and lack of sunscreen use. Twins who used hormone replacement had a younger appearance (p = 0.002). Facial rhytids were more evident in twins with a history of skin cancer (p = 0.05) and in those who smoked (p = 0.005). Dark and patchy skin discoloration was less prevalent in twins with a higher body mass index (p = 0.01) and more common in twins with a history of smoking (p = 0.005) and those with sun exposure (p = 0.005). Hair quantity was better with a higher body mass index (p = 0.01) although worse with a history of skin cancer (p = 0.005) and better with the use of hormones (p = 0.05).\n\n\nCONCLUSION\nThis study offers strong statistical evidence to support the role of some of the known factors that govern facial aging."} {"_id": "5841bf263cfd388a7af631f0f85fc6fa07dca945", "title": "Foreign body granulomas after all injectable dermal fillers: part 2. Treatment options.", "text": "SUMMARY\nForeign body granulomas occur at certain rates with all injectable dermal fillers. They have to be distinguished from early implant nodules, which usually appear 2 to 4 weeks after injection. In general, foreign body granulomas appear after a latent period of several months at all injected sites at the same time. If diagnosed early and treated correctly, they can be diminished within a few weeks. The treatment of choice of this hyperactive granulation tissue is the intralesional injection of corticosteroid crystals (triamcinolone, betamethasone, or prednisolone), which may be repeated in 4-week cycles until the right dose is found. To lower the risk of skin atrophy, corticosteroids can be combined with antimitotic drugs such as 5-fluorouracil and pulsed lasers. Because foreign body granulomas grow fingerlike into the surrounding tissue, surgical excision should be the last option. Surgery or drainage is indicated to treat normal lumps and cystic foreign body granulomas with little tissue ingrowth. In most patients, a foreign body granuloma is a single event during a lifetime, often triggered by a systemic bacterial infection."} {"_id": "b6626537ce41ccd7b8442799eb4c33576f1ce482", "title": "Aging changes of the midfacial fat compartments: a computed tomographic study.", "text": "BACKGROUND\nThe restoration of a natural volume distribution is a major goal in facial rejuvenation. The aims of this study were to establish a radiographic method enabling effective measurements of the midfacial fat compartments and to compare the anatomy between human cadavers of younger versus older age.\n\n\nMETHODS\nData from computed tomographic scans of 12 nonfixed cadaver heads, divided into two age groups (group 1, 54 to 75 years, n = 6; and group 2, 75 to 104 years, n = 6), were analyzed. For evaluation of the volume distribution within a specific compartment, the sagittal diameter of the upper, middle, and lower thirds of each compartment was determined. For evaluation of a \"sagging\" of the compartments, the distance between the cephalad border and the infraorbital rim was determined.\n\n\nRESULTS\nComputed tomography enables a reproducible depiction of the facial fat compartments and reveals aging changes. The distance between the fat compartments and the infraorbital rim was higher in group 2 compared with group 1. 
The sagittal diameter of the lower third of the compartments was higher, and the sagittal diameter of the upper third was smaller in group 2 compared with group 1. The buccal extension of the buccal fat pad was shown to be an independent, separate compartment.\n\n\nCONCLUSIONS\nThis study demonstrates an inferior migration of the midfacial fat compartments and an inferior volume shift within the compartments during aging. Additional distinct compartment-specific changes (e.g., volume loss of the deep medial cheek fat and buccal extension of the buccal fat pad) contribute to the appearance of the aged face."} {"_id": "c668bf08ccce6f68ad2a285727dbfb7433f3cfa7", "title": "Validated assessment scales for the lower face.", "text": "BACKGROUND\nAging in the lower face leads to lines, wrinkles, depression of the corners of the mouth, and changes in lip volume and lip shape, with increased sagging of the skin of the jawline. Refined, easy-to-use, validated, objective standards assessing the severity of these changes are required in clinical research and practice.\n\n\nOBJECTIVE\nTo establish the reliability of eight lower face scales assessing nasolabial folds, marionette lines, upper and lower lip fullness, lip wrinkles (at rest and dynamic), the oral commissure and jawline, aesthetic areas, and the lower face unit.\n\n\nMETHODS AND MATERIALS\nFour 5-point rating scales were developed to objectively assess upper and lower lip wrinkles, oral commissures, and the jawline. Twelve experts rated identical lower face photographs of 50 subjects in two separate rating cycles using eight 5-point scales. Inter- and intrarater reliability of responses was assessed.\n\n\nRESULTS\nInterrater reliability was substantial or almost perfect for all lower face scales, aesthetic areas, and the lower face unit. Intrarater reliability was high for all scales, areas and the lower face unit.\n\n\nCONCLUSION\nOur rating scales are reliable tools for valid and reproducible assessment of the aging process in lower face areas."} {"_id": "67d72dfcb30b466718172610315dbb7b7655b412", "title": "Millimetre-wave T-shaped antenna with defected ground structures for 5G wireless networks", "text": "This paper presents a T-shaped antenna for millimetre-wave (MMW) frequencies that offers a number of advantages, including a simple structure, high operating bandwidth, and high gain. Defected ground structures (DGS) have been symmetrically added in the ground in order to produce multiple resonating bands, accompanied by a partial ground plane to achieve continuous operating bandwidth. The antenna consists of a T-shaped radiating patch with a coplanar waveguide (CPW) feed. The bottom part has a partial ground plane loaded with five symmetrical split-ring slots. Measured results of the antenna prototype show a wide bandwidth of 25.1-37.5 GHz. Moreover, simulations show a peak gain of 9.86 dBi at 36.8 GHz and an efficiency higher than 80% over the complete range of operation. The proposed antenna is considered as a potential candidate for the 5G wireless networks and applications."} {"_id": "acdbb55beedf3db02bfc16e254c7d8759c64758f", "title": "Design and implementation of full bridge bidirectional isolated DC-DC converter for high power applications", "text": "This paper proposes the design and implementation of a high power full bridge bidirectional isolated DC-DC converter (BIDC) which comprises two symmetrical voltage source converters and a high frequency transformer. 
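Dual-bridge isolated converters of the kind just introduced are commonly analyzed with the ideal phase-shift power-transfer relation P(phi) = n*V1*V2*phi*(pi - |phi|) / (2*pi^2*fs*L). The Python sketch below evaluates it; the 300 V, 20 kHz operating point echoes the prototype ratings quoted later in the abstract, while the turns ratio and leakage inductance are hypothetical.

```python
import math

def sps_power(v1, v2, n, fs, leakage_h, phi_rad):
    """Ideal single phase-shift power transfer of a dual-bridge isolated
    DC-DC converter; phi_rad is the inter-bridge phase shift in radians."""
    return n * v1 * v2 * phi_rad * (math.pi - abs(phi_rad)) / (
        2 * math.pi**2 * fs * leakage_h)

# Hypothetical design point: 300 V bridges, 20 kHz, 230 uH leakage, n = 1.
for phi_deg in (15, 30, 45, 60, 90):
    p = sps_power(300.0, 300.0, 1.0, 20e3, 230e-6, math.radians(phi_deg))
    print(f"phi = {phi_deg:2d} deg -> P = {p:7.1f} W")  # ~2.4 kW at 90 deg
```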
In the proposed BIDC, a well-known PI controller based single phase-shift (SPS) modulation technique is used in order to achieve high power transfer. Besides, different phase shift methods such as extended phase-shift (EPS) and double phase-shift (DPS) are compared with SPS. Both simulation and experimental results are carried out to verify the PI controller based single phase-shift controlled BIDC prototype, which is designed for 300 V, 2.4 kW operation at 20 kHz."} {"_id": "a430998dd1a9a93ff5d322bd69944432cfd2769b", "title": "Ubii: Physical World Interaction Through Augmented Reality", "text": "We describe a new set of interaction techniques that allow users to interact with physical objects through augmented reality (AR). Previously, to operate a smart device, physical touch is generally needed and a graphical interface is normally involved. These become limitations and prevent the user from operating a device out of reach or operating multiple devices at once. Ubii (Ubiquitous interface and interaction) is an integrated interface system that connects a network of smart devices together, and allows users to interact with the physical objects using hand gestures. The user wears a smart glass which displays the user interface in an augmented reality view. Hand gestures are captured by the smart glass, and upon recognizing the right gesture input, Ubii will communicate with the connected smart devices to complete the designated operations. Ubii supports common inter-device operations such as file transfer, printing, projecting, as well as device pairing. To improve the overall performance of the system, we implement computation offloading to perform the image processing computation. Our user test shows that Ubii is easy to use and more intuitive than traditional user interfaces. Ubii shortens the operation time on various tasks involving operating physical devices. The novel interaction paradigm attains a seamless interaction between the physical and digital worlds."} {"_id": "6af930eebc0e426c97a245d94d9dbf1177719258", "title": "Early androgen exposure and human gender development", "text": "During early development, testosterone plays an important role in sexual differentiation of the mammalian brain and has enduring influences on behavior. Testosterone exerts these influences at times when the testes are active, as evidenced by higher concentrations of testosterone in developing male than in developing female animals. This article critically reviews the available evidence regarding influences of testosterone on human gender-related development. In humans, testosterone is elevated in males from about weeks 8 to 24 of gestation and then again during early postnatal development. Individuals exposed to atypical concentrations of testosterone or other androgenic hormones prenatally, for example, because of genetic conditions or because their mothers were prescribed hormones during pregnancy, have been consistently found to show increased male-typical juvenile play behavior, alterations in sexual orientation and gender identity (the sense of self as male or female), and increased tendencies to engage in physically aggressive behavior. Studies of other behavioral outcomes following dramatic androgen abnormality prenatally are either too small in their numbers or too inconsistent in their results to provide similarly conclusive evidence. 
Studies relating normal variability in testosterone prenatally to subsequent gender-related behavior have produced largely inconsistent results or have yet to be independently replicated. For studies of prenatal exposures in typically developing individuals, testosterone has been measured in single samples of maternal blood or amniotic fluid. These techniques may not be sufficiently powerful to consistently detect influences of testosterone on behavior, particularly in the relatively small samples that have generally been studied. The postnatal surge in testosterone in male infants, sometimes called mini-puberty, may provide a more accessible opportunity for measuring early androgen exposure during typical development. This approach has recently begun to be used, with some promising results relating testosterone during the first few months of postnatal life to later gender-typical play behavior. In replicating and extending these findings, it may be important to assess testosterone when it is maximal (months 1 to 2 postnatal) and to take advantage of the increased reliability afforded by repeated sampling."} {"_id": "eee3851d8889a9267f5ddc6ef5b2e24ed37ca4f0", "title": "Effect of Manipulated Amplitude and Frequency of Human Voice on Dominance and Persuasiveness in Audio Conferences", "text": "We propose to artificially manipulate participants' vocal cues, amplitude and frequency, in real time to adjust their dominance and persuasiveness during audio conferences. We implemented a prototype system and conducted two experiments. The first experiment investigated the effect of vocal cue manipulation on the perception of dominance. The results showed that participants perceived higher dominance while listening to a voice with a high amplitude and low frequency. The second experiment investigated the effect of vocal cue manipulation on persuasiveness. The results indicated that a person with a low amplitude and low frequency voice had greater persuasiveness in conversations with biased dominance, while there was no statistically significant difference in persuasiveness in conversations with unbiased dominance."} {"_id": "bdeb23105ed6c419890cc86b4d72d1431bb0fe51", "title": "A Theoretical Framework for Serious Game Design: Exploring Pedagogy, Play and Fidelity and their Implications for the Design Process", "text": "It is widely acknowledged that digital games can provide an engaging, motivating and \u201cfun\u201d experience for students. However an entertaining game does not necessarily constitute a meaningful, valuable learning experience. For this reason, experts espouse the importance of underpinning serious games with a sound theoretical framework which integrates and balances theories from two fields of practice: pedagogy and game design (Kiili, 2005; Seeney & Routledge, 2009). Additionally, with the advent of sophisticated, immersive technologies, and increasing interest in the opportunities for constructivist learning offered by these technologies, concepts of fidelity and its impact on student learning and engagement, have emerged (Aldrich, 2005; Harteveld et al., 2007, 2010). This paper will explore a triadic theoretical framework for serious game design comprising play, pedagogy and fidelity. It will outline underpinning theories, review key literatures and identify challenges and issues involved in balancing these elements in the process of serious game design. 
"} {"_id": "0a191b2cecb32969feea6b9db5a4a58f9a0eb456", "title": "Design and evaluation of a wide-area event notification service", "text": "The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i.e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service's clients can quickly overwhelm a centralized solution. The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service's interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used to optimize performance. 
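The selection service described here hinges on deciding which notifications match which subscriptions. Below is a toy content-based matcher in that spirit; the operator set and data model are illustrative stand-ins, not SIENA's actual filter language.

```python
# A toy content-based matcher: a subscription is a conjunction of
# (attribute, operator, value) filters evaluated against a notification.

OPS = {
    "=":      lambda a, b: a == b,
    "<":      lambda a, b: a < b,
    ">":      lambda a, b: a > b,
    "prefix": lambda a, b: isinstance(a, str) and a.startswith(b),
}

def matches(notification: dict, subscription: list) -> bool:
    """True iff every filter in the subscription holds for the notification."""
    return all(
        attr in notification and OPS[op](notification[attr], value)
        for attr, op, value in subscription
    )

sub = [("class", "prefix", "finance/"), ("price", ">", 100.0)]
event = {"class": "finance/stock", "symbol": "XYZ", "price": 101.5}
print(matches(event, sub))  # True -> this event would be routed to the subscriber
```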
We also present results of simulation studies that examine the scalability and performance of the service."} {"_id": "3199194fec2eaa6be1453f280c6392e3ce94e37a", "title": "Comparing Children's Crosshair and Finger Interactions in Handheld Augmented Reality: Relationships Between Usability and Child Development", "text": "Augmented reality technology is a unique medium that can be helpful for young children's entertainment and education, but in order to achieve the benefits of this technology, augmented reality experiences need to be appropriately designed for young children's developing physical and cognitive skills. In the present study we investigated how 5-10 year-old children react to typical handheld augmented reality interaction techniques such as crosshair selection and finger selection, in AR environments that require them to change perspective or not. Our analysis shows significant impacts of age upon AR performance, with young children having slower selection times, more tracking losses, and taking longer to recover tracking. Significant differences were also found between AR interaction technique conditions, with finger selection being faster than crosshair selection, and interactions which required changes in perspective taking longer, generating more tracking losses, and more errors in selection. Furthermore, by analyzing children's performance in relation to metrics of physical and cognitive development, we identified correlations between AR interaction techniques performance and developmental tests of spatial relations, block construction and visuomotor precision. Gender differences were analyzed but no significant effects were detected."} {"_id": "ba459a7bd17d8af7642dd0c0ddf9e897dff3c4a8", "title": "Do citations and readership identify seminal publications?", "text": "This work presents a new approach for analysing the ability of existing research metrics to identify research which has strongly influenced future developments. More specifically, we focus on the ability of citation counts and Mendeley reader counts to distinguish between publications regarded as seminal and publications regarded as literature reviews by field experts. The main motivation behind our research is to gain a better understanding of whether and how well the existing research metrics relate to research quality. For this experiment we have created a new dataset which we call TrueImpactDataset and which contains two types of publications, seminal papers and literature reviews. Using the dataset, we conduct a set of experiments to study how citation and reader counts perform in distinguishing these publication types, following the intuition that causing a change in a field signifies research quality. Our research shows that citation counts work better than a random baseline (by a margin of 10%) in distinguishing important seminal research papers from literature reviews while Mendeley reader counts do not work better than the baseline."} {"_id": "9182452cd181022034cc810c263d397fbc3dd75d", "title": "Successful failure: what Foucault can teach us about privacy self-management in a world of Facebook and big data", "text": "The \u201cprivacy paradox\u201d refers to the discrepancy between the concern individuals express for their privacy and the apparently low value they actually assign to it when they readily trade personal information for low-value goods online. 
In this paper, I argue that the privacy paradox masks a more important paradox: the self-management model of privacy embedded in notice-and-consent pages on websites and other, analogous practices can be readily shown to underprotect privacy, even in the economic terms favored by its advocates. The real question, then, is why privacy self-management occupies such a prominent position in privacy law and regulation. Borrowing from Foucault\u2019s late writings, I argue that this failure to protect privacy is also a success in ethical subject formation, as it actively pushes privacy norms and practices in a neoliberal direction. In other words, privacy self-management isn\u2019t about protecting people\u2019s privacy; it\u2019s about inculcating the idea that privacy is an individual, commodified good that can be traded for other market goods. Along the way, the self-management regime forces privacy into the market, obstructs the functioning of other, more social, understandings of privacy, and occludes the various ways that individuals attempt to resist adopting the market-based view of themselves and their privacy. Throughout, I use the analytics practices of Facebook and social networking sites as a sustained case study of the point."} {"_id": "43310a0a428a222ca3cc34370c9494bfa97528e4", "title": "Always-ON visual node with a hardware-software event-based binarized neural network inference engine", "text": "This work introduces an ultra-low-power visual sensor node coupling event-based binary acquisition with Binarized Neural Networks (BNNs) to deal with the stringent power requirements of always-on vision systems for IoT applications. By exploiting in-sensor mixed-signal processing, an ultra-low-power imager generates a sparse visual signal of binary spatial-gradient features. The sensor output, packed as a stream of events corresponding to the asserted gradient binary values, is transferred to a 4-core processor when the amount of data detected after frame difference surpasses a given threshold. Then, a BNN trained with binary gradients as input runs on the parallel processor if a meaningful activity is detected in a pre-processing stage. During the BNN computation, the proposed Event-based Binarized Neural Network model achieves a system energy saving of 17.8% with respect to a baseline system including a low-power RGB imager and a Binarized Neural Network, at the cost of a classification performance drop of only 3% in a real-life 3-class classification scenario. The energy reduction increases up to 8x when considering a long-term always-on monitoring scenario, thanks to the event-driven behavior of the processing sub-system."} {"_id": "4954bb26107d69eb79bb32ffa247c8731cf20fcf", "title": "Improving privacy and security in multi-authority attribute-based encryption", "text": "Attribute based encryption (ABE) [13] determines decryption ability based on a user's attributes. In a multi-authority ABE scheme, multiple attribute-authorities monitor different sets of attributes and issue corresponding decryption keys to users, and encryptors can require that a user obtain keys for appropriate attributes from each authority before decrypting a message. Chase [5] gave a multi-authority ABE scheme using the concepts of a trusted central authority (CA) and global identifiers (GID). However, the CA in that construction has the power to decrypt every ciphertext, which seems somehow contradictory to the original goal of distributing control over many potentially untrusted authorities. 
Moreover, in that construction, the use of a consistent GID allowed the authorities to combine their information to build a full profile with all of a user's attributes, which unnecessarily compromises the privacy of the user. In this paper, we propose a solution which removes the trusted central authority, and protects the users' privacy by preventing the authorities from pooling their information on particular users, thus making ABE more usable in practice."} {"_id": "b27e8000ef007af8f6b51d597f13c02e7c7c2a0f", "title": "Multigate-Cell Stacked FET Design for Millimeter-Wave CMOS Power Amplifiers", "text": "To increase the voltage handling capability of scaled CMOS-based circuits, series connection (stacking) of transistors has been demonstrated in recently reported mm-wave power amplifiers. This paper discusses the implementation of stacked CMOS circuits employing a compact, multigate layout technique, rather than the conventional series connection of individual transistors. A unit multigate FET is composed of a single transistor with one source and drain and multiple (four) gate connections. Capacitances are implemented in a distributed manner allowing close proximity to the individual gate fingers using metal layers available within the CMOS back-end-of-line (BEOL) stack. The multigate structure is demonstrated to decrease parasitic resistances and capacitances, and has better layout for heat-sinking. The unit cell is replicated through tiling to implement larger effective gate widths. Millimeter-wave power amplifiers using the multigate-cell are presented which operate over the 25-35 GHz band and achieve 300 mW of saturated output power and peak power-added efficiency (PAE) of 30% in 45 nm CMOS SOI technology. To the authors' knowledge, the output power is the best reported for a single stage CMOS power amplifier that does not use power-combining for this frequency range."} {"_id": "33b8063aac5715591c80a38a69f5b0619fbc041d", "title": "Application of Requirement-oriented Data Quality Evaluation Method", "text": "Data quality directly determines the value of data for further analysis and other applications. Because of its importance, data is a major asset in software applications and information systems, and obtaining and ensuring high-quality data is critical to the software and related business operations. At present, there are many methods and techniques for data quality assessment or evaluation. Building on established data quality evaluation criteria, this paper starts from the needs of the application software and uses those requirements as a guide in the evaluation process, defining evaluation criteria based on user-defined requirements. The proposed method has been applied in a practical case and achieved good practical results."} {"_id": "3a01bdcd4bb19151d326bff1c84561ea0b6c757e", "title": "Real-time neuroevolution in the NERO video game", "text": "In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet, if game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. This paper introduces the real-time Neuroevolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. 
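A minimal sketch of the continuous replacement cycle rtNEAT performs while the game runs: every few ticks, remove the worst agent and replace it with a mutated offspring of two elite parents. The flat genomes and operators below are simplified stand-ins for NEAT's topology-aware encoding and crossover, not the published algorithm itself.

```python
import random

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.1, scale=0.5):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def game_tick(agent):
    # Placeholder evaluation: reward genomes close to the origin.
    agent["fitness"] -= sum(g * g for g in agent["genome"])

population = [{"genome": [random.gauss(0, 1) for _ in range(8)], "fitness": 0.0}
              for _ in range(20)]

REPLACE_EVERY = 50                   # ticks between replacements
for tick in range(1, 1001):
    for agent in population:
        game_tick(agent)             # agents keep playing the whole time
    if tick % REPLACE_EVERY == 0:
        population.sort(key=lambda a: a["fitness"])
        elite = population[len(population) // 2:]          # better half
        p1, p2 = random.sample(elite, 2)
        child = mutate(crossover(p1["genome"], p2["genome"]))
        population[0] = {"genome": child, "fitness": 0.0}  # replace the worst
        for agent in population:
            agent["fitness"] = 0.0                         # fresh evaluation window

print(min(sum(g * g for g in a["genome"]) for a in population))
```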
In fact, rtNEAT makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. To demonstrate this concept, the Neuroevolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. This paper describes results from this novel application of machine learning, and demonstrates that rtNEAT makes possible video games like NERO where agents evolve and adapt in real time. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games."} {"_id": "d6487d4a4c8433c7636a490a6630efb961bbfd13", "title": "Concept Mining via Embedding", "text": "In this work, we study the problem of concept mining, which serves as the first step in transforming unstructured text into structured information, and supports downstream analytical tasks such as information extraction, organization, recommendation and search. Previous work mainly relies on statistical signals, existing knowledge bases, or predefined linguistic patterns. In this work, we propose a novel approach that mines concepts based on their occurrence contexts, by learning embedding vector representations that summarize the context information for each possible candidate, and use these embeddings to evaluate each concept's global quality and its fitness to each local context. Experiments over several real-world corpora demonstrate the superior performance of our method. A publicly available implementation is provided at https://github.com/kleeeeea/ECON."} {"_id": "c5fa3cf216af23b8e88803750ff8e040bb8a2d1f", "title": "WhatsApp Usage Patterns and Prediction Models", "text": "This paper presents an extensive study of the usage of the WhatsApp social network, an Internet messaging application that is quickly replacing SMS messaging. It is based on the analysis of over 4 million messages from nearly 100 users that we collected in order to understand people\u2019s use of the network. We believe that this is the first in-depth study of the properties of WhatsApp messages with an emphasis on noting differences across different age and gender demographic groups. It is also the first to use statistical and data analytic tools in this analysis. We found that different genders and age demographics had significantly different usage habits in almost all message and group attributes. These differences facilitate the development of user prediction models based on data mining tools. We illustrate this by developing several prediction models, such as models for a person\u2019s gender or approximate age. We also noted differences in users\u2019 group behavior. We created group behavioral models including the likelihood that a given group would have more file attachments, a larger number of participants, a higher frequency of activity, quicker response times, and shorter messages. We present a detailed discussion about the specific attributes that were contained in all predictive models and suggest possible applications based on these results."} {"_id": "12a3b69936d3d2e618c8068970cbbe0e8da101ac", "title": "A data mining framework for detecting subscription fraud in telecommunication", "text": "Service providing companies including telecommunication companies often receive substantial damage from customers\u2019 fraudulent behaviors. 
One of the common types of fraud is subscription fraud in which usage type is in contradiction with subscription type. This study aimed at identifying customers\u2019 subscription fraud by employing data mining techniques and adopting a knowledge discovery process. To this end, a hybrid approach consisting of preprocessing, clustering, and classification phases was applied, and appropriate tools were employed commensurate with each phase. Specifically, in the clustering phase SOM and K-means were combined, and in the classification phase decision tree (C4.5), neural networks, and support vector machines as single classifiers and bagging, boosting, stacking, majority and consensus voting as ensembles were examined. In addition to using clustering to identify outlier cases, it was also possible \u2013 by defining new features \u2013 to maintain the results of the clustering phase for the classification phase. This, in turn, contributed to better classification results. A real dataset provided by Telecommunication Company of Tehran was used to demonstrate the effectiveness of the proposed method. The efficient use of synergy among these techniques significantly increased prediction accuracy. The performance of all single and ensemble classifiers is evaluated based on various metrics and compared by statistical tests. The results showed that support vector machines among single classifiers and boosted trees among all classifiers have the best performance in terms of various metrics. The research findings show that the proposed model has a high accuracy, and the resulting outcomes are significant both theoretically and practically. \u00a9 2010 Elsevier Ltd. All rights reserved."} {"_id": "948b79a0b533917db575cd13d52732a5229d9500", "title": "FastSpMM: An Efficient Library for Sparse Matrix Matrix Product on GPUs", "text": "Sparse matrix matrix (SpMM) multiplication is involved in a wide range of scientific and technical applications. The computational requirements for this kind of operation are enormous, especially for large matrices. This paper analyzes and evaluates a method to efficiently compute the SpMM product in a computing environment that includes graphics processing units (GPUs). Some libraries to compute this matrix operation can be found in the literature. However, our strategy (FastSpMM) outperforms the existing approaches because it combines the use of the ELLPACK-R storage format with the exploitation of the high ratio computation/memory access of the SpMM operation and the overlapping of CPU\u2013GPU communications/computations by Compute Unified Device Architecture streaming computation. In this work, FastSpMM is described and its performance evaluated with regard to the CUSPARSE library (supplied by NVIDIA), which also includes routines to compute SpMM on GPUs. Experimental evaluations based on a representative set of test matrices show that, in terms of performance, FastSpMM outperforms the CUSPARSE routine as well as the implementation of the SpMM as a set of sparse matrix vector products."} {"_id": "892144cf4cddf38a272015e261bca984dca7f4b0", "title": "Knowledge Management: An Introduction and Perspective", "text": "In the mid-1980s, individuals and organizations began to appreciate the increasingly important role of knowledge in the emerging competitive environment. International competition was changing to increasingly emphasize product and service quality, responsiveness, diversity and customization. 
Some organizations, such as US-based Chaparral Steel, had been pursuing a knowledge focus for some years, but during this period it started to become a more wide-spread business concern. These notions appeared in many places throughout the world \u2013 almost simultaneously in the way bubbles appear in a kettle of superheated water! Over a brief period from 1986 to 1989, numerous reports appeared in the public domain concerning how to manage knowledge explicitly. There were studies, results of corporate efforts, and conferences on the topic."} {"_id": "25098861749fe9eab62fbe90c1ebeaed58c211bb", "title": "Boosting as a Regularized Path to a Maximum Margin Classifier", "text": "In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an l1 constraint on the coefficient vector. This helps understand the success of boosting with early stopping as regularized fitting of the loss criterion. For the two most commonly used criteria (exponential and binomial log-likelihood), we further show that as the constraint is relaxed\u2014or equivalently as the boosting iterations proceed\u2014the solution converges (in the separable case) to an \u201cl1-optimal\u201d separating hyper-plane. We prove that this l1-optimal separating hyper-plane has the property of maximizing the minimal l1-margin of the training data, as defined in the boosting literature. An interesting fundamental similarity between boosting and kernel support vector machines emerges, as both can be described as methods for regularized optimization in high-dimensional predictor space, using a computational trick to make the calculation practical, and converging to margin-maximizing solutions. While this statement describes SVMs exactly, it applies to boosting only approximately."} {"_id": "3baddc440617ce202fd190b32b1d73f1bb14561d", "title": "Fault Detection, Isolation, and Service Restoration in Distribution Systems: State-of-the-Art and Future Trends", "text": "This paper surveys the conceptual aspects, as well as recent developments in fault detection, isolation, and service restoration (FDIR) following an outage in an electric distribution system. This paper starts with a discussion of the rationale for FDIR, and then investigates different areas of the FDIR problem. Recently reported approaches are compared and related to discussions on current practices. This paper then addresses some of the often-cited associated technical, environmental, and economic challenges of implementing self-healing for the distribution grid. The review concludes by pointing toward the need and directions for future research."} {"_id": "51a3e60fa624ffeb2d75f6adaffe58e4646ce366", "title": "Exploiting potential citation papers in scholarly paper recommendation", "text": "To help generate relevant suggestions for researchers, recommendation systems have started to leverage the latent interests in the publication profiles of the researchers themselves. While using such a publication citation network has been shown to enhance performance, the network is often sparse, making recommendation difficult. To alleviate this sparsity, we identify \"potential citation papers\" through the use of collaborative filtering. 
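One way to read "potential citation papers" is as an item-based collaborative-filtering imputation over a sparse paper-by-reference matrix. The following sketch shows that idea with cosine similarities; the toy matrix and the exact weighting are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

# Rows: a researcher's papers; columns: candidate references (toy data).
R = np.array([
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
], dtype=float)

def cosine_sim(M):
    """Column-to-column cosine similarity of a nonnegative matrix."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    N = M / norms
    return N.T @ N

S = cosine_sim(R)            # reference-to-reference similarity
scores = R @ S               # imputed affinity of each paper to each reference
scores[R > 0] = -np.inf      # mask references that are already cited

print(np.argmax(scores, axis=1))  # top "potential citation" per paper
```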
Also, as different logical sections of a paper have different significance, as a secondary contribution, we investigate which sections of papers can be leveraged to represent papers effectively.\n On a scholarly paper recommendation dataset, we show that recommendation accuracy significantly improves over state-of-the-art recommendation baselines as measured by nDCG and MRR, when we discover potential citation papers using imputed similarities via collaborative filtering and represent candidate papers using both the full text and assigning more weight to the conclusion sections."} {"_id": "04a8c5d8535fc58e5ad55dd3f9288bc78567d0c4", "title": "A COMPARISON OF ENTERPRISE ARCHITECTURE FRAMEWORKS", "text": "An Enterprise Architecture Framework (EAF) maps all of the software development processes within the enterprise and how they relate and interact to fulfill the enterprise\u2019s mission. It provides organizations with the ability to understand and analyze weaknesses or inconsistencies so that they can be identified and addressed. There are a number of already established EAF in use today; some of these frameworks were developed for very specific areas, whereas others have broader functionality. This study provides a comparison of several frameworks that can then be used for guidance in the selection of an EAF that meets the needed criteria."} {"_id": "0825788b9b5a18e3dfea5b0af123b5e939a4f564", "title": "Glove: Global Vectors for Word Representation", "text": "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition."} {"_id": "0826c98d1b1513aa2f45e6654bb5075a58b64649", "title": "Linguistic Regularities in Continuous Space Word Representations", "text": "\u2022 Neural network language model and distributed representation for words (Vector representation) \u2022 Capture syntactic and semantic regularities in language \u2022 Outperform state-of-the-art"} {"_id": "307e3d7d5857942d7d2c9d97f7437777535487e0", "title": "Universal Adversarial Perturbations", "text": "Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. 
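The accumulate-and-project loop behind such universal perturbations can be summarized as below. Here `classify` and `minimal_step` are placeholders for a real network and a per-image attack (DeepFool in the original work); the toy linear classifier exists only to make the sketch executable.

```python
import numpy as np

def project_linf(v, eps):
    """Keep the universal perturbation inside an l_inf ball of radius eps."""
    return np.clip(v, -eps, eps)

def universal_perturbation(images, classify, minimal_step, eps=0.05, passes=5):
    v = np.zeros_like(images[0])
    for _ in range(passes):
        for x in images:
            if classify(x + v) == classify(x):      # v does not fool x yet
                dv = minimal_step(x + v, classify)  # smallest extra push
                v = project_linf(v + dv, eps)
    return v

# Toy usage with a linear "classifier"; real use plugs in a deep network.
w = np.array([1.0, -2.0, 0.5])
classify = lambda x: int(x @ w > 0)
minimal_step = lambda x, c: (-1 if c(x) else 1) * 0.01 * np.sign(w)
images = [np.random.randn(3) for _ in range(10)]
print(universal_perturbation(images, classify, minimal_step))
```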
The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images."} {"_id": "326cfa1ffff97bd923bb6ff58d9cb6a3f60edbe5", "title": "The Earth Mover's Distance as a Metric for Image Retrieval", "text": "We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances."} {"_id": "83a6cacc126d85c45605797406262677c256a6af", "title": "Software Framework for Topic Modelling with Large Corpora", "text": "Large corpora are ubiquitous in today\u2019s world and memory quickly becomes the limiting factor in practical applications of the Vector Space Model (VSM). In this paper, we identify a gap in existing implementations of many of the popular algorithms, which is their scalability and ease of use. We describe a Natural Language Processing software framework which is based on the idea of document streaming, i.e. processing corpora document after document, in a memory independent fashion. Within this framework, we implement several popular algorithms for topical inference, including Latent Semantic Analysis and Latent Dirichlet Allocation, in a way that makes them completely independent of the training corpus size. Particular emphasis is placed on straightforward and intuitive framework design, so that modifications and extensions of the methods and/or their application by interested practitioners are effortless. We demonstrate the usefulness of our approach on a real-world scenario of computing document similarities within an existing digital library DML-CZ."} {"_id": "cd2b8ac987b3254dd31835afa185ab1962e62aa6", "title": "Dual-Resonance NFC Antenna System Based on NFC Chip Antenna", "text": "In this letter, to enhance the performance of the near-field communication (NFC) antenna, an antenna system that has dual resonance is proposed. The antenna is based on NFC chip antenna, and the dual resonance comes from the chip antenna itself, the eddy current on printed circuit board, and a nearby loop that has strong coupling with the chip antenna. 
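For intuition on where such a dual resonance can come from: two identical LC resonators coupled with coefficient k split their shared resonance f0 into two modes at roughly f0/sqrt(1 +/- k). The component values below are illustrative only, chosen to land near the 13.56 MHz NFC carrier; they are not taken from the paper.

```python
import math

L, C, k = 1.2e-6, 115e-12, 0.2   # henries, farads, coupling coefficient
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
f_low = f0 / math.sqrt(1 + k)    # in-phase mode of the coupled pair
f_high = f0 / math.sqrt(1 - k)   # anti-phase mode
print(f"f0 = {f0/1e6:.2f} MHz splits into {f_low/1e6:.2f} and {f_high/1e6:.2f} MHz")
```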
The performance of the proposed antenna system is confirmed by measurement."} {"_id": "c7ceb4200c1875142edb1664628abb4a9ccd0674", "title": "Explaining the \u201cIdentifiable Victim Effect\u201d", "text": "It is widely believed that people are willing to expend greater resources to save the lives of identified victims than to save equal numbers of unidentified or statistical victims. There are many possible causes of this disparity which have not been enumerated previously or tested empirically. We discuss four possible causes of the \u201cidentifiable victim effect\u201d and present the results of two studies which indicate that the most important cause of the disparity in treatment of identifiable and statistical lives is that, for identifiable victims, a high proportion of those at risk can be saved."} {"_id": "52a97f714547e62f3ad59c979dcfa2f6733038e8", "title": "Efficiency optimization of half-bridge series resonant inverter with asymmetrical duty cycle control for domestic induction heating", "text": "In this paper, a method to improve efficiency in a half-bridge series resonant inverter applied to domestic induction heating is presented. The low and medium output powers required for this application imply the use of higher switching frequencies, which leads to an efficiency decrease. An asymmetrical duty cycle (ADC) modulation scheme is proposed to improve efficiency due to its switching frequency reduction and absence of additional hardware requirements. Study methodology comprises, in a first step, a theoretical analysis of power balance as a function of control parameters: duty cycle and switching frequency. In addition, restrictions due to snubber and dead time, and variability of the induction loads have been considered. Afterwards, an efficiency analysis has been carried out to determine the optimum operation point. Switching and conduction losses have been calculated to examine the global importance of each one for different switching devices. ADC modulation efficiency improvement is achieved by means of a switching frequency reduction, mainly in the low-medium power range and for low quality factor (Q) loads. The analytical results obtained with this study have been validated through an induction heating test-bench. A discrete 3-kW RL load has been designed to emulate a typical induction heating load. Then, a commercial induction heating inverter is used to evaluate the ADC modulation scheme."} {"_id": "8038a2e6da256556664b21401aed77079160c8b1", "title": "Vehicle Detection and Compass Applications using AMR Magnetic Sensors", "text": "The earliest magnetic field detectors allowed navigation over trackless oceans by sensing the earth\u2019s magnetic poles. Magnetic field sensing has vastly expanded as industry has adapted a variety of magnetic sensors to detect the presence, strength, or direction of magnetic fields not only from the earth, but also from permanent magnets, magnetized soft magnets, vehicle disturbances, brain wave activity, and fields generated from electric currents. Magnetic sensors can measure these properties without physical contact and have become the eyes of many industrial and navigation control systems. This paper will describe the current state of magnetic sensing within the earth\u2019s field range and how these sensors are applied. 
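A common reading of AMR-based vehicle detection is baseline tracking plus a disturbance threshold on the measured field: a parked sensor follows the slowly varying Earth's-field baseline and flags a vehicle when the local ferrous disturbance exceeds a margin. The readings and threshold below are synthetic; real deployments tune both per site.

```python
def detect_vehicles(samples, threshold_mg=15.0, alpha=0.01):
    """Return sample indices where a ferrous disturbance exceeds the threshold.
    `samples` are field magnitudes in milligauss (synthetic here)."""
    baseline = samples[0]
    events = []
    for i, b in enumerate(samples):
        if abs(b - baseline) > threshold_mg:
            events.append(i)                     # vehicle-sized disturbance
        else:
            baseline += alpha * (b - baseline)   # slow baseline tracking
    return events

quiet = [510.0 + 0.5 * (i % 3) for i in range(20)]
passing = [510.0 + d for d in (5, 30, 80, 120, 60, 10)]
print(detect_vehicles(quiet + passing + quiet))
```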
Several applications will be presented for magnetic sensing in systems with emphasis on vehicle detection and navigation based on magnetic fields."} {"_id": "7f77d49b35ed15637d767a13a882ef8d3193772e", "title": "Flying Eyes and Hidden Controllers: A Qualitative Study of People's Privacy Perceptions of Civilian Drones in The US", "text": "Drones are unmanned aircraft controlled remotely or operated autonomously. While the extant literature suggests that drones can in principle invade people\u2019s privacy, little is known about how people actually think about drones. Drawing from a series of in-depth interviews conducted in the United States, we provide a novel and rich account of people\u2019s privacy perceptions of drones for civilian uses both in general and under specific usage scenarios. Our informants raised both physical and information privacy issues against government, organization and individual use of drones. Informants\u2019 reasoning about the acceptance of drone use was in part based on whether the drone is operating in a public or private space. However, our informants differed significantly in their definitions of public and private spaces. While our informants\u2019 privacy concerns such as surveillance, data collection and sharing have been raised for other tracking technologies such as camera phones and closed-circuit television (CCTV), our interviews highlight two heightened issues of drones: (1) powerful yet inconspicuous data collection, (2) hidden and inaccessible drone controllers. These two aspects of drones render some of people\u2019s existing privacy practices futile (e.g., noticing recording and asking controllers to stop or delete the recording). Some informants demanded notifications of drones near them and expected drone controllers to ask for their explicit permissions before recording. We discuss implications for future privacy-enhancing drone designs."} {"_id": "a0041f890b7e1ffef5c3919fd4a7de95c82282d7", "title": "Reduced graphene oxide\u2013silver nanoparticle nanocomposite: a potential anticancer nanotherapy", "text": "BACKGROUND\nGraphene and graphene-based nanocomposites are used in various research areas including sensing, energy storage, and catalysis. The mechanical, thermal, electrical, and biological properties render graphene-based nanocomposites of metallic nanoparticles useful for several biomedical applications. Epithelial ovarian carcinoma is the fifth most deadly cancer in women; most tumors initially respond to chemotherapy, but eventually acquire chemoresistance. Consequently, the development of novel molecules for cancer therapy is essential. This study was designed to develop a simple, non-toxic, environmentally friendly method for the synthesis of reduced graphene oxide-silver (rGO-Ag) nanoparticle nanocomposites using Tilia amurensis plant extracts as reducing and stabilizing agents. The anticancer properties of rGO-Ag were evaluated in ovarian cancer cells.\n\n\nMETHODS\nThe synthesized rGO-Ag nanocomposite was characterized using various analytical techniques. The anticancer properties of the rGO-Ag nanocomposite were evaluated using a series of assays such as cell viability, lactate dehydrogenase leakage, reactive oxygen species generation, cellular levels of malonaldehyde and glutathione, caspase-3 activity, and DNA fragmentation in ovarian cancer cells (A2780).\n\n\nRESULTS\nAgNPs with an average size of 20 nm were uniformly dispersed on graphene sheets. 
The data obtained from the biochemical assays indicate that the rGO-Ag nanocomposite significantly inhibited cell viability in A2780 ovarian cancer cells and increased lactate dehydrogenase leakage, reactive oxygen species generation, caspase-3 activity, and DNA fragmentation compared with other tested nanomaterials such as graphene oxide, rGO, and AgNPs.\n\n\nCONCLUSION\nT. amurensis plant extract-mediated rGO-Ag nanocomposites could facilitate the large-scale production of graphene-based nanocomposites; rGO-Ag showed a significant inhibiting effect on cell viability compared to graphene oxide, rGO, and silver nanoparticles. The nanocomposites could be effective non-toxic therapeutic agents for the treatment of both cancer and cancer stem cells."} {"_id": "932d3ce4de4fe94f8f4d302d208ed6e1c4c930de", "title": "Transfer Learning for Music Classification and Regression Tasks", "text": "In this paper, we present a transfer learning approach for music classification and regression tasks. We propose to use a pre-trained convnet feature, a concatenated feature vector using the activations of feature maps of multiple layers in a trained convolutional network. We show how this convnet feature can serve as a general-purpose music representation. In the experiments, a convnet is trained for music tagging and then transferred to other music-related classification and regression tasks. The convnet feature outperforms the baseline MFCC feature in all the considered tasks and several previous approaches that are aggregating MFCCs as well as low- and high-level music features."} {"_id": "8d71074d6e7421c3f267328cafbba0877892c60d", "title": "Detecting Deceptive Behavior via Integration of Discriminative Features From Multiple Modalities", "text": "Deception detection has received an increasing amount of attention in recent years, due to the significant growth of digital media, as well as increased ethical and security concerns. Earlier approaches to deception detection were mainly focused on law enforcement applications and relied on polygraph tests, which had proved to falsely accuse the innocent and free the guilty in multiple cases. In this paper, we explore a multimodal deception detection approach that relies on a novel data set of 149 multimodal recordings, and integrates multiple physiological, linguistic, and thermal features. We test the system on different domains, to measure its effectiveness and determine its limitations. We also perform feature analysis using a decision tree model, to gain insights into the features that are most effective in detecting deceit. Our experimental results indicate that our multimodal approach is a promising step toward creating a feasible, non-invasive, and fully automated deception detection system."} {"_id": "33960c2f143918e200ebfb5bbd4ae9b8e20cd96c", "title": "Inferring Undiscovered Public Knowledge by Using Text Mining-driven Graph Model", "text": "Due to the recent development of Information Technology, the number of publications is increasing exponentially. In response to the increasing number of publications, there has been a sharp surge in the demand for replacing the existing manual text data processing with automatic text data processing. Swanson proposed the ABC model [1] on top of text mining as a part of literature-based knowledge discovery for finding possible new biomedical hypotheses about three decades ago. Subsequently, clinical scholars proved the effectiveness of the possible hypotheses found by the ABC model [2]. 
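Swanson's ABC pattern can be stated compactly: propose a link between concepts A and C when they never co-occur directly but share intermediate B terms. A toy Python rendering with a fabricated mini-literature (the document sets and terms below are invented for illustration):

```python
# Each "document" is the set of concepts it mentions (fabricated data).
docs = [
    {"raynaud disease", "blood viscosity"},
    {"raynaud disease", "platelet aggregability"},
    {"fish oil", "blood viscosity"},
    {"fish oil", "platelet aggregability"},
    {"fish oil", "triglycerides"},
]

def cooccurring(term):
    """All concepts that co-occur with `term` in at least one document."""
    return {t for d in docs if term in d for t in d} - {term}

def abc_bridges(a, c):
    """Intermediate B terms linking A and C when no direct A-C link exists."""
    if c in cooccurring(a):      # direct link already known -> nothing new
        return set()
    return cooccurring(a) & cooccurring(c)

print(abc_bridges("raynaud disease", "fish oil"))
# -> {'blood viscosity', 'platelet aggregability'}
```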
Such effectiveness led scholars to try various literature-based knowledge discovery approaches [3, 4, 5]. However, their trials are not fully automated but hybrids of automatic and manual processes. The manual process requires the intervention of experts. In addition, their trials consider a single perspective. Even trials involving network theory have difficulties in understanding the entire network structure of the relationships among concepts and in systematically interpreting that structure [6, 7]. Thus, this study proposes a novel approach to discover various relationships by extending the intermediate concept B to a multi-leveled concept. By applying a graph-based path finding method based on co-occurrence and the relational entities among concepts, we attempt to systematically analyze and investigate the relationships between two concepts, a source node and a target node, over the total paths. For the analysis of our study, we set our baseline as the results of Swanson's work [8]. That work suggested blood viscosity, platelet aggregability, and vasoconstriction as the intermediate concepts or terms between Raynaud's disease and fish oils. We compared the intermediate concepts found by our approach with these intermediate concepts of Swanson's. This study provides distinct perspectives for literature-based discovery by not only discovering the meaningful relationship among concepts in biomedical literature through graph-based path inference but also being able to generate feasible new hypotheses."} {"_id": "84404e2e918815ff359c4a534f7bc18b59ebc2f7", "title": "Failure detection and consensus in the crash-recovery model", "text": "Summary. We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover, and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model. We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not. Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice \u2013 those with no failures or failure detector mistakes. In such runs, consensus is achieved within $3 \\delta$ time and with 4n messages, where $\\delta$ is the maximum message delay and n is the number of processes in the system."} {"_id": "390baa992f07450a257694753c6f5e2858586fe0", "title": "Preimages for Step-Reduced SHA-2", "text": "In this paper, we present preimage attacks on up to 43-step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. The time complexities are 2, 2 for finding pseudo-preimages and 2, 2 compression function operations for full preimages. The memory requirements are modest, around 2 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384. 
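The generic meet-in-the-middle pattern invoked by this attack trades memory for time: tabulate all outputs of one half of a split computation, then run the other half backwards and look for a collision. Below is a toy, non-cryptographic rendering with fabricated 16-bit stages; the real attack splits SHA-2's step function itself into two independently computable chunks.

```python
# Toy meet-in-the-middle: recover (k1, k2) such that f2(f1(x, k1), k2) == target.
M = 1 << 16
f1 = lambda x, k1: (x * 31 + k1) % M
f2 = lambda y, k2: y ^ ((k2 * 2654435761) % M)   # XOR stage: its own inverse in y

def mitm_find_keys(x, target, key_bits=8):
    forward = {f1(x, k1): k1 for k1 in range(1 << key_bits)}  # tabulate half 1
    for k2 in range(1 << key_bits):
        y = f2(target, k2)            # run half 2 backwards from the target
        if y in forward:
            return forward[y], k2     # the two halves meet in the middle
    return None

x = 1234
target = f2(f1(x, 5), 7)
print(mitm_find_keys(x, target))      # recovers a valid pair, e.g. (5, 7)
```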
Our attack is a meet-in-the-middle attack that uses a range of novel techniques to split the function into two independent parts that can be computed separately and then matched in a birthday-style phase."} {"_id": "8be94dfea49d79ab7ce90b3dbd25ff38500f4163", "title": "Critical review on video game evaluation heuristics: social games perspective", "text": "This paper presents the first step in creating design and evaluation heuristics for social games which emerge from the domain of social media. Initial high-level heuristics for social games are offered by reviewing four existing video game heuristic models and analyzing two social games design frameworks."} {"_id": "b42d95996b8760ec06cfa1894e411bdb972be9c4", "title": "Through-Wall Opportunistic Sensing System Utilizing a Low-Cost Flat-Panel Array", "text": "A UWB through-wall imaging system is proposed based on a planar low profile aperture array operating from 0.9 GHz to 2.3 GHz. The goal is to provide a lightweight, fixed array to serve as an alternative to synthetic aperture radars (SAR) that require continuous array movement while collecting data. The proposed system consists of 12 dual-linear printed elements arranged within a triangular lattice, each forming a \u201cflower\u201d shape and backed by a ground plane. The array delivers half-space radiation with wideband performance, necessary for imaging applications. UWB capability is realized by suppressing grating lobes via the introduction of virtual phase centers interwoven within the actual array feeds. The proposed system is demonstrated for through-wall imaging via a non-coherent process. Distinctively, several coherent images are forged from various fixed aperture locations (referred to as \u201csnapshot\u201d locations) and appropriately combined to create a composite scene image. In addition to providing a unique wideband imaging capability (as an alternative to SAR), the system is portable and inexpensive for collecting/storing scattering data. The array design and data collection system is described, and several through-wall images are presented to demonstrate functionality."} {"_id": "10767cd60ac9e33188ae7e35d1e84a6614387da2", "title": "Maximum likelihood linear transformations for HMM-based speech recognition", "text": "This paper examines the application of linear transformations for speaker and environmental adaptation in an HMM-based speech recognition system. In particular, transformations that are trained in a maximum likelihood sense on adaptation data are investigated. Other than in the form of a simple bias, strict linear feature-space transformations are inappropriate in this case. Hence, only model-based linear transforms are considered. The paper compares the two possible forms of model-based transforms: (i) unconstrained, where any combination of mean and variance transform may be used, and (ii) constrained, which requires the variance transform to have the same form as the mean transform (sometimes referred to as feature-space transforms). Re-estimation formulae for all appropriate cases of transform are given. This includes a new and efficient \"full\" variance transform and the extension of the constrained model-space transform from the simple diagonal case to the full or block-diagonal case. The constrained and unconstrained transforms are evaluated in terms of computational cost, recognition-time efficiency, and use for speaker adaptive training.
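To make the constrained/unconstrained distinction in the record above concrete: a constrained (feature-space) transform applies one linear map x' = A x + b to the features themselves, so the same A serves the means and variances. The sketch below only applies a given transform to a feature matrix; estimating A and b by maximum likelihood, which is the substance of the paper, is not shown, and all shapes and names are illustrative.

```python
import numpy as np

def apply_constrained_transform(features, A, b):
    """Apply a constrained model-space (feature-space) transform
    x' = A x + b to every frame; features has shape (frames, dim)."""
    return features @ A.T + b

frames = np.random.randn(100, 13)   # toy 13-dim acoustic features
A = np.eye(13) * 0.9                # illustrative adaptation matrix
b = np.full(13, 0.1)                # illustrative bias
adapted = apply_constrained_transform(frames, A, b)
print(adapted.shape)                # (100, 13)
```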
The recognition performance of the two model-space transforms on a large vocabulary speech recognition task using incremental adaptation is investigated. In addition, initial experiments using the constrained model-space transform for speaker adaptive training are detailed."} {"_id": "6446405d851f7e940314d2d07aa8ff67b86d1da6", "title": "Semantic-Based Location Recommendation With Multimodal Venue Semantics", "text": "In recent years, we have witnessed a flourishing of location-based social networks. A well-formed representation of location knowledge is desired to cater to the needs of location sensing, browsing, navigation and querying. In this paper, we aim to study the semantics of point-of-interest (POI) by exploiting the abundant heterogeneous user generated content (UGC) from different social networks. Our idea is to explore the text descriptions, photos, user check-in patterns, and venue context for location semantic similarity measurement. We argue that the venue semantics play an important role in user check-in behavior. Based on this argument, a unified POI recommendation algorithm is proposed by incorporating venue semantics as a regularizer. In addition to deriving user preference based on user-venue check-in information, we place special emphasis on location semantic similarity. Finally, we conduct a comprehensive performance evaluation of location semantic similarity and location recommendation over a real world dataset collected from Foursquare and Instagram. Experimental results show that the UGC information can well characterize the venue semantics, which help to improve the recommendation performance."} {"_id": "ad2f8ea1313410195953f9c186a4cdb6012e53a2", "title": "SyncGC: A Synchronized Garbage Collection Technique for Reducing Tail Latency in Cassandra", "text": "Data-center applications running on distributed databases often suffer from unexpectedly high response time fluctuation which is caused by long tail latency. In this paper, we find that long tail latency of user writes is mainly created by the interference with garbage collection (GC) tasks running in various system layers. In order to address the tail latency problem, we propose a synchronized garbage collection technique, called SyncGC. By scheduling multiple GC instances to execute in sync with each other in an overlapped manner, SyncGC prevents user requests from being interfered with GC instances, thereby minimizing their negative impacts on tail latency. Our experimental results with Cassandra show that SyncGC reduces the 99.99th-percentile tail latency and the maximum latency by 35% and 37%, on average, respectively."} {"_id": "508d8c1dbc250732bd2067689565a8225013292f", "title": "Experimental validation of dual PPG local pulse wave velocity probe", "text": "A novel dual photoplethysmograph (PPG) probe and measurement system for local pulse wave velocity (PWV) is proposed and demonstrated. The developed probe design employs reflectance PPG transducers for non-invasive detection of blood pulse propagation waveforms from two adjacent measurement points (28 mm apart). Transit time delay between the continuously acquired dual pulse waveforms was used for beat-to-beat local PWV measurement. An in-vivo experimental validation study was conducted on 10 healthy volunteers (8 male and 2 female, 21 to 33 years of age) to validate the PPG probe design and the developed local PWV measurement system. The proposed system was able to measure carotid local PWV from multiple subjects.
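The transit-time arithmetic behind the probe just described is simple: with two pulse waveforms recorded a known distance apart, local PWV is the sensor spacing divided by the delay between them. A minimal sketch follows, using cross-correlation as one common delay estimator; the paper's own beat-detection procedure may differ, and the 28 mm spacing is the value quoted in the record above.

```python
import numpy as np

def local_pwv(ppg_prox, ppg_dist, fs, spacing_m=0.028):
    """Local pulse wave velocity (m/s) from proximal and distal PPG
    waveforms sampled at fs Hz: spacing / transit time, where transit
    time is the cross-correlation lag maximizing their similarity."""
    prox = ppg_prox - np.mean(ppg_prox)
    dist = ppg_dist - np.mean(ppg_dist)
    xcorr = np.correlate(dist, prox, mode="full")
    lag = int(np.argmax(xcorr)) - (len(prox) - 1)  # >0 if distal pulse lags
    if lag <= 0:
        raise ValueError("no positive transit time detected")
    return spacing_m / (lag / fs)

fs = 1000.0
pulse = np.exp(-0.5 * ((np.arange(1000) - 300) / 20.0) ** 2)  # toy pulse
print(local_pwv(pulse, np.roll(pulse, 4), fs))  # 0.028 / 0.004 = 7.0 m/s
```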
Beat-to-beat variation of baseline carotid PWV was less than 7.5% for 7 out of 10 subjects; a maximum beat-to-beat variation of 16% was observed during the study. Variation in beat-to-beat carotid local PWV and brachial blood pressure (BP) values during the post-exercise recovery period was also examined. A statistically significant correlation between intra-subject local PWV variation and brachial BP parameters was observed (r > 0.85, p < 0.001). The results demonstrated the feasibility of the proposed PPG probe for continuous beat-to-beat local PWV measurement from the carotid artery. Such a non-invasive local PWV measurement unit can potentially be used for continuous ambulatory BP measurements."} {"_id": "2b3caf9dfb2a539c89a5f55fc3ba6f81f4fa8f65", "title": "Millimeter-Wave TE20-Mode SIW Dual-Slot-Fed Patch Antenna Array With a Compact Differential Feeding Network", "text": "A millimeter-wave series\u2013parallel patch antenna array is presented, in which the dual-slot feeding structure is handily implemented using the intrinsic field distribution of TE20 mode in substrate-integrated waveguide (SIW). One 28 GHz patch antenna element fed by the TE20-mode SIW is first designed, achieving a 10 dB impedance bandwidth of 10.2% and a simulated peak gain of 6.48 dBi. Based on the antenna element, a $4 \times 4$ array with a compact series\u2013parallel differential feeding network is developed accordingly. Due to the novel compact SIW-based series\u2013parallel feeding network, the antenna array can achieve superior radiation performances, which is the highlight of this communication. The simulation and measurement results of the proposed antenna array are in good agreement, demonstrating a performance of 8.5% impedance bandwidth, 19.1 dBi peak gain, symmetrical radiation patterns, and low cross-polarization levels (\u221230 dB in E-plane and \u221225 dB in H-plane) in the operating frequency band of 26.65\u201329.14 GHz."} {"_id": "0797a337a6f2a8e1ac2824a4e941a92c83c79040", "title": "Perceived parental social support and academic achievement: an attachment theory perspective.", "text": "The study tested the extent to which parental social support predicted college grade point average among undergraduate students. A sample of 418 undergraduates completed the Social Provisions Scale--Parent Form (C.E. Cutrona, 1989) and measures of family conflict and achievement orientation. American College Testing Assessment Program college entrance exam scores (ACT; American College Testing Program, 1986) and grade point average were obtained from the university registrar. Parental social support, especially reassurance of worth, predicted college grade point average when controlling for academic aptitude (ACT scores), family achievement orientation, and family conflict. Support from parents, but not from friends or romantic partners, significantly predicted grade point average. Results are interpreted in the context of adult attachment theory."} {"_id": "4d85ad577916479ff7d57b78162ef4de70cf895e", "title": "Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony", "text": "The quantification of phase synchrony between neuronal signals is of crucial importance for the study of large-scale interactions in the brain. Two methods have been used to date in neuroscience, based on two distinct approaches which permit a direct estimation of the instantaneous phase of a signal [Phys. Rev. Lett. 81 (1998) 3291; Human Brain Mapping 8 (1999) 194].
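The first of the two approaches just mentioned (detailed in the continuation of this record below) derives the instantaneous phase from the analytic signal. A minimal sketch, assuming narrow-band inputs and using the phase-locking value as one common stability statistic; the record itself lists standard deviation, Shannon entropy, and mutual information as alternatives.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Instantaneous phases via the analytic signal (Hilbert transform),
    then the modulus of the circular mean of the phase difference:
    1 means perfectly locked phases, 0 means no consistent relation."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t + 0.7)   # same frequency, fixed phase shift
print(phase_locking_value(x, y))       # close to 1
```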
The phase is either estimated by using the analytic concept of Hilbert transform or, alternatively, by convolution with a complex wavelet. In both methods the stability of the instantaneous phase over a window of time requires quantification by means of various statistical dependence parameters (standard deviation, Shannon entropy or mutual information). The purpose of this paper is to conduct a direct comparison between these two methods on three signal sets: (1) neural models; (2) intracranial signals from epileptic patients; and (3) scalp EEG recordings. Levels of synchrony that can be considered as reliable are estimated by using the technique of surrogate data. Our results demonstrate that the differences between the methods are minor, and we conclude that they are fundamentally equivalent for the study of neuroelectrical signals. This offers a common language and framework that can be used for future research in the area of synchronization."} {"_id": "a5a4ea1118adf2880b0800859d0e7238b3ae2094", "title": "Romancing leadership: Past, present, and future", "text": "This paper presents a review of the romance of leadership and the social construction of leadership theory 25 years after it was originally introduced. We trace the development of this theoretical approach from the original formulation of the romance of leadership (RoL) theory as attributional bias through its emergence as a radical, unconventional approach that views leadership as a sensemaking activity that is primarily \u2018in the eye of the beholder.\u2019 We subsequently review research published in management and organizational psychology journals, book chapters and special issues of journals from 1985 to 2010. Three overall themes emerged from this review: 1) biases in (mis)attributions of leadership, including attributions for organizational success and failure; 2) follower-centered approaches, including the role of follower characteristics, perceptions, and motivations in interpreting leadership ratings; and 3) the social construction of leadership, including interfollower and social contagion processes, the role of crisis and uncertainty, and constructions and deconstructions of leadership and CEO celebrity in the media. Within each of these themes, we examine developments and summarize key findings. Our review concludes with recommendations for future theoretical and empirical research."} {"_id": "3c31d0411e813111eb652fb3639a76411b53394e", "title": "A General Agnostic Active Learning Algorithm", "text": "We present a simple, agnostic active learning algorithm that works for any hypothesis class of bounded VC dimension, and any data distribution. Our algorithm extends a scheme of Cohn, Atlas, and Ladner [6] to the agnostic setting, by (1) reformulating it using a reduction to supervised learning and (2) showing how to apply generalization bounds even for the non-i.i.d. samples that result from selective sampling. We provide a general characterization of the label complexity of our algorithm. This quantity is never more than the usual PAC sample complexity of supervised learning, and is exponentially smaller for some hypothesis classes and distributions. We also demonstrate improvements experimentally."} {"_id": "611f9faa6f3aeff3ccd674d779d52c4f9245376c", "title": "Multiresolution Models for Object Detection", "text": "Most current approaches to recognition aim to be scale-invariant.
However, the cues available for recognizing a 300 pixel tall object are qualitatively different from those for recognizing a 3 pixel tall object. We argue that for sensors with finite resolution, one should instead use scale-variant, or multiresolution representations that adapt in complexity to the size of a putative detection window. We describe a multiresolution model that acts as a deformable part-based model when scoring large instances and as a rigid template when scoring small instances. We also examine the interplay of resolution and context, and demonstrate that context is most helpful for detecting low-resolution instances when local models are limited in discriminative power. We demonstrate impressive results on the Caltech Pedestrian benchmark, which contains object instances at a wide range of scales. Whereas recent state-of-the-art methods demonstrate missed detection rates of 86%-37% at 1 false positive per image, our multiresolution model reduces the rate to 29%."} {"_id": "c97cca5e6f7c2268a7c5aa0603842fd7cb72dfd4", "title": "Gesture control of drone using a motion controller", "text": "In this study, we present our implementation of using a motion controller to control the motion of a drone via simple human gestures. We have used the Leap as the motion controller and the Parrot AR DRONE 2.0 for this implementation. The Parrot AR DRONE is an off-the-shelf quad rotor having an on-board Wi-Fi system. The AR DRONE is connected to the ground station via Wi-Fi and the Leap is connected to the ground station via USB port. The LEAP Motion Controller recognizes the hand gestures and relays them on to the ground station. The ground station runs ROS (Robot Operating System) in Linux, which is used as the platform for this implementation. Python is the programming language used for interaction with the AR DRONE in order to convey the simple hand gestures. In our implementation, we have written Python code to interpret the hand gestures captured by the LEAP, and transmit them in order to control the motion of the AR DRONE via these gestures."} {"_id": "a09547337b202ecf203148cc4636dd3db75f9df0", "title": "Improvement of Marching Cubes Algorithm Based on Sign Determination", "text": "The traditional Marching Cubes algorithm suffers from repeated calculation, so an improved Marching Cubes algorithm is put forward. Each boundary voxel is used to find its adjacent boundary voxels. According to the relationship of edges and edge signs between a boundary voxel and an adjacent boundary voxel, we transmit the intersection on their common face from the boundary voxel to the adjacent one. If the edge sign of the adjacent boundary voxel does not yet exist, we set it at the same time. In this way, we avoid double counting of an intersection that lies in two adjacent boundary voxels, which share the face on which that edge of the isosurface lies. During the computation of edge-isosurface intersections, we only compute the edges whose edge signs are null, and we use the edge signs to avoid repeated assignment. This speeds up isosurface extraction."} {"_id": "10cfa5bfab3da9c8026d3a358695ea2a5eba0f33", "title": "Parallel Tracking and Mapping for Small AR Workspaces", "text": "This paper presents a method of estimating camera pose in an unknown scene.
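Referring back to the Marching Cubes record above: a generic version of its intersection-sharing idea can be sketched with a cache keyed by the canonical edge, so the interpolation for an edge shared by adjacent voxels is computed only once. This is a simplification; the paper propagates intersections through edge signs on shared faces, while the sketch below just memoizes by edge key, and all names are illustrative.

```python
edge_cache = {}   # canonical edge key -> index into the shared vertex list

def edge_vertex(p0, p1, v0, v1, iso, vertices):
    """Return the index of the isosurface vertex on edge (p0, p1),
    computing the linear interpolation only on the first visit."""
    key = (min(p0, p1), max(p0, p1))      # same key from either voxel
    if key not in edge_cache:
        t = (iso - v0) / (v1 - v0)        # assumes the edge crosses iso
        point = tuple(a + t * (b - a) for a, b in zip(p0, p1))
        vertices.append(point)
        edge_cache[key] = len(vertices) - 1
    return edge_cache[key]

vertices = []
i1 = edge_vertex((0, 0, 0), (1, 0, 0), -1.0, 1.0, 0.0, vertices)
i2 = edge_vertex((1, 0, 0), (0, 0, 0), 1.0, -1.0, 0.0, vertices)  # shared edge
print(i1 == i2, vertices)   # True [(0.5, 0.0, 0.0)]
```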
While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems."} {"_id": "5137516a2604bb0828e59dc517b252d489409986", "title": "Fast Keypoint Recognition in Ten Lines of Code", "text": "While feature point recognition is a key component of modern approaches to object detection, existing approaches require computationally expensive patch preprocessing to handle perspective distortion. In this paper, we show that formulating the problem in a Naive Bayesian classification framework makes such preprocessing unnecessary and produces an algorithm that is simple, efficient, and robust. Furthermore, it scales well to handle a large number of classes. To recognize the patches surrounding keypoints, our classifier uses hundreds of simple binary features and models class posterior probabilities. We make the problem computationally tractable by assuming independence between arbitrary sets of features. Even though this is not strictly true, we demonstrate that our classifier nevertheless performs remarkably well on image datasets containing very significant perspective changes."} {"_id": "69524d5b10b0bb7e4aa1c9057eefe197b230922e", "title": "Face to Face Collaborative AR on Mobile Phones", "text": "Mobile phones are an ideal platform for augmented reality. In this paper we describe how they can also be used to support face to face collaborative AR gaming. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played the game. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications."} {"_id": "779f05bf98049762df4298043ac6f38c82a07607", "title": "USING CAMERA-EQUIPPED MOBILE PHONES FOR INTERACTING WITH REAL-WORLD OBJECTS", "text": "The idea described in this paper is to use the built-in cameras of consumer mobile phones as sensors for 2-dimensional visual codes. Such codes can be attached to physical objects in order to retrieve object-related information and functionality. They are also suitable for display on electronic screens. The proposed visual code system allows the simultaneous detection of multiple codes, introduces a position-independent coordinate system, and provides the phone\u2019s orientation as a parameter. The ability to detect objects in the user\u2019s vicinity offers a natural way of interaction and strengthens the role of mobile phones in a large number of application scenarios.
We describe the hardware requirements, the design of a suitable visual code, a lightweight recognition algorithm, and present some example applications."} {"_id": "7c20a16d1666e990e6499995556578c1649d191b", "title": "Robust Visual Tracking for Non-Instrumented Augmented Reality", "text": "This paper presents a robust and flexible framework for augmented reality which does not require instrumenting either the environment or the workpiece. A model-based visual tracking system is combined with rate gyroscopes to produce a system which can track the rapid camera rotations generated by a head-mounted camera, even if images are substantially degraded by motion blur. This tracking yields estimates of head position at video field rate (50Hz) which are used to align computer-generated graphics on an optical see-through display. Nonlinear optimisation is used for the calibration of display parameters which include a model of optical distortion. Rendered visuals are pre-distorted to correct the optical distortion of the display."} {"_id": "edb92581ae897424de756257b82389c6cb21f28b", "title": "Engagement in Multimedia Training Systems", "text": "Two studies examined user engagement in two types of multimedia training systems - a more passive medium, videotape, and a less passive medium, interactive software. Each study compared engagement in three formats: the Text format contained text and still images, the Audio format contained audio and still images, and the Video format contained audio and video images. In both studies, engagement was lower in the Text condition than in the Video condition. However, there were no differences in engagement between Text and Audio in the videotape-based training, and no differences between Audio and Video in the computer-based training."} {"_id": "79465f3bac4fb9f8cc66dcbe676022ddcd9c05c6", "title": "Action recognition based on a bag of 3D points", "text": "This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90% recognition accuracy were achieved by sampling only about 1% 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation."} {"_id": "27b807162372574a2317b92a66396ba15c10d290", "title": "Birdbrains could teach basal ganglia research a new song", "text": "Recent advances in anatomical, physiological and histochemical characterization of avian basal ganglia neurons and circuitry have revealed remarkable similarities to mammalian basal ganglia. A modern revision of the avian anatomical nomenclature has now provided a common language for studying the function of the cortical-basal-ganglia-cortical loop, enabling neuroscientists to take advantage of the specialization of basal ganglia areas in various avian species. For instance, songbirds, which learn their vocal motor behavior using sensory feedback, have specialized a portion of their cortical-basal ganglia circuitry for song learning and production.
This discrete circuit dedicated to a specific sensorimotor task could be especially tractable for elucidating the interwoven sensory, motor and reward signals carried by the basal ganglia, and the function of these signals in task learning and execution."} {"_id": "e2f8ea2a3504d2984ed91ef8666874d91fd2a400", "title": "Robotic Arm Based 3D Reconstruction Test Automation", "text": "The 3-D reconstruction involves the construction of a 3-D model from a set of images. The 3-D reconstruction has varied uses that include 3-D printing, the generation of 3-D models that can be shared through social media, and more. The 3-D reconstruction involves complex computations on mobile phones, which must perform pose estimation. Pose estimation is the process of transforming a 2-D object into 3-D space. Once the pose estimation is done, mesh generation is performed using the graphics processing unit. This helps render the 3-D object. The competitive advantage of using hardware processors is to accelerate the intensive computation using graphics processors and digital signal processors. The stated problem that this technical paper addresses is the need for a reliable automated test for the 3-D reconstruction feature. The solution to this problem involved the design and development of an automated test system using a programmable robotic arm and rotor for precisely testing the quality of 3-D reconstruction features. The 3-D reconstruction testing involves using a robotic arm lab to accurately test the algorithmic integrity and end-to-end validation of the generated 3-D models. The robotic arm can move the hardware at different panning speeds, specific angles, fixed distances from the object, and more. The ability to reproduce the scanning at a fixed distance and the same panning speed helps to generate test results that can be benchmarked across different software builds. The 3-D reconstruction also requires a depth sensor to be mounted onto the device under examination. We use this robotic arm lab for functional, high-performance, and stability validation of the 3-D reconstruction feature. This paper addresses the computer vision use case testing for 3-D reconstruction features and how we have used the robotic arm lab for automating these use cases."} {"_id": "c4f7dcd553740d393544c2e2016809ba2c03e3a4", "title": "A Ka-Band CMOS Wilkinson Power Divider Using Synthetic Quasi-TEM Transmission Lines", "text": "This work presents a Ka-band two-way 3 dB Wilkinson power divider using synthetic quasi-transverse electromagnetic (TEM) transmission lines (TLs). The synthetic quasi-TEM TL, also called complementary-conducting-strip TL (CCS TL), is theoretically analyzed. The equivalent TL model, built from the extracted results, is applied to the power divider design. The prototype is fabricated by the standard 0.18 mum 1P6M CMOS technology, showing a circuit size of 210.0 mum times 390.0 mum without contact pads. The measurement results, which match the 50 Omega system, reveal perfect agreement with the simulations. The comparison reveals the following characteristics. The divider exhibits an equal power split with insertion losses (S21 and S31) of 3.65 dB. The return losses (S11, S22 and S33) of the prototype are higher than 10.0 dB from 30.0 to 40.0 GHz."} {"_id": "7cba794f1a270d0f44c76114c1c9d57718abe033", "title": "Using Patterns to Capture Architectural Decisions", "text": "Throughout the software design process, developers must make decisions and reify them in code.
The decisions made during software architecting are particularly significant in that they have system-wide implications, especially on the quality attributes. However, architects often fail to adequately document their decisions because they don't appreciate the benefits, don't know how to document decisions, or don't recognize that they're making decisions. The result is a lack of thorough documentation. This paper shows how, by providing information about a decision's rationale and consequences, architecture patterns can help architects better understand and more easily record their decisions."} {"_id": "618b5ebc365fb374faf0276633b11bcf249efa0e", "title": "A model for concentric tube continuum robots under applied wrenches", "text": "Continuum robots made from telescoping precurved elastic tubes enable base-mounted actuators to specify the curved shapes of robots as thin as standard surgical needles. While free space beam mechanics-based models of the shape of these \u2018active cannulas\u2019 exist, current models cannot account for external forces and torques applied to the cannula by the environment. In this paper we apply geometrically exact beam theory to solve the statics problem for concentric-tube continuum robots. This yields the equivalent of forward kinematics for an active cannula with general tube precurvature functions and arbitrarily many tubes, under loading from a general wrench distribution. The model achieves average experimental tip errors of less than 3 mm over the workspace of a prototype active cannula subject to various tip forces."} {"_id": "aec61d77458e8d2713c29d12d49ed2eaf814c98f", "title": "The feasible workspace analysis of a set point control for a cable-suspended robot with input constraints and disturbances", "text": "This paper deals with the characterization of reachable domain of a set-point controller for a cable-suspended robot under disturbances and input constraints. The main contribution of the paper is to calculate the feasible domain analytically through the choice of a control law, starting from a given initial condition. This analytical computation is then recursively used to find a second feasible domain starting from a boundary point of the first feasible domain. Hence, this procedure allows to expand the region of feasible reference signals by connecting adjacent domains through common points. Finally, the effectiveness of the proposed method is illustrated by numerical simulations on a kinematically determined cable robot with six cables."} {"_id": "9878cc39383560752c5379a7e9641fc82a4daf7f", "title": "A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications", "text": "Graph is an important data representation which appears in a wide diversity of real-world scenarios. Effective graph analytics provides users a deeper understanding of what is behind the data, and thus can benefit a lot of useful applications such as node classification, node recommendation, link prediction, etc. However, most graph analytics methods suffer from high computation and space costs. Graph embedding is an effective yet efficient way to solve the graph analytics problem. It converts the graph data into a low dimensional space in which the graph structural information and graph properties are maximally preserved.
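One member of the matrix-factorization family of embedding techniques covered by the survey above can be sketched in a few lines: factor the adjacency matrix and keep the leading spectral components as low-dimensional node coordinates. This is a toy spectral example under the assumption of a small, symmetric, unweighted graph, not any specific method from the survey.

```python
import numpy as np

def embed_graph(adj, dim):
    """Matrix-factorization-style node embedding: eigendecompose the
    symmetric adjacency matrix and keep the `dim` components with the
    largest absolute eigenvalues as node coordinates."""
    vals, vecs = np.linalg.eigh(adj)
    order = np.argsort(-np.abs(vals))[:dim]
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

# 4-node path graph embedded in 2 dimensions
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(embed_graph(A, 2).shape)   # (4, 2)
```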
In this survey, we conduct a comprehensive review of the literature in graph embedding. We first introduce the formal definition of graph embedding as well as the related concepts. After that, we propose two taxonomies of graph embedding which correspond to what challenges exist in different graph embedding problem settings and how the existing work addresses these challenges in their solutions. Finally, we summarize the applications that graph embedding enables and suggest four promising future research directions in terms of computation efficiency, problem settings, techniques and application scenarios."} {"_id": "53c08abb25ba0573058ad2692ffed39fbd65eb0d", "title": "Local sensor system for badminton smash analysis", "text": "This paper presents the development of a sensory system for the analysis of badminton smashes. During a badminton game, the ability to execute a powerful smash is fundamental for a player to be competitive. In most games, the winning factor is often attributed to a high shuttle speed during the execution of a smash. It was envisioned that the shuttle speed could be correlated with the speed of the racket head. To help analyze the key relationship between the racket speed and the shuttle speed, a series of sensors were integrated into a conventional racket. The aim of this investigation is to develop a sensor system that facilitates the quantifiable analysis of the badminton smash. It was determined in a previous investigation that a single accelerometer is insufficient to determine the two- or three-dimensional trajectory of the racket. Therefore a low-mass, compact, two-axis piezoelectric accelerometer was applied to the head of the racket. An acoustic shock sensor was also mounted on the racket head to identify the instant of the contact event. It was hypothesized that the magnitude of the acoustic signal associated with the hitting event and the final speed of the racket, when fused, could provide a good correlation with the shuttle speed. Fuzzy inference system and ANFIS (Adaptive Neuro-Fuzzy Inference System) sensor fusion techniques were investigated. It has been demonstrated that it is possible to analyze the performance of the smash stroke based on the system developed in this investigation, which in turn may, with further development, be a useful aid to badminton coaches, and the methods developed may also be applicable to other applications."} {"_id": "46fd85775cab39ecb32cf2e41642ed2d0984c760", "title": "Vital, Sophia, and Co. - The Quest for the Legal Personhood of Robots", "text": "The paper examines today\u2019s debate on the legal status of AI robots, and how often scholars and policy makers confuse the legal agenthood of these artificial agents with the status of legal personhood. By taking into account current trends in the field, the paper suggests a twofold stance. First, policy makers shall seriously mull over the possibility of establishing novel forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility. Second, any hypothesis of granting AI robots full legal personhood has to be discarded in the foreseeable future. However, how should we deal with Sophia, which became the first AI application to receive citizenship of any country, namely, Saudi Arabia, in October 2017?
Admittedly, granting someone, or something, legal personhood is\u2014as it always has been\u2014a highly sensitive political issue that does not simply hinge on rational choices and empirical evidence. Discretion, arbitrariness, and even bizarre decisions play a role in this context. However, the normative reasons why legal systems grant human and artificial entities, such as corporations, their status help us take sides in today\u2019s quest for the legal personhood of AI robots. Is citizen Sophia really conscious, or capable of suffering the slings and arrows of outrageous scholars?"} {"_id": "9bd38b75d75ea35a720b7647a93e8f5526348be6", "title": "Working memory as an emergent property of the mind and brain", "text": "Cognitive neuroscience research on working memory has been largely motivated by a standard model that arose from the melding of psychological theory with neuroscience data. Among the tenets of this standard model are that working memory functions arise from the operation of specialized systems that act as buffers for the storage and manipulation of information, and that frontal cortex (particularly prefrontal cortex) is a critical neural substrate for these specialized systems. However, the standard model has been a victim of its own success, and can no longer accommodate many of the empirical findings of studies that it has motivated. An alternative is proposed: Working memory functions arise through the coordinated recruitment, via attention, of brain systems that have evolved to accomplish sensory-, representation-, and action-related functions. Evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies, from monkeys and humans, is considered, as is the question of how to interpret delay-period activity in the prefrontal cortex."} {"_id": "c805f7e151418da74df326c730cc690f2c081b7d", "title": "Innovating with Concept Mapping", "text": "Student-generated concept maps (Cmaps) have been the preferred choice to design assessment tasks, which are time-consuming in real classroom settings. We have proposed the use of Cmaps with errors (elaborated by teachers) to develop assessment tasks, as a way to address the logistical practicality obstacles usually found in classrooms. This paper compared two different tasks, finding the errors and judging the selected propositions in a Cmap with errors, exploring two topics with different levels of difficulty. Our results confirmed Cmaps with errors as a straightforward approach to include concept mapping into the classroom routine to foster pedagogic resonance (the bridge between teacher knowledge and student learning), which is critical for motivating students to learn meaningfully. Moreover, Cmaps with errors are also amenable to on-line learning platforms. Future work of our research group will develop an automated process for evaluation and feedback using the task formats presented in this paper."} {"_id": "949b3f3ee6256aa3465914b6697f0a17475a3112", "title": "Natural Language Processing for Question Answering ( NLP 4", "text": "The recent developments in Question Answering have kept with open-domain questions and collections, sometimes argued as being more difficult than narrow domain-focused questions and corpora. The biomedical field is indeed a specialized domain; however, its scope is fairly broad, so that considering a biomedical QA task is not necessarily such a simplification over open-domain QA as represented in the recent TREC evaluations.
We shall try to characterize salient aspects of biomedical QA as well as to give a short review of useful resources to address this task."} {"_id": "177a167ba6a2e0b84d5e6f1375f6b54e14a8c6c2", "title": "Guest editorial: qualitative studies in information systems: a critical review and some guiding principles", "text": "Please refer to attached Guest Editorial"} {"_id": "b7ec80b305ee1705a107989b4a56a24801084dbd", "title": "Examining the critical success factors in the adoption of enterprise resource planning", "text": "This paper presents a literature review of the critical success factors (CSFs) in the implementation of enterprise resource planning (ERP) across 10 different countries/regions. The review covers journals, conference proceedings, doctoral dissertations, and textbooks from these 10 different countries/regions. Through a review of the literature, 18 CSFs were identified, with more than 80 sub-factors, for the successful implementation of ERP. The findings of our study reveal that \u2018appropriate business and IT legacy systems\u2019, \u2018business plan/vision/goals/justification\u2019, \u2018business process reengineering\u2019, \u2018change management culture and programme\u2019, \u2018communication\u2019, \u2018ERP teamwork and composition\u2019, \u2018monitoring and evaluation of performance\u2019, \u2018project champion\u2019, \u2018project management\u2019, \u2018software/system development, testing and troubleshooting\u2019, \u2018top management support\u2019, \u2018data management\u2019, \u2018ERP strategy and implementation methodology\u2019, \u2018ERP vendor\u2019, \u2018organizational characteristics\u2019, \u2018fit between ERP and business/process\u2019, \u2018national culture\u2019 and \u2018country-related functional requirement\u2019 were the commonly extracted factors across these 10 countries/regions. Among these 18 CSFs, \u2018top management support\u2019 and \u2018training and education\u2019 were the most frequently cited critical factors for the successful implementation of ERP systems. \u00a9 2008 Elsevier B.V. All rights reserved."} {"_id": "5dfbf6d15c47637fa5d62e27f9b26c59fc249ab3", "title": "Using Quality Models in Software Package Selection", "text": "All the methodologies that have been proposed recently for choosing software packages compare user requirements with the packages\u2019 capabilities [3\u20135]. There are different types of requirements, such as managerial, political, and, of course, quality requirements. Quality requirements are often difficult to check. This is partly due to their nature, but there is another reason that can be mitigated, namely the lack of structured and widespread descriptions of package domains (that is, categories of software packages such as ERP systems, graphical or data structure libraries, and so on). This absence hampers the accurate description of software packages and the precise statement of quality requirements, and consequently overall package selection and confidence in the result of the process. Our methodology for building structured quality models helps overcome this drawback. (See the \u201cRelated Work\u201d sidebar for information about other approaches.) A structured quality model for a given package domain provides a taxonomy of software quality features, and metrics for computing their value.
Our approach relies on the International Organization for Standardization and International Electrotechnical Commission 9126-1 quality standard [6], which we selected for the following reasons:"} {"_id": "d2656083425ffd4135cc13d0b444dbc605ebb6ba", "title": "An AI Approach to Automatic Natural Music Transcription", "text": "Automatic music transcription (AMT) remains a fundamental and difficult problem in music information research, and current music transcription systems are still unable to match human performance. AMT aims to automatically generate a score representation given a polyphonic acoustical signal. In our project, we approach the AMT problem on two fronts: acoustic modeling to identify pitches from a frame of audio, and establishing a score generation model to convert exact piano roll representations of audio into more \u2018natural\u2019 sheet music. We build an end-to-end pipeline that aims to convert .wav classical piano audio files into a \u2018natural\u2019 score representation."} {"_id": "e79b273af2f644753eef96f39e39bff973ce21a1", "title": "COMPARISON OF LOSSLESS DATA COMPRESSION ALGORITHMS FOR TEXT DATA", "text": "Data compression is a common requirement for most of the computerized applications. There are a number of data compression algorithms, which are dedicated to compress different data formats. Even for a single data type there are a number of different compression algorithms, which use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is examined and implemented to evaluate the performance in compressing text data. An experimental comparison of a number of different lossless data compression algorithms is presented in this paper. The article is concluded by stating which algorithm performs well for text data."} {"_id": "600c91dd8c2ad24217547d506ec2f89316f21e15", "title": "A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling", "text": "We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder. The semantic role labeler achieves competitive performance on English, even without any kind of syntactic information and only using local inference. However, when automatically predicted part-of-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish where our approach also achieves competitive results. Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting. Our syntax-agnostic model appears more robust, resulting in the best reported results on standard out-of-domain test sets."} {"_id": "2fc1e38198f2961969f3863230c0a2993e156506", "title": "Walking on foot to explore a virtual environment with uneven terrain", "text": "In this work we examine people's ability to maintain spatial awareness of an HMD-based immersive virtual environment when the terrain of the virtual world does not match the real physical world in which they are locomoting. More specifically, we examine what happens when a subject explores a hilly virtual environment while physically walking in a flat room.
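Referring back to the syntax-agnostic SRL record above: its local inference step can be caricatured as a per-token classifier over encoder states, pairing each token's state with the predicate's state and mapping the pair to role probabilities. The sketch below assumes the BiLSTM states are already computed and uses random toy weights; it illustrates only the local scoring the abstract mentions, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(0)

def role_scores(states, pred_idx, W):
    """Score each token as an argument of the predicate at pred_idx by
    concatenating its encoder state with the predicate's state, then
    applying a softmax over role labels (a purely local classifier)."""
    h_pred = np.tile(states[pred_idx], (len(states), 1))
    logits = np.concatenate([states, h_pred], axis=1) @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # shape (tokens, roles)

states = rng.normal(size=(6, 8))   # 6 tokens, toy BiLSTM state size 8
W = rng.normal(size=(16, 4))       # 4 invented role labels
print(role_scores(states, pred_idx=2, W=W).shape)   # (6, 4)
```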
In our virtual environments, subjects maintain their height (or distance to the ground plane) at all times. In the experiment described in this work, we directly compare spatial orientation in a flat virtual environment to two variations of a hilly virtual environment. Interestingly, we find that subjects' spatial awareness of the environment is improved with the addition of uneven terrain."} {"_id": "26ae715ca80c4057a9e1ddce311b45ce5e2de835", "title": "The emergence of lncRNAs in cancer biology.", "text": "The discovery of numerous noncoding RNA (ncRNA) transcripts in species from yeast to mammals has dramatically altered our understanding of cell biology, especially the biology of diseases such as cancer. In humans, the identification of abundant long ncRNA (lncRNA) >200 bp has catalyzed their characterization as critical components of cancer biology. Recently, roles for lncRNAs as drivers of tumor suppressive and oncogenic functions have appeared in prevalent cancer types, such as breast and prostate cancer. In this review, we highlight the emerging impact of ncRNAs in cancer research, with a particular focus on the mechanisms and functions of lncRNAs."} {"_id": "0943ed739c909d17f8686280d43d50769fe2c2f8", "title": "Action Reaction Learning : Analysis and Synthesis of Human Behaviour", "text": "We propose Action-Reaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between an action and its reaction by observing time sequences. We apply this method to analyze human interaction and to subsequently synthesize human behaviour. Using a time series of perceptual measurements, a system automatically uncovers a mapping between gestures from one human participant (an action) and a subsequent gesture (a reaction) from another participant. A probabilistic model is trained from data of the human interaction using a novel estimation technique, Conditional Expectation Maximization (CEM). The system drives a graphical interactive character which probabilistically predicts the most likely response to the user's behaviour and performs it interactively. Thus, after analyzing human interaction in a pair of participants, the system is able to replace one of them and interact with a single remaining user."} {"_id": "8d4fd3f3f9cab5f24bdb0ec533a1304eca6a8331", "title": "Vehicle licence plate recognition using super-resolution technique", "text": "Due to the development of the economy and technology, people's demand for cars is growing, and so are the related problems, such as finding stolen cars, citing violations, and managing parking lots. Doing those jobs by human effort alone is time-consuming and inefficient because of the limits of human concentration. Therefore, developing intelligent monitoring systems based on new video technology has been a popular topic over the past decade. There is still much room for future development and application in the car license detection and recognition fields. For finding stolen cars, we can integrate a car license detection system with a road monitoring system to analyze the videos and trace objects, gaining high-efficiency and low-cost results. For parking lot management, car license recognition supports access management and automatic charging, reducing the human resources required. Automated highway tolling can be done through car license recognition as well.
Because the car license recognition system is mostly applied in security monitoring and business settings, the accuracy requirements are strict. Several factors cause inaccurate license recognition, such as insufficient video resolution, plates that appear too small because of distance, and lighting and shadow. We will discuss the license recognition system and how to apply super-resolution to overcome these problems."} {"_id": "1f050eb09a40c7d59715d2bb3b9d2d3708e99dda", "title": "PubChem Substance and Compound databases", "text": "PubChem (https://pubchem.ncbi.nlm.nih.gov) is a public repository for information on chemical substances and their biological activities, launched in 2004 as a component of the Molecular Libraries Roadmap Initiatives of the US National Institutes of Health (NIH). For the past 11 years, PubChem has grown to a sizable system, serving as a chemical information resource for the scientific research community. PubChem consists of three inter-linked databases, Substance, Compound and BioAssay. The Substance database contains chemical information deposited by individual data contributors to PubChem, and the Compound database stores unique chemical structures extracted from the Substance database. Biological activity data of chemical substances tested in assay experiments are contained in the BioAssay database. This paper provides an overview of the PubChem Substance and Compound databases, including data sources and contents, data organization, data submission using PubChem Upload, chemical structure standardization, web-based interfaces for textual and non-textual searches, and programmatic access. It also gives a brief description of PubChem3D, a resource derived from theoretical three-dimensional structures of compounds in PubChem, as well as PubChemRDF, Resource Description Framework (RDF)-formatted PubChem data for data sharing, analysis and integration with information contained in other databases."} {"_id": "272216c1f097706721096669d85b2843c23fa77d", "title": "Adam: A Method for Stochastic Optimization", "text": "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."} {"_id": "9ecd3d4c75f5e927fc3167dd185d7825e14a814b", "title": "Probabilistic Principal Component Analysis", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp.
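Referring back to the Adam record above: the update it describes (exponential moving averages of the gradient and squared gradient, bias-corrected, then a per-coordinate scaled step) can be written in a few lines. A minimal sketch follows, with hyper-parameter defaults that reflect common practice rather than a reference implementation.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates, bias
    correction, then a per-coordinate adaptively scaled step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 101):
    grad = 2 * (theta - 1.0)           # gradient of ||theta - 1||^2
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
print(np.round(theta, 2))              # approaches [1. 1. 1.]
```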
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."} {"_id": "00ca2e50ae1c5f4499d9271c72466d2f9d4ae137", "title": "Stochastic Variational Inference", "text": "We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets."} {"_id": "053dec3537df88a1f68b53e33e1462a6b88066f6", "title": "PILCO: A Model-Based and Data-Efficient Approach to Policy Search", "text": "In this paper, we introduce pilco, a practical, data-efficient model-based policy search method. Pilco reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, pilco can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks."} {"_id": "05aba481e8a221df5d8775a3bb749001e7f2525e", "title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization", "text": "We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms."} {"_id": "f2bc77fdcea85738d1062da83d84dfa3371d378d", "title": "A 14-mW 6.25-Gb/s Transceiver in 90-nm CMOS", "text": "This paper describes a 6.25-Gb/s 14-mW transceiver in 90-nm CMOS for chip-to-chip applications.
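Similarly, the adaptive subgradient (AdaGrad) record above boils down, in its diagonal form, to accumulating squared gradients and shrinking each coordinate's learning rate by the root of that history, so rarely-seen but predictive features receive larger steps. A minimal sketch, again illustrative rather than a reference implementation:

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.5, eps=1e-8):
    """One diagonal AdaGrad update: accumulate squared gradients and
    scale each coordinate's step by the inverse root of its history."""
    accum = accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum

theta, accum = np.zeros(3), np.zeros(3)
for _ in range(200):
    grad = 2 * (theta - 1.0)           # gradient of ||theta - 1||^2
    theta, accum = adagrad_step(theta, grad, accum)
print(np.round(theta, 2))              # approaches [1. 1. 1.]
```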
The transceiver employs a number of features for reducing power consumption, including a shared LC-PLL clock multiplier, an inductor-loaded resonant clock distribution network, a low- and programmable-swing voltage-mode transmitter, software-controlled clock and data recovery (CDR) and adaptive equalization within the receiver, and a novel PLL-based phase rotator for the CDR. The design can operate with channel attenuation of -15 dB or greater at a bit-error rate of 10^-15 or less, while consuming less than 2.25 mW/Gb/s per transceiver."} {"_id": "82507310123a6cd970f28c628edba9a6e78618c3", "title": "Limbic-cortical dysregulation: a proposed model of depression.", "text": "A working model of depression implicating failure of the coordinated interactions of a distributed network of limbic-cortical pathways is proposed. Resting state patterns of regional glucose metabolism in idiopathic depressed patients, changes in metabolism with antidepressant treatment, and blood flow changes with induced sadness in healthy subjects were used to test and refine this hypothesis. Dorsal neocortical decreases and ventral paralimbic increases characterize both healthy sadness and depressive illness; concurrent inhibition of overactive paralimbic regions and normalization of hypofunctioning dorsal cortical sites characterize disease remission. Normal functioning of the rostral anterior cingulate, with its direct connections to these dorsal and ventral areas, is postulated to be additionally required for the observed reciprocal compensatory changes, since pretreatment metabolism in this region uniquely predicts antidepressant treatment response. This model is offered as an adaptable framework to facilitate continued integration of clinical imaging findings with complementary neuroanatomical, neurochemical, and electrophysiological studies in the investigation of the pathogenesis of affective disorders."} {"_id": "a1cd96a10ca8dd53a52444456f6119a3284e5ae7", "title": "Image Steganography Using LSB and Edge \u2013 Detection Technique", "text": "Steganography is the technique of hiding the fact that communication is taking place, by hiding data in other data. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the Internet. For hiding secret information in images, there exists a large variety of steganographic techniques; some are more complex than others, and all of them have respective strong and weak points. Steganalysis, the detection of this hidden information, is an inherently difficult problem and requires thorough investigation, so we use an edge-detection filter. This paper examines how the edges of an image can be used for hiding a text message in steganography, giving an in-depth view of image steganography and edge-detection filter techniques. Method: To hide a text message in the edges of a gray image, the method first finds the binary value of each character of the text message and then, in the next stage, finds the dark (black) places of the gray image by converting the original image to a binary image and labeling each object of the image using 8-pixel connectivity. These images are then converted to an RGB image in order to locate the dark places.
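The embedding stage this record goes on to describe is a variant of least-significant-bit coding. The sketch below shows generic LSB embedding and extraction over the first pixels of a cover image; the record's dark-region pixel selection is deliberately simplified away, and all names and data are illustrative.

```python
import numpy as np

def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bit of pixel bytes
    (here simply the first len(message)*8 bytes of the cover image)."""
    flat = pixels.flatten().astype(np.uint8)
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("cover image too small for message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # clear LSB, set bit
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bytes):
    """Read back n_bytes of hidden data from the pixel LSBs."""
    bits = pixels.flatten().astype(np.uint8)[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.zeros((8, 8), dtype=np.uint8)   # toy cover image
stego = embed_lsb(img, b"hi")
print(extract_lsb(stego, 2))             # b'hi'
```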
In this way each sequence of gray values turns into an RGB color and the dark level of the gray image is found; if the gray image is very light, the histogram must be adjusted manually to select only the dark places. In the final stage, each group of 8 dark-place pixels is treated as a byte, and the binary value of each character is placed in the low bit of each such byte, increasing the security of plain least-significant-bit steganography. Steganalysis is then used to evaluate the hiding process and to ensure the data is hidden in the best possible way."} {"_id": "5e4ae6cd0f583d4c8edc74bddd800baf8915bfe8", "title": "Multiplexed protein quantitation in Saccharomyces cerevisiae using amine-reactive isobaric tagging reagents.", "text": "We describe here a multiplexed protein quantitation strategy that provides relative and absolute measurements of proteins in complex mixtures. At the core of this methodology is a multiplexed set of isobaric reagents that yield amine-derivatized peptides. The derivatized peptides are indistinguishable in MS, but exhibit intense low-mass MS/MS signature ions that support quantitation. In this study, we have examined the global protein expression of a wild-type yeast strain and the isogenic upf1Delta and xrn1Delta mutant strains that are defective in the nonsense-mediated mRNA decay and the general 5' to 3' decay pathways, respectively. We also demonstrate the use of 4-fold multiplexing to enable relative protein measurements simultaneously with determination of absolute levels of a target protein using synthetic isobaric peptide standards. We find that inactivation of Upf1p and Xrn1p causes common as well as unique effects on protein expression."} {"_id": "8cf34a74266fcefdd1e70e7c0900648c4f2cac88", "title": "An Efficient Certificateless Encryption for Secure Data Sharing in Public Clouds", "text": "We propose a mediated certificateless encryption scheme without pairing operations for securely sharing sensitive information in public clouds. Mediated certificateless public key encryption (mCL-PKE) solves the key escrow problem in identity based encryption and certificate revocation problem in public key cryptography. However, existing mCL-PKE schemes are either inefficient because of the use of expensive pairing operations or vulnerable against partial decryption attacks. In order to address the performance and security issues, in this paper, we first propose a mCL-PKE scheme without using pairing operations. We apply our mCL-PKE scheme to construct a practical solution to the problem of sharing sensitive information in public clouds. The cloud is employed as a secure storage as well as a key generation center. In our system, the data owner encrypts the sensitive data using the cloud generated users' public keys based on its access control policies and uploads the encrypted data to the cloud. Upon successful authorization, the cloud partially decrypts the encrypted data for the users. The users subsequently fully decrypt the partially decrypted data using their private keys. The confidentiality of the content and the keys is preserved with respect to the cloud, because the cloud cannot fully decrypt the information. We also propose an extension to the above approach to improve the efficiency of encryption at the data owner. We implement our mCL-PKE scheme and the overall cloud based system, and evaluate its security and performance.
Our results show that our schemes are efficient and practical."} {"_id": "392ed32f87afa8677e44e40b3f79ff967f6d88f9", "title": "Clustal W and Clustal X version 2.0", "text": "SUMMARY\nThe Clustal W and Clustal X multiple sequence alignment programs have been completely rewritten in C++. This will facilitate the further development of the alignment algorithms in the future and has allowed proper porting of the programs to the latest versions of Linux, Macintosh and Windows operating systems.\n\n\nAVAILABILITY\nThe programs can be run on-line from the EBI web server: http://www.ebi.ac.uk/tools/clustalw2. The source code and executables for Windows, Linux and Macintosh computers are available from the EBI ftp site ftp://ftp.ebi.ac.uk/pub/software/clustalw2/"} {"_id": "af91a6a76f71a399794b10bcfa1500ffdef9410d", "title": "Evaluating three treatments for borderline personality disorder: a multiwave study.", "text": "OBJECTIVE\nThe authors examined three yearlong outpatient treatments for borderline personality disorder: dialectical behavior therapy, transference-focused psychotherapy, and a dynamic supportive treatment.\n\n\nMETHOD\nNinety patients who were diagnosed with borderline personality disorder were randomly assigned to transference-focused psychotherapy, dialectical behavior therapy, or supportive treatment and received medication when indicated. Prior to treatment and at 4-month intervals during a 1-year period, blind raters assessed the domains of suicidal behavior, aggression, impulsivity, anxiety, depression, and social adjustment in a multiwave study design.\n\n\nRESULTS\nIndividual growth curve analysis revealed that patients in all three treatment groups showed significant positive change in depression, anxiety, global functioning, and social adjustment across 1 year of treatment. Both transference-focused psychotherapy and dialectical behavior therapy were significantly associated with improvement in suicidality. Only transference-focused psychotherapy and supportive treatment were associated with improvement in anger. Transference-focused psychotherapy and supportive treatment were each associated with improvement in facets of impulsivity. Only transference-focused psychotherapy was significantly predictive of change in irritability and verbal and direct assault.\n\n\nCONCLUSIONS\nPatients with borderline personality disorder respond to structured treatments in an outpatient setting with change in multiple domains of outcome. A structured dynamic treatment, transference-focused psychotherapy, was associated with change in multiple constructs across six domains; dialectical behavior therapy and supportive treatment were associated with fewer changes. Future research is needed to examine the specific mechanisms of change in these treatments beyond common structures."} {"_id": "6c5d2e4bc54beb4260cd56f9a45bf90e98d1187d", "title": "A Corpus and Model Integrating Multiword Expressions and Supersenses", "text": "This paper introduces a task of identifying and semantically classifying lexical expressions in running text. We investigate the online reviews genre, adding semantic supersense annotations to a 55,000 word English corpus that was previously annotated for multiword expressions. The noun and verb supersenses apply to full lexical expressions, whether single- or multiword. We then present a sequence tagging model that jointly infers lexical expressions and their supersenses.
Results show that even with our relatively small training corpus in a noisy domain, the joint task can be performed to attain 70% class labeling F1."} {"_id": "61c6da88c9a6ee3a72e1494d42acd5eb21c17a55", "title": "The neural basis of rationalization: cognitive dissonance reduction during decision-making.", "text": "People rationalize the choices they make when confronted with difficult decisions by claiming they never wanted the option they did not choose. Behavioral studies on cognitive dissonance provide evidence for decision-induced attitude change, but these studies cannot fully uncover the mechanisms driving the attitude change because only pre- and post-decision attitudes are measured, rather than the process of change itself. In the first fMRI study to examine the decision phase in a decision-based cognitive dissonance paradigm, we observed that increased activity in right-inferior frontal gyrus, medial fronto-parietal regions and ventral striatum, and decreased activity in anterior insula were associated with subsequent decision-related attitude change. These findings suggest the characteristic rationalization processes that are associated with decision-making may be engaged very quickly at the moment of the decision, without extended deliberation and may involve reappraisal-like emotion regulation processes."} {"_id": "9da870dbbc32c23013ef92dd9b30db60a3cd7628", "title": "Sparse Non-rigid Registration of 3D Shapes", "text": "Non-rigid registration of 3D shapes is an essential task of increasing importance as commodity depth sensors become more widely available for scanning dynamic scenes. Non-rigid registration is much more challenging than rigid registration as it estimates a set of local transformations instead of a single global transformation, and hence is prone to the overfitting issue due to underdetermination. The common wisdom in previous methods is to impose an \u21132-norm regularization on the local transformation differences. However, the \u21132-norm regularization tends to bias the solution towards outliers and noise with heavy-tailed distribution, which is verified by the poor goodness-of-fit of the Gaussian distribution over transformation differences. On the contrary, Laplacian distribution fits well with the transformation differences, suggesting the use of a sparsity prior. We propose a sparse non-rigid registration (SNR) method with an \u21131-norm regularized model for transformation estimation, which is effectively solved by an alternate direction method (ADM) under the augmented Lagrangian framework. We also devise a multi-resolution scheme for robust and progressive registration. Results on both public datasets and our scanned datasets show the superiority of our method, particularly in handling large-scale deformations as well as outliers and noise."} {"_id": "4b50628723e4e539fb985e15766a6c65da49237c", "title": "THE STRUCTURE(S) OF PARTICLE VERBS", "text": "\u2022 complex heads are formed either in the lexicon (cf. Booij 1990, Johnson 1991, Koizumi 1993, Neeleman 1994, Neeleman & Weerman 1993, Stiebels 1996, Stiebels & Wunderlich 1994, Wiese 1996, Ackerman & Webelhuth 1998) or by some kind of overt or covert incorporation of the particle into the verb (cf. van Riemsdijk 1978, Baker 1988, Koopman 1995, Olsen 1995a, 1995b, Zeller 1997a, 1997b, 1998) \u2022 small clause structure (cf.
Kayne 1985, Gu\u00e9ron 1987, 1990, Hoekstra 1988, Grewendorf 1990, Bennis 1992, Mulder 1992, den Dikken 1992, 1995, Zwart 1997 among others)"} {"_id": "6d24e98c086818bfd00406ef54a44bca9dde76b8", "title": "State of the Art in Stereoscopic and Autostereoscopic Displays", "text": "Underlying principles of stereoscopic direct-view displays, binocular head-mounted displays, and autostereoscopic direct-view displays are explained and some early work as well as the state of the art in those technologies are reviewed. Stereoscopic displays require eyewear and can be categorized based on the multiplexing scheme as: 1) color multiplexed (old technology but there are some recent developments; low-quality due to color reproduction and crosstalk issues; simple and does not require additional electronics hardware); 2) polarization multiplexed (requires polarized light output and polarization-based passive eyewear; high-resolution and high-quality displays available); and 3) time multiplexed (requires faster display hardware and active glasses synchronized with the display; high-resolution commercial products available). Binocular head-mounted displays can readily provide 3-D, virtual images, immersive experience, and more possibilities for interactive displays. However, the bulk of the optics, matching of the left and right ocular images and obtaining a large field of view make the designs quite challenging. Some of the recent developments using unconventional optical relays allow for thin form factors and open up new possibilities. Autostereoscopic displays are very attractive as they do not require any eyewear. There are many possibilities in this category including: two-view (the simplest implementations are with a parallax barrier or a lenticular screen), multiview, head tracked (requires active optics to redirect the rays to a moving viewer), and super multiview (potentially can solve the accommodation-convergence mismatch problem). Earlier 3-D booms did not last long mainly due to the unavailability of enabling technologies and the content. Current developments in the hardware technologies provide a renewed interest in 3-D displays both from the consumers and the display manufacturers, which is evidenced by the recent commercial products and new research results in this field."} {"_id": "239d60331832d4a457fa29659dccb1d6571c481f", "title": "Importance of audio feature reduction in automatic music genre classification", "text": "Multimedia database retrieval is rapidly growing and its popularity in online retrieval systems is consequently increasing. Large datasets are major challenges for searching, retrieving, and organizing the music content. Therefore, a robust automatic music-genre classification method is needed for organizing this music data into different classes according to specific viable information. Two fundamental components are to be considered for genre classification: audio feature extraction and classifier design. In this paper, we propose diverse audio features to precisely characterize the music content. The feature sets belong to four groups: dynamic, rhythmic, spectral, and harmonic. From the features, five statistical parameters are considered as representatives, including the fourth-order central moments of each feature as well as covariance components. Ultimately, insignificant representative parameters are controlled by minimum redundancy and maximum relevance. This algorithm calculates the score level of all feature attributes and orders them.
Only high-score audio features are considered for genre classification. Moreover, we can recognize those audio features and distinguish which of the different statistical parameters derived from them are important for genre classification. Among them, mel frequency cepstral coefficient statistical parameters, such as covariance components and variance, are more frequently selected than the feature attributes of other groups. This approach does not transform the original features as do principal component analysis and linear discriminant analysis. In addition, other feature reduction methodologies, such as locality-preserving projection and non-negative matrix factorization, are considered. The performance of the proposed system is measured based on the reduced features from the feature pool using different feature reduction techniques. The results indicate that the overall classification is competitive with existing state-of-the-art frame-based methodologies."} {"_id": "e36ecd4250fac29cc990330e01c9abee4c67a9d6", "title": "A Dual-Band Dual-Circular-Polarization Antenna for Ka-Band Satellite Communications", "text": "A novel Ka-band dual-band dual-circularly-polarized antenna array is presented in this letter. A dual-band antenna with left-hand circular polarization for the Ka-band downlink frequencies and right-hand circular polarization for the Ka-band uplink frequencies is realized with compact annular ring slots. By applying the sequential rotation technique, a 2 \u00d7 2 subarray with good performance is obtained. This letter describes the design process and presents simulation and measurement results."} {"_id": "528f0ce730b0607cc6107d672aec344aaaae1b24", "title": "Using machine learning algorithms for housing price prediction: The case of Fairfax County, Virginia housing data", "text": "House sales are determined based on the Standard & Poor\u2019s Case-Shiller home price indices and the housing price index of the Office of Federal Housing Enterprise Oversight (OFHEO). These reflect the trends of the US housing market. In addition to these housing price indices, the development of a housing price prediction model can greatly assist in the prediction of future housing prices and the establishment of real estate policies. This study uses machine learning algorithms as a research methodology to develop a housing price prediction model. To improve the accuracy of housing price prediction, this paper analyzes the housing data of 5359 townhouses in Fairfax County, Virginia, gathered by the Multiple Listing Service (MLS) of the Metropolitan Regional Information Systems (MRIS). We develop a housing price prediction model based on machine learning algorithms such as C4.5, RIPPER, Na\u00efve Bayesian, and AdaBoost and compare their classification accuracy performance. We then propose an improved housing price prediction model to assist a house seller or a real estate agent make better informed decisions based on house price valuation. The experiments demonstrate that the RIPPER algorithm, based on accuracy, consistently outperforms the other models in the performance of housing price prediction."} {"_id": "ab3e60422c31e6d9763d5dc0794d3c29f3e75b0c", "title": "Using plane + parallax for calibrating dense camera arrays", "text": "A light field consists of images of a scene taken from different viewpoints. Light fields are used in computer graphics for image-based rendering and synthetic aperture photography, and in vision for recovering shape.
In this paper, we describe a simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework. Specifically, for the case when the cameras lie on a plane, we show (i) how to estimate camera positions up to an affine ambiguity, and (ii) how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene. While planar parallax does not completely describe the geometry of the light field, it is adequate for the first two applications which, it turns out, do not depend on having a metric calibration of the light field. Experiments on acquired light fields indicate that our method yields better results than full metric calibration."} {"_id": "4743c1b19006909c86219af3aeb6dcbee4eb119f", "title": "Performance Analysis, Mapping, and Multiobjective Optimization of a Hybrid Robotic Machine Tool", "text": "A serial-parallel hybrid machine tool is expected to integrate the respective features of pure serial/parallel mechanism. The traditional method of hybridization is to connect n (n \u2265 2) mechanisms bottom to head, in which at least one should be a parallel mechanism. One unique approach called Mechanism Hybridization is to embed one serial mechanism inside of a pure parallel mechanism, which greatly changes its overall performance. Based on this idea, an X-Y gantry system including a five-axis hybrid manipulator is developed, which is expected to be applied as the next generation of computer numerical control machine. The inverse kinematics and Jacobian matrix are derived. Since performance improvement is one of the most important factors that greatly affect the application potential of hybrid manipulators in different industry fields, to deeply investigate the comprehensive features, the local/global performance indexes of stiffness, dexterity, and manipulability are mathematically modeled and mapped. A discrete-boundary-searching method is developed to calculate and visualize the workspace. Pareto-based evolutionary multiobjective performance optimization is implemented to simultaneously improve the four indexes, and the representative nondominated solutions are listed. The proposed methodologies are generic and applicable for the design, modeling, and improvement of other parallel/hybrid robotic machine tools."} {"_id": "95db44d1f978a8407ae8b202dee9d70f700c6948", "title": "Simulation in software engineering training", "text": "Simulation is frequently used for training in many application areas like aviation and economics, but not in software engineering. We present the SESAM project which focuses on software engineering education using simulation. In the SESAM project a simulator was developed. Using this simulator, a student can take the role of a software project manager. The simulated software project can be finished within a couple of hours because it is simulated in \u201cquick-motion\u201d mode.\nIn this paper, the background and goals of the SESAM project are presented. A new simulation model, the so called QA model, is introduced. The model behavior is demonstrated by investigating and comparing different strategies for software development. The results of experiments based on the QA model are reported. 
Finally, conclusions are drawn from the experiments and future work is outlined."} {"_id": "d0701ea0e6822272cac58dc73e6c7e5d2c796436", "title": "Effects of Transformational Leadership Training on Attitudinal and Financial Outcomes: A Field Experiment", "text": "A pretest-posttest control-group design (N = 20) was used to assess the effects of transformational leadership training, with 9 and 11 managers assigned randomly to training and control groups, respectively. Training consisted of a 1-day group session and 4 individual booster sessions thereafter on a monthly basis. Multivariate analyses of covariance, with pretest scores as the covariate, showed that the training resulted in significant effects on subordinates' perceptions of leaders' transformational leadership, subordinates' own organizational commitment, and 2 aspects of branch-level financial performance."} {"_id": "2d3a3a3ef0ac94e899feb3a7c7caa3fb6099393d", "title": "An approach to finite-time controller design for a class of T-S fuzzy systems", "text": "This paper studies the finite-time stabilization problem for a class of nonlinear systems described by Takagi-Sugeno (T-S) fuzzy dynamic models with parametric uncertainties. A novel non-Lipschitz continuous state feedback control scheme with augmented dynamics is developed. It is shown that finite-time convergence of the closed loop system can be achieved, and the potential controller singularity problem can be avoided. Finally, one example is given to illustrate the effectiveness of the proposed control approach."} {"_id": "5f8e64cc066886a99cc8e30e68d1b29d8bb1961d", "title": "The Human Hippocampus and Spatial and Episodic Memory", "text": "Finding one's way around an environment and remembering the events that occur within it are crucial cognitive abilities that have been linked to the hippocampus and medial temporal lobes. Our review of neuropsychological, behavioral, and neuroimaging studies of human hippocampal involvement in spatial memory concentrates on three important concepts in this field: spatial frameworks, dimensionality, and orientation and self-motion. We also compare variation in hippocampal structure and function across and within species. We discuss how its spatial role relates to its accepted role in episodic memory. Five related studies use virtual reality to examine these two types of memory in ecologically valid situations. While processing of spatial scenes involves the parahippocampus, the right hippocampus appears particularly involved in memory for locations within an environment, with the left hippocampus more involved in context-dependent episodic or autobiographical memory."} {"_id": "cb2ff27205b27b9fd95005a98facdf09a0d4759f", "title": "3D Sampling Textures for Creative Design and Manufacturing", "text": "Figure 1: Texture synthesized from 3D scans of snake-skin and mushroom laminae. ABSTRACT 3D sampling is a robust new strategy for exploring and creating designs. 3D textures are sampled and mixed almost like music to create new multifunctional surfaces and material microstructures (Figure 1). This paper presents several design cases performed using 3D sampling techniques, demonstrating how they can be used to explore and enhance product ideas, variations, and functions in subtle and responsive ways. In each case, design variations are generated and their mechanical behavior is evaluated against performance criteria using computational simulation or empirical testing.
This work aims to promote creativity and discovery by introducing irregular geometric structures within trial-and-error feedback loops."} {"_id": "06b30e1de10abd64ece74623fbb2f6f58a34f492", "title": "Large-Scale Liquid Simulation on Adaptive Hexahedral Grids", "text": "Regular grids are attractive for numerical fluid simulations because they give rise to efficient computational kernels. However, for simulating high resolution effects in complicated domains they are only of limited suitability due to memory constraints. In this paper we present a method for liquid simulation on an adaptive octree grid using a hexahedral finite element discretization, which reduces memory requirements by coarsening the elements in the interior of the liquid body. To impose free surface boundary conditions with second order accuracy, we incorporate a particular class of Nitsche methods enforcing the Dirichlet boundary conditions for the pressure in a variational sense. We then show how to construct a multigrid hierarchy from the adaptive octree grid, so that a time efficient geometric multigrid solver can be used. To improve solver convergence, we propose a special treatment of liquid boundaries via composite finite elements at coarser scales. We demonstrate the effectiveness of our method for liquid simulations that would require hundreds of millions of simulation elements in a non-adaptive regime."} {"_id": "547bfc90e96c2f3c1f9a1fdc9d7c84014322bc81", "title": "Power optimization for FinFET-based circuits using genetic algorithms", "text": "Reducing power consumption is one of the important design goals for circuit designers. Power optimization techniques for bulk CMOS-based circuit designs have been extensively studied. As technology scales, FinFET has been proposed as an alternative for bulk CMOS when technology scales beyond 32 nm technology (E.J. Nowak et al., 2004). In this paper, we propose a power optimization framework for FinFET based circuit design, based on genetic algorithms. We exploit the unique feature of independent gate (IG) controls for FinFET devices to reduce the power consumption, and combine with other low power techniques such as multi-Vdd and gate sizing to achieve power optimization for FinFET-based circuits. We use the 32 nm PTM FinFET device model (W. Zhao and Y. Cao, 2006) and conduct experiments on ISCAS benchmarks. The experimental results show that our methodology can achieve over 80% power reduction while satisfying the same performance constraints, compared to the case where all the FinFET transistors are tuned to be the fastest ones."} {"_id": "8a504a1ea577a855490e0a413b283ac1856c2da8", "title": "Improving the intrinsic calibration of a Velodyne LiDAR sensor", "text": "LiDAR (Light detection and ranging) sensors are widely used in research and development. As such, they build the base for the evaluation of newly developed ADAS (Advanced Driver Assistance Systems) functions in the automotive field where they are used for ground truth establishment. However, the factory calibration provided for the sensors is not able to satisfy the high accuracy requirements by such applications.
In this paper we propose a concept to easily improve the existing calibration of a Velodyne LiDAR sensor without the need for special calibration setups; the method can even be used to enhance already recorded data."} {"_id": "aae4665c2bd9f04e0c7f993123b30d87486b79f7", "title": "Combining template tracking and laser peak detection for 3D reconstruction and grasping in underwater environments", "text": "Autonomous grasping of unknown objects by a robot is a highly challenging skill that has received increasing attention in recent years. This problem becomes still more challenging (and less explored) in underwater environments, with highly unstructured scenarios, limited availability of sensors and, in general, adverse conditions that affect to different degrees the robot perception and control systems. This paper describes an approach for semi-autonomous grasping and recovery of unknown underwater objects from floating vehicles. A laser stripe emitter is attached to a robot forearm that performs a scan of a target of interest. This scan is captured by a camera that also estimates the motion of the floating vehicle while doing the scan. The scanned points are triangulated and transformed according to the motion estimation, thus recovering partial 3D structure of the scene with respect to a fixed frame. A user then indicates the part of the object to grasp, and the final grasp is automatically planned on that area. The approach herein presented is tested and validated in water tank conditions."} {"_id": "dec3de4ae1cb82c75189eb98b5ebb9a1a683f334", "title": "Clear underwater vision", "text": "Underwater imaging is important for scientific research and technology, as well as for popular activities. We present a computer vision approach which easily removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation. We show that the main degradation effects can be associated with partial polarization of light. We therefore present an algorithm which inverts the image formation process, to recover a good visibility image of the object. The algorithm is based on a couple of images taken through a polarizer at different orientations. As a by-product, a distance map of the scene is derived as well. We successfully used our approach when experimenting in the sea using a system we built. We obtained great improvement of scene contrast and color correction, and nearly doubled the underwater visibility range."} {"_id": "375d5a31e1f15b4b4dab3428f0517e014fa61a91", "title": "Girona 500 AUV: From Survey to Intervention", "text": "This paper outlines the specifications and basic design approach taken on the development of the Girona 500, an autonomous underwater vehicle whose most remarkable characteristic is its capacity to reconfigure for different tasks. The capabilities of this new vehicle range from different forms of seafloor survey to inspection and intervention tasks."} {"_id": "d45eaee8b2e047306329e5dbfc954e6dd318ca1e", "title": "ROS: an open-source Robot Operating System", "text": "This paper gives an overview of ROS, an opensource robot operating system. ROS is not an operating system in the traditional sense of process management and scheduling; rather, it provides a structured communications layer above the host operating systems of a heterogeneous compute cluster.
In this paper, we discuss how ROS relates to existing robot software frameworks, and briefly overview some of the available application software which uses ROS."} {"_id": "d8b69a91ff70de099aeaee3b448ef02f889f3faf", "title": "Design and Control of Autonomous Underwater Robots: A Survey", "text": "During the 1990s, numerous worldwide research and development activities occurred in underwater robotics, especially in the area of autonomous underwater vehicles (AUVs). As the ocean attracts great attention on environmental issues and resources as well as scientific and military tasks, the need for and use of underwater robotic systems has become more apparent. Great efforts have been made in developing AUVs to overcome challenging scientific and engineering problems caused by the unstructured and hazardous ocean environment. In the 1990s, about 30 new AUVs were built worldwide. With the development of new materials, advanced computing and sensory technology, as well as theoretical advancements, R&D activities in the AUV community have increased. However, this is just the beginning for more advanced, yet practical and reliable AUVs. This paper surveys some key areas in current state-of-the-art underwater robotic technologies. It is by no means a complete survey but provides key references for future development. The new millennium will bring advancements in technology that will enable the development of more practical, reliable AUVs."} {"_id": "17f84250276a340edcac9e5173e2d55020922deb", "title": "Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge", "text": "We propose Logic Tensor Networks: a uniform framework for integrating automatic learning and reasoning. A logic formalism called Real Logic is defined on a first-order language whereby formulas have truth-value in the interval [0,1] and semantics defined concretely on the domain of real numbers. Logical constants are interpreted as feature vectors of real numbers. Real Logic promotes a well-founded integration of deductive reasoning on a knowledge-base and efficient data-driven relational machine learning. We show how Real Logic can be implemented in deep Tensor Neural Networks with the use of Google\u2019s TENSORFLOW primitives. The paper concludes with experiments applying Logic Tensor Networks on a simple but representative example of knowledge completion."} {"_id": "1553fc8c9a9e7ab44aa948ca641bd9148e3f0f6b", "title": "Survey on Data Classification and Data Encryption Techniques Used in Cloud Computing", "text": "Cloud computing is an imminent revolution in the information technology (IT) industry because of its performance, accessibility, low cost and many other luxury features. Security of data in the cloud is one of the major issues which acts as a barrier in the implementation of cloud computing. In past years, a number of research works have targeted this problem. In this paper, we discuss some of the data classification techniques widely used in cloud computing. The objective of data classification is to find out the required level of security for data and to protect data by providing a sufficient level of security according to the risk levels of data. In this paper, we also survey existing solutions to the security problem, discuss their advantages, and point out their disadvantages for future research.
Specifically, we focus on the use of encryption techniques, and provide a comparative study of the major encryption techniques."} {"_id": "abbb1ed6bd09bbeae2b7f5d438950d774dbfdb42", "title": "Metaphor in the Mind: The Cognition of Metaphor", "text": "The most sustained and innovative recent work on metaphor has occurred in cognitive science and psychology. Psycholinguistic investigation suggests that novel, poetic metaphors are processed differently than literal speech, while relatively conventionalized and contextually salient metaphors are processed more like literal speech. This conflicts with the view of \u201ccognitive linguists\u201d like George Lakoff that all or nearly all thought is essentially metaphorical. There are currently four main cognitive models of metaphor comprehension: juxtaposition, category-transfer, feature-matching, and structural alignment. Structural alignment deals best with the widest range of examples; but it still fails to account for the complexity and richness of fairly novel, poetic metaphors. 1. General Issues in the Study of Metaphor: Philosophers have often adopted a dismissive attitude toward metaphor. Hobbes (ch. 8) advocated excluding metaphors from rational discourse because they \u201copenly profess deceit,\u201d while Locke (Bk. 3, ch. 10) claimed that figurative uses of language serve only \u201cto insinuate wrong ideas, move the passions, and thereby mislead the judgment; and so indeed are perfect cheats.\u201d Later, logical positivists like Ayer and Carnap assumed that because metaphors like (1) How sweet the moonlight sleeps upon this bank! involve category mistakes, they have no real meaning or verification conditions. Thus, they too mentioned metaphor only to place it beyond the pale of rational discourse. Starting in the 1960s and 70s, philosophers and linguists began to take more positive interest in metaphor. Black argued forcefully that metaphors do have a distinctive, essentially non-propositional meaning or cognitive significance, which is produced by the \u201cinteraction\u201d of the \u201csystems of associated commonplaces\u201d for the metaphor\u2019s \u201cprimary\u201d and \u201csecondary\u201d subjects (e.g., with moonlight and sleeping sweetly). Other theorists were more friendly to the idea that metaphorical and literal meaning are of the same essential kind. Many of them proposed that the literal absurdity of metaphors"} {"_id": "97ea7509c7f08b82b30676195b21ddc15c3b7712", "title": "Faculty at Saudi Electronic University attitudes toward using augmented reality in education", "text": "This study aims to examine the possibility of implementing Augmented Reality (AR) applications in higher education by answering three questions: to what extent faculty at the Saudi Electronic University are familiar with such applications, what perceptions they hold toward using AR in education, and what barriers they believe may hinder implementing this technology. An online survey was designed and distributed to faculty members at two colleges selected randomly to collect data from participants. Even though the faculty were at an acceptable level of familiarity with this technology, they did not use it in their classes. Results showed that faculty have a positive attitude toward the use of AR and have confidence in its potential to enrich the learning environment.
This result is consistent with the AR advantages with which the faculty agreed. Results also showed that faculty have concerns about some barriers that might impact implementation of AR in the education environment, such as the lack of technical support."} {"_id": "5fed66826bc19773b2c281997db3cc2233e1f14a", "title": "Impact of supply chain integration and selective disintegration on supply chain efficiency and organization performance", "text": "An efficient business system integration (BSI) is the key determinant for an organization to stay competitive. The goal of this study is to develop and execute a research project to study BSI in the oil & gas industry and how an efficient BSI in the supply chain (SCI) can be used to gain advantage. A mixed (qualitative & quantitative) survey method was employed, and structural equation modeling (SEM) was applied to test the theoretical framework and hypotheses. Findings reveal that total integration does not offer optimum performance and that selective disintegration (DIS) is necessary for supply chain efficiency (SCE) and organization performance (PER). This study helps to better understand the degree of integration required for optimum supply chain performance and introduces the new construct DIS. Specifically, this study investigated the effect of SCI and DIS on SCE and PER. The limited sample size precludes evaluating and testing additional broad models of the relationships amongst the constructs. The main managerial lesson is that, in contrast to what has been written in many books and other popular publications, selective disintegration is necessary for optimum supply chain efficiency."} {"_id": "603561e877ceee2213a72b90a9248d684f83e6ba", "title": "Learning to summarize web image and text mutually", "text": "We consider the problem of learning to summarize images by text and visualize text utilizing images, which we call Mutual-Summarization. We divide the web image-text data space into three subspaces, namely pure image space (PIS), pure text space (PTS) and image-text joint space (ITJS). Naturally, we treat the ITJS as a knowledge base.\n For the task of summarizing images by sentences, we map images from PIS to ITJS via image classification models and use text summarization on the corresponding texts in ITJS to summarize images. For the text visualization problem, we map texts from PTS to ITJS via text categorization models and generate the visualization by choosing semantically related images from ITJS, where the selected images are ranked by their confidence. In the above approaches, images are represented by color histograms, dense visual words and feature descriptors at different levels of spatial pyramid; and the texts are generated according to the Latent Dirichlet Allocation (LDA) topic model. Multiple Kernel (MK) methodologies are used to learn classifiers for image and text respectively. We show the Mutual-Summarization results on our newly collected dataset of six big events (\"Gulf Oil Spill\", \"Haiti Earthquake\", etc.) as well as demonstrate improved cross-media retrieval performance over existing methods in terms of MAP, Precision and Recall."} {"_id": "46f91a6cf5047498c0bf4c75852ecdb1a72fadd7", "title": "A-PNR: Automatic Plate Number Recognition", "text": "Automatic Plate Number Recognition (APNR) has important applications in traffic surveillance, toll booths, protected parking lots, no-parking zones, etc. It is a challenging problem, especially when the license plates have varying sizes, numbers of lines, fonts, background diversity, etc.
This work aims to address APNR using a deep learning method for real-time traffic images. We first extract license plate candidates from each frame using edge information and geometrical properties, ensuring high recall using a one-class SVM. The verified candidates are then used for number-plate recognition, along with a spatial transformer network (STN) for character recognition. Our system is evaluated on several traffic images with vehicles having different license plate formats in terms of tilt, distance, color, illumination, character size, thickness, etc. The backgrounds were also very challenging. Results demonstrate robustness to such variations and impressive performance in both localization and recognition."} {"_id": "a32ba4129c23051e23d49b75bd84967dd532ad1b", "title": "Virtual reality job interview training in adults with autism spectrum disorder.", "text": "The feasibility and efficacy of virtual reality job interview training (VR-JIT) was assessed in a single-blinded randomized controlled trial. Adults with autism spectrum disorder were randomized to VR-JIT (n = 16) or treatment-as-usual (TAU) (n = 10) groups. VR-JIT consisted of simulated job interviews with a virtual character and didactic training. Participants attended 90% of laboratory-based training sessions, found VR-JIT easy to use and enjoyable, and they felt prepared for future interviews. VR-JIT participants had greater improvement during live standardized job interview role-play performances than TAU participants (p = 0.046). A similar pattern was observed for self-reported self-confidence at a trend level (p = 0.060). VR-JIT simulation performance scores increased over time (R(2) = 0.83). Results indicate preliminary support for the feasibility and efficacy of VR-JIT, which can be administered using computer software or via the internet."} {"_id": "561612d24c6c793143c7e5c41a241933b3349299", "title": "Event-based stock market prediction", "text": "There are various studies on the behavior of the market. In particular, derivatives such as futures and options have attracted a lot of attention lately. Predicting these derivatives is not only important for risk management purposes, but also for speculative trading activities. Besides that, accurate prediction of the market\u2019s direction can help investors to gain enormous profits with a small amount of capital [Tsaih et al., 1998]. Stock market prediction can be viewed as a challenging time-series prediction [Kim, 2003]. There are many factors that are influential on the financial markets, including political events, natural disasters, economic conditions, and so on. Despite the complexity of the movements in market prices, the market behavior is not completely random. Instead, it is governed by a highly nonlinear dynamical system [Blank, 1991]. Forecasting future prices is carried out based on technical analysis, which studies the market\u2019s action using past prices and other market information. Market analysis is in contradiction with the Efficient Market Hypothesis (EMH). The EMH was developed in 1970 by economist Eugene Fama [Fama, 1965a, Fama, 1965b], whose theory stated that it is not possible for an investor to outperform the market because all available information is already built into all stock prices. If the EMH were true, it would not be possible to use machine learning techniques for market prediction.
Nevertheless, there are many successful technical analysts in the financial world, and a number of studies in the academic literature use machine learning techniques for market prediction [Choudhry and Garg, 2008]. One way to forecast the market movement is by analyzing special events of the market such as earnings announcements. An earnings announcement for each stock is an official public statement of a company\u2019s profitability for a specific time period, typically a quarter or a year. Each company has its specific earnings announcement dates. The stock price of a company is affected by the earnings announcement event. Equity analysts usually predict the earnings per share (EPS) prior to the announcement date. In this project, using machine-learning techniques, we want to predict whether a given stock will rise on the day following its earnings announcement. This leads to a binary classification problem, which can be tackled based on the huge amount of available public data. This project consists of two major tasks: data collection and application of machine learning algorithms. In \u00a72, we will discuss our efforts to collect the required data. In \u00a73, feature definitions and selection are described. We have considered and discussed different machine learning algorithms in \u00a74. In \u00a75, we summarize our results and explain possible extensions."} {"_id": "5cd8a2a85caaa1fdc48f67957db3ac5350e69127", "title": "Effects of early-life abuse differ across development: infant social behavior deficits are followed by adolescent depressive-like behaviors mediated by the amygdala.", "text": "Abuse during early life, especially from the caregiver, increases vulnerability to develop later-life psychopathologies such as depression. Although signs of depression are typically not expressed until later life, signs of dysfunctional social behavior have been found earlier. How infant abuse alters the trajectory of brain development to produce pathways to pathology is not completely understood. Here we address this question using two different but complementary rat models of early-life abuse from postnatal day 8 (P8) to P12: a naturalistic paradigm, where the mother is provided with insufficient bedding for nest building; and a more controlled paradigm, where infants undergo olfactory classical conditioning. Amygdala neural assessment (c-Fos), as well as social behavior and forced swim tests were performed at preweaning (P20) and adolescence (P45). Our results show that both models of early-life abuse induce deficits in social behavior, even during the preweaning period; however, depressive-like behaviors were observed only during adolescence. Adolescent depressive-like behavior corresponds with an increase in amygdala neural activity in response to forced swim test. A causal relationship between the amygdala and depressive-like behavior was suggested through amygdala temporary deactivation (muscimol infusions), which rescued the depressive-like behavior in the forced swim test. Our results indicate that social behavior deficits in infancy could serve as an early marker for later psychopathology.
Moreover, the implication of the amygdala in the ontogeny of depressive-like behaviors in animals abused as infants is an important step toward understanding the underlying mechanisms of later-life mental disease associated with early-life abuse."} {"_id": "8293ab4ce2ccf1fd64f816b274838fe19c569d40", "title": "Neural correlates of cognitive processing in monolinguals and bilinguals.", "text": "Here, we review the neural correlates of cognitive control associated with bilingualism. We demonstrate that lifelong practice managing two languages orchestrates global changes to both the structure and function of the brain. Compared with monolinguals, bilinguals generally show greater gray matter volume, especially in perceptual/motor regions, greater white matter integrity, and greater functional connectivity between gray matter regions. These changes complement electroencephalography findings showing that bilinguals devote neural resources earlier than monolinguals. Parallel functional findings emerge from the functional magnetic resonance imaging literature: bilinguals show reduced frontal activity, suggesting that they do not need to rely on top-down mechanisms to the same extent as monolinguals. This shift for bilinguals to rely more on subcortical/posterior regions, which we term the bilingual anterior-to-posterior and subcortical shift (BAPSS), fits with results from cognitive aging studies and helps to explain why bilinguals experience cognitive decline at later stages of development than monolinguals."} {"_id": "5f271829cb0fd59e97a2b1c5c1fb9fa9ab4973c2", "title": "Low-frequency Fourier analysis of speech rhythm.", "text": "A method for studying speech rhythm is presented, using Fourier analysis of the amplitude envelope of bandpass-filtered speech. Rather than quantifying rhythm with time-domain measurements of interval durations, a frequency-domain representation is used--the rhythm spectrum. This paper describes the method in detail, and discusses approaches to characterizing rhythm with low-frequency spectral information."} {"_id": "75acb0d2d776889fb6e2387b82140336839820cb", "title": "Simple sequence repeat markers that identify Claviceps species and strains", "text": "Claviceps purpurea is a pathogen that infects most members of Pooideae, a subfamily of Poaceae, and causes ergot, a floral disease in which the ovary is replaced with a sclerotium. When the ergot body is accidentally consumed by either man or animal in high enough quantities, there is extreme pain, limb loss and sometimes death. This study was initiated to develop simple sequence repeat (SSR) markers for rapid identification of C. purpurea. SSRs were designed from sequence data stored in the National Center for Biotechnology Information database. The study consisted of 74 ergot isolates from four different host species, Lolium perenne, Poa pratensis, Bromus inermis, and Secale cereale, plus three additional Claviceps species, C. pusilla, C. paspali and C. fusiformis. Samples were collected from six different counties in Oregon and Washington over a 5-year period. Thirty-four SSR markers were selected, which enabled the differentiation of each isolate from one another based solely on their molecular fingerprints. Discriminant analysis of principal components was used to identify four isolate groups, CA Group 1, 2, 3, and 4, for subsequent cluster and molecular variance analyses. CA Group 1, consisting of eight isolates from the host species P.
pratensis, was separated on the cluster analysis plot from the remaining three groups, and this group was later identified as C. humidiphila. The other three groups were distinct from one another, but closely related. These three groups contained samples from all four of the host species. These SSRs are simple to use, reliable and allowed clear differentiation of C. humidiphila from C. purpurea. Isolates from the three separate species, C. pusilla, C. paspali and C. fusiformis, also amplified with these markers. The SSR markers developed in this study will be helpful in defining the population structure and genetics of Claviceps strains. They will also provide valuable tools for plant breeders needing to identify resistance in crops or for researchers examining fungal movements across environments."} {"_id": "0bb71e91b29cf9739c0e1334f905baad01b663e6", "title": "Lifetime-Aware Scheduling and Power Control for M2M Communications in LTE Networks", "text": "In this paper the scheduling and transmit power control are investigated to minimize the energy consumption for battery-driven devices deployed in LTE networks. To enable efficient scheduling for a massive number of machine-type subscribers, a novel distributed scheme is proposed to let machine nodes form local clusters and communicate with the base-station through the cluster-heads. Then, uplink scheduling and power control in LTE networks are introduced and lifetime-aware solutions are investigated to be used for the communication between cluster-heads and the base-station. Besides the exact solutions, low-complexity suboptimal solutions are presented in this work, which can achieve near-optimal performance with much lower computational complexity. The performance evaluation shows that the network lifetime is significantly extended using the proposed protocols."} {"_id": "e66877f07bdbf5ac394880f0ad630117e63803a9", "title": "Rumor detection for Persian Tweets", "text": "Nowadays, the striking growth of online social media has led to easier and faster spreading of rumors in cyberspace, in addition to traditional ways. In this paper, rumor detection in the Persian Twitter community is addressed for the first time by exploring and analyzing the significance of two categories of rumor features: structural and content-based features. While applying both feature sets leads to precision of more than 80%, precision of around 70% has been obtained using only structural features. Moreover, we show how features of users who tend to produce and spread rumors are effective in the rumor detection process. The experiments have also led to the learning of a language model of the collected rumors. Finally, all features have been ranked and the most discriminative ones have been discussed."} {"_id": "6dc4be33a07c277ee68d42c151b4ee866108281f", "title": "Covalsa: Covariance estimation from compressive measurements using alternating minimization", "text": "The estimation of covariance matrices from compressive measurements has recently attracted considerable research efforts in various fields of science and engineering. Owing to the small number of observations, the estimation of the covariance matrices is a severely ill-posed problem. This can be overcome by exploiting prior information about the structure of the covariance matrix.
This paper presents a class of convex formulations and respective solutions to the high-dimensional covariance matrix estimation problem under compressive measurements, imposing either Toeplitz, sparseness, null-pattern, low rank, or low permuted rank structure on the solution, in addition to positive semi-definiteness. To solve the optimization problems, we introduce the Co-Variance by Augmented Lagrangian Shrinkage Algorithm (CoVALSA), which is an instance of the Split Augmented Lagrangian Shrinkage Algorithm (SALSA). We illustrate the effectiveness of our approach in comparison with state-of-the-art algorithms."} {"_id": "4d2074c1b6ee5562f815e489c5db366d2ec0d894", "title": "Control Strategy to Naturally Balance Hybrid Converter for Variable-Speed Medium-Voltage Drive Applications", "text": "In this paper, a novel control strategy to naturally balance a hybrid converter is presented for medium-voltage induction motor drive applications. Fundamental current of the motor is produced by a three-level quasi-square-wave converter. Several single-phase full-bridge converters are connected in series with this quasi-square-wave converter to eliminate harmonic voltages from the net output voltage of this hybrid converter. Various novel modulation strategies for these series cells are proposed. These lead to a naturally regulated dc-bus voltage of series cells. These also optimize the number of series cells required. Variable fundamental voltage and frequency control of induction motor is adopted with a constant dc-bus voltage of the three-level quasi-square-wave converter. Experimental results are also produced here to validate the proposed modulation strategies for induction motor drives using a low-voltage protomodel. Based on the aforementioned, a 2.2-kV induction motor drive has been built, and results are also produced in this paper."} {"_id": "b56d1c7cef034003e469701d215394db3d8cc675", "title": "Improvement of Intrusion Detection System in Data Mining using Neural Network", "text": "Many researchers have argued that Artificial Neural Networks (ANNs) can improve the performance of intrusion detection systems (IDS). One of the central areas in network intrusion detection is how to build effective systems that are able to distinguish normal from intrusive traffic. In this paper, four different algorithms are used, namely Multilayer Perceptron, Radial Basis Function, Logistic Regression and Voted Perceptron. All these neural-based algorithms are implemented in the WEKA data mining tool to evaluate their performance. For the experimental work, the NSL-KDD dataset is used. Each neural-based algorithm is tested on this dataset. It is shown that the Multilayer Perceptron neural network algorithm provides more accurate results than the other algorithms."} {"_id": "aa5a3009c83b497918127f6696557658d142b706", "title": "Valid items for screening dysphagia risk in patients with stroke: a systematic review.", "text": "BACKGROUND AND PURPOSE\nScreening for dysphagia is essential to the implementation of preventive therapies for patients with stroke. A systematic review was undertaken to determine the evidence-based validity of dysphagia screening items using instrumental evaluation as the reference standard.\n\n\nMETHODS\nFour databases from 1985 through March 2011 were searched using the terms cerebrovascular disease, stroke deglutition disorders, and dysphagia.
Eligibility criteria were: homogeneous stroke population, comparison to instrumental examination, clinical examination without equipment, outcome measures of dysphagia or aspiration, and validity of screening items reported or able to be calculated. Articles meeting inclusion criteria were evaluated for methodological rigor. Sensitivity, specificity, and predictive capabilities were calculated for each item.\n\n\nRESULTS\nTotal source documents numbered 832; 86 were reviewed in full and 16 met inclusion criteria. Study quality was variable. Testing swallowing, generally with water, was the most commonly administered item across studies. Both swallowing and nonswallowing items were identified as predictive of aspiration. Neither swallowing protocols nor validity were consistent across studies.\n\n\nCONCLUSIONS\nNumerous behaviors were found to be associated with aspiration. The best combination of nonswallowing and swallowing items as well as the best swallowing protocol remains unclear. Findings of this review will assist in development of valid clinical screening instruments."} {"_id": "082e75207b76185cdd901b18ec09b1a5b694922a", "title": "Distributed Subgradient Methods for Multi-Agent Optimization", "text": "We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy."} {"_id": "02b8a4d6e4df5d2a5db57786bb047ed08105336f", "title": "Monte-Carlo Localization for Mobile Wireless Sensor Networks", "text": "Localization is crucial to many applications in wireless sensor networks. In this article, we propose a range-free anchor-based localization algorithm for mobile wireless sensor networks that builds upon the Monte Carlo Localization algorithm. We concentrate on improving the localization accuracy and efficiency by making better use of the information a sensor node gathers and by drawing the necessary location samples faster. To do so, we constrain the area from which samples are drawn by building a box that covers the region where anchors\u2019 radio ranges overlap. This box is the region of the deployment area where the sensor node is localized. Simulation results show that localization accuracy is improved by a minimum of 4% and by a maximum of 73%, on average 30%, for varying node speeds when considering nodes with knowledge of at least three anchors. The coverage is also strongly affected by speed and its improvement ranges from 3% to 55%, on average 22%. Finally, the processing time is reduced by 93% for a similar localization accuracy."} {"_id": "cfce4a3e3a626d6e0d9a155706d995f5d406d5a0", "title": "On the economic significance of ransomware campaigns: A Bitcoin transactions perspective", "text": "Bitcoin cryptocurrency system enables users to transact securely and pseudo-anonymously by using an arbitrary number of aliases (Bitcoin addresses). 
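As an aside on the address-linking step that studies of this kind typically start from: a minimal sketch of the standard multi-input ownership heuristic (addresses co-spent as inputs of one transaction are merged into one cluster), implemented with union-find. The transaction format here is hypothetical, and this is not the paper's full framework.

```python
class DisjointSet:
    """Union-find over Bitcoin addresses."""
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_addresses(transactions):
    """transactions: iterable of dicts with an 'inputs' list of addresses."""
    ds = DisjointSet()
    for tx in transactions:
        first, *rest = tx["inputs"]
        ds.find(first)                # register even single-input spends
        for addr in rest:
            ds.union(first, addr)     # co-spent inputs imply one owner
    clusters = {}
    for addr in ds.parent:
        clusters.setdefault(ds.find(addr), set()).add(addr)
    return list(clusters.values())

txs = [{"inputs": ["A", "B"]}, {"inputs": ["B", "C"]}, {"inputs": ["D"]}]
print(cluster_addresses(txs))   # e.g. [{'A', 'B', 'C'}, {'D'}]
```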
Cybercriminals exploit these characteristics to commit immutable and presumably untraceable monetary fraud, especially via ransomware; a type of malware that encrypts files of the infected system and demands ransom for decryption. In this paper, we present our comprehensive study on all recent ransomware and report the economic impact of such ransomware from the Bitcoin payment perspective. We also present a lightweight framework to identify, collect, and analyze Bitcoin addresses managed by the same user or group of users (cybercriminals, in this case), which includes a novel approach for classifying a payment as ransom. To verify the correctness of our framework, we compared our findings on CryptoLocker ransomware with the results presented in the literature. Our results align with the results found in the previous works except for the final valuation in USD. The reason for this discrepancy is that we used the average Bitcoin price on the day of each ransom payment whereas the authors of the previous studies used the Bitcoin price on the day of their evaluation. Furthermore, for each investigated ransomware, we provide a holistic view of its genesis, development, the process of infection and execution, and characteristics of ransom demands. Finally, we also release our dataset that contains a detailed transaction history of all the Bitcoin addresses we identified for each ransomware."} {"_id": "ff6c257dbac99b39e1e2fb3f5c0154268e9f1022", "title": "Localized Multiple Kernel Learning With Dynamical Clustering and Matrix Regularization", "text": "Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features with regard to their discriminative power for each individual sample. However, the learning of numerous local solutions may not scale well even for a moderately sized training set, and the independently learned local models may suffer from overfitting. Hence, in existing local methods, the distributed samples are typically assumed to share the same weights, and various unsupervised clustering methods are applied as preprocessing. In this paper, to enable the learner to discover and benefit from the underlying local coherence and diversity of the samples, we incorporate the clustering procedure into the canonical support vector machine-based LMKL framework. Then, to explore the relatedness among different samples, which has been ignored in a vector $\\ell_{p}$-norm analysis, we organize the cluster-specific kernel weights into a matrix and introduce a matrix-based extension of the $\\ell_{p}$-norm for constraint enforcement. By casting the joint optimization problem as a problem of alternating optimization, we show how the cluster structure is gradually revealed and how the matrix-regularized kernel weights are obtained. A theoretical analysis of such a regularizer is performed using a Rademacher complexity bound, and complementary empirical experiments on real-world data sets demonstrate the effectiveness of our technique."} {"_id": "f93c86a40e38465762397b66bea99ceaec6fdf94", "title": "Optimal Execution with Nonlinear Impact Functions and Trading-Enhanced Risk", "text": "We determine optimal trading strategies for liquidation of a large single-asset portfolio to minimize a combination of volatility risk and market impact costs. We take the market impact cost per share to be a power law function of the trading rate, with an arbitrary positive exponent. 
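As a quick numerical illustration of such a power-law impact model (the parameter values below are made up, not calibrated to the paper):

```python
def impact_cost_per_share(v, eta=0.1, alpha=0.5):
    """Temporary impact per share for trading rate v: g(v) = eta * v**alpha.
    alpha = 0.5 reproduces the square-root law; alpha = 1 is the linear model."""
    return eta * v ** alpha

# Selling X shares at a constant rate v takes X/v time units and incurs
# X * g(v) in impact: doubling the rate raises the per-share cost by 2**alpha.
X, v = 1_000_000, 50_000
print(impact_cost_per_share(v), X * impact_cost_per_share(v))
```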
This includes, for example, the square-root law that has been proposed based on market microstructure theory. In analogy to the linear model, we define a \u201ccharacteristic time\u201d for optimal trading, which now depends on the initial portfolio size and decreases as execution proceeds. We also consider a model in which uncertainty of the realized price is increased by demanding rapid execution; we show that optimal trajectories are described by a \u201ccritical portfolio size\u201d above which this effect is dominant and below which it may be neglected."} {"_id": "4ce54221d9c2fc3ecf38aedc3eecefee53e568b8", "title": "Mining Offensive Language on Social Media", "text": "English. The present research deals with the automatic annotation and classification of vulgar and offensive speech on social media. In this paper we test the effectiveness of the computational treatment of taboo content shared on the web; the output is a corpus of 31,749 Facebook comments that has been automatically annotated through a lexicon-based method for the automatic identification and classification of taboo expressions. Italian (translated). This research addresses the automatic annotation and classification of vulgar and offensive content expressed on social media. The aim of our work is to test the effectiveness of the computational treatment of taboo content shared online. The output we provide is a corpus of 31,749 comments generated by Facebook users, annotated automatically through a lexicon-based method for the identification and classification of taboo expressions."} {"_id": "234f11713077aa09179533a1f37c075662e25b0f", "title": "Incremental Algorithms for Hierarchical Classification", "text": "We study the problem of hierarchical classification when labels corresponding to partial and/or multiple paths in the underlying taxonomy are allowed. We introduce a new hierarchical loss function, the H-loss, implementing the simple intuition that additional mistakes in the subtree of a mistaken class should not be charged for. Based on a probabilistic data model introduced in earlier work, we derive the Bayes-optimal classifier for the H-loss. We then empirically compare two incremental approximations of the Bayes-optimal classifier with a flat SVM classifier and with classifiers obtained by using hierarchical versions of the Perceptron and SVM algorithms. The experiments show that our simplest incremental approximation of the Bayes-optimal classifier performs, after just one training epoch, nearly as well as the hierarchical SVM classifier (which performs best). For the same incremental algorithm we also derive an H-loss bound showing, when data are generated by our probabilistic data model, exponentially fast convergence to the H-loss of the hierarchical classifier based on the true model parameters."} {"_id": "2acbd7681ac7c06cae1541b8925f95294bc4dc45", "title": "An integrated framework for optimizing automatic monitoring systems in large IT infrastructures", "text": "The competitive business climate and the complexity of IT environments dictate efficient and cost-effective service delivery and support of IT services. These are largely achieved by automating routine maintenance procedures, including problem detection, determination and resolution. System monitoring provides an effective and reliable means for problem detection. 
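The thresholded, delay-filtered style of check described next can be sketched in a few lines; the function name and the numbers below are illustrative, not the paper's framework:

```python
def should_open_ticket(samples, threshold, delay):
    """samples: list of (timestamp_seconds, value). Open a ticket only when
    the metric stays above threshold for at least `delay` seconds, so that
    transient spikes do not become false positive tickets."""
    breach_start = None
    for t, value in samples:
        if value > threshold:
            if breach_start is None:
                breach_start = t
            if t - breach_start >= delay:
                return True
        else:
            breach_start = None
    return False

cpu = [(0, 45), (60, 97), (120, 98), (180, 99)]
print(should_open_ticket(cpu, threshold=95, delay=120))  # True: breach held 120 s
```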
Coupled with automated ticket creation, it ensures that a degradation of the vital signs, defined by acceptable thresholds or monitoring conditions, is flagged as a problem candidate and sent to supporting personnel as an incident ticket. This paper describes an integrated framework for minimizing false positive tickets and maximizing the monitoring coverage for system faults.\n In particular, the integrated framework defines monitoring conditions and the optimal corresponding delay times based on an off-line analysis of historical alerts and incident tickets. Potential monitoring conditions are built on a set of predictive rules which are automatically generated by a rule-based learning algorithm with coverage, confidence and rule complexity criteria. These conditions and delay times are propagated as configurations into run-time monitoring systems. Moreover, a part of misconfigured monitoring conditions can be corrected according to false negative tickets that are discovered by another text classification algorithm in this framework. This paper also provides implementation details of a program product that uses this framework and shows some illustrative examples of successful results."} {"_id": "05357314fe2da7c2248b03d89b7ab9e358cbf01e", "title": "Learning with kernels", "text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher."} {"_id": "06d0a9697a0f0242dbdeeff08ec5266b74bfe457", "title": "Fast Exact Inference with a Factored Model for Natural Language Parsing", "text": "We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance already close to that of similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which makes efficient, exact inference feasible."} {"_id": "094d9601e6f6c45579647e20b5f7b0eeb4e2819f", "title": "Large margin hierarchical classification", "text": "We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree which induces a metric over the label set. Our approach combines ideas from large margin kernel methods and Bayesian analysis. Following the large margin principle, we associate a prototype with each label in the tree and formulate the learning task as an optimization problem with varying margin constraints. In the spirit of Bayesian methods, we impose similarity requirements between the prototypes corresponding to adjacent labels in the hierarchy. We describe new online and batch algorithms for solving the constrained optimization problem. We derive a worst case loss-bound for the online algorithm and provide generalization analysis for its batch counterpart. We demonstrate the merits of our approach with a series of experiments on synthetic, text and speech data."} {"_id": "7083489c4750898ceae641e8e3854e4440cadb61", "title": "A ZigBee-wireless wearable remote physiological monitoring system", "text": "Wearable health monitoring systems use integrated sensors to monitor vital physiological parameters of the wearer. 
The paper discusses the preliminary results of a prototype wearable physiological monitoring system to monitor physiological parameters such as Electrocardiogram (ECG), Heart Rate (HR), Electroencephalogram (EEG) and Body Temperature. The ECG and EEG signals are acquired using textile electrodes integrated into the fabric of the wearer. The acquired ECG and EEG signals are processed to remove noise, the parameters are extracted, and trend analysis is carried out. The physiological parameters are monitored at the remote monitoring station using ZigBee wireless communication."} {"_id": "8da0771e47c32405c4877cedce3ff84ac7390646", "title": "A survey of datasets for visual tracking", "text": "For 15 years now, visual tracking has been a very active research area of the computer vision community, and an increasing number of works can be observed over the last five years. This has led to the development of numerous algorithms that can deal with more and more complex video sequences. Each of them has its own strengths and weaknesses. That is the reason why it has become necessary to compare these algorithms. For this purpose, some datasets dedicated to visual tracking as well as, sometimes, their ground truth annotation files are regularly made publicly available by researchers. However, each dataset has its own specificities and is sometimes dedicated to testing the ability of some algorithms to tackle only one or a few specific visual tracking subproblems. This article provides an overview of some of the datasets that are most used by the visual tracking community, but also of others that address specific tasks. We also propose a cartography of these datasets from a novel perspective, namely that of the difficulties the datasets present for visual tracking."} {"_id": "8e0df9abe037f370809e19045a114ee406902139", "title": "DC Voltage Balancing for PWM Cascaded H-Bridge Converter Based STATCOM", "text": "This paper presents a new control method for cascaded connected H-bridge converter based STATCOMs. These converters have been classically commutated at fundamental line-frequencies, but the evolution of power semiconductors has made it possible to increase switching frequencies and the power ratings of these devices, permitting the use of PWM modulation techniques. This paper mainly focuses on the DC bus voltage balancing problems, and proposes a new control technique that solves these balancing problems while maintaining the delivered reactive power equally distributed among all the H-bridges of the converter"} {"_id": "682af5b9069369aae4e463807dcfcc96bce26a83", "title": "Emotion recognition from text using semantic labels and separable mixture models", "text": "This study presents a novel approach to automatic emotion recognition from text. First, emotion generation rules (EGRs) are manually deduced from psychology to represent the conditions for generating emotion. Based on the EGRs, the emotional state of each sentence can be represented as a sequence of semantic labels (SLs) and attributes (ATTs); SLs are defined as the domain-independent features, while ATTs are domain-dependent. The emotion association rules (EARs) represented by SLs and ATTs for each emotion are automatically derived from the sentences in an emotional text corpus using the Apriori algorithm. Finally, a separable mixture model (SMM) is adopted to estimate the similarity between an input sentence and the EARs of each emotional state. 
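To make the rule-matching idea concrete, here is a toy scoring function in this spirit; it is a crude stand-in for the separable-mixture similarity estimation, and all rule contents are invented:

```python
def emotion_scores(sentence_labels, ears):
    """ears: {emotion: list of association rules, each a set of semantic
    labels}. Score each emotion by the best overlap between the sentence's
    labels and any rule of that emotion -- a crude stand-in for the
    mixture-model similarity estimation."""
    s = set(sentence_labels)
    return {emo: max(len(s & rule) / len(rule) for rule in rules)
            for emo, rules in ears.items()}

ears = {"happy": [{"praise", "gift"}, {"success"}],
        "unhappy": [{"loss"}, {"failure", "exam"}],
        "neutral": [{"statement"}]}
print(emotion_scores(["exam", "failure", "tired"], ears))  # 'unhappy' scores 1.0
```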
Since some features defined in this approach are domain-dependent, a dialog system focusing on the students' daily expressions is constructed, and only three emotional states, happy, unhappy, and neutral, are considered for performance evaluation. According to the results of the experiments, given the domain corpus, the proposed approach is promising, and easily ported into other domains."} {"_id": "45717c656ad11586ae1654d3961dbc762035b291", "title": "Project Aura: Toward Distraction-Free Pervasive Computing", "text": "As the effects of Moore's law cause computing systems to become cheaper and more plentiful, a new problem arises: increasingly, the bottleneck in computing is not its disk capacity, processor speed, or communication bandwidth, but rather the limited resource of human attention. Human attention refers to a user's ability to attend to his or her primary tasks, ignoring system-generated distractions such as poor performance and failures. By exploiting plentiful computing resources to reduce user distraction, Project Aura is creating a system whose effectiveness is considerably greater than that of other systems today. Aura is specifically intended for pervasive computing environments involving wireless communication, wearable or handheld computers, and smart spaces. Human attention is an especially scarce resource in such environments, because the user is often preoccupied with walking, driving, or other real-world interactions. In addition, mobile computing poses difficult challenges such as intermittent and variable-bandwidth connectivity, concern for battery life, and the client resource constraints that weight and size considerations impose. To accomplish its ambitious goals, research in Aura spans every system level: from the hardware, through the operating system, to applications and end users. Underlying this diversity of concerns, Aura applies two broad concepts. First, it uses proactivity, which is a system layer's ability to anticipate requests from a higher layer. In today's systems, each layer merely reacts to the layer above it. Second, Aura is self-tuning: layers adapt by observing the demands made on them and adjusting their performance and resource usage characteristics accordingly. Currently, system-layer behavior is relatively static. Both of these techniques will help lower demand for human attention. To illustrate the kind of world we are trying to create, we present two hypothetical Aura scenarios. Although these might seem far-fetched today, they represent the kind of scenarios we expect to make commonplace through our research. In the first scenario, Jane is at Gate 23 in the Pittsburgh airport, waiting for her connecting flight. She has edited many large documents and would like to use her wireless connection to email them. Unfortunately, bandwidth is miserable because many passengers at Gates 22 and 23 are surfing the Web. Aura observes that, at the current bandwidth, Jane won't be able to finish sending her documents before her flight departs. Consulting the airport's wireless network bandwidth service and flight schedule service, Aura discovers that wireless bandwidth is excellent at Gate 15, and that there are \u2026"} {"_id": "39320211191e5e61a5547723256f04618a0b229c", "title": "Potassium and sodium transporters of Pseudomonas aeruginosa regulate virulence to barley", "text": "We investigated the role of three uncharacterized cation transporters of Pseudomonas aeruginosa PAO1 as virulence factors for barley: PA1207, PA5021, and PA2647. 
PAO1 displayed reduced barley virulence with inactivated PA1207, PA5021, and PA2647 as well as with one known Na+/H+ antiporter, PA1820. Using the Escherichia coli LB2003 mutant lacking three K+ uptake systems, the expression of the PA5021 gene repressed LB2003 growth with low K+, but the strain acquired tolerance to high K+. In contrast, the expression of the PA1207 gene enhanced growth of LB2003 with low K+ but repressed its growth with high K+; therefore, the PA5021 protein exports K+, while the PA1207 protein imports K+. The PA5021 mutant of P. aeruginosa also showed impaired growth at 400 mM KCl and at 400 mM NaCl; therefore, the PA5021 protein may also export Na+. The loss of the PA5021 protein also decreased production of the virulence factor pyocyanin; corroborating this result, pyocyanin production decreased in wild-type PAO1 under high salinity. Whole-genome transcriptome analysis showed that PAO1 induced more genes in barley upon infection compared to the PA5021 mutant. Additionally, PAO1 infection induced water stress-related genes in barley, which suggests that barley may undergo water deficit upon infection by this pathogen."} {"_id": "8f76334bd276a2b92bd79203774f292318f42dc6", "title": "Design of broadband circularly polarized horn antenna using an L-shaped probe", "text": "This paper deals with a circular horn antenna fed by an L-shaped probe. The design process for broadband matching to a 50 \u03a9 coaxial cable, and the antenna performance in axial ratio and gain, are presented. The simulation results of this paper were obtained using Ansoft HFSS 9.2"} {"_id": "66078dcc6053a2b314942f738048ee0359726cb5", "title": "COMPUTATION OF CONDITIONAL PROBABILITY STATISTICS BY 8-MONTH-OLD INFANTS", "text": "Abstract\u2014 A recent report demonstrated that 8-month-olds can segment a continuous stream of speech syllables, containing no acoustic or prosodic cues to word boundaries, into wordlike units after only 2 min of listening experience (Saffran, Aslin, & Newport, 1996). Thus, a powerful learning mechanism capable of extracting statistical information from fluent speech is available early in development. The present study extends these results by documenting the particular type of statistical computation\u2014transitional (conditional) probability\u2014used by infants to solve this word-segmentation task. An artificial language corpus, consisting of a continuous stream of trisyllabic nonsense words, was presented to 8-month-olds for 3 min. A postfamiliarization test compared the infants' responses to words versus part-words (trisyllabic sequences spanning word boundaries). The corpus was constructed so that test words and part-words were matched in frequency, but differed in their transitional probabilities. Infants showed reliable discrimination of words from part-words, thereby demonstrating rapid segmentation of continuous speech into words on the basis of transitional probabilities of syllable pairs. Many aspects of the patterns of human languages are signaled in the speech stream by what is called distributional evidence, that is, regularities in the relative positions and order of elements over a corpus of utterances (Bloomfield, 1933; Maratsos & Chalkley, 1980). This type of evidence, along with linguistic theories about the characteristics of human languages, is what comparative linguists use to discover the structure of exotic languages (Harris, 1951). 
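The statistic attributed to the infants above is simple to compute: P(Y | X) = freq(XY) / freq(X) over adjacent syllable pairs. A minimal sketch, using hypothetical trisyllabic nonsense words in the style of the familiarization corpus:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(Y | X) = freq(XY) / freq(X) over adjacent syllable pairs."""
    pair_freq = Counter(zip(syllables, syllables[1:]))
    first_freq = Counter(syllables[:-1])
    return {(x, y): c / first_freq[x] for (x, y), c in pair_freq.items()}

# Invented trisyllabic nonsense words concatenated in random order, with no
# acoustic cues to word boundaries -- only the statistics remain.
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "tu"]]
random.seed(1)
stream = [s for _ in range(300) for s in random.choice(words)]
tp = transitional_probabilities(stream)
print(tp[("bi", "da")])   # 1.0: word-internal transition
print(tp[("ku", "pa")])   # ~0.33: transition spanning a word boundary
```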
Similarly, this type of evidence, along with tendencies to perform certain kinds of analyses on language input (Chomsky, 1957), could be used by human language learners to acquire their native languages. However, using such evidence would require rather complex distributional and statistical computations, and surprisingly little is known about the abilities of human infants and young children to perform these computations. By using the term computation, we do not mean, of course, that infants are consciously performing a mathematical calculation, but rather that they might be sensitive to and able to store quantitative aspects of distributional information about a language corpus. Recently, we have begun studying this problem by investigating the abilities of human learners to use statistical information to discover the words of a language. Words are known to vary dramatically from one language to another, so finding the words of a language is clearly a problem that must involve learning from the linguistic environment. Moreover, the beginnings and ends of the sequences of sounds that form words in a \u2026"} {"_id": "bb290a97b0d7312ea25870f145ef9262926d9be4", "title": "A deep learning-based multi-sensor data fusion method for degradation monitoring of ball screws", "text": "As a ball screw has a complex structure and a long range of distribution, a single signal collected by one sensor can hardly express its condition fully and accurately. Multi-sensor data fusion usually performs better than a single signal. Multi-sensor data fusion based on a neural network (BP) is a commonly used fusion method, but its application is limited by the local optimum problem. Aiming at this problem, a multi-sensor data fusion method based on deep learning is proposed for ball screws in this paper. Deep learning, which consists of unsupervised learning and supervised learning, is the development and evolution of the traditional neural network, and it can effectively alleviate the optimization difficulty. In the proposed deep learning-based multi-sensor data fusion method, parallel superposition is directly performed on the frequency spectra of the signals, and deep belief networks (DBN) are established using the fused data to adaptively mine available fault characteristics and automatically identify the degradation condition of the ball screw. A test is designed to collect vibration signals of a ball screw in 7 different degradation conditions, using 5 acceleration sensors installed at different places. The proposed fusion method is applied to identifying the degradation degree of the ball screw in the test to demonstrate its efficacy. Finally, multi-sensor data fusion based on a neural network is also applied to degradation degree monitoring. The monitoring accuracy of deep learning-based multi-sensor data fusion is higher than that of neural network-based multi-sensor data fusion, which demonstrates the superiority of the proposed method."} {"_id": "a8bec3694c8cc6885c88d88d103f807ef115e6cc", "title": "Dual methods for nonconvex spectrum optimization of multicarrier systems", "text": "The design and optimization of multicarrier communications systems often involve a maximization of the total throughput subject to system resource constraints. The optimization problem is numerically difficult to solve when the problem does not have a convexity structure. 
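To illustrate the dual-domain approach discussed next in its simplest (convex, single-user) form — a sketch, not the paper's multiuser algorithm; the gains and budget are made up:

```python
import numpy as np

def dual_power_allocation(gains, P_budget, lam_lo=1e-6, lam_hi=10.0, iters=50):
    """Per-carrier dual decomposition: for a price lam on power, each tone n
    independently maximizes log2(1 + g_n * p_n) - lam * p_n, which has the
    water-filling solution p_n = max(0, 1/(lam*ln 2) - 1/g_n). Bisect on lam
    until total power meets the budget."""
    def alloc(lam):
        return np.maximum(0.0, 1.0 / (lam * np.log(2)) - 1.0 / gains)
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        if alloc(lam).sum() > P_budget:
            lam_lo = lam          # power too high -> raise the price
        else:
            lam_hi = lam
    return alloc(0.5 * (lam_lo + lam_hi))

gains = np.array([2.0, 0.8, 0.1, 1.5])   # per-carrier channel gains
print(dual_power_allocation(gains, P_budget=4.0))
```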
This paper makes progress toward solving optimization problems of this type by showing that under a certain condition called the time-sharing condition, the duality gap of the optimization problem is always zero, regardless of the convexity of the objective function. Further, we show that the time-sharing condition is satisfied for practical multiuser spectrum optimization problems in multicarrier systems in the limit as the number of carriers goes to infinity. This result leads to efficient numerical algorithms that solve the nonconvex problem in the dual domain. We show that the recently proposed optimal spectrum balancing algorithm for digital subscriber lines can be interpreted as a dual algorithm. This new interpretation gives rise to more efficient dual update methods. It also suggests ways in which the dual objective may be evaluated approximately, further improving the numerical efficiency of the algorithm. We propose a low-complexity iterative spectrum balancing algorithm based on these ideas, and show that the new algorithm achieves near-optimal performance in many practical situations"} {"_id": "e9073b8b285afdd3713fa59ead836571060b9e73", "title": "A Comprehensive Framework for Evaluation in Design Science Research", "text": "Evaluation is a central and essential activity in conducting rigorous Design Science Research (DSR), yet there is surprisingly little guidance about designing the DSR evaluation activity beyond suggesting possible methods that could be used for evaluation. This paper extends the notable exception of the existing framework of Pries-Heje et al [11] to address this problem. The paper proposes an extended DSR evaluation framework together with a DSR evaluation design method that can guide DSR researchers in choosing an appropriate strategy for evaluation of the design artifacts and design theories that form the output from DSR. The extended DSR evaluation framework asks the DSR researcher to consider (as input to the choice of the DSR evaluation strategy) contextual factors of goals, conditions, and constraints on the DSR evaluation, e.g. the type and level of desired rigor, the type of artifact, the need to support formative development of the designed artifacts, the properties of the artifact to be evaluated, and the constraints on resources available, such as time, labor, facilities, expertise, and access to research subjects. The framework and method support matching these in the first instance to one or more DSR evaluation strategies, including the choice of ex ante (prior to artifact construction) versus ex post evaluation (after artifact construction) and naturalistic (e.g., field setting) versus artificial evaluation (e.g., laboratory setting). Based on the recommended evaluation strategy(ies), guidance is provided concerning what methodologies might be appropriate within the chosen strategy(ies)."} {"_id": "06138684cc86aba86b496017a7bd410f96ab18dd", "title": "Clustering to Find Exemplar Terms for Keyphrase Extraction", "text": "Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. 
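A minimal sketch of the clustering-for-exemplars idea that the following sentences describe, using k-means on hypothetical term vectors (this is not the authors' exact method):

```python
import numpy as np
from sklearn.cluster import KMeans

def exemplar_terms(term_vectors, terms, n_clusters=3):
    """Cluster candidate terms in an embedding space and keep, per cluster,
    the term closest to the centroid as an exemplar of that topic."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(term_vectors)
    exemplars = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(term_vectors[idx] - km.cluster_centers_[c], axis=1)
        exemplars.append(terms[idx[np.argmin(dists)]])
    return exemplars

rng = np.random.default_rng(0)
vecs = rng.standard_normal((30, 16))   # stand-in for term co-occurrence vectors
terms = [f"term{i}" for i in range(30)]
print(exemplar_terms(vecs, terms))
```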
Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees that the document is semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms state-of-the-art graph-based ranking methods (TextRank) by 9.5% in F1-measure."} {"_id": "6d69f9b44c3316250d328b6ac08d1d22bc426f00", "title": "TextRay: Mining Clinical Reports to Gain a Broad Understanding of Chest X-Rays", "text": "The chest X-ray (CXR) is by far the most commonly performed radiological examination for screening and diagnosis of many cardiac and pulmonary diseases. There is an immense world-wide shortage of physicians capable of providing rapid and accurate interpretation of this study. A radiologist-driven analysis of over two million CXR reports generated an ontology including the 40 most prevalent pathologies on CXR. By manually tagging a relatively small set of sentences, we were able to construct a training set of 959k studies. A deep learning model was trained to predict the findings given the patient's frontal and lateral scans. For 12 of the findings we compare the model performance against a team of radiologists and show that in most cases the radiologists agree on average more with the algorithm than with each other."} {"_id": "893cd8c2d739eacfb79f9217d018d0c4cfbb8d98", "title": "Fast bayesian compressive sensing using Laplace priors", "text": "In this paper we model the components of the compressive sensing (CS) problem using the Bayesian framework by utilizing a hierarchical form of the Laplace prior to model sparsity of the unknown signal. This signal prior includes some of the existing models as special cases and achieves a high degree of sparsity. We develop a constructive (greedy) algorithm resulting from this formulation where necessary parameters are estimated solely from the observation and therefore no user-intervention is needed. We provide experimental results with synthetic 1D signals and images, and compare with the state-of-the-art CS reconstruction algorithms demonstrating the superior performance of the proposed approach."} {"_id": "3269b6ccf3ed63bb7c1fa744c20377474ff23760", "title": "Structure, culture and Simmelian ties in entrepreneurial firms", "text": "This article develops a cultural agreement approach to organizational culture that emphasizes how clusters of individuals reinforce potentially idiosyncratic understandings of many aspects of culture including the structure of network relations. Building on recent work concerning Simmelian tied dyads (defined as dyads embedded in three-person cliques), the research examines perceptions concerning advice and friendship relations in three entrepreneurial firms. The results support the idea that Simmelian tied dyads (relative to dyads in general) reach higher agreement concerning who is tied to whom, and who are embedded together in triads in organizations."} {"_id": "717a68a5a2eca79a95c1e05bd595f9c10efdeafa", "title": "Thinking like a nurse: a research-based model of clinical judgment in nursing.", "text": "This article reviews the growing body of research on clinical judgment in nursing and presents an alternative model of clinical judgment based on these studies. 
Based on a review of nearly 200 studies, five conclusions can be drawn: (1) Clinical judgments are more influenced by what nurses bring to the situation than the objective data about the situation at hand; (2) Sound clinical judgment rests to some degree on knowing the patient and his or her typical pattern of responses, as well as an engagement with the patient and his or her concerns; (3) Clinical judgments are influenced by the context in which the situation occurs and the culture of the nursing care unit; (4) Nurses use a variety of reasoning patterns alone or in combination; and (5) Reflection on practice is often triggered by a breakdown in clinical judgment and is critical for the development of clinical knowledge and improvement in clinical reasoning. A model based on these general conclusions emphasizes the role of nurses' background, the context of the situation, and nurses' relationship with their patients as central to what nurses notice and how they interpret findings, respond, and reflect on their response."} {"_id": "300810453b6d300077e4ac4b16f271ba5abd7310", "title": "Efficient Sparse Coding in Early Sensory Processing: Lessons from Signal Recovery", "text": "Sensory representations are not only sparse, but often overcomplete: coding units significantly outnumber the input units. For models of neural coding this overcompleteness poses a computational challenge for shaping the signal processing channels as well as for using the large and sparse representations in an efficient way. We argue that higher level overcompleteness becomes computationally tractable by imposing sparsity on synaptic activity and we also show that such structural sparsity can be facilitated by statistics based decomposition of the stimuli into typical and atypical parts prior to sparse coding. Typical parts represent large-scale correlations, thus they can be significantly compressed. Atypical parts, on the other hand, represent local features and are the subjects of actual sparse coding. When applied on natural images, our decomposition based sparse coding model can efficiently form overcomplete codes and both center-surround and oriented filters are obtained similar to those observed in the retina and the primary visual cortex, respectively. Therefore we hypothesize that the proposed computational architecture can be seen as a coherent functional model of the first stages of sensory coding in early vision."} {"_id": "3d5586cc57eadcc5220e6a316236a8b474500d41", "title": "The Security Development Lifecycle in the Context of Accreditation Policies and Standards", "text": "The proposed security development lifecycle (SecDLC) model delivers a perpetual cycle of information security management and refinement. Using real-world examples, the authors show how SecDLC ensures the goals of preserving, monitoring, and improving security practices, policies, and standards in private and public sectors. 
The authors describe the four phases of SecDLC, comparing and contrasting them to existing security development models."} {"_id": "9c6afcd7cb409fd3130465474ce0911fdb99200f", "title": "Upgrading Lignocellulosic Biomasses: Hydrogenolysis of Platform Derived Molecules Promoted by Heterogeneous Pd-Fe Catalysts", "text": "This review provides an overview of heterogeneous bimetallic Pd-Fe catalysts in the C\u2013C and C\u2013O cleavage of platform molecules such as C2\u2013C6 polyols, furfural, phenol derivatives and aromatic ethers that are all easily obtainable from renewable cellulose, hemicellulose and lignin (the major components of lignocellulosic biomasses). The interaction between palladium and iron affords bimetallic Pd-Fe sites (ensemble or alloy) that were found to be very active in several sustainable reactions, including hydrogenolysis, catalytic transfer hydrogenolysis (CTH) and aqueous phase reforming (APR), which will be highlighted. This contribution also concentrates on the different synthetic strategies (incipient wetness impregnation, deposition-precipitation, co-precipitation) adopted for the preparation of heterogeneous Pd-Fe systems as well as on the main characterization techniques used (XRD, TEM, H2-TPR, XPS and EXAFS) in order to elucidate the key factors that influence the unique catalytic performances observed."} {"_id": "57db04988af0b65c217eaf3271afc40927d1c72f", "title": "Proposed embedded security framework for Internet of Things (IoT)", "text": "IoT is going to be an established part of life by extending communication and networking anytime, anywhere. Security requirements for IoT will certainly underline the importance of properly formulated, implemented, and enforced security policies throughout their life-cycle. This paper gives a detailed survey and analysis of embedded security, especially in the area of IoT. Together with conventional security solutions, the paper highlights the need to provide in-built security in the device itself to provide a flexible infrastructure for dynamic prevention, detection, diagnosis, isolation, and countermeasures against successful breaches. Based on this survey and analysis, the paper defines the security needs, taking into account computational time, energy consumption and memory requirements of the devices. Finally, this paper proposes an embedded security framework as a feature of software/hardware co-design methodology."} {"_id": "db52d3520f9ac17d20bd6195e03f4a650c923fba", "title": "A New Modular Bipolar High-Voltage Pulse Generator", "text": "Adapting power-electronic converters for pulsed-power applications has attracted considerable attention in recent years. In this paper, a modular bipolar high-voltage pulse generator is proposed, based on the voltage-multiplier and half-bridge converter concepts and using power electronic switches. This circuit is capable of generating repetitive high-voltage bipolar pulses with a flexible output pattern, using a low-voltage power supply and components. The proposed topology was simulated in MATLAB/Simulink. To verify the circuit operation, a four-stage laboratory prototype has been assembled and tested. 
The results confirm the theoretical analysis and show the validity of the converter scheme."} {"_id": "4fd69173cabb3d4377432d70488938ac533a5ac3", "title": "JOINT IMAGE SHARPENING AND DENOISING BY 3D TRANSFORM-DOMAIN COLLABORATIVE FILTERING", "text": "In order to simultaneously sharpen image details and attenuate noise, we propose to combine the recent block-matching and 3D filtering (BM3D) denoising approach, based on 3D transform-domain collaborative filtering, with alpha-rooting, a transform-domain sharpening technique. The BM3D exploits grouping of similar image blocks into 3D arrays (groups) on which collaborative filtering (by hard-thresholding) is applied. We propose two approaches of sharpening by alpha-rooting; the first applies alpha-rooting individually on the 2D transform spectra of each grouped block; the second applies alpha-rooting on the 3D-transform spectra of each 3D array in order to sharpen fine image details shared by all grouped blocks and further enhance the interblock differences. The conducted experiments with the proposed method show that it can preserve and sharpen fine image details and effectively attenuate noise."} {"_id": "41c987b8a7e916d56fed2ea7311397e0f2286f3b", "title": "ACIQ: Analytical Clipping for Integer Quantization of neural networks", "text": "Unlike traditional approaches that focus on the quantization at the network level, in this work we propose to minimize the quantization effect at the tensor level. We analyze the trade-off between quantization noise and clipping distortion in low precision networks. We identify the statistics of various tensors, and derive exact expressions for the mean-square-error degradation due to clipping. By optimizing these expressions, we show marked improvements over standard quantization schemes that normally avoid clipping. For example, just by choosing the accurate clipping values, more than 40% accuracy improvement is obtained for the quantization of VGG16-BN to 4-bits of precision. Our results have many applications for the quantization of neural networks at both training and inference time. One immediate application is for a rapid deployment of neural networks to low-precision accelerators without time-consuming fine tuning or the availability of the full datasets."} {"_id": "3b01dce44b93c96e9bdf6ba1c0ffc44303eebc10", "title": "Latent fault detection in large scale services", "text": "Unexpected machine failures, with their resulting service outages and data loss, pose challenges to datacenter management. Existing failure detection techniques rely on domain knowledge, precious (often unavailable) training data, textual console logs, or intrusive service modifications. We hypothesize that many machine failures are not a result of abrupt changes but rather a result of a long period of degraded performance. This is confirmed in our experiments, in which over 20% of machine failures were preceded by such latent faults. We propose a proactive approach for failure prevention. We present a novel framework for statistical latent fault detection using only ordinary machine counters collected as standard practice. We demonstrate three detection methods within this framework. Derived tests are domain-independent and unsupervised, require neither background information nor tuning, and scale to very large services. 
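One simple unsupervised test in this spirit — not the authors' exact statistics — compares each machine's standardized counter vector against the rest of a homogeneous fleet; all names and numbers below are illustrative:

```python
import numpy as np

def latent_fault_scores(counters):
    """counters: array of shape (machines, counters), one snapshot of a
    homogeneous service. Score each machine by the distance of its
    robustly standardized counter vector from the population median;
    healthy machines doing the same job should look statistically alike."""
    med = np.median(counters, axis=0)
    mad = np.median(np.abs(counters - med), axis=0) + 1e-9   # robust scale
    z = (counters - med) / mad
    return np.linalg.norm(z, axis=1)

rng = np.random.default_rng(42)
fleet = rng.normal(100.0, 5.0, size=(50, 8))
fleet[7] += 12.0        # machine 7 drifts slowly: a latent fault candidate
scores = latent_fault_scores(fleet)
print(scores.argmax())  # 7
```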
We prove strong guarantees on the false positive rates of our tests."} {"_id": "6e8e4a5bec184eb1d1df1276005c9ffd3fbfab25", "title": "An Optimal Auction Mechanism for Mobile Edge Caching", "text": "With the explosive growth of wireless data, mobile edge caching has emerged as a promising paradigm to support mobile traffic recently, in which the service providers (SPs) prefetch some popular contents in advance and cache them locally at the network edge. When requested, those locally cached contents can be directly delivered to users with low latency, thus alleviating the traffic load over backhaul channels during peak hours and enhancing the quality-of-experience (QoE) of users simultaneously. Due to the limited available cache space, it makes sense for the SP to cache the most profitable contents. Nevertheless, users' true valuations of contents are their private knowledge, which is unknown to the SP in general. This information asymmetry poses a significant challenge for effective caching at the SP side. Further, the cached contents can be delivered with different quality, which needs to be chosen judiciously to balance delivery costs and user satisfaction. To tackle these difficulties, in this paper, we propose an optimal auction mechanism from the perspective of the SP. In the auction, the SP determines the cache space allocation over contents and user payments based on the users' (possibly untruthful) reports of their valuations so that the SP's expected revenue is maximized. The advocated mechanism is designed to elicit true valuations from the users (incentive compatibility) and to incentivize user participation (individual rationality). In addition, we devise a computationally efficient method for calculating the optimal cache space allocation and user payments. We further examine the optimal choice of the content delivery quality for the case with a large number of users and derive a closed-form solution to compute the optimal delivery quality. Finally, extensive simulations are implemented to evaluate the performance of the proposed optimal auction mechanism, and the impact of various model parameters is highlighted to obtain engineering insights into the content caching problem."} {"_id": "1bde4205a9f1395390c451a37f9014c8bea32a8a", "title": "3D object recognition in range images using visibility context", "text": "Recognizing and localizing queried objects in range images plays an important role for robotic manipulation and navigation. Even though it has been steadily studied, it is still a challenging task for scenes with occlusion and clutter."} {"_id": "1e8f46aeed1a96554a2d759d7ca194e1f9c22de1", "title": "Go-ICP: Solving 3D Registration Efficiently and Globally Optimally", "text": "Registration is a fundamental task in computer vision. The Iterative Closest Point (ICP) algorithm is one of the widely-used methods for solving the registration problem. Based on local iteration, ICP is however well-known to suffer from local minima. Its performance critically relies on the quality of initialization, and only local optimality is guaranteed. This paper provides the very first globally optimal solution to Euclidean registration of two 3D point sets or two 3D surfaces under the L2 error. Our method is built upon ICP, but combines it with a branch-and-bound (BnB) scheme which searches the 3D motion space SE(3) efficiently. By exploiting the special structure of the underlying geometry, we derive novel upper and lower bounds for the ICP error function. 
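For reference, the local ICP building block that the BnB search wraps can be sketched in a few lines — a minimal point-to-point variant with the closed-form SVD (Kabsch) alignment, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: alternate nearest-neighbor matching with
    the closed-form (SVD) rigid alignment of the matched pairs. Converges
    only locally -- the motivation for the global BnB search around it."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        cur = src @ R.T + t
        _, idx = tree.query(cur)
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                  # proper rotation only
        R, t = R_step @ R, R_step @ t + mu_d - R_step @ mu_s
    return R, t

# Example: align a slightly rotated copy of a random cloud back onto it.
rng = np.random.default_rng(0)
dst = rng.standard_normal((200, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T
R, t = icp(src, dst)
```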
The integration of local ICP and global BnB enables the new method to run efficiently in practice, and its optimality is exactly guaranteed. We also discuss extensions, addressing the issue of outlier robustness."} {"_id": "242caa8e04b73f56a8d4adae36028cc176364540", "title": "Voting-based pose estimation for robotic assembly using a 3D sensor", "text": "We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are fast replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, which are points on the object surface with normals, in a voting framework enables fast and robust pose estimation. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects that are mostly planar. As edges play the key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly and thereby provide higher accuracy for a wide class of industrial parts and enable faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor."} {"_id": "02a808de5aa34685955fd1473433161edd20fd80", "title": "Surface reconstruction from unorganized points", "text": "We describe and demonstrate an algorithm that takes as input an unorganized set of points $\\{x_1, \\ldots, x_n\\} \\subset \\mathbb{R}^3$ on or near an unknown manifold M, and produces as output a simplicial surface that approximates M. Neither the topology, the presence of boundaries, nor the geometry of M are assumed to be known in advance \u2014 all are inferred automatically from the data. This problem naturally arises in a variety of practical situations such as range scanning an object from multiple viewpoints, recovery of biological shapes from two-dimensional slices, and interactive surface sketching."} {"_id": "1eca34e935b5e0f172ff5557697317f32b646f5e", "title": "Registration and Integration of Textured 3-D Data", "text": "In general, multiple views are required to create a complete 3-D model of an object or of a multi-roomed indoor scene. In this work, we address the problem of merging multiple textured 3-D data sets, each of which corresponds to a different view of a scene or object. There are two steps to the merging process: registration and integration. To register, or align, data sets we use a modified version of the Iterative Closest Point algorithm; our version, which we call color ICP, considers not only 3-D information, but color as well. We show experimentally that the use of color decreases registration error significantly. Once the 3-D data sets have been registered, we integrate them to produce a seamless, composite 3-D textured model. Our approach to integration uses a 3-D occupancy grid to represent likelihood of spatial occupancy through voting. 
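A toy version of such a voting-based integration step, including the per-voxel normal accumulation described next, can be sketched as follows (the grid resolution and support threshold are invented):

```python
import numpy as np
from collections import defaultdict

def integrate_views(views, voxel=0.05, min_votes=3):
    """views: list of (points Nx3, normals Nx3) already registered into a
    common frame. Each point votes for occupancy of its voxel; normals are
    accumulated so a surface can later be extracted from the voxels with
    enough support."""
    votes = defaultdict(lambda: [0, np.zeros(3)])
    for pts, nrm in views:
        for p, n in zip(pts, nrm):
            key = tuple(np.floor(p / voxel).astype(int))
            votes[key][0] += 1
            votes[key][1] += n
    # Keep well-supported voxels; average normal indicates surface direction.
    return {k: v[1] / np.linalg.norm(v[1]) for k, v in votes.items()
            if v[0] >= min_votes and np.linalg.norm(v[1]) > 0}
```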
In addition to occupancy information, we store the surface normal in each voxel of the occupancy grid. The surface normal is used to robustly extract a surface from the occupancy grid; on that surface we blend textures from multiple views."} {"_id": "ff2a96303a285c946c989369666503ededff35be", "title": "Neem and other Botanical insecticides: Barriers to commercialization", "text": "In spite of the wide recognition that many plants possess insecticidal properties, only a handful of pest control products directly obtained from plants, i.e., botanical insecticides, are in use in developed countries. The demonstrated efficacy of the botanical neem (based on seed kernel extracts of Azadirachta indica), and its recent approval for use in the United States, has stimulated research and development of other botanical insecticides. However, the commercialization of new botanical insecticides can be hindered by a number of issues. The principal barriers to commercialization of new botanicals are (i) scarcity of the natural resource; (ii) standardization and quality control; and (iii) registration. These issues are no problem (i) or considerably less of a problem (ii, iii) with conventional insecticides. In this review I discuss these issues and suggest how the problems may be overcome in the future."} {"_id": "f0d99d9594fe3b5da915218614ef9f31f3bc4835", "title": "Trajectory learning from human demonstrations via manifold mapping", "text": "This work proposes a framework that enables arbitrary robots with unknown kinematics models to imitate human demonstrations to acquire a skill, and reproduce it in real-time. The diversity of robots active in non-laboratory environments is growing constantly, and to this end we present an approach for users to be able to easily teach a skill to a robot with any body configuration. Our proposed method requires a motion trajectory obtained from human demonstrations via a Kinect sensor, which is then projected onto a corresponding human skeleton model. The kinematics mapping between the robot and the human model is learned by employing Local Procrustes Analysis, which enables the transfer of the demonstrated trajectory from the human model to the robot. Finally, the transferred trajectory is modeled using Dynamic Movement Primitives, allowing it to be reproduced in real time. Experiments in simulation on a 4-degree-of-freedom robot show that our method is able to correctly imitate various skills demonstrated by a human."} {"_id": "41e5f9984ec71218416c54304bcb9a99c27b4938", "title": "A Chinese Text Corrector Based on Seq2Seq Model", "text": "In this paper, we build a Chinese text corrector which can correct spelling mistakes precisely in Chinese texts. Our motivation is inspired by the recently proposed seq2seq model, which considers text correction as a sequence learning problem. To begin with, we propose a biased-decoding method to improve the bilingual evaluation understudy (BLEU) score of our model. Secondly, we adopt a more reasonable OOV token scheme, which enhances the robustness of our correction mechanism. Moreover, to test the performance of our proposed model thoroughly, we establish a corpus which includes 600,000 sentences from news data of Sogou Labs. 
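As a sketch of what a copy-biased decoding step could look like for such a corrector — purely a guess at the mechanism, since the abstract only names it "biased decoding"; the logits, vocabulary and bias value are hypothetical:

```python
import numpy as np

def biased_greedy_decode(logits, src_ids, bias=2.0):
    """logits: (T, V) per-step scores from a seq2seq corrector; src_ids: the
    input sentence's token ids. Since most characters in a sentence are
    already correct, add a bias toward copying the aligned source token
    before taking the argmax, so the model only edits where it is confident."""
    out = []
    for t, step in enumerate(logits):
        step = step.copy()
        if t < len(src_ids):
            step[src_ids[t]] += bias   # favor keeping the source character
        out.append(int(step.argmax()))
    return out
```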
Experiments show that our model achieves better correction results on this corpus."} {"_id": "143c5c64ed76f9cbbd70309e071905862364aa4b", "title": "Cloud Log Forensics Metadata Analysis", "text": "The increase in the quantity and questionable quality of the forensic information retrieved from the current virtualized data cloud system architectures has made it extremely difficult for law enforcement to resolve criminal activities within these logical domains. This paper poses the question of what kind of information is desired from virtual machine (VM) hosted operating systems (OS) investigated by a cloud forensic examiner. The authors give an overview of the information that exists on current VM OS by looking at its kernel hypervisor logs and discuss the shortcomings. An examination of the role that the VM kernel hypervisor logs provide as OS metadata in cloud investigations is also presented."} {"_id": "3251fcb9cfb22beeb1ec13b746a5e5608590e977", "title": "A novel wideband slotted mm wave microstrip patch antenna", "text": "In this paper, a novel approach for the design of a compact broadband planar microstrip patch antenna is presented for millimetre wave wireless applications. This antenna structure was realized on a quartz crystal substrate of dielectric constant 3.8 with a thickness of 0.8 mm and a ground plane. In order to examine the performance of this antenna, it was designed at 30 GHz and simulated with the IE3D software package of Zeland. This type of antenna is composed of two slots of different dimensions and fed by a single coaxial feed. Appropriate design of the antenna structure resulted in three discontinuous resonant bands. The proposed antenna has a \u221210 dB return-loss impedance bandwidth of 10.3 GHz. The performance of this antenna was verified and characterised in terms of the antenna return loss, radiation pattern, surface current distribution and power gain. The numerical results also show that the proposed antenna has good impedance bandwidth and radiation characteristics over the entire operating bands. Details of the proposed antenna configurations and design procedures are given."} {"_id": "5df318e4aac5313124571ecc7e186cba9e84a264", "title": "Mobile Application Security in the Presence of Dynamic Code Updates", "text": "The increasing number of repeated malware penetrations into official mobile app markets poses a high security threat to the confidentiality and privacy of end users\u2019 personal and sensitive information. Protecting end user devices from falling victim to adversarial apps presents a technical and research challenge for security researchers/engineers in academia and industry. Despite the security practices and analysis checks deployed at app markets, malware sneaks through the defenses and infects user devices. The evolution of malware has seen it become sophisticated and dynamically changing software, usually disguised as a legitimate app. The use of highly advanced evasive techniques, such as encrypted code, obfuscation and dynamic code updates, is common practice in novel malware. With evasive usage of dynamic code updates, a malware sample posing as a benign app bypasses analysis checks and reveals its malicious functionality only when installed on a user\u2019s device. This dissertation provides a thorough study on the use and the usage manner of dynamic code updates in Android apps. 
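A trivial static pre-filter in this spirit — not StaDART; the API strings are real Android class names, everything else is illustrative — flags apps whose code references the class-loading or reflection APIs commonly used for dynamic code updates:

```python
SUSPICIOUS = ("dalvik.system.DexClassLoader",
              "dalvik.system.PathClassLoader",
              "java.lang.reflect.Method")

def flag_dynamic_code(smali_files):
    """smali_files: {path: decompiled text}. Report files that reference
    dynamic class loading or reflection; such hits only mark candidates for
    deeper (dynamic or hybrid) analysis, since string matching alone is
    easily evaded by obfuscation."""
    hits = {}
    for path, text in smali_files.items():
        found = [s for s in SUSPICIOUS if s in text]
        if found:
            hits[path] = found
    return hits
```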
Moreover, we propose a hybrid analysis approach, StaDART, that interleaves static and dynamic analysis to cover the inherent shortcomings of static analysis techniques when analyzing apps in the presence of dynamic code updates. Our evaluation results on real world apps demonstrate the effectiveness of StaDART. However, dynamic analysis, and for that matter hybrid analysis too, typically brings the problem of stimulating the app\u2019s behavior, which is a non-trivial challenge for automated analysis tools. To this end, we propose a backward slicing based targeted inter component code paths execution technique, TeICC. TeICC leverages a backward slicing mechanism to extract code paths starting from a target point in the app. It makes use of a system dependency graph to extract code paths that involve inter component communication. The extracted code paths are then instrumented and executed inside the app context to capture sensitive dynamic behavior, resolve dynamic code updates and obfuscation. Our evaluation of TeICC shows that it can be effectively used for targeted execution of inter component code paths in obfuscated Android apps. Also, since the possibility of adversaries reaching user devices still cannot be ruled out, we propose an on-phone API hooking"} {"_id": "22d2a23f9fcf70ae5b9b5effa02931d7267efd92", "title": "PCAS: Pruning Channels with Attention Statistics for Deep Network Compression", "text": "Compression techniques for deep neural networks are important for implementing them on small embedded devices. In particular, channel-pruning is a useful technique for realizing compact networks. However, many conventional methods require manual setting of compression ratios in each layer. It is difficult to analyze the relationships between all layers, especially for deeper models. To address these issues, we propose a simple channel-pruning technique based on attention statistics that enables evaluation of the importance of channels. We improved the method by means of a criterion for automatic channel selection, using a single compression ratio for the entire model in place of per-layer model analysis. The proposed approach achieved superior performance over conventional methods with respect to accuracy and computational cost for various models and datasets. We provide analysis results for the behavior of the proposed criterion on different datasets to demonstrate its favorable properties for channel pruning."} {"_id": "a111795bf08efda6cb504916e023ec43821f3a84", "title": "An automated domain specific stop word generation method for natural language text classification", "text": "We propose an automated method for generating domain specific stop words to improve classification of natural language content. We also implemented a Bayesian natural language classifier working on web pages, based on maximum a posteriori probability estimation of keyword distributions using a bag-of-words model, to test the generated stop words. We investigated the distribution of the stop-word lists generated by our model and compared their contents against a generic stop-word list for the English language. We also show that the document coverage rank and topic coverage rank of words belonging to natural language corpora follow Zipf's law, just as the word frequency rank is known to."} {"_id": "25af957ac7b643df9191507930d943ea22254549", "title": "Image-Dependent Gamut Mapping as Optimization Problem", "text": "We explore the potential of image-dependent gamut mapping as a constrained optimization problem.
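As a hedged illustration of the document-frequency idea in the stop-word abstract above: a minimal Python sketch, where the whitespace tokenizer and the 0.8 threshold are illustrative assumptions rather than the authors' actual selection criteria.

```python
# Hypothetical sketch of domain-specific stop-word generation by document
# frequency, in the spirit of the abstract above; the tokenizer and the
# 0.8 threshold are illustrative assumptions, not the authors' criteria.
from collections import Counter

def domain_stop_words(docs, df_threshold=0.8):
    """Return words whose document frequency is at least df_threshold."""
    doc_freq = Counter()
    for doc in docs:
        doc_freq.update(set(doc.lower().split()))  # count each word once per doc
    n_docs = len(docs)
    return {w for w, df in doc_freq.items() if df / n_docs >= df_threshold}

corpus = ["the patch antenna gain", "the slot antenna bandwidth", "the antenna feed"]
print(domain_stop_words(corpus))  # {'the', 'antenna'}
```

Words that appear in nearly every document of a domain carry little discriminative value for classification within that domain, which is why a simple document-frequency cutoff is a plausible stand-in here.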
The performance of our new approach is compared to standard reference gamut mapping algorithms in psycho-visual tests."} {"_id": "686b5783138e9adc75aec0f5832d69703eca41d9", "title": "Research synthesis in software engineering: A tertiary study", "text": "Context: Comparing and contrasting evidence from multiple studies is necessary to build knowledge and reach conclusions about the empirical support for a phenomenon. Therefore, research synthesis is at the center of the scientific enterprise in the software engineering discipline. Objective: The objective of this article is to contribute to a better understanding of the challenges in synthesizing software engineering research and their implications for the progress of research and practice. Method: A tertiary study of journal articles and full proceedings papers from the inception of evidence-based software engineering was performed to assess the types and methods of research synthesis in systematic reviews in software engineering. Results: As many as half of the 49 reviews included in the study did not contain any synthesis. Of the studies that did contain synthesis, two thirds performed a narrative or a thematic synthesis. Only a few studies adequately demonstrated a robust, academic approach to research synthesis. Conclusion: We concluded that, despite the focus on systematic reviews, there is limited attention paid to research synthesis in software engineering. This trend needs to change and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice."} {"_id": "f74e36bb7d96b3875531f451212ee3703b575c42", "title": "Vishit: A Visualizer for Hindi Text", "text": "We outline the design of a visualizer, named Vishit, for texts in the Hindi language. The Hindi language is a lingua franca in many states of India where people speak different languages. The visualized text serves as a universal language where seamless communication is needed by many people who speak different languages and have different cultures. Vishit consists of the following three major processing steps: language processing, knowledge base creation and scene generation. Initial results from the Vishit prototype are encouraging."} {"_id": "d44f2a7cc586ea1e1cdff14b7a91bdf49c2aef85", "title": "Software-Based Resolver-to-Digital Conversion Using a DSP", "text": "A simple and cost-effective software-based resolver-to-digital converter using a digital signal processor is presented. The proposed method incorporates software generation of the resolver carrier, using a digital filter for synchronous demodulation of the resolver outputs, in such a way that there are substantial savings on hardware, such as the costly carrier oscillator and the associated digital and analog amplitude-demodulation circuits. In addition, because the method does not cause any time delay, the dynamics of the servo control using the scheme are not affected. Furthermore, the method enables the determination of the angle for a complete 360deg shaft rotation with reasonable accuracy using a lookup table that contains entries of only up to 45deg.
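To make the 45deg lookup-table idea above concrete, here is a minimal sketch using octant folding over a small arctangent table; the table resolution, nearest-entry rounding, and overall structure are assumptions for illustration, not the paper's implementation.

```python
# Illustrative angle recovery from demodulated sine/cosine samples using
# only a 0-45 degree arctangent table, as hinted at in the resolver
# abstract above; table size and rounding are assumed choices.
import math

TABLE_SIZE = 512
ATAN_TABLE = [math.degrees(math.atan(i / TABLE_SIZE)) for i in range(TABLE_SIZE + 1)]

def shaft_angle(sin_v, cos_v):
    """Recover a 0-360 degree shaft angle from resolver sine/cosine values."""
    a_s, a_c = abs(sin_v), abs(cos_v)
    if a_s == 0.0 and a_c == 0.0:
        raise ValueError("degenerate resolver sample")
    ratio = a_s / a_c if a_s <= a_c else a_c / a_s  # keep ratio in [0, 1]
    base = ATAN_TABLE[round(ratio * TABLE_SIZE)]    # 0-45 degree lookup
    if a_s > a_c:
        base = 90.0 - base                          # fold 45-90 back onto table
    if sin_v >= 0:
        return base if cos_v >= 0 else 180.0 - base
    return 180.0 + base if cos_v < 0 else 360.0 - base

print(round(shaft_angle(math.sin(math.radians(210)), math.cos(math.radians(210)))))  # 210
```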
Computer simulations and experimental results demonstrate the effectiveness and applicability of the proposed scheme."} {"_id": "b98a96a4c381d85186e8becc85fb9fc114b13926", "title": "Learning to Detect Misleading Content on Twitter", "text": "The publication and spread of misleading content is a problem of increasing magnitude, complexity and consequences in a world that is increasingly relying on user-generated content for news sourcing. To this end, multimedia analysis techniques could serve as an assisting tool to spot and debunk misleading content on the Web. In this paper, we tackle the problem of misleading multimedia content detection on Twitter in real time. We propose a number of new features and a new semi-supervised learning event adaptation approach that helps generalize the detection capabilities of trained models to unseen content, even when the event of interest is of a different nature compared to that used for training. Combined with bagging, the proposed approach manages to outperform previous systems by a significant margin in terms of accuracy. Moreover, in order to communicate the verification process to end users, we develop a web-based application for visualizing the results."} {"_id": "7fb9a32853480bd6cb92d669b47013cb9ed5237c", "title": "Directional antennas in FANETs: A performance analysis of routing protocols", "text": "Flying Ad-hoc Networks (FANETs) [1] are mobile ad-hoc networks formed by small and medium-sized UAVs. Nowadays, most UAVs are equipped with omnidirectional antennas. In addition, most of the existing routing protocols were designed assuming the use of omnidirectional antennas. Directional antennas have the potential to increase spatial reuse, save the battery's energy, and substantially increase the transmission range. However, these benefits come with a few challenges. Existing directional MAC protocols deal with these challenges, mostly in static ad-hoc networks. We define DA-FANETs as FANETs where directional antennas are used. In this paper, we investigate the performance of existing routing protocols in DA-FANETs. First we implement an 802.11b-based directional MAC protocol that embodies common directional MAC protocol features. Then we evaluate the following routing protocols in DA-FANET scenarios: AODV, OLSR, RGR, and GRP. The results show that each antenna beamwidth has an optimal network size that yields the best routing performance. In addition, RGR is the best option for DA-FANETs while GRP is the worst. Although RGR shows the best performance of all, it still leaves a lot of room for improvement, with a PDR sitting at just 85% and a relatively high amount of overhead in terms of the number of transmissions performed."} {"_id": "fbd573bab9a6e1f9728b5f7c1d99b47b4b8aa0da", "title": "A new 6-DOF parallel robot with simple kinematic model", "text": "This paper presents a novel six-legged parallel robot, the kinematic model of which is simpler than that of the simplest six-axis serial robot. The new robot is the 6-DOF extension of the Cartesian parallel robot. It consists of three pairs of base-mounted prismatic actuators, with the directions in each pair parallel to one of the axes of a Cartesian coordinate system. In each of the six legs, there are also two passive revolute joints, the axes of which are parallel to the direction of the prismatic joint. Finally, each leg is attached to the mobile platform via a spherical joint.
The direct kinematics of the novel parallel robot can be solved easily by partitioning the orientation and the position of the mobile platform. There are eight distinct solutions, which can be found directly by solving a linear system and alternating the signs of three radicals. This parallel robot has a large workspace and is suitable for machining or rapid prototyping, as detailed in this paper."} {"_id": "7f657845ac8d66b15752e51bf133f3ff2b2b771e", "title": "Accurate Singular Values of Bidiagonal Matrices", "text": "Computing the singular values of a bidiagonal matrix is the final phase of the standard algorithm for the singular value decomposition of a general matrix. We present a new algorithm which computes all the singular values of a bidiagonal matrix to high relative accuracy independent of their magnitudes. In contrast, the standard algorithm for bidiagonal matrices may compute small singular values with no relative accuracy at all. Numerical experiments show that the new algorithm is comparable in speed to the standard algorithm, and frequently faster. The standard algorithm for computing the singular value decomposition (SVD) of a general real matrix A has two phases [7]: 1) Compute orthogonal matrices P_1 and Q_1 such that B = P_1^T A Q_1 is in bidiagonal form, i.e. has nonzero entries only on its diagonal and first superdiagonal. 2) Compute orthogonal matrices P_2 and Q_2 such that \u03a3 = P_2^T B Q_2 is diagonal and nonnegative. The diagonal entries \u03c3_i of \u03a3 are the singular values of A. We will take them to be sorted in decreasing order: \u03c3_i \u2265 \u03c3_{i+1}. The columns of Q = Q_1 Q_2 are the right singular vectors, and the columns of P = P_1 P_2 are the left singular vectors. This process takes O(n^3) operations, where n is the dimension of A. This is true even though Phase 2 is iterative, since it converges quickly in practice. The error analysis of this combined procedure has a widely accepted conclusion [8], and provided neither overflow nor underflow occurs may be summarized as follows: the computed singular values differ from the true singular values of A by no more than p(n) \u00b7 \u03b5 \u00b7 \u2016A\u2016, where \u2016A\u2016 = \u03c3_1 is the 2-norm of A, \u03b5 is the machine precision, and p(n) is a slowly growing function of the dimension n \u2026"} {"_id": "5ed4b57999d2a6c28c66341179e2888c9ca96a25", "title": "Learning Symbolic Models of Stochastic Domains", "text": "In this article, we work towards the goal of developing agents that can learn to act in complex worlds. We develop a probabilistic, relational planning rule representation that compactly models noisy, nondeterministic action effects, and show how such rules can be effectively learned. Through experiments in simple planning domains and a 3D simulated blocks world with realistic physics, we demonstrate that this learning algorithm allows agents to effectively model world dynamics."} {"_id": "410600034c3693dbb6830f21a7d682e662984f7e", "title": "A Rank Minimization Heuristic with Application to Minimum Order System Approximation", "text": "Several problems arising in control system analysis and design, such as reduced order controller synthesis, involve minimizing the rank of a matrix variable subject to linear matrix inequality (LMI) constraints. Except in some special cases, solving this rank minimization problem (globally) is very difficult.
One simple and surprisingly effective heuristic, applicable when the matrix variable is symmetric and positive semidefinite, is to minimize its trace in place of its rank. This results in a semidefinite program (SDP) which can be efficiently solved. In this paper we describe a generalization of the trace heuristic that applies to general nonsymmetric, even non-square, matrices, and reduces to the trace heuristic when the matrix is positive semidefinite. The heuristic is to replace the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm. We show that this problem can be reduced to an SDP, hence efficiently solved. To motivate the heuristic, we show that the dual spectral norm is the convex envelope of the rank on the set of matrices with norm less than one. We demonstrate the method on the problem of minimum order system approximation."} {"_id": "297e1b9c24fc6f5d481c52cdc2f305d45ecfddb2", "title": "StalemateBreaker: A Proactive Content-Introducing Approach to Automatic Human-Computer Conversation", "text": "Existing open-domain human-computer conversation systems are typically passive: they either synthesize or retrieve a reply provided a human-issued utterance. It is generally presumed that humans should take the role to lead the conversation and introduce new content when a stalemate occurs, and that the computer only needs to \u201crespond.\u201d In this paper, we propose STALEMATEBREAKER, a conversation system that can proactively introduce new content when appropriate. We design a pipeline to determine when, what, and how to introduce new content during human-computer conversation. We further propose a novel reranking algorithm BiPageRank-HITS to enable rich interaction between conversation context and candidate replies. Experiments show that both the content-introducing approach and the reranking algorithm are effective. Our full STALEMATEBREAKER model outperforms a state-of-the-practice conversation system by +14.4% p@1 when a stalemate occurs."} {"_id": "497bfd76187fb4c71c010ca3e3e65862ecf74e14", "title": "Ethical theory, codes of ethics and IS practice", "text": "Ethical issues, with respect to computer-based information systems, are important to the individual IS practitioner. These same issues also have an important impact on the moral well-being of organizations and societies. Considerable discussion has taken place in the Information Systems (IS) literature on specific ethical issues, but there is little published work which relates these issues to mainstream ethical theory. This paper describes a range of ethical theories drawn from the philosophy literature and uses these theories to critique aspects of the newly revised ACM Code of Ethics. Some problematic ethical issues in the IS field which are not resolved by the Code are then identified and discussed. The paper draws some implications and conclusions on the value of ethical theory and codes of practice, and on further work to develop existing ethical themes and to promote new initiatives in the ethical domain."} {"_id": "787da29b8e0226a7b451ffd7f5d17f3394c3e615", "title": "Dual-camera design for coded aperture snapshot spectral imaging.", "text": "Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited.
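As a hedged companion to the rank-minimization abstract above: a minimal sketch of the generalized heuristic, minimizing the nuclear norm (sum of singular values) in place of rank. It assumes cvxpy is available, and the entry-wise equality constraints are an illustrative stand-in for the paper's LMI setting.

```python
# Minimal sketch of the nuclear-norm heuristic from the rank-minimization
# abstract above: minimize the sum of singular values in place of rank.
# The observed-entry constraints are an illustrative stand-in, not the
# paper's LMI constraints; cvxpy availability is assumed.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 8))  # rank-3 target
W = (rng.random(M.shape) < 0.5).astype(float)                   # observed-entry mask

X = cp.Variable(M.shape)
# normNuc is the nuclear norm, the convex envelope of rank on the
# unit spectral-norm ball, which is why the heuristic is effective.
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), [cp.multiply(W, X) == W * M])
prob.solve()
print("recovered rank:", np.linalg.matrix_rank(X.value, tol=1e-6))
```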
In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method."} {"_id": "4822e607340828e9466cadd262c0e0286decbf64", "title": "Examining the Relationship Between Reviews and Sales: The Role of Reviewer Identity Disclosure in Electronic Markets", "text": "Consumer product reviews have proliferated online, driven by the notion that consumers\u2019 decision to purchase or not purchase a product is based on the positive or negative information about that product they obtain from fellow consumers. Using research on information processing as a foundation, we suggest that in the context of an online community, reviewer disclosure of identity-descriptive information is used by consumers to supplement or replace product information when making purchase decisions and evaluating the helpfulness of online reviews. Using a unique data set based on both chronologically compiled ratings as well as reviewer characteristics for a given set of products and geographical location-based purchasing behavior from Amazon, we provide evidence that community norms are an antecedent to reviewer disclosure of identity-descriptive information. Online community members rate reviews containing identity-descriptive information more positively, and the prevalence of reviewer disclosure of identity information is associated with increases in subsequent online product sales. In addition, we show that shared geographical location increases the relationship between disclosure and product sales, thus highlighting the important role of geography in electronic commerce. Taken together, our results suggest that identity-relevant information about reviewers shapes community members\u2019 judgment of products and reviews. Implications for research on the relationship between online word-of-mouth (WOM) and sales, peer recognition and reputation systems, and conformity to online community norms are discussed."} {"_id": "0f3178633b1512dc55cd1707eb539623d17e29d3", "title": "Examining CNN Representations With Respect to Dataset Bias", "text": "Given a pre-trained CNN without any testing samples, this paper proposes a simple yet effective method to diagnose feature representations of the CNN. We aim to discover representation flaws caused by potential dataset bias. More specifically, when the CNN is trained to estimate image attributes, we mine latent relationships between representations of different attributes inside the CNN. Then, we compare the mined attribute relationships with ground-truth attribute relationships to discover the CNN\u2019s blind spots and failure modes due to dataset bias. In fact, representation flaws caused by dataset bias cannot be examined by conventional evaluation strategies based on testing images, because testing images may also have a similar bias.
Experiments have demonstrated the effectiveness of our method."} {"_id": "feed8923642dc2e052213e5fcab6346819dc2fb2", "title": "A hybrid Fuzzy Logic Controller-Firefly Algorithm (FLC-FA) based for MPPT Photovoltaic (PV) system in solar car", "text": "This paper proposes the Firefly Algorithm (FA) as a new method for tuning the membership functions of a fuzzy logic controller for the Maximum Power Point Tracker (MPPT) system of a photovoltaic solar car, yielding a hybrid Fuzzy Logic Controller-Firefly Algorithm (FLC-FA) for the parameters. Among the many MPPT methods for photovoltaic (PV) systems, Perturbation and Observation (PnO), a standard fuzzy logic controller (FLC), and the hybrid fuzzy logic controller-firefly algorithm (FLC-FA) are compared in this paper. The proposed FLC-FA algorithm is intended to obtain the optimal solution for MPPT for photovoltaic (PV) systems for solar cars. In the results, the proposed FLC-FA generated the highest maximum power and efficiency: PnO = 96.31%, standard FLC = 99.88%, and the proposed FLC-FA = 99.98%, better than both the PnO and standard FLC methods. The main advantage of the proposed FLC-FA is that it is more efficient and accurate than the standard fuzzy logic controller."} {"_id": "224eb3407b50533668b6c1caa55a720688b8b532", "title": "A review and future direction of agile, business intelligence, analytics and data science", "text": "Agile methodologies were introduced in 2001. Since this time, practitioners have applied Agile methodologies to many delivery disciplines. This article explores the application of Agile methodologies and principles to business intelligence delivery and how Agile has changed with the evolution of business intelligence. Business intelligence has evolved because the amount of data generated through the internet and smart devices has grown exponentially, altering how organizations and individuals use information. The practice of business intelligence delivery with an Agile methodology has matured; however, business intelligence has evolved, altering the use of Agile principles and practices. The Big Data phenomenon, the volume, variety, and velocity of data, has impacted business intelligence and the use of information. New trends such as fast analytics and data science have emerged as part of business intelligence. This paper addresses how Agile principles and practices have evolved with business intelligence, as well as its challenges and future directions."} {"_id": "387ef15ce6de4a74b1a51f3694419b90d3d81fba", "title": "Ravenclaw: dialog management using hierarchical task decomposition and an expectation agenda", "text": "We describe RavenClaw, a new dialog management framework developed as a successor to the Agenda [1] architecture used in the CMU Communicator. RavenClaw introduces a clear separation between task and discourse behavior specification, and allows rapid development of dialog management components for spoken dialog systems operating in complex, goal-oriented domains. The system development effort is focused entirely on the specification of the dialog task, while a rich set of domain-independent conversational behaviors are transparently generated by the dialog engine. To date, RavenClaw has been applied to five different domains, allowing us to draw some preliminary conclusions as to the generality of the approach.
We briefly describe our experience in developing these systems."} {"_id": "07c095dc513b7ac94a9360fab68ec4d2572797e2", "title": "Stochastic Language Generation For Spoken Dialogue Systems", "text": "The two current approaches to language generation, template-based and rule-based (linguistic) NLG, have limitations when applied to spoken dialogue systems, in part because they were developed for text generation. In this paper, we propose a new corpus-based approach to natural language generation, specifically designed for spoken dialogue systems."} {"_id": "1347af2305933f77953c881a78c6029ab50ae460", "title": "Automatically clustering similar units for unit selection in speech synthesis", "text": "This paper describes a new method for synthesizing speech by concatenating sub-word units from a database of labelled speech. A large unit inventory is created by automatically clustering units of the same phone class based on their phonetic and prosodic context. The appropriate cluster is then selected for a target unit, offering a small set of candidate units. An optimal path is found through the candidate units based on their distance from the cluster center and an acoustically based join cost. Details of the method and justification are presented. The results of experiments using two different databases are given, optimising various parameters within the system. Also, a comparison with other existing selection-based synthesis techniques is given, showing the advantages this method has over existing ones. The method is implemented within a full text-to-speech system offering efficient natural sounding speech synthesis."} {"_id": "43d15ec7a3f7c26830541ea57f4af56b61983ca4", "title": "Creating natural dialogs in the carnegie mellon communicator system", "text": "The Carnegie Mellon Communicator system helps users create complex travel itineraries through a conversational interface. Itineraries consist of (multi-leg) flights, hotel and car reservations and are built from actual travel information for North America, obtained from the Web. The system manages dialog using a schema-based approach. Schemas correspond to major units of task information (such as a flight leg) and define conversational topics, or foci of interaction, meaningful to the user."} {"_id": "2cc0df76c91c52a497906abc1cdeda91edb2923c", "title": "Extraction of high confidence minutiae points from fingerprint images", "text": "The fingerprint is one of the well-accepted biometric traits for building automatic human recognition systems. These systems mainly involve matching two fingerprints based on their minutiae points. Therefore, extraction of reliable or true minutiae from the fingerprint image is critically important. This paper proposes a novel algorithm for automatic extraction of highly reliable minutiae points from a fingerprint image. It utilizes frequency domain image enhancement and heuristics. Experiments have been conducted on two databases: FVC2002, a publicly available fingerprint database containing 800 fingerprint images of 100 persons, and IITK-Sel500FP, an in-house database containing 1000 fingerprint images of 500 persons. Minutiae points extracted by the proposed algorithm are benchmarked against manually marked ones, and the comparison is done with mindtct, the open source minutiae extractor of NIST.
Experimental results have shown that the proposed algorithm provides significantly higher confidence that the extracted minutiae points are true."} {"_id": "5054139b40eb3036b7c04ef7dbc401782527dcfb", "title": "Cognitive effort drives workspace configuration of human brain functional networks.", "text": "Effortful cognitive performance is theoretically expected to depend on the formation of a global neuronal workspace. We tested specific predictions of workspace theory, using graph theoretical measures of network topology and physical distance of synchronization, in magnetoencephalographic data recorded from healthy adult volunteers (N = 13) during performance of a working memory task at several levels of difficulty. We found that greater cognitive effort caused emergence of a more globally efficient, less clustered, and less modular network configuration, with more long-distance synchronization between brain regions. This pattern of task-related workspace configuration was more salient in the \u03b2-band (16-32 Hz) and \u03b3-band (32-63 Hz) networks, compared with both lower (\u03b1-band; 8-16 Hz) and higher (high \u03b3-band; 63-125 Hz) frequency intervals. Workspace configuration of \u03b2-band networks was also greater in faster performing participants (with correct response latency less than the sample median) compared with slower performing participants. Processes of workspace formation and relaxation in relation to time-varying demands for cognitive effort could be visualized occurring in the course of task trials lasting <2 s. These experimental results provide support for workspace theory in terms of complex network metrics and directly demonstrate how cognitive effort breaks modularity to make human brain functional networks transiently adopt a more efficient but less economical configuration."} {"_id": "c5ae787fe9a647636da5384da552851a327159c0", "title": "Designing wireless transceiver blocks for LoRa application", "text": "The Internet of Things enables connected objects to communicate over distant fields, and trade-offs are often observed between battery life and range. Long Range Radio, or LoRa, provides transmission for up to 15 km with impressive battery life, while sacrificing throughput. In order to design the LoRa system, there is a need to design and implement the LoRa transceiver. 1 V, 900 MHz, LoRa-capable transceiver blocks were designed in a 65 nm process. The receiver front end consists of a narrowband low-noise amplifier, a common-gate/common-source balun, and a double-balanced mixer. The power consumption of this system is 2.63 mW. The overall gain of the receiver front end is approximately 51 dB. The transmitter implemented in this project utilizes a class-D amplifier alongside an inverter-chain driver stage and a transformer-based power combiner. The transmitter outputs 21.57 dBm of power from the given supply at 900 MHz, operating at 21% drain efficiency."} {"_id": "f4fdaaf864ca6f73ced06f937d3af978568998eb", "title": "Network Monitoring as a Streaming Analytics Problem", "text": "Programmable switches potentially make it easier to perform flexible network monitoring queries at line rate, and scalable stream processors make it possible to fuse data streams to answer more sophisticated queries about the network in real-time. However, processing such network monitoring queries at high traffic rates requires both the switches and the stream processors to filter the traffic iteratively and adaptively so as to extract only that traffic that is of interest to the query at hand.
While the realization that network monitoring is a streaming analytics problem has been made earlier, our main contribution in this paper is the design and implementation of Sonata, a closed-loop system that enables network operators to perform streaming analytics for network monitoring applications at scale. To achieve this objective, Sonata allows operators to express a network monitoring query by considering each packet as a tuple. More importantly, Sonata allows them to partition the query across both the switches and the stream processor, and through iterative refinement, Sonata's runtime attempts to extract only the traffic that pertains to the query, thus ensuring that the stream processor can scale to satisfy a large number of queries for traffic at very high rates. We show with a simple example query involving DNS reflection attacks and traffic traces from one of the world's largest IXPs that Sonata can capture 95% of all traffic pertaining to the query, while reducing the overall data rate by a factor of about 400 and the number of required counters by four orders of magnitude."} {"_id": "4f0bfa3fb030a6aa6dc84ac97052583e425a1521", "title": "Neural representations for object perception: structure, category, and adaptive coding.", "text": "Object perception is one of the most remarkable capacities of the primate brain. Owing to the large and indeterminate dimensionality of object space, the neural basis of object perception has been difficult to study and remains controversial. Recent work has provided a more precise picture of how 2D and 3D object structure is encoded in intermediate and higher-level visual cortices. Yet, other studies suggest that higher-level visual cortex represents categorical identity rather than structure. Furthermore, object responses are surprisingly adaptive to changes in environmental statistics, implying that learning through evolution, development, and also shorter-term experience during adulthood may optimize the object code. Future progress in reconciling these findings will depend on more effective sampling of the object domain and direct comparison of these competing hypotheses."} {"_id": "3e37eb2620e86bafdcab25f7086806ba1bac3404", "title": "A Wireless Headstage for Combined Optogenetics and Multichannel Electrophysiological Recording", "text": "This paper presents a wireless headstage with real-time spike detection and data compression for combined optogenetics and multichannel electrophysiological recording. The proposed headstage, which is intended to perform both optical stimulation and electrophysiological recordings simultaneously in freely moving transgenic rodents, is entirely built with commercial off-the-shelf components, and includes 32 recording channels and 32 optical stimulation channels. It can detect, compress and transmit full action potential waveforms over 32 channels in parallel and in real time using an embedded digital signal processor based on a low-power field programmable gate array and a Microblaze microprocessor softcore. Such a processor implements a complete digital spike detector featuring a novel adaptive threshold based on a Sigma-delta control loop, and a wavelet data compression module using a new dynamic coefficient re-quantization technique achieving large compression ratios with higher signal quality. 
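To ground the packet-as-tuple query model from the Sonata abstract above, here is a plain-Python sketch of a filter-then-count query for the DNS reflection example; the tuple field layout and the threshold are assumptions, and this is ordinary iterator code, not Sonata's actual dataflow API.

```python
# Generic sketch of a packet-as-tuple streaming query in the spirit of the
# Sonata abstract above; field layout and threshold are assumptions, and
# this plain Python stands in for a switch/stream-processor pipeline.
from collections import Counter

def dns_reflection_suspects(packets, threshold=100):
    """Flag destinations receiving an unusually high count of DNS responses."""
    counts = Counter()
    for src, dst, proto, sport, dport in packets:
        if proto == "udp" and sport == 53:   # filter stage: DNS responses only
            counts[dst] += 1                 # reduce stage: per-destination count
    return {ip for ip, n in counts.items() if n >= threshold}
```

In a Sonata-like partitioning, the cheap filter stage would run on the switches and only the surviving tuples would reach the stream processor, which is what makes the large data-rate reductions reported above plausible.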
Simultaneous optical stimulation and recording have been performed in vivo using an optrode featuring 8 microelectrodes and 1 implantable fiber coupled to a 465-nm LED, in the somatosensory cortex and the hippocampus of a transgenic mouse expressing Channelrhodopsin (Thy1::ChR2-YFP line 4) under anesthetized conditions. Experimental results show that the proposed headstage can trigger neuronal activity while collecting, detecting and compressing single cell microvolt amplitude activity from multiple channels in parallel while achieving overall compression ratios above 500. This is the first reported high-channel count wireless optogenetic device providing simultaneous optical stimulation and recording. Measured characteristics show that the proposed headstage can achieve a true-positive detection rate of up to 100% for signal-to-noise ratios (SNR) down to 15 dB, and up to 97.28% at an SNR as low as 5 dB. The implemented prototype features a lifespan of up to 105 minutes, and uses a lightweight (2.8 g) and compact $(17 \times 18 \times 10\ \text{mm}^{3})$ rigid-flex printed circuit board."} {"_id": "7780c59bcc110b9b1987d74ccab27c3e108ebf0d", "title": "Big Data and Transformational Government", "text": "The big data phenomenon is growing throughout private and public sector domains. Profit motives make it urgent for companies in the private sector to learn how to leverage big data. However, in the public sector, government services could also be greatly improved through the use of big data. Here, the authors describe some drivers, barriers, and best practices affecting the use of big data and associated analytics in the government domain. They present a model that illustrates how big data can result in transformational government through increased efficiency and effectiveness in the delivery of services. Their empirical basis for this model uses a case vignette from the US Department of Veterans Affairs, while the theoretical basis is a balanced view of big data that takes into account the continuous growth and use of such data. This article is part of a special issue on big data and business analytics."} {"_id": "b3870b319e32b0a2f687aa873d7935f0043f6aa5", "title": "An analysis and critique of Research through Design: towards a formalization of a research approach", "text": "The field of HCI is experiencing a growing interest in Research through Design (RtD), a research approach that employs methods and processes from design practice as a legitimate method of inquiry. We are interested in expanding and formalizing this research approach, and understanding how knowledge, or theory, is generated from this type of design research. We conducted interviews with 12 leading HCI design researchers, asking them about design research, design theory, and RtD specifically. They were easily able to identify different types of design research and design theory from contemporary and historical design research efforts, and believed that RtD might be one of the most important contributions of design researchers to the larger research community. We further examined three historical RtD projects that were repeatedly mentioned in the interviews, and performed a critique of current RtD practices within the HCI research and interaction design communities.
While our critique summarizes the problems, it also shows possible directions for further developments and refinements of the approach."} {"_id": "fcb58022e146719109af0b527352d9a313705828", "title": "SDN On-The-Go (OTG) physical testbed", "text": "An emerging field of research, Software Defined Networks (SDN) promises to change the landscape of traditional network topology and management. Researchers and early adopters alike need adequate SDN testing facilities for their experiments, but their options are limited. Industry is responding slowly with embedded support for SDN in enterprise-grade network hardware, but it is cost prohibitive for many test environments, with a single SDN switch costing thousands of dollars. There are a few emerging community SDN test networks that are fantastic for testing large topologies with production-grade traffic, but there is a cost associated with membership and some controlled experiments are difficult. A free and indispensable alternative to a dedicated hardware SDN is to use network emulation tools. These software tools are widely used and invaluable to SDN research. They provide an amazingly precise representation of physical network nodes and behavior but are inherently limited by their aggregation with other virtual devices on the same compute node. Some of our research requires a higher precision than software emulation can provide. Our solution is to build a low cost, portable, standalone SDN testbed. Called SDN On-The-Go (OTG), it is a complete, self-contained testbed that consists of four dedicated ZodiacFX SDN switches, four RaspberryPi3 hosts, a dedicated Kangaroo+ controller with 4GB RAM and a couple of routers to provide network isolation. The testbed supports many configurations for pseudo real-world SDN experiments that produce reliable and repeatable results. It can be used as a standalone research tool or as part of a larger network with production quality traffic. SDN OTG is designed to be used as a portable teaching device, moved from classroom to classroom or taken home for private research. We achieved repeatability an order of magnitude greater than emulation-based testing. Our SDN OTG physical testbed weighs only twenty pounds, costs about a thousand US dollars, provides repeatable, precise, time-sensitive data and can be set up as a fully functional SDN testbed in a matter of minutes."} {"_id": "eee5594f8e88e51f4fb86b43c9f4cb54d689f73c", "title": "Predicting and Improving Memory Retention : Psychological Theory Matters in the Big Data Era", "text": "Cognitive psychology has long had the aim of understanding mechanisms of human memory, with the expectation that such an understanding will yield practical techniques that support learning and retention. Although research insights have given rise to qualitative advice for students and educators, we present a complementary approach that offers quantitative, individualized guidance. Our approach synthesizes theory-driven and data-driven methodologies. Psychological theory characterizes basic mechanisms of human memory shared among members of a population, whereas machine-learning techniques use observations from a population to make inferences about individuals. We argue that despite the power of big data, psychological theory provides essential constraints on models. We present models of forgetting and spaced practice that predict the dynamic time-varying knowledge state of an individual student for specific material.
We incorporate these models into retrieval-practice software to assist students in reviewing previously mastered material. In an ambitious year-long intervention in a middle-school foreign language course, we demonstrate the value of systematic review on long-term educational outcomes, but more specifically, the value of adaptive review that leverages data from a population of learners to personalize recommendations based on an individual\u2019s study history and past performance."} {"_id": "28e28ba720462ced332bd649fcc6283fec9c134a", "title": "IoT survey: An SDN and fog computing perspective", "text": "Recently, there has been an increasing interest in the Internet of Things (IoT). While some analysts dismiss the IoT hype, several technology leaders, governments, and researchers are putting serious effort into developing solutions enabling wide IoT deployment. Thus, the huge amount of generated data, the high network scale, the security and privacy concerns, the new requirements in terms of QoS, and the heterogeneity in this ubiquitous network of networks make its implementation a very challenging task. SDN, a new networking paradigm, has revealed its usefulness in reducing the management complexities in today\u2019s networks. Additionally, SDN, having a global view of the network, has presented effective security solutions. On the other hand, fog computing, a new data service platform, consists of pushing the data to the network edge, reducing the cost (in terms of bandwidth consumption and high latency) of \u201cbig data\u201d transportation through the core network. In this paper, we critically review the SDN and fog computing-based solutions to overcome the main IoT challenges, highlighting their advantages and exposing their weaknesses. Thus, we make recommendations at the end of this paper for upcoming research work."} {"_id": "337cafd0035dee02fd62550bae301f796a553ac7", "title": "General Image-Quality Equation: GIQE.", "text": "A regression-based model was developed relating aerial image quality, expressed in terms of the National Imagery Interpretability Rating Scale (NIIRS), to fundamental image attributes. The General Image-Quality Equation (GIQE) treats three main attributes: scale, expressed as the ground sample distance; sharpness, measured from the system modulation transfer function; and the signal-to-noise ratio. The GIQE can be applied to any visible sensor and predicts NIIRS ratings with a standard error of 0.3 NIIRS. The image attributes treated by the GIQE are influenced by system design and operation parameters. The GIQE allows system designers and operators to perform trade-offs for the optimization of image quality."} {"_id": "93ade051f51f727ef745f2c417ed30eb08148e85", "title": "A Survey on Quality of Experience of HTTP Adaptive Streaming", "text": "Changing network conditions pose severe problems to video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services that relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. Thereby, it allows taking into account end user device capabilities, available video quality levels, current network conditions, and current server load.
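As a loose, hedged illustration of the review-scheduling idea in the memory-retention abstract above: a toy exponential-forgetting model used to pick items due for review. The half-life form, parameters, and threshold are assumptions for demonstration only, not the authors' actual model.

```python
# Toy forgetting-curve sketch motivated by the memory-retention abstract
# above; the exponential half-life model and the 0.7 threshold are
# illustrative assumptions only.
def recall_probability(elapsed_days, half_life_days):
    """Predicted recall decays by half every half_life_days."""
    return 2.0 ** (-elapsed_days / half_life_days)

def due_for_review(items, threshold=0.7):
    """Pick items whose predicted recall has decayed below the threshold."""
    return [it for it in items
            if recall_probability(it["days_since_review"], it["half_life"]) < threshold]

items = [{"name": "word-a", "days_since_review": 3, "half_life": 2.0},
         {"name": "word-b", "days_since_review": 1, "half_life": 5.0}]
print([it["name"] for it in due_for_review(items)])  # ['word-a']
```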
For end users, the major benefits of HAS compared to classical HTTP video streaming are reduced interruptions of the video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. The technical development of HAS, existing open standardized solutions, as well as proprietary solutions, are reviewed in this paper as fundamental to derive the QoE influence factors that emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE related works from human computer interaction and networking domains, which are structured according to the QoE impact of video adaptation. To be more precise, subjective studies that cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, but also open issues and conflicting results are discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed. This survey gives the reader an overview of the current state of the art and recent developments. At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user centric point of view. Therefore, this paper is a major step toward truly improving HAS."} {"_id": "fcdfdf3012bdd8feeaffc43f15bc976c11944ec3", "title": "A Framework for End-to-End Evaluation of 5G mmWave Cellular Networks in ns-3", "text": "The growing demand for ubiquitous mobile data services along with the scarcity of spectrum in the sub-6 GHz bands has given rise to the recent interest in developing wireless systems that can exploit the large amount of spectrum available in the millimeter wave (mmWave) frequency range. Due to its potential for multi-gigabit and ultra-low latency links, mmWave technology is expected to play a central role in 5th Generation (5G) cellular networks. Overcoming the poor radio propagation and sensitivity to blockages at higher frequencies presents major challenges, which is why much of the current research is focused at the physical layer. However, innovations will be required at all layers of the protocol stack to effectively utilize the large air link capacity and provide the end-to-end performance required by future networks.\n Discrete-event network simulation will be an invaluable tool for researchers to evaluate novel 5G protocols and systems from an end-to-end perspective. In this work, we present the first-of-its-kind, open-source framework for modeling mmWave cellular networks in the ns-3 simulator. Channel models are provided along with a configurable physical and MAC-layer implementation, which can be interfaced with the higher-layer protocols and core network model from the ns-3 LTE module to simulate end-to-end connectivity. The framework is demonstrated through several example simulations showing the performance of our custom mmWave stack."} {"_id": "b6a8927a47dce4daf240c70711358727cac371c5", "title": "Center Frequency and Bandwidth Controllable Microstrip Bandpass Filter Design Using Loop-Shaped Dual-Mode Resonator", "text": "A design approach for developing microwave bandpass filters (BPFs) with continuous control of the center frequency and bandwidth is presented.
The proposed approach exploits a simple loop-shaped dual-mode resonator that is tapped and perturbed with varactor diodes to realize center frequency tunability and passband reconfigurability. The even- and odd-mode resonances of the resonator can be predominantly controlled via the incorporated varactors, and the passband response reconfiguration is obtained with the proposed tunable external coupling mechanism, which resolves the return-loss degradation attributed to conventional fixed external coupling mechanisms. The demonstrated approach leads to a relatively simple synthesis of the microwave BPF with an up to 33% center frequency tuning range, an excellent bandwidth tuning capability, as well as a high filter response reconfigurability, including an all-reject response."} {"_id": "6123e52c1a560c88817d8720e05fbff8565271fb", "title": "Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification", "text": "Matching pedestrians across multiple camera views, known as human re-identification, is a challenging research problem that has numerous applications in visual surveillance. With the resurgence of Convolutional Neural Networks (CNNs), several end-to-end deep Siamese CNN architectures have been proposed for human re-identification with the objective of projecting the images of similar pairs (i.e. same identity) to be closer to each other and those of dissimilar pairs to be distant from each other. However, current networks extract fixed representations for each image regardless of other images which are paired with it and the comparison with other images is done only at the final level. In this setting, the network is at risk of failing to extract finer local patterns that may be essential to distinguish positive pairs from hard negative pairs. In this paper, we propose a gating function to selectively emphasize such fine common local patterns by comparing the mid-level features across pairs of images. This produces flexible representations for the same image according to the images they are paired with. We conduct experiments on the CUHK03, Market-1501 and VIPeR datasets and demonstrate improved performance compared to a baseline Siamese CNN architecture."} {"_id": "4469ff0b698d4752504b4b900b0cbef38ded59e4", "title": "Data association for multi-object Tracking-by-Detection in multi-camera networks", "text": "Multi-object tracking is still a challenging task in computer vision. We propose a robust approach to realize multi-object tracking using multi-camera networks. Detection algorithms are utilized to detect object regions with confidence scores for initialization of individual particle filters. Since data association is the key issue in the Tracking-by-Detection mechanism, we present an efficient greedy matching algorithm considering multiple judgments based on likelihood functions. Furthermore, tracking in single cameras is realized by a greedy matching method. Afterwards, 3D geometry positions are obtained from the triangulation relationship between cameras. Corresponding objects are tracked in multiple cameras to take advantage of multi-camera based tracking. Our algorithm performs online, does not need any information about the scene, imposes no restrictions on entry and exit zones, makes no assumptions about the areas where objects move, and can be extended to any class of object tracking.
Experimental results show the benefits of using multiple cameras through higher accuracy and precision rates."} {"_id": "55c769b5829ca88ba940e0050497f4956c233445", "title": "A real-time method for depth enhanced visual odometry", "text": "Visual odometry can be augmented by depth information such as provided by RGB-D cameras, or from lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize the depth, even if sparsely available, in recovery of camera motion. In addition, the method utilizes depth by structure from motion using the previously estimated motion, and salient visual features for which depth is unavailable. Therefore, the method is able to extend RGBD visual odometry to large scale, open environments where depth often cannot be sufficiently acquired. The core of our method is a bundle adjustment step that refines the motion estimates in parallel by processing a sequence of images, in a batch optimization. We have evaluated our method in three sensor setups, one using an RGB-D camera, and two using combinations of a camera and a 3D lidar. Our method is rated #4 on the KITTI odometry benchmark irrespective of sensing modality\u2014compared to stereo visual odometry methods which retrieve depth by triangulation. The resulting average position error is 1.14% of the distance traveled."} {"_id": "602bb6a3eaab203895d5daaf311b728c53a41463", "title": "Repeatability, reproducibility and rigor in systems research", "text": "Computer systems research spans sub-disciplines that include embedded and real-time systems, compilers, networking, and operating systems. Our contention is that a number of structural factors inhibit quality research and decrease the velocity of science. We highlight some of the factors we have encountered in our work and observed in published papers and propose solutions that, if widely adopted, could both increase the productivity of researchers and the quality of their output."} {"_id": "801307124a2e4098b8d7402241ed189fed123be5", "title": "Earpod: eyes-free menu selection using touch input and reactive audio feedback", "text": "We present the design and evaluation of earPod: an eyes-free menu technique using touch input and reactive auditory feedback. Studies comparing earPod with an iPod-like visual menu technique on reasonably-sized static menus indicate that they are comparable in accuracy. In terms of efficiency (speed), earPod is initially slower, but outperforms the visual technique within 30 minutes of practice. Our results indicate that earPod is potentially a reasonable eyes-free menu technique for general use, and is a particularly exciting technique for use in mobile device interfaces."} {"_id": "4e348a6bb29f7ac5514ba52d503417424153223c", "title": "Incorporating Luminance, Depth and Color Information by Fusion-based Networks for Semantic Segmentation", "text": "Semantic segmentation has made encouraging progress due to the success of deep convolutional networks in recent years. Meanwhile, depth sensors have become prevalent, so depth maps can be acquired more easily. However, there are few studies that focus on the RGB-D semantic segmentation task. Effectively exploiting depth information to improve performance is a challenge. In this paper, we propose a novel solution named LDFNet, which incorporates Luminance, Depth and Color information by a fusion-based network.
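To make the greedy likelihood-based association step from the multi-camera tracking abstract above concrete, a minimal sketch follows; the likelihood callable and the score floor are hypothetical placeholders, not the paper's actual judgment functions.

```python
# Hedged sketch of greedy data association in the spirit of the tracking
# abstract above: repeatedly commit the highest-likelihood unused
# track/detection pair. The likelihood callable is a hypothetical stand-in.
def greedy_match(tracks, detections, likelihood, min_score=0.5):
    pairs = sorted(((likelihood(t, d), t, d) for t in tracks for d in detections),
                   key=lambda p: p[0], reverse=True)
    used_t, used_d, matches = set(), set(), []
    for score, t, d in pairs:
        if score < min_score:
            break  # remaining pairs are even weaker
        if t not in used_t and d not in used_d:
            matches.append((t, d))
            used_t.add(t)
            used_d.add(d)
    return matches

scores = {("t1", "d1"): 0.9, ("t1", "d2"): 0.6, ("t2", "d2"): 0.8}
print(greedy_match(["t1", "t2"], ["d1", "d2"], lambda t, d: scores.get((t, d), 0.0)))
# [('t1', 'd1'), ('t2', 'd2')]
```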
It includes a sub-network to process depth maps and employs luminance images to assist the depth processing. LDFNet outperforms other state-of-the-art systems on the Cityscapes dataset, and its inference speed is faster than most of the existing networks. The experimental results show the effectiveness of the proposed multi-modal fusion network and its potential for practical applications."} {"_id": "1c9eb312cd57109b4040aab332aeff3e6661a929", "title": "Applying Reinforcement Learning to Blackjack Using Q-Learning", "text": "Blackjack is a popular card game played in many casinos. The objective of the game is to win money by obtaining a point total higher than the dealer\u2019s without exceeding 21. Determining an optimal blackjack strategy proves to be a difficult challenge due to the stochastic nature of the game. This presents an interesting opportunity for machine learning algorithms. Supervised learning techniques may provide a viable solution, but do not take advantage of the inherent reward structure of the game. Reinforcement learning algorithms generally perform well in stochastic environments, and could utilize blackjack's reward structure. This paper explores reinforcement learning as a means of approximating an optimal blackjack strategy using the Q-learning algorithm."} {"_id": "22ddd63b622aa19166322abed42c3971685accd1", "title": "Methods of inference and learning for performance modeling of parallel applications", "text": "Increasing system and algorithmic complexity combined with a growing number of tunable application parameters pose significant challenges for analytical performance modeling. We propose a series of robust techniques to address these challenges. In particular, we apply statistical techniques such as clustering, association, and correlation analysis, to understand the application parameter space better. We construct and compare two classes of effective predictive models: piecewise polynomial regression and artificial neural networks. We compare these techniques with theoretical analyses and experimental results. Overall, both regression and neural networks are accurate, with median error rates ranging from 2.2 to 10.5 percent. The comparable accuracy of these models suggests differentiating features will arise from ease of use, transparency, and computational efficiency."} {"_id": "11d11c127be2a23596e38e868977188f8eb59cd8", "title": "Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC", "text": "State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning in nonlinear nonparametric state-space models. We place a Gaussian process prior over the transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. However, to enable efficient inference, we marginalize over the dynamics of the model and instead infer directly the joint smoothing distribution through the use of specially tailored Particle Markov Chain Monte Carlo samplers. Once an approximation of the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically.
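As a hedged illustration of the Q-learning formulation named in the blackjack abstract above: the core tabular update with an epsilon-greedy policy. The state encoding, rewards, and hyperparameters are assumptions, since the abstract does not specify them.

```python
# Minimal tabular Q-learning sketch in the spirit of the blackjack abstract
# above; state representation, rewards, and hyperparameters are assumed.
import random
from collections import defaultdict

Q = defaultdict(float)                  # Q[(state, action)] -> value estimate
ALPHA, GAMMA, EPSILON = 0.1, 1.0, 0.1   # learning rate, discount, exploration
ACTIONS = ("hit", "stand")

def choose_action(state):
    if random.random() < EPSILON:       # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, done):
    best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
    # Standard temporal-difference update toward the bootstrapped target.
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Because the return of a blackjack hand is stochastic, the update averages over many episodes rather than requiring labeled optimal actions, which is the advantage over supervised learning noted in the abstract.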
We make use of sparse Gaussian process models to greatly reduce the computational complexity of the approach."} {"_id": "353656b577e1fbbbe995a1a623698f6fd8559b3d", "title": "Spatial and Spatio-Temporal Multidimensional Data Modelling: A Survey", "text": "Data warehouses store and provide access to large volumes of historical data supporting the strategic decisions of organisations. A data warehouse is based on a multidimensional model that allows users to express their needs for supporting the decision-making process. Since it is estimated that 80% of data used for decision making has a spatial or location component, spatial data have been widely integrated in Data Warehouses and in OLAP systems. Extending a multidimensional data model by the inclusion of spatial data provides a concise and organised spatial data warehouse representation. This paper aims to provide a comprehensive review of literature on developed and suggested spatial and spatio-temporal multidimensional models. A benchmarking study of the proposed models is presented. Several evaluation criteria are used to identify the existence of trends as well as potential needs for further investigations. Keywords\u2014 Data warehouse, Spatial data, Multidimensional modelling, Temporal data"} {"_id": "b90dd2f366988d9bb76399d4137c1768fe460c8f", "title": "Malicious PDF detection using metadata and structural features", "text": "Owing to their versatile functionality and widespread adoption, PDF documents have become a popular avenue for user exploitation ranging from large-scale phishing attacks to targeted attacks. In this paper, we present a framework for robust detection of malicious documents through machine learning. Our approach is based on features extracted from document metadata and structure. Using real-world datasets, we demonstrate the adequacy of these document properties for malware detection and the durability of these features across new malware variants. Our analysis shows that the Random Forests classification method, an ensemble classifier that randomly selects features for each individual classification tree, yields the best detection rates, even on previously unseen malware.\n Indeed, using multiple datasets containing an aggregate of over 5,000 unique malicious documents and over 100,000 benign ones, our classification rates remain well above 99% while maintaining low false positives of 0.2% or less for different classification parameters and experimental scenarios. Moreover, the classifier has the ability to detect documents crafted for targeted attacks and separate them from broadly distributed malicious PDF documents. Remarkably, we also discovered that by artificially reducing the influence of the top features in the classifier, we can still achieve a high rate of detection in an adversarial setting where the attacker is aware of both the top features utilized in the classifier and our normality model. Thus, the classifier is resilient against mimicry attacks even with knowledge of the document features, classification method, and training set."} {"_id": "067c7857753e21e7317b556c86e30be60aa7cac0", "title": "Xen and the art of virtualization", "text": "Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed.
Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service. This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource-managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort. Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests."} {"_id": "0992ef63a94c4b9dfc05f96b3a144c1e7237c539", "title": "Static detection of malicious JavaScript-bearing PDF documents", "text": "Despite the recent security improvements in Adobe's PDF viewer, its underlying code base remains vulnerable to novel exploits. A steady flow of rapidly evolving PDF malware observed in the wild substantiates the need for novel protection instruments beyond the classical signature-based scanners. In this contribution we present a technique for detection of JavaScript-bearing malicious PDF documents based on static analysis of extracted JavaScript code. Compared to previous work, mostly based on dynamic analysis, our method incurs an order of magnitude lower run-time overhead and does not require special instrumentation. Due to its efficiency we were able to evaluate it on an extremely large real-life dataset obtained from the VirusTotal malware upload portal. Our method has proved to be effective against both known and unknown malware and suitable for large-scale batch processing."} {"_id": "0c668ee24d58ecca165f788d40765e79ed615471", "title": "Classification and Regression Trees", "text": ""} {"_id": "0dac671dae4192cfe96d290b50cc3f1105798825", "title": "Stealthy malware detection through vmm-based \"out-of-the-box\" semantic view reconstruction", "text": "An alarming trend in malware attacks is that they are armed with stealthy techniques to detect, evade, and subvert malware detection facilities of the victim. On the defensive side, a fundamental limitation of traditional host-based anti-malware systems is that they run inside the very hosts they are protecting (\"in the box\"), making them vulnerable to counter-detection and subversion by malware. To address this limitation, recent solutions based on virtual machine (VM) technologies advocate placing the malware detection facilities outside of the protected VM (\"out of the box\"). However, they gain tamper resistance at the cost of losing the native, semantic view of the host which is enjoyed by the \"in the box\" approach, thus leading to a technical challenge known as the semantic gap.\n In this paper, we present the design, implementation, and evaluation of VMwatcher - an \"out-of-the-box\" approach that overcomes the semantic gap challenge. A new technique called guest view casting is developed to systematically reconstruct internal semantic views (e.g., files, processes, and kernel modules) of a VM from the outside in a non-intrusive manner.
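(Illustrative aside, not from the paper: the guest view casting idea elaborated in the next sentence can be pictured with a toy Python sketch that reinterprets raw guest memory bytes using a known struct layout; the record layout here is hypothetical.)

import struct

# Hypothetical guest layout: each process record is a little-endian
# int32 pid followed by a 16-byte, NUL-padded name.
raw = struct.pack("<i16s", 1, b"init") + struct.pack("<i16s", 42, b"sshd")
rec = struct.calcsize("<i16s")  # 20 bytes per record
for off in range(0, len(raw), rec):
    pid, name = struct.unpack_from("<i16s", raw, off)
    print(pid, name.rstrip(b"\x00").decode())  # reconstructed process list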
Specifically, the new technique casts semantic definitions of guest OS data structures and functions on virtual machine monitor (VMM)-level VM states, so that the semantic view can be reconstructed. With the semantic gap bridged, we identify two unique malware detection capabilities: (1) view comparison-based malware detection and its demonstration in rootkit detection and (2) \"out-of-the-box\" deployment of host-based anti-malware software with improved detection accuracy and tamper-resistance. We have implemented a proof-of-concept prototype on both Linux and Windows platforms and our experimental results with real-world malware, including elusive kernel-level rootkits, demonstrate its practicality and effectiveness."} {"_id": "c803e14cf844f774607ba7dd1f647fd8539519f0", "title": "Automated and Cooperative Vehicle Merging at Highway On-Ramps", "text": "Recognition of the necessity of connected and automated vehicles (CAVs) is gaining momentum. CAVs can improve both transportation network efficiency and safety through control algorithms that can harmoniously use all existing information to coordinate the vehicles. This paper addresses the problem of optimally coordinating CAVs at merging roadways to achieve smooth traffic flow without stop-and-go driving. We present an optimization framework and an analytical closed-form solution that allows online coordination of vehicles at merging zones. The effectiveness and efficiency of the proposed solution are validated through simulation, and it is shown that coordination of vehicles can significantly reduce both fuel consumption and travel time."} {"_id": "22a21791f8e36fe0959027a689ce996d3eae0747", "title": "An ESPAR Antenna for Beamspace-MIMO Systems Using PSK Modulation Schemes", "text": "In this paper the use of electronically steerable passive array radiator (ESPAR) antennas is introduced for achieving increased spectral efficiency characteristics in multiple-input, multiple-output (MIMO) systems using a single active element, and compact antennas. The proposed ESPAR antenna is capable of mapping phase-shift-keying (PSK) modulated symbols to be transmitted onto orthogonal basis functions on the wavevector domain of the multi-element antenna (MEA), instead of the traditional approach of sending different symbol streams in different locations on the antenna domain. In this way, different symbols are transmitted simultaneously towards different angles of departure at the transmitter side. We show that the proposed system, called beamspace-MIMO (BS-MIMO), can achieve performance characteristics comparable to traditional MIMO systems while using a single radio-frequency (RF) front-end and ESPAR antenna arrays of total length equal to lambda/8, for popular PSK modulation schemes such as binary-PSK (BPSK), quaternary-PSK (QPSK), as well as for their offset-PSK and differential-PSK variations."} {"_id": "4b7ec490154397c2691d3404eccd412665fa5e6a", "title": "N-gram-based Machine Translation", "text": "This article describes in detail an n-gram approach to statistical machine translation. This approach consists of a log-linear combination of a translation model based on n-grams of bilingual units, which are referred to as tuples, along with four specific feature functions.
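(Illustrative aside, not from the article: a log-linear combination of feature functions reduces to a weighted sum of log-domain scores. The feature names and values below are toy assumptions, not the article's four functions.)

import math

def loglinear_score(features, weights):
    # score(e, f) = sum_k w_k * h_k(e, f)
    return sum(weights[k] * h for k, h in features.items())

features = {"tuple_ngram_tm": math.log(0.02), "target_lm": math.log(0.10),
            "word_bonus": 5.0, "lexicon": math.log(0.30)}
weights = {"tuple_ngram_tm": 1.0, "target_lm": 0.8,
           "word_bonus": -0.1, "lexicon": 0.5}
print(loglinear_score(features, weights))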
Translation performance, which is state of the art, is demonstrated with Spanish-to-English and English-to-Spanish translations of the European Parliament Plenary Sessions (EPPS)."} {"_id": "1f568ee68dd5a0d8e77321fb724dd6b26a8f034a", "title": "To queue or not to queue : equilibrium behavior in queueing systems", "text": "Contents: Preface; 1. Introduction; 1.1 Basic concepts (strategies, payoffs, and equilibrium; steady-state; subgame perfect equilibrium; evolutionarily stable strategies; the Braess paradox; avoid the crowd or follow it?); 1.2 Threshold strategies; 1.3 Costs and objectives; 1.4 Queueing theory preliminaries; 1.5 A shuttle example (the unobservable model; the observable model; social optimality); 1.6 Non-stochastic models."} {"_id": "342ca79a0ac2cc726bf31ffb4ce399821d3e2979", "title": "Merkle Tree Traversal in Log Space and Time", "text": "We present a technique for Merkle tree traversal which requires only logarithmic space and time. For a tree with N nodes, our algorithm computes sequential tree leaves and authentication path data in time Log2(N) and space less than 3Log2(N), where the units of computation are hash function evaluations or leaf value computations, and the units of space are the number of node values stored. Relative to this algorithm, we show our bounds to be necessary and sufficient. This result is an asymptotic improvement over all other previous results (for example, measuring cost = space \u2217 time). We also prove that the complexity of our algorithm is optimal: There can exist no Merkle tree traversal algorithm which consumes both less than O(Log2(N)) space and less than O(Log2(N)) time. Our algorithm is especially of practical interest when space efficiency is required, and can also enhance other traversal algorithms which relax space constraints to gain speed."} {"_id": "f93507b4c55a3ae6b891d00bcb98dde6410543f3", "title": "Kinematics and Jacobian analysis of a 6UPS Stewart-Gough platform", "text": "In this paper a complete kinematics analysis of a 6UPS Stewart-Gough platform is performed. A new methodology to deal with the forward kinematics problem based on a multibody formulation and a numerical method is presented, resulting in a highly efficient formulation approach. The idea of a multibody formulation is basically to build, for each joint that links the bodies of the robot, a set of equations that define the constraints on their movement, so that the joint variables are included in these equations. Once the constraint function is defined, we may use a numerical method to find the kinematic solution. In order to avoid the problem reported with the orientation (Euler's angles) of the moving platform, a generalized coordinates vector using quaternions is applied. Then, the Jacobian analysis using reciprocal screw systems is described and the workspace is determined. Finally, a design of a simulator using MATLAB as a programming tool is presented."} {"_id": "82e1e4c49d515b1adebe9e2932458e08280de634", "title": "Coping with compassion fatigue.", "text": "Helping others who have undergone a trauma from a natural disaster, accident, or sudden act of violence can be highly satisfying work. But helping trauma victims can take a toll on even the most seasoned mental health professional.
Ongoing exposure to the suffering of those you are helping can bring on a range of signs and symptoms, including anxiety, sleeplessness, irritability, and feelings of helplessness, that can interfere, sometimes significantly, with everyday life and work. In clinicians, including therapists, counselors, and social workers, this response is often referred to as \u201ccompassion fatigue\u201d or \u201csecondary post-traumatic stress.\u201d"} {"_id": "12a23d19543e73b5808b35f1ff2d00faba633bb6", "title": "The flipped classroom: a course redesign to foster learning and engagement in a health professions school.", "text": "Recent calls for educational reform highlight ongoing concerns about the ability of current curricula to equip aspiring health care professionals with the skills for success. Whereas a wide range of proposed solutions attempt to address apparent deficiencies in current educational models, a growing body of literature consistently points to the need to rethink the traditional in-class, lecture-based course model. One such proposal is the flipped classroom, in which content is offloaded for students to learn on their own, and class time is dedicated to engaging students in student-centered learning activities, like problem-based learning and inquiry-oriented strategies. In 2012, the authors flipped a required first-year pharmaceutics course at the University of North Carolina Eshelman School of Pharmacy. They offloaded all lectures to self-paced online videos and used class time to engage students in active learning exercises. In this article, the authors describe the philosophy and methodology used to redesign the Basic Pharmaceutics II course and outline the research they conducted to investigate the resulting outcomes. This article is intended to serve as a guide to instructors and educational programs seeking to develop, implement, and evaluate innovative and practical strategies to transform students' learning experience. As class attendance, students' learning, and the perceived value of this model all increased following participation in the flipped classroom, the authors conclude that this approach warrants careful consideration as educators aim to enhance learning, improve outcomes, and fully equip students to address 21st-century health care needs."} {"_id": "e930f44136bcc8cf32370c4caaf0734af0fa0d51", "title": "An Empirical Investigation of the Impact of Individual and Work Characteristics on Telecommuting Success", "text": "Individual and work characteristics are used in telecommuting plans; however, their impact on telecommuting success is not well known. We studied how employee tenure, work experience, communication skills, task interdependence, work output measurability, and task variety impact telecommuter productivity, performance, and satisfaction after taking into account the impact of communication technologies. Data collected from 89 North American telecommuters suggest that in addition to the richness of the media, work experience, communication skills, and task interdependence impact telecommuting success. These characteristics are practically identifiable and measurable; therefore, we expect our findings to help managers convert increasing telecommuting adoption rates to well-defined and measurable gains."} {"_id": "def773093c721c5d0dcac3909255ec39efeca97b", "title": "An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data", "text": "Human action recognition is an important task in computer vision.
Extracting discriminative spatial and temporal features to model the spatial and temporal evolutions of different actions plays a key role in accomplishing this task. In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data. We build our model on top of the Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which learns to selectively focus on discriminative joints of the skeleton within each frame of the inputs and pays different levels of attention to the outputs of different frames. Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly. Experimental results demonstrate the effectiveness of the proposed model, both on the small SBU human action recognition dataset and on the currently largest NTU dataset."} {"_id": "5a5af9fc5093fcd964daa769cbdd8e548261f79b", "title": "Fast Viterbi map matching with tunable weight functions", "text": "This paper describes a map matching program submitted to the ACM SIGSPATIAL Cup 2012. We first summarize existing map matching algorithms into three categories, and compare their performance thoroughly. In general, global max-weight methods using the Viterbi dynamic programming algorithm are the most accurate but the accuracy varies at different sampling intervals using different weight functions. Our submission selects a hybrid that improves upon the best two weight functions such that its accuracy is better than both and the performance is robust against varying sampling rates. In addition, we employ many optimization techniques to reduce the overall latency, as the scoring heavily emphasizes speed. Using the training dataset with manually corrected ground truth, our Java-based program matched all 14,436 samples in 5 seconds on a dual-core 3.3 GHz Intel Core i3 processor, and achieved 98.9% accuracy."} {"_id": "fe124d1c5ff4c691e0f16b24713e81b0fbc840ca", "title": "Domain Adaptation with Adversarial Neural Networks and Auto-encoders", "text": "Background. Domain adaptation focuses on the situation where we have data generated from multiple domains, which are assumed to be different, but similar, in a certain sense. In this work we focus on the case where there are two domains, known as the source and the target domain. The source domain is assumed to have a large amount of labeled data while labeled data in the target domain is scarce. The goal of domain adaptation algorithms is to generalize better on the target domain by exploiting the large amount of labeled data in the related source domain."} {"_id": "2fc5999b353f41be8c0d5dea2ecb7c26507aa7c0", "title": "A particle filter for monocular vision-aided odometry", "text": "We propose a particle filter-based algorithm for monocular vision-aided odometry for mobile robot localization. The algorithm fuses information from odometry with observations of naturally occurring static point features in the environment. A key contribution of this work is a novel approach for computing the particle weights, which does not require including the feature positions in the state vector. As a result, the computational and sample complexities of the algorithm remain low even in feature-dense environments. We validate the effectiveness of the approach extensively with both simulations and real-world data, and compare its performance against that of the extended Kalman filter (EKF) and FastSLAM.
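(Illustrative aside, not from the paper: a generic 1-D particle filter step, sketched in Python, shows the propagate/reweight/resample loop; the paper's actual contribution is a weight computation that keeps feature positions out of the state vector, which this toy version does not reproduce.)

import numpy as np

def pf_step(particles, weights, odom, z, motion_noise, h, meas_noise):
    # Propagate particles with noisy odometry, reweight by the Gaussian
    # observation likelihood, then resample.
    particles = particles + odom + np.random.randn(len(particles)) * motion_noise
    weights = weights * np.exp(-0.5 * ((z - h(particles)) / meas_noise) ** 2)
    weights = weights / weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

N = 500
p, w = np.zeros(N), np.full(N, 1.0 / N)
p, w = pf_step(p, w, odom=0.1, z=1.05, motion_noise=0.05,
               h=lambda x: x, meas_noise=0.1)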
Results from the simulation tests show that the particle filter approach outperforms these competing approaches in terms of RMS error. Moreover, the experiments demonstrate that the approach is capable of achieving good localization accuracy in complex environments."} {"_id": "e89d7b727da4617caf1c7c2dc8523d8565079ecf", "title": "Attacking the Washington, D.C. Internet Voting System", "text": "In 2010, Washington, D.C. developed an Internet voting pilot project that was intended to allow overseas absentee voters to cast their ballots using a website. Prior to deploying the system in the general election, the District held a unique public trial: a mock election during which anyone was invited to test the system or attempt to compromise its security. This paper describes our experience participating in this trial. Within 48 hours of the system going live, we had gained near-complete control of the election server. We successfully changed every vote and revealed almost every secret ballot. Election officials did not detect our intrusion for nearly two business days\u2014and might have remained unaware for far longer had we not deliberately left a prominent clue. This case study\u2014the first (to our knowledge) to analyze the security of a government Internet voting system from the perspective of an attacker in a realistic pre-election deployment\u2014attempts to illuminate the practical challenges of securing online voting as practiced today by a growing number of jurisdictions."} {"_id": "9cccd211c9208f790d71fa5b3499d8f827744aa0", "title": "Applications of Educational Data Mining: A survey", "text": "Various education-oriented problems are resolved through Educational Data Mining (EDM), one of the most prevalent applications of data mining. One of the crucial goals of this paper is to study the most recent works carried out on EDM and analyze their merits and drawbacks. This paper also highlights the cumulative results of the various data mining practices and techniques applied in the surveyed articles, and thereby suggests future directions on EDM to researchers. In addition, an experiment was conducted to evaluate certain classification and clustering algorithms and to identify the most reliable algorithms for future research."} {"_id": "6831bb247c853b433d7b2b9d47780dc8d84e4762", "title": "Analysis of Deep Convolutional Neural Network Architectures", "text": "In computer vision many tasks are solved using machine learning. In the past few years, state of the art results in computer vision have been achieved using deep learning. Deeper machine learning architectures are better able to handle complex recognition tasks than previous, shallower models. Many architectures for computer vision make use of convolutional neural networks which were modeled after the visual cortex. Currently deep convolutional neural networks are the state of the art in computer vision. Through a literature survey, an overview is given of the components in deep convolutional neural networks. The role and design decisions for each of the components are presented, and the difficulties involved in training deep neural networks are given. To improve deep learning architectures, an analysis is given of the activation values in four different architectures using various activation functions. Current state-of-the-art classifiers use dropout, max-pooling, as well as the maxout activation function.
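(Illustrative aside, not from the survey: the maxout activation just mentioned takes the maximum over k affine pieces; a minimal Python sketch follows, with toy dimensions.)

import numpy as np

def maxout(x, W, b):
    # Maxout activation: the max over k affine pieces.
    # x: (d_in,), W: (k, d_in, d_out), b: (k, d_out) -> (d_out,)
    return np.max(np.einsum("i,kio->ko", x, W) + b, axis=0)

x = np.random.randn(4)
W = np.random.randn(3, 4, 2)  # k=3 pieces, 4 inputs, 2 outputs
b = np.random.randn(3, 2)
y = maxout(x, W, b)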
New components may further improve the architecture by providing a better solution for the vanishing gradient problem."} {"_id": "07186dd168d2a8add853cc9fdedf1376a77a7ac8", "title": "Classification and tracking of dynamic objects with multiple sensors for autonomous driving in urban environments", "text": "Future driver assistance systems are likely to use a multisensor approach with heterogeneous sensors for tracking dynamic objects around the vehicle. The quality and type of data available for a data fusion algorithm depends heavily on the sensors detecting an object. This article presents a general framework which allows the use of sensor-specific advantages while abstracting away the specific details of a sensor. Different tracking models are used depending on the current set of sensors detecting the object. A sensor-independent algorithm for classifying objects regarding their current and past movement state is presented. The described architecture and algorithms have been successfully implemented in Tartan Racing's autonomous vehicle for the Urban Grand Challenge. Results are presented and discussed."} {"_id": "25338f00a219e2f4b152ff8b62e8ca8aa6d4709f", "title": "NAViGaTOR: Network Analysis, Visualization and Graphing Toronto", "text": "SUMMARY\nNAViGaTOR is a powerful graphing application for the 2D and 3D visualization of biological networks. NAViGaTOR includes a rich suite of visual mark-up tools for manual and automated annotation, fast and scalable layout algorithms and OpenGL hardware acceleration to facilitate the visualization of large graphs. Publication-quality images can be rendered through SVG graphics export. NAViGaTOR supports community-developed data formats (PSI-XML, BioPax and GML), is platform-independent and is extensible through a plug-in architecture.\n\n\nAVAILABILITY\nNAViGaTOR is freely available to the research community from http://ophid.utoronto.ca/navigator/. Installers and documentation are provided for 32- and 64-bit Windows, Mac, Linux and Unix.\n\n\nCONTACT\njuris@ai.utoronto.ca\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."} {"_id": "9fc2c61ea601d126d386121f25e7af679a15c2fc", "title": "Speed or accuracy? a study in evaluation of simultaneous speech translation", "text": "Simultaneous speech translation is a technology that attempts to reduce the delay inherent in speech translation by beginning translation before the end of explicit sentence boundaries. Despite best efforts, there is still often a trade-off between speed and accuracy in these systems, with systems with less delay also achieving lower accuracy. However, somewhat surprisingly, there is no previous work examining the relative importance of speed and accuracy, and thus given two systems with various speeds and accuracies, it is difficult to say with certainty which is better. In this paper, we make the first steps towards evaluation of simultaneous speech translation systems in consideration of both speed and accuracy. We collect user evaluations of speech translation results with different levels of accuracy and delay, and use this data to learn the parameters of an evaluation measure that can judge the trade-off between these two factors.
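(Illustrative aside, not from the paper: one simple way to parameterize such a trade-off measure is score = accuracy - lam * delay, choosing lam to maximize correlation with human judgements. All numbers below are toy assumptions.)

import numpy as np

acc = np.array([0.70, 0.75, 0.80, 0.85])   # toy system accuracies
delay = np.array([1.0, 2.0, 4.0, 8.0])     # toy delays (seconds)
human = np.array([3.2, 3.6, 3.4, 2.9])     # toy human ratings

lams = np.linspace(0.0, 0.2, 201)
corrs = [np.corrcoef(acc - lam * delay, human)[0, 1] for lam in lams]
best_lam = lams[int(np.argmax(corrs))]  # penalty that best matches humans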
Based on these results, we find that considering both accuracy and delay in the evaluation of speech translation results helps improve correlations with human judgements, and that users placed higher relative importance on reducing delay when results were presented through text, rather than speech."} {"_id": "60ac4554e9f89f0ca319d3bbcc4828bf4dad5b05", "title": "Cyber-Physical Systems: A New Frontier", "text": "The report of the President's Council of Advisors on Science and Technology (PCAST) has placed CPS on the top of the priority list for federal research investment [6]. This article first reviews some of the challenges and promises of CPS, followed by an articulation of some specific challenges and promises that are more closely related to the sensor networks, ubiquitous and trustworthy computing conference."} {"_id": "110ee8ab8f652c16fcc3bb767687e1c695c2500b", "title": "GP-GAN: Towards Realistic High-Resolution Image Blending", "text": "Recent advances in generative adversarial networks (GANs) have shown promising potential in conditional image generation. However, how to generate high-resolution images remains an open problem. In this paper, we aim at generating high-resolution well-blended images given composited copy-and-paste ones, i.e., realistic high-resolution image blending. To achieve this goal, we propose Gaussian-Poisson GAN (GP-GAN), a framework that combines the strengths of classical gradient-based approaches and GANs, which is the first work that explores the capability of GANs in the high-resolution image blending task, to the best of our knowledge. Particularly, we propose the Gaussian-Poisson Equation to formulate the high-resolution image blending problem, which is a joint optimisation constrained by the gradient and colour information. Gradient filters can obtain gradient information. For generating the colour information, we propose Blending GAN to learn the mapping between the composited image and the well-blended one. Compared to the alternative methods, our approach can deliver high-resolution, realistic images with fewer bleedings and unpleasant artefacts. Experiments confirm that our approach achieves the state-of-the-art performance on the Transient Attributes dataset. A user study on Amazon Mechanical Turk finds that the majority of workers are in favour of the proposed approach."} {"_id": "5005a39bf313b75601b07295387dc7efc4924726", "title": "Dependent Gaussian Processes", "text": "Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple outputs, because ensuring that the covariance matrix is positive definite is problematic. An alternative formulation is to treat Gaussian processes as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to handle multiple, coupled outputs."} {"_id": "8db60e5196657c4f6188ffee3d2a48d188014f6d", "title": "Realtime multi-aircraft tracking in aerial scene with deep orientation network", "text": "Tracking aircraft from an aerial view is very challenging due to large appearance, perspective angle, and orientation variations. The deep-patch orientation network (DON) method was proposed for the multi-ground target tracking system, which is general and can learn the target\u2019s orientation based on the structure information in the training samples.
This approach improves the performance of the tracking-by-detection framework in two aspects: one is to improve the detectability of the targets by using the patch-based model for target localization in the detection component, and the other is to enhance the motion characteristics of the individual tracks by incorporating the orientation information as an association metric in the tracking component. Based on the DON structure, the you-only-look-once (YOLO) and faster region convolutional neural network (FrRCNN) detection frameworks with the simple online and realtime tracking (SORT) tracker are utilized as a case study. Comparative experiments demonstrate that the overall detection accuracy is improved at the same processing speed for both detection frameworks. Furthermore, the number of identity switches (IDsw) is reduced by about 67% without affecting the computational complexity of the tracking component. Consequently, the presented method is efficient for realtime ground target-tracking scenarios."} {"_id": "78fef6c5072e0d2ecfa710c2bb5bbc2cf54acbca", "title": "Learning user tastes: a first step to generating healthy meal plans", "text": "Poor nutrition is fast becoming one of the major causes of ill-health and death in the western world. It is caused by a variety of factors including lack of nutritional understanding leading to poor choices being made when selecting which dishes to cook and eat. We wish to build systems which can recommend nutritious meal plans to users; however, a crucial pre-requisite is to be able to recommend dishes that people will like. In this work we investigate key factors contributing to how recipes are rated by analysing the results of a long-term study (n=123 users) in order to understand how best to approach the recommendation problem. In doing so we identify a number of important contextual factors which can influence the choice of rating and suggest how these might be exploited to build more accurate recipe recommender systems. We see this as a crucial first step in a healthy meal recommender. We conclude by summarising our thoughts on how we will combine recommended recipes into meal plans based on nutritional guidelines."} {"_id": "393158606639f86f4d3ca2e89d15bd38b59577e1", "title": "Transliteration for Resource-Scarce Languages", "text": "Today, parallel corpus-based systems dominate the transliteration landscape. But resource-scarce languages do not enjoy the luxury of a large parallel transliteration corpus. For these languages, rule-based transliteration is the only viable option. In this article, we show that by properly harnessing the monolingual resources in conjunction with a manually created rule base, one can achieve reasonable transliteration performance. We achieve this performance by exploiting the power of Character Sequence Modeling (CSM), which requires only monolingual resources. We present the results of our rule-based system for Hindi to English, English to Hindi, and Persian to English transliteration tasks. We also perform extrinsic evaluation of transliteration systems in the context of Cross Lingual Information Retrieval.
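(Illustrative aside, not from the article: a crude character-sequence score, sketched in Python with add-alpha smoothing over character n-grams and a toy lexicon, can rerank rule-generated transliteration candidates. A real CSM would condition on n-gram history; this stand-in only sums smoothed n-gram log-frequencies.)

from collections import Counter
import math

def char_ngrams(word, n=3):
    w = "^" + word + "$"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

def csm_logscore(word, counts, total, n=3, alpha=1.0):
    v = len(counts) + 1
    return sum(math.log((counts[g] + alpha) / (total + alpha * v))
               for g in char_ngrams(word, n))

words = ["delhi", "mumbai", "chennai", "kolkata"]  # toy monolingual lexicon
counts = Counter(g for w in words for g in char_ngrams(w))
total = sum(counts.values())
best = max(["dilhi", "delhi"], key=lambda w: csm_logscore(w, counts, total))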
Another important contribution of our work is to explain the widely varying accuracy numbers reported in transliteration literature, in terms of the entropy of the language pairs and the datasets involved."} {"_id": "197a7fc2f8d57d93727b348851b59b34ce990afd", "title": "SRILM - an extensible language modeling toolkit", "text": "SRILM is a collection of C++ libraries, executable programs, and helper scripts designed to allow both production of and experimentation with statistical language models for speech recognition and other applications. SRILM is freely available for noncommercial purposes. The toolkit supports creation and evaluation of a variety of language model types based on N-gram statistics, as well as several related tasks, such as statistical tagging and manipulation of N-best lists and word lattices. This paper summarizes the functionality of the toolkit and discusses its design and implementation, highlighting ease of rapid prototyping, reusability, and combinability of tools."} {"_id": "9c1775c992847f31a9bda931a1ecb8cac5365367", "title": "Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar", "text": "We describe a chart realization algorithm for Combinatory Categorial Grammar (CCG), and show how it can be used to efficiently realize a wide range of coordination phenomena, including argument cluster coordination and gapping. The algorithm incorporates three novel methods for improving the efficiency of chart realization: (i) using rules to chunk the input logical form into sub-problems to be solved independently prior to further combination; (ii) pruning edges from the chart based on the n-gram score of the edge\u2019s string, in comparison to other edges with equivalent categories; and (iii) formulating the search as a best-first anytime algorithm, using n-gram scores to sort the edges on the agenda. The algorithm has been implemented as an extension to the OpenCCG open source CCG parser, and initial performance tests indicate that the realizer is fast enough for practical use in natural language dialogue systems."} {"_id": "12f661171799cbd899e1ff4ae0a7e2170c3d547b", "title": "Two decades of statistical language modeling: where do we go from here?", "text": "Statistical language models estimate the distribution of various natural language phenomena for the purpose of speech recognition and other language technologies. Since the first significant model was proposed in 1980, many attempts have been made to improve the state of the art. 
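(Illustrative aside, not from the article: the workhorse model of this era is the interpolated n-gram; a toy Python sketch with fixed interpolation weights follows. Real systems tune the weights on held-out data.)

from collections import Counter

def interp_prob(w1, w2, w3, uni, bi, tri, total, lams=(0.1, 0.3, 0.6)):
    # Interpolated trigram estimate P(w3 | w1, w2).
    p1 = uni[w3] / total
    p2 = bi[(w2, w3)] / uni[w2] if uni[w2] else 0.0
    p3 = tri[(w1, w2, w3)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0
    return lams[0] * p1 + lams[1] * p2 + lams[2] * p3

tokens = "the cat sat on the mat and the cat sat".split()
uni = Counter(tokens)
bi = Counter(zip(tokens, tokens[1:]))
tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
p = interp_prob("the", "cat", "sat", uni, bi, tri, total=len(tokens))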
We review them, point to a few promising directions, and argue for a Bayesian approach to integration of linguistic theories with data."} {"_id": "395f4b41578c3ff5139ddcf9e90eb60801b50394", "title": "Statistical language modeling using the CMU-cambridge toolkit", "text": "The CMU Statistical Language Modeling toolkit was released in order to facilitate the construction and testing of bigram and trigram language models. It is currently in use in academic, government and industrial laboratories in many countries. This paper presents a new version of the toolkit. We outline the conventional language modeling technology as implemented in the toolkit and describe the extra efficiency and functionality that the new toolkit provides as compared to previous software for this task. Finally, we give an example of the use of the toolkit in constructing and testing a simple language model."} {"_id": "7dd4a0fb7368a510840cffd7dcad618962d3b8e4", "title": "An Architecture for Action, Emotion, and Social Behavior", "text": "The Oz project at Carnegie Mellon is studying the construction of artistically effective simulated worlds. Such worlds typically include several agents, which must exhibit broad behavior. To meet this need, we are developing an agent architecture, called Tok, that presently supports reactivity, goals, emotions, and social behavior. Here we briefly introduce the requirements of our application, summarize the Tok architecture, and describe a particular agent we have constructed that exhibits the desired social behavior. 1 The Oz Project and Broad Agents The Oz project at Carnegie Mellon University is developing technology for artistically interesting simulated worlds [3]. We want to let human users participate in dramatically effective worlds that include moderately competent, emotional agents. We work with artists in the CMU Drama and English Departments, to help focus our technology on genuine artistic needs. An Oz world has four primary components. There is a simulated physical environment, a set of automated agents which help populate the world, a user interface to allow one or more people to participate in the world [14], and a planner concerned with the long-term structure of the user's experience [2]. One of the keys to an artistically engaging experience is for the user to be able to \"suspend disbelief\". That is, the user must be able to imagine that the world portrayed is real, without being jarred out of this belief by the world's behavior. The automated agents, in particular, must not be blatantly unreal. Thus, part of our effort is aimed at producing agents with a broad set of capabilities, including goal-directed reactive behavior, emotional state and behavior, social knowledge and behavior, and some natural language abilities. For our purpose, each of these capacities can be as limited as is necessary to allow us to build broad, integrated agents [4]. Oz worlds can be simpler than the real world, but they must retain sufficient complexity to serve as interesting artistic vehicles. The complexity level seems to be somewhat higher, but not exceptionally higher, than typical AI micro-worlds. Despite these simplifications, we find that our agents must deal with imprecise and erroneous perceptions, with the need to respond rapidly, and with a general inability to fully model the agent-rich world they inhabit. Thus, we suspect that some of our experience with broad agents in Oz may transfer to the domain of social, real-world robots.
Building broad agents is a little-studied area. Much work has been done on building reactive systems [1, 6, 7, 10, 11, 22], natural language systems (which we do not discuss here), and even emotion systems [9, 18, 20]. There has been growing interest in integrating action and learning (see [16]) and some very interesting work on broader integration [23, 19]. However, we are aware of no other efforts to integrate the particularly wide range of capabilities needed in the Oz domain. Here we present our efforts, focusing on the structure of a particular agent designed to exhibit goal-directed reactive behavior, emotion, and some social behavior. Further discussion of the integration issues can be found in [5]."} {"_id": "6fdbe12fc54a583dfe52315f5694e352185d8f03", "title": "A source code linearization technique for detecting plagiarized programs", "text": "It is very important to detect plagiarized programs in the field of computer science education. Therefore, many tools and algorithms have been developed for this purpose. Generally, these tools are operated in two phases. In phase 1, a program plagiarism detecting tool generates an intermediate representation from a given program set. The intermediate representation should reflect the structural characterization of the program. Most tools use the parse tree or token sequence as the intermediate representation. In phase 2, the program looks for plagiarized material and evaluates the similarity of two programs. It is helpful to report the plagiarized material between two programs to the instructor. In this paper, we present the static tracing method in order to improve program plagiarism detection accuracy. The static tracing method statically executes a program at the syntax level and then extracts predefined keywords according to the order of the executed functions. Experimental results show that this method can detect plagiarism more effectively than previously released plagiarism detection methods."} {"_id": "0b8f4edf1a7b4d19d47d419f41cde432b9708ab7", "title": "Silicon-Filled Rectangular Waveguides and Frequency Scanning Antennas for mm-Wave Integrated Systems", "text": "We present a technology for the manufacturing of silicon-filled integrated waveguides enabling the realization of low-loss, high-performance millimeter-wave passive components and high-gain array antennas, thus facilitating the realization of highly integrated millimeter-wave systems. The proposed technology employs deep reactive-ion-etching (DRIE) techniques with aluminum metallization steps to integrate rectangular waveguides with high geometrical accuracy and continuous metallic side walls. Measurement results of integrated rectangular waveguides are reported exhibiting losses of 0.15 dB/\u03bbg at 105 GHz. Moreover, ultra-wideband coplanar-to-waveguide transitions with 0.6 dB insertion loss at 105 GHz and return loss better than 15 dB from 80 to 110 GHz are described and characterized. The design, integration and measured performance of a frequency-scanning slotted-waveguide array antenna are reported, achieving a measured beam steering capability of 82\u00b0 within a band of 23 GHz and a half-power beam-width (HPBW) of 8.5\u00b0 at 96 GHz.
Finally, to showcase the capability of this technology to facilitate low-cost mm-wave system-level integration, a frequency modulated continuous wave (FMCW) transmit-receive IC for imaging radar applications is flip-chip mounted directly on the integrated array and experimentally characterized."} {"_id": "5696eb62fd6f1c039021e66b9f03b50066797a8e", "title": "Development of a continuum robot using pneumatic artificial muscles", "text": "This paper presents the design concept of an intrinsic spatial continuum robot, whose arm segments are actuated by pneumatic artificial muscles. Since pneumatic artificial muscles supported the arm segments of the robot instead of a passive spring, the developed continuum robot was not only capable of following a given trajectory, but also of varying its stiffness according to the given external environment. Experimental results revealed that the proposed continuum robot showed versatile movements by controlling the pressure of the supply air to the pneumatic artificial muscles, and the well-coordinated pattern matching scheme based on low-cost webcams gave good performance in estimating the position and orientation of the end-effector of the spatial continuum robot."} {"_id": "c58ae94ed0c4f59e00fde734f8b3ec8a554faf09", "title": "Introduction of speech log-spectral priors into dereverberation based on Itakura-Saito distance minimization", "text": "It has recently been shown that multi-channel linear prediction can effectively achieve blind speech dereverberation based on maximum-likelihood (ML) estimation. This approach can estimate and cancel unknown reverberation processes from only a few seconds of observation. However, one problem with this approach is that speech distortion may increase if we iterate the dereverberation more than once based on Itakura-Saito (IS) distance minimization to further reduce the reverberation. To overcome this problem, we introduce speech log-spectral priors into this approach, and reformulate it based on maximum a posteriori (MAP) estimation. Two types of priors are introduced: a Gaussian mixture model (GMM) of speech log spectra and a GMM of speech mel-frequency cepstral coefficients. In the formulation, we also propose a new versatile technique to integrate such log-spectral priors with the IS distance minimization in a computationally efficient manner. Preliminary experiments show the effectiveness of the proposed approach."} {"_id": "261069aa06459562a2de3950395b41ed4aebe9df", "title": "Disentangled Representation Learning for Text Style Transfer", "text": "This paper tackles the problem of disentangling the latent variables of style and content in language models. We propose a simple, yet effective approach, which incorporates auxiliary objectives: a multi-task classification objective, and dual adversarial objectives for label prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that the style and content are indeed disentangled in the latent space, using this approach. This disentangled latent representation learning method is applied to attribute (e.g. style) transfer on non-parallel corpora. We achieve similar content preservation scores compared to previous state-of-the-art approaches, and significantly better style-transfer strength scores.
Our code is made publicly available for replicability and extension purposes."} {"_id": "4f8781df9fd1d7473a28b86885af30a1cad2a2d0", "title": "Abstractive Multi-document Summarization with Semantic Information Extraction", "text": "This paper proposes a novel approach to generate abstractive summaries for multiple documents by extracting semantic information from texts. The concept of Basic Semantic Unit (BSU) is defined to describe the semantics of an event or action. A semantic link network on BSUs is constructed to capture the semantic information of texts. Summary structure is planned with sentences generated based on the semantic link network. Experiments demonstrate that the approach is effective in generating informative, coherent and compact summaries."} {"_id": "ec1df457a2be681227f79de3ce932fccb65ee2bb", "title": "Opportunistic Computation Offloading in Mobile Edge Cloud Computing Environments", "text": "The dynamic mobility and limitations in computational power, battery resources, and memory availability are the main bottlenecks in fully harnessing mobile devices as data mining platforms. Therefore, mobile devices are augmented with cloud resources in mobile edge cloud computing (MECC) environments to seamlessly execute data mining tasks. The MECC infrastructures provide compute, network, and storage services within one-hop wireless distance from mobile devices to minimize the latency in communication as well as provide localized computations to reduce the burden on federated cloud systems. However, when and how to offload the computation is a hard problem. In this paper, we present an opportunistic computation offloading scheme to efficiently execute data mining tasks in MECC environments. The scheme determines the suitable execution mode after analyzing the amount of unprocessed data, privacy configurations, contextual information, and available on-board local resources (memory, CPU, and battery power). We develop a mobile application for online activity recognition and evaluate the proposed scheme using the event data stream of 5 million activities collected from 12 users for 15 days. The experiments show significant improvement in execution time and battery power consumption, resulting in 98% data reduction."} {"_id": "5f0b96c18ac64affb923b28938ac779d7d1e1b81", "title": "Large-scale audio feature extraction and SVM for acoustic scene classification", "text": "This work describes a system for acoustic scene classification using large-scale audio feature extraction. It is our contribution to the Scene Classification track of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (D-CASE). The system classifies 30 second long recordings of 10 different acoustic scenes. From the highly variable recordings, a large number of spectral, cepstral, energy and voicing-related audio features are extracted. Using a sliding window approach, classification is performed on short windows. SVMs are used to classify these short segments, and a majority voting scheme is employed to get a decision for longer recordings. On the official development set of the challenge, an accuracy of 73% is achieved. SVMs are compared with a nearest neighbour classifier and an approach called Latent Perceptual Indexing, whereby SVMs achieve the best results.
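(Illustrative aside, not from the paper: the majority voting step over per-window predictions is one line of Python; the scene labels below are toy values.)

from collections import Counter

def classify_recording(window_preds):
    # Majority vote over per-window SVM predictions for a whole recording.
    return Counter(window_preds).most_common(1)[0][0]

print(classify_recording(["park", "park", "street", "park"]))  # -> park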
A feature analysis using the t-statistic shows that Mel spectra are the most relevant features."} {"_id": "80f83fa19bd5b7d7e94781a1de91609dfbd62937", "title": "A fractional order fuzzy PID controller for binary distillation column control", "text": "Expert and intelligent control schemes have recently emerged as a promising, robust solution that can efficiently deal with the nonlinearities, along with various types of modelling uncertainties, present in different real-world systems, e.g., the binary distillation column. This paper proposes an intelligent control system which takes the form of a Fractional Order Fuzzy Proportional\u2013Integral\u2013Derivative (FOFPID) controller, investigated as a solution to deal with the complex dynamic nature of the distillation column. The FOFPID controller is an extension of an existing formula-based self-tuning fuzzy Proportional-Integral controller structure, which varies its gains at run time in accordance with the instantaneous error and rate of change of error. The FOFPID controller is a Takagi-Sugeno (TS) model based fuzzy adaptive controller comprising non-integer-order integration and differentiation operators. It has been observed that inclusion of the non-integer-order operators made the controller scheme more robust. For the performance evaluation of the proposed scheme, the performance of the FOFPID controller is compared with that of its integer-order counterpart, a Fuzzy Proportional\u2013Integral\u2013Derivative (FPID) controller. The parameters of both controllers were optimized for minimum integral of absolute error (IAE) using a bio-inspired global optimization algorithm, the Genetic Algorithm (GA). Intensive LabVIEW simulation studies were performed, which included setpoint tracking with and without uncertainties, disturbance rejection, and noise suppression investigations. For testing the parameter uncertainty handling capability of the proposed controller, uncertain and time-varying relative volatility and an uncertain tray hydraulic constant were applied. Also, for the disturbance rejection studies, intensive simulations were conducted, which included the two most common causes of disturbance, i.e., variation in feed composition and variation in feed flow rate. All the simulation investigations clearly suggested that the FOFPID controller provided superior performance over the FPID controller for each case study, i.e., setpoint tracking, disturbance rejection, noise suppression and parameter uncertainties."} {"_id": "8977709773a432a17bf13dfdee420bf41fdb63fd", "title": "A three-component scattering model for polarimetric SAR data", "text": "An approach has been developed that involves the fit of a combination of three simple scattering mechanisms to polarimetric SAR observations. The mechanisms are canopy scatter from a cloud of randomly oriented dipoles, even- or double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants and Bragg scatter from a moderately rough surface. This composite scattering model is used to describe the polarimetric backscatter from naturally occurring scatterers. The model is shown to describe the behavior of polarimetric backscatter from tropical rain forests quite well by applying it to data from NASA/Jet Propulsion Laboratory\u2019s (JPL\u2019s) airborne polarimetric synthetic aperture radar (AIRSAR) system.
The model fit allows clear discrimination between flooded and non-flooded forest and between forested and deforested areas, for example. The model is also shown to be usable as a predictive tool to estimate the effects of forest inundation and disturbance on the fully polarimetric radar signature. An advantage of this model-fit approach is that the scattering contributions from the three basic scattering mechanisms can be estimated for clusters of pixels in polarimetric SAR images. Furthermore, it is shown that the contributions of the three scattering mechanisms to the HH, HV, and VV backscatter can be calculated from the model fit. Finally, this model-fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem."} {"_id": "e22ab00eaae9e17f2c4c69883b8c111dba5bc646", "title": "Stacked Patch Antenna With Dual-Polarization and Low Mutual Coupling for Massive MIMO", "text": "Massive multiple input and multiple output (MIMO) has attracted significant interest in both academia and industry. It has been considered as one of the most promising technologies for 5G wireless systems. The large-scale antenna array for base stations naturally becomes the key to deploying Massive MIMO technologies. In this communication, we present a dual-polarized antenna array with 144 ports for Massive MIMO operating at 3.7 GHz. The proposed array consists of 18 low profile subarrays. Each subarray consists of four single units. Each single antenna unit consists of one vertically polarized port and one horizontally polarized port connected to power splitters, which serve as a feeding network. A stacked patch design is used to construct the single unit with the feeding network, which gives higher gain and lower mutual coupling within the size of a conventional dual-port patch antenna. Simulation results of the proposed single antenna unit, sub-array, and Massive MIMO array are verified by measurement."} {"_id": "9f8eb04dbafdfda997ac5e06cd6c521f82bf4e4c", "title": "UIMA: an architectural approach to unstructured information processing in the corporate research environment", "text": "IBM Research has over 200 people working on Unstructured Information Management (UIM) technologies with a strong focus on Natural Language Processing (NLP). These researchers are engaged in activities ranging from natural language dialog, information retrieval, topic-tracking, named-entity detection, document classification and machine translation to bioinformatics and open-domain question answering. An analysis of these activities strongly suggested that improving the organization\u2019s ability to quickly discover each other's results and rapidly combine different technologies and approaches would accelerate scientific advance. Furthermore, the ability to reuse and combine results through a common architecture and a robust software framework would accelerate the transfer of research results in NLP into IBM\u2019s product platforms. Market analyses indicating a growing need to process unstructured information, specifically multilingual, natural language text, coupled with IBM Research\u2019s investment in NLP, led to the development of a middleware architecture for processing unstructured information dubbed UIMA. At the heart of UIMA are powerful search capabilities and a data-driven framework for the development, composition and distributed deployment of analysis engines.
In this paper we give a general introduction to UIMA focusing on the design points of its analysis engine architecture and we discuss how UIMA is helping to accelerate research and technology transfer. 1 Motivation and Objectives We characterize a UIM application as a software system that analyzes large volumes of unstructured information while consulting structured information sources to discover, organize and present knowledge relevant to the application\u2019s end-user. We define structured information as information whose intended meaning is unambiguous and explicitly represented in the structure or format of the data. The canonical example of structured information is a relational database table. We define unstructured information as information whose intended meaning is only loosely implied by its form. The canonical example is a natural language document; other examples include voice, audio, image and video. Market indicators suggest a rapid increase in the use of text and voice analytics to a growing class of commercially viable applications (Roush 2003). National security interests are driving applications to involve an even broader set of UIM technologies including image and video analytics. Emerging and commercially important UIM application areas include life sciences, e-commerce, technical support, advanced search, and national and business intelligence. In six major labs spread out over the globe, IBM Research has over 200 people working on Unstructured Information Management (UIM) technologies with a primary focus on Natural Language Processing (NLP). These researchers are engaged in activities ranging from natural language dialog, information retrieval, topic-tracking, named-entity detection, document classification and machine translation to bioinformatics and open-domain question answering. Each group is developing different technical and engineering approaches to process unstructured information in pursuit of their specific research agenda in the UIM space. While IBM Research\u2019s independent UIM technologies fare well in scientific venues, accelerated development of integrated, robust applications is becoming increasingly important to IBM\u2019s commercial interests. Furthermore, the rapid integration of different algorithms and techniques is also a means to advance scientific results. For example, experiments in statistical named-entity detection or machine translation may benefit from using a deep parser as a feature generator. Facilitating the rapid integration of an independently produced deep parser with statistical algorithm implementations can accelerate the research cycle time as well as the time to product. Some of the challenges that face a large research organization in efficiently leveraging and integrating its (and third-party) UIM assets include: \u2022 Organizational Structure. IBM Research is largely organized in three to five person teams geographically dispersed among different labs throughout the world. This structure makes it more difficult to jointly develop and reuse technologies. \u2022 Skills Alignment. There are inefficiencies in having PhDs trained and specialized in, for example, computational linguistics or statistical machine learning, develop and apply the requisite systems expertise to deliver their work into product architectures and/or make it easily usable by other researchers.
\u2022 Development Inefficiencies. Research results are delivered in a myriad of interfaces and technologies. Reuse and integration often requires too much effort dedicated to technical adaptation. Too often work is replicated. \u2022 Speed to product. Software products introduce their own architectures, interfaces and implementation technologies. Technology transfer from the lab to the products often becomes a significant engineering or rewrite effort for researchers. A common framework and engineering discipline could help address the challenges listed above. The Unstructured Information Management Architecture (UIMA) project was initiated within IBM Research based on the premise that a common software architecture for developing, composing and delivering UIM technologies, if adopted by the organization, would facilitate technology reuse and integration, enable quicker scientific experimentation and speed the path to product. IBM\u2019s Unstructured Information Management Architecture (UIMA) high-level objectives are twofold: 1) Accelerate scientific advances by enabling the rapid combination of UIM technologies (e.g., natural language processing, audio and video analysis, information retrieval, etc.). 2) Accelerate transfer of UIM technologies to product by providing an architecture and associated framework implementations that promote reuse and support flexible deployment options. 2 The Unstructured Information Management Architecture (UIMA) UIMA is a software architecture for developing UIM applications. The UIMA high-level architecture, illustrated in Figure 1, defines the roles, interfaces and communications of large-grained components essential for UIM applications. These include components capable of analyzing unstructured artifacts, integrating and accessing structured sources and storing, indexing and searching for artifacts based on discovered semantic content resulting from analyses. A primary design point for UIMA is that it should admit the implementation of middleware frameworks that provide a component-based infrastructure for developing UIM applications suitable for vastly different deployment environments. These should range from lightweight and embeddable implementations to highly scalable implementations that are meant to exploit clusters of machines and provide high throughput, high availability service-based offerings.
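To make the component roles in the UIMA record above concrete, here is a toy sketch of the analysis-engine idea in Python: annotators share a common analysis structure (CAS) holding the artifact and its annotations, and an aggregate flow runs them in order. UIMA itself is a Java middleware framework; every name below is hypothetical and only illustrates the component-based design point, not UIMA's actual API.

```python
import re
from dataclasses import dataclass, field

# A toy "common analysis structure" (CAS): the artifact plus typed annotations.
@dataclass
class CAS:
    text: str
    annotations: list = field(default_factory=list)  # (type, begin, end) spans

class Annotator:
    """Analysis-engine interface: read the CAS, add annotations."""
    def process(self, cas: CAS) -> None:
        raise NotImplementedError

class Tokenizer(Annotator):
    def process(self, cas):
        for m in re.finditer(r"\w+", cas.text):
            cas.annotations.append(("Token", m.start(), m.end()))

class CapitalizedNameAnnotator(Annotator):
    """Stand-in for a named-entity detector: flags capitalized tokens."""
    def process(self, cas):
        for _, b, e in [a for a in cas.annotations if a[0] == "Token"]:
            if cas.text[b].isupper():
                cas.annotations.append(("Name", b, e))

def run_pipeline(text, annotators):
    cas = CAS(text)
    for a in annotators:      # aggregate engine: fixed flow through delegates
        a.process(cas)
    return cas

cas = run_pipeline("IBM Research built UIMA.", [Tokenizer(), CapitalizedNameAnnotator()])
print(cas.annotations)
```

The design point this illustrates is that reuse comes from the shared CAS contract: any annotator that reads and writes the same structure can be dropped into the flow without adapting to its neighbors.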
While the primary focus of current UIMA framework implementations is squarely on natural language \u2026 [Figure 1: UIMA high-level architecture. A UIM application combines unstructured information analysis (analysis engines and collection-level analysis under a Collection Processing Manager (CPM)), acquisition via crawlers, a document, collection and metadata store, an analysis engine directory, structured information access through knowledge source adapters to knowledge and data bases, and a semantic search engine with indices over documents, collections and metadata.]"} {"_id": "18cdf57195143ed3a3681440ddfb6f1fa839475b", "title": "On composition of a federated web search result page: using online users to provide pairwise preference for heterogeneous verticals", "text": "Modern web search engines are federated --- a user query is sent to the numerous specialized search engines called verticals like web (text documents), News, Image, Video, etc. and the results returned by these engines are then aggregated and composed into a search result page (SERP) and presented to the user. For a specific query, multiple verticals could be relevant, which makes the placement of these vertical results within blocks of textual web results challenging: how do we represent, assess, and compare the relevance of these heterogeneous entities?\n In this paper we present a machine-learning framework for SERP composition in the presence of multiple relevant verticals. First, instead of using the traditional label generation method of human judgment guidelines and trained judges, we use a randomized online auditioning system that allows us to evaluate triples of the form <query, web block, vertical>. We use a pairwise click preference to evaluate whether the web block or the vertical block had better user engagement. Next, we use a hinged feature vector that contains features from the web block to create a common reference frame and augment it with features representing the specific vertical judged by the user. A gradient boosted decision tree is then learned from the training data. For the final composition of the SERP, we place a vertical result at a slot if the score is higher than a computed threshold. The thresholds are algorithmically determined to guarantee specific coverage for verticals at each slot.\n We use correlation of clicks as our offline metric and show that the click-preference target has a better correlation than human-judgment-based models. Furthermore, on online tests for News and Image verticals we show higher user engagement for both head and tail queries."} {"_id": "271d0fd591c44a018b62e9ba091ee0bd6697af06", "title": "Soft start-up for high frequency LLC resonant converter with optimal trajectory control", "text": "This paper investigates soft start-up for a high-frequency LLC resonant converter with optimal trajectory control. Two methods are proposed to realize soft start-up for the high-frequency LLC converter using commercial low-cost microcontrollers (MCUs). Both methods can achieve soft start-up with minimum stress and optimal energy delivery. One method is a mixed-signal implementation that senses the resonant tank to minimize the digital delay. The other is a digital implementation with a look-up table.
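The SERP-composition rule in the federated-search record above (place a vertical in a slot only when its model score clears a slot threshold chosen to guarantee coverage) admits a compact sketch. This is an illustrative reconstruction under assumed data shapes, not the paper's system:

```python
import numpy as np

def fit_slot_thresholds(scores_by_slot, target_coverage):
    """Pick one threshold per slot so that, on held-out traffic, a vertical
    is shown in that slot for roughly target_coverage[slot] of queries."""
    return {slot: float(np.quantile(s, 1.0 - target_coverage[slot]))
            for slot, s in scores_by_slot.items()}

def compose_serp(web_blocks, vertical_scores, thresholds):
    """vertical_scores: {slot: (vertical_name, model_score)} for one query."""
    page = []
    for slot, block in enumerate(web_blocks):
        v = vertical_scores.get(slot)
        if v is not None and v[1] > thresholds[slot]:
            page.append(("vertical", v[0]))   # vertical wins this slot
        page.append(("web", block))
    return page

# Hypothetical held-out model scores per slot and desired per-slot coverage.
thr = fit_slot_thresholds({0: np.random.rand(1000), 1: np.random.rand(1000)},
                          {0: 0.30, 1: 0.10})
print(compose_serp(["w1", "w2"], {0: ("News", 0.95)}, thr))
```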
Experimental results are demonstrated on a 500 kHz, 1 kW, 400 V/12 V LLC converter."} {"_id": "129371e185855a115d23a36cf78d5fcfb4fdb9ed", "title": "Grammar Variational Autoencoder", "text": "smiles -> chain; atom -> bracket_atom | aliphatic_organic | aromatic_organic; aliphatic_organic -> 'B' | 'C' | 'N' | 'O' | 'S' | 'P' | 'F' | 'I' | 'Cl' | 'Br'; aromatic_organic -> 'c' | 'n' | 'o' | 's'; bracket_atom -> '[' BAI ']'; BAI -> isotope symbol BAC | symbol BAC | isotope symbol | symbol; BAC -> chiral BAH | BAH | chiral; BAH -> hcount BACH | BACH | hcount; BACH -> charge class | charge | class; symbol -> aliphatic_organic | aromatic_organic; isotope -> DIGIT | DIGIT DIGIT | DIGIT DIGIT DIGIT; DIGIT -> '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8'; chiral -> '@' | '@@'; hcount -> 'H' | 'H' DIGIT; charge -> '-' | '-' DIGIT | '-' DIGIT DIGIT | '+' | '+' DIGIT | '+' DIGIT DIGIT; bond -> '-' | '=' | '#' | '/' | '\\'; ringbond -> DIGIT | bond DIGIT; branched_atom -> atom | atom RB | atom BB | atom RB BB; RB -> RB ringbond | ringbond; BB -> BB branch | branch; branch -> '(' chain ')' | '(' bond chain ')'; chain -> branched_atom | chain branched_atom | chain bond branched_atom"} {"_id": "1324708ddf12d7940bc2377131e54d97111a1197", "title": "Fashion-focused creative commons social dataset", "text": "In this work, we present a fashion-focused Creative Commons dataset, which is designed to contain a mix of general images as well as a large component of images that are focused on fashion (i.e., relevant to particular clothing items or fashion accessories). The dataset contains 4810 images and related metadata. Furthermore, a ground truth for image tags is presented. Ground truth generation for large-scale datasets is a necessary but expensive task. Traditional expert-based approaches have become an expensive and non-scalable solution. For this reason, we turn to crowdsourcing techniques in order to collect ground truth labels; in particular we make use of the commercial crowdsourcing platform, Amazon Mechanical Turk (AMT). Two different groups of annotators (i.e., trusted annotators known to the authors and crowdsourcing workers on AMT) participated in the ground truth creation. Annotation agreement between the two groups is analyzed. Applications of the dataset in different contexts are discussed. This dataset contributes to research areas such as crowdsourcing for multimedia, multimedia content analysis, and design of systems that can elicit fashion preferences from users."} {"_id": "ede1e7c2dba44f2094307f17322babce38031577", "title": "Memory versus non-linearity in reservoirs", "text": "Reservoir Computing (RC) is increasingly being used as a conceptually simple yet powerful method for exploiting the temporal processing of recurrent neural networks (RNN).
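For the Grammar Variational Autoencoder record above, the listed SMILES productions are the raw material: the model encodes and decodes sequences of production choices rather than raw characters. Purely as an illustration of how such productions generate syntactically valid strings, here is a small sampler over a condensed subset of the grammar (the grammar dictionary and depth limit are our own simplifications, not the paper's code):

```python
import random

# A condensed subset of the SMILES productions listed above; each key expands
# to one randomly chosen alternative. Terminals are plain strings.
GRAMMAR = {
    "chain":   [["branched_atom"], ["chain", "branched_atom"],
                ["chain", "bond", "branched_atom"]],
    "branched_atom": [["atom"], ["atom", "branch"]],
    "branch":  [["(", "chain", ")"], ["(", "bond", "chain", ")"]],
    "atom":    [["aliphatic_organic"], ["aromatic_organic"]],
    "aliphatic_organic": [["C"], ["N"], ["O"], ["S"], ["F"], ["Cl"], ["Br"]],
    "aromatic_organic":  [["c"], ["n"], ["o"], ["s"]],
    "bond":    [["-"], ["="], ["#"]],
}

def expand(symbol, depth=0, max_depth=8):
    if symbol not in GRAMMAR:           # terminal symbol
        return symbol
    alts = GRAMMAR[symbol]
    if depth >= max_depth:              # force the shortest alternative
        alts = [min(alts, key=len)]
    return "".join(expand(s, depth + 1, max_depth)
                   for s in random.choice(alts))

print(expand("chain"))  # e.g. 'CC(=O)N'; syntactically valid, chemistry not guaranteed
```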
However, because fundamental insight into the exact functionality of the reservoir is still lacking, in practice there is still a lot of manual parameter tweaking or brute-force searching involved in optimizing these systems. In this contribution we aim to enhance the insight into reservoir operation by experimentally studying the interplay of the two crucial reservoir properties, memory and non-linear mapping. For this, we introduce a novel metric which measures the deviation of the reservoir from a linear regime and use it to define different regions of dynamical behaviour. Next, we study the influence of two important reservoir parameters, input scaling and spectral radius, on two properties of an artificial task, namely memory and non-linearity."} {"_id": "3d66a00d2fedbdbd6c709cdd2ad764c7963614a4", "title": "Improving feature selection techniques for machine learning", "text": "As a commonly used technique in data preprocessing for machine learning, feature selection identifies important features and removes irrelevant, redundant or noisy features to reduce the dimensionality of the feature space. It improves efficiency, accuracy and comprehensibility of the models built by learning algorithms. Feature selection techniques have been widely employed in a variety of applications, such as genomic analysis, information retrieval, and text categorization. Researchers have introduced many feature selection algorithms with different selection criteria. However, it has been discovered that no single criterion is best for all applications. We proposed a hybrid feature selection framework based on genetic algorithms (GAs) that employs a target learning algorithm to evaluate features, a wrapper method. We call it the hybrid genetic feature selection (HGFS) framework. The advantages of this approach include the ability to accommodate multiple feature selection criteria and find small subsets of features that perform well for the target algorithm. The experiments on genomic data demonstrate that ours is a robust and effective approach that can find subsets of features with higher classification accuracy and/or smaller size compared to each individual feature selection algorithm. A common characteristic of text categorization tasks is multi-label classification with a great number of features, which makes wrapper methods time-consuming and impractical. We proposed a simple filter (non-wrapper) approach called the Relation Strength and Frequency Variance (RSFV) measure. The basic idea is that informative features are those that are highly correlated with the class and distribute most differently among all classes. The approach is compared with two well-known feature selection methods in the experiments on two standard text corpora. The experiments show that RSFV generates equal or better performance than the others in many cases. INDEX WORDS: Feature selection, Gene selection, Term selection, Dimension Reduction, Genetic algorithm, Text categorization, Text classification"} {"_id": "ccc636a980fa307262aef569667a5bcf568e32af", "title": "Calculating the maximum execution time of real-time programs", "text": "In real-time systems, the timing behavior is an important property of each task. It has to be guaranteed that the execution of a task does not take longer than the specified amount of time. Thus, knowledge of the maximum execution time of programs is of utmost importance.
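A minimal sketch of the wrapper idea behind the HGFS framework described above: a genetic algorithm searches binary feature masks, and each mask's fitness is the cross-validated accuracy of the target learner on the selected columns. The population size, rates, KNN target learner, and synthetic data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)  # stand-in for genomic data

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)   # the target learner (wrapper)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))  # random initial masks
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[::-1][:10]]   # keep the better half
    kids = []
    for _ in range(10):                          # crossover + mutation
        p1, p2 = elite[rng.integers(10)], elite[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(X.shape[1]) < 0.02
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([elite, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "cv acc:", fitness(best))
```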
This paper discusses the problems of calculating the maximum execution time (MAXT: MAximum eXecution Time). It shows the preconditions which have to be met before the MAXT of a task can be calculated. Rules for the MAXT calculation are described. Triggered by the observation that in most cases the calculated MAXT far exceeds the actual execution time, new language constructs are introduced. These constructs allow programmers to put into their programs more information about the behavior of the algorithms implemented and help to improve the self-checking property of programs. As a consequence, the quality of MAXT calculations is improved significantly. In a realistic example, an improvement factor of 11 has been achieved."} {"_id": "3ec05afd1eb4bb4d5ec17a9e0b3d09f5cbc30304", "title": "Comparative Study of Chronic Kidney Disease Prediction using KNN and SVM", "text": "Chronic kidney disease (CKD), also known as chronic renal disease, involves conditions that damage your kidneys and decrease their ability to keep you healthy. You may develop complications like high blood pressure, anemia (low blood count), weak bones, poor nutritional health and nerve damage. Early detection and treatment can often keep chronic kidney disease from getting worse. Data mining is the term used for knowledge discovery from large databases. The task of data mining, making use of historical data to discover regular patterns and improve future decisions, follows from the convergence of several recent trends: the lessening cost of large data storage devices and the ever-increasing ease of collecting data over networks; the expansion of robust and efficient machine learning algorithms to process this data; and the lessening cost of computational power, enabling the use of computationally intensive methods for data analysis. Machine learning has already created practical applications in such areas as analyzing medical science outcomes, detecting fraud, detecting fake users etc. Various data mining classification approaches and machine learning algorithms are applied for the prediction of chronic diseases. The objective of this research work is to introduce a new decision support system to predict chronic kidney disease. The aim of this work is to compare the performance of the Support vector machine (SVM) and K-Nearest Neighbour (KNN) classifiers on the basis of accuracy, precision and execution time for CKD prediction. From the experimental results it is observed that the performance of the KNN classifier is better than SVM. Keywords\u2014Data Mining, Machine learning, Chronic kidney disease, Classification, K-Nearest Neighbour, Support vector machine."} {"_id": "7f1aefe7eeb759bf05c27438c1687380f7b6b06f", "title": "Prader-Willi syndrome: a review of clinical, genetic, and endocrine findings", "text": "INTRODUCTION\nPrader-Willi syndrome (PWS) is a multisystemic complex genetic disorder caused by lack of expression of genes on the paternally inherited chromosome 15q11.2-q13 region. There are three main genetic subtypes in PWS: paternal 15q11-q13 deletion (65-75 % of cases), maternal uniparental disomy 15 (20-30 % of cases), and imprinting defect (1-3 %). DNA methylation analysis is the only technique that will diagnose PWS in all three molecular genetic classes and differentiate PWS from Angelman syndrome. Clinical manifestations change with age, with hypotonia and a poor suck resulting in failure to thrive during infancy.
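The KNN-versus-SVM comparison in the CKD record above reduces to a short scikit-learn script reporting the same three quantities (accuracy, precision, execution time). The synthetic data below is a stand-in, since the actual CKD dataset is not bundled here:

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score

# Synthetic stand-in roughly matching the UCI CKD data's shape (400 x 24).
X, y = make_classification(n_samples=400, n_features=24, weights=[0.4, 0.6],
                           random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=1)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    t0 = time.perf_counter()
    clf.fit(Xtr, ytr)
    pred = clf.predict(Xte)
    dt = time.perf_counter() - t0
    print(f"{name}: acc={accuracy_score(yte, pred):.3f} "
          f"prec={precision_score(yte, pred):.3f} time={dt*1000:.1f} ms")
```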
As the individual ages, other features such as short stature, food seeking with excessive weight gain, developmental delay, cognitive disability and behavioral problems become evident. The phenotype is likely due to hypothalamic dysfunction, which is responsible for hyperphagia, temperature instability, high pain threshold, hypersomnia and multiple endocrine abnormalities including growth hormone and thyroid-stimulating hormone deficiencies, hypogonadism and central adrenal insufficiency. Obesity and its complications are the major causes of morbidity and mortality in PWS.\n\n\nMETHODS\nAn extensive review of the literature was performed and interpreted within the context of clinical practice and frequently asked questions from referring physicians and families to include the current status of the cause and diagnosis of the clinical, genetic and endocrine findings in PWS.\n\n\nCONCLUSIONS\nUpdated information regarding the early diagnosis and management of individuals with Prader-Willi syndrome is important for all physicians and will be helpful in anticipating and managing or modifying complications associated with this rare obesity-related disorder."} {"_id": "dd97a44fdc5923d1d3fd9d7c3dc300fb6f4f04ed", "title": "Spotlight: Optimizing Device Placement for Training Deep Neural Networks", "text": "Training deep neural networks (DNNs) requires an increasing amount of computation resources, and it becomes typical to use a mixture of GPU and CPU devices. Due to the heterogeneity of these devices, a recent challenge is how each operation in a neural network can be optimally placed on these devices, so that the training process can take the shortest amount of time possible. The current state-of-the-art solution uses reinforcement learning based on the policy gradient method, and it suffers from suboptimal training times. In this paper, we propose Spotlight, a new reinforcement learning algorithm based on proximal policy optimization, designed specifically for finding an optimal device placement for training DNNs. The design of our new algorithm relies upon a new model of the device placement problem: by modeling it as a Markov decision process with multiple stages, we are able to prove that Spotlight achieves a theoretical guarantee on performance improvements. We have implemented Spotlight on the CIFAR-10 benchmark and deployed it on the Google Cloud platform. Extensive experiments have demonstrated that the training time with placements recommended by Spotlight is 60.9% of that recommended by the policy gradient method."} {"_id": "f659b381e2074262453e7b92071abe07443640b3", "title": "Percutaneous interventions for left atrial appendage exclusion: options, assessment, and imaging using 2D and 3D echocardiography.", "text": "Percutaneous left atrial appendage (LAA) exclusion is an evolving treatment to prevent embolic events in patients with nonvalvular atrial fibrillation. In the past few years multiple percutaneous devices have been developed to exclude the LAA from the body of the left atrium and thus from the systemic circulation. Two- and 3-dimensional transesophageal echocardiography (TEE) is used to assess the LAA anatomy and its suitability for percutaneous closure to select the type and size of the closure device and to guide the device implantation procedure in conjunction with fluoroscopy. In addition, 2- and 3-dimensional TEE is also used to assess the effectiveness of device implantation acutely and on subsequent follow-up examination.
Knowledge of the implantation options that are currently available, along with their specific characteristics, is essential for choosing the appropriate device for a given patient with a specific LAA anatomy. We present the currently available LAA exclusion devices and the echocardiographic imaging approaches for evaluation of the LAA before, during, and after LAA occlusion."} {"_id": "d0596a92400fa2268ee502d682a5b72fca4cc678", "title": "Biomedical ontology alignment: an approach based on representation learning", "text": "BACKGROUND\nWhile representation learning techniques have shown great promise in application to a number of different NLP tasks, they have had little impact on the problem of ontology matching. Unlike past work that has focused on feature engineering, we present a novel representation learning approach that is tailored to the ontology matching task. Our approach is based on embedding ontological terms in a high-dimensional Euclidean space. This embedding is derived on the basis of a novel phrase retrofitting strategy through which semantic similarity information becomes inscribed onto fields of pre-trained word vectors. The resulting framework also incorporates a novel outlier detection mechanism based on a denoising autoencoder that is shown to improve performance.\n\n\nRESULTS\nAn ontology matching system derived using the proposed framework achieved an F-score of 94% on an alignment scenario involving the Adult Mouse Anatomical Dictionary and the Foundational Model of Anatomy ontology (FMA) as targets. This compares favorably with the best performing systems on the Ontology Alignment Evaluation Initiative anatomy challenge. We performed additional experiments on aligning FMA to NCI Thesaurus and to SNOMED CT based on a reference alignment extracted from the UMLS Metathesaurus. Our system obtained overall F-scores of 93.2% and 89.2% for these experiments, thus achieving state-of-the-art results.\n\n\nCONCLUSIONS\nOur proposed representation learning approach leverages terminological embeddings to capture semantic similarity. Our results provide evidence that the approach produces embeddings that are especially well tailored to the ontology matching task, demonstrating a novel pathway for the problem."} {"_id": "ed130a246a3ff589096ac2c20babdd986068e528", "title": "The Media and Technology Usage and Attitudes Scale: An empirical investigation", "text": "Current approaches to measuring people's everyday usage of technology-based media and other computer-related activities have proved to be problematic as they use varied outcome measures, fail to measure behavior in a broad range of technology-related domains and do not take into account recently developed types of technology including smartphones. In the present study, a wide variety of items covering a range of up-to-date technology and media usage behaviors was developed. Sixty-six items concerning technology and media usage, along with 18 additional items assessing attitudes toward technology, were administered to two independent samples of individuals, comprising 942 participants. Factor analyses were used to create 11 usage subscales representing smartphone usage, general social media usage, Internet searching, e-mailing, media sharing, text messaging, video gaming, online friendships, Facebook friendships, phone calling, and watching television in addition to four attitude-based subscales: positive attitudes, negative attitudes, technological anxiety/dependence, and attitudes toward task-switching.
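The matching core of the ontology-alignment record above can be sketched as cosine-similarity retrieval over term embeddings; the phrase-retrofitting and denoising-autoencoder stages are omitted, and the vectors and names below are placeholders:

```python
import numpy as np

def match_terms(src_vecs, tgt_vecs, tgt_names, threshold=0.8):
    """Greedy 1-best alignment by cosine similarity of term embeddings."""
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    t = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = s @ t.T                       # pairwise cosine similarities
    best = sim.argmax(axis=1)
    return [(i, tgt_names[j], float(sim[i, j]))
            for i, j in enumerate(best) if sim[i, j] >= threshold]

# Hypothetical 300-d phrase embeddings for anatomy terms.
rng = np.random.default_rng(2)
src, tgt = rng.normal(size=(5, 300)), rng.normal(size=(7, 300))
print(match_terms(src, tgt, [f"FMA:{k}" for k in range(7)], threshold=-1.0))
```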
All subscales showed strong reliabilities, and relationships between the subscales and pre-existing measures of daily media usage and Internet addiction were as predicted. Given the reliability and validity results, the new Media and Technology Usage and Attitudes Scale was suggested as a method of measuring media and technology involvement across a variety of types of research studies, either as a single 60-item scale or as any subset of the 15 subscales."} {"_id": "31864e13a9b3473ebb07b4f991f0ae3363517244", "title": "A Computational Approach to Edge Detection", "text": "This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge."} {"_id": "874b9e74fb000c6554c6941f73ddd4c66c6de38d", "title": "Exploring steganography: Seeing the unseen", "text": "Steganography is the art of hiding information in ways that prevent the detection of hidden messages. It includes a vast array of secret communications methods that conceal the message's very existence. These methods include invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications. Steganography and cryptography are cousins in the spycraft family: cryptography scrambles a message so it cannot be understood while steganography hides the message so it cannot be seen. In this article the authors discuss image files and how to hide information in them, and discuss results obtained from evaluating available steganographic software. They argue that steganography by itself does not ensure secrecy, but neither does simple encryption. If these methods are combined, however, stronger encryption methods result. If an encrypted message is intercepted, the interceptor knows the text is an encrypted message. But with steganography, the interceptor may not know that a hidden message even exists.
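As a concrete instance of the image-based hiding this steganography record discusses, the simplest scheme applied by much steganographic software is least-significant-bit (LSB) embedding. A minimal sketch, illustrative rather than taken from any specific evaluated tool:

```python
import numpy as np

def embed(cover, payload_bits):
    """Hide a bit string in the least-significant bits of an 8-bit image."""
    flat = cover.reshape(-1).copy()
    assert len(payload_bits) <= flat.size, "message too long for this cover"
    bits = np.frombuffer(payload_bits.encode(), dtype=np.uint8) - ord("0")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

def extract(stego, n_bits):
    return "".join((stego.reshape(-1)[:n_bits] & 1).astype(str))

cover = np.random.default_rng(3).integers(0, 256, (8, 8), dtype=np.uint8)
stego = embed(cover, "10110011")
print(extract(stego, 8))                              # -> '10110011'
print(int(np.abs(stego.astype(int) - cover).max()))   # distortion <= 1 level
```

The per-pixel distortion is at most one intensity level, which is why the hidden message is imperceptible; combining this with prior encryption of the payload gives the layered protection the article argues for.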
For a brief look at how steganography evolved, a sidebar titled \"Steganography: Some History\" is included."} {"_id": "8c82c93dfc7d3672e58efd982a23791a8a419053", "title": "Techniques for Data Hiding", "text": "Data hiding, a form of steganography, embeds data into digital media for the purpose of identification, annotation, and copyright. Several constraints affect this process: the quantity of data to be hidden, the need for invariance of these data under conditions where a \"host\" signal is subject to distortions, e.g., lossy compression, and the degree to which the data must be immune to interception, modification, or removal by a third party. We explore both traditional and novel techniques for addressing the data-hiding process and evaluate these techniques in light of three applications: copyright protection, tamper-proofing, and augmentation data embedding. Digital representation of media facilitates access and potentially improves the portability, efficiency, and accuracy of the information presented. Undesirable effects of facile data access include an increased opportunity for violation of copyright and tampering with or modification of content. The motivation for this work includes the provision of protection of intellectual property rights, an indication of content manipulation, and a means of annotation. Data hiding represents a class of processes used to embed data, such as copyright information, into various forms of media such as image, audio, or text with a minimum amount of perceivable degradation to the \"host\" signal; i.e., the embedded data should be invisible and inaudible to a human observer. Note that data hiding, while similar to compression, is distinct from encryption. Its goal is not to restrict or regulate access to the host signal, but rather to ensure that embedded data remain inviolate and recoverable. Two important uses of data hiding in digital media are to provide proof of the copyright, and assurance of content integrity. Therefore, the data should stay hidden in a host signal, even if that signal is subjected to manipulation as degrading as filtering, resampling, cropping, or lossy data compression. Other applications of data hiding, such as the inclusion of augmentation data, need not \u2026
This approach can be categorized under feature coding methods. This method can be used for Persian/Arabic watermarking. Our method has been implemented in the Java programming language."} {"_id": "b41c45b2ca0c38a4514f0779395ebdf3d34cecc0", "title": "Applications of data hiding in digital images", "text": ""} {"_id": "9859eaa1e72bffa7474c0d1ee63a5fbb041b314e", "title": "At the end of the 14C time scale--the Middle to Upper Paleolithic record of western Eurasia.", "text": "The dynamics of change underlying the demographic processes that led to the replacement of Neandertals by Anatomically Modern Humans (AMH) and the emergence of what are recognized as Upper Paleolithic technologies and behavior can only be understood with reference to the underlying chronological framework. This paper examines the European chronometric (mainly radiocarbon-based) record for the period between ca. 40 and 30 ka 14C BP and proposes a relatively rapid transition within some 2,500 years. This can be summarized in the following falsifiable hypotheses: (1) final Middle Paleolithic (FMP) \"transitional\" industries (Uluzzian, Chatelperronian, leaf-point industries) were made by Neandertals and date predominantly to between ca. 41 and 38 ka 14C BP, but not younger than 35/34 ka 14C BP; (2) initial (IUP) and early (EUP) Upper Paleolithic \"transitional\" industries (Bachokirian, Bohunician, Protoaurignacian, Kostenki 14) will date to between ca. 39/38 and 35 ka 14C BP and document the appearance of AMH in Europe; (3) the earliest Aurignacian (I) appears throughout Europe quasi simultaneously at ca. 35 ka 14C BP. The earliest appearance of figurative art is documented only for a later phase ca. 33.0/32.5-29.2 ka 14C BP. Taken together, the Middle to Upper Paleolithic transition appears to be a cumulative process involving the acquisition of different elements of \"behavioral modernity\" through several \"stages of innovation.\""} {"_id": "831d55d38104389de256c501495539a73118db7f", "title": "Azuma A Survey of Augmented Reality", "text": "This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality."} {"_id": "e6c8af0bedefa15ab8278b2f21a7b8f93f93ba5d", "title": "The Social Explanatory Styles Questionnaire: Assessing Moderators of Basic Social-Cognitive Phenomena Including Spontaneous Trait Inference, the Fundamental Attribution Error, and Moral Blame", "text": "Why is he poor? Why is she failing academically? Why is he so generous? Why is she so conscientious? Answers to such everyday questions--social explanations--have powerful effects on relationships at the interpersonal and societal levels. How do people select an explanation in particular cases? We suggest that, often, explanations are selected based on the individual's pre-existing general theories of social causality.
More specifically, we suggest that over time individuals develop general beliefs regarding the causes of social events. We refer to these beliefs as social explanatory styles. Our goal in the present article is to offer and validate a measure of individual differences in social explanatory styles. Accordingly, we offer the Social Explanatory Styles Questionnaire (SESQ), which measures three independent dimensions of social explanatory style: Dispositionism, historicism, and controllability. Studies 1-3 examine basic psychometric properties of the SESQ and provide positive evidence regarding internal consistency, factor structure, and both convergent and divergent validity. Studies 4-6 examine predictive validity for each subscale: Does each explanatory dimension moderate an important phenomenon of social cognition? Results suggest that they do. In Study 4, we show that SESQ dispositionism moderates the tendency to make spontaneous trait inferences. In Study 5, we show that SESQ historicism moderates the tendency to commit the Fundamental Attribution Error. Finally, in Study 6 we show that SESQ controllability predicts polarization of moral blame judgments: Heightened blaming toward controllable stigmas (assimilation), and attenuated blaming toward uncontrollable stigmas (contrast). Decades of research suggest that explanatory style regarding the self is a powerful predictor of self-functioning. We think it is likely that social explanatory styles--perhaps comprising interactive combinations of the basic dimensions tapped by the SESQ--will be similarly potent predictors of social functioning. We hope the SESQ will be a useful tool for exploring that possibility."} {"_id": "71214b3a60fb87b159b73f4f3a33bedbb5751cbf", "title": "SimSensei Demonstration: A Perceptive Virtual Human Interviewer for Healthcare Applications", "text": "We present the SimSensei system, a fully automatic virtual agent that conducts interviews to assess indicators of psychological distress. We emphasize the perception part of the system, a multimodal framework which captures and analyzes user state for both behavioral understanding and interactional purposes."} {"_id": "922cc72c45cdf8b4ff2c4b1ad604056c90ee5983", "title": "Speech Synthesis of Code-Mixed Text", "text": "Most Text to Speech (TTS) systems today assume that the input text is in a single language and is written in the same language that the text needs to be synthesized in. However, in bilingual and multilingual communities, code mixing or code switching occurs in speech, in which speakers switch between languages in the same utterance. Due to the popularity of social media, we now see code-mixing even in text in these multilingual communities. TTS systems capable of synthesizing such text need to be able to handle text that is written in multiple languages and scripts. Code-mixed text poses many challenges to TTS systems, such as language identification, spelling normalization and pronunciation modeling. In this work, we describe a preliminary framework for synthesizing code-mixed text. We carry out experiments on synthesizing code-mixed Hindi and English text. We find that there is a significant user preference for TTS systems that can correctly identify and pronounce words in different languages."} {"_id": "e526fd18329ad0da5dd989358fd20e4dd2a2a3e1", "title": "A framework for designing cloud forensic-enabled services (CFeS)", "text": "Cloud computing is used by consumers to access cloud services.
Malicious actors exploit vulnerabilities of cloud services to attack consumers. The link between these two assumptions is the cloud service. Although cloud forensics assists in the direction of investigating and solving cloud-based cyber-crimes, in many cases the design and implementation of cloud services fall short. Software designers and engineers should focus their attention on the design and implementation of cloud services that can be investigated in a forensically sound manner. This paper presents a methodology that aims at assisting designers to design cloud forensic-enabled services. The methodology supports the design of cloud services by implementing a number of steps to make the services cloud forensic-enabled. It consists of a set of cloud forensic constraints, a modeling language expressed through a conceptual model and a process based on the concepts identified and presented in the model. The main advantage of the proposed methodology is the correlation of cloud services\u2019 characteristics with the cloud investigation while providing software engineers the ability to design and implement cloud forensic-enabled services via the use of a set of predefined forensic-related tasks."} {"_id": "8fffaf3e357ca89261b71a23d56669a5da277156", "title": "Implementing a Personal Health Record Cloud Platform Using Ciphertext-Policy Attribute-Based Encryption", "text": "We work on designing and implementing a patient-centric, personal health record cloud platform based on the open-source Indivo X system. We adopt ciphertext-policy attribute-based encryption to provide privacy protection and fine-grained access control."} {"_id": "977afc672a8f100665fc5377f7f934e408b5a177", "title": "An efficient and effective convolutional auto-encoder extreme learning machine network for 3D feature learning", "text": "3D shape features play a crucial role in graphics applications, such as 3D shape matching, recognition, and retrieval. Various 3D shape descriptors have been developed over the last two decades; however, existing descriptors are handcrafted features that are labor-intensively designed and cannot extract discriminative information for a large set of data. In this paper, we propose a rapid 3D feature learning method, the convolutional auto-encoder extreme learning machine (CAE-ELM), which combines the advantages of the convolutional neural network, auto-encoder, and extreme learning machine (ELM). This method performs better and faster than other methods. In addition, we define a novel architecture based on CAE-ELM. The architecture accepts two types of 3D shape representation, namely, voxel data and signed distance field data (SDF), as inputs to extract the global and local features of 3D shapes. Voxel data describe structural information, whereas SDF data contain details on 3D shapes. Moreover, the proposed CAE-ELM can be used in practical graphics applications, such as 3D shape completion. Experiments show that the features extracted by CAE-ELM are superior to existing hand-crafted features and other deep learning methods or ELM models. Moreover, the classification accuracy of the proposed architecture is superior to that of other methods on ModelNet10 (91.4%) and ModelNet40 (84.35%). The training process also runs faster than existing deep learning methods by approximately two orders of magnitude."} {"_id": "87c7647c659aefb539dbe672df06714d935e0a3b", "title": "Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations", "text": "In this paper we establish rigorous benchmarks for image classifier robustness.
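The ELM readout at the heart of the CAE-ELM record above is what makes its training so much faster than deep-learning alternatives: input-to-hidden weights stay random, and only the output weights are solved in closed form with a pseudo-inverse. A small numpy sketch with toy sizes and data (our own illustration, not the paper's network):

```python
import numpy as np

class ELM:
    """Single-hidden-layer ELM: random input weights, closed-form readout."""
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, Y):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))  # never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # hidden activations
        self.beta = np.linalg.pinv(H) @ Y                  # one pseudo-inverse
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy use: one-hot labels for a 3-class problem on flattened shape features.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))           # e.g. features from the encoder stage
Y = np.eye(3)[rng.integers(0, 3, 300)]
model = ELM(128).fit(X, Y)
print((model.predict(X).argmax(1) == Y.argmax(1)).mean())
```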
Our first benchmark, IMAGENET-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Unlike recent robustness research, this benchmark evaluates performance on commonplace corruptions, not worst-case adversarial corruptions. We find that there are negligible changes in relative corruption robustness from AlexNet to ResNet classifiers, and we discover ways to enhance corruption robustness. Then we propose a new dataset called ICONS-50, which opens research on a new kind of robustness, surface variation robustness. With this dataset we evaluate the frailty of classifiers on new styles of known objects and unexpected instances of known classes. We also demonstrate two methods that improve surface variation robustness. Together our benchmarks may aid future work toward networks that learn fundamental class structure and also robustly generalize."} {"_id": "49b42aa77b764af561d63aee591e114c6dc03d8b", "title": "Dual Policy Iteration", "text": "A novel class of Approximate Policy Iteration (API) algorithms has recently demonstrated impressive practical performance (e.g., ExIt [1], AlphaGo-Zero [2]). This new family of algorithms maintains, and alternately optimizes, two policies: a fast, reactive policy (e.g., a deep neural network) deployed at test time, and a slow, non-reactive policy (e.g., Tree Search), that can plan multiple steps ahead. The reactive policy is updated under supervision from the non-reactive policy, while the non-reactive policy is improved via guidance from the reactive policy. In this work we study this class of Dual Policy Iteration (DPI) strategies in an alternating optimization framework and provide a convergence analysis that extends existing API theory. We also develop a special instance of this framework which reduces the update of non-reactive policies to model-based optimal control using learned local models, and provides a theoretically sound way of unifying model-free and model-based RL approaches with unknown dynamics. We demonstrate the efficacy of our approach on various continuous control Markov Decision Processes."} {"_id": "5b062562a8067baae045df1c7f5a8455d0363b5a", "title": "SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-Identification", "text": "Holistic person re-identification (ReID) has received extensive study in the past few years and achieves impressive progress. However, persons are often occluded by obstacles or other persons in practical scenarios, which makes partial person re-identification non-trivial. In this paper, we propose a spatial-channel parallelism network (SCPNet) in which each channel in the ReID feature pays attention to a given spatial part of the body. The spatial-channel corresponding relationship supervises the network to learn discriminative features for both holistic and partial person re-identification.
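IMAGENET-C's summary metric, the mean corruption error (mCE), averages per-corruption error rates after normalizing by a baseline classifier (AlexNet in the paper, which scores 100 by construction). A sketch with hypothetical error rates:

```python
def mean_corruption_error(model_err, baseline_err):
    """model_err / baseline_err: {corruption: [top-1 error at severity 1..5]}.
    CE_c = sum_s E[s,c] / sum_s E_baseline[s,c]; mCE averages CE_c over c."""
    ces = [sum(errs) / sum(baseline_err[c]) for c, errs in model_err.items()]
    return 100.0 * sum(ces) / len(ces)

# Hypothetical top-1 error rates for two corruptions at five severities.
baseline = {"gaussian_noise": [0.6, 0.7, 0.8, 0.9, 0.95],
            "motion_blur":    [0.5, 0.6, 0.7, 0.8, 0.90]}
model    = {"gaussian_noise": [0.4, 0.5, 0.6, 0.7, 0.80],
            "motion_blur":    [0.3, 0.4, 0.5, 0.6, 0.70]}
print(f"mCE = {mean_corruption_error(model, baseline):.1f}")  # < 100 is better
```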
The single model trained on four holistic ReID datasets achieves competitive accuracy on these four datasets, and outperforms the state-of-the-art methods on two partial ReID datasets without training."} {"_id": "aad96ea7ce0e9ef3097401a0c4c8236b41f4a06f", "title": "Identifying prosumer's energy sharing behaviours for forming optimal prosumer-communities", "text": "Smart Grid (SG) achieves bidirectional energy and information flow between the energy user and the utility grid, allowing energy users not only to consume energy, but also to generate energy and share it with the utility grid or with other energy consumers. This type of energy user, who both consumes and generates energy, is called a \u201cprosumer\u201d. The sustainability of the SG energy sharing process heavily depends on its participating prosumers, making prosumer participation and management schemes crucial within the energy sharing approaches. However, in the literature there is very little attention to prosumer management schemes. The contribution of this article is twofold. First, this article introduces a novel concept of participating and managing the prosumers in the SG energy sharing process in the form of autonomous, intelligent, goal-oriented virtual communities. Here, the community of prosumers can collectively increase the amount of power to be auctioned or bought, offering higher bargaining power, thereby settling for a higher price per kilowatt in the long term. According to the literature, this research is the first of its type introducing a community approach for prosumer management. The initial step in building an effective prosumer-community is the identification of those prosumers who would be suitable to make efficient prosumer communities. This leads to the necessity of identifying the parameters that influence the energy sharing behaviours of prosumers. The second contribution of this article is that it comprehensively analyzes the different parameters influencing the prosumer's energy sharing behaviours and presents a multi-agent architecture for optimal prosumer-community formation."} {"_id": "61963ec7bfa73a1725bed7a58e0e570d6c860bb6", "title": "Adaptive UKF-SLAM based on magnetic gradient inversion method for underwater navigation", "text": "Consider the two characteristics: (1) Simultaneous localization and mapping (SLAM) is a popular algorithm for autonomous underwater vehicles, but visual SLAM is significantly influenced by weak illumination. (2) Geomagnetism-aided navigation and gravity-aided navigation are equally important methods in the field of vehicle navigation, but both are affected heavily by time-varying noises and terrain fluctuations. However, the magnetic gradient vector can avoid the influence of time-varying noises, and is less affected by terrain fluctuations. To this end, we propose an adaptive SLAM-based magnetic-gradient-aided navigation method with the following advantages: (1) Adaptive SLAM is an efficient way to deal with uncertainty of the measurement model. (2) The magnetic gradient inversion equation is a good alternative measurement equation in visual-SLAM-denied environments.
Experimental results show that our proposed method is an effective solution, combining magnetic gradient information with SLAM."} {"_id": "6a182f5fe8b5475014986ad303d15240450922a8", "title": "Investigating episodes of mobile phone activity as indicators of opportune moments to deliver notifications", "text": "We investigate whether opportune moments to deliver notifications surface at the endings of episodes of mobile interaction (making voice calls or receiving SMS) based on the assumption that the endings collocate with naturally occurring breakpoints in the user's primary task. Testing this with a naturalistic experiment, we find that interruptions (notifications) are attended to and dealt with significantly more quickly after a user has finished an episode of mobile interaction compared to a random baseline condition, supporting the potential utility of this notification strategy. We also find that the workload and situational appropriateness of the secondary interruption task significantly affect the subsequent delay and completion rate of the tasks. In situ self-reports and interviews reveal complexities in the subjective experience of the interruption, which suggest that a more nuanced classification of the particular call or SMS and its relationship to the primary task(s) would be desirable."} {"_id": "c539c664fc020d3a44c9614bb291e8fb43c973b9", "title": "Virtual reality and hand tracking system as a medical tool to evaluate patients with parkinson's", "text": "In this paper, we take advantage of free-hand interaction technology as a medical tool, either in rehabilitation centers or at home, that allows the evaluation of patients with Parkinson's. We have created a virtual reality scene to make the patient feel engaged in an activity that can be found in daily life, and use Leap Motion controller tracking to evaluate and classify the tremor in the hands. A sample of 33 patients diagnosed with Parkinson's disease (PD) participated in the study. Three tests were performed per patient, the first two to evaluate the amplitude of the postural tremor in each hand, and the third to measure the time to complete a specific task. Analysis shows that our tool can be used effectively to classify the stage of Parkinson's disease."} {"_id": "8843dc55d09407a8b22de1e0c8b781cda9f70bb6", "title": "Neonatal Abstinence Syndrome and High School Performance.", "text": "BACKGROUND AND OBJECTIVES\nLittle is known of the long-term outcomes, including school outcomes, of children diagnosed with neonatal abstinence syndrome (NAS) (International Statistical Classification of Disease and Related Problems [10th Edition], Australian Modification, P96.1).\n\n\nMETHODS\nLinked analysis of health and curriculum-based test data for all children born in the state of New South Wales (NSW), Australia, between 2000 and 2006. Children with NAS (n = 2234) were compared with a control group matched for gestation, socioeconomic status, and gender (n = 4330, control) and with other NSW children (n = 598 265, population) for results on the National Assessment Program: Literacy and Numeracy, in grades 3, 5, and 7.\n\n\nRESULTS\nMean test scores (range 0-1000) for children with NAS were significantly lower in grade 3 (359 vs control: 410 vs population: 421). The deficit was progressive. By grade 7, children with NAS scored lower than other children in grade 5.
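A plausible core computation for the Leap-Motion-based tremor tests in the Parkinson's record above: estimate the dominant oscillation of the tracked palm within an assumed tremor band using an FFT. The 3-8 Hz band, the sampling rate, and the synthetic signal below are assumptions, not the paper's published parameters:

```python
import numpy as np

def tremor_metrics(y, fs=100.0, band=(3.0, 8.0)):
    """Dominant tremor frequency and amplitude from a hand-position trace.

    y  : palm height samples from the tracker (metres)
    fs : sampling rate in Hz; band is the assumed tremor band.
    """
    y = y - y.mean()                        # remove the static posture
    spec = np.abs(np.fft.rfft(y)) / len(y)
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    k = np.flatnonzero(in_band)[spec[in_band].argmax()]
    return freqs[k], 2.0 * spec[k]          # peak frequency, sinusoid amplitude

# Synthetic 5 Hz postural tremor, 2 mm amplitude, on top of sensor noise.
t = np.arange(0, 5, 0.01)
y = 0.002 * np.sin(2 * np.pi * 5.0 * t) + 0.0002 * np.random.randn(t.size)
f, a = tremor_metrics(y)
print(f"{f:.1f} Hz, {a*1000:.2f} mm")       # ~5.0 Hz, ~2 mm
```

A classifier for tremor stage could then threshold the estimated amplitude, with thresholds calibrated against clinical ratings.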
The risk of not meeting minimum standards was independently associated with NAS (adjusted odds ratio [aOR], 2.5; 95% confidence interval [CI], 2.2-2.7), indigenous status (aOR, 2.2; 95% CI, 2.2-2.3), male gender (aOR, 1.3; 95% CI, 1.3-1.4), and low parental education (aOR, 1.5; 95% CI, 1.1-1.6), with all Ps < .001.\n\n\nCONCLUSIONS\nA neonatal diagnostic code of NAS is strongly associated with poor and deteriorating school performance. Parental education may decrease the risk of failure. Children with NAS and their families must be identified early and provided with support to minimize the consequences of poor educational outcomes."} {"_id": "c987d7617ade93e5c9ce8cec3832d38dd9de5de9", "title": "Programming challenges of chatbot: Current and future prospective", "text": "In the modern era of technology, chatbots are the next big thing in conversational services. A chatbot is a virtual agent that can effectively talk to any human being using interactive textual skills. Currently, there are many cloud-based chatbot services available for the development and improvement of the chatbot sector, such as IBM Watson, Microsoft bot, AWS Lambda, Heroku and many others. Chatbots are based on machine learning and Artificial Intelligence (AI) concepts, and due to their dynamic nature there are challenges in the design and development of these chatbots, as they combine built-in AI, NLP, programming and conversation services. This paper gives an overview of cloud-based chatbot technologies, the programming of chatbots, and the challenges of programming in the current and future era of chatbots."} {"_id": "4b9ac9e1493b11f0af0c65c7a291eb61f33678b6", "title": "Model predictive control for dynamic footstep adjustment using the divergent component of motion", "text": "This paper presents an extension of previous model predictive control (MPC) schemes to the stabilization of the time-varying divergent component of motion (DCM). To address the control authority limitations caused by fixed footholds, the step positions and rotations are treated as control inputs, allowing the generation and execution of stable walking motions, both at high speeds and in the face of disturbances. Rotation approximations are handled by applying a mixed-integer program, which, when combined with the use of the time-varying DCM to account for the effects of height changes, improves the versatility of MPC. Simulation results of fast walking and step recovery with the ESCHER humanoid demonstrate the effectiveness of this approach."} {"_id": "40e17135a21da48687589acb0c2478a86825c6df", "title": "Vaginal Labiaplasty: Defense of the Simple \u201cClip and Snip\u201d and a New Classification System", "text": "Vaginal labiaplasty has become a more frequently performed procedure as a result of the publicity and education possible with the internet. Some of our patients have suffered in silence for years with large, protruding labia minora and the tissue above the clitoris that is disfiguring and uncomfortable and makes intercourse very difficult and painful. We propose four classes of labia protrusion based on size and location: Class 1 is normal, where the labia majora and minora are about equal. Class 2 is the protrusion of the minora beyond the majora. Class 3 includes a clitoral hood. Class 4 is where the large labia minora extends to the perineum. There are two principal means of reconstructing this area. Simple amputation may be possible for Class 2 and Class 4.
Class 2 and Class 3 may be treated with a wedge resection and flap advancement that preserves the delicate free edge of the labia minora (Alter, Ann Plast Surg 40:287, 1988). Class 4 may require a combination of both amputation of the clitoral hood and/or perineal extensions and rotation flap advancement over the labia minora."} {"_id": "7e19f7a82528fa79349f1fc61c7f0d35a9ad3a5e", "title": "Face Recognition: A Hybrid Neural Network Approach", "text": "Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult [42]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Lo\u00e8ve transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network. The Karhunen-Lo\u00e8ve transform performs almost as well (5.3% error versus 3.8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach [42] on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer."} {"_id": "fd17fcc2d8736d270b34ddc2fbc8bfa581b7d6dd", "title": "Prevalence of chronic low back pain: systematic review", "text": "OBJECTIVE\nTo estimate worldwide prevalence of chronic low back pain according to age and sex.\n\n\nMETHODS\nWe consulted Medline (PubMed), LILACS and EMBASE electronic databases. The search strategy used the following descriptors and combinations: back pain, prevalence, musculoskeletal diseases, chronic musculoskeletal pain, rheumatic, low back pain, musculoskeletal disorders and chronic low back pain. We selected cross-sectional population-based or cohort studies that assessed chronic low back pain as an outcome. We also assessed the quality of the selected studies as well as the chronic low back pain prevalence according to age and sex.\n\n\nRESULTS\nThe review included 28 studies.
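The hybrid face-recognition abstract above quantizes local image samples with a self-organizing map (SOM). A minimal SOM trained on synthetic patch vectors is sketched below; the patch dimensionality, grid size, and decay schedules are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((2000, 25))        # stand-in for 5x5 local image samples
grid_h, grid_w, dim = 8, 8, 25          # assumed 8x8 SOM over 25-D patches
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

for t, x in enumerate(patches):
    frac = t / len(patches)
    lr = 0.5 * (0.01 / 0.5) ** frac     # exponentially decaying learning rate
    sigma = 3.0 * (0.5 / 3.0) ** frac   # shrinking neighbourhood radius
    # Best-matching unit (BMU): the node whose weight vector is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bmu = np.array(np.unravel_index(np.argmin(d), d.shape))
    # Gaussian neighbourhood update: nodes near the BMU move toward the
    # sample, giving the topology preservation the hybrid system relies on.
    h = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma**2))
    weights += lr * h[..., None] * (x - weights)

# Quantization step: represent a patch by its BMU's grid coordinates.
d = np.linalg.norm(weights - patches[0], axis=2)
print("patch 0 maps to SOM node", np.unravel_index(np.argmin(d), d.shape))
```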
Based on our qualitative evaluation, around one third of the studies had low scores, mainly due to high non-response rates. Chronic low back pain prevalence was 4.2% in individuals aged between 24 and 39 years old and 19.6% in those aged between 20 and 59. Of nine studies with individuals aged 18 and above, six reported chronic low back pain prevalence between 3.9% and 10.2%, and three reported prevalence between 13.1% and 20.3%. In the Brazilian older population, chronic low back pain prevalence was 25.4%.\n\n\nCONCLUSIONS\nChronic low back pain prevalence increases linearly from the third decade of life on, until 60 years of age, being more prevalent in women. Methodological approaches aiming to reduce high heterogeneity in case definitions of chronic low back pain are essential to consistency and comparative analysis between studies. A standard chronic low back pain definition should include the precise description of the anatomical area, pain duration and limitation level."} {"_id": "9b39830391d7ad65ab2b7a8d312e404d128bbde6", "title": "The image of India as a Travel Destination and the attitude of viewers towards Indian TV Dramas", "text": "For a few decades now, various television stations in Indonesia have been broadcasting foreign drama series including those from a range of Asian countries, such as Korea, India, Turkey, Thailand and the Philippines. This study aims to explore attitudes towards Asian dramas from those countries and the destination image of the country where a drama emanates from, as perceived by audiences. This study applied a mixed-methodology approach in order to explore particularly attitudes towards foreign television drama productions. There is a paucity of studies exploring the attitudes of audiences towards Indian television dramas, and few studies focusing on the image of India as a preferred travel destination. Data was collected using an online instrument and participants were selected as a convenience sample. The attitude towards foreign television dramas was measured using items that were adapted from the qualitative study results, whereas for measuring destination image, an existing scale was employed. This study found that the attitudes of audiences towards Indian drama and their image of India were each unidimensional (one factor). The study also reported that attitude towards Indian dramas had a significant impact on the image of India as a travel destination and vice versa. Recommendations for future study and tourism marketing are discussed."} {"_id": "5dd9dc47c4acc9ea3e597751194db52119398ac6", "title": "Low Power High-Efficiency Shift Register Using Implicit Pulse-Triggered Flip-Flop in 130 nm CMOS Process for a Cryptographic RFID Tag", "text": "The shift register is a type of sequential logic circuit which is mostly used for storing or transferring digital data in the form of binary numbers in radio frequency identification (RFID) applications to improve the security of the system. A power-efficient shift register utilizing a new flip-flop with an implicit pulse-triggered structure is presented in this article. The proposed flip-flop has features of high performance and low power. It is composed of a sampling circuit implemented by five transistors, a C-element for rise and fall paths, and a keeper stage. The speed is enhanced by executing four clocked transistors together with a transition condition technique.
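The PIPO and SISO operating modes named in this shift-register abstract can be illustrated with a clock-level behavioral model. The sketch below is a functional stand-in only; it says nothing about the transistor-level implicit pulse-triggered flip-flop that is the paper's actual contribution:

```python
# Behavioral 4-bit shift register: clock-level model of the SISO and PIPO
# modes named in the abstract (illustrative; no transistor-level detail).
class ShiftRegister:
    def __init__(self, n_bits=4):
        self.bits = [0] * n_bits

    def clock_siso(self, serial_in):
        """Serial-in/serial-out: shift one bit per clock edge."""
        serial_out = self.bits[-1]
        self.bits = [serial_in] + self.bits[:-1]
        return serial_out

    def clock_pipo(self, parallel_in):
        """Parallel-in/parallel-out: load and read all bits in one clock."""
        self.bits = list(parallel_in)
        return list(self.bits)

sr = ShiftRegister()
stream = [1, 0, 1, 1]
out = [sr.clock_siso(b) for b in stream]   # loaded bits emerge after 4 clocks
print("SISO state after load:", sr.bits, "outputs so far:", out)
print("PIPO load/read in one clock:", ShiftRegister().clock_pipo([1, 0, 1, 1]))
```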
Simulation results confirm that the proposed topology consumes the lowest power, 30.1997 nW for the parallel-in\u2013parallel-out (PIPO) and 22.7071 nW for the serial-in\u2013serial-out (SISO) shift register, over a 22 \u03bcm2 chip area. The overall design consists of only 16 transistors and is simulated in 130 nm complementary-metal-oxide-semiconductor (CMOS) technology with a 1.2 V power supply."} {"_id": "31bb49ba7df94b88add9e3c2db72a4a98927bb05", "title": "Static and dynamic 3D facial expression recognition: A comprehensive survey", "text": "Keywords: facial behaviour analysis; facial expression recognition; 3D facial surface; 3D facial surface sequences (4D faces). Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose and illumination variations. In order to deal with these problems, 3D and 4D (dynamic 3D) recordings are increasingly used in expression analysis research. In this paper we survey the recent advances in 3D and 4D facial expression recognition. We discuss developments in 3D facial data acquisition and tracking, and present currently available 3D/4D face databases suitable for 3D/4D facial expressions analysis as well as the existing facial expression recognition systems that exploit either 3D or 4D data in detail. Finally, challenges that have to be addressed if 3D facial expression recognition systems are to become a part of future applications are extensively discussed. Automatic human behaviour understanding has attracted a great deal of interest over the past two decades, mainly because of its many applications spanning various fields such as psychology, computer technology, medicine and security. It can be regarded as the essence of next-generation computing systems as it plays a crucial role in affective computing technologies (i.e. proactive and affective user interfaces), learner-adaptive tutoring systems, patient-profiled personal wellbeing technologies, etc. [1]. Facial expression is the most cogent, naturally preeminent means for humans to communicate emotions, to clarify and give emphasis, to signal comprehension or disagreement, to express intentions and, more generally, to regulate interactions with the environment and other people [2]. These facts highlight the importance of automatic facial behaviour analysis, including facial expression of emotion and facial action unit (AU) recognition, and justify the interest this research area has attracted, in the past twenty years [3,4]. Until recently, most of the available data sets of expressive faces were of limited size containing only deliberately posed affective displays, mainly of the prototypical expressions of six basic emotions (i.e. anger, disgust, fear, happiness, sadness and surprise), recorded under highly controlled conditions. Recent efforts focus on the recognition of complex and spontaneous emotional phenomena (e.g. boredom or lack of attention, frustration, stress, etc.) rather than on the recognition of deliberately displayed prototypical expressions of emotions [5,4,6,7].
However, most \u2026"} {"_id": "08f7fb99edd844aac90fccfa51f5d55cbdf15cbd", "title": "Steady-State VEP-Based Brain-Computer Interface Control in an Immersive 3D Gaming Environment", "text": "This paper presents the application of an effective EEG-based brain-computer interface design for binary control in a visually elaborate immersive 3D game. The BCI uses the steady-state visual evoked potential (SSVEP) generated in response to phase-reversing checkerboard patterns. Two power-spectrum estimation methods were employed for feature extraction in a series of offline classification tests. Both methods were also implemented during real-time game play. The performance of the BCI was found to be robust to distracting visual stimulation in the game and relatively consistent across six subjects, with 41 of 48 games successfully completed. For the best performing feature extraction method, the average real-time control accuracy across subjects was 89%. The feasibility of obtaining reliable control in such a visually rich environment using SSVEPs is thus demonstrated and the impact of this result is discussed."} {"_id": "382b521a3a385ea8a339b68c54183e2681602735", "title": "Brainport: an alternative input to the brain.", "text": "Brain Computer Interface (BCI) technology is one of the most rapidly developing areas of modern science; it has created numerous significant crossroads between Neuroscience and Computer Science. The goal of BCI technology is to provide a direct link between the human brain and a computerized environment. Recent BCI approaches and applications have been designed to provide information flow from the brain to the computerized periphery. The opposite or alternative direction of the flow of information (computer to brain interface, or CBI) remains almost undeveloped. The BrainPort is a CBI that offers a complementary technology designed to support a direct link from a computerized environment to the human brain - and to do so non-invasively. Currently, BrainPort research is pursuing two primary goals. One is the delivery of missing sensory information critical for normal human behavior through an additional artificial sensory channel around the damaged or malfunctioning natural sensory system. The other is to decrease the risk of sensory overload in human-machine interactions by providing a parallel and supplemental channel for information flow to the brain. In contrast, conventional CBI strategies (e.g., virtual reality) are usually designed to provide additional or substitution information through pre-existing sensory channels, and unintentionally aggravate the brain overload problem."} {"_id": "6777d0f3b5b606f218437905346a9182712840cc", "title": "A general framework for brain-computer interface design", "text": "The Brain-Computer Interface (BCI) research community has acknowledged that researchers are experiencing difficulties when they try to compare the BCI techniques described in the literature. In response to this situation, the community has stressed the need for objective methods to compare BCI technologies. Suggested improvements have included the development and use of benchmark applications and standard data sets. However, as a young, multidisciplinary research field, the BCI community lacks a common vocabulary. This deficiency leads to poor intergroup communication, which hinders the development of the desired methods of comparison.
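The SSVEP gaming abstract above extracts features with power-spectrum estimation. A minimal sketch of one common variant, Welch's method scored at two checkerboard reversal frequencies, appears below; the sampling rate, target frequencies, and synthetic signal are assumptions, not the paper's setup:

```python
import numpy as np
from scipy.signal import welch

fs, dur = 250.0, 4.0                     # assumed EEG sampling rate and epoch
t = np.arange(0, dur, 1 / fs)
f_targets = (13.0, 17.0)                 # hypothetical stimulus frequencies

# Synthetic single-channel EEG: an SSVEP at 17 Hz buried in noise.
rng = np.random.default_rng(1)
eeg = 0.8 * np.sin(2 * np.pi * 17.0 * t) + rng.normal(0, 2.0, t.size)

# Welch power spectral density, then band power around each target frequency.
freqs, psd = welch(eeg, fs=fs, nperseg=512)

def band_power(f0, half_width=0.5):
    mask = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[mask].mean()

scores = {f0: band_power(f0) for f0 in f_targets}
decision = max(scores, key=scores.get)   # binary control: pick stronger SSVEP
print(scores, "-> attended stimulus:", decision, "Hz")
```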
One of the principal reasons for the lack of common vocabulary is the absence of a common functional model of a BCI System. This paper proposes a new functional model for BCI System design. The model supports many features that facilitate the comparison of BCI technologies with other BCI and non-BCI user interface technologies. From this model, a taxonomy for BCI System design is developed. Together the model and taxonomy are considered a general framework for BCI System design. The representational power of the proposed framework was evaluated by applying it to a set of existing BCI technologies. The framework could effectively describe all of the BCI System designs tested."} {"_id": "97e5088bfb7583f1a40881d7c41f7be63ebb2846", "title": "The thought translation device (TTD) for completely paralyzed patients.", "text": "The thought translation device trains locked-in patients to self-regulate slow cortical potentials (SCPs) of their electroencephalogram (EEG). After operant learning of SCP self-control, patients select letters, words or pictograms in a computerized language support program. Results from five ventilated, locked-in patients are described, demonstrating the usefulness of the thought translation device as an alternative communication channel in motivated totally paralyzed patients with amyotrophic lateral sclerosis."} {"_id": "d76beb59a23c01c9bec1940c4cec1ca26e00480a", "title": "Brain-computer interfaces based on the steady-state visual-evoked response.", "text": "The Air Force Research Laboratory has implemented and evaluated two brain-computer interfaces (BCIs) that translate the steady-state visual evoked response into a control signal for operating a physical device or computer program. In one approach, operators self-regulate the brain response; the other approach uses multiple evoked responses."} {"_id": "8a65dc637d39c14323dccd5cbcc08eed2553880e", "title": "The Struggle for District-based Health Information Systems in South Africa", "text": "This article describes the initial period (1994\u20132001) of an ongoing action research project to develop health information systems to support district management in South Africa. The reconstruction of the health sector in post-apartheid South Africa strives for equity in health service delivery and the building of a decentralized structure based on health districts. In terms of information systems (IS) development, this reform process translates into standardization of health data in ways that inscribe the goals of the new South Africa by enhancing local control and integration of information handling. We describe our approach to action research and use concepts from actor-network and structuration theories in analyzing the case material. In the detailed description and analysis of the process of IS development provided, we focus on the need to balance standardization and local flexibility (localization); standardization is thus seen as bottom-up alignment of an array of heterogeneous actors. Building on a social system model of information systems, we conceptualize the IS design strategy developed and used as the cultivation of processes whereby these actors are translating and aligning their interests. We develop a modular hierarchy of global and local datasets as a framework within which the tensions between standardization and localization may be understood and addressed.
Finally, we discuss the possible relevance of the results of the research in other countries."} {"_id": "9cbef9d9cdbe4007444bd3a6e83551ae0865b648", "title": "Dynamic Conditional Random Fields for Joint Sentence Boundary and Punctuation Prediction", "text": "The use of dynamic conditional random fields (DCRF) has been shown to outperform linear-chain conditional random fields (LCRF) for punctuation prediction on conversational speech texts [1]. In this paper, we combine lexical, prosodic, and modified n-gram score features into the DCRF framework for a joint sentence boundary and punctuation prediction task on TDT3 English broadcast news. We show that the joint prediction method outperforms the conventional two-stage method using LCRF or maximum entropy model (MaxEnt). We show the importance of various features using DCRF, LCRF, MaxEnt, and hidden-event n-gram model (HEN) respectively. In addition, we address the practical issue of feature explosion by introducing lexical pruning, which reduces model size and improves the F1-measure. We adopt incremental local training to overcome memory size limitation without incurring significant performance penalty. Our results show that adding prosodic and n-gram score features gives about 20% relative error reduction in all cases. Overall, DCRF gives the best accuracy, followed by LCRF, MaxEnt, and HEN."} {"_id": "f351de6e160b429a18fa06bd3994a95f932c4ce5", "title": "Toward Understanding How Early-Life Stress Reprograms Cognitive and Emotional Brain Networks", "text": "Vulnerability to emotional disorders including depression derives from interactions between genes and environment, especially during sensitive developmental periods. Adverse early-life experiences provoke the release and modify the expression of several stress mediators and neurotransmitters within specific brain regions. The interaction of these mediators with developing neurons and neuronal networks may lead to long-lasting structural and functional alterations associated with cognitive and emotional consequences. Although a vast body of work has linked quantitative and qualitative aspects of stress to adolescent and adult outcomes, a number of questions are unclear. What distinguishes \u2018normal\u2019 from pathologic or toxic stress? How are the effects of stress transformed into structural and functional changes in individual neurons and neuronal networks? Which ones are affected? We review these questions in the context of established and emerging studies. We introduce a novel concept regarding the origin of toxic early-life stress, stating that it may derive from specific patterns of environmental signals, especially those derived from the mother or caretaker. Fragmented and unpredictable patterns of maternal care behaviors induce a profound chronic stress. The aberrant patterns and rhythms of early-life sensory input might also directly and adversely influence the maturation of cognitive and emotional brain circuits, in analogy to visual and auditory brain systems. Thus, unpredictable, stress-provoking early-life experiences may influence adolescent cognitive and emotional outcomes by disrupting the maturation of the underlying brain networks. Comprehensive approaches and multiple levels of analysis are required to probe the protean consequences of early-life adversity on the developing brain. These involve integrated human and animal-model studies, and approaches ranging from in vivo imaging to novel neuroanatomical, molecular, epigenomic, and computational methodologies. 
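The punctuation-prediction abstract above compares DCRFs with linear-chain CRFs (LCRF) and MaxEnt. Common toolkits expose linear-chain CRFs, so the sketch below shows the LCRF side of that comparison with sklearn-crfsuite on toy data; word-identity and position features stand in for the paper's lexical and prosodic features, and the label set is an assumption:

```python
# Linear-chain CRF baseline for punctuation prediction (toy illustration of
# the LCRF systems the abstract compares against; not the DCRF itself).
# Requires: pip install sklearn-crfsuite
import sklearn_crfsuite

def featurize(tokens):
    feats = []
    for i, w in enumerate(tokens):
        feats.append({
            "word": w.lower(),
            "is_first": i == 0,
            "is_last": i == len(tokens) - 1,
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        })
    return feats

# Tiny corpus: label each token with the punctuation event that follows it.
sents = [(["how", "are", "you"], ["NONE", "NONE", "QMARK"]),
         (["fine", "thanks"], ["COMMA", "PERIOD"]),
         (["are", "you", "there"], ["NONE", "NONE", "QMARK"])]
X = [featurize(toks) for toks, _ in sents]
y = [labels for _, labels in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, y)
print(crf.predict([featurize(["how", "are", "you"])]))
```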
Because early-life adversity is a powerful determinant of subsequent vulnerabilities to emotional and cognitive pathologies, understanding the underlying processes will have profound implications for the world\u2019s current and future children."} {"_id": "5f8835f977ed62da907113ddd03115ee7269e623", "title": "Mutations in EZH2 cause Weaver syndrome.", "text": "We used trio-based whole-exome sequencing to analyze two families affected by Weaver syndrome, including one of the original families reported in 1974. Filtering of rare variants in the affected probands against the parental variants identified two different de novo mutations in the enhancer of zeste homolog 2 (EZH2). Sanger sequencing of EZH2 in a third classically-affected proband identified a third de novo mutation in this gene. These data show that mutations in EZH2 cause Weaver syndrome."} {"_id": "3912e7522ad4c7b271883600dd3d2d6f46e2c8b5", "title": "CapStones and ZebraWidgets: sensing stacks of building blocks, dials and sliders on capacitive touch screens", "text": "Recent research proposes augmenting capacitive touch pads with tangible objects, enabling a new generation of mobile applications enhanced with tangible objects, such as game pieces and tangible controllers. In this paper, we extend the concept to capacitive tangibles consisting of multiple parts, such as stackable gaming pieces and tangible widgets with moving parts. We achieve this using a system of wires and connectors inside each block that causes the capacitance of the bottom-most block to reflect the entire assembly. We demonstrate three types of tangibles, called CapStones, Zebra Dials and Zebra Sliders, that work with current consumer hardware, and investigate what designs may become possible as touchscreen hardware evolves."} {"_id": "7e51cac4bc694a1a3c07720fb33cee9ce37c4ba4", "title": "Sufficient Sample Sizes for Multilevel Modeling", "text": "An important problem in multilevel modeling is what constitutes a sufficient sample size for accurate estimation. In multilevel analysis, the major restriction is often the higher-level sample size. In this paper, a simulation study is used to determine the influence of different sample sizes at the group level on the accuracy of the estimates (regression coefficients and variances) and their standard errors. In addition, the influence of other factors, such as the lowest-level sample size and different variance distributions between the levels (different intraclass correlations), is examined. The results show that only a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second-level standard errors. In all of the other simulated conditions the estimates of the regression coefficients, the variance components, and the standard errors are unbiased and accurate."} {"_id": "600434c6255c160b53ad26912c1c0b96f0d48ce6", "title": "How Many Trees in a Random Forest?", "text": "Random Forest is a computationally efficient technique that can operate quickly over large datasets. It has been used in many recent research projects and real-world applications in diverse domains. However, the associated literature provides almost no guidance on how many trees should be used to compose a Random Forest. The research reported here analyzes whether there is an optimal number of trees within a Random Forest, i.e., a threshold beyond which increasing the number of trees would bring no significant performance gain, and would only increase the computational cost.
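A minimal version of the experiment this abstract describes, doubling the number of trees and tracking the AUC gain, can be sketched with scikit-learn; the synthetic data stands in for the paper's (largely biomedical) datasets:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

prev_auc = None
for n_trees in [2, 4, 8, 16, 32, 64, 128, 256]:   # doubling schedule
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0,
                                n_jobs=-1).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    gain = 0.0 if prev_auc is None else auc - prev_auc
    print(f"{n_trees:4d} trees: AUC={auc:.4f}  gain from doubling={gain:+.4f}")
    prev_auc = auc
# Typically the gain shrinks toward zero: the plateau the paper reports.
```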
Our main conclusions are: growing the number of trees does not always make the forest perform significantly better than previous forests (with fewer trees), and doubling the number of trees eventually becomes worthless. It is also possible to state there is a threshold beyond which there is no significant gain, unless a huge computational environment is available. In addition, an experimental relationship was found for the AUC gain obtained by doubling the number of trees in any forest. Furthermore, as the number of trees grows, the full set of attributes tends to be used within a Random Forest, which may not be interesting in the biomedical domain. Additionally, the datasets\u2019 density-based metrics proposed here probably capture some aspects of the VC dimension of decision trees; low-density datasets may require large-capacity machines, whilst the opposite also seems to be true."} {"_id": "4cbadc5f4afe9ac178fd14a6875ef1956a528313", "title": "Security in IPv6-enabled Wireless Sensor Networks: An Implementation of TLS/DTLS for the Contiki Operating System", "text": "During the last several years the advancements in technology made it possible for small sensor nodes to communicate wirelessly with the rest of the Internet. With this achievement the question of securing such IP-enabled Wireless Sensor Networks (IP-WSNs) emerged and has been an important research topic since. In this thesis we discuss our implementation of TLS and DTLS protocols using a pre-shared key cipher suite (TLS_PSK_WITH_AES_128_CCM_8) for the Contiki operating system. Apart from simply adding a new protocol to the set of protocols supported by the Contiki OS, this project allows us to evaluate how suitable the transport-layer security and pre-shared key management schemes are for IP-WSNs."} {"_id": "5ffff74a97cc52c9061f45295ce94ac77bf2f350", "title": "Arbuscular mycorrhizal fungi in alleviation of salt stress: a review.", "text": "BACKGROUND\nSalt stress has become a major threat to plant growth and productivity. Arbuscular mycorrhizal fungi colonize plant root systems and modulate plant growth in various ways.\n\n\nSCOPE\nThis review addresses the significance of arbuscular mycorrhiza in alleviation of salt stress and their beneficial effects on plant growth and productivity. It also focuses on recent progress in unravelling biochemical, physiological and molecular mechanisms in mycorrhizal plants to alleviate salt stress.\n\n\nCONCLUSIONS\nThe role of arbuscular mycorrhizal fungi in alleviating salt stress is well documented. This paper reviews the mechanisms arbuscular mycorrhizal fungi employ to enhance the salt tolerance of host plants such as enhanced nutrient acquisition (P, N, Mg and Ca), maintenance of the K(+) : Na(+) ratio, biochemical changes (accumulation of proline, betaines, polyamines, carbohydrates and antioxidants), physiological changes (photosynthetic efficiency, relative permeability, water status, abscisic acid accumulation, nodulation and nitrogen fixation), molecular changes (the expression of genes: PIP, Na(+)/H(+) antiporters, Lsnced, Lslea and LsP5CS) and ultra-structural changes. This review identifies certain less-explored areas such as molecular and ultra-structural changes where further research is needed for a better understanding of symbiosis with reference to salt stress for optimum usage of this technology in the field on a large scale.
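The cipher suite named in the TLS/DTLS abstract above protects records with AES-128 in CCM mode and a truncated 8-byte tag. A minimal sketch of that record protection with the Python cryptography package follows; the randomly generated key is a stand-in for keys that TLS-PSK would actually derive from the pre-shared secret during the handshake:

```python
# AES-128-CCM with an 8-byte tag: the AEAD behind TLS_PSK_WITH_AES_128_CCM_8.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)  # stand-in: TLS-PSK derives this
aead = AESCCM(key, tag_length=8)           # CCM_8: 8-byte authentication tag

nonce = os.urandom(12)                     # per-record nonce, never reused
record = b"sensor reading: 21.5 C"
header = b"DTLS record header"             # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, record, header)   # body + 8-byte tag
assert len(ciphertext) == len(record) + 8
plaintext = aead.decrypt(nonce, ciphertext, header)
assert plaintext == record
print("round-trip ok:", len(ciphertext), "bytes on the wire per record body")
```

The short tag is the point for sensor nodes: it trades some forgery resistance for eight fewer radio bytes per record.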
This review paper gives useful benchmark information for the development and prioritization of future research programmes."} {"_id": "1d895f34b640670fd92192caff8b3b53c8fb09b5", "title": "Efficient Object Localization and Pose Estimation with 3D Wireframe Models", "text": "We propose a new and efficient method for 3D object localization and fine-grained 3D pose estimation from a single 2D image. Our approach follows the classical paradigm of matching a 3D model to the 2D observations. Our first contribution is a 3D object model composed of a set of 3D edge primitives learned from 2D object blueprints, which can be viewed as a 3D generalization of HOG features. This model is used to define a matching cost obtained by applying a rigid-body transformation to the 3D object model, projecting it onto the image plane, and matching the projected model to HOG features extracted from the input image. Our second contribution is a very efficient branch-and-bound algorithm for finding the 3D pose that maximizes the matching score. For this, 3D integral images of quantized HOGs are employed to evaluate in constant time the maximum attainable matching scores of individual model primitives. We applied our method to three different datasets of cars and achieved promising results with testing times below half a second."} {"_id": "fd4a1f1ecb930518bc0d4a3f6f3ac72eeda88be9", "title": "Assessment of Web-based Information Security Awareness Courses", "text": "Information security awareness web-based courses are commonly recommended in cyber security strategies to help build a security culture capable of addressing information systems breaches caused by the mistakes of users whose negligence or ignorance of policies may endanger information system assets. A research gap exists on the impact of Information Security Awareness Web-Based Courses: these courses fail to change participants' behavior regarding compliance and diligence to a significant degree, which translates into continuing vulnerabilities. The aim of this work is to contribute a theoretical and empirical analysis of the potential strengths and weaknesses of Information Security Awareness Web-Based Courses, and two practical tools readily applicable for designers and reviewers of web-based or mediatized courses on information security awareness and education. The research design seeks to answer two research questions. The first concerns the formulation of a minimum set of criteria that could be applied to Information Security Awareness Web-Based Courses to support their real impact on employees\u2019 diligence and compliance, resulting in eleven criteria for course assessment and a checklist. The second concerns a controlled experiment that explores the actual impact of an existing course on diligence and compliance, using phishing emails as educational tools, and reaffirms the theoretical assumptions arrived at earlier. The development of minimum criteria and their systematic implementation pursues behavioral change, emphasizes the importance of disciplinary integration in cyber security research, and advocates for the development of a solid security culture of diligence and compliance, capable of supporting the protection of organizations from information system threats.
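The pose-estimation abstract above relies on integral images so branch-and-bound can evaluate score bounds in constant time. A minimal sketch of the underlying primitive, a summed-area table answering box-sum queries in O(1), is shown below on a random 2D score map (the paper builds 3D versions over quantized HOG channels):

```python
import numpy as np

rng = np.random.default_rng(2)
score_map = rng.random((480, 640))   # stand-in for one quantized HOG channel

# Summed-area table, zero-padded so queries need no boundary special-cases.
sat = np.zeros((481, 641))
sat[1:, 1:] = score_map.cumsum(0).cumsum(1)

def box_sum(r0, c0, r1, c1):
    """Sum of score_map[r0:r1, c0:c1] in O(1) via inclusion-exclusion."""
    return sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]

# Any rectangle costs four lookups, independent of its size: the property
# that lets branch-and-bound bound matching scores in constant time.
assert np.isclose(box_sum(100, 200, 180, 300),
                  score_map[100:180, 200:300].sum())
print("O(1) box sum matches brute force")
```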
The results gathered in this study suggest that achieving positive results in the existing information security tests that follow security awareness courses does not necessarily imply that diligence or compliance with information security policies is affected. These preliminary findings accumulate evidence on the importance of implementing the recommendations formulated in this work."} {"_id": "143f6b100c13d120e55c3e30e441c4abac7a5db2", "title": "An Analysis of Time-Dependent Planning", "text": "This paper presents a framework for exploring issues in time-dependent planning: planning in which the time available to respond to predicted events varies, and the decision making required to formulate effective responses is complex. Our analysis of time-dependent planning suggests an approach based on a class of algorithms that we call anytime algorithms. Anytime algorithms can be interrupted at any point during computation to return a result whose utility is a function of computation time. We explore methods for solving time-dependent planning problems based on the properties of anytime algorithms. Time-dependent planning is concerned with determining how best to respond to predicted events when the time available to make such determinations varies from situation to situation. In order to program a robot to react appropriately over a range of situations, we have to understand how to design effective algorithms for time-dependent planning. In this paper, we will be concerned primarily with understanding the properties of such algorithms, and providing a precise characterization of time-dependent planning. The issues we are concerned with arise either because the number of events that the robot has to contend with varies, and, hence, the time allotted to deliberating about any one event varies, or the observations that allow us to predict events precede the events they herald by varying amounts of time. The range of planning problems in which such complications occur is quite broad. Almost any situation that involves tracking objects of differing velocities will involve time-dependent planning (e.g., vehicle monitoring [Lesser and Corkill, 1983; Durfee, 1987], signal processing [Chung et al., 1987], and juggling [Donner and Jameson, 1986]). Situations where a system has to dynamically reevaluate its options [Fox and Smith, 1985; Dean, 1987] or delay committing to specific options until critical information arrives [Fox and Kempf, 1985] generally can be cast as time-dependent planning problems. To take a specific example, consider the problem faced by a stationary robot assigned the task of recognizing and intercepting or rerouting objects on a moving conveyor belt. Suppose that the robot\u2019s view of the conveyor is obscured at some point by a partition, and that someone on the other side of this partition places objects on the conveyor at irregular intervals. The robot\u2019s task requires that, between the time each object clears the partition and the time it reaches the end of the conveyor, it must classify the object and react appropriately. We assume that classification is computationally intensive, and that the longer the robot spends in analyzing an image, the more likely it is to make a correct classification. One can imagine a variety of reactions.
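The anytime-algorithm idea introduced in this abstract is easy to make concrete: an interruptible computation whose answer improves with the time allotted. A minimal, self-contained sketch, using an arbitrary Monte Carlo task chosen only to expose the time/quality trade-off, is:

```python
import math
import random
import time

def anytime_pi(deadline_s):
    """Interruptible Monte Carlo estimate of pi: more time, better answer."""
    inside = total = 0
    t_end = time.monotonic() + deadline_s
    while time.monotonic() < t_end:       # stop whenever the deadline hits
        x, y = random.random(), random.random()
        inside += (x * x + y * y) <= 1.0
        total += 1
    return 4.0 * inside / max(total, 1)   # current best estimate at interrupt

# Utility of the answer as a function of allotted computation time.
for budget in (0.001, 0.01, 0.1, 1.0):
    est = anytime_pi(budget)
    print(f"budget {budget:>6.3f}s -> estimate {est:.4f} "
          f"(error {abs(est - math.pi):.4f})")
```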
The robot might simply have to push a button to direct each object into a bin intended for objects of a specific class; the time required for this sort of reaction is negligible. Alternatively, the robot might have to reach out and grasp certain objects and assemble them; the time required to react in this case will depend upon many factors. One can also imagine variations that exacerbate the time-dependent aspects of the problem. For instance, it might take more time to classify certain objects, the number of objects placed on the conveyor might vary throughout the day, or the conveyor might speed up or slow down according to production demands. The important thing to note is, if the robot is to make optimal use of its time, it should be prepared to make decisions in situations where there is very little time to decide as well as to take advantage of situations where there is more than average time to decide. This places certain constraints on the design of the algorithms for performing classification, determining assembly sequences, and handling other inferential tasks. Traditional computer science concerns itself primarily with the complexity and correctness of algorithms. In most planning situations, however, there is no one correct answer, and having the right answer too late is tantamount to not having it at all. In dealing with potentially intractable problems, computer scientists are sometimes content with less-than-guaranteed solutions: answers that are likely correct and guaranteed to be computed in polynomial time (Monte Carlo algorithms), answers that are guaranteed correct and likely to be computed in polynomial time (Las Vegas algorithms), and answers that are optimal within some factor and computed in polynomial time (approximation algorithms). While we regard this small concession to reality as encouraging, it doesn\u2019t begin to address the problems in time-dependent planning. For many planning tasks, polynomial performance is not sufficient; we need algorithms that compute the best answers they can in the time they have available. Planning is concerned with reasoning about whether to act and how. Scheduling is concerned with reasoning about \u2026"} {"_id": "e803c3a56d7accc79c0d66847075ebc83bc6bd97", "title": "LSTM-CF: Unifying Context Modeling and Fusion with LSTMs for RGB-D Scene Labeling", "text": "Our model achieves state-of-the-art performance on SUN RGB-D and NYU Depth v2. It can be leveraged to improve the ground-truth annotations of 3943 newly captured RGB-D images in the SUN RGB-D dataset. It obtains a 2.2% improvement over the previous state-of-the-art average accuracy and performs best on 15 categories. Photometric data is vital for scene labeling, while depth is an auxiliary information source. The LSTM-CF model is important for fusing context features from these different sources."} {"_id": "01cb3f168a2cc811f6a79c4f0508f769002a49d5", "title": "RGB-D Based Action Recognition with Light-weight 3D Convolutional Networks", "text": "Different from RGB videos, depth data in RGB-D videos provide key complementary information for tristimulus visual data which potentially could achieve accuracy improvement for action recognition. However, most existing action recognition models use RGB videos alone, which limits their performance capacity.
Additionally, state-of-the-art action recognition models, namely 3D convolutional neural networks (3D-CNNs), contain tremendous numbers of parameters and suffer from computational inefficiency. In this paper, we propose a series of 3D lightweight architectures for action recognition based on RGB-D data. Compared with conventional 3D-CNN models, the proposed lightweight 3D-CNNs have considerably fewer parameters and lower computation cost, while achieving favorable recognition performance. Experimental results on two public benchmark datasets show that our models can approximate or outperform the state-of-the-art approaches. Specifically, on the NTU RGB+D (NTU) dataset we achieve 93.2% and 97.6% for the cross-subject and cross-view measurements, and on the Northwestern-UCLA Multiview Action 3D (N-UCLA) dataset we achieve 95.5% cross-view accuracy."} {"_id": "bc2852fa0a002e683aad3fb0db5523d1190d0ca5", "title": "Learning from Ambiguously Labeled Face Images", "text": "Learning a classifier from ambiguously labeled face images is challenging since training images are not always explicitly-labeled. For instance, face images of two persons in a news photo are not explicitly labeled by their names in the caption. We propose a Matrix Completion for Ambiguity Resolution (MCar) method for predicting the actual labels from ambiguously labeled images. This step is followed by learning a standard supervised classifier from the disambiguated labels to classify new images. To prevent the majority labels from dominating the result of MCar, we generalize MCar to a weighted MCar (WMCar) that handles label imbalance. Since WMCar outputs a soft labeling vector of reduced ambiguity for each instance, we can iteratively refine it by feeding it as the input to WMCar. Nevertheless, such an iterative implementation can be affected by the noisy soft labeling vectors, and thus the performance may degrade. Our proposed Iterative Candidate Elimination (ICE) procedure makes the iterative ambiguity resolution possible by gradually eliminating a portion of least likely candidates in ambiguously labeled faces. We further extend MCar to incorporate the labeling constraints among instances when such prior knowledge is available. Compared to existing methods, our approach demonstrates improvements on several ambiguously labeled datasets."} {"_id": "90d07df2d165b034e38ec04b3f6343d483f6cb38", "title": "Using Generative Adversarial Networks to Design Shoes: The Preliminary Steps", "text": "In this paper, we envision a Conditional Generative Adversarial Network (CGAN) designed to generate shoes according to an input vector encoding desired features and functional type. Though we do not build the CGAN, we lay the foundation for its completion by exploring 3 areas. Our dataset is the UT-Zap50K dataset, which has 50,025 images of shoes categorized by functional type and with relative attribute comparisons. First, we experiment with several models to build a stable Generative Adversarial Network (GAN) trained on just athletic shoes. Then, we build a classifier based on GoogLeNet that is able to accurately categorize shoe images into their respective functional types. Finally, we explore the possibility of creating a binary classifier for each attribute in our dataset, though we are ultimately limited by the quality of the attribute comparisons provided.
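A minimal PyTorch sketch of the conditional GAN this study envisions, a generator and discriminator both conditioned on a functional-type vector, appears below. The network sizes, the 64x64 output, and the 4-way condition are illustrative assumptions, not choices from the paper:

```python
import torch
import torch.nn as nn

Z_DIM, N_TYPES = 100, 4          # latent size and functional-type classes (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Z_DIM + N_TYPES, 128 * 8 * 8)
        self.net = nn.Sequential(            # 8x8 -> 16x16 -> 32x32 -> 64x64
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, cond):
        # Condition by concatenating the one-hot type vector to the noise.
        h = self.fc(torch.cat([z, cond], dim=1)).view(-1, 128, 8, 8)
        return self.net(h)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + N_TYPES, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
        )

    def forward(self, img, cond):
        # Broadcast the condition as extra constant image planes.
        c = cond[:, :, None, None].expand(-1, -1, img.size(2), img.size(3))
        return self.net(torch.cat([img, c], dim=1))

z = torch.randn(8, Z_DIM)
cond = torch.eye(N_TYPES)[torch.randint(0, N_TYPES, (8,))]
fake = Generator()(z, cond)                  # (8, 3, 64, 64) candidate shoes
score = Discriminator()(fake, cond)          # real/fake logits given condition
print(fake.shape, score.shape)
```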
The progress made by this study will provide a robust base to create a conditional GAN that generates customized shoe designs."} {"_id": "2d8683044761263c0654466b205e0c3c46428054", "title": "A confirmatory factor analysis of IS employee motivation and retention", "text": "It is widely recognized that the relationships between organizations and their IS departments are changing. This trend threatens to undermine the retention of IS workers and the productivity of IS operations. In the study reported here, we examine IS employees' motivation and intent to remain using structural equation modeling. A survey was conducted among existing IS employees and analyzed with LISREL VIII. Results showed that latent motivation has an impact on latent retention, with job satisfaction and perceptions of management on career development as indicator variables for the former, and burnout, loyalty, and turnover intent as indicator variables for the latter. Implications for management in developing strategies for the retention of IS employees are provided."} {"_id": "2f965b1d0017f0c01e79eb40b8faf3b72b199581", "title": "Living with Seal Robots in a Care House - Evaluations of Social and Physiological Influences", "text": "Robot therapy for elderly residents in a care house has been conducted since June 2005. Two therapeutic seal robots were introduced and activated for over 9 hours every day to interact with the residents. This paper presents the results of this experiment. In order to investigate the psychological and social effects of the robots, the activities of the residents in public areas were recorded by video cameras during daytime hours (8:30-18:00) for over 2 months. In addition, the hormones 17-ketosteroid sulfate (17-KS-S) and 17-hydroxycorticosteroids (17-OHCS) in the residents' urine were assayed and analyzed. The results indicate that their social interaction was increased through interaction with the seal robots. Furthermore, the results of the urinary tests showed that the reactions of the subjects' vital organs to stress improved after the introduction of the robots"} {"_id": "0ab99aa04e3a8340a7552355fb547374a5604b24", "title": "Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique", "text": "Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013 [1]. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data [2]. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications. Promising results are emerging. In medical imaging, the accurate diagnosis and/or assessment of a disease depends on both image acquisition and image interpretation.
Image acquisition has improved substantially over recent years, with devices acquiring data at faster rates and increased resolution. The image interpretation process, however, has only recently begun to benefit from computer technology. Most interpretations of medical images are performed by physicians; however, image interpretation by humans is limited due to its subjectivity, large variations across interpreters, and fatigue. Many diagnostic tasks require an initial search process to detect abnormalities, and to quantify measurements and changes over time. Computerized tools, specifically image analysis and machine learning, are the key enablers to improve diagnosis, by facilitating identification of the findings that require treatment and to support the expert\u2019s workflow. Among these tools, deep learning is rapidly proving to be the state-of-the-art foundation, leading to improved accuracy. It has also opened up new frontiers in data analysis with rates of progress not before experienced."} {"_id": "75a850e035a7599ecf69d560d6f593d0443d1ef4", "title": "Privacy in e-commerce: examining user scenarios and privacy preferences", "text": "Privacy is a necessary concern in electronic commerce. It is difficult, if not impossible, to complete a transaction without revealing some personal data \u2013 a shipping address, billing information, or product preference. Users may be unwilling to provide this necessary information or even to browse online if they believe their privacy is invaded or threatened. Fortunately, there are technologies to help users protect their privacy. P3P (Platform for Privacy Preferences Project) from the World Wide Web Consortium is one such technology. However, there is a need to know more about the range of user concerns and preferences about privacy in order to build usable and effective interface mechanisms for P3P and other privacy technologies. Accordingly, we conducted a survey of 381 U.S. Net users, detailing a range of commerce scenarios and examining the participants' concerns and preferences about privacy. This paper presents both the findings from that study as well as their design implications."} {"_id": "e6334789ec6d43664f8f164a462461a4408243ba", "title": "Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach", "text": "Abstract\u2014Most applications of hyperspectral imagery require processing techniques which achieve two fundamental goals: 1) detect and classify the constituent materials for each pixel in the scene; 2) reduce the data volume/dimensionality, without loss of critical information, so that it can be processed efficiently and assimilated by a human analyst. In this paper, we describe a technique which simultaneously reduces the data dimensionality, suppresses undesired or interfering spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel vector onto a subspace which is orthogonal to the undesired signatures. This operation is an optimal interference suppression process in the least squares sense. Once the interfering signatures have been nulled, projecting the residual onto the signature of interest maximizes the signal-to-noise ratio and results in a single component image that represents a classification for the signature of interest.
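The projection just described is compact in linear algebra: with the undesired signatures as columns of U, the operator P = I \u2212 U(U^T U)^{-1} U^T annihilates them, and d^T P x then scores the signature of interest d in each pixel x. A minimal numpy sketch on synthetic mixed pixels follows; the band count and abundances are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
bands = 50
d = rng.random(bands)                    # signature of interest
U = rng.random((bands, 3))               # undesired / interfering signatures

# Orthogonal subspace projector annihilating the columns of U:
# P = I - U (U^T U)^{-1} U^T
P = np.eye(bands) - U @ np.linalg.inv(U.T @ U) @ U.T

# Synthetic mixed pixels: x = a*d + U@b + noise, with abundance a to recover.
n_pix = 5
a = rng.random(n_pix)                    # target abundances per pixel
B = rng.random((3, n_pix))               # interferer abundances per pixel
X = np.outer(d, a) + U @ B + rng.normal(0, 1e-3, (bands, n_pix))

q = d @ P @ X                            # OSP detector output per pixel
print(np.allclose(U.T @ P, 0, atol=1e-9))    # interferers are nulled
print(q / (d @ P @ d))                   # approximately the abundances a
```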
The orthogonal subspace projection (OSP) operator can be extended to k signatures of interest, thus reducing the data dimensionality to k and classifying the hyperspectral image simultaneously. The approach is applicable to both spectrally pure as well as mixed pixels."} {"_id": "7207e0a58f6c36b7860574d5568924a5a3e9c51b", "title": "Flow Wars: Systemizing the Attack Surface and Defenses in Software-Defined Networks", "text": "Emerging software-defined network (SDN) stacks have introduced an entirely new attack surface that is exploitable from a wide range of launch points. Through an analysis of the various attack strategies reported in prior work, and through our own efforts to enumerate new and variant attack strategies, we have gained two insights. First, we observe that different SDN controller implementations, developed independently by different groups, seem to manifest common sets of pitfalls and design weaknesses that enable the extensive set of attacks compiled in this paper. Second, through a principled exploration of the underlying design and implementation weaknesses that enable these attacks, we introduce a taxonomy to offer insight into the common pitfalls that enable SDN stacks to be broken or destabilized when fielded within hostile computing environments. This paper first captures our understanding of the SDN attack surface through a comprehensive survey of existing SDN attack studies, which we extend by enumerating 12 new vectors for SDN abuse. We then organize these vulnerabilities within the well-known confidentiality, integrity, and availability model, assess the severity of these attacks by replicating them in a physical SDN testbed, and evaluate them against three popular SDN controllers. We also evaluate the impact of these attacks against published SDN defense solutions. Finally, we abstract our findings to offer the research and development communities a deeper understanding of the common design and implementation pitfalls that are enabling the abuse of SDN networks."} {"_id": "ec03905f9a87f0e1e071b16bdac4bc26619a7f2e", "title": "Psychopathy and the DSM-IV criteria for antisocial personality disorder.", "text": "The Axis II Work Group of the Task Force on DSM-IV has expressed concern that antisocial personality disorder (APD) criteria are too long and cumbersome and that they focus on antisocial behaviors rather than personality traits central to traditional conceptions of psychopathy and to international criteria. We describe an alternative to the approach taken in the rev. 3rd ed. of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R; American Psychiatric Association, 1987), namely, the revised Psychopathy Checklist. We also discuss the multisite APD field trials designed to evaluate and compare four criteria sets: the DSM-III-R criteria, a shortened list of these criteria, the criteria for dyssocial personality disorder from the 10th ed. of the International Classification of Diseases (World Health Organization, 1990), and a 10-item criteria set for psychopathic personality disorder derived from the revised Psychopathy Checklist."} {"_id": "ae13b99c809922ce555fd5d968b9b3dd389f8c61", "title": "Annular-ring CMUT arrays for forward-looking IVUS: transducer characterization and imaging", "text": "In this study, a 64-element, 1.15-mm diameter annular-ring capacitive micromachined ultrasonic transducer (CMUT) array was characterized and used for forward-looking intravascular ultrasound (IVUS) imaging tests.
The array was manufactured using low-temperature processes suitable for CMOS electronics integration on a single chip. The measured radiation pattern of a 43 \u00d7 140-\u03bcm array element depicts a 40\u00b0 view angle for forward-looking imaging around a 15-MHz center frequency in agreement with theoretical models. Pulse-echo measurements show a -10-dB fractional bandwidth of 104% around 17 MHz for wire targets 2.5 mm away from the array in vegetable oil. For imaging and SNR measurements, RF A-scan data sets from various targets were collected using an interconnect scheme forming a 32-element array configuration. An experimental point spread function was obtained and compared with simulated and theoretical array responses, showing good agreement. Therefore, this study demonstrates that annular-ring CMUT arrays fabricated with CMOS-compatible processes are capable of forward-looking IVUS imaging, and the developed modeling tools can be used to design improved IVUS imaging arrays."} {"_id": "73f8d428fa37bc677d6e08e270336e066432c8c9", "title": "Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial", "text": "Robots increasingly have the potential to interact with people in daily life. It is believed that, based on this ability, they will play an essential role in human society in the not-so-distant future. This article examined the proposition that robots could form relationships with children and that children might learn from robots as they learn from other children. In this article, this idea is studied in an 18-day field trial held at a Japanese elementary school. Two English-speaking \u201cRobovie\u201d robots interacted with first- and sixth-grade pupils at the perimeter of their respective classrooms. Using wireless identification tags and sensors, these robots identified and interacted with children who came near them. The robots gestured and spoke English with the children, using a vocabulary of about 300 sentences for speaking and 50 words for recognition. The children were given a brief picture\u2013word matching English test at the start of the trial, af\u2026"} {"_id": "c4f19894a279d10aa2001facc7d35516e8d654ff", "title": "The role of physical embodiment in human-robot interaction", "text": "Autonomous robots are agents with physical bodies that share our environment. In this work, we test the hypothesis that physical embodiment has a measurable effect on performance and perception of social interactions. Support of this hypothesis would suggest fundamental differences between virtual agents and robots from a social standpoint and have significant implications for human-robot interaction. We measure task performance and perception of a robot's social abilities in a structured but open-ended task based on the Towers of Hanoi puzzle. Our experiment compares aspects of embodiment by evaluating: (1) the difference between a physical robot and a simulated one; (2) the effect of physical presence through a co-located robot versus a remote tele-present robot.
We present data from a pilot study with 12 subjects showing interesting differences in the perception of the remote physical robot's and the simulated agent's attention to the task, and in task enjoyment."} {"_id": "1d7383438d2ad4705fcfae4c9c56b5ac8a86d3f3", "title": "Representations in Distributed Cognitive Tasks", "text": "In this paper we propose a theoretical framework of distributed representations and a methodology of representational analysis for the study of distributed cognitive tasks\u2014tasks that require the processing of information distributed across the internal mind and the external environment. The basic principle of distributed representations is that the representational system of a distributed cognitive task is a set of internal and external representations, which together represent the abstract structure of the task. The basic strategy of representational analysis is to decompose the representation of a hierarchical task into its component levels so that the representational properties at each level can be independently examined. The theoretical framework and the methodology are used to analyze the hierarchical structure of the Tower of Hanoi problem. Based on this analysis, four experiments are designed to examine the representational properties of the Tower of Hanoi. Finally, the nature of external representations is discussed."} {"_id": "563ea3682f1e3d14dee418fe1fbb52f433185242", "title": "The usability of everyday technology: emerging and fading opportunities", "text": "Current work in the field of usability tends to focus on snapshots of use as the basis for evaluating designs. However, giving due consideration to the fact that everyday use of technology involves a process of evolution, we set out to investigate how the design of the technology may be used to support this. Based on a long-term empirical study of television use in the homes of two families, we illustrate how use continuously develops in a complex interplay between the users' expectations---as they are formed and triggered by the design---and the needs and context of use per se. We analyze the empirical data from the perspective of activity theory. This framework serves to highlight how use develops, and it supports our analysis and discussion about how design, the users' backgrounds, previous experience, and needs, and the specific context of use supports or hinders the development of use. Moreover, we discuss how the characteristics of the home settings, in which the televisions studied were situated, represent a challenge to usability work. The concluding discussion leads to a set of hypotheses relevant to designers and researchers who wish to tackle some of the aspects of usability of particular importance to development in the use of home technology."} {"_id": "ce66e4006c9948b7ed080c239540dfd0746ff639", "title": "Are Robots Embodied?", "text": "Embodiment has become an important concept in many areas of cognitive science. There are, however, very different notions of exactly what embodiment is and what kind of body is required for what kind of embodied cognition.
Hence, while many would agree that humans are embodied cognizers, there is much less agreement on what kind of artefact could be considered as embodied. This paper identifies and contrasts five different notions of embodiment which can roughly be characterized as (1) structural coupling between agent and environment, (2) historical embodiment as the result of a history of structural coupling, (3) physical embodiment, (4) \u2018organismoid\u2019 embodiment, i.e. organism-like bodily form (e.g., humanoid robots), and (5) organismic embodiment of autopoietic, living systems."} {"_id": "1ee728c9e75d89d371a5001db4326c2341792b7c", "title": "Single Cesium Lead Halide Perovskite Nanocrystals at Low Temperature: Fast Single-Photon Emission, Reduced Blinking, and Exciton Fine Structure.", "text": "Metal-halide semiconductors with perovskite crystal structure are attractive due to their facile solution processability, and have recently been harnessed very successfully for high-efficiency photovoltaics and bright light sources. Here, we show that at low temperature single colloidal cesium lead halide (CsPbX3, where X = Cl/Br) nanocrystals exhibit stable, narrow-band emission with suppressed blinking and small spectral diffusion. Photon antibunching demonstrates unambiguously nonclassical single-photon emission with radiative decay on the order of 250 ps, representing a significant acceleration compared to other common quantum emitters. High-resolution spectroscopy provides insight into the complex nature of the emission process such as the fine structure and charged exciton dynamics."} {"_id": "5dc8b3e5b1d233067e5b34215a50aef173ed3fa3", "title": "Abstract Structures in Spatial Cognition", "text": "The importance of studying spatial cognition in cognitive science is enforced by the fact that the applicability of spatial concepts and spatial expressions is not limited to the spatial domain. We claim that common structures underlying both concrete, physical space and other domains are the basis for using spatial expressions, e.g., prepositions like between, with respect to space as well as time or other domains. This claim opposes the thesis that the common use is based upon an analogy between concrete space and other domains. The development of geometry from Euclid\u2019s Elements to more differentiated systems of diverse geometries and topologies can be perceived as an example of the transfer from modeling concrete space towards describing abstract spatial structures. 1 The Current Interest in Spatial Cognition: Spatial Representations and Spatial Concepts. Human behavior is anchored in space and time.
Spatial information, i.e., information about spatial properties of the entities in our environment, about spatial constellations in our surrounding, and about the spatial properties and relations of our bodies with respect to this surrounding, has a central position for human cognition. In the recognition of objects and events by different sensory channels, i.e., in visual, haptic or auditory perception, spatial information is involved. Motor behavior, i.e., locomotion and the movement of the body, is based on such information as well. Beyond perception and motor action, some higher cognitive activities that interact indirectly with the spatial environment are coupled with spatial information, for instance, memory, problem solving and planning (cf. Eilan et al. 1993). The interaction of spatial cognition and other cognitive faculties is also exemplified by the ability to communicate information about spatial properties of the external world, especially about objects or constellations not directly perceivable (cf. Freksa & Habel 1990). The cognitive science method of investigating and explaining cognition based on computation and representation has led to increasing research activities focusing on spatial representations and processes on such representations: \u2022 In cognitive psychology spatial concepts are basic for thinking about objects and situations in physical space and therefore the necessary constituents for the integration of the central higher cognitive faculties with sensory, motor and linguistic faculties (cf. Miller 1978, Landau & Jackendoff 1993). \u2022 In linguistics spatial concepts are discussed as basic ingredients of the \u2013 mental \u2013 lexicon; the linguistic approach of cognitive grammar \u2013 with Lakoff, Langacker and Talmy as its most influential advocates \u2013 is committed to space as the foundation for semantics and for the general grammatical system. \u2022 In Artificial Intelligence spatial concepts are the basis for developing representational formalisms for processing spatial knowledge; for example, calculi of Qualitative Spatial Reasoning differ with respect to what their primitive terms are and which spatial expressions are definable on this basis (see, e.g., Freksa 1992, Hern\u00e1ndez 1994, Randell et al. 1992, and Schlieder 1995a, b). In the cognitive grammar approach as well as in most other discussions on spatial information, the question of what the basis for using the term spatial is seems to allow a simple answer: space is identified with three-dimensional physical space. And by this, the concrete, physical space of our environment is seen as the conceptual and semantic basis for a wide range of linguistic and non-linguistic cognition. Accordingly, spatial concepts concern size, shape or relative location of objects in three-dimensional physical space. This view is based on the judgment that direct interaction with concrete, physical space is the core of our experience and therefore of our knowledge.
Our spatial experience leads to groupings among spatial concepts depending on geometrical types of spatial characterizations. Examples of such groups, each of them corresponding to types of experience, are: topological concepts, which are based on relations between regions and their boundaries and are invariant with respect to elastic transformations; concepts of ordering, which are based on relations between objects or regions with respect to their relative position in a spatial constellation and are independent of the extensions and distances of the objects and regions in question; and metric concepts, which include measures of distance and size of objects and regions. 1 Following Miller (1978) we assume that the conceptual structure includes different types of concepts, e.g. concepts for objects, properties and relations. In this sense, spatial relations like touching or betweenness correspond to relational concepts, while shape properties like being round or angular correspond to predicative concepts. 2 See, e.g., Lakoff (1987), Langacker (1986), Talmy (1983). Note that in the initial phase of this framework the term space grammar was used (Langacker 1982). 3 Although it is often claimed that physical space is not concrete but an abstraction based on spatial properties and relations of material bodies, we will use the term concrete space to refer to physical space in contrast to abstract spatial structures referring to less restricted structures (such as topological, ordering or metric spaces) underlying physical space. Aspects of this division are reflected by the contrast between qualitative and quantitative spatial reasoning. The means of qualitative spatial reasoning are in many cases restricted to topological terms (e.g., Randell et al. 1992) and terms of ordering (e.g., Schlieder 1995a, b). An independent dimension of analysis concerns the distinction between concept types and types of spatial entities: On the one hand we deal with spatial relations between objects, their location or relative position. Characteristically these relations are independent of shape and extension, such that they can apply to points idealizing the place of the objects. On the other hand we are concerned with shape properties of extended objects or regions independent of their location. Shape properties are coined by the relative position of object parts among themselves. Concepts of object orientation combine both: the shape of extended entities and the spatial relations of their parts and other objects. These types of spatial concepts can be subdivided according to the dimension of geometrical type. As a third dimension in classifying spatial concepts, we distinguish between static and dynamic concepts. Whereas the former concern properties and relations without consideration of time and change, the latter reflect the possibility of changes of spatial properties and relations over time. Since trajectories or paths of locomotion processes are extended spatial entities, spatial concepts are applicable to them as well as to material or geographical entities. (See Table 1 for an exemplification of some sections according to the three-dimensional classification scheme)."} {"_id": "5343b6d5c9f3a2c4d9648991162a6cc13c1c5e70", "title": "DA-GAN: Instance-Level Image Translation by Deep Attention Generative Adversarial Networks", "text": "Unsupervised image translation, which aims at translating two independent sets of images, is challenging in discovering the correct correspondences without paired data.
Existing works build upon Generative Adversarial Networks (GANs) such that the distribution of the translated images is indistinguishable from the distribution of the target set. However, such set-level constraints cannot learn the instance-level correspondences (e.g., aligned semantic parts in the object transfiguration task). This limitation often results in false positives (e.g., geometric or semantic artifacts), and further leads to the mode collapse problem. To address the above issues, we propose a novel framework for instance-level image translation by Deep Attention GAN (DA-GAN). Such a design enables DA-GAN to decompose the task of translating samples from two sets into translating instances in a highly-structured latent space. Specifically, we jointly learn a deep attention encoder, and the instance-level correspondences could be consequently discovered through attending on the learned instances. Therefore, the constraints could be exploited on both the set level and the instance level. Comparisons against several state-of-the-art methods demonstrate the superiority of our approach, and the broad application capability, e.g., pose morphing, data augmentation, etc., pushes the margin of the domain translation problem."} {"_id": "38f88e1f1ca23b2a5cd226b5b72b1933637d16e8", "title": "Automatic detection and rapid determination of earthquake magnitude by wavelet multiscale analysis of the primary arrival", "text": "Earthquake early warning systems must save lives. It is of great importance that networked systems of seismometers be equipped with reliable tools to make rapid determinations of earthquake magnitude in the few to tens of seconds before the damaging ground motion occurs. A new fully automated algorithm based on the discrete wavelet transform detects as well as analyzes the incoming first arrival with great accuracy and precision, estimating the final magnitude to within a single unit from the first few seconds of the P wave."} {"_id": "ac5342a7973f246af43634994ab288a9193d1131", "title": "High-Accuracy Tracking Control of Hydraulic Rotary Actuators With Modeling Uncertainties", "text": "Structured and unstructured uncertainties are the main obstacles in the development of advanced controllers for high-accuracy tracking control of hydraulic servo systems. For the structured uncertainties, nonlinear adaptive control can be employed to achieve asymptotic tracking performance. But modeling errors, such as nonlinear frictions, always exist in physical hydraulic systems and degrade the tracking accuracy. In this paper, a robust integral of the sign of the error controller and an adaptive controller are synthesized via the backstepping method for motion control of a hydraulic rotary actuator. In addition, an experimental internal leakage model of the actuator is built for precise model compensation. The proposed controller accounts for not only the structured uncertainties (i.e., parametric uncertainties), but also the unstructured uncertainties (i.e., nonlinear frictions). Furthermore, the controller theoretically guarantees asymptotic tracking performance in the presence of various uncertainties, which is very important for high-accuracy tracking control of hydraulic servo systems.
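The "robust integral of the sign of the error" (RISE) term mentioned in the hydraulic-actuator abstract above has a standard textbook structure. The sketch below is a minimal discrete-time illustration of that structure only; the gains, the error trace, and the sampling step are invented for the example and are not the paper's design.

```python
import numpy as np

# Discrete-time sketch of a RISE-type feedback term (illustrative gains):
#   u(t) = (ks+1)*(e(t) - e(0)) + integral of [(ks+1)*alpha*e + beta*sgn(e)]
def rise_feedback(e, dt, ks=10.0, alpha=2.0, beta=0.5):
    integ = 0.0
    u = np.zeros_like(e)
    for k in range(len(e)):
        integ += ((ks + 1.0) * alpha * e[k] + beta * np.sign(e[k])) * dt
        u[k] = (ks + 1.0) * (e[k] - e[0]) + integ
    return u

t = np.arange(0.0, 1.0, 1e-3)
e = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 5.0 * t)  # a decaying error signal
print(rise_feedback(e, dt=1e-3)[-1])
```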
Extensive comparative experimental results are obtained to verify the high-accuracy tracking performance of the proposed control strategy."} {"_id": "f76c947a3afe92269b87f24f9bcc060b7072603f", "title": "Computationally efficient leakage inductance calculation for a high-frequency core-type transformer", "text": "Leakage inductance is a critical design attribute of high-frequency transformers. In this paper, a new method to estimate core-type transformer leakage inductance is proposed. The magnetic analysis uses a combination of analytical and numerical techniques. A two-dimensional magnetic analysis using the Biot-Savart law and the method of images is introduced. Using the same field analysis, expressions for mean squared field intensity values necessary for estimating proximity effect loss in the windings are also derived."} {"_id": "fb2d5460fd0291552e5449b7d2c42667e7a751ae", "title": "View-Independent Recognition of Hand Postures", "text": "Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also be applied to other object recognition tasks."} {"_id": "a14bb15298961545b83e7c7cefff0e7af79828f7", "title": "A Survey of CPU-GPU Heterogeneous Computing Techniques", "text": "As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these Processing Units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this article, we survey Heterogeneous Computing Techniques (HCTs) such as workload partitioning that enable utilizing both CPUs and GPUs to improve performance and/or energy efficiency. We review heterogeneous computing approaches at runtime, algorithm, programming, compiler, and application levels. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating Heterogeneous Computing Systems (HCSs).
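Workload partitioning, one of the HCTs named in the survey abstract above, is easy to illustrate in its simplest static form: split the work in proportion to each device's measured throughput so that both finish at the same time. The rates and item count below are made-up numbers, not figures from the survey.

```python
# Toy static CPU/GPU workload partitioning proportional to throughput.
def partition(n_items: int, cpu_rate: float, gpu_rate: float):
    """Split n_items so CPU and GPU (rates in items/s) finish simultaneously."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    n_gpu = round(n_items * gpu_share)
    return n_items - n_gpu, n_gpu

n_cpu, n_gpu = partition(1_000_000, cpu_rate=2.0e6, gpu_rate=14.0e6)
print(n_cpu, n_gpu)                    # 125000 875000
print(n_cpu / 2.0e6, n_gpu / 14.0e6)   # both devices take ~0.0625 s
```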
We believe that this article will provide insights into the workings and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance."} {"_id": "176fd97ad01bcbd27214502110e6054c6923262c", "title": "Software engineering for big data projects: Domains, methodologies and gaps", "text": "Context: Big data has become the new buzzword in the information and communication technology industry. Researchers and major corporations are looking into big data applications to extract the maximum value from the data available to them. However, developing and maintaining stable and scalable big data applications is still a distant milestone. Objective: To look at existing research on how software engineering concepts, namely the phases of the software development project life cycle (SDPLC), can help build better big data application projects. Method: A literature survey was performed. A manual search covered papers returned by search engines, resulting in approximately 2,000 papers being searched and 170 papers selected for review. Results: The search results helped in identifying data-rich application projects that have the potential to utilize big data successfully. The review helped in exploring SDPLC phases in the context of big data applications and performing a gap analysis of the phases that have yet to see detailed research efforts but deserve attention."} {"_id": "94097d89173928e11fc20f851cf05c9138cbfbb3", "title": "Biology of language-The epistemology of reality", "text": "2. What processes take place in a linguistic interaction that permit an organism (us) to describe and to predict events that it may experience? This is my way of honoring the memory of Eric H. Lenneberg, if one honors the memory of another scientist by speaking about one\u2019s own work. Whatever the case, I wish to honor his memory not only because of his great accomplishments, but also because he was capable of inspiring his students, as the symposium on which this book is based revealed. The only way I can do this is to accept the honor of presenting my views about biology, language, and reality. I shall, accordingly, speak about language as a biologist. In doing so, I shall use language, notwithstanding that this use of language to speak about language is within the core of the problem I wish to consider."} {"_id": "f1526054914997591ffdb8cd523bea219ce7a26e", "title": "Statistical significance versus clinical relevance.", "text": "In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P\u2009<\u20090.05 means that the null hypothesis is false, and P\u2009\u22650.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample.
In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs."} {"_id": "c44c877ad48f587baa49c79d528d1be6448256b1", "title": "Derivation of Human Trophoblast Stem Cells.", "text": "Trophoblast cells play an essential role in the interactions between the fetus and mother. Mouse trophoblast stem (TS) cells have been derived and used as the best in\u00a0vitro model for molecular and functional analysis of mouse trophoblast lineages, but attempts to derive human TS cells have so far been unsuccessful. Here we show that activation of Wingless/Integrated (Wnt) and EGF and inhibition of TGF-\u03b2, histone deacetylase (HDAC), and Rho-associated protein kinase (ROCK) enable long-term culture of human villous cytotrophoblast (CT) cells. The resulting cell lines have the capacity to give rise to the three major trophoblast lineages, which show transcriptomes similar to those of the corresponding primary trophoblast cells. Importantly, equivalent cell lines can be derived from human blastocysts. Our data strongly suggest that the CT- and blastocyst-derived cell lines are human TS cells, which will provide a powerful tool to study human trophoblast development and function."} {"_id": "4cd6e2a4e9212c7e47e50bddec92f09095b67c2d", "title": "A Bidirectional-Switch-Based Wide-Input Range High-Efficiency Isolated Resonant Converter for Photovoltaic Applications", "text": "Modular photovoltaic (PV) power conditioning systems (PCSs) require a high-efficiency dc-dc converter stage capable of regulation over a wide input voltage range for maximum power point tracking. In order to mitigate ground leakage currents and to be able to use a high-efficiency, nonisolated grid-tied inverter, it is also desirable for this microconverter to provide galvanic isolation between the PV module and the inverter. This paper presents a novel, isolated topology that is able to meet the high efficiency over a wide input voltage range requirement. This topology yields high efficiency through low circulating currents, zero-voltage switching (ZVS) and low-current switching of the primary side devices, zero-current switching (ZCS) of the output diodes, and direct power transfer to the load for the majority of the switching cycle. This topology is also able to provide voltage regulation through basic fixed-frequency pulsewidth modulated (PWM) control. These features are able to be achieved with the simple addition of a secondary-side bidirectional ac switch to the isolated series resonant converter. Detailed analysis of the operation of this converter is discussed along with a detailed design procedure. Experimental results of a 300-W prototype are given.
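The repeated-sampling reading of a P-value given in the statistics abstract above is easy to verify numerically. The sketch below simulates many replications under a true null hypothesis (mean 0 with known unit variance, so a z-test applies); about 5% of replications produce a result at least as extreme as the alpha = 0.05 critical value. The sample size, replication count, and seed are arbitrary.

```python
import numpy as np

# Under a true H0, |z| >= 1.96 occurs in about 5% of replications.
rng = np.random.default_rng(0)
n, reps = 30, 100_000
samples = rng.normal(0.0, 1.0, size=(reps, n))  # H0: mean 0, known sd 1
z = samples.mean(axis=1) * np.sqrt(n)           # standard normal under H0
print((np.abs(z) >= 1.96).mean())               # ~0.05
```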
The prototype reached a peak power stage efficiency of 98.3% and a California Energy Commission (CEC) weighted power stage efficiency of 98.0% at the nominal input voltage."} {"_id": "86cace71c7626d46a362963b5f354132effae16d", "title": "Detection of Red Tomato on Plants using Image Processing Techniques", "text": "Tomatoes are the best-known fruit grown in greenhouses and have recently been the subject of attempts at automatic picking. Tomato is a plant whose fruit does not ripen simultaneously; therefore, it is necessary to develop an algorithm to distinguish red tomatoes. In the current study, a new segmentation algorithm based on region growing was proposed for guiding a robot to pick up red tomatoes. For this purpose, several colour images of tomato plants were acquired in a greenhouse. The colour images of tomato were captured under natural light, without any artificial lighting equipment. To recognize red tomatoes from non-red ones, the background of the images was first removed. For removing the background, subtraction of red and green components (R-G) was applied. Tomatoes usually touch each other, so separating touching tomatoes was the next step. In this step, the watershed algorithm was used, followed by a refinement process. Afterwards, red tomatoes were detected by the region growing approach. Results obtained from testing the developed algorithm showed an encouraging accuracy (82.38%) for developing an expert system for online recognition of red tomatoes."} {"_id": "a83aac2a7a136fcaa1c076dd6e3abdd720fbb520", "title": "A Novel Metal-Plate Monopole Antenna for DTV Application", "text": "In this paper, a novel metal-plate monopole antenna is presented for indoor digital television (DTV) signal coverage in the 468-880MHz band. The proposed antenna consists of two orthorhombic metal-plates with symmetrical bevels, a top-uploaded metal-plate and a ground plane. Simulation results show that the antenna achieves a bandwidth ranging from 468MHz to 880MHz which covers the DTV band (470-860MHz) and shows stable radiation patterns in three coordinate planes."} {"_id": "501363eb1c55d30e6151d99269afc5aa31f8a0c8", "title": "Plot Induction and Evolutionary Search for Story Generation", "text": "In this paper we develop a story generator that leverages knowledge inherent in corpora without requiring extensive manual involvement. A key feature in our approach is the reliance on a story planner which we acquire automatically by recording events, their participants, and their precedence relationships in a training corpus. Contrary to previous work, our system does not follow a generate-and-rank architecture. Instead, we employ evolutionary search techniques to explore the space of possible stories, which we argue are well suited to the story generation task. Experiments on generating simple children\u2019s stories show that our system outperforms previous data-driven approaches."} {"_id": "d781442d1ab4345865c068dfbfead772d777ca19", "title": "Ach: IPC for Real-Time Robot Control", "text": "We present a new Inter-Process Communication (IPC) mechanism and library. Ach is uniquely suited for coordinating perception, control drivers, and algorithms in real-time systems that sample data from physical processes. Ach eliminates the Head-of-Line Blocking problem for applications that always require access to the newest message. Ach is efficient, robust, and formally verified. It has been tested and demonstrated on a variety of physical robotic systems.
Finally, the source code for Ach is available under an Open Source BSD-style license."} {"_id": "a4fc8436f0b5b9bfa5803ea945d6ccfeaa6b0376", "title": "CNNPred: CNN-based stock market prediction using several data sources", "text": "Feature extraction from financial data is one of the most important problems in the market prediction domain, for which many approaches have been suggested. Among other modern tools, convolutional neural networks (CNN) have recently been applied for automatic feature selection and market prediction. However, in experiments reported so far, less attention has been paid to the correlation among different markets as a possible source of information for extracting features. In this paper, we suggest a CNN-based framework with specially designed CNNs that can be applied to a collection of data from a variety of sources, including different markets, in order to extract features for predicting the future of those markets. The suggested framework has been applied for predicting the next day's direction of movement for the indices of the S&P 500, NASDAQ, DJI, NYSE, and RUSSELL markets based on various sets of initial features. The evaluations show a significant improvement in prediction performance compared to state-of-the-art baseline algorithms."} {"_id": "16c527f74f029be1d6526175b7cc2d938f7e7c69", "title": "Deep Learning for Multi-label Classification", "text": "In multi-label classification, the main focus has been to develop ways of learning the underlying dependencies between labels, and to take advantage of this at classification time. Developing better feature-space representations has been predominantly employed to reduce complexity, e.g., by eliminating non-helpful feature attributes from the input space prior to (or during) training. This is an important task, since many multi-label methods typically create many different copies or views of the same input data as they transform it, and considerable memory can be saved by taking advantage of redundancy. In this paper, we show that a proper development of the feature space can make labels less interdependent and easier to model and predict at inference time. For this task we use a deep learning approach with restricted Boltzmann machines. We present a deep network that, in an empirical evaluation, outperforms a number of competitive methods from the literature."} {"_id": "6a2d7f6431574307404b38c07e845b763b577e64", "title": "A 69.8 dB SNDR 3rd-order Continuous Time Delta-Sigma Modulator with an Ultimate Low Power Tuning System for a Worldwide Digital TV-Receiver", "text": "This paper presents a 3rd-order continuous time delta-sigma modulator for a worldwide digital TV-receiver whose SNDR is 69.8 dB. An ultimate low power tuning system using an RC-relaxation oscillator is developed in order to achieve high yield against PVT variations. A 3rd-order modulator with a modified single-opamp resonator contributes to cost reduction by realizing a very compact circuit.
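The "newest message wins" semantics that the Ach abstract above describes (readers never queue behind stale messages) can be sketched as a single-slot channel. This toy Python version only illustrates the idea; it is not Ach's actual C API or its lock-free shared-memory implementation.

```python
import threading

# Toy single-slot "latest value" channel: a put overwrites the old message,
# so a reader always gets the newest data (no head-of-line blocking).
class LatestChannel:
    def __init__(self):
        self._cond = threading.Condition()
        self._seq, self._msg = 0, None

    def put(self, msg):
        with self._cond:
            self._msg, self._seq = msg, self._seq + 1
            self._cond.notify_all()

    def get(self, last_seen=0):
        """Block until a message newer than last_seen exists; return (seq, msg)."""
        with self._cond:
            while self._seq <= last_seen:
                self._cond.wait()
            return self._seq, self._msg

chan = LatestChannel()
chan.put({"q": [0.0, 0.1]})
chan.put({"q": [0.0, 0.2]})   # overwrites; the stale message is dropped
print(chan.get())             # (2, {'q': [0.0, 0.2]})
```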
The mechanism by which 2nd-order harmonic distortion occurs at the current feedback DAC was analyzed, and a distortion-reduction scheme enabled the modulator to achieve a FOM of 0.18 pJ/conv-step."} {"_id": "af7bec24fc78d1b87c0c3807f447dc9eed0d261b", "title": "A comparison of spotlight synthetic aperture radar image formation techniques", "text": "Spotlight synthetic aperture radar images can be formed from the complex phase history data using two main techniques: 1) polar-to-Cartesian interpolation followed by two-dimensional inverse Fourier transform (2DFFT), and 2) convolution backprojection (CBP). CBP has been widely used to reconstruct medical images in computer-aided tomography, and only recently has been applied to form synthetic aperture radar imagery. It is alleged that CBP yields higher quality images because 1) all the Fourier data are used and 2) the polar formatted data is used directly to form a 2D Cartesian image and therefore 2D interpolation is not required. This report compares the quality of images formed by CBP and several modified versions of the 2DFFT method. We show from an image quality point of view that CBP is equivalent to first windowing the phase history data and then interpolating to an exscribed rectangle. From a mathematical perspective, we should expect this conclusion since the same Fourier data are used to form the SAR image. We next address the issue of parallel implementation of each algorithm. We dispute previous claims that CBP is more readily parallelizable than the 2DFFT method. Our conclusions are supported by comparing execution times between massively parallel implementations of both algorithms, showing that both experience similar decreases in computation time, but that CBP takes significantly longer to form an image."} {"_id": "7ed74be5864f454e508f7684954aaec94ad68394", "title": "A Decision Framework for Blockchain Platforms for IoT and Edge Computing", "text": "Blockchains started as an enabling technology in the area of digital currencies with the introduction of Bitcoin. However, blockchains have emerged as a technology that goes beyond financial transactions by providing a platform supporting secure and robust distributed public ledgers. We think that the Internet of Things (IoT) can also benefit from blockchain technology, especially in the areas of security, privacy, fault tolerance, and autonomous behavior. Here we present a decision framework to help practitioners systematically evaluate the potential use of blockchains in an IoT context."} {"_id": "432dc627ba321f5df3fe163c5903b58e0527ad7a", "title": "Brain Activity during Simulated Deception: An Event-Related Functional Magnetic Resonance Study", "text": "The Guilty Knowledge Test (GKT) has been used extensively to model deception. An association between the brain evoked response potentials and lying on the GKT suggests that deception may be associated with changes in other measures of brain activity such as regional blood flow that could be anatomically localized with event-related functional magnetic resonance imaging (fMRI). Blood oxygenation level-dependent fMRI contrasts between deceptive and truthful responses were measured with a 4 Tesla scanner in 18 participants performing the GKT and analyzed using statistical parametric mapping. Increased activity in the anterior cingulate cortex (ACC), the superior frontal gyrus (SFG), and the left premotor, motor, and anterior parietal cortex was specifically associated with deceptive responses.
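The conversion-step figure of merit quoted for the modulator above follows the common Walden-style definition FOM = P / (2^ENOB x 2 x BW), with ENOB = (SNDR - 1.76) / 6.02. The SNDR and FOM below are the cited values, but the abstract does not state the signal bandwidth or power; the 4 MHz bandwidth is purely an assumed placeholder to show the arithmetic.

```python
# Walden-style ADC figure of merit:
#   ENOB = (SNDR - 1.76) / 6.02,  FOM = P / (2**ENOB * 2 * BW)
sndr_db = 69.8          # cited SNDR
fom = 0.18e-12          # cited FOM, joules per conversion step
bw = 4e6                # Hz; ASSUMED bandwidth, not stated in the abstract

enob = (sndr_db - 1.76) / 6.02
power = fom * (2 ** enob) * (2 * bw)
print(round(enob, 2))           # ~11.3 bits
print(round(power * 1e3, 2))    # ~3.6 mW implied at the assumed bandwidth
```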
The results indicate that: (a) cognitive differences between deception and truth have neural correlates detectable by fMRI, (b) inhibition of the truthful response may be a basic component of intentional deception, and (c) ACC and SFG are components of the basic neural circuitry for deception."} {"_id": "f95d0dfd126614d289b7335379083778d8d68b01", "title": "Central sensitization in carpal tunnel syndrome with extraterritorial spread of sensory symptoms", "text": "Extraterritorial spread of sensory symptoms is frequent in carpal tunnel syndrome (CTS). Animal models suggest that this phenomenon may depend on central sensitization. We sought to obtain psychophysical evidence of sensitization in CTS with extraterritorial symptom spread. We recruited 100 unilateral CTS patients. After selection to rule out concomitant upper-limb causes of pain, 48 patients were included. The distribution of hand symptoms was graded with a diagram into median and extramedian patterns. Patients were asked about proximal pain. Quantitative sensory testing (QST) was performed in the territory of the injured median nerve and in extramedian territories to document signs of sensitization (hyperalgesia, allodynia, wind-up). Extramedian pattern and proximal pain were found in 33.3% and 37.5% of patients, respectively. The QST profile associated with the extramedian pattern includes: (1) thermal and mechanical hyperalgesia in the territory of the injured median nerve and in those of the uninjured ulnar and radial nerves and (2) enhanced wind-up. No signs of sensitization were found in patients with the median distribution and those with proximal symptoms. Different mechanisms may underlie hand extramedian and proximal spread of symptoms, respectively. Extramedian spread of symptoms in the hand may be secondary to spinal sensitization, but peripheral and supraspinal mechanisms may contribute. Proximal spread may represent referred pain. Central sensitization may be secondary to abnormal activity in the median nerve afferents or the consequence of a predisposing trait. Our data may explain the persistence of sensory symptoms after median nerve surgical release and the presence of non-anatomical sensory patterns in neuropathic pain."} {"_id": "5d832e9b80b6edbd3f99adb96f9c6496ae3218de", "title": "PsychoPy\u2014Psychophysics software in Python", "text": "The vast majority of studies into visual processing are conducted using computer display technology. The current paper describes a new free suite of software tools designed to make this task easier, using the latest advances in hardware and software. PsychoPy is a platform-independent experimental control system written in the Python interpreted language using entirely free libraries. PsychoPy scripts are designed to be extremely easy to read and write, while retaining complete power for the user to customize the stimuli and environment. Tools are provided within the package to allow everything from stimulus presentation and response collection (from a wide range of devices) to simple data analysis such as psychometric function fitting. Most importantly, PsychoPy is highly extensible and the whole system can evolve via user contributions.
If a user wants to add support for a particular stimulus, analysis or hardware device, they can look at the code for existing examples, modify them, and submit the modifications back into the package so that the whole community benefits."} {"_id": "02a98118ce990942432c0147ff3c0de756b4b76a", "title": "Learning realistic human actions from movies", "text": "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems, one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results."} {"_id": "14ce7635ff18318e7094417d0f92acbec6669f1c", "title": "DeepFace: Closing the Gap to Human-Level Performance in Face Verification", "text": "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance."} {"_id": "50ca90bc847694a7a2d9a291f0d903a15e408481", "title": "A Multi-scale Approach to Gesture Detection and Recognition", "text": "We propose a generalized approach to human gesture recognition based on multiple data modalities such as depth video, articulated pose and speech. In our system, each gesture is decomposed into large-scale body motion and local subtle movements such as hand articulation. The idea of learning at multiple scales is also applied to the temporal dimension, such that a gesture is considered as a set of characteristic motion impulses, or dynamic poses.
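As a concrete illustration of the kind of script the PsychoPy abstract above describes, here is a minimal trial: draw a Gabor patch, present it on the next screen refresh, and wait for a keypress. It is a sketch rather than anything from the paper; it assumes a working PsychoPy installation, and all stimulus parameters are arbitrary.

```python
# Minimal PsychoPy-style trial sketch (requires the psychopy package).
from psychopy import core, event, visual

win = visual.Window(size=(800, 600), units="pix", fullscr=False)
gabor = visual.GratingStim(win, tex="sin", mask="gauss", sf=0.05, size=256)

gabor.draw()              # render into the back buffer
win.flip()                # present on the next screen refresh
keys = event.waitKeys()   # block until a key is pressed
print(keys)

win.close()
core.quit()
```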
Each modality is first processed separately in short spatio-temporal blocks, where discriminative data-specific features are either manually extracted or learned. Finally, we employ a Recurrent Neural Network for modeling large-scale temporal dependencies, data fusion and ultimately gesture classification. Our experiments on the 2013 Challenge on Multimodal Gesture Recognition dataset have demonstrated that using multiple modalities at several spatial and temporal scales leads to a significant increase in performance, allowing the model to compensate for errors of individual classifiers as well as noise in the separate channels."} {"_id": "586d7b215d1174f01a1dc2f6abf6b2eb0f740ab6", "title": "Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition", "text": "We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples."} {"_id": "afbd6dbf502004ad2be091afc084580d02a56a2e", "title": "Efficient model-based 3D tracking of hand articulations using Kinect", "text": "The 3D tracking of the articulation of human hands is a theoretically interesting and challenging problem with numerous and diverse applications [1]. Several attempts have been made to address the problem by considering markerless visual data. Existing approaches can be categorized into model- and appearance-based [1]. In this work, we propose a model-based approach to the problem (Fig. 1). We formulate it as an optimization problem that minimizes the discrepancy between the 3D structure and appearance of hypothesized 3D hand model instances, and actual visual observations. Optimization is performed with a variant of an existing stochastic optimization method (Particle Swarm Optimization, PSO) [2]. The most computationally demanding parts of the process have been implemented to run efficiently on a GPU [3]. The resulting system performs hand articulation tracking at a rate of 15 Hz."} {"_id": "47310b4e14990becd5d473a07092ded4df2fbef1", "title": "DeepCas: An End-to-end Predictor of Information Cascades", "text": "Information cascades, effectively facilitated by most social network platforms, are recognized as a major factor in almost every social success and disaster in these networks. Can cascades be predicted? While many believe that they are inherently unpredictable, recent work has shown that some key properties of information cascades, such as size, growth, and shape, can be predicted by a machine learning algorithm that combines many features. These predictors all depend on a bag of hand-crafted features to represent the cascade network and the global network structures.
Such features, always carefully and sometimes mysteriously designed, are not easy to extend or to generalize to a different platform or domain. Inspired by the recent successes of deep learning in multiple data mining tasks, we investigate whether an end-to-end deep learning approach could effectively predict the future size of cascades. Such a method automatically learns the representation of individual cascade graphs in the context of the global network structure, without hand-crafted features or heuristics. We find that node embeddings fall short of predictive power, and it is critical to learn the representation of a cascade graph as a whole. We present algorithms that learn the representation of cascade graphs in an end-to-end manner, which significantly improve the performance of cascade prediction over strong baselines including feature-based methods, node embedding methods, and graph kernel methods. Our results also provide interesting implications for cascade prediction in general."} {"_id": "0dc01f266118cb816dc148c3680c59eaaa7c0c6e", "title": "Applications of Probabilistic Programming (Master's thesis, 2015)", "text": "This thesis describes work on two applications of probabilistic programming: the learning of probabilistic program code given specifications, in particular program code of one-dimensional samplers; and the facilitation of sequential Monte Carlo inference with the help of data-driven proposals. The latter is presented with experimental results on a linear Gaussian model and a non-parametric dependent Dirichlet process mixture of objects model for object recognition and tracking. We begin this work by providing a brief introduction to probabilistic programming. In the second chapter we present an approach to automatic discovery of samplers in the form of probabilistic programs. Specifically, we learn the procedure code of samplers for one-dimensional distributions. We formulate a Bayesian approach to this problem by specifying a grammar-based prior over probabilistic program code. We use an approximate Bayesian computation method to learn the programs, whose executions generate samples that statistically match observed data or analytical characteristics of distributions of interest. In our experiments we leverage different probabilistic programming systems, including Anglican and Probabilistic C, to perform Markov chain Monte Carlo sampling over the space of programs. Experimental results have demonstrated that, using the proposed methodology, we can learn approximate and even some exact samplers. Finally, we show that our results are competitive with regard to genetic programming methods."} {"_id": "6e31e3713c07011b9e8a6d0df67e4b242082431d", "title": "Low-Level Software Security: Attacks and Defenses", "text": "This tutorial paper considers the issues of low-level software security from a language-based perspective, with the help of concrete examples. Four examples of low-level software attacks are covered in full detail; these examples are representative of the major types of attacks on C and C++ software that is compiled into machine code. Six examples of practical defenses against those attacks are also covered in detail; these defenses are selected because of their effectiveness, wide applicability, and low enforcement overhead."} {"_id": "3656d8a8391f66c516d1358065d2bd7a3caa160f", "title": "Towards integrated safety analysis and design", "text": "There are currently many problems with the development and assessment of software-intensive safety-critical systems.
In this paper we describe the problems, and introduce a novel approach to their solution, based around goal-structuring concepts, which we believe will ameliorate some of the difficulties. We discuss the use of modified and new forms of safety assessment notations to provide evidence of safety, and the use of data derived from such notations as a means of providing quantified input into the design assessment process. We then show how the design assessment can be partially automated, and from this develop some ideas on how we might move from analytical to synthetic approaches, using safety criteria and evidence as a fitness function for comparing alternative automatically-generated designs."} {"_id": "418e12e8d443c7a6cf0ea708d49265bb4d4ce34e", "title": "Hand Gesture Recognition Based on Perceptual Shape Decomposition with a Kinect Camera", "text": "In this paper, we propose the Perceptual Shape Decomposition (PSD) to detect fingers for a Kinect-based hand gesture recognition system. The PSD is formulated as a discrete optimization problem by removing all negative minima with minimum cost. Experiments show that our PSD is perceptually relevant and robust against distortion and hand variations, and thus improves the recognition system performance. Key words: Kinect camera, hand gesture recognition, perceptual decomposition, finger detection"} {"_id": "72696bce8a55e6d4beb49dcc168be2b3c05ef243", "title": "Convergence guarantees for RMSProp and ADAM in non-convex optimization and their comparison to Nesterov acceleration on autoencoders", "text": "RMSProp and ADAM continue to be extremely popular algorithms for training neural nets but their theoretical foundations have remained unclear. In this work we make progress towards that by giving proofs that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives, and we give bounds on the running time. We then design experiments to compare the performances of RMSProp and ADAM against the Nesterov Accelerated Gradient method on a variety of autoencoder setups. Through these experiments we demonstrate the interesting sensitivity that ADAM has to its momentum parameter \u03b21. We show that in terms of getting lower training and test losses, at very high values of the momentum parameter (\u03b21 = 0.99) (and large enough nets if using mini-batches) ADAM outperforms NAG at any momentum value tried for the latter. On the other hand, NAG can sometimes do better when ADAM\u2019s \u03b21 is set to the most commonly used value: \u03b21 = 0.9. We also report experiments on different autoencoders to demonstrate that NAG has better abilities in terms of reducing the gradient norms and finding weights which increase the minimum eigenvalue of the Hessian of the loss function."} {"_id": "8e0d5976b09a15c1125558338f0a6859fc29494a", "title": "Input-series output-parallel AC/AC converter", "text": "Input-series and output-parallel (ISOP) converters are suitable for high-input-voltage, low-output-voltage conversion applications. An ISOP current-mode AC/AC converter with a high-frequency link is proposed. The control strategy and operation principle of the ISOP AC/AC converter are investigated. The control strategy ensures the proper sharing of the input voltage and of the output current among the constituent modules of the ISOP AC/AC converter.
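To make the role of ADAM's momentum parameter beta1 discussed above concrete, the sketch below implements the textbook RMSProp and ADAM update rules on a toy quadratic. The hyperparameters are illustrative (beta1 = 0.99 echoes the high-momentum setting mentioned in the abstract); this is not the paper's experimental setup.

```python
import numpy as np

def rmsprop_step(x, g, v, lr=1e-2, beta2=0.999, eps=1e-8):
    v = beta2 * v + (1 - beta2) * g * g        # running second moment
    return x - lr * g / (np.sqrt(v) + eps), v

def adam_step(x, g, m, v, t, lr=1e-2, beta1=0.99, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g            # first moment (beta1 = momentum)
    v = beta2 * v + (1 - beta2) * g * g        # second moment
    m_hat = m / (1 - beta1 ** t)               # bias corrections
    v_hat = v / (1 - beta2 ** t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

x, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):                       # minimize f(x) = x^2
    g = 2.0 * x
    x, m, v = adam_step(x, g, m, v, t)
print(x)                                       # settles near 0
```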
The correctness of the operation principle and the control strategy of the ISOP AC/AC converter is verified by simulation."} {"_id": "37a83525194c436369fa110c0e709f6585409f26", "title": "Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks", "text": "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video frames. We also show that our model can be applied to visual analogy-making, and present an analysis of the learned network representations."} {"_id": "80bcfbb1a30149e636ff1a08aeb715dad6dd9285", "title": "High efficiency Ka-band Gallium Nitride power amplifier MMICs", "text": "The design and performance of two high-efficiency Ka-band power amplifier MMICs utilizing a 0.15\u03bcm GaN HEMT process technology are presented. Measured in-fixture continuous wave (CW) results for the 3-stage balanced amplifier demonstrate up to 11 W of output power and 30% power-added efficiency (PAE) at 30 GHz. The 3-stage single-ended design produced over 6 W of output power and up to 34% PAE. The die sizes for the balanced and single-ended MMICs are 3.24\u00d73.60 mm2 and 1.74\u00d73.24 mm2, respectively."} {"_id": "284de726e700a6c52f9f8fb9f3de4d4b0ff778bb", "title": "A prioritized grid long short-term memory RNN for speech recognition", "text": "Recurrent neural networks (RNNs) are naturally suitable for speech recognition because of their ability to utilize dynamically changing temporal information. Deep RNNs have been argued to be able to model temporal relationships at different time granularities, but suffer from vanishing gradient problems. In this paper, we extend stacked long short-term memory (LSTM) RNNs by using grid LSTM blocks that formulate computation along not only the temporal dimension, but also the depth dimension, in order to alleviate this issue. Moreover, we prioritize the depth dimension over the temporal one to provide the depth dimension with more updated information, since the output from it will be used for classification. We call this model the prioritized Grid LSTM (pGLSTM). Extensive experiments on four large datasets (AMI, HKUST, GALE, and MGB) indicate that the pGLSTM outperforms alternative deep LSTM models, beating stacked LSTMs with 4% to 7% relative improvement, and achieves new benchmarks among uni-directional models on all datasets."} {"_id": "597a9a2b093ea5463be0a62397a600f6bac51c27", "title": "Correlation for user confidence in predictive decision making", "text": "Despite the recognized value of Machine Learning (ML) techniques and the high expectation of applying ML techniques within various applications, significant barriers to the widespread adoption and local implementation of ML approaches still exist in the areas of trust (of ML results), comprehension (of ML processes), and confidence (in decision making) by users.
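Power-added efficiency, the metric quoted for the GaN MMICs above, is PAE = (P_out - P_in) / P_DC. The output power and PAE below are the cited values; the drive power (and hence the implied DC power) is an assumption made only to show the arithmetic.

```python
# PAE = (P_out - P_in) / P_dc; solve for the implied DC power.
p_out_w = 11.0          # cited output power
pae = 0.30              # cited power-added efficiency
p_in_w = 1.1            # ASSUMED drive power (~10 dB large-signal gain)

p_dc_w = (p_out_w - p_in_w) / pae
print(round(p_dc_w, 1))                  # ~33.0 W implied DC power
print(round(100 * p_out_w / p_dc_w))     # ~33% drain efficiency
```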
This paper investigates the effects of correlation between features and target values on user confidence in data analytics-driven decision making. Our user study found that revealing the correlation between features and target variables affected user confidence in decision making significantly. Moreover, users felt more confident in decision making when the correlation shared the same trend as the prediction model performance. These findings would help design intelligent user interfaces and evaluate the effectiveness of machine learning models in applications."} {"_id": "48d0fdea25539712d98f5408465affcaa23dc917", "title": "Databases, features and classifiers for speech emotion recognition: a review", "text": "Speech is an effective medium to express emotions and attitude through language. Finding the emotional content of a speech signal and identifying the emotions from speech utterances are important tasks for researchers. Speech emotion recognition has been considered an important research area over the last decade. Many researchers have been attracted by the automated analysis of human affective behaviour. Therefore a number of systems, algorithms, and classifiers have been developed and outlined for identifying the emotional content of a person\u2019s speech. In this study, the available literature on various databases, different features and classifiers has been taken into consideration for speech emotion recognition from assorted languages."} {"_id": "e724d4640b2ece1dc0f6d67c616e8dd61675210b", "title": "A WL-SPPIM Semantic Model for Document Classification", "text": "In this paper, we explore an SPPIM-based text classification method, and the experiments reveal that the SPPIM method is equal or even superior to the SGNS method in text classification tasks on three international and standard text datasets, namely 20newsgroups, Reuters52 and WebKB. Although SPPMI provides a better solution, it is not necessarily better than SGNS in text classification tasks. Based on our analysis, SGNS takes weight calculation into consideration during the decomposition process, so it has better performance than SPPIM on some standard datasets. Inspired by this, we propose a WL-SPPIM semantic model based on the SPPIM model, and experiments show that the WL-SPPIM approach achieves better classification and higher scalability in the text classification task compared with the LDA, SGNS and SPPIM approaches."} {"_id": "10c897304637b917f0412303e9597bc032a4cd1a", "title": "Estimation of human body shape and cloth field in front of a kinect", "text": "This paper describes an easy-to-use system to estimate the shape of a human body and his/her clothes. The system uses a Kinect to capture the human's RGB and depth information from different views. Using the depth data, a non-rigid deformation method is devised to compensate for motions between different views, thus aligning and completing the dressed shape. Given the reconstructed dressed shape, the skin regions are recognized by a skin classifier from the RGB images, and these skin regions are taken as tight constraints for the body estimation. Subsequently, the body shape is estimated from the skin regions of the dressed shape by leveraging a statistical model of the human body. After the body estimation, the body shape is non-rigidly deformed to fit the dressed shape, so as to extract the cloth field of the dressed shape. We demonstrate our system and its algorithms through several experiments.
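The SPPMI construction referenced by the WL-SPPIM abstract above (shifted positive pointwise mutual information, from Levy and Goldberg's analysis of SGNS) can be computed directly from a word-context co-occurrence matrix. The toy counts below are invented for illustration.

```python
import numpy as np

# SPPMI(w, c) = max(PMI(w, c) - log k, 0), with PMI from co-occurrence counts.
def sppmi(counts: np.ndarray, k: int = 5) -> np.ndarray:
    total = counts.sum()
    pw = counts.sum(axis=1, keepdims=True)   # word marginals
    pc = counts.sum(axis=0, keepdims=True)   # context marginals
    with np.errstate(divide="ignore"):
        pmi = np.log(counts * total / (pw * pc))
    pmi[~np.isfinite(pmi)] = 0.0             # zero counts contribute nothing
    return np.maximum(pmi - np.log(k), 0.0)

counts = np.array([[10.0, 0.0, 2.0],
                   [ 1.0, 8.0, 0.0],
                   [ 0.0, 1.0, 6.0]])
print(sppmi(counts, k=2).round(2))
```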
The results show the effectiveness of the proposed method."} {"_id": "c0d30aa1b12a9bceda14ab1d1f2489a2dddc8277", "title": "Identifying and Eliminating Mislabeled Training Instances", "text": "This paper presents a new approach to identifying and eliminating mislabeled training instances. The goal of this technique is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. The approach employs an ensemble of classifiers that serves as a filter for the training data. Using n-fold cross-validation, the training data are passed through the filter. Only instances that the filter classifies correctly are passed to the final learning algorithm. We present an empirical evaluation of the approach for the task of automated land cover mapping from remotely sensed data. Labeling error arises in these data from a multitude of sources, including lack of consistency in the vegetation classification used, variable measurement techniques, and variation in the spatial sampling resolution. Our evaluation shows that for noise levels of less than 40%, filtering results in higher predictive accuracy than not filtering, and for levels of class noise less than or equal to 20%, filtering allows the baseline accuracy to be retained. Our empirical results suggest that the ensemble filter approach is an effective method for identifying labeling errors, and further, that the approach will significantly benefit ongoing research to develop accurate and robust remote-sensing-based methods to map land cover at global scales."} {"_id": "dc264515056acaf1ddcb4e810787fcf23e86cbc0", "title": "Low-power energy harvester for wiegand transducers", "text": "This paper discusses the use of Wiegand magnetic sensors as energy harvesters for powering low-power electronic equipment. A Wiegand device typically releases a ~10 \u03bcs voltage pulse several volts in amplitude when subject to an external, time-varying magnetic field. Due to the sharpness of the magnetic transition, pulse generation occurs regardless of how slow the magnetic field variation is, an attractive feature which enables its use in energy harvesting scenarios even when low-frequency sources are considered. The paper first identifies the theoretical conditions for maximum energy extraction. An efficient energy harvesting circuit is then proposed which interfaces the Wiegand source to a fixed DC voltage provided, for instance, by a rechargeable Lithium-Ion battery. Simulations and experimental results are provided supporting the developed theoretical framework and the effectiveness of the proposed implementation."} {"_id": "e46aba462356c60cd8b336c77aec99d69a5e58a9", "title": "Improving Estimation Accuracy using Better Similarity Distance in Analogy-based Software Cost Estimation", "text": "Software cost estimation nowadays plays a more and more important role in practical projects, since modern software projects become more and more complex as well as diverse. To help estimate software development cost accurately, this research performs a systematic analysis of the similarity distances in analogy-based software cost estimation and, based on this, a new non-orthogonal space distance (NoSD) is proposed as a measure of the similarities between real software projects.
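The ensemble filter described in the abstract above can be sketched with scikit-learn. This is one plausible majority-vote variant under assumed details (choice of ensemble members, fold count); it is not the authors' exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def ensemble_filter(X, y, n_folds=5):
    """Keep only instances whose labels a majority of cross-validated
    ensemble members reproduce; the rest are treated as mislabeled."""
    members = [DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier()]
    votes = np.stack([cross_val_predict(m, X, y, cv=n_folds) for m in members])
    agree = (votes == y).sum(axis=0)
    keep = agree > len(members) / 2
    return X[keep], y[keep]

X, y = make_classification(n_samples=300, random_state=0)
y[:30] = 1 - y[:30]              # inject 10% label noise
Xf, yf = ensemble_filter(X, y)
print(len(y), "->", len(yf))     # filtered training set for the final learner
```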
Unlike currently adopted measures such as the Euclidean distance, this non-orthogonal space distance not only allows different features to have different importance for cost estimation, but also assumes project features to have non-orthogonal, dependent relationships, whereas the Euclidean distance treats features as independent of each other. Based on these assumptions, the NoSD method describes the non-orthogonal angles between feature axes using feature redundancy, and it represents the feature weights using feature relevance, where both redundancy and relevance are defined in terms of mutual information. Based on this non-orthogonal space distance, it can better reveal the real dependency relationships between real-life software projects. Experiments show that it brings up to a 13.1% decrease in MMRE and a 12.5% increase in PRED(0.25) on the ISBSG R8 dataset, and improvements of 7.5% and 20.5%, respectively, on the Desharnais dataset. Furthermore, to make it better fit the complex data distribution of real-life software project data, this research leverages the particle swarm optimization algorithm to optimize the proposed non-orthogonal space distance and proposes a PSO-optimized non-orthogonal space distance (PsoNoSD), which brings further improvement in estimation accuracy. As shown in experiments, compared with the commonly used Euclidean distance, PsoNoSD improves the estimation accuracy by 38.73% and 11.59% in terms of MMRE and PRED(0.25) on the ISBSG R8 dataset. On the Desharnais dataset, the improvements are 23.38% and 24.94%, respectively. In summary, the new methods proposed in this research, which are based on theoretical study as well as systematic experiments, address some problems of currently used techniques and show a notable ability to improve software cost estimation accuracy."} {"_id": "1525c74ab503677924f60f1df304a0bcfbd51ae0", "title": "A Counterexample to Theorems of Cox and Fine", "text": "Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability."} {"_id": "7190ae6bb76076ffdecd78cd40e0be9e86f1f85e", "title": "Location Assisted Coding (LAC): Embracing Interference in Free Space Optical Communications", "text": "As the number of wireless devices grows, the increasing demand for the shared radio frequency (RF) spectrum becomes a critical problem. Unlike wired communications in which, theoretically, more fibers can be used to accommodate the increasing bandwidth demand, wireless spectrum cannot be arbitrarily increased due to the fundamental limitations imposed by the physical laws. On the other hand, recent advances in free space optical (FSO) technologies promise a complementary approach to increase wireless capacity. However, high-speed FSO technologies are currently confined to short distance transmissions, resulting in limited mobility. In this paper, we briefly describe WiFO, a hybrid WiFi-FSO network for Gbps wireless local area network (WLAN) femtocells that can provide up to one Gbps per user while maintaining seamless mobility. While typical RF femtocells are non-overlapped to minimize inter-cell interference, there are advantages to using overlapped femtocells to increase mobility and throughput when the number of users is small.
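One way to read the NoSD construction above is as a quadratic-form distance whose weights come from feature relevance and whose cross-terms come from feature redundancy. The sketch below is a simplified interpretation under those assumptions, not the paper's exact formula.

```python
import numpy as np

def nosd(x, y, relevance, redundancy):
    """Non-orthogonal, weighted distance between two project feature vectors.

    relevance:  (d,) weights, e.g. mutual information I(feature; cost).
    redundancy: (d, d) Gram-like matrix of cosines between feature axes,
                e.g. normalized mutual information I(f_i; f_j); the
                identity matrix recovers a weighted Euclidean distance.
    """
    w = np.asarray(relevance, dtype=float)
    G = np.asarray(redundancy, dtype=float)   # assumed positive semidefinite
    diff = (np.asarray(x) - np.asarray(y)) * w
    return float(np.sqrt(diff @ G @ diff))

x, y = np.array([1.0, 2.0, 0.5]), np.array([0.0, 1.0, 1.5])
print(nosd(x, y, relevance=[0.6, 0.3, 0.1], redundancy=np.eye(3)))
```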
That said, the primary contribution of this paper is a novel location assisted coding (LAC) technique used in the WiFO network that aims to increase throughput and reduce interference for multiple users in a dense array of femtocells. Both theoretical analysis and numerical experiments show orders of magnitude increase in throughput using LAC over basic codes."} {"_id": "5eca25bc0be5329ad6ed6c8f2ca16e218632f470", "title": "Innovative enhancement of the Caesar cipher algorithm for cryptography", "text": "The Caesar cipher algorithm is one of the oldest algorithms in cryptography. Much newer and more secure algorithms have since arrived; however, in terms of speed of execution, the Caesar cipher algorithm is still the fastest owing to its simplicity. However, the algorithm is extremely easy to crack. This is because in this algorithm each character of a message is always replaced by the same fixed, predetermined character. To improve the algorithm and enhance its security, a few changes can be added. This paper proposes an enhancement to the existing algorithm by first making use of a simple Diffie-Hellman key exchange to obtain a secret key, and later using simple mathematics to make the encryption of data much safer. Once a private shared key is obtained using the Diffie-Hellman method, the key is reduced modulo 26 to obtain a value no greater than 26; the key value is then added to the current character to obtain a new character. For any character in the `x'th position, the key is first multiplied by `x' and the modulo operation is then applied to obtain the encrypted character. So the key for the 2nd character of the message is multiplied by 2, for the third character by 3, and so on. This enhances the security and also does not increase the time of execution by a large margin."} {"_id": "2bb1e444ca057597eb1d393457ca41e9897079c6", "title": "Turret: A Platform for Automated Attack Finding in Unmodified Distributed System Implementations", "text": "Security and performance are critical goals for distributed systems. The increased design complexity, incomplete expertise of developers, and limited functionality of existing testing tools often result in bugs and vulnerabilities that prevent implementations from achieving their design goals in practice. Many of these bugs, vulnerabilities, and misconfigurations manifest after the code has already been deployed, making the debugging process difficult and costly. In this paper, we present Turret, a platform for automatically finding performance attacks in unmodified implementations of distributed systems. Turret does not require the user to provide any information about vulnerabilities and runs the implementation in the same operating system setup as the deployment, with an emulated network. Turret uses a new attack finding algorithm and several optimizations that allow it to find attacks in a matter of minutes. We ran Turret on 5 different distributed system implementations specifically designed to tolerate insider attacks, and found 30 performance attacks, 24 of which were not previously reported to the best of our knowledge."} {"_id": "8108a291e7f526178395f50b1b52cb55bed8db5b", "title": "A Relational Approach to Monitoring Complex Systems", "text": "Monitoring is an essential part of many program development tools, and plays a central role in debugging, optimization, status reporting, and reconfiguration.
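The position-dependent Caesar enhancement just described is simple enough to sketch end to end; the tiny Diffie-Hellman parameters below are for illustration only and offer no real security.

```python
def enhanced_caesar_encrypt(plaintext, shared_secret):
    """Shift the character at (1-based) position x by key * x mod 26,
    where key is the Diffie-Hellman shared secret reduced mod 26, so
    identical letters no longer encrypt to the same ciphertext letter."""
    key = shared_secret % 26
    out = []
    for pos, ch in enumerate(plaintext, start=1):
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + key * pos) % 26))
        else:
            out.append(ch)
    return ''.join(out)

# Toy Diffie-Hellman exchange to obtain the shared secret.
p, g = 23, 5                       # public modulus and generator
a, b = 6, 15                       # private keys of the two parties
A, B = pow(g, a, p), pow(g, b, p)  # exchanged public values
secret = pow(B, a, p)              # equals pow(A, b, p)
print(enhanced_caesar_encrypt("attack at dawn", secret))
```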
Traditional monitoring techniques are inadequate when monitoring complex systems such as multiprocessors or distributed systems. A new approach is described in which a historical database forms the conceptual basis for the information processed by the monitor. This approach permits advances in specifying the low-level data collection, specifying the analysis of the collected data, performing the analysis, and displaying the results. Two prototype implementations demonstrate the feasibility of the approach."} {"_id": "2a61d70a4e9dece71594d33320063e331485f1df", "title": "Mapping and Localization in 3D Environments Using a 2D Laser Scanner and a Stereo Camera", "text": "2D laser scanners have been widely used for accomplishing a number of challenging AI and robotics tasks such as mapping of large environments and localization in highly dynamic environments. However, using only one 2D laser scanner can be insufficient and less reliable for accomplishing tasks in 3D environments. The problem can be solved using multiple 2D laser scanners or a 3D laser scanner for performing 3D perception. Unfortunately, the cost of such 3D sensing systems is still too high to enable AI and robotics applications. In this paper, we propose to use a 2D laser scanner and a stereo camera for accomplishing simultaneous localization and mapping (SLAM) in 3D indoor environments, in which the 2D laser scanner is used for SLAM and the stereo camera is used for 3D mapping. The experimental results demonstrate that the proposed system is low-cost yet effective, and the obstacle detection rate is significantly improved compared to using one 2D laser scanner for mapping."} {"_id": "875e08da83c0d499da9d9a5728d492d35d96773c", "title": "Architecting the next generation of service-based SCADA/DCS system of systems", "text": "SCADA and DCS systems are at the heart of the modern industrial infrastructure. The rapid changes in networked embedded systems and the way industrial applications are designed and implemented call for a shift in the architectural paradigm. Next generation SCADA and DCS systems will be able to foster cross-layer collaboration with the shop-floor devices as well as with in-network and enterprise applications. Ecosystems driven by (web) service based interactions will enable stronger coupling of the real-world and business sides, leading to a new generation of monitoring and control applications and services, witnessed as the integration of large-scale systems of systems that constantly evolve to address new user needs."} {"_id": "d29f918ad0b759f01299ec905f564359ada97ba5", "title": "Information Processing in Medical Imaging", "text": "In this paper, we present novel algorithms to compute robust statistics from manifold-valued data. Specifically, we present algorithms for estimating the robust Fr\u00e9chet Mean (FM) and performing a robust exact-principal geodesic analysis (ePGA) for data lying on known Riemannian manifolds. We formulate the minimization problems involved in both these problems using the minimum distance estimator called the L2E. This leads to a nonlinear optimization which is solved efficiently using a Riemannian accelerated gradient descent technique.
We present competitive performance results of our algorithms applied to synthetic data with outliers, corpus callosum shapes extracted from the OASIS MRI database, and diffusion MRI scans from movement disorder patients, respectively."} {"_id": "0a6d7e8e61c54c796f53120fdb86a25177e00998", "title": "Complex Embeddings for Simple Link Prediction", "text": "In statistical relational learning, the link prediction problem is key to automatically understanding the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex-valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks."} {"_id": "6ac1962fd1f2b90da02c63c16af39a3c7a3e6df6", "title": "SQuAD: 100, 000+ Questions for Machine Comprehension of Text", "text": "We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com."} {"_id": "c05727976dfae43ae13aa520774fc3bd369d49b5", "title": "Freebase: A Shared Database of Structured General Human Knowledge", "text": "Freebase is a practical, scalable, graph-shaped database of structured general human knowledge, inspired by Semantic Web research and collaborative data communities such as the Wikipedia. Freebase allows public read and write access through an HTTP-based graph-query API for research, the creation and maintenance of structured data, and application building. Access is free and all data in Freebase has a very open (e.g. Creative Commons, GFDL) license."} {"_id": "f010affab57b5fcf1cd6be23df79d8ec98c7289c", "title": "TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension", "text": "We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers.
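The Hermitian dot product at the core of the complex-embedding approach above is a one-liner; the snippet illustrates the scoring function and the antisymmetry that purely real embeddings cannot capture (dimensions and values are arbitrary).

```python
import numpy as np

def complex_score(w_r, e_s, e_o):
    """Triple score: real part of the Hermitian dot product of the
    relation embedding, subject embedding and conjugated object embedding."""
    return float(np.real(np.sum(w_r * e_s * np.conj(e_o))))

rng = np.random.default_rng(2)
dim = 8
e_s = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
e_o = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
w_r = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)

# With a complex relation vector, score(s, r, o) != score(o, r, s) in
# general, which is how antisymmetric relations are modeled.
print(complex_score(w_r, e_s, e_o), complex_score(w_r, e_o, e_s))
```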
We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study."} {"_id": "6e7302a08e04e2120c50440f280fb77dcd5aeb35", "title": "Emotion recognition with multimodal features and temporal models", "text": "This paper presents our methods for the Audio-Video Based Emotion Recognition subtask in the 2017 Emotion Recognition in the Wild (EmotiW) Challenge. The task aims to predict one of the seven basic emotions for short video segments. We extract different features from the audio and facial expression modalities. We also explore a temporal LSTM model with frame facial features as input, which improves on the performance of the non-temporal model. The fusion of different modality features and the temporal model leads us to achieve 58.5% accuracy on the testing set, which shows the effectiveness of our methods."} {"_id": "7fe73dd079d14520224af718c12ab3e224d40cbb", "title": "Recommendation to Groups", "text": "Recommender systems have traditionally recommended items to individual users, but there has recently been a proliferation of recommenders that address their recommendations to groups of users. The shift of focus from an individual to a group makes more of a difference than one might at first expect. This chapter discusses the most important new issues that arise, organizing them in terms of four subtasks that can or must be dealt with by a group recommender: 1. acquiring information about the user\u2019s preferences; 2. generating recommendations; 3. explaining recommendations; and 4. helping users to settle on a final decision. For each issue, we discuss how it has been dealt with in existing group recommender systems and what open questions call for further research."} {"_id": "7ae59771c7d9a3a346fb6374d21c31ca62c3618b", "title": "The cyber threat landscape: Challenges and future research directions", "text": "Cyber threats are becoming more sophisticated with the blending of once distinct types of attack into more damaging forms. Increased variety and volume of attacks is inevitable given the desire of financially and criminally-motivated actors to obtain personal and confidential information, as highlighted in this paper. We describe how the Routine Activity Theory can be applied to mitigate these risks by reducing the opportunities for cyber crime to occur, making cyber crime more difficult to commit and by increasing the risks of detection and punishment associated with committing cyber crime. Potential research questions are also identified."} {"_id": "c6919dd8f8e72ff3e1fbc0072d64bde0a19c2a8e", "title": "Genetics-Based Machine Learning for Rule Induction: State of the Art, Taxonomy, and Comparative Study", "text": "The classification problem can be addressed by numerous techniques and algorithms which belong to different paradigms of machine learning. In this paper, we are interested in evolutionary algorithms, the so-called genetics-based machine learning algorithms. In particular, we will focus on evolutionary approaches that evolve a set of rules, i.e., evolutionary rule-based systems, applied to classification tasks, in order to provide a state of the art in this field.
This paper has a double aim: to present a taxonomy of the genetics-based machine learning approaches for rule induction, and to develop an empirical analysis both for standard classification and for classification with imbalanced data sets. We also include a comparative study of the genetics-based machine learning (GBML) methods with some classical non-evolutionary algorithms, in order to observe the suitability and high potential of the search performed by evolutionary algorithms and the behavior of the GBML algorithms in contrast to the classical approaches, in terms of classification accuracy."} {"_id": "8d99c0d149046b36a8f1691d9f14775de9927171", "title": "A connectionist approach to dynamic resource management for virtualised network functions", "text": "Network Functions Virtualisation (NFV) continues to gain attention as a paradigm shift in the way telecommunications services are deployed and managed. By separating Network Functions (NFs) from traditional middleboxes, NFV is expected to lead to reduced CAPEX and OPEX, and to more agile services. However, one of the main challenges to achieving these objectives is how physical resources can be efficiently, autonomously, and dynamically allocated to Virtualised Network Functions (VNFs) whose resource requirements ebb and flow. In this paper, we propose a Graph Neural Network (GNN)-based algorithm which exploits Virtual Network Function Forwarding Graph (VNF-FG) topology information to predict future resource requirements for each Virtual Network Function Component (VNFC). The topology information of each VNFC is derived by combining its past resource utilisation with the modelled effect on the same from VNFCs in its neighbourhood. Our proposal has been evaluated using a deployment of a virtualised IP Multimedia Subsystem (IMS), and real VoIP traffic traces, with results showing an average prediction accuracy of 90%. Moreover, compared to a scenario where resources are allocated manually and/or statically, our proposal reduces the average number of dropped calls by at least 27% and improves call setup latency by over 29%."} {"_id": "9bdd3ae8dd074573fa262fe99fdf03ce578d907c", "title": "A Chopper Current-Feedback Instrumentation Amplifier With a 1 mHz $1/f$ Noise Corner and an AC-Coupled Ripple Reduction Loop", "text": "This paper presents a chopper instrumentation amplifier for interfacing precision thermistor bridges. For high CMRR and DC gain, the amplifier employs a three-stage current-feedback topology with nested-Miller compensation. By chopping both the input and intermediate stages of the amplifier, a 1 mHz 1/f noise corner was achieved at an input-referred noise power spectral density (PSD) of 15 nV/\u221aHz. To reduce chopper ripple, the amplifier employs a continuous-time AC-coupled ripple reduction loop. Due to its continuous-time nature, the loop causes no noise folding to DC and hence offers improved noise performance over auto-zeroed amplifiers. The loop reduces chopper ripple by more than 60 dB, to levels below the amplifier's own input-referred noise. Furthermore, a maximum input-referred offset of 5 \u03bcV and a CMRR greater than 120 dB were measured at a supply current of 230 \u03bcA at 5 V."} {"_id": "96ad9daa11dfd96846990ab0ceaebf8077772c78", "title": "Digitalism: The New Realism?", "text": "Today\u2019s society is increasingly digitalised, with mobile smartphones being routinely carried and used by a significant percentage of the population.
This provides an augmented experience for the individual that does not depend on their geographical separation with respect to their community of friends and other contacts. This changes the nature of relationships between people. Individuals may live in a \u201cdigital bubble\u201d, close to others physically, but far away from them in their digital world. More specifically, digital images can be generated and shared with ever greater ease. Sometimes the digital image takes on an important part of the individual\u2019s experience of reality. This paper explores examples of the phenomenon, within the context of the arts in particular and culture in general. We also consider the assortment of terms used in a variety of ways by researchers in different fields with regard to our ever more digital society, such as digitalism, digitality, digitalisation, digital culture, digital philosophy, etc. We survey these terms, exploring them from alternative viewpoints, including sociological and philosophical aspects, and attempt to pinpoint some of these terms more precisely, especially in a cultural and artistic context."} {"_id": "65e58981f966a6d1b62c1f7d889ae3e1c0a864a1", "title": "Strategic Business / IT Alignment using Goal Models", "text": "For some years now, enterprise information technologies (IT) have no longer been seen as simple technological support for business strategies in the enterprise. Moreover, standalone IT departments are created in order to support the evolution and growth of IT in the enterprise. Often, the IT department defines a specific strategy describing the vision, goals and objectives of IT development in the organization. However, to remain competitive, IT strategy and IT investment should be coherent with global enterprise strategies. The continuous process of preserving coherence between Business/IT strategies is widely known as strategic Business/IT alignment. Our work aims at discussing the relation between interview-based Business/IT alignment discovery and the Strategic Alignment Model proposed by Venkatraman. The paper also proposes modeling tools and engineering methodologies to support this alignment process."} {"_id": "882bd3e291822e2a510b6648779efd8312c9068d", "title": "On the use of X-vectors for Robust Speaker Recognition", "text": "Text-independent speaker verification (SV) is currently in the process of embracing DNN modeling in every stage of the SV system. Slowly, DNN-based approaches such as end-to-end modelling and systems based on DNN embeddings are starting to be competitive even in the challenging and diverse channel conditions of recent NIST SREs. Domain adaptation and the need for a large amount of training data are still a challenge for current discriminative systems, and (unlike with generative models) we see significant gains from data augmentation, simulation and other techniques designed to overcome the lack of training data. We present an analysis of an SV system based on DNN embeddings (x-vectors) and focus on robustness across diverse data domains such as standard telephone and microphone conversations, in clean, noisy and reverberant environments. We also evaluate the system on challenging far-field data created by re-transmitting a subset of NIST SRE 2008 and 2010 microphone interviews. We compare our results with a state-of-the-art i-vector system.
In general, we were able to achieve better performance with the DNN-based systems, but most importantly, we have confirmed the robustness of such systems across multiple data domains."} {"_id": "de5a4d0112784b2e62a309c058529fc6ab5dceb1", "title": "User experiences and expectations of vibrotactile, thermal and squeeze feedback in interpersonal communication", "text": "Katja Suhonen, Tampere University of Technology, Human-Centered Technology, P.O. Box 589, 33101 Tampere, Finland (katja.suhonen@tut.fi); Kaisa V\u00e4\u00e4n\u00e4nen-Vainio-Mattila, Tampere University of Technology, Human-Centered Technology, P.O. Box 589, 33101 Tampere, Finland (kaisa.vaananen-vainiomattila@tut.fi); Kalle M\u00e4kel\u00e4, Tampere University of Technology, Department of Electronics, P.O. Box 692, 33101 Tampere, Finland (kalle.makela@tut.fi)"} {"_id": "ab575f3953b5e998439f3d17ac12c6c42e1f0220", "title": "Differences in kinematics and electromyographic activity between men and women during the single-legged squat.", "text": "BACKGROUND\nNumerous factors have been identified as potentially increasing the risk of anterior cruciate ligament injury in the female athlete. However, differences between the sexes in lower extremity coordination, particularly hip control, are only minimally understood.\n\n\nHYPOTHESIS\nThere is no difference in kinematic or electromyographic data during the single-legged squat between men and women.\n\n\nSTUDY DESIGN\nDescriptive comparison study.\n\n\nMETHODS\nWe kinematically and electromyographically analyzed the single-legged squat in 18 intercollegiate athletes (9 male, 9 female). Subjects performed five single-legged squats on their dominant leg, lowering themselves as far as possible and then returning to a standing position without losing balance.\n\n\nRESULTS\nWomen demonstrated significantly more ankle dorsiflexion, ankle pronation, hip adduction, hip flexion, hip external rotation, and less trunk lateral flexion than men. These factors were associated with a decreased ability of the women to maintain a varus knee position during the squat as compared with the men. Analysis of all eight tested muscles demonstrated that women had greater muscle activation compared with men. When each muscle was analyzed separately, the rectus femoris muscle activation was found to be statistically greater in women in both the area under the linear envelope and maximal activation data.\n\n\nCONCLUSIONS\nUnder a physiologic load in a position commonly assumed in sports, women tend to position their entire lower extremity and activate muscles in a manner that could increase strain on the anterior cruciate ligament."} {"_id": "df0e6e6c808bb30f0e21b6873048361a28b28b64", "title": "Procedural Content Generation: Goals, Challenges and Actionable Steps", "text": "This chapter discusses the challenges and opportunities of procedural content generation (PCG) in games. It starts with defining three grand goals of PCG, namely multi-level multi-content PCG, PCG-based game design and generating complete games. The way these goals are defined, they are not feasible with current technology. Therefore we identify nine challenges for PCG research. Work towards meeting these challenges is likely to take us closer to realising the three grand goals. In order to help researchers get started, we also identify five actionable steps, which PCG researchers could get started working on immediately.
"} {"_id": "e327e5083b862239374d2e5424e431eb0711c29a", "title": "Time-to-Digital Converter Using a Tuned-Delay Line Evaluated in 28-, 40-, and 45-nm FPGAs", "text": "This paper proposes a bin-width tuning method for a field-programmable gate array (FPGA)-based delay line for a time-to-digital converter (TDC). Changing the hit transitions and sampling patterns of the carry chain while considering the delays of the sum and carry-out bins can improve the bin-width uniformity and thus the measurement precision. The proposed sampling method was evaluated and compared with the ordinary tapped-delay-line (TDL) method in three different types of FPGAs: Kintex-7, Virtex-6, and Spartan-6. The linearity, equivalent bin width, and measurement precision improved for all the evaluated FPGAs by adopting the proposed method. The measurement precision obtained using the simple TDL architecture is comparable with that of other, more complex TDC architectures. In addition, the proposed method improves bin-width uniformity and measurement precision while maintaining the advantages of TDL TDCs, that is, a fast conversion rate and small resource usage. Furthermore, the enhanced linearity of the delay line can also improve other carry-chain-based FPGA-TDCs."} {"_id": "232faf8d0b97862dce95c5afbccf11004b91ef04", "title": "Control structure design for complete chemical plants", "text": "Control structure design deals with the structural decisions of the control system, including what to control and how to pair the variables to form control loops. Although these are very important issues, these decisions are in most cases made in an ad hoc fashion, based on experience and engineering insight, without considering the details of each problem. In this paper, a systematic procedure for control structure design for complete chemical plants (plantwide control) is presented. It starts with carefully defining the operational and economic objectives, and the degrees of freedom available to fulfill them. Other issues discussed in the paper include inventory and production rate control, decentralized versus multivariable control, loss in performance by bottom-up design, and a definition of the \u201ccomplexity number\u201d for the control system."} {"_id": "15943bdecfe42e5f6707efa2eb5a491356f1822e", "title": "Reducing Uncertainty of Low-Sampling-Rate Trajectories", "text": "The increasing availability of GPS-embedded mobile devices has given rise to a new spectrum of location-based services, which have accumulated a huge collection of location trajectories. In practice, a large portion of these trajectories are of low sampling rate. For instance, the time interval between consecutive GPS points of some trajectories can be several minutes or even hours. With such a low sampling rate, most details of their movement are lost, which makes them difficult to process effectively. In this work, we investigate how to reduce the uncertainty in such kind of trajectories. Specifically, given a low-sampling-rate trajectory, we aim to infer its possible routes. The methodology adopted in our work is to take full advantage of the rich information extracted from the historical trajectories. We propose a systematic solution, History based Route Inference System (HRIS), which covers a series of novel algorithms that can derive the travel pattern from historical data and incorporate it into the route inference process.
To validate the effectiveness of the system, we apply our solution to the map-matching problem, which is an important application scenario of this work, and conduct extensive experiments on a real taxi trajectory dataset. The experiment results demonstrate that HRIS can achieve higher accuracy than the existing map-matching algorithms for low-sampling-rate trajectories."} {"_id": "5d428cc440bb0b2c1bac75e679ac8a88e2ed71bf", "title": "Towards reliable traffic sign recognition", "text": "The demand for reliable traffic sign recognition (TSR) increases with the development of safety-driven advanced driver assistance systems (ADAS). Emerging technologies like brake-by-wire or steer-by-wire pave the way for collision avoidance and threat identification systems. Obviously, decision making in such critical situations requires high reliability of the information base. Especially for comfort systems, we need to take into account that the user tends to trust the information provided by the ADAS [1]. In this paper, we present a robust system architecture for the reliable recognition of circular traffic signs. Our system employs complementing approaches for the different stages of current TSR systems. This introduces the application of local SIFT features for content-based traffic sign detection, along with widely applied shape-based approaches. We further add a technique called contracting curve density (CCD) to refine the localization of the detected traffic sign candidates and therefore increase the performance of the subsequent classification module. Finally, the recognition stage, based on SIFT and SURF descriptions of the candidates and executed by a neural net, provides a robust classification of structured image content like traffic signs. By applying these steps we compensate for the weaknesses of the utilized approaches, and thus improve the system's performance."} {"_id": "742b172f9f8afbc2b4821171aa35b5f0e15a2661", "title": "3-Dimensional Analysis on the GIDL Current of Body-tied Triple Gate FinFET", "text": "Triple gate FinFET is emerging as a promising candidate for future CMOS device structures because of its immunity to short-channel effects. However, the suppression of gate-induced drain leakage (GIDL) current is a significant challenge for its application. In this paper, we discuss the characteristics of GIDL in FinFETs and extensively analyze the influence of the device technology on GIDL. The analysis is expected to give guidelines for the future development of triple gate FinFETs"} {"_id": "a0283ce9ecdca710b186cbb103efe5ec812d1fb1", "title": "Perspectives on the Productivity Dilemma", "text": "Editor\u2019s note The authors of this paper presented an All-academy session at the 2008 Academy of Management annual meeting in Anaheim, California. We were excited by the dynamic nature of the debate and felt that it related closely to critical issues in the areas of operations management, strategy, product development and international business. We thus invited the authors to write an article offering their individual and joint views on the productivity dilemma. We trust you will find it to be stimulating and thought-provoking. We invite you to add your voice to the discussion by commenting on this article at the Operations and Supply Chain (OSM) Forum at http://www.journaloperationsmanagement.org/OSM.asp. \u2013 Kenneth K. Boyer and Morgan L.
Swink"} {"_id": "4ef8fdb0c97d331e07ae96323855d15a75340ab0", "title": "Applying the Weak Learning Framework to Understand and Improve C4.5", "text": "There has long been a chasm between theoretical models of machine learning and practical machine learning algorithms. For instance, empirically successful algorithms such as C4.5 and backpropagation have not met the criteria of the PAC model and its variants. Conversely, the algorithms suggested by computational learning theory are usually too limited in various ways to find wide application. The theoretical status of decision tree learning algorithms is a case in point: while it has been proven that C4.5 (and all reasonable variants of it) fails to meet the PAC model criteria, other recently proposed decision tree algorithms that do have non-trivial performance guarantees unfortunately require membership queries. Two recent developments have narrowed this gap between theory and practice, not for the PAC model but for the related model known as weak learning or boosting. First, an algorithm called Adaboost was proposed that meets the formal criteria of the boosting model and is also competitive in practice. Second, the basic algorithms underlying the popular C4.5 and CART programs have also very recently been shown to meet the formal criteria of the boosting model. Thus, it seems plausible that the weak learning framework may provide a setting for interaction between formal analysis and machine learning practice that is lacking in other theoretical models. Our aim in this paper is to push this interaction further in light of these recent developments. In particular, we perform experiments suggested by the formal results for Adaboost and C4.5 within the weak learning framework. We concentrate on two particularly intriguing issues. First, the theoretical boosting results for top-down decision tree algorithms such as C4.5 suggest that a new splitting criterion may result in trees that are smaller and more accurate than those obtained using the usual information gain. We confirm this suggestion experimentally. Second, a superficial interpretation of the theoretical results suggests that Adaboost should vastly outperform C4.5. This is not the case in practice, and we argue through experimental results that the theory must be understood in terms of a measure of a boosting algorithm's behavior called its advantage sequence. We compare the advantage sequences for C4.5 and Adaboost in a number of experiments. We find that these sequences have qualitatively different behavior that explains in large part the discrepancies between empirical performance and the theoretical results. Briefly, we find that although C4.5 and Adaboost are both boosting algorithms, Adaboost creates successively harder filtered distributions, while C4.5 creates successively easier ones, in a sense that will be made precise."} {"_id": "9a5b3f24d1e17cd675cc95d5ebcf6cee8a4b4811", "title": "Sex similarities and differences in preferences for short-term mates: what, whether, and why.", "text": "Are there sex differences in criteria for sexual relationships? The answer depends on what question a researcher asks. Data suggest that, whereas the sexes differ in whether they will enter short-term sexual relationships, they are more similar in what they prioritize in partners for such relationships. However, additional data and context of other findings and theory suggest different underlying reasons.
In Studies 1 and 2, men and women were given varying \"mate budgets\" to design short-term mates and were asked whether they would actually mate with the constructed partners. Study 3 used a mate-screening paradigm. Whereas women have been found to prioritize status in long-term mates, they instead (like men) prioritize physical attractiveness much like an economic necessity in short-term mates. Both sexes also show evidence of favoring well-rounded long- and short-term mates when given the chance. In Studies 4 and 5, participants report reasons for having casual sex and what they find physically attractive. For women, results generally support a good genes account of short-term mating, as per strategic pluralism theory (S. W. Gangestad & J. A. Simpson, 2000). Discussion addresses broader theoretical implications for mate preference, and the link between method and theory in examining social decision processes."} {"_id": "1ef415ce920b2ca197a21bbfd710e7a9dc7a655e", "title": "Interpersonal connectedness: conceptualization and directions for a measurement instrument", "text": "Interpersonal connectedness is the sense of belonging based on the appraisal of having sufficient close social contacts. This feeling is regarded as one of the major outcomes of successful (mediated) social interaction and as such an important construct for HCI. However, the exact nature of this feeling, how to achieve it, and how to assess it remain unexplored to date. In the current paper we start the theoretical conceptualization of this phenomenon by exploring its basic origins in psychological literature, and simultaneously formulate requirements for a measurement instrument to be developed in the service of exploring and testing CMC applications, in particular awareness technologies."} {"_id": "686fb884072b323d6bd365bdee1df894ee996758", "title": "Security challenges in software defined network and their solutions", "text": "The main purpose of Software Defined Networking (SDN) is to allow network engineers to respond quickly to changing industrial network requirements. This network technology focuses on making the network as adaptable and active as a virtual server. SDN is the physical separation of the control plane from the data plane, with the control plane centralized to manage the underlying infrastructure. Hence, SDN permits the network administrator to adjust network-wide traffic flow from a centralized control console without having to touch individual switches and routers, and can provide services wherever they are needed in the network. However, since in SDN the control plane is dissociated from the underlying forwarding plane, it is susceptible to many security challenges, such as Denial of Service (DoS) attacks, Distributed DoS (DDoS) attacks, and volumetric attacks. In this paper, we highlight some security challenges and evaluate some security solutions."} {"_id": "0f8468de03ee9f12d693237bec87916311bf1c24", "title": "The Seventh PASCAL Recognizing Textual Entailment Challenge", "text": "This paper presents the Seventh Recognizing Textual Entailment (RTE-7) challenge. This year\u2019s challenge replicated the exercise proposed in RTE-6, consisting of a Main Task, in which Textual Entailment is performed on a real corpus in the Update Summarization scenario; a Main subtask aimed at detecting novel information; and a KBP Validation Task, in which RTE systems had to validate the output of systems participating in the KBP Slot Filling Task. Thirteen teams participated in the Main Task (submitting 33 runs) and 5 in the Novelty Detection Subtask (submitting 13 runs).
The KBP Validation Task was undertaken by 2 participants, which submitted 5 runs. The ablation test experiment, introduced in RTE-5 to evaluate the impact of knowledge resources used by the systems participating in the Main Task, and extended also to tools in RTE-6, was repeated in RTE-7."} {"_id": "b13d8434b140ac3b9cb923b91afc17d1e448abfc", "title": "Mobile applications for weight management: theory-based content analysis.", "text": "BACKGROUND\nThe use of smartphone applications (apps) to assist with weight management is increasingly prevalent, but the quality of these apps is not well characterized.\n\n\nPURPOSE\nThe goal of the study was to evaluate diet/nutrition and anthropometric tracking apps based on incorporation of features consistent with theories of behavior change.\n\n\nMETHODS\nA comparative, descriptive assessment was conducted of the top-rated free apps in the Health and Fitness category available in the iTunes App Store. Health and Fitness apps (N=200) were evaluated using predetermined inclusion/exclusion criteria and categorized based on commonality in functionality, features, and developer description. Four researchers then evaluated the two most popular apps in each category using two instruments: one based on traditional behavioral theory (score range: 0-100) and the other on the Fogg Behavioral Model (score range: 0-6). Data collection and analysis occurred in November 2012.\n\n\nRESULTS\nEligible apps (n=23) were divided into five categories: (1) diet tracking; (2) healthy cooking; (3) weight/anthropometric tracking; (4) grocery decision making; and (5) restaurant decision making. The mean behavioral theory score was 8.1 (SD=4.2); the mean persuasive technology score was 1.9 (SD=1.7). The top-rated app on both scales was Lose It! by Fitnow Inc.\n\n\nCONCLUSIONS\nAll apps received low overall scores for inclusion of behavioral theory-based strategies."} {"_id": "2b211f9553ec78ff17fa3ebe16c0a036ef33c54b", "title": "Constructions from Dots and Lines", "text": "Marko A. Rodriguez is a graph systems architect at AT&T Interactive. He can be reached at marko@markorodriguez.com. Peter Neubauer is chief operating officer of Neo Technology. He can be reached at peter.neubauer@neotechnology.com. A graph is a data structure composed of dots (i.e., vertices) and lines (i.e., edges). The dots and lines of a graph can be organized into intricate arrangements. A graph\u2019s ability to denote objects and their relationships to one another allows for a surprisingly large number of things to be modeled as graphs. From the dependencies that link software packages to the wood beams that provide the framing to a house, most anything has a corresponding graph representation. However, just because it is possible to represent something as a graph does not necessarily mean that its graph representation will be useful. If a modeler can leverage the plethora of tools and algorithms that store and process graphs, then such a mapping is worthwhile. This article explores the world of graphs in computing and exposes situations in which graphical models are beneficial."} {"_id": "0122bba20a91c739ffb6dd4c68cf84b9305ccfc0", "title": "Hilbert Space Embeddings of Hidden Markov Models", "text": "Hidden Markov Models (HMMs) are important tools for modeling sequence data. However, they are restricted to discrete latent states, and are largely restricted to Gaussian and discrete observations.
And, learning algorithms for HMMs have predominantly relied on local search heuristics, with the exception of spectral methods such as those described below. We propose a nonparametric HMM that extends traditional HMMs to structured and non-Gaussian continuous distributions. Furthermore, we derive a local-minimum-free kernel spectral algorithm for learning these HMMs. We apply our method to robot vision data, slot car inertial sensor data and audio event classification data, and show that in these applications, embedded HMMs exceed the previous state-of-the-art performance."} {"_id": "0c5e3186822a3d10d5377b741f36b6478d0a8667", "title": "Closing the learning-planning loop with predictive state representations", "text": "A central problem in artificial intelligence is that of planning to maximize future reward under uncertainty in a partially observable environment. In this paper we propose and demonstrate a novel algorithm which accurately learns a model of such an environment directly from sequences of action-observation pairs. We then close the loop from observations to actions by planning in the learned model and recovering a policy which is near-optimal in the original environment. Specifically, we present an efficient and statistically consistent spectral algorithm for learning the parameters of a Predictive State Representation (PSR). We demonstrate the algorithm by learning a model of a simulated high-dimensional, vision-based mobile robot planning task, and then perform approximate point-based planning in the learned PSR. Analysis of our results shows that the algorithm learns a state space which efficiently captures the essential features of the environment. This representation allows accurate prediction with a small number of parameters, and enables successful and efficient planning."} {"_id": "16611312448f5897c7a84e2f590617f4fa3847c4", "title": "A Spectral Algorithm for Learning Hidden Markov Models", "text": "Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. Typically, they are learned using search heuristics (such as the Baum-Welch / EM algorithm), which suffer from the usual local optima issues. While in general these models are known to be hard to learn with samples from the underlying distribution, we provide the first provably efficient algorithm (in terms of sample and computational complexity) for learning HMMs under a natural separation condition. This condition is roughly analogous to the separation conditions considered for learning mixture distributions (where, similarly, these models are hard to learn in general). Furthermore, our sample complexity results do not explicitly depend on the number of distinct (discrete) observations \u2014 they implicitly depend on this number through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observation is sometimes the words in a language. Finally, the algorithm is particularly simple, relying only on a singular value decomposition and matrix multiplications."} {"_id": "5ec33788f9908f69255c9df424c51c7495546893", "title": "Hilbert space embeddings of conditional distributions with applications to dynamical systems", "text": "In this paper, we extend the Hilbert space embedding approach to handle conditional distributions.
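The observable-operator construction behind the spectral HMM algorithm above can be shown in a few lines of numpy. For brevity, the low-order moments are computed exactly from a small toy HMM rather than estimated from samples, which is where the real algorithm's statistical work happens.

```python
import numpy as np

m, n = 2, 3                                  # hidden states, observations
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])                   # T[i, j] = Pr(h' = i | h = j)
O = np.array([[0.6, 0.1],
              [0.3, 0.2],
              [0.1, 0.7]])                   # O[x, h] = Pr(obs = x | h)
pi = np.array([0.5, 0.5])

# Unigram, bigram and trigram moments (estimated from data in practice).
P1 = O @ pi
P21 = O @ T @ np.diag(pi) @ O.T
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T for x in range(n)]

# Spectral learning: an SVD plus matrix products, no local search.
U = np.linalg.svd(P21)[0][:, :m]
b1 = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n)]

def joint_prob(seq):
    """Pr(x_1..x_t) from the learned observable-operator representation."""
    state = b1
    for x in seq:
        state = B[x] @ state
    return float(binf @ state)

print(joint_prob([0, 2, 1]))   # matches the true HMM joint probability
```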
We derive a kernel estimate for the conditional embedding, and show its connection to ordinary embeddings. Conditional embeddings largely extend our ability to manipulate distributions in Hilbert spaces, and as an example, we derive a nonparametric method for modeling dynamical systems where the belief state of the system is maintained as a conditional embedding. Our method is very general in terms of both the domains and the types of distributions that it can handle, and we demonstrate the effectiveness of our method in various dynamical systems. We expect that conditional embeddings will have wider applications beyond modeling dynamical systems."} {"_id": "645b2c28c28bd28eaa187a2faafa5ec12bc12e3a", "title": "Knowledge tracing: Modeling the acquisition of procedural knowledge", "text": "This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called the ideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process called knowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has \u2018mastered\u2019 each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels."} {"_id": "420a5f02b079d596ec2da0b5cddda43326226a09", "title": "Differential evolution algorithm with multiple mutation strategies based on roulette wheel selection", "text": "In this paper, we propose a differential evolution (DE) algorithm variant with a combination of multiple mutation strategies based on roulette wheel selection, which we call MMRDE. We first propose a new, reflection-based mutation operation inspired by the reflection operations in the Nelder\u2013Mead method. We design an experiment to compare its performance with seven mutation strategies, and we prove its effectiveness at balancing exploration and exploitation of DE. Although our reflection-based mutation strategy can balance exploration and exploitation of DE, it is still prone to premature convergence or evolutionary stagnation when solving complex multimodal optimization problems. Therefore, we add two basic strategies to help maintain population diversity and increase the robustness. We use roulette wheel selection to arrange mutation strategies based on their success rates for each individual. MMRDE is tested against some improved DE variants on 28 benchmark functions for real-parameter optimization recommended by the Institute of Electrical and Electronics Engineers (IEEE) CEC2013 special session. Experimental results indicate that the proposed algorithm shows its effectiveness at cooperatively combining multiple strategies. It can obtain a good balance between exploration and exploitation.
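Knowledge tracing as described above maintains, per rule, the probability that the student has learned it. A standard Bayesian formulation of one tracing step looks as follows; the slip, guess, and learning-rate values are illustrative.

```python
def trace_step(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the mastery estimate for one rule after one attempt."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # The student may also learn the rule at this opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3
for outcome in [True, True, False, True]:
    p = trace_step(p, outcome)
    print(round(p, 3))   # the tutor selects exercises until p reaches mastery
```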
The proposed algorithm can guide the search toward a globally optimal solution with quick convergence compared with other improved DE variants."} {"_id": "7cf3e97ef62c25a4df6cafb79e2efa8d605419b8", "title": "Managing the knowledge lifecycle: An integrated knowledge management process model", "text": "The main purpose of this study is to propose an integrated conceptual model for exploring the knowledge management process. By reviewing the literature on knowledge management, tracing its historical background, definitions and dominant paradigms, and analyzing 32 KM frameworks, a relatively integrated model of KM is presented in this paper. The study found many similarities among the models, and it presents a model which combines the best features of the models analyzed. A 10-stage integrated model is proposed. These stages include activities such as knowledge goal setting, identification, acquisition, creation, organization, sharing, evaluation, preservation, retention and updating, and KM effectiveness evaluation. The findings can be used by managers in all organizational environments for the implementation and auditing of KM practices."} {"_id": "55295193bc32ffd43c592ad19b4eabfde282c9b1", "title": "An Augmented Reality museum guide", "text": "Recent years have seen advances in many enabling augmented reality technologies. Furthermore, much research has been carried out on how augmented reality can be used to enhance existing applications. This paper describes our experiences with an AR museum guide that combines some of the latest technologies. Amongst other technologies, markerless tracking, hybrid tracking, and an ultra-mobile PC were used. Like existing audio guides, the AR guide can be used by any museum visitor during a six-month exhibition on Islamic art. We provide a detailed description of the museum's motivation for using AR, of our experiences in developing the system, and the initial results of user surveys. Taking this information into account, we can derive possible system improvements."} {"_id": "523cf537aa1050efdcf0befe1d851b363afa0396", "title": "Security in Cloud Computing Using Cryptographic Algorithms", "text": "Cloud computing is a concept implemented to address everyday computing problems. Cloud computing is basically a virtual pool of resources, and it provides these resources to users via the internet. It is an internet-based development in computer technology. The prevalent problems associated with cloud computing are data privacy, security, anonymity and reliability, etc. The most important among them is security, and how the cloud provider assures it. Securing the Cloud means securing the computations and the storage (databases hosted by the Cloud provider). In this paper, we analyse different security issues in the cloud and different cryptographic algorithms that can be adopted to provide better security for the cloud."} {"_id": "a94250803137af3aedf73e1cd2d9146a21b29356", "title": "CEFAM: Comprehensive Evaluation Framework for Agile Methodologies", "text": "Agile software development is regarded as an effective and efficient approach, mainly due to its ability to accommodate rapidly changing requirements, and to cope with modern software development challenges.
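The roulette wheel selection of mutation strategies used by MMRDE can be sketched as follows; the success/failure bookkeeping and the smoothing are assumptions for illustration, not the authors' exact scheme.

```python
import random

def pick_strategy(success, failure, strategies):
    """Select a mutation strategy with probability proportional to its
    (Laplace-smoothed) success rate so far."""
    rates = [(success[s] + 1) / (success[s] + failure[s] + 2) for s in strategies]
    r, acc = random.uniform(0, sum(rates)), 0.0
    for s, rate in zip(strategies, rates):
        acc += rate
        if r <= acc:
            return s
    return strategies[-1]

strategies = ["reflection", "rand/1", "best/1"]
success = {"reflection": 8, "rand/1": 3, "best/1": 5}
failure = {"reflection": 2, "rand/1": 7, "best/1": 5}
print(pick_strategy(success, failure, strategies))
```

Strategies that recently produced improved trial vectors are sampled more often, while the smoothing keeps every strategy selectable, which is one way to preserve the exploration/exploitation balance the abstract emphasizes.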
There is therefore a strong tendency to use agile software development methodologies where applicable; however, the sheer number of existing agile methodologies and their variants hinders the selection of an appropriate agile methodology or method chunk. Methodology evaluation tools address this problem by providing detailed evaluations, yet no comprehensive evaluation framework is available for agile methodologies. We introduce the comprehensive evaluation framework for agile methodologies (CEFAM) as an evaluation tool for project managers and method engineers. The hierarchical (and mostly quantitative) evaluation criterion set introduced in this evaluation framework enhances the usability of the framework and provides results that are precise enough to be useful for the selection, adaptation and construction of agile methodologies."} {"_id": "687dbe4438022dc4521ae6ff53d7a6dc04c9d154", "title": "Design and operation of Wi-Fi agribot integrated system", "text": "Robotics in agriculture is not a new concept; in controlled environments (greenhouses), it has a history of over 20 years. Research has been performed to develop harvesters for cherry tomatoes, cucumbers, mushrooms, and other fruits. In horticulture, robots have been introduced to harvest citrus and apples. In this paper, an autonomous robot for agriculture (AgriBot) is prototyped and implemented for performing various agricultural activities such as seeding, weeding, and spraying of fertilizers and insecticides. AgriBot is controlled with an Arduino Mega board built around an ATmega2560 microcontroller. A Raspberry Pi mini-computer is used to control and monitor the working of the robot. The Arduino Mega is mounted on the robot, allowing access to all of its pins for rapid prototyping. Its hexapod body can autonomously walk in any direction, avoiding objects with its ultrasonic proximity sensor. Its walking algorithms allow it to instantly change direction and walk in any new direction without turning its body. An underbody sensory array allows the robot to know if a seed has been planted in the area at the optimal spacing and depth. AgriBot can then dig a hole, plant a seed in the hole, cover the seed with soil, and apply any pre-emergence fertilizers and/or herbicides along with the marking agent. AgriBot can then signal to other robots in the immediate proximity that it needs help planting in that area, or that this area has been planted and to move on, by communicating through Wi-Fi."} {"_id": "80372b437142b8a27a9b496c21c2f2708f6f3ae3", "title": "A practical VEP-based brain-computer interface", "text": "This paper introduces the development of a practical brain-computer interface at Tsinghua University. The system uses frequency-coded steady-state visual evoked potentials to determine the gaze direction of the user. To ensure more universal applicability of the system, approaches for reducing the effect of user variation on system performance have been proposed. The information transfer rate (ITR) has been evaluated both in the laboratory and at the Rehabilitation Center of China. The system has proved to be applicable to >90% of people with a high ITR in living environments."} {"_id": "cd81ec0f8f66e4e82bd4123fdf7d39bc75e6d441", "title": "Projector Calibration by \"Inverse Camera Calibration\"", "text": "The accuracy of 3-D reconstructions depends substantially on the accuracy of active vision system calibration. 
In this work, the problem of video projector calibration is solved by inverting the standard camera calibration workflow. The calibration procedure requires a single camera, which does not need to be calibrated and which is used as a sensor to determine whether the projected dots and calibration pattern landmarks, such as the checkerboard corners, coincide. The method iteratively adjusts the projected dots to coincide with the landmarks, and the final coordinates are used as inputs to a camera calibration method. The otherwise slow iterative adjustment is accelerated by estimating a plane homography between the detected landmarks and the projected dots, which makes the calibration method fast."} {"_id": "3eff1f3c6899cfb7b66694908e1de92764e3ba04", "title": "FEEL: Featured Event Embedding Learning", "text": "Statistical script learning is an effective way to acquire world knowledge which can be used for commonsense reasoning. Statistical script learning induces this knowledge by observing event sequences generated from texts. The learned model thus can predict subsequent events, given earlier events. Recent approaches rely on learning event embeddings which capture script knowledge. In this work, we suggest a general learning model\u2013Featured Event Embedding Learning (FEEL)\u2013for injecting event embeddings with fine-grained information. In addition to capturing the dependencies between subsequent events, our model can take into account higher-level abstractions of the input event which help the model generalize better and account for the global context in which the event appears. We evaluated our model over three narrative cloze tasks, and showed that our model is competitive with the most recent state-of-the-art. We also show that our resulting embedding can be used as a strong representation for advanced semantic tasks such as discourse parsing and sentence semantic relatedness."} {"_id": "8ad6fda2d41dd823d2569797c8c7353dad31b371", "title": "Attribute-based encryption with non-monotonic access structures", "text": "We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes."} {"_id": "7ee87dc96108eca0ae8d5c2d5a8b44bd4d0f0fe1", "title": "Browser Fingerprinting as user tracking technology", "text": "The Web has become an indispensable part of our society and is currently the most commonly used mode of information delivery. Millions of users access the free services provided by websites on a daily basis, and while providing these free services, websites track and profile their users. In this environment, the ability to track users and their online habits can be very lucrative for advertising companies, yet very intrusive for the privacy of users. The objective of this paper is to study the increasingly common yet hardly discussed technique of identifying individual Web users and tracking them across multiple websites, known as \u201cbrowser fingerprinting\u201d. A unique browser fingerprint is derived from the unique pattern of information visible whenever a computer visits a website. The permutations thus collected are sufficiently distinct that they can be used as a tool for tracking. 
Unlike cookies, fingerprints are generated on the server side and are difficult for a user to influence. The main objective of this research is to study how fingerprinting evolved, its positives and negatives, the threat it poses to users' online privacy, and the countermeasures that could be used to prevent it. This paper will also analyse the different properties that browsers send to the server, allowing a unique fingerprint of those browsers to be created."} {"_id": "141c85bcd27b9d53f97cba73c6f4c88455654dac", "title": "Zone Divisional Hierarchical Routing Protocol for Wireless Sensor Network", "text": "Clustering prolongs energy resources, improves scalability and preserves communication bandwidth. Clustering is classified either as static or dynamic, or as equal or unequal. In cluster-based routing protocols that employ multi-hop communication, imbalanced energy consumption among the nodes results in hot-spots. Unequal clustering overcomes hot-spots but requires a high overhead and is prone to connectivity issues. To offer guaranteed connectivity and alleviate the hot-spot problem, a zone divisional hierarchical routing protocol has been proposed in this paper. The network is divided into equal-sized static rectangular clusters which are assigned to two zones, namely the near zone and the far zone. The zone facing the base station is known as the near zone, and the rest of the network space makes up the far zone, which is further divided into sub-zones. A dual-cluster-head approach for sharing the reception, aggregation and forwarding tasks is proposed. The performance evaluation of the proposed protocol against the existing protocols reveals that the method offers energy-efficient multi-hop communication support, uses negligible overhead, prevents the creation of hot-spots, avoids early death of nodes, balances energy consumption across the network and maximizes the network lifetime."} {"_id": "96ac16ce5d0d094fe5bf675a4a15e65025d85874", "title": "MOBA: a New Arena for Game AI", "text": "Games have always been popular testbeds for Artificial Intelligence (AI). In the last decade, we have seen the rise of Multiplayer Online Battle Arena (MOBA) games, which are the most played games nowadays. In spite of this, there are few works that explore MOBA as a testbed for AI research. In this paper we present and discuss the main features and opportunities offered by MOBA games to Game AI research. We describe the various challenges faced along the game and also propose a discrete model that can be used to better understand and explore the game. With this, we aim to encourage the use of MOBA as a novel research platform for Game AI."} {"_id": "4f3dbfec5c67f0fb0602d9c803a391bc2f6ee4c7", "title": "A 20-GHz phase-locked loop for 40-Gb/s serializing transmitter in 0.13-\u03bcm CMOS", "text": "A 20-GHz phase-locked loop with 4.9 ps peak-to-peak / 0.65 ps rms jitter and -113.5 dBc/Hz phase noise at 10-MHz offset is presented. A half-duty sampled-feedforward loop filter that simply replaces the resistor with a switch and an inverter suppresses the reference spur down to -44.0 dBc. A design iteration procedure is outlined that minimizes the phase noise of a negative-gm oscillator with a coupled microstrip resonator. Static frequency dividers made of pulsed latches operate faster than those made of flip-flops and achieve a near-2:1 frequency range. 
The phase-locked loop, fabricated in a 0.13-\u03bcm CMOS process, operates from 17.6 to 19.4 GHz and dissipates 480 mW."} {"_id": "2edc7ee4ef1b19d104f34395f4e977afac12ea64", "title": "Color Balance and Fusion for Underwater Image Enhancement", "text": "We introduce an effective technique to enhance images captured underwater and degraded due to scattering and absorption in the medium. Our method is a single-image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. It builds on the blending of two images that are directly derived from a color-compensated and white-balanced version of the original degraded image. The two images to be fused, as well as their associated weight maps, are defined to promote the transfer of edges and color contrast to the output image. To avoid sharp weight-map transitions creating artifacts in the low-frequency components of the reconstructed image, we also adapt a multiscale fusion strategy. Our extensive qualitative and quantitative evaluation reveals that our enhanced images and videos are characterized by better exposedness of the dark regions, improved global contrast, and edge sharpness. Our validation also proves that our algorithm is reasonably independent of the camera settings, and improves the accuracy of several image processing applications, such as image segmentation and keypoint matching."} {"_id": "1fcaf7ddcadda724d67684d66856c107375f448b", "title": "Rationale-Augmented Convolutional Neural Networks for Text Classification", "text": "We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions."} {"_id": "a11c59837b88f27c2f58a1c562b96450f7d52c3f", "title": "Informing Pedagogical Action: Aligning Learning Analytics With Learning Design", "text": "This article considers the developing field of learning analytics and argues that to move from small-scale practice to broad-scale applicability, there is a need to establish a contextual framework that helps teachers interpret the information that analytics provides. The article presents learning design as a form of documentation of pedagogical intent that can provide the context for making sense of diverse sets of analytic data. 
We investigate one example of learning design to explore how broad categories of analytics\u2014which we call checkpoint and process analytics\u2014can inform the interpretation of outcomes from a learning design and facilitate pedagogical action."} {"_id": "20b41b2a0d8ee71efd3986b4baeed24eba904350", "title": "Maternal depression and early childhood growth in developing countries: systematic review and meta-analysis.", "text": "OBJECTIVE\nTo investigate the relationship between maternal depression and child growth in developing countries through a systematic literature review and meta-analysis.\n\n\nMETHODS\nSix databases were searched for studies from developing countries on maternal depression and child growth published up until 2010. Standard meta-analytical methods were followed and pooled odds ratios (ORs) for underweight and stunting in the children of depressed mothers were calculated using random effects models for all studies and for subsets of studies that met strict criteria on study design, exposure to maternal depression and outcome variables. The population attributable risk (PAR) was estimated for selected studies.\n\n\nFINDINGS\nSeventeen studies including a total of 13,923 mother and child pairs from 11 countries met inclusion criteria. The children of mothers with depression or depressive symptoms were more likely to be underweight (OR: 1.5; 95% confidence interval, CI: 1.2-1.8) or stunted (OR: 1.4; 95% CI: 1.2-1.7). Subanalysis of three longitudinal studies showed a stronger effect: the OR for underweight was 2.2 (95% CI: 1.5-3.2) and for stunting, 2.0 (95% CI: 1.0-3.9). The PAR for selected studies indicated that if the infant population were entirely unexposed to maternal depressive symptoms 23% to 29% fewer children would be underweight or stunted.\n\n\nCONCLUSION\nMaternal depression was associated with early childhood underweight and stunting. Rigorous prospective studies are needed to identify mechanisms and causes. Early identification, treatment and prevention of maternal depression may help reduce child stunting and underweight in developing countries."} {"_id": "63d91c875bad47f2cf563a00b39218b200d6814b", "title": "Application of Artificial Neural Networks to Multiple Criteria Inventory Classification", "text": "Inventory classification is a very important part of inventory control and a technique of the operational research discipline. A systematic approach to inventory control and classification may have a significant influence on company competitiveness. The paper describes the results obtained by investigating the application of neural networks in multiple criteria inventory classification. Various structures of a back-propagation neural network have been analysed, and the optimal one, with the minimum root mean square error, was selected. The predicted results are compared to those obtained by the multiple criteria classification using the analytical hierarchy process."} {"_id": "3a932920c44c06b43fc24393c8710dfd2238eb37", "title": "Methods for Pretreatment of Lignocellulosic Biomass for Efficient Hydrolysis and Biofuel Production", "text": "Review: Methods for Pretreatment of Lignocellulosic Biomass for Efficient Hydrolysis and Biofuel Production, by Parveen Kumar, Diane M. Barrett, Michael J. Delwiche, and Pieter Stroeve. Ind. Eng. Chem.
Res., DOI: 10.1021/ie801542g, Publication Date (Web): 20 March 2009."} {"_id": "0cc95944a0fcbeb402b02bc86b522bef79873f16", "title": "Automatic Game Design via Mechanic Generation", "text": "Game designs often center on the game mechanics\u2014rules governing the logical evolution of the game. We seek to develop an intelligent system that generates computer games. As first steps towards this goal, we present a composable and cross-domain representation for game mechanics that draws from AI planning action representations. We use a constraint solver to generate mechanics subject to design requirements on the form of those mechanics\u2014what they do in the game. A planner takes a set of generated mechanics and tests whether those mechanics meet playability requirements\u2014controlling how mechanics function in a game to affect player behavior. We demonstrate our system by modeling and generating mechanics in a role-playing game, platformer game, and combined role-playing-platformer game."} {"_id": "2f17bde51760f3d5577043c3ba83173d583a66ac", "title": "Central Retinal Artery Occlusion and Partial Ophthalmoplegia Caused by Hyaluronic Acid Injection Filler Cosmetic", "text": "Central retinal artery occlusion is a rare ocular emergency condition that can lead to total blindness. Here we report a 30-year-old woman with sudden visual loss of the left eye after being injected with hyaluronic acid filler in the nasal dorsum area for cosmetic purposes. Visual acuity was light perception, while the anterior segment and intraocular pressure were within normal limits. Partial ophthalmoplegia, including restriction of nasal gaze, was found. Funduscopic examination revealed total retinal edema with a cherry-red spot at the fovea. Ocular massage was performed as initial management, followed by aqueous humor paracentesis. All procedures were done within 90 minutes. The patient was discharged with visual improvement to hand motion, and the partial ophthalmoplegia had improved when evaluated less than a month later."} {"_id": "5b0295e92ac6f493f23f46efcd6fdfbbca74ac48", "title": "Critiquing-based recommenders: survey and emerging trends", "text": "Critiquing-based recommender systems elicit users\u2019 feedback, called critiques, which they make on the recommended items. This conversational style of interaction is in contrast to the standard model where users receive recommendations in a single interaction. Through the use of the critiquing feedback, the recommender systems are able to more accurately learn the users\u2019 profiles, and therefore suggest better recommendations in the subsequent rounds. Critiquing-based recommenders have been widely studied in knowledge-, content-, and preference-based recommenders and are beginning to be tried in several online websites, such as MovieLens. This article examines the motivation and development of the subject area, and offers a detailed survey of the state of the art concerning the design of critiquing interfaces and development of algorithms for critiquing generation. With the help of categorization analysis, the survey reveals three principal branches of critiquing-based recommender systems, using natural-language-based, system-suggested, and user-initiated critiques, respectively. Representative example systems will be presented and analyzed for each branch, and their respective pros and cons will be discussed. 
Subsequently, a hybrid framework is developed to unify the advantages of different methods and overcome their respective limitations. Empirical findings from user studies are further presented, indicating how hybrid critiquing support could effectively enable end-users to achieve more confident decisions. Finally, the article will point out several future trends to boost the advance of critiquing-based recommenders."} {"_id": "5c38df0e9281c60b32550d92bed6e5af9a869c05", "title": "Expert Level Control of Ramp Metering Based on Multi-Task Deep Reinforcement Learning", "text": "This paper shows how the recent breakthroughs in reinforcement learning (RL) that have enabled robots to learn to play arcade video games, walk, or assemble colored bricks, can be used to perform other tasks that are currently at the core of engineering cyberphysical systems. We present the first use of RL for the control of systems modeled by discretized non-linear partial differential equations (PDEs) and devise a novel algorithm to use non-parametric control techniques for large multi-agent systems. Cyberphysical systems (e.g., hydraulic channels, transportation systems, the energy grid, and electromagnetic systems) are commonly modeled by PDEs, which historically have been a reliable way to enable engineering applications in these domains. However, it is known that the control of these PDE models is notoriously difficult. We show how neural network-based RL enables the control of discretized PDEs whose parameters are unknown, random, and time-varying. We introduce an algorithm of mutual weight regularization (MWR), which alleviates the curse of dimensionality of multi-agent control schemes by sharing experience between agents while giving each agent the opportunity to specialize its action policy so as to tailor it to the local parameters of the part of the system it is located in. A discretized PDE, such as the scalar Lighthill\u2013Whitham\u2013Richards PDE, can indeed be considered a macroscopic freeway traffic simulator, and it presents the most salient challenges for learning to control a large cyberphysical system with multiple agents. We consider two different discretization procedures and show the opportunities offered by applying deep reinforcement learning for continuous control on both. Using a neural RL PDE controller on a traffic flow simulation based on a Godunov discretization of the San Francisco Bay Bridge, we are able to achieve precise adaptive metering without model calibration, thereby beating the state of the art in traffic metering. Furthermore, with the more accurate BeATS simulator, we manage to achieve a control performance on par with ALINEA, a state-of-the-art parametric control scheme, and show how using MWR improves the learning procedure."} {"_id": "9ec3b78b826df149fb215f005e32e56afaf532da", "title": "Design Issues and Challenges in Wireless Sensor Networks", "text": "Wireless sensor networks (WSNs) are self-organized wireless ad hoc networks which comprise a large number of resource-constrained sensor nodes. Major areas of research in WSNs include hardware and operating systems, deployment, architecture, localization, synchronization, programming models, data aggregation and dissemination, database querying, middleware, quality of service and security. 
This paper highlights ongoing research activities and issues that affect the design and performance of wireless sensor networks."} {"_id": "15cf63f8d44179423b4100531db4bb84245aa6f1", "title": "Deep Learning", "text": "Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users\u2019 interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition and speech recognition, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules, analysing particle accelerator data, reconstructing brain circuits, and predicting the effects of mutations in non-coding DNA on gene expression and disease. 
Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding, particularly topic classification, sentiment analysis, question answering and language translation. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress."} {"_id": "268a7cd1bf65351bba4eb97aac373d624af2c08f", "title": "A Persona-Based Neural Conversation Model", "text": "We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speaker-addressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges."} {"_id": "32de3580916a1e5f36bc54779ba5904c19daa403", "title": "Intelligent Tutoring Systems with Conversational Dialogue", "text": "Intelligent tutoring systems that have been developed during the last 20 years have proven to be quite successful, particularly in the domains of mathematics, science, and technology. They produce significant learning gains beyond classroom environments. They are capable of engaging most students\u2019 attention and interest for hours. We have been working on a new generation of intelligent tutoring systems that hold mixed-initiative conversational dialogues with the learner. The tutoring systems present challenging problems and questions to the learner, the learner types in answers in English, and there is a lengthy multiturn dialogue as complete solutions or answers evolve. This article presents the tutoring systems that we have been developing. AUTOTUTOR is a conversational agent, with a talking head, that helps college students learn about computer literacy. ANDES, ATLAS, and WHY2 help adults learn about physics. Instead of being mere information-delivery systems, our systems help students actively construct knowledge through conversations."} {"_id": "3b9732bb07dc99bde5e1f9f75251c6ea5039373e", "title": "Deep Reinforcement Learning with Double Q-learning", "text": "The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games. 
The goal of reinforcement learning (Sutton and Barto 1998) is to learn good policies for sequential decision problems, by optimizing a cumulative future reward signal. Q-learning (Watkins 1989) is one of the most popular reinforcement learning algorithms, but it is known to sometimes learn unrealistically high action values because it includes a maximization step over estimated action values, which tends to prefer overestimated to underestimated values. In previous work, overestimations have been attributed to insufficiently flexible function approximation (Thrun and Schwartz 1993) and noise (van Hasselt 2010, 2011). In this paper, we unify these views and show overestimations can occur when the action values are inaccurate, irrespective of the source of approximation error. Of course, imprecise value estimates are the norm during learning, which indicates that overestimations may be much more common than previously appreciated. It is an open question whether, if the overestimations do occur, this negatively affects performance in practice. Overoptimistic value estimates are not necessarily a problem in and of themselves. If all values were uniformly higher, then the relative action preferences would be preserved and we would not expect the resulting policy to be any worse. Furthermore, it is known that sometimes it is good to be optimistic: optimism in the face of uncertainty is a well-known exploration technique (Kaelbling et al. 1996). If, however, the overestimations are not uniform and not concentrated at states about which we wish to learn more, then they might negatively affect the quality of the resulting policy. Thrun and Schwartz (1993) give specific examples in which this leads to suboptimal policies, even asymptotically. To test whether overestimations occur in practice and at scale, we investigate the performance of the recent DQN algorithm (Mnih et al. 2015). DQN combines Q-learning with a flexible deep neural network and was tested on a varied and large set of deterministic Atari 2600 games, reaching human-level performance on many games. In some ways, this setting is a best-case scenario for Q-learning, because the deep neural network provides flexible function approximation with the potential for a low asymptotic approximation error, and the determinism of the environments prevents the harmful effects of noise. Perhaps surprisingly, we show that even in this comparatively favorable setting DQN sometimes substantially overestimates the values of the actions. We show that the Double Q-learning algorithm (van Hasselt 2010), which was first proposed in a tabular setting, can be generalized to arbitrary function approximation, including deep neural networks. We use this to construct a new algorithm called Double DQN. This algorithm not only yields more accurate value estimates, but leads to much higher scores on several games. This demonstrates that the overestimations of DQN indeed lead to poorer policies and that it is beneficial to reduce them. In addition, by improving upon DQN we obtain state-of-the-art results on the Atari domain."} {"_id": "4cd91a0bc0474ce5560209cbeb79b21a1403eac1", "title": "Attention with Intention for a Neural Network Conversation Model", "text": "In a conversation or a dialogue process, attention and intention play intrinsic roles. This paper proposes a neural network based approach that models the attention and intention processes. 
It essentially consists of three recurrent networks. The encoder network is a word-level model representing source-side sentences. The intention network is a recurrent network that models the dynamics of the intention process. The decoder network is a recurrent network that produces responses to the input from the source side. It is a language model that is dependent on the intention and has an attention mechanism to attend to particular source-side words when predicting a symbol in the response. The model is trained end-to-end without labeled data. Experiments show that this model generates natural responses to user inputs."} {"_id": "3aabaa2b1a7e3bee7ab1c29da0311a4fdd5f4378", "title": "Low cost power failure protection for MLC NAND flash storage systems with PRAM/DRAM hybrid buffer", "text": "In the latest PRAM/DRAM hybrid MLC NAND flash storage systems (NFSS), DRAM is used to temporarily store file system data to reduce system response time. To ensure data integrity, super-capacitors are deployed to supply the backup power for moving the data from DRAM to NAND flash during power failures. However, the capacitance degradation of super-capacitors severely impairs system robustness. In this work, we propose a low-cost power failure protection scheme to reduce the energy consumption of power failure protection and increase the robustness of NFSS with a PRAM/DRAM hybrid buffer. Our scheme enables the adoption of the more reliable regular capacitor to replace the super-capacitor as the backup power source. The experimental results show that our scheme can substantially reduce the capacitance budget of the power failure protection circuitry by 75.1% with very marginal performance and energy overheads."} {"_id": "a60c571ffeceab5029e3a762838fb7b5e52fdb7b", "title": "Latent Variable Analysis Growth Mixture Modeling and Related Techniques for Longitudinal Data", "text": "This chapter gives an overview of recent advances in latent variable analysis. Emphasis is placed on the strength of modeling obtained by using a flexible combination of continuous and categorical latent variables. To focus the discussion and make it manageable in scope, analysis of longitudinal data using growth models will be considered. Continuous latent variables are common in growth modeling in the form of random effects that capture individual variation in development over time. The use of categorical latent variables in growth modeling is, in contrast, perhaps less familiar, and new techniques have recently emerged. The aim of this chapter is to show the usefulness of growth model extensions using categorical latent variables. The discussion also has implications for latent variable analysis of cross-sectional data. The chapter begins with two major parts corresponding to continuous outcomes versus categorical outcomes. Within each part, conventional modeling using continuous latent variables will be described"} {"_id": "3ea12acd699b0be1985d51b49eeb48cab87034d0", "title": "Code quality analysis in open source software development", "text": "Proponents of open source style software development claim that better software is produced using this model compared with the traditional closed model. However, there is little empirical evidence in support of these claims. In this paper, we present the results of a pilot case study aiming: (a) to understand the implications of structural quality; and (b) to figure out the benefits of structural quality analysis of the code delivered by open source style development. 
To this end, we have measured quality characteristics of 100 applications written for Linux, using a software measurement tool, and compared the results with the industrial standard that is proposed by the tool. Another target of this case study was to investigate the issue of modularity in open source, as this characteristic is considered crucial by the proponents of open source for this type of software development. We have empirically assessed the relationship between the size of the application components and the delivered quality measured through user satisfaction. We have determined that, up to a certain extent, the average component size of an application is negatively related to the user satisfaction for this application."} {"_id": "4ac2914c53913bde6efe2d68b4a703fdce74ccbe", "title": "A snake-based method for segmentation of intravascular ultrasound images and its in vivo validation.", "text": "Image segmentation for detection of vessel walls is necessary for quantitative assessment of vessel diseases by intravascular ultrasound. A new segmentation method based on the gradient vector flow (GVF) snake model is proposed in this paper. The main characteristics of the proposed method include two aspects: one is that nonlinear filtering is performed on the GVF field to reduce the critical points, change the morphological structure of the parallel curves and extend the capture range; the other is that a balloon snake is combined with the model. Thus, the improved GVF and balloon snake can be automatically initialized and overcome the problem caused by local energy minima. Results of 20 in vivo cases validated the accuracy and stability of the segmentation method for intravascular ultrasound images."} {"_id": "40944e7b88077be8ae53feb44bb92ae545d16ff1", "title": "Basket-Sensitive Personalized Item Recommendation", "text": "Personalized item recommendation is useful in narrowing down the list of options provided to a user. In this paper, we address the problem scenario where the user is currently holding a basket of items, and the task is to recommend an item to be added to the basket. Here, we assume that items currently in a basket share some association based on an underlying latent need, e.g., ingredients to prepare some dish, spare parts of some device. Thus, it is important that a recommended item is relevant not only to the user, but also to the existing items in the basket. Towards this goal, we propose two approaches. First, we explore a factorization-based model called BFM that incorporates various types of associations involving the user, the target item to be recommended, and the items currently in the basket. Second, based on our observation that various recommendations towards constructing the same basket should have similar likelihoods, we propose another model called CBFM that further incorporates basket-level constraints. Experiments on three real-life datasets from different domains empirically validate these models against baselines based on matrix factorization and association rules."} {"_id": "9fab8e2111a3546f61d8484a1ef7bdbd0fb5239a", "title": "A Review of Classic Edge Detectors", "text": "In this paper some of the classic alternatives for edge detection in digital images are studied. The main idea of edge detection algorithms is to find where abrupt changes in the intensity of an image have occurred. The first family of algorithms reviewed in this work, such as Sobel, Prewitt and Roberts, uses the first derivative to find the changes of intensity. 
In the second reviewed family, second derivatives are used, for example in algorithms like Marr-Hildreth and Haralick. The obtained results are analyzed from a qualitative point of view (perceptual) and from a quantitative point of view (number of operations, execution time), considering different ways to convolve an image with a kernel (a step required in some of the algorithms). Source code: for all the reviewed algorithms, an open-source C implementation is provided, which can be downloaded from the IPOL web page of this article. An online demonstration is also available, where the user can test and reproduce our results."} {"_id": "bcbf261b0c98b563b842313d02990e386cad0d24", "title": "An Analysis of the Accuracy of Bluetooth Low Energy for Indoor Positioning Applications", "text": "This study investigated the impact of Bluetooth Low Energy devices in advertising/beaconing mode on fingerprint-based indoor positioning schemes. Early experimentation demonstrated that the low bandwidth of BLE signals compared to WiFi is the cause of significant measurement error when coupled with the use of three BLE advertising channels. The physics underlying this behaviour is verified in simulation. A multipath mitigation scheme is proposed and tested. It is determined that the optimal positioning performance is provided by 10 Hz beaconing and a 1-second multipath mitigation processing window. It is determined that a steady increase in positioning performance with fingerprint size occurs up to 7 \u00b1 1 beacons; above this there is no clear benefit to extra beacon coverage."} {"_id": "2d453783a940b8ebdae7cf5efa1cc6949d04bf56", "title": "Internet of Things: Vision, Applications and Challenges", "text": "The form of communication that we see now is either human-human or human-device, but the Internet of Things (IoT) promises a great future for the internet where the type of communication is machine-machine (M2M). It aims to unify everything in our world under a common infrastructure, giving us not only control of things around us, but also keeping us informed of the state of the things. This paper aims to provide a comprehensive overview of the IoT scenario and reviews its enabling technologies and the sensor networks. It also describes a six-layered architecture of IoT and points out the related key challenges. This manuscript will give new researchers who want to do research in the field of the Internet of Things a good comprehension of the area, and will facilitate efficient knowledge accumulation."} {"_id": "18eadfc4a6bcffd6f1ca1d1534a54a3848442d46", "title": "The architecture of virtual machines", "text": "A virtual machine can support individual processes or a complete system depending on the abstraction level where virtualization occurs. Some VMs support flexible hardware usage and software isolation, while others translate from one instruction set to another. Virtualizing a system or component\u2014such as a processor, memory, or an I/O device\u2014at a given abstraction level maps its interface and visible resources onto the interface and resources of an underlying, possibly different, real system. Consequently, the real system appears as a different virtual system or even as multiple virtual systems. Interjecting virtualizing software between abstraction layers near the HW/SW interface forms a virtual machine that allows otherwise incompatible subsystems to work together. 
Further, replication by virtualization enables more flexible and efficient use of hardware resources."} {"_id": "5340844c3768aa6f04ef0784061d4070f1166fe6", "title": "Parasitic-Aware Common-Centroid FinFET Placement and Routing for Current-Ratio Matching", "text": "The FinFET technology is regarded as a better alternative for modern high-performance and low-power integrated-circuit design due to more effective channel control and lower power consumption. However, the gate-misalignment problem resulting from process variation and the parasitic resistance resulting from interconnecting wires based on the FinFET technology become even more severe compared with the conventional planar CMOS technology. Such gate misalignment and unwanted parasitic resistance may increase the threshold voltage and decrease the drain current of transistors. When applying the FinFET technology to analog circuit design, the variation of drain currents can destroy current-ratio matching among transistors and degrade circuit performance. In this article, we present the first FinFET placement and routing algorithms for layout generation of a common-centroid FinFET array to precisely match the current ratios among transistors. Experimental results show that the proposed matching-driven FinFET placement and routing algorithms can obtain the best current-ratio matching compared with the state-of-the-art common-centroid placer."} {"_id": "c596f88ccba5b7d5276ac6a9b68972fd7d14d959", "title": "A Domain Model for the Internet of Things", "text": "By bringing together the physical world of real objects with the virtual world of IT systems, the Internet of Things has the potential to significantly change both the enterprise world as well as society. However, the term is very much hyped and understood differently by different communities, especially because IoT is not a technology as such but represents the convergence of heterogeneous\u2014often new\u2014technologies pertaining to different engineering domains. What is needed in order to come to a common understanding is a domain model for the Internet of Things, defining the main concepts and their relationships, and serving as a common lexicon and taxonomy and thus as a basis for further scientific discourse and development of the Internet of Things. As we show, having such a domain model is also helpful in the design of concrete IoT system architectures, as it provides a template and thus structures the analysis of use cases."} {"_id": "5a9f4dc3e5d7c70d58c9512d7193d079c3331273", "title": "3D People Tracking with Gaussian Process Dynamical Models", "text": "We advocate the use of Gaussian Process Dynamical Models (GPDMs) for learning human pose and motion priors for 3D people tracking. A GPDM provides a low-dimensional embedding of human motion data, with a density function that gives higher probability to poses and motions close to the training data. With Bayesian model averaging a GPDM can be learned from relatively small amounts of data, and it generalizes gracefully to motions outside the training set. Here we modify the GPDM to permit learning from motions with significant stylistic variation. The resulting priors are effective for tracking a range of human walking styles, despite weak and noisy image measurements and significant occlusions."} {"_id": "abcc6724bc625f8f710a2d455e9ac0577d84eea9", "title": "Can We Predict a Riot? 
Disruptive Event Detection Using Twitter", "text": "In recent years, there has been increased interest in real-world event detection using publicly accessible data made available through Internet technology such as Twitter, Facebook, and YouTube. In these highly interactive systems, the general public are able to post real-time reactions to \u201creal world\u201d events, thereby acting as social sensors of terrestrial activity. Automatically detecting and categorizing events, particularly small-scale incidents, using streamed data is a non-trivial task but would be of high value to public safety organisations such as local police, who need to respond accordingly. To address this challenge, we present an end-to-end integrated event detection framework that comprises five main components: data collection, pre-processing, classification, online clustering, and summarization. The integration between classification and clustering enables events to be detected, as well as related smaller-scale \u201cdisruptive events,\u201d smaller incidents that threaten social safety and security or could disrupt social order. We present an evaluation of the effectiveness of detecting events using a variety of features derived from Twitter posts, namely temporal, spatial, and textual content. We evaluate our framework on a large-scale, real-world dataset from Twitter. Furthermore, we apply our event detection system to a large corpus of tweets posted during the August 2011 riots in England. We use ground-truth data based on intelligence gathered by the London Metropolitan Police Service, which provides a record of actual terrestrial events and incidents during the riots, and show that our system can perform as well as terrestrial sources, and even better in some cases."} {"_id": "9a67ef24bdbed0776d1bb0aa164c09cc029bbdd5", "title": "An Improved Algorithm for Incremental Induction of Decision Trees", "text": "This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called \u2018slewing\u2019 is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree."} {"_id": "7e0e6e0cf37ce4f4dde8c940ee4ce0fdb7b28656", "title": "Sentiment Analysis and Summarization of Twitter Data", "text": "Sentiment Analysis (SA) and summarization have recently become the focus of many researchers, because analysis of online text is beneficial and in demand in many different applications. One such application is product-based sentiment summarization of multi-documents with the purpose of informing users about pros and cons of various products. This paper introduces a novel solution to target-oriented (i.e. aspect-based) sentiment summarization and SA of short informal texts with a main focus on Twitter posts known as \"tweets\". We compare different algorithms and methods for SA polarity detection and sentiment summarization. We show that our hybrid polarity detection system not only outperforms the unigram state-of-the-art baseline, but can also be an advantage over other methods when used as a part of a sentiment summarization system. 
Additionally, we illustrate that our SA and summarization system exhibits high performance with various useful functionalities and features."} {"_id": "6623033437816e8a841738393dcdbaa7ab9e2fa5", "title": "Data Representation Based on Interval-Sets for Anomaly Detection in Time Series", "text": "Anomaly detection in time series is a popular topic with a variety of applications, and it has achieved a wealth of results. However, there are many cases of missed anomalies and increased false alarms in most of the existing works. Inspired by the concept of interval-sets, this paper proposes an anomaly detection algorithm based on fuzzy interval-sets and tries to detect the potential value of the time series from a new perspective. In the proposed algorithm, a time series will be divided into several subsequences. Each subsequence is regarded as an interval-set depending on its value space and the boundary of the subsequence. The similarity measurements between interval sets adopt interval operations and point probability distributions of the interval bounds. In addition, an anomaly score is defined based on similarity results. The experimental results on synthetic and real data sets indicate that the proposed algorithm has better discriminative performance than the piecewise aggregate approximation method and reduces the false alarm rate significantly."} {"_id": "22c205caf66afc33814419b9006e4a7e704d937e", "title": "Bridging the divide in financial market forecasting: machine learners vs. financial economists", "text": "Financial time series forecasting is a popular application of machine learning methods. Previous studies report that advanced forecasting methods predict price changes in financial markets with high accuracy and that profit can be made trading on these predictions. However, financial economists point to the informational efficiency of financial markets, which questions price predictability and opportunities for profitable trading. The objective of the paper is to resolve this contradiction. To this end, we undertake an extensive forecasting simulation, based on data from thirty-four financial indices over six years. These simulations confirm that the best machine learning methods produce more accurate forecasts than the best econometric methods. We also examine the methodological factors that impact the predictive accuracy of machine learning forecasting experiments. The results suggest that the predictability of a financial market and the feasibility of profitable model-based trading are significantly influenced by the maturity of the market, the forecasting method employed, the horizon for which it generates predictions and the methodology used to assess the model and simulate model-based trading. We also find evidence against the informational value of indicators from the field of technical analysis. Overall, we confirm that advanced forecasting methods can be used to predict price changes in some financial markets and we discuss whether these results question the prevailing view in the financial economics literature that financial markets are efficient."} {"_id": "6694e272267d3fa2b05e330a6ca43f48494af2e9", "title": "Algorithms for asymptotic extrapolation", "text": "Consider a power series f \u2208 R[[z]], which is obtained by a precise mathematical construction. For instance, f might be the solution to some differential or functional initial value problem or the diagonal of the solution to a partial differential equation. 
In cases when no suitable method is available beforehand for determining the asymptotics of the coefficients f_n, but when many such coefficients can be computed with high accuracy, it would be useful if a plausible asymptotic expansion for f_n could be guessed automatically. In this paper, we will present a general scheme for the design of such \u201casymptotic extrapolation algorithms\u201d. Roughly speaking, using discrete differentiation and techniques from automatic asymptotics, we strip off the terms of the asymptotic expansion one by one. The knowledge of more terms of the asymptotic expansion will then allow us to approximate the coefficients in the expansion with high accuracy."} {"_id": "9cb3bfd45bd0dbf4ffd328c3577a989fcb82ca07", "title": "Knowledge Engineering with Markov Logic Networks : A Review", "text": "Within the realm of statistical relational knowledge representation formalisms, Markov logic is perhaps one of the most flexible and general languages, for it generalises both first-order logic (for finite domains) and probabilistic graphical models. Knowledge engineering with Markov logic is, however, not a straightforward task. In particular, modelling approaches that are too firmly rooted in the principles of logic often tend to produce unexpected results in practice. In this paper, I collect a number of issues that are relevant to knowledge engineering practice: I describe the fundamental semantics of Markov logic networks and explain how simple probabilistic properties can be represented. Furthermore, I discuss fallacious modelling assumptions and summarise conditions under which generalisation across domains may fail. As a collection of fundamental insights, the paper is primarily directed at knowledge engineers who are new to Markov logic."} {"_id": "588e37b4d7ffcd245d819eafe079f2b92ac9e20d", "title": "Effective Crowd Annotation for Relation Extraction", "text": "Can crowdsourced annotation of training data boost performance for relation extraction over methods based solely on distant supervision? While crowdsourcing has been shown effective for many NLP tasks, previous researchers found only minimal improvement when applying the method to relation extraction. This paper demonstrates that a much larger boost is possible, e.g., raising F1 from 0.40 to 0.60. Furthermore, the gains are due to a simple, generalizable technique, Gated Instruction, which combines an interactive tutorial, feedback to correct errors during training, and improved screening."} {"_id": "8a0d391f53bf7566552e79835916fc1ea49900cd", "title": "A Hardware Framework for Yield and Reliability Enhancement in Chip Multiprocessors", "text": "Device reliability and manufacturability have emerged as dominant concerns in end-of-road CMOS devices. An increasing number of hardware failures are attributed to manufacturability or reliability problems. Maintaining an acceptable manufacturing yield for chips containing tens of billions of transistors with wide variations in device parameters has been identified as a great challenge. Additionally, today\u2019s nanometer scale devices suffer from accelerated aging effects because of the extreme operating temperature and electric fields they are subjected to. Unless addressed in design, aging-related defects can significantly reduce the lifetime of a product. In this article, we investigate a micro-architectural scheme for improving yield and reliability of homogeneous chip multiprocessors (CMPs). 
The proposed solution involves a hardware framework that enables us to utilize the redundancies inherent in a multicore system to keep the system operational in the face of partial failures. A micro-architectural modification allows a faulty core in a CMP to use another core\u2019s resources to service any instruction that the former cannot execute correctly by itself. This service improves yield and reliability but may cause loss of performance. The target platforms for quantitative evaluation of performance under degradation are dual-core and quad-core chip multiprocessors with one or more cores sustaining partial failure. Simulation studies indicate that when a large, high-latency, and sparingly used unit such as a floating-point unit fails in a core, correct execution may be sustained through outsourcing with at most a 16% impact on performance for a floating-point intensive application. For applications with moderate floating-point load, the degradation is insignificant. The performance impact may be mitigated even further by judicious selection of the cores to commandeer depending on the current load on each of the candidate cores. The area overhead is also negligible due to resource reuse."} {"_id": "dd9d92499cead9292bdebc5135361a2dbc70c169", "title": "Data processing and calibration of a cross-pattern stripe projector", "text": "Dense surface acquisition is one of the most challenging tasks for optical 3-D measurement systems in applications such as inspection of industrial parts, reverse engineering, digitization of virtual reality models and robot guidance. In order to achieve high accuracy we need good hardware equipment, as well as sophisticated data processing methods. Since measurements should be feasible under real world conditions, accuracy, reliability and consistency are a major issue. Based on the experiences with a previous system, a detailed analysis of the performance was carried out, leading to a new hardware setup. On the software side, we improved our calibration procedure and replaced the phase shift technique previously used by a new processing scheme which we call line shift processing. This paper describes our new approach. Results are presented and compared to results derived from the previous system."} {"_id": "6418b3858a4764c4c35f32da8bc2c24284b41d47", "title": "Anomaly network traffic detection algorithm based on information entropy measurement under the cloud computing environment", "text": "In recent years, abnormal activities in networks have become more and more common, greatly threatening network security. Hence, it is of great importance to collect data that indicate the running state of the network and to identify anomalous phenomena in time. In this paper, we propose a novel anomaly network traffic detection algorithm under the cloud computing environment. Firstly, the framework of the anomaly network traffic detection system is illustrated, and six types of network traffic features are considered in this work, that is, (1) number of source IP addresses, (2) number of source port numbers, (3) number of destination IP addresses, (4) number of destination port numbers, (5) number of packet types, and (6) number of network packets. Secondly, we propose a novel hybrid information entropy and SVM model to tackle the proposed problem by normalizing the values of network features and exploiting an SVM to detect anomalous network behaviors.
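One plausible, hypothetical realization of the entropy-plus-SVM pipeline just described, using per-window Shannon entropies of the listed fields and scikit-learn's one-class SVM; the paper's exact hybrid model may differ, and `windows` (a list of packet-dict lists per time window) is an assumed input:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import OneClassSVM

def shannon_entropy(values):
    """Entropy of the empirical distribution of e.g. source IPs in one window."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def window_features(packets):
    """Entropy of each header field in the window, plus the packet count."""
    keys = ["src_ip", "src_port", "dst_ip", "dst_port", "ptype"]
    feats = [shannon_entropy([pkt[k] for pkt in packets]) for k in keys]
    feats.append(len(packets))
    return feats

# One feature row per time window; fit on (mostly) benign traffic.
X = np.array([window_features(w) for w in windows])   # `windows` assumed given
X = MinMaxScaler().fit_transform(X)                   # normalize feature values
detector = OneClassSVM(nu=0.05, kernel="rbf").fit(X)
labels = detector.predict(X)                          # -1 flags anomalous windows
```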
Finally, experimental results demonstrate that the proposed algorithm detects anomalous network traffic with high accuracy and scales to large datasets."} {"_id": "c3f2d101b616d82d07ca2cc4cb8ed0cb53fde21f", "title": "DeepPointSet : A Point Set Generation Network for 3 D Object Reconstruction from a Single Image Supplementary Material", "text": "We conducted a human study to provide a reference for the CD and EMD values reported on the rendered dataset. We provided the human subject with a GUI tool to create a triangular mesh from the image. The tool (see Fig 1) enables the user to edit the mesh in 3D and to align the modeled object back to the input image. In total 16 models are created from the input images of our validation set. N = 1024 points are sampled from each model."} {"_id": "2824e8f94cffe6d4969e96248e67aa228ac72278", "title": "Machine Learning for Real-Time Prediction of Damaging Straight-Line Convective Wind", "text": "Thunderstorms in the United States cause over 100 deaths and $10 billion (U.S. dollars) in damage per year, much of which is attributable to straight-line (nontornadic) wind. This paper describes a machine-learning system that forecasts the probability of damaging straight-line wind (\u226550 kt or 25.7 m/s) for each storm cell in the continental United States, at distances up to 10 km outside the storm cell and lead times up to 90 min. Predictors are based on radar scans of the storm cell, storm motion, storm shape, and soundings of the near-storm environment. Verification data come from weather stations and quality-controlled storm reports. The system performs very well on independent testing data. The area under the receiver operating characteristic (ROC) curve ranges from 0.88 to 0.95, the critical success index (CSI) ranges from 0.27 to 0.91, and the Brier skill score (BSS) ranges from 0.19 to 0.65 (values above 0 are better than climatology). For all three scores, the best value occurs for the smallest distance (inside storm cell) and/or lead time (0\u201315 min), while the worst value occurs for the greatest distance (5\u201310 km outside storm cell) and/or lead time (60\u201390 min). The system was deployed during the 2017 Hazardous Weather Testbed."} {"_id": "07f77ad9c58b21588a9c6fa79ca7917dd58cca98", "title": "Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-scale Convolutional Architecture", "text": "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks."} {"_id": "1abf6491d1b0f6e8af137869a01843931996a562", "title": "ParseNet: Looking Wider to See Better", "text": "We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN [19]).
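The ParseNet abstract's "average feature for a layer to augment the features at each location" can be sketched directly; this minimal numpy version omits the learned per-channel scaling that the paper trains alongside the normalization:

```python
import numpy as np

def add_global_context(feat, eps=1e-12):
    """ParseNet-style augmentation (sketch): L2-normalize per-location features
    and the spatially pooled global feature, then concatenate the global vector
    back at every location. feat: (C, H, W)."""
    c, h, w = feat.shape
    g = feat.mean(axis=(1, 2))                              # global average pool
    g = g / (np.linalg.norm(g) + eps)                       # normalize global vector
    local = feat / (np.linalg.norm(feat, axis=0, keepdims=True) + eps)
    g_map = np.broadcast_to(g[:, None, None], (c, h, w))    # "unpool" to full size
    return np.concatenate([local, g_map], axis=0)           # (2C, H, W)

print(add_global_context(np.random.rand(64, 16, 16)).shape)   # (128, 16, 16)
```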
When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at https://github.com/weiliu89/caffe/tree/fcn."} {"_id": "2fdbd2cdb5b028e77b7640150496ae13eab05f30", "title": "Coupled depth learning", "text": "In this paper we propose a method for estimating depth from a single image using a coarse to fine approach. We argue that modeling the fine depth details is easier after a coarse depth map has been computed. We express a global (coarse) depth map of an image as a linear combination of a depth basis learned from training examples. The depth basis captures spatial and statistical regularities and reduces the problem of global depth estimation to the task of predicting the input-specific coefficients in the linear combination. This is formulated as a regression problem from a holistic representation of the image. Crucially, the depth basis and the regression function are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct regression of depth pixel values or approaches learning the depth basis disjointly from the regression function. The global depth estimate is then used as guidance by a local refinement method that introduces depth details that were not captured at the global level. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art at a considerably lower computational cost for both training and testing."} {"_id": "3fe41cfefb187e83ad0a5417ab01c212f318577f", "title": "Shape, Illumination, and Reflectance from Shading", "text": "A fundamental problem in computer vision is that of inferring the intrinsic, 3D structure of the world from flat, 2D images of that world. Traditional methods for recovering scene properties such as shape, reflectance, or illumination rely on multiple observations of the same scene to overconstrain the problem. Recovering these same properties from a single image seems almost impossible in comparison: there are an infinite number of shapes, paints, and lights that exactly reproduce a single image. However, certain explanations are more likely than others: surfaces tend to be smooth, paint tends to be uniform, and illumination tends to be natural. We therefore pose this problem as one of statistical inference, and define an optimization problem that searches for the most likely explanation of a single image. Our technique can be viewed as a superset of several classic computer vision problems (shape-from-shading, intrinsic images, color constancy, illumination estimation, etc.) and outperforms all previous solutions to those constituent problems."} {"_id": "420c46d7cafcb841309f02ad04cf51cb1f190a48", "title": "Multi-Scale Context Aggregation by Dilated Convolutions", "text": "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction problems such as semantic segmentation are structurally different from image classification.
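For the coupled depth learning abstract above: the paper stresses that the basis and regressor are jointly optimized; the disjoint baseline it improves upon is easy to sketch and shows the decomposition itself. PCA for the depth basis and ridge regression for the coefficients are illustrative stand-ins, not the paper's components:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# D: training depth maps flattened to rows; F: holistic image features per image.
D = np.random.rand(500, 64 * 48)          # stand-ins for real training data
F = np.random.rand(500, 512)

basis = PCA(n_components=48).fit(D)       # depth basis (learned disjointly here)
coef = basis.transform(D)                 # per-image coefficients in that basis
reg = Ridge(alpha=1.0).fit(F, coef)       # regression: features -> coefficients

f_new = np.random.rand(1, 512)            # features of a test image
coarse_depth = basis.inverse_transform(reg.predict(f_new)).reshape(64, 48)
```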
In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multiscale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy."} {"_id": "e820785f476cf37122c58e5d6b0e203af479b2df", "title": "An overview on crime prediction methods", "text": "In the recent past, crime analyses have been required to reveal the complexities in crime datasets. This process helps the parties involved in law enforcement to arrest offenders and to direct crime prevention strategies. The ability to predict future crimes based on location, pattern, and time can serve as a valuable source of knowledge from either a strategic or a tactical perspective. Nevertheless, predicting future crime accurately and with good performance is a challenging task because of the increasing number of crimes today. Therefore, crime prediction methods are important for identifying future crime and reducing its incidence. Several researchers have conducted studies to predict crime based on particular inputs, and prediction models have been built with a variety of methods such as support vector machines, multivariate time series, and artificial neural networks. However, their findings still show limitations in providing accurate predictions of crime locations. A large number of research papers on this topic have already been published. Thus, in this paper, we thoroughly review them and summarize their outcomes. Our objective is to identify current implementations of crime prediction methods and the possibility of enhancing them for future needs."} {"_id": "f7c8f4897d24769e019ae792e9f7f492aa6e8977", "title": "Organizational Use of Big Data and Competitive Advantage - Exploration of Antecedents", "text": "The use of Big Data can hold considerable benefits for organizations, but there is as yet little research on the impacts of Big Data use on key variables such as competitive advantage. The antecedents to Big Data use aiming at competitive advantage are also poorly understood. Drawing from prior research on the ICT-competitiveness link, this research examined the relationship between organizational use of Big Data and competitive advantage \u2013 as well as certain hypothesized antecedents to this relationship. Data was collected from a nationwide survey in Japan which was addressed to Big Data users in private companies. The results show that competitiveness is influenced by the systematic and extensive use of Big Data in the organization and by top management's understanding of Big Data use. Antecedents of systematic and extensive Big Data use were found to be the cross-functional activities of analysts, proactive attitude of users, secured data management, and human resources.
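A small numerical check of the dilated-convolutions claim above (the receptive field grows exponentially with depth while resolution is preserved); the 1-D case is enough to see the effect:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded 1-D convolution with gaps of `dilation` between taps."""
    k = len(w)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    return np.array([sum(w[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

# Stacking 3-tap layers with dilations 1, 2, 4, 8 doubles the receptive field
# at every layer (3, 7, 15, 31) with no pooling and no loss of resolution.
x = np.zeros(64); x[32] = 1.0          # unit impulse
w = np.ones(3)
for d in (1, 2, 4, 8):
    x = dilated_conv1d(x, w, d)
print(int(np.count_nonzero(x)))        # 31 taps: receptive field 2**(L+1) - 1
```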
Future research should strive to verify these respondent opinions using longitudinal approaches based on objective financial data."} {"_id": "6bc785deeb35643d865157738548149e393f9dd3", "title": "Back to the Future: Current-Mode Processor in the Era of Deeply Scaled CMOS", "text": "This paper explores the use of MOS current-mode logic (MCML) as a fast and low noise alternative to static CMOS circuits in microprocessors, thereby improving the performance, energy efficiency, and signal integrity of future computer systems. The power and ground noise generated by an MCML circuit is typically 10-100\u00d7 smaller than the noise generated by a static CMOS circuit. Unlike static CMOS, whose dominant dynamic power is proportional to the frequency, MCML circuits dissipate a constant power independent of clock frequency. Although these traits make MCML highly energy efficient when operating at high speeds, the constant static power of MCML poses a challenge for a microarchitecture that operates at a modest clock rate and with a low activity factor. To address this challenge, a single-core microarchitecture for MCML is explored that exploits the C-slow retiming technique, and operates at a high frequency with low complexity to save energy. This design principle contrasts with the contemporary multicore design paradigm for static CMOS that relies on a large number of gates operating in parallel at modest speeds. The proposed architecture generates 10-40\u00d7 lower power and ground noise, and operates within 13% of the performance (i.e., 1/ExecutionTime) of a conventional, eight-core static CMOS processor while exhibiting 1.6\u00d7 lower energy and 9% less area. Moreover, the operation of an MCML processor is robust under both systematic and random variations in transistor threshold voltage and effective channel length."} {"_id": "09696f6d3d623445878a2745ae276fdce3ce214b", "title": "Marine Litter Distribution and Density in European Seas, from the Shelves to Deep Basins", "text": "Anthropogenic litter is present in all marine habitats, from beaches to the most remote points in the oceans. On the seafloor, marine litter, particularly plastic, can accumulate in high densities with deleterious consequences for its inhabitants. Yet, because of the high cost involved with sampling the seafloor, no large-scale assessment of distribution patterns had been available to date. Here, we present data on litter distribution and density collected during 588 video and trawl surveys across 32 sites in European waters. We found litter to be present in the deepest areas and at locations as remote from land as the Charlie-Gibbs Fracture Zone across the Mid-Atlantic Ridge. The highest litter density occurs in submarine canyons, whilst the lowest density can be found on continental shelves and on ocean ridges. Plastic was the most prevalent litter item found on the seafloor. Litter from fishing activities (derelict fishing lines and nets) was particularly common on seamounts, banks, mounds and ocean ridges. Our results highlight the extent of the problem and the need for action to prevent increasing accumulation of litter in marine environments."} {"_id": "606fec3f353ba72e3cb9cd27e1a0f3a3edfbcca8", "title": "The development of integrated career portal in university using agile methodology", "text": "A gap between the quality of university graduates and the demand of the industry sector is one of the challenges universities have to tackle; it slows down the absorption of university graduates into the job market.
One of the tools that can be used to bridge graduates and companies is an integrated career portal. This portal can be optimized to help industry find the employees they need from among university alumni. On the other hand, this tool can help graduates to get a job that suits their competencies. The portal can hopefully minimize the number of unemployed university graduates and accelerate their absorption into the job market. The development of this career portal will use agile methodology. A questionnaire was distributed to alumni and employers to collect information about what users need in the portal. The result of this research is a career portal that integrates business processes and social media."} {"_id": "85f01eaab9fe79aa901da13ef90a55f5178baf3e", "title": "Internet of Things security and forensics: Challenges and opportunities", "text": "The Internet of Things (IoT) envisions pervasive, connected, and smart nodes interacting autonomously while offering all sorts of services. Wide distribution, openness and relatively high processing power of IoT objects have made them an ideal target for cyber attacks. Moreover, as many IoT nodes collect and process private information, they are becoming a goldmine of data for malicious actors. Therefore, security, and specifically the ability to detect compromised nodes, together with collecting and preserving evidence of an attack or malicious activities, emerge as a priority in successful deployment of IoT networks. In this paper, we first introduce the existing major security and forensics challenges within the IoT domain and then briefly discuss the papers published in this special issue targeting the identified challenges."} {"_id": "2d2ae1590b23963c26b5338972ecbf7f1bdc8e13", "title": "Auto cropping for digital photographs", "text": "In this paper, we propose an effective approach to the nearly untouched problem, still photograph auto cropping, which is one of the important features to automatically enhance photographs. To obtain an optimal result, we first formulate auto cropping as an optimization problem by defining an energy function, which consists of three sub models: composition sub model, conservative sub model, and penalty sub model. Then, particle swarm optimization (PSO) is employed to obtain the optimal solution by maximizing the objective function. Experimental results and user studies over hundreds of photographs show that the proposed approach is effective and accurate in most cases, and can be used in many practical multimedia applications."} {"_id": "38e80bb1d747439550eac14453b7bc1f50436ff0", "title": "Amplified Boomerang Attacks Against Reduced-Round MARS and Serpent", "text": "We introduce a new cryptanalytic technique based on Wagner\u2019s boomerang and inside-out attacks. We first describe this new attack in terms of the original boomerang attack, and then demonstrate its use on reduced-round variants of the MARS core and Serpent. Our attack breaks eleven rounds of the MARS core with 2 chosen plaintexts, 2 memory, and 2 partial decryptions. Our attack breaks eight rounds of Serpent with 2 chosen plaintexts, 2 memory, and 2 partial decryptions."} {"_id": "6a9840bd5ed2d10dfaf233e7fbede05135761d81", "title": "The naturalness of reproduced high dynamic range images", "text": "The problem of visualizing high dynamic range images on devices with restricted dynamic range has recently gained a lot of interest in the computer graphics community.
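The auto-cropping abstract relies on a standard PSO loop to maximize its energy function; a minimal sketch follows, with a made-up two-term energy standing in for the paper's three sub-models (all parameter values here are illustrative):

```python
import numpy as np

def pso(energy, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: maximizes `energy` over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))                  # particle positions
    v = np.zeros((n, dim))                             # particle velocities
    pbest, pval = x.copy(), np.array([energy(p) for p in x])
    for _ in range(iters):
        g = pbest[pval.argmax()]                       # global best position
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([energy(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
    return pbest[pval.argmax()]

# Toy crop search: parameters (x0, y0, width, height) scored by a made-up energy.
score = lambda p: -(p[2] * p[3] - 0.4) ** 2 - 0.1 * (p[0] - 0.3) ** 2
print(pso(score, lo=np.array([0, 0, 0.2, 0.2]), hi=np.array([0.8, 0.8, 1, 1])))
```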
Various so-called tone mapping operators have been proposed to address this issue. The field of tone mapping assumes thorough knowledge of both the objective and subjective attributes of an image. However, no published analysis of such attributes exists so far. In this paper, we present an overview of image attributes which are used extensively in different tone mapping methods. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall quality measure which we call naturalness. We present results of the subjective psychophysical testing that we have performed to validate the proposed relationship scheme. Our effort sets the stage for well-founded quality comparisons between tone mapping operators. By providing good definitions of the different attributes, comparisons, be they user-driven or fully automatic, are made possible."} {"_id": "366ae3858593d5440b4a9bf39865eb7c4ddd0206", "title": "Real-Time Ray Tracing with CUDA", "text": "The graphics processors (GPUs) have recently emerged as a low-cost alternative for parallel programming. Since modern GPUs have great computational power as well as high memory bandwidth, running ray tracing on them has been an active field of research in computer graphics in recent years. Furthermore, the introduction of CUDA, a novel GPGPU architecture, has removed several limitations that traditional GPU-based ray tracing suffered. In this paper, an implementation of high performance CUDA ray tracing is demonstrated. We focus on the performance and show how our design choices in various optimizations lead to an implementation that outperforms previous works. For reasonably complex scenes with simple shading, our implementation achieves the performance of 30 to 43 million traced rays per second. Our implementation also includes the effects of recursive specular reflection and refraction, which were less discussed in previous GPU-based ray tracing works."} {"_id": "47b4ef22a8ca7b9d6e6089a99f5748193ad24acc", "title": "Are men really more \u2018 oriented \u2019 toward short-term mating than women ?", "text": "According to Sexual Strategies Theory (D.M. Buss and D.P. Schmitt 1993), both men and women possess psychological adaptations for short-term mating. However, men may possess three adaptations that make it seem as though they are generally more \u2018oriented\u2019 toward short-term mating than women: (1) Men possess greater desire for short-term sexual relationships than women; (2) Men prefer larger numbers of sexual partners over time than women; and (3) Men require less time before consenting to sex than women. We review a wide body of psychological theory and evidence that corroborates the presence of these adaptations in men\u2019s short-term sexual psychology. We also correct some recurring misinterpretations of Sexual Strategies Theory, such as the mistaken notion that women are designed solely for long-term mating. Finally, we document how the observed sex differences in short-term mating complement some feminist theories and refute competing evolutionary theories of human sexuality.
"} {"_id": "32791996c1040b9dcc34e71a05d72e5c649eeff9", "title": "Automatic Detection of Respiration Rate From Ambulatory Single-Lead ECG", "text": "Ambulatory electrocardiography is increasingly being used in clinical practice to detect abnormal electrical behavior of the heart during ordinary daily activities. The utility of this monitoring can be improved by deriving respiration, which previously has been based on overnight apnea studies where patients are stationary, or on the use of multilead ECG systems for stress testing. We compared six respiratory measures derived from a single-lead portable ECG monitor with simultaneously measured respiration air flow obtained from an ambulatory nasal cannula respiratory monitor. Ten controlled 1-h recordings were performed covering activities of daily living (lying, sitting, standing, walking, jogging, running, and stair climbing) and six overnight studies. The best method was an average of a 0.2-0.8 Hz bandpass filter and an RR technique based on lengthening and shortening of the RR interval. Mean error rates with the reference gold standard were \u00b14 breaths per minute (bpm) (all activities), \u00b12 bpm (lying and sitting), and \u00b11 breath per minute (overnight studies). Statistically similar results were obtained using heart rate information alone (RR technique) compared to the best technique derived from the full ECG waveform, which simplifies data collection procedures. The study shows that respiration can be derived under dynamic activities from a single-lead ECG without significant differences from traditional methods."} {"_id": "a1bdfa8ec779fbf6aa780d3c63cabac2d5887b4a", "title": "Location and Time Aware Social Collaborative Retrieval for New Successive Point-of-Interest Recommendation", "text": "In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. While carefully designed methods are proposed to solve this problem, they ignore the essence of the task, which involves retrieval and recommendation problems simultaneously, and fail to employ the social relations or temporal information adequately to improve the results.\n In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method."} {"_id": "7eac1eb85b919667c785b9ac4085d8ca68998d20", "title": "MOBILE LEARNING FOR EDUCATION : BENEFITS AND CHALLENGES", "text": "Education and training is the process by which the wisdom, knowledge and skills of one generation are passed on to the next.
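A sketch of the RR-interval half of the respiration method in the ECG abstract above (resample the RR series, band-limit it to 0.2-0.8 Hz, read off the dominant frequency); the resampling rate and filter order are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def resp_rate_from_rr(rpeaks_s, fs=4.0):
    """Estimate respiration rate (breaths/min) from R-peak times in seconds,
    using respiratory lengthening/shortening of the RR intervals."""
    rr = np.diff(rpeaks_s)                               # RR interval series
    t = rpeaks_s[1:]
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_u = np.interp(grid, t, rr)                        # uniform resampling
    b, a = butter(2, [0.2 / (fs / 2), 0.8 / (fs / 2)], btype="band")
    resp = filtfilt(b, a, rr_u - rr_u.mean())            # keep 0.2-0.8 Hz band
    spec = np.abs(np.fft.rfft(resp)) ** 2
    freqs = np.fft.rfftfreq(len(resp), 1.0 / fs)
    return 60.0 * freqs[spec.argmax()]

# Toy check: ~60 bpm heart rate, RR modulated at 0.25 Hz (15 breaths/min).
beats = np.cumsum(1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * np.arange(240)))
print(round(resp_rate_from_rr(beats)))   # ~15
```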
Today there are two forms of education and training: conventional education and distance education. Mobile learning, or \"M-Learning\", offers modern ways to support the learning process through mobile devices, such as handheld and tablet computers, MP3 players, smartphones and mobile phones. This document introduces the subject of mobile learning for education purposes. It examines what impact mobile devices have had on teaching and learning practices and goes on to look at the opportunities presented by the use of digital media on mobile devices. The main purpose of this paper is to describe the current state of mobile learning, its benefits, challenges, and barriers to supporting teaching and learning. Data for this paper were collected through bibliographic and internet research from January to March 2013. Four key areas are addressed in this paper: 1. An analysis of mobile learning. 2. Differentiating e-learning from mobile learning. 3. Value and benefits of mobile learning. 4. Challenges and barriers of mobile learning. The study showed that M-Learning, as a form of distance learning, brings great benefits to society, including: training when it is needed; training at any time; training at any place; learner-centred content; avoidance of re-entry-to-work problems; training for taxpayers and those fully occupied during university lectures and sessions at training centres; and the industrialisation of teaching and learning. Notebooks, mobile tablets, the iPod touch, and iPads are very popular devices for mobile learning because of their cost and the availability of apps."} {"_id": "6ca0343f94fdd4efbaddde8e4040558bcc29e8ed", "title": "Combining Naive Bayes and Decision Tree for Adaptive Intrusion Detection", "text": "In this paper, a new learning algorithm for adaptive network intrusion detection using a naive Bayesian classifier and decision tree is presented, which performs balanced detection and keeps false positives at an acceptable level for different types of network attacks, and eliminates redundant attributes as well as contradictory examples from training data that make the detection model complex. The proposed algorithm also addresses some difficulties of data mining such as handling continuous attributes, dealing with missing attribute values, and reducing noise in training data. Due to the large volumes of security audit data as well as the complex and dynamic properties of intrusion behaviours, several data mining-based intrusion detection techniques have been applied to network-based traffic data and host-based data in the last decades. However, various issues remain to be examined in current intrusion detection systems (IDS). We tested the performance of our proposed algorithm against existing learning algorithms on the KDD99 benchmark intrusion detection dataset. The experimental results prove that the proposed algorithm achieved high detection rates (DR) and significantly reduced false positives (FP) for different types of network intrusions using limited computational resources."} {"_id": "8f67ff7d7a4fc72d87f82ae340dba4365b7ea664", "title": "Learning Filter Banks Using Deep Learning For Acoustic Signals", "text": "Designing appropriate features for acoustic event recognition tasks is an active field of research. Expressive features should both improve the performance of the tasks and also be interpretable.
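The adaptive-intrusion-detection abstract combines a naive Bayesian classifier with a decision tree, but does not spell out the combination above; the following is one simple hypothetical hybrid (naive Bayes posteriors fed to the tree as extra features), a sketch for illustration rather than the paper's algorithm:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def fit_hybrid(X, y):
    """Illustrative NB+tree hybrid: append naive Bayes class posteriors to the
    feature vector before training the tree (not the paper's exact scheme)."""
    nb = GaussianNB().fit(X, y)
    tree = DecisionTreeClassifier(max_depth=8, random_state=0)
    tree.fit(np.hstack([X, nb.predict_proba(X)]), y)
    return nb, tree

def predict_hybrid(nb, tree, X):
    return tree.predict(np.hstack([X, nb.predict_proba(X)]))

# Toy usage with synthetic "connection record" features:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6)); y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
nb, tree = fit_hybrid(X[:800], y[:800])
print((predict_hybrid(nb, tree, X[800:]) == y[800:]).mean())
```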
Currently, heuristically designed features based on domain knowledge require tremendous hand-crafting effort, while features extracted through deep networks are difficult for humans to interpret. In this work, we explore the experience guided learning method for designing acoustic features. This is a novel hybrid approach combining both domain knowledge and purely data driven feature designing. Based on the procedure of log Mel-filter banks, we design a filter bank learning layer. We concatenate this layer with a convolutional neural network (CNN) model. After training the network, the weights of the filter bank learning layer are extracted to facilitate the design of acoustic features. We smooth the trained weights of the learning layer and re-initialize the filter bank learning layer with them as an audio feature extractor. For the environmental sound recognition task based on the Urbansound8K dataset [1], the experience guided learning leads to a 2% accuracy improvement compared with the fixed feature extractors (the log Mel-filter bank). The shapes of the new filter banks are visualized and explained to demonstrate the effectiveness of the feature design process."} {"_id": "d691128d367d9bd1f35bd0dd15887c073b72817a", "title": "Improved Algorithm for Intrusion Detection Using Genetic Algorithm and SNORT", "text": "Intrusion Detection Systems (IDSs) detect network attacks by scanning each data packet. Various approaches have been tried to build better IDSs, though traditional IDS approaches are insufficient at automating the detection of attacks as attackers evolve. This research focuses on the Genetic Algorithm approach to intrusion detection. It shows how traditional SNORT and a Genetic Algorithm are combined so that the number of SNORT rules is decreased. Experimental results show that an IDS using the Genetic Algorithm and SNORT can decrease detection time, CPU utilization and memory utilization. Keywords\u2014Genetic Algorithm, SNORT, Fitness Function."} {"_id": "b86c2baf8bbff63c79a47797ac82778b75b72e03", "title": "Probabilistic Semantics and Pragmatics: Uncertainty in Language and Thought", "text": "Language is used to communicate ideas. Ideas are mental tools for coping with a complex and uncertain world. Thus human conceptual structures should be key to language meaning, and probability\u2014the mathematics of uncertainty\u2014should be indispensable for describing both language and thought. Indeed, probabilistic models are enormously useful in modeling human cognition (Tenenbaum et al., 2011) and aspects of natural language (Bod et al., 2003; Chater et al., 2006). With a few early exceptions (e.g. Adams, 1975; Cohen, 1999b), probabilistic tools have only recently been used in natural language semantics and pragmatics. In this chapter we synthesize several of these modeling advances, exploring a formal model of interpretation grounded, via lexical semantics and pragmatic inference, in conceptual structure. Flexible human cognition is derived in large part from our ability to imagine possibilities (or possible worlds). A rich set of concepts, intuitive theories, and other mental representations support imagining and reasoning about possible worlds\u2014together we will call these the conceptual lexicon. We posit that this collection of concepts also forms the set of primitive elements available for lexical semantics: word meanings can be built from the pieces of conceptual structure.
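A sketch of the filter-bank side of the acoustic-feature approach above: the triangular Mel matrix that would initialize the trainable filter-bank layer, applied to a power spectrogram to produce log filter-bank features. The CNN training and the weight smoothing described in the abstract are omitted, and the spectrogram below is a random stand-in:

```python
import numpy as np

def mel_filterbank(n_mels=40, n_fft=512, sr=16000):
    """Triangular Mel filters; in the paper's setup, a matrix like this would
    initialize the trainable filter-bank layer whose smoothed weights are
    later reused as the feature extractor."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft // 2 + 1) * mel_to_hz(mels) / (sr / 2)).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    return fb

power_spec = np.abs(np.random.rand(100, 257)) ** 2       # (frames, fft bins) stand-in
logmel = np.log(power_spec @ mel_filterbank().T + 1e-8)  # (frames, 40) features
```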
Larger semantic structures are then built from word meanings by composition, ultimately resulting in a sentence meaning which is a phrase in the \u201clanguage of thought\u201d provided by the conceptual lexicon. This expression is truth-functional in that it takes on a Boolean value for each imagined world, and it can thus be used as the basis for belief updating. However, the connection between cognition, semantics, and belief is not direct: because language must flexibly adapt to the context of communication, the connection between lexical representation and interpreted meaning is mediated by pragmatic inference."} {"_id": "efe61b96e7d3ca79c14b41f4918f234bec936bb1", "title": "Clinical and symptomatological reflections: the fascial system", "text": "Every body structure is wrapped in connective tissue, or fascia, creating a structural continuity that gives form and function to every tissue and organ. Currently, there is still little information on the functions and interactions between the fascial continuum and the body system; unfortunately, in the medical literature there are few texts explaining how fascial stasis or altered movement of the various connective layers can generate a clinical problem. Certainly, the fascia plays a significant role in conveying mechanical tension, in order to control an inflammatory environment. The fascial continuum is essential for transmitting muscle force, for correct motor coordination, and for preserving the organs in their site; the fascia is a vital instrument that enables the individual to communicate and live independently. This article considers what the literature offers on symptoms related to the fascial system, trying to connect the existing information on the continuity of the connective tissue and symptoms that are not always clearly defined. In our opinion, knowing and understanding this complex system of fascial layers is essential for the clinician and other health practitioners in finding the best treatment strategy for the patient."} {"_id": "e25b53d36bb1e0c00a13d2aa6c48dc320a400a67", "title": "Towards Online-Recognition with Deep Bidirectional LSTM Acoustic Models", "text": "Online-Recognition requires the acoustic model to provide posterior probabilities after a limited time delay given the online input audio data. This necessitates unidirectional modeling and the standard solution is to use unidirectional long short-term memory (LSTM) recurrent neural networks (RNN) or feedforward neural networks (FFNN). It is known that bidirectional LSTMs are more powerful and perform better than unidirectional LSTMs. To demonstrate the performance difference, we start by comparing several different bidirectional and unidirectional LSTM topologies. Furthermore, we apply a modification to bidirectional RNNs to enable online-recognition by moving a window over the input stream and performing one forwarding through the RNN on each window. Then, we combine the posteriors of each forwarding and we renormalize them.
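The windowing scheme just described can be sketched independently of the acoustic model; `forward` below stands in for one BLSTM forwarding per window (here a dummy), and overlapping per-window posteriors are summed and renormalized per frame. It assumes the stream is at least one window long:

```python
import numpy as np

def windowed_posteriors(frames, forward, win=64, hop=32, n_classes=40):
    """Combine per-window posteriors of a bidirectional model over a stream:
    sum the overlapping forwardings, then renormalize each frame."""
    T = len(frames)
    acc = np.zeros((T, n_classes))
    starts = list(range(0, T - win + 1, hop))
    if starts[-1] != T - win:
        starts.append(T - win)                         # cover the stream tail
    for s in starts:
        acc[s:s + win] += forward(frames[s:s + win])   # one forwarding per window
    return acc / acc.sum(axis=1, keepdims=True)        # renormalize to posteriors

# Dummy forwarding that returns uniform scores, just to show the plumbing.
dummy = lambda x: np.ones((len(x), 40))
print(windowed_posteriors(np.zeros(200), dummy).shape)   # (200, 40)
```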
We show in experiments that this online-enabled bidirectional LSTM performs as well as the offline bidirectional LSTM and much better than the unidirectional LSTM."} {"_id": "467024c6cc8fe73c64e501f8a12cdbafbf9561b0", "title": "A License Plate-Recognition Algorithm for Intelligent Transportation System Applications", "text": "In this paper, a new algorithm for vehicle license plate identification is proposed, on the basis of a novel adaptive image segmentation technique (sliding concentric windows) and connected component analysis in conjunction with a character recognition neural network. The algorithm was tested with 1334 natural-scene gray-level vehicle images of different backgrounds and ambient illumination. The camera focused on the plate, while the angle of view and the distance from the vehicle varied according to the experimental setup. The license plates were properly segmented in 1287 of the 1334 input images (96.5%). The optical character recognition system is a two-layer probabilistic neural network (PNN) with topology 108-180-36, whose performance for entire plate recognition reached 89.1%. The PNN is trained to identify alphanumeric characters from car license plates based on data obtained from algorithmic image processing. Combining the above two rates, the overall rate of success for the license-plate-recognition algorithm is 86.0%. A review of the related literature presented in this paper reveals that better performance (90% up to 95%) has been reported only when limitations on distance, angle of view, and illumination conditions are set, and background complexity is low"} {"_id": "0e9c1dc5415dafcbe41ef447121afc6835238870", "title": "Semantic Spaces: Measuring the Distance between Different Subspaces", "text": "Semantic Space models, which provide a numerical representation of words\u2019 meaning extracted from corpus of documents, have been formalized in terms of Hermitian operators over real valued Hilbert spaces by Bruza et al. [1]. The collapse of a word into a particular meaning has been investigated by applying the notion of quantum collapse of superpositional states [2]. While the semantic association between words in a Semantic Space can be computed by means of the Minkowski distance [3] or the cosine of the angle between the vector representation of each pair of words, a new procedure is needed in order to establish relations between two or more Semantic Spaces. We address the question: how can the distance between different Semantic Spaces be computed? By representing each Semantic Space as a subspace of a more general Hilbert space, the relationship between Semantic Spaces can be computed by means of the subspace distance. Such a distance needs to take into account the difference in dimensions between subspaces. The availability of a distance for comparing different Semantic Subspaces would enable a deeper understanding of the geometry of Semantic Spaces, which would possibly translate into better effectiveness in Information Retrieval tasks."} {"_id": "84cfcacc9f11372818d9f8cada04fef64ece87cf", "title": "Fracture detection in x-ray images through stacked random forests feature fusion", "text": "Bone fractures are among the most common traumas in musculoskeletal injuries. They are also frequently missed during the radiological examination. Thus, there is a need for assistive technologies for radiologists in this field.
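The "subspace distance" in the Semantic Spaces abstract is not fully specified there; a standard recipe consistent with it uses principal angles between column spans, which also handles subspaces of different dimensions. A sketch with random stand-ins for real word-vector data:

```python
import numpy as np

def subspace_distance(A, B):
    """Chordal distance between the column spans of A and B via principal
    angles (one plausible instantiation of a subspace distance, not
    necessarily the measure used in the paper)."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)   # cosines of principal angles
    s = np.clip(s, -1.0, 1.0)
    return float(np.sqrt(np.sum(1.0 - s ** 2)))      # sqrt(sum of sin^2(theta))

# Two semantic subspaces of different dimensions that share a 5-dim slice:
rng = np.random.default_rng(0)
A = rng.normal(size=(300, 10))                       # 10-dim subspace
B = np.hstack([A[:, :5], rng.normal(size=(300, 3))]) # 8-dim subspace
print(round(subspace_distance(A, B), 3))
```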
Previous automatic bone fracture detection work has focused on detection of specific fracture types in a single anatomical region. In this paper, we present a generalized bone fracture detection method that is applicable to multiple bone fracture types and multiple bone structures throughout the body. The method uses features extracted from candidate patches in X-ray images in a novel discriminative learning framework called the Stacked Random Forests Feature Fusion. This is a multilayer learning formulation in which the class probability labels, produced by random forests learners at a lower level, are used to derive the refined class distribution labels at the next level. The candidate patches themselves are selected using an efficient subwindow search algorithm. The outcome of the method is a number of fracture bounding-boxes ranked from the most likely to the least likely to contain a fracture. We evaluate the proposed method on a set of 145 X-ray images. When the top-ranking seven fracture bounding-boxes are considered, we are able to capture 81.2% of the fracture findings reported by a radiologist. The proposed method outperforms other fracture detection frameworks that use local features, and single-layer random forests and support vector machine classification."} {"_id": "f681ceb64b7b7cc440a417d255fbf30d69926970", "title": "Neural Disambiguation of Causal Lexical Markers Based on Context", "text": "Causation is a psychological tool humans use to understand the world, and it is projected in natural language. Causation relates two events, so in order to understand the causal relation of those events and the causal reasoning of humans, the study of causality classification is required. We claim that the use of linguistic features may restrict the representation of causality, and dense vector spaces can provide a better encoding of the causal meaning of an utterance. Herein, we propose a neural network architecture only fed with word embeddings for the task of causality classification. Our results show that our claim holds, and we outperform the state-of-the-art on the AltLex corpus. The source code of our experiments is publicly available."} {"_id": "8a60b082aa758c317e9677beed7e7776acde5e4c", "title": "Chapter 12 DATA MINING IN SOCIAL MEDIA", "text": "The rise of online social media is providing a wealth of social network data. Data mining techniques provide researchers and practitioners the tools needed to analyze large, complex, and frequently changing social media data. This chapter introduces the basics of data mining, reviews social media, discusses how to mine social media data, and highlights some illustrative examples with an emphasis on social networking sites and blogs."} {"_id": "36ada37bede2f0fd3dbab2f8d4d33df8670ff00b", "title": "The Description Logic Handbook: Theory, Implementation, and Applications", "text": "Description logics, like modal and temporal logics, are logic-based formalisms employed to formalize knowledge. The Description Logic Handbook: Theory, Implementation and Applications (edited by Franz Baader, Diego Calvanese, Deborah L. McGuinness, Daniele Nardi, and Peter F. Patel-Schneider) surveys the theory, implementation, and applications of these formalisms."} {"_id": "862576569343cfa7c699784267543b81fad23c3a", "title": "Attributive Concept Descriptions with Complements", "text": "Schmidt-Schau\u00df, M. and G.
Smolka, Attributive concept descriptions with complements, Artificial Intelligence 48 (1991) 1-26. We investigate the consequences of adding unions and complements to attributive concept descriptions employed in terminological knowledge representation languages. It is shown that deciding coherence and subsumption of such descriptions are PSPACE-complete problems that can be decided with linear space."} {"_id": "0f26b165700dc85548ea3d570d757aba838f5a97", "title": "Racer: A Core Inference Engine for the Semantic Web", "text": "In this paper we describe Racer, which can be considered as a core inference engine for the semantic web. The Racer inference server offers two APIs that are already used by at least three different network clients, i.e., the ontology editor OilEd, the visualization tool RICE, and the ontology development environment Protege 2. The Racer server supports the standard DIG protocol via HTTP and a TCP based protocol with extensive query facilities. Racer currently supports the web ontology languages DAML+OIL, RDF, and OWL."} {"_id": "57820e6f974d198bf4bbdf26ae7e1063bac190c3", "title": "The Semantic Web\" in Scientific American", "text": ""} {"_id": "54a5680c9fcd540bf3df970dc7fcd38747f689bc", "title": "Hand Gesture Recognition System", "text": "The task of gesture recognition is highly challenging due to complex backgrounds, the presence of nongesture hand motions, and different illumination environments. In this paper, a new technique is proposed which begins by detecting the hand and determining its center, tracking the hand's trajectory and analyzing the variations in hand locations, and finally recognizing the gesture. The proposed technique overcomes background complexity and works at distances from the camera of up to 2.5 meters. Experimental results show that the proposed technique can recognize 12 gestures with a rate above 94%. Keywords\u2014 Hand gesture recognition, skin detection, hand tracking."} {"_id": "46aa7391e5763cbc7dd570882baf5cfc832ba3f4", "title": "A review of split-bolus single-pass CT in the assessment of trauma patients", "text": "The purpose of this study was to review and compare the image quality and radiation dose of split-bolus single-pass computed tomography (CT) in the assessment of trauma patients in comparison to standard multi-phase CT techniques. An online electronic database was searched using the MeSH terms \u201csplit-bolus,\u201d \u201cdual phase,\u201d and \u201csingle pass.\u201d Inclusion criteria required the research article to compare a split contrast bolus protocol in a single-pass scan in the assessment of trauma patients. Studies using the split-bolus CT technique in non-traumatic injury assessment were excluded. Six articles met the inclusion criteria. Parenchymal and vascular image qualities, as well as subjective image quality assessments, were equal or superior in comparison to non-split-bolus multi-phase trauma CT protocols. Split-bolus single-pass CT decreased radiation exposure in all studies.
Further research is required to determine the superior split-bolus protocol and its sensitivity and specificity for blunt cerebrovascular injury screening, detection of splenic parenchymal vascular lesions, and characterization of pelvic vascular extravasation."} {"_id": "64af3319e9ed4e459e80b469923106887d28313d", "title": "Dual-Band Microstrip Bandpass Filter Using Stepped-Impedance Resonators With New Coupling Schemes", "text": "A microstrip bandpass filter using stepped-impedance resonators is designed in low-temperature co-fired ceramic technology for dual-band applications at 2.4 and 5.2 GHz. New coupling schemes are proposed to replace the normal counterparts. It is found that the new coupling scheme for the interstages can enhance the layout compactness of the bandpass filter, while the new coupling scheme at the input and output can improve the performance of the bandpass filter. To validate the design and analysis, a prototype of the bandpass filter was fabricated and measured. It is shown that the measured and simulated performances are in good agreement. The prototype of the bandpass filter achieved insertion loss of 1.25 and 1.87 dB, S11 of -29 and -40 dB, and bandwidth of 21% and 12.7% at 2.4 and 5.2 GHz, respectively. The bandpass filter is further studied for a single-package solution of dual-band radio transceivers. The bandpass filter is, therefore, integrated into a ceramic ball grid array package. The integration is analyzed with an emphasis on the connection of the bandpass filter to the antenna and to the transceiver die."} {"_id": "72987c6ea6c0d6bfbfea8520ae0a1f74bdd8fb7c", "title": "A Survey on the Security of Stateful SDN Data Planes", "text": "Software-defined networking (SDN) emerged as an attempt to introduce network innovations faster, and to radically simplify and automate the management of large networks. SDN traditionally leverages OpenFlow as a device-level abstraction. Since OpenFlow permits the programmer to \u201cjust\u201d abstract a static flow-table, any stateful control and processing intelligence is necessarily delegated to the network controller. Motivated by the latency and signaling overhead that comes along with such a two-tiered SDN programming model, in the last couple of years several works have proposed innovative switch-level (data plane) programming abstractions capable of deploying some smartness directly inside the network switches, e.g., in the form of localized stateful flow processing. Furthermore, the possible inclusion of states and state maintenance primitives inside the switches is currently being debated in the OpenFlow standardization community itself. In this paper, after having provided the reader with a background on such emerging stateful SDN data plane proposals, we focus our attention on the security implications that data plane programmability brings about. Also via the identification of potential attack scenarios, we specifically highlight possible vulnerabilities specific to stateful in-switch processing (including denial of service and saturation attacks), which we believe should be carefully taken into consideration in the ongoing design of current and future proposals for stateful SDN data planes."} {"_id": "1182ce944d433c4930fef42e0c27edd5c00f1254", "title": "Prosthetic gingival reconstruction in a fixed partial restoration. Part 1: introduction to artificial gingiva as an alternative therapy.", "text": "The Class III defect environment entails a vertical and horizontal deficiency in the edentulous ridge.
Often, bone and soft tissue surgical procedures fall short of achieving a natural esthetic result. Alternative surgical and restorative protocols for these types of prosthetic gingival restorations are presented in this three-part series, which highlights the diagnostic and treatment aspects as well as the lab and maintenance challenges. A complete philosophical approach involves both a biologic understanding of the limitations of the hard and soft tissue healing process as well as that of multiple adjacent implants in the esthetic zone. These limitations may often necessitate the use of gingiva-colored \"pink\" restorative materials and essential preemptive planning via three-dimensional computer-aided design/computer-assisted manufacture to achieve the desired esthetic outcome. The present report outlines a rationale for consideration of artificial gingiva when planning dental prostheses. Prosthetic gingiva can overcome the limitations of grafting and should be a consideration in the initial treatment plan. (Int J Periodontics Restorative Dent 2009;29:471-477.)."} {"_id": "a4255430a93ca3e304a56d83234927885c8aaae5", "title": "Car monitoring, alerting and tracking model: Enhancement with mobility and database facilities", "text": "Statistics show that the number of cars is increasing rapidly and so is the number of car theft attempts, locally and internationally. Although a lot of car security systems have been produced lately, the results are still disappointing, as the number of car theft cases keeps increasing. Thieves are inventing cleverer and stronger stealing techniques that demand more powerful security systems. This project \u201cCar Monitoring and Tracking System\u201d is being proposed to solve the issue. It introduces the integration of monitoring and tracking. Both elements are crucial for a powerful security system. The system can send SMS and MMS to the owner for a fast response, especially if the car is nearby. This paper focuses on using MMS and database technology: the picture of the intruder is sent via a local GSM/GPRS service provider to the user and/or police. The database offers the required information about the car and owner, which helps the police or security authorities in their job. Moreover, in the event of theft, the local police and the user can easily track the car using a GPS system that can be linked to Google Earth and other mapping software. The implementation and testing results show the success of the prototype in sending an MMS to the owner within 40 seconds and delivering an acknowledgment to the database (police or security unit) within 4 minutes. The timing and results allow the owner and police to take suitable action against the intruder."} {"_id": "3320f454a8d3e872e2007a07f286b91ca333a924", "title": "Data Governance Framework for Big Data Implementation with a Case of Korea", "text": "Big Data governance requires a data governance that can satisfy the needs for corporate governance, IT governance, and ITA/EA. While the existing data governance focuses on the processing of structured data, Big Data governance needs to be established in consideration of a broad sense of Big Data services including unstructured data. To achieve the goals of Big Data, strategies need to be established together with goals that are aligned with the vision and objective of an organization.
In addition to preparing the IT infrastructure, the components must be properly prepared to effectively implement the strategy for Big Data services. We propose the Big Data Governance Framework in this paper. The framework presents criteria that differ from existing criteria at the data quality level: it focuses on timely, reliable, meaningful, and sufficient data services, and on which data attributes should be achieved for Big Data services. In addition to the quality level of Big Data, the personal information protection strategy and the data disclosure/accountability strategy are also needed to achieve goals and to prevent problems. This paper performs a case analysis based on the Big Data Governance Framework with the National Pension Service of South Korea. Big Data services in the public sector are an inevitable choice to improve the quality of people's lives. Big Data governance and its framework are the essential components for the realization of Big Data services."} {"_id": "5b854ac6f0ffe3f963ee2b7d4df474eba737cd63", "title": "Numeric data frames and probabilistic judgments in complex real-world environments", "text": "This thesis investigates human probabilistic judgment in complex real-world settings to identify processes underpinning biases across groups which relate to numerical frames and formats. Experiments are conducted replicating real-world environments and data to test judgment performance based on framing and format. Regardless of background skills and experience, people in professional and consumer contexts show a strong tendency to perceive the world from a linear perspective, interpreting information in concrete, absolute terms and making judgments based on seeking and applying linear functions. Whether predicting sales, selecting between financial products, or forecasting refugee camp data, people use minimal cues and systematically apply additive methods amidst non-linear trends and percentage points to yield linear estimates in both rich and sparse informational contexts. Depending on data variability and temporality, human rationality and choice may be significantly helped or hindered by informational framing and format. The findings deliver both theoretical and practical contributions. Across groups and individual differences, the effects of informational format and the tendency to linearly extrapolate are connected by the bias to perceive values in concrete terms and make sense of data by seeking simple referent points. People compare and combine referents using additive methods when inappropriate and adhere strongly to defaults when applied in complex numeric environments. The practical contribution involves a framing manipulation which shows that format biases (i.e., additive processing) and optimism (i.e., associated with intertemporal effects) can be counteracted in judgments involving percentages and exponential growth rates by using absolute formats and positioning defaults in future event context information. This framing manipulation was highly effective in improving loan choice and repayment judgments compared to information in standard finance industry formats.
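A three-line illustration of the linearity bias the thesis documents: an additive (linear) estimate of growth versus the exponential quantity people are actually judging:

```python
# Linear intuition vs compound growth: 10% per year over 20 years.
principal, rate, years = 1000.0, 0.10, 20
linear_guess = principal * (1 + rate * years)   # additive heuristic: 3,000
actual = principal * (1 + rate) ** years        # compounding: ~6,727
print(round(linear_guess), round(actual))
```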
There is a strong potential to increase rationality using this data format manipulation in other financial settings and domains such as health behaviour change, in which people\u2019s erroneous interpretation of percentages and non-linear relations negatively impacts choices and behaviours in both the short and long term."} {"_id": "ffdd7823e1f093876d8c4ea1d7da897c6b259ca8", "title": "A reconfigurable three- and single-phase AC/DC non-isolated bi-directional converter for multiple worldwide voltages", "text": "AC/DC converters for worldwide deployment or multiple applications are often required to operate with single- or three-phase AC connections at a variety of voltages. Electric vehicle charging is an excellent example: a typical AC/DC converter optimized for a 400 V three-phase European connection will only produce about 35% of rated power when operating on a 240 V single-phase connection in North America. This paper proposes a reconfigurable AC-DC converter with an integral DC-DC stage that allows the single-phase AC power to roughly double compared to a standard configuration, up to 2/3 of the DC/DC stage rating. The system is constructed with two conventional back-to-back three-phase two-level bridges. When operating in single-phase mode, rather than leaving the DC side under-utilized, our proposed design repurposes one of the DC phase legs and the unused AC phase to create two parallel single-phase connections. This roughly doubles the AC output current and hence doubles the single-phase AC power."} {"_id": "4f6ea57dddeb2950feedd39101c7bd586774599a", "title": "Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness", "text": "Interactive video in an e-learning system allows proactive and random access to video content. Our empirical study examined the influence of interactive video on learning outcome and learner satisfaction in e-learning environments. Four different settings were studied: three were e-learning environments\u2014with interactive video, with non-interactive video, and without video. The fourth was the traditional classroom environment. Results of the experiment showed that the value of video for learning effectiveness was contingent upon the provision of interactivity. Students in the e-learning environment that provided interactive video achieved significantly better learning performance and a higher level of learner satisfaction than those in other settings. However, students who used the e-learning environment that provided non-interactive video did not improve either. The findings suggest that it may be important to integrate interactive instructional video into e-learning systems. # 2005 Elsevier B.V. All rights reserved."} {"_id": "84252a9be6279e9e34ed4af5676bc642986ba0f6", "title": "IoT based monitoring and control of appliances for smart home", "text": "Recent home automation technology provides security, safety, and a comfortable life at home. In today's competitive, fast-paced world, home automation technology is therefore valuable for everyone. The proposed home automation technology provides smart monitoring and control of home appliances as well as a door permission system for interaction between the visitor and the home/office owner. Control and monitoring of appliance status (ON/OFF) have been implemented in multiple ways, such as the Internet, an electrical switch, and a Graphical User Interface (GUI).
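As a rough illustration of the ON/OFF monitoring-and-control idea described above, the following minimal sketch keeps appliance state in an in-memory registry with an audit log (the appliance names and the registry are hypothetical; the real system drives relays over the Internet, wall switches, and a GUI, none of which is modeled here):

```python
# Minimal sketch of ON/OFF monitoring and control for smart-home appliances.
# Appliance names and the in-memory registry are hypothetical stand-ins.
from datetime import datetime

class ApplianceController:
    def __init__(self, names):
        self.state = {name: False for name in names}   # False = OFF
        self.log = []                                  # audit trail for monitoring

    def switch(self, name: str, on: bool) -> None:
        self.state[name] = on
        self.log.append((datetime.now().isoformat(), name, "ON" if on else "OFF"))

    def status(self) -> dict:
        """Snapshot served to the GUI / remote clients."""
        return dict(self.state)

ctrl = ApplianceController(["lamp", "fan", "heater"])
ctrl.switch("lamp", True)
ctrl.switch("heater", False)
print(ctrl.status())   # {'lamp': True, 'fan': False, 'heater': False}
```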
The system has a low-cost design, a user-friendly interface, and easy installation in a home or multi-purpose building. Using this technology, consumers can reduce wasted electrical power through regular monitoring of home appliances and proper ON/OFF scheduling of the devices."} {"_id": "8e393c18974baa8d5d704edaf116f009cb919463", "title": "2.1 28Gb/s 560mW multi-standard SerDes with single-stage analog front-end and 14-tap decision-feedback equalizer in 28nm CMOS", "text": "A high-speed SerDes must meet multiple challenges including high-speed operation, intensive equalization technique, low power consumption, small area and robustness. In order to meet new standards, such as OIF CEI-25G-LR, CEI-28G-MR/SR/VSR, IEEE802.3bj and 32G-FC, data-rates are increased to 25 to 28Gb/s, which is more than 75% higher than the previous generation of SerDes. For SerDes applications with several hundred lanes integrated in a single chip, power consumption is a very important factor while maintaining high performance. There are several previous works at 28Gb/s or higher data-rate [1-2]. They use an unrolled DFE to meet the critical timing margin, but the unrolled DFE structure increases the number of DFE slicers, increasing the overall power and die area. In order to tackle these challenges, we introduce several circuit and architectural techniques. The analog front-end (AFE) uses a single-stage architecture and a compact on-chip passive inductor in the transimpedance amplifier (TIA), providing 15dB boost. The boost is adaptive and its adaptation loop is decoupled from the decision-feedback equalizer (DFE) adaptation loop by the use of a group-delay adaptation (GDA) algorithm. The DFE has a half-rate 1-tap unrolled structure with 2 total error latches for power and area reduction. A two-stage sense-amplifier-based slicer achieves a sensitivity of 15mV and DFE timing closure. We also develop a high-speed clock buffer that uses a new active-inductor circuit. This active-inductor circuit has the capability to control output-common-mode voltage to optimize circuit operating points."} {"_id": "bc8e53c1ef837531126886520ce155e14140beb4", "title": "Understanding Human Motion and Gestures for Underwater Human-Robot Collaboration", "text": "In this paper, we present a number of robust methodologies for an underwater robot to visually detect, follow, and interact with a diver for collaborative task execution. We design and develop two autonomous diver-following algorithms, the first of which utilizes both spatial- and frequency-domain features pertaining to human swimming patterns in order to visually track a diver. The second algorithm uses a convolutional neural network-based model for robust tracking-by-detection. In addition, we propose a hand gesture-based human-robot communication framework that is syntactically simpler and computationally more efficient than the existing grammar-based frameworks. In the proposed interaction framework, deep visual detectors are used to provide accurate hand gesture recognition; subsequently, a finite-state machine performs robust and efficient gesture-to-instruction mapping. The distinguishing feature of this framework is that it can be easily adopted by divers for communicating with underwater robots without using artificial markers or requiring memorization of complex language rules. Furthermore, we validate the performance and effectiveness of the proposed methodologies through extensive field experiments in closed- and open-water environments.
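The gesture-to-instruction mapping lends itself to a compact finite-state machine. The sketch below is a minimal Python illustration; the gesture tokens, states, and instruction names are invented stand-ins, not the vocabulary used in the paper:

```python
# Minimal sketch of a finite-state machine mapping recognized hand gestures
# to robot instructions. Gestures, states, and instructions are invented.

TRANSITIONS = {
    ("IDLE", "attention"): ("AWAIT_CMD", None),
    ("AWAIT_CMD", "follow"): ("IDLE", "FOLLOW_DIVER"),
    ("AWAIT_CMD", "stop"): ("IDLE", "HOVER"),
    ("AWAIT_CMD", "surface"): ("IDLE", "GO_TO_SURFACE"),
}

def run_fsm(gestures):
    state, instructions = "IDLE", []
    for g in gestures:
        # Unknown (state, gesture) pairs fall back to IDLE with no instruction.
        state, instr = TRANSITIONS.get((state, g), ("IDLE", None))
        if instr:
            instructions.append(instr)
    return instructions

print(run_fsm(["attention", "follow", "attention", "surface"]))
# ['FOLLOW_DIVER', 'GO_TO_SURFACE']
```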
Finally, we perform a user interaction study to demonstrate the usability benefits of our proposed interaction framework compared to existing methods."} {"_id": "f9e416ae4c54fd91ab3f45ba01b463686eb1dbcc", "title": "mTOR regulation of autophagy.", "text": "Nutrient starvation induces autophagy in eukaryotic cells through inhibition of TOR (target of rapamycin), an evolutionarily-conserved protein kinase. TOR, as a central regulator of cell growth, plays a key role at the interface of the pathways that coordinately regulate the balance between cell growth and autophagy in response to nutritional status, growth factor and stress signals. Although TOR has been known as a key regulator of autophagy for more than a decade, the underlying regulatory mechanisms have not been clearly understood. This review discusses the recent advances in understanding of the mechanism by which TOR regulates autophagy with focus on mammalian TOR (mTOR) and its regulation of the autophagy machinery."} {"_id": "f180081aab729e2e26c81f475a70c030816901d8", "title": "EyeTell: Video-Assisted Touchscreen Keystroke Inference from Eye Movements", "text": "Keystroke inference attacks pose an increasing threat to ubiquitous mobile devices. This paper presents EyeTell, a novel video-assisted attack that can infer a victim's keystrokes on his touchscreen device from a video capturing his eye movements. EyeTell explores the observation that human eyes naturally focus on and follow the keys they type, so a typing sequence on a soft keyboard results in a unique gaze trace of continuous eye movements. In contrast to prior work, EyeTell requires neither the attacker to visually observe the victim's inputting process nor the victim device to be placed on a static holder. Comprehensive experiments on iOS and Android devices confirm the high efficacy of EyeTell for inferring PINs, lock patterns, and English words under various environmental conditions."} {"_id": "428d73d47d9f01557e612abba52c9c21942915a1", "title": "Rigid-Foldable Thick Origami", "text": "In this paper, a method is proposed for geometrically constructing thick panel structures that follow the kinetic behavior of rigid origami by using tapered or two-ply panels and hinges located at their edges. The proposed method can convert generalized pattern of rigid-foldable origami into thick panels structure with kinetic motion, which leads to novel designs of origami for various engineering purposes including architecture."} {"_id": "9fae98bdba54a24e72e8173d50c1461553cd98b2", "title": "Moving ERP Systems to the Cloud-Data Security Issues", "text": "This paper brings to light data security issues and concerns for organizations by moving their Enterprise Resource Planning (ERP) systems to the cloud. Cloud computing has become the new trend of how organizations conduct business and has enabled them to innovate and compete in a dynamic environment through new and innovative business models. The growing popularity and success of the cloud has led to the emergence of cloud-based Software-as-a-Service (SaaS) ERP systems, a new alternative approach to traditional on-premise ERP systems. Cloud-based ERP has a myriad of benefits for organizations. However, infrastructure engineers need to address data security issues before moving their enterprise applications to the cloud. Cloud-based ERP raises specific concerns about the confidentiality and integrity of the data stored in the cloud. Such concerns that affect the adoption of cloud-based ERP are based on the size of the organization. 
Small to medium enterprises (SMEs) gain the maximum benefits from cloud-based ERP, as many of the concerns around data security are not relevant to them. In contrast, larger organizations are more cautious about moving their mission-critical enterprise applications to the cloud. A hybrid solution, where organizations can keep their sensitive applications on-premise while leveraging the benefits of the cloud, is proposed in this paper as an effective approach that is gaining momentum and popularity among large organizations."} {"_id": "cbd51c78edc96eb6f654a938f8247334a2c4fa00", "title": "Implementation of RSA 2048-bit and AES 256-bit with digital signature for secure electronic health record application", "text": "This research addresses the implementation of encryption and digital signature techniques for electronic health records to prevent cybercrime problems such as data theft, modification, and unauthorized access. In this research, the RSA 2048-bit algorithm, AES 256-bit, and SHA-256 are implemented in Java. The Secure Electronic Health Record (SEHR) application design is intended to combine security services such as confidentiality, integrity, authentication, and non-repudiation. Cryptography is used to protect file records and electronic documents containing detailed information on a patient's medical past, present, and future forecasts, which are intended for the patient only. Documents are encrypted using an encryption algorithm based on NIST standards. In the application, there are two schemes, namely the protection and verification schemes. This research uses black-box and white-box testing to test the software's input, output, and code, without examining the internal process and design of the system. We demonstrated the implementation of cryptography in the Secure Electronic Health Record (SEHR) application. The implementation of encryption and digital signatures in this research can prevent record theft, as demonstrated in the implementation and confirmed by testing."} {"_id": "2f6611fea81af98bc28341c830dd590a1c9159a9", "title": "An automotive radar network based on 77 GHz FMCW sensors", "text": "Automotive radar sensors will be used in many future applications to increase comfort and safety. Adaptive cruise control (ACC), parking aid, collision warning or avoidance, and pre-crash are some automotive applications that are based on high-performance radar sensors. This paper describes results gained by a European Commission-funded research project and a new radar network based on four individual monostatic 77 GHz FMCW radar sensors. Each sensor measures target range and velocity simultaneously and extremely accurately, even in multiple-target situations, with a maximum range of 40 m. The azimuth angle is estimated by multilateration techniques in a radar network architecture. The system design and the signal processing of this network are described in detail in this paper. Due to the high range resolution of each radar sensor, ordinary targets can no longer be considered point targets but must be treated as extended targets.
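For intuition, the following minimal sketch shows the textbook FMCW relations by which a single sensor recovers range and radial velocity from the beat frequencies of an up- and a down-chirp (the sweep parameters are invented examples, not the project's actual waveform):

```python
# Minimal sketch of textbook FMCW range/velocity recovery from up/down-chirp
# beat frequencies. Sweep bandwidth and duration are invented example values.
C = 3e8          # speed of light, m/s
FC = 77e9        # carrier frequency, Hz
B = 150e6        # sweep bandwidth, Hz (assumed)
T = 1e-3         # sweep duration, s (assumed)

def range_velocity(f_up: float, f_down: float):
    """Up-chirp beat f_up = k*R - fd, down-chirp beat f_down = k*R + fd,
    with range slope k = 2*B/(C*T) and Doppler shift fd = 2*v*FC/C."""
    k = 2 * B / (C * T)
    fd = (f_down - f_up) / 2.0
    R = (f_up + f_down) / (2.0 * k)
    v = fd * C / (2.0 * FC)
    return R, v

# A target at 30 m closing at 10 m/s (invented) produces these beats:
k = 2 * B / (C * T); fd = 2 * 10.0 * FC / C
print(range_velocity(k * 30.0 - fd, k * 30.0 + fd))  # approximately (30.0, 10.0)
```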
System capability, performance, and reliability can be improved substantially if the single-sensor FMCW signal processing, the target position processing in the radar network, and the tracking scheme are jointly optimized."} {"_id": "08de4de6aaffc4d235e45cfa324fb36d5681dd6c", "title": "Sex differences in intrinsic aptitude for mathematics and science?: a critical review.", "text": "This article considers 3 claims that cognitive sex differences account for the differential representation of men and women in high-level careers in mathematics and science: (a) males are more focused on objects from the beginning of life and therefore are predisposed to better learning about mechanical systems; (b) males have a profile of spatial and numerical abilities producing greater aptitude for mathematics; and (c) males are more variable in their cognitive abilities and therefore predominate at the upper reaches of mathematical talent. Research on cognitive development in human infants, preschool children, and students at all levels fails to support these claims. Instead, it provides evidence that mathematical and scientific reasoning develop from a set of biologically based cognitive capacities that males and females share. These capacities lead men and women to develop equal talent for mathematics and science."} {"_id": "a03f443f5b0be834ad0e53a8e436b7d09e7a8689", "title": "Reality mining based on Social Network Analysis", "text": "Data Mining is the extraction of hidden predictive information from large databases. The process of discovering interesting, useful, nontrivial patterns from large spatial datasets is called spatial data mining. When time is also considered, it becomes spatio-temporal data mining. Spatio-temporal data mining is of great relevance to the study of mobile phone sensed data. Reality Mining is defined as the study of human social behavior based on mobile phone sensed data. It is based on the data collected by sensors in mobile phones, security cameras, RFID readers, etc., all of which allow for the measurement of human physical and social activity. In this paper, the Netviz, Gephi, and Weka tools are used to convert and analyze Facebook data. Further, analysis of a reality mining dataset is also presented."} {"_id": "923d01e0ff983c0ecd3fde5e831e5faff8485bc5", "title": "Improving Restaurants by Extracting Subtopics from Yelp Reviews", "text": "In this paper, we describe latent subtopics discovered from Yelp restaurant reviews by running an online Latent Dirichlet Allocation (LDA) algorithm. The goal is to identify customer demands from a large number of high-dimensional reviews. These topics can provide meaningful insights to restaurants about what customers care about, in order to increase their Yelp ratings, which directly affects their revenue. We used the open dataset from the Yelp Dataset Challenge with over 158,000 restaurant reviews. To find latent subtopics from reviews, we adopted Online LDA, a generative probabilistic model for collections of discrete data such as text corpora. We present the breakdown of hidden topics over all reviews, predict star ratings per discovered topic, and extend our findings with temporal information regarding restaurants' peak hours.
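A minimal sketch of this pipeline, using scikit-learn's online LDA on a few invented toy reviews (the real study ran on the full Yelp corpus), looks as follows:

```python
# Minimal sketch of extracting latent subtopics from review text with online
# LDA. The toy reviews and topic count are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "great service friendly staff quick seating",
    "food was cold service slow long wait",
    "amazing tacos fresh ingredients great flavor",
    "overpriced tiny portions but tasty dessert",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, learning_method="online",
                                random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")   # highest-weight words per latent subtopic
```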
Overall, we found several interesting insights and a method that could prove useful to restaurant owners."} {"_id": "b1076b870b991563fbc9e5004752794708492bed", "title": "Ontology Based SMS Controller for Smart Phones", "text": "Text analysis includes lexical analysis of the text and has been widely studied and used in diverse applications. In the last decade, researchers have proposed many efficient solutions to analyze/classify large text datasets; however, analysis/classification of short text is still a challenge because 1) the data are very sparse, 2) they contain noise words, and 3) it is difficult to understand the syntactic structure of the text. Short Messaging Service (SMS) is a text messaging service for mobile/smart phones that is frequently used by mobile users. Because of the popularity of the SMS service, marketing companies nowadays are also using it for direct marketing, also known as SMS marketing. In this paper, we propose an ontology-based SMS controller that analyzes text messages and classifies them, using an ontology, as legitimate or spam. The proposed system has been tested on different scenarios, and experimental results show that the proposed solution is effective in terms of both efficiency and time. Keywords\u2014Short Text Classification; SMS Spam; Text Analysis; Ontology based SMS Spam; Text Analysis and Ontology"} {"_id": "cfad94900162ffcbc5a975e348a5cdccdc1e8b07", "title": "BP-STDP: Approximating backpropagation using spike timing dependent plasticity", "text": "Training spiking neural networks (SNNs) is a necessary precondition to understanding computations within the brain, a field still in its infancy. Previous work has shown that supervised learning in multi-layer SNNs enables bio-inspired networks to recognize patterns of stimuli through hierarchical feature acquisition. Although gradient descent has shown impressive performance in multi-layer (and deep) SNNs, it is generally not considered biologically plausible and is also computationally expensive. This paper proposes a novel supervised learning approach based on an event-based spike-timing-dependent plasticity (STDP) rule embedded in a network of integrate-and-fire (IF) neurons. The proposed temporally local learning rule follows the backpropagation weight change updates applied at each time step. This approach enjoys benefits of both accurate gradient descent and temporally local, efficient STDP. Thus, this method is able to address some open questions regarding accurate and efficient computations that occur in the brain. The experimental results on the XOR problem, the Iris data, and the MNIST dataset demonstrate that the proposed SNN performs as successfully as the traditional NNs. Our approach also compares favorably with the state-of-the-art multi-layer SNNs."} {"_id": "3cf5bc20338110eefa8b5375735272a1c4365619", "title": "A Requirements-to-Implementation Mapping Tool for Requirements Traceability", "text": "Software quality is a major concern in software engineering, particularly when we are dealing with software services that must be available 24 hours a day, satisfy the customer, and be maintained permanently. In software engineering, requirements management is one of the most important tasks that can influence the success of a software project. Maintaining traceability information is a fundamental aspect of software engineering and can facilitate the software development and maintenance process.
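As a minimal illustration of such a requirements-to-implementation map, the sketch below links hypothetical requirement IDs to pages and HTML element selectors and supports a reverse impact lookup (all identifiers are invented; the actual prototype's data model may differ):

```python
# Minimal sketch of a requirements-to-implementation traceability map.
# Requirement IDs, page names, and element selectors are hypothetical.

trace = {
    "REQ-001": [("login.html", "#btn-submit"), ("login.html", "#txt-user")],
    "REQ-002": [("cart.html", "#btn-checkout")],
}

def impact_of_change(page: str, selector: str):
    """Reverse lookup: which requirements are affected if this element changes?"""
    return [req for req, links in trace.items() if (page, selector) in links]

print(impact_of_change("login.html", "#btn-submit"))  # ['REQ-001']
```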
This information can be used to support various activities such as analyzing the impact of changes and software verification and validation. However, traceability techniques are mainly used between requirements and software test cases. This paper presents a prototype of a tool that supports the mapping of functional requirements with the pages and HTML elements of a web site. This tool helps maintaining requirements traceability information updated and, ultimately, increasing the efficiency of requirements change management, which may contribute to the overall quality of the software service."} {"_id": "45b7d3881dd92c8c73b99fa3497e5d28a2106c24", "title": "Neuroevolution: from architectures to learning", "text": "Artificial neural networks (ANNs) are applied to many real-world problems, ranging from pattern classification to robot control. In order to design a neural network for a particular task, the choice of an architecture (including the choice of a neuron model), and the choice of a learning algorithm have to be addressed. Evolutionary search methods can provide an automatic solution to these problems. New insights in both neuroscience and evolutionary biology have led to the development of increasingly powerful neuroevolution techniques over the last decade. This paper gives an overview of the most prominent methods for evolving ANNs with a special focus on recent advances in the synthesis of learning architectures."} {"_id": "505c58c2c100e7512b7f7d906a9d4af72f6e8415", "title": "Genetic programming - on the programming of computers by means of natural selection", "text": "Page ii Complex Adaptive Systems John H. Holland, Christopher Langton, and Stewart W. Wilson, advisors Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press edition John H. Holland Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life edited by Francisco J. Varela and Paul Bourgine Genetic Programming: On the Programming of Computers by Means of Natural Selection John R. Koza"} {"_id": "50975d6cd92e71f828ffd54bf776c32daa79e295", "title": "Survey of Neural Transfer Functions", "text": "The choice of transfer functions may strongly influence complexity and performance of neural networks. Although sigmoidal transfer functions are the most common there is no a priori reason why models based on such functions should always provide optimal decision borders. A large number of alternative transfer functions has been described in the literature. A taxonomy of activation and output functions is proposed, and advantages of various non-local and local neural transfer functions are discussed. Several less-known types of transfer functions and new combinations of activation/output functions are described. Universal transfer functions, parametrized to change from localized to delocalized type, are of greatest interest. Other types of neural transfer functions discussed here include functions with activations based on nonEuclidean distance measures, bicentral functions, formed from products or linear combinations of pairs of sigmoids, and extensions of such functions making rotations of localized decision borders in highly dimensional spaces practical. 
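For concreteness, a minimal NumPy sketch of three of the surveyed families follows: a global sigmoid, a localized Gaussian, and a bicentral function built from a product of two sigmoids (parameter values are arbitrary examples):

```python
# Minimal sketch of three transfer-function families: global sigmoid,
# localized Gaussian, and a bicentral (product-of-two-sigmoids) function.
import numpy as np

def sigmoid(x, s=1.0):
    return 1.0 / (1.0 + np.exp(-s * x))

def gaussian(x, c=0.0, b=1.0):
    return np.exp(-((x - c) ** 2) / (2.0 * b ** 2))

def bicentral(x, c=0.0, b=1.0, s=4.0):
    """Window-like localized function: product of two opposing sigmoids."""
    return sigmoid(x - c + b, s) * (1.0 - sigmoid(x - c - b, s))

x = np.linspace(-4, 4, 9)
print(np.round(bicentral(x), 3))  # near 1 inside [c-b, c+b], near 0 outside
```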
Nonlinear input preprocessing techniques are briefly described, offering an alternative way to change the shapes of decision borders."} {"_id": "6ae5deae0167649c37117d11681a223a271923c3", "title": "Back propagation neural network with adaptive differential evolution algorithm for time series forecasting", "text": "The back propagation neural network (BPNN) can easily fall into a local minimum in time series forecasting. A hybrid approach that combines the adaptive differential evolution (ADE) algorithm with BPNN, called ADE\u2013BPNN, is designed to improve the forecasting accuracy of BPNN. ADE is first applied to search for the global initial connection weights and thresholds of BPNN. Then, BPNN is employed to thoroughly search for the optimal weights and thresholds. Two comparative real-life time series data sets are used to verify the feasibility and effectiveness of the hybrid method. The proposed ADE\u2013BPNN can effectively improve forecasting accuracy relative to basic BPNN, the autoregressive integrated moving average model (ARIMA), and other hybrid models. 2014 Elsevier Ltd. All rights reserved."} {"_id": "6e744cf0273a84b087e94191fd654210e8fec8e9", "title": "A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks", "text": "Research in neuroevolution, that is, evolving artificial neural networks (ANNs) through evolutionary algorithms, is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution."} {"_id": "baf77336afa708e071c5c5e1c6f481ff27d0150d", "title": "On code reuse from StackOverflow: An exploratory study on Android apps", "text": "Context: Source code reuse has been widely accepted as a fundamental activity in software development. Recent studies showed that StackOverflow has emerged as one of the most popular resources for code reuse. Therefore, a plethora of work proposed ways to optimally ask questions, search for answers and find relevant code on StackOverflow.
However, little work studies the impact of code reuse from StackOverflow."} {"_id": "84ca84dad742749a827291e103cde8185cea1bcf", "title": "CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks", "text": "Generative Adversarial Networks (GAN) have achieved great success in various domains such as image generation, music generation, and natural language generation. In this paper, we propose a novel GAN-based collaborative filtering (CF) framework to provide higher accuracy in recommendation. We first identify a fundamental problem of existing GAN-based methods in CF and highlight it quantitatively via a series of experiments. Next, we suggest a new direction of vector-wise adversarial training to solve the problem and propose our GAN-based CF framework, called CFGAN, based on the direction. We identify a unique challenge that arises when vector-wise adversarial training is employed in CF. We then propose three CF methods realized on top of our CFGAN that are able to address the challenge. Finally, via extensive experiments on real-world datasets, we validate that vector-wise adversarial training employed in CFGAN is effective in solving the problem of existing GAN-based CF methods. Furthermore, we demonstrate that our proposed CF methods on CFGAN provide recommendation accuracy consistently and universally higher than those of the state-of-the-art recommenders."} {"_id": "3a46c11ad7afed8defbb368e478dbf94c24f43a3", "title": "A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures", "text": "Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigm. We propose a basis, common terminology and functional factors upon which to analyze the two approaches of both paradigms. We discuss the concept of \"Big Data Ogres\" and their facets as means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations/approaches of these paradigms, shed light upon the reasons for their current \"architecture\" and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms, to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering), characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide an insight into the relative strengths of the two paradigms.
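A minimal NumPy sketch of the K-means Ogre itself, Lloyd's algorithm on synthetic data, follows; it illustrates the benchmark kernel, not either paradigm's distributed implementation:

```python
# Minimal sketch of the K-means benchmark kernel (Lloyd's algorithm).
# The two-blob synthetic dataset is invented for illustration.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center for every point
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: each center moves to the mean of its members
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5.0])
centers, labels = kmeans(X, k=2)
print(np.round(centers, 2))  # one center near (0, 0), one near (5, 5)
```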
We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions."} {"_id": "0a3437fa123c92dc2789b8d49d1f934febe2c125", "title": "Towards a Self-Organized Agent-Based Simulation Model for Exploration of Human Synaptic Connections", "text": "In this paper, the early design of our self-organized agent-based simulation model for exploration of synaptic connections, which faithfully generates what is observed in natural situations, is given. While we take inspiration from neuroscience, our intent is not to create a veridical model of processes in neurodevelopmental biology, nor to represent a real biological system. Instead, our goal is to design a simulation model that learns to act in the same way as the human nervous system, using findings from human subjects obtained with reflex methodologies in order to estimate unknown connections."} {"_id": "e48617210e65974e7c21c081b6f1cec7116580ba", "title": "BIM: An integrated model for planned and preventive maintenance of architectural heritage", "text": "Modern digital technologies give us great possibilities to organize knowledge about constructions, regarding multidisciplinary fields like preservation, conservation and valorization of our architectural heritage, in order to suggest restoration projects and related work, or to support planned preventive maintenance. New procedures to archive, analyze and manage architectural information find natural support in 3D models, thanks to the growing capabilities of new modeling software. Moreover, if the model contains or interfaces with a heterogeneous archive of information, as is the case for BIM, the model can be considered the basis of critical studies, restoration projects, heritage maintenance, integrated management, protection and valorization, and evaluation of economic aspects, management and planning, all of which can flow into planned preventive maintenance [1]. The aspect that limits the use of BIM technology is the built-in parametric object library of these programs: such standardized objects cope poorly with survey and restoration issues, where each single element has its own historical and architectural characterization [2]. From this foreword, the goal of this research is evident: the possibility of applying BIM modeling to existing constructions and cultural heritage, as a support for the construction and management of a plan for planned preventive maintenance."} {"_id": "dc7024840a4ba7ab634517fae53e77695ff5dda9", "title": "Energy Efficient Smartphone-Based Activity Recognition using Fixed-Point Arithmetic", "text": "In this paper we propose a novel energy efficient approach for the recognition of human activities using smartphones as wearable sensing devices, targeting assisted living applications such as remote patient activity monitoring for the disabled and the elderly. The method exploits fixed-point arithmetic in a modified multiclass Support Vector Machine (SVM) learning algorithm, allowing better preservation of smartphone battery lifetime with respect to the conventional floating-point based formulation while maintaining comparable system accuracy levels.
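The core of the fixed-point idea can be sketched in a few lines: represent weights and features in a Q-format and evaluate the SVM decision function with integer operations only. The weights, bias, feature values, and Q1.14 format below are invented examples, not the paper's trained model:

```python
# Minimal sketch of fixed-point SVM inference: Q-format integers instead of
# floats. Weights, bias, features, and the Q1.14 format are invented examples.
import numpy as np

FRAC_BITS = 14
SCALE = 1 << FRAC_BITS          # Q-format scaling factor

def to_fixed(x):
    return np.round(np.asarray(x) * SCALE).astype(np.int32)

def svm_decision_fixed(w_fx, b_fx, x_fx):
    """Linear SVM class sign(w.x + b) using only integer operations.
    The product of two Q1.14 numbers carries 28 fractional bits, so the
    accumulator is shifted right to renormalize before adding the bias."""
    acc = np.sum(w_fx.astype(np.int64) * x_fx.astype(np.int64)) >> FRAC_BITS
    return 1 if acc + b_fx >= 0 else -1

w, b = [0.25, -0.75, 0.5], 0.1           # trained parameters (invented)
x = [0.9, 0.2, -0.4]                      # feature vector (invented)
pred = svm_decision_fixed(to_fixed(w), to_fixed(b), to_fixed(x))
float_score = np.dot(w, x) + b            # -0.025 -> class -1
print(pred, float_score)                  # fixed-point agrees with float sign
```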
Experiments show comparative results between this approach and the traditional SVM in terms of recognition performance and battery consumption, highlighting the advantages of the proposed method."} {"_id": "556254959ca591439473a8e31d2e09ef37d78285", "title": "Design for reliability of power electronics modules", "text": "Power electronics uses semiconductor technology to convert and control electrical power. Demands for efficient energy management, conversion and conservation, and the increasing take-up of electronics in transport systems, have resulted in tremendous growth in the use of power electronics devices such as Insulated Gate Bipolar Transistors (IGBTs). The packaging of power electronics devices involves a number of challenges for the design engineer in terms of reliability. For example, IGBT modules will contain a number of semiconductor dies within a small footprint, bonded to substrates with aluminum wires and wide-area solder joints. To a great extent, the reliability of the package will depend on the thermo-mechanical behavior of these materials. This paper details a physics-of-failure approach to reliability predictions of IGBT modules. It also illustrates the need for a probabilistic approach to reliability predictions that includes the effects of design variations. Also discussed are technologies for predicting the remaining life of the package when subjected to qualification stresses or in-service stresses using prognostics methods. 2009 Published by Elsevier Ltd."} {"_id": "52d09d745295ed32519a74291f890a5115798d58", "title": "A Real-time Inversion Attack on the GMR-2 Cipher Used in the Satellite Phones", "text": "The GMR-2 cipher is a stream cipher currently being used in some Inmarsat satellite phones. It has been proven that this cipher can be cracked using only one frame of known keystream, but with moderate execution time. In this paper, we present a new thorough security analysis of the GMR-2 cipher. We first study the inverse properties and the relationship of the cipher\u2019s components to reveal the cipher's weak one-way character. Then, by introducing a new concept called \u201cvalid key chain\u201d according to the cipher\u2019s key schedule, we for the first time propose a real-time inversion attack using one frame of keystream. This attack contains three phases: (1) table generation, (2) dynamic table look-up, filtration and combination, and (3) verification. Our analysis shows that, using the proposed attack, the exhaustive search space for the 64-bit encryption key can be reduced to about 2 when one frame (15 bytes) of keystream is available. Compared with previously known attacks, this inversion attack is much more efficient. Finally, the proposed attack is carried out on a 3.3GHz platform, and the experimental results demonstrate that the 64-bit encryption key could be recovered in around 0.02s on average."} {"_id": "8e31ec136f0a49718827fbe9e95a546d16579441", "title": "Review on virtual synchronous generator (VSG) for enhancing performance of microgrid", "text": "The increasing penetration of Distributed Generators (DGs)/renewable energy sources (RES) creates low-inertia problems and damping effects that degrade grid dynamic performance and stability.
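The virtual-inertia idea can be illustrated with the swing equation that a VSG emulates; in the minimal sketch below, the inertia constant H, damping D, and load step are invented per-unit values:

```python
# Minimal sketch of the swing equation a VSG emulates: after a power
# imbalance, frequency changes at a rate limited by the (virtual) inertia.
# H, D, and the load step are invented per-unit example values.
H, D, F0 = 4.0, 20.0, 50.0     # inertia constant (s), damping (pu), nominal Hz
dt, steps = 0.01, 300

f = F0
p_mech, p_load = 1.0, 1.0
for n in range(steps):
    if n == 50:                 # sudden load increase at t = 0.5 s
        p_load = 1.2
    # per-unit swing equation: 2H * d(dw)/dt = P_mech - P_load - D * dw
    dw = (p_mech - p_load - D * (f - F0) / F0) / (2.0 * H)
    f += dw * F0 * dt           # convert per-unit rate of change back to Hz
    if n % 60 == 0:
        print(f"t={n*dt:4.2f}s  f={f:6.3f} Hz")
# A larger virtual H slows the frequency decline (more virtual inertia).
```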
Voltage rise caused by reverse power flow from PV generation, excess supply of electricity in the grid when DGs generate at full output, power fluctuations due to the variable nature of renewable energy sources, and degraded frequency regulation are some negative consequences of this trend. To secure such a grid, inertia should be added virtually. The proposed control approach is based on an inverter connected to the microgrid that operates as a virtual synchronous generator (VSG), reacting only in transitory situations. A supercapacitor (SC), an energy storage system (ESS), is included in the MG-forming inverter, i.e., a normal inverter structure, to contribute to system robustness and reliability. In steady state, load sharing among the MG-supporting inverters is achieved with two restoration controllers for active and reactive power. The virtual inertia concept can be used to maintain a large share of DGs in future grids without affecting system stability. The proposed system is analyzed and simulated using MATLAB/SIMULINK."} {"_id": "26ed19606a57d837b4b9dcc984ff61763cdd0d36", "title": "Predicting short-term interests using activity-based search context", "text": "A query considered in isolation offers limited information about a searcher's intent. Query context that considers pre-query activity (e.g., previous queries and page visits), can provide richer information about search intentions. In this paper, we describe a study in which we developed and evaluated user interest models for the current query, its context (from pre-query session activity), and their combination, which we refer to as intent. Using large-scale logs, we evaluate how accurately each model predicts the user's short-term interests under various experimental conditions. In our study we: (i) determine the extent of opportunity for using context to model intent; (ii) compare the utility of different sources of behavioral evidence (queries, search result clicks, and Web page visits) for building predictive interest models, and; (iii) investigate optimally combining the query and its context by learning a model that predicts the context weight for each query. Our findings demonstrate significant opportunity in leveraging contextual information, show that context and source influence predictive accuracy, and show that we can learn a near-optimal combination of the query and context for each query. The findings can inform the design of search systems that leverage contextual information to better understand, model, and serve searchers' information needs."} {"_id": "77b1354f4b9b1d9282840ee8baf3f003c2bc3f66", "title": "A 280-KBytes Twin-Bit-Cell Embedded NOR Flash Memory With a Novel Sensing Current Protection Enhanced Technique and High-Voltage Generating Systems", "text": "A novel sensing current protection enhanced (SCPE) technique and high-voltage (HV) generating systems are adopted to greatly enhance the advantages of the twin-bit cell. The SCPE technique trades off the sensing margin loss between the \u201c1\u201d and \u201c0\u201d bit sensing cases to realize fast read access. The read speed is improved by 7.7%. HV generating systems with parallel-series-transform and capacitance-shared techniques are proposed to satisfy the complicated requirements of output capability and current drivability with lower area penalty. The HV periphery area decreases by 71%.
A 1.5-V 280-KBytes embedded NOR flash memory IP has been fabricated in the HHGrace (Shanghai Huahong Grace Semiconductor Manufacturing Corporation) 90-nm 4-poly 4-metal CMOS process. The complicated operation voltages and an access time of 23.5 ns have been achieved at 1.5 V. The die size of the proposed IP is 0.437 mm² and the average charge pump area has been reduced to 0.006 mm²."} {"_id": "0c495278136cae458b3976b879bcace7f9b42c30", "title": "A survey of methods for data fusion and system adaptation using autonomic nervous system responses in physiological computing", "text": "Physiological computing represents a mode of human\u2013computer interaction where the computer monitors, analyzes and responds to the user\u2019s psychophysiological activity in real-time. Within the field, autonomic nervous system responses have been studied extensively since they can be measured quickly and unobtrusively. However, despite a vast body of literature available on the subject, there is still no universally accepted set of rules that would translate physiological data to psychological states. This paper surveys the work performed on data fusion and system adaptation using autonomic nervous system responses in psychophysiology and physiological computing during the last ten years. First, five prerequisites for data fusion are examined: psychological model selection, training set preparation, feature extraction, normalization and dimension reduction. Then, different methods for either classification or estimation of psychological states from the extracted features are presented and compared. Finally, implementations of system adaptation are reviewed: changing the system that the user is interacting with in response to cognitive or affective information inferred from autonomic nervous system responses. The paper is aimed primarily at psychologists and computer scientists who have already recorded autonomic nervous system responses and now need to create algorithms to determine the subject\u2019s psychological state. 2012 British Informatics Society Limited. All rights reserved."} {"_id": "39fc175c150ba9ee662107b395fc0c3bbcc4962e", "title": "Review: Graph Databases", "text": "This paper covers NOSQL databases. With the increase in Internet users and applications, it is important to understand graph databases, the future of data storage and management. The basic architecture of several graph databases is discussed to show how they work and how data is stored in the form of nodes and relationships. Keywords: NOSQL databases, Neo4j, JENA, DEX, InfoGrid, FlockDB."} {"_id": "8cbe965f4909a3a7347f4def8365e473db1faf2b", "title": "Differential effects of atomoxetine on executive functioning and lexical decision in attention-deficit/hyperactivity disorder and reading disorder.", "text": "OBJECTIVE\nThe effects of a promising pharmacological treatment for attention-deficit/hyperactivity disorder (ADHD), atomoxetine, were studied on executive functions in both ADHD and reading disorder (RD) because earlier research demonstrated an overlap in executive functioning deficits in both disorders. In addition, the effects of atomoxetine were explored on lexical decision.\n\n\nMETHODS\nSixteen children with ADHD, 20 children with ADHD + RD, 21 children with RD, and 26 normal controls were enrolled in a randomized placebo-controlled crossover study.
Children were measured on visuospatial working memory, inhibition, and lexical decision on the day of randomization and following two 28-day medication periods.\n\n\nRESULTS\nChildren with ADHD + RD showed improved visuospatial working memory performance and, to a lesser extent, improved inhibition following atomoxetine treatment compared to placebo. No differential effects of atomoxetine were found for lexical decision in comparison to placebo. In addition, no effects of atomoxetine were demonstrated in the ADHD and RD groups.\n\n\nCONCLUSION\nAtomoxetine improved visuospatial working memory and to a lesser degree inhibition in children with ADHD + RD, which suggests differential developmental pathways for co-morbid ADHD + RD as compared to ADHD and RD alone.\n\n\nCLINICAL TRIAL REGISTRY\nB4Z-MC-LYCK, NCT00191906; http://clinicaltrials.gov/ct2/show/NCT00191906."} {"_id": "f4cdd1d15112a3458746b58a276d97e79d8f495d", "title": "Gradient Regularization Improves Accuracy of Discriminative Models", "text": "Regularizing the gradient norm of the output of a neural network with respect to its inputs is a powerful technique, rediscovered several times. This paper presents evidence that gradient regularization can consistently improve classification accuracy on vision tasks, using modern deep neural networks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers. We demonstrate empirically on real and synthetic data that the learning process leads to gradients controlled beyond the training points, and results in solutions that generalize well."} {"_id": "4aa00fec49069c8e3e4814b99be1d99d654c84bb", "title": "A robust data scaling algorithm to improve classification accuracies in biomedical data", "text": "Machine learning models have been adapted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm that scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small; it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types and cover a wide range of applications. The resultant performance in terms of area under the receiver operation characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform the ones using data scaled by the Min-max and the Z-score algorithm, which are the most commonly used data scaling algorithms. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. 
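A simplified sketch of this idea follows, fitting a two-parameter logistic CDF to a feature's empirical CDF with SciPy and using it to squash outliers into (0, 1); this is a reduced form of the paper's generalized logistic, for illustration only:

```python
# Minimal sketch of logistic-CDF-based scaling: fit a logistic function to a
# feature's empirical CDF, then map all values (outliers included) into (0, 1).
# This two-parameter form is a simplification of the generalized logistic.
import numpy as np
from scipy.optimize import curve_fit

def logistic_cdf(x, mu, s):
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

rng = np.random.default_rng(0)
feature = np.concatenate([rng.normal(10.0, 2.0, 200), [60.0, 75.0]])  # outliers

xs = np.sort(feature)
ecdf = np.arange(1, len(xs) + 1) / (len(xs) + 1.0)   # empirical CDF
params, _ = curve_fit(logistic_cdf, xs, ecdf, p0=[np.median(xs), xs.std()])

scaled = logistic_cdf(feature, *params)              # all values now in (0, 1)
print(np.round(params, 2), scaled.min().round(3), scaled.max().round(3))
```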
Empirical results also show that models learned from data scaled by the GL algorithm have higher accuracy than those learned from data scaled by the commonly used algorithms."} {"_id": "6674bf8a73f7181ec0cee0a8ee23f4b4f9bb162e", "title": "Compliance with Information Security Policies: An Empirical Investigation", "text": "Information security is the main topic of this paper, which discusses an investigation of compliance with information security policies. The author mentions that the insignificant relationship between rewards and actual compliance with information security policies does not make sense; quite possibly this relationship results from not applying rewards for security compliance. The author also mentions that, based on the survey conducted, careless employee behavior places an organization's assets and reputation in serious jeopardy. The major threat to information security arises from careless employees who fail to comply with organizations' information security policies and procedures."} {"_id": "1bd50926079e68a6e32dc4412e9d5abe331daefb", "title": "Fisher Discrimination Dictionary Learning for sparse representation", "text": "Sparse representation based classification has led to interesting image recognition results, while the dictionary used for sparse coding plays a key role in it. This paper presents a novel dictionary learning (DL) method to improve the pattern classification performance. Based on the Fisher discrimination criterion, a structured dictionary, whose dictionary atoms correspond to the class labels, is learned so that the reconstruction error after sparse coding can be used for pattern classification. Meanwhile, the Fisher discrimination criterion is imposed on the coding coefficients so that they have small within-class scatter but large between-class scatter. A new classification scheme associated with the proposed Fisher discrimination DL (FDDL) method is then presented by using both the discriminative information in the reconstruction error and the sparse coding coefficients. The proposed FDDL is extensively evaluated on benchmark image databases in comparison with existing sparse representation and DL based classification methods."} {"_id": "9462ce3ca0fd8ac68e01892152ad6a72ab0e37b7", "title": "More on the fragility of performance: choking under pressure in mathematical problem solving.", "text": "In 3 experiments, the authors examined mathematical problem solving performance under pressure. In Experiment 1, pressure harmed performance on only unpracticed problems with heavy working memory demands. In Experiment 2, such high-demand problems were practiced until their answers were directly retrieved from memory. This eliminated choking under pressure. Experiment 3 dissociated practice on particular problems from practice on the solution algorithm by imposing a high-pressure test on problems practiced 1, 2, or 50 times each. Infrequently practiced high-demand problems were still performed poorly under pressure, whereas problems practiced 50 times each were not. These findings support distraction theories of choking in math, which contrasts with considerable evidence for explicit monitoring theories of choking in sensorimotor skills.
This contrast suggests a skill taxonomy based on real-time control structures."} {"_id": "e093191573b88bd279213c0164f389378810e782", "title": "The Effects of Interaction Goals on Negotiation Tactics and Outcomes: A Dyad-Level Analysis Across Two Cultures", "text": "This study investigates how negotiators\u2019 interaction goals influence their own and their counterparts\u2019 negotiation tactics and outcomes across two cultures using a simulated employment contract negotiation. Results show that when negotiators placed greater importance on competitive goals, they used more distributive persuasion and fewer priority information exchange tactics, which reduced their counterparts\u2019 profit; negotiators\u2019 competitive goals also caused their counterparts to use fewer priority information exchange tactics, which in turn hurt their own profit. Dyad members\u2019 competitive goals have an indirect, negative impact on joint profit. In addition, Chinese negotiators placed greater importance on competitive goals and used more distributive and fewer integrative tactics than Americans, but the associations between goals and tactics did not differ across cultures. Nevertheless, members of the two cultures took different paths to improve joint profit; as a result, Chinese dyads achieved no less joint profit than American dyads. The study sheds light on culture\u2019s effect on the interactive processes by which goals impact negotiation performance."} {"_id": "d504a72e40ecee5c2e721629e7368a959b18c681", "title": "Solving job-shop scheduling problems by genetic algorithm", "text": "The job-shop scheduling problem (JSP) is extremely hard because it involves a very large combinatorial search space and precedence constraints between machines. The traditional algorithm used to solve the problem is the branch-and-bound method, which takes considerable computing time when the problem size is large. We propose a new method for solving the JSP using a genetic algorithm (GA) and demonstrate its efficiency on standard job-shop scheduling benchmarks. The important points of the GA are how to represent schedules as individuals and how to design genetic operators for this representation in order to produce better results."} {"_id": "42bacdec7c9b78075e4849609a25892951095461", "title": "Detection of ulcerative colitis severity in colonoscopy video frames", "text": "Ulcerative colitis (UC) is a chronic inflammatory disease characterized by periods of relapses and remissions affecting more than 500,000 people in the United States. The therapeutic goals of UC are to first induce and then maintain disease remission. However, it is very difficult to evaluate the severity of UC objectively because of the non-uniform nature of the symptoms associated with UC and large variations in their patterns. To address this, we objectively measure and classify the severity of UC presented in optical colonoscopy video frames based on the image textures. To extract distinct textures, we use a hybrid approach in which a newly proposed feature based on the accumulation of pixel value differences is combined with an existing feature, LBP (Local Binary Pattern). The experimental results show the hybrid method can achieve more than 90% overall accuracy."} {"_id": "3fb14a16d10d942ce9a78406d3a47c430244954f", "title": "Finding surprising patterns in a time series database in linear time and space", "text": "The problem of finding a specified pattern in a time series database (i.e.
query by content) has received much attention and is now a relatively mature field. In contrast, the important problem of enumerating all surprising or interesting patterns has received far less attention. This problem requires a meaningful definition of \"surprise\", and an efficient search technique. All previous attempts at finding surprising patterns in time series use a very limited notion of surprise, and/or do not scale to massive datasets. To overcome these limitations we introduce a novel technique that defines a pattern as surprising if the frequency of its occurrence differs substantially from that expected by chance, given some previously seen data."} {"_id": "3d0f75c914bc3a34ef45cb0f6a18f841fa8008f0", "title": "Everyday Life Information Seeking : Approaching Information Seeking in the Context of \u201c Way of Life \u201d", "text": "The study offers a framework for the study of everyday life information seeking (ELIS) in the context of way of life and mastery of life. Way of life is defined as the \u201corder of things,\u201d manifesting itself, for example, in the relationship between work and leisure time, models of consumption, and nature of hobbies. Mastery of life is interpreted as \u201ckeeping things in order;\u201d four ideal types of mastery of life with their implications for ELIS, namely optimistic-cognitive, pessimistic-cognitive, defensive-affective and pessimistic-affective mastery of life, are outlined. The article reviews two major dimensions of ELIS, namely the seeking of orienting and practical information. The research framework was tested in an empirical study based on interviews with teachers and industrial workers, eleven of each. The main features of seeking orienting and practical information are reviewed, followed by suggestions for refinement of the research framework."} {"_id": "843018e9d8ebda9a4ec54256a9f70835e0edff7e", "title": "Ultra-wideband antipodal vivaldi antenna array with Wilkinson power divider feeding network", "text": "In this paper, a two-element antipodal Vivaldi antenna array in combination with an ultra-wideband (UWB) Wilkinson power divider feeding network is presented. The antennas are placed in a stacked configuration to provide more gain and to limit cross talk between transmit and receive antennas in radar applications. Each part can be realized with standard planar technology. For a single antenna element, a comparison in performance is done between standard FR4 and a Rogers\u00ae RO3003 substrate. The antenna spacing is obtained using the parametric optimizer of CST MWS\u00ae. The performance of the power divider and the antenna array is analyzed by means of simulations and measurements."} {"_id": "8839b84e8432f6787d2281806430a65d39a94499", "title": "The Vivaldi Aerial", "text": "The Vivaldi Aerial is a new member of the class of aperiodic continuously scaled antenna structures and, as such, it has theoretically unlimited instantaneous frequency bandwidth. This aerial has significant gain and linear polarisation and can be made to conform to a constant gain vs. frequency performance. One such design has been made with approximately 10 dBi gain and -20 dB sidelobe level over an instantaneous frequency bandwidth extending from below 2 GHz to above 40 GHz."} {"_id": "984df1f081fbd623600ec45635e5d9a4811c0aef", "title": "Ultra-wideband Vivaldi arrays for see-through-wall imaging radar applications", "text": "Two Vivaldi antenna arrays have been presented.
The first is an 8-element tapered slot array covering the 1.2 to 4 GHz band for STW applications such as brick/concrete wall imaging. The second is a 16-element antipodal array operating at 8 to 10.6 GHz for high resolution imaging when penetrating through dry wall. Based on the two designs, and utilizing a smooth wide-band slot-to-microstrip transition to feed the Vivaldi antenna array, a 1\u201310 GHz frequency band can be covered. Alternatively, the design can be used in a reconfigurable structure to cover either a 1\u20133 GHz or 8\u201310 GHz band. Simulated and measured results have been completed and are discussed in detail. The designs will significantly impact the development of compact reconfigurable and portable systems."} {"_id": "d7ccf0988c16b0b373274bcf57991f036e253f5c", "title": "Analysis of Ultra Wideband Antipodal Vivaldi Antenna Design", "text": "The characteristics of antipodal Vivaldi antennas with different structures and the key factors that affect the performance of the antipodal antennas have been investigated. The return loss, radiation pattern and current distribution with various elliptical loading sizes, opening rates and ground plane sizes have been analyzed and compared using a full-wave simulation tool. Based on the parameter study, a design that is capable of achieving a bandwidth from 2.1 GHz to 20 GHz has been identified."} {"_id": "e3f4fdf6d2f10ebe4cfc6d0544afa63976527d60", "title": "A 324-Element Vivaldi Antenna Array for Radio Astronomy Instrumentation", "text": "This paper presents a 324-element 2-D broadside array for radio astronomy instrumentation which is sensitive to two mutually orthogonal polarizations. The array is composed of cruciform units consisting of a group of four Vivaldi antennas arranged in a cross-shaped structure. The Vivaldi antenna used in this array exhibits a radiation intensity characteristic with a symmetrical main beam of 87.5\u00b0 at 3 GHz and 44.2\u00b0 at 6 GHz. The measured maximum side/backlobe level is 10.3 dB below the main beam level. The array can operate at a high frequency of 5.4 GHz without the formation of grating lobes."} {"_id": "f5f9366ae604faf45ef53a3f20fcc31d317bd751", "title": "Finger printing on the web", "text": "Nowadays, companies such as Google [1] and Facebook [2] are present on most websites either through advertising services or through social plugins. They follow users around the web, collecting information to create unique user profiles. This serious threat to privacy is a major cause for concern, as it reveals that there are no restrictions on the type of information that can be collected, the duration it can be stored, the people who can access it, and the purpose of use or misuse. In order to determine if there indeed is any statistically significant evidence that fingerprinting occurs on certain web services, we integrated two software tools: FP-Block [6], a prototype browser extension that blocks web-based device fingerprinting, and AdFisher [10], an automated tool that sends out thousands of automated Web browser instances, such that the data collected by FP-Block could be fed to AdFisher for analysis."} {"_id": "c0271edf9cc616a30974918b3cce0f95d4265366", "title": "Cloud-assisted body area networks: state-of-the-art and future challenges", "text": "Body Area Networks (BANs) are emerging as an enabling technology for many human-centered application domains such as health-care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality.
A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly exploited to monitor individual assisted livings. Data generated by a BAN can be processed in real-time by the BAN coordinator and/or transmitted to a server side for online/offline processing and long-term storage. A network of BANs worn by a community of people produces large amounts of contextual data that require a scalable and efficient approach for elaboration and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of Cloud-assisted BANs along with the main challenges that need to be addressed for their development and management. The current state-of-the-art is overviewed and framed according to the main requirements for effective Cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, and dynamic deployment and management are discussed."} {"_id": "3b5c7cdcc75e6064149226b4b652bf88f687bd77", "title": "Visual and Infrared Sensor Data-Based Obstacle Detection for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine", "text": "A novel visual and infrared sensor data-based system to assist visually impaired users in detecting obstacles in their path while independently navigating indoors is presented. The system has been developed for the recently introduced Google Project Tango Tablet Development Kit equipped with a powerful graphics processor and several sensors which allow it to track its motion and orientation in 3-D space in real-time. It exploits the inbuilt functionalities of the Unity engine in the Tango SDK to create a 3-D reconstruction of the surrounding environment, then associates a Unity collider component with the user and utilizes it to determine the user's interaction with the reconstructed mesh in order to detect obstacles. The user is warned about any detected obstacles via audio alerts. An extensive empirical evaluation of the obstacle detection component has yielded favorable results, thus confirming the potential of this system for future development work."} {"_id": "a29cc7c75b5868bef0576bd2ad94821e6358a26a", "title": "BioGlass: Physiological parameter estimation using a head-mounted wearable device", "text": "This work explores the feasibility of using sensors embedded in Google Glass, a head-mounted wearable device, to measure physiological signals of the wearer. In particular, we develop new methods to use Glass's accelerometer, gyroscope, and camera to extract pulse and respiratory rates of 12 participants during a controlled experiment. We show it is possible to achieve a mean absolute error of 0.83 beats per minute (STD: 2.02) for heart rate and 1.18 breaths per minute (STD: 2.04) for respiration rate when considering different combinations of sensors. These results included testing across sitting, supine, and standing still postures before and after physical exercise."} {"_id": "e3412b045525ed4a6c0142bba69522beb2f82a25", "title": "Monocular Depth Estimation: A Survey", "text": "Monocular depth estimation is often described as an ill-posed and inherently ambiguous problem. Estimating depth from 2D images is a crucial step in scene reconstruction, 3D object recognition, segmentation, and detection.
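Returning briefly to the BioGlass abstract above: the core signal-processing idea of recovering a pulse rate from subtle periodic head motion can be illustrated with a toy sketch like the one below. The sampling rate, band limits, and the synthetic 72 bpm accelerometer trace are all invented for illustration; the paper's actual pipeline and sensor fusion are more involved.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 50.0                          # assumed sample rate (Hz)
    t = np.arange(0, 30, 1 / fs)       # a 30 s window
    # Synthetic accelerometer magnitude: 1.2 Hz (72 bpm) cardiac component,
    # slow postural drift, and sensor noise. Purely made-up amplitudes.
    acc = (0.02 * np.sin(2 * np.pi * 1.2 * t)
           + 0.5 * np.sin(2 * np.pi * 0.05 * t)
           + 0.01 * np.random.randn(t.size))

    # Band-pass to a plausible heart-rate band (0.7-3.0 Hz, i.e. 42-180 bpm).
    b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
    pulse_band = filtfilt(b, a, acc)

    # The dominant spectral peak inside the band gives beats per minute.
    spec = np.abs(np.fft.rfft(pulse_band))
    freqs = np.fft.rfftfreq(pulse_band.size, 1 / fs)
    mask = (freqs >= 0.7) & (freqs <= 3.0)
    bpm = 60.0 * freqs[mask][np.argmax(spec[mask])]
    print(f"estimated pulse: {bpm:.1f} bpm")   # ~72 bpm on this toy signal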
The problem can be framed as: given a single RGB image as input, predict a dense depth map for each pixel. This problem is worsened by the fact that most scenes have large texture and structural variations, object occlusions, and rich geometric detailing. All these factors contribute to difficulty in accurate depth estimation. In this paper, we review five papers that attempt to solve the depth estimation problem with various techniques including supervised, weakly-supervised, and unsupervised learning techniques. We then compare these papers and examine the improvements made over one another. Finally, we explore potential improvements that can help better solve this problem."} {"_id": "6fee63e0ae4bc1f3fd08044d7d694bb17b9c059c", "title": "Cross-Cultural Software Production and Use: A Structurational Analysis", "text": "This paper focuses on cross-cultural software production and use, which is increasingly common in today's more globalized world. A theoretical basis for analysis is developed, using concepts drawn from structuration theory. The theory is illustrated using two cross-cultural case studies. It is argued that structurational analysis provides a deeper examination of cross-cultural working and IS than is found in the current literature, which is dominated by Hofstede-type studies; in particular, the theoretical approach can be used to analyze cross-cultural conflict and contradiction, cultural heterogeneity, detailed work patterns, and the dynamic nature of culture. The paper contributes to the growing body of literature that emphasizes the essential role of cross-cultural understanding in contemporary society. Introduction: There has been much debate over the last decade about the major social transformations taking place in the world, such as the increasing interconnectedness of different societies, the compression of time and space, and an intensification of consciousness of the world as a whole (Robertson 1992). Such changes are often labeled with the term globalization, although the precise nature of this phenomenon is highly complex on closer examination. For example, Beck (2000) distinguishes between globality, the change in consciousness of the world as a single entity, and globalism, the ideology of neoliberalism which argues that the world market eliminates or supplants the importance of local political action. Despite the complexity of the globalization phenomena, all commentators would agree that information and communication technologies (ICTs) are deeply implicated in the changes that are taking place through their ability to enable new modes of work, communication, and organization across time and space. For example, the influential work of Castells (1996, 1997, 1998) argues that we are in the \"information age\" where information generation, processing, and transformation are fundamental to societal functioning and societal change, and where ICTs enable the pervasive expansion of networking throughout the social structure. However, does globalization, and the related spread of ICTs, imply that the world is becoming a homogeneous arena for global business and global attitudes, with differences between organizations and societies disappearing? There are many authors who take exception to this conclusion. For example, Robertson (1992) discussed the way in which imported themes are indigenized in particular societies, with local culture constraining receptivity to some ideas rather than others, and adapting them in specific ways. He cited Japan as a good example of these glocalization processes. While accepting the idea of time-space compression facilitated by ICTs, Robertson argued that one of its main consequences is an exacerbation of collisions between global, societal, and communal attitudes. Similarly, Appadurai (1997), coming from a non-Western background, argued against the global homogenization thesis on the grounds that different societies will appropriate the \"materials of modernity\" differently depending on their specific geographies, histories, and languages. Walsham (2001) developed a related argument, with a specific focus on the role of ICTs, concluding that global diversity needs to be a key focus when developing and using such technologies. If these latter arguments are broadly correct, then working with ICTs in and across different cultures should prove to be problematic, in that there will be different views of the relevance, applicability, and value of particular modes of working and use of ICTs, which may produce conflict. For example, technology transfer from one society to another involves the importing of that technology into an \"alien\" cultural context where its value may not be similarly perceived to that in its original host culture. Similarly, cross-cultural communication through ICTs, or cross-cultural information systems (IS) development teams, are likely to confront issues of incongruence of values and attitudes. The purpose of this paper is to examine a particular topic within the area of cross-cultural working and ICTs, namely that of software production and use; in particular, where the software is not developed in and for a specific cultural group. A primary goal is to develop a theoretical basis for analysis of this area. Key elements of this basis, which draws on structuration theory, are described in the next section of the paper. In order to illustrate the theoretical basis and its value in analyzing real situations, the subsequent sections draw on the field data from two published case studies of cross-cultural software development and application. There is an extensive literature on cross-cultural working and IS, and the penultimate section of the paper reviews key elements of this literature and shows how the analysis of this paper makes a new contribution. In particular, it will be argued that the structurational analysis enables a more sophisticated and detailed consideration of issues in cross-cultural software production under four specific headings: cross-cultural contradiction and conflict; cultural heterogeneity; detailed work patterns in different cultures; and the dynamic, emergent nature of culture. The final section of the paper will summarize some theoretical and practical implications. Structuration Theory, Culture and IS: The theoretical basis for this paper draws on structuration theory (Giddens 1979, 1984). This theory has been highly influential in sociology and the social sciences generally since Giddens first developed the ideas some 20 years ago. In addition, the theory has received considerable attention in the IS field (for a good review, see Jones 1998). The focus here, however, will be on how structuration theory can offer a new way of looking. Table 1. Structuration Theory, Culture, and ICTs: Some Key Concepts"} {"_id": "69d40b2bc094e1ec938617d8cdaf4f7ac227ead3", "title": "LTSA-WS: a tool for model-based verification of web service compositions and choreography", "text": "In this paper we describe a tool for a model-based approach to verifying compositions of web service implementations. The tool supports verification of properties created from design specifications and implementation models to confirm expected results from the viewpoints of both the designer and implementer. Scenarios are modeled in UML, in the form of Message Sequence Charts (MSCs), and then compiled into the Finite State Process (FSP) process algebra to concisely model the required behavior. BPEL4WS implementations are mechanically translated to FSP to allow an equivalence trace verification process to be performed. By providing early design verification and validation, the implementation, testing and deployment of web service compositions can be eased through the understanding of the behavior exhibited by the composition. The approach is implemented as a plug-in for the Eclipse development environment providing cooperating tools for specification, formal modeling, verification and validation of the composition process."} {"_id": "0f8a645b6e204d6bd06191ebc685fe2d887dbde8", "title": "Breaking the News: First Impressions Matter on Online News", "text": "A growing number of people are changing the way they consume news, replacing the traditional physical newspapers and magazines with their virtual online versions and/or weblogs. The interactivity and immediacy present in online news are changing the way news is produced and presented by media corporations. News websites have to create effective strategies to catch people\u2019s attention and attract their clicks. In this paper we investigate possible strategies used by online news corporations in the design of their news headlines. We analyze the content of 69,907 headlines produced by four major global media corporations during a minimum of eight consecutive months in 2014. In order to discover strategies that could be used to attract clicks, we extracted features from the text of the news headlines related to the sentiment polarity of the headline. We discovered that the sentiment of the headline is strongly related to the popularity of the news and also to the dynamics of the posted comments on that particular news item."} {"_id": "7661539d276da03f63864e8df4162521af2d8184", "title": "A new method for decontamination of de novo transcriptomes using a hierarchical clustering algorithm", "text": "Motivation\nThe identification of contaminating sequences in a de novo assembly is challenging because of the absence of information on the target species. For sample types where the target organism is impossible to isolate from its matrix, such as endoparasites, endosymbionts and soil-harvested samples, contamination is unavoidable. A few post-assembly decontamination methods are currently available but are based only on alignments to databases, which can lead to poor decontamination.\n\n\nResults\nWe present a new decontamination method based on a hierarchical clustering algorithm called MCSC. This method uses frequent patterns found in sequences to create clusters. These clusters are then linked to the target species or tagged as contaminants using classic alignment tools.
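A simplified stand-in for the clustering-then-tagging idea just described, useful for intuition only: real MCSC clusters sequences by frequent patterns, whereas the sketch below uses 4-mer frequency profiles and scikit-learn's agglomerative clustering on synthetic contigs. Everything here (GC contents, cluster count, the single aligned contig) is invented for illustration.

    import numpy as np
    from itertools import product
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(0)
    KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=4))}

    def random_seq(n, p_gc):
        # Toy contig with a chosen GC content (a crude species signal).
        p = [(1 - p_gc) / 2, p_gc / 2, p_gc / 2, (1 - p_gc) / 2]
        return "".join(rng.choice(list("ACGT"), size=n, p=p))

    def kmer_profile(seq, k=4):
        # Normalized 4-mer frequency vector; a stand-in for MCSC's patterns.
        v = np.zeros(len(KMERS))
        for i in range(len(seq) - k + 1):
            idx = KMERS.get(seq[i:i + k])
            if idx is not None:
                v[idx] += 1
        return v / max(v.sum(), 1.0)

    # Six "target" contigs (GC-poor) plus four "contaminant" contigs (GC-rich).
    contigs = [random_seq(600, 0.35) for _ in range(6)] + \
              [random_seq(600, 0.65) for _ in range(4)]
    X = np.array([kmer_profile(s) for s in contigs])
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

    # Pretend an aligner matched only contig 0 to the target species; the
    # whole cluster inherits the tag, unaligned members included.
    target = labels[0]
    print("kept contigs:", [i for i, lab in enumerate(labels) if lab == target])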
The main advantage of this decontamination method is that it allows sequences to be tagged correctly even if they are unknown or misaligned to a database.\n\n\nAvailability and Implementation\nScripts and documentation about the MCSC decontamination method are available at https://github.com/Lafond-LapalmeJ/MCSC_Decontamination .\n\n\nContact\n: benjamin.mimee@agr.gc.ca.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."} {"_id": "d40a8628453b9be53c57c3d6e50ea1aa438bdd3f", "title": "Understanding self-reflection: how people reflect on personal data through visual data exploration", "text": "Rapid advancements in consumer technologies enable people to collect a wide range of personal data. With a proper means for people to ask questions and explore their data, longitudinal data feeds from multiple self-tracking tools pose great opportunities to foster deep self-reflection. However, most self-tracking tools lack support for self-reflection beyond providing simple feedback. Our overarching goal is to support self-trackers in reflecting on their data and gaining rich insights through visual data exploration. As a first step toward the goal, we built a web-based application called Visualized Self, and conducted an in-lab think-aloud study (N = 11) to examine how people reflect on their personal data and what types of insights they gain throughout the reflection. We discuss lessons learned from studying with Visualized Self, and suggest directions for designing visual data exploration tools for fostering self-reflection."} {"_id": "dadd12956a684202ac0ee6b7d599434a1a50fee2", "title": "Flavonoids in food and their health benefits.", "text": "There has been increasing interest in the research of flavonoids from dietary sources, due to growing evidence of the versatile health benefits of flavonoids through epidemiological studies. As the occurrence of flavonoids is directly associated with human daily dietary intake of antioxidants, it is important to evaluate flavonoid sources in food. Fruits and vegetables are the main dietary sources of flavonoids for humans, along with tea and wine. However, there is still difficulty in accurately measuring the daily intake of flavonoids because of the complexity of flavonoid occurrence across various food sources, the diversity of dietary cultures, and the sheer abundance of flavonoids in nature. Nevertheless, research on the health aspects of flavonoids for humans is expanding rapidly. Many flavonoids are shown to have antioxidative activity, free-radical scavenging capacity, coronary heart disease prevention, and anticancer activity, while some flavonoids exhibit potential for anti-human immunodeficiency virus functions. As such research progresses, further achievements will undoubtedly lead to a new era of flavonoids in either foods or pharmaceutical supplements. Accordingly, an appropriate model for a precise assessment of intake of flavonoids needs to be developed. Most recent research has focused on the health aspects of flavonoids from food sources for humans.
This paper reviews the current advances in flavonoids in food, with emphasis on health aspects on the basis of the published literature, which may provide some guidance for researchers in further investigations and for industries in developing practical health agents."} {"_id": "f568818ca0eccc3e04213d300dfa4e4e7e81ba8c", "title": "Open/Closed Eye Analysis for Drowsiness Detection", "text": "Drowsiness detection is vital in preventing traffic accidents. Eye state analysis - detecting whether the eye is open or closed - is a critical step for drowsiness detection. In this paper, we propose a simple algorithm for pupil center and iris boundary localization and a new algorithm for eye state analysis, which we incorporate into a four-step system for drowsiness detection: face detection, eye detection, eye state analysis, and drowsy decision. This new system requires no training data at any step and no special cameras. Our eye detection algorithm uses Eye Map, thus achieving excellent pupil center and iris boundary localization results on the IMM database. Our novel eye state analysis algorithm detects eye state using the saturation (S) channel of the HSV color space. We analyze our eye state analysis algorithm using five video sequences and show superior results compared to the common technique based on the distance between eyelids."} {"_id": "e48385aa4bea1d8454364c23a9f7f18b6565dc6c", "title": "Indoors forensic entomology: colonization of human remains in closed environments by specific species of sarcosaprophagous flies.", "text": "Fly species that are commonly recovered on human corpses concealed in houses or other dwellings are often dependent on human-created environments and might have special features in their biology that allow them to colonize indoor cadavers. In this study we describe nine typical cases involving forensically relevant flies on human remains found indoors in southern Finland. Eggs, larvae and puparia were reared to the adult stage and determined to species. Of the five species found, the most common were Lucilia sericata Meigen, Calliphora vicina Robineau-Desvoidy and Protophormia terraenovae Robineau-Desvoidy. The flesh fly Sarcophaga caerulescens Zetterstedt is reported for the first time to colonize human cadavers inside houses, and a COI gene sequence-based DNA barcode is provided for it to help facilitate identification in the future. Fly biology, colonization speed and the significance of indoor forensic entomological evidence are discussed."} {"_id": "672eb3ae0edb85a592b98b0456a07616a2c73670", "title": "Affordances in Grounded Language Learning", "text": "We present a novel methodology involving mappings between different modes of semantic representations. We propose distributional semantic models as a mechanism for representing the kind of world knowledge inherent in the system of abstract symbols characteristic of a sophisticated community of language users. Then, motivated by insight from ecological psychology, we describe a model approximating affordances, by which we mean a language learner\u2019s direct perception of opportunities for action in an environment.
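One way to picture the cross-modal mapping this abstract describes is as a regularized linear map between the two vector spaces. The sketch below is a toy version under invented assumptions (random "embedding" and "affordance" vectors, ridge regression as the mapper); it is not the paper's model, only an illustration of learning a mapping between representational modalities.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    emb_dim, aff_dim, n_words = 50, 5, 200

    E = rng.normal(size=(n_words, emb_dim))            # distributional vectors
    W_true = rng.normal(size=(emb_dim, aff_dim))       # hidden "ground truth" map
    # Affordance vectors (e.g. graspable, sittable, ...) with a little noise.
    A = E @ W_true + 0.1 * rng.normal(size=(n_words, aff_dim))

    mapper = Ridge(alpha=1.0).fit(E[:150], A[:150])    # learn embedding -> affordance
    pred = mapper.predict(E[150:])
    print("held-out MSE:", float(np.mean((pred - A[150:]) ** 2)))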
We present a preliminary experiment involving mapping between these two representational modalities, and propose that our methodology can become the basis for a cognitively inspired model of grounded language learning."} {"_id": "2f072c1e958c64fbd87af763e5f6ed14747bcbf7", "title": "LSTM-Based ECG Classification for Continuous Monitoring on Personal Wearable Devices", "text": "A novel ECG classification algorithm is proposed for continuous cardiac monitoring on wearable devices with limited processing capacity. The proposed solution employs a novel architecture consisting of wavelet transform and multiple LSTM recurrent neural networks (Fig. 1). Experimental evaluations show superior ECG classification performance compared to previous works. Measurements on different hardware platforms show the proposed algorithm meets timing requirements for continuous and real-time execution on wearable devices. In contrast to many compute-intensive deep-learning based approaches, the proposed algorithm is lightweight, and therefore, brings continuous monitoring with accurate LSTM-based ECG classification to wearable devices. The source code is available online [1]."} {"_id": "846b945e8af5340a803708a2971268216decaddf", "title": "Data Series Management: Fulfilling the Need for Big Sequence Analytics", "text": "Massive data sequence collections exist in virtually every scientific and social domain, and have to be analyzed to extract useful knowledge. However, no existing data management solution (such as relational databases, column stores, array databases, and time series management systems) can offer native support for sequences and the corresponding operators necessary for complex analytics. We argue for the need to study the theory and foundations for sequence management of big data sequences, and to build corresponding systems that will enable scalable management and analysis of very large sequence collections. To this effect, we need to develop novel techniques to efficiently support a wide range of sequence queries and mining operations, while leveraging modern hardware. The overall goal is to allow analysts across domains to tap into the goldmine of the massive and ever-growing sequence collections they (already) have."} {"_id": "91e9387c92b7c9d295c1188719d30dd179cc81e8", "title": "DocChat: An Information Retrieval Approach for Chatbot Engines Using Unstructured Documents", "text": "Most current chatbot engines are designed to reply to user utterances based on existing utterance-response (or Q-R) pairs. In this paper, we present DocChat, a novel information retrieval approach for chatbot engines that can leverage unstructured documents, instead of Q-R pairs, to respond to utterances. A learning to rank model with features designed at different levels of granularity is proposed to measure the relevance between utterances and responses directly. We evaluate our proposed approach in both English and Chinese: (i) For English, we evaluate DocChat on WikiQA and QASent, two answer sentence selection tasks, and compare it with state-of-the-art methods. Reasonable improvements and good adaptability are observed.
(ii) For Chinese, we compare DocChat with XiaoIce, a famous chitchat engine in China, and side-by-side evaluation shows that DocChat is a perfect complement for chatbot engines using Q-R pairs as their main source of responses."} {"_id": "141ca542912b6ecac0babd9b5e17169db0ded49b", "title": "An integrated cloud-based platform for labor monitoring and data analysis in precision agriculture", "text": "Harvest labor has become a prevailing cost in the cherry and other specialty crops industries. We developed an integrated solution that provides real-time labor monitoring, payroll accrual, and labor-data-based analysis. At the core of our solution is a cloud-based information system that collects labor data from purposely designed labor monitoring devices, and visualizes real-time labor productivity data through a mobile-friendly user interface. Our solution uses a proprietary process [1] to accurately associate labor data with its related worker and position under a many-to-many employment relation. We also describe our communication API and protocol, which are specifically designed to improve the reliability of data communication within an orchard. Besides its immediate benefits in improving the efficiency and accuracy of labor monitoring, our solution also enables data analysis and visualization based on harvest labor data. As an example, we discuss our approach of yield mapping based on harvest labor data. We implemented the platform and deployed the system on a cloud-based computing platform for better scalability. An early version of the system has been tested during the 2012 harvest season in cherry orchards in the U.S. Pacific Northwest Region."} {"_id": "9f54261f4179e710427d56d41e944f9a45ec581c", "title": "PALSAR Radiometric and Geometric Calibration", "text": "This paper summarizes the results obtained from geometric and radiometric calibrations of the Phased-Array L-Band Synthetic Aperture Radar (PALSAR) on the Advanced Land Observing Satellite, which has been in space for three years. All of the imaging modes of the PALSAR, i.e., single, dual, and full polarimetric strip modes and scanning synthetic aperture radar (SCANSAR), were calibrated and validated using a total of 572 calibration points collected worldwide and distributed targets selected primarily from the Amazon forest. Through raw-data characterization, antenna-pattern estimation using the distributed target data, and polarimetric calibration using the Faraday rotation-free area in the Amazon, we performed the PALSAR radiometric and geometric calibrations and confirmed that the geometric accuracy of the strip mode is 9.7-m root mean square (rms), the geometric accuracy of SCANSAR is 70 m, and the radiometric accuracy is 0.76 dB from a corner-reflector analysis and 0.22 dB from the Amazon data analysis (standard deviation). Polarimetric calibration was successful, resulting in a VV/HH amplitude balance of 1.013 (0.0561 dB) with a standard deviation of 0.062 and a phase balance of 0.612deg with a standard deviation of 2.66deg."} {"_id": "d6b3be85cae20761285d347bafac7b2c2b3b176a", "title": "Cascaded seven level inverter with reduced number of switches using level shifting PWM technique", "text": "A multilevel inverter is a power electronic device used for high voltage and high power applications, and it has many advantages such as low switching stress and low total harmonic distortion (THD). Hence, the size and bulkiness of passive filters can be reduced.
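The level-shifted multicarrier comparison that this abstract goes on to describe can be sketched numerically: for an m-level output, m-1 stacked triangular carriers are compared against one sinusoidal reference, and the output level is set by how many carriers the reference exceeds. The sketch below is illustrative only; the 50 Hz fundamental and ideal switching are assumptions, with the 1 kHz carriers and 0.8 modulation index borrowed from the abstract.

    import numpy as np

    levels = 7
    n_car = levels - 1                       # six stacked carriers for 7 levels
    f_ref, f_car, fs = 50.0, 1000.0, 1e6     # 50 Hz fundamental is an assumption
    t = np.arange(0, 1 / f_ref, 1 / fs)      # one fundamental period
    ref = 0.8 * np.sin(2 * np.pi * f_ref * t)    # modulation index 0.8

    # Unit triangle wave, then shifted copies tiling the [-1, 1] range.
    tri = 2 * np.abs((t * f_car) % 1 - 0.5)
    band = 2.0 / n_car
    carriers = [-1 + i * band + band * tri for i in range(n_car)]

    # Output level = how many carriers the reference currently exceeds.
    count = np.sum([ref > c for c in carriers], axis=0)
    out = -1 + count * band                  # normalized 7-level staircase
    print("distinct output levels:", np.unique(out).size)   # -> 7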
This paper proposes two new topologies of a 7-level cascaded multilevel inverter with a reduced number of switches compared to the conventional type, which has 12 switches. The topologies consist of circuits with 9 switches and 7 switches for the same 7-level output. Therefore, with fewer switches, there is a reduction in gate drive circuitry, and only a few switches conduct during specific intervals of time. The SPWM technique is implemented using multicarrier wave signals. Level-shifted triangular waves are compared with a sinusoidal reference to generate the sine PWM switching sequence. The number of level-shifted triangular waves depends on the number of levels in the output, i.e., for n levels, n-1 carrier waves. This paper uses 1 kHz SPWM pulses with a modulation index of 0.8. The circuits are simulated using the SPWM technique and the harmonic spectrum is analyzed. A comparison is made between the topologies with 9 switches and 7 switches, and an effective reduction in THD has been observed for the circuits with fewer switches. The THD for 9 switches is 14% and the THD for 7 switches is 12.5%. The circuits are modeled and simulated with the help of MATLAB/SIMULINK."} {"_id": "49deb46f1b045ee3b2f85d62015b9c42eb393e76", "title": "Gaze estimation method based on an aspherical model of the cornea: surface of revolution about the optical axis of the eye", "text": "A novel gaze estimation method based on a novel aspherical model of the cornea is proposed in this paper. The model is a surface of revolution about the optical axis of the eye. The calculation method is explained on the basis of the model. A prototype system for estimating the point of gaze (POG) has been developed using this method. The proposed method has been found to be more accurate than the gaze estimation method based on a spherical model of the cornea."} {"_id": "446fb5f82d5f9320370798f82a5aad0f8255c1ff", "title": "Evaluation of Multi-view 3D Reconstruction Software", "text": "A number of software solutions for reconstructing 3D models from multi-view image sets have been released in recent years. Based on an unordered collection of photographs, most of these solutions extract 3D models using structure-from-motion (SFM) algorithms. In this work, we compare the resulting 3D models qualitatively and quantitatively. To achieve these objectives, we have developed different methods of comparison for all software solutions. We discuss the performance and existing drawbacks. Particular attention is paid to the ability to create printable 3D models or 3D models usable for other applications."} {"_id": "fe48ab7bc677d59094d4414d538a96bc4f99176b", "title": "Beautiful Math, Part 2: Aesthetic Patterns Based on Fractal Tilings", "text": "A fractal tiling (f-tiling) is a tiling whose boundary is fractal. This article presents two rare families of f-tilings, each containing infinitely many tilings. Each f-tiling is constructed by reducing tiles by a fixed scaling factor, using a single prototile, which is a segment of a regular polygon. The authors designed invariant mappings to automatically produce appealing seamless, colored patterns from such tilings."} {"_id": "9edf6f5e799ca6fa0c2d5510d96cbf2848005cb9", "title": "The art of deep learning ( applied to NLP )", "text": "In many regards, tuning deep-learning networks is still more an art than it is a technique. Choosing the correct hyper-parameters and getting a complex network to learn properly can be daunting to people not well versed in that art.
Taking the example of entailment classification, this work analyzes different methods and tools that can be used to diagnose learning problems. It aims at providing a deep understanding of some key hyper-parameters, sketching out their effects, and explaining how to identify recurring problems when tuning them."} {"_id": "f457cdc4212fdf5979aa3d05498170eb610ea144", "title": "Auditing and Assessment of Data Traffic Flows in an IoT Architecture", "text": "Recent advances in the development of the Internet of Things and ICT have completely changed the way citizens interact with Smart City environments, increasing the demand for more services and infrastructures in many different contexts. Furthermore, citizens require to be active users in a flexible smart living lab, with the possibility to access Smart City data, analyze them, perform actions and receive notifications based on automated decision-making processes. Critical problems could arise if the continuity of data flows and communication among connected IoT devices and data-driven applications is interrupted or lost, due to device or system malfunctions or unexpected behavior. The proposed solution is a set of instruments aimed at collecting and storing IoT and Smart City data in real time (data shadow), as well as auditing data traffic flows in an IoT Smart City architecture, with the purpose of quantitatively monitoring the status and detecting potential anomalies and malfunctions at the level of a single IoT device and/or service. These instruments are the DevDash and AMMA tools, designed and realized within the Snap4City framework. Specific use cases have been provided to highlight the capabilities of these instruments in terms of data indexing, monitoring and analysis."} {"_id": "07f1bf314056d39c24e995dab1e0a44a5cae3df0", "title": "Probability Estimates for Multi-class Classification by Pairwise Coupling", "text": "Pairwise coupling is a popular multi-class classification method that combines together all pairwise comparisons for each pair of classes. This paper presents two approaches for obtaining class probabilities. Both methods can be reduced to linear systems and are easy to implement. We show conceptually and experimentally that the proposed approaches are more stable than two existing popular methods: voting and the method of [3]."} {"_id": "0acf1a74e6ed8c323192d2b0424849820fe88715", "title": "Support Vector Machine Active Learning with Applications to Text Classification", "text": "Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space.
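The query strategy motivated here reduces, in its simplest form, to repeatedly labeling the pool instance closest to the current SVM hyperplane. A minimal sketch of that loop is shown below, using scikit-learn on a synthetic dataset; the dataset, seed-set size, and query budget are arbitrary choices, not the paper's experimental setup.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    # Seed with five labeled points per class; the rest form the unlabeled pool.
    labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    pool = [i for i in range(len(X)) if i not in labeled]

    for _ in range(40):                      # query budget (arbitrary)
        clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
        margins = np.abs(clf.decision_function(X[pool]))
        # Query the instance closest to the hyperplane: the point that
        # approximately halves the current version space.
        labeled.append(pool.pop(int(np.argmin(margins))))

    print("accuracy on remaining pool:", round(clf.score(X[pool], y[pool]), 3))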
We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings."} {"_id": "1a090df137014acab572aa5dc23449b270db64b4", "title": "LIBSVM: a library for support vector machines", "text": ""} {"_id": "9ae252d3b0821303f8d63ba9daf10030c9c97d37", "title": "A Bayesian hierarchical model for learning natural scene categories", "text": "We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a \"theme\". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes."} {"_id": "fa6cbc948677d29ecce76f1a49cea01a75686619", "title": "Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope", "text": "In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected close together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category."} {"_id": "2c688c40374fee862e0f0038696f2951f1927337", "title": "When two choices are not enough: Balancing at scale in Distributed Stream Processing", "text": "Carefully balancing load in distributed stream processing systems has a fundamental impact on execution latency and throughput. Load balancing is challenging because real-world workloads are skewed: some tuples in the stream are associated with keys which are significantly more frequent than others. Skew is remarkably more problematic in large deployments: having more workers implies fewer keys per worker, so it becomes harder to \u201caverage out\u201d the cost of hot keys with cold keys. We propose a novel load balancing technique that uses a heavy hitter algorithm to efficiently identify the hottest keys in the stream. These hot keys are assigned to d \u2265 2 choices to ensure a balanced load, where d is tuned automatically to minimize the memory and computation cost of operator replication. The technique works online and does not require the use of routing tables.
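The routing idea described here can be pictured with a small sketch: a streaming heavy-hitter summary flags hot keys, which are then spread over d = 2 hash choices (least-loaded wins), while cold keys keep a single deterministic choice. In the sketch below, a tiny Space-Saving summary stands in for the paper's heavy-hitter algorithm, and the worker count, summary capacity, and hotness threshold are invented toy values.

    import hashlib

    N_WORKERS, CAPACITY, D, HOT = 8, 10, 2, 5
    counters = {}                 # tiny Space-Saving summary: key -> est. count
    load = [0] * N_WORKERS

    def h(key, i):
        return int(hashlib.md5(f"{i}:{key}".encode()).hexdigest(), 16) % N_WORKERS

    def observe(key):             # Space-Saving update
        if key in counters:
            counters[key] += 1
        elif len(counters) < CAPACITY:
            counters[key] = 1
        else:
            victim = min(counters, key=counters.get)
            counters[key] = counters.pop(victim) + 1

    def route(key):
        observe(key)
        if counters.get(key, 0) > HOT:                 # detected heavy hitter
            w = min((h(key, i) for i in range(D)), key=lambda c: load[c])
        else:                                          # cold key: single choice
            w = h(key, 0)
        load[w] += 1
        return w

    for i in range(1000):         # half the tuples hit two hot keys
        route("hot%d" % (i % 2) if i % 4 < 2 else "cold%d" % i)
    print("per-worker load:", load)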
Our extensive evaluation shows that our technique can balance real-world workloads on large deployments, and improve throughput and latency by 150% and 60% respectively over the previous state-of-the-art when deployed on Apache Storm."} {"_id": "2fc32ad5f46e866dd24774e55ea07a9ba1a85df6", "title": "Smart Configuration of Smart Environments", "text": "One of the central research challenges in the Internet of Things and Ubiquitous Computing domains is how users can be enabled to \u201cprogram\u201d their personal and industrial smart environments by combining services that are provided by devices around them. We present a service composition system that enables the goal-driven configuration of smart environments for end users by combining semantic metadata and reasoning with a visual modeling tool. In contrast to process-driven approaches where service mashups are statically defined, we make use of embedded semantic API descriptions to dynamically create mashups that fulfill the user's goal. The main advantage of our system is its high degree of flexibility, as service mashups can adapt to dynamic environments and are fault-tolerant with respect to individual services becoming unavailable. To support users in expressing their goals, we integrated a visual programming tool with our system that allows users to model the desired state of a smart environment graphically, thereby hiding the technicalities of the underlying semantics. Possible applications of the presented system include the management of smart homes to increase individual well-being, and reconfigurations of smart environments, for instance in the industrial automation or healthcare domains."} {"_id": "6697bd267ccf363bc1b8ab7cb971b880495ff3f1", "title": "Smart Locks: Lessons for Securing Commodity Internet of Things Devices", "text": "We examine the security of home smart locks: cyber-physical devices that replace traditional door locks with deadbolts that can be electronically controlled by mobile devices or the lock manufacturer's remote servers. We present two categories of attacks against smart locks and analyze the security of five commercially-available locks with respect to these attacks. Our security analysis reveals that flaws in the design, implementation, and interaction models of existing locks can be exploited by several classes of adversaries, allowing them to learn private information about users and gain unauthorized home access. To guide future development of smart locks and similar Internet of Things devices, we propose several defenses that mitigate the attacks we present. One of these defenses is a novel approach to securely and usably communicate a user's intended actions to smart locks, which we prototype and evaluate. Ultimately, our work takes a first step towards illuminating security challenges in the system design and novel functionality introduced by emerging IoT systems."} {"_id": "f0be59cebe9c67353f5e84fe3abdca4cc360f03b", "title": "A hybrid machine learning approach to automatic plant phenotyping for smart agriculture", "text": "Recently, a new ICT approach to agriculture called \u201cSmart Agriculture\u201d has received great attention to support farmers' decision-making for good final yield under various kinds of field conditions. For this purpose, this paper presents two image sensing methods that enable automatic observation to capture flowers and seedpods of soybeans in real fields.
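As context for the coarse-to-fine flower detection pipeline described just below (SLIC superpixel candidates gated by hue, then a CNN accept/reject step), here is a rough illustrative sketch. The synthetic image, hue thresholds, and the stubbed-out CNN verifier are all assumptions; a real system would train the CNN on labeled crops.

    import numpy as np
    from skimage.color import rgb2hsv
    from skimage.segmentation import slic

    def cnn_verify(patch):
        # Placeholder for the trained CNN accept/reject step; any real
        # implementation would classify the crop here.
        return patch.mean() > 0.5

    img = np.random.rand(128, 128, 3)            # stand-in for a field photo
    segments = slic(img, n_segments=100, start_label=0)
    hue = rgb2hsv(img)[..., 0]

    candidates = []
    for label in np.unique(segments):
        mask = segments == label
        if 0.10 < hue[mask].mean() < 0.25:       # hue gate (assumed thresholds)
            ys, xs = np.nonzero(mask)
            crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            if cnn_verify(crop):                 # fine stage on the coarse candidate
                candidates.append((float(ys.mean()), float(xs.mean())))
    print(len(candidates), "flower candidates")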
The developed image sensing methods are considered as sensors in an agricultural cyber-physical system in which big data on the growth status of agricultural plants and environmental information (e.g., weather, temperature, humidity, solar radiation, soil condition, etc.) are analyzed to mine useful rules for appropriate cultivation. The proposed image sensing methods are constructed by combining several image processing and machine learning techniques. The flower detection is realized based on a coarse-to-fine approach in which candidate areas of flowers are first detected using SLIC and hue information, and the acceptance of flowers is decided by a CNN. In the seedpod detection, candidates of seedpod regions are first detected by the Viola-Jones object detection method, and we also use a CNN to make a final decision on the acceptance of detected seedpods. The performance of the proposed image sensing methods is evaluated on a data set of soybean images that were taken from a crowd of soybeans in real agricultural fields in Hokkaido, Japan."} {"_id": "a4de5334f527fed37578da04461bbb0b077b6005", "title": "Low temperature Cu-Cu direct bonding for 3D-IC by using fine crystal layer", "text": "In this paper, we report a method of low temperature solid diffusion bonding. To investigate the bondability of solid diffusion, we examined the effects of bump metals and bump planarization methods. Cu and Au bumps were used as bump metals, and CMP and ultra-precision cutting were used as bump planarization methods. We found that a fine crystal layer could be formed only on cut Cu and Au bumps, and especially cut Cu bumps had a thick fine crystal layer on the surface. The layer on cut Cu bumps was found to recrystallize easily at a low temperature condition of 150 degree C. Moreover, the bonding interface of cut Cu bumps disappeared at 200 degree C for 30 min, which means solid diffusion across the interface was realized with the contribution of the fine crystal layer. In addition, for Cu-Cu direct bonding, formic acid treatment before bonding is effective because formic acid can react at low temperature without destroying the fine crystal layer. This led to high bonding strength between cut Cu bumps."} {"_id": "3520ba596291829fbc749fa44f573a176a57f554", "title": "Hybrid excited claw pole electric machine", "text": "The paper presents the concept and results of simulation and experimental research on a claw pole generator with hybrid excitation. Hybrid excitation is performed with a conventional coil located between two parts of the claw-shaped rotor and additional permanent magnets placed on the claw poles. Within the research, first a simulation model and then an experimental model were developed on the basis of a mass-produced vehicle alternator. Experimental research has shown that - at a suitable rotational speed - it is possible to self-excite the generator without any additional source of electrical power."} {"_id": "9a25afa23b40f57229b9642b519fe67a6019ef41", "title": "A 92GHz bandwidth SiGe BiCMOS HBT TIA with less than 6dB noise figure", "text": "A low-noise, broadband amplifier with resistive degeneration and transimpedance feedback is reported with 200 mVpp input linearity and less than 6 dB noise figure up to 88 GHz. The measured gain of 13 dB, noise figure, linearity, and group delay variation of \u00b11.5 ps are in excellent agreement with simulation.
Eye diagram measurements were conducted up to 120 Gb/s, and a dynamic range larger than 36 dB was obtained at 40 Gb/s. The chip, which includes a 50\u03a9 output buffer, occupies 0.138 mm2 and consumes 21 mA from a 2.3V supply."} {"_id": "a05e84f77e1dacaa1c59ba0d92919bdcfe4debbb", "title": "Video Question Answering via Hierarchical Spatio-Temporal Attention Networks", "text": "Open-ended video question answering is a challenging problem in visual information retrieval, which automatically generates the natural language answer from the referenced video content according to the question. However, the existing visual question answering works only focus on the static image, which may be ineffectively applied to video question answering due to the lack of modeling the temporal dynamics of video contents. In this paper, we consider the problem of open-ended video question answering from the viewpoint of a spatio-temporal attentional encoder-decoder learning framework. We propose the hierarchical spatio-temporal attention network for learning the joint representation of the dynamic video contents according to the given question. We then develop the spatio-temporal attentional encoder-decoder learning method with a multi-step reasoning process for open-ended video question answering. We construct a large-scale video question answering dataset. The extensive experiments show the effectiveness of our method."} {"_id": "c6916fc5b6798c91c1c7f486d95cc94bc0a61afa", "title": "Persistence Diagrams with Linear Machine Learning Models", "text": "Persistence diagrams have been widely recognized as a compact descriptor for characterizing multiscale topological features in data. When many datasets are available, statistical features embedded in those persistence diagrams can be extracted by applying machine learning. In particular, the ability to explicitly analyze the inverse in the original data space from those statistical features of persistence diagrams is significantly important for practical applications. In this paper, we propose a unified method for the inverse analysis by combining linear machine learning models with persistence images. The method is applied to point clouds and cubical sets, showing the ability of the statistical inverse analysis and its advantages."} {"_id": "5e9bc2f884254f4f2256f6943a46d84ac065108d", "title": "The neural correlates of trustworthiness evaluations of faces and brands: Implications for behavioral and consumer neuroscience.", "text": "When we buy a product of a brand, we trust the brand to provide good quality and reliability.
Therefore, trust plays a major role in consumer behavior. It is unclear, however, how trust in brands is processed in the brain and whether it is processed differently from interpersonal trust. In this study, we used fMRI to investigate the neural correlates of interpersonal and brand trust by comparing the brain activation patterns during explicit trustworthiness judgments of faces and brands. Our results showed that while there were several brain areas known to be linked to trustworthiness evaluations, such as the amygdalae, more active in trustworthiness judgments when compared to a control task (familiarity judgment) for faces, no such difference was found for brands. Complementary ROI analysis revealed that the activation of both amygdalae was strongest for faces in the trustworthiness judgments. The direct comparison of the brain activation patterns during the trustworthiness evaluations between faces and brands in this analysis showed that trustworthiness judgments of faces activated the orbitofrontal cortex, another region that was previously linked to interpersonal trust, more strongly than trustworthiness judgments of brands. Further, trustworthiness ratings of faces, but not brands, correlated with activation in the orbitofrontal cortex. Our results indicate that the amygdalae, as well as the orbitofrontal cortex, play a prominent role in interpersonal trust (faces), but not in trust for brands. It is possible that this difference is due to brands being processed as cultural objects rather than as having human-like personality characteristics."} {"_id": "682c27724a404dfd41e2728813c66a5ee2b20dea", "title": "SeVR+: Secure and privacy-aware cloud-assisted video reporting service for 5G vehicular networks", "text": "Recently, Eiza et al. proposed a secure and privacy-aware scheme for video reporting service in 5G enabled Vehicular Ad hoc Networks (VANET). They employ a heterogeneous network and a cloud platform to obtain higher availability and lower latency for an urgent accident video reporting service. Their study addressed the security issues of 5G enabled vehicular networks for the first time. Eiza et al. claimed that their scheme guarantees user privacy, confidentiality, non-repudiation, message integrity and availability for participant vehicles. In this paper, we show that Eiza et al.'s scheme is vulnerable to replay, message fabrication and DoS attacks. Given the sensitivity of video reporting services in VANET, we then propose an efficient protocol to overcome the security weaknesses of Eiza et al.'s scheme and show that the proposed protocol resists commonplace attacks in VANET with acceptable communication and computation overhead."} {"_id": "fd4d6ee3aed9f5e3f63b1bce29e8706c3c5917dd", "title": "`Putting the Face to the Voice' Matching Identity across Modality", "text": "Speech perception provides compelling examples of a strong link between auditory and visual modalities. This link originates in the mechanics of speech production, which, in shaping the vocal tract, determine the movement of the face as well as the sound of the voice. In this paper, we present evidence that equivalent information about identity is available cross-modally from both the face and voice. Using a delayed matching to sample task, XAB, we show that people can match the video of an unfamiliar face, X, to an unfamiliar voice, A or B, and vice versa, but only when stimuli are moving and are played forward.
The critical role of time-varying information is underlined by the ability to match faces to voices containing only the coarse spatial and temporal information provided by sine wave speech [5]. The effect of varying sentence content across modalities was small, showing that identity-specific information is not closely tied to particular utterances. We conclude that the physical constraints linking faces to voices result in bimodally available dynamic information, not only about what is being said, but also about who is saying it."} {"_id": "5b2727716de710dddb5bdca184912ff0e269e2ae", "title": "Control and path planning of a walk-assist robot using differential flatness", "text": "With the growth of the elderly population in our society, technology will play an important role in providing functional mobility to humans. In this paper, we propose a robot walking helper with both passive and active control modes of guidance. From the perspective of human safety, the passive mode adopts a braking control law on the wheels to differentially steer the vehicle. The active mode can guide the user efficiently when the passive control with user-applied force is not adequate for guidance. The theory of differential flatness is used to plan the trajectory of control gains within the proposed scheme of the controller. Since the user input force is not known a priori, the theory of model predictive control is used to periodically compute the trajectory of these control gains. The simulation results show that the walking assist robot, along with the structure of this proposed control scheme, can guide the user to a goal effectively."} {"_id": "f7457a69b32cf761508ae51016204bab473f2035", "title": "Electrically Large Zero-Phase-Shift Line Grid-Array UHF Near-Field RFID Reader Antenna", "text": "A grid-array antenna using a zero-phase-shift (ZPS) line is proposed to enlarge the interrogation zone of a reader antenna for a near-field ultra-high-frequency (UHF) radio frequency identification (RFID) system. The proposed grid-array antenna is composed of a number of grid cells and a double-sided parallel-strip line feeding network. Each grid cell, namely a segmented loop constructed from the ZPS line, has a uniform, single-direction current flowing along the line. By configuring the cells to share a common side with their adjacent grid cells, which carry reverse-direction flowing current, a grid-array antenna is formed to generate a strong and uniform magnetic-field distribution over a large interrogation zone, even when the perimeter of the interrogation zone reaches up to 3\u03bb (where \u03bb is the operating wavelength in free space) or larger. As an example, a grid-array antenna with 1 \u00d7 2 segmented ZPS line loop cells implemented onto a piece of FR4 printed circuit board (PCB) is designed and prototyped.
The results show that the grid-array antenna achieves impedance matching over the frequency range from 790 to 1040 MHz and produces a strong and uniform magnetic-field distribution over an interrogation zone of 308 mm \u00d7 150 mm."} {"_id": "1e477aa7eb007c493fa92b8450a7f85eb14ccf0c", "title": "101companies: A Community Project on Software Technologies and Software Languages", "text": "101companies is a community project in computer science (or software science) with the objective of developing a free, structured, wiki-accessible knowledge resource including an open-source repository for different stakeholders with interests in software technologies, software languages, and technological spaces; notably: teachers and learners in software engineering or software languages as well as software developers, software technologists, and ontologists. The present paper introduces the 101companies Project. In fact, the present paper is effectively a call for contributions to the project and a call for applications of the project in research and education."} {"_id": "ee4093f87de9cc612cc83fdc13bd67bfdd366fbe", "title": "An approach to Korean license plate recognition based on vertical edge matching", "text": "License plate recognition (LPR) has many applications in traffic monitoring systems. In this paper, a vertical-edge-matching-based algorithm to recognize Korean license plates from input gray-scale images is proposed. The algorithm is able to recognize license plates in normal shape, as well as plates that are out of shape due to the angle of view. The proposed algorithm is fast enough that the recognition unit of an LPR system can be implemented in software alone, reducing the cost of the system."} {"_id": "c02c1a345461f015a683ee8b8d6082649567429f", "title": "Electromagnetic simulation of 3D stacked ICs: Full model vs. S-parameter cascaded based model", "text": "Three-dimensional electromagnetic simulation models are often simplified and/or segmented in order to reduce the simulation time and memory requirements without sacrificing the accuracy of the results. This paper investigates the difference between the full model and the S-parameter cascaded based model of 3D stacked ICs with the presence of Through Silicon Vias. It is found that simulation of the full model is required for accurate results; however, a divide-and-conquer (segmentation) approach can be used for preliminary post-layout analysis. Modeling guidelines are discussed and details on the proper choice of ports, boundary conditions, and solver technology are highlighted. A de-embedding methodology is finally explored to improve the accuracy of the cascaded/segmented results."} {"_id": "1ac52b7d8db223029388551b2db25657ed8c9852", "title": "Solving a Huge Number of Similar Tasks: A Combination of Multi-Task Learning and a Hierarchical Bayesian Approach", "text": "In this paper, we propose a machine-learning solution to problems consisting of many similar prediction tasks. Each of the individual tasks has a high risk of overfitting. We combine two types of knowledge transfer between tasks to reduce this risk: multi-task learning and hierarchical Bayesian modeling. Multi-task learning is based on the assumption that there exist features typical to the task at hand. To find these features, we train a huge two-layered neural network. Each task has its own output, but shares the weights from the input to the hidden units with all other tasks. 
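The shared input-to-hidden architecture described in the multi-task learning abstract above can be illustrated with a minimal PyTorch sketch; the layer sizes, task count, and activation here are hypothetical choices for illustration, not the authors' configuration:

```python
import torch
import torch.nn as nn

class SharedTrunkMultiTask(nn.Module):
    """Two-layer network: all tasks share the input-to-hidden weights,
    while each task keeps its own linear output head (hypothetical sizes)."""
    def __init__(self, n_inputs=50, n_hidden=8, n_tasks=100):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(n_hidden, 1) for _ in range(n_tasks))

    def forward(self, x, task_id):
        # The hidden units act as features reused across all tasks.
        return self.heads[task_id](self.shared(x))

model = SharedTrunkMultiTask()
x = torch.randn(4, 50)           # a mini-batch of inputs for one task
y_hat = model(x, task_id=3)      # per-task prediction from shared features
```

Training on batches drawn from all tasks jointly is what forces the shared hidden layer to learn features common to the task family.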
In this way a relatively large set of possible explanatory variables (the network inputs) is reduced to a smaller and easier-to-handle set of features (the hidden units). Given this set of features and after an appropriate scale transformation, we assume that the tasks are exchangeable. This assumption allows for a hierarchical Bayesian analysis in which the hyperparameters can be estimated from the data. Effectively, these hyperparameters act as regularizers and prevent overfitting. We describe how to make the system robust against nonstationarities in the time series and give directions for further improvement. We illustrate our ideas on a database regarding the prediction of newspaper sales."} {"_id": "3a80c307f2f0782214120f600a81f3cde941b3c3", "title": "True Random Number Generator Embedded in Reconfigurable Hardware", "text": "This paper presents a new True Random Number Generator (TRNG) based on an analog Phase-Locked Loop (PLL) implemented in a digital Altera Field Programmable Logic Device (FPLD). Starting with an analysis of the only available on-chip source of randomness, the PLL-synthesized low-jitter clock signal, a new simple and reliable method of true randomness extraction is proposed. Basic assumptions about the statistical properties of the jitter signal are confirmed by testing the mean value of the TRNG output signal. The quality of generated true random numbers is confirmed by passing standard NIST statistical tests. The described TRNG is tailored for embedded System-On-a-Programmable-Chip (SOPC) cryptographic applications and can provide a good quality true random bit-stream with throughput of several tens of kilobits per second. The possibility of including the proposed TRNG into a SOPC design significantly increases the system security of embedded cryptographic hardware."} {"_id": "2b0dcde4dba0dcad345af5285553b0b9fdf35f78", "title": "Reliability-aware design to suppress aging", "text": "Due to aging, circuit reliability has become extraordinarily challenging. Reliability-aware circuit design flows virtually do not exist, and even research is in its infancy. In this paper, we propose to bring aging awareness to EDA tool flows based on so-called degradation-aware cell libraries. These libraries include detailed delay information of gates/cells under the impact that aging has on both threshold voltage (Vth) and carrier mobility (\u03bc) of transistors. This is unlike the state of the art, which considers Vth only. We show how ignoring \u03bc degradation leads to underestimating guard-bands by 19% on average. Our investigation revealed that the impact of aging is strongly dependent on the operating conditions of gates (i.e. input signal slew and output load capacitance), and not solely on the duty cycle of transistors. Neglecting this fact results in employing insufficient guard-bands and thus not sustaining reliability during lifetime.\n We demonstrate that degradation-aware libraries and tool flows are indispensable for not only accurately estimating guardbands, but also efficiently containing them. By considering aging degradations during logic synthesis, significantly more resilient circuits can be obtained. We further quantify the impact of aging on the degradation of image processing circuits. This goes far beyond investigating aging with respect to path delays solely. We show that in a standard design without any guardbanding, aging leads to unacceptable image quality after just one year. 
By contrast, if the synthesis tool is provided with the degradation-aware cell library, high image quality is sustained for 10 years (even under worst-case aging and without a guard-band). Hence, using our approach, aging can be effectively suppressed."} {"_id": "205b0711be40875a7ea9a6ac21c8d434b7830943", "title": "Hexahedral Meshing With Varying Element Sizes", "text": "Hexahedral (or Hex-) meshes are preferred in a number of scientific and engineering simulations and analyses due to their desirable numerical properties. Recent state-of-the-art techniques can generate high quality hex-meshes. However, they typically produce hex-meshes with uniform element sizes and thus may fail to preserve small scale features on the boundary surface. In this work, we present a new framework that enables users to generate hex-meshes with varying element sizes so that small features will be filled with smaller and denser elements, while the transition from smaller elements to larger ones is smooth, compared to the octree-based approach. This is achieved by first detecting regions of interest (ROI) of small scale features. These ROIs are then magnified using the as-rigid-as-possible (ARAP) deformation with either an automatically determined or a user-specified scale factor. A hex-mesh is then generated from the deformed mesh using existing approaches that produce hex-meshes with uniform-sized elements. This initial hex-mesh is then mapped back to the original volume before magnification to adjust the element sizes in those ROIs. We have applied this framework to a variety of man-made and natural models to demonstrate its effectiveness."} {"_id": "889e50a73eb7307b35aa828a1a4392c7a08c1c01", "title": "An Efficient Participant\u2019s Selection Algorithm for Crowdsensing", "text": "With the advancement of mobile technology, the use of smartphones has increased greatly. Mobile phones have become a necessity of everyday life. Today, smart devices continuously flood the Internet with data of every form, which gives rise to mobile crowdsensing (MCS). One of the key challenges in a mobile crowdsensing system is how to effectively identify and select well-suited participants for recruitment from a large user pool. This research work presents the concept of crowdsensing along with an efficient process for selecting and recruiting well-suited participants from a large user pool. The proposed selection algorithm recruits participants from the large user pool based on their availability status. Finally, graphical results are presented showing the suitable locations of the participants and their time slots. Keywords\u2014Mobile crowdsensing (MCS); Mobile Sensing Platform (MSP); crowd sensing; participant; user pool; crowdsourcing"} {"_id": "4c5b0aed439c050d95e026a91ebade76793f39c0", "title": "Active-feedback frequency-compensation technique for low-power multistage amplifiers", "text": "An active-feedback frequency-compensation (AFFC) technique for low-power operational amplifiers is presented in this paper. With an active-feedback mechanism, a high-speed block separates the low-frequency high-gain path and high-frequency signal path such that high gain and wide bandwidth can be achieved simultaneously in the AFFC amplifier. 
The gain stage in the active-feedback network also reduces the size of the compensation capacitors such that the overall chip area of the amplifier becomes smaller and the slew rate is improved. Furthermore, the presence of a left-half-plane zero in the proposed AFFC topology improves the stability and settling behavior of the amplifier. Three-stage amplifiers based on AFFC and nested-Miller compensation (NMC) techniques have been implemented in a commercial 0.8-\u03bcm CMOS process. When driving a 120-pF capacitive load, the AFFC amplifier achieves over 100-dB dc gain, a 4.5-MHz gain-bandwidth product (GBW), a 65\u00b0 phase margin, and a 1.5-V/\u03bcs average slew rate, while dissipating only 400 \u03bcW of power at a 2-V supply. Compared to a three-stage NMC amplifier, the proposed AFFC amplifier improves both the GBW and slew rate by 11 times and reduces the chip area by 2.3 times without a significant increase in power consumption."} {"_id": "681a52fa5356334acc7d43ca6e3d4a78e82f12e2", "title": "Effective Question Answering Techniques and their Evaluation Metrics", "text": "Question Answering (QA) is a focused way of information retrieval. A question answering system tries to retrieve accurate answers to questions posed in natural language, given a set of documents. Basically, a question answering system has three elements, i.e., question classification, information retrieval (IR), and answer extraction. These elements play a major role in Question Answering. In question classification, questions are classified depending upon the type of their expected answer entity. The information retrieval component determines success by retrieving relevant answers to the different questions posed to the system. Answer extraction is a growing topic in QA, in which ranking and validating candidate answers is the major job. This paper offers a concise discussion regarding different Question Answering types. In addition, we describe different evaluation metrics used to evaluate the performance of different question answering systems. We also discuss the recent question answering systems developed and their corresponding techniques."} {"_id": "47e035c0fd01fa0418060ba14612d15bd9a01845", "title": "A 500mA analog-assisted digital-LDO-based on-chip distributed power delivery grid with cooperative regulation and IR-drop reduction in 65nm CMOS", "text": "With the die area of modern processors growing larger and larger, the IR drop across the power supply rail due to its parasitic resistance becomes considerable. There is an urgent demand for local power regulation to reduce the IR drop and to enhance transient response. A distributed power-delivery grid (DPDG) is an attractive solution for large area power-supply applications. The dual-loop distributed micro-regulator in [1] achieves a tight regulation and fast response, but suffers from large ripple and high power consumption due to the comparator-based regulator. Digital low-dropout regulators (DLDOs) [2] can be used as local micro-regulators to implement a DPDG, due to their low-voltage operation and process scalability. Adaptive control [3], asynchronous 3D pipeline control [4], analog-assisted tri-loop control [5], and event-driven PI control [6] are proposed to enhance the transient response speed. However, digital LDOs suffer from the intrinsic limitations of large output ripple and narrow current range. 
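The question-answering abstract above names question classification as the first of its three components; a toy rule-based classifier sketch makes the idea concrete (the cue-to-type rules are illustrative assumptions, not any deployed system's rules):

```python
# Toy question classifier in the spirit of the QA pipeline described above:
# map a question to its expected answer entity type with simple cue rules
# (real systems learn this mapping; these rules are purely illustrative).
RULES = [("who", "PERSON"), ("where", "LOCATION"),
         ("when", "DATE"), ("how many", "NUMBER")]

def classify(question):
    q = question.lower()
    for cue, answer_type in RULES:
        if q.startswith(cue) or f" {cue} " in q:
            return answer_type
    return "DESCRIPTION"          # fallback for definition-style questions

print(classify("Who proposed latent Dirichlet allocation?"))  # PERSON
print(classify("How many taps does the filter have?"))         # NUMBER
```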
This paper presents an on-chip DPDG with cooperative regulation based on an analog-assisted digital LDO (AADLDO), which inherits the merits of low output ripple and sub-LSB current supply ability from the analog control, and the advantages of low supply voltage operation and adaptive fast response from the digital control."} {"_id": "1e56ed3d2c855f848ffd91baa90f661772a279e1", "title": "Latent Dirichlet Allocation", "text": "We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification."} {"_id": "27ae23bb8d284a1fa8c8ab24e23a72e1836ff5cc", "title": "Similarity estimation techniques from rounding algorithms", "text": "A locality sensitive hashing scheme is a distribution on a family F of hash functions operating on a collection of objects, such that for two objects x, y, Pr_{h \u2208 F}[h(x) = h(y)] = sim(x,y), where sim(x,y) \u2208 [0,1] is some similarity function defined on the collection of objects. Such a scheme leads to a compact representation of objects so that similarity of objects can be estimated from their compact sketches, and also leads to efficient algorithms for approximate nearest neighbor search and clustering. Min-wise independent permutations provide an elegant construction of such a locality sensitive hashing scheme for a collection of subsets with the set similarity measure sim(A,B) = |A \u2229 B| / |A \u222a B|. We show that rounding algorithms for LPs and SDPs used in the context of approximation algorithms can be viewed as locality sensitive hashing schemes for several interesting collections of objects. Based on this insight, we construct new locality sensitive hashing schemes for:
  1. A collection of vectors with the distance between vectors u and v measured by \u03b8(u, v)/\u03c0, where \u03b8(u, v) is the angle between u and v. This yields a sketching scheme for estimating the cosine similarity measure between two vectors, as well as a simple alternative to min-wise independent permutations for estimating set similarity.
  2. A collection of distributions on n points in a metric space, with distance between distributions measured by the Earth Mover Distance (EMD), a popular distance measure in graphics and vision. Our hash functions map distributions to points in the metric space such that, for distributions P and Q, EMD(P,Q) \u2264 E_{h \u2208 F}[d(h(P), h(Q))] \u2264 O(log n log log n) \u00b7 EMD(P,Q).
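The angle-based scheme of item 1 is commonly realized by hashing with random hyperplanes; a minimal NumPy sketch (illustrative, not the paper's own construction) in which the fraction of agreeing sign bits estimates 1 - theta(u, v)/pi:

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash(v, planes):
    """Sign pattern of v against random hyperplanes (one bit per plane)."""
    return (planes @ v) >= 0

d, n_bits = 64, 256
planes = rng.standard_normal((n_bits, d))   # each row is a random hyperplane
u, v = rng.standard_normal(d), rng.standard_normal(d)

# Fraction of agreeing bits estimates 1 - theta(u, v) / pi.
agree = np.mean(simhash(u, planes) == simhash(v, planes))
theta = np.arccos(np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))
print(agree, 1 - theta / np.pi)   # the two values should be close
```

More hyperplanes shrink the variance of the estimate at the cost of a longer sketch.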
."} {"_id": "2b8a80b18cc7a4461c6e532c2f3de7e570d4fcd6", "title": "A probabilistic approach to spatiotemporal theme pattern mining on weblogs", "text": "Mining subtopics from weblogs and analyzing their spatiotemporal patterns have applications in multiple domains. In this paper, we define the novel problem of mining spatiotemporal theme patterns from weblogs and propose a novel probabilistic approach to model the subtopic themes and spatiotemporal theme patterns simultaneously. The proposed model discovers spatiotemporal theme patterns by (1) extracting common themes from weblogs; (2) generating theme life cycles for each given location; and (3) generating theme snapshots for each given time period. Evolution of patterns can be discovered by comparative analysis of theme life cycles and theme snapshots. Experiments on three different data sets show that the proposed approach can discover interesting spatiotemporal theme patterns effectively. The proposed probabilistic model is general and can be used for spatiotemporal text mining on any domain with time and location information."} {"_id": "682988d3cc614c122745d0e87ad9df0f44c3e432", "title": "Automatic labeling of multinomial topic models", "text": "Multinomial distributions over words are frequently used to model topics in text collections. A common, major challenge in applying all such topic models to any text mining problem is to label a multinomial topic model accurately so that a user can interpret the discovered topic. So far, such labels have been generated manually in a subjective way. In this paper, we propose probabilistic approaches to automatically labeling multinomial topic models in an objective way. We cast this labeling problem as an optimization problem involving minimizing Kullback-Leibler divergence between word distributions and maximizing mutual information between a label and a topic model. Experiments with user study have been done on two text data sets with different genres.The results show that the proposed labeling methods are quite effective to generate labels that are meaningful and useful for interpreting the discovered topic models. Our methods are general and can be applied to labeling topics learned through all kinds of topic models such as PLSA, LDA, and their variations."} {"_id": "99f231f29d8bbd410bb3edc096a502b1ebda8526", "title": "Automatic labeling hierarchical topics", "text": "Recently, statistical topic modeling has been widely applied in text mining and knowledge management due to its powerful ability. A topic, as a probability distribution over words, is usually difficult to be understood. A common, major challenge in applying such topic models to other knowledge management problem is to accurately interpret the meaning of each topic. Topic labeling, as a major interpreting method, has attracted significant attention recently. However, previous works simply treat topics individually without considering the hierarchical relation among topics, and less attention has been paid to creating a good hierarchical topic descriptors for a hierarchy of topics. In this paper, we propose two effective algorithms that automatically assign concise labels to each topic in a hierarchy by exploiting sibling and parent-child relations among topics. 
The experimental results show that the inter-topic relation is effective in boosting topic labeling accuracy and the proposed algorithms can generate meaningful topic labels that are useful for interpreting the hierarchical topics."} {"_id": "f6be1786de10bf484decb16762fa66819930965e", "title": "The Costs and Benefits of Pair Programming", "text": "Pair or collaborative programming is where two programmers develop software side by side at one computer. Using interviews and controlled experiments, the authors investigated the costs and benefits of pair programming. They found that for a development-time cost of about 15%, pair programming improves design quality, reduces defects, reduces staffing risk, enhances technical skills, improves team communications and is considered more enjoyable at statistically significant levels."} {"_id": "85a1b811113414aced9c9fe18b02dfbd24cb41af", "title": "Comparing White-Box and Black-Box Test Prioritization", "text": "Although white-box regression test prioritization has been well-studied, the more recently introduced black-box prioritization approaches have neither been compared against each other nor against more well-established white-box techniques. We present a comprehensive experimental comparison of several test prioritization techniques, including well-established white-box strategies and more recently introduced black-box approaches. We found that Combinatorial Interaction Testing and diversity-based techniques (Input Model Diversity and Input Test Set Diameter) perform best among the black-box approaches. Perhaps surprisingly, we found little difference between black-box and white-box performance (at most 4% fault detection rate difference). We also found the overlap between black- and white-box faults to be high: the first 10% of the prioritized test suites already agree on at least 60% of the faults found. These are positive findings for practicing regression testers who may not have source code available, thereby making white-box techniques inapplicable. We also found evidence that both black-box and white-box prioritization remain robust over multiple system releases."} {"_id": "56832b51f1e8c7cd4b45c3591f650c90e0d554fd", "title": "4G- A NEW ERA IN WIRELESS TELECOMMUNICATION", "text": "4G \u2013 \u201cconnect anytime, anywhere, anyhow\u201d \u2013 promising ubiquitous high-speed network access to end users, has been a topic of great interest especially for the wireless telecom industry. 4G seems to be the solution for the growing user requirements of wireless broadband access and the limitations of the existing wireless communication system. The purpose of this paper is to provide an overview of the different aspects of 4G, which includes its features, its proposed architecture and key technological enablers. It also elaborates on the roadblocks in its implementation. A special consideration has been given to the security concerns of 4G by discussing a security threat analysis model proposed by the International Telecommunication Union (ITU). By applying this model, a detailed analysis of threats to 4G and the corresponding measures to counter them can be performed."} {"_id": "38936f2c404ec886d214f2e529e2d090a7cd6d95", "title": "Collocated photo sharing, story-telling, and the performance of self", "text": "This article reports empirical findings from four inter-related studies, with an emphasis on collocated sharing. Collocated sharing remains important, using both traditional and emerging image-related technologies. 
Co-present viewing is a dynamic, improvisational construction of a contingent, situated interaction between story-teller and audience. The concept of performance, as articulated differently by Erving Goffman and Judith Butler, is useful for understanding the enduring importance of co-present sharing of photos and the importance of oral narratives around images in enacting identity and relationships. Finally, we suggest some implications for both HCI research and the design of image-related technologies."} {"_id": "39517e37380c64d3c6fe8f9f396500c8e254bfdf", "title": "DoS and DDoS in Named Data Networking", "text": "With the growing realization that current Internet protocols are reaching the limits of their senescence, several on-going research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denial-of-Service (DoS) attacks that plague today's Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) - a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN's resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking."} {"_id": "d0e9b588bcff6971ecafbe2aedbb622a62619a8a", "title": "Introducing Serendipity in a Content-Based Recommender System", "text": "Today, recommenders are commonly used for various purposes, especially dealing with e-commerce and information filtering tools. Content-based recommenders rely on the concept of similarity between the bought/searched/visited item and all the items stored in a repository. It is a common belief that the user is interested in what is similar to what she has already bought/searched/visited. We believe that there are some contexts in which this assumption is wrong: it is the case of acquiring unsearched but still useful items or pieces of information. This is called serendipity. Our purpose is to stimulate users and facilitate serendipitous encounters. This paper presents the design and implementation of a hybrid recommender system that joins a content-based approach and serendipitous heuristics in order to mitigate the over-specialization problem with surprising suggestions."} {"_id": "0c9ba3329d6ec82ae581cde268614abd0313fdeb", "title": "Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System", "text": "Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. 
This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and empirical evaluation of NJFun, an experimental spoken dialogue system that provides users with access to information about fun things to do in New Jersey. Our results show that by optimizing its performance via reinforcement learning, NJFun measurably improves system performance."} {"_id": "556a1aefa5122461e97141a49963e76fe15c25bd", "title": "ARROW: GenerAting SignatuRes to Detect DRive-By DOWnloads", "text": "A drive-by download attack occurs when a user visits a webpage which attempts to automatically download malware without the user's consent. Attackers sometimes use a malware distribution network (MDN) to manage a large number of malicious webpages, exploits, and malware executables. In this paper, we provide a new method to determine these MDNs from the secondary URLs and redirect chains recorded by a high-interaction client honeypot. In addition, we propose a novel drive-by download detection method. Instead of depending on the malicious content used by previous methods, our algorithm first identifies and then leverages the URLs of the MDN's central servers, where a central server is a common server shared by a large percentage of the drive-by download attacks in the same MDN. A set of regular expression-based signatures are then generated based on the URLs of each central server. This method allows additional malicious webpages to be identified which launched but failed to execute a successful drive-by download attack. The new drive-by detection system named ARROW has been implemented, and we provide a large-scale evaluation on the output of a production drive-by detection system. The experimental results demonstrate the effectiveness of our method, where the detection coverage has been boosted by 96% with an extremely low false positive rate."} {"_id": "8c38382f681598e77a2e539473ac9d91b2d7a2d3", "title": "A theorem on polygon cutting with applications", "text": "Let P be a simple polygon with N vertices, each being assigned a weight \u2208 {0,1}, and let C, the weight of P, be the added weight of all vertices. We prove that it is possible, in O(N) time, to find two vertices a, b in P, such that the segment ab lies entirely inside the polygon P and partitions it into two polygons, each with a weight not exceeding 2C/3. This computation assumes that all the vertices have been sorted along some axis, which can be done in O(N log N) time. We use this result to derive a number of efficient divide-and-conquer algorithms for: 1. Triangulating an N-gon in O(N log N) time. 2. Decomposing an N-gon into (few) convex pieces in O(N log N) time. 3. Given an O(N log N) preprocessing, computing the shortest distance between two arbitrary points inside an N-gon (i.e., the internal distance), in O(N) time. 4. Computing the longest internal path in an N-gon in O(N^2) time. In all cases, the algorithms achieve significant improvements over previously known methods, either by displaying better performance or by gaining in simplicity. 
In particular, the best previously known algorithms for Problems 2, 3, and 4 ran in O(N^2), O(N^2), and O(N^4) time, respectively."} {"_id": "e2c318d32e58a1cdc073080dd189a88eab6c07b3", "title": "A meta-analysis of continuing medical education effectiveness.", "text": "INTRODUCTION\nWe undertook a meta-analysis of the Continuing Medical Education (CME) outcome literature to examine the effect of moderator variables on physician knowledge, performance, and patient outcomes.\n\n\nMETHODS\nA literature search of MEDLINE and ERIC was conducted for randomized controlled trials and experimental design studies of CME outcomes in which physicians were a major group. CME moderator variables included the types of intervention, the types and number of participants, time, and the number of intervention sessions held over time.\n\n\nRESULTS\nThirty-one studies met the eligibility criteria, generating 61 interventions. The overall sample-size weighted effect size for all 61 interventions was r = 0.28 (0.18). The analysis of CME moderator variables showed that active and mixed methods had medium effect sizes (r = 0.33 [0.33], r = 0.33 [0.26], respectively), and passive methods had a small effect size (r = 0.20 [0.16], confidence interval 0.15, 0.26). There was a positive correlation between the effect size and the length of the interventions (r = 0.33) and between multiple interventions over time (r = 0.36). There was a negative correlation between the effect size and programs that involved multiple disciplines (r = -0.18) and the number of participants (r = -0.13). The correlation between the effect size and the length of time for outcome assessment was negative (r = -0.31).\n\n\nDISCUSSION\nThe meta-analysis suggests that the effect size of CME on physician knowledge is a medium one; however, the effect size is small for physician performance and patient outcome. The examination of moderator variables shows there is a larger effect size when the interventions are interactive, use multiple methods, and are designed for a small group of physicians from a single discipline."} {"_id": "205201b657a894eee015be9f1b4269508fca83e8", "title": "Prediction of Yelp Review Star Rating using Sentiment Analysis", "text": "Yelp aims to help people find great local businesses, e.g. restaurants. Automated software is currently used to recommend the most helpful and reliable reviews for the Yelp community, based on various measures of quality, reliability, and activity. However, this is not tailored to each customer. Our goal in this project is to apply machine learning to predict a customer\u2019s star rating of a restaurant based on his/her reviews, as well as other customers\u2019 reviews/ratings, to recommend other restaurants to the customer, as shown in Figure 1."} {"_id": "a41df0c6d0dea24fef561177bf19450b8413bb21", "title": "An efficient algorithm for the \"optimal\" stable marriage", "text": "In an instance of size n of the stable marriage problem, each of n men and n women ranks the members of the opposite sex in order of preference. A stable matching is a complete matching of men and women such that no man and woman who are not partners both prefer each other to their actual partners under the matching. It is well known [2] that at least one stable matching exists for every stable marriage instance. However, the classical Gale-Shapley algorithm produces a marriage that greatly favors the men at the expense of the women, or vice versa. 
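For reference, the classical Gale-Shapley proposal algorithm mentioned in the stable-marriage abstract can be sketched in a few lines of Python (man-proposing deferred acceptance, which yields the man-optimal matching; the egalitarian optimization the abstract goes on to describe needs the paper's further machinery):

```python
def gale_shapley(men_prefs, women_prefs):
    """Man-proposing deferred acceptance; returns the man-optimal
    stable matching as a dict mapping woman -> man."""
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index into each man's list
    engaged = {}                              # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # displaced man becomes free
            engaged[w] = m
        else:
            free.append(m)                    # w rejects m; he proposes again
    return engaged

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))               # here a-x and b-y are matched
```

The loop makes at most n^2 proposals in total, which is where the algorithm's O(n^2) running time comes from.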
The problem arises of finding a stable matching that is optimal under some more equitable or egalitarian criterion of optimality. This problem was posed by Knuth [6] and has remained unsolved for some time. Here, the objective of maximizing the average (or, equivalently, the total) \u201csatisfaction\u201d of all people is used. This objective is achieved when a person's satisfaction is measured by the position of his/her partner in his/her preference list. By exploiting the structure of the set of all stable matchings, and using graph-theoretic methods, an O(n^4) algorithm for this problem is derived."} {"_id": "e990a41e8f09e0ef4695c39af351bf25f333eefa", "title": "Primary, Secondary, and Meta-Analysis of Research", "text": ""} {"_id": "024e394f00b43e7f44f8d0c641c584b6d7edf7d6", "title": "Efficient AUV navigation fusing acoustic ranging and side-scan sonar", "text": "This paper presents an on-line nonlinear least squares algorithm for multi-sensor autonomous underwater vehicle (AUV) navigation. The approach integrates the global constraints of range to and GPS position of a surface vehicle or buoy communicated via acoustic modems and relative pose constraints arising from targets detected in side-scan sonar images. The approach utilizes an efficient optimization algorithm, iSAM, which allows for consistent on-line estimation of the entire set of trajectory constraints. The optimized trajectory can then be used to more accurately navigate the AUV, to extend mission duration, and to avoid GPS surfacing. As iSAM provides efficient access to the marginal covariances of previously observed features, automatic data association is greatly simplified \u2014 particularly in sparse marine environments. A key feature of our approach is its intended scalability to a single surface sensor (a vehicle or buoy) broadcasting its GPS position and simultaneous one-way travel time range (OWTT) to multiple AUVs. We discuss why our approach is scalable as well as robust to modem transmission failure. Results are provided for an ocean experiment using a Hydroid REMUS 100 AUV co-operating with one of two craft: an autonomous surface vehicle (ASV) and a manned support vessel. During these experiments the ranging portion of the algorithm ran online on-board the AUV. Extension of the paradigm to multiple missions via the optimization of successive survey missions (and the resultant sonar mosaics) is also demonstrated."} {"_id": "32a0b53aac8c6e2fcb2a7e4c492b195efe2dc965", "title": "Understanding the perceived logic of care by vaccine-hesitant and vaccine-refusing parents: A qualitative study in Australia", "text": "In terms of public health, childhood vaccination programs have benefits that far outweigh risks. However, some parents decide not to vaccinate their children. This paper explores the ways in which such parents talked about the perceived risks and benefits incurred by vaccinating (or not vaccinating) their children. Between 2013 and 2016, we undertook 29 in-depth interviews with non-vaccinating and/or 'vaccine hesitant' parents in Australia. Interviews were conducted in an open and non-judgmental manner, akin to empathic neutrality. Interviews focused on parents talking about the factors that shaped their decisions not to (or partially) vaccinate their children. All interviews were transcribed and analysed using both inductive and deductive processes. The main themes focus on parental perceptions of: 1. their capacity to reason; 2. their rejection of Western medical epistemology; and 3. 
their participation in labour-intensive parenting practices (which we term salutogenic parenting). Parents engaged in an ongoing search for information about how best to parent their children (capacity to reason), which for many led to questioning/distrust of traditional scientific knowledge (rejection of Western medical epistemology). Salutogenic parenting spontaneously arose in interviews, whereby parents practised health promoting activities which they saw as boosting the natural immunity of their children and protecting them from illness (reducing or negating the perceived need for vaccinations). Salutogenic parenting practices included breastfeeding, eating organic and/or home-grown food, cooking from scratch to reduce preservative consumption and reducing exposure to toxins. We interpret our data as a 'logic of care', which is seen by parents as internally consistent, logically inter-related and inter-dependent. Whilst not necessarily sharing the parents' reasoning, we argue that an understanding of their attitudes towards health and well-being is imperative for any efforts to engage with their vaccine refusal at a policy level."} {"_id": "68d05c858844b9e4c2a798e35fb49d4e7d446c82", "title": "Computational Thinking in Practice : How STEM Professionals Use CT in Their Work", "text": "The goal of this study is to bring current computational thinking in STEM educational efforts in line with the increasingly computational nature of STEM research practices. We conducted interviews with STEM practitioners in various fields to understand the nature of CT as it happens in authentic research settings and to revisit a first iteration of our definition of CT in the form of a taxonomy. This exploration gives us insight into how scientists use computers in their work and helps us identify what practices are important to include in high school STEM learning contexts. Our findings will inform the design of classroom activities to better prepare today\u2019s students for the modern STEM landscape that awaits them."} {"_id": "92594ee5041fd15ad817257b304f2a12b28f5ab8", "title": "Wavelet filter evaluation for image compression", "text": "Choice of filter bank in wavelet compression is a critical issue that affects image quality as well as system design. Although regularity is sometimes used in filter evaluation, its success at predicting compression performance is only partial. A more reliable evaluation can be obtained by considering an L-level synthesis/analysis system as a single-input, single-output, linear shift-variant system with a response that varies according to the input location modulo (2^L, 2^L). By characterizing a filter bank according to its impulse response and step response in addition to regularity, we obtain reliable and relevant (for image coding) filter evaluation metrics. Using this approach, we have evaluated all possible reasonably short (less than 36 taps in the synthesis/analysis pair) minimum-order biorthogonal wavelet filter banks. Of this group of over 4300 candidate filter banks, we have selected and present here the filters best suited to image compression. 
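The impulse/step-response criteria described in the wavelet abstract above can be explored with off-the-shelf tools; a small sketch using PyWavelets to inspect a standard biorthogonal filter bank (bior4.4, related to the CDF 9/7 pair common in image coding) rather than any of the paper's own candidates:

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.signal import lfilter

# A standard biorthogonal filter bank, used here only as an example.
w = pywt.Wavelet('bior4.4')
print(len(w.dec_lo), len(w.rec_lo))   # analysis/synthesis lowpass tap counts

# Step response of the synthesis lowpass filter: overshoot and ringing
# here tend to show up as visible artifacts near edges in compressed images.
step = np.ones(64)
resp = lfilter(w.rec_lo, [1.0], step)
overshoot = resp.max() - np.sum(w.rec_lo)   # excess above the DC (final) value
print(f"overshoot: {overshoot:.4f}")
```

Comparing such responses across candidate filter banks mirrors, in spirit, the evaluation metrics the abstract describes.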
While some of these filters have been published previously, others are new and have properties that make them attractive in system design."} {"_id": "aa474ae1be631696b790ffb19468ce9615899631", "title": "Multimodal recognition of personality traits in social interactions", "text": "This paper targets the automatic detection of personality traits in a meeting environment by means of audio and visual features; information about the relational context is captured by means of acoustic features designed to that purpose. Two personality traits are considered: Extraversion (from the Big Five) and the Locus of Control. The classification task is applied to thin slices of behaviour, in the form of 1-minute sequences. SVMs were used to test the performances of several training and testing instance setups, including a restricted set of audio features obtained through feature selection. The outcomes improve considerably over existing results, provide evidence about the feasibility of the multimodal analysis of personality, the role of social context, and pave the way to further studies addressing different feature setups and/or targeting different personality traits."} {"_id": "f8a95360af226056a99ad1ea1328653bbe89b8b4", "title": "Design Criteria for Electric Vehicle Fast Charge Infrastructure Based on Flemish Mobility Behavior", "text": "This paper studies the technical design criteria for fast charge infrastructure, covering the mobility needs. The infrastructure supplements the residential and public slow charging infrastructure. Two models are designed. The first determines the charging demand, based on current mobility behavior in Flanders. The second model simulates a charge infrastructure that meets the resulting fast charge demand. The energy management is performed by a rule-based control algorithm that directs the power flows between the fast chargers, the energy storage system, the grid connection, and the photovoltaic installation. There is a clear trade-off between the size of the energy storage system and the power rating of the grid connection. Finally, the simulations indicate that 99.7% of the vehicles visiting the fast charge infrastructure can start charging within 10 minutes with a configuration limited to 5 charging spots, instead of 9 spots when drivers are not willing to wait."} {"_id": "41856dc1ca34c93967ec7c20c735bed83fc379f4", "title": "A Field Study of Run-Time Location Access Disclosures on Android Smartphones", "text": "Smartphone users are increasingly using apps that can access their location. Often these accesses occur without users\u2019 knowledge and consent. For example, recent research has shown that installation-time capability disclosures are ineffective in informing people about their apps\u2019 location access. In this paper, we present a four-week field study (N=22) on run-time location access disclosures. Towards this end, we implemented a novel method to disclose location accesses by location-enabled apps on participants\u2019 smartphones. In particular, the method did not need any changes to participants\u2019 phones beyond installing our study app. We randomly divided our participants into two groups: a Disclosure group (N=13), who received our disclosures, and a No Disclosure group (N=9), who received no disclosures from us. Our results confirm that the Android platform\u2019s location access disclosure method does not inform participants effectively. 
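The rule-based power-flow control in the fast-charge infrastructure abstract above can be illustrated with a single dispatch step; the priority order (PV first, then storage, then grid) and all numbers below are illustrative assumptions, not the paper's actual control law:

```python
def dispatch(demand_kw, pv_kw, soc_kwh, cap_kwh, grid_limit_kw, dt_h=0.25):
    """One step of a toy rule-based energy manager for a fast-charge station:
    serve the chargers from PV first, then the battery, then the grid
    (illustrative rules only; all thresholds are hypothetical)."""
    from_pv = min(demand_kw, pv_kw)
    residual = demand_kw - from_pv
    from_batt = min(residual, soc_kwh / dt_h)      # limited by stored energy
    residual -= from_batt
    from_grid = min(residual, grid_limit_kw)       # grid connection rating
    unserved = residual - from_grid                # > 0 -> vehicles must wait
    surplus = pv_kw - from_pv                      # spare PV recharges battery
    soc_kwh = min(cap_kwh, soc_kwh - from_batt * dt_h + surplus * dt_h)
    return from_pv, from_batt, from_grid, unserved, soc_kwh

print(dispatch(demand_kw=300, pv_kw=80, soc_kwh=100,
               cap_kwh=200, grid_limit_kw=150))
```

Running such a step over a year of simulated demand is what exposes the storage-size versus grid-rating trade-off the abstract reports.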
Almost all participants pointed out that their location was accessed by several apps they would not have expected to access their location. Further, several apps accessed their location more frequently than they expected. We conclude that our participants appreciated the transparency brought by our run-time disclosures and that because of the disclosures most of them had taken actions to manage their apps\u2019 location access."} {"_id": "bcb1029032d2ca45669f8f66b9ee94a7f0f23028", "title": "Morphological and LSU rDNA sequence variation within the Gonyaulax spinifera-Spiniferites group (Dinophyceae) and proposal of G-elongata comb. nov and G-membranacea comb. nov", "text": "Cultures were established from cysts of the cyst-based taxa Spiniferites elongatus and S. membranaceus. Motile cells and cysts from both cultures and sediment samples were examined using light and scanning electron microscopy. The cyst-theca relationship was established for S. elongatus. The motile cells have the tabulation pattern 2pr, 4', 6\", 6c, ~4s, 6\"', 1p, 1'''', but they remain unattributable to previously described Gonyaulax species. There was large variation in process length and process morphology in cysts from both cultures and wild samples and there was variation in ornamentation and in the development of spines and flanges in motile cells. A new combination, G. elongata (Reid) Ellegaard et al. comb. nov. is proposed, following new rules of the International Code of Botanical Nomenclature that give genera based on extant forms priority over genera based on fossil forms. Extreme morphological variation in the cyst and motile stages of S. membranaceus is described and this species is also transferred to the genus Gonyaulax, as G. membranacea (Rossignol) Ellegaard et al. comb. nov. Approximately 1500 bp of large subunit (LSU) rDNA were determined for these two species and for G. baltica, G. cf. spinifera ( = S. ramosus) and G. digitalis ( = Bitectatodinium tepikiense). LSU rDNA showed sequence divergences similar to those estimated between species in other genera within the Gonyaulacales; a phylogeny for the Gonyaulacales was established, including novel LSU rDNA sequences for Alexandrium margalefii, A. pseudogonyaulax and Pyrodinium bahamense var. compressum. Our results show that motile stages obtained from the germination of several cysts of the 'fossil-based' Spiniferites and B. tepikiense, which were previously attributed to 'Gonyaulax spinifera group undifferentiated', belong to distinct species of the genus Gonyaulax. These species show small morphological differences in the motile stage but relatively high sequence divergence. Moreover, this group of species is monophyletic, supported by bootstrap values of 100% in parsimony and maximum likelihood analyses."} {"_id": "a1700eb761f10df03d60b9d0b2e91b1977b0f0fb", "title": "BUSINESS-TO-BUSINESS E-BUSINESS MODELS : CLASSIFICATION AND TEXTILE INDUSTRY IMPLICATIONS", "text": "Since the introduction of the Internet and e-commerce in the mid-1990s, there has been a lot of hype surrounding e-business, the impact that it will have on the way that companies do business, and how it will change the global economy as a whole. Since the crash of the dotcom companies in 2001, there has been much less hype surrounding the use of the Internet for business. 
There seems to have been a realization that e-business may not be the answer to all of a company\u2019s problems, but can be a great asset in the struggle to increase efficiencies in daily business dealings, and that the Web is primarily a new way of relating to customers and suppliers. This paper categorizes and discusses the different types of business-to-business electronic business models currently being used by businesses and discussed in the academic literature, and shows how these business models are being implemented within the textile industry. This paper is divided into three parts. Part I gives an overview and some important definitions associated with business-to-business e-business, and discusses some characteristics that are unique to doing business on the Internet. Risks and benefits associated with doing business online are also discussed. Part II analyzes the different types of e-business models seen in the academic literature. Based on the analysis of the literature, a taxonomy of e-business models was developed. This new classification system organized e-business models into the following categories: sourcing models, ownership models, service-based models, customer relationship management models, supply chain models, interaction models, and revenue models. Part III reviews how these e-business models are currently being used within the textile industry. A preliminary analysis of 79 textile manufacturing companies was conducted to identify the applications of e-business."} {"_id": "2716c1a156c89dbde7281a7b0812ad81038e4722", "title": "Test Results and Torque Improvement of the 50-kW Switched Reluctance Motor Designed for Hybrid Electric Vehicles", "text": "A switched reluctance motor (SRM) has been developed as one of the possible candidates for rare-earth-free electric motors. A prototype machine has been built and tested. It has competitive dimensions, torque, power, and efficiency with respect to the 50-kW interior permanent magnet synchronous motor employed in hybrid electric vehicles (Toyota Prius 2003). A competitive 50-kW power rating and 95% efficiency are achieved. The prototype motor provided 85% of the target torque. Except for the maximum torque, most of the speed-torque region is covered by the test SRM. The cause of the discrepancy between the measured and calculated torque values is examined. An improved design is attempted, and a new experimental switched reluctance machine is designed and built for testing. The results are given in this paper."} {"_id": "b0064e4e4e6277d8489d607893b36d91f850083d", "title": "Matching Curves to Imprecise Point Sets using Fr\u00e9chet Distance", "text": "Let P be a polygonal curve in R of length n, and S be a point-set of size k. The Curve/Point Set Matching problem consists of finding a polygonal curve Q on S such that the Fr\u00e9chet distance from P is less than a given \u03b5. We consider eight variations of the problem based on the distance metric used and the omittability or repeatability of the points. We provide closure to a recent series of complexity results for the case where S consists of precise points. More importantly, we formulate a more realistic version of the problem that takes into account measurement errors. This new problem is posed as the matching of a given curve to a set of imprecise points. We prove that all three variations of the problem that are in P when S consists of precise points become NP-complete when S consists of imprecise points. 
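The Fréchet distance underlying the matching problem above is often approximated in practice by its discrete variant; a minimal sketch of the Eiter-Mannila dynamic program (a common stand-in for the continuous distance the paper uses, not the paper's own algorithm):

```python
from math import dist
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q
    (Eiter-Mannila dynamic program over vertex pairs)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # Advance along P, along Q, or along both; keep the cheapest option.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 1), (2, 0)]
Q = [(0, 1), (1, 2), (2, 1)]
print(discrete_frechet(P, Q))   # 1.0 for these toy curves
```

The table has |P|·|Q| entries, so the computation runs in O(nk) time for curves of n and k vertices.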
We also discuss approximation re-"} {"_id": "1f8116db538169de3553b1091e82107f7594301a", "title": "Common LISP: the language, 2nd Edition", "text": ""} {"_id": "38b1824bbab96f9ab2e1079ea2b60ae313e6cb7f", "title": "A Survey on Multicarrier Communications: Prototype Filters, Lattice Structures, and Implementation Aspects", "text": "Due to their numerous advantages, communications over multicarrier schemes constitute an appealing approach for broadband wireless systems. Especially, the strong penetration of orthogonal frequency division multiplexing (OFDM) into the communications standards has triggered heavy investigation on multicarrier systems, leading to re-consideration of different approaches as an alternative to OFDM. The goal of the present survey is not only to provide a unified review of waveform design options for multicarrier schemes, but also to pave the way for the evolution of the multicarrier schemes from the current state of the art to future technologies. In particular, a generalized framework on multicarrier schemes is presented, based on what to transmit, i.e., symbols, how to transmit, i.e., filters, and where/when to transmit, i.e., lattice. Capitalizing on this framework, different variations of orthogonal, bi-orthogonal, and non-orthogonal multicarrier schemes are discussed. In addition, filter designs for various multicarrier systems are reviewed considering four different design perspectives: energy concentration, rapid decay, spectrum nulling, and channel/hardware characteristics. Subsequently, evaluation tools which may be used to compare different filters in multicarrier schemes are studied. Finally, multicarrier schemes are evaluated from the perspective of practical implementation aspects, such as lattice adaptation, equalization, synchronization, multiple antennas, and hardware impairments."} {"_id": "4d1efe967fde304dc31e19665b9c328969a79128", "title": "5GNOW: non-orthogonal, asynchronous waveforms for future mobile applications", "text": "This article provides some fundamental indications about wireless communications beyond LTE/LTE-A (5G), representing the key findings of the European research project 5GNOW. We start with identifying the drivers for making the transition to 5G networks. Just to name one, the advent of the Internet of Things and its integration with conventional human-initiated transmissions creates a need for a fundamental system redesign. Then we make clear that the strict paradigm of synchronism and orthogonality as applied in LTE prevents efficiency and scalability. We challenge this paradigm and propose new key PHY layer technology components such as a unified frame structure, multicarrier waveform design including a filtering functionality, sparse signal processing mechanisms, a robustness framework, and transmissions with very short latency. These components enable indeed an efficient and scalable air interface supporting the highly varying set of requirements originating from the 5G drivers."} {"_id": "de19bd1be876549cbccb79e545c43442a6223f7a", "title": "Sparse code multiple access", "text": "Multicarrier CDMA is a multiplexing approach in which modulated QAM symbols are spread over multiple OFDMA tones by using a generally complex spreading sequence. Effectively, a QAM symbol is repeated over multiple tones. Low density signature (LDS) is a version of CDMA with low density spreading sequence allowing us to take advantage of a near optimal ML receiver with practically feasible complexity. 
In this paper, we propose a new multiple access scheme, called sparse code multiple access (SCMA), which still enjoys low-complexity reception but with better performance compared to LDS. In SCMA, the procedures of bit-to-QAM-symbol mapping and spreading are combined, and incoming bits are directly mapped to a multidimensional codeword of an SCMA codebook set. Each layer or user has its dedicated codebook. Shaping gain of a multidimensional constellation is the main source of the performance improvement in comparison to the simple repetition of QAM symbols in LDS. In general, SCMA codebook design is an optimization problem. A systematic sub-optimal approach is proposed here for SCMA codebook design."} {"_id": "f1732fe25d335af43c0ff90c1b35f9b5a5c38dc5", "title": "SCMA Codebook Design", "text": "Multicarrier CDMA is a multiple access scheme in which modulated QAM symbols are spread over OFDMA tones by using a generally complex spreading sequence. Effectively, a QAM symbol is repeated over multiple tones. Low density signature (LDS) is a version of CDMA with low density spreading sequences allowing us to take advantage of a near optimal message passing algorithm (MPA) receiver with practically feasible complexity. Sparse code multiple access (SCMA) is a multi-dimensional codebook-based non-orthogonal spreading technique. In SCMA, the procedures of bit-to-QAM-symbol mapping and spreading are combined, and incoming bits are directly mapped to multi-dimensional codewords of SCMA codebook sets. Each layer has its dedicated codebook. Shaping gain of a multi-dimensional constellation is one of the main sources of the performance improvement in comparison to the simple repetition of QAM symbols in LDS. Meanwhile, like LDS, SCMA enjoys low-complexity reception techniques due to the sparsity of SCMA codewords. In this paper, a systematic approach is proposed to design SCMA codebooks mainly based on the design principles of lattice constellations. Simulation results are presented to show the performance gain of SCMA compared to LDS and OFDMA."} {"_id": "43a87c07f2cbfeab0801b9cf2ece5f7e0689d287", "title": "LDS-OFDM an Efficient Multiple Access Technique", "text": "In this paper, LDS-OFDM is introduced as an uplink multicarrier multiple access scheme. LDS-OFDM uses the low density signature (LDS) structure for spreading the symbols in the frequency domain. This technique benefits from frequency diversity besides its ability to support parallel data streams up to 400% more than the number of subcarriers (overloaded condition). The performance of LDS-OFDM is evaluated and compared with conventional OFDMA systems over a multipath fading channel. Monte Carlo based simulations for various loading conditions indicate significant performance improvement over the OFDMA system."} {"_id": "1b5280076b41fda48eb837cbb03cc1b283d4fa3c", "title": "Neural network controller based on PID using an extended Kalman filter algorithm for multi-variable non-linear control system", "text": "The Proportional Integral Derivative (PID) controller is widely used in industrial control applications, but it is only suitable for single input/single output (SISO) linear systems with known parameters. 
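For contrast with the neural controllers this abstract goes on to discuss, the textbook discrete-time PID law it builds on can be sketched directly; the gains and the first-order plant below are hypothetical, and integral windup is not handled:

```python
class PID:
    """Textbook discrete-time PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hypothetical first-order plant x' = -x + u, driven to setpoint 1.0.
pid, x, dt = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01), 0.0, 0.01
for _ in range(1000):
    u = pid.step(1.0, x)
    x += (-x + u) * dt            # forward-Euler integration of the plant
print(round(x, 3))                # settles near 1.0
```

The limitations noted in the abstract (fixed gains, SISO structure) are visible here: the three constants must be tuned for each plant, which is what learning-based PID variants try to automate.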
However, many researchers have proposed neural network controllers based on PID (NNPID) that apply to both single- and multi-variable control systems, but an NNPID controller that uses the conventional gradient-descent learning algorithm has many disadvantages, such as slow convergence, difficulty in setting initial values, and, especially, restrictions on the degree of system complexity. Therefore, this paper presents an improved recurrent neural network controller based on PID, including a controller structure improvement and a modified extended Kalman filter (EKF) learning algorithm for the weight update rule, called the ENNPID controller. We apply the proposed controller to dynamic systems, including an inverted pendulum and a DC motor system, in MATLAB simulation. Our experimental results show that the proposed controller outperforms other PID-like controllers in terms of convergence speed and fault tolerance, both of which are highly required."} {"_id": "08a5c749035e13a464eb5fb6cf4d4df63069330f", "title": "Can recursive bisection alone produce routable placements?", "text": "This work focuses on congestion-driven placement of standard cells into rows in the fixed-die context. We summarize the state-of-the-art after two decades of research in recursive bisection placement and implement a new placer, called Capo, to empirically study the achievable limits of the approach. From among recently proposed improvements to recursive bisection, Capo incorporates a leading-edge multilevel min-cut partitioner [7], techniques for partitioning with small tolerance [8], optimal min-cut partitioners and end-case min-wirelength placers [5], previously unpublished partitioning tolerance computations, and block splitting heuristics. On the other hand, our \u201cgood enough\u201d implementation does not use \u201coverlapping\u201d [17], multi-way partitioners [17, 20], analytical placement, or congestion estimation [24, 35]. In order to run on recent industrial placement instances, Capo must take into account fixed macros, power stripes and rows with different allowed cell orientations. Capo reads industry-standard LEF/DEF, as well as formats of the GSRC bookshelf for VLSI CAD algorithms [6], to enable comparisons on available placement instances in the fixed-die regime.\nCapo clearly demonstrates that despite a potential mismatch of objectives, improved min-cut bisection can still lead to improved placement wirelength and congestion. Our experiments on recent industrial benchmarks fail to give a clear answer to the question in the title of this paper. However, they validate a series of improvements to recursive bisection and point out a need for transparent congestion management techniques that do not worsen the wirelength of already routable placements. Our experimental flow, which validates fixed-die placement results by violation-free detailed auto-routability, provides a new norm for comparison of VLSI placement implementations."} {"_id": "098ee9f8d7c885e6ccc386435537513a6303667c", "title": "Factors Affecting Repurchase Intention of Smartphone : A Case Study of Huawei Smartphone in Yangon , Myanmar", "text": "This study investigates the relationship between functional value, price consciousness, word of mouth, brand image, attitude towards product and repurchase intention of a smartphone brand. To do so, a survey was conducted by distributing 420 questionnaires in 7 different shopping malls in Yangon, Myanmar.
The collected data were analyzed using SPSS, and the hypotheses were examined by employing Pearson correlation and multiple linear regression. The results show positive and significant relationships of functional value, word of mouth, and price consciousness with attitude towards the product, and show that brand image, attitude towards the product, and word of mouth influence repurchase intention. Based on these results, it seems that the smartphone company needs to develop a marketing strategy to increase repurchase intention."} {"_id": "91fa5860d3a15cdd9cf03bd6bbee8b0cc8a9f08a", "title": "Finding the face in the crowd: an anger superiority effect.", "text": "Facial gestures have been given an increasingly critical role in models of emotion. The biological significance of interindividual transmission of emotional signals is a pivotal assumption for placing the face in a central position in these models. This assumption invited a logical corollary, examined in this article: Face-processing should be highly efficient. Three experiments documented an asymmetry in the processing of emotionally discrepant faces embedded in crowds. The results suggested that threatening faces pop out of crowds, perhaps as a result of a preattentive, parallel search for signals of direct threat."} {"_id": "bc749f0e81eafe9e32d56336750782f45d82609d", "title": "Combination of Texture and Geometric Features for Age Estimation in Face Images", "text": "Automatic age estimation from facial images has recently received an increasing interest due to a variety of applications, such as surveillance, human-computer interaction, forensics, and recommendation systems. Despite such advances, age estimation remains an open problem due to several challenges associated with the aging process. In this work, we develop and analyze an automatic age estimation method from face images based on a combination of textural and geometric features. Experiments are conducted on the Adience dataset (Adience Benchmark, 2017; Eidinger et al., 2014), a large known benchmark used to evaluate both age and gender classification approaches."} {"_id": "507f4a0a5eaecd38566a3b1bf2a33554a2b2eed2", "title": "BIGT control optimisation for overall loss reduction", "text": "In this paper we present the latest results of utilizing MOS-control (MOSctrl) to optimize the performance of the Bi-mode Insulated Gate Transistor (BIGT) chip. The adoption of the BIGT technology enables higher output power per footprint. However, to enable the full performance benefit of the BIGT, the optimisation of the known standard MOS gate control is necessary. This optimisation is demonstrated over the whole current and temperature range for the BIGT diode turn-off and BIGT turn-on operation. It is shown that the optimum control can offer a performance increase of up to 20% for high voltage devices."} {"_id": "6138c3595abf95cb83cbe89de6b6620b7c1d5234", "title": "Evaluation of information technology investment: a data envelopment analysis approach", "text": "The increasing use of information technology (IT) has resulted in a need for evaluating the productivity impacts of IT. The contemporary IT evaluation approach has focused on return on investment and return on management. IT investment has impacts on different stages of business operations. For example, in the banking industry, IT plays a key role in effectively generating (i) funds from the customer in the form of deposits and then (ii) profits by using deposits as investment funds.
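As background for the multi-stage DEA model this abstract goes on to develop, the following is a minimal sketch of the standard single-stage, input-oriented CCR DEA model solved as a linear program; the bank-like input/output data and the use of scipy are illustrative assumptions, not taken from the paper.

    # Single-stage input-oriented CCR DEA via linear programming (sketch).
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 4.0, 5.0]])   # inputs  (e.g., staff):    1 x n_dmus
    Y = np.array([[1.0, 2.0, 2.5, 2.0]])   # outputs (e.g., deposits): 1 x n_dmus
    n = X.shape[1]

    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # minimize theta
        A_in = np.hstack([-X[:, [o]], X])           # sum_j lam_j x_j <= theta * x_o
        A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # sum_j lam_j y_j >= y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        print(f"DMU {o}: efficiency = {res.x[0]:.3f}")

An efficiency of 1.0 marks a unit on the efficient frontier; the paper's contribution is to extend this single-stage program to multiple stages linked by intermediate measures, which is what makes the model non-linear in general.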
Existing approaches based upon data envelopment analysis (DEA) only measure the IT efficiency or impact on one specific stage when a multi-stage business process is present. A detailed model is needed to characterize the impact of IT on each stage of the business operation. The current paper develops a DEA non-linear programming model to evaluate the impact of IT on multiple stages along with information on how to distribute the IT-related resources so that the efficiency is maximized. It is shown that this non-linear program can be treated as a parametric linear program. It is also shown that if there is only one intermediate measure, then the non-linear DEA model becomes a linear program. Our approach is illustrated with an example taken from previous studies."} {"_id": "3e89403dcd478c849732f872001064f21ff073f9", "title": "Comparing memory systems for chip multiprocessors", "text": "There are two basic models for the on-chip memory in CMP systems: hardware-managed coherent caches and software-managed streaming memory. This paper performs a direct comparison of the two models under the same set of assumptions about technology, area, and computational capabilities. The goal is to quantify how and when they differ in terms of performance, energy consumption, bandwidth requirements, and latency tolerance for general-purpose CMPs. We demonstrate that for data-parallel applications, the cache-based and streaming models perform and scale equally well. For certain applications with little data reuse, streaming scales better due to better bandwidth use and macroscopic software prefetching. However, the introduction of techniques such as hardware prefetching and non-allocating stores to the cache-based model eliminates the streaming advantage. Overall, our results indicate that there is not sufficient advantage in building streaming memory systems where all on-chip memory structures are explicitly managed. On the other hand, we show that streaming at the programming model level is particularly beneficial, even with the cache-based model, as it enhances locality and creates opportunities for bandwidth optimizations. Moreover, we observe that stream programming is actually easier with the cache-based model because the hardware guarantees correct, best-effort execution even when the programmer cannot fully regularize an application's code."} {"_id": "d28000811e1fbb0a8967395cd8ffba8ce1b53863", "title": "Positive and negative emotional verbal stimuli elicit activity in the left amygdala.", "text": "The human amygdala's involvement in negative emotion is well established, but relatively little is known regarding its role in positive emotion. Here we examined the neural response to emotionally positive, negative, and neutral words using fMRI. Relative to neutral words, positive and negative emotional words elicited greater activity in the left amygdala. Positive but not negative words elicited activity in dorsal and ventral striatal regions which have been linked in previous neuroimaging studies to reward and positive affect, including caudate, putamen, globus pallidus, and accumbens.
These findings provide the first direct evidence that the amygdala is involved in emotional reactions elicited by both positive and negative emotional words, and further indicate that positive words additionally activate brain regions related to reward."} {"_id": "5f4847990b6e68368cd2cac809748298ab241099", "title": "Broadband Radial Waveguide Spatial Combiner", "text": "A broadband eight-way spatial combiner using coaxial probes and radial waveguides has been proposed and designed. The simple electromagnetic modeling for the radial waveguide power divider/combiner has been developed using equivalent-circuit method. The measured 10-dB return loss and 1-dB insertion loss bandwidth of this waveguide spatial combiner are all demonstrated to be about 8 GHz."} {"_id": "13d0f69204bc4a78acc9b8fecb91f41ab1a76823", "title": "Privacy Aspects of Recommender Systems", "text": "The popularity of online recommender systems has soared; they are deployed in numerous websites and gather tremendous amounts of user data that are necessary for the recommendation purposes. This data, however, may pose a severe threat to user privacy, if accessed by untrusted parties or used inappropriately. Hence, it is of paramount importance for recommender system designers and service providers to find a sweet spot, which allows them to generate accurate recommendations and guarantee the privacy of their users. In this chapter we overview the state of the art in privacy enhanced recommendations. We analyze the risks to user privacy imposed by recommender systems, survey the existing solutions, and discuss the privacy implications for the users of the recommenders. We conclude that a considerable effort is still required to develop practical recommendation solutions that provide adequate privacy guarantees, while at the same time facilitating the delivery of high-quality recommendations to their users."} {"_id": "158d54518e694d0a7d7c0fe2ea474d873eaeb5a0", "title": "ArgumenText: Searching for Arguments in Heterogeneous Sources", "text": "Argument mining is a core technology for enabling argument search in large corpora. However, most current approaches fall short when applied to heterogeneous texts. In this paper, we present an argument retrieval system capable of retrieving sentential arguments for any given controversial topic. By analyzing the highest-ranked results extracted from Web sources, we found that our system covers 89% of arguments found in expert-curated lists of arguments from an online debate portal, and also identifies additional valid arguments."} {"_id": "50d90325c9f81e562fdb65097c0be4f889008d20", "title": "A qualitative study of Ragnar\u00f6k Online private servers: in-game sociological issues", "text": "In the last decade, online games have garnered much attention as more and more players gather on game servers. In parallel, communities of illegal private server players and administrators have spread and might host hundreds of thousands of players.
To study the Korean online game Ragnar\u00f6k Online, we conducted interviews to collect qualitative data on two private servers as well as on the official French server for the game. This paper discusses some of the reasons why Ragnar\u00f6k Online private servers might attract players and how examining private servers' characteristics could help improve official game servers."} {"_id": "19b82e0ee0d61bb538610d3a470798ce7d6096ee", "title": "A 28GHz SiGe BiCMOS phase invariant VGA", "text": "The paper describes a technique to design a phase invariant variable gain amplifier (VGA). Variable gain is achieved by varying the bias current in a BJT, while the phase variation is minimized by designing a local feedback network such that the applied base to emitter voltage has a bias-dependent phase variation which compensates the inherent phase variation of the transconductance. Two differential 28GHz VGA variants based on these principles achieve <5\u00b0 phase variation over 8dB and 18dB of gain control range, respectively, with phase invariance maintained over PVT. Implemented in GF 8HP BiCMOS technology, the VGAs achieve 18dB nominal gain, 4GHz bandwidth, and IP1dB > -13dBm while consuming 35mW."} {"_id": "d1adb86df742a9556e137020dca0e505442f337a", "title": "Bacteriocins: developing innate immunity for food.", "text": "Bacteriocins are bacterially produced antimicrobial peptides with narrow or broad host ranges. Many bacteriocins are produced by food-grade lactic acid bacteria, a phenomenon which offers food scientists the possibility of directing or preventing the development of specific bacterial species in food. This can be particularly useful in preservation or food safety applications, but also has implications for the development of desirable flora in fermented food. In this sense, bacteriocins can be used to confer a rudimentary form of innate immunity to foodstuffs, helping processors extend their control over the food flora long after manufacture."} {"_id": "48265726215736f7dd7ceccacac488422032397c", "title": "DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging", "text": "Brain imaging efforts are being increasingly devoted to decoding the functioning of the human brain. Among neuroimaging techniques, resting-state fMRI (R-fMRI) is currently expanding exponentially. Beyond the general neuroimaging analysis packages (e.g., SPM, AFNI and FSL), REST and DPARSF were developed to meet the increasing need of user-friendly toolboxes for R-fMRI data processing. To address recently identified methodological challenges of R-fMRI, we introduce the newly developed toolbox, DPABI, which was evolved from REST and DPARSF. DPABI incorporates recent research advances on head motion control and measurement standardization, thus allowing users to evaluate results using stringent control strategies. DPABI also emphasizes test-retest reliability and quality control of data processing. Furthermore, DPABI provides a user-friendly pipeline analysis toolkit for rat/monkey R-fMRI data analysis to reflect the rapid advances in animal imaging. In addition, DPABI includes preprocessing modules for task-based fMRI, voxel-based morphometry analysis, statistical analysis and results viewing. DPABI is designed to make data analysis require fewer manual operations, be less time-consuming, have a lower skill requirement, a smaller risk of inadvertent mistakes, and be more comparable across studies.
We anticipate this open-source toolbox will assist novices and expert users alike and continue to support advancing R-fMRI methodology and its application to clinical translational studies."} {"_id": "1cf314336cc6af13025fa91b19a50960b626684e", "title": "On the Traveling Salesman Problem with Simple Temporal Constraints", "text": "Many real-world applications require the successful combination of spatial and temporal reasoning. In this paper, we study the general framework of the Traveling Salesman Problem with Simple Temporal Constraints. Representationally, this framework subsumes the Traveling Salesman Problem, Simple Temporal Problems, as well as many of the frameworks described in the literature. We analyze the theoretical properties of the combined problem providing strong inapproximability results for the general problem, and positive results for"} {"_id": "5ed17634e4bd989c235662d8fa3af0a23e7262ef", "title": "A Survey on Detection of Reasons Behind Infant Cry Using Speech Processing", "text": "Infant cry analysis is necessary to detect the reasons behind the cry. Infants can communicate through their cry only. They express their physical and emotional needs through their cry. The crying of an infant can be stopped if the reason behind their cry is known. Therefore, identifying the reason behind infant cry is very essential for medical follow up. In this paper, a literature survey is done to study, analyze, compare the existing approaches and their limitations. This survey helped us to propose a new approach for detecting the reasons behind an infant's cry. The proposed model will take the cry signal as input, and from these signal patterns unique features are extracted using MFCCs, LPCCs, pitch, etc. These features will help discriminate the patterns from one another. These extracted features will be used to train a Neural Network Multilayer classifier. This classifier will be used to identify multiple classes of reasons behind infant cry such as hunger, pain, sleep, discomfort, etc. Efforts will be made to achieve good accuracy through this approach."} {"_id": "671280fad11ccd0d02469086207ba9f988f267cb", "title": "Convolutional neural networks: an illustration in TensorFlow", "text": "Convolutional neural networks (CNNs) have interesting and widely varied architectures catering to the requirement of learning features from three dimensional and structured data volumes (with a particular example being images, whether single or multi-channel). However, there are certain design considerations common to all CNN architectures that enable us to craft correct implementations of the architectures suited to our objectives. It is more than just mathematical calculations\u2014it's an art in itself! Thus, in the present tutorial, we'll put our knowledge of CNNs and their distinct layers into practice by making use of Google's open-source machine learning (ML) platform, TensorFlow. The tutorial is fairly simple and follows the steps detailed online [1] in light of added insights, which can be assimilated from the basics of CNN architectural concepts. The TensorFlow platform was released in November 2015, in a climate of stiff competition from well-established deep learning and ML packages such as Torch and Theano. (The performance comparisons between the packages go beyond the scope of this piece). The platform has been undergoing constant improvements since its release, thanks in part to a nascent community of contributors.
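In the spirit of this tutorial, a minimal MNIST CNN is sketched below. Note that it is written against the modern tf.keras API rather than the low-level graph API a 2015-era TensorFlow tutorial would have used, so it is a modernized approximation and not the tutorial's own code.

    # Minimal MNIST CNN with tf.keras (a modernized sketch, not the tutorial's code).
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0   # 28x28 grayscale -> add channel dim
    x_test = x_test[..., None] / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=128)
    print(model.evaluate(x_test, y_test))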
TensorFlow packs quite a punch owing to its significant positive characteristics. Firstly, the package's open source nature makes it transparent, customizable, and extensible by the end user. Secondly, the documentation support is rich and elaborate, which is a critical factor in accelerating user adoption. Lastly, the fact that the package has been backed by Google, with some of the best ML and AI researchers being among its active contributors, is the cherry on top. Additionally, its compatibility with newer versions of Python (version 3.3 and above), optional GPU utilization, parallelizable execution, and visualization support via TensorBoard further enrich its impressive repertoire. Its Achilles heel\u2014which has prevented widespread adoption\u2014is its unavailability on the Windows OS (TensorFlow is only available on Mac OSX and Linux platforms), and its non-operability in distributed environments, as of now (it's bound to change soon). The question as to why the MNIST dataset [2] was chosen for this tutorial is a very pertinent one. The answer is that the handwritten digits dataset has been ubiquitous since its release in 1998 by Yann LeCun [3]. Having since become the textbook dataset for neural network classification tasks, it is a good choice in our context of getting acquainted with CNNs. The dataset consists of 60,000 training examples and 10,000 test examples, each being a grayscale \u2026"} {"_id": "c788f4ea8d8c0d91237c8146556c3fad71ac362a", "title": "TextCatcher: a method to detect curved and challenging text in natural scenes", "text": "In this paper, we propose a text detection algorithm which is hybrid and multi-scale. First, it relies on a connected component-based approach: After the segmentation of the image, a classification step using a new wavelet descriptor spots the letters. A new graph modeling and its traversal procedure allow us to form candidate text areas. Second, a texture-based approach discards the false positives. Finally, the detected text areas are precisely cut out and a new binarization step is introduced. The main advantage of our method is that few assumptions are put forward. Thus, \u201cchallenging texts\u201d like multi-sized, multi-colored, multi-oriented or curved text can be localized. The efficiency of TextCatcher has been validated on three different datasets: Two come from the ICDAR competition, and the third one contains photographs we have taken with various daily life texts. We present both qualitative and quantitative results."} {"_id": "ca20de3504e3d74faec96ed4f70340e3a0860191", "title": "Put your money where your mouth is! Explaining collective action tendencies through group-based anger and group efficacy.", "text": "Insights from appraisal theories of emotion are used to integrate elements of theories on collective action. Three experiments with disadvantaged groups systematically manipulated procedural fairness (Study 1), emotional social support (Study 2), and instrumental social support (Study 3) to examine their effects on collective action tendencies through group-based anger and group efficacy. Results of structural equation modeling showed that procedural fairness and emotional social support affected the group-based anger pathway (reflecting emotion-focused coping), whereas instrumental social support affected the group efficacy pathway (reflecting problem-focused coping), constituting 2 distinct pathways to collective action tendencies.
Analyses of the means suggest that collective action tendencies become stronger the more fellow group members \"put their money where their mouth is.\" The authors discuss how their dual pathway model integrates and extends elements of current approaches to collective action."} {"_id": "524144d7e3624c3e88cf380e7ccf585d96b39a70", "title": "Taxonomy of intrusion risk assessment and response system", "text": "In recent years, we have seen notable changes in the way attackers infiltrate computer systems compromising their functionality. Research in intrusion detection systems aims to reduce the impact of these attacks. In this paper, we present a taxonomy of intrusion response systems (IRS) and Intrusion Risk Assessment (IRA), two important components of an intrusion detection solution. We achieve this by classifying a number of studies published during the last two decades. We discuss the key features of existing IRS and IRA. We show how characterizing security risks and choosing the right countermeasures are an important and challenging part of designing an IRS and an IRA. Poorly designed IRS and IRA may reduce network performance and wrongly disconnect users from a network. We propose techniques on how to address these challenges and highlight the need for a comprehensive defense mechanism approach. We believe that this taxonomy will open up interesting areas for future research in the growing field of intrusion risk assessment and response systems."} {"_id": "67ea3089b457e6a09e45ea4117cb7f30d7695e69", "title": "Segmentation of Medical Ultrasound Images Using Convolutional Neural Networks with Noisy Activating Functions", "text": "Attempts to segment medical ultrasound images have had more limited success than attempts to segment images from other medical imaging modalities. In this project, we attempt to segment medical ultrasound images using convolutional neural networks (CNNs) with a group of noisy activation functions which have recently been demonstrated to improve the performance of neural networks. We report on the segmentation results using a U-Net-like CNN with noisy rectified linear unit (NReLU) functions, noisy hard sigmoid (NHSigmoid) functions, and noisy hard tanh (NHTanh) functions on a small data set."} {"_id": "034b2b97e6b23061f6f71a5e19c1b03bf4c19ec8", "title": "GSA: A Gravitational Search Algorithm", "text": "In recent years, various heuristic optimization methods have been developed. Many of these methods are inspired by swarm behaviors in nature. In this paper, a new optimization algorithm based on the law of gravity and mass interactions is introduced. In the proposed algorithm, the searcher agents are a collection of masses which interact with each other based on the Newtonian gravity and the laws of motion. The proposed method has been compared with some well-known heuristic search methods. The obtained results confirm the high performance of the proposed method in solving various nonlinear functions."} {"_id": "00844516c86828a4cc81471b573cb1a1696fcde9", "title": "Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion", "text": "Here, we demonstrate that subject motion produces substantial changes in the timecourses of resting state functional connectivity MRI (rs-fcMRI) data despite compensatory spatial registration and regression of motion estimates from the data. These changes cause systematic but spurious correlation structures throughout the brain.
Specifically, many long-distance correlations are decreased by subject motion, whereas many short-distance correlations are increased. These changes in rs-fcMRI correlations do not arise from, nor are they adequately countered by, some common functional connectivity processing steps. Two indices of data quality are proposed, and a simple method to reduce motion-related effects in rs-fcMRI analyses is demonstrated that should be flexibly implementable across a variety of software platforms. We demonstrate how application of this technique impacts our own data, modifying previous conclusions about brain development. These results suggest the need for greater care in dealing with subject motion, and the need to critically revisit previous rs-fcMRI work that may not have adequately controlled for effects of transient subject movements."} {"_id": "21bfc289cf7e2309e70f390ae14d89df7c911a67", "title": "Modeling regional and psychophysiologic interactions in fMRI: the importance of hemodynamic deconvolution", "text": "The analysis of functional magnetic resonance imaging (fMRI) time-series data can provide information not only about task-related activity, but also about the connectivity (functional or effective) among regions and the influences of behavioral or physiologic states on that connectivity. Similar analyses have been performed in other imaging modalities, such as positron emission tomography. However, fMRI is unique because the information about the underlying neuronal activity is filtered or convolved with a hemodynamic response function. Previous studies of regional connectivity in fMRI have overlooked this convolution and have assumed that the observed hemodynamic response approximates the neuronal response. In this article, this assumption is revisited using estimates of underlying neuronal activity. These estimates use a parametric empirical Bayes formulation for hemodynamic deconvolution."} {"_id": "261208c69aeca0243e43511845a0d8023d31acbe", "title": "Common regions of the human frontal lobe recruited by diverse cognitive demands", "text": "Though many neuroscientific methods have been brought to bear in the search for functional specializations within prefrontal cortex, little consensus has emerged. To assess the contribution of functional neuroimaging, this article reviews patterns of frontal-lobe activation associated with a broad range of different cognitive demands, including aspects of perception, response selection, executive control, working memory, episodic memory and problem solving. The results show a striking regularity: for many demands, there is a similar recruitment of mid-dorsolateral, mid-ventrolateral and dorsal anterior cingulate cortex. Much of the remainder of frontal cortex, including most of the medial and orbital surfaces, is largely insensitive to these demands. Undoubtedly, these results provide strong evidence for regional specialization of function within prefrontal cortex. This specialization, however, takes an unexpected form: a specific frontal-lobe network that is consistently recruited for solution of diverse cognitive problems."} {"_id": "2ce0d2f6efe74b9df4c0eccb434322d931c5dd47", "title": "Prefrontal cortical function and anxiety: controlling attention to threat-related stimuli", "text": "Threat-related stimuli are strong competitors for attention, particularly in anxious individuals. 
We used functional magnetic resonance imaging (fMRI) with healthy human volunteers to study how the processing of threat-related distractors is controlled and whether this alters as anxiety levels increase. Our work builds upon prior analyses of the cognitive control functions of lateral prefrontal cortex (lateral PFC) and anterior cingulate cortex (ACC). We found that rostral ACC was strongly activated by infrequent threat-related distractors, consistent with a role for this area in responding to unexpected processing conflict caused by salient emotional stimuli. Participants with higher anxiety levels showed both less rostral ACC activity overall and reduced recruitment of lateral PFC as expectancy of threat-related distractors was established. This supports the proposal that anxiety is associated with reduced top-down control over threat-related distractors. Our results suggest distinct roles for rostral ACC and lateral PFC in governing the processing of task-irrelevant, threat-related stimuli, and indicate reduced recruitment of this circuitry in anxiety."} {"_id": "0ca9e60d077c97f8f9f9e43110e899ed45284ecd", "title": "Other minds in the brain: a functional imaging study of \u201ctheory of mind\u201d in story comprehension", "text": "The ability of normal children and adults to attribute independent mental states to self and others in order to explain and predict behaviour (\"theory of mind\") has been a focus of much recent research. Autism is a biologically based disorder which appears to be characterised by a specific impairment in this \"mentalising\" process. The present paper reports a functional neuroimaging study with positron emission tomography in which we studied brain activity in normal volunteers while they performed story comprehension tasks necessitating the attribution of mental states. The resultant brain activity was compared with that measured in two control tasks: \"physical\" stories which did not require this mental attribution, and passages of unlinked sentences. Both story conditions, when compared to the unlinked sentences, showed significantly increased regional cerebral blood flow in the following regions: the temporal poles bilaterally, the left superior temporal gyrus and the posterior cingulate cortex. Comparison of the \"theory of mind\" stories with \"physical\" stories revealed a specific pattern of activation associated with mental state attribution: it was only this task which produced activation in the medial frontal gyrus on the left (Brodmann's area 8). This comparison also showed significant activation in the posterior cingulate cortex. These surprisingly clear-cut findings are discussed in relation to previous studies of brain activation during story comprehension. The localisation of brain regions involved in normal attribution of mental states and contextual problem solving is feasible and may have implications for the neural basis of autism."} {"_id": "723b30edce2a7a46626a38c8f8cac929131b9ed4", "title": "Daemo: A Self-Governed Crowdsourcing Marketplace", "text": "Crowdsourcing marketplaces provide opportunities for autonomous and collaborative professional work as well as social engagement. However, in these marketplaces, workers feel disrespected due to unreasonable rejections and low payments, whereas requesters do not trust the results they receive. The lack of trust and uneven distribution of power among workers and requesters have raised serious concerns about the sustainability of these marketplaces.
To address the challenges of trust and power, this paper introduces Daemo, a self-governed crowdsourcing marketplace. We propose a prototype task to improve the work quality and an open-governance model to achieve equitable representation. We envisage that Daemo will enable workers to build sustainable careers and provide requesters with timely, quality labor for their businesses."} {"_id": "1eda2af8492a67d66afdb26b70d15e07d9bd11fe", "title": "Discriminative shape from shading in uncalibrated illumination", "text": "Estimating surface normals from just a single image is challenging. To simplify the problem, previous work focused on special cases, including directional lighting, known reflectance maps, etc., making shape from shading impractical outside the lab. To cope with more realistic settings, shading cues need to be combined and generalized to natural illumination. This significantly increases the complexity of the approach, as well as the number of parameters that require tuning. Enabled by a new large-scale dataset for training and analysis, we address this with a discriminative learning approach to shape from shading, which uses regression forests for efficient pixel-independent prediction and fast learning. Von Mises-Fisher distributions in the leaves of each tree enable the estimation of surface normals. To account for their expected spatial regularity, we introduce spatial features, including texton and silhouette features. The proposed silhouette features are computed from the occluding contours of the surface and provide scale-invariant context. Aside from computational efficiency, they enable good generalization to unseen data and importantly allow for a robust estimation of the reflectance map, extending our approach to the uncalibrated setting. Experiments show that our discriminative approach outperforms state-of-the-art methods on synthetic and real-world datasets."} {"_id": "84a505024b58c0c3bb5b1bf12ee76f162ebf52d0", "title": "System Integration and Power-Flow Management for a Series Hybrid Electric Vehicle Using Supercapacitors and Batteries", "text": "In this paper, system integration and power-flow management algorithms for a four-wheel-driven series hybrid electric vehicle (HEV) having multiple power sources composed of a diesel-engine-based generator, lead acid battery bank, and supercapacitor bank are presented. The super-capacitor is utilized as a short-term energy storage device to meet the dynamic performance of the vehicle, while the battery is utilized as a mid-term energy storage for the electric vehicle (EV) mode operation due to its higher energy density. The generator based on an interior permanent magnet machine (IPMM), run by a diesel engine, provides the average power for the normal operation of the vehicle. Thanks to the proposed power-flow management algorithm, each of the energy sources is controlled appropriately and also the dynamic performance of the vehicle has been improved. The proposed power-flow management algorithm has been experimentally verified with a full-scale prototype vehicle."} {"_id": "dfd653ad1409ecbcd8970f2b505dea0807e316ca", "title": "Linear AC LED driver with the multi-level structure and variable current regulator", "text": "This paper proposes a linear AC LED driver for LED lighting applications. The proposed circuit is small in size because the circuit structure consists of only semiconductors and resistors without any reactors and electrolytic capacitors.
The current bypass circuit, which is connected in parallel to the LED string, consists of a single MOSFET, a single Zener diode, and two resistors. The MOSFET is operated in an active state by a self-bias circuit. Thus, an external controller and high voltage gate drivers are not required. The proposed circuit is experimentally validated by using a 7 W prototype. From the experimental results, the THD of the input current is 2.1% and the power factor is 0.999. In addition, the simulation loss analysis demonstrates an efficiency of 87% for a 7 W prototype."} {"_id": "166cee31d458a41872a50e81532b787845d92e70", "title": "Space-Vector PWM Control Synthesis for an H-Bridge Drive in Electric Vehicles", "text": "This paper deals with a synthesis of space-vector pulsewidth modulation (SVPWM) control methods applied to an H-bridge inverter feeding a three-phase permanent-magnet synchronous machine (PMSM) in electric-vehicle (EV) applications. First, a short survey of existing architectures of power converters, particularly those adapted to degraded operating modes, is presented. Standard SVPWM control methods are compared with three innovative methods using EV drive specifications in the normal operating mode. Then, a rigorous analysis of the margins left in the control strategy in the event of a semiconductor switch failure is presented, with a view to fulfilling degraded operating modes. Finally, both classic and innovative strategies are implemented in numerical simulation; their results are analyzed and discussed."} {"_id": "a5a147be45a38cacff1b21a9c05fc8c408df237b", "title": "WALNUT: Waging Doubt on the Integrity of MEMS Accelerometers with Acoustic Injection Attacks", "text": "Cyber-physical systems depend on sensors to make automated decisions. Resonant acoustic injection attacks are already known to cause malfunctions by disabling MEMS-based gyroscopes. However, an open question remains on how to move beyond denial of service attacks to achieve full adversarial control of sensor outputs. Our work investigates how analog acoustic injection attacks can damage the digital integrity of a popular type of sensor: the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Our contributions include (1) modeling the physics of malicious acoustic interference on MEMS accelerometers, (2) discovering the circuit-level security flaws that cause the vulnerabilities by measuring acoustic injection attacks on MEMS accelerometers as well as systems that employ these sensors, and (3) two software-only defenses that mitigate many of the risks to the integrity of MEMS accelerometer outputs. We characterize two classes of acoustic injection attacks with increasing levels of adversarial control: output biasing and output control. We test these attacks against 20 models of capacitive MEMS accelerometers from 5 different manufacturers. Our experiments find that 75% are vulnerable to output biasing, and 65% are vulnerable to output control. To illustrate end-to-end implications, we show how to inject fake steps into a Fitbit with a $5 speaker. In our self-stimulating attack, we play a malicious music file from a smartphone's speaker to control the on-board MEMS accelerometer trusted by a local app to pilot a toy RC car.
In addition to offering hardware design suggestions to eliminate the root causes of insecure amplification and filtering, we introduce two low-cost software defenses that mitigate output biasing attacks: randomized sampling and 180 degree out-of-phase sampling. These software-only approaches mitigate attacks by exploiting the periodic and predictable nature of the malicious acoustic interference signal. Our results call into question the wisdom of allowing microprocessors and embedded systems to blindly trust that hardware abstractions alone will ensure the integrity of sensor outputs."} {"_id": "0d4d20c9f025a54b4c55f8d674f475306ebc88a6", "title": "Ant-Q: A Reinforcement Learning Approach to the Traveling Salesman Problem", "text": "In this paper we introduce Ant-Q, a family of algorithms which present many similarities with Q-learning (Watkins, 1989), and which we apply to the solution of symmetric and asymmetric instances of the traveling salesman problem (TSP). Ant-Q algorithms were inspired by work on the ant system (AS), a distributed algorithm for combinatorial optimization based on the metaphor of ant colonies which was recently proposed in (Dorigo, 1992; Dorigo, Maniezzo and Colorni, 1996). We show that AS is a particular instance of the Ant-Q family, and that there are instances of this family which perform better than AS. We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP's are competitive with those obtained by other heuristic approaches based on neural networks or local search. Finally, we apply Ant-Q to some difficult asymmetric TSP's obtaining very good results: Ant-Q was able to find solutions of a quality which usually can be found only by very specialized algorithms."} {"_id": "18977c6f7abb245691f4268ccd116036bd2391f0", "title": "All-at-once Optimization for Coupled Matrix and Tensor Factorizations", "text": "Joint analysis of data from multiple sources has the potential to improve our understanding of the underlying structures in complex data sets. For instance, in restaurant recommendation systems, recommendations can be based on rating histories of customers. In addition to rating histories, customers\u2019 social networks (e.g., Facebook friendships) and restaurant categories information (e.g., Thai or Italian) can also be used to make better recommendations. The task of fusing data, however, is challenging since data sets can be incomplete and heterogeneous, i.e., data consist of both matrices, e.g., the person by person social network matrix or the restaurant by category matrix, and higher-order tensors, e.g., the \u201cratings\u201d tensor of the form restaurant by meal by person. In this paper, we are particularly interested in fusing data sets with the goal of capturing their underlying latent structures. We formulate this problem as a coupled matrix and tensor factorization (CMTF) problem where heterogeneous data sets are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches solving this problem using alternating algorithms, we propose an all-at-once optimization approach called CMTF-OPT (CMTF-OPTimization), which is a gradient-based optimization approach for joint analysis of matrices and higher-order tensors. We also extend the algorithm to handle coupled incomplete data sets. 
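The "all-at-once" idea described in this abstract can be illustrated with a deliberately simplified case: two matrices sharing one latent factor, with every factor updated jointly from the gradient of a single combined loss (the paper itself couples a tensor with a matrix; the matrix-matrix pair, sizes, and learning rate below are assumptions chosen only to keep the sketch short).

    # Simplified all-at-once coupled factorization: X ~ A B^T, Y ~ A C^T share A.
    import numpy as np

    rng = np.random.default_rng(1)
    A_true, B_true, C_true = rng.random((30, 3)), rng.random((20, 3)), rng.random((10, 3))
    X, Y = A_true @ B_true.T, A_true @ C_true.T     # synthetic coupled data

    A, B, C = (rng.random(M.shape) for M in (A_true, B_true, C_true))
    lr = 0.01                                       # illustrative step size
    for _ in range(3000):
        Rx, Ry = X - A @ B.T, Y - A @ C.T           # residuals of both fits
        gA = -(Rx @ B) - (Ry @ C)                   # gradients of the single
        gB = -Rx.T @ A                              # combined loss
        gC = -Ry.T @ A                              # 0.5*(||Rx||^2 + ||Ry||^2)
        A, B, C = A - lr * gA, B - lr * gB, C - lr * gC   # one joint step
    print(np.linalg.norm(X - A @ B.T), np.linalg.norm(Y - A @ C.T))

The contrast with alternating least squares is that no factor is held fixed: all gradients are computed from the same residuals before any factor moves.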
Using numerical experiments, we demonstrate that the proposed all-at-once approach is more accurate than the alternating least squares approach."} {"_id": "7d6acd022c000e57f46795cc54506215b9b9ec33", "title": "A Tagged Corpus and a Tagger for Urdu", "text": "In this paper, we describe a release of a sizeable monolingual Urdu corpus automatically tagged with part-of-speech tags. We extend the work of Jawaid and Bojar (2012) who use three different taggers and then apply a voting scheme to disambiguate among the different choices suggested by each tagger. We run this complex ensemble on a large monolingual corpus and release the tagged corpus. Additionally, we use this data to train a single standalone tagger which will hopefully significantly simplify Urdu processing. The standalone tagger obtains an accuracy of 88.74% on test data."} {"_id": "539ea86fa738afd939fb18566107c971461f8548", "title": "Learning as search optimization: approximate large margin methods for structured prediction", "text": "Mappings to structured output spaces (strings, trees, partitions, etc.) are typically learned using extensions of classification algorithms to simple graphical structures (e.g., linear chains) in which search and parameter estimation can be performed exactly. Unfortunately, in many complex problems, it is rare that exact search or parameter estimation is tractable. Instead of learning exact models and searching via heuristic means, we embrace this difficulty and treat the structured output problem in terms of approximate search. We present a framework for learning as search optimization, and two parameter updates with convergence theorems and bounds. Empirical evidence shows that our integrated approach to learning and decoding can outperform exact models at smaller computational cost."} {"_id": "1219fb39b46aabd74879a7d6d3c724fb4e55aeae", "title": "Bricolage versus breakthrough : distributed and embedded agency in technology entrepreneurship", "text": "We develop a perspective on technology entrepreneurship as involving agency that is distributed across different kinds of actors. Each actor becomes involved with a technology, and, in the process, generates inputs that result in the transformation of an emerging technological path. The steady accumulation of inputs to a technological path generates a momentum that enables and constrains the activities of distributed actors. In other words, agency is not only distributed, but it is embedded as well. We explicate this perspective through a comparative study of processes underlying the emergence of wind turbines in Denmark and in the United States. Through our comparative study, we flesh out \u201cbricolage\u201d and \u201cbreakthrough\u201d as contrasting approaches to the engagement of actors in shaping technological paths."} {"_id": "2266636d87e44590ade738b92377d1fe1bc5c970", "title": "Threshold selection using Renyi's entropy", "text": ""} {"_id": "1672a134e0cebfef817c0c832eea1e54ffb094b0", "title": "UTHealth at SemEval-2016 Task 12: an End-to-End System for Temporal Information Extraction from Clinical Notes", "text": "The 2016 Clinical TempEval challenge addresses temporal information extraction from clinical notes.
The challenge is composed of six sub-tasks, each of which is to identify: (1) event mention spans, (2) time expression spans, (3) event attributes, (4) time attributes, (5) events\u2019 temporal relations to the document creation times (DocTimeRel), and (6) narrative container relations among events and times. In this article, we present an end-to-end system that addresses all six sub-tasks. Our system achieved the best performance for all six sub-tasks when plain texts were given as input. It also performed best for narrative container relation identification when gold standard event/time annotations were given."} {"_id": "beaaba420f5cef9b4564bc4e1ff88094a5fa2054", "title": "Discovering Molecular Functional Groups Using Graph Convolutional Neural Networks", "text": "Functional groups (FGs) serve as a foundation for analyzing chemical properties of organic molecules. Automatic discovery of FGs will impact various fields of research, including medicinal chemistry, by reducing the amount of lab experiments required for discovery or synthesis of new molecules. Here, we investigate methods based on graph convolutional neural networks (GCNNs) for localizing FGs that contribute to specific chemical properties. Molecules are modeled as undirected graphs with atoms as nodes and bonds as edges. Using this graph structure, we trained GCNNs in a supervised way on experimentally-validated molecular training sets to predict specific chemical properties, e.g., toxicity. Upon learning a GCNN, we analyzed its activation patterns to automatically identify FGs using four different methods: gradient-based saliency maps, Class Activation Mapping (CAM), gradient-weighted CAM (Grad-CAM), and Excitation Back-Propagation. We evaluated the contrastive power of these methods with respect to the specificity of the identified molecular substructures and their relevance for chemical functions. Grad-CAM had the highest contrastive power and generated qualitatively the best FGs. This work paves the way for automatic analysis and design of new molecules."} {"_id": "a8f87a5ab16764e61aef3cbadcc52ca927bb392d", "title": "How to make large self-organizing maps for nonvectorial data", "text": "The self-organizing map (SOM) represents an open set of input samples by a topologically organized, finite set of models. In this paper, a new version of the SOM is used for the clustering, organization, and visualization of a large database of symbol sequences (viz. protein sequences). This method combines two principles: the batch computing version of the SOM, and computation of the generalized median of symbol strings."} {"_id": "83c2183c5fd530bd1ff00ba51939680b4419840b", "title": "Structurally-Sensitive Multi-Scale Deep Neural Network for Low-Dose CT Denoising", "text": "Computed tomography (CT) is a popular medical imaging modality and enjoys wide clinical applications. At the same time, the X-ray radiation dose associated with CT scans raises a public concern due to its potential risks to the patients. Over the past years, major efforts have been dedicated to the development of low-dose CT (LDCT) methods. However, the radiation dose reduction compromises the signal-to-noise ratio, leading to strong noise and artifacts that degrade the CT image quality. In this paper, we propose a novel 3-D noise reduction method, called structurally sensitive multi-scale generative adversarial net, to improve the LDCT image quality. Specifically, we incorporate 3-D volumetric information to improve the image quality.
Also, different loss functions for training denoising models are investigated. Experiments show that the proposed method can effectively preserve the structural and textural information in reference to the normal-dose CT images and significantly suppress noise and artifacts. Qualitative visual assessments by three experienced radiologists demonstrate that the proposed method retrieves more information and outperforms competing methods."} {"_id": "2af586c64c32baeb445992e0ea6b76bbbbc30c7f", "title": "Massive parallelization of approximate nearest neighbor search on KD-tree for high-dimensional image descriptor matching", "text": ""} {"_id": "15a4ef82d92b08c5c1332324d0820ec3d082bf3e", "title": "REGULARIZATION TOOLS: A Matlab package for analysis and solution of discrete ill-posed problems", "text": "The package REGULARIZATION TOOLS consists of 54 Matlab routines for analysis and solution of discrete ill-posed problems, i.e., systems of linear equations whose coefficient matrix has the properties that its condition number is very large, and its singular values decay gradually to zero. Such problems typically arise in connection with discretization of Fredholm integral equations of the first kind, and similar ill-posed problems. Some form of regularization is always required in order to compute a stabilized solution to discrete ill-posed problems. The purpose of REGULARIZATION TOOLS is to provide the user with easy-to-use routines, based on numerically robust and efficient algorithms, for doing experiments with regularization of discrete ill-posed problems. By means of this package, the user can experiment with different regularization strategies, compare them, and draw conclusions from these experiments that would otherwise require a major programming effort. For discrete ill-posed problems, which are indeed difficult to treat numerically, such an approach is certainly superior to a single black-box routine. This paper describes the underlying theory and gives an overview of the package; a complete manual is also available."} {"_id": "cb0da1ed189087c9ba716cc5c99c75b52430ec06", "title": "Transparent and Efficient CFI Enforcement with Intel Processor Trace", "text": "Current control flow integrity (CFI) enforcement approaches either require instrumenting application executables and even shared libraries, or are unable to defend against sophisticated attacks due to relaxed security policies, or both; many of them also incur high runtime overhead. This paper observes that the main obstacle to providing transparent and strong defense against sophisticated adversaries is the lack of sufficient runtime control flow information. To this end, this paper describes FlowGuard, a lightweight, transparent CFI enforcement approach based on a novel reuse of Intel Processor Trace (IPT), a recent hardware feature that efficiently captures the entire runtime control flow. The main challenge is that IPT is designed for offline performance analysis and software debugging, such that decoding collected control flow traces is prohibitively slow on the fly. FlowGuard addresses this challenge by reconstructing applications' conservative control flow graphs (CFG) to be compatible with the compressed encoding format of IPT, and labeling the CFG edges with credits with the help of fuzzing-like dynamic training. At runtime, FlowGuard separates fast and slow paths such that the fast path compares the labeled CFGs with the IPT traces for fast filtering, while the slow path decodes necessary IPT traces for strong security.
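A conceptual sketch of the fast/slow path split just described: the fast path reduces each observed branch edge to a set lookup against a precomputed CFG edge set and escalates only unknown edges to an expensive slow path. The addresses and trace below are made-up stand-ins for real IPT packets, not FlowGuard's actual implementation.

    # Conceptual fast-path CFI filtering (illustrative, not FlowGuard's code).
    ALLOWED_EDGES = {                 # (branch source, target) from the CFG
        (0x401000, 0x401020),
        (0x401020, 0x401050),
        (0x401050, 0x401000),
    }

    def slow_path(edge):
        # Stand-in for full trace decoding plus a precise CFI check.
        print(f"slow path: deep check of edge {edge[0]:#x} -> {edge[1]:#x}")
        return False                  # treat unknown edges as violations here

    def check_trace(trace):
        for edge in trace:            # fast path: one set lookup per edge
            if edge not in ALLOWED_EDGES and not slow_path(edge):
                return False
        return True

    trace = [(0x401000, 0x401020), (0x401020, 0x401050), (0x401050, 0x4f4f4f)]
    print("trace ok?", check_trace(trace))   # last edge escalates to the slow path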
We have implemented and evaluated FlowGuard on a commodity Intel Skylake machine with IPT support. Evaluation results show that FlowGuard is effective in enforcing CFI for several applications, while introducing only small performance overhead. We also show that, with minor hardware extensions, the performance overhead can be further reduced."} {"_id": "f812347d46035d786de40c165a158160bb2988f0", "title": "Predictive coding as a model of cognition", "text": "Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function."} {"_id": "f59d8504d7c6e209e6f8bcb62346140214b244b7", "title": "Fine-tuning Deep Convolutional Networks for Plant Recognition", "text": "This paper describes the participation of the ECOUAN team in the LifeCLEF 2015 challenge. We used a deep learning approach in which the complete system was learned without hand-engineered components. We pre-trained a convolutional neural network using 1.8 million images and used a fine-tuning strategy to transfer learned recognition capabilities from general domains to the specific challenge of the Plant Identification task. The classification accuracy obtained by our method outperformed the best result obtained in 2014. Our group obtained the 4th position among all teams and the 10th position among 18 runs."} {"_id": "681faa552d147d606815bbd008bc1de0005f63ba", "title": "A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines", "text": "Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy.
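A compressed sketch of the kind of data-driven RUL pipeline this abstract describes: reduce multi-sensor features with PCA, then regress RUL on the reduced features. scikit-learn's regression tree stands in for MARS (which has no sklearn implementation), and the degradation data are synthetic; both are assumptions, not the paper's setup.

    # PCA + tree regression as a stand-in RUL pipeline (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(2)
    n_cycles = 500
    t = np.arange(n_cycles)
    # three synthetic "sensors" that drift as the engine degrades
    sensors = np.column_stack([
        0.01 * t + rng.normal(0, 0.5, n_cycles),
        np.exp(t / 400) + rng.normal(0, 0.2, n_cycles),
        rng.normal(0, 1, n_cycles),                 # pure-noise channel
    ])
    rul = n_cycles - 1 - t                          # remaining-useful-life labels

    idx = rng.permutation(n_cycles)
    train, test = idx[:400], idx[400:]
    pca = PCA(n_components=2).fit(sensors[train])   # compress the sensor suite
    model = DecisionTreeRegressor(max_depth=5)      # stand-in for MARS
    model.fit(pca.transform(sensors[train]), rul[train])
    pred = model.predict(pca.transform(sensors[test]))
    print("MAE on held-out cycles:", np.abs(pred - rul[test]).mean())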
The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines."} {"_id": "28ff2f3bf5403d7adc58f6aac542379806fa3233", "title": "Interpreting random forest models using a feature contribution method", "text": "Model interpretation is one of the key aspects of the model evaluation process. The explanation of the relationship between model variables and outputs is easy for statistical models, such as linear regressions, thanks to the availability of model parameters and their statistical significance. For \u201cblack box\u201d models, such as random forest, this information is hidden inside the model structure. This work presents an approach for computing feature contributions for random forest classification models. It allows for the determination of the influence of each variable on the model prediction for an individual instance. Interpretation of feature contributions for two UCI benchmark datasets shows the potential of the proposed methodology. The robustness of results is demonstrated through an extensive analysis of feature contributions calculated for a large number of generated random forest models."} {"_id": "c8bbf44d7454f37e2e12713f48ca99b3f19ef915", "title": "Methodology of band rejection / addition for microstrip antennas design using slot line theory and current distribution analysis", "text": "Radio or wireless communication refers to the transfer of information over long distances without using any wires. Millions of people exchange information every day using pagers, cellular telephones, laptops, various types of personal digital assistants (PDAs) and other wireless communication products. The worldwide interoperability for microwave access (Wi-Max) aims to provide wireless data over a long distance in a variety of ways. It is based on the IEEE 802.16 standard. It is an effective metropolitan area access technique with many favorable features such as flexibility, cost efficiency, and fast networking, which not only provides wireless access, but also serves to expand access to the wired network. The coverage area of Wi-Max is around 30-50 km. It can provide data rates up to 100 Mbps in a 20 MHz bandwidth for fixed and nomadic applications in the 2-11 GHz frequencies. In this paper, a methodology of band rejection/addition for microstrip antennas design using slot line theory and current distribution analysis has been introduced and analyzed. The analysis and design are done using commercial software. The radiation characteristics, such as return loss, VSWR, input impedance, and surface current densities, have been introduced and discussed. Finally, the proposed optimum antenna design structure has been fabricated, and the measured S-parameters of the proposed structure can be analyzed with a network analyzer and compared with simulation results to demonstrate the excellent performance and meet the requirements for wireless communication applications."} {"_id": "2704a9af1b368e2b68b0fe022b2fd48b8c7c25cc", "title": "Distance Metric Learning with Application to Clustering with Side-Information", "text": "Many algorithms rely critically on being given a good metric over their inputs.
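A simplified sketch in the spirit of the hybrid PCA-CART-MARS prognostic approach above: dimensionality reduction followed by a regression tree predicting remaining useful life. The MARS stage is omitted, and the synthetic sensor data and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Simplified PCA + regression-tree prognostic pipeline (MARS stage omitted;
# synthetic data and hyperparameters are assumptions for illustration only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                            # stand-in for sensor features
rul = 100 - 5 * X[:, 0] + rng.normal(scale=2, size=500)   # synthetic remaining useful life

model = make_pipeline(PCA(n_components=5), DecisionTreeRegressor(max_depth=6))
model.fit(X[:400], rul[:400])
print(model.score(X[400:], rul[400:]))                    # R^2 on held-out engines (toy)
```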
For instance, data can often be clustered in many \u201cplausible\u201d ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \u201csimilar.\u201d For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in R^n, learns a distance metric over R^n that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance."} {"_id": "d3c79bd62814782c7075c7fea80d1f87e337a538", "title": "Comparing Clusterings - An Overview", "text": "As the amount of data we nowadays have to deal with becomes larger and larger, the methods that help us to detect structures in the data and to identify interesting subsets in the data become more and more important. One of these methods is clustering, i.e. segmenting a set of elements into subsets such that the elements in each subset are somehow \u201csimilar\u201d to each other and elements of different subsets are \u201cdissimilar\u201d. In the literature we can find a large variety of clustering algorithms, each having certain advantages but also certain drawbacks. Typical questions that arise in this context comprise:"} {"_id": "09370d132a1e238a778f5e39a7a096994dc25ec1", "title": "Discriminant Adaptive Nearest Neighbor Classification", "text": "Nearest neighbor classification expects the class conditional probabilities to be locally constant, and suffers from bias in high dimensions. We propose a locally adaptive form of nearest neighbor classification to try to finesse this curse of dimensionality. We use a local linear discriminant analysis to estimate an effective metric for computing neighborhoods. We determine the local decision boundaries from centroid information, and then shrink neighborhoods in directions orthogonal to these local decision boundaries, and elongate them parallel to the boundaries. Thereafter, any neighborhood-based classifier can be employed, using the modified neighborhoods. The posterior probabilities tend to be more homogeneous in the modified neighborhoods. We also propose a method for global dimension reduction, that combines local dimension information. In a number of examples, the methods demonstrate the potential for substantial improvements over nearest neighbour classification. Introduction We consider a discrimination problem with d classes and N training observations. The training observations consist of predictor measurements x = (x1, x2, ..., xp) on p predictors and the known class memberships. Our goal is to predict the class membership of an observation with predictor vector x0. Nearest neighbor classification is a simple and appealing approach to this problem. We find the set of K nearest neighbors in the training set to x0 and then classify x0 as the most frequent class among the K neighbors. Nearest neighbors is an extremely flexible classification scheme, and does not involve any pre-processing (fitting) of the training data.
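A toy stand-in for the metric-learning idea in the abstract above: learn nonnegative per-feature weights of a diagonal Mahalanobis metric from similar/dissimilar pairs. This is a simplified gradient-based sketch, not the paper's exact convex program; the objective, learning rate, and step count are assumptions.

```python
# Toy sketch of learning a diagonal Mahalanobis metric from side-information:
# pull similar pairs together, push dissimilar pairs apart (simplified stand-in
# for the convex optimization formulation in the abstract above).
import numpy as np

def learn_diag_metric(X, similar, dissimilar, lr=0.01, steps=200):
    """Return nonnegative weights w so that d(x, y)^2 = sum_k w_k (x_k - y_k)^2."""
    w = np.ones(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i, j in similar:       # shrinking w where similar pairs differ
            grad += (X[i] - X[j]) ** 2
        for i, j in dissimilar:    # growing w where dissimilar pairs differ
            grad -= (X[i] - X[j]) ** 2
        w = np.clip(w - lr * grad, 0.0, None)   # projected gradient step (w >= 0)
    return w / (w.sum() + 1e-12)                # normalize the overall scale

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
w = learn_diag_metric(X, similar=[(0, 1), (2, 3)], dissimilar=[(0, 5), (2, 7)])
print(w)  # features that separate dissimilar pairs receive larger weight
```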
This can offer both space and speed advantages in very large problems: see Cover (1968), Duda & Hart (1973), McLachlan (1992) for background material on nearest neighborhood classification. Cover & Hart (1967) show that the one nearest neighbour rule has asymptotic error rate at most twice the Bayes rate. However, in finite samples the curse of dimensionality can severely hurt the nearest neighbor rule. The relative radius of the nearest-neighbor sphere grows like r^{1/p}, where p is the dimension and r the radius for p = 1, resulting in severe bias at the target point x. Figure 1 illustrates the situation for a simple example. Figure 1: The vertical strip denotes the NN region using only the X coordinate to find the nearest neighbor for the target point (solid dot). The sphere shows the NN region using both coordinates, and we see in this case it has extended into the class 1 region (and found the wrong class in this instance). Our illustration here is based on a 1-NN rule, but the same phenomenon occurs for k-NN rules as well. Nearest neighbor techniques are based on the assumption that locally the class posterior probabilities are constant. While that is clearly true in the vertical strip using only coordinate X, using X and Y this is no longer true. The techniques outlined in the abstract are designed to overcome these problems. Figure 2 shows an example. There are two classes in two dimensions, one of which almost completely surrounds the other. The left panel shows a nearest neighborhood of size 25 at the target point (shown as origin), which is chosen to be near the class boundary. The right panel shows the same size neighborhood using our discriminant adaptive nearest neighbor metric."} {"_id": "0bacca0993a3f51649a6bb8dbb093fc8d8481ad4", "title": "Constrained K-means Clustering with Background Knowledge", "text": "Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be profitably modified to make use of this information. In experiments with artificial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance."} {"_id": "bed9545680bbbf2c4e922263e13534d199c98613", "title": "Promoting Student Metacognition", "text": "Imagine yourself as the instructor of an introductory undergraduate biology course. Two students from your course independently visit your office the week after the first exam. Both students are biology majors. Both regularly attend class and submit their assignments on time. Both appear to be eager, dedicated, and genuine students who want to learn biology. During each of their office hours visits, you ask them to share how they prepared for the first exam. Their stories are strikingly different (inspired by Ertmer and Newby, 1996). During office hours, Josephina expresses that she was happy the exam was on a Monday, because she had a lot of time to prepare the previous weekend. She shares that she started studying after work on Saturday evening and did not go out with friends that night.
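A compact sketch of the constrained k-means idea from the abstract above: points are assigned to the nearest centroid that does not violate any must-link or cannot-link constraint. The violation handling here is a simplification (points that fit no cluster keep their previous assignment rather than aborting the run, as the original COP-KMEANS does).

```python
# Minimal sketch of constrained k-means with must-link / cannot-link constraints,
# in the spirit of the abstract above; violation handling is simplified.
import numpy as np

def violates(i, c, assign, must, cannot):
    for a, b in must:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] != -1 and assign[j] != c:
            return True               # must-link partner already in another cluster
    for a, b in cannot:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] == c:
            return True               # cannot-link partner already in this cluster
    return False

def cop_kmeans(X, k, must=(), cannot=(), iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    assign = np.full(len(X), -1)
    for _ in range(iters):
        for i, x in enumerate(X):
            order = np.argsort(((centers - x) ** 2).sum(axis=1))
            for c in order:           # nearest feasible centroid wins
                if not violates(i, c, assign, must, cannot):
                    assign[i] = c
                    break
        for c in range(k):            # recompute centroids from current members
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return assign, centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
labels, _ = cop_kmeans(X, 2, must=[(0, 1)], cannot=[(0, 39)])
print(labels[:5], labels[-5:])
```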
When queried, she also shares that she reread all of the assigned textbook material and made flashcards of the bold words in the text. She feels that she should have done well on the test, because she studied all Saturday night and all day on Sunday. She feels that she did everything she could do to prepare. That said, she is worried about what her grade will be, and she wants you to know that she studied really hard, so she should get a good grade on the exam. Later in the week, Maya visits your office. When asked how she prepared for the first exam, she explains that she has regularly reviewed the PowerPoint slides each evening after class since the beginning of the term 4 weeks ago. She also read the assigned textbook pages weekly, but expresses that she spent most of her time comparing the ideas in the PowerPoint slides with the information in the textbook to see how they were similar and different. She found several places in which things seemed not to agree, which confused her. She kept a running list of these confusions each week. When you ask what she did with these confusions, she shares that she"} {"_id": "7e30d0c7aaaed7fa2d04fc8cc0fd3af8e24ca385", "title": "A Survey of Text Summarization Extractive Techniques", "text": "Text Summarization is the process of condensing the source text into a shorter version while preserving its information content and overall meaning. It is very difficult for human beings to manually summarize large documents of text. Text Summarization methods can be classified into extractive and abstractive summarization. An extractive summarization method consists of selecting important sentences, paragraphs etc. from the original document and concatenating them into a shorter form. The importance of sentences is decided based on statistical and linguistic features of sentences. An abstractive summarization method consists of understanding the original text and re-telling it in fewer words. It uses linguistic methods to examine and interpret the text and then to find new concepts and expressions to best describe it by generating a new shorter text that conveys the most important information from the original text document. In this paper, a survey of text summarization extractive techniques is presented."} {"_id": "7294cc55a3b09a43cde606085b1a5742277b9e09", "title": "Multiple SVM-RFE for gene selection in cancer classification with expression data", "text": "This paper proposes a new feature selection method that uses a backward elimination procedure similar to that implemented in support vector machine recursive feature elimination (SVM-RFE). Unlike the SVM-RFE method, at each step, the proposed approach computes the feature ranking score from a statistical analysis of weight vectors of multiple linear SVMs trained on subsamples of the original training data. We tested the proposed method on four gene expression datasets for cancer classification. The results show that the proposed feature selection method selects better gene subsets than the original SVM-RFE and improves the classification accuracy. A Gene Ontology-based similarity assessment indicates that the selected subsets are functionally diverse, further validating our gene selection method.
This investigation also suggests that, for gene expression-based cancer classification, average test error from multiple partitions of training and test sets can be recommended as a reference of performance quality."} {"_id": "2601e35c203cb160bc82e7840c15b193f9c66404", "title": "Photogeometric Scene Flow for High-Detail Dynamic 3D Reconstruction", "text": "Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, and (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion."} {"_id": "08394b082370493f381bab44158dc316fe6eef2a", "title": "Data Driven Ontology Evaluation", "text": "The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the \u2018fit\u2019 between an ontology and a domain of knowledge. We consider a number of methods for measuring this \u2018fit\u2019 and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology."} {"_id": "d1a5a5b6aac08e09c0056c5f70906cb6f96e8b4b", "title": "Learning Across Scales - Multiscale Methods for Convolution Neural Networks", "text": "In this work, we establish the relation between optimal control and training deep Convolution Neural Networks (CNNs). We show that the forward propagation in CNNs can be interpreted as a time-dependent nonlinear differential equation and learning can be seen as controlling the parameters of the differential equation such that the network approximates the data-label relation for given training data. Using this continuous interpretation, we derive two new methods to scale CNNs with respect to two different dimensions. The first class of multiscale methods connects low-resolution and high-resolution data using prolongation and restriction of CNN parameters inspired by algebraic multigrid techniques. We demonstrate that our method enables classifying high-resolution images using CNNs trained with low-resolution images and vice versa, and warm-starting the learning process.
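A sketch of the multiple-SVM recursive feature elimination idea from the Multiple SVM-RFE abstract above: rank features by statistics of |w| across linear SVMs trained on subsamples, then drop the weakest feature each round. The subsample fraction, scoring statistic, and synthetic data are illustrative assumptions.

```python
# Sketch of multiple-SVM RFE: rank features by a statistic of |w| over SVMs
# trained on subsamples, eliminating the weakest feature per round (binary case).
import numpy as np
from sklearn.svm import LinearSVC

def msvm_rfe(X, y, n_keep=5, n_subsamples=10, seed=0):
    rng = np.random.default_rng(seed)
    features = list(range(X.shape[1]))
    while len(features) > n_keep:
        W = []
        for _ in range(n_subsamples):
            idx = rng.choice(len(X), size=int(0.8 * len(X)), replace=False)
            svm = LinearSVC(dual=False).fit(X[np.ix_(idx, features)], y[idx])
            W.append(np.abs(svm.coef_).ravel())
        W = np.array(W)
        score = W.mean(axis=0) / (W.std(axis=0) + 1e-12)  # stability-weighted ranking
        features.pop(int(np.argmin(score)))               # eliminate weakest feature
    return features

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 3] + X[:, 7] > 0).astype(int)
print(msvm_rfe(X, y))  # the informative features 3 and 7 should survive
```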
The second class of multiscale methods connects shallow and deep networks and leads to new training strategies that gradually increase the depth of the CNN while re-using parameters for initializations."} {"_id": "d9d1c5a04bc9e6a117facc2701a28003b9df42dd", "title": "Toolpath Planning for Continuous Extrusion Additive Manufacturing", "text": "Recent work in additive manufacturing has introduced a new class of 3D printers that operate by extruding slurries and viscous mixtures such as silicone, glass, epoxy, and concrete, but because of the fluid flow properties of these materials it is difficult to stop extrusion once the print has begun. Conventional toolpath generation for 3D printing is based on the assumption that the flow of material can be controlled precisely and the resulting path includes instructions to disable extrusion and move the print head to another portion of the model. A continuous extrusion printer cannot disable material flow, and so these toolpaths produce low quality prints with wasted material. This paper outlines a greedy algorithm for post-processing toolpath instructions that employs a Traveling Salesperson Problem (TSP) solver to reduce the distance traveled between subsequent space-filling curves and layers, which reduces unnecessary extrusion by at least 20% for simple object models on an open-source 3D printer."} {"_id": "0e8b8e0c37b0ebc9c36b99103a487dbbbdf9ee97", "title": "Model Predictive Control System Design of a passenger car for a Valet Parking Scenario", "text": ""} {"_id": "76278b9afb45ccb7c6e633a5747054f94e5fcc23", "title": "Artificial Intelligence Markup Language: A Brief Tutorial", "text": "The purpose of this paper is to serve as a reference guide for the development of chatterbots implemented with the AIML language. In order to achieve this, the main concepts in the Pattern Recognition area are described, because AIML uses this theoretical framework in its syntactic and semantic structures. After that, the AIML language is described and each AIML command/tag is followed by an application example. Also, the usage of AIML embedded tags for the handling of sequence dialogue limitations between humans and machines is shown. Finally, computer systems that assist in the design of chatterbots with the AIML language are classified and described."} {"_id": "677974d1ee2bd8d53a139b2f5805cca883f4d710", "title": "An Analysis of the Elastic Net Approach to the Traveling Salesman Problem", "text": "This paper analyzes the elastic net approach (Durbin and Willshaw 1987) to the traveling salesman problem of finding the shortest path through a set of cities. The elastic net approach jointly minimizes the length of an arbitrary path in the plane and the distance between the path points and the cities. The tradeoff between these two requirements is controlled by a scale parameter K. A global minimum is found for large K, and is then tracked to a small value. In this paper, we show that (1) in the small K limit the elastic path passes arbitrarily close to all the cities, but that only one path point is attracted to each city, (2) in the large K limit the net lies at the center of the set of cities, and (3) at a critical value of K the energy function bifurcates. We also show that this method can be interpreted in terms of extremizing a probability distribution controlled by K. The minimum at a given K corresponds to the maximum a posteriori (MAP) Bayesian estimate of the tour under a natural statistical interpretation.
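One iteration of the elastic-net update analyzed in the abstract above, sketched in NumPy: each city pulls the nearest path points toward it, while a tension term smooths the ring, and the scale K is annealed downward. The values of alpha, beta, K, and the annealing schedule are illustrative assumptions.

```python
# One elastic-net update step (after Durbin & Willshaw 1987): path points are
# pulled toward cities and smoothed along the ring; alpha, beta, K and the
# annealing schedule below are assumptions for illustration.
import numpy as np

def elastic_net_step(cities, path, K, alpha=0.2, beta=2.0):
    d2 = ((cities[:, None, :] - path[None, :, :]) ** 2).sum(-1)   # (n_cities, n_path)
    w = np.exp(-d2 / (2 * K**2))
    w /= w.sum(axis=1, keepdims=True)                  # each city distributes unit pull
    pull = (w[:, :, None] * (cities[:, None, :] - path[None, :, :])).sum(axis=0)
    tension = np.roll(path, -1, 0) - 2 * path + np.roll(path, 1, 0)  # ring smoothing
    return path + alpha * pull + beta * K * tension

rng = np.random.default_rng(0)
cities = rng.random((30, 2))
t = np.linspace(0, 2 * np.pi, 75, endpoint=False)
path = 0.5 + 0.1 * np.c_[np.cos(t), np.sin(t)]         # small ring near the centroid
K = 0.2
for _ in range(200):
    path = elastic_net_step(cities, path, K)
    K = max(0.01, K * 0.99)                            # anneal K toward the small-K limit
```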
The analysis presented in this paper gives us a better understanding of the behavior of the elastic net, allows us to better choose the parameters for the optimization, and suggests how to extend the underlying ideas to other domains."} {"_id": "223754b54c2fa60578b32253b2c7de3ddf6447c5", "title": "A Comparative Analysis and Study of Multiview CNN Models for Joint Object Categorization and Pose Estimation", "text": "In the Object Recognition task, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose estimation using these approaches has received relatively less attention. In this work, we study how Convolutional Neural Networks (CNN) architectures can be adapted to the task of simultaneous object recognition and pose estimation. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations within CNNs represent object pose information and how this contrasts with object category representations. We extensively experiment on two recent large and challenging multi-view datasets and we achieve results better than the state of the art."} {"_id": "c9fe03fc61635732db2955ae965b0babfb346d50", "title": "Optimizing esthetics for implant restorations in the anterior maxilla: anatomic and surgical considerations.", "text": "The placement of dental implants in the anterior maxilla is a challenge for clinicians because of patients' exacting esthetic demands and difficult pre-existing anatomy. This article presents anatomic and surgical considerations for these demanding indications for implant therapy. First, potential causes of esthetic implant failures are reviewed, discussing anatomic factors such as horizontal or vertical bone deficiencies and iatrogenic factors such as improper implant selection or the malpositioning of dental implants for an esthetic implant restoration. Furthermore, aspects of preoperative analysis are described in various clinical situations, followed by recommendations for the surgical procedures in single-tooth gaps and in extended edentulous spaces with multiple missing teeth. An ideal implant position in all 3 dimensions is required. These mesiodistal, apicocoronal, and orofacial dimensions are well described, defining \"comfort\" and \"danger\" zones for proper implant position in the anterior maxilla. During surgery, the emphasis is on proper implant selection to avoid oversized implants, careful and low-trauma soft tissue handling, and implant placement in a proper position using either a periodontal probe or a prefabricated surgical guide. If missing, the facial bone wall is augmented using a proper surgical technique, such as guided bone regeneration with barrier membranes and appropriate bone grafts and/or bone substitutes. Finally, precise wound closure using a submerged or a semi-submerged healing modality is recommended.
Following a healing period of between 6 and 12 weeks, a reopening procedure is recommended with a punch technique to initiate the restorative phase of therapy."} {"_id": "28a7bad2788637fcf83809cd5725ea847fb74832", "title": "EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks.", "text": "INTRODUCTION\nThe ability to continuously and unobtrusively monitor levels of task engagement and mental workload in an operational environment could be useful in identifying more accurate and efficient methods for humans to interact with technology. This information could also be used to optimize the design of safer, more efficient work environments that increase motivation and productivity.\n\n\nMETHODS\nThe present study explored the feasibility of monitoring electroencephalographic (EEG) indices of engagement and workload acquired unobtrusively and quantified during performance of cognitive tests. EEG was acquired from 80 healthy participants with a wireless sensor headset (F3-F4,C3-C4,Cz-POz,F3-Cz,Fz-C3,Fz-POz) during tasks including: multi-level forward/backward-digit-span, grid-recall, trails, mental-addition, 20-min 3-Choice Vigilance, and image-learning and memory tests. EEG metrics for engagement and workload were calculated for each 1-s epoch of EEG.\n\n\nRESULTS\nAcross participants, engagement but not workload decreased over the 20-min vigilance test. Engagement and workload were significantly increased during the encoding period of verbal and image-learning and memory tests when compared with the recognition/recall period. Workload but not engagement increased linearly as level of difficulty increased in forward and backward-digit-span, grid-recall, and mental-addition tests. EEG measures correlated with both subjective and objective performance metrics.\n\n\nDISCUSSION\nThese data in combination with previous studies suggest that EEG engagement reflects information-gathering, visual processing, and allocation of attention. EEG workload increases with increasing working memory load and during problem solving, integration of information, analytical reasoning, and may be more reflective of executive functions. Inspection of EEG on a second-by-second timescale revealed associations between workload and engagement levels when aligned with specific task events providing preliminary evidence that second-by-second classifications reflect parameters of task performance."} {"_id": "2e827db520b6d2d781aaa9ec744ccdd7a557c641", "title": "MuDeL: a language and a system for describing and generating mutants", "text": "Mutation Testing is an approach for assessing the quality of a test case suite by analyzing its ability to distinguish the product under test from a set of alternative products, the so-called mutants. The mutants are generated from the product under test by applying a set of mutant operators, which produce products with slight syntactical differences. The mutant operators are usually based on typical errors that occur during software development and can be related to a fault model. In this paper, we propose a language \u2014 named MuDeL \u2014 for describing mutant operators aiming not only at automating mutant generation, but also at providing precision and formality to the operator descriptions. The language was designed using concepts that come from transformational and logical programming paradigms, as well as from context-free grammar theory. The language is illustrated with some simple examples.
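MuDeL itself is a declarative description language; as a tiny Python illustration of what a mutant operator does in the abstract above, the sketch below applies one classic operator (arithmetic operator replacement) to source code. The example program and the single-operator choice are assumptions; ast.unparse requires Python 3.9+.

```python
# Illustration of a mutant operator: replace '+' with '-' to produce a product
# with a slight syntactical difference (not MuDeL's own notation).
import ast

class SwapAddSub(ast.NodeTransformer):
    """Mutant operator: replace every binary '+' with '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

source = "def price(total, discount):\n    return total + discount\n"
mutant = ast.unparse(SwapAddSub().visit(ast.parse(source)))
print(mutant)  # mutant returns total - discount; a good test suite should kill it
```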
We also describe the mudelgen system, developed to support this language."} {"_id": "e10d43df2cb088c0fbb129261331ab76c32697fa", "title": "Direct-on-line-start permanent-magnet-assisted synchronous reluctance machine with ferrite magnets", "text": "The possibility of applying low-cost ferrite magnets in a direct-on-line-start permanent-magnet-assisted synchronous reluctance machine (DOL PMaSynRM) for industrial applications, such as pump or ventilation systems, was studied. The main target of this machine topology was to achieve a higher efficiency class compared to a counterpart induction machine (IM). Because of the weak properties of ferrite magnets, the motor had to be optimized for low magnetic stress from the permanent magnets' point of view. The optimization procedure includes finding a suitable rotor construction with a proper number of barriers, thickness of the barriers and thickness of the ferrite magnets. More detailed attention was dedicated to the analysis and elimination of irreversible demagnetization risk of ferrite magnets during the start-up period."} {"_id": "6e407ee7ef1edc319ca5fd5e12f67ce46158b4f7", "title": "Internet use and the logics of personal empowerment in health.", "text": "OBJECTIVES\nThe development of personal involvement and responsibility has become a strategic issue in health policy. The main goal of this study is to confirm the coexistence of three logics of personal empowerment through health information found on the Internet.\n\n\nMETHODS\nA theoretical framework was applied to analyze personal empowerment from the user's perspective. A well-established Canadian Web site that offers information on personal health was used as a case study. A close-ended questionnaire was completed online by 2275 visitors and members of the Web site.\n\n\nRESULTS\nThe findings confirm that the development of feelings of competence and control through Internet use is structured around three different logics. This implies three types of aptitudes that are fostered when the Internet is used to seek health information: doing what is prescribed (the professional logic), making choices based on personal judgment (the consumer logic), and mutual assistance (the community logic).\n\n\nCONCLUSIONS\nA recurring issue in the three logics is the balance of roles and responsibilities required between the individual and the health provider."} {"_id": "abe5043d7f72ae413c4f616817f18f814d647c7a", "title": "Serendipity in the Research Literature: A Phenomenology of Serendipity Reporting", "text": "The role of information sciences is to connect people with the information they need to accomplish the tasks that contribute to the greater societal good. While evidence of the wonderful contributions arising from serendipitous events abounds, the framework describing the information behaviors exhibited during a serendipitous experience is just emerging, and additional detail regarding the factors influencing those behaviors is needed in order to support these experiences effectively. Furthermore, it is important to understand the whole process of serendipity to fully appreciate the impact of research policies, disciplinary traditions and academic reporting practices on this unique type of information behavior. This study addresses those needs by examining the phenomenon of serendipity as it is reported by biomedical and radiography researchers.
A mixed method content analysis of existing research reports will be combined with semi-structured interviews of serendipity reporters to gain a robust understanding of the phenomenon of serendipity, and provide detail that may inform the design of information environments."} {"_id": "e2f682775355e20f460c5f07a8a8612a0d50db8e", "title": "Zero-forcing beamforming with block diagonalization scheme for Coordinated Multi-Point transmission", "text": "Coordinated Multi-Point (CoMP) transmission is a technology targeted for Long Term Evolution Advanced (LTE-A). The Joint Processing (JP) technique for CoMP can maximize system performance, which is achieved mainly with channel information-based beamforming algorithms. Precoding methods perform differently in various CoMP scenarios. In this paper, we propose a joint processing scheme in a downlink CoMP transmission system. We apply block diagonal beamforming to downlink transmission, and assume perfect knowledge of downlink channels and transmit messages at each transmit point."} {"_id": "9e23ea65a0ab63d6024ea53920745bd6abe591a8", "title": "Social Computing for Mobile Big Data", "text": "Mobile big data contains vast statistical features in various dimensions, including spatial, temporal, and the underlying social domain. Understanding and exploiting the features of mobile data from a social network perspective will be extremely beneficial to wireless networks, from planning, operation, and maintenance to optimization and marketing."} {"_id": "3dfd88c034e4984a00e0cef1fe57f8064dceb644", "title": "Tobler's First Law of Geography, Self Similarity, and Perlin Noise: A Large Scale Analysis of Gradient Distribution in Southern Utah with Application to Procedural Terrain Generation", "text": "A statistical analysis finds that in a 160,000 square kilometer region of southern Utah gradients appear to be exponentially distributed at resolutions from 5m up to 1km. A simple modification to the Perlin noise generator changing the gradient distribution in each octave to an exponential distribution results in realistic and interesting procedurally generated terrain. The inverse transform sampling method is used in the amortized noise algorithm to achieve an exponential distribution in each octave, resulting in the generation of infinite non-repeating terrain with the same characteristics."} {"_id": "546574329c45069faf241ba98e378151d776c458", "title": "\"My Data Just Goes Everywhere: \" User Mental Models of the Internet and Implications for Privacy and Security", "text": "Many people use the Internet every day yet know little about how it really works. Prior literature diverges on how people\u2019s Internet knowledge affects their privacy and security decisions. We undertook a qualitative study to understand what people do and do not know about the Internet and how that knowledge affects their responses to privacy and security risks. Lay people, as compared to those with computer science or related backgrounds, had simpler mental models that omitted Internet levels, organizations, and entities. People with more articulated technical models perceived more privacy threats, possibly driven by their more accurate understanding of where specific risks could occur in the network. Despite these differences, we did not find a direct relationship between people\u2019s technical background and the actions they took to control their privacy or increase their security online.
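A NumPy sketch of the block-diagonalization step described in the zero-forcing beamforming abstract above: each user's precoder is chosen in the null space of the other users' stacked channels, so inter-user interference is ideally zero. The antenna counts and the i.i.d. channel model are illustrative assumptions.

```python
# Sketch of block-diagonalization (BD) zero-forcing precoding: user k's precoder
# lives in the null space of all other users' stacked channels (assumed setup).
import numpy as np

def bd_precoders(H_list, tx_antennas):
    precoders = []
    for k in range(len(H_list)):
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        _, _, Vh = np.linalg.svd(H_others)                # full SVD of the others' channels
        null_dim = tx_antennas - np.linalg.matrix_rank(H_others)
        Vnull = Vh.conj().T[:, tx_antennas - null_dim:]   # null-space basis of H_others
        precoders.append(Vnull)
    return precoders

rng = np.random.default_rng(0)
NT, NR, K = 8, 2, 3                    # 8 tx antennas, 2 rx antennas, 3 users (assumed)
H = [rng.normal(size=(NR, NT)) + 1j * rng.normal(size=(NR, NT)) for _ in range(K)]
W = bd_precoders(H, NT)
print(np.abs(H[1] @ W[0]).max())       # ~0: user 0's precoder leaks nothing to user 1
```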
Consistent with other work on user knowledge and experience, our study suggests a greater emphasis on policies and systems that protect privacy and security without relying too much on users\u2019 security practices."} {"_id": "7c50b02fe69da0381262a2a8e088fcb7f6399937", "title": "Thirty-Five Years of Research on Neuro-Linguistic Programming. NLP Research Data Base. State of the Art or Pseudoscientific Decoration?", "text": "The huge popularity of Neuro-Linguistic Programming (NLP) therapies and training has not been accompanied by knowledge of the empirical underpinnings of the concept. The article presents the concept of NLP in the light of empirical research in the Neuro-Linguistic Programming Research Data Base. From among 315 articles, the author selected 63 studies published in journals from the Master Journal List of ISI. Out of 33 studies, 18.2% show results supporting the tenets of NLP, 54.5% results non-supportive of the NLP tenets and 27.3% bring uncertain results. The qualitative analysis indicates the greater weight of the non-supportive studies and their greater methodological worth against the ones supporting the tenets. Results contradict the claim of an empirical basis of NLP."} {"_id": "240eed85e5badbc8693fcd61fd05df8792e964f0", "title": "Cloud Computing: A Review", "text": "Cloud computing is a development of parallel, distributed and grid computing which provides computing potential as a service to clients rather than a product. Clients can access software resources, valuable information and hardware devices as a subscribed and monitored service over a network through cloud computing. Due to the large number of requests for access to resources and service level agreements between cloud service providers and clients, a few burning issues in the cloud environment, such as QoS, power, privacy and security, VM migration, and resource allocation and scheduling, need the attention of the research community. Resource allocation among multiple clients has to be ensured as per service level agreements. Several techniques have been invented and tested by the research community for the generation of optimal schedules in cloud computing. A few promising approaches such as metaheuristic, greedy, heuristic, and genetic techniques are applied for task scheduling in several parallel and distributed systems. This paper presents a review on scheduling proposals in the cloud environment."} {"_id": "3a4fe8f246ad7461e52741780010e288c147906c", "title": "Patient satisfaction and medication adherence assessment amongst patients at the diabetes medication therapy adherence clinic.", "text": "AIMS\nTo determine the satisfaction and current adherence status of patients with diabetes mellitus at the diabetes Medication Therapy Adherence Clinic and the relationship between patient satisfaction and adherence.\n\n\nMETHODS\nThis cross-sectional descriptive study was carried out at three government hospitals in the state of Johor, Malaysia. Patients' satisfaction was measured using the Patient Satisfaction with Pharmaceutical Care Questionnaire; medication adherence was measured using the eight-item Morisky Medication Adherence Scale.\n\n\nRESULTS\nOf n=165 patients, 87.0% were satisfied with the DMTAC service (score 60-100), with a mean score of 76.8. On the basis of the MMAS, 29.1% had a medium rate and 26.1% had a high rate of adherence.
Females are 3.02 times more satisfied with the pharmaceutical service compared to males (OR 3.03, 95% CI 1.12-8.24, p<0.05) and non-Malays are less satisfied with pharmaceutical care provided during DMTAC compared to Malays (OR 0.32, 95% CI 0.12-0.85, p<0.05). Older patients (age group \u226560 years) were 3.29 times more likely to adhere to their medications (OR 3.29, 95% CI 1.10-9.86, p<0.05). Females were more adherent than males (OR 2.33, 95% CI 1.10-4.93, p<0.05) and patients with a secondary level of education were 2.72 times more adherent to their medications compared to those with primary school or no formal education (OR 2.72, 95% CI 1.13-6.55, p<0.05). There is a significant (p<0.01), positive fair correlation (r=0.377) between satisfaction and adherence.\n\n\nCONCLUSION\nPatients were highly satisfied with the DMTAC service, while their adherence levels were low. There is an association between patient satisfaction and adherence."} {"_id": "176256ef634abe50ec11a3ef1538b4e485608a66", "title": "Wind Turbine Structural Health Monitoring: A Short Investigation Based on SCADA Data", "text": "The use of offshore wind farms has been growing in recent years, as steadier and higher wind speeds can be generally found over water compared to land. Moreover, as human activities tend to complicate the construction of land wind farms, offshore locations, which can be found more easily near densely populated areas, can be seen as an attractive choice. However, the cost of an offshore wind farm is relatively high, and therefore their reliability is crucial if they ever need to be fully integrated into the energy arena. As wind turbines have become more complex, efficient, and expensive structures, they require more sophisticated monitoring systems, especially in offshore sites where the financial losses due to failure could be substantial. This paper presents the preliminary analysis of supervisory control and data acquisition (SCADA) extracts from the Lillgrund wind farm for the purposes of structural health monitoring. A machine learning approach is applied in order to produce individual power curves, and then predict measurements of the power produced by each wind turbine from the measurements of the other wind turbines in the farm. A comparison between neural network and Gaussian process regression is also made."} {"_id": "2c03df8b48bf3fa39054345bafabfeff15bfd11d", "title": "Deep Residual Learning for Image Recognition", "text": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset.
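A minimal PyTorch sketch of the residual reformulation described in the deep residual learning abstract above: the block learns a residual function F(x) and outputs F(x) + x through an identity shortcut. The channel count and block layout are illustrative; this is not the paper's full 152-layer architecture.

```python
# Minimal residual block: learn F(x), output F(x) + x (identity shortcut).
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # residual connection: F(x) + x

x = torch.randn(1, 64, 56, 56)
print(BasicBlock(64)(x).shape)      # torch.Size([1, 64, 56, 56])
```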
Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."} {"_id": "5763c2c62463c61926c7e192dcc340c4691ee3aa", "title": "Learning a Deep Convolutional Network for Image Super-Resolution", "text": "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage."} {"_id": "83de43bc849ad3d9579ccf540e6fe566ef90a58e", "title": "A Public Domain Dataset for Human Activity Recognition using Smartphones", "text": "Human-centered computing is an emerging research field that aims to understand human behavior and integrate users and their social context with computer systems. One of the most recent, challenging and appealing applications in this framework consists of sensing human body motion using smartphones to gather context information about people's actions. In this context, we describe in this work an Activity Recognition database, built from the recordings of 30 subjects doing Activities of Daily Living (ADL) while carrying a waist-mounted smartphone with embedded inertial sensors, which is released to the public domain on a well-known on-line repository. Results, obtained on the dataset by exploiting a multiclass Support Vector Machine (SVM), are also acknowledged."} {"_id": "ba02ed6083ec066e4a7494883b3ef373ff78e802", "title": "On deep learning-based channel decoding", "text": "We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity."} {"_id": "9c263ada2509f84c1f7c49504742c98f04d827bc", "title": "Automated plant Watering system", "text": "In daily operations related to farming or gardening, watering is the most important cultural practice and the most labor-intensive task. No matter the weather, whether too hot and dry or too cloudy and wet, you want to be able to control the amount of water that reaches your plants. Modern watering systems could be effectively used to water plants when they need it. But this manual process of watering requires two important aspects to be considered: when and how much to water.
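A sketch of the three-layer low-to-high-resolution mapping described in the image super-resolution abstract above: convolutions for patch extraction, non-linear mapping, and reconstruction. The 9-1-5 kernel configuration is an assumption (a commonly used setting), not taken verbatim from this abstract.

```python
# Sketch of an end-to-end SR mapping: three convolutions for patch extraction,
# non-linear mapping, and reconstruction (kernel sizes are assumed).
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),  # patch extraction
            nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(),  # non-linear mapping
            nn.Conv2d(32, 1, kernel_size=5, padding=2),             # reconstruction
        )

    def forward(self, x):           # x: bicubic-upscaled low-resolution image
        return self.net(x)

y = SRCNN()(torch.randn(1, 1, 33, 33))
print(y.shape)                      # same spatial size; trained with MSE to the HR image
```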
In order to replace manual activities and make the gardener's work easier, we have created an automatic plant watering system. By adding an automated plant watering system to your garden or agricultural field, you will help all of your plants reach their fullest potential while conserving water. Using sprinklers, drip emitters, or a combination of both, we can design a system that is ideal for every plant in our yard. For the implementation of the automatic plant watering system, we have used a combination of sprinkler systems, pipes, and nozzles. This project uses the ATmega328 microcontroller. It is programmed to sense the moisture level of the plants at particular instants of time; if the moisture content is less than a specified threshold, predefined according to the particular plant's water needs, then the desired amount of water is supplied until the moisture reaches the threshold. Generally, plants need to be watered twice a day, morning and evening. Thus, the microcontroller is programmed to water plants two times per day. The system is designed in such a way that it reports its current state and reminds the user to add water to the tank. All these notifications are made through a mobile application. We hope that through this prototype we all can enjoy having plants, without worrying about absence or forgetfulness."} {"_id": "a22eb623224133c5966aadc2b11d9618d8270b3b", "title": "Energy-Efficient I/O Thread Schedulers for NVMe SSDs on NUMA", "text": "Non-volatile memory express (NVMe) based SSDs and the NUMA platform are widely adopted in servers to achieve faster storage speed and more powerful processing capability. As of now, very little research has been conducted to investigate the performance and energy efficiency of the state-of-the-art NUMA architecture integrated with NVMe SSDs, an emerging technology used to host parallel I/O threads. As this technology continues to be widely developed and adopted, we need to understand the runtime behaviors of such systems in order to design software runtime systems that deliver optimal performance while consuming only the necessary amount of energy. This paper characterizes the runtime behaviors of a Linux-based NUMA system employing multiple NVMe SSDs. Our comprehensive performance and energy-efficiency study using massive numbers of parallel I/O threads shows that the penalty due to CPU contention is much smaller than that due to remote access of NVMe SSDs. Based on this insight, we develop a dynamic "lesser evil" algorithm called ESN, to minimize the impact of these two types of penalties. ESN is an energy-efficient profiling-based I/O thread scheduler for managing I/O threads accessing NVMe SSDs on NUMA systems. Our empirical evaluation shows that ESN can achieve optimal I/O throughput and latency while consuming up to 50% less energy and using fewer CPUs."} {"_id": "3ec46c96b22b55cfc4187a27d6a0b21a2e44f955", "title": "Cascade Mask Generation Framework for Fast Small Object Detection", "text": "Detecting small objects is a challenging task. Existing CNN-based objection detection pipeline faces such a dilemma: using a high-resolution image as input incurs high computational cost, but using a low-resolution image as input loses the feature representation of small objects and therefore leads to low accuracy. In this work, we propose a cascade mask generation framework to tackle this issue. The proposed framework takes in multi-scale images as input and processes them in ascending order of the scale.
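A pseudo-firmware sketch of the threshold logic in the plant-watering abstract above. The real system runs on an ATmega328; read_moisture(), open_valve(), and notify() below are hypothetical stand-ins for the sensor, pump, and mobile-app interfaces, and the threshold value is an assumed per-plant setting.

```python
# Pseudo-firmware sketch of threshold-based watering (stubs simulate hardware).
import time

MOISTURE_THRESHOLD = 40      # percent; an assumed per-plant setting
_moisture = 30               # stub state standing in for the soil-moisture sensor

def read_moisture():
    return _moisture                     # stub: would sample the sensor via the ADC

def open_valve(seconds):
    global _moisture
    _moisture += 5                       # stub: watering raises soil moisture

def notify(message):
    print(message)                       # stub: would push to the mobile application

def watering_cycle():
    while read_moisture() < MOISTURE_THRESHOLD:
        open_valve(seconds=5)            # supply water in small doses
        time.sleep(0.1)                  # let the reading settle (shortened for demo)
    notify("watering done; moisture at threshold")

for _ in range(2):                       # twice a day; firmware would use an RTC/scheduler
    watering_cycle()
```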
Each processing stage outputs object proposals as well as a region-of-interest (RoI) mask for the next stage. With RoI convolution, the masked regions can be excluded from the computation in the next stage. The procedure continues until the largest scale image is processed. Finally, the object proposals generated from multiple scales are classified by a post classifier. Extensive experiments on the Tsinghua-Tencent 100K traffic sign benchmark demonstrate that our approach achieves state-of-the-art small object detection performance at a significantly improved speed-accuracy tradeoff compared with previous methods."} {"_id": "2db168f14f3169b8939b843b9f4caf78c3884fb3", "title": "Broadband Bent Triangular Omnidirectional Antenna for RF Energy Harvesting", "text": "In this letter, a broadband bent triangular omnidirectional antenna is presented for RF energy harvesting. The antenna has a bandwidth for VSWR \u2264 2 from 850 MHz to 1.94 GHz. The antenna is designed to receive both horizontally and vertically polarized waves and has a stable radiation pattern over the entire bandwidth. The antenna has also been optimized for the energy harvesting application, and it is designed for 100 \u03a9 input impedance to provide passive voltage amplification and impedance matching to the rectifier. A peak efficiency of 60% and 17% is obtained for a load of 500 \u03a9 at 980 and 1800 MHz, respectively. At a cell site, while harvesting all bands simultaneously, an open-circuit voltage of 3.76 V and 1.38 V across a load of 4.3 k\u03a9 are obtained at a distance of 25 m using an array of two elements of the rectenna."} {"_id": "484ac571356251355d3e24dcb23bdd6d0911bd94", "title": "Graph Indexing: Tree + Delta >= Graph", "text": "Recent scientific and technological advances have witnessed an abundance of structural patterns modeled as graphs. As a result, it is of special interest to process graph containment queries effectively on large graph databases. Given a graph database G, and a query graph q, the graph containment query is to retrieve all graphs in G which contain q as subgraph(s). Due to the vast number of graphs in G and the nature of complexity for subgraph isomorphism testing, it is desirable to make use of high-quality graph indexing mechanisms to reduce the overall query processing cost. In this paper, we propose a new cost-effective graph indexing method based on frequent tree-features of the graph database. We analyze the effectiveness and efficiency of tree as indexing feature from three critical aspects: feature size, feature selection cost, and pruning power. In order to achieve better pruning ability than existing graph-based indexing methods, we select, in addition to frequent tree-features (Tree), a small number of discriminative graphs (\u2206) on demand, without a costly graph mining process beforehand. Our study verifies that (Tree+\u2206) is a better choice than graph for indexing purposes, denoted (Tree+\u2206 \u2265Graph), to address the graph containment query problem. It has two implications: (1) the index construction by (Tree+\u2206) is efficient, and (2) the graph containment query processing by (Tree+\u2206) is efficient.
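A sketch of the pruning principle behind feature-based graph indexes such as (Tree+Delta) above: a database graph can contain the query only if it contains every indexed feature of the query. The inverted index over feature identifiers below is a deliberate simplification; real systems mine frequent tree features and verify surviving candidates with subgraph-isomorphism tests.

```python
# Candidate filtering for graph containment queries via an inverted feature index
# (simplified illustration of the filter step; verification is omitted).
from collections import defaultdict

# feature -> set of graph ids containing it (features would be mined tree patterns)
db_features = {0: {"t1", "t2"}, 1: {"t1", "t3"}, 2: {"t2", "t3", "t4"}}
index = defaultdict(set)
for gid, feats in db_features.items():
    for f in feats:
        index[f].add(gid)

def candidates(query_features, all_ids):
    """Intersect posting lists; survivors still need exact subgraph-isomorphism checks."""
    cand = set(all_ids)
    for f in query_features:
        cand &= index.get(f, set())   # graphs missing any query feature are pruned
    return cand

print(candidates({"t1", "t3"}, db_features.keys()))  # -> {1}
```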
Our experimental studies demonstrate that (Tree+\u2206) has a compact index structure, achieves an order of magnitude better performance in index construction, and most importantly, outperforms up-to-date graph-based indexing methods, gIndex and C-Tree, in graph containment query processing."} {"_id": "2063222c5ce0dd233fa3056ddc245fca26bd5cf2", "title": "Deep learning based human action recognition: A survey", "text": "Human action recognition has attracted much attention because of its great application potential. With the rapid development of computer performance and the Internet, methods of human action recognition based on deep learning have become mainstream and develop at a breathless pace. This paper gives a novel, reasonable taxonomy and a review of deep learning human action recognition methods based on color videos, skeleton sequences and depth maps. In addition, some datasets and effective tricks in action recognition deep learning methods will be introduced, and also the development trend is discussed."} {"_id": "22749899b50c5113516b9820f875a580910aa746", "title": "A compact dual-band (L1/L2) GPS antenna design", "text": "A small slot-loaded patch antenna design developed for receiving both L1- and L2-band GPS signals is discussed. The dual band coverage is achieved by using a patch mode at L2 band and a slot mode at L1 band. High dielectric material and meandered slot line are employed to reduce the antenna size down to 25.4 mm in diameter. The RHCP is achieved by combining two orthogonal modes via a small 0\u00b0-90\u00b0 hybrid chip. Both patch and slot modes share a single proximity probe conveniently located on the side of the antenna (Fig.1). This paper discusses the design procedure as well as simulated antenna performance."} {"_id": "d01cf53d8561406927613b3006f4128199e05a6a", "title": "PROBABILISTIC NEURAL NETWORK", "text": "This paper reports results of an artificial neural network for robot navigation tasks. Machine learning methods have proven useful in many complex problems concerning mobile robot control. In particular, we deal with the well-known strategy of navigating by \u201cwall-following\u201d. In this study, a probabilistic neural network (PNN) structure was used for robot navigation tasks. The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks, and with the results of previous studies on robot navigation tasks using the same dataset. It was observed that the PNN achieved the best classification accuracy, 99.635%, on the same dataset."} {"_id": "61f8cca7f449990e9e3bc541c1fac71fd17fcfe4", "title": "Trends and Challenges in CMOS Design for Emerging 60 GHz WPAN Applications", "text": "The extensive growth of the wireless communications industry is creating a big market opportunity. Wireless operators are currently searching for new solutions which would be implemented into the existing wireless communication networks to provide the broader bandwidth, the better quality and new value-added services. In the last decade, most commercial efforts were focused on the 1-10 GHz spectrum for voice and data applications for mobile phones and portable computers (Niknejad & Hashemi, 2008). Nowadays, the interest is growing in applications that use high rate wireless communications. Multigigabit-per-second communication requires a very large bandwidth. The Ultra-Wide Band (UWB) technology was basically used for this issue.
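The core of a probabilistic neural network as used in the robot-navigation abstract above: one Gaussian Parzen-window density estimate per class, with the class of highest estimated density winning. The smoothing width sigma and the toy data are assumptions.

```python
# Core of a PNN classifier: per-class Gaussian kernel density, argmax decision.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    preds = []
    classes = np.unique(y_train)
    for x in X_test:
        densities = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = ((Xc - x) ** 2).sum(axis=1)
            densities.append(np.exp(-d2 / (2 * sigma**2)).mean())  # pattern + summation layers
        preds.append(classes[int(np.argmax(densities))])           # decision layer
    return np.array(preds)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_predict(X, y, np.array([[0.1, 0.2], [2.9, 3.1]])))  # -> [0 1]
```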
However, this technology has some shortcomings, including problems with interference and a limited data rate. Furthermore, the 3\u20135 GHz spectrum is relatively crowded with many interferers appearing in the WiFi bands (Niknejad & Hashemi, 2008). The use of the millimeter wave frequency band is considered the most promising technology for broadband wireless. In 2001, the Federal Communications Commission (FCC) released a set of rules governing the use of spectrum between 57 and 66 GHz (Baldwin, 2007). Hence, a large bandwidth coupled with high allowable transmit power enables high data rates. Traditionally, the implementation of 60 GHz radio technology required expensive technologies based on III-V compound semiconductors such as InP and GaAs (Smulders et al., 2007). The rapid progress of CMOS technology has enabled its application in millimeter wave applications. Transistors have now become small enough, and consequently fast enough. As a result, CMOS technology has become one of the most attractive choices for implementing 60 GHz radio due to its low cost and high level of integration (Doan et al., 2005). Despite the advantages of CMOS technology, the design of a 60 GHz CMOS transceiver exhibits several challenges and difficulties that the designers must overcome. This chapter aims to explore the potential of the 60 GHz band for use in emerging multi-gigabit wireless applications. The chapter presents a quick overview of the state-of-the-art of 60 GHz radio technology and its potential to provide high-data-rate, short-range wireless communications. The chapter is organized as follows. Section 2 presents an overview of the 60 GHz band. The advantages are presented to highlight the performance characteristics of this band. The opportunities of the physical layer of the IEEE
The sensitivity and precision of the decision-tree algorithm match the performance of more computationally intensive algorithms such as hidden Markov models and support vector machines. Biologically important behavioural activities in housed dairy cows can be classified accurately using a simple decision-tree algorithm applied to data collected from a neck-mounted tri-axial accelerometer. The algorithm could form part of a real-time behavioural monitoring system in order to automatically detect dairy cow health and welfare status."} {"_id": "11acf1d410fd12649f73fd42f1a22c5fa6746191", "title": "Different models for model matching: An analysis of approaches to support model differencing", "text": "Calculating differences between models is an important and challenging task in Model Driven Engineering. Model differencing involves a number of steps starting with identifying matching model elements, calculating and representing their differences, and finally visualizing them in an appropriate way. In this paper, we provide an overview of the fundamental steps involved in the model differencing process and summarize the advantages and shortcomings of existing approaches for identifying matching model elements. To assist potential users in selecting one of the existing methods for the problem at stake, we investigate the trade-offs these methods impose in terms of accuracy and effort required to implement each one of them."} {"_id": "75b5bee6f5d2cd8ef1928bf99da5e6d26addfe84", "title": "Wearable Computing for Health and Fitness: Exploring the Relationship between Data and Human Behaviour", "text": "Health and fitness wearable technology has recently advanced, making it easier for an individual to monitor their behaviours. Previously, self-generated data interacted with the user to motivate positive behaviour change, but issues arise when relating this to long-term retention of wearable devices. Previous studies within this area are discussed. We also consider a new approach where data is used to support instead of motivate, through monitoring and logging to encourage reflection. Based on the issues highlighted, we then make recommendations on the direction in which future work could be most beneficial."} {"_id": "c924e8c66c1a8255abbeb3de28e3a714cb58f934", "title": "Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science", "text": "As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning\u2014pipeline design. We implement an open source Tree-based Pipeline Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a series of simulated and real-world benchmark data sets. In particular, we show that TPOT can design machine learning pipelines that provide a significant improvement over a basic machine learning analysis while requiring little to no input or prior knowledge from the user. We also address the tendency for TPOT to design overly complex pipelines by integrating Pareto optimization, which produces compact pipelines without sacrificing classification accuracy.
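TPOT is an open-source Python library, so its use can be sketched directly; the dataset and the small evolutionary budget below are arbitrary example choices, not settings from the paper's benchmarks.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Small GA budget so the example finishes quickly; real runs use far more
tpot = TPOTClassifier(generations=5, population_size=20,
                      random_state=42, verbosity=2)
tpot.fit(X_tr, y_tr)
print(tpot.score(X_te, y_te))
tpot.export('best_pipeline.py')  # writes the winning pipeline as plain Python
```

The exported file is ordinary scikit-learn code, which is what makes the tool accessible to non-experts: the optimized pipeline can be inspected and rerun without TPOT installed.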
As such, this work represents an important step toward fully automating machine learning pipeline design."} {"_id": "8391927148bd67b82afe53c8d65030c675f8953d", "title": "Brain basis of early parent-infant interactions: psychology, physiology, and in vivo functional neuroimaging studies.", "text": "Parenting behavior critically shapes human infants' current and future behavior. The parent-infant relationship provides infants with their first social experiences, forming templates of what they can expect from others and how to best meet others' expectations. In this review, we focus on the neurobiology of parenting behavior, including our own functional magnetic resonance imaging (fMRI) brain imaging experiments of parents. We begin with a discussion of background, perspectives and caveats for considering the neurobiology of parent-infant relationships. Then, we discuss aspects of the psychology of parenting that are significantly motivating some of the more basic neuroscience research. Following that, we discuss some of the neurohormones that are important for the regulation of social bonding, and the dysregulation of parenting with cocaine abuse. Then, we review the brain circuitry underlying parenting, proceeding from relevant rodent and nonhuman primate research to human work. Finally, we focus on a study-by-study review of functional neuroimaging studies in humans. Taken together, this research suggests that networks of highly conserved hypothalamic-midbrain-limbic-paralimbic-cortical circuits act in concert to support aspects of parent response to infants, including the emotion, attention, motivation, empathy, decision-making and other thinking that are required to navigate the complexities of parenting. Specifically, infant stimuli activate basal forebrain regions, which regulate brain circuits that handle specific nurturing and caregiving responses and activate the brain's more general circuitry for handling emotions, motivation, attention, and empathy--all of which are crucial for effective parenting. We argue that an integrated understanding of the brain basis of parenting has profound implications for mental health."} {"_id": "9d17e897e8344d1cf42a322359b48d1ff50b4aef", "title": "Learning to Fuse Things and Stuff", "text": "We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict in a single feed-forward pass both things and stuff segmentations. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation as well as on the individual semantic and instance segmentation tasks."} {"_id": "ceb2f3a1d43251058393575c3cf7139cd72b787f", "title": "How to get mature global virtual teams: a framework to improve team process management in distributed software teams", "text": "Managing global software development teams is not an easy task because of the additional problems and complexities that have to be taken into account. This paper defines VTManager, a methodology that provides a set of efficient practices for global virtual team management in software development projects. 
These practices integrate software development techniques in global environments with others such as explicit practices for global virtual team management, definition of skills and abilities needed to work in these teams, availability of collaborative work environments and shared knowledge management practices. The results obtained and the lessons learned from implementing VTManager in a pilot project to develop software tools for collaborative work in rural environments are also presented. This project was carried out by geographically distributed teams involving people from seven countries with a high level of virtualness."} {"_id": "4e0398207529e36c08fc9da7e0a4a1ad41ef1d37", "title": "An iterative maximum-likelihood polychromatic algorithm for CT", "text": "A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography is presented. The algorithm prevents beam hardening artifacts by incorporating a polychromatic acquisition model. The continuous spectrum of the X-ray tube is modeled as a number of discrete energies. The energy dependence of the attenuation is taken into account by decomposing the linear attenuation coefficient into a photoelectric component and a Compton scatter component. The relative weight of these components is constrained based on prior material assumptions. Excellent results are obtained for simulations and for phantom measurements. Beam-hardening artifacts are effectively eliminated. The relation with existing algorithms is discussed. The results confirm that improving the acquisition model assumed by the reconstruction algorithm results in reduced artifacts. Preliminary results indicate that metal artifact reduction is a very promising application for this new algorithm."} {"_id": "93db2c50e9087c2d83c3345ef0e59dc3e6bf3707", "title": "Optimal Charging in Wireless Rechargeable Sensor Networks", "text": "Recent years have witnessed several new promising technologies to power wireless sensor networks, which motivate some key topics to be revisited. By integrating sensing and computation capabilities into traditional radio-frequency identification (RFID) tags, the Wireless Identification and Sensing Platform (WISP) is an open-source platform acting as a pioneering experimental platform of wireless rechargeable sensor networks. Different from traditional tags, an RFID-based wireless rechargeable sensor node needs to charge its onboard energy storage above a threshold to power its sensing, computation, and communication components. Consequently, such a charging delay imposes a unique design challenge for deploying wireless rechargeable sensor networks. In this paper, we tackle this problem by planning the optimal movement strategy of the mobile RFID reader, such that the time to charge all nodes in the network above their energy threshold is minimized. We first propose an optimal solution using the linear programming (LP) method. To further reduce the computational complexity, we then introduce a heuristic solution with a provable approximation ratio of (1 + \u03b8)/(1 - \u03b5) by discretizing the charging power on a 2-D space. Through extensive evaluations, we demonstrate that our design outperforms the set-cover-based design by an average of 24.7%, whereas the computational complexity is O((N/\u03b5)^2).
Finally, we consider two practical issues in system implementation and provide guidelines for parameter setting."} {"_id": "c8790e54ad744731893c1a44664f2495494ca9f8", "title": "CPW to waveguide transition with tapered slotline probe", "text": "A new CPW to waveguide transition is developed based on the concept of tapered slot antenna and E-plane probe coupling. The transition consists of a tapered slotline probe and a slotline to CPW matching section. The current design has the advantages of broad bandwidth, compact size, low fabrication cost, and high reliability. The characteristics of a prototype transition are investigated by numerical simulation on a Duroid substrate and WR-90 waveguide. The back-to-back combination is measured to verify agreement with the simulated results and the realization of this design."} {"_id": "043eb1fbce9890608b859fb45e321829e518c26e", "title": "A novel secure hash algorithm for public key digital signature schemes", "text": "Hash functions are the most widespread among all cryptographic primitives, and are currently used in multiple cryptographic schemes and in security protocols. This paper presents a new secure hash algorithm called SHA-192. It uses a famous secure hash algorithm given by the National Institute of Standards and Technology (NIST). The basic design of SHA-192 is to have an output length of 192 bits. SHA-192 has been designed to satisfy different levels of enhanced security and to resist advanced SHA attacks. The security analysis of SHA-192 is compared with that of the old algorithm given by NIST, and shows greater security and excellent results, as presented in our discussion. In this paper the digital signature algorithm given by NIST has been modified using the proposed SHA-192. Using the proposed SHA-192 hash algorithm, a new digital signature scheme is also proposed. SHA-192 can be used in many applications such as public key cryptosystems, digital signcryption, message authentication codes, random number generation, and in the security architecture of upcoming wireless devices like software defined radio."} {"_id": "a24d72bd0d08d515cb3e26f94131d33ad6c861db", "title": "Ethical Challenges in Data-Driven Dialogue Systems", "text": "The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems."} {"_id": "62646f9450a3c95e745c1d2bb056dcf851acdaad", "title": "Speed-Security Tradeoffs in Blockchain Protocols", "text": "Transaction processing speed is one of the major considerations in cryptocurrencies that are based on proof of work (POW) such as Bitcoin.
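The SHA-192 construction itself is not given in the abstract, so it is not reproduced here. As a point of reference only, the sketch below shows the generic (and, for security purposes, much weaker) way to obtain a 192-bit digest in Python by truncating a standard hashlib digest; this is not the paper's algorithm.

```python
import hashlib

def digest_192(message: bytes) -> str:
    """Truncate a standard 256-bit digest to 192 bits (24 bytes)."""
    full = hashlib.sha256(message).digest()
    return full[:24].hex()

d = digest_192(b"attack at dawn")
print(d, len(d) * 4, "bits")  # 48 hex chars = 192 bits
```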
At an intuitive level, it is widely understood that processing speed is at odds with the security aspects of the underlying POW-based consensus mechanism of such protocols; nevertheless, the tradeoff between the two properties is still not well understood. In this work, motivated by recent work [9] in the formal analysis of the Bitcoin backbone protocol, we investigate the tradeoff between provable security and transaction processing speed, viewing the latter as a function of the block generation rate. We introduce a new formal property of blockchain protocols, called chain growth, and we show it is fundamental for arguing the security of a robust transaction ledger. We strengthen the results of [9], showing for the first time that reasonable security bounds hold even for the faster (than Bitcoin's) block generation rates that have been adopted by several major alt-coins (including Litecoin, Dogecoin, etc.). We then provide a first formal security proof of the GHOST rule for blockchain protocols. The GHOST rule was put forth in [14] as a mechanism to improve transaction processing speed, and a variant of the rule is adopted by Ethereum. Our security analysis of the GHOST backbone matches our new analysis for Bitcoin in terms of the common prefix property but falls short in terms of chain growth, where we provide an attack that substantially reduces the chain speed compared to Bitcoin. While our results establish the GHOST variant as a provably secure alternative to standard Bitcoin-like transaction ledgers, they also highlight potential shortcomings in terms of processing speed compared to Bitcoin. We finally present attacks and simulation results against blockchain protocols (both for Bitcoin and GHOST) that present natural upper barriers for the speed-security tradeoff. By combining our positive and negative results, we map the speed/security domain for blockchain protocols and list open problems for future work."} {"_id": "883964b5ce2b736b504cbe239e6a3b2dec8626c1", "title": "Classification of teeth in cone-beam CT using deep convolutional neural network", "text": "Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. For examining the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. For reducing the overtraining effect, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the augmented training data by image rotation and intensity transformation was 88.8%.
Compared with the result without data augmentation, data augmentation resulted in an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike the conventional methods, the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful in automatic filing of dental charts for forensic identification."} {"_id": "e99b38c64aec6109a126f5f6e192b7ed7f1afb58", "title": "Harvesting rib cartilage grafts for secondary rhinoplasty.", "text": "BACKGROUND\nReconstruction of the nasal osseocartilaginous framework is the foundation of successful secondary rhinoplasty.\n\n\nMETHODS\nAchieving this often requires large quantities of cartilage to correct both contour deformities and functional problems caused by previous procedures. Satisfactory and consistent long-term results rely on using grafts with low resorption rates and sufficient strength to offer adequate support. Auricular cartilage, irradiated cartilage, and alloplastic materials have all been used as implantable grafts with limited success.\n\n\nRESULTS\nIn the senior author's experience (J.P.G.), rib cartilage has proven to be a reliable, abundant, and relatively accessible donor with which to facilitate successful secondary rhinoplasty surgery.\n\n\nCONCLUSIONS\nThe authors describe in detail the techniques that they have found to be integral in harvesting rib cartilage grafts for secondary rhinoplasty."} {"_id": "1aa02f027663a626f3aa17ed9bab52377a23f634", "title": "High-Dimensional Continuous Control Using Generalized Advantage Estimation", "text": "This paper is concerned with developing policy gradient methods that gracefully scale up to challenging problems with high-dimensional state and action spaces. Towards this end, we develop a scheme that uses value functions to substantially reduce the variance of policy gradient estimates, while introducing a tolerable amount of bias. This scheme, which we call generalized advantage estimation (GAE), involves using a discounted sum of temporal difference residuals as an estimate of the advantage function, and can be interpreted as a type of automated cost shaping. It is simple to implement and can be used with a variety of policy gradient methods and value function approximators. Along with this variance-reduction scheme, we use trust region algorithms to optimize the policy and value function, both represented as neural networks. We present experimental results on a number of highly challenging 3D locomotion tasks, where our approach learns complex gaits for bipedal and quadrupedal simulated robots. We also learn controllers for the biped getting up off the ground. In contrast to prior work that uses hand-crafted low-dimensional policy representations, our neural network policies map directly from raw kinematics to joint torques."} {"_id": "96fccad0177530b81941d208355887de2d658d2c", "title": "Policy Gradient Methods for Robotics", "text": "The acquisition and improvement of motor skills and control policies for robotics from trial and error are of essential importance if robots are ever to leave precisely pre-structured environments. However, to date only a few existing reinforcement learning methods have been scaled into the domains of high-dimensional robots such as manipulators, legged or humanoid robots.
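The GAE abstract above defines the estimator as a discounted sum of temporal-difference residuals, which is short enough to sketch directly. The input arrays are illustrative; values carries one extra entry, the bootstrap value of the state following the rollout.

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """rewards: length T; values: length T+1. Returns advantages, length T."""
    T = len(rewards)
    # TD residuals: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    deltas = np.asarray(rewards) + gamma * values[1:] - values[:-1]
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):        # backward (gamma * lam)-discounted sum
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

print(gae([1.0] * 5, np.zeros(6)))
```

Setting lam=0 recovers the one-step TD advantage (low variance, more bias), while lam=1 recovers the Monte-Carlo advantage (high variance, no extra bias), which is the bias-variance trade-off the abstract refers to.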
Policy gradient methods remain one of the few exceptions and have found a variety of applications. Nevertheless, the application of such methods is not without peril if done in an uninformed manner. In this paper, we give an overview of learning with policy gradient methods for robotics with a strong focus on recent advances in the field. We outline previous applications to robotics and show how the most recently developed methods can significantly improve learning performance. Finally, we evaluate our most promising algorithm in the application of hitting a baseball with an anthropomorphic arm."} {"_id": "afbe59950a7d452ce0a3f412ee865f1e1d94d9ef", "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "text": "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations."} {"_id": "085fb3acabcbf80ef1bf47daec50d246475b072b", "title": "Infinite-Horizon Policy-Gradient Estimation", "text": "Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm\u2019s chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter \u03b2 \u2208 [0, 1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter \u03b2 is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple agents, higher-order derivatives, and a version for training stochastic policies with internal states.
In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward."} {"_id": "286955da96c1c5c38fe3853da3a877c1d6607c9a", "title": "Privacy-Preserving Public Auditing For Secure Cloud Storage", "text": "By using cloud storage, users can access applications, services, and software whenever they require them over the internet. Users can store their data remotely in cloud storage and benefit from on-demand services and applications drawn from its resources. The cloud must ensure the integrity and security of the users' data. However, issues of data integrity and privacy can arise with cloud storage. To overcome these issues, we present a public auditing process for cloud storage in which users can make use of a third-party auditor (TPA) to check the integrity of their data. Beyond verification of data integrity, the proposed system also supports data dynamics. The work that has been done in this line lacks data dynamics and true public auditability. The auditing task monitors data modifications, insertions and deletions. The proposed system supports public auditability and data dynamics, and multiple TPAs are used for the auditing process. We also extend our concept to ring signatures, in which the HARS scheme is used. A Merkle hash tree is used to improve block-level authentication. Further, we extend our result to enable the TPA to perform audits for multiple users simultaneously through batch auditing."} {"_id": "c7cccff4318f8a3e7d6d48bd9e91819d1186ba18", "title": "Design Of Ternary Logic Gates Using CNTFET", "text": "This paper presents a novel design of ternary logic gates such as STI, PTI, NTI, NAND and NOR using carbon nanotube field effect transistors. Ternary logic is a promising alternative to the conventional binary logic design technique, since it makes it possible to accomplish simplicity and energy efficiency in modern digital design due to the reduced circuit overhead, such as interconnects and chip area. In this paper, a novel design of basic ternary logic gates based on CNTFETs is proposed. Keywords\u2014 Carbon nano-tube Field Effect Transistor (CNTFET), MVL (multi-valued logic), Ternary logic, STI, NTI, PTI"} {"_id": "dd0b5dd2d15ebc6a5658c75ec102b64e359c674d", "title": "Sampled-Point Network for Classification of Deformed Building Element Point Clouds", "text": "Search-and-rescue (SAR) robots operating in post-disaster urban areas need to accurately identify physical site information to perform navigation, mapping and manipulation tasks. This can be achieved by acquiring a 3D point cloud of the environment and performing object recognition from the point cloud data. However, this task is complicated by the unstructured environments and potentially-deformed objects encountered during disaster relief operations. Current 3D object recognition methods rely on point cloud input acquired under suitable conditions and do not consider deformations such as outlier noise, bending and truncation. This work introduces a deep learning architecture for 3D class recognition from point clouds of deformed building elements. The classification network, consisting of stacked convolution and average pooling layers applied directly to point coordinates, was trained using point clouds sampled from a database of mesh models.
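The auditing abstract above relies on a Merkle hash tree for block-level authentication. The sketch below builds such a tree over data blocks and returns its root; the hash function and the duplicate-last-node padding rule are common choices assumed here, not details from the paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Hash the blocks, then repeatedly hash pairs until one root remains."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:               # odd level: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-%d" % i for i in range(8)]
print(merkle_root(blocks).hex())
```

An auditor holding only the root can verify any single block from a logarithmic-size path of sibling hashes, which is why the structure suits block-level integrity checks.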
The proposed method achieves robustness to input variability using point sorting, resampling, and rotation normalization techniques. Experimental results on synthetically-deformed object datasets show that the proposed method outperforms the conventional deep learning methods in terms of classification accuracy and computational efficiency."} {"_id": "bb988b89e77179c24a1d69e7a97655de1a120855", "title": "Real-time 3D Camera Tracking for Industrial Augmented Reality Applications", "text": "In this paper we present a new solution for real-time 3D camera pose estimation for Augmented Reality (AR) applications. The tracking system does not require special engineering of the environment, such as placing markers or beacons. The required input data are a CAD model of the target object to be tracked, and a calibrated reference image of it. We consider the whole process of camera tracking, and developed both an autonomous initialization and a real-time tracking procedure. The system is robust to abrupt camera motions, strong changes of the lighting conditions and partial occlusions. To avoid typical jitter and drift problems, the tracker performs feature matching not only in an iterative manner, but also against stable reference features, which are dynamically cached in case of high confidence. We present experimental results generated with the help of synthetic ground truth, real off-line and on-line image sequences using different types of target objects."} {"_id": "6d016cc9496bdfcb12a223eac073aa58444e80f4", "title": "[Hyaluronic acid rheology: Basics and clinical applications in facial rejuvenation].", "text": "Hyaluronic acid (HA) is the most widely used dermal filler to treat facial volume deficits and wrinkles, especially for facial rejuvenation. Depending on the area of the face, the filler is exposed to two different forces (shear deformation and compression/stretching forces) resulting from intrinsic and external mechanical stress. The purpose of this technical note is to explain how rheology, which is the study of the flow and deformation of matter under strain, can be used in our clinical practice of facial volumization with fillers. Indeed, comprehension of the rheological properties of HA has become essential in selecting a dermal filler targeted to the area of the face. Viscosity, elasticity and cohesivity are the main three properties to be taken into consideration in this selection. Aesthetic physicians and surgeons have to familiarize themselves with these basics in order to select the HA with the right rheological properties to achieve a natural-looking and long-lasting outcome."} {"_id": "cd15507f33e0d30103a9b9b2c6304177268f4e0a", "title": "Robust Vehicle Detection and Distance Estimation Under Challenging Lighting Conditions", "text": "Avoiding the high computational costs and calibration issues involved in stereo-vision-based algorithms, this paper proposes real-time monocular-vision-based techniques for simultaneous vehicle detection and inter-vehicle distance estimation, in which the performance and robustness of the system remain competitive, even for highly challenging benchmark datasets. This paper develops a collision warning system that detects vehicles ahead and identifies safety distances to assist a distracted driver prior to the occurrence of an imminent crash.
We introduce adaptive global Haar-like features for vehicle detection, tail-light segmentation, virtual symmetry detection, and intervehicle distance estimation, as well as an efficient single-sensor multifeature fusion technique to enhance the accuracy and robustness of our algorithm. The proposed algorithm is able to detect vehicles ahead by both day and night, and at both short- and long-range distances. Experimental results under various weather and lighting conditions (including sunny, rainy, foggy, or snowy) show that the proposed algorithm outperforms state-of-the-art algorithms."} {"_id": "f89384e8216a665534903b470cce4b8476814a50", "title": "Proof Analysis in Modal Logic", "text": "A general method for generating contraction- and cut-free sequent calculi for a large family of normal modal logics is presented. The method covers all modal logics characterized by Kripke frames determined by universal or geometric properties, and it can be extended to treat also G\u00f6del\u2013L\u00f6b provability logic. The calculi provide direct decision methods through terminating proof search. Syntactic proofs of modal undefinability results are obtained in the form of conservativity theorems."} {"_id": "6a535e0aeb83f1c2e361c7b0c575f8bc0cc8aa34", "title": "Huffman Coding-Based Adaptive Spatial Modulation", "text": "An antenna switch enables multiple antennas to share a common RF chain. It also offers an additional spatial dimension, i.e., the antenna index, that can be utilized for data transmission via both the signal space and the spatial dimension. In this paper, we propose a Huffman coding-based adaptive spatial modulation that generalizes both conventional spatial modulation and transmit antenna selection. Through Huffman coding, i.e., designing variable-length prefix codes, the transmit antennas can be activated with different probabilities. When the input signal is Gaussian distributed, the optimal antenna activation probability is derived through optimizing channel capacity. To make the optimization tractable, closed-form upper and lower bounds are derived as effective approximations of the channel capacity. When the input is a discrete QAM signal, the optimal antenna activation probability is derived through minimizing the symbol error rate. Numerical results show that the proposed adaptive transmission offers considerable performance improvement over conventional spatial modulation and transmit antenna selection."} {"_id": "767410f40ed2ef1b8b759fec3782d8a0f2f8ad40", "title": "On Blockchain Applications : Hyperledger Fabric And Ethereum", "text": "Blockchain is a tamper-proof digital ledger which can be used to record public or private peer-to-peer network transactions, and it cannot be altered retroactively without the alteration of all subsequent blocks of the network. A blockchain is updated via the consensus protocol that ensures a linear, unambiguous ordering of transactions. Blocks guarantee the integrity and consistency of the blockchain across a network of distributed nodes. Different blockchain applications use various consensus protocols for their working. Byzantine fault tolerance (BFT) is one of them, and it is a characteristic of a system that tolerates the class of failures known as the Byzantine Generals Problem. Hyperledger, Stellar, and Ripple are three blockchain applications which use BFT consensus. The best variant of BFT is Practical Byzantine Fault Tolerance (PBFT). Hyperledger fabric with deterministic transactions can run on top of PBFT.
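The Huffman step in the adaptive spatial modulation abstract above maps target activation probabilities to variable-length prefix codes; that construction is standard and can be sketched directly. The probability values are made-up examples.

```python
import heapq
from itertools import count

def huffman(probs):
    """Return {antenna index: prefix code} for the given activation probabilities."""
    ticket = count()                      # tie-breaker so the heap never compares dicts
    heap = [(p, next(ticket), {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # merge the two least probable subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(ticket), merged))
    return heap[0][2]

print(huffman([0.5, 0.25, 0.15, 0.10]))
```

Incoming bits are consumed against these prefix codes to pick the active antenna, so more probable antennas get shorter codewords and are selected more often, realising the optimized activation distribution.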
This paper focuses on a survey of various consensus mechanisms and makes a comparative study of Hyperledger fabric and Ethereum. Keywords\u2014 consensus; hyperledger fabric; ethereum; byzantine fault tolerance;"} {"_id": "69e37306cc5c1262a979a95244c487b73e4a3fbf", "title": "MapFactory \u2013 Towards a mapping design pattern for big geospatial data", "text": "With big geospatial data emerging, cartographers and geographic information scientists have to find new ways of dealing with the volume, variety, velocity, and veracity (4Vs) of the data. This requires the development of tools that allow processing, filtering, analysing, and visualising of big data through multidisciplinary collaboration. In this paper, we present the MapFactory design pattern that will be used for the creation of different maps according to the (input) design specification for big geospatial data. The design specification is based on elements from ISO19115-1:2014 Geographic information -Metadata -Part 1: Fundamentals that would guide the design and development of the map or set of maps to be produced. The results of the exploratory research suggest that the MapFactory design pattern will help with software reuse and communication. The MapFactory design pattern will aid software developers to build the tools that are required to automate map making with big geospatial data. The resulting maps would assist cartographers and others to make sense of big geospatial data."} {"_id": "12fc6f855f58869cf81743b9be0df1380c17f4d0", "title": "Exploiting Redundancy in Question Answering", "text": "Our goal is to automatically answer brief factual questions of the form ``When was the Battle of Hastings?'' or ``Who wrote The Wind in the Willows?''. Since the answer to nearly any such question can now be found somewhere on the Web, the problem reduces to finding potential answers in large volumes of data and validating their accuracy. We apply a method for arbitrary passage retrieval to the first half of the problem and demonstrate that answer redundancy can be used to address the second half. The success of our approach depends on the idea that the volume of available Web data is large enough to supply the answer to most factual questions multiple times and in multiple contexts. A query is generated from a question and this query is used to select short passages that may contain the answer from a large collection of Web data. These passages are analyzed to identify candidate answers. The frequency of these candidates within the passages is used to ``vote'' for the most likely answer. The approach is experimentally tested on questions taken from the TREC-9 question-answering test collection. As an additional demonstration, the approach is extended to answer multiple choice trivia questions of the form typically asked in trivia quizzes and television game shows."} {"_id": "eacba5e8fbafb1302866c0860fc260a2bdfff232", "title": "VOS-GAN: Adversarial Learning of Visual-Temporal Dynamics for Unsupervised Dense Prediction in Videos", "text": "Recent GAN-based video generation approaches model videos as the combination of a time-independent scene component and a time-varying motion component, thus factorizing the generation problem into generating background and foreground separately. One of the main limitations of current approaches is that both factors are learned by mapping one source latent space to videos, which complicates the generation task as a single data point must be informative of both background and foreground content. 
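The frequency-voting idea in the question-answering abstract above ("Exploiting Redundancy in Question Answering") reduces to counting candidate answers across retrieved passages. The passages and the toy extraction rule below are stand-ins for the paper's passage retrieval and answer identification steps.

```python
from collections import Counter

passages = [
    "The Battle of Hastings was fought in 1066.",
    "In 1066 the Battle of Hastings took place.",
    "Some sources mention 1067, but 1066 is standard.",
]

def candidate_answers(passage):
    # Stand-in extraction: treat every purely numeric token as a date candidate
    tokens = (t.strip(".,") for t in passage.split())
    return [t for t in tokens if t.isdigit()]

votes = Counter(a for p in passages for a in candidate_answers(p))
print(votes.most_common(1))   # -> [('1066', 3)]
```

The approach works exactly because the Web supplies the answer multiple times and in multiple contexts: the correct answer accumulates votes while spurious candidates stay rare.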
In this paper we propose a GAN framework for video generation that, instead, employs two latent spaces in order to structure the generative process in a more natural way: 1) a latent space to generate the static visual content of a scene (background), which remains the same for the whole video, and 2) a latent space where motion is encoded as a trajectory between sampled points and whose dynamics are modeled through an RNN encoder (jointly trained with the generator and the discriminator) and then mapped by the generator to visual objects\u2019 motion. Additionally, we extend current video discrimination approaches by incorporating in the learning procedure motion estimation and, leveraging the peculiarity of the generation process, unsupervised pixel-wise dense predictions. Extensive performance evaluation showed that our approach is able to a) synthesize more realistic videos than state-of-the-art methods, b) learn effectively both local and global video dynamics, as demonstrated by the results achieved on a video action recognition task over the UCF-101 dataset, and c) accurately perform unsupervised video object segmentation on standard video benchmarks, such as DAVIS, SegTrack and F4K-Fish."} {"_id": "83e5ee06cf7ae891750682ca39cefaeed60ac998", "title": "Pharmacist-managed medication therapy adherence clinics: The Malaysian experience.", "text": "In Malaysia, the first pharmacist-managed clinic was established in 2004. This type of clinic is locally known as a medication therapy adherence clinic (MTAC).1\u20133 In fact, these clinics were introduced by the Pharmaceutical Services Division (PSD), Ministry of Health Malaysia (MOH) as part of the clinical pharmacy services to improve the quality, safety, and cost-effectiveness of patient care through better medicines management at ambulatory care settings in hospitals and MOH clinics.2 The major aims of the clinics include optimization of medication therapy, improvement of medication adherence, and prevention or reduction of adverse drug events and other medication-related problems.1 Over a decade, the number of MTACs has increased tremendously to 660 by the end of 2013.1 Moreover, currently, there are 13 types of clinics being operated in Malaysia that include neurology (stroke), diabetes, heart failure, respiratory, psoriasis, anticoagulation, rheumatology, nephrology, hepatitis, geriatrics, retroviral disease, hemophilia, and psychiatry.1,3,4 The pharmacists operate the clinics and provide pharmaceutical care services to the patients in collaboration with physicians and other healthcare professionals according to specific protocols. The specific roles of the pharmacist include providing drug information, patient education and counseling, identifying and solving drug-related problems, monitoring patients' drug therapy and treatment outcomes, offering feedback on patient progress, making appropriate recommendations to physicians to individualize the patients' drug regimens according to patient-related factors including clinical, humanistic, and economic factors.1\u20134 Several studies have been conducted to evaluate the impact of pharmacist-managed clinics on patient care in Malaysia.5\u201310 These studies have demonstrated that the clinics are beneficial in optimizing medication therapy, improving medication adherence, and giving better clinical outcomes.
Moreover, some studies have shown that these clinics are more cost-effective than standard medical care.11,12 In addition, it has been shown that the patients are satisfied with the pharmaceutical care services provided by these clinics.13,14 Therefore, the current evidence supports further expansion of these clinics in the ambulatory care system in Malaysia. We believe there are several factors that have led to the success of the pharmacist-managed clinics in Malaysia. These include the tremendous support and guidance from the PSD, Ministry of Health Malaysia to help in the establishment of these clinics. As an example of this support, the PSD has established a protocol for each clinic to help the pharmacists run the clinic, and to enable standardization of practice and the expansion of the clinics throughout the country. These protocols are written by specialized clinical pharmacy committees (e.g. respiratory subspecialty,"} {"_id": "df9c3ce70641d77ec2a563ac336e51265005c61b", "title": "An Intelligent RFID Reader and its Application in Airport Baggage Handling System", "text": "In civil aviation baggage handling applications, RFID tags are used to enhance the ability for baggage tracking, dispatching and conveyance, so as to improve management efficiency and user satisfaction. An intelligent RFID reader which has onboard data processing capability and provides EPC Edge Savant services is presented. The prototype reader and its experiments in the airport baggage handling system are also introduced. The workflow of the reader can be configured dynamically, which makes deployment much more flexible. Real-time tasks can also be assigned to the intelligent reader to implement edge business processes."} {"_id": "826a530b835a917200da1b25993b5319021e4551", "title": "Realtime Ray Tracing on GPU with BVH-based Packet Traversal", "text": "Recent GPU ray tracers can already achieve performance competitive to that of their CPU counterparts. Nevertheless, these systems can not yet fully exploit the capabilities of modern GPUs and can only handle medium-sized, static scenes. In this paper we present a BVH-based GPU ray tracer with a parallel packet traversal algorithm using a shared stack. We also present a fast, CPU-based BVH construction algorithm which very accurately approximates the surface area heuristic using streamed binning while still being one order of magnitude faster than previously published results. Furthermore, using a BVH allows us to push the size limit of supported scenes on the GPU: We can now ray trace the 12.7 million triangle Power Plant at 1024 \u00d7 1024 image resolution with 3 fps, including shading and shadows."} {"_id": "5c36dd75aa6ae71c7d3d46621a4d7249db9f46f4", "title": "Mobile Forensic Data Analysis: Suspicious Pattern Detection in Mobile Evidence", "text": "Culprits\u2019 identification by means of suspicious pattern detection techniques from mobile device data is one of the most important aims of mobile forensic data analysis. When criminal activities are related to entirely automated procedures such as malware propagation, predicting the corresponding behavior is a rather achievable task. However, when human behavior is involved, such as in cases of traditional crimes, prediction and detection become more challenging. This paper introduces a combined criminal profiling and suspicious pattern detection methodology for two criminal activities with moderate to heavy involvement of mobile devices, cyberbullying and low-level drug dealing.
Neural and Neurofuzzy techniques are applied to a hybrid dataset of original and simulated data. The respective performance results are measured and presented, the optimal technique is selected, and the scenarios are re-run on an actual dataset for additional testing and verification."} {"_id": "09ebd015cc09089907f6e2e2edabb3f5d0ac7a2f", "title": "A GPU-based implementation of motion detection from a moving platform", "text": "We describe a GPU-based implementation of motion detection from a moving platform. Motion detection from a moving platform is inherently difficult as the moving camera induces a 2D motion field in the entire image. A step compensating for camera motion is required prior to estimating the background model. Due to inevitable registration errors, the background model is estimated according to a sliding window of frames to avoid the case where erroneous registration influences the quality of the detection for the whole sequence. However, this approach involves several characteristics that put a heavy burden on real-time CPU implementation. We exploit the GPU to achieve significant acceleration over standard CPU implementations. Our GPU-based implementation can build the background model and detect motion regions at around 18 fps on 320 \u00d7 240 videos captured from a moving camera."} {"_id": "b1117993836169e2b24b4bf0d5afea6d49a0348c", "title": "How to do a grounded theory study: a worked example of a study of dental practices", "text": "BACKGROUND\nQualitative methodologies are increasingly popular in medical research. Grounded theory is the methodology most-often cited by authors of qualitative studies in medicine, but it has been suggested that many 'grounded theory' studies are not concordant with the methodology. In this paper we provide a worked example of a grounded theory project. Our aim is to provide a model for practice, to connect medical researchers with a useful methodology, and to increase the quality of 'grounded theory' research published in the medical literature.\n\n\nMETHODS\nWe documented a worked example of using grounded theory methodology in practice.\n\n\nRESULTS\nWe describe our sampling, data collection, data analysis and interpretation. We explain how these steps were consistent with grounded theory methodology, and show how they related to one another. Grounded theory methodology assisted us to develop a detailed model of the process of adapting preventive protocols into dental practice, and to analyse variation in this process in different dental practices.\n\n\nCONCLUSIONS\nBy employing grounded theory methodology rigorously, medical researchers can better design and justify their methods, and produce high-quality findings that will be more useful to patients, professionals and the research community."} {"_id": "18aab1169897c138bba123046148e2015fb67c0c", "title": "qBase relative quantification framework and software for management and automated analysis of real-time quantitative PCR data", "text": "Although quantitative PCR (qPCR) is becoming the method of choice for expression profiling of selected genes, accurate and straightforward processing of the raw measurements remains a major hurdle. Here we outline advanced and universally applicable models for relative quantification and inter-run calibration with proper error propagation along the entire calculation track.
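The sliding-window background model in the motion-detection abstract above can be sketched compactly. Frames are assumed to be already registered (camera-motion compensated) to the current view; the per-pixel median over the window serves as the background, and large deviations are flagged as motion. Window size and threshold are illustrative.

```python
import numpy as np

def detect_motion(registered_frames, new_frame, thresh=25):
    """registered_frames: iterable of HxW grayscale arrays aligned to new_frame."""
    background = np.median(np.stack(registered_frames), axis=0)
    return np.abs(new_frame.astype(float) - background) > thresh

rng = np.random.default_rng(2)
window = [rng.integers(0, 255, (240, 320)).astype(np.uint8) for _ in range(10)]
mask = detect_motion(window, window[-1])
print(mask.shape, round(mask.mean(), 3))   # fraction of pixels flagged
```

Keeping only a short window bounds the damage a badly registered frame can do, which is the motivation the abstract gives for estimating the background this way.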
These models and algorithms are implemented in qBase, a free program for the management and automated analysis of qPCR data."} {"_id": "09ad9778b7d8ef3a9a6953a988dd3aacdc3e85ae", "title": "A Comparison of String Distance Metrics for Name-Matching Tasks", "text": "Using an open-source, Java toolkit of name-matching methods, we experimentally compare string distance metrics on the task of matching entity names. We investigate a number of different metrics proposed by different communities, including edit-distance metrics, fast heuristic string comparators, token-based distance metrics, and hybrid methods. Overall, the best-performing method is a hybrid scheme combining a TFIDF weighting scheme, which is widely used in information retrieval, with the Jaro-Winkler string-distance scheme, which was developed in the probabilistic record linkage community."} {"_id": "d4b21d9321cb68932a5ceeed49330aff1d638042", "title": "The power of statistical tests in meta-analysis.", "text": "Calculations of the power of statistical tests are important in planning research studies (including meta-analyses) and in interpreting situations in which a result has not proven to be statistically significant. The authors describe procedures to compute statistical power of fixed- and random-effects tests of the mean effect size, tests for heterogeneity (or variation) of effect size parameters across studies, and tests for contrasts among effect sizes of different studies. Examples are given using 2 published meta-analyses. The examples illustrate that statistical power is not always high in meta-analysis."} {"_id": "e7c6f67a70b5cf0842a7a2fc497131a79b6ee2c5", "title": "Frequent Pattern Compression: A Significance-Based Compression Scheme for L2 Caches", "text": "With the widening gap between processor and memory speeds, memory system designers may find cache compression beneficial to increase cache capacity and reduce off-chip bandwidth. Most hardware compression algorithms fall into the dictionary-based category, which depends on building a dictionary and using its entries to encode repeated data values. Such algorithms are effective in compressing large data blocks and files. Cache lines, however, are typically short (32-256 bytes), and a per-line dictionary places a significant overhead that limits the compressibility and increases decompression latency of such algorithms. For such short lines, significance-based compression is an appealing alternative. We propose and evaluate a simple significance-based compression scheme that has a low compression and decompression overhead. This scheme, Frequent Pattern Compression (FPC), compresses individual cache lines on a word-by-word basis by storing common word patterns in a compressed format accompanied with an appropriate prefix. For a 64-byte cache line, compression can be completed in three cycles and decompression in five cycles, assuming 12 FO4 gate delays per cycle. We propose a compressed cache design in which data is stored in compressed form in the L2 caches, but is uncompressed in the L1 caches. L2 cache lines are compressed to predetermined sizes that never exceed their original size to reduce decompression overhead.
This simple scheme provides comparable compression ratios to more complex schemes that have higher cache hit latencies."} {"_id": "6dc6d89fdb28ffbc75c38a2bb566601b93b9e30a", "title": "A pre-collision control strategy for human-robot interaction based on dissipated energy in potential inelastic impacts", "text": "Enabling human-robot collaboration raises new challenges in safety-oriented robot design and control. Indices that quantitatively describe human injury due to a human-robot collision are needed to propose suitable pre-collision control strategies. This paper presents a novel model-based injury index built on the concept of dissipated kinetic energy in a potential inelastic impact. This quantity represents the fracture energy lost when a human-robot collision occurs, modeling both clamped and unclamped cases. It depends on the robot reflected mass and velocity in the impact direction. The proposed index is expressed in an analytical form suitable for integration in a constraint-based pre-collision control strategy. The exploited control architecture allows a given robot task to be performed while simultaneously bounding our injury assessment and minimizing the reflected mass in the direction of the impact. Experiments have been performed on an ABB FRIDA lightweight robot to validate the proposed injury index as well as the pre-collision control strategy."} {"_id": "4e26e488c02b3647e0f1566760555ebe5d002558", "title": "A Quadratic-Complexity Observability-Constrained Unscented Kalman Filter for SLAM", "text": "This paper addresses two key limitations of the unscented Kalman filter (UKF) when applied to the simultaneous localization and mapping (SLAM) problem: the cubic computational complexity in the number of states and the inconsistency of the state estimates. To address the first issue, we introduce a new sampling strategy for the UKF, which has constant computational complexity. As a result, the overall computational complexity of UKF-based SLAM becomes of the same order as that of the extended Kalman filter (EKF)-based SLAM, i.e., quadratic in the size of the state vector. Furthermore, we investigate the inconsistency issue by analyzing the observability properties of the linear-regression-based model employed by the UKF. Based on this analysis, we propose a new algorithm, termed observability-constrained (OC)-UKF, which ensures the unobservable subspace of the UKF's linear-regression-based system model is of the same dimension as that of the nonlinear SLAM system. This results in substantial improvement in the accuracy and consistency of the state estimates. The superior performance of the OC-UKF over other state-of-the-art SLAM algorithms is validated by both Monte-Carlo simulations and real-world experiments."} {"_id": "b8aa8b5d06c98a900d8cea61864669b28c3ac0fc", "title": "Routing protocols in Vehicular Delay Tolerant Networks: A comprehensive survey", "text": "This article presents a comprehensive survey of routing protocols proposed for routing in Vehicular Delay Tolerant Networks (VDTN) in vehicular environments. DTNs are utilized in various operational environments, including those subject to disruption and disconnection and those with high delay, such as Vehicular Ad-Hoc Networks (VANET). We focus on a special type of VANET, where the vehicular traffic is sparse and direct end-to-end paths between communicating parties do not always exist. Thus, communication in this context falls into the category of Vehicular Delay Tolerant Network (VDTN).
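The word-by-word idea behind Frequent Pattern Compression, summarized in the FPC abstract above, can be sketched as a pattern match per 32-bit word: each word is stored as a short prefix naming the matched pattern plus only the bits the pattern does not imply. The pattern set and prefix values below are illustrative, not the paper's exact encoding table.

```python
def to_signed(w):
    """Interpret a 32-bit unsigned word as signed."""
    return w - (1 << 32) if w & 0x80000000 else w

def encode_word(w):
    """Return (prefix, payload_bits) for one 32-bit word."""
    if w == 0:
        return ("00", "")                         # all-zero word: prefix only
    if -128 <= to_signed(w) < 128:
        return ("01", format(w & 0xFF, "08b"))    # sign-extended byte
    if w & 0xFFFF == 0:
        return ("10", format(w >> 16, "016b"))    # halfword padded with zeros
    return ("11", format(w, "032b"))              # uncompressible fallback

for w in (0x00000000, 0x0000002A, 0xFFFFFFF5, 0x12340000, 0xDEADBEEF):
    prefix, payload = encode_word(w)
    print(f"0x{w:08X} -> prefix {prefix}, {len(payload):2d} payload bits")
```

Decompression is a table lookup on the prefix followed by bit expansion, which is why it fits in a handful of cycles; words that match no frequent pattern fall through to the uncompressed encoding.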
Due to the limited transmission range of an RSU (Road Side Unit), remote vehicles in a VDTN may not connect to the RSU directly and thus have to rely on intermediate vehicles to relay the packets. During the message relay process, complete end-to-end paths may not exist in highly partitioned VANETs. Therefore, the intermediate vehicles must buffer and forward messages opportunistically. Through buffer, carry and forward, the message can eventually be delivered to the destination even if an end-to-end connection never exists between source and destination. The main objective of routing protocols in DTN is to maximize the probability of delivery to the destination while minimizing the end-to-end delay. Also, vehicular traffic models are important for DTN routing in vehicle networks because the performance of DTN routing protocols is closely related to population and mobility models of the network."} {"_id": "286f0650438d0cc5123057909662d242d7bbce07", "title": "Toward detecting emotions in spoken dialogs", "text": "The importance of automatically recognizing emotions from human speech has grown with the increasing role of spoken language interfaces in human-computer interaction applications. This paper explores the detection of domain-specific emotions using language and discourse information in conjunction with acoustic correlates of emotion in speech signals. The specific focus is on a case study of detecting negative and non-negative emotions using spoken language data obtained from a call center application. Most previous studies in emotion recognition have used only the acoustic information contained in speech. In this paper, a combination of three sources of information\u2014acoustic, lexical, and discourse\u2014is used for emotion recognition. To capture emotion information at the language level, an information-theoretic notion of emotional salience is introduced. Optimization of the acoustic correlates of emotion with respect to classification error was accomplished by investigating different feature sets obtained from feature selection, followed by principal component analysis. Experimental results on our call center data show that the best results are obtained when acoustic and language information are combined. Results show that combining all the information, rather than using only acoustic information, improves emotion classification by 40.7% for males and 36.4% for females (linear discriminant classifier used for acoustic information)."} {"_id": "34850eb3f55633599e8e8f36db4bedf541d30b94", "title": "A database of German emotional speech", "text": "The article describes a database of emotional speech. Ten actors (5 female and 5 male) simulated the emotions, producing 10 German utterances (5 short and 5 longer sentences) which could be used in everyday communication and are interpretable in all applied emotions. The recordings were taken in an anechoic chamber with high-quality recording equipment. In addition to the sound, electro-glottograms were recorded. The speech material comprises about 800 sentences (seven emotions * ten actors * ten sentences + some second versions).
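The emotional salience mentioned in the call-center abstract above is information-theoretic: it scores how much a word's presence shifts the emotion-class distribution away from the prior. The sketch below computes one common reading of that quantity (the KL divergence between the class posterior given the word and the class prior) under toy counts; both the exact formula in the paper and the numbers here should be treated as assumptions.

```python
import math

def salience(word_counts_per_class, class_counts):
    """word_counts_per_class[c]: utterances of class c containing the word."""
    total = sum(class_counts)
    n_word = sum(word_counts_per_class)
    score = 0.0
    for nw, nc in zip(word_counts_per_class, class_counts):
        p_c = nc / total                  # class prior
        p_c_given_w = nw / n_word         # class posterior given the word
        if p_c_given_w > 0:
            score += p_c_given_w * math.log2(p_c_given_w / p_c)
    return score

# Toy counts: 'refund' appears in 30 negative and 2 non-negative utterances
print(round(salience([30, 2], [100, 100]), 3))
```

A word that occurs equally often in all classes scores 0; the more skewed its usage, the higher the salience, which is what makes it a useful lexical feature for the classifier.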
The database can be accessed by the public via the internet (http://www.expressive-speech.net/emodb/)."} {"_id": "67efaba1be4c0462a5fc2ce9762f7edf9719c6a0", "title": "Speech emotion recognition using hidden Markov models", "text": "In emotion classification of speech signals, the popular features employed are statistics of fundamental frequency, energy contour, duration of silence and voice quality. However, the performance of systems employing these features degrades substantially when more than two categories of emotion are to be classified. In this paper, a text-independent method of emotion classification of speech is proposed. The proposed method makes use of short time log frequency power coefficients (LFPC) to represent the speech signals and a discrete hidden Markov model (HMM) as the classifier. The emotions are classified into six categories. The category labels used are the archetypal emotions of Anger, Disgust, Fear, Joy, Sadness and Surprise. A database consisting of 60 emotional utterances, each from twelve speakers, is constructed and used to train and test the proposed system. Performance of the LFPC feature parameters is compared with that of the linear prediction cepstral coefficients (LPCC) and mel-frequency cepstral coefficients (MFCC) feature parameters commonly used in speech recognition systems. Results show that the proposed system yields an average accuracy of 78% and the best accuracy of 96% in the classification of six emotions. This is well beyond the 17% chance level of a random guess over six categories. Results also reveal that LFPC is a better choice as feature parameters for emotion classification than the traditional feature parameters."} {"_id": "953e69c66f2bdfcc65e4d677fa429571cdec2a60", "title": "Emotion Recognition in Speech Using Neural Networks", "text": "Emotion recognition in speech is a topic on which little research has been done to date. In this paper, we discuss why emotion recognition in speech is a significant and applicable research topic, and present a system for emotion recognition using one-class-in-one neural networks. By using a large database of phoneme-balanced words, our system is speaker- and context-independent. We achieve a recognition rate of approximately 50% when testing eight emotions."} {"_id": "c33076c2aa1d4a860c223529c1d1941c58cd77fc", "title": "Nonlinear dimension reduction for EEG-based epileptic seizure detection", "text": "Approximately 0.1 percent of epileptic patients die of sudden unexpected death. In general, for intractable seizures, it is crucial to have an algorithm to accurately and automatically detect the seizures and notify care-givers to assist patients. EEG signals are regarded as the definitive diagnostic evidence of seizure events. In this work, we utilize the frequency domain features (normalized in-band power spectral density) for the EEG channels. We applied a nonlinear data-embedding technique based on a stochastic neighbor distance metric to capture the relationships among data elements in high dimension and improve the accuracy of seizure detection. This proposed data embedding technique not only makes it possible to visualize data in two or three dimensions, but also tackles the inherent difficulties regarding high dimensional data classification such as time complexity and memory requirement. We also applied a patient-specific KNN classification to detect seizure and non-seizure events.
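The HMM-based emotion classifier above (one model per emotion category, decided by maximum likelihood) can be sketched as follows. This uses the third-party hmmlearn package and a continuous Gaussian HMM over generic frame-level features as a stand-in for the paper's discrete HMM over LFPC features; the state count and names are assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # stand-in for the paper's discrete HMM

def train_emotion_models(data, n_states=4):
    """data: {emotion: list of (T_i, D) feature arrays, e.g. LFPC-like frames}."""
    models = {}
    for emotion, utterances in data.items():
        X = np.vstack(utterances)
        lengths = [len(u) for u in utterances]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)   # one HMM trained per emotion category
        models[emotion] = m
    return models

def classify(models, utterance):
    # Decide for the emotion whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda e: models[e].score(utterance))
```

The per-class-model-plus-likelihood-decision structure is what the abstract describes; only the feature front end and the HMM emission type are swapped here.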
The results indicate that our nonlinear technique provides significantly better visualization and classification efficiency (F-measure greater than 87%) compared to conventional dimension reduction approaches."} {"_id": "676e552b1b1f10cc7b80ec3ce51bced5990a9e68", "title": "SVD-based collaborative filtering with privacy", "text": "Collaborative filtering (CF) techniques are becoming increasingly popular with the evolution of the Internet. Such techniques recommend products to customers using similar users' preference data. The performance of CF systems degrades with increasing number of customers and products. To reduce the dimensionality of filtering databases and to improve the performance, Singular Value Decomposition (SVD) is applied for CF. Although filtering systems are widely used by E-commerce sites, they fail to protect users' privacy. Since many users might decide to give false information because of privacy concerns, collecting high quality data from customers is not an easy task. CF systems using these data might produce inaccurate recommendations. In this paper, we discuss SVD-based CF with privacy. To protect users' privacy while still providing recommendations with decent accuracy, we propose a randomized perturbation-based scheme."} {"_id": "b3cedde36a6841b43162fc406b688e51bec68d36", "title": "Hierarchical interpretations for neural network predictions", "text": "Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables. However, the inability to effectively visualize these relationships has led to DNNs being characterized as black boxes and consequently limited their applications. To ameliorate this problem, we introduce the use of hierarchical interpretations to explain DNN predictions through our proposed method, agglomerative contextual decomposition (ACD). Given a prediction from a trained DNN, ACD produces a hierarchical clustering of the input features, along with the contribution of each cluster to the final prediction. This hierarchy is optimized to identify clusters of features that the DNN learned are predictive. Using examples from Stanford Sentiment Treebank and ImageNet, we show that ACD is effective at diagnosing incorrect predictions and identifying dataset bias. Through human experiments, we demonstrate that ACD enables users both to identify the more accurate of two DNNs and to better trust a DNN\u2019s outputs. We also find that ACD\u2019s hierarchy is largely robust to adversarial perturbations, implying that it captures fundamental aspects of the input and ignores spurious noise."} {"_id": "9c49820bb76031297dab4ae4f5d9237ca4c2e603", "title": "Moving Target Indication via RADARSAT-2 Multichannel Synthetic Aperture Radar Processing", "text": "With the recent launches of the German TerraSAR-X and the Canadian RADARSAT-2, both equipped with phased array antennas and multiple receiver channels, synthetic aperture radar, ground moving target indication (SAR-GMTI) data are now routinely being acquired from space. Defence R&D Canada has been conducting SAR-GMTI trials to assess the performance and limitations of the RADARSAT-2 GMTI system. Several SAR-GMTI modes developed for RADARSAT-2 are described and preliminary test results of these modes are presented. 
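The EEG seizure-detection pipeline above (a nonlinear stochastic-neighbor embedding followed by a patient-specific KNN) might be prototyped as below. Note that scikit-learn's TSNE has no out-of-sample transform, so this sketch embeds all windows of one patient first and cross-validates the KNN inside the embedding; the placeholder data and parameter values are assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: (n_windows, n_channels) normalized in-band power spectral densities;
# y: 1 for seizure windows, 0 for non-seizure (one patient's recordings).
rng = np.random.default_rng(0)
X = rng.random((500, 23))                      # placeholder feature matrix
y = (rng.random(500) > 0.9).astype(int)        # placeholder labels

X2 = TSNE(n_components=2, perplexity=30).fit_transform(X)  # SNE-style embedding
knn = KNeighborsClassifier(n_neighbors=5)                   # patient-specific KNN
print("F1:", cross_val_score(knn, X2, y, scoring="f1", cv=5).mean())
```

The two-dimensional embedding serves double duty, as the abstract notes: it is both the classifier input and a visualization of seizure versus non-seizure structure.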
Detailed equations of motion of a moving target for multiaperture spaceborne SAR geometry are derived and a moving target parameter estimation algorithm developed for RADARSAT-2 (called the Fractrum Estimator) is presented. Limitations of the simple dual-aperture SAR-GMTI mode are analysed as a function of the signal-to-noise ratio and target speed. Recently acquired RADARSAT-2 GMTI data are used to demonstrate the capability of different system modes and to validate the signal model and the algorithm."} {"_id": "26861e41e5b44774a2801e1cd76fd56126bbe257", "title": "Personalized Tour Recommendation Based on User Interests and Points of Interest Visit Durations", "text": "Tour recommendation and itinerary planning are challenging tasks for tourists, due to their need to select Points of Interest (POI) to visit in unfamiliar cities, and to select POIs that align with their interest preferences and trip constraints. We propose an algorithm called PERSTOUR for recommending personalized tours using POI popularity and user interest preferences, which are automatically derived from real-life travel sequences based on geotagged photos. Our tour recommendation problem is modelled using a formulation of the Orienteering problem, and considers user trip constraints such as time limits and the need to start and end at specific POIs. In our work, we also reflect levels of user interest based on visit durations, and demonstrate how POI visit duration can be personalized using this time-based user interest. Using a Flickr dataset of four cities, our experiments show the effectiveness of PERSTOUR against various baselines, in terms of tour popularity, interest, recall, precision and F1-score. In particular, our results show the merits of using time-based user interest and personalized POI visit durations, compared to the current practice of using frequency-based user interest and average visit durations."} {"_id": "97dc1b226233b4591df41ad8633d1db91ca49406", "title": "MaJIC: Compiling MATLAB for Speed and Responsiveness", "text": "This paper presents and evaluates techniques to improve the execution performance of MATLAB. Previous efforts concentrated on source to source translation and batch compilation; MaJIC provides an interactive frontend that looks like MATLAB and compiles/optimizes code behind the scenes in real time, employing a combination of just-in-time and speculative ahead-of-time compilation. Performance results show that the proper mixture of these two techniques can yield near-zero response time as well as performance gains previously achieved only by batch compilers."} {"_id": "14d81790e89e55007b3854e9d242e638065d8415", "title": "NEAT-o-Games: blending physical activity and fun in the daily routine", "text": "This article describes research that aims to encourage physical activity through a novel pervasive gaming paradigm. Data from a wearable accelerometer are logged wirelessly to a cell phone and control the animation of an avatar that represents the player in a virtual race game with other players over the cellular network. Winners are declared every day and players with an excess of activity points can spend some to get hints in mental games of the suite, like Sudoku. The racing game runs in the background throughout the day and every little move counts. As the gaming platform is embedded in the daily routine of players, it may act as a strong behavioral modifier and increase everyday physical activity other than volitional sporting exercise. 
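PERSTOUR above models tour recommendation as an orienteering problem with a time budget, fixed start/end POIs, and interest-weighted, personalized visit durations. A toy greedy heuristic for that problem shape is sketched below; it is not the paper's formulation, and all function names are hypothetical.

```python
def greedy_tour(pois, start, end, budget, travel_time, score, visit_time):
    """Toy greedy heuristic for an orienteering-style tour.

    pois: candidate POI ids; travel_time(a, b) -> minutes;
    score(p): interest-weighted POI value; visit_time(p): personalized duration.
    """
    tour, t, cur = [start], 0.0, start
    remaining = set(pois) - {start, end}
    while True:
        best, best_ratio = None, 0.0
        for p in remaining:
            extra = travel_time(cur, p) + visit_time(p)
            # Feasibility: we must still be able to reach the end POI in budget.
            if t + extra + travel_time(p, end) <= budget:
                ratio = score(p) / extra   # value gained per minute spent
                if ratio > best_ratio:
                    best, best_ratio = p, ratio
        if best is None:
            break
        t += travel_time(cur, best) + visit_time(best)
        tour.append(best)
        remaining.discard(best)
        cur = best
    return tour + [end]
```

Replacing `score` with frequency-based interest and `visit_time` with average durations recovers the baseline practice the paper argues against.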
Such physical activity (e.g., taking the stairs) is termed NEAT and was shown to play a major role in obesity prevention and intervention. A pilot experiment demonstrates that players are engaged in NEAT-o-Games and become more physically active while having a good dosage of fun."} {"_id": "19c499cfd7fe58dea4241c6dc44dfe0003ac1453", "title": "A novel 8T SRAM with minimized power and delay", "text": "In this paper, a novel 8T SRAM cell is proposed which aims at decreasing the delay and lowering the total power consumption of the cell. The threshold voltage variations in the transistor affect the read and write stability of the cell. Also, power dissipation increases with the number of transistors, which in turn affects the read and write stability. The proposed 8T SRAM bitcell is designed using 180 nm CMOS, n-well technology with a supply voltage of 1.8 V. The results show that the average delay has been improved by 80% compared to the conventional 6T cell. The total power is improved by 14.5% compared to the conventional 6T SRAM cell."} {"_id": "381a30fba04a5094f0f0c2b250b00f04c023ccb6", "title": "Texture analysis of medical images.", "text": "The analysis of texture parameters is a useful way of increasing the information obtainable from medical images. It is an ongoing field of research, with applications ranging from the segmentation of specific anatomical structures and the detection of lesions, to differentiation between pathological and healthy tissue in different organs. Texture analysis uses radiological images obtained in routine diagnostic practice, but involves an ensemble of mathematical computations performed with the data contained within the images. In this article we clarify the principles of texture analysis and give examples of its applications, reviewing studies of the technique."} {"_id": "59ccb3db9e905808340f2edbeb7e9814c47d6beb", "title": "f3.js: A Parametric Design Tool for Physical Computing Devices for Both Interaction Designers and End-users", "text": "Although the exploration of design alternatives is crucial for interaction designers and customization is required for end-users, the current development tools for physical computing devices have focused on single versions of an artifact. We propose the parametric design of devices including their enclosure layouts and programs to address this issue. A Web-based design tool called f3.js is presented as an example implementation, which allows devices assembled from laser-cut panels with sensors and actuator modules to be parametrically created and customized. It enables interaction designers to write code with dedicated APIs, declare parameters, and interactively tune them to produce the enclosure layouts and programs. It also provides a separate user interface for end-users that allows parameter tuning and dynamically generates instructions for device assembly. The parametric design approach and the tool were evaluated through two user studies with interaction designers, university students, and end-users."} {"_id": "6dfbb5801aab21dc3c0b1825db028bb617477446", "title": "Recurrent Transformer Networks for Semantic Correspondence", "text": "We present recurrent transformer networks (RTNs) for obtaining dense correspondences between semantically similar images. Our networks accomplish this through an iterative process of estimating spatial transformations between the input images and using these transformations to generate aligned convolutional activations.
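The texture-analysis record above describes extracting mathematical texture parameters from routine radiological images. One common concrete instance is gray-level co-occurrence matrix (GLCM) statistics; the sketch below uses scikit-image (the graycomatrix/graycoprops names of scikit-image 0.19+) on a stand-in region of interest, as an illustration rather than the article's specific method.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def texture_features(img_u8):
    """Haralick-style GLCM statistics, one common texture-analysis approach."""
    glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the distance/angle combinations.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for an image ROI
print(texture_features(roi))
```

Such per-ROI feature vectors are what downstream segmentation or lesion-versus-healthy-tissue classifiers typically consume.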
By directly estimating the transformations between an image pair, rather than employing spatial transformer networks to independently normalize each individual image, we show that greater accuracy can be achieved. This process is conducted in a recursive manner to refine both the transformation estimates and the feature representations. In addition, a technique is presented for weakly-supervised training of RTNs that is based on a proposed classification loss. With RTNs, state-of-the-art performance is attained on several benchmarks for semantic correspondence."} {"_id": "3fc7dbd009c93f9f0a163d9dfd087d7748ee7d34", "title": "Statics and Dynamics of Continuum Robots With General Tendon Routing and External Loading", "text": "Tendons are a widely used actuation strategy for continuum robots that enable forces and moments to be transmitted along the robot from base-mounted actuators. Most prior robots have used tendons routed in straight paths along the robot. However, routing tendons through general curved paths within the robot offers potential advantages in reshaping the workspace and enabling a single section of the robot to achieve a wider variety of desired shapes. In this paper, we provide a new model for the statics and dynamics of robots with general tendon routing paths that is derived by coupling the classical Cosserat-rod and Cosserat-string models. This model also accounts for general external loading conditions and includes traditional axially routed tendons as a special case. Even for straight-tendon robots, an advantage of this coupled model is that it accounts for the distributed wrenches that tendons apply along the robot, which we show must be considered when the robot is subjected to out-of-plane external loads. Our experimental results demonstrate that the coupled model matches experimental tip positions with an error of 1.7% of the robot length, in a set of experiments that include both straight and nonstraight routing cases, with both point and distributed external loads."} {"_id": "4555fd3622908e2170e4ffdd717b83518b123b09", "title": "Folded dipole antenna near metal plate", "text": "The paper presents the effects on antenna parameters when an antenna is placed horizontally near a metal plate. The plate has finite size and rectangular shape. A folded dipole antenna is used and it is placed symmetrically above the plate. The FEM (finite element method) is used to simulate the dependency of antenna parameters on the size of the plate and the distance between the plate and the antenna. The presence of the metal plate, even a small one if it is at the right distance, causes substantial changes in the behaviour of the antenna. The larger the plate, especially in width, the sharper and narrower the lobes of the radiation pattern become. The antenna height defines how many lobes the radiation pattern has. A number of the antenna parameters, including impedance, directivity and front-to-back ratio, change periodically as the antenna height is increased. The resonant frequency of the antenna also changes under the influence of the metal plate."} {"_id": "2421a14ce3cdd563fc3155a151a69568b0ee6b31", "title": "Semi-supervised Learning by Entropy Minimization", "text": "We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables unlabeled data to be incorporated into standard supervised learning.
Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solutions benefit from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The results are clearly in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \u201ccluster assumption\u201d. Finally, we illustrate that the method can be far superior to manifold learning in high-dimensional spaces."} {"_id": "73cdc71ced7be58ad8eeacb8e8311358412d6bd6", "title": "LinkNBed: Multi-Graph Representation Learning with Entity Linkage", "text": "Knowledge graphs have emerged as an important model for studying complex multirelational data. This has given rise to the construction of numerous large scale but incomplete knowledge graphs encoding information extracted from various resources. An effective and scalable approach to jointly learn over multiple graphs and eventually construct a unified graph is a crucial next step for the success of knowledge-based inference for many downstream applications. To this end, we propose LinkNBed, a deep relational learning framework that learns entity and relationship representations across multiple graphs. We identify entity linkage across graphs as a vital component to achieve our goal. We design a novel objective that leverages entity linkage and build an efficient multi-task training procedure. Experiments on link prediction and entity linkage demonstrate substantial improvements over the state-of-the-art relational learning approaches."} {"_id": "bcc61dd29c110b25c1a179188b1270a34210d669", "title": "Metacognition of the testing effect: guiding learners to predict the benefits of retrieval.", "text": "If the mnemonic benefits of testing are to be widely realized in real-world learning circumstances, people must appreciate the value of testing and choose to utilize testing during self-guided learning. Yet metacognitive judgments do not appear to reflect the enhancement provided by testing (Karpicke & Roediger, Science 319:966-968, 2008). In this article, we show that under judicious conditions, learners can indeed reveal an understanding of the beneficial effects of testing, as well as the interaction of that effect with delay (experiment 1). In that experiment, subjects made judgments of learning (JOLs) for previously studied or previously tested items in either a cue-only or a cue-target context, and either immediately or after a 1-day delay. When subjects made judgments in a cue-only context, their JOLs accurately reflected the effects of testing, both immediately and at a delay. To evaluate the potential of exposure to such conditions for promoting generalized appreciation of testing effects, three further experiments elicited global predictions about restudied and tested items across two study/test cycles (experiments 2, 3, and 4). The results indicated that learners' global na\u00efve metacognitive beliefs increasingly reflect the beneficial effects of testing when learners experience these benefits with increasing external support.
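The minimum-entropy regularization above amounts to adding the entropy of the classifier's predictions on unlabeled data to the supervised loss, which pushes decision boundaries away from dense regions. A minimal PyTorch-style sketch follows; the weight `lam` and the cross-entropy supervised term are standard choices assumed here, not constants from the paper.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(logits_l, y_l, logits_u, lam=0.1):
    """Supervised cross-entropy plus minimum-entropy regularization (a sketch).

    logits_l: (B_l, C) logits for labeled examples; y_l: (B_l,) labels
    logits_u: (B_u, C) logits for unlabeled examples
    lam: weight of the entropy term (arbitrary here)
    """
    sup = F.cross_entropy(logits_l, y_l)
    p = F.softmax(logits_u, dim=1)
    # Penalizing prediction entropy on unlabeled points encourages confident
    # outputs, implementing the "cluster assumption" as a soft constraint.
    ent = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1).mean()
    return sup + lam * ent
```

Setting `lam = 0` recovers plain supervised learning, which is the limiting case the abstract alludes to.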
If queried under facilitative circumstances, learners appreciate the mnemonic enhancement that testing provides on both an item-by-item and global basis but generalize that knowledge to future learning only with considerable guidance."} {"_id": "ef0bb86fb89935e42f723771896eae0ac67b9545", "title": "White-box cryptography: practical protection on hostile hosts", "text": "Businesses often interact with users via web browsers and applications on mobile devices, and host services on cloud servers they may not own. Such highly-exposed environments employ white-box cryptography (WBC) for security protection. WBC operates on a security model far different from the traditional black-box model. The modern business world includes large commercial segments in which end-users are directly exposed to business application software hosted on web browsers, mobile phones, web-connected tablets, and an increasing number of other devices: the `internet of things' (IoT). Software applications and their communication activities now dominate much of the commercial world, and there have been countless hacks on such software, and on devices hosting it, with targets as diverse as mobile phones, web browser applications, vehicles, and even refrigerators! The business advantages of deploying computational power near the user encourage software migration to exposed network end-points, but this increasing exposure provides an ever-growing attack surface. Here, we discuss goals and challenges of white-box cryptography and emerging approaches in a continual attempt to stay at least one step ahead of the attackers. We list some WBC techniques, both traditional and recent, indicating how they might be incorporated into a WBC AES implementation."} {"_id": "1893b770859e7d6f28f4e5f9173065223ef948d6", "title": "The pursuit of happiness: time, money, and social connection.", "text": "Does thinking about time, rather than money, influence how effectively individuals pursue personal happiness? Laboratory and field experiments revealed that implicitly activating the construct of time motivates individuals to spend more time with friends and family and less time working\u2014behaviors that are associated with greater happiness. In contrast, implicitly activating money motivates individuals to work more and socialize less, which (although productive) does not increase happiness. Implications for the relative roles of time versus money in the pursuit of happiness are discussed."} {"_id": "20cf8416518b40e1aa583d459617081241bc18d4", "title": "Fragile online relationship: a first look at unfollow dynamics in twitter", "text": "We analyze the dynamics of the behavior known as 'unfollow' in Twitter. We collected daily snapshots of the online relationships of 1.2 million Korean-speaking users for 51 days as well as all of their tweets. We found that Twitter users frequently unfollow. We then identify the major factors, including the reciprocity of the relationships, the duration of a relationship, the followees' informativeness, and the overlap of the relationships, which affect the decision to unfollow. We conducted interviews with 22 Korean respondents to supplement the quantitative results.\n They unfollowed those who left many tweets within a short time, created tweets about uninteresting topics, or tweeted about the mundane details of their lives.
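The unfollow study above identifies reciprocity, relationship duration, followee informativeness, and relationship overlap as the major factors behind link dissolution. A hypothetical sketch of how such per-link factors could feed a simple classifier is below; the feature encoding and the tiny placeholder data are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-link features mirroring the factors found in the study:
# [is_reciprocal, relationship_age_days, followee_tweets_per_day, common_neighbors]
X = np.array([[1, 320,  2.1, 14],
              [0,  12, 45.0,  0],
              [1, 150,  5.3,  6],
              [0,  30, 30.2,  1]], dtype=float)
y = np.array([0, 1, 0, 1])  # 1 = the link was later dissolved (unfollowed)

clf = LogisticRegression().fit(X, y)
# Coefficient signs hint at each factor's association with unfollowing,
# e.g. a negative weight on reciprocity matches the study's finding.
print(clf.coef_)
```

In practice such a model would be fit on the millions of observed link snapshots rather than four toy rows.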
To the best of our knowledge, this work is the first systematic study of the unfollow behavior in Twitter."} {"_id": "0fb54f54cd7427582fdb2a8da46d42884c7f417a", "title": "The role of emotion in computer-mediated communication: A review", "text": "It has been argued that the communication of emotions is more difficult in computer-mediated communication (CMC) than in face-to-face (F2F) communication. The aim of this paper is to review the empirical evidence in order to gain insight into whether emotions are communicated differently in these different modes of communication. We review two types of studies: (1) studies that explicitly examine discrete emotions and emotion expressions, and (2) studies that examine emotions more implicitly, namely as self-disclosure or emotional styles. Our conclusion is that there is no indication that CMC is a less emotional or less personally involving medium than F2F. On the contrary, emotional communication online and offline is surprisingly similar, and if differences are found they show more frequent and explicit emotion communication in CMC than in F2F."} {"_id": "55327c16e9ace5bd6e716a83abc6ebdca7497621", "title": "Initial construction and validation of the Pathological Narcissism Inventory.", "text": "The construct of narcissism is inconsistently defined across clinical theory, social-personality psychology, and psychiatric diagnosis. Two problems were identified that impede integration of research and clinical findings regarding narcissistic personality pathology: (a) ambiguity regarding the assessment of pathological narcissism vs. normal narcissism and (b) insufficient scope of existing narcissism measures. Four studies are presented documenting the initial derivation and validation of the Pathological Narcissism Inventory (PNI). The PNI is a 52-item self-report measure assessing 7 dimensions of pathological narcissism spanning problems with narcissistic grandiosity (Entitlement Rage, Exploitativeness, Grandiose Fantasy, Self-sacrificing Self-enhancement) and narcissistic vulnerability (Contingent Self-esteem, Hiding the Self, Devaluing). The PNI structure was validated via confirmatory factor analysis. The PNI correlated negatively with self-esteem and empathy, and positively with shame, interpersonal distress, aggression, and borderline personality organization. Grandiose PNI scales were associated with vindictive, domineering, intrusive, and overly-nurturant interpersonal problems, and vulnerable PNI scales were associated with cold, socially avoidant, and exploitable interpersonal problems. In a small clinical sample, PNI scales exhibited significant associations with parasuicidal behavior, suicide attempts, homicidal ideation, and several aspects of psychotherapy utilization."} {"_id": "645c2d0bf6225c87f1f8fdc2eacf4a9a39c570fc", "title": "COPSS-lite: Lightweight ICN Based Pub/Sub for IoT Environments", "text": "Information Centric Networking (ICN) is a new networking paradigm that treats content as the first-class entity. It provides content to users without regard to the current location or source of the content. Publish/subscribe (pub/sub) systems have gained popularity on the Internet. Pub/sub systems dismiss the need for users to request every content item of interest. Instead, the content is supplied to the interested users (subscribers) as and when it is published. CCN/NDN are popular ICN proposals widely accepted in the ICN community; however, they do not provide an efficient pub/sub mechanism.
COPSS enhances CCN/NDN with an efficient pub/sub capability. The Internet of Things (IoT) is a growing topic of interest in both academia and industry. Current designs for the IoT rely on IP. However, IoT devices are constrained in their available resources and IP is heavy for their operation. We observed that IoT systems are information-centric in nature, and hence ICN is a more suitable candidate to support IoT environments. Although NDN and COPSS work well for the Internet, their current full-fledged implementations cannot be used by resource-constrained IoT devices. CCN-lite is a lightweight, interoperable version of the CCNx protocol for supporting IoT devices. However, CCN-lite, like its ancestors, lacks the support for an efficient pub/sub mechanism. In this paper, we develop COPSS-lite, an efficient and lightweight implementation of pub/sub for the IoT. COPSS-lite is developed to enhance CCN-lite and also support multi-hop connections by incorporating the well-known RPL protocol for low-power and lossy networks. We provide a preliminary evaluation to show proof of operability with real-world sensor devices in an IoT lab. Our results show that COPSS-lite is compact, operates on all platforms that support CCN-lite, and we observe significant performance benefits with COPSS-lite in IoT environments."} {"_id": "5752b8dcec5856b7ad6289bbe1177acce535fba4", "title": "Parsing English with a Link Grammar", "text": "We develop a formal grammatical system called a link grammar, show how English grammar can be encoded in such a system, and give algorithms for efficiently parsing with a link grammar. Although the expressive power of link grammars is equivalent to that of context free grammars, encoding natural language grammars appears to be much easier with the new system. We have written a program for general link parsing and written a link grammar for the English language. The performance of this preliminary system \u2013 both in the breadth of English phenomena that it captures and in the computational resources used \u2013 indicates that the approach may have practical uses as well as linguistic significance. Our program is written in C and may be obtained through the internet."} {"_id": "36a3eed52ff0a694aa73ce6a0d592cb440ed3d31", "title": "Robust Physical-World Attacks on Machine Learning Models", "text": "Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world\u2014they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper.
In this paper we propose a new attack algorithm\u2014Robust Physical Perturbations (RP2)\u2014that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions."} {"_id": "fcc961631830b8b11114bd60f7e3ccb3888e3678", "title": "Sociable driving agents to maintain driver's attention in autonomous driving", "text": "Recently, many studies have been conducted on increasing the automation level in cars to achieve safer and more efficient transportation. The increased automation level creates room for the drivers to shift their attention to non-driving related activities. However, there are cases that cannot be handled by automation where a driver should take over control. This pilot study investigates a paradigm for keeping the drivers' situation-awareness active during autonomous driving by utilizing a social robot system, NAMIDA. NAMIDA is an interface consisting of three sociable driving agents that can interact with the driver through eye-gaze behaviors. We analyzed the effectiveness of NAMIDA on maintaining the drivers' attention to the road, by evaluating the response time of the drivers to a critical situation on the road. An experiment consisting of a take-over scenario was conducted in a dynamic driving simulator. The results showed that the existence of NAMIDA significantly reduced the response time of the drivers. However, surprisingly, NAMIDA without eye-gaze behaviors was more effective in reducing the response time than NAMIDA with eye-gaze behaviors. Additionally, the results revealed better subjective impressions for NAMIDA with eye-gaze behaviors."} {"_id": "e0f296dd7a8c9e315e4bd4e1108142f2e9e6faec", "title": "Correlating driver gaze with the road scene for driver assistance systems", "text": "A driver assistance system (DAS) should support the driver by monitoring road and vehicle events and presenting relevant and timely information to the driver. It is impossible to know what a driver is thinking, but we can monitor the driver\u2019s gaze direction and compare it with the position of information in the driver\u2019s viewfield to make inferences. In this way, not only do we monitor the driver\u2019s actions, we monitor the driver\u2019s observations as well. In this paper we present the automated detection and recognition of road signs, combined with the monitoring of the driver\u2019s response. We present a complete system that reads speed signs in real-time, compares the driver\u2019s gaze, and provides immediate feedback if it appears the sign has been missed by the driver."} {"_id": "19aca01bafe52131ec95473dac105889aa6a4d33", "title": "Robust method for road sign detection and recognition", "text": "This paper describes a method for detecting and recognizing road signs in gray-level and color images acquired by a single camera mounted on a moving vehicle. The method works in three stages.
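The RP2 idea above, optimizing a spatially constrained perturbation over images taken under varying physical conditions, can be sketched in a few lines of PyTorch. This is a schematic reconstruction from the abstract, not the authors' code: the mask shape, loss, and optimizer settings are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_attack(model, imgs, target, mask, steps=100, lr=0.01):
    """Sketch of an RP2-style targeted attack.

    model: classifier returning logits; imgs: (N,3,H,W) photos of the same sign
    under varying distances/angles; mask: (1,1,H,W) binary sticker-shaped region.
    """
    delta = torch.zeros(1, 3, *imgs.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    targets = torch.full((len(imgs),), target, dtype=torch.long)
    for _ in range(steps):
        # The perturbation is confined to the mask, mimicking stickers/graffiti,
        # and is optimized jointly over all physical-condition images.
        adv = (imgs + mask * delta).clamp(0, 1)
        loss = F.cross_entropy(model(adv), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (mask * delta).detach()
```

Averaging the loss over many viewing conditions is what gives the resulting perturbation its physical-world robustness.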
First, the search for the road sign is reduced to a suitable region of the image by using some a priori knowledge on the scene or color clues (when available). Secondly, a geometrical analysis of the edges extracted from the image is carried out, which generates candidates to be circular and triangular signs. Thirdly, a recognition stage tests each candidate by cross-correlation techniques; if validated, the candidate is classified according to the database of signs. Extensive experimentation has shown that the method is robust against low-level noise corrupting edge detection and contour following, and works for images of cluttered urban streets as well as country roads and highways. A further improvement on the detection and recognition scheme has been obtained by means of temporal integration of the extracted information based on Kalman filtering methods. The proposed approach can be very helpful for the development of a system for driving assistance."} {"_id": "28312c3a47c1be3a67365700744d3d6665b86f22", "title": "Face recognition: A literature survey", "text": "As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system. This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered."} {"_id": "2c21e4808dcb8f9d935d98af07d733b8134525ab", "title": "Fast Radial Symmetry for Detecting Points of Interest", "text": "A new transform is presented that utilizes local radial symmetry to highlight points of interest within a scene. Its low computational complexity and fast runtimes make this method well suited for real-time vision applications. The performance of the transform is demonstrated on a wide variety of images and compared with leading techniques from the literature. Both as a facial feature detector and as a generic region of interest detector the new transform is seen to offer equal or superior performance to contemporary techniques at a relatively low computational cost.
A real-time implementation of the transform is presented running at over 60 frames per second on a standard Pentium III PC."} {"_id": "342f0a9db326eb32df8edb8268e573ea6c7d3999", "title": "Vision-based eye-gaze tracking for human computer interface", "text": "Eye gaze is an input mode with the potential to serve as an efficient computer interface. Eye movement has been the focus of research in this area. Non-intrusive eye-gaze tracking that allows slight head movement is addressed in this paper. A small 2D mark is employed as a reference to compensate for this movement. The iris center has been chosen for purposes of measuring eye movement. The gaze point is estimated after acquiring the eye movement data. Preliminary experimental results are given through a screen-pointing application."} {"_id": "b43ffb0d4f8d1c66632b78ad74d92ab1218a6976", "title": "An Empirical Exploration of Curriculum Learning for Neural Machine Translation", "text": "Machine translation systems based on deep neural networks are expensive to train. Curriculum learning aims to address this issue by choosing the order in which samples are presented during training to help train better models faster. We adopt a probabilistic view of curriculum learning, which lets us flexibly evaluate the impact of curricula design, and perform an extensive exploration on a German-English translation task. Results show that it is possible to improve convergence time at no loss in translation quality. However, results are highly sensitive to the choice of sample difficulty criteria, curriculum schedule and other hyperparameters."} {"_id": "1f2e7ce5ccf6afc5551db594997a749444480bbd", "title": "Algorithms for Lipschitz Learning on Graphs", "text": "We develop fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices. The extension we compute is the absolutely minimal Lipschitz extension, and is the limit for large p of p-Laplacian regularization. We present an algorithm that computes a minimal Lipschitz extension in expected linear time, and an algorithm that computes an absolutely minimal Lipschitz extension in expected time \u00d5(mn). The latter algorithm has variants that seem to run much faster in practice. These extensions are particularly amenable to regularization: we can perform l0-regularization on the given values in polynomial time and l1-regularization on the initial function values and on graph edge weights in time \u00d5(m). Our definitions and algorithms naturally extend to directed graphs."} {"_id": "c633e0075f56581556967dbe32631d8e4c94dcb5", "title": "Tanzer group IIB constricted ear repair with helical advancement and superior auricular artery chondrocutaneous flap.", "text": "Constricted ear deformity was first described by Tanzer, who classified it into 3 groups according to the degree of constriction. The group IIB deformity involves the helix, scapha, and antihelical fold. The height of the ear is sharply reduced, and the soft tissue envelope is not sufficient to close the cartilage framework after expansion and reshaping. This study describes expanding the cartilage and increasing the height by advancing the helical root superiorly and repairing the skin-cartilage defect with a superior auricular artery chondrocutaneous flap in Tanzer group IIB constricted ear deformity. Six ears of 6 patients were treated with this technique during the past 3 years.
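Returning to the fast radial symmetry transform recorded above: a simplified NumPy sketch of its voting scheme follows. Each gradient votes for the point one radius away along its direction, and per-radius orientation and magnitude maps are combined and smoothed. This bright-region-only variant with ad hoc normalization is an approximation of the published transform, not a faithful reimplementation.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def fast_radial_symmetry(img, radii=(3, 5, 7), alpha=2.0):
    """Simplified radial-symmetry interest map (bright regions only)."""
    img = img.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    mag = np.hypot(gx, gy) + 1e-9
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    S = np.zeros_like(img)
    for n in radii:
        O = np.zeros_like(S)
        M = np.zeros_like(S)
        # Each gradient votes for the pixel n steps along its direction.
        py = np.clip((ys + np.round(n * gy / mag)).astype(int), 0, img.shape[0] - 1)
        px = np.clip((xs + np.round(n * gx / mag)).astype(int), 0, img.shape[1] - 1)
        np.add.at(O, (py, px), 1.0)
        np.add.at(M, (py, px), mag)
        F_n = (O / O.max()) ** alpha * (M / M.max())
        S += gaussian_filter(F_n, sigma=0.25 * n)  # one smoothed map per radius
    return S / len(radii)
```

Peaks of the returned map mark radially symmetric structures such as eyes or sign centers, which is how the transform serves as a point-of-interest detector.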
All patients were satisfied with the appearance of their corrected ears, and the increase in height was maintained through the follow-up period. The described technique does not have the disadvantages and possible complications of harvesting a costal cartilage graft. Moving and fixing the root of the helix to a more superior position provides the auricle with additional length. The superior auricular artery chondrocutaneous flap not only provides adequate soft tissue for primary closure of the anterior portion of the auricle but also aids in repairing the cartilage defect resulting from the superior advancement of the helix."} {"_id": "20862a3a216ecc99519fa24fb0826317a3e54302", "title": "Hand Gesture Recognition using Sign Language", "text": "Sign language is mostly used by deaf and mute people. In order to improve man-machine interaction, sign language can be used as a way of communicating with machines. Most applications that enable sign language processing use data gloves and other devices for interacting with computers, which restricts the freedom of users. To avoid this, our system captures a live video stream through a normal webcam, passes it as input, and processes the video to detect human gestures. These gestures can be signs represented in any sign language, like American Sign Language (ASL), Indian Sign Language (ISL), etc. By processing these signs, the system enables efficient sign-based searching. In the future, this method of searching could be deployed in many areas, such as railway stations for looking up train details. Sign-based searching can play a major role because the latest existing search technique is voice-based, which may not work reliably since each person has a different pronunciation. This system can overcome the drawbacks of the currently available searching technique and thus bring every person under the umbrella of search."} {"_id": "4e740d864f382798b33b3510b1ca2980c885c5e7", "title": "Thermoplastic Forming of Bulk Metallic Glass\u2014 A Technology for MEMS and Microstructure Fabrication", "text": "A technology for microelectromechanical systems (MEMS) and microstructure fabrication is introduced where the bulk metallic glass (BMG) is formed into a mold under applied pressure at a temperature where the BMG exists as a viscous liquid. This thermoplastic forming is carried out under forming pressures and temperatures comparable to those used for plastics. The range of possible sizes in all three dimensions of this technology allows the replication of high-strength features ranging from about 30 nm to centimeters with aspect ratios of 20 to 1, which are homogeneous and isotropic and free of stresses and porosity. Our processing method includes a hot-cutting technique that enables a clean planar separation of the parts from the BMG reservoir. It also allows net-shaping of three-dimensional parts on the micron scale. The technology can be implemented into conventional MEMS fabrication processes.
The properties of BMG as well as the thermoplastic formability enable new applications and performance improvements of existing MEMS devices and nanostructures."} {"_id": "85acd7b5542bf475c70914f1c4efa4e2207421c2", "title": "Mobile Multimedia Recommendation in Smart Communities: A Survey", "text": "Due to the rapid growth of Internet broadband access and proliferation of modern mobile devices, various types of multimedia (e.g., text, images, audio, and video) have become ubiquitously available anytime. Mobile device users usually store and use multimedia contents based on their personal interests and preferences. Mobile device challenges such as storage limitation have, however, introduced the problem of mobile multimedia overload to users. To tackle this problem, researchers have developed various techniques that recommend multimedia for mobile users. In this paper, we examine the importance of mobile multimedia recommendation systems from the perspective of three smart communities, namely mobile social learning, mobile event guide, and context-aware services. A careful analysis of existing research reveals that the implementation of proactive, sensor-based and hybrid recommender systems can improve mobile multimedia recommendations. Nevertheless, there are still challenges and open issues such as the incorporation of context and social properties, which need to be tackled to generate accurate and trustworthy mobile multimedia recommendations."} {"_id": "63aa729924d672a37f04bd3a18e60bd0510bb3a3", "title": "Wireless Geophone Network for remote monitoring and detection of landslides", "text": "Recent years have shown an alarming increase in rainfall-induced landslides. This has created the need for a monitoring system to predict landslides, which could eventually reduce the loss of human life. We have developed and deployed a Wireless Sensor Network to monitor rainfall-induced landslides in Munnar, South India. A successful landslide warning was issued in June 2009 using this system. The system is being enhanced by incorporating a Wireless Geophone Network to locate the initiation of a landslide. The paper discusses an algorithm that was developed to analyze the geophone data and automatically detect the landslide signal. A novel method to localize the landslide initiation point is detailed. The algorithm is based on the time delay inherent in the transmission of waves through the surface of the earth. The approach detailed here does not require additional energy since the geophones are self-excitatory. The error rate of the approach is much lower than that of other localization methods such as RSSI. The proposed algorithm is being tested and validated in the landslide laboratory set up at our university."} {"_id": "d70cd3d2fe0a194321ee92c305976873b883d529", "title": "A compact, 37% fractional bandwidth millimeter-wave phase shifter using a wideband lange coupler for 60-GHz and E-band systems", "text": "A wideband 57.7\u201384.2 GHz phase shifter is presented, using a compact Lange coupler to generate in-phase and quadrature signals. The Lange coupler is followed by two balun transformers that provide the IQ vector modulation with differential I and Q signals. The implemented phase shifter demonstrates an average 6-dB insertion loss and 5-dB gain variation. The measured average rms phase and gain errors are 7 degrees and 1 dB, respectively. The phase shifter is implemented in GlobalFoundries 45-nm SOI CMOS technology using a trap-rich substrate.
The chip area is 385 \u03bcm \u00d7 285 \u03bcm and the phase shifter consumes less than 17 mW. To the best of the authors' knowledge, this is the first phase shifter that covers both the 60-GHz band and E-band frequencies with a fractional bandwidth of 37%."} {"_id": "98bcaf679cc12712d8187b64e1351f1cbf90d7fb", "title": "An innovative ontology-driven system supporting personnel selection: the OntoHR case", "text": "This paper describes the initial development of an HRM system that aims to decrease the gap between higher vocational education and the labour market for a specific job in the ICT sector. This paper focuses specifically on the delineation of a process model and the selection of a suitable job role (information system analyst) that is valid across organisational and cultural boundaries. The process model implies various applied uses of this ontology-based system, including mapping qualifications in vocational education to current and valid job roles, testing and evaluating the student applicant on the basis of labour-market-driven competencies, and providing ad-hoc support to educational institutions by elucidating the weaknesses of particular VET curricula."} {"_id": "72868defad65be8d72c911afb343c3bc74fb2a24", "title": "A Balanced Filtering Branch-Line Coupler", "text": "A balanced filtering branch-line coupler is proposed for the first time. The proposed design is realized by coupled microstrip lines and coupled-line-fed coupling structures. Based on this, the proposed design not only exhibits the functions of power dividing and filtering for differential-mode signals, but also can suppress the common-mode noise/signal and be easily connected with other balanced circuits. The dimension of the proposed design is similar to that of the traditional single-ended design. A prototype of the balanced filtering branch-line coupler centered at 1.87 GHz with the size of 0.33 \u03bbg \u00d7 0.42 \u03bbg is fabricated, where \u03bbg is the guided wavelength at the center frequency. The measured results exhibit the maximum insertion loss of 1.4 dB with a 3-dB fractional bandwidth of 3.5%."} {"_id": "c59dd7da9b11761239d9b97e36ef972acfb1ba6f", "title": "Identifying personal genomes by surname inference.", "text": "Sharing sequencing data sets without identifiers has become a common practice in genomics. Here, we report that surnames can be recovered from personal genomes by profiling short tandem repeats on the Y chromosome (Y-STRs) and querying recreational genetic genealogy databases. We show that a combination of a surname with other types of metadata, such as age and state, can be used to triangulate the identity of the target. A key feature of this technique is that it entirely relies on free, publicly accessible Internet resources. We quantitatively analyze the probability of identification for U.S. males. We further demonstrate the feasibility of this technique by tracing back with high probability the identities of multiple participants in public sequencing projects."} {"_id": "96693c88cdcf0b721e4eff7f4f426bc90f90d601", "title": "Automatic web spreadsheet data extraction", "text": "Spreadsheets contain a huge amount of high-value data but do not observe a standard data model and thus are difficult to integrate. A large number of data integration tools exist, but they generally can only work on relational data.
Existing systems for extracting relational data from spreadsheets are too labor intensive to support ad-hoc integration tasks, in which the correct extraction target is only learned during the course of user interaction.\n This paper introduces a system that automatically extracts relational data from spreadsheets, thereby enabling relational spreadsheet integration. The resulting integrated relational data can be queried directly or can be translated into RDF triples. When compared to standard techniques for spreadsheet data extraction on a set of 100 random Web spreadsheets, the system reduces the amount of human labor by 72% to 92%. In addition to the system design, we present the results of a general survey of more than 400,000 spreadsheets we downloaded from the Web, giving a novel view of how users organize their data in spreadsheets."} {"_id": "5bbc7e59b11e737f653a565419b68d9f5d9ceb68", "title": "Analysis of waveguide slot-based structures using wide-band equivalent-circuit model", "text": "Analysis of geometrically complicated waveguide-based slotted arrays and filters is performed using a simple equivalent-circuit model. First, the circuit parameters (inductance and capacitance) of a simple waveguide slot-coupler problem are obtained through moment-method (MoM) analysis. The values of the lumped LC elements are virtually constant over the frequency range of interest (the X-band) for specific waveguide and slot dimensions. Based on the equivalent-circuit model of a single slot of two coupled waveguides, more complicated structures are then analyzed, such as slot coupler arrays and slot-based waveguide filters. The scattering parameters of these structures are obtained through circuit analysis, and are verified using the MoM and finite-difference time-domain method. Excellent agreement is observed over a wide band of frequencies and is confirmed by experimental results."} {"_id": "eb58118b9db1e95f9792f39c3780dbba3bb966cb", "title": "A Wearable Inertial Measurement System With Complementary Filter for Gait Analysis of Patients With Stroke or Parkinson\u2019s Disease", "text": "This paper presents a wearable inertial measurement system and its associated spatiotemporal gait analysis algorithm to obtain quantitative measurements and explore clinical indicators from the spatiotemporal gait patterns for patients with stroke or Parkinson\u2019s disease. The wearable system is composed of a microcontroller, a triaxial accelerometer, a triaxial gyroscope, and an RF wireless transmission module. The spatiotemporal gait analysis algorithm, consisting of procedures of inertial signal acquisition, signal preprocessing, gait phase detection, and ankle range of motion estimation, has been developed for extracting gait features from accelerations and angular velocities. In order to estimate accurate ankle range of motion, we have integrated accelerations and angular velocities into a complementary filter for reducing the accumulation of integration error of inertial signals. All 24 participants mounted the system on their foot to walk along a straight line of 10 m at normal speed and their walking recordings were collected to validate the effectiveness of the proposed system and algorithm. 
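For the spreadsheet-extraction record above, the core of turning a "data frame" style spreadsheet into relational tuples is an unpivot of the data region. The pandas sketch below shows only that final step on a made-up sheet; the actual system infers the header/attribute layout automatically, which this example does not attempt.

```python
import pandas as pd

# A flattened spreadsheet: the first column is a row attribute, while the
# remaining column headers are really values of a second attribute (year).
sheet = pd.DataFrame({"Region": ["North", "South"],
                      "2019": [10, 20],
                      "2020": [12, 25]})

# Unpivoting recovers one relational tuple per cell of the data region.
tuples = sheet.melt(id_vars="Region", var_name="Year", value_name="Sales")
print(tuples)
# Each resulting row maps cleanly onto a relational fact or an RDF triple.
```

Automating the choice of `id_vars` and value columns across arbitrary layouts is precisely the labor the paper's system removes.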
Experimental results show that the proposed inertial measurement system with the designed spatiotemporal gait analysis algorithm is a promising tool for automatically analyzing spatiotemporal gait information, serving as clinical indicators for monitoring therapeutic efficacy and aiding diagnosis of stroke or Parkinson\u2019s disease."} {"_id": "f17d9ab6a60600174df20eb6cb1fecc522912257", "title": "Preparation of Personalized-dose Salbutamol Sulphate Oral Films with Thermal Ink-Jet Printing", "text": "The aim was to evaluate the use of thermal ink-jetting (TIJ) as a method for dosing drugs onto oral films. A Hewlett-Packard printer cartridge was modified so that aqueous drug solutions replaced the ink. The performance of the printer as a function of print solution viscosity and surface tension was determined; viscosities between 1.1 and 1.5\u00a0mm\u00b2\u00a0s\u207b\u00b9 were found to be optimal, while surface tension did not affect deposition. A calibration curve for salbutamol sulphate was prepared, which demonstrated that drug deposition onto an acetate film varied linearly with concentration (r\u00b2\u2009=\u20090.9992). The printer was then used to deposit salbutamol sulphate onto an oral film made of potato starch. It was found that when doses were deposited in a single pass under the print head, then the measured dose was in good agreement with the theoretical dose. With multiple passes the measured dose was always significantly less than the theoretical dose. It is proposed that the losses arise from erosion of the printed layer by shearing forces during paper handling. The losses were predictable, and the variance in dose deposited was always less than the BP limits for tablet and oral syrup salbutamol sulphate preparations. TIJ printing offers a rapid method for extemporaneous preparation of personalized-dose medicines."} {"_id": "a1a84459c9cae06e7e118f053193ac511fba6d96", "title": "From big smartphone data to worldwide research: The Mobile Data Challenge", "text": "This paper presents an overview of the Mobile Data Challenge (MDC), a large-scale research initiative aimed at generating innovations around smartphone-based research, as well as community-based evaluation of mobile data analysis methodologies. First, we review the Lausanne Data Collection Campaign (LDCC) \u2013 an initiative to collect a unique, longitudinal smartphone data set for the MDC. Then, we introduce the Open and Dedicated Tracks of the MDC; describe the specific data sets used in each of them; discuss the key design and implementation aspects introduced in order to generate privacy-preserving and scientifically relevant mobile data resources for wider use by the research community; and summarize the main research trends found among the 100+ challenge submissions. We finalize by discussing the main lessons learned from the participation of several hundred researchers worldwide in the MDC Tracks."} {"_id": "519d220149838ca5f534b004b305ed9ca1f3afd1", "title": "ESUR prostate MR guidelines 2012", "text": "The aim was to develop clinical guidelines for multi-parametric MRI of the prostate by a group of prostate MRI experts from the European Society of Urogenital Radiology (ESUR), based on literature evidence and consensus expert opinion. True evidence-based guidelines could not be formulated, but a compromise, reflected by \u201cminimal\u201d and \u201coptimal\u201d requirements, has been made.
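The complementary filter used in the gait system above fuses gyroscope integration (responsive but drift-prone) with accelerometer tilt (noisy but drift-free). A minimal sketch of its standard first-order form follows; the coefficient value and degree units are assumptions, not the paper's settings.

```python
def complementary_filter(acc_angles, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer- and gyroscope-derived ankle angles (a sketch).

    acc_angles: per-sample angle from the gravity direction [deg]
    gyro_rates: per-sample angular velocity [deg/s]
    alpha: weight on the integrated gyro (high-pass) vs. accelerometer (low-pass)
    """
    angle = acc_angles[0]
    out = [angle]
    for acc, rate in zip(acc_angles[1:], gyro_rates[1:]):
        # Gyro integration tracks fast motion between samples, while the
        # accelerometer term continuously pulls the estimate back toward
        # the drift-free gravity reference.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        out.append(angle)
    return out
```

This blend is what lets the system report an accurate ankle range of motion without the unbounded error of pure gyro integration.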
The scope of these ESUR guidelines is to promulgate high-quality MRI acquisition and evaluation, with the correct indications, for prostate cancer across the whole of Europe and eventually outside Europe. The guidelines for the optimal technique and three protocols for \u201cdetection\u201d, \u201cstaging\u201d and \u201cnode and bone\u201d are presented. The use of endorectal coil vs. pelvic phased array coil and 1.5 vs. 3\u00a0T is discussed. Clinical indications and a PI-RADS classification for structured reporting are presented. Key Points \u2022 This report provides guidelines for magnetic resonance imaging (MRI) in prostate cancer. \u2022 Clinical indications, and minimal and optimal imaging acquisition protocols are provided. \u2022 A structured reporting system (PI-RADS) is described."} {"_id": "7e7f14f325d7e8d70e20ca22800ad87cfbf339ff", "title": "An overview of pricing models for revenue management", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."} {"_id": "765865cd323c8f16909c89ca8e5a3ff38e41c36b", "title": "Recommendation Systems in Software Engineering", "text": "Software engineering is a knowledge-intensive activity that presents many information navigation challenges. Information spaces in software engineering include the source code and change history of the software, discussion lists and forums, issue databases, component technologies and their learning resources, and the development environment. The technical nature, size, and dynamicity of these information spaces motivate the development of a special class of applications to support developers: recommendation systems in software engineering (RSSEs), which are software applications that provide information items estimated to be valuable for a software engineering task in a given context. In this introduction, we review the characteristics of information spaces in software engineering, describe the unique aspects of RSSEs, present an overview of the issues and considerations involved in creating, evaluating, and using RSSEs, and present a general outlook on the current state of research and development in the field of recommendation systems for highly technical domains."} {"_id": "87dc0dea299ed325d70973b6ffc8ca03b48d97f4", "title": "Relationship Between Visual-Motor Integration, Eye-Hand Coordination, and Quality of Handwriting", "text": "While the influence of visual-motor integration (copying forms) on the quality of handwriting has been widely investigated, the influence of eye-hand coordination (tracing item) has been less well analyzed. The Concise Assessment Scale for Children\u2019s Handwriting (BHK), the Developmental Test of Visual Perception (DTVP-2), and the section \u201cManual Dexterity\u201d of the Movement Assessment Battery for Children (M-ABC) were administered to a group of second-grade children (N = 75; 8.1-year-olds). The association of visual-motor integration and eye-hand coordination is predictive of the quality of handwriting (p < .001). These two skills should be taken into consideration when children are referred to occupational therapy for difficulties in handwriting."} {"_id": "dc877b84314f20dc32df80f838942325decf997a", "title": "The research and implementation of metadata cache backup technology based on CEPH file system", "text": "Based on research and analysis of the Ceph file system, a log cache backup scheme is proposed.
The log cache backup scheme reduces metadata access delays by caching logs and reducing the time spent storing logs to server clusters. In order to prevent the loss of cached data in metadata servers, a cached data backup scheme is proposed. Compared to the metadata management subsystem of Ceph, the log cache backup scheme can effectively improve the access performance of metadata. Finally, based on the open source code of the Ceph file system, the log cache backup scheme is implemented. Compared to the performance of the Ceph metadata management subsystem, experimental results show that performance improvements of the log cache backup scheme are up to 11.5%."} {"_id": "9ca7a563741e76c1a3b5e749c78a0604cf18fa24", "title": "A meta-analysis of the technology acceptance model: Investigating subjective norm and moderation effects", "text": "We conducted a quantitative meta-analysis of previous research on the technology acceptance model (TAM) in an attempt to make well-grounded statements on the role of subjective norm. Furthermore, we compared TAM results by taking into account moderating effects of one individual-related factor (type of respondents), one technology-related factor (type of technology), and one contingent factor (culture). Results indicated a significant influence of subjective norm on perceived usefulness and behavioral intention to use. Moderating effects were found for all three factors. The findings yielded managerial implications for both intracompany and market-based settings. \u00a9 2006 Elsevier B.V. All rights reserved."} {"_id": "d842dd469141b4a6f06894240814ef641270fe56", "title": "Triangular Tile Pasting P Systems for Pattern Generation", "text": "In the area of membrane computing, a new computability model, called P system, which is a highly distributed, parallel theoretical computing model, was introduced by G.H. P\u0103un [1], inspired by the cell structure and its function. There are several application areas of these P systems. Among these, one area deals with the problem of picture generation. Ceterachi et al. [2] began a study on linking the two areas of membrane computing and picture grammars, which were not very much linked before, by relating P systems and array-rewriting grammars generating picture languages and proposing array rewriting P systems. Iso-Triangular picture languages were introduced in [3], which can also generate rectangular and hexagonal picture languages. A tile pasting P system model for pattern generation was introduced in [4]. In this paper we propose a theoretical model of a P system using triangular tiles, called the Triangular tile pasting P system, for generating two-dimensional patterns that are generated by gluing triangular tiles, and study some of its properties. Keywords--Triangular Tiles, Array Grammars, Pasting System, Tile Pasting P Systems"} {"_id": "042f8effa0841dca3a41b58fab12f0fd8e3f9ccb", "title": "Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers", "text": "We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class is compared against all others, or in which all pairs of classes are compared to each other, or in which output codes with error-correcting properties are used.
We propose a general method for combining the classifiers generated on the binary problems, and we prove a general empirical multiclass loss bound given the empirical loss of the individual binary learning algorithms. The scheme and the corresponding bounds apply to many popular classification learning algorithms including support-vector machines, AdaBoost, regression, logistic regression and decision-tree algorithms. We also give a multiclass generalization error analysis for general output codes with AdaBoost as the binary learner. Experimental results with SVM and AdaBoost show that our scheme provides a viable alternative to the most commonly used multiclass algorithms."} {"_id": "3f168efed1bc3d97c0f7ddb2f3e79f2eb230bafa", "title": "Analysis of Emotionally Salient Aspects of Fundamental Frequency for Emotion Detection", "text": "During expressive speech, the voice is enriched to convey not only the intended semantic message but also the emotional state of the speaker. The pitch contour is one of the important properties of speech that is affected by this emotional modulation. Although pitch features have been commonly used to recognize emotions, it is not clear what aspects of the pitch contour are the most emotionally salient. This paper presents an analysis of the statistics derived from the pitch contour. First, pitch features derived from emotional speech samples are compared with the ones derived from neutral speech, by using the symmetric Kullback-Leibler distance. Then, the emotionally discriminative power of the pitch features is quantified by comparing nested logistic regression models. The results indicate that gross pitch contour statistics such as mean, maximum, minimum, and range are more emotionally prominent than features describing the pitch shape. Also, analyzing the pitch statistics at the utterance level is found to be more accurate and robust than analyzing the pitch statistics for shorter speech regions (e.g., voiced segments). Finally, the best features are selected to build a binary emotion detection system for distinguishing between emotional and neutral speech. A new two-step approach is proposed. In the first step, reference models for the pitch features are trained with neutral speech, and the input features are contrasted with the neutral model. In the second step, a fitness measure is used to assess whether the input speech is similar to, in the case of neutral speech, or different from, in the case of emotional speech, the reference models. The proposed approach is tested with four acted emotional databases spanning different emotional categories, recording settings, speakers and languages. The results show that the recognition accuracy of the system is over 77% just with the pitch features (baseline 50%). When compared to conventional classification schemes, the proposed approach performs better in terms of both accuracy and robustness."} {"_id": "782f2b8924ed5104b42308a0565429747780f0c2", "title": "The role of voice quality in communicating emotion, mood and attitude", "text": "This paper explores the role of voice quality in the communication of emotions, moods and attitudes. Listeners\u2019 reactions to an utterance synthesised with seven different voice qualities were elicited in terms of pairs of opposing affective attributes. The voice qualities included harsh voice, tense voice, modal voice, breathy voice, whispery voice, creaky voice and lax\u2013creaky voice.
These were synthesised using a formant synthesiser, and the voice source parameter settings were guided by prior analytic studies as well as auditory judgements. Results offer support for some past observations on the association of voice quality and affect, and suggest a number of refinements in some cases. Listeners\u2019 ratings further suggest that these qualities are considerably more effective in signalling milder affective states than the strong emotions. It is clear that there is no one-to-one mapping between voice quality and affect: rather a given quality tends to be associated with a cluster of affective attributes. \u00a9 2002 Elsevier Science B.V. All rights reserved."} {"_id": "a474b149717cb37d6c507395ba27844a92889939", "title": "Emotion recognition in human-computer interaction", "text": "In this paper, we outline the approach we have developed to construct an emotion-recognising system. It is based on guidance from psychological studies of emotion, as well as from the nature of emotion in its interaction with attention. A neural network architecture is constructed to be able to handle the fusion of different modalities (facial features, prosody and lexical content in speech). Results from the network are given and their implications discussed, as are implications for the future direction of the research."} {"_id": "002a8b9ef513d46dc8dcce85c04a87ae6a221b4c", "title": "New Support Vector Algorithms", "text": "We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter \u03bd lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter \u03b5 in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of \u03bd, and report experimental results."} {"_id": "37bf8fa510083fadc50ef3a52e4b8c10fa0531dd", "title": "RNN-based sequence prediction as an alternative or complement to traditional recommender systems", "text": "Recurrent neural networks have the ability to grasp the temporal patterns within the data. This is a property that can be used to help a recommender system better take into account the user\u2019s past history. Still, the dimensionality problem that arises within the recommender system field also arises here, as the number of items the system has to be aware of is substantially high. Recent research has studied the use of such neural networks at a user\u2019s session level. This thesis rather examines the use of this technique at the level of a user\u2019s whole past history, combined with techniques such as embeddings and softmax sampling in order to accommodate the high dimensionality. The proposed method results in a sequence prediction model that can be used as is for the recommender task or as a feature within a more complex system."} {"_id": "b3bcbbe5b6a7f477b4277b3273f999721dba30ef", "title": "Developing multiple hypotheses in behavioral ecology", "text": "Researchers in behavioral ecology are increasingly turning to research methods that allow the simultaneous evaluation of hypotheses. This approach has great potential to increase our scientific understanding, but researchers interested in the approach should be aware of its long and somewhat contentious history.
Also, prior to implementing multiple hypothesis evaluation, researchers should be aware of the importance of clearly specifying a priori hypotheses. This is one of the more difficult aspects of research based on multiple hypothesis evaluation, and we outline and provide examples of three approaches for doing so. Finally, multiple hypothesis evaluation has some limitations important to behavioral ecologists; we discuss two practical issues behavioral ecologists are likely to face."} {"_id": "9746994d09b5bf6c40bee3693ee8678e191f84b8", "title": "The Sixth PASCAL Recognizing Textual Entailment Challenge", "text": "This paper presents the Fifth Recognizing Textual Entailment Challenge (RTE5). Following the positive experience of the last campaign, RTE-5 has been proposed for the second time as a track at the Text Analysis Conference (TAC). The structure of the RTE-5 Main Task remained unchanged, offering both the traditional two-way task and the three-way task introduced in the previous campaign. Moreover, a pilot Search Task was set up, consisting of finding all the sentences in a set of documents that entail a given hypothesis. 21 teams participated in the campaign, of which 20 took part in the Main Task (for a total of 54 runs) and 8 in the Pilot Task (for a total of 20 runs). Another important innovation introduced in this campaign was the mandatory ablation tests that participants had to perform for all major knowledge resources employed by their systems."} {"_id": "f1eb886a9e3d2a74cdce442bfdef4c0341d697c8", "title": "Mindfulness-Based Cognitive Therapy and the Adult ADHD Brain: A Neuropsychotherapeutic Perspective", "text": "Attention-deficit/hyperactivity disorder (ADHD) is a recognized serious mental disorder that often persists into adulthood. The symptoms and impairments associated with ADHD often cause significant mental suffering in affected individuals. ADHD has been associated with abnormal neuronal activity in various neuronal circuits, such as the dorsofrontostriatal, orbitofrontostriatal, and frontocerebellar circuits. Psychopharmacological treatment with methylphenidate hydrochloride is recommended as the first-line treatment for ADHD. It is assumed that medication ameliorates ADHD symptoms by improving the functioning of the brain areas affected in the condition. However, side effects, contraindications, or non-response can limit the effectiveness of a psychopharmacological treatment for ADHD. It is therefore necessary to develop non-pharmacological interventions that target neuronal mechanisms associated with the condition in the same way as pharmacological treatment. We think that mindfulness meditation employed as a neuropsychotherapeutic intervention could help patients with ADHD to regulate impaired brain functioning and thereby reduce ADHD symptoms. In this paper, we highlight the mechanisms of such mindfulness meditation, and thus provide a rationale for further research and treatment development from a neuropsychotherapeutic perspective. We conclude that mindfulness meditation employed as a neuropsychotherapeutic intervention in therapy is a promising treatment approach in ADHD."} {"_id": "709df9cb0574f1ff3cb3302c904db38365040fa8", "title": "Image-based reconstruction of wire art", "text": "Objects created by connecting and bending wires are common in furniture design, metal sculpting, wire jewelry, etc.
Reconstructing such objects with traditional depth- and image-based methods is extremely difficult due to their unique characteristics such as lack of features, thin elements, and severe self-occlusions. We present a novel image-based method that reconstructs a set of continuous 3D wires used to create such an object, where each wire is composed of an ordered set of 3D curve segments. Our method exploits two main observations: simplicity - wire objects are often created using only a small number of wires, and smoothness - each wire is primarily smoothly bent with sharp features appearing only at joints or isolated points. In light of these observations, we tackle the challenging image correspondence problem across featureless wires by first generating multiple candidate 3D curve segments and then solving a global selection problem that balances between image and smoothness cues to identify the correct 3D curves. Next, we recover a decomposition of such curves into a set of distinct and continuous wires by formulating a multiple traveling salesman problem, which finds smooth paths, i.e., wires, connecting the curves. We demonstrate our method on a wide set of real examples with varying complexity and present high-fidelity results using only 3 images for each object. We provide the source code and data for our work on the project website."} {"_id": "52ed13d4b686fa236ca4e999b7ec533431dfbfee", "title": "On the Semantics of Self-Unpacking Malware Code \u2217", "text": "The rapid increase in attacks on software systems via malware such as viruses, worms, trojans, etc., has made it imperative to develop effective techniques for detecting and analyzing malware binaries. Such binaries are usually transmitted in packed or encrypted form, with the executable payload decrypted dynamically and then executed. In order to reason formally about their execution behavior, therefore, we need semantic descriptions that can capture this self-modifying aspect of their code. However, current approaches to the semantics of programs usually assume that the program code is immutable, which makes them inapplicable to self-unpacking malware code. This paper takes a step towards addressing this problem by describing a formal semantics for self-modifying code. We use our semantics to show how the execution of self-unpacking code can be divided naturally into a sequence of phases, and use this to show how the behavior of a program can be characterized statically in terms of a program evolution graph. We discuss several applications of our work, including static unpacking and deobfuscation of encrypted malware and static cross-phase code analysis."} {"_id": "17a4b08d28ca23e6a077abd427aa88f70fec8c3e", "title": "An Information Retrieval Approach For Automatically Constructing Software Libraries", "text": "Although software reuse presents clear advantages for programmer productivity and code reliability, it is not practiced enough. One of the reasons for the only moderate success of reuse is the lack of software libraries that facilitate the actual locating and understanding of reusable components. This paper describes a technology for automatically assembling large software libraries that promote software reuse by helping the user locate the components closest to her/his needs. Software libraries are automatically assembled from a set of unorganized components by using information retrieval techniques. The construction of the library is done in two steps.
First, attributes are automatically extracted from natural language documentation by using a new indexing scheme based on the notions of lexical affinities and quantity of information. Then, a hierarchy for browsing is automatically generated using a clustering technique that draws only on the information provided by the attributes. Thanks to the free-text indexing scheme, tools following this approach can accept free-style natural language queries. This technology has been implemented in the GURU system, which performs 15% better on a random test set, while being much less expensive to build than INFOEXPLORER."} {"_id": "f3e36b7e24b71129480321ecd8cfd81c8d5659b9", "title": "NOSEP: Nonoverlapping Sequence Pattern Mining With Gap Constraints", "text": "Sequence pattern mining aims to discover frequent subsequences as patterns in a single sequence or a sequence database. By combining gap constraints (or flexible wildcards), users can specify special characteristics of the patterns and discover meaningful subsequences suitable for their own application domains, such as finding gene transcription sites from DNA sequences or discovering patterns for time series data classification. Due to the inherent complexity of sequence patterns, including the exponential candidate space with respect to pattern letters and gap constraints, to date, existing sequence pattern mining methods are either incomplete or do not support the Apriori property because the support ratio of a pattern may be greater than that of its subpatterns. Most importantly, patterns discovered by these methods are either too restrictive or too general and cannot represent underlying meaningful knowledge in the sequences. In this paper, we focus on a nonoverlapping sequence pattern mining task with gap constraints, where a nonoverlapping sequence pattern allows sequence letters to be flexibly and maximally utilized for pattern discovery. A new Apriori-based nonoverlapping sequence pattern mining algorithm, NOSEP, is proposed. NOSEP is a complete pattern mining algorithm, which uses a specially designed data structure, Nettree, to calculate the exact occurrence of a pattern in the sequence. Experimental results and comparisons on biology DNA sequences, time series data, and Gazelle datasets demonstrate the efficiency of the proposed algorithm and the uniqueness of nonoverlapping sequence patterns compared to other methods."} {"_id": "0bfb9311d006ed6b520f4afbd349b9bacca88000", "title": "A CMOS Transceiver for a Multistandard 13.56-MHz RFID Reader SoC", "text": "A CMOS transceiver for a multistandard 13.56-MHz radio-frequency identification reader system-on-a-chip (SoC) is designed and fabricated. The SoC consists of an RF/analog part for modulation/demodulation and a digital part for controlling the transceiver functionality. Prior to designing the integrated circuit, pre-experiments using discrete components and commercial tags are performed. With the results, overall functions and specifications are determined. For supporting multistandard operation, several blocks are designed with digital controls according to the standards. In the transmitter, a digitally controlled amplitude modulator for various modulation indexes and a power control circuit are adopted. In the receiver, a variable gain amplifier and a level-controllable comparator, which are also controlled digitally according to the standard, are introduced. The full transceiver SoC is implemented in the Chartered 0.18-\u03bcm CMOS technology.
The measurement results of the implemented chip indicate that the designed transceiver operates in a multistandard mode."} {"_id": "d9bfaf51cf9894657b814adfb342ff25948877e0", "title": "The contagious leader: impact of the leader's mood on the mood of group members, group affective tone, and group processes.", "text": "The present study examined the effects of leaders' mood on (a) the mood of individual group members, (b) the affective tone of groups, and (c) 3 group processes: coordination, effort expenditure, and task strategy. On the basis of a mood contagion model, the authors found that when leaders were in a positive mood, in comparison to a negative mood, (a) individual group members experienced more positive and less negative mood, and (b) groups had a more positive and a less negative affective tone. The authors also found that groups with leaders in a positive mood exhibited more coordination and expended less effort than did groups with leaders in a negative mood. Applied implications of the results are discussed."} {"_id": "ed0a72c00b56a1c9fe9e9151211021f7d4ad4f47", "title": "A 3-Month, Randomized, Double-Blind, Placebo-Controlled Study Evaluating the Ability of an Extra-Strength Marine Protein Supplement to Promote Hair Growth and Decrease Shedding in Women with Self-Perceived Thinning Hair", "text": "An oral marine protein supplement (MPS) is designed to promote hair growth in women with temporary thinning hair (Viviscal Extra Strength; Lifes2good, Inc., Chicago, IL). This double-blind, placebo-controlled study assessed the ability of MPS to promote terminal hair growth in adult women with self-perceived thinning hair associated with poor diet, stress, hormonal influences, or abnormal menstrual cycles. Adult women with thinning hair were randomized to receive MPS (N = 30) or placebo (N = 30) twice daily for 90 days. Digital images were obtained from a 4\u2009cm\u00b2 scalp target area. Each subject's hair was washed and shed hairs were collected and counted. After 90 days, these measures were repeated and subjects completed Quality of Life and Self-Assessment Questionnaires. MPS-treated subjects achieved a significant increase in the number of terminal hairs within the target area (P < 0.0001), which was significantly greater than placebo (P < 0.0001). MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaire scores (P = 0.035). There were no reported adverse events. MPS promotes hair growth and decreases hair loss in women suffering from temporary thinning hair. This trial is registered with ClinicalTrials.gov Identifier: NCT02297360."} {"_id": "e6b7c15bba47ec33771eda5e22c747a8a093b9b5", "title": "Digital Advertising: An Information Scientist's Perspective", "text": "Digital online advertising is a form of promotion that uses the Internet and World Wide Web for the express purpose of delivering marketing messages to attract customers. Examples of online advertising include text ads that appear on search engine results pages, banner ads, in-text ads, or Rich Media ads that appear on regular web pages, portals, or applications. Over the past 15 years online advertising, a $65 billion industry worldwide in 2009, has been pivotal to the success of the World Wide Web. That being said, the field of advertising has been equally revolutionized by the Internet, World Wide Web, and more recently, by the emergence of the social web, and mobile devices.
This success has arisen largely from the transformation of the advertising industry from a low-tech, human-intensive, \u201cMad Men\u201d way of doing work to highly optimized, quantitative, mathematical, computer- and data-centric processes that enable highly targeted, personalized, performance-based advertising. This chapter provides a clear and detailed overview of the technologies and business models that are transforming the field of online advertising primarily from statistical machine learning and information science perspectives."} {"_id": "e936c90b2ca8f65dcad86ad4946ebb4bfaa8e909", "title": "Design on the low-leakage diode string for using in the power-rail ESD clamp circuits in a 0.35-\u03bcm silicide CMOS process", "text": "A new design of the diode string with very low leakage current is proposed for use in the ESD clamp circuits across the power rails. By adding an NMOS-controlled lateral SCR (NCLSCR) device into the stacked diode string, the leakage current of this new diode string with six stacked diodes at 5 V (3.3 V) forward bias can be reduced to only 2.1 (1.07) nA at a temperature of 125\u00b0C in a 0.35 \u03bcm silicide CMOS process, whereas the previous designs have a leakage current in the order of mA. The total blocking voltage of this new design with NCLSCR can be linearly adjusted by changing the number of the stacked diodes in the diode string without causing latch-up danger across the power rails. From the experimental results, the human-body-model ESD level of the ESD clamp circuit with the proposed low-leakage diode string is greater than 8 kV in a 0.35 \u03bcm silicide CMOS process, using neither ESD implantation nor silicide-blocking process modifications."} {"_id": "0911bcf6bfff20a84a56b9d448bcb3d72a1eb093", "title": "Zero-bias autoencoders and the benefits of co-adapting features", "text": "Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is to both represent the input data and act as a selection mechanism that ensures sparsity of the representation. We then show that negative biases impede the learning of data distributions whose intrinsic dimensionality is high. We also propose a new activation function that decouples the two roles of the hidden layer and that allows us to learn representations on data with very high intrinsic dimensionality, where standard autoencoders typically fail. Since the decoupled activation function acts like an implicit regularizer, the model can be trained by minimizing the reconstruction error of training data, without requiring any additional regularization."} {"_id": "f9b7cb13eee257a67a5a8049f22580152873c0a4", "title": "Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis.", "text": "PURPOSE\nGlaucoma is the leading cause of global irreversible blindness. Present estimates of global glaucoma prevalence are not up-to-date and focused mainly on European ancestry populations.
We systematically examined the global prevalence of primary open-angle glaucoma (POAG) and primary angle-closure glaucoma (PACG), and projected the number of affected people in 2020 and 2040.\n\n\nDESIGN\nSystematic review and meta-analysis.\n\n\nPARTICIPANTS\nData from 50 population-based studies (3770 POAG cases among 140,496 examined individuals and 786 PACG cases among 112,398 examined individuals).\n\n\nMETHODS\nWe searched PubMed, Medline, and Web of Science for population-based studies of glaucoma prevalence published up to March 25, 2013. A hierarchical Bayesian approach was used to estimate the pooled glaucoma prevalence of the population aged 40-80 years along with 95% credible intervals (CrIs). Projections of glaucoma were estimated based on the United Nations World Population Prospects. Bayesian meta-regression models were performed to assess the association between the prevalence of POAG and the relevant factors.\n\n\nMAIN OUTCOME MEASURES\nPrevalence and projection numbers of glaucoma cases.\n\n\nRESULTS\nThe global prevalence of glaucoma for the population aged 40-80 years is 3.54% (95% CrI, 2.09-5.82). The prevalence of POAG is highest in Africa (4.20%; 95% CrI, 2.08-7.35), and the prevalence of PACG is highest in Asia (1.09%; 95% CrI, 0.43-2.32). In 2013, the number of people (aged 40-80 years) with glaucoma worldwide was estimated to be 64.3 million, increasing to 76.0 million in 2020 and 111.8 million in 2040. In the Bayesian meta-regression model, men were more likely to have POAG than women (odds ratio [OR], 1.36; 95% CrI, 1.23-1.52), and after adjusting for age, gender, habitation type, response rate, and year of study, people of African ancestry were more likely to have POAG than people of European ancestry (OR, 2.80; 95% CrI, 1.83-4.06), and people living in urban areas were more likely to have POAG than those in rural areas (OR, 1.58; 95% CrI, 1.19-2.04).\n\n\nCONCLUSIONS\nThe number of people with glaucoma worldwide will increase to 111.8 million in 2040, disproportionally affecting people residing in Asia and Africa. These estimates are important in guiding the designs of glaucoma screening, treatment, and related public health strategies."} {"_id": "211b00d62a9299880fefbbb6251254cb685b7810", "title": "Fuzzy Logic-Based Map Matching Algorithm for Vehicle Navigation System in Urban Canyons", "text": "With the rapid progress in the development of wireless technology, Global Positioning System (GPS) based vehicle navigation systems are being widely deployed in automobiles to serve the location-based needs of users and for efficient traffic management. An essential process in vehicle navigation is to map match the position obtained from GPS (or/and other sensors) on a road network map. This process of map matching in turn helps in mitigating errors from the navigation solution. GPS-based vehicle navigation systems have difficulties in tracking vehicles in urban canyons due to poor satellite availability. High Sensitivity GPS (HS GPS) receivers can alleviate this problem by acquiring and tracking weak signals (and increasing the availability), but at the cost of high measurement noise and errors due to multipath and cross correlation. Position and velocity results in such conditions are typically biased and have unknown distributions. Thus filtering and other statistical methods are difficult to implement. Soft computing has replaced classical computing on many fronts where uncertainties are difficult to model.
Fuzzy logic, based on fuzzy reasoning concepts, is one of the most widely used soft computational methods. In many circumstances, it can take noisy, imprecise input to yield crisp (i.e. numerically accurate) output. Fuzzy logic can be applied effectively to map match the output from an HS GPS receiver in urban canyons because of its inherent tolerance to imprecise inputs. This paper describes a map matching algorithm based on fuzzy logic. The input of the system comes from a SiRF HS XTrac GPS receiver and a low-cost gyro (Murata ENV-05G). The results show an improvement in tracking the vehicle in urban canyon conditions."} {"_id": "87182a9e818ec5cc093c28cbba0ff32fc6bf40b0", "title": "Automatic design of low power CMOS buffer-chain circuit using differential evolutionary algorithm and particle swarm optimization", "text": "PSO and DE algorithms and their variants are used for the optimization of a buffer-chain circuit, and the results of all the algorithms are compared in this work. By testing these algorithms on different mathematical benchmark functions, the best parameter values of the buffer-chain circuit are obtained in such a way that the error between the simulated output and the optimized output is reduced, hence giving the best circuit performance. Evolutionary algorithms are better in performance and speed than the classical methods. 130 nm CMOS technology has been used in this work. With the help of these parameter values the circuit simulator gives the values of power consumption, symmetry, rise time and fall time, which are close to the desired specification of the buffer-chain circuit."} {"_id": "0c37d48d2bb1bd6c63a9b5c57da37609d03900b0", "title": "Memory Interference as a Determinant of Language Comprehension", "text": "The parameters of the human memory system constrain the operation of language comprehension processes. In the memory literature, both decay and interference have been proposed as causes of forgetting; however, while there is a long history of research establishing the nature of interference effects in memory, the effects of decay are much more poorly supported. Nevertheless, research investigating the limitations of the human sentence processing mechanism typically focuses on decay-based explanations, emphasizing the role of capacity, while the role of interference has received comparatively little attention. This paper reviews both accounts of difficulty in language comprehension by drawing direct connections to research in the memory domain. Capacity-based accounts are found to be untenable, diverging substantially from what is known about the operation of the human memory system. In contrast, recent research investigating comprehension difficulty using a retrieval-interference paradigm is shown to be wholly consistent with both behavioral and neuropsychological memory phenomena. The implications of adopting a retrieval-interference approach to investigating individual variation in language comprehension are discussed."} {"_id": "47918c5bece63d7a6cd9574c37c2b17114a5f87e", "title": "Sentiment Classification through Semantic Orientation Using SentiWordNet", "text": "Sentiment analysis is the procedure by which information is extracted from the opinions, appraisals and emotions of people with regard to entities, events and their attributes. In decision making, the opinions of others have a significant effect on customers\u2019 ease in making choices regarding online shopping, choosing events, products, and entities.
In this paper, a rule-based, domain-independent sentiment analysis method is proposed. The proposed method classifies subjective and objective sentences from reviews and blog comments. The semantic score of subjective sentences is extracted from SentiWordNet to calculate their polarity as positive, negative or neutral based on the contextual sentence structure. The results show the effectiveness of the proposed method, and it outperforms the machine learning methods. The proposed method achieves an accuracy of 87% at the feedback level and 83% at the sentence level for comments. [Aurangzeb Khan, Muhammad Zubair Asghar, Shakeel Ahmad, Fazal Masud Kundi, Maria Qasim, Furqan. Sentiment Classification through Semantic Orientation Using SentiWordNet. Life Sci J 2014; 11(10):309-315] (ISSN: 1097-8135). http://www.lifesciencesite.com. 44"} {"_id": "591565e123518cc04748649d0b028d5326eafc5f", "title": "Radar Signal Processing for Jointly Estimating Tracks and Micro-Doppler Signatures", "text": "The aim of radar systems is to collect information about their surroundings. In many scenarios, besides static targets, there are numerous moving objects with very different characteristics, such as extent, movement behavior or micro-Doppler spread. It would be most desirable to have algorithms that extract all information on static and moving objects automatically, without a system operator. In this paper, we present measurements conducted with a commercially available high-resolution multi-channel linear frequency-modulated continuous-wave radar and algorithms that do not only produce radar images but a description of the scenario on a higher level. After conventional spectrum estimation and thresholding, we present a clustering stage that combines individual detections and generates representations of each target individually. This stage is followed by a Kalman-filter-based multi-target tracking block. The tracker allows us to follow each target and collect its properties over time. With this method of jointly estimating tracks and characteristics of each individual target in a scenario, inputs for classifiers can be generated, which, in turn, will be able to generate information that could be used for driver assistance or alarm trigger systems."} {"_id": "803b70098fe8e6c84bb266e8f2abb23d785cc46b", "title": "The medial patellofemoral ligament: location of femoral attachment and length change patterns resulting from anatomic and nonanatomic attachments.", "text": "BACKGROUND\nIncompetence of the medial patellofemoral ligament (MPFL) is an integral factor in patellofemoral instability. Reconstruction of this structure is gaining increasing popularity. However, the natural behavior of the ligament is still not fully understood, and crucially, the correct landmark for femoral attachment of the MPFL at surgery is poorly defined.\n\n\nPURPOSE\nTo determine the length change pattern of the native MPFL, investigate the effect of nonanatomic femoral and differing patellar attachment sites on length changes, and recommend a reproducible femoral attachment site for undertaking anatomic MPFL reconstruction.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nEight cadaveric knees were dissected of skin and subcutaneous fat and mounted in a kinematics rig with the quadriceps tensioned. The MPFL length change patterns were measured for combinations of patellar and femoral attachments using a suture and displacement transducer.
Three attachments were along the superomedial border of the patella, and five femoral attachments were at the MPFL center and 5 mm proximal, distal, anterior, and posterior to this point. Reproducibility of attachment sites was validated radiographically.\n\n\nRESULTS\nThe femoral attachment point, taking the anterior-posterior medial femoral condyle diameter to be 100%, was identified 40% from the posterior, 50% from the distal, and 60% from the anterior border of the medial femoral condyle. This point was most isometric, with a mean maximal length change to the central patellar attachment of 2.1 mm from 0\u00b0 to 110\u00b0 of knee flexion. The proximal femoral attachment resulted in up to 6.4 mm mean lengthening and the distal attachment up to 9.1 mm mean shortening through 0\u00b0 to 110\u00b0 of knee flexion, resulting in a significantly nonisometric graft (P < .05).\n\n\nCONCLUSION\nWe report the anatomic femoral and patellar MPFL graft attachments, with confirmation of the reproducibility of their location and resulting kinematic behavior. Nonanatomic attachments caused significant loss of isometry.\n\n\nCLINICAL RELEVANCE\nThe importance of an anatomically positioned MPFL reconstruction is highlighted, and an identifiable radiographic point for femoral tunnel position is suggested for use intraoperatively."} {"_id": "87adf3d8f3f935dc982e91951dd5730fb1559d90", "title": "The Spatial Orienting paradigm: How to design and interpret spatial attention experiments", "text": "This paper is conceived as a guide that will describe the well-known Spatial Orienting paradigm, used to explore attentional processes in healthy individuals as well as in people suffering from psychiatric disorders and brain-damaged patients. The paradigm was developed in the late 1970s, and since then, it has been used in thousands of attentional studies. In this review, we attempt to describe the paradigm for the na\u00efve reader, and explain in detail when it is used, which variables are usually manipulated, how to interpret its results, and how it can be adapted to different populations and methodologies. The main goal of this review is to provide a practical guide to researchers who have never used the paradigm that will help them design their experiments, as a function of their theoretical and experimental needs. We also focus on how to adapt the paradigm to different technologies (such as event-related potentials, functional magnetic resonance imaging, or transcranial magnetic stimulation), and to different populations by presenting an example of its use in brain-damaged patients."} {"_id": "a494191eed61da992579d67412d504f5b3104816", "title": "A gesture-free geometric approach for mid-air expression of design intent in 3D virtual pottery", "text": "The advent of depth cameras has enabled mid-air interactions for shape modeling with bare hands. Typically, these interactions employ a finite set of pre-defined hand gestures to allow users to specify modeling operations in virtual space. However, human interactions in real-world shaping processes (such as pottery or sculpting) are complex, iterative, and continuous. In this paper, we show that the expression of user intent in shaping processes can be derived from the geometry of contact between the hand and the manipulated object. Specifically, we describe the design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery.
We model the shaping of a pot as a gradual and progressive convergence of the pot\u2019s profile to the shape of the user\u2019s hand represented as a point-cloud (PCL). Thus, a user does not need to learn, know, or remember any gestures to interact with our system. Our choice of pottery simplifies the geometric representation, allowing us to systematically study how users use their hands and fingers to express the intent of deformation during a shaping process. Our evaluations demonstrate that it is possible to enable users to express their intent for shape deformation without the need for a fixed set of gestures for clutching and deforming a shape."} {"_id": "23ec90b7f51ebb1a290a760ed9b9ebfe8ba68eb6", "title": "A comparative study of differential evolution variants for global optimization", "text": "In this paper, we present an empirical comparison of some Differential Evolution variants to solve global optimization problems. The aim is to identify which one of them is more suitable to solve an optimization problem, depending on the problem's features and also to identify the variant with the best performance, regardless of the features of the problem to be solved. Eight variants were implemented and tested on 13 benchmark problems taken from the specialized literature. These variants vary in the type of recombination operator used and also in the way in which the mutation is computed. A set of statistical tests were performed in order to obtain more confidence on the validity of the results and to reinforce our discussion. The main aim is that this study can help both researchers and practitioners interested in using differential evolution as a global optimizer, since we expect that our conclusions can provide some insights regarding the advantages or limitations of each of the variants studied."} {"_id": "8f0c8df7a81d53d7227c2eea6552c9cd4b1a846d", "title": "An AES Smart Card Implementation Resistant to Power Analysis Attacks", "text": "In this article we describe an efficient AES software implementation that is well suited for 8-bit smart cards and resistant against power analysis attacks. Our implementation masks the intermediate results and randomizes the sequence of operations at the beginning and the end of the AES execution. Because of the masking, it is secure against simple power analysis attacks, template attacks and first-order DPA attacks. Due to the combination of masking and randomization, it is resistant against higher-order DPA attacks. Resistant means that a large number of measurements is required for a successful attack. This expected number of measurements is tunable. The designer can choose the amount of randomization and thereby increase the number of measurements. This article also includes a practical evaluation of the countermeasures. The results prove the theoretical assessment of the countermeasures to be correct."} {"_id": "27f9b805de1f125273a88786d2383621e60c6094", "title": "Approximating Kinematics for Tracked Mobile Robots", "text": "In this paper we propose a kinematic approach for tracked mobile robots in order to improve motion control and pose estimation. Complex dynamics due to slippage and track\u2013soil interactions make it difficult to predict the exact motion of the vehicle on the basis of track velocities. Nevertheless, real-time computations for autonomous navigation require an effective kinematics approximation without introducing dynamics in the loop. 
The proposed solution is based on the fact that the instantaneous centers of rotation (ICRs) of treads on the motion plane with respect to the vehicle are dynamics-dependent, but they lie within a bounded area. Thus, optimizing constant ICR positions for a particular terrain results in an approximate kinematic model for tracked mobile robots. Two different approaches are presented for off-line estimation of kinematic parameters: (i) simulation of the stationary response of the dynamic model for the whole velocity range of the vehicle; (ii) introduction of an experimental setup so that a genetic algorithm can produce the model from actual sensor readings. These methods have been evaluated for on-line odometric computations and low-level motion control with the Auriga\u03b1 mobile robot on a hard-surface flat soil at moderate speeds. KEY WORDS\u2014tracked vehicles, kinematic control, mobile robotics, parameter identification, dynamics simulation"} {"_id": "656dce6c2b7315518cf0aea5d5b2500f869ab223", "title": "Practical Stabilization of a Skid-steering Mobile Robot - A Kinematic-based Approach", "text": "This paper presents the kinematic control problem of a skid-steering mobile robot using a practical smooth and time-varying stabilizer. The stability result is proved using Lyapunov analysis and takes into account both input signal saturation and uncertainty of kinematics. In order to ensure stable motion of the robot, a condition on permissible velocities is formulated according to the dynamic model and wheel-surface interaction. Theoretical considerations are illustrated by simulation results."} {"_id": "760c81c0a19f8358dde38691b81fe2f20c829b44", "title": "Experimental kinematics for wheeled skid-steer mobile robots", "text": "This work aims at improving real-time motion control and dead-reckoning of wheeled skid-steer vehicles by considering the effects of slippage, but without introducing the complexity of dynamics computations in the loop. This traction scheme is found both in many off-the-shelf mobile robots due to its mechanical simplicity and in outdoor applications due to its maneuverability. In previous works, we reported a method to experimentally obtain an optimized kinematic model for skid-steer tracked vehicles based on the boundedness of the instantaneous centers of rotation (ICRs) of treads on the motion plane. This paper provides further insight on this method, which is now proposed for wheeled skid-steer vehicles. It has been successfully applied to a popular research robotic platform, Pioneer P3-AT, with different kinds of tires and terrain types."} {"_id": "7df9117c67587f516fd4b13bd9df88aff9ab79b3", "title": "Adaptive Trajectory Tracking Control of Skid-Steered Mobile Robots", "text": "Skid-steered mobile robots have been widely used for terrain exploration and navigation. In this paper, we present an adaptive trajectory control design for a skid-steered wheeled mobile robot. Kinematic and dynamic modeling of the robot is first presented. A pseudo-static friction model is used to capture the interaction between the wheels and the ground. An adaptive control algorithm is designed to simultaneously estimate the wheel/ground contact friction information and control the mobile robot to follow a desired trajectory. A Lyapunov-based convergence analysis of the controller and the estimation of the friction model parameter are presented.
Simulation and preliminary experimental results based on a four-wheel robot prototype demonstrate the effectiveness and efficiency of the proposed modeling and control scheme."} {"_id": "9d9fc59f80f41915793e7f59895c02dfa6c1e5a9", "title": "Trajectory Tracking Control of a Four-Wheel Differentially Driven Mobile Robot", "text": "We consider the trajectory tracking control problem for a 4-wheel differentially driven mobile robot moving on an outdoor terrain. A dynamic model is presented accounting for the effects of wheel skidding. A model-based nonlinear controller is designed, following the dynamic feedback linearization paradigm. An operational nonholonomic constraint is added at this stage, so as to obtain a predictable behavior for the instantaneous center of rotation, thus preventing excessive skidding. The controller is then robustified, using conventional linear techniques, against uncertainty in the soil parameters at the ground-wheel contact. Simulation results show the good performance in tracking spline-type trajectories on a virtual terrain with varying characteristics."} {"_id": "872331a88e2aa40b9a1b64c1a9c2ea6577c3fe44", "title": "Impact of probing procedure on flip chip reliability", "text": "Probe-after-bump is the primary probing procedure for flip chip technology, since it does not directly contact the bump pad, and involves a preferred under bump metallurgy (UBM) step coverage on the bump pads. However, the probe-after-bump procedure suffers from low throughputs and high cost. It also delays the yield feedback to the fab, and makes it difficult to clarify accountability for a low-yield bumped wafer between the fab and the bumping house. The probe-before-bump procedure can solve these problems, but the probing tips may over-probe or penetrate the bump pads, leading to poor UBM step coverage, due to inadequate probing conditions or poor probing cards. This work examines the impact of probing procedure on flip chip reliability, using printing and electroplating bumpings on aluminum and copper pads. Bump height, bump shear strength, die shear force, UBM step coverage, and reliability testing are used to determine the influence of probing procedure on flip chip reliability. The experimental results reveal that bump quality and reliability test in the probe-before-bump procedure, under adequate probing conditions, differ slightly from the corresponding items in the probe-after-bump procedure. UBM gives superior step coverage of probe marks in both probe-before-bump and probe-after-bump procedures, implying that UBM achieves greater adhesion and barrier function between the solder bump and the bump pad. Both printing and electroplating bump processes slightly influence all evaluated items. The heights of probe marks on the copper pads are 40\u201360% lower than those on the aluminum pads, indicating that the copper pad enhances UBM step coverage. This finding reveals that adequate probing conditions of the probe-before-bump procedure are suited to sort flip chip wafers and do not significantly affect bump height, bump shear strength, die shear force, or flip chip reliability. \u00a9 2002 Elsevier Science Ltd. All rights reserved."} {"_id": "94453d8f060a5968d30f9d5b68f3ee557cb69078", "title": "Anomaly Detection with Attribute Conflict Identification in Bank Customer Data", "text": "In commercial banks, data centers often integrate different data sources, which represent complex and independent business systems.
Due to the inherent data variability and measurement or execution errors, there may exist some abnormal customer records (data). Existing automatic abnormal-customer detection methods are based on outlier detection, which focuses on the differences between customers and ignores other possible abnormal customers caused by conflicting inner features within each customer record. In this paper, we designed a method to identify abnormal customer information whose inner attributes conflict (conflict detection). We integrate the outlier detection and conflict identification techniques together as the final abnormality detection. This can provide complete and accurate customer data support for a commercial bank's decision making. Finally, we have performed experiments on a dataset from a Chinese commercial bank to demonstrate the effectiveness of our method."} {"_id": "ae0f81e746cb41562970711c94b53f9a53bb466d", "title": "Subcutaneous penile vein thrombosis (Penile Mondor's Disease): pathogenesis, diagnosis, and therapy.", "text": "OBJECTIVES\nIn international studies, only a few data are available on subcutaneous penile vein thrombosis. The pathogenesis is unknown, and no general recommendation exists regarding therapy.\n\n\nMETHODS\nA total of 25 patients with the clinical picture of a \"superficial penile vein thrombosis\" were treated at our policlinic. All patients had noted sudden and almost painless indurations on the penile dorsal surface. The extent of the thrombosis varied. Detailed anamnesis, ultrasonography, and routine laboratory tests were performed for all patients, knowing that primary therapy was conservative.\n\n\nRESULTS\nNo patient indicated any pain. Some reported a feeling of tension in the area of the thrombosis. In all patients, the thrombosis occurred in the dorsal penis shaft. It was close to the sulcus coronarius in 21 patients, near the penis root in 3, and in the entire penis shaft in 1 patient. The length of the thrombotic vein was between 2 and 4 cm. The ultrasound results were similar for all patients. The primary treatment was conservative for all patients. Recovery was achieved in more than 92% of cases (23 of 25 patients) using conservative therapy, which consisted of local dressing with heparin ointment (10,000 IU) and oral application of an antiphlogistic for 14 days. In 2 cases, thrombectomy was necessary.\n\n\nCONCLUSIONS\nExtended imaging diagnosis does not improve the evaluation of the extent of a superficial penile vein thrombosis. Conservative primary therapy consisting of heparin ointment and oral application of antiphlogistics is sufficient. If the thrombosis persists after conservative therapy, surgery is indicated."} {"_id": "bac1b676aa3a97218afcfc81ef6d4e0015251150", "title": "Control of Inertial Stabilization Systems Using Robust Inverse Dynamics Control and Adaptive Control", "text": "This paper presents an advanced controller design for an inertial stabilization system. The system has a 2-DOF gimbal which will be attached to an aviation vehicle. Due to dynamics modeling errors, and friction and disturbances from the outside environment, the tracking accuracy of an airborne gimbal may severely degrade. Thus, an advanced controller is needed. Robust inverse dynamics control and adaptive control are used in the inner loop or gimbal servo-system to control the gimbal motion. Indirect line-of-sight (LOS) stabilization is handled by the outer loop controller.
A stabilizer is mounted on the base of the system to measure base rate and orientation of the gimbal in reference to the fixed reference frame. It can withstand high angular slew rates. The experimental results illustrate that the proposed controllers can reject disturbances and limit the impact of LOS disturbances on tracking performance."} {"_id": "25a3a354955f1a1782c9c817edb93b7303672291", "title": "How well developed are altmetrics? A cross-disciplinary analysis of the presence of \u2018alternative metrics\u2019 in scientific publications", "text": "In this paper an analysis of the presence and possibilities of altmetrics for bibliometric and performance analysis is carried out. Using the web-based tool Impact Story, we collected metrics for 20,000 random publications from the Web of Science. We studied both the presence and distribution of altmetrics in the set of publications, across fields, document types and over publication years, as well as the extent to which altmetrics correlate with citation indicators. The main result of the study is that the altmetrics source that provides the most metrics is Mendeley, with metrics on readerships for 62.6\u00a0% of all the publications studied; other sources provide only marginal information. In terms of relation with citations, a moderate Spearman correlation (r\u00a0=\u00a00.49) has been found between Mendeley readership counts and citation indicators. Other possibilities and limitations of these indicators are discussed and future research lines are outlined."} {"_id": "30cd39388b5c1aae7d8153c0ab9d54b61b474ffe", "title": "Deep Recurrent Regression for Facial Landmark Detection", "text": "We propose a novel end-to-end deep architecture for face landmark detection, based on a deep convolutional and deconvolutional network followed by carefully designed recurrent network structures. The pipeline of this architecture consists of three parts. Through the first part, we encode an input face image to resolution-preserved deconvolutional feature maps via a deep network with stacked convolutional and deconvolutional layers. Then, in the second part, we estimate the initial coordinates of the facial key points by an additional convolutional layer on top of these deconvolutional feature maps. In the last part, by using the deconvolutional feature maps and the initial facial key points as input, we refine the coordinates of the facial key points by a recurrent network that consists of multiple long short-term memory components. Extensive evaluations on several benchmark data sets show that the proposed deep architecture has superior performance compared with the state-of-the-art methods."} {"_id": "95f05fa558ae1a81e54c555b234dd54fcea98830", "title": "Adaptive hypertext navigation based on user goals and context", "text": "Hypertext systems allow flexible access to topics of information, but this flexibility has disadvantages. Users often become lost or overwhelmed by choices. An adaptive hypertext system can overcome these disadvantages by recommending information to users based on their specific information needs and preferences. Simple associative matrices provide an effective way of capturing these user preferences. Because the matrices are easily updated, they support the kind of dynamic learning required in an adaptive system. HYPERFLEX, a prototype of an adaptive hypertext system that learns, is described.
Informal studies with HYPERFLEX clarify the circumstances under which adaptive systems are likely to be useful, and suggest that HYPERFLEX can reduce time spent searching for information by up to 40%. Moreover, these benefits can be obtained with relatively little effort on the part of hypertext authors or users. The simple models underlying HYPERFLEX's performance may offer a general and useful alternative to more sophisticated modelling techniques. Conditions under which these models, and similar adaptation techniques, might be most useful are discussed."} {"_id": "a501de74e9341b326845218ba0891053f3331e25", "title": "Unfriending on Facebook: Friend Request and Online/Offline Behavior Analysis", "text": "Objectives: Determine the role of the friend request in unfriending decisions. Find factors in unfriending decisions and find differences in the perception of online and offline behaviors that vary depending on the unfriending decision. Method: Survey research conducted online. 690 surveys about unfriending were analyzed using exploratory statistical techniques. Results: The research results show that the initiator of the friend request has more than the expected share of unfriends compared to those who receive the friend request. There are online and offline factors for unfriending decisions; the research identified six constructs to evaluate unfriending decisions. There are four components for online behaviors (unimportant/frequent posts, polarizing posts, inappropriate posts and everyday life posts) and two offline components (disliked behavior and changes in the relationship). Survey respondents who said they unfriend for online reasons were more likely to agree that the person posted too frequently about unimportant topics, polarizing topics, and inappropriate topics compared to those who unfriended for offline reasons."} {"_id": "0b61a17906637ece5a9c5e7e3e6de93378209706", "title": "Semantics of context-free languages", "text": "\u201cMeaning\u201d may be assigned to a string in a context-free language by defining \u201cattributes\u201d of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are \u201csynthesized\u201d, i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are \u201cinherited\u201d, i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature."} {"_id": "a5a4db6b066d58c1281d1e83be821216d22696f3", "title": "TweetCred: A Real-time Web-based System for Assessing Credibility of Content on Twitter", "text": "During large-scale events, a large volume of content is posted on Twitter, but not all of this content is trustworthy. The presence of spam, advertisements, rumors and fake images reduces the value of information collected from Twitter, especially during sudden-onset crisis events where information from other sources is scarce.
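As an illustrative aside to the attribute-grammar record above: Knuth's classic binary-numeral example can be evaluated with one synthesized attribute (the value) and one inherited attribute (the bit position). A minimal sketch, with a recursive function standing in for the derivation-tree walk:

```python
# Sketch of synthesized vs. inherited attributes for binary numerals.
def value(bits: str, scale: int = 0) -> int:
    """The numeral's value is synthesized bottom-up; the scale (bit
    position) is inherited top-down from the ancestors."""
    if len(bits) == 1:                    # production B -> 0 | 1
        return int(bits) * 2 ** scale     # synthesized from inherited scale
    # production L -> L B: the left sublist inherits scale + 1
    return value(bits[:-1], scale + 1) + value(bits[-1], scale)

assert value("1101") == 13
```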
In this research work, we describe various facets of assessing the credibility of user-generated content on Twitter during large-scale events, and develop a novel real-time system to assess the credibility of tweets. Firstly, we develop a semi-supervised ranking model using SVM-rank for assessing credibility, based on training data obtained from six high-impact crisis events of 2013. An extensive set of forty-five features is used to determine the credibility score for each of the tweets. Secondly, we develop and deploy a system\u2013TweetCred\u2013in the form of a browser extension, a web application and an API at the link: http://twitdigest.iiitd.edu.in/TweetCred/. To the best of our knowledge, this is the first research work to develop a practical system for credibility on Twitter and evaluate it with real users. TweetCred was installed and used by 717 Twitter users within a span of three weeks. During this period, a credibility score was computed for more than 1.1 million unique tweets. Thirdly, we evaluated the real-time performance of TweetCred, observing that 84% of the credibility scores were displayed within 6 seconds. We report on the positive feedback that we received from the system\u2019s users and the insights we gained into improving the system for future iterations."} {"_id": "11fa695a4e6b0f20a396edc4010d4990c8d29fe9", "title": "Monte-Carlo Tree Search: A New Framework for Game AI", "text": "Classic approaches to game AI require either a high quality of domain knowledge, or a long time to generate effective AI behaviour. These two characteristics hamper the goal of establishing challenging game AI. In this paper, we put forward Monte-Carlo Tree Search as a novel, unified framework for game AI. In the framework, randomized explorations of the search space are used to predict the most promising game actions. We will demonstrate that Monte-Carlo Tree Search can be applied effectively to (1) classic board-games, (2) modern board-games, and (3) video games."} {"_id": "62cb571ad1d67b3b46409f0c3830558b076b378c", "title": "Low-Cost Wideband and High-Gain Slotted Cavity Antenna Using High-Order Modes for Millimeter-Wave Application", "text": "A novel single-fed low-cost wideband and high-gain slotted cavity antenna based on substrate integrated waveguide (SIW) technology using high-order cavity modes is demonstrated in this paper. High-order resonant modes (TE130, TE310, TE330) inside the cavity are simply excited by a coaxial probe which is located at the center of the antenna. Energy is coupled out of the cavity by a 3 \u00d7 3 slot array etched on the top surface of the cavity. Two antennas with different polarizations are designed and tested. Measured results show that the linearly polarized prototype achieves an impedance bandwidth (|S11| < -10 dB) of > 26% (28 to 36.6 GHz), and a 1-dB gain bandwidth of 14.1% (30.3 to 34.9 GHz). In addition, a measured maximum gain of 13.8 dBi and radiation efficiency of 92% are obtained. To generate circularly polarized radiation, a rotated dipole array is placed in front of the proposed linearly polarized antenna. Measured results show that the circularly polarized antenna exhibits a common bandwidth (10-dB return loss bandwidth, 3-dB axial ratio bandwidth, and 1-dB gain bandwidth) of 11%."} {"_id": "dae92eb758f00f3800c53ce4dbb74da599ae9949", "title": "Medical Concept Embeddings via Labeled Background Corpora", "text": "In recent years, we have seen an increasing amount of interest in low-dimensional vector representations of words.
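As a brief aside to the word-embedding record above: the similarity computation such vectors enable is typically a cosine score. A minimal numpy sketch with made-up embedding values:

```python
# Cosine similarity between two toy concept vectors (values invented).
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

aspirin = np.array([0.8, 0.1, 0.3])
ibuprofen = np.array([0.7, 0.2, 0.4])
print(cosine(aspirin, ibuprofen))  # close to 1.0 for related concepts
```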
Among other things, these facilitate computing word similarity and relatedness scores. The best-known algorithms for producing representations of this sort are the word2vec approaches. In this paper, we investigate a new model to induce such vector spaces for medical concepts, based on a joint objective that exploits not only word co-occurrences but also manually labeled documents, as available from sources such as PubMed. Our extensive experimental analysis shows that our embeddings lead to significantly higher correlations with human similarity and relatedness assessments than previous work. Due to the simplicity and versatility of vector representations, these findings suggest that our resource can easily be used as a drop-in replacement to improve any systems relying on medical concept similarity measures."} {"_id": "fa1dc30a59e39cf48025ae039b0d56497dd44224", "title": "Exploring Ways To Mitigate Sensor-Based Smartphone Fingerprinting", "text": "Modern smartphones contain motion sensors, such as accelerometers and gyroscopes. These sensors have many useful applications; however, they can also be used to uniquely identify a phone by measuring anomalies in the signals, which result from manufacturing imperfections. Such measurements can be conducted surreptitiously in the browser and can be used to track users across applications, websites, and visits. We analyze techniques to mitigate such device fingerprinting either by calibrating the sensors to eliminate the signal anomalies, or by adding noise that obfuscates the anomalies. To do this, we first develop a highly accurate fingerprinting mechanism that combines multiple motion sensors and makes use of (inaudible) audio stimulation to improve detection. We then collect measurements from a large collection of smartphones and evaluate the impact of calibration and obfuscation techniques on the classifier accuracy."} {"_id": "fc24d32ab6acd5f1d2c478a6d0597c85afb28feb", "title": "Yelp Dataset Challenge: Review Rating Prediction", "text": "Review websites, such as TripAdvisor and Yelp, allow users to post online reviews for various businesses, products and services, and have been recently shown to have a significant influence on consumer shopping behaviour. An online review typically consists of free-form text and a star rating out of 5. The problem of predicting a user\u2019s star rating for a product, given the user\u2019s text review for that product, is called Review Rating Prediction and has lately become a popular, albeit hard, problem in machine learning. In this paper, we treat Review Rating Prediction as a multi-class classification problem, and build sixteen different prediction models by combining four feature extraction methods, (i) unigrams, (ii) bigrams, (iii) trigrams and (iv) Latent Semantic Indexing, with four machine learning algorithms, (i) logistic regression, (ii) Na\u00efve Bayes classification, (iii) perceptrons, and (iv) linear Support Vector Classification. We analyse the performance of each of these sixteen models to come up with the best model for predicting the ratings from reviews. We use the dataset provided by Yelp for training and testing the models."} {"_id": "b4173ba51b02f530e196560c3d17bbd0c4ed131d", "title": "Prospects of encoding Java source code in XML", "text": "Currently, the only standard format for representing Java source code is plain text-based. This paper explores the prospects of using Extensible Markup Language (XML) for this purpose.
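A hedged sketch of the simplest of the sixteen model combinations from the Yelp record above, unigram features with logistic regression; the three toy reviews merely stand in for the real training data:

```python
# Unigram counts + logistic regression for star-rating prediction.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great food and service", "terrible, never again", "okay but slow"]
stars = [5, 1, 3]

model = make_pipeline(CountVectorizer(ngram_range=(1, 1)),  # unigrams only
                      LogisticRegression(max_iter=1000))
model.fit(reviews, stars)
print(model.predict(["great service"]))  # likely [5] on this toy data
```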
XML enables leveraging tools and standards more powerful than those available for plain-text formats, while retaining broad accessibility. The paper outlines the potential benefits of future XML grammars that would allow for improved code structure and querying possibilities; code extensions, construction, and formatting; and referencing parts of code. It also introduces the concept of grammar levels and argues for the inclusion of several grammar levels into a common framework. It discusses conversions between grammars and some practical grammar design issues. Keywords\u2014XML, Java, source code, parsing, code formatting"} {"_id": "12bab74396ae0b76ac91f8c022866e9785aa48a3", "title": "Semantic integration of UML class diagram with semantic validation on segments of mappings", "text": "Recently, attention has focused on software development by geographically distant teams engaged in collaborative work. Management, description, and modeling in such a collaborative approach rely on several tools and techniques based on UML models, and are now supported by a large number of tools. Most of these systems can compare different UML models, assist developers and designers, and provide merging and integration operations to produce a coherent model. The contribution of this article is twofold: to integrate a set of UML class diagrams using mappings that result from alignment, and to assist designers and developers in the integration. In addition, we present a detailed integration of UML models with validation of the mappings between them. Such validation helps to achieve a correct, consistent, and coherent integrated model."} {"_id": "096cc955e19446c3445e331c62d62897833d3e46", "title": "On partial least squares in head pose estimation: How to simultaneously deal with misalignment", "text": "Head pose estimation is a critical problem in many computer vision applications. These include human computer interaction, video surveillance, face and expression recognition. In most prior work on head pose estimation, the positions of the faces on which the pose is to be estimated are specified manually. Therefore, the results are reported without studying the effect of misalignment. We propose a method based on partial least squares (PLS) regression to estimate pose and solve the alignment problem simultaneously. The contributions of this paper are two-fold: 1) we show that the kernel version of PLS (kPLS) achieves better than state-of-the-art results on the estimation problem and 2) we develop a technique to reduce misalignment based on the learned PLS factors."} {"_id": "3004d634eb462c2d8fc039f99713e9400a58dced", "title": "Reinforcement Learning and Approximate Dynamic Programming for Feedback Control", "text": "Reinforcement learning and approximate dynamic programming for feedback control are a good way to achieve details about operating certain products. Many products that you buy can be obtained using instruction manuals. These user guides are clearly built to give step-by-step information about how you ought to go ahead in operating certain equipment. A handbook is really a user's guide to operating the equipment. Should you lose your guide, or if the product does not come with instructions, you can easily obtain one on the net. You can search for the manual of your choice online. Here, it is possible to work with Google to browse through the available user guides and find the main one you'll need.
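As an aside to the XML-encoding record above: one conceivable XML rendering of a Java method might look like the following sketch; the tag names and attributes are invented for illustration, not a proposed standard grammar.

```python
# Emit a hypothetical XML encoding of a tiny Java class.
import xml.etree.ElementTree as ET

cls = ET.Element("class", name="Greeter")
method = ET.SubElement(cls, "method", name="greet", returns="void")
stmt = ET.SubElement(method, "statement")
stmt.text = 'System.out.println("hello");'

print(ET.tostring(cls, encoding="unicode"))
# -> <class name="Greeter"><method name="greet" ...>...</method></class>
```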
On the net, you'll be able to discover the manual that you might want with great ease and simplicity"} {"_id": "9b8ad31d762a64f944a780c13ff2d41b9b4f3ab3", "title": "Toward a Rational Choice Process Theory of Internet Scamming: The Offender's Perspective", "text": "Internet fraud scams are crimes that use the Internet to swindle users. The global costs of these scams are in the billions of US dollars. Existing research suggests that scammers maximize their economic gain. Although this is plausible, since the point of a scam is to fool people into sending money, this explanation alone cannot explain why individuals become Internet scammers. An equally important, albeit unexplored, riddle is what strategies Internet scammers adopt to perform the act. As a first step to address these gaps, we interviewed five Internet scammers in order to develop a rational choice process theory of Internet scammers\u2019 behavior. The initial results suggest that an interplay of socioeconomic and dynamic thinking processes explains why individuals drift into Internet scamming. Once an individual drifts into Internet scamming, a successful scam involves two processes: persuasive strategy and advance fee strategy."} {"_id": "222d8b2803f9cedf0da0b454c061c0bb46384722", "title": "Catfish Binary Particle Swarm Optimization for Feature Selection", "text": "The feature selection process constitutes a commonly encountered problem of global combinatorial optimization. This process reduces the number of features by removing irrelevant, noisy, and redundant data, thus resulting in acceptable classification accuracy. Feature selection is a preprocessing technique with great importance in the fields of data analysis and information retrieval processing, pattern classification, and data mining applications. This paper presents a novel optimization algorithm called catfish binary particle swarm optimization (CatfishBPSO), in which the so-called catfish effect is applied to improve the performance of binary particle swarm optimization (BPSO). This effect results from introducing new particles (\u201ccatfish particles\u201d), initialized at extreme points of the search space, to replace the particles with the worst fitness when the fitness of the global best particle has not improved for a number of consecutive iterations. In this study, the K-nearest neighbor (K-NN) method with leave-one-out cross-validation (LOOCV) was used to evaluate the quality of the solutions. CatfishBPSO was applied to six classification problems taken from the literature. Experimental results show that CatfishBPSO simplifies the feature selection process effectively, and either obtains higher classification accuracy or uses fewer features than other feature selection methods."} {"_id": "511e6318772f3bc4f6f192a96ff0501a1d1955f4", "title": "Powering 3 Dimensional Microrobots: Power Density Limitations", "text": "Many types of electrostatic and electromagnetic microactuators have been developed. It is important to understand the fundamental performance limitations of these actuators for use in micro-robotic systems. The most important consideration for micro mobile robots is the effective power density of the actuator. As inertia and gravitational forces become less significant for small robots, typical metrics for macro-robots, such as torque-to-weight ratio, are not appropriate.
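A rough sketch of the catfish effect described in the CatfishBPSO record above, grafted onto a plain binary PSO; the count-the-ones fitness replaces the paper's K-NN/LOOCV evaluation, and all coefficients are illustrative assumptions:

```python
# Binary PSO with "catfish" re-initialization on stagnation.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_bits, iters, stall_limit = 20, 30, 100, 3

def fitness(x):                       # toy stand-in for K-NN + LOOCV
    return x.sum(axis=-1)

X = rng.integers(0, 2, (n_particles, n_bits))
V = np.zeros((n_particles, n_bits))
pbest, pbest_fit = X.copy(), fitness(X)
g = pbest[pbest_fit.argmax()].copy()
g_fit, stall = pbest_fit.max(), 0

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = np.clip(0.7 * V + 2 * r1 * (pbest - X) + 2 * r2 * (g - X), -6, 6)
    X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(int)  # sigmoid
    f = fitness(X)
    better = f > pbest_fit
    pbest[better], pbest_fit[better] = X[better], f[better]
    if pbest_fit.max() > g_fit:
        g, g_fit, stall = pbest[pbest_fit.argmax()].copy(), pbest_fit.max(), 0
    else:
        stall += 1
    if stall >= stall_limit:          # catfish effect: replace worst 10%
        worst = pbest_fit.argsort()[: max(1, n_particles // 10)]
        X[worst] = rng.integers(0, 2, (len(worst), 1)) * np.ones(n_bits, int)
        V[worst], stall = 0, 0        # all-zero / all-one extreme points

print(g_fit)  # approaches n_bits on this toy objective
```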
A very significant problem with micro-actuators for robotics is the need for efficient transmissions to obtain large forces and torques at low speeds from inherently high-speed, low-force actuators."} {"_id": "d784ed9d4538b731663990131568e3fbc4cb96ee", "title": "FPGA based Time-to-Digital Converter", "text": "A high-resolution time-to-digital converter (TDC) is one of the crucial blocks in high-energy nuclear physics time-of-flight experiments. Contemporary time interval measurement techniques rely on methods like Time-to-Amplitude Conversion, the Vernier method, Delay Locked Loops (DLL), Tapped Delay Lines (TDL), Differential Delay Lines (DDL), etc. Recently, FPGA-based TDC designs with the merits of low cost, fast development cycles and re-programmability have been reported [1]. The TDC implementation in an FPGA has design challenges and issues such as lack of direct control over the propagation delays, unpredictable Place & Route (P&R) delays, and delay variations due to Process, Voltage & Temperature (PVT) variations. In the TDC design presented here, resolution below the minimum gate delay is achieved by employing a differential (Vernier oscillator) technique. Predictable P&R is achieved by manually placing and routing the critical elements and avoiding the use of digital synthesis tools. Further, in order to compensate for the delay variations due to PVT, a calibration methodology is developed and implemented. By implementing the design in a Flash-based FPGA, the low-power design objective is achieved."} {"_id": "8828177ef8c00e893164e0dd396f9c0d78a96d7a", "title": "Looking good: factors affecting the likelihood of having cosmetic surgery", "text": "The present study examined various factors associated with the likelihood of having cosmetic surgery in a community sample of Austrian participants. One hundred and sixty-eight women and 151 men completed a questionnaire measuring how likely they were to consider common cosmetic procedures. The results showed that women were more likely than men to consider most cosmetic procedures. Path analysis revealed that personal experience of having had cosmetic surgery was a significant predictor of future likelihood, while media exposure (viewing advertisements or television programs, or reading articles about cosmetic surgery) mediated the influence of vicarious experience and sex. These results are discussed in relation to previous work examining the factors associated with the likelihood of having cosmetic surgery."} {"_id": "1db2e600b8a386c559b0fe0caf5c472aef95482c", "title": "A Case for NOW (Networks of Workstations) - Abstract", "text": "In this paper, we argue that because of recent technology advances, networks of workstations (NOWs) are poised to become the primary computing infrastructure for science and engineering, from low-end interactive computing to demanding sequential and parallel applications. We identify three opportunities for NOWs that will benefit end users: dramatically improving virtual memory and file system performance by using the aggregate DRAM of a NOW as a giant cache for disk; achieving cheap, highly available, and scalable file storage by using redundant arrays of workstation disks, using the LAN as the I/O backplane; and finally, multiple CPUs for parallel computing. We describe the technical challenges in exploiting these opportunities \u2013 namely, efficient communication hardware and software, global coordination of multiple workstation operating systems, and enterprise-scale network file systems.
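As a worked aside to the FPGA TDC record above, the Vernier principle it relies on reduces to simple arithmetic: two oscillators whose periods differ by the target resolution convert a small time interval into a countable number of cycles. The periods below are illustrative values, not figures from the paper:

```python
# Vernier TDC arithmetic: resolution is the period difference.
T_slow, T_fast = 10.10e-9, 10.00e-9   # start/stop oscillator periods (s)
lsb = T_slow - T_fast                  # effective resolution: 100 ps

def vernier_count(interval: float) -> int:
    """Count fast-oscillator cycles until it catches the slow one."""
    n = 0
    while n * lsb < interval:          # phase difference not yet closed
        n += 1
    return n

print(vernier_count(1.23e-9) * lsb)   # ~1.3e-9 s, quantized to 100 ps
```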
We are currently building a 100-node NOW prototype to demonstrate that practical solutions exist to these technical challenges."} {"_id": "04caa1a55b12d5f3830ed4a31c4b47921a3546f2", "title": "Discriminative Embeddings of Latent Variable Models for Structured Data", "text": "Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced a number of interdisciplinary areas such as computational biology and drug design. Typically, kernels are designed beforehand for a data type, either exploiting statistics of the structures or making use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach has also kept kernel methods from scaling up to millions of data points and from exploiting discriminative information to learn feature representations. We propose structure2vec, an effective and scalable approach for structured data representation based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Interestingly, structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In applications involving millions of data points, we showed that structure2vec runs two times faster and produces models that are 10,000 times smaller, while achieving state-of-the-art predictive performance."} {"_id": "47da821786eea4e6d9d69cc5aa1f248e958ea6e4", "title": "Searching for processes and threads in Microsoft Windows memory dumps", "text": "Current tools to analyze memory dumps of systems running Microsoft Windows usually build on the concept of enumerating lists maintained by the kernel to keep track of processes, threads and other objects. Therefore, they will frequently fail to detect objects that have already terminated or that have been hidden by Direct Kernel Object Manipulation."} {"_id": "54ee58c3623b7fdf8b7e9e29355a24478f574eea", "title": "GUITAR: Piecing Together Android App GUIs from Memory Images", "text": "An Android app's graphical user interface (GUI) displays rich semantic and contextual information about the smartphone's owner and app's execution. Such information provides vital clues to the investigation of crimes in both cyber and physical spaces. In real-world digital forensics, however, once an electronic device becomes evidence, most manual interactions with it are prohibited by criminal investigation protocols. Hence, investigators must resort to \"image-and-analyze\" memory forensics (instead of browsing through the subject phone) to recover the apps' GUIs. Unfortunately, GUI reconstruction is still largely impossible with state-of-the-art memory forensics techniques, which tend to focus only on individual in-memory data structures. An Android GUI, however, displays diverse visual elements, each built from numerous data structure instances. Furthermore, whenever an app is sent to the background, its GUI structure will be explicitly deallocated and disintegrated by the Android framework. In this paper, we present GUITAR, an app-independent technique which automatically reassembles and redraws all apps' GUIs from the multitude of GUI data elements found in a smartphone's memory image.
To do so, GUITAR involves the reconstruction of (1) GUI tree topology, (2) drawing operation mapping, and (3) runtime environment for redrawing. Our evaluation shows that GUITAR is highly accurate (80-95% similar to original screenshots) at reconstructing GUIs from memory images taken from a variety of Android apps on popular phones. Moreover, GUITAR is robust in reconstructing meaningful GUIs even when facing GUI data loss."} {"_id": "7693cafd6f29623f61d66f031cadd60b6ce827d7", "title": "Evaluation: from Precision, Recall and F-measure to ROC, Informedness, Markedness and Correlation", "text": "Commonly used evaluation measures including Recall, Precision, F-Measure and Rand Accuracy are biased and should not be used without clear understanding of the biases, and corresponding identification of chance or base case levels of the statistic. Using these measures, a system that performs worse in the objective sense of Informedness can appear to perform better under any of these commonly used measures. We discuss several concepts and measures that reflect the probability that prediction is informed versus chance (Informedness), and introduce Markedness as a dual measure for the probability that prediction is marked versus chance. Finally, we demonstrate elegant connections between the concepts of Informedness, Markedness, Correlation and Significance as well as their intuitive relationships with Recall and Precision, and outline the extension from the dichotomous case to the general multi-class case."} {"_id": "0704bb7b7918cd512b5e66ea4b4993e50b8ae92f", "title": "The Spectrum Kernel: A String Kernel for SVM Protein Classification", "text": "We introduce a new sequence-similarity kernel, the spectrum kernel, for use with support vector machines (SVMs) in a discriminative approach to the protein classification problem. Our kernel is conceptually simple and efficient to compute and, in experiments on the SCOP database, performs well in comparison with state-of-the-art methods for homology detection. Moreover, our method produces an SVM classifier that allows linear time classification of test sequences. Our experiments provide evidence that string-based kernels, in conjunction with SVMs, could offer a viable and computationally efficient alternative to other methods of protein classification and homology detection."} {"_id": "99a4fc97244b83647af47a177e919ccc21918101", "title": "Have biopesticides come of age?", "text": "Biopesticides based on living microbes and their bioactive compounds have been researched and promoted as replacements for synthetic pesticides for many years. However, lack of efficacy, inconsistent field performance and high cost have generally relegated them to niche products. Recently, technological advances and major changes in the external environment have positively altered the outlook for biopesticides. Significant increases in market penetration have been made, but biopesticides still only make up a small percentage of pest control products.
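As an aside to the evaluation-measures record above: Informedness and Markedness reduce to simple confusion-matrix arithmetic (Informedness = recall + inverse recall - 1, also known as Youden's J; Markedness = precision + inverse precision - 1). A minimal sketch:

```python
# Chance-corrected measures from a 2x2 confusion matrix.
def informedness(tp, fp, fn, tn):
    """Probability the prediction is informed rather than chance."""
    return tp / (tp + fn) + tn / (tn + fp) - 1

def markedness(tp, fp, fn, tn):
    """Probability the prediction marks the condition rather than chance."""
    return tp / (tp + fp) + tn / (tn + fn) - 1

print(informedness(40, 10, 20, 30))  # 0.4167
print(markedness(40, 10, 20, 30))    # 0.4
```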
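Likewise, the k-spectrum kernel from the protein-classification record above is easy to sketch: two sequences are compared through the dot product of their k-mer count vectors. The sequence fragments below are toy values:

```python
# k-spectrum kernel: dot product of k-mer count vectors.
from collections import Counter

def spectrum(seq: str, k: int) -> Counter:
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(a: str, b: str, k: int = 3) -> int:
    sa, sb = spectrum(a, k), spectrum(b, k)
    return sum(sa[m] * sb[m] for m in sa.keys() & sb.keys())

print(spectrum_kernel("MKVLAAGMKV", "MKVLGA"))  # 3 (shared MKV x2, KVL x1)
```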
Progress in the areas of activity spectra, delivery options, persistence of effect and implementation has contributed to the increasing use of biopesticides, but technologies that are truly transformational and result in significant uptake are still lacking."} {"_id": "e915fb1cd8285aad1b9aceebb510c50ce1e131d3", "title": "Association of polymorphisms in the cytochrome P450 CYP2C9 with warfarin dose requirement and risk of bleeding complications", "text": "BACKGROUND\nThe cytochrome P450 CYP2C9 is responsible for the metabolism of S-warfarin. Two known allelic variants, CYP2C9*2 and CYP2C9*3, differ from the wild type CYP2C9*1 by a single amino acid substitution in each case. The allelic variants are associated with impaired hydroxylation of S-warfarin in in-vitro expression systems. We have studied the effect of CYP2C9 polymorphism on the in-vivo warfarin dose requirement.\n\n\nMETHODS\nPatients with a daily warfarin dose requirement of 1.5 mg or less (low-dose group, n=36), randomly selected patients with a wide range of dose requirements from an anticoagulant clinic in north-east England (clinic control group, n=52), and 100 healthy controls from the community in the same region were studied. Genotyping for the CYP2C9*2 and CYP2C9*3 alleles was done by PCR analysis. Case notes were reviewed to assess the difficulties encountered during the induction of warfarin therapy and bleeding complications in the low-dose and clinic control groups.\n\n\nFINDINGS\nThe odds ratio for individuals with a low warfarin dose requirement having one or more CYP2C9 variant alleles compared with the normal population was 6.21 (95% CI 2.48-15.6). Patients in the low-dose group were more likely to have difficulties at the time of induction of warfarin therapy (5.97 [2.26-15.82]) and have increased risk of major bleeding complications (rate ratio 3.68 [1.43-9.50]) when compared with randomly selected clinic controls.\n\n\nINTERPRETATION\nWe have shown that there is a strong association between CYP2C9 variant alleles and low warfarin dose requirement. CYP2C9 genotyping may identify a subgroup of patients who have difficulty at induction of warfarin therapy and are potentially at a higher risk of bleeding complications."} {"_id": "7c18fb7dc07135b75f1301ca88939d81f0d7a4b7", "title": "Ant System", "text": "Ant System, the first Ant Colony Optimization algorithm, proved to be a viable method for attacking hard combinatorial optimization problems. Yet, its performance, when compared to more fine-tuned algorithms, was rather poor for large instances of traditional benchmark problems like the Traveling Salesman Problem. To show that Ant Colony Optimization algorithms could be good alternatives to existing algorithms for hard combinatorial optimization problems, recent research in this area has mainly focused on the development of algorithmic variants which achieve better performance than AS. In this article, we present MAX\u2013MIN Ant System, an Ant Colony Optimization algorithm derived from Ant System. MAX\u2013MIN Ant System differs from Ant System in several important aspects, whose usefulness we demonstrate by means of an experimental study. Additionally, we relate one of the characteristics specific to MAX\u2013MIN Ant System \u2014 that of using a greedier search than Ant System \u2014 to results from the search space analysis of the combinatorial optimization problems attacked in this paper.
Our computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that MAX\u2013MIN Ant System is currently among the best-performing algorithms for these problems."} {"_id": "af8d09f8832f9effc138036666f542132d92d78e", "title": "Don't Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards", "text": "We present a new side-channel attack against soft keyboards that support gesture typing on Android smartphones. An application without any special permissions can observe the number and timing of the screen hardware interrupts and system-wide software interrupts generated during user input, and analyze this information to make inferences about the text being entered by the user. System-wide information is usually considered less sensitive than app-specific information, but we provide concrete evidence that this may be mistaken. Our attack applies to all Android versions, including Android M where the SELinux policy is tightened. We present a novel application of a recurrent neural network as our classifier to infer text. We evaluate our attack against the \u201cGoogle Keyboard\u201d on Nexus 5 phones and use a real-world chat corpus in all our experiments. Our evaluation considers two scenarios. First, we demonstrate that we can correctly detect a set of pre-defined \u201csentences of interest\u201d (with at least 6 words) with 70% recall and 60% precision. Second, we identify the authors of a set of anonymous messages posted on a messaging board. We find that even if the messages contain the same number of words, we correctly re-identify the author more than 97% of the time for a set of up to 35 sentences. Our study demonstrates a new way in which system-wide resources can be a threat to user privacy. We investigate the effect of rate limiting as a countermeasure but find that determining a proper rate is error-prone and fails in subtle cases. We conclude that real-time interrupt information should be made inaccessible, perhaps via a tighter SELinux policy in the next Android version."} {"_id": "1dc5b2114d1ff561fc7d6163d8f4e9c905ca12c4", "title": "Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches.", "text": "It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n \u2265 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n \u2264 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests."} {"_id": "2c5808f0f06538d8d5a5335a0c938e902b7c7413", "title": "An Autonomous Multi-UAV System for Search and Rescue", "text": "This paper proposes and evaluates a modular architecture of an autonomous unmanned aerial vehicle (UAV) system for search and rescue missions.
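As an aside to the MAX\u2013MIN Ant System record above, its distinguishing trail update, depositing pheromone only along the best tour and then clamping trails to [tau_min, tau_max], can be sketched as follows; the matrix size, tour, and parameters are toy values:

```python
# MAX-MIN style pheromone update: best-tour deposit + trail bounds.
import numpy as np

def mmas_update(tau, best_tour, best_len, rho=0.02,
                tau_min=0.01, tau_max=5.0):
    tau *= (1 - rho)                         # evaporation on all arcs
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i, j] += 1.0 / best_len          # deposit only on best tour
    np.clip(tau, tau_min, tau_max, out=tau)  # the MAX-MIN bounds
    return tau

tau = np.full((4, 4), 5.0)                   # trails start at tau_max
tau = mmas_update(tau, best_tour=[0, 2, 1, 3], best_len=17.0)
print(tau.round(3))
```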
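And for the correlation-testing record above, the rank-based inverse normal (rankit) transformation it found most beneficial is a one-liner over average ranks; a minimal scipy sketch:

```python
# Rankit scores: map average ranks through the normal quantile function.
import numpy as np
from scipy import stats

def rankit(x):
    ranks = stats.rankdata(x)                 # average ranks handle ties
    return stats.norm.ppf((ranks - 0.5) / len(x))

x = np.array([1.0, 2.0, 2.5, 40.0, 300.0])    # heavily skewed sample
y = np.array([3.0, 1.0, 7.0, 90.0, 250.0])
r, p = stats.pearsonr(rankit(x), rankit(y))   # Pearson r on normalized data
print(round(r, 3), round(p, 3))
```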
Multiple multicopters are coordinated using a distributed control system. The system is implemented in the Robot Operating System (ROS) and is capable of providing a real-time video stream from a UAV to one or more base stations using a wireless communications infrastructure. The system supports a heterogeneous set of UAVs and camera sensors. If necessary, an operator can intervene and reduce the autonomy. The system has been tested in an outdoor mission serving as a proof of concept. Some insights from these tests are described in the paper."} {"_id": "7f75c1245dc8aab77769c3b75834202f51ef49d1", "title": "Examining the benefits and challenges of using audience response systems: A review of the literature", "text": "Audience response systems (ARSs) permit students to answer electronically displayed multiple choice questions using a remote control device. All responses are instantly presented, in chart form, then reviewed and discussed by the instructor and the class. A brief history of ARSs is offered, including a discussion of the 26 labels used to identify this technology. Next, a detailed review of 67 peer-reviewed papers from 2000 to 2007 is offered, presenting the benefits and challenges associated with the use of an ARS. Key benefits for using ARSs include improvements to the classroom environment (increases in attendance, attention levels, participation and engagement), learning (interaction, discussion, contingent teaching, quality of learning, learning performance), and assessment (feedback, formative, normative). The biggest challenges for teachers in using ARSs are the time needed to learn and set up the ARS technology, creating effective ARS questions, adequate coverage of course material, and the ability to respond to instantaneous student feedback. Student challenges include adjusting to a new method of learning, increased confusion when multiple perspectives are discussed, and negative reactions to being monitored. It is concluded that more systematic, detailed research is needed in a broader range of contexts."} {"_id": "89e2196b3c8cb4164fde074f26d482599ed28d9c", "title": "Updating freeze: Aligning animal and human research", "text": "Freezing is widely used as the main outcome measure for fear in animal studies. Freezing is also getting attention more frequently in human stress research, as it is considered to play an important role in the development of psychopathology. Human models on defense behavior are largely based on animal models. Unfortunately, direct translations between animal and human studies are hampered by differences in definitions and methods. The present review therefore aims to clarify the conceptualization of freezing. Neurophysiological and neuroanatomical correlates are discussed and a translational model is proposed. We review the upcoming research on freezing in humans that aims to match animal studies by using physiological indicators of freezing (bradycardia and objective reduction in movement).
Finally, we set the agenda for future research in order to optimize mutual animal-human translations and stimulate consistency and systematization in future empirical research on the freezing phenomenon."} {"_id": "936988ba574aac6ca14b8a918472874bbfd1a809", "title": "Child abuse, neglect, and adult behavior: research design and findings on criminality, violence, and child abuse.", "text": "Using a prospective cohorts design, a large sample of physical and sexual abuse cases was compared to a matched control group. Overall, abused and neglected subjects had higher rates than did controls for adult criminality and arrests for violent offenses, but not for adult arrests for child abuse or neglect. Findings are discussed in the context of intergenerational transmission of violence, and directions for future research are suggested."} {"_id": "7c1ac5a078b2d45740ea18caa12ceddef5b4d122", "title": "An approach for selecting seed URLs of focused crawler based on user-interest ontology", "text": "Seed URL selection for a focused Web crawler aims to guide the crawler toward related and valuable information that meets a user\u2019s personal information requirements and to provide more effective information retrieval. In this paper, we propose a seed URL selection approach based on a user-interest ontology. To enrich semantic queries, we first apply Formal Concept Analysis to construct a user-interest concept lattice from the user's log profile. By merging concept lattices, we construct the user-interest ontology, which can describe the implicit concepts and the relationships between them more appropriately for semantic representation and query matching. On the other hand, we make full use of the user-interest ontology for extracting the user's topic area of interest and expanding user queries to receive the most related pages as seed URLs, which serve as the entry points of the focused crawler. In particular, we focus on how to refine the user topic area using the bipartite directed graph. The experiment proves that the user-interest ontology can be built effectively by merging concept lattices and that our proposed approach can select a high-quality seed URL collection and improve the average precision of the focused Web crawler."} {"_id": "d3abb0b5b3ce7eb464846bbdfd93e0fbf505e954", "title": "Compact arrays fed by substrate integrated waveguides", "text": "In this paper, we compare three different concepts of compact antenna arrays fed by substrate integrated waveguides (SIW). The antenna concepts differ in the type of radiators. Slots represent magnetic linear radiators, patches are electric surface radiators, and Vivaldi slots belong to travelling-wave antennas. Hence, the SIW feeders have to exploit different mechanisms of exciting antenna elements. Impedance and radiation properties of the studied antenna arrays have been related to the normalized frequency. The antenna arrays have been mutually compared to show fundamental dependencies of the final parameters of the designed antennas on the state variables of the antennas, on the SIW feeder architectures and on related implementation details."} {"_id": "4360637b10da1d63029456e76b6a30673c6efb97", "title": "SmartPort: A Platform for Sensor Data Monitoring in a Seaport Based on FIWARE", "text": "Seaport monitoring and management is a significant research area, in which infrastructure automatically collects big data sets that guide the organization in its many activities.
Thus, this problem is heavily related to the fields of data acquisition, transfer, storage, big data analysis and information visualization. Las Palmas de Gran Canaria port is a good example of how a seaport generates big data volumes through a network of sensors. They are placed on meteorological stations and maritime buoys, registering environmental parameters. Likewise, the Automatic Identification System (AIS) registers several dynamic parameters about the tracked vessels. However, such an amount of data is useless without a system that enables a meaningful visualization and helps make decisions. In this work, we present SmartPort, a platform that offers a distributed architecture for the collection of the port sensors' data and a rich Internet application that allows the user to explore the geolocated data. The presented SmartPort tool is a representative, promising and inspiring approach to manage and develop a smart system. It covers a demanding need for big data analysis and visualization utilities for managing complex infrastructures, such as a seaport."} {"_id": "e96940e4837c597ed05bd3ea52e126ca444d842b", "title": "Modeling textual entailment with role semantic information", "text": "In this thesis, we present a novel approach for modeling textual entailment using lexical-semantic information on the level of predicate-argument structure. To this end, we adopt information provided by the Berkeley FrameNet repository and embed it into an implemented end-to-end system. The two main goals of this thesis are the following: (i) to provide an analysis of the potential contribution of frame semantic information to the recognition of textual entailment and (ii) to present a robust system architecture that can serve as a basis for future experiments, research, and improvement. Our work was carried out in the context of the textual entailment initiative, which since 2005 has set the stage for the broad investigation of inference in natural-language processing tasks, including empirical evaluation of its coverage and reliability. In short, textual entailment describes inferential relations between (entailing) texts and (entailed) hypotheses as interpreted by typical language users. This pre-theoretic notion captures a natural range of inferences as compared to logical entailment, which has traditionally been used within theoretical approaches to natural language semantics. Various methods for modeling textual entailment have been proposed in the literature, ranging from shallow techniques like lexical overlap to shallow syntactic parsing and the exploitation of WordNet relations. Recently, there has been a move towards more structured meaning representations. In particular, the level of predicate-argument structure has gained much attention, which seems to be a natural and straightforward choice. Predicate-argument structure allows annotating sentences or texts with nuclear meaning representations (\u201cwho did what to whom\u201d), which are of obvious relevance for this task. For example, it can account for paraphrases like \u201cGhosts scare John\u201d vs. \u201cJohn is scared by ghosts\u201d. In this thesis, we present an approach to textual entailment that is centered around the analysis of predicate-argument structure. It combines LFG grammatical analysis, predicate-argument structure in the FrameNet paradigm, and taxonomic information from WordNet into tripartite graph structures.
By way of a declarative graph matching algorithm, the \u201cstructural and semantic\u201d similarity of hypotheses and texts is computed and the result is represented as feature vectors. A supervised machine learning architecture trained on entailment corpora is used to check textual entailment for new text/hypothesis pairs. The approach is implemented in the SALSA RTE system, which successfully participated in the second and third RTE challenges. While system performance is on a par with that of comparable systems, the intuitively expected strong positive effect of using FrameNet information has not yet been confirmed. In order to evaluate different system components and to assess the potential contribution of FrameNet information for checking textual entailment, we conducted a number of experiments. For example, with the help of a gold-standard corpus, we"} {"_id": "7336015b2de1c3a7accf7651c28939f7fa03cb9e", "title": "An algorithm for treatment of the drooping nose.", "text": "UNLABELLED\nNasal tip ptosis (\"drooping\" or long nose) occurs when the tip of the nose is more caudal than what is deemed ideal. Intrinsic factors, such as elongated or caudally-rotated lower lateral cartilages, can lead to nasal tip ptosis. Extrinsic factors, such as elongated upper lateral cartilages or excessive caudal anterior septum and heavy nasal skin, can push the nasal tip caudally and lead to drooping of the nasal tip. The loss of maxillary or nasal spine support may enhance the potential for tip ptosis. In addition, a hyperactive depressor nasi septi could, as a result of continuous pull on the tip, result in tip ptosis. Aging or previous nasal procedures (such as the Goldman-type tip surgery) where the continuity of the lateral and medial crura of the lower lateral cartilages has been violated may cause a weakening of the tip-supporting mechanisms and de-rotation of the nasal tip. Correction of this deformity is challenging and rewarding; it can resolve both the cosmetic deformity and nasal obstruction symptoms related to this entity. The goal of this article is to present our current principles of diagnosis and treatment of nasal tip ptosis, as well as to introduce an algorithm of preferred methods and techniques for its reliable and stable correction.\n\n\nRESULTS\nCorrection of nasal tip ptosis requires accurate diagnosis, a recognition of the interplay between various anatomic components, specific strategy planning, and a correction of anatomic abnormalities."} {"_id": "a832b34e1c530f9e66be4471a051f981b1482d27", "title": "Emerging Applications of Liquid Metals Featuring Surface Oxides", "text": "Gallium and several of its alloys are liquid metals at or near room temperature. Gallium has low toxicity, essentially no vapor pressure, and a low viscosity. Despite these desirable properties, applications calling for liquid metal often use toxic mercury because gallium forms a thin oxide layer on its surface. The oxide interferes with electrochemical measurements, alters the physicochemical properties of the surface, and changes the fluid dynamic behavior of the metal in a way that has, until recently, been considered a nuisance.
Here, we show that this solid oxide \"skin\" enables many new applications for liquid metals including soft electrodes and sensors, functional microcomponents for microfluidic devices, self-healing circuits, shape-reconfigurable conductors, and stretchable antennas, wires, and interconnects."} {"_id": "9cc2680fc8524b1e3f1f11b508d5642673e22f55", "title": "A Personalized Graph-Based Document Ranking Model Using a Semantic User Profile", "text": "The overload of information available on the web, combined with the diversity of user information needs and the ambiguity of queries, has led researchers to develop personalized search tools that return only documents matching the user profile, which represents the user's main interests and needs. We present in this paper a personalized document ranking model based on an extended graph-based distance measure that exploits a semantic user profile derived from a predefined web ontology (ODP). The measure is based on combining the Minimum Common Supergraph (MCS) and the Maximum Common Subgraph (mcs) between graphs representing, respectively, the document and the user profile. We extend this measure in order to take into account a semantic recovery between the document and the user profile through common concepts and cross links connecting the two graphs. Results show the effectiveness of our personalized graph-based ranking model compared to Yahoo search results."} {"_id": "6ed38b0cb510fa91434eb63ab464bee66c9323c6", "title": "A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction", "text": "We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network. The network is initialized with embeddings that make use of character N-gram information to better suit this task. When evaluated on common benchmark test data sets (CoNLL-2014 and JFLEG), our model substantially outperforms all prior neural approaches on this task as well as strong statistical machine translation-based systems with neural and task-specific features trained on the same data. Our analysis shows the superiority of convolutional neural networks over recurrent neural networks such as long short-term memory (LSTM) networks in capturing the local context via attention, and thereby improving the coverage in correcting grammatical errors. By ensembling multiple models, and incorporating an N-gram language model and edit features via rescoring, our novel method becomes the first neural approach to outperform the current state-of-the-art statistical machine translation-based approach, both in terms of grammaticality and fluency."} {"_id": "e4acaccd3c42b618396c9c28dae64ae7091e36b8", "title": "A self-steering I/Q receiver array in 45-nm CMOS SOI", "text": "A novel I/Q receiver array is demonstrated that adapts phase shifts in each receive channel to point a receive beam toward an incident RF signal. The measured array operates at 8.1 GHz and covers steering angles of +/-35 degrees for a four-element array. Additionally, the receiver incorporates an I/Q down-converter and demodulates 64QAM with EVM less than 4%.
The chip is fabricated in a 45 nm CMOS SOI process and occupies an area of 3.45 mm2 while consuming 143 mW of dc power."} {"_id": "b100619c0ca6f5e75dd9ad13f22aab88269c031a", "title": "A Technical Approach to the Energy Blockchain in Microgrids", "text": "The present paper considers some technical issues related to the \u201cenergy blockchain\u201d paradigm applied to microgrids. In particular, the study shows that the superposition of energy transactions in a microgrid creates a variation of the power losses in all the branches of the microgrid. Traditional power-loss allocation in distribution systems takes into account only generators; in this paper, power losses are attributed in real time to each transaction involving one generator and one load node by defining suitable indices. In addition, the presence of P\u2013V nodes increases the level of reactive flows and provides a more complex technical perspective. For this reason, reactive power generation for voltage support at P\u2013V nodes poses a further problem of reactive power flow exchange, which is worth investigating in future work in order to define a possible form of remuneration. The experimental section of the paper considers a medium-voltage microgrid and two different operational scenarios."} {"_id": "bf9283bbc24c01c664556a4d7d7ebe8a94781d4f", "title": "Aspect extraction in sentiment analysis: comparative analysis and survey", "text": "Sentiment analysis (SA) has become one of the most active and progressively popular areas in information retrieval and text mining due to the expansion of the World Wide Web (WWW). SA deals with the computational treatment or the classification of users\u2019 sentiments, opinions and emotions hidden within text. Aspect extraction is the most vital and extensively explored phase of SA, enabling sentiment classification in a precise manner. During the last decade, a large body of research has focused on identifying and extracting aspects. This survey therefore attempts a comprehensive overview of different aspect extraction techniques and approaches. These techniques have been categorized in accordance with the adopted approach. Beyond a traditional survey, a comprehensive comparative analysis is conducted among the different approaches to aspect extraction, which not only details the performance of each technique but also helps the reader compare its accuracy with other state-of-the-art and recent approaches."} {"_id": "b4f00f51edaf5d5b84e957ab947aac47c04740ba", "title": "An incremental deployment algorithm for mobile robot teams", "text": "This paper describes an algorithm for deploying the members of a mobile robot team into an unknown environment. The algorithm deploys robots one-at-a-time, with each robot making use of information gathered by the previous robots to determine the next deployment location. The deployment pattern is designed to maximize the area covered by the robots\u2019 sensors, while simultaneously ensuring that the robots maintain line-of-sight contact with one another. This paper describes the basic algorithm and presents results obtained from a series of experiments conducted using both real and simulated"} {"_id": "3b032aacdc7dadf0dc3475565c7236fa4b372eb6", "title": "RABAC: Role-Centric Attribute-Based Access Control", "text": "Role-based access control (RBAC) is a commercially dominant model, standardized by the National Institute of Standards and Technology (NIST).
Although RBAC provides compelling benefits for security management, it has several known deficiencies such as role explosion, wherein multiple closely related roles are required (e.g., an attending-doctor role is separately defined for each patient). Numerous extensions to RBAC have been proposed to overcome these shortcomings. Recently, NIST announced an initiative to unify and standardize these extensions by integrating roles with attributes, and identified three approaches: use attributes to dynamically assign users to roles, treat roles as just another attribute, and constrain the permissions of a role via attributes. The first two approaches have been previously studied. This paper presents a formal model for the third approach for the first time in the literature. We propose the novel role-centric attribute-based access control (RABAC) model which extends the NIST RBAC model with permission filtering policies. Unlike prior proposals addressing the role-explosion problem, RABAC does not fundamentally modify the role concept and integrates seamlessly with the NIST RBAC model. We also define an XACML profile for RABAC based on the existing XACML profile for RBAC."} {"_id": "64a995d605a1f4f632e6acf9468ae0834e62e084", "title": "A throwable miniature robotic system", "text": "Before soldiers or police take action, fast, real-time reconnaissance of dangerous places such as indoor spaces, passageways, and underpasses can both safeguard personnel and improve the accuracy of the operation, so it is significant and necessary to design a scout robot that can reach the target region quickly, by being thrown, shot, or airdropped, and gather information there. Such a robot should be small, easy to conceal, impact-resistant, and semi-autonomous. This paper proposes a design method for a throwable miniature scout robot, analyses its anti-impact mechanism and autonomy strategy, and shows through several experiments that the anti-impact mechanism attenuates landing shock and that the semi-autonomous control strategy is fit for the robot's application."} {"_id": "5b7c4b3c12917f289513c4896a08548e67ede9fe", "title": "Evidence for a common representation of decision values for dissimilar goods in human ventromedial prefrontal cortex.", "text": "To make economic choices between goods, the brain needs to compute representations of their values. A great deal of research has been performed to determine the neural correlates of value representations in the human brain. However, it is still unknown whether there exists a region of the brain that commonly encodes decision values for different types of goods, or if, in contrast, the values of different types of goods are represented in distinct brain regions. We addressed this question by scanning subjects with functional magnetic resonance imaging while they made real purchasing decisions among different categories of goods (food, nonfood consumables, and monetary gambles). We found activity in a key brain region previously implicated in encoding goal-values: the ventromedial prefrontal cortex (vmPFC) was correlated with the subjects' value for each category of good. Moreover, we found a single area in vmPFC to be correlated with the subjects' valuations for all categories of goods. 
Our results provide evidence that the brain encodes a \"common currency\" that allows for a shared valuation for different categories of goods."} {"_id": "fde42b706122e42efa876262039729d449487ae2", "title": "Sentiment Identification in Code-Mixed Social Media Text", "text": "Sentiment analysis is the Natural Language Processing (NLP) task dealing with the detection and classification of sentiments in texts. While some tasks deal with identifying the presence of sentiment in text (subjectivity analysis), other tasks aim at determining the polarity of the text, categorizing it as positive, negative or neutral. Whenever sentiment is present in text, it has a source (a person, a group of people or any entity) and the sentiment is directed towards some entity, object, event or person. Sentiment analysis tasks aim to determine the subject, the target and the polarity or valence of the sentiment. In our work, we try to automatically extract sentiment (positive or negative) from Facebook posts using a machine learning approach. While work has been done on code-mixed social media data and on sentiment analysis separately, our work is, to our knowledge, the first attempt at performing sentiment analysis of code-mixed social media text. We have used extensive pre-processing to remove noise from raw text. A Multilayer Perceptron model has been used to determine the polarity of the sentiment. We have also developed the corpus for this task by manually labelling Facebook posts with their associated sentiments."} {"_id": "03684e4a57d2c33e0ed219cad9e3b180175f2464", "title": "Social signal processing: Survey of an emerging domain", "text": "The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence \u2013 the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement \u2013 in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and the like, the design and development of automated systems for Social Signal Processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially-aware computing."} {"_id": "04c2cda00e5536f4b1508cbd80041e9552880e67", "title": "Hipster Wars: Discovering Elements of Fashion Styles", "text": "The clothing we wear and our identities are closely tied, revealing to the world clues about our wealth, occupation, and socio-identity. In this paper we examine questions related to what our clothing reveals about our personal style. We first design an online competitive Style Rating Game called Hipster Wars to crowdsource reliable human judgments of style. We use this game to collect a new dataset of clothing outfits with associated style ratings for 5 style categories: hipster, bohemian, pinup, preppy, and goth. Next, we train models for between-class and within-class classification of styles. 
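As a toy illustration of the between-class classification step just described, the sketch below trains a multinomial logistic-regression classifier over the five style categories. The outfit descriptors and labels are random placeholders, not the paper's features or crowd-sourced ratings.

```python
# Hedged sketch: between-class style classification with scikit-learn.
# X and y are hypothetical stand-ins for outfit descriptors and the five
# crowd-sourced style labels; the paper's actual features are not shown here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))       # placeholder outfit descriptors
y = rng.integers(0, 5, size=500)     # 0..4: hipster, bohemian, pinup, preppy, goth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("between-class accuracy:", clf.score(X_te, y_te))
```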
Finally, we explore methods to identify clothing elements that are generally discriminative for a style, and methods for identifying items in a particular outfit that may indicate a style."} {"_id": "09fa54f1ab7aaa83124d2415bfc6eb51e4b1f081", "title": "Where to Buy It: Matching Street Clothing Photos in Online Shops", "text": "In this paper, we define a new task, Exact Street to Shop, where our goal is to match a real-world example of a garment item to the same item in an online shop. This is an extremely challenging task due to visual differences between street photos (pictures of people wearing clothing in everyday uncontrolled settings) and online shop photos (pictures of clothing items on people, mannequins, or in isolation, captured by professionals in more controlled settings). We collect a new dataset for this application containing 404,683 shop photos collected from 25 different online retailers and 20,357 street photos, providing a total of 39,479 clothing item matches between street and shop photos. We develop three different methods for Exact Street to Shop retrieval, including two deep learning baseline methods, and a method to learn a similarity measure between the street and shop domains. Experiments demonstrate that our learned similarity significantly outperforms our baselines that use existing deep learning based representations."} {"_id": "30ea6c4991ceabc8d197ccff3e45fb0a00a3f6c5", "title": "Style Finder: Fine-Grained Clothing Style Detection and Retrieval", "text": "With the rapid proliferation of smartphones and tablet computers, search has moved beyond text to other modalities like images and voice. For many applications like Fashion, visual search offers a compelling interface that can capture stylistic visual elements beyond color and pattern that cannot be as easily described using text. However, extracting and matching such attributes remains an extremely challenging task due to high variability and deformability of clothing items. In this paper, we propose a fine-grained learning model and multimedia retrieval framework to address this problem. First, an attribute vocabulary is constructed using human annotations obtained on a novel fine-grained clothing dataset. This vocabulary is then used to train a fine-grained visual recognition system for clothing styles. We report benchmark recognition and retrieval results on the Women's Fashion Coat Dataset and illustrate potential mobile applications for attribute-based multimedia retrieval of clothing items and image annotation."} {"_id": "324608bf8fecc064bc491da21291465ab42fa6b6", "title": "Matching-CNN meets KNN: Quasi-parametric human parsing", "text": "Both parametric and non-parametric approaches have demonstrated encouraging performance in the human parsing task, namely segmenting a human image into several semantic regions (e.g., hat, bag, left arm, face). In this work, we aim to develop a new solution with the advantages of both methodologies, namely supervision from annotated data and the flexibility to use newly annotated (possibly uncommon) images, and present a quasi-parametric human parsing model. Under the classic K Nearest Neighbor (KNN)-based nonparametric framework, the parametric Matching Convolutional Neural Network (M-CNN) is proposed to predict the matching confidence and displacements of the best matched region in the testing image for a particular semantic region in one KNN image. Given a testing image, we first retrieve its KNN images from the annotated/manually-parsed human image corpus. 
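The KNN retrieval step just mentioned can be sketched in a few lines; the 128-dimensional descriptors below are hypothetical placeholders for whatever global image features the corpus is indexed with.

```python
# Hedged sketch of the KNN retrieval step: fetch the K most similar annotated
# images for a query image. Descriptors are random placeholders; the paper's
# retrieval features are not specified in this excerpt.
import numpy as np
from sklearn.neighbors import NearestNeighbors

corpus = np.random.rand(7700, 128)            # descriptors of the parsed corpus
index = NearestNeighbors(n_neighbors=25).fit(corpus)

query = np.random.rand(1, 128)                # descriptor of the testing image
distances, knn_idx = index.kneighbors(query)  # KNN images to be matched by M-CNN
```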
Then each semantic region in each KNN image is matched with confidence to the testing image using M-CNN, and the matched regions from all KNN images are further fused, followed by a superpixel smoothing procedure to obtain the ultimate human parsing result. The M-CNN differs from the classic CNN [12] in that tailored cross-image matching filters are introduced to characterize the matching between the testing image and the semantic region of a KNN image. The cross-image matching filters are defined at different convolutional layers, each aiming to capture a particular range of displacements. Comprehensive evaluations over a large dataset with 7,700 annotated human images well demonstrate the significant performance gain from the quasi-parametric model over the state-of-the-art methods [29, 30] for the human parsing task."} {"_id": "07cb8b9386778b158f879ee36a902f04dbf33bd6", "title": "The SeaLion has Landed: An IDE for Answer-Set Programming---Preliminary Report", "text": "We report on the current state and designated features of the tool SeaLion, aimed to serve as an integrated development environment (IDE) for answer-set programming (ASP). A main goal of SeaLion is to provide a user-friendly environment for supporting a developer to write, evaluate, debug, and test answer-set programs. To this end, new support techniques have to be developed that suit the requirements of the answer-set semantics and meet the constraints of practical applicability. In this respect, SeaLion benefits from the research results of a project on methods and methodologies for answer-set program development in whose context SeaLion is realised. Currently, the tool provides source-code editors for the languages of Gringo and DLV that offer syntax highlighting, syntax checking, and a visual program outline. Further implemented features are support for external solvers and visualisation as well as visual editing of answer sets. SeaLion comes as a plugin of the popular Eclipse platform and provides its own interfaces for future extensions of the IDE."} {"_id": "61c648e162dad3bd154d429b3e5cec601efa4a28", "title": "Occlusion horizons for driving through urban scenery", "text": "We present a rapid occlusion culling algorithm specifically designed for urban environments. For each frame, an occlusion horizon is built on-the-fly during a hierarchical front-to-back traversal of the scene. All conservatively hidden objects are culled, while the occluding impostors of all conservatively visible objects are added to the 2.5D occlusion horizon. Our framework also supports levels-of-detail (LOD) rendering by estimating the visible area of the projection of an object in order to select the appropriate LOD for each object. This algorithm requires no substantial preprocessing and no excessive storage. In a test scene of 10,000 buildings, the cull phase took 11 ms on a Pentium II 333 MHz and 45 ms on an SGI Octane per frame on average. In typical views, the occlusion horizon culled away 80-90% of the objects that were within the view frustum, giving a 10 times speedup over view frustum culling alone. Combining the occlusion horizon with LOD rendering gave a 17 times speedup on an SGI Octane, and 23 times on a PII. CR Categories and Subject Descriptors: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism."} {"_id": "2006f8d01395dab714bdcdcfd1cebbe6d6276e35", "title": "Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding", "text": "One of the key problems in spoken language understanding (SLU) is the task of slot filling. 
In light of the recent success of applying deep neural network technologies in domain detection and intent identification, we carried out an in-depth investigation on the use of recurrent neural networks for the more difficult task of slot filling involving sequence discrimination. In this work, we implemented and compared several important recurrent-neural-network architectures, including the Elman-type and Jordan-type recurrent networks and their variants. To make the results easy to reproduce and compare, we implemented these networks on the common Theano neural network toolkit, and evaluated them on the ATIS benchmark. We also compared our results to a conditional random fields (CRF) baseline. Our results show that on this task, both types of recurrent networks outperform the CRF baseline substantially, and a bi-directional Jordan-type network that takes into account both past and future dependencies among slots works best, outperforming a CRF-based baseline by 14% in relative error reduction."} {"_id": "5664a4ca240ca67d4846bcafb332a5f67e1a9791", "title": "Accessibility Evaluation of Classroom Captions", "text": "Real-time captioning enables deaf and hard of hearing (DHH) people to follow classroom lectures and other aural speech by converting it into visual text with less than a five second delay. Keeping the delay short allows end-users to follow and participate in conversations. This article focuses on the fundamental problem that makes real-time captioning difficult: sequential keyboard typing is much slower than speaking. We first surveyed the audio characteristics of 240 one-hour-long captioned lectures on YouTube, such as speed and duration of speaking bursts. We then analyzed how these characteristics impact caption generation and readability, considering specifically our human-powered collaborative captioning approach. We note that most of these characteristics are also present in more general domains. For our caption comparison evaluation, we transcribed a classroom lecture in real-time using all three captioning approaches. We recruited 48 participants (24 DHH) to watch these classroom transcripts in an eye-tracking laboratory. We presented these captions in a randomized, balanced order. We show that both hearing and DHH participants preferred and followed collaborative captions better than those generated by automatic speech recognition (ASR) or professionals due to the more consistent flow of the resulting captions. These results show the potential to reliably capture speech even during sudden bursts of speed, as well as for generating \u201cenhanced\u201d captions, unlike other human-powered captioning approaches."} {"_id": "53730790a77a31794a72b334a14f4691a16b87ba", "title": "A Decentralised Sharing App running a Smart Contract on the Ethereum Blockchain", "text": "The sharing economy, the business of collectively using privately owned objects and services, has fuelled some of the fastest growing businesses of the past years. However, popular sharing platforms like Airbnb or Uber exhibit several drawbacks: a cumbersome sign up procedure, lack of participant privacy, overbearing terms and conditions, and significant fees for users. We demonstrate a Decentralised App (DAPP) for the sharing of everyday objects based on a smart contract on the Ethereum blockchain. This contract enables users to register and rent devices without involvement of a Trusted Third Party (TTP), disclosure of any personal information or prior sign up to the service. 
With the increasing adoption of cryptocurrencies, the use of smart contracts such as the one proposed in this paper has the potential to revolutionise the sharing economy."} {"_id": "4d4e151bf5738fa2b6a8ef9ac4e66cfe5179cd5f", "title": "Layout-sensitive language extensibility with SugarHaskell", "text": "Programmers need convenient syntax to write elegant and concise programs. Consequently, the Haskell standard provides syntactic sugar for some scenarios (e.g., do notation for monadic code), authors of Haskell compilers provide syntactic sugar for more scenarios (e.g., arrow notation in GHC), and some Haskell programmers implement preprocessors for their individual needs (e.g., idiom brackets in SHE). But manually written preprocessors cannot scale: They are expensive, error-prone, and not composable. Most researchers and programmers therefore refrain from using the syntactic notations they need in actual Haskell programs, but only use them in documentation or papers. We present a syntactically extensible version of Haskell, SugarHaskell, that empowers ordinary programmers to implement and use custom syntactic sugar.\n Building on our previous work on syntactic extensibility for Java, SugarHaskell integrates syntactic extensions as sugar libraries into Haskell's module system. Syntax extensions in SugarHaskell can declare arbitrary context-free and layout-sensitive syntax. SugarHaskell modules are compiled into Haskell modules and further processed by a Haskell compiler. We provide an Eclipse-based IDE for SugarHaskell that is extensible, too, and automatically provides syntax coloring for all syntax extensions imported into a module.\n We have validated SugarHaskell with several case studies, including arrow notation (as implemented in GHC) and EBNF as a concise syntax for the declaration of algebraic data types with associated concrete syntax. EBNF declarations also show how to extend the extension mechanism itself: They introduce syntactic sugar for using the declared concrete syntax in other SugarHaskell modules."} {"_id": "1fdfa103b31c17be922050972a49414cead951f5", "title": "Theodor Lipps and the shift from \"sympathy\" to \"empathy\".", "text": "In the course of extensive philosophical debates on aesthetics in nineteenth-century Germany, Robert Vischer introduced the concept of Einf\u00fchlung in relation to art. Theodor Lipps subsequently extended it from art to visual illusions and interpersonal understanding. While Lipps had regarded Einf\u00fchlung as basically similar to the old notion of sympathy, Edward Titchener in America believed it had a different meaning. Hence, he coined the term empathy as its translation. This term came to be increasingly widely adopted, first in psychology and then more generally. But the lack of agreement about the supposed difference between these concepts suggests that Lipps had probably been right."} {"_id": "1e77175db11664a18b6b6e3cac4cf5c90e7cb4b5", "title": "A polynomial-time maximum common subgraph algorithm for outerplanar graphs and its application to chemoinformatics", "text": "Metrics for structured data have received increasing interest in the machine learning community. Graphs provide a natural representation for structured data, but a lot of operations on graphs are computationally intractable. In this article, we present a polynomial-time algorithm that computes a maximum common subgraph of two outerplanar graphs. 
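One standard way to turn a maximum common subgraph into the kind of graph metric discussed next (a common construction in the literature; the paper's exact definition is not given in this excerpt) is the normalized distance

```latex
d(G_1, G_2) \;=\; 1 \;-\; \frac{\lvert \mathrm{mcs}(G_1, G_2) \rvert}{\max\bigl(\lvert G_1 \rvert,\, \lvert G_2 \rvert\bigr)},
```

which is 0 for isomorphic graphs and approaches 1 as the largest shared substructure shrinks relative to the larger graph.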
The algorithm makes use of the block-and-bridge preserving subgraph isomorphism, which has significant efficiency benefits and is also motivated from a chemical perspective. We focus on the application of learning structure-activity relationships, where the task is to predict the chemical activity of molecules. We show how the algorithm can be used to construct a metric for structured data, and we evaluate this metric, and more generally also the block-and-bridge preserving matching operator, on 60 molecular datasets, obtaining state-of-the-art results in terms of predictive performance and efficiency."} {"_id": "2f92b10acf7c405e55c74c1043dabd9ded1b1800", "title": "Dynamic Integration of Background Knowledge in Neural NLU Systems", "text": "Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrates the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way."} {"_id": "9b23769aa313e4c7adc75944a5da1eb4eafb8395", "title": "Preserving Relation Privacy in Online Social Network Data", "text": "Online social networks routinely publish data of interest to third parties, but in so doing often reveal relationships, such as a friendship or contractual association, that an attacker can exploit. This systematic look at existing privacy-preservation techniques highlights the vulnerabilities of users even in networks that completely anonymize identities. Through a taxonomy that categorizes techniques according to the degree of user identity exposure, the authors examine the ways that existing approaches compromise relation privacy and offer more secure alternatives."} {"_id": "fc5f08a4e4ef0f63d0091377fa2c649e68f0c5fd", "title": "Emotion and the motivational brain", "text": "Psychophysiological and neuroscience studies of emotional processing undertaken by investigators at the University of Florida Laboratory of the Center for the Study of Emotion and Attention (CSEA) are reviewed, with a focus on reflex reactions, neural structures and functional circuits that mediate emotional expression. The theoretical view shared among the investigators is that expressed emotions are founded on motivational circuits in the brain that developed early in evolutionary history to ensure the survival of individuals and their progeny. These circuits react to appetitive and aversive environmental and memorial cues, mediating appetitive and defensive reflexes that tune sensory systems and mobilize the organism for action and underlie negative and positive affects. 
The research reviewed here assesses the reflex physiology of emotion, both autonomic and somatic, studying affects evoked in picture perception, memory imagery, and in the context of tangible reward and punishment; using the electroencephalograph (EEG) and functional magnetic resonance imaging (fMRI), it explores the brain's motivational circuits that determine human emotion."} {"_id": "7fbf4458d5c19ec29b282f89bcb9110dc6ca89fc", "title": "Single-cell RNA-seq highlights intratumoral heterogeneity in primary glioblastoma", "text": "Human cancers are complex ecosystems composed of cells with distinct phenotypes, genotypes, and epigenetic states, but current models do not adequately reflect tumor composition in patients. We used single-cell RNA sequencing (RNA-seq) to profile 430 cells from five primary glioblastomas, which we found to be inherently variable in their expression of diverse transcriptional programs related to oncogenic signaling, proliferation, complement/immune response, and hypoxia. We also observed a continuum of stemness-related expression states that enabled us to identify putative regulators of stemness in vivo. Finally, we show that established glioblastoma subtype classifiers are variably expressed across individual cells within a tumor and demonstrate the potential prognostic implications of such intratumoral heterogeneity. Thus, we reveal previously unappreciated heterogeneity in diverse regulatory programs central to glioblastoma biology, prognosis, and therapy."} {"_id": "d9c4b1ca997583047a8721b7dfd9f0ea2efdc42c", "title": "Learning Inference Models for Computer Vision", "text": "Computer vision can be understood as the ability to perform inference on image data. Breakthroughs in computer vision technology are often marked by advances in inference techniques, as even the model design is often dictated by the complexity of inference in them. This thesis proposes learning-based inference schemes and demonstrates applications in computer vision. We propose techniques for inference in both generative and discriminative computer vision models. Despite their intuitive appeal, the use of generative models in vision is hampered by the difficulty of posterior inference, which is often too complex or too slow to be practical. We propose techniques for improving inference in two widely used techniques: Markov Chain Monte Carlo (MCMC) sampling and message-passing inference. Our inference strategy is to learn separate discriminative models that assist Bayesian inference in a generative model. Experiments on a range of generative vision models show that the proposed techniques accelerate the inference process and/or converge to better solutions. A main complication in the design of discriminative models is the inclusion of prior knowledge in a principled way. For better inference in discriminative models, we propose techniques that modify the original model itself, since inference is then a simple evaluation of the model. We concentrate on convolutional neural network (CNN) models and propose a generalization of standard spatial convolutions, which are the basic building blocks of CNN architectures, to bilateral convolutions. First, we generalize the existing use of bilateral filters and then propose new neural network architectures with learnable bilateral filters, which we call \u2018Bilateral Neural Networks\u2019. 
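For reference, the fixed-kernel bilateral filter that these learnable modules generalize can be sketched directly; a minimal brute-force NumPy version is shown below, with illustrative parameter values that are not taken from the thesis.

```python
# Hedged sketch: a classic Gaussian bilateral filter on a grayscale float
# image in [0, 1]. Learnable bilateral convolutions generalize this fixed
# spatial/range kernel; this is background, not the thesis's implementation.
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=5):
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            spatial = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng_w                   # joint spatial-range weight
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```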
We show how the bilateral filtering modules can be used for modifying existing CNN architectures for better image segmentation and propose a neural network approach for temporal information propagation in videos. Experiments demonstrate the potential of the proposed bilateral networks on a wide range of vision tasks and datasets. In summary, we propose learning-based techniques for better inference in several computer vision models ranging from inverse graphics to freely parameterized neural networks. In generative vision models, our inference techniques alleviate some of the crucial hurdles in Bayesian posterior inference, paving new ways for the use of model-based machine learning in vision. In discriminative CNN models, the proposed filter generalizations aid in the design of new neural network architectures that can handle sparse high-dimensional data as well as provide a way for incorporating prior knowledge into CNNs."} {"_id": "7d2e36f1eb09cbff2f9f5c942e538a964464b15e", "title": "Tunneling Transistors Based on Graphene and 2-D Crystals", "text": "As conventional transistors become smaller and thinner in the quest for higher performance, a number of hurdles are encountered. The discovery of electronic-grade 2-D crystals has added a new \u201clayer\u201d to the list of conventional semiconductors used for transistors. This paper discusses the properties of 2-D crystals by comparing them with their 3-D counterparts. Their suitability for electronic devices is discussed. In particular, the use of graphene and other 2-D crystals for interband tunneling transistors is discussed for low-power logic applications. Since the tunneling phenomenon in reduced dimensions is not conventionally covered in texts, the physics is developed explicitly before applying it to transistors. Though we are in an early stage of learning to design devices with 2-D crystals, they have already been the motivation behind a list of truly novel ideas. This paper reviews a number of such ideas."} {"_id": "149bf28af91cadf2cd933bd477599cca40f55ccd", "title": "Autonomous reinforcement learning on raw visual input data in a real world application", "text": "We propose a learning architecture that is able to do reinforcement learning based on raw visual input data. In contrast to previous approaches, not only is the control policy learned. In order to be successful, the system must also autonomously learn how to extract relevant information out of a high-dimensional stream of input information for which the semantics are not provided to the learning system. We give a first proof-of-concept of this novel learning architecture on a challenging benchmark, namely visual control of a racing slot car. The resulting policy, learned only by success or failure, is hardly beaten by an experienced human player."} {"_id": "11757f6404965892e02c8c49c5665e2b25492bec", "title": "Capturing and animating skin deformation in human motion", "text": "During dynamic activities, the surface of the human body moves in many subtle but visually significant ways: bending, bulging, jiggling, and stretching. We present a technique for capturing and animating those motions using a commercial motion capture system and approximately 350 markers. Although the number of markers is significantly larger than that used in conventional motion capture, it is only a sparse representation of the true shape of the body. We supplement this sparse sample with a detailed, actor-specific surface model. 
The motion of the skin can then be computed by segmenting the markers into the motion of a set of rigid parts and a residual deformation (approximated first as a quadratic transformation and then with radial basis functions). We demonstrate the power of this approach by capturing flexing muscles, high-frequency motions, and abrupt decelerations on several actors. We compare these results both to conventional motion capture and skinning and to synchronized video of the actors."} {"_id": "a8da9412b4a2a15a3124be06c4c37ad98f8f02d2", "title": "Ontology and the Lexicon", "text": "A lexicon is a linguistic object and hence is not the same thing as an ontology, which is non-linguistic. Nonetheless, word senses are in many ways similar to ontological concepts and the relationships found between word senses resemble the relationships found between concepts. Although the arbitrary and semi-arbitrary distinctions made by natural languages limit the degree to which these similarities can be exploited, a lexicon can nonetheless serve in the development of an ontology, especially in a technical domain."} {"_id": "d381709212dccf397284eee54a1e3010a4ef777f", "title": "A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss", "text": "We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but is less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. By end-to-end training our model with the inconsistency loss and original losses of extractive and abstractive models, we achieve state-of-the-art ROUGE scores while producing the most informative and readable summaries on the CNN/Daily Mail dataset in a solid human evaluation."} {"_id": "fb80d858f0b3832d67c6aafaf5426c787198e492", "title": "The TangibleK Robotics Program: Applied Computational Thinking for Young Children.", "text": "This article describes the TangibleK robotics program for young children. Based on over a decade of research, this program is grounded in the belief that teaching children about the human-made world, the realm of technology and engineering, is as important as teaching them about the natural world, numbers, and letters. The TangibleK program uses robotics as a tool to engage children in developing computational thinking and learning about the engineering design process. It includes a research-based, developmentally appropriate robotics kit that children can use to make robots and program their behaviors. The curriculum has been piloted in kindergarten classrooms and in summer camps and lab settings. The author presents the theoretical framework that informs TangibleK and the \u201cpowerful ideas\u201d from computer science and robotics on which the curriculum is based, linking them to other disciplinary areas and developmental characteristics of early childhood. 
The article concludes with a description of classroom pedagogy and activities, along with assessment tools used to evaluate learning outcomes."} {"_id": "9154ea2ddc06a0ffd0d5909520495ac16a467a64", "title": "Associations between young adults' use of sexually explicit materials and their sexual preferences, behaviors, and satisfaction.", "text": "This study examined how levels of sexually explicit material (SEM) use during adolescence and young adulthood were associated with sexual preferences, sexual behaviors, and sexual and relationship satisfaction. Participants included 782 heterosexual college students (326 men and 456 women; M(age)\u00a0=\u00a019.9) who completed a questionnaire online. Results revealed high frequencies and multiple types and contexts of SEM use, with men's usage rates systematically higher than women's. Regression analyses revealed that both the frequency of SEM use and number of SEM types viewed were uniquely associated with more sexual experience (a higher number of overall and casual sexual intercourse partners, as well as a lower age at first intercourse). Higher frequencies of SEM use were associated with less sexual and relationship satisfaction. The frequency of SEM use and number of SEM types viewed were both associated with higher sexual preferences for the types of sexual practices typically presented in SEM. These findings suggest that SEM use can play a significant role in a variety of aspects of young adults' sexual development processes."} {"_id": "5e1059a5ae0dc94e83a269b8d5c0763d1e6851e2", "title": "Improved Collision Attack on Hash Function MD5", "text": "In this paper, we present a fast attack algorithm to find two-block collisions of the hash function MD5. The algorithm is based on the two-block collision differential path of MD5 that was presented by Wang et al. at EUROCRYPT 2005. We found that the derived conditions for the desired collision differential path were not sufficient to guarantee that the path holds and that some conditions could be modified to enlarge the collision set. By using the technique of small-range searching and omitting the computing steps to check the characteristics in the attack algorithm, we can speed up the attack on MD5 efficiently. Compared with the Advanced Message Modification technique presented by Wang et al., the small-range searching technique can correct 4 more conditions for the first iteration differential and 3 more conditions for the second iteration differential, thus improving the probability of finding collisions and reducing the complexity of the attack. The whole attack on MD5 can be accomplished within 5 hours using a PC with a Pentium4 1.70GHz CPU."} {"_id": "808d7f0e5385210422751f1a9a6fd0f07cda647d", "title": "Complementary approaches built as web service for arabic handwriting OCR systems via amazon elastic mapreduce (EMR) model", "text": "Arabic Optical Character Recognition (OCR) as Web Services represents a major challenge for handwritten document recognition. A variety of approaches, methods, algorithms and techniques have been proposed in order to build powerful Arabic OCR web services. Unfortunately, these methods could not succeed in achieving this mission in the case of large quantities of Arabic handwritten documents. Intensive experiments and observations revealed that some of the existing approaches and techniques are complementary and can be combined to improve the recognition rate. 
Designing and implementing these recent sophisticated complementary approaches and techniques as web services is commonly complex; they require strong computing power to reach an acceptable recognition speed, especially in the case of large quantities of documents. One of the possible solutions to overcome this problem is to benefit from distributed computing architectures such as cloud computing. This paper describes the design and implementation of Arabic Handwriting Recognition as a web service (AHRweb service) based on the complementary K-Nearest Neighbor (KNN)/Support Vector Machine (SVM) (K-NN/SVM) approach via the Amazon Elastic MapReduce (EMR) model. The experiments were conducted in a cloud computing environment with a real large-scale handwriting dataset from the Institute for Communications Technology (IFN)/Ecole Nationale d\u2019Ing\u00e9nieur de Tunis (ENIT) IFN/ENIT database. The J-Sim (Java Simulator) was used as a tool to generate and analyze statistical results. Experimental results show that the Amazon Elastic MapReduce (EMR) model constitutes a very promising framework for enhancing large AHRweb service performances."} {"_id": "6c0e967829480c4eb85800d5750300337fecdd7c", "title": "Improving returns on stock investment through neural network selection", "text": "Artificial neural networks\u2019 (ANNs\u2019) generalization powers have in recent years received the admiration of finance researchers and practitioners. Their usage in such areas as bankruptcy prediction, debt-risk assessment, and security-market applications has yielded promising results. Such intensive research, the proven ability of the ANN in security-market applications, and the growing importance of equity securities in Singapore have motivated the conceptual development of this work, which uses the ANN in stock selection. With their proven generalization ability, neural networks are able to infer the characteristics of performing stocks from historical patterns. The performance of stocks is reflective of the profitability and quality of management of the underlying company. Such information is reflected in financial and technical variables. As such, the ANN is used as a tool to uncover the intricate relationships between the performance of stocks and the related financial and technical variables. Historical data, such as financial variables (inputs) and performance of the stock (output), is used in this ANN application. Experimental results obtained thus far have been very encouraging. Introduction With the growing importance of equities to both international and local investors, the selection of attractive stocks is of utmost importance to ensure a good return. Therefore, a reliable tool in the selection process can be of great assistance to these investors. An effective and efficient tool/system gives the investor the competitive edge over others as he/she can identify the performing stocks with minimum effort. 
In assisting the investors in their decision-making process, both academics and practitioners have devised trading strategies, rules, and concepts based on fundamental and technical analysis. Innovative investors opt to employ information technology to improve the efficiency of the process. This is done by transforming trading strategies into a language the computer can process, so as to exploit the logical processing power of the computer. This greatly reduces the time and effort needed to short-list attractive stocks. In the age where information technology is dominant, such computerized rule-based expert systems have severe limitations that affect their effectiveness and efficiency. In particular, their inability to handle nonlinear relationships between financial variables and stock prices has been a major shortcoming. However, with the significant advancement in the field of ANNs, these limitations have found a solution. In this work, the generalization ability of the ANN is harnessed to create an effective and efficient tool for stock selection. Results of the research in this field have so far been very encouraging. Application of Neural Network in Stock Investment One of the earliest studies was by Halquist and Schmoll (1989), who used a neural network model to predict trends in the S&P 500 index. They found that the model was able to predict the trends 61% of the time. This was followed by Trippi and DeSieno (1992) and Grudnitski and Osburn (1993). Trippi and DeSieno (1992) devised an S&P 500 trading system that consisted of several trained neural networks and a set of rules for combining the network results to generate a composite recommended trading strategy. The trading system was used to predict S&P 500 index futures and the results showed that this system significantly outperformed the passive buy-and-hold strategy. Grudnitski and Osburn (1993) used a neural network to predict the monthly price changes and trading return in the S&P 500 index futures. The results showed that the neural network was able to predict correctly 75% of the time and gave a positive return above risk. Another work on predicting S&P 500 index futures was by Tsaih, Hsu, and Lai (1998). Similar to Trippi and DeSieno (1992), Tsaih et al. (1998) also integrated a rule-based system technique with a neural network to produce a trading system. However, in the Tsaih et al. (1998) study, they used reasoning neural networks instead of the backpropagation method used by Trippi and DeSieno (1992). Empirical results in the daily prediction of price changes in the S&P 500 index futures showed that this hybrid artificial-intelligence (AI) approach outperformed the passive buy-and-hold investment strategy."} {"_id": "f3399ba97e3bc8ea29967c5e42eb669edc8d9e59", "title": "Efficient Adaptive Nonlinear Filters for Nonlinear Active Noise Control", "text": "In this paper, we treat nonlinear active noise control (NANC) with a linear secondary path (LSP) and with a nonlinear secondary path (NSP) in a unified structure by introducing a new virtual secondary path filter concept and using a general function expansion nonlinear filter. We discover that using the filtered-error structure results in greatly reducing the computational complexity of NANC. 
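For readers unfamiliar with the filtered-reference/filtered-error family, the sketch below shows the classic filtered-x LMS update for active noise control with a known linear secondary path estimate. It is the standard baseline from which filtered-error variants depart, not the paper's algorithm; the filter length and step size are illustrative.

```python
# Hedged sketch: filtered-x LMS for ANC with a linear secondary path.
# x: reference signal, d: primary noise at the error mic, s_hat: FIR estimate
# of the secondary path (len(s_hat) <= L assumed). This is background, not
# the paper's filtered-error or nonlinear algorithms.
import numpy as np

def fxlms(x, d, s_hat, L=32, mu=1e-3):
    w = np.zeros(L)                    # adaptive control filter
    xbuf = np.zeros(L)                 # reference history, newest first
    fxbuf = np.zeros(L)                # filtered-reference history
    sbuf = np.zeros(len(s_hat))        # anti-noise history for the path model
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                              # anti-noise sample
        sbuf = np.roll(sbuf, 1); sbuf[0] = y
        e[n] = d[n] - s_hat @ sbuf                # residual at the error mic
        fxbuf = np.roll(fxbuf, 1)
        fxbuf[0] = s_hat @ xbuf[:len(s_hat)]      # reference filtered by s_hat
        w += mu * e[n] * fxbuf                    # LMS update
    return e
```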
As a result, we extend the available filtered-error-based algorithms to solve NANC/LSP problems and, furthermore, develop our adjoint filtered-error-based algorithms for NANC/NSP. This family of algorithms is computationally efficient and possesses a simple structure. We also find that the computational complexity of NANC/NSP can be reduced even more using block-oriented nonlinear models, such as the Wiener, Hammerstein, or linear-nonlinear-linear (LNL) models, for the NSP. Finally, we use the statistical properties of the virtual secondary path and the robustness of our proposed methods to further reduce the computational complexity and simplify the implementation structure of NANC/NSP when the NSP satisfies certain conditions. Computational complexity and simulation results are given to confirm the efficiency and effectiveness of all of our proposed methods."} {"_id": "c6894120a6475bdee28770300a335638308e4f3e", "title": "Routing perturbation for enhanced security in split manufacturing", "text": "Split manufacturing can mitigate security vulnerabilities at untrusted foundries by exposing only partial designs. Even so, attackers can make educated guesses according to design conventions and thereby recover entire chip designs. In this work, a routing perturbation-based defense method is proposed such that such attacks become very difficult while wirelength/timing overhead is restricted to be very small. Experimental results on benchmark circuits confirm the effectiveness of the proposed techniques. The new techniques also significantly outperform the latest previous work."} {"_id": "12f8ae73af6280171a1c5fb8818320427c4f255f", "title": "Music emotion classification: a fuzzy approach", "text": "Due to the subjective nature of human perception, classification of the emotion of music is a challenging problem. Simply assigning an emotion class to a song segment in a deterministic way does not work well because not all people share the same feeling for a song. In this paper, we consider a different approach to music emotion classification. For each music segment, the approach determines how likely the song segment belongs to an emotion class. Two fuzzy classifiers are adopted to provide the measurement of the emotion strength. The measurement is also found useful for tracking the variation of music emotions in a song. Results are shown to illustrate the effectiveness of the approach."} {"_id": "9d4a2052502fe2c60e48d7c8997da1269df6f48f", "title": "Automatic mood detection and tracking of music audio signals", "text": "Music mood describes the inherent emotional expression of a music clip. It is helpful in music understanding, music retrieval, and some other music-related applications. In this paper, a hierarchical framework is presented to automate the task of mood detection from acoustic music data, by following some music psychological theories in western cultures. The hierarchical framework has the advantage of emphasizing the most suitable features in different detection tasks. Three feature sets, including intensity, timbre, and rhythm, are extracted to represent the characteristics of a music clip. The intensity feature set is represented by the energy in each subband, the timbre feature set is composed of the spectral shape features and spectral contrast features, and the rhythm feature set indicates three aspects that are closely related to an individual's mood response, including rhythm strength, rhythm regularity, and tempo. 
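As a concrete illustration of the intensity feature set (per-subband energy), a minimal sketch is shown below; the frame length, window, and octave-style band edges are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: energy per logarithmically spaced subband for one audio
# frame, a stand-in for the intensity features described above.
import numpy as np

def subband_energies(frame, sr, n_bands=8):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    edges = np.geomspace(62.5, sr / 2, n_bands + 1)   # octave-like bands
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Example: energies for 1,024 samples of noise at 22.05 kHz.
print(subband_energies(np.random.randn(1024), sr=22050))
```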
Furthermore, since the mood usually changes over the course of an entire piece of classical music, the approach to mood detection is extended to mood tracking for a music piece, by dividing the music into several independent segments, each of which contains a homogeneous emotional expression. Preliminary evaluations indicate that the proposed algorithms produce satisfactory results. On our testing database composed of 800 representative music clips, the average accuracy of mood detection reaches up to 86.3%. We can also on average recall 84.1% of the mood boundaries from nine testing music pieces."} {"_id": "fffda2474b1754897c854e0c322796f1fc9f4db8", "title": "Psychological testing and assessment in the 21st century.", "text": "As spin-offs of the current revolution in the cognitive and neurosciences, clinical neuropsychologists in the 21st century will be using biological tests of intelligence and cognition that record individual differences in brain functions at the neuromolecular, neurophysiologic, and neurochemical levels. Assessment of patients will focus more on better use of still intact functions, as well as rehabilitating or bypassing impaired functions, than on diagnosis, as is the focus today. Better developed successors to today's scales for assessing personal competency and adaptive behavior, as well as overall quality of life, also will be in wide use in clinical settings. With more normal individuals, use of new generations of paper-and-pencil inventories, as well as biological measures for assessing differences in interests, attitudes, personality styles, and predispositions, is predicted."} {"_id": "4ccb0d37c69200dc63d1f757eafb36ef4853c178", "title": "Musical genre classification of audio signals", "text": "Musical genres are categorical labels created by humans to characterize pieces of music. A musical genre is characterized by the common characteristics shared by its members. These characteristics typically are related to the instrumentation, rhythmic structure, and harmonic content of the music. Genre hierarchies are commonly used to structure the large collections of music available on the Web. Currently, musical genre annotation is performed manually. Automatic musical genre classification can assist or replace the human user in this process and would be a valuable addition to music information retrieval systems. In addition, automatic musical genre classification provides a framework for developing and evaluating features for any type of content-based analysis of musical signals. In this paper, the automatic classification of audio signals into a hierarchy of musical genres is explored. More specifically, three feature sets for representing timbral texture, rhythmic content and pitch content are proposed. The performance and relative importance of the proposed features is investigated by training statistical pattern recognition classifiers using real-world audio collections. Both whole-file and real-time frame-based classification schemes are described. Using the proposed feature sets, a classification accuracy of 61% for ten musical genres is achieved. This result is comparable to results reported for human musical genre classification."} {"_id": "667ba06f64d3687d290a899deb2fdf7dabe7fd1c", "title": "A fuzzy K-nearest neighbor algorithm", "text": "Classification of objects is an important area of research and application in a variety of fields. In the presence of full knowledge of the underlying probabilities, Bayes decision theory gives optimal error rates. 
In those cases where this information is not present, many algorithms make use of distance or similarity among samples as a means of classification. The K-nearest neighbor decision rule has often been used in these pattern recognition problems. One of the difficulties that arises when utilizing this technique is that each of the labeled samples is given equal importance in deciding the class membership of the pattern to be classified, regardless of its 'typicalness'. The theory of fuzzy sets is introduced into the K-nearest neighbor technique to develop a fuzzy version of the algorithm. Three methods of assigning fuzzy memberships to the labeled samples are proposed, and experimental results and comparisons to the crisp version are presented."} {"_id": "70fa7c2a8a12509da107fe82b04d5e422120984f", "title": "Statistical Learning Algorithms Applied to Automobile Insurance", "text": "We recently conducted a research project for a large North American automobile insurer. This study was the most exhaustive ever undertaken by this particular insurer and lasted over an entire year. We analyzed the discriminating power of each variable used for ratemaking. We analyzed the performance of several models within five broad categories: linear regressions, generalized linear models, decision trees, neural networks and support vector machines. In this paper, we present the main results of this study. We qualitatively compare models and show how neural networks can represent high-order nonlinear dependencies with a small number of parameters, each of which is estimated on a large proportion of the data, thus yielding low variance. We thoroughly explain the purpose of the nonlinear sigmoidal transforms which are at the very heart of neural networks\u2019 performances. The main numerical result is a statistically significant reduction in the out-of-sample mean-squared error using the neural network model and our ability to substantially reduce the median premium by charging more to the highest risks. This in turn can translate into substantial savings and financial benefits for an insurer. We hope this paper goes a long way in convincing actuaries to include neural networks within their set of modeling tools for ratemaking."} {"_id": "f2a2c5129cf5656af7acc7ffaf84c9c9bafe72c5", "title": "Knowledge sharing in open source software communities: motivations and management", "text": ""} {"_id": "ae80f715274308a7564b400081efb74e21e3c1ea", "title": "Hybris: Robust Hybrid Cloud Storage", "text": "Besides well-known benefits, commodity cloud storage also raises concerns that include security, reliability, and consistency. We present the Hybris key-value store, the first robust hybrid cloud storage system, aiming to address these concerns by leveraging both private and public cloud resources.\n Hybris robustly replicates metadata on trusted private premises (private cloud), separately from data which is dispersed (using replication or erasure coding) across multiple untrusted public clouds. 
Hybris maintains metadata stored on private premises on the order of a few dozen bytes per key, avoiding the scalability bottleneck at the private cloud. In turn, the hybrid design allows Hybris to efficiently and robustly tolerate cloud outages, but also potential malice in clouds, without overhead. Namely, to tolerate up to f malicious clouds, in the common case of the Hybris variant with data replication, writes replicate data across f + 1 clouds, whereas reads involve a single cloud. In the worst case, only up to f additional clouds are used. This is considerably better than earlier multi-cloud storage systems that required costly 3f + 1 clouds to mask f potentially malicious clouds. Finally, Hybris leverages strong metadata consistency to guarantee to Hybris applications strong data consistency without any modifications to the eventually consistent public clouds.\n We implemented Hybris in Java and evaluated it using a series of micro and macrobenchmarks. Our results show that Hybris significantly outperforms comparable multi-cloud storage systems and approaches the performance of bare-bone commodity public cloud storage."} {"_id": "18739734d4f69a4a399b7d48a2d242d60206c7aa", "title": "JCLEC: a Java framework for evolutionary computation", "text": "In this paper we describe JCLEC, a Java software system for the development of evolutionary computation applications. This system has been designed as a framework, applying design patterns to maximize its reusability and adaptability to new paradigms with a minimum of programming effort. The JCLEC architecture comprises three main modules: the core contains all abstract type definitions and their implementation; experiments runner is a scripting environment to run algorithms in batch mode; finally, GenLab is a graphical user interface that allows users to configure an algorithm, to execute it interactively and to visualize the results obtained. The use of the JCLEC system is illustrated through the analysis of one case study: the resolution of the 0/1 knapsack problem by means of evolutionary algorithms."} {"_id": "a900eee711ba307e5adfd0408e57987714198382", "title": "Developing Competency Models to Promote Integrated Human Resource Practices", "text": "Today, competencies are used in many facets of human resource management, ranging from individual selection, development, and performance management to organizational strategic planning. By incorporating competencies into job analysis methodologies, the Office of Personnel Management (OPM) has developed robust competency models that can form the foundation for each of these initiatives. OPM has placed these models into automated systems to ensure access for employees, human resources professionals, and managers. Shared access to the data creates a shared frame of reference and a common language of competencies that have provided the basis for competency applications in public sector agencies."} {"_id": "6200f1499cbd41104284946476b082cb754077b1", "title": "Progressive Stochastic Learning for Noisy Labels", "text": "Large-scale learning problems require a plethora of labels that can be efficiently collected from crowdsourcing services at low cost. However, labels annotated by crowdsourced workers are often noisy, which inevitably degrades the performance of large-scale optimizations including the prevalent stochastic gradient descent (SGD). 
Specifically, these noisy labels adversely affect updates of the primal variable in conventional SGD. To address this challenge, we propose a robust SGD mechanism called progressive stochastic learning (POSTAL), which naturally integrates the learning regime of curriculum learning (CL) with the update process of vanilla SGD. Our inspiration comes from the progressive learning process of CL, namely learning from \u201ceasy\u201d tasks to \u201ccomplex\u201d tasks. Through the robust learning process of CL, POSTAL aims to yield robust updates of the primal variable on an ordered label sequence, namely, from \u201creliable\u201d labels to \u201cnoisy\u201d labels. To realize the POSTAL mechanism, we design a cluster of \u201cscreening losses,\u201d which sorts all labels from the reliable region to the noisy region. To sum up, POSTAL using screening losses ensures robust updates of the primal variable on reliable labels first, then on noisy labels incrementally until convergence. In theory, we derive the convergence rate of POSTAL realized by screening losses. Meanwhile, we provide the robustness analysis of representative screening losses. Experimental results on UCI (University of California Irvine) simulated and Amazon Mechanical Turk crowdsourcing data sets show that POSTAL using screening losses is more effective and robust than several existing baselines."} {"_id": "efe03a2940e09547bb15035d35e7e07ed59848bf", "title": "JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS): A Surgical Activity Dataset for Human Motion Modeling", "text": "Dexterous surgical activity is of interest to many researchers in human motion modeling. In this paper, we describe a dataset of surgical activities and release it for public use. The dataset was captured using the da Vinci Surgical System and consists of kinematic and video data from eight surgeons with different levels of skill performing five repetitions of three elementary surgical tasks on a bench-top model. The tasks, which include suturing, knot-tying and needle-passing, are standard components of most surgical skills training curricula. In addition to kinematic and video data captured from the da Vinci Surgical System, we are also releasing manual annotations of surgical gestures (atomic activity segments), surgical skill using global rating scores, a standardized cross-validation experimental setup, and C++/Matlab toolkits for analyzing surgical gestures using hidden Markov models and using linear dynamical systems. We refer to the dataset as the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) to indicate the collaboration between Johns Hopkins University (JHU) and Intuitive Surgical Inc. (ISI), Sunnyvale, CA, on collecting these data."} {"_id": "d7544535a3424d946d312f19434d9389ffd0c32b", "title": "Robust model predictive control for an uncertain smart thermal grid", "text": "The focus of this paper is on modeling and control of Smart Thermal Grids (STGs) in which the uncertainties in the demand and/or supply are included. We solve the corresponding robust model predictive control (MPC) optimization problem using mixed-integer-linear programming techniques to provide a day-ahead prediction for the heat production in the grid. In an example, we compare the robust MPC approach with the robust optimal control approach, in which the day-ahead production plan is obtained by optimizing the objective function for the entire day at once.
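To make the contrast between receding-horizon MPC and one-shot day-ahead optimal control concrete, here is a minimal skeleton; `solve_horizon` is a hypothetical stand-in for the paper's robust mixed-integer-linear program, not its actual formulation:

```python
# Minimal receding-horizon (MPC) skeleton versus solving the whole day at
# once. solve_horizon is a trivial placeholder for the real robust solver.
import numpy as np

def solve_horizon(demand_forecast):
    # Placeholder "solver": produce exactly the forecast demand.
    return np.array(demand_forecast, dtype=float)

def mpc_day(demand_forecast, horizon=6):
    T = len(demand_forecast)
    plan = []
    for t in range(T):
        window = demand_forecast[t : min(t + horizon, T)]
        u = solve_horizon(window)   # re-optimize over the sliding window
        plan.append(u[0])           # apply only the first decision, then shift
    return np.array(plan)

def optimal_control_day(demand_forecast):
    return solve_horizon(demand_forecast)   # one shot for the entire day

demand = [3.0, 3.5, 4.0, 4.2, 3.8, 3.1]
print(mpc_day(demand), optimal_control_day(demand))
```

The re-optimization at every step is what lets MPC absorb demand uncertainty at the price of longer total computation time, which matches the trade-off the abstract reports.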
There, we show that the robust MPC approach successfully keeps the supply-demand balance in the STG while satisfying the constraints of the production units in the presence of uncertainties in the heat demand. Moreover, we see that despite the longer computation time, the performance of the robust MPC controller is considerably better than that of the robust optimal controller."} {"_id": "759d9a6c9206c366a8d94a06f4eb05659c2bb7f2", "title": "Toward Open Set Recognition", "text": "To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of \u201cclosed set\u201d recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is \u201copen set\u201d recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel \u201c1-vs-set machine,\u201d which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks."} {"_id": "00960cb3f5a74d23eb5ded93f1aa717b9c6e6851", "title": "Input Warping for Bayesian Optimization of Non-stationary Functions", "text": "Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in \u201clog-space,\u201d to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space.
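A minimal sketch of the Beta-CDF input warping idea follows; in the actual method the warp parameters would be learned jointly with the GP hyperparameters, whereas here they are fixed for illustration:

```python
# Beta-CDF input warping: inputs in [0, 1] are passed through a monotone
# bijection before the GP sees them, so a spatially varying length scale
# looks stationary in the warped space.
from scipy.stats import beta
import numpy as np

def warp(x, a, b):
    # The Beta CDF is a bijection on [0, 1]; (a, b) control its shape
    # (e.g., a log-like squash) and are treated as hyperparameters
    # optimized alongside the GP marginal likelihood.
    return beta.cdf(x, a, b)

x = np.linspace(0.0, 1.0, 5)
print(warp(x, 0.5, 2.0))   # stretches the region near 0, compresses near 1
```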
On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably."} {"_id": "6e6740bc09ca3fe013ff9029cca2830aec834829", "title": "CVAP: Validation for Cluster Analyses", "text": "Evaluation of clustering results (or cluster validation) is an important and necessary step in cluster analysis, but it is often time-consuming and complicated work. We present a visual cluster validation tool, the Cluster Validity Analysis Platform (CVAP), to facilitate cluster validation. The CVAP provides the necessary methods (e.g., many validity indices, several clustering algorithms and procedures) and an analysis environment for clustering, evaluation of clustering results, estimation of the number of clusters, and performance comparison among different clustering algorithms. It can help users accomplish their clustering tasks faster and more easily and help achieve good clustering quality when there is little prior knowledge about the cluster structure of a data set."} {"_id": "fcc7541e27ee3ef817b21a7fcd362030cbaa3f69", "title": "A New High-Temperature Superconducting Vernier Permanent-Magnet Machine for Wind Turbines", "text": "A vernier permanent magnet (VPM) machine using high-temperature superconducting (HTS) winding for dual excitation is proposed for wind power generation. The proposed VPM machine adopts the parallel hybrid excitation configuration: the stator consists of an identical tooth pair connected together by the stator yoke, and the rotor contains two sets of consequent-pole permanent magnets joined together by the rotor yoke. The armature winding is located at the stator main slots while the field winding using HTS coils is wound on the stator yoke. With the benefit of the HTS field winding, the air-gap flux can be flexibly adjusted by controlling the magnitude of the dc current. Therefore, the machine performance is improved to cope with different operating conditions."} {"_id": "9015a38343d71b7972af6efc9c2a224807662e8f", "title": "The Neglect of Power in Recent Framing Research", "text": "This article provides a critique of recent developments in research examining media frames and their influence. We contend that a number of trends in framing research have neglected the relationship between media frames and broader issues of political and social power. This neglect is a product of a number of factors, including conceptual problems in the definition of frames, the inattention to frame sponsorship, the failure to examine framing contests within wider political and social contexts, and the reduction of framing to a form of media effects. We conclude that framing research needs to be linked to the political and social questions regarding power central to the media hegemony thesis, and illustrate this focus by exploring how framing research can contribute to an understanding of the interaction between social movements and the news media."} {"_id": "f4a653d82471463564a0ce966ef8df3e7d9aeea8", "title": "Research on the Matthews Correlation Coefficients Metrics of Personalized Recommendation Algorithm Evaluation", "text": "Personalized recommendation systems can improve personalized service for network users and alleviate the problem of information overload on the Internet. As is well known, the key to a successful recommendation system is the performance of its recommendation algorithm.
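As a standalone reference for the metric named in this record's title (this is a generic illustration, not the paper's Mahout-based setup), the Matthews correlation coefficient for binary outcomes is computed directly from the confusion matrix; the counts below are made-up:

```python
# Matthews correlation coefficient from a binary confusion matrix:
# MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
# ranging from -1 (total disagreement) to +1 (perfect prediction).
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# e.g., recommended-vs-relevant counts from a hypothetical hold-out set
print(mcc(tp=40, tn=30, fp=10, fn=20))  # ~0.408
```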
When scholars put forward new recommendation algorithms, they claim that the new algorithms improve on previous ones in some respects, so evaluation metrics are needed to assess algorithm performance. Scholars have not fully understood the evaluation mechanism of recommendation algorithms and have mainly emphasized specific evaluation metrics such as Accuracy and Diversity. Moreover, the academic community has not established a complete, unified, and credible evaluation system for recommendation algorithms, so performing this evaluation work objectively and reasonably remains a challenging task. In this article, we discuss the existing evaluation metrics with their respective advantages and disadvantages. We then propose using the Matthews Correlation Coefficient to evaluate a recommendation algorithm\u2019s performance, building on an open-source project called Mahout, which provides a rich set of components for constructing classic recommendation algorithms. The results of the experiments demonstrate the applicability of the Matthews correlation coefficient to the comparative evaluation of recommendation algorithms."} {"_id": "e9b217b79401fde11ec0dc862f22179868721b0a", "title": "Implementation of a fuzzy logic speed controller for a permanent magnet dc motor using a low-cost Arduino platform", "text": "Implementation of a Fuzzy Logic speed controller for a Permanent Magnet DC (PMDC) motor using a low-cost Arduino interfaced with a standard xPC Target MATLAB environment is demonstrated. The Arduino solution is acceptable at kHz sampling frequencies. The study of speed control of a PMDC motor based on an Arduino and an L298N H-Bridge using an artificial intelligence control method, such as Fuzzy PID with only four rules, has been validated by experiments on the physical system."} {"_id": "f836b7315ac839f9728cc58d0c9ee6e438ebefa9", "title": "An embedded, GSM based, multiparameter, realtime patient monitoring system and control \u2014 An implementation for ICU patients", "text": "A wireless remote patient monitoring and control system using feedback and GSM technology is used to monitor the different parameters of an ICU patient remotely, and control over medicine dosage is also provided. Vital parameters can be measured remotely, and developing risk situations can be conveyed to the physician through alarm-triggering systems in order to initiate the proper control actions. The implemented system is a reliable and efficient real-time remote patient monitoring system that can play a vital role in providing better patient care. This system enables expert doctors to monitor vital parameters, viz. body temperature, blood pressure and heart rate, of patients in remote areas of the hospital, and also lets a doctor monitor a patient while out of the premises. The system additionally provides feedback to control the dosage of medicine to the patient, as guided remotely by the doctor in response to the health-condition message received. Mobile phones transfer measured parameters via SMS to clinicians for further analysis or diagnosis. Timely conveyance of the real-time monitored parameters to the doctor and of the control actions taken by the doctor is given high priority, and this is the distinguishing feature of the developed system.
The system also allows the doctor to review the patient's previous history from data stored in the monitoring device's built-in memory. In addition, data can be sent to several doctors in case one doctor fails to respond urgently."} {"_id": "18faf4efa0e186e39658ab97676accb26b09733d", "title": "Exploring the relationship between knowledge management practices and innovation performance", "text": "The process of innovation depends heavily on knowledge, and the management of knowledge and human capital should be an essential element of running any type of business. Recent research indicates that organisations are not consistent in their approach to knowledge management (KM), with KM approaches being driven predominantly within an information technology (IT) or humanist framework, with little if any overlap. This paper explores the relationship between KM approaches and innovation performance through a preliminary study focusing on the manufacturing industry. The most significant implication that has emerged from the study is that managers in manufacturing firms should place more emphasis on human resource management (HRM) practices when developing innovation strategies for product and process innovations. The study shows that KM contributes to innovation performance when a simultaneous approach of \u201csoft HRM practices\u201d and \u201chard IT practices\u201d is implemented."} {"_id": "4e79c933ed0ccc1bef35605dec91ec87089c18bb", "title": "Schedulability Analysis Using Uppaal: Herschel-Planck Case Study", "text": "We propose a modeling framework for performing schedulability analysis using the Uppaal real-time model-checker [2]. The framework is inspired by a case study where schedulability analysis of a satellite system is performed. The framework assumes single-CPU hardware where a fixed-priority preemptive scheduler is used in combination with two resource-sharing protocols, and voluntary task suspension is also considered. The contributions include the modeling framework, its application to an industrial case study and a comparison of results with classical response time analysis."} {"_id": "2e9aea8b1a8d2e9725cf71253deaa2a406dfd7cc", "title": "Security trade-offs in Cloud storage systems", "text": "Securing software systems is of paramount importance today. This is especially true for cloud systems, as they can be attacked nearly anytime from everywhere over the Internet. Ensuring security in general, however, typically has a negative impact on performance and usability, and increases the system\u2019s complexity. For cloud systems, which are built for high performance and availability as well as elastic scalability, security may therefore annihilate many of these original quality properties. Consequently, security, especially for cloud systems, can be understood as a trade-off problem where security mechanisms, applied to protect the system from specific threats and to achieve specific security goals, trade in security for other quality properties. For Cloud Storage Systems (CSS)\u2014i.e., distributed storage systems that replicate data over a cluster of nodes in order to manage huge data amounts, highly volatile query load (elastic scalability), and extraordinary needs for availability and fault-tolerance\u2014this trade-off problem is particularly prominent.
Even on their own, the different original quality properties of CSS cannot all be provided at the same time and thus lead to fundamental trade-offs such as the trade-off between consistency, availability, and partition tolerance (see, e.g., [53]). Considering security as an additional desired quality property of such systems piggybacks yet further trade-offs that must be decided and managed. In this thesis, we focus on the trade-offs between security and performance in CSS. In order not to contradict the original design goals of CSS, a sensible management of these trade-offs in CSS requires a high degree of understanding of the relationships between security and performance in the security mechanisms of a specific CSS. Otherwise, the result can be a badly configured CSS that poses a security risk or consumes resources unnecessarily. This thesis, hence, aims at enhancing the understanding of the trade-offs between security and performance in CSS as well as at improving the management of these trade-offs in such systems. For this, we present three independent contributions in this thesis. The first contribution intends to improve the overall understanding of security in and security requirements for CSS. We present two reference usage models that support a security engineer in understanding the general usage of cloud storage"} {"_id": "b53e4c232833a8e663a9cf15dcdd050ff801c05c", "title": "Detecting Insider Threats Using RADISH: A System for Real-Time Anomaly Detection in Heterogeneous Data Streams", "text": "We present a scalable system for high-throughput real-time analysis of heterogeneous data streams. Our architecture enables incremental development of models for predictive analytics and anomaly detection as data arrives into the system. In contrast with batch data-processing systems, such as Hadoop, that can have high latency, our architecture allows for ingest and analysis of data on the fly, thereby detecting and responding to anomalous behavior in near real time. This timeliness is important for applications such as insider threat, financial fraud, and network intrusions. We demonstrate an application of this system to the problem of detecting insider threats, namely, the misuse of an organization's resources by users of the system, and present results of our experiments on a publicly available insider threat dataset."} {"_id": "4a8c571c9fd24376556c41c417bba0d6be0dc9d3", "title": "Accept or appropriate? A design-oriented critique on technology acceptance models", "text": "Technology acceptance models are tools for predicting users\u2019 reception of technology by measuring how they rate statements on a questionnaire scale. It has been claimed that these tools help to assess the social acceptance of a final IT product when its development is still underway. However, their use is not without problems. This paper highlights some of the underlying shortcomings that arise particularly from a simplistic conception of \u201cacceptance\u201d that does not recognize the possibility that users can invent new uses for (i.e., appropriate) technology in many situations. This lack of recognition can easily lead one to assume that users are passive absorbers of technological products, so that every user would adopt the same usages irrespective of the context of use, the differences in work tasks, or the characteristics of interpersonal cooperation.
In light of recent research on appropriation, technology use must actually be understood in a more heterogeneous way, as a process through which different users find the product useful in different ways. This paper maintains that if, in fact, a single technology can be used for multiple purposes, then subscribing to the thinking arising from technology acceptance model research may actually lead one into suboptimal design solutions and thus also compromise user acceptance. Therefore, this paper also presents some starting points for designing specifically for easier technology appropriation."} {"_id": "44b8c1fbe10cdff678969a223139d339d1f08f3e", "title": "Privacy protection based access control scheme in cloud-based services", "text": "With the rapid development of computer technology, cloud-based services have become a hot topic. They not only provide users with convenience, but also bring many security issues, such as data sharing and privacy issues. In this paper, we present an access control system with privilege separation based on privacy protection (PS-ACS). In the PS-ACS scheme, we logically divide users into a private domain (PRD) and a public domain (PUD). In the PRD, to achieve read and write access permissions, we adopt Key-Aggregate Encryption (KAE) and the Improved Attribute-based Signature (IABS) respectively. In the PUD, we construct a new multi-authority ciphertext policy attribute-based encryption (CP-ABE) scheme with efficient decryption to avoid the issues of single point of failure and complicated key distribution, and design an efficient attribute revocation method for it. The analysis and simulation results show that our scheme is feasible and superior in protecting users' privacy in cloud-based services."} {"_id": "aa636c3653b3b5557759cf7fb2cfe5eccf883fa1", "title": "Liquid Leakage Thin Film-Tape Sensor", "text": "A liquid leakage sensor on adhesive tape with an alarm function has been developed. The film sensor tape consists of three layers: a base film layer, a substrate film layer, and a protective film layer. Three conductive lines and one resistive line are printed on the substrate film layer with a nano-sized silver conductive ink by the electronic gravure printing method. The location of a liquid leak can be monitored accurately by applying a pulse voltage to the conductive and resistive lines in a periodic polarity-switching mode. Leakage tests with various electrolyte liquids have been performed, and a leak-position accuracy of about 0.25% has been achieved on a 200-meter-long tape."} {"_id": "7113af6cefeece031e00aac386c35fa17a87ca01", "title": "The study on feature selection in customer churn prediction modeling", "text": "When a customer churn prediction model is built, a large number of features place heavy burdens on the model and can even decrease its accuracy. This paper reviews feature selection, compares algorithms from different fields, and designs a feature selection framework for customer churn prediction. Based on this framework, experiments are conducted on the structured module with a telecom operator's marketing data to verify the framework's efficiency."} {"_id": "a8d18bf53eb945d0179fd33d237a014470def8c5", "title": "Millimeter-wave large-scale phased-arrays for 5G systems", "text": "This talk will present our latest work on silicon RFICs for phased-array applications with emphasis on very large chips with built-in self-test capabilities for 5G systems.
SiGe is shown to be ideal for mm-wave applications due to its high temperature performance (automotive radars, base-stations, defense systems, etc.) and lower power consumption. These chips drastically reduce the cost of microwave and millimeter-wave phased arrays by combining many elements on the same chip, together with digital control and, in some cases, high-efficiency antennas. The phased-array chips also result in an easier packaging scheme using either a multi-layer PCB or wafer-level packages. We believe that this family of chips will be essential for millimeter-wave 5G communication systems."} {"_id": "39b58ef6487c893219c77c61c762eee5694d0e36", "title": "SLIQ: A Fast Scalable Classifier for Data Mining", "text": "Classification is an important problem in the emerging field of data mining. Although classification has been studied extensively in the past, most of the classification algorithms are designed only for memory-resident data, thus limiting their suitability for data mining large data sets. This paper discusses issues in building a scalable classifier and presents the design of SLIQ, a new classifier. SLIQ is a decision tree classifier that can handle both numeric and categorical attributes. It uses a novel pre-sorting technique in the tree-growth phase. This sorting procedure is integrated with a breadth-first tree growing strategy to enable classification of disk-resident datasets. SLIQ also uses a new tree-pruning algorithm that is inexpensive, and results in compact and accurate trees. The combination of these techniques enables SLIQ to scale for large data sets and classify data sets irrespective of the number of classes, attributes, and examples (records), thus making it an attractive tool for data mining."} {"_id": "d26b70479bc818ef7079732ba014e82368dbf66f", "title": "Sampling-Based Estimation of the Number of Distinct Values of an Attribute", "text": "We provide several new sampling-based estimators of the number of distinct values of an attribute in a relation. We compare these new estimators to estimators from the database and statistical literature empirically, using a large number of attribute-value distributions drawn from a variety of real-world databases. This appears to be the first extensive comparison of distinct-value estimators in either the database or statistical literature, and is certainly the first to use highly skewed data of the sort frequently encountered in database applications. Our experiments indicate that a new \u201chybrid\u201d estimator yields the highest precision on average for a given sampling fraction. This estimator explicitly takes into account the degree of skew in the data and combines a new \u201csmoothed jackknife\u201d estimator with an estimator due to Shlosser. We investigate how the hybrid estimator behaves as we scale up the size of the database."} {"_id": "1f25ed3c9707684cc0cdf3e8321c791bc7164147", "title": "SPRINT: A Scalable Parallel Classifier for Data Mining", "text": "Classification is an important data mining problem. Although classification is a well-studied problem, most of the current classification algorithms require that all or a portion of the entire dataset remain permanently in memory. This limits their suitability for mining over large databases. We present a new decision-tree-based classification algorithm, called SPRINT, that removes all of the memory restrictions and is fast and scalable.
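A toy illustration of the pre-sorting idea behind SLIQ and SPRINT (not the papers' disk-based implementations): each numeric attribute is sorted once, after which every candidate split point is scored in a single scan by maintaining running class counts; the Gini index is used here as the split criterion:

```python
# Toy version of the SLIQ/SPRINT pre-sorting trick: sort a numeric
# attribute once, then evaluate all candidate splits in one pass.
from collections import Counter

def gini(counts, n):
    return 1.0 - sum((c / n) ** 2 for c in counts.values()) if n else 0.0

def best_split(values, labels):
    order = sorted(range(len(values)), key=lambda i: values[i])  # presort once
    left, right = Counter(), Counter(labels)
    best = (float("inf"), None)
    for pos, i in enumerate(order[:-1], start=1):
        left[labels[i]] += 1          # move one record to the left side
        right[labels[i]] -= 1
        score = (pos * gini(left, pos) +
                 (len(values) - pos) * gini(right, len(values) - pos)) / len(values)
        if score < best[0]:
            best = (score, (values[i] + values[order[pos]]) / 2)
    return best  # (weighted Gini, split threshold)

print(best_split([2.0, 7.0, 3.0, 9.0], ["a", "b", "a", "b"]))  # -> (0.0, 5.0)
```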
The algorithm has also been designed to be easily parallelized, allowing many processors to work together to build a single consistent model. This parallelization, also presented here, exhibits excellent scalability as well. The combination of these characteristics makes the proposed algorithm an ideal tool for data mining."} {"_id": "7c3a4b84214561d8a6e4963bbb85a17a5b1e003a", "title": "Programs for Machine Learning. Part I", "text": ""} {"_id": "84dae6a2870c68005732b9db6890f375490f2d4e", "title": "Inferring Decision Trees Using the Minimum Description Length Principle", "text": "This paper concerns methods for inferring decision trees from examples for classification problems. The reader who is unfamiliar with this problem may wish to consult J. R. Quinlan\u2019s paper (1986), or the excellent monograph by Breiman et al. (1984), although this paper will be self-contained. This work is inspired by Rissanen\u2019s work on the Minimum Description Length Principle (or MDLP for short) and on his related notion of the stochastic complexity of a string (Rissanen, 1986b). The reader may also want to refer to related work by Boulton and Wallace (1968, 1973a, 1973b), Georgeff and Wallace (1984), and Hart (1987). Roughly speaking, the minimum description length principle states that the best \u201ctheory\u201d to infer from a set of data is the one which minimizes the sum of"} {"_id": "1d093148752f8a57a8989c35ee53adb331dfe485", "title": "A low dropout, CMOS regulator with high PSR over wideband frequencies", "text": "Modern system-on-chip (SoC) environments are swamped in high frequency noise that is generated by RF and digital circuits and propagated onto supply rails through capacitive coupling. In these systems, linear regulators are used to shield noise-sensitive analog blocks from high frequency fluctuations in the power supply. The paper presents a low dropout regulator that achieves power supply rejection (PSR) better than -40 dB over the entire frequency spectrum. The system has an output voltage of 1.0 V and a maximum current capability of 10 mA. It consists of operational amplifiers (op amps), a bandgap reference, a clock generator, and a charge pump and has been designed and simulated using BSIM3 models of a 0.5 \u03bcm CMOS process obtained from MOSIS."} {"_id": "18b2d2ac7dbc986c6e9db68d1e53860a9ad20844", "title": "Optical Fiber Sensors for Aircraft Structural Health Monitoring", "text": "Aircraft structures require periodic and scheduled inspection and maintenance operations due to their special operating conditions and the principles of design employed to develop them. Therefore, structural health monitoring has a great potential to reduce the costs related to these operations. Optical fiber sensors applied to the monitoring of aircraft structures provide some advantages over traditional sensors. Several practical applications for structures and engines that we have been working on are reported in this article. Fiber Bragg gratings have been analyzed in detail, because they have proved to constitute the most promising technology in this field, and two different alternatives for strain measurements are also described.
With regard to engine condition evaluation, we present some results obtained with a reflected intensity-modulated optical fiber sensor for tip clearance and tip timing measurements in a turbine assembled in a wind tunnel."} {"_id": "cbc1395a4bb8a7540749e056753c139c2b6fff15", "title": "3DMatch: Learning the Matching of Local 3D Geometry in Range Scans", "text": "Establishing correspondences between 3D geometries is essential to a large variety of graphics and vision applications, including 3D reconstruction, localization, and shape matching. Despite significant progress, geometric matching on real-world 3D data is still a challenging task due to the noisy, low-resolution, and incomplete nature of scanning data. These difficulties limit the performance of current state-of-the-art methods which are typically based on histograms over geometric properties. In this paper, we introduce 3DMatch, a data-driven local feature learner that jointly learns a geometric feature representation and an associated metric function from a large collection of real-world scanning data. We represent 3D geometry using accumulated distance fields around key-point locations. This representation is suited to handle noisy and partial scanning data, and concurrently supports deep learning with convolutional neural networks directly in 3D. To train the networks, we propose a way to automatically generate correspondence labels for deep learning by leveraging existing RGB-D reconstruction algorithms. In our results, we demonstrate that we are able to outperform state-of-the-art approaches by a significant margin. In addition, we show the robustness of our descriptor in a purely geometric sparse bundle adjustment pipeline for 3D reconstruction. All of our code, datasets, and trained models are publicly available: http://3dmatch.cs.princeton.edu/"} {"_id": "c8cff62987122b3a20cc73e644093d6ab6e7bc0a", "title": "A Hybrid Approach to Grapheme to Phoneme Conversion in Assamese", "text": "Assamese is one of the low-resource Indian languages. This paper implements both rule-based and data-driven grapheme to phoneme (G2P) conversion systems for Assamese. The rule-based system is used as the baseline, which yields a word error rate (WER) of 35.3%. The data-driven systems are implemented using state-of-the-art sequence learning techniques: i) the Joint-Sequence Model (JSM), ii) Recurrent Neural Networks with LSTM cells (LSTM-RNN) and iii) bidirectional LSTM (BiLSTM). The BiLSTM yields the lowest WER, i.e., 18.7%, which is an absolute 16.6% improvement on the baseline system. We additionally implement the rules of syllabification for Assamese. The surface output is generated in two forms, namely i) a phonemic sequence with syllable boundaries, and ii) a phonemic sequence only. The output of the BiLSTM is fed as input to the Hybrid system. The Hybrid system syllabifies the input phonemic sequences to apply the vowel harmony rules. It also applies the rules of schwa-deletion as well as some rules in which the consonants change their form in clusters. The WER of the Hybrid system is 17.3%, an absolute 1.4% improvement over the BiLSTM-based G2P."} {"_id": "785595f3a813f55f5bc7afeb57afc8d7e5a9e46b", "title": "An innovative cost-effective smart meter with embedded non intrusive load monitoring", "text": "Most of the time, domestic energy usage is invisible to the users.
Households have only a vague idea about the amount of energy they are using for different purposes during the day, and the electric bill is usually the only monthly/bimonthly feedback. Making the energy consumption promptly visible can make a difference and foster truly efficient energy usage in a city. The emerging smart metering infrastructure has the potential to address some of these goals by providing both households and electric utilities useful information about energy consumption, with a detailed breakdown down to single-appliance contribution. This paper presents a novel ultra-low-cost smart metering system capable of providing domestic AC load identification, by implementing both basic/typical smart meter measurement and a simple yet effective load disaggregation algorithm, which can detect and monitor domestic loads starting from a single point of measurement."} {"_id": "dba7d665367dacedc1915787f3aa158fe24aa4f7", "title": "A Stable and Accurate Marker-Less Augmented Reality Registration Method", "text": "Markerless Augmented Reality (AR) registration using the standard homography matrix is unstable, and for image-based registration it has very low accuracy. In this paper, we present a new method to improve the stability and the accuracy of marker-less registration in AR. Based on the Visual Simultaneous Localization and Mapping (V-SLAM) framework, our method adds a three-dimensional dense cloud processing step to the state-of-the-art ORB-SLAM in order to deal mainly with point cloud fusion and object recognition. Our algorithm for the object recognition process acts as a stabilizer to improve the registration accuracy during the model-to-scene transformation process. This has been achieved by integrating the Hough voting algorithm with the Iterative Closest Point (ICP) method. Our proposed AR framework also further increases the registration accuracy by using integrated camera poses in the registration of virtual objects. Our experiments show that the proposed method not only accelerates the speed of camera tracking with a standard SLAM system, but also effectively identifies objects and improves the stability of markerless augmented reality applications."} {"_id": "841c16f0e724fca3209a7a83fae94bec1eaae01c", "title": "A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues", "text": "The best-matching segment is then assigned to this ground-truth bounding box for the evaluation metric described in our paper, as well as for the metric described below. Some previous works have evaluated 3D segmentation using the intersection-over-union metric on 3D points [5]. Note that our method segments the entire scene, as opposed to the method of Wang et al. [5], so the evaluation metric from Wang et al. [5] does not directly apply. However, we could modify the intersection-over-union metric [5] as follows: we can compute the fraction of ground-truth bounding boxes which have an intersection-over-union score less than a threshold \u03c4IOU, as"} {"_id": "1de62bb648cf86fe0c1165a370aa8539ba28187e", "title": "Mode-Seeking on Hypergraphs for Robust Geometric Model Fitting", "text": "In this paper, we propose a novel geometric model fitting method, called Mode-Seeking on Hypergraphs (MSH), to deal with multi-structure data even in the presence of severe outliers.
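Looking back at the marker-less AR record above, which stabilizes registration by combining Hough voting with ICP: for intuition, here is a bare-bones point-to-point ICP iteration (a generic textbook sketch, not that paper's implementation):

```python
# Bare-bones point-to-point ICP: alternate nearest-neighbor matching with
# the closed-form SVD (Kabsch) solution for the best rigid transform.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                 # nearest-neighbor matches
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step            # apply incremental transform
        R, t = R_step @ R, R_step @ t + t_step   # accumulate total transform
    return R, t

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
R, t = icp(pts + np.array([0.1, 0.0, 0.0]), pts)  # recovers the 0.1 shift
```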
The proposed method formulates geometric model fitting as a mode seeking problem on a hypergraph in which vertices represent model hypotheses and hyperedges denote data points. MSH intuitively detects model instances by a simple and effective mode seeking algorithm. In addition to the mode seeking algorithm, MSH includes a similarity measure between vertices on the hypergraph and a \"weight-aware sampling\" technique. The proposed method not only alleviates sensitivity to the data distribution, but is also scalable to large-scale problems. Experimental results further demonstrate that the proposed method has significant superiority over the state-of-the-art fitting methods on both synthetic data and real images."} {"_id": "0859b73ef7bd5757e56a429d862a3696029ec20b", "title": "Name Translation in Statistical Machine Translation - Learning When to Transliterate", "text": "We present a method to transliterate names in the framework of end-to-end statistical machine translation. The system is trained to learn when to transliterate. For Arabic to English MT, we developed and trained a transliterator on a bitext of 7 million sentences and Google\u2019s English terabyte ngrams and achieved better name translation accuracy than 3 out of 4 professional translators. The paper also includes a discussion of challenges in name translation evaluation."} {"_id": "647ed48ec0358b39ee796ab085aa8fb530a98a50", "title": "Suppressing the azimuth ambiguities in synthetic aperture radar images", "text": "This paper proposes a method for removing the azimuth ambiguities from synthetic aperture radar (SAR) images. The basic idea is to generate a two-dimensional reference function for SAR processing which provides, in addition to the matched filtering for the unaliased part of the received signal, the deconvolution of the azimuth ambiguities. This approach corresponds to an ideal filter concept, where an ideal impulse response function is obtained even in the presence of severe phase and amplitude errors. Modeling the sampled azimuth signal shows that the absolute phase value of the ambiguities cannot easily be determined due to their undersampling. The concept of the ideal filter is then extended to accommodate the undefined phase of the ambiguities and also the fading of the azimuth signal. Raw data from the E-SAR system have been used to verify the improvement in image quality by using the new method. It has a substantial significance in enabling the pulse-repetition frequency (PRF) constraints in the SAR system design to be relaxed and also for improving SAR image quality and interpretation."} {"_id": "f2fa76d1fe666fb62c56b9a33732965563d04fe3", "title": "Swara Histogram Based Structural Analysis And Identification Of Indian Classical Ragas", "text": "This work is an attempt towards robust automated analysis of Indian classical ragas through machine learning and signal processing tools and techniques. Indian classical music has a definite hierarchical structure where macro level concepts like thaats and ragas are defined in terms of micro entities like swaras and shrutis. Swaras or notes in Indian music are defined only in terms of their relation to one another (akin to the movable do-re-mi-fa system), and an inference must be made from patterns of sounds, rather than their absolute frequency structure. We have developed methods to perform scale-independent raga identification using a random forest classifier on swara histograms and achieved state-of-the-art results for the same.
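As a self-contained illustration of the classifier stage just described (the swara-histogram extraction from audio is outside this sketch, so random vectors stand in for real histograms):

```python
# Sketch of the classification stage: a random forest over 12-bin swara
# (pitch-class) histograms. Real histograms would come from pitch tracking
# of the recordings; Dirichlet samples stand in for them here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.dirichlet(np.ones(12), size=200)   # 200 fake normalized histograms
y = rng.integers(0, 8, size=200)           # 8 raga labels, as in the paper

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], y[:150])
print(clf.score(X[150:], y[150:]))  # meaningless on random data; shows the API
```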
The approach is robust as it directly works on partly noisy raga recordings from YouTube videos without knowledge of the scale used, whereas previous work in this direction often uses audio generated in a controlled environment with the desired scale. The current work demonstrates the approach for 8 ragas, namely Darbari, Khamaj, Malhar, Sohini, Bahar, Basant, Bhairavi and Yaman, and we have achieved an average identification accuracy of 94.28% through the framework."} {"_id": "35a846c04ce07b76e4deffac1d4cf1d9ebf1de8f", "title": "Monitoring of Micro-sleep and Sleepiness for the Drivers Using EEG Signal", "text": "[Front matter only: table of contents listing Abstract; Acknowledgement; List of Figures; List of Tables; List of Abbreviations; Introduction; Background; Methods (Pre-processing, Feature Extraction, Support Vector Machine); Related Work (Extraction of Feature Information in EEG Signal; Sleepiness and health problems in drivers with nocturnal and shift work; SVM Regression Training with SMO); Evaluation and Conclusion; Future Work; References.]"} {"_id": "cf4453f13598f21a8e4137fa611d54ecaf6f5acf", "title": "Improving Liver Lesion Detection using CT-to-PET FCN and GAN Networks", "text": "We present a brief description of our recent work that was submitted for journal publication introducing a novel system for generation of virtual PET images using CT scans [1]. We combine a fully convolutional network (FCN) with a conditional generative adversarial network (GAN) to synthesize PET images from given input CT images. Clinically, such solutions may reduce the need for the more expensive and radioactive PET/CT scan. Quantitative evaluation was conducted using existing lesion-detection software, with the synthesized PET combined as a false-positive reduction layer for the detection of malignant lesions in the liver. Current results look promising, showing a reduction in the average false positives per case while keeping the same true-positive rate. The suggested solution is comprehensive and can be expanded to additional body organs and different modalities."} {"_id": "46195459752594da1b3d17885f89e396bad5225f", "title": "Challenges and Solutions for End-to-End Time-Sensitive Optical Networking", "text": "Time-sensitive applications rely on deterministic low-latency, highly reactive, and trustable networks, as opposed to today's best-effort Internet. We review existing solutions and propose in this paper the Deterministic Dynamic Network as a solution for end-to-end time-sensitive optical networking."} {"_id": "1497843154ce93565043acd98061742b1420ec21", "title": "Childhood Adversity and Factors Determining Resilience among Undergraduate Students", "text": "Childhood adversity experiences, when not properly managed, may result in antisocial behaviour, health risks or psychological problems such as severe depression and suicidal behavior in undergraduate students. Hence this study examined childhood adversity experiences and the factors determining resilience among undergraduate students in Oyo State. The study adopted a descriptive survey design; a multi-stage random sampling technique was used to select 341 undergraduate students of Ladoke Akintola University of Technology, Ogbomosho, Oyo State. Two research instruments were used, measuring the childhood adversity scale and the factors determining resilience (protective family factors, birth order, community protective factors). The study formulated four research hypotheses, which were tested with Pearson product-moment correlation analysis and multiple regression analysis at a coefficient level of 0.05. The results revealed that the level of Adverse Childhood Experience (ACE) among respondents is slightly low, with half of the respondents between the ages of 18 and 20 years. The prominent ACEs identified by respondents were physical assault, homes with incidence of substance abuse, being a victim of sexual abuse, and humiliation by parents. The level of resilience among the respondents was moderate. There was a positive significant relationship between childhood adversity and the lifetime resilience of undergraduate students in Oyo State (r = 0.272); the protective family factor was also associated with lifetime resilience (r = 0.018); there was a significant relationship between birth order and lifetime resilience (r = 0.794); and there was a significant relationship between the community protective factor and lifetime resilience (r = 0.835).
In order of magnitude of relative contribution to the resilience of the undergraduate students, the protective family factor (\u03b2 = 0.987), the community protective factor (\u03b2 = 0.762), childhood adversity (\u03b2 = 0.724), and birth order (\u03b2 = 0.687) all made significant relative contributions. Educational institutions should tackle issues in educator-student relationships through various channels, especially social work, counseling units and student affairs departments. There is a need for an initial psychosocial assessment of freshers to test for resilience and adverse childhood experiences; mental health services should be contextualized within health care services for undergraduate students; and parents should be educated on the impact of childhood adversity on the wellbeing and academic performance of undergraduate students in Nigeria."} {"_id": "76c87ec44fc5dc96bc445abe008deaf7c97c9373", "title": "A 79 GHz differentially fed grid array antenna", "text": "This paper presents a planar grid array antenna with a 100 \u2126 differential microstrip line feed on a single layer of standard soft substrate. The antenna operates in the 79 GHz frequency band for automotive radar applications. Its single row design offers a narrow beam in elevation and a wide beam in azimuth. Together with the differential microstrip line feeding, the antenna is suitable for differential multichannel MMICs in the frequency range."} {"_id": "234d205f50ad0ef11863a7b758ff2fe70d7ec4a8", "title": "The Santa Claus problem", "text": "We consider the following problem: Santa Claus has n presents that he wants to distribute among m kids. Each kid has an arbitrary value for each present. Let p_{ij} be the value that kid i has for present j. Santa's goal is to distribute presents in such a way that the least lucky kid is as happy as possible, i.e., he tries to maximize min_{i=1,...,m} \u2211_{j \u2208 S_i} p_{ij}, where S_i is the set of presents received by the i-th kid. Our main result is an O(log log m/log log log m) approximation algorithm for the restricted assignment case of the problem, when p_{ij} \u2208 {p_j, 0} (i.e., when present j has either value p_j or 0 for each kid). Our algorithm is based on rounding a certain natural exponentially large linear programming relaxation usually referred to as the configuration LP. We also show that the configuration LP has an integrality gap of \u03a9(m^{1/2}) in the general case, when p_{ij} can be arbitrary."} {"_id": "5a5c17a2dfb6d0e26a25bfcf2d03dd2f865940fc", "title": "The Wisconsin Card Sorting Test: theoretical analysis and modeling in a neuronal network.", "text": "Neuropsychologists commonly use the Wisconsin Card Sorting Test as a test of the integrity of frontal lobe functions. However, an account of its range of validity and of the neuronal mechanisms involved is lacking. We analyze the test at 3 different levels. First, the different versions of the test are described, and the results obtained with normal subjects and brain-lesioned patients are reviewed. Second, a computational analysis is used to reveal what algorithms may pass the test, and to predict their respective performances. At this stage, 3 cognitive components are isolated that may critically contribute to performance: the ability to change the current rule when negative reward occurs, the capacity to memorize previously tested rules in order to avoid testing them twice, and the possibility of rejecting some rules a priori by reasoning.
Third, a model neuronal network embodying these 3 components is described. The coding units are clusters of neurons organized in layers, or assemblies. A sensorimotor loop enables the network to sort the input cards according to several criteria (color, form, etc.). A higher-level assembly of rule-coding clusters codes for the currently tested rule, which shifts when negative reward is received. Internal testing of the possible rules, analogous to a reasoning process, also occurs, by means of an endogenous auto-evaluation loop. When lesioned, the model reproduces the behavior of frontal lobe patients. Plausible biological or molecular implementations are presented for several of its components."} {"_id": "09a912f49e0b65803627412c969123620cda4a77", "title": "Slice Sampling for Probabilistic Programming", "text": "We introduce the first general-purpose slice sampling inference engine for probabilistic programs. This engine is released as part of StocPy, a new Turing-complete probabilistic programming language, available as a Python library. We present a transdimensional generalisation of slice sampling which is necessary for the inference engine to work on traces with different numbers of random variables. We show that StocPy compares favourably to other PPLs in terms of flexibility and usability, and that slice sampling can outperform previously introduced inference methods. Our experiments include a logistic regression, HMM, and Bayesian Neural Net."} {"_id": "70a4d654151234924c0a7ce7822f12108bf3db49", "title": "Towards understanding the effects of individual gamification elements on intrinsic motivation and performance", "text": "Research on the effectiveness of gamification has proliferated over the last few years, but the underlying motivational mechanisms have only recently become an object of empirical research. It has been suggested that when perceived as informational, gamification elements, such as points, levels and leaderboards, may afford feelings of competence and hence enhance intrinsic motivation and promote performance gains. We conducted a 2 \u00d7 4 online experiment that systematically examined how points, leaderboards and levels, as well as participants' goal causality orientation, influence intrinsic motivation, competence and performance (tag quantity and quality) in an image annotation task. Compared to a control condition, game elements did not significantly affect competence or intrinsic motivation, irrespective of participants' causality orientation. However, participants' performance did not mirror their intrinsic motivation, as points, and especially levels and leaderboards, led to a significantly higher number of tags generated compared to the control group. These findings suggest that in this particular study context, points, levels and leaderboards functioned as extrinsic incentives, effective only for promoting performance."} {"_id": "faa9bc38930db96df8b662fa0c2499efabeb335d", "title": "An Architecture for Fault-Tolerant Computation with Stochastic Logic", "text": "Mounting concerns over variability, defects, and noise motivate a new approach for digital circuitry: stochastic logic, that is to say, logic that operates on probabilistic signals and so can cope with errors and uncertainty. Techniques for probabilistic analysis of circuits and systems are well established. We advocate a strategy for synthesis. In prior work, we described a methodology for synthesizing stochastic logic, that is to say logic that operates on probabilistic bit streams.
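To illustrate what "logic on probabilistic bit streams" means in practice (a generic stochastic-computing example, not the specific architecture of this paper): a value p in [0, 1] is encoded as a random bit stream with P(bit = 1) = p, after which multiplication reduces to a bitwise AND and scaled addition to a multiplexer:

```python
# Stochastic-computing basics: AND multiplies encoded values, and a MUX
# with a 0.5 select stream computes the scaled sum (a + b) / 2. Bit flips
# perturb the encoded value only slightly, which is the fault-tolerance win.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
a = rng.random(N) < 0.3          # stream encoding 0.3
b = rng.random(N) < 0.8          # stream encoding 0.8
sel = rng.random(N) < 0.5        # select stream for the MUX

product = a & b                  # mean ~ 0.3 * 0.8 = 0.24
scaled_sum = np.where(sel, a, b) # mean ~ (0.3 + 0.8) / 2 = 0.55

print(product.mean(), scaled_sum.mean())
```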
In this paper, we apply the concept of stochastic logic to a reconfigurable architecture that implements processing operations on a datapath. We analyze cost as well as the sources of error: approximation, quantization, and random fluctuations. We study the effectiveness of the architecture on a collection of benchmarks for image processing. The stochastic architecture requires less area than conventional hardware implementations. Moreover, it is much more tolerant of soft errors (bit flips) than these deterministic implementations. This fault tolerance scales gracefully to very large numbers of errors."} {"_id": "5bb59eefd9df07014147c288ca33fa5fb17bbbbb", "title": "First-Order Theorem Proving and Vampire", "text": "In this tutorial we give a short introduction to first-order theorem proving and the use of the theorem prover Vampire. The first part of the tutorial is intended for an audience with little or no knowledge of theorem proving. We will discuss the resolution and superposition calculus, introduce the saturation principle, present various algorithms implementing redundancy elimination, preprocessing and clause form transformation, and demonstrate how these concepts are implemented in Vampire. The second part will cover more advanced topics and features. Some of these features are implemented only in Vampire. This includes reasoning with theories, such as arithmetic, answering queries to very large knowledge bases, interpolation, and an original technique of symbol elimination, which allows one to automatically discover invariants in programs with loops. All the introduced concepts will be illustrated by running the first-order theorem prover Vampire."} {"_id": "e3e8e30aecfde1107c3ff28652d55b64adc60ef3", "title": "A DNA-based implementation of YAEA encryption algorithm", "text": "The fundamental idea behind this encryption technique is the exploitation of DNA cryptographic strength, such as its storing capabilities and parallelism, in order to reinforce other conventional cryptographic algorithms. In this study, binary forms of data, such as plaintext messages and images, are transformed into sequences of DNA nucleotides. Subsequently, efficient searching algorithms are used to locate the multiple positions of a sequence of four DNA nucleotides. These four DNA nucleotides represent the binary octet of a single plaintext character or the single pixel of an image within, say, a Canis Familiaris genomic chromosome. The process of recording the locations of a sequence of four DNA nucleotides representing a single plaintext character, then returning a single randomly chosen position, enables us to assemble a file of random pointers to the locations of the four DNA nucleotides in the searched Canis Familiaris genome. We call the file containing the randomly selected position in the searched DNA strand for each plaintext character the ciphered text. Since there is negligible correlation between the pointers file obtained from the selected genome, with its inherently massive storing capabilities, and the plaintext characters, the method, we believe, is robust against any type of cipher attack."} {"_id": "a7263b095bb6fb6ad7d09f2e3cd2caa470aab126", "title": "Adaptive Neuro-Fuzzy Inference System (ANFIS) Based Software Evaluation", "text": "A software metric is a measure of some property of a piece of software or its specifications.
The goal is to obtain reproducible and quantifiable measurements, which may have several valuable applications in schedule and budget planning, effort and cost evaluation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments. Software effort evaluation is one of the most essential and crucial parts of software project planning, for which efficient effort metrics are required. Software effort evaluation is followed by software cost evaluation, which is helpful for both customers and developers. Thus, an efficient effort component of software is essential. Algorithmic models are weak at early effort estimation with regard to uncertainty and imprecision in software projects. To overcome this problem, there are various machine learning methods. One of these is soft computing, which comprises various methodologies, viz. Artificial Neural Networks, Fuzzy Logic, the evolutionary-computation-based Genetic Algorithm, and the metaheuristic-based Particle Swarm Optimization. These methods are good at solving real-world ambiguities. This paper highlights the design of an efficient software effort evaluation model using the Adaptive Neuro-Fuzzy Inference System (ANFIS) for uncertain datasets and shows that the technique significantly outperforms such algorithmic models, with satisfactory results."} {"_id": "8bcaccd884fd7dac9d782522739cfd12159c9309", "title": "Point-of-Interest Recommendation Using Heterogeneous Link Prediction", "text": "Venue recommendation in location-based social networks is among the more important tasks that enhance user participation on the social network. Despite its importance, earlier research has shown that the accurate recommendation of appropriate venues for users is a difficult task, especially given the highly sparse nature of user check-in information. In this paper, we show how a comprehensive set of user and venue-related information can be methodically incorporated into a heterogeneous graph representation, based on which the problem of venue recommendation can be efficiently formulated as an instance of the heterogeneous link prediction problem on the graph. We systematically compare our proposed approach with several strong baselines and show that our work, which is computationally less intensive compared to the baselines, is able to show improved performance in terms of precision and f-measure."} {"_id": "1532b1ceb90272770b9ad0cee8cd572164be658f", "title": "Sentiment analysis in multiple languages: Feature selection for opinion classification in Web forums", "text": "The Internet is frequently used as a medium for exchange of information and opinions, as well as propaganda dissemination. In this study, the use of sentiment analysis methodologies is proposed for classification of Web forum opinions in multiple languages. The utility of stylistic and syntactic features is evaluated for sentiment classification of English and Arabic content. Specific feature extraction components are integrated to account for the linguistic characteristics of Arabic. The entropy weighted genetic algorithm (EWGA) is also developed, which is a hybridized genetic algorithm that incorporates the information-gain heuristic for feature selection. EWGA is designed to improve performance and get a better assessment of key features. The proposed features and techniques are evaluated on a benchmark movie review dataset and U.S. and Middle Eastern Web forum postings.
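A sketch of the information-gain heuristic that EWGA-style feature selection builds on (illustrative only, not the authors' implementation): score each binary feature by how much knowing its value reduces the entropy of the class label.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_col, labels):
    # Gain = H(labels) - sum over feature values of weighted conditional entropy.
    gain = entropy(labels)
    for value in (0, 1):
        subset = [y for x, y in zip(feature_col, labels) if x == value]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Documents as binary feature vectors (e.g., presence of a stylistic marker); labels = sentiment.
X = [[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 0]]
y = [1, 1, 0, 0]
scores = [information_gain([row[j] for row in X], y) for j in range(len(X[0]))]
print(scores)  # feature 0 perfectly predicts y -> gain 1.0; the others score 0.0
```

In an EWGA-like scheme, such scores would seed or bias the genetic algorithm's initial population toward high-gain features before crossover and mutation refine the subset.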
The experimental results using EWGA with SVM indicate high performance levels, with accuracies of over 91% on the benchmark dataset as well as the U.S. and Middle Eastern forums. Stylistic features significantly enhanced performance across all testbeds, while EWGA also outperformed other feature selection methods, indicating the utility of these features and techniques for document-level classification of sentiments."} {"_id": "167e1359943b96b9e92ee73db1df69a1f65d731d", "title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts", "text": "Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as \u201cthumbs up\u201d or \u201cthumbs down\u201d. To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints. Publication info: Proceedings of the ACL, 2004."} {"_id": "49a6ba0f8668e4da188017a45aec84c9336c7fe5", "title": "Learning Subjective Adjectives from Corpora", "text": "Subjectivity tagging is distinguishing sentences used to present opinions and evaluations from sentences used to objectively present factual information. There are numerous applications for which subjectivity tagging is relevant, including information extraction and information retrieval. This paper identifies strong clues of subjectivity using the results of a method for clustering words according to distributional similarity (Lin 1998), seeded by a small amount of detailed manual annotation. These features are then further refined with the addition of lexical semantic features of adjectives, specifically polarity and gradability (Hatzivassiloglou & McKeown 1997), which can be automatically learned from corpora. In 10-fold cross validation experiments, features based on both similarity clusters and the lexical semantic features are shown to have higher precision than features based on each alone."} {"_id": "7063f78639f7620f081895a14eb5a55a3b7c6057", "title": "Light Stemming for Arabic Information Retrieval", "text": "Computational Morphology is an urgent problem for Arabic Natural Language Processing, because Arabic is a highly inflected language. We have found, however, that a full solution to this problem is not required for effective information retrieval. Light stemming allows remarkably good information retrieval without providing correct morphological analyses. We developed several light stemmers for Arabic, and assessed their effectiveness for information retrieval using standard TREC data. We have also compared light stemming with several stemmers based on morphological analysis. The light stemmer, light10, outperformed the other approaches. It has been included in the Lemur toolkit, and is becoming widely used in Arabic information retrieval."} {"_id": "bc7308a97ec2d3f7985d48671abe7a8942a5b9f8", "title": "Sentiment Analysis using Support Vector Machines with Diverse Information Sources", "text": "This paper introduces an approach to sentiment analysis which uses support vector machines (SVMs) to bring together diverse sources of potentially pertinent information, including several favorability measures for phrases and adjectives and, where available, knowledge of the topic of the text.
Models using the features introduced are further combined with unigram models which have been shown to be effective in the past (Pang et al., 2002) and lemmatized versions of the unigram models. Experiments on movie review data from Epinions.com demonstrate that hybrid SVMs which combine unigram-style feature-based SVMs with those based on real-valued favorability measures obtain superior performance, producing the best results yet published using this data. Further experiments using a feature set enriched with topic information on a smaller dataset of music reviews hand-annotated for topic are also reported, the results of which suggest that incorporating topic information into such models may also yield improvement."} {"_id": "42a96f07ab5ba25c88e5f7c0009ed9bfa3a4c5fc", "title": "A Differential Covariance Matrix Adaptation Evolutionary Algorithm for real parameter optimization", "text": "Hybridization in the context of Evolutionary Computation (EC) aims at combining the operators and methodologies from different EC paradigms to form a single algorithm that may enjoy a statistically superior performance on a wide variety of optimization problems. In this article we propose an efficient hybrid evolutionary algorithm that embeds the difference vector-based mutation scheme, the crossover and the selection strategy of Differential Evolution (DE) into another recently developed global optimization algorithm known as Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES). CMA-ES is a stochastic method for real parameter (continuous domain) optimization of non-linear, non-convex functions. The algorithm includes adaptation of the covariance matrix, which is essentially an alternative to the traditional Quasi-Newton method for gradient-based optimization. The hybrid algorithm, referred to by us as Differential Covariance Matrix Adaptation Evolutionary Algorithm (DCMA-EA), turns out to possess a better blending of the explorative and exploitative behaviors as compared to the original DE and the original CMA-ES, through empirical simulations. Though CMA-ES has emerged as a very efficient global optimizer, its performance deteriorates when it comes to dealing with complicated fitness landscapes, especially landscapes associated with noisy, hybrid composition functions and many real-world optimization problems. In order to improve the overall performance of CMA-ES, the mutation, crossover and selection operators of DE have been incorporated into CMA-ES to synthesize the hybrid algorithm DCMA-EA. We compare DCMA-EA with the original DE and CMA-ES, two of the best-known DE variants: SaDE and JADE, and two state-of-the-art real optimizers: IPOP-CMA-ES (Restart Covariance Matrix Adaptation Evolution Strategy with increasing population size) and DMS-PSO (Dynamic Multi Swarm Particle Swarm Optimization), over a test-suite of 20 shifted, rotated, and compositional benchmark functions and also two engineering optimization problems. Our comparative study indicates that although the hybridization scheme does not impose any serious burden on DCMA-EA in terms of number of Function Evaluations (FEs), DCMA-EA still enjoys a statistically superior performance over most of the tested benchmarks and especially over the multi-modal, rotated, and compositional ones in comparison to the other algorithms considered here. 2011 Published by Elsevier Inc."}
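A minimal sketch of the DE operators that DCMA-EA embeds into CMA-ES: difference-vector mutation, binomial crossover, and greedy selection (a generic DE step of our own, not the hybrid algorithm itself).

```python
import random

def de_step(pop, fitness, f=0.8, cr=0.9, rng=random):
    new_pop = []
    for i, x in enumerate(pop):
        r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
        # Difference-vector mutation: donor = x_r1 + F * (x_r2 - x_r3)
        donor = [pop[r1][d] + f * (pop[r2][d] - pop[r3][d]) for d in range(len(x))]
        jrand = rng.randrange(len(x))  # guarantee at least one donor component
        trial = [donor[d] if (rng.random() < cr or d == jrand) else x[d]
                 for d in range(len(x))]
        # Greedy selection: keep whichever of trial/parent is fitter.
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)
    return new_pop

sphere = lambda v: sum(t * t for t in v)
rng = random.Random(1)
pop = [[rng.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
for _ in range(200):
    pop = de_step(pop, sphere, rng=rng)
print(min(sphere(v) for v in pop))  # should be near 0 on this unimodal test
```

In the hybrid described above, steps like this would be interleaved with CMA-ES's covariance-matrix update rather than run standalone.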
{"_id": "13941748ae988ef79d3a123b63469789d01422e7", "title": "Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions", "text": "The driving force behind deep networks is their ability to compactly represent rich classes of functions. The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to realize (or approximate) functions of another. To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones. In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways. We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks. By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not. Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy. This leads us to believe that expressive efficiency may serve a key role in the development of new tools for deep network design."} {"_id": "897c79a099637b56a38241a46952935341d57131", "title": "An elaborate data set on human gait and the effect of mechanical perturbations", "text": "Here we share a rich gait data set collected from fifteen subjects walking at three speeds on an instrumented treadmill. Each trial consists of 120 s of normal walking and 480 s of walking while being longitudinally perturbed during each stance phase with pseudo-random fluctuations in the speed of the treadmill belt. A total of approximately 1.5 h of normal walking (>5000 gait cycles) and 6 h of perturbed walking (>20,000 gait cycles) is included in the data set. We provide full body marker trajectories and ground reaction loads in addition to a presentation of processed data that includes gait events, 2D joint angles, angular rates, and joint torques along with the open source software used for the computations. The protocol is described in detail and supported with additional elaborate meta data for each trial. This data can likely be useful for validating or generating mathematical models that are capable of simulating normal periodic gait and non-periodic, perturbed gaits."} {"_id": "02733207813127f30d89ddb7a59d0374509c789c", "title": "Cognitive control signals for neural prosthetics.", "text": "Recent development of neural prosthetics for assisting paralyzed patients has focused on decoding intended hand trajectories from motor cortical neurons and using this signal to control external devices. In this study, higher level signals related to the goals of movements were decoded from three monkeys and used to position cursors on a computer screen without the animals emitting any behavior. Their performance in this task improved over a period of weeks.
Expected value signals related to fluid preference, the expected magnitude, or probability of reward were decoded simultaneously with the intended goal. For neural prosthetic applications, the goal signals can be used to operate computers, robots, and vehicles, whereas the expected value signals can be used to continuously monitor a paralyzed patient's preferences and motivation."} {"_id": "58a4ed0b7c64a2643b120821cfeacda3229b106c", "title": "Interactive map projections and distortion", "text": "We introduce several new methods for visualizing map projections and their associated distortions. These methods are embodied in the Interactive Map Projections system which allows users to view a representation of the Earth simultaneously as a sphere and as a projection with the ability to interact with both images. The relationship between the globe and the projection is enhanced by the use of explicit visualization of the intermediate developable geometric shapes used in the projection. A tool is built on top of the Interactive Map Projections system that provides a new method of visualizing map projection distortion. The central idea is one or more floating rings on the globe that can be interactively positioned and scaled. As the rings are manipulated on the globe, the corresponding projections of the rings are distorted using the same map projection parameters. This method is applied to study areal and angular distortion and is particularly useful when analyzing large geographical extents (such as in global climate studies) where distortions are significant, as well as visualizations for which information is geo-referenced and perhaps scaled to the underlying map. The floating ring tool is further enhanced to study 3D data sets placed over or under map projections. Examples include atmospheric and oceanographic data, respectively. Here, the ring is extended into a cone with apex at the center of the sphere and emanating beyond the surface into the atmosphere. It serves as a reminder that distortion exists in maps and data overlaid on maps, and provides information about the degree, location, and type of distortion. © 2001 Elsevier Science Ltd. All rights reserved."} {"_id": "d250e57f6b7e06bb1dac41c8b89700086a85999e", "title": "Self-Supervised Generalisation with Meta Auxiliary Learning", "text": "Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manually labelling additional tasks which may, or may not, be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network\u2019s performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels.
The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at https://github.com/lorenmt/maxl."} {"_id": "e7b593a7b2b6db162002d7a15659ff7d4b3eaf02", "title": "Efficiency and harmonics generation in microwave to DC conversion circuits of half-wave and full-wave rectifier types", "text": "A Rectifying Antenna (Rectenna) is one of the most important components for wireless power transmission. It has been developed for many applications such as the Space Solar Power System (SSPS) and Radio Frequency Identification (RFID). The Rectenna, consisting of RF-DC conversion circuits and receiving antennas, needs to be designed for high conversion efficiency to achieve efficient power transmission. We design a mW-class RF-DC conversion circuit using half- and full-wave rectification. The measured conversion efficiencies are 59.3% and 65.3% for the half- and full-wave types, respectively. The full-wave type also has a lower 2nd-harmonic reflection coefficient."} {"_id": "63af4355721f417bc405886f383af096fbfe51b2", "title": "Dynamic load balancing on single- and multi-GPU systems", "text": "The computational power provided by many-core graphics processing units (GPUs) has been exploited in many applications. The programming techniques currently employed on these GPUs are not sufficient to address problems exhibiting irregular and unbalanced workloads. The problem is exacerbated when trying to effectively exploit multiple GPUs concurrently, which are commonly available in many modern systems. In this paper, we propose a task-based dynamic load-balancing solution for single- and multi-GPU systems. The solution allows load balancing at a finer granularity than what is supported in current GPU programming APIs, such as NVIDIA's CUDA. We evaluate our approach using both micro-benchmarks and a molecular dynamics application that exhibits significant load imbalance. Experimental results with a single-GPU configuration show that our fine-grained task solution can utilize the hardware more efficiently than the CUDA scheduler for unbalanced workload. On multi-GPU systems, our solution achieves near-linear speedup, load balance, and significant performance improvement over techniques based on standard CUDA APIs."} {"_id": "3f10e91a922860e49a42b4ee5ffa14ad7e6500e8", "title": "Guidelines for Use of Climate Scenarios Developed from Statistical Downscaling Methods", "text": "The Guidelines for Use of Climate Scenarios Developed from Statistical Downscaling Methods constitute \"Supporting material\" of the Intergovernmental Panel on Climate Change (as defined in the Procedures for the Preparation, Review, Acceptance, Adoption, Approval, and Publication of IPCC Reports). The Guidelines were prepared for consideration by the IPCC at the request of its Task Group on Data and Scenario Support for Impacts and Climate Analysis (TGICA). This supporting material has not been subject to the formal intergovernmental IPCC review processes. Eduardo Zorito provided additional materials and feedback that were incorporated by June 2003. Revisions were made in the light of comments received from Elaine Barrow and John Mitchell in November 2003 \u2013 most notably the inclusion of a worked case study.
Further comments were received from Penny Whetton and Steven Charles in February 2004."} {"_id": "3719c7ec35b7c374a364efa64e6e0066fcd84d3f", "title": "Extracting Data from NoSQL Databases", "text": "Businesses and organizations today generate increasing volumes of data. Being able to analyze and visualize this data to find trends that can be used as input when making business decisions is an important factor for competitive advantage. Spotfire is a software platform for doing this. Spotfire uses a tabular data model similar to the relational model used in relational database management systems (RDBMSs), which are commonly used by companies for storing data. Extraction and import of data from RDBMSs to Spotfire is generally a simple task. In recent years, because of changing application requirements, new types of databases under the general term NoSQL have become popular. NoSQL databases differ from RDBMSs mainly in that they use non-relational data models, lack explicit schemas and scale horizontally. Some of these features cause problems for applications like Spotfire when extracting and importing data. This thesis investigates how these problems can be solved, thus enabling support for NoSQL databases in Spotfire. The approach and conclusions are valid for any application that interacts with databases in a similar way to Spotfire. General solutions for supporting NoSQL databases are suggested. Also, two concrete tools for importing data from Cassandra and Neo4j that have been implemented in the Spotfire platform are described. The presented solutions comprise a data model mapping from the NoSQL system to Spotfire tables, sampling and possibly clustering for finding schemas, and an extraction mechanism tailored to the particular system\u2019s query interface. The suggested solutions are not claimed to be complete. Rather, the work in this thesis can serve as a starting point for more thorough investigations or as a basis for something that can be extended."} {"_id": "99f1b79073f6a628348a884725e516f2840aa5ef", "title": "Ontologies in biology: design, applications and future challenges", "text": "Biological knowledge is inherently complex and so cannot readily be integrated into existing databases of molecular (for example, sequence) data. An ontology is a formal way of representing knowledge in which concepts are described both by their meaning and their relationship to each other. Unique identifiers that are associated with each concept in biological ontologies (bio-ontologies) can be used for linking to and querying molecular databases. This article reviews the principal bio-ontologies and the current issues in their design and development: these include the ability to query across databases and the problems of constructing ontologies that describe complex knowledge, such as phenotypes."} {"_id": "46b2b620e7b26bf6049aaece16c469006a95a2c7", "title": "A SIFT-Based Forensic Method for Copy\u2013Move Attack Detection and Transformation Recovery", "text": "One of the principal problems in image forensics is determining if a particular image is authentic or not. This can be a crucial task when images are used as basic evidence to influence judgment like, for example, in a court of law. To carry out such forensic analysis, various technological instruments have been developed in the literature.
In this paper, the problem of detecting if an image has been forged is investigated; in particular, attention has been paid to the case in which an area of an image is copied and then pasted onto another zone to create a duplication or to cancel something that was awkward. Generally, to adapt the image patch to the new context a geometric transformation is needed. To detect such modifications, a novel methodology based on the scale invariant features transform (SIFT) is proposed. Such a method allows us to both understand if a copy-move attack has occurred and, furthermore, to recover the geometric transformation used to perform cloning. Extensive experimental results are presented to confirm that the technique is able to precisely individuate the altered area and, in addition, to estimate the geometric transformation parameters with high reliability. The method also deals with multiple cloning."} {"_id": "7bbedbeb14f0050c86c67acefa8b5c263fc30e9a", "title": "Internet self-efficacy and electronic service acceptance", "text": "Internet self-efficacy (ISE), or the beliefs in one\u2019s capabilities to organize and execute courses of Internet actions required to produce given attainments, is a potentially important factor to explain the consumers\u2019 decisions in e-commerce use, such as e-service. In this study, we introduce two types of ISE (i.e., general Internet self-efficacy and Web-specific self-efficacy) as new factors that reflect the user\u2019s behavioral control beliefs in e-service acceptance. Using these two constructs as behavioral control factors, we extend and empirically validate the Theory of Planned Behavior (TPB) for the World Wide Web (WWW) context. © 2003 Elsevier B.V. All rights reserved."} {"_id": "d3f005cad43a043a50443e167c839403172d71df", "title": "Corrosion Study and Intermetallics Formation in Gold and Copper Wire Bonding in Microelectronics Packaging", "text": "A comparison study on the reliability of gold (Au) and copper (Cu) wire bonding is conducted to determine their corrosion and oxidation behavior in different environmental conditions. The corrosion and oxidation behaviors of Au and Cu wire bonding are determined through soaking in sodium chloride (NaCl) solution and high temperature storage (HTS) at 175 \u00b0C, 200 \u00b0C and 225 \u00b0C. Galvanic corrosion is more intense in Cu wire bonding as compared to Au wire bonding in NaCl solution due to the minimal formation of intermetallics in the former. At all three HTS annealing temperatures, the rate of Cu-Al intermetallic formation is found to be three to five times slower than that of Au-Al intermetallics. The faster intermetallic growth rate and lower activation energy found in this work for both Au/Al and Cu/Al as compared to the literature could be due to the thicker Al pad metallization, which removed the rate-determining step present in previous studies due to a deficit of Al material."} {"_id": "6974d15455bbf5237eec5af571dcd6961480b733", "title": "Introducing teachers to computational thinking using unplugged storytelling", "text": "Many countries are introducing new school computing syllabuses that make programming and computational thinking core components. However, many of the teachers involved have major knowledge, skill and pedagogy gaps. We have explored the effectiveness of using 'unplugged' methods (constructivist, often kinaesthetic, activities away from computers) with contextually rich storytelling to introduce teachers to these topics in a non-threatening way.
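As a rough sketch of the SIFT-based copy-move detection idea from the image forensics record above (assuming an OpenCV build that ships SIFT via cv2.SIFT_create; the input file name is hypothetical): keypoints are matched against the same image, spatially separated matches suggest a cloned region, and RANSAC recovers the geometric transformation used for the cloning.

```python
import cv2
import numpy as np

img = cv2.imread("suspect.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
sift = cv2.SIFT_create()
kp, des = sift.detectAndCompute(img, None)

matches = cv2.BFMatcher().knnMatch(des, des, k=3)      # match the image to itself
src, dst = [], []
for m in matches:
    best = m[1]                                        # m[0] is the keypoint matched to itself
    p1, p2 = kp[best.queryIdx].pt, kp[best.trainIdx].pt
    if np.hypot(p1[0] - p2[0], p1[1] - p2[1]) > 10:    # ignore near-duplicate locations
        src.append(p1)
        dst.append(p2)

if len(src) >= 3:
    A, inliers = cv2.estimateAffine2D(np.float32(src), np.float32(dst),
                                      method=cv2.RANSAC)
    print("estimated affine transform:\n", A, "\ninliers:", int(inliers.sum()))
```

A real detector would add a ratio test and clustering of matched pairs before the RANSAC fit; this sketch only shows the pipeline's shape.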
We describe the approach we have used in workshops for teachers and its survey-based evaluation. Teachers were highly positive that the approach was inspiring, confidence-building and gave them a greater understanding of the concepts involved, as well as giving practical teaching techniques that they would use."} {"_id": "f49a6e5ce0081f627f2639598d86504404cafed6", "title": "Sustainable Power Supply Solutions for Off-Grid Base Stations", "text": "The telecommunication sector plays a significant role in shaping the global economy and the way people share information and knowledge. At present, the telecommunication sector is accountable for its energy consumption and the amount of emissions it releases into the environment. In the context of off-grid telecommunication applications, off-grid base stations (BSs) are commonly used due to their ability to provide radio coverage over a wide geographic area. However, in the past, the off-grid BSs usually relied on emission-intensive power supply solutions such as diesel generators. In this review paper, various types of solutions (including, in particular, the sustainable solutions) for powering BSs are discussed. The key aspects in designing an ideal power supply solution are reviewed, and these mainly include the pre-feasibility study and the thermal management of BSs, which comprise heating and cooling of the BS shelter/cabinets and BS electronic equipment and power supply components. The sizing and optimization approaches used to design the BSs\u2019 power supply systems as well as the operational and control strategies adopted to manage the power supply systems are also reviewed in this paper."} {"_id": "6a43ba86b986923e1f7468fdc8c8ec1d994788db", "title": "On Exploiting Dynamic Execution Patterns for Workload Offloading in Mobile Cloud Applications", "text": "Mobile Cloud Computing (MCC) bridges the gap between the limited capabilities of mobile devices and users' increasing demand for mobile multimedia applications, by offloading the computational workloads from local devices to the remote cloud. Current MCC research focuses on making offloading decisions over different methods of an MCC application, but may inappropriately increase the energy consumption if a large amount of program state has to be transmitted over expensive wireless channels. Limited research has been done on avoiding such energy waste by exploiting the dynamic patterns of applications' run-time execution for workload offloading. In this paper, we adaptively offload the local computational workload with respect to the run-time application dynamics. Our basic idea is to formulate the dynamic executions of user applications using a semi-Markov model, and to further make offloading decisions based on probabilistic estimations of the offloading operation's energy saving. Such estimation is motivated by experimental investigations over practical smart phone applications, and then builds on analytical modeling of methods' execution times and offloading expenses.
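An illustrative back-of-the-envelope for the offloading decision just described (all numbers and parameter names are assumptions of ours, not measurements from the paper): offload a method only when its expected energy saving, weighted by the probability that the execution path is actually taken, is positive.

```python
def offload_saves_energy(p_path, t_local_s, cpu_power_w,
                         state_bytes, uplink_bps, radio_power_w, t_tail_s=1.0):
    e_local = t_local_s * cpu_power_w                 # energy to run on the phone
    t_tx = 8 * state_bytes / uplink_bps               # time to ship program state
    e_offload = (t_tx + t_tail_s) * radio_power_w     # radio energy incl. tail time
    return p_path * (e_local - e_offload) > 0

# Heavy method, likely to execute, small state: offloading pays off.
print(offload_saves_energy(0.9, 4.0, 0.8, 50_000, 2_000_000, 1.2))    # True
# Rarely-executed method with a large state transfer: keep it local.
print(offload_saves_energy(0.1, 0.5, 0.8, 5_000_000, 2_000_000, 1.2))  # False
```

In the paper's formulation the path probability would come from the semi-Markov model of run-time execution rather than being supplied as a constant.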
Systematic evaluations show that our scheme significantly improves the efficiency of workload offloading compared to existing schemes over various smart phone applications."} {"_id": "389dff9e0ed28973a61d3bfeadf8b4b639f6a155", "title": ": Past, Present, Future", "text": "We thank Steve Burks, Dick Thaler, and especially Matthew Rabin (who collaborated during part of the process) for helpful comments."} {"_id": "0217fb2a54a4f324ddf82babc6ec6692a3f6194f", "title": "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", "text": "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https://arxiv.org/abs/1606.03657."} {"_id": "8b89fb1c664b742271e0f19a9efe8492f14074f5", "title": "Real-time Simulation of Large Bodies of Water with Small Scale Details", "text": "We present a hybrid water simulation method that combines grid-based and particle-based approaches. Our specialized shallow water solver can handle arbitrary underlying terrain slopes, arbitrary water depth and supports wet-dry region tracking. To treat open water scenes we introduce a method for handling non-reflecting boundary conditions. Regions of liquid that cannot be represented by the height field, including breaking waves, waterfalls and splashing due to rigid and soft body interaction, are automatically turned into spray, splash and foam particles. The particles are treated as simple non-interacting point masses and they exchange mass and momentum with the height field fluid. We also present a method for procedurally adding small scale waves that are advected with the water flow. We demonstrate the effectiveness of our method in various test scenes including a large flowing river along a valley with beaches, big rocks, steep cliffs and waterfalls. Both the grid and the particle simulations are implemented in CUDA. We achieve real-time performance on modern GPUs in all the examples."} {"_id": "c55426ad4a6f84001b52717dddcba440bb10df5d", "title": "From Hashing to CNNs: Training Binary Weight Networks via Hashing", "text": "Deep convolutional neural networks (CNNs) have shown appealing performance on various computer vision tasks in recent years. This motivates people to deploy CNNs to real-world applications. However, most state-of-the-art CNNs require large memory and computational resources, which hinders the deployment on mobile devices. Recent studies show that low-bit weight representation can reduce much storage and memory demand, and also can achieve efficient network inference.
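To make the low-bit storage argument concrete, here is a common baseline (per-row sign binarization with a scaling factor, in the spirit of BinaryConnect/XNOR-Net); this is explicitly not the hashing-based BWNH training procedure itself.

```python
import numpy as np

def binarize(W):
    alpha = np.abs(W).mean(axis=1, keepdims=True)  # one fp scale per output row
    B = np.sign(W)
    B[B == 0] = 1                                  # break ties toward +1
    return alpha, B                                # W is approximated by alpha * B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)
alpha, B = binarize(W)
err = np.linalg.norm(W - alpha * B) / np.linalg.norm(W)
print(f"relative approximation error: {err:.3f}")  # ~0.6 for Gaussian weights
# Storage drops from 32 bits per weight to ~1 bit plus one scale per row.
```

BWNH's contribution, as described next, is to pose finding such binary codes as an inner-product-preserving hashing problem rather than direct sign quantization.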
To achieve this goal, we propose a novel approach named BWNH to train Binary Weight Networks via Hashing. In this paper, we first reveal the strong connection between inner-product preserving hashing and binary weight networks, and show that training binary weight networks can be intrinsically regarded as a hashing problem. Based on this perspective, we propose an alternating optimization method to learn the hash codes instead of directly learning binary weights. Extensive experiments on CIFAR10, CIFAR100 and ImageNet demonstrate that our proposed BWNH outperforms the current state-of-the-art by a large margin."} {"_id": "be389fb59c12c8c6ed813db13ab74841433ea1e3", "title": "iMapper: Interaction-guided Joint Scene and Human Motion Mapping from Monocular Videos", "text": "We present iMapper, a method that reasons about the interactions of humans with objects to recover both a plausible scene arrangement and human motions that best explain an input monocular video. We fit characteristic interactions called scenelets to the video and use them to reconstruct a plausible object arrangement and human motion path. The key challenge is that reliable fitting requires information about occlusions, which are unknown (i.e., latent). Object meshes are placed based on estimated object category, location, and size information, and the recovered placements are overlaid against manually annotated ground-truth object placements for evaluation."} {"_id": "4fbd641e30a4e5868f4a146760b9543eafcee21d", "title": "Algorithm for 3D Point Cloud Denoising", "text": "The raw data of point cloud produced by 3D scanning tools contains additive noise from various sources. This paper proposes a method for 3D unorganized point cloud denoising by making full use of the depth information of unorganized points and space analytic geometry theory, applying the over-domain averaging method from 2D image denoising theory to 3D point data. Point cloud noise is filtered using an irregular polyhedron based on limited local neighborhoods. The experiment shows that the proposed method successfully removes noise from the point cloud while preserving the features of the scattered point model. Furthermore, the presented algorithm excels in its simplicity both in implementation and operation."} {"_id": "92bc0ff78de5ae14a31c4085e9b2e0b29d57c9aa", "title": "Central Bank Communication and the Yield Curve: A Semi-Automatic Approach using Non-Negative Matrix Factorization", "text": "Communication is now a standard tool in the central bank\u2019s monetary policy toolkit. Theoretically, communication provides the central bank an opportunity to guide public expectations, and it has been shown empirically that central bank communication can lead to financial market fluctuations. However, there has been little research into which dimensions or topics of information are most important in causing these fluctuations. We develop a semi-automatic methodology that summarizes the FOMC statements into their main themes, automatically selects the best model based on coherency, and assesses whether there is a significant impact of these themes on the shape of the U.S. Treasury yield curve, using topic modeling methods from the machine learning literature.
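A sketch of the topic-extraction step just described (the three toy "statements" are stand-ins for the actual FOMC corpus, and the hyperparameters are illustrative): factor a tf-idf statement-term matrix with non-negative matrix factorization and read the top-weighted terms of each component as a theme.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

statements = [
    "inflation employment economic growth remains moderate labor market",
    "federal funds rate target asset purchases policy accommodation",
    "financial markets strains liquidity credit conditions crisis",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(statements)
nmf = NMF(n_components=3, init="nndsvd", random_state=0).fit(X)

terms = tfidf.get_feature_names_out()
for k, row in enumerate(nmf.components_):
    top = [terms[i] for i in row.argsort()[::-1][:4]]  # highest-weighted terms
    print(f"theme {k}: {', '.join(top)}")
```

Model selection "based on coherency" would then score candidate factorizations (varying the number of components) with a topic-coherence measure and keep the best one.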
Our findings suggest that the FOMC statements can be decomposed into three topics: (i) information related to the economic conditions and the mandates, (ii) information related to monetary policy tools and intermediate targets, and (iii) information related to financial markets and the financial crisis. We find that statements are most influential during the financial crisis and the effects are mostly present in the curvature of the yield curve through information related to the financial theme."} {"_id": "54797270f673a2efc9c4b86b5de3befa59b13bd2", "title": "Substrate Integrated Waveguide Cross-Coupled Filter With Negative Coupling Structure", "text": "Substrate integrated waveguide (SIW) technology provides an attractive solution to the integration of planar and nonplanar circuits by using a planar circuit fabrication process. However, it is usually difficult to implement the negative coupling structure required for the design of compact canonical folded elliptic or quasi-elliptic cross-coupled bandpass filters on the basis of a single-layer SIW. In this paper, a special planar negative coupling scheme including a magnetic coupling post-wall iris and a balanced microstrip line with a pair of metallic via-holes is studied in detail. Two fourth-degree cross-coupled bandpass filters, without and with source-load coupling, using the negative coupling structures are then proposed and designed. The two novel SIW filters, having the same center frequency of 20.5 GHz and respective passband widths of 700 and 800 MHz, are implemented on a single-layer Rogers RT/Duroid 5880 substrate with thickness of 0.508 mm. Measured results of those filters, which exhibit high selectivity and minimum in-band insertion losses of approximately 0.9 and 1.0 dB, respectively, agree well with simulated results."} {"_id": "aaf7be312dd3d22b032cbbf9530ad56bf8e7800b", "title": "Dispersion characteristics of substrate integrated rectangular waveguide", "text": "Dispersion properties of the substrate integrated rectangular waveguide (SIRW) are rigorously obtained using the BI-RME method combined with Floquet's theorem. Our analysis shows that the SIRW basically has the same guided-wave characteristics as the conventional rectangular waveguide. Empirical equations are derived from the calculated dispersion curves in order to estimate the cutoff frequency of the first two dominant modes of the SIRW. To validate the analysis results, an SIRW guide was designed and measured. Very good agreement between the experimental and theoretical results was obtained."} {"_id": "eefb364424648a3498dcee451bf0af00463fab1a", "title": "Characteristics of cross (bypass) coupling through higher/lower order modes and their applications in elliptic filter design", "text": "This paper presents a new set of results concerning the use of higher/lower order modes as a means to implement bypass or cross coupling for applications in elliptic filter design. It is shown that the signs of the coupling coefficients to produce a transmission zero (TZ) either below or above the passband are, in certain situations, reversed from the predictions of simpler existing models. In particular, the bypass coupling to higher/lower order modes must be significantly stronger than the coupling to the main resonance in order to generate TZs in the immediate vicinity of the passband. Planar (H-plane) singlets are used to illustrate the derived results.
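A numerical illustration of the sign discussion above, using the standard N x N lowpass coupling-matrix model (Cameron-style; the coupling values are ours and purely illustrative): for a 3-resonator trisection, flipping the sign of the bypass coupling M13 moves the transmission zero from one side of the passband to the other.

```python
import numpy as np

def s21(lmbda, M, r1, rn):
    n = M.shape[0]
    R = np.zeros((n, n)); R[0, 0], R[-1, -1] = r1, rn
    A = lmbda * np.eye(n) - 1j * R + M          # lowpass prototype network matrix
    return -2j * np.sqrt(r1 * rn) * np.linalg.inv(A)[-1, 0]

def tz_location(m13):
    M = np.array([[0.0, 0.9, m13],
                  [0.9, 0.0, 0.9],
                  [m13, 0.9, 0.0]])
    lam = np.linspace(-4, 4, 8001)
    mag = np.abs([s21(x, M, 1.0, 1.0) for x in lam])
    return lam[mag.argmin()]                     # lowpass frequency of the TZ

print(tz_location(+0.3))   # zero above the passband (positive lambda)
print(tz_location(-0.3))   # flipping the sign moves it below (negative lambda)
```

For this topology the zero sits where the bypass path M13 cancels the direct M12-M23 path, so its side of the passband is set entirely by the relative signs of the couplings, which is the effect the record above revisits for higher/lower order modes.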
This study should provide very important guidelines in selecting the proper main and bypass couplings for sophisticated filtering structures. Example filters are designed, built, and measured to demonstrate the validity of the introduced theory."} {"_id": "f24a1af3bd8873920593786d81590d29520cfebc", "title": "Multilayered substrate integrated waveguide (MSIW) elliptic filter", "text": "This letter presents the design and experiment of a novel elliptic filter based on the multilayered substrate integrated waveguide (MSIW) technique. A C-band elliptic filter with four folded MSIW cavities is simulated by using High Frequency Structure Simulator software and fabricated with a two-layer printed circuit board process. The measured results show good performance and are in agreement with the simulated results."} {"_id": "f9caa828cc99bdfb1f42b255136d9c78ee9a1f1a", "title": "Novel compact net-type resonators and their applications to microstrip bandpass filters", "text": "Novel compact net-type resonators and their practical applications to microstrip bandpass filters have been presented in this paper. Three kinds of filters are designed and fabricated to demonstrate the practicality of the proposed compact net-type resonators. In addition, by adjusting the structural parameters of the net-type resonators, the spurious frequencies can be properly shifted to higher frequencies. As a result, a three-pole Chebyshev net-type resonator filter with a fractional bandwidth (FBW) of 6.8% has a spurious resonance of up to 4.1f₀, and it has more than 80% size reduction in comparison with the conventional U-shaped resonator filter. A four-pole quasi-elliptic net-type resonator filter with a FBW of 3.5% has a spurious resonance of up to 5f₀, and it has approximately 67% size reduction in comparison with the cross-coupled open-loop resonator filter. A three-pole trisection net-type resonator filter with a FBW of 4.7% has a spurious resonance of up to 6.5f₀, and its size is reduced by 68% in comparison with the trisection open-loop resonator filter. Consequently, each of the designed filters occupies a very small circuit size and has a good stopband response. The measured results are in good agreement with the full-wave simulation results by IE3D."} {"_id": "cc415579249532aa33651c8eca1aebf5ce26af1d", "title": "Today\u2019s State-Owned Enterprises of China: Are They Dying Dinosaurs or Dynamic Dynamos?", "text": "NOTE: The authors thank Bianca Bain for her assistance on an earlier draft of the paper and the University of Macao for providing financial support. Summary: This paper raises the question and provides empirical evidence regarding the status of the evolution of the state-owned enterprises (SOEs) in China today. In this study, we compare the SOEs to domestic privately owned enterprises (POEs) and foreign-controlled businesses (FCBs) in the context of their organizational cultures. While a new ownership form, many of the POEs evolved from former collectives that reflect the traditional values of Chinese business. Conversely, the FCBs are much more indicative of the large global MNCs. Therefore, we look at the SOEs in the context of these two reference points. We conclude that the SOEs of today have substantially transformed to approximate a configuration desired by the Chinese government when it began the SOE transformation a couple of decades ago to make them globally competitive.
The SOEs of today appear to be appropriately described as China's dynamic economic dynamos for the future."} {"_id": "5b4242f283ce25a44af2832769a5a54662ac4d38", "title": "Simplified high-accuracy calculation of eddy-current loss in round-wire windings", "text": "It has recently been shown that the most commonly used methods for calculating high-frequency eddy-current loss in round-wire windings can have substantial error, exceeding 60%. Previous work includes a formula based on a parametric set of finite-element analysis (FEA) simulations that gives proximity-effect loss for a large range of frequencies, using the parameters from a lookup table based on winding geometry. We improve the formula by decreasing the number of parameters in the formula and also, more importantly, by using simple functions to get the parameters from winding geometry such that a large lookup table is not needed. The function we present is exact in the low-frequency limit (diameter much smaller than skin depth) and has error less than 4% at higher frequencies. We make our new model complete by examining the field expression needed to get the total proximity-effect loss and by including the skin-effect loss. We also present experimental results confirming the validity of the model and its superiority to standard methods."} {"_id": "8052bc5f9beb389b3144d423e7b5d6fcf5d0cc4f", "title": "Adapting attributes by selecting features similar across domains", "text": "Attributes are semantic visual properties shared by objects. They have been shown to improve object recognition and to enhance content-based image search. While attributes are expected to cover multiple categories, e.g. a dalmatian and a whale can both have \"smooth skin\", we find that the appearance of a single attribute varies quite a bit across categories. Thus, an attribute model learned on one category may not be usable on another category. We show how to adapt attribute models towards new categories. We ensure that positive transfer can occur between a source domain of categories and a novel target domain, by learning in a feature subspace found by feature selection where the data distributions of the domains are similar. We demonstrate that when data from the novel domain is limited, regularizing attribute models for that novel domain with models trained on an auxiliary domain (via Adaptive SVM) improves the accuracy of attribute prediction."} {"_id": "71ba42461b7bc72f91bdaf92412204ebe288347c", "title": "Area-efficient parallel-prefix Ling adders", "text": "Efficient addition of binary numbers plays a very important role in the design of dedicated as well as general-purpose processors for the implementation of arithmetic and logic units, branch decisions, floating-point operations, address generation, etc. Several methods have been reported in the literature for the fast and hardware-efficient realization of binary addition. Among these methods, parallel-prefix addition schemes have received much attention, since they provide many design choices for delay/area-efficient implementations and optimization of tradeoffs. In this paper, we have proposed an area-efficient approach for the design of parallel-prefix Ling adders. We achieve the area efficiency by computing the real carries based on the Ling carries produced by the lower bit positions. Using the proposed method, the number of logic levels can be reduced by one, which leads to a reduction of delay as well as a significant saving in the area complexity of the adder.
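A bit-level sanity check of the Ling-carry idea sketched above (our reading of the standard recurrence, not the authors' circuit): with g_i = a_i & b_i, t_i = a_i | b_i and p_i = a_i ^ b_i, the Ling pseudo-carries satisfy H_{i+1} = g_i | (t_{i-1} & H_i), and the real carries are recovered as c_{i+1} = t_i & H_{i+1}.

```python
import random

def ling_add(a, b, width=16):
    g = [(a >> i) & (b >> i) & 1 for i in range(width)]
    t = [((a >> i) | (b >> i)) & 1 for i in range(width)]
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]
    c = [0] * (width + 1)
    h = g[0]                             # H_1 = g_0 (carry-in is 0)
    c[1] = t[0] & h
    for i in range(1, width):
        h = g[i] | (t[i - 1] & h)        # Ling carry from lower bit positions
        c[i + 1] = t[i] & h              # recover the real carry
    s = sum((p[i] ^ c[i]) << i for i in range(width))
    return s + (c[width] << width)

rng = random.Random(0)
assert all(ling_add(x, y) == x + y
           for x, y in ((rng.randrange(1 << 16), rng.randrange(1 << 16))
                        for _ in range(10_000)))
print("Ling-carry adder matches integer addition")
```

The attraction in hardware is that the H recurrence is logically simpler than the conventional carry recurrence, which is what a parallel-prefix network exploits to drop a logic level.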
We have implemented the proposed adders using 0.18\u00b5m CMOS technology; from the synthesis results, we find that our proposed adders could achieve up to 35% saving of area over the previously reported parallel-prefix Ling adders under the same delay constraints."} {"_id": "31b179cc445f2cf2b82b0112b309a8cf10abfde6", "title": "3D shape regression for real-time facial animation", "text": "We present a real-time performance-driven facial animation system based on 3D shape regression. In this system, the 3D positions of facial landmark points are inferred by a regressor from 2D video frames of an ordinary web camera. From these 3D points, the pose and expressions of the face are recovered by fitting a user-specific blendshape model to them. The main technical contribution of this work is the 3D regression algorithm that learns an accurate, user-specific face alignment model from an easily acquired set of training data, generated from images of the user performing a sequence of predefined facial poses and expressions. Experiments show that our system can accurately recover 3D face shapes even for fast motions, non-frontal faces, and exaggerated expressions. In addition, some capacity to handle partial occlusions and changing lighting conditions is demonstrated."} {"_id": "50379959c3f953cf5dc7f68a60a0c3f9b3333413", "title": "Explaining International Migration in the Skype Network: The Role of Social Network Features", "text": "In recent years, several new ways have appeared for quantifying human migration, such as location-based smartphone applications and tracking user activity from websites. To show the usefulness of these new approaches, we present the results of a study of cross-country migration as observed via login events in the Skype network. We explore the possibility of extracting human migration and correlating it with institutional statistics. The study demonstrates that a number of social network features are strongly related to net migration from and to a given country, as well as net migration between pairs of countries. Specifically, we find that the percentage of international calls, the percentage of international links and foreign logins in a country, complemented by gross domestic product, can be used as relatively accurate proxies for estimating migration."} {"_id": "1e4b941215d539981086f599ce74fa8e48184eb9", "title": "Compact Proofs of Retrievability", "text": "In a proof-of-retrievability system, a data storage center must prove to a verifier that it is actually storing all of a client\u2019s data. The central challenge is to build systems that are both efficient and provably secure\u2014that is, it should be possible to extract the client\u2019s data from any prover that passes a verification check. In this paper, we give the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels and Kaliski. Our first scheme, built from BLS signatures and secure in the random oracle model, features a proof-of-retrievability protocol in which the client\u2019s query and server\u2019s response are both extremely short. This scheme allows public verifiability: anyone can act as a verifier, not just the file owner. Our second scheme, which builds on pseudorandom functions (PRFs) and is secure in the standard model, allows only private verification. It features a proof-of-retrievability protocol with an even shorter server\u2019s response than our first scheme, but the client\u2019s query is long.
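A toy version of the PRF-based, privately verifiable idea (Shacham-Waters style; the field size and PRF choice here are ours, purely illustrative): each block m_i gets the authenticator sigma_i = f_k(i) + alpha * m_i, and linearity lets one short (mu, sigma) pair answer a multi-block challenge.

```python
import hashlib, hmac, random

P = 2**127 - 1                                   # a prime field modulus (illustrative)

def prf(key, i):
    return int.from_bytes(hmac.new(key, str(i).encode(),
                                   hashlib.sha256).digest(), "big") % P

rng = random.Random(0)
key, alpha = b"secret-prf-key", rng.randrange(P)           # client's secrets
blocks = [rng.randrange(P) for _ in range(100)]            # the stored file
tags = [(prf(key, i) + alpha * m) % P for i, m in enumerate(blocks)]

# Challenge: random block indices with random coefficients nu_i.
chal = [(rng.randrange(len(blocks)), rng.randrange(P)) for _ in range(10)]

# Server aggregates blocks and tags into one short response.
mu = sum(nu * blocks[i] for i, nu in chal) % P
sigma = sum(nu * tags[i] for i, nu in chal) % P

# Client verifies using only its secrets and the challenge.
assert sigma == (alpha * mu + sum(nu * prf(key, i) for i, nu in chal)) % P
print("proof verified")
```

A cheating server that lacks even one challenged block cannot produce a consistent (mu, sigma) except with negligible probability, which is the extraction property the security proofs formalize.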
Both schemes rely on homomorphic properties to aggregate a proof into one small authenticator value."} {"_id": "f1a818efa190959826a88df02ceb6f86c1d9ec9b", "title": "Verilog HDL model based thermometer-to-binary encoder with bubble error correction", "text": "This paper compares several approaches to developing the Verilog HDL model of the thermometer-to-binary encoder with bubble error correction. It has been demonstrated that implementations of different ideas to correct bubble errors yield circuits whose parameters vary tremendously in delay, area and power consumption. The shortest delay is achieved for the design synthesized from the model that mimics a human reading the temperature on a classic liquid-in-glass thermometer."} {"_id": "f9fafd8ea1190ffbc2757eed0f0a8bbff610c43e", "title": "Robust detection of non-motorized road users using deep learning on optical and LIDAR data", "text": "Detection of non-motorized road users, such as cyclists and pedestrians, is a challenging problem in collision warning/collision avoidance (CW/CA) systems as direct information (e.g. location, speed, and class) cannot be obtained from such users. In this paper, we propose a fusion of LIDAR data and a deep learning-based computer vision algorithm to substantially improve the detection of regions of interest (ROIs) and subsequent identification of road users. Experimental results on the KITTI object detection benchmark quantify the effectiveness of incorporating LIDAR data with region-based deep convolutional networks. Thus our work provides another step towards the goal of designing safe and smart transportation systems of the future."} {"_id": "cf4e54499ef2cf27ddda74975996036705600a18", "title": "Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks", "text": "Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance without the need of human intervention in most common cases."} {"_id": "51ce3edde4311ee97bf693ad4c2b4e0c286ed688", "title": "The infrastructure problem in HCI", "text": "HCI endeavors to create human-centered computer systems, but underlying technological infrastructures often stymie these efforts.
We outline three specific classes of user experience difficulties caused by underlying technical infrastructures, which we term constrained possibilities, unmediated interaction, and interjected abstractions. We explore how prior approaches in HCI have addressed these issues, and discuss new approaches that will be required for future progress. We argue that the HCI community must become more deeply involved with the creation of technical infrastructures. Doing so, however, requires a substantial expansion to the methodological toolbox of HCI."} {"_id": "bd6b3ef8cab804823b4ede2c9ceb6118a5dd9f0f", "title": "An examination of the celebrity endorsements and online customer reviews influence female consumers' shopping behavior", "text": "The goal of this study is to compare the influence of celebrity endorsements to online customer reviews on female shopping behavior. Based on the AIDMA and AISAS models, we design an experiment to investigate consumer responses to a search good and an experience good, respectively. The results revealed that a search good (shoes) endorsed by a celebrity in an advertisement evoked significantly more attention, desire, and action from the consumer than did an online customer review. We also found that online customer reviews scored higher than the celebrity endorsement on the scale of participants\u2019 memory, search and share attitudes toward the experience good (toner). Implications for marketers as well as suggestions for future research are discussed. © 2012 Elsevier Ltd. All rights reserved."} {"_id": "141ea59c86e184f60ab23a0537339545642eca5c", "title": "Hat-Delta --- One Right Does Make a Wrong", "text": "We outline two methods for locating bugs in a program. This is done by comparing computations of the same program with different input. At least one of these computations must produce a correct result, while exactly one must exhibit some erroneous behaviour. Firstly, reductions that are thought highly likely to be correct are eliminated from the search for the bug. Secondly, a program slicing technique is used to identify areas of code that are likely to be correct. Both methods have been implemented. In combination with algorithmic debugging they provide a system that quickly and accurately identifies bugs."} {"_id": "86686c1543dddd149b933a681c86077f8ac068de", "title": "Terabit/sec-class board-level optical interconnects through polymer waveguides using 24-channel bidirectional transceiver modules", "text": "We report here on the design, fabrication and characterization of an integrated optical data bus designed for terabit/sec-class module-to-module on-board data transfer using integrated optical transceivers. The parallel optical transceiver is based on a through-silicon-via (TSV) silicon carrier as the platform for integration of 24-channel VCSEL and photodiode arrays with CMOS ICs. The Si carrier also includes optical vias (holes) for optical access to conventional surface-emitting 850-nm optoelectronic (OE) devices. The 48-channel transceiver is flip-chip soldered to an organic carrier forming the transceiver Optomodule. The optical printed circuit board (o-PCB) is a typical FR4 board with a polymer waveguide layer added on top. A 48-channel flex-waveguide is fabricated separately and attached to the FR4 board.
Turning mirrors are fabricated into the waveguides and a lens array is attached to facilitate optical coupling. An assembly procedure has been developed to surface mount the Optomodule to the o-PCB using a ball grid array (BGA) process which provides both electrical and optical interconnections. Efficient optical coupling is achieved using a dual-lens optical system, with one lens array incorporated into the Optomodule and a second on the o-PCB. Fully functional Optomodules with 24 transmitter + 24 receiver channels were characterized with transmitters operating up to 20 Gb/s and receivers up to 15 Gb/s. Finally, two Optomodules were assembled onto an o-PCB and a full optical link demonstrated, achieving > 20 bidirectional links at 10 Gb/s. At 15 Gb/s, error-free operation was demonstrated for 15 channels in each direction, realizing a record o-PCB link with a 225 Gb/s bidirectional aggregate data rate."} {"_id": "38241bbdbae62ee2ceacc590681a18dc2564adec", "title": "PLUG: flexible lookup modules for rapid deployment of new protocols in high-speed routers", "text": "New protocols for the data link and network layer are being proposed to address limitations of current protocols in terms of scalability, security, and manageability. High-speed routers and switches that implement these protocols traditionally perform packet processing using ASICs which offer high speed, low chip area, and low power. But with inflexible custom hardware, the deployment of new protocols could happen only through equipment upgrades. While newer routers use more flexible network processors for data plane processing, due to power and area constraints lookups in forwarding tables are done with custom lookup modules. Thus most of the proposed protocols can only be deployed with equipment upgrades. To speed up the deployment of new protocols, we propose a flexible lookup module, PLUG (Pipelined Lookup Grid). We can achieve generality without losing efficiency because various custom lookup modules have the same fundamental features we retain: area dominated by memories, simple processing, and strict access patterns defined by the data structure. We implemented IPv4, Ethernet, Ethane, and SEATTLE in our dataflow-based programming model for the PLUG and mapped them to the PLUG hardware which consists of a grid of tiles. Throughput, area, power, and latency of PLUGs are close to those of specialized lookup modules."} {"_id": "01094798b20e96e1d029d6874577167f2214c7b6", "title": "Algorithmic improvements for fast concurrent Cuckoo hashing", "text": "Fast concurrent hash tables are an increasingly important building block as we scale systems to greater numbers of cores and threads. This paper presents the design, implementation, and evaluation of a high-throughput and memory-efficient concurrent hash table that supports multiple readers and writers. The design arises from careful attention to systems-level optimizations such as minimizing critical section length and reducing interprocessor coherence traffic through algorithm re-engineering. As part of the architectural basis for this engineering, we include a discussion of our experience and results adopting Intel's recent hardware transactional memory (HTM) support to this critical building block. We find that naively allowing concurrent access using a coarse-grained lock on existing data structures reduces overall performance with more threads. While HTM mitigates this slowdown somewhat, it does not eliminate it.
Algorithmic optimizations that benefit both HTM and designs for fine-grained locking are needed to achieve high performance.\n Our performance results demonstrate that our new hash table design---based around optimistic cuckoo hashing---outperforms other optimized concurrent hash tables by up to 2.5x for write-heavy workloads, even while using substantially less memory for small key-value items. On a 16-core machine, our hash table executes almost 40 million insert and more than 70 million lookup operations per second."} {"_id": "64d4af4af55a437ce5aa64dba345e8814cd12195", "title": "Information-Driven Dynamic Sensor Collaboration for Tracking Applications", "text": "This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a \"sensor collaboration\" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications. I. INTRODUCTION The technology of wirelessly networked micro-sensors promises to revolutionize the way we live, work, and interact with the physical environment. For example, tiny, inexpensive sensors can be \"sprayed\" onto roads, walls, or machines to monitor and detect a variety of interesting events such as highway traffic, wildlife habitat condition, forest fire, manufacturing job flow, and military battlefield situation."} {"_id": "0b9213651d939b8195b0f4225fe409af6459effb", "title": "Estimating 3D Hand Pose from a Cluttered Image", "text": "A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose, probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. The performance of this clutter-tolerant approach is demonstrated in quantitative experiments with hundreds of real hand images."} {"_id": "e1b2ecd775a41cd5833fb59460738617997d7d84", "title": "Thermal degradation of DRAM retention time: Characterization and improving techniques", "text": "Variation of DRAM retention time and the reliability problem induced by thermal stress were investigated. Most of the DRAM cells revealed 2-state retention time with thermal stress.
The effects of hydrogen annealing condition and fluorine implantation on the variation of retention time and reliability are discussed."} {"_id": "9c1b9598f82f9ed7d75ef1a9e627496759aa2387", "title": "Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management", "text": "We illuminate the myriad of opportunities for research where supply chain management (SCM) intersects with data science, predictive analytics, and big data, collectively referred to as DPB. We show that these terms are not only becoming popular but are also relevant to supply chain research and education. Data science requires both domain knowledge and a broad set of quantitative skills, but there is a dearth of literature on the topic and many questions. We call for research on skills that are needed by SCM data scientists and discuss how such skills and domain knowledge affect the effectiveness of an SCM data scientist. Such knowledge is crucial to develop future supply chain leaders. We propose definitions of data science and predictive analytics as applied to SCM. We examine possible applications of DPB in practice and provide examples of research questions from these applications, as well as examples of research questions employing DPB that stem from management theories. Finally, we propose specific steps interested researchers can take to respond to our call for research on the intersection of SCM and DPB."} {"_id": "63e1e4ddad1621af02b2f8ad8a0bf5dbd47abf38", "title": "An integrated charger using segmented windings of interior permanent magnet motor based on 3 phase with 9 windings", "text": "Connecting the plug of an electric vehicle (EV) directly into the grid source charges the battery. The electric power generated from the grid goes through the electric motor, crosses the bidirectional inverter used as a three-boost rectifier and is received by the load (battery). An innovative fast-integrated charger based on segmented windings of a permanent magnet synchronous motor (PMSM) is introduced for plug-in electric vehicles (PEV). The three phases with nine windings of the PMSM are used during the traction and charging modes. Three (or single) phases of the grid source are directly connected to the neutral points of segmented PMSM windings to provide high power density to the battery. The system configuration and operation of charging mode are explained in detail for the proposed integrated on-board charger. The simulation models are designed by using Ansoft Maxwell, Ansys Maxwell circuit editor, and Matlab/Simulink software, and simulation results are presented to verify the performance of the proposed system. In addition, the experimental setup of the integrated system is in progress."} {"_id": "ac97e503f0193873e30270aa768d53a4517ff657", "title": "From the Internet of Computers to the Internet of Things", "text": "This paper discusses the vision, the challenges, possible usage scenarios and technological building blocks of the \u201cInternet of Things\u201d. In particular, we consider RFID and other important technological developments such as IP stacks and web servers for smart everyday objects.
The paper concludes with a discussion of social and governance issues that are likely to arise as the vision of the Internet of Things becomes a reality."} {"_id": "5dd82357b16f9893ca95e29c65a8974fd94b55f4", "title": "Classification of Imbalanced Data Using Synthetic Over-Sampling Techniques", "text": "Classification of Imbalanced Data Using Synthetic Over-Sampling Techniques"} {"_id": "5685a394b25fcb27b6ad91f7325f2e60a9892e2a", "title": "Query Optimization Techniques In Graph Databases", "text": "Graph databases (GDB) have recently arisen to overcome the limits of traditional databases for storing and managing data with graph-like structure. Today, they represent a requirement for many applications that manage graph-like data, like social networks. Most of the techniques applied to optimize queries in graph databases have been used in traditional databases, distributed systems,... or they are inspired by graph theory. However, their reuse in graph databases should take care of the main characteristics of graph databases, such as dynamic structure, highly interconnected data, and the ability to efficiently access data relationships. In this paper, we survey the query optimization techniques in graph databases. In particular, we focus on the features they have introduced to improve querying graph-like data."} {"_id": "045a975c1753724b3a0780673ee92b37b9827be6", "title": "Wait-Free Synchronization", "text": "A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes. The problem of constructing a wait-free implementation of one data object from another lies at the heart of much recent work in concurrent algorithms, concurrent data structures, and multiprocessor architectures. First, we introduce a simple and general technique, based on reduction to a consensus protocol, for proving statements of the form, \u201cthere is no wait-free implementation of X by Y.\u201d We derive a hierarchy of objects such that no object at one level has a wait-free implementation in terms of objects at lower levels. In particular, we show that atomic read/write registers, which have been the focus of much recent attention, are at the bottom of the hierarchy: they cannot be used to construct wait-free implementations of many simple and familiar data types. Moreover, classical synchronization primitives such as test&set and fetch&add, while more powerful than read and write, are also computationally weak, as are the standard message-passing primitives. Second, nevertheless, we show that there do exist simple universal objects from which one can construct a wait-free implementation of any sequential object."} {"_id": "0541d5338adc48276b3b8cd3a141d799e2d40150", "title": "MapReduce: Simplified Data Processing on Large Clusters", "text": "MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks.
Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day."} {"_id": "2032be0818be583f159cc75f2022ed78222fb772", "title": "Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization", "text": "This paper proposes a two-phase scheme for removing salt-and-pepper impulse noise. In the first phase, an adaptive median filter is used to identify pixels which are likely to be contaminated by noise (noise candidates). In the second phase, the image is restored using a specialized regularization method that applies only to those selected noise candidates. In terms of edge preservation and noise suppression, our restored images show a significant improvement compared to those restored by using just nonlinear filters or regularization methods only. Our scheme can remove salt-and-pepper noise with a noise level as high as 90%."} {"_id": "43089ffed8c6c653f6994fb96f7f48bbcff2a598", "title": "Adaptive median filters: new algorithms and results", "text": "Based on two types of image models corrupted by impulse noise, we propose two new algorithms for adaptive median filters. They have variable window size for removal of impulses while preserving sharpness. The first one, called the ranked-order based adaptive median filter (RAMF), is based on a test for the presence of impulses in the center pixel itself followed by a test for the presence of residual impulses in the median filter output. The second one, called the impulse size based adaptive median filter (SAMF), is based on the detection of the size of the impulse noise. It is shown that the RAMF is superior to the nonlinear mean L(p) filter in removing positive and negative impulses while simultaneously preserving sharpness; the SAMF is superior to Lin's (1988) adaptive scheme because it is simpler with better performance in removing the high density impulsive noise as well as nonimpulsive noise and in preserving the fine details. Simulations on standard images confirm that these algorithms are superior to standard median filters."} {"_id": "1ef2b855e8a447b17ca7470ae5f3fff667d2fc28", "title": "AP3: cooperative, decentralized anonymous communication", "text": "This paper describes a cooperative overlay network that provides anonymous communication services for participating users. The Anonymizing Peer-to-Peer Proxy (AP3) system provides clients with three primitives: (i) anonymous message delivery, (ii) anonymous channels, and (iii) secure pseudonyms. AP3 is designed to be lightweight, low-cost and provides \"probable innocence\" anonymity to participating users, even under a large-scale coordinated attack by a limited fraction of malicious overlay nodes. Additionally, we use AP3's primitives to build novel anonymous group communication facilities (multicast and anycast), which shield the identity of both publishers and subscribers."} {"_id": "683c8f5c60916751bb23f159c86c1f2d4170e43f", "title": "Probabilistic Encryption", "text": ""} {"_id": "7bbf34f4766a424d8fa934f5d1bda580e9ae814c", "title": "THE ROLE OF MUSIC COMMUNICATION IN CINEMA", "text": "[Authors\u2019 note: This paper is an abbreviated version of a chapter included in a forthcoming book entitled Music Communication (D. Miell, R. MacDonald, & D.
Hargreaves, Eds.), to be published by Oxford University Press.] Past research leaves no doubt about the efficacy of music as a means of communication. In the following pages, after presenting a general model of music communication, the authors will introduce models \u2013 both empirical and theoretical \u2013 of film music perception and the role of music in film, referencing some of the most significant research investigating the relationship between sound and image in the cinematic context. We shall then enumerate the many ways in which the motion picture soundtrack can supplement, enhance, and expand upon the meaning of a film\u2019s narrative. The relationship between the auditory and visual components in cinema is both active and dynamic, affording a multiplicity of possible relations that can evolve \u2013 sometimes dramatically \u2013 as the narrative unfolds. This paper will take a cognitive approach to the study of musical communication in cinema. As a result, much credence will be given to the results of empirical research investigating human cognitive processing in response to the motion picture experience. In conclusion, the present authors will argue for a more inclusive definition of the term \u201cfilm music\u201d than that utilized or implied in previous publications. In our view, film music is one component of a sonic fabric that includes the musical score, ambient sound, dialogue, sound effects, and silence. The functions of these constituent elements often overlap or interact with one another, creating a harmonious counterpoint to the visual image. 1. A MODEL OF MUSIC COMMUNICATION Many studies have investigated various aspects of musical communication as a form of expression (Bengtsson & Gabrielsson, 1983; Clarke, 1988; Clynes, 1983; Gabrielsson, 1988; Seashore, 1967/1938; Senju & Ohgushi, 1987; Sundberg, 1988; Sundberg et al., 1983). A tripartite communication model was proposed by Campbell and Heller (1980), consisting simply of a composer, performer, and listener. Using this previous model as a foundation, Kendall and Carterette (1990) elegantly expanded upon this model of music communication, detailing a process involving multiple states of coding, decoding, and recoding. Kendall and Carterette suggest that this process involves the \u201cgrouping and parsing of elementary thought units\u201d (p. 132); these \u201cthought units\u201d (metasymbols) are mental representations involved in the process of creating, performing, and listening to musical sound. 2. MODELS OF FILM MUSIC COMMUNICATION 2.1 Empirical Evidence Several researchers have proposed models specific to the perception and cognition of music within the cinematic context. Initiating this systematic effort, Marshall and Cohen\u2019s (1988) bipartite \u201ccongruence-associationist\u201d model suggests that the meaning of a film is altered by the music as the result of two complex cognitive processes. Based upon subject responses, the researchers determined that musical sound directly affects subject ratings on the Potency (strong-weak) and Activity (active-passive) dimensions, while the Evaluative dimension (good-bad) relies on the degree of congruence between the audio and visual components on all three dimensions, as determined by a \u201ccomparator\u201d component. The second part of the model describes how musical meaning is ascribed to the film. Marshall and Cohen claim that attention is directed to the overlapping congruent meaning of the music and the film.
Referential meanings associated with the music are ascribed to the overlapped (congruent) audio-visual components upon which attention is focused. As a result, \u201cthe music alters meaning of a particular aspect of the film\u201d (1988, p. 109). Marshall and Cohen also acknowledge the important role played by temporal characteristics of the sound and image, stating that \u201cthe assignment of accent to events will affect retention, processing, and interpretation\u201d (1988, p. 108). Incorporation of this important component of the developing model was provided by Lipscomb and Kendall\u2019s (1994) Film Music Paradigm, in which two implicit processes are considered as the basis for whether attentional focus is shifted to the musical component or whether it is likely to remain at the subconscious \u2013 cognitively \u201cinaudible\u201d \u2013 level. The authors suggested that these two implicit processes include an association judgment (similar to Marshall and Cohen\u2019s assessment of \u201ccongruence\u201d) and an evaluation of the accent structure relationship between the auditory and visual components. Based on the results of a series of three experiments utilizing stimuli ranging from extremely simple, single-object animations to actual movie excerpts, Lipscomb (1995) determined that the role of the two implicit judgments appears to be dynamic such that, with simple stimuli (such as that used in Lipscomb, 1995, Experiment 1 and Marshall & Cohen, 1988), accent structure alignment plays a dominant role. As the stimuli become more complex (e.g., multi-object animations and actual movie excerpts), the primary determinant of meaning in the auditory domain appears to shift to the associational judgment, with the accent structure alignment aspect receding to a supporting role, i.e., focusing audience attention on certain aspects of the visual image (Boltz, 2001). The most complex and fully developed model of film music perception proposed to date is Cohen\u2019s (2001) \u201ccongruence-associationist framework for understanding film-music communication\u201d (p. 259; see Figure 1). This multi-stage model attempts to account for meaning derived from the spoken narrative, visual images, and musical sound. Level A represents bottom-up processing based on physical features derived from input to each perceptual modality. Level B represents the determination of cross-modal congruence, based on both semantic (associational) and syntactic (temporal) grouping features. Level D represents top-down processing, determined by an individual\u2019s past experience and the retention of that experience in long term memory. According to this model, the input from levels B (bottom-up) and D (top-down) meet in the observer\u2019s conscious mind (level C), where information is prepared for transfer to short term memory. In its details, clearly documented in Cohen (2001), this model is based on an assumption of visual primacy, citing several studies that have suggested a subservient role for the auditory component (Bolivar et al., 1994; Driver, 1997; Thompson et al., 1994). Though a common assumption throughout the literature, the present authors would like to express reservation about this assumption and suggest that additional research is required before such a claim can be supported definitively. Figure 1. Cohen\u2019s \u201ccongruence-associationist framework.\u201d
2.2 Theoretical Evidence Richard Wagner, creator of the idealized Gesamtkunstwerk in the form of the 19th-century music drama, claimed that \u201cas pure organ of the feeling, [music] speaks out the very thing which word speech in itself can not speak out ... that which, looked at from the standpoint of our human intellect, is the unspeakable\u201d (Wagner 1849/1964, p. 217). According to Suzanne K. Langer, \u201cmusic has all the earmarks of a true symbolism, except one: the existence of an assigned connotation\u201d and, though music is clearly a symbolic form, it remains an \u201cunconsummated symbol\u201d (1942, p. 240). In order for a film to make the greatest possible impact, there must be an interaction between the verbal dialogue (consummated symbol), the cinematic images (also, typically, a consummated symbol), and the musical score (unconsummated symbol). To answer the question \u201cHow does music in film narration create a point of experience for the spectator?,\u201d Gorbman (1987) suggests three methods by which music can \u201csignify\u201d in the context of a narrative film. Purely musical signification results from the highly coded syntactical relationships inherent in the association of one musical tone with another. Patterns of tension and release provide a sense of organization and meaning to the musical sound, apart from any extramusical association that might exist. Cultural musical codes are exemplified by music that has come to be associated with a certain mood or state of mind. These associations have been further canonized by the Hollywood film industry into certain conventional expectations \u2013 implicitly anticipated by enculturated audience members \u2013 determined by the narrative content of a given scene. Finally, cinematic codes influence musical meaning merely due to the placement of musical sound within the filmic context. Opening credit and end title music illustrate this type of signification, as well as recurring musical themes that come to represent characters or situations within the film. There is a commonly held belief that film music is not to be heard (Burt, 1994; Gorbman, 1987). Instead, it is believed to fulfill its role in communicating the underlying psychological drama of the narrative at a subconscious level (Lipscomb, 1989). There is, however, certain music that is intended to be heard by the audience as part of the cinematic diegesis, i.e., \u201cthe narratively implied spatiotemporal world of the actions and characters\u201d (Gorbman 1987, p. 21). This \u201cworld\u201d includes, naturally, a sonic component. Therefore, all sounds that are understood to be heard by characters in the narrative \u2013 including music \u2013 are referred to as diegetic, while those that are not part of the diegesis (e.g., the orchestral score) are referred to as nondiegetic. This would suggest that diegetic music is more likely to be processed at the conscious level while nondiegetic music might remain at the subconscious level, though research is needed to determine whether this is true, in fact. It is worth noting also, that the source of diegetic sound can be either seen or unseen. Michel Chion (199"} {"_id": "3a116f2ae10a979c18787245933cb9f984569599", "title": "Data Collection in Wireless Sensor Networks with Mobile Elements: A Survey", "text": "Wireless sensor networks (WSNs) have emerged as an effective solution for a wide range of applications.
Most of the traditional WSN architectures consist of static nodes which are densely deployed over a sensing area. Recently, several WSN architectures based on mobile elements (MEs) have been proposed. Most of them exploit mobility to address the problem of data collection in WSNs. In this article we first define WSNs with MEs and provide a comprehensive taxonomy of their architectures, based on the role of the MEs. Then we present an overview of the data collection process in such a scenario, and identify the corresponding issues and challenges. On the basis of these issues, we provide an extensive survey of the related literature. Finally, we compare the underlying approaches and solutions, with hints to open problems and future research directions."} {"_id": "3b290393afef51b374f9daf9856ec3c1a5fa2968", "title": "A Successive Approximation Recursive Digital Low-Dropout Voltage Regulator With PD Compensation and Sub-LSB Duty Control", "text": "This paper presents a recursive digital low-dropout (RLDO) regulator that improves response time, quiescent power, and load regulation dynamic range over prior digital LDO designs by 1\u20132 orders of magnitude. The proposed RLDO enables a practical digital replacement to analog LDOs by using an SAR-like binary search algorithm in a coarse loop and a sub-LSB pulse width modulation duty control scheme in a fine loop. A proportional-derivative compensation scheme is employed to ensure stable operation independent of load current, the size of the output decoupling capacitor, and clock frequency. Implemented in 0.0023 mm\u00b2 in 65 nm CMOS, the 7-bit RLDO achieves, at a 0.5-V input, a response time of 15.1 ns with a figure of merit of 199.4 ps, along with stable operation across a 20 000\u00d7 dynamic load range."} {"_id": "d429ddfb32f921e630ded47a8fd1bc424f7283d9", "title": "Imaging Cognition II: An Empirical Review of 275 PET and fMRI Studies", "text": "Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have been extensively used to explore the functional neuroanatomy of cognitive functions. Here we review 275 PET and fMRI studies of attention (sustained, selective, Stroop, orientation, divided), perception (object, face, space/motion, smell), imagery (object, space/motion), language (written/spoken word recognition, spoken/no spoken response), working memory (verbal/numeric, object, spatial, problem solving), semantic memory retrieval (categorization, generation), episodic memory encoding (verbal, object, spatial), episodic memory retrieval (verbal, nonverbal, success, effort, mode, context), priming (perceptual, conceptual), and procedural memory (conditioning, motor, and nonmotor skill learning). To identify consistent activation patterns associated with these cognitive operations, data from 412 contrasts were summarized at the level of cortical Brodmann's areas, insula, thalamus, medial-temporal lobe (including hippocampus), basal ganglia, and cerebellum. For perception and imagery, activation patterns included primary and secondary regions in the dorsal and ventral pathways. For attention and working memory, activations were usually found in prefrontal and parietal regions. For language and semantic memory retrieval, typical regions included left prefrontal and temporal regions. For episodic memory encoding, consistently activated regions included left prefrontal and medial-temporal regions. For episodic memory retrieval, activation patterns included prefrontal, medial-temporal, and posterior midline regions.
For priming, deactivations in prefrontal (conceptual) or extrastriate (perceptual) regions were consistently seen. For procedural memory, activations were found in motor as well as in non-motor brain areas. Analysis of regional activations across cognitive domains suggested that several brain regions, including the cerebellum, are engaged by a variety of cognitive challenges. These observations are discussed in relation to functional specialization as well as functional integration."} {"_id": "ac6fdfa9d2ca8ec78ea1b2c8807ab9147b8a526d", "title": "Exploring Synergies between Machine Learning and Knowledge Representation to Capture Scientific Knowledge", "text": "In this paper we explore synergies between the machine learning and knowledge representation fields by considering how scientific knowledge is represented in these areas. We illustrate some of the knowledge obtained through machine learning methods, providing two contrasting examples of such models: probabilistic graphical models (aka Bayesian networks) and artificial neural networks (including deep learning networks). From knowledge representation, we give an overview of ontological representations, qualitative reasoning, and planning. Then we discuss potential synergies that would benefit both areas."} {"_id": "e7b50e3f56e21fd2a5eb34923d427a0bc6dd8905", "title": "Coupling Matrix Synthesis for a New Class of Microwave Filter Configuration", "text": "In this paper a new approach to the synthesis of coupling matrices for microwave filters is presented. The new approach represents an advance on existing direct and optimization methods for coupling matrix synthesis in that it will exhaustively discover all possible coupling matrix solutions for a network if more than one exists. This enables a selection to be made of the set of coupling values, resonator frequency offsets, parasitic coupling tolerance etc that will be best suited to the technology it is intended to realize the microwave filter with. To demonstrate the use of the method, the case of the recently-introduced \u2018extended box\u2019 (EB) coupling matrix configuration is taken. The EB represents a new class of filter configuration featuring a number of important advantages, one of which is the existence of multiple coupling matrix solutions for each prototype filtering function, e.g., 16 for 8th-degree cases. This case is taken as an example to demonstrate the use of the synthesis method \u2013 yielding one solution suitable for dual-mode realization and one where some couplings are small enough to neglect. Index Terms \u2014 Coupling matrix, filter synthesis, Groebner basis, inverted characteristic, multiple solutions."} {"_id": "5c876c7c26fec05ba1e3876f49f44de219838629", "title": "An Artificial Bee Colony-Based COPE Framework for Wireless Sensor Network", "text": "In wireless communication, network coding is one of the intelligent approaches to process the packets before transmitting for efficient information exchange. The goal of this work is to enhance throughput by using the intelligent technique, which may give comparatively better optimization. This paper introduces a biologically-inspired coding approach called Artificial Bee Colony Network Coding (ABC-NC), a modification of the COPE framework. The existing COPE and its variants are probabilistic approaches, which may not give good results in all of the real-time scenarios.
Therefore, it needs some intelligent technique to find better packet combinations at intermediate nodes before forwarding to optimize the energy and maximize the throughput in wireless networks. This paper proposes ABC-NC over the existing COPE framework for the wireless environment."} {"_id": "73f38ffa54ca4dff09d42cb18461187b9315a735", "title": "Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition", "text": "Adaptive sparse coding methods learn a possibly overcomplete set of basis functions, such that natural image patches can be reconstructed by linearly combining a small subset of these bases. The applicability of these methods to visual object recognition tasks has been limited because of the prohibitive cost of the optimization algorithms required to compute the sparse representation. In this work we propose a simple and efficient algorithm to learn basis functions. After training, this model also provides a fast and smooth approximator to the optimal representation, achieving even better accuracy than exact sparse coding algorithms on visual object recognition tasks."} {"_id": "6c9b39fe6b5615a012a99eac0aaaf527343feefb", "title": "What are mobile developers asking about? A large scale study using stack overflow", "text": "The popularity of mobile devices has been steadily growing in recent years. These devices heavily depend on software from the underlying operating systems to the applications they run. Prior research showed that mobile software is different than traditional, large software systems. However, to date most of our research has been conducted on traditional software systems. Very little work has focused on the issues that mobile developers face. Therefore, in this paper, we use data from the popular online Q&A site, Stack Overflow, and analyze 13,232,821 posts to examine what mobile developers ask about. We employ Latent Dirichlet allocation-based topic models to help us summarize the mobile-related questions. Our findings show that developers are asking about app distribution, mobile APIs, data management, sensors and context, mobile tools, and user interface development. We also determine what popular mobile-related issues are the most difficult, explore platform specific issues, and investigate the types (e.g., what, how, or why) of questions mobile developers ask. Our findings help highlight the challenges facing mobile developers that require more attention from the software engineering research and development communities in the future and establish a novel approach for analyzing questions asked on Q&A forums."} {"_id": "f3a1246d3a0c7de004db9ef9f312bcedb5e22532", "title": "Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval", "text": "Thanks to the success of deep learning, cross-modal retrieval has made significant progress recently. However, there still remains a crucial bottleneck: how to bridge the modality gap to further enhance the retrieval accuracy. In this paper, we propose a self-supervised adversarial hashing (SSAH) approach, which lies among the early attempts to incorporate adversarial learning into cross-modal hashing in a self-supervised fashion. The primary contribution of this work is that two adversarial networks are leveraged to maximize the semantic correlation and consistency of the representations between different modalities. In addition, we harness a self-supervised semantic network to discover high-level semantic information in the form of multi-label annotations. 
Such information guides the feature learning process and preserves the modality relationships in both the common semantic space and the Hamming space. Extensive experiments carried out on three benchmark datasets validate that the proposed SSAH surpasses the state-of-the-art methods."} {"_id": "58fcd1e5ca46415ff5b06a84fd08160e43b13205", "title": "Arthrodesis with Intramedular Fixation in Posttraumatic Arthrosis of the Midfoot: A Case Report", "text": "We present two middle-aged men with posttraumatic arthrosis of the midfoot. Both patients suffered from severe pain and their foot was unable to bear weight. Both were operated on using a new 6.5 mm fusion bolt and additional screws. In both cases, arthrodesis was a mandatory and effective intervention. After surgical treatment both patients were pain free and able to walk without crutches and return to daily work."} {"_id": "4c0074521b708af5009526a8bacfab9fcdf48f96", "title": "Wavelet analysis of EEG for seizure detection: Coherence and phase synchrony estimation.", "text": "This paper deals with the wavelet analysis method for seizure detection in EEG time series and coherence estimation. The main part of the paper presents the basic principles of signal decomposition in connection with the EEG frequency bands. The wavelet analysis method has been used for detection of seizure onset. The wavelet filtered signal is used for the computation of the spectral power ratio. The results show that our method can identify pre-seizure, seizure, post-seizure and non-seizure phases. When dealing with seizure detection and prediction problems it is important to identify seizure precursor dynamics and to obtain information about the onset and spread of seizures. Therefore, in the second part the coherence and phase synchrony during pre-seizure, seizure, post-seizure and non-seizure phases are computed. We expect this method to provide more insight into dynamic aspects of the seizure-generating process."} {"_id": "3b1b13271544fb55c227c980f6452bb945ae58d0", "title": "Evaluating Digital Forensic Options for the Apple iPad", "text": "The iPod Touch, iPhone and iPad from Apple are among the most popular mobile computing platforms in use today. These devices are of forensic interest because of their high adoption rate and potential for containing digital evidence. The uniformity in their design and underlying operating system (iOS) also allows forensic tools and methods to be shared across product types. This paper analyzes the tools and methods available for conducting forensic examinations of the Apple iPad. These include commercial software products, updated methodologies based on existing jailbreaking processes and the analysis of the device backup contents provided by iTunes. While many of the available commercial tools offer promise, the results of our analysis indicate that the most comprehensive examination of the iPad requires jailbreaking to perform forensic duplication and manual analysis of its media content."} {"_id": "9c3cfc2c07a1a7e3b456db463f527340221e9f73", "title": "Enhancing scholarly use of digital libraries: A comparative survey and review of bibliographic metadata ontologies", "text": "The HathiTrust Research Center (HTRC) is engaged in the development of tools that will give scholars the ability to analyze the HathiTrust digital library's 14 million volume corpus. A cornerstone of the HTRC's digital infrastructure is the workset -- a kind of scholar-built research collection intended for use with the HTRC's analytics platform.
Because more than 66% of the digital corpus is subject to copyright restrictions, scholarly users remain dependent upon the descriptive accounts provided by traditional metadata records in order to identify and gather together bibliographic resources for analysis. This paper compares the MADSRDF/MODSRDF, Bibframe, schema.org, BIBO, and FaBiO ontologies by assessing their suitability for employment by the HTRC to meet scholars' needs. These include distinguishing among multiple versions of the same work; representing the complex historical and physical relationships among those versions; and identifying and providing access to finer grained bibliographic entities, e.g., poems, chapters, sections, and even smaller segments of content."} {"_id": "ae081edc60a62b1b1d542167dbe716ce7c5ec9ff", "title": "Decreased gut microbiota diversity, delayed Bacteroidetes colonisation and reduced Th1 responses in infants delivered by caesarean section.", "text": "OBJECTIVE\nThe early intestinal microbiota exerts important stimuli for immune development, and a reduced microbial exposure as well as caesarean section (CS) has been associated with the development of allergic disease. Here we address how microbiota development in infants is affected by mode of delivery, and relate differences in colonisation patterns to the maturation of a balanced Th1/Th2 immune response.\n\n\nDESIGN\nThe postnatal intestinal colonisation pattern was investigated in 24 infants, born vaginally (15) or by CS (nine). The intestinal microbiota were characterised using pyrosequencing of 16S rRNA genes at 1 week and 1, 3, 6, 12 and 24 months after birth. Venous blood levels of Th1- and Th2-associated chemokines were measured at 6, 12 and 24 months.\n\n\nRESULTS\nInfants born through CS had lower total microbiota diversity during the first 2 years of life. CS delivered infants also had a lower abundance and diversity of the Bacteroidetes phylum and were less often colonised with the Bacteroidetes phylum. Infants born through CS had significantly lower levels of the Th1-associated chemokines CXCL10 and CXCL11 in blood.\n\n\nCONCLUSIONS\nCS was associated with a lower total microbial diversity, delayed colonisation of the Bacteroidetes phylum and reduced Th1 responses during the first 2 years of life."} {"_id": "986b967c4bb2a7c4ef753a41fc625530828be503", "title": "Whole-function vectorization", "text": "Data-parallel programming languages are an important component in today's parallel computing landscape. Among those are domain-specific languages like shading languages in graphics (HLSL, GLSL, RenderMan, etc.) and \"general-purpose\" languages like CUDA or OpenCL. Current implementations of those languages on CPUs solely rely on multi-threading to implement parallelism and ignore the additional intra-core parallelism provided by the SIMD instruction set of those processors (like Intel's SSE and the upcoming AVX or Larrabee instruction sets). In this paper, we discuss several aspects of implementing data-parallel languages on machines with SIMD instruction sets. Our main contribution is a language- and platform-independent code transformation that performs whole-function vectorization on low-level intermediate code given by a control flow graph in SSA form. We evaluate our technique in two scenarios: First, incorporated in a compiler for a domain-specific language used in realtime ray tracing. Second, in a stand-alone OpenCL driver.
We observe average speedup factors of 3.9 for the ray tracer and factors between 0.6 and 5.2 for different OpenCL kernels."} {"_id": "62659da8c3d0a450e6a528ad13f94f56d2621759", "title": "Argumentation Theory: A Very Short Introduction", "text": "Since the time of the ancient Greek philosophers and rhetoricians, argumentation theorists have searched for the requirements that make an argument correct, by some appropriate standard of proof, by examining the errors of reasoning we make when we try to use arguments. These errors have long been called fallacies, and the logic textbooks have for over 2000 years tried to help students to identify these fallacies, and to deal with them when they are encountered. The problem was that deductive logic did not seem to be much use for this purpose, and there seemed to be no other obvious formal structure that could usefully be applied to them. The radical approach taken by Hamblin (1970) was to refashion the concept of an argument to think of it not just as an arbitrarily designated set of propositions, but as a move one party makes in a dialog to offer premises that may be acceptable to another party who doubts the conclusion of the argument. Just after Hamblin's time a school of thought called informal logic grew up that wanted to take a new practical approach to teaching students skills of critical thinking by going beyond deductive logic to seek other methods for analyzing and evaluating arguments. Around the same time, an interdisciplinary group of scholars associated with the term 'argumentation', coming from fields like speech communication, joined with the informal logic group to help build up such practical methods and apply them to real examples of argumentation (Johnson and Blair, 1987). The methods that have been developed so far are still in a process of rapid evolution. More recently, improvements in them have been due to some computer scientists joining the group, and to collaborative research efforts between argumentation theorists and computer scientists. Another recent development has been the adaption of argumentation models and techniques to fields in artificial intelligence, like multi-agent systems and artificial intelligence for legal reasoning. In a short paper, it is not possible to survey all these developments. The best that can be done is to offer an introduction to some of the basic concepts and methods of argumentation theory as they have evolved to the present point, and to briefly indicate some problems and limitations in them. 1. Arguments and Argumentation There are four tasks undertaken by argumentation, or informal logic, as it is also often called: identification, analysis, evaluation and invention. The task of identification \u2026"} {"_id": "62feb51dbb8c3a94cbfc91c950f39dc2c7506e1a", "title": "Super Normal Vector for Activity Recognition Using Depth Sequences", "text": "This paper presents a new framework for human activity recognition from video sequences captured by a depth camera. We cluster hypersurface normals in a depth sequence to form the polynormal which is used to jointly characterize the local motion and shape information. In order to globally capture the spatial and temporal orders, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time grids. We then propose a novel scheme of aggregating the low-level polynormals into the super normal vector (SNV) which can be seen as a simplified version of the Fisher kernel representation. 
In the extensive experiments, we achieve classification results superior to all previously published results on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D."} {"_id": "a90b9b5edac31a4320f2a003fef519a399b67f6b", "title": "A Seed-Based Method for Generating Chinese Confusion Sets", "text": "In natural language, people often misuse a word (called a \u201cconfused word\u201d) in place of other words (called \u201cconfusing words\u201d). In misspelling corrections, many approaches to finding and correcting misspelling errors are based on a simple notion called a \u201cconfusion set.\u201d The confusion set of a confused word consists of confusing words. In this article, we propose a new method of building Chinese character confusion sets.\n Our method is composed of two major phases. In the first phase, we build a list of seed confusion sets for each Chinese character, which is based on measuring similarity in character pinyin or similarity in character shape. In this phase, all confusion sets are constructed manually, and the confusion sets are organized into a graph, called a \u201cseed confusion graph\u201d (SCG), in which vertices denote characters and edges are pairs of characters in the form (confused character, confusing character).\n In the second phase, we extend the SCG by acquiring more pairs of (confused character, confusing character) from a large Chinese corpus. For this, we use several word patterns (or patterns) to generate new confusion pairs and then verify the pairs before adding them into a SCG. Comprehensive experiments show that our method of extending confusion sets is effective. Also, we shall use the confusion sets in Chinese misspelling corrections to show the utility of our method."} {"_id": "393ddf850d806c4eeaec52a1e2ea4c4dcc5c76ee", "title": "Learning Over Long Time Lags", "text": "The advantage of recurrent neural networks (RNNs) in learning dependencies between time-series data has distinguished RNNs from other deep learning models. Recently, many advances have been proposed in this emerging field. However, there is a lack of a comprehensive review of memory models in RNNs in the literature. This paper provides a fundamental review of RNNs and the long short-term memory (LSTM) model, then surveys recent advances in different memory enhancements and learning techniques for capturing long-term dependencies in RNNs."} {"_id": "488d861cd5122ae7e4ac89ff082b159c4889870c", "title": "Ethical Artificial Intelligence", "text": "First Edition. Please send typo and error reports, and any other comments, to hibbard@wisc.edu."} {"_id": "649922386f1222a2e64c1c80bcc171431c070e92", "title": "Twitter Part-of-Speech Tagging for All: Overcoming Sparse and Noisy Data", "text": "Part-of-speech information is a pre-requisite in many NLP algorithms. However, Twitter text is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style. We present a detailed error analysis of existing taggers, motivating a series of tagger augmentations which are demonstrated to improve performance. We identify and evaluate techniques for improving English part-of-speech tagging performance in this genre. Further, we present a novel approach to system combination for the case where available taggers use different tagsets, based on vote-constrained bootstrapping with unlabeled data.
Coupled with assigning prior probabilities to some tokens and handling of unknown words and slang, we reach 88.7% tagging accuracy (90.5% on development data). This is a new high in PTB-compatible tweet part-of-speech tagging, reducing token error by 26.8% and sentence error by 12.2%. The model, training data and tools are made available."} {"_id": "0d94a0a51cdecbdec81c97d2040ed28d3e9c96de", "title": "Photobook: Content-based manipulation of image databases", "text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These query tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on text annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We discuss three types of Photobook descriptions in detail: one that allows search based on appearance, one that uses 2-D shape, and a third that allows search based on textural properties. These image content descriptions can be combined with each other and with text-based descriptions to provide a sophisticated browsing and search capability. In this paper we demonstrate Photobook on databases containing images of people, video keyframes, hand tools, fish, texture swatches, and 3-D medical data."} {"_id": "fa2603efaf717974c77162c93d800defae61a129", "title": "Face recognition/detection by probabilistic decision-based neural network", "text": "This paper proposes a face recognition system based on probabilistic decision-based neural networks (PDBNN). With technological advances in microelectronics and vision systems, high-performance automatic biometric recognition techniques are now becoming economically feasible. Among all the biometric identification methods, face recognition has attracted much attention in recent years because it has the potential to be the most nonintrusive and user-friendly. The PDBNN face recognition system consists of three modules: First, a face detector finds the location of a human face in an image. Then an eye localizer determines the positions of both eyes in order to generate meaningful feature vectors. The facial region proposed contains eyebrows, eyes, and nose, but excludes the mouth (eye-glasses will be allowed). Lastly, the third module is a face recognizer. The PDBNN can be effectively applied to all three modules. It adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. The paper demonstrates a successful application of PDBNN to face recognition applications on two public (FERET and ORL) and one in-house (SCR) databases. Regarding performance, experimental results on the three databases, such as recognition accuracies as well as false rejection and false acceptance rates, are elaborated. As to processing speed, the whole recognition process (including PDBNN processing for eye localization, feature extraction, and classification) consumes approximately one second on a Sparc10, without using a hardware accelerator or co-processor."} {"_id": "a6f1dfcc44277d4cfd8507284d994c9283dc3a2f", "title": "Eigenfaces for Recognition", "text": "We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals.
The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as \"eigenfaces,\" because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture."} {"_id": "b217788dd6d274ad391ee950e6f6a34033bd2fc7", "title": "The multilayer perceptron as an approximation to a Bayes optimal discriminant function", "text": "The multilayer perceptron, when trained as a classifier using backpropagation, is shown to approximate the Bayes optimal discriminant function. The result is demonstrated for both the two-class problem and multiple classes. It is shown that the outputs of the multilayer perceptron approximate the a posteriori probability functions of the classes being trained. The proof applies to any number of layers and any type of unit activation function, linear or nonlinear."} {"_id": "00ef4151ae7dfd201b326afbe9c112fac91f9872", "title": "An Efficient Active Learning Framework for New Relation Types", "text": "Supervised training of models for semantic relation extraction has yielded good performance, but at substantial cost for the annotation of large training corpora. Active learning strategies can greatly reduce this annotation cost. We present an efficient active learning framework that starts from a better balance between positive and negative samples, and boosts training efficiency by interleaving self-training and co-testing. We also studied the reduction of annotation cost by enforcing argument type constraints. Experiments show a substantial speed-up by comparison to the previous state-of-the-art pure co-testing active learning framework. We obtain reasonable performance with only 150 labels for individual ACE 2004 relation"} {"_id": "1b102e8fcf68da7b0d7da16b71a07376cba22f6d", "title": "Complications during root canal irrigation--literature review and case reports.", "text": "LITERATURE REVIEW AND CASE REPORTS: The literature concerning the aetiology, symptomatology and therapy of complications during root canal irrigation is reviewed. Three cases of inadvertent injection of sodium hypochlorite and hydrogen peroxide beyond the root apex are presented. 
Clinical symptoms are discussed, as well as preventive and therapeutic considerations."} {"_id": "5c97f52213c3414b70b9f507619669dfcc1749f9", "title": "Neural Machine Translation Advised by Statistical Machine Translation", "text": "Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years. However, recent studies show that NMT generally produces fluent but inadequate translations (Tu et al. 2016; He et al. 2016). This is in contrast to conventional Statistical Machine Translation (SMT), which usually yields adequate but non-fluent translations. It is natural, therefore, to leverage the advantages of both models for better translations, and in this work we propose to incorporate an SMT model into the NMT framework. More specifically, at each decoding step, SMT offers additional recommendations of generated words based on the decoding information from NMT (e.g., the generated partial translation and attention history). Then we employ an auxiliary classifier to score the SMT recommendations and a gating function to combine the SMT recommendations with NMT generations, both of which are jointly trained within the NMT architecture in an end-to-end manner. Experimental results on Chinese-English translation show that the proposed approach achieves significant and consistent improvements over state-of-the-art NMT and SMT systems on multiple NIST test sets."} {"_id": "aaa9d12640ec6f9d1d37333141c761c902d2d280", "title": "Leveraging Wikipedia Table Schemas for Knowledge Graph Augmentation", "text": "General solutions to augment Knowledge Graphs (KGs) with facts extracted from Web tables aim to associate pairs of columns from the table with a KG relation based on the matches between pairs of entities in the table and facts in the KG. These approaches suffer from intrinsic limitations due to the incompleteness of the KGs. In this paper we investigate an alternative solution, which leverages the patterns that occur on the schemas of a large corpus of Wikipedia tables. Our experimental evaluation, which used DBpedia as reference KG, demonstrates the advantages of our approach over state-of-the-art solutions and reveals that we can extract more than 1.7M facts with an estimated accuracy of 0.81 even from tables that do not expose any fact on the KG."} {"_id": "2e7539ce290fae45acf5465739054e99fb6f5cc8", "title": "Breaking out of the box of recommendations: from items to packages", "text": "Classical recommender systems provide users with a list of recommendations where each recommendation consists of a single item, e.g., a book or DVD. However, several applications can benefit from a system capable of recommending packages of items, in the form of sets. Sample applications include travel planning with a limited budget (price or time) and Twitter users wanting to select worthwhile tweeters to follow given that they can deal with only a bounded number of tweets. In these contexts, there is a need for a system that can recommend top-k packages for the user to choose from.\n Motivated by these applications, we consider composite recommendations, where each recommendation comprises a set of items. Each item has both a value (rating) and a cost associated with it, and the user specifies a maximum total cost (budget) for any recommended set of items. Our composite recommender system has access to one or more component recommender systems, focusing on different domains, as well as to information sources which can provide the cost associated with each item.
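The decoding step described in the NMT-advised-by-SMT abstract combines two word distributions through a learned gate. The toy numpy sketch below illustrates only that combination step; the five-word vocabulary, the fixed scores, and the sigmoid gate stub are assumptions, since the paper trains its auxiliary classifier and gate jointly with the network.

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat", "<eos>"]

# Toy decoding step: the NMT decoder emits a distribution over the vocabulary,
# and the SMT model recommends words with its own (sparse) scores.
nmt_probs = np.array([0.50, 0.05, 0.30, 0.10, 0.05])
smt_scores = np.array([0.00, 0.00, 0.90, 0.10, 0.00])   # SMT favors "sat"

def gate(nmt_state_score: float) -> float:
    """Stand-in for the trained gating function; returns a mixing weight
    in [0, 1]. In the paper this is learned end-to-end with the NMT model."""
    return 1.0 / (1.0 + np.exp(-nmt_state_score))

alpha = gate(0.3)                                        # trust placed in NMT
combined = alpha * nmt_probs + (1 - alpha) * smt_scores / smt_scores.sum()
print(vocab[int(np.argmax(combined))])                   # prints "sat"
```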
Because the problem of generating the top recommendation (package) is NP-complete, we devise several approximation algorithms for generating top-k packages as recommendations. We analyze their efficiency as well as approximation quality. Finally, using two real and two synthetic data sets, we subject our algorithms to thorough experimentation and empirical analysis. Our findings attest to the efficiency and quality of our approximation algorithms for top-k packages compared to exact algorithms."} {"_id": "4ee0ad8e256523256c8d21790189388ed4beca7e", "title": "Guided filtering for PRNU-based localization of small-size image forgeries", "text": "PRNU-based techniques guarantee a good forgery detection performance irrespective of the specific type of forgery. The presence or absence of the camera PRNU pattern is detected by a correlation test. Given the very low power of the PRNU signal, however, the correlation must be averaged over a fairly large window, reducing the algorithm's ability to reveal small forgeries. To improve resolution, we estimate correlation with a spatially adaptive filtering technique, with weights computed over a suitable pilot image. Implementation efficiency is achieved by resorting to the recently proposed guided filters. Experiments prove that the proposed filtering strategy allows for a much better detection performance in the case of small forgeries."} {"_id": "71b5e96c643c31a3e2efd90e932ce2fa176a65e7", "title": "Intel MPX Explained: An Empirical Study of Intel MPX and Software-based Bounds Checking Approaches", "text": "Memory-safety violations are a prevalent cause of both reliability and security vulnerabilities in systems software written in unsafe languages like C/C++. Unfortunately, all the existing software-based solutions to this problem exhibit high performance overheads preventing them from wide adoption in production runs. To address this issue, Intel recently released a new ISA extension\u2014Memory Protection Extensions (Intel MPX), a hardware-assisted full-stack solution to protect against memory safety violations. In this work, we perform an exhaustive study of the Intel MPX architecture to understand its advantages and caveats. We base our study along three dimensions: (a) performance overheads, (b) security guarantees, and (c) usability issues. To put our results in perspective, we compare Intel MPX with three prominent software-based approaches: (1) trip-wire\u2014AddressSanitizer, (2) object-based\u2014SAFECode, and (3) pointer-based\u2014SoftBound. Our main conclusion is that Intel MPX is a promising technique that is not yet practical for widespread adoption. Intel MPX\u2019s performance overheads are still high (~50% on average), and the supporting infrastructure has bugs which may cause compilation or runtime errors. Moreover, we showcase the design limitations of Intel MPX: it cannot detect temporal errors, may have false positives and false negatives in multithreaded code, and its restrictions on memory layout require substantial code changes for some programs. This paper presents only the general discussion and aggregated data; for the complete evaluation, please see the supporting website: https://Intel-MPX.github.io/.
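Since the abstract above notes that generating the top package is NP-complete (the budgeted selection is knapsack-like), here is a hedged sketch of one plausible approximation, greedy selection by value-per-cost, next to a brute-force optimum for comparison. The item catalog is invented, and the paper's actual approximation algorithms are not reproduced.

```python
from itertools import combinations

# Toy catalog: (item, value/rating, cost). The user budget caps total cost.
items = [("museum", 8, 30), ("boat tour", 9, 60), ("park", 5, 10),
         ("gallery", 6, 25), ("concert", 7, 40)]
budget = 80

def greedy_package(items, budget):
    """Greedy value-per-cost heuristic for one package. The exact problem is
    knapsack-like, so heuristics such as this trade quality for speed."""
    chosen, total = [], 0
    for name, value, cost in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        if total + cost <= budget:
            chosen.append(name)
            total += cost
    return chosen

def exact_package(items, budget):
    """Brute-force optimum for comparison (only viable on tiny inputs)."""
    best, best_val = [], 0
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            if sum(c for _, _, c in combo) <= budget:
                val = sum(v for _, v, _ in combo)
                if val > best_val:
                    best, best_val = [n for n, _, _ in combo], val
    return best

# The greedy package misses the optimum here, illustrating the approximation gap.
print(greedy_package(items, budget), exact_package(items, budget))
```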
Evaluation plots and section headings have hyperlinks to the complete experimental description and results."} {"_id": "c93fe35d8888f296a095b906cca26ffac991aa75", "title": "Childhood predictors differentiate life-course persistent and adolescence-limited antisocial pathways among males and females.", "text": "This article reports a comparison on childhood risk factors of males and females exhibiting childhood-onset and adolescent-onset antisocial behavior, using data from the Dunedin longitudinal study. Childhood-onset delinquents had childhoods of inadequate parenting, neurocognitive problems, and temperament and behavior problems, whereas adolescent-onset delinquents did not have these pathological backgrounds. Sex comparisons showed a male-to-female ratio of 10:1 for childhood-onset delinquency but a sex ratio of only 1.5:1 for adolescence-onset delinquency. Showing the same pattern as males, childhood-onset females had high-risk backgrounds but adolescent-onset females did not. These findings are consistent with core predictions from the taxonomic theory of life-course persistent and adolescence-limited antisocial behavior."} {"_id": "776584e054bd8ba1ff6c4906eb947fc0abb0abc3", "title": "Efficient aircraft spare parts inventory management under demand uncertainty", "text": "In airline industries, the aircraft maintenance cost takes up about 13% of the total operating cost. It can be reduced by good planning. Spare parts inventories exist to serve the maintenance planning. Compared with the commonly used reorder point (ROP) system and forecasting methods, which only consider historical data, this paper presents two non-linear programming models which predict impending demands based on installed parts failure distribution. The optimal order time and order quantity can be found by minimizing total cost. The first basic mathematical model assumes the shortage period starts from the mean time to failure (MTTF). An iteration method and GAMS are used to solve this model. The second improved mathematical model takes into account the accurate shortage time. Due to its complexity, only GAMS is applied in the solution methodology. Both models are proved effective in cost reduction through revised numerical examples and their results. Comparisons of the two models are also discussed."} {"_id": "86ba3c7141a9b9d22293760d7c96e68074f4ef65", "title": "CAPTCHA Design: Color, Usability, and Security", "text": "Most user interfaces use color, which can greatly enhance their design. Because the use of color is typically a usability issue, it rarely causes security failures. However, using color when designing CAPTCHAs, a standard security technology that many commercial websites apply widely, can have an impact on usability and interesting but critical implications for security. Here, the authors examine some CAPTCHAs to determine whether their use of color negatively affects their usability, security, or both."} {"_id": "3fedbe4e3e0577c7e895d9c968f512a30aada47a", "title": "Development and validation of measures of social phobia scrutiny fear and social interaction anxiety.", "text": "The development and validation of the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS), two companion measures for assessing social phobia fears, is described.
The SPS assesses fear of being scrutinised during routine activities (eating, drinking, writing, etc.), while the SIAS assesses fear of more general social interaction, the scales corresponding to the DSM-III-R descriptions of Social Phobia--Circumscribed and Generalised types, respectively. Both scales were shown to possess high levels of internal consistency and test-retest reliability. They discriminated between social phobia, agoraphobia and simple phobia samples, and between social phobia and normal samples. The scales correlated well with established measures of social anxiety, but were found to have low or non-significant (partial) correlations with established measures of depression, state and trait anxiety, locus of control, and social desirability. The scales were found to change with treatment and to remain stable in the face of no treatment. It appears that these scales are valid, useful, and easily scored measures for clinical and research applications, and that they represent an improvement over existing measures of social phobia."} {"_id": "450b9397f371e08dc775b81882fc536be73bb06d", "title": "Using Machine Learning to Refine Black-Box Test Specifications and Test Suites", "text": "In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box testing, we propose a methodology based on machine learning that has shown promising results on a case study."} {"_id": "8c3930ca8183c9bf8f2bfbe112717df1475287b9", "title": "An architecture for the aggregation and analysis of scholarly usage data", "text": "Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. This paper presents a technical, standards-based architecture for sharing usage information, which we have designed and implemented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service.
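Both the SPS/SIAS abstract above and the TAG-Games abstract later in this section report internal consistency, including split-half reliability. A small sketch of that computation on synthetic item scores, with the standard Spearman-Brown correction, follows; the respondent data and item counts are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic questionnaire data: 200 respondents x 20 items scored 0-4.
# A latent "anxiety" factor makes items correlate, as real scale items would.
latent = rng.normal(size=(200, 1))
items = np.clip(np.round(2 + latent + rng.normal(scale=0.8, size=(200, 20))), 0, 4)

# Split-half reliability: correlate odd-item totals with even-item totals,
# then step the correlation up to full test length with Spearman-Brown.
odd_total = items[:, 0::2].sum(axis=1)
even_total = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_total, even_total)[0, 1]
spearman_brown = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, corrected reliability = {spearman_brown:.2f}")
```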
This paper also discusses issues that were encountered when implementing the proposed approach, and it presents preliminary results obtained from analyzing a usage data set containing about 3,500,000 requests aggregated by a federation of linking servers at the California State University system over a 20-month period."} {"_id": "b3cca6cf9b74f9ecdcf716864b957696e8d002e2", "title": "Optimising thermal efficiency of direct contact membrane distillation by brine recycling for small-scale seawater desalination", "text": "A technique to optimise thermal efficiency using brine recycling during direct contact membrane distillation (DCMD) of seawater was investigated. By returning the hot brine to the feed tank, the system water recovery could be increased and the sensible heat of the hot brine was recovered to improve thermal efficiency. The results show that in the optimal water recovery range of 20 to 60% facilitated by brine recycling, the specific thermal energy consumption of the process could be reduced by more than half. It is also noteworthy that within this optimal water recovery range, the risk of membrane scaling is negligible: DCMD of seawater at a constant water recovery of 70% was achieved for over 24 h without any scale formation on the membrane surface. In contrast, severe membrane scaling was observed when water recovery reached 80%. In addition to water recovery, other operating conditions such as feed temperature and water circulation rates could influence the process thermal efficiency. Increasing the feed temperature and reducing the circulation flow rates increased thermal efficiency. Increasing the feed temperature could also mitigate the negative effect of elevated feed concentration on the distillate flux, particularly at a high water recovery."} {"_id": "f01b4ef5e825cb73836c58a308ea6b7680cc5537", "title": "A self-charging power unit by integration of a textile triboelectric nanogenerator and a flexible lithium-ion battery for wearable electronics.", "text": "A novel integrated power unit realizes both energy harvesting and energy storage by a textile triboelectric nanogenerator (TENG)-cloth and a flexible lithium-ion battery (LIB) belt, respectively. The mechanical energy of daily human motion is converted into electricity by the TENG-cloth, sustaining the energy of the LIB belt to power wearable smart electronics."} {"_id": "647cb3825baecb6fab8b098166d5a446f7711f9b", "title": "Learning Plannable Representations with Causal InfoGAN", "text": "In recent years, deep generative models have been shown to \u2018imagine\u2019 convincing high-dimensional observations such as images, audio, and even video, learning directly from raw data.
In this work, we ask how to imagine goal-directed visual plans \u2013 a plausible sequence of observations that transition a dynamical system from its current configuration to a desired goal state, which can later be used as a reference trajectory for control. We focus on systems with high-dimensional observations, such as images, and propose an approach that naturally combines representation learning and planning. Our framework learns a generative model of sequential observations, where the generative process is induced by a transition in a low-dimensional planning model, and an additional noise. By maximizing the mutual information between the generated observations and the transition in the planning model, we obtain a low-dimensional representation that best explains the causal nature of the data. We structure the planning model to be compatible with efficient planning algorithms, and we propose several such models based on either discrete or continuous states. Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations. We demonstrate our method on imagining plausible visual plans of rope manipulation."} {"_id": "10e6a38a158d6f2f6a9c12343847da78dd72c1f9", "title": "A gut (microbiome) feeling about the brain.", "text": "PURPOSE OF REVIEW\nThere is an increasing realization that the microorganisms which reside within our gut form part of a complex multidirectional communication network with the brain known as the microbiome-gut-brain axis. In this review, we focus on recent findings which support a role for this axis in modulating neurodevelopment and behavior.\n\n\nRECENT FINDINGS\nA growing body of research is uncovering that under homeostatic conditions and in response to internal and external stressors, the bacterial commensals of our gut can signal to the brain through a variety of mechanisms to influence processes such as neurotransmission, neurogenesis, and microglia activation, and to modulate behavior. Moreover, the mechanisms underlying the ability of stress to modulate the microbiota and also for microbiota to change the set point for stress sensitivity are being unraveled. Dysregulation of the gut microbiota composition has been identified in a number of psychiatric disorders, including depression. This has led to the concept of bacteria that have a beneficial effect upon behavior and mood (psychobiotics) being proposed for potential therapeutic interventions.\n\n\nSUMMARY\nUnderstanding the mechanisms by which the bacterial commensals of our gut are involved in brain function may lead to the development of novel microbiome-based therapies for these mood and behavioral disorders."} {"_id": "cffd812661ce822be4cf3735b7ac8bb79798b59c", "title": "A Dual-Polarized Pattern Reconfigurable Yagi Patch Antenna for Microbase Stations", "text": "A two-port pattern reconfigurable three-layered Yagi-Uda patch antenna with \u00b145\u00b0 dual-polarization characteristic is presented. A driven patch (DP) and two large parasitic patches (LPPs) are printed on the top side of the middle layer, and the middle and bottom layers share a common metal ground. Two microstrip feedlines printed orthogonally on the bottom side of the bottom layer are used to feed the DP through two H-shaped slots etched in the ground. The LPPs are connected to or disconnected from the ground, controlled by switches.
By adjusting the connection states of the LPPs, one wide-beam mode and three narrow-beam modes can be obtained in both polarizations. A small parasitic patch is printed on the top side of the top layer to improve the pattern coherence of the two polarizations. This antenna is studied by both simulation and measurement. The measured common bandwidth of the four modes in both polarizations is 3.32\u20133.51 GHz, and the isolations between two polarizations in all the modes are higher than 20 dB. The low-profile antenna is very suitable for microbase-station applications."} {"_id": "9810d71b1051d421b068486548949d721ab84cd0", "title": "Dynamic Malware Detection Using API Similarity", "text": "Hackers create different types of malware, such as Trojans, to steal confidential user information (e.g., credit card details) with a few simple commands. Recent malware, however, has been created intelligently and in uncontrolled volumes, which makes malware analysis one of the most important subjects in information security. This paper proposes an efficient dynamic malware-detection method based on API similarity. The proposed method outperforms the traditional signature-based detection method. The experiment evaluated 197 malware samples and the proposed method showed promising results in correctly identifying malware."} {"_id": "21ac0c53e1df2ffea583e45c091b7f262e0a9533", "title": "Interactive Block Games for Assessing Children's Cognitive Skills: Design and Preliminary Evaluation", "text": "Background: This paper presents design and results from preliminary evaluation of Tangible Geometric Games (TAG-Games) for cognitive assessment in young children. The TAG-Games technology employs a set of sensor-integrated cube blocks, called SIG-Blocks, and graphical user interfaces for test administration and real-time performance monitoring. TAG-Games were administered to children from 4 to 8 years of age for evaluating preliminary efficacy of this new technology-based approach. Methods: Five different sets of SIG-Blocks comprising geometric shapes, segmented human faces, segmented animal faces, emoticons, and colors were used for three types of TAG-Games, including Assembly, Shape Matching, and Sequence Memory. Computational task difficulty measures were defined for each game and used to generate items with varying difficulty. For preliminary evaluation, TAG-Games were tested on 40 children. To explore the clinical utility of the information assessed by TAG-Games, three subtests of the age-appropriate Wechsler tests (i.e., Block Design, Matrix Reasoning, and Picture Concept) were also administered. Results: Internal consistency of TAG-Games was evaluated by the split-half reliability test. Weak to moderate correlations between Assembly and Block Design, Shape Matching and Matrix Reasoning, and Sequence Memory and Picture Concept were found. The computational measure of task complexity for each TAG-Game showed a significant correlation with participants' performance. In addition, age-correlations on TAG-Game scores were found, implying its potential use for assessing children's cognitive skills autonomously."} {"_id": "8f50078de9d41bd756cb0dcb346a916cc9d50ce6", "title": "Answering Top-k Queries with Multi-Dimensional Selections: The Ranking Cube Approach", "text": "Observed in many real applications, a top-k query often consists of two components to reflect a user's preference: a selection condition and a ranking function.
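The malware abstract above rests on comparing API-call behavior between a suspect sample and known malware. The paper's exact similarity measure is not given here, so the sketch below substitutes Jaccard similarity over observed API sets; the API profiles and the threshold are invented for illustration.

```python
# Hedged sketch of API-call similarity matching; Jaccard over observed API
# sets stands in for whatever similarity measure the paper actually uses.
KNOWN_MALWARE_APIS = {
    "trojan_a": {"RegSetValueEx", "CreateRemoteThread", "WriteProcessMemory"},
    "keylogger_b": {"SetWindowsHookEx", "GetAsyncKeyState", "InternetOpenUrl"},
}

def jaccard(a: set, b: set) -> float:
    """Size of the intersection over the size of the union."""
    return len(a & b) / len(a | b)

def classify(trace: set, threshold: float = 0.5) -> str:
    """Compare a sandboxed run's API calls against known malicious profiles."""
    best = max(KNOWN_MALWARE_APIS, key=lambda k: jaccard(trace, KNOWN_MALWARE_APIS[k]))
    score = jaccard(trace, KNOWN_MALWARE_APIS[best])
    return f"{best} ({score:.2f})" if score >= threshold else "benign"

sample = {"CreateRemoteThread", "WriteProcessMemory", "RegSetValueEx", "Sleep"}
print(classify(sample))   # -> trojan_a (0.75)
```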
A user may not only propose ad hoc ranking functions, but also use different interesting subsets of the data. In many cases, a user may want to have a thorough study of the data by initiating a multi-dimensional analysis of the top-k query results. Previous work on top-k query processing mainly focuses on optimizing data access according to the ranking function only. The problem of efficiently answering top-k queries with multi-dimensional selections has not been well addressed yet. This paper proposes a new computational model, called ranking cube, for efficiently answering top-k queries with multi-dimensional selections. We define a rank-aware measure for the cube, capturing our goal of responding to multi-dimensional ranking analysis. Based on the ranking cube, an efficient query algorithm is developed which progressively retrieves data blocks until the top-k results are found. The curse of dimensionality is a well-known challenge for the data cube and we cope with this difficulty by introducing a new technique of ranking fragments. Our experiments on Microsoft's SQL Server 2005 show that our proposed approaches achieve significant improvement over the previous methods."} {"_id": "f88fc26bef96d2c430eb758f6e925824b82d8139", "title": "IC-CRIME: A Collaborative, Web-Based, 3D System for the Investigation, Analysis, and Annotation of Crime Scenes", "text": "Modern-day crime scene investigation methods are continually being enhanced by the application of new technologies to improve the analysis and presentation of crime scene information, helping to solve and prosecute crimes. This paper describes a new system called IC-CRIME that integrates several new technologies to meet both of these needs. IC-CRIME employs laser scanning to produce realistic 3D models of a crime scene which it then incorporates into a 3D virtual environment. IC-CRIME creates an integrated platform that encourages investigators and forensic experts to view, explore, and annotate these models at any time from any web browser. A key goal of the system is to expand opportunities for collaboration between professionals whose geographic locations and time commitments would otherwise limit their ability to contribute to the analysis of the actual crime scene."} {"_id": "809587390027486210dd5a6c78c52c4b1a20cc9f", "title": "Investment and Financing Constraints: Evidence from the Funding of Corporate Pension Plans", "text": "I exploit sharply nonlinear funding rules for defined benefit pension plans in order to identify the dependence of corporate investment on internal financial resources in a large sample. Capital expenditures decline with mandatory contributions to defined benefit pension plans, even when controlling for correlations between the pension funding status itself and the firm\u2019s unobserved investment opportunities. The effect is particularly evident among firms that face financing constraints based on observable variables such as credit ratings. Investment also displays strong negative correlations with the part of mandatory contributions resulting solely from unexpected asset market movements.
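For the ranking-cube abstract above, the query semantics being accelerated is "filter by multi-dimensional selections, then rank by score and return k". The sketch below implements only that naive filter-then-rank baseline with a heap, not the ranking cube itself, to make the semantics concrete; the toy relation and column names are invented.

```python
import heapq

# Toy relation: (id, city, category, score). A ranking cube would pre-organize
# such tuples into rank-aware blocks; this naive baseline just filters and ranks.
rows = [
    (1, "NYC", "hotel", 4.2), (2, "NYC", "food", 4.8), (3, "SF", "hotel", 4.6),
    (4, "NYC", "hotel", 4.9), (5, "SF", "food", 4.1), (6, "NYC", "hotel", 3.9),
]

def topk(rows, k, **selection):
    """Top-k by score under equality selections on arbitrary dimensions."""
    names = ("id", "city", "category", "score")
    def keep(row):
        rec = dict(zip(names, row))
        return all(rec[col] == val for col, val in selection.items())
    return heapq.nlargest(k, (r for r in rows if keep(r)), key=lambda r: r[3])

print(topk(rows, 2, city="NYC", category="hotel"))  # rows 4 and 1
```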
"} {"_id": "30d7e9fc1bc237864a505e8a30b33431a2a8aaa9", "title": "Abstractive Multi-Document Summarization: An Overview", "text": "In recent times, the task of generating single-document summaries has gained popularity among researchers due to its extensive applicability. Text summarization can be categorized along several dimensions: approach (extractive or abstractive, from a single document or multiple documents), goal of summarization (intent, focus and coverage), characteristic of summarization (frequency-based, knowledge-based and discourse-based), level of processing (surface level, entities level and discourse level), and kind of information (lexicon, structure information and deep understanding). Recently, research efforts have shifted from single-document summarization to multi-document summarization. Multi-document summarization differs considerably from single-document summarization: issues related to compression, speed, redundancy and passage selection are critical in the formation of useful summaries. In this paper, we review the techniques that have been developed for multi-document summarization. Next, we describe evaluation methods. In conclusion, we propose our future work for multi-document summarization."} {"_id": "e352612f51ad34c764f128cc62e91b51fe7a9759", "title": "Point & Teleport Locomotion Technique for Virtual Reality", "text": "With the increasing popularity of virtual reality (VR) and new devices becoming available at relatively lower costs, more and more video games have been developed recently. Most of these games use first-person interaction techniques since they are more natural for Head Mounted Displays (HMDs). One of the most widely used interaction techniques in VR video games is locomotion, which is used to move the user's viewpoint in virtual environments. Locomotion is an important component of video games since it can have a strong influence on user experience. In this study, a new locomotion technique we called \"Point & Teleport\" is described and compared with two commonly used VR locomotion techniques of walk-in-place and joystick. In this technique, users simply point to where they want to be in the virtual world and they are teleported to that position. As a major advantage, it is not expected to introduce motion sickness since it does not involve any visible translational motion. In this study, two VR experiments were designed and performed to analyze the Point & Teleport technique. In the first experiment, Point & Teleport was compared with walk-in-place and joystick locomotion techniques. In the second experiment, a direction component was added to the Point & Teleport technique so that the users could specify their desired orientation as well. 16 users took part in both experiments.
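As a concrete instance of the "frequency-based" category named in the summarization overview above, here is a minimal extractive sketch that scores sentences by corpus word frequency; the two toy documents, the stop-word list, and the two-sentence budget are assumptions, not the survey's method.

```python
import re
from collections import Counter

docs = [
    "The storm closed roads across the region. Crews worked overnight.",
    "Roads reopened after the storm. Officials praised the repair crews.",
]

# Frequency-based extractive summarization: score each sentence by the
# corpus frequency of its content words, then pick the top-scoring ones.
words = re.findall(r"[a-z']+", " ".join(docs).lower())
stop = {"the", "a", "after", "across", "of", "and"}
freq = Counter(w for w in words if w not in stop)

sentences = [s.strip() for d in docs for s in re.split(r"(?<=[.!?])\s+", d) if s.strip()]

def score(sentence: str) -> float:
    """Average content-word frequency, normalized by sentence length."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return sum(freq[t] for t in tokens) / max(len(tokens), 1)

summary = sorted(sentences, key=score, reverse=True)[:2]
print(summary)
```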
Results indicated that Point & Teleport is a fun and user-friendly locomotion method, whereas the additional direction component degraded the user experience."} {"_id": "33c859730444bd835dbf5f0956110f45a735ee89", "title": "The \"visual cliff\".", "text": "This simple apparatus is used to investigate depth perception in different animals. All species thus far tested seem able to perceive and avoid a sharp drop as soon as they can move about. Human infants at the creeping and toddling stage are notoriously prone to falls from more or less high places. They must be kept from going over the brink by side panels on their cribs, gates on stairways and the vigilance of adults. As their muscular coordination matures they begin to avoid such accidents on their own. Common sense might suggest that the child learns to recognize falling-off places by experience\u2013that is, by falling and hurting himself. But is experience really the teacher? Or is the ability to perceive and avoid a brink part of the child's original endowment? Answers to these questions will throw light on the genesis of space perception in general. Height perception is a special case of distance perception: information in the light reaching the eye provides stimuli that can be utilized for the discrimination both of depth and of receding distance on the level. At what stage of development can an animal respond effectively to these stimuli? Does the onset of such response vary with animals of different species and habitats? At Cornell University we have been investigating these problems by means of a simple experimental setup that we call a visual cliff. The cliff is a simulated one and hence makes it possible not only to control the optical and other stimuli (auditory and tactual, for instance) but also to protect the experimental subjects. It consists of a board laid across a large sheet of heavy glass which is supported a foot or more above the floor. On one side of the board a sheet of patterned material is placed flush against the undersurface of the glass, giving the glass the appearance as well as the substance of solidity. On the other side a sheet of the same material is laid upon the floor; this side of the board thus becomes the visual cliff (Fig. 1). The Classic Visual Cliff Experiment: This young explorer has the good sense not to crawl out onto an apparently unsupported surface, even when Mother beckons from the other side. Rats, pups, kittens, and chicks also will not try to walk across to the other side. (So don't bother asking why the chicken crossed the visual cliff.) \u2026"} {"_id": "76fee58c7308b185db38d4177d04902172993561", "title": "Improved Redirection with Distractors: A large-scale-real-walking locomotion interface and its effect on navigation in virtual environments", "text": "Users in virtual environments often find navigation more difficult than in the real world. Our new locomotion interface, Improved Redirection with Distractors (IRD), enables users to walk in larger-than-tracked-space VEs without predefined waypoints. We compared IRD to the current best interface, really walking, by conducting a user study measuring navigational ability.
Our results show that IRD users can really walk through VEs that are larger than the tracked space and can point to targets and complete maps of VEs no worse than when really walking."} {"_id": "984031f16dded96a8ae48a6dd9d49edefa98aa46", "title": "Quantifying immersion in virtual reality", "text": "Virtual Reality (VR) has generated much excitement but little formal proof that it is useful. Because VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. In this paper, we show that users with a VR interface complete a search task faster than users with a stationary monitor and a hand-based input device. We placed users in the center of the virtual room shown in Figure 1 and told them to look for camouflaged targets. VR users did not do significantly better than desktop users. However, when asked to search the room and conclude if a target existed, VR users were substantially better at determining when they had searched the entire room. Desktop users took 41% more time, re-examining areas they had already searched. We also found a positive transfer of training from VR to stationary displays and a negative transfer of training from stationary displays to VR."} {"_id": "13a92a59545eefbaf5ec3adf6000ec64f4bb73a5", "title": "When is it biased?: assessing the representativeness of twitter's streaming API", "text": "Twitter shares a free 1% sample of its tweets through the \"Streaming API\". Recently, research has pointed to evidence of bias in this source. The methodologies proposed in previous work rely on the restrictive and expensive Firehose to find the bias in the Streaming API data. We tackle the problem of finding sample bias without costly and restrictive Firehose data. We propose a solution that focuses on using an open data source to find bias in the Streaming API."} {"_id": "fe8e40c0d5c6c07adcbc2dff18423da0c88582fc", "title": "Information security culture: A management perspective", "text": "Information technology has become an integral part of modern life. Today, the use of information permeates every aspect of both business and private lives. Most organizations need information systems to survive and prosper and thus need to be serious about protecting their information assets. Many of the processes needed to protect these information assets are, to a large extent, dependent on cooperative human behavior. Employees, whether intentionally or through negligence, often due to a lack of knowledge, are the greatest threat to information security. It has become widely accepted that the establishment of an organizational sub-culture of information security is key to managing the human factors involved in information security. This paper briefly examines the generic concept of corporate culture and then borrows from the management and economical sciences to present a conceptual model of information security culture. The presented model incorporates the concept of elasticity from the economical sciences in order to show how various variables in an information security culture influence each other. The purpose of the presented model is to facilitate conceptual thinking and argumentation about information security culture.
"} {"_id": "4528c1164d43a004ea8a254b4cc240d8d9b84587", "title": "Designing Anthropomorphic Robot Hand With Active Dual-Mode Twisted String Actuation Mechanism and Tiny Tension Sensors", "text": "In this letter, using the active dual-mode twisted string actuation (TSA) mechanism and tiny tension sensors on the tendon strings, an anthropomorphic robot hand is newly designed in a compact manner. Thanks to the active dual-mode TSA mechanism, which is a miniaturized transmission, the proposed robot hand has a wide range of operation in terms of grasping force and speed. It experimentally produces a maximum fingertip force of 31.3 N and a minimum closing time of 0.5 s on average. Also, a tiny tension sensor with dimensions of 4.7 (width)\u00a0\u00d7\u00a04.0 (height)\u00a0\u00d7\u00a010.75 (length) mm is newly presented and embedded at the fingertips in order to measure the tension on the tendon strings, which allows grasping force control. The kinetic and kinematic analyses are performed and the performance is verified by experiments."} {"_id": "46c55aa2dac524287cf4a61656967c1ff8b7713a", "title": "Control of a Quadrotor Using a Smart Self-Tuning Fuzzy PID Controller", "text": "This paper deals with the modelling, simulation-based controller design and path planning of a four rotor helicopter known as a quadrotor. All the drag, aerodynamic, coriolis and gyroscopic effects are neglected. A Newton-Euler formulation is used to derive the mathematical model. A smart self-tuning fuzzy PID controller based on an EKF algorithm is proposed for the attitude and position control of the quadrotor. The PID gains are tuned using a self-tuning fuzzy algorithm. The self-tuning of fuzzy parameters is achieved based on an EKF algorithm. A smart selection technique and exclusive tuning of active fuzzy parameters are proposed to reduce the computational time. Dijkstra\u2019s algorithm is used for path planning in a closed and known environment filled with obstacles and/or boundaries. The Dijkstra algorithm helps avoid obstacles and find the shortest route from a given initial position to the final position."} {"_id": "7a3623df776b1f1347a0a4b3c60f863469b838be", "title": "A Flooding Warning System based on RFID Tag Array for Energy Facility", "text": "Passive radio-frequency identification (RFID) tags are widely used due to their low cost and satisfactory performance. So far, passive RFID tags have mostly been applied to identify certain objects, such as underground pipes under buried conditions. However, there is a lack of study on further applications of buried tags. In this paper, the performance of buried RFID tags is studied to develop a flooding warning system based on an RFID tag array for energy facilities such as power stations. In this study, the signal strength received by the RFID reader is evaluated when the RFID tags are buried under seven different materials. The results show that a flood warning detector can be constructed using a passive RFID tag array and reader."} {"_id": "b519835c855de750ad52907e24cb6a0904bee3de", "title": "Cognitive Control Deficits in Schizophrenia: Mechanisms and Meaning", "text": "Although schizophrenia is an illness that has been historically characterized by the presence of positive symptomatology, decades of research highlight the importance of cognitive deficits in this disorder.
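The quadrotor abstract above tunes PID gains with a fuzzy/EKF layer; the sketch below shows only the underlying discrete PID loop that such a layer would tune, applied to a crude first-order toy plant. The gains, time step, and plant model are invented, and the self-tuning machinery is deliberately omitted.

```python
# Minimal discrete PID loop of the kind whose gains the paper's fuzzy/EKF
# layer would adjust online; the self-tuning machinery itself is not shown.
def pid_controller(kp, ki, kd, dt):
    integral, prev_err = 0.0, 0.0
    def step(setpoint, measured):
        nonlocal integral, prev_err
        err = setpoint - measured
        integral += err * dt
        derivative = (err - prev_err) / dt
        prev_err = err
        return kp * err + ki * integral + kd * derivative
    return step

# Toy altitude loop: crude first-order plant, 100 steps toward a 1.0 m setpoint.
ctrl, alt = pid_controller(kp=2.0, ki=0.5, kd=0.1, dt=0.02), 0.0
for _ in range(100):
    alt += ctrl(1.0, alt) * 0.02            # crude plant integration
print(f"altitude after 2 s: {alt:.3f} m")
```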
This review proposes that the theoretical model of cognitive control, which is based on contemporary cognitive neuroscience, provides a unifying theory for the cognitive and neural abnormalities underlying higher cognitive dysfunction in schizophrenia. To support this model, we outline converging evidence from multiple modalities (eg, structural and functional neuroimaging, pharmacological data, and animal models) and samples (eg, clinical high risk, genetic high risk, first episode, and chronic subjects) to emphasize how dysfunction in cognitive control mechanisms supported by the prefrontal cortex contribute to the pathophysiology of higher cognitive deficits in schizophrenia. Our model provides a theoretical link between cellular abnormalities (eg, reductions in dendritic spines, interneuronal dysfunction), functional disturbances in local circuit function (eg, gamma abnormalities), altered inter-regional cortical connectivity, a range of higher cognitive deficits, and symptom presentation (eg, disorganization) in the disorder. Finally, we discuss recent advances in the neuropharmacology of cognition and how they can inform a targeted approach to the development of effective therapies for this disabling aspect of schizophrenia."} {"_id": "280bb6d7ada9ef118899b5a1f655ae9166fb8f0b", "title": "Tampering Detection in Low-Power Smart Cameras", "text": "A desirable feature in smart cameras is the ability to autonomously detect any tampering event/attack that would prevent a clear view over the monitored scene. No matter whether tampering is due to atmospheric phenomena (e.g., a few rain drops over the camera lens) or to malicious attacks (e.g., occlusions or device displacements), these have to be promptly detected to possibly activate countermeasures. Tampering detection is particularly challenging in battery-powered cameras, where it is not possible to acquire images at full-speed frame-rates, nor to use sophisticated image-analysis algorithms. We here introduce a tampering-detection algorithm specifically designed for low-power smart cameras. The algorithm leverages very simple indicators that are then monitored by an outlier-detection scheme: any frame yielding an outlier is detected as tampered. The core of the algorithm is the partitioning of the scene into adaptively defined regions, which are preliminarily defined by segmenting the image during the algorithm-configuration phase, and which is shown to improve the detection of camera displacements. Experiments show that the proposed algorithm can successfully operate on sequences acquired at very low frame rates, such as one frame every minute, with a very small computational complexity."} {"_id": "e8a2776154f8bc7625742fade5efb71ff1c9896b", "title": "W-BAND MICROSTRIP-TO-WAVEGUIDE TRANSITION USING VIA FENCES", "text": "The paper presents an integrated probe for direct coupling to the WR-10 waveguide with the use of metal-filled vias on both sides of the microstrip line. Design and optimization of this novel microstrip-to-waveguide transition have been performed using the 3-D finite element method based software HFSS (High Frequency Structure Simulator). A back-to-back transition has been fabricated and measured between 75\u2013110 GHz.
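The tampering-detection abstract above monitors simple per-frame indicators with an outlier test. A hedged sketch using mean frame brightness and a z-score threshold follows; the synthetic frames and the threshold value are assumptions, and the paper's adaptive region partitioning is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simple per-frame indicator (mean brightness) monitored by an outlier test,
# in the spirit of the low-power scheme above. The paper's adaptive region
# partitioning is omitted for brevity.
normal_frames = rng.normal(loc=120, scale=3, size=(50, 16, 16))   # training set
indicators = normal_frames.mean(axis=(1, 2))
mu, sigma = indicators.mean(), indicators.std()

def is_tampered(frame: np.ndarray, z_thresh: float = 4.0) -> bool:
    """Flag a frame whose indicator is a z-score outlier w.r.t. training data."""
    z = abs(frame.mean() - mu) / sigma
    return z > z_thresh

occluded = np.full((16, 16), 20.0)          # lens covered -> dark frame
print(is_tampered(normal_frames[0]), is_tampered(occluded))  # False True
```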
The measured return loss is higher than 10 dB and the insertion loss for a single microstrip-to-waveguide transition is about 1.15 dB."} {"_id": "c72796c511e2282e4088b0652a4cce0e2da4296c", "title": "Hierarchical Discrete Distribution Decomposition for Match Density Estimation", "text": "Existing deep learning methods for pixel correspondence output a point estimate of the motion field, but do not represent the full match distribution. Explicit representation of a match distribution is desirable for many applications as it allows direct representation of the correspondence probability. The main difficulty of estimating a full probability distribution with a deep network is the high computational cost of inferring the entire distribution. In this paper, we propose Hierarchical Discrete Distribution Decomposition, dubbed HD^3, to learn probabilistic point and region matching. Not only can it model match uncertainty, but also region propagation. To achieve this, we estimate the hierarchical distribution of pixel correspondences at different image scales without multi-hypotheses ensembling. Despite its simplicity, our method can achieve competitive results for both optical flow and stereo matching on established benchmarks, while the estimated uncertainty is a good indicator of errors. Furthermore, the point match distribution within a region can be grouped together to propagate the whole region even if the area changes across images."} {"_id": "a63b97291149bfed416aa9e56a21314069540a7b", "title": "A meta-analysis of working memory impairments in children with attention-deficit/hyperactivity disorder.", "text": "OBJECTIVE\nTo determine the empirical evidence for deficits in working memory (WM) processes in children and adolescents with attention-deficit/hyperactivity disorder (ADHD).\n\n\nMETHOD\nExploratory meta-analytic procedures were used to investigate whether children with ADHD exhibit WM impairments. Twenty-six empirical research studies published from 1997 to December, 2003 (subsequent to a previous review) met our inclusion criteria. WM measures were categorized according to both modality (verbal, spatial) and type of processing required (storage versus storage/manipulation).\n\n\nRESULTS\nChildren with ADHD exhibited deficits in multiple components of WM that were independent of comorbidity with language learning disorders and weaknesses in general intellectual ability. Overall effect sizes for spatial storage (effect size = 0.85, CI = 0.62 - 1.08) and spatial central executive WM (effect size = 1.06, confidence interval = 0.72-1.39) were greater than those obtained for verbal storage (effect size = 0.47, confidence interval = 0.36-0.59) and verbal central executive WM (effect size = 0.43, confidence interval = 0.24-0.62).\n\n\nCONCLUSION\nEvidence of WM impairments in children with ADHD supports recent theoretical models implicating WM processes in ADHD. Future research is needed to more clearly delineate the nature, severity, and specificity of the impairments to ADHD."} {"_id": "73d738f1c52e41d8e60700b1aac06d80bf8d8570", "title": "Terpene synthases of oregano (Origanum vulgare L.) and their roles in the pathway and regulation of terpene biosynthesis", "text": "The aroma, flavor and pharmaceutical value of cultivated oregano (Origanum vulgare L.) is a consequence of its essential oil, which consists mostly of monoterpenes and sesquiterpenes.
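The ADHD meta-analysis above reports standardized effect sizes with confidence intervals. The sketch below computes Cohen's d from group summary statistics, with one common large-sample 95% CI approximation from the meta-analysis literature; the group means, SDs, and sample sizes are hypothetical.

```python
import math

def cohens_d_with_ci(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d from group summaries, with an approximate 95% CI based on
    the standard large-sample variance of d."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    half = 1.96 * math.sqrt(var_d)
    return d, (d - half, d + half)

# Hypothetical WM scores: control group vs. ADHD group on a spatial span task.
d, ci = cohens_d_with_ci(mean1=5.8, mean2=4.9, sd1=1.1, sd2=1.2, n1=40, n2=40)
print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```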
To investigate the biosynthetic pathway to oregano terpenes and its regulation, we identified and characterized seven terpene synthases, key enzymes of terpene biosynthesis, from two cultivars of O. vulgare. Heterologous expression of these enzymes showed that each forms multiple mono- or sesquiterpene products and together they are responsible for the direct production of almost all terpenes found in O. vulgare essential oil. The correlation of essential oil composition with relative and absolute terpene synthase transcript concentrations in different lines of O. vulgare demonstrated that monoterpene synthase activity is predominantly regulated on the level of transcription and that the phenolic monoterpene alcohol thymol is derived from \u03b3-terpinene, a product of a single monoterpene synthase. The combination of heterologously-expressed terpene synthases for in vitro assays resulted in blends of mono- and sesquiterpene products that strongly resemble those found in vivo, indicating that terpene synthase expression levels directly control the composition of the essential oil. These results will facilitate metabolic engineering and directed breeding of O. vulgare cultivars with higher quantity of essential oil and improved oil composition."} {"_id": "90c28aeb30a1632efbd9f7d0f5eb3580c10f135c", "title": "A wireless slanted optrode array with integrated micro leds for optogenetics", "text": "This paper presents a wireless-enabled, flexible optrode array with multichannel micro light-emitting diodes (\u03bc-LEDs) for a bi-directional wireless neural interface. The array integrates wirelessly addressable \u03bc-LED chips with a slanted polymer optrode array for precise light delivery and neural recording at multiple cortical layers simultaneously. A droplet backside exposure (DBE) method was developed to monolithically fabricate varying-length optrodes on a single polymer platform. In vivo tests in rat brains demonstrated that the \u03bc-LEDs were inductively powered and controlled using a wireless switched-capacitor stimulator (SCS), and light-induced neural activity was recorded with the optrode array concurrently."} {"_id": "63d440eb606c7aa4ee3c7fcd94d65af3f5c92c96", "title": "Efficient projections onto the l1-ball for learning in high dimensions", "text": "We describe efficient algorithms for projecting a vector onto the l1-ball. We present two methods for projection. The first performs exact projection in O(n) expected time, where n is the dimension of the space. The second works on vectors, k of whose elements are perturbed outside the l1-ball, projecting in O(k log(n)) time. This setting is especially useful for online learning in sparse feature spaces such as text categorization applications. We demonstrate the merits and effectiveness of our algorithms in numerous batch and online learning tasks. We show that variants of stochastic gradient projection methods augmented with our efficient projection procedures outperform interior point methods, which are considered state-of-the-art optimization techniques. We also show that in online settings gradient updates with l1 projections outperform the exponentiated gradient algorithm while obtaining models with high degrees of sparsity."} {"_id": "7482fc3f108e7a4b379292e3c5dbabefdf8706fa", "title": "Integrated on-chip inductors with electroplated magnetic yokes (invited)", "text": ""}
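The l1-projection abstract above has a well-known sort-based O(n log n) variant that produces the same output as the paper's O(n) expected-time randomized method; a numpy sketch of that simpler variant follows, with a small worked example.

```python
import numpy as np

def project_l1(v: np.ndarray, radius: float = 1.0) -> np.ndarray:
    """Euclidean projection of v onto the l1-ball of the given radius.
    This is the simple O(n log n) sort-based variant; the paper's randomized
    pivot method achieves O(n) expected time with the same result."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                     # sorted magnitudes, descending
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - radius))[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = np.array([0.8, -0.6, 0.3, -0.1])
p = project_l1(w, radius=1.0)
print(p, np.abs(p).sum())    # l1 norm of the projection is exactly 1.0
```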
{"_id": "8bd59b6111c21ca9f133b2ac9f0a8b102e344076", "title": "Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data", "text": "Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. Besides, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur."} {"_id": "d6609337639ad64c59afff71637b42c0c6be7d6d", "title": "Dermoscopy of cutaneous leishmaniasis.", "text": "BACKGROUND\nDermoscopy has been proposed as a diagnostic tool in the case of skin infections and parasitosis but no specific dermoscopic criteria have been described for cutaneous leishmaniasis (CL).\n\n\nOBJECTIVES\nTo describe the dermoscopic features of CL.\n\n\nMETHODS\nDermoscopic examination (using the DermLite Foto; 3Gen, LLC, Dana Point, CA, U.S.A.) of 26 CL lesions was performed to evaluate specific dermoscopic criteria.\n\n\nRESULTS\nWe observed the following dermoscopic features: generalized erythema (100%), 'yellow tears' (53%), hyperkeratosis (50%), central erosion/ulceration (46%), erosion/ulceration associated with hyperkeratosis (38%) and 'white starburst-like pattern' (38%). Interestingly, at least one vascular structure described in skin neoplasms was observed in all cases: comma-shaped vessels (73%), linear irregular vessels (57%), dotted vessels (53%), polymorphous/atypical vessels (26%), hairpin vessels (19%), arborizing telangiectasia (11%), corkscrew vessels (7%) and glomerular-like vessels (7%). Combination of two or more different types of vascular structures was present in 23 of 26 CL lesions (88%), with a combination of two vascular structures in 13 cases (50%) and three or more in 10 cases (38%).\n\n\nCONCLUSIONS\nCharacteristic dermoscopic structures have been identified in CL. Important vascular patterns seen in melanocytic and nonmelanocytic tumours are frequently observed in this infection."} {"_id": "e9001ea4368b808da7cabda7edd2fbd4118a3f6b", "title": "Path Planning and Controlled Crash Landing of a Quadcopter in case of a Rotor Failure", "text": "This paper presents a framework for controlled emergency landing of a quadcopter, experiencing a rotor failure, away from sensitive areas. A complete mathematical model capturing the dynamics of the system is presented that takes the asymmetrical aerodynamic load on the propellers into account.
An equilibrium state of the system is calculated around which a linear time-invariant control strategy is developed to stabilize the system. By utilizing the proposed model, a specific configuration for a quadcopter is introduced that leads to the minimum power consumption during a yaw-rate-resolved hovering after a rotor failure. Furthermore, given a 3D representation of the environment, an optimal flight trajectory towards a safe crash landing spot, while avoiding collision with obstacles, is developed using an RRT* approach. The cost function for determining the best landing spot consists of: (i) finding the safest landing spot with the largest clearance from the obstacles; and (ii) finding the most energy-efficient trajectory towards the landing spot. The performance of the proposed framework is tested via simulations."} {"_id": "6dcb07839672014c294d71ee78a7c72b715e0b76", "title": "A Bibliometric Study on Culture Research in International Business", "text": "National cultures and cultural differences provide a crucial component of the context of international business (IB) research. We conducted a bibliometric study of the articles published in seven leading IB journals, over a period of three decades, to analyze how \u201cnational culture\u201d has impacted IB research. Co-citation mappings permit us to identify the ties binding works dealing with culture and cultural issues in IB. We identify two main clusters of research, each comprising two sub-clusters, with Hofstede\u2019s (1980) work setting much of the conceptual and empirical approach of culture-related studies. One main cluster entails works on the conceptualization of culture and its dimensions, and the other works on cultural distance. This conceptual framework captures the extant IB research incorporating culture-related concepts and influences."} {"_id": "6d23073dbb68d353f30bb97f4803cfbd66546444", "title": "Ensemble Algorithms in Reinforcement Learning", "text": "This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms."} {"_id": "52afbe99de04878a5ac73a4adad4b1cedae8a6ce", "title": "Multiresolution Tree Networks for 3D Point Cloud Processing", "text": "We present multiresolution tree-structured networks to process point clouds for 3D shape understanding and generation tasks. Our network represents a 3D shape as a set of locality-preserving 1D ordered lists of points at multiple resolutions.
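Two of the four combination rules named in the ensemble-RL abstract, majority voting and Boltzmann multiplication, are easy to state directly over the learners' action preferences. The sketch below does so for a single state; the random action-value matrix is a stand-in for the outputs of the five trained learners.

```python
import numpy as np

rng = np.random.default_rng(3)

N_ACTIONS = 4

# Stand-ins for the five learners' action preferences in one state: each row
# is one algorithm's estimated action values (Q-learning, Sarsa, AC, ...).
action_values = rng.normal(size=(5, N_ACTIONS))

def majority_voting(action_values):
    """Each algorithm votes for its greedy action; ties go to the first index."""
    votes = np.zeros(N_ACTIONS)
    for q in action_values:
        votes[np.argmax(q)] += 1
    return int(np.argmax(votes))

def boltzmann_multiplication(action_values, tau=1.0):
    """Multiply the learners' Boltzmann action probabilities, then renormalize."""
    probs = np.exp(action_values / tau)
    probs /= probs.sum(axis=1, keepdims=True)
    combined = probs.prod(axis=0)
    return int(np.argmax(combined / combined.sum()))

print(majority_voting(action_values), boltzmann_multiplication(action_values))
```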
This allows efficient feed-forward processing through 1D convolutions, coarse-to-fine analysis through a multi-grid architecture, and it leads to faster convergence and a small memory footprint during training. The proposed tree-structured encoders can be used to classify shapes and outperform existing point-based architectures on shape classification benchmarks, while tree-structured decoders can be used for generating point clouds directly and they outperform existing approaches for image-to-shape inference tasks learned using the ShapeNet dataset. Our model also allows unsupervised learning of point-cloud based shapes by using a variational autoencoder, leading to higher-quality generated shapes."} {"_id": "ed84e205d306f831f84c6f31f7c94d83a60fe5d8", "title": "An application of explicit model predictive control to electric power assisted steering systems", "text": "This paper presents explicit model predictive control (MPC) for electric power assisted steering (EPAS) systems. Explicit MPC has the ability to alleviate the computational burdens of MPC, which is an online optimization technique, using multi-parametric quadratic programming (mp-QP). EPAS systems are being used more frequently for passenger cars because of their advantages over hydraulic steering systems in terms of performance and cost. The main objective of EPAS systems is to provide a desired motor torque; therefore, to handle this problem, explicit MPC is employed owing to the ability of MPC to deal with constraints on controls and states and of explicit MPC to do the same work offline. This paper summarizes a formulation of explicit MPC and introduces a straight-line boost curve, which is used to determine desired motor torques. A mechanical model, motion equations and the state-space form of a column-type EPAS system are also described in this paper. A linear-quadratic regulator (LQR) controller is designed so that the explicit MPC controller can be compared against it. The parameter values are varied to demonstrate the robustness of the proposed explicit MPC controller. The proposed control strategy shows better performance than that of the LQR as well as a reduction in the required computational complexity."} {"_id": "538d9235d0af4d02f45d17c9663e98afdf8ae4f9", "title": "Neural Semantic Encoders", "text": "We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrate the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation, where NSE achieved state-of-the-art performance when evaluated on publicly available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU."} {"_id": "43d4b77ca6f9a29993bce5bade90aa2b8e4d2cac", "title": "CCDN: Content-Centric Data Center Networks", "text": "Data center networks continually seek higher network performance to meet the ever-increasing application demand. 
Recently, researchers have explored enhancing data center network performance by intelligent caching and by increasing the access points for hot data chunks. Motivated by this, we come up with a simple yet useful caching mechanism for generic data centers, i.e., a server caches a data chunk after an application on it reads the chunk from the file system, and then uses the cached chunk to serve subsequent chunk requests from nearby servers. To turn the basic idea above into a practical system and address the challenges behind it, we design content-centric data center networks (CCDN), which exploit an innovative combination of content-based forwarding and location (Internet Protocol, IP)-based forwarding in switches, to correctly locate the target server for a data chunk on a fully distributed basis. Furthermore, CCDN enhances traditional content-based forwarding to determine the nearest target server, and enhances traditional location (IP)-based forwarding to make high utilization of the precious memory space in switches. Extensive simulations based on real-world workloads and experiments on a test bed built with NetFPGA prototypes show that, even with a small portion of the server\u2019s storage as cache (e.g., 3%) and with a modest content forwarding information base size (e.g., 1000 entries) in switches, CCDN can improve the average throughput to get data chunks by 43% compared with a pure Hadoop File System (HDFS) system in a real data center."} {"_id": "38418928d6d842fe6edadc809f384278d793d610", "title": "Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks", "text": "Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce the effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested."} {"_id": "49e77b981a0813460e2da2760ff72c522ae49871", "title": "The Limitations of Deep Learning in Adversarial Settings", "text": "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. 
However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification."} {"_id": "5c3785bc4dc07d7e77deef7e90973bdeeea760a5", "title": "TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems", "text": "Mart\u0131\u0301n Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng, Google Research"} {"_id": "83bfdd6a2b28106b9fb66e52832c45f08b828541", "title": "Intriguing properties of neural networks", "text": "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network\u2019s prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."} {"_id": "8aef07e27848bb1c3ec62537c58e276361530c14", "title": "Millimeter wave cavity backed aperture coupled microstrip patch antenna", "text": "In this paper, a new single-element broadband cavity-backed multi-layer microstrip patch antenna is presented. 
This antenna is designed to be used in 79 GHz MIMO radar applications, and is fabricated with advanced high-resolution multi-layer PCB technology. It has a wide bandwidth of 11.43 GHz, which is 14.5% of the center frequency. The beamwidths in both the E- and H-planes are very wide: 144 degrees and 80 degrees, respectively. Also, the antenna gain and efficiency are 4.7 dB and 87%, respectively. Furthermore, measurements have been performed and show good agreement with simulations."} {"_id": "b7d5aea8ac1332767c4525c29efc68cfc34bdf11", "title": "ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA", "text": "This paper investigates the problems of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not study the presence of outliers, and are not statistically robust. But especially mobile laser scanning often has incomplete data, as street poles for example are only scanned from the road. Moreover, existence of outliers is common. Outliers may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components as obtained by RPCA allow estimating cylinder directions more accurately, and an existing efficient circle-fitting algorithm following robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different radii. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fit cylinders with an average radius of 3.63 m; the proposed method, on the other hand, fit cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical (e.g., light and traffic) poles, diameter at breast height estimation for trees, and building and bridge information modelling."} {"_id": "3d92b174e02bd6c30bd989eac3465dfde081878c", "title": "Feature integration analysis of bag-of-features model for image retrieval", "text": "One of the biggest challenges in content based image retrieval is to solve the problem of \u201csemantic gaps\u201d between low-level features and high-level semantic concepts. In this paper, we aim to investigate various combinations of mid-level features to build an effective image retrieval system based on the bag-of-features (BoF) model. Specifically, we study two ways of integrating the SIFT and LBP descriptors, HOG and LBP descriptors, respectively. Based on the qualitative and quantitative evaluations on two benchmark datasets, we show that the integrations of these features yield complementary and substantial improvement on image retrieval even with noisy background and ambiguous objects. Two integration models are proposed: the patch-based integration and image-based integration. By using a weighted K-means clustering algorithm, the image-based SIFT-LBP integration achieves the best performance on the given benchmark problems compared to the existing algorithms."}
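The image-based SIFT-LBP integration just described can be pictured with a small sketch. The code below is my reading of the general recipe, not the authors' implementation: one visual codebook per descriptor type, per-image BoF histograms, and a tunable weight when concatenating; the random arrays stand in for real SIFT/LBP patch descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans

def bof_histogram(descriptors, codebook):
    words = codebook.predict(descriptors)          # assign patches to visual words
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)             # L1-normalize the histogram

rng = np.random.default_rng(0)
sift = rng.random((500, 128))   # stand-ins for real SIFT patch descriptors
lbp = rng.random((500, 59))     # stand-ins for real LBP patch descriptors

sift_cb = KMeans(n_clusters=64, n_init=4, random_state=0).fit(sift)
lbp_cb = KMeans(n_clusters=64, n_init=4, random_state=0).fit(lbp)

w = 0.6  # relative weight of SIFT vs. LBP (a tunable assumption)
image_repr = np.concatenate([w * bof_histogram(sift, sift_cb),
                             (1 - w) * bof_histogram(lbp, lbp_cb)])
print(image_repr.shape)   # (128,) combined retrieval signature per image
```

Retrieval would then rank database images by the distance between such signatures; the weighting step is where the two descriptor types are traded off against each other.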
{"_id": "b695bb1cb3e8ccc34a780aa67512917cb5ce28c9", "title": "The Ability to Regulate Emotion is Associated with Greater Well-Being, Income, and Socioeconomic Status", "text": "Are people who are best able to implement strategies to regulate their emotional expressive behavior happier and more successful than their counterparts? Although past research has examined individual variation in knowledge of the most effective emotion regulation strategies, little is known about how individual differences in the ability to actually implement these strategies, as assessed objectively in the laboratory, is associated with external criteria. In two studies, we examined how individual variation in the ability to modify emotional expressive behavior in response to evocative stimuli is related to well-being and financial success. Study 1 showed that individuals who can best suppress their emotional reaction to an acoustic startle are happiest with their lives. Study 2 showed that individuals who can best amplify their emotional reaction to a disgust-eliciting movie are happiest with their lives and have the highest disposable income and socioeconomic status. Thus, being able to implement emotion regulation strategies in the laboratory is closely linked to well-being and financial success. Individual variation in cognitive abilities, such as language and mathematics, has been shown to relate strongly to a number of important life criteria, including performance at school and at work (Kunzel, Hezlett, & Ones, 2004; Schmidt & Hunter, 1998). Research in recent years has suggested that there is also important variation among individuals in emotional abilities (see Mayer, Roberts, & Barsade, 2008; Mayer, Salovey, & Caruso, 2008, for reviews). In particular, the ability to regulate emotions reflects variation in how well people adjust emotional responses to meet current situational demands (Gross & Thompson, 2007; Salovey & Mayer, 1990). Equipped with this ability, individuals can aptly modify which emotions they have, when they have them, and how they experience and express them (Gross, 1998). This ability is arguably one of the most critical elements of our emotion repertoire, and it is the focus of the present research. Past research has begun to examine whether individual variation in the ability to regulate emotions is associated with various criteria. This research has found that variation in knowledge of how to best regulate emotions \u2013 whether people know the rules of emotion regulation \u2013 is associated with well-being, close social relationships, high grades in school, and high job performance (e.g., C\u00f4t\u00e9 & Miners, 2006; Lopes, Salovey, C\u00f4t\u00e9, & Beers, 2005; MacCann & Roberts, 2008). The measures used in these studies assess the degree to which people know how to best manage emotions. Specifically, they reflect how closely respondents\u2019 judgments of how to best regulate emotion in hypothetical scenarios match the judgments of experts. For instance, the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer, Salovey, & Caruso, 2002) asks respondents to rate the effectiveness of a series of strategies to manage emotions in several hypothetical scenarios, and their responses are compared to those provided by expert emotion researchers. 
Notwithstanding the importance of knowing how to best manage emotions, knowledge does not fully represent the domain of emotion regulation ability. People who know the best strategies may not implement them well. The distinction between knowledge and the ability to implement is established in the larger literature on intelligence (cf. Ackerman, 1996), and it is also theoretically useful to describe emotional abilities. For example, a customer service agent who knows that cognitively reframing an interaction with a difficult customer is the best strategy may not implement that strategy well during the interaction. Thus, to understand fully how emotion regulation ability is associated with criteria such as well-being and financial success, researchers must also examine the ability to implement strategies to regulate emotions \u2013 whether people can actually operate the machinery of emotion regulation. Several of the measures used in studies of the relationship between emotion regulation and other criteria do not assess actual ability to implement emotion regulation strategies. For example, the MSCEIT (Mayer et al., 2002) does not ask respondents to implement the strategy that they believe best addresses the issues depicted in the scenarios. Recent advances in affective science, however, provide tools to objectively assess the ability to implement emotion regulation strategies (Gross & Levenson, 1993; Hagemann, Levenson, & Gross, 2006; Jackson, Malmstadt, Larson, & Davidson, 2000; Kunzmann, Kupperbusch, & Levenson, 2005). In these laboratory paradigms, individuals receive specific instructions about how to regulate their emotions (e.g., reduce the intensity of their emotional expressive behaviors) when encountering emotional stimuli, such as loud noises or emotionally evocative film clips. Success at implementing the emotion regulation strategy can be measured objectively, for example, by coding how much respondents change their emotional expressive behavior when being instructed to do so. Several studies have used this paradigm to examine how regulating emotions is associated with cognitive task performance (Baumeister, Bratslavsky, Muraven, & Tice, 1998; Bonanno, Papa, Lalande, Westphal, & Coifman, 2004; Schmeichel, Demaree, Robinson, & Pu, 2006), the activation of neural systems (Beauregard, Levesque, & Bourgouin, 2001; Ochsner, Bunge, Gross, & Gabrieli, 2002), and emotion experience, emotional expressive behavior, and autonomic physiology (Demaree, Schmeichel, Robinson, Pu, Everhart, & Berntson, 2006; Giuliani, McCrae, & Gross, 2008; Gross & Levenson, 1993, 1997; Hagemann et al., 2006). This paradigm has also been used as an individual difference measure to test how the ability to implement emotion regulation strategies is associated with age (Kunzmann et al., 2005; Scheibe & Blanchard-Fields, 2009), working memory (Schmeichel, Volokhov, & Demaree, 2008), and executive function (Gyurak, Goodkind, Madan, Kramer, Miller, & Levenson, 2008). In addition, one study employed this paradigm to assess people\u2019s flexibility in using different emotion regulation strategies depending on the situation, showing that flexibility is associated with lower distress after a traumatic event (Bonanno, Papa, Lalande, Westphal, & Coifman, 2004). 
Thus, this body of research supports the utility of these laboratory paradigms for assessing individual variation in the ability to implement emotion regulation strategies and the correlates of this ability. In this report, we present the results of two studies that examine whether individual variation in the ability to implement strategies to regulate emotions is associated with well-being and financial success and, if so, in what direction. Most people regulate their emotions daily, and more than half the time, they do so by modifying the expression of emotions in their face, voice, and posture (Gross, Richards, & John, 2006). Given the frequency with which we regulate our emotional expressive behavior, it is reasonable to expect that the individual\u2019s ability in this realm would exhibit important associations with other constructs. The regulation of visible expressive behavior encompasses both up-regulation (amplifying emotional expressive behavior) and down-regulation (reducing emotional expressive behavior). We considered the association of both with our criteria. We now turn to our theoretical development. A review of the existing literature suggests the possibility of both a positive and a negative association between the ability to implement emotion regulation strategies assessed in the laboratory and well-being and financial success. Furthermore, because we do not test the direction of causality in our studies, we consider theoretical arguments for both causal directions of associations, reviewing literatures that suggest that emotion regulation ability has consequences for well-being and financial success (both positive and negative), and also that well-being and financial success have consequences for emotion regulation ability (both positive and negative). The Ability to Regulate Emotional Behavior and Well-Being and Financial Success: Positive Associations In this section, we present theoretical arguments suggesting that the ability to regulate emotion and well-being and financial success are positively associated. We first describe why high emotion regulation ability may help people become happier and garner more financial resources, and then we examine whether happiness and financial resources may help people develop better abilities to regulate their emotions. Why Would Emotion Regulation Ability Increase Well-Being and Financial Success? Philosophers have argued that rational thought and a happy life require the ability to rein in emotional impulses (Aristotle, 1884; Solomon, 1993). The ability to modify emotional expressive behavior effectively may help people adapt flexibly to situational demands. Equipped with this ability, individuals might be more successful in communicating attitudes, goals, and intentions that are appropriate in various situations (Keltner & Haidt, 1999) and that might be rewarded and fulfilled. The ability to adapt successfully to situational demands then could be associated with various indicators of well-being and success. At a more micro-level, modifying emotional expressive behavior effectively may help people conform to display rules about who can show which emotions to whom and when they can do so (Friesen, 1972). People often attain rewards for conforming to display rules in various settings. 
For instance, employees who conform to display rules at wo"} {"_id": "9b90568faad1fd394737b79503571b7f5f0b2f4b", "title": "Optimizing Space Amplification in RocksDB", "text": "RocksDB is an embedded, high-performance, persistent key-value storage engine developed at Facebook. Much of our current focus in developing and configuring RocksDB is to give priority to resource efficiency instead of giving priority to the more standard performance metrics, such as response-time latency and throughput, as long as the latter remain acceptable. In particular, we optimize space efficiency while ensuring read and write latencies meet service-level requirements for the intended workloads. This choice is motivated by the fact that storage space is most often the primary bottleneck when using Flash SSDs under typical production workloads at Facebook. RocksDB uses log-structured merge trees to obtain significant space efficiency and better write throughput while achieving acceptable read performance. This paper describes methods we used to reduce storage usage in RocksDB. We discuss how we are able to trade off storage efficiency and CPU overhead, as well as read and write amplification. Based on experimental evaluations of MySQL with RocksDB as the embedded storage engine (using TPC-C and LinkBench benchmarks) and based on measurements taken from production databases, we show that RocksDB uses less than half the storage that InnoDB uses, yet performs well and in many cases even better than the B-tree-based InnoDB storage engine. To the best of our knowledge, this is the first time a log-structured merge-tree-based storage engine has shown competitive performance when running OLTP workloads at large scale."} {"_id": "676680a4812cee810341f9153c15ae93fefd6675", "title": "Ultra-Reliable and Low-Latency 5G Communication", "text": "Machine-to-machine (M2M) communication will make up a large portion of the new types of services and use cases that the fifth generation (5G) systems will address. On the one hand, 5G will connect a large number of low-cost and low-energy devices in the context of the Internet of things; on the other hand, it will enable critical machine type communication use cases, such as smart factory, automotive, energy, and e-health \u2013 which require communication with very high reliability and availability, as well as very low end-to-end latency. In this paper, we will discuss the requirements, enablers and challenges to support these emerging mission-critical 5G use cases. Keywords\u2014 5G, NR, M2M, MTC, URLLC, Tactile Internet, Reliability, Latency."} {"_id": "a2418f51c11eaedfbf6168cbbe7fcf2a1fee9c9c", "title": "A GLOBAL MEASURE OF PERCEIVED STRESS", "text": "Aim: To evaluate the validity of the Greek version of a global measure of perceived stress, the PSS-14 (Perceived Stress Scale \u2013 14 item). Materials and Methods: The original PSS-14 (theoretical range 0\u201356) was translated into Greek and then back-translated. One hundred men and women (39\u00b110 years old, 40 men) participated in the validation process. First, participants completed the Greek PSS-14, and then they were interviewed by a psychologist specializing in stress management. Cronbach\u2019s alpha (\u03b1) evaluated internal consistency of the measurement, whereas Kendall\u2019s tau-b and Bland & Altman methods assessed consistency with the clinical evaluation. 
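As a quick illustration of the internal-consistency statistic used in this validation, the sketch below computes Cronbach's alpha on synthetic 14-item responses; the data, item scale (0\u20134 per item, as in the PSS-14), and effect sizes are invented for the example, not taken from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Synthetic 14-item responses for 100 people driven by one latent trait,
# so the items covary and alpha comes out high.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
scores = np.clip(np.round(2 + latent + rng.normal(scale=0.8, size=(100, 14))), 0, 4)
print(round(cronbach_alpha(scores), 3))
```

Values near the study's reported 0.847 indicate that the 14 items largely measure a single underlying construct.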
Exploratory and Confirmatory Factor analyses were conducted to reveal hidden factors within the data and to confirm the two-dimensional character of the scale. Results: The mean (SD) PSS-14 score was 25 (7.9). Strong internal consistency (Cronbach\u2019s \u03b1 = 0.847) as well as moderate-to-good concordance between clinical assessment and PSS-14 (Kendall\u2019s tau-b = 0.43, p<0.01) were observed. Two factors were extracted. Factor one explained 34.7% of the variability and was heavily laden by positive items, while factor two, which explained 10.6% of the variability, was laden by negative items. Confirmatory factor analysis revealed that the two-factor model had chi-square equal to 241.23 (p<0.001), absolute fit indexes were good (i.e. GFI=0.733, AGFI=0.529), and incremental fit indexes were also adequate (i.e. NFI=0.89 and CFI=0.92). Conclusion: The developed Greek version of the PSS-14 seems to be a valid instrument for the assessment of perceived stress in the Greek adult population living in urban areas, a finding that supports its local use in research settings as an evaluation tool measuring perceived stress, mainly as a risk factor but without diagnostic properties."} {"_id": "a96520ac46c6789d607405d6cb514e709a046927", "title": "Polysemy: Theoretical and Computational Approaches Yael Ravin and Claudia Leacock (editors) (IBM T. J. Watson Research Center and Educational Testing Services) New York: Oxford University Press, 2000, xi+227 pp; hardbound, ISBN 0-19-823842-8, $74.00, \u00a345.00; paperbound, ISBN 0-19-925086-3, $21.95, \u00a3", "text": "As the editors of this volume remind us, polysemy has been a vexing issue for the understanding of language since antiquity. For half a century, it has been a major bottleneck for natural language processing. It contributed to the failure of early machine translation research (remember Bar-Hillel\u2019s famous pen and box example) and is still plaguing most natural language processing and information retrieval applications. A recent issue of this journal described the state of the art in automatic sense disambiguation (Ide and V\u00e9ronis 1998), and Senseval system competitions have revealed the immense difficulty of the task (http://www.sle.sharp.co.uk/senseval2). However, no significant progress can be made on the computational aspects of polysemy without serious advances in theoretical issues. At the same time, theoretical work can be fostered by computational results and problems, and language-processing applications can provide a unique test bed for theories. It was therefore an excellent idea to gather both theoretical and applied contributions in the same book. Yael Ravin and Claudia Leacock are well-known names to those who work on the theoretical and computational aspects of word meaning. In this volume, they bring together a collection of essays from leading researchers in the field. As far as I can tell, these essays are not reprints or expanded versions of conference papers, as is often the case for edited works; instead, they seem to have been specially commissioned for the purposes of this book, which makes it even more exciting to examine. The book is composed of 11 chapters. It is not formally divided into parts, but chapters dealing more specifically with the computational aspects of polysemy are grouped together at the end (and constitute about one-third of the volume). Chapter 1 is an overview written by the volume editors. Yael Ravin and Claudia Leacock survey the main theories of meaning and their treatment of polysemy. 
These include the classical Aristotelian approach revived by Katz and Fodor (1963); Rosch\u2019s (1977) prototypical approach, which has its roots in Wittgenstein\u2019s Philosophical Investigations (1953); and the relational approach recently exemplified by WordNet (Fellbaum 1998), which (although the authors do not mention it) can be traced back to Peirce\u2019s (1931\u20131958) and Selz\u2019s (1913, 1922) graphs and which gained popularity with Quillian\u2019s (1968) semantic networks. In the course of this overview, Ravin and Leacock put the individual chapters into perspective by relating them to the various theories."} {"_id": "40ee1e872dac8cd5ee1e9967a8ca73d2b1ae251c", "title": "Effects of a Dynamic Warm-Up, Static Stretching or Static Stretching with Tendon Vibration on Vertical Jump Performance and EMG Responses", "text": "The purpose of this study was to investigate the short-term effects of static stretching, with vibration given directly over the Achilles tendon, on electromyographic (EMG) responses and vertical jump (VJ) performances. Fifteen male college athletes voluntarily participated in this study (n=15; age: 22\u00b14 years old; body height: 181\u00b110 cm; body mass: 74\u00b111 kg). All stages were completed within 90 minutes for each participant. Tendon vibration bouts lasted 30 seconds at 50 Hz for each volunteer. EMG analysis for peripheral silent period, H-reflex, H-reflex threshold, T-reflex and H/M ratio was completed for each experimental phase. EMG data were obtained from the soleus muscle in response to electrical stimulation of the posterior tibial nerve at the popliteal fossa. As expected, the dynamic warm-up (DW) increased VJ performances (p=0.004). Increased VJ performances after the DW were not statistically substantiated by the EMG findings. In addition, EMG results did not indicate that either static stretching (SS) or tendon vibration combined with static stretching (TVSS) had any detrimental or facilitative effect on vertical jump performances. In conclusion, using TVSS does not seem to facilitate warm-up effects before explosive performance."} {"_id": "67fb947cdbb040cd5883e599aa29f33dd3241d3c", "title": "Relation between language experiences in preschool classrooms and children's kindergarten and fourth-grade language and reading abilities.", "text": "Indirect effects of preschool classroom indexes of teacher talk were tested on fourth-grade outcomes for 57 students from low-income families in a longitudinal study of classroom and home influences on reading. Detailed observations and audiotaped teacher and child language data were coded to measure content and quantity of verbal interactions in preschool classrooms. Preschool teachers' use of sophisticated vocabulary during free play predicted fourth-grade reading comprehension and word recognition (mean age=9; 7), with effects mediated by kindergarten child language measures (mean age=5; 6). In large group preschool settings, teachers' attention-getting utterances were directly related to later comprehension. Preschool teachers' correcting utterances and analytic talk about books, and early support in the home for literacy predicted fourth-grade vocabulary, as mediated by kindergarten receptive vocabulary."} {"_id": "bed350c7507c1823d9eecf57d1ca19c09b5cd71e", "title": "Static secure page allocation for light-weight dynamic information flow tracking", "text": "Dynamic information flow tracking (DIFT) is an effective security countermeasure for both low-level memory corruptions and high-level semantic attacks. 
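Before the page-based variant this abstract goes on to propose, the generic DIFT idea can be seen in miniature: every value carries a taint, taints propagate through computation, and a policy check fires when tainted data reaches a sensitive sink. The Python toy below is my own illustration of that principle, not the paper's compiler/architecture design.

```python
# Toy dynamic information flow tracking: values carry a taint bit, and
# operations propagate the union of their operands' taints.
class Tainted:
    def __init__(self, value, taint=False):
        self.value, self.taint = value, taint
    def __add__(self, other):
        return Tainted(self.value + other.value, self.taint or other.taint)

user_input = Tainted(0x41, taint=True)    # attacker-controlled byte
offset = Tainted(8)
ret_addr = user_input + offset            # taint propagates through arithmetic

def store_return_address(addr):
    # Policy: control data (return addresses, jump targets, syscall IDs)
    # must never be derived from tainted input.
    if addr.taint:
        raise RuntimeError("tainted value flowed into a return address")

try:
    store_return_address(ret_addr)
except RuntimeError as e:
    print("blocked:", e)
```

The paper's contribution, described next, is to avoid storing such a per-value tag by grouping data of like taint into separate memory pages, so the page itself encodes the attribute.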
However, many software approaches suffer from large performance degradation, and hardware approaches have high logic and storage overhead. We propose a flexible and light-weight hardware/software co-design approach to perform DIFT based on secure page allocation. Instead of associating every data item with a taint tag, we aggregate data according to their taints, i.e., putting data with different attributes in separate memory pages. Our approach is a compiler-aided process with architecture support. The implementation and analysis show that the memory overhead is small, and our approach can protect critical information, including return addresses, indirect jump addresses, and system call IDs, from being overwritten by malicious users."} {"_id": "fbde0ec7f7b47f3183f6004e4f8261f5e8aa1a94", "title": "A concept for automated construction progress monitoring using BIM-based geometric constraints and photogrammetric point clouds", "text": "On-site progress monitoring is essential for keeping track of the ongoing work on construction sites. Currently, this task is a manual, time-consuming activity. The research presented here describes a concept for an automated comparison of the actual state of construction with the planned state for the early detection of deviations in the construction process. The actual state of the construction site is detected by photogrammetric surveys. From these recordings, dense point clouds are generated by the fusion of disparity maps created with semi-global-matching (SGM). These are matched against the target state provided by a 4D Building Information Model (BIM). For matching the point cloud and the BIM, the distances between individual points of the cloud and a component\u2019s surface are aggregated using a regular cell grid. For each cell, the degree of coverage is determined. Based on this, a confidence value is computed which serves as the basis for the existence decision concerning the respective component. Additionally, process- and dependency-relations are included to further enhance the detection process. Experimental results from a real-world case study are presented and discussed."} {"_id": "71b02d95f0081afc8e4a611942bbc2b5d739ea81", "title": "Evolutionary computation and evolutionary deep learning for image analysis, signal processing and pattern recognition", "text": "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). GECCO \u201918 Companion, July 15--19, 2018, Kyoto, Japan \u00a9 2018 Copyright held by the owner/author(s). 978-1-4503-5764-7/18/07...$15.00 DOI https://doi.org/10.1145/3205651.3207859"} {"_id": "04c42902e984900e411dfedfabfe5b80056738ce", "title": "Serotonin alterations in anorexia and bulimia nervosa: New insights from imaging studies", "text": "Anorexia nervosa (AN) and bulimia nervosa (BN) are related disorders with relatively homogenous presentations such as age of onset and gender distribution. In addition, they share symptoms, such as extremes of food consumption, body image distortion, anxiety and obsessions, and ego-syntonic neglect, which raises the possibility that these symptoms reflect disturbed brain function that contributes to the pathophysiology of this illness. 
Recent brain imaging studies have identified altered activity in frontal, cingulate, temporal, and parietal cortical regions in AN and BN. Importantly, such disturbances are present when subjects are ill and persist after recovery, suggesting that these may be traits that are independent of the state of the illness. Emerging data point to a dysregulation of serotonin pathways in cortical and limbic structures that may be related to anxiety, behavioral inhibition, and body image distortions. Specifically, recent studies using PET with serotonin-specific radioligands implicate alterations of 5-HT1A and 5-HT2A receptors and the 5-HT transporter. Alterations of these circuits may affect mood and impulse control as well as the motivating and hedonic aspects of feeding behavior. Such imaging studies may offer insights into new pharmacology and psychotherapy approaches."} {"_id": "3f52f57dcfdd1bb0514ff744f4fdaa986a325591", "title": "Thunderstrike: EFI firmware bootkits for Apple MacBooks", "text": "There are several flaws in Apple's MacBook firmware security that allow untrusted modifications to be written to the SPI Flash boot ROM of these laptops. This capability represents a new class of persistent firmware rootkits, or 'bootkits', for the popular Apple MacBook product line. Stealthy bootkits can conceal themselves from detection and prevent software attempts to remove them. Malicious modifications to the boot ROM are able to survive re-installation of the operating system and even hard-drive replacement. Additionally, the malware can install a copy of itself onto other Thunderbolt devices' Option ROMs as a means to spread virally across air-gap security perimeters. Apple has fixed some of these flaws as part of CVE-2014-4498, but there is no easy solution to this class of vulnerability, since the MacBook lacks trusted hardware to perform cryptographic validation of the firmware at boot time."} {"_id": "238a7cd1d672362ab96315290bf1bdac14f5b065", "title": "High-Q MEMS Resonators for Laser Beam Scanning Displays", "text": "This paper reports on design, fabrication and characterization of high-Q MEMS resonators to be used in optical applications like laser displays and LIDAR range sensors. Stacked vertical comb drives for electrostatic actuation of single-axis scanners and biaxial MEMS mirrors were realized in a dual-layer polysilicon SOI process. High Q-factors up to 145,000 have been achieved applying wafer level vacuum packaging technology including deposition of titanium thin film getters. The effective reduction of gas damping allows the MEMS actuator to achieve large amplitudes at high oscillation frequencies while driving voltage and power consumption can be minimized. As an example, a micro scanner is shown that achieves a total optical scan angle of 86 degrees at a resonant frequency of 30.8 kHz, which fulfills the requirements for HD720 resolution. Furthermore, results of a new wafer based glass-forming technology for fabrication of three dimensionally shaped glass lids with tilted optical windows are presented."} {"_id": "7b485979c75b46d8c194868c0e70890f4a0f0ede", "title": "Supplementary material for the paper: Discriminative Correlation Filter with Channel and Spatial Reliability DCF-CSR", "text": "This is the supplementary material for the paper \u201cDiscriminative Correlation Filter with Channel and Spatial Reliability\u201d submitted to the CVPR 2017. 
Due to spatial constraints, parts not crucial for understanding the DCF-CSR tracker formulation, but helpful for gaining insights, were moved here. 1. Derivation of the augmented Lagrangian minimizer. This section provides the complete derivation of the closed-form solutions for the relations (9,10) in the submitted paper [3]. The augmented Lagrangian from Equation (5) in [3] is $L(\hat{h}_c, h, \hat{l}) = \|\hat{h}_c\,\mathrm{diag}(\hat{f}) - \hat{g}\|^2 + \frac{\lambda}{2}\|h_m\|^2 + [\hat{l}^H(\hat{h}_c - \hat{h}_m) + \overline{\hat{l}^H(\hat{h}_c - \hat{h}_m)}] + \mu\|\hat{h}_c - \hat{h}_m\|^2$, (1) with $h_m = (m \odot h)$. For the purposes of derivation we will rewrite (1) into a fully vectorized form $L(\hat{h}_c, h, \hat{l}) = \|\hat{h}_c\,\mathrm{diag}(\hat{f}) - \hat{g}\|^2 + \frac{\lambda}{2}\|h_m\|^2 + [\hat{l}^H(\hat{h}_c - \sqrt{D}FMh) + \overline{\hat{l}^H(\hat{h}_c - \sqrt{D}FMh)}] + \mu\|\hat{h}_c - \sqrt{D}FMh\|^2$, (2) where $F$ denotes the $D \times D$ orthonormal matrix of Fourier coefficients, such that the Fourier transform is defined as $\hat{x} = \mathcal{F}(x) = \sqrt{D}Fx$, and $M = \mathrm{diag}(m)$. For clearer representation we denote the four terms in the summation (2) as $L(\hat{h}_c, h, \hat{l}) = L_1 + L_2 + L_3 + L_4$, (3) where $L_1 = (\hat{h}_c\,\mathrm{diag}(\hat{f}) - \hat{g})(\hat{h}_c\,\mathrm{diag}(\hat{f}) - \hat{g})^H$. (4)"} {"_id": "011d14aa1aa7c9a80e88d593bc783642547bb8a8", "title": "Efficient Energy-Optimal Routing for Electric Vehicles", "text": "Traditionally routing has focused on finding shortest paths in networks with positive, static edge costs representing the distance between two nodes. Energy-optimal routing for electric vehicles creates novel algorithmic challenges, as simply understanding edge costs as energy values and applying standard algorithms does not work. First, edge costs can be negative due to recuperation, excluding Dijkstra-like algorithms. Second, edge costs may depend on parameters such as vehicle weight only known at query time, ruling out existing preprocessing techniques. Third, considering battery capacity limitations implies that the cost of a path is no longer just the sum of its edge costs. This paper shows how these challenges can be met within the framework of A* search. We show how the specific domain gives rise to a consistent heuristic function yielding an O(n) routing algorithm. Moreover, we show how battery constraints can be treated by dynamically adapting edge costs and hence can be handled in the same way as parameters given at query time, without increasing run-time complexity. Experimental results with real road networks and vehicle data demonstrate the advantages of our solution."} {"_id": "8e4641d963762a0f8bd58f2c6a2a176a9418fdec", "title": "Matching RGB Images to CAD Models for Object Pose Estimation", "text": "We propose a novel method for 3D object pose estimation in RGB images, which does not require pose annotations of objects in images in the training stage. We tackle the pose estimation problem by learning how to establish correspondences between RGB images and rendered depth images of CAD models. During training, our approach only requires textureless CAD models and aligned RGB-D frames of a subset of object instances, without explicitly requiring pose annotations for the RGB images. We employ a deep quadruplet convolutional neural network for joint learning of suitable keypoints and their associated descriptors in pairs of rendered depth images which can be matched across modalities with aligned RGB-D views. 
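The cross-modal matching step that follows can be pictured with a generic nearest-neighbor matcher. The sketch below is an assumed recipe, not the paper's network: random vectors stand in for the learned descriptors, and a Lowe-style ratio test filters ambiguous matches.

```python
import numpy as np

def match_descriptors(query, reference, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test: accept a match
    only if the best candidate clearly beats the runner-up."""
    matches = []
    for i, d in enumerate(query):
        dists = np.linalg.norm(reference - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 64))                     # rendered-depth descriptors
qry = ref[:50] + 0.05 * rng.normal(size=(50, 64))    # noisy query-image descriptors
print(len(match_descriptors(qry, ref)))              # most of the 50 should survive
```

The surviving matches, lifted to 2D-3D correspondences, would then feed a robust pose solver; in OpenCV, for instance, cv2.solvePnPRansac plays the role of the RANSAC-plus-PnP step described next.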
During testing, keypoints are extracted from a query RGB image and matched to keypoints extracted from rendered depth images, followed by establishing 2D-3D correspondences. The object\u2019s pose is then estimated using the RANSAC and PnP algorithms. We conduct experiments on the recently introduced Pix3D [33] dataset and demonstrate the efficacy of our proposed approach in object pose estimation as well as generalization to object instances not seen during training."} {"_id": "ac7c07ef8b208dc67505f844467a2892784363f5", "title": "Field-weakening Control Algorithm for Interior Permanent Magnet Synchronous Motor Based on Space-Vector Modulation Technique", "text": "We investigate the implementation of field-weakening control for an interior permanent magnet synchronous motor (IPMSM), and analyze the algorithm of field-weakening control in the d-q axes. To deal with the problem that the dc-link voltage is not fully utilized when voltage regulation is employed, we propose a new field-weakening scheme based on the space-vector modulation technique. The duty time of the zero-voltage vector is used as the feedback signal to decide the switching of field-weakening. To avoid the regulation lag in the q-axis component, we apply lead-angle control during the field-weakening process. The proposed scheme is validated with the Matlab/Simulink tool. Simulation results show that the scheme is feasible. It not only improves the utilization ratio of the space-vector-pulse-width-modulation (SVPWM) inverter, but also achieves a smooth transition between the constant-torque mode below the base speed and the constant-power mode above the base speed."} {"_id": "569c295f18ede0e7472882ff13cbb71e6a1c1a41", "title": "An Image Matching Algorithm Based on SIFT and Improved LTP", "text": "SIFT is one of the most robust and widely used image matching algorithms based on local features. However, the keypoint descriptors of the SIFT algorithm have 128 dimensions. Aiming at the problem of their high dimension and complexity, a novel image matching algorithm is proposed. The descriptors of SIFT keypoints are constructed by the rotation-invariant LTP, and the city-block distance is employed to reduce the computation of keypoint matching. Experiments were carried out with different lighting, blur changes and rotation of images; the results show that this method can reduce the processing time and raise image matching efficiency."} {"_id": "3b3acbf7cc2ec806e4177eac286a2ee22f6f7630", "title": "An Over-110-GHz-Bandwidth 2:1 Analog Multiplexer in 0.25-\u03bcm InP DHBT Technology", "text": "This paper presents an over-110-GHz-bandwidth 2:1 analog multiplexer (AMUX) for ultra-broadband digital-to-analog (D/A) conversion subsystems. The AMUX was designed and fabricated by using newly developed 0.25-\u03bcm-emitter-width InP double heterojunction bipolar transistors (DHBTs), which have a peak fT and fmax of 460 and 480 GHz, respectively. The AMUX IC consists of lumped building blocks, including data-input linear buffers, a clock-input limiting buffer, an AMUX core, and an output linear buffer. The measured 3-dB bandwidths for the data and clock paths are both over 110 GHz. In addition, it achieves time-domain large-signal sampling operation at up to 180 GS/s. A 224-Gb/s (112-GBaud) four-level pulse-amplitude modulation (PAM4) signal was successfully generated by using this AMUX. 
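The 224-Gb/s figure follows directly from the symbol rate: PAM4 encodes log2(4) = 2 bits per symbol, so 112 GBaud carries 224 Gb/s. The small Python check below verifies the arithmetic and shows one common Gray-coded bit-to-level mapping (the mapping is an illustrative convention, not taken from the paper).

```python
from math import log2

# PAM4 carries log2(4) = 2 bits per symbol, so the reported 112-GBaud
# symbol rate corresponds to 112e9 * 2 = 224 Gb/s.
bits_per_symbol = log2(4)
print(112e9 * bits_per_symbol / 1e9, "Gb/s")   # 224.0 Gb/s

# Gray-coded two-bit-to-level mapping often used for PAM4.
levels = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
bits = [1, 0, 0, 1, 1, 1]
symbols = [levels[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
print(symbols)   # [3, -1, 1]
```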
To the best of our knowledge, this AMUX IC has the broadest bandwidth and the fastest sampling rate of any previously reported AMUX."} {"_id": "7e13493a290fe0d6a1f4bd477e00b3ca37c5d82a", "title": "Field-stop layer optimization for 1200V FS IGBT operating at 200\u02daC", "text": "This paper is concerned with design considerations for enabling the operation of Field-Stop Insulated Gate Bipolar Transistors (FS IGBTs) at 200\u02daC. It is found that, through a careful optimization of the Field-Stop layer doping profile, the device has a low leakage current and delivers a favorable tradeoff between the on-state voltage (Von) and turn-off loss (Eoff). An investigation of the adverse effects of increasing the junction temperature on the temperature-dependent properties of the FS IGBTs is also discussed herein."} {"_id": "23059f5f42e71e6ab4981081a310df65d28ecda1", "title": "Automatic indexing: an approach using an index term corpus and combining linguistic and statistical methods", "text": "This thesis discusses the problems and the methods of finding relevant information in large collections of documents. The contribution of this thesis to this problem is to develop better content analysis methods which can be used to describe document content with"} {"_id": "2f02f1276a0b01d0e6e3501eb4961ee6bea45702", "title": "Theory of Outlier Ensembles", "text": "Outlier detection is an unsupervised problem, in which labels are not available with data records [2]. As a result, it is generally more challenging to design ensemble analysis algorithms for outlier detection. In particular, methods that require the use of labels in intermediate steps of the algorithm cannot be generalized to outlier detection. For example, in the case of boosting, the classifier algorithm needs to be evaluated in the intermediate steps of the algorithm with the use of training-data labels. Such methods are generally not possible in the case of outlier analysis. As discussed in [1], there are unique reasons why ensemble analysis is generally more difficult in the case of outlier analysis as compared to classification. In spite of the unsupervised nature of outlier ensemble analysis, we show that the theoretical foundations of outlier analysis and classification are surprisingly similar. A number of useful discussions on the theory of classification ensembles may be found in [27, 29, 33]. Further explanations on the use of the bias-variance decomposition in different types of classifiers such as neural networks, support vector machines, and rule-based classifiers are discussed in [17, 30, 31]. It is noteworthy that the bias-variance decomposition is often used in customized ways for different types of base classifiers and combination methods; this general principle is also true in outlier detection. Several arguments have recently been proposed on the theory explaining the accuracy improvements of outlier ensembles. In some cases, incorrect new arguments (such as those in [32]) are proposed to justify experimental results that can be explained by well-known ideas, and an artificial distinction is made between the theory of classification ensembles and outlier ensembles. A recent paper [4]"} {"_id": "f919bc88b746d7a420b6364142e0ac13aa80de44", "title": "The Usability of Open Source Software", "text": "Open source communities have successfully developed a great deal of software, although most computer users only use proprietary applications. 
The usability of open source software is often regarded as one reason for this limited distribution. In this paper we review the existing evidence of the usability of open source software and discuss how the characteristics of open source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed networked communities, of developers and users, to address issues of usability."} {"_id": "1e62935d9288446abdaeb1fec84daa12a684b300", "title": "Detecting adversarial advertisements in the wild", "text": "In a large online advertising system, adversaries may attempt to profit from the creation of low quality or harmful advertisements. In this paper, we present a large scale data mining effort that detects and blocks such adversarial advertisements for the benefit and safety of our users. Because both false positives and false negatives have high cost, our deployed system uses a tiered strategy combining automated and semi-automated methods to ensure reliable classification. We also employ strategies to address the challenges of learning from highly skewed data at scale, allocating the effort of human experts, leveraging domain expert knowledge, and independently assessing the effectiveness of our system."} {"_id": "2bdcc4dbf14e13d33740531ea8954463ca7e68a2", "title": "Social coding in GitHub: transparency and collaboration in an open software repository", "text": "Social applications on the web let users track and follow the activities of a large number of others regardless of location or affiliation. There is a potential for this transparency to radically improve collaboration and learning in complex knowledge-based activities. Based on a series of in-depth interviews with central and peripheral GitHub users, we examined the value of transparency for large-scale distributed collaborations and communities of practice. We find that people make a surprisingly rich set of social inferences from the networked activity information in GitHub, such as inferring someone else's technical goals and vision when they edit code, or guessing which of several similar projects has the best chance of thriving in the long term. Users combine these inferences into effective strategies for coordinating work, advancing technical skills and managing their reputation."} {"_id": "50341a2e4dec45165c75da37296ee7984b71e044", "title": "Searching for build debt: Experiences managing technical debt at Google", "text": "With a large and rapidly changing codebase, Google software engineers are constantly paying interest on various forms of technical debt. Google engineers also make efforts to pay down that debt, whether through special Fixit days, or via dedicated teams, variously known as janitors, cultivators, or demolition experts. We describe several related efforts to measure and pay down technical debt found in Google's BUILD files and associated dead code. We address debt found in dependency specifications, unbuildable targets, and unnecessary command line flags. These efforts often expose other forms of technical debt that must first be managed."} {"_id": "517d6e3999bd425069e45346045adcbd2d0c9299", "title": "Counterfactual reasoning and learning systems: the example of computational advertising", "text": "This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. 
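One concrete way to "predict the consequences of changes" from logged data is importance-weighted (inverse-propensity) estimation, a standard estimator in this counterfactual-reasoning literature. The sketch below is generic and illustrative, not the paper's code: it asks what click rate a new ad-placement policy would have earned on traffic logged under the old one, with all probabilities and rates invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 3                            # logged impressions, candidate ads
logging_probs = np.array([0.5, 0.3, 0.2])   # old policy's action distribution
actions = rng.choice(k, size=n, p=logging_probs)
clicks = rng.random(n) < np.array([0.02, 0.05, 0.03])[actions]

# Proposed policy to evaluate offline, without deploying it.
new_probs = np.array([0.2, 0.6, 0.2])
weights = new_probs[actions] / logging_probs[actions]   # importance weights
estimate = np.mean(weights * clicks)
print(f"estimated click rate under new policy: {estimate:.4f}")
# ground truth for comparison: 0.2*0.02 + 0.6*0.05 + 0.2*0.03 = 0.040
```

The reweighting corrects for the fact that the logs over-represent actions the old policy favored, which is exactly the causal-inference step the abstract alludes to.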
Such predictions allow both humans and algorithms to select the changes that would have improved the system performance. This work is illustrated by experiments on the ad placement system associated with the Bing search engine."} {"_id": "4dd7721248c5489e25f46f7ab78c7d0229a596d4", "title": "A Fully Integrated Reconfigurable Self-Startup RF Energy-Harvesting System With Storage Capability", "text": "This paper introduces a fully integrated RF energy-harvesting system. The system can simultaneously deliver the current demanded by external dc loads and store the extra energy in external capacitors during periods of extra output power. The design is fabricated in 0.18-\u03bcm CMOS technology, and the active chip area is 1.08 mm^2. The proposed self-startup system is reconfigurable, with an integrated LC matching network, an RF rectifier, and a power management/controller unit, which consumes 66\u2013157 nW. The required clock generation and the voltage reference circuit are integrated on the same chip. Duty-cycle control is used to operate at low input power levels that cannot provide the demanded output power. Moreover, the number of stages of the RF rectifier is reconfigurable to increase the efficiency of the available output power. For high available power, a secondary path is activated to charge an external energy storage element. The measured RF input power sensitivity is \u221214.8 dBm at a 1-V dc output."} {"_id": "eceedcee47d29c063ce0d36f8fca630b6d68f63f", "title": "HDTLR: A CNN based Hierarchical Detector for Traffic Lights", "text": "Reliable traffic light detection is a crucial component for autonomous driving in urban areas. This includes the extraction of direction arrows contained within the traffic lights, as an autonomous car will need this information for selecting the traffic light corresponding to its current lane. Current state-of-the-art traffic light detection systems are not able to provide such information. Within this work we present a hierarchical traffic light detection algorithm, which is able to detect traffic lights and determine their state and contained direction information within one CNN forward pass. This Hierarchical DeepTLR (HDTLR) outperforms current state-of-the-art traffic light detection algorithms in state-aware detection and can detect traffic lights with direction information down to a size of 4 pixels in width at a frequency of 12 frames per second."} {"_id": "78abaa22d3fdc0181e44a23c4a93ec220840ecd1", "title": "Implementation of behavior change techniques in mobile applications for physical activity.", "text": "BACKGROUND\nMobile applications (apps) for physical activity are popular and hold promise for promoting behavior change and reducing non-communicable disease risk. App marketing materials describe a limited number of behavior change techniques (BCTs), but apps may include unmarketed BCTs, which are important as well.\n\n\nPURPOSE\nTo characterize the extent to which BCTs have been implemented in apps from a systematic user inspection of apps.\n\n\nMETHODS\nTop-ranked physical activity apps (N=100) were identified in November 2013 and analyzed in 2014. BCTs were coded using a contemporary taxonomy following a user inspection of apps.\n\n\nRESULTS\nUsers identified an average of 6.6 BCTs per app and most BCTs in the taxonomy were not represented in any apps. 
The most common BCTs involved providing social support, information about others' approval, instructions on how to perform a behavior, demonstrations of the behavior, and feedback on the behavior. A latent class analysis of BCT configurations revealed that apps focused on providing support and feedback as well as support and education.\n\n\nCONCLUSIONS\nContemporary physical activity apps have implemented a limited number of BCTs and have favored BCTs with a modest evidence base over others with more established evidence of efficacy (e.g., social media integration for providing social support versus active self-monitoring by users). Social support is a ubiquitous feature of contemporary physical activity apps and differences between apps lie primarily in whether the limited BCTs provide education or feedback about physical activity."} {"_id": "756a8ce1bddd94cf166baadd17ff2b89834a23da", "title": "Future affective technology for autism and emotion communication.", "text": "People on the autism spectrum often experience states of emotional or cognitive overload that pose challenges to their interests in learning and communicating. Measurements taken from home and school environments show that extreme overload experienced internally, measured as autonomic nervous system (ANS) activation, may not be visible externally: a person can have a resting heart rate twice the level of non-autistic peers, while outwardly appearing calm and relaxed. The chasm between what is happening on the inside and what is seen on the outside, coupled with challenges in speaking and being pushed to perform, is a recipe for a meltdown that may seem to come 'out of the blue', but in fact may have been steadily building. Because ANS activation both influences and is influenced by efforts to process sensory information, interact socially, initiate motor activity, produce meaningful speech and more, deciphering the dynamics of ANS states is important for understanding and helping people on the autism spectrum. This paper highlights advances in technology that can comfortably sense and communicate ANS arousal in daily life, allowing new kinds of investigations to inform the science of autism while also providing personalized feedback to help individuals who participate in the research."} {"_id": "df8902f8acb0b30a230dfaa1ead91e27f123472c", "title": "Toward Massive Machine Type Cellular Communications", "text": "Cellular networks have been engineered and optimized for carrying ever-increasing amounts of mobile data, but over the last few years, a new class of applications based on machine-centric communications has begun to emerge. Automated devices such as sensors, tracking devices, and meters, often referred to as machine-to-machine (M2M) or machine-type communications (MTC), introduce an attractive revenue stream for mobile network operators, if a massive number of them can be efficiently supported. The novel technical challenges posed by MTC applications include increased overhead and control signaling as well as diverse application-specific constraints such as ultra-low complexity, extreme energy efficiency, critical timing, and continuous data-intensive uploading. This article explains the new requirements and challenges that large-scale MTC applications introduce, and provides a survey of key techniques for overcoming them. We focus on the potential of 4.5G and 5G networks to serve both the high data rate needs of conventional human-type communication (HTC) subscribers and the forecasted billions of new MTC devices. 
We also opine on attractive economic models that will enable this new class of cellular subscribers to grow to its full potential."} {"_id": "cdf1153e70045098bbc2cb43e3759751f8de03af", "title": "A class D output stage with zero dead time", "text": "An integrated class-D output stage has been realized with zero dead time, thereby removing one of the dominant sources of distortion in class-D amplifiers. Dead time is eliminated through proper dimensioning of the power transistor drivers and accurate matching of switch timing. Open-loop distortion of this output stage stays below 0.1% up to 35 W."} {"_id": "7e1c856f1e8116076248b228d85989ecb860ca23", "title": "Partially Observable Reinforcement Learning for Intelligent Transportation Systems", "text": "Intelligent Transportation Systems (ITS) have attracted the attention of researchers and the general public alike as a means to alleviate traffic congestion. Recently, the maturity of wireless technology has enabled a cost-efficient way to achieve ITS by detecting vehicles using Vehicle to Infrastructure (V2I) communications. Traditional ITS algorithms, in most cases, assume that every vehicle is observed, such as by a camera or a loop detector, but a V2I implementation would detect only those vehicles with wireless communications capability. We examine a family of transportation systems, which we will refer to as \u2018Partially Detected Intelligent Transportation Systems\u2019. An algorithm that can act well under a small detection rate is highly desirable due to gradual penetration rates of the underlying wireless technologies such as Dedicated Short Range Communications (DSRC) technology. Artificial Intelligence (AI) techniques for Reinforcement Learning (RL) are suitable tools for finding such an algorithm due to utilizing varied inputs and not requiring explicit analytic understanding or modeling of the underlying system dynamics. In this paper, we report an RL algorithm for partially observable ITS based on DSRC. The performance of this system is studied under different car flows, detection rates, and topologies of the road network. Our system is able to efficiently reduce the average waiting time of vehicles at an intersection, even with a low detection rate."} {"_id": "3e4004566f45ee28b3a1dd73ba67242d17985160", "title": "Fuzzy & Datamining based Disease Prediction Using K-NN Algorithm", "text": "Disease diagnosis is one of the most important applications of such systems, as disease is one of the leading causes of death all over the world. Almost all systems predicting disease use inputs from complex tests conducted in labs, and none of them predicts disease based on risk factors such as tobacco smoking, alcohol intake, age, family history, diabetes, hypertension, high cholesterol, physical inactivity, and obesity. Researchers have been using several data mining techniques to help health care professionals in the diagnosis of heart disease. K-Nearest-Neighbour (KNN) is one of the successful data mining techniques used in classification problems. However, it is less used in the diagnosis of heart disease patients. Recently, researchers have shown that combining different classifiers through voting can outperform single classifiers. This paper investigates applying KNN to help healthcare professionals in the diagnosis of disease, especially heart disease. It also investigates whether integrating voting with KNN can enhance its accuracy in the diagnosis of heart disease patients. 
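The voting scheme described in the preceding abstract is easy to prototype. Below is a minimal sketch, assuming scikit-learn and a stand-in dataset (the heart-disease records used in the paper are not reproduced here): three KNN classifiers with different k are combined by hard majority vote and compared against a single KNN.

    from sklearn.datasets import load_breast_cancer  # stand-in for a heart-disease table
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)

    def knn(k):
        # Distance-based classifiers need scaled features.
        return make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))

    ensemble = VotingClassifier([("knn%d" % k, knn(k)) for k in (3, 5, 7)], voting="hard")
    print("single KNN:", cross_val_score(knn(5), X, y, cv=5).mean())
    print("voting KNN:", cross_val_score(ensemble, X, y, cv=5).mean())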
The results show that applying KNN could achieve higher accuracy than a neural network ensemble in the diagnosis of heart disease patients. The results also show that applying voting could not enhance the KNN accuracy in the diagnosis of heart disease."} {"_id": "879f70a13aa7e461ec2425093f47475ac601a550", "title": "ENHANCEMENT STRATEGIES FOR FRAME-TO-FRAME UAS STEREO VISUAL ODOMETRY", "text": "Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements."} {"_id": "5a935692eca51f426e8cd2ac548e8e866631278f", "title": "An Empirical Analysis of Deep Network Loss Surfaces", "text": "The training of deep neural networks is a high-dimensional optimization problem with respect to the loss function of a model. Unfortunately, these functions are of high dimension and non-convex and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find local minima."} {"_id": "8dbad88d01e5f5d71e5748ee28247f12436a1797", "title": "Now it's personal: on abusive game design", "text": "In this paper, we introduce the concept of abusive game design as an attitude towards creating games -- an aesthetic practice that operates as a critique of certain conventionalisms in popular game design wisdom. We emphasize that abusive game design is, at its core, about spotlighting the dialogic relation between player and designer."} {"_id": "5648813f9ab3877b2a15b462663fdedd88d3c432", "title": "Classification of Multiple-Sentence Questions", "text": "Conventional QA systems cannot answer questions composed of two or more sentences. Therefore, we aim to construct a QA system that can answer such multiple-sentence questions. As the first stage, we propose a method for classifying multiple-sentence questions into question types. 
Specifically, we first extract the core sentence from a given question text. We use the core sentence and its question focus in question classification. The results of experiments show that the proposed method improves F-measure by 8.8% and accuracy by 4.4%."} {"_id": "222e9d19121c8bb4cb90af0bcbe71599c94c454c", "title": "Chemical-protein relation extraction with ensembles of SVM, CNN, and RNN models", "text": "Text mining the relations between chemicals and proteins is an increasingly important task. The CHEMPROT track at BioCreative VI aims to promote the development and evaluation of systems that can automatically detect the chemical-protein relations in running text (PubMed abstracts). This manuscript describes our submission, which is an ensemble of three systems, including a Support Vector Machine, a Convolutional Neural Network, and a Recurrent Neural Network. Their output is combined using a decision based on majority voting or stacking. Our CHEMPROT system obtained 0.7266 in precision and 0.5735 in recall for an F-score of 0.6410, demonstrating the effectiveness of machine learning-based approaches for automatic relation extraction from biomedical literature. Our submission achieved the highest performance in the task during the 2017 challenge."} {"_id": "44c5433047692f26ae518e5c659eecfcf9325f49", "title": "EasyConnect: A Management System for IoT Devices and Its Applications for Interactive Design and Art", "text": "Many Internet of Things (IoT) technologies have been used in applications for money flow, logistics flow, people flow, interactive art design, and so on. To manage these increasingly disparate devices and connectivity options, ETSI has specified an end-to-end machine-to-machine (M2M) system architecture for IoT applications. Based on this architecture, we develop an IoT EasyConnect system to manage IoT devices. In our approach, an IoT device is characterized by its \u201cfeatures\u201d (e.g., temperature, vibration, and display) that are manipulated by the network applications. If a network application handles the individual device features independently, then we can write a software module for each device feature, and the network application can be simply constructed by including these brick-like device feature modules. Based on the concept of device features, brick-like software modules can provide a simple and efficient mechanism to develop IoT device applications and interactions."} {"_id": "4b392162b7bda78308c205928911c42b7ce6a293", "title": "Inherent Resistance of Efficient ECC Designs against SCA Attacks", "text": "The Montgomery kP algorithm using Lopez-Dahab projective coordinates is a time- and energy-efficient way to perform the elliptic curve point multiplication. It can even provide resistance against simple power analysis attacks. Nevertheless, straightforward implementations of this algorithm are not resistant against differential power analysis attacks. But highly efficient designs, i.e. designs in which different functional units operate in parallel, can increase the resistance of ECC implementations against DPA. In this paper we demonstrate this fact on the examples of our 6 implementations of the Montgomery kP algorithm. 
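The regular operation schedule that gives the Montgomery kP algorithm its resistance to simple power analysis can be sketched in a few lines. This is a schematic outline only, not the paper's hardware design; point_add, point_double and infinity are placeholder group operations supplied by the caller.

    def montgomery_ladder(k, P, point_add, point_double, infinity):
        # One add and one double per key bit, regardless of the bit value,
        # so the sequence of operations alone reveals nothing about k.
        # point_add is assumed to handle the identity element (infinity).
        R0, R1 = infinity, P
        for bit in bin(k)[2:]:  # scan the key bits MSB -> LSB
            if bit == "1":
                R0, R1 = point_add(R0, R1), point_double(R1)
            else:
                R1, R0 = point_add(R0, R1), point_double(R0)
        return R0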
We perform horizontal DPA attacks to demonstrate that the resistance of the kP algorithm against side channel analysis attacks can be increased through its implementation as an efficient design."} {"_id": "a54eaafe60c3309169a976df7e204c5b066798c7", "title": "The Theory of Correlation Formulas and Their Application to Discourse Coherence", "text": "The Winograd Schema Challenge (WSC) was proposed as a measure of machine intelligence. It boils down to anaphora resolution, a task familiar from computational linguistics. Research in linguistics and AI has coalesced around discourse coherence as the critical factor in solving this task, and the process of establishing discourse coherence relies fundamentally on world and commonsense knowledge. In this thesis, we build on an approach to establishing coherence on the basis of correlation. The utility of this approach lies in its conceptual clarity and ability to flexibly represent commonsense knowledge. We work to fill some conceptual holes with the Correlation Calculus approach. First, understanding the calculus in a vacuum is not straightforward unless it has a precise semantics. Second, existing demonstrations of the Correlation Calculus on Winograd Schema Challenge problems have not been linguistically credible. We hope to ameliorate some\u2014but by no means all\u2014of the outstanding issues with the Correlation Calculus. We do so first by providing a precise semantics of the calculus, which relates our intuitive understanding of correlation with a precise notion involving probabilities. Second, we formulate the establishment of discourse coherence by correlation formulas within the framework of Discourse Representation Theory. This provides a more complete and linguistically credible account of the relationship between the Correlation Calculus, discourse coherence, and Winograd Schema Challenge problems."} {"_id": "ac7dbcf7b1dba7eb04afc7cde3a07a30b113ba06", "title": "Game-Related Statistics that Discriminated Winning, Drawing and Losing Teams from the Spanish Soccer League.", "text": "The aim of the present study was to analyze men's football competitions, trying to identify which game-related statistics allow one to discriminate between winning, drawing and losing teams. The sample used corresponded to 380 games from the 2008-2009 season of the Spanish Men's Professional League. The game-related statistics gathered were: total shots, shots on goal, effectiveness, assists, crosses, offsides committed and received, corners, ball possession, crosses against, fouls committed and received, corners against, yellow and red cards, and venue. A univariate (t-test) and multivariate (discriminant) analysis of the data was done. The results showed that winning teams had averages that were significantly higher for the following game statistics: total shots (p < 0.001), shots on goal (p < 0.01), effectiveness (p < 0.01), assists (p < 0.01), offsides committed (p < 0.01) and crosses against (p < 0.01). Losing teams had significantly higher averages in the variables crosses (p < 0.01), offsides received (p < 0.01) and red cards (p < 0.01). Discriminant analysis allowed us to conclude the following: the variables that discriminate between winning, drawing and losing teams were the total shots, shots on goal, crosses, crosses against, ball possession and venue. 
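A discriminant analysis of this kind is straightforward to reproduce. The sketch below uses scikit-learn on made-up match rows (the study's data are not reproduced here) purely to show the mechanics of discriminating match outcomes from game statistics.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical per-match rows: total_shots, shots_on_goal, crosses,
    # crosses_against, ball_possession (%), venue (1 = home).
    X = np.array([[15, 7, 14, 20, 54, 1], [13, 6, 15, 22, 51, 1], [14, 8, 13, 19, 55, 0],
                  [10, 4, 19, 17, 49, 0], [11, 4, 21, 16, 50, 1], [10, 5, 18, 18, 48, 0],
                  [8, 2, 24, 13, 45, 0], [9, 3, 23, 14, 46, 1], [7, 2, 25, 12, 44, 0]])
    y = np.array([2, 2, 2, 1, 1, 1, 0, 0, 0])  # 2 = win, 1 = draw, 0 = loss

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(lda.predict([[12, 6, 18, 18, 52, 1]]))  # predicted outcome class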
Coaches and players should be aware of these different profiles in order to increase knowledge about the cognitive and motor demands of the game and, therefore, to evaluate specificity when planning practice and games. Key points: this paper increases the knowledge about soccer match analysis, gives normative values to establish practice and match objectives, and gives application ideas to connect research with coaches' practice."} {"_id": "a11f80ce00cac50998ff40fca4af58c58b9e545f", "title": "Free radicals and antioxidants in human health: current status and future prospects.", "text": "Free radicals and related species have attracted a great deal of attention in recent years. They are mainly derived from oxygen (reactive oxygen species/ROS) and nitrogen (reactive nitrogen species/RNS), and are generated in our body by various endogenous systems, exposure to different physicochemical conditions or pathophysiological states. Free radicals can adversely alter lipids, proteins and DNA and have been implicated in aging and a number of human diseases. Lipids are highly prone to free radical damage resulting in lipid peroxidation that can lead to adverse alterations. Free radical damage to protein can result in loss of enzyme activity. Damage caused to DNA can result in mutagenesis and carcinogenesis. Redox signaling is a major area of free radical research that is attracting attention. Nature has endowed us with protective antioxidant mechanisms: superoxide dismutase (SOD), catalase, glutathione, glutathione peroxidases and reductase, vitamin E (tocopherols and tocotrienols), vitamin C, etc., apart from many dietary components. There is epidemiological evidence correlating higher intake of components/foods with antioxidant abilities with a lower incidence of various human morbidities or mortalities. Current research reveals the different potential applications of antioxidant/free radical manipulations in prevention or control of disease. Natural products from dietary components such as Indian spices and medicinal plants are known to possess antioxidant activity. Newer and future approaches include gene therapy to produce more antioxidants in the body, genetically engineered plant products with higher levels of antioxidants, synthetic antioxidant enzymes (SOD mimics), novel biomolecules and the use of functional foods enriched with antioxidants."} {"_id": "800979161bffd5b8534bf7874f1eefec531ccda3", "title": "Cultivate Self-Efficacy for Personal and Organizational Effectiveness", "text": "Bandura, A. (2000). Cultivate self-efficacy for personal and organizational effectiveness. In E."} {"_id": "54ee63cdeca5b7673beea700e2e46e1c95a504e9", "title": "Power LDMOS with novel STI profile for improved Rsp, BVdss, and reliability", "text": "The profile of shallow trench isolation (STI) is designed to improve LDMOS specific on-resistance (Rsp), BVDSS, safe operating area (SOA), and hot carrier lifetimes (HCL) in an integrated BiCMOS power technology. Silicon etch, liner oxidation and CMP processes are tuned to improve the tradeoffs in a power technology showing significant improvement to both p-channel and n-channel Rsp compared to devices fabricated with the STI profile inherited from the original submicron CMOS platform. Extensive TCAD and experiments were carried out to gain insight into the physical mechanisms and further improve device performance after STI process optimization. 
The final process and device structures yield SOAs that are limited only by thermal constraints up to rated voltages."} {"_id": "bb93e5d1cb062157b19764b78fbfd7911ecbac5a", "title": "A Hygroscopic Sensor Electrode for Fast Stabilized Non-Contact ECG Signal Acquisition", "text": "A capacitive electrocardiography (cECG) technique using a non-invasive ECG measuring technology that does not require direct contact between the sensor and the skin has attracted much interest. The system encounters several challenges when the sensor electrode and subject's skin are weakly coupled. Because there is no direct physical contact between the subject and any grounding point, there is no discharge path for the built-up electrostatic charge. Consequently, the electrostatic charge build-up can temporarily prevent the ECG signal from being clearly visible; a stabilization period (3-15 min) is required for the measurement of a clean, stable ECG signal at low humidity levels (below 55% relative humidity). Therefore, to obtain a clear ECG signal without noise and to reduce the ECG signal stabilization time to within 2 min in a dry ambient environment, we have developed a fabric electrode with embedded polymer (FEEP). The designed hygroscopic FEEP has an embedded superabsorbent polymer layer. The principle of FEEP as a conductive electrode is to provide humidity to the capacitive coupling to ensure strong coupling and to allow for the measurement of a stable, clear biomedical signal. The evaluation results show that hygroscopic FEEP is capable of rapidly measuring high-accuracy ECG signals with a higher SNR."} {"_id": "3f2f4e017ad87e57945324877134fe1d0aab1c9f", "title": "A 3.9-ps RMS Precision Time-to-Digital Converter Using Ones-Counter Encoding Scheme in a Kintex-7 FPGA", "text": "A 3.9-ps time-interval rms precision and 277-M events/second measurement throughput time-to-digital converter (TDC) is implemented in a Xilinx Kintex-7 field programmable gate array (FPGA). Unlike previous work, the TDC is achieved with a multichain tapped-delay line (TDL) followed by a ones-counter encoder. The four normal TDLs merged together make the TDC bins very small, so that the time precision can be significantly improved. The ones-counter encoder naturally applies global bubble error correction to the output of the TDL; thus, the TDC design is relatively simple even when using FPGAs made with current advanced process technology. The TDC implementation is a generally applicable method that can simultaneously achieve high time precision and high measurement throughput."} {"_id": "49cc527ee351f1a8e087806730936cabd5e6034b", "title": "Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors", "text": "Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important in a wide range of fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. 
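For readers who want a concrete baseline of the kind such benchmarks evaluate, a minimal background-subtraction loop with OpenCV might look as follows; the file name is hypothetical, and MOG2 is just one classic BS algorithm, not necessarily one of the implementations compared in the paper.

    import cv2

    cap = cv2.VideoCapture("remote_scene_ir.avi")  # hypothetical IR sequence
    mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = mog2.apply(frame)      # 255 = foreground, 127 = shadow, 0 = background
        fg = cv2.medianBlur(fg, 5)  # suppress isolated speckle noise
        # compare fg against the pixel-wise ground-truth mask here
    cap.release()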
Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video sequences but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR."} {"_id": "9a80c4089e2ef8066b4b3b63578543b5b706603e", "title": "Can You Spot the Fakes?: On the Limitations of User Feedback in Online Social Networks", "text": "Online social networks (OSNs) are appealing platforms for spammers and fraudsters, who typically use fake or compromised accounts to connect with and defraud real users. To combat such abuse, OSNs allow users to report fraudulent profiles or activity. The OSN can then use reporting data to review and/or limit activity of reported accounts. Previous authors have suggested that an OSN can augment its takedown algorithms by identifying a \u201ctrusted set\u201d of users whose reports are weighted more heavily in the disposition of flagged accounts. Such identification would allow the OSN to improve both speed and accuracy of fake account detection and thus reduce the impact of spam on users. In this work we provide the first public, data-driven assessment of whether the above assumption is true: are some users better at reporting than others? Specifically, is reporting skill both measurable, i.e., possible to distinguish from random guessing; and repeatable, i.e., persistent over repeated sampling? Our main contributions are to develop a statistical framework that describes these properties and to apply this framework to data from LinkedIn, the professional social network. Our data includes member reports of fake profiles as well as the more voluminous, albeit weaker, signal of member responses to connection requests. We find that members demonstrating measurable, repeatable skill in identifying fake profiles do exist but are rare: at most 2.4% of those reporting fakes and at most 1.3% of those rejecting connection requests. We conclude that any reliable \u201ctrusted set\u201d of members will be too small to have noticeable impact on spam metrics."} {"_id": "dbc6fa024c0169a87a33b97634d061a96492db6d", "title": "How to practice person\u2010centred care: A conceptual framework", "text": "BACKGROUND\nGlobally, health-care systems and organizations are looking to improve health system performance through the implementation of a person-centred care (PCC) model. While numerous conceptual frameworks for PCC exist, a gap remains in practical guidance on PCC implementation.\n\n\nMETHODS\nBased on a narrative review of the PCC literature, a generic conceptual framework was developed in collaboration with a patient partner, which synthesizes evidence, recommendations and best practice from existing frameworks and implementation case studies. The Donabedian model for health-care improvement was used to classify PCC domains into the categories of "Structure," "Process" and "Outcome" for health-care quality improvement.\n\n\nDISCUSSION\nThe framework emphasizes the structural domain, which relates to the health-care system or context in which care is delivered, providing the foundation for PCC, and influencing the processes and outcomes of care. 
Structural domains identified include: the creation of a PCC culture across the continuum of care; co-designing educational programs, as well as health promotion and prevention programs with patients; providing a supportive and accommodating environment; and developing and integrating structures to support health information technology and to measure and monitor PCC performance. Process domains describe the importance of cultivating communication and respectful and compassionate care; engaging patients in managing their care; and integration of care. Outcome domains identified include: access to care and Patient-Reported Outcomes.\n\n\nCONCLUSION\nThis conceptual framework provides a step-wise roadmap to guide health-care systems and organizations in the provision of PCC across various health-care sectors."} {"_id": "7314be5cd836c8f06bd1ecab565b00b65259eac6", "title": "Probabilistic topic models", "text": "Surveying a suite of algorithms that offer a solution to managing large document archives."} {"_id": "c951df3db2e0c515bdbdf681137cf6705ef61c0d", "title": "SNOW 2014 Data Challenge: Assessing the Performance of News Topic Detection Methods in Social Media", "text": "The SNOW 2014 Data Challenge aimed at creating a public benchmark and evaluation resource for the problem of topic detection in streams of social content. In particular, given a set of tweets spanning a time interval of interest, the Challenge required the extraction of the most significant news topics in short timeslots within the selected interval. Here, we provide details with respect to the Challenge definition, the data collection and evaluation process, and the results achieved by the 11 teams that participated in it, along with a concise retrospective analysis of the main conclusions and arising issues."} {"_id": "d582d6d459601eabeac24f4bb4231c3e93882259", "title": "Sensing Trending Topics in Twitter", "text": "Online social and news media generate rich and timely information about real-world events of all kinds. However, the huge amount of data available, along with the breadth of the user base, requires a substantial effort of information filtering to successfully drill down to relevant topics and events. Trending topic detection is therefore a fundamental building block to monitor and summarize information originating from social sources. There is a wide variety of methods and variables, and they greatly affect the quality of results. We compare six topic detection methods on three Twitter datasets related to major events, which differ in their time scale and topic churn rate. We observe how the nature of the event considered, the volume of activity over time, the sampling procedure and the pre-processing of the data all greatly affect the quality of detected topics, which also depends on the type of detection method used. We find that standard natural language processing techniques can perform well for social streams on very focused topics, but novel techniques designed to mine the temporal distribution of concepts are needed to handle more heterogeneous streams containing multiple stories evolving in parallel. 
One of the novel topic detection methods we propose, based on n-gram co-occurrence and topic ranking, consistently achieves the best performance across all these conditions, thus being more reliable than other state-of-the-art techniques."} {"_id": "1ccf386465081b6f8cad5a753827de7576dfe436", "title": "Collective entity linking in web text: a graph-based method", "text": "Entity Linking (EL) is the task of linking name mentions in Web text with their referent entities in a knowledge base. Traditional EL methods usually link name mentions in a document by assuming them to be independent. However, there is often additional interdependence between different EL decisions, i.e., the entities in the same document should be semantically related to each other. In these cases, Collective Entity Linking, in which the name mentions in the same document are linked jointly by exploiting the interdependence between them, can improve the entity linking accuracy.\n This paper proposes a graph-based collective EL method, which can model and exploit the global interdependence between different EL decisions. Specifically, we first propose a graph-based representation, called Referent Graph, which can model the global interdependence between different EL decisions. Then we propose a collective inference algorithm, which can jointly infer the referent entities of all name mentions by exploiting the interdependence captured in Referent Graph. The key benefit of our method comes from: 1) The global interdependence model of EL decisions; 2) The purely collective nature of the inference algorithm, in which evidence for related EL decisions can be reinforced into high-probability decisions. Experimental results show that our method can achieve significant performance improvement over the traditional EL methods."} {"_id": "f4e2fff95ba8d469fed09c8bd7efa0604ece3be3", "title": "Structural Architectures for a Deployable Wideband UHF Antenna", "text": "This paper explores concepts for a wideband deployable antenna suitable for small satellites. The general approach taken was to closely couple antenna theory and structural mechanics, to produce a deployable antenna that is considered efficient in both fields. Approaches that can be deployed using stored elastic strain energy have been favored over those that require powered actuators or environmental effects to enforce deployment. Two types of concepts were developed: thin shell structure and pantograph. These concepts cover four antenna topologies: crossed log periodic, conical log spiral, helical and quadrifilar helix. Of these, the conical log spiral antenna and the accompanying deployment concepts are determined to be the most promising approaches that warrant further study."} {"_id": "ed5d20511f9087309f91fcd6504960baf101bce3", "title": "Global climate change and terrestrial net primary production", "text": "A process-based model was used to estimate global patterns of net primary production and soil nitrogen cycling for contemporary climate conditions and current atmospheric CO2 concentration. Over half of the global annual net primary production was estimated to occur in the tropics, with most of the production attributable to tropical evergreen forest. The effects of CO2 doubling and associated climate changes were also explored. 
The responses in tropical and dry temperate ecosystems were dominated by CO2, but those in northern and moist temperate ecosystems reflected the effects of temperature on nitrogen availability."} {"_id": "bbe179f855d91feb6a832656c2e15dc679c6d2d0", "title": "Low-cost backdrivable motor control based on feed-forward/feed-back friction compensation", "text": "A low-cost but accurate and backdrivable motor control system running on a DC motor and a harmonic drive gear is proposed. It compensates for the internal friction of the gear and the counter-electromotive torque by combining a model-based feed-forward method and a disturbance observer using a cheap torque sensor. A complementary use of those techniques lowers the requirements on their performance, i.e. precision, bandwidth, etc., while retaining compliant behavior against external torque. A 2-DOF servo controller is also built upon the system in order to simultaneously achieve smooth responses and robust convergence to the reference."} {"_id": "22eb5f0400babf66c76128e475dc2baf1606dc6d", "title": "Submaximal exercise testing: clinical application and interpretation.", "text": "Compared with maximal exercise testing, submaximal exercise testing appears to have greater applicability to physical therapists in their role as clinical exercise specialists. This review contrasts maximal and submaximal exercise testing. Two major categories of submaximal tests (ie, predictive and performance tests) and their relative merits are described. Predictive tests are submaximal tests that are used to predict maximal aerobic capacity. Performance tests involve measuring the responses to standardized physical activities that are typically encountered in everyday life. To maximize the validity and reliability of data obtained from submaximal tests, physical therapists are cautioned to apply the tests selectively based on their indications; to adhere to methods, including the requisite number of practice sessions; and to use measurements such as heart rate, blood pressure, exertion, and pain to evaluate test performance and to safely monitor patients."} {"_id": "90bd2446523982c62c857c167447636924e03557", "title": "Control of Grid Connected PV Array Using P&O MPPT Algorithm", "text": "Renewable energy is becoming more mainstream and accessible. This has been made possible due to an increase in environmental awareness coupled with the popular demand to cut back on greenhouse emissions. We in this project propose a grid-connected PV system. The aim of the project is to implement a complete distributed energy resource system (DER). The project will feature a PV module, which will be controlled and optimized by means of a maximum power point tracking (MPPT) algorithm. A boost converter along with a single-phase grid-tie inverter will be used to increase the output voltage and to convert it to AC. A phase-locked loop circuit will be used to integrate the single-phase inverter with the grid. A control methodology consisting of PI controllers is employed for operating the PV at the MPPT point by controlling the switching of the boost converter and also for the operation of the single-phase inverter and its integration with the grid. The parameters of these controllers are tuned to give the best results. This will be followed by a detailed mathematical and engineering analysis for the simulated results. 
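The perturb-and-observe (P&O) MPPT rule named in the preceding abstract reduces to a few lines of logic. The sketch below is a generic illustration of P&O, with signal names and step size chosen for the example rather than taken from the paper.

    def perturb_and_observe(v, i, prev_v, prev_p, v_ref, step=0.5):
        # Climb the P-V curve: keep perturbing the voltage command in the
        # direction that increased power, and reverse when power drops.
        p = v * i
        if p >= prev_p:
            v_ref += step if v > prev_v else -step
        else:
            v_ref += -step if v > prev_v else step
        return v_ref, v, p  # new command plus state for the next cycle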
The validity of the proposed scheme will be verified by simulation using the PSIM software."} {"_id": "1f671ab8b6efdff792a983afbed6eab444e673ef", "title": "Learning analytics: envisioning a research discipline and a domain of practice", "text": "Learning analytics are rapidly being implemented in different educational settings, often without the guidance of a research base. Vendors incorporate analytics practices, models, and algorithms from data mining, business intelligence, and the emerging \"big data\" fields. Researchers, in contrast, have built up a substantial base of techniques for analyzing discourse, social networks, sentiments, predictive models, and semantic content (i.e., \"intelligent\" curriculum). In spite of the currently limited knowledge exchange and dialogue between researchers, vendors, and practitioners, existing learning analytics implementations indicate significant potential for generating novel insight into learning and vital educational practices. This paper presents an integrated and holistic vision for advancing learning analytics as a research discipline and a domain of practices. Potential areas of collaboration and overlap are presented with the intent of increasing the impact of analytics on teaching, learning, and the education system."} {"_id": "41563b2830005e93a2ce2e612d5522aa519f1537", "title": "[POSTER] Realtime Shape-from-Template: System and Applications", "text": "An important yet unsolved problem in computer vision and Augmented Reality (AR) is to compute the 3D shape of nonrigid objects from live 2D videos. When the object's shape is provided in a rest pose, this is the Shape-from-Template (SfT) problem. Previous realtime SfT methods require simple, smooth templates, such as flat sheets of paper that are densely textured, and which deform in simple, smooth ways. We present a realtime SfT framework that handles generic template meshes, complex deformations and most of the difficulties present in real imaging conditions. Achieving this has required new, fast solutions to the two core sub-problems: robust registration and 3D shape inference. Registration is achieved with what we call Deformable Render-based Block Matching (DRBM): a highly-parallel solution which densely matches a time-varying render of the object to each video frame. We then combine matches from DRBM with physical deformation priors and perform shape inference, which is done by quickly solving a sparse linear system with a Geometric Multi-Grid (GMG)-based method. On a standard PC we achieve up to 21 fps depending on the object. Source code will be released."} {"_id": "6108ee29143606dfe2028dc92da2393693f17a79", "title": "Fine-Grained Categorization and Dataset Bootstrapping Using Deep Metric Learning with Humans in the Loop", "text": "Existing fine-grained visual categorization methods often suffer from three challenges: lack of training data, large number of fine-grained categories, and high intra-class vs. low inter-class variance. In this work we propose a generic iterative framework for fine-grained categorization and dataset bootstrapping that handles these three challenges. Using deep metric learning with humans in the loop, we learn a low-dimensional feature embedding with anchor points on manifolds for each category. These anchor points capture intra-class variances and remain discriminative between classes. In each round, images with high confidence scores from our model are sent to humans for labeling. 
By comparing with exemplar images, labelers mark each candidate image as either a \"true positive\" or a \"false positive.\" True positives are added into our current dataset and false positives are regarded as \"hard negatives\" for our metric learning model. Then the model is retrained with an expanded dataset and hard negatives for the next round. To demonstrate the effectiveness of the proposed framework, we bootstrap a fine-grained flower dataset with 620 categories from Instagram images. The proposed deep metric learning scheme is evaluated on both our dataset and the CUB-200-2011 Birds dataset. Experimental evaluations show significant performance gain using dataset bootstrapping and demonstrate state-of-the-art results achieved by the proposed deep metric learning methods."} {"_id": "5ce97d9f83e10c5a04c8130143c12a7682a69e3e", "title": "A 1.2 V Reactive-Feedback 3.1\u201310.6 GHz Low-Noise Amplifier in 0.13 $\\mu{\\hbox {m}}$ CMOS", "text": "A 15.1 dB gain, 2.1 dB (min.) noise figure low-noise amplifier (LNA) fabricated in 0.13 \u03bcm CMOS operates across the entire 3.1-10.6 GHz ultrawideband (UWB). Noise figure variation over the band is limited to 0.43 dB. Reactive (transformer) feedback reduces the noise figure, stabilizes the gain, and sets the terminal impedances over the desired bandwidth. It also provides a means of separating ESD protection circuitry from the RF input path. Bias current-reuse limits power consumption of the 0.87 mm2 IC to 9 mW from a 1.2 V supply. Comparable measured results are presented from both packaged and wafer-probed test samples."} {"_id": "cbfe59e455878db23e222ce59b39bc8988b3a614", "title": "Advances in Reversed Nested Miller Compensation", "text": "The use of two frequency compensation schemes for three-stage operational transconductance amplifiers, namely the reversed nested Miller compensation with nulling resistor (RN-MCNR) and reversed active feedback frequency compensation (RAFFC), is presented in this paper. The techniques are based on the basic RNMC and show an inherent advantage over traditional compensation strategies, especially for heavy capacitive loads. Moreover, they are implemented without entailing extra transistors, thus saving circuit complexity and power consumption. A well-defined design procedure, introducing phase margin as the main design parameter, is also developed for each solution. To verify the effectiveness of the techniques, two amplifiers have been fabricated in a standard 0.5-\u03bcm CMOS process. Experimental measurements are found in good agreement with theoretical analysis and show an improvement in small-signal and large-signal amplifier performances. Finally, an analytical comparison with the nonreversed counterparts topologies, which shows the superiority of the proposed solutions, is also included."} {"_id": "ee3d30287b87272c5d6efc54d4da627a59e0a816", "title": "Grammar Inference Using Recurrent Neural Networks", "text": "This paper compares the performance of two recurrent neural network models learning the grammar of simple English sentences and introduces a method for creating useful negative training examples from a set of grammatical sentences. The learning task is cast as a classification problem as opposed to a prediction problem; networks are taught to classify a given sequence as grammatical or ungrammatical. The two neural network models compared are the simple architecture proposed by Elman in [1], and the Long Short-Term Memory (LSTM) network from [4]. 
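To make the classification setup concrete, the two model families can be sketched in PyTorch as follows; layer sizes and names are illustrative and not taken from the paper.

    import torch.nn as nn

    class SequenceClassifier(nn.Module):
        # cell="elman" uses a simple (Elman) RNN; cell="lstm" swaps in an LSTM.
        def __init__(self, vocab_size, hidden=32, cell="elman"):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, 16)
            rnn = nn.RNN if cell == "elman" else nn.LSTM
            self.rnn = rnn(16, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)  # grammatical vs. ungrammatical

        def forward(self, tokens):  # tokens: (batch, seq_len) integer ids
            states, _ = self.rnn(self.embed(tokens))
            return self.head(states[:, -1])  # classify from the final hidden state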
For comparison, a Naive Bayes (NB) classifier is also trained with bigrams to accomplish this task. Only the Elman network learns the training data better than NB, and neither network generalizes better than NB on sentences longer than the training sentences. The LSTM network does not generalize as well as the Elman network on a simple regular grammar, but shows comparable ability to generalize knowledge of English grammar."} {"_id": "12c6f3ae8f20a1473a89b9cbb82d0f02275ea62b", "title": "Hand detection using multiple proposals", "text": "We describe a two-stage method for detecting hands and their orientation in unconstrained images. The first stage uses three complementary detectors to propose hand bounding boxes. Each bounding box is then scored by the three detectors independently, and a second-stage classifier is learnt to compute a final confidence score for the proposals using these features. We make the following contributions: (i) we add context-based and skin-based proposals to a sliding-window shape-based detector to increase recall; (ii) we develop a new method of non-maximum suppression based on super-pixels; and (iii) we introduce a fully annotated hand dataset for training and testing. We show that the hand detector exceeds the state of the art on two public datasets, including the PASCAL VOC 2010 human layout challenge."} {"_id": "3c76b9192e0828c87ce8d2b8aaee197d9700dd68", "title": "Serendipity: Finger Gesture Recognition using an Off-the-Shelf Smartwatch", "text": "Previous work on muscle activity sensing has leveraged specialized sensors such as electromyography and force sensitive resistors. While these sensors show great potential for detecting finger/hand gestures, they require additional hardware that adds to the cost and user discomfort. Past research has utilized sensors on commercial devices, focusing on recognizing gross hand gestures. In this work we present Serendipity, a new technique for recognizing unremarkable and fine-motor finger gestures using integrated motion sensors (accelerometer and gyroscope) in off-the-shelf smartwatches. Our system demonstrates the potential to distinguish 5 fine-motor gestures like pinching, tapping and rubbing fingers with an average F1-score of 87%. Our work is the first to explore the feasibility of using solely motion sensors on everyday wearable devices to detect fine-grained gestures. This promising technology can be deployed today on current smartwatches and has the potential to be applied to cross-device interactions, or as a tool for research in fields involving finger and hand motion."} {"_id": "6258323b0ddedd0892febb36c1772a10820e0b0c", "title": "Using Ontological Engineering to Overcome Common AI-ED Problems", "text": "This paper discusses long-term prospects of AI-ED research with the aim of giving a clear view of what we need for further promotion of the research from both the AI and ED points of view. An analysis of the current status of AI-ED research is done in the light of intelligence, conceptualization, standardization and theory-awareness. Following this, an ontology-based architecture with appropriate ontologies is proposed. Ontological engineering of IS/ID is next discussed, followed by a road map towards an ontology-aware authoring system. 
Heuristic design patterns and XML-based documentation are also discussed."} {"_id": "750c3a9c6814bab2b6cb93fa25c7b91e6929fae2", "title": "Plant/microbe cooperation for electricity generation in a rice paddy field", "text": "Soils are rich in organics, particularly those that support growth of plants. These organics are possible sources of sustainable energy, and a microbial fuel cell (MFC) system can potentially be used for this purpose. Here, we report the application of an MFC system to electricity generation in a rice paddy field. In our system, graphite felt electrodes were used; an anode was set in the rice rhizosphere, and a cathode was in the flooded water above the rhizosphere. It was observed that electricity generation (as high as 6\u00a0mW/m2, normalized to the anode projection area) was sunlight dependent and exhibited circadian oscillation. Artificial shading of rice plants in the daytime inhibited the electricity generation. In the rhizosphere, rice roots penetrated the anode graphite felt where specific bacterial populations occurred. Supplementation to the anode region with acetate (one of the major root-exuded organic compounds) enhanced the electricity generation in the dark. These results suggest that the paddy-field electricity-generation system was an ecological solar cell in which the plant photosynthesis was coupled to the microbial conversion of organics to electricity."} {"_id": "eeb28f84609be8a64c9665eabcfd73c743fa5aa9", "title": "Privacy and trust in Facebook photo sharing: age and gender differences", "text": "Purpose \u2013 The purpose of this paper is to examine gender and age differences regarding various aspects of privacy, trust, and activity on one of the most popular Facebook activities \u2013 \u201cphoto sharing.\u201d Design/methodology/approach \u2013 The data were collected using an online survey hosted by a web-based survey service for three weeks during December 2014-January 2015. The target audience comprised Facebook users over 18 years engaged in sharing their photos on the platform. Findings \u2013 Women and young Facebook users are significantly more concerned about the privacy of their shared photos. Meanwhile, users from older age groups are less active in using the site, in sharing photos, and in taking privacy-related protective measures. Interestingly, despite having more privacy concerns, young Facebook users display higher trust levels toward the platform than older users. Overall, in the study, there was an extremely significant difference in privacy attitudes among people under and over 35 years of age. Originality/value \u2013 The main contribution of this study is new knowledge regarding the gender and age differences in various privacy-related aspects, trust, and activity. Findings from the study broaden the overall understanding of how these issues positively/negatively influence the photo-sharing activity on Facebook."} {"_id": "18df2b9da1cd6d9a412c45ed99fdc4a608c4c4bd", "title": "Combining Generational and Conservative Garbage Collection: Framework and Implementations", "text": "Two key ideas in garbage collection are generational collection and conservative pointer-finding. Generational collection and conservative pointer-finding are hard to use together, because generational collection is usually expressed in terms of copying objects, while conservative pointer-finding precludes copying. We present a new framework for defining garbage collectors. 
When applied to generational collection, it generalizes the notion of younger/older to a partial order. It can describe traditional generational and conservative techniques, and lends itself to combining different techniques in novel ways. We study in particular two new garbage collectors inspired by this framework. Both these collectors use conservative pointer-finding. The first one is based on a rewrite of an existing trace-and-sweep collector to use one level of generation. The second one has a single parameter, which controls how objects are partitioned into generations: the value of this parameter can be changed dynamically with no overhead. We have implemented both collectors and present measurements of their performance in practice."} {"_id": "bdac3841eb5a6d239a807c2673f2463db30968d5", "title": "Delineation of line patterns in images using B-COSFIRE filters", "text": "Delineation of line patterns in images is a basic step required in various applications such as blood vessel detection in medical images, segmentation of rivers or roads in aerial images, detection of cracks in walls or pavements, etc. In this paper we present trainable B-COSFIRE filters, which are a model of some neurons in area V1 of the primary visual cortex, and apply them to the delineation of line patterns in different kinds of images. B-COSFIRE filters are trainable as their selectivity is determined in an automatic configuration process given a prototype pattern of interest. They are configurable to detect any preferred line structure (e.g. segments, corners, cross-overs, etc.), and thus usable for automatic data representation learning. We carried out experiments on two data sets, namely a line-network data set from INRIA and a data set of retinal fundus images named IOSTAR. The results that we achieved confirm the robustness of the proposed approach and its effectiveness in the delineation of line structures in different kinds of images."} {"_id": "d2044695b0cc9c01b07e81ecf27b19c4efb9e23c", "title": "Heuristics for High-Utility Local Process Model Mining", "text": "Local Process Models (LPMs) describe structured fragments of process behavior occurring in the context of less structured business processes. In contrast to traditional support-based LPM discovery, which aims to generate a collection of process models that describe highly frequent behavior, High-Utility Local Process Model (HU-LPM) discovery aims to generate a collection of process models that provide useful business insights by specifying a utility function. Mining LPMs is a computationally expensive task, because of the large search space of LPMs. In support-based LPM mining, the search space is constrained by making use of the property that support is anti-monotonic. We show that in general, we cannot assume a provided utility function to be anti-monotonic; therefore, the search space of HU-LPMs cannot be reduced without loss. We propose four heuristic methods to speed up the mining of HU-LPMs while still being able to discover useful HU-LPMs. We demonstrate their applicability on three real-life data sets."} {"_id": "fc379ad7fbc905e246787b73332b8d7b647444c7", "title": "Nurses' response time to call lights and fall occurrences.", "text": "Nurses respond to fallers' call lights more quickly than they do to lights initiated by non-fallers. 
The nurses' responsiveness to call lights could be a compensatory mechanism in responding to the fall prevalence on the unit."} {"_id": "8dee883a7ff04379677e08093b8abcb5378923b0", "title": "Designing an Interactive Messaging and Reminder Display for Elderly", "text": "Despite the wealth of information and communication technology in society today, there appears to be a lack of acceptable information services for the growing elderly population in need of care. Acceptability is not only related to human factors such as button size and legibility, but rather relates to perceived value and harmony in relation to existing living patterns. This paper describes the design of an asynchronous interactive communication system based upon a bulletin board metaphor. A panel of end-users was involved in various stages of the design process. To improve ease of use, functionality exposed to elderly users is limited, while caregivers are given extended control. A pilot field study with a working prototype showed a high degree of user acceptance. The user-centered approach resulted in a design concept that was acceptable for the elderly participants."} {"_id": "062a4873b34c36e99028dd500506de89e70a3b38", "title": "A generalized hidden Markov model with discriminative training for query spelling correction", "text": "Query spelling correction is a crucial component of modern search engines. Existing methods in the literature for search query spelling correction have two major drawbacks. First, they are unable to handle certain important types of spelling errors, such as concatenation and splitting. Second, they cannot efficiently evaluate all the candidate corrections due to the complex form of their scoring functions, and a heuristic filtering step must be applied to select a working set of top-K most promising candidates for final scoring, leading to non-optimal predictions. In this paper we address both limitations and propose a novel generalized Hidden Markov Model with discriminative training that can not only handle all the major types of spelling errors, including splitting and concatenation errors, in a single unified framework, but also efficiently evaluate all the candidate corrections to ensure the finding of a globally optimal correction. Experiments on two query spelling correction datasets demonstrate that the proposed generalized HMM is effective for correcting multiple types of spelling errors. The results also show that it significantly outperforms the current approach for generating top-K candidate corrections, making it a better first-stage filter to enable any other complex spelling correction algorithm to have access to a better working set of candidate corrections as well as to cover splitting and concatenation errors, which no existing method in academic literature can correct."} {"_id": "613c001c968a96e05ee60034074ed29a9d76e125", "title": "Voltage Doubler Rectified Boost-Integrated Half Bridge (VDRBHB) Converter for Digital Car Audio Amplifiers", "text": "A new voltage doubler rectified boost-integrated half bridge converter for digital car audio amplifiers is proposed. The proposed converter shows low conduction loss due to low voltage stress of the secondary diodes, no dc magnetizing current for the transformer, and a lack of stored energy in the transformer. 
Moreover, since the primary MOSFETs are turned on under zero voltage switching conditions and the secondary diodes are turned off under zero current switching conditions, the proposed converter has minimized switching losses. In addition, the input filter can be minimized due to a continuous input current, and an output filter does not need an inductor in the proposed converter. Therefore, the proposed converter has the desired features, high efficiency and low profile, for viable power supplies for digital car audio amplifiers. A 60-W industrial sample of the proposed converter has been implemented for digital car audio amplifiers with a measured efficiency of 88.3% at nominal input voltage."} {"_id": "15e72433bed1f8cb79ab3282458b3b9038475da2", "title": "Structuring Space with Image Schemata: Wayfinding in Airports as a Case Study", "text": "Wayfinding is a basic activity people do throughout their entire lives as they navigate from one place to another. In order to create different spaces in such a way that they facilitate people\u2019s wayfinding, it is necessary to integrate principles of human spatial cognition into the design process. This paper presents a methodology to structure space based on experiential patterns, called image schemata. It integrates cognitive and engineering aspects in three steps: (1) interviewing people about their spatial experiences as they perform a wayfinding task in the application space, (2) extracting the image schemata from these interviews and formulating a sequence of subtasks, and (3) structuring the application space (i.e., the wayfinding task) with the extracted image schemata. We use wayfinding in airports as a case study to demonstrate the methodology. Our observations show that most often image schemata are correlated with other image schemata in the form of image-schematic blocks and rarely occur in isolation. Such image-schematic blocks serve as a knowledge-representation scheme for wayfinding tasks."} {"_id": "1eb9816a331288c657b0899cb20a1a073c3c7314", "title": "Enriching Wayfinding Instructions with Local Landmarks", "text": "Navigation services communicate optimal routes to users by providing sequences of instructions for these routes. Each single instruction guides the wayfinder from one decision point to the next. The instructions are based on geometric data from the street network, which is typically the only dataset available. This paper addresses the question of enriching such wayfinding instructions with local landmarks. We propose measures to formally specify the landmark saliency of a feature. Values for these measures are subject to hypothesis tests in order to define and extract landmarks from datasets. The extracted landmarks are then integrated into the wayfinding instructions. A concrete example from the city of Vienna demonstrates the applicability and usefulness of the method."} {"_id": "90aac67f7bbb5bccbb101dc637825897df8fab91", "title": "A Model for Context-Specific Route Directions", "text": "Wayfinding, i.e., getting from some origin to a destination, is one of the prime everyday problems humans encounter. It has received a lot of attention in research, and many (commercial) systems propose assistance in this task. We present an approach to route directions based on the idea of adapting route directions to the characteristics of the route and the environment. The lack of such an adaptation is a major drawback of existing systems.
Our approach is based on an information- and representation-theoretic analysis of routes and takes into account findings of behavioral research. The resulting systematics is the framework for the optimization process. We discuss the consequences of using an optimization process for generating route directions and outline its algorithmic realization."} {"_id": "9d49388512688e55ea1a882a210552653e8afd61", "title": "Artificial Intelligence: A Modern Approach", "text": "Artificial Intelligence (AI) is a big field, and this is a big book. We have tried to explore the full breadth of the field, which encompasses logic, probability, and continuous mathematics; perception, reasoning, learning, and action; and everything from microelectronic devices to robotic planetary explorers. The book is also big because we go into some depth."} {"_id": "4023ae0ba18eed43a97e8b8c9c8fcc9a671b7aa3", "title": "The magical number seven plus or minus two: some limits on our capacity for processing information.", "text": "My problem is that I have been persecuted by an integer.
For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals. This number assumes a variety of disguises, being sometimes a little larger and sometimes a little smaller than usual, but never changing so much as to be unrecognizable. The persistence with which this number plagues me is far more than a random accident. There is, to quote a famous senator, a design behind it, some pattern governing its appearances. Either there really is something unusual about the number or else I am suffering from delusions of persecution. I shall begin my case history by telling you about some experiments that tested how accurately people can assign numbers to the magnitudes of various aspects of a stimulus. In the traditional language of psychology these would be called experiments in absolute"} {"_id": "81f8c2637e6ff72094a7e3a4141b5117fb19171a", "title": "A Study on Vehicle Detection and Tracking Using Wireless Sensor Networks", "text": "A Wireless Sensor Network (WSN) is an emerging technology and has great potential to be employed in critical situations. The development of wireless sensor networks was originally motivated by military applications like battlefield surveillance. However, Wireless Sensor Networks are also used in many areas, such as industrial, civilian, health, habitat monitoring, environmental, military, home, and office applications. Detection and tracking of targets (e.g., animal, vehicle) as they move through a sensor network has become an increasingly important application for sensor networks. The key advantage of a WSN is that the network can be deployed on the fly and can operate unattended, without the need for any pre-existing infrastructure and with little maintenance. The system will estimate and track the target based on the spatial differences of the target signal strength detected by the sensors at different locations. Magnetic and acoustic sensors and the signals captured by these sensors are of present interest in the study. The system is made up of three components for detecting and tracking the moving objects. The first component consists of inexpensive off-the-shelf wireless sensor devices, such as MicaZ motes, capable of measuring acoustic and magnetic signals generated by vehicles. The second component is responsible for the data aggregation. The third component of the system is responsible for data fusion algorithms. This paper examines the sensors available in the market and their strengths and weaknesses, as well as some of the vehicle detection and tracking algorithms and their classification. This work presents an overview of each algorithm for detection and tracking and compares them based on evaluation parameters."} {"_id": "c336b9c9c049d0e4119707952a4d65c84b4e0b53", "title": "Loneliness, self-esteem, and life satisfaction as predictors of Internet addiction: a cross-sectional study among Turkish university students.", "text": "This study investigated the relationship among loneliness, self-esteem, life satisfaction, and Internet addiction. Participants were 384 university students (114 males, 270 females) aged 18 to 24, from the faculty of education in Turkey. The Internet Addiction, UCLA Loneliness, Self-esteem, and Life Satisfaction scales were distributed to about 1000 university students, and 38.4% completed the survey (see Appendix A and B). It was found that loneliness, self-esteem, and life satisfaction explained 38% of the total variance in Internet addiction.
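To ground the signal-strength tracking idea summarized in the vehicle-detection survey above, the following minimal Python sketch shows weighted-centroid localization, one simple way to estimate a target position from the spatial differences of received signal strength at known sensor locations. The sensor layout, RSS values, and dBm-to-weight conversion are illustrative assumptions, not details taken from any surveyed system.

```python
import numpy as np

def weighted_centroid(sensor_xy, rss_dbm):
    """Estimate a target position as an RSS-weighted centroid of sensor positions."""
    w = 10 ** (np.asarray(rss_dbm) / 10.0)   # convert dBm to linear power weights
    w = w / w.sum()
    return w @ np.asarray(sensor_xy, dtype=float)

# Hypothetical 10 m x 10 m deployment; stronger (less negative) RSS = closer.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
rss = [-48.0, -60.0, -55.0, -63.0]
print(weighted_centroid(sensors, rss))        # estimate biased toward sensor 1
```

More accurate range-based estimators (e.g., least-squares trilateration) follow the same pattern: convert signal strength to a distance proxy, then solve for position.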
Loneliness was the most important variable associated with Internet addiction and its subscales. Loneliness and self-esteem together explained time-management problems and interpersonal and health problems, while loneliness, self-esteem, and life satisfaction together explained only the interpersonal and health problems subscales."} {"_id": "af3b68155a6acc6e5520c0857a55cb7541f538fc", "title": "The Role of Search Engine Optimization in Search Marketing", "text": "Web sites invest significant resources in trying to influence their visibility among online search results. In addition to paying for sponsored links, they invest in methods known as search engine optimization (SEO) that improve the ranking of a site among the search results without improving its quality. We study the economic incentives of Web sites to invest in SEO and its implications for search engine and advertiser payoffs. We find that the process is equivalent to an all-pay auction with noise and headstarts. Our results show that, under certain conditions, a positive level of search engine optimization improves the search engine\u2019s ranking quality and thus the satisfaction of its visitors. In particular, if the quality of sites coincides with their valuation for visitors, then search engine optimization serves as a mechanism that improves the ranking by correcting measurement errors. While this benefits consumers and increases traffic to the search engine, sites participating in search engine optimization could be worse off due to wasteful spending unless their valuation for traffic is very high. We also investigate how search engine optimization affects the revenues from sponsored links. Surprisingly, we find that in many cases search engine revenues are increased by SEO."} {"_id": "f5f470993c9028cd2f4dffd914c74085e5c8d0f9", "title": "Corrective 3D reconstruction of lips from monocular video", "text": "In facial animation, the accurate shape and motion of the lips of virtual humans is of paramount importance, since subtle nuances in mouth expression strongly influence the interpretation of speech and the conveyed emotion. Unfortunately, passive photometric reconstruction of expressive lip motions, such as a kiss or rolling lips, is fundamentally hard even with multi-view methods in controlled studios. To alleviate this problem, we present a novel approach for fully automatic reconstruction of detailed and expressive lip shapes along with the dense geometry of the entire face, from just monocular RGB video. To this end, we learn the difference between inaccurate lip shapes found by a state-of-the-art monocular facial performance capture approach, and the true 3D lip shapes reconstructed using a high-quality multi-view system in combination with applied lip tattoos that are easy to track. A robust gradient domain regressor is trained to infer accurate lip shapes from coarse monocular reconstructions, with the additional help of automatically extracted inner and outer 2D lip contours. We quantitatively and qualitatively show that our monocular approach reconstructs higher quality lip shapes, even for complex shapes like a kiss or lip rolling, than previous monocular approaches.
Furthermore, we compare the performance of person-specific and multi-person generic regression strategies and show that our approach generalizes to new individuals and general scenes, enabling high-fidelity reconstruction even from commodity video footage."} {"_id": "7d5f8753f1e5a3779f133218ff7ca38de97b6281", "title": "Preventing software architecture erosion through static architecture conformance checking", "text": "Software architecture erosion is a problem faced by many organizations in the software industry. It happens when the `as-implemented' architecture does not conform to the `as-intended' architecture, which results in low-quality, complex, hard-to-maintain software. Architecture conformance checking refers to assessing the conformity of the implemented architecture to the intended architecture, and can provide a strategy for detecting software architecture erosion and thereby preventing its negative consequences. Considering the current state of the art of software architecture research and popular industry practices on architectural erosion, it is apparent that such a solution strategy is much needed to address the ever-increasing demands for large-scale, complex software systems. In this paper, an analysis of existing static architecture conformance checking approaches is undertaken. Extending previously conducted research, we are in the process of developing a static architecture conformance checking tool for Java, based on GRASP ADL, as a means to overcome the challenges of software architecture erosion. Early design/implementation details of this tool are also presented."} {"_id": "40e33480eafad3b9dc40d4e2fc0dd9943e5cbe02", "title": "A Role for Chordless Cycles in the Representation and Retrieval of Information", "text": "This paper explains how very large network structures can be reduced, or consolidated, to an assemblage of chordless cycles (cyclic structures without cross-connecting links), called a trace, for storage and later retrieval. After developing a basic mathematical framework, it illustrates the reduction process using a fully general network (with directed and undirected links). A major theme of the paper is that this approach appears to model actual biological memory, as well as offering an attractive digital solution."} {"_id": "7554e7c56813731acfefdaf898ccb03e0d667007", "title": "Hate Lingo: A Target-based Linguistic Analysis of Hate Speech in Social Media", "text": "While social media empowers freedom of expression and individual voices, it also enables anti-social behavior, online harassment, cyberbullying, and hate speech. In this paper, we deepen our understanding of online hate speech by focusing on a largely neglected but crucial aspect of hate speech \u2013 its target: either directed towards a specific person or entity, or generalized towards a group of people sharing a common protected characteristic. We perform the first linguistic and psycholinguistic analysis of these two forms of hate speech and reveal the presence of interesting markers that distinguish these types of hate speech. Our analysis reveals that Directed hate speech, in addition to being more personal and directed, is more informal, angrier, and often explicitly attacks the target (via name calling) with fewer analytic words and more words suggesting authority and influence. Generalized hate speech, on the other hand, is dominated by religious hate, is characterized by the use of lethal words such as murder, exterminate, and kill; and quantity words such as million and many.
Altogether, our work provides a data-driven analysis of the nuances of online hate speech that enables not only a deepened understanding of hate speech and its social implications, but also its detection."} {"_id": "06736f3c5a040483e26ebd57678f3d3e8273f873", "title": "A Content-Adaptively Sparse Reconstruction Method for Abnormal Events Detection With Low-Rank Property", "text": "This paper presents a content-adaptively sparse reconstruction method for abnormal events detection by exploiting the low-rank property of video sequences. In the dictionary learning phase, the bases which describe more important characteristics of the normal behavior patterns are assigned lower reconstruction costs. Based on the low-rank property of the bases captured by the low-rank approximation, a weighted sparse reconstruction method is proposed to measure the abnormality of testing samples. Multiscale 3-D gradient features, which encode the spatiotemporal information, are adopted as the low-level descriptors. The benefits of the proposed method are threefold: first, the low-rank property is utilized to learn the underlying normal dictionaries, which can represent groups of similar normal features effectively; second, the sparsity-based algorithm can adaptively determine the number of dictionary bases, which makes it a preferable choice for representing the dynamic scene semantics; and third, based on the weighted sparse reconstruction method, the proposed method is more efficient for detecting the abnormal events. Experimental results on the public datasets have shown that the proposed method yields competitive performance compared with the state-of-the-art methods."} {"_id": "d73fc776f58e1f4fa78cc931481865d287035e9b", "title": "Accelerometer's position free human activity recognition using a hierarchical recognition model", "text": "Monitoring of physical activities is a growing field with potential applications such as lifecare and healthcare. Accelerometry shows promise in providing an inexpensive but effective means of long-term activity monitoring of elderly patients. However, even for the same physical activity the output of any body-worn Triaxial Accelerometer (TA) varies at different positions on a subject's body, resulting in a high within-class variance. Thus almost all existing TA-based human activity recognition systems require firm attachment of the TA to a specific body part, making them impractical for long-term activity monitoring during unsupervised free living. Therefore, we present a novel hierarchical recognition model that can recognize human activities independent of the TA's position along a human body. The proposed model minimizes the high within-class variance significantly and allows subjects to carry the TA freely in any pocket without attaching it firmly to a body part. We validated our model using six daily physical activities: resting (sit/stand), walking, walk-upstairs, walk-downstairs, running, and cycling. Activity data is collected from the four most probable body positions of the TA: chest pocket, front trousers pocket, rear trousers pocket, and inner jacket pocket. The average accuracy of about 95% illustrates the effectiveness of the proposed method."} {"_id": "f4742f641fbb326675318fb651452bbb0d6ec2c0", "title": "An ontological framework for knowledge modeling and decision support in cyber-physical systems", "text": "Our work is concerned with the development of knowledge structures to support correct-by-design cyber-physical systems (CPS).
This class of systems is defined by a tight integration of software and physical processes, the need to satisfy stringent constraints on performance and safety, and a reliance on automation for the management of system functionality and decision making. To assure correctness of functionality with respect to requirements, there is a strong need for system models to account for the semantics of the domains involved. This paper introduces a new ontology-based knowledge and reasoning framework for decision support for CPS. It enables the development of determinate, provable and executable CPS models supported by sound semantics, strengthening the model-driven approach to CPS design. An investigation into the structure of basic description logics (DL) has identified the needed semantic extensions to enable the web ontology language (OWL) as the ontological language for our framework. The SROIQ DL has been found to be the most appropriate logic-based knowledge formalism as it maps to OWL 2 and ensures its decidability. Thus, correct, stable, complete and terminating reasoning algorithms are guaranteed with this SROIQ-backed language. The framework takes advantage of the commonality of data and information processing in the different domains involved to overcome the barrier of heterogeneity of domains and physics in CPS. Rule-based reasoning processes are employed. The framework provides interfaces for semantic extensions and computational support, including the ability to handle quantities for which dimensions and units are semantic parameters in the physical world. Together, these capabilities enable the conversion of data to knowledge and their effective use for efficient decision making and the study of system-level properties, especially safety. We exercise these concepts in a traffic light time-based reasoning system."} {"_id": "be3ba35181ca5a95910884e79905808e1bb884ae", "title": "BUILDING BETTER CAUSAL THEORIES: A FUZZY SET APPROACH TO TYPOLOGIES IN ORGANIZATION RESEARCH", "text": "Typologies are an important way of organizing the complex cause-effect relationships that are key building blocks of the strategy and organization literatures. Here, I develop a novel theoretical perspective on causal core and periphery, which is based on how elements of a configuration are connected to outcomes. Using data on high-technology firms, I empirically investigate configurations based on the Miles and Snow typology using fuzzy set qualitative comparative analysis (fsQCA). My findings show how the theoretical perspective developed here allows for a detailed analysis of causal core, periphery, and asymmetry, shifting the focus to midrange theories of causal processes."} {"_id": "b345fd87cb98fc8b2320530609c63b1b347b14d9", "title": "Open-Domain Non-factoid Question Answering", "text": "We present an end-to-end system for open-domain non-factoid question answering. We leverage the information on the ever-growing World Wide Web, and the capabilities of modern search engines, to find the relevant information. Our QA system is composed of three components: (i) a query formulation module (QFM), (ii) a candidate answer generation module (CAGM), and (iii) an answer selection module (ASM).
A thorough empirical evaluation using two datasets demonstrates that the proposed approach is highly competitive."} {"_id": "f0eace9bfe72c2449f76461ad97c4042d2a7141b", "title": "A Novel Antenna-in-Package With LTCC Technology for W-Band Application", "text": "In this letter, a novel antenna-in-package (AiP) technology at W-band is proposed. This technology addresses the special case in which a metallic package must be used to provide high mechanical strength. By taking advantage of the multilayer low temperature co-fired ceramic (LTCC) technology, the radiation efficiency of the antenna can be maintained. Meanwhile, high mechanical strength and shielding performance are achieved. A prototype of the AiP has been designed. The prototype comprises an integrated LTCC antenna, a low-loss feeder, and a metallic package with a tapered horn aperture. The LTCC feeder is realized by laminated waveguide (LWG). An LWG cavity buried in the LTCC is employed to broaden the antenna impedance bandwidth. Electromagnetic (EM) simulations and measurements of the antenna performance agree well over the whole frequency range of interest. The proposed prototype achieves a -10-dB impedance bandwidth of 10 GHz from 88 to 98 GHz and a peak gain of 12.3 dBi at 89 GHz."} {"_id": "5ffd89424a61ff1d0a88a602e4b5373d91eebd4b", "title": "Assessing the veracity of identity assertions via OSNs", "text": "Anonymity is one of the main virtues of the Internet, as it protects privacy and enables users to express opinions more freely. However, anonymity hinders the assessment of the veracity of assertions that online users make about their identity attributes, such as age or profession. We propose FaceTrust, a system that uses online social networks to provide lightweight identity credentials while preserving a user's anonymity. FaceTrust employs a \u201cgame with a purpose\u201d design to elicit the opinions of the friends of a user about the user's self-claimed identity attributes, and uses attack-resistant trust inference to assign veracity scores to identity attribute assertions. FaceTrust provides credentials, which a user can use to corroborate his assertions. We evaluate our proposal using a live Facebook deployment and simulations on a crawled social graph. The results show that our veracity scores strongly correlate with the ground truth, even when a large fraction of the social network users is dishonest and employs the Sybil attack."} {"_id": "dbef91d5193499891a5050ec8d10d99786e37c87", "title": "Learning from examples to improve code completion systems", "text": "The suggestions made by current IDEs' code completion features are based exclusively on the static type system of the programming language. As a result, proposals are often made that are irrelevant to a particular working context. Also, these suggestions are ordered alphabetically rather than by their relevance in a particular context. In this paper, we present intelligent code completion systems that learn from existing code repositories. We have implemented three such systems, each using the information contained in repositories in a different way. We perform a large-scale quantitative evaluation of these systems, integrate the best-performing one into Eclipse, and evaluate the latter also by a user study.
Our experiments give evidence that intelligent code completion systems which learn from examples significantly outperform mainstream code completion systems in terms of the relevance of their suggestions and thus have the potential to enhance developers' productivity."} {"_id": "e17427624a8e2770353ee567942ec41b06f94979", "title": "Patient-Specific Epileptic Seizure Onset Detection Algorithm Based on Spectral Features and IPSONN Classifier", "text": "This paper proposes a patient-specific epileptic seizure onset detection algorithm. In this algorithm, spectral features in five frequency bands (\u03b4, \u03b1, \u03b2, \u03b8, and \u03b3) are extracted from small frames of seizure and non-seizure EEG signals by applying the Discrete Wavelet Transform (DWT) and Discrete Fourier Transform (DFT). These features can create the maximum distinction between the two classes. Then a neural network (NN) classifier based on improved particle swarm optimization (IPSO) is used to determine an optimal nonlinear decision boundary. This approach allows the parameters of the NN classifier to be adjusted efficiently. Finally, the performance of the algorithm is evaluated based on three measures: sensitivity, specificity, and latency. The results indicate that the proposed algorithm obtains a higher sensitivity and smaller latency than other common algorithms. The proposed algorithm can be used as a seizure onset detector to initiate just-in-time therapy methods."} {"_id": "bd966a4333268bfa67663c4fe757983d50b818b3", "title": "Traffic lights detection and state estimation using Hidden Markov Models", "text": "The detection of a traffic light on the road is important for the safety of a vehicle's occupants, whether in a normal vehicle or an autonomous land vehicle. In a normal vehicle, a system that helps the driver perceive the details of traffic signals could be critical in a delicate driving manoeuvre (e.g., crossing an intersection). Furthermore, traffic light detection by an autonomous vehicle is a special case of perception, because it informs the control actions the vehicle must take. Multiple authors have used image processing as a basis for traffic light detection. However, image processing is sensitive to scene-capture conditions, and traffic light detection is affected as a result. For this reason, this paper proposes a method that links image processing with a state estimation routine based on Hidden Markov Models (HMMs). This method determines the current state of the detected traffic light from the states obtained by image processing, aiming for the best performance in determining traffic light states. With the proposed method, we obtained 90.55% accuracy in detecting the traffic light state, versus 78.54% using image processing alone. The recognition of traffic lights using image processing still has a large dependence on the capture conditions of each frame from the video camera.
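As an illustration of the spectral-feature step in the seizure-detection abstract above, here is a Python sketch that computes per-band energies from a multilevel DWT using PyWavelets. The 256 Hz sampling rate, db4 wavelet, frame length, and level-to-band mapping are assumptions chosen for illustration; the paper's exact wavelet, frame size, and DFT-based features are not reproduced here.

```python
import numpy as np
import pywt

fs = 256
frame = np.random.randn(2 * fs)                  # stand-in for a 2 s EEG frame
cA5, cD5, cD4, cD3, cD2, cD1 = pywt.wavedec(frame, 'db4', level=5)

# With fs = 256 Hz the DWT levels roughly cover the classic EEG bands:
bands = {
    'delta (0-4 Hz)':   cA5,
    'theta (4-8 Hz)':   cD5,
    'alpha (8-16 Hz)':  cD4,
    'beta (16-32 Hz)':  cD3,
    'gamma (32-64 Hz)': cD2,
}
features = {name: float(np.sum(c ** 2)) for name, c in bands.items()}
print(features)                                   # 5-dimensional feature vector
```

A vector like this, computed per frame, is the kind of input a classifier (here, an IPSO-tuned neural network) would separate into seizure and non-seizure classes.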
In this context, the addition of a pre-processing stage before image processing could help improve this aspect and provide better results in determining the traffic light state."} {"_id": "71c415d469d19c45da66e0f672fb9a70425ea2d1", "title": "AES-128 ECB encryption on GPUs and effects of input plaintext patterns on performance", "text": "In recent years, Graphics Processing Units (GPUs) have gained popularity for general purpose applications, immensely outperforming traditional optimized CPU-based implementations. A class of such applications implemented on GPUs to achieve faster execution than CPUs includes cryptographic techniques like the Advanced Encryption Standard (AES), a widely deployed symmetric encryption/decryption scheme in various electronic communication domains. With the drastic advancements in electronic communication technology and growth in the user space, the size of data exchanged electronically has increased substantially, so such cryptographic techniques become a bottleneck to fast transfers of information. In this work, we implement AES-128 ECB encryption on two recent and advanced GPUs (NVIDIA Quadro FX 7000 and Tesla K20c) with different memory usage schemes and varying input plaintext sizes and patterns. We obtained a speedup of up to 87x against an advanced CPU-based implementation (Intel Xeon X5690). Moreover, our experiments reveal that different degrees of pattern repetition in the input plaintext affect the encryption performance on the GPU."} {"_id": "af8968cda2d5042d9ea253b746324348eeebfb8b", "title": "Virtual reality vs. reality in engineering education", "text": "Virtual reality has become significantly popular in recent years. It is also widely used and more frequently implemented in education, training, and research. This article discusses the potential and the current state of virtual reality and its tools in engineering education. The focus is on opportunities, challenges, and dead ends of implementation faced by the authors. From this point of view, virtual reality is the future of creative learning; however, it has its limits for practical, learning-by-doing experiments, where hands-on work is still more effective than virtual alternatives."} {"_id": "e6d84c3b17ade98da650ffe25b1822966734809a", "title": "Emerging late adolescent friendship networks and Big Five personality traits: a social network approach.", "text": "The current study focuses on the emergence of friendship networks among just-acquainted individuals, investigating the effects of Big Five personality traits on friendship selection processes. Sociometric nominations and self-ratings on personality traits were gathered from 205 late adolescents (mean age=19 years) at 5 time points during the first year of university. SIENA, a novel multilevel statistical procedure for social network analysis, was used to examine effects of Big Five traits on friendship selection. Results indicated that friendship networks between just-acquainted individuals became increasingly cohesive within the first 3 months and then stabilized. Whereas individuals high on Extraversion tended to select more friends than those low on this trait, individuals high on Agreeableness tended to be selected more as friends.
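To make the cipher in the GPU study above concrete, the sketch below runs AES-128 in ECB mode on the CPU with PyCryptodome and shows why input plaintext patterns matter: ECB encrypts each 16-byte block independently, so repeated plaintext blocks produce repeated ciphertext blocks (and identical per-thread workloads on a GPU). The key and plaintext are illustrative values, not the paper's benchmark inputs.

```python
from Crypto.Cipher import AES  # PyCryptodome

key = bytes.fromhex('000102030405060708090a0b0c0d0e0f')   # illustrative 128-bit key
cipher = AES.new(key, AES.MODE_ECB)

plaintext = b'sixteen byte blk' * 4          # four identical 16-byte blocks
ciphertext = cipher.encrypt(plaintext)       # ECB: each block encrypted independently

blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
assert blocks[0] == blocks[1] == blocks[2] == blocks[3]   # the repetition leaks through
```

This block independence is also what makes ECB embarrassingly parallel: each GPU thread can encrypt one block with no inter-thread communication.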
In addition, individuals tended to select friends with similar levels of Agreeableness, Extraversion, and Openness."} {"_id": "b224196347525fee20677711436b0e77bc51abc2", "title": "Individual Tree Segmentation from LiDAR Point Clouds for Urban Forest Inventory", "text": "The objective of this study is to develop new algorithms for automated urban forest inventory at the individual tree level using LiDAR point cloud data. LiDAR data contain three-dimensional structure information that can be used to estimate tree height, base height, crown depth, and crown diameter. This allows precision urban forest inventory down to individual trees. Unlike most of the published algorithms that detect individual trees from a LiDAR-derived raster surface, we worked directly with the LiDAR point cloud data to separate individual trees and estimate tree metrics. Testing results in typical urban forests are encouraging. Future work will aim to synergize LiDAR data and optical imagery for urban tree characterization through data fusion techniques."} {"_id": "1b46b708f86c983eca542f91854174c0fe5dd5d1", "title": "Control of Prosthetic Device Using Support Vector Machine Signal Classification Technique", "text": "An appropriate classification of the surface myoelectric signals (MES) allows people with disabilities to control assistive prosthetic devices. The performance of these pattern recognition methods significantly affects the accuracy and smoothness of the target movements. We designed an intelligent Support Vector Machine (SVM) classifier to incorporate potential variations in electrode placement, thus achieving high accuracy for predictive control. MES from seven locations of the forearm were recorded over six different sessions. Despite meticulous attempts to keep the recording locations consistent between trials, slight shifts may still occur, affecting the classification performance. We hypothesize that the machine learning algorithm is able to compensate for these variations. The recorded data was first processed using the Discrete Wavelet Transform over 9 frequency bands. As a result, a 63-dimensional embedding of the wavelet coefficients was used as the training data for the SVM classifiers. For each session of recordings, a new classifier was trained using only the data sets from the previous sessions. The new classifier was then tested with the data obtained in the current session. The performance of the classifier was evaluated by calculating the sensitivity and specificity. The results indicated that after a critical number of recording sessions, the classifier accuracy starts to reach a plateau, meaning that the inclusion of new training data will not significantly improve the performance of the classifier. It was observed that the effect of electrode placement variations was reduced and that a classification accuracy of >89% can be obtained."} {"_id": "977d63d1ad9f03a1e08b900ba76e2f1602f020db", "title": "A Dual Decomposition Approach to Feature Correspondence", "text": "In this paper, we present a new approach for establishing correspondences between sparse image features related by an unknown nonrigid mapping and corrupted by clutter and occlusion, such as points extracted from images of different instances of the same object category. We formulate this matching task as an energy minimization problem by defining an elaborate objective function of the appearance and the spatial arrangement of the features.
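As a sketch of the classification stage in the prosthetic-control abstract above, the following Python snippet trains an SVM on 63-dimensional feature vectors standing in for the wavelet-coefficient embeddings. The random data, RBF kernel, and four-class movement labels are illustrative assumptions, not the authors' recorded MES data or exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 63))        # 300 windows x 63 DWT features
y_train = rng.integers(0, 4, size=300)      # e.g., four target movements

clf = SVC(kernel='rbf', C=1.0, gamma='scale')
clf.fit(X_train, y_train)

X_new = rng.normal(size=(5, 63))            # features from a new session
print(clf.predict(X_new))                   # predicted movement classes
```

In the session-wise protocol the paper describes, `fit` would use only features from previous sessions and `predict` would be run on the current session's data.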
Optimization of this energy is an instance of graph matching, which is in general an NP-hard problem. We describe a novel graph matching optimization technique, which we refer to as dual decomposition (DD), and demonstrate on a variety of examples that this method outperforms existing graph matching algorithms. In the majority of our examples, DD is able to find the global minimum within a minute. The ability to globally optimize the objective allows us to accurately learn the parameters of our matching model from training examples. We show on several matching tasks that our learned model yields results superior to those of state-of-the-art methods."} {"_id": "3cc850ea1a405015b0d485fc03495ee30bdc17c5", "title": "Step negotiation with wheel traction: a strategy for a wheel-legged robot", "text": "This paper presents a quasi-static step climbing behaviour for a minimal sensing wheel-legged quadruped robot called PAW. In the quasi-static climbing maneuver, the robot benefits from wheel traction and uses its legs to reconfigure itself with respect to the step during the climb. The control methodology with the corresponding controller parameters is determined and the state machine for the maneuver is developed. With this controller, PAW is able to climb steps higher than its body clearance. Furthermore, any step height up to this maximum achievable height can be negotiated autonomously with a single set of controller parameters, without knowledge of the step height or distance to the step."} {"_id": "dc7caf8f78a010b8f1a3802b5860c1b9754d836e", "title": "MARKOV CHAIN MONTE CARLO SIMULATION METHODS IN ECONOMETRICS", "text": "We present several Markov chain Monte Carlo simulation methods that have been widely used in recent years in econometrics and statistics. Among these is the Gibbs sampler, which has been of particular interest to econometricians. Although the paper summarizes some of the relevant theoretical literature, its emphasis is on the presentation and explanation of applications to important models that are studied in econometrics. We include a discussion of some implementation issues, the use of the methods in connection with the EM algorithm, and how the methods can be helpful in model specification questions. Many of the applications of these methods are of particular interest to Bayesians, but we also point out ways in which frequentist statisticians may find the techniques useful."} {"_id": "2077d0f30507d51a0d3bbec4957d55e817d66a59", "title": "Fields of Experts: a framework for learning image priors", "text": "We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. 
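Since the MCMC survey above singles out the Gibbs sampler, a minimal sketch may help fix ideas. It targets the textbook bivariate normal with correlation rho, for which both full conditionals are known in closed form: x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2). The target distribution and rho = 0.8 are illustrative, not an example taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n_draws = 0.8, 10_000
x = y = 0.0
draws = np.empty((n_draws, 2))
for i in range(n_draws):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))  # sample x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))  # sample y | x
    draws[i] = (x, y)

print(np.corrcoef(draws[1000:].T)[0, 1])  # ~0.8 after discarding burn-in
```

The same alternating-conditionals pattern scales to the econometric models the survey discusses, whenever each parameter block has a tractable conditional posterior.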
While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques."} {"_id": "589b8659007e1124f765a5d1bd940b2bf4d79054", "title": "Projection Pursuit Regression", "text": ""} {"_id": "7b1cc19dec9289c66e7ab45e80e8c42273509ab6", "title": "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis", "text": "Neural networks are a powerful technology for classification of visual inputs arising from documents. However, there is a confusing plethora of different neural network methods that are used in the literature and in industry. This paper describes a set of concrete best practices that document analysis researchers can use to get good results with neural networks. The most important practice is getting a training set as large as possible: we expand the training set by adding a new form of distorted data. The next most important practice is that convolutional neural networks are better suited for visual document tasks than fully connected networks. We propose that a simple \u201cdo-it-yourself\u201d implementation of convolution with a flexible architecture is suitable for many visual document problems. This simple convolutional neural network does not require complex methods, such as momentum, weight decay, structure-dependent learning rates, averaging layers, tangent prop, or even fine-tuning the architecture. The end result is a very simple yet general architecture which can yield state-of-the-art performance for document analysis. We illustrate our claims on the MNIST set of English digit images."} {"_id": "529b446504d00ce7bbb1f4ae131328c48c208e1c", "title": "CBIR Evaluation using Different Distances and DWT", "text": "In this paper, an evaluation is made of the results of a CBIR system based on the Haar wavelet transform, with different distances used for similarity matching to calculate the deviation from the query image. The distances used are the Chessboard distance, Cityblock distance, and Euclidean distance. The discrete wavelet transform is used to decompose the image. The image is decomposed to the sixth level, and the last-level approximation component is saved as the feature vector. A comparison is made between the different distances to find the best-suited distance for CBIR. The wavelet used is "Haar", which has compact support, is the simplest orthogonal wavelet, and is also very fast."} {"_id": "46e78e418c76db11fff5563ec1905e8b616252d3", "title": "Blockchained Post-Quantum Signatures", "text": "Inspired by the blockchain architecture and existing Merkle-tree-based signature schemes, we propose BPQS, an extensible post-quantum (PQ) resistant digital signature scheme best suited to blockchain and distributed ledger technologies (DLTs). One of the unique characteristics of the protocol is that it can take advantage of application-specific chain/graph structures in order to decrease key generation, signing and verification costs as well as signature size.
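A minimal sketch of the retrieval pipeline evaluated in the CBIR abstract above: each image is reduced to its sixth-level Haar DWT approximation, and a query is ranked against a database under the Chessboard (Chebyshev), Cityblock (L1), and Euclidean (L2) distances. The image sizes and random stand-in images are illustrative assumptions.

```python
import numpy as np
import pywt

def haar_feature(img):
    coeffs = pywt.wavedec2(img, 'haar', level=6)
    return coeffs[0].ravel()                 # last-level approximation component

chessboard = lambda a, b: np.max(np.abs(a - b))
cityblock  = lambda a, b: np.sum(np.abs(a - b))
euclidean  = lambda a, b: np.sqrt(np.sum((a - b) ** 2))

rng = np.random.default_rng(2)
query = haar_feature(rng.random((256, 256)))
db = [haar_feature(rng.random((256, 256))) for _ in range(10)]

for name, dist in [('chessboard', chessboard), ('cityblock', cityblock),
                   ('euclidean', euclidean)]:
    ranking = sorted(range(len(db)), key=lambda i: dist(query, db[i]))
    print(name, ranking[:3])                 # top-3 matches under each distance
```

Comparing the three rankings on a labeled image set is exactly the kind of evaluation the abstract describes.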
Compared to recent improvements in the field, BPQS outperforms existing hash-based algorithms when a key is reused for reasonable numbers of signatures, while it supports a fallback mechanism to allow for a practically unlimited number of signatures if required. We provide an open-source implementation of the scheme and benchmark it."} {"_id": "24d2b140789410bb454f9afe164a2beec97e6048", "title": "DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences", "text": "Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ."} {"_id": "0824049706fd13e6f7757af5e50ee6ca4e38506b", "title": "A socially assistive robot exercise coach for the elderly", "text": "We present a socially assistive robot (SAR) system designed to engage elderly users in physical exercise. We discuss the system approach, design methodology, and implementation details, which incorporate insights from psychology research on intrinsic motivation, and we present five clear design principles for SAR-based therapeutic interventions. We then describe the system evaluation, consisting of a multi-session user study with older adults (n = 33), to evaluate the effectiveness of our SAR exercise system and to investigate the role of embodiment by comparing user evaluations of similar physically and virtually embodied coaches. The results validate the system approach and effectiveness at motivating physical exercise in older adults according to a variety of user performance and outcome measures. The results also show a clear preference by older adults for the physically embodied robot coach over the virtual coach in terms of enjoyableness, helpfulness, and social attraction, among other factors."} {"_id": "1564f14967351a4e40d5f3c752d7754599b98833", "title": "Polyphase filters - A model for teaching the art of discovery in DSP", "text": "By its very nature, DSP is a mathematically heavy topic, and to fully understand it, students need to understand the mathematical developments underlying DSP topics. However, relying solely on mathematical developments often clouds the true nature of the foundation of a result. It is likely that students who master the mathematics may still not truly grasp the key ideas of a topic.
Furthermore, teaching DSP topics by merely \u201cgoing through the mathematics\u201d deprives students of learning the art of discovery that will make them good researchers. This paper uses the topic of polyphase decimation and interpolation to illustrate how it is possible to maintain rigor yet teach using less mathematical approaches that show students how researchers think when developing new ideas."} {"_id": "44c47c8c86c70aabe8040dce89b8de042f868f19", "title": "Explanation-Based Generalization: A Unifying View", "text": "The problem of formulating general concepts from specific training examples has long been a major focus of machine learning research. While most previous research has focused on empirical methods for generalizing from a large number of training examples using no domain-specific knowledge, in the past few years new methods have been developed for applying domain-specific knowledge to formulate valid generalizations from single training examples. The characteristic common to these methods is that their ability to generalize from a single example follows from their ability to explain why the training example is a member of the concept being learned. This paper proposes a general, domain-independent mechanism, called EBG, that unifies previous approaches to explanation-based generalization. The EBG method is illustrated in the context of several example problems, and used to contrast several existing systems for explanation-based generalization. The perspective on explanation-based generalization afforded by this general method is also used to identify open research problems in this area."} {"_id": "36528b15f3917916bd87d941849d4928f25ce7c6", "title": "Broadband Micro-Coaxial Wilkinson Dividers", "text": "This paper presents several micro-coaxial broadband 2:1 Wilkinson power dividers operating from 2 to 22 GHz, an 11:1 bandwidth. Circuits are fabricated on silicon with PolyStrata technology, and are implemented with 650 \u03bcm \u00d7 400 \u03bcm air-supported micro-coaxial lines. The measured isolation between the output ports is greater than 11 dB and the return loss at each port is more than 13 dB over the entire bandwidth. The footprints of these dividers can be miniaturized due to the high isolation between adjacent coaxial lines and their tight bend radius. For higher power handling, larger lines with a cross section of 1050 \u03bcm \u00d7 850 \u03bcm are also demonstrated. The effect of mismatch at the output ports is investigated in order to find the power loss in the resistors."} {"_id": "e000420abfc0624adaaa828d51e011fdac980919", "title": "Distributed Graph Routing for WirelessHART Networks", "text": "Communication reliability in a Wireless Sensor and Actuator Network (WSAN) has a high impact on the stability of industrial process monitoring and control. To achieve reliable, real-time communication in highly unreliable environments, industrial WSANs such as those based on WirelessHART adopt graph routing. In graph routing, each packet is scheduled on multiple time slots using multiple channels, on multiple links along multiple paths on a routing graph between a source and a destination. While high redundancy is crucial to reliable communication, determining and maintaining graph routing is challenging in terms of execution time and energy consumption for a resource-constrained WSAN. Existing graph routing algorithms use a centralized approach, do not scale well in terms of these metrics, and are less suitable under network dynamics.
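To make the polyphase identity at the heart of the teaching example above concrete, here is a short numpy sketch: filtering at the full rate and then keeping every M-th sample gives the same output as summing M short subfilter convolutions run at the low rate. The filter taps and signal are arbitrary; the point is the equivalence, not the filter design.

```python
import numpy as np

def decimate_naive(x, h, M):
    return np.convolve(x, h)[::M]             # filter at full rate, keep every M-th

def decimate_polyphase(x, h, M):
    h = np.pad(h, (0, (-len(h)) % M))          # pad taps to a multiple of M
    n_out = (len(x) + len(h) - 1 + M - 1) // M
    y = np.zeros(n_out)
    for p in range(M):
        hp = h[p::M]                           # p-th polyphase subfilter
        up = x[0::M] if p == 0 else np.concatenate(([0.0], x[M - p::M]))
        acc = np.convolve(up, hp)              # short convolution at the low rate
        y[:len(acc)] += acc[:n_out]
    return y

x, h, M = np.random.randn(64), np.random.randn(12), 4
assert np.allclose(decimate_naive(x, h, M), decimate_polyphase(x, h, M))
```

The computational payoff is that the naive form computes M-1 of every M output samples only to discard them, while the polyphase form never computes them at all.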
To address these limitations, we propose the first distributed graph routing protocol for WirelessHART networks. Our distributed protocol is based on the Bellman-Ford algorithm, and generates all routing graphs together using a single algorithm. We prove that our proposed graph routing can include a path between a source and a destination with cost (in terms of hop-count) at most 3 times the optimal cost. We implemented our proposed routing algorithm on TinyOS and evaluated it through experiments on TelosB motes and simulations using TOSSIM. The results show that it is scalable and consumes at least 40% less energy and needs at least 65% less time at the cost of 1 kB of extra memory compared to the state-of-the-art centralized approach for generating routing graphs."} {"_id": "784c937fb6d8855c112dd20c1e747fb09ae08084", "title": "Traffic Surveillance using Multi-Camera Detection and Multi-Target Tracking", "text": "Non-intrusive video-detection for traffic flow observation and surveillance is the primary alternative to conventional inductive loop detectors. Video Image Detection Systems (VIDS) can derive traffic parameters by means of image processing and pattern recognition methods. Existing VIDS emulate the inductive loops. We propose a trajectory-based recognition algorithm to expand the common approach and to obtain new types of information (e.g. queue length or erratic movements). Different views of the same area from more than one camera sensor are necessary because of the typical limitations of single-camera systems, which result from occlusion by other cars, trees, and traffic signs. A distributed cooperative multi-camera system enables a significant enlargement of the observation area. The trajectories are derived from multi-target tracking. The fusion of object data from different cameras is done using a tracking method. This approach opens up opportunities to identify and specify traffic objects, their location, speed, and other characteristic object information. The system provides new derived and consolidated information about traffic participants. Thus, this approach is also beneficial for a description of individual traffic participants."} {"_id": "3272595fc86f13c7cce0547f2b464c2befe5a69f", "title": "Designing Password Policies for Strength and Usability", "text": "Password-composition policies are the result of service providers becoming increasingly concerned about the security of online accounts. These policies restrict the space of user-created passwords to preclude easily guessed passwords and thus make passwords more difficult for attackers to guess. However, many users struggle to create and recall their passwords under strict password-composition policies, for example, ones that require passwords to have at least eight characters with multiple character classes and a dictionary check. Recent research showed that a promising alternative was to focus policy requirements on password length instead of on complexity. In this work, we examine 15 password policies, many focusing on length requirements. In doing so, we contribute the first thorough examination of policies requiring longer passwords. We conducted two online studies with over 20,000 participants, and collected both usability and password-strength data. Our findings indicate that password strength and password usability are not necessarily inversely correlated: policies that lead to stronger passwords do not always reduce usability.
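For readers unfamiliar with the relaxation step that the distributed WirelessHART protocol above builds on, here is a minimal Python sketch of classical Bellman-Ford routing toward a destination, tracking each node's cost-to-destination and next hop. The toy topology and hop-count weights are illustrative and unrelated to the evaluated testbed.

```python
def bellman_ford(n_nodes, edges, dest):
    """edges: (u, v, w) triples; returns cost-to-dest and next-hop per node."""
    INF = float('inf')
    cost = [INF] * n_nodes
    next_hop = [None] * n_nodes
    cost[dest] = 0
    for _ in range(n_nodes - 1):              # at most n-1 relaxation rounds
        for u, v, w in edges:
            if cost[v] + w < cost[u]:         # relax u -> v toward dest
                cost[u] = cost[v] + w
                next_hop[u] = v
    return cost, next_hop

# Three nodes, bidirectional links with hop-count-like weights.
links = [(0, 1, 1), (1, 0, 1), (1, 2, 1), (2, 1, 1), (0, 2, 3), (2, 0, 3)]
print(bellman_ford(3, links, dest=2))         # node 0 routes via node 1
```

In a distributed realization, each node would perform only its own relaxations from neighbor advertisements rather than iterating over the global edge list.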
We identify policies that are both more usable and more secure than commonly used policies that emphasize complexity rather than length requirements. We also provide practical recommendations for service providers who want their users to have strong yet usable passwords."} {"_id": "5c2900fadc2485b42a52c0a1dd280e41193301f6", "title": "Mimir: a Market-based Real-time Question and Answer Service", "text": "Community-based question and answer (Q&A) systems facilitate information exchange and enable the creation of reusable knowledge repositories. While these systems are growing in usage and are changing how people find and share information, current designs are inefficient, wasting the time and attention of their users. Furthermore, existing systems do not support signaling and screening of joking and non-serious questions. Coupling Q&A services with instant and text messaging for faster questions and answers may exacerbate these issues, causing Q&A services to incur high interruption costs on their users.\n In this paper we present the design and evaluation of a market-based real-time Q&A system. We compared its use to a similar Q&A system without a market. We found that while markets can reduce wasted resources by reducing the number of less important questions and low quality answers, it may also reduce the socially conducive questions and usages that are vital to sustaining a Q&A community."} {"_id": "9d6481079654a141381dc2752257fe1b5b112f6f", "title": "Tuning of PID controller for an automatic regulator voltage system using chaotic optimization approach", "text": "Despite their popularity, the tuning aspect of proportional\u2013integral-derivative (PID) controllers is a challenge for researchers and plant operators. Various controller tuning methodologies have been proposed in the literature, such as auto-tuning, self-tuning, pattern recognition, artificial intelligence, and optimization methods. Chaotic optimization algorithms, as an emerging method of global optimization, have attracted much attention in engineering applications. Chaotic optimization algorithms, which have the features of easy implementation, short execution time and robust mechanisms of escaping from local optima, are a promising tool for engineering applications. In this paper, a tuning method for determining the parameters of PID control for an automatic regulator voltage (AVR) system using a chaotic optimization approach based on the Lozi map is proposed. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed chaotic optimization introduces chaos mapping using Lozi map chaotic sequences, which increases its convergence rate and resulting precision. Simulation results are promising and show the effectiveness of the proposed approach. Numerical simulations based on the proposed PID control of an AVR system for nominal system parameters and step reference voltage input demonstrate the good performance of chaotic optimization."} {"_id": "35ccecd0b3a70635f952d0938cb095b991d6c1d7", "title": "What Constitutes Normal Sleeping Patterns? Childhood to Adolescence 2.1 Developmental Patterns in Sleep Duration: Birth to Adolescence", "text": "Healthy, adequate sleep is integral to the process of growth and development during adolescence. At puberty, maturational changes in the underlying homeostatic and circadian sleep regulatory mechanisms influence the sleep-wake patterns of adolescents.
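The Lozi-map idea in the PID-tuning abstract above can be sketched in a few lines of Python: a chaotic sequence replaces pseudo-random draws when sampling candidate gains. The map parameters a = 1.7 and b = 0.5, the rescaling to [0, 1], the gain ranges, and the quadratic surrogate objective are all illustrative assumptions; the actual AVR step-response cost is not reproduced here.

```python
import numpy as np

def lozi_sequence(n, x=0.1, y=0.1, a=1.7, b=0.5):
    """Lozi map: x' = 1 - a*|x| + y, y' = b*x; returns n values rescaled to [0, 1]."""
    out = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        out[i] = x
    return np.clip((out + 1.5) / 3.0, 0.0, 1.0)   # crude rescale to the unit box

def cost(gains):                                   # surrogate for the AVR cost
    return np.sum((gains - np.array([0.6, 0.45, 0.2])) ** 2)

lo, hi = np.array([0.0, 0.0, 0.0]), np.array([1.5, 1.0, 0.5])  # Kp, Ki, Kd box
u = lozi_sequence(3000).reshape(-1, 3)             # 1000 chaotic candidate triples
candidates = lo + u * (hi - lo)
best = min(candidates, key=cost)
print(dict(zip(['Kp', 'Ki', 'Kd'], np.round(best, 3))))
```

The appeal the abstract cites is that the ergodic, non-repeating orbit of the map explores the search box thoroughly while remaining fully deterministic and reproducible.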
These changes interact with psychosocial factors, such as increasing academic demands, hours spent in paid employment, electronic media use, and social opportunities, and constrict the time available for adolescents to sleep. Survey studies reveal that adolescents\u2019 habitual sleep schedules are associated with cumulative sleep loss. As a consequence, there is growing concern about the effects of insufficient sleep on adolescents\u2019 waking function. This review identifies and examines the characteristics of sleep and sleep loss in adolescents. It highlights the need for more research into the effects of chronic partial sleep deprivation in adolescents, and the process of extending sleep on weekends to recover from the effects of sleep debt. An understanding of chronic sleep deprivation and recovery sleep in adolescents will facilitate the development of evidence-based sleep guidelines and recommendations for recovery sleep opportunities when habitual sleep times are insufficient."} {"_id": "8ff54aa8045b1e30c348cf2ca42259c946cd7a9e", "title": "Search-based Neural Structured Learning for Sequential Question Answering", "text": "Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semistructured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions."} {"_id": "da471fce22ff517bc83607c0b98471c7bd32c54f", "title": "Crowdfunding Success Factors: The Characteristics of Successfully Funded Projects on Crowdfunding Platforms", "text": "Crowdfunding platforms offer promising opportunities for project founders to publish their project ideas and to collect money in order to be able to realize them. Consequently, the question of what influences the successful funding of projects, i.e., reaching the target amount of money, is very important. Building upon media richness theory and the concept of reciprocity, we extend previous research in the field of crowdfunding success factors. We provide a comprehensive view on factors influencing crowdfunding success by both focusing on project-specific as well as founder-specific aspects. Analyzing a sample of projects of the crowdfunding platform kickstarter.com, we find that the project description, related images and videos as well as the question of whether the founder has previously backed other projects influence funding success. Interestingly, the question of whether the founder has previously created other projects has no significant influence. Our results are of high interest for the stakeholders on crowdfunding platforms."} {"_id": "1c6aa0f196e2a68d3c93fd10b85f084811a87b02", "title": "Predicting Song Popularity", "text": "Music has been an integral part of our culture all throughout human history. In 2012 alone, the U.S. music industry generated $15 billion.
Having a fundamental understanding of what makes a song popular has major implications for businesses that thrive on popular music, namely radio stations, record labels, and digital and physical music marketplaces. Many private companies in these industries have solutions for this problem, but details have been kept private for competitive reasons. For our project, we will predict song popularity based on a song\u2019s audio features and metadata."} {"_id": "65769b53e71ea7c52b3a07ad32bd4fdade6a0173", "title": "Multi-task Deep Reinforcement Learning with PopArt", "text": "The reinforcement learning community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained on one task at a time, and each new task requires training a brand new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequential-decision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent\u2019s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state-of-the-art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy with a single set of weights that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state-of-the-art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab."} {"_id": "0a2cf52659f4bc450e8c6b15eead1fa9081affbc", "title": "Many-objective optimization algorithm applied to history matching", "text": "Reservoir model calibration, called history matching in the petroleum industry, is an important task to make more accurate predictions for better reservoir management. Providing an ensemble of well-matched reservoir models from history matching is essential to reproduce the observed production data from a field and to forecast reservoir performance. The nature of history matching is multi-objective because there are multiple match criteria or misfits from different production data, wells and regions in the field. In many cases, these criteria are conflicting and can be handled by the multi-objective approach. Moreover, the multi-objective approach provides faster misfit convergence and is more robust to the stochastic nature of optimization algorithms. However, reservoir history matching may feature far too many objectives to be handled efficiently by conventional multi-objective algorithms, such as multi-objective particle swarm optimizer (MOPSO) and non-dominated sorting genetic algorithm II (NSGA II).
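The PopArt record above adaptively normalizes each task's targets while preserving the network's predictions; a minimal numpy sketch of that idea follows, assuming a single linear value head and an illustrative step size:

```python
# Sketch of the PopArt normalization idea from the multi-task RL record
# above: track target statistics and rescale the output layer so that
# de-normalized predictions are preserved. Hyperparameters are illustrative.
import numpy as np

class PopArtHead:
    def __init__(self, dim, beta=3e-4):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.mean, self.sq_mean, self.beta = 0.0, 1.0, beta

    @property
    def std(self):
        var = max(self.sq_mean - self.mean ** 2, 1e-8)
        return float(np.sqrt(var))

    def normalized(self, x):            # network-side (normalized) output
        return self.w @ x + self.b

    def predict(self, x):               # de-normalized prediction
        return self.std * self.normalized(x) + self.mean

    def update_stats(self, target):
        """Update running moments, then rescale w, b so predict() is unchanged."""
        old_mean, old_std = self.mean, self.std
        self.mean += self.beta * (target - self.mean)
        self.sq_mean += self.beta * (target ** 2 - self.sq_mean)
        self.w *= old_std / self.std
        self.b = (old_std * self.b + old_mean - self.mean) / self.std
```

The weight/bias rescaling in `update_stats` is what keeps large-reward tasks from dominating the updates: gradients are computed against normalized targets, yet outputs stay consistent across the statistics change.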
Under an increasing number of objectives, the performance of multi-objective history matching by these algorithms deteriorates (lower match quality and slower misfit convergence). In this work, we introduce a recently proposed algorithm for the many-objective optimization problem, known as reference vector-guided evolutionary algorithm (RVEA), to history matching. We apply the algorithm to history matching a synthetic reservoir model and a real field case study with more than three objectives. The paper demonstrates the superiority of the proposed RVEA to the state-of-the-art multi-objective history matching algorithms, namely MOPSO and NSGA II."} {"_id": "c1353d7db7098efd2622f01b59e8dba55e1f1327", "title": "An improved LLC resonant inverter for induction heating with asymmetrical control", "text": "This paper proposes a modified LLC resonant load configuration of a full-bridge inverter for induction heating applications by using asymmetrical voltage cancellation control. The proposed control method is implemented in a full-bridge inverter topology. With the use of a phase-locked loop control, the operating frequency is automatically adjusted to maintain a small constant lagging phase angle under load parameter variation. The output power is controlled using the asymmetrical voltage cancellation technique. The LLC resonant tank is designed with a matching transformer in between the series inductor and the parallel LC resonant tank for short circuit protection capability of the matching transformer and the induction coil. The validity of the proposed method is verified through computer simulation and hardware experiment at the operating frequency of 108.7 to 110.6 kHz."} {"_id": "214658334c581f0d18b9a871928e91b6e4f83be7", "title": "An active battery cell balancing topology without using external energy storage elements", "text": "Cell balancing circuits are important to extend the life cycle of batteries and to extract maximum power from the batteries. Many power electronics topologies have been tried for cell balancing in battery packages. Active cell balancing topologies transfer energy from the cells showing higher performance to the cells showing lower performance to balance voltages across the cells of the battery, using energy storage elements such as inductor-capacitor or transformer-capacitor combinations, switched capacitors, or switched inductors. In this study an active balancing topology without using any energy storage element is proposed. The idea is similar to the switched capacitor topology in which a capacitor or capacitor banks is switched across the cells of the battery to balance the voltages. Since a basic battery cell model includes capacitance because of the capacitive effect of the cell, this capacitive effect can be utilized in cell balancing. Hence the equalizer capacitors in the switched capacitor topology can be eliminated and the cells of the battery can be switched with each other. This allows faster energy transfer and hence results in quick equalization. The proposed topology removes the need for extra energy storage elements such as capacitors, which frequently fail in power electronic circuits; reduces the losses, cost, and volume introduced by extra storage elements; and simplifies the control algorithm. The proposed balancing circuit can be implemented according to the application requirement.
The proposed topology is simulated in the MATLAB/Simulink environment and shows better results in terms of balancing speed in comparison to switched capacitor topologies."} {"_id": "5b9aec6bddd83f8e904aac8fae344ff306d3d29b", "title": "Energy-Efficient Resource Allocation in Cellular Networks With Shared Full-Duplex Relaying", "text": "Recent advances in self-interference cancelation techniques enable full-duplex relaying (FDR) systems, which transmit and receive simultaneously in the same frequency band with high spectrum efficiency. Unlike most existing works, we study the problem of energy-efficient resource allocation in FDR networks. We consider a shared FDR deployment scenario, where an FDR relay is deployed at the intersection of several adjacent cell sectors. First, a simple but practical transmission strategy is proposed to deal with the involved interference, i.e., multiaccess interference, multiuser interference, and self-interference. Then, the problem of joint power and subcarrier allocation is formulated to maximize the network-level energy efficiency while taking the residual self-interference into account. Since the formulated problem is a mixed combinatorial and nonconvex optimization problem with high computation complexity, we use Dinkelbach and discrete stochastic optimization methods to solve the energy-efficient resource-allocation problem efficiently. Simulation results are presented to show the effectiveness of the proposed scheme."} {"_id": "e4c564501bf73d3ee92b28ea96bb7b4ff9ec80ed", "title": "A novel wideband waveguide-to-microstrip transition with waveguide stepped impedance transformer", "text": "A novel wideband waveguide-to-microstrip transition is presented in this paper. A printed fan-shaped probe is used, while a waveguide stepped impedance transformer is utilized to broaden the bandwidth further. With an optimized design, a relative bandwidth of 27.8% for -20 dB return loss with a center frequency of 15 GHz is achieved. The simulated insertion loss in this bandwidth is less than 0.29 dB. A back-to-back transition is fabricated and measured. The measured result is similar to that of the simulation."} {"_id": "85b012e2ce65be26e3a807b44029f9f03a483fbc", "title": "Graph based E-Government web service composition", "text": "Nowadays, e-government has emerged as a government policy to improve the quality and efficiency of public administrations. By exploiting the potential of new information and communication technologies, government agencies are providing a wide spectrum of online services. These services are composed of several web services that comply with well-defined processes. One of the big challenges is the need to optimize the composition of the elementary web services. In this paper, we present a solution for optimizing the computation effort in web service composition. Our method is based on Graph Theory. We model the semantic relationship between the involved web services through a directed graph. Then, we compute all shortest paths using, for the first time, an extended version of the Floyd-Warshall algorithm."} {"_id": "8fd198887c4d403707f1d030d0c4e414dcbe3b26", "title": "Classification of Mixed-Type Defect Patterns in Wafer Bin Maps Using Convolutional Neural Networks", "text": "In semiconductor manufacturing, a wafer bin map (WBM) represents the results of wafer testing for dies using a binary pass or fail value. For WBMs, defective dies are often clustered into groups of local systematic defects.
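The web-service-composition record above extends Floyd-Warshall; for reference, a plain version of that all-pairs shortest-path computation follows, on a hypothetical service graph:

```python
# Sketch: the classic Floyd-Warshall all-pairs shortest-path computation
# that the web-service-composition record above extends. The service graph
# is a toy example, not from the paper.
def floyd_warshall(n, weights):
    """weights: dict {(u, v): cost} over vertices 0..n-1."""
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for (u, v), w in weights.items():
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):            # allow vertex k as an intermediate hop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Composition cost between four hypothetical services:
print(floyd_warshall(4, {(0, 1): 2, (1, 2): 1, (0, 2): 5, (2, 3): 1}))
```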
Determining their specific patterns is important, because different patterns are related to different root causes of failure. Recently, because wafer sizes have increased and the process technology has become more complicated, the probability of observing mixed-type defect patterns, i.e., two or more defect patterns in a single wafer, has increased. In this paper, we propose the use of convolutional neural networks (CNNs) to classify mixed-type defect patterns in WBMs in the framework of an individual classification model for each defect pattern. Through simulated and real data examples, we show that the CNN is robust to random noise and performs effectively, even if there are many random defects in WBMs."} {"_id": "0c04909ed933469246defcf9aca2b71ae8e3f623", "title": "Information Retrieval", "text": "The major change in the second edition of this book is the addition of a new chapter on probabilistic retrieval. This chapter has been included because I think this is one of the most interesting and active areas of research in information retrieval. There are still many problems to be solved so I hope that this particular chapter will be of some help to those who want to advance the state of knowledge in this area. All the other chapters have been updated by including some of the more recent work on the topics covered. In preparing this new edition I have benefited from discussions with Bruce Croft. The material of this book is aimed at advanced undergraduate information (or computer) science students, postgraduate library science students, and research workers in the field of IR. Some of the chapters, particularly Chapter 6, make simple use of a little advanced mathematics. However, the necessary mathematical tools can be easily mastered from numerous mathematical texts that now exist and, in any case, references have been given where the mathematics occur. I had to face the problem of balancing clarity of exposition with density of references. I was tempted to give large numbers of references but was afraid they would have destroyed the continuity of the text. I have tried to steer a middle course and not compete with the Annual Review of Information Science and Technology. Normally one is encouraged to cite only works that have been published in some readily accessible form, such as a book or periodical. Unfortunately, much of the interesting work in IR is contained in technical reports and Ph.D. theses. For example, most of the work done on the SMART system at Cornell is available only in reports. Luckily many of these are now available through the National Technical Information Service (U.S.) and University Microfilms (U.K.). I have not avoided using these sources although if the same material is accessible more readily in some other form I have given it preference. I should like to acknowledge my considerable debt to many people and institutions that have helped me. Let me say first that they are responsible for many of the ideas in this book but that only I wish to be held responsible. My greatest debt is to Karen Sparck Jones who taught me to research information retrieval as an experimental science. Nick Jardine and Robin \u2026"} {"_id": "3cfbb77e5a0e24772cfdb2eb3d4f35dead54b118", "title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", "text": "Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block.
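The wafer-bin-map record above trains an individual classifier per defect pattern, so a mixed-type wafer can trigger several detectors at once; a minimal PyTorch sketch of that framing, with the input size, layer sizes and pattern names all assumptions:

```python
# Sketch: one small binary CNN per defect pattern, mirroring the per-pattern
# classification framing in the wafer-bin-map record above. The 64x64 input,
# architecture and pattern list are illustrative assumptions.
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)  # one logit: pattern present?

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One independent detector per pattern; a mixed-type wafer may fire several.
patterns = ["ring", "scratch", "edge", "zone"]
models = {p: DefectCNN() for p in patterns}
wafer = torch.randn(8, 1, 64, 64)            # stand-in batch of pass/fail maps
with torch.no_grad():
    present = {p: (torch.sigmoid(m(wafer)) > 0.5).squeeze(1)
               for p, m in models.items()}
print({p: v.tolist() for p, v in present.items()})
```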
Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts."} {"_id": "9ec20b90593695e0f5a343dade71eace4a5145de", "title": "Deep Learning for Natural Language Processing", "text": "Deep Learning has come into existence as a new area for research in Machine Learning. It aims to act like a human brain, having the ability to learn from and process complex data, and it also tries to solve intricate tasks. Due to this capability, it has been used in various fields such as text, sound, and images. Natural language processing has started to be impacted by deep learning techniques. This research paper highlights Deep Learning\u2019s recent developments and applications in Natural Language Processing."} {"_id": "a1f33473ea3b8e98fee37e32ecbecabc379e07a0", "title": "Image Segmentation by Cascaded Region Agglomeration", "text": "We propose a hierarchical segmentation algorithm that starts with a very fine oversegmentation and gradually merges regions using a cascade of boundary classifiers. This approach allows the weights of region and boundary features to adapt to the segmentation scale at which they are applied. The stages of the cascade are trained sequentially, with asymmetric loss to maximize boundary recall. On six segmentation data sets, our algorithm achieves the best performance under most region-quality measures, and does so with fewer segments than prior work. Our algorithm is also highly competitive in a dense oversegmentation (superpixel) regime under boundary-based measures."} {"_id": "b208fb408ca3d6a88051a83b3cfa2e757b61d42b", "title": "Continuous and Cuffless Blood Pressure Monitoring Based on ECG and SpO2 Signals ByUsing Microsoft Visual C Sharp", "text": "BACKGROUND\nOne of the main problems, especially in operating rooms and monitoring devices, is the measurement of Blood Pressure (BP) by sphygmomanometer cuff. Objective: In this study we designed a new method to measure BP changes continuously, recovering information between cuff inflation times, by using vital signals in monitoring devices. This will be achieved by extraction of the time difference between each cardiac cycle and a relative pulse wave.\n\n\nMETHODS\nFinger pulse and ECG signals in lead I were recorded by a monitoring device. The output of the monitoring device was inserted in a computer by serial network communication. A software interface (Microsoft Visual C#.NET) was used to display and process the signals in the computer. The time difference between each cardiac cycle and the pulse signal was calculated by the software through R-wave detection in the ECG and peak detection in the pulse signal. The relation between the time difference of the two waves and BP was determined; the coefficients of the equation were then obtained in different physical situations.
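The count-vs-predict record above compares classic count vectors against embeddings; a tiny sketch of the count-based side follows, with a toy corpus and window size chosen purely for illustration:

```python
# Sketch: a count-based distributional vector of the kind compared against
# prediction-based embeddings in the record above. Corpus and window size
# are toy choices.
from collections import Counter
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2
vecs = {}
for i, w in enumerate(corpus):
    ctx = corpus[max(0, i - window):i] + corpus[i + 1:i + 1 + window]
    vecs.setdefault(w, Counter()).update(ctx)

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)      # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

print(cosine(vecs["cat"], vecs["dog"]))    # shared contexts -> high similarity
```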
The results of estimating BP were compared with the results of the sphygmomanometer method and the error rate was calculated.\n\n\nRESULTS\nIn this study, 25 subjects participated, 15 male and 10 female. The results showed that BP was linearly related to the time difference. The average correlation coefficient was 0.9\u00b10.03 for systolic and 0.82\u00b10.04 for diastolic blood pressure. The highest error percentage was 8% for the male group and 11% for the female group. Significant differences were observed between the different physical situations and arm movement changes. The relationship between time difference and age was estimated as linear, with a correlation coefficient of 0.76.\n\n\nCONCLUSION\nBy determining linear relation values with high accuracy, BP can be measured with insignificant error. Therefore it can be suggested as a new method to measure the blood pressure continuously."} {"_id": "c0141c34064c894ca0a9fa476bad80ef1ab25257", "title": "Performance assessment and uncertainty quantification of predictive models for smart manufacturing systems", "text": "We review in this paper several methods from Statistical Learning Theory (SLT) for the performance assessment and uncertainty quantification of predictive models. Computational issues are addressed so as to allow scaling to large datasets and the application of SLT to Big Data analytics. The effectiveness of the application of SLT to manufacturing systems is exemplified by targeting the derivation of a predictive model for quality forecasting of products on an assembly line."} {"_id": "0ef311acf523d4d0e2cc5f747a6508af2c89c5f7", "title": "LDA-based document models for ad-hoc retrieval", "text": "Search algorithms incorporating some form of topic model have a long history in information retrieval. For example, cluster-based retrieval has been studied since the 60s and has recently produced good results in the language model framework. An approach to building topic models based on a formal generative model of documents, Latent Dirichlet Allocation (LDA), is heavily cited in the machine learning literature, but its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use LDA to improve ad-hoc retrieval. We propose an LDA-based document model within the language modeling framework, and evaluate it on several TREC collections. Gibbs sampling is employed to conduct approximate inference in LDA and the computational complexity is analyzed. We show that improvements over retrieval using cluster-based models can be obtained with reasonable efficiency."} {"_id": "c761ef1ed330186d09063550a13e7cded8282217", "title": "Number of articles Economy Health Care Taxes Federal Budget Jobs State Budget Education Candidate Biography Elections Immigration Foreign Policy Crime Energy History Job Accomplishments Legal Issues Environment Terrorism Military Guns", "text": "We report on attempts to use currently available automated text analysis tools to identify possible biased treatment by Politifact of Democratic vs. Republican speakers, through language. We begin by noting that there is no established method for detecting such differences, and indeed that \u201cbias\u201d is complicated and difficult to operationalize into a measurable quantity. This report includes several analyses that are representative of the tools available from natural language processing at this writing.
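The LDA-retrieval record above folds topic structure into a document language model; a rough scikit-learn sketch of that combination follows, where the corpus, the interpolation weight, and the exact mixture form are all assumptions (the paper's formulation differs in detail):

```python
# Sketch of an LDA-smoothed document language model in the spirit of the
# ad-hoc retrieval record above: interpolate a per-document MLE with the
# LDA term P_lda(w|d) = sum_z P(w|z) P(z|d). Corpus and lambda are toys.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stock market trading prices", "football match score goal",
        "market prices fall", "goal keeper match win"]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

theta = lda.transform(X)                                       # P(z|d)
phi = lda.components_ / lda.components_.sum(axis=1)[:, None]   # P(w|z)
counts = X.toarray()
mle = counts / counts.sum(axis=1)[:, None]                     # P_ml(w|d)
lam = 0.7
p_w_given_d = lam * mle + (1 - lam) * theta @ phi              # smoothed model

q = vec.vocabulary_["market"]
print(p_w_given_d[:, q])   # query-likelihood scores for the term "market"
```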
In each case, we offer (i) what we would expect to see in the results if the method picked up on differential treatment between Democrats and Republicans, (ii) what we actually observe, and (iii) potential problems with the analysis; in some cases we also suggest (iv) future analyses that might be more revelatory."} {"_id": "8ce9cb9f2e5c7a7e146b176eb82dbcd34b4c764c", "title": "PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python", "text": "The Parallel Circuit SIMulator (PCSIM) is a software package for simulation of neural circuits. It is primarily designed for distributed simulation of large scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, which is a powerful programming environment and allows the user to easily integrate the neural circuit simulator with data analysis and visualization tools to manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular we will investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality either employing pure Python or C++ and thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations."} {"_id": "eb2e3911b852fc00ee1d9f089e3feb461aacde62", "title": "Class Activation Map Generation by Representative Class Selection and Multi-Layer Feature Fusion", "text": "Existing methods generate the class activation map (CAM) from a fixed set of classes (i.e., using all the classes), so the discriminative cues between class pairs are not considered. Note that activation maps obtained by considering different class pairs are complementary, and therefore can provide more discriminative cues to overcome a shortcoming of existing CAM generation: the highlighted regions are usually local part regions rather than global object regions, due to the lack of object cues. In this paper, we generate the CAM by using a few representative classes, with the aim of extracting more discriminative cues by considering each class pair so as to obtain the CAM more globally. The advantages are twofold. Firstly, the representative classes are able to obtain activation regions that are complementary to each other, which therefore leads to generating the activation map more accurately. Secondly, we only need to consider a small number of representative classes, making the CAM generation suitable for small networks. We propose a clustering based method to select the representative classes. Multiple binary classification models rather than a multi-class classification model are used to generate the CAM. Moreover, we propose a multi-layer fusion based CAM generation method to simultaneously combine high-level semantic features and low-level detail features. We validate the proposed method on the PASCAL VOC and COCO databases in terms of segmentation ground truth. Various networks such as classical networks (Resnet-50, Resnet-101 and Resnet-152) and small networks (VGG-19, Resnet-18 and Mobilenet) are considered.
Experimental results show that the proposed method clearly improves CAM generation."} {"_id": "31706b0fe43c859dbf9bf698bf6ca2daef969872", "title": "Processing of Reference and the Structure of Language: An Analysis of Complex Noun Phrases", "text": "Five experiments used self-paced reading time to examine the ways in which complex noun phrases (both conjoined NPs and possessive NPs) influence the interpretation of referentially dependent expressions. The experimental conditions contrasted the reading of repeated names and pronouns referring to components of a complex NP and to the entire complex NP. The results indicate that the entity introduced by a major constituent of a sentence is more accessible as a referent than the entities introduced by component noun phrases. This pattern of accessibility departs from the advantage of first mention that has been demonstrated using probe-word recognition tasks. It supports the idea that reduced expressions are interpreted as referring directly to prominent entities in a mental model whereas reference by names to entities that are already represented in a mental model is mediated by additional processes. The same interpretive processes appear to operate on coreference within and between sentences."} {"_id": "dc148913b7271b65d221d2c41f77a9e41750c867", "title": "Predicting the Price of Used Cars using Machine Learning Techniques", "text": "In this paper, we investigate the application of supervised machine learning techniques to predict the price of used cars in Mauritius. The predictions are based on historical data collected from daily newspapers. Different techniques like multiple linear regression analysis, k-nearest neighbours, na\u00efve Bayes and decision trees have been used to make the predictions. The predictions are then evaluated and compared in order to find those which provide the best performances. A seemingly easy problem turned out to be indeed very difficult to resolve with high accuracy. All the four methods provided comparable performance. In the future, we intend to use more sophisticated algorithms to make the predictions."} {"_id": "2f5ae832284c2f9fe2bba449b961745aef817bbf", "title": "Beyond Doctors: Future Health Prediction from Multimedia and Multimodal Observations", "text": "Although chronic diseases cannot be cured, they can be effectively controlled as long as we understand their progressions based on the current observational health records, which are often in the form of multimedia data. A large and growing body of literature has investigated the disease progression problem. However, far too little attention to date has been paid to jointly consider the following three observations of the chronic disease progression: 1) the health statuses at different time points are chronologically similar; 2) the future health statuses of each patient can be comprehensively revealed from the current multimedia and multimodal observations, such as visual scans, digital measurements and textual medical histories; and 3) the discriminative capabilities of different modalities vary significantly in accordance with specific diseases. In the light of these, we propose an adaptive multimodal multi-task learning model to co-regularize the modality agreement, temporal progression and discriminative capabilities of different modalities. We theoretically show that our proposed model is a linear system.
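The CAM record above builds on the standard class activation map: a weighted sum of the last convolutional feature maps using the classifier weights of one class. A small numpy sketch, with all shapes illustrative:

```python
# Sketch: the standard CAM computation underlying the record above — for a
# chosen class, weight the final conv maps by that class's classifier
# weights and sum. Shapes are illustrative.
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """features: (K, H, W) final conv maps; fc_weights: (num_classes, K)."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)      # normalize to [0, 1] for display

features = np.random.rand(32, 7, 7)      # e.g. last conv block of a small net
fc_weights = np.random.rand(10, 32)      # 10-way classifier head
cam = class_activation_map(features, fc_weights, class_idx=3)
print(cam.shape)  # (7, 7); upsample to image size to highlight object regions
```

Pairing two classes, as the record proposes, amounts to contrasting such maps computed from a binary classifier per pair rather than from one fixed multi-class head.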
Before training our model, we address the missing data problem via the matrix factorization approach. Extensive evaluations on a real-world Alzheimer's disease dataset verify our proposed model well. It should be noted that our model is also applicable to other chronic diseases."} {"_id": "2b25897316a8f42b88a9d66d966f4de0ed7f7a05", "title": "Rapid diagnostic tests for malaria parasites.", "text": "Malaria presents a diagnostic challenge to laboratories in most countries. Endemic malaria, population movements, and travelers all contribute to presenting the laboratory with diagnostic problems for which it may have little expertise available. Drug resistance and genetic variation have altered many accepted morphological appearances of malaria species, and new technology has given an opportunity to review available procedures. Concurrently the World Health Organization has opened a dialogue with scientists, clinicians, and manufacturers on the realistic possibilities for developing accurate, sensitive, and cost-effective rapid diagnostic tests for malaria, capable of detecting 100 parasites/microliter from all species and with a semiquantitative measurement for monitoring successful drug treatment. New technology has to be compared with an accepted \"gold standard\" that allows comparisons of sensitivity and specificity between different methods. The majority of malaria is found in countries where cost-effectiveness is an important factor and ease of performance and training is a major consideration. Most new technology for malaria diagnosis incorporates immunochromatographic capture procedures, with conjugated monoclonal antibodies providing the indicator of infection. Preferred targeted antigens are those which are abundant in all asexual and sexual stages of the parasite and are currently centered on detection of HRP-2 from Plasmodium falciparum and parasite-specific lactate dehydrogenase or Plasmodium aldolase from the parasite glycolytic pathway found in all species. Clinical studies allow effective comparisons between different formats, and the reality of nonmicroscopic diagnoses of malaria is considered."} {"_id": "36e22ee5b8694af3703856eda13d8ee4ea284171", "title": "Automatic Keyword Extraction for Text Summarization in Multi-document e-Newspapers Articles", "text": "Summarization is the process of reducing the content of a text document to a brief form that retains all the critical points of the original text. In extractive summarization, one extracts only the most relevant sentences in the text document, those that convey the gist of the content. Extractive summarization techniques usually revolve around the idea of discovering the most relevant and frequent keywords and then extracting sentences based on those keywords. Manual extraction or annotation of relevant keywords is a tedious, error-prone procedure requiring considerable effort and time. In this paper, we proposed a hybrid approach to extract keywords automatically for multi-document text summarization in e-newspaper articles. The performance of the proposed approach is compared with three additional keyword extraction techniques, namely term frequency-inverse document frequency (TF-IDF), term frequency-adaptive inverse document frequency (TF-AIDF), and number of false alarms (NFA), for automatic keyword extraction and summarization in e-newspaper articles, for better analysis.
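The health-prediction record above fills missing observations by matrix factorization before training; a minimal SGD sketch of that pre-step follows, with rank, learning rate and regularizer as illustrative choices:

```python
# Sketch: SGD matrix factorization used to complete missing entries, as a
# stand-in for the missing-data pre-step in the record above. All
# hyperparameters are illustrative.
import numpy as np

def mf_impute(M, mask, rank=2, lr=0.01, reg=0.1, epochs=500, seed=0):
    """M: data matrix; mask: True where M is observed. Returns completed M."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = rng.normal(0, 0.1, (n, rank))
    V = rng.normal(0, 0.1, (m, rank))
    obs = np.argwhere(mask)
    for _ in range(epochs):
        for i, j in obs:
            err = M[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    # Keep observed values; fill the rest from the low-rank reconstruction.
    return np.where(mask, M, U @ V.T)

M = np.array([[5., 3., 0.], [4., 0., 1.], [1., 1., 5.]])
mask = M > 0                       # zeros stand for missing entries here
print(mf_impute(M, mask).round(2))
```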
Finally, we showed that our proposed techniques outperformed the other techniques for automatic keyword extraction and summarization."} {"_id": "542cc06e3b522163d1a8aed8d5769e94cb6456ce", "title": "Software-Defined \u201cHardware\u201d Infrastructures: A Survey on Enabling Technologies and Open Research Directions", "text": "This paper provides an overview of software-defined \u201chardware\u201d infrastructures (SDHI). SDHI builds upon the concept of hardware (HW) resource disaggregation. HW resource disaggregation breaks today\u2019s physical server-oriented model where the use of a physical resource (e.g., processor or memory) is constrained to a physical server\u2019s chassis. SDHI extends the definition of software-defined infrastructures (SDI) and brings greater modularity, flexibility, and extensibility to cloud infrastructures, thus allowing cloud operators to employ resources more efficiently and allowing applications not to be bounded by the physical infrastructure\u2019s layout. This paper aims to be an initial introduction to SDHI and its associated technological advancements. This paper starts with an overview of the cloud domain and puts into perspective some of the most prominent efforts in the area. Then, it presents a set of differentiating use-cases that SDHI enables. Next, we state the fundamentals behind SDI and SDHI, and elaborate on why SDHI is of great interest today. Moreover, it provides an overview of the functional architecture of a cloud built on SDHI, exploring how this transformation reaches far beyond the cloud infrastructure level to platforms, execution environments, and applications. Finally, an in-depth assessment is made of the technologies behind SDHI, the impact of these technologies, and the associated challenges and potential future directions of SDHI."} {"_id": "521c2dd41bd7de10cb514f4e9d537fd434699cb7", "title": "A Study On Unmanned Vehicles and Cyber Security", "text": "During the past years, Unmanned Aerial Vehicle (UAV) usage has grown in military and civilian fields. Every year, emergency response operations are more and more dependent on these unmanned vehicles. Since direct human intervention is lacking, the correction of onboard errors and security breaches is a growing concern for unmanned vehicles. One of the concerns raised by first responders using unmanned vehicles is the security and privacy of the victims they are rescuing. Video channels used for the control of unmanned vehicles are of particular importance, having great susceptibility to hijacking, jamming and spoofing attacks. It is assumed the video feeds are not protected by any type of encryption and are therefore vulnerable to hacking and/or spoofing. This summer we have conducted a preliminary survey of the security vulnerabilities and their implications for the operation of unmanned vehicles from a network security perspective."} {"_id": "1832aa821e189770d4401092eb0a93215bd1c382", "title": "A stochastic data discrimination based autoencoder approach for network anomaly detection", "text": "Machine learning based network anomaly detection methods, which are already effective defense mechanisms against known network intrusion attacks, have also proven themselves to be more successful at the detection of zero-day attacks compared to other types of detection methods. Therefore, research on network anomaly detection using deep learning is constantly getting more attention.
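The keyword-extraction record above benchmarks against TF-IDF; a small sketch of a TF-IDF keyword-plus-sentence-scoring baseline of that kind follows, on toy sentences (the articles and the top-k cutoff are stand-ins):

```python
# Sketch: TF-IDF keyword ranking with sentences scored by their TF-IDF mass,
# i.e. the style of extractive baseline the summarization record above
# compares against. Sentences are toy stand-ins for newspaper articles.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "The council approved the new budget on Monday.",
    "The budget increases funding for public schools.",
    "Local weather stayed mild through the weekend.",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(sentences)

# Top keywords: highest average TF-IDF weight across sentences.
weights = np.asarray(X.mean(axis=0)).ravel()
terms = vec.get_feature_names_out()
print("keywords:", [terms[i] for i in weights.argsort()[::-1][:3]])

# Extract the sentence carrying the largest total TF-IDF mass.
scores = np.asarray(X.sum(axis=1)).ravel()
print("summary:", sentences[int(scores.argmax())])
```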
In this study, we created an anomaly detection model based on a deterministic autoencoder, which discriminates normal and abnormal data by using our proposed stochastic threshold determination approach. We tested our proposed anomaly detection model on the NSL-KDD test dataset KDDTest+ and obtained an accuracy of 88.28%. The experimental results show that our proposed anomaly detection model can perform almost as well as the most successful and up-to-date machine learning based anomaly detection models in the literature."} {"_id": "1007fbc622acd3cc8f658558a3e841ea200f6880", "title": "Electrophysiology in the age of light", "text": "Electrophysiology, the 'gold standard' for investigating neuronal signalling, is being challenged by a new generation of optical probes. Together with new forms of microscopy, these probes allow us to measure and control neuronal signals with spatial resolution and genetic specificity that already greatly surpass those of electrophysiology. We predict that the photon will progressively replace the electron for probing neuronal function, particularly for targeted stimulation and silencing of neuronal populations. Although electrophysiological characterization of channels, cells and neural circuits will remain necessary, new combinations of electrophysiology and imaging should lead to transformational discoveries in neuroscience."} {"_id": "ea2994ae14c2a9dafce41b00ca507a0d8db097cb", "title": "Tagging Ingush - Language Technology For Low-Resource Languages Using Resources From Linguistic Field Work", "text": "This paper presents on-going work on creating NLP tools for under-resourced languages from very sparse training data coming from linguistic field work. In this work, we focus on Ingush, a Nakh-Daghestanian language spoken by about 300,000 people in the Russian republics Ingushetia and Chechnya. We present work on morphosyntactic taggers trained on transcribed and linguistically analyzed recordings and dependency parsers using English glosses to project annotation for creating synthetic treebanks. Our preliminary results are promising, supporting the goal of bootstrapping efficient NLP tools with limited or no task-specific annotated data resources available."} {"_id": "3941c4da721e35e39591ee95cca44a1016b24c6b", "title": "Vertex-Centric Graph Processing: The Good, the Bad, and the Ugly", "text": "We study distributed graph algorithms that adopt an iterative vertex-centric framework for graph processing, popularized by Google\u2019s Pregel system. Since then, there have been several attempts to implement many graph algorithms in a vertex-centric framework, as well as efforts to design optimization techniques for improving the efficiency. However, to the best of our knowledge, there has not been any systematic study to compare these vertex-centric implementations with their sequential counterparts. Our paper addresses this gap in two ways. (1) We analyze the computational complexity of such implementations with the notion of time-processor product, and benchmark several vertex-centric graph algorithms to determine whether they perform more work with respect to their best-known sequential solutions. (2) Employing the concept of balanced practical Pregel algorithms, we study whether these implementations suffer from imbalanced workloads and a large number of iterations.
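The autoencoder record above flags anomalies by reconstruction error; the sketch below uses a plain mean-plus-3-sigma cutoff as a stand-in for the paper's stochastic threshold determination, with synthetic data in place of NSL-KDD:

```python
# Sketch: a deterministic autoencoder flagging anomalies by reconstruction
# error, as in the record above. The mean + 3*std threshold is a simple
# stand-in for the paper's stochastic threshold; data is synthetic.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, d_in=41, d_hid=8):        # 41 mirrors NSL-KDD's feature count
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.dec = nn.Linear(d_hid, d_in)

    def forward(self, x):
        return self.dec(self.enc(x))

torch.manual_seed(0)
normal = torch.randn(512, 41)                    # placeholder for normal traffic
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                             # train to reconstruct normal data only
    opt.zero_grad()
    loss = ((model(normal) - normal) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(normal) - normal) ** 2).mean(dim=1)
    thresh = err.mean() + 3 * err.std()          # simple deterministic cutoff
    test = torch.randn(4, 41) * 3                # placeholder test records
    flagged = ((model(test) - test) ** 2).mean(dim=1) > thresh
    print(flagged)                               # True -> classified anomalous
```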
Our findings illustrate that, with the exception of the Euler tour tree algorithm, all other algorithms either perform asymptotically more work than their best-known sequential approach, or suffer from imbalanced workload/large number of iterations, or even both. We also highlight graph algorithms that are fundamentally difficult to express in vertex-centric frameworks, and conclude by discussing the road ahead for distributed graph processing."} {"_id": "b57ba909756462d812dc20fca157b3972bc1f533", "title": "Vision-Based Classification of Skin Cancer using Deep Learning", "text": "This study proposes the use of deep learning algorithms to detect the presence of skin cancer, specifically melanoma, from images of skin lesions taken by a standard camera. Skin cancer is the most prevalent form of cancer in the US, where 3.3 million people are treated each year. The 5-year survival rate of melanoma is 98% when detected and treated early, yet over 10,000 people are lost each year, due mostly to late-stage diagnoses [2]. Thus, there is a need to make melanoma screening and diagnosis methods cheaper, quicker, simpler, and more accessible. This study aims to produce an inexpensive and fast computer-vision based machine learning tool that can be used by doctors and patients to track and classify suspicious skin lesions as benign or malignant with adequate accuracy using only a cell phone camera. Three separate learning models were trained on the data set, with increasingly improved classification accuracy. The 3 models included logistic regression, a deep neural network, and a fine-tuned, pre-trained, VGG-16 Convolutional Neural Network (CNN) [7]. Preliminary results show the developed algorithm\u2019s ability to segment moles from images with 70% accuracy and classify skin lesions as melanoma with 78% balanced accuracy using a fine-tuned VGG-16 CNN."} {"_id": "5e7b74d9e7d0399686bfd4f5334ef44f7d87d259", "title": "Ceiling continuum arm with extensible pneumatic actuators for desktop workspace", "text": "We propose an extensible pneumatic continuum arm that elongates to perform reaching movements and object grasping, and is suspended on the ceiling to prevent interference with human workers in a desktop workspace. The selected actuators with bellows aim to enhance the arm motion capabilities. A single actuator can provide a maximum tension force of 150 N, and the proposed arm has a three-segment structure with a bundle of three actuators per segment. We measured the three-dimensional motion at the arm tip by using an optical motion-capture system. The corresponding results show that the arm can grasp objects with an approximate radius of 80 mm and reach any point on the desktop. Furthermore, the maximum elongation ratio is 180%, with length varying between 0.75 m and 2.1 m. Experiments verified that the arm can grasp objects of various sizes and shapes. Moreover, we demonstrate the vertical transportation of objects taking advantage of the arm's extensibility. We expect to apply the proposed arm for tasks such as grasping objects, illuminating desktops, and physically interacting with users."} {"_id": "a6a1b70305b27c556aac779fb65429db9c2e1ef2", "title": "Eventually Returning to Strong Consistency", "text": "Eventually and weakly consistent distributed systems have emerged in the past decade as an answer to scalability and availability issues associated with strong consistency semantics, such as linearizability.
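The vertex-centric records above revolve around Pregel-style supersteps; a toy, single-process sketch of that programming model follows, computing connected components by min-label propagation (real systems distribute the vertices across workers):

```python
# Sketch: a synchronous "think like a vertex" loop in the Pregel style the
# records above analyze, computing connected components by min-label
# propagation. Single-process toy; the graph is illustrative.
def vertex_centric_cc(adj):
    label = {v: v for v in adj}              # each vertex starts as its own id
    active = set(adj)
    while active:                            # one superstep per iteration
        messages = {}
        for v in active:                     # send my label to my neighbors
            for u in adj[v]:
                messages[u] = min(messages.get(u, label[u]), label[v])
        active = set()
        for v, m in messages.items():        # adopt a smaller label, stay active
            if m < label[v]:
                label[v] = m
                active.add(v)
    return label

adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(vertex_centric_cc(adj))  # {0: 0, 1: 0, 2: 0, 3: 3, 4: 3}
```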
However, systems offering strong consistency semantics have an advantage over systems based on weaker consistency models, as they are typically much simpler to reason about and are more intuitive to developers, exhibiting more predictable behavior. Therefore, a lot of research and development effort is being invested lately into the re-engineering of strongly consistent distributed systems, as well as into boosting their scalability and performance. This paper overviews and discusses several novel directions in the design and implementation of strongly consistent systems in industries and research domains such as cloud computing, data center networking and blockchain. It also discusses a general trend of returning to strong consistency in distributed systems, when system requirements permit."} {"_id": "559057220f461e2e94148138cddfdc356f2a8893", "title": "Food Recognition using Fusion of Classifiers based on CNNs", "text": "With the arrival of convolutional neural networks, the complex problem of food recognition has experienced an important improvement in recent years. The best results have been obtained using methods based on very deep convolutional neural networks, which show that the deeper the model, the better the classification accuracy obtained. However, very deep neural networks may suffer from the overfitting problem. In this paper, we propose a combination of multiple classifiers based on different convolutional models that complement each other and thus achieve an improvement in performance. The evaluation of our approach is done on two public datasets: Food-101 as a dataset with a wide variety of fine-grained dishes, and Food-11 as a dataset of high-level food categories, where our approach outperforms the independent CNN models."} {"_id": "0018a0b35ede8900badee90f4c44385205baf2e5", "title": "Implementation of PID controller and pre-filter to control non-linear ball and plate system", "text": "In this paper, the authors implement a PID controller with a pre-filter on a ball and plate system. The ball and plate system controls the ball's position, measured in pixel values, using a servo motor as its actuator and a webcam as its position sensor. A PID controller with a pre-filter has a better response than a conventional PID controller. Even though the response of the PID with pre-filter is slower than that of the conventional PID, the pre-filter gives the system a response with less overshoot."} {"_id": "cb4c975c5b09e2c43d29b5c395b74d25e47fe06b", "title": "Comparing a Scalable SDN Simulation Framework Built on ns-3 and DCE with Existing SDN Simulators and Emulators", "text": "As software-defined networking (SDN) grows beyond its original aim to simply separate the control and data network planes, it becomes useful both financially and analytically to provide adequate mechanisms for simulating this new paradigm. A number of simulation/emulation tools for modeling SDN, such as Mininet, are already available. A novel framework for SDN simulation is provided in this work using the network simulator ns-3. The ns-3 module Direct Code Execution (DCE) allows real-world network applications to be run within a simulated network topology. This work employs DCE for running the SDN controller library POX and its applications on nodes in a simulated network topology. In this way, real-world controller applications can be completely portable between simulation and actual deployment.
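The ball-and-plate record above pairs a PID controller with a reference pre-filter to trade a slower response for less overshoot; a minimal discrete-time sketch on a first-order stand-in plant, with all gains and the filter constant illustrative:

```python
# Sketch: a discrete PID loop with a first-order reference pre-filter on a
# toy first-order plant, illustrating the setup in the ball-and-plate record
# above. Gains, filter constant and plant model are all illustrative.
def simulate(kp=2.0, ki=1.0, kd=0.1, alpha=0.2, dt=0.01, steps=600):
    y, integ, prev_err, r_f = 0.0, 0.0, 0.0, 0.0
    setpoint = 1.0
    for _ in range(steps):
        r_f += alpha * (setpoint - r_f)       # pre-filter softens the step input
        err = r_f - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-y + u)                    # first-order plant: y' = -y + u
    return y

print(round(simulate(), 3))   # settles near the setpoint with reduced overshoot
```

Filtering the reference rather than the feedback keeps the loop's disturbance rejection intact while the commanded step itself arrives smoothed, which is where the overshoot reduction comes from.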
This work also describes a user-defined ns-3 application mimicking an SDN switch supporting OpenFlow 1.0 that can interact with real-world controllers. To evaluate its performance, this ns-3 DCE SDN framework is compared against Mininet as well as some other readily available SDN simulation/emulation tools. Metrics such as real-time performance, memory usage, and reliability in terms of packet loss are analyzed across the multiple simulation/emulation tools to gauge how they compare."} {"_id": "61736617ae1eb5483a3b8b182815ab6c59bf4939", "title": "Pose optimization in edge distance field for textureless 3D object tracking", "text": "This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods do, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in an unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first-order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds."} {"_id": "238c322d010fbc32bf110377045235c589629cba", "title": "Designing games with a purpose", "text": "Data generated as a side effect of game play also solves computational problems and trains AI algorithms."} {"_id": "8a241ac5ee8d34479e8849a72096713cc6bbfed5", "title": "A web-based IDE for IDP", "text": "IDP is a knowledge base system based on first-order logic. It is finding its way to a larger public but is still facing practical challenges. Adoption of new languages requires a newcomer-friendly way for users to interact with it. Both an online presence to try to convince potential users to download the system and offline availability to develop larger applications are essential. We developed an IDE which can serve both purposes through the use of web technology. It enables us to provide the user with a modern IDE with relatively little effort."} {"_id": "e85a71c8cae795a1b2052a697d5e8182cc8c0655", "title": "The Stanford CoreNLP Natural Language Processing Toolkit", "text": "We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage."} {"_id": "17f1c422b0840adb53e1e92db397a80e156b9fa7", "title": "Marijuana and Coronary Heart Disease", "text": "Marijuana is the most commonly abused drug in the United States. It mainly acts on cannabinoid receptors. There are two types of cannabinoid receptors in humans: cannabinoid receptor type 1 (CB1) and cannabinoid receptor type 2 (CB2).
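The pose-tracking record above scores a projected contour against an edge distance field; a small sketch of that core quantity follows, using scipy's Euclidean distance transform on a synthetic edge map (the optimization over pose parameters is omitted):

```python
# Sketch: scoring candidate contour points in an edge distance field, the
# quantity the tracking record above minimizes over pose parameters. The
# edge map and contours are synthetic.
import numpy as np
from scipy.ndimage import distance_transform_edt

edges = np.zeros((64, 64), dtype=bool)
edges[20, 10:50] = True                      # a synthetic horizontal edge

# Distance of every pixel to the nearest edge pixel (edges become zeros).
dist_field = distance_transform_edt(~edges)

def contour_cost(points):
    """Mean distance-field value along projected contour points (row, col)."""
    return dist_field[points[:, 0], points[:, 1]].mean()

good = np.stack([np.full(40, 20), np.arange(10, 50)], axis=1)  # lies on the edge
bad = good + np.array([5, 0])                                  # shifted 5 px away
print(contour_cost(good), contour_cost(bad))  # 0.0 vs ~5.0
```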
CB1 receptor activation is pro-atherogenic, and CB2 receptor activation is largely anti-atherogenic. Marijuana use is also implicated as a trigger for myocardial infarction (MI) in patients with stable coronary artery disease (CAD)."} {"_id": "d9b094767cbed2edc8c7bd1170669a22068734c0", "title": "Health of Things Algorithms for Malignancy Level Classification of Lung Nodules", "text": "Lung cancer is one of the leading causes of death worldwide. Several computer-aided diagnosis systems have been developed to help reduce lung cancer mortality rates. This paper presents a novel structural co-occurrence matrix (SCM)-based approach to classify nodules into malignant or benign nodules and also into their malignancy levels. The SCM technique was applied to extract features from images of nodules and classify them into malignant or benign nodules and also into their malignancy levels. The computed tomography exams from the lung image database consortium and image database resource initiative datasets provide information concerning nodule positions and their malignancy levels. The SCM was applied on both grayscale and Hounsfield unit images with four filters, namely mean, Laplace, Gaussian, and Sobel, creating eight different configurations. The classification stage used three well-known classifiers: multilayer perceptron, support vector machine, and k-nearest neighbors algorithm, and applied them to two tasks: (i) to classify the nodule images into malignant or benign nodules and (ii) to classify the lung nodules into malignancy levels (1 to 5). The results of this approach were compared to four other feature extraction methods: gray-level co-occurrence matrix, local binary patterns, central moments, and statistical moments. Moreover, the results here were also compared to the results reported in the literature. Our approach outperformed the other methods in both tasks; it achieved 96.7% for both accuracy and F-Score metrics in the first task, and 74.5% accuracy and 53.2% F-Score in the second. These experimental results reveal that the SCM successfully extracted features of the nodules from the images and may therefore be considered a promising tool to support medical specialists in making a more precise diagnosis concerning the malignancy of lung nodules."} {"_id": "5f93423c41914639a147ad8dd589967daf2ea877", "title": "Maximum Margin Clustering", "text": "We propose a new method for clustering based on finding maximum margin hyperplanes through data. By reformulating the problem in terms of the implied equivalence relation matrix, we can pose the problem as a convex integer program. Although this still yields a difficult computational problem, the hard-clustering constraints can be relaxed to a soft-clustering formulation which can be feasibly solved with a semidefinite program. Since our clustering technique only depends on the data through the kernel matrix, we can easily achieve nonlinear clusterings in the same manner as spectral clustering. Experimental results show that our maximum margin clustering technique often obtains more accurate results than conventional clustering methods. The real benefit of our approach, however, is that it leads naturally to a semi-supervised training method for support vector machines.
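The lung-nodule record above builds on co-occurrence statistics; its SCM adds a structural filtering step beyond the plain gray-level co-occurrence baseline it is compared against. A compact numpy sketch of plain co-occurrence features follows, with the quantization level and pixel offset as assumptions:

```python
# Sketch: a plain gray-level co-occurrence matrix plus simple statistics,
# the family of texture features the lung-nodule record above starts from
# (its SCM variant adds a structural step). Images are synthetic.
import numpy as np

def cooccurrence(img, levels=8, offset=(0, 1)):
    """Normalized co-occurrence matrix of quantized gray levels for one offset."""
    q = (img * (levels - 1)).astype(int)      # quantize [0,1] image to bins
    dr, dc = offset
    P = np.zeros((levels, levels))
    a = q[max(0, -dr):q.shape[0] - max(0, dr), max(0, -dc):q.shape[1] - max(0, dc)]
    b = q[max(0, dr):, max(0, dc):][:a.shape[0], :a.shape[1]]
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def features(img):
    P = cooccurrence(img)
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    entropy = -(P[P > 0] * np.log(P[P > 0])).sum()
    return np.array([contrast, energy, entropy])

smooth = features(np.tile(np.linspace(0, 1, 32), (32, 1)))
noisy = features(np.random.default_rng(0).random((32, 32)))
print(smooth.round(3), noisy.round(3))   # noisy texture shows higher contrast
```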
By maximizing the margin simultaneously on labeled and unlabeled training data, we achieve state-of-the-art performance by using a single, integrated learning principle."} {"_id": "cc13fde0a91f4d618e6af66b49690702906316ae", "title": "A MapReduce Implementation of C 4 . 5 Decision Tree Algorithm", "text": "Recent years have witnessed the development of cloud computing and the big data era, which bring challenges to traditional decision tree algorithms. First, as the size of the dataset becomes extremely big, the process of building a decision tree can be quite time-consuming. Second, because the data cannot fit in memory any more, some computation must be moved to the external storage, which therefore increases the I/O cost. To this end, we propose to implement a typical decision tree algorithm, C4.5, using the MapReduce programming model. Specifically, we transform the traditional algorithm into a series of Map and Reduce procedures. In addition, we design some data structures to minimize the communication cost. We also conduct extensive experiments on a massive dataset. The results indicate that our algorithm exhibits both time efficiency and scalability."} {"_id": "5faca17c1f8293e5d171719a6b8a289592c3d64d", "title": "Hierarchical Deep Multiagent Reinforcement Learning", "text": "Although deep reinforcement learning has recently achieved great successes, a number of challenges still remain in multiagent environments. Multiagent reinforcement learning (MARL) is commonly considered to suffer from the problem of non-stationary environments and exponentially increasing policy space. It would be even more challenging to learn effective policies in circumstances where the rewards are sparse and delayed over long trajectories. In this paper, we study Hierarchical Deep Multiagent Reinforcement Learning (hierarchical deep MARL) in cooperative multiagent problems with sparse and delayed rewards, where efficient multiagent learning methods are desperately needed. We decompose the original MARL problem into hierarchies and investigate how effective policies can be learned hierarchically in synchronous/asynchronous hierarchical MARL frameworks. Several hierarchical deep MARL architectures, i.e., Ind-hDQN, hCom and hQmix, are introduced for different learning paradigms. Moreover, to alleviate the issues of sparse experiences in high-level learning and non-stationarity in multiagent settings, we propose a new experience replay mechanism, named Augmented Concurrent Experience Replay (ACER). We empirically demonstrate the effects and efficiency of our approaches in several classic Multiagent Trash Collection tasks, as well as in an extremely challenging team sports game, i.e., Fever Basketball Defense."} {"_id": "fc8fdffa5f97829f2373a5138b6d70a8baf08bdf", "title": "The McGill pain questionnaire: from description to measurement.", "text": "On the language of pain. By Ronald Melzack, Warren S. Torgerson. Anesthesiology 1971; 34:50-9. Reprinted with permission. The purpose of this study was to develop new approaches to the problem of describing and measuring pain in human subjects. Words used to describe pain were brought together and categorized, and an attempt was made to scale them on a common intensity dimension.
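The MapReduce C4.5 record above decomposes tree induction into map and reduce passes; a toy, single-process sketch of one such pass follows, emitting (attribute, value, label) counts and computing information gain from the aggregates (records and attribute names are hypothetical):

```python
# Sketch: the map/reduce decomposition behind a distributed C4.5 split pass,
# as in the record above — mappers emit (attribute, value, label) counts,
# the reducer aggregates, and information gain is computed from the totals.
from collections import defaultdict
from math import log2

records = [("sunny", "yes"), ("sunny", "no"), ("rain", "yes"), ("rain", "yes")]

def mapper(record):
    value, label = record
    yield ("outlook", value, label), 1       # key: (attribute, value, label)

def reducer(pairs):
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n
    return counts

counts = reducer(kv for r in records for kv in mapper(r))

def entropy(dist):
    total = sum(dist)
    return -sum(c / total * log2(c / total) for c in dist if c)

# Information gain of splitting on "outlook", from the aggregated counts only.
labels = defaultdict(int)
by_value = defaultdict(lambda: defaultdict(int))
for (_, value, label), n in counts.items():
    labels[label] += n
    by_value[value][label] += n
total = sum(labels.values())
split = sum(sum(d.values()) / total * entropy(list(d.values()))
            for d in by_value.values())
print(entropy(list(labels.values())) - split)   # gain for "outlook"
```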
The data show that: 1) there are many words in the English language to describe the varieties of pain experience; 2) there is a high level of agreement that the words fall into classes and subclasses that represent particular dimensions or properties of pain experience; 3) substantial portions of the words have approximately the same relative positions on a common intensity scale for people who have widely divergent backgrounds. The word lists provide a basis for a questionnaire to study the effects of anesthetic and analgesic agents on the experience of pain."} {"_id": "2d0f1f8bf2e3b8c58be79e01430c67471ceee05e", "title": "Low-Power Near-Threshold 10T SRAM Bit Cells With Enhanced Data-Independent Read Port Leakage for Array Augmentation in 32-nm CMOS", "text": "The conventional six-transistor static random access memory (SRAM) cell allows high density and fast differential sensing but suffers from half-select and read-disturb issues. Although the conventional eight-transistor SRAM cell solves the read-disturb issue, it still suffers from low array efficiency due to deterioration of read bit-line (RBL) swing and $\\text{I}_{\\mathbf {on}}/\\text{I}_{\\mathbf {off}}$ ratio with increase in the number of cells per column. Previous approaches to solve these issues have been afflicted by low performance, data-dependent leakage, large area, and high energy per access. Therefore, in this paper, we present three iterations of SRAM bit cells with nMOS-only based read ports aimed to greatly reduce data-dependent read port leakage to enable 1k cells/RBL, improve read performance, and reduce area and power over conventional and 10T cell-based works. We compare the proposed work with other works by recording metrics from the simulation of a 128-kb SRAM constructed with divided-wordline-decoding architecture and a 32-bit word size. Apart from large improvements observed over conventional cells, up to 100-mV improvement in read-access performance, up to 19.8% saving in energy per access, and up to 19.5% saving in the area are also observed over other 10T cells, thereby enlarging the design and application gamut for memory designers in low-power sensors and battery-enabled devices."} {"_id": "36d6ca6d47fa3a74d7698f0aa39605c29c594a3b", "title": "Trainable Sentence Planning for Complex Information Presentations in Spoken Dialog Systems", "text": "A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. 
We show that trainable sentence planning can produce output comparable to that of MATCH\u2019s template-based generator even for quite complex information presentations."} {"_id": "a55d01c2a6d33e8dff01d4eab89b941465d288ba", "title": "Complications associated with injectable soft-tissue fillers: a 5-year retrospective review.", "text": "IMPORTANCE\nEven when administered by experienced hands, injectable soft-tissue fillers can cause various unintended reactions, ranging from minor and self-limited responses to severe complications requiring prompt treatment and close follow-up.\n\n\nOBJECTIVES\nTo review the complications associated with injectable soft-tissue filler treatments administered in the Williams Rejuva Center during a 5-year period and to discuss their management.\n\n\nDESIGN AND SETTING\nRetrospective medical record review in a private practice setting.\n\n\nPARTICIPANTS\nPatients receiving injectable soft-tissue fillers and having a treatment-related complication.\n\n\nINTERVENTIONS\nInjectable soft-tissue filler treatments.\n\n\nMAIN OUTCOME MEASURES\nA retrospective medical record review was conducted of patients undergoing treatment with injectable soft-tissue fillers between January 1, 2007, and December 31, 2011, and identified as having a treatment-related complication.\n\n\nRESULTS\nA total of 2089 injectable soft-tissue filler treatments were performed during the study period, including 1047 with hyaluronic acid, 811 with poly-L-lactic acid, and 231 with calcium hydroxylapatite. Fourteen complications were identified. The most common complication was nodule or granuloma formation. Treatment with calcium hydroxylapatite had the highest complication rate.\n\n\nCONCLUSIONS AND RELEVANCE\nComplications are rare following treatment with injectable soft-tissue fillers. Nevertheless, it is important to be aware of the spectrum of potential adverse sequelae and to be comfortable with their proper management.\n\n\nLEVEL OF EVIDENCE\n4."} {"_id": "75eb06c5d342975d6ec7b8c652653cf38bf1d6b6", "title": "On Data Clustering Analysis: Scalability, Constraints, and Validation", "text": "Clustering is the problem of grouping data based on similarity. While this problem has attracted the attention of many researchers for many years, we are witnessing a resurgence of interest in new clustering techniques. In this paper we discuss some very recent clustering approaches and recount our experience with some of these algorithms. We also present the problem of clustering in the presence of constraints and discuss the issue of clustering validation."} {"_id": "4e473a7bda10d5d11bb767de4e5da87e99d2ef69", "title": "Better managed than memorized? Studying the Impact of Managers on Password Strength and Reuse", "text": "Despite their well-known security problems, passwords are still the incumbent authentication method for virtually all online services. To remedy the situation, users are very often referred to password managers as a solution to the password reuse and weakness problems. However, to date the actual impact of password managers on password strength and reuse has not been studied systematically. We provide the first large-scale study of the password managers\u2019 influence on users\u2019 real-life passwords. By combining qualitative data on users\u2019 password creation and management strategies, collected from 476 participants of an online survey, with quantitative data (incl. 
password metrics and entry methods) collected in situ with a browser plugin from 170 users, we were able to gain a more complete picture of the factors that influence our participants\u2019 password strength and reuse. Our approach allows us to quantify for the first time that password managers indeed influence password security; however, whether this influence is beneficial or aggravates existing problems depends on the users\u2019 strategies and how well the manager supports the users\u2019 password management right from the time of password creation. Given our results, we think research should further investigate how managers can better support users\u2019 password strategies in order to improve password security and stop aggravating existing problems."} {"_id": "c39ecde1f38e9aee9ad846f7633a33403274d019", "title": "Advanced Beamformers for Cochlear Implant Users: Acute Measurement of Speech Perception in Challenging Listening Conditions", "text": "OBJECTIVE\nTo investigate the performance of monaural and binaural beamforming technology with an additional noise reduction algorithm, in cochlear implant recipients.\n\n\nMETHOD\nThis experimental study was conducted as a single subject repeated measures design within a large German cochlear implant centre. Twelve experienced users of an Advanced Bionics HiRes90K or CII implant with a Harmony speech processor were enrolled. The cochlear implant processor of each subject was connected to one of two bilaterally placed state-of-the-art hearing aids (Phonak Ambra) providing three alternative directional processing options: an omnidirectional setting, an adaptive monaural beamformer, and a binaural beamformer. A further noise reduction algorithm (ClearVoice) was applied to the signal on the cochlear implant processor itself. The speech signal was presented from 0\u00b0 and speech-shaped noise presented from loudspeakers placed at \u00b170\u00b0, \u00b1135\u00b0 and 180\u00b0. The Oldenburg sentence test was used to determine the signal-to-noise ratio at which subjects scored 50% correct.\n\n\nRESULTS\nBoth the adaptive and binaural beamformer were significantly better than the omnidirectional condition (5.3 dB\u00b11.2 dB and 7.1 dB\u00b11.6 dB (p<0.001) respectively). The best score was achieved with the binaural beamformer in combination with the ClearVoice noise reduction algorithm, with a significant improvement in SRT of 7.9 dB\u00b12.4 dB (p<0.001) over the omnidirectional alone condition.\n\n\nCONCLUSIONS\nThe study showed that the binaural beamformer implemented in the Phonak Ambra hearing aid could be used in conjunction with a Harmony speech processor to produce substantial average improvements in SRT of 7.1 dB. The monaural, adaptive beamformer provided an averaged SRT improvement of 5.3 dB."} {"_id": "a106ecb03c1c654d5954fb802929ea2a2d437525", "title": "Robust ground plane tracking in cluttered environments from egocentric stereo vision", "text": "Estimating the ground plane is often one of the first steps in geometric reasoning processes as it offers easily accessible context knowledge. Especially unconstrained platforms that capture video from egocentric viewpoints can benefit from such knowledge in various ways. A key requirement here is keeping orientation, which can largely be achieved by keeping track of the ground. We present an approach to keep track of the ground plane in cluttered inner-urban environments using stereo vision in real-time. 
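Before the fusion step described next, a useful reference point is the classic least-squares plane fit to triangulated 3D points, which the abstract's method is compared against. The following Python sketch is hypothetical (not the authors' code) and assumes the stereo points are already reconstructed:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit to an Nx3 array of 3D points.

    Returns (n, d) with unit normal n and offset d such that
    n . p + d = 0 for points p on the plane.
    """
    centroid = points.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the best-fit plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d
```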
We fuse a planar model fit in low-resolution disparity data with the direction of the vertical vanishing point. Our experiments show how this effectively decreases the error of plane attitude estimation compared to classic least-squares fitting and allows tracking the plane with camera configurations in which the ground is not visible. We evaluate the approach using ground-truth from an inertial measurement unit and demonstrate long-term stability on a dataset of challenging inner city scenes."} {"_id": "56c700693b63e3da3b985777da6d9256e2e0dc21", "title": "Global refinement of random forest", "text": "Random forest is well known as one of the best learning methods. In spite of its great success, it also has certain drawbacks: the heuristic learning rule does not effectively minimize the global training loss; the model size is usually too large for many real applications. To address the issues, we propose two techniques, global refinement and global pruning, to improve a pre-trained random forest. The proposed global refinement jointly relearns the leaf nodes of all trees under a global objective function so that the complementary information between multiple trees is well exploited. In this way, the fitting power of the forest is significantly enhanced. The global pruning is developed to reduce the model size as well as the over-fitting risk. The refined model has better performance and smaller storage cost, as verified in extensive experiments."} {"_id": "d73a71fa24b582accb934a9c2308567376ff396d", "title": "3D geo-database research: Retrospective and future directions", "text": "3D geo-database research is a promising field to support challenging applications such as 3D urban planning, environmental monitoring, infrastructure management, and early warning or disaster management and response. In these fields, interdisciplinary research in GIScience and related fields is needed to support the modelling, analysis, management, and integration of large geo-referenced data sets, which describe human activities and geophysical phenomena. Geo-databases may serve as platforms to integrate 2D maps, 3D geo-scientific models, and other geo-referenced data. However, current geo-databases do not provide sufficient 3D data modelling and data handling techniques. New 3D geo-databases are needed to handle surface and volume models. This article first presents a 25-year retrospective of geo-database research. Data modelling, standards, and indexing of geo-data are discussed in detail. New directions for the development of 3D geo-databases to open new fields for interdisciplinary research are addressed. Two scenarios in the fields of early warning and emergency response demonstrate the combined management of human and geophysical phenomena. The article concludes with a critical outlook on open research problems."} {"_id": "80e98ea8e97d0d5444399319f5fe46f7c96231b7", "title": "How Online Basic Psychological Need Satisfaction Influences Self-Disclosure Online among Chinese Adolescents: Moderated Mediation Effect of Exhibitionism and Narcissism", "text": "Under the basic framework of self-determination theory, the present study examined a moderated mediation model in which exhibitionism mediated the relationship between online basic psychological need satisfaction and self-disclosure on the mobile Internet, and this mediation effect was moderated by narcissism. A total of 296 Chinese middle school students participated in this research. 
The results revealed that exhibitionism fully mediated the association between online competence need satisfaction and self-disclosure on the mobile net, and partly mediated the association between online relatedness need satisfaction and self-disclosure on the mobile net. The mediating path from online basic psychological need satisfaction (competence and relatedness) to exhibitionism was moderated by narcissism. Compared to the low level of narcissism, online competence need satisfaction had a stronger predictive power on exhibitionism under the high level of narcissism condition. In contrast, online relatedness need satisfaction had a weaker predictive power on exhibitionism."} {"_id": "ce93e3c1a46ce2a5d7c93121f9f1b992cf868de3", "title": "Small-footprint Keyword Spotting Using Deep Neural Network and Connectionist Temporal Classifier", "text": "To address the lack of keyword-specific data, we propose a Keyword Spotting (KWS) system using a Deep Neural Network (DNN) and a Connectionist Temporal Classifier (CTC) on power-constrained small-footprint mobile devices, taking full advantage of the large amount of general corpora available from continuous speech recognition. The DNN directly predicts the posteriors of the phoneme units of any personally customized key-phrase, and the CTC produces a confidence score for the given phoneme sequence as the decision-making mechanism. The CTC-KWS has competitive performance in comparison with purely DNN-based keyword-specific KWS, without increasing computational complexity."} {"_id": "488a4ec702dca3d0c60eedbbd3971136b8bacd00", "title": "Attacking state space explosion problem in model checking embedded TV software", "text": "The feature set of current TVs is increasing rapidly, resulting in more complicated embedded TV software that is also harder to test and verify. Using model checking in verification of embedded software is a widely accepted practice even though it is seriously affected by the exponential increase in the number of states being produced during the verification process. Using fully non-deterministic user agent models is one of the reasons that can result in a drastic increase in the number of states being produced. In order to shrink the state space being observed during the model checking process of TV software, a method is proposed that relies on using previous test logs to generate a partly non-deterministic user agent model. Results show that by using partly non-deterministic user agents, the verification time of certain safety and liveness properties can be significantly decreased."} {"_id": "00d7c79ee185f503ed4f2f3415b02104999d5574", "title": "Bridging the Gap between Business Strategy and Software Development", "text": "In software-intensive organizations, an organizational management system will not guarantee organizational success unless the business strategy can be translated into a set of operational software goals. The Goal Question Metric (GQM) approach has proven itself useful in a variety of industrial settings to support quantitative software project management. However, it does not address linking software measurement goals to higher-level goals of the organization in which the software is being developed. This linkage is important, as it helps to justify software measurement efforts and allows measurement data to contribute to higher-level decisions. In this paper, we propose a GQM+Strategies\u00ae measurement approach that builds on the GQM approach to plan and implement software measurement. 
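To make the goal-question-metric linkage concrete before the mechanisms are described next, here is a toy, hypothetical sketch of how such a goal hierarchy could be represented as a data structure. The goal statements, questions, and metric names are invented for illustration and are not part of the GQM+Strategies specification:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One node in a GQM-style goal graph: a goal, the questions that
    refine it, the metrics that answer them, and lower-level subgoals."""
    statement: str
    questions: list = field(default_factory=list)
    metrics: list = field(default_factory=list)
    subgoals: list = field(default_factory=list)

# Illustrative linkage from a business-level goal down to software metrics.
business = Goal("Increase customer retention")
software = Goal("Reduce post-release defects by 20%",
                questions=["How many defects escape to the field?"],
                metrics=["post-release defect density"])
business.subgoals.append(software)
```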
GQM+Strategies provides mechanisms for explicitly linking software measurement goals to higher-level goals for the software organization, and further to goals and strategies at the level of the entire business. The application of the proposed method is illustrated in the context of an example measurement initiative."} {"_id": "776669ccb66b317f33dbb0022d2b1cb94fe06559", "title": "Increased social fear and decreased fear of objects in monkeys with neonatal amygdala lesions", "text": "The amygdala has been implicated in the mediation of emotional and species-specific social behavior (Kling et al., 1970; Kling and Brothers, 1992; Kluver and Bucy, 1939; Rosvold et al., 1954). Humans with bilateral amygdala damage are impaired in judging negative emotion in facial expressions and making accurate judgements of trustworthiness (Adolphs et al., 1998, 1994). Amygdala dysfunction has also been implicated in human disorders ranging from social anxiety (Birbaumer et al., 1998) to depression (Drevets, 2000) to autism (Bachevalier, 1994; Baron-Cohen et al., 2000; Bauman and Kemper, 1993). We produced selective amygdala lesions in 2-week-old macaque monkeys who were returned to their mothers for rearing. At 6-8 months of age, the lesioned animals demonstrated less fear of novel objects such as rubber snakes than age-matched controls. However, they displayed substantially more fear behavior than controls during dyadic social interactions. These results suggest that neonatal amygdala lesions dissociate a system that mediates social fear from one that mediates fear of inanimate objects. Furthermore, much of the age-appropriate repertoire of social behavior was present in amygdala-lesioned infants indicating that these lesions do not produce autistic-like behavior in monkeys. Finally, amygdala lesions early in development have different effects on social behavior than lesions produced in adulthood."} {"_id": "0aa303109a3402aa5a203877847d549c4a24d933", "title": "Who Do I Look Like? Determining Parent-Offspring Resemblance via Gated Autoencoders", "text": "Recent years have seen a major push for face recognition technology due to the large expansion of image sharing on social networks. In this paper, we consider the difficult task of determining parent-offspring resemblance using deep learning to answer the question \"Who do I look like?\" Although humans can perform this job at a rate higher than chance, it is not clear how they do it [2]. However, recent studies in anthropology [24] have determined which features tend to be the most discriminative. In this study, we aim to not only create an accurate system for resemblance detection, but bridge the gap between studies in anthropology and computer vision techniques. Further, we aim to answer two key questions: 1) Do offspring resemble their parents? and 2) Do offspring resemble one parent more than the other? We propose an algorithm that fuses the features and metrics discovered via gated autoencoders with a discriminative neural network layer that learns the optimal, or what we call genetic, features to delineate parent-offspring relationships. We further analyze the correlation between our automatically detected features and those found in anthropological studies. Meanwhile, our method outperforms the state-of-the-art in kinship verification by 3-10% depending on the relationship using specific (father-son, mother-daughter, etc.) 
and generic models."} {"_id": "e2919bbdb9c8d7209ab6b3732e05eb136512709e", "title": "Performance of H.264, H.265, VP8 and VP9 Compression Standards for High Resolutions", "text": "Recently multimedia services increase, especially in video domain, leads to requirements of quality assessment. The main factors effected that quality are compression technology and transmission link. This paper presents a coding efficiency comparison of the well-known video compression standards Advanced Video Coding (H.264/AVC), High-Efficiency Video Coding (H.265/HEVC), VP8 and VP9 using objective metrics. An extensive range of bitrates from low to high bitrates was selected and encoded. The evaluation was done for four types of sequences in high resolutions depending on content. All four video sequences were encoded by using same encoding configurations for all the examined video codecs. The results showed that the quality of all compression standards rises logarithmically with increasing bitrate - in low bitrates the quality grows faster than in high bitrates. The efficiency of the new compression standards outperforms the older ones. The efficiency of VP8 compression standard outperforms the H.264/AVC compression standard. The efficiency of H.265/HEVC and VP9 compression standards is almost the same. The results also showed that VP8 codec cannot allow encoding by low bitrates. According to the results can be also said that the efficiency of the new compression standards decreases with the resolution. The results showed that the effectiveness of compression depends on the type of test sequence."} {"_id": "5dab88dbd297974ebed3f964a224d88ab54fc0ba", "title": "Detection of Ellipses by a Modified Hough Transformation", "text": "The Hough transformation can detect straight lines in an edge-enhanced picture, however its extension to recover ellipses requires too long a computing time. This correspondence proposes a modified method which utilizes two properties of an ellipse in such a way that it iteratively searches for clusters in two different parameter spaces to find almost complete ellipses, then evaluates their parameters by the least mean squares method."} {"_id": "0f892fa9574f24bc7b50fed94e0abbd84883c2dc", "title": "Is dark silicon useful? Harnessing the four horsemen of the coming dark silicon apocalypse", "text": "Due to the breakdown of Dennardian scaling, the percentage of a silicon chip that can switch at full frequency is dropping exponentially with each process generation. This utilization wall forces designers to ensure that, at any point in time, large fractions of their chips are effectively dark or dim silicon, i.e., either idle or significantly underclocked.\n As exponentially larger fractions of a chip's transistors become dark, silicon area becomes an exponentially cheaper resource relative to power and energy consumption. This shift is driving a new class of architectural techniques that \"spend\" area to \"buy\" energy efficiency. All of these techniques seek to introduce new forms of heterogeneity into the computational stack. We envision that ultimately we will see widespread use of specialized architectures that leverage these techniques in order to attain orders-of-magnitude improvements in energy efficiency.\n However, many of these approaches also suffer from massive increases in complexity. As a result, we will need to look towards developing pervasively specialized architectures that insulate the hardware designer and the programmer from the underlying complexity of such systems. 
In this paper, I discuss four key approaches--the four horsemen--that have emerged as top contenders for thriving in the dark silicon age. Each class carries with its virtues deep-seated restrictions that require a careful understanding of the underlying tradeoffs and benefits."} {"_id": "265d621e32757aec8d0b6456383a64edc9304be3", "title": "Distributed Multi-Robot Localization", "text": "This paper presents a new approach to the cooperative localization problem, namely distributed multi-robot localization. A group of M robots is viewed as a single system composed of robots that carry, in general, different sensors and have different positioning capabilities. A single Kalman filter is formulated to estimate the position and orientation of all the members of the group. This centralized schema is capable of fusing information provided by the sensors distributed on the individual robots while accommodating independencies and interdependencies among the collected data. In order to allow for distributed processing, the equations of the centralized Kalman filter are treated so that this filter can be decomposed into M modified Kalman filters each running on a separate robot. The distributed localization algorithm is applied to a group of 3 robots and the improvement in localization accuracy is presented."} {"_id": "29108331ab9bfc1b490e90309014cff218db25cf", "title": "Human model evaluation in interactive supervised learning", "text": "Model evaluation plays a special role in interactive machine learning (IML) systems in which users rely on their assessment of a model's performance in order to determine how to improve it. A better understanding of what model criteria are important to users can therefore inform the design of user interfaces for model evaluation as well as the choice and design of learning algorithms. We present work studying the evaluation practices of end users interactively building supervised learning systems for real-world gesture analysis problems. We examine users' model evaluation criteria, which span conventionally relevant criteria such as accuracy and cost, as well as novel criteria such as unexpectedness. We observed that users employed evaluation techniques---including cross-validation and direct, real-time evaluation---not only to make relevant judgments of algorithms' performance and interactively improve the trained models, but also to learn to provide more effective training data. Furthermore, we observed that evaluation taught users about what types of models were easy or possible to build, and users sometimes used this information to modify the learning problem definition or their plans for using the trained models in practice. We discuss the implications of these findings with regard to the role of generalization accuracy in IML, the design of new algorithms and interfaces, and the scope of potential benefits of incorporating human interaction in the design of supervised learning systems."} {"_id": "3f5b98643e68d7ef4c9e1ae615f8d2f5a57c67be", "title": "Teachable robots: Understanding human teaching behavior to build more effective robot learners", "text": "While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. 
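For readers unfamiliar with that setup, the following hypothetical Python sketch shows the usual way such works inject the trainer's signal: the human-given reward simply replaces the environment reward in a standard tabular Q-learning update. It illustrates the setting being examined, not the authors' experimental platform, and it assumes an `env` object exposing `reset()`, `step(action)`, and `actions(state)`:

```python
import random
from collections import defaultdict

def q_learning_with_human_reward(env, get_human_reward, episodes=100,
                                 alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning where the scalar reward comes from a human
    trainer callback rather than from the environment."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:
                action = random.choice(actions)          # explore
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, done = env.step(action)
            # Human feedback is treated as a drop-in replacement for the
            # MDP reward -- exactly the assumption the paper tests.
            r = get_human_reward(state, action)
            best_next = max((Q[(next_state, a)] for a in env.actions(next_state)),
                            default=0.0)
            Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```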
We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task. We report three main observations on how people administer feedback when teaching a robot a task through Reinforcement Learning: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback \u2014 possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. Given this, we made specific modifications to the simulated RL robot, and analyzed and evaluated its learning behavior in four additional experiments with human trainers. We report significant improvements on several learning measures. This work demonstrates the importance of understanding the human-teacher/robot-learner partnership in order to design algorithms that support how people want to teach while simultaneously improving the robot\u2019s learning behavior."} {"_id": "2093bb20cbaf684d5fde5ec46ebf5a9423393581", "title": "Natural methods for robot task learning: instructive demonstrations, generalization and practice", "text": "Among humans, teaching various tasks is a complex process which relies on multiple means for interaction and learning, both on the part of the teacher and of the learner. Used together, these modalities lead to effective teaching and learning approaches, respectively. In the robotics domain, task teaching has been mostly addressed by using only one or very few of these interactions. In this paper we present an approach for teaching robots that relies on the key features and the general approach people use when teaching each other: first give a demonstration, then allow the learner to refine the acquired capabilities by practicing under the teacher's supervision, involving a small number of trials. Depending on the quality of the learned task, the teacher may either demonstrate it again or provide specific feedback during the learner's practice trial for further refinement. Also, as people do during demonstrations, the teacher can provide simple instructions and informative cues, increasing the performance of learning. Thus, instructive demonstrations, generalization over multiple demonstrations and practice trials are essential features for a successful human-robot teaching approach. We implemented a system that enables all these capabilities and validated these concepts with a Pioneer 2DX mobile robot learning tasks from multiple demonstrations and teacher feedback."} {"_id": "25ae96b48f21303e598c2fc4b257aa6eb2a6bcb3", "title": "XWand: UI for intelligent spaces", "text": "The XWand is a novel wireless sensor package that enables styles of natural interaction with intelligent environments. For example, a user may point the wand at a device and control it using simple gestures. The XWand system leverages the intelligence of the environment to best determine the user's intention. 
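One simple way to picture that kind of intention inference is a Bayesian fusion of pointing geometry with per-device priors (for example, from a speech keyword or usage history). The sketch below is purely illustrative: the device names, directions, and the von Mises-style concentration parameter are invented, and this is not the XWand's actual algorithm.

```python
import math

def infer_target(pointing_dir, device_dirs, priors, kappa=8.0):
    """Score each device by how well it lines up with the wand's
    pointing ray (unit vectors assumed), weighted by a prior."""
    scores = {}
    for name, direction in device_dirs.items():
        cos = sum(p * d for p, d in zip(pointing_dir, direction))
        scores[name] = priors.get(name, 1.0) * math.exp(kappa * cos)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Hypothetical usage: unit vectors toward a lamp and a TV.
posterior = infer_target((1.0, 0.0, 0.0),
                         {"lamp": (0.99, 0.1, 0.0), "tv": (0.0, 1.0, 0.0)},
                         priors={"lamp": 1.0, "tv": 1.0})
```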
We detail the hardware device, signal processing algorithms to recover position and orientation, gesture recognition techniques, a multimodal (wand and speech) computational architecture and a preliminary user study examining pointing performance under conditions of tracking availability and audio feedback."} {"_id": "280ccfcfec38b3c38372466fb9e34333d921715a", "title": "GloveTalkII: An Adaptive Gesture-to-Formant Interface", "text": "Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to 10 control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary and multiple languages, in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a Polhemus sensor, and a foot-pedal), a parallel formant speech synthesizer and 3 neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed, user-defined relationship between hand-position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency and stop consonants are produced with a fixed mapping from the input devices. One subject has trained for about 100 hours to speak intelligibly with Glove-TalkII. He passed through eight distinct stages while learning to speak. He speaks slowly with speech quality similar to a text-to-speech synthesizer but with far more natural-sounding pitch variations."} {"_id": "286e6e25a244d715cf697cc0a1c0c8f81ec88fbc", "title": "Collecting and analyzing qualitative data for system dynamics: methods and models", "text": "System dynamics depends heavily upon quantitative data to generate feedback models. Qualitative data and their analysis also have a central role to play at all levels of the modeling process. Although the classic literature on system dynamics strongly supports this argument, the protocols to incorporate this information during the modeling process are not detailed by the most influential authors. Data gathering techniques such as interviews and focus groups, and qualitative data analysis techniques such as grounded theory methodology and ethnographic decision models could have a strong, critical role in rigorous system dynamics efforts. This paper describes some of the main qualitative, social science techniques and explores their suitability in the different stages of the modeling process. Additionally, the authors argue that the techniques described in the paper could contribute to the understanding of the modeling process, facilitate"} {"_id": "2ebe1ffec53e63c2799cba961503f0a6abafccd3", "title": "Skip graphs", "text": "Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where elements are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer networks, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. 
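To see why the ordered-search primitive matters, the following hypothetical sketch shows the search walk of a plain skip list, which skip graphs generalize by distributing the levels across peer-to-peer nodes. It is a single-machine illustration, not the distributed data structure itself; the head is assumed to be a sentinel node with key `float("-inf")`.

```python
class Node:
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height  # next[i] is the successor at level i

def search(head, key):
    """Top-down search: drop a level whenever the next key overshoots.
    Expected O(log n) hops -- the same walk a skip graph performs,
    except that there each hop is a message to another network node."""
    node = head
    for level in reversed(range(len(head.next))):
        while node.next[level] is not None and node.next[level].key <= key:
            node = node.next[level]
    return node if node.key == key else None
```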
Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, constructing, inserting new elements into, searching a skip graph and detecting and repairing errors in the data structure introduced by node failures can be done using simple and straightforward algorithms."} {"_id": "8faaf7ddbfdf7b2b16c5c13c710c44b09b0e1067", "title": "Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera", "text": "3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated: the calibration of the distance measurements of the SR-4000 camera, which deals with evaluation of the camera warm up time period, the distance measurement error evaluation and a study of the influence on distance measurements of the camera orientation with respect to the observed object; the second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high contrast targets."} {"_id": "79b9455a9a854834af41178cd0eb43e93aa9d006", "title": "Traceable Air Baggage Handling System Based on RFID Tags in the Airport", "text": "RFID is not only a feasible, novel, and cost-effective candidate for daily object identification, but is also considered a significant tool to provide traceable visibility along different stages of the aviation supply chain. In the air baggage handling application, RFID tags are used to enhance baggage tracking, dispatching, and conveyance so as to improve management efficiency and user satisfaction. We survey related work and introduce the IATA RP1740c protocol, the standard used to recognize baggage tags. A distributed aviation baggage tracing application is designed based on RFID networks. We describe the RFID-based baggage tracking experiment in the BCIA (Beijing Capital International Airport). In this experiment the tags are sealed in the printed baggage label and the RFID readers are fixed at certain positions of interest in the BHS of Terminal 2. We measure the recognition accuracy and monitor the baggage\u2019s real-time status on screen. Through analysis of the results measured over two months, we show the advantage of adopting RFID tags in this highly noisy BHS environment. The economic benefits achieved by the extensive deployment of RFID in the baggage handling system are also outlined."} {"_id": "3c9ea700f2668fa78dd7c2773e015dad165e8f56", "title": "Learning continuous grasp stability for a humanoid robot hand based on tactile sensing", "text": "Grasp stability estimation with complex robots in environments with uncertainty is a major research challenge. Analytical measures such as force closure based grasp quality metrics are often impractical because tactile sensors are unable to measure contacts accurately enough especially in soft contact cases. 
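The learning-based alternative introduced next scores tactile readings continuously during the grasp; a minimal sketch of the kind of temporal filtering involved is given below. The exponential moving average and threshold are assumptions for illustration, not the paper's exact filter:

```python
def continuous_stability(scores, threshold=0.5, alpha=0.2):
    """Smooth a stream of per-sample stability scores (e.g. SVM decision
    values mapped to [0, 1]) and report the first time step at which the
    smoothed estimate clears the threshold -- i.e. the grasp is deemed
    stable before the grasp attempt has finished."""
    smoothed = 0.0
    for t, s in enumerate(scores):
        smoothed = (1 - alpha) * smoothed + alpha * s
        if smoothed >= threshold:
            return t  # stable this early; no need to wait until the end
    return None  # never judged stable
```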
Recently, an alternative approach of learning the stability based on examples has been proposed. Current approaches to stability learning analyze the tactile sensor readings only at the end of the grasp attempt, which makes them somewhat time-consuming, because the grasp can already be stable earlier. In this paper, we propose an approach for grasp stability learning, which estimates the stability continuously during the grasp attempt. The approach is based on temporal filtering of a support vector machine classifier output. Experimental evaluation is performed on an anthropomorphic ARMAR-IIIb hand. The results demonstrate that the continuous estimation provides equal performance to the earlier approaches while reducing the time to reach a stable grasp significantly. Moreover, the results demonstrate for the first time that the learning-based stability estimation can be used with a flexible, pneumatically actuated hand, in contrast to the rigid hands used in earlier works."} {"_id": "36f57ae7525de92d5300bd9e06978c884ebeeda8", "title": "Authorship attribution based on Life-Like Network Automata", "text": "Authorship attribution is a problem of considerable practical and technical interest. Several methods have been designed to infer the authorship of disputed documents in multiple contexts. While traditional statistical methods based solely on word counts and related measurements have provided a simple, yet effective solution in particular cases, they are prone to manipulation. Recently, texts have been successfully modeled as networks, where words are represented by nodes linked according to textual similarity measurements. Such models are useful to identify informative topological patterns for the authorship recognition task. However, there is no consensus on which measurements should be used. Thus, we propose a novel method to characterize text networks, by considering both topological and dynamical aspects of networks. Using concepts and methods from cellular automata theory, we devised a strategy to grasp informative spatio-temporal patterns from this model. Our experiments revealed that this approach outperforms structural analysis relying only on topological measurements, such as clustering coefficient, betweenness and shortest paths. The optimized results obtained here pave the way for a better characterization of textual networks."} {"_id": "8757e5b778ebb443b7a487d4fa65883dcff86027", "title": "High-Sensitivity Software-Configurable 5.8-GHz Radar Sensor Receiver Chip in 0.13-$\\mu$ m CMOS for Noncontact Vital Sign Detection", "text": "In this paper, analyses on sensitivity and link budget have been presented to guide the design of a high-sensitivity noncontact vital sign detector. Important design issues such as flicker noise, baseband bandwidth, and gain budget have been discussed with practical considerations of analog-to-digital interface and signal processing methods in noncontact vital sign detection. Based on the analyses, a direct-conversion 5.8-GHz radar sensor chip with 1-GHz bandwidth was designed and fabricated. This radar sensor chip is software configurable to set the operation point and detection range for optimal performance. It integrates all the analog functions on-chip so that the output can be directly sampled for digital signal processing. Measurement results show that the fabricated chip has a sensitivity of better than -101 dBm for ideal detection in the absence of random body movement. 
Experiments have been performed successfully in a laboratory environment to detect the vital signs of human subjects."} {"_id": "5407167bf5cce2f942261b26e14f9e44251332a1", "title": "The social impact of a systematic floor cleaner", "text": "Mint is an automatic cleaning robot that sweeps and mops hard-surface floors using dusting and mopping cloths. Thanks to the Northstar navigation technology it systematically cleans and navigates in people's homes. Since it first became commercially available in mid-2010, hundreds of thousands of Mint cleaners have come into use in homes. In this paper we investigate the product's social impact with respect to the attitude of customers towards a systematic floor cleaner and how such a robot influences their lifestyle. We first report feedback from users owning the product, and demonstrate how Mint changed their everyday life at home. We then evaluate the results of a survey launched in 2012 that addresses the technical understanding of the product and what impact it has on the social life of its users. Our findings suggest that Mint, on average, saves more than one hour of people's time per week, floors are cleaner, leading to healthier homes and lives, systematic cleaning is seen as an important feature, and modifications to the environment to support the navigation of the robot are largely accepted."} {"_id": "ee137192f06f06932f5cdf71e46cef9eca260ae2", "title": "Audit and Analysis of Impostors: An experimental approach to detect fake profile in online social network", "text": "In the present generation, the social life of every person has become associated with online social networks (OSN). These sites have made drastic changes in the way we socialize. Making friends and keeping in contact with them, as well as being updated on their activities, has become easier. But with their rapid growth, problems like fake profiles and online impersonation have also increased. The risk lies in the fact that anybody can create a profile to impersonate a real person on the OSN. The fake profile could be exploited to build an online relationship with a targeted person purely through online interactions with the friends of the victim.\n In the present work, we propose an experimental framework with which detection of fake profiles within the friend list is feasible; however, this framework is restricted to a specific online social networking site, namely Facebook. This framework extracts data from the friend list and uses it to classify profiles as real or fake using unsupervised and supervised machine learning."} {"_id": "b322827e441137ded5991d61d12c86f67effc2ae", "title": "Blind PSF estimation and methods of deconvolution optimization", "text": "We have shown that the left side null space of the autoregression (AR) matrix operator is the lexicographical presentation of the point spread function (PSF), provided the AR parameters are common to the original and blurred images. A method of inverse PSF estimation with a regularization functional defined as a function of surface area is offered. The inverse PSF was used for primary image estimation. Two methods for optimizing the original image estimate were designed, based on a maximum entropy generalization of the conditional probability density of the sought and blurred images, and on regularization. The first method uses balanced variations of convolution and deconvolution transforms to obtain an iterative schema of image optimization. The balance of the variations was defined by dynamic regularization based on the convergence condition of the iterative process. 
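As a point of reference for such iterative schemas, here is a hypothetical sketch of a classic regularized deconvolution iteration: a Landweber-style gradient update with a fixed Tikhonov penalty, done in the Fourier domain. The paper's dynamic regularization adjusts the penalty between iterations, which this simplified sketch does not; the PSF is assumed normalized to sum to one.

```python
import numpy as np

def landweber_deconvolve(blurred, psf, n_iter=50, step=1.0, lam=1e-3):
    """Iterative deconvolution in the Fourier domain:
        X <- X + step * conj(H) * (Y - H * X) - step * lam * X
    where H is (the transfer function of) convolution with the PSF and
    lam is a fixed Tikhonov regularization weight."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = Y.copy()  # start from the blurred image itself
    for _ in range(n_iter):
        residual = Y - H * X
        X = X + step * np.conj(H) * residual - step * lam * X
    return np.real(np.fft.ifft2(X))
```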
The regularization has a dynamic character because it depends on the current and previous image estimate variations. The second method implements the regularization of deconvolution optimization in curved space with a metric defined on the image estimate surface. It is based on the invariance of the target functional to fluctuations of the optimal argument value. The given iterative schemas have faster convergence in comparison with known ones, so they can be used for reconstruction of high-resolution image series in real time."} {"_id": "f53fe36c27d32b349619ad75b37f4b1bb583f0b1", "title": "The FEELING of WHAT HAPPENS: BODY AND EMOTION IN THE MAKING OF CONSCIOUSNESS", "text": "Antonio Damasio is a Portuguese-born American behavioral neurologist and neuroscientist. For a person without any knowledge of neuroscience, biology, or psychology, The Feeling of What Happens would presumably be a fascinating read. How often does one typically contemplate their own contemplation? The concepts presented by Damasio are abstract and uncommon, and undoubtedly are a new phenomenon for the average reader."} {"_id": "ea4854a88e919adfe736fd63261bbd3b79a7bfb9", "title": "MELOGRAPH: Multi-Engine WorkfLOw Graph Processing", "text": "This paper introduces MELOGRAPH, a new system that exposes in the front-end a domain specific language (DSL) for graph processing tasks and in the back-end identifies, ranks and generates source code for the top-N ranked engines. This approach lets the specialized MELOGRAPH be part of a more general multi-engine workflow optimizer. The candidate execution engines are chosen from the contemporaneous Big Data ecosystem: graph databases (e.g. Neo4j, TitanDB, OrientDB, Sparksee/DEX) and robust graph processing frameworks with Java API or packaged libraries of algorithms (e.g. Giraph, Okapi, Flink Gelly, Hama, Gremlin). As MELOGRAPH is work in progress, our current paper stresses the state of the art in this field, provides a general architecture and some early implementation insights."} {"_id": "e739cb4f6a17db37d3c4d6223025cea9ea5bca8c", "title": "Trusted Click: Overcoming Security issues of NFV in the Cloud", "text": "Network Function Virtualization has received a large amount of research and recent efforts have been made to further leverage the cloud to enhance NFV. However, since there are privacy and security issues with using cloud computing, work has been done to allow for operating on encrypted data, which introduces a large amount of overhead in both computation and data, while only providing a limited set of operations, since these encryption schemes are not fully homomorphic. We propose using trusted computing to circumvent these limitations by having hardware enforce data privacy and provide guaranteed computation. Prior work has shown that Intel's Software Guard Extensions can be used to protect the state of network functions, but there are still questions about the usability of SGX in arbitrary NFV applications and the performance of SGX in these applications. We extend prior work to show how SGX can be used in network deployments by extending the Click modular router to perform secure packet processing with SGX. 
We also present a performance evaluation of SGX on real hardware to show that processing inside of SGX has a negligible performance impact, compared to performing the same processing outside of SGX."} {"_id": "f5ebb6548f6ec1c5e16e68e1ec38677e43d1fed3", "title": "Parathyroid Hormone PTH(1-34) Formulation that Enables Uniform Coating on a Novel Transdermal Microprojection Delivery System", "text": "Assess formulation parameters to enable >24-h continuous accurate and uniform coating of PTH(1-34) on a novel transdermal microprojection array delivery system. Surface activity and rheology of the liquid formulation were determined by contact angle measurement and cone-plate viscometry. The formulation\u2019s delivery performance was assessed in vivo using the hairless guinea pig model. Peptide gelation was investigated by rheological and viscoelastic behavior changes. Accurate and uniform coating was achieved by formulating the liquid formulation to a preferred contact angle range of 30\u201360\u00b0 with a surfactant and by establishing a Newtonian fluid (defined as a fluid maintaining a constant viscosity with shear rate and time) with a viscosity of \u226520\u00a0cps via adjusting the peptide concentration and using an appropriate acidic counterion. A non-volatile acidic counterion was found critical to compensate for the loss of the volatile acetate counterion to maintain the peptide formulation\u2019s solubility upon rehydration in the skin. Finally, the 15.5% w/w PTH(1-34) concentration was found to be the most physically stable formulation (delayed gelation) in the roll-coating reservoir. With a properly designed coating reservoir for shear force reduction, the liquid formulation could last for more than 24\u00a0h without gelation. The study successfully offered scientific rationales for developing an optimal liquid formulation for a novel titanium microprojection array coating process. The resultant formulation has an enduring physical stability (>24\u00a0h) in the coating reservoir and maintained good in vivo dissolution performance."} {"_id": "dce7a0550b4d63f6fe2e6908073ce0ce63626b0c", "title": "An Ethics Evaluation Tool for Automating Ethical Decision-Making in Robots and Self-Driving Cars", "text": "As we march down the road of automation in robotics and artificial intelligence, we will need to automate an increasing amount of ethical decision-making in order for our devices to operate independently from us. But automating ethical decision-making raises novel questions for engineers and designers, who will have to make decisions about how to accomplish that task. For example, some ethical decision-making involves hard moral cases, which in turn requires user input if we are to respect established norms surrounding autonomy and informed consent. The author considers this and other ethical considerations that accompany the automation of ethical decision-making. 
He proposes some general ethical requirements that should be taken into account in the design room, and sketches a design tool that can be integrated into the design process to help engineers, designers, ethicists, and policymakers decide how best to automate certain forms of ethical decision-making."} {"_id": "87c13f4ec110495837056295909ceb503158c821", "title": "Peppermint oil for treatment of irritable bowel syndrome.", "text": "Irritable bowel syndrome (IBS) is a chronic gastrointestinal disorder that affects 5\u201315% of people worldwide. Typically, patients with IBS experience abdominal pain or discomfort and constipation or diarrhea with other gastrointestinal symptoms, such as abdominal distention and bloating; these symptoms reduce quality of life and can be difficult to control with currently available prescription agents. For these reasons, patients are often compelled to try nonprescription therapies, including natural products. In studies conducted in Australia and the United Kingdom, about 20\u201350% of patients with IBS reported using complementary and alternative medicines. Thus, it is important that healthcare professionals are knowledgeable of these therapies, including dietary supplements such as peppermint oil. Historically, the ingestion of peppermint oil has been associated with effects on the gastrointestinal tract. It is a classic essential oil known to have carminative properties (i.e., it is a naturally occurring remedy thought to help decrease bloating and gas by allowing passage of flatus) and may even be beneficial as an antiemetic. Peppermint oil comes from a perennial herb (Mentha \u00d7 piperita), a plant found across North America and Europe. Mentha \u00d7 piperita is a sterile hybrid of two herbs, spearmint (Mentha spicata) and water mint (Mentha aquatica). The main constituents of peppermint oil include menthol (35\u201355%), menthone (20\u201331%), menthyl acetate (3\u201310%), isomenthone, 1,8-cineole, limonene, β-myrcene, and carvone. Peppermint oil may be obtained through steam distillation of flowering parts that grow aboveground. It is a volatile oil, with menthol accounting for a majority of its potency. Due to the various constituents of peppermint oil, it has a variety of uses, including topical application as an antiseptic and for aches and pains, inhalation as aromatherapy, and oral formulations for flavoring or for use as digestive aids. Relaxation of intestinal muscle, both in vivo and in vitro, and relaxation of the lower esophageal sphincter have been reported with the use of peppermint oil, which is thought of as an antispasmodic that may confer benefits in conditions such as IBS. Several clinical studies of the use of peppermint oil for the treatment of"} {"_id": "8559adcb55b96ca112745566a9b96988fe51ba9a", "title": "Human motion and emotion parameterization", "text": "This paper describes the methods and data structures used for generating new human motions out of existing motion data. These existing animation data are derived from live motion capture or produced with traditional animation tools. We present a survey of techniques for generating new animations by physical and emotional parameterizations. 
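As a flavor of the Fourier-based parameterizations mentioned just below, here is a hypothetical sketch of one classic operation: scaling the oscillatory (non-DC) Fourier components of a sampled joint-angle trajectory to exaggerate or dampen a motion. It is an illustration of the general technique, not a method taken from the surveyed papers.

```python
import numpy as np

def exaggerate_motion(joint_angles, gain=1.5):
    """Scale the oscillatory (non-DC) Fourier components of a sampled
    joint-angle trajectory. gain > 1 exaggerates the motion, gain < 1
    dampens it; leaving the DC term alone preserves the average pose."""
    spectrum = np.fft.rfft(joint_angles)
    spectrum[1:] *= gain  # keep the mean (DC component) untouched
    return np.fft.irfft(spectrum, n=len(joint_angles))
```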
The new animations discussed in this report are generated by Fourier transformations, transformations of velocity and amplitude, and scattered data interpolation."} {"_id": "b8d3bca50ccc573c5cb99f7d201e8acce6618f04", "title": "An Algorithm for Drawing General Undirected Graphs", "text": "Graphs (networks) are very common data structures which are handled in computers. Diagrams are widely used to represent the graph structures visually in many information systems. In order to automatically draw the diagrams which are, for example, state graphs, data-flow graphs, Petri nets, and entity-relationship diagrams, basic graph drawing algorithms are required. The state of the art in automatic drawing is surveyed comprehensively in [7,19]. There have been only a few algorithms for general undirected graphs. This paper presents a simple but successful algorithm for drawing undirected graphs and weighted graphs. The basic idea of our algorithm is as follows. We regard the desirable \"geometric\" (Euclidean) distance between two vertices in the drawing as the \"graph theoretic\" distance between them in the corresponding graph. We introduce a virtual dynamic system in which every two vertices are connected by a \"spring\" of such desirable length. Then, we regard the optimal layout of vertices as the state in which the total spring energy of the system is minimal. The \"spring\" idea for drawing general graphs was introduced in [6], and similar methods were used for drawing planar graphs with fixed boundary [2,20]. This paper brings a new significant result in graph drawing based on the spring model."} {"_id": "3669f95b0a3224c62d4ddfcebc174dee613e07fc", "title": "Topic Model for Identifying Suicidal Ideation in Chinese Microblog", "text": "Suicide is one of the major public health problems worldwide. Traditionally, suicidal ideation is assessed by surveys or interviews, which lack a real-time assessment of personal mental state. Online social networks, with large amounts of user-generated data, offer opportunities to gain insights into suicide assessment and prevention. In this paper, we explore the potential to identify and monitor suicidal ideation expressed in microblogs on social networks. First, we identify users who have committed suicide and collect millions of microblogs from social networks. Second, we build a suicide psychological lexicon using psychological standards and a word embedding technique. Third, by leveraging both language styles and online behaviors, we employ Topic Model and other machine learning algorithms to identify suicidal ideation. Our approach achieves the best results on topic-500, yielding an F1-measure of 80.0%, Precision of 87.1%, Recall of 73.9%, and Accuracy of 93.2%. Furthermore, a prototype system for monitoring suicidal ideation on several social networks is deployed."} {"_id": "d1fda9eb1cb03baa331baae5805d08d81208f730", "title": "Unusual coexistence of caudal duplication and caudal regression syndromes.", "text": "Caudal duplication syndrome includes anomalies of the genitourinary system, gastrointestinal tract, and the distal neural tube. Caudal regression syndrome presents with lumbosacral hypogenesis, anomalies of the lower gastrointestinal tract, genitourinary system, and limb anomalies. Both occur as a result of an insult to the caudal cell mass. 
We present a child with features consistent with both entities."} {"_id": "ab19cbea5c61536b616cfa7654cf01bf0621b83f", "title": "Biodynamic Excisional Skin Tension Lines for Cutaneous Surgery", "text": ""} {"_id": "ab7b1542251ff46971704cd24562062f59901fb8", "title": "Qualitative research methods in mental health.", "text": "As the evidence base for the study of mental health problems develops, there is a need for increasingly rigorous and systematic research methodologies. Complex questions require complex methodological approaches. Recognising this, the MRC guidelines for developing and testing complex interventions place qualitative methods as integral to each stage of intervention development and implementation. However, mental health research has lagged behind many other healthcare specialities in using qualitative methods within its evidence base. Rigour in qualitative research raises many similar issues to quantitative research and also some additional challenges. This article examines the role of qualitative methods within mental health research, describes key methodological and analytical approaches and offers guidance on how to differentiate between poor and good quality qualitative research."} {"_id": "01e0a7cdbf9851a30f7dc31dc79adc2a7bde1c9f", "title": "Reliability in the utility computing era: Towards reliable Fog computing", "text": "This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm as a non-trivial extension of the Cloud is considered, and the reliability of the networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible."} {"_id": "1b90ee5c846aafe7feb38b439a3e8fa212757899", "title": "Detection and analysis of drive-by-download attacks and malicious JavaScript code", "text": "JavaScript is a browser scripting language that allows developers to create sophisticated client-side interfaces for web applications. However, JavaScript code is also used to carry out attacks against the user's browser and its extensions. These attacks usually result in the download of additional malware that takes complete control of the victim's platform, and are, therefore, called \"drive-by downloads.\" Unfortunately, the dynamic nature of the JavaScript language and its tight integration with the browser make it difficult to detect and block malicious JavaScript code.\n This paper presents a novel approach to the detection and analysis of malicious JavaScript code. Our approach combines anomaly detection with emulation to automatically identify malicious JavaScript code and to support its analysis. We developed a system that uses a number of features and machine-learning techniques to establish the characteristics of normal JavaScript code. Then, during detection, the system is able to identify anomalous JavaScript code by emulating its behavior and comparing it to the established profiles. In addition to identifying malicious code, the system is able to support the analysis of obfuscated code and to generate detection signatures for signature-based systems.
The system has been made publicly available and has been used by thousands of analysts."} {"_id": "3032182c47b75d9c1d16877815dab8f8637631a2", "title": "Beyond blacklists: learning to detect malicious web sites from suspicious URLs", "text": "Malicious Web sites are a cornerstone of Internet criminal activities. As a result, there has been broad interest in developing systems to prevent the end user from visiting such sites. In this paper, we describe an approach to this problem based on automated URL classification, using statistical methods to discover the tell-tale lexical and host-based properties of malicious Web site URLs. These methods are able to learn highly predictive models by extracting and automatically analyzing tens of thousands of features potentially indicative of suspicious URLs. The resulting classifiers obtain 95-99% accuracy, detecting large numbers of malicious Web sites from their URLs, with only modest false positives."} {"_id": "6ba2b0a92408789eec23c008a9beb1b574b42470", "title": "Anomaly Based Web Phishing Page Detection", "text": "Many anti-phishing schemes have recently been proposed in the literature. Despite all those efforts, the threat of phishing attacks is not mitigated. One of the main reasons is that phishing attackers have the adaptability to change their tactics with little cost. In this paper, we propose a novel approach, which is independent of any specific phishing implementation. Our idea is to examine the anomalies in Web pages, in particular, the discrepancy between a Web site's identity and its structural features and HTTP transactions. It demands neither user expertise nor prior knowledge of the Web site. The evasion of our phishing detection entails high cost to the adversary. As shown by the experiments, our phishing detector functions with low miss rate and low false-positive rate."} {"_id": "9cbe8c8ba680a4e55517a8cf322603334ac68be1", "title": "Effective analysis, characterization, and detection of malicious web pages", "text": "The steady evolution of the Web has paved the way for miscreants to take advantage of vulnerabilities to embed malicious content into web pages. Upon a visit, malicious web pages steal sensitive data, redirect victims to other malicious targets, or seize control of the victim's system to mount future attacks. Approaches to detect malicious web pages have been relatively effective at special classes of attacks like drive-by-downloads. However, the prevalence and complexity of attacks by malicious web pages is still worrisome. The main challenges in this problem domain are (1) fine-grained capturing and characterization of attack payloads, (2) evolution of web page artifacts, and (3) flexibility and scalability of detection techniques with a fast-changing threat landscape. To this end, we proposed a holistic approach that leverages static analysis, dynamic analysis, machine learning, and evolutionary searching and optimization to effectively analyze and detect malicious web pages. We do so by: introducing novel features to capture fine-grained snapshot of malicious web pages, holistic characterization of malicious web pages, and application of evolutionary techniques to fine-tune learning-based detection models pertinent to evolution of attack payloads.
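The URL-classification abstract above ("Beyond blacklists") rests on exactly the kind of lexical features that are easy to sketch. Below is a toy version with my own token features and made-up training data; the paper additionally mines host-based properties (WHOIS, DNS, geographic) and uses far larger labeled feeds.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def url_tokens(url):
    # Lexical tokens only; host-based features are omitted in this sketch.
    return [t for t in re.split(r"[/?.=&_\-:]", url.lower()) if t]

# Tiny illustrative sample; real experiments need labeled URL feeds.
urls = ["http://paypal.com.secure-login.example.ru/update",
        "http://www.wikipedia.org/wiki/Main_Page",
        "http://free-prizes.example.win/claim?id=123",
        "http://github.com/user/repo"]
labels = [1, 0, 1, 0]            # 1 = malicious, 0 = benign

clf = make_pipeline(CountVectorizer(analyzer=url_tokens, binary=True),
                    LogisticRegression())
clf.fit(urls, labels)
print(clf.predict(["http://secure-login.example.ru/paypal/update"]))
```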
In this paper, we present key intuition and details of our approach, results obtained so far, and future work."} {"_id": "d69ae114a54a0295fe0a882d205611a121f981e1", "title": "ADAM: Detecting Intrusions by Data Mining", "text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection. Keywords\u2014Intrusion Detection, Data Mining, Association Rules, Classifiers."} {"_id": "a3fe9f3b248417db3cdcf07ab6f9a63c03a6345f", "title": "Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders", "text": "Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered background. In this paper, we present a cell detection and segmentation algorithm using the sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles the shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and sDAE with structured labels for cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods."} {"_id": "636bb6a9fd77f6811fc0339c44542ab25ba552cc", "title": "A shared \"passengers & goods\" city logistics system", "text": "Many strategic planning models have been developed to help decision making in city logistics. Few such models take into account the flow of passengers, because the units considered are not of the same nature (a person is active while a good is passive). However, it seems fundamental to gather goods and passengers in one model when their respective transport flows interact with each other. In this context, we suggest assessing a shared passengers & goods city logistics system where the spare capacity of public transport is used to distribute goods toward the city core. We model the problem as a vehicle routing problem with transfers and give a mathematical formulation. Then we propose an Adaptive Large Neighborhood Search (ALNS) to solve it. This approach is evaluated on data sets generated following a field study in the city of La Rochelle in France."} {"_id": "86f7488e6ad64ad3a7aab65f936c9686aee91a1a", "title": "Speaker Identification using Mel Frequency Cepstral Coefficient and BPNN", "text": "Speech processing has emerged as one of the important application areas of digital signal processing. Various fields for research in speech processing are speech recognition, speaker recognition, speech synthesis, speech coding, etc.
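The Adaptive Large Neighborhood Search named in the shared passengers & goods abstract above follows a generic destroy-and-repair loop. The skeleton below, applied to a toy travelling-salesman tour rather than the paper's far richer VRP with transfers, is my own minimal sketch; the operator and weight-update choices are illustrative, and full ALNS usually adds a simulated-annealing acceptance rule.

```python
import math, random

def alns(cost, destroy_ops, repair_ops, init, iters=1000, seed=0):
    """Generic ALNS skeleton: pick operators by adaptive weights,
    destroy and repair the current solution, keep improvements."""
    rnd = random.Random(seed)
    cur = best = init
    wd, wr = [1.0] * len(destroy_ops), [1.0] * len(repair_ops)
    for _ in range(iters):
        i = rnd.choices(range(len(destroy_ops)), weights=wd)[0]
        j = rnd.choices(range(len(repair_ops)), weights=wr)[0]
        cand = repair_ops[j](destroy_ops[i](cur, rnd), rnd)
        if cost(cand) < cost(cur):            # greedy acceptance
            cur, reward = cand, 1.0
            if cost(cand) < cost(best):
                best, reward = cand, 2.0
            wd[i] += reward; wr[j] += reward  # reward operators that helped
    return best

# Toy tour instance: remove 3 random cities, reinsert each greedily.
rnd0 = random.Random(42)
pts = [(rnd0.random(), rnd0.random()) for _ in range(20)]
def cost(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
def destroy(tour, rnd):
    tour = tour[:]
    removed = [tour.pop(rnd.randrange(len(tour))) for _ in range(3)]
    return tour, removed
def repair(state, rnd):
    tour, removed = state
    for c in removed:
        k = min(range(len(tour) + 1),
                key=lambda p: cost(tour[:p] + [c] + tour[p:]))
        tour = tour[:k] + [c] + tour[k:]
    return tour

print(cost(alns(cost, [destroy], [repair], list(range(20)))))
```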
The objective of automatic speaker recognition is to extract, characterize and recognize the information about speaker identity. Feature extraction is the first step for speaker recognition. Many algorithms have been suggested and developed by researchers for feature extraction. In this work, the Mel Frequency Cepstrum Coefficient (MFCC) feature has been used for designing a text dependent speaker identification system. BPNN is used for speaker identification after training on the feature set from MFCC. Some modifications to the existing technique of MFCC for feature extraction are also suggested to improve the speaker recognition efficiency. Information from speech recognition can be used in various ways in state-of-the-art speaker recognition systems. This includes the obvious use of recognized words to enable the use of text-dependent speaker modeling techniques when the words spoken are not given. Furthermore, it has been shown that the choice of words and phones itself can be a useful indicator of speaker identity. Also, recognizer output enables higher-level features, in particular those related to prosodic properties of speech. Keywords\u2014Speaker identification, BPNN, MFCC, speech processing, feature extraction, speech signal"} {"_id": "8188d1381f8c77f7df0117fd0dab1919693c1295", "title": "Language support for fast and reliable message-based communication in singularity OS", "text": "Message-based communication offers the potential benefits of providing stronger specification and cleaner separation between components. Compared with shared-memory interactions, message passing has the potential disadvantages of more expensive data exchange (no direct sharing) and more complicated programming. In this paper we report on the language, verification, and run-time system features that make messages practical as the sole means of communication between processes in the Singularity operating system. We show that using advanced programming language and verification techniques, it is possible to provide and enforce strong system-wide invariants that enable efficient communication and low-overhead software-based process isolation. Furthermore, specifications on communication channels help in detecting programmer mistakes early---namely at compile-time---thereby reducing the difficulty of the message-based programming model. The paper describes our communication invariants, the language and verification features that support them, as well as implementation details of the infrastructure. A number of benchmarks show the competitiveness of this approach."} {"_id": "6c0cfbb0e02b8d5ea3f0d6f94eb25c7b93ff3e85", "title": "Timeline generation with social attention", "text": "Timeline generation is an important research task which can help users to have a quick understanding of the overall evolution of any given topic. It has thus attracted much attention from research communities in recent years. Nevertheless, existing work on timeline generation often ignores an important factor, the attention attracted to topics of interest (hereafter termed \"social attention\"). Without taking into consideration social attention, the generated timelines may not reflect users' collective interests. In this paper, we study how to incorporate social attention in the generation of timeline summaries.
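A compact stand-in for the MFCC+BPNN pipeline in the speaker-identification abstract above, assuming librosa and scikit-learn are available. MLPClassifier plays the role of the BPNN (both are backpropagation-trained feedforward networks), and the wav file names are placeholders, not data from the paper.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, n_mfcc=13):
    """Mean MFCC vector over the utterance: a common fixed-length summary."""
    y, sr = librosa.load(path, sr=16000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return m.mean(axis=1)

# Placeholder files: several utterances of the same phrase per speaker
# (the text-dependent setting, as in the abstract).
train = [("spk0_a.wav", 0), ("spk0_b.wav", 0),
         ("spk1_a.wav", 1), ("spk1_b.wav", 1)]
X = np.array([mfcc_features(p) for p, _ in train])
y = np.array([s for _, s in train])

# Backprop-trained feedforward net, standing in for the paper's BPNN.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([mfcc_features("unknown.wav")]))
```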
In particular, for a given topic, we capture social attention by learning users' collective interests in the form of word distributions from Twitter, which are subsequently incorporated into a unified framework for timeline summary generation. We construct four evaluation sets over six diverse topics. We demonstrate that our proposed approach is able to generate both informative and interesting timelines. Our work sheds light on the feasibility of incorporating social attention into traditional text mining tasks."} {"_id": "998065c6747d8fb05dca5977415179e20371c3d4", "title": "Analyzing the State of Static Analysis: A Large-Scale Evaluation in Open Source Software", "text": "The use of automatic static analysis has been a software engineering best practice for decades. However, we still do not know a lot about its use in real-world software projects: How prevalent is the use of Automated Static Analysis Tools (ASATs) such as FindBugs and JSHint? How do developers use these tools, and how does their use evolve over time? We research these questions in two studies on nine different ASATs for Java, JavaScript, Ruby, and Python with a population of 122 and 168,214 open-source projects. To compare warnings across the ASATs, we introduce the General Defect Classification (GDC) and provide a grounded-theory-derived mapping of 1,825 ASAT-specific warnings to 16 top-level GDC classes. Our results show that ASAT use is widespread, but not ubiquitous, and that projects typically do not enforce a strict policy on ASAT use. Most ASAT configurations deviate slightly from the default, but hardly any introduce new custom analyses. Only a very small set of default ASAT analyses is widely changed. Finally, most ASAT configurations, once introduced, never change. If they do, the changes are small and have a tendency to occur within one day of the configuration's initial introduction."} {"_id": "587f8411391bf2f9d7586eed05416977c6024dd0", "title": "Microrobot Design Using Fiber Reinforced Composites", "text": "Mobile microrobots with characteristic dimensions on the order of 1cm are difficult to design using either MEMS (microelectromechanical systems) technology or precision machining. This is due to the challenges associated with constructing the high strength links and high-speed, low-loss joints with micron scale features required for such systems. Here we present an entirely new framework for creating microrobots which makes novel use of composite materials. This framework includes a new fabrication process termed Smart Composite Microstructures (SCM) for integrating rigid links and large angle flexure joints through a laser micromachining and lamination process. We also present solutions to actuation and integrated wiring issues at this scale using SCM. Along with simple design rules that are customized for this process, our new complete microrobotic framework is a cheaper, quicker, and altogether superior method for creating microrobots that we hope will become the paradigm for robots at this scale."} {"_id": "e5e989029e7e87fe8faddd233a97705547d06dda", "title": "Mr. DLib's Living Lab for Scholarly Recommendations", "text": "We introduce the first living lab for scholarly recommender systems. This lab allows recommender-system researchers to conduct online evaluations of their novel algorithms for scholarly recommendations, i.e., research papers, citations, conferences, research grants etc. 
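One plausible toy reading of the "social attention" idea in the timeline abstract above: estimate a unigram word distribution from topic-related tweets and let it score candidate timeline sentences. The paper's unified framework is richer; this sketch, with invented data, only illustrates the word-distribution ingredient.

```python
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def word_dist(tweets):
    """Unigram 'social attention' distribution over topic-related tweets."""
    counts = Counter(w for t in tweets for w in tokens(t))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def attention_score(sentence, dist):
    ws = tokens(sentence)
    return sum(dist.get(w, 0.0) for w in ws) / max(len(ws), 1)

tweets = ["battery fire reported again", "recall announced after battery fire"]
dist = word_dist(tweets)
candidates = ["The company announced a recall.",
              "Shares were flat in early trading."]
# The sentence overlapping users' collective interests wins.
print(max(candidates, key=lambda s: attention_score(s, dist)))
```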
Recommendations are delivered through the living lab's API in platforms such as reference management software and digital libraries. The living lab is built on top of the recommender-system-as-a-service Mr. DLib. Current partners are the reference management software JabRef and the CORE research team. We present the architecture of Mr. DLib\u2019s living lab as well as usage statistics on the first ten months of operating it. During this time, 970,517 recommendations were delivered with a mean click-through rate of 0.22%."} {"_id": "a75a6ed085fb57762fa148b82aa47607c0d9d92c", "title": "A Genetic Algorithm Approach to Dynamic Job Shop Scheduling Problems", "text": "This paper describes a genetic algorithm approach to the dynamic job shop scheduling problem with jobs arriving continually. Both deterministic and stochastic models of the dynamic problem were investigated. The objective functions examined were weighted flow time, maximum tardiness, weighted tardiness, weighted lateness, weighted number of tardy jobs, and weighted earliness plus weighted tardiness. In the stochastic model, we further tested the approach under various manufacturing environments with respect to the machine workload, imbalance of machine workload, and due date tightness. The results indicate that the approach performs well and is robust with regard to the objective function and the manufacturing environment in comparison with priority rule approaches."} {"_id": "7703a2c5468ecbee5b62c048339a03358ed5fe19", "title": "Recurrent Neural Aligner: An Encoder-Decoder Neural Network Model for Sequence to Sequence Mapping", "text": "We introduce an encoder-decoder recurrent neural network model called Recurrent Neural Aligner (RNA) that can be used for sequence to sequence mapping tasks. Like connectionist temporal classification (CTC) models, RNA defines a probability distribution over target label sequences including blank labels corresponding to each time step in input. The probability of a label sequence is calculated by marginalizing over all possible blank label positions. Unlike CTC, RNA does not make a conditional independence assumption for label predictions; it uses the predicted label at time t\u22121 as an additional input to the recurrent model when predicting the label at time t. We apply this model to end-to-end speech recognition. RNA is capable of streaming recognition since the decoder does not employ an attention mechanism. The model is trained on transcribed acoustic data to predict graphemes and no external language and pronunciation models are used for decoding. We employ an approximate dynamic programming method to optimize negative log likelihood, and a sampling-based sequence discriminative training technique to fine-tune the model to minimize expected word error rate. We show that the model achieves competitive accuracy without using an external language model or beam search decoding."} {"_id": "a23179010e83ebdc528b4318bcea8edace96cbe5", "title": "Effective Bug Triage Based on Historical Bug-Fix Information", "text": "For complex and popular software, project teams could receive a large number of bug reports. It is often tedious and costly to manually assign these bug reports to developers who have the expertise to fix the bugs. Many bug triage techniques have been proposed to automate this process. In this paper, we describe our study on applying conventional bug triage techniques to projects of different sizes.
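The genetic-algorithm scheduling abstract above relies on standard permutation-GA machinery. Here is a minimal sketch on a deliberately simplified problem (one machine, static arrivals, weighted flow time), not the paper's dynamic job shop; the operators and parameters are my own choices.

```python
import random

jobs = [(3, 2.0), (5, 1.0), (2, 3.0), (7, 1.5), (4, 2.5)]  # (time, weight)

def weighted_flow_time(order):
    t = total = 0.0
    for j in order:
        t += jobs[j][0]            # completion time of job j
        total += jobs[j][1] * t    # weight * flow time (all released at 0)
    return total

def order_crossover(a, b, rnd):
    """OX: keep a slice of parent a, fill the remainder in b's order."""
    i, j = sorted(rnd.sample(range(len(a)), 2))
    mid = a[i:j]
    rest = [g for g in b if g not in mid]
    return rest[:i] + mid + rest[i:]

def ga(pop_size=30, gens=100, mut=0.2, seed=0):
    rnd = random.Random(seed)
    pop = [rnd.sample(range(len(jobs)), len(jobs)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=weighted_flow_time)
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            child = order_crossover(*rnd.sample(elite, 2), rnd)
            if rnd.random() < mut:           # swap mutation
                i, j = rnd.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=weighted_flow_time)

best = ga()
print(best, weighted_flow_time(best))
```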
We find that the effectiveness of a bug triage technique largely depends on the size of a project team (measured in terms of the number of developers). The conventional bug triage methods become less effective when the number of developers increases. To further improve the effectiveness of bug triage for large projects, we propose a novel recommendation method called Bug Fixer, which recommends developers for a new bug report based on historical bug-fix information. Bug Fixer constructs a Developer-Component-Bug (DCB) network, which models the relationship between developers and source code components, as well as the relationship between the components and their associated bugs. A DCB network captures the knowledge of \"who fixed what, where\". For a new bug report, Bug Fixer uses a DCB network to recommend to the triager a list of suitable developers who could fix this bug. We evaluate Bug Fixer on three large-scale open source projects and two smaller industrial projects. The experimental results show that the proposed method outperforms the existing methods for large projects and achieves comparable performance for small projects."} {"_id": "0b242d5123f79defd5f775d49d8a7047ad3153bc", "title": "How Important is Weight Symmetry in Backpropagation?", "text": "Gradient backpropagation (BP) requires symmetric feedforward and feedback connections\u2014the same weights must be used for forward and backward passes. This \u201cweight transport problem\u201d [1] is thought to be one of the main reasons for BP\u2019s biological implausibility. Using 15 different classification datasets, we systematically study to what extent BP really depends on weight symmetry. In a study that turned out to be surprisingly similar in spirit to Lillicrap et al.\u2019s demonstration [2] but orthogonal in its results, our experiments indicate that: (1) the magnitudes of feedback weights do not matter to performance; (2) the signs of feedback weights do matter\u2014the more concordant signs between feedforward and their corresponding feedback connections, the better; (3) with feedback weights having random magnitudes and 100% concordant signs, we were able to achieve the same or even better performance than SGD; and (4) some normalizations/stabilizations are indispensable for such asymmetric BP to work, namely Batch Normalization (BN) [3] and/or a \u201cBatch Manhattan\u201d (BM) update rule. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216."} {"_id": "cfdfb7fecab1c795a52d0d201064fe876e0aae2f", "title": "Smarter universities: A vision for the fast changing digital era", "text": "In this paper we analyze the current situation of education in universities, with particular reference to the European scenario. Specifically, we observe that recent evolutions, such as pervasive networking and other enabling technologies, have been dramatically changing human life, knowledge acquisition, and the way work is performed and people learn. In this societal change, universities must maintain their leading role. Historically, they set trends primarily in education but now they are called to drive the change in other aspects too, such as management, safety, and environmental protection. The availability of newer and newer technology reflects on how the relevant processes should be performed in the current fast changing digital era.
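The Developer-Component-Bug network in the bug-triage abstract reduces, in its simplest reading, to counting who fixed bugs where. The toy sketch below shows only that counting-and-ranking step; it is my simplification, not the authors' Bug Fixer implementation, and the history data is invented.

```python
from collections import defaultdict

# Historical bug-fix records: (developer, component) pairs, one per fixed bug.
history = [("alice", "ui"), ("alice", "ui"), ("bob", "db"),
           ("bob", "ui"), ("carol", "db"), ("carol", "db")]

fixes = defaultdict(int)          # developer-component link strengths
for dev, comp in history:
    fixes[(dev, comp)] += 1

def recommend(component, k=2):
    """Rank developers by how often they fixed bugs in this component."""
    scores = defaultdict(int)
    for (dev, comp), n in fixes.items():
        if comp == component:
            scores[dev] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("db"))            # -> ['carol', 'bob']
```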
This leads to the adoption of a variety of smart solutions in university environments to enhance the quality of life and to improve the performance of both teachers and students. Nevertheless, we argue that being smart is not enough for a modern university. In fact, universities should instead become smarter. By \u201csmarter university\u201d we mean a place where knowledge is shared between employees, teachers, students, and all stakeholders in a seamless way. In this paper we propose and discuss a smarter university model, derived from the one designed for the development of"} {"_id": "baca9a36855ee7e5d80a860072be24a865ec8bf1", "title": "Impact of dietary fiber intake on glycemic control, cardiovascular risk factors and chronic kidney disease in Japanese patients with type 2 diabetes mellitus: the Fukuoka Diabetes Registry", "text": "BACKGROUND\nDietary fiber is beneficial for the treatment of type 2 diabetes mellitus, although it is consumed differently in ethnic foods around the world. We investigated the association between dietary fiber intake and obesity, glycemic control, cardiovascular risk factors and chronic kidney disease in Japanese type 2 diabetic patients.\n\n\nMETHODS\nA total of 4,399 patients were assessed for dietary fiber intake using a brief self-administered diet history questionnaire. The associations between dietary fiber intake and various cardiovascular risk factors were investigated cross-sectionally.\n\n\nRESULTS\nBody mass index, fasting plasma glucose, HbA1c, triglyceride and high-sensitivity C-reactive protein negatively associated with dietary fiber intake after adjusting for age, sex, duration of diabetes, current smoking, current drinking, total energy intake, fat intake, saturated fatty acid intake, leisure-time physical activity and use of oral hypoglycemic agents or insulin. The homeostasis model assessment insulin sensitivity and HDL cholesterol positively associated with dietary fiber intake. Dietary fiber intake was associated with reduced prevalence of abdominal obesity, hypertension and metabolic syndrome after multivariate adjustments including obesity. Furthermore, dietary fiber intake was associated with lower prevalence of albuminuria, low estimated glomerular filtration rate and chronic kidney disease after multivariate adjustments including protein intake. Additional adjustments for obesity, hypertension or metabolic syndrome did not change these associations.\n\n\nCONCLUSION\nWe demonstrated that increased dietary fiber intake was associated with better glycemic control and more favorable cardiovascular disease risk factors including chronic kidney disease in Japanese type 2 diabetic patients. Diabetic patients should be encouraged to consume more dietary fiber in daily life."} {"_id": "241e2b442812d843dbd30e924c2f2f6ad8e12179", "title": "Concept Decompositions for Large Sparse Text Data Using Clustering", "text": "Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors\u2013a few thousand dimensions and a sparsity of 95 to 99% is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters, each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm.
As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain \u201cfractal-like\u201d and \u201cself-similar\u201d behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means algorithm constitute a powerful sparse and localized \u201cbasis\u201d for text data sets."} {"_id": "46ba80738127b19a892cfe687ece171251abb806", "title": "Attribution versus persuasion as a means for modifying behavior.", "text": "The present research compared the relative effectiveness of an attribution strategy with a persuasion strategy in changing behavior. Study 1 attempted to teach fifth graders not to litter and to clean up after others. An attribution group was repeatedly told that they were neat and tidy people, a persuasion group was repeatedly told that they should be neat and tidy, and a control group received no treatment. Attribution proved considerably more effective in modifying behavior. Study 2 tried to discover whether similar effects would hold for a more central aspect of school performance, math achievement and self-esteem, and whether an attribution of ability would be as effective as an attribution of motivation. Repeatedly attributing to second graders either the ability or the motivation to do well in math proved more effective than comparable persuasion or no-treatment control groups, although a group receiving straight reinforcement for math problem-solving behavior also did well. It is suggested that persuasion often suffers because it involves a negative attribution (a person should be what he is not), while attribution generally gains because it disguises persuasive intent."} {"_id": "ea76ec431a7dd1c9a5b7ccf9fb0a4cb13b9b5037", "title": "Taslihan Virtual Reconstruction - Interactive Digital Story or a Serious Game", "text": "During the Ottoman period, Taslihan was the largest accommodation complex in Sarajevo, Bosnia and Herzegovina. Today, only one wall remains as a memento of its existence. In this paper, we compare user appreciation of an interactive digital story about this building and of a serious game about Taslihan to see which application offers more knowledge and immersion while bringing this monument to life in the collective memory of the people."} {"_id": "b7fa7e6f1ebd1653ad0431a85dc46221b8e2a367", "title": "Mobile apps for science learning: Review of research", "text": "This review examined articles on mobile apps for science learning published from 2007 to 2014. 
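The spherical k-means and concept-decomposition definitions in the clustering abstract above translate almost literally into numpy. The sketch below uses random stand-in data and my own parameter choices; it is the bare definitions, not the paper's experimental code.

```python
import numpy as np

def normalize_rows(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def spherical_kmeans(X, k, iters=50, seed=0):
    """Cluster unit-norm document vectors by cosine similarity; concept
    vectors are cluster centroids renormalized to unit Euclidean norm."""
    X = normalize_rows(X)
    rng = np.random.default_rng(seed)
    concepts = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = (X @ concepts.T).argmax(axis=1)   # nearest concept (cosine)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                concepts[j] = members.sum(axis=0)
        concepts = normalize_rows(concepts)
    return concepts, assign

rng = np.random.default_rng(1)
X = normalize_rows(rng.random((100, 40)))          # toy document vectors
concepts, _ = spherical_kmeans(X, k=5)

# Concept decomposition: least-squares projection of the document matrix
# onto the linear subspace spanned by the concept vectors.
Z, *_ = np.linalg.lstsq(concepts.T, X.T, rcond=None)
print("approximation error:", np.linalg.norm(X - (concepts.T @ Z).T))
```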
A qualitative content analysis was used to investigate the science mobile app research for its mobile app design, underlying theoretical foundations, and students' measured outcomes. This review found that mobile apps for science learning offered a number of similar design features, including technology-based scaffolding, location-aware functionality, visual/audio representations, digital knowledge-construction tools, digital knowledge-sharing mechanisms, and differentiated roles. Many of the studies cited a specific theoretical foundation, predominantly situated learning theory, and applied this to the design of the mobile learning environment. The most common measured outcome was students' basic scientific knowledge or conceptual understanding. A number of recommendations came out of this review. Future studies need to make use of newer, available technologies; isolate the testing of specific app features; and develop additional strategies around using mobile apps for collaboration. Researchers need to make more explicit connections between the instructional principles and the design features of their mobile learning environment in order to better integrate theory with practice. In addition, this review noted that stronger alignment is needed between the underlying theories and measured outcomes, and more studies are needed to assess students' higher-level cognitive outcomes, cognitive load, and skill-based outcomes such as problem solving. Finally, more research is needed on how science mobile apps can be used with more varied science topics and diverse audiences."} {"_id": "c4dece35bb107170c9f76fcb254a191dc15cce27", "title": "Characteristics and Expected Returns in Individual Equity Options", "text": "I study excess returns from selling individual equity option portfolios that are leverage-adjusted monthly and delta-hedged daily. Strikingly, I find that several measures of risk rise with maturity, although expected returns decrease. Based on my analysis, I identify three new factors \u2013 level, slope, and value \u2013 in option returns, which together explain the cross-sectional variation in expected returns on option portfolios formed on moneyness, maturity and option value (the spread between historical volatility and the Black-Scholes implied volatility). This three-factor model helps explain expected returns on option portfolios formed on a variety of different characteristics that include carry, VRP, volatility momentum, idiosyncratic volatility, illiquidity, etc. While the level premium appears to be a compensation for marketwide volatility and jump shocks, theories of risk-averse financial intermediaries help us to understand the slope and the value premiums."} {"_id": "0cdf9697538c46db78a948ede0f9b0c605b71d26", "title": "Survey of fraud detection techniques", "text": "Due to the dramatic increase in fraud, which results in the loss of billions of dollars worldwide each year, several modern techniques for detecting fraud are continually being developed and applied to many business fields. Fraud detection involves monitoring the behavior of populations of users in order to estimate, detect, or avoid undesirable behavior. Undesirable behavior is a broad term including delinquency, fraud, intrusion, and account defaulting. This paper presents a survey of current techniques used in credit card fraud detection, telecommunication fraud detection, and computer intrusion detection.
The goal of this paper is to provide a comprehensive review of different techniques to detect fraud."} {"_id": "271c8b6d98ec65db2e5b6b28757c66fea2a5a463", "title": "Measuring emotional intelligence with the MSCEIT V2.0.", "text": "Does a recently introduced ability scale adequately measure emotional intelligence (EI) skills? Using the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; J. D. Mayer, P. Salovey, & D. R. Caruso, 2002b), the authors examined (a) whether members of a general standardization sample and emotions experts identified the same test answers as correct, (b) the test's reliability, and (c) the possible factor structures of EI. Twenty-one emotions experts endorsed many of the same answers, as did 2,112 members of the standardization sample, and exhibited superior agreement, particularly when research provides clearer answers to test questions (e.g., emotional perception in faces). The MSCEIT achieved reasonable reliability, and confirmatory factor analysis supported theoretical models of EI. These findings help clarify issues raised in earlier articles published in Emotion."} {"_id": "29452dfebc17d6b1fa57b68971ecae85f73fd1d7", "title": "A complete formalized knowledge representation model for advanced digital forensics timeline analysis", "text": "Having a clear view of events that occurred over time is a difficult objective to achieve in digital investigations (DI). Event reconstruction, which allows investigators to understand the timeline of a crime, is one of the most important steps of a DI process. This complex task requires exploration of a large number of events due to the pervasiveness of new technologies nowadays. Any evidence produced at the end of the investigative process must also meet the requirements of the courts, such as reproducibility, verifiability, validation, etc. For this purpose, we propose a new methodology, supported by theoretical concepts, that can assist investigators through the whole process including the construction and the interpretation of the events describing the case. The proposed approach is based on a model which integrates knowledge of experts from the fields of digital forensics and software development to allow a semantically rich representation of events related to the incident. The main purpose of this model is to allow the analysis of these events in an automatic and efficient way. This paper describes the approach and then focuses on the main conceptual and formal aspects: a formal incident modelization and operators for timeline reconstruction and analysis."} {"_id": "ec8630ea4cc06b9a51a7aa4ba50b91ccf112437d", "title": "Inverse Reinforcement Learning Based Human Behavior Modeling for Goal Recognition in Dynamic Local Network Interdiction", "text": "Goal recognition is the task of inferring an agent\u2019s goals given some or all of the agent\u2019s observed actions. Among different ways of problem formulation, goal recognition can be solved as a model-based planning problem using off-the-shelf planners. However, obtaining accurate cost or reward models of an agent and incorporating them into the planning model becomes an issue in real applications. Towards this end, we propose an Inverse Reinforcement Learning (IRL)-based opponent behavior modeling method, and apply it in the goal recognition assisted Dynamic Local Network Interdiction (DLNI) problem.
We first introduce the overall framework and the DLNI problem domain of our work. After that, an IRL-based human behavior modeling method and Markov Decision Process-based goal recognition are introduced. Experimental results indicate that our learned behavior model has a higher tracking accuracy and yields better interdiction outcomes than other models."} {"_id": "20432d7fec7b15f414f51a1e4fe1983f353eff9d", "title": "Author Disambiguation using Error-driven Machine Learning with a Ranking Loss Function", "text": "Author disambiguation is the problem of determining whether records in a publications database refer to the same person. A common supervised machine learning approach is to build a classifier to predict whether a pair of records is coreferent, followed by a clustering step to enforce transitivity. However, this approach ignores powerful evidence obtainable by examining sets (rather than pairs) of records, such as the number of publications or co-authors an author has. In this paper we propose a representation that enables these first-order features over sets of records. We then propose a training algorithm well-suited to this representation that is (1) error-driven in that training examples are generated from incorrect predictions on the training data, and (2) rank-based in that the classifier induces a ranking over candidate predictions. We evaluate our algorithms on three author disambiguation datasets and demonstrate error reductions of up to 60% over the standard binary classification approach."} {"_id": "5243700bf7f0863fc9d350921515767c69f754cd", "title": "Closed-Loop Deep Brain Stimulation Is Superior in Ameliorating Parkinsonism", "text": "Continuous high-frequency deep brain stimulation (DBS) is a widely used therapy for advanced Parkinson's disease (PD) management. However, the mechanisms underlying DBS effects remain enigmatic and are the subject of an ongoing debate. Here, we present and test a closed-loop stimulation strategy for PD in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) primate model of PD. Application of pallidal closed-loop stimulation leads to dissociation between changes in basal ganglia (BG) discharge rates and patterns, providing insights into PD pathophysiology. Furthermore, cortico-pallidal closed-loop stimulation has a significantly greater effect on akinesia and on cortical and pallidal discharge patterns than standard open-loop DBS and matched control stimulation paradigms. Thus, closed-loop DBS paradigms, by modulating pathological oscillatory activity rather than the discharge rate of the BG-cortical networks, may afford more effective management of advanced PD. Such strategies have the potential to be effective in additional brain disorders in which a pathological neuronal discharge pattern can be recognized."} {"_id": "102153467f27d43dd1db8a973846d3ac10ffdc3c", "title": "ECG signal analysis and arrhythmia detection on IoT wearable medical devices", "text": "Healthcare is one of the most rapidly expanding application areas of the Internet of Things (IoT) technology. IoT devices can be used to enable remote health monitoring of patients with chronic diseases such as cardiovascular diseases (CVD). In this paper we develop an algorithm for ECG analysis and classification for heartbeat diagnosis, and implement it on an IoT-based embedded platform. This algorithm is our proposal for a wearable ECG diagnosis device, suitable for 24-hour continuous monitoring of the patient. 
We use the Discrete Wavelet Transform (DWT) for the ECG analysis, and a Support Vector Machine (SVM) classifier. The best classification accuracy achieved is 98.9%, for a feature vector of size 18, and 2493 support vectors. Different implementations of the algorithm on the Galileo board help demonstrate that the computational cost is such that the ECG analysis and classification can be performed in real time."} {"_id": "7dd5afc660970d82491c9015a52bf23ab79bf650", "title": "Ubiquitous Data Accessing Method in IoT-Based Information System for Emergency Medical Services", "text": "The rapid development of Internet of things (IoT) technology makes it possible to connect various smart objects together through the Internet and provide more data interoperability methods for application purposes. Recent research shows more potential applications of IoT in information intensive industrial sectors such as healthcare services. However, the diversity of the objects in IoT causes the heterogeneity problem of the data format in the IoT platform. Meanwhile, the use of IoT technology in applications has spurred the increase of real-time data, which makes information storage and access more difficult and challenging. In this research, first a semantic data model is proposed to store and interpret IoT data. Then a resource-based data accessing method (UDA-IoT) is designed to acquire and process IoT data ubiquitously to improve the accessibility to IoT data resources. Finally, we present an IoT-based system for emergency medical services to demonstrate how to collect, integrate, and interoperate IoT data flexibly in order to provide support to emergency medical services. The result shows that the resource-based IoT data accessing method is effective in a distributed heterogeneous data environment for supporting timely and ubiquitous data access in a cloud and mobile computing platform."} {"_id": "072a0db716fb6f8332323f076b71554716a7271c", "title": "The impact of the MIT-BIH Arrhythmia Database", "text": "The MIT-BIH Arrhythmia Database was the first generally available set of standard test material for evaluation of arrhythmia detectors, and it has been used for that purpose as well as for basic research into cardiac dynamics at about 500 sites worldwide since 1980. It has lived a far longer life than any of its creators ever expected. Together with the American Heart Association Database, it played an interesting role in stimulating manufacturers of arrhythmia analyzers to compete on the basis of objectively measurable performance, and much of the current appreciation of the value of common databases, both for basic research and for medical device development and evaluation, can be attributed to this experience.
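The DWT-plus-SVM pipeline in the wearable-ECG abstract above can be sketched with pywt and scikit-learn. The wavelet, decomposition level, and per-band statistics here are my assumptions, not the paper's exact 18-dimensional feature vector, and the synthetic "beats" stand in for segmented ECG.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(beat, wavelet="db4", level=4):
    """Per-band summary statistics of the wavelet decomposition of a beat."""
    feats = []
    for c in pywt.wavedec(beat, wavelet, level=level):
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return feats

# Synthetic stand-in 'beats'; real work segments beats from ECG recordings.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
normal = [np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(256)
          for _ in range(20)]
abnormal = [np.sin(2 * np.pi * 9 * t) + 0.1 * rng.standard_normal(256)
            for _ in range(20)]

X = np.array([dwt_features(b) for b in normal + abnormal])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```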
In this article, we briefly review the history of the database, describe its contents, discuss what we have learned about database design and construction, and take a look at some of the later projects that have been stimulated by both the successes and the limitations of the MIT-BIH Arrhythmia Database."} {"_id": "44159c85dec6df7a257cbe697bfc854ecb1ebb0b", "title": "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.", "text": "The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet.org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise."} {"_id": "f9ac31a6550f449454ceedebc01826f5c6785a26", "title": "A One-Stage Correction of the Blepharophimosis Syndrome Using a Standard Combination of Surgical Techniques", "text": "The aim of this study was to evaluate the efficacy of a one-stage treatment for the blepharophimosis-ptosis-epicanthus inversus syndrome (BPES) using a combination of standard surgical techniques. This is a retrospective interventional case series study of 21 BPES patients with a 1-year minimum follow-up period. The one-stage intervention combined three different surgical procedures in the following order: Z-epicanthoplasty for the epicanthus, transnasal wiring of the medial canthal ligaments for the telecanthus, and a bilateral fascia lata sling for ptosis correction. Preoperative and postoperative measurements of the horizontal lid fissure length (HFL), vertical lid fissure width (VFW), nasal intercanthal distance (ICD), and the ratio between the intercanthal distance and the horizontal fissure length (ICD/HFL) were analyzed using Student\u2019s t test for paired variables.
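For readers who want to touch the recordings behind the MIT-BIH and PhysioNet abstracts above, the community-maintained wfdb Python package (a separate tool, not part of either paper) can fetch records straight from PhysioNet. The snippet assumes the package is installed and the network is reachable.

```python
import wfdb

# Record 100 of the MIT-BIH Arrhythmia Database, first 10 s (fs = 360 Hz).
record = wfdb.rdrecord("100", pn_dir="mitdb", sampto=3600)
annotation = wfdb.rdann("100", "atr", pn_dir="mitdb", sampto=3600)

print(record.fs, record.sig_name)      # sampling rate, channel names
print(record.p_signal.shape)           # (samples, channels)
print(list(zip(annotation.sample, annotation.symbol))[:5])  # beat labels
```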
The mean preoperative measurements were 4.95\u00a0\u00b1\u00a01.13\u00a0mm for the VFW, 20.90\u00a0\u00b1\u00a02.14\u00a0mm for the HFL, 42.45\u00a0\u00b1\u00a02.19\u00a0mm for the ICD, and 2.04\u00a0\u00b1\u00a00.14\u00a0mm for the ICD/HFL ratio. The mean postoperative measurements were 7.93\u00a0\u00b1\u00a01.02\u00a0mm for the VFW, 26.36\u00a0\u00b1\u00a01.40\u00a0mm for the HFL, 32.07\u00a0\u00b1\u00a01.96\u00a0mm for the ICD, and 1.23\u00a0\u00b1\u00a00.09\u00a0mm for the ICD/HFL ratio. All these values and their differences were statistically significant (P\u00a0<\u00a00.0001). All of the patients developed symmetric postoperative inferior version lagophthalmus, a complication that tended to decrease over time. One-stage correction of BPES is safe and efficient with the surgical techniques described."} {"_id": "3d425a44b54f505a5d280653a3b4f992d4836c80", "title": "Time and cognitive load in working memory.", "text": "According to the time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004), the cognitive load a given task involves is a function of the proportion of time during which it captures attention, thus impeding other attention-demanding processes. Accordingly, the present study demonstrates that the disruptive effect on concurrent maintenance of memory retrievals and response selections increases with their duration. Moreover, the effect on recall performance of concurrent activities does not go beyond their duration insofar as the processes are attention demanding. Finally, these effects are not modality specific, as spatial processing was found to disrupt verbal maintenance. These results suggest a sequential and time-based function of working memory in which processing and storage rely on a single and general purpose attentional resource needed to run executive processes devoted to constructing, maintaining, and modifying ephemeral representations."} {"_id": "e60e755af36853ba2c6c6f6b60e82903df167e56", "title": "Supervised LogEuclidean Metric Learning for Symmetric Positive Definite Matrices", "text": "Metric learning has been shown to be highly effective to improve the performance of nearest neighbor classification. In this paper, we address the problem of metric learning for symmetric positive definite (SPD) matrices such as covariance matrices, which arise in many real-world applications. Naively using standard Mahalanobis metric learning methods under the Euclidean geometry for SPD matrices is not appropriate, because the difference of SPD matrices can be a non-SPD matrix and thus the obtained solution can be uninterpretable. To cope with this problem, we propose to use a properly parameterized LogEuclidean distance and optimize the metric with respect to kernel-target alignment, which is a supervised criterion for kernel learning. Then the resulting non-trivial optimization problem is solved by utilizing the Riemannian geometry. Finally, we experimentally demonstrate the usefulness of our LogEuclidean metric learning algorithm on real-world classification tasks for EEG signals and texture patches."} {"_id": "08973fdcd763a2c7855dae39e8040df69aa420dc", "title": "Visualizing Scholarly Publications and Citations to Enhance Author Profiles", "text": "With data on scholarly publications becoming more abundant and accessible, there exist new opportunities for using this information to provide rich author profiles to display and explore scholarly work. 
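The Log-Euclidean distance at the heart of the SPD metric-learning abstract above is short enough to state in code: Euclidean (Frobenius) distance between matrix logarithms. The nearest-neighbour step and the random SPD construction are my illustrative additions; the paper's kernel-target-alignment optimization is not reproduced here.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_dist(A, B):
    """d(A, B) = ||logm(A) - logm(B)||_F for SPD matrices A and B."""
    return np.linalg.norm(logm(A) - logm(B), ord="fro")

def random_spd(n, rng):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)     # SPD by construction

rng = np.random.default_rng(0)
train = [(random_spd(4, rng), label) for label in (0, 0, 1, 1)]
query = random_spd(4, rng)

# 1-NN under the Log-Euclidean metric (plain Euclidean on matrix logs).
pred = min(train, key=lambda cl: log_euclidean_dist(query, cl[0]))[1]
print(pred)
```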
We present a pair of linked visualizations connected to the Microsoft Academic Graph that can be used to explore the publications and citations of individual authors. We provide an online application with which a user can manage collections of papers and generate these visualizations."} {"_id": "6e785a402a60353e6e22d6883d3998940dcaea96", "title": "Three Models for the Description of Language", "text": "The grammar of a language is a device that describes the structure of that language. The grammar consists of a set of rules whose goal is twofold: first, these rules can be used to create sentences of the associated language and only these; second, they can be used to classify whether a given sentence is an element of the language or not. The goal of a linguist is to discover grammars that are simple and yet are able to fully span the language. In [1] Chomsky describes three possible options of increasing complexity for English grammars: Finite-state, Phrase Structure and Transformational. This paper briefly presents these three grammars and summarizes Chomsky\u2019s analysis and results, which state that finite-state grammars are inadequate because they fail to span all possible sentences of the English language, and phrase structure grammar is overly complex."} {"_id": "b0572307afb7e769f360267d893500893f5d6b3d", "title": "SemEval-2017 Task 7: Detection and Interpretation of English Puns", "text": "A pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another word, for an intended humorous or rhetorical effect. Though a recurrent and expected feature in many discourse types, puns stymie traditional approaches to computational lexical semantics because they violate their one-sense-per-context assumption. This paper describes the first competitive evaluation for the automatic detection, location, and interpretation of puns. We describe the motivation for these tasks, the evaluation methods, and the manually annotated data set. Finally, we present an overview and discussion of the participating systems\u2019 methodologies, resources, and results."} {"_id": "a92eac4415719698d7d2097ef9564e7b36699010", "title": "Stakeholder engagement, social auditing and corporate sustainability", "text": "Purpose \u2013 To identify the applicability of social auditing as an approach for engaging stakeholders in assessing and reporting on corporate sustainability and its performance. Design/methodology/approach \u2013 Drawing upon the framework of AA1000 and the social auditing studies, this paper links stakeholder engagement, social auditing and corporate sustainability with a view to applying dialogue-based social auditing to address corporate sustainability. Findings \u2013 This paper identifies a \u201cmatch\u201d between corporate sustainability and social auditing, as both aim at improving the social, environmental and economic performance of an organisation, considering the well-being of a wider range of stakeholders and requiring the engagement of stakeholders in the process. This paper suggests that social auditing through engaging stakeholders via dialogue could be applied to build trust, identify commitment, and promote co-operation amongst stakeholders and corporations. Research limitations/implications \u2013 This research requires further empirical research into the practicality of social auditing in addressing corporate sustainability and the determination of the limitations of dialogue-based social auditing.
Practical implications \u2013 Social auditing has been identified as a useful mechanism for balancing differing interests among stakeholders and corporations in a democratic business society. The application of social auditing in developing and achieving corporate sustainability has apparent practical implications. Originality/value \u2013 This paper examines the applicability of dialogue-based social auditing in helping business to move towards sustainability. Social auditing as a process of assessing and reporting on corporate social and environmental performance through engaging stakeholders via dialogue could be applied to build trust, identify commitment, and promote cooperation amongst stakeholders and corporations."} {"_id": "3b81c6743e909268a6471295eddde13fd6520678", "title": "Using Semantic Web Technologies for Exploratory OLAP: A Survey", "text": "This paper describes the convergence of some of the most influential technologies in the last few years, namely data warehousing (DW), on-line analytical processing (OLAP), and the Semantic Web (SW). OLAP is used by enterprises to derive important business-critical knowledge from data inside the company. However, the most interesting OLAP queries can no longer be answered on internal data alone, external data must also be discovered (most often on the web), acquired, integrated, and (analytically) queried, resulting in a new type of OLAP, exploratory OLAP. When using external data, an important issue is knowing the precise semantics of the data. Here, SW technologies come to the rescue, as they allow semantics (ranging from very simple to very complex) to be specified for web-available resources. SW technologies do not only support capturing the \u201cpassive\u201d semantics, but also support active inference and reasoning on the data. The paper first presents a characterization of DW/OLAP environments, followed by an introduction to the relevant SW foundation concepts. Then, it describes the relationship of multidimensional (MD) models and SW technologies, including the relationship between MD models and SW formalisms. Next, the paper goes on to survey the use of SW technologies for data modeling and data provisioning, including semantic data annotation and semantic-aware extract, transform, and load (ETL) processes. Finally, all the findings are discussed and a number of directions for future research are outlined, including SW support for intelligent MD querying, using SW technologies for providing context to data warehouses, and scalability issues."} {"_id": "9a8a36ce97b5b5324f4b5e5ee19b61df0b2e756b", "title": "Do cheerfulness, exhilaration, and humor production moderate pain tolerance? A FACS study", "text": "Prior studies have shown that watching a funny film leads to an increase in pain tolerance. The present study aimed at separating three factors considered potentially essential (mood, behavior, and cognition related to humor) and examined whether they are responsible for this effect. Furthermore, the study examined whether trait cheerfulness and trait seriousness, as measured by the State-Trait-Cheerfulness-Inventory (STCI; Ruch et al. 1996), moderate changes in pain tolerance.
Fifty-six female subjects were assigned randomly to three groups, each having a different task to pursue while watching a funny film: (1) get into a cheerful mood without smiling or laughing (\u201cCheerfulness\u201d); (2) smile and laugh extensively (\u201cExhilaration\u201d); and (3) produce a humorous commentary to the film (\u201cHumor production\u201d). Pain tolerance was measured using the cold pressor test before, immediately after, and twenty minutes after the film. Results indicated that pain tolerance increased for participants from before to after watching the funny film and remained high for the following twenty minutes. This effect was moderated by facial but not verbal indicators of enjoyment of humor. Participants low in trait seriousness had an overall higher pain tolerance. Subjects with a high score in trait cheerfulness showed an increase in pain tolerance after producing humor while watching the film whereas subjects low in trait cheerfulness showed a similar increase after smiling and laughter during the film."} {"_id": "f040434dd7dbc965c04b386b6b206325698a635b", "title": "A sneak peek into digital innovations and wearable sensors for cardiac monitoring", "text": "Many mobile phone or tablet applications have been designed to control cardiovascular risk factors (obesity, smoking, sedentary lifestyle, diabetes and hypertension) or to optimize treatment adherence. Some have been shown to be useful, but the long-term benefits remain to be demonstrated. Digital stethoscopes make the interpretation of abnormal heart sounds easier, and the development of pocket-sized echo machines may quickly and significantly expand the use of ultrasounds. Daily home monitoring of pulmonary artery pressures with wireless implantable sensors has been shown to be associated with a significant decrease in hospital readmissions for heart failure. There are more and more non-invasive, wireless, and wearable sensors designed to monitor heart rate, heart rate variability, respiratory rate, arterial oxygen saturation, and thoracic fluid content. They have the potential to change the way we monitor and treat patients with cardiovascular diseases in the hospital and beyond. Some may have the ability to improve quality of care, decrease the number of medical visits and hospitalizations, and ultimately health care costs. Validation and outcome studies are needed to clarify, among the growing number of digital innovations and wearable sensors, which tools have real clinical value."} {"_id": "4f96e06db144823d16516af787e96d13073b4316", "title": "Applications of Data Mining Techniques in Telecom Churn Prediction", "text": "In this competitive world, business has become highly saturated. The field of telecommunication especially faces complex challenges due to a number of vibrant, competitive service providers. Therefore it has become very difficult for providers to retain existing customers.
Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is time for the telecom industry to take the necessary steps to retain customers and stabilize its market value. This paper explores the application of data mining techniques in predicting likely churners and the impact of attribute selection on identifying churn. It also compares the efficiency of Decision tree and Neural Network classifiers and lists their performances."} {"_id": "6d8984600e3cd9ae9d6b803f43f2410fa5c0ad0b", "title": "Learning from Corrupted Binary Labels via Class-Probability Estimation", "text": "Many supervised learning problems involve learning from samples whose labels are corrupted in some way. For example, each label may be flipped with some constant probability (learning with label noise), or one may have a pool of unlabelled samples in lieu of negative samples (learning from positive and unlabelled data). This paper uses class-probability estimation to study these and other corruption processes belonging to the mutually contaminated distributions framework (Scott et al., 2013), with three conclusions. First, one can optimise balanced error and AUC without knowledge of the corruption parameters. Second, given estimates of the corruption parameters, one can minimise a range of classification risks. Third, one can estimate corruption parameters via a class-probability estimator (e.g. kernel logistic regression) trained solely on corrupted data. Experiments on label noise tasks corroborate our analysis. 1. Learning from corrupted binary labels. In many practical scenarios involving learning from binary labels, one observes samples whose labels are corrupted versions of the actual ground truth. For example, in learning from class-conditional label noise (CCN learning), the labels are flipped with some constant probability (Angluin & Laird, 1988). In positive and unlabelled learning (PU learning), we have access to some positive samples, but in lieu of negative samples only have a pool of samples whose label is unknown (Denis, 1998). More generally, suppose there is a notional clean distribution D over instances and labels. We say a problem involves learning from corrupted binary labels if we observe training samples drawn from some corrupted distribution D_corr such that the observed labels do not represent those we would observe under D. A fundamental question is whether one can minimise a given performance measure with respect to D, given access only to samples from D_corr. Intuitively, in general this requires knowledge of the parameters of the corruption process that determines D_corr. This yields two further questions: are there measures for which knowledge of these corruption parameters is unnecessary, and for other measures, can we estimate these parameters? In this paper, we consider corruption problems belonging to the mutually contaminated distributions framework (Scott et al., 2013). We then study the above questions through the lens of class-probability estimation, with three conclusions. First, optimising balanced error (BER) as-is on corrupted data equivalently optimises BER on clean data, and similarly for the area under the ROC curve (AUC).
That is, these measures can be optimised without knowledge of the corruption process parameters; further, we present evidence that these are essentially the only measures with this property. Second, given estimates of the corruption parameters, a range of classification measures can be minimised by thresholding corrupted class-probabilities. Third, under some assumptions, these corruption parameters may be estimated from the range of the corrupted class-probabilities. For all points above, observe that learning requires only corrupted data. Further, corrupted class-probability estimation can be seen as treating the observed samples as if they were uncorrupted. Thus, our analysis gives justification (under some assumptions) for this apparent heuristic in problems such as CCN and PU learning. While some of our results are known for the special cases of CCN and PU learning, our interest is in determining to what extent they generalise to other label corruption problems. This is a step towards a unified treatment of these problems. We now fix notation and formalise the problem. 2. Background and problem setup. Fix an instance space X. We denote by D some distribution over X \u00d7 {\u00b11}, with (X, Y) \u223c D a pair of random variables. Any D may be expressed via the class-conditional distributions (P, Q) = (P(X | Y = 1), P(X | Y = \u22121)) and base rate \u03c0 = P(Y = 1), or equivalently via the marginal distribution M = P(X) and class-probability function \u03b7 : x \u21a6 P(Y = 1 | X = x). When referring to these constituent distributions, we write D as D_{P,Q,\u03c0} or D_{M,\u03b7}. 2.1. Classifiers, scorers, and risks. A classifier is any function f : X \u2192 {\u00b11}. A scorer is any function s : X \u2192 R. Many learning methods (e.g. SVMs) output a scorer, from which a classifier is formed by thresholding about some t \u2208 R. We denote the resulting classifier by thresh(s, t) : x \u21a6 sign(s(x) \u2212 t). The false positive and false negative rates of a classifier f are denoted FPR(f), FNR(f), and are defined by P_{X \u223c Q}(f(X) = 1) and P_{X \u223c P}(f(X) = \u22121) respectively. Given a function \u03a8 : [0, 1]^3 \u2192 [0, 1], a classification performance measure Class^D_\u03a8 : {\u00b11}^X \u2192 [0, 1] assesses the performance of a classifier f via (Narasimhan et al., 2014) Class^D_\u03a8(f) = \u03a8(FPR(f), FNR(f), \u03c0). A canonical example is the misclassification error, where \u03a8 : (u, v, p) \u21a6 p \u00b7 v + (1 \u2212 p) \u00b7 u. Given a scorer s, we use Class^D_\u03a8(s; t) to refer to Class^D_\u03a8(thresh(s, t)). The \u03a8-classification regret of a classifier f : X \u2192 {\u00b11} is regret^D_\u03a8(f) = Class^D_\u03a8(f) \u2212 inf_{g : X \u2192 {\u00b11}} Class^D_\u03a8(g). A loss is any function \u2113 : {\u00b11} \u00d7 R \u2192 R_+. Given a distribution D, the \u2113-risk of a scorer s is defined as L^D_\u2113(s) = E_{(X, Y) \u223c D}[\u2113(Y, s(X))]. (1) The \u2113-regret of a scorer, regret^D_\u2113, is as per the \u03a8-regret. We say \u2113 is strictly proper composite (Reid & Williamson, 2010) if argmin_s L^D_\u2113(s) is some strictly monotone transformation \u03c8 of \u03b7, i.e. we can recover class-probabilities from the optimal prediction via the link function \u03c8. We call class-probability estimation (CPE) the task of minimising Equation 1 for some strictly proper composite \u2113. The conditional Bayes-risk of a strictly proper composite \u2113 is L_\u2113 : \u03b7 \u21a6 \u03b7 \u00b7 \u2113_1(\u03c8(\u03b7)) + (1 \u2212 \u03b7) \u00b7 \u2113_{\u22121}(\u03c8(\u03b7)).
We call \u2113 strongly proper composite with modulus \u03bb if L_\u2113 is \u03bb-strongly concave (Agarwal, 2014). Canonical examples of such losses are the logistic and exponential loss, as used in logistic regression and AdaBoost respectively. Table 1 (common quantities on the clean versus corrupted distributions): joint distribution, D versus Corr(D, \u03b1, \u03b2, \u03c0_corr) (or D_corr); class-conditionals, P, Q versus P_corr, Q_corr; base rate, \u03c0 versus \u03c0_corr; class-probability, \u03b7 versus \u03b7_corr; \u03a8-optimal threshold, t^D_\u03a8 versus t^{D_corr}_\u03a8. 2.2. Learning from contaminated distributions. Suppose D_{P,Q,\u03c0} is some \u201cclean\u201d distribution where performance will be assessed. (We do not assume that D is separable.) In MC learning (Scott et al., 2013), we observe samples from some corrupted distribution Corr(D, \u03b1, \u03b2, \u03c0_corr) over X \u00d7 {\u00b11}, for some unknown noise parameters \u03b1, \u03b2 \u2208 [0, 1] with \u03b1 + \u03b2 < 1; where the parameters are clear from context, we occasionally refer to the corrupted distribution as D_corr. The corrupted class-conditional distributions P_corr, Q_corr are P_corr = (1 \u2212 \u03b1) \u00b7 P + \u03b1 \u00b7 Q and Q_corr = \u03b2 \u00b7 P + (1 \u2212 \u03b2) \u00b7 Q, (2) and the corrupted base rate \u03c0_corr in general has no relation to the clean base rate \u03c0. (If \u03b1 + \u03b2 = 1, then P_corr = Q_corr, making learning impossible, whereas if \u03b1 + \u03b2 > 1, we can swap P_corr, Q_corr.) Table 1 summarises common quantities on the clean and corrupted distributions. From (2), we see that none of P_corr, Q_corr or \u03c0_corr contain any information about \u03c0 in general. Thus, estimating \u03c0 from Corr(D, \u03b1, \u03b2, \u03c0_corr) is impossible in general. The parameters \u03b1, \u03b2 are also non-identifiable, but can be estimated under some assumptions on D (Scott et al., 2013). 2.3. Special cases of MC learning. Two special cases of MC learning are notable. In learning from class-conditional label noise (CCN learning) (Angluin & Laird, 1988), positive samples have labels flipped with probability \u03c1_+, and negative samples with probability \u03c1_\u2212. This can be shown to reduce to MC learning with \u03b1 = \u03c0_corr^{\u22121} \u00b7 (1 \u2212 \u03c0) \u00b7 \u03c1_\u2212 and \u03b2 = (1 \u2212 \u03c0_corr)^{\u22121} \u00b7 \u03c0 \u00b7 \u03c1_+, (3) and the corrupted base rate \u03c0_corr = (1 \u2212 \u03c1_+) \u00b7 \u03c0 + \u03c1_\u2212 \u00b7 (1 \u2212 \u03c0). (See Appendix C for details.) In learning from positive and unlabelled data (PU learning) (Denis, 1998), one has access to unlabelled samples in lieu of negative samples. There are two subtly different settings: in the case-controlled setting (Ward et al., 2009), the unlabelled samples are drawn from the marginal distribution M, corresponding to MC learning with \u03b1 = 0, \u03b2 = \u03c0, and \u03c0_corr arbitrary. In the censoring setting (Elkan & Noto, 2008), observations are drawn from D followed by a label censoring procedure; this is in fact a special case of CCN (and hence MC) learning with \u03c1_\u2212 = 0. 3. BER and AUC are immune to corruption. We first show that optimising balanced error and AUC on corrupted data is equivalent to doing so on clean data. Thus, with a suitably rich function class, one can optimise balanced error and AUC from corrupted data without knowledge of the corruption process parameters. 3.1. BER minimisation is immune to label corruption.
The balanced error (BER) (Brodersen et al., 2010) of a classifier is simply the mean of the per-class error rates, BER(f) = (FPR(f) + FNR(f))/2. This is a popular measure in imbalanced learning problems (Cheng et al., 2002; Guyon et al., 2004) as it penalises sacrificing accuracy on the rare class in favour of accuracy on the dominant class. The negation of the BER is also known as the AM (arithmetic mean) metric (Menon et al., 2013). The BER-optimal classifier thresholds the class-probability function at the base rate (Menon et al., 2013), so that: argmin_{f : X \u2192 {\u00b11}} BER(f) = thresh(\u03b7, \u03c0) (4) and argmin_{f : X \u2192 {\u00b11}} BER_corr(f) = thresh(\u03b7_corr, \u03c0_corr), (5) where \u03b7_corr denotes the corrupted class-probability function. As Equation 4 depends on \u03c0, it may appear that one must know \u03c0 to minimise the clean BER from corrupted data. Surprisingly, the BER-optimal classifiers in Equations 4 and 5 coincide. This is because of the following relationship between"} {"_id": "0584a69d27f55d726e72b69f21ca1bbab400c498", "title": "Intelligent criminal identification system", "text": "In many countries, the number of crime incidents reported per day is increasing dramatically. In Sri Lanka, the Department of Police is the major organization responsible for preventing crime. In general, Sri Lankan police stations utilize paper-based information storage systems and do not employ computer-based applications to any great extent. Due to this reliance on paper-based systems, police officers have to spend a lot of time as well as manpower to analyze existing crime information and to identify suspects for crime incidents. Hence the need for an efficient means of crime investigation has arisen. Data mining is one aspect of crime investigation, for which numerous techniques are available. This paper highlights the use of the data mining techniques of clustering and classification for effective investigation of crimes. Further, the paper aims to identify suspects by analyzing existing evidence in situations where no witnesses or forensic clues are present."} {"_id": "9581009c34216cac062888c5ccf055453db17881", "title": "Intelligent Tutoring Systems and Learning Outcomes: A Meta-Analysis", "text": "Intelligent Tutoring Systems (ITS) are computer programs that model learners\u2019 psychological states to provide individualized instruction. They have been developed for diverse subject areas (e.g., algebra, medicine, law, reading) to help learners acquire domain-specific, cognitive and metacognitive knowledge. A meta-analysis was conducted on research that compared the outcomes from students learning from ITS to those learning from non-ITS learning environments. The meta-analysis examined how effect sizes varied with type of ITS, type of comparison treatment received by learners, type of learning outcome, whether knowledge to be learned was procedural or declarative, and other factors. After a search of major bibliographic databases, 107 effect sizes involving 14,321 participants were extracted and analyzed. The use of ITS was associated with greater achievement in comparison with teacher-led, large-group instruction (g = .42), non-ITS computer-based instruction (g = .57), and textbooks or workbooks (g = .35). There was no significant difference between learning from ITS and learning from individualized human tutoring (g = \u2013.11) or small-group instruction (g = .05).
Significant, positive mean effect sizes were found regardless of whether the ITS was used as the principal means of instruction, a supplement to teacher-led instruction, an integral component of teacher-led instruction, or an aid to homework. Significant, positive effect sizes were found at all levels of education, in almost all subject domains evaluated, and whether or not the ITS provided feedback or modeled student misconceptions. The claim that ITS are relatively effective tools for learning is consistent with our analysis of potential publication bias."} {"_id": "8618f895585234058f424bd1a5f7043244b6d696", "title": "Kalman filtering over a packet-delaying network: A probabilistic approach", "text": "In this paper, we consider Kalman filtering over a packet-delaying network. Given the probability distribution of the delay, we can characterize the filter performance via a probabilistic approach. We assume that the estimator maintains a buffer of length D so that at each time k, the estimator is able to retrieve all available data packets up to time k \u2212 D + 1. Both the cases of a sensor with and without the computation capability necessary for filter updates are considered. When the sensor has no computation capability, for a given D, we give lower and upper bounds on the probability for which the estimation error covariance is within a prescribed bound. When the sensor has computation capability, we show that the previously derived lower and upper bounds are equal to each other. An approach for determining the minimum buffer length for a required performance in probability is given and an evaluation of the number of expected filter updates is provided. Examples are provided to demonstrate the theory developed in the paper."} {"_id": "402f850dff86fb601d34b2841e6083ac0f928edd", "title": "SCNN: An accelerator for compressed-sparse convolutional neural networks", "text": "Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs, especially in mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and zero-valued activations that arise from the common ReLU operator. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to a multiplier array, where they are extensively reused; product accumulation is performed in a novel accumulator array. On contemporary neural networks, SCNN can improve both performance and energy by a factor of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator."} {"_id": "f3b84862bbe7eded638ab33c566c53c17bb1f83c", "title": "Hidden Community Detection in Social Networks", "text": "We introduce a new paradigm that is important for community detection in the realm of network analysis. Networks contain a set of strong, dominant communities, which interfere with the detection of weak, natural community structure.
When most of the members of the weak communities also belong to stronger communities, they are extremely hard to uncover. We call the weak communities the hidden community structure. We present a novel approach called HICODE (HIdden COmmunity DEtection) that identifies the hidden community structure as well as the dominant community structure. By weakening the strength of the dominant structure, one can uncover the hidden structure beneath. Likewise, by reducing the strength of the hidden structure, one can more accurately identify the dominant structure. In this way, HICODE tackles both tasks simultaneously. Extensive experiments on real-world networks demonstrate that HICODE outperforms several state-of-the-art community detection methods in uncovering both the dominant and the hidden structure. For example, in the Facebook university social networks, we find multiple non-redundant sets of communities that are strongly associated with residential hall, year of registration or career position of the faculty or students, while the state-of-the-art algorithms mainly locate the dominant ground truth category. Due to the difficulty of labeling all ground truth communities in real-world datasets, HICODE provides a promising approach to pinpoint the existing latent communities and uncover communities for which there is no ground truth. Finding this unknown structure is an extremely important community detection problem."} {"_id": "7b14b89089cc86fc75bcb56c673d8316e05f5d67", "title": "HiStory: a hierarchical storyboard interface design for video browsing on mobile devices", "text": "This paper presents an interactive thumbnail-based video browser for mobile devices such as smartphones featuring a touch screen. Developed as part of on-going research and supported by user studies, it introduces HiStory, a hierarchical storyboard design offering an interface metaphor that is familiar and intuitive yet supports fast and effective completion of Known-Item-Search tasks by rapidly providing an overview of a video's content with varying degrees of granularity."} {"_id": "7f7464349a4fbd6fc33444ffb2413228deb6dca2", "title": "Idiopathic isolated clitoromegaly: A report of two cases", "text": "BACKGROUND: Clitoromegaly is a frequent congenital malformation, but acquired clitoral enlargement is relatively rare. METHODS: Two acquired clitoromegaly cases treated in Ataturk Training Hospital, Izmir, Turkey are presented. RESULTS: History from both patients revealed clitoromegaly over the last three years. Neither gynecological nor systemic abnormalities were detected in either patient. Karyotype analyses and hormonal tests were normal. Abdominal and gynaecological ultrasound did not show any cystic lesion or other abnormal finding. Computerized tomography scan of the adrenal glands was normal. Clitoroplasty with preservation of neurovascular pedicles was performed for the treatment of clitoromegaly. CONCLUSION: The patients were diagnosed as \"idiopathic isolated\" clitoromegaly. To the best of our knowledge, there has been no detailed report about idiopathic clitoromegaly in the literature."} {"_id": "b91144901bccdead9a32f330f0d83dc1d84e759b", "title": "On the design and implementation of linear differential microphone arrays.", "text": "Differential microphone array (DMA), a particular kind of sensor array that is responsive to the differential sound pressure field, has a broad range of applications in sound recording, noise reduction, signal separation, dereverberation, etc.
Traditionally, an Nth-order DMA is formed by combining, in a linear manner, the outputs of a number of DMAs up to (and including) the order of N - 1. This method, though simple and easy to implement, suffers from a number of drawbacks and practical limitations. This paper presents an approach to the design of linear DMAs. The proposed technique first transforms the microphone array signals into the short-time Fourier transform (STFT) domain and then converts the DMA beamforming design to simple linear systems to solve. It is shown that this approach is much more flexible than the traditional methods in designing different directivity patterns. Methods are also presented to deal with the white noise amplification problem that is considered to be the biggest hurdle for DMAs, particularly higher-order implementations."} {"_id": "346f4f5cfb33dbba203ceb049d38542118e15f92", "title": "Data-free parameter pruning for Deep Neural Networks", "text": "Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove up to 85% of the total parameters in an MNIST-trained network, and about 35% for AlexNet, without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network."} {"_id": "047ff9a3a05cce0d6594eb9300d50aaf5f93f55b", "title": "An 82.4% efficiency package-bondwire-based four-phase fully integrated buck converter with flying capacitor for area reduction", "text": "Multi-phase converters have become a topic of great interest due to their high output power capacity and output ripple cancellation effect. They are even more beneficial to today's high-frequency fully integrated converters with the output capacitor integrated on-chip. As the on-chip decoupling capacitors are one of the dominant chip area consumers, reducing their size directly reduces cost. It is reported that a 5\u00d7 capacitor area reduction can be achieved with a four-phase converter compared to a single-phase one [1]. However, the penalty is obvious: every extra phase comes with an inductor, which also counts as cost and becomes more dominant as the number of phases increases."} {"_id": "1c9b9f3b3bf89f1c051c07411c226719aa468923", "title": "Object Detection in High-Resolution Remote Sensing Images Using Rotation Invariant Parts Based Model", "text": "In this letter, we propose a rotation invariant parts-based model to detect objects with complex shape in high-resolution remote sensing images. Specifically, the geospatial objects with complex shape are first divided into several main parts, and the structure information among parts is described and regulated in polar coordinates to achieve rotation invariance on configuration. Meanwhile, the pose variance of each part relative to the object is also defined in our model. In encoding the features of the rotated parts and objects, a new rotation invariant feature is proposed by extending histograms of oriented gradients.
During the final detection step, a clustering method is introduced to locate the parts in objects, and that method can also be used to fuse the detection results. In this way, an efficient detection model is constructed, and the experimental results demonstrate the robustness and precision of our proposed detection model."} {"_id": "2453dd38cde21f3248b55d281405f11d58168fa9", "title": "Multi-scale Patch Aggregation (MPA) for Simultaneous Detection and Segmentation", "text": "Aiming at simultaneous detection and segmentation (SD-S), we propose a proposal-free framework, which detects and segments object instances via mid-level patches. We design a unified trainable network on patches, which is followed by a fast and effective patch aggregation algorithm to infer object instances. Our method benefits from end-to-end training. Without object proposal generation, computation time can also be reduced. In experiments, our method yields 62.1% and 61.8% mAPr on VOC2012 segmentation val and VOC2012 SDS val, which are state-of-the-art at the time of submission. We also report results on the Microsoft COCO test-std/test-dev datasets in this paper."} {"_id": "702c5b4c444662bc53f6d1f92a4de88efe68c071", "title": "Learning to simplify: fully convolutional networks for rough sketch cleanup", "text": "In this paper, we present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches such as those obtained from scanning pencil sketches. We convert the rough sketch into a simplified version which is then amenable to vectorization. This is all done in a fully automatic way without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, is able to process images of any dimensions and aspect ratio as input, and outputs a simplified sketch which has the same dimensions as the input image. In order to teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., vector images as input and long computation time; and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images."} {"_id": "915c4bb289b3642489e904c65a47fa56efb60658", "title": "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", "text": "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks.
We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results."} {"_id": "9201bf6f8222c2335913002e13fbac640fc0f4ec", "title": "Fully convolutional networks for semantic segmentation", "text": ""} {"_id": "5360ea82abdc3223b0ede0179fe5842c180b70ed", "title": "Scientific Table Search Using Keyword Queries", "text": "Tables are common and important in scientific documents, yet most text-based document search systems do not capture structures and semantics specific to tables. How to bridge different types of mismatch between keyword queries and scientific tables, and what influences ranking quality, needs to be carefully investigated. This paper considers the structure of tables and gives different emphasis to table components. On the query side, thanks to external knowledge such as knowledge bases and ontologies, key concepts are extracted and used to build structured queries, and target quantity types are identified and used to expand original queries. A probabilistic framework is proposed to incorporate structural and semantic information from both query and table sides. We also construct and release TableArXiv, a high-quality dataset with 105 queries and corresponding relevance judgements for scientific table search. Experiments demonstrate significantly higher accuracy overall and at the top of the rankings than several baseline methods."} {"_id": "e29bca5e57cb7d4edd00eaa6667794663984c77f", "title": "Decision support for Cybersecurity risk planning", "text": "Security countermeasures help ensure the confidentiality, availability, and integrity of information systems by preventing or mitigating asset losses from Cybersecurity attacks. Due to uncertainty, the financial impact of threats attacking assets is often difficult to measure quantitatively, and thus it is difficult to prescribe which countermeasures to employ. In this research, we describe a decision support system for calculating the uncertain risk faced by an organization under cyber attack as a function of uncertain threat rates, countermeasure costs, and impacts on its assets. The system uses a genetic algorithm to search for the best combination of countermeasures, allowing the user to determine the preferred tradeoff between the cost of the portfolio and resulting risk. Data collected from manufacturing firms provide an example of results under realistic input conditions. Assuring a secure information technology (IT) environment for the transaction of commerce is a major concern. The magnitude of the task is growing yearly, as attackers become more knowledgeable, more determined, and bolder in their efforts. According to a lead security expert at International Data Corporation, a global provider of market intelligence and advisory services for the IT community, \u201cEmerging new attack vectors, more targeted campaigns, compliance, and the layering of new technologies on the corporate infrastructure are all factors having a tremendous impact on the overall risk to an organization\u201d [21]. Given that corporate expenditures are usually a good indicator of the level of concern about an issue, security is obviously near the top of many IT executives' lists.
According to IDC, the global market for providers of information security services is projected to exceed $32.5 billion in 2010 [21]. The New York Times recently noted that \u201cthe intrusion into Google's computers and related attacks from within China on some thirty other companies point to the rising sophistication of such assaults and the vulnerability of even the best defenses\u201d [25]. The Times further states that, according to the Computer Security Institute, malware infections are up from one-half to almost two-thirds of companies surveyed last year, at an average cost of $235,000 for each organization. Finally, the newspaper observes, malware is being exploited in new terrains with malicious code that turns on cellphone microphones and cameras for the purpose of industrial spying [25]. Because of the variety of methods attackers use to try to infiltrate and disrupt \u2026"} {"_id": "d0858a2f403004bf0c002ca071884f5b2d002b22", "title": "Memory reduction method for deep neural network training", "text": "Training deep neural networks requires a large amount of memory, making very deep neural networks difficult to fit on accelerator memories. In order to overcome this limitation, we present a method to reduce the amount of memory needed for training a deep neural network. The method suppresses the increase in memory usage during the backward pass by reusing the memory regions allocated for the forward pass. Experimental results show that our method reduced the memory occupied during training by 44.7% on VGGNet without affecting accuracy. Our method also enabled a training speedup by allowing the mini-batch size to be doubled."} {"_id": "cd3007e5c5c8b9a3771e7d99502cdc388b369e8a", "title": "MATLAB based code for 3D joint inversion of Magnetotelluric and Direct Current resistivity imaging data", "text": "M. Israil, A. Singh, Anita Devi, and Pravin K. Gupta (Indian Institute of Technology, Roorkee, 247667, India; mohammad.israil@gmail.com). Presented at the 24th EM Induction Workshop, Helsing\u00f8r, Denmark, August 12-19, 2018."} {"_id": "371d0da66420ed6885c4badc3d4b7eb086d4292c", "title": "An Efficient Privacy-Preserving Ranked Keyword Search Method", "text": "Cloud data owners prefer to outsource documents in an encrypted form for the purpose of privacy preservation. Therefore it is essential to develop efficient and reliable ciphertext search techniques. One challenge is that relationships between documents are normally concealed in the process of encryption, which leads to significant degradation of search accuracy. Also, the volume of data in data centers has grown dramatically. This makes it even more challenging to design ciphertext search schemes that can provide efficient and reliable online information retrieval on large volumes of encrypted data. In this paper, a hierarchical clustering method is proposed to support more search semantics and also to meet the demand for fast ciphertext search within a big data environment. The proposed hierarchical approach clusters the documents based on the minimum relevance threshold, and then partitions the resulting clusters into sub-clusters until the constraint on the maximum cluster size is reached. In the search phase, this approach achieves linear computational complexity against an exponential increase in the size of the document collection.
In order to verify the authenticity of search results, a structure called minimum hash sub-tree is designed in this paper. Experiments have been conducted using a collection built from IEEE Xplore. The results show that with a sharp increase of documents in the dataset, the search time of the proposed method increases linearly whereas the search time of the traditional method increases exponentially. Furthermore, the proposed method has an advantage over the traditional method in the rank privacy and relevance of retrieved documents."} {"_id": "aed183e7258aff722192ca8b2368b51a7f817b1a", "title": "Distributed Denial of Service: Taxonomies of Attacks, Tools, and Countermeasures", "text": "Distributed Denial of Service (DDoS) attacks have become a large problem for users of computer systems connected to the Internet. DDoS attackers hijack secondary victim systems, using them to wage a coordinated large-scale attack against primary victim systems. As new countermeasures are developed to prevent or mitigate DDoS attacks, attackers are constantly developing new methods to circumvent these new countermeasures. In this paper, we describe DDoS attack models and propose taxonomies to characterize the scope of DDoS attacks, the characteristics of the software attack tools used, and the countermeasures available. These taxonomies illustrate similarities and patterns in different DDoS attacks and tools, to assist in the development of more generalized solutions to countering DDoS attacks, including new derivative attacks."} {"_id": "30f79f6193580a2afc1a495dd6046ac8a74b4714", "title": "Meta-Analytic Review of Leader-Member Exchange Theory: Correlates and Construct Issues", "text": ""}
{"_id": "c46988a5a538bf160f84c78bed237c53e3f903d6", "title": "Data-efficient hand motor imagery decoding in EEG-BCI by using Morlet wavelets & Common Spatial Pattern algorithms", "text": "EEG-based Brain Computer Interfaces (BCIs) translate the user's intent into action from noisy brain signals recorded from the scalp (electroencephalography, EEG). This is usually achieved by looking at the pattern of brain activity across many trials while the subject is imagining the performance of an instructed action - the process known as motor imagery. Nevertheless, existing motor imagery classification algorithms do not always achieve good performances because of the noisy and non-stationary nature of the EEG signal and inter-subject variability. Thus, current EEG BCI takes a considerable upfront toll on patients, who have to submit to lengthy training sessions before even being able to use the BCI. In this study, we developed a data-efficient classifier for left/right hand motor imagery by combining in our pattern recognition both the oscillation frequency range and the scalp location. We achieve this by using a combination of Morlet wavelets and Common Spatial Pattern theory to deal with non-stationarity and noise. The system achieves an average accuracy of 88% across subjects and was trained with about a dozen (10-15) training examples per class, reducing the size of the training pool by up to 100-fold and making it a very data-efficient approach for EEG BCI."} {"_id": "117bbf27f5e0dab72ebf5c3d7eeb5ca381015b30", "title": "Cluster-Based Scalable Network Services", "text": "We identify three fundamental requirements for scalable network services: incremental scalability and overflow growth provisioning, 24x7 availability through fault masking, and cost-effectiveness. We argue that clusters of commodity workstations interconnected by a high-speed SAN are exceptionally well-suited to meeting these challenges for Internet-server workloads, provided the software infrastructure for managing partial failures and administering a large cluster does not have to be reinvented for each new service.
To this end, we propose a general, layered architecture for building cluster-based scalable network services that encapsulates the above requirements for reuse, and a service-programming model based on composable workers that perform transformation, aggregation, caching, and customization (TACC) of Internet content. For both performance and implementation simplicity, the architecture and TACC programming model exploit BASE, a weaker-than-ACID data semantics that results from trading consistency for availability and relying on soft state for robustness in failure management. Our architecture can be used as an \u201coff the shelf\u201d infrastructural platform for creating new network services, allowing authors to focus on the \u201ccontent\u201d of the service (by composing TACC building blocks) rather than its implementation. We discuss two real implementations of services based on this architecture: TranSend, a Web distillation proxy deployed to the UC Berkeley dialup IP population, and HotBot, the commercial implementation of the Inktomi search engine. We present detailed measurements of TranSend\u2019s performance based on substantial client traces, as well as anecdotal evidence from the TranSend and HotBot experience, to support the claims made for the architecture."} {"_id": "f2e36a4d531311f534378fc77c29650c469b69d8", "title": "Multi-scale CNN Stereo and Pattern Removal Technique for Underwater Active Stereo System", "text": "Demand for capturing dynamic scenes in underwater environments is rapidly growing. Passive stereo can capture dynamic scenes; however, shapes with textureless surfaces or irregular reflections cannot be recovered by the technique. In our system, we add a pattern projector to the stereo camera pair so that artificial textures are augmented on the objects. To use the system in underwater environments, several problems must be compensated for, i.e., refraction and disturbance by fluctuation and bubbles. Further, since the surfaces of the objects are affected by bubbles, projected patterns, etc., those noises and patterns should be removed from the captured images to recover the original texture. To solve these problems, we propose three approaches: a depth-dependent calibration, a Convolutional Neural Network (CNN) stereo method, and a CNN-based texture recovery method. The depth-dependent calibration is our analysis to find the acceptable depth range for approximation by central projection, and thus the target depth for calibration. In terms of CNN stereo, unlike common CNN-based stereo methods which do not consider strong disturbances like refraction or bubbles, we designed a novel CNN architecture for stereo matching using multi-scale information, which is intended to be robust against such disturbances. Finally, we propose multi-scale bubble-removal and projected-pattern removal methods using CNNs to recover the original textures. Experimental results are shown to prove the effectiveness of our method compared with state-of-the-art techniques. Furthermore, reconstruction of a live swimming fish is demonstrated to confirm the feasibility of our techniques."} {"_id": "a7d94d683c631d5f8fe4fada6fe0bbb692703324", "title": "Advances in Digital Forensics XIII", "text": "In digital forensics, examinations are carried out to explain events and demonstrate the root cause from a number of plausible causes. Yin\u2019s approach to case study research offers a systematic process for investigating occurrences in their real-world contexts.
The approach is well suited to examining isolated events and also addresses questions about causality and the reliability of findings. The techniques that make Yin\u2019s approach suitable for research also apply to digital forensic examinations. The merits of case study research are highlighted in previous work that established the suitability of the case study research method for conducting digital forensic examinations. This research extends the previous work by demonstrating the practicality of Yin\u2019s case study method in examining digital events. The research examines the relationship between digital evidence \u2013 the effect \u2013 and its plausible causes, and how patterns can be identified and applied to explain the events. Establishing these patterns supports the findings of a forensic examination. Analytic strategies and techniques inherent in Yin\u2019s case study method are applied to identify and analyze patterns in order to establish the findings of a digital forensic examination."} {"_id": "8a2fcac4ab1e45cdff8a9637877dd2c19bffc4e0", "title": "A Machine Learning Approach for SQL Queries Response Time Estimation in the Cloud", "text": "Cloud computing provides on-demand services with a pay-as-you-go model. Customers expect quality from providers, and QoS control over DBMS services demands decision making, usually based on performance aspects. It is essential that the system be able to assimilate the characteristics of the application in order to estimate the execution time of its queries. Machine learning techniques are a promising option in this context, since they offer a wide range of algorithms that build predictive models for this purpose based on past observations of the demand and of the DBMS behavior. In this work, we propose and evaluate the use of regression methods for modeling SQL query execution time in the cloud. This process uses different techniques, such as bag-of-words, statistical filtering and genetic algorithms. We use K-Fold Cross-Validation for measuring the quality of the models in each step. Experiments were carried out based on data from database systems in the cloud generated by the TPC-C benchmark."} {"_id": "552d5338c6151cd0e4b61cc31ba5bc507d5db52f", "title": "Ontology based context modeling and reasoning using OWL", "text": "Here we propose an OWL encoded context ontology (CONON) for modeling context in pervasive computing environments, and for supporting logic-based context reasoning. CONON provides an upper context ontology that captures general concepts about basic context, and also provides extensibility for adding domain-specific ontology in a hierarchical manner. Based on this context ontology, we have studied the use of logic reasoning to check the consistency of context information, and to reason over low-level, explicit context to derive high-level, implicit context. By giving a performance study for our prototype, we quantitatively evaluate the feasibility of logic-based context reasoning for non-time-critical applications in pervasive computing environments, where we always have to deal carefully with the limitation of computational resources."} {"_id": "649e0a31d3a16b10109f2c9627d07b16e15290c0", "title": "OPINE: Extracting Product Features and Opinions from Reviews", "text": "Consumers often have to wade through a large number of on-line reviews in order to make an informed product choice.
We introduce OPINE, an unsupervised, high-precision information extraction system which mines product reviews in order to build a model of product features and their evaluation by reviewers."} {"_id": "ec2027c2dd93e4ee8316cc0b3069e8abfdcc2ecf", "title": "Latent Variable PixelCNNs for Natural Image Modeling", "text": "We study probabilistic models of natural images and extend the autoregressive family of PixelCNN models by incorporating latent variables. Subsequently, we describe two new generative image models that exploit different image transformations as latent variables: a quantized grayscale view of the image or a multi-resolution image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN models: 1) their tendency to focus on low-level image details, while largely ignoring high-level image information, such as object shapes, and 2) their computationally costly procedure for image sampling. We experimentally demonstrate the benefits of our LatentPixelCNN models, in particular showing that they produce much more realistic-looking image samples than previous state-of-the-art probabilistic models."} {"_id": "544f6fa29fa08f949b7dc61aa9f7887078fbfc0b", "title": "Top-down control of visual attention in object detection", "text": "Current computational models of visual attention focus on bottom-up information and ignore scene context. However, studies in visual cognition show that humans use context to facilitate object detection in natural scenes by directing their attention or eyes to diagnostic regions. Here we propose a model of attention guidance based on global scene configuration. We show that the statistics of low-level features across the scene image determine where a specific object (e.g. a person) should be located. Human eye movements show that regions chosen by the top-down model agree with regions scrutinized by human observers performing a visual search task for people. The results validate the proposition that top-down information from visual context modulates the saliency of image regions during the task of object detection. Contextual information provides a shortcut for efficient object detection systems."} {"_id": "cceda5edb064cbe563724d3c6b6d61140747aa15", "title": "Continuous software engineering: A roadmap and agenda", "text": "Throughout its short history, software development has been characterized by harmful disconnects between important activities such as planning, development and implementation. The problem is further exacerbated by the episodic and infrequent performance of activities such as planning, testing, integration and releases. Several emerging phenomena reflect attempts to address these problems. For example, Continuous Integration is a practice which has emerged to eliminate discontinuities between development and deployment. In a similar vein, the recent emphasis on DevOps recognizes that the integration between software development and its operational deployment needs to be a continuous one. We argue that a similar continuity is required between business strategy and development, BizDev being the term we coin for this. These disconnects are even more problematic given the need for reliability and resilience in the complex and data-intensive systems being developed today. We identify a number of continuous activities which together we label as \u2018Continuous \u2217\u2019 (i.e. Continuous Star) which we present as part of an overall roadmap for Continuous Software Engineering.
We argue for a continuous (but not necessarily rapid) software engineering delivery pipeline. We conclude the paper with a research agenda."} {"_id": "6ff347f7c7324a4038639fc58b8deea9038ec053", "title": "Stable Marriage with Ties and Unacceptable Partners", "text": "An instance of the classical stable marriage problem involves n men and n women, and each person ranks all n members of the opposite sex in strict order of preference. The effect of allowing ties in the preference lists has been investigated previously, and three natural definitions of stability arise. In this paper, we extend this study by allowing a preference list to involve ties and/or be incomplete. We show that, under the weakest notion of stability, the stable matchings need not all be of the same cardinality, and the decision problem related to finding a maximum cardinality stable matching is NP-complete, even if the ties occur in the preference lists of one sex only. This result has important implications for practical matching schemes such as the well-known National Resident Matching Program [9]. In the cases of the other two notions of stability, Irving [5] has described algorithms for testing whether a stable matching exists, and for constructing such a matching if one does exist, where a preference list is complete but may involve ties. We demonstrate how to extend these algorithms to the case where a preference list may be incomplete and/or involve ties."} {"_id": "ca579dbe53b33424abb13b35c5b7597ef19600ad", "title": "Speech recognition with primarily temporal cues.", "text": "Nearly perfect speech recognition was observed under conditions of greatly reduced spectral information. Temporal envelopes of speech were extracted from broad frequency bands and were used to modulate noises of the same bandwidths. This manipulation preserved temporal envelope cues in each band but restricted the listener to severely degraded information on the distribution of spectral energy. The identification of consonants, vowels, and words in simple sentences improved markedly as the number of bands increased; high speech recognition performance was obtained with only three bands of modulated noise. Thus, the presentation of a dynamic temporal pattern in only a few broad spectral regions is sufficient for the recognition of speech."} {"_id": "929a376c6fea1376baf40fc2979cfbdd867f03ab", "title": "Soft decoding of JPEG 2000 compressed images using bit-rate-driven deep convolutional neural networks", "text": "Lossy image compression methods always introduce various unpleasant artifacts into the compressed results, especially at low bit-rates. In recent years, many effective soft decoding methods for JPEG compressed images have been proposed. However, to the best of our knowledge, very little work has been done on soft decoding of JPEG 2000 compressed images. Inspired by the outstanding performance of Convolutional Neural Networks (CNNs) in various computer vision tasks, we present a soft decoding method for JPEG 2000 by using multiple bit-rate-driven deep CNNs. More specifically, in the training stage, we train a series of deep CNNs using a large set of high-quality training images and the corresponding JPEG 2000 compressed images at different coding bit-rates. In the testing stage, for an input compressed image, the CNN trained with the nearest coding bit-rate is selected to perform soft decoding.
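As an illustration of the stable marriage record above, here is a sketch of Gale-Shapley deferred acceptance handling incomplete lists; one standard fact it reflects is that breaking ties arbitrarily and running the classical algorithm yields a weakly stable matching. The instance data is invented.

```python
# Gale-Shapley deferred acceptance with (possibly) incomplete preference lists.
def gale_shapley(men_prefs, women_prefs):
    # Precompute each woman's ranking of the men on her list.
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)               # men who still need to propose
    next_proposal = {m: 0 for m in men_prefs}
    fiance = {}                          # woman -> currently engaged man
    while free:
        m = free.pop()
        prefs = men_prefs[m]
        if next_proposal[m] >= len(prefs):
            continue                     # m exhausted his (incomplete) list
        w = prefs[next_proposal[m]]
        next_proposal[m] += 1
        if m not in rank[w]:             # m is unacceptable to w
            free.append(m)
        elif w not in fiance:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])       # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free.append(m)
    return {m: w for w, m in fiance.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1"]}
women = {"w1": ["m1", "m2"], "w2": ["m1"]}
print(gale_shapley(men, women))  # {'m1': 'w1'}; m2 stays unmatched, list exhausted
```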
Extensive experiments demonstrate the effectiveness of the presented soft decoding framework, which greatly improves the visual quality and objective scores of JPEG 2000 compressed images."} {"_id": "71b3f1d399f0d118df4fb877c7c703d9fda6e648", "title": "Satisfaction, trust and online purchase intention: A study of consumer perceptions", "text": "This paper analyzes the relations among customer trust, customer satisfaction and purchase intention in the electronic commerce shopping environment. Drawing on the theory of customer trust, customer satisfaction and purchase intention, and based on a survey of 102 respondents from universities in China, the United Kingdom, and the United States, the results showed that there are significant correlations between satisfaction and trust, satisfaction and purchase intention, and trust and purchase intention; moreover, when shopping online, more than 50% of respondents chose Taobao and Tmall, with Amazon second. This kind of study can provide suggestions and guidance for internet merchants."} {"_id": "7176b952e09e8a83dcc41779237eea55d0e75c20", "title": "Discovering Temporal Retweeting Patterns for Social Media Marketing Campaigns", "text": "Social media has become one of the most popular marketing channels for many companies, which aim at maximizing their influence by various marketing campaigns conducted from their official accounts on social networks. However, most of these marketing accounts merely focus on the contents of their tweets. Less effort has been made on understanding tweeting time, which is a major contributing factor in terms of attracting customers' attention and maximizing the influence of a social marketing campaign. To that end, in this paper, we provide a focused study of temporal retweeting patterns and their influence on social media marketing campaigns. Specifically, we investigate the users' retweeting patterns by modeling their retweeting behaviors as a generative process, which considers temporal, social, and topical factors. Moreover, we validate the predictive power of the model on the dataset collected from Sina Weibo, the most popular microblog platform in China. By discovering the temporal retweeting patterns, we analyze the temporally popular topics and recommend tweets to users in a time-aware manner. Finally, experimental results show that the proposed algorithm outperforms other baseline methods. This model is applicable for companies to conduct their marketing campaigns at the right time on social media."} {"_id": "4b9c85a32ccebb851b94f6da17307461e50345a2", "title": "From learning models of natural image patches to whole image restoration", "text": "Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration.
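A minimal sketch of the bit-rate-driven model selection step from the JPEG 2000 soft-decoding record above; the model objects are placeholders standing in for trained CNNs, and the bit-rate grid is an assumption.

```python
# One restoration model per training bit-rate; pick the nearest at test time.
trained_models = {0.1: "cnn_0.1bpp", 0.25: "cnn_0.25bpp",
                  0.5: "cnn_0.5bpp", 1.0: "cnn_1.0bpp"}

def select_model(coding_bitrate_bpp):
    # Choose the model trained at the coding bit-rate closest to the input's.
    nearest = min(trained_models, key=lambda r: abs(r - coding_bitrate_bpp))
    return trained_models[nearest]

print(select_model(0.3))  # -> "cnn_0.25bpp"
```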
Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch-based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting."} {"_id": "0352ed76cb27ff9b0edecfcf9556bc1e19756e9e", "title": "Note-Taking With Computers: Exploring Alternative Strategies for Improved Recall", "text": "Three experiments examined note-taking strategies and their relation to recall. In Experiment 1, participants were instructed either to take organized lecture notes or to try and transcribe the lecture, and they either took their notes by hand or typed them into a computer. Those instructed to transcribe the lecture using a computer showed the best recall on immediate tests, and the subsequent experiments focused on note-taking using computers. Experiment 2 showed that taking organized notes produced the best recall on delayed tests. In Experiment 3, however, when participants were given the opportunity to study their notes, those who had tried to transcribe the lecture showed better recall on delayed tests than those who had taken organized notes. Correlational analyses of data from all 3 experiments revealed that for those who took organized notes, working memory predicted note-quantity, which predicted recall on both immediate and delayed tests. For those who tried to transcribe the lecture, in contrast, only note-quantity was a consistent predictor of recall. These results suggest that individuals who have poor working memory (an ability traditionally thought to be important for note-taking) can still take effective notes if they use a note-taking strategy (transcribing using a computer) that can help level the playing field for students of diverse cognitive abilities."} {"_id": "cfa092829c4c7a42ec77ab6844661e1dae082172", "title": "Analytical Tools for Blockchain: Review, Taxonomy and Open Challenges", "text": "Bitcoin has introduced a new concept that could feasibly revolutionise the entire Internet as it exists, and positively impact on many types of industries including, but not limited to, banking, the public sector and supply chains. This innovation is grounded on pseudo-anonymity and thrives on its innovative decentralised architecture based on blockchain technology. Blockchain is pushing forward a race of transaction-based applications with trust establishment without the need for a centralised authority, promoting accountability and transparency within the business process. However, a blockchain ledger (e.g., Bitcoin) tends to become very complex, and specialised tools, collectively called \u201cBlockchain Analytics\u201d, are required to allow individuals, law enforcement agencies and service providers to search, explore and visualise it. Over the last few years, several analytical tools have been developed with capabilities that allow, e.g., mapping relationships, examining the flow of transactions and filtering crime instances as a way to enhance forensic investigations. This paper discusses the current state of blockchain analytical tools and presents a thematic taxonomy model based on their applications.
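To make the patch-prior comparison of the restoration record above concrete, here is a hedged sketch that fits a Gaussian Mixture Model to image patches and scores held-out patches by average log-likelihood; the patch size, component count, and random stand-in image are all assumptions.

```python
# Fit a GMM patch prior and evaluate held-out average log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.RandomState(0)
image = rng.rand(64, 64)  # stand-in for a natural training image
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)  # remove the DC component, as is common for patch priors

gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0).fit(X[:1500])
print("held-out avg log-likelihood:", gmm.score(X[1500:]))
```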
It also examines open challenges for future development and research."} {"_id": "0a7c4cec908ca18f76f5101578a2496a2dceb5e7", "title": "Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers", "text": "In the deep neural network (DNN), the hidden layers can be considered as increasingly complex feature transformations and the final softmax layer as a log-linear classifier making use of the most abstract features computed in the hidden layers. While the log-linear classifier should be different for different languages, the feature transformations can be shared across languages. In this paper we propose a shared-hidden-layer multilingual DNN (SHL-MDNN), in which the hidden layers are made common across many languages while the softmax layers are made language dependent. We demonstrate that the SHL-MDNN can reduce errors by 3-5%, relatively, for all the languages decodable with the SHL-MDNN, over the monolingual DNNs trained using only the language-specific data. Further, we show that the learned hidden layers shared across languages can be transferred to improve the recognition accuracy of new languages, with relative error reductions ranging from 6% to 28% against DNNs trained without exploiting the transferred hidden layers. It is particularly interesting that the error reduction can be achieved for target languages that are in different families from the languages used to learn the hidden layers."} {"_id": "3157ed1fbad482520ca87045b308446d8adbdedb", "title": "Communication-Efficient Learning of Deep Networks from Decentralized Data", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10\u2013100\u00d7 as compared to synchronized stochastic gradient descent."} {"_id": "8e21d353ba283bee8fd18285558e5e8df39d46e8", "title": "Federated Meta-Learning for Recommendation", "text": "Recommender systems have been widely studied from the machine learning perspective, where it is crucial to share information among users while preserving user privacy. In this work, we present a federated meta-learning framework for recommendation in which user information is shared at the level of the algorithm, instead of the model or data as adopted in previous approaches. In this framework, user-specific recommendation models are locally trained by a shared parameterized algorithm, which preserves user privacy and at the same time utilizes information from other users to help model training.
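A minimal numpy sketch in the spirit of the federated learning record above: clients take local gradient steps and the server forms a data-size-weighted average of their models. The quadratic objective, client data, and hyperparameters are toy assumptions, not the paper's experimental setup.

```python
# Iterative model averaging over simulated clients (federated-averaging style).
import numpy as np

rng = np.random.RandomState(0)
clients = [(rng.randn(20, 3), rng.randn(20)) for _ in range(5)]  # (features, targets)
w = np.zeros(3)  # shared global model (linear regression weights)

for communication_round in range(50):
    local_weights, sizes = [], []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):  # local epochs of gradient descent on this client's data
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_weights.append(w_local)
        sizes.append(len(y))
    # Server update: data-size-weighted average of the client models.
    w = np.average(local_weights, axis=0, weights=sizes)

print("global weights after federated averaging:", w)
```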
Interestingly, the model thus trained exhibits a high capacity at a small scale, which is energy- and communication-efficient. Experimental results show that recommendation models trained by meta-learning algorithms in the proposed framework outperform the state-of-the-art in accuracy and scale. For example, on a production dataset, a shared model under Google Federated Learning (McMahan et al., 2017) with 900,000 parameters has a prediction accuracy of 76.72%, while a shared algorithm under federated meta-learning with less than 30,000 parameters achieves an accuracy of 86.23%."} {"_id": "9552ac39a57daacf3d75865a268935b5a0df9bbb", "title": "Neural networks and principal component analysis: Learning from examples without local minima", "text": "We consider the problem of learning from examples in layered linear feed-forward neural networks using optimization methods, such as back propagation, with respect to the usual quadratic error function E of the connection weights. Our main result is a complete description of the landscape attached to E in terms of principal component analysis. We show that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns. All the additional critical points of E are saddle points (corresponding to projections onto subspaces generated by higher order vectors). The auto-associative case is examined in detail. Extensions and implications for the learning algorithms are discussed."} {"_id": "4f803c5d435bef1985477366231503f10739fe11", "title": "A CORDIC processor for FFT computation and its implementation using gallium arsenide technology", "text": "In this paper, the architecture and the implementation of a complex fast Fourier transform (CFFT) processor using 0.6 \u03bcm gallium arsenide (GaAs) technology are presented. This processor computes a 1024-point FFT of 16-bit complex data in less than 8 \u03bcs, working at a frequency beyond 700 MHz, with a power consumption of 12.5 W. The architecture of the processor is based on the COordinate Rotation DIgital Computer (CORDIC) algorithm, which avoids the use of conventional multiplication-and-accumulation (MAC) units, but evaluates the trigonometric functions using only add and shift operations. Improvements to the basic CORDIC architecture are introduced in order to reduce the area and power of the processor. This, together with the use of pipelining and carry save adders, produces a very regular and fast processor. The CORDIC units were fabricated and tested in order to anticipate the final performance of the processor. This work also demonstrates the maturity of GaAs technology for implementing ultrahigh-performance signal processors."} {"_id": "d2960e97d00be0e50577914811668fb05fc46bc4", "title": "A current-starved inverter-based differential amplifier design for ultra-low power applications", "text": "As silicon feature sizes decrease, more complex circuitry arrays can now be contrived on a single die. This increase in the number of on-chip devices per unit area results in increased power dissipation per unit area. In order to meet certain power and operating temperature specifications, circuit design necessitates a focus on power efficiency, which is especially important in systems employing hundreds or thousands of instances of the same device. In large arrays, a slight increase in the power efficiency of a single component is magnified by the number of instances of the device in the system.
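As an illustration of the CORDIC record above, the following is a floating-point software model of rotation-mode CORDIC, which evaluates sine and cosine with only shift-and-add-style updates plus a constant gain correction; the iteration count is an arbitrary choice.

```python
# Rotation-mode CORDIC: drive the residual angle z to zero with micro-rotations.
import math

def cordic_sin_cos(theta, iterations=24):
    # Precomputed arctangents of 2^-i and the accumulated CORDIC gain.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        # In hardware, multiplying by 2^-i is just an arithmetic shift.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y / gain, x / gain  # (sin, cos), valid for |theta| < ~1.74 rad

s, c = cordic_sin_cos(0.7)
print(s, math.sin(0.7), c, math.cos(0.7))  # agree to about 1e-7
```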
This paper proposes a fully differential, low-power current-starving inverter-based amplifier topology designed in a commercial 0.18\u03bcm process. This design achieves 46dB DC gain and a 464 kHz unity-gain frequency with a power consumption of only 145.32nW at a 700mV power supply voltage for ultra-low power, low bandwidth applications. Higher bandwidth designs are also proposed, including a 48dB DC gain, 2.4 MHz unity-gain frequency amplifier operating at 900mV with only 3.74\u03bcW power consumption."} {"_id": "82256dd95c21bcc336d01437eabba7470d92188b", "title": "Conditional connectivity", "text": "For a noncomplete graph G, the traditional definition of its connectivity is the minimum number of points whose removal results in a disconnected subgraph with components H1, . . . , Hk. The conditional connectivity of G with respect to some graph-theoretic property P is the smallest cardinality of a set S of points, if any, such that every component Hi of the disconnected graph G - S has property P. A survey of promising properties P is presented. Questions for various P-connectivities are listed in analogy with known results on connectivity and line-connectivity."} {"_id": "7957fad0ddbe7323da83d00b094caedb8eb1a473", "title": "A review of mangrove rehabilitation in the Philippines: successes, failures and future prospects", "text": "From half a million hectares at the turn of the century, Philippine mangroves have declined to only 120,000\u00a0ha while fish/shrimp culture ponds have increased to 232,000\u00a0ha. Mangrove replanting programs have thus been popular, from community initiatives (1930s\u20131950s) to government-sponsored projects (1970s) to large-scale international development assistance programs (1980s to present). Planting costs escalated from less than US$100 to over $500/ha, with half of the latter amount allocated to administration, supervision and project management. Despite heavy funds for massive rehabilitation of mangrove forests over the last two decades, the long-term survival rates of mangroves are generally low at 10\u201320%. Poor survival can be mainly traced to two factors: inappropriate species and site selection. The favored but unsuitable Rhizophora are planted in sandy substrates of exposed coastlines instead of the natural colonizers Avicennia and Sonneratia. More significantly, planting sites are generally in the lower intertidal to subtidal zones where mangroves do not thrive rather than the optimal middle to upper intertidal levels, for a simple reason. Such ideal sites have long been converted to brackishwater fishponds whereas the former are open access areas with no ownership problems. The issue of pond ownership may be complex and difficult, but such should not outweigh ecological requirements: mangroves should be planted where fishponds are, not on seagrass beds and tidal flats where they never existed. This paper reviews eight mangrove initiatives in the Philippines and evaluates the biophysical and institutional factors behind success or failure. The authors recommend specific protocols (among them pushing for a 4:1 mangrove to pond ratio recommended for a healthy ecosystem) and wider policy directions to make mangrove rehabilitation in the country more effective."} {"_id": "27434a48e7e10b011c862c297d5c29110816ec5c", "title": "What characterizes a shadow boundary under the sun and sky?", "text": "Despite decades of study, robust shadow detection remains difficult, especially within a single color image.
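A brute-force illustration of the conditional connectivity definition in the record above, with P taken to be the example property "every component has at least two points"; this exponential search is only meant to make the definition concrete on tiny graphs.

```python
# Smallest S such that G - S is disconnected and every component satisfies P.
from itertools import combinations
import networkx as nx

def conditional_connectivity(G, P):
    for k in range(1, G.number_of_nodes()):
        for S in combinations(G.nodes, k):
            H = G.copy()
            H.remove_nodes_from(S)
            comps = list(nx.connected_components(H))
            if len(comps) > 1 and all(P(c) for c in comps):
                return k, set(S)
    return None  # no such cut set exists

G = nx.cycle_graph(6)
# Removing two opposite vertices of C6 leaves two 2-vertex components.
print(conditional_connectivity(G, lambda comp: len(comp) >= 2))  # (2, {0, 3})
```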
We describe a new approach to detect shadow boundaries in images of outdoor scenes lit only by the sun and sky. The method first extracts visual features of candidate edges that are motivated by physical models of illumination and occluders. We feed these features into a Support Vector Machine (SVM) that was trained to discriminate between most-likely shadow-edge candidates and less-likely ones. Finally, we connect edges to help reject non-shadow edge candidates, and to encourage closed, connected shadow boundaries. On benchmark shadow-edge data sets from Lalonde et al. and Zhu et al., our method showed substantial improvements when compared to other recent shadow-detection methods based on statistical learning."} {"_id": "353c4ab6943b671abffd9503f16c34706bc44fdb", "title": "3D distance fields: a survey of techniques and applications", "text": "A distance field is a representation where, at each point within the field, we know the distance from that point to the closest point on any object within the domain. In addition to distance, other properties may be derived from the distance field, such as the direction to the surface, and when the distance field is signed, we may also determine if the point is internal or external to objects within the domain. The distance field has been found to be a useful construction within the areas of computer vision, physics, and computer graphics. This paper serves as an exposition of methods for the production of distance fields, and a review of alternative representations and applications of distance fields. In the course of this paper, we present various methods from all three of the above areas, and we answer pertinent questions such as \u201cHow accurate are these methods compared to each other?\u201d, \u201cHow simple are they to implement?\u201d, and \u201cWhat is the complexity and runtime of such methods?\u201d"} {"_id": "385b401bf75771b02b5721641ae04ace43d2bbcd", "title": "Large-Scale Supervised Multimodal Hashing with Semantic Correlation Maximization", "text": "Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attention has been paid to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications. Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability."} {"_id": "ff8b0e6ac8e478e7332edb2934fc943f3f826091", "title": "Noise radar using random phase and frequency modulation", "text": "Pulse compression radar is used in a great number of applications. Excellent range resolution and high electronic counter-countermeasures performance is achieved by wideband long pulses, which spread out the transmitted energy in frequency and time. By using a random noise waveform, the range ambiguity is suppressed as well.
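To ground the distance field survey record above, here is a tiny numpy example of building a signed distance field on a grid by brute force, using a sampled circle as the object; the grid resolution and shape are arbitrary.

```python
# Brute-force signed distance field for a circle sampled on a unit-square grid.
import numpy as np

n = 65
ys, xs = np.mgrid[0:n, 0:n] / (n - 1)
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
boundary = np.stack([0.5 + 0.3 * np.cos(theta), 0.5 + 0.3 * np.sin(theta)], axis=1)

# Unsigned distance: nearest boundary sample for every grid point.
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
d = np.linalg.norm(pts[:, None, :] - boundary[None, :, :], axis=2).min(axis=1)

# Sign: negative inside the object, positive outside.
inside = (xs.ravel() - 0.5) ** 2 + (ys.ravel() - 0.5) ** 2 < 0.3 ** 2
sdf = np.where(inside, -d, d).reshape(n, n)
print(sdf[32, 32], sdf[0, 0])  # ~ -0.3 at the center, positive in the corner
```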
In most applications, the random signal is transmitted directly from a noise-generating microwave source. A sine wave, which is phase or frequency modulated by random noise, is an alternative, and in this paper, the ambiguity function and the statistical characteristics of the correlation output for the latter configuration are further analyzed. Range resolution is then improved because the noise bandwidth of the modulated carrier is wider than that of the modulating signal, and the range sidelobes are also further suppressed. Random biphase modulation gives a 4-dB (\u03c0\u00b2/4) improvement, but much higher sidelobe suppression could be achieved using continuous phase/frequency modulation. Due to the randomness of the waveform, the output correlation integral is accompanied by a noise floor, which limits the possible sidelobe suppression as determined by the time-bandwidth product. In synthetic aperture radar (SAR) applications with distributed targets, this product should be large compared with the number of resolution elements inside the antenna main beam. The advantages of low range sidelobes and enhanced range resolution make frequency/phase-modulated noise radar attractive for many applications, including SAR mapping, surveillance, altimetry, and scatterometry. Computer algorithms for reference signal delay and compression are discussed as replacements for the classical delay line implementation."} {"_id": "1866c1e44f0003946f6a27def74d768d0f66799d", "title": "Collusive Piracy Prevention in P2P Content Delivery Networks", "text": "Collusive piracy is the main source of intellectual property violations within the boundary of a P2P network. Paid clients (colluders) may illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the use of open P2P networks for commercial content delivery. We propose a proactive content poisoning scheme to stop colluders and pirates from alleged copyright infringements in P2P file sharing. The basic idea is to detect pirates in a timely manner with identity-based signatures and time-stamped tokens. The scheme stops collusive piracy without hurting legitimate P2P clients by targeting poisoning on detected violators, exclusively. We developed a new peer authorization protocol (PAP) to distinguish pirates from legitimate clients. Detected pirates will receive poisoned chunks in their repeated attempts. Pirates are thus severely penalized with no chance to download successfully in tolerable time. Based on simulation results, we find a 99.9 percent prevention rate in Gnutella, KaZaA, and Freenet. We achieved an 85-98 percent prevention rate on eMule, eDonkey, Morpheus, etc. The scheme is shown to be less effective in protecting some poison-resilient networks like BitTorrent and Azureus. Our work opens up the low-cost P2P technology for copyrighted content delivery. The advantage lies mainly in minimum delivery cost, higher content availability, and copyright compliance in exploring P2P network resources."} {"_id": "f4d51f5a4af851fb33f32c35712be81648f6d0c8", "title": "RanDroid: Android malware detection using random machine learning classifiers", "text": "The growing popularity of Android-based smartphones has attracted the distribution of malicious applications developed by attackers, which has resulted in the need for sophisticated malware detection techniques. Several techniques have been proposed which use static and/or dynamic features extracted from Android applications to detect malware.
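A small numpy experiment in the spirit of the noise radar record above: a carrier phase-modulated by random noise, correlated against a delayed echo, peaks at the true delay above a noise floor. All signal parameters are invented for illustration.

```python
# Phase-modulated noise waveform and matched correlation for delay estimation.
import numpy as np

rng = np.random.RandomState(1)
N, true_delay = 4096, 300
phase_noise = np.cumsum(rng.randn(N)) * 0.3      # random-walk phase (frequency modulation)
tx = np.exp(1j * (2 * np.pi * 0.1 * np.arange(N) + phase_noise))

rx = np.zeros(N, dtype=complex)
rx[true_delay:] = tx[:-true_delay]               # echo from a single target
rx += 0.5 * (rng.randn(N) + 1j * rng.randn(N))   # receiver noise

corr = np.abs(np.correlate(rx, tx, mode="full"))[N - 1:]  # non-negative lags only
print("estimated delay:", int(np.argmax(corr)))  # -> 300
```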
Machine learning has been adopted in various malware detection techniques to overcome the overhead of manual updates. Machine learning classifiers are widely used to model Android malware patterns based on their static features and dynamic behaviour. To address the problem of malware detection, in this paper we propose a machine learning-based malware detection system for the Android platform. Our proposed system utilizes the features of collected random samples of goodware and malware apps to train the classifiers. The system extracts requested permissions and vulnerable API calls, along with the existence of an app's key information such as dynamic code, reflection code, native code, cryptographic code and database use, from applications (features which were missing in previously proposed solutions) and uses them as features in various machine learning classifiers to build a classification model. To validate the performance of the proposed system, \"RanDroid\", various experiments have been carried out, which show that RanDroid is capable of achieving a high classification accuracy of 97.7 percent."} {"_id": "5be24830fae471c496a38fdbac48872011a7de71", "title": "Accelerating Asynchronous Stochastic Gradient Descent for Neural Machine Translation", "text": "In order to extract the best possible performance from asynchronous stochastic gradient descent one must increase the mini-batch size and scale the learning rate accordingly. In order to achieve further speedup we introduce a technique that delays gradient updates, effectively increasing the mini-batch size. Unfortunately, with the increase of mini-batch size we worsen the stale gradient problem in asynchronous stochastic gradient descent (SGD), which makes model convergence poor. We introduce local optimizers which mitigate the stale gradient problem and, together with fine-tuning of our momentum, we are able to train a shallow machine translation system 27% faster than an optimized baseline with negligible penalty in BLEU."} {"_id": "7060f6062ba1cbe9502eeaaf13779aa1664224bb", "title": "A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality", "text": "Microtask crowdsourcing is increasingly critical to the creation of extremely large datasets. As a result, crowd workers spend weeks or months repeating the exact same tasks, making it necessary to understand their behavior over these long periods of time. We utilize three large, longitudinal datasets of nine million annotations collected from Amazon Mechanical Turk to examine claims that workers fatigue or satisfice over these long periods, producing lower quality work. We find that, contrary to these claims, workers are extremely stable in their quality over the entire period. To understand whether workers set their quality based on the task's requirements for acceptance, we then perform an experiment where we vary the required quality for a large crowdsourcing task. Workers did not adjust their quality based on the acceptance threshold: workers who were above the threshold continued working at their usual quality level, and workers below the threshold self-selected themselves out of the task.
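A toy sketch of the delayed-gradient-update idea from the asynchronous SGD record above: accumulating K gradients before applying them emulates a K-times-larger mini-batch. The objective, data, and hyperparameters are illustrative assumptions, and no actual asynchrony is simulated here.

```python
# Gradient accumulation: apply the average of K mini-batch gradients at once.
import numpy as np

rng = np.random.RandomState(0)
X, w_true = rng.randn(512, 4), np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

w, lr, K = np.zeros(4), 0.1, 8
accum, pending = np.zeros(4), 0
for step in range(800):
    i = rng.randint(0, len(X), size=16)              # one mini-batch
    grad = 2 * X[i].T @ (X[i] @ w - y[i]) / len(i)
    accum += grad
    pending += 1
    if pending == K:                                 # delayed update, applied once
        w -= lr * accum / K                          # average of the K gradients
        accum[:], pending = 0.0, 0

print("recovered weights:", np.round(w, 3))
```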
Capitalizing on this consistency, we demonstrate that it is possible to predict workers' long-term quality using just a glimpse of their quality on the first five tasks."} {"_id": "ea4c0f890be5f8d8a661b6aab65dc5fe9c29106d", "title": "Robust Cross-lingual Hypernymy Detection using Dependency Context", "text": "Cross-lingual Hypernymy Detection involves determining if a word in one language (\u201cfruit\u201d) is a hypernym of a word in another language (\u201cpomme\u201d, i.e., apple in French). The ability to detect hypernymy cross-lingually can aid in solving cross-lingual versions of tasks such as textual entailment and event coreference. We propose BISPARSE-DEP, a family of unsupervised approaches for cross-lingual hypernymy detection, which learns sparse, bilingual word embeddings based on dependency contexts. We show that BISPARSE-DEP can significantly improve performance on this task, compared to approaches based only on lexical context. Our approach is also robust, showing promise for low-resource settings: our dependency-based embeddings can be learned using a parser trained on related languages, with negligible loss in performance. We also crowd-source a challenging dataset for this task on four languages \u2013 Russian, French, Arabic, and Chinese. Our embeddings and datasets are publicly available."} {"_id": "101c6852b5bcf66a60ea774a6ffb04ef41a8ca13", "title": "Broadband and compact impedance transformers for microwave circuits", "text": "Most microwave circuits and antennas use impedance transformers. In a conventional power amplifier, a large number of inductors and capacitors or transmission line sections are used to realize an impedance transformer. This article presents an asymmetric broadside-coupled microstrip TLT (transmission line transformer) on a GaAs substrate. This matching network can be easily implemented using any monolithic microwave integrated circuit (MMIC) process supporting multilevel metallization."} {"_id": "2e5fadbaab27af0c2b5cc6a3481c11b2b83c4f94", "title": "Seeing Behind the Camera: Identifying the Authorship of a Photograph", "text": "We introduce the novel problem of identifying the photographer behind a photograph. To explore the feasibility of current computer vision techniques to address this problem, we created a new dataset of over 180,000 images taken by 41 well-known photographers. Using this dataset, we examined the effectiveness of a variety of features (low and high-level, including CNN features) at identifying the photographer. We also trained a new deep convolutional neural network for this task. Our results show that high-level features greatly outperform low-level features. We provide qualitative results using these learned models that give insight into our method's ability to distinguish between photographers, and allow us to draw interesting conclusions about what specific photographers shoot. We also demonstrate two applications of our method."} {"_id": "3f4f00024beda5436dce4d677f27fe4209c3790f", "title": "Participatory Air Pollution Monitoring Using Smartphones", "text": "Air quality monitoring is extremely important as air pollution has a direct impact on human health. In this paper we introduce a low-power and low-cost mobile sensing system for participatory air quality monitoring.
In contrast to traditional stationary air pollution monitoring stations, we present the design, implementation, and evaluation of GasMobile, a small and portable measurement system based on off-the-shelf components and suited to be used by a large number of people. Vital to the success of participatory sensing applications is high data quality. We improve measurement accuracy by (i) exploiting sensor readings near governmental measurement stations to keep sensor calibration up to date and (ii) analyzing the effect of mobility on the accuracy of the sensor readings to give users advice on measurement execution. Finally, we show that it is feasible to use GasMobile to create collective high-resolution air pollution maps."} {"_id": "a0015665acb00ddf1491aec436e47cfb75835b82", "title": "Bluetooth based home automation system", "text": "The past decade has seen significant advancement in the field of consumer electronics. Various \u2018intelligent\u2019 appliances such as cellular phones, air-conditioners, home security devices, home theatres, etc. are set to realize the concept of a smart home. They have given rise to a Personal Area Network in the home environment, where all these appliances can be interconnected and monitored using a single controller. Busy families and individuals with physical limitations represent an attractive market for home automation and networking. A wireless home network that does not incur additional costs of wiring would be desirable. Bluetooth technology, which emerged in the late 1990s, is an ideal solution for this purpose. This paper describes an application of Bluetooth technology in a home automation and networking environment. It proposes a network which contains a remote, mobile host controller and several client modules (home appliances). The client modules communicate with the host controller through Bluetooth devices."} {"_id": "796a6e78d4c4b63926ee956f202d874a8c4542b0", "title": "OCNet: Object Context Network for Scene Parsing", "text": "In this paper, we address the problem of scene parsing with deep learning and focus on the context aggregation strategy for robust segmentation. Motivated by the fact that the label of a pixel is the category of the object that the pixel belongs to, we introduce an object context pooling (OCP) scheme, which represents each pixel by exploiting the set of pixels that belong to the same object category as that pixel, and we call this set of pixels the object context. Our implementation, inspired by the self-attention approach, consists of two steps: (i) compute the similarities between each pixel and all the pixels, forming a so-called object context map for each pixel, which serves as a surrogate for the true object context, and (ii) represent the pixel by aggregating the features of all the pixels weighted by the similarities. The resulting representation is more robust compared to existing context aggregation schemes, e.g., the pyramid pooling module (PPM) in PSPNet and atrous spatial pyramid pooling (ASPP), which do not differentiate between context pixels that belong to the same object category and those that do not, limiting the reliability of the contextually aggregated representations. We empirically demonstrate our approach and two pyramid extensions with state-of-the-art performance on three semantic segmentation benchmarks: Cityscapes, ADE20K and LIP.
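To make the calibration idea from the GasMobile record above concrete, here is a hedged sketch that fits a least-squares linear correction from raw sensor output to co-located reference readings; all the numbers are synthetic stand-ins.

```python
# Keep sensor calibration up to date using readings taken near a reference station.
import numpy as np

raw = np.array([0.42, 0.55, 0.61, 0.73, 0.88, 0.95])        # sensor output (e.g., volts)
reference = np.array([21.0, 28.0, 31.5, 37.0, 44.5, 48.0])  # station values (e.g., ppb)

gain, offset = np.polyfit(raw, reference, deg=1)  # least-squares linear calibration

def calibrated(v):
    return gain * v + offset

print(round(calibrated(0.66), 1))  # corrected estimate for a new raw reading
```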
Code has been made available at: https://github.com/PkuRainBow/OCNet.pytorch."} {"_id": "339261e9fb670beeec379cfab65aeb728d5aecc0", "title": "RETOS: Resilient, Expandable, and Threaded Operating System for Wireless Sensor Networks", "text": "This paper presents the design principles, implementation, and evaluation of the RETOS operating system, which is specifically developed for micro sensor nodes. RETOS has four distinct objectives, which are to provide (1) a multithreaded programming interface, (2) system resiliency, (3) kernel extensibility with dynamic reconfiguration, and (4) WSN-oriented network abstraction. RETOS is a multithreaded operating system; hence it provides developers with the commonly used thread model of programming interface. We have used various implementation techniques to optimize the performance and resource usage of multithreading. RETOS also provides software solutions to separate the kernel from user applications, and supports their robust execution on MMU-less hardware. The RETOS kernel can be dynamically reconfigured, via a loadable kernel framework, so that an application-optimized and resource-efficient kernel is constructed. Finally, the networking architecture in RETOS is designed with a layering concept to provide WSN-specific network abstraction. RETOS currently supports the Atmel ATmega128, TI MSP430, and Chipcon CC2430 families of microcontrollers. Several real-world WSN applications have been developed for RETOS, and the overall evaluation of the systems is described in the paper."} {"_id": "5107aafa77a8ac9bec0cb0cb74dbef28a130dd18", "title": "MRTouch: Adding Touch Input to Head-Mounted Mixed Reality", "text": "We present MRTouch, a novel multitouch input solution for head-mounted mixed reality systems. Our system enables users to reach out and directly manipulate virtual interfaces affixed to surfaces in their environment, as though they were touchscreens. Touch input offers precise, tactile and comfortable user input, and naturally complements existing popular modalities, such as voice and hand gesture. Our research prototype combines both depth and infrared camera streams together with real-time detection and tracking of surface planes to enable robust finger-tracking even when both the hand and head are in motion. Our technique is implemented on a commercial Microsoft HoloLens without requiring any additional hardware or any user or environmental calibration. Through our performance evaluation, we demonstrate high input accuracy with an average positional error of 5.4 mm and a 95% button size of 16 mm, across 17 participants, 2 surface orientations and 4 surface materials. Finally, we demonstrate the potential of our technique to enable on-world touch interactions through 5 example applications."} {"_id": "6b7edd8d60d28c9a2fd0ce8a8a931e9487c6dfbe", "title": "Aneka Cloud Application Platform and Its Integration with Windows Azure", "text": "Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud Computing. It acts as a framework for building customized applications and deploying them on either public or private Clouds. One of the key features of Aneka is its support for provisioning resources on different public Cloud providers such as Amazon EC2, Windows Azure and GoGrid. In this chapter, we will present the Aneka platform and its integration with one of the public Cloud infrastructures, Windows Azure, which enables the usage of the Windows Azure Compute Service as a resource provider of Aneka PaaS.
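A compact numpy sketch of the object context pooling step described in the OCNet record above: per-pixel similarity maps via a softmax over dot products, followed by similarity-weighted feature aggregation; the feature map sizes are toy values, whereas real models operate on CNN feature maps.

```python
# Self-attention style object context pooling over a flattened feature map.
import numpy as np

rng = np.random.RandomState(0)
H, W, C = 4, 4, 8
X = rng.randn(H * W, C)                      # N x C pixel features, N = H*W

scores = X @ X.T / np.sqrt(C)                # similarity of every pixel pair
scores -= scores.max(axis=1, keepdims=True)  # numerical stability for softmax
attn = np.exp(scores)
attn /= attn.sum(axis=1, keepdims=True)      # each row: an object context map

context = attn @ X                           # aggregate features by similarity
print(context.shape)                         # (16, 8): one context vector per pixel
```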
The integration of the two platforms will allow users to leverage the power of the Windows Azure Platform for Aneka Cloud Computing, employing a large number of compute instances to run their applications in parallel. Furthermore, customers of the Windows Azure platform can benefit from the integration with Aneka PaaS by embracing the advanced features of Aneka in terms of multiple programming models, scheduling and management services, application execution services, accounting and pricing services and dynamic provisioning services. Finally, in addition to the Windows Azure Platform, we will illustrate in this chapter the integration of Aneka PaaS with other public Cloud platforms such as Amazon EC2 and GoGrid, and virtual machine management platforms such as Xen Server. The new support for provisioning resources on Windows Azure once again proves the adaptability, extensibility and flexibility of Aneka."} {"_id": "e2f28568031e1902d4f8ee818261f0f2c20de6dd", "title": "Distributional Semantics Resources for Biomedical Text Processing", "text": "The openly available biomedical literature contains over 5 billion words in publication abstracts and full texts. Recent advances in unsupervised language processing methods have made it possible to make use of such large unannotated corpora for building statistical language models and inducing high quality vector space representations, which are, in turn, of utility in many tasks such as text classification, named entity recognition and query expansion. In this study, we introduce the first set of such language resources created from analysis of the entire available biomedical literature, including a dataset of all 1- to 5-grams and their probabilities in these texts and new models of word semantics. We discuss the opportunities created by these resources and demonstrate their application. All resources introduced in this study are available under open licenses at http://bio.nlplab.org."} {"_id": "a20654ad8f79b08bf8f29ce66951cacba10812f6", "title": "Optimization algorithm using scrum process", "text": "The scrum process is a methodology for software development. Members of a scrum team form a self-organizing team by planning and sharing knowledge. This paper introduces an optimization algorithm that treats the population as a scrum team performing the scrum process to find an optimal solution. The proposed algorithm maintains a balance between exploration and exploitation through the specific behavior of the scrum team. The experiments compared the proposed approach with GA and PSO in finding optimal solutions of five numerical functions. The results indicate that the proposed algorithm provides the best solution and finds it quickly."} {"_id": "25b6818743a6c0b9502a1c026c653038ff505c09", "title": "Evolving a Heuristic Function for the Game of Tetris", "text": ""} {"_id": "467b58fff03e85146cf8342d381c1d1b8836da85", "title": "Relations Among Loneliness, Social Anxiety, and Problematic Internet Use", "text": "The model of problematic Internet use advanced and tested in the current study proposes that individuals' psychosocial well-being, along with their beliefs about interpersonal communication (both face-to-face and online), are important cognitive predictors of negative outcomes arising from Internet use. The study examined the extent to which social anxiety explains results previously attributed to loneliness as a predictor of preference for online social interaction and problematic Internet use.
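As an illustration of the 1- to 5-gram resource in the biomedical record above, the following counts n-grams in a toy corpus and computes maximum-likelihood probabilities; the two sentences stand in for the actual 5-billion-word literature.

```python
# Count 1- to 5-grams and estimate MLE conditional probabilities.
from collections import Counter

corpus = [
    "the patient was treated with aspirin".split(),
    "the patient responded well to treatment".split(),
]

counts = {n: Counter() for n in range(1, 6)}
for tokens in corpus:
    for n in range(1, 6):
        for i in range(len(tokens) - n + 1):
            counts[n][tuple(tokens[i:i + n])] += 1

total_unigrams = sum(counts[1].values())

def ngram_probability(ngram):
    # MLE: count(w1..wn) / count(w1..wn-1); unigrams normalize by corpus size.
    n = len(ngram)
    if n == 1:
        return counts[1][ngram] / total_unigrams
    return counts[n][ngram] / counts[n - 1][ngram[:-1]]

print(ngram_probability(("the", "patient")))  # -> 1.0 ("patient" always follows "the")
```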
The results support the hypothesis that the relationship between loneliness and preference for online social interaction is spurious, and that social anxiety is the confounding variable."} {"_id": "5829ac859bd2b301cfd53a44194aecb884e99355", "title": "The comorbid psychiatric symptoms of Internet addiction: attention deficit and hyperactivity disorder (ADHD), depression, social phobia, and hostility.", "text": "PURPOSE\nTo: (1) determine the association between Internet addiction and depression, self-reported symptoms of attention deficit and hyperactivity disorder (ADHD), social phobia, and hostility for adolescents; and (2) evaluate sex differences in the association between Internet addiction and the above-mentioned psychiatric symptoms among adolescents.\n\n\nMETHODS\nA total of 2114 students (1204 male and 910 female) were recruited for the study. Internet addiction, symptoms of ADHD, depression, social phobia, and hostility were evaluated by self-report questionnaires.\n\n\nRESULTS\nThe results demonstrated that adolescents with Internet addiction had higher ADHD symptoms, depression, social phobia, and hostility. Higher ADHD symptoms, depression, and hostility were associated with Internet addiction in male adolescents, and only higher ADHD symptoms and depression were associated with Internet addiction in female students.\n\n\nCONCLUSION\nThese results suggest that Internet addiction is associated with symptoms of ADHD and depressive disorders. However, hostility was associated with Internet addiction only in males. Effective evaluation of, and treatment for, ADHD and depressive disorders are required for adolescents with Internet addiction. More attention should be paid to male adolescents with high hostility in interventions for Internet addiction."} {"_id": "04b309af262cf2643daa93a34c1ba177cd6e7a85", "title": "Internet Addiction: The Emergence of a New Clinical Disorder", "text": "Anecdotal reports indicated that some on-line users were becoming addicted to the Internet in much the same way that others became addicted to drugs or alcohol, which resulted in academic, social, and occupational impairment. However, research among sociologists, psychologists, or psychiatrists has not formally identified addictive use of the Internet as a problematic behavior. This study investigated the existence of Internet addiction and the extent of problems caused by such potential misuse. This study utilized an adapted version of the criteria for pathological gambling defined by the DSM-IV (APA, 1994). On the basis of these criteria, case studies of 396 dependent Internet users (Dependents) and a control group of 100 non-dependent Internet users (Non-Dependents) were classified. Qualitative analyses suggest significant behavioral and functional usage differences between the two groups. Clinical and social implications of pathological Internet use and future directions for research are discussed."} {"_id": "b6f2c4f2bc1ddbeb36a0fb9d7a8ee2e74264a5ca", "title": "Internet addiction in Korean adolescents and its relation to depression and suicidal ideation: a questionnaire survey.", "text": "This study examined the relationship of Internet addiction to depression and suicidal ideation in Korean adolescents.
The participants were 1573 high-school students living in a city who completed self-report measures of the Internet Addiction Scale, the Korean version of the Diagnostic Interview Schedule for Children-Major Depression Disorder-Simple Questionnaire, and the Suicidal Ideation Questionnaire-Junior. A correlational survey design was employed. Among the sample, 1.6% were diagnosed as Internet addicts, while 38.0% were classified as possible Internet addicts. The prevalence of Internet addiction did not vary with gender. The levels of depression and suicidal ideation were highest in the Internet-addicts group. Future studies should investigate the direct relationship between psychological health problems and Internet dependency."} {"_id": "bdd7d0a098365a0a83cc15c11d708362fd2d748b", "title": "Internet addiction among Chinese adolescents: prevalence and psychological features.", "text": "BACKGROUND\nTo investigate the prevalence of Internet addiction among Chinese adolescents and to explore the psychological features associated with Internet addiction.\n\n\nMETHODS\nA total of 2620 high school students from four high schools in Changsha City were surveyed using the Diagnostic Questionnaire for Internet Addiction (YDQ), the Eysenck Personality Questionnaire (the edition for children, EPQ), the Time Management Disposition Scale (TMDS) and the Strengths and Difficulties Questionnaire (SDQ). The mean age of the whole sample was 15.19 years (ranging from 12 years to 18 years). According to the modified YDQ criteria by Beard, 64 students who were diagnosed with Internet addiction (mean age: 14.59 years) and 64 who were diagnosed as being normal in Internet usage (mean age: 14.81 years) were included in a case-control study.\n\n\nRESULTS\nThe rate of Internet use among the surveyed adolescents was 88%, among which the incidence rate of Internet addiction was 2.4%. The Internet addiction group had significantly higher scores on the EPQ subscales of neuroticism, psychoticism, and lie than the control group (P < 0.05). The Internet addiction group scored lower than the control group on the TMDS subscales of sense of control over time, sense of value of time, and sense of time efficacy (P < 0.05). Compared with the control group, the Internet addiction group also had significantly higher scores on the SDQ subscales of emotional symptoms, conduct problems, hyperactivity, and total difficulties, and lower scores on the subscale of prosocial behaviours (P < 0.05).\n\n\nCONCLUSIONS\nThe present study suggests that Internet addiction is not rare among Chinese adolescents. In addition, adolescents with Internet addiction possess different psychological features when compared with those who use the Internet less frequently."} {"_id": "416c1b1bd27032506512d27f07756e96ed1d5af0", "title": "Hardware resource estimation for heterogeneous FPGA-based SoCs", "text": "The increasing complexity of recent System-on-Chip (SoC) designs introduces new challenges for design space exploration tools. In addition to the time-to-market challenge, designers need to estimate rapidly and accurately both the performance and area occupation of complex and diverse applications. High-Level Synthesis (HLS) has emerged as an attractive solution for designers to address these challenges in order to explore a large number of SoC configurations. In this paper, we target hybrid CPU-FPGA based SoCs. We propose a high-level area estimation tool based on an analytic model without requiring register-transfer level (RTL) implementations.
This technique allows designers to estimate the required FPGA resources at the source code level to map an application to a hybrid CPU-FPGA system. The proposed model also enables fast design exploration with different trade-offs through HLS optimization pragmas. Experimental results show that the proposed analytic area model provides an accurate estimation with a negligible error (less than 5%) compared to RTL implementations."} {"_id": "6b25fe8edc92e31973238ad33a40b06d26362c73", "title": "No double-dissociation between optic ataxia and visual agnosia: Multiple sub-streams for multiple visuo-manual integrations", "text": "The current dominant view of the visual system is marked by the functional and anatomical dissociation between a ventral stream specialised for perception and a dorsal stream specialised for action. The \"double-dissociation\" between visual agnosia (VA), a deficit of visual recognition, and optic ataxia (OA), a deficit of visuo-manual guidance, considered as consecutive to ventral and dorsal damage, respectively, has provided the main argument for this dichotomic view. In the first part of this paper, we show that the currently available empirical data do not suffice to support a double-dissociation between OA and VA. In the second part, we review evidence coming from human neuropsychology and monkey data, which cast further doubts on the validity of a simple double-dissociation between perception and action because they argue for a far more complex organisation with multiple parallel visual-to-motor connections: 1. A dorso-dorsal pathway (involving the most dorsal part of the parietal and pre-motor cortices): for immediate visuo-motor control--with OA as typical disturbance. The latest research about OA is reviewed, showing how these patients exhibit deficits restricted to the most direct and fast visuo-motor transformations. We also propose that mild mirror ataxia, consisting of misreaching errors when the contralesional hand is guided to a visual goal through a mirror, could correspond to OA with an isolated \"hand effect\". 2. A ventral stream-prefrontal pathway (connections from the ventral visual stream to pre-frontal areas, by-passing the parietal areas): for \"mediate\" control (involving spatial or temporal transpositions [Rossetti, Y., & Pisella, L. (2003). Mediate responses as direct evidence for intention: Neuropsychology of Not to-, Not now- and Not there-tasks. In S. Johnson (Ed.), Cognitive Neuroscience perspectives on the problem of intentional action (pp. 67-105). MIT Press.])--with VA as typical disturbance. Preserved visuo-manual guidance in patients with VA is restricted to immediate goal-directed guidance; they exhibit deficits for delayed or pantomimed actions. 3. A ventro-dorsal pathway (involving the more ventral part of the parietal lobe and the pre-motor and pre-frontal areas): for complex planning and programming relying on high representational levels with a more bilateral organisation or an hemispheric lateralisation--with mirror apraxia, limb apraxia and spatial neglect as representatives. Mirror apraxia is a deficit that affects both hands after unilateral inferior parietal lesion, with the patients reaching systematically and repeatedly toward the virtual image in the mirror. Limb apraxia is localized on a more advanced conceptual level of object-related actions and results from deficient integrative, computational and \"working memory\" capacities of the left inferior parietal lobule.
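A deliberately simplified sketch in the spirit of the FPGA area-estimation record above: count the operators a kernel would synthesize and sum hypothetical per-operator resource costs. The cost table and operation counts are invented for illustration; a real analytic model would be calibrated against HLS reports.

```python
# Source-level analytic area model: area = sum over operations of unit costs.
RESOURCE_COST = {                      # hypothetical per-operator costs
    "add32": {"LUT": 32, "FF": 32, "DSP": 0},
    "mul32": {"LUT": 90, "FF": 60, "DSP": 3},
    "cmp32": {"LUT": 16, "FF": 1,  "DSP": 0},
}

def estimate_area(op_counts):
    total = {"LUT": 0, "FF": 0, "DSP": 0}
    for op, n in op_counts.items():
        for res, cost in RESOURCE_COST[op].items():
            total[res] += n * cost
    return total

# e.g., a fully unrolled 8-tap FIR-like kernel: 8 multiplies, 7 adds, 1 compare
print(estimate_area({"mul32": 8, "add32": 7, "cmp32": 1}))
```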
A component of spatial working memory has recently also been revealed in spatial neglect following lesions involving the network of the right inferior parietal lobule and the right frontal areas. We conclude by pointing to the differential temporal constraints and integrative capabilities of these parallel visuo-motor pathways as keys to interpret the neuropsychological deficits."} {"_id": "e0909761a727d5eaee0cf63c3374e5c59bb5e6a7", "title": "Mechanisms of decompensation and organ failure in cirrhosis: From peripheral arterial vasodilation to systemic inflammation hypothesis.", "text": "The peripheral arterial vasodilation hypothesis has been most influential in the field of cirrhosis and its complications. It has given rise to hundreds of pathophysiological studies in experimental and human cirrhosis and is the theoretical basis of life-saving treatments. It is undisputed that splanchnic arterial vasodilation contributes to portal hypertension and is the basis for manifestations such as ascites and hepatorenal syndrome, but the body of research generated by the hypothesis has revealed gaps in the original pathophysiological interpretation of these complications. The expansion of our knowledge on the mechanisms regulating vascular tone, inflammation and the host-microbiota interaction requires a broader approach to advanced cirrhosis encompassing the whole spectrum of its manifestations. Indeed, multiorgan dysfunction and failure likely result from a complex interplay where the systemic spread of bacterial products represents the primary event. The consequent activation of the host innate immune response triggers endothelial molecular mechanisms responsible for arterial vasodilation, and also jeopardizes organ integrity with a storm of pro-inflammatory cytokines and reactive oxygen and nitrogen species. Thus, the picture of advanced cirrhosis could be seen as the result of an inflammatory syndrome, in contradiction with the view of a simple hemodynamic disturbance."} {"_id": "c2db182624de2297ac97a15727dab7292410228f", "title": "Single Image Shadow Removal via Neighbor-Based Region Relighting", "text": "In this paper we present a novel method for shadow removal in single images. For each shadow region we use a trained classifier to identify a neighboring lit region of the same material. Given a pair of lit-shadow regions we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Then, we adjust the CIELAB a and b channels of the shadow region by adding constant offsets based on the difference of the median shadow and lit pixel values. We demonstrate that our approach produces results that outperform the state of the art by evaluating our method using a publicly available benchmark dataset."} {"_id": "4f48b6a243edc297f1e282e34c255d5029403116", "title": "Electric grid state estimators for distribution systems with microgrids", "text": "In the development of the smart grid, state estimation in distribution systems will likely face more challenges than at the transmission level. This paper addresses one such challenge, namely, the absence of topology information, by developing a forecast-aided topology change detection method and an event-triggered recursive Bayesian estimator to identify the correct topology.
Simulation studies with microgrid-induced changes are presented to illustrate the effectiveness of the proposed algorithm."} {"_id": "ab293bf07ac0971cd859595d86dd71c2fed5be9d", "title": "Thumb Motor Performance Varies by Movement Orientation, Direction, and Device Size During Single-Handed Mobile Phone Use", "text": "OBJECTIVE\nThe aim of this study was to determine if thumb motor performance metrics varied by movement orientation, direction, and device size during single-handed use of a mobile phone device.\n\n\nBACKGROUND\nWith the increased use of mobile phones, understanding how design factors affect and improve performance can provide better design guidelines.\n\n\nMETHOD\nA repeated measures laboratory experiment of 20 right-handed participants measured the thumb tip's 3-D position relative to a phone during reciprocal tapping tasks across four phone designs and four thumb tip movement orientations. Each movement orientation included two movement directions: an \"outward\" direction consisting of CMC (carpometacarpal) joint flexion or abduction movements and an \"inward\" direction consisting of CMC joint extension or adduction movements. Calculated metrics of the thumb's motor performance were Fitts' effective width and index of performance.\n\n\nRESULTS\nIndex of performance varied significantly across phones, with performance being generally better for the smaller devices. Performance was also significantly higher for adduction-abduction movement orientations compared to flexion-extension, and for \"outward\" compared to \"inward\" movement directions.\n\n\nCONCLUSION\nFor single-handed device use, adduction-abduction-type movements on smaller phones lead to better thumb performance.\n\n\nAPPLICATION\nThe results from this study can be used to design new mobile phone devices and keypad interfaces that optimize specific thumb motions to improve the user-interface experience during single-handed use."} {"_id": "5fc9065fe9fabc76445e8a9bc2438d0440d21225", "title": "Elliptic Curve Cryptosystems", "text": "The application of elliptic curves to the field of cryptography has been relatively recent. It has opened up a wealth of possibilities in terms of security, encryption, and real-world applications. In particular, we are interested in public-key cryptosystems that use the elliptic curve discrete logarithm problem to establish security. The objective of this thesis is to assemble the most important facts and findings into a broad, unified overview of this field. To illustrate certain points, we also discuss a sample implementation of the elliptic curve analogue of the ElGamal cryptosystem."} {"_id": "8a11400ab5d3cb349a048891ed665b95aed9f6ca", "title": "Electronic Payments of Small Amounts", "text": "This note considers the application of electronic cash to transactions in which many small amounts must be paid to the same payee and in which it is not possible to just pay the total amount afterwards. The most notable example of such a transaction is payment for phone calls. If currently published electronic cash systems are used and a full payment protocol is executed for each of the small amounts, the overall complexity of the system will be prohibitively large (time, storage and communication). This note describes how such payments can be handled in a wide class of payment systems. The solution is very easy to adapt as it only influences the payment and deposit transactions involving such payments.
Furthermore, making and verifying each small payment requires very little computation and communication, and the total complexity of both transactions is comparable to that of a payment of a fixed amount."} {"_id": "c3edb96b8c3892147cd8932f1ee8c98b336b1b1f", "title": "A 25-kHz 3rd-order continuous-time Delta-Sigma modulator using tri-level quantizer", "text": "This paper presents a 3rd-order continuous-time Delta-Sigma modulator with a tri-level quantizer, which provides a 3-dB reduction of quantization noise without dynamic element matching (DEM). The tri-level DAC linearity is analyzed, and it is shown that a highly linear tri-level DAC can be realized in a fully-differential active-RC Delta-Sigma modulator. The performance of the tri-level continuous-time Delta-Sigma modulator has been verified through simulations using a standard 0.18-\u03bcm CMOS process. It achieves 81-dB SNDR at a 3.2-MS/s sampling rate and consumes 1.14-\u03bcW power with an ideal amplifier."} {"_id": "18a20b71308afff3d06122a3c98ea9eab2f92f4d", "title": "Finding a Maximum Clique using Ant Colony Optimization and Particle Swarm Optimization in Social Networks", "text": "Interaction between users in online social networks plays a key role in social network analysis. One important type of social group is a fully connected relation between some users, which is known as a clique structure. Therefore, finding a maximum clique is essential for some analyses. In this paper, we propose a new method using the ant colony optimization algorithm and the particle swarm optimization algorithm. In the proposed method, in order to attain better results, the pheromone update process is improved by particle swarm optimization. Simulation results on popular standard social network benchmarks show a relative improvement of the proposed algorithm over the standard ant colony optimization algorithm. Keywords: social network analysis; clique problem; ACO; PSO."} {"_id": "edc70c2b07c5bf343067640d53fbc3623b79b170", "title": "Surgical Management of Hidradenitis Suppurativa: Outcomes of 590 Consecutive Patients.", "text": "BACKGROUND\nHidradenitis suppurativa is a progressive, recurrent inflammatory disease. Surgical management is potentially curative with limited efficacy data.\n\n\nOBJECTIVE\nTo evaluate hidradenitis surgical patients.\n\n\nMETHODS\nRetrospective review of outcomes of 590 consecutive surgically treated patients.\n\n\nRESULTS\nMost patients were white (91.0% [435/478]), men (337 [57.1%]), smokers (57.7% [297/515]) with Hurley Stage III disease (476 [80.7%]). Procedure types were excision (405 [68.6%]), unroofing (168 [28.5%]), and drainage (17 [2.9%]) treating disease of perianal/perineum (294 [49.8%]), axilla (124 [21.0%]), gluteal cleft (76 [12.9%]), inframammary (12 [2.0%]), and multiple surgical sites (84 [14.2%]). Postoperative complications occurred in 15 patients (2.5%) and one-fourth (144 [24.4%]) suffered postoperative recurrence, which necessitated reoperation in one-tenth (69 [11.7%]) of patients. Recurrence risk was increased by younger age (hazard ratio [HR], 0.8; 95% confidence interval [CI], 0.7-0.9), multiple surgical sites (HR, 1.6; 95% CI, 1.1-2.5), and drainage-type procedures (HR, 3.5; 95% CI, 1.2-10.7). Operative location, disease severity, gender, and operative extent did not influence recurrence rate.\n\n\nCONCLUSION\nExcision and unroofing procedures were effective treatments with infrequent complications and low recurrence rates.
Well-planned surgical treatment aiming to remove or unroof the area of intractable hidradenitis suppurativa was highly effective in the management of this challenging disease."} {"_id": "c2513f008b8da9ba3556ba7fc8dd9ba22066d1cd", "title": "Analysis of Massive MIMO-Enabled Downlink Wireless Backhauling for Full-Duplex Small Cells", "text": "Recent advancements in self-interference (SI) cancellation capability of low-power wireless devices motivate in-band full-duplex (FD) wireless backhauling in small cell networks (SCNs). In-band FD wireless backhauling concurrently allows the use of the same frequency spectrum for the backhaul as well as access links of the small cells. In this paper, using tools from stochastic geometry, we develop a framework to model the downlink rate coverage probability of a user in a given SCN with massive multiple-input-multiple-output (MIMO)-enabled wireless backhauls. The considered SCN is composed of a mixture of small cells that are configured in either in-band or out-of-band backhaul modes with a certain probability. The performance of the user in the considered hierarchical network is limited by several sources of interference, such as the backhaul interference, small cell base station (SBS)-to-SBS interference, and the SI. Moreover, due to the channel hardening effect in massive MIMO, the backhaul links only experience long term channel effects, whereas the access links experience both the long term and the short term channel effects. Consequently, the developed framework is flexible to characterize different sources of interference while capturing the heterogeneity of the access and backhaul channels. In specific scenarios, the framework enables deriving closed-form coverage probability expressions. Under perfect backhaul coverage, the simplified expressions are utilized to optimize the proportion of in-band and out-of-band small cells in the SCN in the closed form. Finally, a few remedial solutions are proposed that can potentially mitigate the backhaul interference and in turn improve the performance of in-band FD wireless backhauling. Numerical results investigate the scenarios in which in-band wireless backhauling is useful and demonstrate that maintaining a correct proportion of in-band and out-of-band FD small cells is crucial in wireless backhauled SCNs."} {"_id": "890ef19b51ee773a7fc0f187372274f93feb10a4", "title": "Applications of Trajectory Data in Transportation: Literature Review and Maryland Case Study", "text": "This paper considers applications of trajectory data in transportation, and makes two primary contributions. First, it provides a comprehensive literature review detailing ways in which trajectory data has been used for transportation systems analysis, distilling existing research into the following six areas: demand estimation, modeling human behavior, designing public transit, measuring and predicting traffic performance, quantifying environmental impact, and safety analysis. Additionally, it presents innovative applications of trajectory data for the state of Maryland, employing visualization and machine learning techniques to extract value from 20 million GPS traces. 
These visual analytics will be implemented in the Regional Integrated Transportation Information System (RITIS), which provides free data sharing and visual analytics tools to help transportation agencies attain situational awareness, evaluate performance, and share insights with the public."} {"_id": "dc8d72a17de9c26c88fb9fda0169ecd00f4a45e1", "title": "Smart Health Care: An Edge-Side Computing Perspective", "text": "Increasing health awareness and rapid technological advancements have resulted in significant growth in an emerging sector: smart health care. By the end of 2020, the total number of smart health-care devices is expected to reach 808.9 million (646 million devices without wearables and the remaining with wearables) [1], [2]. This proliferation of devices is expected to revolutionize health care by speeding up treatment and diagnostic processes, decreasing physician visit costs, and enhancing patient care quality."} {"_id": "5e6043745685210beab94f15d54f76df91dd9967", "title": "Hand segmentation for hand-object interaction from depth map", "text": "Hand segmentation for hand-object interaction is a necessary preprocessing step in many applications such as augmented reality, medical applications, and human-robot interaction. However, typical methods are based on color information, which is not robust to skin-colored objects, skin pigment differences, and lighting variations. Thus, we propose a hand segmentation method for hand-object interaction using only a depth map. The task is challenging because of the small depth difference between a hand and objects during an interaction. To overcome this challenge, we propose a two-stage random decision forest (RDF) method consisting of detecting hands and segmenting hands. To validate the proposed method, we demonstrate results on a publicly available dataset of hand segmentation for hand-object interaction. The proposed method achieves high accuracy with a short processing time compared to other state-of-the-art methods."} {"_id": "526acf565190d843758b89d37acf281639cb90e2", "title": "Finding News Citations for Wikipedia", "text": "An important editing policy in Wikipedia is to provide citations for added statements in Wikipedia pages, where statements can be arbitrary pieces of text, ranging from a sentence to a paragraph. In many cases citations are either outdated or missing altogether.\n In this work we address the problem of finding and updating news citations for statements in entity pages. We propose a two-stage supervised approach for this problem. In the first step, we construct a classifier to find out whether statements need a news citation or other kinds of citations (web, book, journal, etc.). In the second step, we develop a news citation algorithm for Wikipedia statements, which recommends appropriate citations from a given news collection. Apart from IR techniques that use the statement to query the news collection, we also formalize three properties of an appropriate citation, namely: (i) the citation should entail the Wikipedia statement, (ii) the statement should be central to the citation, and (iii) the citation should be from an authoritative source.\n We perform an extensive evaluation of both steps, using 20 million articles from a real-world news collection.
Our results are quite promising, and show that we can perform this task with high precision and at scale."} {"_id": "9592f5734a6d677323141aef4316f7ebe4b5903f", "title": "A Modular Framework for Versatile Conversational Agent Building", "text": "This paper illustrates a web-based infrastructure of an architecture for conversational agents equipped with a modular knowledge base. This solution has the advantage of allowing the building of specific modules that deal with particular features of a conversation (ranging from its topic to the manner of reasoning of the chatbot). This enhances the agent's interaction capabilities. The approach simplifies the chatbot knowledge-base design process: the knowledge base can be extended, generalized or even restricted in order to suit it to specific dialogue tasks as much as possible."} {"_id": "6ed67a876b3afd2f2fb7b5b8c0800a0398c76603", "title": "ERP integration in a healthcare environment: a case study", "text": ""} {"_id": "8fb1e94e16ca9da081ac1d313c0539dd10253cbb", "title": "Evaluating the influence of YouTube advertising for attraction of young customers", "text": "Nowadays, an increasing number of people all around the world are spending tremendous amounts of time on YouTube. To date, the factors that persuade customers to accept YouTube advertising as an advertising medium are not yet fully understood. The present paper identified four dimensions of YouTube advertising (i.e., entertainment, informativeness, customization and irritation) which may affect advertising value as well as brand awareness, and accordingly the purchase intention of consumers. The conceptual model hypothesizes that ad value strategies are positively associated with brand awareness, which in turn influences the perceived usefulness of YouTube and continued purchase behavior. For this study, data were collected from students studying at the Sapienza University of Rome. In total, 315 usable questionnaires were selected for the analysis of the variables. The results show that entertainment, informativeness and customization are the strongest positive drivers, while irritation is negatively related to YouTube advertising. On the other hand, advertising value through YouTube affects both brand awareness and the purchase intention of consumers."} {"_id": "f8e2e20d732dd831e0ad1547bd3c22a3c2a6216f", "title": "Embedding new data points for manifold learning via coordinate propagation", "text": "In recent years, a series of manifold learning algorithms have been proposed for nonlinear dimensionality reduction. Most of them can run in a batch mode for a set of given data points, but lack a mechanism to deal with new data points. Here we propose an extension approach, i.e., mapping new data points into the previously learned manifold. The core idea of our approach is to propagate the known coordinates to each of the new data points. We first formulate this task as a quadratic programming problem, and then develop an iterative algorithm for coordinate propagation. Tangent space projection and smooth splines are used to yield an initial coordinate for each new data point, according to their local geometrical relations.
Experimental results and applications to camera direction estimation and face pose estimation illustrate the validity of our approach."} {"_id": "bf3921d033eef168b8fc5bd64900e12bcd2ee2d5", "title": "Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach", "text": "We have proposed an end-to-end learning approach that trains a deep convolutional neural network (CNN) for automatic CT image segmentation, which accomplishes voxel-wise multi-class classification to directly map each voxel on 3D CT images to an anatomical label automatically. The novelties of our proposed method were (1) transforming the segmentation of anatomical structures on 3D CT images into a majority voting over the results of 2D semantic image segmentation on a number of 2D slices from different image orientations, and (2) using \u201cconvolution\u201d and \u201cdeconvolution\u201d networks to achieve the conventional \u201ccoarse recognition\u201d and \u201cfine extraction\u201d functions, which were integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage compared to previous works was its capability to accomplish real-time image segmentation on 2D slices of arbitrary CT scan range (e.g. body, chest, abdomen) and to produce correspondingly sized output. In this paper, we propose an improvement of our approach by adding an organ localization module to limit the CT image range for training and testing the deep CNNs. A database consisting of 240 3D CT scans and a human-annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment the pancreas and left kidney regions, respectively. The preliminary results showed that the accuracies of the segmentation results were improved significantly (the Jaccard index increased by 34% for the pancreas and by 8% for the kidney compared to our previous results). The effectiveness and usefulness of the proposed improvement for CT image segmentation were confirmed."} {"_id": "f1f8ff891b746373c7684ecc3b2dc12d77bf2111", "title": "Preserving the Archival Bond in Distributed Ledgers: A Data Model and Syntax", "text": "Distributed cryptographic ledgers, such as the blockchain, are now being used in recordkeeping. However, they lack a key feature of more traditional recordkeeping systems needed to establish the authenticity of records and enable reliance on them for trustworthy recordkeeping. The missing feature is known in archival science as the archival bond \u2013 the mutual relationship that exists among documents by virtue of the actions in which they participate. In this paper, we propose a novel data model and syntax using core web principles that can be used to address this shortcoming in distributed ledgers as recordkeeping systems."} {"_id": "f825ae0fb460d4ca71ca308d8887b47fef47144b", "title": "Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations", "text": "Registration of range sensor measurements is an important task in mobile robotics and has received a lot of attention. Several iterative optimization schemes have been proposed in order to align three-dimensional (3D) point scans. With the more widespread use of high-frame-rate 3D sensors and increasingly more challenging application scenarios for mobile robots, there is a need for fast and accurate registration methods that current state-of-the-art algorithms cannot always meet.
This work proposes a novel algorithm that achieves accurate point cloud registration an order of magnitude faster than the current state of the art. The speedup is achieved through the use of a compact spatial representation: the Three-Dimensional Normal Distributions Transform (3D-NDT). In addition, a fast global descriptor based on the 3D-NDT is defined and used to achieve reliable initial poses for the iterative algorithm. Finally, a closed-form expression for the covariance of the proposed method is also derived. The proposed algorithms are evaluated on two standard point cloud data sets, resulting in stable performance on a par with or better than the state of the art. The implementation is available as an open-source package for the Robot Operating System (ROS)."} {"_id": "6423e91ee7ee8d7f6765e47f2b0cb4510682f239", "title": "Optimized Spatial Hashing for Collision Detection of Deformable Objects", "text": "We propose a new approach to collision and self-collision detection of dynamically deforming objects that consist of tetrahedrons. Tetrahedral meshes are commonly used to represent volumetric deformable models, and the presented algorithm is integrated in a physically-based environment, which can be used in game engines and surgical simulators. The proposed algorithm employs a hash function for compressing a potentially infinite regular spatial grid. Although the hash function does not always provide a unique mapping of grid cells, it can be generated very efficiently and does not require complex data structures, such as octrees or BSPs. We have investigated and optimized the parameters of the collision detection algorithm, such as hash function, hash table size and spatial cell size. The algorithm can detect collisions and self-collisions in environments of up to 20k tetrahedrons in real time. Although the algorithm works with tetrahedral meshes, it can be easily adapted to other object primitives, such as triangles. Figure 1: Environment with dynamically deforming objects that consist of tetrahedrons."} {"_id": "22b01542da7f63d36435b42e97289eb92742c0ce", "title": "Effects of cognitive-behavioral therapy on brain activation in specific phobia", "text": "Little is known about the effects of successful psychotherapy on brain function in subjects with anxiety disorders. The present study aimed to identify changes in brain activation following cognitive-behavioral therapy (CBT) in subjects suffering from specific phobia. Using functional magnetic resonance imaging (fMRI), brain activation to spider videos was measured in 28 spider-phobic and 14 healthy control subjects. Phobics were randomly assigned to a therapy group (TG) and a waiting-list control group (WG). Both groups of phobics were scanned twice. Between scanning sessions, CBT was given to the TG. Before therapy, brain activation did not differ between both groups of phobics. As compared to control subjects, phobics showed greater responses to spider vs. control videos in the insula and anterior cingulate cortex (ACC). CBT strongly reduced phobic symptoms in the TG while the WG remained behaviorally unchanged. In the second scanning session, a significant reduction of hyperactivity in the insula and ACC was found in the TG compared to the WG.
These results suggest that increased activation in the insula and ACC is associated with specific phobia, whereas an attenuation of these brain responses correlates with successful therapeutic intervention."} {"_id": "0796f6cd7f0403a854d67d525e9b32af3b277331", "title": "Identifying Relations for Open Information Extraction", "text": "Open Information Extraction (IE) is the task of extracting assertions from massive corpora without requiring a pre-specified vocabulary. This paper shows that the output of state-of-the-art Open IE systems is rife with uninformative and incoherent extractions. To overcome these problems, we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs. We implemented the constraints in the REVERB Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TEXTRUNNER and WOE. More than 30% of REVERB\u2019s extractions are at precision 0.8 or higher\u2014compared to virtually none for earlier systems. The paper concludes with a detailed analysis of REVERB\u2019s errors, suggesting directions for future work."} {"_id": "18dd22835af007c736bbf4f5cdb7a54ff685aff7", "title": "Multilingual Relation Extraction using Compositional Universal Schema", "text": "When building a knowledge base (KB) of entities and relations from multiple structured KBs and text, universal schema represents the union of all input schema, by jointly embedding all relation types from input KBs as well as textual patterns expressing relations. In previous work, textual patterns are parametrized as a single embedding, preventing generalization to unseen textual patterns. In this paper we employ an LSTM to compositionally capture the semantics of relational text. We demonstrate the flexibility of our approach by evaluating in a multilingual setting, in which the English training data entities overlap with the seed KB, but the Spanish text does not. Additional improvements are obtained by tying word embeddings across languages. In extensive experiments on the English and Spanish TAC KBP benchmark, our techniques provide substantial accuracy improvements. Furthermore we find that training with the additional non-overlapping Spanish also improves English relation extraction accuracy. Our approach is thus suited to broad-coverage automated knowledge base construction in low-resource languages and domains."} {"_id": "24281c886cd9339fe2fc5881faf5ed72b731a03e", "title": "Spark: Cluster Computing with Working Sets", "text": "MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost.
Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time."} {"_id": "507da9473d1b7cf69357e9d987b71cf7b5623b4d", "title": "GENIES: a natural-language processing system for the extraction of molecular pathways from journal articles", "text": "Systems that extract structured information from natural language passages have been highly successful in specialized domains. The time is opportune for developing analogous applications for molecular biology and genomics. We present a system, GENIES, that extracts and structures information about cellular pathways from the biological literature in accordance with a knowledge model that we developed earlier. We implemented GENIES by modifying an existing medical natural language processing system, MedLEE, and performed a preliminary evaluation study. Our results demonstrate the value of the underlying techniques for the purpose of acquiring valuable knowledge from biological journals."} {"_id": "03ff3f8f4d5a700fbe8f3a3e63a39523c29bb60f", "title": "A Convolutional Neural Network for Modelling Sentences", "text": "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline."} {"_id": "4b64453e592b0d0b367e01bfc9a833de214a6519", "title": "DeepTLR: A single deep convolutional network for detection and classification of traffic lights", "text": "Reliable real-time detection of traffic lights is a major concern for the task of autonomous driving. As deep convolutional networks have proven to be a powerful tool in visual object detection, we propose DeepTLR, a camera-based system for real-time detection and classification of traffic lights. Detection and state classification are realized using a single deep convolutional network. DeepTLR does not use any prior knowledge about traffic light locations. Also the detection is executed frame by frame without using temporal information. It is able to detect traffic lights on the whole camera image without any presegmentation. This is achieved by classifying each fine-grained pixel region of the input image and performing a bounding box regression on regions of each class. We show that our algorithm is able to run on frame-rates required for real-time applications while reaching notable results."} {"_id": "e6fa6e3bc568e413ac4b52e06d28cc78da25e137", "title": "Integration of Radio Frequency Identification and Wireless Sensor Networks", "text": ""}
{"_id": "a26062250dcd3756eb9f4178b991521ef2ba433b", "title": "Low-k interconnect stack with a novel self-aligned via patterning process for 32nm high volume manufacturing", "text": "Interconnect process features are described for a 32nm high performance logic technology. Lower-k, yet highly manufacturable, Carbon-Doped Oxide (CDO) dielectric layers are introduced on this technology at three layers to address the demand for ever lower metal line capacitance. The pitches have been aggressively scaled to meet the expectation for density, and the metal resistance and electromigration performance have been carefully balanced to meet the high reliability requirements while maintaining the lowest possible resistance. A new patterning scheme has been used to limit any patterning damage to the lower-k ILD and address the increasingly difficult problem of via-to-metal shorting at these very tight pitches. The interconnect stack has a thick Metal-9 layer to provide a low resistance path for the power and I/O routing that has been carefully scaled to maintain a low resistance. The combined interconnect stack provides high density, performance, and reliability, and supports a Pb-free 32nm process."} {"_id": "0cb9c4ba06d7a69cf7aadb326072ac2ed2207452", "title": "Timing Attack against Protected RSA-CRT Implementation Used in PolarSSL", "text": "In this paper, we present a timing attack against the RSA-CRT algorithm used in the current version 1.1.4 of PolarSSL, an open-source cryptographic library for embedded systems. This implementation uses a classical countermeasure to avoid two previous attacks of Schindler and another one due to Boneh and Brumley. However, a careful analysis reveals a bias in the implementation of Montgomery multiplication. We theoretically analyse the distribution of output values for Montgomery multiplication when the output is greater than the Montgomery constant, R. In this case, we show that an extra bit is set in the top most significant word of the output and a time variance can be observed. Then we present some proofs with reasonable assumptions to explain this bias due to an extra bit. Moreover, we show it can be used to mount an attack that reveals the factorisation. We also study another countermeasure and show its resistance against this attack."} {"_id": "05dc9d40649cece88b6c87196d9c87db61e2fbf2", "title": "Haptic Feedback for Virtual Reality 1", "text": "Haptic feedback is a crucial sensorial modality in virtual reality interactions. Haptics means both force feedback (simulating object hardness, weight, and inertia) and tactile feedback (simulating surface contact geometry, smoothness, slippage, and temperature). Providing such sensorial data requires desktop or portable special-purpose hardware called haptic interfaces.
Modeling physical interactions involves precise collision detection, real-time force computation, and high control-loop bandwidth. This results in a large computational load which requires multi-processor parallel processing on networked computers. Applications for haptics-intensive VR simulations include CAD model design and assembly. Improved technology (wearable computers, novel actuators, haptic toolkits) will increase the use of force/tactile feedback in future VR simulations."} {"_id": "03c423de1b6a1ca91924a1587a3a57927df1b0e8", "title": "DFT-based Transformation Invariant Pooling Layer for Visual Classification", "text": "We propose a novel discrete Fourier transform-based pooling layer for convolutional neural networks. The DFT magnitude pooling replaces the traditional max/average pooling layer between the convolution and fully-connected layers to retain translation invariance and shape-preserving (aware of shape difference) properties based on the shift theorem of the Fourier transform. Thanks to the ability to handle image misalignment while keeping important structural information in the pooling stage, the DFT magnitude pooling improves the classification accuracy significantly. In addition, we propose the DFT method for ensemble networks using the middle convolution layer outputs. The proposed methods are extensively evaluated on various classification tasks using the ImageNet, CUB 2010-2011, MIT Indoors, Caltech 101, FMD and DTD datasets. The AlexNet, VGG-VD 16, Inception-v3, and ResNet are used as the base networks, upon which DFT and DFT methods are implemented. Experimental results show that the proposed methods improve the classification performance in all networks and datasets."} {"_id": "7f7d7e7d53febd451e263784b59c1c9038474499", "title": "A systematic literature review of green software metrics", "text": "Green IT is getting increasing attention in software engineering research. Nevertheless, energy efficiency studies have mostly focused on the hardware side of IT; the role of software still requires deeper investigation in terms of methods and techniques. Furthermore, it is necessary to understand how to assess software \u201cgreenness\u201d in order to stimulate energy-efficiency awareness from the early phases of the software lifecycle. The main goal of this study is to describe and classify metrics related to software \u201cgreenness\u201d present in the software engineering literature. Furthermore, this study analyzes the evolution of those metrics in terms of type, context, and evaluation methods. To achieve this goal, a systematic literature review has been performed, surveying the metrics reported in the last decade. After examining 960 publications, we selected 23 of them as primary studies, from which we extracted 96 different green metrics. We then analyzed the search results in order to show the trend of research on green software metrics, how the metrics measure resources, and which types of metrics are more appealing in specific contexts."} {"_id": "1955a213c446782342761730fc67ee23fa517c83", "title": "Artificial-Life Ecosystems - What are they and what could they become?", "text": "This paper summarises the history of the terms ecology and ecosystem, before examining their application in the early and recent literature of A-Life agent-based software simulation.
It investigates trends in A-Life that have led to a predominance of simulations incorporating artificial evolution acting on generic agents, but lacking a level of detail that would allow the emergence of phenomena relating to the transfer and transformation of energy and matter between the virtual abiotic environment and biota. Implications of these characteristics for the relevance of A-Life\u2019s virtual ecosystem models to Ecology are discussed. We argue that the inclusion of low-level representations of energetics, matter and evolution, in concert with pattern-oriented modelling techniques from Ecology for model validation, will improve the relevance of A-Life models to Ecology. We also suggest two methods that may allow us to meet this goal: artificial evolution can be employed as a mechanism for automating pattern-oriented ecological modelling from the level of individual species up to that of the ecosystem, or it may be employed to explore general principles of ecosystem behaviour over evolutionary time periods."} {"_id": "6d93c8ef0cca8085ff11d73523c8cf9410d11856", "title": "Pulse-Modulated Intermittent Control in Consensus of Multiagent Systems", "text": "This paper proposes a control framework, called pulse-modulated intermittent control, which unifies impulsive control and sampled control. Specifically, the concept of pulse function is introduced to characterize the control/rest intervals and the amplitude of the control. By choosing some specified functions as the pulse function, the proposed control scheme can be reduced to sampled control or impulsive control. The proposed control framework is applied to consensus problems of multiagent systems. Using discretization approaches and stability theory, several necessary and sufficient conditions are established to ensure the consensus of the controlled system. The results show that consensus depends not only on the network topology, the sampling period and the control gains, but also on the pulse function. Moreover, a lower bound of the asymptotic convergence factor is derived as well. For a given pulse function and an undirected graph, an optimal control gain is designed to achieve the fastest convergence. In addition, impulsive control and sampled control are revisited in the proposed control framework. Finally, some numerical examples are given to verify the effectiveness of theoretical results."} {"_id": "39d8eef237f475648214989494ac6f89562d2826", "title": "Weighted Low-Rank Approximations", "text": "Weighted norms can arise in several situations. A zero/one weighted norm, for example, arises when some of the entries in the matrix are not observed. External estimates of the noise variance associated with each measurement may be available (e.g. gene expression analysis) and using weights inversely proportional to the noise variance can lead to better reconstruction of the underlying structure. In other applications, entries in the target matrix represent aggregates of many samples. When using unweighted low-rank approximations (e.g. for separating style and content [4]), we assume a uniform number of samples for each entry. By incorporating weights, we can account for varying numbers of samples in such situations. Low-rank approximations are also used in the design of two-dimensional digital filters, in which case weights might arise from constraints of varying importance [3]."} {"_id": "83e859ff4842cb7fbc4eca1ba68b8144897925d8", "title": "The validity of the Hospital Anxiety and Depression Scale.
An updated literature review.", "text": "OBJECTIVE\nTo review the literature on the validity of the Hospital Anxiety and Depression Scale (HADS).\n\n\nMETHOD\nA review of the 747 identified papers that used HADS was performed to address the following questions: (I) What are the factor structure, discriminant validity and internal consistency of HADS? (II) How does HADS perform as a case finder for anxiety disorders and depression? (III) How does HADS agree with other self-rating instruments used to rate anxiety and depression?\n\n\nRESULTS\nMost factor analyses demonstrated a two-factor solution in good accordance with the HADS subscales for Anxiety (HADS-A) and Depression (HADS-D), respectively. The correlations between the two subscales varied from .40 to .74 (mean .56). Cronbach's alpha for HADS-A varied from .68 to .93 (mean .83) and for HADS-D from .67 to .90 (mean .82). In most studies an optimal balance between sensitivity and specificity was achieved when caseness was defined by a score of 8 or above on both HADS-A and HADS-D. The sensitivity and specificity for both HADS-A and HADS-D of approximately 0.80 were very similar to the sensitivity and specificity achieved by the General Health Questionnaire (GHQ). Correlations between HADS and other commonly used questionnaires were in the range .49 to .83.\n\n\nCONCLUSIONS\nHADS was found to perform well in assessing the symptom severity and caseness of anxiety disorders and depression in somatic, psychiatric and primary care patients as well as in the general population."} {"_id": "c11677ba8882023812cb94dda1029580fa251778", "title": "An automated system to detect and recognize vehicle license plates of Bangladesh", "text": "Bangladesh is a country in South Asia that uses Retro Reflective license plates. The plates have two lines with words, letters, and digits. An automated system to detect and recognize these plates is presented in this paper. The system is divided into four parts: plate detection, extraction, character segmentation and recognition. At first, the input image is enhanced using CLAHE, and a matched filter specially designed for license plates with two lines is applied. Then tilt correction using the Radon transformation, binarization and cleaning are performed. For character segmentation, mean-intensity-based horizontal and vertical projection is used. In recognition, we have used two different Convolutional Neural Networks (CNNs) to classify digits and letters. Tesseract OCR is used for district names. We have developed a dataset of over 400 images of different vehicles (e.g., private cars, buses, trucks) taken at different times of day (including nights). The plates in this dataset are at different angles, and include blurry, worn-out and muddy ones. On this dataset, the proposed system achieved a success rate of 96.8% in detection, 89.5% in extraction, 98.6% in segmentation and 98.0% in character recognition."} {"_id": "c9d51f7110977c46b6ba9b69a68cff4dd8b6de0b", "title": "Text Mining for Documents Annotation and Ontology Support", "text": "This paper presents a survey of basic concepts in the area of text data mining and some of the methods used in order to elicit useful knowledge from collections of textual data. Three different text data mining techniques (clustering/visualisation, association rules and classification models) are analysed, and their exploitation possibilities within the Webocracy project are shown. Clustering and association rule discovery are well suited as supporting tools for ontology management.
Classification models are used for automatic document annotation."} {"_id": "0b3412e13637763b21ad6ddadd4b9b68907730e2", "title": "Encouraging user behaviour with achievements: An empirical study", "text": "Stack Overflow, a question and answer Web site, uses a reward system called badges to publicly reward users for their contributions to the community. Badges are used alongside a reputation score to reward positive behaviour by relating a user's site identity with their perceived expertise and respect in the community. A greater number of badges associated with a user profile in some way indicates a higher level of authority, leading to a natural incentive for users to attempt to achieve as many badges as possible. In this study, we analyse the publicly available logs for Stack Overflow to examine three of these badges in detail. We look at the effect of one badge in context on an individual user level and at the global scope of three related badges across all users by mining user behaviour around the time that the badge is awarded. This analysis supports the claim that badges can be used to influence user behaviour by demonstrating one instance of an increase in user activity related to a badge immediately before it is awarded when compared to the period afterwards."} {"_id": "8a696afa3b4e82727764f26a95d2577c85315171", "title": "Implicit theories about willpower predict self-regulation and grades in everyday life.", "text": "Laboratory research shows that when people believe that willpower is an abundant (rather than highly limited) resource they exhibit better self-control after demanding tasks. However, some have questioned whether this \"nonlimited\" theory leads to squandering of resources and worse outcomes in everyday life when demands on self-regulation are high. To examine this, we conducted a longitudinal study, assessing students' theories about willpower and tracking their self-regulation and academic performance. As hypothesized, a nonlimited theory predicted better self-regulation (better time management and less procrastination, unhealthy eating, and impulsive spending) for students who faced high self-regulatory demands. Moreover, among students taking a heavy course load, those with a nonlimited theory earned higher grades, which was mediated by less procrastination. These findings contradict the idea that a limited theory helps people allocate their resources more effectively; instead, it is people with the nonlimited theory who self-regulate well in the face of high demands."} {"_id": "417997271d0c310e73c6454784244445253a15a0", "title": "Enabling network programmability in LTE/EPC architecture using OpenFlow", "text": "Nowadays, mobile operators face the challenge of sustaining the future data tsunami. In fact, today's increasing data and control traffic generated by new kinds of network usage puts strain on mobile operators, without creating any corresponding revenue. In our previous work, we analyzed the 3GPP LTE/EPC architecture and showed that a redesign of this architecture is needed to suit future network usages and to provide new revenue-generating services. Moreover, we proposed a new control plane based on the OpenFlow (OF) protocol for the LTE/EPC architecture that enables flexibility and programmability. In this paper, we are interested in the programmability aspect. We show how the data plane can be easily configured thanks to OF.
In addition, we evaluate the signaling load of our proposed architecture and compare it to that of the 3GPP LTE/EPC architecture. The preliminary findings suggest that managing the data plane with OF has little impact on the signaling load while the network programmability is improved."} {"_id": "1714ecfb4ac45130bc241a4d7ec45f2a9fb8b99e", "title": "The Behavioural Paths to Wellbeing : An Exploratory Study to Distinguish Between Hedonic and Eudaimonic Wellbeing From an Activity Perspective", "text": "Hedonic wellbeing and eudaimonic wellbeing are two prevailing approaches to wellbeing. However, remarkably little research has distinguished them from an activity perspective; the knowledge of behavioural paths for achieving these two forms of wellbeing is poor. This study first clarified the behavioural contents of the two approaches through a bottom-up method and then analysed the representativeness of activities to indicate to what extent activities contributed to wellness. We found that the paths to hedonic wellbeing and eudaimonic wellbeing overlapped and differed from each other. Furthermore, this study explained why hedonic activity differed from eudaimonic activity by analysing activity characteristics. We found that people reported higher frequency, sensory experience, and affective experience in hedonic activity, whereas they reported higher intellectual experience, behavioural experience, and spiritual experience in eudaimonic activity. Finally, we explored the behavioural pattern of wellbeing pursuit in both an unthreatening situation and a threatening situation. We found that the overlap between the two approaches increased in the threatening situation. Moreover, people in the threatening situation tended to score lower on all characteristics except frequency relative to those in the unthreatening situation. It seemed that the behavioural pattern in the threatening situation was less effective than its equivalent in the unthreatening situation."} {"_id": "50d551c7ad7d5ccfd6c0886ea48a597d4821baf7", "title": "Unsupervised outlier detection in streaming data using weighted clustering", "text": "Outlier detection is a very important task in many fields like network intrusion detection, credit card fraud detection, stock market analysis, and detecting outlying cases in medical data. Outlier detection in streaming data is very challenging because streaming data cannot be scanned multiple times and new concepts may keep evolving in incoming data over time. Irrelevant attributes can be termed noisy attributes, and such attributes further magnify the challenge of working with data streams. In this paper, we propose an unsupervised outlier detection scheme for streaming data. This scheme is based on clustering, as clustering is an unsupervised data mining task that does not require labeled data. In the proposed scheme, density-based and partitioning clustering methods are combined to take advantage of both density-based and distance-based outlier detection. The proposed scheme also assigns weights to attributes depending upon their respective relevance to the mining task, and the weights are adaptive in nature. Weighted attributes are helpful in reducing or removing the effect of noisy attributes. Keeping in view the challenges of streaming data, the proposed scheme is incremental and adaptive to concept evolution.
Experimental results on synthetic and real-world data sets show that our proposed approach outperforms an existing approach (CORM) in terms of outlier detection rate and false alarm rate, even under increasing percentages of outliers."} {"_id": "5e77868839082c8463a3f61f9ffb5873e5dbd03f", "title": "Motif-Based Classification of Time Series with Bayesian Networks and SVMs", "text": "Classification of time series is an important task with many challenging applications like brain wave (EEG) analysis, signature verification or speech recognition. In this paper we show how characteristic local patterns (motifs) can improve the classification accuracy. We introduce a new motif class, generalized semi-continuous motifs. To allow flexibility and noise robustness, these motifs may include gaps of various lengths, generic and more specific wildcards. We propose an efficient algorithm for mining generalized sequential motifs. In experiments on real medical data, we show how generalized semi-continuous motifs improve the accuracy of SVMs and Bayesian Networks for time series classification."} {"_id": "9e1d50d98ae09c15354dbcb126609e337d3dc6fb", "title": "A vector quantization approach to speaker recognition", "text": "In this study a vector quantization (VQ) codebook was used as an efficient means of characterizing the short-time spectral features of a speaker. A set of such codebooks was then used to recognize the identity of an unknown speaker from his/her unlabelled spoken utterances, based on a minimum-distance (distortion) classification rule. A series of speaker recognition experiments was performed using a 100-talker (50 male and 50 female) telephone recording database consisting of isolated digit utterances. For ten random but different isolated digits, over 98% speaker identification accuracy was achieved. The effects on performance of different system parameters, such as codebook sizes, the number of test digits, phonetic richness of the text, and differences in recording sessions, were also studied in detail. Automatic speaker recognition has long been an interesting and challenging problem to speech researchers [1-10]. The problem, depending upon the nature of the final task, can be classified into two different categories: speaker verification and speaker identification. In a speaker verification task, the recognizer is asked to verify an identity claim made by an unknown speaker, and a decision to reject or accept the identity claim is made. In a speaker identification task the recognizer is asked to decide which out of a population of N speakers is best classified as the unknown speaker. The decision may include a choice of \"none of the above\" (i.e., a choice that the specific speaker is not in a given closed set of speakers). The input speech material used for speaker recognition can be either text-dependent (text-constrained) or text-independent (text-free). In the text-dependent mode the speaker is asked to utter a prescribed text. The utterance is then ... In the other, Shore and Burton [12] used word-based VQ codebooks and reported good performance in speaker-trained isolated-word recognition experiments. Here, instead of using word-based VQ codebooks to characterize the phonetic contents of isolated words, we propose to use speaker-based VQ codebooks to characterize the variability of short-time acoustic features of speakers. II. Speaker-Based VQ Codebook Approach to Speaker Characterization and Recognition. A set of short-time raw feature vectors of a speaker can be used directly to represent the essential acoustical, phonological or physiological characteristics of that speaker if the training set includes sufficient variations. However, such a direct representation is not practical when the number of training vectors is large: the memory requirements for storage and the computational complexity of the recognition phase eventually become prohibitively high. Therefore an efficient way of compressing the training data had to be found. In order to compress the original data to a small set of representative points, we used a VQ codebook with a small number of codebook entries. The speaker-based VQ codebook generation can be summarized as follows: given a set of I training feature vectors, {a_1, a_2, ..., a_I}, characterizing the variability of a speaker, we want to find a partitioning of the feature vector space, {S_1, S_2, ..., S_M}, for that particular speaker, where the whole feature space S is represented as S = S_1 ∪ S_2 ∪ ... ∪ S_M. Each partition, S_i, forms a convex, nonoverlapping region, and every vector inside S_i is represented by the corresponding centroid vector, b_i, of S_i. The partitioning is done in such a way that the average distortion"} {"_id": "5f379793c8605eebd07da213aebd6ed8f14c438a", "title": "A Family of Neutral Point Clamped Full-Bridge Topologies for Transformerless Photovoltaic Grid-Tied Inverters", "text": "Transformerless inverter topologies have attracted much attention in photovoltaic (PV) generation systems since they feature high efficiency and low cost. In order to meet the safety requirements for transformerless grid-tied PV inverters, the leakage current has to be tackled carefully. The neutral point clamped (NPC) topology is an effective way to eliminate the leakage current. In this paper, two types of basic switching cells, the positive neutral point clamped cell and the negative neutral point clamped cell, are proposed to build NPC topologies, with a systematic method of topology generation given. A family of single-phase transformerless full-bridge topologies with low leakage current for PV grid-tied NPC inverters is derived, including the existing oH5 and some new topologies. A novel positive-negative NPC (PN-NPC) topology is analyzed in detail, with operational modes and modulation strategy given. The power losses are compared among the oH5, the full-bridge inverter with dc bypass (FB-DCBP) topology, and the proposed PN-NPC topologies. A universal prototype for the three NPC-type topologies mentioned is built to evaluate the topologies in terms of conversion efficiency and leakage current characteristics. The proposed PN-NPC topology exhibits leakage current similar to that of the FB-DCBP, which is lower than that of the oH5 topology, and features higher efficiency than both the oH5 and the FB-DCBP topologies."} {"_id": "dd215b777c1c251b61ebee99592250f44073d4c0", "title": "Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks", "text": "Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners.
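To make the speaker-based VQ codebook approach above concrete, here is a minimal sketch of codebook training and minimum-distortion identification. It is not the paper's original implementation: plain k-means stands in for the codebook design procedure, and the feature type, codebook size M, and iteration count are illustrative assumptions.

```python
import numpy as np

def train_codebook(features, M=64, iters=20, seed=0):
    """Train one speaker's VQ codebook {b_1..b_M} with plain k-means.

    features: (I, d) array of short-time feature vectors for one speaker.
    """
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), size=M, replace=False)].copy()
    for _ in range(iters):
        # Partition the feature space: assign each vector to its nearest centroid.
        d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for m in range(M):
            members = features[labels == m]
            if len(members):                 # keep empty cells unchanged
                codebook[m] = members.mean(axis=0)
    return codebook

def average_distortion(test_features, codebook):
    """Mean squared distance of each test vector to its nearest codeword."""
    d2 = ((test_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean()

def identify(test_features, codebooks):
    """Minimum-distortion rule: pick the speaker whose codebook fits best."""
    return min(codebooks,
               key=lambda spk: average_distortion(test_features, codebooks[spk]))
```

Here `codebooks` would map each enrolled speaker to a trained codebook; identification simply scans all of them and returns the best-fitting speaker.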
We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini & Wagner (L2). Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples."} {"_id": "5c4736f1bcaf27033269ff7831e18a7f017e0343", "title": "Designing game-based learning environments for elementary science education: A narrative-centered learning perspective", "text": "Game-based learning environments hold significant promise for STEM education, yet they are enormously complex. CRYSTAL ISLAND: UNCHARTED DISCOVERY is a game-based learning environment designed for upper elementary science education that has been under development in our laboratory for the past four years. This article discusses curricular and narrative interaction design requirements, presents the design of the CRYSTAL ISLAND learning environment, and describes its evolution through a series of pilots and field tests. Additionally, a classroom integration study was conducted to initiate a shift towards ecological validity. Results indicated that CRYSTAL ISLAND produced significant learning gains on both science content and problem-solving measures. Importantly, gains were consistent for gender across studies. This finding is key in light of past studies that revealed disproportionate participation by boys within game-based learning environments."} {"_id": "6c528568782b54dacc7a20db2e2d85f418674047", "title": "Volumetric Object Recognition Using 3-D CNNs on Depth Data", "text": "Recognizing 3-D objects has a wide range of application areas from autonomous robots to self-driving vehicles. The popularity of low-cost RGB-D sensors has enabled rapid progress in 3-D object recognition in recent years. Most of the existing studies use depth data as an additional channel to the RGB channels. Instead of this approach, we propose two volumetric representations to reveal rich 3-D structural information hidden in depth images. We present a 3-D convolutional neural network (CNN)-based object recognition approach, which utilizes these volumetric representations and single and multi-rotational depth images. The 3-D CNN architecture trained to recognize single depth images produces competitive results with the state-of-the-art methods on two publicly available datasets. However, recognition accuracy increases further when the multiple rotations of objects are brought together. Our multi-rotational 3-D CNN combines information from multiple views of objects to provide rotational invariance and improves the accuracy significantly compared with the single-rotational approach.
The results show that utilizing multiple views of objects can be highly informative for 3-D CNN-based object recognition."} {"_id": "46f1ba485e685ff8ff6232f6c04f1a3920d4f33a", "title": "Radar-based Fall Detection Based on Doppler Time-Frequency Signatures for Assisted Living", "text": "Falls are a major public health concern and among the main causes of accidental death in the senior U.S. population. Timely and accurate detection permits immediate assistance after a fall and, thereby, reduces complications of fall risk. Radar technology provides an effective means for this purpose because it is non-invasive, insensitive to lighting conditions as well as obstructions, and raises fewer privacy concerns. In this paper, we develop an effective fall detection scheme for application in continuous-wave radar systems. The proposed scheme exploits time-frequency characteristics of the radar Doppler signatures, and the motion events are classified using the joint statistics of three different features, including the extreme frequency, extreme frequency ratio, and the length of event period. A sparse Bayesian classifier based on the relevance vector machine is used to perform the classification. Laboratory experiments are performed to collect radar data corresponding to different motion patterns to verify the effectiveness of the proposed algorithm."} {"_id": "74ec3d4cbb22453ce1d128c42ea66d2bdced64d6", "title": "Novel Multilevel Inverter Carrier-Based PWM Method", "text": "The advent of the transformerless multilevel inverter topology has brought forth various pulsewidth modulation (PWM) schemes as a means to control the switching of the active devices in each of the multiple voltage levels in the inverter. An analysis of how existing multilevel carrier-based PWM affects switch utilization for the different levels of a diode-clamped inverter is conducted. Two novel carrier-based multilevel PWM schemes are presented which help to optimize or balance the switch utilization in multilevel inverters. A 10-kW prototype six-level diode-clamped inverter has been built and controlled with the novel PWM strategies proposed in this paper to act as a voltage-source inverter for a motor drive."} {"_id": "e68a6132f5536aad264ba62052005d0eca3356d5", "title": "A New Neutral-Point-Clamped PWM Inverter", "text": "A new neutral-point-clamped pulsewidth modulation (PWM) inverter composed of main switching devices which operate as switches for PWM and auxiliary switching devices to clamp the output terminal potential to the neutral point potential has been developed. This inverter output contains less harmonic content as compared with that of a conventional type. The two inverters are compared analytically and experimentally. In addition, a new PWM technique suitable for an ac drive system is applied to this inverter. The neutral-point-clamped PWM inverter adopting the new PWM technique shows an excellent drive system efficiency, including motor efficiency, and is appropriate for a wide-range variable-speed drive system."} {"_id": "ff5c193fd7142b3f426baf997b43937eca1bbbad", "title": "Multilevel inverters: a survey of topologies, controls, and applications", "text": "Multilevel inverter technology has emerged recently as a very important alternative in the area of high-power medium-voltage energy control. This paper presents the most important topologies like diode-clamped inverter (neutral-point clamped), capacitor-clamped (flying capacitor), and cascaded multicell with separate dc sources.
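As a rough illustration of the Doppler features named in the radar fall-detection abstract above (extreme frequency, extreme frequency ratio, and length of event period), the sketch below extracts them from a continuous-wave radar spectrogram. The window length, overlap, and the -40 dB activity threshold are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import spectrogram

def fall_features(iq, fs, thresh_db=-40.0):
    """Joint Doppler features from complex (I/Q) CW radar data."""
    f, t, S = spectrogram(iq, fs=fs, nperseg=256, noverlap=192,
                          return_onesided=False, mode="magnitude")
    f, S = np.fft.fftshift(f), np.fft.fftshift(S, axes=0)
    P = 20.0 * np.log10(S + 1e-12)
    mask = P > P.max() + thresh_db            # time-frequency bins with energy
    cols = np.flatnonzero(mask.any(axis=0))   # time bins containing motion
    if cols.size == 0:
        return {"f_extreme": 0.0, "ratio": 1.0, "length": 0.0}
    f_hi = max(f[mask[:, j]].max() for j in cols)   # strongest positive Doppler
    f_lo = min(f[mask[:, j]].min() for j in cols)   # strongest negative Doppler
    return {
        "f_extreme": max(f_hi, -f_lo),               # extreme frequency
        "ratio": abs(f_hi) / max(abs(f_lo), 1e-9),   # extreme frequency ratio
        "length": t[cols[-1]] - t[cols[0]],          # length of event period
    }
```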
Emerging topologies like asymmetric hybrid cells and soft-switched multilevel inverters are also discussed. This paper also presents the most relevant control and modulation methods developed for this family of converters: multilevel sinusoidal pulsewidth modulation, multilevel selective harmonic elimination, and space-vector modulation. Special attention is dedicated to the latest and more relevant applications of these converters such as laminators, conveyor belts, and unified power-flow controllers. The need for an active front end at the input side for those inverters supplying regenerative loads is also discussed, and the circuit topology options are also presented. Finally, the peripherally developing areas such as high-voltage high-power devices and optical sensors and other opportunities for future development are addressed."} {"_id": "3d8a29cf3843f92bf9897c4f2d3c02d96d59540a", "title": "Multilevel PWM Methods at Low Modulation Indices", "text": "When utilized at low amplitude modulation indices, existing multilevel carrier-based PWM strategies have no special provisions for this operating region, and several levels of the inverter go unused. This paper proposes some novel multilevel PWM strategies to take advantage of the multiple levels in both a diode-clamped inverter and a cascaded H-bridges inverter by utilizing all of the levels in the inverter even at low modulation indices. Simulation results show what effects the different strategies have on the active device utilization. A prototype 6-level diode-clamped inverter and an 11-level cascaded H-bridges inverter have been built and controlled with the novel PWM strategies proposed in this paper."} {"_id": "40baa5d4632d807cc5841874be73415775b500fd", "title": "Multilevel Converters for Large Electric Drives", "text": "Traditional two-level high-frequency pulse width modulation (PWM) inverters for motor drives have several problems associated with their high-frequency switching, which produces common-mode voltage and high voltage change (dV/dt) rates at the motor windings. Multilevel inverters solve these problems because their devices can switch at a much lower frequency. Two different multilevel topologies are identified for use as a converter for electric drives: a cascade inverter with separate dc sources and a back-to-back diode-clamped converter. The cascade inverter is a natural fit for large automotive all-electric drives because of the high VA ratings possible and because it uses several levels of dc voltage sources which would be available from batteries or fuel cells. The back-to-back diode-clamped converter is ideal where a source of ac voltage is available, such as a hybrid electric vehicle. Simulation and experimental results show the superiority of these two converters over PWM-based drives."} {"_id": "0c53ef79bb8e5ba4e6a8ebad6d453ecf3672926d", "title": "Weakly Supervised PatchNets: Describing and Aggregating Local Patches for Scene Recognition", "text": "Traditional feature encoding schemes (e.g., Fisher vector) with local descriptors (e.g., SIFT) and recent convolutional neural networks (CNNs) are two classes of successful methods for image recognition. In this paper, we propose a hybrid representation, which leverages the discriminative capacity of CNNs and the simplicity of descriptor encoding schemes for image recognition, with a focus on scene recognition. To this end, we make three main contributions.
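As background for the carrier-based methods discussed in the multilevel PWM abstracts above, the sketch below implements the generic phase-disposition (level-shifted) carrier comparison for one leg of an n-level diode-clamped inverter. The modulation index, level count, and frequencies are illustrative assumptions, not values from these papers.

```python
import numpy as np

def pd_pwm_levels(m_a=0.8, levels=6, f_ref=50.0, f_carrier=2000.0,
                  t_end=0.02, fs=1_000_000):
    """Phase-disposition carrier PWM for an n-level diode-clamped leg.

    Compares one sinusoidal reference against (levels - 1) stacked
    triangular carriers; the output level index is the count of carriers
    the reference currently exceeds (0 .. levels-1).
    """
    t = np.arange(0.0, t_end, 1.0 / fs)
    ref = m_a * np.sin(2 * np.pi * f_ref * t)        # reference in [-1, 1]
    n_car = levels - 1
    tri = 2.0 * np.abs((t * f_carrier) % 1.0 - 0.5)  # unit triangle in [0, 1]
    out = np.zeros_like(t)
    for k in range(n_car):
        # k-th carrier occupies the band [-1 + 2k/n_car, -1 + 2(k+1)/n_car]
        lo = -1.0 + 2.0 * k / n_car
        carrier = lo + (2.0 / n_car) * tri
        out += (ref > carrier)
    return t, out   # out holds the instantaneous output level index
```

Note that at low modulation indices the reference never leaves the middle carrier bands, so the outer levels switch not at all; this is precisely the idle-level problem the low-modulation-index strategies above set out to fix.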
First, we propose a patch-level and end-to-end architecture to model the appearance of local patches, called PatchNet. PatchNet is essentially a customized network trained in a weakly supervised manner, which uses the image-level supervision to guide the patch-level feature extraction. Second, we present a hybrid visual representation, called VSAD, by utilizing the robust feature representations of PatchNet to describe local patches and exploiting the semantic probabilities of PatchNet to aggregate these local patches into a global representation. Third, based on the proposed VSAD representation, we propose a new state-of-the-art scene recognition approach, which achieves excellent performance on two standard benchmarks: MIT Indoor67 (86.2%) and SUN397 (73.0%)."} {"_id": "a3345798b1faf238e8d805bbe9124b0b8e0c869f", "title": "Autophagy as a regulated pathway of cellular degradation.", "text": "Macroautophagy is a dynamic process involving the rearrangement of subcellular membranes to sequester cytoplasm and organelles for delivery to the lysosome or vacuole where the sequestered cargo is degraded and recycled. This process takes place in all eukaryotic cells. It is highly regulated through the action of various kinases, phosphatases, and guanosine triphosphatases (GTPases). The core protein machinery that is necessary to drive formation and consumption of intermediates in the macroautophagy pathway includes a ubiquitin-like protein conjugation system and a protein complex that directs membrane docking and fusion at the lysosome or vacuole. Macroautophagy plays an important role in developmental processes, human disease, and cellular response to nutrient deprivation."} {"_id": "d35abb53c3c64717126deff65c26d6563276df45", "title": "Machine learning aided cognitive RAT selection for 5G heterogeneous networks", "text": "The starring role of the Heterogeneous Networks (HetNet) strategy as the key Radio Access Network (RAN) architecture for future 5G networks poses serious challenges to the current user association (cell selection) mechanisms used in cellular networks. The max-SINR algorithm, although historically effective for performing this function, is inefficient at best and obsolete at worst in 5G HetNets. The foreseen embarrassment of riches and diversified propagation characteristics of network attachment points spanning multiple Radio Access Technologies (RATs) require novel and creative context-aware system designs that optimize the association and routing decisions in the context of single-RAT and multi-RAT connections, respectively. This paper proposes a framework under these guidelines that relies on Machine Learning techniques at the terminal device level for Cognitive RAT Selection and presents simulation results to support it."} {"_id": "895fa1357bcfa9b845945c6505a6e48070fd5d89", "title": "An Anonymous Electronic Voting Protocol for Voting Over The Internet", "text": "In this work we propose a secure electronic voting protocol that is suitable for large-scale voting over the Internet. The protocol allows a voter to cast his or her ballot anonymously by exchanging untraceable yet authentic messages.
The protocol ensures that (i) only eligible voters are able to cast votes, (ii) a voter is able to cast only one vote, (iii) a voter is able to verify that his or her vote is counted in the final tally, (iv) nobody, other than the voter, is able to link a cast vote with a voter, and (v) if a voter decides not to cast a vote, nobody is able to cast a fraudulent vote in place of the voter. The protocol does not require the cooperation of all registered voters. Neither does it require the use of complex cryptographic techniques like threshold cryptosystems or anonymous channels for casting votes. This is in contrast to other voting protocols that have been proposed in the literature. The protocol uses three agents, other than the voters, for successful operation. However, we do not require any of these agents to be trusted. That is, the agents may be physically co-located or may collude with one another to try to commit fraud. If fraud is committed, it can be easily detected and proven, so that the vote can be declared null and void. Although we propose the protocol with electronic voting in mind, the protocol can be used in other applications that involve exchanging an untraceable yet authentic message. Examples of such applications are answering a confidential questionnaire anonymously or conducting anonymous financial transactions."} {"_id": "88ccb5b72cf96c9e34940c15e070c7d69a77a98c", "title": "The Love/Hate Relationship with the C Preprocessor: An Interview Study (Artifact)", "text": "The C preprocessor has received strong criticism in academia, among others regarding separation of concerns, error proneness, and code obfuscation, but is widely used in practice. Many (mostly academic) alternatives to the preprocessor exist, but have not been adopted in practice. Since developers continue to use the preprocessor despite all criticism and research, we ask how practitioners perceive the C preprocessor. We performed interviews with 40 developers, used grounded theory to analyze the data, and cross-validated the results with data from a survey among 202 developers, repository mining, and results from previous studies. In particular, we investigated four research questions related to why the preprocessor is still widely used in practice, common problems, alternatives, and the impact of undisciplined annotations. Our study shows that developers are aware of the criticism the C preprocessor receives, but use it nonetheless, mainly for portability and variability. Many developers indicate that they regularly face preprocessor-related problems and preprocessor-related bugs. The majority of our interviewees do not see any current C-native technologies that can entirely replace the C preprocessor. However, developers tend to mitigate problems with guidelines, even though those guidelines are not enforced consistently. We report the key insights gained from our study and discuss implications for practitioners and researchers on how to better use the C preprocessor to minimize its negative impact. 1998 ACM Subject Classification D.3.4 Processors"} {"_id": "2201c7ebc6d0365d2ec0bdd94c344f5dd269aa04", "title": "Inferring Mood Instability on Social Media by Leveraging Ecological Momentary Assessments", "text": "Active and passive sensing technologies are providing powerful mechanisms to track, model, and understand a range of health behaviors and well-being states.
Despite yielding rich, dense and high-fidelity data, current sensing technologies often require highly engineered study designs and persistent participant compliance, making them difficult to scale to large populations and to data acquisition tasks spanning extended time periods. This paper situates social media as a new passive, unobtrusive sensing technology. We propose a semi-supervised machine learning framework to combine small samples of data gathered through active sensing with large-scale social media data to infer mood instability (MI) in individuals. Starting from a theoretically-grounded measure of MI obtained from mobile ecological momentary assessments (EMAs), we show that our model is able to infer MI in a large population of Twitter users with 96% accuracy and F-1 score. Additionally, we show that our model predicts self-identifying Twitter users with bipolar and borderline personality disorder to exhibit twice the likelihood of high MI compared to a suitable control. We discuss the implications and the potential for integrating complementary sensing capabilities to address complex research challenges in precision medicine."} {"_id": "abd81ffe23b23bf5cfdb2f1a02b66c8e14f11581", "title": "The Therapeutic Potentials of Ayahuasca: Possible Effects against Various Diseases of Civilization", "text": "Ayahuasca is an Amazonian psychoactive brew of two main components. Its active agents are \u03b2-carboline and tryptamine derivatives. As a sacrament, ayahuasca is still a central element of many healing ceremonies in the Amazon Basin and its ritual consumption has become common among the mestizo populations of South America. Ayahuasca use amongst the indigenous people of the Amazon is a form of traditional medicine and cultural psychiatry. During the last two decades, the substance has become increasingly known among both scientists and laymen, and currently its use is spreading throughout the Western world. In the present paper we describe the chief characteristics of ayahuasca, discuss important questions raised about its use, and provide an overview of the scientific research supporting its potential therapeutic benefits. A growing number of studies indicate that the psychotherapeutic potential of ayahuasca is based mostly on the strong serotonergic effects, whereas the sigma-1 receptor (Sig-1R) agonist effect of its active ingredient dimethyltryptamine raises the possibility that the ethnomedical observations on the diversity of treated conditions can be scientifically verified. Moreover, in the right therapeutic or ritual setting with proper preparation and mindset of the user, followed by subsequent integration of the experience, ayahuasca has proven effective in the treatment of substance dependence. This article has two important take-home messages: (1) the therapeutic effects of ayahuasca are best understood from a bio-psycho-socio-spiritual model, and (2) on the biological level ayahuasca may act against chronic low grade inflammation and oxidative stress via the Sig-1R which can explain its widespread therapeutic indications."} {"_id": "117601fe80cc4b7d69a18da06949279395c62292", "title": "Epigenetics and the embodiment of race: developmental origins of US racial disparities in cardiovascular health.", "text": "The relative contribution of genetic and environmental influences to the US black-white disparity in cardiovascular disease (CVD) is hotly debated within the public health, anthropology, and medical communities.
In this article, we review evidence for developmental and epigenetic pathways linking early life environments with CVD, and critically evaluate their possible role in the origins of these racial health disparities. African Americans not only suffer from a disproportionate burden of CVD relative to whites, but also have higher rates of the perinatal health disparities now known to be the antecedents of these conditions. There is extensive evidence for a social origin to prematurity and low birth weight in African Americans, reflecting pathways such as the effects of discrimination on maternal stress physiology. In light of the inverse relationship between birth weight and adult CVD, there is now a strong rationale to consider developmental and epigenetic mechanisms as links between early life environmental factors like maternal stress during pregnancy and adult race-based health disparities in diseases like hypertension, diabetes, stroke, and coronary heart disease. The model outlined here builds upon social constructivist perspectives to highlight an important set of mechanisms by which social influences can become embodied, having durable and even transgenerational influences on the most pressing US health disparities. We conclude that environmentally responsive phenotypic plasticity, in combination with the better-studied acute and chronic effects of social-environmental exposures, provides a more parsimonious explanation than genetics for the persistence of CVD disparities between members of socially imposed racial categories."} {"_id": "87f452d4e9baabda4093007a9c6bbba30c35f3e4", "title": "Face spoofing detection from single images using texture and local shape analysis", "text": "Current face biometric systems are vulnerable to spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access. Inspired by image quality assessment, characterisation of printing artefacts and differences in light reflection, the authors propose to approach the problem of spoofing detection from a texture analysis point of view. Indeed, face prints usually contain printing quality defects that can be well detected using texture and local shape features. Hence, the authors present a novel approach based on analysing facial images for detecting whether there is a live person in front of the camera or a face print. The proposed approach analyses the texture and gradient structures of the facial images using a set of low-level feature descriptors, a fast linear classification scheme and score-level fusion. Compared to many previous works, the authors' proposed approach is robust and does not require user cooperation. In addition, the texture features that are used for spoofing detection can also be used for face recognition. This provides a unique feature space for coupling spoofing detection and face recognition. Extensive experimental analysis on three publicly available databases showed excellent results compared to existing works."} {"_id": "1f6f571f29d930bc2371c9eb044e03bb8ebd86ae", "title": "Subjective Well-Being and Income: Is There Any Evidence of Satiation?", "text": "Subjective Well-Being and Income: Is There Any Evidence of Satiation? Many scholars have argued that once \u201cbasic needs\u201d have been met, higher income is no longer associated with higher subjective well-being.
We assess the validity of this claim in comparisons of both rich and poor countries, and also of rich and poor people within a country. Analyzing multiple datasets, multiple definitions of \u201cbasic needs\u201d and multiple questions about well-being, we find no support for this claim. The relationship between wellbeing and income is roughly linear-log and does not diminish as incomes rise. If there is a satiation point, we are yet to reach it. JEL Classification: D6, I3, N3, O1, O4"} {"_id": "816cce72ad9fbeb854e3ca723ded5a51dfcb9311", "title": "Dynamic mixed membership blockmodel for evolving networks", "text": "In a dynamic social or biological environment, interactions between the underlying actors can undergo large and systematic changes. Each actor can assume multiple roles and their degrees of affiliation to these roles can also exhibit rich temporal phenomena. We propose a state space mixed membership stochastic blockmodel which can track across time the evolving roles of the actors. We also derive an efficient variational inference procedure for our model, and apply it to the Enron email networks, and rewiring gene regulatory networks of yeast. In both cases, our model reveals interesting dynamical roles of the actors."} {"_id": "54f573383d23275731487f2a6b45845db29dbdf8", "title": "Regression approaches to voice quality controll based on one-to-many eigenvoice conversion", "text": "This paper proposes techniques for flexibly controlling the voice quality of converted speech from a particular source speaker based on one-to-many eigenvoice conversion (EVC). EVC realizes voice quality control based on the manipulation of a small number of parameters, i.e., weights for eigenvectors, of an eigenvoice Gaussian mixture model (EV-GMM), which is trained with multiple parallel data sets consisting of a single source speaker and many pre-stored target speakers. However, it is difficult to intuitively control the desired voice quality with those parameters because each eigenvector does not usually represent a specific physical meaning. In order to cope with this problem, we propose regression approaches to the EVC-based voice quality controller. Tractable voice quality control of the converted speech is achieved with a low-dimensional voice quality control vector capturing specific voice characteristics. We conducted experimental verifications of each of the proposed approaches."} {"_id": "edc22d9a3aba2c9d457ef16acd7e6de7a17daed4", "title": "A brief survey of blackhole detection and avoidance for ZRP protocol in MANETs", "text": "Within Mobile Ad-Hoc Networks (MANETs), various categories of routing protocols have been defined. A MANET is a group of wireless nodes, and it is more susceptible to a variety of attacks than a wired network. The black hole attack is a particularly serious threat to MANETs, since MANET routing protocols are weakly protected and can leave the network compromised by malicious nodes. There are various routing protocols in use for MANETs; all of them are briefly discussed, while the Zone Routing Protocol (ZRP) is discussed in detail. The black hole attack is a major security threat for MANETs.
Hence, in this paper, the various techniques used for the detection and avoidance of the black hole attack in MANETs using the ZRP routing protocol are discussed."} {"_id": "08c2ba4d7183d671b9a7652256de17110b81c723", "title": "A Practical Attack to De-anonymize Social Network Users", "text": "Social networking sites such as Facebook, LinkedIn, and Xing have been reporting exponential growth rates and have millions of registered users. In this paper, we introduce a novel de-anonymization attack that exploits group membership information that is available on social networking sites. More precisely, we show that information about the group memberships of a user (i.e., the groups of a social network to which a user belongs) is sufficient to uniquely identify this person, or, at least, to significantly reduce the set of possible candidates. That is, rather than tracking a user's browser as with cookies, it is possible to track a person. To determine the group membership of a user, we leverage well-known web browser history stealing attacks. Thus, whenever a social network user visits a malicious website, this website can launch our de-anonymization attack and learn the identity of its visitors. The implications of our attack are manifold, since it requires low effort and has the potential to affect millions of social networking users. We perform both a theoretical analysis and empirical measurements to demonstrate the feasibility of our attack against Xing, a medium-sized social network with more than eight million members that is mainly used for business relationships. Furthermore, we explored other, larger social networks and performed experiments that suggest that users of Facebook and LinkedIn are equally vulnerable."} {"_id": "cf9145aa55da660a8d32bf628235c615318463bf", "title": "Cryptography on FPGAs: State of the Art Implementations and Attacks", "text": "In the last decade, it has become apparent that embedded systems are integral parts of our everyday lives. The wireless nature of many embedded applications as well as their omnipresence has made the need for security and privacy-preserving mechanisms particularly important. Thus, as FPGAs become integral parts of embedded systems, it is imperative to consider their security as a whole. This contribution provides a state-of-the-art description of security issues on FPGAs, both from the system and implementation perspectives. We discuss the advantages of reconfigurable hardware for cryptographic applications, show potential security problems of FPGAs, and provide a list of open research problems. Moreover, we summarize both public and symmetric-key algorithm implementations on FPGAs."} {"_id": "aee27f46bca631cc9c45b3ad5032ab9b771dfefe", "title": "Compliance with a structured bedside handover protocol: An observational, multicentred study", "text": "Background: Bedside handover is the delivery of the nurse-to-nurse shift handover at the patient's bedside. The method is increasingly used in nursing, but the evidence concerning the implementation process and compliance with the method is limited. Objectives: To determine compliance with a structured bedside handover protocol following ISBARR and whether there were differences in compliance between wards. Design: A multicentred observational study with unannounced and non-participatory observations (n=638) one month after the implementation of a structured bedside handover protocol.
Settings and participants: Observations of individual patient handovers between nurses from the morning shift and the afternoon shift in 12 nursing wards in seven hospitals in Flanders, Belgium. Methods: A tailored and structured bedside handover protocol following ISBARR was developed, and nurses were trained accordingly. One month after implementation, a minimum of 50 observations was performed with a checklist in each participating ward. To enhance reliability, 20% of the observations were conducted by two researchers, and inter-rater agreement was calculated. Data were analysed using descriptive statistics, one-way ANOVAs and multilevel analysis. Results: Average compliance rates with the structured content protocol during bedside handovers were high (83.63%; SD 11.44%), and length of stay, the type of ward and the nursing care model were influencing contextual factors. Items that were most often omitted included identification of the patient (46.27%), the introduction of nurses (36.51%), hand hygiene (35.89%), actively involving the patient (34.44%), and using the call light (21.37%). Items concerning the exchange of clinical information (e.g., test results, reason for admittance, diagnoses) were omitted less (8.09%\u20131.45%). Absence of the patients (27.29%) and staffing issues (26.70%) accounted for more than half of the non-executed bedside handovers. On average, a bedside handover took 146 s per patient. Conclusions: When the bedside handover was delivered, compliance with the structured content was high, indicating that the execution of a bedside handover is a feasible step for nurses. The compliance rate was influenced by the patient's length of stay, the nursing care model and the type of ward, but their influence was limited. Future implementation projects on bedside handover should focus sufficiently on standard hospital procedures and patient involvement. According to the nurses, there was however a high number of situations where bedside handovers could not be delivered, perhaps indicating a reluctance in practice to use bedside handovers."} {"_id": "00aa499569decb4e9abe40ebedca1b318b7664a8", "title": "A Novel Feature Selection Approach Based on FODPSO and SVM", "text": "A novel feature selection approach is proposed to address the curse of dimensionality and reduce the redundancy of hyperspectral data. The proposed approach is based on a new binary optimization method inspired by fractional-order Darwinian particle swarm optimization (FODPSO). The overall accuracy (OA) of a support vector machine (SVM) classifier on validation samples is used as the fitness value in order to evaluate the informativeness of different groups of bands. In order to show the capability of the proposed method, two different applications are considered. In the first application, the proposed feature selection approach is directly carried out on the input hyperspectral data. The most informative bands selected from this step are classified by the SVM. In the second application, the main shortcoming of using attribute profiles (APs) for spectral-spatial classification is addressed. In this case, a stacked vector of the input data and an AP with all widely used attributes are created. Then, the proposed feature selection approach automatically chooses the most informative features from the stacked vector.
Experimental results successfully confirm that the proposed feature selection technique works better in terms of classification accuracies and CPU processing time than other studied methods, without requiring the number of desired features to be set a priori by users."} {"_id": "ef2237aea4815107db8ed1cb62476ef30c4dfa47", "title": "CMiner: Opinion Extraction and Summarization for Chinese Microblogs", "text": "Sentiment analysis of microblog texts has drawn lots of attention in both the academic and industrial fields. However, most of the current work only focuses on polarity classification. In this paper, we present an opinion mining system for Chinese microblogs called CMiner. Instead of polarity classification, CMiner focuses on more complicated opinion mining tasks - opinion target extraction and opinion summarization. Novel algorithms are developed for the two tasks and integrated into the end-to-end system. CMiner can help to effectively understand the users' opinions towards different opinion targets in a microblog topic. Specifically, we develop an unsupervised label propagation algorithm for opinion target extraction. The opinion targets of all messages in a topic are collectively extracted based on the assumption that similar messages may focus on similar opinion targets. In addition, we build an aspect-based opinion summarization framework for microblog topics. After getting the opinion targets of all the microblog messages in a topic, we cluster the opinion targets into several groups and extract representative targets and summaries for each group. A co-ranking algorithm is proposed to rank both the opinion targets and microblog sentences simultaneously. Experimental results on a benchmark dataset show the effectiveness of our system and the algorithms."} {"_id": "91e04ecd6ddd52642fde1cd2cce7d09c7c20d695", "title": "Applying an optimized switching strategy to a high gain boost converter for input current ripple cancellation", "text": "This paper discusses applying various switching strategies, such as conventional complementary and proportional methods, to a specific converter. This converter was recently presented; it can provide high voltage gain and has current-ripple cancelation ability at a preselected duty-cycle. The input current ripple is zero when the converter works at a special duty-cycle. But load disturbance leads to output voltage changes that may cause the duty-cycle to deviate from its preselected value. In this situation, the input current ripple cannot be completely canceled. The proposed proportional strategy is an optimized method for minimizing input current ripple at other operating duty-cycles, and it also provides a voltage gain lower than the conventional complementary strategy. Here, the converter's performance is analyzed under the two switching strategies, and the effect of the strategies on converter parameters, including output voltage and input current ripple, is investigated.
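For intuition about the wrapper scheme in the FODPSO feature-selection abstract above, here is a deliberately simplified sketch: binary particle positions select bands, and cross-validated SVM overall accuracy is the fitness. Plain binary PSO is used for brevity; the fractional-order Darwinian variant the paper proposes additionally keeps a fractional-order memory of past velocities and spawns/deletes swarms, which is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def bpso_select(X, y, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary PSO wrapper: returns a boolean mask of selected features."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)  # 0/1 positions
    vel = rng.normal(0.0, 1.0, (n_particles, d))

    def fitness(p):
        mask = p.astype(bool)
        if not mask.any():
            return 0.0
        # Overall accuracy of an SVM on the selected bands (3-fold CV).
        return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    gbest_fit = pbest_fit.max()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Sigmoid of the velocity gives each bit's probability of being 1.
        pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fits = np.array([fitness(p) for p in pos])
        better = fits > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fits[better]
        if fits.max() > gbest_fit:
            gbest, gbest_fit = pos[fits.argmax()].copy(), fits.max()
    return gbest.astype(bool), gbest_fit
```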
These considerations are verified by implementing a 100-W prototype of the proposed converter in the laboratory."} {"_id": "748eb923d2c384d2b3af82af58d2e6692ef57aa1", "title": "The Text Mining Handbook: Advanced Approaches to Analyzing Unstructured Data Ronen Feldman and James Sanger (Bar-Ilan University and ABS Ventures) Cambridge, England: Cambridge University Press, 2007, xii+410 pp; hardbound, ISBN 0-521-83657-3, $70.00", "text": "Text mining is a new and exciting area of computer science that tries to solve the crisis of information overload by combining techniques from data mining, machine learning, natural language processing, information retrieval, and knowledge management. The Text Mining Handbook presents a comprehensive discussion of the latest techniques in text mining and link detection. In addition to providing an in-depth examination of core text mining and link detection algorithms and operations, the book examines advanced pre-processing techniques, knowledge representation considerations, and visualization approaches, ending with real-world applications."} {"_id": "8d9ae24c9f1e59ee8cfc4c6317671b4947c2a153", "title": "A fractional open circuit voltage based maximum power point tracker for photovoltaic arrays", "text": "In this paper a fractional open circuit voltage based maximum power point tracker (MPPT) for photovoltaic (PV) arrays is proposed. The fractional open circuit voltage based MPPT utilizes the fact that the PV array voltage corresponding to the maximum power exhibits a linear dependence with respect to array open circuit voltage for different irradiation and temperature levels. This method is the simplest of all the MPPT methods described in the literature. The main disadvantage of this method is that the PV array is disconnected from the load at regular intervals for the sampling of the array voltage. This results in power loss. Another disadvantage is that if the duration between two successive samplings of the array voltage, called the sampling period, is too long, there is a considerable power loss. This is because the output voltage of the PV array follows the unchanged reference during one sampling period. Once a maximum power point (MPP) is tracked and a change in irradiation occurs between two successive samplings, then the new MPP is not tracked until the next sampling of the PV array voltage. This paper proposes an MPPT circuit in which the sampling interval of the PV array voltage and the sampling period have been shortened. The sample and hold circuit, which samples and holds the MPP voltage, has also been simplified. The proposed circuit does not utilize an expensive microcontroller or a digital signal processor and is thus suitable for low-cost photovoltaic applications."} {"_id": "7774b582b9b3fa50775c7fddcfac712cdeef7c97", "title": "HCI Education: Innovation, Creativity and Design Thinking", "text": "Human-Computer Interaction (HCI) education needs re-thinking. In this paper, we explore how and what creativity and design thinking could contribute, if included as part of the HCI curriculum. The findings from courses where design thinking was included indicate that design thinking contributed to an increased focus on innovation and creativity, as well as prevented premature fixation on a single solution in the initial phases of HCI design processes, fostering increased flexibility and adaptability in learning processes.
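The control law in the fractional open-circuit-voltage abstract above reduces to periodically re-sampling V_oc and holding the reference at k * V_oc. A minimal loop sketch follows; read_voc and set_reference are hypothetical hardware callbacks, and k = 0.76 is just a typical illustrative constant (real panels need k characterized, usually somewhere around 0.71-0.80).

```python
import time

def fractional_voc_mppt(read_voc, set_reference, k=0.76, period_s=5.0):
    """Fractional open-circuit-voltage MPPT loop (sketch).

    read_voc():        briefly disconnects the array and returns V_oc -- this
                       is the interval the paper shortens to cut power loss.
    set_reference(v):  holds the array operating voltage at v until the next
                       sample; irradiance changes in between go untracked.
    """
    while True:
        v_oc = read_voc()
        set_reference(k * v_oc)   # V_mpp is approximately linear in V_oc
        time.sleep(period_s)
```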
Creativity and adaptability may be the best long-term foci that HCI education can add to its curricula and offer to students when preparing them for future work."} {"_id": "902d5bc1b1b6e35aabe3494f0165e42a918e82ed", "title": "Compressed matching for feature vectors", "text": "The problem of compressing a large collection of feature vectors is investigated, so that object identification can be processed on the compressed form of the features. The idea is to perform matching of a query image against an image database, using the compressed form of the descriptor vectors directly, without decompression. Specifically, we concentrate on the Scale Invariant Feature Transform (SIFT), a known object detection method, as well as on Dense SIFT and PHOW features, which contain, for each image, about 300 times as many vectors as the original SIFT. Given two feature vectors, we suggest achieving our goal by compressing them using a lossless encoding by means of a Fibonacci code, for which the pairwise matching can be done directly on the compressed files. In our experiments, this approach improves the processing time and incurs only a small loss in compression efficiency relative to standard compressors requiring a decoding phase."} {"_id": "2fef3ba4f888855e1d087572003553f485414ef1", "title": "Revisit Behavior in Social Media: The Phoenix-R Model and Discoveries", "text": "How many listens will an artist receive on a online radio? How about plays on a YouTube video? How many of these visits are new or returning users? Modeling and mining popularity dynamics of social activity has important implications for researchers, content creators and providers. We here investigate the effect of revisits (successive visits from a single user) on content popularity. Using four datasets of social activity, with up to tens of millions of media objects (e.g., YouTube videos, Twitter hashtags or LastFM artists), we show the effect of revisits in the popularity evolution of such objects. Secondly, we propose the PHOENIX-R model, which captures the popularity dynamics of individual objects. PHOENIX-R has the desired properties of being: (1) parsimonious, being based on the minimum description length principle, and achieving lower root mean squared error than state-of-the-art baselines; (2) applicable, the model is effective for predicting future popularity values of objects."} {"_id": "c6419ccf4340832b6a23217674ca1a051a3a1416", "title": "The TaSSt: Tactile sleeve for social touch", "text": "In this paper we outline the design process of the TaSST (Tactile Sleeve for Social Touch), a touch-sensitive vibrotactile arm sleeve. The TaSST was designed to enable two people to communicate different types of touch over a distance. The touch-sensitive surface of the sleeve consists of a grid of 4\u00d73 sensor compartments filled with conductive wool. Each compartment controls the vibration intensity of a vibration motor, located in a grid of 4\u00d73 motors beneath the touch-sensitive layer. An initial evaluation of the TaSST revealed that it was mainly suitable for communicating protracted (e.g. pressing) and simple (e.g. poking) touches."} {"_id": "c8958d1d8e6a59127e6277626b18e7b7556f5d31", "title": "Kinesthetic interaction: revealing the bodily potential in interaction design", "text": "Within the Human-Computer Interaction community there is a growing interest in designing for the whole body in interaction design.
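The Fibonacci code mentioned in the compressed-matching abstract above is self-delimiting: every codeword ends in the bit pair '11', which never occurs elsewhere in a codeword, so codeword-aligned comparisons can run directly on the compressed stream. A standard textbook construction (not the paper's own code) looks like this:

```python
def fib_encode(n: int) -> str:
    """Fibonacci code of n >= 1: Zeckendorf bits from the smallest
    Fibonacci number up, with a final '1' appended ('11' terminator)."""
    fibs = [1, 2]                            # F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                               # drop the first Fibonacci > n
    bits = ["0"] * len(fibs)
    for i in range(len(fibs) - 1, -1, -1):   # greedy, largest term first
        if fibs[i] <= n:
            bits[i] = "1"
            n -= fibs[i]
    return "".join(bits) + "1"

def fib_decode(code: str) -> int:
    """Inverse of fib_encode for one codeword (terminating '1' included)."""
    fibs = [1, 2]
    while len(fibs) < len(code) - 1:
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, code[:-1]) if b == "1")

assert [fib_encode(n) for n in (1, 2, 3, 4)] == ["11", "011", "0011", "1011"]
assert all(fib_decode(fib_encode(n)) == n for n in range(1, 1000))
```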
The attempts aimed at addressing the body have very different outcomes, ranging from theoretical arguments for understanding the body in the design process to more practical examples of designing for bodily potential. This paper presents Kinesthetic Interaction as a unifying concept for describing the body in motion as a foundation for designing interactive systems. Based on the theoretical foundation for Kinesthetic Interaction, a conceptual framework is introduced to reveal bodily potential in relation to three design themes --- kinesthetic development, kinesthetic means and kinesthetic disorder; and seven design parameters --- engagement, sociality, movability, explicit motivation, implicit motivation, expressive meaning and kinesthetic empathy. The framework is a tool to be utilized when analyzing existing designs, as well as developing designs exploring new ways of designing kinesthetic interactions."} {"_id": "0757817bf5714bb91c3d4f30cf3144e0837e57e5", "title": "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", "text": "This paper presents our vision of Human Computer Interaction (HCI): \"Tangible Bits.\" Tangible Bits allows users to \"grasp & manipulate\" bits in the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems \u2013 the metaDESK, transBOARD and ambientROOM \u2013 to identify underlying research issues."} {"_id": "0e1f0917797ca7db9bc1d042fa88a46b64e8f171", "title": "inTouch: A Medium for Haptic Interpersonal Communication", "text": "In this paper, we introduce a new approach for applying haptic feedback technology to interpersonal communication. We present the design of our prototype inTouch system which provides a physical link between users separated by distance."} {"_id": "0f308ef439e65b6b7df4489df01e47a37e5fba7f", "title": "Augmented reality: linking real and virtual worlds: a new paradigm for interacting with computers", "text": "A revolution in computer interface design is changing the way we think about computers. Rather than typing on a keyboard and watching a television monitor, Augmented Reality lets people use familiar, everyday objects in ordinary ways. The difference is that these objects also provide a link into a computer network. Doctors can examine patients while viewing superimposed medical images; children can program their own LEGO constructions; construction engineers can use ordinary paper engineering drawings to communicate with distant colleagues. Rather than immersing people in an artificially-created virtual world, the goal is to augment objects in the physical world by enhancing them with a wealth of digital information and communication capabilities."} {"_id": "d041316fe31b862110fa4745496f66ec793bf8e3", "title": "Conformal printing of electrically small antennas on three-dimensional surfaces.", "text": "[Author information only; no abstract was recovered for this record.] J. J. Adams and Prof. J. T. Bernhard, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA (jbernhar@illinois.edu); Dr. E. B. Duoss, T. F. Malkowski, Dr. B. Y. Ahn, and Prof. J. A. Lewis, Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA (jalewis@illinois.edu); Dr. M. J. Motala and Prof. R. G. Nuzzo, Department of Chemistry, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. J. J. Adams, E. B. Duoss, and T. F. Malkowski contributed equally to this work. E. B. Duoss is presently at Lawrence Livermore National Laboratory, Center for Micro- and NanoTechnology, Livermore, CA 94550, USA."} {"_id": "cde617cc3f07faa62ed3812ee150c20559bb91cd", "title": "Oxytocin is associated with human trustworthiness", "text": "Human beings exhibit substantial interpersonal trust-even with strangers. The neuroactive hormone oxytocin facilitates social recognition in animals, and we examine if oxytocin is related to trustworthiness between humans. This paper reports the results of an experiment to test this hypothesis, where trust and trustworthiness are measured using the sequential anonymous \"trust game\" with monetary payoffs. We find that oxytocin levels are higher in subjects who receive a monetary transfer that reflects an intention of trust relative to an unintentional monetary transfer of the same amount. In addition, higher oxytocin levels are associated with trustworthy behavior (the reciprocation of trust). Absent intentionality, both the oxytocin and behavioral responses are extinguished. We conclude that perceptions of intentions of trust affect levels of circulating oxytocin."} {"_id": "d5987b0ddbe11cff32e7c8591b2b829127849a31", "title": "Learning Partially Contracting Dynamical Systems from Demonstrations", "text": "An algorithm for learning the dynamics of point-to-point motions from demonstrations using an autonomous nonlinear dynamical system, named contracting dynamical system primitives (CDSP), is presented. The motion dynamics are approximated using a Gaussian mixture model (GMM) and its parameters are learned subject to constraints derived from partial contraction analysis. Systems learned using the proposed method generate trajectories that accurately reproduce the demonstrations and are guaranteed to converge to a desired goal location. Additionally, the learned models are capable of quickly and appropriately adapting to unexpected spatial perturbations and changes in goal location during reproductions. The CDSP algorithm is evaluated on shapes from a publicly available human handwriting dataset and also compared with two state-of-the-art motion generation algorithms. Furthermore, the CDSP algorithm is also shown to be capable of learning and reproducing point-to-point motions directly from real-world demonstrations using a Baxter robot."} {"_id": "108c652bcc4456562762ab2c4b707dc265c4cc7e", "title": "A Solid-State Marx Generator With a Novel Configuration", "text": "A new pulsed-power generator based on a Marx generator (MG) is proposed in this paper with a reduced number of semiconductor components and with a more efficient load-supplying process. The main idea is to charge two groups of capacitors in parallel through an inductor and take advantage of the resonant phenomenon in charging each capacitor up to twice the input-voltage level.
In each resonant half-cycle, one of those capacitor groups is charged; eventually, the charged capacitors will be connected in series, and the summation of the capacitor voltages appears at the output of the pulsed-power converter. This topology can be considered a modified MG that works based on the resonant concept. The simulated models of this converter have been investigated in MATLAB/Simulink, and a laboratory prototype has been implemented. The simulation and test results verify the operation of the proposed topology in different switching modes."} {"_id": "2dd8a75ea1baca4a390a6f5bb24afd95d7bb0e1c", "title": "Neural personalized response generation as domain adaptation", "text": "One of the most crucial problems in training personalized response generation models for conversational robots is the lack of large-scale personal conversation data. To address the problem, we propose a two-phase approach, namely initialization then adaptation, to first pre-train an optimized RNN encoder-decoder model (LTS model) on large-scale conversational data for general response generation and then fine-tune the model on small-scale personal conversation data to generate personalized responses. For evaluation, we propose a novel human-aided method, which can be seen as a quasi-Turing test, to evaluate the performance of the personalized response generation models. Experimental results show that the proposed personalized response generation model outperforms the state-of-the-art approaches to language model personalization and persona-based neural conversation generation on the automatic evaluation, offline human judgment and the quasi-Turing test."} {"_id": "f41e3c6d51a18e8365613fdb558dbaee5d859d75", "title": "Control of contact via tactile sensing", "text": "In this paper, we present our approach to using tactile sensing in the feedback control of robot contact tasks. A general framework, called tactile servo, is first introduced. We then address the critical issue of how to model the state of contact in terms that are both sufficient for defining general contacts and conducive to bridging the gap between a robot task description and the information observable by a tactile sensor. We subsequently examine techniques for deriving tactile sensor models that are required for computing the state of contact from the sensor output. Two basic methods for mapping a tactile sensor image to the contact state variables are introduced\u2014one based on explicit inverse modeling and the other on a numerical tactile Jacobian. In both cases, moments of the tactile sensor image are used as the features that capture the necessary information about contact. The theoretical development is supported by extensive experiments, which include edge tracking, object shape construction, and object manipulation."} {"_id": "49d5fefa7242e2b95e75b59bba288250041c21fd", "title": "External Computations and Interoperability in the New DLV Grounder", "text": "In this paper we focus on some of the most recent advancements in I-DLV, the new intelligent grounder of DLV; the system has been endowed with means aimed at easing the interoperability and integration with external systems and accommodating external sources of computation and value invention within ASP programs.
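The twice-the-input charging claim in the Marx-generator abstract above is the textbook behavior of an ideal series LC circuit charged from a DC source: v_C(t) = V_in(1 - cos(t/sqrt(LC))), which peaks at 2*V_in after one resonant half-cycle t = pi*sqrt(LC). A small numeric sketch (ideal lossless components assumed; the part values are arbitrary, not from the paper):

```python
import math

def capacitor_voltage(v_in, L, C, t):
    """v_C(t) for an initially discharged C charged from v_in through L."""
    return v_in * (1.0 - math.cos(t / math.sqrt(L * C)))

def resonant_half_cycle(L, C):
    """Time at which v_C peaks at 2 * v_in: t = pi * sqrt(L*C)."""
    return math.pi * math.sqrt(L * C)

L, C, V_IN = 10e-6, 1e-6, 1000.0                      # 10 uH, 1 uF, 1 kV input
t_peak = resonant_half_cycle(L, C)                    # ~9.93 us
print(t_peak, capacitor_voltage(V_IN, L, C, t_peak))  # peak ~2000 V
```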
In particular, we describe here the support for external computations via explicit calls to Python scripts, and tools for the interoperability with both relational and graph databases."} {"_id": "2f2678e1b54593e54ad63e616729c6e6b4527184", "title": "Study of Planar Inverted-F Antenna (PIFA) for Mobile Devices", "text": "Nowadays more and more radios are being integrated into a single wireless platform to allow maximum connectivity. In this paper a theoretical study of the Planar Inverted-F Antenna is presented. The PIFA, as a low-profile antenna, is widely used in several application areas due to its many advantages. Research and experiments done on PIFA structures are discussed and categorized. Methods that can be used to improve the efficiency and bandwidth of an antenna and to reduce its volume are also discussed, including the effects of the dielectric substrate and the dimensions of the antenna."} {"_id": "a23bb18a5219541e8fbe50d0a1fbf7c2acfa7b5c", "title": "S-Sensors: Integrating physical world inputs with social networks using wireless sensor networks", "text": "Real-world monitoring and surveillance applications are rapidly increasing with the growth in the demand for awareness of the environment and surroundings. Sensor-to-web integration models are envisaged to improve the utilisation of sensory infrastructures and data via advanced coordination and collaboration of sensor nodes and web applications. This paper proposes to employ micro-blogging to publish and share sensory data and resources. S-Sensors provides a framework to globally share locally measured sensory readings. Short messages depicting the status of the environment are used to convey sensory data of the physical world. Moreover, data accessibility and resource manageability are facilitated by using social network paradigms to form communities of sensor networks."} {"_id": "b30751afa5b2d53bcfe1976782a0d7c43c968f64", "title": "Development of ambient environmental monitoring system through wireless sensor network (WSN) using NodeMCU and \u201cWSN monitoring\u201d", "text": "In this paper we have developed a system for web-based environment monitoring using WSN technology. WSN sensor nodes transmit data to the cloud-based database via Web API requests. Measured data can be monitored by the user anywhere on the Internet using the Web Application, which is also compatible with mobile phones. If the data measured by a sensor node exceeds the value range configured in the Web Application, the Web Application sends a warning e-mail to users so that the environmental conditions can be improved."} {"_id": "d044d399049bb9bc6df8cc2a5d72610a95611eed", "title": "Multicenter randomized clinical trial evaluating the effectiveness of the Lokomat in subacute stroke.", "text": "OBJECTIVE\nTo compare the efficacy of robotic-assisted gait training with the Lokomat to conventional gait training in individuals with subacute stroke.\n\n\nMETHODS\nA total of 63 participants <6 months poststroke with an initial walking speed between 0.1 and 0.6 m/s completed the multicenter, randomized clinical trial. All participants received twenty-four 1-hour sessions of either Lokomat or conventional gait training. Outcome measures were evaluated prior to training, after 12 and 24 sessions, and at a 3-month follow-up exam. 
Self-selected overground walking speed and distance walked in 6 minutes were the primary outcome measures, whereas secondary outcome measures included balance, mobility and function, cadence and symmetry, level of disability, and quality of life measures.\n\n\nRESULTS\nParticipants who received conventional gait training experienced significantly greater gains in walking speed (P=.002) and distance (P=.03) than those trained on the Lokomat. These differences were maintained at the 3-month follow-up evaluation. Secondary measures were not different between the 2 groups, although a 2-fold greater improvement in cadence was observed in the conventional versus Lokomat group.\n\n\nCONCLUSIONS\nFor subacute stroke participants with moderate to severe gait impairments, the diversity of conventional gait training interventions appears to be more effective than robotic-assisted gait training for facilitating returns in walking ability."} {"_id": "641cb0722bb971117dce3ef53f8de0b1a63ef6f0", "title": "Advances in Neuropsychoanalysis, Attachment Theory, and Trauma Research: Implications for Self Psychology", "text": "In 1971, Heinz Kohut, trained in neurology and then psychoanalysis, published The Analysis of the Self, a detailed exposition of the central role of the self in human existence. This classic volume of both twentieth-century psychoanalysis and psychology was more than a collection of various clinical observations\u2014rather it represented an overarching integrated theory of the development, structuralization, psychopathogenesis, and psychotherapy of disorders of the self. Although some of these ideas were elaborations of previous psychoanalytic principles, a large number of his concepts, including an emphasis on self rather than ego, signified an innovative departure from mainstream psychoanalysis and yet a truly creative addition to Freud\u2019s theory."} {"_id": "b0d4b6c5dac78ce88c76c63c41b58084f98b0485", "title": "Linearization Through Dithering: A 50 MHz Bandwidth, 10-b ENOB, 8.2 mW VCO-Based ADC", "text": "The non-linear voltage-to-frequency characteristic of a voltage-controlled oscillator (VCO) severely curtails the dynamic range of analog-to-digital converters (ADCs) built with VCOs. Typical approaches to enhance the dynamic range include embedding the VCO-based ADC in a \u0394\u03a3 loop or post-processing the digital data for calibration, both of which impose significant power constraints. In contrast, in this work the VCO-based ADC is linearized through a filtered dithering technique, wherein the VCO-based ADC is used as a fine stage that processes the residue from a coarse stage in a 0-1 MASH structure. The proposed filtered dithering technique conditions the signal at the VCO input to appear as white noise, thereby eliminating spurious signal content arising out of the VCO nonlinearity. The work resorts to multiple other signal processing techniques to build a high-resolution, wideband prototype, in 65 nm complementary metal-oxide semiconductor (CMOS), that achieves 10 effective number of bits (ENOB) in digitizing signals with 50 MHz bandwidth, consuming 8.2 mW at a figure of merit (FoM) of 90 fJ/conv.step."} {"_id": "4d5ed143295435c85300ecc3dfa05f462f03846c", "title": "Image compression with auto-encoder algorithm using deep neural network (DNN)", "text": "Image compression is necessary for image processing applications such as data storage, image classification, and image recognition. Accordingly, several research articles have been devoted to these topics. 
However, only a small number of improvements have been reported for image compression with auto-encoders. Therefore, this paper presents a detailed study of an image compression algorithm using a deep neural network (DNN). The proposed algorithm consists of 1) compressing the image with an auto-encoder, and 2) decoding the image. The proposed auto-encoder compression algorithm uses non-recurrent three-layer neural networks (NRTNNs) which use an extended Kalman filter (EKF) to update the weights of the networks. To evaluate the performance of the proposed algorithm, a MATLAB program is used to implement the overall testing procedure. Our simulation results show that the proposed image compression algorithm is able to reduce the image dimensionality and to recall the compressed image with low loss."} {"_id": "5b7ad451e0fa36cded02c28f3ec32a0cad5a3df5", "title": "Distribution expansion planning considering reliability and security of energy using modified PSO (Particle Swarm Optimization) algorithm", "text": "Distribution feeders and substations need to provide additional capacity to serve the growing electrical demand of customers without compromising the reliability of the electrical networks. Also, more control devices, such as DG (Distributed Generation) units, are being integrated into distribution feeders. Distribution networks were not planned to host these intermittent generation units before construction of the systems. Therefore, additional distribution facilities need to be planned and prepared for the future growth of the electrical demand as well as the increase of network hosting capacity by DG units. This paper presents a multiobjective optimization algorithm for the MDEP (Multi-Stage Distribution Expansion Planning) in the presence of DGs using nonlinear formulations. The objective functions of the MDEP consist of minimization of costs, END (Energy-Not-Distributed), active power losses and a voltage stability index based on SCC (Short Circuit Capacity). An MPSO (modified Particle Swarm Optimization) algorithm is developed and used for this multiobjective MDEP optimization. In the proposed MPSO algorithm, a new mutation method is implemented to improve the global searching ability and restrain the premature convergence to local minima. The effectiveness of the proposed method is tested on a typical 33-bus test system and results are presented."} {"_id": "d1917d0118d1d4574b85fe7023646df5a8dc9512", "title": "Word similarity computation based on HowNet", "text": "Word similarity computation has important applications in many areas, such as natural language processing, intelligent information retrieval, document clustering, automatic answering, word sense disambiguation, machine translation etc. This article makes an intensive study of word similarity computation based on HowNet; the word similarity is computed in three steps: (1) compute the similarity between sememes; (2) compute the similarity between concepts as a weighted sum of sememe similarities; (3) take the maximum concept similarity as the word similarity. This article mainly introduces a numbering of the sememes and presents a more accurate method of decomposing the concept description type. 
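As a rough illustration of this three-step scheme, consider the sketch below. It relies on assumed formulas: much HowNet-based work scores sememe similarity as alpha/(d + alpha) for hierarchy path distance d, but the constant, weights, and toy inputs here are hypothetical, not the article's exact values.

```python
# Hypothetical sketch of HowNet-style word similarity; the alpha/(d + alpha)
# sememe score is common in HowNet-based work, but the weights and inputs
# below are made up for illustration.
ALPHA = 1.6

def sememe_similarity(d, alpha=ALPHA):
    # step 1: similarity decays with path distance d in the sememe hierarchy
    return alpha / (d + alpha)

def concept_similarity(sememe_dists, weights):
    # step 2: weighted sum of sememe similarities (weights sum to 1)
    return sum(w * sememe_similarity(d) for d, w in zip(sememe_dists, weights))

def word_similarity(concept_pairs):
    # step 3: maximum similarity over all concept pairs of the two words
    return max(concept_similarity(dists, ws) for dists, ws in concept_pairs)

# toy example: two candidate concept pairs for a pair of polysemous words
pairs = [([0, 2], [0.7, 0.3]), ([1, 4], [0.7, 0.3])]
print(round(word_similarity(pairs), 3))  # -> 0.833
```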
The experiment shows that the word similarity algorithm presented in this article is simple and feasible, and the computed results agree well with human judgment."} {"_id": "f1e0a619b6ad652b65b49f362ac9413e89291ad7", "title": "A Resource-Based Model For E-Commerce In Developing Countries", "text": "Previous efforts in electronic commerce (e-commerce) research in developing countries show that there is an acute lack of theoretical frameworks and empirical evidence to understand how developing country firms realize electronic commerce benefits amidst their national constraints. This paper sets out to develop a theoretically abstracted but contextually grounded model of how developing country firms can orient their resources and realize these benefits amidst their national constraints. A review of the e-commerce and strategy management literature was undertaken to develop a resource-based model of e-commerce benefits. The process-based model provides an understanding of how to identify, integrate, and reconfigure resources to achieve electronic commerce benefits; provides propositions that serve as theoretical platforms for future empirically grounded research on electronic commerce in developing country contexts; and brings organizations closer to identifying and categorizing the strategic value of resources and the role managerial capabilities and intangible resources play in sustaining e-commerce benefits. Finally, our findings provide organizations with strategic options to address resources which have lost their value or have become less valuable to their strategic orientation in e-commerce adoption, thereby serving as a starting point for examining e-commerce in developing countries through the theoretical lens of information systems and strategic management."} {"_id": "be5fa72d0bca30184c8bef0361ee07aa5c7a011e", "title": "Personalized Security Messaging: Nudges for Compliance with Browser Warnings", "text": "Decades of psychology and decision-making research show that everyone makes decisions differently; yet security messaging is still one-size-fits-all. This suggests that we can improve outcomes by delivering information relevant to how each individual makes decisions. We tested this hypothesis by designing messaging customized for stable personality traits\u2014specifically, the five dimensions of the General Decision-Making Style (GDMS) instrument. We applied this messaging to browser warnings, security messaging encountered by millions of web users on a regular basis. To test the efficacy of our nudges, we conducted experiments with 1,276 participants, who encountered a warning about broken HTTPS due to an invalid certificate under realistic circumstances. While the effects of some nudges correlated with certain traits in a statistically significant manner, we could not reject the null hypothesis\u2014that the intervention did not affect the subjects\u2019 behavior\u2014for most of our nudges, especially after accounting for participants who did not pay close attention to the message. In this paper, we present the detailed results of our experiments, discuss potential reasons for why the outcome contradicts the decision-making research, and identify lessons for researchers based on our experience."} {"_id": "5df5978d05ace5695ab5856d4a662b93805b9c00", "title": "Coding of pleasant touch by unmyelinated afferents in humans", "text": "Pleasant touch sensations may begin with neural coding in the periphery by specific afferents. 
We found that during soft brush stroking, low-threshold unmyelinated mechanoreceptors (C-tactile), but not myelinated afferents, responded most vigorously at intermediate brushing velocities (1\u221210 cm s\u22121), which were perceived by subjects as being the most pleasant. Our results indicate that C-tactile afferents constitute a privileged peripheral pathway for pleasant tactile stimulation that is likely to signal affiliative social body contact."} {"_id": "cc46a8ba591880c791cb3425e94a2d6f614ba016", "title": "A Systematic Mapping Study on Software Reuse", "text": "Context: Software reuse is considered the key to successful software development because of its potential to reduce time to market, increase quality and reduce costs. This increase in demand has led software organizations to envision the use of reusable software assets, which can also help in solving recurring problems. Until now, software reuse has been confined to reuse of source code in the form of code scavenging. Nowadays, software organizations are extending the concepts of software reuse to other life-cycle objects, having realized that reuse of source code alone does not save money. Academia has put forward some assets as reusable and presented methods or approaches for reusing them. Also, for successful software reuse, organizations should assess the value of reuse and keep track of their reuse programs. The other area which is vital for software reuse is maintenance. Maintenance of reusable software has a direct impact on the cost of the software. In this regard, academia has presented a number of techniques, methods, metrics and models for assessing the value of reuse and for maintaining reusable software. Objectives: In our thesis, we investigate the reusable assets and the methods/approaches that are put forward by academia for reusing those assets. A systematic mapping study is also performed to investigate what techniques, methods, models and metrics for assessing the value of reuse and for maintaining reused software have been proposed, and we investigate their validation status as well. Methods: Databases such as IEEE Xplore, ACM Digital Library, Inspec, Springer and Google Scholar were used to search for studies relevant to our systematic mapping study. We followed basic inclusion criteria along with detailed inclusion/exclusion criteria for selecting the appropriate articles. Results: Through our systematic mapping study, we summarize a list of 14 reusable assets along with the approaches/methods for reusing them. A taxonomy for assessing the value of reuse and a taxonomy for maintaining reusable software are presented. We also present the methods/metrics/models/techniques for measuring reuse to assess its value and for maintaining reusable software, along with their validation status and areas in focus. Conclusion: We conclude that there is a need for defining a standard set of reusable assets that are commonly accepted by researchers in the field of software reuse. Most metrics/models/methods/approaches presented for assessing the value of reuse and for maintaining reusable software are only academically validated. 
Efforts now need to be put into validating them industrially using real data."} {"_id": "32b7c6bf94dc64ca7e64cb2daa1fdad4058912ae", "title": "Design of Lightweight Authentication and Key Agreement Protocol for Vehicular Ad Hoc Networks", "text": "Due to their widespread popularity in both academia and industry, vehicular ad hoc networks (VANETs) have been used in a wide range of applications ranging from intelligent transportation to e-health and itinerary planning. This paper proposes a new decentralized lightweight authentication and key agreement scheme for VANETs. In the proposed scheme, there are three types of mutual authentications: 1) between vehicles; 2) between vehicles and their respective cluster heads; and 3) between cluster heads and their respective roadside units. Apart from these authentications, the proposed scheme also maintains secret keys between roadside units for their secure communications. The rigorous formal and informal security analysis shows that the proposed scheme is capable of defending against various malicious attacks. Moreover, the ns-2 simulation demonstrates the practicability of the proposed scheme in a VANET environment."} {"_id": "95216a2abf179e26db309d08701f605124233e6d", "title": "A Theoretical Framework for Structured Prediction using Factor Graph Complexity \u2217", "text": "We present a general theoretical analysis of structured prediction. By introducing a new complexity measure that explicitly factors in the structure of the output space and the loss function, we are able to derive new data-dependent learning guarantees for a broad family of losses and for hypothesis sets with an arbitrary factor graph decomposition. To the best of our knowledge, these are both the most favorable and the most general guarantees for structured prediction (and multiclass classification) currently known. We also extend this theory by leveraging the principle of Voted Risk Minimization (VRM) and showing that learning is possible with complex factor graphs. We both present new learning bounds in this advanced setting as well as derive two new families of algorithms, Voted Conditional Random Field and Voted Structured Boosting, which can make use of very complex features and factor graphs without overfitting. Finally, we validate our theory through experiments on several datasets."} {"_id": "9628b0d57eddf932a347d35c247bc8e263e11e2b", "title": "Theoretical perspectives on the relation between catastrophizing and pain.", "text": "The tendency to \"catastrophize\" during painful stimulation contributes to more intense pain experience and increased emotional distress. Catastrophizing has been broadly conceived as an exaggerated negative \"mental set\" brought to bear during painful experiences. Although findings have been consistent in showing a relation between catastrophizing and pain, research in this area has proceeded in the relative absence of a guiding theoretical framework. This article reviews the literature on the relation between catastrophizing and pain and examines the relative strengths and limitations of different theoretical models that could be advanced to account for the pattern of available findings. The article evaluates the explanatory power of a schema activation model, an appraisal model, an attention model, and a communal coping model of pain perception. 
It is suggested that catastrophizing might best be viewed from the perspective of hierarchical levels of analysis, where social factors and social goals may play a role in the development and maintenance of catastrophizing, whereas appraisal-related processes may point to the mechanisms that link catastrophizing to pain experience. Directions for future research are suggested."} {"_id": "acdacc042f29dea296c1a5e130ccea7b011d7ca4", "title": "A realist review of mobile phone-based health interventions for non-communicable disease management in sub-Saharan Africa", "text": "BACKGROUND\nThe prevalence of non-communicable diseases (NCDs) is increasing in sub-Saharan Africa. At the same time, the use of mobile phones is rising, expanding the opportunities for the implementation of mobile phone-based health (mHealth) interventions. This review aims to understand how, why, for whom, and in what circumstances mHealth interventions against NCDs improve treatment and care in sub-Saharan Africa.\n\n\nMETHODS\nFour main databases (PubMed, Cochrane Library, Web of Science, and Google Scholar) and references of included articles were searched for studies reporting effects of mHealth interventions on patients with NCDs in sub-Saharan Africa. All studies published up until May 2015 were included in the review. Following a realist review approach, middle-range theories were identified and integrated into a Framework for Understanding the Contribution of mHealth Interventions to Improved Access to Care for patients with NCDs in sub-Saharan Africa. The main indicators of the framework consist of predisposing characteristics, needs, enabling resources, perceived usefulness, and perceived ease of use. Studies were analyzed in depth to populate the framework.\n\n\nRESULTS\nThe search identified 6137 titles for screening, of which 20 were retained for the realist synthesis. The contribution of mHealth interventions to improved treatment and care is that they facilitate (remote) access to previously unavailable (specialized) services. Three contextual factors (predisposing characteristics, needs, and enabling resources) influence whether patients and providers believe that mHealth interventions are useful and easy to use. Only if they believe mHealth to be useful and easy to use will mHealth ultimately contribute to improved access to care. The analysis of included studies showed that the most important predisposing characteristics are a positive attitude and a common language of communication. The most relevant needs are a high burden of disease and a lack of capacity of first-contact providers. Essential enabling resources are the availability of a stable communications network, accessible maintenance services, and regulatory policies.\n\n\nCONCLUSIONS\nPolicy makers and program managers should consider predisposing characteristics and needs of patients and providers as well as the necessary enabling resources prior to the introduction of an mHealth intervention. Researchers would benefit from placing greater attention on the context in which mHealth interventions are being implemented instead of focusing (too strongly) on the technical aspects of these interventions."} {"_id": "dbc9313c7633c6555cf5b1d3998ab3f93326b512", "title": "Visual attention measures for multi-screen TV", "text": "We introduce a set of nine measures to characterize viewers' visual attention patterns for multi-screen TV. 
We apply our measures during an experiment involving nine screen layouts with two, three, and four TV screens, for which we report new findings on visual attention. For example, we found that viewers need an average discovery time of up to 4.5 seconds to visually fixate four screens, and their perceptions of how long they watched each screen are substantially accurate, i.e., we report Pearson correlations up to .892 with measured eye tracking data. We hope our set of new measures (and the companion toolkit to compute them automatically) will benefit the community as a first step toward understanding visual attention for emerging multi-screen TV applications."} {"_id": "098cc8b16697307a241658d69c213954ede76d59", "title": "A first look at traffic on smartphones", "text": "Using data from 43 users across two platforms, we present a detailed look at smartphone traffic. We find that browsing contributes over half of the traffic, while each of email, media, and maps contributes roughly 10%. We also find that the overhead of lower layer protocols is high because of small transfer sizes. For half of the transfers that use transport-level security, header bytes correspond to 40% of the total. We show that while packet loss is the main factor that limits the throughput of smartphone traffic, larger send buffers at Internet servers can improve the throughput of a quarter of the transfers. Finally, by studying the interaction between smartphone traffic and the radio power management policy, we find that the power consumption of the radio can be reduced by 35% with minimal impact on the performance of packet exchanges."} {"_id": "0cae1c767b150213ab64ea73720b3fbabf1f7c84", "title": "Bartendr: a practical approach to energy-aware cellular data scheduling", "text": "Cellular radios consume more power and suffer reduced data rate when the signal is weak. According to our measurements, the communication energy per bit can be as much as 6x higher when the signal is weak than when it is strong. To realize energy savings, applications must preferentially communicate when the signal is strong, either by deferring non-urgent communication or by advancing anticipated communication to coincide with periods of strong signal. Allowing applications to perform such scheduling requires predicting signal strength, so that opportunities for energy-efficient communication can be anticipated. Furthermore, such prediction must be performed at little energy cost.\n In this paper, we make several contributions towards a practical system for energy-aware cellular data scheduling called Bartendr. First, we establish, via measurements, the relationship between signal strength and power consumption. Second, we show that location alone is not sufficient to predict signal strength and motivate the use of tracks to enable effective prediction. Finally, we develop energy-aware scheduling algorithms for different workloads - syncing and streaming - and evaluate these via simulation driven by traces obtained during actual drives, demonstrating energy savings of up to 60%. 
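The scheduling idea lends itself to a compact illustration. The sketch below is not Bartendr's implementation; it simply defers a deferrable transfer to the slot with the strongest predicted signal before its deadline, which is the core intuition described above (the forecast values are made up).

```python
# Illustrative only: pick the transmission slot with the strongest predicted
# signal before the deadline, since energy per bit drops when signal is strong.
def pick_transfer_slot(predicted_rssi, deadline_slot):
    """predicted_rssi: dBm forecast per slot; returns best slot <= deadline."""
    window = predicted_rssi[: deadline_slot + 1]
    return max(range(len(window)), key=lambda i: window[i])

# hypothetical track-based RSSI forecast along a commute, one value per slot
forecast = [-95, -90, -70, -65, -85, -100]
print(pick_transfer_slot(forecast, deadline_slot=4))  # -> 3 (peak of -65 dBm)
```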
Our experiments have been performed on four cellular networks across two large metropolitan areas, one in India and the other in the U.S."} {"_id": "1e126cee4c1bddbfdd4e36bf91b8b1c2fe8d44c2", "title": "Accurate online power estimation and automatic battery behavior based power model generation for smartphones", "text": "This paper describes PowerBooter, an automated power model construction technique that uses built-in battery voltage sensors and knowledge of battery discharge behavior to monitor power consumption while explicitly controlling the power management and activity states of individual components. It requires no external measurement equipment. We also describe PowerTutor, a component power management and activity state introspection based tool that uses the model generated by PowerBooter for online power estimation. PowerBooter is intended to make it quick and easy for application developers and end users to generate power models for new smartphone variants, which each have different power consumption properties and therefore require different power models. PowerTutor is intended to ease the design and selection of power efficient software for embedded systems. Combined, PowerBooter and PowerTutor have the goal of opening power modeling and analysis for more smartphone variants and their users."} {"_id": "209562baa7d7a155c4f5ebfb30dbd84ad087e3b3", "title": "Applied spatial data analysis with R", "text": ""} {"_id": "3f62fe7de3bf15af1e5871dd8f623db29d8f0c35", "title": "Diversity in smartphone usage", "text": "Using detailed traces from 255 users, we conduct a comprehensive study of smartphone use. We characterize intentional user activities -- interactions with the device and the applications used -- and the impact of those activities on network and energy usage. We find immense diversity among users. Along all aspects that we study, users differ by one or more orders of magnitude. For instance, the average number of interactions per day varies from 10 to 200, and the average amount of data received per day varies from 1 to 1000 MB. This level of diversity suggests that mechanisms to improve user experience or energy consumption will be more effective if they learn and adapt to user behavior. We find that qualitative similarities exist among users that facilitate the task of learning user behavior. For instance, the relative application popularity can be modeled using an exponential distribution, with different distribution parameters for different users. We demonstrate the value of adapting to user behavior in the context of a mechanism to predict future energy drain. 
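The per-user exponential popularity model mentioned above can be fit with a simple log-linear regression. The sketch below uses invented usage shares for two hypothetical users and is only meant to illustrate the kind of per-user parameter the study refers to, not its data or method.

```python
# Illustrative fit of an exponential app-popularity model, share_k ~ exp(-lam*k);
# the usage shares below are invented, not data from the study.
import numpy as np

def fit_popularity_rate(shares):
    """shares: per-app usage fractions sorted descending; returns decay rate."""
    k = np.arange(len(shares))
    slope, _ = np.polyfit(k, np.log(np.asarray(shares, dtype=float)), 1)
    return -slope

user_a = [0.50, 0.25, 0.12, 0.06, 0.04, 0.03]  # concentrated on one app
user_b = [0.25, 0.21, 0.18, 0.15, 0.12, 0.09]  # more uniform usage
print(fit_popularity_rate(user_a), fit_popularity_rate(user_b))
```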
The 90th percentile error with adaptation is less than half compared to predictions based on average behavior across users."} {"_id": "d7fef01bd72eb1b073b2e053f05ae1ae43eae499", "title": "Chip to wafer temporary bonding with self-alignment by patterned FDTS layer for size-free MEMS integration", "text": "In this paper, we present a low-cost and rapid self-alignment process for temporary bonding of MEMS chips onto a carrier wafer for size-free MEMS-IC integration. For the first time, a hydrophobic self-assembled monolayer (SAM), FDTS (CF3(CF2)7(CH2)2SiCl3), was successfully patterned by a lift-off process on an oxidized silicon carrier wafer. A small volume of H2O (\u223c\u00b5l/cm2) was then dropped and spread on the non-coated hydrophilic SiO2 surface for temporary bonding of the MEMS chip. Our results demonstrated that the hydrophobic FDTS pattern on the carrier wafer enables rapid and precise self-alignment of the MEMS chip onto the SiO2 binding site by capillary force. After transferring the MEMS chips to the target wafer, the FDTS can be removed by O2 plasma treatment or UV irradiation."} {"_id": "6c43bd33f4733bfc8b729da2a182101fa27abcd6", "title": "Exploiting users' social relations to forward data in opportunistic networks: The HiBOp solution", "text": "Opportunistic networks, in which nodes opportunistically exploit any pair-wise contact to identify next hops towards the destination, are one of the most interesting technologies to support the pervasive networking vision. Opportunistic networks allow content sharing between mobile users without requiring any pre-existing Internet infrastructure, and tolerate partitions, long disconnections, and topology instability in general. In this paper we propose a context-aware framework for routing and forwarding in opportunistic networks. The framework is general, and able to host various flavors of context-aware routing. In this work we also present a particular protocol, HiBOp, which, by exploiting the framework, learns and represents, through context information, the users\u2019 behavior and their social relations, and uses this knowledge to drive the forwarding process. The comparison of HiBOp with reference to alternative solutions shows that a context-aware approach based on users\u2019 social relations turns out to be a very efficient solution for forwarding in opportunistic networks. We show performance improvements over the reference solutions both in terms of resource utilization and in terms of user perceived QoS."} {"_id": "8c0bdd3de8baa2789c76bf0842195d0f2cd4fa6c", "title": "Are Cyberbullies Less Empathic? Adolescents' Cyberbullying Behavior and Empathic Responsiveness", "text": "Meta-analyses confirm a negative relationship between aggressive behavior and empathy, that is, the ability to understand and share the feelings of others. Based on theoretical considerations, it was, therefore, hypothesized that a lack of empathic responsiveness may be characteristic of cyberbullies in particular. In the present study, 2,070 students of Luxembourg secondary schools completed an online survey that included a cyberbullying questionnaire and a novel empathy short scale. According to the main hypothesis, analyses of variance indicated that cyberbullies demonstrated less empathic responsiveness than non-cyberbullies. In addition, cyberbullies were also more afraid of becoming victims of cyberbullying. The findings confirm and substantially extend the research on the relationship between empathy and aggressive behavior. 
From an educational point of view, the present findings suggest that training of empathy skills might be an important tool to decrease cyberbullying."} {"_id": "a23407b19100acba66fcfc2803d251a3c829e9e3", "title": "Applying Graph theory to the Internet of Things", "text": "In the Internet of Things (IoT), we all are ``things''. Graph theory, a branch of discrete mathematics, has proven useful and powerful in understanding complex networks throughout history. By means of graph theory, we define new concepts and terminology, explore the definition of IoT, and then show that IoT is the union of a topological network, a data-functional network and a domi-functional network."} {"_id": "526f595dca4d9c61131df2dd2aa001398f6bea43", "title": "Dermatologists' accuracy in early diagnosis of melanoma of the nail matrix.", "text": "OBJECTIVE\nTo measure and compare the accuracy of 4 different clinical methods in the diagnosis of melanoma in situ of the nail matrix among dermatologists with different levels of clinical experience.\n\n\nDESIGN\nTwelve cases of melanonychias (5 melanomas and 7 nonmelanomas) were presented following 4 successive steps: (1) clinical evaluation, (2) evaluation according to the ABCDEF rule, (3) dermoscopy of the nail plate, and (4) intraoperative dermoscopy. At each step, the dermatologists were asked to decide if the lesion was a melanoma.\n\n\nSETTING\nThe test was administered at 2 dermatological meetings in 2008.\n\n\nPARTICIPANTS\nA total of 152 dermatologists, including 11 nail experts, 53 senior dermatologists, and 88 junior dermatologists.\n\n\nMAIN OUTCOME MEASURES\nThe answers were evaluated as the percentage of correct answers for each diagnostic step according to the different grades of expertise. Differences among the percentages of correct answers in the different steps were evaluated with the z test at a 5% level of significance. The agreement was investigated using the Cohen kappa statistic.\n\n\nRESULTS\nThe only method that statistically influenced the correct diagnosis for each category (experts, seniors, and juniors) was intraoperative dermoscopy (z test; P < .05). The Cohen kappa statistic showed a moderate interobserver agreement.\n\n\nCONCLUSIONS\nOverall accuracy of dermatologists in the diagnosis of nail matrix melanoma in situ is low because the percentages of physicians who indicated the correct diagnosis during each of the first 3 clinical steps of the test ranged from 46% to 55%. The level of expertise did not statistically influence the correct diagnosis."} {"_id": "45654695f5cad20d2be36d45d280af5180004baf", "title": "Rethink fronthaul for soft RAN", "text": "In this article we discuss the design of a new fronthaul interface for future 5G networks. The major shortcomings of current fronthaul solutions are first analyzed, and then a new fronthaul interface called next-generation fronthaul interface (NGFI) is proposed. The design principles for NGFI are presented, including decoupling the fronthaul bandwidth from the number of antennas, decoupling cell and user equipment processing, and focusing on high-performance-gain collaborative technologies. NGFI aims to better support key 5G technologies, in particular cloud RAN, network functions virtualization, and large-scale antenna systems. NGFI claims the advantages of reduced bandwidth as well as improved transmission efficiency by exploiting the tidal wave effect on mobile network traffic. The transmission of NGFI is based on Ethernet to enjoy the benefits of flexibility and reliability. 
The major impact, challenges, and potential solutions of Ethernet-based fronthaul networks are also analyzed. Jitter, latency, and time and frequency synchronization are the major issues to overcome."} {"_id": "0f73f4ebc58782c03cc78aa7d6a391d23101ba09", "title": "Language within our grasp", "text": "In monkeys, the rostral part of ventral premotor cortex (area F5) contains neurons that discharge both when the monkey grasps or manipulates objects and when it observes the experimenter making similar actions. These neurons (mirror neurons) appear to represent a system that matches observed events to similar, internally generated actions, and in this way forms a link between the observer and the actor. Transcranial magnetic stimulation and positron emission tomography (PET) experiments suggest that a mirror system for gesture recognition also exists in humans and includes Broca's area. We propose here that such an observation/execution matching system provides a necessary bridge from 'doing' to 'communicating', as the link between actor and observer becomes a link between the sender and the receiver of each message."} {"_id": "5309c563fe3f3b78f5e5e2ac9ee2159ebf28402f", "title": "DCU-UVT: Word-Level Language Classification with Code-Mixed Data", "text": "This paper describes the DCU-UVT team\u2019s participation in the Language Identification in Code-Switched Data shared task in the Workshop on Computational Approaches to Code Switching. Word-level classification experiments were carried out using a simple dictionary-based method, linear kernel support vector machines (SVMs) with and without contextual clues, and a k-nearest neighbour approach. Based on these experiments, we select our SVM-based system with contextual clues as our final system and present results for the Nepali-English and Spanish-English datasets."} {"_id": "c91b4b3a20a7637ecbb7e0179ac3108f3cf11880", "title": "Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network", "text": "Humans generate responses relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context. In this paper, we investigate matching a response with its multi-turn context using dependency information based entirely on attention. Our solution is inspired by the recently proposed Transformer in machine translation (Vaswani et al., 2017) and we extend the attention mechanism in two ways. First, we construct representations of text segments at different granularities solely with stacked self-attention. Second, we try to extract the truly matched segment pairs with attention across the context and response. We jointly introduce those two kinds of attention in one uniform neural network. Experiments on two large-scale multi-turn response selection tasks show that our proposed model significantly outperforms the state-of-the-art models."} {"_id": "7eb2cb5b31002906ef6a1b9f6707968e57afb714", "title": "Multi-label Deep Regression and Unordered Pooling for Holistic Interstitial Lung Disease Pattern Detection", "text": "Holistically detecting interstitial lung disease (ILD) patterns from CT images is challenging yet clinically important. Unfortunately, most existing solutions rely on manually provided regions of interest, limiting their clinical usefulness. In addition, no work has yet focused on predicting more than one ILD from the same CT slice, despite the frequency of such occurrences. 
To address these limitations, we propose two variations of multi-label deep convolutional neural networks (CNNs). The first uses a deep CNN to detect the presence of multiple ILDs using a regression-based loss function. Our second variant further improves performance, using spatially invariant Fisher Vector encoding of the CNN feature activations. We test our algorithms on a dataset of 533 patients using five-fold cross-validation, achieving high area-under-curve (AUC) scores of 0.982, 0.972, 0.893 and 0.993 for Ground Glass, Reticular, Honeycomb and Emphysema, respectively. As such, our work represents an important step forward in providing clinically effective ILD detection."} {"_id": "a1bbd52c57ad6a36057f5aa69544887261eb1a83", "title": "Syntax-based Alignment of Multiple Translations: Extracting Paraphrases and Generating New Sentences", "text": "We describe a syntax-based algorithm that automatically builds Finite State Automata (word lattices) from semantically equivalent translation sets. These FSAs are good representations of paraphrases. They can be used to extract lexical and syntactic paraphrase pairs and to generate new, unseen sentences that express the same meaning as the sentences in the input sets. Our FSAs can also predict the correctness of alternative semantic renderings, which may be used to evaluate the quality of translations."} {"_id": "2269853c59ce06c599b79df3c3023504fc56a685", "title": "Narrating a Knowledge Base", "text": "We aim to automatically generate natural language narratives about an input structured knowledge base (KB). We build our generation framework based on a pointer network which can copy facts from the input KB, and add two attention mechanisms: (i) slot-aware attention to capture the association between a slot type and its corresponding slot value; and (ii) a new table position self-attention to capture the inter-dependencies among related slots. For evaluation, besides standard metrics including BLEU, METEOR, and ROUGE, we also propose a KB reconstruction-based metric by extracting a KB from the generation output and comparing it with the input KB. We also create a new data set which includes 106,216 pairs of structured KBs and their corresponding natural language descriptions for two distinct entity types. Experiments show that our approach significantly outperforms state-of-the-art methods. The reconstructed KB achieves a 68.8%-72.6% F-score."} {"_id": "2363304d6dfccc56d22e4dcb21397cd3aaa45b9a", "title": "Robust high-dimensional precision matrix estimation", "text": "The dependency structure of multivariate data can be analyzed using the covariance matrix \u03a3. In many fields the precision matrix \u03a3^{-1} is even more informative. As the sample covariance estimator is singular in high dimensions, it cannot be used to obtain a precision matrix estimator. A popular high-dimensional estimator is the graphical lasso, but it lacks robustness. We consider the high-dimensional independent contamination model. Here, even a small percentage of contaminated cells in the data matrix may lead to a high percentage of contaminated rows. Downweighting entire observations, which is done by traditional robust procedures, would then result in a loss of information. In this paper, we formally prove that replacing the sample covariance matrix in the graphical lasso with an elementwise robust covariance matrix leads to an elementwise robust, sparse precision matrix estimator computable in high dimensions. 
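The plug-in idea is easy to prototype. The sketch below uses one possible elementwise-robust choice (a Spearman-based pairwise estimator scaled by MADs), which is an assumption for illustration and not necessarily the estimator the paper analyzes; in practice the resulting matrix may need a projection to the positive semidefinite cone before the graphical lasso step.

```python
# One possible elementwise-robust plug-in for the graphical lasso; the
# Spearman/MAD construction here is an assumption, not the paper's estimator.
import numpy as np
from scipy.stats import spearmanr
from sklearn.covariance import graphical_lasso

def robust_covariance(X):
    rho = spearmanr(X)[0]                 # pairwise rank correlations
    r = 2.0 * np.sin(np.pi * rho / 6.0)   # consistency transform (Gaussian case)
    mad = 1.4826 * np.median(np.abs(X - np.median(X, axis=0)), axis=0)
    return r * np.outer(mad, mad)          # may need a PSD projection in practice

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.02] += 10.0      # cellwise (independent) contamination

_, precision = graphical_lasso(robust_covariance(X), alpha=0.2)
print(np.round(precision, 2))              # sparse precision matrix estimate
```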
Examples of such elementwise robust covariance estimators are given. The final precision matrix estimator is positive definite, has a high breakdown point under elementwise contamination and can be computed fast."} {"_id": "5ed8121a16c7ce5b89e69b7de482bab950796b67", "title": "The Impact of Clinical Empathy on Patients and Clinicians: Understanding Empathy's Side Effects", "text": ""} {"_id": "78e2cf228287d7e995c6718338e3ec58dc7cca50", "title": "Platelet-Rich Plasma: Applications in Dermatology", "text": ""} {"_id": "10a88fdb15b5cb5baf0f35d3e8726ffb829c8a87", "title": "Computer Forensics Field Triage Process Model", "text": "With the proliferation of digital-based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not preclude the system(s)/storage media from being transported back to a lab environment for a more thorough examination and analysis once the initial field triage is concluded. The CFFTPM has been successfully used in various real-world cases, and its investigative importance and pragmatic approach have been amply demonstrated. 
The current article describes the CFFTPM in detail, discusses the model\u2019s forensic soundness, investigative support capabilities and practical considerations."} {"_id": "c22c3fe69473c83533b19d2dd5481df4edd9e9e8", "title": "Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes", "text": "This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. A visual saliency scheme utilizing both color and depth cues is proposed to draw the attention of the machine system to unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from the MRF are further refined by merging the labeled objects that are spatially connected and have highly correlated color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. Object detection and manipulation experiments performed on a mobile manipulator validate its effectiveness and practicability in robotic applications."} {"_id": "ca42861ce4941e5e3490acd900ebfbbbcd7f9065", "title": "Interactive Character Animation Using Simulated Physics: A State-of-the-Art Review", "text": "Physics simulation offers the possibility of truly responsive and realistic animation. Despite wide adoption of physics simulation for the animation of passive phenomena, such as fluids, cloths and rag-doll characters, commercial applications still resort to kinematics-based approaches for the animation of actively controlled characters. However, following a renewed interest in the use of physics simulation for interactive character animation, many recent publications demonstrate tremendous improvements in robustness, visual quality and usability. We present a structured review of over two decades of research on physics-based character animation, as well as point out various open research areas and possible future directions."} {"_id": "73c8b5dfa616dd96aa5252633a0d5d11fc4186ee", "title": "Resilient packet delivery through software defined redundancy: An experimental study", "text": "Mission-critical applications that use computer networks require guaranteed communication with bounds on the time needed to deliver packets, even if hardware along a path fails or congestion occurs. Current networking technologies follow a reactive approach in which retransmission mechanisms and routing update protocols respond once packet loss occurs. We present a proactive approach that pre-builds and monitors redundant paths to ensure timely delivery with no recovery latency, even in the case of a single-link failure. The paper outlines the approach, describes a new mechanism we call a redundancy controller, and explains the use of SDN to establish and change paths. The paper also reports on a prototype system and gives experimental measurements."} {"_id": "7674e4e66c60a4a31d0b68a07d4ea521cca8a84b", "title": "The FuzzyLog: A Partially Ordered Shared Log", "text": "The FuzzyLog is a partially ordered shared log abstraction. Distributed applications can concurrently append to the partial order and play it back. 
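Before the details that follow, a toy rendering of the abstraction may help. The sketch below is only an illustration of a partially ordered log (it is not Dapple): it keeps a DAG of appended entries partitioned by hypothetical "colors" and plays back any order consistent with the recorded dependencies.

```python
# Toy partial-order log, illustrating the abstraction only (not Dapple).
from collections import defaultdict
from graphlib import TopologicalSorter  # Python 3.9+

class PartialOrderLog:
    def __init__(self):
        self.preds = {}                  # entry -> set of predecessor entries
        self.tails = defaultdict(set)    # color -> current tail entries
        self._seq = 0

    def append(self, colors, payload):
        entry = (self._seq, payload)
        self._seq += 1
        # the new entry is ordered after the current tail of every touched color
        self.preds[entry] = set().union(*(self.tails[c] for c in colors))
        for c in colors:
            self.tails[c] = {entry}
        return entry

    def playback(self):
        # any topological order consistent with the partial order is valid
        return [p for _, p in TopologicalSorter(self.preds).static_order()]

log = PartialOrderLog()
log.append({"red"}, "a")
log.append({"blue"}, "b")
log.append({"red", "blue"}, "c")   # ordered after both "a" and "b"
print(log.playback())              # e.g. ['a', 'b', 'c'] or ['b', 'a', 'c']
```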
FuzzyLog applications obtain the benefits of an underlying shared log \u2013 extracting strong consistency, durability, and failure atomicity in simple ways \u2013 without suffering from its drawbacks. By exposing a partial order, the FuzzyLog enables three key capabilities for applications: linear scaling for throughput and capacity (without sacrificing atomicity), weaker consistency guarantees, and tolerance to network partitions. We present Dapple, a distributed implementation of the FuzzyLog abstraction that stores the partial order compactly and supports efficient appends / playback via a new ordering protocol. We implement several data structures and applications over the FuzzyLog, including several map variants as well as a ZooKeeper implementation. Our evaluation shows that these applications are compact, fast, and flexible: they retain the simplicity (100s of lines of code) and strong semantics (durability and failure atomicity) of a shared log design while exploiting the partial order of the FuzzyLog for linear scalability, flexible consistency guarantees (e.g., causal+ consistency), and network partition tolerance. On a 6-node Dapple deployment, our FuzzyLog-based ZooKeeper supports 3M/sec single-key writes, and 150K/sec atomic cross-shard renames."} {"_id": "a5e1ba6191b8b53de48dd676267902364ff1b0d2", "title": "Predictors of Relationship Satisfaction in Online Romantic Relationships", "text": "Based on traditional theories of interpersonal relationship development and on the hyperpersonal communication theory, this study examined predictors of relationship satisfaction for individuals involved in online romantic relationships. One hundred fourteen individuals (N = 114) involved in online romantic relationships who had only engaged in computer-mediated communication (CMC) with their partners completed an online questionnaire about their relationships. Intimacy, trust, and communication satisfaction were found to be the strongest predictors of relationship satisfaction for individuals involved in online romances. Additionally, perceptions of relationship variables differed depending on relationship length and time spent communicating. Implications for interpersonal and hyperpersonal communication theories, and future investigation of online relationships, are discussed."} {"_id": "782a7865a217e42b33f85734b54e84f50ea14088", "title": "Impact of Corpora Quality on Neural Machine Translation", "text": "Large parallel corpora that are automatically obtained from the web, documents or elsewhere often exhibit many corrupted parts that are bound to negatively affect the quality of the systems and models that learn from these corpora. This paper describes frequent problems found in such data and how they affect neural machine translation systems, as well as how to identify and deal with them. The solutions are summarised in a set of scripts that remove problematic sentences from input corpora."} {"_id": "60756d9b0d8bf637b5ecdc1d782932903d6490f2", "title": "Computed tomographic analysis of tooth-bearing alveolar bone for orthodontic miniscrew placement.", "text": "INTRODUCTION\nWhen monocortical orthodontic miniscrews are placed in interdental alveolar bone, the safe position of the miniscrew tip should be ensured. This study was designed to quantify the periradicular space in the tooth-bearing area to provide practical guidelines for miniscrew placement.\n\n\nMETHODS\nComputerized tomographs of 30 maxillae and mandibles were taken from nonorthodontic adults with normal occlusion. 
Both mesiodistal interradicular distance and bone thickness over the narrowest interradicular space (safety depth) were measured at 2, 4, 6, and 8 mm from the cementoenamel junction.\n\n\nRESULTS\nMesiodistal space greater than 3 mm was available at the 8-mm level in the maxillary anterior region, between the premolars, and between the second premolar and the first molar at 4 mm. In the mandible, sufficient mesiodistal space was found between the premolars, between the molars, and between the second premolar and the first molar at the 4-mm level. Safety depth greater than 4 mm was found in the maxillary and mandibular intermolar regions, and between the second premolar and the first molar in both arches.\n\n\nCONCLUSIONS\nSubapical placement is advocated in the anterior segment. Premolar areas appear reliable in both arches. Angulated placement in the intermolar area is suggested to take advantage of the sufficient safety depth in this area."} {"_id": "2fd37fed17e07c4ec04caefe7dcbcb16670fa2d8", "title": "Block Chain Technologies & The Semantic Web: A Framework for Symbiotic Development", "text": "The concept of peer-to-peer applications is not new, nor is the concept of distributed hash tables. What emerged in 2008 with the publication of the Bitcoin white paper was an incentive structure that unified these two software paradigms with a set of economic stimuli to motivate the creation of a dedicated computing network orders of magnitude more powerful than the world\u2019s fastest supercomputers, the purpose of which is the maintenance of a massive distributed database known as the Bitcoin \u201cblock chain\u201d. Apart from the digital currency it enables, block chain technology is a fascinating new computing paradigm with broad implications for the future development of the World Wide Web, and by extension, the further growth of Linked Data and the Semantic Web. This work is divided into two main sections: we first demonstrate how block chain technologies can contribute toward the realization of a more robust Semantic Web, and subsequently we provide a framework wherein the Semantic Web is utilized to ameliorate block chain technology itself."} {"_id": "3897090008776b124ff299f135fdbf0cc3aa06d4", "title": "A survey of remote optical photoplethysmographic imaging methods", "text": "In recent years researchers have presented a number of new methods for recovering physiological parameters using just low-cost digital cameras and image processing. The ubiquity of digital cameras presents the possibility for many new, low-cost applications of vital sign monitoring. In this paper we present a review of the work on remote photoplethysmographic (PPG) imaging using digital cameras. This review specifically focuses on the state-of-the-art in PPG imaging where: 1) measures beyond pulse rate are evaluated, 2) non-ideal conditions (e.g., the presence of motion artifacts) are explored, and 3) use cases in relevant environments are demonstrated. We discuss gaps within the literature and future challenges for the research community. To aid in the continuing advancement of PPG imaging research, we are making available a website with the references collected for this review as well as information on available code and datasets of interest. It is our hope that this website will become a valuable resource for the PPG imaging community. 
The site can be found at: http://web.mit.edu/~djmcduff/www/remote-physiology.html."} {"_id": "ec241c7b314b472ac0d58060f5bc09b791ba6017", "title": "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation.", "text": "Remote measurements of the cardiac pulse can provide comfortable physiological assessment without electrodes. However, attempts so far are non-automated, susceptible to motion artifacts and typically expensive. In this paper, we introduce a new methodology that overcomes these problems. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the color channels into independent components. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to an FDA-approved finger blood volume pulse (BVP) sensor and achieved high accuracy and correlation even in the presence of movement artifacts. Furthermore, we applied this technique to perform heart rate measurements from three participants simultaneously. This is the first demonstration of a low-cost accurate video-based method for contact-free heart rate measurements that is automated, motion-tolerant and capable of performing concomitant measurements on more than one person at a time."} {"_id": "38bf5e4f8dc147b9b5a2a9a93c60a1e99b3fe3c0", "title": "Mobile monitoring with wearable photoplethysmographic biosensors.", "text": "Wearable biosensors (WBS) will permit continuous cardiovascular (CV) monitoring in a number of novel settings. Benefits may be realized in the diagnosis and treatment of a number of major diseases. WBS, in conjunction with appropriate alarm algorithms, can increase surveillance capabilities for CV catastrophe for high-risk subjects. WBS could also play a role in the treatment of chronic diseases, by providing information that enables precise titration of therapy or detecting lapses in patient compliance. WBS could play an important role in the wireless surveillance of people during hazardous operations (military, fire-fighting, etc.), or such sensors could be dispensed during a mass civilian casualty occurrence. Given that CV physiologic parameters make up the \"vital signs\" that are the most important information in emergency medical situations, WBS might enable a wireless monitoring system for large numbers of at-risk subjects. This same approach may also have utility in monitoring the waiting room of today's overcrowded emergency departments. For hospital inpatients who require CV monitoring, current biosensor technology typically tethers patients in a tangle of cables, whereas wearable CV sensors could increase inpatient comfort and may even reduce the risk of tripping and falling, a perennial problem for hospital patients who are ill, medicated, and in an unfamiliar setting. On a daily basis, wearable CV sensors could detect a missed dose of medication by sensing untreated elevated blood pressure and could trigger an automated reminder for the patient to take the medication. Moreover, it is important for doctors to titrate the treatment of high blood pressure, since both insufficient therapy as well as excessive therapy (leading to abnormally low blood pressures) increase mortality.
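A minimal sketch of the blind-source-separation pipeline described in the non-contact pulse measurement abstract above, assuming the face has already been tracked and reduced to mean R/G/B traces; the synthetic traces, frame rate, and cardiac band limits below are illustrative assumptions, not the authors' exact settings:

```python
# Sketch: ICA-based pulse recovery from RGB face traces (assumptions noted above).
import numpy as np
from sklearn.decomposition import FastICA

fs = 30.0                      # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)   # 30 s of synthetic data
pulse_hz = 1.2                 # assumed 72 bpm cardiac source
src = 0.02 * np.sin(2 * np.pi * pulse_hz * t)
rgb = np.column_stack([
    src * g + 0.05 * np.random.randn(t.size) for g in (0.3, 0.8, 0.5)
])  # stand-in for mean R/G/B values of the tracked face region

# Normalize each channel, then unmix into independent components.
x = (rgb - rgb.mean(0)) / rgb.std(0)
comps = FastICA(n_components=3, random_state=0).fit_transform(x)

# Pick the component with the strongest spectral peak in the cardiac band.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.75) & (freqs <= 4.0)   # roughly 45-240 bpm
power = np.abs(np.fft.rfft(comps, axis=0)) ** 2
best = np.argmax(power[band].max(axis=0))
bpm = 60 * freqs[band][np.argmax(power[band, best])]
print(f"estimated pulse rate: {bpm:.1f} bpm")
```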
However, healthcare providers have only intermittent values of blood pressure on which to base therapy decisions; it is possible that continuous blood pressure monitoring would permit enhanced titration of therapy and reductions in mortality. Similarly, WBS would be able to log the physiologic signature of a patient's exercise efforts (manifested as changes in heart rate and blood pressure), permitting the patient and healthcare provider to assess compliance with a regimen proven to improve health outcomes. For patients with chronic cardiovascular disease, such as heart failure, home monitoring employing WBS may detect exacerbations in very early (and often easily treated) stages, long before the patient progresses to more dangerous levels that necessitate an emergency room visit and costly hospital admission. In this article we will address both technical and clinical \u2026"} {"_id": "72e08cf12730135c5ccd7234036e04536218b6c1", "title": "An extended set of Haar-like features for rapid object detection", "text": "Recently Viola et al. [5] have introduced a rapid object detection scheme based on a boosted cascade of simple features. In this paper we introduce a novel set of rotated Haar-like features, which significantly enrich this basic set of simple Haar-like features and which can also be calculated very efficiently. At a given hit rate our sample face detector achieves on average a 10% lower false alarm rate by using these additional rotated features. We also present a novel post-optimization procedure for a given boosted cascade improving on average the false alarm rate further by 12.5%. Using both enhancements the number of false detections is only 24 at a hit rate of 82.3% on the CMU face set [7]."} {"_id": "83d677d8b227c2917606145ef8f894cf7fbc5d9b", "title": "Recovering pulse rate during motion artifact with a multi-imager array for non-contact imaging photoplethysmography", "text": "Photoplethysmography relies on characteristic changes in the optical absorption of tissue due to pulsatile (arterial) blood flow in peripheral vasculature. Sensors for observing the photoplethysmographic effect have traditionally required contact with the skin surface. Recent advances in non-contact imaging photoplethysmography have demonstrated that measures of cardiopulmonary system state, such as pulse rate, pulse rate variability, and respiration rate, can be obtained from a participant by imaging their face under relatively motionless conditions. A critical limitation in this method that must be resolved is the inability to recover these measures under conditions of head motion artifact. To investigate the adequacy of channel space dimensionality for the use of blind source separation in this context, nine synchronized, visible spectrum imagers positioned in a semicircular array centered on the imaged participant were used for data acquisition in a controlled lighting environment. Three-lead electrocardiogram and finger-tip reflectance photoplethysmogram were also recorded as ground truth signals. Controlled head motion artifact trial conditions were compared to trials in which the participant remained stationary, with and without the aid of a chinrest. Bootstrapped means of one-minute, non-overlapping trial segments show that, for situations involving little to no head motion, a single imager is sufficient for recovering pulse rate with an average absolute error of less than two beats per minute.
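Haar-like features are fast because each rectangle sum costs four lookups in an integral image; below is a minimal sketch of that core trick for the upright case only (the rotated features the abstract adds use an analogously tilted integral image, omitted here):

```python
# Sketch: rectangle sums via an integral image, the core of Haar-like features.
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y, :x]; padded so empty rectangles work.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, y, x, h, w):
    # Sum over img[y:y+h, x:x+w] with exactly four table lookups.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.random.randint(0, 256, (24, 24))
ii = integral_image(img)

# A two-rectangle "edge" feature: left half minus right half of a window.
y, x, h, w = 4, 6, 8, 12
feature = rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)
assert rect_sum(ii, y, x, h, w) == img[y:y + h, x:x + w].sum()
print("edge feature value:", feature)
```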
However, error in the recovered pulse rate measurement for the single imager can be as high as twenty-two beats per minute when head motion artifact is severe. This increase in measurement error during motion artifact was mitigated by increasing the dimensionality of the imager channel space with multiple imagers in the array prior to applying blind source separation. In contrast to single-imager results, the multi-imager channel space resulted in an absolute error in the recovered pulse rate measurement that is comparable with that of fingertip reflectance photoplethysmography. These results demonstrate that non-contact, imaging photoplethysmography can be accurate in the presence of head motion artifact when a multi-imager array is implemented to increase the dimensionality of the decomposed channel space."} {"_id": "146b84bdd9b9078f40a2df9b7ded26416771f740", "title": "Inverse Risk-Sensitive Reinforcement Learning", "text": "We address the problem of inverse reinforcement learning in Markov decision processes where the agent is risk-sensitive. We derive a risk-sensitive reinforcement learning algorithm with convergence guarantees that employs convex risk metrics and models of human decision-making deriving from behavioral economics. The risk-sensitive reinforcement learning algorithm provides the theoretical underpinning for a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on observed behavior of a risk-sensitive agent. We demonstrate the performance of the proposed technique on two examples: (i) the canonical Grid World example and (ii) a Markov decision process modeling ride-sharing passengers\u2019 decisions given price changes. In the latter, we use pricing and travel time data from a ride-sharing company to construct the transition probabilities and rewards of the Markov decision process."} {"_id": "2ee822dbb7ef60cd79e17cd2deec68ac14f13522", "title": "Minimization of symbolic automata", "text": "Symbolic Automata extend classical automata by using symbolic alphabets instead of finite ones. Most of the classical automata algorithms rely on the alphabet being finite, and generalizing them to the symbolic setting is not a trivial task. In this paper we study the problem of minimizing symbolic automata. We formally define and prove the basic properties of minimality in the symbolic setting, and lift classical minimization algorithms (Huffman-Moore's and Hopcroft's algorithms) to symbolic automata. While Hopcroft's algorithm is the fastest known algorithm for DFA minimization, we show how, in the presence of symbolic alphabets, it can incur an exponential blowup. To address this issue, we introduce a new algorithm that fully benefits from the symbolic representation of the alphabet and does not suffer from the exponential blowup. We provide a comprehensive performance evaluation of all the algorithms over large benchmarks and against existing state-of-the-art implementations. The experiments show how the new symbolic algorithm is faster than previous implementations."} {"_id": "90886fef6288f8d9789feaff6c9a96403c324498", "title": "An Empirical Evaluation of True Online TD({\\lambda})", "text": "The true online TD(\u03bb) algorithm has recently been proposed (van Seijen and Sutton, 2014) as a universal replacement for the popular TD(\u03bb) algorithm in temporal-difference learning and reinforcement learning.
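For the finite-alphabet baseline that the minimization abstract above lifts to symbolic automata, partition refinement is the key idea; here is a compact sketch of Moore's algorithm (Hopcroft's smarter "split on the smaller half" bookkeeping and the symbolic-alphabet generalization are omitted):

```python
# Sketch: Moore's DFA minimization by iterated partition refinement.
def minimize(states, alphabet, delta, accepting):
    # Start from the accepting / non-accepting split, then refine until
    # no two states in a class disagree on where any symbol leads.
    cls = {q: (q in accepting) for q in states}
    while True:
        sig = {q: (cls[q], tuple(cls[delta[q, a]] for a in alphabet))
               for q in states}
        new_ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_cls = {q: new_ids[sig[q]] for q in states}
        if len(set(new_cls.values())) == len(set(cls.values())):
            return new_cls          # class id per state of the minimal DFA
        cls = new_cls

# Toy DFA over {0, 1} in which states 1 and 2 are equivalent.
states = [0, 1, 2, 3]
alphabet = [0, 1]
delta = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 3,
         (2, 0): 3, (2, 1): 3, (3, 0): 3, (3, 1): 3}
print(minimize(states, alphabet, delta, accepting={3}))
```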
True online TD(\u03bb) has better theoretical properties than conventional TD(\u03bb), and the expectation is that it also results in faster learning. In this paper, we put this hypothesis to the test. Specifically, we compare the performance of true online TD(\u03bb) with that of TD(\u03bb) on challenging examples, random Markov reward processes, and a real-world myoelectric prosthetic arm. We use linear function approximation with tabular, binary, and non-binary features. We assess the algorithms along three dimensions: computational cost, learning speed, and ease of use. Our results confirm the strength of true online TD(\u03bb): 1) for sparse feature vectors, the computational overhead with respect to TD(\u03bb) is minimal; for non-sparse features the computation time is at most twice that of TD(\u03bb), 2) across all domains/representations the learning speed of true online TD(\u03bb) is often better, but never worse than that of TD(\u03bb), and 3) true online TD(\u03bb) is easier to use, because it does not require choosing between trace types, and it is generally more stable with respect to the step-size. Overall, our results suggest that true online TD(\u03bb) should be the first choice when looking for an efficient, general-purpose TD method."} {"_id": "4ed0fa22b8cef58b181d9a425bdde2db289ba295", "title": "Cause related marketing campaigns and consumer purchase intentions: The mediating role of brand awareness and corporate image", "text": "The purpose of this research is to investigate the relationship between Cause Related Marketing (CRM) campaigns, brand awareness and corporate image as possible antecedents of consumer purchase intentions in the less developed country of Pakistan. An initial conceptualization was developed from mainstream literature to be validated through empirical research. The conceptualization was then tested with primary quantitative survey data collected from 203 students studying in different universities of Rawalpindi and Islamabad. Correlation and regression analysis were used to test the key hypothesis derived from literature positing brand awareness and corporate image as mediating the relationship between CRM and consumer purchase intentions. The findings indicate that consumer purchase intentions are influenced by the cause related marketing campaigns. Furthermore, it was observed that brand awareness and corporate image partially mediate the impact of CRM campaigns on consumer purchase intentions. The data was gathered from universities situated in Rawalpindi and Islamabad only. Hence, future research could extend these findings to other cities in Pakistan to test their generalizability. Further research can be carried out through data collection from those people who actually participated in cause related marketing campaigns to identify the original behavior of customers instead of their purchase intentions. This research and the claims made are limited to the FMCG industry. The key implications of these findings for marketing managers lend support for the use of cause related marketing campaigns in Pakistan. The findings also suggest some measures which can be taken into consideration in order to enhance brand awareness and to improve corporate image as both variables mediate the impact of CRM campaigns on consumer purchase intentions. The study contributes to cause related marketing literature by indicating a mediating role of brand awareness and corporate image on CRM campaigns and consumer purchase intentions.
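As a concrete reference for the algorithm evaluated above, here is a minimal sketch of the true online TD(\u03bb) update with linear function approximation, following the published update equations; the toy chain environment and step-size parameters are illustrative assumptions:

```python
# Sketch: true online TD(lambda) with linear function approximation.
import numpy as np

def true_online_td(episodes, n_features, phi, alpha=0.1, gamma=0.99, lam=0.9):
    theta = np.zeros(n_features)
    for episode in episodes:                 # episode: list of (s, r, s') steps
        z = np.zeros(n_features)             # dutch-style eligibility trace
        v_old = 0.0
        for s, r, s_next in episode:
            x, x_next = phi(s), phi(s_next)
            v, v_next = theta @ x, theta @ x_next
            delta = r + gamma * v_next - v
            # Trace update with the extra "dutch" correction term.
            z = gamma * lam * z + x - alpha * gamma * lam * (z @ x) * x
            # Weight update with the v_old correction of true online TD.
            theta += alpha * (delta + v - v_old) * z - alpha * (v - v_old) * x
            v_old = v_next
    return theta

# Toy 3-state chain with one-hot features; state 3 acts as terminal.
phi = lambda s: np.eye(4)[s]
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 3)]
theta = true_online_td([episode] * 200, 4, phi)
print(theta.round(3))   # values approach gamma-discounted returns
```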
This mediating role was ignored in previous studies. Moreover, it contributes to closing the gap in empirical research in this field, which exists particularly due to the diverse attitude of customers in less developed countries such as Pakistan."} {"_id": "b9f13239b323e399c9746597196abec4c0fb7942", "title": "Loosecut: Interactive image segmentation with loosely bounded boxes", "text": "One popular approach to interactively segment an object of interest from an image is to annotate a bounding box that covers the object, followed by a binary labeling. However, the existing algorithms for such interactive image segmentation prefer a bounding box that tightly encloses the object. This increases the annotation burden, and prevents these algorithms from utilizing automatically detected bounding boxes. In this paper, we develop a new LooseCut algorithm that can handle cases where the bounding box only loosely covers the object. We propose a new Markov Random Fields (MRF) model for segmentation with loosely bounded boxes, including an additional energy term to encourage consistent labeling of similar-appearance pixels and a global similarity constraint to better distinguish the foreground and background. This MRF model is then solved by an iterated max-flow algorithm. We evaluate LooseCut in three public image datasets, and show its better performance against several state-of-the-art methods when increasing the bounding-box size."} {"_id": "2990cf242558ede739d6a26a2f8b098f94390323", "title": "Morphological Smoothing and Extrapolation of Word Embeddings", "text": "Languages with rich inflectional morphology exhibit lexical data sparsity, since the word used to express a given concept will vary with the syntactic context. For instance, each count noun in Czech has 12 forms (where English uses only singular and plural). Even in large corpora, we are unlikely to observe all inflections of a given lemma. This reduces the vocabulary coverage of methods that induce continuous representations for words from distributional corpus information. We solve this problem by exploiting existing morphological resources that can enumerate a word\u2019s component morphemes. We present a latent-variable Gaussian graphical model that allows us to extrapolate continuous representations for words not observed in the training corpus, as well as smoothing the representations provided for the observed words. The latent variables represent embeddings of morphemes, which combine to create embeddings of words. Over several languages and training sizes, our model improves the embeddings for words, when evaluated on an analogy task, skip-gram predictive accuracy, and word similarity."} {"_id": "3f748911b0bad210d7b9c4598d158a9b15d4ef5f", "title": "K-Automorphism: A General Framework For Privacy Preserving Network Publication", "text": "The growing popularity of social networks has generated interesting data management and data mining problems. An important concern in the release of these data for study is their privacy, since social networks usually contain personal information. Simply removing all identifiable personal information (such as names and social security numbers) before releasing the data is insufficient. It is easy for an attacker to identify the target by performing different structural queries. In this paper we propose k-automorphism to protect against multiple structural attacks and develop an algorithm (called KM) that ensures k-automorphism.
We also discuss an extension of KM to handle \u201cdynamic\u201d releases of the data. Extensive experiments show that the algorithm performs well in terms of the protection it provides."} {"_id": "12df6611d9fff192fa09e1da60310d7485190c1c", "title": "Making Smart Contracts Smarter", "text": "Cryptocurrencies record transactions in a decentralized data structure called a blockchain. Two of the most popular cryptocurrencies, Bitcoin and Ethereum, support the feature to encode rules or scripts for processing transactions. This feature has evolved to give practical shape to the ideas of smart contracts, or full-fledged programs that are run on blockchains. Recently, Ethereum's smart contract system has seen steady adoption, supporting tens of thousands of contracts, holding millions of dollars' worth of virtual coins.\n In this paper, we investigate the security of running smart contracts based on Ethereum in an open distributed network like those of cryptocurrencies. We introduce several new security problems in which an adversary can manipulate smart contract execution to gain profit. These bugs suggest subtle gaps in the understanding of the distributed semantics of the underlying platform. As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called Oyente to find potential security bugs. Among 19,336 existing Ethereum contracts, Oyente flags 8,833 of them as vulnerable, including the TheDAO bug which led to a 60 million US dollar loss in June 2016. We also discuss the severity of other attacks for several case studies which have source code available and confirm the attacks (which target only our accounts) in the main Ethereum network."} {"_id": "74f1b2501daf6b091d017a3ba3110a7db0ab50db", "title": "Enumerating the Non-Isomorphic Assembly Configurations of Modular Robotic Systems", "text": "A \"modular\" robotic system consists of joint and link modules that can be assembled in a variety of configurations to meet different or changing task requirements. However, due to typical symmetries in module design, different assembly configurations may lead to robotic structures which are kinematically identical, or isomorphic. This paper considers how to enumerate the non-isomorphic assembly configurations of a modular robotic system. We introduce an Assembly Incidence Matrix (AIM) to represent a modular robot assembly configuration. Then we use symmetries of the module geometry and graph isomorphisms to define an equivalence relation on the AIMs. Equivalent AIMs represent isomorphic robot assembly configurations. Based on this equivalence relation, we propose an algorithm to generate non-isomorphic assembly configurations of an n-link tree-like robot with different joint and link module types. Examples demonstrate that this method is a significant improvement over a brute force enumeration process."} {"_id": "edfd28fc766101c734fc534d65c569e8f9bdb9ac", "title": "Locally Normalized Filter Banks Applied to Deep Neural-Network-Based Robust Speech Recognition", "text": "This letter describes modifications to locally normalized filter banks (LNFB), which substantially improve their performance on the Aurora-4 robust speech recognition task using a Deep Neural Network-Hidden Markov Model (DNN-HMM)-based speech recognition system.
The modified coefficients, referred to as LNFB features, are a filter-bank version of locally normalized cepstral coefficients (LNCC), which have been described previously. The effectiveness of the LNFB features is enhanced through the use of newly proposed dynamic versions of them, which are developed using an approach that differs somewhat from the traditional development of delta and delta\u2013delta features. Further enhancements are obtained through the use of mean normalization and mean\u2013variance normalization, which is evaluated both on a per-speaker and a per-utterance basis. The best performing feature combination (typically LNFB combined with LNFB delta and delta\u2013delta features and mean\u2013variance normalization) provides an average relative reduction in word error rate of 11.4% and 9.4%, respectively, compared to comparable features derived from Mel filter banks when clean and multinoise training are used for the Aurora-4 evaluation. The results presented here suggest that the proposed technique is more robust to channel mismatches between training and testing data than MFCC-derived features and is more effective in dealing with channel diversity."} {"_id": "4eebe0d12aefeedf3ca85256bc8aa3b4292d47d9", "title": "DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks", "text": "Probabilistic forecasting, i.e. estimating the probability distribution of a time series\u2019 future given its past, is a key enabler for optimizing business processes. In retail businesses, for example, forecasting demand is crucial for having the right inventory available at the right time at the right place. In this paper we propose DeepAR, a methodology for producing accurate probabilistic forecasts, based on training an auto-regressive recurrent network model on a large number of related time series. We demonstrate how by applying deep learning techniques to forecasting, one can overcome many of the challenges faced by widely-used classical approaches to the problem. We show through extensive empirical evaluation on several real-world forecasting data sets that our methodology produces more accurate forecasts than other state-of-the-art methods, while requiring minimal manual work."} {"_id": "5581f2e0e08ccd5280637eb79865939b6eafb1f1", "title": "IMPACTS DUE TO INTERNET ADDICTION AMONG MALAYSIAN UNIVERSITY STUDENTS Azizah Zainudin", "text": "The purpose of this study is to examine the impacts of Internet addiction among Malaysian university students. The research methodology used in this study was to distribute survey questionnaires to 653 university students from five different universities in Malaysia. There were four possible impacts measured in this research study, which included academic performance, relationships, personality, and lifestyle. The findings show that Internet addiction causes problems with respondents\u2019 academic performance, fosters undesirable personality traits, and promotes an unhealthy lifestyle. There were significant differences in academic performance, personality and lifestyle between \u201cAverage user\u201d and \u201cExcessive user\u201d. The above matter will be further discussed throughout this"} {"_id": "eef7d32eca9a363c31151d2ac4873f98a93f02d2", "title": "Micropower energy harvesting", "text": "More than a decade of research in the field of thermal, motion, vibration and electromagnetic radiation energy harvesting has yielded increasing power output and smaller embodiments.
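For reference, the dynamic (delta) features and mean\u2013variance normalization mentioned in the LNFB abstract above are standard operations; a minimal sketch of the conventional formulas follows (the LNFB-specific variant described in the letter differs somewhat, as the abstract itself notes):

```python
# Sketch: conventional delta features and mean-variance normalization.
import numpy as np

def deltas(feats, n=2):
    # Regression formula: d_t = sum_k k*(c_{t+k} - c_{t-k}) / (2 * sum_k k^2)
    padded = np.pad(feats, ((n, n), (0, 0)), mode="edge")
    denom = 2 * sum(k * k for k in range(1, n + 1))
    return sum(k * (padded[n + k:len(feats) + n + k] -
                    padded[n - k:len(feats) + n - k])
               for k in range(1, n + 1)) / denom

def mvn(feats):
    # Per-utterance mean-variance normalization of each coefficient.
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

frames = np.random.randn(100, 40)           # 100 frames x 40 filter-bank bins
full = np.hstack([frames, deltas(frames), deltas(deltas(frames))])
print(mvn(full).shape)                      # (100, 120): static + delta + delta-delta
```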
Power management circuits for rectification and DC\u2013DC conversion are becoming able to efficiently convert the power from these energy harvesters. This paper summarizes recent energy harvesting results and their power man-"} {"_id": "1cbeb9f98bf84e35d53cae946a4cd7a87b7fda69", "title": "The Role of Asymmetric Dimethylarginine (ADMA) in Endothelial Dysfunction and Cardiovascular Disease", "text": "Endothelium plays a crucial role in the maintenance of vascular tone and structure. Endothelial dysfunction is known to precede overt coronary artery disease. A number of cardiovascular risk factors, as well as metabolic diseases and systemic or local inflammation, cause endothelial dysfunction. Nitric oxide (NO) is one of the major endothelium derived vaso-active substances whose role is of prime importance in maintaining endothelial homeostasis. Low levels of NO are associated with impaired endothelial function. Asymmetric dimethylarginine (ADMA), an analogue of L-arginine, is a naturally occurring product of metabolism found in human circulation. Elevated levels of ADMA inhibit NO synthesis and therefore impair endothelial function and thus promote atherosclerosis. ADMA levels are increased in people with hypercholesterolemia, atherosclerosis, hypertension, chronic heart failure, diabetes mellitus and chronic renal failure. A number of studies have reported ADMA as a novel risk marker of cardiovascular disease. Increased levels of ADMA have been shown to be the strongest risk predictor, beyond traditional risk factors, of cardiovascular events and all-cause and cardiovascular mortality in people with coronary artery disease. Interventions such as treatment with L-arginine have been shown to improve endothelium-mediated vasodilatation in people with high ADMA levels. However, the clinical utility of modifying circulating ADMA levels remains uncertain."} {"_id": "a03d7b9ed965d7ad97027e4cd141fec3ad072c64", "title": "Sindice.com: a document-oriented lookup index for open linked data", "text": "Data discovery on the Semantic Web requires crawling and indexing of statements, in addition to the \u2018linked-data\u2019 approach of de-referencing resource URIs. Existing Semantic Web search engines are focused on database-like functionality, compromising on index size, query performance and live updates. We present Sindice, a lookup index over Semantic Web resources. Our index allows applications to automatically locate documents containing information about a given resource. In addition, we allow resource retrieval through inverse-functional properties, offer a full-text search and index SPARQL endpoints. Finally, we extend the sitemap protocol to efficiently index large datasets with minimal impact on data providers."} {"_id": "5227edcd3f91f8d4f853b7261ad9c971194fd490", "title": "Evaluating radial, axial and transverse flux topologies for 'in-wheel' motor", "text": "Three different motor topologies, (1) radial, (2) axial and (3) transverse flux, are investigated for an 'in-wheel' electric propulsion system. The design constraints are: 350 mm \u00d7 100 mm package limitation, 48 V DC bus restriction and per-phase inverter current limited to 500 A (peak). In addition, magnet pole pairs are fixed to 9 due to limitation in 'position sensor' capacity.
The paper summarizes results obtained from analytical design followed by 2D and 3D static and transient finite element analysis accompanied with experimental verification for the radial flux topology."} {"_id": "dc2c9af5b05e832f32d4a0f5c48e5b08fdd31429", "title": "Essential features of serious games design in higher education: Linking learning attributes to game mechanics", "text": "This paper consolidates evidence and material from a range of specialist and disciplinary fields to provide an evidence-based review and synthesis on the design and use of serious games in higher education. Search terms identified 165 papers reporting conceptual and empirical evidence on how learning attributes and game mechanics may be planned, designed, and implemented by university teachers interested in using games, which are integrated into lesson plans and orchestrated as part of a learning sequence at any scale. The findings outline the potential of classifying the links between learning attributes and game mechanics as a means to scaffold teachers\u2019 understanding of how to perpetuate learning in optimal ways whilst enhancing the in-game learning experience. The findings of this paper provide a foundation for describing methods, frames, and discourse around experiences of design and use of serious games, linked to methodological limitations and recommendations for further research in this area.\n\nPractitioner Notes\nWhat is already known about this topic\n\u2022 Serious game design is a relatively new discipline that couples learning design with game features. A key characteristic of this approach is grounded in educational need and theory, rather than a focus purely on entertainment. Under this approach, a common method for designing serious games involves the creation of learning strategies, content and principles for the purpose of enhancing the student\u2019s learning experience.\n\u2022 There are no pedagogically driven strategies that take into account how learning attributes are interlinked to game elements for balancing learning with play. This is due to a limited evidence base of comparative evaluations assessing differing game designs against a single pedagogical model, or vice versa. This often leaves practitioners seeking to introduce games into the classroom with difficulties in identifying what defines a game or how design should be enacted in a way that encompasses particular learning interventions.\n\u2022 Qualitative research methodologies have not been utilised as often for discerning how people experience, understand and use serious games for teaching and learning in higher education.\nWhat this paper adds\n\u2022 This paper presents the foundation of a taxonomy linking learning and game attributes along with teacher roles, aiming to encourage the cross-fertilisation and further integration of evidence on serious game design. The initial findings provide insight for practitioners on elements common to games and serious games, and how these link to particular learning design strategies that may afford pedagogically-rich games.\n\u2022 It informs development in the design and use of serious games to support teaching and learning in higher education.
Given the key role of practitioners in the development of a wide range of serious games, due to the iterative and participatory methods frequently adopted, it offers insight into game elements and mechanics, allowing practitioners to easily relate to how learning activities, outcomes, feedback and roles may vary and be visualised in relation to what needs to be learned out of playing the game.\n\u2022 The paper also identifies gaps in the evidence base and avenues for future research. Through this a practitioner can gain insight into the current unknowns in the area, and relate their experience of introducing games in the classroom to the current evidence base.\nImplications for practice and/or policy\n\u2022 The taxonomy informs serious game/instructional designers, game developers, academics and students about how learning elements (e.g. learning activities, learning outcomes, feedback and assessment) can be represented in games. We advocate dialogue between teachers and serious game designers, for improving the process of amalgamating learning with fun; and whilst the idea of using digital games is relatively new, teachers have a wealth of experience introducing learning features of games into their classroom activities, experience which frequently goes undocumented.\n\u2022 It aids the delineation of game attributes and associated game categories for characterising games based on primary purpose and use. Planning and designing teaching and learning in a game and the teacher\u2019s associated role in guiding the in-game learning process is a consistent challenge, and this paper seeks to provide a foundation by which this process can be undertaken in an informed and conscious manner.\n\u2022 The findings of the review act as the point of departure for creating a research agenda for understanding disjunctions between espoused and enacted personal theories of using serious games through qualitative methodologies.\n\nIntroduction\nSerious Games (SGs) design is a relatively new discipline that couples learning design with game mechanics and logic (de Freitas, 2006; Hainley 2006; Westera et al., 2008). Design for SGs involves the creation of learning activities that may use the whole game or entail a gaming element (e.g. leader boards, virtual currencies, in-game hints) for the purpose of transforming the student\u2019s learning experience. Arguments against SGs have centred upon lack of empirical evidence in support of their efficacy, and the fact that appropriate research methodologies have not been enacted as yet for discerning how people understand and use SGs for teaching and learning in higher education (Mayer 2012; Connolly 2012). However, there are studies in the UK and US respectively that have demonstrated positive results in large sample groups (see for example Dunwell et al., 2014; Hertzog et al., 2014; Kato et al., 2008). Research evidence stresses the lack of commonly accepted pedagogically-driven strategies to afford game mechanisms and suggests that an inclusive model that takes into account pedagogy and teaching strategy, aligned to game activity and assessment, is necessary for balancing play features with pedagogical aspects. This argument stems from an important observation that learning is a constructive process, which encompasses aspects of collaborative learning in which knowledge creation emerges through discussion and negotiation between individuals and groups.
With these perspectives, this paper draws together evidence and material from a range of specialist and disciplinary fields to offer a critical review and synthesis of the design and use of SGs in higher"} {"_id": "c7cd25ee91ee78be8617fbf24ba5535f6b458483", "title": "Removing Shadows from Images", "text": "We attempt to recover a 2D chromaticity intrinsic variation of a single RGB image which is independent of lighting, without requiring any prior knowledge about the camera. The proposed algorithm aims to learn an invariant direction for projecting from 2D color space to the greyscale intrinsic image. We notice that along this direction, the entropy in the derived invariant image is minimized. The experiments conducted on various inputs indicate that this method achieves intrinsic 2D chromaticity images which are free of shadows. In addition, we examined the idea of utilizing projection pursuit instead of entropy minimization to find the desired direction."} {"_id": "e3dd580388948bed7c6307d12b9018fe49db33ae", "title": "Kwyjibo: automatic domain name generation", "text": "Automatically generating \u2018good\u2019 domain names that are random yet pronounceable is a problem harder than it first appears. The problem is related to random word generation, and we survey and categorize existing techniques before presenting our own syllable-based algorithm that produces higher-quality results. Our results are also applicable elsewhere, in areas such as password generation, username generation, and even computer-generated poetry. Copyright \u00a9 2008 John Wiley & Sons, Ltd."} {"_id": "db976381ba0bcf53cd4bd359d1dffe39d259aa2e", "title": "Multi-session PLDA scoring of i-vector for partially open-set speaker detection", "text": "This paper advocates the use of probabilistic linear discriminant analysis (PLDA) for the partially open-set detection task with a multiple i-vector enrollment condition. Also referred to as speaker verification, the speaker detection task has always been considered under an open-set scenario. In this paper, a more general partially open-set speaker detection problem is considered, where the impostors might be one of the known speakers previously enrolled to the system. We show how this could be coped with by modifying the definition of the alternative hypothesis in the PLDA scoring function. We also look into the impact of the conditional-independence assumption as it was used to derive the PLDA scoring function with multiple training i-vectors. Experiments were conducted using the NIST 2012 Speaker Recognition Evaluation (SRE\u201912) datasets to validate various points discussed in the paper."} {"_id": "ba0200a5e8d0217c123207956b9e19810920e406", "title": "The effects of prosocial video games on prosocial behaviors: international evidence from correlational, longitudinal, and experimental studies.", "text": "Although dozens of studies have documented a relationship between violent video games and aggressive behaviors, very little attention has been paid to potential effects of prosocial games. Theoretically, games in which game characters help and support each other in nonviolent ways should increase both short-term and long-term prosocial behaviors. We report three studies conducted in three countries with three age groups to test this hypothesis. In the correlational study, Singaporean middle-school students who played more prosocial games behaved more prosocially.
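A minimal sketch of the entropy-minimization search described in the shadow-removal abstract above, assuming 2D log-chromaticity coordinates have already been computed per pixel; the synthetic data, histogram binning, and 1-degree search grid are illustrative assumptions:

```python
# Sketch: find the invariant direction by minimizing projection entropy.
import numpy as np

def entropy(samples, bins=64):
    hist, _ = np.histogram(samples, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log(p)).sum()

# Stand-in log-chromaticity pairs, e.g. (log(R/G), log(B/G)) per pixel:
# clusters spread along one lighting direction, as the physics predicts.
rng = np.random.default_rng(0)
true_dir = np.deg2rad(40)                  # unknown lighting direction
surfaces = rng.normal(0, 1, (5, 2))        # 5 surface colors
chroma = np.vstack([
    s + rng.normal(0, 2, (200, 1)) * np.array([np.cos(true_dir), np.sin(true_dir)])
    for s in surfaces
])

# Project onto the direction orthogonal to each candidate angle and
# keep the angle whose 1D projection has minimal entropy.
angles = np.deg2rad(np.arange(180))
ents = [entropy(chroma @ np.array([-np.sin(a), np.cos(a)])) for a in angles]
best = angles[int(np.argmin(ents))]
print(f"estimated invariant direction: {np.rad2deg(best):.0f} deg (true 40)")
```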
In the two longitudinal samples of Japanese children and adolescents, prosocial game play predicted later increases in prosocial behavior. In the experimental study, U.S. undergraduates randomly assigned to play prosocial games behaved more prosocially toward another student. These similar results across different methodologies, ages, and cultures provide robust evidence of a prosocial game content effect, and they provide support for the General Learning Model."} {"_id": "954155ba57f34af0db5477aed4a4fa5f0ccf4933", "title": "Oral squamous cell carcinoma overview.", "text": "Most cancer in the head and neck is squamous cell carcinoma (HNSCC) and the majority is oral squamous cell carcinoma (OSCC). This is an overview of oral squamous cell carcinoma (OSCC) highlighting essential points from the contributors to this issue, to set the scene. It emphasises the crucial importance of prevention and the necessarily multidisciplinary approach to the diagnosis and management of patients with OSCC."} {"_id": "6ec8febd00f973a1900f951099feb0b5ec60e9af", "title": "Hybrid crafting: towards an integrated practice of crafting with physical and digital components", "text": "With current digital technologies, people have large archives of digital media, such as images and audio files, but there are only limited means to include these media in creative practices of crafting and making. Nevertheless, studies have shown that crafting with digital media often makes these media more cherished and that people enjoy being creative with their digital media. This paper aims to open up the way for novel means for crafting, which include digital media in integrations with physical construction, here called \u2018hybrid crafting\u2019. Notions of hybrid crafting were explored to inform the design of products or systems that may support these new crafting practices. We designed \u2018Materialise\u2019\u2014a building set that allows for the inclusion of digital images and audio files in physical constructions by using tangible building blocks that can display images or play audio files, alongside a variety of other physical components\u2014and used this set in four hands-on creative workshops to gain insight into how people go about doing hybrid crafting; whether hybrid crafting is desirable; what the characteristics of hybrid crafting are; and how we may design to support these practices. By reflecting on the findings from these workshops, we provide concrete guidelines for the design of novel hybrid crafting products or systems that address craft context, process and result. We aim to open up the design space to designing for hybrid crafting because these new practices provide interesting new challenges and opportunities for future crafting that can lead to novel forms of creative expression."} {"_id": "8fc1b05e40fab886bde1645208999c824806b479", "title": "MOA: Massive Online Analysis", "text": "Massive Online Analysis (MOA) is a software environment for implementing algorithms and running experiments for online learning from evolving data streams. MOA includes a collection of offline and online methods as well as tools for evaluation. In particular, it implements boosting, bagging, and Hoeffding Trees, all with and without Na\u00efve Bayes classifiers at the leaves.
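The Hoeffding Trees that MOA implements decide when to split using the Hoeffding bound; a one-function sketch of that test follows (the information-gain numbers are made-up placeholders):

```python
# Sketch: the Hoeffding-bound split test used by Hoeffding Trees.
import math

def hoeffding_bound(value_range, delta, n):
    # With probability 1 - delta, the observed mean of n samples is within
    # epsilon of the true mean for a statistic with the given range.
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

# Split when the best attribute's observed gain leads the runner-up
# by more than epsilon -- then the choice is stable with high confidence.
g_best, g_second, n = 0.42, 0.31, 500       # placeholder gain statistics
eps = hoeffding_bound(value_range=math.log2(2), delta=1e-7, n=n)
print(f"epsilon = {eps:.4f}; split now: {g_best - g_second > eps}")
```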
MOA supports bi-directional interaction with WEKA, the Waikato Environment for Knowledge Analysis, and is released under the GNU GPL license."} {"_id": "0af89f7184163337558ba3617101aeec5c7f7169", "title": "Deep Learning with Limited Numerical Precision", "text": "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network\u2019s behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding."} {"_id": "1d827e24143e5fdfe709d33b7b13a9a24d402efd", "title": "Learning Deep Features for Scene Recognition using Places Database", "text": "[1] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 2007. [2] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007. [3] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proc. CVPR, 2006. [4] L.-J. Li and L. Fei-Fei. What, where and who? classifying events by scene and object recognition. In Proc. ICCV, 2007. [5] G. Patterson and J. Hays. Sun attribute database: Discovering, annotating, and recognizing scene attributes. In Proc. CVPR, 2012."} {"_id": "233b1774f28c9972df2dfcf20dfbb0df45792bd0", "title": "A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks", "text": "Deep networks are state-of-the-art models used for understanding the content of images, videos, audio and raw input data. Current computing systems are not able to run deep network models in real-time with low power consumption. In this paper we present nn-X: a scalable, low-power coprocessor for enabling real-time execution of deep neural networks. nn-X is implemented on programmable logic devices and comprises an array of configurable processing elements called collections. These collections perform the most common operations in deep networks: convolution, subsampling and non-linear functions. The nn-X system includes 4 high-speed direct memory access interfaces to DDR3 memory and two ARM Cortex-A9 processors. Each port is capable of a sustained throughput of 950 MB/s in full duplex. nn-X is able to achieve a peak performance of 227 G-ops/s, a measured performance in deep learning applications of up to 200 G-ops/s while consuming less than 4 watts of power. This translates to a performance per power improvement of 10 to 100 times that of conventional mobile and desktop processors."} {"_id": "2ffc74bec88d8762a613256589891ff323123e99", "title": "Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks", "text": "Convolutional neural network (CNN) has been widely employed for image recognition because it can achieve high accuracy by emulating behavior of optic nerves in living creatures. Recently, rapid growth of modern applications based on deep learning algorithms has further improved research and implementations. 
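A minimal sketch of the stochastic rounding scheme that the limited-precision abstract above identifies as crucial, for a toy fixed-point format (the word length and fractional-bit count here are illustrative, not the paper's exact configuration):

```python
# Sketch: stochastic rounding to a fixed-point grid with FL fractional bits.
import numpy as np

def stochastic_round(x, frac_bits=8, rng=np.random.default_rng(0)):
    # Round up with probability proportional to the distance from the
    # lower grid point, so the rounding is unbiased: E[round(x)] == x.
    eps = 2.0 ** -frac_bits
    scaled = np.asarray(x) / eps
    lower = np.floor(scaled)
    return (lower + (rng.random(np.shape(x)) < scaled - lower)) * eps

x = np.full(100_000, 0.30)                    # 0.30 is off the 2^-8 grid
sr = stochastic_round(x)
print("grid neighbours:", np.unique(sr))      # two adjacent grid values
print("mean preserved: ", sr.mean())          # ~0.30 in expectation
```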
In particular, various accelerators for deep CNN have been proposed based on the FPGA platform because it has advantages of high performance, reconfigurability, and fast development cycles. Although current FPGA accelerators have demonstrated better performance over generic processors, the accelerator design space has not been well exploited. One critical problem is that the computation throughput may not well match the memory bandwidth provided by an FPGA platform. Consequently, existing approaches cannot achieve the best performance due to under-utilization of either logic resource or memory bandwidth. At the same time, the increasing complexity and scalability of deep learning applications aggravate this problem. In order to overcome this problem, we propose an analytical design scheme using the roofline model. For any solution of a CNN design, we quantitatively analyze its computing throughput and required memory bandwidth using various optimization techniques, such as loop tiling and transformation. Then, with the help of the roofline model, we can identify the solution with the best performance and lowest FPGA resource requirement. As a case study, we implement a CNN accelerator on a VC707 FPGA board and compare it to previous approaches. Our implementation achieves a peak performance of 61.62 GFLOPS under a 100 MHz working frequency, which outperforms previous approaches significantly."} {"_id": "a83d62bb714a02dc5ba9a6f1aa7dd2fe16c29fb7", "title": "Topology Derivation of Nonisolated Three-Port DC\u2013DC Converters From DIC and DOC", "text": "A systematic approach is proposed for the derivation of nonisolated three-port converter (TPC) topologies based on dual-input converters (DIC) and dual-output converters (DOC), which serves as an interface for a renewable source, a storage battery, and a load simultaneously. The power flow in a TPC is analyzed and compared with that in a DIC or a DOC, respectively. Beginning with building the power flow paths of a TPC from a DIC or a DOC, the general principles and detailed procedures for the generation and optimization of TPC topologies are presented. Based on these works, a family of nonisolated TPC topologies is developed. The derived TPCs feature single-stage power conversion between any two of the three ports, and result in high integration and high efficiency. One of the proposed TPCs, named Boost-TPC, is taken as an example for verifying the performance of its circuit topology with related operational methods. Pulsewidth modulation and power management methods, used in this circuit, are analyzed in detail. Experiments have been carried out on a 1-kW prototype of the Boost-TPC, which demonstrate the feasibility and effectiveness of the proposed topology derivation method."} {"_id": "c00b3b26acf8fe8b362e75eac91349c37a1bd29b", "title": "A framework for cognitive Internet of Things based on blockchain", "text": "Internet of Things, cognitive systems, and blockchain technology are three fields which have created numerous revolutions in software development. It seems that a combination of these fields may result in the emergence of a high-potential and interesting new field. Therefore, in this paper, we propose a framework for the Internet of Things based on cognitive systems and blockchain technology. To the best of our knowledge, there is no existing framework for the Internet of Things based on cognitive systems and blockchain. In order to study the applicability of the proposed framework, a recommender system based on the proposed framework is suggested.
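The roofline reasoning used in the FPGA accelerator abstract above reduces to one comparison; here is a tiny sketch with made-up platform numbers (the peak throughput and bandwidth are illustrative, not the VC707 figures):

```python
# Sketch: roofline model -- attainable performance of one design point.
def attainable_gflops(peak_gflops, bandwidth_gbs, ctc_ratio):
    # ctc_ratio: computation-to-communication ratio (FLOP per byte moved),
    # raised by optimizations such as loop tiling that increase data reuse.
    return min(peak_gflops, bandwidth_gbs * ctc_ratio)

peak, bw = 100.0, 4.5                 # assumed platform roof and DRAM bandwidth
for ctc in (5.0, 20.0, 40.0):         # candidate tilings with growing reuse
    perf = attainable_gflops(peak, bw, ctc)
    bound = "memory-bound" if perf < peak else "compute-bound"
    print(f"CTC {ctc:5.1f} FLOP/byte -> {perf:6.1f} GFLOPS ({bound})")
```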
Since the proposed framework is novel, the suggested recommender system is also novel. The suggested recommender system is compared with the existing recommender systems. The results show that the suggested recommender system has several benefits which are not available in the existing recommender systems."} {"_id": "38bf5e4f8dc147b9b5a2a9a93c60a1e99b3fe3c0", "title": "Management patterns: SDN-enabled network resilience management", "text": "Software-defined networking provides abstractions and a flexible architecture for the easy configuration of network devices, based on the decoupling of the data and control planes. This separation has the potential to considerably simplify the implementation of resilience functionality (e.g., traffic classification, anomaly detection, traffic shaping) in future networks. Although software-defined networking in general, and OpenFlow as its primary realisation, provide such abstractions, support is still needed for orchestrating a collection of OpenFlow-enabled services that must cooperate to implement network-wide resilience. In this paper, we describe a resilience management framework that can be readily applied to this problem. An important part of the framework is a set of policy-controlled management patterns that describe how to orchestrate individual resilience services, implemented as OpenFlow applications."} {"_id": "89a825fa5f41ac97cbf69c2a72230b1ea695d1e0", "title": "Gated convolutional networks based hybrid acoustic models for low resource speech recognition", "text": "In acoustic modeling for large vocabulary speech recognition, recurrent neural networks (RNN) have shown great abilities to model temporal dependencies. However, the performance of RNNs is not prominent in resource-limited tasks, and can be even worse than that of traditional feedforward neural networks (FNN). Furthermore, training time for RNNs is much longer than that for FNNs. In recent years, some novel models have been proposed that use non-recurrent architectures to model long-term dependencies. In these architectures, it has been shown that using a gate mechanism is an effective method to construct acoustic models. On the other hand, it has been proved that using convolution operations is a good method to learn acoustic features. We aim to take advantage of both of these methods. In this paper we present a gated convolutional approach to low-resource speech recognition tasks. The gated convolutional networks use convolutional architectures to learn input features and a gate to control information. Experiments are conducted on OpenKWS, a series of low-resource keyword search evaluations. From the results, the gated convolutional networks decrease the WER by about 6% relative to the baseline LSTM models, 5% relative to the DNN models and 3% relative to the BLSTM models. In addition, the new models accelerate learning by more than 1.8 and 3.2 times compared to the baseline LSTM and BLSTM models."} {"_id": "b43a7c665e21e393123233374f4213a811fe7dfe", "title": "Deep Texture Features for Robust Face Spoofing Detection", "text": "Biometric systems are quite common in our everyday life. Despite the difficulty of circumventing them, nowadays criminals are developing techniques to accurately simulate physical, physiological, and behavioral traits of valid users, a process known as a spoofing attack. In this context, robust countermeasure methods must be developed and integrated with the traditional biometric applications in order to prevent such frauds.
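The gating mechanism in the gated convolutional abstract above can be sketched in a few lines: two parallel convolutions, one squashed into a gate that multiplicatively controls the other (the shapes, filter widths, and random weights here are illustrative assumptions):

```python
# Sketch: a gated (linear unit) 1D convolution, h = A * sigmoid(B).
import numpy as np

def conv1d(x, w):
    # x: (time, in_ch); w: (width, in_ch, out_ch); valid convolution.
    width = w.shape[0]
    return np.stack([
        np.tensordot(x[t:t + width], w, axes=([0, 1], [0, 1]))
        for t in range(x.shape[0] - width + 1)
    ])

def glu_layer(x, w_a, w_b):
    a, b = conv1d(x, w_a), conv1d(x, w_b)
    return a * (1.0 / (1.0 + np.exp(-b)))   # the gate decides what passes

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 40))               # 100 frames of 40-dim features
w_a = rng.normal(size=(3, 40, 64)) * 0.1     # two parallel 3-frame filters
w_b = rng.normal(size=(3, 40, 64)) * 0.1
print(glu_layer(x, w_a, w_b).shape)          # (98, 64)
```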
Despite face being a promising trait due to its convenience and acceptability, face recognition systems can be easily fooled with common printed photographs. Most state-of-the-art antispoofing techniques for face recognition applications extract handcrafted texture features from images, mainly based on the efficient local binary patterns (LBP) descriptor, to characterize them. However, recent results indicate that high-level (deep) features are more robust for such complex tasks. In this brief, a novel approach for face spoofing detection that extracts deep texture features from images by integrating the LBP descriptor into a modified convolutional neural network is proposed. Experiments on the NUAA spoofing database indicate that such a deep neural network (called LBPnet) and an extended version of it (n-LBPnet) outperform other state-of-the-art techniques, presenting great results in terms of attack detection."} {"_id": "762f5a712f4d6994ead089fcc0c5db98479a2008", "title": "Performance Evaluation of Concurrent Lock-free Data Structures on GPUs", "text": "Graphics processing units (GPUs) have emerged as a strong candidate for high-performance computing. While regular data-parallel computations with little or no synchronization are easy to map onto GPU architectures, it is a challenge to scale up computations on dynamically changing pointer-linked data structures. The traditional lock-based implementations are known to offer poor scalability due to high lock contention in the presence of thousands of active threads, which is common in GPU architectures. In this paper, we present a performance evaluation of concurrent lock-free implementations of four popular data structures on GPUs. We implement a set using lock-free linked list, hash table, skip list, and priority queue. On the first three data structures, we evaluate the performance of different mixes of addition, deletion, and search operations. The priority queue is designed to support retrieval and deletion of the minimum element and addition operations to the set. We evaluate the performance of these lock-free data structures on a Tesla C2070 Fermi GPU and compare it with the performance of multi-threaded lock-free implementations for CPU running on a 24-core Intel Xeon server. The linked list, hash table, skip list, and priority queue implementations achieve speedups of up to 7.4, 11.3, 30.7, and 30.8, respectively, on the GPU compared to the Xeon server."} {"_id": "26e207eb124fb0a939f821e1efa24e7c6b990844", "title": "Images of Success and the Preference for Luxury", "text": "This research examines the impact of media depictions of success (or failure) on consumers\u2019 desire for luxury brands. In a pilot study and three additional studies, we demonstrate that reading a story about a similar/successful other, such as a business major from the same university, increases consumers\u2019 expectations about their own future wealth, which in turn increases their desire for luxury brands. However, reading about a dissimilar successful other, such as a biology major, lowers consumers\u2019 preferences for luxury brands.
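For reference, the basic LBP descriptor that the spoofing abstract above builds on compares each pixel with its eight neighbours and packs the results into a byte; a minimal dense sketch follows (the 3x3 neighbourhood and bit order are the common convention, not necessarily the paper's exact variant):

```python
# Sketch: dense 8-neighbour local binary patterns over a grayscale image.
import numpy as np

def lbp(img):
    c = img[1:-1, 1:-1]                       # each interior pixel's value
    # Eight neighbours, clockwise from the top-left, one bit each.
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

img = np.random.randint(0, 256, (64, 64))
codes = lbp(img)
hist = np.bincount(codes.ravel(), minlength=256)  # the texture descriptor
print(codes.shape, hist.sum())                    # (62, 62) 3844
```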
Furthermore, we examine the role of ease of imagining oneself in the narrative as a mediator of the relation between direction of comparison, similarity, and brand preference."} {"_id": "defbb82519f3500ec6488dfd4991c68868475afa", "title": "A new weight initialization method for sigmoidal feedforward artificial neural networks", "text": "Initial weight choice has been recognized to be an important aspect of the training methodology for sigmoidal feedforward neural networks. In this paper, a new mechanism for weight initialization is proposed. The mechanism distributes the initial input-to-output weights in such a manner that all weights (including thresholds) leading into a hidden node are uniformly distributed in a region, and the centers of the regions from which the weights are sampled are chosen such that no region overlaps for two distinct hidden nodes. The proposed method is compared against random weight initialization routines on five function approximation tasks using the Resilient Backpropagation (RPROP) algorithm for training. The proposed method is shown to lead to about twice as fast convergence to a pre-specified goal for training as compared to any of the random weight initialization methods. Moreover, it is shown that, at least for these problems, the networks reach deeper minima of the error functional during training and generalize better than networks whose weights were initialized by random weight initialization methods."} {"_id": "3e892a9a23afd4c28c60bbf8a6e5201bf62c1dcd", "title": "Combining qualitative and quantitative sampling, data collection, and analysis techniques in mixed-method studies.", "text": "Researchers have increasingly turned to mixed-method techniques to expand the scope and improve the analytic power of their studies. Yet there is still relatively little direction on and much confusion about how to combine qualitative and quantitative techniques. These techniques are neither paradigm- nor method-linked; researchers' orientations to inquiry and their methodological commitments will influence how they use them. Examples of sampling combinations include criterion sampling from instrument scores, random purposeful sampling, and stratified purposeful sampling. Examples of data collection combinations include the use of instruments for fuller qualitative description, for validation, as guides for purposeful sampling, and as elicitation devices in interviews. Examples of data analysis combinations include interpretively linking qualitative and quantitative data sets and the transformation processes of qualitizing and quantitizing."} {"_id": "12408a6e684f51adf2b19071233afa37a378dee4", "title": "Artistic reality: fast brush stroke stylization for augmented reality", "text": "The goal of augmented reality is to provide the user with a view of the surroundings enriched by virtual objects. Practically all augmented reality systems rely on standard real-time rendering methods for displaying graphical objects. Although such conventional computer graphics algorithms are fast, they often fail to produce sufficiently realistic renderings. Therefore, virtual models can easily be distinguished from the real environment. We have recently proposed a novel approach for generating augmented reality images [4]. Our method is based on the idea of applying stylization techniques for adapting the visual realism of both the camera image and the virtual graphical objects.
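A hedged sketch of the region-based initialization idea described in the weight-initialization abstract above: every hidden node draws its incoming weights uniformly from a small interval around its own centre, with centres spaced so the regions cannot overlap. The grid spacing and interval width below are my illustrative choices, not the paper's exact prescription:

```python
# Sketch: per-hidden-node, non-overlapping uniform weight regions.
import numpy as np

def region_init(n_in, n_hidden, half_width=0.2, spacing=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Distinct centre per hidden node, spaced farther apart than the
    # region width so no two nodes sample from overlapping regions.
    centres = (np.arange(n_hidden) - (n_hidden - 1) / 2) * spacing
    w = np.empty((n_hidden, n_in + 1))        # +1 column for the threshold
    for j, c in enumerate(centres):
        w[j] = rng.uniform(c - half_width, c + half_width, n_in + 1)
    return w

w = region_init(n_in=4, n_hidden=3)
print(w.round(2))          # each row lives in its own disjoint interval
```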
Since both the camera image and the virtual objects are stylized in a corresponding way, they appear very similar. Here, we present a new method for the stylization of augmented reality images. This approach generates a painterly brush stroke rendering. The resulting stylized augmented reality video frames look similar to paintings created in the pointillism style. We describe the implementation of the camera image filter and the non-photorealistic renderer for virtual objects. These components have been newly designed or adapted for this purpose. They are fast enough for generating augmented reality images in near real-time (more than 14 frames per second)."} {"_id": "5bd97fb4ce1b6ebf6be926a1e0512e1666cbe48b", "title": "A Definition of Artificial Intelligence", "text": "In this paper we offer a formal definition of Artificial Intelligence, and this directly gives us an algorithm for construction of this object. Admittedly, this algorithm is useless due to the combinatorial explosion. The main innovation in our definition is that it does not include knowledge as a part of the intelligence. So, according to our definition, a newborn baby is also an Intellect. Here we differ from Turing's definition, which suggests that an Intellect is a person with knowledge gained through the years."} {"_id": "5eb1d160d2be5467aaf08277a02d3159f5fd7a75", "title": "Anisotropic diffusion map based spectral embedding for 3D CAD model retrieval", "text": "In the product life cycle, design reuse can save cost and improve existing products conveniently in most new product development. To retrieve similar models from a big database, most search algorithms convert a CAD model into a shape descriptor and compute the similarity of two models according to a descriptor metric. This paper proposes a new 3D shape matching approach that matches the coordinates directly. It is based on diffusion maps, which integrate random walks and graph spectral analysis to extract shape features embedded in low dimensional spaces; these features are then used to form coordinates for non-linear alignment of different models. These coordinates capture multi-scale properties of the 3D geometric features and show good robustness to noise. The results also show better performance compared to the celebrated Eigenmap approach in 3D model retrieval."} {"_id": "35864d52726d19513bc25f28f072b5c97a719b09", "title": "Know Your Body Through Intrinsic Goals", "text": "The first \"object\" that newborn children play with is their own body. This activity allows them to autonomously form a sensorimotor map of their own body and a repertoire of actions supporting future cognitive and motor development. Here we propose the theoretical hypothesis, operationalized as a computational model, that this acquisition of body knowledge is not guided by random motor-babbling, but rather by autonomously generated goals formed on the basis of intrinsic motivations. Motor exploration leads the agent to discover and form representations of the possible sensory events it can cause with its own actions. When the agent realizes the possibility of improving the competence to re-activate those representations, it is intrinsically motivated to select and pursue them as goals.
The model is based on four components: (1) a self-organizing neural network, modulated by competence-based intrinsic motivations, that acquires abstract representations of experienced sensory (touch) changes; (2) a selector that selects the goal to pursue, and the motor resources to train to pursue it, on the basis of competence improvement; (3) an echo-state neural network that controls and learns, through goal-accomplishment and competence, the agent's motor skills; (4) a predictor of the accomplishment of the selected goals generating the competence-based intrinsic motivation signals. The model is tested as the controller of a simulated simple planar robot composed of a torso and two kinematic 3-DoF 2D arms. The robot explores its body covered by touch sensors by moving its arms. The results, which might be used to guide future empirical experiments, show how the system converges to goals and motor skills allowing it to touch the different parts of its own body, and how the morphology of the body affects the formed goals. The convergence is strongly dependent on competence-based intrinsic motivations affecting not only skill learning and the selection of formed goals, but also the formation of the goal representations themselves."} {"_id": "391c71d926c8bc0ea8dcf2ead05d59ef6a1057bf", "title": "Photo tourism: exploring photo collections in 3D", "text": "We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites."} {"_id": "26bea0df45f41161032f471c76c613a5ad623a7e", "title": "Design for wearability", "text": "Digital technology is constantly improving as information becomes wireless. These advances demand more wearable and mobile form factors for products that access information. A product that is wearable should have wearability. This paper explores the concept of dynamic wearability through design research. Wearability is defined as the interaction between the human body and the wearable object. Dynamic wearability extends that definition to include the human body in motion. Our research has been to locate, understand, and define the spaces on the human body where solid and flexible forms can rest without interfering with fluid human movement. The result is a set of design guidelines embodied in a set of wearable forms.
These wearable forms describe the three-dimensional spaces on the body best suited for comfortable and unobtrusive wearability by design."} {"_id": "dec3d12ef8d1fc3f18572302ff31f74a490a833a", "title": "A New Approach to Quantify Network Security by Ranking of Security Metrics and Considering Their Relationships", "text": "There are several characteristics in computer networks which play important roles in determining the level of network security. These characteristics, known as security metrics, can be applied for security quantification in computer networks. Most of the research in this area has focused on defining new security metrics to improve the quantification process. In this paper, we present a new approach to analyze and quantify network security by ranking security metrics while considering the relationships between them. Our ranking method reveals the importance of each security metric for quantifying security in the network under surveillance. The proposed approach helps network administrators to gain a better insight into the level of network security."} {"_id": "a16fbdb6a5f4f2bc39784348960103d0ca39b9cf", "title": "Is FLIP enough? Or should we use the FLIPPED model instead?", "text": "Yunglung Chen, Yuping Wang, Kinshuk, Nian-Shing Chen Department of Information Management, National Sun Yat-sen University, No. 70, Lienhai Rd., Kaohsiung 80424, Taiwan School of Languages and Linguistics, Griffith University, Nathan, 4111, Australia School of Computing and Information System, Athabasca University, Athabasca, Canada henryylchen@gmail.com, y.wang@griffith.edu.au, kinshuk@athabascau.ca, nschen@mis.nsysu.edu.tw"} {"_id": "c2a7f02cf7a8a39051dd47433d0415547b62eff4", "title": "Step-by-step design and control of LCL filter based three phase grid-connected inverter", "text": "This paper proposes a detailed step-by-step design procedure and control of an LCL filter for a grid-connected three-phase sine PWM voltage source inverter. The goal of the design is to ensure high quality of the grid current as well as to minimize the size of the filter magnetics. In order to ensure unity power factor injection into the grid, a current controller is designed with the constraint that only the inverter side inductor current is sensed. Good agreement between simulation and experimental results verifies the effectiveness of the designed controller and the LCL filter."} {"_id": "50965b84549a1a925499b9ce16bb20409948eb39", "title": "Power Quality Control and Design of Power Converter for Variable-Speed Wind Energy Conversion System with Permanent-Magnet Synchronous Generator", "text": "The control strategy and design of an AC/DC/AC IGBT-PWM power converter for PMSG-based variable-speed wind energy conversion systems (VSWECS) operating in grid/load-connected mode are presented. VSWECS consists of a PMSG connected to an AC-DC IGBT-based PWM rectifier and a DC/AC IGBT-based PWM inverter with an LCL filter. In VSWECS, an AC/DC/AC power converter is employed to convert the variable frequency, variable speed generator output to the fixed frequency, fixed voltage grid. The DC/AC power conversion is carried out using an adaptive neuro-fuzzy controlled inverter located at the output of the controlled AC/DC IGBT-based PWM rectifier. In this study, the focus is on the dynamic performance and power quality of the proposed power converter connected to the grid/load through an output LCL filter. Dynamic modeling and control of the VSWECS with the proposed power converter are performed using MATLAB/Simulink.
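As a small illustration of the kind of step-by-step LCL sizing the filter-design abstract above refers to, the following sketch (illustrative component values and the standard resonance formula only, not the paper's exact procedure) computes the filter resonance frequency and checks the common rule of thumb that it lies between ten times the line frequency and half the switching frequency:

```python
import math

def lcl_resonance_hz(L_inv, L_grid, C_f):
    """Resonance frequency of an LCL filter (inverter-side L, grid-side L,
    filter capacitor C): f_res = (1/2pi) * sqrt((L1 + L2) / (L1 * L2 * C))."""
    return math.sqrt((L_inv + L_grid) / (L_inv * L_grid * C_f)) / (2 * math.pi)

# Hypothetical component values for a small three-phase inverter.
f_line, f_switch = 50.0, 10e3          # Hz
L1, L2, Cf = 2.3e-3, 0.93e-3, 10e-6    # H, H, F
f_res = lcl_resonance_hz(L1, L2, Cf)   # ~1.96 kHz for these values
# Common design constraint: 10*f_line < f_res < f_switch/2.
assert 10 * f_line < f_res < f_switch / 2, f"resonance {f_res:.0f} Hz out of range"
print(f"f_res = {f_res:.0f} Hz")
```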
Simulation results show that the output voltage, power, and frequency of the VSWECS reach desirable operating values in a very short time. In addition, when the PMSG-based VSWECS operates continuously at the 4.5 kHz switching frequency, the voltage THD at the load terminal is 0.00672%."} {"_id": "a0eb9c4b70e7f63fef5faef0ac9282416e1b58a9", "title": "Finding expert users in community question answering", "text": "Community Question Answering (CQA) websites provide a rapidly growing source of information in many areas. This rapid growth, while offering new opportunities, puts forward new challenges. In most CQA implementations there is little effort in directing new questions to the right group of experts. This means that experts are not provided with questions matching their expertise, and therefore new matching questions may be missed and not receive a proper answer. We focus on finding experts for a newly posted question. We investigate the suitability of two statistical topic models for solving this issue and compare these methods against more traditional Information Retrieval approaches. We show that for a dataset constructed from the Stackoverflow website, these topic models outperform other methods in retrieving a candidate set of best experts for a question. We also show that the Segmented Topic Model gives consistently better performance compared to the Latent Dirichlet Allocation Model."} {"_id": "77aa875c17f428e7f78f5bfb64033514a820c808", "title": "Impact of rank optimization on downlink non-orthogonal multiple access (NOMA) with SU-MIMO", "text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward the 5th generation (5G) mobile communication systems. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in long term evolution (LTE)/LTE-Advanced systems. It has been shown that NOMA combined with SU-MIMO techniques can achieve further system performance improvement. In this paper, we focus on the impact of rank optimization on the performance of NOMA with SU-MIMO in the downlink. Firstly, a geometry-based rank adjustment method is studied. Secondly, an enhanced feedback method for rank adjustment is discussed. The simulation results show that the performance of NOMA improves with the proposed two rank adjustment methods. Compared to an orthogonal access system, large performance gains can be achieved for NOMA, which are about 23% for cell average throughput and 33% for cell-edge user throughput."} {"_id": "15252afe259894bd3d0f306f29eca5e90ab05eac", "title": "On Bias, Variance, 0/1\u2014Loss, and the Curse-of-Dimensionality", "text": "The classification problem is considered in which an output variable y assumes discrete values with respective probabilities that depend upon the simultaneous values of a set of input variables x = {x_1, ..., x_n}. At issue is how error in the estimates of these probabilities affects classification error when the estimates are used in a classification rule. These effects are seen to be somewhat counterintuitive in both their strength and nature. In particular, the bias and variance components of the estimation error combine to influence classification in a very different way than with squared error on the probabilities themselves. Certain types of (very high) bias can be canceled by low variance to produce accurate classification.
This can dramatically mitigate the effect of the bias associated with some simple estimators like \u201cnaive\u201d Bayes, and the bias induced by the curse-of-dimensionality on nearest-neighbor procedures. This helps explain why such simple methods are often competitive with, and sometimes superior to, more sophisticated ones for classification, and why \u201cbagging/aggregating\u201d classifiers can often improve accuracy. These results also suggest simple modifications to these procedures that can (sometimes dramatically) further improve their classification performance."} {"_id": "649d0aa1be51cc545de52fc584640501efdcf68b", "title": "Semantic Segmentation for High Spatial Resolution Remote Sensing Images Based on Convolution Neural Network and Pyramid Pooling Module", "text": "Semantic segmentation provides a practical way to segment remotely sensed images into multiple ground objects simultaneously, which can be potentially applied to multiple remote sensing related tasks. Current classification algorithms for remotely sensed images are mostly limited by varying imaging conditions; multiple ground objects are difficult to separate from each other due to high intraclass spectral variance and interclass spectral similarity. In this study, we propose an end-to-end framework to semantically segment high-resolution aerial images without postprocessing to refine the segmentation results. The framework provides a pixel-wise segmentation result, comprising a convolutional neural network structure and a pyramid pooling module, which aims to extract feature maps at multiple scales. The proposed model is applied to the ISPRS Vaihingen benchmark dataset from the ISPRS 2D Semantic Labeling Challenge. Its segmentation results are compared with those of the previous state-of-the-art methods UZ_1 and UPB, and of three other methods that segment images into objects of all the classes (including clutter/background) based on true orthophoto tiles, achieving, to the best of our knowledge, the highest overall accuracy of 87.8% among the published performances. The results validate the efficiency of the proposed model in segmenting multiple ground objects from remotely sensed images simultaneously."} {"_id": "86c9a59c7c4fcf0d10dbfdb6afd20dd3c5c1426c", "title": "A Multichannel Approach to Fingerprint Classification", "text": "Fingerprint classification provides an important indexing mechanism in a fingerprint database. An accurate and consistent classification can greatly reduce fingerprint matching time for a large database. We present a fingerprint classification algorithm which is able to achieve an accuracy better than previously reported in the literature. We classify fingerprints into five categories: whorl, right loop, left loop, arch, and tented arch. The algorithm uses a novel representation (FingerCode) and is based on a two-stage classifier to make a classification. It has been tested on 4,000 images in the NIST-4 database. For the five-class problem, a classification accuracy of 90 percent is achieved (with a 1.8 percent rejection during the feature extraction phase). For the four-class problem (arch and tented arch combined into one class), we are able to achieve a classification accuracy of 94.8 percent (with 1.8 percent rejection).
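The bias/variance abstract above makes a point that is easy to verify numerically: under 0/1 loss, a badly biased probability estimate still classifies correctly as long as it stays on the correct side of 1/2. A minimal Monte Carlo sketch (illustrative numbers, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.9                      # true P(y=1|x); the Bayes rule predicts 1
n_trials = 100_000

# A heavily biased but low-variance estimator of p_true ...
biased = rng.normal(loc=0.6, scale=0.02, size=n_trials)    # bias = -0.3
# ... versus an unbiased but high-variance estimator.
unbiased = rng.normal(loc=p_true, scale=0.45, size=n_trials)

# Under 0/1 loss, only the side of the 0.5 threshold matters.
print("biased estimator agrees with Bayes rule:  ", np.mean(biased > 0.5))   # ~1.0
print("unbiased estimator agrees with Bayes rule:", np.mean(unbiased > 0.5)) # ~0.8
```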
By incorporating a reject option at the classifier, the classification accuracy can be increased to 96 percent for the five-class classification task, and to 97.8 percent for the four-class classification task after a total of 32.5 percent of the images are rejected."} {"_id": "6b4fe4aa4d66fecc7b2869569002714d91d0b3f7", "title": "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.", "text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteridge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical \u2026"} {"_id": "a2ed347d010aeae4ddd116676bdea2e77d942f6e", "title": "Fingerprint classification", "text": "A fingerprint classification algorithm is presented in this paper. Fingerprints are classified into five categories: arch, tented arch, left loop, right loop and whorl. The algorithm extracts singular points (cores and deltas) in a fingerprint image and performs classification based on the number and locations of the detected singular points. The classifier is invariant to rotation, translation and small amounts of scale changes.
The classifier is rule-based, where the rules are generated independent of a given data set. The classifier was tested on 4000 images in the NIST-4 database and on 5400 images in the NIST-9 database. For the NIST-4 database, classification accuracies of 85.4% for the five-class problem and 91.1% for the four-class problem (with arch and tented arch placed in the same category) were achieved. Using a reject option, the four-class classification error can be reduced to less than 6% with 10% of fingerprint images rejected. Similar classification performance was obtained on the NIST-9 database."} {"_id": "b07ce649d6f6eb636872527104b0209d3edc8188", "title": "Pattern classification and scene analysis", "text": ""} {"_id": "b6308422bc6fe0babd8fe2f04571449a0b016349", "title": "Adaptive flow orientation-based feature extraction in fingerprint images", "text": "A reliable method for extracting structural features from fingerprint images is presented. Viewing the fingerprint as a textured image, an orientation flow field is computed. The remaining stages of the algorithm use the flow field to design adaptive filters for the input image. To accurately locate ridges, a waveform projection-based ridge segmentation algorithm is used. The ridge skeleton image is obtained and smoothed using morphological operators to detect the features. A large number of spurious features are deleted from the detected set of minutiae by a postprocessing stage. The performance of the proposed algorithm has been evaluated by computing a \"goodness index\" (GI) which compares the results of automatic extraction with manually extracted ground truth. The significance of the observed GI values is determined by comparing the index for a set of fingerprints against the GI values obtained under a baseline distribution. The detected features are observed to be reliable and accurate. Keywords: fingerprints; feature extraction; texture; flow orientation; minutiae; segmentation; skeleton."} {"_id": "410d48ddd9173049d0b3c9a2a6dbf3b606842262", "title": "Automated Static Code Analysis for Classifying Android Applications Using Machine Learning", "text": "In this paper we apply Machine Learning (ML) techniques on static features that are extracted from Android's application files for the classification of the files. Features are extracted from Android\u2019s Java byte-code (i.e., .dex files) and other file types such as XML files. Our evaluation focused on classifying two types of Android applications: tools and games. Successful differentiation between games and tools is expected to provide positive indication about the ability of such methods to learn and model Android benign applications and potentially detect malware files. The results of an evaluation, performed using a test collection comprising 2,285 Android .apk files, indicate that features extracted statically from .apk files, coupled with ML classification algorithms, can provide good indication about the nature of an Android application without running the application, and may assist in detecting malicious applications. This method can be used for rapid examination of Android .apks and can assist in flagging suspicious applications."}
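The rule-based fingerprint classifier above maps counts of detected singular points to classes. A hedged sketch of that kind of rule table (illustrative rules only, not the paper's exact ones; core/delta detection is assumed to be done elsewhere):

```python
def classify_by_singular_points(num_cores, num_deltas, core_delta_side=None):
    """Toy rule table mapping singular-point counts to a fingerprint class.
    core_delta_side: 'left'/'right', which side of the core the delta lies
    on; it is only consulted for loops."""
    if num_cores == 0 and num_deltas == 0:
        return "arch"
    if num_cores == 1 and num_deltas == 0:
        return "tented arch"
    if num_cores == 2 or num_deltas == 2:
        return "whorl"
    if num_cores == 1 and num_deltas == 1:
        return "left loop" if core_delta_side == "right" else "right loop"
    return "unknown"  # reject ambiguous configurations

print(classify_by_singular_points(1, 1, core_delta_side="right"))  # left loop
```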
{"_id": "3337976b072405933a02f7d912d2b6432de38feb", "title": "Automated Text Summarization and the SUMMARIST System", "text": "This paper consists of three parts: a preliminary typology of summaries in general; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system being built at ISI; and a discussion of three methods to evaluate summaries. 1. THE NATURE OF SUMMARIES Early experimentation in the late 1950's and early 60's suggested that text summarization by computer was feasible though not straightforward (Luhn, 59; Edmundson, 68). The methods developed then were fairly unsophisticated, relying primarily on surface level phenomena such as sentence position and word frequency counts, and focused on producing extracts (passages selected from the text, reproduced verbatim) rather than abstracts (interpreted portions of the text, newly generated). After a hiatus of some decades, the growing presence of large amounts of online text--in corpora and especially on the Web--renewed the interest in automated text summarization. During these intervening decades, progress in Natural Language Processing (NLP), coupled with great increases of computer memory and speed, made possible more sophisticated techniques, with very encouraging results. In the late 1990's, some relatively small research investments in the US (not more than 10 projects, including commercial efforts at Microsoft, Lexis-Nexis, Oracle, SRA, and TextWise, and university efforts at CMU, NMSU, UPenn, and USC/ISI) over three or four years have produced several systems that exhibit potential marketability, as well as several innovations that promise continued improvement. In addition, several recent workshops, a book collection, and several tutorials testify that automated text summarization has become a hot area. However, when one takes a moment to study the various systems and to consider what has really been achieved, one cannot help being struck by their underlying similarity, by the narrowness of their focus, and by the large numbers of unknown factors that surround the problem. For example, what precisely is a summary? No-one seems to know exactly. In our work, we use summary as the generic term and define it as follows: A summary is a text that is produced out of one or more (possibly multimedia) texts, that contains (some of) the same information of the original text(s), and that is no longer than half of the original text(s). To clarify the picture a little, we follow and extend (Sp\u00e4rck Jones, 97) by identifying the following aspects of variation. Any summary can be characterized by (at least) three major classes of characteristics: Input: characteristics of the source text(s). Source size: single-document vs. multi-document: A single-document summary derives from a single input text (though the summarization process itself may employ information compiled earlier from other texts). A multi-document summary is one text that covers the content of more than one input text, and is usually used only when the input texts are thematically related. Specificity: domain-specific vs. general: When the input texts all pertain to a single domain, it may be appropriate to apply domain-specific summarization techniques, focus on specific content, and output specific formats, compared to the general case.
A domain-specific summary derives from input text(s) whose theme(s) pertain to a single restricted domain. As such, it can assume less term ambiguity, idiosyncratic word and grammar usage, specialized formatting, etc., and can reflect them in the summary."} {"_id": "f0acbfdde58c297ff1705be9e9f8e119f4ba38cc", "title": "Asymmetrical duty cycle control with phase limit of LLC resonant inverter for an induction furnace", "text": "This paper proposes a power control scheme for the LLC resonant inverter of an induction furnace using asymmetrical duty cycle control (ADC) with a phase limit to guarantee zero voltage switching (ZVS) operation and to protect the switching devices from spike current during operation. The output power can be controlled simply through the duty cycle of the gate signals of a full-bridge inverter. With the phase limit control, non-ZVS operation and spike current, caused by a change of duty cycle at fixed frequency or by the load reaching its Curie temperature, can be eliminated. Theoretical and simulation analyses of the proposed method have been carried out. Experimental results of heating 300 g of tin from room temperature until it melts at 232 \u00b0C are provided."} {"_id": "df2f26f96a514956b34ceaee8189e7f54160bee0", "title": "Defect Detection in Patterned Fabrics Using Modified Local Binary Patterns", "text": "Local binary patterns (LBP) are among the features that have been used for texture classification. In this paper, a method based on these features is proposed for detecting defects in patterned fabrics. In the training stage, the LBP operator is first applied to all rows (columns) of a defect-free fabric sample, pixel by pixel, and the reference feature vector is computed. Then this image is divided into windows and the LBP operator is applied to each row (column) of these windows. Based on comparison with the reference feature vector, a suitable threshold for defect-free windows is found. In the detection stage, a test image is divided into windows and, using the threshold, defective windows can be detected. The proposed method is simple and gray-scale invariant. Because of its simplicity, online implementation is possible as well."} {"_id": "25126128faa023d1a65a47abeb8c33219cc8ca5c", "title": "Less is More: Nystr\u00f6m Computational Regularization", "text": "We study Nystr\u00f6m type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nystr\u00f6m Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls at the same time regularization and computations. Extensive experimental analysis shows that the considered approach achieves state of the art performances on benchmark large scale datasets."} {"_id": "b2719ac9e43cd491608b2e8282736ccac1915671", "title": "Power consumption estimation in CMOS VLSI chips", "text": "Power consumption from logic circuits, interconnections, clock distribution, on-chip memories, and off-chip driving in CMOS VLSI is estimated. Estimation methods are demonstrated and verified. An estimation tool is created. The power consumption distribution among interconnections, clock distribution, logic gates, memories, and off-chip driving is analyzed through examples.
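To make the Nyström idea above concrete, here is a minimal sketch (the standard textbook construction, not the paper's incremental algorithm) of Nyström kernel regularized least squares with a Gaussian kernel, where the m subsampled landmarks control both regularization and computation:

```python
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_krls_fit(X, y, m=50, lam=1e-3, sigma=1.0, rng=None):
    """Nystrom KRLS: restrict the solution to m random landmark points,
    solving (K_nm^T K_nm + lam * n * K_mm) a = K_nm^T y, an m x m linear
    system instead of the full n x n kernel problem."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    Xm = X[idx]
    K_nm = gauss_kernel(X, Xm, sigma)
    K_mm = gauss_kernel(Xm, Xm, sigma)
    a = np.linalg.solve(K_nm.T @ K_nm + lam * len(X) * K_mm
                        + 1e-10 * np.eye(len(Xm)),   # tiny jitter for stability
                        K_nm.T @ y)
    return Xm, a

def nystrom_krls_predict(Xm, a, X_new, sigma=1.0):
    return gauss_kernel(X_new, Xm, sigma) @ a
```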
Comparisons are made between cell library, gate array, and full custom designs. Comparisons between static and dynamic logic are also given. Results show that the power consumption of all interconnections and off-chip driving can be up to 20% and 65% of the total power consumption, respectively. Compared to cell library designs, gate array designed chips consume about 10% more power, and the power reduction in full custom designed chips could be 15%."} {"_id": "b0f70ce823cc91b6a5fe3297b98f5fdad4796bab", "title": "A dynamic instruction set computer", "text": "One of the key contributions of this paper is the idea of treating the two-dimensional FPGA area as a one-dimensional array of custom instructions, which the paper refers to as linear hardware space. Each custom instruction occupies the full width of the array but can occupy different heights to allow for instructions of varying complexity. Instructions also have a common interface that includes pass-thrus which connect to the previous and next instructions in the array. This communication interface greatly simplifies the problem of relocating a custom instruction to a free area, allowing other instructions to potentially remain in-place. The communication paths for an instruction are always at the same relative positions, and run-time CAD operations (and their corresponding overheads) are completely avoided. Thus, the linear hardware space reduces the overheads and simplifies the process of run-time reconfiguration of custom processor instructions."} {"_id": "12e54895e1d3da2fb75a6bb8f84c8d1f5108d632", "title": "Mechanical Design and Modeling of an Omni-directional RoboCup Player", "text": "Abstract. This paper covers the mechanical design process of an omni-directional mobile robot developed for the Ohio University RoboCup team player. It covers each design iteration, detailing what we learned from each phase. In addition, the kinematics of the final design is derived, and its inverse Jacobian matrix is presented. The dynamic equations of motion for the final design are derived in symbolic form, assuming that no slip occurs on the wheel in the spin direction. Finally, a simulation example demonstrates simple independent PID wheel control for the complex coupled nonlinear dynamic system."} {"_id": "c436a8a6fa644e3d0264ef1b1b3a838e925a7e32", "title": "Analysis on botnet detection techniques", "text": "Botnet detection plays an important role in network security. A botnet is a collection of compromised computers, called bots. There are many detection techniques available for detecting the presence of bots in a network. Network-based detection is one of the most efficient methods for detecting bots.
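For the omni-directional RoboCup abstract above, the standard inverse-kinematics relation for a three-wheel omni-directional base (a generic textbook formulation; the paper's own matrices are not reproduced here) maps body velocity to wheel speeds:

```python
import numpy as np

def omni3_inverse_kinematics(vx, vy, omega, R=0.1, r=0.03):
    """Wheel angular speeds (rad/s) for a 3-wheel omni base: wheels mounted
    at 0/120/240 degrees on a body of radius R (m), wheel radius r (m).
    Row i is wheel i's tangential drive direction plus the R*omega rotation
    term; the stacked rows form the inverse Jacobian."""
    angles = np.deg2rad([0.0, 120.0, 240.0])
    J = np.column_stack([-np.sin(angles), np.cos(angles), np.full(3, R)])
    return J @ np.array([vx, vy, omega]) / r

# Pure rotation spins all three wheels at the same speed R*omega/r.
print(omni3_inverse_kinematics(0.0, 0.0, 1.0))
```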
This paper reviews four different botnet detection techniques and compares them."} {"_id": "2f5f766944162077579091a40315ec228d9785ba", "title": "Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes.", "text": "Importance\nA deep learning system (DLS) is a machine learning technology with potential for screening diabetic retinopathy and related eye diseases.\n\n\nObjective\nTo evaluate the performance of a DLS in detecting referable diabetic retinopathy, vision-threatening diabetic retinopathy, possible glaucoma, and age-related macular degeneration (AMD) in community and clinic-based multiethnic populations with diabetes.\n\n\nDesign, Setting, and Participants\nDiagnostic performance of a DLS for diabetic retinopathy and related eye diseases was evaluated using 494\u202f661 retinal images. A DLS was trained for detecting diabetic retinopathy (using 76\u202f370 images), possible glaucoma (125\u202f189 images), and AMD (72\u202f610 images), and performance of the DLS was evaluated for detecting diabetic retinopathy (using 112\u202f648 images), possible glaucoma (71\u202f896 images), and AMD (35\u202f948 images). Training of the DLS was completed in May 2016, and validation of the DLS was completed in May 2017 for detection of referable diabetic retinopathy (moderate nonproliferative diabetic retinopathy or worse) and vision-threatening diabetic retinopathy (severe nonproliferative diabetic retinopathy or worse) using a primary validation data set in the Singapore National Diabetic Retinopathy Screening Program and 10 multiethnic cohorts with diabetes.\n\n\nExposures\nUse of a deep learning system.\n\n\nMain Outcomes and Measures\nArea under the receiver operating characteristic curve (AUC) and sensitivity and specificity of the DLS with professional graders (retinal specialists, general ophthalmologists, trained graders, or optometrists) as the reference standard.\n\n\nResults\nIn the primary validation dataset (n\u2009=\u200914\u202f880 patients; 71\u202f896 images; mean [SD] age, 60.2 [2.2] years; 54.6% men), the prevalence of referable diabetic retinopathy was 3.0%; vision-threatening diabetic retinopathy, 0.6%; possible glaucoma, 0.1%; and AMD, 2.5%. The AUC of the DLS for referable diabetic retinopathy was 0.936 (95% CI, 0.925-0.943), sensitivity was 90.5% (95% CI, 87.3%-93.0%), and specificity was 91.6% (95% CI, 91.0%-92.2%). For vision-threatening diabetic retinopathy, AUC was 0.958 (95% CI, 0.956-0.961), sensitivity was 100% (95% CI, 94.1%-100.0%), and specificity was 91.1% (95% CI, 90.7%-91.4%). For possible glaucoma, AUC was 0.942 (95% CI, 0.929-0.954), sensitivity was 96.4% (95% CI, 81.7%-99.9%), and specificity was 87.2% (95% CI, 86.8%-87.5%). For AMD, AUC was 0.931 (95% CI, 0.928-0.935), sensitivity was 93.2% (95% CI, 91.1%-99.8%), and specificity was 88.7% (95% CI, 88.3%-89.0%). For referable diabetic retinopathy in the 10 additional datasets, the AUC range was 0.889 to 0.983 (n\u2009=\u200940\u202f752 images).\n\n\nConclusions and Relevance\nIn this evaluation of retinal images from multiethnic cohorts of patients with diabetes, the DLS had high sensitivity and specificity for identifying diabetic retinopathy and related eye diseases.
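The DLS abstract above reports AUC, sensitivity, and specificity against a grader reference standard. A minimal sketch of how those three numbers are computed from model scores and binary 0/1 label arrays (generic definitions, nothing specific to the paper's pipeline):

```python
import numpy as np

def auc_rank(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) identity; ties ignored."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sensitivity_specificity(scores, labels, threshold=0.5):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a threshold."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)
```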
Further research is necessary to evaluate the applicability of the DLS in health care settings and the utility of the DLS to improve vision outcomes."} {"_id": "a6b70f80cddffd78780960c542321ddfe3c412f3", "title": "Suicide and Suicide Risk in Lesbian, Gay, Bisexual, and Transgender Populations: Review and Recommendations", "text": "Despite strong indications of elevated risk of suicidal behavior in lesbian, gay, bisexual, and transgender people, limited attention has been given to research, interventions or suicide prevention programs targeting these populations. This article is a culmination of a three-year effort by an expert panel to address the need for better understanding of suicidal behavior and suicide risk in sexual minority populations, and stimulate the development of needed prevention strategies, interventions and policy changes. This article summarizes existing research findings, and makes recommendations for addressing knowledge gaps and applying current knowledge to relevant areas of suicide prevention practice."} {"_id": "791351badc12101a2db96649e127ac2f3993c70f", "title": "Watch What You Wear: Preliminary Forensic Analysis of Smart Watches", "text": "This work presents a preliminary forensic analysis of two popular smart watches, the Samsung Gear 2 Neo and the LG G. These wearable computing devices have the form factor of watches and sync with smart phones to display notifications, track footsteps and record voice messages. We posit that as smart watches are adopted by more users, the potential for them becoming a haven for digital evidence will increase, thus providing utility for this preliminary work. In our work, we examined the forensic artifacts that are left on a Samsung Galaxy S4 Active phone that was used to sync with the Samsung Gear 2 Neo watch and the LG G watch. We further outline a methodology for physically acquiring data from the watches after gaining root access to them. Our results show that we can recover a swath of digital evidence directly from the watches when compared to the data on the phone that is synced with the watches. Furthermore, to root the LG G watch, the watch has to be reset to its factory settings, which is alarming because the process may delete data of forensic relevance. Although this method is forensically intrusive, it may be used for acquiring data from already rooted LG watches. It is our observation that the data at the core of the functionality of at least the two tested smart watches (messages, health and fitness data, e-mails, contacts, events and notifications) are accessible directly from the acquired images of the watches, which affirms our claim that the forensic value of evidence from smart watches is worthy of further study and should be investigated both at a high level and with greater specificity and granularity."} {"_id": "a7fbd785b5fa441b5261913e93966ba395ae2244", "title": "Design of Broadband Constant-Beamwidth Conical Corrugated-Horn Antennas [Antenna Designer's Notebook]", "text": "In this paper, a new design procedure is proposed for the design of wideband constant-beamwidth conical corrugated-horn antennas, with minimum design and construction complexity. The inputs to the procedure are the operating frequency band, the required minimum beamwidth in the entire frequency band, and the frequency at which the maximum gain is desired to occur. Based on these values, the procedure gives a relatively good design with a relative bandwidth of up to 2.5:1.
Based on the proposed procedure, a corrugated-horn antenna with a constant beamwidth over the frequencies of 8 to 18 GHz was designed and simulated using commercial software. The designed antenna was also constructed, and its electromagnetic performance was measured. The measured results of the constructed prototype antenna confirmed the simulation results and satisfied the design requirements, validating the proposed design procedure."} {"_id": "8d93c6b66a114e265942b000903c9244087bd9f6", "title": "I See What You Say (ISWYS): Arabic lip reading system", "text": "The ability to communicate easily with everyone is a blessing that people with hearing impairment do not have. Around hearing individuals, they must rely completely on their vision to laboriously read lips. This paper proposes a solution for this problem: ISWYS (I See What You Say) is a research-oriented speech recognition system for the Arabic language that interprets lip movements into readable text. It is accomplished by analyzing a video of lip movements that correspond to utterances, and then converting it to readable characters using video analysis and motion estimation. Our algorithm divides the video into n frames and generates n-1 difference images by taking the difference between consecutive frames. Then, video features are extracted and used by our error function, which achieved recognition of approximately 70%."} {"_id": "b9961cf63a8bff98f9a5086651d3c9c0c847d7a1", "title": "A Novel BiQuad Antenna for 2.4 GHz Wireless Link Application: A Proposed Design", "text": "An ISM-band (2.4 GHz) design for a biquad antenna with a reflector base is presented in this paper for ISM-band point-to-point link applications. This proposed design can be used for reception as well as transmission in Wi-Fi, WLAN, Bluetooth and even Zigbee links. The proposed antenna consists of two squares of the same size of 1\u20444 wavelength as a radiating element and a metallic plate or grid as a reflector. This antenna has a beamwidth of about 70 degrees and a gain in the order of 10-12 dBi. A prototype of this antenna has been designed and constructed. A parametric study is performed to understand the characteristics of the proposed antenna. Good antenna performance, such as radiation patterns and antenna gains over the operating bands, has been observed; the simulated peak gain of the antenna is 10.7 dBi at 2439 MHz. The simulated return loss is -35 dB, whereas the simulated SWR is 1.036 over the operating bands. The biquad antenna is simple to build and offers good directivity and gain for point-to-point communications [1]. It consists of two squares of the same size of 1\u20444 wavelength as a radiating element and a metallic plate or grid as a reflector. It can be used as a stand-alone antenna or as a feeder for a parabolic dish [2]. The polarization is such that, looking at the antenna from the front, if the squares are placed side by side the polarization is vertical. The element is made from a length of 2 mm thick copper wire, bent into the appropriate shape. Note that the length of each \"side\" should be as close to 31 mm as possible, when measured from center to center of the wire [4-5].
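A quick check of the 31 mm figure in the biquad abstract above (simple free-space quarter-wavelength arithmetic, independent of the paper):

```python
C = 299_792_458.0          # speed of light, m/s

def quarter_wave_mm(freq_hz):
    """Free-space quarter wavelength in millimetres."""
    return C / freq_hz / 4 * 1000

# At 2.44 GHz the quarter wavelength is ~30.7 mm, matching the
# "close to 31 mm" element side quoted in the abstract.
print(f"{quarter_wave_mm(2.44e9):.1f} mm")
```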
The details of the proposed antenna design are described in the paper, and simulated results are presented and discussed in the following sections."} {"_id": "98e1c84d74a69349f6f7502763958a757ec7f416", "title": "Regulation of OsSPL14 by OsmiR156 defines ideal plant architecture in rice", "text": "Increasing crop yield is a major challenge for modern agriculture. The development of new plant types, which is known as ideal plant architecture (IPA), has been proposed as a means to enhance rice yield potential over that of existing high-yield varieties. Here, we report the cloning and characterization of a semidominant quantitative trait locus, IPA1 (Ideal Plant Architecture 1), which profoundly changes rice plant architecture and substantially enhances rice grain yield. The IPA1 quantitative trait locus encodes OsSPL14 (SQUAMOSA PROMOTER BINDING PROTEIN-LIKE 14) and is regulated by microRNA (miRNA) OsmiR156 in vivo. We demonstrate that a point mutation in OsSPL14 perturbs OsmiR156-directed regulation of OsSPL14, generating an 'ideal' rice plant with a reduced tiller number, increased lodging resistance and enhanced grain yield. Our study suggests that OsSPL14 may help improve rice grain yield by facilitating the breeding of new elite rice varieties."} {"_id": "43d693936dcb0c582743ed30bdf505faa4dce8ec", "title": "Introduction to smart learning analytics: foundations and developments in video-based learning", "text": "Smart learning has become a new term to describe technological and social developments (e.g., Big and Open Data, the Internet of Things, RFID, and NFC) that enable effective, efficient, engaging and personalized learning. Collecting and combining learning analytics coming from different channels can clearly provide valuable information in designing and developing smart learning. Although the potential of learning analytics to enable smart learning is very promising, it remains an under-investigated and even ill-defined concept. The paper defines the subset of learning analytics that focuses on supporting the features and the processes of smart learning, under the term Smart Learning Analytics. This is followed by a brief discussion of the prospects and drawbacks of Smart Learning Analytics and their recent foundations and developments in the area of Video-Based Learning. Drawing from our experience with the recent international workshops in Smart Environments and Analytics in Video-Based Learning, we present the state-of-the-art developments as well as four selected contributions."} {"_id": "4fbb0656d67a67686ebbad3023c71a59856b7ae7", "title": "Machine Learning for Semantic Parsing in Review", "text": "Spoken language understanding (SLU), and more specifically semantic parsing, is an indispensable task in every speech-enabled application. In this survey, we review the current research on SLU and semantic parsing with emphasis on machine learning techniques used for these tasks. Observing the current trends in semantic parsing, we conclude our discussion by suggesting some of the most promising future research trends."} {"_id": "e17da8f078989d596878d7e1ea36d10e879feb64", "title": "Is a picture worth a thousand words? A Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce", "text": "Classifying products precisely and efficiently is a major challenge in modern e-commerce.
The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision-level fusion approach for multi-modal product classification based on text and image neural network classifiers. We train input-specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture, and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves classification accuracy over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on the image-text fusion that characterizes e-commerce businesses, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc."} {"_id": "18cd20f0605629e0cbb714a900511c9b68233400", "title": "Balancing Performance, Energy, and Quality in Pervasive Computing", "text": "We describe Spectra, a remote execution system for battery-powered clients used in pervasive computing. Spectra enables applications to combine the mobility of small devices with the greater processing power of static compute servers. Spectra is self-tuning: it monitors both application resource usage and the availability of resources in the environment, and dynamically determines how and where to execute application components. In making this determination, Spectra balances the competing goals of performance, energy conservation, and application quality. We have validated Spectra\u2019s approach on the Compaq Itsy v2.2 and IBM ThinkPad 560X using a speech recognizer, a document preparation system, and a natural language translator. Our results confirm that Spectra almost always selects the best execution plan, and that its few suboptimal choices are very close to optimal."} {"_id": "179634b2986fe4bbb2ca4714aec588012b66c231", "title": "Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.", "text": "Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationship between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward out groups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to and distinct from explicit ethnocentrism."} {"_id": "e32acd4b0eea757df6e62a1e474bf1153ee98e09", "title": "Harlequin fetus.", "text": "We report a case of harlequin fetus born to consanguineous parents. She had the typical skin manifestations of thick armour-like scales with fissures, complete ectropion and eclabium, atrophic and crumpled ears, and swollen extremities with gangrenous digits.
Supportive treatment was given, but the neonate died on the 4th day."} {"_id": "dbbfc1e3e356dc3312a378e9a620e46c2abc5cc2", "title": "Efficient Clustering-Based Outlier Detection Algorithm for Dynamic Data Stream", "text": "Anomaly detection is currently an important and active research problem in many fields and is involved in numerous applications. Most of the existing methods are based on distance measures, but for data streams these methods are not very efficient from a computational point of view. Because memory resources are limited compared to the huge volume of the stream, most existing work on outlier detection in data streams declares a point an outlier/inlier as soon as it arrives; declaring an outlier on arrival, however, can often lead to a wrong decision, because of the dynamic nature of the incoming data. In this paper we introduce a clustering-based approach which divides the stream into chunks and clusters each chunk using k-means into a fixed number of clusters. Instead of keeping only summary information, as is often done when clustering data streams, we keep the candidate outliers and the mean value of every cluster for the next fixed number of stream chunks, to make sure that the detected candidate outliers are real outliers. By employing the mean values of the clusters of the previous chunk together with the mean values of the current chunk, we better determine the outlierness of data stream objects. Several experiments on different datasets confirm that our technique can find outliers more accurately, and with lower computational cost, than the existing distance-based approaches to outlier detection in data streams."} {"_id": "645d9e7e5e3c5496f11e0e303dc4cc1395109773", "title": "Performance modeling for systematic performance tuning", "text": "The performance of parallel scientific applications depends on many factors which are determined by the execution environment and the parallel application. Especially on large parallel systems, it is too expensive to explore the solution space with series of experiments. Deriving analytical models for applications and platforms allows estimating and extrapolating their execution performance, bottlenecks, and the potential impact of optimization options. We propose to use such \"performance modeling\" techniques beginning from the application design process throughout the whole software development cycle and also during the lifetime of supercomputer systems. Such models help to guide supercomputer system design and re-engineering efforts to adapt applications to changing platforms, and allow users to estimate costs to solve a particular problem. Models can often be built with the help of well-known performance profiling tools. We discuss how we successfully used modeling throughout the proposal, initial testing, and beginning deployment phase of the Blue Waters supercomputer system."} {"_id": "16302319d910a1da77656133727f081c65995635", "title": "Programming by Feedback", "text": "This paper advocates a new ML-based programming framework, called Programming by Feedback (PF), which involves a sequence of interactions between the active computer and the user. The latter only provides preference judgments on pairs of solutions supplied by the active computer. The active computer involves two components: the learning component estimates the user\u2019s utility function and accounts for the user\u2019s (possibly limited) competence; the optimization component explores the search space and returns the most appropriate candidate solution.
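A compact sketch of the chunked k-means scheme described in the stream-outlier abstract above (an illustrative simplification under assumptions: Euclidean distance, a fixed distance-based cutoff for candidate outliers, and scikit-learn's KMeans; the paper's exact confirmation rule is not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans

def stream_outliers(stream, chunk_size=500, k=5, factor=3.0):
    """Cluster each chunk with k-means; points unusually far from their
    centroid become candidate outliers, confirmed against the next chunk's
    centroids before being reported."""
    candidates = []   # points awaiting confirmation by the next chunk
    outliers = []
    for start in range(0, len(stream), chunk_size):
        chunk = stream[start:start + chunk_size]
        km = KMeans(n_clusters=k, n_init=10).fit(chunk)
        d = np.linalg.norm(chunk - km.cluster_centers_[km.labels_], axis=1)
        cutoff = factor * d.mean()
        # Confirm previous candidates: still far from every current centroid?
        for p in candidates:
            if np.linalg.norm(km.cluster_centers_ - p, axis=1).min() > cutoff:
                outliers.append(p)
        candidates = list(chunk[d > cutoff])
    return outliers + candidates   # trailing candidates are left unconfirmed
```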
A proof of principle of the approach is proposed, showing that PF requires only a handful of interactions in order to solve some discrete and continuous benchmark problems."} {"_id": "3d16326c34fdbf397876fcc173702846ae9f5fc0", "title": "Forwarding in a content-based network", "text": "This paper presents an algorithm for content-based forwarding, an essential function in content-based networking. Unlike in traditional address-based unicast or multicast networks, where messages are given explicit destination addresses, the movement of messages through a content-based network is driven by predicates applied to the content of the messages. Forwarding in such a network amounts to evaluating the predicates stored in a router's forwarding table in order to decide to which neighbor routers the message should be sent. We are interested in finding a forwarding algorithm that can make this decision as quickly as possible in situations where there are numerous, complex predicates and high volumes of messages. We present such an algorithm and give the results of studies evaluating its performance."} {"_id": "418750bf838ed1417a4b65ebf292d804371e3f67", "title": "Probabilistic in-network caching for information-centric networks", "text": "In-network caching necessitates the transformation of centralised operations of traditional, overlay caching techniques to a decentralised and uncoordinated environment. Given that caching capacity in routers is relatively small in comparison to the amount of forwarded content, a key aspect is balanced distribution of content among the available caches. In this paper, we are concerned with decentralised, real-time distribution of content in router caches. Our goal is to reduce caching redundancy and, in turn, make more efficient utilisation of available cache resources along a delivery path.\n Our in-network caching scheme, called ProbCache, approximates the caching capability of a path and caches contents probabilistically in order to: i) leave caching space for other flows sharing (part of) the same path, and ii) fairly multiplex contents of different flows among caches of a shared path.\n We compare our algorithm against universal caching and against schemes proposed in the past for Web-Caching architectures, such as Leave Copy Down (LCD). Our results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache-evictions by an order of magnitude in comparison to universal caching."} {"_id": "8427c741834ccea874aa9e7be85c412d9670bfa2", "title": "A survey on content-centric technologies for the current Internet: CDN and P2P solutions", "text": "One of the most striking properties of the Internet is its flexibility to accommodate features it was not conceived for. Among the most significant examples, in this survey we consider the transition of the Internet from a reliable fault-tolerant network for host-to-host communication to a content-centric network, i.e. a network mostly devoted to supporting efficient generation, sharing and access to content. We survey this research area according to a top-down approach.
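The ProbCache abstract above caches probabilistically along a delivery path. The sketch below shows the general shape of such a decision rule (an illustrative rule in which caching probability grows toward the consumer and is normalized to an expected number of copies; this is not the paper's exact formula):

```python
import random

def cache_probability(hop, path_len, target_copies=1.0):
    """Probability that the router at position `hop` (1 = next to source,
    path_len = next to consumer) caches a passing item. Weights grow toward
    the consumer and are normalized so the expected number of copies placed
    on the whole path is ~target_copies."""
    return min(1.0, target_copies * 2.0 * hop / (path_len * (path_len + 1)))

def maybe_cache(cache, item, hop, path_len, rng=random):
    if rng.random() < cache_probability(hop, path_len):
        cache.add(item)

# Expected copies across a 5-hop path: the probabilities sum to 1.0.
print(sum(cache_probability(h, 5) for h in range(1, 6)))
```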
We present a conceptual framework that encompasses the key building blocks required to support content-centric networking in the Internet. Then we describe in detail the two most important types of content-centric Internet technologies, i.e., Content-Delivery Networks (CDNs) and P2P systems. For each of them, we show how they cover the key building blocks. We then identify the functional components of CDN and P2P content management solutions, and discuss the main solutions proposed in the literature for each of them. We consider different types of content (both real time and non real time), and different networking environments (fixed, mobile, ...). Finally, we also discuss the main recent research trends focused on how to design the Future Internet as a native content-centric network."} {"_id": "05cf4d77e9810d40d3e2ca360ce5fb8e8ea98f10", "title": "A data-oriented (and beyond) network architecture", "text": "The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution."} {"_id": "6693b715117f5a16018d0718e58be7022c326ac1", "title": "Computational intelligence and tower defence games", "text": "The aim of this paper is to introduce the use of Tower Defence (TD) games in Computational Intelligence (CI) research. We show how TD games can provide an important test-bed for the often under-represented casual games research area. Additionally, the use of CI in TD games has the potential to create a more interesting, interactive and ongoing game experience for casual gamers. We present a definition of the current state and development of TD games, and include a classification of TD game components. We then describe some potential ways CI can be used to augment the TD experience. Finally, a prototype TD game based on experience-driven procedural content generation is presented."} {"_id": "f6793ab7c1669f81f64212a20f407374882f2130", "title": "The Validity of Presence as a Reliable Human Performance Metric in Immersive Environments", "text": "Advances in interactive media technologies have reached the stage where users need no longer act as passive observers. As the technology evolves further we will be able to engage in computer based synthetic performances that mimic events in the real or natural world. Performances in this context refer to plays, films, computer based simulations, engineering mock-ups, etc. At a more fundamental level, we are dealing with complex interactions of the human sensory and perceptual systems with a stimulus environment (covering visual, auditory and other components). Authors or rather directors of these new media environments will exploit computer generated perceptual illusions to convince the person (participant) who is interacting with the system that they are present in the environment. Taken to the limit the participant would not be able to distinguish between being in a real or natural environment compared with being in a purely synthetic environment.
Unfortunately, the performance of current technology falls considerably short of providing a faithful perceptual illusion of the real world. Eventually, VR authoring tools will become more sophisticated and it will be possible to produce realistic virtual environments that match more closely with the real world. Some VR practitioners have attempted to collect data that they purport to represent the degree of presence a user experiences. Unfortunately, we do not yet have an agreed definition of presence and seem undecided whether it is a meaningful metric in its own right. This paper will examine the dangers of attempting to measure the 'presence' of a VR system as a one-dimensional parameter. The question of whether presence is a valid measure of a VR system will also be addressed. An important differentiating characteristic of VR systems compared with other human-computer interfaces is their ability to create a sense of 'being-in' the computer generated environment. Other forms of media such as film and TV are also known to induce a sense of 'being-in' the environment. Some VR practitioners have tended to use the term presence to describe this effect. This means that people who are engaged in the virtual environment feel as though they are actually part of the virtual environment. In order to be perceptually present in a virtual environment it is first important to understand what it means to be present in the real-world. A good example of real world experience is that of a roller coaster ride. The sensory …"} {"_id": "82bc67442f4d5353d7dde5c4345f2ec041a7de93", "title": "Using the opinion leaders in social networks to improve the cold start challenge in recommender systems", "text": "The increasing volume of information about goods and services has created growing confusion for online buyers in cyberspace, and this problem still continues. One of the most important ways to deal with this information overload is to use a recommender system. The task of a recommender system is to offer the product most appropriate and closest to the user's demands and needs. In such a system, one of the main problems is the cold start challenge. This problem occurs when a new user logs on: because there is no sufficient information available in the system about the user, the system is unable to provide appropriate recommendations and the system error rises. In this paper, we propose to use a new measurement called opinion leaders to alleviate this problem. An opinion leader is a person whose opinion has an impact on the target user. As a result, when a new user logs in and the user-item matrix is sparse, we can use the opinions of opinion leaders to offer appropriate recommendations for new users and thereby increase the accuracy of the recommender system. The results of several conducted tests showed that opinion leaders combined with recommender systems effectively reduce the recommendation errors."} {"_id": "48a0ef1b5a44e50389d88330036d15231a351a52", "title": "Anomaly detection using DBSCAN clustering technique for traffic video surveillance", "text": "Detecting anomalies such as rule violations, accidents, unusual driving and other suspicious actions increases the need for automatic analysis in Traffic Video Surveillance (TVS). Most of the works in traffic rule violation systems are based on probabilistic methods of classification for detecting events as normal and abnormal.
This paper proposes an unsupervised clustering technique, Novel Anomaly Detection-Density Based Spatial Clustering of Applications with Noise (NAD-DBSCAN), which clusters the trajectories of moving objects of varying sizes and shapes. A trajectory is said to be abnormal if it does not fit the trained model. Epsilon (Eps) and Minimum Points (MinPts) are the essential parameters for dynamically determining the number of clusters for a data point. The proposed system is validated using a benchmark traffic dataset and found to perform accurately in detecting anomalies."} {"_id": "3f29bcf67ddaa3991ad2c15046c51f6a309d01f8", "title": "Security Attacks and Solutions in Clouds", "text": "Cloud computing offers great potential to improve productivity and reduce costs, but at the same time it introduces many new security risks. In this paper we identify the possible security attacks on clouds including: Wrapping attacks, Malware-Injection attacks, Flooding attacks, Browser attacks, and also Accountability checking problems. We identify the root causes of these attacks and propose specific"} {"_id": "ba1962830434448b3cc4fd63eb561fd3816febc5", "title": "What you look at is what you get: eye movement-based interaction techniques", "text": "In seeking hitherto-unused methods by which users and computers can communicate, we investigate the usefulness of eye movements as a fast and convenient auxiliary user-to-computer communication mode. The barrier to exploiting this medium has not been eye-tracking technology but the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a natural and unobtrusive way. This paper discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, and reports our experiences and observations on them."} {"_id": "e568d744d48744938f94eeabf7e79d3a8764435e", "title": "Depression Scale Recognition from Audio, Visual and Text Analysis", "text": "Depression is a major mental health disorder that is rapidly affecting lives worldwide. Depression impacts not only the emotional but also the physical and psychological state of the person. Its symptoms include lack of interest in daily activities, feeling low, anxiety, frustration, loss of weight and even feelings of self-hatred. This report describes work done by us for the Audio Visual Emotion Challenge (AVEC) 2017 during our second year BTech summer internship. With the increase in demand to detect depression automatically with the help of machine learning algorithms, we present our multimodal feature extraction and decision level fusion approach for the same. Features are extracted by processing the provided Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) database. Gaussian Mixture Model (GMM) clustering and a Fisher vector approach were applied on the visual data; statistical descriptors on gaze and pose, low-level audio features, head pose, and text features were also extracted. Classification is done on fused as well as independent features using Support Vector Machine (SVM) and neural networks. The results obtained were able to cross the provided baseline on the validation data set by 17% on audio features and 24.5% on video features.
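A minimal sketch of density-based trajectory anomaly detection in the spirit of the NAD-DBSCAN abstract above, using scikit-learn's DBSCAN. Resampling trajectories to a fixed length, the 2-D coordinate assumption, and the Eps/MinPts values are illustrative choices, not the paper's.

```python
# Hypothetical sketch: resample each (m, 2) trajectory to a fixed length,
# flatten it into a vector, and let DBSCAN label low-density trajectories
# as noise (-1), which we treat as anomalous. Eps/MinPts are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def resample(traj, n=16):
    """Linearly resample an (m, 2) trajectory to n points."""
    t = np.linspace(0, 1, len(traj))
    tn = np.linspace(0, 1, n)
    return np.column_stack([np.interp(tn, t, traj[:, i]) for i in range(2)])

def anomalous_trajectories(trajs, eps=5.0, min_pts=4):
    X = np.array([resample(np.asarray(tr)).ravel() for tr in trajs])
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
    return [tr for tr, lab in zip(trajs, labels) if lab == -1]
```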
Keywords\u2014AVEC 2017, SVM, Depression, Neural network, RMSE, MAE, fusion, speech processing."} {"_id": "414573bcd1849b4d3ec8a06dd4080b62f1db5607", "title": "Attacking DDoS at the Source", "text": "Distributed denial-of-service (DDoS) attacks present an Internet-wide threat. We propose D-WARD, a DDoS defense system deployed at source-end networks that autonomously detects and stops attacks originating from these networks. Attacks are detected by the constant monitoring of two-way traffic flows between the network and the rest of the Internet and periodic comparison with normal flow models. Mismatching flows are rate-limited in proportion to their aggressiveness. D-WARD offers good service to legitimate traffic even during an attack, while effectively reducing DDoS traffic to a negligible level. A prototype of the system has been built in a Linux router. We show its effectiveness in various attack scenarios, discuss motivations for deployment, and describe associated costs."} {"_id": "b84baac3305531e8c548f496815d47e21e13bb0c", "title": "PAGE: A Partition Aware Engine for Parallel Graph Computation", "text": "Graph partition quality affects the overall performance of parallel graph computation systems. The quality of a graph partition is measured by the balance factor and edge cut ratio. A balanced graph partition with small edge cut ratio is generally preferred since it reduces the expensive network communication cost. However, according to an empirical study on Giraph, the performance over well partitioned graph might be even two times worse than simple random partitions. This is because these systems only optimize for the simple partition strategies and cannot efficiently handle the increasing workload of local message processing when a high quality graph partition is used. In this paper, we propose a novel partition aware graph computation engine named PAGE, which equips a new message processor and a dynamic concurrency control model. The new message processor concurrently processes local and remote messages in a unified way. The dynamic model adaptively adjusts the concurrency of the processor based on the online statistics. The experimental evaluation demonstrates the superiority of PAGE over the graph partitions with various qualities."} {"_id": "f9bfb77f6cf1d175afe8745d32b187c602b61bfc", "title": "Making flexible magnetic aerogels and stiff magnetic nanopaper using cellulose nanofibrils as templates.", "text": "Nanostructured biological materials inspire the creation of materials with tunable mechanical properties. Strong cellulose nanofibrils derived from bacteria or wood can form ductile or tough networks that are suitable as functional materials. Here, we show that freeze-dried bacterial cellulose nanofibril aerogels can be used as templates for making lightweight porous magnetic aerogels, which can be compacted into a stiff magnetic nanopaper. The 20-70-nm-thick cellulose nanofibrils act as templates for the non-agglomerated growth of ferromagnetic cobalt ferrite nanoparticles (diameter, 40-120 nm). Unlike solvent-swollen gels and ferrogels, our magnetic aerogel is dry, lightweight, porous (98%), flexible, and can be actuated by a small household magnet. Moreover, it can absorb water and release it upon compression. 
Owing to their flexibility, high porosity and surface area, these aerogels are expected to be useful in microfluidics devices and as electronic actuators."} {"_id": "b3823c1693fd8c03c5f8fd694908f314051d3a72", "title": "When the face reveals what words do not: facial expressions of emotion, smiling, and the willingness to disclose childhood sexual abuse.", "text": "For survivors of childhood sexual abuse (CSA), verbal disclosure is often complex and painful. The authors examined the voluntary disclosure-nondisclosure of CSA in relation to nonverbal expressions of emotion in the face. Consistent with hypotheses derived from recent theorizing about the moral nature of emotion, CSA survivors who did not voluntarily disclose CSA showed greater facial expressions of shame, whereas CSA survivors who voluntarily disclosed CSA expressed greater disgust. Expressions of disgust also signaled sexual abuse accompanied by violence. Consistent with recent theorizing about smiling behavior, CSA nondisclosers made more polite smiles, whereas nonabused participants expressed greater genuine positive emotion. Discussion addressed the implications of these findings for the study of disclosure of traumatic events, facial expression, and the links between morality and emotion."} {"_id": "0c6c7583687c245aedbe894edf63541fdda122ea", "title": "OpinionFinder: A System for Subjectivity Analysis", "text": "OpinionFinder: a system for subjectivity analysis. Theresa Wilson, Paul Hoffmann, Swapna Somasundaran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. University of Pittsburgh, Cornell University, and University of Utah. Vancouver, October 2005."} {"_id": "8c46b80a009c28fb1b6d76c0d424adb4139dfd5a", "title": "Model-Based Quantitative Network Security Metrics: A Survey", "text": "Network security metrics (NSMs) based on models allow one to quantitatively evaluate the overall resilience of networked systems against attacks. For that reason, such metrics are of great importance to the security-related decision-making process of organizations. Considering that over the past two decades several model-based quantitative NSMs have been proposed, this paper presents a deep survey of the state-of-the-art of these proposals. First, to distinguish the security metrics described in this survey from other types of security metrics, an overview of security metrics, in general, and their classifications is presented. Then, a detailed review of the main existing model-based quantitative NSMs is provided, along with their advantages and disadvantages. Finally, this survey is concluded with an in-depth discussion on relevant characteristics of the surveyed proposals and open research issues of the topic."} {"_id": "628db24b86a59af1546ee501695496918caa8dba", "title": "Study of wheel slip and traction forces in differential drive robots and slip avoidance control strategy", "text": "The effect of wheel slip in differential drive robots is investigated in this paper. We consider differential drive robots with two driven wheels and ball-type caster wheels that are used to provide balance and support to the mobile robot.
The limiting values of traction forces for slip and no slip conditions are dependent on wheel-ground kinetic and static friction coefficients. The traction forces are used to determine the fraction of input torque that provides robot motion and this is used to calculate the actual position of the robot under slip conditions. The traction forces under no slip conditions are used to determine the limiting value of the wheel torque above which the wheel slips. This limiting torque value is used to set a saturation limit for the input torque to avoid slip. Simulations are conducted to evaluate the behavior of the robot during slip and no slip conditions. Experiments are conducted under similar slip and no slip conditions using a custom built differential drive mobile robot with one caster wheel to validate the simulations. Experiments are also conducted with the torque limiting strategy. Results from model simulations and experiments are presented and discussed."} {"_id": "5e7fbb0a42d933587a04bb31cad3ff9c95b79924", "title": "Design and fabrication of W-band SIW horn antenna using PCB process", "text": "A W-band SIW horn antenna is designed and fabricated using a PCB process. Measured S11 reaches -15 dB at 84 GHz with a bandwidth of 1 GHz and the maximum simulated gain is 9 dBi. The antenna is loaded with dielectric to increase gain and the effect of different loading lengths on gain is studied. The antenna is fed using a WR10 standard waveguide through a coupling slot in the SIW. The effect of changing the slot width on the return loss is studied. Good agreement is achieved between measured and simulated results. The proposed antenna is suitable for medical imaging and air traffic radar."} {"_id": "0c931bb17d9e5d4dc12e2662a3571535841ab534", "title": "Aligning 3D models to RGB-D images of cluttered scenes", "text": "The goal of this work is to represent objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene and then using a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel surface normals in images containing renderings of synthetic objects. When tested on real data, our method outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place into the scene the model that fits best. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [34], while being an order of magnitude faster."} {"_id": "f9907e97fd86f478f4da4c2a6e96b4f3192b4f1c", "title": "Adaptive Scale Selection for Multiscale Segmentation of Satellite Images", "text": "With the dramatically increasing spatial resolution of satellite imaging sensors, object-based image analysis (OBIA) has been gaining prominence in remote sensing applications. Multiscale image segmentation is a prerequisite step that splits an image into hierarchical homogeneous segmented objects for OBIA. However, scale selection remains a challenge in multiscale segmentation. In this study, we present an adaptive approach for defining and estimating the optimal scale in the multiscale segmentation process. Central to our method is the combined use of image features from segmented objects and prior knowledge from historical thematic maps in a top-down segmentation procedure.
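The wheel-slip abstract above sets a saturation limit on input torque from the no-slip traction bound. A minimal sketch assuming the simple friction model T_max = mu_s * N * r (static friction coefficient times normal load times wheel radius); a real controller would also account for load transfer and motor dynamics.

```python
# Hypothetical sketch of the torque-saturation idea: clamp the commanded
# wheel torque to the largest value static wheel-ground friction can
# support. T_max = mu_s * N * r is a simplified no-slip traction bound.
def saturate_torque(t_cmd, mu_s, normal_load, wheel_radius):
    t_max = mu_s * normal_load * wheel_radius   # no-slip traction bound
    return max(-t_max, min(t_cmd, t_max))

# Example: friction coefficient 0.8, 30 N per wheel, 5 cm wheel radius.
print(saturate_torque(2.0, 0.8, 30.0, 0.05))    # -> 1.2 N*m (clamped)
```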
Specifically, the whole image was first split into segmented objects using the largest scale in a predefined segmentation scale sequence. Second, based on segmented object features and prior knowledge in the local region of thematic maps, we calculated complexity values for each segmented object. Third, if the complexity values of an object were large enough, this object was further split into multiple segmented objects with a smaller scale in the scale sequence. Then, in a similar manner, complex segmented objects were split into the simplest objects iteratively. Finally, the segmentation result was obtained and evaluated. We applied this method to a GF-1 multispectral satellite image and a ZY-3 multispectral satellite image to produce multiscale segmentation maps and subsequent classification maps, comparing it with the state-of-the-art and the traditional mean shift algorithm. The experimental results illustrate that the proposed method is practical and efficient at producing appropriate segmented image objects with optimal scales."} {"_id": "19d73217ada6b093594648d55ec0b5991a61eec1", "title": "Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review", "text": "Moral dilemma tasks have been a much appreciated experimental paradigm in empirical studies on moral cognition for decades and have, more recently, also become a preferred paradigm in the field of cognitive neuroscience of moral decision-making. Yet, studies using moral dilemmas suffer from two main shortcomings: first, they lack methodological homogeneity, which impedes reliable comparisons of results across studies, thus making a meta-analysis manifestly impossible; and second, they overlook control of relevant design parameters. In this paper, we review from a principled standpoint the studies that use moral dilemmas to approach the psychology of moral judgment and its neural underpinnings. We present a systematic review of 19 experimental design parameters that can be identified in moral dilemmas. Accordingly, our analysis establishes a methodological basis for the required homogeneity between studies and suggests the consideration of experimental aspects that have not yet received much attention despite their relevance."} {"_id": "7f1ea7e6eb5fb4c7ecc34e26250abea775c8aea2", "title": "Interlingua based Sanskrit-English machine translation", "text": "This paper describes work towards developing a machine translation system for Sanskrit to English. An interlingua based machine translation system in the Paninian framework was developed in this work. As lexical functional grammar can handle semantic information along with syntactic analysis, the proposed system uses lexical functional grammar at different levels of the analysis phase. In this system the given Sanskrit text is first converted to an intermediate notation called interlingua. This notation is then used for mapping to the target language, English, and the translated output is generated. The proposed system handles the translation of simple and complex sentences in the given corpus. The karaka analysis system of Panini is used for the semantic analysis."} {"_id": "b9bb8d42cc70242157e3e8fa36abcb978da7cd1f", "title": "Development and field test of novel two-wheeled UAV for bridge inspections", "text": "This paper presents the development and field test of a novel unmanned aerial vehicle (UAV) for bridge inspections.
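A minimal sketch of the top-down split-if-complex loop described in the segmentation abstract above. Here `segment_at_scale` and `complexity` are hypothetical stand-ins for any multiscale segmenter and any object-complexity measure built from image features and a thematic-map prior; the threshold is an illustrative choice.

```python
# Hypothetical sketch of top-down adaptive scale selection: start at the
# coarsest scale and re-segment any object whose complexity exceeds a
# threshold at the next finer scale. `segment_at_scale` and `complexity`
# are injected stand-ins, not functions from the paper.
def adaptive_segmentation(image, prior_map, scales,
                          segment_at_scale, complexity, threshold=0.5):
    objects = segment_at_scale(image, scales[0])      # coarsest scale first
    final = []
    for scale in scales[1:]:                          # progressively finer
        next_round = []
        for obj in objects:
            if complexity(obj, prior_map) > threshold:
                # Too heterogeneous: split again at the finer scale.
                next_round.extend(segment_at_scale(obj, scale))
            else:
                final.append(obj)                     # simple enough: keep
        objects = next_round
        if not objects:
            break
    return final + objects              # leftovers stay at the finest scale
```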
The proposed UAV, which consists of a quadrotor, a cylindrical cage installed in it, and two spokeless wheels freely rotating around the cage, can climb and run on the bridge surface. Its structure improves durability and reduces air resistance compared with conventional UAVs with a cage or wheels. These advantages expand the uses of the UAV in various fields. This paper evaluates the effectiveness of the proposed UAV in real-world bridge inspection scenarios. We tested basic locomotion and measured air resistances. Experimental results on bridges demonstrated the ability to inspect various locations on bridges that are difficult for human inspectors to access."} {"_id": "677d77eb7ae64ce8ada6e52ddddc7a538ea64b45", "title": "An 87 fJ/conversion-step 12 b 10 MS/s SAR ADC using a minimum number of unit capacitors", "text": "This work proposes a 12 b 10 MS/s 0.11 μm CMOS successive-approximation register ADC based on a C-R hybrid DAC for low-power sensor applications. The proposed C-R DAC employs a 2-step split-capacitor array of upper seven bits and lower five bits to optimize power consumption and chip area at the target speed and resolution. A VCM-based switching method for the most significant bit and reference voltage segments from an insensitive R-string for the last two least significant bits minimize the number of unit capacitors required in the C-R hybrid DAC. The comparator accuracy is improved by an open-loop offset cancellation technique in the first-stage pre-amp. The prototype ADC in a 0.11 μm CMOS process demonstrates the measured differential nonlinearity and integral nonlinearity within 1.18 LSB and 1.42 LSB, respectively. The ADC shows a maximum signal-to-noise-and-distortion ratio of 63.9 dB and a maximum spurious-free dynamic range of 77.6 dB at 10 MS/s. The ADC with an active die area of 0.34 mm² consumes 1.1 mW at 1.0 V and 10 MS/s, corresponding to a figure-of-merit of 87 fJ/conversion-step."} {"_id": "40f46e5d5e729965adb7b5613a625040097d07eb", "title": "Assessment of degree of risk from sources of microbial contamination in cleanrooms; 3: Overall application", "text": "The European Commission and the Food and Drug Administration in the USA suggest that risk management and assessment methods should be used to identify and control sources of microbial contamination [1, 2]. A risk management method has been described by Whyte [3, 4] that is based on the Hazard Analysis and Critical Control Point (HACCP) system but reinterpreted for use in cleanrooms, and called Risk Management of Contamination (RMC). This method is also described in the PHSS Technical Monograph No 145 and has the following steps.
In study 2, data from the second phase of the Midlife Development in the USA (MIDUS II) project was used to investigate mindful-practice (i.e., meditation) as a moderator of the relationships between age and multiple measures of subjective wellbeing (life satisfaction, psychological health, physical health) in a sample of 2477 adults. Results revealed that mindfulness moderates the relationship between age and multiple indicators of subjective wellbeing. In addition, study 2 results indicated that individuals who reported that they meditated often, combined with those who reported they meditated a lot, reported better physical health than those who reported that they never meditate. The findings suggest that cultivating mindfulness can be a proactive tool for fostering health and subjective wellbeing in an aging and age-diverse workforce."} {"_id": "482112e839684c25166f16ed91153683158da31e", "title": "A Survey of Intrusion Detection Techniques", "text": "Intrusion detection is an alternative response to security violations. A security mechanism for the network is necessary to counter threats to the system. There are two types of intruders: external intruders, who are unauthorized users of the machines they attack, and internal intruders, who have permission to access the system with some restrictions. This paper presents a brief overview of various intrusion detection techniques, such as fuzzy logic, neural networks, pattern recognition methods, genetic algorithms, and related techniques. Among the several soft computing paradigms, fuzzy rule-based classifiers, decision trees, support vector machines, and linear genetic programming are used to model fast and efficient intrusion detection systems. Keywords—Introduction, intrusion detection methods, misuse detection techniques, anomaly detection techniques, genetic algorithms."} {"_id": "705a24f4e1766a44bbba7cf335f74229ed443c7b", "title": "Face Recognition with Local Binary Patterns, Spatial Pyramid Histograms and Naive Bayes Nearest Neighbor Classification", "text": "Face recognition algorithms commonly assume that face images are well aligned and have a similar pose -- yet in many practical applications it is impossible to meet these conditions. Therefore extending face recognition to unconstrained face images has become an active area of research. To this end, histograms of Local Binary Patterns (LBP) have proven to be highly discriminative descriptors for face recognition. Nonetheless, most LBP-based algorithms use a rigid descriptor matching strategy that is not robust against pose variation and misalignment. We propose two algorithms for face recognition that are designed to deal with pose variations and misalignment. We also incorporate an illumination normalization step that increases robustness against lighting variations. The proposed algorithms use descriptors based on histograms of LBP and perform descriptor matching with spatial pyramid matching (SPM) and Naive Bayes Nearest Neighbor (NBNN), respectively. Our contribution is the inclusion of flexible spatial matching schemes that use an image-to-class relation to provide an improved robustness with respect to intra-class variations. We compare the accuracy of the proposed algorithms against Ahonen's original LBP-based face recognition system and two baseline holistic classifiers on four standard datasets.
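A minimal sketch of LBP-histogram face description with a simple nearest-neighbour match, in the spirit of the face-recognition abstract above. It uses skimage's local_binary_pattern; the grid size and LBP settings are illustrative, and the SPM/NBNN matching machinery of the paper is deliberately omitted here.

```python
# Hypothetical sketch: describe a grayscale face as concatenated
# per-cell histograms of uniform LBP codes, then match a probe to the
# closest gallery descriptor. The paper's flexible spatial matching
# (SPM, NBNN) is replaced by a plain L2 nearest neighbour.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray, grid=(4, 4), P=8, R=1):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    h, w = lbp.shape
    bins = P + 2                          # uniform patterns + "other" bin
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i*h//grid[0]:(i+1)*h//grid[0],
                       j*w//grid[1]:(j+1)*w//grid[1]]
            hist, _ = np.histogram(cell, bins=bins, range=(0, bins))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def nearest_identity(probe, gallery):
    """gallery: dict mapping name -> descriptor."""
    return min(gallery, key=lambda n: np.linalg.norm(gallery[n] - probe))
```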
Our results indicate that the algorithm based on NBNN outperforms the other solutions, and does so more markedly in the presence of pose variations."} {"_id": "18ef666517e8b60cbe00c0f5ec5bd8dd18b936f8", "title": "Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading", "text": "Mobile-edge computation offloading (MECO) off-loads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation."} {"_id": "32093be6d9bf5038adb5b5b8b8e8a9b62001643c", "title": "Multi-Label Classification: An Overview", "text": "Multi-label classification methods are increasingly required by modern applications, such as protein function classification, music categorization, and semantic scene classification. This article introduces the task of multi-label classification, organizes the sparse related literature into a structured presentation and presents comparative experimental results of certain multi-label classification methods. It also contributes the definition of concepts for the quantification of the multi-label nature of a data set."} {"_id": "3b840207422285bf625220395b18a9f75f55acef", "title": "Offloading in Mobile Cloudlet Systems with Intermittent Connectivity", "text": "The emergence of mobile cloud computing enables mobile users to offload applications to nearby mobile resource-rich devices (i.e., cloudlets) to reduce energy consumption and improve performance. However, due to mobility and cloudlet capacity, the connections between a mobile user and mobile cloudlets can be intermittent. As a result, offloading actions taken by the mobile user may fail (e.g., the user moves out of communication range of cloudlets). In this paper, we develop an optimal offloading algorithm for the mobile user in such an intermittently connected cloudlet system, considering the users' local load and availability of cloudlets.
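The MECO abstract above proves a threshold rule on an offloading priority built from channel gain and local computing energy. A minimal sketch of that decision structure; the priority formula below is an illustrative proxy, not the paper's derived function.

```python
# Hypothetical sketch of threshold-based offloading: rank users by an
# offloading priority combining channel gain and local computing energy,
# then offload completely above the threshold and minimally below it.
# The product used as the priority is an illustrative proxy.
def offloading_decisions(users, threshold):
    decisions = {}
    for uid, u in users.items():
        # High local energy cost and a good channel both favour offloading.
        priority = u["local_energy"] * u["channel_gain"]
        decisions[uid] = "complete" if priority >= threshold else "minimum"
    return decisions

users = {
    "a": {"local_energy": 2.0, "channel_gain": 0.9},
    "b": {"local_energy": 0.5, "channel_gain": 0.3},
}
print(offloading_decisions(users, threshold=0.5))  # a: complete, b: minimum
```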
We examine users' mobility patterns and cloudlets' admission control, and derive the probability of successful offloading actions analytically. We formulate and solve a Markov decision process (MDP) model to obtain an optimal policy for the mobile user with the objective of minimizing the computation and offloading costs. Furthermore, we prove that the optimal policy of the MDP has a threshold structure. Subsequently, we introduce a fast algorithm for energy-constrained users to make offloading decisions. The numerical results show that the analytical form of the successful offloading probability is a good estimate in various mobility cases. Furthermore, the proposed MDP offloading algorithm for mobile users outperforms conventional baseline schemes."} {"_id": "02cbb22e2011938d8d2c0a42b175e96d59bb377f", "title": "Above the Clouds: A Berkeley View of Cloud Computing", "text": "Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing."} {"_id": "926c63eb0446bcd297df4efc58b803917815a912", "title": "2.4 to 61 GHz Multiband Double-Directional Propagation Measurements in Indoor Office Environments", "text": "This paper presents the details and results of double-directional propagation measurements carried out in two indoor office environments: a semi-open, cubicle-type office space and a more traditional work environment with closed offices. The measurements cover seven frequency bands from 2.4 to 61 GHz, permitting the propagation characteristics to be compared over a wide range of candidate radio frequencies for next-generation mobile wireless systems, including ultra high frequency and millimeter-wave bands.
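The cloudlet paper above solves an MDP whose optimal offloading policy is threshold-structured. A minimal value-iteration sketch over a toy state space (local load level and cloudlet availability) with illustrative costs and transition probabilities; it is not the paper's model, only a demonstration of the MDP machinery involved.

```python
# Hypothetical sketch: value iteration for a toy offloading MDP. States
# are (load, cloudlet_available); actions are "local" or "offload". The
# costs and the availability probability are illustrative choices.
import itertools

LOADS = range(3)                  # 0 = light ... 2 = heavy local load
ACTIONS = ("local", "offload")
P_AVAIL = 0.6                     # chance a cloudlet is reachable next step

def cost(load, avail, action):
    if action == "local":
        return 1.0 + load                 # local energy grows with load
    return 0.5 if avail else 3.0          # a failed offload is expensive

def value_iteration(gamma=0.9, sweeps=100):
    V = {(l, a): 0.0 for l in LOADS for a in (0, 1)}
    for _ in range(sweeps):
        for l, a in itertools.product(LOADS, (0, 1)):
            V[(l, a)] = min(
                cost(l, a, act)
                + gamma * sum((P_AVAIL if na else 1 - P_AVAIL) * V[(l, na)]
                              for na in (0, 1))
                for act in ACTIONS)
    return V
```

Inspecting the greedy action per state in such a toy model shows the threshold behaviour the paper proves: offloading is chosen once the local load (and hence local cost) crosses a availability-dependent level.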
A novel processing algorithm is introduced for the expansion of multiband measurement data into sets of discrete multipath components. Based on the resulting multipath parameter estimates, models are presented for frequency-dependent path loss, shadow fading, copolarization ratio, delay spread, and angular spreads, along with their interfrequency correlations. Our results indicate a remarkably strong consistency in multipath structure over the entire frequency range considered."} {"_id": "fb8704210358d0cbf5113c97e1f9f9f03f67e6fc", "title": "A review of content-based image retrieval systems in medical applications - clinical benefits and future directions", "text": "Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors, or objective image interpretations are still unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (approximately 1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set and patient information can be stored with the actual image(s), although still a few problems prevail with respect to the standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data, and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of available literature in the field of content-based access to medical image data and on the technologies used in the field. Section 1 gives an introduction into generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches. Example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove to be useful.
This article also offers explanations for some of the outlined problems in the field: many system proposals come from the medical domain, while research prototypes are developed in computer science departments using medical datasets. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools."} {"_id": "50c3bd96304473c0e5721fcaefe20a44a7bcd691", "title": "Wavelet based real-time smoke detection in video", "text": "A method for smoke detection in video is proposed. It is assumed the camera monitoring the scene is stationary. Since the smoke is semi-transparent, edges of image frames start losing their sharpness and this leads to a decrease in the high frequency content of the image. To determine the smoke in the field of view of the camera, the background of the scene is estimated and the decrease of high-frequency energy of the scene is monitored using the spatial wavelet transforms of the current and the background images. Edges of the scene are especially important because they produce local extrema in the wavelet domain. A decrease in values of local extrema is also an indicator of smoke. In addition, the scene becomes grayish when there is smoke and this leads to a decrease in chrominance values of pixels. Periodic behavior in smoke boundaries and convexity of smoke regions are also analyzed. All of these clues are combined to reach a final decision."} {"_id": "d242e2c62b0e533317a6f667ce8c6caa55648efe", "title": "Short-Circuit Current of Wind Turbines With Doubly Fed Induction Generator", "text": "The short-circuit current contribution of wind turbines has not received much attention so far. This paper considers the short-circuit behavior, especially the short-circuit current of wind turbines with a doubly fed induction generator. Mostly, these wind turbines have a crowbar to protect the power electronic converter that is connected to the rotor windings of the induction generator. First, the maximum value of the short-circuit current of a conventional induction machine is determined. The differences between a crowbar-protected doubly fed induction generator and a conventional induction generator are highlighted and approximate equations for the maximum short-circuit current of a doubly fed induction generator are determined. The values obtained in this way are compared to the values obtained from time domain simulations. The differences are less than 15%."} {"_id": "5d0a48e2b4fe4b9150574cedf867d73787f30e48", "title": "Game Theory in Wireless Networks: A Tutorial", "text": "The behavior of a given wireless device may affect the communication capabilities of a neighboring device, notably because the radio communication channel is usually shared in wireless networks. In this tutorial, we carefully explain how situations of this kind can be modelled by making use of game theory. By leveraging on four simple running examples, we introduce the most fundamental concepts of non-cooperative game theory. This approach should help students and scholars to quickly master this fascinating analytical tool without having to read the existing lengthy, economics-oriented books.
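A minimal sketch of the wavelet energy cue from the smoke-detection abstract above: compare the high-frequency subband energy of the current frame against the estimated smoke-free background frame, and flag a sustained drop. The Haar wavelet and the 20% drop threshold are illustrative assumptions; the paper combines this cue with chrominance, boundary, and convexity clues before deciding.

```python
# Hypothetical sketch: smoke softens edges, so the high-frequency wavelet
# subband energy of the scene drops relative to the background estimate.
# PyWavelets' dwt2 provides the single-level 2-D transform used here.
import numpy as np
import pywt

def highfreq_energy(gray_frame):
    _, (cH, cV, cD) = pywt.dwt2(gray_frame.astype(float), "haar")
    return float(np.sum(cH**2 + cV**2 + cD**2))

def smoke_suspected(frame, background, drop=0.2):
    e_bg = highfreq_energy(background)
    e_now = highfreq_energy(frame)
    return e_now < (1.0 - drop) * e_bg   # edge energy fell noticeably
```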
It should also assist them in modelling problems of their own."} {"_id": "46c5d41a1a111d51eeab06b401b74b404c8653ea", "title": "Cryptography and cryptanalysis for embedded systems", "text": "A growing number of devices of daily use are equipped with computing capabilities. Today, already more than 98% of all manufactured microprocessors are employed in embedded applications, leaving less than 2% to traditional computers. Many of these embedded devices are enabled to communicate amongst each other and form networks. A side effect of the rising interconnectedness is a possible vulnerability of these embedded systems. Attacks that have formerly been restricted to PCs can suddenly be launched against cars, tickets, ID cards or even pacemakers. At the same time the security awareness of users and manufacturers of such systems is much lower than in classical PC environments. This renders security one key aspect of embedded systems design and for most pervasive computing applications. As embedded systems are usually deployed in large numbers, costs are a main concern of system developers. Hence embedded security solutions have to be cheap and efficient. Many security services such as digital signatures can only be realized by public key cryptography. Yet, public key schemes are in terms of computation orders of magnitude more expensive than private key cryptosystems. At the same time the prevailing schemes rely on very similar security assumptions. If one scheme gets broken, almost all cryptosystems employing asymmetric cryptography become useless. The first part of this work explores alternatives to the prevailing public key cryptosystems. Two alternative signature schemes and one public key encryption scheme from the family of post quantum cryptosystems are explored. Their security relies on different assumptions so that a break of one of the prevailing schemes does not affect the security of the studied alternatives. The main focus lies on the implementational aspects of these schemes for embedded systems. One actual outcome is that, contrary to common belief, the presented schemes provide similar and in some cases even better performance than the prevailing schemes. The presented solutions include a highly scalable software implementation of the Merkle signature scheme aimed at low-cost microprocessors. For signatures in hardware an FPGA framework for implementing a family of signature schemes based on multivariate quadratic equations is presented. Depending on the chosen scheme, multivariate quadratic signatures show better performance than elliptic curves in terms of area consumption and performance. The McEliece cryptosystem is an alternative public key encryption scheme which was believed to be infeasible on embedded platforms due to its large key size. This work shows that by applying certain …"} {"_id": "32137062567626ca0f466c7a2272a4368f093800", "title": "Recall of childhood trauma: a prospective study of women's memories of child sexual abuse.", "text": "One hundred twenty-nine women with previously documented histories of sexual victimization in childhood were interviewed and asked detailed questions about their abuse histories to answer the question \"Do people actually forget traumatic events such as child sexual abuse, and if so, how common is such forgetting?\" A large proportion of the women (38%) did not recall the abuse that had been reported 17 years earlier.
Women who were younger at the time of the abuse and those who were molested by someone they knew were more likely to have no recall of the abuse. The implications for research and practice are discussed. Long periods with no memory of abuse should not be regarded as evidence that the abuse did not occur."} {"_id": "b6ca785bc7fa5b803c1c9a4f80b6630551e7a261", "title": "Blind Identification of Underdetermined Mixtures by Simultaneous Matrix Diagonalization", "text": "In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present conditions under which the mixing matrix is unique and discuss several algorithms for its computation."} {"_id": "0c9c3f948eda8fb339e76c6612ca4bd36244efd7", "title": "The Internet of Things: A Review of Enabled Technologies and Future Challenges", "text": "The Internet of Things (IoT) is an emerging paradigm, envisioned as a system of billions of small interconnected devices for applying state-of-the-art findings to real-world problems. Over the last decade, there has been increasing research attention on the IoT as an essential element of the ongoing convergence between human behavior and its reflection in Information Technology. With the development of technologies, the IoT drives the deployment of ubiquitous, self-organizing wireless networks. The IoT model is progressing toward the notion of a cyber-physical world, where things can be created, driven, combined, and updated to facilitate the emergence of any feasible association. This paper provides a summary of the existing IoT research that underlines enabling technologies, such as fog computing, wireless sensor networks, data mining, context awareness, real-time analytics, virtual reality, and cellular communications. Also, we present the lessons learned after acquiring a thorough representation of the subject. Thus, by identifying numerous open research challenges, this review is expected to draw more attention to this novel paradigm."} {"_id": "d297445dc6b372fb502d26a6f29828b3c160a60e", "title": "THE RESTORATIVE BENEFITS OF NATURE: TOWARD AN INTEGRATIVE", "text": "Directed attention plays an important role in human information processing; its fatigue, in turn, has far-reaching consequences. Attention Restoration Theory provides an analysis of the kinds of experiences that lead to recovery from such fatigue. Natural environments turn out to be particularly rich in the characteristics necessary for restorative experiences. An integrative framework is proposed that places both directed attention and stress in the larger context of human-environment relationships."} {"_id": "5d17292f029c9912507b70f9ac6912fa590ade6d", "title": "Intracranial EEG and human brain mapping", "text": "This review is an attempt to highlight the value of human intracranial recordings (intracranial electro-encephalography, iEEG) for human brain mapping, based on their technical characteristics and based on the corpus of results they have already yielded. The advantages and limitations of iEEG recordings are introduced in detail, with an estimation of their spatial and temporal resolution for both monopolar and bipolar recordings.
The contribution of iEEG studies to the general field of human brain mapping is discussed through a review of the effects observed in the iEEG while patients perform cognitive tasks. Those effects range from the generation of well-localized evoked potentials to the formation of large-scale interactions between distributed brain structures, via long-range synchrony in particular. A framework is introduced to organize those iEEG studies according to the level of complexity of the spatio-temporal patterns of neural activity found to correlate with cognition. This review emphasizes the value of iEEG for the study of large-scale interactions, and describes in detail the few studies that have already addressed this point."} {"_id": "c8859b7ac5f466675c41561a6a299f7078a90df0", "title": "Survey on Hadoop and Introduction to YARN", "text": "Big Data, the analysis of large quantities of data to gain new insight, has become a ubiquitous phrase in recent years. Day by day the data is growing at a staggering rate. One of the efficient technologies for dealing with Big Data is Hadoop, which will be discussed in this paper. Hadoop uses the MapReduce programming model for processing jobs over large data volumes. Hadoop makes use of different schedulers for executing the jobs in parallel. The default scheduler is the FIFO (First In First Out) Scheduler. Other schedulers with priority, pre-emption and non-pre-emption options have also been developed. Over time, MapReduce has run into some of its limitations. So, in order to overcome the limitations of MapReduce, the next generation of MapReduce, called YARN (Yet Another Resource Negotiator), has been developed. This paper provides a survey of Hadoop, a few scheduling methods it uses, and a brief introduction to YARN. Keywords—Hadoop, HDFS, MapReduce, Schedulers, YARN."} {"_id": "560e0e58d0059259ddf86fcec1fa7975dee6a868", "title": "Face recognition in unconstrained videos with matched background similarity", "text": "Recognizing faces in unconstrained videos is a task of mounting importance. While obviously related to face recognition in still images, it has its own unique characteristics and algorithmic requirements. Over the years several methods have been suggested for this problem, and a few benchmark data sets have been assembled to facilitate its study. However, there is a sizable gap between the actual application needs and the current state of the art. In this paper we make the following contributions. (a) We present a comprehensive database of labeled videos of faces in challenging, uncontrolled conditions (i.e., ‘in the wild’), the ‘YouTube Faces’ database, along with benchmark, pair-matching tests. (b) We employ our benchmark to survey and compare the performance of a large variety of existing video face recognition techniques. Finally, (c) we describe a novel set-to-set similarity measure, the Matched Background Similarity (MBGS). This similarity is shown to considerably improve performance on the benchmark tests."} {"_id": "1b43ecec3bad257a773d6fdf3b4efa79e5c15138", "title": "Implementation of high speed radix-10 parallel multiplier using Verilog", "text": "Emerging computational complexity raises the need for a fast multiplication unit. The importance of multiplication in various applications necessitates improvement in its design so as to obtain the multiplication result efficiently.
The multiplication operation can be improved by reducing the number of partial products to be added and by enhancing the adder unit used to obtain the sum. The number of partial products can be reduced by using higher-radix multiplication. For high-speed applications, a radix-10 multiplier is proposed that uses recoded multiplier digits, as in conventional parallel multiplier design. The multiplier digits are encoded using the Signed Digit (SD) radix-10 method, which converts the digit set from {0 to 9} to {-5 to 5} and also generates a sign bit. This recoding minimizes the calculations, as only five multiples need to be computed, and the negative multiples are obtained using a 2's-complement approach. The negative multiples are used when the sign bit is high. A different approach is used to generate the multiples of the multiplicand digit and to accumulate the partial products obtained during multiplication. A modified BCD adder is used that eliminates the post-correction step while calculating the sum of two BCD digits. The modified architecture eliminates the extra recoding logic, thereby reducing the area of the overall architecture. This paper delivers the design and implementation of a 16-bit multiplication unit. The design entry is done in Verilog Hardware Description Language (HDL) and simulated using the ISIM simulator. It is synthesized and implemented using Xilinx ISE 12.2. Synthesis results show a 20.3% reduction in 4-input LUTs and a 20.4% reduction in the number of slices in the modified methodology. A further 11.5% reduction in maximum combinational path delay is also observed in the modified architecture, thereby leading to high-speed multiplication for VLSI applications."} {"_id": "38919649ae3fd207b96b62e95b3c8c8e69635c7f", "title": "Scenario-Based Performance Analysis of Routing Protocols for Mobile ad-hoc Networks", "text": "This study is a comparison of three routing protocols proposed for wireless mobile ad-hoc networks. The protocols are: Destination Sequenced Distance Vector (DSDV), Ad-hoc On demand Distance Vector (AODV) and Dynamic Source Routing (DSR). Extensive simulations are made on a scenario where nodes move randomly. Results are presented as a function of a novel mobility metric designed to reflect the relative speeds of the nodes in a scenario. Furthermore, three realistic scenarios are introduced to test the protocols in more specialized contexts. In most simulations the reactive protocols (AODV and DSR) performed significantly better than DSDV. At moderate traffic load DSR performed better than AODV for all tested mobility values, while AODV performed better than DSR at higher traffic loads. The latter is caused by the source routes in DSR data packets, which increase the load on the network. In such networks all nodes act as both routers and hosts, thus a node may forward packets between other nodes as well as run user applications. Mobile ad-hoc networks have been the focus of many recent research and development efforts. Ad-hoc packet radio networks have so far mainly concerned military applications, where a decentralized network configuration is an operative advantage or even a necessity. Networks using ad-hoc configuration concepts can be used in many military applications, ranging from interconnected wireless access points to networks of wireless devices carried by individuals, e.g., digital maps, sensors attached to the body, voice communication, etc.
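A minimal sketch of the signed-digit radix-10 recoding described in the multiplier abstract above: each decimal digit d in {0..9} is mapped into {-5..5}, propagating a carry into the next digit whenever d > 5 (since d = 10 - (10 - d)). Written in Python for illustration; the paper's actual design entry is in Verilog.

```python
# Hypothetical sketch of SD radix-10 recoding: digits above 5 become
# negative multiples plus a carry into the next position, so only the
# multiples 1x..5x of the multiplicand ever need to be generated.
def sd_recode(digits):
    """Recode least-significant-first decimal digits into SD radix-10."""
    out, carry = [], 0
    for d in digits:
        d += carry
        if d > 5:
            out.append(d - 10)   # negative digit, e.g. 8 -> -2, carry 1
            carry = 1
        else:
            out.append(d)
            carry = 0
    if carry:
        out.append(carry)
    return out

# 278 (digits LSD-first [8, 7, 2]) -> [-2, -2, 3]: 300 - 20 - 2 = 278
print(sd_recode([8, 7, 2]))
```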
Combinations of wide range and short range ad-hoc networks seek to provide robust, global coverage, even during adverse operating conditions."} {"_id": "fc4ad06ced503fc4daef509fea79584176751d18", "title": "Language, thought and reference.", "text": "How should we best analyse the meaning of proper names, indexicals, demonstratives, both simple and complex, and definite descriptions? In what relation do such expressions stand to the objects they designate? In what relation do they stand to mental representations of those objects? Do these expressions form a semantic class, or must we distinguish between those that are referential and those that are quantificational? Such questions have constituted one of the core research areas in the philosophy of language for much of the last century, yet consensus remains elusive: the field is still divided, for instance, between those who hold that all such expressions are semantically descriptive and those who would analyse most as the natural language counterparts of logical individual constants. The aim of this thesis is to cast new light on such questions by approaching them from within the cognitive framework of Sperber and Wilson's Relevance Theory. Relevance Theory offers not just an articulated pragmatics but also a broad conception of the functioning of natural language which differs radically from that presupposed within (most of) the philosophy of language. The function of linguistic expressions, on this conception, is not to determine propositional content, but rather to provide inferential premises which, in parallel with context and general pragmatic principles, will enable a hearer to reach the speaker's intended interpretation. Working within this framework, I shall argue that the semantics of the expressions discussed should best be analysed not in terms of their relation to those objects which, on occasions of use, they may designate, but rather in terms of the indications they offer a hearer concerning the mental representation which constitutes the content of a speaker's informative intention. Such an analysis can, I shall claim, capture certain key data on reference which have proved notoriously problematic, while respecting a broad range of apparently conflicting intuitions."} {"_id": "63c22f6b8e5d6eef32d0a79a8895356d5f18703d", "title": "Modeling Multibody Dynamic Systems With Uncertainties. Part II: Numerical Applications", "text": "This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper \u201cModeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects\u201d. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte-Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion.
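The truncated Karhunen-Loeve expansion mentioned at the end of the multibody abstract above can be sketched in a few lines of numpy. This is a generic illustration that assumes a zero-mean Gaussian process with an exponential covariance kernel; the kernel choice and all parameter values are assumptions for the example, not values from the paper.

```python
import numpy as np

def truncated_kl_terrain(n_points=200, length=100.0, corr_len=10.0,
                         sigma=0.05, n_modes=10, n_samples=3, seed=0):
    """Sample terrain-profile realizations from a truncated KL expansion."""
    x = np.linspace(0.0, length, n_points)
    # Exponential covariance kernel (assumed for illustration).
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    eigvals, eigvecs = np.linalg.eigh(C)          # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_modes]     # keep the largest modes
    lam, phi = eigvals[idx], eigvecs[:, idx]
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, n_modes))
    return x, xi @ (np.sqrt(lam) * phi).T         # (n_samples, n_points)
```

Keeping only the leading eigenmodes reduces the random terrain to a handful of independent standard-normal coefficients, which is what makes the expansion convenient as an input to polynomial chaos simulation.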
The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties."} {"_id": "eb68b920ebc4e473fa9782a83be69ac52eff1f38", "title": "Normative data for the Rey-Osterrieth and the Taylor complex figure tests in Quebec-French people.", "text": "The Rey-Osterrieth (ROCF) and Taylor (TCF) complex figure tests are widely used to assess visuospatial and constructional abilities as well as visual/non-verbal memory. Normative data adjusted to the cultural and linguistic reality of older Quebec-French individuals is still nonexistent for these tests. In this article, we report the results of two studies that aimed to establish normative data for Quebec-French people (aged at least 50 years) for the copy, immediate recall, and delayed recall trials of the ROCF (Study 1) and the TCF (Study 2). For both studies, the impact of age, education, and sex on test performance was examined. Moreover, the impact of copy time on test performance, the impact of copy score on immediate and delayed recall score, and the impact of immediate recall score on delayed recall performance were examined. Based on regression models, equations to calculate Z scores for copy and recall scores are provided for both tests."} {"_id": "0f7329cf0d388d4c5d5b94ee52ad2385bd2383ce", "title": "LIBSVX: A Supervoxel Library and Benchmark for Early Video Processing", "text": "Supervoxel segmentation has strong potential to be incorporated into early video analysis as superpixel segmentation has in image analysis. However, there are many plausible supervoxel methods and little understanding as to when and where each is most appropriate. Indeed, we are not aware of a single comparative study on supervoxel segmentation. To that end, we study seven supervoxel algorithms, including both off-line and streaming methods, in the context of what we consider to be a good supervoxel: namely, spatiotemporal uniformity, object/region boundary detection, region compression and parsimony. For the evaluation we propose a comprehensive suite of seven quality metrics to measure these desirable supervoxel characteristics. In addition, we evaluate the methods in a supervoxel classification task as a proxy for subsequent high-level uses of the supervoxels in video analysis. We use six existing benchmark video datasets with a variety of content-types and dense human annotations. Our findings have led us to conclusive evidence that the hierarchical graph-based (GBH), segmentation by weighted aggregation (SWA) and temporal superpixels (TSP) methods are the top-performers among the seven methods. They all perform well in terms of segmentation accuracy, but vary in regard to the other desiderata: GBH captures object boundaries best; SWA has the best potential for region compression; and TSP achieves the best undersegmentation error."} {"_id": "75bb1dfef08101c501fd44a663d50a7351dce5c9", "title": "A tutorial on simulation modeling in six dimensions", "text": "Simulation involves modeling and analysis of real-world systems. 
This tutorial will provide a broad overview of the modeling practice within simulation by introducing the reader to modeling choices found using six dimensions: abstraction, complexity, culture, engineering, environment, and process. Modeling can be a daunting task even for the seasoned modeling and simulation professional, and so my goal is to introduce modeling in two ways: 1) to use one specific type of model (Petri Net) as an anchor for cross-dimensional discussion, and 2) to provide a follow-up discussion, with additional non-Petri-Net examples, to clarify the extent of each dimension. For example, in the abstraction dimension, one must think about scale, refinement, and hierarchy when modeling regardless of the type of modeling language. The reader will come away with a broad framework within which to understand the possibilities of models and of modeling within the practice of simulation."} {"_id": "1c8fac651cda7abcf0140ce73252ae53be21ad2c", "title": "A Case Study of the New York City 2012-2013 Influenza Season With Daily Geocoded Twitter Data From Temporal and Spatiotemporal Perspectives", "text": "BACKGROUND\nTwitter has shown some usefulness in predicting influenza cases on a weekly basis in multiple countries and on different geographic scales. Recently, Broniatowski and colleagues suggested Twitter's relevance at the city-level for New York City. Here, we look to dive deeper into the case of New York City by analyzing daily Twitter data from temporal and spatiotemporal perspectives. Also, through manual coding of all tweets, we look to gain qualitative insights that can help direct future automated searches.\n\n\nOBJECTIVE\nThe intent of the study was first to validate the temporal predictive strength of daily Twitter data for influenza-like illness emergency department (ILI-ED) visits during the New York City 2012-2013 influenza season against other available and established datasets (Google search query, or GSQ), and second, to examine the spatial distribution and the spread of geocoded tweets as proxies for potential cases.\n\n\nMETHODS\nFrom the Twitter Streaming API, 2972 tweets were collected in the New York City region matching the keywords \"flu\", \"influenza\", \"gripe\", and \"high fever\". The tweets were categorized according to the scheme developed by Lamb et al. A new fourth category was added as an evaluator guess for the probability of the subject(s) being sick to account for strength of confidence in the validity of the statement. Temporal correlations were made for tweets against daily ILI-ED visits and daily GSQ volume. The best models were used for linear regression for forecasting ILI visits. A weighted, retrospective Poisson model with SaTScan software (n=1484), and a vector map were used for spatiotemporal analysis.\n\n\nRESULTS\nInfection-related tweets (R=.763) correlated better than GSQ time series (R=.683) for the same keywords and had a lower mean average percent error (8.4 vs 11.8) for ILI-ED visit prediction in January, the most volatile month of flu. SaTScan identified a primary outbreak cluster of high-probability infection tweets with a 2.74 relative risk ratio compared to medium-probability infection tweets at P=.001 in Northern Brooklyn, in a radius that includes Barclay's Center and the Atlantic Avenue Terminal.\n\n\nCONCLUSIONS\nWhile others have looked at weekly regional tweets, this study is the first to stress test Twitter for daily city-level data for New York City.
The extraction of personal testimonies from infection-related tweets suggests Twitter's strength, both qualitatively and quantitatively, for ILI-ED prediction compared to alternative daily datasets that mix in awareness-based data, such as GSQ. Additionally, granular Twitter data provide important spatiotemporal insights. A tweet vector-map may be useful for visualization of city-level spread when local gold standard data are otherwise unavailable."} {"_id": "7ea0c87ec8b34dac0a450ba78cd219e187632573", "title": "CatBoost: unbiased boosting with categorical features", "text": "This paper presents the key algorithmic techniques behind CatBoost, a new gradient boosting toolkit. Their combination leads to CatBoost outperforming other publicly available boosting implementations in terms of quality on a variety of datasets. Two critical algorithmic advances introduced in CatBoost are the implementation of ordered boosting, a permutation-driven alternative to the classic algorithm, and an innovative algorithm for processing categorical features. Both techniques were created to fight a prediction shift caused by a special kind of target leakage present in all currently existing implementations of gradient boosting algorithms. In this paper, we provide a detailed analysis of this problem and demonstrate that proposed algorithms solve it effectively, leading to excellent empirical results."} {"_id": "ea44a322c1822116d0c3159517a74cd2f55a98a5", "title": "Strabismus Recognition Using Eye-Tracking Data and Convolutional Neural Networks", "text": "Strabismus is one of the most common vision diseases, and it can cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for treating strabismus effectively. In contrast to manual diagnosis, automatic recognition can significantly reduce labor cost and increase diagnosis efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first exploited to record a subject's eye movements. A gaze deviation (GaDe) image is then proposed to characterize the subject's eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on a large image database called ImageNet. The outputs of the fully connected layers of the CNN are used as the GaDe image's features for strabismus recognition. A dataset containing eye-tracking data of both strabismic subjects and normal subjects is established for experiments. Experimental results demonstrate that the natural image features can be well transferred to represent eye-tracking data, and strabismus can be effectively recognized by our proposed method."} {"_id": "bc168059a400d665fde44281d9453b61e18c247e", "title": "Assessing forecast model performance in an ERP environment", "text": "Purpose \u2013 The paper aims to describe and apply a commercially oriented method of forecast performance measurement (cost of forecast error \u2013 CFE) and to compare the results with commonly adopted statistical measures of forecast accuracy in an enterprise resource planning (ERP) environment. Design/methodology/approach \u2013 The study adopts a quantitative methodology to evaluate the nine forecasting models (two moving average and seven exponential smoothing) of SAP\u2019s ERP system. Event management adjustment and fitted smoothing parameters are also assessed.
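The cost-of-forecast-error idea from the forecasting abstract above can be illustrated with one of the simplest models in the evaluated family, simple exponential smoothing. The asymmetric cost function and the unit costs below are hypothetical stand-ins for a commercial CFE measure, not the paper's actual definition.

```python
import numpy as np

def ses_forecast(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts."""
    f = np.empty_like(y, dtype=float)
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def cost_of_forecast_error(y, f, under_cost=5.0, over_cost=1.0):
    """Asymmetric commercial cost: under-forecasting (stockouts) is assumed
    to cost more per unit than over-forecasting (excess inventory)."""
    err = y - f
    return np.where(err > 0, err * under_cost, -err * over_cost).sum()

y = np.array([100, 104, 99, 110, 120, 115, 130], dtype=float)
f = ses_forecast(y)
mape = np.mean(np.abs((y[1:] - f[1:]) / y[1:])) * 100
print(f"MAPE: {mape:.1f}%   CFE: {cost_of_forecast_error(y[1:], f[1:]):.1f}")
```

Two models can tie on a statistical measure such as MAPE yet differ sharply on an asymmetric cost measure, which is the practical argument for CFE-style evaluation.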
SAP is the largest European software enterprise and the third largest in the world, with headquarters in Walldorf, Germany. Findings \u2013 The findings of the study support the adoption of CFE as a more relevant commercial decision-making measure than commonly applied statistical forecast measures. Practical implications \u2013 The findings of the study provide forecast model selection guidance to SAP\u2019s 12+ million worldwide users. However, the CFE metric can be adopted in any commercial forecasting situation. Originality/value \u2013 This study is the first published cost assessment of SAP\u2019s forecasting models."} {"_id": "2d13971cf59761b76fa5fc08e96f4e56c0c2d6dc", "title": "Robust Image Sentiment Analysis Using Progressively Trained and Domain Transferred Deep Networks", "text": "Sentiment analysis of online user generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems to predict political elections, measure economic indicators, and so on. Recently, social media users are increasingly using images and videos to express their opinions and share their experiences. Sentiment analysis of such large scale visual content can help better extract user sentiments toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content is complementary to textual sentiment analysis. Motivated by the needs in leveraging large scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNN). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN can achieve better performance in image sentiment analysis than competing algorithms."} {"_id": "6d75df4360a3d56514dcb775c832fdc572bab64b", "title": "Universals and cultural differences in the judgments of facial expressions of emotion.", "text": "We present here new evidence of cross-cultural agreement in the judgement of facial expression. Subjects in 10 cultures performed a more complex judgment task than has been used in previous cross-cultural studies. Instead of limiting the subjects to selecting only one emotion term for each expression, this task allowed them to indicate that multiple emotions were evident and the intensity of each emotion. Agreement was very high across cultures about which emotion was the most intense. The 10 cultures also agreed about the second most intense emotion signaled by an expression and about the relative intensity among expressions of the same emotion. However, cultural differences were found in judgments of the absolute level of emotional intensity."} {"_id": "89f9569a9118405156638151c2151b1814f9bb5e", "title": "The Influence of Background Music on the Behavior of Restaurant Patrons", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive.
We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org."} {"_id": "a6197444855f3edd1d7e1f6acf4ad6df345ef47b", "title": "Music Emotion Identification from Lyrics", "text": "Very large online music databases have recently been created by vendors, but they generally lack content-based retrieval methods. One exception is Allmusic.com which offers browsing by musical emotion, using human experts to classify several thousand songs into 183 moods. In this paper, machine learning techniques are used instead of human experts to extract emotions in music. The classification is based on a psychological model of emotion that is extended to 23 specific emotion categories. Our results for mining the lyrical text of songs for specific emotions are promising: the approach generates classification models that are human-comprehensible and produces results that correspond to commonsense intuitions about specific emotions. The lyric mining on which this paper focuses is one aspect of a line of research that combines different classifiers of musical emotion, such as acoustics and lyrical text."} {"_id": "eda501bb1e610098648667eb25273adc4a4dc98d", "title": "Fusing audio, visual and textual clues for sentiment analysis from multimodal content", "text": "A huge number of videos are posted every day on social media platforms such as Facebook and YouTube. This makes the Internet an unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis, which consists in harvesting sentiments from Web videos by demonstrating a model that uses audio, visual and textual modalities as sources of information. We used both feature- and decision-level fusion methods to merge affective information extracted from multiple modalities. A thorough comparison with existing works in this area is carried out throughout the paper, which demonstrates the novelty of our approach. Preliminary comparative experiments with the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%. \u00a9 2015 Elsevier B.V. All rights reserved."} {"_id": "695d53820f45a0f174e51eed537c6dd4068e13ae", "title": "The DEXMART hand: Mechatronic design and experimental evaluation of synergy-based control for human-like grasping", "text": "This paper summarizes recent activities carried out for the development of an innovative anthropomorphic robotic hand called the DEXMART Hand. The main goal of this research is to face the problems that affect current robotic hands by introducing suitable design solutions aimed at achieving simplification and cost reduction while possibly enhancing robustness and performance. While certain aspects of the DEXMART Hand development have been presented in previous papers, this paper is the first to give a comprehensive description of the final hand version and its use to replicate humanlike grasping. In this paper, particular emphasis is placed on the kinematics of the fingers and of the thumb, the wrist architecture, the dimensioning of the actuation system, and the final implementation of the position, force and tactile sensors.
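The feature- and decision-level fusion mentioned in the multimodal sentiment abstract above differ only in where the modalities are combined: before classification (concatenated features) or after it (combined scores). A minimal sketch follows; the fusion weights are hypothetical, chosen purely for illustration.

```python
import numpy as np

def feature_level_fusion(audio_f, visual_f, text_f):
    """Feature-level (early) fusion: concatenate modality features, then
    feed the joint vector to a single classifier."""
    return np.concatenate([audio_f, visual_f, text_f], axis=-1)

def decision_level_fusion(p_audio, p_visual, p_text, w=(0.25, 0.35, 0.40)):
    """Decision-level (late) fusion: weighted average of per-modality
    sentiment scores (weights here are made up, not tuned)."""
    return w[0] * p_audio + w[1] * p_visual + w[2] * p_text

fused_vec = feature_level_fusion(np.ones(8), np.ones(16), np.ones(4))  # 28-dim
fused_score = decision_level_fusion(0.7, 0.4, 0.9)                     # -> 0.675
```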
The paper also focuses on how these solutions have been integrated into the mechanical structure of this innovative robotic hand to enable precise force and displacement control of the whole system. Another important aspect is the lack of suitable control tools that severely limits the development of robotic hand applications. To address this issue, a new method for the observation of human hand behavior during interaction with common day-to-day objects by means of a 3D computer vision system is presented in this work together with a strategy for mapping human hand postures to the robotic hand. A simple control strategy based on postural synergies has been used to reduce the complexity of the grasp planning problem. As a preliminary evaluation of the DEXMART Hand\u2019s capabilities, this approach has been adopted in this paper to simplify and speed up the transfer of human actions to the robotic hand, showing its effectiveness in reproducing human-like grasping."} {"_id": "682adb1bf3998c8f77b3c02b712a33c3a5e65ae5", "title": "Coping with autism: a journey toward adaptation.", "text": "As the number of individuals with autism grows, it is critical for nurses in all settings to understand how autism influences the family unit, as they will likely interact with these children, the adults, and their families. The intent of this descriptive narrative study was to explore the experiences of families of individuals with autism as perceived by the mother. Through personal interviews, 16 mothers' perceptions of the impact of autism on the family unit during different stages of the life cycle were revealed through a constructivist lens. Pediatric nurses employed in acute care settings, community, and schools are poised to assess and support these families following diagnosis and throughout the child's life."} {"_id": "312182a3f20e842ef9b612e39f305e4def16ffc6", "title": "ATR-Vis: Visual and Interactive Information Retrieval for Parliamentary Discussions in Twitter", "text": "The worldwide adoption of Twitter turned it into one of the most popular platforms for content analysis as it serves as a gauge of the public\u2019s feeling and opinion on a variety of topics. This is particularly true of political discussions and lawmakers\u2019 actions and initiatives. Yet, one common but unrealistic assumption is that the data of interest for analysis is readily available in a comprehensive and accurate form. Data need to be retrieved, but due to the brevity and noisy nature of Twitter content, it is difficult to formulate user queries that match relevant posts that use different terminology without introducing a considerable volume of unwanted content. This problem is aggravated when the analysis must contemplate multiple and related topics of interest, for which comments are being concurrently posted. This article presents Active Tweet Retrieval Visualization (ATR-Vis), a user-driven visual approach for the retrieval of Twitter content applicable to this scenario. The method proposes a set of active retrieval strategies to involve an analyst in such a way that a major improvement in retrieval coverage and precision is attained with minimal user effort. ATR-Vis enables non-technical users to benefit from the aforementioned active learning strategies by providing visual aids to facilitate the requested supervision. This supports the exploration of the space of potentially relevant tweets, and affords a better understanding of the retrieval results.
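Stripped of its visualization layer, the loop underlying ATR-Vis resembles pool-based active learning. The sketch below, assuming scikit-learn and plain uncertainty sampling, is a generic illustration of that loop, not the specific retrieval strategies proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_retrieval(X, y, seed_idx, rounds=5, queries_per_round=3):
    """Pool-based active retrieval with uncertainty sampling.

    X: feature matrix for the whole tweet pool; y: relevance labels queried
    lazily (a plain array stands in for the analyst here). seed_idx must
    contain at least one relevant and one irrelevant example.
    """
    labeled = set(seed_idx)
    clf = None
    for _ in range(rounds):
        idx = sorted(labeled)
        clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        pool = np.array([i for i in range(len(X)) if i not in labeled])
        if pool.size == 0:
            break
        proba = clf.predict_proba(X[pool])[:, 1]
        # Ask the "analyst" about the tweets the classifier is least sure of.
        uncertain = pool[np.argsort(np.abs(proba - 0.5))[:queries_per_round]]
        labeled.update(uncertain.tolist())
    return clf, labeled
```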
We evaluate our approach in scenarios in which the task is to retrieve tweets related to multiple parliamentary debates within a specific time span. We collected two Twitter datasets, one associated with debates in the Canadian House of Commons during a particular week in May 2014, and another associated with debates in the Brazilian Federal Senate during a selected week in May 2015. The two use cases illustrate the effectiveness of ATR-Vis for the retrieval of relevant tweets, while quantitative results show that our approach achieves high retrieval quality with a modest amount of supervision. Finally, we evaluated our tool with three external users who search social media as part of their professional work."} {"_id": "d1d5e9ab7865e26bb2d40d8476ccc9db46c9b79a", "title": "Multilevel thresholding for satellite image segmentation with moth-flame based optimization", "text": "In this paper, an improved version of the moth-flame optimization (MFO) algorithm for image segmentation is proposed to effectively enhance the optimal multilevel thresholding of satellite images. Multilevel thresholding is one of the most widely used methods for image segmentation, as it has efficient processing ability and easy implementation. However, as the number of threshold values increases, it becomes computationally expensive. To overcome this problem, a nature-inspired meta-heuristic, the multilevel thresholding moth-flame optimization algorithm (MTMFO), was developed. The improved method proposed herein was tested on various satellite images against five existing methods for solving multilevel satellite image thresholding problems: the genetic algorithm (GA), the differential evolution (DE) algorithm, the artificial bee colony (ABC) algorithm, the particle swarm optimization (PSO) algorithm, and the moth-flame optimization (MFO) algorithm. Experimental results indicate that the MTMFO more effectively and accurately identifies the optimal threshold values with respect to the other state-of-the-art optimization algorithms."} {"_id": "9e2ccaf41e32a33460c8b1c38a328188af9e353f", "title": "Introduction of optical camera communication for Internet of vehicles (IoV)", "text": "In this paper, we introduce optical camera communication (OCC) as a new technology for the Internet of vehicles (IoV). OCC has already been established in many areas and can also be used in vehicular communication. There has been some research in the field of OCC-based vehicular communication, but it is not yet mature. Here, we propose a new system which will provide great advantages to the vehicular system: a combination of OCC and cloud-based communication for IoV, which will ensure a secure and stable system. We have also proposed an algorithm to provide cloud-based IoV services."} {"_id": "43f24c56565d0acf417ef5712f2d2cd9635fd9cb", "title": "Topic hierarchy construction for the organization of multi-source user generated contents", "text": "User generated contents (UGCs) carry a huge amount of high quality information. However, the information overload and diversity of UGC sources limit their potential uses. In this research, we propose a framework to organize information from multiple UGC sources by a topic hierarchy which is automatically generated and updated using the UGCs. We explore the unique characteristics of UGCs like blogs, cQAs, microblogs, etc., and introduce a novel scheme to combine them.
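Multilevel thresholding, as used in the MTMFO record above, typically maximizes an Otsu-style between-class variance over k threshold values. The sketch below computes that objective from a 256-bin histogram; plain random search stands in for the moth-flame optimizer purely to keep the example short, so it is an illustration of the problem, not of the MTMFO algorithm itself.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective: weighted variance of class means around the
    global mean, computed from a 256-bin grayscale histogram."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    mu_total = (p * levels).sum()
    edges = [0, *sorted(thresholds), len(p)]
    score = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            score += w * (mu - mu_total) ** 2
    return score

def random_search_thresholds(hist, k=3, iters=5000, seed=0):
    """Stand-in optimizer: random search instead of moth-flame search."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(iters):
        t = np.sort(rng.choice(np.arange(1, 255), size=k, replace=False))
        s = between_class_variance(hist, t)
        if s > best_score:
            best, best_score = t, s
    return best, best_score
```

The exhaustive search space grows combinatorially with k, which is why population-based meta-heuristics such as MFO are attractive here.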
We also propose a graph-based method to enable incremental update of the generated topic hierarchy. Using the hierarchy, users can easily obtain a comprehensive, in-depth and up-to-date picture of their topics of interest. The experimental results demonstrate how information from multiple heterogeneous sources improves the resultant topic hierarchies. They also show that the proposed method achieves better F1 scores in hierarchy generation than state-of-the-art methods."} {"_id": "aa94627e3affcccf9421546dcff18a5d01787aa7", "title": "Effectiveness of a multi-layer foam dressing in preventing sacral pressure ulcers for the early acute care of patients with a traumatic spinal cord injury: comparison with the use of a gel mattress.", "text": "Individuals with spinal cord injury are at risk of sacral pressure ulcers due to, among other reasons, prolonged immobilisation. The effectiveness of a multi-layer foam dressing installed pre-operatively in reducing sacral pressure ulcer occurrence in spinal cord injured patients was compared to that of using a gel mattress, and stratified analyses were performed on patients with complete tetraplegia and paraplegia. Socio-demographic and clinical data were collected from 315 patients admitted to a level-I trauma centre following a spinal cord injury between April 2010 and March 2016. Upon arrival at the emergency room and until surgery, patients were transferred on a foam stretcher pad with a viscoelastic polymer gel mattress (before 1 October 2014) or received a multi-layer foam dressing applied to their sacral-coccygeal area (after 1 October 2014). The occurrence of sacral pressure ulcer during acute hospitalisation was similar irrespective of whether patients received the dressing or the gel mattress. It was found that 82% of patients with complete tetraplegia receiving the preventive dressing developed sacral ulcers as compared to only 36% of patients using the gel mattress. Although multi-layer dressings were suggested to improve skin protection and decrease pressure ulcer occurrence in critically ill patients, such preventive dressings are not superior to gel mattresses in spinal cord injured patients and should be used with precaution, especially in complete tetraplegia."} {"_id": "3146ffeed483cb94d474315b0f9ed54505834032", "title": "Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images", "text": "Recovering the 3D representation of an object from single-view or multi-view RGB images by deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse multiple feature maps extracted from input images sequentially. However, when given the same set of input images with different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. Then, a context-aware fusion module is introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output.
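The context-aware fusion step described in the Pix2Vox record above can be summarized as a per-voxel softmax over per-view score maps. The numpy sketch below shows only that fusion arithmetic; the random arrays stand in for the coarse volumes and the learned scores that the real network would produce.

```python
import numpy as np

def fuse_coarse_volumes(volumes, scores):
    """Softmax-weighted fusion of per-view coarse reconstructions.

    volumes, scores: arrays of shape (n_views, D, H, W). The per-voxel
    softmax over views lets each part of the object come from whichever
    view reconstructed it best.
    """
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)      # softmax across views
    return (w * volumes).sum(axis=0)          # fused (D, H, W) volume

rng = np.random.default_rng(1)
vols = rng.random((4, 32, 32, 32))     # coarse volumes from 4 input views
scores = rng.normal(size=vols.shape)   # score maps (learned, in the paper)
fused = fuse_coarse_volumes(vols, scores)
```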
Experimental results on the ShapeNet and Pascal 3D+ benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-art methods by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. The experiments on ShapeNet unseen 3D categories have shown the superior generalization abilities of our method."} {"_id": "50dea03d4feb1797f1d5c260736e1cf7ad6d45ca", "title": "Rapid growing fibroadenoma in an adolescent.", "text": "INTRODUCTION\nWe report a case of rapidly growing fibroadenoma.\n\n\nPATIENT\nA 13-year-old girl consulted the outpatient clinic regarding a left breast mass. The mass was diagnosed as fibroadenoma by clinical examinations, and the patient was carefully monitored. The mass enlarged rapidly with each menses and showed a 50% increase in volume four months later. Lumpectomy was performed. The tumor was histologically diagnosed as fibroadenoma organized type and many glandular epithelial cells had positive immunohistochemical staining for anti-estrogen receptor antibody in the nuclei.\n\n\nCONCLUSION\nThe estrogen sensitivity of the tumor could account for the rapid growth."} {"_id": "e8a31f498a5326885eb09298d85bb4bede57dd0d", "title": "Understanding Online Consumer Stickiness in E-commerce Environment: A Relationship Formation Model", "text": "Consumers with online shopping experience often stick to particular websites. Does this mean that long-term relationships have formed between consumers and these websites? To address this question, this paper analyzes the belief and attitude factors influencing online consumer stickiness intention. Based on the belief-attitude-intention relationships implied in the TPB (Theory of Planned Behavior), and drawing on Expectation-Confirmation theory and Commitment-Trust theory, we developed a conceptual model of online consumer stickiness. Six research hypotheses derived from this model were empirically validated using a survey of online shoppers, with SEM (Structural Equation Modeling) used as the data analysis method. The results suggest that online consumer stickiness intention is influenced by consumers\u2019 commitment to the website (e-vendors) and their overall satisfaction with the transaction process, and that commitment\u2019s effect is stronger; in turn, both consumers\u2019 commitment and overall satisfaction are significantly influenced by consumers\u2019 ongoing trust; confirmation has a significant effect on overall satisfaction but does not have the same effect on commitment. The findings show that long-term relationships between consumers and particular websites do indeed form during repeated online consumption."} {"_id": "6e3ee43e976f66d991034f921272aab0830953b0", "title": "GFlink: An In-Memory Computing Architecture on Heterogeneous CPU-GPU Clusters for Big Data", "text": "The increasing main memory capacity and the explosion of big data have fueled the development of in-memory big data management and processing. By offering an efficient in-memory parallel execution model which can eliminate disk I/O bottlenecks, existing in-memory cluster computing platforms (e.g., Flink and Spark) have already been proven to be outstanding platforms for big data processing. However, these platforms are merely CPU-based systems. This paper proposes GFlink, an in-memory computing architecture on heterogeneous CPU-GPU clusters for big data. Our proposed architecture extends the original Flink from CPU clusters to heterogeneous CPU-GPU clusters, greatly improving the computational power of Flink.
Furthermore, we have proposed a programming framework based on Flink's abstract model, i.e., DataSet (DST), hiding the programming complexity of GPUs behind simple and familiar high-level interfaces. To achieve high performance and good load balance, an efficient JVM-GPU communication strategy, a GPU cache scheme, and an adaptive locality-aware scheduling scheme for three-stage pipelining execution are proposed. Extensive experimental results indicate that the high computational power of GPUs can be efficiently utilized, and that the implementation on GFlink outperforms that on the original CPU-based Flink."} {"_id": "80d8cf86ab8359843d9e74b67a39fb5f08225226", "title": "A distributed algorithm for 2D shape duplication with smart pebble robots", "text": "We present our digital fabrication technique for manufacturing active objects in 2D from a collection of smart particles. Given a passive model of the object to be formed, we envision submerging this original in a vat of smart particles, executing the new shape duplication algorithm described in this paper, and then brushing aside any extra modules to reveal both the original object and an exact copy, side-by-side. Extensions to the duplication algorithm can be used to create a magnified version of the original or multiple copies of the model object. Our novel duplication algorithm uses a distributed approach to identify the geometric specification of the object being duplicated and then forms the duplicate from spare modules in the vicinity of the original. This paper details the duplication algorithm and the features that make it robust to (1) an imperfect packing of the modules around the original object; (2) missing communication links between neighboring modules; and (3) missing modules in the vicinity of the duplicate object(s). We show that the algorithm requires O(1) storage space per module and that the algorithm exchanges O(n) messages per module. Finally, we present experimental results from 60 hardware trials and 150 simulations. These experiments demonstrate the algorithm working correctly and reliably despite broken communication links and missing modules."} {"_id": "67dd8ca2dbbd1e6f0eb510c49e53e5bad72940c6", "title": "A Convolutional Learning System for Object Classification in 3-D Lidar Data", "text": "In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting, which may be helpful for rotation invariance, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN.
CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment."} {"_id": "1a1f7f3c8274502b661db8ee09d31f33de1e3078", "title": "Satellite remote sensing of particulate matter and air quality assessment over global cities", "text": "Using 1 year of aerosol optical thickness (AOT) retrievals from the MODerate resolution Imaging Spectro-radiometer (MODIS) on board NASA\u2019s Terra and Aqua satellites along with ground measurements of PM2.5 mass concentration, we assess particulate matter air quality at 26 locations in urban areas across Sydney, Delhi, Hong Kong, New York City and Switzerland. An empirical relationship between AOT and PM2.5 mass is obtained and results show that there is an excellent correlation between the bin-averaged daily mean satellite and ground-based values with a linear correlation coefficient of 0.96. Using meteorological and other ancillary datasets, we assess the effects of wind speed, cloud cover, and mixing height (MH) on particulate matter (PM) air quality and conclude that these data are necessary to further apply satellite data for air quality research. Our study clearly demonstrates that satellite-derived AOT is a good surrogate for monitoring PM air quality over the earth. However, our analysis shows that the PM2.5\u2013AOT relationship strongly depends on aerosol concentrations, ambient relative humidity (RH), fractional cloud cover and height of the mixing layer. The highest correlation between MODIS AOT and PM2.5 mass is found under clear sky conditions with less than 40\u201350% RH and when atmospheric MH ranges from 100 to 200m. Future remote sensing sensors such as Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) that have the capability to provide vertical distribution of aerosols will further enhance our ability to monitor and forecast air pollution. This study is among the first to examine the relationship between satellite and ground measurements over several global locations. \u00a9 2006 Elsevier Ltd. All rights reserved."} {"_id": "864faa99ea2ffc161aaba84263cdfd51333f3f74", "title": "Characteristics and variability of structural networks derived from diffusion tensor imaging", "text": "Structural brain networks were constructed based on diffusion tensor imaging (DTI) data of 59 young healthy male adults. The networks had 68 nodes, derived from FreeSurfer parcellation of the cortical surface. By means of streamline tractography, the edge weight was defined as the number of streamlines between two nodes normalized by their mean volume. Specifically, two weighting schemes were adopted by considering various biases from fiber tracking. The weighting schemes were tested for possible bias toward the physical size of the nodes. A novel thresholding method was proposed using the variance of number of streamlines in fiber tracking. The backbone networks were extracted and various network analyses were applied to investigate the features of the binary and weighted backbone networks. For weighted networks, a high correlation was observed between nodal strength and betweenness centrality. Despite similar small-worldness features, binary networks and weighted networks are distinctive in many aspects, such as modularity and nodal betweenness centrality. Inter-subject variability was examined for the weighted networks, along with the test-retest reliability from two repeated scans on 44 of the 59 subjects.
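The weighted-network metrics discussed in the DTI abstract above, nodal strength and betweenness centrality, are straightforward to compute with networkx. The toy graph below uses made-up region names and weights; note the weight-to-distance inversion needed before computing shortest-path-based centrality on a network where larger weights mean stronger connections.

```python
import networkx as nx

# Toy "structural network": weights stand in for normalized streamline counts.
G = nx.Graph()
G.add_weighted_edges_from([
    ("precentral", "postcentral", 8.2),
    ("precentral", "superiorfrontal", 3.1),
    ("postcentral", "superiorparietal", 5.4),
    ("superiorfrontal", "superiorparietal", 1.7),
])

strength = dict(G.degree(weight="weight"))       # nodal strength
for u, v, d in G.edges(data=True):               # stronger edge = shorter path
    d["dist"] = 1.0 / d["weight"]
betweenness = nx.betweenness_centrality(G, weight="dist")
print(strength, betweenness)
```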
The inter-/intra-subject variability of weighted networks was discussed at three levels: edge weights, local metrics, and global metrics. The variance of edge weights can be very large. Although local metrics show less variability than the edge weights, they still have considerable amounts of variability. Weighting scheme one, which scales the number of streamlines by their lengths, demonstrates stable intra-class correlation coefficients against thresholding for global efficiency, clustering coefficient and diversity. The intra-class correlation analysis suggests the current approach of constructing weighted networks has a reasonably high reproducibility for most global metrics."} {"_id": "d3fda1e44c8ba36c58b800c9b7a0e9fe7ddb6242", "title": "Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and $k$-Means Clustering", "text": "In this letter, we propose a novel technique for unsupervised change detection in multitemporal satellite images using principal component analysis (PCA) and k-means clustering. The difference image is partitioned into h \u00d7 h nonoverlapping blocks. S (S \u2264 h\u00b2) orthonormal eigenvectors are extracted through PCA of the h \u00d7 h nonoverlapping block set to create an eigenvector space. Each pixel in the difference image is represented by an S-dimensional feature vector, which is the projection of the h \u00d7 h difference image data onto the generated eigenvector space. The change detection is achieved by partitioning the feature vector space into two clusters using k-means clustering with k = 2 and then assigning each pixel to one of the two clusters by using the minimum Euclidean distance between the pixel's feature vector and the mean feature vectors of the clusters. Experimental results confirm the effectiveness of the proposed approach."} {"_id": "eabe277992183a647de0b84882d348ddfc54a983", "title": "Sexuality of male-to-female transsexuals.", "text": "Blanchard's (J Nerv Ment Dis 177:616-623, 1989) theory of autogynephilia suggests that male-to-female transsexuals can be categorized into different types based on their sexuality. Little previous research has compared the sexuality of male-to-female transsexuals to biological females. The present study examined 15 aspects of sexuality among a non-clinical sample of 234 transsexuals and 127 biological females, using either an online or a paper questionnaire. The results showed that, overall, transsexuals tended to place more importance on partner's physical attractiveness and reported higher scores on Blanchard's Core Autogynephilia Scale than biological females. In addition, transsexuals classified as autogynephilic scored significantly higher on Attraction to Feminine Males, Core Autogynephilia, Autogynephilic Interpersonal Fantasy, Fetishism, Preference for Younger Partners, Interest in Uncommitted Sex, Importance of Partner Physical Attractiveness, and Attraction to Transgender Fiction than other transsexuals and biological females. In accordance with Blanchard's theory, autogynephilia measures were positively correlated to Sexual Attraction to Females among transsexuals.
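The PCA-and-k-means change detection scheme in the abstract above maps directly onto numpy and scikit-learn. The sketch below follows the described pipeline (eigenvector space from h x h blocks, S-dimensional projections, k-means with k = 2); the particular block size, value of S, and the reflect padding at image borders are implementation assumptions of this example.

```python
import numpy as np
from sklearn.cluster import KMeans

def change_map(diff_img, h=4, S=9, seed=0):
    """PCA + k-means change detection on a difference image."""
    H, W = diff_img.shape
    # Eigenvector space from h x h non-overlapping blocks (S <= h*h).
    blocks = np.array([diff_img[i:i + h, j:j + h].ravel()
                       for i in range(0, H - h + 1, h)
                       for j in range(0, W - h + 1, h)])
    blocks = blocks - blocks.mean(axis=0)
    _, _, Vt = np.linalg.svd(blocks, full_matrices=False)
    basis = Vt[:S].T                               # (h*h, S)
    # S-dimensional feature: projection of each pixel's h x h neighborhood.
    pad = h // 2
    P = np.pad(diff_img, pad, mode="reflect")
    feats = np.array([P[i:i + h, j:j + h].ravel() @ basis
                      for i in range(H) for j in range(W)])
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(feats)
    return labels.reshape(H, W)   # one cluster = changed, the other = unchanged

cm = change_map(np.random.default_rng(0).random((64, 64)))
```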
In contrast to Blanchard's theory, however, those transsexuals classified as autogynephilic scored higher on average on Sexual Attraction to Males than those classified as non-autogynephilic, and no transsexuals classified as autogynephilic reported asexuality."} {"_id": "50124f90e90ab12ed5c66fa1fadd5c536d2fc695", "title": "A study of age and gender seen through mobile phone usage patterns in Mexico", "text": "Mobile phone usage provides a wealth of information, which can be used to better understand the demographic structure of a population. In this paper we focus on the population of Mexican mobile phone users. Our first contribution is an observational study of mobile phone usage according to gender and age groups. We were able to detect significant differences in phone usage among different subgroups of the population. Our second contribution is to provide a novel methodology to predict demographic features (namely age and gender) of unlabeled users by leveraging individual calling patterns, as well as the structure of the communication graph. We provide details of the methodology and show experimental results on a real world dataset that involves millions of users."} {"_id": "48af5c507d82f7270b7ebe26365b4a56bdba3252", "title": "Bitcoin Market Return and Volatility Forecasting Using Transaction Network Flow Properties", "text": "Bitcoin, as the foundation of a secure electronic payment system, has drawn broad interest from researchers in recent years. In this paper, we analyze a comprehensive Bitcoin transaction dataset and investigate the interrelationship between the flow of Bitcoin transactions and its price movement. Using network theory, we examine a few complexity measures of the Bitcoin transaction flow networks, and we model the joint dynamic relationship between these complexity measures and Bitcoin market variables such as return and volatility. We find that a particular complexity measure of the Bitcoin transaction network flow is significantly correlated with the Bitcoin market return and volatility. More specifically, we document that the residual diversity or freedom of Bitcoin network flow scaled by the total system throughput can significantly improve the predictability of Bitcoin market return and volatility."} {"_id": "63ef86bf94d58343f1484bf415911843da8fd612", "title": "A performance prediction model for the CUDA GPGPU platform", "text": "The significant growth in computational power of modern Graphics Processing Units (GPUs), coupled with the advent of general purpose programming environments like NVIDIA's CUDA, has seen GPUs emerging as a very popular parallel computing platform. Until recently, there has not been a performance model for GPGPUs. The absence of such a model makes it difficult to definitively assess the suitability of the GPU for solving a particular problem and is a significant impediment to the mainstream adoption of GPUs as a massively parallel (super)computing platform. In this paper we present a performance prediction model for the CUDA GPGPU platform. This model encompasses the various facets of the GPU architecture like scheduling, memory hierarchy, and pipelining among others. We also perform experiments that demonstrate the effects of various memory access strategies. The proposed model can be used to analyze pseudo code for a CUDA kernel to obtain a performance estimate, in a way that is similar to performing asymptotic analysis.
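A full analytical GPU model like the one in the CUDA abstract above accounts for scheduling, the memory hierarchy, and pipelining; the toy estimate below captures only the coarsest intuition, namely that a kernel is bounded by compute throughput or memory bandwidth, whichever is slower. It is a roofline-style stand-in, not the paper's model, and the hardware numbers are made up.

```python
def kernel_time_estimate(flops, bytes_moved, peak_gflops=900.0,
                         mem_bw_gbs=150.0, launch_overhead_us=5.0):
    """Toy roofline-style estimate: max of compute time and memory time,
    plus a fixed launch overhead. All constants are illustrative."""
    t_compute = flops / (peak_gflops * 1e9)
    t_memory = bytes_moved / (mem_bw_gbs * 1e9)
    return max(t_compute, t_memory) + launch_overhead_us * 1e-6

# Naive NxN matrix multiply: 2*N^3 flops; 3 matrices of N^2 floats touched.
N = 1024
print(f"{kernel_time_estimate(2 * N**3, 3 * N * N * 4) * 1e3:.2f} ms")
```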
We illustrate the usage of our model and its accuracy with three case studies: matrix multiplication, list ranking, and histogram generation."} {"_id": "3b94d7407db24e630e5cc4e886f16755a76bd583", "title": "Unified motion control for dynamic quadrotor maneuvers demonstrated on slung load and rotor failure tasks", "text": "In recent years impressive results have been presented illustrating the potential of quadrotors to solve challenging tasks. Generally, the derivation of the controllers involves complex analytical manipulation of the dynamics and is very specific to the task at hand. In addition, most approaches construct a trajectory and then design a stabilizing controller in a separate step, whereas a fully optimal solution requires finding both simultaneously. In this paper, a generalized approach is presented using an iterative optimal control algorithm. A series of complex tasks are thus solved using the same algorithm without the need for manual manipulation of the system dynamics, heuristic simplifications, or manual trajectory generation. First, aggressive maneuvers are performed by requiring the quadrotor to pass with a slung load through a window not high enough for the load to pass while hanging straight down. Second, go-to-goal tasks with single and double rotor failure are demonstrated. The adaptability and applicability of this unified approach to such diverse tasks on a nonlinear, underactuated, constrained and, in the case of the slung load, hybrid quadrotor system is thus shown."} {"_id": "3212a1d0bd6c90a16ffffc032328fd819cb4c92f", "title": "Will 5G See its Blind Side? Evolving 5G for Universal Internet Access", "text": "The Internet has shown itself to be a catalyst for economic growth and social equity, but its potency is thwarted by the fact that it remains off limits for the vast majority of human beings. Mobile phones\u2014the fastest growing technology in the world, now reaching around 80% of humanity\u2014can enable universal Internet access if the coverage problems that have historically plagued previous cellular architectures (2G, 3G, and 4G) can be resolved. These conventional architectures have not been able to sustain universal service provisioning since they depend on having enough users per cell for their economic viability and thus are not well suited to rural areas (which are by definition sparsely populated). The new generation of mobile cellular technology (5G), currently in a formative phase and expected to be finalized around 2020, is aimed at orders of magnitude performance enhancement. 5G offers a clean slate to network designers and can be molded into an architecture also amenable to universal Internet provisioning. Keeping in mind the great social benefits of democratizing Internet and connectivity, we believe that the time is ripe for emphasizing universal Internet provisioning as an important goal on the 5G research agenda. In this paper, we investigate the opportunities and challenges in utilizing 5G for global access to the Internet for all (GAIA).
We have also identified the major technical issues involved in a 5G-based GAIA solution and have set up a future research agenda by defining open research problems."} {"_id": "a4317a18de3fe66121bbb58eb2e9b8d677994bf1", "title": "Multi-hop Communication in the Uplink for LPWANs", "text": "Low-Power Wide Area Networks (LPWANs) have arisen as a promising communication technology for supporting Internet of Things (IoT) services due to their low power operation, wide coverage range, low cost and scalability. However, most LPWAN solutions like SIGFOX or LoRaWAN rely on star topology networks, where stations (STAs) transmit directly to the gateway (GW), which often leads to rapid battery depletion in STAs located far from it. In this work, we analyze the impact of uplink multi-hop communication on the energy consumption of LPWANs, allowing STAs to transmit data packets at lower power levels and higher data rates to closer parent STAs, thereby reducing their energy consumption. To that aim, we introduce the Distance-Ring Exponential Stations Generator (DRESG) framework, designed to evaluate the performance of the so-called optimal-hop routing model, which establishes optimal routing connections in terms of energy efficiency, aiming to balance the consumption among all the STAs in the network. Results show that enabling such multi-hop connections leads to longer network lifetimes, significantly reducing the bottleneck consumption in LPWANs with up to thousands of STAs. These results point to uplink multi-hop communication as a promising routing alternative for extending the lifetime of LPWAN deployments."} {"_id": "2421eaf85e274e59508c87df80bc4242edae7168", "title": "Toward automated segmentation of the pathological lung in CT", "text": "Conventional methods of lung segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on scans with lungs that contain dense pathologies, and such scans occur frequently in clinical practice. We propose a segmentation-by-registration scheme in which a scan with normal lungs is elastically registered to a scan containing pathology. When the resulting transformation is applied to a mask of the normal lungs, a segmentation is found for the pathological lungs. As a mask of the normal lungs, a probabilistic segmentation built up out of the segmentations of 15 registered normal scans is used. To refine the segmentation, voxel classification is applied to a certain volume around the borders of the transformed probabilistic mask. Performance of this scheme is compared to that of three other algorithms: a conventional, a user-interactive and a voxel classification method. The algorithms are tested on 10 three-dimensional thin-slice computed tomography volumes containing high-density pathology. The resulting segmentations are evaluated by comparing them to manual segmentations in terms of volumetric overlap and border positioning measures. The conventional and user-interactive methods that start off with thresholding techniques fail to segment the pathologies and are outperformed by both voxel classification and the refined segmentation-by-registration.
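The uplink trade-off analyzed in the LPWAN record above can be made concrete with a toy energy model: transmit power must overcome path loss, which grows superlinearly with distance, so several short hops can radiate less total power than one long hop. The log-distance path-loss model and every constant below are assumptions for illustration, not values from the paper.

```python
import math

def tx_power_dbm(distance_m, sensitivity_dbm=-120.0, pl0_db=40.0, exponent=3.0):
    """Minimum transmit power so the link closes under log-distance path loss."""
    path_loss = pl0_db + 10.0 * exponent * math.log10(max(distance_m, 1.0))
    return sensitivity_dbm + path_loss

# One 1000 m hop versus two 500 m hops (total radiated power, toy comparison).
direct = 10 ** (tx_power_dbm(1000) / 10)        # ~10.0 mW
two_hop = 2 * 10 ** (tx_power_dbm(500) / 10)    # ~2.5 mW
print(f"direct: {direct:.2f} mW, two hops: {two_hop:.2f} mW")
```

Real deployments must also weigh relaying overhead and duty-cycle limits, which is what frameworks like DRESG are built to evaluate.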
The refined registration scheme enjoys the additional benefit that it does not require pathological (hand-segmented) training data."} {"_id": "35d6ef85edd4caf8ce8b2d6ba09594f7d1c29d01", "title": "Animal monitoring with unmanned aerial vehicle-aided wireless sensor networks", "text": "In this paper, we focus on an application of wireless sensor networks (WSNs) with an unmanned aerial vehicle (UAV). The aim of the application is to detect the locations of endangered species in large-scale wildlife areas or monitor the movement of animals without any attached devices. We first define the mathematical model of the animal monitoring problem in terms of the value of information (VoI) and rewards. We design a network model including clusters of sensor nodes and a single UAV that acts as a mobile sink and visits the clusters. We propose a path planning approach based on a Markov decision process (MDP) model that maximizes the VoI while reducing message delays. We used a real-world movement dataset of zebras. Simulation results show that our approach outperforms greedy and random heuristics as well as the path planning based on the solution of the traveling salesman problem."} {"_id": "18d7a36d953480adba60c21e4b2a3f3208fedc77", "title": "HERB: a home exploring robotic butler", "text": "We describe the architecture, algorithms, and experiments with HERB, an autonomous mobile manipulator that performs useful manipulation tasks in the home. We present new algorithms for searching for objects, learning to navigate in cluttered dynamic indoor scenes, recognizing and registering objects accurately in high clutter using vision, manipulating doors and other constrained objects using caging grasps, grasp planning and execution in clutter, and manipulation on pose and torque constraint manifolds. We also present numerous severe real-world test results from the integration of these algorithms into a single mobile manipulator."} {"_id": "be9336fd5642e57b6c147c6eb97612b052fd43d4", "title": "Projected texture stereo", "text": "Passive stereo vision is widely used as a range sensing technology in robots, but suffers from dropouts: areas of low texture where stereo matching fails. By supplementing a stereo system with a strong texture projector, dropouts can be eliminated or reduced. This paper develops a practical stereo projector system, first by finding good patterns to project in the ideal case, then by analyzing the effects of system blur and phase noise on these patterns, and finally by designing a compact projector that is capable of good performance out to 3m in indoor scenes.
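The dropout failure mode motivating the projected-texture record above is easy to reproduce with the most naive stereo matcher. In the sketch below, a sum-of-absolute-differences block matcher picks the best disparity per pixel; in texture-poor regions many disparities tie at nearly the same cost, which is exactly the ambiguity a strong projected pattern removes. This is a generic textbook matcher, not the paper's system.

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=16):
    """Naive SAD block-matching stereo on rectified grayscale images."""
    H, W = left.shape
    r = block // 2
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(r, H - r):
        for x in range(r + max_disp, W - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(0)
left = rng.random((48, 96)).astype(np.float32)   # strong synthetic texture
right = np.roll(left, -5, axis=1)                # shifted by 5 pixels
print(sad_disparity(left, right)[24, 40])        # typically prints 5
```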
The system has been implemented and has excellent depth precision and resolution, especially in the range out to 1.5m."} {"_id": "cf2a86994505a96c19c73dbbaa4a39801bdee088", "title": "Real-time 3D object pose estimation and tracking for natural landmark based visual servo", "text": "A real-time solution for estimating and tracking the 3D pose of a rigid object is presented for image-based visual servo with natural landmarks. The many state-of-the-art technologies that are available for recognizing the 3D pose of an object in a natural setting are not suitable for real-time servo due to their time lags. This paper demonstrates that a real-time solution for 3D pose estimation becomes feasible by combining a fast tracker such as KLT [7] [8] with a method of determining the 3D coordinates of tracking points on an object at the time of SIFT based tracking point initiation, assuming that a 3D geometric model with SIFT description of an object is known a priori. Keeping track of tracking points with KLT, removing the tracking point outliers automatically, and reinitiating the tracking points using SIFT once deteriorated, the 3D pose of an object can be estimated and tracked in real-time. This method can be applied to both mono and stereo camera based 3D pose estimation and tracking. The former guarantees higher frame rates with about 1 ms of local pose estimation, while the latter assures more precise pose results but with about 16 ms of local pose estimation. The experimental investigations have shown the effectiveness of the proposed approach with real-time performance."} {"_id": "0674c1e2fd78925a1baa6a28216ee05ed7b48ba0", "title": "Object Recognition from Local Scale-Invariant Features", "text": "An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds."} {"_id": "12cedcc79bec6403ffab5d4c85a1bf7500683eca", "title": "Algorithmic Complexity in Coding Theory and the Minimum Distance Problem", "text": "We start with an overview of algorithmic complexity problems in coding theory. We then show that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding decision problem is NP-complete. This constitutes a proof of the conjecture of Berlekamp, McEliece, and van Tilborg, dating back to 1978.
Extensions and applications of this result to other problems in coding theory are discussed."} {"_id": "10ff61e6c2a99d8aafcf1706f3e88c7e2dfec188", "title": "Nonparametric belief propagation", "text": "Continuous quantities are ubiquitous in models of real-world phenomena, but are surprisingly difficult to reason about automatically. Probabilistic graphical models such as Bayesian networks and Markov random fields, and algorithms for approximate inference such as belief propagation (BP), have proven to be powerful tools in a wide range of applications in statistics and artificial intelligence. However, applying these methods to models with continuous variables remains a challenging task. In this work we describe an extension of BP to continuous variable models, generalizing particle filtering, and Gaussian mixture filtering techniques for time series to more complex models. We illustrate the power of the resulting nonparametric BP algorithm via two applications: kinematic tracking of visual motion and distributed localization in sensor networks."} {"_id": "4873e56ce8bfa3d8edaa8cdc28ea3aff54b3e87c", "title": "Feature-Enhanced Probabilistic Models for Diffusion Network Inference", "text": "Cascading processes, such as disease contagion, viral marketing, and information diffusion, are a pervasive phenomenon in many types of networks. The problem of devising intervention strategies to facilitate or inhibit such processes has recently received considerable attention. However, a major challenge is that the underlying network is often unknown. In this paper, we revisit the problem of inferring latent network structure given observations from a diffusion process, such as the spread of trending topics in social media. We define a family of novel probabilistic models that can explain recurrent cascading behavior, and take into account not only the time differences between events but also a richer set of additional features. We show that MAP inference is tractable and can therefore scale to very large real-world networks. Further, we demonstrate the effectiveness of our approach by inferring the underlying network structure of a subset of the popular Twitter following network by analyzing the topics of a large number of messages posted by users over a 10-month period. Experimental results show that our models accurately recover the links of the Twitter network, and significantly improve the performance over previous models based entirely on time."} {"_id": "e1bf2a95eb36afe7a6d9ae01db26ade2988226a0", "title": "Map-reduce based parallel support vector machine for risk analysis", "text": "Nowadays the size and amount of available data have increased tremendously, ushering in the era of Big Data. However, although the Support Vector Machine (SVM) is one of the most popular classifiers in machine learning, it cannot handle very large datasets due to its excessive memory requirements and algorithmic complexity. Hence a new technique, the Parallel Support Vector Machine (PSVM), has emerged, which runs a parallel algorithm to work efficiently with large-scale data.
In this paper we discuss a map-reduce based PSVM model for risk analysis that can handle a huge amount of data in a distributed manner."} {"_id": "a47b4939945e9139ccbf37e2f232d5c57583b385", "title": "Classification of Mammographic Breast Density Using a Combined Classifier Paradigm", "text": "In this paper we investigate a new approach to the classification of mammographic images according to breast type. The classification of breast density in this study is motivated by its use as prior knowledge in the image processing pipeline. By utilising this knowledge at different stages including enhancement, segmentation and feature extraction, its application aims to increase the sensitivity of detecting breast cancer. Our implemented discrimination of breast density is based on the underlying texture contained within the breast tissue apparent on a digital mammogram and realised by utilising four approaches to quantifying the texture. Following feature extraction, we adopt a variation on bootstrap aggregation ('bagging') to meet the assumptions of independence in data representation of the input dataset, necessary for classifier combination. Multiple classifiers comprising feed-forward Artificial Neural Networks (ANN) are subsequently trained with the different perturbed input data spaces using 10-fold cross-validation. The set of classifier outputs, expressed in a probabilistic framework, are subsequently combined using six different classifier combination rules and the results compared. In this study we examine two different classification tasks: a four-class classification problem differentiating between fatty, partly fatty, dense and extremely dense breast types, and a two-class problem differentiating between dense and fatty breast types. The dataset used in this study is the Digital Database of Screening Mammograms (DDSM) containing Medio-Lateral Oblique (MLO) views for each breast for 377 patients. For both tasks the best combination strategy was found using the product rule, giving an average recognition rate on test of 71.4% for the four-class problem and 96.7% for the two-class problem."} {"_id": "11bf72f89874b3bf3e950952543c96bf533d3399", "title": "DTC-SVM Scheme for Induction Motors Fed with a Three-level Inverter", "text": "Direct Torque Control is a control technique in AC drive systems to obtain high performance torque control. The conventional DTC drive contains a pair of hysteresis comparators. DTC drives utilizing hysteresis comparators suffer from high torque ripple and variable switching frequency. The most common solution to those problems is to use space vector modulation, in which the space vector depends on the reference torque and flux. In this paper the space vector modulation technique (SVPWM) is applied to two-level inverter control in the proposed DTC-based induction motor drive system, thereby dramatically reducing the torque ripple. A controller based on space vector modulation is then designed to be applied in the control of an Induction Motor (IM) with a three-level inverter. This type of inverter has several advantages over the standard two-level VSI, such as a greater number of levels in the output voltage waveforms, lower dV/dt, less harmonic distortion in voltage and current waveforms and lower switching frequencies. This paper proposes a general SVPWM algorithm for three-level inverters based on standard two-level SVPWM. The proposed scheme is described clearly and simulation results are reported to demonstrate its effectiveness. The entire control scheme is implemented with Matlab/Simulink.
Keywords\u2014Direct torque control, space vector pulse-width modulation (SVPWM), neutral point clamped (NPC), two-level inverter."} {"_id": "60a9035c45fe30e4e88dad530c4f5e476cc61b78", "title": "Data Mining and Knowledge Discovery: Applications, Techniques, Challenges and Process Models in Healthcare", "text": "Many healthcare leaders find themselves overwhelmed with data, but lack the information they need to make the right decisions. Knowledge Discovery in Databases (KDD) can help organizations turn their data into information. Organizations that take advantage of KDD techniques will find that they can lower healthcare costs while improving healthcare quality by using fast and better clinical decision making. In this paper, a review study is done on existing data mining and knowledge discovery techniques, applications and process models that are applicable to healthcare environments. The challenges for applying data mining techniques in healthcare environments will also be discussed."} {"_id": "528db43eb99e4d3c6b0c7ed63d17332796b4270f", "title": "MPTLsim: a cycle-accurate, full-system simulator for x86-64 multicore architectures with coherent caches", "text": "The introduction of multicore microprocessors in the recent years has made it imperative to use cycle-accurate and full-system simulators in the architecture research community. We introduce MPTLsim - a multicore simulator for the X86 ISA that meets this need. MPTLsim is a uop-accurate, cycle-accurate, full-system simulator for multicore designs based on the X86-64 ISA. MPTLsim extends PTLsim, a publicly available single core simulator, with a host of additional features to support hyperthreading within a core and multiple cores, with detailed models for caches, on-chip interconnections and the memory data flow. MPTLsim incorporates detailed simulation models for cache controllers, interconnections and has built-in implementations of a number of cache coherency protocols."} {"_id": "bbb9c3119edd9daa414fd8f2df5072587bfa3462", "title": "Apache Spark: a unified engine for big data processing", "text": "This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications."} {"_id": "4b41aa7f0b0eae6beff4c6d98dd3631863ec51c2", "title": "Bayesian Non-Exhaustive Classification A Case Study: Online Name Disambiguation using Temporal Record Streams", "text": "The name entity disambiguation task aims to partition the records of multiple real-life persons so that each partition contains records pertaining to a unique person. Most of the existing solutions for this task operate in a batch mode, where all records to be disambiguated are initially available to the algorithm. However, more realistic settings require that the name disambiguation task be performed in an online fashion, in addition to being able to identify records of new ambiguous entities having no preexisting records. In this work, we propose a Bayesian non-exhaustive classification framework for solving the online name disambiguation task. Our proposed method uses a Dirichlet process prior with a Normal x Normal x Inverse Wishart data model which enables identification of new ambiguous entities who have no records in the training data. For online classification, we use a one-sweep Gibbs sampler which is very efficient and effective. As a case study we consider bibliographic data in a temporal stream format and disambiguate authors by partitioning their papers into homogeneous groups.
Our experimental results demonstrate that the proposed method is better than existing methods for performing the online name disambiguation task."} {"_id": "3199197df603025a328e2c0837235e590acc10b1", "title": "Further improvement in reducing superficial contamination in NIRS using double short separation measurements", "text": "Near-Infrared Spectroscopy (NIRS) allows the recovery of the evoked hemodynamic response to brain activation. In adult human populations, the NIRS signal is strongly contaminated by systemic interference occurring in the superficial layers of the head. An approach to overcome this difficulty is to use additional NIRS measurements with short optode separations to measure the systemic hemodynamic fluctuations occurring in the superficial layers. These measurements can then be used as regressors in the post-experiment analysis to remove the systemic contamination and isolate the brain signal. In our previous work, we showed that the systemic interference measured in NIRS is heterogeneous across the surface of the scalp. As a consequence, the short separation measurement used in the regression procedure must be located close to the standard NIRS channel from which the evoked hemodynamic response of the brain is to be recovered. Here, we demonstrate that using two short separation measurements, one at the source optode and one at the detector optode, further increases the performance of the short separation regression method compared to using a single short separation measurement. While a single short separation channel produces an average reduction in noise of 33% for HbO, using a short separation channel at both source and detector reduces noise by 59% compared to the standard method using a general linear model (GLM) without short separation. For HbR, noise reduction of 3% is achieved using a single short separation and this number goes to 47% when two short separations are used. Our work emphasizes the importance of integrating short separation measurements both at the source and at the detector optode of the standard channels from which the hemodynamic response is to be recovered. While the implementation of short separation sources presents some difficulties experimentally, the improvement in noise reduction is significant enough to justify the practical challenges."} {"_id": "ab8799dce29812a8e04cfa01eea095515c24b963", "title": "Magnetic integration of LCC compensated resonant converter for inductive power transfer applications", "text": "The aim of this paper is to present a novel magnetic integrated LCC series-parallel compensation topology for the design of both the primary and pickup pads in inductive power transfer (IPT) applications. A more compact structure can be realized by integrating the inductors of the compensation circuit into the coupled power-transmitting coils. The impact of the extra coupling between the compensated coils (inductors) and the power-transferring coils is modeled and analyzed. The basic characteristics of the proposed topology are studied based on the first harmonic approximation (FHA). High-order harmonics are taken into account to derive an analytical solution for the current at the switching instant, which is helpful for the design of soft-switching operation. An IPT system with up to 5.6 kW output power for an electric vehicle (EV) charger has been built to verify the validity of the proposed magnetic integrated compensation topology.
A peak efficiency of 95.36% from the DC power source to the battery load is achieved at the rated operating condition."} {"_id": "1b1337a166cdcf6ee51a70cb23f291c36e9eee34", "title": "Fine-to-Coarse Global Registration of RGB-D Scans", "text": "RGB-D scanning of indoor environments is important for many applications, including real estate, interior design, and virtual reality. However, it is still challenging to register RGB-D images from a hand-held camera over a long video sequence into a globally consistent 3D model. Current methods can often lose tracking or drift and thus fail to reconstruct salient structures in large environments (e.g., parallel walls in different rooms). To address this problem, we propose a fine-to-coarse global registration algorithm that leverages robust registrations at finer scales to seed detection and enforcement of new correspondence and structural constraints at coarser scales. To test global registration algorithms, we provide a benchmark with 10,401 manually-clicked point correspondences in 25 scenes from the SUN3D dataset. During experiments with this benchmark, we find that our fine-to-coarse algorithm registers long RGB-D sequences better than previous methods."} {"_id": "89324b37187a8a4115e7619056bca5fcf78e8928", "title": "Automatically Quantifying Radiographic Knee Osteoarthritis Severity Final Report-CS 229-Machine Learning", "text": "In this paper, we implement machine learning algorithms to automatically quantify knee osteoarthritis severity from X-ray images according to the Kellgren & Lawrence (KL) grades. We implement and evaluate the performance of various machine learning models like transfer learning, support vector machines and fully connected neural networks based on their classification accuracy. We also implement the task of automatically extracting the knee-joint region from the X-ray images and quantifying their severity by training a faster region convolutional neural network (R-CNN)."} {"_id": "010d1631433bb22a9261fba477b6e6f5a0d722b8", "title": "Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory", "text": "Perception and expression of emotion are key factors to the success of dialogue systems or conversational agents. However, this problem has not been studied in large-scale conversation generation so far. In this paper, we propose Emotional Chatting Machine (ECM) that can generate appropriate responses not only in content (relevant and grammatical) but also in emotion (emotionally consistent). To the best of our knowledge, this is the first work that addresses the emotion factor in large-scale conversation generation. ECM addresses the factor using three new mechanisms that respectively (1) model the high-level abstraction of emotion expressions by embedding emotion categories, (2) capture the change of implicit internal emotion states, and (3) use explicit emotion expressions with an external emotion vocabulary. Experiments show that the proposed model can generate responses appropriate not only in content but also in emotion."} {"_id": "a907e5609bb9efd1bafb11cd424faab14fd42e4a", "title": "Information Retrieval with Verbose Queries", "text": "Recently, the focus of many novel search applications shifted from short keyword queries to verbose natural language queries. Examples include question answering systems and dialogue systems, voice search on mobile devices and entity search engines like Facebook's Graph Search or Google's Knowledge Graph.
However, the performance of textbook information retrieval techniques for such verbose queries is not as good as that for their shorter counterparts. Thus, effective handling of verbose queries has become a critical factor for adoption of information retrieval techniques in this new breed of search applications. Over the past decade, the information retrieval community has deeply explored the problem of transforming natural language verbose queries using operations like reduction, weighting, expansion, reformulation and segmentation into more effective structural representations. However, thus far, there has been no coherent and organized tutorial on this topic. In this tutorial, we aim to put together various research pieces of the puzzle, provide a comprehensive and structured overview of various proposed methods, and also list various application scenarios where effective verbose query processing can make a significant difference."} {"_id": "600a5d60cb96eda2a9849413e747547d70dfb00a", "title": "Biologically inspired protection of deep networks from adversarial attacks", "text": "Inspired by biophysical principles underlying nonlinear dendritic computation in neural circuits, we develop a scheme to train deep neural networks to make them robust to adversarial attacks. Our scheme generates highly nonlinear, saturated neural networks that achieve state of the art performance on gradient based adversarial examples on MNIST, despite never being exposed to adversarially chosen examples during training. Moreover, these networks exhibit unprecedented robustness to targeted, iterative schemes for generating adversarial examples, including second-order methods. We further identify principles governing how these networks achieve their robustness, drawing on methods from information geometry. We find these networks progressively create highly flat and compressed internal representations that are sensitive to very few input dimensions, while still solving the task. Moreover, they employ highly kurtotic weight distributions, also found in the brain, and we demonstrate how such kurtosis can protect even linear classifiers from adversarial attack."} {"_id": "87b1b46129899f328017798e315cae82e0ba70d8", "title": "A fully balanced pseudo-differential OTA with common-mode feedforward and inherent common-mode feedback detector", "text": "A pseudo-differential fully balanced fully symmetric CMOS operational transconductance amplifier (OTA) architecture with inherent common-mode detection is proposed. Through judicious arrangement, the common-mode feedback circuit can be economically implemented. The OTA achieves a third harmonic distortion of 43 dB for 900 mVpp at 30 MHz. The OTA, fabricated in a 0.5 \u03bcm CMOS process, is used to design a 100-MHz fourth-order linear phase filter. The measured filter\u2019s group delay ripple is 3% for frequencies up to 100 MHz, and the measured dynamic range is 45 dB for a total harmonic distortion of 46 dB. The filter consumes 42.9 mW per complex pole pair while operating from a 1.65-V power supply."} {"_id": "11b8093f4a8c421a8638c1be0937151d968d95f9", "title": "Emergence of Fatal PRRSV Variants: Unparalleled Outbreaks of Atypical PRRS in China and Molecular Dissection of the Unique Hallmark", "text": "Porcine reproductive and respiratory syndrome (PRRS) is a severe viral disease in pigs, causing great economic losses worldwide each year. The causative agent of the disease, PRRS virus (PRRSV), is a member of the family Arteriviridae.
Here we report our investigation of the unparalleled large-scale outbreaks in China in 2006 of an initially unknown, so-called \"high fever\" disease, essentially PRRS, which spread to more than 10 provinces (autonomous cities or regions) and affected over 2,000,000 pigs with about 400,000 fatal cases. Unlike typical PRRS, the \"high fever\" disease also infected numerous adult sows. This atypical PRRS pandemic was initially identified as a hog cholera-like disease manifesting neurological symptoms (e.g., shivering), high fever (40-42 degrees C), erythematous blanching rash, etc. Autopsies combined with immunological analyses clearly showed that multiple organs were infected by highly pathogenic PRRSVs with severe pathological changes observed. Whole-genome analysis of the isolated viruses revealed that these PRRSV isolates are grouped into Type II and are highly homologous to HB-1, a Chinese strain of PRRSV (96.5% nucleotide identity). More importantly, we observed a unique molecular hallmark in these viral isolates, namely a discontinuous deletion of 30 amino acids in nonstructural protein 2 (NSP2). Taken together, this is the first comprehensive report documenting the 2006 epidemic of atypical PRRS in China and identifying the 30 amino-acid deletion in NSP2, a novel determining factor for virulence which may be implicated in the high pathogenicity of PRRSV, and will stimulate further study by using the infectious cDNA clone technique."} {"_id": "c257c6948fe63f7ee3df0a2d18916d2e3fdc85e5", "title": "Destination bonding: Hybrid cognition using Instagram", "text": "Empirical research has identified the phenomenon of destination bonding as a result of summated physical and emotional values associated with the destination. Physical values, namely natural landscape and other physical settings, and emotional values, namely the enculturation processes, play a significant role in shaping visitors\u2019 cognitive framework for destination preference. The physical values seemed to be the stimulator for bonding that embodies action or behavior tendencies in imagery. The emotional values were the conditions that lead to affective bonding and are reflected in attitudes for a place which were evident in text narratives. Social networking on virtual platforms offers the scope for hybrid cognitive expression using imagery and text to the visitors. Instagram has emerged as an application-window to capture these hybrid cognitions of visitors. This study focuses on assessing the relationship between hybrid cognition of visitors expressed via Instagram and their bond with the destination. Further to this, the study attempts to examine the impact of hybrid cognition of visitors on the behavioral pattern of prospective visitors to the destination. The study revealed that sharing of visual imageries and related text by the visitors is an expression of the physico-emotional bonding with the destination. It was further established that hybrid cognition strongly asserts destination bonding and has also been found to have a moderating impact on the link between destination bonding and electronic-word-of-mouth.
"} {"_id": "203af6916b501ee53d9c8c7164324ef4f019ca2d", "title": "Hand grasp and motion for intent expression in mid-air virtual pottery", "text": "We describe the design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery. We model the shaping of a pot as a gradual and progressive convergence of the pot profile to the shape of the user\u2019s hand represented as a point-cloud (PCL). Our pottery-inspired application served as a platform for systematically revealing how users use their hands to express the intent of deformation during a pot shaping process. Through our approach, we address two specific problems: (a) determining start and end of deformation without explicit clutching and declutching, and (b) identifying user\u2019s intent by characterizing grasp and motion of the hand on the pot. We evaluated our approach\u2019s performance in terms of intent classification, users\u2019 behavior, and users\u2019 perception of controllability. We found that the expressive capability of hand articulation can be effectively harnessed for controllable shaping by organizing the deformation process in broad classes of intended operations such as pulling, pushing and fairing. After minimal practice with the pottery application, users could figure out their own strategy for reaching, grasping and deforming the pot. Further, the use of PCL as mid-air input allows for using common physical objects as tools for pot deformation. Users particularly enjoyed this aspect of our method for shaping pots."} {"_id": "0cc24d8308665874bddf5cb874c7fb122c249666", "title": "SoftGUESS: Visualization and Exploration of Code Clones in Context", "text": "We introduce SoftGUESS, a code clone exploration system. SoftGUESS is built on the more general GUESS system which provides users with a mechanism to interactively explore graph structures both through direct manipulation as well as a domain-specific language. We demonstrate SoftGUESS through a number of mini-applications to analyze evolutionary code-clone behavior in software systems. The mini-applications of SoftGUESS represent a novel way of looking at code-clones in the context of many system features. It is our hope that SoftGUESS will form the basis for other analysis tools in the software-engineering domain."} {"_id": "2ccbb28d9f3c0f4867826f24567b4183993037b3", "title": "The Diffusion of Innovations in Social Networks", "text": "This paper determines how different network structures influence the diffusion of innovations. We develop a model of diffusion where: 1. an individual\u2019s decision to adopt a new technology is influenced by his contacts; and 2. contacts can discuss, coordinate, and make adoption decisions together. A measure of connectedness, \u2018cohesion\u2019, determines diffusion. A cohesive community is defined as a group in which all members have a high proportion of their contacts within the group. We show a key trade-off: on one hand, a cohesive community can hinder diffusion by blocking the spread of a technology into the group; on the other hand, cohesive communities can be particularly effective at acting collectively to adopt an innovation. We find that for technologies with low externalities (that require few people to adopt before others are willing to adopt), social structures with loose ties, where people are not part of cohesive groups, enable greater diffusion.
However, as externalities increase (technologies require more people to adopt before others are willing to adopt), social structures with increasingly cohesive groups enable greater diffusion. Given that societal structure is known to differ systematically along this dimension, our findings point to specialization in technological progress exhibiting these patterns."} {"_id": "2e3fc086ff84d6589dc91200fbfa86903a2d3b76", "title": "SLANGZY: a fuzzy logic-based algorithm for English slang meaning selection", "text": "The text present on online forums and social media platforms conventionally does not follow a standard sentence structure and uses words that are commonly termed as slang or Internet language. Online text mining involves a surfeit of slang words; however, there is a distinct lack of reliable resources available to find accurate meanings of these words. We aim to bridge this gap by introducing SLANGZY, a fuzzy logic-based algorithm for English slang meaning selection which uses a mathematical factor termed as \u201cslang factor\u201d to judge the accuracy of slang word definitions found in Urban Dictionary, the largest Slang Dictionary on the Internet. This slang factor is used to rank definitions of English slang words retrieved from over 4 million unique words on popular social media platforms such as Twitter, YouTube and Reddit. We investigate the usefulness of SLANGZY over Urban Dictionary for finding meanings of slang words in social media text, and achieve encouraging results over successive experiments by recognizing the importance of multiple criteria in the calculation of slang factor. The performance of SLANGZY with optimum weights for each criterion is further assessed using the accuracy, error rate, F-Score as well as a difference factor for English slang word definitions. To further illustrate the results, a web portal is created to display the contents of the Slang Dictionary consisting of definitions ranked according to the calculated slang factors."} {"_id": "05bcd2f5d1833ac354de01341d73e42203a5b6c0", "title": "A Topic Model for Word Sense Disambiguation", "text": "We develop latent Dirichlet allocation with WORDNET (LDAWN), an unsupervised probabilistic topic model that includes word sense as a hidden variable. We develop a probabilistic posterior inference algorithm for simultaneously disambiguating a corpus and learning the domains in which to consider each word.
Using the WORDNET hierarchy, we embed the construction of Abney and Light (1999) in the topic model and show that automatically learned domains improve WSD accuracy compared to alternative contexts."} {"_id": "078fdc9d7dd7105dcc5e65aa19edefe3e48e8bc7", "title": "Probabilistic author-topic models for information discovery", "text": "We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each author's topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer."} {"_id": "215aa495b4c860a1e6d87f2c36f34da464376cc4", "title": "Finding scientific topics.", "text": "A first step in identifying the content of a document is determining which topics that document addresses. We describe a generative model for documents, introduced by Blei, Ng, and Jordan [Blei, D. M., Ng, A. Y. & Jordan, M. I. (2003) J. Machine Learn. Res. 3, 993-1022], in which each document is generated by choosing a distribution over topics and then choosing each word in the document from a topic selected according to this distribution. We then present a Markov chain Monte Carlo algorithm for inference in this model. We use this algorithm to analyze abstracts from PNAS by using Bayesian model selection to establish the number of topics. We show that the extracted topics capture meaningful structure in the data, consistent with the class designations provided by the authors of the articles, and outline further applications of this analysis, including identifying \"hot topics\" by examining temporal dynamics and tagging abstracts to illustrate semantic content."} {"_id": "2483c29cd07f05d20dab7ed16d5ea1936226259c", "title": "Mixtures of hierarchical topics with Pachinko allocation", "text": "The four-level pachinko allocation model (PAM) (Li & McCallum, 2006) represents correlations among topics using a DAG structure. It does not, however, represent a nested hierarchy of topics, with some topical word distributions representing the vocabulary that is shared among several more specific topics. This paper presents hierarchical PAM---an enhancement that explicitly represents a topic hierarchy. This model can be seen as combining the advantages of hLDA's topical hierarchy representation with PAM's ability to mix multiple leaves of the topic hierarchy.
Experimental results show improvements in likelihood of held-out documents, as well as mutual information between automatically-discovered topics and human-generated categories such as journals."} {"_id": "271d031d03d217170b2d1b1c4ae9d777dc18692b", "title": "A Condensation Approach to Privacy Preserving Data Mining", "text": "In recent years, privacy preserving data mining has become an important problem because of the large amount of personal data which is tracked by many business applications. In many cases, users are unwilling to provide personal information unless the privacy of sensitive information is guaranteed. In this paper, we propose a new framework for privacy preserving data mining of multi-dimensional data. Previous work for privacy preserving data mining uses a perturbation approach which reconstructs data distributions in order to perform the mining. Such an approach treats each dimension independently and therefore ignores the correlations between the different dimensions. In addition, it requires the development of a new distribution based algorithm for each data mining problem, since it does not use the multi-dimensional records, but uses aggregate distributions of the data as input. This leads to a fundamental re-design of data mining algorithms. In this paper, we will develop a new and flexible approach for privacy preserving data mining which does not require new problem-specific algorithms, since it maps the original data set into a new anonymized data set. This anonymized data closely matches the characteristics of the original data including the correlations among the different dimensions. We present empirical results illustrating the effectiveness of the method."} {"_id": "18ca2837d280a6b2250024b6b0e59345601064a7", "title": "Nonlinear dimensionality reduction by locally linear embedding.", "text": "Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text."} {"_id": "c86754453c88a5ccf3d4a7ef7acd66ae1cd97928", "title": "Influence of External Currents in Sensors Based on PCB Rogowski Coils", "text": "A current sensor based on the Rogowski coil is an innovative measuring system that offers advantages over conventional measuring systems based on current transformers with a magnetic core [1], [2] and [3]. Its main advantages are linearity, span and bandwidth. Different manufacturing methods make it possible to obtain a wide variety of Rogowski coils with different properties. One way of manufacturing is to use the same method as for producing printed circuit boards; in this way it is possible to produce very similar coils with high precision.
The authors work on current measurement with Rogowski coils and Hall effect sensors, and in particular on the realization of good and accurate coils [4] and [5]. In this work, the influence of currents external to the coil on the current measured by the coil has been evaluated."} {"_id": "1d89516427c0d91653b70171a0e8998af9d5960b", "title": "Identifying quantitative trait loci via group-sparse multitask regression and feature selection: an imaging genetics study of the ADNI cohort", "text": "MOTIVATION\nRecent advances in high-throughput genotyping and brain imaging techniques enable new approaches to study the influence of genetic variation on brain structures and functions. Traditional association studies typically employ independent and pairwise univariate analysis, which treats single nucleotide polymorphisms (SNPs) and quantitative traits (QTs) as isolated units and ignores important underlying interacting relationships between the units. New methods are proposed here to overcome this limitation.\n\n\nRESULTS\nTaking into account the interlinked structure within and between SNPs and imaging QTs, we propose a novel Group-Sparse Multi-task Regression and Feature Selection (G-SMuRFS) method to identify quantitative trait loci for multiple disease-relevant QTs and apply it to a study in mild cognitive impairment and Alzheimer's disease. Built upon regression analysis, our model uses a new form of regularization, group \u2113(2,1)-norm (G(2,1)-norm), to incorporate the biological group structures among SNPs induced from their genetic arrangement. The new G(2,1)-norm considers the regression coefficients of all the SNPs in each group with respect to all the QTs together and enforces sparsity at the group level. In addition, an \u2113(2,1)-norm regularization is utilized to couple feature selection across multiple tasks to make use of the shared underlying mechanism among different brain regions. The effectiveness of the proposed method is demonstrated by both clearly improved prediction performance in empirical evaluations and a compact set of selected SNP predictors relevant to the imaging QTs.\n\n\nAVAILABILITY\nSoftware is publicly available at: http://ranger.uta.edu/%7eheng/imaging-genetics/."} {"_id": "27d9a8445e322c0ac1f335aea6a5591c4d120b05", "title": "Security Against Hardware Trojan Attacks Using Key-Based Design Obfuscation", "text": "Malicious modification of hardware in untrusted fabrication facilities, referred to as hardware Trojan, has emerged as a major security concern. Comprehensive detection of these Trojans during postmanufacturing test has been shown to be extremely difficult. Hence, it is important to develop design techniques that provide effective countermeasures against hardware Trojans by either preventing Trojan attacks or facilitating detection during test. Obfuscation is a technique that is conventionally employed to prevent piracy of software and hardware intellectual property (IP). In this work, we propose a novel application of key-based circuit structure and functionality obfuscation to achieve protection against hardware Trojans triggered by rare internal circuit conditions. The proposed obfuscation scheme is based on judicious modification of the state transition function, which creates two distinct functional modes: normal and
obfuscated. A circuit transitions from the obfuscated to the normal mode only upon application of a specific input sequence, which defines the key. We show that it provides security against Trojan attacks in two ways: (1) it makes some inserted Trojans benign, i.e. they become effective only in the obfuscated mode; and (2) it prevents an adversary from exploiting the true rare events in a circuit to insert hard-to-detect Trojans. The proposed design methodology can thus achieve simultaneous protection from hardware Trojans and hardware IP piracy. Besides protecting ICs against Trojan attacks in foundry, we show that it can also protect against malicious modifications by untrusted computer-aided design (CAD) tools in both SoC and FPGA design flows. Simulation results for a set of benchmark circuits show that the scheme is capable of achieving high levels of security against Trojan attacks at modest area, power and delay overhead."} {"_id": "3f8b54ef69b2c8682a76f074110be0f11815f63a", "title": "Combining Neural Networks and Log-linear Models to Improve Relation Extraction", "text": "The last decade has witnessed the success of traditional feature-based methods for exploiting the discrete structures such as words or lexical patterns to extract relations from text. Recently, convolutional and recurrent neural networks have been shown to capture effective hidden structures within sentences via continuous representations, thereby significantly advancing the performance of relation extraction. The advantage of convolutional neural networks is their capacity to generalize the consecutive k-grams in the sentences while recurrent neural networks are effective to encode long range sentence context. This paper proposes to combine the traditional feature-based method, the convolutional and recurrent neural networks to simultaneously benefit from their advantages. Our systematic evaluation of different network architectures and combination methods demonstrates the effectiveness of this approach and results in the state-of-the-art performance on the ACE 2005 and SemEval datasets."} {"_id": "d73a261bfdb166dbc44a5fb14fff050c52574a2b", "title": "A second generation computer forensic analysis system", "text": "The architecture of existing \u2013 first generation \u2013 computer forensic tools, including the widely used EnCase and FTK products, is rapidly becoming outdated. Tools are not keeping pace with increased complexity and data volumes of modern investigations. This paper discusses the limitations of first generation computer forensic tools. Several metrics for measuring the efficacy and performance of computer forensic tools are introduced. A set of requirements for second generation tools are proposed. A high-level design for a (work in progress) second generation computer forensic analysis system is presented."} {"_id": "b4641313431f1525d276677bcd8fc5de1c726a8d", "title": "A Low-Power 32-Channel Digitally Programmable Neural Recording Integrated Circuit", "text": "We report the design of an ultra-low-power 32-channel neural-recording integrated circuit (chip) in a 0.18 \u03bcm CMOS technology.
The chip consists of eight neural recording modules where each module contains four neural amplifiers, an analog multiplexer, an A/D converter, and a serial programming interface. Each amplifier can be programmed to record either spikes or LFPs with a programmable gain from 49-66 dB. To minimize the total power consumption, an adaptive-biasing scheme is utilized to adjust each amplifier's input-referred noise to suit the background noise at the recording site. The amplifier's input-referred noise can be adjusted from 11.2 \u03bcVrms (total power of 5.4 \u03bcW) down to 5.4 \u03bcVrms (total power of 20 \u03bcW) in the spike-recording setting. The ADC in each recording module digitizes the a.c. signal input to each amplifier at 8-bit precision with a sampling rate of 31.25 kS/s per channel, with an average power consumption of 483 nW per channel, and, because of a.c. coupling, allows d.c. operation over a wide dynamic range. It achieves an ENOB of 7.65, resulting in a net efficiency of 77 fJ/state, making it one of the most energy-efficient designs for neural recording applications. The presented chip was successfully tested in an in vivo wireless recording experiment from a behaving primate with an average power dissipation per channel of 10.1 \u03bcW. The neural amplifier and the ADC occupy areas of 0.03 mm2 and 0.02 mm2 respectively, making our design simultaneously area efficient and power efficient, thus enabling scaling to high channel-count systems."} {"_id": "7e33296dfff963d595d2121f14a7a0bd5c187188", "title": "Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae", "text": "New algorithms for deciding whether a (propositional) Horn formula is satisfiable are presented. If the Horn formula A contains K distinct propositional letters and if it is assumed that they are exactly P1, . . . , PK, the two algorithms presented in this paper run in time O(N), where N is the total number of occurrences of literals in A. By representing a Horn proposition as a graph, the satisfiability problem can be formulated as a data flow problem, a certain type of pebbling. The difference between the two algorithms presented here is the strategy used for pebbling the graph. The first algorithm is based on the principle used for finding the set of nonterminals of a context-free grammar from which the empty string can be derived. The second algorithm is a graph traversal and uses a \u201ccall-by-need\u201d strategy. This algorithm uses an attribute grammar to translate a propositional Horn formula to its corresponding graph in linear time. Our formulation of the satisfiability problem as a data flow problem appears to be new and suggests the possibility of improving efficiency using parallel processors."} {"_id": "81394ddc1465027c148c0b7d8005bb9e5712d8f7", "title": "ECNUCS: Measuring Short Text Semantic Equivalence Using Multiple Similarity Measurements", "text": "This paper reports our submissions to the Semantic Textual Similarity (STS) task in SemEval 2013 (Task 6). We submitted three Support Vector Regression (SVR) systems in the core task, using 6 types of similarity measures, i.e., string similarity, number similarity, knowledge-based similarity, corpus-based similarity, syntactic dependency similarity and machine translation similarity. Our third system, with different training data and different feature sets for each test data set, performs the best and ranks 35 out of 90 runs.
We also submitted two systems in the typed task using a string-based measure and a Named Entity-based measure. Our best system ranks 5 out of 15 runs."} {"_id": "719fd380590df3ccbe50db7ed74f9c84fe7e0d6a", "title": "Protocol for developing ANN models and its application to the assessment of the quality of the ANN model development process in drinking water quality modelling", "text": "The application of Artificial Neural Networks (ANNs) in the field of environmental and water resources modelling has become increasingly popular since the early 1990s. Despite the recognition of the need for a consistent approach to the development of ANN models and the importance of providing adequate details of the model development process, there is no systematic protocol for the development and documentation of ANN models. In order to address this shortcoming, such a protocol is introduced in this paper. In addition, the protocol is used to critically review the quality of the ANN model development and reporting processes employed in 81 journal papers since 2000 in which ANNs have been used for drinking water quality modelling. The results show that model architecture selection is the best implemented step, while greater focus should be given to input selection considering input independence and model validation considering replicative and structural validity."} {"_id": "372a9edb48c6894e13bd9946ba50442b9f2f6f2c", "title": "Micro-synchrophasors for distribution systems", "text": "This paper describes a research project to develop a network of high-precision phasor measurement units, termed micro-synchrophasors or \u03bcPMUs, and explore the applications of \u03bcPMU data for electric power distribution systems."} {"_id": "c9298d761177fa3ab1000aea4dede5e76f891f6f", "title": "From the ephemeral to the enduring: how approach-oriented mindsets lead to greater status.", "text": "We propose that the psychological states individuals bring into newly formed groups can produce meaningful differences in status attainment. Three experiments explored whether experimentally created approach-oriented mindsets affected status attainment in groups, both immediately and over time. We predicted that approach-oriented states would lead to greater status attainment by increasing proactive behavior. Furthermore, we hypothesized that these status gains would persist longitudinally, days after the original mindsets had dissipated, due to the self-reinforcing behavioral cycles the approach-oriented states initiated. In Experiment 1, individuals primed with a promotion focus achieved higher status in their newly formed groups, and this was mediated by proactive behavior as rated by themselves and their teammates. Experiment 2 was a longitudinal experiment and revealed that individuals primed with power achieved higher status, both immediately following the prime and when the groups were reassembled 2 days later to work on new tasks. These effects were mediated by independent coders' ratings of proactive behavior during the first few minutes of group interaction. Experiment 3 was another longitudinal experiment and revealed that priming happiness led to greater status as well as greater acquisition of material resources. Importantly, these immediate and longitudinal effects were independent of the effects of a number of stable dispositional traits.
Our results establish that approach-oriented psychological states affect status attainment, over and above the more stable characteristics emphasized in prior research, and provide the most direct test yet of the self-reinforcing nature of status hierarchies. These findings depict a dynamic view of status organization in which the same group may organize itself differently depending on members' incoming psychological states."} {"_id": "2fa5eb6e30116a6f8d073c122d9c087f844d7912", "title": "Relating reinforcement learning performance to classification performance", "text": "We prove a quantitative connection between the expected sum of rewards of a policy and binary classification performance on created subproblems. This connection holds without any unobservable assumptions (no assumption of independence, small mixing time, fully observable states, or even hidden states) and the resulting statement is independent of the number of states or actions. The statement is critically dependent on the size of the rewards and prediction performance of the created classifiers. We also provide some general guidelines for obtaining good classification performance on the created subproblems. In particular, we discuss possible methods for generating training examples for a classifier learning algorithm."} {"_id": "6755b01e14b2e7ee39aef0b6bf573769a39eabfe", "title": "Semantic Labeling of Aerial and Satellite Imagery", "text": "Inspired by the recent success of deep convolutional neural networks (CNNs) and feature aggregation in the field of computer vision and machine learning, we propose an effective approach to semantic pixel labeling of aerial and satellite imagery using both CNN features and hand-crafted features. Both CNN and hand-crafted features are applied to dense image patches to produce per-pixel class probabilities. Conditional random fields (CRFs) are applied as a postprocessing step. The CRF infers a labeling that smooths regions while respecting the edges present in the imagery. The combination of these factors leads to a semantic labeling framework which outperforms all existing algorithms on the International Society of Photogrammetry and Remote Sensing (ISPRS) two-dimensional Semantic Labeling Challenge dataset. We advance state-of-the-art results by improving the overall accuracy to 88% on the ISPRS Semantic Labeling Contest. In this paper, we also explore the possibility of applying the proposed framework to other types of data. Our experimental results demonstrate the generalization capability of our approach and its ability to produce accurate results."} {"_id": "e8a90511a95025aab5f0867ce99704151e40b207", "title": "Are Faculty Members Ready? Individual Factors Affecting Knowledge Management Readiness in Universities", "text": "Knowledge Management (KM) provides a systematic process to help in the creation, transfer and use of knowledge across the university, leading to increased productivity. While KM has been successfully used elsewhere, universities have been late in adopting it. Before a university can initiate KM, it needs to determine if it is ready for KM or not.
Through a web-based survey sent to 1263 faculty members from 59 accredited Library and Information Science programs in universities across North America, this study investigated the effect of individual factors of trust, knowledge self-efficacy, collegiality, openness to change and reciprocity on individual readiness to participate in a KM initiative, and the degree to which this affects perceived organisational readiness to adopt KM. 157 valid responses were received. Using structural equation modeling, the study found that apart from trust, all other factors positively affected individual readiness, which was found to affect organisational readiness. Findings should help universities identify opportunities and barriers before they can adopt KM. It should be a useful contribution to the KM literature, especially in the university context."} {"_id": "4214cb09e29795f5363e5e3b545750dce027b668", "title": "Overview of Virtual Reality Technologies", "text": "The promise of being able to be inside another world might be resolved by Virtual Reality. It wouldn\u2019t be na\u00efve to assume that we will be able to enter and completely feel another world at some point in the future; be able to interact with knowledge and entertainment in a totally immersive state. Advancements are becoming more frequent, and with the recent popularity of technology in this generation, a lot of investment is being made. Prototypes of head displays that completely cover the user\u2019s view, and movement recognition that doesn\u2019t need an intermediate device for data input, are Virtual Reality devices already available to developers and even to the public. From time to time, the way we interact with computers changes, and virtual reality promises to make this interaction as real as possible. Although scenes like flying a jet or a tank are already tangible, other scenes, such as being able to feel the dry air of the Sahara in geography classes or feel the hard, cold scales of a dragon in a computer game, seem to be a long way from now. However, technological advancements and the increase in the popularity of these technologies point to the possibility of such amazing scenes coming true."} {"_id": "1a7f1685e4c9a200b0c213060e203137279142d6", "title": "Ranking with local regression and global alignment for cross media retrieval", "text": "Rich multimedia content including images, audio and text is frequently used to describe the same semantics in E-Learning and E-business web pages, instructive slides, multimedia cyclopedias, and so on. In this paper, we present a framework for cross-media retrieval, where the query example and the retrieved result(s) can be of different media types. We first construct Multimedia Correlation Space (MMCS) by exploring the semantic correlation of different multimedia modalities, during which multimedia content and co-occurrence information is utilized. We propose a novel ranking algorithm, namely ranking with Local Regression and Global Alignment (LRGA), which learns a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking values of its neighboring points. We propose a unified objective function to globally align the local models from all the data points so that an optimal ranking value can be assigned to each data point. LRGA is insensitive to parameters, making it particularly suitable for data ranking.
A relevance feedback algorithm is proposed to improve the retrieval performance. Comprehensive experiments have demonstrated the effectiveness of our methods."} {"_id": "8f127eca4d7df80ea8227a84d9c41cc7d72e370f", "title": "Using Smart City Technology to Make Healthcare Smarter", "text": "Smart cities use information and communication technologies (ICTs) to scale services, including utilities and transportation, to a growing population. In this paper, we discuss how smart city ICTs can also improve healthcare effectiveness and lower healthcare cost for smart city residents. We survey current literature and introduce original research to offer an overview of how smart city infrastructure supports strategic healthcare using both mobile and ambient sensors combined with machine learning. Finally, we consider challenges that will be faced as healthcare providers make use of these opportunities."} {"_id": "484937d9f5c5f4ade7cf39c9dfd538b469bf8823", "title": "A Study on Knowledge, Perceptions and Attitudes about Screening and Diagnosis of Diabetes in Saudi Population", "text": "Introduction: Timely screening and treatment of diabetes can considerably reduce associated health adverse effects. The purpose of the study was to evaluate the knowledge about diabetes and perceptions and attitude about screening and early diagnosis of diabetes among the population. Methods: A cross-sectional questionnaire-based study was conducted among non-diabetic adult people attending primary health care centers in Jeddah, Saudi Arabia. Participants\u2019 knowledge about diabetes risk factors and complications and perceptions and attitude regarding screening and early diagnosis of diabetes was assessed and different scores were calculated. Results: A total of 202 patients were included: mean (SD) age was 38.19 (13.25) years and 55.0% were female. Knowledge about diabetes risk factors and complications was 60.41%-84.77% and 44.67%-67.51%, respectively, depending on the item. Perceptions about screening and early diagnosis showed that 81.19% believed that screening tests exist and 75.14% believed that it is possible to diagnose diabetes before the complication stage; while 84.16% believed that early diagnosis increases treatment efficacy, decreases incidence of complications (83.66%), and allows early treatment (83.66%). Regarding attitude, 86.14% agreed to undergo diabetes screening if advised by the physician and 60.40% would do so on their own initiative. Linear regression showed a positive correlation of attitude score with knowledge about diabetes risk factors (OR=1.87; p<0.0001) and complications (OR=1.46; p<0.0001); perception about feasibility of screening (OR=1.93; p<0.0001) and benefits of early diagnosis (OR=1.69; p<0.0001). Conclusions: The improvement of knowledge about diabetes risk factors and complications as well as the perception about feasibility and benefits of screening are prerequisites for the promotion of diabetes screening among the population."} {"_id": "1bdfdd4205c0ace6f0d5bb09dd606021a110cf36", "title": "The Evidence Framework Applied to Classification Networks", "text": "Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalizing over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a \"moderation\" of the most probable classifier's outputs, and yields improved performance.
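To make the "moderation" step above concrete, here is a minimal numpy sketch (our illustration, not MacKay's own code) of the standard approximation for a logistic output unit: the posterior-marginalized output is approximated as sigmoid(kappa * a), with kappa = 1/sqrt(1 + pi*s2/8), where a is the most probable activation and s2 its variance under a Gaussian posterior approximation. Moderation pulls overconfident outputs toward 0.5 when the posterior is broad.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def moderated_output(x, w_mp, A_inv):
    # a: most-probable activation; s2: its variance x^T A^{-1} x under the
    # Gaussian posterior approximation over the weights.
    a = x @ w_mp
    s2 = x @ A_inv @ x
    kappa = 1.0 / np.sqrt(1.0 + np.pi * s2 / 8.0)   # moderation factor
    return sigmoid(kappa * a)

# A broad posterior (large A_inv) moderates a confident prediction toward 0.5.
x = np.array([1.0, 2.0])
w_mp = np.array([3.0, -1.0])
print(moderated_output(x, w_mp, np.eye(2) * 4.0), sigmoid(x @ w_mp))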
Second, it is demonstrated that the Bayesian framework for model comparison described for regression models in MacKay (1992a,b) can also be applied to classification problems. This framework successfully chooses the magnitude of weight decay terms, and ranks solutions found using different numbers of hidden units. Third, an information-based data selection criterion is derived and demonstrated within this framework."} {"_id": "89283ee665da63ec9cf87e4008ead3e8963fa02b", "title": "Internet Scale User-Generated Live Video Streaming: The Twitch Case", "text": "Twitch is a live video streaming platform used for broadcasting video gameplay, ranging from amateur players to eSports tournaments. This platform has gathered a substantial worldwide community, reaching more than 1.7 million broadcasters and 100 million visitors every month. Twitch is fundamentally different from \u201cstatic\u201d content distribution platforms such as YouTube and Netflix, as streams are generated and consumed in real time. In this paper, we explore the Twitch infrastructure to understand how it manages live streaming delivery to an Internet-wide audience. We found that Twitch manages a geo-distributed infrastructure, with a presence on four continents. Our findings show that Twitch dynamically allocates servers to channels depending on their popularity. Additionally, we explore the redirection strategy of clients to servers depending on their region and the specific channel."} {"_id": "1f75856bba0feb216001ba551d249593a9624c01", "title": "Predicting Stock Price Direction using Support Vector Machines", "text": "Support Vector Machine is a machine learning technique used in recent studies to forecast stock prices. This study uses daily closing prices for 34 technology stocks to calculate price volatility and momentum for individual stocks and for the overall sector. These are used as parameters to the SVM model. The model attempts to predict whether a stock price sometime in the future will be higher or lower than it is on a given day. We find little predictive ability in the short-run but definite predictive ability in the long-run."} {"_id": "2b877d697fb8ba947fe3f964824098c25636fa0e", "title": "CANTINA+: A Feature-Rich Machine Learning Framework for Detecting Phishing Web Sites", "text": "Phishing is a plague in cyberspace. Typically, phish detection methods either use human-verified URL blacklists or exploit Web page features via machine learning techniques. However, the former is frail in terms of new phish, and the latter suffers from the scarcity of effective features and the high false positive rate (FP). To alleviate those problems, we propose a layered anti-phishing solution that aims at (1) exploiting the expressiveness of a rich set of features with machine learning to achieve a high true positive rate (TP) on novel phish, and (2) limiting the FP to a low level via filtering algorithms.\n Specifically, we proposed CANTINA+, the most comprehensive feature-based approach in the literature, including eight novel features, which exploits the HTML Document Object Model (DOM), search engines and third party services with machine learning techniques to detect phish. Moreover, we designed two filters to help reduce FP and achieve runtime speedup. The first is a near-duplicate phish detector that uses hashing to catch highly similar phish.
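The near-duplicate phish filter above is described only as hash-based; as one hedged illustration (not necessarily CANTINA+'s actual hashing scheme), a MinHash signature over word shingles of a page's text estimates Jaccard similarity cheaply, so highly similar pages can be flagged before the full classifier runs.

import hashlib

def shingles(text, k=5):
    # Set of k-word shingles from the page's visible text.
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def minhash_signature(items, num_hashes=64):
    # For each seed, keep the minimum hash over all shingles.
    return [min(int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
                for s in items)
            for seed in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

page_a = "please verify your paypal account by entering your password here today"
page_b = "please verify your paypal account by entering your password here now"
sim = estimated_jaccard(minhash_signature(shingles(page_a)),
                        minhash_signature(shingles(page_b)))
print(sim)  # pages above a high threshold (e.g., ~0.9) count as near-duplicates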
The second is a login form filter, which directly classifies Web pages with no identified login form as legitimate.\n We extensively evaluated CANTINA+ with two methods on a diverse spectrum of corpora with 8118 phish and 4883 legitimate Web pages. In the randomized evaluation, CANTINA+ achieved over 92% TP on unique testing phish and over 99% TP on near-duplicate testing phish, and about 0.4% FP with 10% training phish. In the time-based evaluation, CANTINA+ also achieved over 92% TP on unique testing phish, over 99% TP on near-duplicate testing phish, and about 1.4% FP under 20% training phish with a two-week sliding window. Capable of achieving 0.4% FP and over 92% TP, our CANTINA+ has been demonstrated to be a competitive anti-phishing solution."} {"_id": "3a9b2fce277e474fb1570da2b4380bbf8c8ceb3f", "title": "Survey of clustering algorithms", "text": "Data analysis plays an indispensable role in understanding various phenomena. Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. The diversity, on one hand, equips us with many tools. On the other hand, the profusion of options causes confusion. We survey clustering algorithms for data sets appearing in statistics, computer science, and machine learning, and illustrate their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts. Several tightly related topics, such as proximity measures and cluster validation, are also discussed."} {"_id": "076077a5771747ad7355120f1ba64cfd603141c6", "title": "A Statistical Approach to Mechanized Encoding and Searching of Literary Information", "text": "Written communication of ideas is carried out on the basis of statistical probability in that a writer chooses that level of subject specificity and that combination of words which he feels will convey the most meaning. Since this process varies among individuals and since similar ideas are therefore relayed at different levels of specificity and by means of different words, the problem of literature searching by machines still presents major difficulties. A statistical approach to this problem will be outlined and the various steps of a system based on this approach will be described. Steps include the statistical analysis of a collection of documents in a field of interest, the establishment of a set of \"notions\" and the vocabulary by which they are expressed, the compilation of a thesaurus-type dictionary and index, the automatic encoding of documents by machine with the aid of such a dictionary, the encoding of topological notations (such as branched structures), the recording of the coded information, the establishment of a searching pattern for finding pertinent information, and the programming of appropriate machines to carry out a search."} {"_id": "151c9a0e8e31ee17a43bdd66091f49324f36dbdc", "title": "Client-Side Defense Against Web-Based Identity Theft", "text": "Web spoofing is a significant problem involving fraudulent email and web sites that trick unsuspecting users into revealing private information. We discuss some aspects of common attacks and propose a framework for client-side defense: a browser plug-in that examines web pages and warns the user when requests for data may be part of a spoof attack.
While the plugin, SpoofGuard, has been tested using actual sites obtained through government agencies concerned about the problem, we expect that web spoofing and other forms of identity theft will be continuing problems in"} {"_id": "64ee319154b15f6b9baad6696daa5facb7ff2d57", "title": "Computational intelligence in sports: Challenges and opportunities within a new research domain", "text": "Computational intelligence is a branch of artificial intelligence that comprises algorithms inspired by nature. The common characteristic of all these algorithms is their collective intelligence and adaptability to a changing environment. Due to their efficiency and simplicity, these algorithms have been employed for problem solving across social and natural sciences. The aim of this paper is to demonstrate that nature-inspired algorithms are also useful within the domain of sport, in particular for obtaining safe and effective training plans targeting various aspects of performance. We outline the benefits and opportunities of applying computational intelligence in sports, and we also comment on the pitfalls and challenges for the future development of this emerging research domain."} {"_id": "456d2297894c75f5d1eb0ac20fb84d1523ceae3e", "title": "Vehicle-to-Vehicle Communication: Fair Transmit Power Control for Safety-Critical Information", "text": "Direct radio-based vehicle-to-vehicle communication can help prevent accidents by providing accurate and up-to-date local status and hazard information to the driver. In this paper, we assume that two types of messages are used for traffic safety-related communication: 1) Periodic messages (\u201cbeacons\u201d) that are sent by all vehicles to inform their neighbors about their current status (i.e., position) and 2) event-driven messages that are sent whenever a hazard has been detected. In IEEE 802.11 distributed-coordination-function-based vehicular networks, interference and packet collisions can lead to the failure of the reception of safety-critical information, in particular when the beaconing load leads to an almost-saturated channel, as it could easily happen in many critical vehicular traffic conditions. In this paper, we demonstrate the importance of transmit power control to avoid saturated channel conditions and ensure the best use of the channel for safety-related purposes. We propose a distributed transmit power control method based on a strict fairness criterion, i.e., distributed fair power adjustment for vehicular environments (D-FPAV), to control the load of periodic messages on the channel. The benefits are twofold: 1) The bandwidth is made available for higher priority data like dissemination of warnings, and 2) beacons from different vehicles are treated with \u201cequal rights,\u201d and therefore, the best possible reception under the available bandwidth constraints is ensured. We formally prove the fairness of the proposed approach. Then, we make use of the ns-2 simulator that was significantly enhanced by realistic highway mobility patterns, improved radio propagation, receiver models, and the IEEE 802.11p specifications to show the beneficial impact of D-FPAV for safety-related communications.
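The fairness idea behind D-FPAV can be caricatured in a few lines of numpy (a toy stand-in, not the actual protocol: the path-loss model, carrier-sense threshold, and parameters below are invented). All vehicles share one common transmit power, so every sender is treated equally, and that power is the largest value keeping the worst-case beaconing load under the allowed maximum.

import numpy as np

def channel_load(positions, power, rate_hz, cs_threshold=1.0):
    # Beacons/sec heard at each position: a sender is heard when its
    # received power under a simple 1/d^2 path loss exceeds cs_threshold.
    d = np.abs(positions[:, None] - positions[None, :]) + 1e-9
    heard = (power / d**2) >= cs_threshold
    return heard.sum(axis=1) * rate_hz

def fair_common_power(positions, rate_hz, max_load, p_hi=1e6):
    # Bisection: load is monotone in power, so find the largest shared
    # power whose worst-case load stays within max_load.
    lo, hi = 0.0, p_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if channel_load(positions, mid, rate_hz).max() <= max_load:
            lo = mid
        else:
            hi = mid
    return lo

positions = np.sort(np.random.default_rng(0).uniform(0, 2000, size=100))
print(fair_common_power(positions, rate_hz=10, max_load=500))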
We finally put forward a method, i.e., emergency message dissemination for vehicular environments (EMDV), for fast and effective multihop information dissemination of event-driven messages and show that EMDV benefits from the beaconing load control provided by D-FPAV with respect to both probability of reception and latency."} {"_id": "8abdb9894ea13461b84ecdc56dd38859d659425e", "title": "Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering", "text": "Word embeddings have become widely used in document analysis. While a large number of models for mapping words to vector spaces have been developed, it remains undetermined how much net gain can be achieved over traditional approaches based on bag-of-words. In this paper, we propose a new document clustering approach by combining any word embedding with a state-of-the-art algorithm for clustering empirical distributions. By using the Wasserstein distance between distributions, the word-to-word semantic relationship is taken into account in a principled way. The new clustering method is easy to use and consistently outperforms other methods on a variety of data sets. More importantly, the method provides an effective framework for determining when and how much word embeddings contribute to document analysis. Experimental results with multiple embedding models are reported."} {"_id": "80062e8a15608b23f69f72b0890d2176880f660d", "title": "Cyberbullying perpetration and victimization among adolescents in Hong Kong", "text": "Cyberbullying is a growing concern worldwide. Using a sample of 1917 secondary adolescents from seven schools, five psychometric measures (self-efficacy, empathy level, feelings regarding a harmonious school, sense of belonging to the school, and psychosocial wellbeing) and five scales regarding bullying experiences (cyber- and traditional bullying perpetration and victimization; reactions to cyberbullying victimization) were administered to explore the prevalence of cyberbullying in Hong Kong. Findings indicated that male adolescents were more likely than female adolescents to cyberbully others and to be cyber-victimized. Cyberbullying perpetration and victimization were found to be negatively associated with the adolescents' psychosocial health and sense of belonging to school. Cyber- and traditional bullying were positively correlated. Multivariate analyses indicated that being male, having a low sense of belonging to school, involvement in traditional bullying perpetration, and experiencing cyber-victimization were associated with an increased propensity to cyberbully others. Technology is advancing rapidly, and peer harassment and aggression are no longer limited to traditional bullying through physical contact. Over the past decade, information and communication technology (ICT) has become increasingly important in the lives of adolescents. In a report by Lenhart, Madden, and Hitlin (2005), it is estimated that close to 90% of American adolescents aged 12 to 17 years surf the Internet, with 51% of them using it on a daily basis. Nearly half of the adolescents surveyed have personal mobile phones, and 33% have used a mobile phone to send a text message (Lenhart et al., 2005). Such heavy use of the Internet is not novel among adolescents in Hong Kong. Many empirical studies have been conducted on Hong Kong adolescents on their excessive and/or addictive use of the Internet.
Findings of recent studies indicate that a substantially high prevalence rate of internet addiction is reported among Hong Kong adolescents. In reality, the heavy usage of ICT such as instant messaging, e-mail, text messaging, blogs, and social networking sites not only allows adolescents to connect with friends and family, but at the same time also creates the potential to meet and interact with others in harmful ways (Ybarra, Diener-West, & Leaf, 2007). Cyberbullying or online bullying is one such growing concern. Traditional bullying, a widespread problem in both school and community settings that has long been researched by scholars, in general involves an individual being exposed to negative actions by one or more individuals regularly \u2026"} {"_id": "16eaf8ed4e3f60657f29704c7cf5cbcd1505cd9b", "title": "Dynamic Modeling and Performance Analysis of a Grid-Connected Current-Source Inverter-Based Photovoltaic System", "text": "Voltage-source inverter (VSI) topology is widely used for grid interfacing of distributed generation (DG) systems. However, when employed as the power conditioning unit in photovoltaic (PV) systems, VSI normally requires another power electronic converter stage to step up the voltage, thus adding to the cost and complexity of the system. To make the proliferation of grid-connected PV systems a successful business option, the cost, performance, and life expectancy of the power electronic interface need to be improved. The current-source inverter (CSI) offers advantages over VSI in terms of inherent boosting and short-circuit protection capabilities, direct output current controllability, and a simpler ac-side filter structure. Research on CSI-based DG is still in its infancy. This paper focuses on modeling, control, and steady-state and transient performances of a PV system based on CSI. It also performs a comparative performance evaluation of VSI-based and CSI-based PV systems under transient and fault conditions. Analytical expectations are verified using simulations in the Power System Computer Aided Design/Electromagnetic Transient Including DC (PSCAD/EMTDC) environment, based on a detailed system model."} {"_id": "873eee7cf6ba60b42e1d36e8fd9e96ba9ff68598", "title": "3D deeply supervised network for automated segmentation of volumetric medical images", "text": "While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affected challenges, including the complicated anatomical environments in volumetric images, optimization difficulties of 3D networks and inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability.
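The deep-supervision mechanism just described can be sketched in PyTorch (an illustrative toy, not the paper's exact 3D DSN): an auxiliary head taps a lower layer, and its loss is added with a weight to the main loss, so gradients reach the lower layers directly instead of only through the full depth.

import torch
import torch.nn as nn

class TinyDSN3D(nn.Module):
    # Minimal 3D FCN with one deeply supervised auxiliary head.
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
        self.aux_head = nn.Conv3d(8, n_classes, 1)    # supervises block1 directly
        self.main_head = nn.Conv3d(16, n_classes, 1)

    def forward(self, x):
        h1 = self.block1(x)
        return self.main_head(self.block2(h1)), self.aux_head(h1)

model, ce = TinyDSN3D(), nn.CrossEntropyLoss()
x = torch.randn(2, 1, 16, 16, 16)                     # (batch, ch, D, H, W)
y = torch.randint(0, 2, (2, 16, 16, 16))              # voxel-wise labels
main, aux = model(x)
loss = ce(main, y) + 0.3 * ce(aux, y)                 # weighted auxiliary loss
loss.backward()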
Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We have achieved competitive segmentation results compared to state-of-the-art approaches in both challenges with a much faster speed, corroborating the effectiveness of our proposed 3D DSN."} {"_id": "5c36992f0cc206d67dc022322faf3dd2d9b0759f", "title": "Targeted Employee Retention : Performance-Based and Job-Related Differences in Reported Reasons for Staying", "text": "A content model of 12 retention factors is developed in the context of previous theory and research. Coding of open-ended responses from 24,829 employees in the leisure and hospitality industry lends support to the identified framework and reveals that job satisfaction, extrinsic rewards, constituent attachments, organizational commitment, and organizational prestige were the most frequently mentioned reasons for staying. Advancement opportunities and organizational prestige were more common reasons for staying among high performers and non-hourly workers, and extrinsic rewards was more common among low performers and hourly employees, providing support for ease/desirability of movement and psychological contract rationales. The findings highlight the importance of differentiating human resource management practices when the goal is to retain those employees valued most by the organization."} {"_id": "ce3c8c8c980e4b3aefff80cef374373e0dfda9e9", "title": "An Alpha-wave-based binaural beat sound control system using fuzzy logic and autoregressive forecasting model", "text": "We are developing a new real-time control system for customizing auditory stimulus (the binaural beat sound) by judging user alpha waves to entrain a user\u2019s feeling in the most relaxed way. Since brainwave activity provides the necessary predictive information for arousal states, we use an autoregressive forecasting model to estimate the frequency response series of the alpha frequency bands and the inverted-U concept to determine the user\u2019s arousal state. A fuzzy logic controller is also employed to regulate the binaural beat control signal based on a forecasting error signal. Our system allows comfortable user self-relaxation. The results of experiments confirm the constructed system\u2019s effectiveness and necessity."} {"_id": "b09919813a594af9b59384f261fdc9348e743b35", "title": "A Comparative Study of Stereovision Algorithms", "text": "Stereo vision has been and continues to be one of the most researched domains of computer vision, having many applications, among them, allowing the depth extraction of a scene. This paper provides a comparative study of stereo vision and matching algorithms, used to solve the correspondence problem. The study of matching algorithms was followed by experiments on the Middlebury benchmarks. The tests focused on a comparison of 6 stereovision methods. In order to assess the performance, RMS and some related statistics were computed.
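The RMS statistic for comparing a computed disparity map against ground truth is straightforward; a small numpy sketch follows (the bad-pixel rate is a common companion statistic on Middlebury-style benchmarks, included here as an assumption about which "related statistics" were used):

import numpy as np

def rms_disparity_error(pred, gt):
    # Root-mean-square disparity error over pixels with known ground truth.
    valid = np.isfinite(gt)
    return float(np.sqrt(np.mean((pred[valid] - gt[valid]) ** 2)))

def bad_pixel_rate(pred, gt, tau=1.0):
    # Fraction of pixels whose absolute disparity error exceeds tau.
    valid = np.isfinite(gt)
    return float(np.mean(np.abs(pred[valid] - gt[valid]) > tau))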
In order to emphasize the advantages of each stereo algorithm considered, two-frame methods have been employed, both local and global. The experiments conducted have shown that the best results are obtained by Graph Cuts. Unfortunately, this has a higher computational cost. If high quality is not an issue in applications, local methods provide reasonable results within a much lower time-frame and offer the possibility of parallel implementation."} {"_id": "754a42bc8525166b1cf44aac35dc21fcce23ebd7", "title": "Learning to rank academic experts in the DBLP dataset", "text": "Expert finding is an information retrieval task that is concerned with the search for the most knowledgeable people with respect to a specific topic, and the search is based on documents that describe people\u2019s activities. The task involves taking a user query as input and returning a list of people who are sorted by their level of expertise with respect to the user query. Despite recent interest in the area, the current state-of-the-art techniques lack principled approaches for optimally combining different sources of evidence. This article proposes two frameworks for combining multiple estimators of expertise. These estimators are derived from textual contents, from the graph-structure of the citation patterns for the community of experts, and from profile information about the experts. More specifically, this article explores the use of supervised learning to rank methods, as well as rank aggregation approaches, for combining all of the estimators of expertise. Several supervised learning algorithms, which are representative of the pointwise, pairwise and listwise approaches, were tested, and various state-of-the-art data fusion techniques were also explored for the rank aggregation framework. Experiments that were performed on a dataset of academic publications from the Computer Science domain attest to the adequacy of the proposed approaches."} {"_id": "69097673bd39554bbbd880ef588763b073fe79c7", "title": "A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images", "text": "BACKGROUND AND OBJECTIVES\nHighly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images used as training data or directly used as a black box to extract the deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning.\n\n\nMETHODS\nWe first apply a domain-transferred deep convolutional neural network for building a deep model; and then develop an overall deep learning architecture based on the raw pixels of original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods.
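A hedged torchvision sketch of this domain-transfer step (ImageNet-pretrained backbone, new output head, end-to-end supervised fine-tuning on raw pixels); the backbone choice, class count, and batch below are placeholders, not the paper's actual configuration:

import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone as the transferred domain knowledge (torchvision >= 0.13 API).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)        # e.g., 3 biomedical classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)                 # stand-in for a real batch
labels = torch.randint(0, 3, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)              # end-to-end supervised step
loss.backward()
optimizer.step()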
Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works.\n\n\nRESULTS\nWith the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches.\n\n\nCONCLUSIONS\nWe propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets."} {"_id": "6e32bc2d0188d98d066d4c579f653b07c00609ea", "title": "Single-Fed Low-Profile High-Gain Circularly Polarized Slotted Cavity Antenna Using a High-Order Mode", "text": "In this letter, a single-fed low-profile high-gain circularly polarized slotted cavity antenna using a high-order cavity mode, i.e., TE440 mode, is proposed. The proposed antenna has a simple structure that consists of a radiating linearly polarized slotted cavity antenna and a linear-to-circular polarization converter. An antenna prototype operating in the WLAN 5.8-GHz band, fabricated with a low-cost standard printed circuit board (PCB) process, is exemplified to validate the proposed concept. Measured results compared to their simulated counterparts are presented, and a good agreement between simulation and measurement is obtained."} {"_id": "fccb70ff224b88bb6b5a34a09b7a52b1aa460b4a", "title": "VAMS 2017: Workshop on Value-Aware and Multistakeholder Recommendation", "text": "In this paper, we summarize VAMS 2017 - a workshop on value-aware and multistakeholder recommendation co-located with RecSys 2017. The workshop encouraged forward-thinking papers in this new area of recommender systems research and obtained a diverse set of responses ranging from application results to research overviews."} {"_id": "188d823521b7d00abd6876f87509938dddfa64cf", "title": "Articulated pose estimation with flexible mixtures-of-parts", "text": "We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster."} {"_id": "1ba8348a65dcb8da96fb1cfe81d4762d17a99520", "title": "Learning from Explicit and Implicit Supervision Jointly For Algebra Word Problems", "text": "Automatically solving algebra word problems has raised considerable interest recently.
Existing state-of-the-art approaches mainly rely on learning from human-annotated equations. In this paper, we demonstrate that it is possible to efficiently mine algebra problems and their numerical solutions with little to no manual effort. To leverage the mined dataset, we propose a novel structured-output learning algorithm that aims to learn from both explicit (e.g., equations) and implicit (e.g., solutions) supervision signals jointly. Enabled by this new algorithm, our model gains 4.6% absolute improvement in accuracy on the ALG514 benchmark compared to the one without using implicit supervision. The final model also outperforms the current state-of-the-art approach by 3%."} {"_id": "994aadb63a7c972a221005bc1eb4931492c9d867", "title": "Fault investigations on die-cast copper rotors", "text": "It has long been known to the market that the use of copper in the rotor in place of aluminium, taking advantage of copper's high conductivity, can be an economic solution for various applications, including electric vehicles, in the development of energy efficient motors. In comparison to aluminium, copper die-casting is extremely difficult to perform due to its high density, viscosity and melting point. Owing to these reasons, copper die-cast rotors will evince several manufacturing problems. The objective of this paper is to identify the various defects in the manufacturing of copper die-cast rotors, and to identify the lamination-coating condition that sustains copper die-cast pressure and temperature. This paper evaluates the effectiveness of lamination coating on limiting the eddy current loss, and the recoating process to improve it in case the eddy current losses are excessive due to copper die-casting."} {"_id": "98319c8a1fc88113edbe0a309610f8ecc4b81ca9", "title": "The recognition of rice images by UAV based on capsule network", "text": "It is important to recognize rice images captured by unmanned aerial vehicle (UAV) for monitoring the growth of rice and preventing diseases and pests. Aiming at this image recognition task, we use rice images captured by UAV as our data source, and the structure of a capsule network (CapsNet) is built to recognize rice images in this paper. The images are preprocessed through the histogram equalization method into grayscale images and through a superpixel algorithm into superpixel segmentation results. Both results are fed into the CapsNet. The function of the CapsNet is to perform the reverse analysis of rice images. The CapsNet consists of five layers: an input layer, a convolution layer, a primary capsules layer, a digital capsules layer and an output layer. The CapsNet is trained for classification and predicts the output vector based on the routing-by-agreement protocol. Therefore, the features of rice images captured by UAV can be precisely and efficiently extracted. The method is more convenient than traditional manual recognition. It provides scientific support and a reference for the decision-making process of precision agriculture."} {"_id": "04000fb8bdc2707d956c6eabafecf54b7913bc65", "title": "A functional taxonomy of wireless sensor network devices", "text": "With the availability of cheap computing technology and the standardization of different wireless communication paradigms, wireless sensor networks (WSNs) are becoming a reality. This is visible in the increasing amount of research being done in the WSN area, and the growing number of companies offering commercial WSN solutions.
In this regard, we realize that there is a need for an unambiguous classification and definition of WSN devices. Such a classification scheme would benefit the WSN research and industrial community. We consider the spectrum of wireless devices ranging from passive RF devices to gateway devices for the classification. The classification scheme is based on the functionality of the devices. This scheme is analogous to an object-oriented classification, where an object is classified on the basis of its functionality and is described by its attributes. Attributes of the wireless devices are grouped into five broad categories: communication, sensing, power, memory, and \"other features\". Each of these is further classified to provide the sufficient detail required for a typical WSN application."} {"_id": "f905de2223973ecbcdf7e66ecd96b0b3b6063540", "title": "Permittivity and Loss Tangent Characterization for Garment Antennas Based on a New Matrix-Pencil Two-Line Method", "text": "The emergence of wearable antennas to be integrated into garments has revealed the need for a careful electromagnetic characterization of textile materials. Therefore, we propose in this paper a new matrix-pencil two-line method that removes perturbations in the calculated effective permittivity and loss tangent which are caused by imperfect deembedding and inhomogeneities of the textile microstrip line structure. The approach has been rigorously validated for high-frequency laminates by comparing measured and simulated data for the resonance frequency of antennas designed using the calculated parameters. The method has been successfully applied to characterize the permittivity and loss tangent of a variety of textile materials and up to a frequency of 10 GHz. Furthermore, it is shown that the use of electrotextiles in antenna design influences the effective substrate permittivity."} {"_id": "575f3571dbfc5eb20a34eedb6e6fe3b660525585", "title": "Mining geographic-temporal-semantic patterns in trajectories for location prediction", "text": "In recent years, research on location predictions by mining trajectories of users has attracted a lot of attention. Existing studies on this topic mostly treat such predictions as just a type of location recommendation, that is, they predict the next location of a user using location recommenders. However, a user usually visits somewhere for reasons other than interestingness. In this article, we propose a novel mining-based location prediction approach called Geographic-Temporal-Semantic-based Location Prediction (GTS-LP), which takes into account a user's geographic-triggered intentions, temporal-triggered intentions, and semantic-triggered intentions, to estimate the probability of the user visiting a location. The core idea underlying our proposal is the discovery of trajectory patterns of users, namely GTS patterns, to capture frequent movements triggered by the three kinds of intentions. To achieve this goal, we define a new trajectory pattern to capture the key properties of the behaviors that are motivated by the three kinds of intentions from trajectories of users. In our GTS-LP approach, we propose a series of novel matching strategies to calculate the similarity between the current movement of a user and discovered GTS patterns based on various moving intentions. On the basis of similitude, we make an online prediction as to the location the user intends to visit.
To the best of our knowledge, this is the first work on location prediction based on trajectory pattern mining that explores the geographic, temporal, and semantic properties simultaneously. By means of a comprehensive evaluation using various real trajectory datasets, we show that our proposed GTS-LP approach delivers excellent performance and significantly outperforms existing state-of-the-art location prediction methods."} {"_id": "949b3eb7d26afeb1585729b8a78575f2dbc925b1", "title": "Feature Selection and Kernel Learning for Local Learning-Based Clustering", "text": "The performance of most clustering algorithms highly relies on the representation of data in the input space or the Hilbert space of kernel methods. This paper aims to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) (Wu and Sch\u00f6lkopf 2006) method, which can outperform the global learning-based ones when dealing with high-dimensional data lying on a manifold. Specifically, we associate a weight to each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. Accordingly, the weights are estimated iteratively in the clustering process. We show that the resulting weighted regularization with an additional constraint on the weights is equivalent to a known sparse-promoting penalty. Hence, the weights of those irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on the benchmark data sets."} {"_id": "4698ed97f4a78e724c903ec1dd6e5538203237c8", "title": "Using Phase Instead of Optical Flow for Action Recognition", "text": "Currently, the most common motion representation for action recognition is optical flow. Optical flow is based on particle tracking which adheres to a Lagrangian perspective on dynamics. In contrast to the Lagrangian perspective, the Eulerian model of dynamics does not track, but describes local changes. For video, an Eulerian phase-based motion representation, using complex steerable filters, has been successfully employed recently for motion magnification and video frame interpolation. Inspired by these previous works, here we propose learning Eulerian motion representations in a deep architecture for action recognition. We learn filters in the complex domain in an end-to-end manner. We design these complex filters to resemble complex Gabor filters, typically employed for phase-information extraction. We propose a phase-information extraction module, based on these complex filters, that can be used in any network architecture for extracting Eulerian representations. We experimentally analyze the added value of Eulerian motion representations, as extracted by our proposed phase extraction module, and compare with existing motion representations based on optical flow, on the UCF101 dataset."} {"_id": "0d82347bd5f454455f564a0407b0c4aaed432d09", "title": "A generalized taxonomy of explanations styles for traditional and social recommender systems", "text": "Recommender systems usually provide explanations of their recommendations to better help users to choose products, activities or even friends. Up until now, the type of an explanation style was considered in accordance with the recommender system that employed it.
This relation was one-to-one, meaning that for each different recommender system category, there was a different explanation style category. However, this kind of one-to-one correspondence can be considered over-simplistic and non-generalizable. In contrast, we consider three fundamental resources that can be used in an explanation: users, items, and features, and any combination of them. In this survey, we define (i) the Human style of explanation, which provides explanations based on similar users, (ii) the Item style of explanation, which is based on choices made by a user on similar items and (iii) the Feature style of explanation, which explains the recommendation based on item features rated by the user beforehand. By using any combination of the aforementioned styles we can also define the Hybrid style of explanation. We demonstrate how these styles are put into practice, by presenting recommender systems that employ them. Moreover, since there is inadequate research on the impact of the social web on contemporary recommender systems and their explanation styles, we study newly emerged social recommender systems, i.e., Facebook Connect explanations (HuffPo, Netflix, etc.) and geo-social explanations that combine geographical with social data (Gowalla, Facebook Places, etc.). Finally, we summarize the results of three different user studies, which support that Hybrid is the most effective explanation style, since it incorporates all other styles."} {"_id": "1c607dcc5fd4b26603584c5f85c9c233788c64ed", "title": "Online Learning for Time Series Prediction", "text": "In this paper we address the problem of predicting a time series using the ARMA (autoregressive moving average) model, under minimal assumptions on the noise terms. Using regret minimization techniques, we develop effective online learning algorithms for the prediction problem, without assuming that the noise terms are Gaussian, identically distributed or even independent. Furthermore, we show that our algorithm\u2019s performance asymptotically approaches the performance of the best ARMA model in hindsight."} {"_id": "41d6966c926015b8e0d7b1a9de3ffab013091e15", "title": "Logarithmic regret algorithms for online convex optimization", "text": "In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover\u2019s Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret $O(\\sqrt{T})$, for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log\u2009(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1\u201319, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement.
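One member of this family is easy to sketch: on lambda-strongly-convex losses, plain online gradient descent with step sizes eta_t = 1/(lambda*t) already attains O(log T) regret. A numpy sketch under that assumption (the quadratic example losses are ours, not the paper's):

import numpy as np

def ogd_strongly_convex(grad_fns, lam, dim, radius=1.0):
    # Online gradient descent with eta_t = 1/(lam*t) and projection
    # onto an L2 ball of the given radius (the feasible set).
    x, iterates = np.zeros(dim), []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(x.copy())
        x = x - grad(x) / (lam * t)
        n = np.linalg.norm(x)
        if n > radius:
            x *= radius / n
    return iterates

# Example: f_t(x) = (lam/2) * ||x - z_t||^2 is lam-strongly convex.
lam = 1.0
zs = np.random.default_rng(0).normal(size=(1000, 2)) * 0.1
iterates = ogd_strongly_convex([lambda x, z=z: lam * (x - z) for z in zs], lam, dim=2)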
The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover\u2019s algorithm and gradient descent."} {"_id": "08b425412e69b2751683dd299d8e939798b81614", "title": "Averaging Expert Predictions", "text": "We consider algorithms for combining advice from a set of experts. In each trial, the algorithm receives the predictions of the experts and produces its own prediction. A loss function is applied to measure the discrepancy between the predictions and actual observations. The algorithm keeps a weight for each expert. At each trial the weights are first used to help produce the prediction and then updated according to the observed outcome. Our starting point is Vovk\u2019s Aggregating Algorithm, in which the weights have a simple form: the weight of an expert decreases exponentially as a function of the loss incurred by the expert. The prediction of the Aggregating Algorithm is typically a nonlinear function of the weights and the experts\u2019 predictions. We analyze here a simplified algorithm in which the weights are as in the original Aggregating Algorithm, but the prediction is simply the weighted average of the experts\u2019 predictions. We show that for a large class of loss functions, even with the simplified prediction rule the additional loss of the algorithm over the loss of the best expert is at most c ln n, where n is the number of experts and c a constant that depends on the loss function. Thus, the bound is of the same form as the known bounds for the Aggregating Algorithm, although the constants here are not quite as good. We use relative entropy to rewrite the bounds in a stronger form and to motivate the update."} {"_id": "1fbfa8b590ce4679367d73cb8e4f2d169ae5c624", "title": "Online Convex Programming and Generalized Infinitesimal Gradient Ascent", "text": "Convex programming involves a convex set F \u2286 R^n and a convex function c : F \u2192 R. The goal of convex programming is to find a point in F which minimizes c. In this paper, we introduce online convex programming. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain, apply it to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent."} {"_id": "dbfcb2f0d550f271fd0060ab7baaf1142093a9e4", "title": "Prediction, learning, and games", "text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will cost much money.
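The simplified forecaster analyzed in the "Averaging Expert Predictions" abstract above admits a very short sketch: weights decay exponentially with each expert's loss, and the prediction is the plain weighted average (the square loss and learning rate eta here are our illustrative choices):

import numpy as np

def weighted_average_forecaster(expert_preds, outcomes, eta=0.5):
    # expert_preds: (T, n) predictions in [0, 1]; outcomes: length-T outcomes.
    T, n = expert_preds.shape
    w = np.ones(n) / n
    preds = np.empty(T)
    for t in range(T):
        preds[t] = w @ expert_preds[t]                  # weighted-average prediction
        w *= np.exp(-eta * (expert_preds[t] - outcomes[t]) ** 2)
        w /= w.sum()                                    # exponential weight update
    return preds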
So here, by reading Prediction, Learning, and Games, you can gain more advantages with a limited budget."} {"_id": "ee82527145770200c774a5bf952989c7225670ab", "title": "Air Craft Winglet Design and Performance: Cant Angle Effect", "text": "A winglet is a device used to improve the efficiency of aircraft by lowering the lift induced drag caused by wingtip vortices. It is a vertical or angled extension at the tips of each wing. Winglets improve efficiency by diffusing the shed wingtip vortex, which in turn reduces the drag due to lift and improves the wing\u2019s lift-over-drag ratio. Winglets increase the effective aspect ratio of a wing without adding greatly to the structural stress and hence necessary weight of its structure. In this research, a numerical validation procedure (by FLUENT\u00ae computational fluid dynamics software with the Spalart-Allmaras turbulence model) is described for the determination and estimation of the aerodynamic characteristics of a three-dimensional subsonic rectangular wing (with NACA 653218 airfoil cross section). In the present work, good agreement was observed between the numerical study and the experimental work. This paper describes a CFD 3-dimensional winglet analysis that was performed on a Cessna wing with a NACA2412 cross-sectional airfoil. The wing has a span of 13.16 m, a root chord of 1.857 m, a tip chord of 0.928 m, a sweep angle of 11 degrees, and a taper ratio of 0.5. The present study considers the wing without a winglet and with winglets at cant angles of 0, 30, and 45 degrees. A CFD simulation was performed to compare the aerodynamic characteristics: lift coefficient CL, drag coefficient CD, lift-to-drag ratio L/D, pathlines, and pressure contours. The models run at a Mach number of 0.2 at sea level. The pressure and temperature of air at this height are 101.325 kPa and 288.2 K respectively. The results show that the wing with winglet can increase lift by approximately 12%. The wing with winglet can decrease drag by approximately 4%. The wing with winglet can increase the lift-to-drag ratio L/D by about 11% across different phases of flight."} {"_id": "c288e195c0d84248b5555aca7d62916e32e5b8cf", "title": "Highly compact 1T-1R architecture (4F2 footprint) involving fully CMOS compatible vertical GAA nano-pillar transistors and oxide-based RRAM cells exhibiting excellent NVM properties and ultra-low power operation", "text": "For the first time, a nanometer-scale 1T-1R non-volatile memory (NVM) architecture comprising RRAM cells built on vertical GAA nano-pillar transistors, either junction-less or junction-based, is systematically investigated. Transistors are fabricated using fully CMOS compatible technology and RRAM cells are stacked onto the tip of the nano-pillars (with a diameter down to ~37nm) to achieve a compact 4F2 footprint. In addition, through this platform, different RRAM stacks comprising CMOS friendly materials are studied, and it is found that TiN/Ni/HfO2/n+-Si RRAM cells show excellent switching properties in either bipolar or unipolar mode, including (1) ultra-low switching current/power: SET ~20nA/85nW and RESET ~200pA/700pW, (2) multi-level switchability, (3) good endurance, >10^5, (4) satisfactory retention, 10 years at 85\u00b0C; and (5) fast switching speed ~50ns.
Moreover, this vertical gate-all-around (GAA) nano-pillar-based 1T-1R architecture provides a more direct and flexible test vehicle to verify the scalability and functionality of RRAM candidates with a dimension close to actual application."} {"_id": "7d27130a0d77fe0a89273e93a30db8f3a64b9fa3", "title": "Sentiment analysis and classification based on textual reviews", "text": "Mining is used to help people to extract valuable information from large amounts of data. Sentiment analysis focuses on the analysis and understanding of the emotions from the text patterns. It identifies the opinion or attitude that a person has towards a topic or an object and it seeks to identify the viewpoint underlying a text span. Sentiment analysis is useful in social media monitoring to automatically characterize the overall feeling or mood of consumers as reflected in social media toward a specific brand or company and determine whether they are viewed positively or negatively on the web. This new form of analysis has been widely adopted in customer relation management especially in the context of complaint management. For automating the task of classifying a single-topic textual review, document-level sentiment classification is used for expressing a positive or negative sentiment. So analyzing sentiment using multi-theme documents is very difficult and the accuracy in the classification is less. The document-level classification approximately classifies the sentiment using bag-of-words features in the Support Vector Machine (SVM) algorithm. In the proposed work, a new algorithm called Sentiment Fuzzy Classification algorithm with part-of-speech tags is used to improve the classification accuracy on the benchmark movie reviews dataset."} {"_id": "932e38056f1018286563c96716e0b7a0582d3c7b", "title": "Categorizing User Sessions at Pinterest", "text": "Different users can use a given Internet application in many different ways. The ability to record detailed event logs of user in-application activity allows us to discover ways in which the application is being used. This enables personalization and also leads to important insights with actionable business and product outcomes. Here we study the problem of user session categorization, where the goal is to automatically discover categories/classes of user in-session behavior using event logs, and then consistently categorize each user session into the discovered classes. We develop a three-stage approach which uses clustering to discover categories of sessions, then builds classifiers to classify new sessions into the discovered categories, and finally performs daily classification in a distributed pipeline. An important innovation of our approach is selecting a set of events as long-tail features, and replacing them with a new feature that is less sensitive to product experimentation and logging changes. This allows for robust and stable identification of session types even though the underlying application is constantly changing. We deploy the approach to Pinterest and demonstrate its effectiveness. We discover insights that have consequences for product monetization, growth, and design.
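The cluster-then-classify pattern described above can be sketched with scikit-learn (a single-machine toy: the real pipeline is distributed, and the session features below are invented stand-ins for event-log counts):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
sessions = rng.poisson(3.0, size=(5000, 12)).astype(float)   # per-session event counts

# Stage 1: discover session categories on a historical sample.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(sessions)

# Stage 2: train a classifier to reproduce the discovered categories, which is
# cheaper and more stable to apply than re-clustering every day.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(sessions, kmeans.labels_)

# Stage 3: consistently categorize a new day's sessions.
new_sessions = rng.poisson(3.0, size=(1000, 12)).astype(float)
categories = clf.predict(new_sessions)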
Our solution classifies millions of user sessions daily and leads to actionable insights."} {"_id": "c2d1bbff6ec6ccfb2e6b49d8f78f6854571941da", "title": "A PSO-Based Hybrid Metaheuristic for Permutation Flowshop Scheduling Problems", "text": "This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, simulated annealing hybridized with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path-relinking is presented to replace the particles that have been trapped in local optima. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature."} {"_id": "a3bfe87159938a96d3f2037ff0fe10adca0d21b0", "title": "Fingerprinting Electronic Control Units for Vehicle Intrusion Detection", "text": "As more software modules and external interfaces are added to vehicles, new attacks and vulnerabilities are emerging. Researchers have demonstrated how to compromise in-vehicle Electronic Control Units (ECUs) and control vehicle maneuvers. To counter these vulnerabilities, various types of defense mechanisms have been proposed, but they have not been able to meet the need for strong protection of safety-critical ECUs against in-vehicle network attacks. To mitigate this deficiency, we propose an anomaly-based intrusion detection system (IDS), called Clock-based IDS (CIDS). It measures and then exploits the intervals of periodic in-vehicle messages for fingerprinting ECUs. The thus-derived fingerprints are then used for constructing a baseline of ECUs\u2019 clock behaviors with the Recursive Least Squares (RLS) algorithm. Based on this baseline, CIDS uses the Cumulative Sum (CUSUM) method to detect any abnormal shifts in the identification errors \u2014 a clear sign of intrusion. This allows quick identification of in-vehicle network intrusions with a low false-positive rate of 0.055%. Unlike state-of-the-art IDSs, if an attack is detected, CIDS\u2019s fingerprinting of ECUs also facilitates a root-cause analysis, identifying which ECU mounted the attack. Our experiments on a CAN bus prototype and on real vehicles have shown CIDS to be able to detect a wide range of in-vehicle network attacks."} {"_id": "c814a2f44f36d54b6337dd64c7bc8b3b80127d3e", "title": "A3P: adaptive policy prediction for shared images over popular content sharing sites", "text": "More and more people go online today and share their personal images using popular web services like Picasa. While enjoying the convenience brought by advanced technology, people also become aware of the privacy issues of the data being shared. Recent studies have highlighted that people expect more tools to allow them to regain control over their privacy. In this work, we propose an Adaptive Privacy Policy Prediction (A3P) system to help users compose privacy settings for their images. In particular, we examine the role of image content and metadata as possible indicators of users' privacy preferences. We propose a two-level image classification framework to obtain image categories which may be associated with similar policies. 
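As a concrete illustration of the CUSUM step described in the CIDS abstract above, here is a minimal Python sketch of detecting a persistent shift in a stream of identification errors; the drift parameter k, the threshold h, and the toy error values are illustrative assumptions, not numbers from the paper.

```python
# Minimal CUSUM sketch: flag a persistent shift in a stream of
# identification errors, as in clock-based intrusion detection.
# k (allowed drift) and h (alarm threshold) are illustrative.
def cusum_detect(errors, k=0.5, h=5.0):
    s_pos, s_neg = 0.0, 0.0
    for t, e in enumerate(errors):
        s_pos = max(0.0, s_pos + e - k)   # accumulates upward shifts
        s_neg = max(0.0, s_neg - e - k)   # accumulates downward shifts
        if s_pos > h or s_neg > h:
            return t                      # index where the alarm fires
    return None

normal = [0.1, -0.2, 0.0, 0.15, -0.1]
attack = [2.2, 2.1, 2.3, 2.2, 2.4]        # sudden clock-skew shift
print(cusum_detect(normal + attack))      # fires a few samples into the shift
```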
Then, we develop a policy prediction algorithm to automatically generate a policy for each newly uploaded image. Most importantly, the generated policy will follow the trend of the user's privacy concerns as they evolve over time. We have conducted an extensive user study, and the results demonstrate the effectiveness of our system, with a prediction accuracy of around 90%."} {"_id": "42d5c5ff783e80455475cc23e62f852d49ca10ea", "title": "Universal Planning Networks-Long Version + Supplementary", "text": "A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via image-based goals. We were able to achieve successful transfer of visuomotor planning strategies across robots with significantly different morphologies and actuation capabilities."} {"_id": "1ff60a51c94eebafdaacc6d0d3848e39b470b609", "title": "Deep Learning of Semisupervised Process Data With Hierarchical Extreme Learning Machine and Soft Sensor Application", "text": "Data-driven soft sensors have been widely utilized in industrial processes to estimate the critical quality variables which are intractable to directly measure online through physical devices. Due to the low sampling rate of quality variables, most of the soft sensors are developed on a small number of labeled samples, and the large amount of unlabeled process data is discarded. The loss of information greatly limits the improvement of quality prediction accuracy. One of the main issues for data-driven soft sensors is to fully exploit the information contained in all available process data. This paper proposes a semisupervised deep learning model for soft sensor development based on the hierarchical extreme learning machine (HELM). First, the deep network structure of autoencoders is implemented for unsupervised feature extraction with all the process samples. Then, an extreme learning machine is utilized for regression by appending the quality variable. Meanwhile, the manifold regularization method is introduced for semisupervised model training. The new method can not only deeply extract the information that the data contains, but learn more from the extra unlabeled samples as well. 
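The extreme learning machine regression mentioned in the HELM abstract above can be illustrated with a minimal sketch: a fixed random hidden layer followed by a ridge least-squares readout. All sizes, the toy target, and the regularization value are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal extreme learning machine sketch: random hidden layer plus a
# ridge least-squares readout; no backpropagation is needed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # process measurements
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # toy quality variable

W = rng.normal(size=(5, 50))                       # fixed random input weights
b = rng.normal(size=50)
H = np.tanh(X @ W + b)                             # hidden-layer features

lam = 1e-2                                         # ridge regularization
beta = np.linalg.solve(H.T @ H + lam * np.eye(50), H.T @ y)
print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```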
The proposed semisupervised HELM method is applied in a high\u2013low transformer to estimate the carbon monoxide content, showing a significant improvement in prediction accuracy compared to traditional methods."} {"_id": "2948d0e6b3faa43bbfce001bd4fe820bde00d46c", "title": "A 45nm Logic Technology with High-k+Metal Gate Transistors, Strained Silicon, 9 Cu Interconnect Layers, 193nm Dry Patterning, and 100% Pb-free Packaging", "text": "A 45 nm logic technology is described that for the first time incorporates high-k + metal gate transistors in a high volume manufacturing process. The transistors feature 1.0 nm EOT high-k gate dielectric, dual band edge workfunction metal gates and third generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. The technology also features trench contact based local routing, 9 layers of copper interconnect with low-k ILD, low cost 193 nm dry patterning, and 100% Pb-free packaging. Process yield, performance and reliability are demonstrated on 153 Mb SRAM arrays with SRAM cell size of 0.346 \u03bcm2, and on multiple microprocessors."} {"_id": "a11762d3a16d3d7951f2312cb89bbedff6cfbf21", "title": "An Intra-Panel Interface With Clock-Embedded Differential Signaling for TFT-LCD Systems", "text": "In this paper, an intra-panel interface with clock-embedded differential signaling for TFT-LCD systems is proposed. The proposed interface reduces the number of signal lines between the timing controller and the column drivers in a TFT-LCD panel by adopting the embedded clock scheme. The protocol of the proposed interface provides a delay-locked loop (DLL)-based clock recovery scheme for the receiver. The timing controller and the column driver integrated with the proposed interface are fabricated in 0.13- \u03bcm CMOS process technology and 0.18-\u03bcm high voltage CMOS process technology, respectively. The proposed interface is verified on a 47-inch Full High-Definition (FHD) (1920RGB×1080) TFT-LCD panel with 8-bit RGB and 120-Hz driving technology. The maximum data rate per differential pair was measured to be as high as 2.0 Gb/s in a wafer test."} {"_id": "c5218d2c19ae5de4e28452460e0bdc97d68c1b35", "title": "Associations among Wine Grape Microbiome, Metabolome, and Fermentation Behavior Suggest Microbial Contribution to Regional Wine Characteristics", "text": "UNLABELLED\nRegionally distinct wine characteristics (terroir) are an important aspect of wine production and consumer appreciation. Microbial activity is an integral part of wine production, and grape and wine microbiota present regionally defined patterns associated with vineyard and climatic conditions, but the degree to which these microbial patterns associate with the chemical composition of wine is unclear. Through a longitudinal survey of over 200 commercial wine fermentations, we demonstrate that both grape microbiota and wine metabolite profiles distinguish viticultural area designations and individual vineyards within Napa and Sonoma Counties, California. Associations among wine microbiota and fermentation characteristics suggest new links between microbiota, fermentation performance, and wine properties. The bacterial and fungal consortia of wine fermentations, composed from vineyard and winery sources, correlate with the chemical composition of the finished wines and predict metabolite abundances in finished wines using machine learning models. 
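As a hedged sketch of the kind of machine learning model the wine microbiome abstract above alludes to, the following fits a random forest that maps microbial relative abundances to a metabolite level. It assumes scikit-learn is available, and the synthetic Dirichlet-sampled profiles merely stand in for real microbiome data; nothing here reproduces the paper's actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Toy stand-in: 150 fermentations, 40 taxa, one metabolite driven by two taxa.
rng = np.random.default_rng(1)
taxa = rng.dirichlet(np.ones(40), size=150)        # relative abundances
metabolite = 3.0 * taxa[:, 0] - 1.5 * taxa[:, 3] + 0.05 * rng.normal(size=150)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, taxa, metabolite, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```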
The use of postharvest microbiota as an early predictor of wine chemical composition is unprecedented and potentially poses a new paradigm for quality control of agricultural products. These findings add further evidence that microbial activity is associated with wine terroir.\n\n\nIMPORTANCE\nWine production is a multi-billion-dollar global industry for which microbial control and wine chemical composition are crucial aspects of quality. Terroir is an important feature of consumer appreciation and wine culture, but the many factors that contribute to terroir are nebulous. We show that grape and wine microbiota exhibit regional patterns that correlate with wine chemical composition, suggesting that the grape microbiome may influence terroir. In addition to enriching our understanding of how growing region and wine properties interact, this may provide further economic incentive for agricultural and enological practices that maintain regional microbial biodiversity."} {"_id": "328abd11800821d149e5366b54943d1e9b24d675", "title": "Suspect tracking based on call logs analysis and visualization", "text": "In Thailand, investigators can track and find suspects by using call logs from suspects' phone numbers and their contacts. In many cases, suspects change their phone numbers to avoid tracking. The problem is that investigators have difficulty tracking these suspects from their call logs. Our hypothesis is that each user has a unique calling behavior pattern. The calling pattern is important for tracking a suspect's telephone number. To compare the calling patterns, we consider common contact groups. Thus, the aim of this project is to develop a call log tracking system which can predict a set of possible new suspect phone numbers and present their contacts' connections with a network diagram visualization based on a graph database (Neo4j). This system is valuable for investigators because it saves them from analyzing excessive call log data. The system can predict the possible suspect's phone numbers. Furthermore, our visualization can enhance the investigator's ability to see the relations among related phone numbers. Finally, the experimental results on real call logs demonstrate that our method can track telephone numbers with approximately 69% matching accuracy when predicting a single possible suspect phone number and 89% when predicting multiple possible suspect phone numbers."} {"_id": "0c504d0c8319802c9f63eeea0d7b437cded2f4ef", "title": "High-performance complex event processing over streams", "text": "In this paper, we present the design, implementation, and evaluation of a system that executes complex event queries over real-time streams of RFID readings encoded as events. These complex event queries filter and correlate events to match specific patterns, and transform the relevant events into new composite events for the use of external monitoring applications. Stream-based execution of these queries enables time-critical actions to be taken in environments such as supply chain management, surveillance and facility management, healthcare, etc. We first propose a complex event language that significantly extends existing event languages to meet the needs of a range of RFID-enabled monitoring applications. We then describe a query plan-based approach to efficiently implementing this language. 
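The common-contact-group idea in the suspect tracking abstract above can be sketched as ranking candidate numbers by the overlap between their contact sets and the old number's contacts. The Jaccard measure and the toy phone numbers below are illustrative assumptions, not the paper's exact scoring.

```python
# Sketch of the common-contact idea: rank candidate numbers by how much
# their contact set overlaps the old number's contacts. Numbers are made up.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

old_contacts = {"0811", "0822", "0833", "0844"}
candidates = {
    "0901": {"0811", "0822", "0833", "0855"},  # likely the same user
    "0902": {"0700", "0701"},
}
ranked = sorted(candidates, key=lambda n: jaccard(old_contacts, candidates[n]),
                reverse=True)
print(ranked[0], jaccard(old_contacts, candidates[ranked[0]]))  # 0901 0.6
```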
Our approach uses native operators to efficiently handle query-defined sequences, which are a key component of complex event processing, and pipeline such sequences to subsequent operators that are built by leveraging relational techniques. We also develop a large suite of optimization techniques to address challenges such as large sliding windows and intermediate result sizes. We demonstrate the effectiveness of our approach through a detailed performance analysis of our prototype implementation under a range of data and query workloads as well as through a comparison to a state-of-the-art stream processor."} {"_id": "c567bdc35a40e568e0661446ac4f9b397787e40d", "title": "A 2.4 GHz Interferer-Resilient Wake-Up Receiver Using A Dual-IF Multi-Stage N-Path Architecture", "text": "A 2.4 GHz interferer-resilient wake-up receiver for ultra-low power wireless sensor nodes uses an uncertain-IF dual-conversion topology, combining a distributed multi-stage N-path filtering technique with an unlocked low-Q resonator-referred local oscillator. This structure provides narrow-band selectivity and strong immunity against interferers, while avoiding expensive external resonant components such as BAW resonators or crystals. The 65 nm CMOS receiver prototype provides a sensitivity of -97 dBm and a carrier-to-interferer ratio better than -27 dB at 5 MHz offset, for a data rate of 10 kb/s at a 10^-3 bit error rate, while consuming 99 \u03bcW from a 0.5 V voltage supply under continuous operation."} {"_id": "7e9e66b55476bb0c58f23695c9a6554b6e74d906", "title": "A survey of smoothing techniques for ME models", "text": "In certain contexts, maximum entropy (ME) modeling can be viewed as maximum likelihood (ML) training for exponential models, and like other ML methods is prone to overfitting of training data. Several smoothing methods for ME models have been proposed to address this problem, but previous results do not make it clear how these smoothing methods compare with smoothing methods for other types of related models. In this work, we survey previous work in ME smoothing and compare the performance of several of these algorithms with conventional techniques for smoothing n-gram language models. Because of the mature body of research in n-gram model smoothing and the close connection between ME and conventional n-gram models, this domain is well-suited to gauge the performance of ME smoothing methods. Over a large number of data sets, we find that fuzzy ME smoothing performs as well as or better than all other algorithms under consideration. We contrast this method with previous n-gram smoothing methods to explain its superior performance."} {"_id": "9302c3d5604ac39466548af3a24dfe7bdf67a777", "title": "Calcium currents in hair cells isolated from semicircular canals of the frog.", "text": "L-type and R-type Ca(2+) currents were detected in frog semicircular canal hair cells. The former was noninactivating and nifedipine-sensitive (5 microM); the latter, partially inactivated, was resistant to omega-conotoxin GVIA (5 microM), omega-conotoxin MVIIC (5 microM), and omega-agatoxin IVA (0.4 microM), but was sensitive to mibefradil (10 microM). Both currents were sensitive to Ni(2+) and Cd(2+) (>10 microM). In some cells the L-type current amplitude increased almost twofold upon repetitive stimulation, whereas the R-type current remained unaffected. Eventually, run-down occurred for both currents, but was prevented by the protease inhibitor calpastatin. 
The R-type current peak component ran down first, without changing its plateau, suggesting that two channel types generate the R-type current. This peak component appeared at -40 mV, reached a maximal value at -30 mV, and became undetectable for voltages ≥0 mV, suggestive of a novel transient current: its inactivation was indeed reversibly removed when Ba(2+) was the charge carrier. The L-type current and the R-type current plateau were appreciable at -60 mV and peaked at -20 mV: the former current did not reverse for voltages up to +60 mV, the latter reversed between +30 and +60 mV due to an outward Cs(+) current flowing through the same Ca(2+) channel. The physiological role of these currents in hair cell function is discussed."} {"_id": "24b5ac90c5155b7db5b80bfc2767ec1d7e2fb8fe", "title": "Increasing pre-kindergarten early literacy skills in children with developmental disabilities and delays.", "text": "Two hundred and nine children receiving early childhood special education services for developmental disabilities or delays who also had behavioral, social, or attentional difficulties were included in a study of an intervention to increase school readiness, including early literacy skills. Results showed that the intervention had a significant positive effect on children's literacy skills from baseline to the end of summer before the start of kindergarten (d=.14). The intervention also had significant indirect effects on teacher ratings of children's literacy skills during the fall of their kindergarten year (\u03b2=.09). Additionally, when scores were compared to standard benchmarks, a greater percentage of the children who received the intervention moved from being at risk for reading difficulties to having low risk. Overall, this study demonstrates that a school readiness intervention delivered prior to the start of kindergarten may help increase children's early literacy skills."} {"_id": "42ef50955a7f12afad78f0bd3819dbc555580225", "title": "Deep Learning for Imbalanced Multimedia Data Classification", "text": "Classification of imbalanced data is an important research problem as lots of real-world data sets have skewed class distributions in which the majority of data instances (examples) belong to one class and far fewer instances belong to others. While in many applications, the minority instances actually represent the concept of interest (e.g., fraud in banking operations, abnormal cells in medical data, etc.), a classifier induced from an imbalanced data set is more likely to be biased towards the majority class and show very poor classification accuracy on the minority class. Despite extensive research efforts, imbalanced data classification remains one of the most challenging problems in data mining and machine learning, especially for multimedia data. To tackle this challenge, in this paper, we propose an extended deep learning approach to achieve promising performance in classifying skewed multimedia data sets. Specifically, we investigate the integration of bootstrapping methods and a state-of-the-art deep learning approach, Convolutional Neural Networks (CNNs), with extensive empirical studies. Considering the fact that deep learning approaches such as CNNs are usually computationally expensive, we propose to feed low-level features to CNNs and prove its feasibility in achieving promising performance while saving a lot of training time. 
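One simple form of the bootstrapping integrated with CNNs in the imbalanced-data abstract above is oversampling the minority class with replacement to form a balanced training set. This minimal sketch uses toy arrays in place of low-level multimedia features; the class sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of bootstrapping a balanced training set from imbalanced data:
# sample the minority class with replacement until the classes are even.
rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(950, 16))   # majority-class features
X_min = rng.normal(2.0, 1.0, size=(50, 16))    # minority-class features

idx = rng.integers(0, len(X_min), size=len(X_maj))   # oversample minority
X_bal = np.vstack([X_maj, X_min[idx]])
y_bal = np.array([0] * len(X_maj) + [1] * len(X_maj))
print(X_bal.shape, y_bal.mean())               # balanced: class mean is 0.5
```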
The experimental results show the effectiveness of our framework in classifying severely imbalanced data in the TRECVID data set."} {"_id": "d55dbdb1be09ab502c63dd0592cf4e7a600c1cc7", "title": "Clinical performance of porcelain laminate veneers: outcomes of the aesthetic pre-evaluative temporary (APT) technique.", "text": "This article evaluates the long-term clinical performance of porcelain laminate veneers bonded to teeth prepared with the use of an additive mock-up and aesthetic pre-evaluative temporary (APT) technique over a 12-year period. Sixty-six patients were restored with 580 porcelain laminate veneers. The technique, used for diagnosis, esthetic design, tooth preparation, and provisional restoration fabrication, was based on the APT protocol. The influence of several factors on the durability of veneers was analyzed according to pre- and postoperative parameters. With utilization of the APT restoration, over 80% of tooth preparations were confined to the dental enamel. Over 12 years, 42 laminate veneers failed, but when the preparations were limited to the enamel, the failure rate resulting from debonding and microleakage decreased to 0%. Porcelain laminate veneers presented a successful clinical performance in terms of marginal adaptation, discoloration, gingival recession, secondary caries, postoperative sensitivity, and satisfaction with restoration shade at the end of 12 years. The APT technique facilitated diagnosis, communication, and preparation, providing predictability for the restorative treatment. Limiting the preparation depth to the enamel surface significantly increases the performance of porcelain laminate veneers."} {"_id": "8dcc86121219fdb9f813d43b35c632811da18b73", "title": "A framework for consciousness", "text": "Here we summarize our present approach to the problem of consciousness. After an introduction outlining our general strategy, we describe what is meant by the term 'framework' and set it out under ten headings. This framework offers a coherent scheme for explaining the neural correlates of (visual) consciousness in terms of competing cellular assemblies. Most of the ideas we favor have been suggested before, but their combination is original. We also outline some general experimental approaches to the problem and, finally, acknowledge some relevant aspects of the brain that have been left out of the proposed framework."} {"_id": "5626bb154796b2157e15dbc3f5d3950aedf87d63", "title": "Passive returning mechanism for twisted string actuators", "text": "The twisted string actuator is an actuator that is gaining popularity in various engineering and robotics applications. However, the fundamental limitation of actuators of this type is the uni-directional action, meaning that the actuator can contract but requires external power to return to its initial state. This paper proposes two novel passive extension mechanisms based on the buckling effect to solve the uni-directional issue of the twisted string actuator. The proposed mechanisms are mechanically simple and compact and provide a nearly-constant extension force throughout the operation range. The constant force can fully extend the twisted string actuator with minimal loss of force during contraction. 
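For context on why a twisted string actuator contracts at all, the following sketch evaluates the kinematic model commonly used for such actuators (it is not taken from the paper itself): twisting a string of length L and radius r by theta radians shortens the transmission to sqrt(L^2 - (theta*r)^2). The dimensions are illustrative assumptions.

```python
import math

# Common twisted-string kinematic model (an assumption, not from the paper):
# contraction(theta) = L - sqrt(L^2 - (theta*r)^2).
def contraction(L, r, theta):
    return L - math.sqrt(L * L - (theta * r) ** 2)

L, r = 0.20, 0.0005                     # 20 cm string, 0.5 mm radius
for turns in (10, 30, 50):
    theta = 2 * math.pi * turns
    print(turns, "turns ->", round(1000 * contraction(L, r, theta), 2), "mm")
```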
The designed mechanisms are evaluated in a series of practical tests, and their performances are compared and discussed."} {"_id": "1426c3469f91faa8577293eaabb4cb9b37db228a", "title": "Compact two-layer slot array antenna with SIW for 60GHz wireless applications", "text": "In a variety of microwave and millimeter-wave applications where high performance antennas are required, waveguide slot arrays have received considerable attention due to their high aperture efficiency, low side lobe levels, and low cross polarization. Resonant slot arrays usually suffer from narrow bandwidth and high cost due to the high precision required in manufacturing. Furthermore, because standard rectangular waveguides are used, the antenna array is thick and heavy and is not suitable for monolithic integration with high frequency printed circuits."} {"_id": "20324e90033dee12c9e95952c3097ca91773be1e", "title": "Latent Aspect Mining via Exploring Sparsity and Intrinsic Information", "text": "We investigate the latent aspect mining problem that aims at automatically discovering aspect information from a collection of review texts in a domain in an unsupervised manner. One goal is to discover a set of aspects which are previously unknown for the domain, and predict the user's ratings on each aspect for each review. Another goal is to detect key terms for each aspect. Existing works on predicting aspect ratings fail to handle the aspect sparsity problem in the review texts, leading to unreliable predictions. We propose a new generative model to tackle the latent aspect mining problem in an unsupervised manner. By considering the user and item side information of review texts, we introduce two latent variables, namely, user intrinsic aspect interest and item intrinsic aspect quality, facilitating better modeling of aspect generation and leading to improvements in the accuracy and reliability of predicted aspect ratings. Furthermore, we provide an analytical investigation on the Maximum A Posteriori (MAP) optimization problem used in our proposed model and develop a new block coordinate gradient descent algorithm to efficiently solve the optimization with closed-form updating formulas. We also provide a convergence analysis. Experimental results on two real-world product review corpora demonstrate that our proposed model outperforms existing state-of-the-art models."} {"_id": "59b5b9defaa941bbea8d906a6051d7a99e4736b8", "title": "Self-compassion, body image, and disordered eating: A review of the literature.", "text": "Self-compassion, treating oneself as a loved friend might, demonstrates beneficial associations with body image and eating behaviors. In this systematic review, 28 studies supporting the role of self-compassion as a protective factor against poor body image and eating pathology are reviewed. Findings across various study designs consistently linked self-compassion to lower levels of eating pathology, and self-compassion was implicated as a protective factor against poor body image and eating pathology, with a few exceptions. These findings offer preliminary support that self-compassion may protect against eating pathology by: (a) decreasing eating disorder-related outcomes directly; (b) preventing initial occurrence of a risk factor of a maladaptive outcome; (c) interacting with risk factors to interrupt their deleterious effects; and (d) disrupting the mediational chain through which risk factors operate. 
We conclude with suggestions for future research that may inform intervention development, including the utilization of research designs that better afford causal inference."} {"_id": "4859285b20507e2574b178c099eed812422ec84b", "title": "Plug-and-Play priors for model based reconstruction", "text": "Model-based reconstruction is a powerful framework for solving a variety of inverse problems in imaging. In recent years, enormous progress has been made in the problem of denoising, a special case of an inverse problem where the forward model is an identity operator. Similarly, great progress has been made in improving model-based inversion when the forward model corresponds to complex physical measurements in applications such as X-ray CT, electron-microscopy, MRI, and ultrasound, to name just a few. However, combining state-of-the-art denoising algorithms (i.e., prior models) with state-of-the-art inversion methods (i.e., forward models) has been a challenge for many reasons. In this paper, we propose a flexible framework that allows state-of-the-art forward models of imaging systems to be matched with state-of-the-art priors or denoising models. This framework, which we term Plug-and-Play priors, has the advantage that it dramatically simplifies software integration, and moreover, it allows state-of-the-art denoising methods that have no known formulation as an optimization problem to be used. We demonstrate with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions."} {"_id": "5a114d3050a0a33f8cc6d28d55fa048a5a7ab6f2", "title": "A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures", "text": "An algorithm is presented for the rapid solution of the phase of the complete wave function whose intensities in the diffraction and imaging planes of an imaging system are known. A proof is given showing that a defined error between the estimated function and the correct function must decrease as the algorithm iterates. The problem of uniqueness is discussed and results are presented demonstrating the power of the method."} {"_id": "02d1285ff1cdd96a3064d651ddcf4ae2a9b5b7f9", "title": "A Partial Derandomization of PhaseLift using Spherical Designs", "text": "The problem of retrieving phase information from amplitude measurements alone has appeared in many scientific disciplines over the last century. PhaseLift is a recently introduced algorithm for phase recovery that is computationally efficient, numerically stable, and comes with rigorous performance guarantees. PhaseLift is optimal in the sense that the number of amplitude measurements required for phase reconstruction scales linearly with the dimension of the signal. However, it specifically demands Gaussian random measurement vectors \u2014 a limitation that restricts practical utility and obscures the specific properties of measurement ensembles that enable phase retrieval. Here we present a partial derandomization of PhaseLift that only requires sampling from certain polynomial-size vector configurations, called t-designs. Such configurations have been studied in algebraic combinatorics, coding theory, and quantum information. We prove reconstruction guarantees for a number of measurements that depends on the degree t of the design. 
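The phase-from-two-planes algorithm described above is commonly implemented as Gerchberg-Saxton alternating projections. This minimal sketch assumes numpy and uses a random 1-D signal in place of real image- and diffraction-plane measurements; the iteration count is an illustrative choice, and the residual typically decreases rather than vanishing exactly.

```python
import numpy as np

# Gerchberg-Saxton sketch: alternately impose the known image-plane and
# diffraction-plane magnitudes while keeping the current phase estimate.
rng = np.random.default_rng(0)
true = np.exp(1j * rng.uniform(0, 2 * np.pi, 64))
img_mag = np.abs(true)                   # known image-plane magnitude
dif_mag = np.abs(np.fft.fft(true))       # known diffraction-plane magnitude

g = img_mag * np.exp(1j * rng.uniform(0, 2 * np.pi, 64))  # random start
for _ in range(200):
    G = np.fft.fft(g)
    G = dif_mag * np.exp(1j * np.angle(G))   # impose diffraction magnitude
    g = np.fft.ifft(G)
    g = img_mag * np.exp(1j * np.angle(g))   # impose image magnitude
print("residual:", np.linalg.norm(np.abs(np.fft.fft(g)) - dif_mag))
```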
If the degree is allowed to grow logarithmically with the dimension, the bounds become tight up to polylog-factors. Beyond the specific case of PhaseLift, this work highlights the utility of spherical designs for the derandomization of data recovery schemes."} {"_id": "0748f27afcc64a4ceaeb1213d620f62757b43d18", "title": "Blind Deconvolution Using Convex Programming", "text": "We consider the problem of recovering two unknown vectors, w and x, of length L from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension N and the other with dimension K. Although the observed convolution is nonlinear in both w and x, it is linear in the rank-1 matrix formed by their outer product wx*. This observation allows us to recast the deconvolution problem as a low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that, for \u201cgeneric\u201d signals, the program can deconvolve w and x exactly when the maximum of N and K is almost on the order of L. That is, we show that if x is drawn from a random subspace of dimension N, and w is a vector in a subspace of dimension K whose basis vectors are spread out in the frequency domain, then nuclear norm minimization recovers wx* without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length N, which we code using a random L x N coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length K, then the receiver can recover both the channel response and the message when L \u2273 N + K, to within constant and log factors."} {"_id": "211fd281ccff3de319bab296ae760c80d9d32066", "title": "A unified approach to statistical tomography using coordinate descent optimization", "text": "Over the past years there has been considerable interest in statistically optimal reconstruction of cross-sectional images from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. While MAP estimation requires the solution of an optimization problem, most existing reconstruction algorithms take an indirect approach based on the expectation maximization (EM) algorithm. We propose a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion. The key to this direct optimization approach is greedy pixel-wise computations known as iterative coordinate descent (ICD). We propose a novel method for computing the ICD updates, which we call ICD/Newton-Raphson. We show that ICD/Newton-Raphson requires approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly (in our experiments, typically five to ten iterations). Other advantages of the ICD/Newton-Raphson method are that it is easily applied to MAP estimation of transmission tomograms, and typical convex constraints, such as positivity, are easily incorporated."} {"_id": "41466cff3ea07504e556ad3e4b9d7ae9ee1c809a", "title": "Design of bipolar pulse generator topology based on Marx supplied by double power", "text": "Pulsed power technology has been used for tumor ablation. 
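The pixel-wise ICD update described in the tomography abstract above can be sketched on a simple quadratic MAP objective, ||y - Ax||^2 + lam*||x||^2, where each coordinate's 1-D Newton step is exact. The Gaussian test matrix, noise level, and regularization weight are illustrative assumptions, not the paper's emission model.

```python
import numpy as np

# Coordinate descent on a quadratic MAP surrogate: cycle through pixels,
# solving each 1-D subproblem exactly while maintaining a running residual.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
y = A @ x_true + 0.01 * rng.normal(size=60)

lam, x = 0.1, np.zeros(30)
r = y - A @ x                                    # running residual
for sweep in range(20):
    for j in range(30):
        a = A[:, j]
        old = x[j]
        new = (a @ r + (a @ a) * old) / (a @ a + lam)  # exact 1-D minimizer
        x[j] = new
        r -= a * (new - old)                     # keep residual consistent
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```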
Considerable research shows that a high-strength unipolar pulsed electric field can induce irreversible electroporation (IRE) on the cell membrane, which can effectively kill cells. However, some scholars and doctors have found that muscle contractions occur during treatment, which are associated with the delivery of electric pulses. Further studies have confirmed that bipolar pulses are more advantageous in tumor treatment because they eliminate muscle contractions and are more effective for ablating non-uniform tissue. Thus, a bipolar pulse generator is needed for research on tumor ablation with bipolar pulses. In this paper, a new type of modular bipolar pulsed-power generator based on a Marx generator supplied by double power sources is proposed. The concept of this generator is to charge two series of capacitors in parallel from two power sources respectively, and then connect the capacitors in series through solid-state switches with different control strategies. Utilizing a number of fast solid-state switches, the capacitors can be connected in series with different polarities, so that a positive or negative polarity pulse will be delivered to the load. A laboratory prototype has been implemented and tested. The development of this pulse generator can provide the hardware foundation for research on biological effects without muscle contraction when tumors are subjected to bipolar pulsed electric fields."} {"_id": "c79a5da34c9cfaa59ec9a3813357fc4b8d71d96f", "title": "A Data-driven Method for the Detection of Close Submitters in Online Learning Environments", "text": "Online learning has become very popular over the last decade. However, there are still many details that remain unknown about the strategies that students follow while studying online. In this study, we focus on detecting \u2018invisible\u2019 collaboration ties between students in online learning environments. Specifically, the paper presents a method developed to detect student ties based on the temporal proximity of their assignment submissions. The paper reports on findings of a study that made use of the proposed method to investigate the presence of close submitters in two different massive open online courses. The results show that most of the students (i.e., student user accounts) were grouped as couples, though some bigger communities were also detected. The study also compared the population detected by the algorithm with the rest of the user accounts and found that close submitters needed a significantly lower amount of activity on the platform to achieve a certificate of completion in a MOOC. These results confirm that the detected close submitters were collaborating or even engaging in unethical behaviors, which eases their way to a certificate. However, more work is required in the future to specify the various strategies adopted by close submitters and possible associations between the user accounts."} {"_id": "421e6c7247f41c419a46212477d7b29540cbf7b1", "title": "High speed obstacle avoidance using monocular vision and reinforcement learning", "text": "We consider the task of driving a remote control car at high speeds through unstructured outdoor environments. We present an approach in which supervised learning is first used to estimate depths from single monocular images. The learning algorithm can be trained either on real camera images labeled with ground-truth distances to the closest obstacles, or on a training set consisting of synthetic graphics images. 
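A minimal sketch of the temporal-proximity idea behind the close-submitter detection above: flag account pairs whose assignment submission timestamps repeatedly fall within a short window. The window size, match threshold, and timestamps are illustrative assumptions, not the paper's tuned values.

```python
from itertools import combinations

# Toy data: account -> per-assignment submission timestamps (seconds).
subs = {
    "u1": [1000, 5000, 9000],
    "u2": [1004, 5002, 9010],   # consistently close to u1
    "u3": [2000, 7000, 12000],
}
WINDOW, MIN_MATCHES = 30, 3     # illustrative thresholds

pairs = []
for a, b in combinations(subs, 2):
    close = sum(abs(x - y) <= WINDOW for x, y in zip(subs[a], subs[b]))
    if close >= MIN_MATCHES:
        pairs.append((a, b))
print(pairs)                    # [('u1', 'u2')]
```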
The resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning/policy search is then applied within a simulator that renders synthetic scenes. This learns a control policy that selects a steering direction as a function of the vision system's output. We present results evaluating the predictive ability of the algorithm both on held-out test data, and in actual autonomous driving experiments."} {"_id": "9829a6c00ad45a4be47f7e1e74dd03c792ec15c2", "title": "FLATM: A fuzzy logic approach topic model for medical documents", "text": "One of the challenges for text analysis in medical domains is analyzing large-scale medical documents. As a consequence, finding relevant documents has become more difficult. One of the popular methods to retrieve information based on discovering the themes in the documents is topic modeling. The themes in the documents help to retrieve documents on the same topic with and without a query. In this paper, we present a novel approach to topic modeling using fuzzy clustering. To evaluate our model, we experiment with two text datasets of medical documents. The evaluation metrics carried out through document classification and document modeling show that our model produces better performance than LDA, indicating that fuzzy set theory can improve the performance of topic models in medical domains."} {"_id": "b750f6d8e348c9a35fef6f53d203bf7e11f4f27b", "title": "Non-Additive Imprecise Image Super-Resolution in a Semi-Blind Context", "text": "The most effective superresolution methods proposed in the literature require precise knowledge of the so-called point spread function of the imager, while in practice its accurate estimation is nearly impossible. This paper presents a new superresolution method, whose main feature is its ability to account for the scant knowledge of the imager point spread function. This ability is based on representing this imprecise knowledge via a non-additive neighborhood function. The superresolution reconstruction algorithm transfers this imprecise knowledge to the output by producing an imprecise (interval-valued) high-resolution image. We propose some experiments illustrating the robustness of the proposed method with respect to the imager point spread function. These experiments also highlight its high performance compared with very competitive earlier approaches. Finally, we show that the imprecision of the high-resolution interval-valued reconstructed image is a reconstruction error marker."} {"_id": "0372ddc78c53f24cbfcb46b2abc26e5870f93df6", "title": "Class-E LCCL for capacitive power transfer system", "text": "This paper presents the design of a capacitive power transfer (CPT) system by implementing a Class-E inverter due to its high efficiency, which in theory can approach 100%. However, the Class-E inverter is highly sensitive to its circuit parameters when the capacitance at the coupling plate is small. As a solution, an additional capacitor can be integrated into the Class-E inverter to increase the coupling capacitance for better performance. Both simulation and experimental investigations were carried out to verify the high-efficiency CPT system based on the Class-E inverter with the additional capacitor. 
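The fuzzy clustering underlying the FLATM abstract above can be illustrated with a minimal fuzzy c-means sketch using fuzzifier m = 2; the 2-D synthetic points stand in for document representations, and all settings are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

# Minimal fuzzy c-means sketch: soft memberships U, weighted centroids,
# and the standard m = 2 membership update.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, .5, (50, 2)), rng.normal(3, .5, (50, 2))])
c, m = 2, 2.0
U = rng.dirichlet(np.ones(c), size=len(X))          # soft memberships

for _ in range(50):
    W = U ** m
    centers = (W.T @ X) / W.sum(0)[:, None]         # weighted centroids
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
    inv = 1.0 / d ** 2                              # m = 2 update rule
    U = inv / inv.sum(axis=1, keepdims=True)

print(np.round(centers, 2))                         # near (0,0) and (3,3)
```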
The investigation exhibited 96% overall DC-DC power transfer efficiency with 0.97 W output at a 1 MHz operating frequency."} {"_id": "67377bd63c40dda67befce699c80a6caf25d3a8c", "title": "Low-latency adaptive streaming over tcp", "text": "Media streaming over TCP has become increasingly popular because TCP's congestion control provides remarkable stability to the Internet. Streaming over TCP requires adapting to bandwidth availability, but unfortunately, TCP can introduce significant latency at the application level, which causes unresponsive and poor adaptation. This article shows that this latency is not inherent in TCP but occurs as a result of throughput-optimized TCP implementations. We show that this latency can be minimized by dynamically tuning TCP's send buffer. Our evaluation shows that this approach leads to better application-level adaptation and it allows supporting interactive and other low-latency applications over TCP."} {"_id": "bef6611150c6a9468a593736f0c02d58d5ed4051", "title": "Graviola: a novel promising natural-derived drug that inhibits tumorigenicity and metastasis of pancreatic cancer cells in vitro and in vivo through altering cell metabolism.", "text": "Pancreatic tumors are resistant to conventional chemotherapies. The present study was aimed at evaluating the potential of a novel plant-derived product as a therapeutic agent for pancreatic cancer (PC). The effects of an extract from the tropical tree Annona Muricata, commonly known as Graviola, were evaluated for cytotoxicity, cell metabolism, cancer-associated protein/gene expression, tumorigenicity, and metastatic properties of PC cells. Our experiments revealed that Graviola induced necrosis of PC cells by inhibiting cellular metabolism. The expression of molecules related to hypoxia and glycolysis in PC cells (i.e. HIF-1\u03b1, NF-\u03baB, GLUT1, GLUT4, HKII, and LDHA) was downregulated in the presence of the extract. In vitro functional assays further confirmed the inhibition of tumorigenic properties of PC cells. Overall, the compounds that are naturally present in a Graviola extract inhibited multiple signaling pathways that regulate metabolism, cell cycle, survival, and metastatic properties in PC cells. Collectively, alterations in these parameters led to a decrease in tumorigenicity and metastasis of orthotopically implanted pancreatic tumors, indicating promising characteristics of the natural product against this lethal disease."} {"_id": "9db0f052d130339ff0bc427726e081cd66f8a56c", "title": "Hand sign language recognition for Bangla alphabet using Support Vector Machine", "text": "Sign language is considered the main language for deaf and dumb people. So, a translator is needed when a normal person wants to talk with a deaf or dumb person. In this paper, we present a framework for recognizing Bangla Sign Language (BSL) using a Support Vector Machine. The Bangla hand sign alphabets for both vowels and consonants have been used to train and test the recognition system. Bangla sign alphabets are recognized by analyzing their shapes and comparing the features that differentiate each sign. In the proposed system, hand signs are first converted from RGB images to the HSV color space. Then Gabor filters are used to acquire the desired hand sign features. Since the feature vector obtained using Gabor filters is high-dimensional, Kernel PCA, a nonlinear dimensionality reduction technique, is used to reduce the dimensionality. 
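The send-buffer tuning idea in the streaming abstract above can be sketched with the standard socket API. SO_SNDBUF is a real socket option, but the 16 KB value below is an illustrative assumption rather than the article's adaptive policy, and the kernel may round or double the requested size.

```python
import socket

# Sketch: a small SO_SNDBUF bounds how much data TCP can queue locally,
# capping the application-level latency of newly written media data.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)
print("effective send buffer:",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
# Back-of-envelope: with ~16 KB queued and a 1 Mb/s path, buffered data
# drains in roughly 0.13 s, so a freshly encoded frame waits at most
# about that long before entering the network.
```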
Lastly, a Support Vector Machine (SVM) is employed for classification of the candidate features. The experimental results show that our proposed method outperforms the existing work on Bangla hand sign recognition."} {"_id": "703244978b61a709e0ba52f5450083f31e3345ec", "title": "How Learning Works: Seven Research-Based Principles for Smart Teaching", "text": "In this volume, the authors introduce seven general principles of learning, distilled from the research literature as well as from twenty-seven years of experience working one-on-one with college faculty. They have drawn on research from a breadth of perspectives (cognitive, developmental, and social psychology; educational research; anthropology; demographics; and organizational behavior) to identify a set of key principles underlying learning, from how effective organization enhances retrieval and use of information to what impacts motivation. These principles provide instructors with an understanding of student learning that can help them see why certain teaching approaches are or are not supporting student learning, generate or refine teaching approaches and strategies that more effectively foster student learning in specific contexts, and transfer and apply these principles to new courses."} {"_id": "52a345a29267107f92aec9260b6f8e8222305039", "title": "Deeper Inside PageRank", "text": "This paper serves as a companion or extension to the \u201cInside PageRank\u201d paper by Bianchini et al. [19]. It is a comprehensive survey of all issues associated with PageRank, covering the basic PageRank model, available and recommended solution methods, storage issues, existence, uniqueness, and convergence properties, possible alterations to the basic model, suggested alternatives to the traditional solution methods, sensitivity and conditioning, and finally the updating problem. We introduce a few new results, provide an extensive reference list, and speculate about exciting areas of future research."} {"_id": "2a778ada8905fe6c9ff108b05a3554ebdaa52118", "title": "Practical Design and Implementation of Metamaterial-Enhanced Magnetic Induction Communication", "text": "The wireless communications in complex environments, such as underground and underwater, can enable various applications in the environmental, industrial, homeland security, law enforcement, and military fields. However, conventional electromagnetic wave-based techniques do not work due to the lossy media and complicated structures. Magnetic induction (MI) has been proved to achieve reliable communication in such environments. However, due to the small antenna size, the communication range of MI is still very limited, especially for portable mobile devices. To this end, Metamaterial-enhanced MI (M2I) communication has been proposed, where the theoretical results predict that it can significantly increase the data rate and range. Nevertheless, there exists a significant gap between the theoretical prediction and the practical realization of M2I; the theoretical model relies on an ideal spherical metamaterial, which does not exist in nature. In this paper, a practical design is proposed by leveraging a spherical coil array to realize M2I communication. A full-wave simulation is conducted to validate the design objectives. By using the spherical coil array-based M2I communication, the communication range can be significantly extended, exactly as we predicted in the ideal M2I model. 
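The basic PageRank model surveyed in the "Deeper Inside PageRank" abstract above is typically solved by power iteration. This minimal sketch uses a 4-page toy link graph and the conventional damping factor of 0.85; both are illustrative choices.

```python
import numpy as np

# Power-iteration sketch of basic PageRank on a toy link graph.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> outgoing links
n, d = 4, 0.85
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)         # column-stochastic link matrix

r = np.ones(n) / n                            # uniform starting vector
for _ in range(100):
    r = (1 - d) / n + d * (M @ r)             # damped power iteration
print(np.round(r, 3))                         # stationary importance scores
```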
Finally, the proposed M2I communication is implemented and tested in various environments."} {"_id": "22e6b48f31953a3a5bff5feb75a33d76e0af48f7", "title": "The somatic marker hypothesis: A critical evaluation", "text": "The somatic marker hypothesis (SMH; [Damasio, A. R., Tranel, D., Damasio, H., 1991. Somatic markers and the guidance of behaviour: theory and preliminary testing. In Levin, H.S., Eisenberg, H.M., Benton, A.L. (Eds.), Frontal Lobe Function and Dysfunction. Oxford University Press, New York, pp. 217-229]) proposes that emotion-based biasing signals arising from the body are integrated in higher brain regions, in particular the ventromedial prefrontal cortex (VMPFC), to regulate decision-making in situations of complexity. Evidence for the SMH is largely based on performance on the Iowa Gambling Task (IGT; [Bechara, A., Tranel, D., Damasio, H., Damasio, A.R., 1996. Failure to respond autonomically to anticipated future outcomes following damage to prefrontal cortex. Cerebral Cortex 6 (2), 215-225]), linking anticipatory skin conductance responses (SCRs) to successful performance on a decision-making paradigm in healthy participants. These 'marker' signals were absent in patients with VMPFC lesions and were associated with poorer IGT performance. The current article reviews the IGT findings, arguing that their interpretation is undermined by the cognitive penetrability of the reward/punishment schedule, ambiguity surrounding interpretation of the psychophysiological data, and a shortage of causal evidence linking peripheral feedback to IGT performance. Further, there are other well-specified and parsimonious explanations that can equally well account for the IGT data. Next, lesion, neuroimaging, and psychopharmacology data evaluating the proposed neural substrate underpinning the SMH are reviewed. Finally, conceptual reservations about the novelty, parsimony and specification of the SMH are raised. It is concluded that while presenting an elegant theory of how emotion influences decision-making, the SMH requires additional empirical support to remain tenable."} {"_id": "a53557c7611e66d61054acf163a9d8d4ba161c51", "title": "Crowdsourced Comprehension: Predicting Prerequisite Structure in Wikipedia", "text": "The growth of open-access technical publications and other open-domain textual information sources means that there is an increasing amount of online technical material that is in principle available to all, but in practice, incomprehensible to most. We propose to address the task of helping readers comprehend complex technical material, by using statistical methods to model the \u201cprerequisite structure\u201d of a corpus \u2014 i.e., the semantic impact of documents on an individual reader\u2019s state of knowledge. Experimental results using Wikipedia as the corpus suggest that this task can be approached by crowdsourcing the production of ground-truth labels regarding prerequisite structure, and then generalizing these labels using a learned classifier which combines signals of various sorts. 
The features that we consider relate pairs of pages by analyzing not only textual features of the pages, but also how the containing corpus is connected and created."} {"_id": "7152c2531410d3f32531a854edd26ae7ebccb8d0", "title": "The Advanced Health and Disaster Aid Network: A Light-Weight Wireless Medical System for Triage", "text": "Advances in semiconductor technology have resulted in the creation of miniature medical embedded systems that can wirelessly monitor the vital signs of patients. These lightweight medical systems can aid providers in large disasters, where they become overwhelmed by the large number of patients, limited resources, and insufficient information. In a mass casualty incident, small embedded medical systems facilitate patient care, resource allocation, and real-time communication in the advanced health and disaster aid network (AID-N). We present the design of electronic triage tags on lightweight, embedded systems with limited memory and computational power. These electronic triage tags use noninvasive, biomedical sensors (pulse oximeter, electrocardiogram, and blood pressure cuff) to continuously monitor the vital signs of a patient and deliver pertinent information to first responders. This electronic triage system facilitates the seamless collection and dissemination of data from the incident site to key members of the distributed emergency response community. The real-time collection of data through a mesh network in a mass casualty drill was shown to approximately triple the number of times patients were triaged compared with the traditional paper triage system."} {"_id": "4f805391383b20dbc9992796d515029884ba468b", "title": "Cache-aware Roofline model: Upgrading the loft", "text": "The Roofline model graphically represents the attainable upper-bound performance of a computer architecture. This paper analyzes the original Roofline model and proposes a novel approach to provide a more insightful performance modeling of modern architectures by introducing cache-awareness, thus significantly improving the guidelines for application optimization. The proposed model was experimentally verified for different architectures by taking advantage of built-in hardware counters with a curve fitness above 90%."} {"_id": "6928b1bf7c54a4aa8d976317c506e5e5f3eae085", "title": "Deception Detection using Real-life Trial Data", "text": "Hearings of witnesses and defendants play a crucial role when reaching court trial decisions. Given the high-stakes nature of trial outcomes, implementing accurate and effective computational methods to evaluate the honesty of court testimonies can offer valuable support during the decision-making process. In this paper, we address the identification of deception in real-life trial data. We introduce a novel dataset consisting of videos collected from public court trials. We explore the use of verbal and non-verbal modalities to build a multimodal deception detection system that aims to discriminate between truthful and deceptive statements provided by defendants and witnesses. We achieve classification accuracies in the range of 60-75% when using a model that extracts and fuses features from the linguistic and gesture modalities. In addition, we present a human deception detection study where we evaluate the human capability of detecting deception in trial hearings. 
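The original Roofline bound analyzed in the abstract above can be computed directly: attainable performance is the minimum of peak compute and peak memory bandwidth times arithmetic intensity. The machine numbers below are illustrative assumptions; the paper's cache-aware variant adds one such roof per memory level with a bandwidth measured for that level.

```python
# Basic Roofline bound: min(peak compute, bandwidth * arithmetic intensity).
PEAK_GFLOPS = 100.0          # peak compute, GFLOP/s (illustrative)
PEAK_BW = 25.0               # DRAM bandwidth, GB/s (illustrative)

def roofline(ai):            # ai: arithmetic intensity, flops per byte
    return min(PEAK_GFLOPS, PEAK_BW * ai)

for ai in (0.5, 2.0, 4.0, 8.0):
    print(f"AI={ai:>4}: {roofline(ai):6.1f} GFLOP/s")
# Kernels left of the ridge point (AI = 4 here) are memory-bound;
# kernels to the right are compute-bound.
```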
The results show that our system outperforms the human capability of identifying deceit."} {"_id": "e6020e07095eb431595375782572fdfd4b31cc89", "title": "TUTORIAL ON AGENT-BASED MODELING AND SIMULATION", "text": "Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agent-based applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques."} {"_id": "a9b35080d736b9e5cdc6230512809123b383e671", "title": "Human activity recognition with wearable sensors", "text": "This thesis investigates the use of wearable sensors to recognize human activity. The activity of the user is one example of context information \u2013 others include the user\u2019s location or the state of his environment \u2013 which can help computer applications to adapt to the user depending on the situation. In this thesis we use wearable sensors \u2013 mainly accelerometers \u2013 to record, model and recognize human activities. Using wearable sensors allows continuous recording of activities across different locations and independent from external infrastructure. There are many possible applications for activity recognition with wearable sensors, for instance in the areas of healthcare, elderly care, personal fitness, entertainment, or performing arts. In this thesis we focus on two particular research challenges in activity recognition, namely the need for less supervision, and the recognition of high-level activities. We make several contributions towards addressing these challenges. Our first contribution is an analysis of features for activity recognition. Using a data set of activities such as walking, standing, sitting, or hopping, we analyze the performance of commonly used features and window lengths over which the features are computed. Our results indicate that different features perform well for different activities, and that in order to achieve best recognition performance, features and window lengths should be chosen specific for each activity. In order to reduce the need for labeled training data, we propose an unsupervised algorithm which can discover structure in unlabeled recordings of activities. The approach identifies correlated subsets in feature space, and represents these subsets with low-dimensional models. We show that the discovered subsets often correspond to distinct activities, and that the resulting models can be used for recognition of activities in unknown data. In a separate study, we show that the approach can be effectively deployed in a semi-supervised learning framework. 
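A minimal sketch of the windowed accelerometer features analyzed in the activity-recognition abstract above: mean, standard deviation, and FFT energy per window. The window length, 50% overlap, and synthetic signal are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np

# Windowed feature extraction over a 1-D accelerometer trace:
# per window, compute mean, standard deviation, and spectral energy.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.3 * rng.normal(size=1024)

def window_features(x, win=128):
    feats = []
    for start in range(0, len(x) - win + 1, win // 2):   # 50% overlap
        w = x[start:start + win]
        spec = np.abs(np.fft.rfft(w)) ** 2
        feats.append([w.mean(), w.std(), spec[1:].sum() / win])
    return np.array(feats)

print(window_features(signal).shape)   # (15, 3): 15 windows, 3 features
```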
More specifically, we combine the approach with a discriminant classifier, and show that this scheme allows high recognition rates even when using only a small amount of labeled training data. Recognition of higher-level activities such as shopping, doing housework, or commuting is challenging, as these activities are composed of changing sub-activities and vary strongly across individuals. We present one study in which we recorded 10 h of three different high-level activities, investigating to what extent methods for low-level activities can be scaled to the recognition of high-level activities. Our results indicate that for settings such as ours, traditional supervised approaches in combination with data from wearable accelerometers can achieve recognition rates of more than 90%. While unsupervised techniques are desirable for short-term activities, they become crucial for long-term activities, for which annotation is often impractical or impossible. To this end we propose an unsupervised approach based on topic models that makes it possible to discover high-level structure in human activity data. The discovered activity patterns correlate with daily routines such as commuting, office work, or lunch routine, and they can be used to recognize such routines in unknown data."} {"_id": "224c853d15de6aad11d2121f5ded07aa563219e3", "title": "The land unit \u2014 A fundamental concept in landscape ecology, and its applications", "text": "The land unit, as an expression of landscape as a system, is a fundamental concept in landscape ecology. It is an ecologically homogeneous tract of land at the scale at issue. It provides a basis for studying topologic as well as chorologic landscape ecology relationships. A land unit survey aims at mapping such land units. This is done by simultaneously using characteristics of the most obvious (mappable) land attributes: land-form, soil and vegetation (including human alteration of these three). The land unit is the basis of the map legend but may be expressed via these three land attributes. The more dynamic land attributes, such as certain animal populations and water fluxes, are less suitable as diagnostic criteria, but often link units by characteristic information/energy fluxes. The land unit survey is related to a further development of the widely accepted physiographic soil survey (see Edelman, 1950). Important aspects include: by means of a systems approach, the various land data can be integrated more appropriately; geomorphology, vegetation and soil science support each other during all stages (photo-interpretation, field survey, data processing, final classification); the time and costs are considerably less compared with the execution of separate surveys; the result is directly suitable as a basis for land evaluation; the results can be expressed in separate soil, vegetation, land use and landform maps, or even single value maps. A land unit survey is therefore: a method for efficient survey of land attributes, such as soils, vegetation, landform, expressed in either separate or combined maps; a means of stimulating integration among separate land attribute sciences; an efficient basis for land evaluation. For multidisciplinary projects with applied ecologic aims (e.g., land management), it is therefore the most appropriate survey approach. Within the land unit approach there is considerable freedom in the way in which the various land attribute data are \u2018integrated\u2019. 
It is essential, however, that: during the photo-interpretation stage, the contributions of the various specialists are brought together to prepare a preliminary (land unit) photo-interpretation map; the fieldwork data are collected at exactly the same sample point, preferably by a team of specialists in which soil, vegetation and geomorphology are represented; the final map is prepared in close cooperation among all contributing disciplines, based on photo-interpretation and field data; the final map approach may vary from one fully-integrated land unit map to various monothematic maps."} {"_id": "3e2cfe5a0b7d3c6329a11575b3d91128faa64b4b", "title": "Spanish Multicenter Normative Studies (NEURONORMA Project): norms for verbal span, visuospatial span, letter and number sequencing, trail making test, and symbol digit modalities test.", "text": "As part of the Spanish Multicenter Normative Studies (NEURONORMA project), we provide age- and education-adjusted norms for the following instruments: verbal span (digits), visuospatial span (Corsi's test), letter-number sequencing (WAIS-III), trail making test, and symbol digit modalities test. The sample consists of 354 participants who are cognitively normal, community-dwelling, and with ages ranging from 50 to 90 years. Tables are provided to convert raw scores to age-adjusted scaled scores. These were further converted into education-adjusted scaled scores by applying regression-based adjustments. The current norms should provide clinically useful data for evaluating elderly Spanish people. These data may be of considerable use for comparisons with other normative studies. Limitations of these normative data are mainly related to the techniques of recruitment and stratification employed."} {"_id": "a2a8fc3722ce868ac3cc37fd539f50afa31f4445", "title": "Cross-cultural adaptation of health-related quality of life measures: literature review and proposed guidelines.", "text": "Clinicians and researchers without a suitable health-related quality of life (HRQOL) measure in their own language have two choices: (1) to develop a new measure, or (2) to modify a measure previously validated in another language, known as a cross-cultural adaptation process. We propose a set of standardized guidelines for this process based on previous research in psychology and sociology and on published methodological frameworks. These guidelines include recommendations for obtaining semantic, idiomatic, experiential and conceptual equivalence in translation by using back-translation techniques and committee review, pre-testing techniques and re-examining the weight of scores. We applied these guidelines to 17 cross-cultural adaptations of HRQOL measures identified through a comprehensive literature review. The reporting standards varied across studies but agreement between raters in their ratings of the studies was substantial to almost perfect (weighted kappa = 0.66-0.93), suggesting that the guidelines are easy to apply. Further research is necessary in order to delineate essential versus optional steps in the adaptation process."} {"_id": "566deec44c9788ec88a8f559bab6d42a8f69c10a", "title": "Low Power Magnetic Full-Adder Based on Spin Transfer Torque MRAM", "text": "Power issues have become a major problem of CMOS logic circuits as the technology node shrinks below 90 nm. In order to overcome this limitation, emerging logic-in-memory architectures based on nonvolatile memories (NVMs) are being investigated. 
Spin transfer torque (STT) magnetic random access memory (MRAM) is considered one of the most promising NVMs thanks to its high speed, low power, good endurance, and 3-D back-end integration. This paper presents a novel magnetic full-adder (MFA) design based on perpendicular magnetic anisotropy (PMA) STT-MRAM. It provides advantageous power efficiency and die area compared with a conventional CMOS-only full adder (FA). Transient simulations have been performed to validate this design by using an industrial CMOS 40 nm design kit and an accurate STT-MRAM compact model including physical models and experimental measurements."} {"_id": "331a475a518d8ac95e8054c99fe9f5f6fc879519", "title": "A Wideband Dual-Polarized Antenna for LTE700/GSM850/GSM900 Applications", "text": "A novel resonator-loaded wideband dual-polarized antenna is proposed for LTE700/GSM850/GSM900 applications. Four single-polarized miniaturized antenna elements are placed in a square arrangement upon the reflector for dual polarization. One pair of parallel elements is for +45\u00b0 polarization, and the other pair is for \u201345\u00b0 polarization. The impedance bandwidth is greatly improved when loaded with a resonator. Experimentally, the antenna exhibits a wide impedance bandwidth of 37.5% ranging from 0.67 to 0.98 GHz for |S11| < -15 dB, high isolation of more than 40 dB, and stable gain of around 9.5 dBi over the whole operating band."} {"_id": "ec64e26322c7fdbf9e54981a07693974423f3031", "title": "Sex differences in mate preferences revisited: do people know what they initially desire in a romantic partner?", "text": "In paradigms in which participants state their ideal romantic-partner preferences or examine vignettes and photographs, men value physical attractiveness more than women do, and women value earning prospects more than men do. Yet it remains unclear if these preferences remain sex differentiated in predicting desire for real-life potential partners (i.e., individuals whom one has actually met). In the present study, the authors explored this possibility using speed dating and longitudinal follow-up procedures. Replicating previous research, participants exhibited traditional sex differences when stating the importance of physical attractiveness and earning prospects in an ideal partner and ideal speed date. However, data revealed no sex differences in the associations between participants' romantic interest in real-life potential partners (met during and outside of speed dating) and the attractiveness and earning prospects of those partners. Furthermore, participants' ideal preferences, assessed before the speed-dating event, failed to predict what inspired their actual desire at the event. Results are discussed within the context of R. E. Nisbett and T. D. Wilson's (1977) seminal article: Even regarding such a consequential aspect of mental life as romantic-partner preferences, people may lack introspective awareness of what influences their judgments and behavior."} {"_id": "0e5c8094d3da52340b58761d441eb809ff96743f", "title": "Distributed active transformer - a new power-combining and impedance-transformation technique", "text": "In this paper, we compare the performance of the newly introduced distributed active transformer (DAT) structure to that of conventional on-chip impedance-transformation methods. Their fundamental power-efficiency limitations in the design of high-power fully integrated amplifiers in standard silicon process technologies are analyzed. 
The DAT is demonstrated to be an efficient impedance-transformation and power-combining method, which combines several low-voltage push-pull amplifiers in series by magnetic coupling. To demonstrate the validity of the new concept, a 2.4-GHz 1.9-W 2-V fully integrated power amplifier achieving a power-added efficiency of 41% with 50-\u03a9 input and output matching has been fabricated using 0.35-\u03bcm CMOS transistors."} {"_id": "14fae9835ae65adfdc434b7b7e761487e7a9548f", "title": "A simplified design approach for radial power combiners", "text": "It is known that a radial power combiner is very effective in combining a large number of power amplifiers, where high efficiency (greater than 90%) over a relatively wide band can be achieved. However, its current use is limited due to its design complexity. In this paper, we develop a step-by-step design procedure, including both the initial approximate design formulas and suitable models for final accurate design optimization purposes. Based on three-dimensional electromagnetic modeling, predicted results were in excellent agreement with those measured. Practical issues related to the radial-combiner efficiency, its graceful degradation, and the effects of higher order package resonances are discussed here in detail."} {"_id": "47fdb5ec9522019ef7e580d59c262b3dc9519b26", "title": "A planar probe double ladder waveguide power divider", "text": "The successful demonstration of a 1:4 power divider using microstrip probes and a WR-430 rectangular waveguide is presented. The 15-dB return loss bandwidth of the nonoptimized structure is demonstrated to be 22% and its 0.5-dB insertion loss bandwidth 26%. While realized through conventional machining, such a structure is assembled in a fashion consistent with proven millimeter and submillimeter-wave micromachining techniques. Thus, the structure presents a potential power dividing and power combining architecture, which through micromachining, may be used for applications well above 100 GHz."} {"_id": "68218edaf08484871258387e95161a3ce0e6fe67", "title": "A Ka-band power amplifier based on the traveling-wave power-dividing/combining slotted-waveguide circuit", "text": "An eight-device Ka-band solid-state power amplifier has been designed and fabricated using a traveling-wave power-dividing/combining technique. The low-profile slotted-waveguide structure employed in this design provides not only a high power-combining efficiency over a wide bandwidth, but also efficient heat sinking for the active devices. 
The measured maximum small-signal gain of the eight-device power amplifier is 19.4 dB at 34 GHz with a 3-dB bandwidth of 3.2 GHz (fL = 31.8 GHz, fH = 35 GHz). The measured maximum output power at 1-dB compression (Pout at 1 dB) from the power amplifier is 33 dBm (~2 W) at 32.2 GHz, with a power-combining efficiency of 80%. Furthermore, performance degradation of this power amplifier due to device failures has also been simulated and measured."} {"_id": "db884813d6d764aea836c44f46604128735bffe0", "title": "Broad-Band High-Power Amplifier Using Spatial Power-Combining Technique", "text": "High power, broad bandwidth, high linearity, and low noise are among the most important features in amplifier design. The broad-band spatial power-combining technique addresses all these issues by combining the output power of a large quantity of microwave monolithic integrated circuit (MMIC) amplifiers in a broad-band coaxial waveguide environment, while maintaining good linearity and improving phase noise of the MMIC amplifiers. A coaxial waveguide was used as the host of the combining circuits for broader bandwidth and better uniformity by equally distributing the input power to each element. A new compact coaxial combiner with much smaller size is investigated. Broad-band slotline to microstrip-line transition is integrated for better compatibility with commercial MMIC amplifiers. Thermal simulations are performed and an improved thermal management scheme over previous designs is employed to improve the heat sinking in high-power application. A high-power amplifier using the compact combiner design is built and demonstrated to have a bandwidth from 6 to 17 GHz with 44-W maximum output power. Linearity measurement has shown a high third-order intercept point of 52 dBm. Analysis shows the amplifier has the ability to extend spurious-free dynamic range by 2-3 times. The amplifier has also shown a residual phase noise floor close to -140 dBc at 10-kHz offset from the carrier, with a 5\u20136-dB reduction compared to a single MMIC amplifier it integrates."} {"_id": "7284cc6bada61d9233eb96c8b62362a46b220ad1", "title": "Ping-pong trajectory perception and prediction by a PC based High speed four-camera vision system", "text": "A high-speed vision system is vital for a robot player to play table tennis successfully. The three-dimensional ping-pong trajectory should be perceived with high precision at a high frame rate. Moreover, the remainder of the trajectory should be predicted from its beginning part early enough to allow successful planning and control."} {"_id": "84f7b7be76bc9f34e6ed9ee15defafaeb85ec419", "title": "Multimodal Gesture Recognition Using 3-D Convolution and Convolutional LSTM", "text": "Gesture recognition aims to recognize meaningful movements of human bodies, and is of utmost importance in intelligent human\u2013computer/robot interactions. In this paper, we present a multimodal gesture recognition method based on 3-D convolution and convolutional long short-term memory (LSTM) networks. The proposed method first learns short-term spatiotemporal features of gestures through the 3-D convolutional neural network, and then learns long-term spatiotemporal features by convolutional LSTM networks based on the extracted short-term spatiotemporal features. In addition, fine-tuning among multimodal data is evaluated, and we find that it can be considered an optional technique to prevent overfitting when no pre-trained models exist. 
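The gesture-recognition record above stacks a 3-D CNN for short-term spatiotemporal features and a convolutional LSTM for long-term structure. A minimal PyTorch sketch of that pattern follows; the layer sizes, class count, and single-cell recurrence are illustrative assumptions, not the paper's exact network:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: all four gates come from one conv
    over the concatenated input and hidden state."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class GestureNet(nn.Module):
    """3-D conv for short-term features, ConvLSTM over time, linear head."""
    def __init__(self, n_classes=10, hid=32):
        super().__init__()
        self.c3d = nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2))
        self.pool = nn.MaxPool3d((1, 2, 2))
        self.clstm = ConvLSTMCell(16, hid)
        self.fc = nn.Linear(hid, n_classes)

    def forward(self, clips):                           # (B, 3, T, H, W)
        feats = self.pool(torch.relu(self.c3d(clips)))  # (B, 16, T, H/2, W/2)
        B, C, T, H, W = feats.shape
        h = feats.new_zeros(B, self.clstm.hid_ch, H, W)
        c = torch.zeros_like(h)
        for t in range(T):                              # recur over time
            h, c = self.clstm(feats[:, :, t], (h, c))
        return self.fc(h.mean(dim=(2, 3)))              # global pool -> logits

logits = GestureNet()(torch.randn(2, 3, 8, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```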
The proposed method is verified on the ChaLearn LAP large-scale isolated gesture data set (IsoGD) and the Sheffield Kinect gesture (SKIG) data set. The results show that our proposed method can obtain the state-of-the-art recognition accuracy (51.02% on the validation set of IsoGD and 98.89% on SKIG)."} {"_id": "38749ebf3f3df0cb7ecc8d6ac9b174ec5b6b73a1", "title": "Unsupervised image segmentation using convolutional autoencoder with total variation regularization as preprocessing", "text": "Conventional unsupervised image segmentation methods use color and geometric information and apply clustering algorithms over pixels. They preserve object boundaries well but often suffer from over-segmentation due to noise and artifacts in the images. In this paper, we contribute a preprocessing step for image smoothing, which alleviates the burden of conventional unsupervised image segmentation and enhances their performance. Our approach relies on a convolutional autoencoder (CAE) with the total variation loss (TVL) for unsupervised learning. We show that, after our CAE-TVL preprocessing step, the over-segmentation effect is significantly reduced using the same unsupervised image segmentation methods. We evaluate our approach using the BSDS500 image segmentation benchmark dataset and show the performance enhancement introduced by our approach in terms of both increased segmentation accuracy and reduced computation time. We examine the robustness of the trained CAE and show that it is directly applicable to other natural scene images."} {"_id": "6af1752e2d00000944f630f745d7febba59079cb", "title": "Dependence maps, a dimensionality reduction with dependence distance for high-dimensional data", "text": "We introduce the dependence distance, a new notion of the intrinsic distance between points, derived as a pointwise extension of statistical dependence measures between variables. We then introduce a dimension reduction procedure for preserving this distance, which we call the dependence map. We explore its theoretical justification, connection to other methods, and empirical behavior on real data sets."} {"_id": "e7c6da7b1a9b772a3af02a82aa6e670f23f5dad1", "title": "Prevention of falls and consequent injuries in elderly people", "text": "Injuries resulting from falls in elderly people are a major public-health concern, representing one of the main causes of longstanding pain, functional impairment, disability, and death in this population. The problem is going to worsen, since the rates of such injuries seem to be rising in many areas, as is the number of elderly people in both the developed and developing world. Many methods and programmes to prevent such injuries already exist, including regular exercise, vitamin D and calcium supplementation, withdrawal of psychotropic medication, cataract surgery, professional environment hazard assessment and modification, hip protectors, and multifactorial preventive programmes for simultaneous assessment and reduction of many of the predisposing and situational risk factors. To achieve broader-scale effectiveness, these programmes will need systematic implementation. 
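The CAE-TVL record above smooths images with a total variation loss before segmentation. A minimal sketch of an anisotropic TV term follows; the weight `lam` and its combination with a reconstruction loss are assumptions about how such a preprocessing CAE might be trained, not the paper's exact objective:

```python
import torch

def tv_loss(img):
    """Anisotropic total variation: mean absolute difference between
    neighboring pixels, penalizing high-frequency noise. img: (B, C, H, W)."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

# Hypothetical training step for a convolutional autoencoder `cae`:
#   out = cae(x)
#   loss = ((out - x) ** 2).mean() + lam * tv_loss(out)
x = torch.rand(1, 3, 64, 64)
print(tv_loss(x).item())
```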
Care must be taken, however, to rigorously select the right actions for those people most likely to benefit, such as vitamin D and calcium supplementation and hip protectors for elderly people living in institutions."} {"_id": "546aa62efd9639006053b9ed46bca3b7925c7305", "title": "Geometric Scene Parsing with Hierarchical LSTM", "text": "This paper addresses the problem of geometric scene parsing, i.e. simultaneously labeling geometric surfaces (e.g. sky, ground and vertical plane) and determining the interaction relations (e.g. layering, supporting, siding and affinity) between main regions. This problem is more challenging than the traditional semantic scene labeling, as recovering geometric structures necessarily requires rich and diverse contextual information. To achieve these goals, we propose a novel recurrent neural network model, named Hierarchical Long Short-Term Memory (H-LSTM). It contains two coupled sub-networks: the Pixel LSTM (P-LSTM) and the Multi-scale Super-pixel LSTM (MS-LSTM) for handling the surface labeling and relation prediction, respectively. The two sub-networks provide complementary information to each other to exploit hierarchical scene contexts, and they are jointly optimized for boosting the performance. Our extensive experiments show that our model is capable of parsing scene geometric structures and outperforming several state-of-the-art methods by large margins. In addition, we show promising 3D reconstruction results from still images based on the geometric parsing."} {"_id": "9ca545e205d5da6fe668774ce9e4bc5be280fbab", "title": "Securing wireless sensor networks: a survey", "text": "The significant advances of hardware manufacturing technology and the development of efficient software algorithms make technically and economically feasible a network composed of numerous, small, low-cost sensors using wireless communications, that is, a wireless sensor network. WSNs have attracted intensive interest from both academia and industry due to their wide application in civil and military scenarios. In hostile scenarios, it is very important to protect WSNs from malicious attacks. Due to various resource limitations and the salient features of a wireless sensor network, the security design for such networks is significantly challenging. In this article, we present a comprehensive survey of WSN security issues that were investigated by researchers in recent years and that shed light on future directions for WSN security."} {"_id": "6cd5c87d4877f1a5a71719fcffd7c50ba343ec48", "title": "A theory of self-calibration of a moving camera", "text": "There is a close connection between the calibration of a single camera and the epipolar transformation obtained when the camera undergoes a displacement. The epipolar transformation imposes two algebraic constraints on the camera calibration. If two epipolar transformations, arising from different camera displacements, are available then the compatible camera calibrations are parameterized by an algebraic curve of genus four. The curve can be represented either by a space curve of degree seven contained in the intersection of two cubic surfaces, or by a curve of degree six in the dual of the image plane. The curve in the dual plane has one singular point of order three and three singular points of order two. If three epipolar transformations are available, then two curves of degree six can be obtained in the dual plane such that one of the real intersections of the two yields the correct camera calibration. 
The two curves have a common singular point of order three. Experimental results are given to demonstrate the feasibility of camera calibration based on the epipolar transformation. The real intersections of the two dual curves are found by locating the zeros of a function defined on the interval [0, 2\u03c0]. The intersection yielding the correct camera calibration is picked out by referring back to the three epipolar transformations."} {"_id": "e3ccd78eae121ffed6a6d9294901db03704df2e5", "title": "Bearing Fault Detection by a Novel Condition-Monitoring Scheme Based on Statistical-Time Features and Neural Networks", "text": "Bearing degradation is the most common source of faults in electrical machines. In this context, this work presents a novel monitoring scheme applied to diagnose bearing faults. Apart from detecting local defects, i.e., single-point ball and raceway faults, it also takes into account the detection of distributed defects, such as roughness. The development of diagnosis methodologies considering both kinds of bearing faults is, nowadays, a subject of concern in fault diagnosis of electrical machines. First, the method analyzes the most significant statistical-time features calculated from the vibration signal. Then, it uses a variant of the curvilinear component analysis, a nonlinear manifold learning technique, for compression and visualization of the feature behavior. It allows interpreting the underlying physical phenomenon. This technique has been demonstrated to be a very powerful and promising tool in the diagnosis area. Finally, a hierarchical neural network structure is used to perform the classification stage. The effectiveness of this condition-monitoring scheme has been verified by experimental results obtained from different operating conditions."} {"_id": "06fd86110dbcdc37f298ac5f35c5cb9ccdb1ac08", "title": "A delay-tolerant network architecture for challenged internets", "text": "The highly successful architecture and protocols of today's Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. These problems are exacerbated by end nodes with limited power or memory resources. Often deployed in mobile and extreme environments lacking continuous connectivity, many such networks have their own specialized protocols, and do not utilize IP. To achieve interoperability between them, we propose a network architecture and application interface structured around optionally-reliable asynchronous message forwarding, with limited expectations of end-to-end connectivity and node resources. The architecture operates as an overlay above the transport layers of the networks it interconnects, and provides key services such as in-network data storage and retransmission, interoperable naming, authenticated forwarding and a coarse-grained class of service."} {"_id": "6a9d28f382eeaabfe878eaafce1f711c36a69843", "title": "Gimli: open source and high-performance biomedical name recognition", "text": "Automatic recognition of biomedical names is an essential task in biomedical information extraction, presenting several complex and unsolved challenges. In recent years, various solutions have been implemented to tackle this problem. However, limitations regarding system characteristics, customization and usability still hinder their wider application outside text mining research. We present Gimli, an open-source, state-of-the-art tool for automatic recognition of biomedical names. 
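The self-calibration record above reduces finding the real intersections of the two dual curves to locating zeros of a function on [0, 2\u03c0]. A minimal sketch of that numerical step follows, using a sign-change scan refined by Brent's method; the stand-in function `f` is hypothetical, whereas the real one would be derived from the epipolar transformations:

```python
import numpy as np
from scipy.optimize import brentq

def zeros_on_circle(f, n=720):
    """Locate zeros of f on [0, 2*pi] by scanning a uniform grid for sign
    changes and refining each bracketing interval with Brent's method."""
    ts = np.linspace(0.0, 2 * np.pi, n + 1)
    vals = [f(t) for t in ts]
    roots = []
    for a, b, fa, fb in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
        if fa == 0.0:
            roots.append(a)        # grid point is an exact zero
        elif fa * fb < 0:
            roots.append(brentq(f, a, b))
    return roots

# Hypothetical stand-in for the curve-intersection function
f = lambda t: np.sin(3 * t) - 0.4
print(zeros_on_circle(f))
```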
Gimli includes an extended set of implemented and user-selectable features, such as orthographic, morphological, linguistic, conjunction and dictionary-based features. A simple and fast method to combine different trained models is also provided. Gimli achieves an F-measure of 87.17% on GENETAG and 72.23% on the JNLPBA corpus, significantly outperforming existing open-source solutions. Gimli is an off-the-shelf, ready-to-use tool for named-entity recognition, providing trained and optimized models for recognition of biomedical entities from scientific text. It can be used as a command line tool, offering full functionality, including training of new models and customization of the feature set and model parameters through a configuration file. Advanced users can integrate Gimli in their text mining workflows through the provided library, and extend or adapt its functionalities. Based on the underlying system characteristics and functionality, both for final users and developers, and on the reported performance results, we believe that Gimli is a state-of-the-art solution for biomedical NER, contributing to faster and better research in the field. Gimli is freely available at http://bioinformatics.ua.pt/gimli."} {"_id": "37fdf57456a1b1bd5e7f0c88c0ad79f29c962a8d", "title": "Using audio and visual information for single channel speaker separation", "text": "This work proposes a method to exploit both audio and visual speech information to extract a target speaker from a mixture of competing speakers. The work begins by taking an effective audio-only method of speaker separation, namely the soft mask method, and modifying its operation to allow visual speech information to improve the separation process. The audio input is taken from a single channel and includes the mixture of speakers, whereas a separate set of visual features is extracted from each speaker. This allows modification of the separation process to include not only the audio speech but also visual speech from each speaker in the mixture. Experimental results are presented that compare the proposed audio-visual speaker separation with audio-only and visual-only methods using both speech quality and speech intelligibility metrics."} {"_id": "2b1a9f7b70cedd9a2fa23f33c65b944834878201", "title": "Anchors aweigh: A demonstration of cross-modality anchoring and magnitude priming", "text": "Research has shown that judgments tend to assimilate to irrelevant \"anchors.\" We extend anchoring effects to show that anchors can even operate across modalities by, apparently, priming a general sense of magnitude that is not moored to any unit or scale. An initial study showed that participants drawing long \"anchor\" lines made higher numerical estimates of target lengths than did those drawing shorter lines. We then replicated this finding, showing that a similar pattern was obtained even when the target estimates were not in the dimension of length. A third study showed that an anchor's length relative to its context, and not its absolute length, is the key to predicting the anchor's impact on judgments. A final study demonstrated that magnitude priming (priming a sense of largeness or smallness) is a plausible mechanism underlying the reported effects. 
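The speaker-separation record above builds on the audio-only soft mask method. A minimal sketch of ratio-mask separation in the STFT domain follows; the visual conditioning of the source estimates is omitted, and feeding the clean sources in as "estimates" below is purely for demonstration:

```python
import numpy as np
from scipy.signal import stft, istft

def soft_mask_separate(mixture, est_src1, est_src2, fs=16000, nperseg=512):
    """Weight each time-frequency bin of the mixture by the relative
    estimated energy of each source (a soft, or ratio, mask)."""
    _, _, X = stft(mixture, fs, nperseg=nperseg)
    _, _, S1 = stft(est_src1, fs, nperseg=nperseg)
    _, _, S2 = stft(est_src2, fs, nperseg=nperseg)
    p1, p2 = np.abs(S1) ** 2, np.abs(S2) ** 2
    mask = p1 / (p1 + p2 + 1e-12)        # soft mask in [0, 1]
    _, y1 = istft(mask * X, fs)          # target speaker estimate
    _, y2 = istft((1 - mask) * X, fs)    # competing speaker estimate
    return y1, y2

# Toy demo: two sinusoidal "speakers" mixed into one channel
t = np.arange(16000) / 16000
s1, s2 = np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 220 * t)
y1, y2 = soft_mask_separate(s1 + s2, s1, s2)
```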
We conclude that the boundary conditions of anchoring effects may be much looser than previously thought, with anchors operating across modalities and dimensions to bias judgment."} {"_id": "86a1ef1a1a0a51de957d9241306a02df9d99e6bd", "title": "An OpenCCG-Based Approach to Question Generation from Concepts", "text": "Dialogue systems are often regarded as being tedious and inflexible. We believe that one reason is rigid and inadaptable system utterances. A good dialogue system should automatically choose a formulation that reflects the user\u2019s expectations. However, current dialogue system development environments only allow the definition of questions with unchangeable formulations. In this paper we present a new approach to the generation of system questions by only defining basic concepts. This is the basis for realising adaptive, user-tailored, and human-like system questions in dialogue systems."} {"_id": "e73ee8174589e9326d3b36484f1b95685cb1ca42", "title": "mmWave phased-array with hemispheric coverage for 5th generation cellular handsets", "text": "A first-of-its-kind 28 GHz antenna solution for the upcoming 5th generation cellular communication is presented in detail. Extensive measurements and simulations ascertain the proposed 28 GHz antenna solution to be highly effective for cellular handsets operating in realistic propagating environments."} {"_id": "7ab45f9ef1f30442f19ce474ec036a1cee0fdded", "title": "Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making", "text": "Existing multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs and utility functions who may cooperate to build a machine that takes actions on their behalf. A representation is needed for how much the machine\u2019s policy will prioritize each player\u2019s interests over time. Assuming the players have reached common knowledge of their situation, this paper derives a recursion that any Pareto optimal policy must satisfy. Two qualitative observations can be made from the recursion: the machine must (1) use each player\u2019s own beliefs in evaluating how well an action will serve that player\u2019s utility function, and (2) shift the relative priority it assigns to each player\u2019s expected utilities over time, by a factor proportional to how well that player\u2019s beliefs predict the machine\u2019s inputs. Observation (2) represents a substantial divergence from naive linear utility aggregation (as in Harsanyi\u2019s utilitarian theorem, and existing MORL algorithms), which is shown here to be inadequate for Pareto optimal sequential decision-making on behalf of players with different beliefs."} {"_id": "b6a60a687bb9819b5fe781f704fb4b0d69a62a41", "title": "Facilitating Object Detection and Recognition through Eye Gaze", "text": "When compared to image recognition, object detection is a much more challenging task because it requires the accurate real-time localization of an object in the target image. In interaction scenarios, this pipeline can be simplified by incorporating the users\u2019 point of regard. Wearable eye trackers can estimate the gaze direction, but lack their own processing capabilities. We enable mobile gaze-aware applications by developing an open-source platform which supports mobile eye tracking based on the Pupil headset and a smartphone running Android OS. 
Through our platform, we offer researchers and developers a rapid prototyping environment for gaze-enabled applications. We describe the concept, our current progress, and research implications."} {"_id": "81cb29ece650d391e61cf0c05096c6a14b87ee52", "title": "DataVizard: Recommending Visual Presentations for Structured Data", "text": "Selecting the appropriate visual presentation of the data such that it not only preserves the semantics but also provides an intuitive summary of the data is an important, often final, step of data analytics. Unfortunately, this is also a step involving significant human effort, starting from the selection of groups of columns in the structured results from analytics stages, to the selection of the right visualization by experimenting with various alternatives. In this paper, we describe our DataVizard system aimed at reducing this overhead by automatically recommending the most appropriate visual presentation for the structured result. Specifically, we consider the following two scenarios: first, when one needs to visualize the results of a structured query such as SQL; and second, when one has acquired a data table with an associated short description (e.g., tables from the Web). Using a corpus of real-world database queries (and their results) and a number of statistical tables crawled from the Web, we show that DataVizard is capable of recommending visual presentations with high accuracy."} {"_id": "491bf29f83d63b2a719b03f940e32af6df6990d4", "title": "Memorability of natural scenes: The role of attention", "text": "Image memorability is the capacity of an image to be recalled after a period of time. Recently, the memorability of an image database was measured and some factors responsible for this memorability were highlighted. In this paper, we investigate the role of visual attention in image memorability along two axes. The first axis is experimental and uses the results of eye tracking performed on a set of images with different memorability scores. The second axis is predictive, and we show that attention-related features can advantageously replace low-level features in image memorability prediction. From our work it appears that the role of visual attention is important and should be taken into account more fully alongside other low-level features."} {"_id": "a6069e65c318f07d2b35934b0d4109148f190342", "title": "Real-time garbage collection for flash-memory storage systems of real-time embedded systems", "text": "Flash-memory technology is becoming critical in building embedded systems applications because of its shock-resistant, power-economic, and nonvolatile nature. With the recent technology breakthroughs in both capacity and reliability, flash-memory storage systems are now very popular in many types of embedded systems. However, because flash memory is a write-once and bulk-erase medium, we need a translation layer and a garbage-collection mechanism to provide applications a transparent storage service. In the past work, various techniques were introduced to improve the garbage-collection mechanism. These techniques aimed at both performance and endurance issues, but they all failed to provide applications with a guaranteed performance. In this paper, we propose a real-time garbage-collection mechanism, which provides a guaranteed performance, for hard real-time systems. On the other hand, the proposed mechanism supports non-real-time tasks so that the potential bandwidth of the storage system can be fully utilized. 
A wear-leveling method, which is executed as a non-real-time service, is presented to resolve the endurance problem of flash memory. The capability of the proposed mechanism is demonstrated by a series of experiments over our system prototype."} {"_id": "c5bc433fed9c286540c4cc3bd7cca3114f223f57", "title": "Optimization of the internal grid of an offshore wind farm using Genetic algorithm", "text": "Offshore wind energy is a promising solution thanks to its high production performance. However, its development raises many technical and especially economic challenges; among these, the electrical grid topology accounts for a large share of the investment. In this paper, our objective is to minimize the part of this total investment that represents the initial cost of the medium-voltage cables, of different cross sections, in the internal network. An approach based on a genetic algorithm is developed to find the best topology to connect all wind turbines and substations. The proposed model initially accepts all possible configurations: radial, star, ring, and tree. The results prove that the optimization model can be used for designing the electrical architecture of the internal network of an offshore wind farm."} {"_id": "e50812a804c7b1e29e18adcf32cf3d314bde2457", "title": "A flexible and low-cost tactile sensor for robotic applications", "text": "For humans, the sense of touch is essential for interactions with the environment. With robots slowly starting to emerge as a human-centric technology, tactile information becomes increasingly important. Tactile sensors enable robots to gain information about contacts with the environment, which is required for safe interaction with humans or tactile exploration. Many sensor designs for the application on robots have been presented in the literature so far. However, most of them are complex in their design and require high-tech tools for their manufacturing. In this paper, we present a novel design for a tactile sensor that can be built with low-cost, widely available materials, and low effort. The sensor is flexible, may be cut to arbitrary shapes and may have a customized spatial resolution. Both pressure distribution and absolute pressure on the sensor are detected. An experimental evaluation of our design shows low detection thresholds as well as high sensor accuracy. We seek to accelerate research on tactile feedback methods with this easy-to-replicate design. We consider our design a starting point for the integration of multiple sensor units into a large-scale tactile skin for robots."} {"_id": "c42f2784547626040b00d96e1f0f08166a184e89", "title": "A Dictionary-based Approach to Racism Detection in Dutch Social Media", "text": "We present a dictionary-based approach to racism detection in Dutch social media comments, which were retrieved from two public Belgian social media sites likely to attract racist reactions. These comments were labeled as racist or non-racist by multiple annotators. For our approach, three discourse dictionaries were created: first, we created a dictionary by retrieving possibly racist and more neutral terms from the training data, and then augmenting these with more general words to remove some bias. A second dictionary was created through automatic expansion using a word2vec model trained on a large corpus of general Dutch text. Finally, a third dictionary was created by manually filtering out incorrect expansions. 
We trained multiple Support Vector Machines, using the distribution of words over the different categories in the dictionaries as features. The best-performing model used the manually cleaned dictionary and obtained an F-score of 0.46 for the racist class on a test set consisting of unseen Dutch comments, retrieved from the same sites used for the training set. The automated expansion of the dictionary only slightly boosted the model\u2019s performance, and this increase in performance was not statistically significant. The fact that the coverage of the expanded dictionaries did increase indicates that the words that were automatically added did occur in the corpus, but were not able to meaningfully impact performance. The dictionaries, code, and the procedure for requesting the corpus are available at: https://github.com/clips/hades."} {"_id": "9cd20a7f06d959342c758aa6ab8c09bb22dfb3e6", "title": "Online anomaly detection with concept drift adaptation using recurrent neural networks", "text": "Anomaly detection in time series is an important task with several practical applications. The common approach of training one model in an offline manner using historical data is likely to fail under dynamically changing and non-stationary environments where the definition of normal behavior changes over time, making the model irrelevant and ineffective. In this paper, we describe a temporal model based on Recurrent Neural Networks (RNNs) for time series anomaly detection to address challenges posed by sudden or regular changes in normal behavior. The model is trained incrementally as new data becomes available, and is capable of adapting to the changes in the data distribution. The RNN is used to make multi-step predictions of the time series, and the prediction errors are used to update the RNN model as well as detect anomalies and change points. A large prediction error is used to indicate anomalous behavior or a change (drift) in normal behavior. Further, the prediction errors are also used to update the RNN model in such a way that short-term anomalies or outliers do not lead to a drastic change in the model parameters, whereas high prediction errors over a period of time lead to significant updates in the model parameters such that the model rapidly adapts to the new norm. We demonstrate the efficacy of the proposed approach on a diverse set of synthetic, publicly available and proprietary real-world datasets."} {"_id": "1355972384f2458f32d339c0304862ac24259aa1", "title": "Self-nonself discrimination in a computer", "text": "The problem of protecting computer systems can be viewed generally as the problem of learning to distinguish self from other. We describe a method for change detection which is based on the generation of T cells in the immune system. Mathematical analysis reveals computational costs of the system, and preliminary experiments illustrate how the method might be applied to the problem of computer viruses."} {"_id": "23712e35d556e0929c6519c3d5553b896b65747d", "title": "Towards a taxonomy of intrusion-detection systems", "text": "Intrusion-detection systems aim at detecting attacks against computer systems and networks, or against information systems in general, as it is difficult to provide provably secure information systems and maintain them in such a secure state for their entire lifetime and for every utilization. 
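The self-nonself record above generates change detectors the way the immune system generates T cells: random candidates that match protected "self" data are censored, and the survivors flag nonself. A minimal sketch with an r-contiguous-bits matching rule follows; the string length, match threshold, and detector count are illustrative assumptions:

```python
import random

def matches(detector, string, r=3):
    """r-contiguous-bits rule: detector and string agree on >= r
    consecutive positions."""
    run = 0
    for d, s in zip(detector, string):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def censor(self_set, n_detectors=20, length=8, r=3):
    """Negative selection: keep only random detectors matching no self string."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = ''.join(random.choice('01') for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

self_set = {'00000000', '00001111'}
detectors = censor(self_set)
# Monitoring: data matched by any surviving detector is flagged as changed;
# detection is probabilistic, so a single foreign string may go unnoticed.
print(any(matches(d, '11110000') for d in detectors))
```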
Sometimes, legacy or operational constraints do not even allow a fully secure information system to be realized at all. Therefore, the task of intrusion-detection systems is to monitor the usage of such systems and to detect the appearance of insecure states. They detect attempts, by legitimate users of the information systems or by external parties, to abuse privileges or exploit security vulnerabilities. In this paper, we introduce a taxonomy of intrusion-detection systems that highlights the various aspects of this area. This taxonomy defines families of intrusion-detection systems according to their properties. It is illustrated by numerous examples from past and current projects."} {"_id": "fa4f763cf42d5c5620cedaa0fa5ce9195ff750c3", "title": "Artificial immune systems - a new computational intelligence paradigm", "text": "Give us 5 minutes and we will show you the best book to read today. This is it, the artificial immune systems a new computational intelligence paradigm that will be your best choice for better reading book. Your five times will not spend wasted by reading this website. You can take the book as a source to make better concept. Referring the books that can be situated with your needs is sometime difficult. But here, this is so easy. You can find the best thing of book that you can read."} {"_id": "238d215ea37ed8dda2b3f236304914e3f7d72829", "title": "The COPS Security Checker System", "text": "In the past several years, there have been a large number of published works that have graphically described a wide variety of security problems particular to UNIX. Without fail, the same problems have been discussed over and over again, describing the problems with setuid (set user ID) programs, improper file permissions, and bad passwords (to name a few). There are two common characteristics to each of these problems: first, they are usually simple to correct, if found; second, they are fairly easy to detect. Since almost all systems have fairly equivalent problems, it seems appropriate to create a tool to detect potential security problems as an aid to system administrators. This paper describes one such tool: COPS (the Computerized Oracle and Password System) is a freely-available,"} {"_id": "4e85503ef0e1559bc197bd9de0625b3792dcaa9b", "title": "NetSTAT: A Network-Based Intrusion Detection Approach", "text": "Network-based attacks have become common and sophisticated. For this reason, intrusion detection systems are now shifting their focus from the hosts and their operating systems to the network itself. Network-based intrusion detection is challenging because network auditing produces large amounts of data, and different events related to a single intrusion may be visible in different places on the network. This paper presents NetSTAT, a new approach to network intrusion detection. By using a formal model of both the network and the attacks, NetSTAT is able to determine which network events have to be monitored and where they can be monitored."} {"_id": "af359ac7cb689315fa85690f654f34c3409daf89", "title": "The Impact of Indoor Lighting on Students\u2019 Learning Performance in Learning Environments: A knowledge internalization perspective", "text": "The purpose of this study is to identify the influence of indoor lighting on students\u2019 learning performance within learning environments from a knowledge internalization perspective. 
This study is a comprehensive review of the literature on the influence of indoor lighting on people\u2019s productivity and performance, especially students\u2019 learning performance. The results show that it is essential to improve lighting in learning environments to enhance students\u2019 learning performance and also to motivate them to learn more. The researchers utilized Pulay's (2010) survey to measure the influence of lighting on students\u2019 learning performance. Using survey data collected from 150 students of the Alpha course in Malaysia, the study found a significant relationship between lighting quality and students\u2019 learning performance; this finding is also supported by interviews with two experts."} {"_id": "cdbed1c5971d20315f1e1980bf90b3e73e774b94", "title": "Schema Theory for Semantic Link Network", "text": "Semantic link network (SLN) is a loosely coupled semantic data model for managing Web resources. Its nodes can be any type of resource. Its edges can be any semantic relation. Potential semantic links can be derived according to reasoning rules on semantic relations. This paper proposes the schema theory for SLN including the concepts, rule-constraint normal forms and relevant algorithms. The theory provides the basis for normalized management of SLN and its applications. A case study demonstrates the proposed theory."} {"_id": "dd10da216b3f3bd2e2cf1fdbeb54c7697ba1dba9", "title": "Design considerations on low voltage synchronous power MOSFETs with monolithically integrated gate voltage pull-down circuitry", "text": "In this paper, a monolithically integrated gate voltage pull-down circuit is presented to avoid the unintentional C\u00b7dV/dt induced turn-on. The concept of a low threshold voltage MOSFET with this integrated gate voltage pull-down circuitry is introduced as a contributing factor to the next generation high frequency DC-DC converter efficiency improvement. Design considerations on this new device and influences of critical design parameters on device/circuit performance will be fully discussed. In a synchronous buck application, this integrated power module achieves more than 2% efficiency improvement over the reference solution at a high operating frequency (1 MHz) under 19-V input and 1.3-V output conditions."} {"_id": "86335522e84bd14bb53fc23c265d7fed614f3cc4", "title": "A Framework for Dynamic Image Sampling Based on Supervised Learning", "text": "Sparse sampling schemes can broadly be classified into two main categories: static sampling where the sampling pattern is predetermined, and dynamic sampling where each new measurement location is selected based on information obtained from previous measurements. Dynamic sampling methods are particularly appropriate for pointwise imaging methods, in which pixels are measured sequentially in arbitrary order. Examples of pointwise imaging schemes include certain implementations of atomic force microscopy, electron back scatter diffraction, and synchrotron X-ray imaging. In these pointwise imaging applications, dynamic sparse sampling methods have the potential to dramatically reduce the number of measurements required to achieve a desired level of fidelity. However, the existing dynamic sampling methods tend to be computationally expensive and are, therefore, too slow for many practical applications. In this paper, we present a framework for dynamic sampling based on machine learning techniques, which we call a supervised learning approach for dynamic sampling (SLADS). 
In each step of SLADS, the objective is to find the pixel that maximizes the expected reduction in distortion (ERD) given previous measurements. SLADS is fast because we use a simple regression function to compute the ERD, and it is accurate because the regression function is trained using datasets that are representative of the specific application. In addition, we introduce an approximate method to terminate dynamic sampling at a desired level of distortion. We then extend our algorithm to incorporate multiple measurements at each step, which we call groupwise SLADS. Finally, we present results on computationally generated synthetic data and experimentally collected data to demonstrate a dramatic improvement over state-of-the-art static sampling methods."} {"_id": "44afe560d0926380d666ec0b1dd4d6b12e077f0a", "title": "High-Frame-Rate Synthetic Aperture Ultrasound Imaging Using Mismatched Coded Excitation Waveform Engineering: A Feasibility Study", "text": "Mismatched coded excitation (CE) can be employed to increase the frame rate of synthetic aperture ultrasound imaging. The high autocorrelation and low cross correlation (CC) of transmitted signals enable the identification and separation of signal sources at the receiver. Thus, the method provides B-mode imaging with simultaneous transmission from several elements and the capability of spatial decoding of the transmitted signals, which makes the imaging process equivalent to consecutive transmissions. Each transmission generates its own image and the combination of all the images results in an image with a high lateral resolution. In this paper, we introduce two different methods for generating multiple mismatched CEs with an identical frequency bandwidth and code length. Therefore, the proposed families of mismatched CEs are able to generate similar resolutions and signal-to-noise ratios. The application of these methods is demonstrated experimentally. Furthermore, several techniques are suggested that can be used to reduce the CC between the mismatched codes."} {"_id": "66c54b8ba52a6eae6727354dedaacbab1dd5a8ea", "title": "Open-Domain Neural Dialogue Systems", "text": "Until recently, the goal of developing open-domain dialogue systems that not only emulate human conversation but fulfill complex tasks, such as travel planning, seemed elusive. However, we have started to observe promising results in the last few years as large amounts of conversation data have become available for training and breakthroughs in deep learning and reinforcement learning have been applied to dialogue. In this tutorial, we start with a brief introduction to the history of dialogue research. Then, we describe in detail the deep learning and reinforcement learning technologies that have been developed for two types of dialogue systems. First is a task-oriented dialogue system that can help users accomplish tasks, ranging from meeting scheduling to vacation planning. Second is a social bot that can converse seamlessly and appropriately with humans. In the final part of the tutorial, we review attempts to develop open-domain neural dialogue systems by combining the strengths of task-oriented dialogue systems and social bots. The tutorial material is available at http://opendialogue.miulab.tw."} {"_id": "06d11bdd79c002f7cfdf9bcfa181f25c96f6009a", "title": "Rationalizing Neural Predictions", "text": "Prediction without justification has limited applicability. 
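The coded-excitation record above relies on transmit codes with high autocorrelation and low mutual cross-correlation, so simultaneous transmissions can be separated at the receiver. The toy randomized search below only illustrates that selection criterion; the paper engineers its waveforms analytically with matched bandwidth and code length, which this brute-force stand-in does not attempt:

```python
import numpy as np

def xcorr_peak(a, b):
    """Peak magnitude of the full cross-correlation of two sequences."""
    return np.abs(np.correlate(a, b, mode='full')).max()

def pick_mismatched(n_codes=2, length=64, trials=2000, seed=0):
    """Randomized search for binary (+1/-1) codes with a low worst-case
    mutual cross-correlation."""
    rng = np.random.default_rng(seed)
    best, best_cc = None, np.inf
    for _ in range(trials):
        codes = rng.choice([-1.0, 1.0], size=(n_codes, length))
        cc = max(xcorr_peak(codes[i], codes[j])
                 for i in range(n_codes) for j in range(i + 1, n_codes))
        if cc < best_cc:
            best, best_cc = codes, cc
    return best, best_cc / length   # normalized worst-case cross-correlation

codes, cc = pick_mismatched()
print(f"worst-case normalized cross-correlation: {cc:.2f}")
```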
As a remedy, we learn to extract pieces of input text as justifications \u2013 rationales \u2013 that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, a generator and an encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms an attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task."} {"_id": "a9411793ca1f4128c0d78c5c3b1d150da1cd15e3", "title": "Discovering Opinion Changes in Online Reviews via Learning Fine-Grained Sentiments", "text": "Recent years have shown rapid advancement in understanding consumers' behaviors and opinions through collecting and analyzing data from online social media platforms. While abundant research has been undertaken to detect users' opinions, few tools are available for understanding events where user opinions change drastically. In this paper, we propose a novel framework for discovering consumer opinion changing events. To detect subtle opinion changes over time, we first develop a novel fine-grained sentiment classification method by leveraging word embedding and convolutional neural networks. The method learns sentiment-enhanced word embedding, both for words and phrases, to capture their corresponding syntactic, semantic, and sentimental characteristics. We then propose an opinion shift detection algorithm that is based on the Kullback-Leibler divergence of temporal opinion category distributions, and conducted experiments on online reviews from Yelp. The results show that the proposed approach can effectively classify fine-grained sentiments of reviews and can discover key moments that correspond to consumer opinion shifts in response to events that relate to a product or service."} {"_id": "dbb4035111c12f4bce971bd4c8086e9d62c9eb97", "title": "Multi-GCN: Graph Convolutional Networks for Multi-View Networks, with Applications to Global Poverty", "text": "With the rapid expansion of mobile phone networks in developing countries, large-scale graph machine learning has gained sudden relevance in the study of global poverty. Recent applications range from humanitarian response and poverty estimation to urban planning and epidemic containment. Yet the vast majority of computational tools and algorithms used in these applications do not account for the multi-view nature of social networks: people are related in myriad ways, but most graph learning models treat relations as binary. In this paper, we develop a graph-based convolutional network for learning on multi-view networks. We show that this method outperforms state-of-the-art semi-supervised learning algorithms on three different prediction tasks using mobile phone datasets from three different developing countries. 
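The opinion-change record above flags shifts via the Kullback-Leibler divergence of temporal opinion category distributions. A minimal sketch follows; the smoothing constant and shift threshold are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def kl(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(p || q) with additive smoothing."""
    p = np.asarray(p, dtype=float) + eps; p /= p.sum()
    q = np.asarray(q, dtype=float) + eps; q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def opinion_shifts(window_counts, threshold=0.2):
    """Flag time windows whose sentiment-category distribution diverges
    sharply from the previous window. window_counts: (T, K) array of
    review counts per fine-grained sentiment category."""
    return [t for t in range(1, len(window_counts))
            if kl(window_counts[t], window_counts[t - 1]) > threshold]

# Toy example: 5 sentiment categories, an opinion shift in window 3
counts = np.array([[30, 40, 20, 5, 5],
                   [28, 42, 19, 6, 5],
                   [29, 41, 20, 5, 5],
                   [5, 10, 15, 30, 40]])
print(opinion_shifts(counts))  # -> [3]
```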
We also show that, while designed specifically for use in poverty research, the algorithm also outperforms existing benchmarks on a broader set of learning tasks on multi-view networks, including node labelling in citation networks."} {"_id": "8f9b34e29a3636ca05eb18ff58bf47d9a3517c82", "title": "Coupling CCG and Hybrid Logic Dependency Semantics", "text": "Categorial grammar has traditionally used the \u03bb-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally."} {"_id": "3e29cdcdb12ffdb80a734d12c905c1df43e0b6a4", "title": "Secure data deduplication", "text": "As the world moves to digital storage for archival purposes, there is an increasing demand for systems that can provide secure data storage in a cost-effective manner. By identifying common chunks of data both within and between files and storing them only once, deduplication can yield cost savings by increasing the utility of a given amount of storage. Unfortunately, deduplication exploits identical content, while encryption attempts to make all content appear random; the same content encrypted with two different keys results in very different ciphertext. Thus, combining the space efficiency of deduplication with the secrecy aspects of encryption is problematic.\n We have developed a solution that provides both data security and space efficiency in single-server storage and distributed storage systems. Encryption keys are generated in a consistent manner from the chunk data; thus, identical chunks will always encrypt to the same ciphertext. Furthermore, the keys cannot be deduced from the encrypted chunk data. Since the information each user needs to access and decrypt the chunks that make up a file is encrypted using a key known only to the user, even a full compromise of the system cannot reveal which chunks are used by which users."} {"_id": "30b5b32f0d18afd441a8b5ebac3b636c6d3173d9", "title": "Robust Median Reversion Strategy for Online Portfolio Selection", "text": "Online portfolio selection has attracted increasing attention from data mining and machine learning communities in recent years. An important theory in financial markets is mean reversion, which plays a critical role in some state-of-the-art portfolio selection strategies. Although existing mean reversion strategies have been shown to achieve good empirical performance on certain datasets, they seldom carefully deal with noise and outliers in the data, leading to suboptimal portfolios, and consequently yielding poor performance in practice. In this paper, we propose to exploit the reversion phenomenon by using robust $L_1$-median estimators, and design a novel online portfolio selection strategy named \u201cRobust Median Reversion\u201d (RMR), which constructs optimal portfolios based on the improved reversion estimator. We examine the performance of the proposed algorithms on various real markets with extensive experiments. Empirical results show that RMR can overcome the drawbacks of existing mean reversion algorithms and achieve significantly better results.
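A minimal sketch of the robust L1-median (geometric median) that reversion strategies like RMR use as a price estimator, computed with the classical Weiszfeld iteration. The windowing and portfolio-update steps of RMR are not shown, and the toy prices are invented.

```python
import numpy as np

def l1_median(X, n_iter=100, tol=1e-9):
    """Geometric (L1) median of the rows of X via Weiszfeld's iteration.

    X: (k, n) window of the last k price vectors for n assets. The result
    minimizes the sum of Euclidean distances to the rows, so it is far
    less sensitive to outliers than the arithmetic mean.
    """
    mu = X.mean(axis=0)                        # start from the mean
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), 1e-12)
        w = 1.0 / d                            # inverse-distance weights
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Last row is a gross outlier; the L1-median stays near (10, 20).
prices = np.array([[10.0, 20.0], [10.2, 19.8], [9.9, 20.1], [30.0, 5.0]])
print(l1_median(prices))
```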
Finally, RMR runs in linear time, and thus is suitable for large-scale real-time algorithmic trading applications."} {"_id": "3da49a040788459dc85673285bb7a8023f78de6a", "title": "Activities and well-being in older age: effects of self-concept and educational attainment.", "text": "The positive effect of activities on well-being is proposed to be mediated by self-conceptualizations and facilitated by socioeconomic status. The hypothesized processes were estimated with LISREL VIII using data from a large cross-sectional survey with a sample of 679 adults aged 65 and older who were representative of older adults living in the Detroit area. Findings indicate that the frequency of performing both leisure and productive activities yields an effect on physical health and depression and that these effects are mediated in part by a sense of self as agentic, but less clearly by a sense of self as social. Furthermore, socioeconomic status, operationalized as formal educational attainment, facilitates the effect of leisure to a greater extent than that of productive activities."} {"_id": "5f6f332713f3c7ba52707c7a5a3ff584d60a2720", "title": "A Hybrid Algorithm for LTL Games", "text": "In the game theoretic approach to the synthesis of reactive systems, specifications are often given in linear time logic (LTL). Computing a winning strategy to an infinite game whose winning condition is the set of LTL properties is the main step in obtaining an implementation. We present a practical hybrid algorithm\u2014a combination of symbolic and explicit algorithms\u2014for the computation of winning strategies for unrestricted LTL games that we have successfully applied to synthesize reactive systems with up to 10 states."} {"_id": "a24d373bd33788640e95c117ebd5d78c88e8ce92", "title": "An IoT-Oriented Data Storage Framework in Cloud Computing Platform", "text": "The Internet of Things (IoT) has provided a promising opportunity to build powerful industrial systems and applications by leveraging the growing ubiquity of Radio Frequency IDentification (RFID) and wireless sensor devices. Benefiting from RFID and sensor network technology, common physical objects can be connected, and are able to be monitored and managed by a single system. Such a network brings a series of challenges for data storage and processing in a cloud platform. IoT data can be generated quite rapidly, the volume of data can be huge, and the types of data can be diverse. In order to address these potential problems, this paper proposes a data storage framework not only enabling efficient storage of massive IoT data, but also integrating both structured and unstructured data. This data storage framework is able to combine and extend multiple databases and Hadoop to store and manage diverse types of data collected by sensors and RFID readers. In addition, some components are developed to extend Hadoop to realize a distributed file repository, which is able to process massive unstructured files efficiently.
A prototype system based on the proposed framework is also developed to illustrate the framework's effectiveness."} {"_id": "65130672f6741b16b0d76fde6ed6c686d5363f4b", "title": "Maintaining Integrity: How Nurses Navigate Boundaries in Pediatric Palliative Care.", "text": "PURPOSE\nTo explore how nurses manage personal and professional boundaries in caring for seriously ill children and their families.\n\n\nDESIGN AND METHODS\nUsing a constructivist grounded theory approach, a convenience sample of 18 registered nurses from four practice sites was interviewed using a semi-structured interview guide.\n\n\nRESULTS\nNurses across the sites engaged in a process of maintaining integrity whereby they integrated two competing, yet essential, aspects of their nursing role - behaving professionally and connecting personally. When skillful in both aspects, nurses were satisfied that they provided high-quality, family-centered care to children and families within a clearly defined therapeutic relationship. At times, tension existed between these two aspects and nurses attempted to mitigate the tension. Unsuccessful mitigation attempts led to compromised integrity characterized by specific behavioral and emotional indicators. Successfully mitigating the tension with strategies that prioritized their own needs and healing, nurses eventually restored integrity. Maintaining integrity involved a continuous effort to preserve completeness of both oneself and one's nursing practice.\n\n\nCONCLUSIONS\nStudy findings provide a theoretical conceptualization to describe the process nurses use in navigating boundaries and contribute to an understanding of how this specialized area of care impacts health care providers.\n\n\nPRACTICE IMPLICATIONS\nWork environments can better address the challenges of navigating boundaries through offering resources and support for nurses' emotional responses to caring for seriously ill children. Future research can further refine and expand the theoretical conceptualization of maintaining integrity presented in this paper and its potential applicability to other nursing specialties."} {"_id": "daaaa816ec61677fd88b3996889a00a6d8296290", "title": "Comparing Two IRT Models for Conjunctive Skills", "text": "A step in an ITS often involves multiple skills. Thus a step requiring a conjunction of skills is harder than steps that require each individual skill only. We developed two Item-Response Models \u2013 the Additive Factor Model (AFM) and the Conjunctive Factor Model (CFM) \u2013 to model the conjunctive skills in the student data sets. Both models are compared on simulated data sets and a real assessment data set. We showed that CFM was as good as or better than AFM in the mean cross validation errors on the simulated data. In the real data set, CFM is not clearly better. However, AFM essentially performs as a conjunctive model."} {"_id": "5b5f553d122c1b042d49e7c011915382b929e1ea", "title": "Using temporal IDF for efficient novelty detection in text streams", "text": "Novelty detection in text streams is a challenging task that emerges in quite a few different scenarios, ranging from email thread filtering to RSS news feed recommendation on a smartphone. An efficient novelty detection algorithm can save the user a great deal of time and resources when browsing through relevant yet usually previously-seen content.
Most of the recent research on detection of novel documents in text streams has been building upon either geometric distances or distributional similarities, with the former typically performing better but being much slower due to the need to compare an incoming document with all the previously-seen ones. In this paper, we propose a new approach to novelty detection in text streams. We describe a resource-aware mechanism that is able to handle massive text streams such as the ones present today thanks to the rise of social media and the emergence of the Web as the main source of information. We capitalize on the historical Inverse Document Frequency (IDF), which is known to capture term specificity well, and we show that it can be used successfully at the document level as a measure of document novelty. This enables us to avoid similarity comparisons with previous documents in the text stream, thus scaling better and leading to faster execution times. Moreover, as the collection of documents evolves over time, we use a temporal variant of IDF not only to maintain an efficient representation of what has already been seen but also to decay the document frequencies as time goes by. We evaluate the performance of the proposed approach on a real-world news article dataset created for this task. We examine an exhaustive number of variants of the model and compare them to several commonly used baselines that rely on geometric distances. The results show that the proposed method outperforms all of the baselines while managing to operate efficiently in terms of time complexity and memory usage, which are of great importance in a mobile setting scenario."} {"_id": "4d06c2e64d76b7bccb659ea7a71ccac1574e13ac", "title": "Applications of SAT Solvers to AES Key Recovery from Decayed Key Schedule Images", "text": "The cold boot attack is a side-channel attack which exploits the data remanence property of random access memory (RAM) to retrieve its contents, which remain readable shortly after power has been removed. Given the nature of the cold boot attack, only a corrupted image of the memory contents will be available to the attacker. In this paper, we investigate the use of an off-the-shelf SAT solver, CryptoMiniSat, to improve the key recovery of AES-128 key schedules from their corresponding decayed memory images. By exploiting the asymmetric decay of the memory images and the redundancy of key material inherent in the AES key schedule, rectifying the faults in the corrupted memory images of the AES-128 key schedule is formulated as a Boolean satisfiability problem which can be solved efficiently for relatively large decay factors. Our experimental results show that this approach improves upon the previously known results."} {"_id": "d4b68acdbe65c2520fddf1b3c92268c2f7a68159", "title": "Improved Shortest Path Maps with GPU Shaders", "text": "We present in this paper several improvements for computing shortest path maps using OpenGL shaders [1]. The approach explores GPU rasterization as a way to propagate optimal costs on a polygonal 2D environment, producing shortest path maps which can efficiently be queried at run-time. Our improved method relies on Compute Shaders for improved performance, does not require any CPU pre-computation, and handles shortest path maps both with source points and with line segment sources. The produced path maps partition the input environment into regions sharing the same parent point along the shortest path to the closest source point or segment source.
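The paper's GPU shader pipeline is not reproduced here; as a rough CPU analogue, under the simplifying assumption of a uniform 4-connected grid, the sketch below propagates costs from all sources at once and records which source claims each cell, which is exactly the region partition a shortest path map encodes.

```python
import heapq

def shortest_path_map(grid, sources):
    """Multi-source Dijkstra: per-cell cost and owning source index.

    grid: 2D list, 0 = free, 1 = obstacle; sources: list of (r, c) cells.
    Cells sharing an owner form one region of the path map.
    """
    R, C = len(grid), len(grid[0])
    cost = [[float("inf")] * C for _ in range(R)]
    owner = [[-1] * C for _ in range(R)]
    pq = []
    for i, (r, c) in enumerate(sources):
        cost[r][c], owner[r][c] = 0.0, i
        heapq.heappush(pq, (0.0, r, c, i))
    while pq:
        d, r, c, i = heapq.heappop(pq)
        if d > cost[r][c]:
            continue                            # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < R and 0 <= nc < C and grid[nr][nc] == 0:
                if d + 1.0 < cost[nr][nc]:
                    cost[nr][nc], owner[nr][nc] = d + 1.0, i
                    heapq.heappush(pq, (d + 1.0, nr, nc, i))
    return cost, owner

grid = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
_, owner = shortest_path_map(grid, [(0, 0), (2, 3)])
print(owner)    # each free cell labeled with its nearest source
```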
Our method produces paths with global optimality, a characteristic which has been mostly neglected in animated virtual environments. The proposed approach is particularly suitable for the animation of multiple agents moving toward the entrances or exits of a virtual environment, a situation which is efficiently represented with the proposed path maps."} {"_id": "9f5ee0f6fdfd37a801df3c4e96d7e56391744233", "title": "The Cyclic Towers of Hanoi", "text": "The famous Towers of Hanoi puzzle consists of 3 pegs (A, B, C) on one of which (A) are stacked n rings of different sizes, each ring resting on a larger ring. The objective is to move the n rings one by one until they are all stacked on another peg (B) in such a way that no ring is ever placed on a smaller ring; the other peg (C) can be used as workspace. The problem has long been a favourite in programming courses as one which admits a concise recursive solution. This solution hinges on the observation that, when the largest ring is moved from A to B, the n-1 remaining rings must all be on peg C. This immediately leads to the recursive procedure"} {"_id": "310ec7796eeca484d734399d9979e8f74d7d8ed2", "title": "Shakeout: A New Regularized Deep Neural Network Training Scheme", "text": "Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. The invention of effective training techniques largely contributes to this success. The so-called \"Dropout\" training scheme is one of the most powerful tools to reduce over-fitting. From the statistical point of view, Dropout works by implicitly imposing an L2 regularizer on the weights. In this paper, we present a new training scheme: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, our method randomly chooses to enhance or reverse the contributions of each unit to the next layer. We show that our scheme leads to a combination of L1 regularization and L2 regularization imposed on the weights, which has been proved effective by the Elastic Net models in practice. We have empirically evaluated the Shakeout scheme and demonstrated that sparse network weights are obtained via Shakeout training. Our classification experiments on the real-life image datasets MNIST and CIFAR10 show that Shakeout deals with over-fitting effectively."} {"_id": "5cf3db1cb294afdf639324e2ab0a326aa9f78473", "title": "Big Data Security: Survey on Frameworks and Algorithms", "text": "Technology today has progressed to an extent wherein collection of data is possible for every granular aspect of a business, in real time. Electronic devices, power grids and modern software all generate huge volumes of data, in the form of petabytes, exabytes and zettabytes. It is important to secure existing big data environments due to increasing threats of breaches and leaks of confidential data and the increased adoption of cloud technologies, given the ability to buy processing power and storage on-demand. This is exposing traditional and new data warehouses and repositories to the outside world and putting them at risk of being compromised by hackers and malicious outsiders and insiders. In this paper the current big data scenario is summarized along with the challenges faced and the security issues that need attention.
Some existing approaches are also described to illustrate current and standard directions for solving these issues."} {"_id": "8c366344669769983f0c238a5f0548cac2afcc43", "title": "Scribbles to Vectors: Preparation of Scribble Drawings for CAD Interpretation", "text": "This paper describes work carried out on off-line paper-based scribbles such that they can be incorporated into a sketch-based interface without forcing designers to change their natural drawing habits. In this work, the scribbled drawings are converted into a vectorial format which can be recognized by a CAD system. This is achieved by using pattern analysis techniques, namely the Gabor filter, to simplify the scribbled drawing. Vector lines are then extracted from the resulting drawing by means of Kalman filtering."} {"_id": "15bdf3f1412cd762c40ad41dee5485de38ab0120", "title": "On the Robust Control of Buck-Converter DC-Motor Combinations", "text": "The concepts of active disturbance rejection control and flatness-based control are used in this paper to regulate the response of a dc-to-dc buck power converter affected by unknown, exogenous, time-varying load current demands. The generalized proportional integral observer is used to estimate and cancel the time-varying disturbance signals. A key element in the proposed control for the buck converter-dc motor combination is that even if the control input gain is imprecisely known, the control strategy still provides proper regulation and tracking. The robustness of this method is further extended to the case of a double buck topology driving two different dc motors affected by different load torque disturbances. Simulation results are provided."} {"_id": "2d78fbe680b4501b0c21fbd49eb7652592cf077d", "title": "Comparative study of Proportional Integral and Backstepping controller for Buck converter", "text": "This paper describes a comparative study of Proportional Integral (PI) and Backstepping controllers for a Buck converter with R-load and a DC motor. The Backstepping approach is an efficient control design procedure for both regulation and tracking problems. This approach is based upon a systematic procedure which guarantees global regulation and tracking. The proposed control scheme stabilizes the output (voltage or speed) and makes the tracking error converge to zero asymptotically. The Buck converter system is simulated in MATLAB, using state reconstruction techniques. Simulation results of the buck converter with R-load and a PMDC motor reveal that the settling time of the Backstepping controller is less than that of the PI controller."} {"_id": "bba5386f9210f2996d403f09224926d860c763d7", "title": "Robust Passivity-Based Control of a Buck\u2013Boost-Converter/DC-Motor System: An Active Disturbance Rejection Approach", "text": "This paper presents an active disturbance rejection (ADR) approach for the control of a buck-boost-converter feeding a dc motor. The presence of arbitrary, time-varying load torque inputs on the dc motor and the lack of direct measurability of the motor's angular velocity variable prompt a generalized proportional integral (GPI) observer-based ADR controller which is synthesized on the basis of passivity considerations. The GPI observer simultaneously estimates the angular velocity and the exogenous disturbance torque input in an on-line cancellation scheme, known as ADR control. The proposed control scheme is thus a sensorless one with robustness features added to the traditional energy shaping plus damping injection methodology.
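As a hedged illustration of the estimate-and-cancel idea behind ADR control, the sketch below uses a generic linear extended state observer on a first-order toy plant rather than the paper's GPI observer and buck-boost/dc-motor model; the plant, gains, and disturbance signal are all made up.

```python
import math

b, dt = 2.0, 1e-3                  # toy plant gain and time step
l1, l2 = 200.0, 1.0e4              # observer gains (poles at s = -100)
kp = 20.0                          # outer-loop proportional gain
y, z1, z2, y_ref = 0.0, 0.0, 0.0, 1.0

# Plant: dy/dt = b*u + f(t), with f an unknown time-varying disturbance.
for k in range(5000):
    f = 0.5 * math.sin(3.0 * k * dt)      # unknown to the controller
    u = (kp * (y_ref - y) - z2) / b       # cancel estimated disturbance
    e = y - z1                            # observer innovation
    z1 += dt * (z2 + b * u + l1 * e)      # z1 tracks the output y
    z2 += dt * (l2 * e)                   # z2 tracks f (extended state)
    y += dt * (b * u + f)                 # simulate the true plant

print(round(y, 3), round(z2, 3))   # y settles near y_ref; z2 tracks f(t)
```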
The discrete switching control realization of the designed continuous feedback control law is accomplished by means of a traditional PWM scheme. Additionally, an input-to-state stability property of the closed-loop system is established. Experimental and simulation results are provided."} {"_id": "d8ca9a094f56e3fe542269ea272b46f5e46bdd99", "title": "Closed-Loop Analysis and Cascade Control of a Nonminimum Phase Boost Converter", "text": "In this paper, a cascade controller is designed and analyzed for a boost converter. The fast inner current loop uses sliding-mode control. The slow outer voltage loop uses proportional-integral (PI) control. Stability analysis and selection of the PI gains are based on the nonlinear closed-loop error dynamics. It is proven that the closed-loop system exhibits nonminimum phase behavior. The voltage transients and reference voltage are predictable. The current ripple and system sensitivity are studied. The controller is validated by a simulation circuit with nonideal circuit parameters, different circuit parameters, and various maximum switching frequencies. The simulation results show that the reference output voltage is well tracked under parametric changes, system uncertainties, or external disturbances with fast dynamic transients, confirming the validity of the proposed controller."} {"_id": "06d22950a79a839d864b575569a0de91ded33135", "title": "A general approach to control a Positive Buck-Boost converter to achieve robustness against input voltage fluctuations and load changes", "text": "A positive buck-boost converter is a known DC-DC converter which may be controlled to act as a buck or boost converter with the same polarity of the input voltage. This converter has four switching states which include all the switching states of the above-mentioned DC-DC converters. In addition, there is one switching state which provides a degree of freedom for the positive buck-boost converter in comparison to the buck, boost, and inverting buck-boost converters. In other words, the positive buck-boost converter shows a higher level of flexibility for its inductor current control compared to the other DC-DC converters. In this paper this extra degree of freedom is utilised to increase robustness against input voltage fluctuations and load changes. To address this capacity of the positive buck-boost converter, two different control strategies are proposed which control the inductor current and output voltage against any fluctuations in input voltage and load changes. Mathematical analysis for dynamic and steady-state conditions is presented in this paper and simulation results verify the proposed method."} {"_id": "8b0e6c02a49dcd9ef946c2af4f2f9290b8e65b2f", "title": "Wideband Millimeter-Wave Surface Micromachined Tapered Slot Antenna", "text": "A millimeter-wave surface micromachined tapered slot antenna (TSA) fed by an air-filled rectangular coaxial line is proposed. A detailed parametric study is conducted to determine the effects of the TSA's key structural features on its VSWR and gain. The selected exponential taper with determined growth rate and corrugation length resulted in an antenna with VSWR < 2.5, gain > 2.75 dBi, and stable patterns over a 43-140-GHz range.
The TSA is fabricated, and good correlation between modeling and W-band measurements confirms its wideband performance."} {"_id": "60bbeedf201a2fcdf9efac19ff32aafe2e33b606", "title": "The genomic landscapes of human breast and colorectal cancers.", "text": "Human cancer is caused by the accumulation of mutations in oncogenes and tumor suppressor genes. To catalog the genetic changes that occur during tumorigenesis, we isolated DNA from 11 breast and 11 colorectal tumors and determined the sequences of the genes in the Reference Sequence database in these samples. Based on analysis of exons representing 20,857 transcripts from 18,191 genes, we conclude that the genomic landscapes of breast and colorectal cancers are composed of a handful of commonly mutated gene \"mountains\" and a much larger number of gene \"hills\" that are mutated at low frequency. We describe statistical and bioinformatic tools that may help identify mutations with a role in tumorigenesis. These results have implications for understanding the nature and heterogeneity of human cancers and for using personal genomics for tumor diagnosis and therapy."} {"_id": "818c13721db30a435044b37014fe7077e5a8a587", "title": "Incorporating partitioning and parallel plans into the SCOPE optimizer", "text": "Massive data analysis on large clusters presents new opportunities and challenges for query optimization. Data partitioning is crucial to performance in this environment. However, data repartitioning is a very expensive operation so minimizing the number of such operations can yield very significant performance improvements. A query optimizer for this environment must therefore be able to reason about data partitioning including its interaction with sorting and grouping. SCOPE is a SQL-like scripting language used at Microsoft for massive data analysis. A transformation-based optimizer is responsible for converting scripts into efficient execution plans for the Cosmos distributed computing platform. In this paper, we describe how reasoning about data partitioning is incorporated into the SCOPE optimizer. We show how relational operators affect partitioning, sorting and grouping properties and describe how the optimizer reasons about and exploits such properties to avoid unnecessary operations. In most optimizers, consideration of parallel plans is an afterthought done in a postprocessing step. Reasoning about partitioning enables the SCOPE optimizer to fully integrate consideration of parallel, serial and mixed plans into the cost-based optimization. The benefits are illustrated by showing the variety of plans enabled by our approach."} {"_id": "8420f2f686890d9675538ec831dbb43568af1cb3", "title": "Sentiment classification of Hinglish text", "text": "In order to determine the sentiment polarity of Hinglish text written in Roman script, we experimented with different combinations of feature selection methods and a host of classifiers using term frequency-inverse document frequency feature representation. We carried out a total of 840 experiments in order to determine the best classifiers for sentiment expressed in the news and Facebook comments written in Hinglish.
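One cell of such an experiment grid might look like the scikit-learn sketch below; the gain-ratio feature selection and Radial Basis Function network this study reports are approximated here by chi-squared selection and a linear SVM, and the tiny Romanized-Hinglish corpus is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["movie bahut achhi thi", "kya bakwas film hai",
         "service ekdum mast", "bilkul bekaar product"]
labels = [1, 0, 1, 0]                                # 1 = positive

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # tf-idf features
    ("select", SelectKBest(chi2, k=5)),              # feature selection
    ("svm", LinearSVC()),                            # classifier
])
clf.fit(texts, labels)
print(clf.predict(["film ekdum mast thi"]))          # expect positive
```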
We concluded that the triumvirate of term frequency-inverse document frequency-based feature representation, gain-ratio-based feature selection, and a Radial Basis Function Neural Network is the best combination for classifying sentiment expressed in Hinglish text."} {"_id": "6ffe544f38ddfdba86a83805ce807f3c8e775fd6", "title": "Multi words quran and hadith searching based on news using TF-IDF", "text": "Each week religious leaders need to give advice to their community. Religious advice should ideally contain discussion of, and solutions for, the problems arising in society. But the large number of religious resources that must be considered against the many arising problems makes this religious task far from easy. In the Muslim community in particular, the religious resources are the Quran and the Kutubus Sitah, the six most referenced collections of reports about Muhammad (pbuh) (hadith). The problems arising in society can be read about in various online mass media. Working manually, religious leaders must know the Arabic word for the problem, search manually through the Mu'jam (the Quran and Hadith index), and then write out the verses or hadith found. The TF-IDF method is often used for term weighting in information retrieval and text mining. This research builds a tool that takes mass media news as input, performs multi-word searching over the database using TF-IDF (Term Frequency - Inverse Document Frequency), and returns relevant Quran verses and hadith. The top five most relevant Quran verses and hadith are displayed. As judged by religious leaders, the application gives 60% precision for Quranic verses and 53% for hadith, with an average query time of 2.706 seconds."} {"_id": "422a675b71f8655b266524a552e0246cb29e9bd5", "title": "GALE: Geometric Active Learning for Search-Based Software Engineering", "text": "Multi-objective evolutionary algorithms (MOEAs) help software engineers find novel solutions to complex problems. When automatic tools explore too many options, they are slow to use and hard to comprehend. GALE is a near-linear time MOEA that builds a piecewise approximation to the surface of best solutions along the Pareto frontier. For each piece, GALE mutates solutions towards the better end. In numerous case studies, GALE finds comparable solutions to standard methods (NSGA-II, SPEA2) using far fewer evaluations (e.g. 20 evaluations, not 1,000). GALE is recommended when a model is expensive to evaluate, or when some audience needs to browse and understand how an MOEA has made its conclusions."} {"_id": "a390073adc9c9d23d31404a9a8eb6dac7e684857", "title": "Local force cues for strength and stability in a distributed robotic construction system", "text": "Construction of spatially extended, self-supporting structures requires a consideration of structural stability throughout the building sequence. For collective construction systems, where independent agents act with variable order and timing under decentralized control, ensuring stability is a particularly pronounced challenge. Previous research in this area has largely neglected considering stability during the building process. Physical forces present throughout a structure may be usable as a cue to inform agent actions as well as an indirect communication mechanism (stigmergy) to coordinate their behavior, as adding material leads to redistribution of forces which then informs the addition of further material.
Here we consider in simulation a system of decentralized climbing robots capable of traversing and extending a two-dimensional truss structure, and explore the use of feedback based on force sensing as a way for the swarm to anticipate and prevent structural failures. We consider a scenario in which robots are tasked with building an unsupported cantilever across a gap, as for a bridge, where the goal is for the swarm to build any stable spanning structure rather than to construct a specific predetermined blueprint. We show that access to local force measurements enables robots to build cantilevers that span significantly farther than those built by robots without access to such information. This improvement is achieved by taking measures to maintain both strength and stability, where strength is ensured by paying attention to forces during locomotion to prevent joints from breaking, and stability is maintained by looking at how loads transfer to the ground to ensure against toppling. We show that swarms that take both kinds of forces into account have improved building performance, in both structured settings with flat ground and unpredictable environments with rough terrain."} {"_id": "511921e775ab05a1ab0770a63e57c93da51c8526", "title": "Use of AI Techniques for Residential Fire Detection in Wireless Sensor Networks", "text": "Early residential fire detection is important for prompt extinguishing and for reducing damage and loss of life. To detect fire, one or a combination of sensors and a detection algorithm are needed. The sensors might be part of a wireless sensor network (WSN) or work independently. Previous research in the area of fire detection using WSNs has paid little or no attention to investigating the optimal set of sensors or the use of learning mechanisms and Artificial Intelligence (AI) techniques. It has only made assumptions about what might be considered an appropriate sensor, or an arbitrary AI technique has been used. By closing the gap between traditional fire detection techniques and modern wireless sensor network capabilities, in this paper we present a guideline for choosing optimal sensor combinations for accurate residential fire detection. Additionally, the applicability of a feed-forward neural network (FFNN) and a Na\u00efve Bayes Classifier is investigated, and results in terms of detection rate and computational complexity are analyzed."} {"_id": "c97ebb60531a86bea516d3582758a45ba494de10", "title": "Smart Cars on Smart Roads: An IEEE Intelligent Transportation Systems Society Update", "text": "To promote tighter collaboration between the IEEE Intelligent Transportation Systems Society and the pervasive computing research community, the authors introduce the ITS Society and present several pervasive computing-related research topics that ITS Society researchers are working on. This department is part of a special issue on Intelligent Transportation."} {"_id": "e91196c1d0234da60314945c4812eda631004d8f", "title": "Towards Multi-Agent Communication-Based Language Learning", "text": "We propose an interactive multimodal framework for language learning. Instead of being passively exposed to large amounts of natural text, our learners (implemented as feed-forward neural networks) engage in cooperative referential games starting from a tabula rasa setup, and thus develop their own language from the need to communicate in order to succeed at the game.
Preliminary experiments provide promising results, but also suggest that it is important to ensure that agents trained in this way do not develop an ad hoc communication code only effective for the game they are playing."} {"_id": "6acb911d57720367d1ae7b9bce8ab9f9dcd9aadb", "title": "A region-based image caption generator with refined descriptions", "text": "Describing the content of an image is a challenging task. Detailed description requires the detection and recognition of objects, people, relationships and associated attributes. Currently, the majority of the existing research relies on holistic techniques, which may lose details relating to important aspects in a scene. In order to deal with such a challenge, we propose a novel region-based deep learning architecture for image description generation. It employs a regional object detector, recurrent neural network (RNN)-based attribute prediction, and an encoder-decoder language generator embedded with two RNNs to produce refined and detailed descriptions of a given image. Most importantly, the proposed system focuses on a local, region-based approach to further improve upon existing holistic methods, relating specifically to image regions of people and objects in an image. Evaluated with the IAPR TC-12 dataset, the proposed system shows impressive performance, and outperforms state-of-the-art methods using various evaluation metrics."} {"_id": "385467e7747904397134c3a16d8f3a417e6b16fc", "title": "3D printing: Basic concepts mathematics and technologies", "text": "3D printing is the process of being able to print any object layer by layer. But if we question this proposition, can we find any three-dimensional objects that can't be printed layer by layer? To banish any disbelief, we walk through the mathematics that proves 3D printing is feasible for any real-life object. 3D printers create three-dimensional objects by building them up layer by layer. The current generation of 3D printers typically requires input from a CAD program in the form of an STL file, which defines a shape by a list of triangle vertices. The vast majority of 3D printers use two techniques, FDM (Fused Deposition Modelling) and PBP (Powder Binder Printing). One advanced form of 3D printing that has been an area of increasing scientific interest in recent years is bioprinting. Cell printers utilizing techniques similar to FDM were developed for bioprinting. These printers give us the ability to place cells in positions that mimic their respective positions in organs. Finally, through a series of case studies, we show that 3D printers have lately made a massive breakthrough in medicine."} {"_id": "500b7d63e64e13fa47934ec9ad20fcfe0d4c17a7", "title": "3D strip meander delay line structure for multilayer LTCC-based SiP applications", "text": "Recently, the timing control of high-frequency signals has been strongly demanded due to the high integration density in three-dimensional (3D) LTCC-based SiP applications. Therefore, to control the skew or timing delay, new 3D delay lines are proposed. To address the frailty of the signal via, we adopt the concept of a coaxial line and propose an advanced signal via structure with quasi-coaxial ground (QCOX-GND) vias.
We show simulated results using EM and circuit simulators."} {"_id": "a5eeee49f3da9bb3ce75de4c28823dddb7ed23db", "title": "Visual secret sharing scheme for (k,\u00a0n) threshold based on QR code with multiple decryptions", "text": "In this paper, a novel visual secret sharing (VSS) scheme based on QR code (VSSQR) with (k,\u00a0n) threshold is investigated. Our VSSQR exploits the error correction mechanism in the QR code structure to generate the bits corresponding to shares (shadow images) by VSS from a secret bit in the process of encoding the QR code. Each output share is a valid QR code that can be scanned and decoded utilizing a QR code reader, which may reduce the likelihood of attracting the attention of potential attackers. For different application scenarios, two different ways of recovering the secret image are given. The proposed VSS scheme based on QR code can visually reveal the secret image by stacking or XOR decryption, and every shadow image, i.e., a QR code, can be scanned by a QR code reader. The secret image can be revealed by the human visual system via stacking, without any computation, when no lightweight computation device is available. On the other hand, if a lightweight computation device is available, the secret image can be revealed with better visual quality based on the XOR operation, and can be losslessly revealed when sufficient shares are collected. In addition, the scheme can assist alignment for VSS recovery. The experimental results show the effectiveness of our scheme."} {"_id": "87374ee9a49ab4b51176b4155eaa6285c02463a1", "title": "Use of Web 2.0 technologies in K-12 and higher education: The search for evidence-based practice", "text": "Evidence-based practice in education entails making pedagogical decisions that are informed by relevant empirical research evidence. The main purpose of this paper is to discuss evidence-based pedagogical approaches related to the use of Web 2.0 technologies in both K-12 and higher education settings. The use of such evidence-based practice would be useful to educators interested in fostering student learning through Web 2.0 tools. A comprehensive literature search across the Academic Search Premier, Education Research Complete, ERIC, and PsycINFO databases was conducted. Empirical studies were included for review if they specifically examined the impact of Web 2.0 technologies on student learning. Articles that merely described anecdotal studies such as student perception or feeling toward learning using Web 2.0, or studies that relied on student self-report data such as student questionnaire surveys and interviews, were excluded. Overall, the results of our review suggested that actual evidence regarding the impact of Web 2.0 technologies on student learning is as yet fairly weak. Nevertheless, the use of Web 2.0 technologies appears to have a general positive impact on student learning. None of the studies reported a detrimental or inferior effect on learning. The positive effects are not necessarily attributed to the technologies per se but to how the technologies are used, and how one conceptualizes learning. It may be tentatively concluded that a dialogic, constructionist, or co-constructive pedagogy supported by activities such as Socratic questioning, peer review and self-reflection appeared to increase student achievement in blog-, wiki-, and 3-D immersive virtual world environments, while a transmissive pedagogy supported by review activities appeared to enhance student learning using podcasts.
"} {"_id": "2e0305d97f2936ee2a87b87ea901d500a8fbcd16", "title": "Gaussian Process Regression with Heteroscedastic or Non-Gaussian Residuals", "text": "Gaussian Process (GP) regression models typically assume that residuals are Gaussian and have the same variance for all observations. However, applications with input-dependent noise (heteroscedastic residuals) frequently arise in practice, as do applications in which the residuals do not have a Gaussian distribution. In this paper, we propose a GP Regression model with a latent variable that serves as an additional unobserved covariate for the regression. This model (which we call GPLC) allows for heteroscedasticity since it allows the function to have a changing partial derivative with respect to this unobserved covariate. With a suitable covariance function, our GPLC model can handle (a) Gaussian residuals with input-dependent variance, or (b) non-Gaussian residuals with input-dependent variance, or (c) Gaussian residuals with constant variance. We compare our model, using synthetic datasets, with a model proposed by Goldberg, Williams and Bishop (1998), which we refer to as GPLV, which only deals with case (a), as well as a standard GP model which can handle only case (c). Markov Chain Monte Carlo methods are developed for both models. Experiments show that when the data is heteroscedastic, both GPLC and GPLV give better results (smaller mean squared error and negative log-probability density) than standard GP regression. In addition, when the residuals are Gaussian, our GPLC model is generally nearly as good as GPLV, while when the residuals are non-Gaussian, our GPLC model is better than GPLV."} {"_id": "fb7920c6b16ead15a3c0f62cf54c2af9ff9c550f", "title": "Atrous Convolutional Neural Network (ACNN) for Semantic Image Segmentation with full-scale Feature Maps", "text": "Deep Convolutional Neural Networks (DCNNs) are used extensively in biomedical image segmentation. However, current DCNNs usually use downsampling layers to increase the receptive field and gain abstract semantic information. These downsampling layers decrease the spatial dimension of feature maps, which can be detrimental to semantic image segmentation. Atrous convolution is an alternative to the downsampling layer. It increases the receptive field whilst maintaining the spatial dimension of feature maps. In this paper, a method for effective atrous rate setting is proposed to achieve the largest and fully-covered receptive field with a minimum number of atrous convolutional layers. Furthermore, different atrous blocks, shortcut connections and normalization methods are explored to select the optimal network structure setting. These lead to a new full-scale DCNN, the Atrous Convolutional Neural Network (ACNN), which incorporates cascaded atrous II-blocks, residual learning and Fine Group Normalization (FGN).
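The receptive-field arithmetic behind such rate setting can be sketched as follows: each stride-1 layer with kernel size k and rate r adds (k-1)*r to the receptive field, and a stack leaves no gaps as long as each layer's rate does not exceed the receptive field accumulated beneath it. The helper below encodes this; the example rate schedules are illustrative, not the paper's.

```python
def receptive_field(kernel, rates):
    """Receptive field of stacked stride-1 atrous convolutions."""
    rf = 1
    for r in rates:
        rf += (kernel - 1) * r    # effective kernel is (k-1)*r + 1
    return rf

def fully_covered(kernel, rates):
    """True if no input pixel between taps is skipped (no gridding):
    a layer's rate must not exceed the receptive field below it."""
    rf = 1
    for r in rates:
        if r > rf:
            return False
        rf += (kernel - 1) * r
    return True

for rates in ([1, 2, 4, 8], [1, 4, 16]):
    print(rates, receptive_field(3, rates), fully_covered(3, rates))
# [1, 2, 4, 8] -> rf 31, covered; [1, 4, 16] -> rf 43, but has gaps
```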
Application results of the proposed ACNN to Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) image segmentation demonstrate that the proposed ACNN can achieve comparable segmentation Dice Similarity Coefficients (DSCs) to U-Net, optimized U-Net and the hybrid network, but with significantly reduced trainable parameters due to the use of full-scale feature maps, and is therefore much more computationally efficient for both training and inference."} {"_id": "3ad1787e6690c80f8c934c150d31b7dd6d410903", "title": "Orthogonal AMP", "text": "Approximate message passing (AMP) is a low-cost iterative signal recovery algorithm for linear system models. When the system transform matrix has independent identically distributed (IID) Gaussian entries, the performance of AMP can be asymptotically characterized by a simple scalar recursion called state evolution (SE). However, SE may become unreliable for other matrix ensembles, especially for ill-conditioned ones. This imposes limits on the applications of AMP. In this paper, we propose an orthogonal AMP (OAMP) algorithm based on de-correlated linear estimation (LE) and divergence-free non-linear estimation (NLE). The Onsager term in standard AMP vanishes as a result of the divergence-free constraint on NLE. We develop an SE procedure for OAMP and show numerically that the SE for OAMP is accurate for general unitarily-invariant matrices, including IID Gaussian matrices and partial orthogonal matrices. We further derive optimized options for OAMP and show that the corresponding SE fixed point coincides with the optimal performance obtained via the replica method. Our numerical results demonstrate that OAMP can be advantageous over AMP, especially for ill-conditioned matrices."} {"_id": "cfec3fb4352ebb004b0aaf8b0a3b9869f23e7765", "title": "Learning Discrete Hashing Towards Efficient Fashion Recommendation", "text": "In our daily life, how to match clothing well is always a troublesome problem, especially when we are shopping online to select a pair of matching pieces of clothing from tens of thousands of available selections. To help common customers overcome selection issues, recent studies in the recommender system area have started to infer fashion matching results automatically. Traditional fashion recommendation is normally achieved by considering visual similarity of clothing items and/or item co-purchase history from existing shopping transactions. Due to the high complexity of visual features and the lack of historical item purchase records, most of the existing work is unlikely to make an efficient and accurate recommendation. To address the problem, in this paper, we propose a new model called Discrete Supervised Fashion Coordinates Hashing. Its main objective is to learn meaningful yet compact high-level features of clothing items, which are represented as binary hash codes. In detail, this learning process is supervised by a clothing matching matrix, which is initially constructed based on limited known matching pairs and subsequently on the self-augmented ones. The proposed model jointly learns the intrinsic matching patterns from the matching matrix and the binary representations from the clothing items\u2019 images, where the visual feature of each clothing item is discretized into a fixed-length binary vector. The binary representation learning significantly reduces the memory cost and accelerates the recommendation speed.
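To see why binary codes make recommendation fast, consider this sketch of Hamming-distance ranking over bit-packed codes; the random codes stand in for the learned hashes, and the catalog size is arbitrary.

```python
import numpy as np

def pack(bits):
    """Pack an (n, d) array of 0/1 hash bits into (n, d/8) uint8 words."""
    return np.packbits(bits, axis=1)

def hamming_rank(query, items):
    """Rank items by Hamming distance to the query via XOR + popcount."""
    x = np.bitwise_xor(items, query)              # differing bits
    dist = np.unpackbits(x, axis=1).sum(axis=1)   # popcount per item
    order = np.argsort(dist)
    return order, dist[order]

rng = np.random.default_rng(0)
d, n = 64, 10000                                  # 64-bit codes, 10k items
items = pack(rng.integers(0, 2, size=(n, d), dtype=np.uint8))
query = pack(rng.integers(0, 2, size=(1, d), dtype=np.uint8))
order, dist = hamming_rank(query, items)
print(order[:5], dist[:5])                        # closest items first
```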
Experiments comparing the proposed approach with several state-of-the-art approaches evidence its superior performance for efficient fashion recommendation."} {"_id": "1df0b93bd54a104c862002210cbb2051ab3901b4", "title": "Improving malware detection by applying multi-inducer ensemble", "text": "Detection of malicious software (malware) using machine learning methods has been explored extensively to enable fast detection of newly released malware. The performance of these classifiers depends on the induction algorithms being used. In order to benefit from multiple different classifiers, and exploit their strengths, we suggest using an ensemble method that will combine the results of the individual classifiers into one final result to achieve overall higher detection accuracy. In this paper we evaluate several combining methods using five different base inducers (C4.5 Decision Tree, Na\u00efve Bayes, KNN, VFI and OneR) on five malware datasets. The main goal is to find the best combining method for the task of detecting malicious files in terms of accuracy, AUC and execution time."} {"_id": "9d852855ba9b805f092b271f940848c3009a6a90", "title": "Unikernel-based approach for software-defined security in cloud infrastructures", "text": "The heterogeneity of cloud resources implies substantial overhead to deploy and configure adequate security mechanisms. In that context, we propose a software-defined security strategy based on unikernels to support the protection of cloud infrastructures. This approach makes it possible to address management issues by uncoupling security policies from their enforcement through programmable security interfaces. It also benefits from unikernel virtualization properties to support this enforcement and provide resources with a low attack surface. These resources correspond to highly constrained configurations with the strict minimum for a given period. We describe the management framework supporting this software-defined security strategy, formalizing the generation of unikernel images that are dynamically built to comply with security requirements over time. Through an implementation based on MirageOS, and extensive experiments, we show that the cost induced by our security integration mechanisms is small while the gains in limiting the security exposure are high."} {"_id": "4d767a7a672536922a6f393b4b70db8776e4821d", "title": "Sentiment Classification based on Latent Dirichlet Allocation", "text": "Opinion mining refers to the use of natural language processing, text analysis and computational linguistics to identify and extract subjective information. Opinion Mining has become an indispensable part of online reviews in the present scenario. In the field of information retrieval, various kinds of probabilistic topic modeling techniques have been used to analyze the contents present in a document. A topic model is a generative technique for documents. All"} {"_id": "1a07186bc10592f0330655519ad91652125cd907", "title": "A unified architecture for natural language processing: deep neural networks with multitask learning", "text": "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model.
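A minimal sketch of how a single network can emit all of these predictions via hard parameter sharing, which the joint training described next exploits: one shared encoder feeds per-task output heads, so every task's gradients update the shared weights. Layer sizes, task names, and tag counts are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedMultitask(nn.Module):
    """Shared embedding + convolutional encoder, one linear head per task."""
    def __init__(self, vocab=1000, emb=32, hidden=64,
                 tasks=(("pos", 17), ("chunk", 9), ("ner", 5))):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)            # shared lookup
        self.encoder = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in tasks})

    def forward(self, tokens, task):
        h = self.embed(tokens).transpose(1, 2)           # (B, emb, T)
        h = torch.relu(self.encoder(h)).transpose(1, 2)  # (B, T, hidden)
        return self.heads[task](h)                       # per-token scores

model = SharedMultitask()
batch = torch.randint(0, 1000, (4, 12))                  # 4 toy sentences
scores = model(batch, "pos")                             # (4, 12, 17)
loss = nn.functional.cross_entropy(
    scores.reshape(-1, 17), torch.randint(0, 17, (4 * 12,)))
loss.backward()    # gradients reach the shared encoder and embeddings
```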
The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance."} {"_id": "27e38351e48fe4b7da2775bf94341738bc4da07e", "title": "Semantic Compositionality through Recursive Matrix-Vector Spaces", "text": "Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them."} {"_id": "303b0b6e6812c60944a4ac9914222ac28b0813a2", "title": "Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis", "text": "This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline."} {"_id": "4eb943bf999ce49e5ebb629d7d0ffee44becff94", "title": "Finding Structure in Time", "text": "Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items.
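The recurrence just described fits in a few lines: the previous hidden state is fed back as context alongside the current input; dimensions and weights here are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 8, 4
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))    # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))   # context -> hidden
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))   # hidden -> output

def step(x, h_prev):
    """One Elman step: context units hold the previous hidden state."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)
    return W_hy @ h, h          # e.g. scores for predicting the next symbol

h = np.zeros(n_hid)
for t in range(5):              # feed a short one-hot sequence
    y, h = step(np.eye(n_in)[t % n_in], h)
print(y.round(2))               # output depends on the whole history
```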
These representations suggest a method for representing lexical categories and the type/token distinction."} {"_id": "e354b28299c0c3fc8e57d081f02ac0b99b1ed354", "title": "Grasping Without Squeezing: Design and Modeling of Shear-Activated Grippers", "text": "Grasping objects that are too large to envelop is traditionally achieved using friction that is activated by squeezing. We present a family of shear-activated grippers that can grasp such objects without the need to squeeze. When a shear force is applied to the gecko-inspired material in our grippers, adhesion is turned on; this adhesion in turn results in adhesion-controlled friction, a friction force that depends on adhesion rather than a squeezing normal force. Removal of the shear force eliminates adhesion, allowing easy release of an object. A compliant shear-activated gripper without active sensing and control can use the same light touch to lift objects that are soft, brittle, fragile, light, or very heavy. We present three grippers, the first two designed for curved objects, and the third for nearly any shape. Simple models describe the grasping process, and empirical results verify the models. The grippers are demonstrated on objects with a variety of shapes, materials, sizes, and weights."} {"_id": "1c4eda4f85559b3c3fcae6ca6ec4a54bff18002e", "title": "Quantifying Mental Health from Social Media with Neural User Embeddings", "text": "Mental illnesses adversely affect a significant proportion of the population worldwide. However, the typical methods to estimate and characterize the prevalence of mental health conditions are time-consuming and expensive. Consequently, best-available estimates concerning the prevalence of these conditions are often years out of date. Automated approaches that supplement traditional methods with broad, aggregated information derived from social media provide a potential means of furnishing near real-time estimates at scale. These may in turn provide grist for supporting, evaluating and iteratively improving public health programs and interventions. We propose a novel approach for mental health quantification that leverages user embeddings induced from social media post histories. Recent work showed that learned user representations capture latent aspects of individuals (e.g., political leanings). This paper investigates whether these representations also correlate with mental health statuses. To this end, we induced embeddings for a set of users known to be affected by depression and post-traumatic stress disorder, and for a set of demographically matched \u2018control\u2019 users. We then evaluated the induced user representations with respect to: (i) their ability to capture homophilic relations with respect to mental health statuses; and (ii) their predictive performance in downstream mental health models. Our experimental results demonstrate that learned user embeddings capture relevant signals for mental health quantification."} {"_id": "31e6f92c83ad2df9859f5211971540412beebea6", "title": "Electroporation of Cells and Tissues", "text": "Electrical pulses that cause the transmembrane voltage of fluid lipid bilayer membranes to reach at least 0.2 V, usually 0.5\u20131 V, are hypothesized to create primary membrane \u201cpores\u201d with a minimum radius of \u223c1 nm. Transport of small ions such as Na+ and Cl- through a dynamic pore population discharges the membrane even while an external pulse tends to increase the transmembrane voltage, leading to dramatic electrical behavior.
Molecular transport through primary pores and pores enlarged by secondary processes provides the basis for transporting molecules into and out of biological cells. Cell electroporation in vitro is used mainly for transfection by DNA introduction, but many other interventions are possible, including microbial killing. Ex vivo electroporation provides manipulation of cells that are reintroduced into the body to provide therapy. In vivo electroporation of tissues enhances molecular transport through tissues and into their constituent cells. Tissue electroporation, by longer, larger pulses, is involved in electrocution injury. Tissue electroporation by shorter, smaller pulses is under investigation for biomedical engineering applications of medical therapy aimed at cancer treatment, gene therapy, and transdermal drug delivery. The latter involves a complex barrier containing both high electrical resistance, multilamellar lipid bilayer membranes and a tough, electrically invisible protein matrix."} {"_id": "3ac9c55c9da1b19b66f4249edad5468daa452b02", "title": "Computing With Words for Hierarchical Decision Making Applied to Evaluating a Weapon System", "text": "The perceptual computer (Per-C) is an architecture that makes subjective judgments by computing with words (CWWs). This paper applies the Per-C to hierarchical decision making, which means decision making based on comparing the performance of competing alternatives, where each alternative is first evaluated based on hierarchical criteria and subcriteria, and then, these alternatives are compared to arrive at either a single winner or a subset of winners. What can make this challenging is that the inputs to the subcriteria and criteria can be numbers, intervals, type-1 fuzzy sets, or even words modeled by interval type-2 fuzzy sets. Novel weighted averages are proposed in this paper as a CWW engine in the Per-C to aggregate these diverse inputs. A missile-evaluation problem is used to illustrate it. The main advantages of our approaches are that diverse inputs can be aggregated, and uncertainties associated with these inputs can be preserved and are propagated into the final evaluation."} {"_id": "2ca36cfea0aba89e19b3551c325e999e0fb6607c", "title": "Attribute-Based Encryption for Circuits", "text": "In an attribute-based encryption (ABE) scheme, a ciphertext is associated with an \u2113-bit public index ind and a message m, and a secret key is associated with a Boolean predicate P. The secret key allows decrypting the ciphertext and learning m if and only if P(ind) = 1. Moreover, the scheme should be secure against collusions of users, namely, given secret keys for polynomially many predicates, an adversary learns nothing about the message if none of the secret keys can individually decrypt the ciphertext.\n We present attribute-based encryption schemes for circuits of any arbitrary polynomial size, where the public parameters and the ciphertext grow linearly with the depth of the circuit. Our construction is secure under the standard learning with errors (LWE) assumption. Previous constructions of attribute-based encryption were for Boolean formulas, captured by the complexity class NC1.\n In the course of our construction, we present a new framework for constructing ABE schemes.
As a by-product of our framework, we obtain ABE schemes for polynomial-size branching programs, corresponding to the complexity class LOGSPACE, under quantitatively better assumptions."} {"_id": "b3ddd3d1c874050bf2c7ab770540a7f3e503a2cb", "title": "The Impact of Nonlinear Junction Capacitance on Switching Transient and Its Modeling for SiC MOSFET", "text": "The nonlinear junction capacitances of power devices are critical for the switching transient, which should be fully considered in the modeling and transient analysis, especially for high-frequency applications. The silicon carbide (SiC) MOSFET combined with SiC Schottky Barrier Diode (SBD) is recognized as the proposed choice for high-power and high-frequency converters. However, in the existing SiC MOSFET models only the nonlinearity of gate-drain capacitance is considered meticulously, but the drain-source capacitance, which affects the switching commutation process significantly, is generally regarded as constant. In addition, the nonlinearity of diode junction capacitance is neglected in some simplified analysis. Experiments show that without full consideration of nonlinear junction capacitances, some significant deviations between simulated and measured results will emerge in the switching waveforms. In this paper, the nonlinear characteristics of drain-source capacitance in SiC MOSFET are studied in detail, and the simplified modeling methods for engineering applications are presented. On this basis, the SiC MOSFET model is improved and the simulation results with the improved model correspond with the measured results much better than before, which verify the analysis and modeling."} {"_id": "1ddd5a27359e7b4101be84b1ee4ac6da35792492", "title": "Learning Mixtures of Gaussians in High Dimensions", "text": "Efficiently learning mixtures of Gaussians is a fundamental problem in statistics and learning theory. Given samples coming from a random one out of k Gaussian distributions in R^n, the learning problem asks to estimate the means and the covariance matrices of these Gaussians. This learning problem arises in many areas ranging from the natural sciences to the social sciences, and has also found many machine learning applications. Unfortunately, learning mixtures of Gaussians is an information theoretically hard problem: in order to learn the parameters up to a reasonable accuracy, the number of samples required is exponential in the number of Gaussian components in the worst case. In this work, we show that provided we are in high enough dimensions, the class of Gaussian mixtures is learnable in its most general form under a smoothed analysis framework, where the parameters are randomly perturbed from an adversarial starting point. In particular, given samples from a mixture of Gaussians with randomly perturbed parameters, when n \u2265 \u03a9(k^2), we give an algorithm that learns the parameters with polynomial running time and using a polynomial number of samples.\n The central algorithmic ideas consist of new ways to decompose the moment tensor of the Gaussian mixture by exploiting its structural properties. The symmetries of this tensor are derived from the combinatorial structure of higher order moments of Gaussian distributions (sometimes referred to as Isserlis' theorem or Wick's theorem).
We also develop new tools for bounding smallest singular values of structured random matrices, which could be useful in other smoothed analysis settings."} {"_id": "bfbed02e4e4ee0d382c58c3d33a15355358119ee", "title": "Nonlinear observers for predicting state-of-charge and state-of-health of lead-acid batteries for hybrid-electric vehicles", "text": "This paper describes the application of state-estimation techniques for the real-time prediction of the state-of-charge (SoC) and state-of-health (SoH) of lead-acid cells. Specifically, approaches based on the well-known Kalman Filter (KF) and Extended Kalman Filter (EKF) are presented, using a generic cell model, to provide correction for offset, drift, and long-term state divergence, an unfortunate feature of more traditional coulomb-counting techniques. The underlying dynamic behavior of each cell is modeled using two capacitors (bulk and surface) and three resistors (terminal, surface, and end), from which the SoC is determined from the voltage present on the bulk capacitor. Although the structure of the model has been previously reported for describing the characteristics of lithium-ion cells, here it is shown to also provide an alternative to commonly employed models of lead-acid cells when used in conjunction with a KF to estimate SoC and an EKF to predict state-of-health (SoH). Measurements using real-time road data are used to compare the performance of conventional integration-based methods for estimating SoC with those predicted from the presented state estimation schemes. Results show that the proposed methodologies are superior to more traditional techniques, with accuracy in determining the SoC within 2% being demonstrated. Moreover, by accounting for the nonlinearities present within the dynamic cell model, the application of an EKF is shown to provide verifiable indications of SoH of the cell pack."} {"_id": "05ce02eacc1026a84ff7972ad25490db39040671", "title": "Pilot Contamination and Precoding in Multi-Cell TDD Systems", "text": "This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. Channel state information (CSI) is essential for precoding at the base stations. An effective technique for obtaining this CSI is time-division duplex (TDD) operation where uplink training in conjunction with reciprocity simultaneously provides the base stations with downlink as well as uplink channel estimates. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being linear, this precoding method has a simple closed-form expression that results from an intuitive optimization.
Numerical results show significant performance gains compared to certain popular single-cell precoding methods."} {"_id": "c944f96d97613f4f92a2e5d1cc9b87ea1d4f44d8", "title": "Design of IRNSS receiver antennae for smart city applications in India", "text": "In this research paper, a rectangular fractal antenna is designed which may be used for IRNSS-based smart city applications in India. For tracking and positioning applications, the government of India has developed its own navigation system, called the Indian Regional Navigational Satellite System (IRNSS). Designing an antenna for such an application is therefore desirable. For the upcoming smart cities in India, the IRNSS will play a vital role in terms of traffic management, intelligent transportation, vehicle tracking and disaster management. A rectangular fractal antenna based on Sierpinski carpet antenna geometry for dual band resonant frequencies is simulated and tested. The antenna has operational frequencies at 2492.08 MHz (S-band) and 1176.45 MHz (L5-band) with return losses of -24.323 dB and -13.41 dB, respectively."} {"_id": "9ff32b1a8e4e0d0c0e6ac89b75ee1489b2430b36", "title": "A Low-Cost Gate Driver Design Using Bootstrap Capacitors for Multilevel MOSFET Inverters", "text": "Multilevel inverters require a large number of power semiconductors. The gate driver for each power semiconductor requires its own floating or isolated DC voltage source. Traditional methods of meeting this requirement are expensive and bulky. This paper presents a gate driver design for floating voltage source type multilevel inverters. Bootstrap capacitors are used to form the floating voltage sources in the design, which allows a single DC power supply to be used by all the gate drivers. Specially configured diodes plus proper charging cycles maintain adequate capacitor voltage levels. Such a simple, low-cost solution allows users to utilize easily accessible single-channel gate drivers for multilevel inverter applications without the extra cost of bulky isolated DC power supplies for each gate driver. A prototype 3-cell 8-level floating voltage source inverter using the method illustrates the technique."} {"_id": "22f6360f4174515ef69904f2f0609d0021a084a1", "title": "Detecting Text in the Wild with Deep Character Embedding Network", "text": "Most text detection methods hypothesize texts are horizontal or multi-oriented and thus define quadrangles as the basic detection unit. However, text in the wild is usually perspectively distorted or curved, which cannot be easily tackled by existing approaches. In this paper, we propose a deep character embedding network (CENet) which simultaneously predicts the bounding boxes of characters and their embedding vectors, thus making text detection a simple clustering task in the character embedding space. The proposed method does not require strong assumptions of forming a straight line on general text detection, which provides flexibility on arbitrarily curved or perspectively distorted text. For the character detection task, a dense prediction subnetwork is designed to obtain the confidence score and bounding boxes of characters. For the character embedding task, a subnet is trained with contrastive loss to project detected characters into embedding space. The two tasks share a backbone CNN from which the multi-scale feature maps are extracted. The final text regions can be easily achieved by a thresholding process on character confidence and embedding distance of character pairs.
We evaluated our method on ICDAR13, ICDAR15, MSRA-TD500, and Total-Text. The proposed method achieves state-of-the-art or comparable performance on all of the datasets, and shows a substantial improvement in the irregular-text datasets, i.e. Total-Text."} {"_id": "2069c9389df8bb29b7fedf2c2ccfe7aaf82b2832", "title": "Heterogeneous Transfer Learning for Image Classification", "text": "Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related auxiliary domains for learning. While most of the existing works in this area are only focused on using the source data with the same representational structure as the target data, in this paper, we push this boundary further by extending a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text documents are arbitrary. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through matrix factorization, and to use the latent semantic features generated by the auxiliary data to build a better image classifier. We empirically verify the effectiveness of our algorithm on the Caltech-256 image dataset."} {"_id": "922ac52351dac20ddbcd5e98e68f95ae7f1c502d", "title": "Employing U.S. Military Families to Provide Business Process Outsourcing Services: A Case study of Impact Sourcing and Reshoring", "text": "This paper describes how a startup business process outsourcing (BPO) provider named Liberty Source helped a large U.S.-based client reshore business services from an established Indian BPO provider. Founded in 2014, Liberty Source is a for-profit firm that provides a competitive alternative to offshoring while fulfilling its social mission to launch and sustain the careers of U.S. military spouses and veterans who face various employment disadvantages. Thus, the case describes reshoring in the context of impact sourcing. It addresses key impact sourcing issues pertaining to workforce development, scalability, and impact on employees. The impact was positive: the workers found the employment and stable salary were beneficial, \u201cthe military\u201d culture fit well with the workers, and workers received considerable flexibility and greater career options. Liberty Source was able to reduce a client\u2019s costs after reshoring the client\u2019s processes because Liberty Source\u2019s U.S. site had about 20 percent fewer full-time equivalents (FTEs) than the original India location and because Liberty Source received subsidies. We found evidence that the offshore BPO provider and Liberty Source experienced difficulties with finding enough skilled staff for the wages offered and both firms experienced attrition problems, although attrition was greater in India."} {"_id": "eeebffb3148dfcab6d2b7b464ad72feeff53c21f", "title": "A New Metrics for Countries' Fitness and Products' Complexity", "text": "Classical economic theories prescribe specialization of countries' industrial production.
Inspection of the country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is assessing quantitatively the non-monetary competitive advantage of diversification, which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for the country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measure the competitiveness of countries is the one presented in this work. Furthermore, our metrics appears to be economically well-grounded."} {"_id": "53a7a96149bae64041338fc58b281a6eef7d9912", "title": "What's Wrong with the Diffusion of Innovation Theory", "text": "This paper examines the usefulness of the diffusion of innovation research in developing theoretical accounts of the adoption of complex and networked IT solutions. We contrast six conjectures underlying DOI research with field data obtained from the study of the diffusion of EDI. Our analysis shows that DOI based analyses miss some important facets in the diffusion of complex technologies. We suggest that complex IT solutions should be understood as socially constructed and learning intensive artifacts, which can be adopted for varying reasons within volatile diffusion arenas. Therefore DOI researchers should carefully recognize the complex, networked, and learning intensive features of technology; understand the role of institutional regimes, focus on process features (including histories) and key players in the diffusion arena, develop multi-layered theories that factor out mappings between different layers and locales, use multiple perspectives including political models, institutional models and theories of team behavior, and apply varying time scales while crafting accounts of what happened and why. In general the paper calls for a need to develop DOI theories at the site by using multiple levels of"} {"_id": "091436ab676ecedc8c7ebde611bdbccf978c6e6c", "title": "Conflict Management in Groups that Work in Two Different Communication Contexts: Face-To-Face and Computer-Mediated Communication", "text": ""} {"_id": "35e0fb8d808ce3b30625a78aeed0d20cf5b39fb4", "title": "Static detection of cross-site scripting vulnerabilities", "text": "Web applications support many of our daily activities, but they often have security problems, and their accessibility makes them easy to exploit. In cross-site scripting (XSS), an attacker exploits the trust a web client (browser) has for a trusted server and executes injected script on the browser with the server's privileges. In 2006, XSS constituted the largest class of newly reported vulnerabilities, making it the most prevalent class of attacks today.
Web applications have XSS vulnerabilities because the validation they perform on untrusted input does not suffice to prevent that input from invoking a browser's JavaScript interpreter, and this validation is particularly difficult to get right if it must admit some HTML mark-up. Most existing approaches to finding XSS vulnerabilities are taint-based and assume input validation functions to be adequate, so they either miss real vulnerabilities or report many false positives.\n This paper presents a static analysis for finding XSS vulnerabilities that directly addresses weak or absent input validation. Our approach combines work on tainted information flow with string analysis. Proper input validation is difficult largely because of the many ways to invoke the JavaScript interpreter; we face the same obstacle checking for vulnerabilities statically, and we address it by formalizing a policy based on the W3C recommendation, the Firefox source code, and online tutorials about closed-source browsers. We provide effective checking algorithms based on our policy. We implement our approach and provide an extensive evaluation that finds both known and unknown vulnerabilities in real-world web applications."} {"_id": "6ac3171269609e4c2ca1c3326d1433a9c5b6c121", "title": "Combinatorial Bounds for Broadcast Encryption", "text": "A broadcast encryption system allows a center to communicate securely over a broadcast channel with selected sets of users. Each time the set of privileged users changes, the center enacts a protocol to establish a new broadcast key that only the privileged users can obtain, and subsequent transmissions by the center are encrypted using the new broadcast key. We study the inherent trade-off between the number of establishment keys held by each user and the number of transmissions needed to establish a new broadcast key. For every given upper bound on the number of establishment keys held by each user, we prove a lower bound on the number of transmissions needed to establish a new broadcast key. We show that these bounds are essentially tight, by describing broadcast encryption systems that come close to these bounds."} {"_id": "413bc7d58d64291042fffc58f36ee74d63c9cb00", "title": "Database security - concepts, approaches, and challenges", "text": "As organizations increase their reliance on, possibly distributed, information systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. Though a number of techniques, such as encryption and electronic signatures, are currently available to protect data when transmitted across sites, a truly comprehensive approach for data protection must also include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics, and other relevant contextual information, such as time. It is well understood today that the semantics of data must be taken into account in order to specify effective access control policies. Also, techniques for data integrity and availability specifically tailored to database systems must be adopted. In this respect, over the years, the database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. However, despite such advances, the database security area faces several new challenges.
Factors such as the evolution of security concerns, the \"disintermediation\" of access to data, new computing paradigms and applications, such as grid-based computing and on-demand business, have introduced both new security requirements and new contexts in which to apply and possibly extend current approaches. In this paper, we first survey the most relevant concepts underlying the notion of database security and summarize the most well-known techniques. We focus on access control systems, to which a large body of research has been devoted, and describe the key access control models, namely, the discretionary and mandatory access control models, and the role-based access control (RBAC) model. We also discuss security for advanced data management systems, and cover topics such as access control for XML. We then discuss current challenges for database security and some preliminary approaches that address some of these challenges."} {"_id": "b2114228411d367cfa6ca091008291f250a2c490", "title": "Deep learning and process understanding for data-driven Earth system science", "text": "Machine learning approaches are increasingly used to extract patterns and insights from the ever-increasing stream of geospatial data, but current approaches may not be optimal when system behaviour is dominated by spatial or temporal context. Here, rather than amending classical machine learning, we argue that these contextual cues should be used as part of deep learning (an approach that is able to extract spatio-temporal features automatically) to gain further process understanding of Earth system science problems, improving the predictive ability of seasonal forecasting and modelling of long-range spatial connections across multiple timescales, for example. The next step will be a hybrid modelling approach, coupling physical process models with the versatility of data-driven machine learning. Complex Earth system challenges can be addressed by incorporating spatial and temporal context into machine learning, especially via deep learning, and further by combining with physical models into hybrid models."} {"_id": "94c4ca9641c018617be87bb21754a3fa9fbc9dda", "title": "Efficient mesh denoising via robust normal filtering and alternate vertex updating", "text": "The most challenging problem in mesh denoising is to distinguish features from noise. Based on the robust guided normal estimation and alternate vertex updating strategy, we investigate a new feature-preserving mesh denoising method. To accurately capture local structures around features, we propose a corner-aware neighborhood (CAN) scheme. By combining both the overall normal distribution of all faces in a CAN and the individual normal influence of the face of interest, we give a new consistency measuring method, which greatly improves the reliability of the estimated guided normals. As the noise level lowers, we take as guidance the previous filtered normals, which coincides with the emerging rolling guidance idea. In the vertex updating process, we classify vertices according to filtered normals at each iteration and reposition vertices of distinct types alternately with individual regularization constraints.
Experiments on a variety of synthetic and real data indicate that our method adapts to various noise, both Gaussian and impulsive, whether in the normal direction or in a random direction, with few triangles flipped."} {"_id": "668f72e3c27a747f7f86e035709e990f9b145a73", "title": "An Orbital Angular Momentum (OAM) Mode Reconfigurable Antenna for Channel Capacity Improvement and Digital Data Encoding", "text": "For the purpose of utilizing orbital angular momentum (OAM) mode diversity, multiple OAM beams should preferably be generated by a single antenna. In this paper, an OAM mode reconfigurable antenna is proposed. Different from existing OAM antennas, which use multiple ports to transmit multiple OAM modes, the proposed antenna has only a single port, but it can be used to transmit mode 1 or mode \u22121 OAM beams arbitrarily by controlling the PIN diodes on the feeding network through a programmable microcontroller, which is controlled by a remote controller. Simulation and measurement results such as return loss, near-field and far-field radiation patterns of two operating states for mode 1 and mode \u22121, and OAM mode orthogonality are given. The proposed antenna can serve as a candidate for utilizing OAM diversity, namely phase diversity, to increase channel capacity at 2.4\u2009GHz. Moreover, an OAM-mode based encoding method is experimentally carried out by the proposed OAM mode reconfigurable antenna, in which the digital data are encoded and decoded by different OAM modes. At the transmitter, the proposed OAM mode reconfigurable antenna is used to encode the digital data; data symbols 0 and 1 are mapped to OAM mode 1 and mode \u22121, respectively. At the receiver, the data symbols are decoded by the phase gradient method."} {"_id": "c2a684f7f24e8f1448204b3f9f00b26d34e9a03b", "title": "Unsupervised deformable image registration with fully connected generative neural network", "text": "In this paper, a new deformable image registration method based on a fully connected neural network is proposed. Even though the deformation fields relating the point correspondence between fixed and moving images are high-dimensional in nature, we assume that these deformation fields form a low-dimensional manifold in many real-world applications. Thus, in our method, a neural network generates an embedding of the deformation field from a low-dimensional vector. This low-dimensional manifold formulation avoids the intractability associated with the high-dimensional search space that most other methods face during image registration. As a result, while most methods rely on explicit and handcrafted regularization of the deformation fields, our algorithm relies on implicitly regularizing the network parameters. The proposed method generates deformation fields from a latent low-dimensional space by minimizing a dissimilarity metric between a fixed image and a warped moving image. Our method removes the need for a large dataset to optimize the proposed network. The proposed method is quantitatively evaluated using images from the MICCAI ACDC challenge.
The results demonstrate that the proposed method improves performance in comparison with a moving mesh registration algorithm, and that it correlates well with independent manual segmentations by an expert."} {"_id": "8900adf0a63fe9a99c2026a897cfcfd2aaf89476", "title": "A Low-Power Subthreshold to Above-Threshold Voltage Level Shifter", "text": "This brief presents a power-efficient voltage level-shifter architecture that is capable of converting extremely low levels of input voltages to higher levels. In order to avoid static power dissipation, the proposed structure uses a current generator that turns on only during the transition times, when the logic level of the input signal does not correspond to the output logic level. Moreover, the strength of the pull-up device is decreased when the pull-down device is pulling down the output node in order for the circuit to be functional even for input voltages lower than the threshold voltage of a MOSFET. The operation of the proposed structure is also analytically investigated. Post-layout simulation results of the proposed structure in a 0.18-\u03bcm CMOS technology show that at the input low supply voltage of 0.4 V and the high supply voltage of 1.8 V, the level shifter has a propagation delay of 30 ns, a static power dissipation of 130 pW, and an energy per transition of 327 fJ for a 1-MHz input signal."} {"_id": "cace7912089d586e0acebb4a66329f60d3f1cd09", "title": "A robust, input voltage adaptive and low energy consumption level converter for sub-threshold logic", "text": "A new level converter (LC) is proposed for logic voltage shifting from sub-threshold voltage to normal high voltage. By employing 2 PMOS diodes, the LC shows good operation robustness with sub-threshold logic input. The switching delay of the proposed LC can adapt to the input logic voltage, which is more suitable for power aware systems. With a simpler circuit structure, the energy consumption of the LC is smaller than that of the existing sub-threshold LC. Simulation results demonstrate the performance improvement and energy reduction of the proposed LC. A test chip was fabricated using a 0.18 \u03bcm CMOS process. Measurement results show that our proposed LC can operate correctly with an input as low as 127 mV and an output voltage of 1.8 V."} {"_id": "e52c199d4f9f815087fb72702428cd3c7bb9d9ff", "title": "A 180-mV subthreshold FFT processor using a minimum energy design methodology", "text": "In emerging embedded applications such as wireless sensor networks, the key metric is minimizing energy dissipation rather than processor speed. Minimum energy analysis of CMOS circuits estimates the optimal operating point of clock frequencies, supply voltage, and threshold voltage according to A. Chandrakasan et al. (see ibid., vol.27, no.4, p.473-84, Apr. 1992). The minimum energy analysis shows that the optimal power supply typically occurs in subthreshold (e.g., supply voltages that are below device thresholds). New subthreshold logic and memory design methodologies are developed and demonstrated on a fast Fourier transform (FFT) processor. The FFT processor uses an energy-aware architecture that allows for variable FFT length (128-1024 point), variable bit-precision (8 b and 16 b) and is designed to investigate the estimated minimum energy point. The FFT processor is fabricated using a standard 0.18-\u03bcm CMOS logic process and operates down to 180 mV.
The minimum energy point for the 16-b 1024-point FFT processor occurs at 350-mV supply voltage where it dissipates 155 nJ/FFT at a clock frequency of 10 kHz."} {"_id": "e74923f05c05a603356c4e88a5bcc1d743aedeb5", "title": "A Subthreshold to Above-Threshold Level Shifter Comprising a Wilson Current Mirror", "text": "In this brief, we propose a novel level shifter circuit that is capable of converting subthreshold to above-threshold signal levels. In contrast to other existing implementations, it does not require a static current flow and can therefore offer considerable static power savings. The circuit has been optimized and simulated in a 90-nm process technology. It operates correctly across process corners for supply voltages from 100 mV to 1 V on the low-voltage side. At the target design voltage of 200 mV, the level shifter has a propagation delay of 18.4 ns and a static power dissipation of 6.6 nW. For a 1-MHz input signal, the total energy per transition is 93.9 fJ. Simulation results are compared to an existing subthreshold to above-threshold level shifter implementation from the paper of Chen et al."} {"_id": "381231eecd132199821c5aa3ff3f2278f593ea33", "title": "Subthreshold to Above Threshold Level Shifter Design", "text": ""} {"_id": "6077b91080f2ac5822fd899e9c41dd82afbdea27", "title": "Clustering-based Approach for Anomaly Detection in XACML Policies", "text": "The development of distributed applications raises multiple security issues such as access control. Attribute-Based Access Control has been proposed as a generic access control model, which provides more flexibility and promotes information and security sharing. eXtensible Access Control Markup Language (XACML) is the most convenient way to express ABAC policies. However, in distributed environments, XACML policies become more complex and hard to manage. In fact, an XACML policy in distributed applications may be aggregated from multiple parties and can be managed by more than one administrator. Therefore, it may contain several anomalies such as conflicts and redundancies, which may affect the performance of the policy execution. In this paper, we propose an anomaly detection method based on the decomposition of a policy into clusters before searching for anomalies within each cluster. Our evaluation results demonstrate the efficiency of the suggested approach."} {"_id": "b356bd9e1df74a8fc7a144001a88c0e1f89e616d", "title": "Cache'n DASH: Efficient Caching for DASH", "text": "HTTP-based video streaming services have been dominating the global IP traffic over the last few years. Caching of video content reduces the load on the content servers. In the case of Dynamic Adaptive Streaming over HTTP (DASH), for every video the server needs to host multiple representations of the same video file. These individual representations are further broken down into smaller segments. Hence, for each video the server needs to host thousands of segments, out of which the client downloads a subset of the segments. Also, depending on the network conditions, the adaptation scheme used at the client-end might request a different set of video segments (varying in bitrate) for the same video. The caching of DASH videos presents unique challenges. In order to optimize the cache hits and minimize the misses for DASH video streaming services, we propose an Adaptation Aware Cache (AAC) framework to determine the segments that are to be prefetched and retained in the cache.
In the current scheme, we use bandwidth estimates at the cache server and the knowledge of the rate adaptation scheme used by the client to estimate the next segment requests, thus improving the prefetching at the cache."} {"_id": "c7a83708cf93b46e952327822e4fb195b1dafef3", "title": "Automated Assembly Using 3D and 2D Cameras", "text": "2D and 3D computer vision systems are frequently being used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting and viewpoint angles can reduce the accuracy of a method, maybe even to a degree that it will be erroneous, while for 3D vision systems, the accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners, and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to < 1 mm and < 1\u00b0. This was demonstrated in a real industrial assembly task where high accuracy is required."} {"_id": "2d54e36c216de770f849125a62b4a2e74284792a", "title": "A content-based recommendation algorithm for learning resources", "text": "Automatic multimedia learning resources recommendation has become an increasingly relevant problem: it allows students to discover new learning resources that match their tastes, and enables the e-learning system to target the learning resources to the right students. In this paper, we propose a content-based recommendation algorithm based on convolutional neural network (CNN). The CNN can be used to predict the latent factors from the text information of the multimedia resources. To train the CNN, its input and output should first be solved. For its input, the language model is used. For its output, we propose the latent factor model, which is regularized by the L1-norm. Furthermore, the split Bregman iteration method is introduced to solve the model. The major novelty of the proposed recommendation algorithm is that the text information is used directly to make the content-based recommendation without tagging. Experimental results on public databases in terms of quantitative assessment show significant improvements over conventional methods. In addition, the split Bregman iteration method which is introduced to solve the model can greatly improve the training efficiency."} {"_id": "56d36c2535c867613997688f649cf0461171dbe8", "title": "CiteSeer: An Automatic Citation Indexing System", "text": "We present CiteSeer: an autonomous citation indexing system which indexes academic literature in electronic format (e.g. Postscript files on the Web). CiteSeer understands how to parse citations, identify citations to the same paper in different formats, and identify the context of citations in the body of articles. CiteSeer provides most of the advantages of traditional (manually constructed) citation indexes (e.g. the ISI citation indexes), including: literature retrieval by following citation links (e.g.
by providing a list of papers that cite a given paper), the evaluation and ranking of papers, authors, journals, etc. based on the number of citations, and the identification of research trends. CiteSeer has many advantages over traditional citation indexes, including the ability to create more up-to-date databases which are not limited to a preselected set of journals or restricted by journal publication delays, completely autonomous operation with a corresponding reduction in cost, and powerful interactive browsing of the literature using the context of citations. Given a particular paper of interest, CiteSeer can display the context of how the paper is cited in subsequent publications. This context may contain a brief summary of the paper, another author\u2019s response to the paper, or subsequent work which builds upon the original article. CiteSeer allows the location of papers by keyword search or by citation links. Papers related to a given paper can be located using common citation information or word vector similarity. CiteSeer will soon be available for public use."} {"_id": "a8823ab946321079c63b9bd42f58bd17b96a25e4", "title": "Face detection and eyes extraction using sobel edge detection and morphological operations", "text": "Face detection and eyes extraction play an important role in many applications such as face recognition, facial expression analysis, security login, etc. Detecting the human face and facial structures such as the eyes and nose is a complex procedure for a computer. This paper proposes an algorithm for face detection and eyes extraction from frontal face images using Sobel edge detection and morphological operations. The proposed approach is divided into three phases: preprocessing, identification of the face region, and extraction of eyes. Resizing of images and grayscale image conversion is achieved in preprocessing. Face region identification is accomplished by Sobel edge detection and morphological operations. In the last phase, eyes are extracted from the face region with the help of morphological operations. The experiments are conducted on 120, 75, and 40 images of the IMM frontal face database, the FEI face database, and the IMM face database, respectively. The face detection accuracy is 100%, 100%, and 97.50%, and the eyes extraction accuracy rate is 92.50%, 90.66%, and 92.50%, respectively."} {"_id": "327eea4fa452df2e9ff0864601bba4051fd9927c", "title": "The HEXACO-60: a short measure of the major dimensions of personality.", "text": "We describe the HEXACO-60, a short personality inventory that assesses the 6 dimensions of the HEXACO model of personality structure. We selected the 10 items of each of the 6 scales from the longer HEXACO Personality Inventory-Revised (Ashton & Lee, 2008; Lee & Ashton, 2004, 2006), with the aim of representing the broad range of content that defines each dimension. In self-report data from samples of college students and community adults, the scales showed reasonably high levels of internal consistency reliability and rather low interscale correlations. Correlations of the HEXACO-60 scales with measures of the Big Five factors were consistent with theoretical expectations, and convergent correlations between self-reports and observer reports on the HEXACO-60 scales were high, averaging above .50.
We recommend the HEXACO-60 for use in personality assessment contexts in which administration time is limited."} {"_id": "523b8beb41b7890fd4fdd7bd3cba7261137e53b4", "title": "On the benefits of explaining herd immunity in vaccine advocacy", "text": "Most vaccines protect both the vaccinated individual and the community at large by building up herd immunity. Even though reaching disease-specific herd immunity thresholds is crucial for eliminating or eradicating certain diseases1,2, explanation of this concept remains rare in vaccine advocacy3. An awareness of this social benefit makes vaccination not only an individual but also a social decision. Although knowledge of herd immunity can induce prosocial vaccination in order to protect others, it can also invite free-riding, in which individuals profit from the protection provided by a well-vaccinated society without contributing to herd immunity themselves. This cross-cultural experiment assesses whether people will be more or less likely to be vaccinated when they know more about herd immunity. Results show that in cultures that focus on collective benefits, vaccination willingness is generally higher. Communicating the concept of herd immunity improved willingness to vaccinate, especially in cultures lacking this prosocial cultural background. Prosocial nudges can thus help to close these immunity gaps."} {"_id": "f163718a70999d849278dde9c7a417df1afdbc78", "title": "A ZVS Grid-Connected Three-Phase Inverter", "text": "A six-switch three-phase inverter is widely used in a high-power grid-connected system. However, the antiparallel diodes in the topology operate in the hard-switching state under the traditional control method, causing severe switching loss and high electromagnetic interference problems. In order to solve the problem, this paper proposes a topology of the traditional six-switch three-phase inverter but with an additional switch, and gives a new space vector modulation (SVM) scheme. In this way, the inverter can realize zero-voltage switching (ZVS) operation in all switching devices and suppress the reverse recovery current in all antiparallel diodes very well. And all the switches can operate at a fixed frequency with the new SVM scheme and have the same voltage stress as the dc-link voltage. In grid-connected applications, the inverter can achieve ZVS in all the switches under the load with unity power factor or less. The aforementioned theory is verified in a 30-kW inverter prototype."} {"_id": "4f0ea476d0aa1315c940988a593a1d4055695c79", "title": "Frame-by-frame language identification in short utterances using deep neural networks", "text": "This work addresses the use of deep neural networks (DNNs) in automatic language identification (LID) focused on short test utterances. Motivated by their recent success in acoustic modelling for speech recognition, we adapt DNNs to the problem of identifying the language in a given utterance from the short-term acoustic features. We show how DNNs are particularly suitable to perform LID in real-time applications, due to their capacity to emit a language identification posterior at each new frame of the test utterance. We then analyse different aspects of the system, such as the amount of required training data, the number of hidden layers, the relevance of contextual information and the effect of the test utterance duration. Finally, we propose several methods to combine frame-by-frame posteriors.
Experiments are conducted on two different datasets: the public NIST Language Recognition Evaluation 2009 (3 s task) and a much larger corpus (of 5 million utterances) known as Google 5M LID, obtained from different Google Services. Reported results show relative improvements of DNNs versus the i-vector system of 40% in the LRE09 3-second task and 76% in Google 5M LID."} {"_id": "cb79833f0d3b88963cdc84dbb8cb358fd996946e", "title": "CRITICAL SUCCESS FACTORS FOR CUSTOMER RELATIONSHIP MANAGEMENT IMPLEMENTATIONS", "text": "The growing forces of increasing global competition, continuing customer demands, and the significant revolution in Commercial Off The Shelf (COTS) solutions, especially Customer Relationship Management (CRM) applications, have together put pressure upon many organisations to implement CRM solutions and to switch their organisational processes from being product-centric to being customer-centric. A CRM initiative is not only technology; it is a business strategy supported by technology which automates and enhances the processes associated with managing customer relationships. By the end of 2010, it is predicted that companies will be spending almost $11 billion yearly on CRM solutions. However, studies have found that 70% of CRM projects have failed. Understanding the factors that enable success of CRM is vital. There is very little existing research specifically into Critical Success Factors (CSFs) of CRM implementations, and there is no comprehensive view that captures all the aspects for successful CRM implementation and their inter-relationships. Therefore, the aim of this paper is to explore the current literature base of CSFs for CRM implementations and to propose a taxonomy for them. Future research work will continue to investigate these factors in depth by exploring the complex system links between CSFs, using systems thinking techniques such as causal maps to investigate the complex, systemic networks of CSFs in organisations which result in emergent effects which themselves influence the failure or success of a CRM."} {"_id": "88a7ae1ba4eefa5f51661faddd023bd8635da19c", "title": "Log-Euclidean metrics for fast and simple calculus on diffusion tensors.", "text": "Diffusion tensor imaging (DT-MRI or DTI) is an emerging imaging modality whose importance has been growing considerably. However, the processing of this type of data (i.e., symmetric positive-definite matrices), called \"tensors\" here, has proved difficult in recent years. Usual Euclidean operations on matrices suffer from many defects on tensors, which have led to the use of many ad hoc methods. Recently, affine-invariant Riemannian metrics have been proposed as a rigorous and general framework in which these defects are corrected. These metrics have excellent theoretical properties and provide powerful processing tools, but also lead in practice to complex and slow algorithms. To remedy this limitation, a new family of Riemannian metrics called Log-Euclidean is proposed in this article. They also have excellent theoretical properties and yield similar results in practice, but with much simpler and faster computations. This new approach is based on a novel vector space structure for tensors. In this framework, Riemannian computations can be converted into Euclidean ones once tensors have been transformed into their matrix logarithms. Theoretical aspects are presented and the Euclidean, affine-invariant, and Log-Euclidean frameworks are compared experimentally.
The comparison is carried out on interpolation and regularization tasks on synthetic and clinical 3D DTI data."} {"_id": "8fe2f671089c63a0d3f6f729ca8bc63aa3069263", "title": "Mining hidden community in heterogeneous social networks", "text": "Social network analysis has attracted much attention in recent years. Community mining is one of the major directions in social network analysis. Most of the existing methods on community mining assume that there is only one kind of relation in the network, and moreover, the mining results are independent of the users' needs or preferences. However, in reality, there exist multiple, heterogeneous social networks, each representing a particular kind of relationship, and each kind of relationship may play a distinct role in a particular task. Thus mining networks by assuming only one kind of relation may miss a lot of valuable hidden community information and may not be adaptable to the diverse information needs from different users. In this paper, we systematically analyze the problem of mining hidden communities on heterogeneous social networks. Based on the observation that different relations have different importance with respect to a certain query, we propose a new method for learning an optimal linear combination of these relations which can best meet the user's expectation. With the obtained relation, better performance can be achieved for community mining. Our approach to social network analysis and community mining represents a major shift in methodology from the traditional one, a shift from single-network, user-independent analysis to multi-network, user-dependent, and query-based analysis. Experimental results on the Iris data set and the DBLP data set demonstrate the effectiveness of our method."} {"_id": "737568fa4422eae79dcb4dc903565386bdc17e43", "title": "Impact of Halo Doping on the Subthreshold Performance of Deep-Submicrometer CMOS Devices and Circuits for Ultralow Power Analog/Mixed-Signal Applications", "text": "In addition to its attractiveness for ultralow power applications, analog CMOS circuits based on the subthreshold operation of the devices are known to have significantly higher gain as compared to their superthreshold counterpart. The effects of halo [both double-halo (DH) and single-halo or lateral asymmetric channel (LAC)] doping on the subthreshold analog performance of 100-nm CMOS devices are systematically investigated for the first time with extensive process and device simulations. In the subthreshold region, although the halo doping is found to improve the device performance parameters for analog applications (such as gm/Id, output resistance and intrinsic gain) in general, the improvement is significant in the LAC devices. Low angle of tilt of the halo implant is found to give the best improvement in both the LAC and DH devices. Our results show that the CMOS amplifiers made with the halo implanted devices have higher voltage gain over their conventional counterpart, and a more than 100% improvement in the voltage gain is observed when LAC doping is made on both the p- and n-channel devices of the amplifier."} {"_id": "6124e9f8455723e9508fc5d8365b3895b4d15208", "title": "Incremental Spectral Clustering With Application to Monitoring of Evolving Blog Communities", "text": "In recent years, the spectral clustering method has gained attention because of its superior performance compared to other traditional clustering algorithms such as the K-means algorithm.
The existing spectral clustering algorithms are all off-line algorithms, i.e., they cannot incrementally update the clustering result given a small change of the data set. However, the capability of incremental updating is essential to some applications such as real-time monitoring of the evolving communities of websphere or blogsphere. Unlike traditional stream data, these applications require incremental algorithms to handle not only insertion/deletion of data points but also similarity changes between existing items. This paper extends the standard spectral clustering to such evolving data by introducing the incidence vector/matrix to represent two kinds of dynamics in the same framework and by incrementally updating the eigenvalue system. Our incremental algorithm, initialized by a standard spectral clustering, continuously and efficiently updates the eigenvalue system and generates instant cluster labels, as the data set is evolving. The algorithm is applied to a blog data set. Compared with recomputation of the solution by standard spectral clustering, it achieves similar accuracy but with much lower computational cost. Close inspection into the blog content shows that the incremental approach can discover not only the stable blog communities but also the evolution of the individual multi-topic blogs."} {"_id": "dcfae19ad20ee57b3f68891d8b21570ab2601613", "title": "An Empirical Study on Modeling and Prediction of Bitcoin Prices With Bayesian Neural Networks Based on Blockchain Information", "text": "Bitcoin has recently attracted considerable attention in the fields of economics, cryptography, and computer science due to its inherent nature of combining encryption technology and monetary units. This paper reveals the effect of Bayesian neural networks (BNNs) by analyzing the time series of the Bitcoin process. We also select the most relevant features from Blockchain information that is deeply involved in Bitcoin\u2019s supply and demand and use them to train models to improve the predictive performance of the latest Bitcoin pricing process. We conduct an empirical study that compares the Bayesian neural network with other linear and non-linear benchmark models on modeling and predicting the Bitcoin process. Our empirical studies show that BNN performs well in predicting the Bitcoin price time series and explaining the high volatility of the recent Bitcoin price."} {"_id": "94017eca9875a77d7de5daadf5c37023b8bbe6c9", "title": "Low-light image enhancement using variational optimization-based Retinex model", "text": "This paper presents a low-light image enhancement method using the variational-optimization-based Retinex algorithm. The proposed enhancement method first estimates the initial illumination and uses its gamma corrected version to constrain the illumination component. Next, the variational-based minimization is iteratively performed to separate the reflectance and illumination components. The color assignment of the estimated reflectance component is then performed to restore the color component using the input RGB color channels. Experimental results show that the proposed method can provide better enhanced result without saturation, noise amplification or color distortion."} {"_id": "ef583fd79e57ab0b42bf1db466d782ad64aca09e", "title": "Big Data Reduction Methods: A Survey", "text": "Research on big data analytics is entering a new phase, called fast data, where multiple gigabytes of data arrive in the big data systems every second.
Modern big data systems collect inherently complex data streams due to the volume, velocity, value, variety, variability, and veracity in the acquired data, and consequently give rise to the 6Vs of big data. Reduced and relevant data streams are perceived to be more useful than raw, redundant, inconsistent, and noisy data. Another perspective on big data reduction is that million-variable big datasets cause the curse of dimensionality, which requires unbounded computational resources to uncover actionable knowledge patterns. This article presents a review of methods that are used for big data reduction. It also presents a detailed taxonomic discussion of big data reduction methods, including network theory, big data compression, dimension reduction, redundancy elimination, data mining, and machine learning methods. In addition, the open research issues pertinent to big data reduction are also highlighted."} {"_id": "94bde5e6667e56c52b040c1d893205828e9e17af", "title": "Nonsuicidal self-injury as a gateway to suicide in young adults.", "text": "PURPOSE\nTo investigate the extent to which nonsuicidal self-injury (NSSI) contributes to later suicide thoughts and behaviors (STB) independent of shared risk factors.\n\n\nMETHODS\nOne thousand four hundred and sixty-six students at five U.S. colleges participated in a longitudinal study of the relationship between NSSI and suicide. NSSI, suicide history, and common risk/protective factors were assessed annually for three years. Analyses tested the hypotheses that the practice of NSSI prior to STB and suicide behavior (excluding ideation) reduced inhibition to later STB independent of shared risk factors. Analyses also examined factors that predicted subsequent STB among individuals with NSSI history.\n\n\nRESULTS\nHistory of NSSI did significantly predict concurrent or later STB (AOR 2.8, 95% CI 1.9-4.1) independent of covariates common to both. Among those with prior or concurrent NSSI, risk of STB is predicted by > 20 lifetime NSSI incidents (AOR 3.8, 95% CI 1.4-10.3) and history of mental health treatment (AOR 2.2, 95% CI 1.9-4.6). Risk of moving from NSSI to STB is decreased by presence of meaning in life (AOR 0.6, 95% CI 0.5-0.7) and reporting parents as confidants (AOR 0.3, 95% CI 0.1-0.9).\n\n\nCONCLUSIONS\nNSSI prior to suicide behavior serves as a \"gateway\" behavior for suicide and may reduce inhibition through habituation to self-injury. Treatments focusing on enhancing perceived meaning in life and building positive relationships with others, particularly parents, may be particularly effective in reducing suicide risk among youth with a history of NSSI."} {"_id": "d35a8ad8133ebcbf7aa4eafa25f753465b3f9fc0", "title": "An airborne experimental test platform: From theory to flight", "text": "This paper provides an overview of the experimental flight test platform developed by the University of Minnesota Unmanned Aerial Vehicle Research Group. Key components of the current infrastructure are highlighted, including the flight test system, high-fidelity nonlinear simulations, software- and hardware-in-the-loop simulations, and the real-time flight software. Recent flight control research and educational applications are described to showcase the advanced capabilities of the platform.
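The adjusted odds ratios (AORs) with 95% confidence intervals reported in the NSSI study above are conventionally obtained by exponentiating logistic-regression coefficients. A hedged sketch with synthetic data follows; the variable names and effect sizes are invented, not the study's.

```python
# Sketch: adjusted odds ratio and 95% CI from a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1466
nssi = rng.integers(0, 2, n)            # exposure: NSSI history (synthetic)
covar = rng.normal(size=n)              # a shared risk factor (synthetic)
logit_p = -2.0 + 1.0 * nssi + 0.5 * covar
stb = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([nssi, covar]))
fit = sm.Logit(stb, X).fit(disp=0)
aor = np.exp(fit.params[1])             # exp(coefficient) = adjusted OR
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"AOR {aor:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```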
A view towards future expansion of the platform is given in the context of upcoming research projects."} {"_id": "1a3ecd4307946146371852d6571b89b9436e51fa", "title": "Bias and causal associations in observational research", "text": "Readers of medical literature need to consider two types of validity, internal and external. Internal validity means that the study measured what it set out to; external validity is the ability to generalise from the study to the reader's patients. With respect to internal validity, selection bias, information bias, and confounding are present to some degree in all observational research. Selection bias stems from an absence of comparability between groups being studied. Information bias results from incorrect determination of exposure, outcome, or both. The effect of information bias depends on its type. If information is gathered differently for one group than for another, bias results. By contrast, non-differential misclassification tends to obscure real differences. Confounding is a mixing or blurring of effects: a researcher attempts to relate an exposure to an outcome but actually measures the effect of a third factor (the confounding variable). Confounding can be controlled in several ways: restriction, matching, stratification, and more sophisticated multivariate techniques. If a reader cannot explain away study results on the basis of selection, information, or confounding bias, then chance might be another explanation. Chance should be examined last, however, since these biases can account for highly significant, though bogus, results. Differentiation between spurious, indirect, and causal associations can be difficult. Criteria such as temporal sequence, strength and consistency of an association, and evidence of a dose-response effect lend support to a causal link."} {"_id": "2730606a9d29bb52bcc42124393460503f736d74", "title": "Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing", "text": "Efficient application scheduling is critical for achieving high performance in heterogeneous computing environments. The application scheduling problem has been shown to be NP-complete in general cases as well as in several restricted cases. Because of its key importance, this problem has been extensively studied, and various algorithms have been proposed in the literature, mainly for systems with homogeneous processors. Although there are a few algorithms in the literature for heterogeneous processors, they usually incur significantly higher scheduling costs and may not deliver good-quality schedules at lower cost. In this paper, we present two novel scheduling algorithms for a bounded number of heterogeneous processors with the objective of simultaneously achieving high performance and fast scheduling time: the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm. The HEFT algorithm selects the task with the highest upward rank value at each step and assigns the selected task to the processor which minimizes its earliest finish time with an insertion-based approach. On the other hand, the CPOP algorithm uses the summation of upward and downward rank values for prioritizing tasks. Another difference is in the processor selection phase, which schedules the critical tasks onto the processor that minimizes the total execution time of the critical tasks.
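The upward rank that drives HEFT's task prioritization, as described above, is easily computed by a recursive traversal of the task DAG: a task's rank is its average computation cost plus the maximum over its successors of communication cost plus successor rank. The sketch below illustrates only this prioritization step; the insertion-based processor selection is omitted, and the costs are made up.

```python
# Illustrative sketch of HEFT's upward-rank prioritization.
from functools import lru_cache

# DAG: task -> list of (successor, average communication cost)
succ = {"A": [("B", 2), ("C", 4)], "B": [("D", 3)], "C": [("D", 1)], "D": []}
avg_cost = {"A": 5, "B": 6, "C": 4, "D": 3}   # mean cost across processors

@lru_cache(maxsize=None)
def upward_rank(task):
    best = max((c + upward_rank(s) for s, c in succ[task]), default=0)
    return avg_cost[task] + best

# HEFT schedules tasks in decreasing order of upward rank.
order = sorted(succ, key=upward_rank, reverse=True)
print(order)   # -> ['A', 'B', 'C', 'D']
```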
In order to provide a robust and unbiased comparison with the related work, a parametric graph generator was designed to generate weighted directed acyclic graphs with various characteristics. The comparison study, based on both randomly generated graphs and the graphs of some real applications, shows that our scheduling algorithms significantly surpass previous approaches in terms of both quality and cost of schedules, which are presented mainly via the schedule length ratio, speedup, frequency of best results, and average scheduling time metrics."} {"_id": "3b6911dc5d98faeb79d3d3e60bcdc40cfd7c9273", "title": "Aggregate and Verifiably Encrypted Signatures from Bilinear Maps", "text": "An aggregate signature scheme is a digital signature that supports aggregation: Given n signatures on n distinct messages from n distinct users, it is possible to aggregate all these signatures into a single short signature. This single signature (and the n original messages) will convince the verifier that the n users did indeed sign the n original messages (i.e., user i signed message Mi for i = 1, ..., n). In this paper we introduce the concept of an aggregate signature, present security models for such signatures, and give several applications for aggregate signatures. We construct an efficient aggregate signature from a recent short signature scheme based on bilinear maps due to Boneh, Lynn, and Shacham. Aggregate signatures are useful for reducing the size of certificate chains (by aggregating all signatures in the chain) and for reducing message size in secure routing protocols such as SBGP. We also show that aggregate signatures give rise to verifiably encrypted signatures. Such signatures enable the verifier to test that a given ciphertext C is the encryption of a signature on a given message M. Verifiably encrypted signatures are used in contract-signing protocols. Finally, we show that similar ideas can be used to extend the short signature scheme to give simple ring signatures."} {"_id": "446961b27f6c14413ae6cc2f78ad7d7c53ede26c", "title": "Pors: proofs of retrievability for large files", "text": "In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.\n A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes.\n In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work.\n We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval.
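One well-known ingredient of some POR constructions is the sentinel idea: the verifier hides pseudorandom check blocks in the stored file and later challenges the archive for a sample of them. The toy sketch below conveys only that intuition; it is a deliberate simplification, not the paper's actual scheme, and `archive_read` is a hypothetical callback standing in for the remote archive.

```python
# Toy sentinel-style retrievability check (simplified illustration).
import os, hashlib

def make_sentinels(key, n):
    # Derive pseudorandom sentinel positions and values from a secret key.
    return {int.from_bytes(hashlib.sha256(key + bytes([i])).digest()[:4], "big") % 10**6:
            hashlib.sha256(key + b"v" + bytes([i])).digest()
            for i in range(n)}

key = os.urandom(16)
sentinels = make_sentinels(key, 8)   # stored interleaved with real file blocks

def challenge(archive_read, positions):
    """Verifier samples a few sentinel positions; an archive that silently
    discarded part of the file fails with high probability."""
    return all(archive_read(p) == sentinels[p] for p in positions)
```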
The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound."} {"_id": "517f519b8dbc5b00ff8b1f8578b73a871a1a0b73", "title": "The Exact Security of Digital Signatures - How to Sign with RSA and Rabin", "text": "We describe an RSA-based signing scheme which combines essentially optimal efficiency with attractive security properties. Signing takes one RSA decryption plus some hashing, verification takes one RSA encryption plus some hashing, and the size of the signature is the size of the modulus. Assuming the underlying hash functions are ideal, our schemes are not only provably secure, but are so in a tight way: an ability to forge signatures with a certain amount of computational resources implies the ability to invert RSA (on the same size modulus) with about the same computational effort. Furthermore, we provide a second scheme which maintains all of the above features and in addition provides message recovery. These ideas extend to provide schemes for Rabin signatures with analogous properties; in particular their security can be tightly related to the hardness of factoring."} {"_id": "06a1d8fe505a4ee460e24ae3cf2e279e905cc9b0", "title": "A public key cryptosystem and a signature scheme based on discrete logarithms", "text": "A new signature scheme is proposed, together with an implementation of the Diffie-Hellman key distribution scheme that achieves a public key cryptosystem. The security of both systems relies on the difficulty of computing discrete logarithms over finite fields."} {"_id": "1745e5dbdeb4575c6f8376c9e75e70650a7e2e29", "title": "Proofs of Work and Bread Pudding Protocols", "text": "We formalize the notion of a proof of work. In many cryptographic protocols, a prover seeks to convince a verifier that she possesses knowledge of a secret or that a certain mathematical relation holds true. By contrast, in a proof of work, a prover demonstrates that she has performed a certain amount of computational work in a specified interval of time. Proofs of work have served as the basis of a number of security protocols in the literature, but have hitherto lacked careful characterization. We also introduce the dependent idea of a bread pudding protocol. Bread pudding is a dish that originated with the purpose of re-using bread that has gone stale [10]. In the same spirit, we define a bread pudding protocol to be a proof of work such that the computational effort invested in the proof may also be harvested to achieve a separate, useful, and verifiably correct computation. As an example of a bread pudding protocol, we show how the MicroMint scheme of Rivest and Shamir can be broken up into a collection of proofs of work.
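A minimal hash-based proof of work in the spirit of what the paper formalizes (compare hashcash) can be written in a few lines; the MicroMint-based bread pudding construction itself is more involved, so this is only the generic prove/verify pattern.

```python
# Minimal hash-based proof-of-work sketch.
import hashlib, itertools

def prove(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) falls below a
    target; expected work is about 2**difficulty_bits hash evaluations."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = prove(b"example-challenge", 16)   # ~65k hashes on average
assert verify(b"example-challenge", nonce, 16)
```

Verification costs a single hash, which is the asymmetry that makes proofs of work useful in security protocols.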
These proofs of work can not only serve in their own right as mechanisms for security protocols, but can also be harvested in order to shift the burden of the MicroMint minting operation onto a large group of untrusted computational devices."} {"_id": "50c5def463ac0c7282f6093501bd64ec253029b6", "title": "Techniques for Assessing Polygonal Approximations of Curves", "text": "Given the enormous number of available methods for finding polygonal approximations to curves, techniques are required to assess different algorithms. Some of the standard approaches are shown to be unsuitable if the approximations contain varying numbers of lines. Instead, we suggest assessing an algorithm's results relative to an optimal polygon, and describe a measure which combines the relative fidelity and efficiency of a curve segmentation. We use this measure to compare the application of 23 algorithms to a curve first used by Teh and Chin [37]; their ISEs are assessed relative to the optimal ISE. In addition, using an example of pose estimation, it is shown how goal-directed evaluation can be used to select an appropriate assessment criterion."} {"_id": "cbd1a6764624ae74d7e5c59ddb139d76c019ff19", "title": "A Two-Handed Interface for Object Manipulation in Virtual Environments", "text": "A two-handed direct manipulation VE (virtual environment) interface has been developed as an intuitive manipulation metaphor for graphical objects. A new input device called ChordGloves introduces a simple technique for rapid and repeatable gesture recognition; the ChordGloves emulate a pair of 3-D mice and a keyboard. A drafting table is isomorphically mapped into the VE and provides hand support for 2-D interface techniques, as well as a reference frame for calibrating the mapping between real and virtual worlds. A cursor gravity function is used to grab vertices, edges, or faces and establish precisely aligned differential constraints between objects called anchors. The capability of subjects to translate, rotate, scale, align, and glue objects is tested with a puzzle-building task. An approximation of the puzzle task is done in Adobe Illustrator to provide a performance reference. Results and informal user observations, as well as topics for future work, are presented."} {"_id": "9b8fcb9464ff6cc7bd8311b52e5fb62394fb43f4", "title": "Gazture: Design and Implementation of a Gaze based Gesture Control System on Tablets", "text": "We present Gazture, a lightweight gaze-based real-time gesture control system on commercial tablets. Unlike existing approaches that require dedicated hardware (e.g., a high-resolution camera), high computation overhead (a powerful CPU) or specific user behavior (keeping the head steady), Gazture provides gesture recognition based on easy-to-control user gaze input with a small overhead. To achieve this goal, Gazture incorporates a two-layer structure: The first layer focuses on real-time gaze estimation with acceptable tracking accuracy while incurring a small overhead. The second layer implements a robust gesture recognition algorithm while compensating for gaze estimation error. To address user posture change while using the mobile device, we design an online transfer-function-based method to convert current eye features into the corresponding eye features in the reference posture, which then facilitates efficient gaze position estimation. We implement Gazture on a Lenovo Tab3 8 Plus tablet with Android 6.0.1, and evaluate its performance in different scenarios.
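The posture-compensation idea in Gazture, converting eye features from the current posture back to the reference posture, can be pictured as fitting a transfer map between paired feature sets. A linear least-squares map is assumed below purely for illustration; the paper's actual transfer function may take a different form.

```python
# Hypothetical linear transfer between eye-feature spaces.
import numpy as np

def fit_transfer(F_current, F_reference):
    """Least-squares matrix T such that F_current @ T ~= F_reference.
    Rows are paired feature vectors collected for the same gaze targets."""
    T, *_ = np.linalg.lstsq(F_current, F_reference, rcond=None)
    return T

rng = np.random.default_rng(1)
F_ref = rng.normal(size=(50, 6))             # features in reference posture
T_true = rng.normal(size=(6, 6))
F_cur = F_ref @ np.linalg.inv(T_true)        # same targets, new posture
T = fit_transfer(F_cur, F_ref)
print(np.allclose(F_cur @ T, F_ref, atol=1e-6))   # True
```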
The evaluation results show that Gazture can achieve high accuracy in gesture recognition while incurring a low overhead."} {"_id": "7f90a086140f2ca2ff282e3eedcf8c51ee2db674", "title": "Optimizing Factorization Machines for Top-N Context-Aware Recommendations", "text": "Context-aware Collaborative Filtering (CF) techniques such as Factorization Machines (FM) have been proven to yield high precision for rating prediction. However, the goal of recommender systems is often referred to as a top-N item recommendation task, and item ranking is a better formulation for the recommendation problem. In this paper, we present two collaborative rankers, namely, Ranking Factorization Machines (RankingFM) and Lambda Factorization Machines (LambdaFM), which optimize the FM model for the item recommendation task. Specifically, instead of fitting the preference of individual items, we first propose a RankingFM algorithm that applies the cross-entropy loss function to the FM model to estimate the pairwise preference between individual item pairs. Second, by considering the ranking bias in the item recommendation task, we design two effective lambda-motivated learning schemes for RankingFM to optimize desired ranking metrics, referred to as LambdaFM. The two models we propose can work with any type of context, and are capable of estimating latent interactions between the context features under sparsity. Experimental results show their superiority over several state-of-the-art methods on three public CF datasets in terms of two standard ranking metrics."} {"_id": "726c458266d4fd91551bbe3cac02052d8df8f309", "title": "A Review on the Meaning of Cognitive Cities", "text": "Over the last years, the cognitive city paradigm has emerged as a promising solution to the challenges that megacities of the future will have to face. In this article, we provide a thorough review of the literature on cognitive cities. We put in place a clear and strict methodology that allows us to present a carefully curated set of articles that represent the foundations of the current understanding of the concept of the cognitive city. Hence, this article is intended to serve as a reference for future studies in the field. We emphasise the ambiguities and overlapping meanings of the cognitive city term depending on the domain of study and the underlying philosophy. Also, we discuss some of the implications that cognitive cities might have on society pillars such as the healthcare sector, and we point out some of the main challenges for the adoption of the concept. Last but not least, we suggest some possible research lines that are to be pursued in the years to come."} {"_id": "47bb8674715672e0fde3901ba9ccdb5b07a5e4da", "title": "TANDEM-bottleneck feature combination using hierarchical Deep Neural Networks", "text": "To improve speech recognition performance, a combination of TANDEM and bottleneck Deep Neural Networks (DNNs) is investigated. In particular, exploiting a feature combination performed by means of multi-stream hierarchical processing, we show a performance improvement by combining the same input features processed by different neural networks.
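The pairwise cross-entropy loss that RankingFM (described above) applies to score differences is the same device used in RankNet-style learning to rank. The sketch below substitutes a plain linear scorer for the FM model to keep it short; that substitution, and all data, are assumptions for illustration.

```python
# Pairwise logistic (cross-entropy) ranking loss on score differences.
import numpy as np

def pairwise_loss_and_grad(w, x_pos, x_neg):
    """Loss on the difference s_pos - s_neg of a linear scorer x @ w."""
    diff = x_pos - x_neg
    margin = diff @ w
    p = 1.0 / (1.0 + np.exp(-margin))   # P(pos ranked above neg)
    loss = -np.log(p + 1e-12)
    grad = -(1.0 - p) * diff            # d loss / d w
    return loss, grad

rng = np.random.default_rng(0)
w = np.zeros(8)
for _ in range(200):                    # SGD over synthetic preference pairs
    x_pos, x_neg = rng.normal(size=8), rng.normal(size=8) - 0.5
    loss, grad = pairwise_loss_and_grad(w, x_pos, x_neg)
    w -= 0.1 * grad
```

LambdaFM's lambda-motivated schemes additionally reweight these gradients by their effect on the target ranking metric.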
The experiments are based on the spontaneous telephone recordings of the Cantonese IARPA Babel corpus, using both standard MFCC and Gabor features as input."} {"_id": "7fb1ec60d6f6862f16ddc449ffb3ca1db97218f1", "title": "Parking lot guidance software based on MQTT Protocol", "text": "To reduce the amount of CO2 produced by personal cars around the world, parking lot guidance systems are considered a solution for shopping malls and department stores in many countries. However, most current parking lot systems are located within the parking area of each shopping mall and cannot show parking lot information to drivers who are still on the road. Drivers can therefore see the parking lot information only once they arrive at the shopping mall area, which means CO2 is still produced even when a parking lot guidance system is in use. For this reason, we propose parking lot guidance software that shares parking lot information with a large number of clients in real time based on the MQTT protocol. The proposed software can share real-time parking lot information with drivers' mobile devices at any location, including while they are driving on the road. The results show that the proposed software can share parking lot information in real time for at least 1,000 simultaneous sessions, with high average scores for usability, design, and perceived benefit."} {"_id": "ad3480a8d72319699c9a9f22cb77951c38cac7c7", "title": "Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning", "text": "The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at the international level. The continuous development of new sensors, data capture methodologies, and multi-resolution 3D representations contributes significantly to the digital 3D documentation, mapping, conservation, and representation of landscapes and heritages, and to the growth of research in this field. This article reviews the actual optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications. Examples of 3D surveying and modeling of heritage sites and objects are also shown throughout the paper."} {"_id": "5499bdc807ec2d040e0b8b215cb234ccdfe45251", "title": "Design of a haptic arm exoskeleton for training and rehabilitation", "text": "A high-quality haptic interface is typically characterized by low apparent inertia and damping, high structural stiffness, minimal backlash, and absence of mechanical singularities in the workspace. In addition to these specifications, exoskeleton haptic interface design involves consideration of space and weight limitations, workspace requirements, and the kinematic constraints placed on the device by the human arm. These constraints impose conflicting design requirements on the engineer attempting to design an arm exoskeleton. In this paper, the authors present a detailed review of the requirements and constraints that are involved in the design of a high-quality haptic arm exoskeleton. In this context, the design of a five-degree-of-freedom haptic arm exoskeleton for training and rehabilitation in virtual environments is presented. The device is capable of providing kinesthetic feedback to the joints of the lower arm and wrist of the operator, and will be used in future work for robot-assisted rehabilitation and training.
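The publish/subscribe pattern that the parking-lot guidance software above builds on is straightforward with the paho-mqtt client (shown here in its 1.x calling style). The broker address, topic name, and payload schema are placeholders invented for the example, not details from the paper.

```python
# Publish/subscribe sketch of real-time parking-lot updates over MQTT.
import json
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.com", "parking/lot42/status"

def on_message(client, userdata, msg):
    status = json.loads(msg.payload)
    print(f"lot {status['lot']}: {status['free']} spaces free")

# Driver-side subscriber: receives updates while still on the road.
sub = mqtt.Client()
sub.on_message = on_message
sub.connect(BROKER, 1883)
sub.subscribe(TOPIC, qos=1)
sub.loop_start()

# Lot-side publisher: pushes an update whenever occupancy changes.
pub = mqtt.Client()
pub.connect(BROKER, 1883)
pub.publish(TOPIC, json.dumps({"lot": 42, "free": 17}), qos=1)
```

Because the broker fans each message out to all subscribers, one occupancy update reaches every connected driver without the publisher tracking sessions.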
Motivation for such applications is based on findings that show robot-assisted physical therapy aids in the rehabilitation process following neurological injuries. As a training tool, the device provides a means to implement flexible, repeatable, and safe training methodologies."} {"_id": "3efd6b2ab1d96342d48ebda78833420108f25189", "title": "Active learning: theory and applications to automatic speech recognition", "text": "We are interested in the problem of adaptive learning in the context of automatic speech recognition (ASR). In this paper, we propose an active learning algorithm for ASR. Automatic speech recognition systems are trained using human supervision to provide transcriptions of speech utterances. The goal of active learning is to minimize the human supervision for training acoustic and language models and to maximize the performance given the transcribed and untranscribed data. Active learning aims at reducing the number of training examples to be labeled by automatically processing the unlabeled examples, and then selecting the most informative ones with respect to a given cost function for a human to label. In this paper we describe how to estimate the confidence score for each utterance through an on-line algorithm using the lattice output of a speech recognizer. The utterance scores are filtered through the informativeness function and an optimal subset of training samples is selected. The active learning algorithm has been applied to both batch and on-line learning schemes, and we have experimented with different selective sampling algorithms. Our experiments show that by using active learning the amount of labeled data needed for a given word accuracy can be reduced by more than 60% with respect to random sampling."} {"_id": "6d4fa4b9037b64b8383331583430711be321c587", "title": "Seeing Stars of Valence and Arousal in Blog Posts", "text": "Sentiment analysis is a growing field of research, driven by both commercial applications and academic interest. In this paper, we explore multiclass classification of diary-like blog posts for the sentiment dimensions of valence and arousal, where the aim of the task is to predict the level of valence and arousal of a post on an ordinal five-level scale, from very negative/low to very positive/high, respectively. We show how to map discrete affective states into ordinal scales in these two dimensions, based on Russell's circumplex model of affect, and label a previously available corpus with multidimensional, real-valued annotations. Experimental results using regression and one-versus-all approaches of support vector machine classifiers show that although the latter approach provides better exact ordinal class prediction accuracy, regression techniques tend to make smaller scale errors."} {"_id": "8ccf4f3e0da04d4cc8650f038f87ef84b74f8543", "title": "Uniport: A Uniform Programming Support Framework for Mobile Cloud Computing", "text": "Personal mobile devices (PMDs) have become the most used computing devices for many people. With the introduction of mobile cloud computing, we can augment the storage and computing capabilities of PMDs via cloud support. However, there are many challenges in developing mobile cloud applications (MCAs) that incorporate cloud computing efficiently, especially for developers targeting multiple mobile platforms. This paper presents Uniport, a uniform framework for developing MCAs.
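Returning to the active-learning ASR abstract above: once a confidence score is available per utterance, the selection step reduces to picking the least confident items within a labeling budget. The sketch abstracts the lattice-based confidence estimation away as a given score in [0, 1].

```python
# Confidence-based selective sampling: label the least confident utterances.
import heapq

def select_for_labeling(utterances, confidences, budget):
    """Return the `budget` most informative (least confident) utterances."""
    return heapq.nsmallest(budget, zip(confidences, utterances))

pool = ["utt1", "utt2", "utt3", "utt4"]
conf = [0.91, 0.42, 0.77, 0.30]
for c, u in select_for_labeling(pool, conf, budget=2):
    print(f"label {u} (confidence {c:.2f})")   # utt4, then utt2
```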
We introduce a uniform architecture for MCAs based on the Model-View-Controller (MVC) pattern and a set of programming primitives and runtime libraries. Not only can Uniport support the creation of new MCAs, it can also help transform existing mobile applications to MCAs efficiently. We demonstrate the applicability and flexibility of Uniport in a case study to transform three existing mobile applications on iOS, Android and Windows Phone, to their mobile cloud versions respectively. Evaluation results show that, with very few modifications, we can easily transform mobile applications to MCAs that can exploit the cloud support to improve performance by 3-7x and save more than half of their energy consumption."} {"_id": "886dd36ddaaacdc7f31509a924b30b45cc79f21d", "title": "StreamOp: An Innovative Middleware for Supporting Data Management and Query Functionalities over Sensor Network Streams Efficiently", "text": "During the last decade, a lot of progress has happened in the field of autonomous embedded devices (also known as motes) and wireless sensor networks. In this context, we envision next-generation systems to have autonomous database management functionalities on each mote, for supporting easy management of data on flash disks, adequate querying capabilities, and ubiquitous computing with no effort. Following this major vision, in this paper we describe application scenarios, the middleware approach, and data management algorithms of a novel system, called Stream Operation (StreamOp), which effectively and efficiently realizes the depicted challenges. In particular, StreamOp supports heterogeneity (it works on top of different platforms), efficiency (it returns responses quickly), and autonomy (it saves battery power). We show how StreamOp provides these features, along with some experimental results that clearly validate our main assumptions."} {"_id": "bdba977ca5422a97563973be7e8f46a010cacb37", "title": "From Modeling to Implementation of Virtual Sensors in Body Sensor Networks", "text": "Body Sensor Networks (BSNs) represent an emerging technology which has received much attention recently due to its enormous potential to enable remote, real-time, continuous and non-invasive monitoring of people in health-care, entertainment, fitness, sport, and social interaction. Signal processing for BSNs usually comprises multiple levels of data abstraction, from raw sensor data to data calculated from processing steps such as feature extraction and classification. This paper presents a multi-layer task model based on the concept of Virtual Sensors to improve architecture modularity and design reusability. Virtual Sensors are abstractions of components of BSN systems that include sensor sampling and processing tasks and provide data upon external requests.
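The Virtual Sensor abstraction just described, a component that bundles sampling and processing and serves data on request, can be pictured with a small class. The names and the mean-over-window feature below are illustrative only; they are not SPINE2's actual API.

```python
# Hypothetical Virtual Sensor: wraps sampling plus processing, serves data
# upon external requests.
import random
from statistics import mean

class VirtualSensor:
    def __init__(self, sample_fn, window=10):
        self.sample_fn = sample_fn          # wraps the physical sensor
        self.window = window
        self.buffer = []

    def tick(self):
        """Sampling task: acquire one raw reading into a sliding window."""
        self.buffer.append(self.sample_fn())
        self.buffer = self.buffer[-self.window:]

    def read(self):
        """Serve processed data on request (here: a mean feature)."""
        return mean(self.buffer) if self.buffer else None

# Example: a gait-analysis feature built on a stubbed accelerometer axis.
accel_z = VirtualSensor(lambda: random.gauss(9.8, 0.3))
for _ in range(10):
    accel_z.tick()
print(accel_z.read())
```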
The Virtual Sensor model implementation relies on SPINE2, an open-source domain-specific framework that is designed to support distributed sensing operations and signal processing for wireless sensor networks and enables code reusability, efficiency, and application interoperability. The proposed model is applied in the context of gait analysis through wearable sensors. A gait analysis system is developed according to a SPINE2-based Virtual Sensor architecture and experimentally evaluated. The obtained results confirm that great effectiveness can be achieved in designing and implementing BSN applications through the Virtual Sensor approach while maintaining high efficiency and accuracy."} {"_id": "66d1a4614941f1121462cefb2e14a39bf72ddb67", "title": "Change in perceived psychosocial status following a 12-week Tai Chi exercise programme.", "text": "AIM\nThis paper reports a study to examine change in psychosocial status following a 12-week Tai Chi exercise intervention among ethnic Chinese people with cardiovascular disease risk factors living in the United States of America.\n\n\nBACKGROUND\nRegular participation in physical activity is associated with protection against cardiovascular disease, and improvements in physical and psychological health. An increasing amount of scientific evidence suggests that mind-body exercise, such as Tai Chi, is related to improvements in mental health, emotional well-being, and stress reduction. No prior study has examined the effect of a Tai Chi exercise intervention on psychosocial status among people with cardiovascular disease risk factors.\n\n\nMETHODS\nThis was a quasi-experimental study. Participants attended a 60-minute Tai Chi exercise class three times per week for 12 weeks. Data were collected at baseline, 6 and 12 weeks following the intervention. Psychosocial status was assessed using Chinese versions of Cohen's Perceived Stress Scale, Profile of Mood States, Multidimensional Scale of Perceived Social Support, and Tai Chi exercise self-efficacy.\n\n\nRESULTS\nA total of 39 participants, on average 66 years old (+/-8.3), married (85%), Cantonese-speaking (97%) immigrants, participated. The majority were women (69%), with < or =12 years of education (87%). Statistically significant improvements in all measures of psychosocial status were found (P < or = 0.05) following the intervention. Improvement in mood state (eta2 = 0.12) and reduction in perceived stress (eta2 = 0.13) were found. In addition, Tai Chi exercise statistically significantly increased self-efficacy to overcome barriers to Tai Chi (eta2 = 0.19), confidence to perform Tai Chi (eta2 = 0.27), and perceived social support (eta2 = 0.12).\n\n\nCONCLUSIONS\nTai Chi was a culturally appropriate mind-body exercise for these older adults, with statistically significant psychosocial benefits observed over 12 weeks. Further research examining Tai Chi exercise using a randomized clinical trial design with an attention-control group may reduce potential confounding effects, while exploring potential mechanisms underlying the relaxation response associated with mind-body exercise.
In addition, future studies with people with other chronic illnesses in all ethnic groups are recommended to determine if similar benefits can be achieved."} {"_id": "ba494dc8ae531604a01120d16294228ea024ba7c", "title": "Resolver Models for Manufacturing", "text": "This paper develops a new mathematical model, for pancake resolvers, that is dependent on a set of variables controlled by a resolver manufacturer: the winding parameters. This model allows a resolver manufacturer to manipulate certain in-process controllable variables in order to readjust already assembled resolvers that, without any action, would be scrapped from the production line. The developed model follows a two-step strategy where, in a first step, a traditional transformer's model computes the resolver nominal conditions and, in a second step, a linear model computes the corrections on the controllable variables in order to compensate for small deviations in design assumptions caused by the variability of the manufacturing process. An experimental methodology for parameter identification is presented. The linearized model develops a completely new approach to simulate the product characteristics of a pancake resolver from the knowledge of manufacturer-controllable variables (winding parameters). This model has been simulated and experimentally tested in a resolver manufacturing plant. The performed tests prove the efficiency of the developed model, stabilizing the product specifications in a dynamic environment with high variability of the production processes. All experiments were conducted at the resolver manufacturer Tyco Electronics' Évora plant."} {"_id": "60a102ca8091f41b9bc9cb40da598503b24b354a", "title": "On Type-Aware Entity Retrieval", "text": "Today, the practice of returning entities from a knowledge base in response to search queries has become widespread. One of the distinctive characteristics of entities is that they are typed, i.e., assigned to some hierarchically organized type system (type taxonomy). The primary objective of this paper is to gain a better understanding of how entity type information can be utilized in entity retrieval. We perform this investigation in an idealized \"oracle\" setting, assuming that we know the distribution of target types of the relevant entities for a given query. We perform a thorough analysis of three main aspects: (i) the choice of type taxonomy, (ii) the representation of hierarchical type information, and (iii) the combination of type-based and term-based similarity in the retrieval model. Using a standard entity search test collection based on DBpedia, we find that type information proves most useful when using large type taxonomies that provide very specific types. We provide further insights on the extensional coverage of entities and on the utility of target types."} {"_id": "b242312cb14f43485a5987cf51753090d564734a", "title": "Skintillates: Designing and Creating Epidermal Interactions", "text": "Skintillates is a wearable technology that mimics tattoos, the oldest and most commonly used on-skin displays in human culture. We demonstrate that by fabricating electrical traces and thin electronics on temporary tattoo paper, a wide array of displays and sensors can be created. Just like the traditional temporary tattoos often worn by children and adults alike, Skintillates flex naturally with the user's skin. Our simple fabrication technique also enables users to freely design and print with a full range of colors to create application-specific customized designs.
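One simple way to realize the "combination of type-based and term-based similarity" studied in the type-aware entity retrieval paper above is a linear interpolation of the two scores; the interpolation itself and the DBpedia type names below are assumptions for illustration, since the paper analyzes several combination choices.

```python
# Hypothetical type-aware scoring: interpolate a term-based retrieval score
# with the overlap between the query's target-type distribution (the "oracle"
# setting) and an entity's type labels.
def type_aware_score(term_score, target_types, entity_types, lam=0.5):
    """target_types: {type: probability}; entity_types: set of type labels."""
    type_score = sum(p for t, p in target_types.items() if t in entity_types)
    return (1 - lam) * term_score + lam * type_score

target = {"dbo:Scientist": 0.7, "dbo:Person": 0.3}   # oracle target types
print(type_aware_score(0.42, target, {"dbo:Person", "dbo:Scientist"}))  # 0.71
```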
We demonstrate the technical capabilities of Skintillates as sensors and as expressive personal and private displays through a series of application examples. Finally, we detail the results of a set of user studies that highlight the user experience, comfort, durability, acceptability, and application potential for Skintillates."} {"_id": "b2cdbf3fed73f51e0fb12d4e24376b3e33e66d11", "title": "An Efficient and Trustworthy Resource Sharing Platform for Collaborative Cloud Computing", "text": "Advancements in cloud computing are leading to a promising future for collaborative cloud computing (CCC), where globally-scattered distributed cloud resources belonging to different organizations or individuals (i.e., entities) are collectively used in a cooperative manner to provide services. Due to the autonomous features of entities in CCC, the issues of resource management and reputation management must be jointly addressed in order to ensure the successful deployment of CCC. However, these two issues have typically been addressed separately in previous research efforts, and simply combining the two systems generates double overhead. Also, previous resource and reputation management methods are not sufficiently efficient or effective. By providing a single reputation value for each node, the methods cannot reflect the reputation of a node in providing individual types of resources. By always selecting the highest-reputed nodes, the methods fail to exploit node reputation in resource selection to fully and fairly utilize resources in the system and to meet users' diverse QoS demands. We propose a CCC platform, called Harmony, which integrates resource management and reputation management in a harmonious manner. Harmony incorporates three key innovations: integrated multi-faceted resource/reputation management, multi-QoS-oriented resource selection, and price-assisted resource/reputation control. The trace data we collected from an online trading platform imply the importance of multi-faceted reputation and the drawbacks of highest-reputed node selection. Simulations and trace-driven experiments on the real-world PlanetLab testbed show that Harmony outperforms existing resource management and reputation management systems in terms of QoS, efficiency and effectiveness."} {"_id": "359748b336d43950537e1c2bf15cd7c3d838a3e2", "title": "Recurrent Neural Word Segmentation with Tag Inference", "text": "In this paper, we present a Long Short-Term Memory (LSTM) based model for the task of Chinese Weibo word segmentation. The model adopts an LSTM layer to capture long-range dependencies in a sentence and learn the underlying patterns. In order to infer the optimal tag path, we introduce a transition score matrix for jumping between tags of successive characters. Integrated with some unsupervised features, the performance of the model is further improved. Finally, our model achieves a weighted F1-score of 0.8044 on the closed track and 0.8298 on the semi-open track."} {"_id": "9931c6b050e723f5b2a189dd38c81322ac0511de", "title": "From Pose to Activity: Surveying Datasets and Introducing CONVERSE", "text": "We present a review of the current state of publicly available datasets within the human action recognition community, highlighting the revival of pose-based methods and recent progress in understanding person-person interaction modeling.
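The "optimal tag path" inference in the word-segmentation abstract above, combining per-character tag scores with a tag-transition matrix, is typically done with Viterbi decoding. The sketch below uses the standard BMES tag set for segmentation; the scores are random stand-ins for LSTM emissions.

```python
# Viterbi decoding over per-character tag scores plus a transition matrix.
import numpy as np

TAGS = ["B", "M", "E", "S"]  # Begin / Middle / End of word, Single-char word

def viterbi(emissions, transitions):
    """emissions: (T, K) per-character tag scores; transitions: (K, K)."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)     # best predecessor for each tag
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):          # backtrack the best path
        path.append(int(back[t][path[-1]]))
    return [TAGS[i] for i in reversed(path)]

rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(5, 4)), rng.normal(size=(4, 4))))
```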
We categorize datasets regarding several key properties for usage as a benchmark dataset, including the number of class labels, the ground truths provided, and the application domain they occupy. We also consider the level of abstraction of each dataset, grouping those that present actions, interactions, and higher-level semantic activities. The survey identifies key appearance-based and pose-based datasets, noting a tendency for simplistic, emphasized, or scripted action classes that are often readily definable by a stable collection of sub-action gestures. There is a clear lack of datasets that provide closely related actions: those that are not implicitly identified via a series of poses and gestures, but rather a dynamic set of interactions. We therefore propose a novel dataset that represents complex conversational interactions between two individuals via 3D pose. Eight pairwise interactions describing seven separate conversation-based scenarios were collected using two Kinect depth sensors. The intention is to provide events that are constructed from numerous primitive actions, interactions, and motions over a period of time, providing a set of subtle action classes that are more representative of the real world and a challenge to currently developed recognition methodologies. We believe this is among the first datasets devoted to conversational interaction classification using 3D pose features, and the attributed papers show this task is indeed possible. The full dataset is made publicly available to the research community at [1]."} {"_id": "8fc874191aec7d356c4d6661360935a49c37002b", "title": "Security in Container-Based Virtualization through vTPM", "text": "Cloud computing is a widespread technology that enables enterprises to provide services to their customers with a lower cost, higher performance, better availability and scalability. However, privacy and security in cloud computing have always been a major challenge to service providers and a concern to users. Trusted computing has led the way in securing cloud computing and virtualized environments during the past decades.\n In this paper, first we study virtualized trusted platform modules and the integration of vTPM in hypervisor-based virtualization. Then we propose two architectural solutions for integrating the vTPM into the container-based virtualization model."} {"_id": "9e057a396d8de9c2507884ce69a10a1cd69f4add", "title": "Facebook Use Predicts Declines in Subjective Well-Being in Young Adults", "text": "Over 500 million people interact daily with Facebook. Yet, whether Facebook use influences subjective well-being over time is unknown. We addressed this issue using experience-sampling, the most reliable method for measuring in-vivo behavior and psychological experience. We text-messaged people five times per day for two weeks to examine how Facebook use influences the two components of subjective well-being: how people feel moment-to-moment and how satisfied they are with their lives. Our results indicate that Facebook use predicts negative shifts on both of these variables over time. The more people used Facebook at one time point, the worse they felt the next time we text-messaged them; the more they used Facebook over two weeks, the more their life satisfaction levels declined over time. Interacting with other people \"directly\" did not predict these negative outcomes.
They were also not moderated by the size of people's Facebook networks, their perceived supportiveness, motivation for using Facebook, gender, loneliness, self-esteem, or depression. On the surface, Facebook provides an invaluable resource for fulfilling the basic human need for social connection. Rather than enhancing well-being, however, these findings suggest that Facebook may undermine it."} {"_id": "26e6b1675e081a514f4fdc0352d6cb211ba6d9c8", "title": "Relay Attacks on Passive Keyless Entry and Start Systems in Modern Cars", "text": "We demonstrate relay attacks on Passive Keyless Entry and Start (PKES) systems used in modern cars. We build two efficient and inexpensive attack realizations, wired and wireless physical-layer relays, that allow the attacker to enter and start a car by relaying messages between the car and the smart key. Our relays are completely independent of the modulation, protocol, or presence of strong authentication and encryption. We perform an extensive evaluation on 10 car models from 8 manufacturers. Our results show that relaying the signal in one direction only (from the car to the key) is sufficient to perform the attack while the true distance between the key and car remains large (tested up to 50 meters, non-line-of-sight). We also show that, with our setup, the smart key can be excited from up to 8 meters away. This removes the need for the attacker to get close to the key in order to establish the relay. We further analyze and discuss critical system characteristics. Given the generality of the relay attack and the number of evaluated systems, it is likely that all PKES systems based on similar designs are also vulnerable to the same attack. Finally, we propose immediate mitigation measures that minimize the risk of relay attacks as well as recent solutions that may prevent relay attacks while preserving the convenience of use, for which PKES systems were initially introduced."} {"_id": "e004b2420d872a33aa3bae8e570d33e5e66e2cac", "title": "Two-factor authentication: too little, too late", "text": "Two-factor authentication isn't our savior. It won't defend against phishing. It's not going to prevent identity theft. It's not going to secure online accounts from fraudulent transactions. It solves the security problems we had 10 years ago, not the security problems we have today. The problem with passwords is that it is too easy to lose control of them. People give their passwords to other people. People write them down, and other people read them. People send them in email, and that email is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. Passwords are also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can never be sure who is typing in that password. Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it's difficult for someone else to intercept. You can't write down the ever-changing part. An intercepted password won't be usable the next time it's needed. And a two-factor password is more difficult to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof. These tokens have been around for at least two decades, but it's only recently that they have received mass-market attention. AOL is rolling them out.
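The "number that changes every minute" described in the two-factor essay above is, in modern deployments, usually a time-based one-time password (TOTP, RFC 6238): an HMAC over a time-step counter, truncated to a few digits. A minimal sketch, using the common 30-second step and 6-digit output:

```python
# Time-based one-time password (TOTP) sketch per RFC 6238.
import hmac, hashlib, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp(b"shared-secret"))   # token and server derive the same code
```

Note that this mechanism does nothing against the man-in-the-middle and Trojan attacks the essay goes on to describe: the attacker simply relays the still-valid code.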
Some banks are issuing them to customers, and even more are talking about doing it. It seems that corporations are finally recognizing the fact that passwords don't provide adequate security, and are hoping that two-factor authentication will fix their problems. Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses. Two new active attacks we're starting to see include: Man-in-the-Middle Attack. An attacker puts up a fake bank Web site and entices a user to that Web site. The user types in his password, and the attacker in turn uses it to access the bank's real Web site. Done correctly, the user will never realize that he isn't at the bank's Web site. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user's banking transactions while making his own transactions at the same …"} {"_id": "67f5f4cd77c63f6c196f5ced2047ae80145d49cb", "title": "The Secure Remote Password Protocol", "text": "This paper presents a new password authentication and key-exchange protocol suitable for authenticating users and exchanging keys over an untrusted network. The new protocol resists dictionary attacks mounted by either passive or active network intruders, allowing, in principle, even weak passphrases to be used safely. It also offers perfect forward secrecy, which protects past sessions and passwords against future compromises. Finally, user passwords are stored in a form that is not plaintext-equivalent to the password itself, so an attacker who captures the password database cannot use it directly to compromise security and gain immediate access to the host. This new protocol combines techniques of zero-knowledge proofs with asymmetric key exchange protocols and offers significantly improved performance over comparably strong extended methods that resist stolen-verifier attacks such as Augmented EKE or B-SPEKE."} {"_id": "e9ffbd29836571e25e0b7f5f0469b0a7e24dbf05", "title": "Security and Privacy Challenges in the Smart Grid", "text": "Global electrical grids are verging on the largest technological transformation since the introduction of electricity into the home. The antiquated infrastructure that delivers power to our homes and businesses is being replaced with a collection of digital systems called the smart grid. This grid is the modernization of the existing electrical system that enhances customers' and utilities' ability to monitor, control, and predict energy use."} {"_id": "9d767943a911949158463a4c217d36372ffce9db", "title": "Real-time techniques for 3D flow visualization", "text": "Visualization of three-dimensional steady flow has to overcome a lot of problems to be effective. Among them are occlusion of distant details and a lack of directional and depth hints. We present methods which address these problems for real-time graphic representations applicable in virtual environments. We use dashtubes, i.e., animated, opacity-mapped streamlines, as a visualization icon for 3D flow visualization. We present a texture mapping technique to keep the level of texture detail along a streamline nearly constant even when the velocity of the flow varies considerably. An algorithm is described which distributes the dashtubes evenly in space.
We apply magic lenses and magic boxes as interaction techniques for investigating densely filled areas without overwhelming the observer with visual detail. Implementation details of these methods and their integration in our virtual environment conclude the paper."} {"_id": "9cd3ba0489f2e7bd95216236d1331208f15923c8", "title": "SQL Database Primitives for Decision Tree Classifiers", "text": "Scalable data mining in large databases is one of today's challenges to database technologies. Thus, substantial effort is dedicated to a tight coupling of database and data mining systems, leading to database primitives supporting data mining tasks. In order to support a wide range of tasks and to be of general usage, these primitives should be building blocks rather than implementations of specific algorithms. In this paper, we describe primitives for building and applying decision tree classifiers. Based on the analysis of available algorithms and previous work in this area, we have identified operations which are useful for a number of classification algorithms. We discuss the implementation of these primitives on top of a commercial DBMS and present experimental results demonstrating the performance benefit."} {"_id": "88486f10d3bb059725e57391f3e10e0eb851463f", "title": "Human Body as Antenna and Its Effect on Human Body Communications", "text": "Human body communication (HBC) is a promising wireless technology that uses the human body as part of the communication channel. HBC operates in the near-field of the high frequency (HF) band and in the lower frequencies of the very high frequency (VHF) band, where the electromagnetic field has the tendency to be confined inside the human body. Electromagnetic interference poses a serious reliability issue in HBC; consequently, it has been given increasing attention with regard to adopting techniques to curtail its degrading effect. Nevertheless, there is a gap in knowledge on the mechanism of HBC interference that is prompted when the human body is exposed to electromagnetic fields, as well as the effect of the human body as an antenna on HBC. This paper narrows the gap by introducing the mechanisms of HBC interference caused by electromagnetic field exposure of the human body. We derive analytic expressions for the induced total axial current in the body and the associated fields in the vicinity of the body when an imperfectly conducting cylindrical antenna model of the human body is illuminated by a vertically polarized plane wave within the 1–200 MHz frequency range. Also, fields in the vicinity of the human body model from an on-body HBC transmitter are calculated. Furthermore, conducted electromagnetic interference on externally embedded HBC receivers is also addressed. The results show that the maximum HBC gain near 50 MHz is due to whole-body resonance, and the maximum at 80 MHz is due to the resonance of the arm. Similarly, the results also suggest that the magnitude of the induced axial current in the body due to electromagnetic field exposure of the human body is higher near 50 MHz."}
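A typical building-block primitive for the decision-tree work described above is the class-distribution query: one GROUP BY scan yields the (attribute value, class) frequencies from which split criteria such as gini or entropy are computed client-side. The table and column names below are illustrative, not the paper's.

```python
# Class-distribution primitive for decision-tree split evaluation.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE train (age TEXT, cls TEXT)")
con.executemany("INSERT INTO train VALUES (?, ?)",
                [("young", "yes"), ("young", "no"),
                 ("old", "yes"), ("old", "yes")])

# One scan yields all (value, class) frequencies for candidate split 'age'.
rows = con.execute(
    "SELECT age, cls, COUNT(*) FROM train GROUP BY age, cls").fetchall()
print(rows)   # e.g. [('old', 'yes', 2), ('young', 'no', 1), ('young', 'yes', 1)]
```

Pushing the counting into the DBMS keeps the data where it lives and returns only the small contingency table the split criterion needs.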
Such information has been difficult to obtain because of the small size and inaccessibility of nerve terminals in the central nervous system. Here we show, by making direct patch-clamp recordings in vivo from cerebellar mossy fibre boutons, the primary source of synaptic input to the cerebellar cortex, that sensory stimulation can produce bursts of spikes in single boutons at very high instantaneous firing frequencies (more than 700 Hz). We show that the mossy fibre–granule cell synapse exhibits high-fidelity transmission at these frequencies, indicating that the rapid burst of excitatory postsynaptic currents underlying the sensory-evoked response of granule cells can be driven by such a presynaptic spike burst. We also demonstrate that a single mossy fibre can trigger action potential bursts in granule cells in vitro when driven with in vivo firing patterns. These findings suggest that the relay from mossy fibre to granule cell can act in a 'detonator' fashion, such that a single presynaptic afferent may be sufficient to transmit the sensory message. This endows the cerebellar mossy fibre system with remarkable sensitivity and high fidelity in the transmission of sensory information."} {"_id": "e739f103622722c55ecb40151ffc93b88d27b09a", "title": "Spectrum Sensing in Cognitive Radio Networks", "text": "The rising number and capacity requirements of wireless systems bring increasing demand for RF spectrum. The cognitive radio (CR) system is an emerging concept to increase spectrum efficiency. A CR system aims to enable opportunistic usage of RF bands that are not occupied by their primary licensed users in the spectrum overlay approach. In this approach, the major challenge in realizing the full potential of CR systems is to identify the spectrum opportunities in the wide band regime reliably and optimally. In the spectrum underlay approach, CR systems enable dynamic spectrum access by co-existing and transmitting simultaneously with licensed primary users without creating harmful interference to them. In this case, the challenge is to transmit with low power so as not to exceed the tolerable interference level to the primary users. Spectrum sensing and estimation is an integral part of the CR system, used to identify spectrum opportunities in the spectrum overlay approach and to identify the interference power to primary users in the spectrum underlay approach. In this chapter, the authors present a comprehensive study of signal detection techniques for spectrum sensing proposed for CR systems. Specifically, they outline the state-of-the-art research results, challenges, and future perspectives of spectrum sensing in CR systems, and also present a comparison of different methods. With this chapter, readers can gain a comprehensive insight into the signal processing methods of spectrum sensing for cognitive radio networks and the ongoing research and development in this area."} {"_id": "69d685d0cf85dfe70d87c1548b03961366e83663", "title": "Noncontact Monitoring of Blood Oxygen Saturation Using Camera and Dual-Wavelength Imaging System", "text": "We present a noncontact method to monitor blood oxygen saturation (SpO2). The method uses a CMOS camera with a trigger control to allow recording of photoplethysmography (PPG) signals alternately at two particular wavelengths, and determines the SpO2 from the measured ratios of the pulsatile to the nonpulsatile components of the PPG signals at these wavelengths.
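The SpO2 computation just described is commonly called the ratio-of-ratios method: the AC/DC ratio of the PPG trace at each wavelength is compared, then mapped to SpO2 through an empirically calibrated linear fit. The sketch below uses placeholder calibration constants and synthetic traces; real systems calibrate these against a reference oximeter.

```python
# Ratio-of-ratios SpO2 sketch for dual-wavelength PPG traces.
import numpy as np

def spo2_from_ppg(ppg_orange, ppg_nir, a=110.0, b=25.0):
    """ppg_*: 1-D intensity traces at 611 nm and 880 nm; a, b are
    placeholder calibration constants."""
    def ac_dc(x):
        x = np.asarray(x, dtype=float)
        return (x.max() - x.min()) / x.mean()   # pulsatile / non-pulsatile
    rr = ac_dc(ppg_orange) / ac_dc(ppg_nir)     # ratio of ratios
    return a - b * rr                           # empirical linear calibration

t = np.linspace(0, 10, 500)
orange = 100 + 2.0 * np.sin(2 * np.pi * 1.2 * t)   # synthetic PPG, ~72 bpm
nir = 120 + 3.0 * np.sin(2 * np.pi * 1.2 * t)
print(f"SpO2 ~ {spo2_from_ppg(orange, nir):.1f}%")
```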
The signal-to-noise ratio (SNR) of the SpO2 value depends on the choice of the wavelengths. We found that the combination of orange (\u03bb = 611 nm) and near infrared (\u03bb = 880 nm) provides the best SNR for the noncontact video-based detection method. This combination is different from that used in traditional contact-based SpO2 measurement since the PPG signal strengths and camera quantum efficiencies at these wavelengths are more amenable to SpO2 measurement using a noncontact method. We also conducted a small pilot study to validate the noncontact method over an SpO2 range of 83%-98%. The results of this study are consistent with those measured using a reference contact SpO2 device (r = 0.936, p < 0.001). The presented method is particularly suitable for tracking one's health and wellness at home under free-living conditions, and for those who cannot use traditional contact-based PPG devices."} {"_id": "89de521e6a64430a31844dd7a9f2f3f8794a0c1d", "title": "Autonomous land vehicle project at CMU", "text": "1. Introduction This paper provides an overview of the Autonomous Land Vehicle (ALV) Project at CMU. The goal of the CMU ALV Project is to build vision and intelligence for a mobile robot capable of operating in the real world outdoors. We are attacking this on a number of fronts: building appropriate research vehicles, exploiting high-speed experimental computers, and building software for reasoning about the perceived world. Research topics include: \u2022 Construction of research vehicles \u2022 Perception systems to perceive the natural outdoor scenes by means of multiple sensors including cameras (color, stereo, and motion), sonar sensors, and a 3D range finder \u2022 Path planning for obstacle avoidance \u2022 Use of topological and terrain maps \u2022 System architecture to facilitate the system integration \u2022 Utilization of parallel computer architectures Our current research vehicle is the Terregator, built at CMU, which is equipped with a sonar ring, a color camera, and the ERIM laser range finder. Its initial task is to follow roads and sidewalks in the park and on the campus, and avoid obstacles such as trees, humans, and traffic cones. 2. Vehicle, Sensors, and Host Computers The primary vehicle of the CMU ALV Project has been the Terregator, designed and built at CMU. The Terregator, short for Terrestrial Navigator, is designed to provide a clean separation between the vehicle itself and its sensor payload. As shown in"} {"_id": "3bf22713709f58c8a64dd56a69257ceae8532013", "title": "Robust real-time lane and road detection in critical shadow conditions", "text": "This paper presents the vision-based road detection system currently installed onto the MOB-LAB land vehicle. Based on a geometrical transform and on a fast morphological processing, the system is capable of detecting road markings even in extremely severe shadow conditions on flat and structured roads. The use of a special-purpose massively parallel architecture (PAPRICA) allows it to achieve a processing rate of about 17 Hz."} {"_id": "51c88134a668cdfaccda2fe5f88919ac122bceda", "title": "E-LAMP: integration of innovative ideas for multimedia event detection", "text": "Detecting multimedia events in web videos is an emerging hot research area in the fields of multimedia and computer vision. In this paper, we introduce the core methods and technologies of the framework we developed recently for our Event Labeling through Analytic Media Processing (E-LAMP) system to deal with different aspects of the overall problem of event detection.
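The camera-based SpO2 estimate above relies on the ratio of pulsatile (AC) to non-pulsatile (DC) PPG components at the two wavelengths; a sketch, with an assumed linear calibration whose coefficients would have to be fitted against a reference oximeter:

```python
import numpy as np

def spo2_ratio_of_ratios(ppg_611, ppg_880):
    """Estimate SpO2 from two PPG channels via R = (AC/DC)_611 / (AC/DC)_880,
    followed by a linear calibration SpO2 = a - b*R. The coefficients 110/25
    below are illustrative placeholders, not values from the paper."""
    def ac_dc(x):
        dc = np.mean(x)
        ac = np.std(x - dc)   # pulsatile-amplitude proxy
        return ac, dc
    ac1, dc1 = ac_dc(np.asarray(ppg_611))
    ac2, dc2 = ac_dc(np.asarray(ppg_880))
    R = (ac1 / dc1) / (ac2 / dc2)
    return 110.0 - 25.0 * R
```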
More specifically, first, we have developed efficient methods for feature extraction so that we are able to handle large collections of video data with thousands of hours of videos. Second, we represent the extracted raw features in a spatial bag-of-words model with more effective tilings such that the spatial layout information of different features and different events can be better captured, and thus the overall detection performance can be improved. Third, different from widely used early and late fusion schemes, a novel algorithm is developed to learn a more robust and discriminative intermediate feature representation from multiple features so that better event models can be built upon it. Finally, to tackle the additional challenge of event detection with only very few positive exemplars, we have developed a novel algorithm which is able to effectively adapt the knowledge learnt from auxiliary sources to assist the event detection. Both our empirical results and the official evaluation results on TRECVID MED\u201911 and MED\u201912 demonstrate the excellent performance of the integration of these ideas."} {"_id": "a6eeab584f3554f3f1c9a5d4e1c062eaf588bcba", "title": "Lnc2Cancer: a manually curated database of experimentally supported lncRNAs associated with various human cancers", "text": "Lnc2Cancer (http://www.bio-bigdata.net/lnc2cancer) is a manually curated database of cancer-associated long non-coding RNAs (lncRNAs) with experimental support that aims to provide a high-quality and integrated resource for exploring lncRNA deregulation in various human cancers. LncRNAs represent a large category of functional RNA molecules that play a significant role in human cancers. A curated collection and summary of deregulated lncRNAs in cancer is essential to thoroughly understand the mechanisms and functions of lncRNAs. Here, we developed the Lnc2Cancer database, which contains 1057 manually curated associations between 531 lncRNAs and 86 human cancers. Each association includes lncRNA and cancer name, the lncRNA expression pattern, experimental techniques, a brief functional description, the original reference and additional annotation information. Lnc2Cancer provides a user-friendly interface to conveniently browse, retrieve and download data. Lnc2Cancer also offers a submission page for researchers to submit newly validated lncRNA-cancer associations. With the rapidly increasing interest in lncRNAs, Lnc2Cancer will significantly improve our understanding of lncRNA deregulation in cancer and has the potential to be a timely and valuable resource."} {"_id": "a7a43698ce882e74eee010e3927f6215f4ce8f0b", "title": "On the Latency and Energy Efficiency of Distributed Storage Systems", "text": "The increase in data storage and power consumption at data-centers has made it imperative to design energy efficient distributed storage systems (DSS). The energy efficiency of DSS is strongly influenced not only by the volume of data, frequency of data access and redundancy in data storage, but also by the heterogeneity exhibited by the DSS in these dimensions. To this end, we propose and analyze the energy efficiency of a heterogeneous distributed storage system in which $n$ storage servers (disks) store the data of $R$ distinct classes. Data of class $i$ is encoded using a $(n,k_{i})$ erasure code and the (random) data retrieval requests can also vary across classes.
We show that the energy efficiency of such systems is closely related to the average latency, and this motivates us to study the energy efficiency through the lens of average latency. Through this connection, we show that erasure coding serves the dual purpose of reducing latency and increasing energy efficiency. We present a queuing theoretic analysis of the proposed model and establish upper and lower bounds on the average latency for each data class under various scheduling policies. Through extensive simulations, we present qualitative insights which reveal the impact of coding rate, number of servers, service distribution and number of redundant requests on the average latency and energy efficiency of the DSS."} {"_id": "556803fa8049de309f421a6b6ef27f0cf1cf8c58", "title": "A 1.5 nW, 32.768 kHz XTAL Oscillator Operational From a 0.3 V Supply", "text": "This paper presents an ultra-low power crystal (XTAL) oscillator circuit for generating a 32.768 kHz clock source for real-time clock generation. An inverting amplifier operational from 0.3 V VDD oscillates the XTAL resonator and achieves a power consumption of 2.1 nW. A duty-cycling technique powers down the XTAL amplifier without losing the oscillation and reduces the power consumption to 1.5 nW. The proposed circuit is implemented in 130 nm CMOS with an area of 0.0625 mm2 and achieves a temperature stability of 1.85 ppm/\u00b0C."} {"_id": "d96d3ba21785887805b9441e0cc167b0f22ca28c", "title": "A multivariate time series clustering approach for crime trends prediction", "text": "In the recent past, there has been increased interest in time series clustering research, particularly for finding useful similar trends in multivariate time series in various applied areas such as environmental research, finance, and crime. Clustering multivariate time series has potential for analyzing large volumes of crime data at different time points, as law enforcement agencies are interested in finding crime trends of various police administration units such as states, districts and police stations so that future occurrences of similar incidents can be overcome. Most of the traditional time series clustering algorithms deal with only univariate time series data, and for clustering high dimensional data, it has to be transformed into a single dimension using a dimension reduction technique. The conventional time series clustering techniques do not provide desired results for crime data sets, since crime data is high dimensional and consists of various crime types with different weightage. In this paper, a novel approach based on dynamic time warping and a parametric Minkowski model has been proposed to find similar crime trends among various crime sequences of different crime locations and subsequently use this information for future crime trends prediction. Analysis of Indian crime records shows that the proposed technique generally outperforms the existing techniques in clustering of such multivariate time series data."} {"_id": "cb6a74d15e51fba7835edf4a95ec0ec37f7740b0", "title": "Mechanisms linking obesity to insulin resistance and type 2 diabetes", "text": "Obesity is associated with an increased risk of developing insulin resistance and type 2 diabetes. In obese individuals, adipose tissue releases increased amounts of non-esterified fatty acids, glycerol, hormones, pro-inflammatory cytokines and other factors that are involved in the development of insulin resistance.
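The crime-trends approach builds on dynamic time warping; a minimal DTW distance for (possibly multivariate) sequences, with a Euclidean local cost standing in for the parametric Minkowski model of the abstract:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two sequences of shape
    (length, dims), using a Euclidean local cost and the standard
    O(len(a)*len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.array([[0.0, 1], [1, 2], [2, 3]])
y = np.array([[0.0, 1], [1, 2], [1, 2], [2, 3]])
print(dtw(x, y))  # small: y is x with one repeated step
```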
When insulin resistance is accompanied by dysfunction of pancreatic islet \u03b2-cells \u2014 the cells that release insulin \u2014 failure to control blood glucose levels results. Abnormalities in \u03b2-cell function are therefore critical in defining the risk and development of type 2 diabetes. This knowledge is fostering exploration of the molecular and genetic basis of the disease and new approaches to its treatment and prevention."} {"_id": "97cb3258a85a447a61e3812846f7a6e72ff3c1e1", "title": "Company event popularity for financial markets using Twitter and sentiment analysis", "text": "The growing number of Twitter users makes it a valuable source of information to study what is happening right now. Users often use Twitter to report real-life events. Here we are only interested in following the financial community. This paper focuses on detecting event popularity through sentiment analysis of tweets published by the financial community on the Twitter universe. The detection of event popularity on Twitter is a non-trivial task due to the noisy content that tweets often carry. This work aims to filter out all the noisy tweets in order to analyze only the tweets that influence the financial market, more specifically the thirty companies that compose the Dow Jones Average. To perform these tasks, this paper proposes a methodology that starts from the financial community of Twitter and then filters the collected tweets, performs sentiment analysis of the tweets and finally detects the important events in the life of companies."} {"_id": "5cf1659a7cdeb988ff2e5b57fe9eb5bb7e1a1fbd", "title": "Detecting and Estimating Signals in Noisy Cable Structures, I: Neuronal Noise Sources", "text": "In recent theoretical approaches addressing the problem of neural coding, tools from statistical estimation and information theory have been applied to quantify the ability of neurons to transmit information through their spike outputs. These techniques, though fairly general, ignore the specific nature of neuronal processing in terms of its known biophysical properties. However, a systematic study of processing at various stages in a biophysically faithful model of a single neuron can identify the role of each stage in information transfer. Toward this end, we carry out a theoretical analysis of the information loss of a synaptic signal propagating along a linear, one-dimensional, weakly active cable due to neuronal noise sources along the way, using both a signal reconstruction and a signal detection paradigm. Here we begin such an analysis by quantitatively characterizing three sources of membrane noise: (1) thermal noise due to the passive membrane resistance, (2) noise due to stochastic openings and closings of voltage-gated membrane channels (Na+ and K+), and (3) noise due to random, background synaptic activity.
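For noise source (1), the thermal contribution has the textbook one-sided power spectral density S_V(f) = 4kTR, flat in frequency; a small worked example with assumed membrane-patch numbers (not taken from the paper):

```python
from math import sqrt

k_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_rms(R_ohm, T_kelvin, bandwidth_hz):
    """RMS thermal (Johnson) voltage noise of a resistance:
    V_rms = sqrt(4 k T R B), from the flat PSD S_V(f) = 4 k T R."""
    return sqrt(4 * k_B * T_kelvin * R_ohm * bandwidth_hz)

# Illustrative values: a 1 GOhm patch input resistance at body temperature
# (310 K) observed over a 1 kHz bandwidth.
print(johnson_noise_rms(1e9, 310.0, 1e3))  # ~1.3e-4 V, i.e. ~0.13 mV rms
```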
Using analytical expressions for the power spectral densities of these noise sources, we compare their magnitudes in the case of a patch of membrane from a cortical pyramidal cell and explore their dependence on different biophysical parameters."} {"_id": "858997daac40de2e42b4b9b5c06943749a93aaaf", "title": "Preventing Shoulder-Surfing Attack with the Concept of Concealing the Password Objects' Information", "text": "Traditionally, picture-based password systems employ password objects (pictures/icons/symbols) as input during an authentication session, thus making them vulnerable to "shoulder-surfing" attack because the visual interface by function is easily observed by others. Recent software-based approaches attempt to minimize this threat by requiring users to enter their passwords indirectly by performing certain mental tasks to derive the indirect password, thus concealing the user's actual password. However, weaknesses in the positioning of distracter and password objects introduce usability and security issues. In this paper, a new method, which conceals information about the password objects as much as possible, is proposed. Besides concealing the password objects and the number of password objects, the proposed method allows both password and distracter objects to be used as the challenge set's input. The correctly entered password appears to be random and can only be derived with the knowledge of the full set of password objects. Therefore, it would be difficult for a shoulder-surfing adversary to identify the user's actual password. Simulation results indicate that the correct input object and its location are random for each challenge set, thus preventing frequency of occurrence analysis attack. User study results show that the proposed method is able to prevent shoulder-surfing attack."} {"_id": "3fa284afe4d429c0805d95a4bd9564a7be0c8de3", "title": "The Evolution of the Peer-to-Peer File Sharing Industry and the Security Risks for Users", "text": "Peer-to-peer file sharing is a growing security risk for firms and individuals. Users who participate in these networks to share music, pictures, and video are subject to many security risks including inadvertent publishing of private information, exposure to viruses and worms, and the consequences of spyware. In this paper, we examine the peer-to-peer file sharing phenomenon, including an overview of the industry, its business models, and evolution. We describe the information security risks users face, including personal identification disclosure and leakage of proprietary business information. We illustrate those risks through honey-pot experiments and discuss how peer-to-peer industry dynamics are contributing to the security problem."} {"_id": "e21f23f56c95fce3c2c22f7037ea74208b68bd20", "title": "Wide-band CMOS low-noise amplifier exploiting thermal noise canceling", "text": "Known elementary wide-band amplifiers suffer from a fundamental tradeoff between noise figure (NF) and source impedance matching, which limits the NF to values typically above 3 dB. Global negative feedback can be used to break this tradeoff, however, at the price of potential instability. In contrast, this paper presents a feedforward noise-canceling technique, which allows for simultaneous noise and impedance matching, while canceling the noise and distortion contributions of the matching device. This allows for designing wide-band impedance-matching amplifiers with NF well below 3 dB, without suffering from instability issues.
An amplifier realized in 0.25-μm standard CMOS shows NF values below 2.4 dB over more than one decade of bandwidth (i.e., 150-2000 MHz) and below 2 dB over more than two octaves (i.e., 250-1100 MHz). Furthermore, the total voltage gain is 13.7 dB, the -3-dB bandwidth is from 2 MHz to 1.6 GHz, the IIP2 is +12 dBm, and the IIP3 is 0 dBm. The LNA drains 14 mA from a 2.5-V supply and the die area is 0.3×0.25 mm²."} {"_id": "6b8af92448180d28997499900ebfe33a473cddb7", "title": "A robust adaptive stochastic gradient method for deep learning", "text": "Stochastic gradient algorithms are the main focus of large-scale optimization problems and have led to important successes in the recent advancement of deep learning algorithms. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose an adaptive learning rate algorithm, which utilizes stochastic curvature information of the loss function for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms."} {"_id": "0f699e2c27f6fffb0c84abb6756fda0ca26db113", "title": "Multiresolution genetic clustering algorithm for texture segmentation", "text": "This work approaches the texture segmentation problem by incorporating a genetic algorithm and the K-means clustering method within a multiresolution structure. As the algorithm descends the multiresolution structure, the coarse segmentation results are propagated down to the lower levels so as to reduce the inherent class\u2013position uncertainty and to improve the segmentation accuracy. The procedure is described as follows. In the first step, a quad-tree structure of multiple resolutions is constructed. Sampling windows of different sizes are utilized to partition the underlying image into blocks at different resolution levels and texture features are extracted from each block. Based on the texture features, a hybrid genetic algorithm is employed to perform the segmentation. While the select and mutate operators of the traditional genetic algorithm are adopted in this work, the crossover operator is replaced with the K-means clustering method. In the final step, the boundaries and the segmentation result of the current resolution level are propagated down to the next level to act as contextual constraints and the initial configuration of the next level, respectively."} {"_id": "2189a35642e8b034f1396403e9890ec30db6db13", "title": "Mathematical modeling of photovoltaic module with Simulink", "text": "This paper presents a unique step-by-step procedure for the simulation of photovoltaic modules with Matlab/Simulink. A one-diode equivalent circuit is employed in order to investigate I-V and P-V characteristics of a typical 36 W solar module. The proposed model is designed with user-friendly icons and a dialog box like Simulink block libraries."} {"_id": "b0632c9966294279e03d8cda8c6daa86217f2a1a", "title": "Audio Fingerprinting: Concepts And Applications", "text": "An audio fingerprint is a compact digest derived from perceptually relevant aspects of a recording.
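The one-diode photovoltaic model mentioned above can also be reproduced outside Simulink; a sketch with assumed, illustrative module parameters (not the paper's), solving the implicit I-V relation by fixed-point iteration:

```python
import numpy as np

# Illustrative parameters for a ~36 W module (assumed, not from the paper).
I_ph, I_0, n_cells = 2.3, 1e-9, 36   # photocurrent [A], saturation current [A]
R_s, R_sh = 0.2, 400.0               # series / shunt resistance [ohm]
V_t, a = 0.0257, 1.3                 # thermal voltage per cell [V], ideality

def module_current(V, iters=50):
    """One-diode model, solved by fixed-point iteration:
    I = Iph - I0*(exp((V + I*Rs)/(a*n*Vt)) - 1) - (V + I*Rs)/Rsh."""
    I = np.zeros_like(V)
    for _ in range(iters):
        Vd = V + I * R_s
        I = I_ph - I_0 * (np.exp(Vd / (a * n_cells * V_t)) - 1) - Vd / R_sh
    return I

V = np.linspace(0.0, 21.0, 200)
I = np.clip(module_current(V), 0.0, None)
P = V * I
print(f"Pmax ~ {P.max():.1f} W at {V[P.argmax()]:.1f} V")  # I-V and P-V curves
```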
Fingerprinting technologies allow the monitoring of audio content without the need for meta-data or watermark embedding. However, additional uses exist for audio fingerprinting and some are reviewed in this article."} {"_id": "c316c705a2bede33d396447119639eb99faf96c7", "title": "Advantage of CNTFET characteristics over MOSFET to reduce leakage power", "text": "In this paper we compare and justify the advantage of CNTFET devices over MOSFET devices in the nanometer regime. Thereafter we have analyzed the effect of the chiral vector and temperature on the threshold voltage of the CNTFET device. After simulation on the HSPICE tool we observed that a high threshold voltage can be achieved at a low chiral vector pair. It is also observed that the effect of temperature on the threshold voltage of the CNTFET is negligibly small. After analysis of channel length variation and its impact on the threshold voltage of CNTFET as well as MOSFET devices, we found an anomalous result that the threshold voltage increases with decreasing channel length in CNTFET devices; this is quite contrary to the well-known short-channel effect. It is observed that below 10 nm channel length the threshold voltage increases rapidly in the case of the CNTFET device, whereas in the case of the MOSFET device the threshold voltage decreases drastically below 10 nm channel length."} {"_id": "a4fe464f6b41f844b4fc63a62c5787fccc942cef", "title": "Advanced obfuscation techniques for Java bytecode", "text": "There exist several obfuscation tools for preventing Java bytecode from being decompiled. Most of these tools simply scramble the names of the identifiers stored in a bytecode by substituting the identifiers with meaningless names. However, the scrambling technique cannot deter a determined cracker very long. We propose several advanced obfuscation techniques that make Java bytecode impossible to recompile or make the decompiled program difficult to understand and to recompile. The crux of our approach is to overuse an identifier. That is, an identifier can denote several entities, such as types, fields, and methods, simultaneously. An additional benefit is that the size of the bytecode is reduced because fewer and shorter identifier names are used. Furthermore, we also propose several techniques to intentionally introduce syntactic and semantic errors into the decompiled program while preserving the original behaviors of the bytecode. Thus, the decompiled program would have to be debugged manually. Although our basic approach is to scramble the identifiers in Java bytecode, the scrambled bytecode produced with our techniques is much harder to crack than that produced with other identifier scrambling techniques. Furthermore, the run-time efficiency of the obfuscated bytecode is also improved because the size of the bytecode becomes smaller after obfuscation."} {"_id": "10d6b12fa07c7c8d6c8c3f42c7f1c061c131d4c5", "title": "Histograms of oriented gradients for human detection", "text": "We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection.
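The dependence of CNTFET threshold voltage on the chiral vector can be sanity-checked with the usual first-order relations (tube diameter from (n, m), bandgap ≈ 0.84 eV / d[nm], threshold ≈ Eg/2q); these are standard textbook approximations, not the HSPICE models used in the paper:

```python
from math import sqrt, pi

a_cc = 0.142  # carbon-carbon bond length, nm

def cnt_diameter_nm(n, m):
    """Diameter from the chiral vector (n, m): d = a*sqrt(n^2 + n*m + m^2)/pi,
    with graphene lattice constant a = sqrt(3)*a_cc."""
    a = sqrt(3) * a_cc
    return a * sqrt(n * n + n * m + m * m) / pi

def approx_threshold_V(n, m):
    """First-order estimate: Eg ~ 0.84 eV / d[nm] for a semiconducting tube,
    and threshold voltage taken as Eg / 2q."""
    return 0.84 / cnt_diameter_nm(n, m) / 2.0

# Smaller chiral indices -> smaller diameter -> higher threshold voltage,
# consistent with the trend reported in the abstract.
for chirality in [(7, 0), (13, 0), (19, 0)]:
    print(chirality, f"{approx_threshold_V(*chirality):.2f} V")
```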
We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds."} {"_id": "2337ff38e6cfb09e28c0958f07e2090c993ef6e8", "title": "Measuring Invariances in Deep Networks", "text": "For many pattern recognition tasks, the ideal input feature would be invariant to multiple confounding properties (such as illumination and viewing angle, in computer vision applications). Recently, deep architectures trained in an unsupervised manner have been proposed as an automatic method for extracting useful features. However, it is difficult to evaluate the learned features by any means other than using them in a classifier. In this paper, we propose a number of empirical tests that directly measure the degree to which these learned features are invariant to different input transformations. We find that stacked autoencoders learn modestly more invariant features with increasing depth when trained on natural images. We find that convolutional deep belief networks learn substantially more invariant features in each layer. These results further justify the use of \u201cdeep\u201d vs. \u201cshallower\u201d representations, but suggest that mechanisms beyond merely stacking one autoencoder on top of another may be important for achieving invariance. Our evaluation metrics can also be used to evaluate future work in deep learning, and thus help the development of future algorithms."} {"_id": "31b58ced31f22eab10bd3ee2d9174e7c14c27c01", "title": "80 Million Tiny Images: A Large Data Set for Nonparametric Object and Scene Recognition", "text": "With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors."} {"_id": "34760b63a2ae964a0b04d1850dc57002f561ddcb", "title": "Decoding by linear programming", "text": "This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector $f \in \mathbb{R}^n$ from corrupted measurements $y = Af + e$. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y?
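A minimal way to reproduce HOG-style descriptors today is scikit-image's hog function (an assumed toolchain, not the authors' implementation); the parameters below mirror the design choices called out in the abstract above:

```python
from skimage import data
from skimage.feature import hog

image = data.astronaut()[:, :, 0]   # any grayscale image works here

features = hog(image,
               orientations=9,           # fine orientation binning
               pixels_per_cell=(8, 8),   # relatively coarse spatial binning
               cells_per_block=(2, 2),   # overlapping descriptor blocks
               block_norm="L2-Hys")      # local contrast normalization
print(features.shape)  # flattened grid of block-normalized histograms
```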
We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the $\ell_1$-minimization problem $\min_{g \in \mathbb{R}^n} \|y - Ag\|_{\ell_1}$ (where $\|x\|_{\ell_1} := \sum_i |x_i|$), provided that the support of the vector of errors is not too large: $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \leq \rho \cdot m$ for some $\rho > 0$. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of $\ell_1$ is a crucial property we call the uniform uncertainty principle that we shall describe in detail."} {"_id": "27b8c62835890fd31de976c12ccb80704a584ce4", "title": "Credit card fraud detection using Na\u00efve Bayes model based and KNN classifier", "text": "Machine learning is the technology in which algorithms capable of learning from previous cases and past experiences are designed. It is implemented using various algorithms which reiterate over the same data repeatedly to analyze the pattern of data. The techniques of data mining are not far behind and are widely used to extract data from large databases to discover patterns for making decisions. This paper presents the Na\u00efve Bayes improved K-Nearest Neighbor method (NBKNN) for fraud detection of credit cards. Experimental results illustrate that both classifiers work differently for the same dataset. The purpose is to enhance the accuracy and flexibility of the algorithm."} {"_id": "4b605e6a9362485bfe69950432fa1f896e7d19bf", "title": "A Comparison of Human and Automated Face Verification Accuracy on Unconstrained Image Sets", "text": "Automatic face recognition technologies have seen significant improvements in performance due to a combination of advances in deep learning and availability of larger datasets for training deep networks. Since recognizing faces is a task that humans are believed to be very good at, it is only natural to compare the relative performance of automated face recognition and humans when processing fully unconstrained facial imagery. In this work, we expand on previous studies of the recognition accuracy of humans and automated systems by performing several novel analyses utilizing unconstrained face imagery. We examine the impact on performance when human recognizers are presented with varying amounts of imagery per subject, immutable attributes such as gender, and circumstantial attributes such as occlusion, illumination, and pose. Results indicate that humans greatly outperform state-of-the-art automated face recognition algorithms on the challenging IJB-A dataset."} {"_id": "8cf03252a0231352cfbcb1fb1feee8049da67bb1", "title": "Strip-type Grid Array Antenna with a Two-Layer Rear-Space Structure", "text": "A strip-type grid array antenna is analyzed using the finite-difference time-domain method.
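The $\ell_1$-minimization decoder above can indeed be recast as a linear program; a sketch with synthetic data (dimensions and corruption level are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, corrupted = 128, 32, 10
A = rng.normal(size=(m, n))
f_true = rng.normal(size=n)
e = np.zeros(m)
e[rng.choice(m, corrupted, replace=False)] = rng.normal(scale=5, size=corrupted)
y = A @ f_true + e

# min ||y - A g||_1 as an LP: introduce t >= |y - A g| elementwise and
# minimize sum(t). Decision variables z = [g (n entries), t (m entries)].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],     #  A g - t <= y
                 [-A, -np.eye(m)]])    # -A g - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
g = res.x[:n]
print(np.max(np.abs(g - f_true)))  # ~0: exact recovery despite corruption
```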
The space between the grid array and the ground plane, designated as the rear space, is filled with a dielectric layer and an air layer (two-layer rear-space structure). A VSWR bandwidth of approximately 13% is obtained with a maximum directivity of approximately 18 dB. The grid array exhibits a narrow beam with low side lobes. The cross-polarization component is small (-25 dB), as desired."} {"_id": "a4d510439644d52701f852d9dd34bbd37f4b8b78", "title": "Using the SLEUTH urban growth model to simulate the impacts of future policy scenarios on urban land use in the Tehran metropolitan area in Iran", "text": "The SLEUTH model, based on the Cellular Automata (CA), can be applied to city development simulation in metropolitan areas. In this study the SLEUTH model was used to model the urban expansion and predict the future possible behavior of the urban growth in Tehran. The fundamental data were five Landsat TM and ETM images of 1988, 1992, 1998, 2001 and 2010. Three scenarios were designed to simulate the spatial pattern. The first scenario assumed the historical urbanization mode would persist and the only limitations for development were height and slope. The second one was a compact scenario which makes the growth mostly internal and limited the expansion of suburban areas. The last scenario proposed a polycentric urban structure which let the little patches"} {"_id": "f19e6e8a06cba5fc8cf234881419de9193bba9d0", "title": "Confidence Measures for Neural Network Classifiers", "text": "Neural Networks are commonly used in classification and decision tasks. In this paper, we focus on the problem of the local confidence of their results. We review some notions from statistical decision theory that offer an insight on the determination and use of confidence measures for classification with Neural Networks. We then present an overview of the existing confidence measures and finally propose a simple measure which combines the benefits of the probabilistic interpretation of network outputs and the estimation of the quality of the model by bootstrap error estimation. We discuss empirical results on a real-world application and an artificial problem and show that the simplest measure often behaves better than more sophisticated ones, but may be dangerous under certain situations."} {"_id": "3d4b4aa5f67e1b2eb97231013c8d2699acdf9ccd", "title": "Using parallel stiffness to achieve improved locomotive efficiency with the Sandia STEPPR robot", "text": "In this paper we introduce STEPPR (Sandia Transmission-Efficient Prototype Promoting Research), a bipedal robot designed to explore efficient bipedal walking. The initial iteration of this robot achieves efficient motions through powerful electromagnetic actuators and highly back-drivable synthetic rope transmissions. We show how the addition of parallel elastic elements at select joints is predicted to provide substantial energetic benefits: reducing cost of transport by 30 to 50 percent. Two joints in particular, hip roll and ankle pitch, reduce dissipated power over three very different gait types: human walking, human-like robot walking, and crouched robot walking. Joint springs based on this analysis are tested and validated experimentally.
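The energetic benefit of a parallel spring can be illustrated with a toy model in which winding (Joule) losses scale with the square of motor torque; the gait data below are synthetic, not STEPPR measurements:

```python
import numpy as np

# Synthetic ankle-pitch gait cycle (illustrative only).
t = np.linspace(0.0, 1.0, 200)
theta = 0.3 * np.sin(2 * np.pi * t)          # joint angle [rad]
tau = -40.0 * np.sin(2 * np.pi * t) + 5.0    # joint torque [N m]

def joule_loss(k):
    """With a parallel spring of stiffness k contributing -k*theta, the motor
    must supply tau + k*theta; winding losses scale with its square."""
    tau_motor = tau + k * theta
    return np.mean(tau_motor ** 2)

# The loss-minimizing stiffness has a closed form for this quadratic cost:
k_star = -np.sum(tau * theta) / np.sum(theta ** 2)
print(f"k* = {k_star:.1f} N m/rad, "
      f"loss without spring = {joule_loss(0.0):.0f}, "
      f"with spring = {joule_loss(k_star):.0f}")
```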
Finally, this paper concludes with the design of two unique parallel spring mechanisms to be added to the current STEPPR robot in order to provide improved locomotive efficiency."} {"_id": "ed14d9a452c4a63883df6496b8d2285201a1808b", "title": "The Theory Behind Mp3", "text": "Since the MPEG-1 Layer III encoding technology is nowadays widely used, it might be interesting to gain knowledge of how this powerful compression/decompression scheme actually functions. How come MPEG-1 Layer III is capable of reducing the bit rate by a factor of 12 with almost no audible degradation? Would it be fairly easy to implement this encoding algorithm? This paper will answer these questions and give further additional detailed information."} {"_id": "aeb7a82c61a1733fafa2a36b9bb664ac555e5d86", "title": "Chatbot for IT Security Training: Using Motivational Interviewing to Improve Security Behaviour", "text": "We conduct a pre-study with 25 participants on Mechanical Turk to find out which security behavioural problems are most important for online users. These questions are based on motivational interviewing (MI), an evidence-based treatment methodology that makes it possible to train people about different kinds of behavioural changes. Based on that, the chatbot is developed using Artificial Intelligence Markup Language (AIML). The chatbot is trained to speak about three topics: passwords, privacy and secure browsing. These three topics were \u2018most-wanted\u2019 by the users of the pre-study. With the chatbot, three training sessions with people are conducted."} {"_id": "0086ae349537bad560c8755aa2f0ece8f49b95cf", "title": "Walking on Water: Biolocomotion at the Interface", "text": "We consider the hydrodynamics of creatures capable of sustaining themselves on the water surface by means other than flotation. Particular attention is given to classifying water walkers according to their principal means of weight support and lateral propulsion. The various propulsion mechanisms are rationalized through consideration of energetics, hydrodynamic forces applied, or momentum transferred by the driving stroke. We review previous research in this area and suggest directions for future work. Special attention is given to introductory discussions of problems not previously treated in the fluid mechanics literature, with hopes of attracting physicists, applied mathematicians, and engineers to this relatively unexplored area of fluid mechanics."} {"_id": "cab60be93fe203c5b5ebc3d25121639a330dbcb0", "title": "Economic Benefit of Powerful Credit Scoring \u2217", "text": "We study the economic benefits from using credit scoring models. We contribute to the literature by relating the discriminatory power of a credit scoring model to the optimal credit decision. Given the Receiver Operating Characteristic (ROC) curve, we derive a) the profit-maximizing cutoff and b) the pricing curve. Using these two concepts and a mixture thereof, we study a stylized loan market model with banks differing in the quality of their credit scoring model. Even for small quality differences, the variation in profitability among lenders is large and economically significant.
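The profit-maximizing cutoff on an ROC curve admits a direct grid search over the scorecard's operating points; a sketch with synthetic scores and assumed per-loan gain/loss values (not the paper's stylized model):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y = (rng.random(5000) < 0.2).astype(int)     # 1 = defaulter, 20% base rate
score = rng.normal(loc=y.astype(float))      # higher score = riskier applicant

fpr, tpr, thr = roc_curve(y, score)          # one point per candidate cutoff

# Accept applicants scoring below the cutoff: an accepted good earns
# gain_good, an accepted bad costs loss_bad (assumed values).
p_bad, gain_good, loss_bad = 0.2, 0.1, 1.0
profit = gain_good * (1 - p_bad) * (1 - fpr) - loss_bad * p_bad * (1 - tpr)
best = int(np.argmax(profit))
print(f"cutoff={thr[best]:.2f}, expected profit/applicant={profit[best]:.4f}")
```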
We end our analysis by quantifying the impact on profits when information leaks from a competitor\u2019s scoring model into the market. JEL Classification Codes: D40, G21, H81"} {"_id": "0ab5d73a786d797476e62cd162ebbff357933c13", "title": "Intelligence Without Reason", "text": "Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim; that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation."} {"_id": "27751edbac24dd0134645a08e101457202816fc2", "title": "Chronic administration of cannabidiol to healthy volunteers and epileptic patients.", "text": "In phase 1 of the study, 3 mg/kg daily of cannabidiol (CBD) was given for 30 days to 8 healthy human volunteers. Another 8 volunteers received the same number of identical capsules containing glucose as a placebo in a double-blind setting. Neurological and physical examinations, blood and urine analysis, ECG and EEG were performed at weekly intervals. In phase 2 of the study, 15 patients suffering from secondary generalized epilepsy with temporal focus were randomly divided into two groups. Each patient received, in a double-blind procedure, 200-300 mg daily of CBD or placebo. The drugs were administered for as long as 4 1/2 months. Clinical and laboratory examinations, EEG and ECG were performed at 15- or 30-day intervals. Throughout the experiment the patients continued to take the antiepileptic drugs prescribed before the experiment, although these drugs no longer controlled the signs of the disease. All patients and volunteers tolerated CBD very well and no signs of toxicity or serious side effects were detected on examination. 4 of the 8 CBD subjects remained almost free of convulsive crises throughout the experiment and 3 other patients demonstrated partial improvement in their clinical condition. CBD was ineffective in 1 patient. The clinical condition of 7 placebo patients remained unchanged whereas the condition of 1 patient clearly improved. The potential use of CBD as an antiepileptic drug and its possible potentiating effect on other antiepileptic drugs are discussed."} {"_id": "8776e7260399e838da02f092d0f550ae1e59f7b5", "title": "Face age estimation using wrinkle patterns", "text": "Face age estimation is a challenging problem due to the variation of craniofacial growth, skin texture, gender and race.
With recent growth in face age estimation research, wrinkles have received attention from a number of researchers, as they are generally perceived as an aging feature and a soft biometric for person identification. In a face image, a wrinkle is a discontinuous and arbitrary line pattern that varies in different face regions and subjects. Existing wrinkle detection algorithms and wrinkle-based features are not robust for face age estimation. They are either weakly represented or not validated against the ground truth. The primary aim of this thesis is to develop a robust wrinkle detection method and construct novel wrinkle-based methods for face age estimation. First, Hybrid Hessian Filter (HHF) is proposed to segment the wrinkles using the directional gradient and a ridge-valley Gaussian kernel. Second, Hessian Line Tracking (HLT) is proposed for wrinkle detection by exploring the wrinkle connectivity of surrounding pixels using a cross-sectional profile. Experimental results showed that HLT outperforms other wrinkle detection algorithms with an accuracy of 84% and 79% on the datasets of FORERUS and FORERET while HHF achieves 77% and 49%, respectively. Third, Multi-scale Wrinkle Patterns (MWP) is proposed as a novel feature representation for face age estimation using the wrinkle location, intensity and density. Fourth, Hybrid Aging Patterns (HAP) is proposed as a hybrid pattern for face age estimation using Facial Appearance Model (FAM) and MWP. Fifth, Multi-layer Age Regression (MAR) is proposed as a hierarchical model complementary to FAM and MWP for face age estimation. For performance assessment of age estimation, four datasets namely FGNET, MORPH, FERET and PAL with different age ranges and sample sizes are used as benchmarks. Results showed that MAR achieves the lowest Mean Absolute Error (MAE) of 3.00 (\u00b14.14) on FERET and HAP scores a comparable MAE of 3.02 (\u00b12.92) relative to the state of the art. In conclusion, wrinkles are important features and the uniqueness of this pattern should be considered in developing a robust model for face age estimation."} {"_id": "d90552370c1c30f58bb2c3a539a1146a6d217837", "title": "ARTIFICIAL NEURAL NETWORK ARCHITECTURE FOR SOLVING THE DOUBLE DUMMY BRIDGE PROBLEM IN CONTRACT BRIDGE", "text": "Card games are interesting for many reasons besides their connection with gambling. Bridge, being a game of imperfect information, is a well-defined decision-making game. The estimation of the number of tricks to be taken by one pair of bridge players is called the Double Dummy Bridge Problem (DDBP). Artificial Neural Networks are non-linear mapping structures based on the function of the human brain. A feed-forward neural network is used to solve the DDBP in contract bridge. The supervised learning methodology was used in a Back-Propagation Network (BPN) for training and testing the bridge sample deals. In our study we compared Back-Propagation algorithms and found that the Resilient Back-Propagation algorithm with the hyperbolic tangent function produced better results than the others. Among various neural network architectures, in this study we used four network architectures viz., 26x4, 52, 104 and 52x4 for solving the DDBP in contract bridge.
The proposed system captures and processes the rounded image drawn at the parking lot and produces information about the empty car parking spaces. In this work, a camera is used as a sensor to take photos showing the occupancy of car parks. A camera is used because a single image can reveal the presence of many cars at once. Also, the camera can be easily moved to monitor different car parking lots. From this image, the vacant parking spaces can be identified, and the processed information is then used to guide a driver to an available car park rather than wasting time finding one. The proposed system has been developed on both software and hardware platforms. An automatic parking system is used to make the whole process of parking cars more efficient and less complex for both drivers and administrators."} {"_id": "f918ff8347a890d75a59894c3488c08619915c6a", "title": "Lens on the Endpoint: Hunting for Malicious Software Through Endpoint Data Analysis", "text": "Organizations are facing an increasing number of criminal threats ranging from opportunistic malware to more advanced targeted attacks. While various security technologies are available to protect organizations\u2019 perimeters, still many breaches lead to undesired consequences such as loss of proprietary information, financial burden, and reputation defacing. Recently, endpoint monitoring agents that inspect system-level activities on user machines started to gain traction and be deployed in the industry as an additional defense layer. Their application, though, in most cases is only for forensic investigation to determine the root cause of an incident. In this paper, we demonstrate how endpoint monitoring can be proactively used for detecting and prioritizing suspicious software modules overlooked by other defenses. Compared to other environments in which host-based detection proved successful, our setting of a large enterprise introduces unique challenges, including the heterogeneous environment (users installing software of their choice), limited ground truth (small number of malicious software available for training), and coarse-grained data collection (strict requirements are imposed on agents\u2019 performance overhead). Through applications of clustering and outlier detection algorithms, we develop techniques to identify modules with known malicious behavior, as well as modules impersonating popular benign applications. We leverage a large number of static, behavioral and contextual features in our algorithms, and new feature weighting methods that are resilient against missing attributes. The large majority of our findings are confirmed as malicious by anti-virus tools and manual investigation by experienced security analysts."} {"_id": "d8889da0826e8153be7699f4eef4f0ed2e8a7ea7", "title": "Distributed heterogeneous event processing: enhancing scalability and interoperability of CEP in an industrial context", "text": "Although a significant amount of research has investigated the benefits of distributed CEP in terms of scalability and extensibility, there is an ongoing reluctance in deploying distributed CEP in an industrial context. In this paper we present the DHEP system developed together with the IBM\u00ae laboratory in B\u00f6blingen. It addresses some of the key problems in increasing the acceptance of distributed CEP, for example supporting interoperability between heterogeneous event processing systems.
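One simple realization of camera-based stall classification is edge-density thresholding over per-stall regions of interest; the stall rectangles, file name and threshold below are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical stall rectangles (x, y, w, h) drawn over the lot image.
STALLS = [(10, 20, 40, 80), (60, 20, 40, 80), (110, 20, 40, 80)]

def occupied_stalls(frame, edge_density_thr=0.08):
    """Classify each stall by edge density: empty pavement produces few
    Canny edges, a parked car produces many. Threshold is tunable."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    status = []
    for (x, y, w, h) in STALLS:
        roi = edges[y:y + h, x:x + w]
        status.append(np.count_nonzero(roi) / roi.size > edge_density_thr)
    return status

frame = cv2.imread("parking_lot.jpg")  # hypothetical camera capture
if frame is not None:
    print(occupied_stalls(frame))      # one boolean per stall
```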
We present the concepts behind the DHEP system and show how those concepts help to achieve scalable and extensible event processing in an industrial context. Moreover, we verify in an evaluation study that the additional cost imposed by the DHEP system is moderate and 'affordable' for the benefits provided."} {"_id": "4a5be26509557f0a1a911e639868bfe9d002d664", "title": "An Analysis of the Manufacturing Messaging Specification Protocol", "text": "The Manufacturing Messaging Specification (MMS) protocol is widely used in industrial process control applications, but it is poorly documented. In this paper we present an analysis of the MMS protocol in order to improve understanding of MMS in the context of information security. Our findings show that MMS has insufficient security mechanisms, and the meagre security mechanisms that are available are not implemented in commercially available industrial devices."} {"_id": "3ffa280bbba607ba3dbc0f6adcc557c85453016c", "title": "Non-iterative, feature-preserving mesh smoothing", "text": "With the increasing use of geometry scanners to create 3D models, there is a rising need for fast and robust mesh smoothing to remove inevitable noise in the measurements. While most previous work has favored diffusion-based iterative techniques for feature-preserving smoothing, we propose a radically different approach, based on robust statistics and local first-order predictors of the surface. The robustness of our local estimates allows us to derive a non-iterative feature-preserving filtering technique applicable to arbitrary "triangle soups". We demonstrate its simplicity of implementation and its efficiency, which make it an excellent solution for smoothing large, noisy, and non-manifold meshes."} {"_id": "e1781105aaeb66f6261a99bdee13ae410ea4495d", "title": "Robust real-time underwater digital video streaming using optical communication", "text": "We present a real-time video delivery solution based on free-space optical communication for underwater applications. This solution comprises AquaOptical II, a high-bandwidth wireless optical communication device, and a two-layer digital encoding scheme designed for error-resistant communication of high resolution images. Our system can transmit digital video reliably through a unidirectional underwater channel, with minimal infrastructural overhead. We present empirical evaluation of this system's performance for various system configurations, and demonstrate that it can deliver high quality video at up to 15 Hz, with near-negligible communication latencies of 100 ms. We further characterize the corresponding end-to-end latencies, i.e. from time of image acquisition until time of display, and reveal optimized results of under 200 ms, which facilitates a wide range of applications such as underwater robot tele-operation and interactive remote seabed monitoring."} {"_id": "0c5f142ff723d9a1f8e3b7ad840e2aaee5213605", "title": "Development of Highly Secured Cloud Rendered Log Management System", "text": "A log is a collection of records of events that occur within an organization's systems and networks. These logs are very important for any organization, because log files are able to record all user activities. As these log files play a vital role and contain sensitive information, they should be maintained in a highly secure manner. So, management and secure maintenance of log records is a very tedious task.
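One standard cryptographic building block for the log-integrity problem raised above is a MACed hash chain, in which each record binds its predecessor; a minimal sketch (key handling is simplified for illustration, and this is one possible construction, not the system proposed in the paper):

```python
import hashlib
import hmac
import json

def append_entry(log, entry, key):
    """Each record binds the previous record's MAC, so tampering with or
    deleting any record breaks verification of everything after it."""
    prev = log[-1]["mac"] if log else "0" * 64
    payload = json.dumps({"entry": entry, "prev": prev}, sort_keys=True)
    mac = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"entry": entry, "prev": prev, "mac": mac})

def verify(log, key):
    prev = "0" * 64
    for rec in log:
        payload = json.dumps({"entry": rec["entry"], "prev": prev}, sort_keys=True)
        expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if rec["prev"] != prev or not hmac.compare_digest(rec["mac"], expected):
            return False
        prev = rec["mac"]
    return True

key, log = b"secret-key", []
append_entry(log, "user alice logged in", key)
append_entry(log, "file /etc/passwd read", key)
print(verify(log, key))   # True
log[0]["entry"] = "nothing happened"
print(verify(log, key))   # False: tampering detected
```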
However, deploying such a system for high security and privacy of log records imposes overhead on an organization and also requires additional cost. Many techniques have been designed so far for the security of log records. An alternative solution is to maintain log records in a cloud database. Storing log files in a cloud environment leads to challenges about the privacy, confidentiality and integrity of the log files. In this paper, we propose a highly secured cloud-rendered log management system and the use of cryptographic algorithms for dealing with the issues of accessing cloud-based data storage. To the best of our knowledge, this is the first work to provide a complete solution to the cloud-based secure log management problem."} {"_id": "9c6eb7ec2de5779baa8ceb782a1c8fe7affeaf70", "title": "Inside help: An integrative review of champions in healthcare-related implementation", "text": "Background/aims\nThe idea that champions are crucial to effective healthcare-related implementation has gained broad acceptance; yet the champion construct has been hampered by inconsistent use across the published literature. This integrative review sought to establish the current state of the literature on champions in healthcare settings and bring greater clarity to this important construct.\n\n\nMethods\nThis integrative review was limited to research articles in peer-reviewed, English-language journals published from 1980 to 2016. Searches were conducted on the online MEDLINE database via OVID and PubMed using the keyword "champion." Several additional terms often describe champions and were also included as keywords: implementation leader, opinion leader, facilitator, and change agent. Bibliographies of full-text articles that met inclusion criteria were reviewed for additional references not yet identified via the main strategy of conducting keyword searches in MEDLINE. A five-member team abstracted all full-text articles meeting inclusion criteria.\n\n\nResults\nThe final dataset for the integrative review consisted of 199 unique articles. Use of the term champion varied widely across the articles with respect to topic, specific job positions, or broader organizational roles. The most common method for operationalizing champion for purposes of analysis was the use of a dichotomous variable designating champion presence or absence. Four studies randomly allocated the presence or absence of champions.\n\n\nConclusions\nThe number of published champion-related articles has markedly increased: more articles were published during the last two years of this review (i.e. 2015-2016) than during its first 30\u2009years (i.e. 1980-2009). The number of champion-related articles has continued to increase sharply since the year 2000. Individual studies consistently found that champions were important positive influences on implementation effectiveness. Although few in number, the randomized trials of champions that have been conducted demonstrate the feasibility of using experimental design to study the effects of champions in healthcare."} {"_id": "15a2ef5fac225c864759b28913b313908401043f", "title": "Common criteria compliant software development (CC-CASD)", "text": "In order to gain their customers' trust, software vendors can certify their products according to security standards, e.g., the Common Criteria (ISO 15408). However, a Common Criteria certification requires a comprehensible documentation of the software product.
The creation of this documentation results in high costs in terms of time and money.\n We propose a software development process that supports the creation of the required documentation for a Common Criteria certification. Hence, we do not need to create the documentation after the software is built. Furthermore, we propose to use an enhanced version of the requirements-driven software engineering process called ADIT to discover possible problems with the establishment of Common Criteria documents. We aim to detect these issues before the certification process. Thus, we avoid expensive delays of the certification effort. ADIT provides a seamless development approach that allows consistency checks between different kinds of UML models. ADIT also supports traceability from security requirements to design documents. We illustrate our approach with the development of a smart metering gateway system."} {"_id": "21968ae000669eb4cf03718a0d97e23a6bf75926", "title": "Learning influence probabilities in social networks", "text": "Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance."} {"_id": "24986435c73066aaea0b21066db4539270106bee", "title": "Novelty and redundancy detection in adaptive filtering", "text": "This paper addresses the problem of extending an adaptive information filtering system to make decisions about the novelty and redundancy of relevant documents. It argues that relevance and redundancy should each be modelled explicitly and separately. A set of five redundancy measures are proposed and evaluated in experiments with and without redundancy thresholds. The experimental results demonstrate that the cosine similarity metric and a redundancy measure based on a mixture of language models are both effective for identifying redundant documents."} {"_id": "39282ff070f62ceeaa6495815098cbac8411101f", "title": "Collaborative location and activity recommendations with GPS history data", "text": "With the increasing popularity of location-based services, such as tour guides and location-based social networks, much location data has now accumulated on the Web. In this paper, we show that, by using the location data based on GPS and users' comments at various locations, we can discover interesting locations and possible activities that can be performed there for recommendations.
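The cosine-similarity redundancy measure reported as effective above can be sketched as a delivery filter over TF-IDF vectors (the threshold value is illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novel_documents(docs, threshold=0.8):
    """Deliver a relevant document only if its cosine similarity to every
    previously delivered document stays below the redundancy threshold."""
    tfidf = TfidfVectorizer().fit_transform(docs)
    delivered = []
    for i in range(len(docs)):
        if all(cosine_similarity(tfidf[i], tfidf[j])[0, 0] < threshold
               for j in delivered):
            delivered.append(i)
    return delivered

docs = ["stocks fell sharply on monday",
        "on monday stocks fell sharply",
        "the match ended in a draw"]
print(novel_documents(docs))  # [0, 2]: the near-duplicate is filtered out
```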
Our research is highlighted in the following location-related queries in our daily life: 1) if we want to do something such as sightseeing or food-hunting in a large city such as Beijing, where should we go? 2) If we have already visited some places such as the Bird's Nest building in Beijing's Olympic park, what else can we do there? By using our system, for the first question, we can recommend that she visit a list of interesting locations such as Tiananmen Square, Bird's Nest, etc. For the second question, if the user visits Bird's Nest, we can recommend that she not only do sightseeing but also experience its outdoor exercise facilities or try some nice food nearby. To achieve this goal, we first model the users' location and activity histories that we take as input. We then mine knowledge, such as the location features and activity-activity correlations from the geographical databases and the Web, to gather additional inputs. Finally, we apply a collective matrix factorization method to mine interesting locations and activities, and use them to recommend to the users where they can visit if they want to perform some specific activities and what they can do if they visit some specific places. We empirically evaluated our system using a large GPS dataset collected by 162 users over a period of 2.5 years in the real world. We extensively evaluated our system and showed that it can outperform several state-of-the-art baselines."} {"_id": "03f9b5389df52f42cabcf0c4a9ac6e10ff6d4395", "title": "A mobile application framework for the geospatial web", "text": "In this paper we present an application framework that leverages geospatial content on the World Wide Web by enabling innovative modes of interaction and novel types of user interfaces on advanced mobile phones and PDAs. We discuss the current development steps involved in building mobile geospatial Web applications and derive three technological pre-requisites for our framework: spatial query operations based on visibility and field of view, a 2.5D environment model, and a presentation-independent data exchange format for geospatial query results. We propose the Local Visibility Model as a suitable XML-based candidate and present a prototype implementation."} {"_id": "08a8c653b4f20f2b63ac6734f24fa5f5f819782a", "title": "Mining interesting locations and travel sequences from GPS trajectories", "text": "The increasing availability of GPS-enabled devices is changing the way people interact with the Web, and brings us a large amount of GPS trajectories representing people's location histories. In this paper, based on multiple users' GPS trajectories, we aim to mine interesting locations and classical travel sequences in a given geospatial region. Here, interesting locations mean the culturally important places, such as Tiananmen Square in Beijing, and frequented public areas, like shopping malls and restaurants, etc. Such information can help users understand surrounding locations, and would enable travel recommendation. In this work, we first model multiple individuals' location histories with a tree-based hierarchical graph (TBHG). Second, based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based inference model, which regards an individual's access on a location as a directed link from the user to that location. This model infers the interest of a location by taking into account the following three factors. 
1) The interest of a location depends on not only the number of users visiting this location but also these users' travel experiences. 2) Users' travel experiences and location interests have a mutual reinforcement relationship. 3) The interest of a location and the travel experience of a user are relative values and are region-related. Third, we mine the classical travel sequences among locations considering the interests of these locations and users' travel experiences. We evaluated our system using a large GPS dataset collected by 107 users over a period of one year in the real world. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, when considering the users' travel experiences and location interests, we achieved better performance than baselines, such as rank-by-count and rank-by-interest, etc."} {"_id": "274ed239919074cbe96b1d9d357bb930788e6668", "title": "A Framework for Evaluating Intrusion Detection Architectures in Advanced Metering Infrastructures", "text": "The scale and complexity of Advanced Metering Infrastructure (AMI) networks require careful planning for the deployment of security solutions. In particular, the large number of AMI devices and the volume and diversity of communication expected to take place on the various AMI networks make the role of intrusion detection systems (IDSes) critical. Understanding the trade-offs for a scalable and comprehensive IDS is key to investing in the right technology and deploying sensors at optimal locations. This paper reviews the benefits and costs associated with different IDS deployment options, including both centralized and distributed solutions. A general cost-model framework is proposed to help utilities (AMI asset owners) make more informed decisions when selecting IDS deployment architectures and managing their security investments. We illustrate how the framework can be applied through case studies, and highlight the interesting cost/benefit trade-offs that emerge."} {"_id": "5a26c0ad196cd5e0a3fcd3d3a306c3ce545977ae", "title": "Interactive volumetric fog display", "text": "Traditional fog screens are 2D. We propose a new design of fog screen that can generate fog at different depth positions and only where necessary. Our approach applies projection mapping onto a non-planar and reconfigurable fog screen, thus enabling interactive visual contents to be displayed at multiple depth levels. Viewers can perceive the three-dimensionality naturally, and interact with the unencumbered images by touching them directly in mid-air. The display can be used in mixed reality settings where physical objects co-exist and interact with the 3D imagery in physical space."} {"_id": "86c00866dc78aaadf255e7a94cc4dd7219245167", "title": "Adversarial Reinforcement Learning in a Cyber Security Simulation", "text": "This paper focuses on cyber-security simulations in networks modeled as a Markov game with incomplete information and stochastic elements. The resulting game is an adversarial sequential decision making problem played with two agents, the attacker and defender. The two agents pit reinforcement learning techniques, such as neural networks, Monte Carlo learning and Q-learning, against each other, examining their effectiveness against learning opponents. 
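As context for the reinforcement learning techniques this abstract pits against each other, a minimal tabular Q-learning loop with softmax (Boltzmann) exploration is sketched below; the toy transition and reward model merely stands in for the paper's attacker/defender game and is not taken from it:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, tau = 0.1, 0.9, 0.5   # learning rate, discount, softmax temperature

def softmax_action(q_row):
    # Boltzmann exploration: higher-valued actions are sampled more often,
    # but every action keeps nonzero probability.
    z = q_row / tau
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(q_row), p=p)

def step(s, a):
    # Stand-in dynamics (NOT the paper's game): deterministic walk with a
    # reward for reaching the last state.
    s2 = (s + a) % n_states
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

s = 0
for _ in range(5000):
    a = softmax_action(Q[s])
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
    s = s2
print(np.round(Q, 2))
```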
The results showed that Monte Carlo learning with the Softmax exploration strategy is most effective in performing the defender role and also in learning attacking strategies."} {"_id": "d33d0140d8e36e23dfa88dfc40b837af44709533", "title": "SALMA: Standard Arabic Language Morphological Analysis", "text": "Morphological analyzers are preprocessors for text analysis. Many Text Analytics applications need them to perform their tasks. This paper reviews the SALMA-Tools (Standard Arabic Language Morphological Analysis) [1]. The SALMA-Tools is a collection of open-source standards, tools and resources that widen the scope of Arabic word structure analysis - particularly morphological analysis, to process Arabic text corpora of different domains, formats and genres, of both vowelized and non-vowelized text. Tag-assignment is significantly more complex for Arabic than for many languages. The morphological analyzer should add the appropriate linguistic information to each part or morpheme of the word (proclitic, prefix, stem, suffix and enclitic); in effect, instead of a tag for a word, we need a subtag for each part. Very fine-grained distinctions may cause problems for automatic morphosyntactic analysis - particularly probabilistic taggers which require training data, if some words can change grammatical tag depending on function and context; on the other hand, fine-grained distinctions may actually help to disambiguate other words in the local context. The SALMA Tagger is a fine-grained morphological analyzer which mainly depends on linguistic information extracted from traditional Arabic grammar books and prior-knowledge broad-coverage lexical resources; the SALMA ABCLexicon. More fine-grained tag sets may be more appropriate for some tasks. The SALMA Tag Set is a standard tag set for encoding, which captures long-established traditional fine-grained morphological features of Arabic, in a notation format intended to be compact yet transparent."} {"_id": "3caedff0a82950046730bce6f8d85aec46cf2e8c", "title": "Empirical evidence in global software engineering: a systematic review", "text": "Recognized as one of the trends of the 21st century, globalization of the world economies brought significant changes to nearly all industries, and to software development in particular. Many companies started global software engineering (GSE) to benefit from cheaper, faster and better development of software systems, products and services. However, empirical studies indicate that achieving these benefits is not an easy task. Here, we report our findings from investigating empirical evidence in GSE-related research literature. By conducting a systematic review we observe that the GSE field is still immature. The number of empirical studies is relatively small. The majority of the studies represent problem-oriented reports focusing on different aspects of GSE management rather than in-depth analysis of solutions, for example in terms of useful practices or techniques. Companies are still driven by cost reduction strategies, and at the same time, the most frequently discussed recommendations indicate a necessity of investments in travelling and socialization. Thus, at the same time as development goes global there is an ambition to minimize geographical, temporal and cultural separation. These are normally integral parts of cross-border collaboration. 
In summary, the systematic review results in several descriptive classifications of the papers on empirical studies in GSE and also reports on some best practices identified from the literature."} {"_id": "4f2eb8902bbea3111b1ec2a974eab31e92bb1435", "title": "Fundamental limits of RSS fingerprinting based indoor localization", "text": "Indoor localization has been an active research field for decades, where the received signal strength (RSS) fingerprinting based methodology is widely adopted and induces many important localization techniques such as the recently proposed one building the fingerprint database with crowd-sourcing. While efforts have been dedicated to improving the accuracy and efficiency of localization, the fundamental limits of the RSS fingerprinting based methodology itself are still unknown from a theoretical perspective. In this paper, we present a general probabilistic model to shed light on a fundamental question: how good an accuracy can RSS fingerprinting based indoor localization achieve? Concretely, we present the probability that a user can be localized in a region of a certain size, given the RSS fingerprints submitted to the system. We reveal the interaction among the localization accuracy, the reliability of location estimation and the number of measurements in the RSS fingerprinting based location determination. Moreover, we present the optimal fingerprints reporting strategy that can achieve the best accuracy for given reliability and the number of measurements, which provides a design guideline for the RSS fingerprinting based indoor localization facilitated by the crowdsourcing paradigm."} {"_id": "3d88c7e728e94278a2f3ef654818a5e93d937743", "title": "Human ES cell-derived neural rosettes reveal a functionally distinct early neural stem cell stage.", "text": "Neural stem cells (NSCs) yield both neuronal and glial progeny, but their differentiation potential toward multiple region-specific neuron types remains remarkably poor. In contrast, embryonic stem cell (ESC) progeny readily yield region-specific neuronal fates in response to appropriate developmental signals. Here we demonstrate prospective and clonal isolation of neural rosette cells (termed R-NSCs), a novel NSC type with broad differentiation potential toward CNS and PNS fates and capable of in vivo engraftment. R-NSCs can be derived from human and mouse ESCs or from neural plate stage embryos. While R-NSCs express markers classically associated with NSC fate, we identified a set of genes that specifically mark the R-NSC state. Maintenance of R-NSCs is promoted by activation of SHH and Notch pathways. In the absence of these signals, R-NSCs rapidly lose rosette organization and progress to a more restricted NSC stage. We propose that R-NSCs represent the first characterized NSC stage capable of responding to patterning cues that direct differentiation toward region-specific neuronal fates. In addition, the R-NSC-specific genetic markers presented here offer new tools for harnessing the differentiation potential of human ESCs."} {"_id": "01ef8a09ffa6ab5756a74b5aee1f8563d95d6e86", "title": "Material classification and semantic segmentation of railway track images with deep convolutional neural networks", "text": "The condition of railway tracks needs to be periodically monitored to ensure passenger safety. Cameras mounted on a moving vehicle such as a hi-rail vehicle or a geometry inspection car can generate large volumes of high resolution images. 
Extracting accurate information from those images has been challenging due to background clutter in railroad environments. In this paper, we describe a novel approach to visual track inspection using material classification and semantic segmentation with Deep Convolutional Neural Networks (DCNN). We show that DCNNs trained end-to-end for material classification are more accurate than shallow learning machines with hand-engineered features and are more robust to noise. Our approach results in a material classification accuracy of 93.35% using 10 classes of materials. This allows for the detection of crumbling and chipped tie conditions at detection rates of 86.06% and 92.11%, respectively, at a false positive rate of 10 FP/mile on the 85-mile Northeast Corridor (NEC) 2012-2013 concrete tie dataset."} {"_id": "b1e57ea60b291ddfe6b97f28f5b73a76f3e65bc3", "title": "A conversational intelligent tutoring system to automatically predict learning styles", "text": "This paper proposes a generic methodology and architecture for developing a novel conversational intelligent tutoring system (CITS) called Oscar that leads a tutoring conversation and dynamically predicts and adapts to a student\u2019s learning style. Oscar aims to mimic a human tutor by implicitly modelling the learning style during tutoring, and personalising the tutorial to boost confidence and improve the effectiveness of the learning experience. Learners can intuitively explore and discuss topics in natural language, helping to establish a deeper understanding of the topic. The Oscar CITS methodology and architecture are independent of the learning styles model and tutoring subject domain. Oscar CITS was implemented using the Index of Learning Styles (ILS) model (Felder & Silverman 1988) to deliver an SQL tutorial. Empirical studies involving real students have validated the prediction of learning styles in a real-world teaching/learning environment. The results showed that all learning styles in the ILS model were successfully predicted from a natural language tutoring conversation, with an accuracy of 61-100%. Participants also found Oscar\u2019s tutoring helpful and achieved an average learning gain of 13%."} {"_id": "96de937b4bb3dfacdf5f6b5ed653994fafbc1aed", "title": "Magic Quadrant for Data Quality Tools", "text": "Extensive data on functional capabilities, customer base demographics, financial status, pricing and other quantitative attributes gained via a request-for-information process engaging vendors in this market. Interactive briefings in which vendors provided Gartner with updates on their strategy, market positioning, recent key developments and product road map. A Web-based survey of reference customers provided by each vendor, which captured data on usage patterns, levels of satisfaction with major product functionality categories, various nontechnology-related vendor attributes (such as pricing, product support and overall service delivery), and more. In total, 333 organizations across all major world regions provided input on their experiences with vendors and tools in this manner. Feedback about tools and vendors captured during conversations with users of Gartner's client inquiry service. 
Market share and revenue growth estimates developed by Gartner's Technology and Service Provider research unit."} {"_id": "366d2c9a55c13654bcb235c66cf79163999c60b9", "title": "A study of near-field direct antenna modulation systems using convex optimization", "text": "This paper studies the constellation diagram design for a class of communication systems known as near-field direct antenna modulation (NFDAM) systems. The modulation is carried out in a NFDAM system by means of a control unit that switches among a number of pre-designed passive controllers such that each controller generates a desired voltage signal at the far field. To find an optimal number of signals that can be transmitted and demodulated reliably in a NFDAM system, the coverage area of the signal at the far field should be identified. It is shown that this coverage area is a planar convex region in general and simply a circle in the case when no constraints are imposed on the input impedance of the antenna and the voltage received at the far field. A convex optimization method is then proposed to find a polygon that is able to approximate the coverage area of the signal constellation diagram satisfactorily. A similar analysis is provided for the identification of the coverage area of the antenna input impedance, which is beneficial for designing an energy-efficient NFDAM system."} {"_id": "0eff3eb68ae892012f0d478444f8bb6f50361be5", "title": "BFS and Coloring-Based Parallel Algorithms for Strongly Connected Components and Related Problems", "text": "Finding the strongly connected components (SCCs) of a directed graph is a fundamental graph-theoretic problem. Tarjan's algorithm is an efficient serial algorithm to find SCCs, but relies on the hard-to-parallelize depth-first search (DFS). We observe that implementations of several parallel SCC detection algorithms show poor parallel performance on modern multicore platforms and large-scale networks. This paper introduces the Multistep method, a new approach that avoids work inefficiencies seen in prior SCC approaches. It does not rely on DFS, but instead uses a combination of breadth-first search (BFS) and a parallel graph coloring routine. We show that the Multistep method scales well on several real-world graphs, with performance fairly independent of topological properties such as the size of the largest SCC and the total number of SCCs. On a 16-core Intel Xeon platform, our algorithm achieves a 20X speedup over the serial approach on a 2 billion edge graph, fully decomposing it in under two seconds. For our collection of test networks, we observe that the Multistep method is 1.92X faster (mean speedup) than the state-of-the-art Hong et al. SCC method. In addition, we modify the Multistep method to find connected and weakly connected components, as well as introduce a novel algorithm for determining articulation vertices of biconnected components. These approaches all utilize the same underlying BFS and coloring routines."} {"_id": "90e3ec000125d579ec1724781410d4201be6d2a8", "title": "Evaluation of communication technologies for IEC 61850 based distribution automation system with distributed energy resources", "text": "This paper presents the study of different communication systems between IEC 61850 based distribution substation and distributed energy resources (DERs). Communication networks have been simulated for a typical distribution automation system (DAS) with DERs using OPNET software. 
The simulation study shows the performance of wired and wireless communication systems for different messages, such as GOOSE and measured (metered) values between DAS and DERs. A laboratory set-up has been implemented using commercial relay and communication devices for evaluating the performance of GOOSE messages, using wired and wireless physical medium. Finally, simulation and laboratory results are discussed in detail."} {"_id": "3c8ffc499c5748f28203b40e44da2d9142d8d396", "title": "A clinical study of Noonan syndrome.", "text": "Clinical details are presented on 151 individuals with Noonan syndrome (83 males and 68 females, mean age 12.6 years). Polyhydramnios complicated 33% of affected pregnancies. The commonest cardiac lesions were pulmonary stenosis (62%), and hypertrophic cardiomyopathy (20%), with a normal echocardiogram present in only 12.5% of all cases. Significant feeding difficulties during infancy were present in 76% of the group. Although the children were short (50% with a height less than 3rd centile), and underweight (43% with a weight less than 3rd centile), the mean head circumference of the group was on the 50th centile. Motor milestone delay was usual, the cohort having a mean age of sitting unsupported of 10 months and walking of 21 months. Abnormal vision (94%) and hearing (40%) were frequent findings, but 89% of the group were attending normal primary or secondary schools. Other associations included undescended testicles (77%), hepatosplenomegaly (50%), and evidence of abnormal bleeding (56%). The mean age at diagnosis of Noonan syndrome in this group was 9.0 years. Earlier diagnosis of this common condition would aid both clinical management and genetic counselling."} {"_id": "ca45e17cf41cf1fd0aa7c9536f0a27bc0f4d3b33", "title": "Superneurons: dynamic GPU memory management for training deep neural networks", "text": "Going deeper and wider in neural architectures improves their accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desired network architectures, or nontrivially dissect a network across multiple GPUs. These distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far beyond the GPU DRAM capacity. SuperNeurons features 3 memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for the training, but also dynamically allocates the memory for convolution workspaces to achieve the high performance. Evaluations against Caffe, Torch, MXNet and TensorFlow have demonstrated that SuperNeurons trains at least 3.2432\u00d7 deeper networks than current ones with the leading performance. Particularly, SuperNeurons can train ResNet2500 that has 10^4 basic network layers on a 12GB K40c."} {"_id": "c7baff1a1c70e01d6bc71a29c633fb1fd326f5a7", "title": "BRAVO: Balanced Reliability-Aware Voltage Optimization", "text": "Defining a processor micro-architecture for a targeted product space involves multi-dimensional optimization across performance, power and reliability axes. 
A key decision in such a definition process is the circuit- and technology-driven parameter of the nominal (voltage, frequency) operating point. This is a challenging task, since optimizing individually or pair-wise amongst these metrics usually results in a design that falls short of the specification in at least one of the three dimensions. Aided by academic research, industry has now adopted early-stage definition methodologies that consider both energy- and performance-related metrics. Reliability-related enhancements, on the other hand, tend to get factored in via a separate thread of activity. This task is typically pursued without thorough pre-silicon quantifications of the energy or even the performance cost. In the late-CMOS design era, reliability needs to move from a post-silicon afterthought or validation-only effort to a pre-silicon definition process. In this paper, we present BRAVO, a methodology for such reliability-aware design space exploration. BRAVO is supported by a multi-core simulation framework that integrates performance, power and reliability modeling capability. Errors induced by both soft and hard fault incidence are captured within the reliability models. We introduce the notion of the Balanced Reliability Metric (BRM), which we use to evaluate overall reliability of the processor across soft and hard error incidences. We demonstrate up to 79% improvement in reliability in terms of this metric, for only a 6% drop in overall energy efficiency over design points that maximize energy efficiency. We also demonstrate several real-life use-case applications of BRAVO in an industrial setting."} {"_id": "65b0ffbab1ae29deff6dee7d401bf14b4edf8477", "title": "Ensemble Methods for Multi-label Classification", "text": "Ensemble methods have been shown to be an effective tool for solving multi-label classification tasks. In the RAndom k-labELsets (RAKEL) algorithm, each member of the ensemble is associated with a small randomly-selected subset of k labels. Then, a single-label classifier is trained according to each combination of elements in the subset. In this paper we adopt a similar approach; however, instead of randomly choosing subsets, we select the minimum required subsets of k labels that cover all labels and meet additional constraints such as coverage of inter-label correlations. Construction of the cover is achieved by formulating the subset selection as a minimum set covering problem (SCP) and solving it by using approximation algorithms. Every cover needs only to be prepared once by offline algorithms. Once prepared, a cover may be applied to the classification of any given multi-label dataset whose properties conform with those of the cover. The contribution of this paper is two-fold. First, we introduce SCP as a general framework for constructing label covers while allowing the user to incorporate cover construction constraints. We demonstrate the effectiveness of this framework by proposing two construction constraints whose enforcement produces covers that improve the prediction performance of random selection. Second, we provide theoretical bounds that quantify the probabilities of random selection to produce covers that meet the proposed construction criteria. 
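A brief sketch of the greedy approximation commonly used for such minimum set covering (SCP) formulations, here applied to k-labelsets; the label universe and candidate pool are illustrative, and the paper's additional inter-label correlation constraints are not modeled:

```python
from itertools import combinations

labels = set(range(6))
k = 3
candidates = [frozenset(c) for c in combinations(labels, k)]  # all k-labelsets

def greedy_cover(universe, sets_):
    # Classic greedy set-cover approximation (ln(n) guarantee): repeatedly
    # take the candidate covering the most still-uncovered labels.
    uncovered, cover = set(universe), []
    while uncovered:
        best = max(sets_, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover

print(greedy_cover(labels, candidates))   # e.g. two disjoint 3-labelsets
```

A single-label (or label-powerset) classifier would then be trained per selected labelset, mirroring the RAKEL-style ensemble the abstract describes.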
The experimental results indicate that the proposed methods improve multi-label classification accuracy and stability compared with the RAKEL algorithm and with other state-of-the-art algorithms."} {"_id": "6f33c49e983acf93e98bfa085de18ca489a27659", "title": "Sensor networks for medical care", "text": "Sensor networks have the potential to greatly impact many aspects of medical care. By outfitting patients with wireless, wearable vital sign sensors, collecting detailed real-time data on physiological status can be greatly simplified. However, there is a significant gap between existing sensor network systems and the needs of medical care. In particular, medical sensor networks must support multicast routing topologies, node mobility, a wide range of data rates and high degrees of reliability, and security. This paper describes our experiences with developing a combined hardware and software platform for medical sensor networks, called CodeBlue. CodeBlue provides protocols for device discovery and publish/subscribe multihop routing, as well as a simple query interface that is tailored for medical monitoring. We have developed several medical sensors based on the popular MicaZ and Telos mote designs, including a pulse oximeter, EKG and motion-activity sensor. We also describe a new, miniaturized sensor mote designed for medical use. We present initial results for the CodeBlue prototype demonstrating the integration of our medical sensors with the publish/subscribe routing substrate. We have experimentally validated the prototype on our 30-node sensor network testbed, demonstrating its scalability and robustness as the number of simultaneous queries, data rates, and transmitting sensors are varied. We also study the effect of node mobility, fairness across multiple simultaneous paths, and patterns of packet loss, confirming the system\u2019s ability to maintain stable routes despite variations in node location and"} {"_id": "4df65842a527e752bd487c180c368eec85f8b61b", "title": "Digital Forensic Analysis of SIM Cards", "text": "Smart cards are a fundamental technology in modern life. They are embedded in numerous devices such as GPS devices, ATM cards, mobile SIM cards and many others. Mobile devices have evolved to become smaller and faster, and to support large storage capacities. Digital forensics of mobile devices that may be found at a crime scene is becoming inevitable. The purpose of this research is to address the digital forensic analysis of SIM cards. It presents a sound forensic methodology and process for SIM card forensic examination. In particular, the main aim of the research is to answer the following research questions: (1) what forensic evidence could be extracted from a SIM card and (2) what are the limitations that may hinder a forensic"} {"_id": "167895bdf0f1ef88acc962e7a6f255ab92769485", "title": "Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems", "text": "To understand driving environments effectively, it is important to achieve accurate detection and classification of objects detected by sensor-based intelligent vehicle systems, both of which are significantly important tasks. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. For accurate object detection and classification, fusing information from multiple sensors into a key component of the representation and perception processes is necessary. 
In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers, such as 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are the CNN with five layers, which use more than two pre-trained convolutional layers to consider local to global features as data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated using object proposal generation to realize color flattening and semantic grouping for charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on a KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than the previous methods. Our proposed method extracted approximately 500 proposals on a 1226 \u00d7 370 image, whereas the original selective search method extracted approximately 10^6 \u00d7 n proposals. We obtained classification performance with 77.72% mean average precision over the entirety of the classes in the moderate detection level of the KITTI benchmark dataset."} {"_id": "c669d5efe471abcb3a28223845a318a562070cc8", "title": "A switching ringing suppression scheme of SiC MOSFET by Active Gate Drive", "text": "This paper proposes an Active Gate Drive (AGD) to reduce unwanted switching ringing of Silicon Carbide (SiC) MOSFET module with a rating of 120 A and 1200 V. While SiC MOSFET can be operated under high switching frequency and high temperature with very low power losses, one of the key challenges for SiC MOSFET is the electromagnetic interference (EMI) caused by steep switching transients and continuous switching ringing. Compared to Si MOSFET, the higher rate of SiC MOSFET drain current variation introduces worse EMI problems. To reduce EMI generated from the switching ringing, this paper investigates the causes of switching ringing by considering the combined impact of parasitic inductances, capacitances, and low circuit loop resistance. In addition, accurate mathematical expressions are established to explain the ringing behavior and quantitative analysis is carried out to investigate the relationship between the switching transient and gate drive voltage. Thereafter, an AGD method for mitigating SiC MOSFET switching ringing is presented. Substantially reduced switching ringing can be observed from circuit simulations. As a result, the EMI generation is mitigated."} {"_id": "9b618fa0cd834f7c4122c8e53539085e06922f8c", "title": "Adversarial Perturbations Against Deep Neural Networks for Malware Classification", "text": "Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs. These inputs are derived from regular inputs by minor yet carefully selected perturbations that deceive machine learning models into desired misclassifications. Existing work in this emerging field was largely specific to the domain of image classification, since the high entropy of images can be conveniently manipulated without changing the images\u2019 overall visual appearance. 
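For context, a minimal sketch of the gradient-sign perturbation style used in that image domain, applied to a stand-in logistic classifier; this is not the paper's malware-specific crafting method, which must instead respect discrete binary inputs and equivalent functional behavior:

```python
import numpy as np

rng = np.random.default_rng(2)
w, b = rng.normal(size=8), 0.1           # stand-in linear classifier
x = rng.uniform(size=8)                  # "image" features in [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For logistic regression with true class y=1, the loss gradient w.r.t.
# the input is (p - y) * w; a fast-gradient-style attack steps along
# its sign to push the prediction away from the true class.
p = sigmoid(w @ x + b)
grad_x = (p - 1.0) * w
eps = 0.1
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # confidence drops
```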
Yet, it remains unclear how such attacks translate to more security-sensitive applications such as malware detection, which may pose significant challenges in sample generation and arguably grave consequences for failure. In this paper, we show how to construct highly-effective adversarial sample crafting attacks for neural networks used as malware classifiers. The application domain of malware classification introduces additional constraints in the adversarial sample crafting problem when compared to the computer vision domain: (i) continuous, differentiable input domains are replaced by discrete, often binary inputs; and (ii) the loose condition of leaving visual appearance unchanged is replaced by requiring equivalent functional behavior. We demonstrate the feasibility of these attacks on many different instances of malware classifiers that we trained using the DREBIN Android malware data set. We furthermore evaluate to which extent potential defensive mechanisms against adversarial crafting can be leveraged to the setting of malware classification. While feature reduction did not prove to have a positive impact, distillation and re-training on adversarially crafted samples show promising results."} {"_id": "5e4deed61eaf561f2ef2a26f11ce32345ce64981", "title": "Lung nodules diagnosis based on evolutionary convolutional neural network", "text": "Lung cancer is the leading cause of cancer death among patients around the world, and it has one of the lowest survival rates after diagnosis. In this paper, we exploit a deep learning technique jointly with the genetic algorithm to classify lung nodules as malignant or benign, without computing the shape and texture features. The methodology was tested on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), with the best sensitivity of 94.66%, specificity of 95.14%, accuracy of 94.78% and area under the ROC curve of 0.949."} {"_id": "4cad1f0023e1bc904c3c615d791c1518dac8243f", "title": "Crop disease classification using texture analysis", "text": "With the Agriculture Sector being the backbone of not only a large number of industries but also society as a whole, there is a rising need to grow good quality crops which in turn will give a high yield. For this to happen, it is crucial to monitor the crops throughout their growth period. In this paper, image processing is used to detect and classify sunflower crop diseases based on the image of their leaf. The images are taken through a high resolution digital camera and after preprocessing, are subjected to k-means clustering to get the diseased part of the leaf. These are then run through various machine learning algorithms and classified based on their color and texture features. A comparison based on accuracy is made between various machine learning algorithms, namely K-Nearest Neighbors, Multi-Class Support Vector Machine, Naive Bayes and Multinomial Logistic Regression, to achieve maximum accuracy. The implementation has been done using MATLAB."} {"_id": "d30bf3722157c71938dc94419802239ef4e4e0db", "title": "Practical Private Set Intersection Protocols with Linear Complexity", "text": "The constantly increasing dependence on anytime-anywhere availability of data and the commensurately increasing fear of losing privacy motivate the need for privacy-preserving techniques. One interesting and common problem occurs when two parties need to privately compute an intersection of their respective sets of data. 
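To fix the functionality only, a toy hash-and-compare exchange is sketched below; it is deliberately not private, since hashes of guessable items can be brute-forced, which is exactly why dedicated PSI protocols replace the bare hashes with cryptographic constructions:

```python
import hashlib

def h(item: str) -> str:
    return hashlib.sha256(item.encode()).hexdigest()

alice = {"alice@x.com", "bob@x.com", "carol@x.com"}
bob = {"bob@x.com", "dave@x.com"}

# One side sends only hashes; the other matches them locally.
# NOT private for low-entropy inputs: an adversary can enumerate and
# hash candidate items, so this is only a functionality strawman.
bob_hashes = {h(e) for e in bob}
intersection = {e for e in alice if h(e) in bob_hashes}
print(intersection)   # {'bob@x.com'}
```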
In doing so, one or both parties must obtain the intersection (if one exists), while neither should learn anything about other set elements. Although prior work has yielded a number of effective and elegant Private Set Intersection (PSI) techniques, the quest for efficiency is still underway. This paper explores some PSI variations and constructs several secure protocols that are appreciably more efficient than the state-of-the-art."} {"_id": "26880494f79ae1e35ffee7f055cb0ad5693060c2", "title": "Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans", "text": "A mobile robot exploring an unknown environment has no absolute frame of reference for its position, other than features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is difficult to accurately detect and localize landmarks in the environment (such as corners and occlusions) from the range scans. In this paper, we develop two new iterative algorithms to register a range scan to a previous scan so as to compute relative robot positions in an unknown environment that avoid the above problems. The first algorithm is based on matching data points with tangent directions in two scans and minimizing a distance function in order to solve the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares problem to compute the relative pose of the two scans. Our methods work in curved environments and can handle partial occlusions by rejecting outliers."} {"_id": "2ca1fed0a117e9cccaad3f6bab5f4bed7b79a82c", "title": "Autonomous robot path planning in dynamic environment using a new optimization technique inspired by Bacterial Foraging technique", "text": "Path planning is one of the basic and interesting functions for a mobile robot. This paper explores the application of Bacterial Foraging Optimization to the problem of mobile robot navigation to determine the shortest feasible path from any current position to a target position in an unknown environment with moving obstacles. It develops a new algorithm based on the Bacterial Foraging Optimization (BFO) technique. This algorithm finds a path towards the target while avoiding the obstacles, using particles which are randomly distributed on a circle around the robot. The criteria on which it selects the best particle are the distance to the target and the Gaussian cost function of the particle. Then, a high-level decision strategy is used to make the selection and proceed to the result. It works on the local environment using a simple robot sensor, so it does not need to generate an additional map, which would add cost. Furthermore, it can be implemented without algorithm tuning or complex calculation. To simulate the algorithm, the program is written in C language and the environment is created with OpenGL. To test the efficiency of the proposed technique, results are compared with Basic Bacterial Foraging Optimization (BFO) and another well-known algorithm called Particle Swarm Optimization (PSO). 
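A hedged sketch of the candidate-selection step the abstract describes: particles sampled on a circle around the robot, scored by distance-to-target plus a Gaussian obstacle cost. The radius, weights and obstacle layout below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def next_waypoint(robot, goal, obstacles, r=1.0, n=24, sigma=0.8):
    # Candidate particles randomly distributed on a circle around the robot.
    ang = rng.uniform(0, 2 * np.pi, n)
    pts = robot + r * np.c_[np.cos(ang), np.sin(ang)]
    cost = np.linalg.norm(pts - goal, axis=1)       # distance-to-target term
    for ob in obstacles:                            # Gaussian repulsion term
        d2 = ((pts - ob) ** 2).sum(axis=1)
        cost += 3.0 * np.exp(-d2 / (2 * sigma ** 2))
    return pts[cost.argmin()]                       # best particle wins

robot, goal = np.array([0.0, 0.0]), np.array([8.0, 6.0])
obstacles = [np.array([3.0, 2.5])]
for _ in range(12):                                 # robot steps circle by circle
    robot = next_waypoint(robot, goal, obstacles)
print(np.round(robot, 2))                           # approaches the goal
```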
The experimental results show that the proposed method gives a better, or even optimal, path."} {"_id": "36fc5c9957bf34112614c23f0d5645c95d2d273a", "title": "Mobile Robot Path Planning with Randomly Moving Obstacles and Goal", "text": "This article presents the dynamic path planning for a mobile robot to track a randomly moving goal with avoidance of multiple randomly moving obstacles. The main feature of the developed scheme is its capability of dealing with the situation that the paths of both the goal and the obstacles are unknown a priori to the mobile robot. A new mathematical approach that is based on the concepts of 3-D geometry is proposed to generate the path of the mobile robot. The mobile robot decides its path in real time to avoid the randomly moving obstacles and to track the randomly moving goal. The developed scheme results in faster decision-making for successful goal tracking. 3-D simulations using MATLAB validate the developed scheme."} {"_id": "5d899ccc410877782e8a3c82592de997853f9b6a", "title": "Bacterium-inspired robots for environmental monitoring", "text": "Locating gradient sources and tracking them over time has important applications to environmental monitoring and studies of the ecosystem. We present an approach, inspired by bacterial chemotaxis, for robots to navigate to sources using gradient measurements and a simple actuation strategy (biasing a random walk). Extensive simulations show the efficacy of the approach in varied conditions including multiple sources, dissipative sources, and noisy sensors and actuators. We also show how such an approach could be used for boundary finding. We validate our approach by testing it on a small robot (the robomote) in a phototaxis experiment. A comparison of our approach with gradient descent shows that while gradient descent is faster, our approach is better suited for boundary coverage, and performs better in the presence of multiple and dissipative sources."} {"_id": "c8a04d0cbb9f70e86800b11b594c9a05d7b6bac0", "title": "Real-time obstacle avoidance for manipulators and mobile robots", "text": ""} {"_id": "9ccb021ea2da3de52a6c1e606798c0d4f4e27a59", "title": "Controlling electromagnetic fields.", "text": "Using the freedom of design that metamaterials provide, we show how electromagnetic fields can be redirected at will and propose a design strategy. The conserved fields (the electric displacement field D, the magnetic induction field B, and the Poynting vector S) are all displaced in a consistent manner. A simple illustration is given of the cloaking of a proscribed volume of space to exclude completely all electromagnetic fields. Our work has relevance to exotic lens design and to the cloaking of objects from electromagnetic fields."} {"_id": "ebfa81abbfabd8a3c546d1cd2405e55226619f61", "title": "Brain activation and sexual arousal in healthy, heterosexual males.", "text": "Despite the brain's central role in sexual function, little is known about relationships between brain activation and sexual response. In this study, we employed functional MRI (fMRI) to examine relationships between brain activation and sexual arousal in a group of young, healthy, heterosexual males. Each subject was exposed to two sequences of video material consisting of explicitly erotic (E), relaxing (R) and sports (S) segments in an unpredictable order. Data on penile turgidity was collected using a custom-built pneumatic pressure cuff. 
Both traditional block analyses using contrasts between sexually arousing and non-arousing video clips and a regression using penile turgidity as the covariate of interest were performed. In both types of analyses, contrast images were computed for each subject and these images were subsequently used in a random effects analysis. Strong activations specifically associated with penile turgidity were observed in the right subinsular region including the claustrum, left caudate and putamen, right middle occipital/middle temporal gyri, bilateral cingulate gyrus and right sensorimotor and pre-motor regions. Smaller, but significant activation was observed in the right hypothalamus. Few significant activations were found in the block analyses. Implications of the findings are discussed. Our study demonstrates the feasibility of examining brain activation/sexual response relationships in an fMRI environment and reveals a number of brain structures whose activation is time-locked to sexual arousal."} {"_id": "ef9ecdfa98eba6827ba0140981fd0c259a72c877", "title": "The Complexity of Relational Query Languages (Extended Abstract)", "text": "Two complexity measures for query languages are proposed. Data complexity is the complexity of evaluating a query in the language as a function of the size of the database, and expression complexity is the complexity of evaluating a query in the language as a function of the size of the expression defining the query. We study the data and expression complexity of logical languages - relational calculus and its extensions by transitive closure, fixpoint and second order existential quantification - and algebraic languages - relational algebra and its extensions by bounded and unbounded looping. The pattern which will be shown is that the expression complexity of the investigated languages is one exponential higher than their data complexity, and for both types of complexity we show completeness in some complexity class."} {"_id": "d42a093e98bb5003e7746aca7357320c8e5d7362", "title": "The emotional power of music: How music enhances the feeling of affective pictures", "text": "Music is an intriguing stimulus widely used in movies to increase the emotional experience. However, no brain imaging study has to date examined this enhancement effect using emotional pictures (the modality mostly used in emotion research) and musical excerpts. Therefore, we designed this functional magnetic resonance imaging study to explore how musical stimuli enhance the feeling of affective pictures. In a classical block design carefully controlling for habituation and order effects, we presented fearful and sad pictures (mostly taken from the IAPS) either alone or combined with congruent emotional musical excerpts (classical pieces). Subjective ratings clearly indicated that the emotional experience was markedly increased in the combined relative to the picture condition. Furthermore, using a second-level analysis and regions of interest approach, we observed a clear functional and structural dissociation between the combined and the picture condition. Besides increased activation in brain areas known to be involved in auditory as well as in neutral and emotional visual-auditory integration processes, the combined condition showed increased activation in many structures known to be involved in emotion processing (including for example amygdala, hippocampus, parahippocampus, insula, striatum, medial ventral frontal cortex, cerebellum, fusiform gyrus). 
In contrast, the picture condition only showed an activation increase in the cognitive part of the prefrontal cortex, mainly in the right dorsolateral prefrontal cortex. Based on these findings, we suggest that emotional pictures evoke a more cognitive mode of emotion perception, whereas congruent presentations of emotional visual and musical stimuli rather automatically evoke strong emotional feelings and experiences."} {"_id": "6248c7fea1d7e8e1f945d3a20abac540faa468a5", "title": "Low-Power and Area-Efficient Carry Select Adder", "text": "Carry Select Adder (CSLA) is one of the fastest adders used in many data-processing processors to perform fast arithmetic functions. From the structure of the CSLA, it is clear that there is scope for reducing the area and power consumption in the CSLA. This work uses a simple and efficient gate-level modification to significantly reduce the area and power of the CSLA. Based on this modification 8-, 16-, 32-, and 64-b square-root CSLA (SQRT CSLA) architectures have been developed and compared with the regular SQRT CSLA architecture. The proposed design has reduced area and power as compared with the regular SQRT CSLA with only a slight increase in the delay. This work evaluates the performance of the proposed designs in terms of delay, area, power, and their products by hand with logical effort and through custom design and layout in 0.18-\u03bcm CMOS process technology. The results analysis shows that the proposed CSLA structure is better than the regular SQRT CSLA."} {"_id": "d15a4fbfbc1180490387dc46c99d1ccd3ce93952", "title": "Digital Media and Symptoms of Attention-Deficit/Hyperactivity Disorder in Adolescents.", "text": "In many regards, this trial represents the best of evidence-based quality improvement efforts\u2014the improvement in health outcomes is often greater than what would have been expected from the individual components alone. An outcome of the trial by Wang et al is the proof that effective stroke interventions can be implemented in China where the burden of stroke is the highest in the world. Trials like this one leave a lasting legacy because the coaching and follow-up and the demonstration that data collection can lead to better outcomes with practice change will leave each of the intervention hospitals with a platform of good-quality stroke care and a mechanism to keep improving. Similar to the challenge of providing a novel drug to patients after a trial ends, a new dilemma that arises for the trialists is how to get the nonintervention hospitals up to the same level of care as the intervention hospitals. A major challenge is the sustainability of these kinds of interventions. Intensive quality improvement is just not practical for long periods; lower level embedded improvement needs to be built into the culture of a hospital, written into job descriptions, and identified as a deliverable in employment contracts to make continuous improvement possible and likely. In addition, the finer details of the intervention must be made available to others. A \u201cteach the teacher\u201d model may apply in distributing this kind of intervention widely. Overall, this randomized clinical trial by Wang et al has achieved its broad objective of both providing stroke evidence and implanting practice change in 20 hospitals. 
The challenge now is to sustain that level of quality improvement at these hospitals, and expand this approach to other centers providing care for patients with acute stroke."} {"_id": "ab39efd43f5998be90fc6f4136ffccf8e3a5745c", "title": "MobileDeepPill: A Small-Footprint Mobile Deep Learning System for Recognizing Unconstrained Pill Images", "text": "Correct identification of prescription pills based on their visual appearance is a key step required to assure patient safety and facilitate more effective patient care. With the availability of high-quality cameras and computational power on smartphones, it is possible and helpful to identify unknown prescription pills using smartphones. Towards this goal, in 2016, the U.S. National Library of Medicine (NLM) of the National Institutes of Health (NIH) announced a nationwide competition, calling for the creation of a mobile vision system that can recognize pills automatically from a mobile phone picture under unconstrained real-world settings. In this paper, we present the design and evaluation of such mobile pill image recognition system called MobileDeepPill. The development of MobileDeepPill involves three key innovations: a triplet loss function which attains invariances to real-world noisiness that deteriorates the quality of pill images taken by mobile phones; a multi-CNNs model that collectively captures the shape, color and imprints characteristics of the pills; and a Knowledge Distillation-based deep model compression framework that significantly reduces the size of the multi-CNNs model without deteriorating its recognition performance. Our deep learning-based pill image recognition algorithm wins the First Prize (champion) of the NIH NLM Pill Image Recognition Challenge. Given its promising performance, we believe MobileDeepPill helps NIH tackle a critical problem with significant societal impact and will benefit millions of healthcare personnel and the general public."} {"_id": "08fdba599c9998ae27458faca03b37d3bc51b6bb", "title": "Fast-and-Light Stochastic ADMM", "text": "The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size n. Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets."} {"_id": "6cc7eac71adaabae6a4cd3ce17d4e8d91f5ec249", "title": "Low-Cost Printed Chipless RFID Humidity Sensor Tag for Intelligent Packaging", "text": "This paper presents a fully-printed chipless radio frequency identification sensor tag for short-range item identification and humidity monitoring applications. The tag consists of two planar inductor-capacitor resonators operating wirelessly through inductive coupling. One resonator is used to encode ID data based on frequency spectrum signature, and another one works as a humidity sensor, utilizing a paper substrate as a sensing material. 
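As background for the sensing results discussed next, the tag's working principle is the shift of an inductor-capacitor resonance, f = 1/(2*pi*sqrt(L*C)), as moisture raises the paper substrate's permittivity and hence the capacitance C. The component values and the linear moisture model below are illustrative only, not measured values from the paper:

```python
import math

L = 2.2e-6          # inductance of the printed coil, henries (illustrative)
C0 = 1.0e-12        # dry-paper capacitance, farads (illustrative)

def f_res(C):
    # Resonant frequency of an ideal LC tank.
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Moisture uptake raises the paper's permittivity, so C grows with relative
# humidity (RH) and the resonant peak moves down in frequency; a reader
# tracks that peak to recover the humidity level.
for rh in (20, 40, 60, 70):
    C = C0 * (1.0 + 0.004 * (rh - 20))   # toy linear moisture model
    print(f"RH {rh}%  ->  {f_res(C) / 1e6:.1f} MHz")
```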
The sensing performances of three paper substrates, including commercial packaging paper, are investigated. The use of paper provides excellent sensitivity and reasonable response time to humidity. The cheap and robust packaging paper, particularly, exhibits the largest sensitivity over the relative humidity range from 20% to 70%, which offers the possibility of directly printing the sensor tag on traditional packages to make the package intelligent at ultralow cost."} {"_id": "7101ecc00f914204709c4f3a8099cc8b74a893d4", "title": "Making Decisions about Self-Disclosure in Online Social Networks", "text": "This paper explores privacy calculus decision making processes for online social networks (OSN). A content analysis method is applied to analyze data obtained from face-to-face interviews and an online survey with open-ended questions, covering 96 OSN users from different countries. The factors users considered before self-disclosing are explored. The perceived benefits and risks of using OSN and their impact on self-disclosure are also identified. We determine that the perceived risks of OSN usage hinder self-disclosure. It is not clear, however, whether the perceived benefits offset the impact of the risks on self-disclosure behavior. The findings as a whole do not support privacy calculus in OSN settings."} {"_id": "d532777f2386766e7c93c0bf4257d0a359e91f6b", "title": "Deep learning methods and applications - Classification of traffic signs and detection of Alzheimer's disease from images", "text": "In this thesis, the deep learning method convolutional neural networks (CNNs) has been used in an attempt to solve two classification problems, namely traffic sign recognition and Alzheimer\u2019s disease detection. The two datasets used are from the German Traffic Sign Recognition Benchmark (GTSRB) and the Alzheimer\u2019s Disease Neuroimaging Initiative (ADNI). The final test results on the traffic sign dataset generated a classification accuracy of 98.81 %, almost as high as human performance on the same dataset, 98.84 %. Different parameter settings of the selected CNN structure have also been tested in order to see their impact on the classification accuracy. Trying to distinguish between MRI images of healthy brains and brains afflicted with Alzheimer\u2019s disease achieved only about 65 % classification accuracy. These results show that the convolutional neural network approach is very promising for classifying traffic signs, but more work needs to be done when working with the more complex problem of detecting Alzheimer\u2019s disease."} {"_id": "5be063612ace72ea1389bd37de1af83a709123d7", "title": "Digital Clock and Data Recovery Circuits for Optical Links", "text": "Clock and Data Recovery (CDR) circuits perform the function of recovering clock and re-timing received data in optical links. These CDRs must be capable of tolerating large input jitter (high JTOL), filtering input jitter (low JTRAN with no jitter peaking) and, in burst-mode applications, phase locking in a very short time. In this paper, we elucidate these design tradeoffs and present various CDR architectures that can overcome them. Specifically, a D/PLL CDR architecture that achieves high JTOL, low JTRAN, and no jitter peaking is described. A new burst-mode CDR that can lock instantaneously while filtering input jitter is also discussed."} {"_id": "400203a4dbc493d663d36c1cb6c954826e64a43f", "title": "Can Structural Models Price Default Risk? 
Evidence from Bond and Credit Derivative Markets", "text": "Using a set of structural models, we evaluate the price of default protection for a sample of US corporations. In contrast to previous evidence from corporate bond data, CDS premia are not systematically underestimated. In fact, one of our studied models has little difficulty on average in predicting their level. For robustness, we perform the same exercise for bond spreads by the same issuers on the same trading date. As expected, bond spreads relative to the Treasury curve are systematically underestimated. This is not the case when the swap curve is used as a benchmark, suggesting that previously documented underestimation results may be sensitive to the choice of risk-free rate."} {"_id": "ed025ff76239202911e9f6c326b089a3e13dbb48", "title": "To ban or not to ban: Differences in mobile phone policies at elementary, middle, and high schools", "text": "The present study examined differences in mobile phone policies at elementary, middle and high schools. We surveyed 245 elementary, middle and high school teachers in Shenzhen, China, using a specially designed 18-item questionnaire. Teachers\u2019 responses indicate that, across elementary, middle and high schools, significant differences exist in (1) the percentage of students using mobile phones, (2) students\u2019 dependence on mobile phones, (3) the number of schools banning students\u2019 mobile phone use, (4) oral and written forms used by schools to ban students\u2019 mobile phone use, and (5) policy reinforcement strategies used by schools. However, no school-level differences were found in (1) students\u2019 fondness for using mobile phones, (2) teachers\u2019 assessment of low-level effectiveness of mobile phone policies, and (3) teachers\u2019 policy improvement recommendations. Significance and implications of the findings are discussed. \u00a9 2014 Elsevier Ltd. All rights reserved."} {"_id": "ba6ed4c4e295dd6df7317986cea73a7437c268e6", "title": "Modeling and yield estimation of SRAM sub-system for different capacities subjected to parametric variations", "text": "Process variations have become a major challenge with the advancement in CMOS technologies. The performance of memory sub-systems such as Static Random Access Memory (SRAM) is heavily dependent on these variations. Also, the VLSI industry requires the SRAM bit cell to qualify in the order of less than 0.1 ppb to achieve higher Yield (Y). This paper proposes an efficient qualitative statistical analysis and Yield estimation method of SRAM sub-system which considers deviations due to variations in process parameters in bit line differential and input offset of sense amplifier (SA) all together.
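To make the cell-to-array scaling concrete before the record's results below, here is a hedged sketch (assumptions mine: independent, identically distributed cell failures; the per-cell probability is invented, not the paper's) of how array-level failure probability and yield follow from a per-read-cycle cell failure probability.

```python
import math

def array_failure_prob(p_cell: float, n_cells: int) -> float:
    """P(at least one of n cells fails) = 1 - (1 - p)^n for independent cells.
    log1p/expm1 keep precision when p is tiny."""
    return -math.expm1(n_cells * math.log1p(-p_cell))

# Hypothetical per-cell read-failure probability, for illustration only.
p_cell = 7.5e-15
for label, n in (("64 cells", 64),
                 ("8 MB in bits", 8 * 2**20 * 8),
                 ("2 GB in bits", 2 * 2**30 * 8)):
    p_fail = array_failure_prob(p_cell, n)
    print(f"{label:>12}: P(fail) = {p_fail:.3e}, yield = {1 - p_fail:.10f}")
```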
The Yield of SRAM is predicted for different capacities of SRAM array by developing a statistical model of memory sub-system in 65 nm bulk CMOS technology. For the sub-system with 64 bit cells, it is estimated that the probability of failure is 4.802 \u00d7 10\u221213 in a read cycle of frequency 1 GHz. Furthermore, the probability of failure for 8MB capacity is 5.035 \u00d7 10\u22127 while for 2GB capacity it increases to 1.289 \u00d7 10\u22125. It is also observed that as the load on one SA per column is doubled, the probability of failure of the memory slice increases by 70%. The proposed technique estimates the Yield (Y) for the SRAM array to be more than 99.9999%."} {"_id": "4bace0a37589ec16246805d167c831343c9bd241", "title": "BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark", "text": "Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's datacenters is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires rethinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user.\n First, BigDebug's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BigDebug scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BigDebug supports debugging at interactive speeds with minimal performance impact."} {"_id": "9985a9d63630ced5ea495f387f65ef2ef9783cab", "title": "The effect of different types of corrective feedback on ESL student writing", "text": "Debate about the value of providing corrective feedback on L2 writing has been prominent in recent years as a result of Truscott\u2019s [Truscott, J. (1996). The case against grammar correction in L2 writing classes. Language Learning, 46, 327\u2013369] claim that it is both ineffective and harmful and should therefore be abandoned. A growing body of empirical research is now investigating the agenda proposed by Ferris [Ferris, D.R. (1999). The case for grammar correction in L2 writing classes. A response to Truscott (1996). Journal of Second Language Writing, 8, 1\u201310, Ferris, D.R. (2004). The \u2018\u2018Grammar Correction\u2019\u2019 debate in L2 writing: Where are we, and where do we go from here? (and what do we do in the meantime...?). Journal of Second Language Writing, 13, 49\u201362.].
Contributing to this research base, the study reported in this article investigated whether the type of feedback (direct, explicit written feedback and student\u2013researcher 5-minute individual conferences; direct, explicit written feedback only; no corrective feedback) given to 53 adult migrant students on three types of error (prepositions, the past simple tense, and the definite article) resulted in improved accuracy in new pieces of writing over a 12-week period. The study found a significant effect for the combination of written and conference feedback on accuracy levels in the use of the past simple tense and the definite article in new pieces of writing but no overall effect on accuracy improvement for feedback types when the three error categories were considered as a single group. Significant variations in accuracy across the four pieces of writing support earlier SLA discoveries that L2 learners, in the process of acquiring new linguistic forms, may perform them with accuracy on one occasion but fail to do so on other similar occasions. \u00a9 2005 Elsevier Inc. All rights reserved."} {"_id": "918e211ee67064d08e630207ea3471eeddfe9234", "title": "Gut microbes and the brain: paradigm shift in neuroscience.", "text": "The discovery of the size and complexity of the human microbiome has resulted in an ongoing reevaluation of many concepts of health and disease, including diseases affecting the CNS. A growing body of preclinical literature has demonstrated bidirectional signaling between the brain and the gut microbiome, involving multiple neurocrine and endocrine signaling mechanisms. While psychological and physical stressors can affect the composition and metabolic activity of the gut microbiota, experimental changes to the gut microbiome can affect emotional behavior and related brain systems. These findings have resulted in speculation that alterations in the gut microbiome may play a pathophysiological role in human brain diseases, including autism spectrum disorder, anxiety, depression, and chronic pain. Ongoing large-scale population-based studies of the gut microbiome and brain imaging studies looking at the effect of gut microbiome modulation on brain responses to emotion-related stimuli are seeking to validate these speculations. This article is a summary of emerging topics covered in a symposium and is not meant to be a comprehensive review of the subject."} {"_id": "4a506f4e00451ae883593a92adf92395dd765c3f", "title": "Running on the bare metal with GeekOS", "text": "Undergraduate operating systems courses are generally taught using one of two approaches: abstract or concrete. In the abstract approach, students learn the concepts underlying operating systems theory, and perhaps apply them using user-level threads in a host operating system. In the concrete approach, students apply concepts by working on a real operating system kernel. In the purest manifestation of the concrete approach, students implement operating system projects that run on real hardware.GeekOS is an instructional operating system kernel which runs on real hardware. It provides the minimum functionality needed to schedule threads and control essential devices on an x86 PC. On this foundation, we have developed projects in which students build processes, semaphores, a multilevel feedback scheduler, paged virtual memory, a filesystem, and inter-process communication. We use the Bochs emulator for ease of development and debugging.
While this approach (a tiny kernel run on an emulator) is not new, we believe GeekOS goes further towards the goal of combining realism and simplicity than previous systems have."} {"_id": "9d19e705e94173aaeae05103d403274c65628486", "title": "The Reorganization of Human Brain Networks Modulated by Driving Mental Fatigue", "text": "The organization of the brain functional network is associated with mental fatigue, but little is known about the brain network topology that is modulated by mental fatigue. In this study, we used the graph theory approach to investigate reconfiguration changes in functional networks of different electroencephalography (EEG) bands from 16 subjects performing a simulated driving task. Behavior and brain functional networks were compared between the normal and driving mental fatigue states. The scores of subjective self-reports indicated that 90 min of simulated driving induced mental fatigue. We observed that coherence was significantly increased in the frontal, central, and temporal brain regions. Furthermore, in the brain network topology metric, significant increases were observed in the clustering coefficient (Cp) for beta, alpha, and delta bands and the characteristic path length (Lp) for all EEG bands. The normalized measure \u03b3 showed significant increases in beta, alpha, and delta bands, and \u03bb showed similar patterns in beta and theta bands. These results indicate that mental fatigue can shift the functional network topology toward a more economic but less efficient configuration, which suggests low wiring costs in functional networks and disruption of the effective interactions between and across cortical regions during mental fatigue states. Graph theory analysis might be a useful tool for further understanding the neural mechanisms of driving mental fatigue."} {"_id": "a3ed4c33d1fcaa1303d7e03b298d3a50d719f0d1", "title": "Automatic Image Annotation using Deep Learning Representations", "text": "We propose simple and effective models for image annotation that make use of Convolutional Neural Network (CNN) features extracted from an image and word embedding vectors to represent their associated tags. Our first set of models is based on the Canonical Correlation Analysis (CCA) framework that helps in modeling both views - visual features (CNN feature) and textual features (word embedding vectors) of the data. Results on all three variants of the CCA models, namely linear CCA, kernel CCA and CCA with k-nearest neighbor (CCA-KNN) clustering, are reported. The best results are obtained using CCA-KNN which outperforms previous results on the Corel-5k and the ESP-Game datasets and achieves comparable results on the IAPRTC-12 dataset. In our experiments we evaluate CNN features in the existing models, which brings out their advantages over dozens of handcrafted features. We also demonstrate that word embedding vectors perform better than binary vectors as a representation of the tags associated with an image.
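A minimal sketch of the linear-CCA variant described above, using scikit-learn on random stand-in data (real use would substitute actual CNN features and tag word embeddings; the dimensions and the nearest-neighbour annotation step below are illustrative assumptions, not the paper's settings).

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Stand-ins for the two views: CNN image features and tag word embeddings.
n_images, d_img, d_txt = 200, 512, 300
X = rng.normal(size=(n_images, d_img))  # visual view (hypothetical)
Y = rng.normal(size=(n_images, d_txt))  # textual view (hypothetical)

cca = CCA(n_components=16, max_iter=1000)
cca.fit(X, Y)
X_c, Y_c = cca.transform(X, Y)  # both views projected into the shared space

# Annotate a query image by nearest training tag vectors in the common space,
# loosely in the spirit of the CCA-KNN variant mentioned above.
query = cca.transform(rng.normal(size=(1, d_img)))
dists = np.linalg.norm(Y_c - query, axis=1)
print("indices of nearest training tag vectors:", np.argsort(dists)[:5])
```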
In addition we compare the CCA model to a simple CNN-based linear regression model, which allows the CNN layers to be trained using back-propagation."} {"_id": "6497a69eebe8d57eee820572ab12b09de51a39d9", "title": "A round- and computation-efficient three-party authenticated key exchange protocol", "text": "In three-party authenticated key exchange protocols, each client shares a secret only with a trusted server, which assists in generating a session key used for securely sending messages between two communicating clients. Compared with two-party authenticated key exchange protocols, where each pair of parties must share a secret with each other, a three-party protocol does not cause any key management problem for the parties. In the literature, three main issues in three-party authenticated key exchange protocols are discussed that need further improvement: (1) to reduce latency, communication steps in the protocol should be as parallel as possible; (2) as the existence of a security-sensitive table on the server side may cause the server to become compromised, the table should be removed; (3) resources required for computation should be as few as possible to avoid the protocol becoming an efficiency bottleneck. In various applications over networks, a quick response is required, especially by light-weight clients in mobile e-commerce. In this paper, a round- and computation-efficient three-party authenticated key exchange protocol is proposed which fulfils all of the above-mentioned requirements. \u00a9 2007 Elsevier Inc. All rights reserved."} {"_id": "170438cf3438c21124dbc05df33ae0d10847c3af", "title": "From eye movements to actions: how batsmen hit the ball", "text": "In cricket, a batsman watches a fast bowler's ball come toward him at a high and unpredictable speed, bouncing off a ground of uncertain hardness. Although he views the trajectory for little more than half a second, he can accurately judge where and when the ball will reach him. Batsmen's eye movements monitor the moment when the ball is released, make a predictive saccade to the place where they expect it to hit the ground, wait for it to bounce, and follow its trajectory for 100\u2013200 ms after the bounce. We show how information provided by these fixations may allow precise prediction of the ball's timing and placement. Comparing players with different skill levels, we found that a short latency for the first saccade distinguished good from poor batsmen, and that a cricket player's eye movement strategy contributes to his skill in the game."} {"_id": "8804402bd9bd1013d1a67f0f9fb26a9c678b6c78", "title": "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies", "text": "[Abstract text not recoverable: the extracted field contained only mis-encoded author and affiliation front matter, apparently listing S. Hochreiter (Technische Universit\u00e4t M\u00fcnchen), Y. Bengio (Universit\u00e9 de Montr\u00e9al), P. Frasconi (Universit\u00e0 di Firenze), and J. Schmidhuber (IDSIA, Lugano).]"} {"_id": "c9ab6adcf149c740b03ef3759260719af4f7ce07", "title": "Edinburgh's Phrase-based Machine Translation Systems for WMT-14", "text": "This paper describes the University of Edinburgh\u2019s (UEDIN) phrase-based submissions to the translation and medical translation shared tasks of the 2014 Workshop on Statistical Machine Translation (WMT). We participated in all language pairs. We have improved upon our 2013 system by i) using generalized representations, specifically automatic word clusters for translations out of English, ii) using unsupervised character-based models to translate unknown words in Russian-English and Hindi-English pairs, iii) synthesizing Hindi data from closely-related Urdu data, and iv) building huge language models on the common crawl corpus."} {"_id": "7c935d804de6efe1546783427ce51e54723f2640", "title": "Extended Bandwidth Instantaneous Current Sharing Scheme for Parallel UPS Systems", "text": "This paper investigates the suitability of the instantaneous average current sharing (IACS) scheme for the parallel operation of three-phase uninterruptible power supply systems. A discrete-time model is developed for the analysis and design of the control loops. Some key issues are discussed based on the model, and it is found that there is a compromise between system stability and current sharing performance with the conventional IACS scheme when the lengths of interconnecting cables are not negligible. Subsequently, an improved IACS scheme is proposed that ensures proper current sharing by extending the closed-loop bandwidth. Its performance is analytically predicted and subsequently validated by experimental results on a 15-kW laboratory prototype."} {"_id": "3a065f1e9e50503c09e7b174bb8a6f6fb280ebbe", "title": "AccessMiner: using system-centric models for malware protection", "text": "Models based on system calls are a popular and common approach to characterize the run-time behavior of programs. For example, system calls are used by intrusion detection systems to detect software exploits. As another example, policies based on system calls are used to sandbox applications or to enforce access control.
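To ground the discussion, here is a toy version of the sequence-based detectors this record goes on to evaluate (the traces and the "unseen n-gram" scoring rule are my own simplifications, not AccessMiner's actual models):

```python
from collections import Counter

def ngrams(trace, n=3):
    """All length-n windows of a system-call trace."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

# Hypothetical benign traces (sequences of syscall names).
benign = [
    ["open", "read", "read", "close", "open", "write", "close"],
    ["open", "read", "write", "close", "stat", "open", "read", "close"],
]
model = Counter(g for t in benign for g in ngrams(t))

def anomaly_score(trace, n=3):
    """Fraction of the trace's n-grams never seen in benign training data."""
    grams = ngrams(trace, n)
    unseen = sum(1 for g in grams if g not in model)
    return unseen / max(1, len(grams))

suspicious = ["open", "read", "ptrace", "write", "connect", "write", "close"]
print(f"anomaly score: {anomaly_score(suspicious):.2f}")  # high => flag
```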
Given that malware represents a significant security threat for today's computing infrastructure, it is not surprising that system calls were also proposed to distinguish between benign processes and malicious code.\n Most proposed malware detectors that use system calls follow a program-centric analysis approach. That is, they build models based on specific behaviors of individual applications. Unfortunately, it is not clear how well these models generalize, especially when exposed to a diverse set of previously-unseen, real-world applications that operate on realistic inputs. This is particularly problematic as most previous work has used only a small set of programs to measure their technique's false positive rate. Moreover, these programs were run for a short time, often by the authors themselves.\n In this paper, we study the diversity of system calls by performing a large-scale collection (compared to previous efforts) of system calls on hosts that run applications for regular users on actual inputs. Our analysis of the data demonstrates that simple malware detectors, such as those based on system call sequences, face significant challenges in such environments. To address the limitations of program-centric approaches, we propose an alternative detection model that characterizes the general interactions between benign programs and the operating system (OS). More precisely, our system-centric approach models the way in which benign programs access OS resources (such as files and registry entries). Our experiments demonstrate that this approach captures well the behavior of benign programs and raises very few (even zero) false positives while being able to detect a significant fraction of today's malware."} {"_id": "63bad67d3016837dd2ecb7ad46ee8d0d007aa1e6", "title": "How do people solve the \"weather prediction\" task?: individual variability in strategies for probabilistic category learning.", "text": "Probabilistic category learning is often assumed to be an incrementally learned cognitive skill, dependent on nondeclarative memory systems. One paradigm in particular, the weather prediction task, has been used in over half a dozen neuropsychological and neuroimaging studies to date. Because of the growing interest in using this task and others like it as behavioral tools for studying the cognitive neuroscience of cognitive skill learning, it becomes especially important to understand how subjects solve this kind of task and whether all subjects learn it in the same way. We present here new experimental and theoretical analyses of the weather prediction task that indicate that there are at least three different strategies that describe how subjects learn this task. (1) An optimal multi-cue strategy, in which they respond to each pattern on the basis of associations of all four cues with each outcome; (2) a one-cue strategy, in which they respond on the basis of presence or absence of a single cue, disregarding all other cues; or (3) a singleton strategy, in which they learn only about the four patterns that have only one cue present and all others absent. This variability in how subjects approach this task may have important implications for interpreting how different brain regions are involved in probabilistic category learning."} {"_id": "05132e44451e92c40ffdea569ccb260c10d6a30d", "title": "RISK FACTORS FOR INJURY TO WOMEN FROM DOMESTIC VIOLENCE", "text": "ABSTRACT Background Domestic violence is the most common cause of nonfatal injury to women in the United States.
To identify risk factors for such injuries, we examined the socioeconomic and behavioral characteristics of women who were victims of domestic violence and the men who injured them. Methods We conducted a case\u2013control study at eight large, university-affiliated emergency departments. The 256 intentionally injured women had acute injuries resulting from a physical assault by a male partner. The 659 controls were women treated for other conditions in the emergency department. Information was collected with a standardized questionnaire; no information was obtained directly from the male partners. Results The 256 intentionally injured women had a total of 434 contusions and abrasions, 89 lacerations, and 41 fractures and dislocations. In a multivariate analysis, the characteristics of the partners that were most closely associated with an increased risk of inflicting injury as a result of domestic violence were alcohol abuse (adjusted relative risk, 3.6; 95 percent confidence interval, 2.2 to 5.9); drug use (adjusted relative risk, 3.5; 95 percent confidence interval, 2.0 to 6.4); intermittent employment (adjusted relative risk, 3.1; 95 percent confidence interval, 1.1 to 8.8); recent unemployment (adjusted relative risk, 2.7; 95 percent confidence interval, 1.2 to 6.5); having less than a high-school education (adjusted relative risk, 2.5; 95 percent confidence interval, 1.4 to 4.4); and being a former husband, estranged husband, or former boyfriend (adjusted relative risk, 3.5; 95 percent confidence interval, 1.5 to 8.3). Conclusions Women at greatest risk for injury from domestic violence include those with male partners who abuse alcohol or use drugs, are unemployed or intermittently employed, have less than a high-school education, and are former husbands, estranged husbands, or former boyfriends of the women. (N Engl J Med 1999;341:1892-8.)"} {"_id": "61dc8de84e0f4aab21a03833aeadcefa87d6d4e5", "title": "Private and Accurate Data Aggregation against Dishonest Nodes", "text": "Privacy-preserving data aggregation in ad hoc networks is a challenging problem, considering the distributed communication and control requirement, dynamic network topology, unreliable communication links, etc. The difficulty is exacerbated when there exist dishonest nodes, and how to ensure privacy, accuracy, and robustness against dishonest nodes remains an open issue. Different from the widely used cryptographic approaches, in this paper, we address this challenging problem by exploiting the distributed consensus technique. We first propose a secure consensus-based data aggregation (SCDA) algorithm that guarantees an accurate sum aggregation while preserving the privacy of sensitive data. Then, to mitigate the pollution from dishonest nodes, we propose an Enhanced SCDA (E-SCDA) algorithm that allows neighbors to detect dishonest nodes, and derive the error bound when there are undetectable dishonest nodes. We prove the convergence of both SCDA and E-SCDA. We also prove that the proposed algorithms are (\u01eb, \u03c3)-data-private, and obtain the mathematical relationship between \u01eb and \u03c3.
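For intuition, below is the standard (non-private) distributed average-consensus iteration that consensus-based aggregation schemes of this kind build on; the ring topology, step size, and values are illustrative assumptions, and the paper's privacy-preserving perturbations and dishonest-node defenses are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Ring topology: each node exchanges values with its two neighbours.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0

x = rng.uniform(0, 100, size=n)  # private initial values, one per node
target = x.mean()
eps = 0.2  # step size; must be small enough for stability on this graph

for _ in range(200):
    # x_i <- x_i + eps * sum_j a_ij (x_j - x_i): the consensus update.
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(f"true mean {target:.4f}, node values {np.round(x, 4)}")
# All nodes converge to the average without any central aggregator.
```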
Extensive simulations have shown that the proposed algorithms have high accuracy and low complexity, and they are robust against network dynamics and dishonest nodes."} {"_id": "388a5acf4d701ac8ba4b55985702aea9c3ae4ffb", "title": "Predicting student academic performance in an engineering dynamics course: A comparison of four types of predictive mathematical models", "text": "Predicting student academic performance has long been an important research topic in many academic disciplines. The present study is the first study that develops and compares four types of mathematical models to predict student academic performance in engineering dynamics \u2013 a high-enrollment, high-impact, and core course that many engineering undergraduates are required to take. The four types of mathematical models include the multiple linear regression model, the multilayer perceptron network model, the radial basis function network model, and the support vector machine model. The inputs (i.e., predictor variables) of the models include student\u2019s cumulative GPA, grades earned in four pre-requisite courses (statics, calculus I, calculus II, and physics), and scores on three dynamics mid-term exams (i.e., the exams given to students during the semester and before the final exam). The output of the models is students\u2019 scores on the dynamics final comprehensive exam. A total of 2907 data points were collected from 323 undergraduates in four semesters. Based on the four types of mathematical models and six different combinations of predictor variables, a total of 24 predictive mathematical models were developed from the present study. The analysis reveals that the type of mathematical model has only a slight effect on the average prediction accuracy (APA, which indicates on average how well a model predicts the final exam scores of all students in the dynamics course) and on the percentage of accurate predictions (PAP, which is calculated as the number of accurate predictions divided by the total number of predictions). The combination of predictor variables has only a slight effect on the APA, but a profound effect on the PAP. In general, the support vector machine models have the highest PAP as compared to the other three types of mathematical models. The research findings from the present study imply that if the goal of the instructor is to predict the average academic performance of his/her dynamics class as a whole, the instructor should choose the simplest mathematical model, which is the multiple linear regression model, with student\u2019s cumulative GPA as the only predictor variable. Adding more predictor variables does not help improve the average prediction accuracy of any mathematical model. However, if the goal of the instructor is to predict the academic performance of individual students, the instructor should use the support vector machine model with the first six predictor variables as the inputs of the model, because this particular predictor combination increases the percentage of accurate predictions, and most importantly, allows sufficient time for the instructor to implement subsequent educational interventions to improve student learning. \u00a9 2012 Elsevier Ltd. All rights reserved."} {"_id": "581c8ef54a77a58ebfb354426d16b0dfab596cb7", "title": "Elephant herding optimization algorithm for support vector machine parameters tuning", "text": "Classification is part of various applications and is an important problem that represents an active research topic.
The support vector machine is one of the most widely used and very powerful classifiers. The accuracy of the support vector machine highly depends on its learning parameters. Optimal parameters can be efficiently determined by using swarm intelligence algorithms. In this paper, we propose the recent elephant herding optimization algorithm for support vector machine parameter tuning. The proposed approach is tested on standard datasets and compared to other approaches from the literature. The results of computational experiments show that our proposed algorithm outperformed genetic algorithms and grid search in terms of classification accuracy; a simplified sketch of this tune-by-swarm idea follows below."} {"_id": "4a1351ad68e5af7d578143eeb93e21b5463602eb", "title": "New generation waveform approaches for 5G and beyond", "text": "For 5G and beyond cellular communication systems, new waveform development studies have been continuing worldwide. Solutions for providing high throughput, high data rates, low latency, and large numbers of terminals, together with support for multiple antenna systems and the related problems originating from these issues, are playing an important role in the development of 5G and new waveform technologies. OFDM, which has many important advantages, continues to be included in various standards. However, because of its disadvantages, OFDM will gradually become insufficient for dynamic spectrum access and cognitive radio applications in new generation communication systems. Additionally, OFDM is not suitable for asynchronous networks and radios, which we expect to see commonly in the future. In this study, some potential candidate 5G waveforms are examined and various comparisons are carried out. In addition, assessments of future hybrid solutions are presented."} {"_id": "04afb63292337d96da0bb2089334f86aa67fa9eb", "title": "Very-low-profile, stand-alone, tri-band WLAN antenna design for laptop-tablet computer with complete metal-backed cover", "text": "A very-low-profile, tri-band, planar inverted-F antenna (PIFA) with a height of only 3.7 mm for wireless local area network (WLAN) applications in the 2.4 GHz (2400\u223c2484 MHz), 5.2 GHz (5150\u223c5350 MHz), and 5.8 GHz (5725\u223c5825 MHz) bands for a tablet-laptop computer with a metal back cover is presented. The antenna was made of a flat plate of size 18.7 mm \u00d7 32.5 mm and bent into a compact structure with the dimensions 3.7 mm \u00d7 7 mm \u00d7 32.5 mm. The PIFA comprised a radiating top patch, a feeding/shorting portion, and an antenna ground plane. The top patch was placed horizontally above the ground with the feeding/shorting portion connecting perpendicularly therebetween. The antenna was designed in a way that the radiating patch consisted of three branches, in which each branch controlled the corresponding resonant mode in the 2.4/5.2/5.8 GHz bands."} {"_id": "f9daf4bc3574c5a49105b8d4ab233928f39f3696", "title": "GRAVITY-ASSIST: A series elastic body weight support system with inertia compensation", "text": "We present GRAVITY-ASSIST, a series elastic active body weight support and inertia compensation system for use in robot-assisted gait rehabilitation. The device consists of a single degree of freedom series elastic actuator that connects to the trunk of a patient. The series elastic system is novel in that it can provide the desired level of dynamic unloading such that the patient experiences only a percentage of his/her weight and inertia.
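Referring back to the elephant herding optimization record above: the sketch below is a generic population-based search over (C, gamma) with cross-validated accuracy as the fitness. It is a hedged stand-in that captures the tune-by-swarm idea only; the clan and matriarch update rules of actual EHO are not implemented here.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = load_iris(return_X_y=True)

def fitness(log_c, log_gamma):
    """5-fold cross-validated accuracy for one (C, gamma) candidate."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

# Tiny population-based search in log10 space (simplified stand-in for EHO).
low, high = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
pop = rng.uniform(low=low, high=high, size=(10, 2))
best, best_fit = None, -np.inf
for _ in range(15):
    fits = np.array([fitness(c, g) for c, g in pop])
    if fits.max() > best_fit:
        best_fit, best = fits.max(), pop[fits.argmax()].copy()
    # Pull each candidate toward the best so far, plus random exploration.
    pop = pop + 0.5 * (best - pop) + rng.normal(scale=0.2, size=pop.shape)
    pop = np.clip(pop, low, high)

print(f"best CV accuracy {best_fit:.3f} at "
      f"C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}")
```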
Inertia compensation is important, since the inertial forces can cause significant deviations from the desired unloading force, especially at low support forces and fast walking speeds. Furthermore, this feature enables the inertia of the harness and force sensing unit attached to the patient to be compensated for, making sure that the device does not interfere with the natural gait cycle. We present a functional prototype of the device, its characterization and experimental verification of the approach."} {"_id": "bda6aa67d0aaf6ec07d0946244b1563bedc5f861", "title": "A Framework for Information Systems Architecture", "text": "With increasing size and complexity of the implementations of information systems, it is necessary to use some logical construct (or architecture) for defining and controlling the interfaces and the integration of all of the components of the system. This paper defines information systems architecture by creating a descriptive framework from disciplines quite independent of information systems, then by analogy specifies information systems architecture based upon the neutral, objective framework. Also, some preliminary conclusions about the implications of the resultant descriptive framework are drawn. The discussion is limited to architecture and does not include a strategic planning methodology. The subject of information systems architecture is beginning to receive considerable attention. The increased scope of design and levels of complexity of information systems implementations are forcing the use of some logical construct (or architecture) for defining and controlling the interfaces and the integration of all of the components of the system. Thirty years ago this issue was not at all significant because the technology itself did not provide for either breadth in scope or depth in complexity in information systems. The inherent limitations of the then-available 4K machines, for example, constrained design and necessitated suboptimal approaches for automating a business. Current technology is rapidly removing both conceptual and financial constraints. It is not hard to speculate about, if not realize, very large, very complex systems implementations, extending in scope and complexity to encompass an entire enterprise. One can readily delineate the merits of the large, complex, enterprise-oriented approaches. Such systems allow flexibility in managing business changes and coherency in the management of business resources. However, there also is merit in the more traditional, smaller, suboptimal systems design approach. Such systems are relatively economical, quickly implemented, and easier to design and manage. In either case, since the technology permits "distributing" large amounts of computing facilities in small packages to remote locations, some kind of structure (or architecture) is imperative because decentralization without structure is chaos. Therefore, to keep the business from disintegrating, the concept of information systems architecture is becoming less an option and more a necessity for establishing some order and control in the investment of information systems resources. The cost involved and the success of the business depending increasingly on its information systems require a disciplined approach to the management of those systems.
On the assumption that an understanding of information systems architecture \u2026"} {"_id": "ab53ca2fcb009c97da29988b29fbacb5cbe4ed31", "title": "Reading Comprehension with Deep Learning", "text": "We train a model that combines attention with multi-perspective matching to perform question answering. For each question and context pair in SQuAD, we perform an attention calculation over each context before extracting features of the question and context, matching them from multiple perspectives. Whilst we did not have time to perform a hyper-parameter search or incorporate other features into our model, this joint model obtained an F1 score of 22.5 and an EM score of 13.0, outperforming our baseline model without attention, which attained an F1 score of 19.4 and an EM score of 10.0. In the future, we would like to implement an early stopping feature and a co-dependent representation of each question and context pair."} {"_id": "90c3a783271e61adc2e140976d3a3e6a0f23f193", "title": "Mapping moods: Geo-mapped sentiment analysis during hurricane sandy", "text": "Sentiment analysis has been widely researched in the domain of online review sites with the aim of generating summarized opinions of product users about different aspects of the products. However, there has been little work focusing on identifying the polarity of sentiments expressed by users during disaster events. Identifying sentiments expressed by users in an online social networking site can help understand the dynamics of the network, e.g., the main users\u2019 concerns, panics, and the emotional impacts of interactions among members. Data produced through social networking sites is seen as ubiquitous, rapid and accessible, and it is believed to empower average citizens to become more situationally aware during disasters and coordinate to help themselves. In this work, we perform sentiment classification of user posts on Twitter during Hurricane Sandy and visualize these sentiments on a geographical map centered around the hurricane. We show how users' sentiments change not only with users\u2019 locations but also with their distance from the disaster."} {"_id": "ed8c54d05f1a9d37df2c0587c2ca4d4c849cb267", "title": "AUTOMATIC BLOOD VESSEL SEGMENTATION IN COLOR IMAGES OF RETINA", "text": "Automated image processing techniques have the ability to assist in the early detection of diabetic retinopathy disease which can be regarded as a manifestation of diabetes on the retina. Blood vessel segmentation is the basic foundation while developing retinal screening systems, since vessels serve as one of the main retinal landmark features. This paper proposes an automated method for identification of blood vessels in color images of the retina. For every image pixel, a feature vector is computed that utilizes properties of scale and orientation selective Gabor filters. The extracted features are then classified using generative Gaussian mixture model and discriminative support vector machine classifiers. Experimental results demonstrate that the area under the receiver operating characteristic (ROC) curve reached a value of 0.974, which is highly comparable and, to some extent, higher than the previously reported ROCs that range from 0.787 to 0.961. Moreover, this method gives a sensitivity of 96.50% with a specificity of 97.10% for identification of blood vessels.
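As an illustration of the scale- and orientation-selective Gabor features just described, here is a hedged sketch with scikit-image (the patch, frequencies, and orientations are arbitrary choices of mine, not the paper's settings):

```python
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a green-channel retinal patch

# Per-pixel feature: maximum Gabor response over orientations, at a few
# scales -- one feature map per frequency, as in scale/orientation banks.
feature_maps = []
for frequency in (0.1, 0.2, 0.3):
    responses = [
        gabor(image, frequency=frequency, theta=theta)[0]  # real part
        for theta in np.linspace(0, np.pi, 8, endpoint=False)
    ]
    feature_maps.append(np.max(np.stack(responses), axis=0))

feature_vectors = np.stack(feature_maps, axis=-1).reshape(-1, len(feature_maps))
print(feature_vectors.shape)  # (4096, 3): one feature vector per pixel
# These vectors would then go to a GMM or SVM classifier, per the record.
```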
Keywords: Retinal blood vessels, Gabor filters, support vector machines, vessel segmentation"} {"_id": "8f3eb3fea1f7b5a609730592c035f672e47a170c", "title": "Cubical Type Theory: A Constructive Interpretation of the Univalence Axiom", "text": "This paper presents a type theory in which it is possible to directly manipulate n-dimensional cubes (points, lines, squares, cubes, etc.) based on an interpretation of dependent type theory in a cubical set model. This enables new ways to reason about identity types, for instance, function extensionality is directly provable in the system. Further, Voevodsky\u2019s univalence axiom is provable in this system. We also explain an extension with some higher inductive types like the circle and propositional truncation. Finally we provide semantics for this cubical type theory in a constructive meta-theory. 1998 ACM Subject Classification F.3.2 Logics and Meanings of Programs: Semantics of Programming Languages, F.4.1 Mathematical Logic and Formal Languages: Mathematical Logic"} {"_id": "de7170f66996f444905bc37745ac3fa1ca760fed", "title": "A review of wind power and wind speed forecasting methods with different time horizons", "text": "In recent years, environmental considerations have prompted the use of wind power as a renewable energy resource. However, the biggest challenge in integrating wind power into the electric grid is its intermittency. One approach to deal with wind intermittency is forecasting future values of wind power production. Thus, several wind power or wind speed forecasting methods have been reported in the literature over the past few years. This paper provides insight into the foremost forecasting techniques, associated with wind power and speed, based on numeric weather prediction (NWP), statistical approaches, artificial neural network (ANN) and hybrid techniques over different time-scales. A comparative overview of the various available forecasting techniques is provided as well. In addition, this paper places emphasis on the major challenges and problems associated with wind power prediction."} {"_id": "dbde4f47efed72cbb99f412a9a4c17fe39fa04fc", "title": "What does it take to generate natural textures", "text": "Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters."} {"_id": "c62e6bebdef5d71961384db654e9b99d68a42d39", "title": "Investigating Social Entrepreneurship in Developing Countries", "text": "Social entrepreneurship has drawn interest from global policy makers and social entrepreneurs to target developing countries. Generally, not-for-profit organizations, funded by government and donor grants, have played a significant role in poverty alleviation. We argue that, by applying entrepreneurial concepts, organizations can create social value, hence mitigate poverty.
This is a theoretical paper that builds upon a multidimensional model in analysing how three social enterprises from India and Kenya create social value to address social problems. The findings suggest that whilst the social mission is central to all these organizations, they also create social value through innovation and pro-activeness. Additionally, the cultural and political environmental contexts hinder their attempt to create social value. Building networks and partnerships to achieve social value creation is vital for these organizations. Policy makers should devise policies that would assist social enterprises to achieve development goals."} {"_id": "60737e8bd196d5fdc1f027d25b395253b31ec30f", "title": "Crowd-Squared: Amplifying the Predictive Power of Search Trend Data", "text": "Table A1. Search Terms Generated by the Crowd. (a) Influenza \u2013 Crowd-Squared search terms and % of Turkers mentioning each term: sick 68%; fever 39%; cough 21%; cold 16%; shot 13%; vomit 11%; influenza 4%; coughing 4%; germs 4%; headache 4%; ache 4%; runny nose 4%; achy 2%; weak 2%; bad 2%; shots 2%; body aches 2%; flu shot 2%; fatigue 1%; home 1%; hospital 1%; nyquil 1%; season 1%; tea 1%."} {"_id": "0bc6d36aee2e3c4fd035f7a9b43a9a36cf29c938", "title": "Language Modeling with Feedforward Neural Networks", "text": "In this paper, we describe the use of feedforward neural networks to improve the term-distance term-occurrence (TDTO) language model, previously proposed in [1]\u2212[3]. The main idea behind the TDTO model proposition is to model separately both position and occurrence information of words in the history-context to better estimate n-gram probabilities. Neural networks have been shown to offer a better generalization property than other conventional smoothing methods. We take advantage of this property for a better smoothing mechanism for the TDTO model, referred to as the continuous space TDTO (cTDTO). The newly proposed model has reported an improved perplexity over the baseline TDTO model of up to 9.2%, at a history length of ten, as evaluated on the Wall Street Journal (WSJ) corpus. Also, in the Aurora-4 speech recognition N-best re-ranking task, the cTDTO outperformed the TDTO model by reducing the word error rate (WER) by up to 12.9% relative."} {"_id": "351878d1facaff88169860aafca5199340cf8891", "title": "Diagnosis, prevention, and treatment of catheter-associated urinary tract infection in adults: 2009 International Clinical Practice Guidelines from the Infectious Diseases Society of America.", "text": "Guidelines for the diagnosis, prevention, and management of persons with catheter-associated urinary tract infection (CA-UTI), both symptomatic and asymptomatic, were prepared by an Expert Panel of the Infectious Diseases Society of America. The evidence-based guidelines encompass diagnostic criteria, strategies to reduce the risk of CA-UTIs, strategies that have not been found to reduce the incidence of urinary infections, and management strategies for patients with catheter-associated asymptomatic bacteriuria or symptomatic urinary tract infection.
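As a side note to the cTDTO language-model record above: the continuous-space idea can be sketched as a tiny feedforward n-gram language model. The model below is untrained and randomly initialised; it illustrates neural smoothing generically, not the TDTO position/occurrence decomposition itself, and every dimension choice is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "the", "cat", "sat", "</s>"]
V, d, h, n_hist = len(vocab), 16, 32, 2  # vocab, embedding, hidden, history

# Parameters of a tiny feedforward n-gram LM (randomly initialised sketch).
E = rng.normal(scale=0.1, size=(V, d))            # word embeddings
W1 = rng.normal(scale=0.1, size=(n_hist * d, h))  # history -> hidden
W2 = rng.normal(scale=0.1, size=(h, V))           # hidden -> vocab logits

def next_word_probs(history_ids):
    """P(w | history): concatenate embeddings -> tanh layer -> softmax."""
    x = E[history_ids].reshape(-1)
    logits = np.tanh(x @ W1) @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()

p = next_word_probs([vocab.index("<s>"), vocab.index("the")])
print({w: round(float(pi), 3) for w, pi in zip(vocab, p)})
```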
These guidelines are intended for use by physicians in all medical specialties who perform direct patient care, with an emphasis on the care of patients in hospitals and long-term care facilities."} {"_id": "4ee9ccc200ed7db73fd23ac3e5a2a0deaf12ebab", "title": "A New Clustering Method Based On Type-2 Fuzzy Similarity and Inclusion Measures", "text": "Similarity and inclusion measures between type-2 fuzzy sets have a wide range of applications. New similarity and inclusion measures between type-2 fuzzy sets are respectively defined in this paper. The properties of the measures are discussed. Some examples are used to compare the proposed measures with the existing results. Numerical results show that the proposed measures are more reasonable. Similarity measures and Yang and Shih\u2019s algorithm are combined as a clustering method for type-2 fuzzy data. Clustering results demonstrate that the proposed similarity measures are better than the existing measures."} {"_id": "1918d5662d327f4524bc6c0b228f5f58784723c0", "title": "Temperature insensitive current reference circuit using standard CMOS devices", "text": "In this paper, a temperature insensitive current reference circuit is proposed. The reference current value is determined by using a threshold voltage controlled circuit. The main difference from the previous work (Georgiou and Toumazou, 2002) is that the circuit can be fabricated by the standard CMOS process. The resistor and the transistor physical parameter temperature dependences are compensated with each other to determine the output reference current. The detailed temperature performance is analyzed, and is evaluated by simulations."} {"_id": "87708f9d0168776227ad537e65c2fc45774e6fa4", "title": "Computational higher-dimensional type theory", "text": "Formal constructive type theory has proved to be an effective language for mechanized proof. By avoiding non-constructive principles, such as the law of the excluded middle, type theory admits sharper proofs and broader interpretations of results. From a computer science perspective, interest in type theory arises from its applications to programming languages. Standard constructive type theories used in mechanization admit computational interpretations based on meta-mathematical normalization theorems. These proofs are notoriously brittle; any change to the theory potentially invalidates its computational meaning. As a case in point, Voevodsky's univalence axiom raises questions about the computational meaning of proofs. \n We consider the question: Can higher-dimensional type theory be construed as a programming language? We answer this question affirmatively by providing a direct, deterministic operational interpretation for a representative higher-dimensional dependent type theory with higher inductive types and an instance of univalence. Rather than being a formal type theory defined by rules, it is instead a computational type theory in the sense of Martin-L\u00f6f's meaning explanations and of the NuPRL semantics. The definition of the type theory starts with programs; types are specifications of program behavior. The main result is a canonicity theorem stating that closed programs of boolean type evaluate to true or false."} {"_id": "b00d2790c1de04096021f19a10a58402bcee2fc4", "title": "Computational Higher Type Theory IV: Inductive Types", "text": "This is the fourth in a series of papers extending Martin-L\u00f6f\u2019s meaning explanation of dependent type theory to higher-dimensional types.
In this installment, we show how to define cubical type systems supporting a general schema of indexed cubical inductive types whose constructors may take dimension parameters and have a specified boundary. Using this schema, we are able to specify and implement many of the higher inductive types which have been postulated in homotopy type theory, including homotopy pushouts, the torus, W-quotients, truncations, and arbitrary localizations. By including indexed inductive types, we enable the definition of identity types. The addition of higher inductive types makes computational higher type theory a model of homotopy type theory, capable of interpreting almost all of the constructions in the HoTT Book [40] (with the exception of inductive-inductive types). This is the first such model with an explicit canonicity theorem, which specifies the canonical values of higher inductive types and confirms that every term in an inductive type evaluates to such a value."} {"_id": "14cd735eada88905c3db3c16216112e214efc4a3", "title": "A Model of Type Theory in Cubical Sets", "text": "We present a model of type theory with dependent product, sum, and identity, in cubical sets. We describe a universe and explain how to transform an equivalence between two types into an equality. We also explain how to model propositional truncation and the circle. While not expressed internally in type theory, the model is expressed in a constructive metalogic. Thus it is a step towards a computational interpretation of Voevodsky\u2019s Univalence Axiom."} {"_id": "868589060f4ad5a9c8f067a5ad428cd2626bccf4", "title": "A Monolithic CMOS-MEMS 3-Axis Accelerometer With a Low-Noise, Low-Power Dual-Chopper Amplifier", "text": "This paper reports a monolithically integrated CMOS-MEMS three-axis capacitive accelerometer with a single proof mass. An improved DRIE post-CMOS MEMS process has been developed, which provides robust single-crystal silicon (SCS) structures in all three axes and greatly reduces undercut of comb fingers. The sensing electrodes are also composed of the thick SCS layer, resulting in high resolution and large sensing capacitance. Due to the high wiring flexibility provided by the fabrication process, fully differential capacitive sensing and common-centroid configurations are realized in all three axes. A low-noise, low-power dual-chopper amplifier is designed for each axis, which consumes only 1 mW power. With 44.5 dB on-chip amplification, the measured sensitivities of x-, y-, and z-axis accelerometers are 520 mV/g, 460 mV/g, and 320 mV/g, respectively, which can be tuned by simply changing the amplitude of the modulation signal. Accordingly, the overall noise floors of the x-, y-, and z-axes are 12 \u03bcg/\u221aHz, 14 \u03bcg/\u221aHz, and 110 \u03bcg/\u221aHz, respectively, when tested at around 200 Hz."} {"_id": "18774ff073d775bac3baf6680b1a8226b9113317", "title": "Cyber security of smart grids modeled through epidemic models in cellular automata", "text": "Due to their distributed management, smart grids can be vulnerable to malicious attacks that undermine their cyber security. An adversary can take control of a few nodes in the network and spread digital attacks like an infection, whose diffusion is facilitated by the lack of centralized supervision within the smart grid. In this paper, we propose to investigate these phenomena by means of epidemic models applied to cellular automata.
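Before the findings below, a minimal SIR-style cellular automaton in the spirit of this record (grid size, infection and recovery probabilities are invented for illustration; a real smart-grid topology would replace the square lattice):

```python
import numpy as np

rng = np.random.default_rng(7)
S, I, R = 0, 1, 2
n, beta, gamma, steps = 30, 0.3, 0.1, 50  # grid side, infect/recover probs

grid = np.zeros((n, n), dtype=int)
grid[n // 2, n // 2] = I  # a single initially compromised node

for _ in range(steps):
    # Count infected von Neumann neighbours of every cell (wrap-around grid).
    inf = (grid == I).astype(int)
    neighbours = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0) +
                  np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
    # Each infected neighbour independently transmits with probability beta.
    infect = (grid == S) & (rng.random((n, n)) < 1 - (1 - beta) ** neighbours)
    recover = (grid == I) & (rng.random((n, n)) < gamma)
    grid[infect] = I
    grid[recover] = R

print("infected:", int((grid == I).sum()), "recovered:", int((grid == R).sum()))
# With the beta/gamma ratio (a proxy for R0) below a threshold, the
# outbreak stays local instead of sweeping the whole lattice.
```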
We show that the common key parameters of epidemic models, such as the basic reproductive ratio, are also useful in this context to understand the extent of the grid portion that can be compromised. At the same time, the lack of mobility of individuals limits the spreading of the infection. In particular, we evaluate the role of the grid connectivity degree in both containing the epidemics and avoiding its spreading over the entire network, and also in increasing the number of nodes that do not come into contact with the cyber attacks."} {"_id": "09b6ea6bf73f4f9e67624fdcca69758755c28b6e", "title": "A review of overview+detail, zooming, and focus+context interfaces", "text": "There are many interface schemes that allow users to work at, and move between, focused and contextual views of a dataset. We review and categorize these schemes according to the interface mechanisms used to separate and blend views. The four approaches are overview+detail, which uses a spatial separation between focused and contextual views; zooming, which uses a temporal separation; focus+context, which minimizes the seam between views by displaying the focus within the context; and cue-based techniques which selectively highlight or suppress items within the information space. Critical features of these categories, and empirical evidence of their success, are discussed. The aim is to provide a succinct summary of the state-of-the-art, to illuminate both successful and unsuccessful interface strategies, and to identify potentially fruitful areas for further work."} {"_id": "5fe5c28e98d2da910237629cc039ac88ad6c25bf", "title": "An Experiment of Influences of Facebook Posts in Other Users", "text": "People usually present behavior similar to that of their friends due to the social influence that is produced by group pressure. This factor is also present in online social networks such as Facebook. In this paper, we study the social influence produced by the contents published on Facebook. First, we executed a systematic literature review of the previous works to better understand this field of research. Then, by executing the Asch experiment, we illustrate how such social influence can change the behavior of users on Facebook. Through this experiment, we could identify how the social influence type called \u201cconformity\u201d is present in online social network platforms and how this influence can make changes in the behavior of people."} {"_id": "23947c3cb843274423a8132f30dc30034006ca66", "title": "Use of the stair vision library within the ISPRS 2D semantic labeling benchmark (Vaihingen)", "text": "This report describes experiments conducted using the multi-class image classification framework implemented in the stair vision library (SVL, (Gould et al., 2008)) in the context of the ISPRS 2D semantic labeling benchmark. The motivation was to get results from a well-established and publicly available software (Gould, 2014), as a kind of baseline. Besides the use of features implemented in the SVL, which makes use of three-channel images, assuming RGB, we also included features derived from the height model and the NDVI, which is specific here, because the benchmark dataset provides surface models and CIR images. Another point of interest concerned the impact the segmentation had on the overall result.
To this end, a pre-study was performed in which different parameters for the graph-based segmentation method introduced by Felzenszwalb and Huttenlocher (2004) were tested; in addition, we applied a simple chessboard segmentation. Other experiments focused on the question of whether the conditional random field classification approach helps to enhance the overall performance. The official evaluation of all experiments described here is available at http://www2.isprs.org/vaihingen-2d-semantic-labeling-contest.html (SVL_1 to SVL_6). The normalized height models are available through the ResearchGate profile of the author (http://www.researchgate.net/profile/Markus_Gerke)"} {"_id": "a1e45d02c98d76c81b7a801c746765c0b5faf4d2", "title": "Rollable Multisegment Dielectric Elastomer Minimum Energy Structures for a Deployable Microsatellite Gripper", "text": "Debris in space presents an ever-increasing problem for spacecraft in Earth orbit. As a step in the mitigation of this issue, the CleanSpace One (CSO) microsatellite has been proposed. Its mission is to perform active debris removal of a decommissioned nanosatellite (the CubeSat SwissCube). An important aspect of this project is the development of the gripper system that will entrap the capture target. We present the development of rollable dielectric elastomer minimum energy structures (DEMES) as the main component of CSO's deployable gripper. DEMES consist of a prestretched dielectric elastomer actuator membrane bonded to a flexible frame. The actuator finds equilibrium in bending when the prestretch is released, and the bending angle can be changed by the application of a voltage bias. The inherent flexibility and lightweight nature of the DEMES enable the gripper to be stored in a rolled-up state prior to deployment. We fabricated proof-of-concept actuators of three different geometries using a robust and repeatable fabrication methodology. The resulting actuators were mechanically resilient to external deformation, and displayed conformability to objects of varying shapes and sizes. Actuator mass is less than 0.65 g, and all the actuators presented survived the rolling-up and subsequent deployment process. Our devices demonstrate a maximum change of bending angle of more than 60\u00b0 and a maximum gripping (reaction) force of 2.2 mN for a single actuator."} {"_id": "ecdbf3e6aae8f11caef57fff9a76a3b5d4095d10", "title": "MSc THESIS Hardware Acceleration of BWA-MEM Genome Mapping Application", "text": "Next Generation Sequencing technologies have had a tremendous impact on our understanding of DNA and its role in living organisms. The cost of DNA sequencing has decreased drastically over the past decade, leading the way for personalised genomics. Sequencers provide millions of fragments of DNA (termed short reads), which have to be aligned against a reference genome to reconstruct the full information of the DNA molecule under study. Processing short reads requires large computational resources, with many specialised computing platforms now being used to accelerate software aligners. We take up the challenge of accelerating a well-known sequence alignment tool called the Burrows-Wheeler Aligner on the Convey Hybrid Computing platform, which has FPGAs as co-processors. The focus of the research is to accelerate the BWA-MEM algorithm of the Burrows-Wheeler Aligner on the Convey HC-2 platform. The implementation was carried out using the Vivado HLS tool. 
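The Felzenszwalb-Huttenlocher parameter pre-study mentioned above is easy to reproduce in outline with off-the-shelf tools. This sketch is not the SVL pipeline; it assumes scikit-image's implementation of the method and a stock RGB image in place of the Vaihingen tiles, sweeping the main parameters alongside a trivial chessboard segmentation for comparison.

```python
import numpy as np
from skimage import data, segmentation

image = data.astronaut()          # stand-in for an RGB benchmark tile

# Sweep the main parameters of the graph-based method: scale (k),
# Gaussian smoothing sigma, and the minimum segment size.
for scale in (50, 100, 300):
    for sigma in (0.5, 0.8):
        labels = segmentation.felzenszwalb(image, scale=scale,
                                           sigma=sigma, min_size=50)
        print(f"scale={scale} sigma={sigma}: {labels.max() + 1} segments")

# The simple chessboard segmentation: fixed square blocks, no image content.
h, w = image.shape[:2]
block = 32
chess = ((np.arange(h)[:, None] // block) * (w // block + 1)
         + np.arange(w)[None, :] // block)
print("chessboard segments:", chess.max() + 1)
```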
Two architectures are proposed to overcome the memory bottleneck of the application: the Base architecture and the Batch architecture. Simulations were performed for the intended platform, and it was found that the Batch architecture is 18% faster than the Base architecture for reads with similar run-time characteristics. The architectures provide possibilities for further pipelining and the implementation of more cores, which is expected to provide better performance than the current implementation."} {"_id": "7e799292e855ea1d1c5c7502f284702363a327ab", "title": "Performance Gaps between OpenMP and OpenCL for Multi-core CPUs", "text": "OpenCL and OpenMP are the most commonly used programming models for multi-core processors. They are also fundamentally different in their approach to parallelization. In this paper, we focus on comparing the performance of OpenCL and OpenMP. We select three applications from the Rodinia benchmark suite (which provides equivalent OpenMP and OpenCL implementations), and carry out experiments with different datasets on three multi-core platforms. We see that the incorrect usage of the multi-core CPUs, the inherent OpenCL fine-grained parallelism, and the immature OpenCL compilers are the main reasons for OpenCL's poorer performance. After tuning the OpenCL versions to be more CPU-friendly, we show that OpenCL either outperforms OpenMP or achieves similar performance in more than 80% of the cases. Therefore, we believe that OpenCL is a good alternative for multi-core CPU programming."} {"_id": "a0daf4760ef9094fb20256a6397d2a79f1f8de85", "title": "Alternative placement of bispectral index electrode for monitoring depth of anesthesia during neurosurgery.", "text": "In neurosurgery in particular, the recommended placement of electrodes for monitoring depth of anesthesia during surgery sometimes conflicts with the surgical site or patient positioning. Therefore, we proposed this study to evaluate the agreement and correlation of bispectral index values recorded from the usual frontal area and the alternate, post-auricular areas in neurosurgery patients. Thirty-four patients scheduled for neurosurgery under general anesthesia were included. Bispectral index (BIS) sensors were placed at both the frontal and post-auricular areas. The anesthesia given was clinically adjusted according to the frontal (standard) BIS reading. The BIS values and impedance were recorded; Pearson's correlation and Bland-Altman plots were analyzed. The bias \u00b1 2SD for the electrode placement before, during, and post-anesthesia were 0 \u00b1 23.32, 1.5 \u00b1 10.69, and 2.1 \u00b1 13.52, while the limits of agreement were -23.3 to 23.3, -12.2 to 9.2, and -17.7 to 13.5, respectively. The correlation coefficient between frontal- and post-auricular-area electrodes was 0.74 with a p-value < 0.001. The post-auricular placement of a BIS electrode is a practical alternative to frontal lobe placement. Nevertheless, proper electrode location is important to minimize error."} {"_id": "831e37389a3dec5287f865beb2de77732aeca8d4", "title": "A survey on virtual machine migration and server consolidation frameworks for cloud data centers", "text": "Modern Cloud Data Centers exploit virtualization for efficient resource management to reduce cloud computational cost and energy budget. 
Virtualization, empowered by virtual machine (VM) migration, meets the ever-increasing demands of dynamic workload by relocating VMs within Cloud Data Centers. VM migration helps successfully achieve various resource management objectives such as load balancing, power management, fault tolerance, and system maintenance. However, being resource-intensive, the VM migration process rigorously affects application performance unless attended by smart optimization methods. Furthermore, a Cloud Data Center exploits server consolidation and DVFS methods to optimize energy consumption. This paper reviews state-of-the-art bandwidth optimization schemes, server consolidation frameworks, DVFS-enabled power optimization, and storage optimization methods over WAN links. Through a meticulous literature review of state-of-the-art live VM migration schemes, thematic taxonomies are proposed to categorize the reported literature. The critical aspects of virtual machine migration schemes are investigated through a comprehensive analysis of the existing schemes. The commonalities and differences among existing VM migration schemes are highlighted through a set of parameters derived from the literature. Finally, open research issues and trends in the VM migration domain that necessitate further consideration to develop optimal VM migration schemes are highlighted."} {"_id": "605f6e6f1039f5a633515254b9b49c9a1252292b", "title": "Assessing SMS and PJD Schemes of Anti-Islanding with Varying Quality Factor", "text": "When a particular zone of the distribution network containing an embedded generator is disconnected from the main supply grid, yet the generator continues to operate with normal voltage and frequency to feed power to the isolated section, islanding is said to occur. In a distributed power system, islanding may occur at different possible zones consisting of distribution feeders, substations and voltage levels, as long as the isolated zone is found to operate independently of the main grid but remains energized by the distributed generation unit. To ensure safety and reliability, it is necessary to detect the islanding condition as quickly as possible by using anti-islanding algorithms. Among the available islanding prevention methods, slip mode frequency shift (SMS) and phase jump detection (PJD) schemes are very common and have numerous advantages over others. This paper presents a detailed discussion and comparison of the characteristics of the various anti-islanding schemes. Both SMS and PJD techniques are examined in detail through design and simulation. In the islanding situation, with the quality factor in the range 0.1 \u2264 Qf \u2264 10, load voltage waveforms are analyzed and plotted. Both methods are assessed in terms of detection times with different Qf's. The results comply with IEEE standard specifications and show that the two developed algorithms could prevent islanding more consistently."} {"_id": "52ba137ece8735a31a71792e7b06da838bc173b4", "title": "Adaptive Image Registration via Hierarchical Voronoi Subdivision", "text": "Advances in image acquisition systems have made it possible to capture high-resolution images of a scene, recording considerable scene detail. With increased resolution comes increased image size and geometric difference between multiview images, complicating image registration. Through Voronoi subdivision, we subdivide large images into small corresponding regions, and by registering small regions, we register the images in a piecewise manner. 
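To make the SMS assessment above concrete, here is a toy phase-balance simulation, not the paper's design: each cycle, the island frequency is nudged in proportion to the mismatch between the inverter's SMS phase curve and the phase required by a parallel RLC load of quality factor Qf. The curve parameters (10 degrees maximum shift at a 3 Hz offset), the gain, and the 0.5 Hz detection window are all illustrative assumptions.

```python
import numpy as np

def sms_phase(f, fg=60.0, fm=63.0, theta_m=np.deg2rad(10.0)):
    # Assumed SMS curve: zero phase at nominal fg, maximum theta_m at fm.
    return theta_m * np.sin(0.5 * np.pi * (f - fg) / (fm - fg))

def load_phase(f, f0=60.0, qf=2.5):
    # Phase the inverter must supply for a parallel RLC load resonant at f0.
    return np.arctan(qf * (f / f0 - f0 / f))

def detection_time(qf, f0=60.0, gain=1.0, window=0.5, max_cycles=5000):
    """Drift the frequency by the per-cycle phase mismatch until it leaves
    the +/- window; returns seconds, or None for non-detection."""
    f = f0 + 0.01                        # perturbation at the islanding instant
    for cycle in range(max_cycles):
        f += gain * (sms_phase(f) - load_phase(f, f0, qf))
        if abs(f - f0) > window:
            return cycle / f0            # roughly f0 cycles per second
    return None

for qf in (0.1, 1.0, 2.5, 5.0, 10.0):
    print(f"Qf={qf}: detection time {detection_time(qf)}")
```

In this toy model the drift is a positive feedback only while the SMS curve is steeper than the load phase curve at the nominal frequency; with the parameters assumed here, loads with roughly Qf > 2.7 are pulled back to the nominal frequency, illustrating the non-detection zone that motivates assessing the schemes over a range of quality factors.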
Image subdivision reduces the geometric difference between the regions being registered and simplifies the correspondence process. The proposed method is hierarchical. While previous methods use the same block size and shape at each level of the hierarchy, the proposed method adapts the block size and shape to the local image details and the geometric difference between the images. This adaptation makes it possible to keep the geometric difference between corresponding regions small, simplifying the correspondence process. Implementation details of the proposed image registration method are provided, and experimental results on various types of images are presented and analyzed."} {"_id": "ccab66ca5d4c7cbd38929161897f94a12593ec0d", "title": "Augmented Human: Augmented Reality and Beyond", "text": "Will Augmented Reality (AR) allow us to access digital information, experience others' stories, and thus explore alternate realities? AR has recently attracted attention again due to the rapid advances in related multimedia technologies as well as various glass-type AR display devices. However, in order to widely adopt AR in alternate realities, it is necessary to improve various core technologies and integrate them into an AR platform. Especially, there are several remaining technical challenges such as 1) real-time recognition and tracking of multiple objects while generating an environment map, 2) organic user interface with awareness of the user's implicit needs, intentions or emotion as well as explicit requests, 3) immersive multimodal content augmentation and just-in-time information visualization, 4) multimodal interaction and collaboration during augmented telecommunication, etc. In addition, in order to encourage user engagement and enable an AR ecosystem, AR standards should be established that support creating AR content, capturing user experiences, and sharing the captured experiences."} {"_id": "59628a3c8a960baddf412d05c1b0af8f4ffeb8fc", "title": "Result Diversification in Automatic Citation Recommendation", "text": "The increase in the number of published papers each year makes manual literature search inefficient and, furthermore, insufficient. Hence, automated reference/citation recommendation has been of interest in the last 3-4 decades. Unfortunately, some of the developed approaches, such as keyword-based ones, are prone to ambiguity and synonymy. On the other hand, using the citation information does not suffer from the same problems, since it does not consider textual similarity. Today, obtaining the desired information is as hard as looking for a needle in a haystack. And sometimes, we want that small haystack, e.g., a small result set containing only a few recommendations, to cover all the important and relevant parts of the literature. That is, the set should be diversified enough. Here, we investigate the problem of result diversification in automatic citation recommendation. We enhance existing techniques, which were designed to recommend a set of citations with satisfactory quality and diversity, with direction-awareness to allow the users to reach either old, well-cited, well-known research papers or recent, less-known ones. We also propose some novel techniques for better result diversification. 
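A minimal flavour of such piecewise registration can be had with off-the-shelf phase correlation. This sketch is not the paper's method: it uses fixed rectangular blocks and translation-only estimates in place of the paper's adaptive Voronoi regions, with the block size and the idea of splitting high-error blocks as assumed simplifications.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def piecewise_shifts(ref, mov, block=64):
    """One translation estimate per block; registering small regions keeps
    the local geometric difference small, as in the piecewise scheme above."""
    h, w = ref.shape
    shifts = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            s, err, _ = phase_cross_correlation(ref[y:y + block, x:x + block],
                                                mov[y:y + block, x:x + block])
            shifts[(y, x)] = (s, err)
    return shifts

# Synthetic check: a globally shifted image should yield the same shift in
# every block; blocks with a high err would be the candidates for further
# subdivision in an adaptive, hierarchical variant.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
mov = np.roll(ref, (3, -5), axis=(0, 1))
for (y, x), (s, err) in list(piecewise_shifts(ref, mov).items())[:3]:
    print((y, x), s, round(float(err), 3))
```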
Experimental results show that our techniques are very useful in automatic citation recommendation."} {"_id": "7d6c3786bd582e62d9f321add5cb733a58d9a23c", "title": "Hull-form optimization in calm and rough water", "text": "The paper presents a formal methodology for hull form optimization in calm and rough water, using wash waves and selected dynamic responses, respectively. Parametric hull form modeling is used to generate the variant hull forms with some of the form parameters modified, which are evaluated in the optimization scheme based on evolutionary strategies. A Rankine-source panel method and strip theories are used for the hydrodynamic evaluation. The methodology is implemented in the optimization of a double-chine, planing hull form. Furthermore, a dual-stage optimization strategy is applied to a modern fast displacement ferry. The effect of the selected optimization parameters is presented and discussed."} {"_id": "aba441e7a7ff828bfdae69b8683bab2d75de125d", "title": "NewsVallum: Semantics-Aware Text and Image Processing for Fake News Detection system", "text": "As a consequence of the social revolution we faced on the Web, the news and information we enjoy daily may come from different and diverse sources, which are not necessarily the traditional ones such as newspapers, either in their paper or online version, television, radio, etc. Everyone on the Web is allowed to produce and share news, which can soon become viral if it spreads through the new media channels represented by social networks. This freedom in producing and sharing news comes with a counter-effect: the proliferation of fake news. Unfortunately, fake news can be very effective and may influence people and, more generally, public opinion. We propose a combined approach of natural language and image processing that takes into account the semantics encoded within both the text and images accompanying a news item, together with contextual information that may help in classifying the news as fake or not."} {"_id": "42f73a6160c4bce389c62f381e5a77bd6db23670", "title": "Supply chain risk management in French companies", "text": ""} {"_id": "740844739cd791e9784c4fc843beb9174ed0b487", "title": "Bringing contextual information to google speech recognition", "text": "In automatic speech recognition on mobile devices, very often what a user says strongly depends on the particular context he or she is in. The n-grams relevant to the context are often not known in advance. The context can depend on, for example, particular dialog state, options presented to the user, conversation topic, location, etc. Speech recognition of sentences that include these n-grams can be challenging, as they are often not well represented in a language model (LM) or even include out-of-vocabulary (OOV) words. In this paper, we propose a solution for using contextual information to improve speech recognition accuracy. We utilize an on-the-fly rescoring mechanism to adjust the LM weights of a small set of n-grams relevant to the particular context during speech decoding. Our solution handles out-of-vocabulary words. It also addresses efficient combination of multiple sources of context, and it even allows biasing class-based language models. We show significant speech recognition accuracy improvements on several datasets, using various types of contexts, without negatively impacting the overall system. 
The improvements are obtained in both offline and live experiments."} {"_id": "3c15822455bcf4d824d6f877ec1324d76ba294e1", "title": "SOFIA: An automated security oracle for black-box testing of SQL-injection vulnerabilities", "text": "Security testing is a pivotal activity in engineering secure software. It consists of two phases: generating attack inputs to test the system, and assessing whether test executions expose any vulnerabilities. The latter phase is known as the security oracle problem. \n In this work, we present SOFIA, a Security Oracle for SQL-Injection Vulnerabilities. SOFIA is programming-language and source-code independent, and can be used with various attack generation tools. Moreover, because it does not rely on known attacks for learning, SOFIA is meant to also detect types of SQLi attacks that might be unknown at learning time. The oracle challenge is recast as a one-class classification problem where we learn to characterise legitimate SQL statements to accurately distinguish them from SQLi attack statements. \n We have carried out an experimental validation on six applications, two of which are large and widely used. SOFIA was used to detect real SQLi vulnerabilities with inputs generated by three attack generation tools. The obtained results show that SOFIA is computationally fast and achieves a recall rate of 100% (i.e., missing no attacks) with a low false positive rate (0.6%)."} {"_id": "b2697a162b57c72146c512c5aa01ef12e8cdc90c", "title": "Crisis prevention: how to gear up your board.", "text": "Today's critics of corporate boardrooms have plenty of ammunition. The two crucial responsibilities of boards--oversight of long-term company strategy and the selection, evaluation, and compensation of top management--were reduced to damage control during the 1980s. Walter Salmon, a longtime director, notes that while boards have improved since he began serving on them in 1961, they haven't kept pace with the need for real change. Based on over 30 years of boardroom experience, Salmon recommends against government reform of board practices. But he does prescribe a series of incremental changes as a remedy. To begin with, he suggests limiting the size of boards and increasing the number of outside directors on them. In fact, according to Salmon, only three insiders belong on a board: the CEO, the COO, and the CFO. Changing how committees function is also necessary for gearing up today's boards. The audit committee, for example, can periodically review \"high-exposure areas\" of a business, perhaps helping to prevent embarrassing drops in future profits. Compensation committees can structure incentive compensation for executives to emphasize long-term rather than short-term performance. And nominating committees should be responsible for finding new, independent directors--not the CEO. In general, boards as a whole must spot problems early and blow the whistle, exercising what Salmon calls \"constructive dissatisfaction.\" On a revitalized board, directors have enough confidence in the process to vigorously challenge one another, including the company's chief executive."} {"_id": "b20f9d47cf3134214e9d04bf09788e70383fa3cb", "title": "Integrated CMOS transmit-receive switch using LC-tuned substrate bias for 2.4-GHz and 5.2-GHz applications", "text": "CMOS transmit-receive (T/R) switches have been integrated in a 0.18-\u03bcm standard CMOS technology for wireless applications at 2.4 and 5.2 GHz. 
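The one-class formulation described in the SOFIA abstract above can be sketched with a generic one-class classifier. This is not SOFIA itself, which characterises SQL statements with its own representation; the sketch substitutes character n-gram features and scikit-learn's OneClassSVM, trained only on legitimate statements, with all example statements and parameters invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import OneClassSVM

# Train only on legitimate statements, as in the one-class setup above.
legit = [
    "SELECT name FROM users WHERE id = 42",
    "SELECT name FROM users WHERE id = 7",
    "SELECT email FROM users WHERE id = 99",
    "UPDATE users SET email = 'a@b.com' WHERE id = 42",
    "INSERT INTO logs (msg) VALUES ('login ok')",
]
oracle = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"),
)
oracle.fit(legit)

tests = [
    "SELECT name FROM users WHERE id = 13",              # legitimate variant
    "SELECT name FROM users WHERE id = 1 OR '1'='1' --",  # classic SQLi shape
]
print(oracle.predict(tests))  # +1 = accepted as legitimate, -1 = flagged
```

Learning a frontier around legitimate statements, rather than known attacks, is what lets an oracle of this kind flag attack shapes never seen during training.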
This switch design achieves low loss and high linearity by increasing the substrate impedance of a MOSFET at the frequency of operation using a properly tuned LC tank. The switch design is asymmetric to accommodate the different linearity and isolation requirements in the transmit and receive modes. In the transmit mode, the switch exhibits 1.5-dB insertion loss, a 28-dBm power 1-dB compression point (P1dB), and 30-dB isolation, at 2.4 and 5.2 GHz. In the receive mode, the switch achieves 1.6-dB insertion loss, an 11.5-dBm P1dB, and 15-dB isolation, at 2.4 and 5.2 GHz. The linearity obtained in the transmit mode is the highest reported to date in a standard CMOS process. The switch passes the 4-kV Human Body Model electrostatic discharge test. These results show that the switch design is suitable for narrow-band applications requiring a moderate-to-high transmitter power level (<1 W)."} {"_id": "b49ba7bd01a9c3627efb78489a2e536fdf5ef2f3", "title": "Computational analysis of the safe zone for the antegrade lag screw in posterior column fixation with the anterior approach in acetabular fracture: A cadaveric study.", "text": "OBJECTIVE\nThe fluoroscopically-guided procedure of antegrade posterior lag screw placement in posterior column fixation through the anterior approach is technique-dependent and requires an experienced surgeon. The purpose of this study was to establish the safe zone for the antegrade posterior lag screw by using computational analysis.\n\n\nMETHOD\nA virtual three-dimensional model of 178 hemi-pelvises was created from CT data (DICOM format) using the Mimics\u00ae program, and was used to measure the safe zone of antegrade lag screw fixation on the inner table of the iliac wing, and the largest diameter of a cylindrical implant inside the safe zone. The central point (point A) of the cylinder was assessed and compared with the intersection point (point B) between the linea terminalis and the anterior border of the sacroiliac articulation.\n\n\nRESULTS\nThe safe zone was triangular with an average area of 670.4mm2 (range, 374.8-1084.5mm2). The largest diameter of the cylinder was a mean 7.4mm (range, 5.0-10.0mm). When height was under 156.3cm, the diameter of the cylindrical implant was smaller than 7.0mm (p<0.001, regression coefficient=0.09). The linear distance between points A and B was 32.5mm (range, 19.2-49.3mm). Point A was far enough away from the well-positioned anterior column plate to prevent collision between the two.\n\n\nCONCLUSION\nThe safe zone was shaped like a triangle, and was large enough for multiple screws. Considering the straight-line distance between points A and B, the central screw can be fixed without overlapping with the well-positioned anterior column plate at the point between holes 2 and 3."} {"_id": "ee1931a94457cbb0a6203caad54bafe00c20c76b", "title": "Flexible capacitive sensors for high resolution pressure measurement", "text": "Thin, flexible, robust capacitive pressure sensors have been the subject of research in many fields where axial strain sensing with high spatial resolution and pressure resolution is desirable for small loads, such as tactile robotics and biomechanics. Simple capacitive pressure sensors have been designed and implemented on flexible substrates in general agreement with performance predicted by an analytical model. Two designs are demonstrated for comparison. 
The first design uses standard flex circuit technology, and the second design uses photolithography techniques to fabricate capacitive sensors with higher spatial and pressure resolution. Sensor arrays of varying sensor size and spacing are tested with applied loads from 0 to 1 MPa. Pressure resolution and linearity of the sensors are significantly improved with the miniaturized, custom-fabricated sensor array compared to standard flexible circuit technology."} {"_id": "2f9f61991dc09616be25b94d29ad788c32c55942", "title": "On optimal fuzzy c-means clustering for energy efficient cooperative spectrum sensing in cognitive radio networks", "text": "This work explores the scope of Fuzzy C-Means (FCM) clustering in energy detection-based cooperative spectrum sensing (CSS) in a single primary user (PU) cognitive radio network (CRN). The PU signal energy sensed at secondary users (SUs) is forwarded to the fusion center (FC). Two different combining schemes, namely selection combining (SC) and optimal gain combining, are performed at the FC to address the sensing reliability problem under two different optimization frameworks. In the first work, optimal cluster center points are searched for using the differential evolution (DE) algorithm to maximize the probability of detection under the constraint of keeping the probability of false alarm below a predefined threshold. Simulation results highlight the improved sensing reliability compared to the existing works. In the second one, the problem is extended to the energy-efficient design of the CRN. The SUs act here as amplify-and-forward (AF) relays, and the PU energy content is measured at the FC over the combined signal from all the SUs. The objective is to minimize the average energy consumption of all SUs while maintaining the predefined sensing constraints. Optimal FCM clustering using DE determines the optimal SU amplifying gain and the optimal number of PU samples. Simulation results shed light on the performance gain of the proposed approach compared to the existing energy-efficient CSS schemes."} {"_id": "cb3fcbd32db5a7d1109a2d7e92c0c534b65e1769", "title": "Exploiting semantic web knowledge graphs in data mining", "text": "Data Mining and Knowledge Discovery in Databases (KDD) is a research field concerned with deriving higher-level insights from data. The tasks performed in that field are knowledge-intensive and can often benefit from using additional knowledge from various sources. Therefore, many approaches have been proposed in this area that combine Semantic Web data with the data mining and knowledge discovery process. Semantic Web knowledge graphs are a backbone of many information systems that require access to structured knowledge. Such knowledge graphs contain factual knowledge about real-world entities and the relations between them, which can be utilized in various natural language processing, information retrieval, and data mining applications. Following the principles of the Semantic Web, Semantic Web knowledge graphs are publicly available as Linked Open Data. Linked Open Data is an open, interlinked collection of datasets in machine-interpretable form, covering most real-world domains. 
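For reference, the FCM machinery underlying the spectrum sensing abstract above looks as follows. This is a minimal sketch of plain fuzzy c-means with alternating membership and centre updates; the paper instead searches the cluster centres with differential evolution under detection and false-alarm constraints, which this sketch does not implement.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain FCM: memberships u[i, k] and centres updated in alternation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        # Distances of every point to every centre (small epsilon avoids 0/0).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1)); rows sum to one.
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (50, 2)) for mu in (0, 2, 4)])
centers, u = fuzzy_c_means(X, c=3)
print(np.round(centers, 2))   # should recover centres near (0,0), (2,2), (4,4)
```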
In this thesis, we investigate the hypothesis that Semantic Web knowledge graphs can be exploited as background knowledge in different steps of the knowledge discovery process and in different data mining tasks. More precisely, we aim to show that Semantic Web knowledge graphs can be utilized for generating valuable data mining features that can be used in various data mining tasks. Identifying, collecting and integrating useful background knowledge for a given data mining application can be a tedious and time-consuming task. Furthermore, most data mining tools require features in propositional form, i.e., binary, nominal or numerical features associated with an instance, while Linked Open Data sources are usually graphs by nature. Therefore, in Part I, we evaluate unsupervised feature generation strategies from types and relations in knowledge graphs, which are used in different data mining tasks, i.e., classification, regression, and outlier detection. As the number of generated features grows rapidly with the number of instances in the dataset, we provide a strategy for feature selection in hierarchical feature space, in order to select only the most informative and most representative features for a given dataset. Furthermore, we provide an end-to-end tool for mining the Web of Linked Data, which provides functionalities for each step of the knowledge discovery process, i.e., linking local data to a Semantic Web knowledge graph, integrating features from multiple knowledge graphs, feature generation and selection, and building machine learning models. However, we show that such feature generation strategies often lead to high-dimensional feature vectors even after"} {"_id": "2c0e1b5db1b6851d95a765a2264bb77f19ee04e1", "title": "Topic Aware Neural Response Generation", "text": "We consider incorporating topic information into a sequence-to-sequence framework to generate informative and interesting responses for chatbots. To this end, we propose a topic aware sequence-to-sequence (TA-Seq2Seq) model. The model utilizes topics to simulate prior human knowledge that guides them to form informative and interesting responses in conversation, and leverages topic information in generation by a joint attention mechanism and a biased generation probability. The joint attention mechanism summarizes the hidden vectors of an input message as context vectors by message attention and synthesizes topic vectors by topic attention from the topic words of the message obtained from a pre-trained LDA model, with these vectors jointly affecting the generation of words in decoding. To increase the possibility of topic words appearing in responses, the model modifies the generation probability of topic words by adding an extra probability item to bias the overall distribution. Empirical studies on both automatic evaluation metrics and human annotations show that TA-Seq2Seq can generate more informative and interesting responses, significantly outperforming state-of-the-art response generation models."} {"_id": "34e2cb7a4fef0651fb5c0a120c8e70ebab9f0749", "title": "It's only a computer: Virtual humans increase willingness to disclose", "text": "Research has begun to explore the use of virtual humans (VHs) in clinical interviews (Bickmore, Gruber, & Picard, 2005). When designed as supportive and \u201csafe\u201d interaction partners, VHs may improve such screenings by increasing willingness to disclose information (Gratch, Wang, Gerten, & Fast, 2007). 
In health and mental health contexts, patients are often reluctant to respond honestly. In the context of health-screening interviews, we report a study in which participants interacted with a VH interviewer and were led to believe that the VH was controlled by either humans or automation. As predicted, compared to those who believed they were interacting with a human operator, participants who believed they were interacting with a computer reported lower fear of self-disclosure, lower impression management, displayed their sadness more intensely, and were rated by observers as more willing to disclose. These results suggest that automated VHs can help overcome a significant barrier to obtaining truthful patient information."} {"_id": "85315b64a4c73cb86f156ef5b0a085d6ebc8a65d", "title": "A Neural Conversational Model", "text": "Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires far fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge both from a domain-specific dataset and from a large, noisy, and general-domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model."} {"_id": "6128190a8c18cde6b94e0fae934d6fcc406ea0bb", "title": "STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset", "text": "In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention. In this paper, we particularly consider generating Japanese captions for images. Since most available caption datasets have been constructed for the English language, there are few datasets for Japanese. To tackle this problem, we construct a large-scale Japanese image caption dataset based on images from MS-COCO, which is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions for 164,062 images. In the experiment, we show that a neural network trained using STAIR Captions can generate more natural and better Japanese captions, compared to those generated using English-Japanese machine translation after generating English captions."} {"_id": "a5bba7a6363f6abf1783368d02b3928e715fc50e", "title": "A Wearable Computing Prototype for supporting training activities in Automotive Production", "text": "This paper presents the results of the wearable computing prototype supporting training and qualification activities at the SKODA production facilities in the Czech Republic. 
The resulting prototype is based upon the first of the two main \u201cVariant Production Showcases\u201d (training and assembly-line) which are to be implemented in the WearIT@work project (EC IP 004216). As an introduction, the authors of this paper investigate current training processes at Skoda, and derive the potential benefits and risks of applying wearable computing technology. Accordingly, the approach of creating the wearable prototypes, via usability experiments at the Skoda production site, is explained in detail. As a preliminary result, the first functional prototypes, including a task recognition prototype, based upon the components of the European Wearable Computing Platform, are described. The paper is rounded off by providing a short outlook regarding the second envisaged test case, which is focussed upon selected assembly-line operations of blue-collar workers."} {"_id": "c3fd956dc501a8cc33c852eefdd4e2244bb396b4", "title": "Towards Internet of Things (IOTS): Integration of Wireless Sensor Network to Cloud Services for Data Collection and Sharing", "text": "Cloud computing provides great benefits for applications hosted on the Web that also have special computational and storage requirements. This paper proposes an extensible and flexible architecture for integrating Wireless Sensor Networks with the Cloud. We have used REST-based Web services as an interoperable application layer that can be directly integrated into other application domains for remote monitoring, such as e-health care services, smart homes, or even vehicular area networks (VAN). For proof of concept, we have implemented REST-based Web services on an IP-based low-power WSN test bed, which enables data access from anywhere. An alert feature has also been implemented to notify users via email or tweets when monitored data exceed values of interest."} {"_id": "0a83be1e2db4f42b97f65b5deac73637468605c8", "title": "Broadband biquad UHF antenna array for DOA", "text": "In this contribution, a broadband biquad antenna array for DOA applications is investigated. Simulation and measurement results of the single broadband biquad antenna element, and results of antenna array configurations for vertical-polarisation, horizontal-polarisation or polarisation-independent direction-of-arrival estimation using different algorithms, are shown."} {"_id": "e1a9fab83821aec018d593d63362c25ebb2b305a", "title": "Dynablock: Dynamic 3D Printing for Instant and Reconstructable Shape Formation", "text": "This paper introduces Dynamic 3D Printing, a fast and reconstructable shape formation system. Dynamic 3D Printing can assemble an arbitrary three-dimensional shape from a large number of small physical elements. Also, it can disassemble the shape back into elements and reconstruct a new shape. Dynamic 3D Printing combines the capabilities of 3D printers and shape displays: Like conventional 3D printing, it can generate arbitrary and graspable three-dimensional shapes, while allowing shapes to be rapidly formed and reformed as in a shape display. To demonstrate the idea, we describe the design and implementation of Dynablock, a working prototype of a dynamic 3D printer. Dynablock can form a three-dimensional shape in seconds by assembling 3,000 9 mm blocks, leveraging a 24 x 16 pin-based shape display as a parallel assembler. Dynamic 3D printing is a step toward achieving our long-term vision in which 3D printing becomes an interactive medium, rather than the means for fabrication that it is today. 
In this paper, we explore possibilities for this vision by illustrating application scenarios that are difficult to achieve with conventional 3D printing or shape display systems."} {"_id": "dd47d8547382541fe3f589e4844f232ea817fd40", "title": "Design and Fabrication of Integrated Magnetic MEMS Energy Harvester for Low Frequency Applications", "text": "An integrated vibration-to-electrical MEMS electromagnetic energy harvester with low resonant frequency is designed, simulated, fabricated and tested. Special structures and magnet arrays of the device are designed and simulated for favorable low-frequency performance. Both FEM simulations and system-level simulations are conducted to investigate the induced voltage of the device. Two types of harvester are fabricated with different magnet arrays, and the magnetic field of the two harvesters is investigated. With the combination of the microfabricated metal structures and the electroplated CoNiMnP permanent micro magnets, the designed harvesters are of small size (5 mm \u00d7 5 mm \u00d7 0.53 mm) without manual assembly of conventional bulk magnets. The output power density is 0.03 \u03bcW/cm3 with an optimized resistive load at 64 Hz. This magnetic harvester is suitable for batch fabrication through a MEMS process and can be utilized to drive microelectronic devices, such as implantable and portable microdevices."} {"_id": "4967146be41727e69ec303c34d341b13907c9c36", "title": "Network bandwidth predictor (NBP): a system for online network performance forecasting", "text": "The applicability of network-based computing depends on the availability of the underlying network bandwidth. However, network resources are shared, and the available network bandwidth varies with time. There is no satisfactory solution available for network performance prediction. In this research, we propose, design, and implement the NBP (network bandwidth predictor) for rapid network performance prediction. NBP is a new system that employs a neural network-based approach for network bandwidth forecasting. The system is designed to integrate with the most advanced technologies. It employs the NWS (network weather service) monitoring subsystem to measure the network traffic, and provides an improved, more accurate performance prediction than that of NWS, especially for applications with a network usage pattern. The NBP system has been tested on real-time data collected by the NWS monitoring subsystem and on trace files. Experimental results confirm that NBP provides improved predictions."} {"_id": "4d8340eae2c98ab5e0a3b1a7e071a7ddb9106cff", "title": "A Framework for Multiple-Instance Learning", "text": "Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. 
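The neural-network forecasting idea in the NBP abstract above can be sketched with a small multilayer perceptron over sliding windows of past measurements. The window length, the network size, and the synthetic trace standing in for NWS measurements are all assumptions; the abstract does not specify NBP's actual architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic bandwidth trace standing in for NWS measurements (Mbit/s):
# a daily-like periodic component plus noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
trace = 50 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)

def windows(series, k=12):
    """Sliding windows: the last k measurements predict the next one."""
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], k)
    y = series[k:]
    return X, y

X, y = windows(trace)
split = 1500
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"test MAE: {mae:.2f} Mbit/s")
```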
We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem."} {"_id": "aae470464cf0a7d6f8a0e71f836ffdd9b1ae4784", "title": "Extracting a biologically relevant latent space from cancer transcriptomes with variational autoencoders", "text": "The Cancer Genome Atlas (TCGA) has profiled over 10,000 tumors across 33 different cancer-types for many genomic features, including gene expression levels. Gene expression measurements capture substantial information about the state of each tumor. Certain classes of deep neural network models are capable of learning a meaningful latent space. Such a latent space could be used to explore and generate hypothetical gene expression profiles under various types of molecular and genetic perturbation. For example, one might wish to use such a model to predict a tumor's response to specific therapies or to characterize complex gene expression activations existing in differential proportions in different tumors. Variational autoencoders (VAEs) are a deep neural network approach capable of generating meaningful latent spaces for image and text data. In this work, we sought to determine the extent to which a VAE can be trained to model cancer gene expression, and whether or not such a VAE would capture biologically relevant features. In the following report, we introduce a VAE trained on TCGA pan-cancer RNA-seq data, identify specific patterns in the VAE encoded features, and discuss potential merits of the approach. We name our method \"Tybalt\" after an instigative, cat-like character who sets a cascading chain of events in motion in Shakespeare's \"Romeo and Juliet\". From a systems biology perspective, Tybalt could one day aid in cancer stratification or predict specific activated expression patterns that would result from genetic changes or treatment effects."} {"_id": "3f3fae90dbafea2272677883879195b435596890", "title": "Review of Design of Speech Recognition and Text Analytics based Digital Banking Customer Interface and Future Directions of Technology Adoption", "text": "Banking is one of the most significant adopters of cutting-edge information technologies. Since its modern beginnings in the form of paper-based accounting maintained at the branch, the adoption of computerized systems has made it possible to centralize processing in data centres and improve the customer experience through a more available and efficient system. The latest twist in this evolution is the adoption of natural language processing and speech recognition in the user interface between the human and the system, and the use of machine learning and advanced analytics for backend processing as well. The paper reviews the progress of technology adoption in the field and comments on the maturity level of solutions involving less-studied or low-resource languages like Hindi and other Indian regional languages. Furthermore, it also provides an analysis based on a prototype we built."} {"_id": "7c5c8800e9603fc8c7e04d50fb8a642a9c9dd3bf", "title": "Specialized Support Vector Machines for open-set recognition", "text": "Often, when dealing with real-world recognition problems, we do not need, and often cannot have, knowledge of the entire set of possible classes that might appear during operational testing. Sometimes, some of these classes may be ill-sampled, not sampled at all, or undefined. 
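The Diverse Density framework described in the multiple-instance learning abstract above has a compact noisy-or form: a candidate concept point scores highly when every positive bag has some instance near it and no negative bag does. This sketch evaluates that score over candidate points taken from the positive bags, a gradient-free simplification; the original work optimizes the score with gradient ascent from multiple starting points, and the synthetic bags here are invented for illustration.

```python
import numpy as np

def diverse_density(t, pos_bags, neg_bags):
    """Noisy-or Diverse Density of a candidate concept point t."""
    def p_inst(bag):                      # Pr(each instance equals the concept)
        return np.exp(-np.sum((bag - t) ** 2, axis=1))
    dd = 1.0
    for bag in pos_bags:                  # some instance should be near t
        dd *= 1.0 - np.prod(1.0 - p_inst(bag))
    for bag in neg_bags:                  # no instance should be near t
        dd *= np.prod(1.0 - p_inst(bag))
    return dd

rng = np.random.default_rng(0)
concept = np.array([2.0, 2.0])
pos_bags = [np.vstack([rng.uniform(-4, 4, (9, 2)),
                       concept + rng.normal(0, 0.1, 2)]) for _ in range(5)]
neg_bags = [rng.uniform(-4, 4, (10, 2)) for _ in range(5)]

# Common heuristic: start the search from the instances of positive bags.
candidates = np.vstack(pos_bags)
best = max(candidates, key=lambda t: diverse_density(t, pos_bags, neg_bags))
print(best)   # should land near the true concept point (2, 2)
```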
In such cases, we need to think of robust classification methods able to deal with the \u201cunknown\u201d and properly reject samples belonging to classes never seen during training. Nevertheless, almost all existing classifiers to date were mostly developed for the closed-set scenario, i.e., the classification setup in which it is assumed that all test samples belong to one of the classes with which the classifier was trained. In the open-set scenario, however, a test sample can belong to none of the known classes, and the classifier must properly reject it by classifying it as unknown. In this work, we extend the well-known Support Vector Machines (SVM) classifier and introduce the Specialized Support Vector Machines (SSVM), which is suitable for recognition in open-set setups. SSVM balances the empirical risk and the risk of the unknown, and ensures that the region of the feature space in which a test sample would be classified as known (one of the known classes) is always bounded, guaranteeing a finite risk of the unknown. The same cannot be guaranteed by the traditional SVM formulation, even when using the Radial Basis Function (RBF) kernel. In this work, we also highlight the properties of the SVM classifier related to the open-set scenario, and provide necessary and sufficient conditions for an RBF SVM to have bounded open-space risk. We also indicate promising directions of investigation of SVM-based methods for open-set scenarios. An extensive set of experiments compares the proposed method with existing solutions in the literature for open-set recognition, and the reported results show its effectiveness."} {"_id": "4cac4976a7f48ff8636d341bc40048ba313c98cd", "title": "Polysemy in Sentence Comprehension: Effects of Meaning Dominance.", "text": "Words like church are polysemous, having two related senses (a building and an organization). Three experiments investigated how polysemous senses are represented and processed during sentence comprehension. On one view, readers retrieve an underspecified, core meaning, which is later specified more fully with contextual information. On another view, readers retrieve one or more specific senses. In a reading task, context that was neutral or biased towards a particular sense preceded a polysemous word. Disambiguating material consistent with only one sense followed, in a second sentence (Experiment 1) or the same sentence (Experiments 2 & 3). Reading the disambiguating material was faster when it was consistent with that context, and dominant senses were committed to more strongly than subordinate senses. Critically, following neutral context, the continuation was read more quickly when it selected the dominant sense, and the degree of sense dominance partially explained the reading time advantage. Similarity of the senses also affected reading times. Across experiments, we found that sense selection may not be completed immediately following a polysemous word but is completed at a sentence boundary. Overall, the results suggest that readers select an individual sense when reading a polysemous word, rather than a core meaning."} {"_id": "587ad4f178340d7def5437fe5ba0ab0041a53c4b", "title": "Ocular symptom detection using smartphones", "text": "It is often said that \"the eyes are the windows to the soul\". Although this usually metaphorically refers to a person's emotional state, there is literal truth in this statement with reference to a person's biological state. 
The eyes are dynamic and sensitive organs that are connected to the rest of the body via the circulatory and nervous systems. This means that diseases originating from an organ within the body can indirectly affect the eyes. Although the eyes may not be the origin for many such diseases, they are visible to external observers, making them an accessible point of diagnosis. For instance, broken blood vessels in the sclera could indicate a blood clotting disorder or a vitamin K deficiency. Instead of \"windows to the soul\", the eyes are \"sensors to the body\" that can be monitored to screen for certain diseases."} {"_id": "acdc3d8d8c880bc9b9e10b337b09bed4c0c762d8", "title": "EMBROIDERED FULLY TEXTILE WEARABLE ANTENNA FOR MEDICAL MONITORING APPLICATIONS", "text": "Telecommunication systems integrated within garments and wearable products are among the means by which medical devices are making an impact on enhancing healthcare provision around the clock. These garments, when fully developed, will be capable of alerting and demanding attention if and when required, while minimizing hospital resources and labour. Furthermore, they can play a major role in the prevention of ailments, health irregularities, and unforeseen heart or brain disorders in apparently healthy individuals. This work investigates the feasibility of an Ultra-WideBand (UWB) antenna made from fully textile materials, which were used for the substrate as well as the conducting parts of the designed antenna. Simulated and measured results show that the proposed antenna design meets the requirement of wide working bandwidth, providing 17 GHz of bandwidth with a compact size and washable, flexible materials. Results in terms of return loss, bandwidth, radiation pattern, current distribution, as well as gain and efficiency are presented to validate the usefulness of the design. The work presented here has profound implications for future studies of a standalone suite that may one day help to provide the wearer (patient) with reliable and comfortable medical monitoring techniques."} {"_id": "26a686811d8aba081dd56ebba4d6e691fb301b9a", "title": "Health literacy and public health: A systematic review and integration of definitions and models", "text": "BACKGROUND\nHealth literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy.\n\n\nMETHODS\nA systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model.\n\n\nRESULTS\nThe review resulted in 17 definitions of health literacy and 12 conceptual models. 
Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion settings, respectively.\n\n\nCONCLUSIONS\nBased upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings."} {"_id": "95ccdddc760a29d66955ef475a51f9262778f985", "title": "Tomographic reconstruction of piecewise smooth images", "text": "In computed tomography, direct inversion of the Radon transform requires more projections than are practical due to constraints in scan time and image accessibility. Therefore, it is necessary to consider the estimation of reconstructed images when the problem is under-constrained, i.e., when a unique solution does not exist. To resolve ambiguities among solutions, it is necessary to place additional constraints on the reconstructed image. In this paper, we present a surface evolution technique to model the reconstructed image as piecewise smooth. We model the reconstructed image as two regions that are each smoothly varying in intensity and are separated by a smooth surface. We define a cost functional to penalize deviation from piecewise smoothness while ensuring that the projections of the estimated image match the measured projections. From this functional, we derive an evolution for the modeled image intensity and an evolution for the surface, thereby defining a variational tomographic estimation technique. We show example reconstructions to highlight the performance of the proposed method on real medical images."} {"_id": "a28aab171a7951a48f31e36ef92d309c2c4f0265", "title": "Clustering and Summarizing Protein-Protein Interaction Networks: A Survey", "text": "The increasing availability and significance of large-scale protein-protein interaction (PPI) data have resulted in a flurry of research activity to comprehend the organization, processes, and functioning of cells by analyzing these data at the network level. Network clustering, which analyzes the topological and functional properties of a PPI network to identify clusters of interacting proteins, has gained significant popularity in the bioinformatics as well as data mining research communities. Many studies since the last decade have shown that clustering PPI networks is an effective approach for identifying functional modules, revealing functions of unknown proteins, etc. In this paper, we examine this issue by classifying, discussing, and comparing a wide range of approaches proposed by the bioinformatics community to cluster PPI networks. A central aim of this review is to emphasize the uniqueness of the network clustering problem in the context of PPI networks and to highlight why generic network clustering algorithms proposed by the data mining community cannot be directly adopted to address this problem effectively. 
We also review a problem closely related to PPI network clustering, network summarization, which can enable us to make sense of the information contained in large PPI networks by generating multi-level functional summaries."} {"_id": "1a5ced36faee7ae1d0316c0461c50f4ea1317fad", "title": "Find me if you can: improving geographical prediction with social and spatial proximity", "text": "Geography and social relationships are inextricably intertwined; the people we interact with on a daily basis almost always live near us. As people spend more time online, data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise, allowing us to build reliable models to describe their interaction. These models have important implications in the design of location-based services, security intrusion detection, and social media supporting local communities.\n Using user-supplied address data and the network of associations between members of the Facebook social network, we can directly observe and measure the relationship between geography and friendship. Using these measurements, we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds IP-based geolocation. This algorithm is efficient and scalable, and could be run on a network containing hundreds of millions of users."} {"_id": "6a3aaa22977577d699a5b081dcdf7e093c857fec", "title": "Syntax-Tree Regular Expression Based DFA Formal Construction", "text": "A compiler is a program that translates a computer program written in a source language into an equivalent machine-code program. Compiler construction is an advanced research area because of its size and complexity. Source code is written in higher-level languages, which are usually complex and, consequently, increase the level of abstraction. For these reasons, the design and construction of an error-free compiler is a challenge of the twenty-first century. Verification of a source program does not guarantee the correctness of the generated code, because bugs in the compiler may lead to an incorrect target program. Therefore, verification of the compiler is more important than verification of source programs. The lexical analyzer is a main phase of the compiler, used for scanning the input and grouping it into a sequence of tokens. In this paper, the formal construction of deterministic finite automata (DFA) based on regular expressions is presented as a part of the lexical analyzer. First, the syntax tree is described based on the augmented regular expression. Then a formal description of the important operators, checking nullability and computing the first and last positions of the internal nodes of the tree, is given. Next, the transition diagram is derived from the follow positions and converted into a deterministic finite automaton by defining a relationship among the syntax tree, the transition diagram, and the DFA. A formal specification of the procedure is described using Z notation, and model analysis is provided using the Z/Eves toolset."} {"_id": "0971a5e835f365b6008177a867cfe4bae76841a5", "title": "Supervised Dictionary Learning by a Variational Bayesian Group Sparse Nonnegative Matrix Factorization", "text": "Nonnegative matrix factorization (NMF) with group sparsity constraints is formulated as a probabilistic graphical model and, assuming some observed data have been generated by the model, a feasible variational Bayesian algorithm is derived for learning model parameters. 
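The nullable/firstpos/lastpos/followpos computation that the DFA-construction abstract above formalizes in Z can be sketched directly over a syntax tree. The example regex (a|b)*abb with end-marker # and positions 1-6 is the classic textbook instance, not the paper's own example.

```python
# Nullable/firstpos/lastpos/followpos for the augmented regex (a|b)*abb#.
class Node:
    def __init__(self, kind, left=None, right=None, pos=None, sym=None):
        self.kind, self.left, self.right, self.pos, self.sym = kind, left, right, pos, sym

def analyze(node, followpos):
    if node.kind == "leaf":
        node.nullable, node.first, node.last = False, {node.pos}, {node.pos}
    elif node.kind == "star":
        analyze(node.left, followpos)
        node.nullable = True
        node.first, node.last = node.left.first, node.left.last
        for p in node.left.last:                 # loop back inside the star
            followpos[p] |= node.left.first
    elif node.kind == "or":
        analyze(node.left, followpos); analyze(node.right, followpos)
        node.nullable = node.left.nullable or node.right.nullable
        node.first = node.left.first | node.right.first
        node.last = node.left.last | node.right.last
    elif node.kind == "cat":
        analyze(node.left, followpos); analyze(node.right, followpos)
        node.nullable = node.left.nullable and node.right.nullable
        node.first = node.left.first | (node.right.first if node.left.nullable else set())
        node.last = node.right.last | (node.left.last if node.right.nullable else set())
        for p in node.left.last:                 # left can be followed by right
            followpos[p] |= node.right.first

leaf = lambda pos, sym: Node("leaf", pos=pos, sym=sym)
# (a|b)* a b b #   with positions 1..6
tree = Node("cat",
            Node("cat",
                 Node("cat",
                      Node("cat", Node("star", Node("or", leaf(1, "a"), leaf(2, "b"))),
                           leaf(3, "a")),
                      leaf(4, "b")),
                 leaf(5, "b")),
            leaf(6, "#"))
followpos = {p: set() for p in range(1, 7)}
analyze(tree, followpos)
print(followpos)  # expected: 1,2 -> {1,2,3}; 3 -> {4}; 4 -> {5}; 5 -> {6}
```

The DFA then falls out of these sets: the start state is firstpos of the root, and the transition from a state S on symbol a is the union of followpos(p) over the positions p in S labelled a.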
When used in a supervised learning scenario, NMF is most often utilized as an unsupervised feature extractor followed by classification in the obtained feature subspace. By mapping class labels to the more general concept of groups that underlie the sparsity of the coefficients, the proposed group sparse NMF model allows class label information to be incorporated, yielding low-dimensional label-driven dictionaries that not only aim to represent the data faithfully but are also suitable for class discrimination. Experiments performed in face recognition and facial expression recognition domains point to advantages of classification in such label-driven feature subspaces over classification in feature subspaces obtained in an unsupervised manner."} {"_id": "5ed3dc40cb2355461d126a51c00846a707e85aa3", "title": "Acute Sexual Assault in the Pediatric and Adolescent Population.", "text": "Children and adolescents are at high risk for sexual assault. Early medical and mental health evaluation by professionals with advanced training in sexual victimization is imperative to assure appropriate assessment, forensic evidence collection, and follow-up. Moreover, continued research and outreach programs are needed for the development of preventative strategies that focus on this vulnerable population. In this review we highlight key concepts for assessment and include a discussion of risk factors, disclosure, sequelae, follow-up, and prevention."} {"_id": "89f256abf0e0187fcf0a56a4df6f447a2c0b17bb", "title": "Listening Comprehension over Argumentative Content", "text": "This paper presents a task for machine listening comprehension in the argumentation domain and a corresponding dataset in English. We recorded 200 spontaneous speeches arguing for or against 50 controversial topics. For each speech, we formulated a question, aimed at confirming or rejecting the occurrence of potential arguments in the speech. Labels were collected by listening to the speech and marking which arguments were mentioned by the speaker. We applied baseline methods addressing the task, to be used as a benchmark for future work over this dataset. All data used in this work is freely available for research."} {"_id": "7e209751ae6f0e861a7763d3d22533b39aabd7eb", "title": "MauveDB: supporting model-based user views in database systems", "text": "Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store. In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views.
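Since the MauveDB record above centers on regression-based views, a minimal sketch of the idea may help: raw, noisy readings sit behind a fitted model, and queries are answered from the model rather than from the raw table. The schema, the polynomial model, and the refit-on-insert maintenance policy are illustrative assumptions, not MauveDB's actual interface.

```python
# A toy "model-based view": queries read the model, not the raw rows.
import numpy as np

# Hypothetical raw sensor table: (time, temperature) rows, noisy and gappy.
raw = [(0.0, 20.1), (1.0, 20.9), (2.0, 22.3), (4.0, 24.2), (5.0, 24.8)]

class RegressionView:
    def __init__(self, rows, degree=1):
        self.rows, self.degree = list(rows), degree
        self._refit()

    def _refit(self):
        t, y = map(np.array, zip(*self.rows))
        self.coeffs = np.polyfit(t, y, self.degree)

    def insert(self, row):
        self.rows.append(row)   # new data arrives; the view is maintained by refitting
        self._refit()

    def query(self, t):
        # SELECT temp FROM view WHERE time = t  -- answered from the model,
        # so even never-observed times (the gap at t = 3) get consistent values.
        return float(np.polyval(self.coeffs, t))

view = RegressionView(raw)
print(round(view.query(3.0), 2))   # smoothed estimate inside the gap
view.insert((6.0, 25.9))
print(round(view.query(3.0), 2))   # estimate after view maintenance
```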
Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system."} {"_id": "13b6eeb28328252a35cdcbe3ab8d09d2a9caf99d", "title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora", "text": "We introduce (1) a novel stochastic inversion transduction grammar formalism for bilingual language modeling of sentence-pairs, and (2) the concept of bilingual parsing with a variety of parallel corpus analysis applications. Aside from the bilingual orientation, three major features distinguish the formalism from the finite-state transducers more traditionally found in computational linguistics: it skips directly to a context-free rather than finite-state base, it permits a minimal extra degree of ordering flexibility, and its probabilistic formulation admits an efficient maximum-likelihood bilingual parsing algorithm. A convenient normal form is shown to exist. Analysis of the formalism's expressiveness suggests that it is particularly well suited to modeling ordering shifts between languages, balancing needed flexibility against complexity constraints. We discuss a number of examples of how stochastic inversion transduction grammars bring bilingual constraints to bear upon problematic corpus analysis tasks such as segmentation, bracketing, phrasal alignment, and parsing."} {"_id": "9c7221847823926af3fc10051f6d5864af2eae9d", "title": "Evaluation Techniques and Systems for Answer Set Programming: a Survey", "text": "Answer set programming (ASP) is a prominent knowledge representation and reasoning paradigm that has found both industrial and scientific applications. The success of ASP is due to the combination of two factors: a rich modeling language and the availability of efficient ASP implementations. In this paper we trace the history of ASP systems, describing the key evaluation techniques and their implementation in actual tools."} {"_id": "292a2b2ad9a02e474f45f1d3b163bf47a7c874e5", "title": "Feature Detection with Automatic Scale Selection", "text": "The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select appropriate local scales for further analysis. This article proposes a systematic methodology for dealing with this problem.
A framework is presented for generating hypotheses about interesting scale levels in image data, based on a general principle stating that local extrema over scales of different combinations of γ-normalized derivatives are likely candidates to correspond to interesting structures. Specifically, it is shown how this idea can be used as a major mechanism in algorithms for automatic scale selection, which adapt the local scales of processing to the local image structure. Support for the proposed approach is given in terms of a general theoretical investigation of the behaviour of the scale selection method under rescalings of the input pattern and by integration with different types of early visual modules, including experiments on real-world and synthetic data. Support is also given by a detailed analysis of how different types of feature detectors perform when integrated with a scale selection mechanism and then applied to characteristic model patterns. Specifically, it is described in detail how the proposed methodology applies to the problems of blob detection, junction detection, edge detection, ridge detection and local frequency estimation. In many computer vision applications, the poor performance of the low-level vision modules constitutes a major bottleneck. It is argued that the inclusion of mechanisms for automatic scale selection is essential if we are to construct vision systems to automatically analyse complex unknown environments."} {"_id": "6b2e3c9b32e92dbbdd094d2bd88eb60a80c3083d", "title": "A Combined Corner and Edge Detector", "text": "The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed."} {"_id": "7756c24837b1f9ca3fc5be4ce7b4de0fcf9de8e6", "title": "Singularity detection and processing with wavelets", "text": "Most of a signal's information is often carried by irregular structures and transient phenomena. The mathematical characterization of singularities with Lipschitz exponents is explained. Theorems are reviewed that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noise from signals by analyzing the evolution of the wavelet transform maxima across scales.
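The automatic-scale-selection record above turns on finding extrema over scales of γ-normalized derivatives. A minimal sketch of that mechanism for blob detection follows, using the scale-normalized Laplacian (the γ = 1 case); the synthetic image and the scale grid are illustrative assumptions.

```python
# Scale selection as extrema of the scale-normalized Laplacian response.
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic image: one bright blob of radius ~8 pixels, centered at (32, 32).
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 8 ** 2).astype(float)

best = None
for sigma in np.linspace(2, 16, 29):
    # Normalization by sigma^2 makes Laplacian responses comparable
    # across scales, so their maximum over sigma selects a scale.
    response = (sigma ** 2) * np.abs(gaussian_laplace(img, sigma))
    strength = response[32, 32]          # response at the blob center
    if best is None or strength > best[1]:
        best = (sigma, strength)

# For an ideal blob the selected sigma tracks radius / sqrt(2), i.e. ~5.7 here.
print("selected scale sigma =", round(best[0], 2))
```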
In two dimensions, the wavelet transform maxima indicate the location of edges in images. The denoising algorithm is extended for image enhancement."} {"_id": "8930f62a4b5eb1cbabf224cf84aa009ea798cfee", "title": "Modeling Visual Attention via Selective Tuning", "text": "A model for aspects of visual attention based on the concept of selective tuning is presented. It provides for a solution to the problems of selection in an image, information routing through the visual processing hierarchy and task-specific attentional bias. The central thesis is that attention acts to optimize the search procedure inherent in a solution to vision. It does so by selectively tuning the visual processing network which is accomplished by a top-down hierarchy of winner-take-all processes embedded within the visual processing pyramid. Comparisons to other major computational models of attention and to the relevant neurobiology are included in detail throughout the paper. The model has been implemented; several examples of its performance are shown. This model is a hypothesis for primate visual attention, but it also outperforms existing computational solutions for attention in machine vision and is highly appropriate to solving the problem in a robot vision system."} {"_id": "f4c83b78c787e01d3d4e58ae18c8f3ab5933a02b", "title": "3D positional integration from image sequences", "text": "An explicit three-dimensional representation is constructed from feature-points extracted from a sequence of images taken by a moving camera. The points are tracked through the sequence, and their 3D locations accurately determined by use of Kalman Filters. The ego-motion of the camera is also solved for."} {"_id": "253906bf2e541e433131ae304186ae3a0964c528", "title": "SIDES: a cooperative tabletop computer game for social skills development", "text": "This paper presents a design case study of SIDES: Shared Interfaces to Develop Effective Social Skills. SIDES is a tool designed to help adolescents with Asperger's Syndrome practice effective group work skills using a four-player cooperative computer game that runs on tabletop technology. We present the design process and evaluation of SIDES conducted over six months with a middle school social group therapy class. Our findings indicate that cooperative tabletop computer games are a motivating and supportive tool for facilitating effective group work among our target population and reveal several design lessons to inform the development of similar systems."} {"_id": "1a5ef67211cd6cdab5e8ea1d19976f23043d588e", "title": "On Controllable Sparse Alternatives to Softmax", "text": "Converting an n-dimensional vector to a probability distribution over n objects is a commonly used component in many machine learning tasks like multiclass classification, multilabel classification, attention mechanisms etc. For this, several probability mapping functions have been proposed and employed in the literature, such as softmax, sum-normalization, spherical softmax, and sparsemax, but there is very little understanding of how they relate to each other. Further, none of the above formulations offers explicit control over the degree of sparsity. To address this, we develop a unified framework that encompasses all these formulations as special cases. This framework ensures simple closed-form solutions and existence of sub-gradients suitable for learning via backpropagation.
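As a concrete instance of the closed-form mappings in the record above, here is a minimal sketch of sparsemax (Martins and Astudillo, 2016), the member of the family that can return exact zeros; the paper's own sparsegen-lin and sparsehourglass formulations add controllable sparsity on top of projections of this kind and are not reproduced here.

```python
# Sparsemax: Euclidean projection of a score vector onto the simplex.
import numpy as np

def sparsemax(z):
    """Unlike softmax, the returned distribution can contain exact zeros."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum       # coordinates that survive
    k_z = k[support][-1]                      # size of the support set
    tau = (cumsum[support][-1] - 1) / k_z     # threshold shifting the scores
    return np.maximum(z - tau, 0.0)

print(sparsemax([2.0, 1.0, -1.0]))   # [1. 0. 0.]: fully sparse output
print(sparsemax([0.5, 0.4, 0.3]))    # dense output: all three coordinates survive
```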
Within this framework, we propose two novel sparse formulations, sparsegen-lin and sparsehourglass, that seek to provide control over the desired degree of sparsity. We further develop novel convex loss functions that help induce the behavior of the aforementioned formulations in the multilabel classification setting, showing improved performance. We also demonstrate empirically that the proposed formulations, when used to compute attention weights, achieve better or comparable performance on standard seq2seq tasks like neural machine translation and abstractive summarization."} {"_id": "86824ba374daff17ff2235484055b3bb9c464555", "title": "S-CNN: Subcategory-Aware Convolutional Networks for Object Detection", "text": "The marriage between the deep convolutional neural network (CNN) and region proposals has made breakthroughs for object detection in recent years. While the discriminative object features are learned via a deep CNN for classification, the large intra-class variation and deformation still limit the performance of the CNN based object detection. We propose a subcategory-aware CNN (S-CNN) to solve the object intra-class variation problem. In the proposed technique, the training samples are first grouped into multiple subcategories automatically through a novel instance sharing maximum margin clustering process. A multi-component Aggregated Channel Feature (ACF) detector is then trained to produce more latent training samples, where each ACF component corresponds to one clustered subcategory. The produced latent samples together with their subcategory labels are further fed into a CNN classifier to filter out false proposals for object detection. An iterative learning algorithm is designed for the joint optimization of image subcategorization, multi-component ACF detector, and subcategory-aware CNN classifier. Experiments on INRIA Person dataset, Pascal VOC 2007 dataset and MS COCO dataset show that the proposed technique clearly outperforms the state-of-the-art methods for generic object detection."} {"_id": "0fb4bdf126623c4f252991db3277e22c37b0ae82", "title": "Revealing protein–lncRNA interaction", "text": "Long non-coding RNAs (lncRNAs) are associated with a plethora of cellular functions, most of which require the interaction with one or more RNA-binding proteins (RBPs); similarly, RBPs are often able to bind a large number of different RNAs. The currently available knowledge is already drawing an intricate network of interactions, whose deregulation is frequently associated with pathological states. Several different techniques were developed in the past years to obtain protein-RNA binding data in a high-throughput fashion. In parallel, in silico inference methods were developed for the accurate computational prediction of the interaction of RBP-lncRNA pairs. The field is growing rapidly, and it is foreseeable that in the near future, the protein-lncRNA interaction network will expand, offering essential clues for a better understanding of lncRNA cellular mechanisms and their disease-associated perturbations."} {"_id": "371b07d65891b03eaae15c2865da2a6751a99bb8", "title": "gHull: A GPU algorithm for 3D convex hull", "text": "A novel algorithm is presented to compute the convex hull of a point set in ℝ3 using the graphics processing unit (GPU). By exploiting the relationship between the Voronoi diagram and the convex hull, the algorithm derives the approximation of the convex hull from the former.
The other extreme vertices of the convex hull are then found by using a two-round check in the digital and the continuous space successively. The algorithm does not need explicit locking or any other concurrency control mechanism, thus it can maximize the parallelism available on the modern GPU.\n The implementation using the CUDA programming model on NVIDIA GPUs is exact and efficient. The experiments show that it is up to an order of magnitude faster than other sequential convex hull implementations running on the CPU for inputs of millions of points. This work demonstrates that the GPU can be used to solve nontrivial computational geometry problems with significant performance benefit."} {"_id": "8728a963eedc329b363bfde61d93fa7446bcfffd", "title": "gTRICLUSTER: A More General and Effective 3D Clustering Algorithm for Gene-Sample-Time Microarray Data", "text": "Clustering is an important technique in microarray data analysis, and mining three-dimensional (3D) clusters in gene-sample-time (simply GST) microarray data is emerging as a hot research topic in this area. A 3D cluster consists of a subset of genes that are coherent on a subset of samples along a segment of time series. Such coherent clusters may contain information that helps users identify useful phenotypes, potential genes related to these phenotypes and their expression rules. TRICLUSTER is the state-of-the-art 3D clustering algorithm for GST microarray data. In this paper, we propose a new algorithm to mine 3D clusters over GST microarray data. We term the new algorithm gTRICLUSTER because it is based on a more general 3D cluster model than the one that TRICLUSTER is based on. gTRICLUSTER can find more biologically meaningful coherent gene clusters than TRICLUSTER can. It also outperforms TRICLUSTER in robustness to noise. Experimental results on a real-world microarray dataset validate the effectiveness of the proposed new algorithm."} {"_id": "70776e0e21ebbf96dd02d579cbb7152eaccf4d7f", "title": "Validation of the Dutch Version of the CDC Core Healthy Days Measures in a Community Sample", "text": "An important disadvantage of most indicators of health related quality of life used in public health surveillance is their length. In this study the authors investigated the reliability and validity of a short indicator of health related quality of life, the Dutch version of the four-item 'CDC Core Healthy Days Measures' (CDC HRQOL-4). The reliability was evaluated by calculating Cronbach's alpha of the CDC HRQOL-4. The concurrent validity was tested by comparing the CDC HRQOL-4 with three other indicators of health related quality of life, the SF-36, the WHOQoL-BREF and the GHQ-12. The construct validity was evaluated by assessing the ability of the CDC HRQOL-4 to discriminate between respondents with and without a (non-mental) chronic condition, depression, a visit to the general practitioner and use of prescription drugs. Randomly sampled respondents from the city of Utrecht were asked to fill in a questionnaire. 659 respondents (response rate 45%) completed the questionnaire. Participants represented the adult, non-institutionalised population of the city of Utrecht, the Netherlands: 58% women; mean age 41 years; 15% of non-Dutch origin. The reliability of the CDC HRQOL-4 was good. Cronbach's alpha of three of the four CDC HRQOL-4 items was 0.77, which is good for internally consistent scales. The concurrent validity was good.
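As a worked illustration of the reliability analysis in the CDC HRQOL-4 record above, the sketch below computes Cronbach's alpha from an item-score matrix; the response data are fabricated for illustration, not the Utrecht sample.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Fabricated responses of 5 people to a 3-item scale.
scores = np.array([[3, 4, 3],
                   [2, 2, 3],
                   [4, 5, 5],
                   [1, 2, 1],
                   [3, 3, 4]])
# Values around 0.7-0.8, like the 0.77 reported above, indicate good
# internal consistency.
print(round(cronbach_alpha(scores), 2))
```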
The four items of the CDC HRQOL-4 showed higher correlations with their corresponding domains of the other instrument than with the other domains. Comparison of respondents with or without a chronic condition, depression, a visit to the GP and use of prescription drugs produced evidence for an excellent construct validity of the CDC HRQOL-4."} {"_id": "6ef623c853814ee9add50c32491367e0dff983d1", "title": "Hardware/Software Codesign: The Past, the Present, and Predicting the Future", "text": "Hardware/software codesign investigates the concurrent design of hardware and software components of complex electronic systems. It tries to exploit the synergy of hardware and software with the goal of optimizing and/or satisfying design constraints such as cost, performance, and power of the final product. At the same time, it aims to reduce the time-to-market window considerably. This paper presents major achievements of two decades of research on methods and tools for hardware/software codesign by starting with a historical survey of its roots, by highlighting its major research directions and achievements until today, and finally, by predicting in which direction research in codesign might evolve in the decades to come."} {"_id": "6a89449e763fea5c9a06c30e4c11072de54f6f49", "title": "Analysis of Audio-Visual Features for Unsupervised Speech Recognition", "text": "Research on "zero resource" speech processing focuses on learning linguistic information from unannotated, or raw, speech data, in order to bypass the expensive annotations required by current speech recognition systems. While most recent zero-resource work has made use of only speech recordings, here, we investigate the use of visual information as a source of weak supervision, to see whether grounding speech in a visual context can provide additional benefit for language learning. Specifically, we use a dataset of paired images and audio captions to supervise learning of low-level speech features that can be used for further "unsupervised" processing of any speech data. We analyze these features and evaluate their performance on the Zero Resource Challenge 2015 evaluation metrics, as well as standard keyword spotting and speech recognition tasks. We show that features generated with a joint audiovisual model contain more discriminative linguistic information and are less speaker-dependent than traditional speech features. Our results show that visual grounding can improve speech representations for a variety of zero-resource tasks."} {"_id": "700a4c8dba65fe5d3a595c30de7542488815327c", "title": "Organizational behavior: affect in the workplace.", "text": "The study of affect in the workplace began and peaked in the 1930s, with the decades that followed up to the 1990s not being particularly fertile. Whereas job satisfaction generally continues to be loosely but not carefully thought of and measured as an affective state, critical work in the 1990s has raised serious questions about the affective status of job satisfaction in terms of its causes as well as its definition and measurement. Recent research has focused on the production of moods and emotions at work, with an emphasis, at least conceptually, on stressful events, leaders, work groups, physical settings, and rewards/punishment. Other recent research has addressed the consequences of workers' feelings, in particular, a variety of performance outcomes (e.g., helping behaviors and creativity).
Even though recent interest in affect in the workplace has been intense, many theoretical and methodological opportunities and challenges remain."} {"_id": "95fc8418de56aebf867669c834fbef75f593f403", "title": "Personality Traits Recognition on Social Network - Facebook", "text": "For natural social interaction, it is necessary to understand human behavior. Personality is one of the fundamental aspects by which we can understand behavioral dispositions. It is evident that there is a strong correlation between users' personality and the way they behave on online social networks (e.g., Facebook). This paper presents automatic recognition of Big-5 personality traits on a social network (Facebook) using users' status text. For the automatic recognition we studied different classification methods such as SMO (Sequential Minimal Optimization for Support Vector Machine), Bayesian Logistic Regression (BLR) and Multinomial Naïve Bayes (MNB) sparse modeling. Performance of the systems was measured using macro-averaged precision, recall and F1; weighted average accuracy (WA) and un-weighted average accuracy (UA). Our comparative study shows that MNB performs better than BLR and SMO for personality traits recognition on the social network data."} {"_id": "a7ab4065bad554f5d26f597256fe7225ad5842a6", "title": "Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists", "text": "Functional analysis of large gene lists, derived in most cases from emerging high-throughput genomic, proteomic and bioinformatics scanning approaches, is still a challenging and daunting task. The gene-annotation enrichment analysis is a promising high-throughput strategy that increases the likelihood for investigators to identify biological processes most pertinent to their study. Approximately 68 bioinformatics enrichment tools that are currently available in the community are collected in this survey. Tools are uniquely categorized into three major classes, according to their underlying enrichment algorithms. The comprehensive collections, unique tool classifications and associated questions/issues will provide a more comprehensive and up-to-date view regarding the advantages, pitfalls and recent trends at a simpler tool-class level rather than by a tool-by-tool approach. Thus, the survey will help tool designers/developers and experienced end users understand the underlying algorithms and pertinent details of particular tool categories/tools, enabling them to make the best choices for their particular research interests."} {"_id": "d8753f340fb23561ca27009a694998e29513658b", "title": "A Hybrid Gripper With Soft Material and Rigid Structures", "text": "Various robotic grippers have been developed over the past several decades for robotic manipulators. In particular, soft grippers based on the soft pneumatic actuator (SPA) have been studied actively, since the SPA offers more pliable bending motion, inherent compliance, and a simple morphological structure. However, few studies have focused on simultaneously improving the fingertip force and actuation speed within the specified design parameters.
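A minimal sketch of the best-performing setup in the personality-recognition record above (Multinomial Naive Bayes over status text) follows, using scikit-learn stand-ins; the toy statuses, the binary extraversion labels, and the bag-of-words features are fabricated assumptions, not the study's Facebook data or exact pipeline.

```python
# Multinomial Naive Bayes over status text for one personality trait.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

statuses = ["had an amazing night out with friends!!",
            "party at my place, everyone come",
            "quiet evening, reading and tea",
            "staying in again, just me and a book"]
extraverted = [1, 1, 0, 0]   # hypothetical binary labels for one Big-5 trait

# Bag-of-words counts feed the multinomial likelihood of each class.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(statuses, extraverted)
print(model.predict(["come party with friends tonight"]))   # -> [1]
```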
In this study, we developed a hybrid gripper that incorporates both soft and rigid components to improve the fingertip force and actuation speed simultaneously based on three design principles: first, the degree of bending is proportional to the ratio of the rigid structure; second, a concave chamber design is preferred for large longitudinal strain; and third, a round shape between soft and rigid materials increases the fingertip force. The suggested principles were verified using finite element methods. The improved performance of the hybrid gripper was verified experimentally and compared with the performance of a conventional SPA. The ability of the hybrid gripper to grasp different objects was evaluated, and the gripper was applied in a teleoperated system."} {"_id": "0502017650165c742ad7eaf14fe72b5925c90707", "title": "Finger ECG signal for user authentication: Usability and performance", "text": "Over the past few years, the evaluation of Electrocardiographic (ECG) signals as a prospective biometric modality has revealed promising results. Given the vital and continuous nature of this information source, ECG signals offer several advantages to the field of biometrics; yet, several challenges currently prevent the ECG from being adopted as a biometric modality in operational settings. These arise partially due to the ECG signal's clinical tradition and intrusiveness, but also from the lack of evidence on the permanence of the ECG templates over time. The problem of intrusiveness has recently been overcome with the "off-the-person" approach for capturing ECG signals. In this paper we provide an evaluation of the permanence of ECG signals collected at the fingers, with respect to the biometric authentication performance. Our experimental results on a small dataset suggest that further research is necessary to account for and understand sources of variability found in some subjects. Despite these limitations, "off-the-person" ECG appears to be a viable trait for multi-biometric or standalone biometrics in low-user-throughput, real-world scenarios."} {"_id": "a691d3a72a20bf24ecc460cc4a6e7d84f4146d65", "title": "The future of parking: A survey on automated valet parking with an outlook on high density parking", "text": "In the near future, humans will be relieved of parking. Major improvements in autonomous driving allow the realization of automated valet parking (AVP). It enables the vehicle to drive to a parking spot and park itself. This paper presents a review of the intelligent vehicles literature on AVP. An overview and analysis of the core components of AVP such as the platforms, sensor setups, maps, localization, perception, environment model, and motion planning is provided. Leveraging the potential of AVP, high density parking (HDP) is reviewed as a future research direction with the capability to either reduce the necessary space for parking by up to 50% or increase the capacity of future parking facilities. Finally, a synthesized view discussing the remaining challenges in automated valet parking and the technological requirements for high density parking is given."} {"_id": "4811820379d8db823ac9cae55218b4d60dcd95a8", "title": "Web Application Migration with Closure Reconstruction", "text": "Due to their high portability and simplicity, web applications (apps) based on HTML/JavaScript/CSS have been widely used for various smart-device platforms. To take advantage of this wide platform pool, a new idea called app migration has been proposed for the web platform.
Web app migration is a framework to serialize a web app running on one device and restore it on another device to continue its execution. In JavaScript semantics, one of the language features that does not allow easy app migration is the closure. A JavaScript function can access variables defined in its outer function even after the execution of the outer function has terminated. This is possible because the inner function is created as a closure that contains the outer function's environment. This feature is widely used in web app development because it is the most common way to implement data encapsulation in web programming. Closures are not easy to serialize because environments can be shared by a number of closures and environments can be created in a nested way. In this paper, we propose a novel approach to fully serialize closures. We created mechanisms to extract information from a closure's environment through the JavaScript engine and to serialize the information in a proper order so that the original relationship between closures and environments can be restored properly. We implemented our mechanism on the WebKit browser and successfully migrated Octane benchmarks and seven real web apps which heavily exploit closures. We also show that our mechanism works correctly even for some extreme, closure-heavy cases."} {"_id": "0c50901215676b09da8cc235773b999a1857e558", "title": "A 0.5V Ultra-Low-Power OTA With Improved Gain and Bandwidth", "text": "Ultra-low-power and ultra-low-voltage operational transconductance amplifiers (OTAs) suffer from low unity-gain frequency and low DC gain. In this paper, we present an improved 0.5-V ultra-low-power OTA whose transistors operate in weak inversion. The proposed topology, based on a bulk-driven input differential pair, employs a gain stage operating in weak inversion in the Miller capacitor feedback path to improve the "pole-splitting" effect. Simulations in a standard 0.18 μm CMOS process show considerable enhancement in both the unity-gain bandwidth and the DC gain. The topology presents rail-to-rail input and output swings and consumes only 1 μW."} {"_id": "950b7920c148ab40633d123046af2f8768d10551", "title": "Performance analysis of supervised machine learning algorithms for text classification", "text": "The demand for text classification is growing significantly in web search, data mining, web ranking, recommendation systems and many other fields of information technology. This paper illustrates the text classification process on different datasets using some standard supervised machine learning techniques. Text documents can be classified through various kinds of classifiers; in supervised classification, labeled text documents are used to classify the text. This paper applies these classifiers to different kinds of labeled documents and measures their accuracy. An Artificial Neural Network (ANN) model using a Back Propagation Network (BPN) is used with several other models to create an independent platform for the labeled, supervised text classification process. An existing benchmark approach is used to analyze the performance of classification using labeled documents.
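The closure-reconstruction record above serializes JavaScript environments through the engine. As a language-neutral illustration of what such a framework must capture, here is the analogous introspection in Python, whose closures also carry environment cells that can be shared between functions; this is an analogy, not the paper's WebKit mechanism.

```python
# Closures carry environment cells; shared cells must be restored as shared.
def make_counter():
    count = 0                      # lives in the outer function's environment
    def increment():
        nonlocal count
        count += 1
        return count
    def peek():
        return count
    return increment, peek

inc, peek = make_counter()
inc(); inc()

# Each closure records the names and current values it captured...
for fn in (inc, peek):
    names = fn.__code__.co_freevars
    values = [cell.cell_contents for cell in fn.__closure__]
    print(fn.__name__, dict(zip(names, values)))   # {'count': 2} for both

# ...and a shared environment shows up as the very same cell object, the
# relationship a migration framework must reproduce on the target device.
print(inc.__closure__[0] is peek.__closure__[0])   # True
```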
Experimental analysis on real data reveals which model works well in terms of classification accuracy."} {"_id": "a06cc19b0d8530bde5c58d81aa500581876a1222", "title": "A risk management ontology for Quality-by-Design based on a new development approach according GAMP 5.0", "text": "A new approach to the development of a risk management ontology is presented. This method meets the requirements of a pharmaceutical Quality by Design approach, good manufacturing practice and good automated manufacturing practice. The need for a risk management ontology for a pharmaceutical environment is demonstrated, and the term "ontology" is generally defined and described with regard to the knowledge domain of quality risk management. To fulfill software development requirements defined by good manufacturing practice regulations and good automated manufacturing practice 5.0 for the novel development approach, we used a V-model as a process model, which is discussed in detail. The development steps for the new risk management ontology, such as requirement specification, conceptualization, formalization, implementation and validation approach, are elaborated."} {"_id": "6912446b6382ccbe8fa50a6f06f5b75d345c4d4e", "title": "Low-voltage low-power fast-settling CMOS operational transconductance amplifiers for switched-capacitor applications", "text": "This paper presents a new fully differential operational transconductance amplifier (OTA) for low-voltage and fast-settling switched-capacitor circuits in digital CMOS technology. The proposed two-stage OTA is a hybrid class A/AB that combines a folded cascode as the first stage with active current mirrors as the second stage. It employs a hybrid cascode compensation scheme, merging Ahuja and improved-Ahuja-style compensations, for fast settling."} {"_id": "864f0e5e317a7d304dcc1dfca176b7afd230f4c2", "title": "Focal loss dense detector for vehicle surveillance", "text": "Deep learning has been widely recognized as a promising approach in different computer vision applications. Specifically, one-stage and two-stage object detectors are regarded as the two most important groups of convolutional-neural-network-based object detection methods. One-stage detectors usually outperform two-stage detectors in speed; however, they normally trail in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage detector, is utilized for vehicle detection so as to match the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is shown on the DETRAC vehicle dataset."} {"_id": "267cf2013307453bb796c8c7f59522dc852f840c", "title": "Online Parameter Estimation Technique for Adaptive Control Applications of Interior PM Synchronous Motor Drives", "text": "This paper proposes an online parameter estimation method based on a discrete-time dynamic model for interior permanent-magnet synchronous motors (IPMSMs). The proposed estimation technique, which takes advantage of the difference in dynamics of motor parameters, consists of two affine projection algorithms. The first one is designed to accurately estimate the stator inductances, whereas the second one is designed to precisely estimate the stator resistance, rotor flux linkage, and load torque.
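Since the record above builds its two estimators from affine projection algorithms, a minimal sketch of a generic affine projection update for identifying the parameters of a linear model from streaming samples may help; the toy system, step size, and projection order are illustrative assumptions, not the paper's IPMSM model.

```python
# Generic affine projection algorithm (APA) for online parameter estimation.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.7, 0.3])     # unknown parameters to recover
w = np.zeros(3)                         # running estimate
mu, eps, P = 0.5, 1e-6, 2               # step size, regularizer, projection order

X_hist, d_hist = [], []
for _ in range(200):
    x = rng.normal(size=3)
    d = x @ w_true + 0.01 * rng.normal()      # noisy streaming measurement
    X_hist.append(x); d_hist.append(d)
    X = np.array(X_hist[-P:]); dv = np.array(d_hist[-P:])
    e = dv - X @ w                             # a-priori errors on the last P samples
    # APA update: project the error back through the last P regressors.
    w = w + mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(len(X)), e)

print(np.round(w, 3))                          # converges close to w_true
```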
In this paper, the adaptive decoupling proportional-integral (PI) controllers with maximum torque per ampere operation, which utilize the previously identified parameters in real time, are chosen to verify the effectiveness of the proposed parameter estimation scheme. The simulation results via MATLAB/Simulink and the experimental results via a prototype IPMSM drive system with a TI TMS320F28335 DSP are presented under various conditions. A comparative study with the conventional decoupling PI control method is carried out to demonstrate the superior performance (faster dynamic response, smaller steady-state error, greater robustness) of the adaptive decoupling PI control scheme based on the proposed online parameter estimation technique."} {"_id": "80be8624771104ff4838dcba9629bacfe6b3ea09", "title": "Simultaneous Feature and Dictionary Learning for Image Set Based Face Recognition", "text": "In this paper, we propose a simultaneous feature and dictionary learning (SFDL) method for image set-based face recognition, where each training and testing example contains a set of face images, which were captured under different variations of pose, illumination, expression, resolution, and motion. While a variety of feature learning and dictionary learning methods have been proposed in recent years and some of them have been successfully applied to image set-based face recognition, most of them learn features and dictionaries for facial image sets individually, which may not be powerful enough because some discriminative information for dictionary learning may be compromised in the feature learning stage if they are applied sequentially, and vice versa. To address this, we propose an SFDL method to learn discriminative features and dictionaries simultaneously from raw face pixels so that discriminative information from facial image sets can be jointly exploited by a one-stage learning procedure. To better exploit the nonlinearity of face samples from different image sets, we propose a deep SFDL (D-SFDL) method by jointly learning hierarchical non-linear transformations and class-specific dictionaries to further improve the recognition performance. Extensive experimental results on five widely used face data sets clearly show that our SFDL and D-SFDL achieve very competitive or even better performance compared with the state of the art."} {"_id": "5b8e4b909309910fd393e2eb8ca17c9540862141", "title": "Sentence level emotion recognition based on decisions from subsentence segments", "text": "Emotion recognition from speech plays an important role in developing affective and intelligent systems. This study investigates sentence-level emotion recognition. We propose a two-step approach to leverage information from subsentence segments for sentence-level decisions. First, we use a segment-level emotion classifier to generate predictions for segments within a sentence. A second component combines the predictions from these segments to obtain a sentence-level decision. We evaluate different segment units (words, phrases, time-based segments) and different decision combination methods (majority vote, average of probabilities, and a Gaussian Mixture Model (GMM)). Our experimental results on two different data sets show that our proposed method significantly outperforms the standard sentence-based classification approach.
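A minimal sketch of the decision-combination step in the emotion-recognition record above follows: segment-level class probabilities are merged into a sentence-level label by majority vote or by averaging (the GMM combiner is omitted). The probabilities are fabricated classifier outputs, not results from the study.

```python
# Combining segment-level decisions into one sentence-level label.
import numpy as np

# Per-segment P(class) over 3 emotion classes for one sentence's segments.
seg_probs = np.array([[0.5, 0.3, 0.2],
                      [0.2, 0.6, 0.2],
                      [0.4, 0.5, 0.1],
                      [0.6, 0.2, 0.2]])

# Majority vote: each segment casts one vote for its top class.
majority = np.bincount(seg_probs.argmax(axis=1), minlength=3).argmax()
# Average of probabilities: pool the distributions, then pick the top class.
averaged = seg_probs.mean(axis=0).argmax()
print("majority vote:", majority, "| average of probabilities:", averaged)
```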
In addition, we find that using time-based segments achieves the best performance, and thus no speech recognition or alignment is needed when using our method, which is important for developing language-independent emotion recognition systems."} {"_id": "11651db02c4a243b5177516e62a45f952dc54430", "title": "Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding", "text": "The goal of this paper is to use multi-task learning to efficiently scale slot filling models for natural language understanding to handle multiple target tasks or domains. The key to scalability is reducing the amount of training data needed to learn a model for a new task. The proposed multi-task model delivers better performance with less data by leveraging patterns that it learns from the other tasks. The approach supports an open vocabulary, which allows the models to generalize to unseen words, which is particularly important when very little training data is used. A newly collected crowd-sourced data set, covering four different domains, is used to demonstrate the effectiveness of the domain adaptation and open vocabulary techniques."} {"_id": "f70391f9a6aeec75bc52b9fe38588b9d0b4d40c6", "title": "Learning Character-level Representations for Part-of-Speech Tagging", "text": "Distributed word representations have recently been proven to be an invaluable resource for NLP. These representations are normally learned using neural networks and capture syntactic and semantic information about words. Information about word morphology and shape is normally ignored when learning word representations. However, for tasks like part-of-speech tagging, intra-word information is extremely useful, especially when dealing with morphologically rich languages. In this paper, we propose a deep neural network that learns character-level representations of words and associates them with usual word representations to perform POS tagging. Using the proposed approach, while avoiding the use of any handcrafted features, we produce state-of-the-art POS taggers for two languages: English, with 97.32% accuracy on the Penn Treebank WSJ corpus; and Portuguese, with 97.47% accuracy on the Mac-Morpho corpus, where the latter represents an error reduction of 12.2% on the best previously known result."} {"_id": "ff1577528a34a11c2a81d2451d346c412c674c02", "title": "Character-based Neural Machine Translation", "text": "We introduce a neural machine translation model that views the input and output sentences as sequences of characters rather than words. Since word-level information provides a crucial source of bias, our input model composes representations of character sequences into representations of words (as determined by whitespace boundaries), and then these are translated using a joint attention/translation model. In the target language, the translation is modeled as a sequence of word vectors, but each word is generated one character at a time, conditional on the previous character generations in each word. As the representation and generation of words is performed at the character level, our model is capable of interpreting and generating unseen word forms. A secondary benefit of this approach is that it alleviates many of the challenges associated with preprocessing/tokenization of the source and target languages.
We show that our model can achieve translation results that are on par with conventional word-based models."} {"_id": "00a28138c74869cfb8236a18a4dbe3a896f7a812", "title": "Better Word Representations with Recursive Neural Networks for Morphology", "text": "Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologically-aware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way."} {"_id": "6ce9d1c1e9a8b889a04faa98f96b0df1b93fcc84", "title": "Adoption and Impact of IT Governance and Management Practices: A COBIT 5 Perspective", "text": "This paper empirically investigates how adoption of IT governance and management processes, as identified in the IT governance framework COBIT 5, relates to the level of IT-related goals achievement, which in turn is associated with the level of enterprise goals achievement. Simultaneously, this research project provides an international benchmark on how organizations are currently adopting the governance and management processes as identified in COBIT 5. The findings suggest that organizations are best at adopting the "IT factory"-related management processes and that implementation scores drop in management and governance processes when more business and board involvement is required. Additionally, there are significant differences in perceived implementation maturity of COBIT 5 processes between SMEs and larger organizations. Also, the data offers empirical evidence that the COBIT 5 processes have a positive association with enterprise value creation. Keywords: COBIT 5, Enterprise Goals, Enterprise Governance and Management of IT, IT Governance, SMEs"} {"_id": "a8fd9be2f7775b123f62094eadd59d18bbbef027", "title": "Peephole: Predicting Network Performance Before Training", "text": "The quest for performant networks has been a significant force that drives the advancements of deep learning in recent years. While rewarding, improving network design has never been an easy journey. The large design space combined with the tremendous cost required for network training poses a major obstacle to this endeavor. In this work, we propose a new approach to this problem, namely, predicting the performance of a network before training, based on its architecture. Specifically, we develop a unified way to encode individual layers into vectors and bring them together to form an integrated description via LSTM. Taking advantage of the recurrent network's strong expressive power, this method can reliably predict the performance of various network architectures.
Our empirical studies showed that it not only achieved accurate predictions but also produced consistent rankings across datasets – a key desideratum in performance prediction."} {"_id": "7313097125b82b876e89b47b078fe85a56655e95", "title": "Bangladeshi banknote recognition by neural network with axis symmetrical masks", "text": "An automated banknote recognition system can be a very useful utility in banking systems and other fields of commerce. It can also aid visually impaired people. Although bill-money recognition machines are not yet common in Bangladesh, they are used in other countries. In this paper, for the first time, we propose a Neural Network based recognition scheme for Bangladeshi banknotes. The scheme can efficiently be implemented in cheap hardware, which may be very useful in many settings. The recognition system takes banknote images, scanned by low-cost optoelectronic sensors, and feeds them into a Multilayer Perceptron, trained by the Backpropagation algorithm, for recognition. Axis-symmetric masks are used in the preprocessing stage, which reduces the network size and guarantees correct recognition even if the note is flipped. Experimental results are presented which show that this scheme can recognize the 8 currently available notes (1, 2, 5, 10, 20, 50, 100 & 500 Taka) successfully with an average accuracy of 98.57%."} {"_id": "1f3774fecbe0ee12a2135cd05a761a7d2b537e13", "title": "Cowden syndrome.", "text": "BACKGROUND\nCowden syndrome is a rare genodermatosis characterized by the presence of multiple hamartomas. The aim of the study was to specify the clinical, therapeutic and prognostic aspects of Cowden syndrome.\n\n\nCASE REPORTS\nOur study included 4 patients with Cowden syndrome, 2 males and 2 females between 14 and 46 years old. Clinical examination of the skin revealed facial papules (4 cases), acral keratosis (1 case), and translucent keratotic papules (2 cases). Oral examination revealed papules (4 cases), papillomatosis (4 cases), gingival hypertrophy (4 cases) and scrotal tongue (2 cases). Investigations revealed thyroid lesions (2 cases), fibrocystic disease and lipoma of the breast in 1 case, \"glycogenic acanthosis\" (1 case), macrocephaly (2 cases), dysmorphic face (1 case) and lichen nitidus (1 case). Oral etretinate and acitretin were temporarily effective in 2 patients. Topical treatment with tretinoin lotion resulted in some improvement in cutaneous, but not mucosal, lesions in one patient. No cancer was revealed.\n\n\nCONCLUSION\nThe pathognomonic mucocutaneous lesions were found in all patients. However, no degenerative lesions were revealed. A new association of Cowden syndrome with lichen nitidus was found. Treatment with oral retinoids was effective on cutaneous lesions."} {"_id": "122374a3baf1e0efde03301226344a2d728eafc3", "title": "High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array.", "text": "PURPOSE\nThe purpose of this study is to investigate the feasibility of increasing the system spatial resolution and scanning speed of the Hologic Selenia Dimensions digital breast tomosynthesis (DBT) scanner by replacing the rotating mammography x-ray tube with a specially designed carbon nanotube (CNT) x-ray source array, which generates all the projection images needed for tomosynthesis reconstruction by electronically activating individual x-ray sources without any mechanical motion.
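To illustrate why the axis-symmetric masks in the banknote record above guarantee recognition of a flipped note, the sketch below pairs each sensor sample with its mirror partner: the resulting features are identical for a note and its flip, and the number of distinct inputs is halved, shrinking the network. The 1-D scanline is a fabricated stand-in for the optoelectronic sensor data, not the paper's exact masks.

```python
# Flip invariance from axis-symmetric feature combination.
import numpy as np

scan = np.array([3.0, 7.0, 1.0, 4.0, 9.0, 2.0])   # hypothetical sensor scanline
flipped = scan[::-1]                               # same note inserted the other way

def symmetric_features(x):
    # Pair each sample with its mirror image across the note's axis;
    # the symmetric sum is unchanged when the input is reversed.
    return x + x[::-1]

print(symmetric_features(scan))
print(np.array_equal(symmetric_features(scan),
                     symmetric_features(flipped)))   # True: flip-invariant
```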
The stationary digital breast tomosynthesis (s-DBT) design aims to (i) increase the system spatial resolution by eliminating image blurring due to x-ray tube motion and (ii) reduce the scanning time. Low spatial resolution and long scanning time are the two main technical limitations of current DBT technology.\n\n\nMETHODS\nA CNT x-ray source array was designed and evaluated against a set of targeted system performance parameters. Simulations were performed to determine the maximum anode heat load at the desired focal spot size and to design the electron focusing optics. Field emission current from the CNT cathode was measured for an extended period of time to determine the stable lifetime of the CNT cathode for an expected clinical operation scenario. The source array was manufactured, tested, and integrated with a Selenia scanner. An electronic control unit was developed to interface the source array with the detection system and to scan and regulate the x-ray beams. The performance of the s-DBT system was evaluated using physical phantoms.\n\n\nRESULTS\nThe spatially distributed CNT x-ray source array comprised 31 individually addressable x-ray sources covering a 30° angular span with 1° pitch and an isotropic focal spot size of 0.6 mm at full width at half-maximum. Stable operation at 28 kV(peak) anode voltage and 38 mA tube current was demonstrated with extended lifetime and good source-to-source consistency. For the standard imaging protocol of 15 views over 14°, 100 mAs dose, and 2 × 2 detector binning, the projection resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at a magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good quality reconstructed images.\n\n\nCONCLUSIONS\nA prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it improves system spatial resolution substantially by eliminating image blur due to x-ray focal spot motion. The scanning speed of the s-DBT system is independent of angular coverage and can be increased with a faster detector without image degradation. The accelerated lifetime measurement demonstrated the long-term stability of the CNT x-ray source array, with a typical clinical operation lifetime over 3 years."} {"_id": "0fc73c4a6e537b6c718ad54f47ae8847115a5d17", "title": "From Vision to NLP: A Merge", "text": "The study of artificial intelligence can be simplified into one goal: trying to mimic/enhance human senses. This paper attempts to combine computer vision and natural language processing to create a question answering system. This system takes a question and an image as input and outputs a response to the question based on how the RCNN understands it. The system correlates the question with the image by leveraging attention and memory mechanisms.
"} {"_id": "b4729432b23842ff6b1b126572f8fa17aca14758", "title": "Fast high-quality non-blind deconvolution using sparse adaptive priors", "text": "We present an efficient approach for high-quality non-blind deconvolution based on the use of sparse adaptive priors. Its regularization term enforces preservation of strong edges while removing noise. We model the image-prior deconvolution problem as a linear system, which is solved in the frequency domain. This clean formulation lends itself to a simple and efficient implementation. We demonstrate its effectiveness by performing an extensive comparison with existing non-blind deconvolution methods, and by using it to deblur photographs degraded by camera shake. Our experiments show that our solution is faster and its results tend to have higher peak signal-to-noise ratio than the state-of-the-art techniques. Thus, it provides an attractive alternative to perform high-quality non-blind deconvolution of large images, as well as to be used as the final step of blind-deconvolution algorithms."} {"_id": "41a5499a8e4a55a16c94b1944a74274f4340be74", "title": "Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification", "text": "Human identification using fingerprint impressions has been widely studied and employed for more than 2000 years. Despite new advancements in 3D imaging technologies, a widely accepted representation of 3D fingerprint features and a matching methodology have yet to emerge. This paper investigates the 3D representation of widely employed 2D minutiae features by recovering and incorporating (i) minutiae height z and (ii) minutiae 3D orientation φ, and illustrates an effective strategy for matching popular minutiae features extended in 3D space. One of the obstacles preventing emerging 3D fingerprint identification systems from replacing conventional 2D fingerprint systems lies in their bulk and high cost, which mainly result from the use of structured lighting systems or multiple cameras. This paper attempts to address such key limitations of current 3D fingerprint technologies by developing a single-camera-based 3D fingerprint identification system. We develop a generalized 3D minutiae matching model and recover extended 3D fingerprint features from the reconstructed 3D fingerprints. The 2D fingerprint images acquired for the 3D fingerprint reconstruction can themselves be employed for performance improvement, as illustrated in the work detailed in this paper. This paper also attempts to answer one of the most fundamental questions on the availability of inherent discriminable information from 3D fingerprints. The experimental results are presented on a database of 240 clients' 3D fingerprints, which is made publicly available to further research efforts in this area, and illustrate the discriminant power of 3D minutiae representation and matching to achieve performance improvement."} {"_id": "8d7c2c3d03ae5360bb73ac818a0e36f324f1e8ce", "title": "PiCANet: Learning Pixel-wise Contextual Attention in ConvNets and Its Application in Saliency Detection", "text": "Context plays an important role in many computer vision tasks. Previous models usually construct contextual information from the whole context region. However, not all context locations are helpful and some of them may be detrimental to the final task.
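The deconvolution record above solves the image-prior system in the frequency domain. A minimal sketch of that style of solver follows, with a plain Tikhonov (Wiener-like) regularizer standing in for the paper's sparse adaptive priors; the kernel, image, and regularization weight are illustrative assumptions.

```python
# Non-blind deconvolution as a per-frequency linear solve.
import numpy as np

def deconvolve(blurred, kernel, lam=1e-2):
    """Solve min_x ||k * x - y||^2 + lam * ||x||^2 in the Fourier domain."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)   # closed form per frequency
    return np.real(np.fft.ifft2(X))

# Fabricated example: blur a random "image" with a 3x3 box kernel, recover it.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
k = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
restored = deconvolve(blurred, k)
print(float(np.abs(restored - img).mean()))       # modest reconstruction error
```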
To solve this problem, we propose a novel pixel-wise contextual attention network, i.e., the PiCANet, to learn to selectively attend to informative context locations for each pixel. Specifically, it can generate an attention map over the context region for each pixel, where each attention weight corresponds to the contextual relevance of each context location w.r.t. the specified pixel location. Thus, an attended contextual feature can be constructed by using the attention map to aggregate the contextual features. We formulate PiCANet in a global form and a local form to attend to global contexts and local contexts, respectively. Our designs for the two forms are both fully differentiable. Thus they can be embedded into any CNN architectures for various computer vision tasks in an end-to-end manner. We take saliency detection as an example application to demonstrate the effectiveness of the proposed PiCANets. Specifically, we embed global and local PiCANets into an encoder-decoder ConvNet hierarchically. Thorough analyses indicate that the global PiCANet helps to construct global contrast while the local PiCANets help to enhance the feature maps to be more homogeneous, thus making saliency detection results more accurate and uniform. As a result, our proposed saliency model achieves state-of-the-art results on 4 benchmark datasets. (This paper was previously submitted to CVPR 2017 and ICCV 2017; this is a slightly revised version based on our previous submission.)"} {"_id": "35875600a30f89ea133ac06afeefc8cacec9fb3d", "title": "Can virtual reality simulations be used as a research tool to study empathy, problems solving and perspective taking of educators?: theory, method and application", "text": "Simulations in virtual environments are becoming an important research tool for educators. These simulations can be used in a variety of areas from the study of emotions and psychological disorders to more effective training. The current study uses animated narrative vignette simulation technology to observe a classroom situation depicting a low-level aggressive peer-to-peer victim dyad. Participants were asked to respond to this situation as if they were the teacher, and these responses were then coded and analyzed. Consistent with other literature, the pre-service teachers expressed very little empathic concern, problem-solving or management of the situation with the victim. Future directions and educational implications are presented."} {"_id": "5f17b4a08d14afaa14a695573ad598dcb763c623", "title": "Comparison of SVM and ANN for classification of eye events in EEG", "text": "The eye events (eye blink, eyes close and eyes open) are usually considered as biological artifacts in the electroencephalographic (EEG) signal. One can control his or her eye blink by proper training, and hence eye blinks can be used as a control signal in Brain Computer Interface (BCI) applications. Support vector machines (SVM) have in recent years proved to be among the best classification tools. A comparison of SVM with an Artificial Neural Network (ANN) always provides fruitful results. A one-against-all SVM and a multilayer ANN are trained to detect the eye events, and a comparison of both is made in this paper."} {"_id": "f971f858e59edbaeb0b75b7bcbca0d5c7b7d8065", "title": "Effects of Fitness Applications with SNS: How Do They Influence Physical Activity", "text": "Fitness applications with social network services (SNS) have emerged for physical activity management.
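For the EEG eye-event record above, the SVM-versus-ANN comparison can be set up in a few lines with scikit-learn. The features and labels below are random placeholders, not real EEG data; the class encoding is hypothetical.

import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features: rows = EEG epochs, columns = e.g. band powers per channel
X = np.random.randn(300, 32)
y = np.random.randint(0, 3, 300)   # 0 = blink, 1 = eyes close, 2 = eyes open

svm = OneVsRestClassifier(SVC(kernel="rbf", C=1.0))          # one-against-all SVM
ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)  # multilayer ANN

for name, clf in [("one-vs-all SVM", svm), ("MLP ANN", ann)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())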
However, there is little understanding of the effects of these applications on users\u2019 physical activity. Thus motivated, we develop a theoretical model based on the social cognitive theory of self-regulation to explain the effects of goal setting, self-tracking, and SNS (through social comparison from browsing others\u2019 tracking data and social support from sharing one\u2019s own tracking data) on physical activity. The model was tested with objective data from 476 Runkeeper users. Our preliminary results show that goal setting, number of uses, the proportion of lower-performing friends, and number of likes positively influence users\u2019 physical activity, while the proportion of higher-performing friends has a negative effect. Moreover, the effect of the proportion of higher-performing friends is moderated by the average difference between the friends and the individual. The initial contributions of the study and the remaining research plan are described."} {"_id": "44911bdb33d0dc1781016e9afe605a9091ea908b", "title": "Vehicle license plate detection and recognition using non-blind image de-blurring algorithm", "text": "This paper proposes a method for vehicle license plate recognition, which is essential in the field of intelligent transportation systems. The purpose of the study is to present a simple and effective vehicle license plate detection and recognition method using a non-blind image de-blurring algorithm. The sharpness of the edges in an image is restored using prior information on images. The blur kernel is kept free of noise while using the non-blind image de-blurring algorithm. The non-blind image de-blurring (NBID) algorithm avoids the optimization difficulties associated with jointly estimating an unknown image and an unknown blur. The length of the motion kernel is estimated with a Radon transform in the Fourier domain. The proposed algorithm was tested on different vehicle images and achieved satisfactory results in license plate detection. The experimental results demonstrate the efficiency and robustness of the proposed algorithm on both synthesized and real images."} {"_id": "36973330ae638571484e1f68aaf455e3e6f18ae9", "title": "Scale-Aware Fast R-CNN for Pedestrian Detection", "text": "In this paper, we consider the problem of pedestrian detection in natural scenes. Intuitively, instances of pedestrians with different spatial scales may exhibit dramatically different features. Thus, large variance in instance scales, which results in undesirable large intracategory variance in features, may severely hurt the performance of modern object instance detection methods. We argue that this issue can be substantially alleviated by the divide-and-conquer philosophy. Taking pedestrian detection as an example, we illustrate how we can leverage this philosophy to develop a Scale-Aware Fast R-CNN (SAF R-CNN) framework. The model introduces multiple built-in subnetworks which detect pedestrians with scales from disjoint ranges. Outputs from all of the subnetworks are then adaptively combined to generate the final detection results that are shown to be robust to large variance in instance scales, via a gate function defined over the sizes of object proposals. Extensive evaluations on several challenging pedestrian detection datasets well demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our method achieves state-of-the-art performance on Caltech [P. Dollar, C. Wojek, B. Schiele, and P.
Perona, \u201cPedestrian detection: An evaluation of the state of the art,\u201d IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 4, pp. 743\u2013761, Apr. 2012], and obtains competitive results on INRIA [N. Dalal and B. Triggs, \u201cHistograms of oriented gradients for human detection,\u201d in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2005, pp. 886\u2013893], ETH [A. Ess, B. Leibe, and L. V. Gool, \u201cDepth and appearance for mobile scene analysis,\u201d in Proc. Int. Conf. Comput. Vis., 2007, pp. 1\u20138], and KITTI [A. Geiger, P. Lenz, and R. Urtasun, \u201cAre we ready for autonomous driving? The KITTI vision benchmark suite,\u201d in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2012, pp. 3354\u20133361]."} {"_id": "f1e1437ad6cada93dc8627f9c9679ffee02d921c", "title": "Behavior-Based Telecommunication Churn Prediction with Neural Network Approach", "text": "A behavior-based telecom customer churn prediction system is presented in this paper. Unlike conventional churn prediction methods, which use customer demographics, contractual data, customer service logs, call-details, complaint data, bill and payment as inputs and churn as target output, only customer service usage information is included in this system to predict customer churn using a clustering algorithm. This avoids problems that traditional methods face, such as missing or unreliable data and correlation among inputs. This study provides a new way to solve traditional churn prediction problems."} {"_id": "3769e65690e424808361e3eebfdec8ab91908aa9", "title": "Affective Image Retrieval via Multi-Graph Learning", "text": "Images can convey rich emotions to viewers. Recent research on image emotion analysis mainly focused on affective image classification, trying to find features that can classify emotions better. We concentrate on affective image retrieval and investigate the performance of different features on different kinds of images in a multi-graph learning framework. Firstly, we extract commonly used features of different levels for each image. Generic features and features derived from elements-of-art are extracted as low-level features. Attributes and interpretable principles-of-art based features are viewed as mid-level features, while semantic concepts described by adjective noun pairs and facial expressions are extracted as high-level features. Secondly, we construct a single graph for each kind of feature to test the retrieval performance. Finally, we combine the multiple graphs together in a regularization framework to learn the optimized weights of each graph to efficiently explore the complementarity of different features. Extensive experiments are conducted on five datasets and the results demonstrate the effectiveness of the proposed method."} {"_id": "00c57bb8c7d2ce7c8f32aef062ef3d61b9961711", "title": "Everyday dwelling with WhatsApp", "text": "In this paper, we present a study of WhatsApp, an instant messaging smartphone application. Through our interviews with participants, we develop anthropologist Tim Ingold's notion of dwelling, and discuss how use of WhatsApp is constitutive of a felt-life of being together with those close by. We focus on the relationship \"doings\" in WhatsApp and how this togetherness and intimacy are enacted through small, continuous traces of narrative, of tellings and tidbits, noticings and thoughts, shared images and lingering pauses; this is constitutive of dwelling.
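A minimal sketch of the idea in the behavior-based churn record above: cluster customers on service-usage features alone, then inspect each cluster's historical churn rate. The data are synthetic placeholders, and KMeans is just one plausible choice for the clustering step.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical usage matrix: rows = customers, columns = counts per service type
usage = np.random.poisson(3.0, size=(1000, 8)).astype(float)
churned = np.random.rand(1000) < 0.15        # placeholder historical churn labels

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(usage))

# Score each behavior cluster by its churn rate; flag high-risk clusters
for k in range(5):
    rate = churned[labels == k].mean()
    print(f"cluster {k}: churn rate {rate:.2%}" + (" <- high risk" if rate > 0.2 else ""))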
Further, we discuss how an intimate knowing of others in these relationships, through past encounters and knowledge of coming together in the future, pertains to the particular forms of relationship engagements manifest through the possibilities presented in WhatsApp. We suggest that this form of sociality is likely to be manifest in other smartphone IM-like applications."} {"_id": "41ca84736e55375c73416b7d698fb72ad3ccde67", "title": "Enabling Real-Time Context-Aware Collaboration through 5G and Mobile Edge Computing", "text": "Creating context-aware ad hoc collaborative systems remains one of the primary hurdles hampering the ubiquitous deployment of IT and communication services. Especially under mission-critical scenarios, these services must often adhere to strict timing deadlines. We believe empowering such real-time collaboration systems requires context-aware application platforms working in conjunction with ultra-low latency data transmissions. In this paper, we make a strong case that this could be accomplished by combining the novel communication architectures being proposed for 5G with the principles of Mobile Edge Computing (MEC). We show that combining 5G with MEC would enable inter- and intra-domain use cases that are otherwise not feasible."} {"_id": "75c72ed46042f172d174afe106ce41ec8dee71ae", "title": "Modeling of magnetically biased graphene patch frequency selective surface (FSS)", "text": "A free-standing magnetically biased graphene patch frequency selective surface (FSS) is modelled in this paper. Its transmission coefficients of co- and cross-polarizations can be obtained with an equivalent tensorial surface conductivity. Then, the rotation angle for normal incidence is explored with different values of the biasing magnetic field B0. The maximum rotation angle provided by the magnetically biased graphene patch FSS is 43\u00b0 at 4.7 THz with B0=2.5 T, which is much larger than the rotation angle provided by a graphene sheet. This is very promising for THz nano-devices based on the giant Faraday rotation of graphene."} {"_id": "1c28f6b59c209e4f73809d5a5d16d672d26ad1d8", "title": "Cooperative Localization for Autonomous Underwater Vehicles", "text": "Self-localization of an underwater vehicle is particularly challenging due to the absence of Global Positioning System (GPS) reception or features at known positions that could otherwise have been used for position computation. Thus Autonomous Underwater Vehicle (AUV) applications typically require the pre-deployment of a set of beacons. This thesis examines the scenario in which the members of a group of AUVs exchange navigation information with one another so as to improve their individual position estimates. We describe how the underwater environment poses unique challenges to vehicle navigation not encountered in other environments in which robots operate and how cooperation can improve the performance of self-localization. As inter-vehicle communication is crucial to cooperation, we also address the constraints of the communication channel and the effect that these constraints have on the design of cooperation strategies. The classical approaches to underwater self-localization of a single vehicle, as well as more recently developed techniques, are presented. We then examine how methods used for cooperating land vehicles can be transferred to the underwater domain. An algorithm for distributed self-localization, which is designed to take the specific characteristics of the environment into account, is proposed.
We also address how correlated position estimates of cooperating vehicles can lead to overconfidence in individual position estimates. Finally, key to any successful cooperative navigation strategy is the incorporation of the relative positioning between vehicles. The performance of localization algorithms with different geometries is analyzed and a distributed algorithm for the dynamic positioning of vehicles, which serve as dedicated navigation beacons for a fleet of AUVs, is proposed.
"} {"_id": "d9ad9a0dc7c4032b085aa621bba108a9e14fd83f", "title": "Blitz: A Principled Meta-Algorithm for Scaling Sparse Optimization", "text": "By reducing optimization to a sequence of small subproblems, working set methods achieve fast convergence times for many challenging problems. Despite excellent performance, theoretical understanding of working sets is limited, and implementations often resort to heuristics to determine subproblem size, makeup, and stopping criteria. We propose BLITZ, a fast working set algorithm accompanied by useful guarantees. Making no assumptions on data, our theory relates subproblem size to progress toward convergence. This result motivates methods for optimizing algorithmic parameters and discarding irrelevant variables as iterations progress. Applied to \u21131-regularized learning, BLITZ convincingly outperforms existing solvers in sequential, limited-memory, and distributed settings. BLITZ is not specific to \u21131-regularized learning, making the algorithm relevant to many applications involving sparsity or constraints."} {"_id": "aab8c9514b473c4ec9c47d780b7c79112add9008", "title": "Using Case Studies in Research", "text": "Case study as a research strategy often emerges as an obvious option for students and other new researchers who are seeking to undertake a modest scale research project based on their workplace or the comparison of a limited number of organisations. The most challenging aspect of the application of case study research in this context is to lift the investigation from a descriptive account of \u2018what happens\u2019 to a piece of research that can lay claim to being a worthwhile, if modest, addition to knowledge.
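The working-set strategy described in the BLITZ record above can be caricatured for the lasso in a short loop: repeatedly solve a small subproblem restricted to the coordinates that most violate optimality, growing the set as needed. This is a generic sketch (NumPy assumed; the set-growth rule, iteration counts, and tolerances are illustrative), not BLITZ's principled subproblem selection.

import numpy as np

def lasso_working_set(A, b, lam, outer=20, inner=50, tol=1e-6):
    """Sketch of a working-set loop for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    ws = set()
    col_sq = (A ** 2).sum(axis=0) + 1e-12
    for _ in range(outer):
        grad = A.T @ (A @ x - b)
        # KKT violations: |grad_j| - lam at zeros, |grad_j + lam*sign(x_j)| otherwise
        viol = np.abs(grad) - lam
        nz = x != 0
        viol[nz] = np.abs(grad[nz] + lam * np.sign(x[nz]))
        if viol.max() < tol:
            break
        ws |= set(np.argsort(viol)[-10:])    # grow set with the worst coordinates
        for _ in range(inner):               # coordinate descent on the subproblem
            for j in ws:
                r = b - A @ x + A[:, j] * x[j]
                z = A[:, j] @ r
                x[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return x

Restricting the inner solver to the working set keeps each subproblem cheap; the outer violation check decides when the set must grow and when the full problem is already solved.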
This article draws heavily on established textbooks on case study research and related areas, such as Yin, 1994, Hamel et al., 1993, Eaton, 1992, Gomm, 2000, Perry, 1998, and Saunders et al., 2000, but seeks to distil key aspects of case study research in such a way as to encourage new researchers to grapple with and apply some of the key principles of this research approach. The article explains when case study research can be used, research design, data collection, and data analysis, and finally offers suggestions for drawing on the evidence in writing up a report or dissertation."} {"_id": "d7f016f6ceb87092bc5cff84fafd94bfe3fa4adf", "title": "Comparative Evaluation of Binary Features", "text": "Performance evaluation of salient features has a long-standing tradition in computer vision. In this paper, we fill the gap of evaluation for the recent wave of binary feature descriptors, which aim to provide robustness while achieving high computational efficiency. We use established metrics to embed our assessment into the body of existing evaluations, allowing us to provide a novel taxonomy unifying both traditional and novel binary features. Moreover, we analyze the performance of different detector and descriptor pairings, which are often used in practice but have been infrequently analyzed. Additionally, we complement existing datasets with novel data testing for illumination change, pure camera rotation, pure scale change, and the variety present in photo-collections. Our performance analysis clearly demonstrates the power of the new class of features. To benefit the community, we also provide a website for the automatic testing of new description methods using our provided metrics and datasets (www.cs.unc.edu/feature-evaluation)."} {"_id": "f8c79719e877a9dec27f0fbfb0c3e5e4fd730304", "title": "Appraisal of homogeneous techniques in Distributed Data Mining: classifier approach", "text": "In recent years, Distributed Data Mining (DDM) has evolved considerably, aiming to minimize computation cost and memory overhead when processing huge, geographically distributed data. There are two approaches to DDM -- the homogeneous and the heterogeneous classifier approach. This paper presents an implementation of four homogeneous classifier techniques for DDM on different homogeneous Electronic Health Records (EHR) datasets, namely diabetes, hepatitis, and hypothyroid, and further analyzes the results based on metric evaluation."} {"_id": "26a1ef8133da61e162c2d8142d2691c2d89584f7", "title": "Effects of loneliness and differential usage of Facebook on college adjustment of first-year students", "text": "The popularity of social network sites (SNSs) among college students has stimulated scholarship examining the relationship between SNS use and college adjustment. The present research furthers our understanding of SNS use by studying the relationship between loneliness, varied dimensions of Facebook use, and college adjustment among first-year students. We looked at three facets of college adjustment: social adjustment, academic motivation, and perceived academic performance. Compulsive use of Facebook had a stronger association with academic motivation than habitual use of Facebook, but neither was directly correlated with academic performance. Too much time spent on Facebook was weakly but directly associated with poorer perceived academic performance. Loneliness was a stronger indicator of college adjustment than any dimension of Facebook usage.
"} {"_id": "080aebd2cc1019f17e78496354c37195560b0697", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems", "text": "MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines."} {"_id": "12569f509a4a8f34717c38e32701fdab9def0e06", "title": "Status of Serverless Computing and Function-as-a-Service(FaaS) in Industry and Research", "text": "This whitepaper summarizes issues raised during the First International Workshop on Serverless Computing (WoSC) 2017 held June 5th 2017 and especially in the panel and associated discussion that concluded the workshop. We also include comments from the keynote and submitted papers. A glossary at the end (section 8) defines many technical terms used in this report."} {"_id": "24c7cee069066f528d738267207692d640b7bd8f", "title": "Building a Chatbot with Serverless Computing", "text": "Chatbots are emerging as the newest platform used by millions of consumers worldwide due in part to the commoditization of natural language services, which provide developers with many building blocks to create chatbots inexpensively. However, it is still difficult to build and deploy chatbots. Developers need to handle the coordination of the cognitive services to build the chatbot interface, integrate the chatbot with external services, and worry about extensibility, scalability, and maintenance. In this work, we present the architecture and prototype of a chatbot using a serverless platform, where developers compose stateless functions together to perform useful actions. We describe our serverless architecture based on function sequences, and how we used these functions to coordinate the cognitive microservices in the Watson Developer Cloud to allow the chatbot to interact with external services. The serverless model improves the extensibility of our chatbot, which currently supports 6 abilities: location-based weather reports, jokes, date, reminders, and a simple music tutor."} {"_id": "2b1759673f0f8da5e9477b127bab976f8d4d50fe", "title": "Serverless Computing: Design, Implementation, and Performance", "text": "We present the design of a novel performance-oriented serverless computing platform implemented in .NET, deployed in Microsoft Azure, and utilizing Windows containers as function execution environments. Implementation challenges such as function scaling and container discovery, lifecycle, and reuse are discussed in detail. We propose metrics to evaluate the execution performance of serverless platforms and conduct tests on our prototype as well as AWS Lambda, Azure Functions, Google Cloud Functions, and IBM's deployment of Apache OpenWhisk.
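The serverless records above compose stateless functions into applications. As a concrete example, an Apache OpenWhisk Python action is simply a main function that receives a parameter dict and returns a JSON-serializable dict; the intent handling below is a hypothetical stand-in for the cognitive services the chatbot work coordinates.

# Minimal OpenWhisk-style Python action. The platform invokes main() with a
# dict of parameters and expects a dict back. Actions like this can be chained
# into sequences (e.g., parse intent -> fetch data -> format reply).
def main(params):
    intent = params.get("intent", "unknown")
    if intent == "weather":
        city = params.get("city", "somewhere")
        # A real action would call an external weather API here (omitted).
        return {"reply": "Imagine a sunny forecast for " + city + "."}
    return {"reply": "Sorry, this sketch only handles the weather intent."}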
Our measurements show the prototype achieving greater throughput than other platforms at most concurrency levels, and we examine the scaling and instance expiration trends in the implementations. Additionally, we discuss the gaps and limitations in our current design, propose possible solutions, and highlight future research."} {"_id": "3f60775a4e97573ee90177e5e44983a3f230d7b7", "title": "Cloud-Native, Event-Based Programming for Mobile Applications", "text": "Creating mobile applications often requires both client and server-side code development, each requiring vastly different skills. Recently, cloud providers like Amazon and Google introduced \"server-less\" programming models that abstract away many infrastructure concerns and allow developers to focus on their application logic. In this demonstration, we introduce OpenWhisk, our system for constructing cloud native actions, within the context of the mobile application development process. We demonstrate how OpenWhisk is used in mobile application development, allows cloud API customizations for mobile, and simplifies mobile application architectures."} {"_id": "4182cecefdcf4b6e69203926e3f92e660efe8cf0", "title": "Whose Book is it Anyway ? Using Machine Learning to Identify the Author of Unknown Texts", "text": "In this paper, we present an implementation of a new method for text classification that achieves significantly faster results than most existing classifiers. We extract features that are computationally efficient and forgo more computationally expensive ones used by many other text classifiers, most notably n-grams (contiguous sequences of n words). We then analyze the feature vectors through a hybrid SVM technique that sequentially classifies them with a one-versus-all classifier and a sequence of binary classifiers. Our overall method finishes execution within seconds on large novels, with accuracy comparable to that of standard classifiers but with a significantly shorter runtime."} {"_id": "f818b08e6d7d69bc497e38a51ea387f48aca1bd3", "title": "Discussions in the comments section: Factors influencing participation and interactivity in online newspapers' reader comments", "text": "Posting comments on the news is one of the most popular forms of user participation in online newspapers, and there is great potential for public discourse that is associated with this form of user communication. However, this potential arises only when several users participate in commenting and when their communication becomes interactive. Based on an adaptation of Galtung and Ruge\u2019s theory of newsworthiness, we hypothesized that a news article\u2019s news factors affect both participation levels and interactivity in a news item\u2019s comments section. The data from an online content analysis of political news are consistent with the hypotheses. This article explores the ways in which news factors affect participation levels and interactivity, and it discusses the theoretical, normative, and practical implications of those findings."} {"_id": "75074802e33d3bbe45290acc4655cc9722b4db70", "title": "Analogical Mapping by Constraint Satisfaction", "text": "similarity. Units not yet reached asymptote: 0 Goodness of network: 0.61 Calculating the best mappings after 72 cycles. Best mapping of SMART is HUNGRY. 0.70 Best mapping of TALL is FRIENDLY. 0.71 Best mapping of TIMID is FRISKY. 0.54 Best mapping of TOM is BLACKIE. 0.70 Best mapping of STEVE is FIDO. 0.54 Best mapping of BILL is ROVER. 0.71 These are the best mappings computed by ACME for this example.
The five sentences corresponding to the five propositions in each analog (e.g., \u201cBill is smart\u201d) were listed in adjacent columns on a piece of paper. Sentences related to the same individual were listed consecutively; otherwise, the order was scrambled. Across subjects two different orders were used. The instructions simply stated, \u201cYour task is to figure out what in the left set of sentences corresponds to what in the right set of sentences.\u201d Subjects were also told that the meaning of the words was irrelevant. The three individuals and three attributes of the analog on the left were listed on the bottom of the page; for each element, subjects were to write down what they believed to be the corresponding element of the analog on the right. Three minutes were allowed for completion of the task. A group of 8 UCLA students in an undergraduate psychology class served as subjects. Five subjects produced the same set of six correspondences identified by ACME, 2 subjects produced four of the six, and 1 subject was unable to understand the task. These results indicate that finding the isomorphism for this example is within the capability of many college students. Structure and Pragmatics in Metaphor. To explore the performance of ACME in metaphorical mapping, the program was given predicate-calculus representations of the knowledge underlying a metaphor that has been analyzed in detail by Kittay (1987). The metaphor is derived from a passage in Plato\u2019s Theaetetus in which Socrates declares himself to be a \u201cmidwife of ideas,\u201d elaborating the metaphor at length. Table 19 contains predicate-calculus representations based upon Kittay\u2019s analysis of the source analog concerning the role of a midwife and of the target analog concerning the role of a philosopher-teacher. Roughly, Socrates claims that he is like a midwife in that he introduces the student to intellectual partners, just as a midwife often serves first as a matchmaker; Socrates helps the student evaluate the truth or falsity of his ideas much as a midwife helps a mother to deliver a child. This metaphor was used to provide an illustration of the manner in which structural and pragmatic constraints interact in ACME. Table 19 presents predicate-calculus representations of two versions of the metaphor: an isomorphic version based directly upon Kittay\u2019s analysis, and a nonisomorphic version created by adding irrelevant and misleading information to the representation of the \u201cSocrates\u201d target analog. The best mappings obtained for each object and predicate in the target, produced by three runs of ACME, are reported in Table 20. The asymptotic activations of the best mappings are also presented. A mapping of \u201cnone\u201d means that no mapping unit had an asymptotic activation greater than .20. The run reported in the first column used the isomorphic version without any pragmatic weights. The network settles with a correct set of mappings after 34 cycles. Thus Socrates maps to the midwife, his student to the mother, the student\u2019s intellectual partner to the father, and the idea to the child. (Note that there is a homomorphic mapping of the predicates thinks-about and tests-truth to in-labor-with.) The propositions expressing causal relations in the two analogs are not essential here; deletion of them still allows a complete mapping to be discovered.
A very different set of mappings is reported in the middle column of Table 20 for the nonisomorphic version of the \u201cSocrates\u201d analog. This version provides additional knowledge about Socrates that would be expected to produce major interference with discovery of the metaphoric relation between the two analogs. The nonisomorphic version contains the information that Socrates drinks hemlock juice, which is of course irrelevant to the metaphor. Far worse, the representation encodes the information that Socrates himself was matched to his wife by a midwife; and that Socrates\u2019 wife had a child with the help of this midwife. Clearly, this nonisomorphic extension will cause the structural and semantic constraints on mapping to support a much more superficial set of correspondences between the two situations. And indeed, in this second run ACME finds only the barest fragments of the intended metaphoric mappings when the network settles. (Table 19: Predicate-calculus representations of knowledge underlying the metaphor \u201cSocrates is a midwife of ideas\u201d, isomorphic and nonisomorphic versions.)"} {"_id": "ba17b83bc2462b1040490dd08e49aef5233ef257", "title": "Intelligent Buildings of the Future: Cyberaware, Deep Learning Powered, and Human Interacting", "text": "Intelligent buildings are quickly becoming cohesive and integral inhabitants of cyberphysical ecosystems. Modern buildings adapt to internal and external elements and thrive on ever-increasing data sources, such as ubiquitous smart devices and sensors, while mimicking various approaches previously known in software, hardware, and bioinspired systems. This article provides an overview of intelligent buildings of the future from a range of perspectives. It discusses everything from the prospects of U.S. and world energy consumption to insights into the future of intelligent buildings based on the latest technological advancements in U.S. industry and government."} {"_id": "269ef7beca7de8aa853bcb32ba99bce7c4013fe6", "title": "A Robust, Simple Genotyping-by-Sequencing (GBS) Approach for High Diversity Species", "text": "Advances in next generation technologies have driven the costs of DNA sequencing down to the point that genotyping-by-sequencing (GBS) is now feasible for high diversity, large genome species. Here, we report a procedure for constructing GBS libraries based on reducing genome complexity with restriction enzymes (REs). This approach is simple, quick, extremely specific, highly reproducible, and may reach important regions of the genome that are inaccessible to sequence capture approaches. By using methylation-sensitive REs, repetitive regions of genomes can be avoided and lower copy regions targeted with two- to threefold higher efficiency. This tremendously simplifies computationally challenging alignment problems in species with high levels of genetic diversity. The GBS procedure is demonstrated with maize (IBM) and barley (Oregon Wolfe Barley) recombinant inbred populations where roughly 200,000 and 25,000 sequence tags were mapped, respectively. An advantage in species like barley that lack a complete genome sequence is that a reference map need only be developed around the restriction sites, and this can be done in the process of sample genotyping. In such cases, the consensus of the read clusters across the sequence tagged sites becomes the reference.
Alternatively, for kinship analyses in the absence of a reference genome, the sequence tags can simply be treated as dominant markers. Future application of GBS to breeding, conservation, and global species and population surveys may allow plant breeders to conduct genomic selection on a novel germplasm or species without first having to develop any prior molecular tools, or conservation biologists to determine population structure without prior knowledge of the genome or diversity in the species."} {"_id": "9f0eb43643dd4f7252e16b1e3136e565e3ee940a", "title": "Classify or Select: Neural Architectures for Extractive Document Summarization", "text": "We present two novel and contrasting Recurrent Neural Network (RNN) based architectures for extractive summarization of documents. The Classifier based architecture sequentially accepts or rejects each sentence in the original document order for its membership in the final summary. The Selector architecture, on the other hand, is free to pick one sentence at a time in any arbitrary order to piece together the summary. Our models under both architectures jointly capture the notions of salience and redundancy of sentences. In addition, these models have the advantage of being very interpretable, since they allow visualization of their predictions broken up by abstract features such as information content, salience and redundancy. We show that our models reach or outperform state-of-the-art supervised models on two different corpora. We also recommend the conditions under which one architecture is superior to the other based on experimental evidence."} {"_id": "d3a2dca439d2df686fb09d5d05ae00a97e74a1af", "title": "Domain Generalization with Adversarial Feature Learning", "text": "In this paper, we tackle the problem of domain generalization: how to learn a generalized feature representation for an \"unseen\" target domain by taking advantage of data from multiple seen source domains. We present a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization. To be specific, we extend adversarial autoencoders by imposing the Maximum Mean Discrepancy (MMD) measure to align the distributions among different domains, and matching the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this way, the learned feature representation is supposed to be universal to the seen source domains because of the MMD regularization, and is expected to generalize well on the target domain because of the introduction of the prior distribution. We propose an algorithm to jointly train different components of our proposed framework. Extensive experiments on various vision tasks demonstrate that our proposed framework can learn better generalized features for the unseen target domain compared with state-of-the-art domain generalization methods."} {"_id": "a088bed7ac41ae77dbb23041626eb8424d96a5ba", "title": "A Pattern Learning Approach to Question Answering Within the Ephyra Framework", "text": "This paper describes the Ephyra question answering engine, a modular and extensible framework that allows multiple approaches to question answering to be integrated in one system. Our framework can be adapted to languages other than English by replacing language-specific components. It supports the two major approaches to question answering, knowledge annotation and knowledge mining. Ephyra uses the web as a data resource, but could also work with smaller corpora.
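The MMD alignment term used by the domain-generalization record above has a simple empirical estimator. The sketch below computes the biased RBF-kernel estimate of squared MMD between two feature samples (NumPy assumed; the kernel width gamma is illustrative).

import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased empirical MMD^2 between samples X and Y with an RBF kernel."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Features drawn from two domains; smaller MMD suggests better-aligned distributions
src = np.random.randn(100, 16)
tgt = np.random.randn(100, 16) + 0.5
print(mmd_rbf(src, tgt))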
In addition, we propose a novel approach to question interpretation which abstracts from the original formulation of the question. Text patterns are used to interpret a question and to extract answers from text snippets. Our system automatically learns the patterns for answer extraction, using question-answer pairs as training data. Experimental results revealed the potential of this approach."} {"_id": "60e897c5270c4970510cfdb6d49ef9513a0d24f1", "title": "Towards time-optimal race car driving using nonlinear MPC in real-time", "text": "This paper addresses the real-time control of autonomous vehicles under a minimum traveling time objective. Control inputs for the vehicle are computed from a nonlinear model predictive control (MPC) scheme. The time-optimal objective is reformulated such that it can be tackled by existing efficient algorithms for real-time nonlinear MPC that build on the generalized Gauss-Newton method. We numerically validate our approach in simulations and present a real-world hardware setup of miniature race cars that is used for an experimental comparison of different approaches."} {"_id": "45524d7a40435d989579b88b70d25e4d65ac9e3c", "title": "Focusing Attention: Towards Accurate Text Recognition in Natural Images", "text": "Scene text recognition has been a hot research topic in computer vision due to its various applications. The state of the art is the attention-based encoder-decoder framework that learns the mapping between input images and output sequences in a purely data-driven way. However, we observe that existing attention-based methods perform poorly on complicated and/or low-quality images. One major reason is that existing methods cannot obtain accurate alignments between feature areas and targets for such images. We call this phenomenon \u201cattention drift\u201d. To tackle this problem, in this paper we propose the FAN (the abbreviation of Focusing Attention Network) method that employs a focusing attention mechanism to automatically draw back the drifted attention. FAN consists of two major components: an attention network (AN) that is responsible for recognizing character targets as in the existing methods, and a focusing network (FN) that is responsible for adjusting attention by evaluating whether AN pays attention properly to the target areas in the images. Furthermore, different from the existing methods, we adopt a ResNet-based network to enrich deep representations of scene text images. Extensive experiments on various benchmarks, including the IIIT5k, SVT and ICDAR datasets, show that the FAN method substantially outperforms the existing methods."} {"_id": "327f0ec65bd0e0dabad23c42514d0e2ac8b05a97", "title": "FACTORS INFLUENCING CONSUMERS \u2019 ATTITUDE TOWARDS E-COMMERCE PURCHASES THROUGH ONLINE SHOPPING", "text": "Online shopping is the process of buying goods and services from merchants who sell on the internet. Shoppers can visit web stores from the comfort of their homes and shop as they sit in front of the computer. The main purpose of this study is to determine the factors influencing consumers\u2019 attitude towards e-commerce purchases through online shopping. The study also investigates how socio-demographics (age, income and occupation), pattern of online buying (types of goods, e-commerce experience and hours of internet use) and purchase perception (product perception, customers\u2019 service and consumers\u2019 risk) affect consumers\u2019 attitude towards online shopping.
A convenience sampling method was used in this study, and the sample comprised 100 respondents in Taman Tawas Permai, Ipoh. Data were collected via a self-administered questionnaire which contains 15 questions in Part A (respondents\u2019 background and their pattern of internet use and online buying), 34 questions in Part B (attitude towards online purchases) and 36 questions in Part C (purchase perception towards online shopping). One-way ANOVA was used to assess the differences between independent variables such as age, income, occupation and pattern of online buying (type of goods) and the dependent variable, attitude towards online shopping. The findings revealed that there is no significant difference in attitude towards online shopping among age groups (F = 1.020, p > 0.05), but there is a significant difference in attitude towards online shopping among income groups (F = 0.556, p < 0.05). The research findings also showed that there is no significant difference in attitude towards online shopping among occupation groups (F = 1.607, p > 0.05) and types of goods groups (F = 1.384, p > 0.05). Pearson\u2019s correlation was used to assess the relationship between independent variables such as e-commerce experience, hours spent on the internet, product perception, customers\u2019 service and consumers\u2019 risk and the dependent variable, attitude towards online shopping. The findings revealed that there is a significant relationship between e-commerce experience and attitude towards online shopping among the respondents (r = -0.236**, p < 0.05). However, there is no significant relationship between hours spent on the internet and attitude towards online shopping among the respondents (r = 0.106, p > 0.05). This study also indicated that there is a significant relationship between product perception and attitude towards online shopping among the respondents (r = 0.471**, p < 0.01) and there is also a significant relationship between customers\u2019 service and attitude towards online shopping among the respondents (r = 0.459**, p < 0.01). Lastly, the results showed that there is no significant relationship between consumers\u2019 risk and attitude towards online shopping among the respondents (r = 0.153, p > 0.05). Further studies should explore other factors influencing consumers\u2019 attitude towards e-commerce purchases through online shopping with a broader population and a highly representative sampling method."} {"_id": "3b3f93f16e475cf37cd2e6dba208a25575dc301d", "title": "Risk-RRT\u2217: A robot motion planning algorithm for the human robot coexisting environment", "text": "In a human-robot coexisting environment, reaching the goal efficiently and safely is essential for a mobile service robot. In this paper, a Risk-based Rapidly-exploring Random Tree for optimal motion planning (Risk-RRT\u2217) algorithm is proposed by combining the comfort and collision risk (CCR) map with the RRT\u2217 algorithm, providing a variant of the RRT\u2217 algorithm for the dynamic human-robot coexisting environment. In the experiments, the time cost of the navigation process and the length of the trajectory are utilized for the evaluation of the proposed algorithm.
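The one-way ANOVA and Pearson correlation tests reported in the online-shopping record above map directly onto scipy.stats calls. The sketch below runs both on synthetic placeholder scores; a p-value below 0.05 is read as a significant group difference or correlation.

import numpy as np
from scipy.stats import f_oneway, pearsonr

rng = np.random.default_rng(0)
# Placeholder attitude scores for three hypothetical income groups
g1 = rng.normal(3.2, 0.8, 30)
g2 = rng.normal(3.5, 0.8, 30)
g3 = rng.normal(3.8, 0.8, 30)
F, p = f_oneway(g1, g2, g3)
print(f"one-way ANOVA: F = {F:.3f}, p = {p:.3f}")

# Placeholder e-commerce experience vs. attitude scores
experience = rng.normal(2.0, 1.0, 100)
attitude = 3.0 - 0.2 * experience + rng.normal(0.0, 0.5, 100)
r, p = pearsonr(experience, attitude)
print(f"Pearson correlation: r = {r:.3f}, p = {p:.3f}")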
A comparison with the Risk-RRT algorithm is carried out, and experimental results reveal that our proposed algorithm achieves better performance than the Risk-RRT in both static and dynamic environments."} {"_id": "227ed02b3e5edf4c5b08539c779eca90683549e6", "title": "Towards a sustainable e-Participation implementation model", "text": "A great majority of existing frameworks are inadequate for addressing universal applicability in countries with particular socio-economic and technological settings. Though there is so far no \u201cone size fits all\u201d strategy in implementing eGovernment, there are some essential common elements in the transformation. Therefore, this paper attempts to develop a single sustainable model based on theory and the lessons learned from existing e-Participation initiatives of developing and developed countries, so that the benefits of ICT can be maximized and greater participation be ensured."} {"_id": "8701435bc82840cc5f040871e3690964998e6fd3", "title": "circRNA biogenesis competes with pre-mRNA splicing.", "text": "Circular RNAs (circRNAs) are widely expressed noncoding RNAs. However, their biogenesis and possible functions are poorly understood. Here, by studying circRNAs that we identified in neuronal tissues, we provide evidence that animal circRNAs are generated cotranscriptionally and that their production rate is mainly determined by intronic sequences. We demonstrate that circularization and splicing compete against each other. These mechanisms are tissue specific and conserved in animals. Interestingly, we observed that the second exon of the splicing factor muscleblind (MBL/MBNL1) is circularized in flies and humans. This circRNA (circMbl) and its flanking introns contain conserved muscleblind binding sites, which are strongly and specifically bound by MBL. Modulation of MBL levels strongly affects circMbl biosynthesis, and this effect is dependent on the MBL binding sites. Together, our data suggest that circRNAs can function in gene regulation by competing with linear splicing. Furthermore, we identified muscleblind as a factor involved in circRNA biogenesis."} {"_id": "13ee420bea383f67b029531f845528ff3ad5b3e1", "title": "Modeling astrocyte-neuron interactions in a tripartite synapse", "text": "Glial cells (microglia, oligodendrocytes, and especially astrocytes) play a critical role in the central nervous system by affecting in various ways neuronal interactions at the single-cell level as well as connectivity and communication at the network level, in both the developing and the mature brain. Numerous studies (see, e.g., [1-3]) indicate an important modulatory role of astrocytes in brain homeostasis but most specifically in neuronal metabolism, plasticity, and survival. Astrocytes are also known to play an important role in many neurological disorders and neurodegenerative diseases. It is therefore important, in the light of recent evidence, to assess how astrocytes interact with neurons, both in situ and in silico. The integration of biological knowledge into computational models is becoming increasingly important to help understand the role of astrocytes both in health and disease. We have previously addressed the effects of transmitters and amyloid-beta peptide on calcium signals in rat cortical astrocytes [4].
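For the Risk-RRT\u2217 record above, the core loop of a risk-aware rapidly-exploring random tree can be sketched compactly. This simplified version rejects extensions through high-risk cells of a CCR-style map and omits RRT\u2217's rewiring and cost optimization; the workspace bounds, step size, and thresholds are illustrative.

import math, random

def rrt_risk(start, goal, risk, step=0.5, iters=2000, risk_max=0.3):
    """Grow a tree in a 20x20 workspace, avoiding risky grid cells.
    `risk` maps integer cells (x, y) to a comfort/collision risk in [0, 1]."""
    tree = {start: None}                          # node -> parent
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 20), random.uniform(0, 20))
        near = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if risk.get((int(new[0]), int(new[1])), 0.0) > risk_max:
            continue                              # too uncomfortable or unsafe
        tree[new] = near
        if math.dist(new, goal) < step:           # goal reached: backtrack path
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

risk_map = {(10, y): 0.9 for y in range(0, 8)}    # a risky wall with a gap above
print(rrt_risk((1.0, 1.0), (19.0, 19.0), risk_map))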
In this work, we extend that work by using a modified version of the previously developed model [5] for astrocyte-neuron interactions in a tripartite synapse to explore the effects of various pre- and postsynaptic as well as extrasynaptic mechanisms on neuronal activity. We consider extending the model to include various additional mechanisms, such as the role of IP3 receptor function, recycling of neurotransmitters, K+ buffering by the Na+/K+ pump, and retrograde signaling by endocannabinoids. The improved tripartite synapse model for astrocyte-neuron interactions will provide an essential modeling tool for facilitating studies of local network dynamics in the brain. The model may also serve as an important step toward understanding mechanisms behind induction and maintenance of plastic changes in the brain."} {"_id": "be29cc9cd74fd7d260b4571a4b72518accae5127", "title": "Semisupervised Autoencoder for Sentiment Analysis", "text": "In this paper, we investigate the usage of autoencoders in modeling textual data. Traditional autoencoders suffer in at least two respects: scalability to the high dimensionality of the vocabulary, and handling of task-irrelevant words. We address this problem by introducing supervision via the loss function of autoencoders. In particular, we first train a linear classifier on the labeled data, then define a loss for the autoencoder with the weights learned from the linear classifier. To reduce the bias brought by one single classifier, we define a posterior probability distribution on the weights of the classifier, and derive the marginalized loss of the autoencoder with Laplace approximation. We show that our choice of loss function can be rationalized from the perspective of Bregman Divergence, which justifies the soundness of our model. We evaluate the effectiveness of our model on six sentiment analysis datasets, and show that our model significantly outperforms all the competing methods with respect to classification accuracy. We also show that our model is able to take advantage of unlabeled datasets and achieve improved performance. We further show that our model successfully learns highly discriminative feature maps, which explains its superior performance."} {"_id": "8062e650cf0c469540f03fd1d08b5d9f12ebcfb2", "title": "Rethinking Centrality: The Role of Dynamical Processes in Social Network Analysis", "text": "Many popular measures used in social network analysis, including centrality, are based on the random walk. The random walk is a model of a stochastic process where a node interacts with one other node at a time. However, the random walk may not be appropriate for modeling social phenomena, including epidemics and information diffusion, in which one node may interact with many others at the same time, for example, by broadcasting the virus or information to its neighbors. To produce meaningful results, social network analysis algorithms have to take into account the nature of interactions between the nodes. In this paper we classify dynamical processes as conservative and non-conservative and relate them to well-known measures of centrality used in network analysis: PageRank and Alpha-Centrality.
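Alpha-Centrality, named in the sentence above, has the closed form x = (I - alpha * A^T)^{-1} e and is easy to compute directly. NumPy sketch below; alpha must stay below the reciprocal of A's largest eigenvalue for the underlying series to converge, and the uniform vector e is just one choice of exogenous input.

import numpy as np

def alpha_centrality(A, alpha=0.1, e=None):
    """Alpha-centrality x = (I - alpha*A^T)^{-1} e for adjacency matrix A.
    Non-conservative: a node passes influence to all its neighbors at once."""
    n = A.shape[0]
    e = np.ones(n) if e is None else e
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(alpha_centrality(A))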
We demonstrate, by ranking users in online social networks used for broadcasting information, that non-conservative Alpha-Centrality generally leads to a better agreement with an empirical ranking scheme than the conservative PageRank."} {"_id": "6a51422fd215ca9d98e6ce1c930f0046520013e2", "title": "Power Optimization of Battery Charging System Using FPGA Based Neural Network Controller", "text": "This paper presents the design of a small-scale battery charging system powered via a photovoltaic panel. This work aims to use solar energy to charge the battery while optimizing the power of the system. The implementation uses an Artificial Neural Network (ANN) on an FPGA. To develop this system, an ANN is trained and its output is used to drive the PWM stage. PWM pulse generation is done using a Papilio board, which is based on a XILINX Spartan 3E FPGA. The ANN with the PWM technique is ported to the FPGA, which is programmed in VHDL. This enables automatic control of the whole charging system without requiring an external sensory unit. Simulation results were obtained using MATLAB and XILINX tools; these results demonstrate the charging of the battery using the proposed ANN and PWM technique. Keywords: Photovoltaic battery charger, PWM, ANN, FPGA"} {"_id": "e180d2d1b58d553a948d778715ddf15246f838a9", "title": "Foot orthoses for plantar heel pain: a systematic review and meta-analysis.", "text": "OBJECTIVE\nTo investigate the effectiveness of foot orthoses for pain and function in adults with plantar heel pain.\n\n\nDESIGN\nSystematic review and meta-analysis. The primary outcome was pain or function categorised by duration of follow-up as short (0 to 6 weeks), medium (7 to 12 weeks) or longer term (13 to 52 weeks).\n\n\nDATA SOURCES\nMedline, CINAHL, SPORTDiscus, Embase and the Cochrane Library from inception to June 2017.\n\n\nELIGIBILITY CRITERIA FOR SELECTING STUDIES\nStudies must have used a randomised parallel-group design and evaluated foot orthoses for plantar heel pain. At least one outcome measure for pain or function must have been reported.\n\n\nRESULTS\nA total of 19 trials (1660 participants) were included. In the short term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. In the medium term, there was moderate-quality evidence that foot orthoses were more effective than sham foot orthoses at reducing pain (standardised mean difference -0.27 (-0.48 to -0.06)). There was no improvement in function in the medium term. In the longer term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. A comparison of customised and prefabricated foot orthoses showed no difference at any time point.\n\n\nCONCLUSION\nThere is moderate-quality evidence that foot orthoses are effective at reducing pain in the medium term; however, it is uncertain whether this is a clinically important change."} {"_id": "1e5029cf2a120c0d7453f3ecbd059f97eebbbf6f", "title": "From wireless sensor networks towards cyber physical systems", "text": "In the past two decades, a lot of research activity has been dedicated to the fields of mobile ad hoc networks (MANET) and wireless sensor networks (WSN). More recently, the cyber physical system (CPS) has emerged as a promising direction to enrich the interactions between physical and virtual worlds. In this article, we first review some research activities in WSN, including networking issues and coverage and deployment issues.
Then, we review some CPS platforms and systems that have been developed recently, including health care, navigation, rescue, intelligent transportation, social networking, and gaming applications. Through these reviews, we hope to demonstrate how CPS applications exploit the physical information collected by WSNs to bridge real and cyber spaces and identify important research challenges related to CPS designs."} {"_id": "74473e75ea050a18f1d1d73ebba240c11c21e882", "title": "The Improvisation Effect: A Case Study of User Improvisation and Its Effects on Information System Evolution", "text": "Few studies have examined interactions between IT change and organizational change during information systems evolution (ISE). We propose a dynamic model of ISE where change dynamics are captured in four dimensions: planned change, improvised change, organizational change and IT change. This inductively-generated model yields a rich account of ISE and its drivers by integrating the four change dimensions. The model shows how incremental adjustments in IT and organizational processes often grow into a profound change as users improvise. We demonstrate the value of the dynamic model by illustrating ISE processes in the context of two manufacturing organizations implementing the same system over a study period of five years. This paper makes its contribution by holistically characterizing improvisation in the context of IT and organizational change. Our ISE model moves research in organizational and IT change towards a common framing by showing how each affects the other\u2019s form, function and evolution."} {"_id": "858ddff549ae0a3094c747fb1f26aa72821374ec", "title": "Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications", "text": "Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research."} {"_id": "008b7fbda45b9cd63c89ee5ed3f8e6b2b6bf8457", "title": "Kinematics and the Implementation of an Elephant's Trunk Manipulator and Other Continuum Style Robots", "text": "Traditionally, robot manipulators have been a simple arrangement of a small number of serially connected links and actuated joints. Though these manipulators prove to be very effective for many tasks, they are not without their limitations, due mainly to their lack of maneuverability or total degrees of freedom. Continuum style (i.e., continuous \"back-bone\") robots, on the other hand, exhibit a wide range of maneuverability, and can have a large number of degrees of freedom.
The motion of continuum style robots is generated through the bending of the robot over a given section, unlike traditional robots where the motion occurs at discrete locations, i.e., joints. The motion of continuum manipulators is often compared to that of biological manipulators such as trunks and tentacles. These continuum style robots can achieve motions that would otherwise only be obtainable by a conventionally designed robot with many more degrees of freedom. In this paper we present a detailed formulation and explanation of a novel kinematic model for continuum style robots. The design, construction, and implementation of our continuum style robot called the elephant trunk manipulator is presented. Experimental results are then provided to verify the legitimacy of our model when applied to our physical manipulator. We also provide a set of obstacle avoidance experiments that help to exhibit the practical implementation of both our manipulator and our kinematic model."} {"_id": "8b1d430d8d37998c48411ce2e9dc35f3a8529fd7", "title": "Knowledge-Based Reasoning in a Unified Feature Modeling Scheme", "text": "Feature-based modeling is an accepted approach to include high-level geometric information in product models as well as to facilitate a parameterized and constraint-based product development process. Moreover, features are suitable as an intermediate layer between a product\u2019s knowledge model and its geometry model for effective and efficient information management. To achieve this, traditional feature technology must be extended to align with the approach of knowledge-based reasoning. In this paper, based on a previously proposed unified feature modeling scheme, feature definitions are extended to support knowledge-based reasoning. In addition, a communication mechanism between the knowledge-based system and the feature model is established. The methods to embed and use knowledge for information consistency control are described."} {"_id": "d7d6cbc28cb7751176ad241b9375802a7d6d62df", "title": "FOREX Rate prediction using Chaos and Quantile Regression Random Forest", "text": "This paper presents a hybrid of chaos modeling and Quantile Regression Random Forest (QRRF) for Foreign Exchange (FOREX) rate prediction. The exchange rate data of the US Dollar (USD) versus the Japanese Yen (JPY), British Pound (GBP), and Euro (EUR) are used to test the efficacy of the proposed model. Based on the experiments conducted, we conclude that the proposed model yielded accurate predictions compared to Chaos + Quantile Regression (QR), Chaos + Random Forest (RF), and the model of Pradeepkumar and Ravi [12] in terms of both Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE)."} {"_id": "5fca285e4e252b99f8bd0a8489c984280e78a976", "title": "Non-parametric Model for Background Subtraction", "text": "Background subtraction is a method typically used to segment moving regions in image sequences taken from a static camera by comparing each new frame to a model of the scene background. We present a novel non-parametric background model and a background subtraction approach. The model can handle situations where the background of the scene is cluttered and not completely static but contains small motions such as tree branches and bushes. The model estimates the probability of observing pixel intensity values based on a sample of intensity values for each pixel. The model adapts quickly to changes in the scene which enables very sensitive detection of moving targets.
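The per-pixel density estimate described in the "Non-parametric Model for Background Subtraction" record above can be sketched in a few lines of numpy: a Gaussian kernel density estimate over the last N intensity samples of each pixel, with a pixel flagged as foreground when its likelihood under that estimate is low. The bandwidth, threshold, and synthetic data below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def kde_foreground(frame, samples, sigma=10.0, threshold=1e-4):
    """Per-pixel kernel density estimate of background likelihood.

    frame   : (H, W) grayscale image
    samples : (N, H, W) recent background samples for each pixel
    Returns a boolean foreground mask (True where likelihood is low).
    """
    diff = samples - frame[None, :, :]                  # (N, H, W)
    kernel = np.exp(-0.5 * (diff / sigma) ** 2)         # Gaussian kernel
    kernel /= (np.sqrt(2 * np.pi) * sigma)
    likelihood = kernel.mean(axis=0)                    # average over samples
    return likelihood < threshold

# Usage sketch: keep a rolling buffer of N frames as the sample set.
H, W, N = 120, 160, 50
rng = np.random.default_rng(0)
buffer = rng.normal(100, 5, size=(N, H, W))             # synthetic background
frame = buffer[-1].copy()
frame[40:60, 50:80] += 60                               # synthetic moving object
mask = kde_foreground(frame, buffer)
print("foreground pixels:", int(mask.sum()))
```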
We also show how the model can use color information to suppress detection of shadows. The implementation of the model runs in real-time for both gray-level and color imagery. Evaluation shows that this approach achieves very sensitive detection with very low false alarm rates."} {"_id": "ccbd40c5d670f4532ad1a9f0003a8b8157388aa0", "title": "Image Difference Threshold Strategies and Shadow Detection", "text": "The paper considers two problems associated with the detection and classification of motion in image sequences obtained from a static camera. Motion is detected by differencing a reference and the \"current\" image frame, and therefore requires a suitable reference image and the selection of an appropriate detection threshold. Several threshold selection methods are investigated, and an algorithm based on hysteresis thresholding is shown to give acceptably good results over a number of test image sets. The second part of the paper examines the problem of detecting shadow regions within the image which are associated with the object motion. This is based on the notion of a shadow as a semi-transparent region in the image which retains a (reduced contrast) representation of the underlying surface pattern, texture or grey value. The method uses a region-growing algorithm whose growing criterion is based on a fixed attenuation of the photometric gain over the shadow region, in comparison to the reference image."} {"_id": "e182225eb0c1e90f09cc3a0f69abb7ac0e9b3dba", "title": "Pfinder: Real-Time Tracking of the Human Body", "text": "Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding. Index Terms: Blobs, blob tracking, real-time, person tracking, 3D person tracking, segmentation, gesture recognition, mixture model, MDL."} {"_id": "f680e7e609ce0729c8a594336e0cf8f447b3ef13", "title": "Adaptive Background Estimation and Foreground Detection using Kalman-Filtering", "text": "In image sequence processing, Kalman filtering is used for adaptive background estimation in order to separate the foreground from the background. The presented work is an approach which takes into account that changing illumination should be handled by the background estimation and should not be detected as foreground. The new approach assumes a stationary CCD camera with fixed focal length and considers non-rigid objects moving non-continuously, like human bodies. Furthermore, statistics-based methods are used to overcome the problems caused by shadow borders and the adaptation when the background is covered by the foreground."} {"_id": "1ad7dd7cc87774b0e865c8562553c0ac5a6bd9f8", "title": "Artificial Diversity as Maneuvers in a Control Theoretic Moving Target Defense", "text": "Moving target cyber-defense systems encompass a wide variety of techniques in multiple areas of cyber-security. The dynamic system reconfiguration aspect of moving target cyber-defense can be used as a basis for providing an adaptive attack surface.
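The hysteresis-thresholding strategy that the "Image Difference Threshold Strategies" record above reports as working well can be sketched as follows: difference against a reference frame, then keep weak-difference pixels only when they belong to a connected component containing at least one strong-difference pixel. The two thresholds below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def hysteresis_motion_mask(frame, reference, t_low=10, t_high=25):
    """Difference a frame against a reference image, then apply hysteresis:
    weak pixels survive only if connected to a strong pixel."""
    diff = np.abs(frame.astype(float) - reference.astype(float))
    strong = diff > t_high
    weak = diff > t_low
    labels, n = ndimage.label(weak)            # connected components of weak mask
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True     # components touching a strong pixel
    keep[0] = False                            # label 0 is the background
    return keep[labels]
```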
The goal of this research is to develop novel control theoretic mechanisms by which a range of cyber maneuver techniques are provided such that, when an attack is detected, the environment can select the most appropriate maneuver to ensure a sufficient shift in the attack surface to render the identified attack ineffective. Effective design of this control theoretic cyber maneuver approach requires the development of two additional theories. First, algorithms are required for the estimation of security state. This will identify when a maneuver is required. Second, a theory for the estimation of the cost of performing a maneuver is required. This is critical for selecting the most cost-effective maneuver while ensuring that the attack is rendered fully ineffective. Finally, we present our moving target control loop as well as a detailed case study examining the impact of our proposed cyber maneuver paradigm on DHCP attacks."} {"_id": "8cfac6b7417c198ea192511231343793ff2f3b63", "title": "GoSCAN: Decentralized scalable data clustering", "text": "Identifying clusters is an important aspect of analyzing large datasets. Clustering algorithms classically require access to the complete dataset. However, as huge amounts of data are increasingly originating from multiple, dispersed sources in distributed systems, alternative solutions are required. Furthermore, data and network dynamicity in a distributed setting demand adaptable clustering solutions that offer accurate clustering models at a reasonable pace. In this paper, we propose GoSCAN, a fully decentralized density-based clustering algorithm which is capable of clustering dynamic and distributed datasets without requiring central control or message flooding. We identify two major tasks: finding the core data points, and forming the actual clusters, which we execute in parallel employing gossip-based communication. This approach is very efficient, as it offers each peer enough authority to discover the clusters it is interested in. Our algorithm poses no extra burden of overlay formation in the network, while providing high levels of scalability. We also offer several optimizations to the basic clustering algorithm for improving communication overhead and processing costs. Coping with dynamic data is made possible by introducing an age factor, which gradually detects dataset changes and enables clustering updates. In our experimental evaluation, we show that GoSCAN can discover the clusters efficiently with scalable transmission cost."} {"_id": "7adf1aa47f8068158ac66ecb940252f90b8a2e6f", "title": "Analysis and comparison of MIMO radar waveforms", "text": "Choosing a proper waveform is a critical task for the implementation of multiple-input multiple-output (MIMO) radars. In addition to the general requirements for radar waveforms such as good resolution, low sidelobes, etc., MIMO radar waveforms also should possess good orthogonality. In this paper we give a brief overview of MIMO radar waveforms, which are classified into four categories: (1) time division multiple access (TDMA), (2) frequency division multiple access (FDMA), (3) Doppler division multiple access (DDMA), and (4) code division multiple access (CDMA). A special circulating MIMO waveform is also addressed. The properties as well as the application limitations of the different waveforms are analyzed and compared.
Some simulation results are also presented to illustrate the respective performance of the different waveforms."} {"_id": "7716aa20c98c233b1dc89ab6143c2ba25441ec49", "title": "A MATLAB toolbox for Granger causal connectivity analysis", "text": "Assessing directed functional connectivity from time series data is a key challenge in neuroscience. One approach to this problem leverages a combination of Granger causality analysis and network theory. This article describes a freely available MATLAB toolbox--'Granger causal connectivity analysis' (GCCA)--which provides a core set of methods for performing this analysis on a variety of neuroscience data types including neuroelectric, neuromagnetic, functional MRI, and other neural signals. The toolbox includes core functions for Granger causality analysis of multivariate steady-state and event-related data, functions to preprocess data, assess statistical significance and validate results, and to compute and display network-level indices of causal connectivity including 'causal density' and 'causal flow'. The toolbox is deliberately small, enabling its easy assimilation into the repertoire of researchers. It is however readily extensible given proficiency with the MATLAB language."} {"_id": "6afe5319630d966c1355f3812f9d4b4b4d6d9fd0", "title": "Branch Prediction Strategies and Branch Target Buffer Design", "text": ""} {"_id": "4f5051cadf136ab08953ef987c83c17019b99343", "title": "Conceptual model of enterprise resource planning and business intelligence systems usage", "text": "Businesses have invested considerable resources in the usage of enterprise resource planning (ERP) and business intelligence (BI) systems. These systems have been heavily studied in developed countries, while there are few, narrowly focused studies in developing ones. However, studies on the integration of ERP and BI (hereafter ERPBI) have not been given enough attention. There are many challenges facing ERPBI usage in terms of the steadily increasing speed with which new technologies are evolving. In addition, there are a number of factors affecting this usage. Based on the findings of the literature, a model from a critical success factors (CSFs) perspective that examines the relationship between ERPBI usage and organisational performance is proposed. The conceptual model provides a foundation for more research in the future. The expected results of the study will improve business outcomes and help design strategies based on an investigation of the relationship between ERPBI usage and organisational performance."} {"_id": "370fc745c07b32330c36b4dc84298db2058d99f5", "title": "Molding CNNs for text: non-linear, non-consecutive convolutions", "text": "The success of deep learning often derives from well-chosen operational building blocks. In this work, we revise the temporal convolution operation in CNNs to better adapt it to text processing. Instead of concatenating word representations, we appeal to tensor algebra and use low-rank n-gram tensors to directly exploit interactions between words already at the convolution stage. Moreover, we extend the n-gram convolution to non-consecutive words to recognize patterns with intervening words. Through a combination of low-rank tensors, and pattern weighting, we can efficiently evaluate the resulting convolution operation via dynamic programming. We test the resulting architecture on standard sentiment classification and news categorization tasks. Our model achieves state-of-the-art performance both in terms of accuracy and training speed.
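As a companion to the GCCA toolbox record above (the toolbox itself is MATLAB and far more complete), the core of a bivariate Granger causality test is small enough to sketch: fit a restricted autoregressive model of x on its own past and a full model that adds the past of y, then compare residual variances. The lag order and the synthetic driving example are assumptions for illustration.

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Granger causality magnitude from y to x: ln(var_restricted / var_full).
    Fits two least-squares AR(p) models for x[t]; positive values mean the
    past of y helps predict x beyond x's own past."""
    T = len(x)
    target = x[p:]
    lags_x = np.array([x[t - p:t] for t in range(p, T)])            # (T-p, p)
    lags_xy = np.array([np.r_[x[t - p:t], y[t - p:t]] for t in range(p, T)])
    X_r = np.column_stack([np.ones(T - p), lags_x])                 # restricted
    X_f = np.column_stack([np.ones(T - p), lags_xy])                # full
    res_r = target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]
    res_f = target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))

rng = np.random.default_rng(1)
y = rng.normal(size=500)
x = np.zeros(500)
for t in range(1, 500):          # x is driven by the past of y
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()
print("GC y->x:", round(granger_causality(x, y), 3))   # clearly positive
print("GC x->y:", round(granger_causality(y, x), 3))   # near zero
```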
For instance, we obtain 51.2% accuracy on the fine-grained sentiment classification task."} {"_id": "5989d226d4234d7c22d3df6ad208e69245873059", "title": "Scaling Reflection Prompts in Large Classrooms via Mobile Interfaces and Natural Language Processing", "text": "We present the iterative design, prototype, and evaluation of CourseMIRROR (Mobile In-situ Reflections and Review with Optimized Rubrics), an intelligent mobile learning system that uses natural language processing (NLP) techniques to enhance instructor-student interactions in large classrooms. CourseMIRROR enables streamlined and scaffolded reflection prompts by: 1) prompting for and collecting students' in-situ written reflections after each lecture; 2) continuously monitoring the quality of a student's reflection at composition time and generating helpful feedback to scaffold reflection writing; and 3) summarizing the reflections and presenting the most significant ones to both instructors and students. Through a combination of a 60-participant lab study and eight semester-long deployments involving 317 students, we found that the reflection and feedback cycle enabled by CourseMIRROR is beneficial to both instructors and students. Furthermore, the reflection quality feedback feature can encourage students to compose more specific and higher-quality reflections, and the algorithms in CourseMIRROR are both robust to cold start and scalable to STEM courses in diverse topics."} {"_id": "59719ac7c1617878196e36db4fbce7cb2ac16b16", "title": "Dynamic switching-based reliable flooding in low-duty-cycle wireless sensor networks", "text": "Reliable flooding in wireless sensor networks (WSNs) is desirable for a broad range of applications and network operations. However, it is a challenging problem to ensure 100% flooding coverage efficiently considering the combined effects of low-duty-cycle operation and unreliable wireless transmission. In this work, we propose a novel dynamic switching-based reliable flooding (DSRF) framework, which is designed as an enhancement layer to provide efficient and reliable flooding over a variety of existing flooding tree structures in low-duty-cycle WSNs. Through comprehensive simulations, we demonstrate that DSRF can effectively improve both flooding energy efficiency and latency."} {"_id": "16ac517ca8fccfdd010718a7f29d40ae065d9e78", "title": "Tactical driver behavior prediction and intent inference: A review", "text": "Drawing upon fundamental research in human behavior prediction, there has recently been a research focus on how to predict driver behaviors. In this paper we review the field of driver behavior and intent prediction, with a specific focus on tactical maneuvers, as opposed to operational or strategic maneuvers. The aim of a driver behavior prediction system is to forecast the trajectory of the vehicle in real time, which could allow a Driver Assistance System to compensate for dangerous or uncomfortable circumstances. This review provides insights into the scope of the problem, as well as the inputs, algorithms, performance metrics, and shortcomings of the state-of-the-art systems."} {"_id": "d10dbb161f4f975e430579a7e71802c824be4087", "title": "Validity and reliability of the Cohen 10-item Perceived Stress Scale in patients with chronic headache: Persian version.", "text": "BACKGROUND\nThe Cohen Perceived Stress Scale is widely used in various countries.
The present study evaluated the validity and reliability of the Cohen 10-item Perceived Stress Scale (PSS-10) in assessing tension headache, migraine, and stress-related diseases in Iran.\n\n\nMETHODS\nThis study is a methodological and cross-sectional descriptive investigation of 100 patients with chronic headache admitted to the pain clinic of Baqiyatallah Educational and Therapeutic Center. Convenience sampling was used for subject selection. PSS psychometric properties were evaluated in two stages. First, the standard scale was translated. Then, the face validity, content, and construct of the translated version were determined.\n\n\nRESULTS\nThe average age of participants was 38 years with a standard deviation (SD) of 13.2. As for stress levels, 12% were within the normal range, 36% had an intermediate level, and 52% had a high level of stress. The face validity and scale content were acceptable, and the KMO coefficient was 0.82. Bartlett's test yielded 0.327, which was statistically significant (p<0.0001), supporting the adequacy of the sample. In the factor analysis of the scale, the two factors of \"coping\" and \"distress\" were identified. A Cronbach's Alpha coefficient of 0.72 was obtained, confirming the good internal consistency of the scale; its stability in repeated-measure tests was 0.93.\n\n\nCONCLUSION\nThe Persian PSS-10 has good internal consistency and reliability. The availability of a validated Persian PSS-10 would indicate a link between stress and chronic headache."} {"_id": "c89f47c93c62c107e6bd75acde89ee7417ebf244", "title": "Comparison of Architectural Design Decisions for Resource-Constrained Self-Driving Cars - A Multiple Case-Study", "text": "Context: Self-driving cars are getting more and more attention with public demonstrations from all major automotive OEMs, but also from companies that do not have a long history in the automotive industry. Fostered by large international competitions in the last decade, several automotive OEMs have already announced plans to bring this technology to market around 2020. Objective: International competitions like the 2007 DARPA Urban Challenge did not focus on efficient usage of resources to realize the self-driving vehicular functionality. Since the automotive industry is very cost-sensitive, realizing reliable and robust self-driving functionality is challenging when expensive and sophisticated sensors, mounted very visibly on the vehicle\u2019s roof for example, cannot be used. Therefore, the goal of this study is to investigate how architectural design decisions of recent self-driving vehicular technology consider resource-efficiency. Method: In a multiple case study, the architectural design decisions derived for resource-constrained self-driving miniature cars for the international competition CaroloCup are compared with architectural designs from recent real-scale self-driving cars. Results: Scaling down available resources for realizing self-driving vehicular technology puts additional constraints on the architectural design; especially the reusability of software components in platform-independent algorithmic concepts is prevailing. Conclusion: Software frameworks like the robotic operating system (ROS) enable fast prototypical solutions; however, architectural support for resource-constrained devices is limited.
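The PSS-10 record above reports a Cronbach's alpha of 0.72; for readers unfamiliar with the coefficient, the computation is simple enough to show on made-up item scores (the data below are simulated, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) score matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated 0-4 Likert responses: 100 respondents, 10 items driven by one trait.
rng = np.random.default_rng(7)
trait = rng.normal(2, 1, size=(100, 1))
scores = np.clip(np.round(trait + rng.normal(0, 0.8, size=(100, 10))), 0, 4)
print("Cronbach's alpha:", round(float(cronbach_alpha(scores)), 2))
```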
Here, architectural design drivers as realized in AUTOSAR are more suitable."} {"_id": "3208ed3d4ff2de382ad6a16431cfe7118c000725", "title": "Constructive Induction of Cartesian Product Attributes", "text": "Constructive induction is the process of changing the representation of examples by creating new attributes from existing attributes. In classification, the goal of constructive induction is to find a representation that facilitates learning a concept description by a particular learning system. Typically, the new attributes are Boolean or arithmetic combinations of existing attributes and the learning algorithms used are decision trees or rule learners. We describe the construction of new attributes that are the Cartesian product of existing attributes. We consider the effects of this operator on a Bayesian classifier and a nearest neighbor algorithm."} {"_id": "6b02a00274af1f5a432223adf61205e11ccf7249", "title": "Analyzing User Preference for Social Image Recommendation", "text": "With the incredibly growing amount of multimedia data shared on social media platforms, recommender systems have become an important necessity to ease users\u2019 burden of information overload. In such a scenario, an extensive amount of heterogeneous information such as tags and image content, in addition to the user-to-item preferences, is extremely valuable for making effective recommendations. In this paper, we explore a novel hybrid algorithm termed STM for image recommendation. STM jointly considers the problem of image content analysis with the users\u2019 preferences on the basis of sparse representation. STM is able to tackle the challenges of highly sparse user feedback and cold-start problems in the social network scenario. In addition, our model is based on the classical probabilistic matrix factorization and can be easily extended to incorporate other useful information such as social relationships. We evaluate our approach with a newly collected 0.3 million social image dataset from Flickr. The experimental results demonstrate that sparse topic modeling of the image content leads to more effective recommendations, with a significant performance gain over the state-of-the-art alternatives."} {"_id": "c603e1febf588ec1eccb99d3af89448400f21388", "title": "Novel microstrip patch antenna design employing flexible PVC substrate suitable for defence and radio-determination applications", "text": "This paper proposes a new flexible antenna designed by employing flexible Poly Vinyl Chloride (PVC) with a dielectric constant of \u03b5r = 2.7 as the substrate. In the proposed design, a circular copper patch with a flag-shaped slot is deployed over a hexagonal PVC substrate of 1 mm thickness. The ground plane is then partially reduced and slotted to improve the antenna performance. The proposed antenna operates within the frequency range of 7.2032 GHz to 7.9035 GHz while resonating at 7.55 GHz. The proposed design has a minimum return loss of \u221255.226 dB with an impedance bandwidth of 700 MHz, a gain of 4.379 dB, and a directivity of 4.111 dBi. The antenna has a VSWR below the maximum acceptable value of 2. The design can be suitably employed for UWB and radio-determination applications (7.235 GHz to 7.25 GHz), naval and defence systems (7.25 GHz \u2013 7.3 GHz), and weather satellite applications (7.75 GHz to 7.90 GHz). The antenna has been designed in CST Microwave Studio 2014.
The proposed antenna has been fabricated and tested using an E5071C Network Analyser and an anechoic chamber. It has been observed that the simulated results closely match the experimental results."} {"_id": "bb0c3045d839f5f74b9252581cf45faa8b3b5f7e", "title": "Multi-Type Itemset Embedding for Learning Behavior Success", "text": "Contextual behavior modeling uses data from multiple contexts to discover patterns for predictive analysis. However, existing behavior prediction models often face difficulties when scaling for massive datasets. In this work, we formulate a behavior as a set of context items of different types (such as decision makers, operators, goals and resources), consider an observable itemset as a behavior success, and propose a novel scalable method, \"multi-type itemset embedding\", to learn the context items' representations preserving the success structures. Unlike most existing embedding methods that learn pair-wise proximity from the connection between a behavior and one of its items, our method learns item embeddings collectively from the interaction among all the multi-type items of a behavior, based on which we develop a novel framework, LearnSuc, for (1) predicting the success rate of any set of items and (2) finding complementary items which maximize the probability of success when incorporated into an itemset. Extensive experiments demonstrate both the effectiveness and efficiency of the proposed framework."} {"_id": "3dad5ad11fe72bf708193db543d4bb1d1a6c8262", "title": "Context augmented Dynamic Bayesian Networks for event recognition", "text": "This paper proposes a new Probabilistic Graphical Model (PGM) to incorporate the scene, event-object interaction, and event temporal contexts into Dynamic Bayesian Networks (DBNs) for event recognition in surveillance videos. We first construct the baseline event DBNs for modeling the events from their own appearance and kinematic observations, and then augment the DBN with contexts to improve its event recognition performance. Unlike existing context methods, our model incorporates various contexts simultaneously into one unified model. Experiments on real-scene surveillance datasets with complex backgrounds show that the contexts can effectively improve the event recognition performance even under great challenges like large intra-class variations and low image resolution. The topic of modeling and recognizing events in video surveillance systems has attracted growing interest from both academia and industry (Oh et al., 2011). Various graphical, syntactic, and description-based approaches (Turaga et al., 2008) have been introduced for modeling and understanding events. Among those approaches, the time-sliced graphical models, i.e. Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs), have become popular tools. However, surveillance video event recognition still faces difficulties even with well-built models for describing the events. The first difficulty arises from the tremendous intra-class variations in events. The same category of events can have huge variations in their observations due to visual appearance differences, target motion variations, viewpoint changes and temporal variability. The low resolution of event targets also affects event recognition.
To address such challenges, we propose to capture various contextual knowledge and systematically integrate it with image data using a Probabilistic Graphical Model (PGM) (Koller and Friedman, 2009) to improve the performance of event recognition on challenging surveillance videos. Contextual knowledge can be regarded as one type of extra information that does not directly describe the recognition task, but can support it. As additional information that can capture certain temporal, spatial or logical relationships with the recognition target, context plays an important role in various visual recognition tasks. Various contexts that are available/retrievable both during training and testing are widely used in many approaches. For example, Yao and Fei-Fei (2010) and Yao and Fei-Fei (2012) propose a context model in which the human pose estimation task and the object detection task serve as mutual context to help each other. Also, Ding et al. (2012) use both the local features and the context cues in the neighborhood windows to construct a combined feature \u2026"} {"_id": "2a4ca461fa847e8433bab67e7bfe4620371c1f77", "title": "Learning from Labeled and Unlabeled Data with Label Propagation", "text": "We investigate the use of unlabeled data to help labeled data in classification. We propose a simple iterative algorithm, label propagation, to propagate labels through the dataset along high-density areas defined by unlabeled data. We analyze the algorithm, show its solution, and its connection to several other algorithms. We also show how to learn parameters by a minimum spanning tree heuristic and entropy minimization, and the algorithm\u2019s ability to perform feature selection. Experiment results are promising."} {"_id": "3ebe2bd33cdbe2a2e0cd3d01b955f8dc96c9923e", "title": "Mobile App Tagging", "text": "Mobile app tagging aims to assign a list of keywords indicating core functionalities, main contents, key features or concepts of a mobile app. Mobile app tags can be potentially useful for app ecosystem stakeholders or other parties to improve app search, browsing, categorization, and advertising, etc. However, most mainstream app markets, e.g., Google Play, Apple App Store, etc., currently do not explicitly support such tags for apps. To address this problem, we propose a novel auto mobile app tagging framework for annotating a given mobile app automatically, which is based on a search-based annotation paradigm powered by machine learning techniques. Specifically, given a novel query app without tags, our proposed framework (i) first explores online kernel learning techniques to retrieve a set of top-N similar apps that are semantically most similar to the query app from a large app repository; and (ii) then mines the text data of both the query app and the top-N similar apps to discover the most relevant tags for annotating the query app. To evaluate the efficacy of our proposed framework, we conduct an extensive set of experiments on a large real-world dataset crawled from Google Play. The encouraging results demonstrate that our technique is effective and promising."} {"_id": "1d3c0e77cdcb3d8adbd803503d847bd5f82e9503", "title": "Robust Lane Detection and Tracking for Real-Time Applications", "text": "An effective lane-detection algorithm is a fundamental component of an advanced driver assistance system, as it provides important information that supports driving safety.
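The label propagation record above describes the algorithm compactly: build an affinity graph over all points, then iteratively push labels from labeled to unlabeled points along high-density regions, clamping the known labels each round. A minimal numpy sketch follows (RBF affinity assumed, class indices assumed contiguous; all parameters are illustrative, not the paper's exact variant):

```python
import numpy as np

def label_propagation(X, y, sigma=0.5, n_iter=200):
    """y holds class indices for labeled points and -1 for unlabeled ones.
    Labels spread over a row-normalized RBF affinity graph; labeled points
    are clamped back to their known labels after every iteration."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    T = W / W.sum(axis=1, keepdims=True)        # transition matrix

    classes = np.unique(y[y >= 0])
    F = np.zeros((n, len(classes)))
    labeled = y >= 0
    F[labeled, y[labeled]] = 1.0                # one-hot for labeled points
    clamp = F[labeled].copy()
    for _ in range(n_iter):
        F = T @ F
        F[labeled] = clamp                      # clamp known labels
    return classes[F.argmax(axis=1)]

# Toy example: two blobs, one labeled point per blob, rest unlabeled.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0, 0], 0.3, (30, 2)), rng.normal([2, 2], 0.3, (30, 2))])
y = -np.ones(60, dtype=int)
y[0] = 0
y[30] = 1
pred = label_propagation(X, y)
print("cluster sizes:", np.bincount(pred))
```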
The challenges faced by lane detection and tracking algorithms include the lack of clarity of lane markings, poor visibility due to bad weather, illumination and light reflection, shadows, and dense road-based instructions. In this paper, a robust and real-time vision-based lane detection algorithm with an efficient region of interest is proposed to reduce the high noise level and the calculation time. The proposed algorithm processes a gradient cue and a color cue together, and uses line clustering with scan-line tests to verify the characteristics of the lane markings. It removes any false lane markings and tracks the real lane markings using the accumulated statistical data. The experimental results show that the proposed algorithm gives accurate results and fulfills the real-time operation requirement on embedded systems with low computing power."} {"_id": "047f8d710cf229d35d64294bc7d9853f1f4f7fee", "title": "Space-time video completion", "text": "We present a method for space-time completion of large space-time \"holes\" in video sequences of complex dynamic scenes. The missing portions are filled in by sampling spatio-temporal patches from the available parts of the video, while enforcing global spatio-temporal consistency between all patches in and around the hole. This is obtained by posing the task of video completion and synthesis as a global optimization problem with a well-defined objective function. The consistent completion of static scene parts simultaneously with dynamic behaviors leads to realistic-looking video sequences. Space-time video completion is useful for a variety of tasks, including, but not limited to: (i) sophisticated video removal (of undesired static or dynamic objects) by completing the appropriate static or dynamic background information, (ii) correction of missing/corrupted video frames in old movies, and (iii) synthesis of new video frames to add a visual story, modify it, or generate a new one. Some examples of these are shown in the paper."} {"_id": "ea2ad4358bfd06b288c7cbbec7b0465057711738", "title": "Predictors of physical restraint use in Canadian intensive care units", "text": "INTRODUCTION\nPhysical restraint (PR) use in the intensive care unit (ICU) has been associated with higher rates of self-extubation and prolonged ICU length of stay. Our objectives were to describe patterns and predictors of PR use.\n\n\nMETHODS\nWe conducted a secondary analysis of a prospective observational study of analgosedation, antipsychotic, neuromuscular blocker, and PR practices in 51 Canadian ICUs. Data were collected prospectively for all mechanically ventilated adults admitted during a two-week period. We tested for patient, treatment, and hospital characteristics that were associated with PR use and number of days of use, using logistic and Poisson regression respectively.\n\n\nRESULTS\nPR was used on 374 out of 711 (53%) patients, for a mean number of 4.1 (standard deviation (SD) 4.0) days. Treatment characteristics associated with PR were higher daily benzodiazepine dose (odds ratio (OR) 1.05, 95% confidence interval (CI) 1.00 to 1.11), higher daily opioid dose (OR 1.04, 95% CI 1.01 to 1.06), antipsychotic drugs (OR 3.09, 95% CI 1.74 to 5.48), agitation (Sedation-Agitation Scale (SAS) >4) (OR 3.73, 95% CI 1.50 to 9.29), and sedation administration method (continuous and bolus versus bolus only) (OR 3.09, 95% CI 1.74 to 5.48).
As for hospital characteristics, patients were less likely to be restrained in ICUs of university-affiliated hospitals (OR 0.32, 95% CI 0.17 to 0.61). Treatment characteristics were again the main predictors of more days of PR, including: higher daily benzodiazepine dose (incidence rate ratio (IRR) 1.07, 95% CI 1.01 to 1.13), daily sedation interruption (IRR 3.44, 95% CI 1.48 to 8.10), antipsychotic drugs (IRR 15.67, 95% CI 6.62 to 37.12), SAS <3 (IRR 2.62, 95% CI 1.08 to 6.35), and any adverse event including accidental device removal (IRR 8.27, 95% CI 2.07 to 33.08). Patient characteristics (age, gender, Acute Physiology and Chronic Health Evaluation II score, admission category, prior substance abuse, prior psychotropic medication, pre-existing psychiatric condition or dementia) were not associated with PR use or number of days used.\n\n\nCONCLUSIONS\nPR was used in half of the patients in these 51 ICUs. Treatment characteristics predominantly predicted PR use, as opposed to patient or hospital/ICU characteristics. Use of sedative, analgesic, and antipsychotic drugs, agitation, heavy sedation, and occurrence of an adverse event predicted PR use or number of days used."} {"_id": "a4eac8a7218fcafd89dc904d6e9ec14d8d4a470b", "title": "Reference scope identification for citances by classification with text similarity measures", "text": "This paper targets the first step towards generating citation summaries: identifying the reference scope (i.e., cited text spans) for citances. We present a novel classification-based method that converts the task into a binary classification distinguishing cited from non-cited pairs of citances and reference sentences. The method models pairs of citances and reference sentences as feature vectors, exploring citation-dependent and citation-independent features based on the semantic similarity between texts and the significance of texts. Such vector representations are utilized to train a binary classifier. For a citance, once the set of reference sentences classified as the cited sentences is collected, a heuristic-based filtering strategy is applied to refine the output. The method is evaluated using the CL-SciSumm 2016 datasets and found to perform well with competitive results."} {"_id": "0da75bded3ae15e255f5bd376960cfeffa173b4e", "title": "The Role of Context for Object Detection and Semantic Segmentation in the Wild", "text": "In this paper we study the role of context in existing state-of-the-art detection and segmentation approaches. Towards this goal, we label every pixel of the PASCAL VOC 2010 detection challenge images with a semantic category. We believe this data will provide plenty of challenges to the community, as it contains 520 additional classes for semantic segmentation and object detection. Our analysis shows that nearest neighbor based approaches perform poorly on semantic segmentation of contextual classes, showing the variability of PASCAL imagery. Furthermore, the improvement from existing contextual models for detection is rather modest. In order to push forward the performance in this difficult scenario, we propose a novel deformable part-based model, which exploits both local context around each candidate detection as well as global context at the level of the scene.
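The citance record above builds feature vectors from text similarity between a citing sentence and candidate reference sentences. One such citation-independent feature, TF-IDF cosine similarity, can be sketched with scikit-learn; the example sentences are invented, and the paper's full pipeline trains a classifier over several such features rather than ranking on one alone.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_reference_sentences(citance, reference_sentences, top_n=2):
    """Score each reference sentence against the citing sentence by TF-IDF
    cosine similarity, one text-similarity feature such a classifier can use."""
    vec = TfidfVectorizer(stop_words="english")
    M = vec.fit_transform([citance] + reference_sentences)
    sims = cosine_similarity(M[0], M[1:]).ravel()
    order = sims.argsort()[::-1][:top_n]
    return [(reference_sentences[i], float(sims[i])) for i in order]

citance = "Their method identifies cited text spans using similarity features."
ref = ["We identify cited text spans with a binary classifier.",
       "The corpus contains annotated document pairs.",
       "Similarity features are computed between sentence pairs."]
for sent, score in rank_reference_sentences(citance, ref):
    print(f"{score:.2f}  {sent}")
```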
We show that this contextual reasoning significantly helps in detecting objects at all scales."} {"_id": "5e0f8c355a37a5a89351c02f174e7a5ddcb98683", "title": "Microsoft COCO: Common Objects in Context", "text": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."} {"_id": "a0e03c5b647438299c79c71458e6b1776082a37b", "title": "Areas of Attention for Image Captioning", "text": "We propose \u201cAreas of Attention\u201d, a novel attention-based model for automatic image captioning. Our approach models the dependencies between image regions, caption words, and the state of an RNN language model, using three pairwise interactions. In contrast to previous attention-based approaches that associate image regions only to the RNN state, our method allows a direct association between caption words and image regions. During training these associations are inferred from image-level captions, akin to weakly-supervised object detector training. These associations help to improve captioning by localizing the corresponding regions during testing. We also propose and compare different ways of generating attention areas: CNN activation grids, object proposals, and spatial transformer nets applied in a convolutional fashion. Spatial transformers give the best results. They allow for image-specific attention areas, and can be trained jointly with the rest of the network. Our attention mechanism and spatial transformer attention areas together yield state-of-the-art results on the MSCOCO dataset."} {"_id": "a2c2999b134ba376c5ba3b610900a8d07722ccb3", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "text": ""} {"_id": "ab116cf4e1d5ed947f4d762518738305e3a0ab74", "title": "Deep visual-semantic alignments for generating image descriptions", "text": ""} {"_id": "ebdfa3dc3bff0d6721e9996af66726d71159c36b", "title": "Multi objective outbound logistics network design for a manufacturing supply chain", "text": "The outbound logistics network (OLN) in the downstream supply chain of a firm plays a dominant role in the success or failure of that firm. This paper proposes the design of a hybrid and flexible OLN in a multi-objective context. The proposed distribution network for a manufacturing supply chain consists of a set of customer zones (CZs) at known locations with known demands being served by a set of potential manufacturing plants, a set of potential central distribution centers (CDCs), and a set of potential regional distribution centers (RDCs). Three variants of a single product, classified based on the nature of demand, are supplied to the CZs through three different distribution channels.
The decision variables include the number of plants, CDCs, and RDCs, and the quantities of each variant of the product delivered to the CZs through a designated distribution channel. The goal is to design the network with multiple objectives so as to minimize the total cost, maximize the unit fill rates, and maximize the resource utilization of the facilities in the network. The problem is formulated as a mixed integer linear programming problem, and a multi-objective genetic algorithm (MOGA) called the non-dominated sorting genetic algorithm\u2014II (NSGA-II) is employed to solve the resulting NP-hard combinatorial optimization problem. Computational experiments conducted on randomly generated data sets are presented and analyzed, showing the effectiveness of the solution algorithm for the proposed network."} {"_id": "1109d70aad64298fe824d6ffc34aac7f5bd76303", "title": "Formal Models for Computer Security", "text": "Efforts to build \"secure\" computer systems have now been underway for more than a decade. Many designs have been proposed, some prototypes have been constructed, and a few systems are approaching the production stage. A small number of systems are even operating in what the Department of Defense calls the \"multilevel\" mode: some information contained in these computer systems may have a classification higher than the clearance of some of the users of those systems. This paper reviews the need for formal security models, describes the structure and operation of military security controls, considers how automation has affected security problems, surveys models that have been proposed and applied to date, and suggests possible directions for future models"} {"_id": "0dc08ceff7472e6b23a6074430f0dcbd6cad1025", "title": "The Cerebellum, Sensitive Periods, and Autism", "text": "Cerebellar research has focused principally on adult motor function. However, the cerebellum also maintains abundant connections with nonmotor brain regions throughout postnatal life. Here we review evidence that the cerebellum may guide the maturation of remote nonmotor neural circuitry and influence cognitive development, with a focus on its relationship with autism. Specific cerebellar zones influence neocortical substrates for social interaction, and we propose that sensitive-period disruption of such internal brain communication can account for autism's key features."} {"_id": "75ac5ebf54a71b4d82651287c6ac28f33c12e5e0", "title": "Configuration Tool for ARINC 653 Operating Systems", "text": "The ARINC 653 Specification defines a standardized interface for real-time operating systems and an Application Executive (APEX) for developing reliable avionics applications based on Integrated Modular Avionics (IMA). The requirements of a system platform based on the ARINC 653 Standard are defined as configuration data and are integrated into the XML configuration file(s) of the real-time operating system. Unfortunately, existing configuration tools for integrating requirements do not check XML syntax errors or verify the integrity of the input data for partitioning.
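The logistics record above relies on NSGA-II; its central building block, fast non-dominated sorting, is compact enough to sketch for a minimization problem. The toy objective matrix below (cost versus negated fill rate) is an invented illustration, not data from the paper.

```python
import numpy as np

def non_dominated_sort(F):
    """F: (n, m) objective matrix, all objectives minimized.
    Returns a list of fronts (lists of indices); front 0 is Pareto-optimal."""
    n = len(F)
    S = [[] for _ in range(n)]        # solutions dominated by i
    dominated_by = np.zeros(n, int)   # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                S[i].append(j)        # i dominates j
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Toy: cost vs. negated fill rate for six candidate network designs.
F = np.array([[10, -0.90], [12, -0.95], [10, -0.95],
              [15, -0.80], [11, -0.92], [16, -0.99]])
print(non_dominated_sort(F))
```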
This paper presents a configuration tool for ARINC 653 operating systems that consists of a Wizard module, which generates the basic configuration data for IMA based on the XML Schema of the ARINC 653 Standard; an XML and Partition Editor for the IMA partitioning system; and a Verification module, which checks the integrity of the input data and the XML syntax and visualizes the results."} {"_id": "21eda52dd31188de4b325a8b7fb75b52076c4724", "title": "Energy and Performance Characterization of Mobile Heterogeneous Computing", "text": "A modern mobile application processor is a heterogeneous multi-core SoC which integrates a CPU and application-specific accelerators such as a GPU and DSP. It provides an opportunity to accelerate other compute-intensive applications, yet mapping an algorithm to such a heterogeneous platform is not a straightforward task and involves many design decisions. In this paper, we evaluate the performance and energy benefits of utilizing the integrated GPU and DSP cores to offload or share the CPU's compute-intensive tasks. The evaluation is conducted on three representative mobile platforms, TI's OMAP3530, Qualcomm's Snapdragon S2, and Nvidia's Tegra2, using common computation tasks in mobile applications. We identify key factors that should be considered in energy-optimized mobile heterogeneous computing. Our evaluation results show that, by effectively utilizing all the computing cores concurrently, an average of 3.7X performance improvement can be achieved at the cost of 33% more power consumption, in comparison with the case of utilizing the CPU only. This represents a 2.8X energy saving."} {"_id": "64f51fe4f6b078142166395ed209d423454007fb", "title": "Scene Text Synthesis for Efficient and Effective Deep Network Training", "text": "A large number of annotated training images is critical for training accurate and robust deep network models, but the collection of a large amount of annotated training images is often time-consuming and costly. Image synthesis alleviates this constraint by generating annotated training images automatically by machines, which has attracted increasing interest in recent deep learning research. We develop an innovative image synthesis technique that composes annotated training images by realistically embedding foreground objects of interest (OOI) into background images. The proposed technique consists of two key components that in principle boost the usefulness of the synthesized images in deep network training. The first is context-aware semantic coherence, which ensures that the OOI are placed around semantically coherent regions within the background image. The second is harmonious appearance adaptation, which ensures that the embedded OOI are agreeable to the surrounding background in terms of both geometry alignment and appearance realism. The proposed technique has been evaluated over two related but very different computer vision challenges, namely, scene text detection and scene text recognition.
Experiments over a number of public datasets demonstrate the effectiveness of our proposed image synthesis technique: the use of our synthesized images in deep network training is capable of achieving similar or even better scene text detection and scene text recognition performance as compared with using real images."} {"_id": "ceb4040acf7f27b4ca55da61651a14e3a1ef26a8", "title": "Angry Crowds: Detecting Violent Events in Videos", "text": ""} {"_id": "3c64af0a3046ea3607d284e164cb162e9cd52441", "title": "A Novel CPW Fed Multiband Circular Microstrip Patch Antenna for Wireless Applications", "text": "In this paper, a novel design of a coplanar microstrip patch antenna is presented for ultra wideband (UWB) and smart grid applications. It is designed on a dielectric substrate and fed by a coplanar waveguide (CPW). The microstrip patch antenna consists of a circular patch with an elliptical-shaped eye opening in the center to provide multiband operation. The antenna has a wide bandwidth of 5 GHz between 10.8-15.8 GHz and 3 GHz between 5.8-8.8 GHz, a high return loss of -34 dB at 3.9 GHz and -29 dB at 6.8 GHz, with satisfactory radiation properties. The parameters that affect the performance of the antenna in terms of its frequency domain characteristics are investigated. The antenna design has been simulated on Ansoft's High Frequency Structure Simulator (HFSS). It is a compact design of 47 \u00d7 47 mm2 area on a Rogers RO3003 (tm) substrate with a dielectric constant of 3 and a thickness of 1.6 mm. The simulated antenna has three operating frequency bands at 2.82, 5.82-8.8, and 10.8-15.8 GHz and shows a band-notch characteristic in the UWB band to avoid the interference caused by WLAN at 5.15-5.825 GHz and Wi-MAX at 5.25-5.85 GHz. The designed antenna structure with 5.68 dBi gain is planar, simple and compact; hence, it can be easily embedded in wireless communication systems and integrated with microwave circuitry at low manufacturing cost."} {"_id": "226cfb67d2d8eba835f2ec695fe28b78b556a19f", "title": "Majority is not enough: bitcoin mining is vulnerable", "text": "The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed.\n We show that the Bitcoin mining protocol is not incentive-compatible. We present an attack with which colluding miners' revenue is larger than their fair share. The attack can have significant consequences for Bitcoin: Rational miners will prefer to join the attackers, and the colluding group will increase in size until it becomes a majority. At this point, the Bitcoin system ceases to be a decentralized currency.\n Unless certain assumptions are made, selfish mining may be feasible for any coalition size of colluding miners. We propose a practical modification to the Bitcoin protocol that protects Bitcoin in the general case. It prohibits selfish mining by a coalition that commands less than 1/4 of the resources.
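The selfish-mining attack analyzed in the record above lends itself to a small Monte-Carlo sketch. The version below makes the simplifying assumption gamma = 0 (during a tie, no honest miner builds on the attacker's block), under which the strategy becomes profitable only above roughly one third of the total hash power; the state handling loosely follows the published state machine and is illustrative, not the authors' code.

```python
import random

def selfish_mining_share(alpha, n_events=1_000_000, seed=0):
    """Monte-Carlo revenue share of a selfish pool with hash power alpha,
    simplified to gamma = 0 (honest miners never build on the pool's
    branch during a tie)."""
    rng = random.Random(seed)
    rev_s = rev_h = 0
    lead = 0                             # pool's private-chain advantage
    for _ in range(n_events):
        if rng.random() < alpha:         # pool finds a block, keeps it private
            lead += 1
        else:                            # honest network finds a block
            if lead == 0:
                rev_h += 1
            elif lead == 1:              # tie: pool publishes, race is on
                if rng.random() < alpha: # pool extends its branch first
                    rev_s += 2
                else:                    # gamma = 0: honest branch wins
                    rev_h += 2
                lead = 0
            elif lead == 2:              # pool publishes all, overrides honest
                rev_s += 2
                lead = 0
            else:                        # pool stays ahead; one block finalizes
                rev_s += 1
                lead -= 1
    return rev_s / (rev_s + rev_h)

for a in (0.20, 0.30, 0.35, 0.40):
    print(f"alpha={a:.2f}  fair share={a:.2f}  selfish share={selfish_mining_share(a):.3f}")
```

With the protocol modification the paper proposes, gamma is effectively capped at 1/2, which is where the 1/4 threshold quoted in the abstract comes from.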
This threshold is lower than the wrongly assumed 1/2 bound, but better than the current reality where a coalition of any size can compromise the system."} {"_id": "3a03957218eda9094858087538e9668ab0db503b", "title": "Containers and Virtual Machines at Scale: A Comparative Study", "text": "Virtualization is used in data center and cloud environments to decouple applications from the hardware they run on. Hardware virtualization and operating system level virtualization are two prominent technologies that enable this. Containers, which use OS virtualization, have recently surged in interest and deployment. In this paper, we study the differences between the two virtualization technologies. We compare containers and virtual machines in large data center environments along the dimensions of performance, manageability and software development.\n We evaluate the performance differences caused by the different virtualization technologies in data center environments where multiple applications are running on the same servers (multi-tenancy). Our results show that co-located applications can cause performance interference, and the degree of interference is higher in the case of containers for certain types of workloads. We also evaluate differences in the management frameworks which control deployment and orchestration of containers and VMs. We show how the different capabilities exposed by the two virtualization technologies can affect the management and development of applications. Lastly, we evaluate novel approaches which combine hardware and OS virtualization."} {"_id": "eb7754ea140e73efa14c5ce85cfa1c7a04a18a76", "title": "Path Planning and Steering Control for an Automatic Perpendicular Parking Assist System", "text": "This paper considers the perpendicular reverse parking problem of front-wheel-steering vehicles. Relationships between the widths of the parking aisle and the parking place, as well as the parameters and initial position of the vehicle, for planning a collision-free reverse perpendicular parking in one maneuver are first presented. Two types of steering controllers (bang-bang and saturated tanh-type controllers) for straight-line tracking are proposed and evaluated. It is demonstrated that the saturated controller, which is continuous, also achieves quick steering while avoiding chattering, and can be successfully used in solving parking problems. Simulation results and first experimental tests confirm the effectiveness of the proposed control scheme."} {"_id": "7bcf85fa463922bedacda5a47338acc59466b0e5", "title": "A Low-Power Wideband Transmitter Front-End Chip for 80 GHz FMCW Radar Systems With Integrated 23 GHz Downconverter VCO", "text": "A low-power FMCW 80 GHz radar transmitter front-end chip is presented, which was fabricated in a SiGe bipolar production technology (fT = 180 GHz, fmax = 250 GHz). In addition to the fundamental 80 GHz VCO, a 4:1 frequency divider (up to 100 GHz), a 23 GHz local oscillator (VCO) with a low phase noise of -112 dBc/Hz (1 MHz offset), a PLL mixer and a static frequency divider are integrated together with several output buffers. This chip was designed for low power consumption (in total < 0.5 W, i.e., 100 mA at 5 V supply voltage), which is dominated by the 80 GHz VCO due to the demands for high output power (\u2248 12 dBm) and low phase noise (minimum -97 dBc/Hz at 1 MHz offset) within the total wide tuning range from 68 GHz to 92.5 GHz (\u0394f = 24.5 GHz).
Measurements of the double-PLL system at 80 GHz showed a low phase noise of -88 dBc/Hz at 10 kHz offset frequency."} {"_id": "2a4153655ad1169d482e22c468d67f3bc2c49f12", "title": "Face Alignment Across Large Poses: A 3D Solution", "text": "Face alignment, which fits a face model to an image and extracts the semantic meanings of facial pixels, has been an important topic in the CV community. However, most algorithms are designed for faces in small to medium poses (below 45\u00b0), lacking the ability to align faces in large poses up to 90\u00b0. The challenges are three-fold: Firstly, the commonly used landmark-based face model assumes that all the landmarks are visible and is therefore not suitable for profile views. Secondly, the face appearance varies more dramatically across large poses, ranging from frontal view to profile view. Thirdly, labelling landmarks in large poses is extremely challenging since the invisible landmarks have to be guessed. In this paper, we propose a solution to the three problems in a new alignment framework, called 3D Dense Face Alignment (3DDFA), in which a dense 3D face model is fitted to the image via a convolutional neural network (CNN). We also propose a method to synthesize large-scale training samples in profile views to solve the third problem of data labelling. Experiments on the challenging AFLW database show that our approach achieves significant improvements over state-of-the-art methods."} {"_id": "a7d70bdfd81c27203eab5fc331602494c0ec64f5", "title": "Recommending Self-Regulated Learning Strategies Does Not Improve Performance in a MOOC", "text": "Many committed learners struggle to achieve their goal of completing a Massive Open Online Course (MOOC). This work investigates self-regulated learning (SRL) in MOOCs and tests if encouraging the use of SRL strategies can improve course performance. We asked a group of 17 highly successful learners about their own strategies for how to succeed in a MOOC. Their responses were coded based on an SRL framework and synthesized into seven recommendations. In a randomized experiment, we evaluated the effect of providing those recommendations to learners in the same course (N = 653). Although most learners rated the study tips as very helpful, the intervention did not improve course persistence or achievement. Results suggest that a single SRL prompt at the beginning of the course provides insufficient support. Instead, embedding technological aids that adaptively support SRL throughout the course could better support learners in MOOCs."} {"_id": "8075177606c8c8e711decb255e7e59c8d19c20f0", "title": "AN MPI-BASED PARALLEL AND DISTRIBUTED MACHINE LEARNING PLATFORM ON LARGE-SCALE HPC CLUSTERS", "text": "This paper presents the design of an MPI-based parallel and distributed machine learning platform on large-scale HPC clusters. Researchers and practitioners can easily implement a class of parallelizable machine learning algorithms on the platform, or quickly port an existing non-parallel implementation of a parallelizable algorithm to the platform with only minor modifications. Complicated functions in parallel programming such as scheduling, caching and load balancing are handled automatically by the platform. The platform performance was evaluated in a series of stress tests by using a k-means clustering task on 7,500 hours of speech data (about 2.7 billion 52-dimensional feature vectors).
Good scalability is demonstrated on an HPC cluster with thousands of CPU cores."} {"_id": "57040a4256fc0d1c848a0b6d9333a4d93b1c7881", "title": "Stereo Acoustic Echo Cancellation Employing Frequency-Domain Preprocessing and Adaptive Filter", "text": "This paper proposes a windowing frequency domain adaptive filter and an upsampling block transform preprocessing to solve the stereo acoustic echo cancellation problem. The proposed adaptive filter uses windowing functions with a smooth cutoff property to reduce the spectral leakage during filter updating, so the utilization of the independent noise introduced by preprocessing in stereo acoustic echo cancellation can be increased. The proposed preprocessing is operated in short blocks with low processing delay, and it uses frequency-domain upsampling to meet the minimal block length requirement given by the band limit of simultaneous masking. Therefore, the simultaneous masking can be well utilized to improve the audio quality. The acoustic echo cancellation simulations and the audio quality evaluation show that the proposed windowing frequency domain adaptive filter performs better than the conventional frequency domain adaptive filter in both mono and stereo cases, and the upsampling block transform preprocessing provides better audio quality and stereo acoustic echo cancellation performance than the half-wave preprocessing at the same noise level."} {"_id": "642d1ea778f078b20b944da89e74bfd1c5a63b44", "title": "Spatial but not verbal cognitive deficits at age 3 years in persistently antisocial individuals.", "text": "Previous studies have repeatedly shown verbal intelligence deficits in adolescent antisocial individuals, but it is not known whether these deficits are in place prior to kindergarten or, alternatively, whether they are acquired throughout childhood. This study assesses whether cognitive deficits occur as early as age 3 years and whether they are specific to persistently antisocial individuals. Verbal and spatial abilities were assessed at ages 3 and 11 years in 330 male and female children, while antisocial behavior was assessed at ages 8 and 17 years. Persistently antisocial individuals (N = 47) had spatial deficits in the absence of verbal deficits at age 3 years compared to comparisons (N = 133), and also spatial and verbal deficits at age 11 years. Age 3 spatial deficits were independent of social adversity, early hyperactivity, poor test motivation, poor test comprehension, and social discomfort during testing, and they were found in females as well as males. Findings suggest that early spatial deficits contribute to persistent antisocial behavior whereas verbal deficits are developmentally acquired. An early-starter model is proposed whereby early spatial impairments interfere with early bonding and attachment, reflect disrupted right hemisphere affect regulation and expression, and predispose to later persistent antisocial behavior."} {"_id": "3a8174f08abdfb86615ba3385e4a849c4b2db672", "title": "Facile one-pot solvothermal method to synthesize sheet-on-sheet reduced graphene oxide (RGO)/ZnIn2S4 nanocomposites with superior photocatalytic performance.", "text": "Highly reductive RGO (reduced graphene oxide)/ZnIn2S4 nanocomposites with a sheet-on-sheet morphology have been prepared via a facile one-pot solvothermal method in a mixture of N,N-dimethylformamide (DMF) and ethylene glycol (EG) as solvent.
A reduction of GO (graphene oxide) to RGO and the formation of ZnIn2S4 nanosheets on the highly reductive RGO have been achieved simultaneously. The effect of the solvents on the morphology of the final products has been investigated and a formation mechanism is proposed. The as-prepared RGO/ZnIn2S4 nanocomposites were characterized by powder X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), N2-adsorption BET surface area, UV-vis diffuse reflectance spectroscopy (DRS), scanning electron microscopy (SEM), transmission electron microscopy (TEM), and high-resolution transmission electron microscopy (HRTEM). The photocatalytic activity for hydrogen evolution under visible light irradiation over the as-prepared RGO/ZnIn2S4 nanocomposites has been investigated. The as-prepared RGO/ZnIn2S4 nanocomposites show enhanced photocatalytic activity for hydrogen evolution under visible light irradiation, and an optimum photocatalytic activity is observed over the 1.0 wt % RGO-incorporated ZnIn2S4 nanocomposite. The superior photocatalytic performance observed over the RGO/ZnIn2S4 nanocomposites can be ascribed to the existence of highly reductive RGO, which has strong interactions with the ZnIn2S4 nanosheets. The strong interaction between the ZnIn2S4 nanosheets and RGO in the nanocomposites facilitates electron transfer from ZnIn2S4 to RGO, with the latter serving as a good electron acceptor, mediator, and co-catalyst for hydrogen evolution. This study can provide some guidance for the development of RGO-incorporated nanocomposite photocatalysts."} {"_id": "93d61a07475c234809992265f36bffe3a749c83b", "title": "Improving optical character recognition through efficient multiple system alignment", "text": "Individual optical character recognition (OCR) engines vary in the types of errors they commit in recognizing text, particularly poor quality text. By aligning the output of multiple OCR engines and taking advantage of the differences between them, the error rate based on the aligned lattice of recognized words is significantly lower than the individual OCR word error rates. This lattice error rate constitutes a lower bound among aligned alternatives from the OCR output. Results from a collection of poor quality mid-twentieth century typewritten documents demonstrate an average reduction of 55.0% in the error rate of the lattice of alternatives and a realized word error rate (WER) reduction of 35.8% in a dictionary-based selection process. As an important precursor, an innovative admissible heuristic for the A* algorithm is developed, which results in a significant reduction in state space exploration to identify all optimal alignments of the OCR text output, a necessary step toward the construction of the word hypothesis lattice. On average 0.0079% of the state space is explored to identify all optimal alignments of the documents."} {"_id": "6ca6ac9b3fec31be02ac296e6acb653107881a95", "title": "RASR - The RWTH Aachen University Open Source Speech Recognition Toolkit", "text": "RASR is the open source version of the well-proven speech recognition toolkit developed and used at RWTH Aachen University. The current version of the package includes state-of-the-art speech recognition technology for acoustic model training and decoding.
Speaker adaptation, speaker adaptive training, unsupervised training, discriminative training, lattice processing tools, flexible signal analysis, a finite state automata library, and an efficient dynamic network decoder are notable components. Comprehensive documentation, example setups for training and recognition, and tutorials are provided to support newcomers."} {"_id": "457dbb5ec48936face9df372b34d7733957eb37d", "title": "Properties and Therapeutic Application of Bromelain: A Review", "text": "Bromelain belongs to a group of protein-digesting enzymes obtained commercially from the fruit or stem of pineapple. Fruit bromelain and stem bromelain are prepared differently and contain different enzymatic compositions. \"Bromelain\" usually refers to the \"stem bromelain.\" Bromelain is a mixture of different thiol endopeptidases and other components like phosphatase, glucosidase, peroxidase, cellulase, escharase, and several protease inhibitors. In vitro and in vivo studies demonstrate that bromelain exhibits various fibrinolytic, antiedematous, antithrombotic, and anti-inflammatory activities. Bromelain is considerably absorbable in the body without losing its proteolytic activity and without producing any major side effects. Bromelain accounts for many therapeutic benefits like the treatment of angina pectoris, bronchitis, sinusitis, surgical trauma, and thrombophlebitis, debridement of wounds, and enhanced absorption of drugs, particularly antibiotics. It also relieves osteoarthritis, diarrhea, and various cardiovascular disorders. Bromelain also possesses some anticancer activities and promotes apoptotic cell death. This paper reviews the important properties and therapeutic applications of bromelain, along with its possible mode of action."} {"_id": "36e2d0ca25cbdfb623e9eb63758705c2d766b256", "title": "Light propagation with phase discontinuities: generalized laws of reflection and refraction.", "text": "Conventional optical components rely on gradual phase shifts accumulated during light propagation to shape light beams. New degrees of freedom are attained by introducing abrupt phase changes over the scale of the wavelength. A two-dimensional array of optical resonators with spatially varying phase response and subwavelength separation can imprint such phase discontinuities on propagating light as it traverses the interface between two media. Anomalous reflection and refraction phenomena are observed in this regime in optically thin arrays of metallic antennas on silicon with a linear phase variation along the interface, which are in excellent agreement with generalized laws derived from Fermat's principle. Phase discontinuities provide great flexibility in the design of light beams, as illustrated by the generation of optical vortices through use of planar designer metallic interfaces."} {"_id": "0a4e617157fa43baeba441909de14b799a6e06db", "title": "Automated Test Data Generation Using an Iterative Relaxation Method", "text": "An important problem that arises in path-oriented testing is the generation of test data that causes a program to follow a given path. In this paper, we present a novel program-execution-based approach using an iterative relaxation method to address the above problem. In this method, test data generation is initiated with an arbitrarily chosen input from a given domain. This input is then iteratively refined to obtain an input on which all the branch predicates on the given path evaluate to the desired outcome.
In each iteration, the program statements relevant to the evaluation of each branch predicate on the path are executed, and a set of linear constraints is derived. The constraints are then solved to obtain the increments for the input. These increments are added to the current input to obtain the input for the next iteration. The relaxation technique used in deriving the constraints provides feedback on the amount by which each input variable should be adjusted for the branches on the path to evaluate to the desired outcome. When the branch conditions on a path are linear functions of input variables, our technique either finds a solution for such paths in one iteration or it guarantees that the path is infeasible. In contrast, existing execution-based approaches may require an unacceptably large number of iterations for relatively long paths because they consider only one input variable and one branch predicate at a time and use backtracking. When the branch conditions on a path are nonlinear functions of input variables, though it may take more than one iteration to derive a desired input, the set of constraints to be solved in each iteration is linear and is solved using Gaussian elimination. This makes our technique practical and suitable for automation."} {"_id": "2443463b62634a46a064ede7a0aa8002308946b2", "title": "An Examination of Multivariate Time Series Hashing with Applications to Health Care", "text": "As large-scale multivariate time series data become increasingly common in application domains, such as health care and traffic analysis, researchers are challenged to build efficient tools to analyze it and provide useful insights. Similarity search, as a basic operator for many machine learning and data mining algorithms, has been extensively studied before, leading to several efficient solutions. However, similarity search for multivariate time series data is intrinsically challenging because (1) there is no conclusive agreement on what is a good similarity metric for multivariate time series data and (2) calculating similarity scores between two time series is often computationally expensive. In this paper, we address this problem by applying a generalized hashing framework, namely kernelized locality sensitive hashing, to accelerate time series similarity search with a series of representative similarity metrics. Experiment results on three large-scale clinical data sets demonstrate the effectiveness of the proposed approach."} {"_id": "7ea0ea01f47f49028802aa1126582d2d8a80dd59", "title": "Privacy and Security in Internet of Things and Wearable Devices", "text": "Enter the nascent era of Internet of Things (IoT) and wearable devices, where small embedded devices loaded with sensors collect information from their surroundings, process it, and relay it to remote locations for further analysis. Albeit looking harmless, these nascent technologies raise security and privacy concerns. We pose the question of the possibility and effects of compromising such devices. Concentrating on the design flow of IoT and wearable devices, we discuss some common design practices and their implications on security and privacy. Two representatives from each category, the Google Nest Thermostat and the Nike+ Fuelband, are selected as examples of how current industry practices of security as an afterthought or an add-on affect the resulting device and the potential consequences to the user's security and privacy.
We then discuss design flow enhancements, through which security mechanisms can efficiently be added into a device, vastly differing from traditional practices."} {"_id": "1883204b8dbb419fcfa8adf86d61baf62283c667", "title": "Long-range communications in unlicensed bands: the rising stars in the IoT and smart city scenarios", "text": "Connectivity is probably the most basic building block of the IoT paradigm. Up to now, the two main approaches to provide data access to things have been based on either multihop mesh networks using short-range communication technologies in the unlicensed spectrum, or long-range legacy cellular technologies, mainly 2G/GSM/GPRS, operating in the corresponding licensed frequency bands. Recently, these reference models have been challenged by a new type of wireless connectivity, characterized by low-rate, long-range transmission technologies in the unlicensed sub-gigahertz frequency bands, used to realize access networks with star topology referred to as low-power WANs (LPWANs). In this article, we introduce this new approach to provide connectivity in the IoT scenario, discussing its advantages over the established paradigms in terms of efficiency, effectiveness, and architectural design, particularly for typical smart city applications."} {"_id": "2b00e526490d65f2ec00107fb7bcce0ace5960c7", "title": "The Internet of Things: A survey", "text": "This paper addresses the Internet of Things. The main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that major issues still need to be faced by the research community. The most relevant among them are addressed in detail."} {"_id": "49565dd40c89680fdf9d6958f721eabcdfb89c22", "title": "Stealthy Dopant-Level Hardware Trojans", "text": "In recent years, hardware Trojans have drawn the attention of governments and industry as well as the scientific community. One of the main concerns is that integrated circuits, e.g., for military or critical-infrastructure applications, could be maliciously manipulated during the manufacturing process, which often takes place abroad. However, since there have been no reported hardware Trojans in practice yet, little is known about what such a Trojan would look like, and how difficult it would be in practice to implement one. In this paper we propose an extremely stealthy approach for implementing hardware Trojans below the gate level, and we evaluate their impact on the security of the target device. Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors.
Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against \u201cgolden chips\u201d. We demonstrate the effectiveness of our approach by inserting Trojans into two designs \u2014 a digital post-processing unit derived from Intel\u2019s cryptographically secure RNG design used in the Ivy Bridge processors and a side-channel resistant SBox implementation \u2014 and by exploring their detectability and their effects on security."} {"_id": "8619cf157a41aae1a2080f1689fcaedb0e47eb4d", "title": "ERP in action - Challenges and benefits for management control in SME context", "text": "ERP systems have fundamentally re-shaped the way business data is collected, stored, disseminated and used throughout the world. However, the existing research in accounting has provided only relatively few empirical findings on the implications for management control when companies implement ERP systems as the technological platform. Especially scarce are the findings concerning the production phase, after implementation, when the information processes, related work practices and the new information contents can be seen as established. In this paper we explore and theorize the benefits, challenges and problems for management control when an ERP system is in use, four years after the implementation. Our findings also illustrate why and under what circumstances these challenges and benefits may exist. For a holistic view of the organization, our findings, based on a qualitative case study, are constructed from the viewpoints of people at different levels and functions of the organization. Top management expected a new strategic control system, but due to the many challenges it ended up with merely financial-accounting-based control. At the operational level, serious challenges led to inadequate usage of the ERP system. Management control produces the basic financial data and must contend with many practical problems caused by ERP implementation."} {"_id": "4f4a9ec008ee88f1eabc7d490b2475bf97913fbb", "title": "Process discovery from event data: Relating models and logs through abstractions", "text": "Event data are collected in logistics, manufacturing, finance, healthcare, customer relationship management, e-learning, e-government, and many other domains. The events found in these domains typically refer to activities executed by resources at particular times and for a particular case (i.e., process instances). Process mining techniques are able to exploit such data. In this article, we focus on process discovery. However, process mining also includes conformance checking, performance analysis, decision mining, organizational mining, predictions, recommendations, etc. These techniques help to diagnose problems and improve processes. All process mining techniques involve both event data and process models. Therefore, a typical first step is to automatically learn a control-flow model from the event data. This is very challenging, but in recent years many powerful discovery techniques have been developed. It is not easy to compare these techniques since they use different representations and make different assumptions. Users often need to resort to trying different algorithms in an ad-hoc manner.
Developers of new techniques are often trying to solve specific instances of a more general problem. Therefore, we aim to unify existing approaches by focusing on log and model abstractions. These abstractions link observed and modeled behavior: Concrete behaviors recorded in event logs are related to possible behaviors represented by process models. Hence, such behavioral abstractions provide an \u201cinterface\u201d between both. We discuss four discovery approaches involving three abstractions and different types of process models (Petri nets, block-structured models, and declarative models). The goal is to provide a comprehensive understanding of process discovery and show how to develop new techniques. Examples illustrate the different approaches and pointers to software are given. The discussion on abstractions and process representations is also used to reflect on the gap between process mining literature and commercial process mining tools. This helps users select an appropriate process discovery technique. Moreover, structuring the role of internal abstractions and representations helps to broaden the view and facilitates the creation of new discovery approaches."} {"_id": "b02771eb27df3f69d721cfd64e13e338acbc3336", "title": "End-to-End Conversion of HTML Tables for Populating a Relational Database", "text": "Automating the conversion of human-readable HTML tables into machine-readable relational tables will enable end-user query processing of the millions of data tables found on the web. Theoretically sound and experimentally successful methods for index-based segmentation, extraction of category hierarchies, and construction of a canonical table suitable for direct input to a relational database are demonstrated on 200 heterogeneous web tables. The methods are scalable: the program generates the 198 Access compatible CSV files in ~0.1s per table (two tables could not be indexed)."} {"_id": "baff7613deb1c84d2570bee2212ebb2391261727", "title": "Reinforcement Learning with Perturbed Rewards", "text": "Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings. The sources of noise differ across scenarios. For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus observed rewards may not be credible as a result. Also, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors. In this paper, we consider noisy RL problems where observed rewards by RL agents are generated with a reward confusion matrix. We call such observed rewards perturbed rewards. We develop an unbiased-reward-estimator-aided robust RL framework that enables RL agents to learn in noisy environments while observing only perturbed rewards. Our framework draws upon approaches for supervised learning with noisy data. The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies based on our estimated surrogate reward can achieve higher expected rewards, and converge faster than existing baselines.
For instance, the state-of-the-art PPO algorithm is able to obtain 67.5% and 46.7% improvements on average on five Atari games, when the error rates are 10% and 30%, respectively."} {"_id": "79e18d73de9ff62ef19277655d330e514e849462", "title": "Current Trends in Educational Technology Research", "text": "Educational technology research has moved through several stages or \u201cages\u201d, focusing at the beginning on the content to be learned, then on the format of instructional messages and, finally, on the interaction between computers and students. The present paper reviews the research in technology-based learning environments in order to give both a historical perspective on educational technology research and a view of the current state of this discipline. We conclude that: 1) trends in educational technology research were forged by the evolution of learning theories and the technological changes; 2) a clear shift from the design of instruction to the design of learning environments can be noticed; 3) there is a positive effect of educational technology on learning, but the size of the effect varies considerably; 4) learning is much more dependent on the activity of the learner than on the quantity of information and processing opportunities provided by the environment."} {"_id": "839a69a55d862563fe75528ec5d763fb01c09c61", "title": "A COMPRESSED SENSING VIEW OF UNSUPERVISED TEXT EMBEDDINGS, BAG-OF-n-GRAMS, AND LSTMS", "text": "Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the \u201cmeaning\u201d of text and a form of unsupervised learning useful for downstream tasks. However, their power is not theoretically understood. The current paper derives formal understanding by looking at the subcase of linear embedding schemes. Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams (BonG) representations of text. This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show. Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods. We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice."} {"_id": "8b0feafa8faddf7f4ca1aaf41e0cd19512385914", "title": "Employee turnover prediction and retention policies design: a case study", "text": "This paper illustrates the similarities between the problems of customer churn and employee turnover. An example employee turnover prediction model leveraging classical machine learning techniques is developed. Model outputs are then discussed to design and test employee retention policies. This type of retention discussion is, to our knowledge, innovative and constitutes the main value of this paper.
"} {"_id": "f8f91f477ffdb979f54da0f762d107bef6b9bc86", "title": "Eastern meditative techniques and hypnosis: a new synthesis.", "text": "In this article, major ancient Buddhist meditation techniques, samatha, vipassana, Zen, and ton-len, will be described in reference to contemporary clinical hypnosis. In so doing, the Eastern healing framework out of which these techniques emerged is examined in comparison with and in contrast to its Western counterpart. A growing body of empirical literature shows that meditation and hypnosis have many resemblances despite the distinct differences in underlying philosophy and technical methodologies. Although not all meditation techniques \"fit\" the Western culture, each has much to offer to clinicians who are familiar with hypnosis."} {"_id": "403c0f91fba1399e9b7a15c5fbea60ce5f28eabb", "title": "Architecture for an Intelligent Tutoring System that Considers Learning Styles", "text": "In this paper we propose the architecture of an Intelligent Tutoring System that considers the student's learning style and competency-based education. We also describe the processes that have been implemented so far. Our architecture presents innovations in the representation of the tutor module and in the knowledge module; the tutor module incorporates a selector agent, which chooses the content to show, considering the teaching strategies that support the student's learning style."} {"_id": "4526ae7cd03dfc5d624e0285e3134d38b8640cdd", "title": "Real-time tracking on adaptive critic design with uniformly ultimately bounded condition", "text": "In this paper, we propose a new nonlinear tracking controller based on heuristic dynamic programming (HDP) with a tracking filter. Specifically, we integrate a goal network into the regular HDP design and provide the critic network with a detailed internal reward signal to help the value function approximation. The architecture is explicitly explained in terms of the tracking filter, goal network, critic network and action network, respectively. We provide the stability analysis of our proposed controller with a Lyapunov approach. It is shown that the filtered tracking errors and the weight estimation errors in the neural networks are all uniformly ultimately bounded (UUB) under certain conditions. Finally, we compare our proposed approach with the regular HDP approach in a virtual reality (VR)/Simulink environment to justify the improved control performance."} {"_id": "faae3d1d38cd8c899d610bec0a18931b5da78ab3", "title": "EEG-based BCI and video games: a progress report", "text": "This paper presents a systematic review of electroencephalography (EEG)-based brain\u2013computer interfaces (BCIs) used in video games, a vibrant field of research that touches upon all relevant questions concerning the future directions of BCI. The paper examines the progress of BCI research with regard to games and shows that gaming applications offer numerous advantages by orienting BCI to the concerns and expectations of a gaming application. Different BCI paradigms are investigated, and future directions are discussed."} {"_id": "170fa6c08c7d13e0b128ce108adc8d0c8d247674", "title": "A CMOS hysteresis undervoltage lockout with current source inverter structure", "text": "This paper describes an undervoltage lockout (UVLO) circuit with a simple architecture, low power consumption, and a hysteretic threshold.
The UVLO circuit monitors the supply voltage and determines whether or not the supply voltage satisfies a predetermined condition. The undervoltage lockout circuit is designed in CSMC 0.5 um CMOS technology, using relatively little circuitry. It is realized with a current source inverter. The threshold voltage is determined by the W/L ratio of the current source inverter and the resistor in the reference generator. The hysteresis is realized by using a feedback circuit to overcome the poor disturbance and noise rejection of a single threshold. The hysteretic threshold range is 40 mV. The quiescent current is about 1 uA at a 3 V supply voltage, while the circuit consumes only 3 uW."} {"_id": "6949c82155c51eb3990e4e961449a1a631fe37d9", "title": "A High Torque Density Vernier PM Machines for Hybrid Electric Vehicle Applications", "text": "This paper proposes a novel vernier permanent magnet (VPM) machine for hybrid electric vehicle (HEV) applications. The main features of the proposed machine are concentrated armature windings with short and non-overlapping end-windings, and permanent magnets (PMs) on both sides of the rotor and stator. Unlike conventional motors, this machine operates on flux modulation principles, and the electromagnetic torque is produced by the interaction between the armature magnetomotive force (MMF) and the excitation fields produced by the stator and rotor PMs. In this paper, the electromagnetic performance is simulated by finite element analysis (FEA). The results show that the torque density is about 33% higher than that of an existing interior permanent magnet (IPM) machine."} {"_id": "50d90e75b14fc1527dcc99c06edb3577924e37b6", "title": "A High Speed Architecture for Galois/Counter Mode of Operation (GCM)", "text": "In this paper we present a fully pipelined high speed hardware architecture for the Galois/Counter Mode of Operation (GCM) by analyzing the data dependencies in the GCM algorithm at the architecture level. We show that the GCM encryption circuit and the GCM authentication circuit have similar critical path delays, resulting in an efficient pipeline structure. The proposed GCM architecture yields a throughput of 34 Gbps running at 271 MHz using a 0.18 \u03bcm CMOS standard cell library."} {"_id": "5dfb49cd8cc0568e2cc8204de7fa4aca30ff0ca7", "title": "Small-world networks and functional connectivity in Alzheimer's disease.", "text": "We investigated whether functional brain networks are abnormally organized in Alzheimer's disease (AD). To this end, graph theoretical analysis was applied to matrices of functional connectivity of beta band-filtered electroencephalography (EEG) channels in 15 Alzheimer patients and 13 control subjects. Correlations between all pairwise combinations of EEG channels were determined with the synchronization likelihood. The resulting synchronization matrices were converted to graphs by applying a threshold, and cluster coefficients and path lengths were computed as a function of threshold or as a function of degree K. For a wide range of thresholds, the characteristic path length L was significantly longer in the Alzheimer patients, whereas the cluster coefficient C showed no significant changes. This pattern was still present when L and C were computed as a function of K. A longer path length with a relatively preserved cluster coefficient suggests a loss of complexity and a less optimal organization.
The present study provides further support for the presence of "small-world" features in functional brain networks and demonstrates that AD is characterized by a loss of small-world network characteristics. Graph theoretical analysis may be a useful approach to study the complexity of patterns of interrelations between EEG channels."} {"_id": "a66fe50037d1a4c3798fb0aadb6e9b7c5c8b6319", "title": "The YouTube Lens: Crowdsourced Personality Impressions and Audiovisual Analysis of Vlogs", "text": "Despite an increasing interest in understanding human perception in social media through the automatic analysis of users' personality, existing attempts have explored user profiles and text blog data only. We approach the study of personality impressions in social media from the novel perspective of crowdsourced impressions, social attention, and audiovisual behavioral analysis on slices of conversational vlogs extracted from YouTube. Conversational vlogs are a unique case study to understand users in social media, as vloggers implicitly or explicitly share information about themselves that words, either written or spoken, cannot convey. In addition, research in vlogs may become a fertile ground for the study of video interactions, as conversational video expands to innovative applications. In this work, we first investigate the feasibility of crowdsourcing personality impressions from vlogging as a way to obtain judgements from a varied audience that consumes social media video. Then, we explore how these personality impressions mediate the online video watching experience and relate to measures of attention in YouTube. Finally, we investigate the use of automatic nonverbal cues as a suitable lens through which impressions are made, and we address the task of automatic prediction of vloggers' personality impressions using nonverbal cues and machine learning techniques. Our study, conducted on a dataset of 442 YouTube vlogs and 2210 annotations collected on Amazon's Mechanical Turk, provides new findings regarding the suitability of collecting personality impressions from crowdsourcing, the types of personality impressions that emerge through vlogging, their association with social attention, and the level of utilization of nonverbal cues in this particular setting. In addition, it constitutes a first attempt to address the task of automatic vlogger personality impression prediction using nonverbal cues, with promising results."} {"_id": "fd6918d4c9ac32028f5bfb53ba8b9f9caca71370", "title": "Packet Injection Attack and Its Defense in Software-Defined Networks", "text": "Software-defined networks (SDNs) are novel networking architectures that decouple the network control and forwarding functions from the data plane. Unlike traditional networking, the control logic of SDNs is implemented in a logically centralized controller which provides a global network view and an open programming interface to the applications. While SDNs have become a hot topic among both academia and industry in recent years, little attention has been paid to the security aspect. In this paper, we introduce a novel attack, namely, the packet injection attack, in SDNs. By maliciously injecting manipulated packets into SDNs, attackers can affect the services and networking applications in the control plane, and largely consume the resources in the data plane.
The consequences could be the disruption of applications built on top of the topology manager service and REST API, as well as a huge consumption of network resources, such as the bandwidth of the OpenFlow channel. To defend against the packet injection attack, we present PacketChecker, a lightweight extension module on SDN controllers to effectively detect and mitigate the flooding of falsified packets. We implement a prototype of PacketChecker in the Floodlight controller and conduct experiments to evaluate the efficiency of the defense mechanism. The evaluation shows that the PacketChecker module can effectively mitigate the attack with a minor overhead to the SDN controller."} {"_id": "faaa47b2545922a3615c1744e8f4f11c77f1985a", "title": "Detecting and Classifying Crimes from Arabic Twitter Posts using Text Mining Techniques", "text": "Crime analysis has become a critical area for helping law enforcement agencies to protect civilians. As a result of a rapidly increasing population, crime rates have increased dramatically, and appropriate analysis has become a time-consuming effort. Text mining is an effective tool that may help to solve this problem by classifying crimes in an effective manner. The proposed system aims to detect and classify crimes in Twitter posts that are written in the Arabic language, one of the most widespread languages today. In this paper, classification techniques are used to detect crimes and identify their nature using different classification algorithms. The experiments evaluate different algorithms, such as SVM, DT, CNB, and KNN, in terms of accuracy and speed in the crime domain. Also, different feature extraction techniques are evaluated, including root-based stemming, light stemming, and n-grams. The experiments revealed the superiority of n-grams over other techniques. Specifically, the results indicate the superiority of SVM with trigrams over other classifiers, with a 91.55% accuracy."} {"_id": "2f2dbb88c9266356dfda696ed907067c1e42902c", "title": "Tracking down software bugs using automatic anomaly detection", "text": "This paper introduces DIDUCE, a practical and effective tool that aids programmers in detecting complex program errors and identifying their root causes. By instrumenting a program and observing its behavior as it runs, DIDUCE dynamically formulates hypotheses of invariants obeyed by the program. DIDUCE hypothesizes the strictest invariants at the beginning, and gradually relaxes the hypothesis as violations are detected to allow for new behavior. The violations reported help users to catch software bugs as soon as they occur. They also give programmers new visibility into the behavior of the programs, such as identifying rare corner cases in the program logic or even locating hidden errors that corrupt the program's results. We implemented the DIDUCE system for Java programs and applied it to four programs of significant size and complexity. DIDUCE succeeded in identifying the root causes of programming errors in each of the programs quickly and automatically. In particular, DIDUCE is effective in isolating a timing-dependent bug in a released JSSE (Java Secure Socket Extension) library, which would have taken an experienced programmer days to find.
Our experience suggests that detecting and checking program invariants dynamically is a simple and effective methodology for debugging many different kinds of program errors across a wide variety of application domains."} {"_id": "06e04fd496cd805bca69eea2c1977f90afeeef83", "title": "Causal Interventions for Fairness", "text": "Most approaches in algorithmic fairness constrain machine learning methods so the resulting predictions satisfy one of several intuitive notions of fairness. While this may help private companies comply with non-discrimination laws or avoid negative publicity, we believe it is often too little, too late. By the time the training data is collected, individuals in disadvantaged groups have already suffered from discrimination and lost opportunities due to factors out of their control. In the present work we focus instead on interventions such as a new public policy, and in particular, how to maximize their positive effects while improving the fairness of the overall system. We use causal methods to model the effects of interventions, allowing for potential interference \u2013 each individual\u2019s outcome may depend on who else receives the intervention. We demonstrate this with an example of allocating a budget of teaching resources using a dataset of schools in New York City."} {"_id": "7274c36c94c71d10ba687418ff5f68aafad40402", "title": "Atanassov's Intuitionistic Fuzzy Programming Method for Heterogeneous Multiattribute Group Decision Making With Atanassov's Intuitionistic Fuzzy Truth Degrees", "text": "The aim of this paper is to develop a new Atanassov's intuitionistic fuzzy (A-IF) programming method to solve heterogeneous multiattribute group decision-making problems with A-IF truth degrees in which there are several types of attribute values such as A-IF sets (A-IFSs), trapezoidal fuzzy numbers, intervals, and real numbers. In this method, preference relations in comparisons of alternatives with hesitancy degrees are expressed by A-IFSs. Hereby, A-IF group consistency and inconsistency indices are defined on the basis of preference relations between alternatives. To estimate the fuzzy ideal solution (IS) and weights, a new A-IF programming model is constructed on the concept that the A-IF group inconsistency index should be minimized and must be not larger than the A-IF group consistency index by some fixed A-IFS. An effective method is developed to solve the new derived model. The distances of the alternatives to the fuzzy IS are calculated to determine their ranking order. Moreover, some generalizations or specializations of the derived model are discussed. Applicability of the proposed methodology is illustrated with a real supplier selection example."} {"_id": "95b6f903dc5bb3a6ede84ea2cf7cd48ab7e8e06b", "title": "Individuals\u2019 Stress Assessment Using Human-Smartphone Interaction Analysis", "text": "The increasing presence of stress in people\u2019s lives has motivated many research efforts focusing on continuous stress assessment methods for individuals, leveraging smartphones and wearable devices. These methods have several drawbacks, i.e., they use invasive external devices, thus increasing entry costs and reducing user acceptance, or they use some privacy-related information. This paper presents an approach for stress assessment that leverages data extracted from smartphone sensors, and that is not invasive concerning privacy. Two different approaches are presented.
The first, based on analysis of smartphone gestures, e.g., \u2018tap\u2019, \u2018scroll\u2019, \u2018swipe\u2019 and \u2018text writing\u2019, was evaluated in laboratory settings with 13 participants (F-measure 79-85 percent within-subject model, 70-80 percent global model); the second, based on smartphone usage analysis, was tested in the wild with 25 participants (F-measure 77-88 percent within-subject model, 63-83 percent global model). Results show how these two methods enable an accurate stress assessment without being too intrusive, thus increasing the ecological validity of the data and user acceptance."} {"_id": "efaf2ddf50575ae79c458b7c4974620d2a740ac7", "title": "Word Segmentation in Sanskrit Using Path Constrained Random Walks", "text": "In Sanskrit, the phonemes at word boundaries undergo changes to form new phonemes through a process called sandhi. A fused sentence can be segmented into multiple possible segmentations. We propose a word segmentation approach that predicts the most semantically valid segmentation for a given sentence. We treat the problem as a query expansion problem and use the path-constrained random walks framework to predict the correct segments."} {"_id": "25c96ec442fbd6c1dc0f88cfc5657723b248d525", "title": "A unifying view of the basis of social cognition", "text": "In this article we provide a unifying neural hypothesis on how individuals understand the actions and emotions of others. Our main claim is that the fundamental mechanism at the basis of the experiential understanding of others' actions is the activation of the mirror neuron system. A similar mechanism, but involving the activation of viscero-motor centers, underlies the experiential understanding of the emotions of others."} {"_id": "2787d376f9564122d7ed584d3f2b04cb4ba5376d", "title": "Understanding motor events: a neurophysiological study", "text": "Neurons of the rostral part of inferior premotor cortex of the monkey discharge during goal-directed hand movements such as grasping, holding, and tearing. We report here that many of these neurons become active also when the monkey observes specific, meaningful hand movements performed by the experimenters. The effective experimenters' movements include among others placing or retrieving a piece of food from a table, grasping food from another experimenter's hand, and manipulating objects. There is always a clear link between the effective observed movement and that executed by the monkey and, often, only movements of the experimenter identical to those controlled by a given neuron are able to activate it. These findings indicate that premotor neurons can retrieve movements not only on the basis of stimulus characteristics, as previously described, but also on the basis of the meaning of the observed actions."} {"_id": "7ae9516d4873688e16f7199cef31adbc6b181213", "title": "Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study.", "text": "Functional magnetic resonance imaging (fMRI) was used to localize brain areas that were active during the observation of actions made by another individual. Object- and non-object-related actions made with different effectors (mouth, hand and foot) were presented. Observation of both object- and non-object-related actions determined a somatotopically organized activation of premotor cortex. The somatotopic pattern was similar to that of the classical motor cortex homunculus.
During the observation of object-related actions, an activation, also somatotopically organized, was additionally found in the posterior parietal lobe. Thus, when individuals observe an action, an internal replica of that action is automatically generated in their premotor cortex. In the case of object-related actions, a further object-related analysis is performed in the parietal lobe, as if the subjects were indeed using those objects. These results bring the previous concept of an action observation/execution matching system (mirror system) into a broader perspective: this system is not restricted to the ventral premotor cortex, but involves several somatotopically organized motor circuits."} {"_id": "8a81800350e0572202607152fc861644d4e39a54", "title": "Intentional attunement: A neurophysiological perspective on social cognition and its disruption in autism", "text": "A direct form of experiential understanding of others, \"intentional attunement\", is achieved by modeling their behavior as intentional experiences on the basis of the activation of shared neural systems underpinning what the others do and feel and what we do and feel. This modeling mechanism is embodied simulation. In parallel with the detached sensory description of the observed social stimuli, internal representations of the body states associated with actions, emotions, and sensations are evoked in the observer, as if he/she were performing a similar action or experiencing a similar emotion or sensation. Mirror neuron systems are likely the neural correlate of this mechanism. By means of a shared neural state realized in two different bodies, the \"objectual other\" becomes \"another self\". A defective intentional attunement caused by a lack of embodied simulation might cause some of the social impairments of autistic individuals."} {"_id": "0f52a233d2e20e7b270a4eed9e06aff1840a46d6", "title": "The mirror-neuron system.", "text": "A category of stimuli of great importance for primates, humans in particular, is that formed by actions done by other individuals. If we want to survive, we must understand the actions of others. Furthermore, without action understanding, social organization is impossible. In the case of humans, there is another faculty that depends on the observation of others' actions: imitation learning. Unlike most species, we are able to learn by imitation, and this faculty is at the basis of human culture. In this review we present data on a neurophysiological mechanism--the mirror-neuron mechanism--that appears to play a fundamental role in both action understanding and imitation. We describe first the functional properties of mirror neurons in monkeys. We review next the characteristics of the mirror-neuron system in humans. We stress, in particular, those properties specific to the human mirror-neuron system that might explain the human capacity to learn by imitation. We conclude by discussing the relationship between the mirror-neuron system and language."} {"_id": "3181f5190b064398c2bb322b7a5100863e5b6a01", "title": "Pulsed Out of Awareness: EEG Alpha Oscillations Represent a Pulsed-Inhibition of Ongoing Cortical Processing", "text": "Alpha oscillations are ubiquitous in the brain, but their role in cortical processing remains a matter of debate. Recently, evidence has begun to accumulate in support of a role for alpha oscillations in attention selection and control.
Here we first review evidence that 8-12 Hz oscillations in the brain have a general inhibitory role in cognitive processing, with an emphasis on their role in visual processing. Then, we summarize the evidence in support of our recent proposal that alpha represents a pulsed inhibition of ongoing neural activity. The phase of the ongoing electroencephalography can influence evoked activity and subsequent processing, and we propose that alpha exerts its inhibitory role through alternating microstates of inhibition and excitation. Finally, we discuss evidence that this pulsed inhibition can be entrained to rhythmic stimuli in the environment, such that preferential processing occurs for stimuli at predictable moments. The entrainment of preferential phase may provide a mechanism for temporal attention in the brain. This pulsed inhibitory account of alpha has important implications for many common cognitive phenomena, such as the attentional blink, and seems to indicate that our visual experience may at least sometimes be coming through in waves."} {"_id": "9d8a785a4275dafe21c6cdf62d6080bc5fad18ef", "title": "Extracting Opinion Features in Sentiment Patterns", "text": "Due to the increasing number of opinions and reviews on the Internet, opinion mining has become a hot topic in data mining, in which extracting opinion features is a key step. Most existing methods utilize a rule-based mechanism or statistics to extract opinion features, but they ignore the structure characteristics of reviews. The performance has hence not been promising. A new approach called OFESP (Opinion Feature Extraction based on Sentiment Patterns) is proposed in this paper, which takes into account the structure characteristics of reviews for higher values of precision and recall. With a self-constructed database of sentiment patterns, OFESP matches each review sentence to obtain its features, and then filters redundant features regarding relevance of the domain, statistics and semantic similarity. Experimental studies on real-world data demonstrate that as compared with the traditional method based on a window mechanism, OFESP outperforms it on precision, recall and F-score. Meanwhile, in comparison to the approach based on syntactic analysis, OFESP performs better on recall and F-score."} {"_id": "611d86b4d3cebf105ee5fdc6da8fc1f725f3c8d5", "title": "Superfaces: A Super-Resolution Model for 3D Faces", "text": "Face recognition based on the analysis of 3D scans has been an active research subject over the last few years. However, the impact of the resolution of 3D scans on the recognition process has not yet been addressed explicitly, despite being of primary importance after the introduction of a new generation of low-cost 4D scanning devices. These devices are capable of combined depth/RGB acquisition over time with a low resolution compared to the 3D scanners typically used in 3D face recognition benchmarks. In this paper, we define a super-resolution model for 3D faces by which a sequence of low-resolution 3D scans can be processed to extract a higher resolution 3D face model, namely the superface model. The proposed solution relies on the Scaled ICP procedure to align the low-resolution 3D models with each other and estimate the value of the high-resolution 3D model based on the statistics of values of the low-resolution scans in corresponding points.
The approach is validated on a data set that includes, for each subject, one sequence of low-resolution 3D face scans and one ground-truth high-resolution 3D face model acquired through a high-resolution 3D scanner. In this way, results of the super-resolution process are evaluated qualitatively and quantitatively by measuring the error between the superface and the ground-truth."} {"_id": "21e96b9f949aef121f1031097442a12308821974", "title": "The maternal brain and its plasticity in humans", "text": "This article is part of a Special Issue \"Parental Care\". Early mother-infant relationships play important roles in infants' optimal development. New mothers undergo neurobiological changes that support developing mother-infant relationships regardless of great individual differences in those relationships. In this article, we review the neural plasticity in human mothers' brains based on functional magnetic resonance imaging (fMRI) studies. First, we review the neural circuits that are involved in establishing and maintaining mother-infant relationships. Second, we discuss early postpartum factors (e.g., birth and feeding methods, hormones, and parental sensitivity) that are associated with individual differences in maternal brain neuroplasticity. Third, we discuss abnormal changes in the maternal brain related to psychopathology (i.e., postpartum depression, posttraumatic stress disorder, substance abuse) and potential brain remodeling associated with interventions. Last, we highlight potentially important future research directions to better understand normative changes in the maternal brain and risks for abnormal changes that may disrupt early mother-infant relationships."} {"_id": "b94d9bfa0c33b720562dd0d8b8c4143a33e8f95b", "title": "Using randomized response techniques for privacy-preserving data mining", "text": "Privacy is an important issue in data mining and knowledge discovery. In this paper, we propose to use the randomized response techniques to conduct the data mining computation. Specifically, we present a method to build decision tree classifiers from the disguised data. We conduct experiments to compare the accuracy of our decision tree with the one built from the original undisguised data. Our results show that although the data are disguised, our method can still achieve fairly high accuracy. We also show how the parameter used in the randomized response techniques affects the accuracy of the results."} {"_id": "d5b8483294ac4c3af88782bde7cfe90ecd699e99", "title": "Sculpting: an interactive volumetric modeling technique", "text": "We present a new interactive modeling technique based on the notion of sculpting a solid material. A sculpting tool is controlled by a 3D input device and the material is represented by voxel data; the tool acts by modifying the values in the voxel array, much as a \"paint\" program's \"paintbrush\" modifies bitmap values. The voxel data is converted to a polygonal surface using a \"marching-cubes\" algorithm; since the modifications to the voxel data are local, we accelerate this computation by an incremental algorithm and accelerate the display by using a special data structure for determining which polygons must be redrawn in a particular screen region. We provide a variety of tools: one that cuts away material, one that adds material, a \"sandpaper\" tool, a \"heat gun,\" etc. The technique provides an intuitive direct interaction, as if the user were working with clay or wax. 
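The voxel-editing mechanism behind such a tool can be sketched in a few lines. Below is a minimal, hypothetical spherical brush over a dense voxel array; positive delta adds material and negative delta cuts it away, and the marching-cubes surface extraction and incremental redisplay steps are deliberately left out.

```python
import numpy as np

def apply_spherical_brush(voxels, center, radius, delta):
    """Add (delta > 0) or cut away (delta < 0) material inside a sphere.

    voxels: 3-D float array of material densities in [0, 1];
    center and radius are given in voxel units. The edit touches only
    voxels inside the sphere, so it stays local -- the property that makes
    incremental re-meshing with marching cubes cheap.
    """
    xs, ys, zs = np.indices(voxels.shape)
    mask = ((xs - center[0]) ** 2 + (ys - center[1]) ** 2
            + (zs - center[2]) ** 2) <= radius ** 2
    voxels[mask] = np.clip(voxels[mask] + delta, 0.0, 1.0)
    return voxels

vol = np.zeros((32, 32, 32))
apply_spherical_brush(vol, center=(16, 16, 16), radius=5, delta=1.0)   # add tool
apply_spherical_brush(vol, center=(16, 16, 20), radius=3, delta=-1.0)  # cut tool
```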
The models created are free-form and may have complex topology; however, they are not precise, so the technique is appropriate for modeling a boulder or a tooth but not for modeling a crankshaft."} {"_id": "46c90184ff78ccab48f2520f56531aea101537dd", "title": "HIGH DIMENSIONAL SPACES", "text": "In this paper, we analyze deep learning from a mathematical point of view and derive several novel results. The results are based on intriguing mathematical properties of high dimensional spaces. We first look at perturbation-based adversarial examples and show how they can be understood using topological arguments in high dimensions. We point out a fallacy in an argument presented in a paper published in 2015 by Goodfellow et al. (see reference [10]), and we present a more rigorous, general and correct mathematical result to explain adversarial examples in terms of the topology of image manifolds. Second, we look at optimization landscapes of deep neural networks and examine the number of saddle points relative to that of local minima. Third, we show how the multiresolution nature of images explains perturbation-based adversarial examples in the form of a stronger result. Our results state that the expectation of the L2-norm of adversarial perturbations shrinks to 0 as image resolution becomes arbitrarily large. Finally, by incorporating the parts-whole manifold learning hypothesis for natural images, we investigate the working of deep neural networks and the root causes of adversarial examples, and discuss how future improvements can be made and how adversarial examples can be eliminated."} {"_id": "f43a106b220aa20fa0862495153f0184c9f3d8f6", "title": "Certificates of virginity and reconstruction of the hymen.", "text": "Virginity in women has traditionally been associated with the integrity of the hymen, the incomplete membrane occluding the lower end of the vagina. The hymen is often ruptured during the first coitus, which may cause bleeding of varying intensity. In certain communities, the bed-sheet, now soiled with blood, is shown as proof that the first sexual intercourse has taken place and that the bride was indeed a virgin up to that point. Thus, the sullied sheet constitutes evidence that the \u2018honour\u2019 of the woman and that of her family remain intact. Literature data concerning the frequency of bleeding at defloration are amazingly scarce. Olga Loeber conducted a retrospective survey among women of diverse ethnic origin and with different cultural and religious backgrounds who underwent abortion at one centre in The Netherlands. Somewhat less than half of these women reported that they had not bled during their first coitus! The Jewish, Christian and Muslim faiths all attach considerable importance to premarital virginity, particularly that of women. Yet, the social valorization of virginity dates back to a much earlier time than the imposition of chastity upon unmarried women in Muslim communities. Neither the Koran nor the Hadiths state that virginity is a precondition to marriage. Patriarchal societies aim to control the sexuality of women in order to regulate lines of descent and transfer. Islam has merely translated a social norm into a religious one. It is worth remembering that until the 1960s, moral and behavioural norms kept even European atheists from engaging in premarital sex. 
These constraints have only been gradually lifted in the last 50 years as a result of the liberalization of sexuality and the improved social status of women."} {"_id": "0f3ee3493e1148c1ac6ac58889eec1c709af6ed4", "title": "Systematic review of bankruptcy prediction models: Towards a framework for tool selection", "text": "The bankruptcy prediction research domain continues to evolve with many new different predictive models developed using various tools. Yet many of the tools are used with the wrong data conditions or for the wrong situation. Using the Web of Science, Business Source Complete and Engineering Village databases, a systematic review of 49 journal articles published between 2010 and 2015 was carried out. This review shows how eight popular and promising tools perform based on 13 key criteria within the bankruptcy prediction models research area. These tools include two statistical tools: multiple discriminant analysis and logistic regression; and six artificial intelligence tools: artificial neural network, support vector machines, rough sets, case-based reasoning, decision tree and genetic algorithm. The 13 criteria identified include accuracy, result transparency, fully deterministic output, data size capability, data dispersion, variable selection method required, variable types applicable, and more. Overall, it was found that no single tool is predominantly better than other tools in relation to the 13 identified criteria. A tabular and a diagrammatic framework are provided as guidelines for the selection of tools that best fit different situations. It is concluded that an overall better performing model can only be found by informed integration of tools to form a hybrid model. This paper contributes towards a thorough understanding of the features of the tools used to develop bankruptcy prediction models and their related shortcomings."} {"_id": "50984f8345a3120d0e6c0a75adc2ac1a13e37961", "title": "Impaired face processing in autism: fact or artifact?", "text": "Within the last 10 years, there has been an upsurge of interest in face processing abilities in autism, which has generated a proliferation of new empirical demonstrations employing a variety of measuring techniques. Observably atypical social behaviors early in the development of children with autism have led to the contention that autism is a condition where the processing of social information, particularly faces, is impaired. While several empirical sources of evidence lend support to this hypothesis, others suggest that there are conditions under which autistic individuals do not differ from typically developing persons. The present paper reviews this bulk of empirical evidence, and concludes that the versatility and abilities of face processing in persons with autism have been underestimated."} {"_id": "6695c398a384e5fe09e51c86c0702654c74102d0", "title": "A Variable Threshold Voltage CMOS Comparator for Flash Analog to Digital Converter", "text": "This paper presents a variable threshold voltage CMOS comparator for flash analog-to-digital converters. The proposed comparator has a single-ended architecture. The comparator is designed and analyzed in the Cadence Virtuoso Analog Design Environment using UMC 180 nm technology. The proposed comparator consumes a peak power of 34.97 \u00b5W from a 1.8 V power supply. It achieves a power-delay product (PDP) of 8 fJ and a propagation delay of 230 ps. The designed comparator eliminates the requirement of a resistive ladder network for reference voltage generation. 
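As a quick consistency check on the figures quoted above, the power-delay product follows directly from the reported peak power and propagation delay:

```python
power_w = 34.97e-6   # peak power: 34.97 uW
delay_s = 230e-12    # propagation delay: 230 ps
pdp_joules = power_w * delay_s
print(f"PDP = {pdp_joules * 1e15:.2f} fJ")  # ~8.04 fJ, consistent with the quoted 8 fJ
```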
This makes it highly suitable for the design of flash analog-to-digital converters."} {"_id": "19bf971a1fb6cd388c0ba07834dc4ddcbc442318", "title": "Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments", "text": "Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken pre and post an earthquake or flood). Traditional registration methods based on local intensity probably cannot maintain steady performance, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours can hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), which was proposed by Akinlar et al. in 2011, is used to extract line segments from two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and the transformation parameters between the reference and sensed images can be estimated. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is automatic, fast (4 ms faster than the second fastest method, i.e., the rotation- and scale-invariant shape context) and can achieve a recall of 79.7%, a precision of 89.1% and a root mean square error (RMSE) of 1.0 pixels on average for remote sensing images with large background variations."} {"_id": "3e79a574d776c46bbe6d34f41b1e83b5d0f698f2", "title": "Sentence-State LSTM for Text Representation", "text": "The baseline BiLSTM model consists of two LSTM components, which process the input in the forward left-to-right and the backward right-to-left directions, respectively. In each direction, the reading of input words is modelled as a recurrent process with a single hidden state. Given an initial value, the state changes its value recurrently, each time consuming an incoming word. Take the forward LSTM component for example. Denoting the initial state as h_0, which is a model parameter, the recurrent state transition step for calculating h_1, ... 
, h_{n+1} is defined as follows (Graves and Schmidhuber, 2005):"} {"_id": "2d47c8a220ca6ce7e328298b19ba27294954f733", "title": "The causes of welfare state expansion: deindustrialization or globalization?", "text": "An influential line of argument holds that trade exposure causes economic uncertainty and spurs popular demands for compensatory and risk-sharing welfare state spending. The argument has gained renewed prominence through the recent work of Garrett (1998) and Rodrik (1997; 1998). This paper argues that the relationship between trade openness and welfare state expansion is spurious, and that the engine of welfare state expansion since the 1960s has been deindustrialization. Based on cross-sectional time-series data for 15 OECD countries, we show that there is no relationship between trade exposure and the level of labor market risks (in terms of employment and wages), whereas the uncertainty and dislocations caused by deindustrialization have spurred electoral demands for compensating welfare state policies. Yet, while differential rates of deindustrialization explain differences in the overall size of the welfare state, its particular character, in terms of the share of direct government provision and the equality of transfer payments, is shaped by government partisanship. The argument has implications for the study, and the future, of the welfare state that are very different from those suggested in the trade openness literature. Summary: Many influential contributions to the debate hold that trade liberalization results in economic uncertainty and thus leads to demands for compensating welfare state spending. The work of Garrett (1998) and Rodrik (1997; 1998) has lent this argument additional relevance. This study examines the relationship between the degree of openness of an economy and the expansion of the welfare state, whose generous development since the 1960s was made possible by increasing deindustrialization. On the basis of analyses of cross-national time series for 15 OECD countries, it is shown that no relationship exists between trade liberalization and the level of labor market risks (in terms of wages and employment). In view of the uncertainty caused by deindustrialization, however, voters demand a compensating social policy. While the extent of deindustrialization determines the size and scope of the welfare state, its specific character with regard to direct government services and compensating transfer payments is shaped by the governing parties. This line of argument has far-reaching consequences for the analysis, and the future, of the welfare state; it diverges substantially from the literature on open economies."} {"_id": "7dc8bfcc945f20ae614629a00251d8f175623462", "title": "A Tutorial on Reinforcement Learning Techniques", "text": "Reinforcement Learning (RL) is learning through direct experimentation. It does not assume the existence of a teacher that provides examples upon which learning of a task takes place. Instead, in RL experience is the only teacher. 
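The experience-driven loop this tutorial goes on to describe, observing the process state, selecting an action, receiving a reinforcement and updating cost estimates, can be made concrete with tabular Q-learning. The sketch below uses a toy chain environment; every name and parameter is hypothetical and chosen only for illustration.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a chain: move left/right, reward at the far end."""
    q = [[0.0, 0.0] for _ in range(n_states)]          # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection from current estimates
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0     # reward only at the goal
            # credit assignment: bootstrap from the best next-state estimate
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

print(q_learning_chain())  # states near the goal earn higher action values
```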
With historical roots in the study of conditioned reflexes, RL soon attracted the interest of Engineers and Computer Scientists because of its theoretical relevance and potential applications in fields as diverse as Operational Research and Robotics. Computationally, RL is intended to operate in a learning environment composed of two subjects: the learner and a dynamic process. At successive time steps, the learner makes an observation of the process state, selects an action and applies it back to the process. The goal of the learner is to find an action policy that controls the behavior of this dynamic process, guided by signals (reinforcements) that indicate how well it is performing the required task. These signals are usually associated with some dramatic condition \u2014 e.g., accomplishment of a subtask (reward) or complete failure (punishment), and the learner\u2019s goal is to optimize its behavior based on some performance measure (a function of the received reinforcements). The crucial point is that in order to do that, the learner must evaluate the conditions (associations between observed states and chosen actions) that lead to rewards or punishments. In other words, it must learn how to assign credit to past actions and states by correctly estimating costs associated with these events. Starting from basic concepts, this tutorial presents the many flavors of RL algorithms, develops the corresponding mathematical tools, assesses their practical limitations and discusses alternatives that have been proposed for applying RL to realistic tasks, such as those involving large state spaces or partial observability. It relies on examples and diagrams to illustrate the main points, and provides many references to the specialized literature and to Internet sites where relevant demos and additional information can be obtained."} {"_id": "255befa1707aa15f1fc403fb57eaf76327fc8cef", "title": "Robust Influence Maximization", "text": "In this paper, we address the important issue of uncertainty in the edge influence probability estimates for the well-studied influence maximization problem, i.e., the task of finding k seed nodes in a social network to maximize the influence spread. We propose the problem of robust influence maximization, which maximizes the worst-case ratio between the influence spread of the chosen seed set and the optimal seed set, given the uncertainty of the parameter input. We design an algorithm that solves this problem with a solution-dependent bound. We further study uniform sampling and adaptive sampling methods to effectively reduce the uncertainty on parameters and improve the robustness of the influence maximization task. Our empirical results show that parameter uncertainty may greatly affect influence maximization performance, that prior studies which learned influence probabilities could lead to poor performance in robust influence maximization due to relatively large uncertainty in parameter estimates, and that the information-cascade-based adaptive sampling method may be an effective way to improve the robustness of influence maximization."} {"_id": "62df185ddf7cd74040dd7f8dd72a9a2d85b734ed", "title": "An fMRI investigation of emotional engagement in moral judgment.", "text": "The long-standing rationalist tradition in moral psychology emphasizes the role of reason in moral judgment. A more recent trend places increased emphasis on emotion. 
Although both reason and emotion are likely to play important roles in moral judgment, relatively little is known about their neural correlates, the nature of their interaction, and the factors that modulate their respective behavioral influences in the context of moral judgment. In two functional magnetic resonance imaging (fMRI) studies using moral dilemmas as probes, we apply the methods of cognitive neuroscience to the study of moral judgment. We argue that moral dilemmas vary systematically in the extent to which they engage emotional processing and that these variations in emotional engagement influence moral judgment. These results may shed light on some puzzling patterns in moral judgment observed by contemporary philosophers."} {"_id": "a7e9ba17a4e289d2d0d6e87ccb686fa23e0cd92c", "title": "IoT based Smart Home design using power and security management", "text": "The paper presents the design and implementation of an Ethernet-based Smart Home intelligent system for monitoring electrical energy consumption, based upon real-time tracking of the devices at home using an INTEL GALILEO 2nd-generation development board, which can be used in homes and societies. The proposed system provides real-time monitoring and voice control, so that the electrical devices and switches can be remotely controlled and monitored with or without an Android-based app. It uses various sensors not only to track devices in real time but also to maintain the security of the house. It is monitored and controlled remotely from an Android app using Internet or Intranet connectivity. The project aims at the multiple benefits of saving on home electricity bills, keeping users updated about their home security, offering the option of controlling the switching of devices by voice or a simple toggle touch on a smartphone, and, last but most importantly, monitoring usage in order to conserve precious natural resources by reducing electrical energy consumption."} {"_id": "74db933fcd3d53466ac004e7064fc5db348f1d38", "title": "Scalable Visualisation of Sentiment and Stance", "text": "Natural language processing systems have the ability to analyse not only the sentiment of human language, but also the stance of the speaker. Representing this information visually from unevenly distributed and potentially sparse datasets is challenging, in particular when trying to facilitate exploration and knowledge discovery. We present work on a novel visualisation approach for scalable visualisation of sentiment and stance, and provide a language resource of e-government public engagement comprising 9,278 user comments with stance explicitly declared by the author."} {"_id": "76bf95815fdef196f017d4699857ecfa478b8755", "title": "Differentially Private K-Means Clustering", "text": "There are two broad approaches for differentially private data analysis. The interactive approach aims at developing customized differentially private algorithms for various data mining tasks. The non-interactive approach aims at developing differentially private algorithms that can output a synopsis of the input dataset, which can then be used to support various data mining tasks. In this paper we study the effectiveness of the two approaches on differentially private k-means clustering. We develop techniques to analyze the empirical error behaviors of the existing interactive and non-interactive approaches. 
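To give the interactive approach a concrete shape, one Lloyd-style iteration with Laplace noise added to the per-cluster sufficient statistics might look as follows. This is a schematic sketch only, not the DPLloyd algorithm as specified in the paper; the privacy budget split and the normalization assumption are placeholders.

```python
import numpy as np

def noisy_kmeans_step(points, centers, eps, rng):
    """One Lloyd iteration with Laplace noise on per-cluster sums and counts.

    points: (n, d) array, assumed normalized to [-1, 1] so that the L1
    sensitivity of each per-cluster sum is bounded; eps is split evenly
    (an assumption) between the noisy counts and the noisy sums.
    """
    n, d = points.shape
    dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dists.argmin(axis=1)
    new_centers = centers.copy()
    for k in range(len(centers)):
        members = points[labels == k]
        noisy_count = len(members) + rng.laplace(scale=2.0 / eps)
        noisy_sum = members.sum(axis=0) + rng.laplace(scale=2.0 * d / eps, size=d)
        if noisy_count > 1.0:
            new_centers[k] = noisy_sum / noisy_count
    return new_centers

rng = np.random.default_rng(1)
pts = np.clip(rng.normal(scale=0.3, size=(200, 2)), -1, 1)
centers = rng.uniform(-1, 1, size=(3, 2))
for _ in range(5):  # each iteration spends a share of the total budget
    centers = noisy_kmeans_step(pts, centers, eps=1.0, rng=rng)
```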
Based on the analysis, we propose an improvement of DPLloyd, which is a differentially private version of the Lloyd algorithm. We also propose a non-interactive approach, EUGkM, which publishes a differentially private synopsis for k-means clustering. Results from extensive and systematic experiments support our analysis and demonstrate the effectiveness of our improvement on DPLloyd and the proposed EUGkM algorithm."} {"_id": "44dd6443a07f0d139717be74a98988e3ec80beb8", "title": "Unifying Instance-based and Rule-based Induction", "text": "Several well-developed approaches to inductive learning now exist, but each has specific limitations that are hard to overcome. Multi-strategy learning attempts to tackle this problem by combining multiple methods in one algorithm. This article describes a unification of two widely-used empirical approaches: rule induction and instance-based learning. In the new algorithm, instances are treated as maximally specific rules, and classification is performed using a best-match strategy. Rules are learned by gradually generalizing instances until no improvement in apparent accuracy is obtained. Theoretical analysis shows this approach to be efficient. It is implemented in the RISE 3.1 system. In an extensive empirical study, RISE consistently achieves higher accuracies than state-of-the-art representatives of both its parent approaches (PEBLS and CN2), as well as a decision tree learner (C4.5). Lesion studies show that each of RISE's components is essential to this performance. Most significantly, in 14 of the 30 domains studied, RISE is more accurate than the best of PEBLS and CN2, showing that a significant synergy can be obtained by combining multiple empirical methods."} {"_id": "384b8bb787f58bb1f1637d0be0c5aa7d562acc46", "title": "Benchmarking short text semantic similarity", "text": "Short Text Semantic Similarity measurement is a new and rapidly growing field of research. \u201cShort texts\u201d are typically of sentence length but are not required to be grammatically correct. There is great potential for applying these measures in fields such as Information Retrieval, Dialogue Management and Question Answering. A dataset of 65 sentence pairs, with similarity ratings, produced in 2006 has become adopted as a de facto Gold Standard benchmark. This paper discusses the adoption of the 2006 dataset, lays down a number of criteria that can be used to determine whether a dataset should be awarded a \u201cGold Standard\u201d accolade, and illustrates its use as a benchmark. Procedures for the generation of further Gold Standard datasets in this field are recommended."} {"_id": "24b77da5387095aa074d51e75db875e766eefbee", "title": "Quantum Digital Signatures", "text": "We present a quantum digital signature scheme whose security is based on fundamental principles of quantum physics. It allows a sender (Alice) to sign a message in such a way that the signature can be validated by a number of different people, and all will agree either that the message came from Alice or that it has been tampered with. To accomplish this task, each recipient of the message must have a copy of Alice\u2019s \u201cpublic key,\u201d which is a set of quantum states whose exact identity is known only to Alice. Quantum public keys are more difficult to deal with than classical public keys: for instance, only a limited number of copies can be in circulation, or the scheme becomes insecure. 
However, in exchange for this price, we achieve unconditionally secure digital signatures. Sending an m-bit message uses up O(m) quantum bits for each recipient of the public key. We briefly discuss how to securely distribute quantum public keys, and show that the signature scheme is absolutely secure using one method of key distribution. The protocol provides a model for importing the ideas of classical public key cryptography into the quantum world."} {"_id": "6617cd6bdb16054c5369fb8ca40cf538a43f977a", "title": "Medieval Islamic Architecture, Quasicrystals, and Penrose and Girih Tiles: Questions from the Classroom", "text": "Tiling Theory studies how one might cover the plane with various shapes. Medieval Islamic artisans developed intricate geometric tilings to decorate their mosques, mausoleums, and shrines. Some of these patterns, called girih tilings, first appeared in the 12th Century AD. Recent investigations show these medieval tilings contain symmetries similar to those found in aperiodic Penrose tilings first investigated in the West in the 1970s. These intriguing discoveries may suggest that the mathematical understanding of these artisans was much deeper than originally thought."} {"_id": "d4375f08528e3326280210f95215237db637f199", "title": "An HMM-Based Approach for Off-Line Unconstrained Handwritten Word Modeling and Recognition", "text": "This paper describes a hidden Markov model-based approach designed to recognize off-line unconstrained handwritten words for large vocabularies. After preprocessing, a word image is segmented into letters or pseudoletters and represented by two feature sequences of equal length, each consisting of an alternating sequence of shape-symbols and segmentation-symbols, which are both explicitly modeled. The word model is made up of the concatenation of appropriate letter models consisting of elementary HMMs, and an HMM-based interpolation technique is used to optimally combine the two feature sets. Two rejection mechanisms are considered, depending on whether or not the word image is guaranteed to belong to the lexicon. Experiments carried out on real-life data show that the proposed approach can be successfully used for handwritten word recognition. Index Terms: Handwriting modeling, preprocessing, segmentation, feature extraction, hidden Markov models, word recognition, rejection."} {"_id": "f4373ab2f8d5a457141c77eabd2bab4a698e26cf", "title": "A Full English Sentence Database for Off-line Handwriting Recognition", "text": "In this paper we present a new database for off-line handwriting recognition, together with a few preprocessing and text segmentation procedures. The database is based on the Lancaster-Oslo/Bergen (LOB) corpus. This corpus is a collection of texts that were used to generate forms, which subsequently were filled out by persons with their handwriting. Up to now (December 1998) the database includes 556 forms produced by approximately 250 different writers. The database consists of full English sentences. It can serve as a basis for a variety of handwriting recognition tasks. The main focus, however, is on recognition techniques that use linguistic knowledge beyond the lexicon level. 
This knowledge can be automatically derived from the corpus or it can be supplied from external sources."} {"_id": "62a134740314b4469c83c8921ae2e1beea22b8f5", "title": "A Database for Handwritten Text Recognition Research", "text": "We propose in this correspondence a new method to perform two-class clustering of 2-D data in a quick and automatic way by preserving certain features of the input data. The method is analytical, deterministic, unsupervised, automatic, and noniterative. The computation time is of order n if the data size is n, and hence much faster than any other method which requires the computation of an n-by-n dissimilarity matrix. Furthermore, the proposed method does not have the trouble of guessing initial values. This new approach is thus more suitable for fast automatic hierarchical clustering or any other field requiring fast automatic two-class clustering of 2-D data. The method can be extended to cluster data in higher dimensional space. A 3-D example is included."} {"_id": "a1dcd66aaf62a428364878b875c7342b660f7f1e", "title": "Continuous speech recognition using hidden Markov models", "text": "The use of hidden Markov models (HMMs) in continuous speech recognition is reviewed. Markov models are presented as a generalization of their predecessor technology, dynamic programming. A unified view is offered in which both linguistic decoding and acoustic matching are integrated into a single, optimal network search framework. Advances in recognition architectures are discussed. The fundamentals of Viterbi beam search, the dominant search algorithm used today in speech recognition, are presented. Approaches to estimating the probabilities associated with an HMM model are examined. The HMM-supervised training paradigm is examined. Several examples of successful HMM-based speech recognition systems are reviewed."} {"_id": "b38ac03b806a291593c51cb51818ce8e919a1a43", "title": "Pattern classification - a unified view of statistical and neural approaches", "text": ""} {"_id": "4debb3fe83ea743a888aa2ec8f4252bbe6d0fcb8", "title": "A framework analysis of the open source software development paradigm", "text": "Open Source Software (OSS) has become the subject of much commercial interest of late. Certainly, OSS seems to hold much promise in addressing the core issues of the software crisis, namely that software takes too long to develop, exceeds its budget, and does not work very well. Indeed, there have been several examples of significant OSS success stories\u2014the Linux operating system, the Apache web server, the BIND domain name resolution utility, to name but a few. However, little by way of rigorous academic research on OSS has been conducted to date. In this study, a framework was derived from two previous frameworks which have been very influential in the IS field, namely Zachman\u2019s IS architecture (ISA) and Checkland\u2019s CATWOE framework from Soft Systems Methodology (SSM). The resulting framework is used to analyze the OSS approach in detail. The potential future of OSS research is also discussed."} {"_id": "263cedeadb67c325a933d3ab8eb8fba3db83f627", "title": "On pricing discrete barrier options using conditional expectation and importance sampling Monte Carlo", "text": "Estimators for the price of a discrete barrier option based on conditional expectation and importance sampling variance reduction techniques are given. 
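Before any variance reduction, a plain Monte Carlo baseline for a discretely monitored barrier option helps fix ideas. The sketch below assumes Black-Scholes dynamics for an up-and-out call; the paper's conditional-expectation and importance-sampling estimators are not reproduced here.

```python
import numpy as np

def mc_up_and_out_call(s0, k, barrier, r, sigma, t, n_steps, n_paths, seed=0):
    """Crude Monte Carlo price of a discretely monitored up-and-out call."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    paths = s0 * np.exp(log_paths)
    alive = (paths < barrier).all(axis=1)   # knocked out if any fixing hits the barrier
    payoff = np.where(alive, np.maximum(paths[:, -1] - k, 0.0), 0.0)
    return np.exp(-r * t) * payoff.mean()

print(mc_up_and_out_call(100, 100, 120, 0.05, 0.2, 1.0, 50, 100_000))
```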
There are erroneous formulas for the conditional expectation estimator published in the literature: we derive the correct expression for the estimator. We use a simulated annealing algorithm to estimate the optimal parameters of exponential twisting in importance sampling, and compare it with a heuristic used in the literature. Randomized quasi-Monte Carlo methods are used to further increase the accuracy of the estimators."} {"_id": "638a8302b681221b32aa4a8b69154ec4e6820a54", "title": "On the mechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX)", "text": "The first energetically autonomous lower extremity exoskeleton capable of carrying a payload has been demonstrated at U.C. Berkeley. This paper summarizes the mechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX). The anthropomorphically-based BLEEX has seven degrees of freedom per leg, four of which are powered by linear hydraulic actuators. The selection of the degrees of freedom and their ranges of motion are described. Additionally, the significant design aspects of the major BLEEX components are covered."} {"_id": "9e03eb52f39e226656d2ae58e0e5c8e4d7564a18", "title": "Five Styles of Customer Knowledge Management, and How Smart Companies Use Them To Create Value", "text": "Corporations are beginning to realize that the proverbial \u2018if we only knew what we know\u2019 also includes \u2018if we only knew what our customers know.\u2019 The authors discuss the concept of Customer Knowledge Management (CKM), which refers to the management of knowledge from customers, i.e. knowledge resident in customers. CKM is contrasted with knowledge about customers, e.g. customer characteristics and preferences prevalent in previous work on knowledge management and customer relationship management. Five styles of CKM are proposed and practically illustrated by way of corporate examples. Implications are discussed for knowledge management, the resource-based view, and strategy process research."} {"_id": "d8a9d06f5dba9ea4586aba810d4679b8af460885", "title": "Rule Based Part of Speech Tagging of Sindhi Language", "text": "Part of Speech (POS) tagging is a process of assigning correct syntactic categories to each word in the text. A tag set and word disambiguation rules are fundamental parts of any POS tagger. No work on a tag set for the Sindhi language has hitherto been published. A Sindhi lexicon for computational processing is also not available. In this study, the tag set for Sindhi POS tagging, the lexicon and the word disambiguation rules are designed and developed. The Sindhi corpus is collected from a comprehensive Sindhi dictionary. The corpus is based on the most recent available vocabulary used by local people. In this paper, preliminary achievements of the rule-based Sindhi Part of Speech (SPOS) tagger are presented. Tagging and tokenization algorithms are also designed for the implementation of SPOS. The outputs of SPOS are verified by a Sindhi linguist. The development of the SPOS tagger may be an important milestone towards computational Sindhi language processing."} {"_id": "d1cf19c3f9d202d81cd67a999d7c831aa871cc6e", "title": "gNUFFTW: Auto-Tuning for High-Performance GPU-Accelerated Non-Uniform Fast Fourier Transforms", "text": "Non-uniform sampling of the Fourier transform appears in many important applications such as magnetic resonance imaging (MRI), optics, tomography and radio interferometry. 
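As a point of reference for the discussion that follows, the 1-D non-uniform discrete Fourier transform can be evaluated directly at O(MN) cost; this naive form is exactly what gridding-based NUFFT methods approximate faster. Conventions and names below are assumptions for illustration.

```python
import numpy as np

def nudft(coeffs, points):
    """Evaluate a uniform Fourier series at non-uniform sample points.

    coeffs: N complex Fourier coefficients c_k, k = -N//2 .. N - N//2 - 1.
    points: M sample locations in [0, 1).
    Returns f(x_j) = sum_k c_k * exp(2*pi*i*k*x_j) -- O(M*N) work.
    """
    n = len(coeffs)
    ks = np.arange(-(n // 2), n - n // 2)
    return np.exp(2j * np.pi * np.outer(points, ks)) @ coeffs

rng = np.random.default_rng(2)
c = rng.normal(size=8) + 1j * rng.normal(size=8)
x = rng.uniform(size=16)                 # non-uniform sampling locations
print(nudft(c, x).shape)                 # (16,)
```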
Computing the inverse often requires fast application of the non-uniform discrete Fourier transform (NUDFT) and its adjoint operation. Non-Uniform Fast Fourier Transform (NUFFT) methods, such as gridding/regridding, are approximate algorithms which often leverage the highly-optimized Fast Fourier Transform (FFT) and localized interpolations. These approaches require selecting several parameters, such as interpolation and FFT grid sizes, which affect both the accuracy and runtime. In addition, different implementations lie on a spectrum of precomputation levels, which can further speed up repeated computations, with various trade-offs in planning time, execution time and memory usage. Choosing the optimal parameters and implementation is important for performance, but difficult to do manually since the performance of NUFFT is not well understood for modern parallel processors. Inspired by the FFTW library, we demonstrate an empirical auto-tuning approach for the NUFFT on General Purpose Graphics Processing Units (GPGPUs). We demonstrate order-of-magnitude speed improvements with auto-tuning compared to typical default choices. Our auto-tuning is implemented in an easy-to-use proof-of-concept library called gNUFFTW, which leverages existing open-source NUFFT packages, the cuFFT and cuSPARSE libraries, as well as our own NUFFT implementations for high performance. Keywords: non-uniform, non-Cartesian, FFT, NUFFT, GPU, auto-tuning, image processing."} {"_id": "1aa3e563e9179841a2b3a9b026c9f8a00c7fe664", "title": "Automatic Medical Concept Extraction from Free Text Clinical Reports, a New Named Entity Recognition Approach", "text": "In current Hospital Information Systems, clinical information from the Electronic Health Records (EHR) is represented in a wide range of formats, and most of the information contained in clinical reports is written as free natural-language text. In this context, we are researching the problem of automatic clinical named entity recognition from free-text clinical reports. We use Snomed-CT (Systematized Nomenclature of Medicine \u2013 Clinical Terms) as the dictionary to identify all kinds of clinical concepts, and thus the problem we consider is to map each clinical entity named in a free-text report to its unique Snomed-CT ID. More generally, we have developed a new approach to the named entity recognition (NER) problem in specific domains, and we have applied it to recognizing clinical concepts in free-text clinical reports. In our approach we apply two types of NER approaches, dictionary-based and machine-learning-based. We use a domain-specific dictionary-based gazetteer (using Snomed-CT to get the standard clinical code for each clinical concept), and the main approach that we introduce is using an unsupervised shallow-learning neural network, word2vec from Mikolov et al., to represent words as vectors, and then performing the recognition based on the distance between candidates and dictionary terms. We have applied our approach to a dataset with 318,585 clinical reports in Spanish from the emergency service of the Hospital \u201cRafael M\u00e9ndez\u201d in Lorca (Murcia), Spain, and preliminary results are encouraging. 
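The distance-based recognition step described above, comparing a candidate mention's vector against vectors of dictionary terms, can be sketched with cosine similarity. The embeddings, threshold and Snomed-CT-style IDs below are invented placeholders, not resources from the study.

```python
import numpy as np

def match_concept(candidate_vec, dictionary, threshold=0.7):
    """Map a candidate mention vector to the closest dictionary concept.

    dictionary: {concept_id: embedding}, e.g. built by averaging word2vec
    vectors of each term. Returns (concept_id, similarity) or None when no
    entry is similar enough.
    """
    best_id, best_sim = None, -1.0
    for cid, vec in dictionary.items():
        sim = float(np.dot(candidate_vec, vec)
                    / (np.linalg.norm(candidate_vec) * np.linalg.norm(vec)))
        if sim > best_sim:
            best_id, best_sim = cid, sim
    return (best_id, best_sim) if best_sim >= threshold else None

rng = np.random.default_rng(3)
dictionary = {"SCT-0001": rng.normal(size=50), "SCT-0002": rng.normal(size=50)}
mention = dictionary["SCT-0001"] + 0.1 * rng.normal(size=50)  # a noisy mention
print(match_concept(mention, dictionary))
```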
Key-Words: Snomed-CT, word2vec, doc2vec, clinical information extraction, skip-gram, medical terminologies, semantic search, named entity recognition, NER, medical entity recognition"} {"_id": "dceb94c63c2ecb097e0506abbb0dea243465598f", "title": "Femtosecond-Long Pulse-Based Modulation for Terahertz Band Communication in Nanonetworks", "text": "Nanonetworks consist of nano-sized communicating devices which are able to perform simple tasks at the nanoscale. Nanonetworks are the enabling technology of long-awaited applications such as advanced health monitoring systems or high-performance distributed nano-computing architectures. The peculiarities of novel plasmonic nano-transceivers and nano-antennas, which operate in the Terahertz Band (0.1-10 THz), require the development of tailored communication schemes for nanonetworks. In this paper, a modulation and channel access scheme for nanonetworks in the Terahertz Band is developed. The proposed technique is based on the transmission of one-hundred-femtosecond-long pulses by following an asymmetric On-Off Keying modulation Spread in Time (TS-OOK). The performance of TS-OOK is evaluated in terms of the achievable information rate in the single-user and the multi-user cases. An accurate Terahertz Band channel model, validated by COMSOL simulation, is used, and novel stochastic models for the molecular absorption noise in the Terahertz Band and for the multi-user interference in TS-OOK are developed. The results show that the proposed modulation can support a very large number of nano-devices simultaneously transmitting at multiple Gigabits-per-second and up to Terabits-per-second, depending on the modulation parameters and the network conditions."} {"_id": "84f5622739103b92288f694a557a907b807086bf", "title": "Flow map layout", "text": "Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes. Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data."} {"_id": "a426971fa937bc8d8388e8a657ff891c012a855f", "title": "Deep Learning for Biomedical Information Retrieval: Learning Textual Relevance from Click Logs", "text": "We describe a Deep Learning approach to modeling the relevance of a document\u2019s text to a query, applied to biomedical literature. Instead of mapping each document and query to a common semantic space, we compute a variable-length difference vector between the query and document, which is then passed through a deep convolution stage followed by a deep regression network to produce the estimated probability of the document\u2019s relevance to the query. Despite the small amount of training data, this approach produces a more robust predictor than computing similarities between semantic vector representations of the query and document, and also results in significant improvements over traditional IR text factors. 
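The difference-vector idea in this relevance model can be illustrated with a toy stand-in: a logistic scorer over pooled |query - document| features in place of the paper's deep convolution and regression stack. All shapes and names are assumptions.

```python
import numpy as np

def relevance_score(q_vecs, d_vecs, w, b):
    """Score a (query, document) pair from their difference vector.

    q_vecs, d_vecs: (n_terms, dim) embedding matrices. We pool each side,
    take the element-wise absolute difference, and pass it through a
    logistic unit -- a shallow stand-in for the deep network described.
    """
    diff = np.abs(q_vecs.mean(axis=0) - d_vecs.mean(axis=0))
    return 1.0 / (1.0 + np.exp(-(diff @ w + b)))

rng = np.random.default_rng(4)
dim = 32
w, b = rng.normal(size=dim), 0.0            # would be learned from click logs
query = rng.normal(size=(3, dim))
doc = rng.normal(size=(40, dim))
print(relevance_score(query, doc, w, b))    # probability-like score in (0, 1)
```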
In the future, we plan to explore its application in improving PubMed search."} {"_id": "4bd48f4438ba7bf731e91cb29508a290e938a1d0", "title": "Electrically small omni-directional antenna of circular polarization", "text": "A compact omni-directional antenna of circular polarization (CP) is presented for 2.4 GHz WLAN access-point applications. The antenna consists of four bent monopoles and a feeding network simultaneously exciting these four monopoles. The electrical size of the CP antenna is only \u03bb0/5\u00d7\u03bb0/5\u00d7\u03bb0/13. The impedance bandwidth (|S11| < -10 dB) is 3.85% (2.392 GHz to 2.486 GHz) and the axial ratio in the azimuth plane is lower than 0.5 dB in the operating band."} {"_id": "a72f412d336d69a82424cb7d610bb1bb9d81ef7c", "title": "Avoiding monotony: improving the diversity of recommendation lists", "text": "The primary premise upon which top-N recommender systems operate is that similar users are likely to have similar tastes with regard to their product choices. For this reason, recommender algorithms depend deeply on similarity metrics to build the recommendation lists for end-users.\n However, it has been noted that the products offered on recommendation lists are often too similar to each other and attention has been paid towards the goal of improving diversity to avoid monotonous recommendations.\n Noting that the retrieval of a set of items matching a user query is a common problem across many applications of information retrieval, we model the competing goals of maximizing the diversity of the retrieved list while maintaining adequate similarity to the user query as a binary optimization problem. We explore a solution strategy to this optimization problem by relaxing it to a trust-region problem. This leads to a parameterized eigenvalue problem whose solution is finally quantized to the required binary solution. We apply this approach to the top-N prediction problem, evaluate the system performance on the Movielens dataset and compare it with a standard item-based top-N algorithm. A new evaluation metric, ItemNovelty, is proposed in this work. Improvements on both diversity and accuracy are obtained compared to the benchmark algorithm."} {"_id": "192a668ef5bcca820fe58ee040bc650edff76625", "title": "Small-signal modeling of pulse-width modulated switched-mode power converters", "text": "A power processing system is required to convert electrical energy from one voltage, current, or frequency to another with, ideally, 100-percent efficiency, together with adjustability of the conversion ratio. The allowable elements are switches, capacitors, and magnetic devices. Essential to the design is a knowledge of the transfer functions, and since the switching converter is nonlinear, a suitable modeling approach is needed. This paper discusses the state-space averaging method, which is an almost ideal compromise between accuracy and simplicity. The treatment here emphasizes the motivation and objectives of modeling by equivalent circuits based upon physical interpretation of state-space averaging, and is limited to pulse-width modulated dc-to-dc converters."} {"_id": "64670ea3b03e812f895eef6cfe6dc842ec08e514", "title": "Software transformations to improve malware detection", "text": "Malware is code designed for a malicious purpose, such as obtaining root privilege on a host. A malware detector identifies malware and thus prevents it from adversely affecting a host. 
In order to evade detection, malware writers use various obfuscation techniques to transform their malware. There is strong evidence that commercial malware detectors are susceptible to these evasion tactics. In this paper, we describe the design and implementation of a malware transformer that reverses the obfuscations performed by a malware writer. Our experimental evaluation demonstrates that this malware transformer can drastically improve the detection rates of commercial malware detectors."} {"_id": "50485a11fc03e14031b08960370358c26553d7e5", "title": "Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic", "text": "Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call \"resource-rational analysis.\""} {"_id": "df77e52aa0cdc772a77a54c3e5cc7a510a2d6e59", "title": "True upper bounds for Bermudan products via non-nested Monte Carlo", "text": "We present a generic non-nested Monte Carlo procedure for computing true upper bounds for Bermudan products, given an approximation of the Snell envelope. The pleonastic \u201ctrue\u201d stresses that, by construction, the estimator is biased above the Snell envelope. The key idea is a regression estimator for the Doob martingale part of the approximative Snell envelope, which preserves the martingale property. The martingale so constructed can be employed for computing tight dual upper bounds without nested simulation. In general, this martingale can also be used as a control variate for simulation of conditional expectations. In this context, we develop a variance-reduced version of the nested primal-dual estimator (Andersen and Broadie, 2004). Numerical experiments indicate the efficiency of the proposed algorithms."} {"_id": "402530f548db5f383f8a8282858895931d2a3e11", "title": "Forecasting and Trading Currency Volatility: An Application of Recurrent Neural Regression and Model Combination", "text": "In this paper, we examine the use of GARCH models, Neural Network Regression (NNR), Recurrent Neural Network (RNN) regression and model combinations for forecasting and trading currency volatility, with an application to the GBP/USD and USD/JPY exchange rates. Both the results of the NNR/RNN models and the model combination results are benchmarked against the simpler GARCH alternative. The idea of developing a nonlinear nonparametric approach to forecast FX volatility, identify mispriced options and subsequently develop a trading strategy based upon this process is intuitively appealing. Using daily data from December 1993 through April 1999, we develop alternative FX volatility forecasting models. 
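As a concrete reference point for the GARCH benchmark used in this comparison, a GARCH(1,1) one-step-ahead volatility forecast can be computed by a simple recursion. The parameters below are illustrative, not values fitted to the GBP/USD or USD/JPY series.

```python
import numpy as np

def garch11_forecast(returns, omega=1e-6, alpha=0.05, beta=0.9):
    """One-step-ahead volatility under GARCH(1,1).

    Applies sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t],
    initialized at the sample variance of the return series.
    """
    sigma2 = np.var(returns)
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return np.sqrt(sigma2)  # forecast volatility, in return units

rng = np.random.default_rng(5)
daily_returns = 0.006 * rng.standard_normal(1500)  # synthetic FX returns
print(garch11_forecast(daily_returns))
```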
These models are then tested out-of-sample over the period April 1999-May 2000, not only in terms of forecasting accuracy, but also in terms of trading efficiency. In order to do so, we apply a realistic volatility trading strategy using FX option straddles once mispriced options have been identified. Allowing for transaction costs, most trading strategies retained produce positive returns. RNN models appear to be the best single modelling approach; yet, somewhat surprisingly, model combination, which has the best overall performance in terms of forecasting accuracy, fails to improve the RNN-based volatility trading results. Another conclusion from our results is that, for the period and currencies considered, the currency option market was inefficient and/or the pricing formulae applied by market participants were inadequate."} {"_id": "1e0964d83cc60cc91b840559260f2eeca9a2205b", "title": "Neural systems responding to degrees of uncertainty in human decision-making.", "text": "Much is known about how people make decisions under varying levels of probability (risk). Less is known about the neural basis of decision-making when probabilities are uncertain because of missing information (ambiguity). In decision theory, ambiguity about probabilities should not affect choices. Using functional brain imaging, we show that the level of ambiguity in choices correlates positively with activation in the amygdala and orbitofrontal cortex, and negatively with a striatal system. Moreover, striatal activity correlates positively with expected reward. Neurological subjects with orbitofrontal lesions were insensitive to the level of ambiguity and risk in behavioral choices. These data suggest a general neural circuit responding to degrees of uncertainty, contrary to decision theory."} {"_id": "65f2bae5564fe8c0449439347357a0cdd84e2e97", "title": "Software Defined Networking for Improved Wireless Sensor Network Management: A Survey", "text": "Wireless sensor networks (WSNs) are becoming increasingly popular with the advent of the Internet of things (IoT). Various real-world applications of WSNs, such as in smart grids, smart farming and smart health, would require a potential deployment of thousands or maybe hundreds of thousands of sensor nodes/actuators. To ensure proper working order and network efficiency of such a network of sensor nodes, an effective WSN management system has to be integrated. However, the inherent challenges of WSNs, such as sensor/actuator heterogeneity, application dependency and resource constraints, have led to challenges in implementing effective traditional WSN management. This difficulty in management increases as the WSN becomes larger. Software Defined Networking (SDN) provides a promising solution for flexible WSN management by allowing the separation of the control logic from the sensor nodes/actuators. 
The advantage of this SDN-based management in WSNs is that it enables centralized control of the entire WSN, making it simpler to deploy network-wide management protocols and applications on demand. This paper highlights some of the recent work on traditional WSN management in brief and reviews SDN-based management techniques for WSNs in greater detail, while drawing attention to the advantages that SDN brings to traditional WSN management. This paper also investigates open research challenges in coming up with mechanisms for flexible and easier SDN-based WSN configuration and management."} {"_id": "6991990042c4ace19fc9aaa551cfe4549ac36799", "title": "1.5MVA grid-connected interleaved inverters using coupled inductors for wind power generation system", "text": "In this paper, grid-connected interleaved voltage source inverters with coupled inductors for a PMSG wind power generation system are introduced. In parallel operation, an undesirable circulating current flows between the inverters. There are some differences in the circulating currents according to the source configuration; these are mathematically analyzed and simulated to verify the analysis. In the case of interleaved inverters, an uncontrollable circulating current at the switching-frequency level flows between inverter modules, regardless of whether the source configuration is common or separated, if the filter inductance placed between the inverters is significantly low. In order to suppress this high-frequency circulating current, coupled inductors are employed in each phase. A 1.5 MVA grid-connected interleaved inverter prototype using coupled inductors for parallel back-to-back converters in a PMSG wind power generation system is introduced, and the proposed topology is experimentally verified."} {"_id": "653587df5ffb93d5df4f571a64997aced4b395fe", "title": "Anthropometric and physiological predispositions for elite soccer.", "text": "This review is focused on anthropometric and physiological characteristics of soccer players with a view to establishing their roles within talent detection, identification and development programmes. Top-class soccer players have to adapt to the physical demands of the game, which are multifactorial. Players may not need to have an extraordinary capacity within any of the areas of physical performance but must possess a reasonably high level within all areas. This explains why there are marked individual differences in anthropometric and physiological characteristics among top players. Various measurements have been used to evaluate specific aspects of the physical performance of both youth and adult soccer players. The positional role of a player is related to his or her physiological capacity. Thus, midfield players and full-backs have the highest maximal oxygen intakes (>60 ml/kg/min) and perform best in intermittent exercise tests. On the other hand, midfield players tend to have the lowest muscle strength. Although these distinctions are evident in adult and elite youth players, their existence must be interpreted circumspectly in talent identification and development programmes. A range of relevant anthropometric and physiological factors can be considered which are subject to strong genetic influences (e.g. stature and maximal oxygen intake) or are largely environmentally determined and susceptible to training effects. Consequently, fitness profiling can generate a useful database against which talented groups may be compared. 
No single method allows for a representative assessment of a player's physical capabilities for soccer. We conclude that anthropometric and physiological criteria do have a role as part of a holistic monitoring of talented young players."} {"_id": "c70e94f55397c49d46d531a6d854eca9b95d09c6", "title": "Security advances and challenges in 4G wireless networks", "text": "This paper presents a study of security advances and challenges associated with emergent 4G wireless technologies. The paper makes a number of contributions to the field. First, it studies the security standards evolution across different generations of wireless standards. Second, the security-related standards, architecture and design for the LTE and WiMAX technologies are analyzed. Third, security issues and vulnerabilities present in the above 4G standards are discussed. Finally, we point to potential areas for future vulnerabilities and evaluate areas in 4G security which warrant attention and future work by the research and advanced technology industry."} {"_id": "2227d14a01f1521939a479d53c5d0b4dda79c22f", "title": "Force feedback based gripper control on a robotic arm", "text": "Telemanipulation (master-slave operation) was developed in order to replace human beings in hazardous environments by using a slave device (e.g. a robot). In this paper, many kinds of master devices are studied for precise control. Remote control needs much information about the remote environment for good and safe human-machine interaction. Existing master devices with vision systems or audio feedback lack most of such information; therefore force feedback, known as haptics, has recently become significant. In this paper we propose a new concept of force feedback. This system can overcome the bottlenecks of other feedback systems in a user-friendly way. A force sensor and a laser distance sensor communicate the gripper's status to the teleoperator through a force feedback module on a glove. Pneumatic pressure gives the operator distance information, while a Magnetorheological Fluid (MR-Fluid) based actuator presents the gripper's force. The experimental results show the feasibility of using such a force feedback glove in combination with a robotic arm."} {"_id": "82a80da61c0f1a3a33deab676213794ee1f8bd6d", "title": "Learning hierarchical control structures for multiple tasks and changing environments", "text": "While the need for hierarchies within control systems is apparent, it is also clear to many researchers that such hierarchies should be learned. Learning both the structure and the component behaviors is a difficult task. The benefit of learning the hierarchical structures of behaviors is that the decomposition of the control structure into smaller transportable chunks allows previously learned knowledge to be applied to new but related tasks. Presented in this paper are improvements to Nested Q-learning (NQL) that allow more realistic learning of control hierarchies in reinforcement environments. Also presented is a simulation of a simple robot performing a series of related tasks that is used to compare both hierarchical and non-hierarchical learning techniques."} {"_id": "23dc97e672467abe20a4e9d5c6367a18f887dcd1", "title": "Learning object detection from a small number of examples: the importance of good features", "text": "Face detection systems have recently achieved high detection rates and real-time performance. However, these methods usually rely on a huge training database (around 5,000 positive examples for good performance).
While such huge databases may be feasible for building a system that detects a single object, it is obviously problematic for scenarios where multiple objects (or multiple views of a single object) need to be detected. Indeed, even for multi-view face detection the performance of existing systems is far from satisfactory. In this work we focus on the problem of learning to detect objects from a small training database. We show that performance depends crucially on the features that are used to represent the objects. Specifically, we show that using local edge orientation histograms (EOH) as features can significantly improve performance compared to the standard linear features used in existing systems. For frontal faces, local orientation histograms enable state of the art performance using only a few hundred training examples. For profile view faces, local orientation histograms enable learning a system that seems to outperform the state of the art in real-time systems even with a small number of training examples."} {"_id": "9afe065b7babf94e7d81a18ad32553306133815d", "title": "Feature Selection Using Adaboost for Face Expression Recognition", "text": "We propose a classification technique for face expression recognition using AdaBoost that learns by selecting the relevant global and local appearance features with the most discriminating information. Selectivity reduces the dimensionality of the feature space that in turn results in significant speed up during online classification. We compare our method with another leading margin-based classifier, the Support Vector Machines (SVM) and identify the advantages of using AdaBoost over SVM in this context. We use histograms of Gabor and Gaussian derivative responses as the appearance features. We apply our approach to the face expression recognition problem where local appearances play an important role. Finally, we show that though SVM performs equally well, AdaBoost feature selection provides a final hypothesis model that can easily be visualized and interpreted, which is lacking in the high dimensional support vectors of the SVM."} {"_id": "0015fa48e4ab633985df789920ef1e0c75d4b7a8", "title": "Training Support Vector Machines: an Application to Face Detection", "text": "We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality, and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions which are used both to generate improved iterative values, and also establish the stopping criteria for the algorithm.
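To make the AdaBoost-as-feature-selector idea from the expression-recognition abstract above concrete, here is a hedged toy sketch: boosting depth-1 stumps so that each round commits to a single feature, then reading the selected dimensions off the ensemble. The synthetic data and all names are illustrative stand-ins for the paper's Gabor and Gaussian-derivative histogram features.

```python
# Toy sketch of AdaBoost as a feature selector (synthetic data; the paper's
# Gabor/Gaussian-derivative histogram features are replaced by random columns).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))                   # 500 samples, 50 candidate features
y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)   # only features 3 and 17 matter

# Depth-1 stumps mean each boosting round picks a single feature,
# so the trained ensemble doubles as a feature-selection procedure.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=50, random_state=0).fit(X, y)

selected = np.argsort(clf.feature_importances_)[::-1][:5]
print("most discriminative features:", selected)  # 3 and 17 should rank first
```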
We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points."} {"_id": "0224e11e8582dd35b32203e9da064d4a3935a792", "title": "Fast Pose Estimation with Parameter-Sensitive Hashing", "text": "Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends a recently developed method for locality-sensitive hashing, which finds approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions; we show how to find the set of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call Parameter-Sensitive Hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images. Part of this work was done when G.S. and P.V. were with Mitsubishi Electric Research Labs, Cambridge, MA."} {"_id": "c0ee9a28a84bb7eb9ac67ded94112298c4ff2be6", "title": "The effects of morphology and fitness on catastrophic interference", "text": "Catastrophic interference occurs when an agent improves in one training instance but becomes worse in other instances. Many methods intended to combat interference have been reported in the literature that modulate properties of a neural controller, such as synaptic plasticity or modularity. Here, we demonstrate that adjustments to the body of the agent, or the way its performance is measured, can also reduce catastrophic interference without requiring changes to the controller. Additionally, we introduce new metrics to quantify catastrophic interference. We do not show that our approach outperforms others on benchmark tests. Instead, by more precisely measuring interactions between morphology, fitness, and interference, we demonstrate that embodiment is an important aspect of this problem. Furthermore, considerations into morphology and fitness can combine with, rather than compete with, existing methods for combating catastrophic interference."} {"_id": "6e42274914360a6cf89618a28eba8086fe04fe6f", "title": "Review of Dercum\u2019s disease and proposal of diagnostic criteria, diagnostic methods, classification and management", "text": "UNLABELLED\nDEFINITION AND CLINICAL PICTURE: We propose the minimal definition of Dercum's disease to be generalised overweight or obesity in combination with painful adipose tissue. The associated symptoms in Dercum's disease include fatty deposits, easy bruisability, sleep disturbances, impaired memory, depression, difficulty concentrating, anxiety, rapid heartbeat, shortness of breath, diabetes, bloating, constipation, fatigue, weakness and joint aches.\n\n\nCLASSIFICATION\nWe suggest that Dercum's disease is classified into: I. Generalised diffuse form - a form with diffusely widespread painful adipose tissue without clear lipomas; II. Generalised nodular form - a form with general pain in adipose tissue and intense pain in and around multiple lipomas; III. Localised nodular form - a form with pain in and around multiple lipomas; and IV.
Juxtaarticular form - a form with solitary deposits of excess fat, for example at the medial aspect of the knee.\n\n\nEPIDEMIOLOGY\nDercum's disease most commonly appears between the ages of 35 and 50 years and is five to thirty times more common in women than in men. The prevalence of Dercum's disease has not yet been exactly established.\n\n\nAETIOLOGY\nProposed, but unconfirmed aetiologies include: nervous system dysfunction, mechanical pressure on nerves, adipose tissue dysfunction and trauma. DIAGNOSIS AND DIAGNOSTIC METHODS: Diagnosis is based on clinical criteria and should be made by systematic physical examination and thorough exclusion of differential diagnoses. Advisably, the diagnosis should be made by a physician with a broad experience of patients with painful conditions and knowledge of family medicine, internal medicine or pain management. The diagnosis should only be made when the differential diagnoses have been excluded.\n\n\nDIFFERENTIAL DIAGNOSIS\nDifferential diagnoses include: fibromyalgia, lipoedema, panniculitis, endocrine disorders, primary psychiatric disorders, multiple symmetric lipomatosis, familial multiple lipomatosis, and adipose tissue tumours. GENETIC COUNSELLING: The majority of the cases of Dercum's disease occur sporadically. An A-to-G mutation at position A8344 of mitochondrial DNA cannot be detected in patients with Dercum's disease. HLA (human leukocyte antigen) typing has not revealed any correlation between typical antigens and the presence of the condition. MANAGEMENT AND TREATMENT: The following treatments have led to some pain reduction in patients with Dercum's disease: Liposuction, analgesics, lidocaine, methotrexate and infliximab, interferon \u03b1-2b, corticosteroids, calcium-channel modulators and rapid cycling hypobaric pressure. As none of the treatments have led to long-lasting, complete pain reduction and revolutionary results, we propose that Dercum's disease should be treated in multidisciplinary teams specialised in chronic pain.\n\n\nPROGNOSIS\nThe pain in Dercum's disease seems to be relatively constant over time."} {"_id": "bff3d3aa6ee2e311c4363f52b23496d75d48a678", "title": "Mapping the game landscape: Locating genres using functional classification", "text": "Are typical computer game genres still valid descriptors and useful for describing game structure and game content? Games have changed from simple to complex and from single-function to multi-function. By identifying structural differences in game elements we develop a more nuanced model to categorize games, and use cluster analysis as a descriptive tool in order to do so. The cluster analysis of 75 functionally different games shows that the two perspectives (omnipresent and vagrant), as well as challenges, mutability and savability, are important functional categories to use in order to describe games."} {"_id": "1fbb5c3abd953e3e2280b4ff86cadd8ffc4144ab", "title": "Reasoning for Semantic Error Detection in Text", "text": "Identifying incorrect content (i.e., semantic errors) in text is a difficult task because of the ambiguous nature of written natural language and the many factors that can make a statement semantically erroneous. Current methods identify semantic errors in a sentence by determining whether it contradicts the domain to which the sentence belongs. However, because these methods are constructed on expected logic contradictions, they cannot handle new or unexpected semantic errors.
In this paper, we propose a new method for detecting semantic errors that is based on logic reasoning. Our proposed method converts text into logic clauses, which are then analyzed against a domain ontology by an automatic reasoner to determine their consistency. This approach can provide a complete analysis of the text, since it can analyze a single sentence or sets of multiple sentences. When there are multiple sentences to analyze, in order to avoid the high complexity of reasoning over a large set of logic clauses, we propose rules that reduce the set of sentences to analyze, based on the logic relationships between sentences. In our evaluation, we have found that our proposed method can identify a significant percentage of semantic errors and, in the case of multiple sentences, it does so without significant computational cost. We have also found that both the quality of the information extraction output and the modeling elements of the ontology (i.e., property domain and range) affect the capability of detecting errors."} {"_id": "070705fe7fd1a7f48498783ee2ac80963a045450", "title": "Panda: Public Auditing for Shared Data with Efficient User Revocation in the Cloud", "text": "With data storage and sharing services in the cloud, users can easily modify and share data as a group. To ensure shared data integrity can be verified publicly, users in the group need to compute signatures on all the blocks in shared data. Different blocks in shared data are generally signed by different users due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks which were previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks by themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of shared data has been re-signed by the cloud. Moreover, our mechanism is able to support batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism can significantly improve the efficiency of user revocation."} {"_id": "50bdbae105d717821dbcf60a48b0891d96e32990", "title": "100kHz IGBT inverter use of LCL topology for high power induction heating", "text": "A power electronic inverter is developed for a high frequency induction heating application. For its implementation, a ZVS LCL resonance method was used.
During the development of the 20MVA, 100kHz LCL resonant induction heating equipment, a 100kHz IGBT module was developed, and the understanding gained from the LCL resonant design was applied to zero-crossing PLL control of IGBT soft switching, with a focus on minimizing losses. In addition, the design, implementation, and testing of these devices provided a basis for the development of large inverter technology."} {"_id": "288eb3c71c76ef2a820c0c2e372436ce078224d0", "title": "P-MOD: Secure Privilege-Based Multilevel Organizational Data-Sharing in Cloud Computing", "text": "Cloud computing has changed the way enterprises store, access and share data. Data is constantly being uploaded to the cloud and shared within an organization built on a hierarchy of many different individuals that are given certain data access privileges. With more data storage needs turning over to the cloud, finding a secure and efficient data access structure has become a major research issue. With different access privileges, individuals with more privileges (at higher levels of the hierarchy) are granted access to more sensitive data than those with fewer privileges (at lower levels of the hierarchy). In this paper, a Privilege-based Multilevel Organizational Data-sharing scheme (P-MOD) is proposed that incorporates a privilege-based access structure into an attribute-based encryption mechanism to handle these concerns. Each level of the privilege-based access structure is affiliated with an access policy that is uniquely defined by specific attributes. Data is then encrypted under each access policy at every level to grant access to specific data users based on their data access privileges. An individual ranked at a certain level can decrypt the ciphertext (at that specific level) if and only if that individual owns a correct set of attributes that can satisfy the access policy of that level. The user may also decrypt the ciphertexts at the lower levels with respect to the user\u2019s level. Security analysis shows that P-MOD is secure against adaptively chosen plaintext attack assuming the DBDH assumption holds. The comprehensive performance analysis demonstrates that P-MOD is more efficient in computational complexity and storage space than the existing schemes in secure data sharing within an organization."} {"_id": "f28fc9551aed8905844feae98b4697301b9e983b", "title": "Generating ROC curves for artificial neural networks", "text": "Receiver operating characteristic (ROC) analysis is an established method of measuring diagnostic performance in medical imaging studies. Traditionally, artificial neural networks (ANNs) have been applied as a classifier to find one \"best\" detection rate. Recently researchers have begun to report ROC curve results for ANN classifiers. The current standard method of generating ROC curves for an ANN is to vary the output node threshold for classification. Here, the authors propose a different technique for generating ROC curves for a two class ANN classifier. They show that this new technique generates better ROC curves in the sense of having greater area under the ROC curve (AUC), and in the sense of being composed of a better distribution of operating points."} {"_id": "7197ccb285c37c175fc95de58656fd2bf912adbc", "title": "Socio-emotional development: from infancy to young adulthood.", "text": "Results from the Uppsala Longitudinal Study (ULS), which started in 1985, are reported in two sections.
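The threshold-sweeping construction that the ROC paper above treats as the current standard can be written in a few lines of NumPy. This sketch shows that baseline only, not the alternative curve-generation technique the paper proposes; the toy scores and labels are illustrative.

```python
# Standard ROC construction for a two-class classifier: sweep a threshold
# over the output scores and record (FPR, TPR) pairs at every cut.
import numpy as np

def roc_curve(scores, labels):
    order = np.argsort(-scores)          # sort by descending score
    labels = labels[order]
    tps = np.cumsum(labels == 1)         # true positives at each threshold
    fps = np.cumsum(labels == 0)         # false positives at each threshold
    tpr = tps / max(tps[-1], 1)
    fpr = fps / max(fps[-1], 1)
    return fpr, tpr

def auc(fpr, tpr):
    return float(np.trapz(tpr, fpr))     # trapezoidal area under the curve

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
labels = np.array([1,   1,   0,   1,   0,    0,   1,   0  ])
print(auc(*roc_curve(scores, labels)))   # ~0.75 for this toy example
```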
The first section gives a summary of longitudinal data from infancy to middle childhood (age 9 years; n = 96) concerning predictions of social functioning aspects from the theoretical perspectives of temperament, attachment, and health psychology (social factors). The second section presents the first results emanating from a follow-up when participants were 21 years old (n = 85). The developmental roots of social anxiety symptoms were studied from the same perspectives as above, although with a special focus on the predictive power of the temperament trait of shyness/inhibition. Results for middle childhood outcomes showed that temperament characteristics were relevant for most outcomes, whereas the contribution of attachment was most convincingly shown in relation to social competence and personality. Social factors were found to have moderating functions, but direct effects were also shown, the most interesting perhaps being positive effects of non-parental day care. Results from the 21-year data confirmed the expected predictive relation from shyness/inhibition to symptoms of social anxiety and further showed this relation to be specific; the relation to symptoms of depression did not survive control for social anxiety, although the opposite was true. The broad analysis of predictor associations with social anxiety, showing the relevance of other temperament factors as well as interactive effects, again attested to the need for multi-faceted models to analyze developmental trajectories."} {"_id": "34dda94f63ddb27a6e2525efcd2e260670f14668", "title": "Contingency planning over probabilistic hybrid obstacle predictions for autonomous road vehicles", "text": "This paper presents a novel optimization based path planner that can simultaneously plan multiple contingency paths to account for the uncertain actions of dynamic obstacles. This planner addresses the particular problem of collision avoidance for autonomous road vehicles which are required to safely interact with other vehicles with unknown intentions. The presented path planner utilizes an efficient spline based trajectory representation and fast but accurate collision probability approximations to enable the simultaneous optimization of multiple contingency paths."} {"_id": "66c2225def7c1a5ace4c58883169aaaa73955f4c", "title": "Motion Detection and Face Recognition for CCTV Surveillance System", "text": "Closed Circuit Television (CCTV) is currently used on a daily basis for a wide variety of purposes. CCTV has developed from simple passive surveillance into an integrated intelligent control system. In this research, motion detection and facial recognition in CCTV video are used as the basis of decision making to produce an automatic, effective and efficient integrated system. This CCTV video process provides three outputs: motion detection information, face detection information and face identification information. The Accumulative Differences Images (ADI) method is used for motion detection, and the Haar Cascade Classifiers method is used for face detection. Feature extraction is done with Speeded-Up Robust Features (SURF) and Principal Component Analysis (PCA). Then, these features are trained by a Counter-Propagation Network (CPN). Offline tests are performed on 45 CCTV videos. The results show a 92.655% success rate for motion detection, a 76% success rate for face detection, and a 60% success rate for face identification.
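A minimal sketch of the Accumulative Difference Image (ADI) idea named in the CCTV abstract above, assuming grayscale frames; the threshold and hit-count values are illustrative, not the paper's settings. Pixels whose difference from a reference frame repeatedly exceeds a threshold accumulate evidence of motion.

```python
# Sketch of ADI-style motion detection: count, per pixel, how often the
# difference from a reference frame exceeds a threshold, and keep only
# pixels with persistent evidence of motion. Parameter values are illustrative.
import numpy as np

def adi_motion_mask(frames, threshold=25, min_hits=3):
    """frames: list of equal-size grayscale arrays; frames[0] is the reference."""
    frames = [f.astype(np.int16) for f in frames]    # avoid uint8 wrap-around
    reference, rest = frames[0], frames[1:]
    adi = np.zeros_like(reference)
    for frame in rest:
        adi += (np.abs(frame - reference) > threshold)  # count exceedances
    return adi >= min_hits                              # persistent motion only

# Toy example: a bright object drifts across an otherwise static scene.
frames = [np.zeros((64, 64), np.uint8) for _ in range(5)]
for t, f in enumerate(frames[1:], start=1):
    f[30:34, 2 * t:2 * t + 6] = 255
print(adi_motion_mask(frames).sum(), "pixels flagged as moving")  # -> 16
```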
This shows that face detection and identification through CCTV video have not yet achieved optimal results. The motion detection process is well suited to real-time application; however, combining it with the face recognition process causes a significant time delay. Keywords\u2014 ADI, Haar Cascade Classifiers, SURF, PCA, CPN"} {"_id": "a26d65817df33f8fee7c39a26a70c0e8a41b2e3e", "title": "Deep neural networks with Elastic Rectified Linear Units for object recognition", "text": "Rectified Linear Unit (ReLU) is crucial to the recent success of deep neural networks (DNNs). In this paper, we propose a novel Elastic Rectified Linear Unit (EReLU) that focuses on processing the positive part of input. Unlike previous variants of ReLU that typically adopt linear or piecewise linear functions to represent the positive part, EReLU is characterized by each positive value scaling within a moderate range, like a spring, during the training stage. At test time, EReLU becomes the standard ReLU. EReLU improves model fitting with no extra parameters and little overfitting risk. Furthermore, we propose the Elastic Parametric Rectified Linear Unit (EPReLU) by taking advantage of EReLU and the parametric ReLU (PReLU)."} {"_id": "ca74a59166af72a14af031504e31d86c7953dc91", "title": "The exact bound in the Erd\u00f6s - Ko - Rado theorem", "text": ""} {"_id": "8577f8716ef27982a47963c498bd12f891c70702", "title": "Interaction Proxies for Runtime Repair and Enhancement of Mobile Application Accessibility", "text": "We introduce interaction proxies as a strategy for runtime repair and enhancement of the accessibility of mobile applications. Conceptually, interaction proxies are inserted between an application's original interface and the manifest interface that a person uses to perceive and manipulate the application. This strategy allows third-party developers and researchers to modify an interaction without an application's source code, without rooting the phone, without otherwise modifying an application, while retaining all capabilities of the system (e.g., Android's full implementation of the TalkBack screen reader). This paper introduces interaction proxies, defines a design space of interaction re-mappings, identifies necessary implementation abstractions, presents details of implementing those abstractions in Android, and demonstrates a set of Android implementations of interaction proxies from throughout our design space. We then present a set of interviews with blind and low-vision people interacting with our prototype interaction proxies, using these interviews to explore the seamlessness of interaction, the perceived usefulness and potential of interaction proxies, and visions of how such enhancements could gain broad usage.
By allowing third-party developers and researchers to improve an interaction, interaction proxies offer a new approach to personalizing mobile application accessibility and a new approach to catalyzing development, deployment, and evaluation of mobile accessibility enhancements."} {"_id": "1e978d2b07a1e9866ea079c06e27c9565e9051b8", "title": "The Shaping of Information by Visual Metaphors", "text": "The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization."} {"_id": "e42485ca1ba3f9c5f02e41d1bcb459ed94f91975", "title": "Rehabilitation robots assisting in walking training for SCI patient", "text": "We have been developing a series of robots to apply to each step of spinal cord injury (SCI) recovery. We describe the preliminary walking-pattern-assisting robot and the practical walking-assisting robot, which will be applied to incomplete SCI, up to quadriplegia. Preliminary experiments were performed by normal subjects to verify the basic assisting functions of the robot and to prepare for tests by SCI patients."} {"_id": "f180d4f9e9d7606aed2dd161cc8388cb5a553fed", "title": "In good company: how social capital makes organizations work [Book Review]", "text": "Don Cohen and Laurence Prusak define \"social capital\" in terms of connections among people. It emerges, they write, from \"trust, mutual understanding, and shared values and behaviors that bind the members of human networks and communities and make more cooperative action possible.\" Continuing a theme that Prusak, executive director of the IBM Institute for Knowledge Management, has been developing, they offer examples of how social capital is created and destroyed and the role it plays in collective human endeavors, particularly in modern business. The authors suggest that this understanding is essential for developing models that respect and nourish human relationships. Seekers of a rigorous, quantified theoretical framework won't find it here. Instead, the book explores the role of trust, community, connectedness and loyalty in making companies work today.--Alan S. Kay"} {"_id": "52cf369ecd012dd4c9b3b49753b9141e1098ecf2", "title": "Harmony: A Hand Wash Monitoring and Reminder System using Smart Watches", "text": "Hand hygiene compliance is extremely important in hospitals, clinics and food businesses.
Caregivers\u2019 compliance with hand hygiene is one of the most effective tools in preventing healthcare associated infections (HAIs) in hospitals and clinics. In food businesses, hand hygiene compliance is essential to prevent food contamination, and thus food borne illness. Washing hands properly is the cornerstone of hand hygiene. However, the hand wash compliance rate of workers (caregivers, waiters, chefs, food processors and so on) is often unsatisfactory. Monitoring hand wash compliance along with a reminder system increases the compliance rate significantly. The quality of a hand wash is also important; it can be achieved by washing hands in accordance with standard guidelines. In this paper, we present Harmony, a hand wash monitoring and reminder system that monitors hand wash events and their quality, provides real time feedback, reminds the person of interest when he/she is required to wash hands, and stores related data in a server for further use. Worker-worn smart watches are the key components of Harmony; they can differentiate hand wash gestures from other gestures with an average accuracy of about 88%. Harmony is robust, scalable, and easy to install, and it overcomes most of the problems of existing related systems."} {"_id": "a5baf22b35c5ac2315819ce64be63d4e9bc05f27", "title": "Subjective Reality and Strong Artificial Intelligence", "text": "The main prospective aim of modern research related to Artificial Intelligence is the creation of technical systems that implement the idea of Strong Intelligence. In our view, the path to the development of such systems goes through research related to perception. Here we formulate a model of the perception of the external world which may be used to describe the perceptual activity of intelligent beings. We consider a number of issues related to the development of the set of patterns which will be used by the intelligent system when interacting with the environment. The key idea of the presented perception model is the idea of subjective reality. The principle of the relativity of the perceived world is formulated. It is shown that this principle is the immediate consequence of the idea of subjective reality. In this paper we show how the methodology of subjective reality may be used for the creation of different types of Strong AI systems."} {"_id": "3ea09660687628d3afb503bccc62126bb6304d18", "title": "A systematic review and comparative analysis of cross-document coreference resolution methods and tools", "text": "Information extraction (IE) is the task of automatically extracting structured information from unstructured/semi-structured machine-readable documents. Among various IE tasks, extracting actionable intelligence from an ever-increasing amount of data depends critically upon cross-document coreference resolution (CDCR) - the task of identifying entity mentions across information sources that refer to the same underlying entity. CDCR is the basis of knowledge acquisition and is at the heart of Web search, recommendations, and analytics. Real-time processing of CDCR is very important and has various applications in discovering must-know information in real time for clients in finance, the public sector, news, and crisis management. As an emerging area of research and practice, the reported literature on CDCR challenges and solutions is growing fast but is scattered due to the large problem space, the variety of applications, and large datasets of the order of peta-/tera-bytes.
In order to fill this gap, we provide a systematic review of state-of-the-art challenges and solutions for the CDCR process. We identify a set of quality attributes that have been frequently reported in the context of CDCR processes, to be used as a guide to identify important and outstanding issues for further investigation. Finally, we assess existing tools and techniques for CDCR subtasks and provide guidance on selection of tools and algorithms."} {"_id": "2b0c5d98a2a93e49411a14d2b89f57a14f253514", "title": "High throughput heavy hitter aggregation for modern SIMD processors", "text": "Heavy hitters are data items that occur at high frequency in a data set. They are among the most important items for an organization to summarize and understand during analytical processing. In data sets with sufficient skew, the number of heavy hitters can be relatively small. We take advantage of this small footprint to compute aggregate functions for the heavy hitters in fast cache memory in a single pass.\n We design cache-resident, shared-nothing structures that hold only the most frequent elements. Our algorithm works in three phases. It first samples and picks heavy hitter candidates. It then builds a hash table and computes the exact aggregates of these elements. Finally, a validation step identifies the true heavy hitters from among the candidates.\n We identify trade-offs between the hash table configuration and performance. Configurations consist of the probing algorithm and the table capacity that determines how many candidates can be aggregated. The probing algorithm can be perfect hashing, cuckoo hashing and bucketized hashing to explore trade-offs between size and speed.\n We optimize performance by the use of SIMD instructions, utilized in novel ways beyond single vectorized operations, to minimize cache accesses and the instruction footprint."} {"_id": "617af330639f56dd2b3472b8d5fbda239506610f", "title": "Analyzing Uber's Ride-sharing Economy", "text": "Uber is a popular ride-sharing application that matches people who need a ride (or riders) with drivers who are willing to provide it using their personal vehicles. Despite its growing popularity, there exist few studies that examine large-scale Uber data, or in general the factors affecting user participation in the sharing economy. We address this gap through a study of the Uber market that analyzes large-scale data covering 59 million rides which spans a period of 7 months. The data were extracted from email receipts sent by Uber collected on Yahoo servers, allowing us to examine the role of demographics (e.g., age and gender) on participation in the ride-sharing economy. In addition, we evaluate the impact of dynamic pricing (i.e., surge pricing) and income on both rider and driver behavior. We find that the surge pricing does not bias Uber use towards higher income riders. Moreover, we show that more homophilous matches (e.g., riders to drivers of a similar age) can result in higher driver ratings. Finally, we focus on factors that affect retention and use information from earlier rides to accurately predict which riders or drivers will become active Uber users."} {"_id": "03695a9a42b0f64b8a13b4ddd3bfde076e9f604a", "title": "Operating System Profiling via Latency Analysis", "text": "Operating systems are complex and their behavior depends on many factors. Source code, if available, does not directly help one to understand the OS's behavior, as the behavior depends on actual workloads and external inputs.
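As a concrete illustration of the three-phase scheme in the heavy-hitter abstract above, here is a single-threaded Python sketch under simplifying assumptions: the SIMD, cache-residency and hash-table-probing machinery of the paper is omitted, and the threshold and sample size are illustrative.

```python
# Single-threaded sketch of three-phase heavy-hitter aggregation:
# (1) sample to pick candidates, (2) aggregate candidates exactly in a small
# hash table, (3) validate by exact count. SIMD/cache details are omitted.
import random
from collections import Counter

def heavy_hitter_sums(keys, values, threshold=0.01, sample_size=1000):
    # Phase 1: sample keys and keep those that look frequent enough.
    sample = random.choices(keys, k=min(sample_size, len(keys)))
    freq = Counter(sample)
    candidates = {k for k, c in freq.items() if c / len(sample) >= threshold / 2}

    # Phase 2: one pass computing exact SUM aggregates for candidates only.
    sums, counts = {}, Counter()
    for k, v in zip(keys, values):
        if k in candidates:
            sums[k] = sums.get(k, 0) + v
            counts[k] += 1

    # Phase 3: validation — keep only the true heavy hitters.
    cutoff = threshold * len(keys)
    return {k: s for k, s in sums.items() if counts[k] >= cutoff}

keys = ["a"] * 5000 + ["b"] * 3000 + list(map(str, range(2000)))
values = [1] * len(keys)
print(heavy_hitter_sums(keys, values))  # {'a': 5000, 'b': 3000} with high probability
```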
Runtime profiling is a key technique to prove new concepts, debug problems, and optimize performance. Unfortunately, existing profiling methods are lacking in important areas---they do not provide enough information about the OS's behavior, they require OS modification and therefore are not portable, or they incur high overheads thus perturbing the profiled OS.\n We developed OSprof: a versatile, portable, and efficient OS profiling method based on latency distributions analysis. OSprof automatically selects important profiles for subsequent visual analysis. We have demonstrated that a suitable workload can be used to profile virtually any OS component. OSprof is portable because it can intercept operations and measure OS behavior from user-level or from inside the kernel without requiring source code. OSprof has typical CPU time overheads below 4%. In this paper we describe our techniques and demonstrate their usefulness through a series of profiles conducted on Linux, FreeBSD, and Windows, including client/server scenarios. We discovered and investigated a number of interesting interactions, including scheduler behavior, multi-modal I/O distributions, and a previously unknown lock contention, which we fixed."} {"_id": "d385f52c9ffbf4bb1d689406cebc5075f5ad4d6a", "title": "Affective Computing and Sentiment Analysis", "text": "Understanding emotions is an important aspect of personal development and growth, and as such it is a key tile for the emulation of human intelligence. Besides being important for the advancement of AI, emotion processing is also important for the closely related task of polarity detection. The opportunity to automatically capture the general public's sentiments about social events, political movements, marketing campaigns, and product preferences has raised interest in both the scientific community, for the exciting open challenges, and the business world, for the remarkable fallouts in marketing and financial market prediction. This has led to the emerging fields of affective computing and sentiment analysis, which leverage human-computer interaction, information retrieval, and multimodal signal processing for distilling people's sentiments from the ever-growing amount of online social data."} {"_id": "20b877e96201a08332b5dcd4e73a1a30c9ac5a9e", "title": "MapReduce : Distributed Computing for Machine Learning", "text": "We use Hadoop, an open-source implementation of Google\u2019s distributed file system and the MapReduce framework for distributed data processing, on modestly-sized compute clusters to evaluate its efficacy for standard machine learning tasks. We show benchmark performance on searching and sorting tasks to investigate the effects of various system configurations. We also distinguish classes of machine-learning problems that are reasonable to address within the MapReduce framework, and offer improvements to the Hadoop implementation. We conclude that MapReduce is a good choice for basic operations on large datasets, although there are complications to be addressed for more complex machine learning tasks."} {"_id": "f09a1acd38a781a3ab6cb31c8e4f9f29be5f58cb", "title": "Large-scale Artificial Neural Network: MapReduce-based Deep Learning", "text": "Faced with the continuously increasing scale of data, the original back-propagation neural network based machine learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload.
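To make the MapReduce-for-ML pattern discussed in the two abstracts above concrete, here is a minimal in-process imitation in Python, applied to the kind of summation many learning algorithms reduce to (a gradient summed over data shards). It is a sketch under stated assumptions, not Hadoop code; all names are illustrative.

```python
# Minimal in-process imitation of MapReduce for ML: map computes a partial
# gradient per data shard, reduce sums the partials. No Hadoop specifics.
from functools import reduce
import numpy as np

def map_gradient(shard, w):
    X, y = shard
    return X.T @ (X @ w - y)          # partial squared-error gradient on one shard

def reduce_gradients(g1, g2):
    return g1 + g2                    # gradients combine by simple addition

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
shards = []
for _ in range(4):                    # four "worker" shards of 250 rows each
    X = rng.normal(size=(250, 2))
    shards.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(100):                  # batch gradient descent over map/reduce
    grad = reduce(reduce_gradients, (map_gradient(s, w) for s in shards))
    w -= 0.0005 * grad
print(w)                              # ≈ [ 2. -1. ]
```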
This project is mainly focused on the solution to the issues above, combining a deep learning algorithm with a cloud computing platform to deal with large-scale data. A MapReduce-based handwriting character recognizer is designed in this project to verify the efficiency improvement this mechanism achieves when training on practical large-scale data. Careful discussion and experiments are developed to illustrate how the deep learning algorithm works to train handwritten digit data, how MapReduce is implemented on the deep learning neural network, and why this combination accelerates computation. Besides performance, scalability and robustness are discussed in this report as well. Our system comes with two demonstration programs that visually illustrate our handwritten digit recognition/encoding application."} {"_id": "0122e063ca5f0f9fb9d144d44d41421503252010", "title": "Large Scale Distributed Deep Networks", "text": "Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, achieving state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly-sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm."} {"_id": "ea1a7c6ba9357ba597cd182ec70e1f03dca485f7", "title": "A novel modular compliant knee joint actuator for use in assistive and rehabilitation orthoses", "text": "Despite significant advancements in the field of wearable robots (WRs), commercial WRs still use traditional direct-drive actuation units to power their joints. On the other hand, in research prototypes compliant actuators are increasingly being used to more adequately address the issues of safety, robustness, control and overall system efficiency. The advantages of mechanical compliance are exploited in a novel modular actuator prototype designed for the knee joint. Due to its modularity, the actuator can be implemented in a knee joint of a standalone or a multi-joint lower-limb orthosis, for use in gait rehabilitation and/or walking assistance. Differently from any other actuator used in orthotic research prototypes, it combines a Variable Stiffness Actuator (VSA) and a Parallel Elasticity Actuation (PEA) unit in a single modular system.
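A threaded toy of the asynchronous parameter-update pattern behind Downpour SGD, from the DistBelief abstract above: model replicas fetch possibly stale parameters, compute gradients on their own mini-batches, and push updates without global synchronization. This is a sketch under simplifying assumptions; the real system also shards the parameters themselves across machines, which is not shown.

```python
# Threaded toy of the asynchronous "Downpour SGD" pattern: replicas push
# gradient updates to a shared parameter copy without global synchronization.
import threading
import numpy as np

w_true = np.array([1.0, 3.0])
params = np.zeros(2)                  # the shared "parameter server" state
lock = threading.Lock()

def replica(seed, steps=200, lr=0.05):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        X = rng.normal(size=(16, 2))              # replica's own mini-batch
        y = X @ w_true
        w = params.copy()                         # fetch (possibly stale) params
        grad = X.T @ (X @ w - y) / len(y)
        with lock:                                # push the update asynchronously
            params -= lr * grad

threads = [threading.Thread(target=replica, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(params)                                     # ≈ [1. 3.]
```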
Although independent, the units are designed to work together in order to fully mimic dynamic behavior of the human knee joint. In this paper, design aspects and functional evaluation of the new actuator are presented and a rationale for such a design in biomechanics of the human knee joint is given. The VSA subsystem is characterized in a quasi-static benchmarking environment and the results showing main performance indicators are presented."} {"_id": "14671928b6bd0c6dbb1145b92d1f3e35513452a8", "title": "How similar are fluid cognition and general intelligence? A developmental neuroscience perspective on fluid cognition as an aspect of human cognitive ability.", "text": "This target article considers the relation of fluid cognitive functioning to general intelligence. A neurobiological model differentiating working memory/executive function cognitive processes of the prefrontal cortex from aspects of psychometrically defined general intelligence is presented. Work examining the rise in mean intelligence-test performance between normative cohorts, the neuropsychology and neuroscience of cognitive function in typically and atypically developing human populations, and stress, brain development, and corticolimbic connectivity in human and nonhuman animal models is reviewed and found to provide evidence of mechanisms through which early experience affects the development of an aspect of cognition closely related to, but distinct from, general intelligence. Particular emphasis is placed on the role of emotion in fluid cognition and on research indicating fluid cognitive deficits associated with early hippocampal pathology and with dysregulation of the hypothalamic-pituitary-adrenal axis stress-response system. Findings are seen to be consistent with the idea of an independent fluid cognitive construct and to assist with the interpretation of findings from the study of early compensatory education for children facing psychosocial adversity and from behavior genetic research on intelligence. It is concluded that ongoing development of neurobiologically grounded measures of fluid cognitive skills appropriate for young children will play a key role in understanding early mental development and the adaptive success to which it is related, particularly for young children facing social and economic disadvantage. Specifically, in the evaluation of the efficacy of compensatory education efforts such as Head Start and the readiness for school of children from diverse backgrounds, it is important to distinguish fluid cognition from psychometrically defined general intelligence."} {"_id": "4b38d1951822e71dafac01cba257fd49e1c3b033", "title": "Solution of the Generalized Noah's Ark Problem.", "text": "The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. 
The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point in moving from a nonlinear program in binary variables to a mixed-integer linear program is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate the potential of the approach. Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all cases the average guarantee varies from 0% to 1.20%."} {"_id": "18e04dbd350ba6dcb5ed10a76476f929b6b5598e", "title": "The Design and Implementation of a Log-Structured File System", "text": "This paper presents a new technique for disk storage management called a log-structured file system. A log-structured file system writes all modifications to disk sequentially in a log-like structure, thereby speeding up both file writing and crash recovery. The log is the only structure on disk; it contains indexing information so that files can be read back from the log efficiently. In order to maintain large free areas on disk for fast writing, we divide the log into segments and use a segment cleaner to compress the live information from heavily fragmented segments. We present a series of simulations that demonstrate the efficiency of a simple cleaning policy based on cost and benefit. We have implemented a prototype log-structured file system called Sprite LFS; it outperforms current Unix file systems by an order of magnitude for small-file writes while matching or exceeding Unix performance for reads and large writes. Even when the overhead for cleaning is included, Sprite LFS can use 70% of the disk bandwidth for writing, whereas Unix file systems typically can use only 5--10%."} {"_id": "2512284d4cb66ee63a3f349abfab75476130b28e", "title": "Robot team learning enhancement using Human Advice", "text": "The paper discusses the augmentation of the Concurrent Individual and Social Learning (CISL) mechanism with a new Human Advice Layer (HAL). The new layer is characterized by a Gaussian Mixture Model (GMM), which is trained on human experience data. The CISL mechanism consists of the Individual Performance and Task Allocation Markov Decision Processes (MDP), and the HAL can provide preferred action selection policies to the individual agents. The data utilized for training the GMM is collected using a heterogeneous team foraging simulation.
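The tangent-envelope linearization mentioned in the GNAP abstract above can be stated explicitly. With tangency points x_1, ..., x_m chosen on the curve (the point set is a free modelling choice, not specified by the abstract), concavity of the logarithm gives the following bound-and-approximation pair:

```latex
% Tangents to the concave function \log x at points x_1,\dots,x_m lie above
% the curve, so their pointwise minimum (the "lower envelope" of the tangent
% family) overestimates \log x and is tight at each x_i:
\log x \;\le\; \log x_i + \frac{x - x_i}{x_i}, \qquad i = 1,\dots,m,
\qquad\text{hence}\qquad
\log x \;\approx\; \min_{1 \le i \le m}\left[\log x_i + \frac{x - x_i}{x_i}\right].
% In a maximization model, t = \log x is replaced by the linear constraints
% t <= \log x_i + (x - x_i)/x_i for all i; since t is pushed upward by the
% objective, it attains the envelope, and the linearized optimum therefore
% upper-bounds the true optimum, as the abstract notes.
```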
When human experience is leveraged in the multi-agent learning process, team performance is enhanced significantly."} {"_id": "79df628dc7b292d6d6387365f074253fbbfbe826", "title": "UI on the Fly: Generating a Multimodal User Interface", "text": "UI on the Fly is a system that dynamically presents coordinated multimodal content through natural language and a small-screen graphical user interface. It adapts to the user\u2019s preferences and situation. Multimodal Functional Unification Grammar (MUG) is a unification-based formalism that uses rules to generate content that is coordinated across several communication modes. Faithful variants are scored with a heuristic function."} {"_id": "b15b3e0fb6ac1cfb16df323c14ae0bbbf3779c1b", "title": "NFC antenna research and a simple impedance matching method", "text": "Near Field Communication (NFC) technology evolved from non-contact radio frequency identification (RFID) and interconnection technologies. Reader, contactless card and peer-to-peer features are integrated into a single chip. This chip can communicate with compatible devices through identification and data exchange at short distances. At first the technology was a simple merger of RFID and network technologies, but it has evolved into a rapidly developing short-range wireless communication technology. This paper first introduces the concept and characteristics of NFC technology. Then, starting from the antenna coil used by NFC technology, it presents a theoretical analysis, a Matlab simulation, and a joint HFSS and ADS2009 simulation, and proposes an improved antenna impedance matching approach."} {"_id": "0200f2b70d414bcc13debe90e4325817b9726d5a", "title": "No Unbiased Estimator of the Variance of K-Fold Cross-Validation", "text": "Most machine learning researchers perform quantitative experiments to estimate generalization error and compare algorithm performances. In order to draw statistically convincing conclusions, it is important to estimate the uncertainty of such estimates. This paper studies the estimation of uncertainty around the K-fold cross-validation estimator. The main theorem shows that there exists no universal unbiased estimator of the variance of K-fold cross-validation. An analysis based on the eigendecomposition of the covariance matrix of errors helps to better understand the nature of the problem and shows that naive estimators may grossly underestimate variance, as confirmed by numerical experiments."} {"_id": "897eacd4b98c8862dbf166e1d677a1d0d6015297", "title": "Analysis of SDN Contributions for Cloud Computing Security", "text": "Cloud infrastructures are composed fundamentally of computing, storage, and networking resources. With regard to networking, Software-Defined Networking (SDN) has become one of the most important architectures for the management of networks that require frequent re-policing or re-configurations. Considering the already known security issues of Cloud Computing, SDN helps to give fast answers to emerging threats, but also introduces new vulnerabilities related to its own architecture. In this paper, we analyze recent security proposals derived from the use of SDN, and elaborate on whether it helps to improve trust, security and privacy in Cloud Computing. Moreover, we discuss security concerns introduced by the SDN architecture and how they could compromise Cloud services.
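The naive variance estimator whose shortcomings the K-fold paper above analyzes can be written down directly: treat the K fold errors as if they were independent. The sketch below shows that estimator on toy data; the paper's point is that fold errors are correlated through shared training data, so no estimator of this (or any universal) form is unbiased. Dataset and model choices here are illustrative.

```python
# The naive variance estimate for K-fold cross-validation criticized in the
# abstract above: it treats the K fold errors as independent samples.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)

fold_errors = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge().fit(X[train], y[train])
    fold_errors.append(np.mean((model.predict(X[test]) - y[test]) ** 2))

fold_errors = np.array(fold_errors)
cv_estimate = fold_errors.mean()
naive_var = fold_errors.var(ddof=1) / len(fold_errors)   # assumes independence
print(cv_estimate, naive_var)  # naive_var can badly understate the true variance
```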
Finally, we explore future security perspectives with regard to leveraging SDN benefits and mitigating its security issues."} {"_id": "f7f4f47b3ef2a2428754f46bb3f7a6bf0c765425", "title": "A Digital Twin-Based Approach for Designing and Multi-Objective Optimization of Hollow Glass Production Line", "text": "Various new national advanced manufacturing strategies, such as Industry 4.0, Industrial Internet, and Made in China 2025, have been issued to achieve smart manufacturing, resulting in an increasing number of newly designed production lines in both developed and developing countries. Under individualized designing demands, more realistic virtual models mirroring the real worlds of production lines are essential to bridge the gap between design and operation. This paper presents a digital twin-based approach for rapid individualized designing of the hollow glass production line. The digital twin merges physics-based system modeling and distributed real-time process data to generate an authoritative digital design of the system at the pre-production phase. A digital twin-based analytical decoupling framework is also developed to provide engineering analysis capabilities and support the decision-making over the system designing and solution evaluation. Three key enabling techniques as well as a case study of a hollow glass production line are addressed to validate the proposed approach."} {"_id": "209b7b98b460c6a0f14cf751a181e971cd4d2a9f", "title": "A Focus on Recent Developments and Trends in Underwater Imaging", "text": "\u25a0 Compact, efficient and easy to program digital signal processors execute algorithms once too computationally expensive for real-time applications. \u25a0 Modeling and simulation programs more accurately predict the effects that physical ocean parameters have on the performance of imaging systems under different geometric configurations. \u25a0 Image processing algorithms that handle data from multiple synchronous sources and that can extract and match feature points from each such source derive accurate 3-D scene information. \u25a0 Digital compression schemes provide high-quality standardizations for increased data transfer rates (i.e. streaming video) and reduced storage requirements. This paper reports developments over the past three years in the following topics: \u25a0 Image formation and image processing methods; \u25a0 Extended range imaging techniques;"} {"_id": "075ef34a0203477c41438743a272f106f95ae909", "title": "Velocity and acceleration estimation for optical incremental encoders", "text": "Optical incremental encoders are extensively used for position measurements in motion systems. The position measurements suffer from quantization errors. Velocity and acceleration estimations obtained by numerical differentiation largely amplify the quantization errors. In this paper, the time stamping concept is used to obtain more accurate position, velocity and acceleration estimations. Time stamping makes use of stored events, consisting of the encoder counts and their time instants, captured at a high resolution clock. Encoder imperfections and the limited resolution of the capturing rate of the encoder events result in errors in the estimations. In this paper, we propose a method to extend the observation interval of the stored encoder events using a skip operation. Experiments on a motion system show that the velocity estimation is improved by 54% and the acceleration estimation by 92%.
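A sketch of the time-stamping idea from the encoder abstract above: instead of differentiating quantized positions on a fixed sample grid, fit a velocity through stored (time, count) events over an extended observation window. The window length and simulated numbers are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of time-stamping velocity estimation for an incremental encoder:
# fit a least-squares line through stored (time, count) events instead of
# differencing quantized positions. The window length `skip` is illustrative.
import numpy as np

def timestamp_velocity(event_times, event_counts, skip=8):
    """Least-squares slope (counts/second) over the last `skip` events."""
    t = np.asarray(event_times[-skip:], dtype=float)
    c = np.asarray(event_counts[-skip:], dtype=float)
    A = np.vstack([t - t.mean(), np.ones_like(t)]).T
    slope, _ = np.linalg.lstsq(A, c, rcond=None)[0]
    return slope

# Simulated events: a disk turning at 400 counts/s, with event times jittered
# to mimic the limited resolution of the capture clock.
true_v = 400.0
counts = np.arange(60)
times = counts / true_v + np.random.default_rng(1).normal(0, 2e-5, size=60)
print(timestamp_velocity(times, counts))   # ≈ 400 counts/s
```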
"} {"_id": "70091b309add6d3e6006bae68a5df5931cfc9d2f", "title": "Phase spectrum prediction of audio signals", "text": "Modeling the phases of audio signals has received significantly less attention in comparison to the modeling of magnitudes. This paper proposes to use linear least squares and neural networks to predict phases from neighboring points in the phase spectrum alone. The simulation results show that there is structure in the phase components that could be used in further analysis algorithms based on the phase spectrum."} {"_id": "d7bf5c4e8d640f877efefa6faaaed5b4556fbe8e", "title": "About Face 3: the essentials of interaction design", "text": "Some people may laugh when they see you reading in your spare time. Some may admire you, and some may want to be like you and take up reading as a hobby. How do you feel about that? Reading is at once a need and a hobby, and this is what makes you feel that you must read. If you are looking for the book About Face 3: The Essentials of Interaction Design by Alan Cooper as your choice of reading, you can find it here."} {"_id": "3decb702ee67b996190692b3eaeef3ba6f41b61d", "title": "From Distributional Semantics to Conceptual Spaces: A Novel Computational Method for Concept Creation", "text": "We investigate the relationship between lexical spaces and contextually-defined conceptual spaces, offering applications to creative concept discovery. We define a computational method for discovering members of concepts based on semantic spaces: starting with a standard distributional model derived from corpus co-occurrence statistics, we dynamically select characteristic dimensions associated with seed terms, and thus a subspace of terms defining the related concept. This approach performs as well as, and in some cases better than, leading distributional semantic models on a WordNet-based concept discovery task, while also providing a model of concepts as convex regions within a space with interpretable dimensions. In particular, it performs well on more specific, contextualized concepts; to investigate this we therefore move beyond WordNet to a set of human empirical studies, in which we compare output against human responses on a membership task for novel concepts. Finally, a separate panel of judges rates both model output and human responses, showing similar ratings in many cases, and some commonalities and divergences which reveal interesting issues for computational concept discovery."} {"_id": "6bc111834108cbcd6cd831f8ac34342d84bc886a", "title": "Modeling and Simulation of Lithium-Ion Batteries from a Systems Engineering Perspective", "text": "The lithium-ion battery is an ideal candidate for a wide variety of applications due to its high energy/power density and operating voltage. Some limitations of existing lithium-ion battery technology include underutilization, stress-induced material damage, capacity fade, and the potential for thermal runaway. This paper reviews efforts in the modeling and simulation of lithium-ion batteries and their use in the design of better batteries. Likely future directions in battery modeling and design, including promising research opportunities, are outlined.
"} {"_id": "92106be17def485cba1c5997b80bbb2d0b21b813", "title": "An analytical framework for evaluating e-commerce business models and strategies", "text": "Electronic commerce or business is more than just another way to sustain or enhance existing business practices. Rather, e-commerce is a paradigm shift. It is a \u201cdisruptive\u201d innovation that is radically changing the traditional way of doing business. The industry is moving so fast because it operates under totally different principles and work rules in the digital economy. A general rule in e-commerce is that there is no simple prescription and almost no such thing as an established business or revenue model for companies even within the same industry. Under such conditions, an analytical framework is needed to assist e-commerce planners and strategic managers in assessing the critical success factors when formulating e-commerce business models and strategies. This research develops an analytical framework based on the theories of transaction costs and switching costs. Both demand-side and supply-side economies of scale and scope are also applied to the development of this framework. In addition, e-commerce revenue models and strategies are also discussed. Based on the analytical framework developed by this research, this paper discusses the five essential steps for e-commerce success. They are: redefine the competitive advantage; rethink business strategy; re-examine traditional business and revenue models; re-engineer the corporation and Web site; and re-invent customer service. E-commerce planners and strategic managers will be able to use the framework to analyze and evaluate the critical success factors for e-commerce success."} {"_id": "4077c4e01162471b8947f2c08b27cb2d2970b09a", "title": "OpenPAWS: An open source PAWS and UHF TV White Space database implementation for India", "text": "TV white space (TVWS) geolocation databases are being used for the protection of terrestrial TV broadcast receivers and the coexistence of secondary devices. To the best of our knowledge, though TV White Space calculations are available, an active online database does not exist for India. In this paper, the development of the first TVWS database for India is detailed and is released for public access. A standardized protocol to access the TVWS database on a readily available hardware platform is implemented. A hardware prototype, which is capable of querying the TVWS database and operating in the TV band without causing harmful interference to the TV receivers in UHF TV bands, is developed. The source code of our implementation has been released under the GNU General Public License version 2.0."} {"_id": "5d7effed9fc5824359ef315a4a517c90eeb286d6", "title": "Automatic Verb Classification Based on Statistical Distributions of Argument Structure", "text": "Automatic acquisition of lexical knowledge is critical to a wide range of natural language processing tasks. Especially important is knowledge about verbs, which are the primary source of relational information in a sentence: the predicate-argument structure that relates an action or state to its participants (i.e., who did what to whom). In this work, we report on supervised learning experiments to automatically classify three major types of English verbs, based on their argument structure, specifically, the thematic roles they assign to participants.
We use linguistically-motivated statistical indicators extracted from large annotated corpora to train the classifier, achieving 69.8% accuracy for a task whose baseline is 34%, and whose expert-based upper bound we calculate at 86.5%. A detailed analysis of the performance of the algorithm and of its errors confirms that the proposed features capture properties related to the argument structure of the verbs. Our results validate our hypotheses that knowledge about thematic relations is crucial for verb classification, and that it can be gleaned from a corpus by automatic means. We thus demonstrate an effective combination of deeper linguistic knowledge with the robustness and scalability of statistical techniques."} {"_id": "75c5e7566ef28c5d51fe0825afe40d4bd3d90746", "title": "Revealing Event Saliency in Unconstrained Video Collection", "text": "Recent progress in multimedia event detection has enabled us to find videos about a predefined event from a large-scale video collection. Research towards more intrinsic unsupervised video understanding is an interesting but understudied field. Specifically, given a collection of videos sharing a common event of interest, the goal is to discover the salient fragments, i.e., the short video fragments that can concisely portray the underlying event of interest, from each video. To explore this novel direction, this paper proposes an unsupervised event saliency revealing framework. It first extracts features from multiple modalities to represent each shot in the given video collection. Then, these shots are clustered to build the cluster-level event saliency revealing framework, which explores useful information cues (i.e., the intra-cluster prior, inter-cluster discriminability, and inter-cluster smoothness) by a concise optimization model. Compared with the existing methods, our approach could highlight the intrinsic stimulus of the unseen event within a video in an unsupervised fashion. Thus, it could potentially benefit a wide range of multimedia tasks such as video browsing, understanding, and search. To quantitatively verify the proposed method, we systematically compare the method to a number of baseline methods on the TRECVID benchmarks. Experimental results have demonstrated its effectiveness and efficiency."} {"_id": "15dd88a84e2581e9d408b2be142b0dba6b9c4c3e", "title": "RTL Hardware IP Protection Using Key-Based Control and Data Flow Obfuscation", "text": "Recent trends of hardware intellectual property (IP) piracy and reverse engineering pose major business and security concerns to an IP-based system-on-chip (SoC) design flow. In this paper, we propose a Register Transfer Level (RTL) hardware IP protection technique based on low-overhead key-based obfuscation of control and data flow. The basic idea is to transform the RTL core into a control and data flow graph (CDFG) and then integrate a well-obfuscated finite state machine (FSM) of special structure, referred to as a \u201cMode-Control FSM\u201d, into the CDFG in such a manner that normal functional behavior is enabled only after application of a specific input sequence. We provide formal analysis of the effectiveness of the proposed approach and present a simple metric to quantify the level of obfuscation. We also present an integrated design flow that implements the proposed obfuscation at low computational overhead.
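The Mode-Control FSM idea from the RTL obfuscation abstract above can be illustrated with a small software sketch. Real designs embed the FSM in RTL; the Python below, with a hypothetical three-word key and toy datapath, only mirrors the control logic:

```python
# Hypothetical 3-word unlock sequence; real designs derive this from a key.
KEY_SEQUENCE = [0xA5, 0x3C, 0x7E]

class ModeControlFSM:
    """Stays in obfuscated mode until the full key sequence is observed."""
    def __init__(self):
        self.state = 0

    def step(self, word):
        if self.state < len(KEY_SEQUENCE) and word == KEY_SEQUENCE[self.state]:
            self.state += 1
        elif self.state < len(KEY_SEQUENCE):
            self.state = 0  # wrong word: restart the unlocking sequence

    @property
    def functional(self):
        return self.state == len(KEY_SEQUENCE)

def datapath(x, fsm):
    # Correct function (x + 1) only in functional mode; garbage otherwise.
    return x + 1 if fsm.functional else x ^ 0xFF

fsm = ModeControlFSM()
for w in [0x00, 0xA5, 0x3C, 0x7E]:  # a wrong word, then the key sequence
    fsm.step(w)
print(datapath(41, fsm))  # prints 42 once unlocked
```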
Simulation results for two open-source IP cores show that high levels of security are achievable at nominal area and power overheads under delay constraints."} {"_id": "c32d241a6857a13750b51243766bee102f2e3df3", "title": "Micro bubble fluidics by EWOD and ultrasonic excitation for micro bubble tweezers", "text": "Recently, we envisioned so-called micro bubble tweezers, in which EWOD (electrowetting-on-dielectric) actuated bubbles can manipulate micro objects such as biological cells by pushing or pulling them. In addition, oscillating (shrinking and expanding) bubbles in the presence of an ultrasonic wave act as a means to deliver drugs and molecules into the cells. In this paper, as a great stride in our quest for micro bubble tweezers, we present (1) full realization of two critical bubble operations (generation of bubbles in an on-chip and on-demand manner and splitting of single bubbles) and (2) two possible applications of mobile bubbles oscillating under acoustic excitation (a mobile vortex generator and micro particle carrier)."} {"_id": "76e12e9b8c02ed94cd5e1cae7174060ca13098f2", "title": "Energy-efficient deployment of Intelligent Mobile sensor networks", "text": "Many visions of the future include people immersed in an environment surrounded by sensors and intelligent devices, which use smart infrastructures to improve the quality of life and safety in emergency situations. Ubiquitous communication enables these sensors or intelligent devices to communicate with each other and the user or a decision maker by means of ad hoc wireless networking. Organization and optimization of network resources are essential to provide ubiquitous communication for a longer duration in large-scale networks and help migrate intelligence from higher, remote levels to lower, local levels. In this paper, distributed energy-efficient deployment algorithms for mobile sensors and intelligent devices that form an Ambient Intelligent network are proposed. These algorithms employ a synergistic combination of cluster structuring and a peer-to-peer deployment scheme. An energy-efficient deployment algorithm based on Voronoi diagrams is also proposed here. Performance of our algorithms is evaluated in terms of coverage, uniformity, and time and distance traveled until the algorithm converges. Our algorithms are shown to exhibit excellent performance."} {"_id": "8714d4596da62426a37516f45ef1aa4dd5b79a56", "title": "The spatial and temporal meanings of English prepositions can be independently impaired", "text": "English uses the same prepositions to describe both spatial and temporal relationships (e.g., at the corner, at 1:30), and other languages worldwide exhibit similar patterns. These space-time parallelisms have been explained by the Metaphoric Mapping Theory, which maintains that humans have a cognitive predisposition to structure temporal concepts in terms of spatial schemas through the application of a TIME IS SPACE metaphor. Evidence comes from (among other sources) historical investigations showing that languages consistently develop in such a way that expressions that originally have only spatial meanings are gradually extended to take on analogous temporal meanings. It is not clear, however, if the metaphor actively influences the way that modern adults process prepositional meanings during language use. To explore this question, a series of experiments was conducted with four brain-damaged subjects with left perisylvian lesions.
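As an illustration of the Voronoi-based deployment mentioned in the mobile-sensor abstract above, the following Lloyd-style sketch repeatedly moves each sensor to the centroid of its grid-approximated Voronoi cell, spreading a clustered initial drop over the area. It is a generic stand-in under assumed geometry, not the paper's distributed algorithm:

```python
import numpy as np

def voronoi_deploy(sensors, area=1.0, grid=100, iters=50):
    """Move each sensor toward the centroid of its Voronoi cell
    (approximated on a grid) to improve coverage uniformity."""
    xs = np.linspace(0, area, grid)
    pts = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
        owner = d.argmin(axis=1)              # nearest sensor per grid point
        for i in range(len(sensors)):
            cell = pts[owner == i]
            if len(cell):
                sensors[i] = cell.mean(axis=0)  # step to the cell centroid
    return sensors

rng = np.random.default_rng(0)
sensors = rng.random((10, 2)) * 0.2             # clustered initial positions
print(voronoi_deploy(sensors).round(2))         # spread out after iterating
```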
Two subjects exhibited the following dissociation: they failed a test that assesses knowledge of the spatial meanings of prepositions, but passed a test that assesses knowledge of the corresponding temporal meanings of the same prepositions. This result suggests that understanding the temporal meanings of prepositions does not necessarily require establishing structural alignments with their spatial correlates. Two other subjects exhibited the opposite dissociation: they performed better on the spatial test than on the temporal test. Overall, these findings support the view that although the spatial and temporal meanings of prepositions are historically linked by virtue of the TIME IS SPACE metaphor, they can be (and may normally be) represented and processed independently of each other in the brains of modern adults."} {"_id": "e46c54636bff2ec3609a67fd33e21a7fc8816179", "title": "Mental simulation in literal and figurative language understanding", "text": "Suppose you ask a colleague how a class he just taught went, and he replies that \"It was a great class; the students were glued to their seats.\" There are two clearly distinct interpretations of this utterance, which might usefully be categorized as a literal and a figurative one. Which interpretation you believe is appropriate will determine your course of action: whether you congratulate your colleague for his fine work, or whether you report him to the dean for engaging in false imprisonment. What distinguishes between these two types of interpretation? Classically, the notions of literalness and figurativity are viewed as pertaining directly to language: words have literal meanings, and can be used figuratively when specific figures of speech cue appropriate interpretation processes (Katz 1977, Searle 1978, Dascal 1987). Indeed, this approach superficially seems to account for the two interpretations described above. The literal (and one would hope, less likely) interpretation involves simply interpreting each word in the sentence and their combination in terms of its straightforward meaning: the glue is literal glue and the seats are literal seats. The figurative interpretation, by contrast, requires knowledge of an idiom a figure of speech. This idiom has a particular form, using the words glued, to, and seat(s), with a possessive pronoun or other noun phrase indicating who was glued to their seat(s) in the middle. It also carries with it a particular meaning: the person or people described were in rapt attention. Thus the figurative interpretation of the sentence differs from the literal one in that the meaning to be taken from it is not built up compositionally from the meanings of words included in the utterance. Rather, the idiom imposes a non-compositional interpretation. As a result, the meaning of the whole can be quite distinct from the meanings of the words included within the idiom. This distinction between idiomaticity and compositionality is an important component of the classical figurative-literal distinction. A consequence of equating figurativity with particular figures of speech is that figurative language can be seen as using words in ways that differ from their real, literal meaning. While this classical notion of figurative and literal language may seem sensible, it leads to a number of incorrect and inconsistent claims about the relation between literal and figurative language.
(Note that although it is not strictly correct to talk about language itself as being literal or figurative, since we should instead discuss acts of figurative or literal language processing, we will adopt this convention for ease of exposition.) One major problem is that it claims that figurative language processing mechanisms are triggered by language: explicit linguistic cues or figures of speech. Indeed, such appears to be the case in the above example, but in many cases it is not. A sentence like You'll find yourself glued to the latest shareholders' report can be interpreted \"figuratively\" even though it has in it no figure of speech equivalent of glued to X's seat(s) that might trigger a search for such an interpretation. The only trigger for this figurative meaning is the verb glue, which would be hard to justify as an idiom or figure of speech. The absence of overt indicators of figurativity might lead us to hypothesize that some words may encode both literal and figurative meanings (as suggested by"} {"_id": "15b45650fa30c56bdc4c595a5afd31663f7f3eb4", "title": "Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time", "text": "Does the language you speak affect how you think about the world? This question is taken up in three experiments. English and Mandarin talk about time differently--English predominantly talks about time as if it were horizontal, while Mandarin also commonly describes time as vertical. This difference between the two languages is reflected in the way their speakers think about time. In one study, Mandarin speakers tended to think about time vertically even when they were thinking for English (Mandarin speakers were faster to confirm that March comes earlier than April if they had just seen a vertical array of objects than if they had just seen a horizontal array, and the reverse was true for English speakers). Another study showed that the extent to which Mandarin-English bilinguals think about time vertically is related to how old they were when they first began to learn English. In another experiment native English speakers were taught to talk about time using vertical spatial terms in a way similar to Mandarin. On a subsequent test, this group of English speakers showed the same bias to think about time vertically as was observed with Mandarin speakers. It is concluded that (1) language is a powerful tool in shaping thought about abstract domains and (2) one's native language plays an important role in shaping habitual thought (e.g., how one tends to think about time) but does not entirely determine one's thinking in the strong Whorfian sense."} {"_id": "2e36f88fa9826e44182fc266d1e16ac7704a65b7", "title": "As time goes by: Evidence for two systems in processing space-time metaphors", "text": "Temporal language is often couched in spatial metaphors. English has been claimed to have two space-time metaphoric systems: the ego-moving metaphor, wherein the observer\u2019s context progresses along the time-line towards the future, and the time-moving metaphor, wherein time is conceived of as a river or conveyor belt on which events are moving from the future to the past. In three experiments, we investigated the psychological status of these metaphors by asking subjects to carry out temporal inferences stated in terms of spatial metaphors. In Experiment 1, we found that subjects were slowed in their processing when the assertions shifted from one spatial metaphoric system to the other.
In Experiment 2, we determined that this cost of shifting could not be attributed to local lexical factors. In Experiment 3, we again found this metaphor consistency effect in a naturalistic version of the study in which we asked commonsense time questions of passengers at an airport. The results of the three studies provide converging evidence that people use spatial metaphors in temporal reasoning. Implications for the status of metaphoric systems are discussed."} {"_id": "c1c1907129cd058d84e0cc7c89e1c34f2adaa34a", "title": "Metaphoric structuring: understanding time through spatial metaphors", "text": "The present paper evaluates the claim that abstract conceptual domains are structured through metaphorical mappings from domains grounded directly in experience. In particular, the paper asks whether the abstract domain of time gets its relational structure from the more concrete domain of space. Relational similarities between space and time are outlined along with several explanations of how these similarities may have arisen. Three experiments designed to distinguish between these explanations are described. The results indicate that (1) the domains of space and time do share conceptual structure, (2) spatial relational information is just as useful for thinking about time as temporal information, and (3) with frequent use, mappings between space and time come to be stored in the domain of time and so thinking about time does not necessarily require access to spatial schemas. These findings provide some of the first empirical evidence for Metaphoric Structuring. It appears that abstract domains such as time are indeed shaped by metaphorical mappings from more concrete and experiential domains such as space."} {"_id": "60388818aa7faf07945d53292a21d3efa2ea841e", "title": "Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations", "text": "Non-parametric models and techniques enjoy a growing popularity in the field of machine learning, and among these Bayesian inference for Gaussian process (GP) models has recently received significant attention. We feel that GP priors should be part of the standard toolbox for constructing models relevant to machine learning in the same way as parametric linear models are, and the results in this thesis help to remove some obstacles on the way towards this goal. In the first main chapter, we provide a distribution-free finite sample bound on the difference between generalisation and empirical (training) error for GP classification methods. While the general theorem (the PAC-Bayesian bound) is not new, we give a much simplified and somewhat generalised derivation and point out the underlying core technique (convex duality) explicitly. Furthermore, the application to GP models is novel (to our knowledge). A central feature of this bound is that its quality depends crucially on task knowledge being encoded faithfully in the model and prior distributions, so there is a mutual benefit between a sharp theoretical guarantee and empirically well-established statistical practices. Extensive simulations on real-world classification tasks indicate an impressive tightness of the bound, in spite of the fact that many previous bounds for related kernel machines fail to give non-trivial guarantees in this practically relevant regime. In the second main chapter, sparse approximations are developed to address the problem of the unfavourable scaling of most GP techniques with large training sets.
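The greedy forward selection with information-style criteria mentioned in the Gaussian process abstract above can be sketched compactly: repeatedly add the point about which the GP, conditioned on the points chosen so far, is most uncertain. The RBF kernel, lengthscale, and this particular variance criterion are illustrative assumptions, not the thesis's exact scheme:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ell**2)

def greedy_subset(X, m):
    """Greedily select m points by maximum GP posterior variance given
    the current subset (prior variance is 1 for this kernel)."""
    chosen = [0]
    for _ in range(m - 1):
        K = rbf(X[chosen], X[chosen]) + 1e-8 * np.eye(len(chosen))
        k = rbf(X, X[chosen])
        var = 1.0 - np.einsum('ij,jk,ik->i', k, np.linalg.inv(K), k)
        var[chosen] = -np.inf                # never re-select a chosen point
        chosen.append(int(var.argmax()))
    return chosen

X = np.random.default_rng(1).random((500, 2))
print(greedy_subset(X, 5))  # indices of a small, well-spread subset
```

Working with the m selected points instead of all n brings the dominant cost down from O(n^3) toward O(n m^2), which is the scaling motivation the abstract describes.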
Due to its high importance in practice, this problem has received a lot of attention recently. We demonstrate the tractability and usefulness of simple greedy forward selection with information-theoretic criteria previously used in active learning (or sequential design) and develop generic schemes for automatic model selection with many (hyper)parameters. We suggest two new generic schemes and evaluate some of their variants on large real-world classification and regression tasks. These schemes and their underlying principles (which are clearly stated and analysed) can be applied to obtain sparse approximations for a wide regime of GP models far beyond the special cases we studied here."} {"_id": "5ce872665d058a4500d143797ac7d2af04139c9d", "title": "Romantic Relationships among Young Adults: An Attachment Perspective", "text": "The present study used adult attachment as a theoretical framework to investigate individual differences in young adults\u2019 romantic relationships. Emphasis was put on studying the relational aspects of the self and coping strategies in an attempt to better understand how romantic partners feel about themselves and how they subsequently behave in periods of conflict. The sample comprised undergraduate university students (N=377) who responded to a self-report questionnaire in English. Most findings were consistent with the proposed hypotheses. Results indicated that anxious individuals tend to report an ambivalent coping style and that fearful individuals were most vulnerable to health risks such as depression. Gender differences were also explored, and results showed, for instance, that women generally tend to seek more support and that men may be more dismissing than women."} {"_id": "d9c012f793800b470e1be86aff533a54ce333990", "title": "Block-diagonal Hessian-free Optimization for Training Neural Networks", "text": "Second-order methods for neural network optimization have several advantages over methods based on first-order gradient descent, including better scaling to large mini-batch sizes and fewer updates needed for convergence. But they are rarely applied to deep learning in practice because of high computational cost and the need for model-dependent algorithmic variations. We introduce a variant of the Hessian-free method that leverages a block-diagonal approximation of the generalized Gauss-Newton matrix. Our method computes the curvature approximation matrix only for pairs of parameters from the same layer or block of the neural network and performs conjugate gradient updates independently for each block. Experiments on deep autoencoders, deep convolutional networks, and multilayer LSTMs demonstrate better convergence and generalization compared to the original Hessian-free approach and the Adam method."} {"_id": "70d930de222e3243d597da448f3f067dee173b24", "title": "A 0.018% THD+N, 88-dB PSRR PWM Class-D Amplifier for Direct Battery Hookup", "text": "A low-distortion third-order class-D amplifier that is fully integrated into a 0.18-\u03bcm CMOS process was designed for direct battery hookup in a mobile application. A class-D amplifier for direct battery hookup must have a sufficiently high power supply rejection ratio (PSRR) to reject noise, such as the ripple induced on the system power line by Global System for Mobile Communications (GSM) bursts. This amplifier has a high PSRR of 88 dB for 217-Hz power supply ripples, using a third-order loop filter.
System performance and stability are improved by applying the design technique of input-feedforward delta-sigma (\u0394\u03a3) modulators to the pulse-width modulation (PWM) class-D amplifier. A filterless method that can remove the external LC filter is employed, which offers great advantages in terms of PCB space and system cost. This amplifier achieves a power efficiency of 85.5% while delivering an output power of 750 mW into an 8-\u03a9 load from a 3.7-V supply voltage. Maximum achieved output power at 1% total harmonic distortion plus noise (THD+N) from a 4.9-V supply voltage into an 8-\u03a9 load is 1.15 W. This class-D amplifier is designed to have a broad operational range of 2.7-4.9 V for the direct use of mobile phone battery power. It has a total area of 1.01 mm2 and achieves a THD+N of 0.018%."} {"_id": "5b25081ac8d857e7f1b09ace9f98eaa657220fe3", "title": "SeiSIM: Structural Similarity Evaluation for Seismic Data Retrieval", "text": "Structural similarity evaluation is a critical step for retrieving existing databases to find matching records for a given seismic data set. The objective is to enable the re-use of historical findings to assist exploration with new survey data. Currently there are very few structural similarity metrics specifically designed for seismic data, especially seismic survey maps. In this paper, we propose a metric that combines texture similarity and geological similarity, which is derived from discontinuity maps. We test the seismic similarity metric in a retrieval application. The successful results indicate that our approach is promising."} {"_id": "4d91926b9291fd4542547f5e974be8c82cf6c822", "title": "A BIOLOGICALLY INSPIRED VISUAL WORKING MEMORY", "text": "The ability to look multiple times through a series of pose-adjusted glimpses is fundamental to human vision. This critical faculty allows us to understand highly complex visual scenes. Short-term memory plays an integral role in aggregating the information obtained from these glimpses and informing our interpretation of the scene. Computational models have attempted to address glimpsing and visual attention but have failed to incorporate the notion of memory. We introduce a novel, biologically inspired visual working memory architecture that we term the Hebb-Rosenblatt memory. We subsequently introduce a fully differentiable Short Term Attentive Working Memory model (STAWM) which uses transformational attention to learn a memory over each image it sees. The state of our Hebb-Rosenblatt memory is embedded in STAWM as the weight space of a layer. By projecting different queries through this layer we can obtain goal-oriented latent representations for tasks including classification and visual reconstruction. Our model obtains highly competitive classification performance on MNIST and CIFAR-10. As demonstrated through the CelebA dataset, to perform reconstruction the model learns to make a sequence of updates to a canvas which constitute a parts-based representation. Classification with the self-supervised representation obtained from MNIST is shown to be in line with state-of-the-art models (none of which use a visual attention mechanism).
Finally, we show that STAWM can be trained under the dual constraints of classification and reconstruction to provide an interpretable visual sketchpad which helps open the \u2018black-box\u2019 of deep learning."} {"_id": "7d360a98dd7dfec9e3079ea18e78e74c6a7038f1", "title": "FRIENDSHIPS IN ONLINE PEER-TO-PEER LENDING: PIPES, PRISMS, AND RELATIONAL HERDING", "text": "This paper investigates how friendship relationships act as pipes, prisms, and herding signals in a large online, peer-to-peer (P2P) lending site. By analyzing decisions of lenders, we find that friends of the borrower, especially close offline friends, act as financial pipes by lending money to the borrower. On the other hand, the prism effect of friends\u2019 endorsements via bidding on a loan negatively affects subsequent bids by third parties. However, when offline friends of a potential lender, especially close friends, place a bid, a relational herding effect occurs as potential lenders are likely to follow their offline friends with a bid."} {"_id": "1a37f07606d60df365d74752857e8ce909f700b3", "title": "Deep Neural Networks for Learning Graph Representations", "text": "In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Unlike previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014). The advantages of our approach will be illustrated from both theoretical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skip-gram model with negative sampling proposed by Mikolov et al. (2013). Unlike their approach, which involves the use of the SVD for finding the low-dimensional projections from the PMI matrix, the stacked denoising autoencoder is introduced in our model to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other state-of-the-art models in such tasks."} {"_id": "1a4b6ee6cd846ef5e3030a6ae59f026e5f50eda6", "title": "Deep Learning for Video Classification and Captioning", "text": "Accelerated by the tremendous increase in Internet bandwidth and storage space, video data has been generated, published and spread explosively, becoming an indispensable part of today's big data. In this paper, we focus on reviewing two lines of research aiming to stimulate the comprehension of videos with deep learning: video classification and video captioning. While video classification concentrates on automatically labeling video clips based on their semantic contents like human actions or complex events, video captioning attempts to generate a complete and natural sentence, enriching the single label as in video classification, to capture the most informative dynamics in videos.
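The PMI matrix referred to in the graph-representation abstract above is straightforward to compute from co-occurrence counts; a minimal positive-PMI sketch (the toy counts are invented for illustration) follows:

```python
import numpy as np

def ppmi(C):
    """Positive pointwise mutual information of a co-occurrence matrix C:
    PPMI_ij = max(0, log(p_ij / (p_i * p_j)))."""
    total = C.sum()
    pr = C.sum(axis=1, keepdims=True) / total   # row marginals
    pc = C.sum(axis=0, keepdims=True) / total   # column marginals
    with np.errstate(divide='ignore'):
        pmi = np.log((C / total) / (pr @ pc))
    return np.maximum(pmi, 0.0)                 # zero out negative/absent pairs

C = np.array([[10., 2., 0.],
              [ 2., 8., 1.],
              [ 0., 1., 5.]])
print(ppmi(C).round(2))
```

The paper's model then feeds such a PMI-style matrix into a stacked denoising autoencoder rather than factorizing it with the SVD.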
In addition, we provide a review of popular benchmarks and competitions, which are critical for evaluating the technical progress of this vibrant field."} {"_id": "6faf92529ccb3680c3965babc343c69cff5d9576", "title": "Generalized Sampling and Infinite-Dimensional Compressed Sensing", "text": "We introduce and analyze a framework and corresponding method for compressed sensing in infinite dimensions. This extends the existing theory from finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary by demonstrating that existing finite-dimensional techniques are ill-suited for solving a number of key problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allow for reconstructions in arbitrary bases. A conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. Central to this work is the introduction of two novel concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize an infinite-dimensional problem."} {"_id": "40f08a80714e7384651ddab62f4597b37779dff5", "title": "Walking away from the Desktop Computer: Distributed Collaboration and Mobility in a Product Design Team", "text": "A study of a spatially distributed product design team shows that most members are rarely at their individual desks. Mobility is essential for the use of shared resources and for communication. It facilitates informal interactions and awareness unavailable to colleagues at remote sites. Implications for technology design include portable and distributed computing resources, in particular, moving beyond individual workstation-centric CSCW applications."} {"_id": "6426be25b02086a78a5dd5505c077c5d4205f275", "title": "A Preliminary Study on Personality Types in Scrum Teams", "text": "In software development, people have a fundamental role as the basis for a project\u2019s success. Regarding agile methodologies, this factor is heightened by the need for self-organized teams, which is related to the members\u2019 personalities and the relationships between them. This paper evaluates how the members\u2019 personality types and social relations influence the outcome of Scrum teams, based on MBTI and sociometry. As a result, it was possible to identify how psychological profiles influence the quality, productivity and achievement of goals defined in the Sprint Backlog."} {"_id": "37bed9d4fec61263e109402b46136272bcd8c5ce", "title": "Long short term memory for driver intent prediction", "text": "Advanced Driver Assistance Systems have been shown to greatly improve road safety. However, existing systems are typically reactive, with an inability to understand complex traffic scenarios. We present a method to predict driver intention as the vehicle enters an intersection using a Long Short Term Memory (LSTM) based Recurrent Neural Network (RNN). The model is learnt using the position, heading and velocity fused from GPS, IMU and odometry data collected by the ego-vehicle. In this paper we focus on determining the earliest possible moment at which we can classify the driver's intention at an intersection.
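A minimal sketch of an LSTM-based intent classifier in the spirit of the driver-intent abstract above, written in PyTorch; the feature count, hidden size, and the three intent classes (left, straight, right) are illustrative assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class IntentLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the latest time step

model = IntentLSTM()
seq = torch.randn(8, 30, 5)              # 8 approaches, 30 steps of fused pose data
logits = model(seq)
print(logits.shape)                      # torch.Size([8, 3])
```

Because the classification is read from the hidden state at each step, the same network can be queried earlier and earlier in the approach, which matches the paper's question of how soon a reliable prediction is possible.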
We consider the outcome of this work an essential component for all levels of road vehicle automation."} {"_id": "bd08d97a8152216e0bb234a2c5a56a378c7a738e", "title": "Multi-Antenna Wireless Legitimate Surveillance Systems: Design and Performance Analysis", "text": "To improve national security, government agencies have long been committed to enforcing powerful surveillance measures on suspicious individuals or communications. In this paper, we consider a wireless legitimate surveillance system, where a full-duplex multi-antenna legitimate monitor aims to eavesdrop on a dubious communication link between a suspicious pair via proactive jamming. Assuming that the legitimate monitor can successfully overhear the suspicious information only when its achievable data rate is no smaller than that of the suspicious receiver, the key objective is to maximize the eavesdropping non-outage probability by joint design of the jamming power, receive and transmit beamformers at the legitimate monitor. Depending on the number of receive/transmit antennas implemented, i.e., single-input single-output, single-input multiple-output, multiple-input single-output, and multiple-input multiple-output (MIMO), four different scenarios are investigated. For each scenario, the optimal jamming power is derived in a closed form and efficient algorithms are obtained for the optimal transmit/receive beamforming vectors. Moreover, low-complexity suboptimal beamforming schemes are proposed for the MIMO case. Our analytical findings demonstrate that by exploiting multiple antennas at the legitimate monitor, the eavesdropping non-outage probability can be significantly improved compared with the single-antenna case. In addition, the proposed suboptimal transmit zero-forcing scheme yields performance similar to that of the optimal scheme."} {"_id": "de5576fbe7efefff0ef9d4b8231e9a5a67fc9d77", "title": "Detection, localization and characterization of damage in plates with an in situ array of spatially distributed ultrasonic sensors", "text": "Permanently attached piezoelectric sensors arranged in a spatially distributed array are under consideration for structural health monitoring systems incorporating active ultrasonic methods. Most damage detection and localization methods that have been proposed are based upon comparing monitored signals to baselines recorded from the structure prior to initiation of damage. To be effective, this comparison process must take into account any conditions other than damage that have changed the ultrasonic signals. Proposed here is a two-step process whereby damage is first detected and is then localized and characterized. The detection strategy considers the long-time behavior of the signals in the diffuse-like regime where distinct echoes can no longer be identified. The localization strategy is to generate images of damage based upon the early-time regime when discrete echoes from boundary reflections and scattering sites are meaningful. Results are shown for an aluminum plate with artificial damage introduced in combination with temperature variations. The loss of local temporal coherence combined with an optimal baseline selection procedure is shown to be effective for the detection of damage, and a delay-and-sum imaging method applied to the residual signals both localizes the damage and provides characterization information.
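The delay-and-sum imaging step named in the damage-localization abstract above admits a compact sketch: each pixel accumulates every transmit-receive pair's residual (baseline-subtracted) signal sampled at the travel time through that pixel. The geometry, array layout, and aluminium-like wave speed below are assumed, not taken from the paper:

```python
import numpy as np

def delay_and_sum(residuals, t, sensors, pixels, c=5400.0):
    """residuals[i][j] is the signal sent from sensor i, received at sensor j;
    a pixel over a scatterer sums coherently, elsewhere the sums stay small."""
    img = np.zeros(len(pixels))
    for p, px in enumerate(pixels):
        for i, si in enumerate(sensors):
            for j, sj in enumerate(sensors):
                tof = (np.linalg.norm(px - si) + np.linalg.norm(px - sj)) / c
                k = np.searchsorted(t, tof)      # sample index at travel time
                if k < len(t):
                    img[p] += residuals[i][j][k]
    return np.abs(img)

t = np.linspace(0, 2e-4, 2000)
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])
rng = np.random.default_rng(0)
residuals = rng.standard_normal((4, 4, len(t))) * 1e-3  # stand-in residual data
pixels = np.array([[0.25, 0.25], [0.1, 0.4]])
print(delay_and_sum(residuals, t, sensors, pixels))
```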
"} {"_id": "ca0332af5ac073a32fbce802b7d02bb2fb0d827e", "title": "Design, development and performance test of an automatic two-axis solar tracker system", "text": "The energy extracted from solar photovoltaic (PV) or solar thermal depends on solar insolation. For the extraction of maximum energy from the sun, the plane of the solar collector should always be normal to the incident radiation. The diurnal and seasonal movement of the earth affects the radiation intensity received on the solar collector. Sun trackers move the solar collector to follow the sun trajectories and keep the orientation of the solar collector at an optimal tilt angle. Energy efficiency of solar PV or solar thermal can be substantially improved using a solar tracking system. In this work, an automatic solar tracking system has been designed and developed using LDR sensors and DC motors on a mechanical structure with gear arrangement. Two-axis solar tracking (azimuth angle as well as altitude angle) has been implemented through sophisticated microcontroller-based control logic. Performance of the proposed system over the important parameters, such as solar radiation received on the collector, maximum hourly electrical power, efficiency gain, short circuit current, open circuit voltage and fill factor, has been evaluated and compared with those for a fixed-tilt-angle solar collector."} {"_id": "bb6782a9f12310b196d57a8bcce2b0445a9d7e2a", "title": "A combined pose, object, and feature model for action understanding", "text": "Understanding natural human activity involves not only identifying the action being performed, but also locating the semantic elements of the scene and describing the person's interaction with them. We present a system that is able to recognize complex, fine-grained human actions involving the manipulation of objects in realistic action sequences. Our method takes advantage of recent advances in sensors and pose trackers in learning an action model that draws on successful discriminative techniques while explicitly modeling both pose trajectories and object manipulations. By combining these elements in a single model, we are able to simultaneously recognize actions and track the location and manipulation of objects. To showcase this ability, we introduce a novel Cooking Action Dataset that contains video, depth readings, and pose tracks from a Kinect sensor. We show that our model outperforms existing state-of-the-art techniques on this dataset as well as the VISINT dataset with only video sequences."} {"_id": "aa3eaccca6a4c9082a98ec60fe1de798a8c77e73", "title": "Role of mask pattern in intelligibility of ideal binary-masked noisy speech.", "text": "Intelligibility of ideal binary-masked noisy speech was measured on a group of normal-hearing individuals across mixture signal-to-noise ratio (SNR) levels, masker types, and local criteria for forming the binary mask. The binary mask is computed from time-frequency decompositions of target and masker signals using two different schemes: an ideal binary mask computed by thresholding the local SNR within time-frequency units and a target binary mask computed by comparing the local target energy against the long-term average speech spectrum. By depicting intelligibility scores as a function of the difference between mixture SNR and local SNR threshold, alignment of the performance curves is obtained for a large range of mixture SNR levels.
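The first masking scheme in the intelligibility abstract above, the ideal binary mask, is essentially a thresholding rule once target and masker spectrograms are available; a toy sketch with invented magnitudes follows:

```python
import numpy as np

def ideal_binary_mask(target_tf, masker_tf, lc_db=0.0):
    """Keep a time-frequency unit when its local SNR exceeds the local
    criterion (LC); applied here to magnitude spectrograms."""
    snr_db = 20 * np.log10(np.abs(target_tf) / (np.abs(masker_tf) + 1e-12))
    return (snr_db >= lc_db).astype(float)

# Toy 3x4 spectrograms (freq x time); real use would take STFT magnitudes.
target = np.array([[2.0, 0.1, 1.5, 0.2],
                   [0.3, 2.5, 0.2, 1.8],
                   [1.1, 0.2, 2.2, 0.1]])
masker = np.full_like(target, 1.0)
print(ideal_binary_mask(target, masker, lc_db=0.0))
```

Sweeping lc_db relative to the mixture SNR reproduces the abstract's manipulation of the local criterion, which controls how sparse or dense the resulting mask is.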
Large intelligibility benefits are obtained for both sparse and dense binary masks. When an ideal mask is dense with many ones, the effect of changing mixture SNR level while fixing the mask is significant, whereas for more sparse masks the effect is small or insignificant."} {"_id": "65952bad69d5331c1760a37ae0af5817cb666edd", "title": "Are Computers Gender-Neutral? Gender Stereotypic Responses to Computers", "text": "This study tested whether computers embedded with the most minimal gender cues would evoke sex-based stereotypic responses. Using an experimental paradigm (N=40) that involved computers with voice output, the study tested three sex-based stereotypes under conditions in which all suggestions of gender were removed, with the sole exception of vocal cues. In all three cases, gender stereotypic responses were obtained. Because the experimental manipulation involved no deception regarding the source of the voices, this study presents evidence that the tendency to gender stereotype is extremely powerful, extending even to inanimate machines. Are computers gender-neutral?"} {"_id": "432ed941958594a886753f4fb874e143e4e6995c", "title": "Creating health typologies with random forest clustering", "text": "In this paper, we describe the creation of a health-specific geodemographic classification system for the whole city of Birmingham, UK. Compared to some existing open source and commercial systems, the proposed work has a couple of distinct advantages: (i) It is particularly designed for the public health domain by combining the most reliable health data and other sources accounting for the main determinants of health. (ii) A novel random forest clustering algorithm is used for generating clusters, and it has several obvious advantages over the commonly used k-means algorithm in practice. These resultant health typologies will help local authorities to understand and design customized health interventions for the population. A Birmingham map illustrating the distribution of all health typologies is produced."} {"_id": "1b482decdcf473fda7fb883cc8b232f302738e52", "title": "Crowd disasters as systemic failures: analysis of the Love Parade disaster", "text": "Each year, crowd disasters happen in different areas of the world. How and why do such disasters happen? Are the fatalities caused by the relentless behavior of people or a psychological state of panic that makes the crowd \u2018go mad\u2019? Or are they a tragic consequence of a breakdown of coordination? These and other questions are addressed, based on a qualitative analysis of publicly available videos and materials, which document the planning and organization of the Love Parade in Duisburg, Germany, and the crowd disaster on July 24, 2010. Our analysis reveals a number of misunderstandings that have spread widely. We also provide a new perspective on concepts such as \u2018intentional pushing\u2019, \u2018mass panic\u2019, \u2018stampede\u2019, and \u2018crowd crushes\u2019. The focus of our analysis is on the contributing causal factors and their mutual interdependencies, not on legal issues or the judgment of personal or institutional responsibilities. Video recordings show that people stumbled and piled up due to a \u2018domino effect\u2019, resulting from a phenomenon called \u2018crowd turbulence\u2019 or \u2018crowd quake\u2019. Crowd quakes are a typical reason for crowd disasters, to be distinguished from crowd disasters resulting from \u2018mass panic\u2019 or \u2018crowd crushes\u2019.
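The health-typologies abstract above does not spell out its random forest clustering algorithm, so the sketch below shows one standard, Breiman-style way to cluster with a random forest: train a classifier to separate the real data from a column-permuted synthetic copy, derive a proximity matrix from shared leaves, and cluster on 1 minus proximity. All details here are assumptions, not the paper's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import AgglomerativeClustering

def random_forest_clustering(X, n_clusters=3, n_trees=200, seed=0):
    rng = np.random.default_rng(seed)
    # Synthetic contrast data: each column permuted independently.
    synth = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
    Xy = np.vstack([X, synth])
    y = np.r_[np.ones(len(X)), np.zeros(len(synth))]
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(Xy, y)
    leaves = rf.apply(X)                                   # (n_samples, n_trees)
    prox = (leaves[:, None, :] == leaves[None, :, :]).mean(-1)
    return AgglomerativeClustering(n_clusters=n_clusters, metric='precomputed',
                                   linkage='average').fit_predict(1 - prox)

X = np.random.default_rng(1).random((60, 4))
print(random_forest_clustering(X))
```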
In Duisburg, crowd turbulence was the consequence of amplifying feedback and cascading effects, which are typical of systemic instabilities. Accordingly, things can go terribly wrong even though no one had bad intentions. Comparing the incident in Duisburg with others, we give recommendations to help prevent future crowd disasters. In particular, we introduce a new scale to assess the criticality of conditions in the crowd. This may allow preventative measures to be taken earlier on. Furthermore, we discuss the merits and limitations of citizen science for public investigation, considering that today, almost every event is recorded and reflected in the World Wide Web."} {"_id": "7d44b2416c1563c75077b727170c1791641d0f67", "title": "An Internet based wireless home automation system for multifunctional devices", "text": "The aim of home automation is to control home devices from a central control point. In this paper, we present the design and implementation of a low-cost yet flexible and secure Internet-based home automation system. The communication between the devices is wireless. The protocol between the units in the design is enhanced to be suitable for most of the appliances. The system is designed to be low cost and flexible with the increasing variety of devices to be controlled."} {"_id": "6f7ef31d1f59993a5004b46b598cf3f5a1743ebb", "title": "Examining the Contribution of Information Technology Toward Productivity and Profitability in U.S. Retail Banking", "text": "There has been much debate on whether or not the investment in Information Technology (IT) provides improvements in productivity and business efficiency. Several studies, both at the industry level and at the firm level, have contributed differing understandings of this phenomenon. Of late, however, firm-level studies, primarily in the manufacturing sector, have shown that there are significant positive contributions from IT investments toward productivity. This study examines the effect of IT investment on both productivity and profitability in the retail banking sector. Using data collected through a major study of retail banking institutions in the United States, this paper concludes that additional investment in IT capital may have no real benefits and may be more of a strategic necessity to stay even with the competition. However, the results indicate that there are substantially high returns to increased investment in IT labor, and that retail banks need to shift their emphasis in IT investment from capital to labor."} {"_id": "2dc3ec722948c08987127647ae34a502cabaa6db", "title": "Scalable Hashing-Based Network Discovery", "text": "Discovering and analyzing networks from non-network data is a task with applications in fields as diverse as neuroscience, genomics, energy, economics, and more. In these domains, networks are often constructed out of multiple time series by computing measures of association or similarity between pairs of series. The nodes in a discovered graph correspond to time series, which are linked via edges weighted by the association scores of their endpoints. After graph construction, the network may be thresholded such that only the edges with stronger weights remain and the desired sparsity level is achieved. While this approach is feasible for small datasets, its quadratic time complexity does not scale as the individual time series length and the number of compared series increase.
Thus, to avoid the costly step of building a fully-connected graph before sparsification, we propose a fast network discovery approach based on probabilistic hashing of randomly selected time series subsequences. Evaluation on real data shows that our methods construct graphs nearly 15 times as fast as baseline methods, while achieving both network structure and accuracy comparable to baselines in task-based evaluation."} {"_id": "9e6c094ffc5ba2cd0776d798692f3374920de574", "title": "Study of battery modeling using mathematical and circuit oriented approaches", "text": "Energy storage improves the efficiency and reliability of the electric utility system. The most common devices used for storing electrical energy are batteries. To investigate power converter-based charge and discharge control of a battery storage device, effective battery models are critically needed. This paper presents a comparison study of mathematical and circuit-oriented battery models with a focus on lead-acid batteries that are normally used for large power storage applications. The paper shows how mathematical and circuit-oriented battery models are developed to reflect typical battery electrochemical properties. The relation between mathematical and circuit-oriented battery models is analyzed in the paper. A comparison study is made to investigate the difference and complexity in parameter extraction using the two different modeling approaches. The paper shows that the fundamental battery electrochemical relationship is usually built into the battery mathematical model but is not directly available in the circuit-oriented model. In terms of computational complexity, the circuit-oriented battery model requires much more expensive computing resources. A performance study is conducted to evaluate various factors that may affect the behavior of the mathematical battery models."} {"_id": "4a9c1b4569289623bf9812ffe2225e4b3d7acb22", "title": "Real-time flood monitoring and warning system", "text": "Flooding is one of the major disasters occurring in various parts of the world. A system for real-time monitoring of water conditions (water level, flow, and precipitation level) was developed to be employed in flood monitoring in Nakhon Si Thammarat, a southern province in Thailand. The two main objectives of the developed system are to serve 1) as an information channel for flooding between the involved authorities and experts to enhance their responsibilities and collaboration, and 2) as a web-based information source for the public, responding to their need for information on water conditions and flooding. The developed system is composed of three major components: a sensor network, a processing/transmission unit, and a database/application server. These real-time water condition data can be monitored remotely by utilizing a wireless sensor network that uses mobile General Packet Radio Service (GPRS) communication to transmit measured data to the application server. We implemented VirtualCOM, a middleware that enables the application server to communicate with remote sensors connected to a GPRS data unit (GDU). With VirtualCOM, a GDU behaves as if it were a cable directly connecting the remote sensors to the application server. The application server is a web-based system implemented using PHP and JAVA as the web application and MySQL as its relational database. Users can view real-time water conditions as well as water condition forecasts directly from the web via a web browser or via WAP.
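Returning to the hashing-based network discovery abstract above: a generic locality-sensitive sketch of its idea is to hash normalized subsequences with random projections so that only series sharing a bucket are ever compared. The window length, number of planes, and single-window-per-series choice below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def rp_signatures(series, win=32, n_planes=16, seed=0):
    """Bucket series by the sign pattern of random projections of one
    normalized subsequence each; series sharing a bucket become candidate
    edges, so the fully connected similarity graph is never built."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, win))
    buckets = {}
    for sid, x in enumerate(series):
        w = np.asarray(x[:win], dtype=float)       # one window per series, for brevity
        w = (w - w.mean()) / (w.std() + 1e-12)     # normalize the window
        key = tuple((planes @ w > 0).astype(int))  # sign-pattern hash
        buckets.setdefault(key, []).append(sid)
    return [b for b in buckets.values() if len(b) > 1]

rng = np.random.default_rng(2)
base = np.sin(np.linspace(0, 20, 300))
series = [base + 0.02 * rng.standard_normal(300) for _ in range(5)]  # near-duplicates
series += [rng.standard_normal(300) for _ in range(5)]               # unrelated noise
print(rp_signatures(series))  # the noisy copies tend to land in one bucket
```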
The developed system has demonstrated the applicability of today\u2019s sensors in wirelessly monitoring real-time water conditions."} {"_id": "f5fca08badb5f182bfc5bc9050e786d40e0196df", "title": "Design of a Water Environment Monitoring System Based on Wireless Sensor Networks", "text": "A water environment monitoring system based on a wireless sensor network is proposed. It consists of three parts: data monitoring nodes, a data base station, and a remote monitoring center. This system is suitable for complex and large-scale water environment monitoring, such as for reservoirs, lakes, rivers, swamps, and shallow or deep groundwaters. This paper is devoted to the explanation and illustration of our new water environment monitoring system design. The system has successfully accomplished online auto-monitoring of the water temperature and pH value of an artificial lake. The system's measurement capacity ranges from 0 to 80 \u00b0C for water temperature, with an accuracy of \u00b10.5 \u00b0C, and from 0 to 14 in pH value, with an accuracy of \u00b10.05 pH units. Sensors applicable to different water quality scenarios should be installed at the nodes to meet the monitoring demands for a variety of water environments and to obtain different parameters. The monitoring system thus promises broad applicability prospects."} {"_id": "0969bae35536395aff521f6fbcd9d5ff379664e3", "title": "Routing in multi-radio, multi-hop wireless mesh networks", "text": "We present a new metric for routing in multi-radio, multi-hop wireless networks. We focus on wireless networks with stationary nodes, such as community wireless networks. The goal of the metric is to choose a high-throughput path between a source and a destination. Our metric assigns weights to individual links based on the Expected Transmission Time (ETT) of a packet over the link. The ETT is a function of the loss rate and the bandwidth of the link. The individual link weights are combined into a path metric called Weighted Cumulative ETT (WCETT) that explicitly accounts for the interference among links that use the same channel. The WCETT metric is incorporated into a routing protocol that we call Multi-Radio Link-Quality Source Routing. We studied the performance of our metric by implementing it in a wireless testbed consisting of 23 nodes, each equipped with two 802.11 wireless cards. We find that in a multi-radio environment, our metric significantly outperforms previously proposed routing metrics by making judicious use of the second radio."} {"_id": "50fc6949a8208486e26a716c2f4b255405715bbd", "title": "A Review of Wireless Sensor Technologies and Applications in Agriculture and Food Industry: State of the Art and Current Trends", "text": "The aim of the present paper is to review the technical and scientific state of the art of wireless sensor technologies and standards for wireless communications in the Agri-Food sector. These technologies are very promising in several fields such as environmental monitoring, precision agriculture, cold chain control or traceability. The paper focuses on WSN (Wireless Sensor Networks) and RFID (Radio Frequency Identification), presenting the different systems available, recent developments and examples of applications, including ZigBee-based WSN and passive, semi-passive and active RFID.
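The WCETT metric in the multi-radio routing abstract above combines the total path ETT with the busiest channel's share of it. The sketch below uses the commonly published formula, which is assumed here (with beta = 0.5 and equal loss rates in both link directions), rather than quoted from the abstract:

```python
from collections import defaultdict

def ett(loss_rate, packet_bits, bandwidth_bps):
    """Expected Transmission Time of one hop: ETX * (S / B),
    assuming the same loss rate in both directions."""
    etx = 1.0 / (1.0 - loss_rate) ** 2
    return etx * packet_bits / bandwidth_bps

def wcett(hops, beta=0.5):
    """hops: list of (ett_seconds, channel).
    WCETT = (1 - beta) * sum(ETT_i) + beta * max_j X_j,
    where X_j sums the ETTs of hops on channel j (self-interference)."""
    per_channel = defaultdict(float)
    for e, ch in hops:
        per_channel[ch] += e
    total = sum(e for e, _ in hops)
    return (1 - beta) * total + beta * max(per_channel.values())

path = [(ett(0.1, 8192, 6e6), 1),    # hop on channel 1
        (ett(0.2, 8192, 6e6), 1),    # same channel: counted toward max X_j
        (ett(0.1, 8192, 54e6), 6)]   # second radio on channel 6
print(f"WCETT = {wcett(path):.6f} s")
```

The max term is what rewards paths that spread consecutive hops across the two radios' channels, which is the behavior the testbed results credit for the metric's gains.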
Future trends of wireless communications in the agriculture and food industry are also discussed."} {"_id": "71573d5dc03b28279c1a337e9fd91ffbeff47569", "title": "Attentive Fashion Grammar Network for Fashion Landmark Detection and Clothing Category Classification", "text": "This paper proposes a knowledge-guided fashion network to solve the problem of visual fashion analysis, e.g., fashion landmark localization and clothing category classification. The suggested fashion model is leveraged with high-level human knowledge in this domain. We propose two important fashion grammars: (i) dependency grammar capturing kinematics-like relations, and (ii) symmetry grammar accounting for the bilateral symmetry of clothes. We introduce Bidirectional Convolutional Recurrent Neural Networks (BCRNNs) for efficiently approaching message passing over grammar topologies and producing regularized landmark layouts. For enhancing clothing category classification, our fashion network is encoded with two novel attention mechanisms, i.e., landmark-aware attention and category-driven attention. The former forces our network to focus on the functional parts of clothes and learns domain-knowledge-centered representations, leading to a supervised attention mechanism. The latter is goal-driven, which directly enhances task-related features and can be learned in an implicit, top-down manner. Experimental results on large-scale fashion datasets demonstrate the superior performance of our fashion grammar network."} {"_id": "86df5ac1f7065fd8e58e371fb733b3992affcf9e", "title": "An Ontology For Specifying Spatiotemporal Scopes in Life Cycle Assessment", "text": "Life Cycle Assessment (LCA) evaluates the environmental impact of a product through its entire life cycle, from material extraction to final disposal or recycling. The environmental impacts of an activity depend both on the activity\u2019s direct emissions to the environment and on indirect emissions caused by activities elsewhere in the supply chain. Both the impacts of direct emissions and the provisioning of supply chain inputs to an activity depend on the activity\u2019s spatiotemporal scope. When accounting for spatiotemporal dynamics, LCA often faces significant data interoperability challenges. Ontologies and semantic technologies can foster interoperability between diverse data sets from a variety of domains. Thus, this paper presents an ontology for modeling spatiotemporal scopes, i.e., the contexts in which impact estimates are valid. We discuss selected axioms and illustrate the use of the ontology by providing an example from LCA practice. The ontology enables practitioners to address key competency questions regarding the effect of spatiotemporal scopes on environmental impact estimation."} {"_id": "07119bc66e256f88b7436e62a4ac3384365e4e9b", "title": "RASL: Robust Alignment by Sparse and Low-Rank Decomposition for Linearly Correlated Images", "text": "This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of the \u21131-norm and the nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques. 
We verify the efficacy of the proposed robust alignment algorithm with extensive experiments on both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions."} {"_id": "2cad955c4667a8625553cd02e8bd0f84126b259a", "title": "Semantic image retrieval system based on object relationships", "text": "Semantic-based image retrieval has recently become popular as an avenue to improve retrieval accuracy. The \u201csemantic gap\u201d between the visual features and the high-level semantic features could be narrowed down by utilizing this kind of retrieval method. However, most of the current methods of semantic-based image retrieval utilize visual semantic features and do not consider spatial relationships. We build a system for content-based image retrieval from image collections on the web and tackle the challenges of distinguishing between images that contain similar objects, in order to capture the semantic meaning of a search query. In order to do so, we utilize a combination of segmentation into objects as well as the relationships of these objects with each other."} {"_id": "9dbe7462761fa90b4b0340ceaf6661b25b7a4313", "title": "Development of HILS simulator for steering control in hot strip finishing mill", "text": "In this paper, a hardware-in-the-loop simulation (HILS) simulator for steering control is developed. The simulator describes the lateral movement of the strip in a hot strip finishing mill. The simulator includes strip dynamics, a mill model, and mill motor and AGC controllers. The factors considered for strip dynamics are initial off-centering and wedge difference. The wedge is realized using the expansion of the deformation process. To validate the simulator, a PID controller is tested."} {"_id": "fa453f2ae1f944a61d7dd0ea576b5bd05d08b9ed", "title": "Critical period regulation.", "text": "Neuronal circuits are shaped by experience during critical periods of early postnatal life. The ability to control the timing, duration, and closure of these heightened levels of brain plasticity has recently become experimentally accessible, especially in the developing visual system. This review summarizes our current understanding of known critical periods across several systems and species. It delineates a number of emerging principles: functional competition between inputs, role for electrical activity, structural consolidation, regulation by experience (not simply age), special role for inhibition in the CNS, potent influence of attention and motivation, unique timing and duration, as well as use of distinct molecular mechanisms across brain regions and the potential for reactivation in adulthood. A deeper understanding of critical periods will open new avenues to "nurture the brain"-from international efforts to link brain science and education to improving recovery from injury and devising new strategies for therapy and lifelong learning."} {"_id": "304097b8cacc87977fd745716dff1412648a2326", "title": "Towards a holistic analysis of mobile payments: A multiple perspectives approach", "text": "As mobile technologies and services are in constant evolution, many speculate on whether or not mobile payments will be a killer application for mobile commerce. To have a better understanding of the market, there is a need to analyze not only the technology but also the different actors that are involved. 
For this purpose, we propose to conduct two disruption analyses to draw the disruptiveness profile of mobile payment solutions compared to other payment instruments. Then, we try to discover what factors have hindered their technical and commercial development, using a decision support system (DSS) based on a multi-criteria decision-making method called Electre I."} {"_id": "8e6a2774982d24492273558a85eef0cec056efba", "title": "The Impact of Information Technology Investment Announcements on the Market Value of the Firm", "text": "Determining whether investments in information technology (IT) have an impact on firm performance has been and continues to be a major problem for information systems researchers and practitioners. Financial theory suggests that managers should make investment decisions that maximize the value of the firm. Using event-study methodology, we provide empirical evidence on the effect of announcements of IT investments on the market value of the firm for a sample of 97 IT investments from the finance and manufacturing industries from 1981 to 1988. Over the announcement period, we find no excess returns for either the full sample or for any one of the industry subsamples. However, cross-sectional analysis reveals that the market reacts differently to announcements of innovative IT investments than to follow-up, or noninnovative, investments in IT. Innovative IT investments increase firm value, while noninnovative investments do not. Furthermore, the market's reaction to announcements of innovative and noninnovative IT investments is independent of industry classification. These results indicate that, on average, IT investments are zero net present value (NPV) investments; they are worth as much as they cost. Innovative IT investments, however, increase the value of the firm."} {"_id": "827d2d768f58759e193ed17da6d6f7e87749686a", "title": "Fingerprint matching by thin-plate spline modelling of elastic deformations", "text": "This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching minutiae can be counted using very tight matching thresholds. For deformed fingerprints, the algorithm gives considerably higher matching scores compared to rigid matching algorithms, while only taking 100 ms on a 1 GHz P-III machine. Furthermore, it is shown that the observed deformations are different from those described by theoretical models proposed in the literature."} {"_id": "ab66f5bcea5ca7adc638342958206262007f8afd", "title": "1 Mobile Landscapes : using location data from cellphones for urban analysis", "text": "The technology for determining the geographic location of cell phones and other hand-held devices is becoming increasingly available. It is opening the way to a wide range of applications, collectively referred to as Location Based Services (LBS), that are primarily aimed at individual users. However, if deployed to retrieve aggregated data in cities, LBS could become a powerful tool for urban analysis. This paper aims to review and introduce the potential of this technology to the urban planning community. 
In addition, it presents the \u2018Mobile Landscapes\u2019 project: an application in the metropolitan area of Milan, Italy, based on the geographical mapping of cell phone usage at different times of the day. The results enable a graphic representation of the intensity of urban activities and their evolution through space and time. Finally, a number of future applications are discussed and their potential for urban studies and planning is assessed."} {"_id": "4b77ac9081958ff56543e9ab2eb942c0cc65610e", "title": "Comparison of SQL, NoSQL and NewSQL databases for internet of things", "text": "The Internet of Things (IoT) is, in essence, a collection of tightly interconnected sensors. Present-day developments in the technology industry have thus elevated the emphasis on handling large amounts of data. IoT is expected to generate and collect an enormous amount of data from varied locations very quickly. Concerns about storage and performance are pushing databases to outperform themselves. Traditional relational databases have time and again proved to be efficient. NoSQL and, more recently, NewSQL databases have gained momentum and are asserted to perform better than the classic SQL counterparts. This paper compares the performance of three databases, one from each of these technologies: SQL (MySQL), NoSQL (MongoDB), and NewSQL (VoltDB), for sensor readings. The sensor data handled ranges widely in size (MB to GB) and is tested against single-write, single-read, single-delete, and multi-write operations."} {"_id": "c3a2c1eedee7cd170606c5bda4afd05289af8562", "title": "Agenda-Based User Simulation for Bootstrapping a POMDP Dialogue System", "text": "This paper investigates the problem of bootstrapping a statistical dialogue manager without access to training data and proposes a new probabilistic agenda-based method for simulating user behaviour. In experiments with a statistical POMDP dialogue system, the simulator was realistic enough to successfully test the prototype system and train a dialogue policy. An extensive study with human subjects showed that the learned policy was highly competitive, with task completion rates above 90%. One of the key advantages of a statistical approach to Dialogue Manager (DM) design is the ability to formalise design criteria as objective reward functions and to learn an optimal dialogue policy from real dialogue data. In cases where a system is designed from scratch, however, it is often the case that no suitable in-domain data is available for training the DM. Collecting dialogue data without a working prototype is problematic, leaving the developer with a classic chicken-and-egg problem. Wizard-of-Oz (WoZ) experiments can be carried out to record dialogues, but they are often time-consuming and the recorded data may show characteristics of human-human conversation rather than typical human-computer dialogue. Alternatively, human-computer dialogues can be recorded with a handcrafted DM prototype, but neither of these two methods enables the system designer to test the implementation of the statistical DM and the learning algorithm. Moreover, the size of the recorded corpus usually falls well short of the requirements for training a statistical DM. In recent years, a number of research groups have investigated the use of a two-stage simulation-based setup. 
A statistical user model is first trained on a limited amount of dialogue data and the model is then used to simulate dialogues with the interactively learning DM (see Schatzmann et al. (2006) for a literature review). The simulation-based approach assumes the presence of a small corpus of suitably annotated in-domain dialogues or out-of-domain dialogues with a matching dialogue format (Lemon et al., 2006). In cases when no such data is available, handcrafted values can be assigned to the model parameters given that the model is sufficiently simple (Levin et al., 2000; Pietquin and Dutoit, 2005), but the performance of dialogue policies learned this way has not been evaluated using real users."} {"_id": "dd569c776064693d02374ee89ec864f2eec53b02", "title": "A tomographic formulation of spotlight-mode synthetic aperture radar", "text": "Spotlight-mode synthetic aperture radar (spotlight-mode SAR) synthesizes high-resolution terrain maps using data gathered from multiple observation angles. This paper shows that spotlight-mode SAR can be interpreted as a tomographic reconstruction problem and analyzed using the projection-slice theorem from computer-aided tomography (CAT). The signal recorded at each SAR transmission point is modeled as a portion of the Fourier transform of a central projection of the imaged ground area. Reconstruction of a SAR image may then be accomplished using algorithms from CAT. This model permits a simple understanding of SAR imaging, not based on Doppler shifts. Resolution, sampling rates, waveform curvature, the Doppler effect, and other issues are also discussed within the context of this interpretation of SAR."} {"_id": "3dd688dbd5d425340203e73ed6590e58c970b083", "title": "An Accelerator for High Efficient Vision Processing", "text": "In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The neural networks which are state-of-the-art for these applications are convolutional neural networks (CNNs), and they have an important property: weights are shared among many neurons, considerably reducing the neural network memory footprint. This property allows a CNN to be mapped entirely within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., for inputs and outputs. In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses combined with a careful exploitation of the specific data access patterns within CNNs allows us to design an accelerator which is highly energy-efficient. We present a single-core implementation down to the layout at 65 nm, with a modest footprint of 5.94 mm$^2$ and consuming only 336 mW, but still about 30$\times$ faster than high-end GPUs. 
For visual processing with higher resolution and frame-rate requirements, we further present a multicore implementation with elevated performance."} {"_id": "3cf12282c9a37211f8f7adbc5e667a1436317626", "title": "Adaptive Control of KNTU Planar Cable-Driven Parallel Robot with Uncertainties in Dynamic and Kinematic Parameters", "text": "This paper addresses the design and implementation of adaptive control on a planar cable-driven parallel robot with uncertainties in dynamic and kinematic parameters. To develop the idea, firstly, adaptation is performed on the dynamic parameters and it is shown that the controller is stable despite the kinematic uncertainties. Then, the internal force term is linearly separated into a regressor matrix and a kinematic parameter vector that contains the estimation error. In the next step, to improve the controller performance, adaptation is performed on both the dynamic and kinematic parameters. It is shown that the performance of the proposed controller is improved by correction of the internal forces. The proposed controller not only keeps all cables in tension over the whole workspace of the robot, but is also computationally simple and does not require measurement of the end-effector acceleration. Finally, the effectiveness of the proposed control algorithm is examined through experiments on the KNTU planar cable-driven parallel robot, and it is shown that the proposed control algorithm is able to provide suitable performance in practice."} {"_id": "7626ebf435036928046afbd7ab88d4b76d3de008", "title": "Integration of Text and Audio Features for Genre Classification in Music Information Retrieval", "text": "Multimedia content can be described in versatile ways as its essence is not limited to one view. For music data these multiple views could be a song\u2019s audio features as well as its lyrics. Both of these modalities have their advantages, as text may be easier to search in and could cover more of the \u2018content semantics\u2019 of a song, while omitting other types of semantic categorisation. (Psycho)acoustic feature sets, on the other hand, provide the means to identify tracks that \u2018sound similar\u2019 while offering less support for other kinds of semantic categorisation. These discerning characteristics of different feature sets meet users\u2019 differing information needs. We will explain the nature of text and audio feature sets which describe the same audio tracks. Moreover, we will propose the use of textual data on top of low-level audio features for music genre classification. Further, we will show the impact of different combinations of audio features and textual features based on content words."} {"_id": "155c4153aa867e0d36e81aef2b9a677712c349d4", "title": "Extraction of high-resolution frames from video sequences", "text": "The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system that do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses the use of both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion-compensated subsampling is proposed for a video sequence. 
Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subpixel camera pan show dramatic visual and quantitative improvements over bilinear, cubic B-spline, and Bayesian single frame interpolations. Visual and quantitative improvements are also shown for an image sequence containing objects moving with independent trajectories. Finally, the video frame extraction algorithm is used for the motion-compensated scan conversion of interlaced video data, with a visual comparison to the resolution enhancement obtained from progressively scanned frames."} {"_id": "f66a2d4ee08d388e9919fa6a2a2dad5227a256e8", "title": "The Big Bang: Facial Trauma Caused by Recreational Fireworks.", "text": "In the Netherlands, it is a tradition to set off fireworks to celebrate the turn of the year. In our medical facility, each year patients with severe skeletal maxillofacial trauma inflicted by recreational fireworks are encountered. We present two cases of patients with severe blast injury to the face, caused by direct impact of rockets, and thereby try to contribute to the limited literature on facial blast injuries, their treatment, and clinical outcome. These patients require multidisciplinary treatment, involving multiple reconstructive surgeries, and the overall recovery process is long. The severity of these traumas raises questions about firework traditions and legislation not only in the Netherlands but also worldwide. Therefore, the authors support restrictive laws on the personal use of fireworks in the Netherlands."} {"_id": "1786aab096f743614cbe3950602004b9f104d4ba", "title": "A State-Space Approach to Dynamic Nonnegative Matrix Factorization", "text": "Nonnegative matrix factorization (NMF) has been actively investigated and used in a wide range of problems in the past decade. A significant amount of attention has been given to developing NMF algorithms that are suitable for modeling time series with strong temporal dependencies. In this paper, we propose a novel state-space approach to perform dynamic NMF (D-NMF). In the proposed probabilistic framework, the NMF coefficients act as the state variables and their dynamics are modeled using a multi-lag nonnegative vector autoregressive (N-VAR) model within the process equation. We use expectation maximization and propose a maximum-likelihood estimation framework to estimate the basis matrix and the N-VAR model parameters. Interestingly, the N-VAR model parameters are obtained by simply applying NMF. Moreover, we derive a maximum a posteriori estimate of the state variables (i.e., the NMF coefficients) that is based on a prediction step and an update step, similarly to the Kalman filter. We illustrate the benefits of the proposed approach using different numerical simulations where D-NMF significantly outperforms its static counterpart. 
Experimental results for three different applications show that the proposed approach outperforms two state-of-the-art NMF approaches that exploit temporal dependencies, namely a nonnegative hidden Markov model and a frame stacking approach, while it requires less memory and computational power."} {"_id": "73e5f9f4ea344768ba464f146b9ba1e3d965c6b0", "title": "Standalone OPC UA Wrapper for Industrial Monitoring and Control Systems", "text": "OPC unified architecture (UA), a communication standard for the manufacturing industry, enables exchanging control and management data among distributed entities in industrial automation systems. An OPC UA wrapper is a migration strategy that provides UA clients with seamless access to legacy servers having OPC classic interfaces. This paper presents the design of a standalone OPC UA wrapper and discusses its performance through extensive experiments using a prototype implementation. The wrapper consists of two main components, i.e., a UA server and a classic client, which communicate with each other via shared memory and semaphores. One important feature of the design is that it employs a distributed component object model runtime library implemented in Java for platform independence. This makes it possible to build a cost-competitive wrapper system by using commercial off-the-shelf non-Windows solutions with low-cost microprocessors. Another key feature is the event-driven update interface between the UA and classic components, which we propose as an alternative to the sampling-based mechanism for reduced delay. Through experiments using workloads from an industrial monitoring system, we present a systematic approach to identifying the system parameters having a direct impact on the wrapper performance and eventually tuning them such that the read and subscription services of OPC UA exhibit the best performance."} {"_id": "6044b30751c19b3231782fb0475c9ca438940690", "title": "Real-time Action Recognition with Dissimilarity-based Training of Specialized Module Networks", "text": "This paper addresses the problem of real-time action recognition in trimmed videos, for which deep neural networks have defined the state-of-the-art performance in the recent literature. For attaining higher recognition accuracies with efficient computations, researchers have addressed the various aspects of limitations in the recognition pipeline. This includes network architecture, the number of input streams (where additional streams augment the color information), and the cost function to be optimized, among others. The literature has always aimed, though, at assigning the adopted network (or networks, in the case of multiple streams) the task of recognizing the full set of action classes in the dataset at hand. We propose to train multiple specialized module networks instead. Each module is trained to recognize a subset of the action classes. Towards this goal, we present a dissimilarity-based optimized procedure for distributing the action classes over the modules, which can be trained simultaneously offline. On two standard datasets\u2013UCF-101 and HMDB-51\u2013the proposed method demonstrates performance comparable, and in some aspects superior, to the state of the art, while satisfying the real-time constraint. We achieved 72.5% accuracy on the challenging HMDB-51 dataset. 
By assigning fewer, more dissimilar classes to each module network, this research paves the way to benefit from light-weight architectures without compromising recognition accuracy."} {"_id": "d2998f77f7b16fde8e1146d1e4b96f4fbb267577", "title": "Edge Computing and IoT Based Research for Building Safe Smart Cities Resistant to Disasters", "text": "Recently, considerable research concerning smart and connected communities has been conducted. 4G/5G technology will soon become widespread, and cellular base stations will be densely deployed in urban spaces. They may offer intelligent services for autonomous driving, urban environment improvement, disaster mitigation, elderly/disabled people support and so on. Such infrastructure might function as edge servers serving as disaster support bases. In this paper, we enumerate several research issues to be developed in the ICDCS community in the next decade in order to build safe smart cities resistant to disasters. In particular, we focus on (A) up-to-date urban crowd mobility prediction and (B) resilient disaster information gathering mechanisms based on the edge computing paradigm. We investigate recent related works and projects, and introduce our ongoing research work and insights for disaster mitigation."} {"_id": "505253630ab7e8f35e26e27904bd3c8faea3c5ce", "title": "Predicting clicks: estimating the click-through rate for new ads", "text": "Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, it is important to be able to accurately estimate the click-through rate of ads in the system. For ads that have been displayed repeatedly, this is empirically measurable, but for new ads, other means must be used. We show that we can use features of ads, terms, and advertisers to learn a model that accurately predicts the click-through rate for new ads. We also show that using our model improves the convergence and performance of an advertising system. As a result, our model increases both revenue and user satisfaction."} {"_id": "270d8ac5350039c06f48e13d88da631634afaadb", "title": "GeoDa: An Introduction to Spatial Data Analysis", "text": "This article presents an overview of GeoDa, a free software program intended to serve as a user-friendly and graphical introduction to spatial analysis for non-geographic information systems (GIS) specialists. It includes functionality ranging from simple mapping to exploratory data analysis, the visualization of global and local spatial autocorrelation, and spatial regression. A key feature of GeoDa is an interactive environment that combines maps with statistical graphics, using the technology of dynamically linked windows. A brief review of the software design is given, as well as some illustrative examples that highlight distinctive features of the program in applications dealing with public health, economic development, real estate analysis, and criminology."} {"_id": "37316a323a2808f3d4dd6a5086e72b56ba960a96", "title": "Exploratory analysis of spatial and temporal data - a systematic approach", "text": ""} {"_id": "7918f936313ae27647e77aea8779dc02a1764f8f", "title": "How Maps Work - Representation, Visualization, and Design", "text": ""} {"_id": "987560b6faaf0ced5e6eb97826dcf7f3ce367df2", "title": "Local Indicators of Spatial Association", "text": "The capabilities for visualization, rapid data retrieval, and manipulation in geographic information systems (GIS) have created the need for new techniques of exploratory data analysis that focus on the "spatial" aspects of the data. The identification of local patterns of spatial association is an important concern in this respect. In this paper, I outline a new general class of local indicators of spatial association (LISA) and show how they allow for the decomposition of global indicators, such as Moran's I, into the contribution of each observation. The LISA statistics serve two purposes. On one hand, they may be interpreted as indicators of local pockets of nonstationarity, or hot spots, similar to the $G_i$ and $G_i^*$ statistics of Getis and Ord (1992). On the other hand, they may be used to assess the influence of individual locations on the magnitude of the global statistic and to identify "outliers," as in Anselin's Moran scatterplot (1993a). An initial evaluation of the properties of a LISA statistic is carried out for the local Moran, which is applied in a study of the spatial pattern of conflict for African countries and in a number of Monte Carlo simulations."} {"_id": "54c13129cbbc8737dce7d14dd1c7e6462016409f", "title": "Detection of Influential Observation in Linear Regression", "text": ""} {"_id": "555e1d6ecc7af031f29b0225bdca06d4a6da77ed", "title": "Clause Restructuring for Statistical Machine Translation", "text": "We describe a method for incorporating syntactic information in statistical machine translation systems. The first step of the method is to parse the source language string that is being translated. The second step is to apply a series of transformations to the parse tree, effectively reordering the surface string on the source language side of the translation system. The goal of this step is to recover an underlying word order that is closer to the target language word order than the original string. The reordering approach is applied as a pre-processing step in both the training and decoding phases of a phrase-based statistical MT system. We describe experiments on translation from German to English, showing an improvement from 25.2% Bleu score for a baseline system to 26.8% Bleu score for the system with reordering, a statistically significant improvement."} {"_id": "0fce0891edb205343ff126823b9df0bae04fe9dd", "title": "Temporal Convolutional Neural Networks for Diagnosis from Lab Tests", "text": "Early diagnosis of treatable diseases is essential for improving healthcare, and many diseases\u2019 onsets are predictable from annual lab tests and their temporal trends. We introduce a multi-resolution convolutional neural network for early detection of multiple diseases from irregularly measured sparse lab values. Our novel architecture takes as input both an imputed version of the data and a binary observation matrix. For imputing the temporally sparse observations, we develop a flexible, fast-to-train method for differentiable multivariate kernel regression. Our experiments on data from 298K individuals over 8 years, 18 common lab measurements, and 171 diseases show that the temporal signatures learned via convolution are significantly more predictive than baselines commonly used for early disease diagnosis."} {"_id": "9bb386b2e75df0445126741d24a89c3740d07804", "title": "Automatic Generation of Visual-Textual Presentation Layout", "text": "Visual-textual presentation layout (e.g., digital magazine covers, posters, PowerPoint slides, and any other rich media), which combines beautiful images and overlaid readable text, can result in an eye-candy touch that attracts users\u2019 attention. The design of visual-textual presentation layouts is therefore becoming ubiquitous in both commercially printed publications and online digital magazines. However, handcrafting aesthetically compelling layouts still remains challenging for many small businesses and amateur users. This article presents a system to automatically generate visual-textual presentation layouts by investigating a set of aesthetic design principles, through which an average user can easily create visually appealing layouts. The system is equipped with a set of topic-dependent layout templates and a computational framework integrating high-level aesthetic principles (in a top-down manner) and low-level image features (in a bottom-up manner). The layout templates, designed with prior knowledge from domain experts, define spatial layouts, semantic colors, harmonic color models, and font emotion and size constraints. 
We formulate the typography as an energy optimization problem by minimizing the cost of text intrusion, the utility of visual space, and the mismatch of information importance in perception and semantics, constrained by the automatically selected template and further preserving color harmonization. Through a series of user studies, we demonstrate that our designs achieve a better reading experience than reimplementations of parts of existing state-of-the-art designs."} {"_id": "9005c34200880bc2ca0bad398d0a6391667a2dfc", "title": "Disability studies as a source of critical inquiry for the field of assistive technology", "text": "Disability studies and assistive technology are two related fields that have long shared common goals - understanding the experience of disability and identifying and addressing relevant issues. Despite these common goals, there are some important differences in what professionals in these fields consider problems, perhaps related to the lack of connection between the fields. To help bridge this gap, we review some of the key literature in disability studies. We present case studies of two research projects in assistive technology and discuss how the field of disability studies influenced that work, led us to identify new or different problems relevant to the field of assistive technology, and helped us to think in new ways about the research process and its impact on the experiences of individuals who live with disability. We also discuss how the field of disability studies has influenced our teaching and highlight some of the key publications and publication venues from which our community may want to draw more deeply in the future."} {"_id": "779bb1441b3f06eab3eb8424336920f6dc10827c", "title": "Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning", "text": "The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search avoids this problem by encouraging a search in all interesting directions. That occurs by replacing a performance objective with a reward for novel behaviors, as defined by a human-crafted, and often simple, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and herons instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a novelty pressure in image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g. churches, mosques, obelisks, etc.). Here we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. 
Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: e.g. producing intelligent software, robot controllers, optimized physical components, and art."} {"_id": "1563adc48cd0b56724ea4fe71cfb4193865fe601", "title": "Being Bayesian About Network Structure. A Bayesian Approach to Structure Discovery in Bayesian Networks", "text": "In many multivariate domains, we are interested in analyzing the dependency structure of the underlying distribution, e.g., whether two variables are in direct interaction. We can represent dependency structures using Bayesian network models. To analyze a given data set, Bayesian model selection attempts to find the most likely (MAP) model, and uses its structure to answer these questions. However, when the amount of available data is modest, there might be many models that have non-negligible posterior. Thus, we want to compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task. We first show how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed order over network variables. This allows us to compute, for a given order, both the marginal probability of the data and the posterior of a feature. We then use this result as the basis for an algorithm that approximates the Bayesian posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC) method, but over orders rather than over network structures. The space of orders is smaller and more regular than the space of structures, and has a much smoother posterior \u201clandscape\u201d. We present empirical results on synthetic and real-life datasets that compare our approach to full model averaging (when possible), to MCMC over network structures, and to a non-Bayesian bootstrap approach."} {"_id": "482c9eefd106e109cc13daf7253e74553a6eb141", "title": "Design Thinking Methods and Tools for Innovation in Multidisciplinary Teams", "text": "Design Thinking (DT), a human-centered approach to innovation, is regarded as a system of overlapping spaces of viability, desirability and feasibility. Innovation increases when these three perspectives are addressed. This position paper proposes DT methods and tools to foster innovation in a multidisciplinary team by facilitating decision-making processes. We discuss how DT methods and tools reflect one or more DT perspectives, namely, the human, business and technology perspectives. We also discuss how these DT methods and tools can support decision-making processes, collaboration and engagement in a multidisciplinary team."} {"_id": "3a01f9933066f0950435a509c2b7bf427a1ebd7f", "title": "Exfiltration of information from air-gapped machines using monitor's LED indicator", "text": "In this paper we present a new approach for data exfiltration by leaking data from a monitor's LED to a smartphone's camera. The new approach may be used by attackers to leak valuable information from an organization as part of an Advanced Persistent Threat (APT). The proof of concept that was developed is described in the paper, followed by a description of an experiment demonstrating that, in practice, people are not aware of the attack. 
We propose ways that will facilitate the detection of such threats and some possible countermeasures."} {"_id": "e6a4afd84109b6af4a10c2f9b2b888b446b79e98", "title": "Unsupervised Learning without Overfitting: Empirical Risk Approximation as an Induction Principle for Reliable Clustering", "text": "Unsupervised learning algorithms are designed to extract structure from data samples on the basis of a cost function for structures. For a reliable and robust inference process, the unsupervised learning algorithm has to guarantee that the extracted structures are typical for the data source. In particular, it has to reject all structures where the inference is dominated by the arbitrariness of the sample noise and which, consequently, can be characterized as overfitting in unsupervised learning. This paper summarizes an inference principle called Empirical Risk Approximation which allows us to quantitatively measure the overfitting effect and to derive a criterion as a safeguard against it. The crucial condition for learning is met if (i) the empirical risk of learning uniformly converges towards the expected risk and if (ii) the hypothesis class retains a minimal variety for consistent inference. Parameter selection of learnable data structures is demonstrated for the case of k-means clustering, and Monte Carlo simulations are presented to support the selection principle. 1 PHILOSOPHY OF UNSUPERVISED LEARNING. Learning algorithms are designed to extract structure from data. Two classes of algorithms have been widely discussed in the literature \u2013 supervised and unsupervised learning. The distinction between the two classes depends on supervision or teacher information which is either available to the learning algorithm or missing in the learning process. This paper describes a statistical learning theory of unsupervised learning. We summarize an induction principle for unsupervised learning which is referred to as Empirical Risk Approximation [1]. This principle is based on the optimization of a quality functional for structures in data and, most importantly, it contains a safeguard against overfitting of structure to the noise in a sample set. The extracted structure of the data is encoded by a loss function and it is assumed to produce a learning risk below a predefined risk threshold. This induction principle is summarized by the following two inference steps: 1. Define a hypothesis class containing loss functions which evaluate candidate structures of the data and which measure their quality. Control the complexity of the hypothesis class by an upper bound on the costs. 2. Select an arbitrary loss function from the smallest subset which still guarantees consistent learning (in the sense of Statistical Learning Theory). The reader should note that the learning algorithm has to return a structure with costs bounded by a preselected cost threshold, but it is not required to return the structure with minimal empirical risk as in Vapnik\u2019s \u201cEmpirical Risk Minimization\u201d induction principle for classification and regression [2]. All structures in the data with risk smaller than the selected risk bound are considered to be equivalent in the approximation process without further distinction. 
Various cases of unsupervised learning algorithms are discussed in the literature with emphasis on the optimization aspect, i.e., data clustering or vector quantization, self-organizing maps, principal or independent component analysis and principal curves, projection pursuit, or algorithms for relational data such as multidimensional scaling, pairwise clustering, and histogram-based clustering methods. How can we select a loss function from a hypothesis class with desired approximation quality or, equivalently, find a structure with bounded costs? The nested sequence of hypothesis classes suggests a tracking strategy of solutions which are elements of smaller and smaller hypothesis classes in that sequence. The sequential sampling can be implemented by solution tracking, i.e., admissible solutions are incrementally improved by gradient descent if the process samples from a subset with smaller risk bound than in the previous sampling step. Candidate procedures for the sampling of structures from the hypothesis class are stochastic search techniques or continuation methods like simulated or deterministic annealing, although the theory does not refer to any particular learning algorithm. The general framework of Empirical Risk Approximation and its mathematical formulation is presented in Sec. 2, whereas a more detailed discussion can be found in [1]. This theory of unsupervised learning is applied to the case of central clustering in Sec. 3. 2 MATHEMATICAL FRAMEWORK FOR UNSUPERVISED LEARNING. The data samples $X = \{x_i \in \Omega \subset \mathbb{R}^d : 1 \le i \le l\}$ which have to be analysed by the unsupervised learning algorithm are elements of a suitable $d$-dimensional Euclidean space. The data are distributed according to a measure $\mu$ which is assumed to be known for the analysis. A mathematically precise statement of the Empirical Risk Approximation induction principle requires several definitions which formalize the notion of searching for structure in the data. The quality of structures extracted from the data set $X$ is evaluated by a learning cost function $\hat{R}(\alpha; X) = \frac{1}{l} \sum_{i=1}^{l} h(x_i; \alpha)$ (1). $\hat{R}(\alpha; X)$ denotes the empirical risk of learning a structure for an i.i.d. sample set $X$. The function $h(x; \alpha)$ is known as a loss function in statistics. It measures the costs for processing a generic datum $x$ and often corresponds to the negative log-likelihood of a stochastic model. For example, in vector quantization, the loss function quantifies what the costs are to assign a data point $x$ to a particular prototype or codebook vector. Each value $\alpha \in \Lambda$ parametrizes an individual loss function, with $\Lambda$ denoting the parameter set. The parameter $\alpha$ characterizes the different structures of the data set which are hypothetically considered candidate structures in the learning process and which have to be validated. Statistical learning theory distinguishes between different classes of loss functions, i.e., 0-1 functions in classification problems, bounded functions, and unbounded non-negative functions in regression problems. This paper is concerned with the class of unbounded non-negative functions since we are particularly interested in polynomially increasing loss functions ($O(\|x\|^p)$) as they occur in vector quantization. Note that the quality measure $\hat{R}(\alpha; X)$ depends on the i.i.d. data set $X$ and, therefore, is itself a random variable. 
The relevant quality measure for unsupervised learning, however, is the expectation value of this random variable, known as the expected risk of learning $R(\alpha) = \int h(x; \alpha)\, d\mu(x)$ (2). While minima of $\hat{R}(\alpha; X)$ or solutions with bounded empirical risk are influenced by fluctuations in the samples, it is the expected risk $R(\alpha)$ which completely assesses the quality of learning results in a sound probabilistic fashion. The distribution $\mu$ is assumed to be known in the following analysis and it has to decay sufficiently fast such that all $r$th moments ($r > 2$) of the loss function $h(x; \alpha)$ are bounded by $E_\mu\{|h(x; \alpha) - R(\alpha)|^r\} \le r!\, \tau^{r-2}\, V_\mu\{h(x; \alpha)\}$, $\forall \alpha \in \Lambda$. $E_\mu\{\cdot\}$ and $V_\mu\{\cdot\}$ denote the expectation and variance of a random variable, respectively. $\tau$ is a distribution-dependent constant. The moment constraint on the distribution holds for all distributions with exponentially fast decaying tails, which include all distributions with finite support. Empirical Risk Approximation is an induction principle which requires the learning algorithm to sample from the smallest consistently learnable subset of the hypothesis class. In the following, the hypothesis class $H$ contains all loss functions $h(x; \alpha)$, $\alpha \in \Lambda$, and the subsets of risk-bounded loss functions $H_{R_\gamma}$ are defined as $H_{R_\gamma} = \{h(x; \alpha) : \alpha \in \Lambda \wedge R(\alpha) \le R_\gamma\}$ (3). The subsets $H_{R_\gamma}$ are obviously nested since $H_{R_{\gamma_1}} \subseteq H_{R_{\gamma_2}} \subseteq H$ for $R_{\gamma_1} \le R_{\gamma_2} \le \infty$, and $H = \lim_{R_\gamma \to \infty} H_{R_\gamma}$. $R_\gamma$ induces a structure on the hypothesis class in the sense of Vapnik\u2019s \u201cStructural Risk\u201d and essentially controls the complexity of the hypothesis class. The Empirical Risk Approximation induction principle requires one to define a nested sequence of hypothesis classes with bounded expected risk and to sample from the hypothesis class with the desired approximation quality. Algorithmically, however, we select the loss function according to the bounded empirical risk $\hat{R}(\alpha; X) \le R_\gamma$. This induction principle is consistent if a bounded empirical risk implies in the asymptotic limit that the loss function has bounded expected risk $R(\alpha) \le R_\gamma$ and, therefore, is an element of the hypothesis class $H_{R_\gamma}$, i.e., $\forall \alpha \in \Lambda: \lim_{l \to \infty} \hat{R}(\alpha; X) \le R_\gamma \Rightarrow h(x; \alpha) \in H_{R_\gamma}$ (4). This implication essentially states that the expected risk asymptotically does not exceed the risk bound ($R(\alpha) \le R_\gamma$) and, therefore, the inferred structure is an approximation of the optimal data structure with risk not worse than $R_\gamma$. The consistency assumption (4) for Empirical Risk Approximation holds if the empirical risk uniformly converges towards the expected risk: $\lim_{l \to \infty} P\big(\sup_{\alpha \in \Lambda_\epsilon} |R(\alpha) - \hat{R}(\alpha; X)| / \sqrt{V\{h(x; \alpha)\}} > \delta\big) = 0$, $\forall \delta > 0$ (5), where $\Lambda_\epsilon$ denotes the hypothesis set of distinguishable structures. $\Lambda_\epsilon$ defines an $\epsilon$-net $H_{\epsilon, R_\gamma}$ on the set of loss functions $H_{R_\gamma}$, or coarsens the hypothesis class with an $\epsilon$-separated set if there does not exist a finite $\epsilon$-net for $H_{R_\gamma}$. Using results from the theory of empirical processes, the probability of an $\epsilon$-deviation of the empirical risk from the expected risk can be bounded by Bernstein\u2019s inequality ([3], Lemma 2.2.11): $P\big(\sup_{\alpha \in \Lambda_\epsilon} |R(\alpha) - \hat{R}(\alpha; X)| / \sqrt{V\{h(x; \alpha)\}} > \delta\big) \le 2 |H_\epsilon| \exp\big(-\tfrac{l \delta^2}{2(1 + \delta \tau / \sigma_{\min})}\big)$ (6). The minimal variance of all loss functions is denoted by $\sigma_{\min}^2 = \inf_{\alpha \in \Lambda} V\{h(x; \alpha)\}$. $|H_\epsilon|$ denotes the cardinality of an $\epsilon$-net constructed for the hypothesis class $H_{R_\gamma}$ under the assumption of the measure $\mu$. The confidence level limits the probability of large deviations [1]. 
The large deviation inequality weighs two competing effects in the learning problem, i.e., the probability of a large deviation exponentially decreases with growing sample size $l$, whereas a large deviation becomes increasingly likely with growing cardinality of the hypothesis class. A compromise between both effects determines how reliable an estimate actually is for a given data set $X$. The sample complexity $l_0$ is defined as the nec"} {"_id": "de135542d20285fa40a8ae91eb109385572f6160", "title": "Paper windows: interaction techniques for digital paper", "text": "In this paper, we present Paper Windows, a prototype windowing environment that simulates the use of digital paper displays. By projecting windows on physical paper, Paper Windows allows the capturing of physical affordances of paper in a digital world. The system uses paper as an input device by tracking its motion and shape with a Vicon Motion Capturing System. We discuss the design of a number of interaction techniques for manipulating information on paper displays."} {"_id": "5b0f9417de6b616199c6bd15b3ca552d46973de8", "title": "Deep Convolutional Neural Network for 6-DOF Image Localization", "text": "We present an accurate and robust method for six-degree-of-freedom image localization. There are two key points to our method: 1) automatic large-scale photo synthesis and labeling from a point cloud model, and 2) pose estimation by regression with deep convolutional neural networks (ConvNets). Our model directly regresses 6-DOF camera poses from images, accurately describing where and how they were captured. We achieved an accuracy within 1 meter and 1 degree on our outdoor dataset, which covers about 20,000 m of our school campus. Unlike previous point cloud registration solutions, our model supports low-resolution images (i.e., 224\u00d7224 in our setting) and is tiny in size once training is finished. Moreover, in pose estimation, our model uses O(1) time and space complexity as the training set grows. We will show the importance to localization of hundreds of thousands of generated and self-labeled \u201cphotos\u201d derived from a short video. We will show our model\u2019s robustness despite illumination and seasonal variance, which usually defeats methods that leverage image feature descriptors like SIFT. Furthermore, we will show the ability to transfer our model trained on one scene to another, and the gains in accuracy and efficiency."} {"_id": "9b63b9da0351abddee88114971a6b3f62e3a528d", "title": "Characterizing in-text citations in scientific articles: A large-scale analysis", "text": "We report characteristics of in-text citations in over five million full-text articles from two large databases \u2013 the PubMed Central Open Access subset and Elsevier journals \u2013 as functions of time, textual progression, and scientific field. The purpose of this study is to understand the characteristics of in-text citations in a detailed way prior to pursuing other studies focused on answering more substantive research questions. As such, we have analyzed in-text citations in several ways and report many findings here. Perhaps most significantly, we find that there are large field-level differences that are reflected in position within the text, citation interval (or reference age), and citation counts of references. In general, the fields of Biomedical and Health Sciences, Life and Earth Sciences, and Physical Sciences and Engineering have similar reference distributions, although they vary in their specifics. 
The two remaining fields, Mathematics and Computer Science and Social Science and Humanities, have different reference distributions from the other three fields and between themselves. We also show that in all fields the numbers of sentences, references, and in-text mentions per article have increased over time, and that there are field-level and temporal differences in the numbers of in-text mentions per reference. A final finding is that references mentioned only once tend to be much more highly cited than those mentioned multiple times."} {"_id": "1c99df3eba23275195679d14015eabd954157ca8", "title": "Determined to Die! Ability to Act Following Multiple Self-inflicted Gunshot Wounds to the Head. The Cook County Office of Medical Examiner Experience (2005-2012) and Review of Literature.", "text": "Cases of multiple (considered 2+) self-inflicted gunshot wounds are a rarity and require careful examination of the scene of occurrence; thorough consideration of the decedent's psychiatric, medical, and social histories; and accurate postmortem documentation of the gunshot wounds. We present a series of four cases of multiple self-inflicted gunshot wounds to the head from the Cook County Medical Examiner's Office between 2005 and 2012 including the first case report of suicide involving eight gunshot wounds to the head. In addition, a review of the literature concerning multiple self-inflicted gunshot wounds to the head is performed. The majority of reported cases document two gunshot entrance wound defects. Temporal regions are the most commonly affected regions (especially the right and left temples). Determining the capability to act following a gunshot wound to the head is necessary in crime scene reconstruction and in differentiation between homicide and suicide."} {"_id": "c5b51c0a5aad2996a4d2548c003a4262203c718f", "title": "Definition and Multidimensionality of Security Awareness: Close Encounters of the Second Order", "text": "This study proposes and examines a multidimensional definition of information security awareness. We also investigate its antecedents and analyze its effects on compliance with organizational information security policies. The above research goals are tested through the theoretical lens of technology threat avoidance theory and protection motivation theory. Information security awareness is defined as a second-order construct composed of the elements of threat and coping appraisals supplemented by the responsibilities construct to account for organizational environment. The study was executed in two stages. First, the participants (employees of a municipality) were exposed to a series of phishing messages. Second, the same individuals were asked to participate in a survey designed to examine their security awareness. The research model was tested using the PLS-SEM approach. The results indicate that security awareness is in fact a second-order formative construct composed of six components. There are significant differences in security awareness levels between the victims of the phishing experiment and the employees who maintain compliance with security policies. Our study extends the theory by proposing and validating a general, yet practical definition of security awareness. 
It also bridges the gap between theory and practice: our contextualization of security awareness draws heavily on both fields."} {"_id": "1c1b8a049ac9e76b88ef4cea43f88097d275abdc", "title": "Knowledge discovery and data mining in toxicology.", "text": "Knowledge discovery and data mining tools are gaining increasing importance for the analysis of toxicological databases. This paper gives a survey of algorithms capable of deriving interpretable models from toxicological data and presents the most important application areas. The majority of techniques in this area were derived from symbolic machine learning; one commercial product was developed especially for toxicological applications. The main application area is presently the detection of structure-activity relationships; very few authors have used these techniques to solve problems in epidemiological and clinical toxicology. Although the discussed algorithms are very flexible and powerful, further research is required to adapt the algorithms to the specific learning problems in this area, to develop improved representations of chemical and biological data and to enhance the interpretability of the derived models for toxicological experts."} {"_id": "1c9eb7a0a96cceb93c83827bba1c17d33d7240cc", "title": "Toward autonomous mapping and exploration for mobile robots through deep supervised learning", "text": "We consider an autonomous mapping and exploration problem in which a range-sensing mobile robot is guided by an information-based controller through an a priori unknown environment, choosing to collect its next measurement at the location estimated to yield the maximum information gain within its current field of view. We propose a novel and time-efficient approach to predict the most informative sensing action using a deep neural network. After training the deep neural network on a series of thousands of randomly generated \u201cdungeon maps\u201d, the predicted optimal sensing action can be computed in constant time, with prospects for appealing scalability in the testing phase to higher dimensional systems. We evaluated the performance of deep neural networks on the autonomous exploration of two-dimensional workspaces, comparing several different neural networks that were selected due to their success in recent ImageNet challenges. Our computational results demonstrate that the proposed method provides high efficiency as well as accuracy in selecting informative sensing actions that support autonomous mobile robot exploration."} {"_id": "4e343bb81bfd683778bce920039851f479d9c70a", "title": "Analysing the use of interactive technology to implement interactive teaching", "text": "Recent policy initiatives in England have focused on promoting \u2018interactive\u2019 teaching in schools, with a clear expectation that this will lead to improvements in learning. This expectation is based on the perceived success of such approaches in other parts of the world. At the same time, there has been a large investment in Information and Communication Technology (ICT) resources, and particularly in interactive whiteboard technology. This paper explores the idea of interactive teaching in relation to the interactive technology which might be used to support it. It explains the development of a framework for the detailed analysis of teaching and learning in activity settings which is designed to represent the features and relationships involved in interactivity. 
When applied to a case study of interactive teaching during a lesson involving a variety of technology-based activities, the framework reveals a confusion of purpose in students\u2019 use of an ICT resource that limits the potential for learning when students are working independently. Discussion of the relationship between technical and pedagogical interactivity points a way forward: a greater focus on learning goals during activity would enable learners to be more autonomous in exploiting ICT\u2019s affordances. The conclusion identifies the variables and issues which need to be considered in future research to illuminate this path."} {"_id": "d611408b08c1950160fb15b4ca865056c1afc91e", "title": "Facial Expression Recognition From Image Sequence Based on LBP and Taylor Expansion", "text": "The aim of an automatic video-based facial expression recognition system is to detect and classify human facial expressions from image sequences. An integrated automatic system often involves two components: 1) peak expression frame detection and 2) expression feature extraction. In comparison with image-based expression recognition systems, a video-based recognition system often performs online detection, which favors low-dimensional feature representations for cost-effectiveness. Moreover, effective feature extraction is needed for classification. Many recent recognition systems often incorporate rich additional subjective information and thus become less efficient for real-time application. In our facial expression recognition system, first, we propose the double local binary pattern (DLBP) to detect the peak expression frame from the video. The proposed DLBP method has a much lower dimensionality and successfully reduces detection time. In addition, to handle illumination variations in LBP, the logarithm-Laplace (LL) domain is proposed to obtain a more robust facial feature for detection. Finally, the Taylor expansion theorem is employed in our system for the first time to extract facial expression features. We propose the Taylor feature pattern (TFP) based on the LBP and Taylor expansion to obtain an effective facial feature from the Taylor feature map. Experimental results on the JAFFE and Cohn-Kanade data sets show that the proposed TFP method outperforms some state-of-the-art LBP-based feature extraction methods for facial expression feature extraction and is suitable for real-time applications."} {"_id": "ca4ef779dc1e5dc01231eb9805fa05bbbc51fec3", "title": "A 1.2V 64Gb 341GB/S HBM2 stacked DRAM with spiral point-to-point TSV structure and improved bank group data control", "text": "With the recent increasing interest in big data and artificial intelligence, there is an emerging demand for high-performance memory systems with large density and high data bandwidth. However, conventional DIMM-type memory has difficulty achieving more than 50GB/s due to its limited pin count and signal integrity issues. High-bandwidth memory (HBM) DRAM, with TSV technology and wide IOs, is a prominent solution to this problem, but it still has many limitations, including power consumption and reliability. 
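As a point of reference for the LBP-based detection in the facial-expression abstract above, here is a minimal sketch of the basic 8-neighbor LBP code that the DLBP/LL/TFP variants build on; this is an illustrative implementation, not the paper's own code.

```python
# Minimal 8-neighbor local binary pattern (LBP), the building block that the
# DLBP/TFP abstract above extends. Plain NumPy; not the paper's own code.
import numpy as np

def lbp_8neighbor(img):
    """Return the 8-bit LBP code map for a 2-D grayscale image."""
    img = np.asarray(img, dtype=np.float64)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbors, enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : img.shape[0] - 1 + dy,
                       1 + dx : img.shape[1] - 1 + dx]
        codes |= ((neighbor >= center).astype(np.uint8) << bit)
    return codes

# A 256-bin histogram of the codes is the usual per-frame descriptor.
frame = np.random.rand(64, 64)
hist = np.bincount(lbp_8neighbor(frame).ravel(), minlength=256)
```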
This paper presents a power-efficient and reliable TSV structure and a cost-effective HBM DRAM core architecture."} {"_id": "698b8181cd613a72adeac0d75252afe7f57a5180", "title": "Parallel tree-ensemble algorithms for GPUs using CUDA", "text": "We present two new parallel implementations of the tree-ensemble algorithms Random Forest (RF) and Extremely randomized trees (ERT) for emerging many-core platforms, e.g., contemporary graphics cards suitable for general-purpose computing (GPGPU). Random Forest and Extremely randomized trees are ensemble learners for classification and regression. They operate by constructing a multitude of decision trees at training time and outputting a prediction by combining the outputs of the individual trees. Thanks to the inherent parallelism of the task, an obvious choice for its computation is to employ contemporary GPUs with a large number of processing cores. Previous parallel algorithms for Random Forests in the literature are either designed for traditional multi-core CPU platforms or for early GPUs with simpler hardware architectures and relatively few cores. The new parallel algorithms are designed for contemporary GPUs with a large number of cores and take into account aspects of the newer hardware architectures, such as memory hierarchy and thread scheduling. They are implemented using the C/C++ language and the CUDA interface for best possible performance on NVIDIA-based GPUs. An experimental study comparing with the most important previous solutions for CPU and GPU platforms shows significant improvement for the new implementations, often by several orders of magnitude."} {"_id": "b135aaf8822b97e139ff87a0dd5515e044ba2ae1", "title": "The Role of Network Analysis in Industrial and Applied Mathematics", "text": "Many problems in industry \u2014 and in the social, natural, information, and medical sciences \u2014 involve discrete data and benefit from approaches from subjects such as network science, information theory, optimization, probability, and statistics. Because the study of networks is concerned explicitly with connectivity between different entities, it has become very prominent in industrial settings, and this importance has been accentuated further amidst the modern data deluge. In this commentary, we discuss the role of network analysis in industrial and applied mathematics, and we give several examples of network science in industry. We focus, in particular, on discussing a physical-applied-mathematics approach to the study of networks."} {"_id": "9904cbfba499285206d51cffe50e9a060c43e60f", "title": "Measuring retail company performance using credit scoring techniques", "text": "This paper proposes a theoretical framework for predicting financial distress based on Hunt\u2019s (2000) \u2018Resource-Advantage (R-A) Theory of Competition\u2019. The study focuses on the US retail market. 
Five credit scoring methodologies (Naïve Bayes, Logistic Regression, Recursive Partitioning, Artificial Neural Network, and Sequential Minimal Optimization) are used on a sample of 195 healthy companies and 51 distressed firms over different time periods from 1994 to 2002. Analyses provide sufficient evidence that the five credit scoring methodologies have sound classification ability. One year before financial distress, the logistic regression model matches the neural network model in accuracy rate and shows the best performance in terms of AUROC value. The other models are slightly worse for predicting financial distress, but still present high accuracy rates and AUROC values. Moreover, the methodologies remain sound even five years prior to financial distress, with classification accuracy rates above 85% and AUROC values above 0.85 for all five methodologies. This paper also shows that external environmental influences exist in the Naïve Bayes, logistic, recursive partitioning and SMO models, but these influences are weak. With regard to model applicability, a subset of the different models is compared with Moody\u2019s rankings. It is found that both the SMO and logistic models agree more closely with Moody\u2019s ranking than the neural network model does, with SMO slightly ahead of the logistic model."} {"_id": "1458f7b234b827931155aea5eab1a80b652d6c4a", "title": "Are we dependent upon coffee and caffeine? A review on human and animal data", "text": "Caffeine is the most widely used psychoactive substance and has occasionally been considered a drug of abuse. The present paper reviews available data on caffeine dependence, tolerance, reinforcement and withdrawal. After sudden caffeine cessation, withdrawal symptoms develop in a small portion of the population but are moderate and transient. Tolerance to caffeine-induced stimulation of locomotor activity has been shown in animals. In humans, tolerance to some subjective effects of caffeine seems to occur, but most of the time complete tolerance to many effects of caffeine on the central nervous system does not occur. In animals, caffeine can act as a reinforcer, but only in a more limited range of conditions than with classical drugs of dependence. In humans, the reinforcing effects of caffeine are limited to low or moderate doses, while high doses are usually avoided. The classical drugs of abuse lead to quite specific increases in cerebral functional activity and dopamine release in the shell of the nucleus accumbens, the key structure for reward, motivation and addiction. However, caffeine doses that reflect daily human consumption do not induce a release of dopamine in the shell of the nucleus accumbens but lead to a release of dopamine in the prefrontal cortex, which is consistent with caffeine\u2019s reinforcing properties. Moreover, caffeine increases glucose utilization in the shell of the nucleus accumbens only at rather high doses that stimulate most brain structures non-specifically and likely reflect the side effects linked to high caffeine ingestion. That dose is also 5-10-fold higher than the one necessary to stimulate the caudate nucleus, which mediates motor activity, and the structures regulating the sleep-wake cycle, the two functions most sensitive to caffeine. 
In conclusion, it appears that although caffeine fulfils some of the criteria for drug dependence and shares with amphetamines and cocaine a certain specificity of action on the cerebral dopaminergic system, the methylxanthine does not act on the dopaminergic structures related to reward, motivation and addiction."} {"_id": "bb5588e5726e67c6368cf173d54d431a26632cc1", "title": "Approximation algorithms for data placement in arbitrary networks", "text": "We study approximation algorithms for placing replicated data in arbitrary networks. Consider a network of nodes with individual storage capacities and a metric communication cost function, in which each node periodically issues a request for an object drawn from a collection of uniform-length objects. We consider the problem of placing copies of the objects among the nodes such that the average access cost is minimized. Our main result is a polynomial-time constant-factor approximation algorithm for this placement problem. Our algorithm is based on a careful rounding of a linear programming relaxation of the problem. We also show that the data placement problem is MAXSNP-hard.\nWe extend our approximation result to a generalization of the data placement problem that models additional costs such as the cost of realizing the placement. We also show that when object lengths are non-uniform, a constant-factor approximation is achievable if the capacity at each node in the approximate solution is allowed to exceed that in the optimal solution by the length of the largest object."} {"_id": "0157dcd6122c20b5afc359a799b2043453471f7f", "title": "Exploiting Similarities among Languages for Machine Translation", "text": "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs."} {"_id": "0e2795b1329b25ba3709584b96fd5cb4c96f6f22", "title": "A Systematic Comparison of Various Statistical Alignment Models", "text": "We present and compare various methods for computing word alignments using statistical or heuristic models. We consider the five alignment models presented in Brown, Della Pietra, Della Pietra, and Mercer (1993), the hidden Markov alignment model, smoothing techniques, and refinements. These statistical models are compared with two heuristic models based on the Dice coefficient. We present different methods for combining word alignments to perform a symmetrization of directed statistical alignment models. As evaluation criterion, we use the quality of the resulting Viterbi alignment compared to a manually produced reference alignment. We evaluate the models on the German-English Verbmobil task and the French-English Hansards task. We perform a detailed analysis of various design decisions of our statistical alignment system and evaluate these on training corpora of various sizes. 
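A minimal sketch of the linear vector-space mapping described in "Exploiting Similarities among Languages for Machine Translation" above: learn a matrix W from a small seed dictionary so that mapped source vectors land near their translations, then translate by nearest neighbor. The toy dimensions and random data stand in for real word embeddings.

```python
# Minimal sketch of the linear mapping between embedding spaces from the
# machine-translation abstract above: fit W so that x_i W ~ z_i for seed
# dictionary pairs. Random vectors stand in for real embeddings.
import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, n_pairs = 300, 300, 5000
X = rng.normal(size=(n_pairs, d_src))      # source-language word vectors
Z = rng.normal(size=(n_pairs, d_tgt))      # their translations' vectors

# Least-squares fit of the mapping: min_W ||X W - Z||_F^2.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

def translate(x, target_vocab_vectors):
    """Map a source vector and return the index of the nearest target word."""
    mapped = x @ W
    sims = target_vocab_vectors @ mapped / (
        np.linalg.norm(target_vocab_vectors, axis=1) * np.linalg.norm(mapped))
    return int(np.argmax(sims))
```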
An important result is that refined alignment models with a first-order dependence and a fertility model yield significantly better results than simple heuristic models. In the Appendix, we present an efficient training algorithm for the alignment models presented."} {"_id": "0f16ab376632ee83f2a3af21e96ebb925a8ac8b8", "title": "Combining Bilingual and Comparable Corpora for Low Resource Machine Translation", "text": "Statistical machine translation (SMT) performance suffers when models are trained on only small amounts of parallel data. The learned models typically have both low accuracy (incorrect translations and feature scores) and low coverage (high out-of-vocabulary rates). In this work, we use an additional data resource, comparable corpora, to improve both. Beginning with a small bitext and corresponding phrase-based SMT model, we improve coverage by using bilingual lexicon induction techniques to learn new translations from comparable corpora. Then, we supplement the model\u2019s feature space with translation scores estimated over comparable corpora in order to improve accuracy. We observe improvements between 0.5 and 1.7 BLEU translating Tamil, Telugu, Bengali, Malayalam, Hindi, and Urdu into English."} {"_id": "1b4e04381ddd2afab1660437931cd62468370a98", "title": "Part-of-Speech Tagging with Neural Networks", "text": "Text corpora which are tagged with part-of-speech information are useful in many areas of linguistic research. In this paper, a new part-of-speech tagging method based on neural networks (Net-Tagger) is presented and its performance is compared to that of an HMM-tagger (Cutting et al., 1992) and a trigram-based tagger (Kempe, 1993). It is shown that the Net-Tagger performs as well as the trigram-based tagger and better than the HMM-tagger."} {"_id": "d42ef23ee18d80f75643bdb3625fedf4d943a51b", "title": "Review of Ontology Matching Approaches and Challenges", "text": "Ontology mapping aims to solve the semantic heterogeneity problems such as ambiguous entity names, different entity granularity, incomparable categorization, and various instances of different ontologies. The mapping helps to search or query data from different sources. Ontology mapping is necessary in many applications such as data integration, ontology evolution, data warehousing, e-commerce and data exchange in various domains such as purchase order, health, music and e-commerce. It is performed by ontology matching approaches that find semantic correspondences between ontology entities. In this paper, we review state-of-the-art ontology matching approaches. We describe the approaches according to instance-based, schema-based, instance-and-schema-based, usage-based, element-level, and structure-level categories. The analysis of the existing approaches will assist us in revealing some challenges in ontology mapping such as handling ontology matching errors, user involvement and reusing previous match operations. We explain how to handle these challenges using a new strategy in order to increase performance."} {"_id": "bfea83f7d17793ec99cac2ceab07c76db5be9fba", "title": "Decision Trees: Theory and Algorithms", "text": "4."} {"_id": "8c235d86755720ae535c9e4128fa64b5ac4d6fb0", "title": "Clustering of periodic multichannel timeseries data with application to plasma fluctuations", "text": "A periodic data-mining algorithm has been developed and used to extract distinct plasma fluctuations in multichannel oscillatory timeseries data. 
The technique uses the Expectation Maximisation algorithm to solve for the maximum likelihood estimates and cluster assignments of a mixture of multivariate independent von Mises distributions (EM-VMM). The performance of the algorithm shows significant benefits when compared to a periodic k-means algorithm and clustering using non-periodic techniques on several artificial datasets and real experimental data. Additionally, a new technique for identifying interesting features in multichannel oscillatory timeseries data is described (STFT-clustering). STFT-clustering identifies the coincidence of spectral features over most channels of a multi-channel array using the averaged short-time Fourier transform of the signals. These features are filtered using clustering to remove noise. This method is particularly good at identifying weaker features and complements existing methods of feature extraction. Results from applying the STFT-clustering and EM-VMM algorithm to the extraction and clustering of plasma wave modes in the time series data from a helical magnetic probe array on the H-1NF heliac are presented."} {"_id": "88f68acad516005b008d16ed99062156afdde34b", "title": "Gamification Solutions to Enhance Software User Engagement - A Systematic Review", "text": "Gamification is the use of video-game mechanics and elements in non-game contexts to enhance user engagement and performance. The purpose of this study is to conduct a systematic review to investigate in depth the existing gamification solutions targeted at solving user engagement problems in different categories of software. We carried out this systematic review by proposing a framework of the gamifying process, which is the basis for comparing existing gamification solutions. In order to report the review, the primary studies are categorized according to the following: a) gamified software and their platforms, b) elements of the gamifying process, c) gamification solutions in each software type, d) gamification solutions for software user engagement problems, e) gamification solutions in general, and f) effects of gamification on software user engagement and performance. Based on the search procedure and criteria, a total of 78 primary studies were extracted. Most of the studies focused on educational and social software, which were developed for web or mobile platforms. We concluded that the number of studies on motivating users to use software content, solving problems in learning software, and using real identity is very limited. Furthermore, few studies have been carried out on gamifying the following software categories: productivity software, cloud storage, utility software, entertainment software, search engine software, tool software, fitness software, software engineering, information worker software, and health-care software. In addition, a large number of gamification solutions are relatively simple and require improvement. 
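A minimal sketch of the EM-VMM idea from the periodic clustering abstract above, reduced to a mixture of univariate von Mises distributions; the paper's multivariate independent version applies the same updates channel by channel. The concentration update uses a standard approximation to the inverse of A(kappa) = I1(kappa)/I0(kappa), and all sizes are toy values.

```python
# Minimal EM for a mixture of univariate von Mises distributions, sketching
# the EM-VMM idea from the clustering abstract above. Illustrative only.
import numpy as np
from scipy.special import i0

def em_vmm(theta, k=2, iters=100, seed=0):
    theta = np.asarray(theta, dtype=float)
    rng = np.random.default_rng(seed)
    n = theta.size
    mu = rng.uniform(-np.pi, np.pi, k)     # component mean directions
    kappa = np.ones(k)                     # component concentrations
    pi = np.full(k, 1.0 / k)               # mixing weights
    for _ in range(iters):
        # E-step: responsibilities under each von Mises component.
        logp = (kappa * np.cos(theta[:, None] - mu)
                - np.log(2.0 * np.pi * i0(kappa)) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: circular means and concentrations from resultant vectors.
        nk = r.sum(axis=0)
        c = (r * np.cos(theta)[:, None]).sum(axis=0) / nk
        s = (r * np.sin(theta)[:, None]).sum(axis=0) / nk
        mu = np.arctan2(s, c)
        rbar = np.clip(np.hypot(c, s), 1e-6, 1.0 - 1e-6)
        kappa = rbar * (2.0 - rbar**2) / (1.0 - rbar**2)  # approx. A^{-1}(rbar)
        pi = nk / n
    return mu, kappa, pi, r

angles = np.random.default_rng(1).vonmises(0.0, 4.0, size=500)
mu, kappa, pi, resp = em_vmm(angles, k=2)
```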
Thus, for future studies, researchers can work on the items discovered in this review; they can improve the quality of the current gamified systems by using a wide variety of game mechanics and interface elements, utilizing a combination of contextual types of rewards and giving users the ability to use received rewards \u201cin-game\u201d and \u201cout-game\u201d."} {"_id": "8aef56e3ed2a4743cb393bf067f3a17a7412a4bc", "title": "Optimal control approach for pneumatic artificial muscle with using pressure-force conversion model", "text": "In this paper, we propose an optimal control framework for pneumatic actuators. In particular, we consider using a Pneumatic Artificial Muscle (PAM) as part of a pneumatic-electric (PE) hybrid actuation system. An optimal control framework can be useful for a PE hybrid system to properly distribute desired torque outputs to actuators that have different characteristics. In the optimal control framework, the standard choice to represent control cost is squared force or torque outputs. However, since the control input for a PAM is pressure rather than force or torque, we should explicitly consider the PAM pressure as the control cost in the objective function of the optimal control method. We show that we are able to use pressure input as the control cost for the PAM by explicitly considering a model which represents the relationship between the pressure input and the force output of the PAM. We demonstrate that a one-DOF robot with the PE hybrid actuation system can generate pressure-optimized ball-throwing movements by using the optimal control method."} {"_id": "68ba338be70fd3c5bdbc1c271243740f2e0a0f0c", "title": "Beyond Independence: Probabilistic Models for Query Approximation on Binary Transaction Data", "text": "We investigate the problem of generating fast approximate answers to queries posed to large sparse binary data sets. We focus in particular on probabilistic model-based approaches to this problem and develop a number of techniques that are significantly more accurate than a baseline independence model. In particular, we introduce two techniques for building probabilistic models from frequent itemsets: the itemset maximum entropy method, and the itemset inclusion-exclusion model. In the maximum entropy method we treat itemsets as constraints on the distribution of the query variables and use the maximum entropy principle to build a joint probability model for the query attributes online. In the inclusion-exclusion model itemsets and their frequencies are stored in a data structure called an ADtree that supports an efficient implementation of the inclusion-exclusion principle in order to answer the query. We empirically compare these two itemset-based models to direct querying of the original data, querying of samples of the original data, as well as other probabilistic models such as the independence model, the Chow-Liu tree model, and the Bernoulli mixture model. These models are able to handle high-dimensionality (hundreds or thousands of attributes), whereas most other work on this topic has focused on relatively low-dimensional OLAP problems. 
Experimental results on both simulated and real-world transaction data sets illustrate various fundamental tradeoffs between approximation error, model complexity, and the online time required to compute a query answer."} {"_id": "84b54e5fb253e2b15caa21776c1bdd11ceae0eb0", "title": "MapMarker: Extraction of Postal Addresses and Associated Information for General Web Pages", "text": "Address information is essential for people\u2019s daily lives. People often need to query addresses of unfamiliar locations through the Web and then use map services to mark the location for directions. Although both address information and map services are available online, they are not well combined. Users usually need to copy an individual address from one Web site and paste it into another Web site with map services to locate it. Such copy and paste operations have to be repeated if multiple addresses are listed on a single page, such as a public school list or an apartment list. Furthermore, the information associated with each address has to be copied and included on each marker for better comprehension. Our research is devoted to automating the above process and making the combination an easier task for users. The main techniques applied here include postal address extraction and associated information extraction. We apply a sequence labeling algorithm based on Conditional Random Fields (CRFs) to train models for address extraction. Meanwhile, using the extracted addresses as landmarks, we apply pattern mining to identify the boundaries of address blocks and to extract the information associated with each individual address. The experimental results show a high F-score of 91% for postal address extraction and 87% accuracy for associated information extraction."} {"_id": "90522a98ccce3aa0ce20b4dfedb76518b886ed96", "title": "Revising the Structural Framework for Marketing Management", "text": "Special thanks to Robert Skipper and Aaron Hyman for their assistance on an earlier version of this manuscript. Also thanks to Shaun McQuitty, Robin Peterson, Chuck Pickett, Kevin Shanahan, and the Journal of Business Research editors and reviewers, for their helpful comments. An earlier version of this manuscript won the Shaw Award for best paper presented at the 2001 Society for Marketing Advances conference. An abridged version of this manuscript has been accepted for publication in the Journal of Business Research."} {"_id": "ae4315453cd8378ce73b744ca9589657ff79de37", "title": "End-to-End Learning of Task-Oriented Dialogs", "text": "In this thesis proposal, we address the limitations of the conventional pipeline design of task-oriented dialog systems and propose end-to-end learning solutions. We design a neural-network-based dialog system that is able to robustly track dialog state, interface with knowledge bases, and incorporate structured query results into system responses to successfully complete task-oriented dialog. In learning such neural-network-based dialog systems, we propose hybrid offline training and online interactive learning methods. We introduce a multi-task learning method for pre-training the dialog agent in a supervised manner using task-oriented dialog corpora. The supervised-training agent can be further improved by interacting with users and learning online from user demonstration and feedback with imitation and reinforcement learning. 
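A minimal sketch of the baseline independence model that the itemset-based methods in "Beyond Independence" above improve on: a conjunctive query is estimated as a product of attribute marginals and compared to the exact empirical answer. The synthetic data and query are illustrative assumptions.

```python
# Minimal sketch of the baseline independence model from "Beyond Independence"
# above: estimate P(all attributes in Q equal 1) as a product of marginals,
# versus the exact empirical answer. Synthetic correlated data; toy sizes.
import numpy as np

rng = np.random.default_rng(1)
n, d = 10_000, 20
# Sparse binary transactions in which item 1 tends to co-occur with item 0.
data = (rng.random((n, d)) < 0.05).astype(np.uint8)
data[:, 1] |= data[:, 0] & (rng.random(n) < 0.8)

query = [0, 1]  # ask for P(x0 = 1 and x1 = 1)

independence_estimate = np.prod(data[:, query].mean(axis=0))
empirical_answer = np.mean(data[:, query].min(axis=1) == 1)

print(f"independence: {independence_estimate:.4f}")
print(f"empirical:    {empirical_answer:.4f}")  # larger, due to correlation
```

The gap between the two numbers is exactly the error that the itemset maximum entropy and inclusion-exclusion models are designed to close.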
In addressing the sample efficiency issue with online policy learning, we further propose a method combining the learning-from-user and learning-from-simulation approaches to improve the online interactive learning efficiency."} {"_id": "e03c61fbcf8b7e6df3dbb9b7d80f1915b048f0ab", "title": "Learning model-based strategies in simple environments with hierarchical q-networks", "text": "Recent advances in deep learning have allowed artificial agents to rival human-level performance on a wide range of complex tasks; however, the ability of these networks to learn generalizable strategies remains a pressing challenge. This critical limitation is due in part to two factors: the opaque information representation in deep neural networks and the complexity of the task environments in which they are typically deployed. Here we propose a novel Hierarchical Q-Network (HQN), motivated by theories of the hierarchical organization of the human prefrontal cortex, that attempts to identify lower-dimensional patterns in the value landscape that can be exploited to construct an internal model of rules in simple environments. We draw on combinatorial games, where there exists a single optimal strategy for winning that generalizes across other features of the game, to probe the strategy generalization of the HQN and other reinforcement learning (RL) agents using variations of Wythoff\u2019s game. Traditional RL approaches failed to reach satisfactory performance on variants of Wythoff\u2019s game; however, the HQN learned heuristic-like strategies that generalized across changes in board configuration. More importantly, the HQN allowed for transparent inspection of the agent\u2019s internal model of the game following training. Our results show how a biologically inspired hierarchical learner can facilitate learning abstract rules to promote robust and flexible action policies in simplified training environments with clearly delineated optimal strategies."} {"_id": "e8403aaac14c5abb3e617391ff71d491f644e1e2", "title": "Ending the war between Sales & Marketing.", "text": "Sales departments tend to believe that marketers are out of touch with what's really going on in the marketplace. Marketing people, in turn, believe the sales force is myopic--too focused on individual customer experiences, insufficiently aware of the larger market, and blind to the future. In short, each group undervalues the other's contributions. Both stumble (and organizational performance suffers) when they are out of sync. Yet few firms seem to make serious overtures toward analyzing and enhancing the relationship between these two critical functions. Curious about the misalignment between Sales and Marketing, the authors interviewed pairs of chief marketing officers and sales vice presidents to capture their perspectives. They looked in depth at the relationship between Sales and Marketing in a variety of companies in different industries. Their goal was to identify best practices that could enhance the joint performance and increase the contributions of these two functions. Among their findings: The marketing function takes different forms in different companies at different product life cycle stages. Marketing's increasing influence in each phase of an organization's growth profoundly affects its relationship with Sales. 
The strains between Sales and Marketing fall into two main categories: economic (a single budget is typically divided between Sales and Marketing, and not always evenly) and cultural (the two functions attract very different types of people who achieve success by spending their time in very different ways). In this article, the authors describe the four types of relationships Sales and Marketing typically exhibit. They provide a diagnostic to help readers assess their companies' level of integration, and they offer recommendations for more closely aligning the two functions."} {"_id": "51331e1a0607285ab710bff9d6b25aadf2546ba6", "title": "The Enigmatic temporal pole: a review of findings on social and emotional processing.", "text": "The function of the anterior-most portion of the temporal lobes, the temporal pole, is not well understood. Anatomists have long considered it part of an extended limbic system based on its location posterior to the orbital frontal cortex and lateral to the amygdala, along with its tight connectivity to limbic and paralimbic regions. Here we review the literature in both non-human primates and humans to assess the temporal pole's putative role in social and emotional processing. Reviewed findings indicate that it has some role in both social and emotional processes, including face recognition and theory of mind, that goes beyond semantic memory. We propose that the temporal pole binds complex, highly processed perceptual inputs to visceral emotional responses. Because perceptual inputs remain segregated into dorsal (auditory), medial (olfactory) and ventral (visual) streams, the integration of emotion with perception is channel-specific."} {"_id": "813308251c76640f0f9f98c54339ae73752793aa", "title": "Short-Text Clustering using Statistical Semantics", "text": "Short documents are typically represented by very sparse vectors in the space of terms. In this case, traditional techniques for calculating text similarity result in measures which are very close to zero, since documents, even very similar ones, have few or no terms in common. In order to alleviate this limitation, the representation of short-text segments should be enriched by incorporating information about the correlation between terms. In other words, if two short segments do not have any common words, but terms from the first segment appear frequently with terms from the second segment in other documents, this means that these segments are semantically related, and their similarity measure should be high. Towards achieving this goal, we employ a method for enhancing document clustering using statistical semantics. However, the problem of high computation time arises when calculating the correlation between all terms. In this work, we propose selecting a few terms and using them with the Nyström method to approximate the term-term correlation matrix. The selection of the terms for the Nyström method is performed by randomly sampling terms with probabilities proportional to the lengths of their vectors in the document space. This allows more important terms to have more influence on the approximation of the term-term correlation matrix and accordingly achieves better accuracy."} {"_id": "3819427b2653724372a13f2194d113bdc67f2b72", "title": "QUO VADIS, DYNAMIC CAPABILITIES? 
A CONTENT-ANALYTIC REVIEW OF THE CURRENT STATE OF KNOWLEDGE AND RECOMMENDATIONS FOR FUTURE RESEARCH", "text": "Although the dynamic capabilities perspective has become one of the most frequently used theoretical lenses in management research, critics have repeatedly voiced their frustration with this literature, particularly bemoaning the lack of empirical knowledge and the underspecification of the construct of dynamic capabilities. But research on dynamic capabilities has advanced considerably since its early years, in which most contributions to this literature were purely conceptual. A plethora of empirical studies as well as further theoretical elaborations have shed substantial light on a variety of specific, measurable factors connected to dynamic capabilities. Our article starts out by analyzing these studies to develop a meta-framework that specifies antecedents, dimensions, mechanisms, moderators, and outcomes of dynamic capabilities identified in the literature to date. This framework provides a comprehensive and systematic synthesis of the dynamic capabilities perspective that reflects the richness of the research while at the same time unifying it into a cohesive, overarching model. Such an analysis has not yet been undertaken; no comprehensive framework with this level of detail has previously been presented for dynamic capabilities. Our analysis shows where research has made the most progress and where gaps and unresolved tensions remain. Based on this analysis, we propose a forward-looking research agenda that outlines directions for future research."} {"_id": "bf1cee21ef693725c9e05da8eeb464573ee5cefd", "title": "Is Tom Cruise Threatened? Using Netflix Prize Data to Examine the Long Tail of Electronic Commerce", "text": "We analyze a large data set from Netflix, the leading online movie rental company, to shed new light on the causes and consequences of the Long Tail effect, which suggests that on the Internet, over time, consumers will increasingly shift away from hit products and toward niche products. We examine the aggregate level demand as well as demand at the individual consumer level and we find that the consumption of both the hit and the niche movies decreased over time when the popularity of the movies is ranked in absolute terms (e.g., the top/bottom 10 titles). However, we also observe that the active product variety has increased dramatically over the study period. To separate out the demand diversification effect from the shift in consumer preferences, we propose to measure the popularity of movies in relative terms by dynamically adjusting for the current product variety (e.g., the top/bottom 1% of titles). Using this alternative definition of popularity, we find that the demand for the hits rises, while the demand for the niches still falls. We conclude that new movie titles appear much faster than consumers discover them. Finally, we find no evidence that niche titles satisfy consumer tastes better than hit titles, and we find that a small number of heavy users are more likely to venture into niches than light users."} {"_id": "edfc90cef4872faa942136c5f824bbe4c6839c57", "title": "Improvasher: A Real-Time Mashup System for Live Musical Input", "text": "In this paper we present Improvasher, a real-time musical accompaniment system which creates an automatic mashup to accompany live musical input. 
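A minimal sketch of the Nyström approximation of a term-term correlation matrix as described in the short-text clustering abstract above. The length-proportional sampling follows that abstract; the toy matrix sizes and the pseudo-inverse regularization are assumptions.

```python
# Minimal sketch of the Nystrom approximation of the term-term correlation
# matrix from the short-text clustering abstract above. Terms are sampled with
# probability proportional to their vector lengths in the document space.
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((2000, 500))          # term-by-document matrix (toy data)
n_terms, m = T.shape[0], 100         # m = number of sampled landmark terms

# Sample landmark terms with probabilities proportional to vector lengths.
lengths = np.linalg.norm(T, axis=1)
idx = rng.choice(n_terms, size=m, replace=False, p=lengths / lengths.sum())

C = T @ T[idx].T                     # correlations with sampled terms, n x m
W = C[idx]                           # m x m block among the sampled terms
# Nystrom estimate of the full n x n correlation matrix: C W^+ C^T.
K_approx = C @ np.linalg.pinv(W) @ C.T
```

Only the n-by-m slice C and the small m-by-m block W are ever computed, which is what makes the approach tractable when computing all term-term correlations would be too slow.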
Improvasher is built around two music processing modules: the first, a performance-following technique, makes beat-synchronous predictions of chroma features from a live musical input; the second, a music mashup system, determines the compatibility between beat-synchronous chromagrams from different pieces of music. Through the combination of these two techniques, a real-time predictive mashup can be generated, towards a new form of automatic accompaniment for interactive musical performance."} {"_id": "6321f959144d155276ec11ce3e6d366dfaefa163", "title": "Implementing Intelligent Traffic Control System for Congestion Control, Ambulance Clearance, and Stolen Vehicle Detection", "text": "This paper presents an intelligent traffic control system to let emergency vehicles pass smoothly. Each individual vehicle is equipped with a special radio frequency identification (RFID) tag (placed at a strategic location), which makes it impossible to remove or destroy. We use an RFID reader (NSK EDK-125-TTL) and a PIC16F877A system-on-chip to read the RFID tags attached to the vehicles. It counts the number of vehicles that pass along a particular path during a specified duration. It also determines the network congestion, and hence the green light duration for that path. If the RFID tag read belongs to a stolen vehicle, then a message is sent using a GSM SIM300 module to the police control room. In addition, when an ambulance is approaching the junction, it communicates with the traffic controller at the junction to turn on the green light. This module uses ZigBee modules on CC2500 and PIC16F877A system-on-chip for wireless communications between the ambulance and the traffic controller. The prototype was tested under different combinations of inputs in our wireless communication laboratory and experimental results were found as expected."} {"_id": "498e08e8e2f4e44a115fdf541583dbb3fa80d284", "title": "CAD-based pose estimation for random bin-picking of multiple objects using a RGB-D camera", "text": "In this paper, we propose a CAD-based 6-DOF pose estimation design for random bin-picking of multiple different objects using a Kinect RGB-D sensor. 3D CAD models of objects are constructed via a virtual camera, which generates a point cloud database for object recognition and pose estimation. A voxel grid filter is used to downsample the object point clouds, reducing the computing time of pose estimation. A voting-scheme method was adopted for the 6-DOF pose estimation as well as object recognition of different object types in the bin. Furthermore, an outlier filter is designed to filter out bad matching poses and occluded ones, so that the robot arm always picks up the uppermost object in the bin, increasing the pick-up success rate. A series of experiments on a Kuka 6-axis robot reveals that the proposed system works satisfactorily to pick up all random objects in the bin. The average recognition rate of the three object types is 93.9% and the pick-up success rate is 89.7%."} {"_id": "e92771cf6244a4b5965f3cae60d16131774b794c", "title": "Linear codes over Z4+uZ4: MacWilliams identities, projections, and formally self-dual codes", "text": "Linear codes are considered over the ring Z4 + uZ4, a non-chain extension of Z4. Lee weights and Gray maps for these codes are defined, and MacWilliams identities for the complete, symmetrized and Lee weight enumerators are proved. 
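For concreteness, here is a minimal sketch of the classical Gray map and Lee weight on Z4, the base ring in the codes abstract above; the paper's Gray map on Z4 + uZ4 builds on these notions, and its exact form is not reproduced here.

```python
# Minimal sketch of the classical Gray map and Lee weight on Z4, the base
# ring in the codes abstract above. The paper's map on Z4 + uZ4 is analogous
# but not reproduced here.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # Z4 -> F2^2
LEE_WEIGHT = {0: 0, 1: 1, 2: 2, 3: 1}                 # Lee weight on Z4

def gray_image(codeword):
    """Map a Z4 codeword to its binary Gray image (a tuple over F2)."""
    return tuple(bit for symbol in codeword for bit in GRAY[symbol % 4])

def lee_weight(codeword):
    """Lee weight of a Z4 codeword; equals Hamming weight of its Gray image."""
    return sum(LEE_WEIGHT[symbol % 4] for symbol in codeword)

c = (1, 2, 3, 0)
assert lee_weight(c) == sum(gray_image(c))  # weight-preserving property
```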
Two projections from Z4 + uZ4 to the rings Z4 and F2 + uF2 are considered, and self-dual codes over Z4 + uZ4 are studied in connection with these projections. Finally, three constructions are given for formally self-dual codes over Z4 + uZ4 and their Z4-images, together with some good examples of formally self-dual Z4-codes obtained through these constructions."} {"_id": "bd7ccb1ba361b4bba213c595b55eccaf3482423c", "title": "Millimeter-Wave Compact EBG Structure for Mutual Coupling Reduction Applications", "text": "A new millimeter-wave (MMW) electromagnetic band-gap (EBG) structure is presented. The proposed EBG structure, which requires no metallic vias or vertical components, is formed by etching two slots and adding two connecting bridges to a conventional uniplanar EBG unit cell. The transmission characteristics of the proposed EBG structure are measured. Results show that the proposed EBG structure has a wide bandgap around the 60 GHz band. The size of the proposed EBG unit cell is 78% smaller than that of a conventional uniplanar EBG and 72% smaller than that of a uniplanar-compact EBG (UC-EBG) operating in the same frequency band. Moreover, and despite the fabrication limitations at the 60 GHz band, the proposed EBG unit cell provides at least 12% more size reduction than any other planar EBG structure at microwave frequencies. Its enhanced performance and applicability to reducing mutual coupling in antenna arrays are then investigated. Results show a drastic decrease in the mutual coupling level. This EBG structure can find application in MMW wireless communication systems."} {"_id": "eeb889a7b1c82ab8bcdd22615abefbdddbae8e99", "title": "Alternative activation of macrophages", "text": "The classical pathway of interferon-\u03b3-dependent activation of macrophages by T helper 1 (TH1)-type responses is a well-established feature of cellular immunity to infection with intracellular pathogens, such as Mycobacterium tuberculosis and HIV. The concept of an alternative pathway of macrophage activation by the TH2-type cytokines interleukin-4 (IL-4) and IL-13 has gained credence in the past decade, to account for a distinctive macrophage phenotype that is consistent with a different role in humoral immunity and repair. In this review, I assess the evidence in favour of alternative macrophage activation in the light of macrophage heterogeneity, and define its limits and relevance to a range of immune and inflammatory conditions."} {"_id": "1e5668364b5831c3dc1fac0cfcf8ef3f275f08c3", "title": "EEG Signal Classification Using Wavelet Feature Extraction and Neural Networks", "text": "Decision support systems have been utilised since 1960, providing physicians with fast and accurate means towards more accurate diagnoses and increased tolerance when handling missing or incomplete data. This paper describes the application of neural network models for classification of electroencephalogram (EEG) signals. Decision making was performed in two stages: initially, a feature extraction scheme using the wavelet transform (WT) was applied, and then a learning-based classifier performed the classification. The performance of the neural model was evaluated in terms of training performance and classification accuracies, and the results confirmed that the proposed scheme has potential in classifying the EEG signals."} {"_id": "e577546dd7c767a207f7ca3d8e0148c94aaac857", "title": "ProM: The Process Mining Toolkit", "text": "Nowadays, all kinds of information systems store detailed information in logs. 
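A minimal sketch of the two-stage scheme in the EEG abstract above: wavelet decomposition for feature extraction, then a neural classifier. The wavelet family ('db4'), decomposition level, summary statistics, and network size are all assumptions made for illustration.

```python
# Minimal sketch of the two-stage scheme in the EEG abstract above: wavelet
# features, then a neural classifier. Wavelet family, level, statistics, and
# network size are illustrative assumptions; signals are random stand-ins.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(signal, wavelet="db4", level=4):
    """Summary statistics of each wavelet sub-band's coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA4, cD4, ..., cD1]
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c), np.max(np.abs(c))]
    return np.array(feats)

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 512))        # toy stand-in for EEG epochs
labels = rng.integers(0, 2, size=200)        # e.g., normal vs. abnormal
X = np.vstack([wavelet_features(s) for s in signals])

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X, labels)
```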
Process mining has emerged as a way to analyze these systems based on these detailed logs. Unlike classical data mining, the focus of process mining is on processes. First, process mining allows us to extract a process model from an event log. Second, it allows us to detect discrepancies between a modeled process (as it was envisioned to be) and an event log (as it actually is). Third, it can enrich an existing model with knowledge derived from an event log. This paper presents our tool ProM, which is the world-leading tool in the area of process mining."} {"_id": "fc40ad1238fba787dd8a58a7aed57a8d020a6fdc", "title": "Artificial neural networks: fundamentals, computing, design, and application.", "text": "Artificial neural networks (ANNs) are relatively new computational tools that have found extensive utilization in solving many complex real-world problems. The attractiveness of ANNs comes from their remarkable information processing characteristics pertinent mainly to nonlinearity, high parallelism, fault and noise tolerance, and learning and generalization capabilities. This paper aims to familiarize the reader with ANN-based computing (neurocomputing) and to serve as a useful companion practical guide and toolkit for the ANNs modeler along the course of ANN project development. The history of the evolution of neurocomputing and its relation to the field of neurobiology is briefly discussed. ANNs are compared to both expert systems and statistical regression and their advantages and limitations are outlined. A bird's eye review of the various types of ANNs and the related learning rules is presented, with special emphasis on backpropagation (BP) ANNs theory and design. A generalized methodology for developing successful ANNs projects from conceptualization, to design, to implementation, is described. The most common problems that BPANNs developers face during training are summarized in conjunction with possible causes and remedies. Finally, as a practical application, BPANNs were used to model the microbial growth curves of S. flexneri. The developed model was reasonably accurate in simulating both training and test time-dependent growth curves as affected by temperature and pH."} {"_id": "25406e6733a698bfc4ac836f8e74f458e75dad4f", "title": "What Size Net Gives Valid Generalization?", "text": "We address the question of when a network can be expected to generalize from m random training examples chosen from some arbitrary probability distribution, assuming that future test examples are drawn from the same distribution. Among our results are the following bounds on appropriate sample vs. network size. Assume $0 < \epsilon \le 1/8$. We show that if $m \ge O\big((W/\epsilon)\log(N/\epsilon)\big)$ random examples can be loaded on a feedforward network of linear threshold functions with $N$ nodes and $W$ weights, so that at least a fraction $1 - \epsilon/2$ of the examples are correctly classified, then one has confidence approaching certainty that the network will correctly classify a fraction $1 - \epsilon$ of future test examples drawn from the same distribution. 
Conversely, for fully-connected feedforward nets with one hidden layer, any learning algorithm using fewer than $\Omega(W/\epsilon)$ random training examples will, for some distributions of examples consistent with an appropriate weight choice, fail at least some fixed fraction of the time to find a weight choice that will correctly classify more than a $1 - \epsilon$ fraction of the future test examples."} {"_id": "656a33c1db546da8490d6eba259e2a849d73a001", "title": "Learning in Artificial Neural Networks: A Statistical Perspective", "text": "The premise of this article is that learning procedures used to train artificial neural networks are inherently statistical techniques. It follows that statistical theory can provide considerable insight into the properties, advantages, and disadvantages of different network learning methods. We review concepts and analytical results from the literatures of mathematical statistics, econometrics, systems identification, and optimization theory relevant to the analysis of learning in artificial neural networks. Because of the considerable variety of available learning procedures and necessary limitations of space, we cannot provide a comprehensive treatment. Our focus is primarily on learning procedures for feedforward networks. However, many of the concepts and issues arising in this framework are also quite broadly relevant to other network learning paradigms. In addition to providing useful insights, the material reviewed here suggests some potentially useful new training methods for artificial neural networks."} {"_id": "fbe24a2d9598c620324e3bd51e2f817cd35e9c81", "title": "Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights", "text": "A two-layer neural network can be used to approximate any nonlinear function. The behavior of the hidden nodes that allows the network to do this is described. Networks with one input are analyzed first, and the analysis is then extended to networks with multiple inputs. The result of this analysis is used to formulate a method for initialization of the weights of neural networks to reduce training time. Training examples are given and the learning curve for these examples is shown to illustrate the decrease in necessary training time. Introduction: Two-layer feedforward neural networks have been proven capable of approximating any arbitrary function [1], given that they have sufficient numbers of nodes in their hidden layers. We offer a description of how this works, along with a method of speeding up the training process by choosing the networks' initial weights. The relationship between the inputs and the output of a two-layer neural network may be described by Equation (1): $y = \sum_{i=0}^{H} w_i \cdot \mathrm{sigmoid}(W_i X + W_{bi})$ (1), where $y$ is the network's output, $X$ is the input vector, $H$ is the number of hidden nodes, $W_i$ is the weight vector of the $i$th node of the hidden layer, $W_{bi}$ is the bias weight of the $i$th hidden node, and $w_i$ is the weight of the output layer which connects the $i$th hidden unit to the output. The behavior of hidden nodes in two-layer networks with one input: To illustrate the behavior of the hidden nodes, a two-layer network with one input is trained to approximate a function of one variable $d(z)$. That is, the network is trained to produce $d(z)$ given $z$ as input using the back-propagation algorithm [2]. 
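A minimal NumPy sketch of Equation (1) above, the forward pass of such a two-layer network; the random initialization is only a placeholder, since the abstract's point is precisely that a smarter choice of initial weights can shorten training.

```python
# Minimal NumPy sketch of Equation (1) above: the network output is a weighted
# sum of sigmoids of the hidden nodes. Random init is a placeholder only.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
n_inputs, H = 1, 8                      # one input, H hidden nodes
W = rng.normal(size=(H, n_inputs))      # hidden weight vectors W_i
W_b = rng.normal(size=H)                # hidden bias weights W_bi
w = rng.normal(size=H)                  # output weights w_i

def forward(X):
    """Equation (1): y = sum_i w_i * sigmoid(W_i . X + W_bi)."""
    return w @ sigmoid(W @ X + W_b)

y = forward(np.array([0.5]))            # scalar network output
```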
The output of the network is given as"} {"_id": "f66696ce5c60b8fe71590521718213316fb2ea2f", "title": "Large Margin Metric Learning for Multi-Label Prediction", "text": "Canonical correlation analysis (CCA) and maximum margin output coding (MMOC) methods have shown promising results for multi-label prediction, where each instance is associated with multiple labels. However, these methods require an expensive decoding procedure to recover the multiple labels of each testing instance. The testing complexity becomes unacceptable when there are many labels. To avoid decoding completely, we present a novel large margin metric learning paradigm for multi-label prediction. In particular, the proposed method learns a distance metric to discover label dependency such that instances with very different multiple labels will be moved far away. To handle many labels, we present an accelerated proximal gradient procedure to speed up the learning process. Comprehensive experiments demonstrate that our proposed method is significantly faster than CCA and MMOC in terms of both training and testing complexities. Moreover, our method achieves superior prediction performance compared with state-of-the-art methods."} {"_id": "4afa918506a45be77b0682156c3dfdc956272fa3", "title": "Optimal Design of Energy-Efficient Multi-User MIMO Systems: Is Massive MIMO the Answer?", "text": "Assume that a multi-user multiple-input multiple-output (MIMO) system is designed from scratch to uniformly cover a given area with maximal energy efficiency (EE). What are the optimal number of antennas, active users, and transmit power? The aim of this paper is to answer this fundamental question. We consider jointly the uplink and downlink with different processing schemes at the base station and propose a new realistic power consumption model that reveals how the above parameters affect the EE. Closed-form expressions for the EE-optimal value of each parameter, when the other two are fixed, are provided for zero-forcing (ZF) processing in single-cell scenarios. These expressions prove how the parameters interact. For example, in sharp contrast to common belief, the transmit power is found to increase (not to decrease) with the number of antennas. This implies that energy-efficient systems can operate in high signal-to-noise ratio regimes in which interference-suppressing signal processing is mandatory. Numerical and analytical results show that the maximal EE is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve a relatively large number of users using ZF processing. The numerical results show the same behavior under imperfect channel state information and in symmetric multi-cell scenarios."} {"_id": "0d94ee90e5d91fa1eb3d5e26d8a2cd9a1e57812c", "title": "A systematic analysis of textual variability modeling languages", "text": "Industrial variability models tend to grow in size and complexity due to ever-increasing functionality and complexity of software systems. Some authors report on variability models specifying several thousands of variabilities. However, traditional variability modeling approaches do not seem to scale adequately to cope with size and complexity of such models. Recently, textual variability modeling languages have been advocated as one scalable solution.\n In this paper, we provide a systematic analysis of the capabilities of current textual variability modeling languages, in particular regarding variability management in the large. 
Towards this aim, we define a classification schema consisting of five dimensions, classify ten different textual variability modeling languages using the classification schema and provide an analysis. In summary, some textual variability modeling languages go beyond textual representations of traditional variability modeling approaches and provide sophisticated modeling concepts and constraint languages. Three textual variability modeling approaches already support mechanisms for large-scale variability modeling such as model composition, modularization, or evolution support."} {"_id": "0a70ea1496ccd01ea3e51afe60f508ee6c0984ec", "title": "Cracking the Code of Biodiversity Responses to Past Climate Change.", "text": "How individual species and entire ecosystems will respond to future climate change are among the most pressing questions facing ecologists. Past biodiversity dynamics recorded in the paleoecological archives show a broad array of responses, yet significant knowledge gaps remain. In particular, the relative roles of evolutionary adaptation, phenotypic plasticity, and dispersal in promoting survival during times of climate change have yet to be clarified. Investigating the paleo-archives offers great opportunities to understand biodiversity responses to future climate change. In this review we discuss the mechanisms by which biodiversity responds to environmental change, and identify gaps of knowledge on the role of range shifts and tolerance. We also outline approaches at the intersection of paleoecology, genomics, experiments, and predictive models that will elucidate the processes by which species have survived past climatic changes and enhance predictions of future changes in biological diversity."} {"_id": "5dd5f3844c0402141e4083bccdde66303750f87c", "title": "Application of Artificial Intelligence to Real-Time Fault Detection in Permanent-Magnet Synchronous Machines", "text": "This paper discusses faults in rotating electrical machines in general and describes a fault detection technique using an artificial neural network (ANN), which is an expert system, to detect short-circuit fault currents in the stator windings of a permanent-magnet synchronous machine (PMSM). The experimental setup consists of a PMSM coupled mechanically to a dc motor configured to run in torque mode. Particle swarm optimization is used to adjust the weights of the ANN. All simulations are carried out in the MATLAB/SIMULINK environment. The technique is shown to be effective and can be applied to real-time fault detection."} {"_id": "38438103e787b7b2d112596fd14225872a5403f3", "title": "Language Models with Pre-Trained (GloVe) Word Embeddings", "text": "In this work we present a step-by-step implementation of training a Language Model (LM), using a Recurrent Neural Network (RNN) and pre-trained GloVe word embeddings, introduced by Pennington et al. in [1]. The implementation follows the general idea of training RNNs for LM tasks presented in [2], but uses a Gated Recurrent Unit (GRU) [3] as the memory cell rather than the more commonly used LSTM [4]. The implementation presented is based on Keras [5]."} {"_id": "e22be626987f1288744bb9f0ffc60806b4ed8bbc", "title": "Comparison study of non-orthogonal multiple access schemes for 5G", "text": "With the development of mobile Internet and Internet of things (IoT), the 5th generation (5G) wireless communications will see an explosive increase in mobile traffic. 
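A minimal sketch of the recipe in the GloVe language-model record above, written against tf.keras: frozen pre-trained embeddings feeding a GRU memory cell and a softmax over the vocabulary. The vocabulary size, embedding dimension, GRU width, and the random stand-in for the real GloVe matrix are all assumptions, not the authors' values.

```python
# GRU language model with frozen pre-trained embeddings (illustrative sketch).
import numpy as np
import tensorflow as tf

vocab_size, embed_dim = 10000, 100                    # assumed sizes
glove_matrix = np.random.rand(vocab_size, embed_dim)  # replace with real GloVe vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(glove_matrix),
        trainable=False),                             # keep GloVe vectors fixed
    tf.keras.layers.GRU(256),                         # GRU cell instead of LSTM
    tf.keras.layers.Dense(vocab_size, activation="softmax"),  # next-word distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```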
To address challenges in 5G such as higher spectral efficiency, massive connectivity, and lower latency, some non-orthogonal multiple access (NOMA) schemes have been recently actively investigated, including power-domain NOMA, multiple access with low-density spreading (LDS), sparse code multiple access (SCMA), multiuser shared access (MUSA), pattern division multiple access (PDMA), etc. Different from conventional orthogonal multiple access (OMA) schemes, NOMA can realize overloading by introducing controllable interference at the cost of slightly increased receiver complexity, which can achieve significant gains in spectral efficiency and accommodate many more users. In this paper, we will discuss basic principles and key features of three typical NOMA schemes, i.e., SCMA, MUSA, and PDMA. Moreover, their performance in terms of uplink bit error rate (BER) will be compared. Simulation results show that in typical Rayleigh fading channels, SCMA has the best performance, while the BER performance of MUSA and PDMA is very close to each other. In addition, we also analyze the performance of PDMA using the same factor graph as SCMA, which indicates that the performance gain of SCMA over PDMA comes from both the difference in factor graph and the codebook optimization."} {"_id": "1ac8d6ec012f0bca057ea0ca71df8c8746cf39c5", "title": "RPL-based multipath Routing Protocols for Internet of Things on Wireless Sensor Networks", "text": "In the last few years, Wireless Sensor Networks (WSNs) have emerged as an essential platform for the prominent concept of the Internet of Things (IoT). Their applications range from so-called \u201csmart cities\u201d and \u201csmart homes\u201d to environmental monitoring. The connectivity in IoT mainly relies on RPL (IPv6 Routing Protocol for Low Power and Lossy Network) - a routing algorithm that constructs and maintains DODAGs (Destination Oriented Directed Acyclic Graph) to transmit data from sensors to the root over a single path. However, due to the resource constraints of sensor nodes and the unreliability of wireless links, single-path routing approaches cannot be considered effective techniques to meet the performance demands of various applications. In order to overcome these problems, many individual and group research efforts focus on multipath solutions for the RPL routing protocol. In this paper, we propose three multipath schemes based on RPL (Energy Load Balancing-ELB, Fast Local Repair-FLR and their combination, ELB-FLR) and integrate them in a modified IPv6 communication stack for IoT. These schemes are implemented in the OMNET++ simulator and the experiment outcomes show that our approaches achieve better energy efficiency, end-to-end delay, packet delivery rate, and network load balance than the traditional RPL solution."} {"_id": "041326c202655cd60df276bf7a148f2ecddfc479", "title": "Cognitive architectures: Research issues and challenges", "text": "In this paper, we examine the motivations for research on cognitive architectures and review some candidates that have been explored in the literature. After this, we consider the capabilities that a cognitive architecture should support, some properties that it should exhibit related to representation, organization, performance, and learning, and some criteria for evaluating such architectures at the systems level. In closing, we discuss some open issues that should drive future research in this important area. 
"} {"_id": "e2a3bbfd375811c5fef523be8623904455af1cec", "title": "GRS: The green, reliability, and security of emerging machine to machine communications", "text": "Machine-to-machine communications is characterized by involving a large number of intelligent machines sharing information and making collaborative decisions without direct human intervention. Due to its potential to support a large number of ubiquitous characteristics and to achieve better cost efficiency, M2M communications has quickly become a market-changing force for a wide variety of real-time monitoring applications, such as remote e-healthcare, smart homes, environmental monitoring, and industrial automation. However, the flourishing of M2M communications still hinges on fully understanding and managing the existing challenges: energy efficiency (green), reliability, and security (GRS). Without guaranteed GRS, M2M communications cannot be widely accepted as a promising communication paradigm. In this article, we explore the emerging M2M communications in terms of the potential GRS issues, and aim to promote an energy-efficient, reliable, and secure M2M communications environment. Specifically, we first formalize M2M communications architecture to incorporate three domains - the M2M, network, and application domains - and accordingly define GRS requirements in a systematic manner. We then introduce a number of GRS enabling techniques by exploring activity scheduling, redundancy utilization, and cooperative security mechanisms. These techniques hold promise in propelling the development and deployment of M2M communications applications."} {"_id": "b6bf558edda0378cd756a0267801164109f5f5c5", "title": "Adverse drug reactions as cause of admission to hospital: prospective analysis of 18 820 patients.", "text": "OBJECTIVE\nTo ascertain the current burden of adverse drug reactions (ADRs) through a prospective analysis of all admissions to hospital.\n\n\nDESIGN\nProspective observational study.\n\n\nSETTING\nTwo large general hospitals in Merseyside, England.\n\n\nPARTICIPANTS\n18 820 patients aged > 16 years admitted over six months and assessed for cause of admission.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of admissions due to an ADR, length of stay, avoidability, and outcome.\n\n\nRESULTS\nThere were 1225 admissions related to an ADR, giving a prevalence of 6.5%, with the ADR directly leading to the admission in 80% of cases. The median bed stay was eight days, accounting for 4% of the hospital bed capacity. The projected annual cost of such admissions to the NHS is 466m pounds sterling (706m Euros, 847m dollars). The overall fatality was 0.15%. Most reactions were either definitely or possibly avoidable. Drugs most commonly implicated in causing these admissions included low dose aspirin, diuretics, warfarin, and non-steroidal anti-inflammatory drugs other than aspirin, the most common reaction being gastrointestinal bleeding.\n\n\nCONCLUSION\nThe burden of ADRs on the NHS is high, accounting for considerable morbidity, mortality, and extra costs. Although many of the implicated drugs have proved benefit, measures need to be put into place to reduce the burden of ADRs and thereby further improve the benefit:harm ratio of the drugs."} {"_id": "ed6e0366584769a5b4c45b4fdf847583038611ae", "title": "Health-related rumour detection on Twitter", "text": "In recent years, social networks have emerged as a critical means of information spreading. 
In spite of all the positive consequences this phenomenon brings, unverified and instrumentally relevant information statements in circulation, known as rumours, are becoming a potential threat to society. Recently, there have been several studies on topic-independent rumour detection on Twitter. In this paper we present a novel rumour detection system which focuses on a specific topic, that is, health-related rumours on Twitter. To this aim, we constructed a new subset of features including influence potential and network characteristics. We tested our approach on a real dataset observing promising results, as it is able to correctly detect about 89% of rumours, with acceptable levels of precision."} {"_id": "cf1fb664b28fa5dd4952d08c1cb631c20a504b02", "title": "Full bridge phase-shifted soft switching high-frequency inverter with boost PFC function for induction heating system", "text": "This paper is mainly concerned with a high-frequency soft-switching PWM inverter suitable for consumer induction heating systems. The proposed system is composed of a soft-switching chopper-based voltage-boost PFC input stage and a phase-shifted PWM-controlled full-bridge ZCZVS high-frequency inverter stage. Its operating fundamentals and performance are illustrated and evaluated through experimental results, which substantially prove its effectiveness from a practical point of view."} {"_id": "68af6bcc37d08863af5bb081301e562b597cb4fc", "title": "Automatic Modulation Classification of Overlapped Sources Using Multi-Gene Genetic Programming With Structural Risk Minimization Principle", "text": "As the spectrum environment becomes increasingly crowded and complicated, primary users may be interfered with by secondary users and other illegal users. Automatic modulation classification (AMC) of a single source cannot recognize the overlapped sources. Consequently, the AMC of overlapped sources attracts much attention. In this paper, we propose a genetic programming-based modulation classification method for overlapped sources (GPOS). The proposed GPOS consists of two stages, the training stage and the classification stage. In the training stage, multi-gene genetic programming (MGP)-based feature engineering transforms sample estimates of cumulants into highly discriminative MGP-features iteratively, until optimal MGP-features (OMGP-features) are obtained, where the structural risk minimization principle (SRMP) is employed to evaluate the classification performance of MGP-features and train the classifier. Moreover, a self-adaptive genetic operation is designed to accelerate the feature engineering process. In the classification stage, the classification decision is made by the trained classifier using the OMGP-features. Through simulation results, we demonstrate that the proposed scheme outperforms other existing methods in terms of classification performance and robustness in the case of varying power ratios and fading channels."} {"_id": "87a7ccc5f37cd846a978ae17d60b6dcd923bd996", "title": "Measuring geographical regularities of crowd behaviors for Twitter-based geo-social event detection", "text": "Recently, microblogging sites such as Twitter have garnered a great deal of attention as an advanced form of location-aware social network services, whereby individuals can easily and instantly share their most recent updates from any place. In this study, we aim to develop a geo-social event detection system by monitoring crowd behaviors indirectly via Twitter. 
In particular, we attempt to detect the occurrence of local events such as local festivals; a considerable number of Twitter users probably write many posts about these events. To detect such unusual geo-social events, we depend on geographical regularities deduced from the usual behavior patterns of crowds with geo-tagged microblogs. By comparing these regularities with the estimated ones, we decide whether there are any unusual events happening in the monitored geographical area. Finally, we describe the experimental results to evaluate the proposed unusuality detection method on the basis of geographical regularities obtained from a large number of geo-tagged tweets around Japan via Twitter."} {"_id": "bccda0e4b34cc1b73b50f0faeb1d340919619825", "title": "Classic Hallucinogens and Mystical Experiences: Phenomenology and Neural Correlates.", "text": "This chapter begins with a brief review of descriptions and definitions of mystical-type experiences and the historical connection between classic hallucinogens and mystical experiences. The chapter then explores the empirical literature on experiences with classic hallucinogens in which claims about mystical or religious experiences have been made. A psychometrically validated questionnaire is described for the reliable measurement of mystical-type experiences occasioned by classic hallucinogens. Controlled laboratory studies show that under double-blind conditions that provide significant controls for expectancy bias, psilocybin can occasion complete mystical experiences in the majority of people studied. These effects are dose-dependent, specific to psilocybin compared to placebo or a psychoactive control substance, and have enduring impact on the moods, attitudes, and behaviors of participants as assessed by self-report of participants and ratings by community observers. Other studies suggest that enduring personal meaning in healthy volunteers and therapeutic outcomes in patients, including reduction and cessation of substance abuse behaviors and reduction of anxiety and depression in patients with a life-threatening cancer diagnosis, are related to the occurrence of mystical experiences during drug sessions. The final sections of the chapter draw parallels in human neuroscience research between the neural bases of experiences with classic hallucinogens and the neural bases of meditative practices for which claims of mystical-type experience are sometimes made. From these parallels, a functional neural model of mystical experience is proposed, based on changes in the default mode network of the brain that have been observed after the administration of classic hallucinogens and during meditation practices for which mystical-type claims have been made."} {"_id": "97626d505052d2eb6ebeb194d4ba3d993c320fe4", "title": "Sparsely sampled Fourier ptychography.", "text": "Fourier ptychography (FP) is an imaging technique that applies angular diversity functions for high-resolution complex image recovery. The FP recovery routine switches between two working domains: the spectral and spatial domains. In this paper, we investigate the spectral-spatial data redundancy requirement of the FP recovery process. We report a sparsely sampled FP scheme by exploring the sampling interplay between these two domains. We demonstrate the use of the reported scheme for bypassing the high-dynamic-range combination step in the original FP recovery routine. As such, it is able to shorten the acquisition time of the FP platform by ~50%. 
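A toy sketch of the "compare learned regularities with current estimates" idea from the geo-social event detection record above. The grid cells, baseline statistics, and deviation threshold are all assumed stand-ins for illustration, not the authors' implementation.

```python
# Flag spatial cells whose geo-tagged post volume deviates from the
# usual crowd behaviour by more than z_thresh standard deviations.
import numpy as np

def unusual_cells(current_counts, baseline_mean, baseline_std, z_thresh=3.0):
    z = (current_counts - baseline_mean) / np.maximum(baseline_std, 1e-9)
    return np.where(z > z_thresh)[0]   # indices of unusually active cells

# Toy example: 5 spatial cells; cell 2 hosts a local festival.
mean = np.array([20.0, 15.0, 30.0, 10.0, 25.0])
std  = np.array([ 4.0,  3.0,  5.0,  2.0,  6.0])
now  = np.array([22.0, 14.0, 90.0, 11.0, 27.0])
print(unusual_cells(now, mean, std))   # -> [2]
```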
As a special case of the sparsely sampled FP, we also discuss a sub-sampled scheme and demonstrate its application in solving the pixel aliasing problem that plagues the original FP algorithm. We validate the reported schemes with both simulations and experiments. This paper provides insights for the development of the FP approach."} {"_id": "fdbf82918de37e14d047893c074f59deb2879f32", "title": "Threats to Feminist Identity and Reactions to Gender Discrimination", "text": "The aim of this research was to examine conditions that modify feminists' support for women as targets of gender discrimination. In an experimental study we tested the hypothesis that a threatened feminist identity would lead to greater differentiation between feminists and conservative women as victims of discrimination and, in turn, a decrease in support for non-feminist victims. The study was conducted among 96 young Polish female professionals and graduate students from Gender Studies programs in Warsaw who self-identified as feminists (M_age = 22.23). Participants were presented with a case of workplace gender discrimination. Threat to feminist identity and worldview of the discrimination victim (feminist vs. conservative) were varied between research conditions. Results indicate that identity threat caused feminists to show conditional reactions to discrimination. Under identity threat, feminists perceived the situation as less discriminatory when the target held conservative views on gender relations than when the target was presented as feminist. This effect was not observed under conditions of no threat. Moreover, feminists showed an increase in compassion for the victim when she was portrayed as a feminist compared to when she was portrayed as conservative. Implications for the feminist movement are discussed."} {"_id": "9b9906a2cf7fe150faed8d618def803232684719", "title": "Dynamic Filters in Graph Convolutional Networks", "text": "Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. While CNNs naturally extend to other domains, such as audio and video, where data is also organized in rectangular grids, they do not easily generalize to other types of data such as 3D shape meshes, social network graphs or molecular graphs. To handle such data, we propose a novel graph-convolutional network architecture that builds on a generic formulation that relaxes the 1-to-1 correspondence between filter weights and data elements around the center of the convolution. The main novelty of our architecture is that the shape of the filter is a function of the features in the previous network layer, which is learned as an integral part of the neural network. Experimental evaluations on digit recognition, semi-supervised document classification, and 3D shape correspondence yield state-of-the-art results, significantly improving over previous work for shape correspondence."} {"_id": "50dad1b5f35c0ba613fd79fae91d7270c64cea0f", "title": "BINSEC/SE: A Dynamic Symbolic Execution Toolkit for Binary-Level Analysis", "text": "When it comes to software analysis, several approaches exist from heuristic techniques to formal methods, which are helpful at solving different kinds of problems. Unfortunately, very few initiatives seek to aggregate these techniques in the same platform. BINSEC intends to fill this gap by allowing modular binary-level analyses. 
This work focuses on BINSEC/SE, the new dynamic symbolic execution engine (DSE) implemented in BINSEC. We will highlight the novelties of the engine, especially in terms of interactions between concrete and symbolic execution and the optimization of formula generation. Finally, two reverse engineering applications are shown in order to emphasize the tool's effectiveness."} {"_id": "4a5f9152c66f61158abc8e9eaa8de743129cdbba", "title": "Artificial intelligent firewall", "text": "Firewalls are now an integral part of network security. An intelligent firewall that prevents unauthorized access to a system has been developed. Artificial intelligence applications are uniquely suited for the ever-changing, ever-evolving world of network security. Typical firewalls are only as good as the information provided by the Network Administrator. A new type of attack creates vulnerabilities, which a static firewall does not have the ability to avoid without human direction. An AI-managed firewall service, however, can protect a computer network from known and future threats. We report in this paper on research in progress concerning the integration of different security techniques. A main purpose of the project is to integrate a smart detection engine into a firewall. The smart detection engine will aim at not only detecting anomalous network traffic as in classical IDSs, but also detecting unusual structures in data packets that suggest the presence of virus data. We will report in this paper on the concept of an intelligent firewall that contains a smart detection engine for potentially malicious data packets."} {"_id": "bc12715a1ddf1a540dab06bf3ac4f3a32a26b135", "title": "Tracking the Trackers: An Analysis of the State of the Art in Multiple Object Tracking", "text": "Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. We present a benchmark for Multiple Object Tracking launched in late 2014, with the goal of creating a framework for the standardized evaluation of multiple object tracking methods. This paper collects the two releases of the benchmark made so far, and provides an in-depth analysis of almost 50 state-of-the-art trackers that were tested on over 11000 frames. We show the current trends and weaknesses of multiple people tracking methods, and provide pointers to what researchers should be focusing on to push the field forward."} {"_id": "2e0db4d4c8bdc7e11541b362cb9f8972f66563ab", "title": "Personality Analysis Through Handwriting", "text": ""} {"_id": "02599a02d46ea2f1c00e14cac2a76dcb156df8ee", "title": "Search Based Software Engineering", "text": "The use of evolutionary algorithms for solving multi-objective optimization problems has become increasingly popular, mainly within the last 15 years. From among the several research trends that have originated in recent years, one of the most promising is the use of hybrid approaches that improve the performance of multi-objective evolutionary algorithms (MOEAs). In this talk, some of the most representative research on the use of hybrid approaches in evolutionary multi-objective optimization will be discussed. The topics discussed will include multi-objective memetic algorithms, hybridization of MOEAs with gradient-based methods and with direct search methods, as well as multi-objective hyperheuristics. 
Some applications of these approaches as well as some potential paths for future research in this area will also be briefly discussed."} {"_id": "2b286ed9f36240e1d11b585d65133db84b52122c", "title": "Real-time 3D eyelids tracking from semantic edges", "text": "State-of-the-art real-time face tracking systems still lack the ability to realistically portray subtle details of various aspects of the face, particularly the region surrounding the eyes. To improve this situation, we propose a technique to reconstruct the 3D shape and motion of eyelids in real time. By combining these results with the full facial expression and gaze direction, our system generates complete face tracking sequences with more detailed eye regions than existing solutions in real-time. To achieve this goal, we propose a generative eyelid model which decomposes eyelid variation into two low-dimensional linear spaces which efficiently represent the shape and motion of eyelids. Then, we modify a holistically-nested DNN model to jointly perform semantic eyelid edge detection and identification on images. Next, we correspond vertices of the eyelid model to 2D image edges, and employ polynomial curve fitting and a search scheme to handle incorrect and partial edge detections. Finally, we use the correspondences in a 3D-to-2D edge fitting scheme to reconstruct eyelid shape and pose. By integrating our fast fitting method into a face tracking system, the estimated eyelid results are seamlessly fused with the face and eyeball results in real time. Experiments show that our technique applies to different human races, eyelid shapes, and eyelid motions, and is robust to changes in head pose, expression and gaze direction."} {"_id": "05c025af60aeab10a3069256674325802c844212", "title": "Recurrent Network Models for Human Dynamics", "text": "We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and prediction of human body pose in videos and motion capture. The ERD model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers. We test instantiations of ERD architectures in the tasks of motion capture (mocap) generation, body pose labeling and body pose forecasting in videos. Our model handles mocap training data across multiple subjects and activity domains, and synthesizes novel motions while avoiding drifting for long periods of time. For human pose labeling, ERD outperforms a per frame body part detector by resolving left-right body part confusions. For video pose forecasting, ERD predicts body joint displacements across a temporal horizon of 400ms and outperforms a first order motion model based on optical flow. ERDs extend previous Long Short Term Memory (LSTM) models in the literature to jointly learn representations and their dynamics. Our experiments show such representation learning is crucial for both labeling and prediction in space-time. We find this is a distinguishing feature between the spatio-temporal visual domain in comparison to 1D text, speech or handwriting, where straightforward hard coded representations have shown excellent results when directly combined with recurrent units [31]."} {"_id": "02a88a2f2765b17c9ea76fe13148b4b8a9050b95", "title": "DeepPose: Human Pose Estimation via Deep Neural Networks", "text": "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. 
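A minimal sketch of the joint-coordinate regression formulation in the DeepPose record above. The real model is a convolutional network with cascaded refinement stages; this tiny MLP, and the image size and joint count used here, are illustrative assumptions only.

```python
# Pose estimation as direct regression to (x, y) joint coordinates.
import torch
import torch.nn as nn

NUM_JOINTS = 14                                   # assumed joint count

class PoseRegressor(nn.Module):
    def __init__(self, in_dim=3 * 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 2 * NUM_JOINTS),       # two coordinates per joint
        )

    def forward(self, img):
        return self.net(img).view(-1, NUM_JOINTS, 2)

model = PoseRegressor()
pred = model(torch.randn(8, 3, 64, 64))           # batch of 8 toy images
loss = nn.functional.mse_loss(pred, torch.randn(8, NUM_JOINTS, 2))
loss.backward()                                   # L2 regression training signal
```

A cascade, as the record describes, would feed crops around the current joint estimates into further regressors of the same form to refine the prediction.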
We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-the-art or better performance on four academic benchmarks of diverse real-world images."} {"_id": "092b64ce89a7ec652da935758f5c6d59499cde6e", "title": "Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments", "text": "We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m."} {"_id": "1fdc785c0152d86d661213038150195058a24703", "title": "Sparse deep belief net model for visual area V2", "text": "Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or \u201cdeep,\u201d structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of Hinton et al. (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. 
Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear (\u201ccontour\u201d) features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex \u201ccorner\u201d features matches well with the results from Ito & Komatsu\u2019s study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling higher-order features."} {"_id": "497a80b2813cffb17f46af50e621a71505094528", "title": "Modeling Human Motion Using Binary Latent Variables", "text": "We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued \u201cvisible\u201d variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/\u223cgwtaylor/publications/nips2006mhmublv/"} {"_id": "02de9d7b2c76a11896902c79b329a3034fc572b6", "title": "Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning", "text": "Emerging paradigms like High Performance Data Analytics (HPDA) and Deep Learning (DL) pose at least two new design challenges for existing MPI runtimes. First, these paradigms require efficient support for communicating unusually large messages across processes. And second, the communication buffers used by HPDA applications and DL frameworks generally reside on a GPU's memory. In this context, we observe that conventional MPI runtimes have been optimized over decades to achieve the lowest possible communication latency for relatively smaller message sizes (up to 1 megabyte), and then only for CPU memory buffers. With the advent of CUDA-Aware MPI runtimes, a lot of research has been conducted to improve the performance of GPU-buffer-based communication. However, little exists in the current state of the art that deals with very large message communication of GPU buffers. In this paper, we investigate these new challenges by analyzing the performance bottlenecks in existing CUDA-Aware MPI runtimes like MVAPICH2-GDR, and propose hierarchical collective designs to improve communication latency of the MPI_Bcast primitive by exploiting a new communication library called NCCL. To the best of our knowledge, this is the first work that addresses these new requirements where GPU buffers are used for communication with message sizes surpassing hundreds of megabytes. We highlight the design challenges for our work along with the details of design and implementation. In addition, we provide a comprehensive performance evaluation using a Micro-benchmark and a CUDA-Aware adaptation of the Microsoft CNTK DL framework. 
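The hierarchical-broadcast idea in the NCCL/CUDA-Aware-MPI record above can be sketched with plain mpi4py on host buffers: an inter-node broadcast among node leaders followed by an intra-node broadcast, pipelined over chunks of the large message. The chunk and message sizes are arbitrary assumptions, and the actual design uses NCCL for the intra-node step and GPU-resident buffers, which this sketch omits.

```python
# Two-level (hierarchical) broadcast of a large message, chunk by chunk.
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
node = world.Split_type(MPI.COMM_TYPE_SHARED)      # ranks sharing a node
leaders = world.Split(0 if node.rank == 0 else MPI.UNDEFINED, world.rank)

CHUNK = 1 << 22                                     # 4 MB chunks (assumed)
buf = np.empty(1 << 26, dtype=np.uint8)             # 64 MB message (assumed)

for lo in range(0, buf.size, CHUNK):                # pipeline over the message
    view = buf[lo:lo + CHUNK]
    if leaders != MPI.COMM_NULL:                    # step 1: across node leaders
        leaders.Bcast(view, root=0)
    node.Bcast(view, root=0)                        # step 2: within each node
```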
We report up to 47% improvement in training time for CNTK using the proposed hierarchical MPI_Bcast design."} {"_id": "bd1c2b1cda567cc73be398164fdd61c7c341cef5", "title": "Bifurcation Analysis of a Mathematical Model for Malaria Transmission", "text": "We present an ordinary differential equation mathematical model for the spread of malaria in human and mosquito populations. Susceptible humans can be infected when they are bitten by an infectious mosquito. They then progress through the exposed, infectious, and recovered classes, before reentering the susceptible class. Susceptible mosquitoes can become infected when they bite infectious or recovered humans, and once infected they move through the exposed and infectious classes. Both species follow a logistic population model, with humans having immigration and disease-induced death. We define a reproductive number, R0, for the number of secondary cases that one infected individual will cause through the duration of the infectious period. We find that the disease-free equilibrium is locally asymptotically stable when R0 < 1 and unstable when R0 > 1. We prove the existence of at least one endemic equilibrium point for all R0 > 1. In the absence of disease-induced death, we prove that the transcritical bifurcation at R0 = 1 is supercritical (forward). Numerical simulations show that for larger values of the disease-induced death rate, a subcritical (backward) bifurcation is possible at R0 = 1."} {"_id": "406847bc624fd81a2fe63d42827e6755aa439ebb", "title": "The Distributed Dissimilar Redundancy Architecture of Fly-by-Wire Flight Control System", "text": "This article presents a distributed dissimilar redundancy architecture for a fly-by-wire Flight Control System. It is based on multiple dissimilar flight control computers and a redundancy structure that includes cross-arranged hydraulic and power supplies, three types of data buses, numerous data correction monitors, and a distributed control loop to avoid correlated faults and achieve the rigid safety airworthiness requirements. A Fault Tree Analysis (FTA) is implemented to verify whether the system safety could meet system design targets."} {"_id": "fd0e2f95fbe0da2e75288c8561e7553d8efe3325", "title": "Handwritten Digit Recognition: A Neural Network Demo", "text": "A handwritten digit recognition system was used in a demonstration project to visualize artificial neural networks, in particular Kohonen\u2019s self-organizing feature map. The purpose of this project was to introduce neural networks through a relatively easy-to-understand application to the general public. This paper describes several techniques used for preprocessing the handwritten digits, as well as a number of ways in which neural networks were used for the recognition task. Whereas the main goal was a purely educational one, a moderate recognition rate of 98% was reached on a test set."} {"_id": "24c180807250d54a733ac830bda979f13cf12231", "title": "Unmanned Aerial Vehicle Based Wireless Sensor Network for Marine-Coastal Environment Monitoring", "text": "Marine environments are delicate ecosystems which directly influence local climates, flora, fauna, and human activities. Their monitoring plays a key role in their preservation, which is most commonly done through the use of environmental sensing buoy networks. These devices transmit data by means of satellite communications or close-range base stations, which present several limitations and elevated infrastructure costs. 
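Looping back to the malaria-transmission record above: a toy numerical integration of an SEIRS (human) / SEI (mosquito) system of that general shape. Every equation and rate constant here is a simplified assumption for illustration; the paper's model additionally includes logistic demography, immigration, and disease-induced death.

```python
# Toy SEIRS/SEI malaria dynamics integrated with scipy (illustrative only).
from scipy.integrate import solve_ivp

b, beta_h, beta_m = 0.3, 0.3, 0.3            # biting/transmission rates (assumed)
sigma_h, gamma_h, omega = 1/10, 1/20, 1/200  # human incubation, recovery, waning
sigma_m, mu_m = 1/12, 1/14                   # mosquito incubation and death

def rhs(t, y):
    Sh, Eh, Ih, Rh, Sm, Em, Im = y
    Nh = Sh + Eh + Ih + Rh
    lam_h = b * beta_h * Im / Nh             # force of infection on humans
    lam_m = b * beta_m * Ih / Nh             # force of infection on mosquitoes
    births_m = mu_m * (Sm + Em + Im)         # births balance deaths (constant pop.)
    return [
        -lam_h * Sh + omega * Rh,
        lam_h * Sh - sigma_h * Eh,
        sigma_h * Eh - gamma_h * Ih,
        gamma_h * Ih - omega * Rh,
        births_m - lam_m * Sm - mu_m * Sm,
        lam_m * Sm - (sigma_m + mu_m) * Em,
        sigma_m * Em - mu_m * Im,
    ]

sol = solve_ivp(rhs, (0, 365), [990, 0, 10, 0, 2990, 0, 10])
print(sol.y[2, -1])                          # infectious humans after one year
```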
Unmanned Aerial Vehicles (UAVs) are another alternative for remote environmental monitoring which provide new types of data and ease of use. These aircraft are mainly used in video-capture-related applications, in their various light spectra, and do not provide the same data as sensing buoys, nor can they be used for such extended periods of time. The aim of this research is to provide a flexible, easy to deploy and cost-effective Wireless Sensor Network (WSN) for monitoring marine environments. This proposal uses a UAV as a mobile data collector, low-power long-range communications and sensing buoys as part of a single WSN. A complete description of the design, development, and implementation of the various parts of this system is presented, as well as its validation in a real-world scenario."} {"_id": "b9d831d6e7c015317b2be3fdbbc467e3e717e58d", "title": "A Modularization Method for Battery Equalizers Using Multiwinding Transformers", "text": "This paper proposes a modularized global architecture using multi-winding transformers for battery cell balancing. The global balancing for a series-connected battery string is achieved based on forward conversion in each battery module and based on flyback conversion among modules. The demagnetization of the multiwinding transformers is also simultaneously achieved by the flyback conversion among modules without the need of additional demagnetizing circuits. Moreover, all MOSFET switches are driven by two complementary pulse width modulation signals without the requirement of cell voltage sensors, and energy can be automatically and simultaneously delivered from any high voltage cells to any low voltage cells. Compared with existing equalizers requiring additional balancing circuits for battery modules, the proposed modularized equalizer shares one circuit for the balancing among cells and modules. The balancing performance of the proposed equalizer is perfectly verified through experimental results, and the maximum balancing efficiency is up to 91.3%. In summary, the proposed modularized equalizer has the advantages of easier modularization, simpler control, higher efficiency, smaller size, and lower cost, ensuring higher reliability and easier implementation for the battery system."} {"_id": "b65bee86a4d65a52796a2db19ab2c49d855a59fa", "title": "Probabilistic diffusion tractography with multiple fibre orientations: What can we gain?", "text": "We present a direct extension of probabilistic diffusion tractography to the case of multiple fibre orientations. Using automatic relevance determination, we are able to perform online selection of the number of fibre orientations supported by the data at each voxel, simplifying the problem of tracking in a multi-orientation field. We then apply the identical probabilistic algorithm to tractography in the multi- and single-fibre cases in a number of example systems which have previously been tracked successfully or unsuccessfully with single-fibre tractography. We show that multi-fibre tractography offers significant advantages in sensitivity when tracking non-dominant fibre populations, but does not dramatically change tractography results for the dominant pathways."} {"_id": "ecee90e6b4ac297403a8714138ae93913734a5c2", "title": "CAPNet: Continuous Approximation Projection For 3D Point Cloud Reconstruction Using 2D Supervision", "text": "Knowledge of 3D properties of objects is a necessity in order to build effective computer vision systems. 
However, lack of large scale 3D datasets can be a major constraint for data-driven approaches in learning such properties. We consider the task of single image 3D point cloud reconstruction, and aim to utilize multiple foreground masks as our supervisory data to alleviate the need for large scale 3D datasets. A novel differentiable projection module, called \u2018CAPNet\u2019, is introduced to obtain such 2D masks from a predicted 3D point cloud. The key idea is to model the projections as a continuous approximation of the points in the point cloud. To overcome the challenges of sparse projection maps, we propose a loss formulation termed \u2018affinity loss\u2019 to generate outlier-free reconstructions. We significantly outperform the existing projection-based approaches on a large-scale synthetic dataset. We show the utility and generalizability of such a 2D supervised approach through experiments on a real-world dataset, where lack of 3D data can be a serious concern. To further enhance the reconstructions, we also propose a test stage optimization procedure to obtain reconstructions that display high correspondence with the observed input image."} {"_id": "2f16ae3f04933f6c95a6d6ca664d05be47288e60", "title": "A Learning Scheme for Microgrid Islanding and Reconnection", "text": "This paper introduces a robust learning scheme that can dynamically predict the stability of the reconnection of subnetworks to a main grid. As the future electrical power systems tend towards smarter and greener technology, the deployment of self-sufficient networks, or microgrids, becomes more likely. Microgrids may operate on their own or synchronized with the main grid, thus control methods need to take into account islanding and reconnecting said networks. The ability to optimally and safely reconnect a portion of the grid is not well understood and, as of now, limited to raw synchronization between interconnection points. A support vector machine (SVM) leveraging real-time data from phasor measurement units (PMUs) is proposed to predict in real time whether the reconnection of a sub-network to the main grid would lead to stability or instability. A dynamics simulator fed with pre-acquired system parameters is used to create training data for the SVM in various operating states. The classifier was tested on a variety of cases and operating points to ensure diversity. Accuracies of approximately 90% were observed throughout most conditions when making dynamic predictions of a given network. Keywords\u2014Synchrophasor, machine learning, microgrid, islanding, reconnection. I. INTRODUCTION As we make strides towards a smarter power system, it is important to explore new techniques and innovations to fully capture the potential of such a dynamic entity. Many large blackout events, such as the blackout of 2003, could have been prevented with smarter controls and better monitoring [1]. Phasor measurement units, or PMUs, are one such breakthrough that will allow progress to be made in both monitoring and implementing control to the system [2]. PMUs allow for direct measurement of bus voltages and angles at high sample rates which makes dynamic state estimation more feasible [3], [4]. With the use of PMUs, it is possible to improve upon current state estimation [5] and potentially open up new ways to control the grid. The addition of control techniques and dynamic monitoring will be important as we begin to integrate newer solutions, such as microgrids, into the power network. 
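Returning to the CAPNet record above, its core idea of rendering a point cloud into a differentiable 2D mask can be sketched by splatting each projected point with a Gaussian kernel; the image size, bandwidth, and the soft-union formula below are assumptions for illustration, not the authors' exact module.

```python
# Continuous-approximation projection: points -> differentiable soft mask.
import torch

def project_points(points_2d, H=64, W=64, sigma=1.0):
    """points_2d: (N, 2) tensor in pixel coords -> (H, W) soft occupancy map."""
    ys = torch.arange(H, dtype=torch.float32).view(H, 1, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, W, 1)
    dy = ys - points_2d[:, 1].view(1, 1, -1)
    dx = xs - points_2d[:, 0].view(1, 1, -1)
    g = torch.exp(-(dx**2 + dy**2) / (2 * sigma**2))   # (H, W, N) splats
    return 1.0 - torch.prod(1.0 - g, dim=-1)           # soft union over points

pts = torch.rand(128, 2, requires_grad=True) * 63
mask = project_points(pts)
mask.sum().backward()          # gradients flow back to the point positions
```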
With these advanced monitoring devices, microgrids become more feasible due to the potential for real-time monitoring schemes. The integration of microgrids brings many benefits such as the ability to operate while islanded as well as interconnected with the main grid; they provide a smooth integration for renewable energy sources that match local demand. Unfortunately the implementation of microgrids is still challenging due in part to the difficulty of operating subnetworks independently as well as determining when they can be reconnected to the main grid. Works in the literature have focused on the potential of reconnecting microgrids to the main grid, in particular aiming at synchronizing the buses at points of interconnection with respect to their voltages, frequencies, and angles [6]. Effort has been directed at creating control schemes to minimize power flow at the point of common coupling (PCC) using direct machine control, load shedding, as well as energy storage, to aid in smooth reconnection [7], [8]. Upon reconnection of an islanded sub-network to the main grid, instability can cause damage on both ends. It is important to track instabilities on both the microgrid and main grid upon reconnection to accurately depict the outcome of reconnection. Most works focus on the synchronization of two networks before reconnection [9], [10]. In some cases we may need to look at larger microgrids or subnetworks in which multiple PCCs exist. In such scenarios, it becomes much more difficult to implement a control scheme that satisfies good reconnection tolerances with regard to minimizing bus frequency, angle, and voltage differences at each PCC. In addition to the possibility of multiple PCCs, it is possible that direct manipulation of the system becomes limited, compromised, or unsupported with respect to synchronization. In order to address these shortcomings, we implement an algorithm that dynamically tracks and makes predictions based on the system states, providing real-time stability information of potential reconnections. An algorithm that tracks potential reconnection and islanding times needs to be robust to prevent incorrect operation during these scenarios. PMUs make use of GPS synchronization [11] which can create an attack platform for adversaries by changing or shifting the time synchronization. Incorrect or compromised usage could lead to incorrect predictions that would degrade system stability due to hidden failures that remain dormant until triggered by contingencies [12]. We propose an algorithm that can make accurate predictions in the face of potentially compromised measurements. Due to the complexity of the power grid, it is difficult to come up with a verbatim standard depicting the potential stability after reconnection of a subnetwork. With advances in the artificial intelligence community, we can make use of machine learning algorithms in order to explore vast combinations of sensor inputs, states, and control actions. 
This can be done in a"} {"_id": "849ff867cba2c102a593da5f6c4c61c95e09ffe2", "title": "Notification and awareness: synchronizing task-oriented collaborative activity", "text": "People working collaboratively must establish and maintain awareness of one another\u2019s intentions, actions and results. Notification systems typically support awareness of the presence, tasks and actions of collaborators, but they do not adequately support awareness of persistent and complex activities. We analysed awareness breakdowns in use of our Virtual School system\u2014stemming from problems related to the collaborative situation, group, task and tool support\u2014to motivate the concept of activity awareness. Activity awareness builds on prior conceptions of social and action awareness, but emphasizes the importance of activity context factors like planning and coordination. This work suggests design strategies for notification systems to better support collaborative activity."} {"_id": "ba4a037153bff392b1e56a4109de4b04521f17b2", "title": "Design Challenges/Solutions for Environments Supporting the Analysis of Social Media Data in Crisis Informatics Research", "text": "Crisis informatics investigates how society's pervasive access to technology is transforming how it responds to mass emergency events. To study this transformation, researchers require access to large sets of data that because of their volume and heterogeneous nature are difficult to collect and analyze. To address this concern, we have designed and implemented an environment - EPIC Analyze - that supports researchers with the collection and analysis of social media data. Our research has identified the types of components - such as NoSQL, MapReduce, caching, and search - needed to ensure that these services are reliable, scalable, extensible, and efficient. We describe the design challenges encountered - such as data modeling, time vs. space tradeoffs, and the need for a useful and usable system - when building EPIC Analyze and discuss its scalability, performance, and functionality."} {"_id": "2326a12358718ae6cf3127eae6be91e0de7b7363", "title": "Anomaly Detection and Attribution Using Bayesian Networks", "text": "We present a novel approach to anomaly detection in Bayesian networks, enabling both the detection and explanation of anomalous cases in a dataset. By exploiting the structure of a Bayesian network, our algorithm is able to efficiently search for local maxima of data conflict between closely related variables. Benchmark tests using data simulated from complex Bayesian networks show that our approach provides a significant improvement over techniques that search for anomalies using the entire network, rather than its subsets. We conclude with demonstrations of the unique explanatory power of our approach in determining the observation(s) responsible for an anomaly."} {"_id": "850d941ece492fd57c0bffad4b1eaf7b8d241337", "title": "Automatic Detection of Diabetic Retinopathy using Deep Convolutional Neural Network", "text": "The purpose of this project is to design an automated and efficient solution that could detect the symptoms of DR from a retinal image within seconds and simplify the process of reviewing and examining images. Diabetic Retinopathy (DR) is a complication of diabetes that is caused by changes in the blood vessels of the retina and it is one of the leading causes of blindness in the developed world. 
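Stepping back to the microgrid reconnection record above, the PMU-feature SVM it describes can be approximated with a standard pipeline; the synthetic features (e.g., voltage/angle/frequency differences at PCCs) and labels below are stand-ins, not the paper's data or feature set.

```python
# SVM over PMU-derived features predicting stable/unstable reconnection.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))    # assumed PMU features per reconnection case
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=500) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:400], y[:400])        # train on simulated operating states
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```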
Currently, detecting DR symptoms is a manual and time-consuming process. Recently, fully-connected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics. In our approach, we trained a deep Convolutional Neural Network model on a large dataset consisting of around 35,000 images and used dropout layer techniques to achieve higher accuracy."} {"_id": "19751e0f81a103658bbac2506f5d5c8e06a1c06a", "title": "STDP-based spiking deep convolutional neural networks for object recognition", "text": "Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions."} {"_id": "4416236e5ee4239e86e3cf3db6a2d1a2ff2ae720", "title": "Weld : A Common Runtime for High Performance Data Analytics", "text": "Modern analytics applications combine multiple functions from different libraries and frameworks to build increasingly complex workflows. Even though each function may achieve high performance in isolation, the performance of the combined workflow is often an order of magnitude below hardware limits due to extensive data movement across the functions. To address this problem, we propose Weld, a runtime for data-intensive applications that optimizes across disjoint libraries and functions. Weld uses a common intermediate representation to capture the structure of diverse data-parallel workloads, including SQL, machine learning and graph analytics. 
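A minimal sketch of the approach in the diabetic-retinopathy record above: a small convolutional network with a dropout layer classifying fundus images. The architecture, input resolution, and five-grade output are illustrative guesses, not the authors' network.

```python
# Small CNN with dropout for retinal image classification (illustrative).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                      # dropout regularization
    tf.keras.layers.Dense(5, activation="softmax"),    # e.g. 5 DR severity grades
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```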
It then performs key data movement optimizations and generates efficient parallel code for the whole workflow. Weld can be integrated incrementally into existing frameworks like TensorFlow, Apache Spark, NumPy and Pandas without changing their user-facing APIs. We show that Weld can speed up these frameworks, as well as applications that combine them, by up to 30\u00d7."} {"_id": "1ed826370fd34f165744653c6c5e0a56a389dd7c", "title": "Passivity-Based Controller Design of Grid-Connected VSCs for Prevention of Electrical Resonance Instability", "text": "The time delay in the current control loop of a grid-connected voltage-source converter (VSC) may cause destabilization of electrical resonances in the grid or in the VSC's input filter. Instability is prevented if the input admittance of the VSC can be made passive. This paper presents an analytical controller design method for obtaining passivity. The method is equally applicable to single- and three-phase systems, i.e., in the latter case, for both stationary- and synchronous-frame control. Simulations and experiments verify the theoretical results."} {"_id": "47d4838087a7ac2b995f3c5eba02ecdd2c28ba14", "title": "Automatic Recognition of Deceptive Facial Expressions of Emotion", "text": "Humans modify facial expressions in order to mislead observers regarding their true emotional states. Being able to recognize the authenticity of emotional displays is notoriously difficult for human observers. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of genuine and deceptive facial expressions of emotions for automatic recognition. We show that overall the problem of recognizing deceptive facial expressions can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt feature space. Interesting additional results show that on average it is easier to distinguish among genuine expressions than deceptive ones and that certain emotion pairs are more difficult to distinguish than others."} {"_id": "72468505ea6ced938493f4d47beda85e318d4c72", "title": "Coalition Structure Generation: Dynamic Programming Meets Anytime Optimization", "text": "Coalition structure generation involves partitioning a set of agents into exhaustive and disjoint coalitions so as to maximize the social welfare. What makes this such a challenging problem is that the number of possible solutions grows exponentially as the number of agents increases. To date, two main approaches have been developed to solve this problem, each with its own strengths and weaknesses. The state of the art in the first approach is the Improved Dynamic Programming (IDP) algorithm, due to Rahwan and Jennings, that is guaranteed to find an optimal solution in O(3^n), but which cannot generate a solution until it has completed its entire execution. The state of the art in the second approach is an anytime algorithm called IP, due to Rahwan et al., that provides worst-case guarantees on the quality of the best solution found so far, but which is O(n^n). In this paper, we develop a novel algorithm that combines both IDP and IP, resulting in a hybrid performance that exploits the strength of both algorithms and, at the same time, avoids their main weaknesses. 
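The coalition-structure record above notes that the number of possible solutions grows exponentially with the number of agents; concretely, the number of ways to partition n agents into disjoint coalitions is the Bell number B(n), which a quick Bell-triangle computation makes tangible:

```python
# Count coalition structures for n agents via the Bell triangle.
def bell(n):
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]            # each row starts with the previous row's end
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]                 # B(n) is the last entry of row n

print([bell(n) for n in range(1, 8)])  # [1, 2, 5, 15, 52, 203, 877]
print(bell(25))                        # ~4.6e18 structures for 25 agents
```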
Our approach is also significantly faster (e.g. given 25 agents, it takes only 28% of the time required by IP, and 0.3% of the time required by IDP)."} {"_id": "255a80360b407a92621ca70f89803eff4a6d3e32", "title": "Privacy-Preserving User-Auditable Pseudonym Systems", "text": "Personal information is often gathered and processed in a decentralized fashion. Examples include health records and governmental databases. To protect the privacy of individuals, no unique user identifier should be used across the different databases. At the same time, the utility of the distributed information needs to be preserved, which requires that it nevertheless be possible to link different records if they relate to the same user. Recently, Camenisch and Lehmann (CCS 15) have proposed a pseudonym scheme that addresses this problem by domain-specific pseudonyms. Although unlinkable, these pseudonyms can be converted by a central authority (the converter). To protect the users' privacy, conversions are done blindly without the converter learning the pseudonyms or the identity of the user. Unfortunately, their scheme sacrifices a crucial privacy feature: transparency. Users are no longer able to inquire with the converter and audit the flow of their personal data. Indeed, such auditability appears to be diametrically opposed to the goal of blind pseudonym conversion. In this paper we address these seemingly conflicting requirements and provide a system where user-centric audit logs are created by the oblivious converter while maintaining all privacy properties. We prove our protocol to be UC-secure and give an efficient instantiation using novel building blocks."} {"_id": "8501fa541d13634b021a7cab8d1d84ba2c5b9a7c", "title": "Boosting slow oscillations during sleep potentiates memory", "text": "There is compelling evidence that sleep contributes to the long-term consolidation of new memories. This function of sleep has been linked to slow (<1\u2009Hz) potential oscillations, which predominantly arise from the prefrontal neocortex and characterize slow wave sleep. However, oscillations in brain potentials are commonly considered to be mere epiphenomena that reflect synchronized activity arising from neuronal networks, which links the membrane and synaptic processes of these neurons in time. Whether brain potentials and their extracellular equivalent have any physiological meaning per se is unclear, but can easily be investigated by inducing the extracellular oscillating potential fields of interest. Here we show that inducing slow oscillation-like potential fields by transcranial application of oscillating potentials (0.75\u2009Hz) during early nocturnal non-rapid-eye-movement sleep, that is, a period of emerging slow wave sleep, enhances the retention of hippocampus-dependent declarative memories in healthy humans. The slowly oscillating potential stimulation induced an immediate increase in slow wave sleep, endogenous cortical slow oscillations and slow spindle activity in the frontal cortex. Brain stimulation with oscillations at 5 Hz\u2014another frequency band that normally predominates during rapid-eye-movement sleep\u2014decreased slow oscillations and left declarative memory unchanged. 
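The O(3^n) dynamic program underlying IDP in the coalition-structure-generation abstract above can be sketched compactly with a submask recurrence; `value` here is a hypothetical characteristic function over agent bitmasks, not anything from the paper.

```python
# A minimal sketch of optimal coalition structure generation via the
# classic O(3^n) subset recurrence (the basis that IDP improves upon).
def optimal_csg(n, value):
    best = [0.0] * (1 << n)          # best welfare achievable on each subset
    split = [0] * (1 << n)           # best first block (0 = keep subset whole)
    for mask in range(1, 1 << n):
        best[mask] = value(mask)
        sub = (mask - 1) & mask      # enumerate proper non-empty submasks
        while sub:
            cand = best[sub] + best[mask ^ sub]
            if cand > best[mask]:
                best[mask], split[mask] = cand, sub
            sub = (sub - 1) & mask
    # unwind the split table into an explicit partition of {0, ..., n-1}
    parts, stack = [], [(1 << n) - 1]
    while stack:
        mask = stack.pop()
        if split[mask]:
            stack += [split[mask], mask ^ split[mask]]
        else:
            parts.append([i for i in range(n) if mask >> i & 1])
    return best[(1 << n) - 1], parts

# toy run: coalition value = (size)^2 for a 4-agent game
welfare, partition = optimal_csg(4, lambda m: bin(m).count("1") ** 2)
```

Enumerating all submasks of all subsets is exactly what gives the O(3^n) bound quoted in the abstract; the anytime IP algorithm and the hybrid trade this exhaustive sweep for bounded partial searches.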
Our findings indicate that endogenous slow potential oscillations have a causal role in the sleep-associated consolidation of memory, and that this role is enhanced by field effects in cortical extracellular space."} {"_id": "acedd1d4b7ab46a424dd9860557bd6b81c067c1d", "title": "Gestational diabetes mellitus.", "text": "Gestational diabetes mellitus (GDM) is defined as glucose intolerance of various degrees that is first detected during pregnancy. GDM is detected through the screening of pregnant women for clinical risk factors and, among at-risk women, testing for abnormal glucose tolerance that is usually, but not invariably, mild and asymptomatic. GDM appears to result from the same broad spectrum of physiological and genetic abnormalities that characterize diabetes outside of pregnancy. Indeed, women with GDM are at high risk for having or developing diabetes when they are not pregnant. Thus, GDM provides a unique opportunity to study the early pathogenesis of diabetes and to develop interventions to prevent the disease."} {"_id": "65c876f9b8765499a5164c6cdedcb3792aac8eea", "title": "From the Shortest Vector Problem to the Dihedral Hidden Subgroup Problem", "text": "In Quantum Computation and Lattice Problems [11] Oded Regev presented the first known connection between lattices and quantum computation, in the form of a quantum reduction from the poly(n)-unique shortest vector problem to the dihedral hidden subgroup problem by sampling cosets. This article contains a summary of Regev\u2019s result."} {"_id": "dd6330dbabf0e321a5fe2ea238eb0a96f54f2b59", "title": "Ontologies in E-Learning: Review of the Literature", "text": "We have witnessed a great interest in ontologies as an emerging technology within the Semantic Web. Its foundation, laid in the last decade, paved the way for the development of ontologies and systems that use ontologies in various domains, including the E-Learning domain. In this article, we survey key contributions related to the development and use of ontologies in the domain of E-Learning systems. We provide a framework for classification of the literature that is useful for the community of practice in categorizing work in this field and for determining possible research lines. We also discuss future trends in this area."} {"_id": "59fe0f477f81a8671956b8d1363bdc06ae8b08b3", "title": "Vision-Based Gesture Recognition: A Review", "text": "The use of gesture as a natural interface serves as a motivating force for research in modeling, analyzing and recognition of gestures. In particular, human computer intelligent interaction needs vision-based gesture recognition, which involves many interdisciplinary studies. A survey on recent vision-based gesture recognition approaches is given in this paper. We shall review methods of static hand posture and temporal gesture recognition. Several application systems of gesture recognition are also described in this paper. We conclude with some thoughts about future research directions."} {"_id": "87431da2f57fa6471712e9e48cf3b724af723d94", "title": "Methods for Designing Multiple Classifier Systems", "text": "In the field of pattern recognition, multiple classifier systems based on the combination of outputs of a set of different classifiers have been proposed as a method for the development of high performance classification systems. In this paper, the problem of designing multiple classifier systems is discussed. 
Six design methods based on the so-called \u201coverproduce and choose\u201d paradigm are described and compared by experiments. Although these design methods exhibited some interesting features, they do not guarantee the design of the optimal multiple classifier system for the classification task at hand. Accordingly, the main conclusion of this paper is that the problem of optimal MCS design still remains open."} {"_id": "d1ee87290fa827f1217b8fa2bccb3485da1a300e", "title": "Bagging predictors", "text": "Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy."} {"_id": "e16b5afbe2acc57530359c065227dbb634b2109b", "title": "A Real-Time Continuous Gesture Recognition System for Sign Language", "text": "In this paper, a large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a DataGlove. The most critical problem, end-point detection in a stream of gesture input, is first solved, and then statistical analysis is done according to 4 parameters in a gesture: posture, position, orientation, and motion. We have implemented a prototype system with a lexicon of 250 vocabularies in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time and the average recognition rate is 80.4%."} {"_id": "ec25da04ef7f09396ca00da3f9b5f2d9670cb6fc", "title": "Methods of combining multiple classifiers and their applications to handwriting recognition", "text": "The method of combining the classification powers of several classifiers is regarded as a general problem in various application areas of pattern recognition, and a systematic investigation has been made. Possible solutions to the problem can be divided into three categories according to the levels of information available from the various classifiers. Four approaches are proposed based on different methodologies for solving this problem. One is suitable for combining individual classifiers such as Bayesian, k-NN and various distance classifiers. The other three could be used for combining any kind of individual classifiers. On applying these methods to combine several classifiers for recognizing totally unconstrained handwritten numerals, the experimental results show that the performance of individual classifiers could be improved significantly. For example, on the U.S. zipcode database, the result of 98.9% recognition with 0.90% substitution and 0.2% rejection can be obtained, as well as a high reliability with 95% recognition, 0% substitution and 5% rejection. 
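The bagging abstract above fully specifies the procedure (bootstrap replicates of the learning set, then a plurality vote), so a short sketch is easy to give; it assumes scikit-learn's DecisionTreeClassifier as the unstable base learner and integer class labels, both illustrative choices.

```python
# A minimal sketch of bagging for classification: train one tree per
# bootstrap replicate, then aggregate by plurality vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit_predict(X, y, X_test, n_versions=25, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_versions):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap replicate
        tree = DecisionTreeClassifier().fit(X[idx], y[idx])
        votes.append(tree.predict(X_test))
    votes = np.stack(votes).astype(int)              # (n_versions, n_test)
    # plurality vote across the predictor versions, one column per test point
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```

For regression, the vote would simply be replaced by a mean over the versions; the gain hinges on the base learner being unstable, exactly as the abstract states.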
These results compared favorably to those of other research groups in Europe, Asia, and North America."} {"_id": "bc0a189bfccc7634789f2c3ccd5beedac497b039", "title": "A single-stage three-phase high power factor rectifier with high-frequency isolation and regulated DC-bus based on the DCM SEPIC converter", "text": "In this paper, the main concepts related to the development of a single-stage three-phase high power factor rectifier, with high-frequency isolation and a regulated DC bus, are described. The operation of the structure is presented, based on the DC-DC SEPIC converter operating in the discontinuous conduction mode. This operational mode gives the rectifier a high power factor, with sinusoidal input current, without the use of any current sensors or current loop control. A design example and simulation results are presented in order to validate the theoretical analysis."} {"_id": "649197627a94fc003384fb743cfd78cdf12b3306", "title": "SYSTEM DYNAMICS: SYSTEMIC FEEDBACK MODELING FOR POLICY ANALYSIS", "text": ""} {"_id": "4ce68170f85560942ee51465e593b16560f9c580", "title": "Practical Matrix Completion and Corruption Recovery Using Proximal Alternating Robust Subspace Minimization", "text": "Low-rank matrix completion is a problem of immense practical importance. Recent works on the subject often use nuclear norm as a convex surrogate of the rank function. Despite its solid theoretical foundation, the convex version of the problem often fails to work satisfactorily in real-life applications. Real data often suffer from very few observations, with support not meeting the randomness requirements, ubiquitous presence of noise and potentially gross corruptions, sometimes with these simultaneously occurring. This paper proposes a Proximal Alternating Robust Subspace Minimization method to tackle the three problems. The proximal alternating scheme explicitly exploits the rank constraint on the completed matrix and uses the $$\\ell_0$$ pseudo-norm directly in the corruption recovery step. We show that the proposed method for the non-convex and non-smooth model converges to a stationary point. Although it is not guaranteed to find the global optimal solution, in practice we find that our algorithm can typically arrive at a good local minimizer when it is supplied with a reasonably good starting point based on convex optimization. Extensive experiments with challenging synthetic and real data demonstrate that our algorithm succeeds in a much larger range of practical problems where convex optimization fails, and it also outperforms various state-of-the-art algorithms."} {"_id": "b8f65d04152a8ecb0e632635e41c5383883f2754", "title": "Self-calibration and visual SLAM with a multi-camera system on a micro aerial vehicle", "text": "The use of a multi-camera system enables a robot to obtain a surround view, and thus, maximize its perceptual awareness of its environment. An accurate calibration is a necessary prerequisite if vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system. On our MAV, we set up each camera pair in a stereo configuration. We propose a novel vSLAM-based self-calibration method for a multi-sensor system that includes multiple calibrated stereo cameras and an inertial measurement unit (IMU). Our self-calibration estimates the transform with metric scale between each camera and the IMU. 
Once the MAV is calibrated, the MAV is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses inertial information to recover the relative motion of the MAV with metric scale. Our constant-time vSLAM implementation with loop closures runs on-board the MAV in real-time. To the best of our knowledge, no published work has demonstrated real-time on-board vSLAM with loop closures. We show experimental results in both indoor and outdoor environments. The code for both the self-calibration and vSLAM is available as a set of ROS packages at https://github.com/hengli/vmav-ros-pkg."} {"_id": "8e645951b8135d15e3229a8813a1782f9fbd18c7", "title": "CAB: Connectionist Analogy Builder", "text": "The ability to make informative comparisons is central to human cognition. Comparison involves aligning two representations and placing their elements into correspondence. Detecting correspondences is a necessary component of analogical inference, recognition, categorization, schema formation, and similarity judgment. Connectionist Analogy Builder (CAB) determines correspondences through a simple iterative computation that matches elements in one representation with elements playing compatible roles in the other representation while simultaneously enforcing structural constraints. CAB shows promise as a process model of comparison as its performance can be related to human performance (e.g., solution trajectory, error patterns, time-on-task). Furthermore, CAB\u2019s bounded working memory allows it to account for the inherent capacity limitations of human processing. CAB\u2019s strengths are its parsimony, transparency of operations, and ability to generate performance predictions. In this paper, CAB is evaluated against benchmark phenomena from the analogy literature. \u00a9 2003 Cognitive Science Society, Inc. All rights reserved."} {"_id": "1a1df35585975e2ee551a88a615923a99f1b44b5", "title": "Co-fusion: Real-time segmentation, tracking and fusion of multiple objects", "text": "In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently from the background and still be effectively tracked and its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level which has the potential to allow interactions with its working environment; even in the case of dynamic scenes."} {"_id": "d468d43eab97ab090aa229108c7b75cb12dd0b18", "title": "Reverse Curriculum Generation for Reinforcement Learning", "text": "Many relevant tasks require an agent to reach a certain state, or to manipulate objects into a desired configuration. For example, we might want a robot to align and assemble a gear onto an axle or insert and turn a key in a lock. 
These tasks present considerable difficulties for reinforcement learning approaches, since the natural reward function for such goal-oriented tasks is sparse and prohibitive amounts of exploration are required to reach the goal and receive a learning signal. Past approaches tackle these problems by manually designing a task-specific reward shaping function to help guide the learning. Instead, we propose a method to learn these tasks without requiring any prior task knowledge other than obtaining a single state in which the task is achieved. The robot is trained in \u201creverse\u201d, gradually learning to reach the goal from a set of starting positions increasingly far from the goal. Our method automatically generates a curriculum of starting positions that adapts to the agent\u2019s performance, leading to efficient training on such tasks. We demonstrate our approach on difficult simulated fine-grained manipulation problems, not solvable by state-of-the-art reinforcement learning methods."} {"_id": "5277f5662973e5251c70c8a04d7e995496ca9f4d", "title": "Error Mechanisms of the Oscillometric Fixed-Ratio Blood Pressure Measurement Method", "text": "The oscillometric fixed-ratio method is widely employed for non-invasive measurement of systolic and diastolic pressures (SP and DP) but is heuristic and prone to error. We investigated the accuracy of this method using an established mathematical model of oscillometry. First, to determine which factors materially affect the errors of the method, we applied a thorough parametric sensitivity analysis to the model. Then, to assess the impact of the significant parameters, we examined the errors over a physiologically relevant range of those parameters. The main findings of this model-based error analysis of the fixed-ratio method are that: (1) SP and DP errors drastically increase as the brachial artery stiffens over the zero trans-mural pressure regime; (2) SP and DP become overestimated and underestimated, respectively, as pulse pressure (PP) declines; (3) the impact of PP on SP and DP errors is more obvious as the brachial artery stiffens over the zero trans-mural pressure regime; and (4) SP and DP errors can be as large as 58\u00a0mmHg. Our final and main contribution is a comprehensive explanation of the mechanisms for these errors. This study may have important implications when using the fixed-ratio method, particularly in subjects with arterial disease."} {"_id": "1273db39180f894a8b04e517669896f3b5975f14", "title": "2 Conditional Saliency 2.1 Lossy Coding", "text": "By the guidance of attention, the human visual system is able to locate objects of interest in complex scenes. In this paper, we propose a novel visual saliency detection method, the conditional saliency, for both image and video. Inspired by biological vision, the definition of visual saliency follows a strictly local approach. Given the surrounding area, the saliency is defined as the minimum uncertainty of the local region, namely the minimum conditional entropy, when the perceptional distortion is considered. To simplify the problem, we approximate the conditional entropy by the lossy coding length of multivariate Gaussian data. The final saliency map is accumulated by pixels and further segmented to detect the proto-objects. Experiments are conducted on both image and video. 
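As background for the fixed-ratio error analysis above, here is a minimal sketch of the method itself: SP and DP are read off where the oscillation-amplitude envelope crosses fixed fractions of its maximum. The 0.55/0.85 ratios and the bell-shaped toy envelope are common illustrative choices, not values from the paper.

```python
# Minimal sketch of the oscillometric fixed-ratio method.
import numpy as np

def fixed_ratio_bp(cuff_pressure, osc_amplitude, r_sys=0.55, r_dia=0.85):
    i_map = int(np.argmax(osc_amplitude))        # mean arterial pressure index
    a_max = osc_amplitude[i_map]
    # systolic: first crossing of r_sys * a_max on the high-pressure side
    hi = np.where(osc_amplitude[:i_map] >= r_sys * a_max)[0]
    # diastolic: last crossing of r_dia * a_max on the low-pressure side
    lo = np.where(osc_amplitude[i_map:] >= r_dia * a_max)[0] + i_map
    return cuff_pressure[hi[0]], cuff_pressure[lo[-1]]

# toy deflation data: cuff pressure falls from 180 to 40 mmHg
p = np.linspace(180, 40, 200)
env = np.exp(-((p - 95) / 25) ** 2)              # bell-shaped envelope
sp, dp = fixed_ratio_bp(p, env)                  # SP above, DP below the peak
```

Because the read-off points depend entirely on the envelope shape, anything that distorts it (arterial stiffening, low pulse pressure) shifts the crossings, which is precisely the error mechanism the abstract quantifies.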
The results indicate that the detected saliency is robust, reliable, and feature-invariant."} {"_id": "08bb1bc1cc40f3a0dfe264c80105fcd82e5a70d1", "title": "New reverse-conducting IGBT (1200V) with revolutionary compact package", "text": "Fuji Electric developed a 1200V class RC-IGBT based on our latest thin wafer process. The performance of this RC-IGBT shows the same relationship between conduction loss and switching loss as our 6th generation conventional IGBT and FWD. In addition, its trade-off can be optimized for hard switching by lifetime killers. Calculations of the hard switching inverter loss and chip junction temperature (Tj) show that the optimized RC-IGBT can handle 35% larger current density per chip area. In order to utilize the high performance characteristics of the RC-IGBT, we assembled the chips in our newly developed compact package. This module can handle 58% higher current than conventional 100A modules at a 51% smaller footprint."} {"_id": "1ca43c217aceabea4cff14bff1d81df2debe058f", "title": "Saliency Unified: A Deep Architecture for simultaneous Eye Fixation Prediction and Salient Object Segmentation", "text": "Human eye fixations often correlate with locations of salient objects in the scene. However, only a handful of approaches have attempted to simultaneously address the related aspects of eye fixations and object saliency. In this work, we propose a deep convolutional neural network (CNN) capable of predicting eye fixations and segmenting salient objects in a unified framework. We design the initial network layers, shared between both the tasks, such that they capture the object level semantics and the global contextual aspects of saliency, while the deeper layers of the network address task specific aspects. In addition, our network captures saliency at multiple scales via inception-style convolution blocks. Our network shows a significant improvement over the current state-of-the-art for both eye fixation prediction and salient object segmentation across a number of challenging datasets."} {"_id": "452aec9ce1f26098d17fa5e80134912f9ff043e4", "title": "Center-Fed Patch Antenna Array Excited by an Inset Dielectric Waveguide for 60-GHz Applications", "text": "A patch antenna array proximity-coupled fed by the inset dielectric waveguide (IDW) is proposed in the 60-GHz frequency band. The antenna array consists of four series-fed subarrays each having eight pairs of circular patch elements, four substrate integrated IDWs, and a compact four-way power divider transmitting the signal from the grounded coplanar waveguide (GCPW) to the IDWs. The IDW is realized by metallized via-hole arrays in a printed circuit board (PCB). The parallel center-fed configuration is applied to suppress the frequency-dependent beam squinting of the series-fed subarray. Measured results show that the proposed antenna exhibits a gain variation between 15.8 and 19.2 dBi, a fixed main beam in the direction of \u03b8 = 0\u00b0, and a reflection coefficient |S11| \u2264 -10 dB over the frequency band from 58 to 64 GHz."} {"_id": "e2a533a7966b9c9d338149bac78bc3d7c1a3b420", "title": "A systematic review and meta-analysis of dressings used for wound healing: the efficiency of honey compared to silver on burns.", "text": "BACKGROUND\nHoney has the antibacterial effect of silver without the toxic effect of silver on the skin. 
Even so, silver is the dominant antibacterial dressing used in wound healing.\n\n\nOBJECTIVES\nTo evaluate the healing effects of honey dressings compared to silver dressings for acute or chronic wounds.\n\n\nDESIGN\nA systematic review with meta-analysis.\n\n\nMETHOD\nThe search, conducted in seven databases, resulted in six randomised controlled trial studies from South Asia focusing on antibacterial properties and healing times of honey and silver.\n\n\nRESULT\nHoney was more efficacious for wound healing than silver, as measured in the number of days needed for wounds to heal (pooled risk difference -0.20, 95% CI -0.29 to -0.11, p < .001). Honey turned out to have more antibacterial qualities than silver.\n\n\nCONCLUSION\nAll the included studies based on burns showed the unequivocal result that honey had an even more positive effect than silver on wound healing."} {"_id": "39ec37905f9b2321fbb2173eb6452410b010e771", "title": "A novel, compact, low-cost, impulse ground-penetrating radar for nondestructive evaluation of pavements", "text": "This paper reports on the development of a novel, compact, low-cost, impulse ground-penetrating radar (GPR) and demonstrates its use for nondestructive evaluation of pavement structures. This GPR consists of an ultrashort-monocycle-pulse transmitter (330 ps), an ultrawide-band (UWB) sampling receiver (0-6 GHz), and two UWB antennas (0.2-20 GHz), completely designed using microwave-integrated circuits with seamless electrical connections between them. An approximate analysis is used to determine the signal loss and power budget. Performance of this GPR has been verified through the measurements of relative permittivity and thicknesses of various samples, and a good agreement between the experimental and theoretical results has been achieved."} {"_id": "65e9f4d7a80ea39a265b9d60d57397011395efcc", "title": "Tracing Data Lineage Using Schema Transformation Pathways", "text": "With the increasing amount and diversity of information available on the Internet, there has been a huge growth in information systems that need to integrate data from distributed, heterogeneous data sources. Tracing the lineage of the integrated data is one of the current problems being addressed in data warehouse research. In this paper, we propose a new approach for tracing data lineage based on schema transformation pathways. We show how the individual transformation steps in a transformation pathway can be used to trace the derivation of the integrated data in a step-wise fashion, thus simplifying the lineage tracing process."} {"_id": "715592bfe309cc85c9df6fb314270cbdfb67d543", "title": "Wireless sensor networks: a survey on recent developments and potential synergies", "text": "Wireless sensor network (WSN) has emerged as one of the most promising technologies for the future. This has been enabled by advances in technology and availability of small, inexpensive, and smart sensors resulting in cost effective and easily deployable WSNs. However, researchers must address a variety of challenges to facilitate the widespread deployment of WSN technology in real-world domains. In this survey, we give an overview of wireless sensor networks and their application domains including the challenges that should be addressed in order to push the technology further. Then we review the recent technologies and testbeds for WSNs. Finally, we identify several open research issues that need to be investigated in the future. 
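The pooled risk difference reported in the honey-versus-silver review above comes from standard inverse-variance pooling across trials; a minimal fixed-effect sketch, using hypothetical trial counts rather than the studies' data, looks like this.

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling of risk differences.
import numpy as np

def pooled_risk_difference(trials):
    rds, weights = [], []
    for e1, n1, e2, n2 in trials:         # events/size: arm 1, arm 2
        p1, p2 = e1 / n1, e2 / n2
        rd = p1 - p2                      # risk difference for one trial
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        rds.append(rd)
        weights.append(1.0 / var)         # inverse-variance weight
    rds, weights = np.array(rds), np.array(weights)
    pooled = float((weights * rds).sum() / weights.sum())
    se = float(np.sqrt(1.0 / weights.sum()))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical trials: (events_honey, n_honey, events_silver, n_silver)
rd, ci = pooled_risk_difference([(12, 50, 22, 50), (8, 40, 15, 40), (10, 60, 21, 60)])
```

A negative pooled value with a confidence interval excluding zero, as in the abstract, indicates the honey arm reached healing in fewer cases lagging behind, i.e., a consistent advantage across the included trials.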
Our survey is different from existing surveys in that we focus on recent developments in wireless sensor network technologies. We review the leading research projects, standards and technologies, and platforms. Moreover, we highlight a recent phenomenon in WSN research, namely the exploration of synergies between sensor networks and other technologies, and explain how this can help sensor networks achieve their full potential. This paper intends to help new researchers entering the domain of WSNs by providing a comprehensive survey on recent developments."} {"_id": "12fa61561442c4524f8f99146c29ca131e749293", "title": "Control of systems integrating logic, dynamics, and constraints", "text": "This paper proposes a framework for modeling and controlling systems described by interdependent physical laws, logic rules, and operating constraints, denoted as Mixed Logical Dynamical (MLD) systems. These are described by linear dynamic equations subject to linear inequalities involving real and integer variables. MLD systems include constrained linear systems, finite state machines, some classes of discrete event systems, and nonlinear systems which can be approximated by piecewise linear functions. A predictive control scheme is proposed which is able to stabilize MLD systems on desired reference trajectories while fulfilling operating constraints, and possibly take into account previous qualitative knowledge in the form of heuristic rules. Due to the presence of integer variables, the resulting on-line optimization procedures are solved through Mixed Integer Quadratic Programming (MIQP), for which efficient solvers have been recently developed. Some examples and a simulation case study on a complex gas supply system are reported."} {"_id": "7dc9c3eecd1aa717b9eab494d8671bdad04bf171", "title": "Fabric-based actuator modules for building soft pneumatic structures with high payload-to-weight ratio", "text": "This paper introduces a new concept of building soft pneumatic structures by assembling modular units of fabric-based rotary actuators (FRAs) and beams. Upon pressurization, the inner folds of the FRA would expand, causing the FRA module to unfold and generate angular displacement. Hence, FRAs enable the mobility of the structure and define its range of motion. The modular nature of the actuator units enables customized configuration of pneumatic structures, which can be modified and scaled by selecting the appropriate modules. FRAs are also designed to be bladder-less, that is, they are made without an additional layer of inner bladder. Thus, a simple fabrication process can be used to prepare the actuators. In this paper, we studied how the performance of the FRA modules changes with their dimensions and demonstrated how a soft gripper can be constructed using these modules. The kinematic response of the actuator segments of the gripper was analyzed and a pressure control algorithm was developed to regulate the pneumatic pressure of the actuator. The modular-based soft robotic gripper alone weighs about 140g. Yet, based on the grip tests, it is able to lift heavier objects (up to 2.4kg), achieving a high payload-to-weight ratio of about 1714%, which is higher than the value reported by previously developed soft pneumatic grippers using elastomeric materials. 
Lastly, we also demonstrated that the gripper is capable of performing two essential grasping modes, which are power grasping and fingertip grasping, for objects of various shapes."} {"_id": "83e5bd4d8b4cc509585969b5a895f44be4483ca6", "title": "A critical evaluation of the complex PTSD literature: implications for DSM-5.", "text": "Complex posttraumatic stress disorder (CPTSD) has been proposed as a diagnosis for capturing the diverse clusters of symptoms observed in survivors of prolonged trauma that are outside the current definition of PTSD. Introducing a new diagnosis requires a high standard of evidence, including a clear definition of the disorder, reliable and valid assessment measures, support for convergent and discriminant validity, and incremental validity with respect to implications for treatment planning and outcome. In this article, the extant literature on CPTSD is reviewed within the framework of construct validity to evaluate the proposed diagnosis on these criteria. Although the efforts in support of CPTSD have brought much needed attention to limitations in the trauma literature, we conclude that available evidence does not support a new diagnostic category at this time. Some directions for future research are suggested."} {"_id": "ca9a5c4887396637f53123fa0176f475703bfbca", "title": "Agent based urban growth modeling framework on Apache Spark", "text": "The simulation of urban growth is an important part of urban planning and development. Due to large data and computational challenges, urban growth simulation models demand efficient data analytic frameworks for scaling them to large geographic regions. Agent-based models are widely used to observe and analyze the urban growth simulation at various scales. The incorporation of the agent-based model makes the scaling task even harder due to communication and coordination among agents. Many existing agent-based model frameworks were implemented using traditional shared and distributed memory programming models. On the other hand, Apache Spark is becoming a popular platform for distributed big data in-memory analytics. This paper presents an implementation of agent-based sub-model in Apache Spark framework. With the in-memory computation, Spark implementation outperforms the traditional distributed memory implementation using MPI. This paper provides (i) an overview of our framework capable of running urban growth simulations at a fine resolution of 30 meter grid cells, (ii) a scalable approach using Apache Spark to implement an agent-based model for simulating human decisions, and (iii) the comparative analysis of performance of Apache Spark and MPI based implementations."} {"_id": "0b440695c822a8e35184fb2f60dcdaa8a6de84ae", "title": "KinectFaceDB: A Kinect Database for Face Recognition", "text": "The recent success of emerging RGB-D cameras such as the Kinect sensor depicts a broad prospect of 3-D data-based computer applications. However, due to the lack of a standard testing database, it is difficult to evaluate how the face recognition technology can benefit from this up-to-date imaging sensor. In order to establish the connection between the Kinect and face recognition research, in this paper, we present the first publicly available face database (i.e., KinectFaceDB) based on the Kinect sensor. The database consists of different data modalities (well-aligned and processed 2-D, 2.5-D, 3-D, and video-based face data) and multiple facial variations. 
We conducted benchmark evaluations on the proposed database using standard face recognition methods, and demonstrated the gain in performance when integrating the depth data with the RGB data via score-level fusion. We also compared the 3-D images of Kinect (from the KinectFaceDB) with the traditional high-quality 3-D scans (from the FRGC database) in the context of face biometrics, which reveals the imperative need of the proposed database for face recognition research."} {"_id": "8ef7b152c35434eba0c5f1cf03051a65426e3463", "title": "Electromagnetic energy harvesting from train induced railway track vibrations", "text": "An electromagnetic energy harvester is designed to harness the vibrational power from railroad track deflections due to passing trains. Whereas typical existing vibration energy harvester technologies are built for low power applications in the milliwatt range, the proposed harvester will be designed for higher power applications for major track-side equipment such as warning signals, switches, and health monitoring sensors, which typically require a power supply of 10 Watts or more. To achieve this goal, we implement a new patent-pending motion conversion mechanism which converts irregular pulse-like bidirectional linear vibration into regulated unidirectional rotational motion. Features of the motion mechanism include bidirectional to unidirectional conversion and flywheel speed regulation, with advantages of improved reliability, efficiency, and quality of output power. It also allows production of DC power directly from bidirectional vibration without electronic diodes. Preliminary harvester prototype testing results illustrate the features and benefits of the proposed motion mechanism, showing reduction of continual system loading, regulation of generator speed, and capability for continuous DC power generation."} {"_id": "b90b53780ef8defacd0698d240c503db71329701", "title": "Sensorized pneumatic muscle for force and stiffness control", "text": "This paper presents the design and experimental validation of a soft pneumatic artificial muscle with position and force sensing capabilities. Conductive liquid-based soft sensors are embedded in a fiber-reinforced contractile actuator to measure two modes of deformation \u2014 axial strain and diametral expansion \u2014 which, together, are used to determine the stroke length and contractile force generated under internal pressure. We validate the proposed device by using data from the embedded sensors to estimate the force output of the actuator at fixed lengths and the stiffness and force output of a one degree-of-freedom hinge joint driven by an antagonist pair of the sensorized pneumatic muscles."} {"_id": "e5d8f00a413a1e6d111c52f3d984c6761151f364", "title": "Spelling Error Trends and Patterns in Sindhi", "text": "The statistical error correction technique is the most accurate and widely used approach today, but for a low-resourced language like Sindhi, trained corpora are not available, so statistical techniques are not possible at all. Instead, a useful alternative would be to exploit various spelling error trends in Sindhi by using a rule-based approach. For designing such a technique, an essential prerequisite would be to study the various error patterns in a language. This paper presents various studies of spelling error trends and their types in the Sindhi language. 
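The score-level fusion step in the KinectFaceDB evaluation above can be sketched as a weighted sum of min-max-normalized matcher scores; the weight and the toy scores below are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch of score-level fusion of an RGB matcher and a depth matcher.
import numpy as np

def fuse_scores(rgb_scores, depth_scores, w_rgb=0.6):
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0,1]
    return w_rgb * minmax(rgb_scores) + (1 - w_rgb) * minmax(depth_scores)

# toy gallery of 4 identities: the fused score decides the best match
fused = fuse_scores([0.2, 0.7, 0.4, 0.1], [0.3, 0.5, 0.6, 0.2])
best_match = int(np.argmax(fused))
```

Normalizing before combining keeps one modality from dominating merely because its raw score range is wider, which is the usual reason score-level fusion of depth and RGB improves over either alone.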
The research shows that the error trends common to all languages are also encountered in Sindhi, but there do exist some error patterns that are specific to the Sindhi language."} {"_id": "fc47024ca5d2bdc1737d0d1c41525193fbf8fc32", "title": "Induction motor fault diagnosis using labview", "text": "Nowadays, manufacturers of electrical machines are in need of easy and early fault detection techniques, as these are demanded by their customers. Though many researchers have developed fault diagnosis techniques, user-friendly software that automates the long measurement process is needed so that non-technical persons can also operate it to identify the faults that have occurred. In our work the healthy induction motor is modeled in the stationary reference frame. Faulty conditions are included in the model to analyze the effect of faults in the motor using the user-friendly software LabVIEW."} {"_id": "a85275f12472ecfbf4f4f00a61514b0773923b86", "title": "Applications, challenges, and prospective in emerging body area networking technologies", "text": "Advances in wireless technology and supporting infrastructure provide unprecedented opportunity for ubiquitous real-time healthcare and fitness monitoring without constraining the activities of the user. Wirelessly connected miniaturized sensors and actuators placed in, on, and around the body form a body area network for continuous, automated, and unobtrusive monitoring of physiological signs to support medical, lifestyle and entertainment applications. BAN technology is in the early stage of development, and several research challenges have to be overcome for it to be widely accepted. In this article we study the core set of application, functional, and technical requirements of the BAN. We also discuss fundamental research challenges such as scalability (in terms of data rate, power consumption, and duty cycle), antenna design, interference mitigation, coexistence, QoS, reliability, security, privacy, and energy efficiency. Several candidate technologies poised to address the emerging BAN market are evaluated, and their merits and demerits are highlighted. A brief overview of standardization activities relevant to BANs is also presented."} {"_id": "aa531e41c4c646285b522cf6f33f82a9d68d5062", "title": "Addressing Security and Privacy Risks in Mobile Applications", "text": "Applications for mobile platforms are being developed at a tremendous rate, but often without proper security implementation. Insecure mobile applications can cause serious information security and data privacy issues and can have severe repercussions on users and organizations alike."} {"_id": "f4abebef4e39791f358618294cd8d040d7024399", "title": "Security Analysis of Wearable Fitness Devices ( Fitbit )", "text": "This report describes an analysis of the Fitbit Flex ecosystem. Our objectives are to describe (1) the data Fitbit collects from its users, (2) the data Fitbit provides to its users, and (3) methods of recovering data not made available to device owners. Our analysis covers four distinct attack vectors. First, we analyze the security and privacy properties of the Fitbit device itself. Next, we observe the Bluetooth traffic sent between the Fitbit device and a smartphone or personal computer during synchronization. Third, we analyze the security of the Fitbit Android app. Finally, we study the security properties of the network traffic between the Fitbit smartphone or computer application and the Fitbit web service. 
We provide evidence that Fitbit unnecessarily obtains information about nearby Flex devices under certain circumstances. We further show that Fitbit does not provide device owners with all of the data collected. In fact, we find evidence of per-minute activity data that is sent to the Fitbit web service but not provided to the owner. We also discovered that MAC addresses on Fitbit devices are never changed, enabling user-correlation attacks. BTLE credentials are also exposed on the network during device pairing over TLS, and might be intercepted by MITM attacks. Finally, we demonstrate that actual user activity data is authenticated and not provided in plaintext on an end-to-end basis from the device to the Fitbit web service."} {"_id": "0e74f8e8763bd7a9c9507badaee390d449b1f8ca", "title": "Versatile low power media access for wireless sensor networks", "text": "We propose B-MAC, a carrier sense media access protocol for wireless sensor networks that provides a flexible interface to obtain ultra low power operation, effective collision avoidance, and high channel utilization. To achieve low power operation, B-MAC employs an adaptive preamble sampling scheme to reduce duty cycle and minimize idle listening. B-MAC supports on-the-fly reconfiguration and provides bidirectional interfaces for system services to optimize performance, whether it be for throughput, latency, or power conservation. We build an analytical model of a class of sensor network applications. We use the model to show the effect of changing B-MAC's parameters and predict the behavior of sensor network applications. By comparing B-MAC to conventional 802.11-inspired protocols, specifically SMAC, we develop an experimental characterization of B-MAC over a wide range of network conditions. We show that B-MAC's flexibility results in better packet delivery rates, throughput, latency, and energy consumption than S-MAC. By deploying a real world monitoring application with multihop networking, we validate our protocol design and model. Our results illustrate the need for flexible protocols to effectively realize energy efficient sensor network applications."} {"_id": "4241cf81fce2e4a4cb486f8221c8c33bbaabc426", "title": "Security and Privacy Issues in Wireless Sensor Networks for Healthcare Applications", "text": "The use of wireless sensor networks (WSN) in healthcare applications is growing at a fast pace. Numerous applications such as heart rate monitors, blood pressure monitors and endoscopic capsules are already in use. To address the growing use of sensor technology in this area, a new field known as wireless body area networks (WBAN or simply BAN) has emerged. As most devices and their applications are wireless in nature, security and privacy are among the major areas of concern. The direct involvement of humans further increases the sensitivity. Whether the data gathered from patients or individuals are obtained with the consent of the person or without it due to the needs of the system, misuse or privacy concerns may restrict people from taking advantage of the full benefits of the system. People may not consider these devices safe for daily use. There is also the possibility of serious social unrest due to the fear that such devices may be used for monitoring and tracking individuals by government agencies or other private organizations. 
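The duty-cycle tradeoff at the heart of B-MAC's preamble sampling, described in the B-MAC abstract above, can be captured in a few lines: receivers wake once per check interval to sample the channel, so senders must prepend a preamble at least that long. All current draws and timings below are assumed round numbers for illustration, not figures from the paper.

```python
# Minimal sketch of a low-power-listening energy model in the spirit of
# B-MAC's preamble sampling (all constants are assumptions).
def lpl_power(check_interval_s, msg_rate_hz,
              i_sample_a=0.005, t_sample_s=0.0025,
              i_tx_a=0.012, i_sleep_a=0.00002, v=3.0):
    # idle listening: one short channel sample per check interval
    p_listen = v * i_sample_a * (t_sample_s / check_interval_s)
    # transmission: the preamble must span a full check interval per message
    p_tx = v * i_tx_a * check_interval_s * msg_rate_hz
    p_sleep = v * i_sleep_a
    return p_listen + p_tx + p_sleep            # average power in watts

# longer check intervals cut idle listening but stretch every preamble,
# so average power is minimized at an interior operating point
for t in (0.01, 0.1, 0.5, 1.0):
    print(t, lpl_power(t, msg_rate_hz=0.01))
```

This is exactly the kind of parameter study the abstract's analytical model supports: the best check interval depends on the application's traffic rate, which is why B-MAC exposes it for on-the-fly reconfiguration.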
In this paper we discuss these issues and analyze in detail the problems and their possible countermeasures."} {"_id": "22e1903702dd328e10cb9be1d273bb5408ca638d", "title": "New Beam Tracking Technique for Millimeter Wave-band Communications", "text": "In this paper, we propose an efficient beam tracking method for the mobility scenario in mmWave-band communications. When the position of the mobile changes in a mobility scenario, the base-station needs to perform beam training frequently to track the time-varying channel, thereby spending significant resources for training beams. In order to reduce the training overhead, we propose a new beam training approach called \u201cbeam tracking\u201d which exploits the continuous nature of the time-varying angle of departure (AoD) for beam selection. We show that transmission of only two training beams is enough to track the time-varying AoD with good accuracy. We derive the optimal selection of the beam pair which minimizes the Cramer-Rao Lower Bound (CRLB) for AoD estimation averaged over the statistical distribution of the AoD. Our numerical results demonstrate that the proposed beam tracking scheme produces better AoD estimation than the conventional beam training protocol with less training overhead. I. INTRODUCTION The next generation wireless communication systems aim to achieve Gigabit/s throughput to support high speed multimedia data service [1], [2]. Since there exists an ample amount of unutilized frequency spectrum in the millimeter Wave (mmWave) band (30 GHz-300 GHz), wireless communication over the mmWave band is considered as a promising solution to achieve a significant leap in spectral efficiency [3]. However, one major limitation of mmWave communications is significant free space path loss, which causes large attenuation of signal power at the receiver. Furthermore, the overall path loss gets worse when the signal goes through obstacles, rain, foliage, and any blockage to mobile devices. Recently, active research on mmWave communication has been conducted in order to overcome these limitations [1]\u2013[5]. In the mmWave band, many antenna elements can be integrated in a small form factor and hence, we can employ highly directional beamforming using a large number of antennas to compensate for the high path loss. In order to perform highly directional beamforming, it is necessary to estimate channels for all transmitter and receiver antenna pairs. While this step requires high computational complexity due to the large number of antennas, channel estimation can be performed efficiently by using the angular domain representation of channels [6]. In the angular domain, only a few angular bins contain most of the received energy. Hence, if we identify the dominant angular bins (which correspond to the angle of arrival (AoA) and the angle of departure (AoD)), we can obtain the channel estimate without incurring computational complexity. Basically, both AoD and AoA can be estimated using a so-called \u201cbeam training\u201d procedure. The base-station sends the training beams at the designated directions and the receiver estimates the AoD/AoA based on the received signals. A widely used beam training method (called the \u201cbeam cycling method\u201d) is to allow the base-station to transmit N training beams one by one at equally spaced directions. However, to ensure a good estimate of the AoD/AoA, N should be large, leading to significant training overhead. This problem becomes even more serious for the mobility scenario in mmWave communications. 
Since the location of mobiles keeps changing, the base-station should transmit training beams more frequently to update the AoD/AoA estimates, causing a significant drop in data throughput [7]. Recently, several adaptive beam training schemes have been proposed to improve the conventional beam training method [10]\u2013[14]. In this paper, we introduce a novel beam training method for the mobility scenario in mmWave communications. Our idea is based on the observation that in a mobility scenario, the AoD of a particular user does not change drastically, so that the continuous nature of the AoD change can be exploited to improve the efficacy of the beam training. Since this approach exploits the temporal dynamics of the AoD, we call such a beam training scheme \u201cbeam tracking\u201d. While the conventional method makes no assumption on the state of the AoD, we use the statistical distribution of the AoD given the previous state of the AoD. Using the probabilistic model on the AoD change, we derive an effective beam tracking strategy which employs transmission of two training beams from the base-station. The optimal placement of the two training beams in the angular domain is sought by minimizing (the lower bound of) the variance of the estimation error for the AoD. As a result, we choose the best beam pair from the beam codebook for the given prior knowledge on the AoD. Our simulation results show that the proposed beam tracking method offers channel estimation performance comparable to the conventional beam training methods with significantly reduced training overhead. The rest of this paper is organized as follows: In Section II, we introduce the system and channel models for mmWave communications; in Section III, we describe the proposed beam tracking method; and the simulation results are provided in Section IV. Finally, the paper is concluded in Section V. II. SYSTEM MODEL In this section, we describe the system model for mmWave communications. First, we describe the angular domain representation of the mmWave channel and then we introduce the procedure for beam training and channel estimation. A. Channel Model Consider single user mmWave MIMO systems with the base-station with Nb antennas and the mobile with Nm antennas. The MIMO channel model with L paths at time t is described by [8]"} {"_id": "3ed2bba32887f8f216106849e2652b7d2f814827", "title": "Curvature-controlled curve editing using piecewise clothoid curves", "text": "Two-dimensional curves are conventionally designed using splines or B\u00e9zier curves. Although formally they are C^2 or higher, the variation of the curvature of (piecewise) polynomial curves is difficult to control; in some cases it is practically impossible to obtain the desired curvature. As an alternative we propose piecewise clothoid curves (PCCs). We show that from the design point of view they have many advantages: control points are interpolated, curvature extrema lie in the control points, and adding control points does not change the curve. We present a fast localized clothoid interpolation algorithm that can also be used for curvature smoothing, for curve fitting, for curvature blending, and even for directly editing the curvature. We give a physical interpretation of variational curvature minimization, from which we derive our scheme. Finally, we demonstrate the achievable quality with a range of examples. \u00a9 2013 Elsevier Ltd. 
All rights reserved."} {"_id": "fd7233e36dfcda4ab85c46536d2b9875c3298819", "title": "Linking received packet to the transmitter through physical-fingerprinting of controller area network", "text": "The Controller Area Network (CAN) bus serves as a legacy protocol for in-vehicle data communication. Simplicity, robustness, and suitability for real-time systems are the salient features of the CAN bus protocol. However, it lacks basic security features such as message authentication, which makes it vulnerable to spoofing attacks. In a CAN network, linking a CAN packet to its sender node is a challenging task. This paper aims to address this issue by developing a framework to link each CAN packet to its source. Physical signal attributes of the received packet, consisting of channel- and node- (or device-) specific unique artifacts, are considered to achieve this goal. Material and design imperfections in the physical channel and digital device, which are the main contributing factors behind the device-channel specific unique artifacts, are leveraged to link the received electrical signal to the transmitter. Generally, inimitable patterns of signals from each ECU persist over the course of time, which manifests the stability of the proposed method. The uniqueness of the channel-device specific attributes is also investigated in the time and frequency domains. The feature vector is made up of both time- and frequency-domain physical attributes and is then employed to train a neural network-based classifier. The performance of the proposed fingerprinting method is evaluated by using a dataset collected from 16 different channels and four identical ECUs transmitting the same message. Experimental results indicate that the proposed method achieves correct detection rates of 95.2% and 98.3% for channel and ECU classification, respectively."} {"_id": "7827e8fa0e8934676df05e1248a9dd70f3dd7525", "title": "Multi-objective model predictive control for grid-tied 15-level packed U cells inverter", "text": "This paper presents a multi-objective Model Predictive Control (MPC) for a grid-tied 4-cell 15-level Packed U Cells (PUC) inverter. This challenging topology is characterized by high power quality with a reduced number of components compared to conventional multilevel inverters. Compared to the traditional PI controller, MPC is attracting more interest due to its good dynamic response and high accuracy of reference tracking, through the minimization of a flexible user-defined cost function. For the presented PUC topology, the grid current should be jointly controlled with the capacitors' voltages for ensuring proper operation of the inverter, which leads to an additional requirement of pre-charge circuits for the capacitors in the case of using a PI current controller (or using additional PI controllers). The proposed MPC achieves grid current injection, low current THD, and unity PF, while balancing the capacitors' voltages."} {"_id": "a6f726fa39b189e56b1bcb0756a03796c8aa16f8", "title": "Test anxiety and direction of attention.", "text": "The literature reviewed suggests an attentional interpretation of the adverse effects which test anxiety has on task performance. During task performance the highly test-anxious person divides his attention between self-relevant and task-relevant variables, in contrast to the low-test-anxious person who focuses his attention more fully on the task. 
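A rough sketch of the time/frequency feature extraction behind the CAN-bus fingerprinting abstract above: a sampled voltage segment from a received frame is summarized by simple statistics plus spectral features, and the resulting vectors train a classifier. The specific feature list is a generic assumption, not the paper's exact set.

```python
# Minimal sketch of physical-fingerprint feature extraction for CAN frames.
import numpy as np

def physical_fingerprint(signal, fs):
    sig = np.asarray(signal, dtype=float)
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
    return np.array([
        sig.mean(), sig.std(), sig.max(),      # time-domain statistics
        np.mean(np.abs(np.diff(sig))),         # mean slope (edge shape)
        centroid,                              # spectral centroid
        spec.var(),                            # spectral spread proxy
    ])

# feature vectors from frames with known senders would then train the
# neural-network classifier that links each packet to its ECU and channel
```

Because the artifacts come from fixed material and design imperfections, the same features recomputed later should stay close to the enrolled ones, which is the stability property the abstract emphasizes.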
This interpretation was supported by literature from diverse areas suggesting that (a) highly anxious persons are generally more self-preoccupied than are people low in anxiety; (b) the self-focusing tendencies of highly test-anxious persons are activated in testing situations; (c) those situational conditions in which the greatest performance differences occur are ones which elicit the self-focusing tendencies of highly test-anxious subjects, and the task-focusing tendencies of low-anxious subjects; (d) research examining the relationship between anxiety and task variables suggests that anxiety reduces the range of task cues utilized in performance; (e) \"worry,\" an attentionally demanding cognitive activity, is more debilitating of task performance than is autonomic arousal. Treatment and research implications of this attentional interpretation of test anxiety are briefly discussed."} {"_id": "4de602b8737642b80b50940b9ae9eb2da7dbc051", "title": "GPS positioning in a multipath environment", "text": "We address the problem of GPS signal delay estimation in a multipath environment with a low-complexity constraint. After recalling the usual early\u2013late estimator and its bias in a multipath propagation context, we study the maximum-likelihood estimator (MLE) based on a signal model including the parametric contribution of reflected components. It results in an efficient algorithm using the existing architecture, which is also very simple and cheap to implement. Simulations show that the results of the proposed algorithm, in a multipath environment, are similar to those of the early\u2013late in a single-path environment. The performance is further characterized, for both MLEs (based on single-path and multipath propagation), in terms of bias and standard deviation. The expressions of the corresponding Cram\u00e9r-Rao (CR) bounds are derived in both cases to show the good performance of the estimators when unbiased."} {"_id": "9e71d67774159fd7094c39c2efbc8dab497c12d7", "title": "Counter-forensics in machine learning based forgery detection", "text": "With the powerful image editing tools available today, it is very easy to create forgeries without leaving visible traces. Boundaries between host image and forgery can be concealed, illumination changed, and so on, in a naive form of counter-forensics. For this reason, most modern techniques for forgery detection rely on the statistical distribution of micro-patterns, enhanced through high-level filtering, and summarized in some image descriptor used for the final classification. In this work we propose a strategy to modify the forged image at the level of micro-patterns to fool a state-of-the-art forgery detector. Then, we investigate the effectiveness of the proposed strategy as a function of the level of knowledge of the forgery detection algorithm. Experiments show this approach to be quite effective, especially if good prior knowledge of the detector is available."} {"_id": "09d08e543a9b2fc350cb37e47eb087935c12be16", "title": "A Multimodal, Full-Surround Vehicular Testbed for Naturalistic Studies and Benchmarking: Design, Calibration and Deployment", "text": "Recent progress in autonomous and semiautonomous driving has been made possible in part through an assortment of sensors that provide the intelligent agent with an enhanced perception of its surroundings. 
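For the GPS abstract above, the conventional early-late discriminator it builds on can be sketched directly: correlate the received samples with early and late replicas of the spreading code and drive their power difference to zero. The toy code, correlator spacing, and noiseless received signal below are assumptions for illustration.

```python
# Minimal sketch of a normalized early-late delay discriminator.
import numpy as np

def early_late_error(rx, code, tau, d, t):
    early = np.interp(t - (tau - d / 2), t, code)   # replica advanced by d/2
    late = np.interp(t - (tau + d / 2), t, code)    # replica delayed by d/2
    c_e = np.abs(np.dot(rx, early))
    c_l = np.abs(np.dot(rx, late))
    return (c_e - c_l) / (c_e + c_l)                # zero at the correlation peak

# toy binary code with true delay 0.3; scan candidates for the zero crossing
t = np.linspace(0, 10, 2000)
code = np.sign(np.sin(2 * np.pi * 1.023 * t))
rx = np.interp(t - 0.3, t, code)
errors = [early_late_error(rx, code, tau, d=0.5, t=t)
          for tau in np.arange(0.0, 1.0, 0.05)]
```

In a multipath channel the correlation peak is skewed by reflected replicas, so this discriminator's zero crossing is biased; that bias is what motivates the multipath-aware MLE studied in the paper.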
It has been clear for some time now that for intelligent vehicles to function effectively in all situations and conditions, a fusion of different sensor technologies is essential. Consequently, the availability of synchronized multi-sensory data streams is necessary to promote the development of fusion-based algorithms for low-, mid- and high-level semantic tasks. In this paper, we provide a comprehensive description of our heavily sensorized, panoramic testbed capable of providing high quality data from a slew of synchronized and calibrated sensors such as cameras, LIDARs, radars, and IMU/GPS. The vehicle has recorded over 100 hours of real-world data for a very diverse set of weather, traffic and daylight conditions. All captured data is accurately calibrated and synchronized using timestamps, and stored safely in high-performance servers mounted inside the vehicle itself. Details on the testbed instrumentation, sensor layout, sensor outputs, calibration and synchronization are described in this paper."} {"_id": "8164171501d5d7418a3e83673923466b77b2fd5b", "title": "Prototypical Networks for Few-shot Learning", "text": "We propose Prototypical Networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend Prototypical Networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset."}
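The Prototypical Networks record describes classification by distance to per-class prototype representations. A minimal sketch of that inference step, assuming embeddings have already been computed (the random vectors below stand in for a learned embedding network):

```python
import numpy as np

def prototypical_predict(support, support_labels, queries):
    """Nearest-prototype classification in an embedding space.
    support: (N, D) embedded support examples; queries: (M, D) embedded queries.
    Prototypes are class means; distance is squared Euclidean."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (M, C)
    return classes[d2.argmin(axis=1)]

# Toy 2-way 3-shot episode with random "embeddings"
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 1, (3, 4)), rng.normal(5, 1, (3, 4))])
labels = np.array([0, 0, 0, 1, 1, 1])
queries = np.vstack([rng.normal(0, 1, (2, 4)), rng.normal(5, 1, (2, 4))])
print(prototypical_predict(support, labels, queries))  # -> [0 0 1 1]
```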
{"_id": "694064760c02d2748ecda38151a25fe2b1ff6870", "title": "Prozessmodellierung in eGovernment-Projekten mit der eEPK", "text": "In information system design, the project participants differ in particular with respect to their requirements regarding the content and representation of the models used for application system and organizational design. Multiperspective modeling is a promising approach to meet this circumstance by providing perspective-specific views on information models. This contribution presents an extension of event-driven process chains (EPCs) by configuration mechanisms that supports the creation and use of perspectives on process models. 1 Perspectives on process models. The scientific discussion on the quality of process models has in recent years been shaped substantially by the development of general modeling recommendations that aim at the syntactic correctness of models and at compliance with the Guidelines of Orderly Modeling during the creation of process models [BRS95; Ro96; BRU00]. A dominant criterion determining the quality of process models is the correspondence of the model conception and representation with the requirements of the respective group of model users [RS99, p. 25f.; Be02, p. 28ff.; RSD03, p. 49]. For example, within a process-based reorganization project these groups of model users are determined by the different organizational roles of the project participants. A clerk needs only the process models relevant to him and their interfaces. From the point of view of the manager of the company being reorganized, the models should be available in aggregated form and feature enhanced visual characteristics, such as an appealing use of colors or symbols. An application developer should be given very detailed models, so that the consequences resulting from the reorganization can be taken into account in the application system. The different requirements of the user groups result from the differences in their subject-dependent conceptual worlds and the problem-solving approaches derived from them [Be01a; Be02, p. 30ff.]. One approach to mediating explicitly between these conceptual worlds is the provision of requirement-oriented perspectives [Fr94, p. 36f.]. Perspectives are determined by the modeling purpose pursued, the organizational role of the users, and their personal preferences regarding model conception and representation [Be02, p. 38ff.; RSD03, p. 49]. The development and provision of perspective-specific models is discussed under the term multiperspective information modeling [Fi92; RS99, p. 25f.; RSD03, p. 52]. If the requirements of several user groups are to be met, i.e., if several perspectives are to be considered, the model creator has two alternatives. On the one hand, models could be maintained redundantly for the different perspectives; the disadvantages of this procedure are the usual additional efforts associated with redundancy, such as increased maintenance effort and the risk of inconsistencies. On the other hand, an overall model can be created in such a way that it contains, without redundancy, the elements relevant to all perspectives. The individual representatives of the perspectives are then provided with the models relevant to them by giving each of them a view on the overall model that strips the overall model of the contents that are not relevant. The measures the modeler has to carry out to create these views can, however, become a complex task. In the following, the implementation of the second variant is discussed on the basis of event-driven process chains (EPCs) [KNS92]. Section 2 first provides a metamodel-based formalization of the EPC in order to create a conceptual basis for the multiperspective extension of the modeling technique by configuration mechanisms. Building on this, Section 3 introduces the necessary extensions of the modeling technique. Section 4 deals with the problems and the necessity of consistency-preserving mechanisms that can arise when the configuration mechanisms are used. After a practical example in Section 5, the contribution closes with an outlook in Section 6.
2 Metamodel-based formalization of the EPC. The procedure presented here for creating multiperspective event-driven process chains is based on the manipulation of model structures through the application of configuration mechanisms. Since the structural changes at the level of the language specification, as well as their effects on the structure of the models, are at the center of attention, it is advisable to represent the EPC language in a formalized way. This, in turn, can be done by means of information models, so-called language-oriented metamodels [St96, p. 24ff.]. Structural languages such as the entity-relationship model [Ch76] are particularly suitable as a notation; an extended dialect of this modeling language is used for metamodeling in the present contribution. (This is a dialect that permits the perspective-dependent modification of languages; cf. in detail [Be02, p. 77ff.].) The central element of the EPC language is the process element (cf. Figure 1). (The term EPC is used in the following synonymously with the extended event-driven process chain (eEPC), which, in addition to the control-flow logic, permits the annotation of required resources.) All nodes participating in the process graph are modeled as specializations of this element. These include the elements process function, process event, and operator (cf. the specialization of the entity type Prozesselement). A process function describes an activity within a process. These activities are executed upon the occurrence of one event or a combination of events and, after their completion, themselves lead again to the occurrence of events. An event can thus be interpreted as a state change. The combination of events necessary for the execution of an activity, as well as the definition of the set of events occurring after successful execution, are defined via the operators. In this way, parallel process strands and their re-synchronization can be represented in the process model. (Figure 1 shows the EPC metamodel: the entity type Prozesselement, specialized into Prozessfunktion, Prozessereignis, and Operator, with predecessor/successor relations and (0,n) relationships via PERessourcenZuO to the resource types Organisationseinheit, Fachbegriff, and Anwendungssystemtyp within a process-resource relationship type hierarchy.)"} {"_id": "cd6b17b5954011c619d687a88493564c6ab345b7", "title": "Decoding of EEG Signals Using Deep Long Short-Term Memory Network in Face Recognition Task", "text": "The paper proposes a novel approach to classify the human memory response involved in the face recognition task by the utilization of event related potentials. Electroencephalographic signals are acquired when a subject engages himself/herself in familiar or unfamiliar face recognition tasks. The signals are analyzed through source localization using eLORETA and artifact removal by ICA from a set of channels corresponding to those selected sources, with an ultimate aim to classify the EEG responses of familiar and unfamiliar faces. The EEG responses of the two different classes (familiar and unfamiliar face recognition) are distinguished by analyzing the Event Related Potential signals that reveal the existence of large N250 and P600 signals during familiar face recognition. The paper introduces a novel LSTM classifier network which is designed to classify the ERP signals to fulfill the prime objective of this work.
The first layer of the novel LSTM network evaluates the spatial and local temporal correlations between the obtained samples of local EEG time-windows. The second layer of this network models the temporal correlations between the time-windows. An attention mechanism has been introduced in each layer of the proposed model to compute the contribution of each EEG time-window in the face recognition task. Performance analysis reveals that the proposed LSTM classifier with attention mechanism outperforms the conventional LSTM and other classifiers by a significantly large margin. Moreover, source localization using eLORETA shows the involvement of inferior temporal and frontal lobes during familiar face recognition and the pre-frontal lobe during unfamiliar face recognition. Thus, the present research outcome can be used in criminal investigation, where familiar and unfamiliar faces viewed by criminals can be meticulously differentiated from their acquired brain responses."} {"_id": "0f202ec12a845564634455e562d1b297fa56ce64", "title": "Algorithmic Prediction of Health-Care Costs", "text": "The rising cost of health care is one of the world\u2019s most important problems. Accordingly, predicting such costs with accuracy is a significant first step in addressing this problem. Since the 1980s, there has been research on the predictive modeling of medical costs based on (health insurance) claims data using heuristic rules and regression methods. These methods, however, have not been appropriately validated using populations that the methods have not seen. We utilize modern data-mining methods, specifically classification trees and clustering algorithms, along with claims data from over 800,000 insured individuals over three years, to provide rigorously validated predictions of health-care costs in the third year, based on medical and cost data from the first two years. We quantify the accuracy of our predictions using unseen (out-of-sample) data from over 200,000 members. The key findings are: (a) our data-mining methods provide accurate predictions of medical costs and represent a powerful tool for prediction of health-care costs, (b) the pattern of past cost data is a strong predictor of future costs, and (c) medical information only contributes to accurate prediction of medical costs of high-cost members."} {"_id": "25849e48b1436aeedb7f1c1d1532f62799c42b1a", "title": "Extensive Benchmark and Survey of Modeling Methods for Scene Background Initialization", "text": "Scene background initialization is the process by which a method tries to recover the background image of a video without foreground objects in it. Having a clear understanding about which approach is more robust and/or more suited to a given scenario is of great interest to many end users or practitioners. The aim of this paper is to provide an extensive survey of scene background initialization methods as well as a novel benchmarking framework. The proposed framework involves several evaluation metrics and state-of-the-art methods, as well as the largest video data set ever made for this purpose. The data set consists of several camera-captured videos that: 1) span categories focused on various background initialization challenges; 2) are obtained with different cameras and have different lengths, frame rates, spatial resolutions, lighting conditions, and levels of compression; and 3) contain indoor and outdoor scenes.
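The health-care cost record above predicts year-3 costs from two years of claims data using tree-based data mining, validated on unseen members. A minimal stand-in for that train-then-validate-out-of-sample pattern, using a regression tree on synthetic data (the feature set and cost model below are invented for illustration, not the paper's data):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# Hypothetical member records: past two years of cost/medical features -> year-3 cost.
rng = np.random.default_rng(0)
X = rng.gamma(2.0, 1000.0, size=(5000, 6))                      # e.g., year-1/2 costs, visit counts
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 500, 5000)    # synthetic year-3 cost

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=50).fit(X_tr, y_tr)
print("out-of-sample R^2:", round(tree.score(X_te, y_te), 3))   # accuracy on unseen members
```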
The wide variety of our data set prevents our analysis from favoring a certain family of background initialization methods over others. Our evaluation framework allows us to quantitatively identify solved and unsolved issues related to scene background initialization. We also identify scenarios for which state-of-the-art methods systematically fail."} {"_id": "18801e8f7ae36e1231497140d9f4ad065f913704", "title": "PUMA: Planning Under Uncertainty with Macro-Actions", "text": "Planning in large, partially observable domains is challenging, especially when a long-horizon lookahead is necessary to obtain a good policy. Traditional POMDP planners that plan a different potential action for each future observation can be prohibitively expensive when planning many steps ahead. An efficient solution for planning far into the future in fully observable domains is to use temporally extended sequences of actions, or \u201cmacro-actions.\u201d In this paper, we present a POMDP algorithm for planning under uncertainty with macro-actions (PUMA) that automatically constructs and evaluates open-loop macro-actions within forward-search planning, where the planner branches on observations only at the end of each macro-action. Additionally, we show how to incrementally refine the plan over time, resulting in an anytime algorithm that provably converges to an \u03b5-optimal policy. In experiments on several large POMDP problems which require a long horizon lookahead, PUMA outperforms existing state-of-the-art solvers. Most partially observable Markov decision process (POMDP) planners select actions conditioned on the prior observation at each timestep: we refer to such planners as fully-conditional. When good performance relies on considering different possible observations far into the future, both online and offline fully-conditional planners typically struggle. An extreme alternative is unconditional (or \u201copen-loop\u201d) planning where a sequence of actions is fixed and does not depend on the observations that will be received during execution. While open-loop planning can be extremely fast and perform surprisingly well in certain domains, acting well in most real-world domains requires plans where at least some action choices are conditional on the obtained observations. This paper focuses on the significant subset of POMDP domains, including scientific exploration, target surveillance, and chronic care management, where it is possible to act well by planning using conditional sequences of open-loop, fixed-length action chains, or \u201cmacro-actions.\u201d We call this approach semi-conditional planning, in that actions are chosen based on the received observations only at the end of each macro-action. For a discussion of using open-loop planning for multi-robot tag, see Yu et al. (2005). We demonstrate that for certain domains, planning with macro-actions can offer performance close to fully-conditional planning at a dramatically reduced computational cost. In comparison to prior macro-action work, where a domain expert often hand-coded a good set of macro-actions for each problem, we present a technique for automatically constructing finite-length open-loop macro-actions. Our approach uses sub-goal states based on immediate reward and potential information gain.
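To make "branching on observations only at the end of each macro-action" concrete, here is a toy sketch of evaluating open-loop, fixed-length action chains by Monte-Carlo rollouts. PUMA constructs its macro-actions from sub-goal states; this sketch instead enumerates all chains exhaustively on an invented 1-D corridor domain, so it shows only the semi-conditional structure, not the paper's construction method:

```python
import itertools, random

ACTIONS = [-1, +1]          # move left / right on a 1-D corridor
GOAL, N = 7, 10

def step(state, action):
    """Toy fully-specified simulator: reward 1 for reaching the goal cell."""
    s = min(max(state + action, 0), N - 1)
    return s, 1.0 if s == GOAL else 0.0

def simulate(belief_states, macro, n_rollouts=50):
    """Expected return of an open-loop macro-action from a belief (a list of
    equally likely states), estimated by Monte-Carlo rollouts."""
    total = 0.0
    for _ in range(n_rollouts):
        state, ret = random.choice(belief_states), 0.0
        for a in macro:      # no branching on observations inside the macro
            state, r = step(state, a)
            ret += r
        total += ret
    return total / n_rollouts

def best_macro(belief_states, length=3):
    """Semi-conditional planning step: enumerate all fixed-length action chains
    and pick the best; observations are only consulted after it completes."""
    return max(itertools.product(ACTIONS, repeat=length),
               key=lambda m: simulate(belief_states, m))

print(best_macro([4, 5, 6]))   # -> a chain of +1 moves toward the goal
```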
We then describe how to incrementally refine an initial macro-action plan by incorporating successively shorter macro-actions. We combine these two contributions in a forward-search algorithm for planning under uncertainty with macro-actions (PUMA). PUMA is an anytime algorithm which guarantees eventual convergence to an \u03b5-optimal policy, even in domains that may require close to fully-conditional plans. PUMA outperforms a state-of-the-art POMDP planner both in terms of plan quality and computational cost on two large simulated POMDP problems. However, semi-conditional planning does not yield an advantage in all domains, and we provide preliminary experimental analysis towards determining a priori when planning in a semi-conditional manner will be helpful. Nevertheless, even in domains that are not well suited to semi-conditional planning, our anytime improvement allows PUMA to still eventually compute a good policy, suggesting that PUMA may be viable as a generic planner for large POMDP problems."} {"_id": "bd7eac15c893453a7076078348c7ae6a1b69ad0a", "title": "Revising Perceptual Linear Prediction (PLP)", "text": "Mel Frequency Cepstral Coefficients (MFCC) and Perceptual Linear Prediction (PLP) are the most popular acoustic features used in speech recognition. Which of the two methods leads to better performance often depends on the task. In this work we develop acoustic features that combine the advantages of MFCC and PLP. Based on the observation that the techniques have many similarities, we revise the processing steps of PLP. In particular, the filter-bank, the equal-loudness pre-emphasis and the input for the linear prediction are improved. It is shown for a broadcast news transcription task and a corpus of children\u2019s speech that the new variant of PLP performs better than both MFCC and conventional PLP for a wide range of clean and noisy acoustic conditions."} {"_id": "846996945fc33abebdcbeb92fe2fe88afb92c47e", "title": "Multi-speaker modeling and speaker adaptation for DNN-based TTS synthesis", "text": "In DNN-based TTS synthesis, a DNN's hidden layers can be viewed as a deep transformation of linguistic features, and the output layers as a representation of the acoustic space that regresses the transformed linguistic features to acoustic parameters. The deep-layered architecture of a DNN can not only represent highly complex transformations compactly, but also take advantage of huge amounts of training data. In this paper, we propose an approach to model multi-speaker TTS with a general DNN, where the same hidden layers are shared among different speakers while the output layers are composed of speaker-dependent nodes explaining the target of each speaker. The experimental results show that our approach can significantly improve the quality of synthesized speech objectively and subjectively, compared with speech synthesized from individual, speaker-dependent DNN-based TTS. We further transfer the hidden layers for a new speaker with limited training data, and the resultant synthesized speech of the new speaker can also achieve a good quality in terms of naturalness and speaker similarity."} {"_id": "6a6b757a7640e43544df78fd15db7f14a8084263", "title": "Classroom Response Systems: A Review of the Literature.", "text": "As the frequency with which Classroom Response Systems (CRSs) are used is increasing, it becomes more and more important to define the affordances and limitations of these tools.
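The multi-speaker TTS record describes shared hidden layers with per-speaker output layers, and speaker adaptation by reusing the shared trunk. A structural sketch of that architecture in PyTorch; the feature dimensions and layer sizes are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class MultiSpeakerTTS(nn.Module):
    """Shared hidden layers transform linguistic features; one output head per
    speaker regresses them to acoustic parameters."""
    def __init__(self, n_linguistic=300, n_acoustic=187, n_speakers=4, hidden=512):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(n_linguistic, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_acoustic) for _ in range(n_speakers)])

    def forward(self, x, speaker_id):
        return self.heads[speaker_id](self.shared(x))

model = MultiSpeakerTTS()
x = torch.randn(8, 300)            # a batch of linguistic feature vectors
y = model(x, speaker_id=2)         # acoustic parameters for speaker 2
print(y.shape)                     # torch.Size([8, 187])
# Adapting to a new speaker with limited data: freeze `shared`, train a new head.
```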
The existing literature is largely either anecdotal or focuses on comparing CRS and non-CRS environments that are unequal in other aspects as well. In addition, the literature primarily describes situations in which the CRS is used to provide an individual as opposed to a group response. This article points to the need for a concerted research effort, one that rigorously explores conditions of use across diverse settings and pedagogies."} {"_id": "10349122e4a67d60c2e0cc3e382b9502d95a6b55", "title": "An energy-efficient dual sampling SAR ADC with reduced capacitive DAC", "text": "This paper presents an energy-efficient SAR ADC which adopts a reduced MSB cycling step with dual sampling of the analog signal. By sampling and holding the analog signal asymmetrically at both input sides of the comparator, the MSB cycling step can be hidden within the hold mode. Benefiting from this technique, not only is the total capacitance of the DAC reduced by half, but the average switching energy is also reduced by 68% compared with a conventional SAR ADC. Moreover, the switching energy distribution is more uniform over the entire output code compared with previous works."} {"_id": "abf21d598275ea3bf51f4f59a4fc1388cb0a58b8", "title": "The impact of pretend play on children's development: a review of the evidence.", "text": "Pretend play has been claimed to be crucial to children's healthy development. Here we examine evidence for this position versus 2 alternatives: Pretend play is 1 of many routes to positive developments (equifinality), and pretend play is an epiphenomenon of other factors that drive development. Evidence from several domains is considered. For language, narrative, and emotion regulation, the research conducted to date is consistent with all 3 positions but insufficient to draw conclusions. For executive function and social skills, existing research leans against the crucial causal position but is insufficient to differentiate the other 2. For reasoning, equifinality is definitely supported, ruling out a crucially causal position but still leaving open the possibility that pretend play is epiphenomenal. For problem solving, there is no compelling evidence that pretend play helps or is even a correlate. For creativity, intelligence, conservation, and theory of mind, inconsistent correlational results from sound studies and nonreplication with masked experimenters are problematic for a causal position, and some good studies favor an epiphenomenon position in which child, adult, and environment characteristics that go along with play are the true causal agents. We end by considering epiphenomenalism more deeply and discussing implications for preschool settings and further research in this domain. Our take-away message is that existing evidence does not support strong causal claims about the unique importance of pretend play for development and that much more and better research is essential for clarifying its possible role."} {"_id": "14b8cfb5585af4952d43a5dfd21b76e1d1ee2b81", "title": "Adversarial Examples: Attacks on Machine Learning-based Malware Visualization Detection Methods", "text": "As the threat of malicious software (malware) becomes increasingly serious, automatic malware detection techniques have received increasing attention recently, where machine learning (ML)-based visualization detection plays a significant role. However, this raises the fundamental question of whether such detection methods can be robust enough against various potential attacks.
Even though ML algorithms show superiority to conventional ones in malware detection in terms of high efficiency and accuracy, this paper demonstrates that such ML-based malware detection methods are vulnerable to adversarial example (AE) attacks. We propose the first AE-based attack framework, named Adversarial Texture Malware Perturbation Attacks (ATMPA), based on gradient-descent and L-norm optimization methods. By introducing tiny perturbations on the transformed dataset, ML-based malware detection methods completely fail. The experimental results on the MS BIG malware dataset show that a small interference can reduce the detection rate of convolutional neural network (CNN), support vector machine (SVM) and random forest (RF)-based malware detectors to 0, and that the attack transferability reaches up to 88.7% and 74.1% on average across different ML-based detection methods."} {"_id": "88e1ce8272282bf1d1b5e55894cc545e72ff9aa1", "title": "Block Chain based Searchable Symmetric Encryption", "text": "The mechanism of traditional Searchable Symmetric Encryption (SSE) is pay-then-use: if a user wants to search for documents that contain particular keywords, he must first pay the server, after which he can use the search service. In this setting, the following can happen: after the user pays the service fees, the server may disappear because of poor management, or it may return nothing. As a result, the money the user paid cannot be recovered quickly. Alternatively, the server may return incorrect document sets to the user in order to save its own cost. Once such events happen, an arbitration institution must mediate, which takes a long time; moreover, to settle the dispute the user has to pay the arbitration institution. Ideally, when the user realizes that the server has a tendency to cheat in the search task, he should be able to immediately and automatically withdraw his money to safeguard his rights. However, existing SSE protocols cannot satisfy this demand. To resolve this dilemma, we propose a compromise by introducing the blockchain into SSE. Our scheme achieves the three goals stated below. First, if the server returns nothing to the user after obtaining the search token, the user receives compensation from the server, because the server can infer important information from the index and this token; in addition, the user does not pay the service charge. Second, if the documents that the server returns are false, the server receives no service fees and is punished as well. Last, the user may terminate the protocol after receiving some bitcoin from the server at the beginning; in this situation the server is the victim. To prevent this from happening, the server broadcasts a transaction to redeem its pledge after an appointed time."} {"_id": "b91b36da582de570d7c368a192914c43961a6eff", "title": "Ionospheric Time-Delay Algorithm for Single-Frequency GPS Users", "text": "The goal in designing an ionospheric time-delay correction algorithm for the single-frequency global positioning system user was to include the main features of the complex behavior of the ionosphere, yet require a minimum of coefficients and user computational time, while still yielding an rms correction of at least 50 percent.
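The ATMPA record above builds on gradient-based perturbation of visualized malware samples. The sketch below shows the simplest member of that family, a fast-gradient-sign perturbation against a toy detector; it is a generic stand-in under assumed image dimensions, not the paper's exact optimization:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=0.02):
    """Fast-gradient-sign perturbation: nudge x in the direction that
    increases the detector's loss. x: malware-image batch, y: true labels."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Toy detector over 32x32 grayscale "malware images" (assumed dimensions)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))
x = torch.rand(4, 1, 32, 32)
y = torch.tensor([1, 1, 0, 1])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())   # perturbation magnitude bounded by eps
```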
The algorithm designed for this purpose, and implemented in the GPS satellites, requires only eight coefficients sent as part of the satellite message, contains numerous approximations designed to reduce user computational requirements, yet preserves the essential elements required to obtain group delay values along multiple satellite viewing directions."} {"_id": "6d535375536d5a71eb9b18d93aa9fb09e3e050fb", "title": "Weighted Similarity Schemes for High Scalability in User-Based Collaborative Filtering", "text": "Similarity-based algorithms, often referred to as memory-based collaborative filtering techniques, are among the most successful methods in recommendation systems. When explicit ratings are available, similarity is usually defined using similarity functions, such as the Pearson correlation coefficient, cosine similarity or mean square difference. These metrics assume similarity is a symmetric criterion; therefore, two users have equal impact on each other in recommending new items. In this paper, we introduce new weighting schemes that allow us to consider new features in finding similarities between users. These weighting schemes, first, transform symmetric similarity into asymmetric similarity by considering the number of ratings given by users on non-common items. Second, they take into account users' rating habits by measuring the proximity of the number of repetitions of each rating value on commonly rated items. Experiments on two datasets were conducted and compared with other similarity measures. The results show that adding the weighting schemes to traditional similarity measures significantly improves the results obtained from those measures."} {"_id": "c6e3df7ea9e28e25e048f840f59088a34bed8322", "title": "Optimization tool for direct water cooling system of high power IGBT modules", "text": "Thermal management of power electronic devices is essential for reliable system performance, especially at high power levels. Since even the most efficient electronic circuit becomes hot because of ohmic losses, it is clear that cooling is needed in electronics, all the more as the power increases. One of the most important activities in thermal management and reliability improvement is cooling system design. As industries are developing smaller power devices with higher power densities, the optimized design of cooling systems with minimum thermal resistance and pressure drop becomes an important issue for thermal design engineers. This paper presents a user-friendly optimization tool for the direct water cooling system of a high-power module, which enables the cooling system designer to identify the optimized solution depending on customer load profiles and available pump power.
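The weighted-similarity record above turns symmetric Pearson similarity into an asymmetric one by penalizing a user's non-common ratings. Below is one plausible weighting in the spirit of that description, not the paper's exact formula; NaN marks unrated items:

```python
import numpy as np

def pearson(u, v, common):
    """Pearson correlation over the commonly rated items of two rating vectors."""
    cu, cv = u[common] - u[common].mean(), v[common] - v[common].mean()
    denom = np.sqrt((cu ** 2).sum() * (cv ** 2).sum())
    return (cu * cv).sum() / denom if denom else 0.0

def asymmetric_sim(u, v):
    """Scale Pearson by the fraction of u's ratings that v shares, so that
    sim(u, v) != sim(v, u) in general."""
    rated_u, rated_v = ~np.isnan(u), ~np.isnan(v)
    common = rated_u & rated_v
    if not common.any():
        return 0.0
    weight = common.sum() / rated_u.sum()   # penalizes u's non-common ratings
    return weight * pearson(u, v, common)

u = np.array([5, 3, np.nan, 1, 4, 2])
v = np.array([4, 3, 2, 1, np.nan, np.nan])
print(asymmetric_sim(u, v), asymmetric_sim(v, u))   # different values
```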
CFD simulations are implemented to find the best solution for each scenario."} {"_id": "5d3d9ac765d5dbe27ac1b967aece1b3fc0c3893f", "title": "BER Analysis of MIMO-OFDM System using Alamouti STBC and MRC Diversity Scheme over Rayleigh Multipath Channel", "text": ""} {"_id": "8b3539e359f4dd445c0651ebcb911c034d1ffc5b", "title": "A passivity based Cartesian impedance controller for flexible joint robots - part I: torque feedback and gravity compensation", "text": "In this paper a novel approach to the Cartesian impedance control problem for robots with flexible joints is presented. The proposed controller structure is based on simple physical considerations, which motivate the extension of classical position feedback by an additional feedback of the joint torques. The torque feedback action can be interpreted as a scaling of the apparent motor inertia. Furthermore, the problem of gravity compensation is addressed. Finally, it is shown that the closed-loop system can be seen as a feedback interconnection of passive systems. Based on this passivity property a proof of asymptotic stability is presented."} {"_id": "33576c0fc316c45c3672523114b20a5bb996e1f4", "title": "A unified approach for motion and force control of robot manipulators: The operational space formulation", "text": "A framework for the analysis and control of manipulator systems with respect to the dynamic behavior of their end-effectors is developed. First, issues related to the description of end-effector tasks that involve constrained motion and active force control are discussed. The fundamentals of the operational space formulation are then presented, and the unified approach for motion and force control is developed. The extension of this formulation to redundant manipulator systems is also presented, constructing the end-effector equations of motion and describing their behavior with respect to joint forces. These results are used in the development of a new and systematic approach for dealing with the problems arising at kinematic singularities. At a singular configuration, the manipulator is treated as a mechanism that is redundant with respect to the motion of the end-effector in the subspace of operational space orthogonal to the singular direction."} {"_id": "4358e29f6c95f371ade11a56e8b2ffea549d5842", "title": "A passivity based Cartesian impedance controller for flexible joint robots - part II: full state feedback, impedance design and experiments", "text": "The paper presents a Cartesian impedance controller for flexible joint robots based on the feedback of the complete state of the system, namely the motor position, the joint torque and their derivatives. The approach is applied to a quite general robot model, in which a damping element is also considered in parallel to the joint stiffness. Since passivity and asymptotic stability of the controller hold also for varying damping matrices, some possibilities of designing those gain matrices (depending on the actual inertia matrix) are addressed.
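The two impedance-control records describe making the end-effector behave like a spring-damper around a desired pose. The sketch below is the textbook Cartesian impedance law that structure builds on; the flexible-joint torque feedback and motor-inertia shaping specific to these papers are not shown, and the Jacobian, gains, and gravity vector are assumed numbers:

```python
import numpy as np

def cartesian_impedance_torque(x, dx, x_des, J, g_q, Kx, Dx):
    """Generic Cartesian impedance law: a virtual spring-damper anchored at
    x_des, mapped to joint torques via J^T, plus gravity compensation g_q."""
    f = Kx @ (x_des - x) - Dx @ dx    # virtual spring-damper wrench
    return J.T @ f + g_q              # joint torques

# Toy 2-DOF example
x, dx, x_des = np.array([0.5, 0.2]), np.zeros(2), np.array([0.6, 0.2])
J = np.array([[1.0, 0.3], [0.0, 1.0]])            # assumed Jacobian
Kx, Dx = np.diag([500.0, 500.0]), np.diag([40.0, 40.0])
tau = cartesian_impedance_torque(x, dx, x_des, J, np.zeros(2), Kx, Dx)
print(tau)   # torques pulling the end-effector toward x_des
```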
The passivity of the controller relies on the usage of only motor-side measurements for the position feedback. A method is introduced which provides the exact desired link-side stiffness based on this motor position information. Experimental results validate the proposed controller."} {"_id": "666e4ea184b0d093faacb0dd301e8e598a62603b", "title": "Nonlinear Systems Analysis", "text": "THIS IS a completely rewritten version of the first edition which appeared in 1978. At that time the book gave a good overview of the main body of nonlinear systems and control theory, except for nonlinear optimal control theory. The emphasis was very much on the analysis of systems (in open- or closed-loop configuration), instead of on the actual design of controllers. Most attention was focused on stability issues, both from the input-output and the state-space points of view. Highlights in this theory were the concepts of input-output stability including Popov's criterion and the circle criterion (see also Desoer and Vidyasagar, 1975), the theory of Lyapunov stability, and perhaps the method of describing functions. Since the appearance of the first edition, geometric nonlinear control theory has become a prevailing trend. (In fact these developments were already initiated at the beginning of the seventies.) Geometric nonlinear control theory started as a successful approach to deal with basic system-theoretic questions in the state-space formulation of nonlinear control systems, such as controllability and observability properties and minimal realization theory. It gained strong impetus at the beginning of the eighties through the systematic study of nonlinear (state) feedback for various synthesis problems, stimulated also by the geometric approach to linear control theory of Wonham and Morse and of Basile and Marro. In particular the Lie bracket conditions for controllability and feedback linearizability have become popular and powerful tools in nonlinear control. The books by Isidori (1989) and Nijmeijer and Van der Schaft (1990) are clear exponents of the achievements of geometric nonlinear control theory. In the last couple of years one can witness some developments which can be looked at as attempts to bridge the gap between nonlinear systems and control theory as put forward in the first edition of the present book on the one hand, and geometric nonlinear control theory on the other hand. Indeed, in nonlinear adaptive control as well as in nonlinear robust control (including nonlinear H\u221e-control) there is much need for the kind of stability concepts as exposed in the first edition of the book, while at the same time geometric nonlinear control theory offers an underlying structural framework. In particular passivity and, more generally, dissipativity concepts turn out to be extremely useful in these areas, especially within the context of the control of physical systems. Also, there has recently been some rapprochement between nonlinear input-output theory and (geometric) state-space theory. In my opinion these converging developments are very important and indeed offer promising perspectives for a truly nonlinear control design. In the present second edition Professor Vidyasagar has made an admirable attempt to include at least an introduction to geometric nonlinear control theory in an additional chapter of the book.
In fact, this new chapter deals with the perhaps most 'useful' parts of geometric nonlinear control theory such as controllability, feedback linearization and input-output linearization. The other most noticeable differences with the first edition are the inclusion of some"} {"_id": "d04c3ab918aad5f2b3fcec971db5cbe81de72d3e", "title": "On a New Generation of Torque Controlled Light-Weight Robots", "text": "The paper describes the recent design and development efforts in DLR's robotics lab towards the second generation of light-weight robots. The design of the lightweight mechanics, integrated sensors and electronics is outlined. The fully sensorized joint, with motor and link position sensors as well as joint torque sensors, enables the implementation of effective vibration damping and advanced control strategies for compliant manipulation. The mechatronic approach incorporates a tight collaboration between mechanics, electronics and controller design. Thus we hope that important steps towards a new generation of service and personal robots have been achieved."} {"_id": "a1d66f1344424f8294a740dcda9847bbc853be2e", "title": "Compact Zigzag-Shaped-Slit Microstrip Antenna With Circular Defected Ground Structure for Wireless Applications", "text": "In this letter, a compact zigzag-shaped-slit rectangular microstrip patch antenna with circular defected ground structure (DGS) is designed for wireless applications. The probe-fed antenna, consisting of a zigzag-shaped slit, dual T-shaped slits on either side of a rectangular patch, and a circular dumbbell-shaped defected ground plane, is optimized. The antenna is able to generate three separate resonances to cover both the 2.45/5.28-GHz WLAN bands and the 3.5-GHz WiMAX bands while maintaining a small overall size of 40 \u00d7 28 \u00d7 3.175 mm3. The return-loss impedance bandwidth values are enhanced significantly at the three resonant frequencies. The designed antenna is characterized by good radiation patterns and stable gain of around 4-6 dBi over the working bands. Good agreement was obtained between measurements and simulations."} {"_id": "43751b572e8608f88d851a541f67a8ae456b780b", "title": "FIRST EXPERIMENTAL INVESTIGATIONS ON WHEEL-WALKING FOR IMPROVING TRIPLE-BOGIE ROVER LOCOMOTION PERFORMANCES", "text": "The deployment actuators of a triple-bogie rover locomotion platform can be used to perform Wheel-Walking (WW) manoeuvres. How WW could affect the traversing capabilities of rovers is a recurrent debate in the planetary robotics community. The Automation and Robotics Section of ESTEC has initiated a long-term project to evaluate the performance of WW manoeuvres in different scenarios. This paper presents the first experimental results of this project, obtained during the test campaign run in November 2014 at the Planetary Robotics Lab (PRL) of ESTEC, and shows the performance analysis made when comparing WW with standard rolling. The PRL rover prototype ExoTeR was used to test three different scenarios: entrapment in loose soil, up-slope traverse and lander egressing. WW locomotion showed increased capabilities in all scenarios and proved its relevance and advantages for planetary exploration missions."} {"_id": "61723cdd6f8195c8bb7407a04bb29a690de6a08c", "title": "ISO 9241-11 Revised: What Have We Learnt About Usability Since 1998?", "text": "A revision is currently being undertaken of ISO 9241-11, published in 1998 to provide guidance on usability.
ISO 9241-11 defines usability in terms of effectiveness, efficiency and satisfaction in a particular context of use. The intention was to emphasise that usability is an outcome of interaction rather than a property of a product. This is now widely accepted. However, the standard also places emphasis on usability measurement, and it is now appreciated that there is more to usability evaluation than measurement. Other developments include an increasing awareness of the importance of the individual user's emotional experience as discretionary usage of complex consumer products and use of the World Wide Web have become more widespread. From an organisational perspective, it is now appreciated that usability plays an important role in managing the potential risks that can arise from inappropriate outcomes of interaction. The revision of ISO 9241-11 takes account of these issues and other feedback."} {"_id": "c4938b4967b5953d95bd531d84f10ae6585fb434", "title": "Aerodynamic Design Optimization Studies of a Blended-Wing-Body Aircraft", "text": "The blended-wing body is an aircraft configuration that has the potential to be more efficient than conventional large transport aircraft configurations with the same capability. However, the design of the blended-wing body is challenging due to the tight coupling between aerodynamic performance, trim, and stability. Other design challenges include the nature and number of the design variables involved, and the transonic flow conditions. To address these issues, we perform a series of aerodynamic shape optimization studies using Reynolds-averaged Navier\u2013Stokes computational fluid dynamics with a Spalart\u2013Allmaras turbulence model. A gradient-based optimization algorithm is used in conjunction with a discrete adjoint method that computes the derivatives of the aerodynamic forces. A total of 273 design variables\u2014twist, airfoil shape, sweep, chord, and span\u2014are considered. The drag coefficient at the cruise condition is minimized subject to lift, trim, static margin, and center plane bending moment constraints. The studies investigate the impact of the various constraints and design variables on optimized blended-wing-body configurations. The lowest drag among the trimmed and stable configurations is obtained by enforcing a 1% static margin constraint, resulting in a nearly elliptical spanwise lift distribution. Trim and static stability are investigated at both on- and off-design flight conditions. The single-point designs are relatively robust to the flight conditions, but further robustness is achieved through a multi-point optimization."} {"_id": "864fe713fd082585b67198ad7942a1568e536bd2", "title": "Group-Sensitive Triplet Embedding for Vehicle Reidentification", "text": "The widespread use of surveillance cameras toward smart and safe cities poses the critical but challenging problem of vehicle reidentification (Re-ID). State-of-the-art research performs vehicle Re-ID relying on deep metric learning with a triplet network. However, most existing methods basically ignore the impact of intraclass variance-incorporated embedding on the performance of vehicle reidentification, and robust fine-grained features for large-scale vehicle Re-ID have not been fully studied.
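The BWB record above minimizes cruise drag subject to lift, trim, and static-margin constraints with a gradient-based optimizer. The sketch below mirrors only that constrained-minimization structure with SLSQP on invented stand-in functions; a real workflow would supply adjoint-computed gradients via the `jac` argument and CFD-based objective/constraint evaluations:

```python
import numpy as np
from scipy.optimize import minimize

def drag(x):
    """Illustrative stand-in for a drag coefficient, not an aerodynamic model."""
    return x[0] ** 2 + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

cons = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 1.0},   # "lift = required lift"
    {"type": "ineq", "fun": lambda x: x[1] - 0.01},          # "static margin >= 1%"
]

res = minimize(drag, x0=np.array([0.5, 0.5]), method="SLSQP", constraints=cons)
print(res.x, res.fun)   # optimized "design variables" and objective value
```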
In this paper, we propose a deep metric learning method, group-sensitive triplet embedding (GS-TRE), to recognize and retrieve vehicles, in which intraclass variance is elegantly modeled by incorporating an intermediate representation \"group\" between samples and each individual vehicle in the triplet network learning. To capture the intraclass variance attributes of each individual vehicle, we utilize an online grouping method to partition samples within each vehicle ID into a few groups, and build up the triplet samples at multiple granularities across different vehicle IDs as well as different groups within the same vehicle ID to learn fine-grained features. In particular, we construct a large-scale vehicle database \"PKU-Vehicle,\" consisting of 10 million vehicle images captured by different surveillance cameras in several cities, to evaluate the vehicle Re-ID performance in real-world video surveillance applications. Extensive experiments over the benchmark datasets VehicleID, VeRI, and CompCar have shown that the proposed GS-TRE significantly outperforms the state-of-the-art approaches for vehicle Re-ID."} {"_id": "8039a25fa3408d73829c3746e0e453ad7dc268b4", "title": "Integrating technology in the classroom: a visual conceptualization of teachers' knowledge, goals and beliefs", "text": "In this paper, we devise a diagrammatic conceptualization to describe and represent the complex interplay of a teacher's knowledge (K), goals (G) and beliefs (B) in leveraging technology effectively in the classroom. The degree of coherency between the KGB region and the affordances of the technology serves as an indicator of the teachers' developmental progression through the initiation, implementation and maturation phases of using technology in the classroom. In our study, two teachers with differing knowledge, goals and beliefs were studied as they integrated GroupScribbles technology in their classroom lessons over a period of 1 year. Our findings reveal that the transition between the teacher's developmental states (as indicated by coherency diagrams) is nonlinear, underscoring the importance of ensuring high coherency right at the initiation stage. Support for the teacher from other teachers and researchers remains an important factor in developing the teacher's competency to leverage the technology successfully. The stability of the KGB region further ensures smooth progression of the teacher's effective integration of technology in the classroom."} {"_id": "94d03985252d5a497feb407f1366fc29b34cb0fc", "title": "EvalVid - A Framework for Video Transmission and Quality Evaluation", "text": "With EvalVid we present a complete framework and tool-set for evaluation of the quality of video transmitted over a real or simulated communication network. Besides measuring QoS parameters of the underlying network, like loss rates, delays, and jitter, we also support a subjective video quality evaluation of the received video based on the frame-by-frame PSNR calculation. The tool-set has a modular construction, making it possible to exchange both the network and the codec. We present here its application for MPEG-4 as an example. EvalVid is targeted at researchers who want to evaluate their network designs or setups in terms of user-perceived video quality.
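EvalVid's quality metric is the frame-by-frame PSNR between sent and received video. That computation is compact enough to show directly; the toy frames below stand in for decoded video:

```python
import numpy as np

def psnr(ref, deg, peak=255.0):
    """PSNR in dB between a reference and a degraded frame (uint8 arrays)."""
    mse = np.mean((ref.astype(np.float64) - deg.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Frame-by-frame PSNR over a toy "video" (a list of frames)
rng = np.random.default_rng(0)
reference = [rng.integers(0, 256, (48, 64), dtype=np.uint8) for _ in range(3)]
received = [np.clip(f.astype(int) + rng.integers(-5, 6, f.shape), 0, 255).astype(np.uint8)
            for f in reference]
print([round(psnr(r, d), 2) for r, d in zip(reference, received)])
```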
The tool-set is publicly available [11]."} {"_id": "fbad8d246083321ffbfdb4b07c28e3868f45e1ce", "title": "Design and control of an actuated thumb exoskeleton for hand rehabilitation following stroke", "text": "Chronic hand impairment is common following stroke. This paper presents an actuated thumb exoskeleton (ATX) to facilitate research in hand rehabilitation therapy. The ATX presented in this work permits independent bi-directional actuation in each of the 5 degrees-of-freedom (DOF) of the thumb using a mechanism that has 5 active DOF and 3 passive DOF. The ATX is able to provide considerable joint torques for the user while still allowing backdrivability through flexible shaft transmission. A prototype has been built and experiments were conducted to evaluate the closed-loop position control. Further improvement and future work are discussed."} {"_id": "917a10ee4d12cf07a59e16e0d3620545218a60c6", "title": "Robotic Cutting: Mechanics and Control of Knife Motion", "text": "Effectiveness of cutting is measured by the ability to achieve material fracture with smooth knife movements. The work performed by the knife overcomes the material toughness, acts against the blade-material friction, and generates shape deformation. This paper studies how to control a 2-DOF robotic arm equipped with a force/torque sensor to cut through an object in a sequence of three moves: press, push, and slice. For each move, a separate control strategy in the Cartesian space is designed to incorporate contact and/or force constraints while following some prescribed trajectory. Experiments conducted over several types of natural foods have demonstrated smooth motions like those that would be commanded by a human hand."} {"_id": "ff00e759842a949776fb15235db1f4a433d4a303", "title": "Pravastatin in elderly individuals at risk of vascular disease (PROSPER): a randomised controlled trial", "text": "BACKGROUND\nAlthough statins reduce coronary and cerebrovascular morbidity and mortality in middle-aged individuals, their efficacy and safety in elderly people is not fully established. Our aim was to test the benefits of pravastatin treatment in an elderly cohort of men and women with, or at high risk of developing, cardiovascular disease and stroke.\n\n\nMETHODS\nWe did a randomised controlled trial in which we assigned 5804 men (n=2804) and women (n=3000) aged 70-82 years with a history of, or risk factors for, vascular disease to pravastatin (40 mg per day; n=2891) or placebo (n=2913). Baseline cholesterol concentrations ranged from 4.0 mmol/L to 9.0 mmol/L. Follow-up was 3.2 years on average and our primary endpoint was a composite of coronary death, non-fatal myocardial infarction, and fatal or non-fatal stroke. Analysis was by intention-to-treat.\n\n\nFINDINGS\nPravastatin lowered LDL cholesterol concentrations by 34% and reduced the incidence of the primary endpoint to 408 events compared with 473 on placebo (hazard ratio 0.85, 95% CI 0.74-0.97, p=0.014). Coronary heart disease death and non-fatal myocardial infarction risk was also reduced (0.81, 0.69-0.94, p=0.006). Stroke risk was unaffected (1.03, 0.81-1.31, p=0.8), but the hazard ratio for transient ischaemic attack was 0.75 (0.55-1.00, p=0.051). New cancer diagnoses were more frequent on pravastatin than on placebo (1.25, 1.04-1.51, p=0.020). However, incorporation of this finding in a meta-analysis of all pravastatin and all statin trials showed no overall increase in risk. Mortality from coronary disease fell by 24% (p=0.043) in the pravastatin group.
Pravastatin had no significant effect on cognitive function or disability.\n\n\nINTERPRETATION\nPravastatin given for 3 years reduced the risk of coronary disease in elderly individuals. PROSPER therefore extends to elderly individuals the treatment strategy currently used in middle-aged people."} {"_id": "26aa2421657e68751f6c9954f157ae54cc27049f", "title": "A framework for evaluating quality-driven self-adaptive software systems", "text": "Over the past decade the dynamic capabilities of self-adaptive software-intensive systems have proliferated and improved significantly. To advance the field of self-adaptive and self-managing systems further and to leverage the benefits of self-adaptation, we need to develop methods and tools to assess and possibly certify adaptation properties of self-adaptive systems, not only at design time but also, and especially, at run-time. In this paper we propose a framework for evaluating quality-driven self-adaptive software systems. Our framework is based on a survey of self-adaptive system papers and a set of adaptation properties derived from control theory properties. We also establish a mapping between these properties and software quality attributes. Thus, corresponding software quality metrics can then be used to assess adaptation properties."} {"_id": "a4fa356a2652a87eaa0fe362b21e89e4b17fd0ed", "title": "A multimodal adaptive session manager for physical rehabilitation exercising", "text": "Physical exercising is an essential part of any rehabilitation plan. The subject must be committed to a daily exercising routine, as well as to frequent contact with the therapist. Rehabilitation plans can be quite expensive and time-consuming. On the other hand, tele-rehabilitation systems can be very helpful and efficient for both subjects and therapists. In this paper, we present ReAdapt, an adaptive module for a tele-rehabilitation system that takes into consideration the progress and performance of the exercising, utilizing multisensing data, and adjusts the session difficulty, resulting in a personalized session. Multimodal data such as speech, facial expressions and body motion are collected during the exercising and fed to the system to decide on the exercise and session difficulty. We formulate the problem as a Markov Decision Process and apply a Reinforcement Learning algorithm to train and evaluate the system on simulated data."} {"_id": "a4fea01c735f9515636291696e0860a17d63918a", "title": "Environmental influence in the brain, human welfare and mental health", "text": "The developing human brain is shaped by environmental exposures\u2014for better or worse. Many exposures relevant to mental health are genuinely social in nature or believed to have social subcomponents, even those related to more complex societal or area-level influences. The nature of how these social experiences are embedded into the environment may be crucial. Here we review select neuroscience evidence on the neural correlates of adverse and protective social exposures in their environmental context, focusing on human neuroimaging data and supporting cellular and molecular studies in laboratory animals.
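The ReAdapt record formulates difficulty adjustment as an MDP solved with reinforcement learning. A minimal tabular Q-learning sketch of that idea follows; the states, actions, and reward scheme are simplified guesses standing in for the paper's formulation:

```python
import random

STATES, ACTIONS = ["low", "ok", "high"], ["easier", "keep", "harder"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2

def simulate(state, action):
    """Toy stand-in environment: raising difficulty while performance is low
    is penalized; raising it when performance is high pays off."""
    if state == "low" and action == "harder":
        return "low", -1.0
    if state == "high" and action == "harder":
        return "ok", 1.0
    return "ok", 0.5

for _ in range(2000):
    state = random.choice(STATES)          # short episodes from random states
    for _ in range(3):
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        nxt, reward = simulate(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print(max(ACTIONS, key=lambda a: Q[("low", a)]))  # learned policy avoids "harder"
```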
We also propose the inclusion of innovative methods in social neuroscience research that may provide new and ecologically more valid insight into the social-environmental risk architecture of the human brain."} {"_id": "d4cb8486072db071eb3388078fb2a01b724ef262", "title": "Compact Rep-Rate GW Pulsed Generator Based on Forming Line With Built-In High-Coupling Transformer", "text": "In this paper, a compact rep-rate GW pulsed power generator is developed. First, its three key subsystems are theoretically analyzed, engineered, and experimentally investigated. Emphasis is put on four problems: the theoretical analysis of the voltage distribution across the conical secondary windings of the high-coupling transformer, the investigation of the high energy storage density dielectric used in the pulse forming line, the choice of the gas flow velocity of the gas blowing system, and the theoretical analysis of the passive stability of the pulsed power generator operated in rep-rate mode. Second, the developed pulsed power generator is described in detail. It has a 0.2-m diameter, a 1.0-m length, and a 20-\u03a9 wave impedance. Across a 100-\u03a9 resistive dummy load, it can steadily operate at a 300-kV output voltage at 50-Hz rep-rate, and at 250 kV at 150 Hz without the gas blowing system. The average power is ~1 kW. Finally, the pulsed power generator is applied to drive a relativistic backward-wave oscillator, generating a high-power microwave with peak output power of 200 MW and duration (full-width at half-maximum) of 5 ns at 150-Hz rep-rate. These efforts set a good foundation for the development of a compact rep-rate pulsed power generator and show a promising application for the future."} {"_id": "ae4f36ed7915db38c0baa21c14ed1455df95a738", "title": "Myanmar Spell Checker", "text": "Natural Language Processing (NLP) is one of the most important research areas concerned with human language. For every language, a spell checker is an essential component of many office automation systems and machine translation systems. In this paper, we develop a Myanmar spell checker system which can handle typographic errors, sequence errors, phonetic errors, and context errors. A Myanmar text corpus is created for developing the Myanmar spell checker. To check typographic errors, a corpus look-up approach is applied. Myanmar3 Unicode is applied in this system so that it can automatically reorder the character sequence. A compound misused-word detection algorithm is proposed for phonetic error checking, and a Bayesian classifier is applied for context error checking. In this system, the Levenshtein distance algorithm is applied to improve users\u2019 efficiency by providing a suggestion list for misspelled Myanmar words.
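The spell-checker record uses Levenshtein distance to rank suggestions for a misspelled word. The classic dynamic-programming computation and a simple suggestion ranking follow; the toy Latin-script lexicon stands in for a Myanmar corpus:

```python
def levenshtein(a, b):
    """Classic DP edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word, lexicon, max_dist=2, k=5):
    """Suggestion list: the k nearest lexicon entries within max_dist edits."""
    ranked = sorted(lexicon, key=lambda w: levenshtein(word, w))
    return [w for w in ranked if levenshtein(word, w) <= max_dist][:k]

print(suggest("speling", ["spelling", "spells", "spieling", "peeling", "sapling"]))
```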
We provide evaluation results of the system, and our approach can handle various types of Myanmar spelling errors."} {"_id": "79b58b2ddabb506637508b6d6d5a2081586d69cf", "title": "A Supervised Named-Entity Extraction System for Medical Text", "text": "[Extraction residue: the abstract text was replaced by a flattened per-token feature table from the paper. Recoverable content: the system labels medical entities such as disorders and anatomy in clinical text using lexical and morphological features (e.g., capitalization, lemma), POS and chunk tags, token classes, dependency relations, document type and section, and UMLS-derived features (CUIs, TUIs, semantic groups).]"} {"_id": "f306c0d24a5eb338b7a577a17d8b35d78716d880", "title": "Dual-Transfer Face Sketch\u2013Photo Synthesis", "text": "Recognizing the identity of a sketched face from a face photograph dataset is a critical yet challenging task in many applications, not least law enforcement and criminal investigations. An intelligent sketched face identification system would rely on automatic face sketch synthesis from photographs, thereby avoiding the cost of artists manually drawing sketches. However, conventional face sketch\u2013photo synthesis methods tend to generate sketches that are consistent with the artists\u2019 drawing styles. Identity-specific information is often overlooked, leading to unsatisfactory identity verification and recognition performance. In this paper, we discuss the reasons why conventional methods fail to recover identity-specific information. Then, we propose a novel dual-transfer face sketch\u2013photo synthesis framework composed of an inter-domain transfer process and an intra-domain transfer process. In the inter-domain transfer, a regressor of the test photograph with respect to the training photographs is learned and transferred to the sketch domain, ensuring the recovery of common facial structures during synthesis. In the intra-domain transfer, a mapping characterizing the relationship between photographs and sketches is learned and transferred across different identities, such that the loss of identity-specific information is suppressed during synthesis. The fusion of information recovered by the two processes is straightforward by virtue of an ad hoc information splitting strategy. We employ both linear and nonlinear formulations to instantiate the proposed framework.
Experiments on The Chinese University of Hong Kong face sketch database demonstrate that, compared to the current state-of-the-art, the proposed framework produces more identifiable facial structures and yields higher face recognition performance in both the photo and sketch domains."} {"_id": "6e7f16af764e42f8386705bbc955fd65ae2a5f71", "title": "Skin microbiota: overview and role in the skin diseases acne vulgaris and rosacea.", "text": "As the first barrier to environmental exposures, human skin has developed an integrated immune system to protect the inner body from chemical, physical or microbial insults. Microorganisms inhabiting superficial skin layers are known as skin microbiota and include bacteria, viruses, archaea and fungi. The microbiota composition is crucial in the instruction and support of the skin's immune system. Changes in microbiota can be due to individual, environmental or behavioral factors, such as age, climate, hygiene or antibiotic consumption, which can cause dysbiosis. The contribution of skin microbiota to disease development is known in atopic dermatitis, where there is an increase in Staphylococcus aureus. Culture-independent studies have enabled more accurate descriptions of this complex interplay. Microbial imbalance is associated with the development of various diseases. This review focuses on microbial imbalances in acne vulgaris and rosacea."} {"_id": "60b87f87bee45a05ab2e711c893fddf621ac37cb", "title": "Eliminating Tight Coupling using Subscriptions Subgrouping in Structured Overlays", "text": "An advertisement and matching subscription are tightly coupled to activate a single publication routing path in content-based publish/subscribe systems. Due to this tight coupling, instantaneous updates in routing tables are required to generate alternative paths for dynamic routing. This poses serious challenges in offering scalable and robust dynamic routing in cyclic overlays when network conditions like link congestion are detected. We propose OctopiA, a distributed publish/subscribe system for content-based dynamic routing in structured cyclic overlays. OctopiA uses a novel concept of subscription subgrouping to divide subscriptions into disjoint sets, called subscription subgroups, to eliminate tight coupling. While aiming at deployment in data center networks, OctopiA generates routing paths of minimum lengths. We use a homogeneous clustering approach with a bit-vector to realize subscription subgrouping and offer inter-cluster dynamic routing without requiring updates in routing tables. Experiments on a cluster testbed with real-world data show that OctopiA reduces the number of saved advertisements in routing tables by 93%, subscription broadcast delay by 33%, and static and dynamic publication delivery delays by 25% and 54%, respectively."} {"_id": "e5c290307d3df32b1aaa872e2b5cd60f3f410f10", "title": "Virtual Reality as an Innovative Setting for Simulations in Education", "text": "The increasingly widespread use of simulations in education underlines the fact that these teaching tools, compared to other methods or media, allow students to approach and experience new topics in a more realistic way. This realism enhances their learning and understanding of complex subjects. So far it has been difficult to interactively simulate three-dimensional dynamic learning content. In this field, the continuing development of Virtual Reality (VR) offers new opportunities for educators to convey a wide variety of subjects.
It is the aim of this paper to characterize the nature of Virtual Reality as an educational setting for simulations, and to show that Studierstube, our multi-user collaborative Virtual Environment, comprises the necessary features for applying VR-techniques to educational purposes. We further discuss the general applicability of VR to various fields of education and demonstrate its specific application as a tool for teaching elementary three-dimensional geometry."} {"_id": "488f675419b6692e388e57e1324db87a82daa895", "title": "Causal discovery and inference: concepts and recent methodological advances", "text": "This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and that in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference."} {"_id": "37eba6d7c346813ceb21c1e17aee53df34527962", "title": "Learning to Reason With Adaptive Computation", "text": "Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading. In this work, we demonstrate the effectiveness of adaptive computation for learning the number of inference steps required for examples of different complexity, and show that learning the correct number of inference steps is difficult. We introduce the first model involving Adaptive Computation Time, which provides a small performance benefit on top of a similar model without an adaptive component, as well as enabling considerable insight into the reasoning process of the model."} {"_id": "b6d2863a6a7afcda3630c83a7d9c353864f50086", "title": "Alignment of Monophonic and Polyphonic Music to a Score", "text": "Music alignment is the association of events in a score with points in the time axis of an audio signal. The signal is thus segmented according to the events in the score. We propose a new methodology for automatic alignment based on dynamic time warping, where the spectral peak structure is used to compute the local distance, enhanced by a model of attacks and of silence.
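The music-alignment record above (continued on the next line) is based on dynamic time warping over a local-distance matrix derived from spectral peak structure. A minimal sketch of the DTW recurrence and path backtracking; the cost matrix is assumed to be precomputed, and the paper's attack/silence model is not reproduced.

```python
import numpy as np

def dtw_align(cost):
    """cost[i, j]: local distance between score event i and audio frame j.
    Returns the total alignment cost and the optimal warping path."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m); padding with inf steers the path to (0, 0).
    path, i, j = [], n, m
    while i > 0 or j > 0:
        path.append((i - 1, j - 1))
        i, j = min(((i - 1, j), (i, j - 1), (i - 1, j - 1)),
                   key=lambda p: D[p])
    path.reverse()
    return D[n, m], path

rng = np.random.default_rng(0)
total, path = dtw_align(rng.random((5, 8)))
print(total, path)
```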
The methodology can cope with performances considered difficult to align, like polyphonic music, trills, fast sequences, or multi-instrument music. An optimisation of the representation of the alignment path makes the method applicable to long sound files, so that unit databases can be fully automatically segmented and labeled. On 708 sequences of synthesised music, we achieved an average offset of 18 ms and an error rate of 2.5%."} {"_id": "3605b9befd5f1b53019b8edb3b3d227901e76c89", "title": "Adaptive Mixtures of Local Experts", "text": "We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network."} {"_id": "4682c9cbcb19aa1b381e64cc119a82b99ffdc6df", "title": "Bayesian Methods for Mixtures of Experts", "text": "We present a Bayesian framework for inferring the parameters of a mixture of experts model based on ensemble learning by variational free energy minimisation. The Bayesian approach avoids the over-fitting and noise level under-estimation problems of traditional maximum likelihood inference. We demonstrate these methods on artificial problems and sunspot time series prediction."} {"_id": "58ceeb151558c1f322b9f6273b47e90e9c04e6b1", "title": "Neural Networks and the Bias/Variance Dilemma", "text": "Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. In way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals."} {"_id": "630cb3d77f6fd9e8e81beecabed54df4ef87c627", "title": "The variational approximation for Bayesian inference", "text": "The influence of Thomas Bayes' work was immense. It was from here that "Bayesian" ideas first spread through the mathematical world, as Bayes's own article was ignored until 1780 and played no important role in scientific debate until the 20th century. It was also this article of Laplace's that introduced the mathematical techniques for the asymptotic analysis of posterior distributions that are still employed today. And it was here that the earliest example of optimum estimation can be found, the derivation and characterization of an estimator that minimized a particular measure of posterior expected loss.
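Illustrating the "Adaptive Mixtures of Local Experts" record above: a gating network weights the outputs of several expert networks per input. The sketch below is a forward pass only, with random linear experts and a softmax gate; the sizes and names are illustrative assumptions, and the paper's training procedure is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # batch of 4 inputs, 8 features
K, D_OUT = 3, 2                              # 3 experts, 2 output dims
W_experts = rng.normal(size=(K, 8, D_OUT))   # one linear expert each
W_gate = rng.normal(size=(8, K))             # gating network weights

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

gates = softmax(X @ W_gate)                           # (4, K) mixing weights
expert_out = np.einsum('nd,kdo->nko', X, W_experts)   # (4, K, D_OUT)
y = np.einsum('nk,nko->no', gates, expert_out)        # gated combination
print(y.shape)   # (4, 2)
```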
After more than two centuries, we mathematicians and statisticians can not only recognize our roots in this masterpiece of our science, but can still learn from it."} {"_id": "3e1dd9fb09c6cddc6bf9d38686bea81986f9dabe", "title": "A multi-objective artificial bee colony algorithm", "text": "This work presents a multi-objective optimization method based on the artificial bee colony, called the MOABC, for optimizing problems with multiple objectives. The MOABC uses a grid-based approach to adaptively assess the Pareto front maintained in an external archive. The external archive is used to control the flying behaviours of the individuals and to structure the bee colony. The employed bees adjust their trajectories based on the non-dominated solutions maintained in the external archive. On the other hand, the onlooker bees select the food sources advertised by the employed bees to update their positions. The qualities of these food sources are computed based on the Pareto dominance notion. The scout bees are used by the MOABC to get rid of food sources with poor qualities. The proposed algorithm was evaluated on a set of standard test problems in comparison with other state-of-the-art algorithms. Experimental results indicate that the proposed approach is competitive compared to other algorithms considered in this work."} {"_id": "9b838caf2aff2040f4ac1b676e6af98b20d8b33c", "title": "Refining Geometry from Depth Sensors using IR Shading Images", "text": "We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses on the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in Kinect I is based on a structured-light technique and that of the Kinect II is based on a time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements."} {"_id": "6f20e254e3993538c79e0ff2b9b8f198d3359cb3", "title": "Receptive fields of single neurones in the cat's striate cortex.", "text": "In the central nervous system the visual pathway from retina to striate cortex provides an opportunity to observe and compare single unit responses at several distinct levels. Patterns of light stimuli most effective in influencing units at one level may no longer be the most effective at the next.
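The MOABC record above maintains an external archive of non-dominated solutions under the Pareto dominance notion. A minimal sketch of that notion for a minimization problem; the paper's grid-based archive management is not reproduced.

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(solutions):
    # Filter a candidate set down to its Pareto front, as an external
    # archive such as the MOABC's would maintain.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

front = non_dominated([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (0.5, 5.0)])
print(front)   # (3.0, 3.0) is dropped: it is dominated by (2.0, 2.0)
```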
From differences in responses at successive stages in the pathway one may hope to gain some understanding of the part each stage plays in visual perception. By shining small spots of light on the light-adapted cat retina Kuffler (1953) showed that ganglion cells have concentric receptive fields, with an 'on' centre and an 'off' periphery, or vice versa. The 'on' and 'off' areas within a receptive field were found to be mutually antagonistic, and a spot restricted to the centre of the field was more effective than one covering the whole receptive field (Barlow, FitzHugh & Kuffler, 1957). In the freely moving light-adapted cat it was found that the great majority of cortical cells studied gave little or no response to light stimuli covering most of the animal's visual field, whereas small spots shone in a restricted retinal region often evoked brisk responses (Hubel, 1959). A moving spot of light often produced stronger responses than a stationary one, and sometimes a moving spot gave more activation for one direction than for the opposite. The present investigation, made in acute preparations, includes a study of receptive fields of cells in the cat's striate cortex. Receptive fields of the cells considered in this paper were divided into separate excitatory and inhibitory ('on' and 'off') areas. In this respect they resembled retinal ganglion-cell receptive fields. However, the shape and arrangement of excitatory and inhibitory areas differed strikingly from the concentric pattern found in retinal ganglion cells. An attempt was made to correlate responses to moving stimuli"} {"_id": "dcbe97cf05f157d43d17111665648fdbd54aa462", "title": "A handheld mirror simulation", "text": "We present the design and construction of a handheld mirror simulation device. The perception of the world reflected through a mirror depends on the viewer's position with respect to the mirror and the 3-D geometry of the world. In order to simulate a real mirror on a computer screen, images of the observed world, consistent with the viewer's position, must be synthesized and displayed in real time. Our system is built around a LCD screen manipulated by the user, a single camera fixed on the screen, and a tracking device. The continuous input video stream and tracker data are used to synthesize, in real time, a continuous video stream displayed on the LCD screen. The synthesized video stream is a close approximation of what the user would see on the screen surface if it were a real mirror. Our system provides a generic interface for applications involving rich, first-person interaction, such as the Virtual Daguerreotype."} {"_id": "95d0e83f8c6c6574d76db83b83923b5f8e1dfdc5", "title": "Magic decorator: automatic material suggestion for indoor digital scenes", "text": "Assigning textures and materials within 3D scenes is a tedious and labor-intensive task. In this paper, we present Magic Decorator, a system that automatically generates material suggestions for 3D indoor scenes. To achieve this goal, we introduce local material rules, which describe typical material patterns for a small group of objects or parts, and global aesthetic rules, which account for the harmony among the entire set of colors in a specific scene. Both rules are obtained from collections of indoor scene images. We cast the problem of material suggestion as a combinatorial optimization considering both local material and global aesthetic rules. We have tested our system on various complex indoor scenes.
A user study indicates that our system can automatically and efficiently produce a series of visually plausible material suggestions which are comparable to those produced by artists."} {"_id": "9e23f09b17891ea03e92b4bf4e2fd7378adb4647", "title": "Efficiency Degradation in Wideband Power Amplifiers", "text": "This paper addresses the efficiency degradation observed in wideband power amplifiers. It starts by presenting a detailed explanation that relates this observed performance degradation with the terminations at baseband. Then, a comparison between two implemented power amplifiers with the same fundamental and harmonic terminations, but with different baseband networks is presented, showing that an optimized bias network design can mitigate the observed efficiency degradation."} {"_id": "76ea8a16454a878b5613f398a62e022097cab39c", "title": "Generating Keyword Queries for Natural Language Queries to Alleviate Lexical Chasm Problem", "text": "In recent years, the task of reformulating natural language queries has received considerable attention from both industry and academic communities. Because of the lexical chasm problem between natural language queries and web documents, if we directly use natural language queries as inputs for retrieval, the results are usually unsatisfactory. In this work, we formulated the task as a translation problem to convert natural language queries into keyword queries. Since the natural language queries users input are diverse and multi-faceted, general encoder-decoder models cannot effectively handle low-frequency words and out-of-vocabulary words. We propose a novel encoder-decoder method with two decoders: the pointer decoder firstly extracts query terms directly from the source text via a copying mechanism, then the generator decoder generates query terms using two attention modules simultaneously considering the source text and extracted query terms. For evaluation and training, we also proposed a semi-automatic method to construct a large-scale dataset of natural language query-keyword query pairs. Experimental results on this dataset demonstrated that our model could achieve better performance than the previous state-of-the-art methods."} {"_id": "9a2607cbf136ae8094e0ad9a7a3c23e07ced9e4d", "title": "Customizing Computational Methods for Visual Analytics with Big Data", "text": "The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data."} {"_id": "483c708a3ca76ddcbd3d0ba3a4fc5355a5611cad", "title": "Few-Shot Learning", "text": "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics the test setting. This approach relies on an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size.
We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art few-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and miniImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech-UCSD Birds dataset."} {"_id": "353e31b5facc3f8016211a95325326c5a2d39033", "title": "Setting Lower Bounds on Truthfulness", "text": "We present general techniques for proving inapproximability results for several paradigmatic truthful multidimensional mechanism design problems. In particular, we demonstrate the strength of our techniques by exhibiting a lower bound of 2 \u2212 1/m for the scheduling problem with m unrelated machines (formulated as a mechanism design problem in the seminal paper of Nisan and Ronen on Algorithmic Mechanism Design). Our lower bound applies to truthful randomized mechanisms, regardless of any computational assumptions on the running time of these mechanisms. Moreover, it holds even for the wider class of truthfulness-in-expectation mechanisms. This lower bound nearly matches the known 1.58606 randomized truthful upper bound for the case of two machines (a non-truthful FPTAS exists). Recently, Daskalakis and Weinberg [17] show that there is a polynomial-time 2-approximately optimal Bayesian mechanism for makespan minimization for unrelated machines. We complement this result by showing an appropriate lower bound of 1.25 for deterministic incentive compatible Bayesian mechanisms. We then show an application of our techniques to the workload-minimization problem in networks. We prove our lower bounds for this problem in the inter-domain routing setting presented by Feigenbaum, Papadimitriou, Sami, and Shenker. Finally, we discuss several notions of non-utilitarian fairness (Max-Min fairness, Min-Max fairness, and envy minimization) and show how our techniques can be used to prove lower bounds for these notions. No lower bounds for truthful mechanisms in multidimensional probabilistic settings were previously known."} {"_id": "e9146b13946071d6273012aeb90b09d8c39eb25c", "title": "A d\u2013q Voltage Droop Control Method With Dynamically Phase-Shifted Phase-Locked Loop for Inverter Paralleling Without Any Communication Between Individual Inverters", "text": "This paper presents a modified droop control method for equal load sharing of parallel-connected inverters, without any communication between individual inverters. Droop in the d- and q-axis voltages is applied depending upon the d- and q-axis currents, respectively.
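A minimal numpy sketch of the prototypical-networks idea from the few-shot record above: prototypes are class means of embedded support points, and queries are classified by the nearest prototype under squared Euclidean distance. The embedding network itself is assumed and replaced here by random vectors.

```python
import numpy as np

def prototypes(support_emb, support_lab):
    # One prototype per class: the mean of that class's support embeddings.
    classes = np.unique(support_lab)
    return classes, np.stack([support_emb[support_lab == c].mean(axis=0)
                              for c in classes])

def classify(query_emb, classes, protos):
    # Nearest prototype under squared Euclidean distance.
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(1)
emb = rng.normal(size=(10, 64))               # embedded support set
lab = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
classes, protos = prototypes(emb, lab)
print(classify(rng.normal(size=(3, 64)), classes, protos))
```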
Each inverter works in the voltage control mode, where it controls the filter capacitor voltage. Voltage references of each inverter come from the d- and q-axis voltage droops. These d- and q-axis voltage droops force the parallel-connected inverters to share equal current and hence equal active and reactive power. A dynamically phase-shifted phase-locked loop (PLL) technique is locally designed for generating the phase reference of each inverter. The phase angle between the filter capacitor voltage vector and the d-axis is dynamically adjusted with the change in q-axis inverter current to generate the phase reference of each inverter. The strategy has been verified with simulations and experiments, and the results are presented in this paper."} {"_id": "d6ae945f3a12f1151be8c4f4e5a41c486205f682", "title": "A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning", "text": "Insertion is a challenging haptic and visual control problem with significant practical value for manufacturing. Existing approaches in the model-based robotics community can be highly effective when task geometry is known, but are complex and cumbersome to implement, and must be tailored to each individual problem by a qualified engineer. Within the learning community there is a long history of insertion research, but existing approaches are typically either too sample-inefficient to run on real robots, or assume access to high-level object features, e.g. socket pose. In this paper we show that relatively minor modifications to an off-the-shelf Deep-RL algorithm (DDPG), combined with a small number of human demonstrations, allow the robot to quickly learn to solve these tasks efficiently and robustly. Our approach requires no modeling or simulation, no parameterized search or alignment behaviors, no vision system aside from raw images, and no reward shaping. We evaluate our approach on a narrow-clearance peg-insertion task and a deformable clip-insertion task, both of which include variability in the socket position. Our results show that these tasks can be solved reliably on the real robot in less than 10 minutes of interaction time, and that the resulting policies are robust to variance in the socket position and orientation."} {"_id": "ff74b6935cd793f1ab2a8029d9fc81e8bc7e065b", "title": "LTE in the sky: trading off propagation benefits with interference costs for aerial nodes", "text": "The popularity of unmanned aerial vehicles has exploded over the last few years, urgently demanding solutions to transfer large amounts of data from the UAV to the ground. Conversely, a control channel to the UAV is desired, in order to safely operate these vehicles remotely. This article analyzes the use of LTE for realizing this downlink data and uplink control. By means of measurements and simulations, we study the impact of interference and path loss when transmitting data to and from the UAV. Two scenarios are considered in which UAVs act as either base stations transmitting in downlink or UEs transmitting in uplink, and their impact on the respective downlink and uplink performance of an LTE ground network is analyzed. Both measurements and simulations are used to quantify such impact for a range of scenarios with varying altitude, distance from the base station, or UAV density. The measurement sets show that signal-to-interference ratio decreases up to 7 dB for UAVs at 150 m compared to ground users.
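A minimal sketch of the d-q voltage droop law described in the droop-control record above: each axis voltage reference is drooped against its own axis current, which is what pushes paralleled inverters toward equal current sharing. The per-unit nominal values and droop gains are invented for illustration; the paper's dynamically phase-shifted PLL is not modeled.

```python
# Hypothetical per-unit values: nominal d-axis voltage 1.0, q-axis 0.0;
# M_D, M_Q are droop gains (voltage drop per unit of axis current).
V_D_NOM, V_Q_NOM = 1.0, 0.0
M_D, M_Q = 0.05, 0.05

def droop_references(i_d, i_q):
    """Voltage references for one inverter's capacitor-voltage controller.
    A unit carrying more d-axis current is commanded a lower d-axis
    voltage, shedding load to its peers until currents equalize."""
    v_d_ref = V_D_NOM - M_D * i_d
    v_q_ref = V_Q_NOM - M_Q * i_q
    return v_d_ref, v_q_ref

print(droop_references(i_d=0.8, i_q=0.1))   # -> (0.96, -0.005)
```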
Simulation results show that a UAV density of 10/km2 gives an average degradation of the signal-to-interference ratio of more than 6 dB. It is concluded that interference is going to be a major limiting factor when LTE-enabled UAVs are introduced, and that strong technical solutions will have to be found."} {"_id": "e567683dd6c1d226a8a34d1915fe74e04a3a585b", "title": "Scattered data modeling", "text": "A variety of methods for modeling scattered data are discussed, with an emphasis on two types of data: volumetric and spherical. To demonstrate the performance of the various methods, results from an empirical study using trivariate scattered data are presented. The author's objective is to provide information to aid in selecting a type of method or to form the basis for customizing a method for a particular application."} {"_id": "0421b5198da32440eaf1275b0cb247332e7330b9", "title": "Benign Osteoblastoma Involving Maxilla: A Case Report and Review of the Literature", "text": "Background. Osteoblastoma is a rare benign tumor. This tumor is characterized by osteoid and bone formation with the presence of numerous osteoblasts. The lesion is more frequently seen in long bones and rarely involves the maxilla and mandible. Due to its clinical and histological similarity with other bone tumors such as osteoid osteoma and fibro-osseous lesions, osteoblastoma presents a diagnostic dilemma. Case Report. Very few cases of osteoblastomas involving the maxillofacial region have been reported in the literature. This case report involves osteoblastoma of the right maxilla in an 18-year-old male patient. Following detailed clinical examination, radiological interpretation, and histopathological diagnosis, surgical excision was performed. The patient was followed up for a period of 3 years and was disease free. Summary and Conclusion. Benign osteoblastoma involving the jaw bones is a rare tumor. There is a close resemblance of this tumor to other lesions such as fibro-osseous lesions and odontogenic tumors, and it thus poses a diagnostic challenge. Surgical excision with long-term follow-up gives a good prognosis for this lesion."} {"_id": "a5799efdd5a7117a0d6e8b9a5ce0055d7a4499b4", "title": "Almost Optimal Exploration in Multi-Armed Bandits", "text": "We study the problem of exploration in stochastic Multi-Armed Bandits. Even in the simplest setting of identifying the best arm, there remains a logarithmic multiplicative gap between the known lower and upper bounds for the number of arm pulls required for the task. This extra logarithmic factor is quite meaningful in today's large-scale applications. We present two novel, parameter-free algorithms for identifying the best arm, in two different settings: given a target confidence and given a target budget of arm pulls, for which we prove upper bounds whose gap from the lower bound is only doubly logarithmic in the problem parameters. We corroborate our theoretical results with experiments demonstrating that our algorithm outperforms the state-of-the-art and scales better as the size of the problem increases."} {"_id": "dbe1f69118e5cd182add6ba115cb0d27a06a437a", "title": "Buck-boost converter fed BLDC motor drive for solar PV array based water pumping", "text": "Solar photovoltaic (SPV) array based water pumping is receiving wide attention nowadays because everlasting solar energy is the best alternative to conventional energy sources.
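The bandits record above studies fixed-confidence and fixed-budget best-arm identification. The sketch below implements sequential halving, a standard fixed-budget strategy in this family, not the paper's specific algorithms; the arm distributions and budget are illustrative assumptions.

```python
import math, random

def sequential_halving(arms, budget):
    """Fixed-budget best-arm identification: repeatedly split the budget
    across the surviving arms, then keep the better-scoring half.
    `arms` are zero-argument callables returning stochastic rewards."""
    survivors = list(range(len(arms)))
    rounds = max(1, math.ceil(math.log2(len(arms))))
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        pulls = max(1, budget // (len(survivors) * rounds))
        means = {i: sum(arms[i]() for _ in range(pulls)) / pulls
                 for i in survivors}
        survivors.sort(key=lambda i: means[i], reverse=True)
        survivors = survivors[:max(1, len(survivors) // 2)]
    return survivors[0]

random.seed(0)
bandit = [lambda p=p: random.gauss(p, 1.0) for p in (0.1, 0.3, 0.9, 0.5)]
print(sequential_halving(bandit, budget=4000))   # most likely arm 2
```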
This paper deals with the utilization of a buck-boost converter in solar PV array based water pumping as an intermediate DC-DC converter between a solar PV array and a voltage source inverter (VSI) in order to achieve the maximum efficiency of the solar PV array and the soft starting of the permanent magnet brushless DC (BLDC) motor by proper control. Consisting of the least number of components and a single switch, the buck-boost converter exhibits very good conversion efficiency. Moreover, the buck-boost DC-DC converter topology is the only one allowing follow-up of the PV array maximum power point (MPP) regardless of temperature, irradiance, and connected load. A BLDC motor is employed to drive a centrifugal type of water pump because its load characteristic is well matched to the maximum power locus of the PV generator. The transient, dynamic and steady state behaviors of the proposed solar PV array powered buck-boost converter fed BLDC motor driven water pumping system are evaluated under rapid and slowly varying atmospheric conditions using the sim-power-system toolboxes of the MATLAB/Simulink environment."} {"_id": "75620a00ac0b00c5d94185409dd77b38d9cebe0e", "title": "Event detection from social media: 5W1H analysis on big data", "text": "The increasing amount of social media content shared across the globe in real time creates a fascinating area of research. For most events, social media users act as collective sensors by sharing important information about events of any scale. Due to its real-time nature, social media is much quicker to respond to such events relative to conventional news media. This paper proposes an event detection system which provides 5W1H (What, Where, When, Why, Who, How) analysis for each detected event. We make use of a myriad of techniques such as anomaly detection, named entity recognition, automatic summary generation, and user link analysis. Our experimental results for the system indicate a faster event detection performance compared to conventional news sources. Event analysis results are also in line with the corresponding news articles about detected events."} {"_id": "288366387becf5a15eda72c94a18e5b4f179578f", "title": "Persistence and periodicity in a dynamic proximity network", "text": "The topology of social networks can be understood as being inherently dynamic, with edges having a distinct position in time. Most characterizations of dynamic networks discretize time by converting temporal information into a sequence of network \u201csnapshots\u201d for further analysis. Here we study a highly resolved data set of a dynamic proximity network of 66 individuals. We show that the topology of this network evolves over a very broad distribution of time scales, that its behavior is characterized by strong periodicities driven by external calendar cycles, and that the conversion of inherently continuous-time data into a sequence of snapshots can produce highly biased estimates of network structure. We suggest that dynamic social networks exhibit a natural time scale \u2206nat, and that the best conversion of such dynamic data to a discrete sequence of networks is done at this natural rate."} {"_id": "a7b1a3901b9a55d042d7287dee32b5fcc6c4a97b", "title": "Metal-oxide-semiconductor field-effect transistor with a vacuum channel.", "text": "High-speed electronic devices rely on short carrier transport times, which are usually achieved by decreasing the channel length and/or increasing the carrier velocity.
Ideally, the carriers enter into a ballistic transport regime in which they are not scattered. However, it is difficult to achieve ballistic transport in a solid-state medium because the high electric fields used to increase the carrier velocity also increase scattering. Vacuum is an ideal medium for ballistic transport, but vacuum electronic devices commonly suffer from low emission currents and high operating voltages. Here, we report the fabrication of a low-voltage field-effect transistor with a vertical vacuum channel (channel length of ~20 nm) etched into a metal-oxide-semiconductor substrate. We measure a transconductance of 20 nS \u00b5m^-1, an on/off ratio of 500 and a turn-on gate voltage of 0.5 V under ambient conditions. Coulombic repulsion in the two-dimensional electron system at the interface between the oxide and the metal or the semiconductor reduces the energy barrier to electron emission, leading to a high emission current density (~1 \u00d7 10^5 A cm^-2) under a bias of only 1 V. The emission of two-dimensional electron systems into vacuum channels could enable a new class of low-power, high-speed transistors."} {"_id": "97a3f0901e715c12b39555194cad91818f22aa8a", "title": "Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon", "text": "Do people really care about their privacy? Surveys show that privacy is a primary concern for citizens in the digital age. On the other hand, individuals reveal personal information for relatively small rewards, often just for drawing the attention of peers in an online social network. This inconsistency of privacy attitudes and privacy behavior is often referred to as the \u201cprivacy paradox\u201d. In this paper, we present the results of a review of research literature on the privacy paradox. We analyze studies that provide evidence of a paradoxical dichotomy between attitudes and behavior and studies that challenge the existence of such a phenomenon. The diverse research results are explained by the diversity in research methods, the different contexts and the different conceptualizations of the privacy paradox. We also present several interpretations of the privacy paradox, stemming from social theory, psychology, behavioral economics and, in one case, from quantum theory. We conclude that current research has improved our understanding of the privacy paradox phenomenon. It is, however, a complex phenomenon that requires extensive further research. Thus, we call for synthetic studies to be based on comprehensive theoretical models that take into account the diversity of personal information and the diversity of privacy concerns. We suggest that future studies should use evidence of actual behavior rather than self-reported behavior."} {"_id": "3007a8f5416404432166ff3f0158356624d282a1", "title": "GraphBuilder: scalable graph ETL framework", "text": "Graph abstraction is essential for many applications from finding a shortest path to executing complex machine learning (ML) algorithms like collaborative filtering. Graph construction from raw data for various applications is becoming challenging, due to exponential growth in data, as well as the need for large scale graph processing. Since graph construction is a data-parallel problem, MapReduce is well-suited for this task.
We developed GraphBuilder, a scalable framework for graph Extract-Transform-Load (ETL), to offload many of the complexities of graph construction, including graph formation, tabulation, transformation, partitioning, output formatting, and serialization. GraphBuilder is written in Java, for ease of programming, and it scales using the MapReduce model. In this paper, we describe the motivation for GraphBuilder, its architecture, MapReduce algorithms, and performance evaluation of the framework. Since large graphs should be partitioned over a cluster for storing and processing and partitioning methods have significant performance impacts, we develop several graph partitioning methods and evaluate their performance. We also open source the framework at https://01.org/graphbuilder/."} {"_id": "74b5640a6611a96cfa469cbd01fd7ea250b7eaed", "title": "Convolutional neural networks for SAR image segmentation", "text": "Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides this contribution we also suggest a new way to do pixel wise annotation of SAR images that replaces a human expert manual segmentation process, which is both slow and troublesome. Our method for annotation relies on 3D CAD models of objects and scene, and converts these to labels for all pixels in a SAR image. Our algorithms are evaluated on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset which was released by the Defence Advanced Research Projects Agency during the 1990s. The method is not restricted to the type of targets imaged in MSTAR but can easily be extended to any SAR data where prior information about scene geometries can be estimated."} {"_id": "20eb9c6085fc129b4f7348809599542e324c5d23", "title": "Analysis and implementation of software rejuvenation in cluster systems", "text": "Several recent studies have reported the phenomenon of \"software aging\", one in which the state of a software system degrades with time. This may eventually lead to performance degradation of the software or crash/hang failure or both. \"Software rejuvenation\" is a pro-active technique aimed to prevent unexpected or unplanned outages due to aging. The basic idea is to stop the running software, clean its internal state and restart it. In this paper, we discuss software rejuvenation as applied to cluster systems. This is both an innovative and an efficient way to improve cluster system availability and productivity. Using Stochastic Reward Nets (SRNs), we model and analyze cluster systems which employ software rejuvenation. For our proposed time-based rejuvenation policy, we determine the optimal rejuvenation interval based on system availability and cost. We also introduce a new rejuvenation policy based on prediction and show that it can dramatically increase system availability and reduce downtime cost. These models are very general and can capture a multitude of cluster system characteristics, failure behavior and performability measures, which we are just beginning to explore. 
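The GraphBuilder record above frames graph construction as a data-parallel map (raw record to candidate edges) followed by a reduce (tabulation and deduplication). A toy single-process analogue of that pattern; the edge-extraction function and the example data are assumptions, and a real deployment would run the two phases as MapReduce jobs.

```python
from collections import defaultdict

def build_edge_list(records, extract_edges):
    """'Map' raw records to candidate edges, then 'reduce' by key to
    deduplicate and tabulate weights. extract_edges(record) yields
    (src, dst) pairs and is the application-specific part."""
    weights = defaultdict(int)
    for rec in records:                       # map phase
        for src, dst in extract_edges(rec):
            weights[(src, dst)] += 1          # reduce phase: tabulation
    return [(s, d, w) for (s, d), w in sorted(weights.items())]

# Example: co-occurrence edges between the terms of a document record.
docs = [["graph", "etl"], ["graph", "mapreduce"], ["graph", "etl"]]
pairs = lambda terms: [(a, b) for a in terms for b in terms if a < b]
print(build_edge_list(docs, pairs))
# [('etl', 'graph', 2), ('graph', 'mapreduce', 1)]
```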
We then briefly describe an implementation of a software rejuvenation system that performs periodic and predictive rejuvenation, and show some empirical data from systems that exhibit aging."} {"_id": "2e526c2fac79c080b818b304485ddf84d09cf08b", "title": "Predicting Rare Events In Temporal Domains", "text": "Temporal data mining aims at finding patterns in historical data. Our work proposes an approach to extract temporal patterns from data to predict the occurrence of target events, such as computer attacks on host networks, or fraudulent transactions in financial institutions. Our problem formulation exhibits two major challenges: 1) we assume events are characterized by categorical features and display uneven inter-arrival times; such an assumption falls outside the scope of classical time-series analysis, 2) we assume target events are highly infrequent; predictive techniques must deal with the class-imbalance problem. We propose an efficient algorithm that tackles the challenges above by transforming the event prediction problem into a search for all frequent eventsets preceding target events. The class imbalance problem is overcome by a search for patterns on the minority class exclusively; the discrimination power of patterns is then validated against other classes. Patterns are then combined into a rule-based model for prediction. Our experimental analysis indicates the types of event sequences where target events can be accurately predicted."} {"_id": "95a357bf35e78cc2efa14b2d03913616700174b4", "title": "The impact of technology scaling on lifetime reliability", "text": "The relentless scaling of CMOS technology has provided a steady increase in processor performance for the past three decades. However, increased power densities (hence temperatures) and other scaling effects have an adverse impact on long-term processor lifetime reliability. This paper represents a first attempt at quantifying the impact of scaling on lifetime reliability due to intrinsic hard errors, taking workload characteristics into consideration. For our quantitative evaluation, we use RAMP (Srinivasan et al., 2004), a previously proposed industrial-strength model that provides reliability estimates for a workload, but for a given technology. We extend RAMP by adding scaling-specific parameters to enable workload-dependent lifetime reliability evaluation at different technologies. We show that (1) scaling has a significant impact on processor hard failure rates - on average, with SPEC benchmarks, we find the failure rate of a scaled 65nm processor to be 316% higher than a similarly pipelined 180nm processor; (2) time-dependent dielectric breakdown and electromigration have the largest increases; and (3) with scaling, the difference in reliability from running at worst-case vs. typical workload operating conditions increases significantly, as does the difference from running different workloads.
Our results imply that leveraging a single microarchitecture design for multiple remaps across a few technology generations will become increasingly difficult, and motivate a need for workload-specific, microarchitectural lifetime reliability awareness at an early design stage."} {"_id": "aff00508b54357ee5cf766401c8677773c833dba", "title": "Temperature-Aware Microarchitecture", "text": "With power density and hence cooling costs rising exponentially, processor packaging can no longer be designed for the worst case, and there is an urgent need for runtime processor-level techniques that can regulate operating temperature when the package's capacity is exceeded. Evaluating such techniques, however, requires a thermal model that is practical for architectural studies. This paper describes HotSpot, an accurate yet fast model based on an equivalent circuit of thermal resistances and capacitances that correspond to microarchitecture blocks and essential aspects of the thermal package. Validation was performed using finite-element simulation. The paper also introduces several effective methods for dynamic thermal management (DTM): \u201ctemperature-tracking\u201d frequency scaling, localized toggling, and migrating computation to spare hardware units. Modeling temperature at the microarchitecture level also shows that power metrics are poor predictors of temperature, and that sensor imprecision has a substantial impact on the performance of DTM."} {"_id": "2e1dab46b0547f4a08adf8d4dfffc9e8cd6b0054", "title": "CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules", "text": "Previous studies propose that associative classification has high classification accuracy and strong flexibility at handling unstructured data. However, it still suffers from the huge set of mined rules and sometimes biased classification or overfitting since the classification is based on only a single high-confidence rule. In this study, we propose a new associative classification method, CMAR, i.e., Classification based on Multiple Association Rules. The method extends an efficient frequent pattern mining method, FP-growth, constructs a class distribution-associated FP-tree, and mines large databases efficiently. Moreover, it applies a CR-tree structure to store and retrieve mined association rules efficiently, and prunes rules effectively based on confidence, correlation and database coverage. The classification is performed based on a weighted analysis using multiple strong association rules. Our extensive experiments on databases from the UCI machine learning database repository show that CMAR is consistent, highly effective at classification of various kinds of databases and has better average classification accuracy in comparison with CBA and C4.5. Moreover, our performance study shows that the method is highly efficient and scalable in comparison with other reported associative classification methods."} {"_id": "3970adba1270ab5aa9bd5e0cc2a46e870f369956", "title": "ApLeaf: An efficient android-based plant leaf identification system", "text": "To automatically identify plant species is very useful for ecologists, amateur botanists, educators, and so on. Leafsnap is the first successful mobile application system to tackle this problem. However, Leafsnap is based on the iOS platform, and as a mobile operating system, Android is more popular than iOS.
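The CMAR record above classifies with a weighted analysis over all matching rules rather than a single best rule. A simplified sketch of that multi-rule decision; the per-rule weights here stand in for CMAR's weighted chi-square measure, and the rule mining (FP-growth) and pruning steps are not shown.

```python
from collections import defaultdict

def classify(item, rules):
    """Gather every rule whose antecedent matches the item, group the
    matches by predicted class, and score each class by combined rule
    strength (an illustrative proxy for CMAR's weighted chi-square)."""
    scores = defaultdict(float)
    for antecedent, cls, weight in rules:
        if antecedent <= item:          # antecedent is a subset of item
            scores[cls] += weight
    return max(scores, key=scores.get) if scores else None

rules = [(frozenset({"a", "b"}), "yes", 3.2),
         (frozenset({"c"}), "no", 1.1),
         (frozenset({"b"}), "yes", 0.7)]
print(classify({"a", "b", "d"}, rules))   # -> 'yes'
```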
In this paper, an Android-based mobile application designed to automatically identify plant species from photographs of tree leaves is described. In this application, a leaf image can be either a digital image from an existing leaf image database or a picture collected by a camera. The picture should be a single leaf placed on a light and untextured background without other clutter. The identification process consists of three steps: leaf image segmentation, feature extraction, and species identification. The demo system is evaluated on the ImageCLEF2012 Plant Identification database which contains 126 tree species from the French Mediterranean area. The outputs of the system to users are the top several species which match the query leaf image the best, as well as textual descriptions and additional images about plant leaves, flowers, etc. Our system works well with state-of-the-art identification performance."} {"_id": "307e641b7e8713b5a37ece0b44809cd6dacc418b", "title": "Usability evaluation considered harmful (some of the time)", "text": "Current practice in Human Computer Interaction as encouraged by educational institutes, academic review processes, and institutions with usability groups advocates usability evaluation as a critical part of every design process. This is for good reason: usability evaluation has a significant role to play when conditions warrant it. Yet evaluation can be ineffective and even harmful if naively done 'by rule' rather than 'by thought'. If done during early stage design, it can mute creative ideas that do not conform to current interface norms. If done to test radical innovations, the many interface issues that would likely arise from an immature technology can quash what could have been an inspired vision. If done to validate an academic prototype, it may incorrectly suggest a design's scientific worthiness rather than offer a meaningful critique of how it would be adopted and used in everyday practice. If done without regard to how cultures adopt technology over time, then today's reluctant reactions by users will forestall tomorrow's eager acceptance. The choice of evaluation methodology - if any - must arise from and be appropriate for the actual problem or research question under consideration."} {"_id": "507844d654911a6bbbe9690e5396c91d07c39b3f", "title": "Competing for Attention in Social Media under Information Overload Conditions", "text": "Modern social media are becoming overloaded with information because of the rapidly-expanding number of information feeds. We analyze the user-generated content in Sina Weibo, and find evidence that the spread of popular messages often follows a mechanism that differs from the spread of disease, in contrast to common belief. In this mechanism, an individual with more friends needs more repeated exposures to further spread the information. Moreover, our data suggest that for certain messages the chance of an individual to share the message is proportional to the fraction of its neighbours who shared it with him/her, which is a result of competition for attention. We model this process using a fractional susceptible infected recovered (FSIR) model, where the infection probability of a node is proportional to its fraction of infected neighbors. Our findings have dramatic implications for information contagion. For example, using the FSIR model we find that real-world social networks have a finite epidemic threshold in contrast to the zero threshold in disease epidemic models.
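A minimal simulation sketch of the FSIR dynamic described in the record above, where a susceptible node's infection probability is proportional to the fraction of its infected neighbours. The toy graph and the beta and gamma values are illustrative assumptions.

```python
import random

def fsir_step(graph, state, beta, gamma):
    """One step of the fractional SIR dynamic: a susceptible node becomes
    infected with probability beta * (fraction of infected neighbours);
    infected nodes recover with probability gamma.
    state maps node -> 'S' | 'I' | 'R'."""
    new = dict(state)
    for v, nbrs in graph.items():
        if state[v] == 'S' and nbrs:
            frac = sum(state[u] == 'I' for u in nbrs) / len(nbrs)
            if random.random() < beta * frac:
                new[v] = 'I'
        elif state[v] == 'I' and random.random() < gamma:
            new[v] = 'R'
    return new

random.seed(1)
g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
s = {0: 'I', 1: 'S', 2: 'S', 3: 'S'}
for _ in range(20):
    s = fsir_step(g, s, beta=0.8, gamma=0.2)
print(s)
```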
This means that when individuals are overloaded with excess information feeds, information either spreads through the population, if it is above the critical epidemic threshold, or is never widely received."} {"_id": "3fe06b70c2ce351162da36ee7cac060279e9c23f", "title": "An adaptive FEC algorithm using hidden Markov chains", "text": "A number of performance issues must be addressed in order to transmit continuous media streams over the Internet with acceptable quality [9, 6]. These include reducing jitter and recovering from packet losses mainly due to congestion in the routers. Several methods can be used to deal with the loss of a packet, such as: retransmission schemes, error concealment, error resilience, interleaving, and FEC (forward error correction). Each has its advantages/disadvantages depending on the network scenario and conditions (RTT, propagation delay, topology, channel error rate, congestion in the sender/receiver path, etc.). In this work we focus on FEC techniques. FEC works by adding redundant information to the stream being transmitted in order to reconstruct the original stream at the receiver when packet losses occur. Among its advantages are the small delays introduced to recover information compared to retransmission, for instance, and simplicity of implementation. A clear disadvantage is the increase in the transmission rate due to the redundancy added. One of the simplest approaches among the FEC techniques that have been proposed in the literature [11] is to add a packet to each group of N-1 packets with a payload equal to the result of an XOR operation performed on the group. Clearly, if one packet out of the N is lost, the stream is completely recovered. This is accomplished at the expense of extra bandwidth, in this example an increase of 1/(N-1). In [4] a clever FEC scheme was proposed in which each packet carries a sample of the previously transmitted packet, but compression is used to reduce the overhead. In [7] another technique was developed aimed at maintaining a good compromise between recovery efficiency and bandwidth usage. Since packet losses are mostly due to congestion, increasing the amount of protection using FEC may be unfair to flow-controlled streams and may even have an adverse effect on the results [1]. Therefore, the FEC algorithm should be carefully chosen according to the error characteristics of the path between the sender and receiver. Another issue is to develop accurate models of the loss process. Many studies exist to characterize the loss process [2, 14, 8]. Simple models such as the Bernoulli process, the Gilbert model and sophisticated Markov models have been proposed. Using either the Bernoulli or the Gilbert process, the work of [3] proposes an adaptive algorithm based on the FEC scheme of [4]. Recently, the work of Salamatian and Vaton [13] has shown that Hidden"} {"_id": "899545faba813316feb194438397eb0530f31b6c", "title": "Brain\u2013machine interface via real-time fMRI: Preliminary study on thought-controlled robotic arm", "text": "Real-time functional MRI (rtfMRI) has been used as a basis for brain-computer interface (BCI) due to its ability to characterize region-specific brain activity in real-time. As an extension of BCI, we present an rtfMRI-based brain-machine interface (BMI) whereby 2-dimensional movement of a robotic arm was controlled by the regulation (and concurrent detection) of regional cortical activations in the primary motor areas.
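The FEC record above describes the classic single-parity scheme: one XOR packet protects a group of N-1 data packets and recovers any single loss. A minimal byte-wise sketch, assuming equal-length packets:

```python
def xor_parity(packets):
    # One parity packet for a group of N-1 data packets: byte-wise XOR.
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    # If exactly one packet of the group is lost (None), XORing the
    # parity with the survivors reconstructs it.
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) == 1, "single-loss recovery only"
    rec = bytearray(parity)
    for p in received:
        if p is not None:
            for i, byte in enumerate(p):
                rec[i] ^= byte
    return bytes(rec)

group = [b"pkt1", b"pkt2", b"pkt3"]
par = xor_parity(group)
print(recover([b"pkt1", None, b"pkt3"], par))   # b'pkt2'
```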
To do so, the subjects were engaged in right- and/or left-hand motor imagery tasks. The blood oxygenation level dependent (BOLD) signal originating from the corresponding hand motor areas was then translated into horizontal or vertical robotic arm movement. The movement was broadcast visually back to the subject as feedback. We demonstrated that real-time control of the robotic arm only through the subjects' thought processes was possible using the rtfMRI-based BMI trials."} {"_id": "73e936f8d4f4e6fc762c368d53b74fa5043eed2b", "title": "Similarity-Based Approach for Positive and Unlabeled Learning", "text": "Positive and unlabelled learning (PU learning) has been investigated to deal with the situation where only positive examples and unlabelled examples are available. Most of the previous works focus on identifying some negative examples from the unlabelled data, so that supervised learning methods can be applied to build a classifier. However, for the remaining unlabelled data, which cannot be explicitly identified as positive or negative (we call them ambiguous examples), previous works either exclude them from the training phase or simply force them into one of the two classes. Consequently, their performance may be constrained. This paper proposes a novel approach, called the similarity-based PU learning (SPUL) method, which associates the ambiguous examples with two similarity weights, indicating the similarity of an ambiguous example towards the positive class and the negative class, respectively. Local similarity-based and global similarity-based mechanisms are proposed to generate the similarity weights. The ambiguous examples and their similarity weights are thereafter incorporated into an SVM-based learning phase to build a more accurate classifier. Extensive experiments on real-world datasets have shown that SPUL outperforms state-of-the-art PU learning methods."} {"_id": "a046d7034c48e9a8c0efa69cbbf28c16d22fe273", "title": "Systematic Variation of Prosthetic Foot Spring Affects Center-of-Mass Mechanics and Metabolic Cost During Walking", "text": "Lower-limb amputees expend more energy to walk than non-amputees and have an elevated risk of secondary disabilities. Insufficient push-off by the prosthetic foot may be a contributing factor. We aimed to systematically study the effect of prosthetic foot mechanics on gait, to gain insight into fundamental prosthetic design principles. We varied a single parameter in isolation, the energy-storing spring in a prototype prosthetic foot, the controlled energy storage and return (CESR) foot, and observed the effect on gait. Subjects walked on the CESR foot with three different springs. We performed parallel studies on amputees and on non-amputees wearing prosthetic simulators. In both groups, spring characteristics similarly affected ankle and body center-of-mass (COM) mechanics and metabolic cost. Softer springs led to greater energy storage, energy return, and prosthetic limb COM push-off work. But metabolic energy expenditure was lowest with a spring of intermediate stiffness, suggesting biomechanical disadvantages to the softest spring despite its greater push-off. Disadvantages of the softest spring may include excessive heel displacements and COM collision losses. We also observed some differences in joint kinetics between amputees and non-amputees walking on the prototype foot.
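The similarity-weighting idea in the SPUL abstract above can be sketched as follows; the centroid-based weighting rule is an illustrative stand-in for the paper's local and global similarity mechanisms, which the abstract does not spell out.

```python
# Hedged sketch of similarity weighting for ambiguous examples in PU
# learning. The centroid-distance rule below is an illustrative
# assumption, not the paper's exact local/global mechanism.
import numpy as np

def similarity_weights(X_amb, X_pos, X_neg):
    """Return (w_pos, w_neg) in [0, 1] for each ambiguous example."""
    c_pos, c_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
    d_pos = np.linalg.norm(X_amb - c_pos, axis=1)
    d_neg = np.linalg.norm(X_amb - c_neg, axis=1)
    # Closer to the positive centroid -> higher positive weight.
    w_pos = d_neg / (d_pos + d_neg + 1e-12)
    return w_pos, 1.0 - w_pos

X_pos = np.array([[1.0, 1.0], [1.2, 0.9]])
X_neg = np.array([[-1.0, -1.0], [-0.8, -1.1]])
X_amb = np.array([[0.5, 0.4], [-0.6, -0.5]])
w_pos, w_neg = similarity_weights(X_amb, X_pos, X_neg)
print(w_pos, w_neg)  # first example leans positive, second negative
```

In the full method such weights would enter an SVM objective as per-example costs rather than hard labels.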
During prosthetic push-off, amputees exhibited reduced energy transfer from the prosthesis to the COM along with increased hip work, perhaps due to greater energy dissipation at the knee. Nevertheless, the results indicate that spring compliance can contribute to push-off, but with biomechanical trade-offs that limit the degree to which greater push-off might improve walking economy."} {"_id": "0a54d2f49bda694071bbf43d8e653f5adf85be19", "title": "Meta-Learning in Distributed Data Mining Systems: Issues and Approaches", "text": "Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some principled fashion multiple classifiers computed separately over different databases. This study describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity compared to centralized or host-based systems. Distributed systems may need to deal with heterogeneous platforms, with multiple databases and (possibly) different schemas, with the design and implementation of scalable and effective protocols for communicating among the data sites, and with the selective and efficient use of the information that is gathered from other peer data sites. Other important problems, intrinsic within data mining systems, that must not be ignored include, first, the ability to take advantage of newly acquired information that was not previously available when models were computed and to combine it with existing models, and second, the flexibility to incorporate new machine learning methods and data mining technologies. We explore these issues within the context of JAM and evaluate various proposed solutions through extensive empirical studies."} {"_id": "0390337f0a3315051ed685e943c2fe676b332357", "title": "Observations on periorbital and midface aging.", "text": "BACKGROUND\nMany of the anatomical changes of facial aging are still poorly understood. This study looked at the aging process in individuals linearly over time, focusing on aspects of periorbital aging and the upper midface.\n\n\nMETHODS\nThe author compared photographs of patients' friends and relatives taken 10 to 50 years before with closely matched recent follow-up pictures. The best-matching old and recent pictures were equally sized and superimposed in the computer. The images were then assembled into GIF animations, which automate the fading of one image into the other and back again indefinitely.\n\n\nRESULTS\nThe following findings were new to the author: (1) the border of the pigmented lid skin and thicker cheek skin (the lid-cheek junction) is remarkably stable in position over time, becoming more visible by contrast, not by vertical descent as is commonly assumed.
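The meta-learning idea in the JAM abstract above (a meta-classifier integrating base classifiers trained separately over different databases) can be sketched as simple stacking; the scikit-learn models and the two-site split below are illustrative assumptions, not JAM's actual agent-based Java implementation.

```python
# Minimal stacking sketch of the meta-learning idea behind JAM:
# base classifiers are trained on separate data partitions ("sites"),
# and a meta-classifier learns to combine their predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
sites = np.array_split(np.arange(400), 2)          # two "remote databases"
meta_idx, test_idx = np.arange(400, 500), np.arange(500, 600)

base = [DecisionTreeClassifier(random_state=s).fit(X[idx], y[idx])
        for s, idx in enumerate(sites)]

# Meta-level features: each base classifier's prediction on held-out data.
Z_meta = np.column_stack([clf.predict(X[meta_idx]) for clf in base])
meta = LogisticRegression().fit(Z_meta, y[meta_idx])

Z_test = np.column_stack([clf.predict(X[test_idx]) for clf in base])
print("meta-classifier accuracy:", meta.score(Z_test, y[test_idx]))
```

Only predictions, not raw data, cross site boundaries, which is what makes the scheme attractive for physically distributed databases.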
(2) Orbicularis wrinkles on the cheek and moles and other markers on the upper midface were also stable over decades. (3) With aging, there can be a distinct change in the shape of the upper eyelid. The young upper lid frequently has a medially biased peak. The upper lid peak becomes more central in the older lid. This article addresses these three issues. No evidence was seen here for descent of the globe in the orbit.\n\n\nCONCLUSIONS\nThere seems to be very little ptosis (inferior descent) of the lid-cheek junction or of the upper midface. These findings suggest that vertical descent of skin, and by association, subcutaneous tissue, is not necessarily a major component of aging in those areas. In addition, the arc of the upper lid changes shape in a characteristic way in some patients. Other known changes of the periorbital area are visualized."} {"_id": "9185718ea951f6b6c9f15a64a01225109fcf3da4", "title": "Performance of elasticsearch in cloud environment with nGram and non-nGram indexing", "text": "The fact that technology has changed the lives of human beings cannot be denied. It has drastically reduced the effort needed to perform a particular task and has increased productivity and efficiency. Computers especially have been playing a very important role in almost all fields in today's world. They are used to store large amounts of data in almost all sectors, be it business and industrial sectors, personal lives or any other. The research areas of science and technology use computers to solve complex and critical problems. Information is the most important requirement of each individual. In this era of quick-growing and huge data, it has become increasingly impractical to analyse it with the help of traditional techniques or relational databases. New big data instruments, architectures and designs have come into existence to give better support to the requirements of organizations/institutions in analysing large data. Specifically, Elasticsearch, a Java-based full-text search engine designed with cloud environments in mind, solves issues of scalability, real-time search, and efficiency that relational databases were not able to address. In this paper, we present our own experience with Elasticsearch, an open-source, Apache Lucene-based, full-text search engine that provides near real-time search, as well as a RESTful API for ease of use in research."} {"_id": "901528adf0747612c559274100581bc305e55faa", "title": "Development of the radiographic union score for tibial fractures for the assessment of tibial fracture healing after intramedullary fixation.", "text": "BACKGROUND\n: The objective was to evaluate the newly developed Radiographic Union Score for Tibial fractures (RUST). Because there is no \"gold standard,\" it was hypothesized that the RUST score would provide substantial improvements compared with previous scores presented in the literature.\n\n\nMETHODS\n: Forty-five sets of X-rays of tibial shaft fractures treated with intramedullary fixation were selected. Seven orthopedic reviewers independently scored bony union using RUST. Radiographs were reassessed at 9 weeks. Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) measured agreement.\n\n\nRESULTS\n: Overall agreement was substantial (ICC, 0.86; 95% CI, 0.79-0.91). There was improved reliability among traumatologists compared with others (ICC = 0.86, 0.81, and 0.83, respectively).
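For the Elasticsearch abstract above, the nGram indexing being benchmarked can be reproduced with a standard index definition; a minimal sketch, assuming an Elasticsearch 7+ instance reachable at localhost:9200 and an illustrative index name.

```python
# Hedged sketch: creating an Elasticsearch index with an nGram analyzer
# (versus the default, non-nGram analyzer) for substring-style matching.
# Assumes a local Elasticsearch instance; index/field names are illustrative.
import json, urllib.request

settings = {
    "settings": {
        "analysis": {
            "tokenizer": {
                "ngram_tok": {"type": "ngram", "min_gram": 2, "max_gram": 3}
            },
            "analyzer": {
                "ngram_analyzer": {"type": "custom", "tokenizer": "ngram_tok",
                                   "filter": ["lowercase"]}
            }
        }
    },
    "mappings": {
        "properties": {"body": {"type": "text", "analyzer": "ngram_analyzer"}}
    }
}

req = urllib.request.Request("http://localhost:9200/papers",
                             data=json.dumps(settings).encode(),
                             headers={"Content-Type": "application/json"},
                             method="PUT")
print(urllib.request.urlopen(req).read())
```

The trade-off the paper measures follows directly from this definition: nGram analysis inflates the index with many short tokens, buying flexible matching at the cost of indexing time and storage.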
Overall intraobserver reliability was also substantial (ICC, 0.88; 95% CI, 0.80-0.96).\n\n\nCONCLUSIONS\n: The RUST score exhibits substantial improvements in reliability over previously published scores and produces equally reproducible results among a variety of orthopedic specialties and experience levels. Because no \"gold standards\" currently exist against which RUST can be compared, this study provides only the initial step in the score's full validation for use in a clinical context."} {"_id": "064d2ddd75aa931899136173c8364ced0c5cbea3", "title": "Comparison of Measurement-Based Call Admission Control Algorithms for Controlled-Load Service", "text": "We compare the performance of four admission control algorithms\u2014one parameter-based and three measurement-based\u2014for controlled-load service. The parameter-based admission control ensures that the sum of reserved resources is bounded by capacity. The three measurement-based algorithms are based on measured bandwidth, acceptance region [9], and equivalent bandwidth [7]. We use simulation on several network scenarios to evaluate the link utilization and adherence to service commitment achieved by these four algorithms."} {"_id": "2ae2684f120dab4c319e30d33b33e7adf384810a", "title": "ProteusTM: Abstraction Meets Performance in Transactional Memory", "text": "The Transactional Memory (TM) paradigm promises to greatly simplify the development of concurrent applications. This led, over the years, to the creation of a plethora of TM implementations delivering wide ranges of performance across workloads. Yet, no universal implementation fits each and every workload. In fact, the best TM for a given workload can prove disastrous for another one. This forces developers to face the complex task of tuning TM implementations, which significantly hampers their wide adoption. In this paper, we address the challenge of automatically identifying the best TM implementation for a given workload. Our proposed system, ProteusTM, hides behind the TM interface a large library of implementations. Underneath, it leverages a novel multi-dimensional online optimization scheme, combining two popular learning techniques: Collaborative Filtering and Bayesian Optimization.\n We integrated ProteusTM in GCC and demonstrate its ability to switch between TMs and adapt several configuration parameters (e.g., number of threads). We extensively evaluated ProteusTM, obtaining average performance within 3% of optimal, and gains up to 100x over static alternatives."} {"_id": "c361896da526a9a65e23127187dee10c391637a1", "title": "Upper limb exoskeleton control based on sliding mode control and feedback linearization", "text": "Exoskeletons have the potential to improve human quality of life by relieving loads on the human musculoskeletal system or by helping motor rehabilitation. Controllers that were initially developed for industrial applications are also applied to control exoskeletons, despite major differences in requirements. Nevertheless, which controller performs better for this specific application remains an open question. This paper presents a comparison between sliding mode control and feedback linearization control. The implementation of the sliding mode controller assumes a complete measurement of the biological system's dynamics. On the other hand, the feedback linearization method does not need any information from the biological system.
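The parameter-based algorithm in the admission control abstract above reduces to a one-line invariant: admit a flow only if the sum of reserved resources stays within capacity. A minimal sketch, with illustrative link capacity and flow rates:

```python
# Minimal sketch of parameter-based admission control: admit a new flow
# only if the sum of reserved rates stays within link capacity.
class ParameterBasedAdmission:
    def __init__(self, capacity_bps: float):
        self.capacity = capacity_bps
        self.reserved = 0.0

    def admit(self, requested_bps: float) -> bool:
        if self.reserved + requested_bps <= self.capacity:
            self.reserved += requested_bps
            return True
        return False

link = ParameterBasedAdmission(capacity_bps=10e6)
print(link.admit(6e6))  # True
print(link.admit(5e6))  # False: would exceed capacity
```

The measurement-based variants compared in the paper keep the same admit test but replace the reserved sum with an estimate of actual load (measured bandwidth, acceptance region, or equivalent bandwidth), trading occasional overload risk for higher utilization.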
Our results indicate that a model combining biological control and dynamics could improve several characteristics of robotic exoskeletons, such as energy consumption."} {"_id": "22b6bd1980ea40d0f095a151101ea880fae33049", "title": "Part 5: Machine Translation Evaluation Chapter 5.1 Introduction", "text": "The evaluation of machine translation (MT) systems is a vital field of research, both for determining the effectiveness of existing MT systems and for optimizing their performance. This part describes a range of different evaluation approaches used in the GALE community and introduces evaluation protocols and methodologies used in the program. We discuss the development and use of automatic, human, task-based and semi-automatic (human-in-the-loop) methods of evaluating machine translation, focusing on the use of the human-mediated translation error rate (HTER) as the evaluation standard used in GALE. We discuss the workflow associated with the use of this measure, including post-editing, quality control, and scoring. We document the evaluation tasks, data, protocols, and results of recent GALE MT Evaluations. In addition, we present a range of different approaches for optimizing MT systems on the basis of different measures. We outline the requirements and specific problems when using different optimization approaches and describe how the characteristics of different MT metrics affect the optimization. Finally, we describe novel recent and ongoing work on the development of fully automatic MT evaluation metrics that have the potential to substantially improve the effectiveness of evaluation and optimization of MT systems. Progress in the field of machine translation relies on assessing the quality of a new system through systematic evaluation, such that the new system can be shown to perform better than pre-existing systems. The difficulty arises in the definition of a better system. When assessing the quality of a translation, there is no single correct answer; rather, there may be any number of possible correct translations. In addition, when two translations are only partially correct but in different ways it is difficult to distinguish quality. Moreover, quality assessments may be dependent on the intended use for the translation, e.g., the tone of a translation may be crucial in some applications, but irrelevant in other applications. Traditionally, there are two paradigms of machine translation evaluation: (1) Glass Box evaluation, which measures the quality of a system based upon internal system properties, and (2) Black Box evaluation, which measures the quality of a system based solely upon its output, without respect to the internal mechanisms of the translation system. Glass Box evaluation focuses upon an examination of the system's linguistic"} {"_id": "da2802b3af2458adfd2246202e3f4a4ca5afe7a0", "title": "A Model-Based Method for Computer-Aided Medical Decision-Making", "text": "While MYCIN and PIP were under development at Stanford and Tufts/M.I.T., a group of computer scientists at Rutgers University was developing a system to aid in the evaluation and treatment of patients with glaucoma. The group was led by Professor Casimir Kulikowski, a researcher with extensive background in mathematical and pattern-recognition approaches to computer-based medical decision making (Nordyke et al., 1971), working within the Rutgers Research Resource on Computers in Biomedicine headed by Professor Saul Amarel. Working collaboratively with Dr.
Arin Safir, Professor of Ophthalmology, who was then based at the Mt. Sinai School of Medicine in New York City, Kulikowski and Sholom Weiss (a graduate student at Rutgers who went on to become a research scientist there) developed a method of computer-assisted medical decision making that was based on causal-associational network (CASNET) models of disease. Although the work was inspired by the glaucoma domain, the approach had general features that were later refined in the development of the EXPERT system-building tool (see Chapters 18 and 20). A CASNET model consists of three main components: observations of a patient, pathophysiological states, and disease classifications. As observations are recorded, they are associated with the appropriate intermediate states. These states, in turn, are typically causally related, thereby forming a network that summarizes the mechanisms of disease. It is these patterns of states in the network that are linked to individual disease classes. Strat-"} {"_id": "075077bcf5a33838ab4f980f141eb4771473fd54", "title": "Storage and Querying of E-Commerce Data", "text": "A new generation of e-commerce applications requires data schemas that are constantly evolving and sparsely populated. The conventional horizontal row representation fails to meet these requirements. We represent objects in a vertical format, storing an object as a set of tuples. Each tuple consists of an object identifier and an attribute name-value pair. Schema evolution is now easy. However, writing queries against this format becomes cumbersome. We create a logical horizontal view of the vertical representation and transform queries on this view to the vertical table. We present alternative implementations and performance results that show the effectiveness of the vertical representation for sparse data. We also identify additional facilities needed in database systems to support these applications well."} {"_id": "0324ed4e0a3a7eb2a3b32198996bc72ecdfde26b", "title": "Jitter and Phase Noise in Ring Oscillators", "text": "A companion analysis of clock jitter and phase noise of single-ended and differential ring oscillators is presented. The impulse sensitivity functions are used to derive expressions for the jitter and phase noise of ring oscillators. The effect of the number of stages, power dissipation, frequency of oscillation, and short-channel effects on the jitter and phase noise of ring oscillators is analyzed. Jitter and phase noise due to substrate and supply noise is discussed, and the effect of symmetry on the upconversion of 1/f noise is demonstrated. Several new design insights are given for low jitter/phase-noise design. Good agreement between theory and measurements is observed."} {"_id": "2ce9158c722551fa522aff365d4ebdcdc892116c", "title": "Infinite Edge Partition Models for Overlapping Community Detection and Link Prediction", "text": "A hierarchical gamma process infinite edge partition model is proposed to factorize the binary adjacency matrix of an unweighted undirected relational network under a Bernoulli-Poisson link. The model describes both homophily and stochastic equivalence, and is scalable to big sparse networks by focusing its computation on pairs of linked nodes. It can not only discover overlapping communities and inter-community interactions, but also predict missing edges. A simplified version omitting inter-community interactions is also provided and we reveal its interesting connections to existing models.
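The vertical representation and logical horizontal view described in the e-commerce storage abstract above can be sketched with an in-memory SQLite pivot; the table and attribute names are illustrative assumptions.

```python
# Sketch of the vertical (object-id, attribute, value) representation and
# a logical horizontal view over it. Table and attribute names are
# illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vertical (oid INTEGER, attr TEXT, val TEXT)")
db.executemany("INSERT INTO vertical VALUES (?, ?, ?)", [
    (1, "title", "red shoe"), (1, "price", "30"),
    (2, "title", "blue hat"),  # sparse: object 2 has no price attribute
])

# Pivot the vertical tuples into a horizontal view; missing attributes
# simply surface as NULLs, so schema evolution needs no ALTER TABLE.
rows = db.execute("""
    SELECT oid,
           MAX(CASE WHEN attr = 'title' THEN val END) AS title,
           MAX(CASE WHEN attr = 'price' THEN val END) AS price
    FROM vertical GROUP BY oid
""").fetchall()
print(rows)  # [(1, 'red shoe', '30'), (2, 'blue hat', None)]
```

Queries written against the horizontal view can be rewritten mechanically into this pivot form, which is the transformation the paper evaluates.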
The number of communities is automatically inferred in a nonparametric Bayesian manner, and efficient inference via Gibbs sampling is derived using novel data augmentation techniques. Experimental results on four real networks demonstrate the models\u2019 scalability and state-of-the-art performance."} {"_id": "8e4caf932910122ba7618d64db3b4a3bad0a1514", "title": "GCap: Graph-based Automatic Image Captioning", "text": "Given an image, how do we automatically assign keywords to it? In this paper, we propose a novel, graph-based approach (GCap) which outperforms previously reported methods for automatic image captioning. Moreover, it is fast and scales well, with its training and testing time linear in the data set size. We report auto-captioning experiments on the \"standard\" Corel image database of 680 MBytes, where GCap outperforms recent, successful auto-captioning methods by up to 10 percentage points in captioning accuracy (50% relative improvement)."} {"_id": "24fe430b21931c00f310ed05b3b9fbff02aea1a7", "title": "A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds", "text": "The pallido-recipient thalamus transmits information from the basal ganglia to the cortex and is critical for motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the basal ganglia, but the role of nonpallidal inputs, such as excitatory inputs from cortex, remains unclear. We simultaneously recorded from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a basal ganglia\u2013recipient thalamic nucleus that is necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor cortical nucleus that is also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals that are important for exploratory behavior and learning."} {"_id": "9222c69ca851b26e8338b0082dfafbc663d1be50", "title": "A Graph Traversal Based Approach to Answer Non-Aggregation Questions Over DBpedia", "text": "We present a question answering system over DBpedia, filling the gap between user information needs expressed in natural language and a structured query interface expressed in SPARQL over the underlying knowledge base (KB). Given the KB, our goal is to comprehend a natural language query and provide corresponding accurate answers. Focusing on non-aggregation questions, in this paper, we construct a subgraph of the knowledge base from the detected entities and propose a graph traversal method to solve both the semantic item mapping problem and the disambiguation problem in a joint way. Compared with existing work, we simplify the process of query intention understanding and pay more attention to answer path ranking. We evaluate our method on a non-aggregation question dataset and further on a complete dataset.
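The graph-based captioning of the GCap abstract above hinges on scoring caption-word nodes by their proximity to a query image node in a mixed image/word graph; the random-walk-with-restart scoring below is one common choice for such proximity, used here as an illustrative assumption since the abstract does not name the exact measure.

```python
# Hedged sketch: score caption-word nodes for a query image via random
# walk with restart on a small mixed graph. Nodes 0-2 are images linked
# to similar images and to their caption words (nodes 3-5); the graph
# and the proximity measure are illustrative assumptions.
import numpy as np

A = np.array([            # symmetric adjacency matrix
    [0, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 0, 1],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
], dtype=float)
P = A / A.sum(axis=0)     # column-stochastic transition matrix
restart = np.eye(6)[0]    # query image is node 0
r, c = restart.copy(), 0.15
for _ in range(100):      # power iteration to the steady state
    r = (1 - c) * P @ r + c * restart
print("caption-word scores:", r[3:])  # rank words 3-5 for the query image
```

Because each iteration is a sparse matrix-vector product, this kind of scoring scales linearly with graph size, consistent with the abstract's claim of linear training and testing time.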
Experimental results show that our method achieves the best performance compared with several state-of-the-art systems."} {"_id": "5a9a26fd264187428da6662392857a1d3e485175", "title": "Optimal Cell Load and Throughput in Green Small Cell Networks With Generalized Cell Association", "text": "This paper thoroughly explores the fundamental interactions between cell association, cell load, and throughput in a green (energy-efficient) small cell network in which all base stations form a homogeneous Poisson point process (PPP) of intensity \u03bbB and all users form another independent PPP of intensity \u03bbU. Cell voidness, usually disregarded due to rarity in cellular network modeling, is first theoretically analyzed under generalized (channel-aware) cell association (GCA). We show that the void cell probability cannot be neglected any more since it is bounded above by exp(-\u03bbU/\u03bbB), which is typically not small in a small cell network. The accurate expression of the void cell probability for GCA is characterized and it is used to derive the average cell and user throughputs. We learn that cell association and cell load \u03bbU/\u03bbB significantly affect these two throughputs. According to the average cell and user throughputs, the green cell and user throughputs are defined respectively to reflect whether or not the energy of a base station is efficiently used to transmit information. In order to achieve satisfactory throughput with a certain level of greenness, the cell load should be properly determined. We present the theoretical solutions of the optimal cell loads that maximize the green cell and user throughputs, respectively, and verify their correctness by simulation."} {"_id": "23e8236644775fd5d8ff5536ba06b960e19f904b", "title": "Control Flow Integrity for COTS Binaries", "text": "Control-Flow Integrity (CFI) has been recognized as an important low-level security property. Its enforcement can defeat most injected and existing code attacks, including those based on Return-Oriented Programming (ROP). Previous implementations of CFI have required compiler support or the presence of relocation or debug information in the binary. In contrast, we present a technique for applying CFI to stripped binaries on x86/Linux. Ours is the first work to apply CFI to complex shared libraries such as glibc. Through experimental evaluation, we demonstrate that our CFI implementation is effective against control-flow hijack attacks, and eliminates the vast majority of ROP gadgets. To achieve this result, we have developed robust techniques for disassembly, static analysis, and transformation of large binaries. Our techniques have been tested on over 300MB of binaries (executables and shared libraries)."} {"_id": "7d5e165a55d62750e9ad69bb317c764a2e4e12fc", "title": "SOK: (State of) The Art of War: Offensive Techniques in Binary Analysis", "text": "Finding and exploiting vulnerabilities in binary code is a challenging task. The lack of high-level, semantically rich information about data structures and control constructs makes the analysis of program properties harder to scale. However, the importance of binary analysis is on the rise. In many situations binary analysis is the only possible way to prove (or disprove) properties about the code that is actually executed. In this paper, we present a binary analysis framework that implements a number of analysis techniques that have been proposed in the past.
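The void-cell bound exp(-\u03bbU/\u03bbB) quoted in the small cell abstract above is easy to evaluate numerically; a quick check of why it cannot be neglected when user density is comparable to base station density:

```python
# Quick numeric illustration of the void-cell bound exp(-lambda_U/lambda_B)
# quoted above: in a dense small cell network the ratio of user intensity
# to base station intensity is small, so the bound is far from negligible.
import math

for ratio in (10.0, 2.0, 1.0, 0.5):   # lambda_U / lambda_B
    print(f"lambda_U/lambda_B = {ratio:4.1f} -> exp bound = {math.exp(-ratio):.3f}")
# A ratio of 0.5 (two base stations per user) gives ~0.61, so a large
# share of cells can sit empty, which is why void cells matter for
# energy-efficient small cell analysis.
```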
We present a systematized implementation of these techniques, which allows other researchers to compose them and develop new approaches. In addition, the implementation of these techniques in a unifying framework allows for the direct comparison of these approaches and the identification of their advantages and disadvantages. The evaluation included in this paper is performed using a recent dataset created by DARPA for evaluating the effectiveness of binary vulnerability analysis techniques. Our framework has been open-sourced and is available to the security community."} {"_id": "a2be3a50e1b31bd226941ce7ef70e15bf21b83a1", "title": "Binary code is not easy", "text": "Binary code analysis is an enabling technique for many applications. Modern compilers and run-time libraries have introduced significant complexities to binary code, which negatively affect the capabilities of binary analysis tool kits to analyze binary code, and may cause tools to report inaccurate information about binary code. Analysts may hence be confused and applications based on these tool kits may have degrading quality. We examine the problem of constructing control flow graphs from binary code and labeling the graphs with accurate function boundary annotations. We identified several challenging code constructs that represent hard-to-analyze aspects of binary code, and show code examples for each code construct. As part of this discussion, we present new code parsing algorithms in our open source Dyninst tool kit that support these constructs, including a new model for describing jump tables that improves our ability to precisely determine the control flow targets, a new interprocedural analysis to determine when a function is non-returning, and techniques for handling tail calls. We evaluated how various tool kits fare when handling these code constructs with real software as well as test binaries patterned after each challenging code construct we found in real software."} {"_id": "b00672fc5ff99434bf5347418a2d2762a3bb2639", "title": "Firmalice - Automatic Detection of Authentication Bypass Vulnerabilities in Binary Firmware", "text": "Embedded devices have become ubiquitous, and they are used in a range of privacy-sensitive and security-critical applications. Most of these devices run proprietary software, and little documentation is available about the software\u2019s inner workings. In some cases, the cost of the hardware and protection mechanisms might make access to the devices themselves infeasible. Analyzing the software that is present in such environments is challenging, but necessary, if the risks associated with software bugs and vulnerabilities must be avoided. As a matter of fact, recent studies revealed the presence of backdoors in a number of embedded devices available on the market. In this paper, we present Firmalice, a binary analysis framework to support the analysis of firmware running on embedded devices. Firmalice builds on top of a symbolic execution engine, and techniques, such as program slicing, to increase its scalability. Furthermore, Firmalice utilizes a novel model of authentication bypass flaws, based on the attacker\u2019s ability to determine the required inputs to perform privileged operations. We evaluated Firmalice on the firmware of three commercially-available devices, and were able to detect authentication bypass backdoors in two of them.
Additionally, Firmalice was able to determine that the backdoor in the third firmware sample was not exploitable by an attacker without knowledge of a set of unprivileged credentials."} {"_id": "00e90d7acd9c0a8b0efdb68c38af1632d5e60beb", "title": "Who Wrote This Code? Identifying the Authors of Program Binaries", "text": "Program authorship attribution\u2014identifying a programmer based on stylistic characteristics of code\u2014has practical implications for detecting software theft, digital forensics, and malware analysis. Authorship attribution is challenging in these domains where usually only binary code is available; existing source code-based approaches to attribution have left unclear whether and to what extent programmer style survives the compilation process. Casting authorship attribution as a machine learning problem, we present a novel program representation and techniques that automatically detect the stylistic features of binary code. We apply these techniques to two attribution problems: identifying the precise author of a program, and finding stylistic similarities between programs by unknown authors. Our experiments provide strong evidence that programmer style is preserved in program binaries."} {"_id": "3c125732664d6e5be3d21869b536066a7591b53d", "title": "Importance filtering for image retargeting", "text": "Content-aware image retargeting has attracted a lot of interest recently. The key and most challenging issue for this task is how to balance the tradeoff between preserving the important contents and minimizing the visual distortions on the consistency of the image structure. In this paper we present a novel filtering-based technique to tackle this issue, called \u201cimportance filtering\u201d. Specifically, we first filter the image saliency, guided by the image itself, to achieve a structure-consistent importance map. We then use the pixel importance as the key constraint to compute the gradient map of pixel shifts from the original resolution to the target. Finally, we integrate the shift gradient across the image using a weighted filter to construct a smooth shift map and render the target image. The weight is again controlled by the pixel importance. The two filtering processes help maintain the structural consistency and yet preserve the important contents in the target image. Furthermore, the simple nature of filter operations allows highly efficient implementation for real-time applications and easy extension to video retargeting, as the structural constraints from the original image naturally convey the temporal coherence between frames. The effectiveness and efficiency of our importance filtering algorithm are confirmed in extensive experiments."} {"_id": "ce24c942ccb56891a984c38e9bd0f7aa3e681512", "title": "PUBLIC-KEY STEGANOGRAPHY BASED ON MODIFIED LSB METHOD", "text": "Steganography is the art and science of invisible communication. It is a technique which keeps the existence of the message secret. This paper proposes a technique to implement steganography and cryptography together to hide data in an image. The technique works as follows: first, establish the shared stego-key between the two communicating parties by applying the Diffie-Hellman key exchange protocol; then encrypt the data using the secret stego-key; and finally select the pixels that will hide the data, again with the help of the same secret stego-key.
Each selected pixel will then be used to hide 8 bits of data using the LSB method."} {"_id": "16334a6d7cf985a2a57ac9889a5a3d4ef8d460b3", "title": "Bayesian Pot-Assembly from Fragments as Problems in Perceptual-Grouping and Geometric-Learning", "text": "A heretofore unsolved problem of great archaeological importance is the automatic assembly of pots made on a wheel from the hundreds (or thousands) of sherds found at an excavation site. An approach is presented to the automatic estimation of mathematical models of such pots from 3D measurements of sherds. A Bayesian approach is formulated beginning with a description of the complete set of geometric parameters that determine the distribution of the sherd measurement data. Matching of fragments and aligning them geometrically into configurations is based on matching break-curves (curves on a pot surface separating fragments), estimated axis and profile curve pairs for individual fragments and configurations of fragments, and a number of features of groups of break-curves. Pot assembly is a bottom-up maximum likelihood performance-based search. Experiments are illustrated on pots which were broken for the purpose, and on sherds from an archaeological dig located in Petra, Jordan. The performance measure can also be a posteriori probability, and many other types of information can be included, e.g., pot wall thickness, surface color, patterns on the surface, etc. This can also be viewed as the problem of learning a geometric object from an unorganized set of free-form fragments of the object and of clutter, or as a problem of perceptual grouping."} {"_id": "ccfd4a2a22b6c38f8206d1021f9be31f8c33a430", "title": "Drawing lithography for microneedles: a review of fundamentals and biomedical applications.", "text": "A microneedle is a three-dimensional (3D) micromechanical structure and has been in the spotlight recently as a drug delivery system (DDS). Because a microneedle delivers the target drug after penetrating the skin barrier, the therapeutic effects of a microneedle proceed from its 3D structural geometry. Various types of microneedles have been fabricated using subtractive micromanufacturing methods, which are based on inherently planar two-dimensional (2D) geometries. However, traditional subtractive processes are limited for flexible structural microneedles and make functional biomedical applications for efficient drug delivery difficult. The authors of the present study propose drawing lithography as a unique additive process for the fabrication of a microneedle directly from 2D planar substrates, thus overcoming the shortcomings of subtractive processes. The present article provides the first overview of the principal drawing lithography technology: fundamentals and biomedical applications. The continuous drawing technique for an ultrahigh-aspect ratio (UHAR) hollow microneedle, stepwise controlled drawing technique for a dissolving microneedle, and drawing technique with antidromic isolation for a hybrid electro-microneedle (HEM) are reviewed, and efficient biomedical applications by drawing lithography-mediated microneedles as an innovative drug and gene delivery system are described.
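The LSB embedding named in the steganography abstract above can be sketched as follows; key-driven pixel selection is simplified here to a seeded shuffle, and one bit is hidden per colour byte, both illustrative assumptions.

```python
# Sketch of LSB embedding/extraction. Pixel selection by a shared key is
# simplified to a key-seeded shuffle; one message bit replaces the least
# significant bit of each selected colour byte.
import random

def embed(pixels: bytearray, message: bytes, key: int) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)             # key-driven pixel selection
    for pos, bit in zip(order, bits):
        pixels[pos] = (pixels[pos] & 0xFE) | bit  # overwrite the LSB
    return pixels

def extract(pixels: bytearray, n_bytes: int, key: int) -> bytes:
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)             # same key -> same positions
    bits = [pixels[pos] & 1 for pos in order[:n_bytes * 8]]
    return bytes(sum(b << i for i, b in enumerate(bits[k*8:(k+1)*8]))
                 for k in range(n_bytes))

img = bytearray(range(256))          # toy "image" of 256 colour bytes
stego = embed(img, b"hi", key=42)
print(extract(stego, 2, key=42))     # b'hi'
```

Without the shared key an observer cannot tell which pixels carry payload bits, which is the role the Diffie-Hellman-derived stego-key plays in the scheme above.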
Drawing lithography herein can provide a great breakthrough in the development of materials science and biotechnology."} {"_id": "698b7261b4bb888c907aff82293e27adf41507d2", "title": "Optimization of Cooperative Sensing in Cognitive Radio Networks: A Sensing-Throughput Tradeoff View", "text": "In cognitive radio networks, the performance of spectrum sensing depends on the sensing time and the fusion scheme used when cooperative sensing is applied. In this paper, we consider the case where the secondary users cooperatively sense a channel using the k-out-of-N fusion rule to determine the presence of the primary user. A sensing-throughput tradeoff problem under a cooperative sensing scenario is formulated to find a pair of sensing time and k value that maximizes the secondary users' throughput subject to sufficient protection being provided to the primary user. An iterative algorithm is proposed to obtain the optimal values for these two parameters. Computer simulations show that significant improvement in the throughput of the secondary users is achieved when the parameters for the fusion scheme and the sensing time are jointly optimized."} {"_id": "0d2c07f3373c2bd63b3f2d0ba98d332daff33dcb", "title": "Primary gonadal failure and precocious adrenarche in a boy with Prader-Labhart-Willi syndrome", "text": "A 7-year-old boy with Prader-Labhart-Willi syndrome who had precocious adrenarche was found to have primary gonadal failure, as evidenced by appropriate laboratory investigations: elevated basal levels of plasma FSH and LH with exaggerated responses to LH-RH stimulation and unresponsiveness of plasma testosterone to repeated hCG stimulations. The elevated values of plasma DHEA which were found indicate an early activation of the adrenal gland. This patient demonstrates the variability of pubertal development in the Prader-Labhart-Willi syndrome, with the unusual association of primary gonadal failure and precocious adrenarche."} {"_id": "322c063e97cd26f75191ae908f09a41c534eba90", "title": "Improving Image Classification Using Semantic Attributes", "text": "The Bag-of-Words (BoW) model\u2014commonly used for image classification\u2014has two strong limitations: on one hand, visual words lack explicit meanings, and on the other hand, they are usually polysemous. This paper proposes to address these two limitations by introducing an intermediate representation based on the use of semantic attributes. Specifically, two different approaches are proposed. Both approaches consist in predicting a set of semantic attributes for entire images as well as for local image regions, and in using these predictions to build the intermediate level features. Experiments on four challenging image databases (PASCAL VOC 2007, Scene-15, MSRCv2 and SUN-397) show that both approaches improve the performance of the BoW model significantly. Moreover, their combination achieves state-of-the-art results on several of these image databases."} {"_id": "9061fb46185cffde44168aff4eb17f25d520f93a", "title": "Deceptive Reviews : The Influential Tail", "text": "Research in the psycholinguistics literature has identified linguistic cues indicating when a message is more likely to be deceptive, and we find that the textual comments in reviews without prior transactions exhibit many of these characteristics.
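For the cooperative sensing abstract above, the k-out-of-N fusion rule has a standard closed form: the fused decision probability is a binomial tail over the N individual decisions. A small sketch, assuming independent users with a common per-user detection probability p (the same form applies to the false-alarm probability):

```python
# The k-out-of-N fusion rule declares the primary user present when at
# least k of N secondary users report a detection. For independent users
# with per-user detection probability p, the fused probability is the
# binomial tail computed below.
from math import comb

def k_out_of_n(p: float, n: int, k: int) -> float:
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_detect = 0.7
for k in (1, 5, 10):   # OR rule, majority-style rule, AND rule for N = 10
    print(f"k={k:2d}: P_detect = {k_out_of_n(p_detect, 10, k):.4f}")
```

The sensing-throughput tradeoff arises because p itself grows with sensing time, so the paper jointly optimizes the sensing time and k rather than either alone.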
In contrast, these reviews are less likely to contain expressions describing the fit or feel of the items, which can generally only be assessed by physical inspection of the items."} {"_id": "6949a33423051ce6fa5b08fb7d5f06ac9dcc721b", "title": "Process Mining and Fraud Detection", "text": "A case study on the theoretical and practical value of using process mining for the detection of fraudulent behavior in the procurement process. This thesis presents the results of a six-month research period on process mining and fraud detection. The thesis aimed to answer the research question of how process mining can be utilized in fraud detection and what the benefits of using process mining for fraud detection are. Based on a literature study, it provides a discussion of the theory and application of process mining and its various aspects and techniques. Using both a literature study and an interview with a domain expert, the concepts of fraud and fraud detection are discussed. These results are combined with an analysis of existing case studies on the application of process mining and fraud detection to construct an initial setup of two case studies, in which process mining is applied to detect possible fraudulent behavior in the procurement process. Based on the experiences and results of these case studies, the 1+5+1 methodology is presented as a first step towards operationalizing principles with advice on how process mining techniques can be used in practice when trying to detect fraud. This thesis presents three conclusions: (1) process mining is a valuable addition to fraud detection, (2) using the 1+5+1 concept it was possible to detect indicators of possibly fraudulent behavior, and (3) the practical use of process mining for fraud detection is diminished by the poor performance of the current tools. The techniques and tools that do not suffer from performance issues are an addition, rather than a replacement, to regular data analysis techniques, providing either new, quicker, or more easily obtainable insights into the process and possible fraudulent behavior."} {"_id": "8aef832372c6e3e83f10532f94f18bd26324d4fd", "title": "Question Answering on Freebase via Relation Extraction and Textual Evidence", "text": "Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers.
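As a concrete illustration of the process mining primitives underlying the fraud-detection thesis above, a directly-follows graph can be mined from an event log and screened for control-flow violations; the procurement log and the approval rule below are illustrative assumptions, not the thesis's 1+5+1 methodology.

```python
# Illustrative process mining primitive: build directly-follows counts
# from an event log, then flag traces that skip an expected step (here,
# "approve" before "pay" in a toy procurement process).
from collections import Counter

log = [
    ["create_po", "approve", "receive_goods", "pay"],
    ["create_po", "approve", "receive_goods", "pay"],
    ["create_po", "receive_goods", "pay"],          # approval skipped
]

# Directly-follows graph: how often activity a is immediately followed by b.
dfg = Counter((a, b) for trace in log for a, b in zip(trace, trace[1:]))
print(dfg)

# Conformance-style check: payments without an approval step.
suspicious = [t for t in log if "pay" in t and "approve" not in t]
print("traces violating the approval rule:", suspicious)
```

Low-frequency edges and rule-violating traces like these are exactly the kind of indicators of possibly fraudulent behavior the thesis reports detecting.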
Experiments on the WebQuestions question answering dataset show that our method achieves an F1 of 53.3%, a substantial improvement over the state-of-the-art."} {"_id": "f4fcc5125a4eb702cdff0af421e500433fbe9a16", "title": "Beyond Big and Little : The Four C Model of Creativity", "text": "Most investigations of creativity tend to take one of two directions: everyday creativity (also called \u201clittle-c\u201d), which can be found in nearly all people, and eminent creativity (also called \u201cBig-C\u201d), which is reserved for the great. In this paper, the authors propose a Four C model of creativity that expands this dichotomy. Specifically, the authors add the idea of \u201cmini-c,\u201d creativity inherent in the learning process, and Pro-c, the developmental and effortful progression beyond little-c that represents professional-level expertise in any creative area. The authors include different transitions and gradations of these four dimensions of creativity, and then discuss advantages and examples of the Four C Model."} {"_id": "154901c49ce0e1640547c0193ba0dfcc06d90ec1", "title": "Comparing a computer agent with a humanoid robot", "text": "HRI researchers interested in social robots have made large investments in humanoid robots. There is still sparse evidence that people's responses to robots differ from their responses to computer agents, suggesting that agent studies might serve to test HRI hypotheses. To help us understand the difference between people's social interactions with an agent and a robot, we experimentally compared people's responses in a health interview with (a) a computer agent projected either on a computer monitor or life-size on a screen, (b) a remote robot projected life-size on a screen, or (c) a collocated robot in the same room. We found a few behavioral and large attitude differences across these conditions. Participants forgot more and disclosed least with the collocated robot, next with the projected remote robot, and then with the agent. They spent more time with the collocated robot and their attitudes were most positive toward that robot. We discuss tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots."} {"_id": "b4e1853acf75a91f64bd65de91ae05f4f7ef35a4", "title": "Why is image quality assessment so difficult?", "text": "Image quality assessment plays an important role in various image processing applications. A great deal of effort has been made in recent years to develop objective image quality metrics that correlate with perceived quality measurements. Unfortunately, only limited success has been achieved. In this paper, we provide some insights on why image quality assessment is so difficult by pointing out the weaknesses of the error sensitivity based framework, which has been used by most image quality assessment approaches in the literature. Furthermore, we propose a new philosophy in designing image quality metrics: The main function of the human eyes is to extract structural information from the viewing field, and the human visual system is highly adapted for this purpose. Therefore, a measurement of structural distortion should be a good approximation of perceived image distortion.
Based on the new philosophy, we implemented a simple but effective image quality indexing algorithm, which is very promising as shown by our current results."} {"_id": "f2be739f0805729f754edb5238fde49b37afb00c", "title": "Performance indicators for an objective measure of public transport service quality", "text": "The measurement of transit performance represents a very useful tool for ensuring continuous increases in the quality of delivered transit services, and for allocating resources among competing transit agencies. Transit service quality can be evaluated by subjective measures based on passengers\u2019 perceptions, and by objective measures represented by disaggregate performance measures expressed as numerical values, which must be compared with fixed standards or past performance. The proposed research work deals with service quality evaluation based on objective measures; specifically, an extensive overview and an interpretative review of the objective indicators investigated by researchers so far are proposed. The final aim of the work is to give as comprehensive a review as possible of the objective indicators, and to provide some suggestions for the selection of the most appropriate indicators for evaluating a transit service aspect."} {"_id": "9de64889ea7d467fdb3d3fa615d91b4d7ff2b068", "title": "Novel paradigms for advanced distribution grid energy management", "text": "The electricity distribution grid was not designed to cope with load dynamics imposed by high penetration of electric vehicles, nor to deal with the increasing deployment of distributed Renewable Energy Sources. Distribution System Operators (DSO) will increasingly rely on flexible Distributed Energy Resources (flexible loads, controllable generation and storage) to keep the grid stable and to ensure quality of supply. In order to properly integrate demand-side flexibility, DSOs need new energy management architectures, capable of fostering collaboration with wholesale market actors and prosumers. We propose the creation of Virtual Distribution Grids (VDG) over a common physical infrastructure, to cope with heterogeneity of resources and actors, and with the increasing complexity of distribution grid management and related resource allocation problems. Focusing on residential VDGs, we propose an agent-based hierarchical architecture for providing Demand-Side Management services through a market-based approach, where households transact their surplus/lack of energy and their flexibility with neighbours, aggregators, utilities and DSOs. For implementing the overall solution, we consider fine-grained control of smart homes based on Internet of Things technology. Homes seamlessly transact self-enforcing smart contracts over a blockchain-based generic platform. Finally, we extend the architecture to solve existing problems in smart home control, beyond energy management."} {"_id": "574ec01e67e69a072e1e7cf4d6138d43491170ea", "title": "Planar Differential Elliptical UWB Antenna Optimization", "text": "A recently proposed optimization procedure, based on the time domain characteristics of an antenna, is exploited to design a planar differential elliptical antenna for ultrawideband (UWB) applications. The optimization procedure aims at finding an antenna not only with low VSWR but also one exhibiting low-dispersion characteristics over the relevant frequency band.
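One well-known instance of the structural-distortion philosophy in the image quality abstract above is a quality index built from local means, variances, and covariance; the global (non-windowed) form below is an illustrative simplification of what is normally computed over sliding windows.

```python
# Illustrative structure-based quality index in the spirit of the
# abstract above, computed globally rather than over sliding windows.
# It combines luminance, contrast, and structural (correlation) terms.
import numpy as np

def quality_index(x: np.ndarray, y: np.ndarray) -> float:
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, size=(64, 64))
print(quality_index(ref, ref))                                  # 1.0 for identical images
print(quality_index(ref, ref + rng.normal(0, 20, ref.shape)))   # < 1 under distortion
```

Unlike mean-squared-error, the index degrades differently for blur, noise, and contrast changes, which is what makes it a better proxy for perceived distortion.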
Furthermore, since in pulse communication systems the input signal is often of a given form, suited to a particular purpose, the optimization procedure also aims at finding the best antenna for the given input signal form. Specifically, the optimized antenna is designed for high temporal correlation between its electric field intensity signal and the given input signal. The optimization technique followed in this work makes use of genetic algorithm (GA) search concepts. The electromagnetic analysis of the antenna is done by means of a finite-difference time-domain method using the commercially available CST Microwave Studio software"} {"_id": "befc714b8b0611d5582960ce3d8573db0cc35f52", "title": "Tracking-based deer vehicle collision detection using thermal imaging", "text": "Deer vehicle collision (DVC) is constantly a major safety issue for driving on rural roads. It is estimated that there are over 35,000 DVCs yearly in the US, resulting in about 200 deaths and close to 4,000 reported property damages of one thousand dollars or more. This justifies the many attempts to detect deer on the road. However, very little success has been achieved. In order to reduce the number of DVCs, this work focused on the use of an infrared thermal camera with a tracking system to detect the presence of deer and avoid DVCs. The prototype consists of an infrared thermal temperature image grabbing and processing system, which includes an infrared thermal camera, a frame grabber and an image processing system, and a motion tracking system, which includes two motors with their motion control system. By analyzing the infrared thermal images, which are independent of visible light, the presence of an animal can be determined at night or during the day through pattern recognition and matching."} {"_id": "442cf9b24661c9ea5c2a1dcabd4a5b8af1cd89da", "title": "Beyond One-hot Encoding: lower dimensional target embedding", "text": "Target encoding plays a central role when learning Convolutional Neural Networks. In this realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this widespread encoding scheme assumes a flat label space, thus ignoring rich relationships existing among labels that can be exploited during training. In large-scale datasets, data does not span the full label space, but instead lies in a low-dimensional output manifold. Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is twofold: (i) We show that random projections of the label space are a valid tool to find such lower-dimensional embeddings, dramatically boosting convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving the accuracy of random projection encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, Imagenet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates."} {"_id": "311e8b4881482e9d940f4f1929d78a2b4a4337f8", "title": "Simba: Efficient In-Memory Spatial Analytics", "text": "Large spatial data is becoming ubiquitous. As a result, it is critical to provide fast, scalable, and high-throughput spatial queries and analytics for numerous applications in location-based services (LBS).
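The first contribution in the one-hot encoding abstract above (random projections of the label space) can be sketched directly; the dimensions and the nearest-code decoding below are illustrative assumptions.

```python
# Sketch of replacing D-dimensional one-hot targets with a d-dimensional
# random projection of the label space (d << D). Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D, d = 100, 32                                 # number of classes, embedding size
R = rng.normal(size=(D, d)) / np.sqrt(d)       # random projection of label space

labels = np.array([3, 17, 99])
one_hot = np.eye(D)[labels]
targets = one_hot @ R                          # low-dimensional training targets

# At test time, decode by nearest class code (inner product with every row of R).
logits = targets @ R.T
print(logits.argmax(axis=1), "vs", labels)     # random codes are separable w.h.p.
```

The projection costs nothing to train, which is why the abstract can claim faster convergence "at zero computational cost"; the paper's eigenrepresentation then reduces the decoding error this randomness introduces.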
Traditional spatial databases and spatial analytics systems are disk-based and optimized for IO efficiency. But increasingly, data are stored and processed in memory to achieve low latency, and CPU time becomes the new bottleneck. We present the Simba (Spatial In-Memory Big data Analytics) system, which offers scalable and efficient in-memory spatial query processing and analytics for big spatial data. Simba is based on Spark and runs over a cluster of commodity machines. In particular, Simba extends the Spark SQL engine to support rich spatial queries and analytics through both SQL and the DataFrame API. It introduces indexes over RDDs in order to work with big spatial data and complex spatial operations. Lastly, Simba implements an effective query optimizer, which leverages its indexes and novel spatial-aware optimizations, to achieve both low latency and high throughput. Extensive experiments over large data sets demonstrate Simba's superior performance compared against other spatial analytics systems."} {"_id": "fe404310e7f485fd77367cb9c6c1cdc3884381d1", "title": "Smart fabrics and interactive textile enabling wearable personal applications: R&D state of the art and future challenges", "text": "Smart fabrics and interactive textiles (SFIT) are fibrous structures that are capable of sensing, actuating, generating/storing power and/or communicating. Research and development towards wearable textile-based personal systems allowing e.g. health monitoring, protection & safety, and healthy lifestyle has gained strong interest during the last 10 years. Under the Information and Communication Programme of the European Commission, a cluster of R&D projects dealing with smart fabrics and interactive textile wearable systems regroups activities along two different and complementary approaches, i.e. \u201capplication pull\u201d and \u201ctechnology push\u201d. This includes projects aiming at personal health management through integration, validation, and use of smart clothing and other networked mobile devices, as well as projects targeting the full integration of sensors/actuators, energy sources, processing and communication within the clothes to enable personal applications such as protection/safety, emergency and healthcare. The integration of these technologies into a real SFIT product is at present on the threshold of prototyping and testing. Several issues, technical as well as user-centred, societal and business-related, remain to be solved. The paper presents ongoing major R&D activities, identifies gaps and discusses key challenges for the future."} {"_id": "586d39f024325314528f1eaa64f4b2bbe2a1baa5", "title": "Multilingual speech-to-speech translation system for mobile consumer devices", "text": "Along with the advancement of speech recognition and machine translation technologies, and the fast spread of mobile devices, speech-to-speech translation is no longer merely a subject of research, as it has become popular among many users. In order to develop a speech-to-speech translation system that can be widely used, however, the system needs to reflect the various characteristics of utterances by the users who will actually use it, beyond improving its basic performance under experimental conditions.
This study established a massive language and speech database that closely matches the environment in which speech-to-speech translation devices are actually used, mobilizing many speakers on the basis of a survey of users' demands. Through this study, it was possible to secure excellent basic performance under an environment similar to the actual speech-to-speech translation environment, rather than just under the experimental environment. Moreover, a user-friendly speech-to-speech translation UI has been designed; at the same time, translation errors were reduced as many measures to enhance user satisfaction were employed. After implementing the actual services, the massive database collected through the service was additionally applied to the system, following a filtering process, in order to procure the best possible robustness toward both the details and the environment of the users' utterances. By applying these measures, this study unveils the procedures by which a multilingual speech-to-speech translation system was successfully developed for mobile devices."} {"_id": "0058c41f797d48aa8544894b75a26a26602a8152", "title": "Missing value imputation for gene expression data: computational techniques to recover missing data from available information", "text": "Microarray gene expression data generally suffers from the missing value problem due to a variety of experimental reasons. Since the missing data points can adversely affect downstream analysis, many algorithms have been proposed to impute missing values. In this survey, we provide a comprehensive review of existing missing value imputation algorithms, focusing on their underlying algorithmic techniques and how they utilize local or global information from within the data, or their use of domain knowledge during imputation. In addition, we describe how the imputation results can be validated and the different ways to assess the performance of different imputation algorithms, as well as a discussion on some possible future research directions. It is hoped that this review will give the readers a good understanding of the current development in this field and inspire them to come up with the next generation of imputation algorithms."} {"_id": "94a6d653f4c54dbfb8f62a5bb54b4286c9affdb2", "title": "FML-based feature similarity assessment agent for Japanese/Taiwanese language learning", "text": "In this paper, we propose a fuzzy markup language (FML)-based feature similarity assessment agent with machine-learning ability to evaluate the easy-to-learn degree of Japanese and Taiwanese words. The involved domain experts define the knowledge base (KB) and rule base (RB) of the proposed agent. The KB and RB are stored in the constructed ontology, including features of pronunciation similarity, writing similarity, and culture similarity. Next, we calculate the feature similarity in pronunciation, writing, and culture for each word pair between Japanese and Taiwanese. Finally, we infer the easy-to-learn degree for one Japanese word and its corresponding Taiwanese one. Additionally, genetic learning is adopted to tune the KB and RB of the intelligent agent. The experimental results show that after-learning results perform better than before-learning ones."} {"_id": "66b2d52519a7e6ecc19ac8a46edf6932aba14695", "title": "foo.
castr: visualising the future AI workforce", "text": "The organization of companies and their HR departments is being hugely affected by recent advancements in computational power and Artificial Intelligence, and this trend is likely to rise dramatically in the next few years. This work presents foo.castr, a tool we are developing to visualise, communicate and facilitate the understanding of the impact of these advancements on the future of the workforce. It builds upon the idea that particular tasks within job descriptions will be progressively taken over by computers, forcing human jobs to be reshaped. In its current version, foo.castr presents three different scenarios to help HR departments plan for potential changes and disruptions brought by the adoption of Artificial Intelligence."} {"_id": "16edc3faf625fd437aaca1527e8821d979354fba", "title": "On happiness and human potentials: a review of research on hedonic and eudaimonic well-being.", "text": "Well-being is a complex construct that concerns optimal experience and functioning. Current research on well-being has been derived from two general perspectives: the hedonic approach, which focuses on happiness and defines well-being in terms of pleasure attainment and pain avoidance; and the eudaimonic approach, which focuses on meaning and self-realization and defines well-being in terms of the degree to which a person is fully functioning. These two views have given rise to different research foci and a body of knowledge that is in some areas divergent and in others complementary. New methodological developments concerning multilevel modeling and construct comparisons are also allowing researchers to formulate new questions for the field. This review considers research from both perspectives concerning the nature of well-being, its antecedents, and its stability across time and culture."} {"_id": "200c10cbb2eaf70727af4751f0d8e0a5a6661e07", "title": "Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being.", "text": "Human beings can be proactive and engaged or, alternatively, passive and alienated, largely as a function of the social conditions in which they develop and function. Accordingly, research guided by self-determination theory has focused on the social-contextual conditions that facilitate versus forestall the natural processes of self-motivation and healthy psychological development. Specifically, factors have been examined that enhance versus undermine intrinsic motivation, self-regulation, and well-being. The findings have led to the postulate of three innate psychological needs--competence, autonomy, and relatedness--which when satisfied yield enhanced self-motivation and mental health and when thwarted lead to diminished motivation and well-being.
Also considered is the significance of these psychological needs and processes within domains such as health care, education, work, sport, religion, and psychotherapy."} {"_id": "66626e65418d696ed913ce3fa75604fb2946a2a1", "title": "Mean and Covariance Structures (MACS) Analyses of Cross-Cultural Data: Practical and Theoretical Issues.", "text": "Practical and theoretical issues are discussed for testing (a) the comparability, or measurement equivalence, of psychological constructs and (b) detecting possible sociocultural differences on the constructs in cross-cultural research designs. Specifically, strong factorial invariance (Meredith, 1993) of each variable's loading and intercept (mean-level) parameters implies that constructs are fundamentally the same in each sociocultural group, and thus comparable. Under this condition, hypotheses about the nature of sociocultural differences and similarities can be confidently and meaningfully tested among the constructs' moments in each sociocultural sample. Some of the issues involved in making such tests are reviewed and explicated within the framework of multiple-group mean and covariance structures analyses."} {"_id": "81c920b0892b7636c127be90ec3775ae4ef8143c", "title": "Daily Well-Being: The Role of Autonomy, Competence, and Relatedness", "text": "Emotional well-being is most typically studied in trait or traitlike terms, yet a growing literature indicates that daily (within-person) fluctuations in emotional well-being may be equally important. The present research explored the hypothesis that daily variations may be understood in terms of the degree to which three basic needs\u2014autonomy, competence, and relatedness\u2014are satisfied in daily activity. Hierarchical linear models were used to examine this hypothesis across 2 weeks of daily activity and well-being reports controlling for trait-level individual differences. Results strongly supported the hypothesis. The authors also examined the social activities that contribute to satisfaction of relatedness needs. The best predictors were meaningful talk and feeling understood and appreciated by interaction partners. Finally, the authors found systematic day-of-the-week variations in emotional well-being and need satisfaction. These results are discussed in terms of the importance of daily activities and the need to consider both trait and day-level determinants of well-being."} {"_id": "88af62beb2ef314e6d50c750e6ff0a3551fb9f26", "title": "Personal projects, happiness, and meaning: on doing well and being yourself.", "text": "Personal Projects Analysis (B. R. Little, 1983) was adapted to examine relations between participants' appraisals of their goal characteristics and orthogonal happiness and meaning factors that emerged from factor analyses of diverse well-being measures. In two studies with 146 and 179 university students, goal efficacy was associated with happiness and goal integrity was associated with meaning. A new technique for classifying participants according to emergent identity themes is introduced. In both studies, identity-compensatory predictors of happiness were apparent. Agentic participants were happiest if their goals were supported by others, communal participants were happiest if their goals were fun, and hedonistic participants were happiest if their goals were being accomplished. The distinction between happiness and meaning is emphasized, and the tension between efficacy and integrity is discussed.
Developmental implications are discussed with reference to results from archival data from a sample of senior managers."} {"_id": "397fd411c58b25d04dd6a4c1896e86a78ae51004", "title": "Quasi-Dense Reconstruction from Image Sequence", "text": "This paper proposes a quasi-dense reconstruction from an uncalibrated sequence. The main innovation is that all geometry is computed based on re-sampled quasi-dense correspondences rather than the standard sparse points of interest. It not only produces more accurate and robust reconstruction due to highly redundant and well-spread input data, but also fills the gap left by the insufficiency of sparse reconstruction for visualization applications. The computational engines are the quasi-dense 2-view and quasi-dense 3-view algorithms developed in this paper. Experiments on real sequences demonstrate the superior performance of quasi-dense w.r.t. sparse reconstruction both in accuracy and robustness."} {"_id": "b5e3beb791cc17cdaf131d5cca6ceb796226d832", "title": "Novel Dataset for Fine-Grained Image Categorization: Stanford Dogs", "text": "We introduce the 120-class Stanford Dogs dataset, a challenging and large-scale dataset aimed at fine-grained image categorization. Stanford Dogs includes over 22,000 annotated images of dogs belonging to 120 breeds. Each image is annotated with a bounding box and object class label. Fig. 1 shows examples of images from Stanford Dogs. This dataset is extremely challenging due to a variety of reasons. First, being a fine-grained categorization problem, there is little inter-class variation. For example, the basset hound and bloodhound share very similar facial characteristics but differ significantly in their color, while the Japanese spaniel and papillon share very similar color but greatly differ in their facial characteristics. Second, there is very large intra-class variation. The images show that dogs within a class could have different ages (e.g. beagle), poses (e.g. Blenheim spaniel), occlusion/self-occlusion and even color (e.g. Shih-tzu). Furthermore, compared to other animal datasets that tend to exist in natural scenes, a large proportion of the images contain humans and are taken in man-made environments, leading to greater background variation. The aforementioned reasons make this an extremely challenging dataset."} {"_id": "79a928769af7c9b1e2dba81a1eb0247ed34faf55", "title": "The importance of being flexible: the ability to both enhance and suppress emotional expression predicts long-term adjustment.", "text": "Researchers have documented the consequences of both expressing and suppressing emotion using between-subjects designs. It may be argued, however, that successful adaptation depends not so much on any one regulatory process, but on the ability to flexibly enhance or suppress emotional expression in accord with situational demands. We tested this hypothesis among New York City college students in the aftermath of the September 11th terrorist attacks. Subjects' performance in a laboratory task in which they enhanced emotional expression, suppressed emotional expression, and behaved normally on different trials was examined as a prospective predictor of their adjustment across the first two years of college. Results supported the flexibility hypothesis. A regression analysis controlling for initial distress and motivation and cognitive resources found that subjects who were better able to enhance and suppress the expression of emotion evidenced less distress by the end of the second year.
Memory deficits were also observed for both the enhancement and the suppression tasks, suggesting that both processes require cognitive resources."} {"_id": "e11edb4201007530c3692814a155b22f78a0d659", "title": "OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles", "text": "We present a new major release of the OpenSubtitles collection of parallel corpora. The release is compiled from a large database of movie and TV subtitles and includes a total of 1689 bitexts spanning 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements in the preprocessing and alignment of the subtitles, such as the automatic correction of OCR errors and the use of meta-data to estimate the quality of each subtitle and score subtitle pairs."} {"_id": "175059c67687eefbdb7ea12e38a515db77086af0", "title": "Training RNN simulated vehicle controllers using the SVD and evolutionary algorithms", "text": "We describe an approach to creating a controller for The Open Racing Car Simulator (TORCS), based on The Simulated Car Racing Championship (SCRC) client, using unsupervised evolutionary learning for recurrent neural networks. Our method of training the recurrent neural network controllers relies on combining the components of the singular value decomposition of two different neural network connection matrices."} {"_id": "40e927fdee7517fb7513d03735754af80fb9c7b0", "title": "Label Embedding Trees for Large Multi-Class Tasks", "text": "Multi-class classification becomes challenging at test time when the number of classes is very large and testing against every possible class can become computationally infeasible. This problem can be alleviated by imposing (or learning) a structure over the set of classes. We propose an algorithm for learning a tree structure of classifiers which, by optimizing the overall tree loss, provides superior accuracy to existing tree labeling methods. We also propose a method that learns to embed labels in a low-dimensional space that is faster than non-embedding approaches and has superior accuracy to existing embedding approaches. Finally we combine the two ideas resulting in the label embedding tree that outperforms alternative methods including One-vs-Rest while being orders of magnitude faster."} {"_id": "03563044af9cb9c84b14e34516c8135fa5165c25", "title": "Dependency Grammar and Dependency Parsing", "text": "Despite a long and venerable tradition in descriptive linguistics, dependency grammar has until recently played a fairly marginal role both in theoretical linguistics and in natural language processing. The increasing interest in dependency-based representations in natural language parsing in recent years appears to be motivated both by the potential usefulness of bilexical relations in disambiguation and by the gains in efficiency that result from the more constrained parsing problem for these representations. In this paper, we will review the state of the art in dependency-based parsing, starting with the theoretical foundations of dependency grammar and moving on to consider both grammar-driven and data-driven methods for dependency parsing. We will limit our attention to systems for dependency parsing in a narrow sense, i.e. systems where the analysis assigned to an input sentence takes the form of a dependency structure.
This means that we will not discuss systems that exploit dependency relations for the construction of another type of representation, such as the head-driven parsing models of Collins (1997, 1999). Moreover, we will restrict ourselves to systems for full parsing, which means that we will not deal with systems that produce a partial or underspecified representation of dependency structure, such as Constraint Grammar parsers (Karlsson, 1990; Karlsson et al., 1995)."} {"_id": "dedf3eea39a97d606217462638f38adabc7321d1", "title": "Realized Volatility and Variance: Options via Swaps", "text": "Let St denote the value of a stock or stock index at time t. Given a variance/volatility option to be priced and hedged, let us designate as time 0 the start of its averaging period, and time T the end. For t \u2208 [0, T], and \u03c4 \u2264 t, let R^2_{\u03c4,t} denote the realized variance of returns over the time interval [\u03c4, t]. The mathematical results about the synthesis of volatility and variance swaps will hold exactly if R^2 refers to the continuously-monitored variance, that is, to the quadratic variation of log S."} {"_id": "7f3e1903023fae5ca343db2546d3eebd2b89c827", "title": "Single-Stage Single-Switch PFC Flyback Converter Using a Synchronous Rectifier", "text": "A single-stage single-switch power factor correction (PFC) flyback converter with a synchronous rectifier (SR) is proposed for improving power factor and efficiency. Using a variable switching-frequency controller, this converter is continuously operated with a reduced turn-on switching loss at the boundary of the continuous conduction mode and discontinuous conduction mode (DCM). The proposed PFC circuit provides relatively low dc-link voltage in the universal line voltage, and also complies with Standard IEC 61000-3-2 Class D limits. In addition, a new driving circuit as the voltage driven-synchronous rectifier is proposed to achieve high efficiency. In particular, since a driving signal is generated according to the voltage polarity, the SR driving circuit can easily be used in DCM applications. The proposed PFC circuit and SR driving circuit in the flyback converter with the reduced switching loss are analyzed in detail and optimized for high performance. Experimental results for a 19 V/90 W adapter at the variable switching-frequency of 30~70 kHz were obtained to show the performance of the proposed converter."} {"_id": "286381f03a2e30a942f29b8ed649c63e5b668e63", "title": "Current and Emerging Topics in Sports Video Processing", "text": "Sports video processing is an interesting topic for research, since the clearly defined game rules in sports provide the rich domain knowledge for analysis. Moreover, it is interesting because many specialized applications for sports video processing are emerging. This paper gives an overview of sports video research, where we describe both basic algorithmic techniques and applications."} {"_id": "9151541a84d25e31a1101e82a4b345639223bbd8", "title": "Improving Business Process Quality through Exception Understanding, Prediction, and Prevention", "text": "Business process automation technologies are being increasingly used by many companies to improve the efficiency of both internal processes as well as of e-services offered to customers.
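For reference, the realized-variance notation quoted in the "Realized Volatility and Variance: Options via Swaps" abstract above can be written out explicitly. The following is a hedged LaTeX reconstruction, assuming the standard discrete monitoring convention with dates \tau = t_0 < \dots < t_n = t; only the symbol R^2_{\tau,t} and the quadratic-variation limit come from the abstract itself.

```latex
\[
  R^2_{\tau,t} \;=\; \sum_{k=1}^{n}
    \left( \log \frac{S_{t_k}}{S_{t_{k-1}}} \right)^{\!2},
  \qquad \tau = t_0 < t_1 < \dots < t_n = t ,
\]
\[
  R^2_{\tau,t} \;\longrightarrow\;
  \langle \log S \rangle_t - \langle \log S \rangle_\tau
  \quad \text{(continuous monitoring: the quadratic variation of } \log S \text{)} .
\]
```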
In order to satisfy customers and employees, business processes need to be executed with a high and predictable quality. In particular, it is crucial for organizations to meet the Service Level Agreements (SLAs) stipulated with the customers and to foresee as early as possible the risk of missing SLAs, in order to set the right expectations and to allow for corrective actions. In this paper we focus on a critical issue in business process quality: that of analyzing, predicting and preventing the occurrence of exceptions, i.e., of deviations from the desired or acceptable behavior. We characterize the problem and propose a solution, based on data warehousing and mining techniques. We then describe the architecture and implementation of a tool suite that enables exception analysis, prediction, and prevention. Finally, we show experimental results obtained by using the tool suite to analyze internal HP processes."} {"_id": "ac4b1536d8e1dd45c7816dd446e7dd890ba00067", "title": "A Service-Oriented Architecture for Virtualizing Robots in Robot-as-a-Service Clouds", "text": "Exposing software and hardware computing resources as services through a cloud has been increasingly emerging in recent years. This comes as a result of extending the service-oriented architecture (SOA) paradigm to virtualize computing resources. In this paper, we extend the paradigm of the SOA approach to virtualize robotic hardware and software resources to expose them as services through the Web. This allows non-technical users to access, interact with, and manipulate robots simply through a Web browser. The proposed RoboWeb system is based on a SOAP-based Web service middleware that binds robot computing resources as services and publishes them to end-users. We consider robots that operate with the Robot Operating System (ROS), as it provides hardware abstraction that makes application development easier. We describe the implementation of RoboWeb and demonstrate how researchers can use it to interact remotely with the robots. We believe that this work consistently contributes to enabling remote robotic labs using the cloud paradigm."} {"_id": "79acd526a54ea8508bc80f64f5f4d7ac44f1f53b", "title": "TumorNet: Lung nodule characterization using multi-view Convolutional Neural Network with Gaussian Process", "text": "Characterization of lung nodules as benign or malignant is one of the most important tasks in lung cancer diagnosis, staging and treatment planning. Since the variation in the appearance of the nodules is large, there is a need for a fast and robust computer-aided system. In this work, we propose an end-to-end trainable multi-view deep Convolutional Neural Network (CNN) for nodule characterization. First, we use median intensity projection to obtain a 2D patch corresponding to each dimension. The three images are then concatenated to form a tensor, where the images serve as different channels of the input image. In order to increase the number of training samples, we perform data augmentation by scaling, rotating and adding noise to the input image. The trained network is used to extract features from the input image followed by a Gaussian Process (GP) regression to obtain the malignancy score. We also empirically establish the significance of different high-level nodule attributes such as calcification, sphericity and others for malignancy determination.
These attributes are found to be complementary to the deep multi-view CNN features, and a significant improvement over other methods is obtained."} {"_id": "fb26a77fb3532da237ef9c372af8e9c46dc4b16c", "title": "Behavior Analysis Using a Multilevel Motion Pattern Learning Framework", "text": "The increasing availability of video data, through existing traffic cameras or dedicated field data collection, paves the way for the collection of massive datasets about the microscopic behaviour of road users using computer vision techniques. Analysis of such datasets helps to understand the normal road user behaviour, and it can be used for realistic prediction of future motion and computing surrogate safety indicators. A multi-level motion pattern learning framework is developed that enables automated scene interpretation, anomalous behaviour detection and surrogate safety analysis. Firstly, points of interest (POIs) are learnt based on a Gaussian Mixture Model and the Expectation Maximization algorithm and then used to form activity paths (APs). Secondly, motion patterns, represented by trajectory prototypes, are learnt from road users\u2019 trajectories in each AP using a two-stage trajectory clustering method based on spatial then temporal (speed) information. Finally, motion prediction relies on matching at each instant partial trajectories to the learnt prototypes to evaluate the potential for collision by computing indicators. An intersection case study demonstrates the framework's ability in many ways: it helps to reduce the computation cost by up to 90%, clean the trajectory dataset of tracking outliers, use actual trajectories as prototypes without any pre- and post-processing, and predict future motion realistically to compute surrogate safety indicators."} {"_id": "b9c4f8f76680070a9d3a41284320bdd9561a6204", "title": "A High-Power Broadband Passive Terahertz Frequency Doubler in CMOS", "text": "To realize a high-efficiency terahertz signal source, a varactor-based frequency-doubler topology is proposed. The structure is based on a compact partially coupled ring that simultaneously achieves isolation, matching, and harmonic filtering for both input and output signals at f0 and 2f0. The optimum varactor pumping/loading conditions for the highest conversion efficiency are also presented analytically along with intuitive circuit representations. Using the proposed circuit, a passive 480-GHz frequency doubler with a measured minimum conversion loss of 14.3 dB and an unsaturated output power of 0.23 mW is reported. Within a 20-GHz range, the fluctuation of the measured output power is less than 1.5 dB, and the simulated 3-dB output bandwidth is 70 GHz (14.6%). The doubler is fabricated using 65-nm low-power bulk CMOS technology and consumes near zero dc power."} {"_id": "9d5d3fa5bc48de89c398042236b85566f433eb5c", "title": "Anonymous CoinJoin Transactions with Arbitrary Values", "text": "Bitcoin, arguably the most popular cryptocurrency to date, allows users to perform transactions using freely chosen pseudonymous addresses. Previous research, however, suggests that these pseudonyms can easily be linked, implying a lower level of privacy than originally expected. To obfuscate the links between pseudonyms, different mixing methods have been proposed. One of the first approaches is the CoinJoin concept, where multiple users merge their transactions into one larger transaction. In theory, CoinJoin can be used to mix and transact bitcoins simultaneously, in one step.
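The multi-view input construction described in the TumorNet abstract above can be sketched in a few lines. Only the median-intensity-projection-per-axis idea comes from the abstract; the cubic patch size, NumPy, and the channel stacking order are assumptions for illustration.

```python
import numpy as np

def multi_view_patch(volume: np.ndarray) -> np.ndarray:
    """Build a 3-channel 2-D patch from a cubic (S, S, S) nodule sub-volume
    via median intensity projection along each of the three axes."""
    assert volume.ndim == 3 and len(set(volume.shape)) == 1, "expects a cube"
    views = [np.median(volume, axis=axis) for axis in range(3)]  # 3 x (S, S)
    return np.stack(views, axis=-1)                              # (S, S, 3)

patch = multi_view_patch(np.random.rand(64, 64, 64))
print(patch.shape)  # (64, 64, 3) -- ready to feed a 2-D multi-view CNN
```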
Yet, it is expected that differing bitcoin amounts would allow an attacker to derive the original single transactions. Solutions based on CoinJoin therefore prescribe the use of fixed bitcoin amounts and cannot be used to perform arbitrary transactions. In this paper, we define a model for CoinJoin transactions and metrics that allow conclusions about the provided anonymity. We generate and analyze CoinJoin transactions and show that with differing, representative amounts they generally do not provide any significant anonymity gains. As a solution to this problem, we present an output splitting approach that introduces sufficient ambiguity to effectively prevent linking in CoinJoin transactions. Furthermore, we discuss how this approach could be used in Bitcoin today."} {"_id": "8124e1c470957979625142b876acad033bdbe69f", "title": "Blockchain \u2014 Literature survey", "text": "In the modern world, everything is gradually shifting towards a more digitized outlook. From teaching methods to online transactions, every small aspect of our life is given a more technological touch. In such a case \u201cmoney\u201d is not left behind either. An approach towards digitizing money led to the creation of \u201cbitcoin\u201d. Bitcoin is the most efficient crypto-currency so far in the history of digital currency. Ruling out the interference of any third party which monitors the cash flow, Bitcoin is a decentralized form of online currency and is widely accepted for internet transactions all over the world. The need for digital money will be extensive in the near future."} {"_id": "06c75fd841ea83440a9496ecf6d8e3d6341f655a", "title": "IMPLICIT SLICING METHOD FOR ADDITIVE MANUFACTURING PROCESSES", "text": "All additive manufacturing processes involve a distinct preprocessing stage in which a set of instructions, or GCode, that controls the process-specific manufacturing tool is generated, a step otherwise known as slicing. With regard to fused deposition modeling, the GCode defines many crucial parameters which are needed to produce a part, including tool path, infill density, layer height, feed rate, tool head and plate temperature. The majority of current commercial slicing programs generate tool paths explicitly, and do not consider particular geometric properties of parts, such as thin walls, small corners and round profiles, that can result in critical voids leading to part failure. This work replicates an implicit slicing algorithm in which functionally derived infill patterns are overlaid onto each layer of a part, reducing the possibility of undesired voids and flaws. This work also further investigates the effects that varying implicitly derived infill patterns have on a part\u2019s mechanical properties through tensile testing of dog-bone specimens with three different infill patterns and comparing ultimate stress and elastic modulus properties."} {"_id": "61060bea27a3410260988540b627ccc5ba131822", "title": "Adversarial Cross-Modal Retrieval", "text": "Cross-modal retrieval aims to enable a flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where the items of different modalities can be directly compared to each other. In this paper, we present a novel Adversarial Cross-Modal Retrieval (ACMR) method, which seeks an effective common subspace based on adversarial learning. Adversarial learning is implemented as an interplay between two processes.
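To make the output-splitting idea from the CoinJoin abstract above concrete, here is a deliberately toy sketch: splitting the larger of two payments so that two equal-valued outputs appear removes the unique input-to-output mapping. The pairwise rule below is an illustrative assumption, not the authors' actual splitting algorithm.

```python
def split_outputs(amount_a: int, amount_b: int) -> list[int]:
    """Split the larger of two payment amounts (in satoshis) so the joint
    CoinJoin transaction contains two indistinguishable equal-valued outputs."""
    low, high = sorted((amount_a, amount_b))
    # Two outputs of value `low` now exist; an observer cannot tell which
    # belongs to which sender, and `high - low` could extend either payment.
    return [low, low, high - low]

print(split_outputs(150_000, 230_000))  # [150000, 150000, 80000]
```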
The first process, a feature projector, tries to generate a modality-invariant representation in the common subspace and to confuse the other process, the modality classifier, which tries to discriminate between different modalities based on the generated representation. We further impose triplet constraints on the feature projector in order to minimize the gap among the representations of all items from different modalities with the same semantic labels, while maximizing the distances among semantically different images and texts. Through the joint exploitation of the above, the underlying cross-modal semantic structure of multimedia data is better preserved when this data is projected into the common subspace. Comprehensive experimental results on four widely used benchmark datasets show that the proposed ACMR method is superior in learning effective subspace representation and that it significantly outperforms the state-of-the-art cross-modal retrieval methods."} {"_id": "25387000175f1622ddc4b485a91c9fa12bd253a3", "title": "Key drivers of internet banking services use", "text": "Purpose \u2013 The purpose of this paper is to analyse the determinants of internet banking use, paying special attention to the role of product involvement, perceived risk and trust. Design/methodology/approach \u2013 The impact of trust, perceived risks, product involvement and TAM beliefs (ease of use and usefulness) on internet banking adoption is tested through structural equation modelling techniques. The sample consists of 511 Spanish internet banking services users and the data are collected through an internet survey. Risk is measured as a formative construct. Findings \u2013 Data analysis shows that TAM beliefs and perceived risks (security, privacy, performance and social) have a direct influence on e-banking adoption. Trust appears as a key variable that reduces perceived risk. Involvement plays an important role in increasing perceived ease of use. Practical implications \u2013 This research provides banks with knowledge of what aspects to highlight in their communications strategies to increase their internet banking services adoption rates. The research findings show managers that web contents and design are key tools to increase internet banking services adoption. Practical recommendations to increase web usefulness and trust, and guidelines to reduce perceived risk dimensions are also provided. Originality/value \u2013 Despite the importance of trust issues and risk perceptions for internet banking adoption, only limited work has been done to identify trust and risk dimensions in an online banking context. We have evaluated the impact of each risk dimension instead of treating risk as a whole. Furthermore, risk has been measured as a formative construct because there is no reason to expect that risk dimensions in online financial services are correlated."} {"_id": "ac8c2e1fa35e797824958ced835257cd49e1be9c", "title": "Information technology and organizational learning: a review and assessment of research", "text": "This paper reviews and assesses the emerging research literature on information technology and organizational learning. After discussing issues of meaning and measurement, we identify and assess two main streams of research: studies that apply organizational learning concepts to the process of implementing and using information technology in organizations; and studies concerned with the design of information technology applications to support organizational learning.
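The triplet constraints described in the ACMR abstract above have a compact expression. Below is a hedged PyTorch sketch; the margin value, the Euclidean distance, and the batch construction (each image anchored against a same-label text and a different-label text) are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(img_emb, txt_emb_pos, txt_emb_neg, margin=1.0):
    """img_emb and txt_emb_pos share semantic labels; txt_emb_neg does not.
    Penalizes same-label cross-modal pairs that are not at least `margin`
    closer than different-label pairs."""
    d_pos = F.pairwise_distance(img_emb, txt_emb_pos)  # same-label distance
    d_neg = F.pairwise_distance(img_emb, txt_emb_neg)  # cross-label distance
    return F.relu(d_pos - d_neg + margin).mean()

loss = cross_modal_triplet_loss(torch.randn(8, 128),
                                torch.randn(8, 128),
                                torch.randn(8, 128))
```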
From the former stream of research, we conclude that experience plays an important, yet indeterminate role in implementation success; learning is accomplished through both formal training and participation in practice; organizational knowledge barriers may be overcome by learning from other organizations; and that learning new technologies is a dynamic process characterized by relatively narrow windows of opportunity. From the latter stream, we conclude that conceptual designs for organizational memory information systems are a valuable contribution to artifact development; learning is enhanced through systems that support communication and discourse; and that information technologies have the potential to both enable and disable organizational learning. Currently, these two streams flow independently of each other, despite their close conceptual and practical links. We advise that future research on information technology and organizational learning proceeds in a more integrated fashion, recognizes the situated nature of organizational learning, focuses on distributed organizational memory, demonstrates the effectiveness of artifacts in practice, and looks for relevant research findings in related fields."} {"_id": "12b8cf6ab20d0b66f975794767ead30e9f11fc49", "title": "Cyber attack modeling and simulation for network security analysis", "text": "Cyber security methods are continually being developed. To test these methods many organizations utilize both virtual and physical networks which can be costly and time consuming. As an alternative, in this paper, we present a simulation modeling approach to represent computer networks and intrusion detection systems (IDS) to efficiently simulate cyber attack scenarios. The outcome of the simulation model is a set of IDS alerts that can be used to test and evaluate cyber security systems. In particular, the simulation methodology is designed to test information fusion systems for cyber security that are under development."} {"_id": "26dd163ee6a0e75659a7f1485d1d442852d74436", "title": "Using Knowledge Representation and Reasoning Tools in the Design of Robots", "text": "The paper describes the authors\u2019 experience in using knowledge representation and reasoning tools in the design of robots. The focus is on the systematic construction of models of the robot\u2019s capabilities and its domain at different resolutions, and on establishing a clear relationship between the models at the different resolutions."} {"_id": "65f797e97cca7463d3889e2f2f9de4c0f8121742", "title": "An Experience Teaching a Graduate Course in Cryptography", "text": "This article describes an experience of teaching a graduate-level course in cryptography and computer security at New York University. The course content as well as lessons learned and plans for the future are discussed."} {"_id": "24019050c30b7e5bf1be28e48b8cb5278c4286fd", "title": "PH2 - A dermoscopic image database for research and benchmarking", "text": "The increasing incidence of melanoma has recently promoted the development of computer-aided diagnosis systems for the classification of dermoscopic images. Unfortunately, the performance of such systems cannot be compared since they are evaluated in different sets of images by their authors and there are no public databases available to perform a fair evaluation of multiple systems. In this paper, a dermoscopic image database, called PH2, is presented. 
The PH2 database includes the manual segmentation, the clinical diagnosis, and the identification of several dermoscopic structures, performed by expert dermatologists, in a set of 200 dermoscopic images. The PH2 database will be made freely available for research and benchmarking purposes."} {"_id": "ac9748ea3945eb970cc32a37db7cfdfd0f22e74c", "title": "Ridge-based vessel segmentation in color images of the retina", "text": "A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kNN-classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that our method is significantly better than the two rule-based methods (p<0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer."} {"_id": "dffaa9e4bfd2bca125883372820c1dcfd87bc081", "title": "Active Contours without Edges for Vector-Valued Images", "text": "In this paper, we propose an active contour algorithm for object detection in vector-valued images (such as RGB or multispectral). The model is an extension of the scalar Chan\u2013Vese algorithm to the vector-valued case [1]. The model minimizes a Mumford\u2013Shah functional over the length of the contour, plus the sum of the fitting error over each component of the vector-valued image. Like the Chan\u2013Vese model, our vector-valued model can detect edges both with and without gradient. We show examples where our model detects vector-valued objects which are undetectable in any scalar representation. For instance, objects with different missing parts in different channels are completely detected (such as occlusion). Also, in color images, objects which are invisible in each channel or in intensity can be detected by our algorithm. Finally, the model is robust with respect to noise, requiring no a priori denoising step."} {"_id": "2d88e7922d9f046ace0234f9f96f570ee848a5b5", "title": "Building Better Detection with Privileged Information", "text": "Modern detection systems use sensor outputs available in the deployment environment to probabilistically identify attacks. These systems are trained on past or synthetic feature vectors to create a model of anomalous or normal behavior. Thereafter, run-time collected sensor outputs are compared to the model to identify attacks (or the lack of attack). While this approach to detection has been proven to be effective in many environments, it is limited to training on only features that can be reliably collected at test-time.
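The functional referred to in the "Active Contours without Edges for Vector-Valued Images" abstract above (contour length plus per-channel fitting error) is usually written as follows. The notation, with N channels u_{0,i}, weights \lambda_i^{\pm}, and inside/outside averages c_i^{\pm}, follows the common Chan-Vese convention and is assumed here rather than quoted from the paper.

```latex
\[
  E(c^{+}, c^{-}, C) \;=\; \mu \,\mathrm{Length}(C)
  \;+\; \frac{1}{N}\sum_{i=1}^{N} \lambda_i^{+}
        \int_{\mathrm{inside}(C)} \bigl|u_{0,i}(x,y) - c_i^{+}\bigr|^{2}\,dx\,dy
  \;+\; \frac{1}{N}\sum_{i=1}^{N} \lambda_i^{-}
        \int_{\mathrm{outside}(C)} \bigl|u_{0,i}(x,y) - c_i^{-}\bigr|^{2}\,dx\,dy .
\]
```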
Hence, they fail to leverage the often vast amount of ancillary information available from past forensic analysis and post-mortem data. In short, detection systems don\u2019t train (and thus don\u2019t learn from) features that are unavailable or too costly to collect at run-time. In this paper, we leverage recent advances in machine learning to integrate privileged information \u2013 features available at training time, but not at run-time \u2013 into detection algorithms. We apply three different approaches to model training with privileged information: knowledge transfer, model influence, and distillation, and empirically validate their performance in a range of detection domains. Our evaluation shows that privileged information can increase detector precision and recall: we observe an average 4.8% decrease in detection error for malware traffic detection over a system with no privileged information, 3.53% for fast-flux domain bot detection, 3.33% for malware classification, and 11.2% for facial user authentication. We conclude by exploring the limitations and applications of different privileged information techniques in detection systems."} {"_id": "3723eb29efd2e5193834b9d1ef71f1169b3d4b9a", "title": "Combining Textual Entailment and Argumentation Theory for Supporting Online Debates Interactions", "text": "Blogs and forums are widely adopted by online communities to debate about various issues. However, a user who wants to join a debate may experience some difficulty in extracting the currently accepted positions, and can be discouraged from interacting through these applications. In our paper, we combine textual entailment with argumentation theory to automatically extract the arguments from debates and to evaluate their acceptability."} {"_id": "654d129eafc136bf5fccbc54e6c8078e87989ea8", "title": "A multimode-beamforming 77-GHz FMCW radar system", "text": "In this work a multimode-beamforming 77-GHz frequency-modulated continuous-wave radar system is presented. Four transceiver chips with integrated in-phase/quadrature modulators in the transmit path are used in order to simultaneously realize a short-range frequency-division multiple-access (FDMA) multiple-input multiple-output (MIMO) and a long-range transmit phased-array (PA) radar system with the same antennas. It combines the high angular resolution of FDMA MIMO radars and the high-gain and steerable beam of PA transmit antennas. Several measurements were carried out to show the potential benefits of using this concept for a linear antenna array with four antennas and methods of digital beamforming in the receive path."} {"_id": "eeba24b5ae5034665de50ed56c9f76adb95167c3", "title": "X509Cloud \u2014 Framework for a ubiquitous PKI", "text": "The SSL protocol has been widely used for verifying digital identities and to secure Internet traffic since the early days of the web. Although X.509 certificates have been in existence for more than two decades, individual user uptake has been low due to the high cost of issuance and maintenance of such certificates. This has led to a situation whereby users are able to verify the identity of an organization or e-commerce retailer via their digital certificate, but organizations have to rely on weak username and password combinations to verify the identity of customers registered with their service.
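Of the three approaches named in the privileged-information abstract above, distillation is the easiest to sketch: a teacher trained on run-time plus privileged features produces soft labels that supervise a student restricted to run-time features. The temperature and mixing weight below are conventional distillation assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft supervision from a privileged-feature teacher with the
    usual hard-label loss on the run-time-feature student."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # scale for T-softening
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```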
We propose the X509Cloud framework which enables organizations to issue certificates to their users at zero cost, and allows them to securely store and disseminate client certificates using the Bitcoin-inspired blockchain protocol. This in turn will enable organizations and individuals to authenticate and to securely communicate with other users on the Internet."} {"_id": "45b1f6b683fdc5f8d5a340a7ba203c574c1d93a5", "title": "Finding and Optimizing Solvable Priority Schemes for Decoupled Path Planning Techniques for Teams of Mobile Robots", "text": "Coordinating the motion of multiple mobile robots is one of the fundamental problems in robotics. The predominant algorithms for coordinating teams of robots are decoupled and prioritized, thereby avoiding combinatorially hard planning problems typically faced by centralized approaches. While these methods are very efficient, they have two major drawbacks. First, they are incomplete, i.e. they sometimes fail to find a solution even if one exists, and second, the resulting solutions are often not optimal. In this paper we present a method for finding and optimizing priority schemes for such prioritized and decoupled planning techniques. Existing approaches apply a single priority scheme which makes them overly prone to failure in cases where valid solutions exist. By searching in the space of prioritization schemes, our approach overcomes this limitation. It performs a randomized search with hill-climbing to find solutions and to minimize the overall path length. To focus the search, our algorithm is guided by constraints generated from the task specification. To illustrate the appropriateness of this approach, this paper discusses experimental results obtained with real robots and through systematic robot simulation. The experimental results illustrate the superior performance of our approach, both in terms of efficiency of robot motion and in the ability to find valid plans."} {"_id": "21cc39f04252e7ea98ab3492fa1945a258708f2c", "title": "The Impact of Node Selfishness on Multicasting in Delay Tolerant Networks", "text": "Due to the uncertainty of transmission opportunities between mobile nodes, delay tolerant networks (DTNs) exploit the opportunistic forwarding mechanism. This mechanism requires nodes to forward messages in a cooperative and unselfish way. However, in the real world, most of the nodes exhibit selfish behaviors, such as individual and social selfishness. In this paper, we are the first to investigate how the selfish behaviors of nodes affect the performance of DTN multicast. We consider two typical multicast relaying schemes, namely, two-hop relaying and epidemic relaying, and study their performance in terms of average message transmission delay and transmission cost. Specifically, we model the message delivery process under selfish behaviors by a 3-D continuous time Markov chain; under this model, we derive closed-form formulas for the message transmission delay and cost. Then, we evaluate the accuracy of the proposed Markov chain model by comparing the theoretical results with the simulation results obtained by simulating the message dissemination under both two-hop and epidemic relaying with different network sizes and mobility models. Our study shows that different selfish behaviors may have different impacts on different performance metrics. In addition, selfish behaviors influence epidemic relaying more than two-hop relaying.
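The randomized search described in the priority-scheme abstract above can be outlined compactly. In the sketch below, plan_with_order is a hypothetical stand-in for the underlying prioritized planner (plan robots one at a time in the given order, returning the summed path length, or None when no collision-free plan exists); the restart and step counts are arbitrary illustrative choices.

```python
import random

def optimize_priorities(robots, plan_with_order, restarts=20, steps=100):
    """Randomized hill-climbing over robot priority orderings, keeping the
    ordering with the lowest overall path length found across restarts."""
    best_order, best_cost = None, float("inf")
    for _ in range(restarts):                          # random restarts
        order = robots[:]
        random.shuffle(order)
        cost = plan_with_order(order)
        for _ in range(steps):                         # hill climb by swaps
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
            new_cost = plan_with_order(order)
            if new_cost is not None and (cost is None or new_cost < cost):
                cost = new_cost                        # keep improving swap
            else:
                order[i], order[j] = order[j], order[i]  # undo the swap
        if cost is not None and cost < best_cost:
            best_order, best_cost = order[:], cost
    return best_order, best_cost
```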
Furthermore, our results show that the performance of multicast with selfish nodes depends on the multicast group size."} {"_id": "a4188db982a6e979d838c955a1688c1a10db5d48", "title": "Implementing & Improvisation of K-means Clustering Algorithm", "text": "Clustering techniques are the most important part of data analysis, and K-means is one of the oldest and most popular clustering techniques in use. The paper discusses the traditional K-means algorithm along with its advantages and disadvantages. It also reviews research on enhanced K-means variants proposed by various authors, together with techniques to improve traditional K-means for better accuracy and efficiency. There are two areas of concern for improving K-means: 1) selecting the initial centroids and 2) assigning data points to the nearest cluster using equations for calculating the mean and the distance between two data points. The time complexity of the proposed K-means technique is lower than that of the traditional one, with an increase in accuracy and efficiency. The main purpose of the article is to propose techniques for deriving the initial centroids and for assigning data points to their nearest clusters. The clustering technique proposed in this paper enhances accuracy and time complexity, but it still needs further improvement; in future work it is also viable to include efficient techniques for selecting the value of the number of initial clusters (k). Experimental results show that the improved method can effectively improve the speed and accuracy of clustering, reducing the computational complexity of K-means. Introduction: Clustering is a process of grouping data objects into disjoint clusters so that the data in the same cluster are similar, while data belonging to different clusters differ. A cluster is a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters [k-4]. At present the applications of computer technology are increasing rapidly, which has
This article includes researched on papers [1,2,3,4,5,6] which made some very important improvements towards the accuracy and efficiency of the clustering technique. Basic K-means Algorithm : A centroid-based Clustering technique According to the basic K-mean clustering algorithm, clusters are fully dependent on the selection of the initial clusters centroids. K data elements are selected as initial centers; then distances of all data elements are calculated by Euclidean distance formula. Data elements having less distance to centroids are moved to the appropriate cluster. The process is continued until no more changes occur in clusters[k-1]. This partitioning clustering is most popular and fundamental technique [1]. It is vastly used clustering technique which requires user specified parameters like number of clusters k, cluster initialisation and cluster metric [2]. First it needs to define initial clusters which makes subsets (or groups) of nearest points (from centroid) inside the data set and these subsets (or groups) called clusters [1]. Secondly, it finds means value for each cluster and define new centroid to allocate data points to this new centroid and this iterative process will goes on until centroid [3] does not changes. The simplest algorithm for the traditional K-means [2] is as follows; Input: D = {d1, d2, d3,.......dn} // set of n numbers of data points K // The number of desire Clusters Output: A set of k clusters 1. Select k points as initial centroids. 2. Repeat 3. From K clusters by assigning each data point to its nearest centroid. 4. Recompute the centroid for each cluster until centroid does not change [2]. Unnati R. Raval et al, International Journal of Computer Science and Mobile Computing, Vol.5 Issue.5, May2016, pg. 191-203 \u00a9 2016, IJCSMC All Rights Reserved 193 Start"} {"_id": "ba6c855521307dec467bffadf5460c39e1ececa8", "title": "An Analysis on the factors causing telecom churn: First Findings", "text": "The exponential growth in number of telecom providers and unceasing competition in the market has caused telecom operators to divert their focus more towards customer retention rather than customer acquisition. Hence it becomes extremely essential to identify the factors causing a customer to switch and take proactive actions in order to retain them. The paper aims to review existing literature to bring up a holistic view of various factors and their relationships that cause a customer to churn in the telecommunication industry. A total of 140 factors related to customer churn were compiled by reviewing several studies on customer churn behavior. The identified factors are then discussed in detail with their impact on customer\u2019s switching behavior. The limitations of the existing studies and directions for future research have also been provided. The paper will aid practitioners and researchers by serving a strong base for their industrial practice and future research."} {"_id": "289d3a9562f57d0182d1aae9376b0e3793d80272", "title": "The role of knowledge in discourse comprehension: a construction-integration model.", "text": "In contrast to expectation-based, predictive views of discourse comprehension, a model is developed in which the initial processing is strictly bottom-up. Word meanings are activated, propositions are formed, and inferences and elaborations are produced without regard to the discourse context. 
However, a network of interrelated items is created in this manner, which can be integrated into a coherent structure through a spreading activation process. Data concerning the time course of word identification in a discourse context are examined. A simulation of arithmetic word-problem understanding provides a plausible account for some well-known phenomena in this area."} {"_id": "6e54df7c65381b40948f50f6e456e7c06c009fd0", "title": "Co-evolution of Fitness Maximizers and Fitness Predictors", "text": "We introduce an estimation of distribution algorithm (EDA) based on co-evolution of fitness maximizers and fitness predictors for improving the performance of evolutionary search when evaluations are prohibitively expensive. Fitness predictors are lightweight objects which, given an evolving individual, heuristically approximate the true fitness. The predictors are trained by their ability to correctly differentiate between good and bad solutions using reduced computation. We apply co-evolving fitness prediction to symbolic regression and measure its impact. Our results show that the additional computational investment in training the co-evolving fitness predictors can greatly enhance both speed and convergence of individual solutions while overall reducing the number of evaluations. In application to symbolic regression, the advantage of using fitness predictors grows as the complexity of models increases."} {"_id": "eaede512ffebb375e3d76e9103d03a32b235e8f9", "title": "An MLP-based representation of neural tensor networks for the RDF data models", "text": "In this paper, a new representation of neural tensor networks is presented. Recently, state-of-the-art neural tensor networks have been introduced to complete RDF knowledge bases. However, the mathematical representation of these networks is still a challenging problem, due to the tensor parameters. To solve this problem, it is proposed that these networks can be represented as a two-layer perceptron network. To complete the network topology, the traditional gradient-based learning rule is then developed. It should be mentioned that for tensor networks some learning rules have been developed which are complex in nature due to the complexity of the objective function used. Indeed, this paper aims to show that the tensor network can be viewed and represented as a two-layer feedforward neural network in its traditional form. The simulation results presented in the paper easily verify this claim."} {"_id": "a21fd0749a59611af146765e4883bcfc73dcbdc8", "title": "PointCNN: Convolution On $\\mathcal{X}$-Transformed Points", "text": "We present a simple and general framework for feature learning from point clouds. The key to the success of CNNs is the convolution operator that is capable of leveraging spatially-local correlation in data represented densely in grids (e.g. images). However, point clouds are irregular and unordered, thus directly convolving kernels against features associated with the points will result in desertion of shape information and variance to point ordering. To address these problems, we propose to learn an X-transformation from the input points to simultaneously promote two causes: the first is the weighting of the input features associated with the points, and the second is the permutation of the points into a latent and potentially canonical order. Element-wise product and sum operations of the typical convolution operator are subsequently applied on the X-transformed features.
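For readers unfamiliar with the object being re-expressed in the MLP-representation abstract above, the following is a hedged sketch of a single neural-tensor-layer score for an RDF triple, the structure that the abstract proposes to view as a two-layer perceptron. The (W, V, b, u) parameterisation, the tanh nonlinearity, and the dimensions follow the common neural tensor network convention and are assumptions here, not details taken from the paper.

```python
import numpy as np

d, k = 16, 4                               # entity dim, tensor slices (assumed)
rng = np.random.default_rng(0)
W = rng.standard_normal((k, d, d))         # bilinear tensor, one d x d slice each
V = rng.standard_normal((k, 2 * d))        # standard feedforward part
b = rng.standard_normal(k)
u = rng.standard_normal(k)                 # linear output layer

def score(e1, e2):
    """Plausibility score of a (head, relation, tail) triple for one relation."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # e1^T W[s] e2 per slice s
    hidden = np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b)
    return u @ hidden                                # scalar score

print(score(rng.standard_normal(d), rng.standard_normal(d)))
```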
The proposed method is a generalization of typical CNNs to feature learning from point clouds, thus we call it PointCNN. Experiments show that PointCNN achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks."} {"_id": "b1c6c92e8c1c3f0f395b333acf9bb85aaeef926e", "title": "Increased corpus callosum size in musicians", "text": "Using in-vivo magnetic resonance morphometry it was investigated whether the midsagittal area of the corpus callosum (CC) would differ between 30 professional musicians and 30 age-, sex- and handedness-matched controls. Our analyses revealed that the anterior half of the CC was significantly larger in musicians. This difference was due to the larger anterior CC in the subgroup of musicians who had begun musical training before the age of 7. Since anatomic studies have provided evidence for a positive correlation between midsagittal callosal size and the number of fibers crossing through the CC, these data indicate a difference in interhemispheric communication and possibly in hemispheric (a)symmetry of sensorimotor areas. Our results are also compatible with plastic changes of components of the CC during a maturation period within the first decade of human life, similar to those observed in animal studies."} {"_id": "bd863d284d552d9c506fc2978d4019d45101250a", "title": "Integrated pathology and radiology learning for a musculoskeletal system module: an example of interdisciplinary integrated form", "text": "Introduction\nMany curricula integrate radiology with anatomy courses, but none of these curricula adopt the integration of pathology with radiology as an interdisciplinary form at the undergraduate level. The aim of the current study was to identify the outcome of an interdisciplinary integrated course of pathology and radiology in the musculoskeletal system module (MSK).\n\n\nMethods\nA comparative interventional study was conducted on 60 students representing a whole class of the third year of level V. The MSK and gastrointestinal (GIT) modules were selected as the study and control modules, respectively, because both are adopted for the same level with the same allocated hours, are enriched with many subject areas from both fields, and have learning resources available for both. A planned interdisciplinary integrated course for MSK pathology and radiology was implemented in the pathology lab. The subject area was selected and then taught consecutively for both fields by pathology and radiology experts. After teaching, gross/histopathologic specimens and radiology imaging/reports were distributed over the benches and the students examined them. Conversely, in the GIT control module, both fields were delivered separately, and no interdisciplinary form of integration occurred. Students' scores for both fields were filtered from the objective structured practical exam, quiz, and final exam. Students' marks and satisfaction were subjected to multiple comparisons using the independent Student's t-test. SPSS version 17 was used.\n\n\nResults\nSignificant differences were found between the students' total marks for the two modules and between the radiology marks of the two, with P=0.0152 and 0.0199, respectively. The number of students who achieved >80% in MSK was 20 and 26, compared to 15 and 17 in GIT, for pathology and radiology, respectively. Student satisfaction with the interdisciplinary integration in MSK was high, with a significant difference between MSK and GIT.\n\n\nConclusion\nThe integration of both fields augments student performance in both.
This experience should encourage the curriculum committee to extend it to all other modules."} {"_id": "d327ef1bfceda3a0ecbed76e030918445770c5db", "title": "Born Digital: Understanding the First Generation of Digital Natives", "text": "Perhaps you have seen them on the train in the morning, adjusting their iPods and texting with their preternaturally agile thumbs. Or perhaps one sits in the cubicle next to yours, cycling through a seemingly endless loop of news feeds, YouTube videos, cell phone calls, and work. Perhaps you have one living in your home, capable of reprogramming the wireless router and leaping insurmountable DVR instructions in a single, tech-savvy bound."} {"_id": "cbd61961cb67ac04d7c6d1ac3f4ce27a802fc0cc", "title": "Towards a 99% efficient three-phase buck-type PFC rectifier for 400 V DC distribution systems", "text": "In telecom applications, the vision of a total power conversion efficiency of 95% from the mains to the output of PoL converters demands an optimization of every conversion step, i.e. the PFC rectifier front-end should show an outstanding efficiency in the range of 99%. For the recently discussed 400 V DC distribution bus voltages, a buck-type PFC rectifier is a logical solution. In this paper, an efficiency-optimized, nearly 99% efficient, 5 kW three-phase buck-type PFC rectifier with 400 V output is presented. Methods for calculating losses of all components are described, and are used to optimize the converter design for efficiency at full load. Special attention is paid to semiconductor losses, which are shown to be dominant, with the parasitic device capacitance losses being a significant component. A prototype of the proposed rectifier is constructed which verifies the accuracy of the models used for loss calculation and optimization."} {"_id": "60611349d1b6d64488a5a88a9193e62d9db27b71", "title": "REVIEW OF FATIGUE DETECTION AND PREDICTION TECHNOLOGIES September 2000 Prepared", "text": "This report reviews existing fatigue detection and prediction technologies. Data regarding the different technologies available were collected from a wide variety of worldwide sources. The first half of this report summarises the current state of research and development of the technologies and summarises the status of the technologies with respect to the key issues of sensitivity, reliability, validity and acceptability. The second half evaluates the role of the technologies in transportation, and comments on the place of the technologies vis-a-vis other enforcement and regulatory frameworks, especially in Australia and New Zealand. The report authors conclude that the hardware technologies should never be used as the company fatigue management system. Hardware technologies only have the potential to be a last-ditch safety device. Nevertheless, the output of hardware technologies could usefully feed into company fatigue management systems to provide real-time risk assessment. However, hardware technology output should never be the only input into a management system. Other inputs should at least come from validated software technologies, mutual assessment of fitness for duty and other risk assessments of work load, schedules and rosters.
Purpose: For information: to provide an understanding of the place of fatigue detection and prediction technologies in the management of fatigue in drivers of heavy vehicles."} {"_id": "057dd526f262be89e6ba22bfc12bb89f61537c6b", "title": "\"Eczema coxsackium\" and unusual cutaneous findings in an enterovirus outbreak.", "text": "OBJECTIVE\nTo characterize the atypical cutaneous presentations in the coxsackievirus A6 (CVA6)-associated North American enterovirus outbreak of 2011-2012.\n\n\nMETHODS\nWe performed a retrospective case series of pediatric patients who presented with atypical cases of hand, foot, and mouth disease (HFMD) from July 2011 to June 2012 at 7 academic pediatric dermatology centers. Patients were included if they tested positive for CVA6 or if they met clinical criteria for atypical HFMD (an enanthem or exanthem characteristic of HFMD with unusual morphology or extent of cutaneous findings). We collected demographic, epidemiologic, and clinical data including history of skin conditions, morphology and extent of exanthem, systemic symptoms, and diagnostic test results.\n\n\nRESULTS\nEighty patients were included in this study (median age 1.5 years, range 4 months-16 years). Seventeen patients were CVA6-positive, and 63 met clinical inclusion criteria. Ninety-nine percent of patients exhibited a vesiculobullous and erosive eruption; 61% of patients had rash involving >10% body surface area. The exanthem had a perioral, extremity, and truncal distribution in addition to involving classic HFMD areas such as palms, soles, and buttocks. In 55% of patients, the eruption was accentuated in areas of eczematous dermatitis, termed \"eczema coxsackium.\" Other morphologies included Gianotti-Crosti-like (37%), petechial/purpuric (17%) eruptions, and delayed onychomadesis and palm and sole desquamation. There were no patients with serious systemic complications.\n\n\nCONCLUSIONS\nThe CVA6-associated enterovirus outbreak was responsible for an exanthem potentially more widespread, severe, and varied than classic HFMD that could be confused with bullous impetigo, eczema herpeticum, vasculitis, and primary immunobullous disease."} {"_id": "3ba631631031863c748cf4e37fd042541db934e3", "title": "Removal Mechanism in Chemical Mechanical Polishing : Theory and Modeling", "text": "The abrasion mechanism in solid-solid contact mode of the chemical mechanical polishing (CMP) process is investigated in detail. Based on assumptions of plastic contact over wafer-abrasive and pad-abrasive interfaces, the normal distribution of abrasive size and an assumed periodic roughness of pad surface, a novel model is developed for material removal in CMP. The basic model is MRR = \u03c1 N V_removed, where \u03c1 is the density of the wafer, N the number of active abrasives, and V_removed the volume of material removed by a single abrasive. The model proposed integrates process parameters including pressure and velocity and other important input parameters including the wafer hardness, pad hardness, pad roughness, abrasive size, and abrasive geometry into the same formulation to predict the material removal rate (MRR). An interface between the chemical effect and the mechanical effect has been constructed through a fitting parameter, a \u201cdynamical\u201d hardness value of the wafer surface, in the model. It reflects the influences of chemicals on the mechanical material removal. The fluid effect in the current model is attributed to the number of active abrasives.
It is found that the nonlinear down pressure dependence of the material removal rate is related to a probability density function of the abrasive size and the elastic deformation of the pad. Compared with experimental results, the model accurately predicts the MRR. With further verification of the model, a better understanding of the fundamental mechanism involved in material removal in the CMP process, particularly the different roles played by the consumables and their interactions, can be obtained."} {"_id": "afdee392c887aae5fc529c815ec88cabaeba6bcf", "title": "Abductive Matching in Question Answering", "text": "We study question-answering over semi-structured data. We introduce a new way to apply the technique of semantic parsing by applying machine learning only to provide annotations that the system infers to be missing; all the other parsing logic is in the form of manually authored rules. In effect, the machine learning is used to provide non-syntactic matches, a step that is ill-suited to manual rules. The advantage of this approach is in its debuggability and in its transparency to the end-user. We demonstrate the effectiveness of the approach by achieving state-of-the-art performance of 40.42% on a standard benchmark dataset over tables from Wikipedia."} {"_id": "36c54ce6a0da98f5812269afccc3b931d304f6f8", "title": "Academic-Industrial Perspective on the Development and Deployment of a Moderation System for a Newspaper Website", "text": "This paper describes an approach and our experiences from the development, deployment and usability testing of a Natural Language Processing (NLP) and Information Retrieval system that supports the moderation of user comments on a large newspaper website. We highlight some of the differences between industry-oriented and academic research settings and their influence on the decisions made in the data collection and annotation processes, selection of document representation and machine learning methods. We report on classification results, where the problems to solve and the data to work with come from a commercial enterprise. In this context typical for NLP research, we discuss relevant industrial aspects. We believe that the challenges faced as well as the solutions proposed for addressing them can provide insights to others working in a similar setting. Data and experiment code related to this paper are available for download at https://ofai.github.io/million-post-corpus."} {"_id": "80c363061420aea6d1557c64f2df188b16f32ddd", "title": "Divide and conquer: neuroevolution for multiclass classification", "text": "Neuroevolution is a powerful and general technique for evolving the structure and weights of artificial neural networks. Though neuroevolutionary approaches such as NeuroEvolution of Augmenting Topologies (NEAT) have been successfully applied to various problems including classification, regression, and reinforcement learning problems, little work has explored application of these techniques to larger-scale multiclass classification problems. In this paper, NEAT is evaluated in several multiclass classification problems, and then extended via two ensemble approaches: One-vs-All and One-vs-One. These approaches decompose multiclass classification problems into a set of binary classification problems, in which each binary problem is solved by an instance of NEAT. These ensemble models exhibit reduced variance and increasingly superior accuracy as the number of classes increases.
Additionally, higher accuracy is achieved early in training, even when artificially constrained for the sake of fair comparison with standard NEAT. However, because the approach can be trivially distributed, it can be applied quickly at large scale to solve real problems. In fact, these approaches are incorporated into Darwin\u2122, an enterprise automatic machine learning solution that also incorporates various other algorithmic enhancements to NEAT. The resulting complete system has proven robust to a wide variety of client datasets."} {"_id": "942327cd70be7c41d7085efac9ba21c3d5077c97", "title": "Homicides with mutilation of the victim's body.", "text": "Information on homicide offenders guilty of mutilation is sparse. The current study estimates the rate of mutilation of the victim's body in Finnish homicides and compares sociodemographic characteristics, crime history, life course development, psychopathy, and psychopathology of these and other homicide offenders. Crime reports and forensic examination reports of all offenders subjected to forensic examination and convicted of a homicide in 1995-2004 (n = 676) were retrospectively analyzed for offense and offender variables and scored with the Psychopathy Checklist-Revised. Thirteen homicides (2.2%) involved mutilation. Educational and mental health problems in childhood, inpatient mental health contacts, self-destructiveness, and schizophrenia were significantly more frequent in offenders guilty of mutilation. Mutilation bore no significant association with psychopathy or substance abuse. The higher than usual prevalence of developmental difficulties and mental disorder in this subsample of offenders needs to be recognized."} {"_id": "e75ef7017fc3c9f4e83e643574bc03a27d1c3851", "title": "Why do they still use paper?: understanding data collection and use in Autism education", "text": "Autism education programs for children collect and use large amounts of behavioral data on each student. Staff use paper almost exclusively to collect these data, despite significant problems they face in tracking student data in situ, filling out data sheets and graphs on a daily basis, and using the sheets in collaborative decision making. We conducted fieldwork to understand data collection and use in the domain of autism education to explain why current technology had not met staff needs. We found that data needs are complex and unstandardized, immediate demands of the job interfere with staff ability to collect in situ data, and existing technology for data collection is inadequate. We also identified opportunities for technology to improve sharing and use of data. We found that data sheets are idiosyncratic and not useful without human mediation; improved communication with parents could benefit children's development; and staff are willing, and even eager, to incorporate technology. These factors explain the continued dependence on paper for data collection in this environment, and reveal opportunities for technology to support data collection and improve use of collected data."} {"_id": "11543c44ed72784c656362b2ef42f7509250a423", "title": "ARSA: a sentiment-aware model for predicting sales performance using blogs", "text": "Due to their high popularity, Weblogs (or blogs in short) present a wealth of information that can be very helpful in assessing the general public's sentiments and opinions.
In this paper, we study the problem of mining sentiment information from blogs and investigate ways to use such information for predicting product sales performance. Based on an analysis of the complex nature of sentiments, we propose Sentiment PLSA (S-PLSA), in which a blog entry is viewed as a document generated by a number of hidden sentiment factors. Training an S-PLSA model on the blog data enables us to obtain a succinct summary of the sentiment information embedded in the blogs. We then present ARSA, an autoregressive sentiment-aware model, to utilize the sentiment information captured by S-PLSA for predicting product sales performance. Extensive experiments were conducted on a movie data set. We compare ARSA with alternative models that do not take into account the sentiment information, as well as a model with a different feature selection method. Experiments confirm the effectiveness and superiority of the proposed approach."} {"_id": "1c30b2ad55a3db9d201a96616cb66dda3bb757bc", "title": "Learning Extraction Patterns for Subjective Expressions", "text": "This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision."} {"_id": "3e04c16376620a46f65a9dd8199cbd45d224c371", "title": "Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing", "text": "Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms."} {"_id": "9b1454154e82d61a519987620b47b975b0f8368d", "title": "A Three-Step Vehicle Detection Framework for Range Estimation Using a Single Camera", "text": "This paper proposes and validates a real-time on-road vehicle detection system, which uses a single camera for the purpose of intelligent driver assistance. A three-step vehicle detection framework is presented to detect and track the target vehicle within an image. 
In the first step, probable vehicle locations are hypothesized using pattern recognition. The vehicle candidates are then verified in the hypothesis verification step. In this step, lane detection is used to filter vehicle candidates that are not within the lane region of interest. In the final step, tracking and online learning are implemented to optimize the detection algorithm during misdetection and temporary occlusion. Good detection performance and accuracy were observed in highway driving environments with minimal shadows."} {"_id": "33bd9454417a01a63ba452b974cb4c265d94d312", "title": "A Blind Baud-Rate ADC-Based CDR", "text": "This paper proposes a 10-Gb/s blind baud-rate ADC-based CDR. The blind baud-rate operation is made possible by using a 2UI integrate-and-dump filter, which creates intentional ISI in adjacent bit periods. The blind samples are interpolated to recover center-of-the-eye samples for a speculative Mueller-Muller PD and a 2-tap DFE operation. A test chip, fabricated in 65-nm CMOS, implements a 10-Gb/s CDR with a measured high-frequency jitter tolerance of 0.19 UIpp and \u00b1300 ppm of frequency offset."} {"_id": "50a7d2139cb4203160c0da0908291342d8e1ca78", "title": "Three-axis NC milling simulation based on adaptive triangular mesh", "text": "NC milling simulation has become an important step in computer aided manufacturing (CAM). To achieve real-time simulation, the total number of polygons has to be reduced, which results in poor image quality. This paper presents an adaptive triangular mesh algorithm to reduce the number of polygons while image quality remains high. A binary tree is used to represent the milling surface, and the optimization of the mesh is performed dynamically in the process of simulation. In this algorithm, the resolution of triangles is automatically updated according to local surface flatness, thus greatly reducing the number of triangles at planar regions. By doing this, real-time performance and high-quality visual presentation are ensured, and the translation, rotation and zooming operations are still applicable. When machining precision is evaluated, or overcut, undercut and interference are inspected, the full-resolution model stored in memory is automatically loaded to ensure the accuracy and correctness of these inspections. Finally, an example is presented to illustrate the validity of the proposed algorithm."} {"_id": "a2a1a84bc6be32d981d8fac5da813a0f179a3f75", "title": "Deep Generative Model using Unregularized Score for Anomaly Detection with Heterogeneous Complexity", "text": "Accurate and automated detection of anomalous samples in a natural image dataset can be accomplished with a probabilistic model for end-to-end modeling of images. Such images have heterogeneous complexity, however, and a probabilistic model overlooks simply shaped objects with small anomalies. This is because the probabilistic model assigns undesirably lower likelihoods to complexly shaped objects that are nevertheless consistent with set standards. To overcome this difficulty, we propose an unregularized score for deep generative models (DGMs), which are generative models leveraging deep neural networks. We found that the regularization terms of the DGMs considerably influence the anomaly score depending on the complexity of the samples.
By removing these terms, we obtain an unregularized score, which we evaluated on a toy dataset and real-world manufacturing datasets. Empirical results demonstrate that the unregularized score is robust to the inherent complexity of samples and can be used to better detect anomalies."} {"_id": "8877f87b0c61f08c86c9e1114fb8ca1a61ab6105", "title": "Surface Plasmon Resonance Imaging Sensors : A Review", "text": "Surface plasmon resonance (SPR) imaging sensors realize label-free, real-time, highly sensitive, quantitative, high-throughput biological interaction monitoring, and the binding profiles from multi-analytes further provide the binding kinetic parameters between different bio-molecules. In the past two decades, SPR imaging sensors have found rapidly increasing applications in fundamental biological studies, medical diagnostics, drug discovery, food safety, precision measurement and environmental monitoring. In this paper, we review the recent advances of SPR imaging sensor technology towards high throughput multi-analyte screening. Finally, we describe our multiplex colorimetric SPR imaging biosensor based on polarization orientation for high throughput bio-sensing applications."} {"_id": "9026755081d838609c62878c3deab601090c67da", "title": "Individual differences in non-verbal number acuity correlate with maths achievement", "text": "Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals\u2014these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal \u2018number sense\u2019 than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children\u2019s past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both."} {"_id": "475274fd477632da62cf75e2cd61b3e8e8ac3c1a", "title": "Adult hippocampal neurogenesis and its role in Alzheimer's disease", "text": "The hippocampus, a brain area critical for learning and memory, is especially vulnerable to damage at early stages of Alzheimer's disease (AD).
Emerging evidence has indicated that altered neurogenesis in the adult hippocampus represents an early critical event in the course of AD. Although causal links have not been established, a variety of key molecules involved in AD pathogenesis have been shown to impact new neuron generation, either positively or negatively. From a functional point of view, hippocampal neurogenesis plays an important role in structural plasticity and network maintenance. Therefore, dysfunctional neurogenesis resulting from early subtle disease manifestations may in turn exacerbate neuronal vulnerability to AD and contribute to memory impairment, whereas enhanced neurogenesis may be a compensatory response and represent an endogenous brain repair mechanism. Here we review recent findings on alterations of neurogenesis associated with pathogenesis of AD, and we discuss the potential of neurogenesis-based diagnostics and therapeutic strategies for AD."} {"_id": "25a2da00c9572e2e811eda096ebf6a3f8764da87", "title": "Joint SFM and detection cues for monocular 3D localization in road scenes", "text": "We present a system for fast and highly accurate 3D localization of objects like cars in autonomous driving applications, using a single camera. Our localization framework jointly uses information from complementary modalities such as structure from motion (SFM) and object detection to achieve high localization accuracy in both near and far fields. This is in contrast to prior works that rely purely on detector outputs, or motion segmentation based on sparse feature tracks. Rather than completely commit to tracklets generated by a 2D tracker, we make novel use of raw detection scores to allow our 3D bounding boxes to adapt to better quality 3D cues. To extract SFM cues, we demonstrate the advantages of dense tracking over sparse mechanisms in autonomous driving scenarios. In contrast to complex scene understanding, our formulation for 3D localization is efficient and can be regarded as an extension of sparse bundle adjustment to incorporate object detection cues. Experiments on the KITTI dataset show the efficacy of our cues, as well as the accuracy and robustness of our 3D object localization relative to ground truth and prior works."} {"_id": "57f7390d57d398d60c240bc1bfb023239a8487ae", "title": "Dynamic Traffic Diversion in SDN: testbed vs Mininet", "text": "In this paper, we first propose a simple Dynamic Traffic Diversion (DTD) algorithm for Software Defined Networks (SDN). After implementing the algorithm inside the controller, we then compare the results obtained under two different test environments: 1) a testbed using real Cisco equipment and 2) a network emulation using Mininet. From the results, we get two key messages. First, we can clearly see that dynamically diverting important traffic on a backup path will prevent packet loss and reduce jitter. Second, the two test environments provide relatively similar results. The small differences could be explained by the early field trial image that was used on the Cisco equipment and by the many setting parameters that are available in both environments."} {"_id": "0d3950fa9967d74825d7685be052ed55b231347c", "title": "Neural Joint Model for Transition-based Chinese Syntactic Analysis", "text": "We present neural network-based joint models for Chinese word segmentation, POS tagging and dependency parsing. Our models are the first neural approaches for fully joint Chinese analysis, which is known to prevent the error propagation problem of pipeline models.
Although word embeddings play a key role in dependency parsing, they cannot be applied directly to the joint task in the previous work. To address this problem, we propose embeddings of character strings, in addition to words. Experiments show that our models outperform existing systems in Chinese word segmentation and POS tagging, and achieve favorable accuracies in dependency parsing. We also explore bi-LSTM models with fewer features."} {"_id": "0b181362cb46b50e31ee451c16d669cee9c90860", "title": "A Novel CMOS Envelope Detector Structure", "text": "A novel high-performance envelope detector is presented in this work. The proposed circuit holds the signal peaks over two periods and combines them to obtain the envelope of the signal. In this way, the ripple is fixed by the peak holder and the tracking can be improved without the traditional trade-off between holding and tracking required in these circuits. A comparison is made between a conventional circuit, a previous work and the proposed envelope detector. The superior performance of the latter is shown, obtaining small ripple (<1%), fast settling (0.4 \u03bcs) and high linearity."} {"_id": "114d06cd7f6ba4d6fa0501057655a56f38d429e8", "title": "Exploring the Role of Prior Beliefs for Argument Persuasion", "text": "Public debate forums provide a common platform for exchanging opinions on a topic of interest. While recent studies in natural language processing (NLP) have provided empirical evidence that the language of the debaters and their patterns of interaction play a key role in changing the mind of a reader, research in psychology has shown that prior beliefs can affect our interpretation of an argument and could therefore constitute a competing alternative explanation for resistance to changing one\u2019s stance. To study the actual effect of language use vs. prior beliefs on persuasion, we provide a new dataset and propose a controlled setting that takes into consideration two reader-level factors: political and religious ideology. We find that prior beliefs affected by these reader-level factors play a more important role than language use effects and argue that it is important to account for them in NLP studies of persuasion."} {"_id": "207a737774a54537d804e9353346002dd75b62da", "title": "77 GHz radar scattering properties of pedestrians", "text": "Using radars for detecting and tracking pedestrians has important safety and security applications. Most existing human detection radar systems operate in the UHF, X-band, and 24 GHz bands. The newly allocated 76-77 GHz frequency band for automobile collision avoidance radar systems has once again raised great interest in pedestrian detection and tracking at these frequencies due to its longer detection range and better location accuracy. The electromagnetic scattering properties must be thoroughly understood and analyzed so a catalog of human scattering can be utilized for intensive automotive radar testing. Measuring real human subjects at these frequencies is not a trivial task. This paper presents validation between the measured radar cross section (RCS) patterns of various human subjects and a full-wave EM simulation.
The RCS of a human is shown to depend on a number of factors, such as posture, body shape, clothing type, etc."} {"_id": "16b65be3bba1eaf243b262614a142a48c6261e68", "title": "An Evaluation Framework for Plagiarism Detection", "text": "We present an evaluation framework for plagiarism detection. The framework provides performance measures that address the specifics of plagiarism detection, and the PAN-PC-10 corpus, which contains 64,558 artificial and 4,000 simulated plagiarism cases, the latter generated via Amazon\u2019s Mechanical Turk. We discuss the construction principles behind the measures and the corpus, and we compare the quality of our corpus to existing corpora. Our analysis gives empirical evidence that the construction of tailored training corpora for plagiarism detection can be automated, and hence be done on a large scale."} {"_id": "8f95ecbdf27b1e95e5eecd9cfbaf4f5c9820c5f0", "title": "It's always April fools' day!: On the difficulty of social network misinformation classification via propagation features", "text": "Given the huge impact that Online Social Networks (OSN) have had on the way people get informed and form their opinion, they have become an attractive playground for malicious entities that want to spread misinformation and leverage their effect. In fact, misinformation easily spreads on OSN and is a huge threat to modern society, possibly also influencing the outcome of elections, or even putting people's life at risk (e.g., spreading \"anti-vaccines\" misinformation). Therefore, it is of paramount importance for our society to have some sort of \"validation\" on information spreading through OSN. The need for a wide-scale validation would greatly benefit from automatic tools. In this paper, we show that it is difficult to carry out an automatic classification of misinformation considering only structural properties of content propagation cascades. We focus on structural properties because they would be inherently difficult to manipulate, with the aim of circumventing classification systems. To support our claim, we carry out an extensive evaluation on Facebook posts belonging to conspiracy theories (representative of misinformation), and scientific news (representative of fact-checked content). Our findings show that conspiracy content reverberates in a way which is hard to distinguish from scientific content: for the classification mechanism we investigated, classification F-score never exceeds 0.7."} {"_id": "411a9b4b1db38379ff33b78eff234f9267bda30a", "title": "Scaling Human-Object Interaction Recognition Through Zero-Shot Learning", "text": "Recognizing human-object interactions (HOI) is an important part of distinguishing the rich variety of human action in the visual world. While recent progress has been made in improving HOI recognition in the fully supervised setting, the space of possible human-object interactions is large and it is impractical to obtain labeled training data for all interactions of interest. In this work, we tackle the challenge of scaling HOI recognition to the long tail of categories through a zero-shot learning approach. We introduce a factorized model for HOI detection that disentangles reasoning on verbs and objects, and at test-time can therefore produce detections for novel verb-object pairs.
We present experiments on the recently introduced large-scale HICO-DET dataset, and show that our model is able to perform comparably to the state-of-the-art in fully-supervised HOI detection while simultaneously achieving effective zero-shot detection of new HOI categories."} {"_id": "979e9b8ddd64c0251740bd8ff2f65f3c9a1b3408", "title": "Phytochemical screening and Extraction: A Review", "text": "Plants are a source of a large number of drugs comprising different groups such as antispasmodics, emetics, anti-cancer agents, antimicrobials, etc. A large number of plants are claimed to possess antibiotic properties in the traditional system and are also used extensively by tribal people worldwide. It is now believed that nature has given the cure for every disease in one way or another. Plants have been known to relieve various diseases in Ayurveda. Therefore, researchers today are emphasizing the evaluation and characterization of various plants and plant constituents against a number of diseases, based on the traditional claims made for the plants in Ayurveda. Extraction of the bioactive plant constituents has always been a challenging task for researchers. In this review, an attempt has been made to give an overview of certain extractants and extraction processes with their advantages and disadvantages."} {"_id": "bb78cecf9982cd3ae4b1eaaab60d48e06fe304e7", "title": "Relationships between handwriting performance and organizational abilities among children with and without dysgraphia: a preliminary study.", "text": "Organizational ability constitutes one executive function (EF) component essential for common everyday performance. The study aim was to explore the relationship between handwriting performance and organizational ability in school-aged children. Participants were 58 males, aged 7-8 years, 30 with dysgraphia and 28 with proficient handwriting. Group allocation was based on children's scores in the Handwriting Proficiency Screening Questionnaire (HPSQ). They performed the Hebrew Handwriting Evaluation (HHE), and their parents completed the Questionnaire for Assessing Students' Organizational Abilities-for Parents (QASOA-P). Significant differences were found between the groups for handwriting performance (HHE) and organizational abilities (QASOA-P). Significant correlations were found in the dysgraphic group between handwriting spatial arrangement and the QASOA-P mean score. Linear regression indicated that the QASOA-P mean score explained 42% of variance of handwriting proficiency (HPSQ). Based on one discriminant function, 81% of all participants were correctly classified into groups. Study results strongly recommend assessing organizational difficulties in children referred for therapy due to handwriting deficiency."} {"_id": "928e85a76a346ef5296e6e2e895a03f915cd952f", "title": "FPGA Implementations of the AES Masked Against Power Analysis Attacks", "text": "Power analysis attacks are a serious threat for implementations of modern cryptographic algorithms. Masking is a particularly appealing countermeasure against such attacks since it increases the security to a well quantifiable level and can be implemented without modifying the underlying technology.
Its main drawback is the performance overhead it implies. For example, due to prohibitive memory costs, the straightforward application of masking to the AES algorithm, with precomputed tables, is hardly practical. In this paper, we exploit both the increased size of state-of-the-art reconfigurable hardware devices and previous optimization techniques to minimize the memory occupation of software S-boxes, in order to provide an efficient FPGA implementation of the AES algorithm, masked against side-channel attacks. We describe two high-throughput architectures, based on 32-bit and 128-bit datapaths, that are suitable for Xilinx Virtex-5 devices. In this way, we demonstrate the possibility of efficiently combining technological advances with algorithmic optimizations in this context."} {"_id": "eadf5023c90a6af8a0f8e8605bd8050cc13c23a3", "title": "The Fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, Task and Baselines", "text": "The CHiME challenge series aims to advance robust automatic speech recognition (ASR) technology by promoting research at the interface of speech and language processing, signal processing, and machine learning. This paper introduces the 5th CHiME Challenge, which considers the task of distant multi-microphone conversational ASR in real home environments. Speech material was elicited using a dinner party scenario with efforts taken to capture data that is representative of natural conversational speech and recorded by 6 Kinect microphone arrays and 4 binaural microphone pairs. The challenge features a single-array track and a multiple-array track and, for each track, distinct rankings will be produced for systems focusing on robustness with respect to distant-microphone capture vs. systems attempting to address all aspects of the task including conversational language modeling. We discuss the rationale for the challenge and provide a detailed description of the data collection procedure, the task, and the baseline systems for array synchronization, speech enhancement, and conventional and end-to-end ASR."} {"_id": "a280060d463f1eb81f31fc92371356e6de3988b9", "title": "Detection Method of DNS-based Botnet Communication Using Obtained NS Record History", "text": "To combat botnets, early detection of botnet communication and fast identification of the bot-infected PCs are very important for network administrators. However, in the DNS protocol, which appears to have been used for botnet communication recently, it is difficult to differentiate between ordinary domain name resolution and suspicious communication. Our key idea is that most domain name resolutions first obtain the corresponding NS (Name Server) record from authoritative name servers in the Internet, whereas suspicious communication may omit this procedure to hide its malicious activities. Based on this observation, we propose a detection method for DNS-based botnet communication using the obtained NS record history. Our proposed method checks whether the destination name server (IP address) of a DNS query is included in the obtained NS record history to detect botnet communications."} {"_id": "a5e93260c0a0a974419bfcda9a25496b311d9d5b", "title": "Algorithmic bias amplifies opinion polarization: A bounded confidence model", "text": "The flow of information reaching us via the online media platforms is optimized not by the information content or relevance but by popularity and proximity to the target. This is typically performed in order to maximise platform usage.
As a side effect, this introduces an algorithmic bias that is believed to enhance polarization of the societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence in order to account for the algorithmic bias and investigate its consequences. In the simplest version of the original model the pairs of discussion participants are chosen at random and their opinions get closer to each other if they are within a fixed tolerance level. We modify the selection rule of the discussion partners: there is an enhanced probability to choose individuals whose opinions are already close to each other, thus mimicking the behavior of online media which suggest interaction with similar peers. As a result we observe: a) an increased tendency towards polarization, which emerges also in conditions where the original model would predict convergence, and b) a dramatic slowing down of the speed at which convergence to the asymptotic state is reached, which makes the system highly unstable. Polarization is augmented by a fragmented initial population."} {"_id": "283b528ca325bd2fede81b5d9273b080f98d18bb", "title": "Automatic License Plate Recognition: a Review", "text": "In recent years, many studies on Intelligent Transportation Systems (ITS) have been reported. Automatic License Plate Recognition (ALPR) is one form of ITS technology that not only recognizes and counts vehicles, but distinguishes each as unique by recognizing and recording the license plate\u2019s characters. This paper discusses the main techniques of ALPR. Several open problems are proposed at the end of the paper for future research."} {"_id": "d26c517baa9d6acbb826611400019297df2476a9", "title": "Artificial Intelligence and Soft Computing: Behavioral and Cognitive Modeling of the Human Brain", "text": ""} {"_id": "0ee1916a0cb2dc7d3add086b5f1092c3d4beb38a", "title": "The Pascal Visual Object Classes (VOC) Challenge", "text": "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension."} {"_id": "2f93877a1e10ae594846660102c1decabaf33a73", "title": "Detecting readers with dyslexia using machine learning with eye tracking measures", "text": "Worldwide, around 10% of the population has dyslexia, a specific learning disorder. Most previous eye tracking experiments with people with and without dyslexia have found differences between the populations, suggesting that eye movements reflect the difficulties of individuals with dyslexia. In this paper, we present the first statistical model to predict readers with and without dyslexia using eye tracking measures.
The model is trained and evaluated in a 10-fold cross-validation experiment with a dataset composed of 1,135 readings of people with and without dyslexia that were recorded with an eye tracker. Our model, based on a Support Vector Machine binary classifier, reaches 80.18% accuracy using the most informative features. To the best of our knowledge, this is the first time that eye tracking measures are used to automatically predict readers with dyslexia using machine learning."} {"_id": "981fef7155742608b8b6673f4a9566158b76cd67", "title": "ImageNet Large Scale Visual Recognition Challenge", "text": ""} {"_id": "8c03ea8dfe9f2aa5f806d3ce4a7f83671a5db3f5", "title": "Disassortative Degree Mixing and Information Diffusion for Overlapping Community Detection in Social Networks (DMID)", "text": "In this paper we propose a new two-phase algorithm for overlapping community detection (OCD) in social networks. In the first phase, called disassortative degree mixing, we identify nodes with high degrees through a random walk process on the row-normalized disassortative matrix representation of the network. In the second phase, we calculate how closely each node of the network is bound to the leaders via a cascading process called network coordination game. We implemented the algorithm and four additional ones as a Web service on a federated peer-to-peer infrastructure. Comparative test results for small and big real world networks demonstrated the correct identification of leaders, high precision and good time complexity. The Web service is available as open source software."} {"_id": "ed31a0422cc5580c176d648e6e143b9700a38ce2", "title": "SustData: A Public Dataset for ICT4S Electric Energy Research", "text": "Energy and environmental sustainability can benefit a lot from advances in data mining and machine learning techniques. However, those advances rely on the availability of relevant datasets required to develop, improve and validate new techniques. Only recently the first datasets were made publicly available for the energy and sustainability research community. In this paper we present a freely available dataset containing power usage and related information from 50 homes. Here we describe our dataset, the hardware and software setups used when collecting the data and how others can access it. We then discuss potential uses of this data in the future of energy eco-feedback and demand side management research."} {"_id": "418e5e5e58cd9cafe802d8b679651f66160d3728", "title": "BlueSky: a cloud-backed file system for the enterprise", "text": "We present BlueSky, a network file system backed by cloud storage. BlueSky stores data persistently in a cloud storage provider such as Amazon S3 or Windows Azure, allowing users to take advantage of the reliability and large storage capacity of cloud providers and avoid the need for dedicated server hardware. Clients access the storage through a proxy running on-site, which caches data to provide lower-latency responses and additional opportunities for optimization. We describe some of the optimizations which are necessary to achieve good performance and low cost, including a log-structured design and a secure in-cloud log cleaner.
BlueSky supports multiple protocols\u2014both NFS and CIFS\u2014and is portable to different providers."} {"_id": "9d23def1f0a13530cce155fa94f1480de6af91ea", "title": "Extreme learning machine: Theory and applications", "text": "It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind this may be: (1) the slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results based on a few artificial and real benchmark function approximation and classification problems including very large complex applications show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks."} {"_id": "9dd2c8891384b13dc284e3a77cf4da29fb184a24", "title": "Training of Airport Security Screeners", "text": "In a close collaboration with Zurich State Police, Airport division, we have developed a scientifically based training system that is effective and efficient and allows achieving an excellent level of detection performance. Object recognition is a very complex process but essentially it means to compare visual information to object representations stored in visual memory. The ability to recognize an object class depends on whether the object itself or a similar instance has been stored previously in visual memory. In other words, you can only recognize what you have learned. This explains why training is so important. Identifying the threat items in Fig. 1a and 1b is difficult without training because the objects are depicted in a view that is rather unusual in everyday life. Detecting a bomb such as in Fig. 1c is difficult for untrained people because usually we do not encounter bombs in everyday life. Therefore, a good training system must contain many forbidden objects in many viewpoints in order to train screeners to detect them reliably. Indeed, several studies from our lab and many others worldwide have found that object recognition is often dependent on viewpoint. Moreover, there are numerous studies from neuroscience suggesting that objects are stored in a view-based format in the brain. As you can see in Fig. 2, the hammer, dirk, grenade and gun, which are visible in the bags of Fig. 1a and 1b, are indeed much easier to recognize if they are shown in a view that is more often encountered in real life. Because you never know how terrorists place their threat items in a bag, airport security screeners should be trained to detect prohibited items from all kinds of different viewpoints. Current x-ray machines provide high resolution images, many image processing features and even automatic explosive detection. But the machine is only one half of the whole system. The last and most important decision is always taken by the human operator.
In fact, the best and most expensive equipment is of limited use if a screener finally fails to recognize a threat in the x-ray image. This is of special importance because, according to several aviation security experts, the human operator is currently the weakest link in airport security. This matter is being realized more and more, and several authorities as well as airports are planning to increase investments in a very important element of aviation security: effective and efficient training of screeners. Indeed, \u2026"} {"_id": "68167e5e7d8c4477fb0bf570aea3f11a4f4504db", "title": "Chord-PKI: A distributed trust infrastructure based on P2P networks", "text": "Many P2P applications require security services such as privacy, anonymity, authentication, and non-repudiation. Such services could be provided through a hierarchical Public Key Infrastructure. However, P2P networks are usually Internet-scale distributed systems comprised of nodes with undetermined trust levels, thus making hierarchical solutions unrealistic. In this paper, we propose Chord-PKI, a distributed PKI architecture which is built upon the Chord overlay network, in order to provide security services for P2P applications. Our solution distributes the functionality of a PKI across the peers, by using threshold cryptography and proactive updating. We analyze the security of the proposed infrastructure and, through simulations, we evaluate its performance for various scenarios of untrusted node distributions."} {"_id": "ae2eecbe2d5a4365107dc2e4b8a2dcbd0b3938b7", "title": "IP = PSPACE", "text": "In this paper, it is proven that when both randomization and interaction are allowed, the proofs that can be verified in polynomial time are exactly those proofs that can be generated with polynomial space."} {"_id": "a10b7a2c80e2bae49d46b980ed03c074fe36bb2a", "title": "The International Exascale Software Project roadmap", "text": "Over the last 20 years, the open-source community has provided more and more software on which the world\u2019s high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta/exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combining their efforts in a coordinated International Exascale Software Project."} {"_id": "0b563ad5fdea6923d2ee91f9ee4ad3d6dcdf9cd0", "title": "Steering Behaviors For Autonomous Characters", "text": "This paper presents solutions for one requirement of autonomous characters in animation and games: the ability to navigate around their world in a life-like and improvisational manner. These \u201csteering behaviors\u201d are largely independent of the particulars of the character\u2019s means of locomotion.
Combinations of steering behaviors can be used to achieve higher-level goals (for example: get from here to there while avoiding obstacles, follow this corridor, join that group of characters...). This paper divides motion behavior into three levels. It will focus on the middle level of steering behaviors, briefly describe the lower level of locomotion, and touch lightly on the higher level of goal setting and strategy."} {"_id": "78c7428e1d1c6c27aec8a589b97002c95de391ed", "title": "Compositional Distributional Semantics with Compact Closed Categories and Frobenius Algebras", "text": "The provision of compositionality in distributional models of meaning, where a word is represented as a vector of co-occurrence counts with every other word in the vocabulary, offers a solution to the fact that no text corpus, regardless of its size, is capable of providing reliable co-occurrence statistics for anything but very short text constituents. The purpose of a compositional distributional model is to provide a function that composes the vectors for the words within a sentence, in order to create a vectorial representation that reflects its meaning. Using the abstract mathematical framework of category theory, Coecke, Sadrzadeh and Clark showed that this function can directly depend on the grammatical structure of the sentence, providing an elegant mathematical counterpart of the formal semantics view. The framework is general and compositional but stays abstract to a large extent. This thesis contributes to ongoing research related to the above categorical model in three ways: Firstly, I propose a concrete instantiation of the abstract framework based on Frobenius algebras (joint work with Sadrzadeh). The theory addresses shortcomings of previous proposals, extends the coverage of the language, and is supported by experimental work that improves existing results. The proposed framework describes a new class of compositional models that find intuitive interpretations for a number of linguistic phenomena. Secondly, I propose and evaluate in practice a new compositional methodology which explicitly deals with the different levels of lexical ambiguity (joint work with Pulman). A concrete algorithm is presented, based on the separation of vector disambiguation from composition in an explicit prior step. Extensive experimental work shows that the proposed methodology indeed results in more accurate composite representations for the framework of Coecke et al. in particular and every other class of compositional models in general. As a last contribution, I formalize the explicit treatment of lexical ambiguity in the context of the categorical framework by resorting to categorical quantum mechanics (joint work with Coecke). In the proposed extension, the concept of a distributional vector is replaced with that of a density matrix, which compactly represents a probability distribution over the potential different meanings of the specific word. Composition takes the form of quantum measurements, leading to interesting analogies between quantum physics and linguistics."} {"_id": "e48f224826da49e369107db51d33d201e5e8a3ee", "title": "Tibetan-Chinese Cross Language Text Similarity Calculation Based on LDA Topic Model", "text": "Topic model building is the basis and the most critical module of cross-language topic detection and tracking. Topic models can also be applied to cross-language text similarity calculation. They can improve the efficiency and the speed of calculation by reducing the texts\u2019 dimensionality.
In this paper, we use the LDA model in cross-language text similarity computation to obtain Tibetan-Chinese comparable corpora: (1) Extending the Tibetan-Chinese dictionary by extracting Tibetan-Chinese entities from Wikipedia. (2) Using the topic model to map the texts to the feature space of topics. (3) Calculating the similarity of two texts in different languages according to the characteristics of the news text. The method for text similarity calculation based on the LDA model reduces the dimensionality of the text vector space and enhances the understanding of the text\u2019s semantics. It also improves the speed and efficiency of calculation."} {"_id": "a48ba720733efa324ed247e3fea1d9d2cb08336e", "title": "The emotional brain", "text": "The discipline of affective neuroscience is concerned with the neural bases of emotion and mood. The past 30 years have witnessed an explosion of research in affective neuroscience that has addressed questions such as: which brain systems underlie emotions? How do differences in these systems relate to differences in the emotional experience of individuals? Do different regions underlie different emotions, or are all emotions a function of the same basic brain circuitry? How does emotion processing in the brain relate to bodily changes associated with emotion? And, how does emotion processing in the brain interact with cognition, motor behaviour, language and motivation?"} {"_id": "408b8d34b7467c0b25b27fdafa77ee241ce7f4c4", "title": "Fast in-memory transaction processing using RDMA and HTM", "text": "We present DrTM, a fast in-memory transaction processing system that exploits advanced hardware features (i.e., RDMA and HTM) to improve latency and throughput by over one order of magnitude compared to state-of-the-art distributed transaction systems. The high performance of DrTM is enabled by mostly offloading concurrency control within a local machine into HTM and leveraging the strong consistency between RDMA and HTM to ensure serializability among concurrent transactions across machines. We further build an efficient hash table for DrTM by leveraging HTM and RDMA to simplify the design and notably improve the performance. We describe how DrTM supports common database features like read-only transactions and logging for durability. Evaluation using typical OLTP workloads including TPC-C and SmallBank shows that DrTM scales well on a 6-node cluster and achieves over 5.52 and 138 million transactions per second for TPC-C and SmallBank, respectively. This number outperforms a state-of-the-art distributed transaction system (namely Calvin) by at least 17.9X for TPC-C."} {"_id": "aa75853bfbc3801c2e936a69651f0b22cff7229b", "title": "Teaching theory of mind: a new approach to social skills training for individuals with autism.", "text": "This study examined the effectiveness of a social skills training program for normal-IQ adolescents with autism. Five boys participated in the 4 1/2-month treatment condition; four boys matched on age, IQ, and severity of autism constituted the no-treatment control group. In addition to teaching specific interactional and conversational skills, the training program provided explicit and systematic instruction in the underlying social-cognitive principles necessary to infer the mental states of others (i.e., theory of mind). Pre- and post-intervention assessment demonstrated meaningful change in the treatment group's performance on several false belief tasks, but no improvement in the control sample.
No changes, however, were demonstrated on general parent and teacher ratings of social competence for either group."} {"_id": "7e31e59f7d7e40041ee2b0bdb2f4ce3dd27cc9e7", "title": "A GA based approach for task scheduling in multi-cloud environment", "text": "In a multi-cloud environment, task scheduling has attracted a lot of attention due to the NP-complete nature of the problem. Moreover, it is very challenging due to the heterogeneity of cloud resources with varying capacities and functionalities. Therefore, minimizing the makespan for task scheduling is a challenging issue. In this paper, we propose a genetic algorithm (GA) based approach for solving the task scheduling problem. The algorithm is described with an innovative idea for fitness function derivation and mutation. The proposed algorithm is exposed to rigorous testing using various benchmark datasets and its performance is evaluated in terms of total makespan."} {"_id": "6029ecc36dfa3d55a8a482c0d415776bda80b195", "title": "Kernel auto-encoder for semi-supervised hashing", "text": "Hashing-based approaches have gained popularity for large-scale image retrieval in recent years. It has been shown that semi-supervised hashing, which incorporates similarity/dissimilarity information into hash function learning, could improve the hashing quality. In this paper, we present a novel kernel-based semi-supervised binary hashing model for image retrieval by taking into account auxiliary information, i.e., similar and dissimilar data pairs, in achieving high quality hashing. The main idea is to map the data points into a highly non-linear feature space and then map the non-linear features into compact binary codes such that similar/dissimilar data points have similar/dissimilar hash codes. Empirical evaluations on three benchmark datasets demonstrate the superiority of the proposed method over several existing unsupervised and semi-supervised hash function learning methods."} {"_id": "67bf0b6bc7d09b0fe7a97469f786e26f359910ef", "title": "Abnormal use of facial information in high-functioning autism.", "text": "Altered visual exploration of faces likely contributes to social cognition deficits seen in autism. To investigate the relationship between face gaze and social cognition in autism, we measured both face gaze and how facial regions were actually used during emotion judgments from faces. Compared to IQ-matched healthy controls, nine high-functioning adults with autism failed to make use of information from the eye region of faces, instead relying primarily on information from the mouth. Face gaze accounted for the increased reliance on the mouth, and partially accounted for the deficit in using information from the eyes. These findings provide a novel quantitative assessment of how people with autism utilize information in faces when making social judgments."} {"_id": "66726950d756256ca479a068664ecb052b048991", "title": "Tracking the Evolution of the Internet of Things Concept Across Different Application Domains", "text": "Both the idea and technology for connecting sensors and actuators to a network to remotely monitor and control physical systems have been known for many years and developed accordingly. However, a little more than a decade ago the concept of the Internet of Things (IoT) was coined and used to integrate such approaches into a common framework. Technology has been constantly evolving and so has the concept of the Internet of Things, incorporating new terminology appropriate to technological advances and different application domains.
This paper presents the changes that the IoT has undergone since its conception and research on how technological advances have shaped it and fostered the emergence of derived names suited to specific domains. A two-step literature review through major publishers and indexing databases was conducted: first, by searching for proposals on the Internet of Things concept and analyzing them to find similarities, differences, and technological features that allow us to create a timeline showing its development; in the second step, the most mentioned names given to the IoT for specific domains, as well as closely related concepts, were identified and briefly analyzed. The study confirms the claim that a consensus on the IoT definition has not yet been reached, as enabling technology keeps evolving and new application domains are being proposed. However, recent changes have been relatively moderate, and its variations across application domains are clearly differentiated, with data and data technologies playing an important role in the IoT landscape."} {"_id": "478cecc85fdad9b1f6ba9a48eff35e76ea3f0002", "title": "Online sketching hashing", "text": "Recently, hashing-based approximate nearest neighbor (ANN) search has attracted much attention. Numerous new algorithms have been developed and successfully applied to different applications. However, two critical problems are rarely mentioned. First, in real-world applications, the data often comes in a streaming fashion, but most existing hashing methods are batch-based models. Second, when the dataset becomes huge, it is almost impossible to load all the data into memory to train hashing models. In this paper, we propose a novel approach to handle these two problems simultaneously based on the idea of data sketching. A sketch of a dataset preserves its major characteristics but with a significantly smaller size. With a small-size sketch, our method can learn hash functions in an online fashion, while needing rather low computational complexity and storage space. Extensive experiments on two large-scale benchmarks and one synthetic dataset demonstrate the efficacy of the proposed method."} {"_id": "4bf25bc8d2727766ffd11b6994b1a56b1333c2eb", "title": "4 RFID-enabled Supply Chain Traceability: Existing Methods, Applications and Challenges", "text": "Radio Frequency Identification (RFID) technology promises to revolutionize various areas in the supply chain. Recently, many researchers have investigated how to improve the ability to track and trace a specific product along the supply chain, in terms of both effectiveness and efficiency, with the help of this technology. To enable traceability over the entire lifecycle, a robust and seamless traceability system has to be constructed. This requires the following three elements: (1) a data model and storage scheme that allow unique identification and a scalable database, (2) a system framework which enables sharing of traceability data between trading partners while maintaining sovereignty over what is shared and with whom, and (3) a tracing mechanism in order to achieve end-to-end traceability and to provide the history information of the products in question. Along with the studies addressing the requirements and design of traceability system architecture, applications in real environments have also been reported.
Due to the strong regulation in the EU, which states that food business operators shall be able to identify any person who supplied them and any business which takes food from them, RFID-enabled traceability systems are well implemented in the food supply chain. However, other industries are also adopting RFID to enhance traceability, such as pharmaceuticals and aviation, and even continuous process industries like iron ore refining. Despite the promising nature of RFID in tracking, there are several challenges to be addressed. Since an RFID tag does not require line-of-sight, multiple tags can be read simultaneously, but tag collisions may also occur. Therefore, there is no guarantee that a tag will be continuously detected on consecutive scans. Moreover, the use of RFID tags can pose a serious threat to the privacy of information. This may facilitate espionage by unauthorized personnel. In this chapter, we analyze the main issues of RFID-enabled traceability along the supply chain mentioned above: existing methods, applications and future challenges. Section 2 starts by pointing out the characteristics of RFID data and the requirements for RFID-enabled traceability. Subsequently, we introduce data types, storage schemes and system frameworks proposed in the existing literature. Then, we discuss tracing methods based on the traceability system architecture. Section 3 presents current applications in real settings of both discrete and continuous production. We also discuss challenges that are preventing companies from adopting RFID for their traceability solutions in Section 4. Finally, we conclude our study in Section 5."} {"_id": "9509c435260fce9dbceaf44b52791ac8fc5343bf", "title": "A Survey on Image Classification Approaches and Techniques", "text": "Object classification is an important task within the field of computer vision. Image classification refers to the labelling of images into one of a number of predefined categories. Classification includes image sensors, image pre-processing, object detection, object segmentation, feature extraction and object classification. Many classification techniques have been developed for image classification. In this survey, various classification techniques are considered: Artificial Neural Network (ANN), Decision Tree (DT), Support Vector Machine (SVM) and Fuzzy Classification."} {"_id": "a48c71153265d6da7fbc4b16327320a5cbfa6cba", "title": "Unite the People: Closing the loop between 3D and 2D Human Representations Supplementary Material", "text": "We have obtained human segmentation labels to integrate shape information into the SMPLify 3D fitting procedure and for the evaluation of methods introduced in the main paper. The labels consist of foreground segmentation for multiple human pose datasets and six body part segmentation for the LSP dataset. Whereas we discuss their use in the context of the UP dataset in the main paper, we discuss the annotation tool that we used for the collection (see Sec. 2.1) as well as the direct use of the human labels for model training (see Sec. 2.2) in this document."} {"_id": "9311c5d48dbb57cc8cbc226dd517f09ca18207b7", "title": "Mass Cytometry: Single Cells, Many Features", "text": "Technology development in biological research often aims to either increase the number of cellular features that can be surveyed simultaneously or enhance the resolution at which such observations are possible.
For decades, flow cytometry has balanced these goals to fill a critical need by enabling the measurement of multiple features in single cells, commonly to examine complex or hierarchical cellular systems. Recently, a format for flow cytometry has been developed that leverages the precision of mass spectrometry. This fusion of the two technologies, termed mass cytometry, provides measurement of over 40 simultaneous cellular parameters at single-cell resolution, significantly augmenting the ability of cytometry to evaluate complex cellular systems and processes. In this Primer, we review the current state of mass cytometry, providing an overview of the instrumentation, its present capabilities, and methods of data analysis, as well as thoughts on future developments and applications."} {"_id": "54d0d36eb833b1ffed905f25e06abc3dc6db233b", "title": "Manga content analysis using physiological signals", "text": "Recently, physiological signals have been analyzed more and more, especially in the context of everyday life activities such as watching videos or looking at pictures. Tracking these signals gives access not only to the mental state of the user (interest, tiredness, stress) but also to his emotions (sadness, fright, happiness). The analysis of the reader's physiological signals during reading can provide not only a better understanding of the reader's feelings but also a better understanding of the documents. Our main research direction is to find the relationship between a change in the reader's physiological signal and the content of the reading. As a first step, we investigate whether it is possible to distinguish one manga (Japanese comic book) from another by analyzing the physiological signals of the reader. We use 3 different manga genres (horror, romance, comedy) and try to predict which one is read by analyzing the features extracted from the physiological signals of the reader. Our method uses the blood volume pulse, the electrodermal activity and the skin temperature of the reader while reading. We show that by using these physiological signals with a support vector machine we can retrieve which manga has been read with a 90% average accuracy."} {"_id": "404abe4a6b47cb210512b7ba10c155dda6331585", "title": "Blood monocytes: development, heterogeneity, and relationship with dendritic cells.", "text": "Monocytes are circulating blood leukocytes that play important roles in the inflammatory response, which is essential for the innate response to pathogens. But inflammation and monocytes are also involved in the pathogenesis of inflammatory diseases, including atherosclerosis. In adult mice, monocytes originate in the bone marrow in a Csf-1R (MCSF-R, CD115)-dependent manner from a hematopoietic precursor common for monocytes and several subsets of macrophages and dendritic cells (DCs). Monocyte heterogeneity has long been recognized, but in recent years investigators have identified three functional subsets of human monocytes and two subsets of mouse monocytes that exert specific roles in homeostasis and inflammation in vivo, reminiscent of those of the previously described classically and alternatively activated macrophages.
Functional characterization of monocytes is in progress in humans and rodents and will provide a better understanding of the pathophysiology of inflammation."} {"_id": "2283a70e539feea50f2e888863f68c2410082e0c", "title": "Participatory networking: an API for application control of SDNs", "text": "We present the design, implementation, and evaluation of an API for applications to control a software-defined network (SDN). Our API is implemented by an OpenFlow controller that delegates read and write authority from the network's administrators to end users, or applications and devices acting on their behalf. Users can then work with the network, rather than around it, to achieve better performance, security, or predictable behavior. Our API serves well as the next layer atop current SDN stacks. Our design addresses the two key challenges: how to safely decompose control and visibility of the network, and how to resolve conflicts between untrusted users and across requests, while maintaining baseline levels of fairness and security. Using a real OpenFlow testbed, we demonstrate our API's feasibility through microbenchmarks, and its usefulness by experiments with four real applications modified to take advantage of it."} {"_id": "3f623aed8de2c45820eab2e4591b1207b3a65c13", "title": "What happens when HTTP adaptive streaming players compete for bandwidth?", "text": "With an increasing demand for high-quality video content over the Internet, it is becoming more likely that two or more adaptive streaming players share the same network bottleneck and compete for available bandwidth. This competition can lead to three performance problems: player instability, unfairness between players, and bandwidth underutilization. However, the dynamics of such competition and the root cause of the previous three problems are not yet well understood. In this paper, we focus on the problem of competing video players and describe how the typical behavior of an adaptive streaming player in its steady state, which includes periods of activity followed by periods of inactivity (ON-OFF periods), is the main root cause behind the problems listed above. We use two adaptive players to experimentally showcase these issues. Then, focusing on the issue of player instability, we test how several factors (the ON-OFF durations, the available bandwidth and its relation to available bitrates, and the number of competing players) affect stability."} {"_id": "4b55ece626f3aca49ea7774f62b22d2a0e18201f", "title": "Towards network-wide QoE fairness using openflow-assisted adaptive video streaming", "text": "Video streaming is an increasingly popular way to consume media content. Adaptive video streaming is an emerging delivery technology which aims to increase user QoE and maximise connection utilisation. Many implementations naively estimate bandwidth from a one-sided client perspective, without taking into account other devices in the network. This behaviour results in unfairness and could potentially lower QoE for all clients. We propose an OpenFlow-assisted QoE Fairness Framework that aims to fairly maximise the QoE of multiple competing clients in a shared network environment. By leveraging a Software Defined Networking technology, such as OpenFlow, we provide a control plane that orchestrates this functionality.
The evaluation of our approach in a home networking scenario introduces user-level fairness and network stability, and illustrates the optimisation of QoE across multiple devices in a network."} {"_id": "1f0ea586a80833ee7b27ada93cc751449c4a3cdf", "title": "A network in a laptop: rapid prototyping for software-defined networks", "text": "Mininet is a system for rapidly prototyping large networks on the constrained resources of a single laptop. The lightweight approach of using OS-level virtualization features, including processes and network namespaces, allows it to scale to hundreds of nodes. Experiences with our initial implementation suggest that the ability to run, poke, and debug in real time represents a qualitative change in workflow. We share supporting case studies culled from over 100 users, at 18 institutions, who have developed Software-Defined Networks (SDN). Ultimately, we think the greatest value of Mininet will be supporting collaborative network research, by enabling self-contained SDN prototypes which anyone with a PC can download, run, evaluate, explore, tweak, and build upon."} {"_id": "23e1cd65fc01e8dfe3beecaa07484a279ad396de", "title": "Network characteristics of video streaming traffic", "text": "Video streaming represents a large fraction of Internet traffic. Surprisingly, little is known about the network characteristics of this traffic. In this paper, we study the network characteristics of the two most popular video streaming services, Netflix and YouTube. We show that the streaming strategies vary with the type of the application (Web browser or native mobile application), and the type of container (Silverlight, Flash, or HTML5) used for video streaming. In particular, we identify three different streaming strategies that produce traffic patterns from non-ack clocked ON-OFF cycles to bulk TCP transfer. We then present an analytical model to study the potential impact of these streaming strategies on the aggregate traffic and make recommendations accordingly."} {"_id": "0938ee6e489b9ac8fcc78df9b75e5395a734d357", "title": "Software architecture: a roadmap", "text": "Over the past decade software architecture has received increasing attention as an important subfield of software engineering. During that time there has been considerable progress in developing the technological and methodological base for treating architectural design as an engineering discipline. However, much remains to be done to achieve that goal. Moreover, the changing face of technology raises a number of new challenges for software architecture. This paper examines some of the important trends of software architecture in research and practice, and speculates on the important emerging trends, challenges, and aspirations."} {"_id": "f4150e2fb4d8646ebc2ea84f1a86afa1b593239b", "title": "Threat detection in online discussions", "text": "This paper investigates the effect of various types of linguistic features (lexical, syntactic and semantic) for training classifiers to detect threats of violence in a corpus of YouTube comments. Our results show that combinations of lexical features outperform the use of more complex syntactic and semantic features for this task."} {"_id": "a3819dda9a5f00dbb8cd3413ca7422e37a0d5794", "title": "Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations", "text": "In this paper, a fast and flexible algorithm for computing watersheds in digital grayscale images is introduced. 
A review of watersheds and related notions is first presented, and the major methods to determine watersheds are discussed. The present algorithm is based on an immersion process analogy, in which the flooding of the water in the picture is efficiently simulated using a queue of pixels. It is described in detail and provided in pseudo-C. We prove that the accuracy of this algorithm is superior to that of existing implementations. Furthermore, it is shown that its adaptation to any kind of digital grid and its generalization to n-dimensional images and even to graphs are straightforward. In addition, its strongest point is that it is faster than any other watershed algorithm. Applications of this algorithm with regard to picture segmentation are presented for MR imagery and for digital elevation models. An example of a 3-D watershed is also provided. Lastly, some ideas are given on how to solve complex segmentation tasks using watersheds on graphs. Index Terms\u2014Algorithm, digital image, FIFO structure, graph, grid, mathematical morphology, picture segmentation, watersheds."} {"_id": "6d21e6fff7c7de7e4758cf02b5933e7ad987a718", "title": "Key Challenges for the Smart City: Turning Ambition into Reality", "text": "Smart city is a label internationally used by cities, researchers and technology providers with different meanings. As a popular concept it is widely used by city administrators and politicians to promote their efforts. It is hard enough to find a good definition for smart cities, but even harder to find a trustworthy description of what it takes to become a smart city and how a city administration is impacted. This paper sets out to investigate how a city, aspiring to become a 'smart city', can manage the organization to realize that ambition. Specifically, the paper describes the case of the City of Ghent, Belgium, and the key challenges it has been facing in its ongoing efforts to be a smart city. Based on in-depth interviews with city representatives, six key challenges for smart city realization were identified and tested with a panel of representatives from five European cities that are in the process of becoming a smart city. This way, the study contributes to a more professional pursuit of the smart city concept."} {"_id": "a6eb10b1d30b4547b04870a82ec0c65baf2198f8", "title": "Big Data Management Assessed Coursework Two Big Data vs Semantic Web F 21 BD", "text": ""} {"_id": "712194bd90119fd9f059025fd41b72ed12e5cc32", "title": "A multiscale random field model for Bayesian image segmentation", "text": "Many approaches to Bayesian image segmentation have used maximum a posteriori (MAP) estimation in conjunction with Markov random fields (MRF). Although this approach performs well, it has a number of disadvantages. In particular, exact MAP estimates cannot be computed, approximate MAP estimates are computationally expensive to compute, and unsupervised parameter estimation of the MRF is difficult. The authors propose a new approach to Bayesian image segmentation that directly addresses these problems. The new method replaces the MRF model with a novel multiscale random field (MSRF) and replaces the MAP estimator with a sequential MAP (SMAP) estimator derived from a novel estimation criterion. Together, the proposed estimator and model result in a segmentation algorithm that is not iterative and can be computed in time proportional to MN, where M is the number of classes and N is the number of pixels.
The authors also develop a computationally efficient method for unsupervised estimation of model parameters. Simulations on synthetic images indicate that the new algorithm performs better and requires much less computation than MAP estimation using simulated annealing. The algorithm is also found to improve classification accuracy when applied to the segmentation of multispectral remotely sensed images with ground truth data."} {"_id": "db9bbebbe0c4dab997583b2a3de2bca6b2217af9", "title": "A systemic and cognitive view on collaborative knowledge building with wikis", "text": "Wikis provide new opportunities for learning and for collaborative knowledge building as well as for understanding these processes. This article presents a theoretical framework for describing how learning and collaborative knowledge building take place. In order to understand these processes, three aspects need to be considered: the social processes facilitated by a wiki, the cognitive processes of the users, and how both processes influence each other mutually. For this purpose, the model presented in this article borrows from the systemic approach of Luhmann as well as from Piaget\u2019s theory of equilibration and combines these approaches. The model analyzes processes which take place in the social system of a wiki as well as in the cognitive systems of the users. The model also describes learning activities as processes of externalization and internalization. Individual learning happens through internal processes of assimilation and accommodation, whereas changes in a wiki are due to activities of external assimilation and accommodation, which in turn lead to collaborative knowledge building. This article provides empirical examples for these equilibration activities by analyzing Wikipedia articles. Equilibration activities are described as being caused by subjectively perceived incongruities between an individual\u2019s knowledge and the information provided by a wiki. Incongruities of medium level cause cognitive conflicts, which in turn activate the described processes of equilibration and facilitate individual learning and collaborative knowledge building."} {"_id": "a68228a6108ac558bed194d098b02a2c90b58e0f", "title": "Anaerobic biotechnology for industrial wastewater treatment.", "text": "Microbiological formation of methane has been occurring naturally for ages in such diverse habitats as marshes, rice paddies, benthic deposits, deep ocean trenches, hot springs, trees, cattle, pigs, iguanas, termites, and human beings (Mah and Smith, 1981; Steggerda and Dimmick, 1966; Prins, 1979; Balch et al., 1979). In the past decade, interest in anaerobic biotechnology has grown considerably, both in the harnessing of the process for industrial wastewater treatment and in the bioconversion of crop-grown biomass to methane (Sheridan, 1982; Chynoweth and Srivastava, 1980)."} {"_id": "b2165397df5c5b9dda23b74c7c592aaec30bf9ec", "title": "Biometric Template Protection: Bridging the performance gap between theory and practice", "text": "Biometric recognition is an integral component of modern identity management and access control systems. Due to the strong and permanent link between individuals and their biometric traits, exposure of enrolled users' biometric information to adversaries can seriously compromise biometric system security and user privacy. Numerous techniques have been proposed for biometric template protection over the last 20 years.
While these techniques are theoretically sound, they seldom guarantee the desired noninvertibility, revocability, and nonlinkability properties without significantly degrading the recognition performance. The objective of this work is to analyze the factors contributing to this performance divide and highlight promising research directions to bridge this gap. The design of invariant biometric representations remains a fundamental problem, despite recent attempts to address this issue through feature adaptation schemes. The difficulty in estimating the statistical distribution of biometric features not only hinders the development of better template protection algorithms but also diminishes the ability to quantify the noninvertibility and nonlinkability of existing algorithms. Finally, achieving nonlinkability without the use of external secrets (e.g., passwords) continues to be a challenging proposition. Further research on the above issues is required to cross the chasm between theory and practice in biometric template protection."} {"_id": "a2a3c8dcc38e6f8708e03413bdff6cd6c7c6e0d3", "title": "Lava flow Simulation with Cellular Automata: Applications for Civil Defense and Land Use Planning", "text": "The determination of areas exposed to new eruptive events in volcanic regions is crucial for diminishing consequences in terms of human casualties and damage to material property. In this paper, we illustrate a methodology for defining flexible, highly detailed lava invasion hazard maps, which is based on a robust and efficient Cellular Automata model for simulating lava flows. We also present some applications for land use planning and civil defense to some inhabited areas of Mt Etna (South Italy), Europe\u2019s most active volcano, showing the methodology\u2019s appropriateness."} {"_id": "680e7d3f1400c4f3ebd587290f8742cba4542599", "title": "Asynchronous Event-Based Hebbian Epipolar Geometry", "text": "Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult, as they can no longer be defined because of the complexity of the sensor geometry. This paper will show that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based (rather than frame-based) vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix applied, in a first stage, to classic perspective vision and then to more general cameras.
Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships. Finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor."} {"_id": "0e036ebbabb6683c03f6bc10adb58724a8644d1c", "title": "Invalid retro-cues can eliminate the retro-cue benefit: Evidence for a hybridized account.", "text": "The contents of visual working memory (VWM) are capacity-limited and require frequent updating. The retrospective cueing (retro-cueing) paradigm clarifies how directing internal attention among VWM items boosts VWM performance. In this paradigm a cue appears prior to retrieval, but after encoding and maintenance. The retro-cue effect (RCE) refers to superior VWM after valid versus neutral retro-cues. Here we investigated the effect of including invalid retro-cues on VWM performance. We conducted 2 pairs of experiments, changing both the probe type (recognition and recall) and the presence or absence of invalid retro-cue trials. Furthermore, to fully characterize these effects over time, we used extended post-retro-cue delay durations. In the first set of experiments, probing VWM using recognition indicated that the RCE remained consistent in magnitude with or without invalid retro-cue trials. In the second set of experiments, VWM was probed with recall. Here, the RCE was eliminated when invalid retro-cues were included. This finer-grained measure of VWM fidelity showed that all items were subject to decay over time. We conclude that the invalid retro-cues impaired the protection of validly cued items, but these items remain accessible, suggesting greater concordance with a prioritization account."} {"_id": "40e06608324781f6de425617a870a103d4233d5c", "title": "Macro process of knowledge management for continuous innovation", "text": "Purpose \u2013 The purpose of this research is to understand the mechanisms of knowledge management (KM) for innovation and provide an approach for enterprises to leverage KM activities into continuous innovation. Design/methodology/approach \u2013 By reviewing the literature from multidisciplinary fields, the concepts of knowledge, KM and innovation are investigated. The physical, human and technological perspectives of KM are distinguished with the identification of two core activities for innovation: knowledge creation and knowledge usage. Then an essential requirement for continuous innovation \u2013 an internalization phase \u2013 is defined. The systems thinking and human-centred perspectives are adopted to provide a comprehensive understanding of the mechanisms of KM for innovation. Findings \u2013 A networking process of continuous innovation based on KM is proposed by incorporating the phase of internalization. According to the three perspectives of KM, three sources of organizational knowledge assets in innovation are identified. Then, based on the two core activities of innovation, a meta-model and a macro process of KM are proposed to model the mechanisms of KM for continuous innovation. Then, in order to operationalize the KM mechanisms, a hierarchical model is constructed by integrating three sources of knowledge assets, the meta-model and the macro process into the process of continuous innovation. This model decomposes the complex relationships between knowledge and innovation into four layers.
Practical implications \u2013 According to the lessons learned about KM practices in previous research, the three perspectives of KM should collaborate with each other for successful implementation of KM projects for innovation; and the hierarchical model provides a suitable architecture to implement systems of KM for innovation. Originality/value \u2013 The meta-model and macro process of KM explain how the next generation of KM can help value creation and support continuous innovation from the systems thinking perspective. The hierarchical model illustrates the complicated knowledge dynamics in the process of continuous innovation."} {"_id": "6a85c70922e618d5aec15065f88f7b23c48a676b", "title": "Repellent and Contact Toxicity of Alpinia officinarum Rhizome Extract against Lasioderma serricorne Adults", "text": "The repellent and contact toxicities of Alpinia officinarum rhizome extract on Lasioderma serricorne adults, and its ability to protect stored wheat flour from infestation by L. serricorne adults, were investigated. The A. officinarum extract exhibited strong repellent and contact toxicities against L. serricorne adults. The toxicities increased significantly with increasing treatment time and dose. The mean percentage repellency value reached 91.3% at class V at the dose of 0.20 \u03bcL/cm2 after 48 h of exposure. The corrected mortality reached over 80.0% at the dose of 0.16 \u03bcL/cm2 after 48 h of exposure. The A. officinarum extract could significantly reduce the level of L. serricorne infestation in stored wheat flour. In particular, insect infestation was nil in wheat flour packaged in kraft paper bags coated with the A. officinarum extract at doses above 0.05 \u03bcL/cm2. The naturally occurring A. officinarum extract could be useful for integrated management of L. serricorne."} {"_id": "9cb6333ecb28ecb661d76dc8cda8e0766f1c06f4", "title": "A regression-based radar-mote system for people counting", "text": "People counting is key to a diverse set of sensing applications. In this paper, we design a mote-scale event-driven solution that uses a low-power pulsed radar to estimate the number of people within the ~10m radial range of the radar. In contrast to extant solutions, most of which use computer vision, our solution is lightweight and private. It also better tolerates the presence of obstacles that partially or fully impair line of sight; this is achieved by accounting for \u201csmall\u201d indirect radio reflections via joint time-frequency domain features. The counter itself is realized using Support Vector Regression; the regression map is learned from a medium-sized dataset of 0-~40 people in various indoor room settings. 10-fold cross validation of our counter yields a mean absolute error of 2.17 between the estimated count and the ground truth and a correlation coefficient of 0.97. We compare the performance of our solution with baseline counters."} {"_id": "027dc7d698bb161e1b7437c218b5811c015840e6", "title": "Automatic Image Alignment and Stitching of Medical Images with Seam Blending", "text": "This paper proposes an algorithm which automatically aligns and stitches the component medical images (fluoroscopic) with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here, the technique is intensity-based rather than feature-based. It works well in domains where feature-based methods have difficulty, yet is more robust than traditional correlation.
Component images are stitched together using a new triangular-averaging-based blending algorithm. The quality of the resultant image is tested for photometric inconsistencies and geometric misalignments. This method cannot correct rotational, scale and perspective artifacts. Keywords\u2014Histogram Matching, Image Alignment, Image Stitching, Medical Imaging."} {"_id": "530de5705066057ead2f911cef2b45f8d237e5e9", "title": "Formal verification of security protocol implementations: a survey", "text": "Automated formal verification of security protocols has been mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. According to these approaches, libraries are assumed to correctly implement some models. The aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach."} {"_id": "197904cd4c8387244761acab5bd0fe455ec41108", "title": "Multiple-path testing for cross site scripting using genetic algorithms", "text": "Web applications suffer from different security vulnerabilities that could be exploited by hackers to cause harm in a variety of ways. A number of approaches have been proposed to test for such vulnerabilities. However, some gaps are still to be addressed. In this paper, we address one such gap: the problem of automatically generating test data (i.e., possible attacks) to test for the cross site scripting (XSS) type of vulnerability. The objective is to generate a set of test data to exercise candidate security-vulnerable paths in a given script. The desirable set of test data must be effective in the sense that it uncovers whether any path can indeed be exploited to launch an attack. We designed a genetic algorithm-based test data generator that uses a database of XSS attack patterns to generate possible attacks and assess whether the attack is successful. We considered different types of XSS vulnerability: stored, reflected and DOM-based. We empirically validated our test data generator using case studies of Web applications developed using PHP and MySQL. Empirical results show that our test data generator is effective in generating, in one run, multiple test data to cover multiple target"} {"_id": "14ecc44aaaf525955fa0cc248787be310f08cdc4", "title": "Multi-task assignment for crowdsensing in mobile social networks", "text": "Mobile crowdsensing is a new paradigm in which a crowd of mobile users exploit the smart devices they carry to conduct complex computation and sensing tasks in mobile social networks (MSNs). In this paper, we focus on the task assignment problem in mobile crowdsensing. Unlike traditional task scheduling problems, the task assignment in mobile crowdsensing must follow the mobility model of users in MSNs. To solve this problem, we propose an oFfline Task Assignment (FTA) algorithm and an oNline Task Assignment (NTA) algorithm. Both FTA and NTA adopt a greedy task assignment strategy.
Moreover, we prove that the FTA algorithm is an optimal offline task assignment algorithm, and derive a competitive ratio for the NTA algorithm. In addition, we demonstrate the significant performance of our algorithms through extensive simulations, based on four real MSN traces and a synthetic MSN trace."} {"_id": "1dba1fa6dd287fde87823218d4f03559dde4e15b", "title": "Natural Language Annotations for Question Answering", "text": "This paper presents strategies and lessons learned from the use of natural language annotations to facilitate question answering in the START information access system."} {"_id": "4e06b359a9452d1420b398fa7391cad411d4cb23", "title": "Preventing man-in-the-middle attack in Diffie-Hellman key exchange protocol", "text": "The acceleration in developments in communication technology has led to a consequent increase in the vulnerability of data due to penetration attacks. These attacks often come from outside, where non-qualified companies develop IT projects. Cryptography can offer high levels of security but has recently shown vulnerabilities such as the man-in-the-middle (MITM) attack in areas of key exchange protocols, especially in the Diffie-Hellman (DH) protocol. Firstly, this paper presents an overview of MITM attacks targeted at the DH protocol, then discusses some of the shortcomings of current defenses. A proposed method to secure DH, which helps secure systems against MITM attacks, is then presented. This method involves the use of the Geffe generator to produce binary sequences. The use of the Geffe generator offers high levels of randomness. Data hashed and encrypted using this proposed method will be very difficult to intercept and decrypt without the appropriate keys. This offers high levels of security and helps prevent MITM attacks."} {"_id": "f0f2b2cda279843d0b32b4ca91d014592030ba45", "title": "Information security behavior: Recognizing the influencers", "text": "With the widespread use of the Internet comes an increase in Information Security threats. To protect against these threats, technology alone has been found to be not enough, as it can be misused by users and become vulnerable to various threats, thus losing its usefulness. This is evident as users tend to use weak passwords, open email attachments without checking them, and do not set correct security settings. However, especially with the continuously evolving threat landscape, one cannot assume that users are always motivated to learn about Information Security and practice it. In fact, there are situations where an aware user who knows how to protect himself simply chooses not to, because he does not care, because of usability problems, or because he does not consider himself a target. Thus, understanding human security behavior is vital for ensuring an efficient Information Security environment that cannot depend on technology only. Although a number of psychological theories and models, such as Protection Motivation Theory and the Technology Acceptance Model, have been used in the literature to interpret these behaviors, they tend to assess users' intentions rather than actual behavior. The aim of this paper is to understand and assess these behaviors from a holistic view by finding the significant factors that influence them and how to best assist users to protect themselves. To accomplish this, a systematic literature review was conducted where relevant literature was sought in a number of academic digital databases.
As a result, several key behavioral influencers were identified as essential to consider when educating and directing users' security behavior. Further to that, a number of Information Security awareness approaches were proposed that may transform the user from being ill-informed into a security-minded user who is able to make an informed decision."} {"_id": "9e948ba3b3431ff2b1da9077955bec5326288f8c", "title": "Near-optimal hybrid analog and digital precoding for downlink mmWave massive MIMO systems", "text": "Millimeter wave (mmWave) massive MIMO can achieve an orders-of-magnitude increase in spectral and energy efficiency, and it usually exploits hybrid analog and digital precoding to overcome the serious signal attenuation induced by mmWave frequencies. However, most hybrid precoding schemes focus on the full-array structure, which involves high complexity. In this paper, we propose a near-optimal iterative hybrid precoding scheme based on the more realistic subarray structure with low complexity. We first decompose the complicated capacity optimization problem into a series of subproblems that are easier to handle by considering each antenna array one by one. Then we optimize the achievable capacity of each antenna array from the first one to the last one by utilizing the idea of successive interference cancelation (SIC), which is realized in an iterative procedure that is easy to parallelize. It is shown that the proposed hybrid precoding scheme can achieve better performance than other recently proposed hybrid precoding schemes, while it also enjoys an acceptable computational complexity."} {"_id": "d7f6446205d8c30711d9135df7719ce0a9a45d32", "title": "Self-compassion increases self-improvement motivation.", "text": "Can treating oneself with compassion after making a mistake increase self-improvement motivation? In four experiments, the authors examined the hypothesis that self-compassion motivates people to improve personal weaknesses, moral transgressions, and test performance. Participants in a self-compassion condition, compared to a self-esteem control condition and either no intervention or a positive distraction control condition, expressed greater incremental beliefs about a personal weakness (Experiment 1); reported greater motivation to make amends and avoid repeating a recent moral transgression (Experiment 2); spent more time studying for a difficult test following an initial failure (Experiment 3); exhibited a preference for upward social comparison after reflecting on a personal weakness (Experiment 4); and reported greater motivation to change the weakness (Experiment 4). These findings suggest that, somewhat paradoxically, taking an accepting approach to personal failure may make people more motivated to improve themselves."} {"_id": "97c76f09bcb077ca7b9f47e82a34e37171d01f41", "title": "Collaborative Joint Training With Multitask Recurrent Model for Speech and Speaker Recognition", "text": "Automatic speech and speaker recognition are traditionally treated as two independent tasks and are studied separately. The human brain, in contrast, deciphers the linguistic content and the speaker traits from speech in a collaborative manner. This key observation motivates the work presented in this paper. A collaborative joint training approach based on multitask recurrent neural network models is proposed, where the output of one task is backpropagated to the other tasks.
This is a general framework for learning collaborative tasks and fits well with the goal of joint learning of automatic speech and speaker recognition. Through a comprehensive study, it is shown that the multitask recurrent neural net models deliver improved performance on both automatic speech and speaker recognition tasks as compared to single-task systems. The strength of such multitask collaborative learning is analyzed, and the impact of various training configurations is investigated."} {"_id": "1005645c05585c2042e3410daeed638b55e2474d", "title": "A Scalable Hierarchical Distributed Language Model", "text": "Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the nonhierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models."} {"_id": "3a91c3eb9a05ef5c8b875ab112448cc3f44a1268", "title": "Extensions of recurrent neural network language model", "text": "We present several modifications of the original recurrent neural network language model (RNN LM). While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is the computational complexity. In this work, we present approaches that lead to a more than 15-fold speedup in both the training and testing phases. Next, we show the importance of using the backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. Finally, we discuss possibilities for reducing the number of parameters in the model. The resulting RNN model can thus be smaller, faster during both training and testing, and more accurate than the basic one."} {"_id": "0b47b6ffe714303973f40851d975c042ff4fcde1", "title": "Distributional Clustering of English Words", "text": "We describe and experimentally evaluate a method for automatically clustering words according to their distribution in particular syntactic contexts. Deterministic annealing is used to find lowest-distortion sets of clusters. As the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical \u201csoft\u201d clustering of the data. Clusters are used as the basis for class models of word co-occurrence, and the models are evaluated with respect to held-out test data."} {"_id": "77fbbb9ff612c48dad8313087b0e6ed03c31812a", "title": "Characterization of liquid crystal polymer (LCP) material and transmission lines on LCP substrates from 30 to 110 GHz", "text": "Liquid crystal polymer (LCP) is a material that has gained attention as a potential high-performance microwave substrate and packaging material. This investigation uses several methods to determine the electrical properties of LCP at millimeter-wave frequencies.
Microstrip ring resonators and cavity resonators are measured in order to characterize the dielectric constant (\u03b5r) and loss tangent (tan \u03b4) of LCP above 30 GHz. The measured dielectric constant is shown to be steady near 3.16, and the loss tangent stays below 0.0049. In addition, various transmission lines are fabricated on different LCP substrate thicknesses and the loss characteristics are given in decibels per centimeter from 2 to 110 GHz. Peak transmission-line losses at 110 GHz vary between 0.88 and 2.55 dB/cm, depending on the line type and geometry. These results show, for the first time, that LCP has excellent dielectric properties for applications extending through millimeter-wave frequencies."} {"_id": "ccd80fefa3b6ca995c62f5a5ee0fbfef5732aaf3", "title": "Efficient Multi-task Feature and Relationship Learning", "text": "We consider a multitask learning problem, in which several predictors are learned jointly. Prior research has shown that learning the relations between tasks, and between the input features, together with the predictor, can lead to better generalization and interpretability, which has proved useful for applications in many domains. In this paper, we consider a formulation of multitask learning that learns the relationships both between tasks and between features, represented through a task covariance and a feature covariance matrix, respectively. First, we demonstrate that existing methods proposed for this problem present an issue that may lead to ill-posed optimization. We then propose an alternative formulation, as well as an efficient algorithm to optimize it. Using ideas from optimization and graph theory, we propose an efficient coordinate-wise minimization algorithm that has a closed-form solution for each block subproblem. Our experiments show that the proposed optimization method is orders of magnitude faster than its competitors. We also provide a nonlinear extension that is able to achieve better generalization than existing methods."} {"_id": "bd564543735722c1f5040ce52e57c324ead4e499", "title": "Sensor relocation in mobile sensor networks", "text": "Recently there has been a great deal of research on using mobility in sensor networks to assist in the initial deployment of nodes. Mobile sensors are useful in this environment because they can move to locations that meet sensing coverage requirements. This paper explores using this motion capability to relocate sensors to deal with sensor failure or respond to new events. We define the problem of sensor relocation and propose a two-phase sensor relocation solution: redundant sensors are first identified and then relocated to the target location. We propose a Grid-Quorum solution to quickly locate the closest redundant sensor with low message complexity, and propose to use cascaded movement to relocate the redundant sensor in a timely, efficient and balanced way. Simulation results verify that the proposed solution outperforms others in terms of relocation time, total energy consumption, and minimum remaining energy."} {"_id": "3c03973d488666fa61a2c7dad65f8e0dea24b012", "title": "Distributed Optimization for Model Predictive Control of Linear Dynamic Networks With Control-Input and Output Constraints", "text": "A linear dynamic network is a system of subsystems that approximates the dynamic model of large, geographically distributed systems such as the power grid and traffic networks. 
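The LCP characterization above relies on microstrip ring resonators, which resonate when the mean circumference equals an integer number of guided wavelengths; solving that condition for the permittivity gives the extraction step. A small sketch of that computation, with hypothetical ring dimensions and resonance frequency (the paper's geometry is not given here), ignoring dispersion and conductor effects:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def eps_eff_from_resonance(f_n_hz, n, mean_radius_m):
    """Effective permittivity from the n-th resonance of a microstrip ring:
    2*pi*r = n * c / (f_n * sqrt(eps_eff))  =>  solve for eps_eff."""
    return (n * C / (2 * math.pi * mean_radius_m * f_n_hz)) ** 2

# Hypothetical numbers: a 3 mm mean-radius ring, n = 2 resonance at 20.1 GHz.
print(round(eps_eff_from_resonance(20.1e9, 2, 3e-3), 3))  # ~2.5
```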
A popular technique for operating such networks is distributed model predictive control (DMPC), which advocates the distribution of decision-making while handling constraints in a systematic way. This paper contributes to the state-of-the-art of DMPC of linear dynamic networks in two ways. First, it extends a baseline model by introducing constraints on the output of the subsystems and by letting subsystem dynamics depend on the states, in addition to the control signals, of the subsystems in the neighborhood. With these extensions, constraints on queue lengths and delayed dynamic effects can be modeled in traffic networks. Second, this paper develops a distributed interior-point algorithm for solving DMPC optimization problems with a network of agents, one for each subsystem, which is shown to converge to an optimal solution. In a traffic network, this distributed algorithm permits the subsystem of an intersection to be reconfigured by only coordinating with the subsystems in its vicinity."} {"_id": "4b75c15b4bff6fac5dc1a45ab1b9edbf29f6a1ec", "title": "Semi-Global Matching: A Principled Derivation in Terms of Message Passing", "text": "Semi-global matching, originally introduced in the context of dense stereo, is a very successful heuristic to minimize the energy of a pairwise multi-label Markov Random Field defined on a grid. We offer the first principled explanation of this empirically successful algorithm, and clarify its exact relation to belief propagation and tree-reweighted message passing. One outcome of this new connection is an uncertainty measure for the MAP label of a variable in a Markov Random Field."} {"_id": "a00d2c18777b97f60554d3a88dd2e948d74538bc", "title": "Challenges and implications of using ultrasonic communications in intra-body area networks", "text": "Body area networks (BANs) promise to enable revolutionary biomedical applications by wirelessly interconnecting devices implanted or worn by humans. However, BAN wireless communications based on radio-frequency (RF) electromagnetic waves suffer from poor propagation of signals in body tissues, which leads to high levels of attenuation. In addition, in-body transmissions are constrained to be low-power to prevent overheating of tissues and consequent death of cells. To address the limitations of RF propagation in the human body, we propose a paradigm shift by exploring the use of ultrasonic waves as the physical medium to wirelessly interconnect in-body implanted devices. Acoustic waves are the transmission technology of choice for underwater communications, since they are known to propagate better than their RF counterpart in media composed mainly of water. Similarly, we envision that ultrasound (i.e., acoustic waves at non-audible frequencies) will provide support for communications in the human body, which is composed of roughly 65% water. In this paper, we first assess the feasibility of using ultrasonic communications in intra-body BANs, i.e., in-body networks where the devices are biomedical sensors that communicate with an actuator/gateway device located inside the body. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, bandwidth, and transducer size. 
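Semi-global matching, mentioned above, aggregates matching costs along 1D scanlines, charging a small penalty P1 for disparity changes of one and a larger penalty P2 for bigger jumps. A single left-to-right pass in the standard formulation might look like the sketch below; the cost volume and penalty values are toy assumptions.

```python
import numpy as np

def sgm_aggregate_1d(cost, P1=10.0, P2=120.0):
    """One left-to-right SGM pass over a scanline.
    cost: (W, D) array of matching costs; returns aggregated costs."""
    W, D = cost.shape
    L = np.empty_like(cost)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        prev_min = prev.min()
        same = prev                                           # keep disparity
        up = np.concatenate(([np.inf], prev[:-1])) + P1       # d-1, small penalty
        down = np.concatenate((prev[1:], [np.inf])) + P1      # d+1, small penalty
        L[x] = cost[x] + np.minimum(np.minimum(same, up), down)
        L[x] = np.minimum(L[x], cost[x] + prev_min + P2)      # any jump, big penalty
        L[x] -= prev_min                                      # keep values bounded
    return L

costs = np.random.default_rng(1).uniform(0, 50, size=(6, 8))
print(sgm_aggregate_1d(costs).argmin(axis=1))  # winner-take-all disparities
```

In the full algorithm the same recurrence is run along several scan directions and the per-direction costs are summed before the winner-take-all step.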
Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access, and network layers of the protocol stack."} {"_id": "0c9c4daa230bcc62cf3d78236ccf54ff686a36e0", "title": "Biobanks: transnational, European and global networks.", "text": "Biobanks contain biological samples and associated information that are essential raw materials for advancement of biotechnology, human health, and research and development in life sciences. Population-based and disease-oriented biobanks are major biobank formats to establish the disease relevance of human genes and provide opportunities to elucidate their interaction with environment and lifestyle. The developments in personalized medicine require molecular definition of new disease subentities and biomarkers for identification of relevant patient subgroups for drug development. These emerging demands can only be met if biobanks cooperate at the transnational or even global scale. Establishment of common standards and strategies to cope with the heterogeneous legal and ethical landscape in different countries are seen as major challenges for biobank networks. The Central Research Infrastructure for Molecular Pathology (CRIP), the concept for a pan-European Biobanking and Biomolecular Resources Research Infrastructure (BBMRI), and the Organization for Economic Co-operation and Development (OECD) global Biological Resources Centres network are examples for transnational, European and global biobank networks that are described in this article."} {"_id": "8b3bbf6a4b1cb54b4141eb2696e192fefc7fa7f2", "title": "Implantable Antennas for Biomedical Applications: An Overview on Alternative Antenna Design Methods and Challenges", "text": "Implanted biomedical devices are attracting great attention as solutions to complex medical conditions. Many challenges face the design of implantable biomedical devices, including designing and implanting antennas within the hostile environment presented by the surrounding tissues of the human body. Implanted antennas must be compact in size, efficient, safe, and able to work effectively within adequate medical frequency bands. This paper presents an overview of the major aspects related to the design and challenges of in-body implanted antennas. The review includes surveying the applications, design methods, challenges, simulation tools, and testing and manufacturing of implantable biomedical antennas."} {"_id": "fd04694eef08eee47d239b6cac5afacf943bf0a1", "title": "Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms", "text": "The paper describes a neural-network-based system for the computer-aided detection of lung nodules in chest radiograms. Our approach is based on multiscale processing and artificial neural networks (ANNs). The problem of nodule detection is addressed by using a two-stage architecture including: 1) an attention-focusing subsystem that processes whole radiographs to locate possible nodular regions ensuring high sensitivity; 2) a validation subsystem that processes regions of interest to evaluate the likelihood of the presence of a nodule, so as to reduce false alarms and increase detection specificity. Biologically inspired filters (both LoG and Gabor kernels) are used to enhance salient image features. ANNs of the feedforward type are employed, which allow an efficient use of a priori knowledge about the shape of nodules and the background structure. 
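On the frequency trade-off named above: absorption in soft tissue is commonly modeled as a power law in frequency, so a higher ultrasonic carrier buys bandwidth at the price of steeply rising path loss. A back-of-the-envelope sketch under such an assumed model (the coefficients are ballpark textbook values, not numbers from the paper):

```python
def tissue_attenuation_db(f_mhz, distance_cm, a=0.7, b=1.1):
    """Power-law absorption often assumed for soft tissue:
    alpha(f) = a * f**b  [dB/cm], with a and b as illustrative constants."""
    return a * (f_mhz ** b) * distance_cm

# Same 10 cm in-body link evaluated at three candidate carriers.
for f in (1, 5, 10):  # MHz
    print(f"{f:>2} MHz: {tissue_attenuation_db(f, 10):.1f} dB")
```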
The images from the public JSRT database, including 247 radiograms, were used to build and test the system. We performed a further test by using a second private database with 65 radiograms collected and annotated at the Radiology Department of the University of Florence. Both data sets include nodule and nonnodule radiographs. The use of a public data set along with independent testing with a different image set makes the comparison with other systems easier and allows a deeper understanding of system behavior. Experimental results are described by ROC/FROC analysis. For the JSRT database, we observed that by varying sensitivity from 60% to 75%, the number of false alarms per image lies in the range 4-10, while accuracy is in the range 95.7-98.0%. When the second data set was used, comparable results were obtained. The observed system performances support the undertaking of system validation in clinical settings."} {"_id": "e726318569f5670369684e1a9f5b8f632d29bffc", "title": "Particle Swarm Optimization of the Multioscillatory LQR for a Three-Phase Four-Wire Voltage-Source Inverter With an $LC$ Output Filter", "text": "This paper presents evolutionary optimization of the linear quadratic regulator (LQR) for a voltage-source inverter with an LC output filter. The procedure involves particle-swarm-based search for the best weighting factors in the quadratic cost function. It is common practice that the weights in the cost function are set using the guess-and-check method. However, it becomes quite challenging, and usually very time-consuming, if there are many auxiliary states added to the system. In order to immunize the system against unbalanced and nonlinear loads, oscillatory terms are incorporated into the control scheme, and this significantly increases the number of weights to be guessed. All controller gains are determined altogether in one LQR procedure call, and the originality reported here refers to evolutionary tuning of the weighting matrix. There is only one penalty factor to be set by the designer during the controller synthesis procedure. This coefficient enables shaping the dynamics of the closed-loop system by penalizing the dynamics of control signals instead of selecting individual weighting factors for augmented state vector components. Simulation-based tuning and experimental verification (on a physical converter at the level of 21 kVA) are included."} {"_id": "b109f7d8b90a789a962649e65c35e041370f6bf4", "title": "Towards fabrication of Vertical Slit Field Effect Transistor (VeSFET) as new device for nano-scale CMOS technology", "text": "This paper proposes a CMOS-based process for Vertical Slit Field Effect Transistors. The central part of the device, namely, the vertical slit, is defined by using electron beam lithography and silicon dry etching. In order to verify the validity and the reproducibility of the process, devices with slit widths ranging from 16 nm to 400 nm were fabricated, with slit conductances in the range 0.6 to 3 millisiemens, in agreement with the expected values."} {"_id": "0605a012aeeee9bef773812a533c4f3cb7fa5a5f", "title": "Interpretable Counting for Visual Question Answering", "text": "Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed-length representations of both the image and question or summing fractional counts estimated from each section of the image. 
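The LQR abstract above replaces guess-and-check weight selection with a particle swarm search over the cost-function weights. The sketch below reproduces the flavor of that loop on a toy double-integrator plant: each particle encodes log(diag(Q)), and its fitness is a simulated closed-loop cost. The plant, fitness definition, and PSO constants are assumptions, not the paper's inverter model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double-integrator plant (an assumed stand-in, not the paper's inverter).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

def closed_loop_cost(log_q):
    """Fitness of one particle: LQR gain from the candidate Q, then a
    simulated cost penalizing both state error and control effort."""
    Q = np.diag(np.exp(log_q))
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    x, dt, J = np.array([1.0, 0.0]), 0.01, 0.0
    for _ in range(500):
        u = -K @ x
        J += (x @ x + 0.1 * float(u @ u)) * dt
        x = x + (A @ x + (B @ u).ravel()) * dt
    return J

rng = np.random.default_rng(2)
n_particles, dim = 12, 2
pos = rng.uniform(-2.0, 4.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([closed_loop_cost(p) for p in pos])
for _ in range(30):
    g = pbest[pbest_f.argmin()]                      # swarm-best particle
    vel = (0.7 * vel
           + 1.5 * rng.random((n_particles, dim)) * (pbest - pos)
           + 1.5 * rng.random((n_particles, dim)) * (g - pos))
    pos = np.clip(pos + vel, -3.0, 6.0)              # keep exp(diag(Q)) sane
    f = np.array([closed_loop_cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print("best diag(Q):", np.exp(pbest[pbest_f.argmin()]).round(2))
```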
In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state-of-the-art architecture for VQA on multiple metrics that evaluate counting."} {"_id": "9d1c4f3f946f837d59a256df3bee3d5204271537", "title": "A systematic review of the impact of the use of social networking sites on body image and disordered eating outcomes.", "text": "A large body of literature has demonstrated mass media effects on body image and disordered eating. More recently, research in this area has turned to 'new' forms of media, such as the Internet, and particularly Social Networking Sites (SNSs). A systematic search for peer-reviewed articles on SNS use and body image and eating disorders resulted in 20 studies meeting specific inclusion criteria. As a whole, these articles demonstrated that use of SNSs is associated with body image and disordered eating. Specific SNS activities, such as viewing and uploading photos and seeking negative feedback via status updates, were identified as particularly problematic. A small number of studies also addressed underlying processes and found that appearance-based social comparison mediated the relationship between SNS use and body image and eating concerns. Gender was not found to be a moderating factor. It was concluded that, although there is a good deal of correlational research supporting the maladaptive effect of SNS use on body image and disordered eating, more longitudinal and experimental studies are needed."} {"_id": "235af8cac488c9347bf7a958fea93835fe9b896e", "title": "Supporting Cloud Computing in Thin-Client/Server Computing Model", "text": "This paper addresses the issue of how to support cloud computing in the thin-client/server computing model. We describe the design and implementation of the multiple-application-server thin-client/server computing (MAS TC/S) model that allows users with thin-client devices to roam around a wide area network whilst experiencing transparent working environments. MAS TC/S can be applied to a wide variety of applications in wide area networks, such as pervasive software rental, service-oriented infrastructure for an Internet service provider, and office automation for transnational corporations. The MAS TC/S architecture model comprises five major components: the display protocol, the multiple-application-server topology, the application-server discovery protocol, the distributed file system, and the network input/output protocol. We discuss problems and present solutions in the design of each constituent component. A prototype of the MAS TC/S that spans the campuses of two universities in Taiwan has been built \u2013 we also report our experiences in constructing this prototype."} {"_id": "7220e204d4c81e309e2a20a4ba3aadbbb8e34315", "title": "Explaining Inconsistencies in OWL Ontologies", "text": "Justifications play a central role as the basis for explaining entailments in OWL ontologies. While techniques for computing justifications for entailments in consistent ontologies are theoretically and practically well-understood, little is known about the practicalities of computing justifications for inconsistent ontologies. 
This is despite the fact that justifications are important for repairing inconsistent ontologies, and can be used as a basis for paraconsistent reasoning. This paper presents algorithms, optimisations, and experiments in this area. Surprisingly, it turns out that justifications for inconsistent ontologies are more \u201cdifficult\u201d to compute and are often more \u201cnumerous\u201d than justifications for entailments in consistent ontologies: whereas it is always possible to compute some justifications, it is often not possible to compute all justifications for real-world inconsistent ontologies."} {"_id": "6ff17302e0a3c583be76ee2f7d90d6f2067a74d7", "title": "Impact of interleaving on common-mode EMI filter weight reduction of paralleled three-phase voltage-source converters", "text": "This paper presents a detailed analysis of the impact of the interleaving angle on EMI filter weight reduction in a paralleled three-phase VSI motor drive system. The EMI noise analysis equivalent circuits are given in the frequency domain and EMI filter design methods for a motor drive system are discussed. Based on the analysis, design examples are given for DC and AC common-mode (CM) EMI filters showing that the minimum corner frequency in this case cannot ensure minimum EMI filter weight. In effect, the EMI filter weight is also determined by the volt-seconds on the inductor. With this consideration, the impact of interleaving is analyzed based on the equivalent circuit developed. Specifically, it is shown that interleaving can either reduce the volt-seconds on the inductor or help reduce the inductance needed, which has a different impact on the filter weight under different design conditions. EMI filters are designed for both AC and DC sides as examples, showing the impact that different interleaving angles can have on filter weight reduction. Verifications are carried out both in simulation and experimentally on a 2 kW converter system."} {"_id": "cb84ef73db0a259b07289590f0dfcb9b8b9bbe79", "title": "A hybrid RF and vibration energy harvester for wearable devices", "text": "This paper describes a hybrid radio frequency (RF) and piezoelectric thin-film polyvinylidene fluoride (PVDF) vibration energy harvester for wearable devices. By exploiting the impedance characteristics of parasitic capacitances and discrete inductors, the proposed harvester not only scavenges 15 Hz vibration energy but also works as a 915 MHz flexible silver-ink RF dipole antenna. In addition, an interface circuit including a 6-stage Dickson RF-to-DC converter and a diode bridge rectifier to convert the RF and vibration outputs of the hybrid harvester into DC signals to power resistive loads is evaluated. A maximum DC output power of 20.9 \u03bcW, when using the RF-to-DC converter and \u22128 dBm input RF power, is achieved at 36% of the open-circuit output voltage, while the DC power harvested from 3 g vibration excitation reaches a maximum of 2.8 \u03bcW at 51% of open-circuit voltage. 
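A routine first step in the CM-filter sizing discussed above is to place the LC corner frequency using the 40 dB/decade roll-off of a second-order filter; the abstract's point is that this corner frequency alone does not fix the weight, because the same L*C product can be split between L and C in many ways and the inductor volt-seconds also matter. A sketch with hypothetical numbers:

```python
import math

def lc_corner_frequency(f_noise_hz, att_req_db):
    """Corner frequency of a 2nd-order LC filter (40 dB/decade roll-off)
    that yields att_req_db of attenuation at the first noise harmonic."""
    return f_noise_hz / 10 ** (att_req_db / 40.0)

def lc_product(f_c_hz):
    """The L*C product is fixed by f_c = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * f_c_hz) ** 2

# Hypothetical requirement: 50 dB of CM attenuation at a 150 kHz harmonic.
fc = lc_corner_frequency(150e3, 50.0)
print(f"f_c = {fc / 1e3:.1f} kHz, L*C = {lc_product(fc):.3e} H*F")
```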
Experimental results show that the tested hybrid harvesting system simultaneously generates 7.3 \u03bcW DC power when the distance from the harvester to a 3 W EIRP 915 MHz transmitter is 5.5 m, and 1.8 \u03bcW DC power from a 1.8 g vibration acceleration peak."} {"_id": "3b2bb6b3b1dfc8e036106ad73a58ee9b7bc8f887", "title": "An Efficient End-to-End Neural Model for Handwritten Text Recognition", "text": "Offline handwritten text recognition from images is an important problem for enterprises attempting to digitize large volumes of hand-marked scanned documents/reports. Deep recurrent models such as Multi-dimensional LSTMs [10] have been shown to yield superior performance over traditional Hidden Markov Model based approaches, which suffer from the Markov assumption and therefore lack the representational power of RNNs. In this paper we introduce a novel approach that combines a deep convolutional network with a recurrent Encoder-Decoder network to map an image to a sequence of characters corresponding to the text present in the image. The entire model is trained end-to-end using Focal Loss [18], an improvement over the standard Cross-Entropy loss that addresses the class imbalance problem inherent to text recognition. To enhance the decoding capacity of the model, the Beam Search algorithm is employed, which searches for the best sequence out of a set of hypotheses based on a joint distribution of individual characters. Our model takes as input a downsampled version of the original image, thereby making it both computationally and memory efficient. The experimental results were benchmarked against two publicly available datasets, IAM and RIMES. We surpass the state-of-the-art word-level accuracy on the evaluation set of both datasets by 3.5% & 1.1%, respectively."} {"_id": "e4efcdcf039f48dcbe1024c013e90dc778808f03", "title": "Supply chain information systems strategy: Impacts on supply chain performance and firm performance", "text": "This paper examines the relationship between supply chain (SC) strategy and supply chain information systems (IS) strategy, and its impact on supply chain performance and firm performance. Theorizing from the supply chain and IS literatures within an overarching framework of the information processing theory (IPT), we develop hypotheses proposing a positive moderating effect of two supply chain IS strategies \u2013 IS for Efficiency and IS for Flexibility \u2013 on the respective relationships between two SC strategies \u2013 Lean and Agile, and supply chain performance. Based on confirmatory analysis and structural equation modeling of survey data from members of senior and executive management in the purchase/materials management/logistics/supply chain functions, from 205 firms, we validate these hypotheses and show that the IS for Efficiency (IS for Flexibility) IS strategy enhances the relationship between Lean (Agile) SC strategy and supply chain performance. We also show a positive association between supply chain performance and firm performance, and a full (partial) mediation effect of supply chain performance on the relation between Agile (Lean) SC strategy and firm performance. The paper contributes to the supply chain literature by providing theoretical understanding and empirical support of how SC strategies and IS strategies can work together to boost supply chain performance. In doing so, it identifies particular types of supply chain IS application portfolios that can enhance the benefits from specific SC strategies. 
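The recognizer above decodes with beam search over character hypotheses. The skeleton of that procedure is sketched below on per-step distributions; a real encoder-decoder conditions each step on the decoded prefix, which is simplified away here, and the sequence length, vocabulary, and EOS index are toy assumptions.

```python
import numpy as np

def beam_search(log_probs, beam_width=3, eos=0):
    """Keep the `beam_width` best partial sequences at every step.
    log_probs: (T, V) per-step log-probabilities (a toy stand-in for an
    encoder-decoder that would really condition on the decoded prefix)."""
    beams = [((), 0.0)]
    for step in log_probs:
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos:        # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for ch, lp in enumerate(step):    # extend by every character
                candidates.append((seq + (ch,), score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0]

rng = np.random.default_rng(3)
logits = rng.normal(size=(5, 6))                             # T=5 steps, V=6 chars
log_probs = logits - np.log(np.exp(logits).sum(1, keepdims=True))
print(beam_search(log_probs))
```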
The paper also develops and validates instruments for measuring two types of SC strategies and supply chain IS strategies. For practice, the paper offers guidance in making investment decisions for adopting and deploying IS appropriate to particular SC strategies and analyzing possible lack of alignment between applications that the firm deploys in its supply chain and the information processing needs of its SC strategy."} {"_id": "64e1ced6f61385bc8bfbc685040febca49384607", "title": "CIAN: Cross-Image Affinity Net for Weakly Supervised Semantic Segmentation", "text": "Weakly supervised semantic segmentation based on image-level labels aims to alleviate the data scarcity problem by training with coarse labels. State-of-the-art methods rely on image-level labels to generate proxy segmentation masks, then train the segmentation network on these masks with various constraints. These methods consider each image independently and lack the exploration of cross-image relationships. We argue that cross-image relationships are vital to weakly supervised learning. We propose an end-to-end affinity module for explicitly modeling the relationship among a group of images. By means of this, one image can benefit from the complementary information from other images, and the supervision guidance can be shared in the group. The proposed method improves over the baseline by a large margin. Our method achieves a 64.1% mIOU score on the Pascal VOC 2012 validation set, and a 64.7% mIOU score on the test set, which is a new state-of-the-art using only image-level labels, demonstrating the effectiveness of the method."} {"_id": "317bc06798ad37993f79f5dca38e177d12a116f0", "title": "Where2Stand: A Human Position Recommendation System for Souvenir Photography", "text": "People often take photographs at tourist sites and these pictures usually have two main elements: a person in the foreground and scenery in the background. This type of \u201csouvenir photo\u201d is one of the most common photos taken by tourists. Although algorithms that aid a user-photographer in taking a well-composed picture of a scene exist [Ni et al. 2013], few studies have addressed the issue of properly positioning human subjects in photographs. In photography, common guidelines for composing portrait images exist. However, these rules usually do not consider the background scene. Therefore, in this article, we investigate human-scenery positional relationships and construct a photographic assistance system to optimize the position of human subjects in a given background scene, thereby assisting the user in capturing high-quality souvenir photos. We collect thousands of well-composed portrait photographs to learn human-scenery aesthetic composition rules. In addition, we define a set of negative rules to exclude undesirable compositions. Recommendation results are achieved by combining the learned positive rules with our proposed negative rules. We implement the proposed system on an Android smartphone. The system demonstrates its efficacy by producing well-composed souvenir photos."} {"_id": "032e660447156a045ad6cf50272bca46246f4645", "title": "Extreme Adaptation for Personalized Neural Machine Translation", "text": "Every person speaks or writes their own flavor of their native language, influenced by a number of factors: the content they tend to talk about, their gender, their social status, or their geographical origin. 
When attempting to perform Machine Translation (MT), these variations have a significant effect on how the system should perform translation, but this is not captured well by standard one-size-fits-all models. In this paper, we propose a simple and parameter-efficient adaptation technique that only requires adapting the bias of the output softmax to each particular user of the MT system, either directly or through a factored approximation. Experiments on TED talks in three languages demonstrate improvements in translation accuracy, and better reflection of speaker traits in the target text."} {"_id": "a0bf0c6300640a3c5757d1ca14418be178a33e99", "title": "Three-Level Bidirectional Converter for Fuel-Cell/Battery Hybrid Power System", "text": "A novel three-level (3L) bidirectional converter (BDC) is proposed in this paper. Compared with the traditional BDC, the inductor of the 3L BDC can be reduced significantly so that the dynamic response is greatly improved. Hence, the proposed converter is very suitable for fuel-cell/battery hybrid power systems. In addition, the voltage stress on the switch of the proposed converter is only half of the voltage on the high-voltage side, so it is also suitable for high-voltage applications. The operation principle and the implementation of the control circuit are presented in detail. This paper also proposes a novel bidirectional soft-start control strategy for the BDC. A 1-kW prototype converter is built to verify the theoretical analysis."} {"_id": "59363d255f07e89e81f727ab4c627e21da888fe5", "title": "Gamut Mapping to Preserve Spatial Luminance Variations", "text": "A spatial gamut mapping technique is proposed to overcome the shortcomings encountered with standard pointwise gamut mapping algorithms by preserving spatially local luminance variations in the original image. It does so by first processing the image through a standard pointwise gamut mapping algorithm. The difference between the original image luminance Y and the gamut-mapped image luminance Y\u2019 is calculated. A spatial filter is then applied to this difference signal, and added back to the gamut-mapped signal Y\u2019. The filtering operation can result in colors near the gamut boundary being placed outside the gamut; hence, a second gamut mapping step is required to move these pixels back into the gamut. Finally, the in-gamut pixels are processed through a color correction function for the output device, and rendered to that device. Psychophysical experiments validate the superior performance of the proposed algorithm, which reduces many of the artifacts arising from standard pointwise techniques."} {"_id": "d8e8bdd687dd588b71d92ff8f6018a1084f85437", "title": "Intelligent Device-to-Device Communication in the Internet of Things", "text": "Analogous to the way humans use the Internet, devices will be the main users in the Internet of Things (IoT) ecosystem. Therefore, device-to-device (D2D) communication is expected to be an intrinsic part of the IoT. Devices will communicate with each other autonomously without any centralized control and collaborate to gather, share, and forward information in a multihop manner. The ability to gather relevant information in real time is key to leveraging the value of the IoT, as such information will be transformed into intelligence, which will facilitate the creation of an intelligent environment. Ultimately, the quality of the information gathered depends on how smart the devices are. 
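On the bias-only adaptation described above: since the cross-entropy gradient with respect to the output logits is simply p - onehot(target), adapting a per-user bias vector takes a few lines of arithmetic. The sketch below uses a frozen random projection and toy sizes as stand-ins for a trained NMT model's output layer.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

V, d, n_users = 10, 4, 3
rng = np.random.default_rng(4)
W = rng.normal(size=(V, d))            # shared, frozen output projection
user_bias = np.zeros((n_users, V))     # the only per-user parameters

def user_distribution(hidden, user_id):
    """Output distribution with a speaker-specific bias added to the logits."""
    return softmax(W @ hidden + user_bias[user_id])

def adapt_bias(hidden, target, user_id, lr=0.5):
    """One SGD step on cross-entropy w.r.t. the user's bias only:
    d(loss)/d(logits) = p - onehot(target), and the bias gradient equals it."""
    grad = user_distribution(hidden, user_id)
    grad[target] -= 1.0
    user_bias[user_id] -= lr * grad

h = rng.normal(size=d)
for _ in range(20):
    adapt_bias(h, target=7, user_id=1)
print(user_distribution(h, 1).argmax())  # -> 7: drifts toward this user's token
```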
In addition, these communicating devices will operate with different networking standards, may experience intermittent connectivity with each other, and many of them will be resource-constrained. These characteristics open up several networking challenges that traditional routing protocols cannot solve. Consequently, devices will require intelligent routing protocols in order to achieve intelligent D2D communication. We present an overview of how intelligent D2D communication can be achieved in the IoT ecosystem. In particular, we focus on how state-of-the-art routing algorithms can achieve intelligent D2D communication in the IoT."} {"_id": "5bcc988bc50428074f123495a426c2637230a1f8", "title": "Social trust and social reciprocity based cooperative D2D communications", "text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of the interplay between social structure and mobile communications, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device communications. Specifically, as hand-held devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game theoretic framework to devise social-tie based cooperation strategies for device-to-device communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, and truthful. We evaluate the performance of the mechanism by using real social data traces. Numerical results show that the proposed mechanism can achieve up to a 122% performance gain over the case without D2D cooperation."} {"_id": "a250172e20563f689388ae6a5e4c0b572259fa58", "title": "Resource allocation for device-to-device communications underlaying LTE-advanced networks", "text": "The Long Term Evolution-Advanced (LTE-Advanced) networks are being developed to provide mobile broadband services for the fourth generation (4G) cellular wireless systems. Device-to-device (D2D) communications is a promising technique to provide wireless peer-to-peer services and enhance spectrum utilization in the LTE-Advanced networks. In D2D communications, the user equipments (UEs) are allowed to communicate directly with each other by reusing the cellular resources, rather than using uplink and downlink resources in the cellular mode when communicating via the base station. However, enabling D2D communications in a cellular network poses two major challenges. First, the interference caused to the cellular users by D2D devices could critically affect the performance of the cellular devices. Second, the minimum quality-of-service (QoS) requirements of D2D communications need to be guaranteed. In this article, we introduce a novel resource allocation scheme (i.e., joint resource block scheduling and power control) for D2D communications in LTE-Advanced networks to maximize the spectrum utilization while addressing the above challenges. First, an overview of LTE-Advanced networks, and architecture and signaling support for provisioning of D2D communications in these networks, are described. Furthermore, research issues and the current state-of-the-art of D2D communications are discussed. 
Then, a resource allocation scheme based on a column generation method is proposed for D2D communications. The objective is to maximize the spectrum utilization by finding the minimum transmission length in terms of time slots for D2D links while protecting the cellular users from harmful interference and guaranteeing the QoS of D2D links. The performance of this scheme is evaluated through simulations."} {"_id": "7fb0a03eee8369ca214c6ed5e6bbefcdaa11a153", "title": "THREE LAYERS APPROACH FOR NETWORK SCANNING DETECTION", "text": "Computer networks have become one of the most important assets of any organization. This importance is due to the connectivity benefits that networks can provide, such as computing power, data sharing and enhanced performance. However, using networks comes at a cost: there are threats and issues that need to be addressed, such as providing a sufficient level of security. One of the most challenging issues in network security is network scanning. Network scanning is considered to be the initial step in any attack process. Therefore, detecting network scanning helps to protect network resources, services, and data before the real attack happens. This paper proposes an approach that consists of three layers to detect sequential and random network scanning for both TCP and UDP protocols. The proposed Three Layers Approach aims to increase network scanning detection accuracy. The Three Layers Approach defines certain packets to be used as signs of network scanning. Before applying the approach in a network, there is a thresholds generation stage that aims to determine a descriptive set of thresholds. After that, the first layer of the approach aggregates sign packets into separate tables. Then the second layer of the approach summarizes these tables into new tables by counting the packets generated by each IP. Finally, the last layer makes a decision as to whether or not a network is being scanned, as sketched after this record."} {"_id": "ebaccd68ab660c7d534a2c0f1bf3d10d03c4dcc1", "title": "Study of induction heating power supply based on fuzzy controller", "text": "In order to satisfy the higher control performance requirements of induction heating supplies, a fuzzy logic control technique for the power control system of an induction heating power supply is investigated. This study presents the composition and design of the induction heating control system based on the fuzzy logic controller. In this paper, a complete simulation model of induction heating systems is obtained by using the Matlab/Simulink software. Simulation results show the effectiveness and superiority of the control system."} {"_id": "24f6497bb4cb6cfbb68492f07624ab5212bc39e3", "title": "Kawaii/Cute interactive media", "text": "Cuteness in interactive systems is a relatively new development yet has its roots in the aesthetics of many historical and cultural elements. Symbols of cuteness abound in nature, as in creatures of neotenous proportions, drawing in the care and concern of the parent and the protection of a guardian. We provide an in-depth look at the role of cuteness in interactive systems, beginning with a history. We particularly focus on the Japanese culture of Kawaii, which has made a large impact around the world, especially in entertainment, fashion, and animation. We then take the approach of defining cuteness in contemporary popular perception. User studies are presented that offer an in-depth understanding of the key perceptual elements identified as cute. 
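The three-layer pipeline described above aggregates sign packets, counts them per source IP, and compares the counts against thresholds produced beforehand. A compact sketch of that flow; the sign names and threshold values are hypothetical.

```python
from collections import defaultdict

# Hypothetical output of the thresholds generation stage.
THRESHOLDS = {"tcp_syn": 20, "udp_probe": 15}

def detect_scanners(sign_packets, thresholds=THRESHOLDS):
    """Layer 1 gathers sign packets, layer 2 counts them per source IP,
    layer 3 flags sources whose counts exceed any per-sign threshold."""
    counts = defaultdict(lambda: defaultdict(int))      # layer 2 tables
    for src_ip, sign in sign_packets:                   # layer 1 aggregation
        counts[src_ip][sign] += 1
    return {ip for ip, per_sign in counts.items()       # layer 3 decision
            if any(per_sign[s] > t for s, t in thresholds.items())}

packets = [("10.0.0.9", "tcp_syn")] * 25 + [("10.0.0.5", "udp_probe")] * 3
print(detect_scanners(packets))  # {'10.0.0.9'}
```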
This knowledge provides the possibility of creating a cute filter that can transform inputs and automatically create cuter outputs. This paper also provides an insight into the next generation of interactive systems that bring happiness and comfort to users of all ages and cultures through the soft power of cute."} {"_id": "a2729b6ca8d24bb806c168528eb81de950871446", "title": "A unified approach for mining outliers", "text": "This paper deals with finding outliers (exceptions) in large datasets. The identification of outliers can often lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. One contribution of this paper is to show how our proposed, intuitive notion of outliers can unify or generalize many of the existing notions of outliers provided by discordancy tests for standard statistical distributions. Thus, when mining large datasets containing many attributes, a unified approach can replace many statistical discordancy tests, regardless of any knowledge about the underlying distribution of the attributes. A second contribution of this paper is the development of an algorithm to find all outliers in a dataset. An important advantage of this algorithm is that its time complexity is linear with respect to the number of objects in the dataset. We include preliminary performance results."} {"_id": "5e6035535d6d258a29598faf409b57a71ec28f21", "title": "An efficient centroid type-reduction strategy for general type-2 fuzzy logic system", "text": ""} {"_id": "1e00b499af499ff00b97a737bc9053b34dcc352a", "title": "Automatic Ontology Matching via Upper Ontologies: A Systematic Evaluation", "text": "\u201cOntology matching\u201d is the process of finding correspondences between entities belonging to different ontologies. This paper describes a set of algorithms that exploit upper ontologies as semantic bridges in the ontology matching process and presents a systematic analysis of the relationships among features of matched ontologies (number of simple and composite concepts, stems, concepts at the top level, common English suffixes and prefixes, and ontology depth), matching algorithms, used upper ontologies, and experiment results. This analysis allowed us to state under which circumstances the exploitation of upper ontologies gives significant advantages with respect to traditional approaches that do not use them. We run experiments with SUMO-OWL (a restricted version of SUMO), OpenCyc, and DOLCE. The experiments demonstrate that when our \u201cstructural matching method via upper ontology\u201d uses an upper ontology large enough (OpenCyc, SUMO-OWL), the recall is significantly improved while preserving the precision obtained without upper ontologies. Instead, our \u201cnonstructural matching method\u201d via OpenCyc and SUMO-OWL improves the precision and maintains the recall. The \u201cmixed method\u201d that combines the results of structural alignment without using upper ontologies and structural alignment via upper ontologies improves the recall and maintains the F-measure independently of the used upper ontology."} {"_id": "59483664dfb38a7ce66e7dc279ac2d0d8456dbb6", "title": "A Wideband Slotted Bow-Tie Antenna With Reconfigurable CPW-to-Slotline Transition for Pattern Diversity", "text": "We propose a slotted bow-tie antenna with pattern reconfigurability. 
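The unified notion in the outlier-mining abstract above can be read as: an object is an outlier when at least a fraction p of the dataset lies farther than a distance D from it. The paper's algorithm achieves time linear in the number of objects; the deliberately naive quadratic check below is only meant to make that definition concrete.

```python
import numpy as np

def db_outliers(X, p=0.95, D=2.0):
    """Flag x as an outlier if at least a fraction p of all other points
    lies at distance > D from x. Naive O(n^2) illustration only."""
    n = len(X)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    far_fraction = (dists > D).sum(axis=1) / (n - 1)   # self-distance is 0
    return np.where(far_fraction >= p)[0]

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), [[6.0, 6.0]]])  # one planted outlier
print(db_outliers(X))  # -> [50]
```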
The antenna consists of a coplanar waveguide (CPW) input, a pair of reconfigurable CPW-to-slotline transitions, a pair of Vivaldi-shaped radiating tapered slots, and four PIN diodes for reconfigurability. With suitable arrangement of the bias network, the proposed antenna demonstrates reconfigurable radiation patterns in the frequency range from 3.5 to 6.5 GHz in three states: a broadside radiation with a fairly omnidirectional pattern and two end-fire radiations whose main beams are directed in exactly opposite directions. The proposed antenna is investigated comprehensively with the help of the radiation patterns in the two principal cuts and also the antenna gain responses versus frequency. The simulation and measurement results reveal fairly good agreement and hence sustain the reconfigurability of the proposed design."} {"_id": "422be962dc717f88f412b954420f394490a5c110", "title": "Analysis of Software Aging in a Web Server", "text": "Several recent studies have reported & examined the phenomenon that long-running software systems show an increasing failure rate and/or a progressive degradation of their performance. Causes of this phenomenon, which has been referred to as \"software aging\", are the accumulation of internal error conditions, and the depletion of operating system resources. A proactive technique called \"software rejuvenation\" has been proposed as a way to counteract software aging. It involves occasionally terminating the software application, cleaning its internal state and/or its environment, and then restarting it. Due to the costs incurred by software rejuvenation, an important question is when to schedule this action. While periodic rejuvenation at constant time intervals is straightforward to implement, it may not yield the best results. The rate at which software ages is usually not constant, but it depends on the time-varying system workload. Software rejuvenation should therefore be planned & initiated in the face of the actual system behavior. This requires the measurement, analysis, and prediction of system resource usage. In this paper, we study the development of resource usage in a web server while subjecting it to an artificial workload. We first collect data on several system resource usage & activity parameters. Non-parametric statistical methods are then applied toward detecting & estimating trends in the data sets. Finally, we fit time series models to the data collected. Unlike the models used previously in the research on software aging, these time series models allow for seasonal patterns, and we show how the exploitation of the seasonal variation can help in adequately predicting the future resource usage. Based on the models employed here, proactive management techniques like software rejuvenation triggered by actual measurements can be built."} {"_id": "5cf3659f1741e5962fdf183a874db9b5a6878f47", "title": "A Flexible Fabrication Approach Toward the Shape Engineering of Microscale Soft Pneumatic Actuators", "text": "Recent developments in soft robotics have inspired a variety of applications that require compliance for safety and softness for biomimetics. Given the prevalence and advantages of soft robots, researchers are turning their interest toward applications like the micromanipulation of biological tissue. However, progress has been thwarted by the difficulty of producing soft robots at the miniature scale. 
In this paper, we present a new kind of microscale soft pneumatic actuator (SPA) made through a streamlined and standardized fabrication procedure, with customizable bending modalities inspired by shape engineering. Preliminary mathematical models are given to interpret width-based shape engineering for customization and to compare with the bending angle and radius of curvature measured in the characterization experiments. The fabricated SPA was tested on the sciatic nerve of a rat in order to show its future potential in the biomedical field. Ultimately, this paper will contribute to the diversification of SPA fabrication and customization, as well as biomedical applications that demand smaller dimensions and higher precision."} {"_id": "771b9c74d58f4fb29ce3303a3d4e864c6707340d", "title": "Cognitive Wireless Powered Network: Spectrum Sharing Models and Throughput Maximization", "text": "The recent advance in radio-frequency (RF) wireless energy transfer (WET) has motivated the study of wireless powered communication networks (WPCNs), in which distributed wireless devices are powered via dedicated WET by the hybrid access-point (H-AP) in the downlink (DL) for uplink (UL) wireless information transmission (WIT). In this paper, by utilizing the cognitive radio (CR) technique, we study a new type of CR-enabled secondary WPCN, called cognitive WPCN, under spectrum sharing with the primary wireless communication system. In particular, we consider a cognitive WPCN, consisting of a single H-AP with constant power supply and distributed wirelessly powered users, that shares the same spectrum for its DL WET and UL WIT with an existing primary communication link, where the WPCN's WET/WIT and the primary link's WIT may interfere with each other. Under this new setup, we propose two coexisting models for spectrum sharing of the two systems, namely underlay- and overlay-based cognitive WPCNs, depending on the type of knowledge of the primary user transmission available to the cognitive WPCN. For each model, we maximize the sum-throughput of the cognitive WPCN by optimizing its transmission under different constraints applied to protect the primary user transmission. Analysis and simulation results are provided to compare the sum-throughput of the cognitive WPCN versus the achievable rate of the primary user under the two proposed coexisting models. It is shown that the overlay-based cognitive WPCN outperforms the underlay-based counterpart, thanks to its fully cooperative WET/WIT design with the primary WIT, while also requiring higher complexity for implementation."} {"_id": "7a02337958a489f4d7817f198b8e36e92ae1e155", "title": "Accounting Restatements and External Financing Choices", "text": "There is little research on how accounting information quality affects a firm\u2019s external financing choices. In this paper, we use the occurrence of accounting restatements as a proxy for the reduced credibility of accounting information and investigate how restatements affect a firm\u2019s external financing choices. We find that firms that obtain external financing after restatements rely more on debt financing, especially private debt financing, and less on equity financing. The increase in debt financing is more pronounced for firms with more severe information problems and less pronounced for firms with prompt CEO/CFO turnover and auditor dismissal. 
Our evidence indicates that accounting information quality affects capital providers\u2019 resource allocation and that debt holders help alleviate information problems after accounting restatements."} {"_id": "743675c5510ba05ddc24d4b5f9b07589dac6c006", "title": "Integrating Perception and Planning for Autonomous Navigation of Urban Vehicles", "text": "The paper addresses the problem of autonomous navigation of a car-like robot evolving in an urban environment. Such an environment exhibits a heterogeneous geometry and is cluttered with moving obstacles. Furthermore, in this context, motion safety is a critical issue. The proposed approach to the problem lies in the coupling of two crucial robotic capabilities, namely perception and planning. The main contributions of this work are the development and integration of these modules into one single application, considering explicitly the constraints related to the environment and the system."} {"_id": "0b9ef11912c9667cd9150b4295e0b69705ab7d61", "title": "The international personality item pool and the future of public-domain personality measures", "text": "Seven experts on personality measurement here discuss the viability of public-domain personality measures, focusing on the International Personality Item Pool (IPIP) as a prototype. Since its inception in 1996, the use of items and scales from the IPIP has increased dramatically. Items from the IPIP have been translated from English into more than 25 other languages. Currently over 80 publications using IPIP scales are listed at the IPIP Web site (http://ipip.ori.org), and the rate of IPIP-related publications has been increasing rapidly. The growing popularity of the IPIP can be attributed to five factors: (1) it is cost free; (2) its items can be obtained instantaneously via the Internet; (3) it includes over 2000 items, all easily available for inspection; (4) scoring keys for IPIP scales are provided; and (5) its items can be presented in any order, interspersed with other items, reworded, translated into other languages, and administered on the World Wide Web without asking permission of anyone. The unrestricted availability of the IPIP raises concerns about possible misuse by unqualified persons, and the freedom of researchers to use the IPIP in idiosyncratic ways raises the possibility of fragmentation rather than scientific unification in personality research."}
{"_id": "3971b8751d29266d38e1f12fc1935b1469d73af8", "title": "A CCPP algorithm based on the standard map for the mobile robot", "text": "This paper introduces a new integrated algorithm to achieve the complete coverage path planning (CCPP) task for a mobile robot in a given terrain containing obstacles. The algorithm combines the cellular decomposition approach and the chaotic Standard map to design the coverage procedure. The cellular decomposition approach decomposes the target region into several rectangular feasible sub-regions. Then the chaotic Standard map in the full mapping state produces the complete coverage trajectories inside the feasible sub-regions, and the connection trajectories between two adjacent feasible sub-regions. Compared with the general cellular decomposition method, the proposed integrated algorithm needs no designated start and goal points to link two adjacent sub-regions. The planned trajectories demonstrate good distribution characteristics with regard to completeness and evenness. No obstacle-avoidance method or boundary detection is needed in the coverage procedure."} {"_id": "7a5cc570c628d8afe267f8344cfc7762f1532dc9", "title": "Probabilistic slope stability analysis by finite elements", "text": "The paper investigates the probability of failure of a cohesive slope using both simple and more advanced probabilistic analysis tools. The influence of local averaging on the probability of failure of a test problem is thoroughly investigated. In the simple approach, classical slope stability analysis techniques are used, and the shear strength is treated as a single random variable. The advanced method, called the random finite element method (RFEM), uses elastoplasticity combined with random field theory. The RFEM method is shown to offer many advantages over traditional probabilistic slope stability techniques, because it enables slope failure to develop naturally by \u201cseeking out\u201d the most critical mechanism. Of particular importance in this work is the conclusion that simplified probabilistic analysis, in which spatial variability is ignored by assuming perfect correlation, can lead to unconservative estimates of the probability of failure. This contradicts the findings of other investigators using classical slope stability analysis tools."} {"_id": "f11fce8cd480ba13c4ef44cf61c19b243c5c0288", "title": "New low-voltage class AB/AB CMOS op amp with rail-to-rail input/output swing", "text": "A new low-voltage CMOS Class AB/AB fully differential opamp with rail-to-rail input/output swing and supply voltage lower than two VGS drops is presented. The scheme is based on combining floating-gate transistors and Class AB input and output stages. The op amp is characterized by low static power consumption and enhanced slew rate. Moreover, the proposed opamp does not suffer from typical reliability problems related to initial charge trapped in the floating-gate devices. Simulation and experimental results in 0.5-\u03bcm CMOS technology verify the scheme operating with \u00b10.9-V supplies and close to rail-to-rail input and output swing."} {"_id": "f1c47a061547bc1de9ac5e56a12ca173d32313af", "title": "Ultrafast photonic reinforcement learning based on laser chaos", "text": "Reinforcement learning involves decision making in dynamic and uncertain environments and constitutes an important element of artificial intelligence (AI). 
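The CCPP algorithm above drives coverage inside each rectangular sub-region with the chaotic Standard map in its full mapping state. A sketch of generating such waypoints follows; the gain K, initial condition, and sub-region geometry are illustrative choices.

```python
import math

def standard_map(theta0, p0, K=7.0, steps=2000):
    """Chirikov Standard map (chaotic 'full mapping' regime for large K):
        p'     = p + K*sin(theta)   (mod 2*pi)
        theta' = theta + p'         (mod 2*pi)"""
    theta, p = theta0, p0
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        yield theta, p

def to_waypoints(traj, x0, y0, width, height):
    """Scale (theta, p) samples onto a rectangular feasible sub-region."""
    s = 2 * math.pi
    return [(x0 + width * t / s, y0 + height * q / s) for t, q in traj]

# Illustrative 4 m x 3 m sub-region anchored at the origin.
pts = to_waypoints(standard_map(0.5, 0.3), 0.0, 0.0, 4.0, 3.0)
print(pts[:3])
```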
In this work, we experimentally demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration\u2013exploitation dilemma. To solve the MAB, a certain degree of randomness is required for exploration purposes. However, pseudorandom numbers generated using conventional electronic circuitry encounter severe limitations in terms of their data rate and the quality of randomness due to their algorithmic foundations. We generate laser chaos signals using a semiconductor laser sampled at a maximum rate of 100 GSample/s, and combine them with a simple decision-making principle called tug of war with a variable threshold, to ensure ultrafast, adaptive, and accurate decision making at a maximum adaptation speed of 1 GHz. We found that decision-making performance was maximized with an optimal sampling interval, and we highlight the exact coincidence between the negative autocorrelation inherent in laser chaos and decision-making performance. This study paves the way for a new realm of ultrafast photonics in the age of AI, where the ultrahigh bandwidth of light waves can provide new value."} {"_id": "0b68ad65eb35eeb54e858ea427f14985f211a28c", "title": "A generalized maximum entropy approach to bregman co-clustering and matrix approximation", "text": "Co-clustering is a powerful data mining technique with varied applications such as text clustering, microarray analysis and recommender systems. Recently, an information-theoretic co-clustering approach applicable to empirical joint probability distributions was proposed. In many situations, co-clustering of more general matrices is desired. In this paper, we present a substantially generalized co-clustering framework wherein any Bregman divergence can be used in the objective function, and various conditional expectation based constraints can be considered based on the statistics that need to be preserved. Analysis of the co-clustering problem leads to the minimum Bregman information principle, which generalizes the maximum entropy principle, and yields an elegant meta algorithm that is guaranteed to achieve local optimality. Our methodology yields new algorithms and also encompasses several previously known clustering and co-clustering algorithms based on alternate minimization."} {"_id": "737596df3ceee6e8db69dcaae64decbacda102e6", "title": "Adolescent coping style and behaviors: conceptualization and measurement.", "text": "The developmental tasks associated with adolescence pose a unique set of stressors and strains. Included in the normative tasks of adolescence are developing an identity, differentiating from the family while still staying connected, and fitting into a peer group. The adolescent's adaptation to these and other, often competing demands is achieved through the process of coping, which involves cognitive and behavioral strategies directed at eliminating or reducing demands, redefining demands so as to make them more manageable, increasing resources for dealing with demands, and/or managing the tension which is felt as a result of experiencing demands. In this paper, individual coping theory and family stress theory are reviewed to provide a theoretical foundation for assessing adolescent coping. In addition, the development and testing of an adolescent self-report coping inventory, Adolescent Coping Orientation for Problem Experiences (A-COPE) is presented. 
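The laser-chaos study above pairs a fast random-like signal with the tug-of-war principle and a variable threshold to choose bandit arms. The sketch below keeps that structure but substitutes a cheap surrogate signal for laser chaos and an assumed threshold-update rule, so it is a caricature of the principle rather than the paper's scheme.

```python
import numpy as np

def tug_of_war_bandit(signal, win_prob=(0.4, 0.6), decay=0.99, step=0.05):
    """Two-armed bandit decided by comparing an external chaotic-like signal
    against a variable threshold that drifts toward the better arm -- an
    assumed, simplified reading of the tug-of-war principle."""
    rng = np.random.default_rng(6)
    threshold, pulls = 0.0, [0, 0]
    for s in signal:                        # s assumed roughly in [-1, 1]
        arm = 0 if s > threshold else 1     # one comparison = one decision
        pulls[arm] += 1
        win = rng.random() < win_prob[arm]
        # pull the threshold toward the arm that is currently rewarding
        threshold = decay * threshold + (step if (arm == 0) ^ win else -step)
        threshold = max(-1.0, min(1.0, threshold))
    return pulls

rng = np.random.default_rng(7)
surrogate = np.sin(np.linspace(0, 400, 5000)) * rng.uniform(0.5, 1.0, 5000)
print(tug_of_war_bandit(surrogate))  # most pulls drift to the better arm (index 1)
```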
Gender differences in coping style are presented and discussed. These coping patterns were validated against criterion indices of adolescents' use of cigarettes, liquor, and marijuana using data from a longitudinal study of 505 families with adolescents. The findings are discussed in terms of coping theory and measurement and in terms of adolescent development and substance use."} {"_id": "7fa2f9b6f3894555aabe025885a29749cfa80f19", "title": "VisTA: Visual Terminology Alignment Tool for Factual Knowledge Aggregation", "text": "The alignment of terminologies can be considered a kind of \u201ctranslation\u201d between two or more terminologies that aims to enhance the communication among people from different domains or areas of expertise. In this paper we introduce the Visual Terminology Alignment tool (VisTA) that enables the exact alignment for RDF/SKOS-like terminologies, suitable for integrating knowledge bases, rather than information retrieval systems. The tool provides a simple and friendly web-based user interface for the alignment between two terminologies, while it visualizes the terminology hierarchies, enables the interactive alignment process, and presents the alignment result. The latter is a native RDF/SKOS graph that interconnects the two terminology graphs, supporting interoperability and extending the search capabilities over the integrated semantic graph, using the broader and exact match properties."} {"_id": "766c251bd7686dd707acd500e80d7184929035c6", "title": "Evaluating State-of-the-Art Object Detector on Challenging Traffic Light Data", "text": "Traffic light detection (TLD) is a vital part of both intelligent vehicles and driving assistance systems (DAS). Common to most TLDs is that they are evaluated on small, private datasets, making it hard to determine the exact performance of a given method. In this paper we apply the state-of-the-art, real-time object detection system You Only Look Once (YOLO) on the public LISA Traffic Light dataset available through the VIVA-challenge, which contains a high number of annotated traffic lights, captured in varying light and weather conditions. The YOLO object detector achieves an impressive AUC of 90.49% for daysequence1, which is an improvement of 50.32% compared to the latest ACF entry in the VIVAchallenge. Using the exact same training configuration as the ACF detector, the YOLO detector reaches an AUC of 58.3%, which is an increase of 18.13%."} {"_id": "86585bd7288f41a28eeda883a35be6442224110a", "title": "A Variational Observation Model of 3D Object for Probabilistic Semantic SLAM", "text": "We present a Bayesian object observation model for complete probabilistic semantic SLAM. Recent studies on object detection and feature extraction have become important for scene understanding and 3D mapping. However, the 3D shape of an object is too complex to formulate a probabilistic observation model; therefore, performing Bayesian inference of the object-oriented features as well as their pose has received less attention. Besides, when a robot equipped with an RGB mono camera observes only the projected single view of an object, a significant amount of the 3D shape information is lost. Due to these limitations, semantic SLAM and viewpoint-independent loop closure using volumetric 3D object shape is challenging. In order to enable the complete formulation of probabilistic semantic SLAM, we approximate the observation model of a 3D object with a tractable distribution.
We also estimate the variational likelihood from the 2D image of the object to exploit its observed single view. In order to evaluate the proposed method, we perform pose and feature estimation, and demonstrate that the automatic loop closure works seamlessly without an additional loop detector in various environments."} {"_id": "d83a48c8bb4324de2f4c701e786a40e264b3cfe1", "title": "The level and nature of autistic intelligence.", "text": "Autistics are presumed to be characterized by cognitive impairment, and their cognitive strengths (e.g., in Block Design performance) are frequently interpreted as low-level by-products of high-level deficits, not as direct manifestations of intelligence. Recent attempts to identify the neuroanatomical and neurofunctional signature of autism have been positioned on this universal, but untested, assumption. We therefore assessed a broad sample of 38 autistic children on the preeminent test of fluid intelligence, Raven's Progressive Matrices. Their scores were, on average, 30 percentile points, and in some cases more than 70 percentile points, higher than their scores on the Wechsler scales of intelligence. Typically developing control children showed no such discrepancy, and a similar contrast was observed when a sample of autistic adults was compared with a sample of nonautistic adults. We conclude that intelligence has been underestimated in autistics."} {"_id": "7c2cb6c31f7d2c99cb54c9c1b0fc6b1fc045780a", "title": "A theory of biological pattern formation", "text": "One of the elementary processes in morphogenesis is the formation of a spatial pattern of tissue structures, starting from almost homogeneous tissue. It will be shown that relatively simple molecular mechanisms based on auto- and cross catalysis can account for a primary pattern of morphogens to determine pattern formation of the tissue. The theory is based on short range activation, long range inhibition, and a distinction between activator and inhibitor concentrations on one hand, and the densities of their sources on the other. While source density is expected to change slowly, e.g. as an effect of cell differentiation, the concentration of activators and inhibitors can change rapidly to establish the primary pattern; this results from auto- and cross catalytic effects on the sources, spreading by diffusion or other mechanisms, and degradation. Employing an approximative equation, a criterion is derived for models, which lead to a striking pattern, starting from an even distribution of morphogens, and assuming a shallow source gradient. The polarity of the pattern depends on the direction of the source gradient, but can be rather independent of other features of source distribution. Models are proposed which explain size regulation (constant proportion of the parts of the pattern irrespective of total size). Depending on the choice of constants, aperiodic patterns, implying a one-to-one correlation between morphogen concentration and position in the tissue, or nearly periodic patterns can be obtained. The theory can be applied not only to multicellular tissues, but also to intracellular differentiation, e.g. of polar cells. The theory permits various molecular interpretations. One of the simplest models involves bimolecular activation and monomolecular inhibition. Source gradients may be substituted by, or added to, sink gradients, e.g. of degrading enzymes. Inhibitors can be substituted by substances required for, and depleted by activation.
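To make the short-range activation / long-range inhibition mechanism concrete, here is a minimal 1D simulation sketch of one common activator-inhibitor form in this spirit (our illustration; the specific equations and all constants are assumptions, not values taken from the paper):

```python
import numpy as np

# Minimal 1D activator-inhibitor sketch (Gierer-Meinhardt-like form).
# All constants are illustrative, not taken from the paper.
n, dt, steps = 100, 0.01, 10000
Da, Dh = 0.01, 0.4            # slow (short-range) activator, fast (long-range) inhibitor
mu_a, mu_h, rho = 1.0, 1.2, 1.0

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(n)   # small noise breaks the homogeneous state
h = np.ones(n)

def lap(u):
    # discrete 1D Laplacian with periodic boundaries
    return np.roll(u, 1) + np.roll(u, -1) - 2 * u

for _ in range(steps):
    a = a + dt * (rho * a**2 / h - mu_a * a + Da * lap(a))   # autocatalysis, decay, diffusion
    h = h + dt * (rho * a**2 - mu_h * h + Dh * lap(h))       # cross-catalysis by the activator

print("activator peaks at cells:", np.where(a > a.mean() + a.std())[0])
```

Small random perturbations of a near-homogeneous state are amplified into a stable spatial pattern of activator peaks, which is the qualitative behavior the theory predicts.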
Sources may be either synthesizing systems or particulate structures releasing activators and inhibitors. Calculations by computer are presented to exemplify the main features of the theory proposed. The theory is applied to quantitative data on hydra \u2014 a suitable one-dimensional model for pattern formation \u2014 and is shown to account for activation and inhibition of secondary head formation."} {"_id": "6a7384bf0d319d19bbbbf578ac7052bb72ef940c", "title": "Topological Value Iteration Algorithm for Markov Decision Processes", "text": "Value Iteration is an inefficient algorithm for Markov decision processes (MDPs) because it puts the majority of its effort into backing up the entire state space, which turns out to be unnecessary in many cases. In order to overcome this problem, many approaches have been proposed. Among them, LAO*, LRTDP and HDP are state-of-the-art ones. All of these use reachability analysis and heuristics to avoid some unnecessary backups. However, none of these approaches fully exploit the graphical features of the MDPs or use these features to yield the best backup sequence of the state space. We introduce an algorithm named Topological Value Iteration (TVI) that can circumvent the problem of unnecessary backups by detecting the structure of MDPs and backing up states based on topological sequences. We prove that the backup sequence TVI applies is optimal. Our experimental results show that TVI outperforms VI, LAO*, LRTDP and HDP on our benchmark MDPs."} {"_id": "0aba945c25c1f71413746f550aa81587db37c42a", "title": "On the segmentation of 3D LIDAR point clouds", "text": "This paper presents a set of segmentation methods for various types of 3D point clouds. Segmentation of dense 3D data (e.g. Riegl scans) is optimised via a simple yet efficient voxelisation of the space. Prior ground extraction is empirically shown to significantly improve segmentation performance. Segmentation of sparse 3D data (e.g. Velodyne scans) is addressed using ground models of non-constant resolution either providing a continuous probabilistic surface or a terrain mesh built from the structure of a range image, both representations providing close to real-time performance. All the algorithms are tested on several hand labeled data sets using two novel metrics for segmentation evaluation."} {"_id": "13dd25c5e7df2b23ec9a168a233598702c2afc97", "title": "Efficient Graph-Based Image Segmentation", "text": "This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice.
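To illustrate, a compact sketch of this kind of greedy, predicate-driven merging (our reading of the approach; the graph, the weights, and the constant k below are illustrative):

```python
# Sketch of greedy graph-based segmentation: process edges by increasing
# weight and merge two components when the edge weight is small relative
# to the internal variability of both components (the Int(C) + k/|C| term).
def segment(num_nodes, edges, k=1.0):
    # edges: list of (weight, u, v) tuples
    parent = list(range(num_nodes))
    size = [1] * num_nodes
    internal = [0.0] * num_nodes   # largest edge weight inside each component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        # merge when w is no larger than both components' internal slack
        if w <= min(internal[ru] + k / size[ru], internal[rv] + k / size[rv]):
            parent[rv] = ru
            size[ru] += size[rv]
            internal[ru] = max(internal[ru], internal[rv], w)
    return [find(i) for i in range(num_nodes)]

# Tiny example: two tight pairs joined by one heavy edge stay separate.
print(segment(4, [(0.1, 0, 1), (0.1, 2, 3), (5.0, 1, 2)], k=0.5))  # [0, 0, 2, 2]
```

Sorting the edges once and merging with union-find is what keeps the running time nearly linear in the number of edges.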
An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions."} {"_id": "30c3e410f689516983efcd780b9bea02531c387d", "title": "Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes", "text": "We present a 3-D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin-image representation. The spin-image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin-images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes. This research was performed at Carnegie Mellon University and was supported by the US Department of Energy under contract DE-AC21-92MC29104."} {"_id": "5f0d86c9c5b7d37b4408843aa95119bf7771533a", "title": "A Method for Registration of 3-D Shapes", "text": "Tianhe Yang Summary of \"A Method for Registration of 3-D Shapes\""} {"_id": "befad8e606dcf37e5e0883367d2d7d6fac81513c", "title": "Underwater Vehicle Obstacle Avoidance and Path Planning Using a MultiBeam Forward Looking Sonar", "text": "This paper describes a new framework for segmentation of sonar images, tracking of underwater objects and motion estimation. This framework is applied to the design of an obstacle avoidance and path planning system for underwater vehicles based on a multi-beam forward looking sonar sensor. The real-time data flow (acoustic images) at the input of the system is first segmented and relevant features are extracted. We also take advantage of the real-time data stream to track the obstacles in subsequent frames to obtain their dynamic characteristics. This allows us to optimize the preprocessing phases by segmenting only the relevant part of the images. Once the static (size and shape) as well as dynamic characteristics (velocity, acceleration, ...) of the obstacles have been computed, we create a representation of the vehicle\u2019s workspace based on these features. This representation uses constructive solid geometry (CSG) to create a convex set of obstacles defining the workspace. The tracking also takes into account obstacles which are no longer in the field of view of the sonar in the path planning phase. A well-proven nonlinear search (sequential quadratic programming) is then employed, where obstacles are expressed as constraints in the search space. This approach is less affected by local minima than classical methods using potential fields. The proposed system is not only capable of obstacle avoidance but also of path planning in complex environments which include fast moving obstacles. Results obtained on real sonar data are shown and discussed. Possible applications to sonar servoing and real-time motion estimation are also discussed."} {"_id": "60036886e7a44d2c254fb50f8b8aa480d6659ee0", "title": "Finding the shortest paths by node combination", "text": "By repeatedly combining the source node's nearest neighbor, we propose a node combination (NC) method to implement Dijkstra's algorithm.
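As a rough sketch of the idea (ours, not the authors' code; absent edges are taken as infinite weight), the three iterative steps described just below can be written as:

```python
import math

# Sketch of the node-combination (NC) view of Dijkstra on a weight matrix:
# repeatedly pick the combined source's nearest neighbor, record its
# distance, and fold it into the source by relaxing through it.
def nc_shortest_paths(W, s):
    n = len(W)
    dist = [math.inf] * n
    dist[s] = 0.0
    row = W[s][:]                  # current distances from the merged source
    merged = {s}
    for _ in range(n - 1):
        # step 1: nearest neighbor of the (combined) source node
        k = min((j for j in range(n) if j not in merged), key=lambda j: row[j])
        if math.isinf(row[k]):
            break                  # remaining nodes are unreachable
        dist[k] = row[k]
        merged.add(k)              # step 2: combine v_k with the source
        # step 3: modify weights on edges that connect to v_k
        for j in range(n):
            if j not in merged:
                row[j] = min(row[j], dist[k] + W[k][j])
    return dist

INF = math.inf
W = [[0, 2, 9, INF],
     [2, 0, 4, INF],
     [9, 4, 0, 1],
     [INF, INF, 1, 0]]
print(nc_shortest_paths(W, 0))    # [0, 2, 6, 7]
```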
The NC algorithm finds the shortest paths with three simple iterative steps: find the nearest neighbor of the source node, combine that node with the source node, and modify the weights on edges that connect to the nearest neighbor. The NC algorithm is more comprehensible and convenient for programming as there is no need to maintain a set with the nodes' distances. Experimental evaluations on various networks reveal that the NC algorithm is as efficient as Dijkstra's algorithm. As the whole process of the NC algorithm can be implemented with vectors, we also show how to find the shortest paths on a weight matrix. Let G = (V, E, W) be a network, where V = {v_i | 1 \u2264 i \u2264 N} is the set of nodes, E = {e_ij | there is a link from v_i to v_j} is the set of edges, and W = {w_ij | 1 \u2264 i, j \u2264 N} is the weight matrix for E. Given two nodes v_s, v_t of G, the shortest path problem can be defined as how to find a path with the minimum sum of the weights on the edges in a v_s,v_t-path. Generally, v_s and v_t are called the source node and the sink node, respectively. The shortest path problem is one of the most fundamental network optimization problems with widespread applications [1\u20134]. Among the various shortest path algorithms developed [5\u201312], Dijkstra's algorithm is probably the most well-known. It maintains a set S of solved nodes, comprising those nodes whose final shortest path distance from the source v_s has been determined, and labels d(i), storing the upper bound of the shortest path distance from v_s to v_i. The algorithm repeatedly selects the node v_k \u2208 V\u2216S with the minimum d(i), adds v_k to S, and updates d(i) for nodes that are incident to v_k: Step 1. Select a node v_k from Q such that d(v_k) = min_{v_j \u2208 Q} d(v_j); if d(v_k) = \u221e, stop; otherwise go to Step 2. Step 2. Move v_k from Q to S. Step 3. For every v_j \u2208 Q, update d(v_j) = min{d(v_j), d(v_k) + w_kj}. Go to Step 1. In practice, Dijkstra's algorithm relies heavily on the strategies used to select the next minimum labeled \u2026"} {"_id": "0af7d6ed75ec9436426edfb75c0aeb774df5607a", "title": "User friendly SLAM initialization", "text": "The development of new Simultaneous Localization and Mapping (SLAM) techniques is quickly advancing in research communities and rapidly transitioning into commercial products. Creating accurate and high-quality SLAM maps relies on a robust initialization process. However, the robustness and usability of SLAM initialization for end-users have often been disregarded. This paper presents and evaluates a novel tracking system for 6DOF pose tracking between a single keyframe and the current camera frame, without any prior scene knowledge. Our system is particularly suitable for SLAM initialization, since it allows 6DOF pose tracking in the intermediate frames before a wide-enough baseline between two keyframes has formed. We investigate how our tracking system can be used to interactively guide users in performing an optimal motion for SLAM initialization. However, our findings from a pilot study indicate that the need for such motion can be completely hidden from the user and outsourced to our tracking system. Results from a second user study show that letting our tracking system create a SLAM map as soon as possible is a viable and usable solution.
Our work provides important insight for SLAM systems, showing how our novel tracking system can be integrated with a user interface to support fast, robust and user-friendly SLAM initialization."} {"_id": "90520ec673a792634f1c7765fb93372c288f4816", "title": "Online Dictionary Learning on Symmetric Positive Definite Manifolds with Vision Applications", "text": "Symmetric Positive Definite (SPD) matrices in the form of region covariances are considered rich descriptors for images and videos. Recent studies suggest that exploiting the Riemannian geometry of the SPD manifolds could lead to improved performances for vision applications. For tasks involving processing large-scale and dynamic data in computer vision, the underlying model is required to progressively and efficiently adapt itself to the new and unseen observations. Motivated by these requirements, this paper studies the problem of online dictionary learning on the SPD manifolds. We make use of the Stein divergence to recast the problem of online dictionary learning on the manifolds as a problem in Reproducing Kernel Hilbert Spaces, for which we develop efficient algorithms by taking into account the geometric structure of the SPD manifolds. To the best of our knowledge, our work is the first study that provides a solution for online dictionary learning on the SPD manifolds. Empirical results on both large-scale image classification task and dynamic video processing tasks validate the superior performance of our approach as compared to several state-of-the-art algorithms."} {"_id": "d05507aec2e217a1f3ba35153153a64e135fb42c", "title": "Web Content Analysis: Expanding the Paradigm", "text": "Are established methods of content analysis (CA) adequate to analyze web content, or should new methods be devised to address new technological developments? This chapter addresses this question by contrasting narrow and broad interpretations of the concept of web content analysis. The utility of a broad interpretation that subsumes the narrow one is then illustrated with reference to research on weblogs (blogs), a popular web format in which features of HTML documents and interactive computer-mediated communication converge. The chapter concludes by proposing an expanded Web Content Analysis (WebCA) paradigm in which insights from paradigms such as discourse analysis and social network analysis are operationalized and implemented within a general content analytic framework."} {"_id": "35c473bae9d146072625cc3d452c8f6b84c8cc47", "title": "ZoomNet: Deep Aggregation Learning for High-Performance Small Pedestrian Detection", "text": "It remains very challenging for a single deep model to detect pedestrians of different sizes appearing in an image. One typical remedy for small pedestrian detection is to upsample the input and pass it to the network multiple times. Unfortunately, this strategy not only exponentially increases the computational cost but is also likely to impair the model's effectiveness. In this work, we present a deep architecture, referred to as ZoomNet, which performs small pedestrian detection by deep aggregation learning without up-sampling the input. ZoomNet learns and aggregates deep feature representations at multiple levels and retains the spatial information of the pedestrian from different scales. ZoomNet also learns to cultivate the feature representations from the classification task to the detection task and obtains further performance improvements.
Extensive experimental results demonstrate the state-of-the-art performance of ZoomNet. The source code of this work will be made publicly available to facilitate further studies on this problem."} {"_id": "e28935d4570b8e3c67b16080fc533bceffff4548", "title": "On Oblique Random Forests", "text": "In his original paper on random forests, Breiman proposed two different decision tree ensembles: one generated from \u201corthogonal\u201d trees with thresholds on individual features in every split, and one from \u201coblique\u201d trees separating the feature space by randomly oriented hyperplanes. In spite of a rising interest in the random forest framework, however, ensembles built from orthogonal trees (RF) have gained most, if not all, attention so far. In the present work we propose to employ \u201coblique\u201d random forests (oRF) built from multivariate trees which explicitly learn optimal split directions at internal nodes using linear discriminative models, rather than using random coefficients as in the original oRF. This oRF outperforms RF, as well as other classifiers, on nearly all data sets but those with discrete factorial features. Learned node models perform distinctly better than random splits. An oRF feature importance score proves to be preferable to standard RF feature importance scores such as Gini or permutation importance. The topology of the oRF decision space appears to be smoother and better adapted to the data, resulting in improved generalization performance. Overall, the oRF proposed here may be preferred over standard RF on most learning tasks involving numerical and spectral data."} {"_id": "524dd11ce1249bae235bf06de89621c59c18286e", "title": "The ATILF-LLF System for Parseme Shared Task: a Transition-based Verbal Multiword Expression Tagger", "text": "We describe the ATILF-LLF system built for the MWE 2017 Shared Task on automatic identification of verbal multiword expressions. We participated in the closed track only, for all the 18 available languages. Our system is a robust greedy transition-based system, in which MWEs are identified through a MERGE transition. The system was meant to accommodate the variety of linguistic resources provided for each language, in terms of accompanying morphological and syntactic information. Using per-MWE F-score, the system was ranked first for all but two languages (Hungarian and Romanian)."} {"_id": "449b47c55dac9cae588086dde9249caa230e01b1", "title": "Indistinguishability of Random Systems", "text": "An (X ,Y)-random system takes inputs X1, X2, . . . \u2208 X and generates, for each new input Xi, an output Yi \u2208 Y, depending probabilistically on X1, . . . , Xi and Y1, . . . , Yi\u22121. Many cryptographic systems like block ciphers, MAC-schemes, pseudo-random functions, etc., can be modeled as random systems, where in fact Yi often depends only on Xi, i.e., the system is stateless. The security proof of such a system (e.g. a block cipher) amounts to showing that it is indistinguishable from a certain perfect system (e.g. a random permutation). We propose a general framework for proving the indistinguishability of two random systems, based on the concept of the equivalence of two systems, conditioned on certain events. This abstraction demonstrates the common denominator among many security proofs in the literature, allows one to unify, simplify, generalize, and in some cases strengthen them, and opens the door to proving new indistinguishability results.
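For orientation, the standard way to quantify indistinguishability (our summary in generic notation, not necessarily the paper's exact definitions) is via a distinguisher's advantage after q queries:

```latex
% Distinguishing advantage of an adversary D after q queries (generic form)
\Delta_q(\mathbf{F},\mathbf{G}) \;=\; \max_{D}\,
  \bigl|\, \Pr[D^{\mathbf{F}} = 1] \;-\; \Pr[D^{\mathbf{G}} = 1] \,\bigr|
```

A security proof then bounds this quantity for, e.g., \mathbf{F} a block cipher and \mathbf{G} a uniformly random permutation.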
We also propose the previously implicit concept of quasi-randomness and give an efficient construction of a quasi-random function which can be used as a building block in cryptographic systems based on pseudorandom functions."} {"_id": "195c63fd6b6c9e7f1b5f6920da5a8de4de896f4a", "title": "CEC : Research in visualization techniques for field construction", "text": "Field construction can be planned, monitored, and controlled at two distinct levels: 1) the activity or schedule level; and 2) the operation or process level. Graphical 3D visualization can serve as an effective communication method at both levels. Several research efforts in visualizing construction are rooted in scheduling. They typically involve linking activity-based construction schedules and 3D CAD models of facilities to describe discretely-evolving construction \u201cproduct\u201d visualizations (often referred to as 4D CAD). The focus is on communicating what component(s) are built where and when, with the intention of studying the optimal activity sequence, spatial, and temporal interferences, etc. The construction processes or operations actually involved in building the components are usually implied. A second approach in visualizing construction is rooted in discrete-event simulation that, in addition to visualizing evolving construction products, also concerns the visualization of the operations and processes that are performed in building them. In addition to what is built where and when, the approach communicates who builds it and how by depicting the interaction between involved machines, resources, and materials. This paper introduces the two approaches, and describes the differences in concept, form, and content between activity level and operations level construction visualization. An example of a structural steel framing operation is presented to elucidate the comparison. This work was originally published in the proceedings of the 2002 IEEE Winter Simulation Conference (Kamat and Martinez 2002). This paper expands on the original work by describing recent advances in both activity and operations level construction visualization."} {"_id": "b3e3fc76e4f0a73de2562d88a64d864d0236ba26", "title": "Brain\u2014Computer Interface", "text": "Human-computer interfaces (HCIs) have become ubiquitous. Interfaces such as keyboards and mice are used daily while interacting with computing devices (Ebrahimi et al., 2003). There is a developing need, however, for HCIs that can be used in situations where these typical interfaces are not viable. Direct brain-computer interfacing (BCI) is a developing field that has been adding this new dimension of functionality to HCI. BCI has created a novel communication channel, especially for those users who are unable to generate the necessary muscular movements to use typical HCI devices."} {"_id": "dca9e15cde7665415b88486ae182c77ce8806b26", "title": "Human Performance Issues and User Interface Design for Teleoperated Robots", "text": "In the future, it will become more common for humans to team up with robotic systems to perform tasks that humans cannot realistically accomplish alone. Even for autonomous and semiautonomous systems, teleoperation will be an important default mode. However, teleoperation can be a challenging task because the operator is remotely located. As a result, the operator's situation awareness of the remote environment can be compromised and the mission effectiveness can suffer.
This paper presents a detailed examination of more than 150 papers covering human performance issues and suggested mitigation solutions. The paper summarizes the performance decrements caused by video image bandwidth, time lags, frame rates, lack of proprioception, frame of reference, two-dimensional views, attention switches, and motion effects. Suggested solutions and their limitations include stereoscopic displays, synthetic overlay, multimodal interfaces, and various predictive and decision support systems."} {"_id": "ada5876e216130cdd7ad6e44539849049dd2de39", "title": "Study on advantages and disadvantages of Cloud Computing \u2013 the advantages of Telemetry Applications in the Cloud", "text": "As companies of all shapes and sizes begin to adapt to cloud computing, this new technology is evolving like never before. Industry experts believe that this trend will only continue to grow and develop even further in the coming few years. While Cloud computing is undoubtedly beneficial for mid-size to large companies, it is not without its downsides, especially for smaller businesses. In this paper we present a list of advantages and disadvantages of Cloud computing technology, with a view to helping enterprises fully understand and adopt the concept of Cloud computing. Also, in the last chapter we present a cloud application for telemetry with a focus on monitoring hydro-energy, in order to demonstrate the advantages that cloud technology can have for this domain. We consider that the way to make cloud vastly benefit all types of businesses is to know its ups and downs very well and adapt to them accordingly. Key-Words: Cloud Computing, Grid Computing, telemetry, architecture, advantages, disadvantages."} {"_id": "396514fb219879a4a18762cddfae2a6a607f439f", "title": "Finding a Needle in Haystack: Facebook's Photo Storage", "text": "Facebook needed an architecture that could store, efficiently in memory, the metadata used to find photos on storage machines. They realized that storing a single photo in a single file would prevent them from achieving this goal; thus, they chose to store multiple photos in a single file, reducing the metadata overhead and making it feasible to store the metadata in memory. They ultimately came up with an architecture that consists of a Haystack Store (the persistent storage), Haystack Directory (used to locate photos and manage storage), and Haystack Cache (used to provide quick access to photos not found in the CDN but generating a large number of requests from users). Their architecture allowed them to choose whether to use a CDN or not; in either case, the Cache was used to address the \u201clong tail\u201d problem, where less popular photos still generate requests, by caching photos requested directly by users as well as photos fetched from write-enabled machines. Haystack addressed the risk of failures wiping out in-memory data structures by storing these structures in index files. These index files are updated asynchronously with writes, preventing a performance decrease. To efficiently handle modifications, Haystack appends the modified photo to the end of the file and records that the old version of the file should be deleted.
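A toy sketch of this append-plus-in-memory-index design (our illustration, not Facebook's actual code; all names are hypothetical):

```python
import os

# Toy haystack-style store: one big append-only file plus an in-memory
# index mapping photo id -> (offset, size). Deletes and overwrites only
# touch the index; stale needles are reclaimed later by compaction.
class TinyHaystack:
    def __init__(self, path):
        self.index = {}                     # photo_id -> (offset, size)
        self.f = open(path, "ab+")          # append-only data file

    def write(self, photo_id, data):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        self.f.write(data)
        self.f.flush()
        self.index[photo_id] = (offset, len(data))   # newest version wins

    def read(self, photo_id):
        offset, size = self.index[photo_id]
        self.f.seek(offset)
        return self.f.read(size)

    def delete(self, photo_id):
        self.index.pop(photo_id, None)      # a real system also logs a tombstone

store = TinyHaystack("needles.dat")
store.write("p1", b"old bytes")
store.write("p1", b"new bytes")             # modification = append a new needle
print(store.read("p1"))                     # b'new bytes'
```

Overwrites simply append a new needle and repoint the index, which is why the stale versions described next must eventually be reclaimed.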
Old versions of modified photos, along with any photos marked deleted through delete requests, are removed by an operation called compaction, which copies only photos that are not marked deleted (and only the newest version of each modified photo) into a new file on disk. This way, deletes and modifications are handled asynchronously with writes, which keeps them efficient."} {"_id": "39373d7c6d85f6189dd798e3c419d9aac3ad04db", "title": "Similarity Preserving Snippet-Based Visualization of Web Search Results", "text": "Internet users are very familiar with the results of a search query displayed as a ranked list of snippets. Each textual snippet shows a content summary of the referred document (or webpage) and a link to it. This display has many advantages: for example, it affords easy navigation and is straightforward to interpret. Nonetheless, any user of search engines has likely experienced some disappointment with this metaphor. Indeed, it has limitations in particular situations, as it fails to provide an overview of the document collection retrieved. Moreover, depending on the nature of the query (for example, it may be too general, ambiguous, or ill expressed), the desired information may be poorly ranked, or results may contemplate varied topics. Several search tasks would be easier if users were shown an overview of the returned documents, organized so as to reflect how related they are, content-wise. We propose a visualization technique to display the results of web queries aimed at overcoming such limitations. It combines the neighborhood preservation capability of multidimensional projections with the familiar snippet-based representation by employing a multidimensional projection to derive two-dimensional layouts of the query search results that preserve text similarity relations, or neighborhoods. Similarity is computed by applying the cosine similarity over a \u201cbag-of-words\u201d vector representation of the collection built from the snippets. If the snippets are displayed directly according to the derived layout, they will overlap considerably, producing a poor visualization. We overcome this problem by defining an energy functional that considers both the overlapping among snippets and the preservation of the neighborhood structure as given in the projected layout. Minimizing this energy functional provides a neighborhood preserving two-dimensional arrangement of the textual snippets with minimum overlap. The resulting visualization conveys both a global view of the query results and visual groupings that reflect related results, as illustrated in several examples."} {"_id": "bae7025af534d5f8b75de243b03311af4645bc48", "title": "The Tale of e-Government: A Review of the Stories that Have Been Told So Far and What is Yet to Come", "text": "Since its first appearance, the concept of eGovernment has evolved into a recognized means that has helped the public sector to increase its efficiency and effectiveness. A lot of research has therefore been done in this area to elaborate on the different aspects encompassing this concept. However, when looking at the existing e-Government literature, research mostly focuses on one specific aspect of e-Government and there are few generic publications that provide an overview of the diversity of this interdisciplinary research field over a longer-term period. This study analyzes the abstracts of eight e-Government journals from 2000 to 2016 by means of a quantitative text mining analysis, backed by a qualitative Delphi approach.
The article concludes with a discussion on the findings and implications as well as directions for future research."} {"_id": "ac3a287c972b9fda317e918c864bb3b738ed4566", "title": "Public health implications of altered puberty timing.", "text": "Changes in puberty timing have implications for the treatment of individual children, for the risk of later adult disease, and for chemical testing and risk assessment for the population. Children with early puberty are at risk for accelerated skeletal maturation and short adult height, early sexual debut, potential sexual abuse, and psychosocial difficulties. Altered puberty timing is also of concern for the development of reproductive tract cancers later in life. For example, an early age of menarche is a risk factor for breast cancer. A low age at male puberty is associated with an increased risk for testicular cancer according to several, but not all, epidemiologic studies. Girls and, possibly, boys who exhibit premature adrenarche are at a higher risk for developing features of metabolic syndrome, including obesity, type 2 diabetes, and cardiovascular disease later in adulthood. Altered timing of puberty also has implications for behavioral disorders. For example, an early maturation is associated with a greater incidence of conduct and behavior disorders during adolescence. Finally, altered puberty timing is considered an adverse effect in reproductive toxicity risk assessment for chemicals. Recent US legislation has mandated improved chemical testing approaches for protecting children's health and screening for endocrine-disrupting agents, which has led to changes in the US Environmental Protection Agency's risk assessment and toxicity testing guidelines to include puberty-related assessments and to the validation of pubertal male and female rat assays for endocrine screening."} {"_id": "242a3f4c68e6153debf889560390cd4c75900924", "title": "Effect of written emotional expression on immune function in patients with human immunodeficiency virus infection: a randomized trial.", "text": "OBJECTIVES\nTo determine whether writing about emotional topics compared with writing about neutral topics could affect CD4+ lymphocyte count and human immunodeficiency virus (HIV) viral load among HIV-infected patients.\n\n\nMETHODS\nThirty-seven HIV-infected patients were randomly allocated to 2 writing conditions focusing on emotional or control topics. Participants wrote for 4 days, 30 minutes per day. The CD4+ lymphocyte count and HIV viral load were measured at baseline and at 2 weeks, 3 months, and 6 months after writing.\n\n\nRESULTS\nThe emotional writing participants rated their essays as more personal, valuable, and emotional than those in the control condition. Relative to the drop in HIV viral load, CD4+ lymphocyte counts increased after the intervention for participants in the emotional writing condition compared with control writing participants.\n\n\nCONCLUSIONS\nThe results are consistent with those of previous studies using emotional writing in other patient groups. Based on the self-reports of the value of writing and the preliminary laboratory findings, the results suggest that emotional writing may provide benefit for patients with HIV infection."} {"_id": "9457b48cf5932d9528a32f36e23ab003b54e17ac", "title": "A Quantitative Comparison of Semantic Web Page Segmentation Approaches", "text": "This paper explores the effectiveness of different semantic web page segmentation algorithms on modern websites.
We compare three known algorithms, each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. With our testing framework we have compared the performance of the four algorithms on a large benchmark we have constructed. We have examined each algorithm for a total of eight different configurations (varying the dataset, the evaluation metric, and the type of the input HTML documents). We found that all algorithms performed better on random pages on average than on popular pages, and results are better when running the algorithms on the HTML obtained from the DOM rather than on the plain HTML. Overall there is much room for improvement as we find the best average F-score to be 0.49, indicating that for modern websites currently available algorithms are not yet of practical use."} {"_id": "108520cc4be3b7558fc22b8287999f921f01b34d", "title": "Smith-Petersen osteotomy in thoracolumbar deformity surgery.", "text": "OBJECTIVE\nTo describe the technique and indications of a Smith-Petersen osteotomy in spinal deformity surgery.\n\n\nMETHODS\nPertinent literature was reviewed to describe the indications and reported complications of this corrective technique.\n\n\nRESULTS\nThe operative nuances of the technique are described.\n\n\nCONCLUSION\nA Smith-Petersen osteotomy is a safe and effective surgical technique to obtain correction of spinal deformity in both the sagittal and coronal planes."} {"_id": "36445111c9f9eb6763fedab5294aca792519f925", "title": "Ensembles for unsupervised outlier detection: challenges and research questions a position paper", "text": "Ensembles for unsupervised outlier detection is an emerging topic that has been neglected for a surprisingly long time (although there are reasons why this is more difficult than supervised ensembles or even clustering ensembles). Aggarwal recently discussed algorithmic patterns of outlier detection ensembles, identified traces of the idea in the literature, and remarked on potential as well as unlikely avenues for future transfer of concepts from supervised ensembles. Complementary to his points, here we focus on the core ingredients for building an outlier ensemble, discuss the first steps taken in the literature, and identify challenges for future research."} {"_id": "3fc840191f8358def2aa745cca1c0425cf2c5938", "title": "Efficient SIMD code generation for runtime alignment and length conversion", "text": "When generating code for today's multimedia extensions, one of the major challenges is to deal with memory alignment issues. While hand programming still yields the best-performing SIMD code, it is both time consuming and error prone. Compiler technology has greatly improved, including techniques that simdize loops with misaligned accesses by automatically rearranging misaligned memory streams in registers. Current techniques are applicable to runtime alignments, but they aggressively reduce the alignment overhead only when all alignments are known at compile time. This paper presents two major enhancements to the state of the art, improving both performance and coverage. First, we propose a novel technique to simdize loops with runtime alignment nearly as efficiently as those with compile-time misalignment. Runtime alignment is pervasive in real applications because it is either part of the algorithms, or it is an artifact of the compiler's inability to extract accurate alignment information from complex applications.
Second, we incorporate length conversion operations, e.g., conversions between data of different sizes, into the alignment handling framework. Length conversions are pervasive in multimedia applications where mixed integer types are often used. Supporting length conversion can greatly improve the coverage of simdizable loops. Experimental results indicate that our runtime alignment technique achieves a 19% to 32% speedup over prior art for a benchmark stressing the impact of misaligned data. We also demonstrate speedup factors of up to 8.11 for real benchmarks over sequential execution."} {"_id": "e7830a70f9170eccc088b5e70523b4aa1cc03b6a", "title": "Turning the Waiting Room into a Classroom: Weekly Classes Using a Vegan or a Portion-Controlled Eating Plan Improve Diabetes Control in a Randomized Translational Study.", "text": "BACKGROUND\nIn research settings, plant-based (vegan) eating plans improve diabetes management, typically reducing weight, glycemia, and low-density lipoprotein (LDL) cholesterol concentrations to a greater extent than has been shown with portion-controlled eating plans.\n\n\nOBJECTIVE\nThe study aimed to test whether similar benefits could be found using weekly nutrition classes in a typical endocrinology practice, hypothesizing that a vegan eating plan would improve glycemic control, weight, lipid concentrations, blood pressure, and renal function and would do so more effectively than a portion-controlled eating plan.\n\n\nDESIGN\nIn a 20-week trial, participants were randomly assigned to a low-fat vegan or portion-controlled eating plan.\n\n\nPARTICIPANTS/SETTING\nIndividuals with type 2 diabetes treated in a single endocrinology practice in Washington, DC, participated (45 starters, 40 completers).\n\n\nINTERVENTION\nParticipants attended weekly after-hours classes in the office waiting room. The vegan plan excluded animal products and added oils and favored low-glycemic index foods. The portion-controlled plan included energy intake limits for weight loss (typically a deficit of 500 calories/day) and provided guidance on portion sizes.\n\n\nMAIN OUTCOME MEASURES\nBody weight, hemoglobin A1c (HbA1c), plasma lipids, urinary albumin, and blood pressure were measured.\n\n\nSTATISTICAL ANALYSES PERFORMED\nFor normally distributed data, t tests were used; for skewed outcomes, rank-based approaches were implemented (Wilcoxon signed-rank test for within-group changes, Wilcoxon two-sample test for between-group comparisons, and exact Hodges-Lehmann estimation to estimate effect sizes).\n\n\nRESULTS\nAlthough participants were in generally good metabolic control at baseline, body weight, HbA1c, and LDL cholesterol improved significantly within each group, with no significant differences between the two eating plans (weight:\u00a0-6.3 kg vegan,\u00a0-4.4 kg portion-controlled, between-group P=0.10; HbA1c,\u00a0-0.40 percentage point in both groups, P=0.68; LDL cholesterol\u00a0-11.9 mg/dL vegan,\u00a0-12.7 mg/dL portion-controlled, P=0.89). Mean urinary albumin was normal at baseline and did not meaningfully change.
Blood pressure changes were not significant.\n\n\nCONCLUSIONS\nWeekly classes, integrated into a clinical practice and using either a low-fat vegan or portion-controlled eating plan, led to clinical improvements in individuals with type 2 diabetes."} {"_id": "8a6ab194c992f33b7066f6866f5339e57fcfd31a", "title": "On Two Metaphors for Learning and the Dangers of Choosing Just One", "text": "The upshots of the former section can be put as follows: All our concepts and beliefs have their roots in a limited number of fundamental ideas that cross disciplinary boundaries and are carried from one domain to another by the language we use. One glance at the current discourse on learning should be enough to realize that nowadays educational research is caught between two metaphors that, in this article, will be called the acquisition metaphor and the participation metaphor. Both of these metaphors are simultaneously present in most recent texts, but while the acquisition metaphor is likely to be more prominent in older writings, more recent studies are often dominated by the participation metaphor."} {"_id": "c90c3c10cba3c4700298ab2883c2bfecd7401fae", "title": "Visual feature integration and the temporal correlation hypothesis.", "text": "The mammalian visual system is endowed with a nearly infinite capacity for the recognition of patterns and objects. To have acquired this capability the visual system must have solved what is a fundamentally combinatorial problem. Any given image consists of a collection of features, consisting of local contrast borders of luminance and wavelength, distributed across the visual field. For one to detect and recognize an object within a scene, the features comprising the object must be identified and segregated from those comprising other objects. This problem is inherently difficult to solve because of the combinatorial nature of visual images. To appreciate this point, consider a simple local feature such as a small vertically oriented line segment placed within a fixed location of the visual field. When combined with other line segments, this feature can form a nearly infinite number of geometrical objects. Any one of these objects may coexist with an equally large number of other"} {"_id": "860d51e4bcaf1b6519d36772ccf645861dded118", "title": "Virgin soil in irony research: Personality, humor, and the \u201csense of irony\u201d", "text": "The aim of the paper is fourfold: (a) show why humor scholars should study irony, (b) explore the need for considering interindividual differences in healthy adults\u2019 irony performance, (c) stress the necessity for developing tools assessing habitual differences in irony performance, and (d) indicate future directions for joint irony and humor research and outline possible applications. Verbal irony is often employed with a benevolent humorous intent by speakers, but can also serve as a means of disparagement humor. In both cases, encoding and decoding activities entailing irony need to be considered in the context of the psychology of humor. We argue that verbal irony performance can be considered a phenomenon native to the realm of humor and individual differences. We point out that research has widely neglected the meaningfulness of variance in irony performance within experimental groups when looking at determinants of irony detection and production.
Based on theoretical considerations and previous empirical findings we show that this variance can be easily related to individual-differences variables such as the sense of humor, dispositions toward laughter and ridicule (e.g., gelotophobia), and general mental ability. Furthermore, we hypothesize that there is an enduring trait determining irony performance, which we will label the sense of irony. The sense of irony possibly goes along with inclinations toward specific affective and cognitive processing patterns when dealing with verbal irony. As an application, novel irony performance tests can help to study psychological and neurophysiological correlates of irony performance more feasibly, that is, in nonclinical groups."} {"_id": "0cf7da0df64557a4774100f6fde898bc4a3c4840", "title": "Shape matching and object recognition using low distortion correspondences", "text": "We approach recognition in the framework of deformable shape matching, relying on a new algorithm for finding correspondences between feature points. This algorithm sets up correspondence as an integer quadratic programming problem, where the cost function has terms based on similarity of corresponding geometric blur point descriptors as well as the geometric distortion between pairs of corresponding feature points. The algorithm handles outliers, and thus enables matching of exemplars to query images in the presence of occlusion and clutter. Given the correspondences, we estimate an aligning transform, typically a regularized thin plate spline, resulting in a dense correspondence between the two shapes. Object recognition is then handled in a nearest neighbor framework where the distance between exemplar and query is the matching cost between corresponding points. We show results on two datasets. One is the Caltech 101 dataset (Fei-Fei, Fergus and Perona), an extremely challenging dataset with large intraclass variation. Our approach yields a 48% correct classification rate, compared to Fei-Fei et al.'s 16%. We also show results for localizing frontal and profile faces that are comparable to special purpose approaches tuned to faces."} {"_id": "136b9952f29632ab3fa2bbf43fed277204e13cb5", "title": "SUN database: Large-scale scene recognition from abbey to zoo", "text": "Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images.
We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes."} {"_id": "1e6fbe7e923ffadd22bf63ce80997bf248659a1d", "title": "Shape matching and object recognition using shape contexts", "text": "This paper presents my work on computing shape models that are computationally fast and invariant to basic transformations like translation, scaling and rotation. In this paper, I propose shape detection using a feature called shape context. Shape context describes all boundary points of a shape with respect to any single boundary point. Thus it is descriptive of the shape of the object. Object recognition can be achieved by matching this feature with a priori knowledge of the shape context of the boundary points of the object. Experimental results are promising on handwritten digits, trademark images."} {"_id": "61f20cd6ec2c9f9ecad8dd9216eddab3ad606fb0", "title": "Backward Stochastic Differential Equations and Applications", "text": "A new type of stochastic differential equation, called the backward stochastic differential equation (BSDE), where the value of the solution is prescribed at the final (rather than the initial) point of the time interval, but the solution is nevertheless required to be at each time a function of the past of the underlying Brownian motion, has been introduced recently, independently by Peng and the author in [16], and by Duffie and Epstein in [7]. This class of equations is a natural nonlinear extension of linear equations that appear both as the equation for the adjoint process in the maximum principle for optimal stochastic control (see [2]), and as a basic model for asset pricing in financial mathematics. It was soon after discovered (see [22], [17]) that those BSDEs provide probabilistic formulas for solutions of certain semilinear partial differential equations (PDEs), which generalize the well-known Feynman-Kac formula for second order linear PDEs. This provides a new additional tool for analyzing solutions of certain PDEs, for instance reaction-diffusion equations."} {"_id": "79dd787b2877cf9ce08762d702589543bda373be", "title": "Face detection using SURF cascade", "text": "We present a novel boosting cascade based face detection framework using SURF features. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by two key contributions. First, the proposed framework deals with only several hundreds of multidimensional local SURF patches instead of hundreds of thousands of single-dimensional Haar features in the VJ framework. Second, it takes AUC as a single criterion for the convergence test of each cascade stage rather than the two conflicting criteria (false-positive-rate and detection-rate) in the VJ framework. These modifications yield much faster training convergence and far fewer stages in the final cascade. We conducted experiments on training face detectors from a large-scale database. Results show that the proposed method is able to train face detectors within one hour through scanning billions of negative samples on current personal computers.
Furthermore, the built detector is comparable to the state-of-the-art algorithm not only in accuracy but also in processing speed."} {"_id": "883492ad4c49f7a6d0d1c38119e908fc0faed911", "title": "Automatic detection and classification of diabetic retinopathy stages using CNN", "text": "A Convolutional Neural Networks (CNNs) approach is proposed to automate the method of Diabetic Retinopathy (DR) screening using color fundus retinal photography as input. Our network uses CNN along with denoising to identify features like micro-aneurysms and haemorrhages on the retina. Our models were developed leveraging Theano, an open source numerical computation library for Python. We trained this network using a high-end GPU on the publicly available Kaggle dataset. On the data set of over 30,000 images our proposed model achieves around 95% accuracy for the two-class classification and around 85% accuracy for the five-class classification on around 3,000 validation images."} {"_id": "e8d5aeca722bcd28a6151640b105b72b43e48ab5", "title": "Qualitative research in health care. Assessing quality in qualitative research.", "text": "In the past decade, qualitative methods have become more commonplace in areas such as health services research and health technology assessment, and there has been a corresponding rise in the reporting of qualitative research studies in medical and related journals. Interest in these methods and their wider exposure in health research has led to necessary scrutiny of qualitative research. Researchers from other traditions are increasingly concerned to understand qualitative methods and, most importantly, to examine the claims researchers make about the findings obtained from these methods. The status of all forms of research depends on the quality of the methods used. In qualitative research, concern about assessing quality has manifested itself recently in the proliferation of guidelines for doing and judging qualitative work. Users and funders of research have had an important role in developing these guidelines as they become increasingly familiar with qualitative methods, but require some means of assessing their quality and of distinguishing \u201cgood\u201d and \u201cpoor\u201d quality research. However, the issue of \u201cquality\u201d in qualitative research is part of a much larger and contested debate about the nature of the knowledge produced by qualitative research, whether its quality can legitimately be judged, and, if so, how. This paper cannot do full justice to this wider epistemological debate. Rather it outlines two views of how qualitative methods might be judged and argues that qualitative research can be assessed according to two broad criteria: validity and relevance."} {"_id": "a75791024a2aca42641e492b992a433eae35cadb", "title": "An empirical examination of factors contributing to the creation of successful e-learning environments", "text": "Although existing models of e-learning effectiveness in information systems (IS) have increased our understanding of how technology can support and enhance learning, most of our models do not take into account the importance of social presence. Thus, this study extends previous research by developing a model of e-learning effectiveness which adds social presence to other oft-studied variables including application-specific computer self-efficacy (AS-CSE), perceived usefulness, course interaction, and e-learning effectiveness.
Using data from 345 individuals, this model was validated through a field study in an introductory IS survey course. Results indicate that AS-CSE and perceived usefulness were related to course performance, course satisfaction, and course instrumentality. In addition, course interaction was related to course performance and satisfaction. Finally, social presence was related to course satisfaction and course instrumentality. Implications for research and practice are discussed. \u00a9 2007 Elsevier Ltd. All rights reserved."} {"_id": "e20469c66f44e6e18eccc722ef3f74a630d102df", "title": "peepHole : Securing your homes with Raspberry Pi and Amazon Echo", "text": "In recent years, the field of Internet of Things (IoT) has seen significant investments made by the research community and the industry. Specifically, the Smart Home space has been a prime focus with the introduction of devices such as Amazon Echo, Google Home, Samsung Smart Things among others. The growth of an industry results in innovative, economic, and advanced solutions. In this paper, we focus on smart home security and how to build a robust, cost-effective system that can be widely used. We power our system using Amazon Echo, Amazon\u2019s cloud services, and its speech services. A Raspberry Pi with a camera module is used as the hardware component for providing home security. We describe the different components of our product and we show that our system works effectively to identify the person at the doorstep, and have thus named our system peepHole. Keywords\u2014Internet of Things, Smart Homes, Home Security, Amazon Web Services, Amazon Echo, Raspberry Pi, Face Verification"} {"_id": "f48ec0076b6d07ffb68c52d457947695310f161b", "title": "Chemical structure recognition and prediction: A machine learning technique", "text": "A chemical structure recognition system designed to identify and predict hand-drawn chemical diagrams is presented. Although the proposed system is primarily intended to help chemists complete their chemical structure drawings and predict and complete the chemical structure, it is also envisioned to be utilized as a learning aid in chemistry education on a student's touchscreen device or on a teacher's smart board. The proposed system continuously provides the user with real-time feedback about the most relevant combinations to complete the chemical compound. A constantly updated list of prioritized candidate structures is displayed to the user. If the user is an expert, he or she will be allowed to contribute to the learning process of the proposed system. The proposed system deploys two trainable Bayesian networks, one at its input and another one at its output. Between the two networks, the classification of hand-drawn chemical structures is done using morphological image processing, LDA and PCA, therefore incorporating both image-based and feature-based techniques. This incorporation contributes considerably to the increased performance of the proposed system by enabling it to efficiently recognize messy drawings and noisy sketches like those containing over-traced strokes.
Performance comparisons with existing state-of-the-art techniques showed the system to outperform them in recognition accuracy."} {"_id": "1c2adf688022702aecf149ab8e94d408dc54275d", "title": "Breath acetone monitoring by portable Si:WO3 gas sensors.", "text": "Breath analysis has the potential for early stage detection and monitoring of illnesses to drastically reduce the corresponding medical diagnostic costs and improve the quality of life of patients suffering from chronic illnesses. In particular, the detection of acetone in the human breath is promising for non-invasive diagnosis and painless monitoring of diabetes (no finger pricking). Here, a portable acetone sensor consisting of flame-deposited and in situ annealed, Si-doped epsilon-WO(3) nanostructured films was developed. The chamber volume was miniaturized while reaction-limited and transport-limited gas flow rates were identified and sensing temperatures were optimized resulting in a low detection limit of acetone (\u223c20 ppb) with short response (10-15 s) and recovery times (35-70 s). Furthermore, the sensor signal (response) was robust against variations of the exhaled breath flow rate facilitating application of these sensors at realistic relative humidities (80-90%) as in the human breath. The acetone content in the breath of test persons was monitored continuously and compared to that of state-of-the-art proton transfer reaction mass spectrometry (PTR-MS). Such portable devices can accurately track breath acetone concentration to become an alternative to more elaborate breath analysis techniques."} {"_id": "97a987931d616233569e070a0ac110b4db583725", "title": "Performance analysis of latent fingerprint enhancement techniques", "text": "Fingerprint Enhancement is an important step used to improve the accuracy and rate of automated detection of fingerprints. Enhancing the latent fingerprints facilitates the extraction of minutiae from the fingerprint image. In this paper, two methods of latent fingerprint enhancement are presented. The first uses Histogram Equalization for enhancing the latent fingerprint. The second method uses a novel approach of Binarization followed by Wiener filtering and an additional Median filter. The improvements in the image are then analyzed based on their entropy, contrast and moment."} {"_id": "eb06182a2817d06e82612a0c32a6c843f01c6a03", "title": "Text Generation From Tables", "text": "This paper proposes a neural generative model, namely Table2Seq, to generate a natural language sentence based on a table. Specifically, the model maps a table to continuous vectors and then generates a natural language sentence by leveraging the semantics of a table. Since rare words, e.g., entities and values, usually appear in a table, we develop a flexible copying mechanism that selectively replicates contents from the table to the output sequence. We conduct extensive experiments to demonstrate the effectiveness of our Table2Seq model and the utility of the designed copying mechanism. On the WIKIBIO and SIMPLEQUESTIONS datasets, the Table2Seq model improves the state-of-the-art results from 34.70 to 40.26 and from 33.32 to 39.12 in terms of BLEU-4 scores, respectively. Moreover, we construct an open-domain dataset WIKITABLETEXT that includes 13\u00a0318 descriptive sentences for 4962 tables. Our Table2Seq model achieves a BLEU-4 score of 38.23 on WIKITABLETEXT, outperforming template-based and language model based approaches.
Furthermore, through experiments on 1\u00a0M table-query pairs from a search engine, our Table2Seq model, which considers the structured part of a table, i.e., table attributes and table cells, as additional information, outperforms a sequence-to-sequence model that considers only the sequential part of a table, i.e., the table caption."} {"_id": "58d2a4c5c58085c3b6878473f7fdcc55984aacd6", "title": "Standards for smart education \u2013 towards a development framework", "text": "Smart learning environments (SLEs) utilize a range of digital technologies in supporting learning, education and training; they also provide a prominent signpost for how future learning environments might be shaped. Thus, while innovation proceeds, SLEs are receiving growing attention from the research community, outputs from which are discussed in this paper. Likewise, this broad application of educational digital technologies is also the remit of standardization in an ISO committee, also discussed in this paper. These two communities share a common interest in conceptualizing this emerging domain, with the aim of identifying directions for further development. In doing so, terminology issues arise along with key questions such as, \u2018how is smart learning different from traditional learning?\u2019 Presenting a bigger challenge is the question, \u2018how can standardization work be best scoped in today's innovation-rich, networked, cloud-based and data-driven learning environments?\u2019 In responding, this conceptual paper seeks to identify candidate constructs and approaches that might lead to stable, coherent and exhaustive understanding of smart learning environments, thereby giving standards development for learning, education and training a needed direction. Based on reviews of pioneering work within smart learning, smart education and smart learning environments, we highlight two models, a cognitive smart learning model and a smartness level model. These models are evaluated against current standardization challenges in the field of learning, education and training to form the basis for a development platform for new standards in this area."} {"_id": "e0a5db6594b592d46dbbbeca28a69248f0bfe421", "title": "Early Detection of Sudden Pedestrian Crossing for Safe Driving During Summer Nights", "text": "Sudden pedestrian crossing (SPC) is the major reason for pedestrian-vehicle crashes. In this paper, we focus on detecting SPCs at night for supporting an advanced driver assistance system using a far-infrared (FIR) camera mounted on the front roof of a vehicle. Although the thermal temperature of the road is similar to or higher than that of the pedestrians during summer nights, much previous research has focused on pedestrian detection during the winter, spring, or autumn seasons. However, our research concentrates on SPC during the hot summer season because the number of collisions between pedestrians and vehicles in Korea is higher at that time than during the other seasons. For real-time processing, we first decide the optimal levels of image scaling and search area. We then use our proposed method for detecting virtual reference lines that are associated with road segmentation without using color information and change these lines according to the turning direction of the vehicle. Pedestrian detection is conducted using a cascade random forest with low-dimensional Haar-like features and oriented center-symmetric local binary patterns.
The SPC is predicted based on the likelihood and the spatiotemporal features of the pedestrians, such as their overlapping ratio with virtual reference lines, as well as the direction and magnitude of each pedestrian\u2019s movement. The proposed algorithm was successfully applied to various pedestrian data sets captured by an FIR camera, and the results show that its SPC detection performance is better than that of other methods."} {"_id": "728ebf81883fccccd76f092d90f1538491b961f7", "title": "Experiences with rapid mobile game development using unity engine", "text": "In this work, we describe our experiences with FunCopter, a casual game suitable for portable devices that we have designed and implemented using the Unity engine. We emphasize some general principles, particularly with respect to rapid game development and constrained graphics. In addition, we present the main activities realized in each software engineering phase and the lessons we learned during its development. These descriptions include details from the initial concept to the fully realized, playable game. This work shares general but important experiences with Unity, particularly interesting for beginner-level game developers."} {"_id": "ea951c82efe26424e3ce0d167e01f59e5135a2da", "title": "Dimensionality reduction for the quantitative evaluation of a smartphone-based Timed Up and Go test", "text": "The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Lately, instrumented versions of the test are being considered, where inertial sensors assess motion. To improve the pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to the locomotor performance, a dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components) which are not redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (i.e. healthy-Parkinson's disease, fallers-non fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone."} {"_id": "e4911ece92989fbdcd68d95f1d9fc06c9d7616bb", "title": "Let's stop trying to be \"sexy\" - preparing managers for the (big) data-driven business era", "text": "Let's stop trying to be 'sexy' \u2013 preparing managers for the (big) data-driven business era Kevin Daniel Andr\u00e9 Carillo, Article information: To cite this document: Kevin Daniel Andr\u00e9 Carillo, (2017) \"Let's stop trying to be 'sexy' \u2013 preparing managers for the (big) data-driven business era\", Business Process Management Journal, Vol.
23 Issue: 3, doi: 10.1108/BPMJ-09-2016-0188 Permanent link to this document: http://dx.doi.org/10.1108/BPMJ-09-2016-0188"} {"_id": "c0ddc972ae95764639350f272c45cbc5f9fb77d2", "title": "The neurobiology of social cognition", "text": "Recent studies have begun to elucidate the roles played in social cognition by specific neural structures, genes, and neurotransmitter systems. Cortical regions in the temporal lobe participate in perceiving socially relevant stimuli, whereas the amygdala, right somatosensory cortices, orbitofrontal cortices, and cingulate cortices all participate in linking perception of such stimuli to motivation, emotion, and cognition. Open questions remain about the domain-specificity of social cognition, about its overlap with emotion and with communication, and about the methods best suited for its investigation."} {"_id": "739b0a79b851b174558e17355e09e26c3236daeb", "title": "Conformal Surface Parameterization for Texture Mapping", "text": "In this paper, we give an explicit method for mapping any simply connected surface onto the sphere in a manner which preserves angles. This technique relies on certain conformal mappings from differential geometry. Our method provides a new way to automatically assign texture coordinates to complex undulating surfaces. We demonstrate a finite element method that can be used to apply our mapping technique to a triangulated geometric description of a surface."} {"_id": "82570dbb0c240876ca70289035e8e51eb6d1c4f6", "title": "The ARTEMIS European driving cycles for measuring car pollutant emissions.", "text": "In the past 10 years, various work has been undertaken to collect data on the actual driving of European cars and to derive representative real-world driving cycles. A compilation and synthesis of this work is provided in this paper. In the frame of the European research project: ARTEMIS, this work has been considered to derive a set of reference driving cycles. The main objectives were as follows: to derive a common set of reference real-world driving cycles to be used in the frame of the ARTEMIS project but also in the frame of on-going national campaigns of pollutant emission measurements, to ensure the compatibility and integration of all the resulting emission data in the European systems of emission inventory; to ensure and validate the representativity of the database and driving cycles by comparing and taking into account all the available data regarding driving conditions; to include in three real-world driving cycles (urban, rural road and motorway) the diversity of the observed driving conditions, within sub-cycles allowing a disaggregation of the emissions according to more specific driving conditions (congested and free-flow urban). Such driving cycles present a real advantage as they are derived from a large database, using a methodology that was widely discussed and approved. In the main, these ARTEMIS driving cycles were designed using the available data, and the method of analysis was based to some extent on previous work. Specific steps were implemented. The study includes characterisation of driving conditions and vehicle uses. Starting conditions and gearbox use are also taken into account."} {"_id": "ee9018d1c4e47b1e5d0df8b9a9ab6bb0e19a8e73", "title": "Syntax and semantics in the acquisition of locative verbs.", "text": "Children between the ages of three and seven occasionally make errors with locative verbs like pour and fill, such as *I filled water into the glass and *I poured the glass with water (Bowerman, 1982).
To account for this pattern of errors, and for how they are eventually unlearned, we propose that children use a universal linking rule called object affectedness: the direct object corresponds to the argument that is specified as 'affected' in some particular way in the semantic representation of a verb. However, children must learn which verbs specify which of their arguments as being affected; specifically, whether it is the argument whose referent is undergoing a change of location, such as the content argument of pour, or the argument whose referent is undergoing a change of state, such as the container argument of fill. This predicts that syntactic errors should be associated with specific kinds of misinterpretations of verb meaning. Two experiments were performed on the ability of children and adults to understand and produce locative verbs. The results confirm that children tend to make syntactic errors with sentences containing fill and empty, encoding the content argument as direct object (e.g. fill the water). As predicted, children also misinterpreted the meanings of fill and empty as requiring not only that the container be brought into a full or empty state, but also that the content move in some specific manner (by pouring, or by dumping). Furthermore, children who misinterpreted the verbs' meanings were more likely to make syntactic errors with them. These findings support the hypothesis that verb meaning and syntax are linked in precise ways in the lexicons of language learners."} {"_id": "8a4bf74449a4c011cae16f4ebfcff3976cf1e9c7", "title": "Dermal and subdermal tissue filling with fetal connective tissue and cartilage, collagen, and silicone: Experimental study in the pig compared with clinical results. A new technique of dermis mini-autograft injections", "text": "The early reaction to the injection of silicone, collagen, and lyophilized heterologous fetal connective and cartilage tissues into the limiting zone deep dermis-superficial subcutaneous tissue was histologically examined in the pig and compared with clinical results. The inflammatory reaction to lyophilized heterologous fetal tissue is considerably more intense than that to collagen and silicone and lasts for several weeks. Therefore, it is not recommended for soft tissue filling in the face. Assuming that fetal tissues have lower antigenicity, the authors suggest that enzymatically denatured collagen should be manufactured from heterologous fetal connective tissue and then tested further. The reaction of tissue to silicone and collagen is minimal. Silicone is preferred for dermal injections since in clinical experience it remains in the site of injection much longer. For subdermal injections, however, collagen is preferred. Based on experience with over 600 patients since 1958, the first author continues using liquid silicone. The lack of complications is probably a result of the fact that only small amounts (milliliters) of silicone were used in wrinkles or small depressions in the dermal layer and that from the beginning injection into the subcutaneous tissue was avoided.
Since 1988 a new technique for the treatment of wrinkles and skin depressions with injections of dermal miniautografts has been used with satisfactory results."} {"_id": "f2560b735ee734f4b0ee0b523a8069f44a21bbf4", "title": "Multi-modal Attention Mechanisms in LSTM and Its Application to Acoustic Scene Classification", "text": "Neural network architectures such as long short-term memory (LSTM) have been proven to be powerful models for processing sequences including text, audio and video. On the basis of vanilla LSTM, multi-modal attention mechanisms are proposed in this paper to synthesize the time and semantic information of input sequences. First, we reconstruct the forget and input gates of the LSTM unit from the perspective of an attention model in the temporal dimension. Then the memory content of the LSTM unit is recalculated using a cluster-based attention mechanism in semantic space. Experiments on acoustic scene classification tasks show performance improvements of the proposed methods when compared with vanilla LSTM. The classification errors on the LITIS ROUEN and DCASE2016 datasets are reduced by 16.5% and 7.7%, respectively. We took second place in Kaggle\u2019s YouTube-8M video understanding challenge, and the multi-modal attention based LSTM model is one of our best-performing single systems."} {"_id": "7ae1b915e58af9a5755ad2b33d56a2717d2802fc", "title": "The regulation of explicit and implicit race bias: the role of motivations to respond without prejudice.", "text": "Three studies examined the moderating role of motivations to respond without prejudice (e.g., internal and external) in expressions of explicit and implicit race bias. In all studies, participants reported their explicit attitudes toward Blacks. Implicit measures consisted of a sequential priming task (Study 1) and the Implicit Association Test (Studies 2 and 3). Study 3 used a cognitive busyness manipulation to preclude effects of controlled processing on implicit responses. In each study, explicit race bias was moderated by internal motivation to respond without prejudice, whereas implicit race bias was moderated by the interaction of internal and external motivation to respond without prejudice. Specifically, high internal, low external participants exhibited lower levels of implicit race bias than did all other participants. Implications for the development of effective self-regulation of race bias are discussed."} {"_id": "e467278d981ba30ab3b24235d09205e2aaba3d6f", "title": "If Only my Leader Would just Do Something! Passive Leadership Undermines Employee Well-being Through Role Stressors and Psychological Resource Depletion.", "text": "The goal of this study was to develop and test a sequential mediational model explaining the negative relationship of passive leadership to employee well-being. Based on role stress theory, we posit that passive leadership will predict higher levels of role ambiguity, role conflict and role overload. Invoking Conservation of Resources theory, we further hypothesize that these role stressors will indirectly and negatively influence two aspects of employee well-being, namely overall mental health and overall work attitude, through psychological work fatigue. Using a probability sample of 2467 US workers, structural equation modelling supported the model by showing that role stressors and psychological work fatigue partially mediated the negative relationship between passive leadership and both aspects of employee well-being.
The hypothesized, sequential indirect relationships explained 47.9% of the overall relationship between passive leadership and mental health and 26.6% of the overall relationship between passive leadership and overall work attitude. Copyright \u00a9 2016 John Wiley & Sons, Ltd."} {"_id": "746234e6678624c9f6a68aa99c7fcef645fcbae1", "title": "THE EXPERIENCE FACTORY", "text": "This article presents an infrastructure, called the experience factory, aimed at capitalization and reuse of life cycle experience and products. The experience factory is a logical and physical organization, and its activities are independent of those of the development organization. The activities of the development organization and of the experience factory can be outlined in the following way:"} {"_id": "abd0905c49d98ba0ac7488bf009be6ed119dcaf3", "title": "The Timeboxing process model for iterative software development", "text": "In today\u2019s business where speed is of the essence, an iterative development approach that allows the functionality to be delivered in parts has become a necessity and an effective way to manage risks. In an iterative process, the development of a software system is done in increments, each increment forming an iteration and resulting in a working system. A common iterative approach is to decide what should be developed in an iteration and then plan the iteration accordingly. A somewhat different iterative approach is to time box the iterations. In this approach, the length of an iteration is fixed and what should be developed in an iteration is adjusted to fit the time box. Generally, the time boxed iterations are executed in sequence, with some overlap where feasible. In this paper we propose the timeboxing process model that takes the concept of time boxed iterations further by adding pipelining concepts to it for permitting overlapped execution of different iterations. In the timeboxing process model, each time boxed iteration is divided into equal length stages, each stage having a defined function and resulting in a clear work product that is handed over to the next stage. With this division into stages, pipelining concepts are employed to have multiple time boxes executing concurrently, leading to a reduction in the delivery time for product releases. We illustrate the use of this process model through an example of a commercial project that was successfully executed using the proposed model."} {"_id": "058f712a7dd173dd0eb6ece7388bd9cdd6f77d67", "title": "Iterative and incremental developments. a brief history", "text": "Although many view iterative and incremental development as a modern practice, its application dates as far back as the mid-1950s. Prominent software-engineering thought leaders from each succeeding decade supported IID practices, and many large projects used them successfully. These practices may have differed in their details, but all had a common theme: to avoid a single-pass sequential, document-driven, gated-step approach."} {"_id": "6b8283005a83f24e6301605acbaad3bb6d277ca5", "title": "A Methodology for Collecting Valid Software Engineering Data", "text": "An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them.
Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50 percent of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful."} {"_id": "86d717a749483ba1583f04eb251888d795b25fc0", "title": "Quantitative Evaluation of Software Quality", "text": "The study reported in this paper establishes a conceptual framework and some key initial results in the analysis of the characteristics of software quality. Its main results and conclusions are:\n \u2022 Explicit attention to characteristics of software quality can lead to significant savings in software life-cycle costs.\n \u2022 The current software state-of-the-art imposes specific limitations on our ability to automatically and quantitatively evaluate the quality of software.\n \u2022 A definitive hierarchy of well-defined, well-differentiated characteristics of software quality is developed. Its higher-level structure reflects the actual uses to which software quality evaluation would be put; its lower-level characteristics are closely correlated with actual software metric evaluations which can be performed.\n \u2022 A large number of software quality-evaluation metrics have been defined, classified, and evaluated with respect to their potential benefits, quantifiability, and ease of automation.\n \u2022 Particular software life-cycle activities have been identified which have significant leverage on software quality.\n Most importantly, we believe that the study reported in this paper provides for the first time a clear, well-defined framework for assessing the often slippery issues associated with software quality, via the consistent and mutually supportive sets of definitions, distinctions, guidelines, and experiences cited. This framework is certainly not complete, but it has been brought to a point sufficient to serve as a viable basis for future refinements and extensions."} {"_id": "291119421373a0bcebbf11d4b0ff89523cdbd9da", "title": "A tutorial on the dynamics and control of wind turbines and wind farms", "text": "Wind energy is currently the fastest-growing energy source in the world, with a concurrent growth in demand for the expertise of engineers and researchers in the wind energy field. There are still many unsolved challenges in expanding wind power, and there are numerous problems of interest to systems and control researchers. In this paper, we first review the basic structure of wind turbines and then describe wind turbine control systems and control loops. Of great interest are the generator torque and blade pitch control systems, where significant performance improvements are achievable with more advanced systems and control research.
We describe recent developments in advanced controllers for wind turbines and wind farms, and we also outline many open problems in the areas of modeling and control of wind turbines."} {"_id": "75bf3e75b4842b273fa8fd12bc98bfd3b91efa26", "title": "ReLiDSS: Novel lie detection system from speech signal", "text": "Lying is among the most common harmful human acts and merits spending time thinking about it. Lie detection still poses a problem for recent research aiming to develop non-contact applications that estimate physiological changes. In this paper, we propose a preliminary investigation of which relevant acoustic parameters can be useful to classify lie or truth from the speech signal. Our proposed system is based on the Mel Frequency Cepstral Coefficients (MFCC) commonly used in automatic speech processing, applied to our own constructed database ReLiDDB (ReGIM-Lab Lie Detection DataBase) for both lie detection and person voice recognition. We applied the Support Vector Machines (SVM) classifier with a linear kernel on this database and obtained lie and truth detection accuracies on speech audio of 88.23% and 84.52%, respectively."} {"_id": "e955cf5f5a7d9722f6bb14d09d964df8fefea654", "title": "On Software Implementation of High Performance GHASH Algorithms", "text": "There have been several modes of operation available for symmetric key block ciphers, among which Galois Counter Mode (GCM) of operation is a standard. GCM mode of operation provides confidentiality with the help of a symmetric key block cipher operating in counter mode. The authentication component of GCM comprises the Galois hash (GHASH) computation, which is a keyed hash function. The most important component of GHASH computation is carry-less multiplication of 128-bit operands, which is followed by a modulo reduction. There have been a number of schemes proposed for efficient software implementation of carry-less multiplication to improve performance of GHASH by increasing the speed of multiplications. This thesis focuses on providing an efficient way of software implementation of the high performance GHASH function as proposed by Meloni et al., and also on the implementation of GHASH using a carry-less multiplication instruction provided by Intel on their Westmere architecture. The thesis work includes implementation of the high performance GHASH and its comparison to the older or standard implementation of the GHASH function. It also includes comparison of the two implementations using Intel's carry-less multiplication instruction. This is the first time that this kind of comparison is being done on software implementations of these algorithms. Our software implementations suggest that the new GHASH algorithm, which was originally proposed for hardware implementations due to the required parallelization, can't take advantage of the Intel carry-less multiplication instruction PCLMULQDQ. On the other hand, when implementations are done without using the PCLMULQDQ instruction the new algorithm performs better, even if its inherent parallelization is not utilized.
This suggests that the new algorithm will perform better on embedded systems that do not support PCLMULQDQ."} {"_id": "24d90bf184b1de66726e817d00b6edea192e5e53", "title": "Theory of Credit Card Networks : A Survey of the Literature", "text": "Credit cards provide benefits to consumers and merchants not provided by other payment instruments as evidenced by their explosive growth in the number and value of transactions over the last 20 years. Recently, credit card networks have come under scrutiny from regulators and antitrust authorities around the world. The costs and benefits of credit cards to network participants are discussed. Focusing on interrelated bilateral transactions, several theoretical models have been constructed to study the implications of several business practices of credit card networks. The results and implications of these economic models along with future research topics are discussed."} {"_id": "f0ca32430b12470178eb1cd7e4f8093502d1a33e", "title": "Cytoscape.js: a graph theory library for visualisation and analysis", "text": "UNLABELLED\nCytoscape.js is an open-source JavaScript-based graph library. Its most common use case is as a visualization software component, so it can be used to render interactive graphs in a web browser. It also can be used in a headless manner, useful for graph operations on a server, such as Node.js.\n\n\nAVAILABILITY AND IMPLEMENTATION\nCytoscape.js is implemented in JavaScript. Documentation, downloads and source code are available at http://js.cytoscape.org.\n\n\nCONTACT\ngary.bader@utoronto.ca."} {"_id": "9a8d78c61fb3ad2f72bdcd1ab8795da5e4e60b77", "title": "Time series forecasting using a deep belief network with restricted Boltzmann machines", "text": "Multi-layer perceptron (MLP) and other artificial neural networks (ANNs) have been widely applied to time series forecasting since the 1980s. However, because of problems such as initialization and local optima that arise in applications, the improvement of ANNs is, and will remain, an interesting study not only for time series forecasting but also for other intelligent computing fields. In this study, we propose a method for time series prediction using Hinton & Salakhutdinov\u2019s deep belief nets (DBN), which are probabilistic generative neural networks composed of multiple layers of restricted Boltzmann machines (RBMs). We use a 3-layer deep network of RBMs to capture the features of the input space of time series data, and after pretraining of RBMs using their energy functions, gradient descent training, i.e., the back-propagation learning algorithm, is used for fine-tuning connection weights between \u201cvisible layers\u201d and \u201chidden layers\u201d of RBMs. To decide the sizes of neural networks and the learning rates, Kennedy & Eberhart\u2019s particle swarm optimization (PSO) is adopted during the training processes. Furthermore, \u201ctrend removal\u201d, a preprocessing step applied to the original data, is also employed in the forecasting experiment using CATS benchmark data. Additionally, the proposed method was also applied to approximation and short-term prediction of chaotic time series such as Lorenz chaos and the logistic map."} {"_id": "00b7ffd43e9b6b70c80449872a8c9ec49c7d045a", "title": "Hierarchical structure and the prediction of missing links in networks", "text": "Networks have in recent years emerged as an invaluable tool for describing and quantifying complex systems in many branches of science.
Recent studies suggest that networks often exhibit hierarchical organization, in which vertices divide into groups that further subdivide into groups of groups, and so forth over multiple scales. In many cases the groups are found to correspond to known functional units, such as ecological niches in food webs, modules in biochemical networks (protein interaction networks, metabolic networks or genetic regulatory networks) or communities in social networks. Here we present a general technique for inferring hierarchical structure from network data and show that the existence of hierarchy can simultaneously explain and quantitatively reproduce many commonly observed topological properties of networks, such as right-skewed degree distributions, high clustering coefficients and short path lengths. We further show that knowledge of hierarchical structure can be used to predict missing connections in partly known networks with high accuracy, and for more general network structures than competing techniques. Taken together, our results suggest that hierarchy is a central organizing principle of complex networks, capable of offering insight into many network phenomena."} {"_id": "a70f3db55d561e0f0df7e58e4c8082d6b114def2", "title": "PROPOSED METHODOLOGY FOR EARTHQUAKE-INDUCED LOSS ASSESSMENT OF INSTRUMENTED STEEL FRAME BUILDINGS : BUILDING-SPECIFIC AND CITY-SCALE APPROACHES", "text": "The performance-based earthquake engineering framework utilizes probabilistic seismic demand models to obtain accurate estimates of building engineering demand parameters. These parameters are utilized to estimate earthquake-induced losses in terms of probabilistic repair costs, life-safety impact, and loss of function due to damage to a wide variety of building structural and non-structural components and content. Although nonlinear response history analysis is a reliable tool to develop probabilistic seismic demand models, it typically requires a considerable time investment in developing detailed building numerical model representations. In that respect, the challenge of city-scale damage assessment still remains. This paper proposes a simplified methodology that rapidly assesses the story-based engineering demand parameters (e.g., peak story drift ratios, residual story drift ratios, peak absolute floor accelerations) along the height of a steel frame building in the aftermath of an earthquake. The proposed methodology can be employed at a city-scale in order to facilitate rapid earthquake-induced loss assessment of steel frame buildings in terms of structural damage and monetary losses within a region. Therefore, buildings within an urban area that are potentially damaged after an earthquake event or scenario can be easily identified without a detailed engineering inspection. To illustrate the methodology for rapid earthquake-induced loss assessment at the city-scale we employ data from instrumented steel frame buildings in urban California that experienced the 1994 Northridge earthquake. Maps that highlight the expected structural and nonstructural damage as well as the expected earthquake-induced monetary losses are developed. The maps are developed with the geographical information system for steel frame buildings located in Los Angeles.
"} {"_id": "114a2c60da136f80c304f4ed93fa7c796cc76f28", "title": "Forecasting the winner of a tennis match", "text": "We propose a method to forecast the winner of a tennis match, not only at the beginning of the match, but also (and in particular) during the match. The method is based on a fast and flexible computer program, and on a statistical analysis of a large data set from Wimbledon, both at match and at point level. \u00a9 2002 Elsevier Science B.V. All rights reserved."} {"_id": "5d1b386ec5c1ca0c5d9fd6540ede1d68d25a98fd", "title": "Overconfidence in IT Investment Decisions: Why Knowledge can be a Boon and Bane at the same Time", "text": "Despite their strategic relevance in organizations, information technology (IT) investments still result in alarmingly high failure rates. As IT investments are such a delicate high-risk/high-reward matter, it is crucial for organizations to avoid flawed IT investment decision-making. Previous research in consumer and organizational decision-making shows that a decision\u2019s accuracy is often influenced by decision-makers\u2019 overconfidence and that the magnitude of overconfidence strongly depends on decision-makers\u2019 certainty of their knowledge. Drawing on these strands of research, our findings from a field survey (N=166) show that IT managers\u2019 decisions in IT outsourcing are indeed affected by overconfidence. However, an in-depth investigation of three types of knowledge, namely experienced, objective and subjective knowledge, reveals that different types of knowledge can have contrasting effects on overconfidence and thus on the quality of IT outsourcing decisions. Knowledge can be a boon and bane at the same time. Implications for research and practice are discussed."} {"_id": "9a86ae8e9b946dc6d957357e0670f262fa1ead9d", "title": "Bare bones differential evolution", "text": "Article history: Received 22 August 2007 Accepted 29 February 2008 Available online xxxx"} {"_id": "f8acaabc99801a89baa5a9eff445fc5922498dd0", "title": "Domain Alignment with Triplets", "text": "Deep domain adaptation methods can reduce the distribution discrepancy by learning domain-invariant embeddings. However, these methods only focus on aligning the whole data distributions, without considering the class-level relations among source and target images. Thus, the target embeddings of a bird might be aligned to the source embeddings of an airplane. This semantic misalignment can directly degrade the classifier performance on the target dataset. To alleviate this problem, we present a similarity constrained alignment (SCA) method for unsupervised domain adaptation. When aligning the distributions in the embedding space, SCA enforces a similarity-preserving constraint to maintain class-level relations among the source and target images, i.e., if a source image and a target image are of the same class label, their corresponding embeddings are supposed to be aligned nearby, and vice versa. In the absence of target labels, we assign pseudo labels for target images. Given labeled source images and pseudo-labeled target images, the similarity-preserving constraint can be implemented by minimizing the triplet loss. With the joint supervision of domain alignment loss and similarity-preserving constraint, we train a network to obtain domain-invariant embeddings with two critical characteristics, intra-class compactness and inter-class separability.
Extensive experiments conducted on the two datasets demonstrate the effectiveness of SCA."} {"_id": "277048b7eede669b105c762446e890d49eb3c6a9", "title": "Sensor fusion for robot control through deep reinforcement learning", "text": "Deep reinforcement learning is becoming increasingly popular for robot control algorithms, with the aim for a robot to self-learn useful feature representations from unstructured sensory input leading to the optimal actuation policy. In addition to sensors mounted on the robot, sensors might also be deployed in the environment, although these might need to be accessed via an unreliable wireless connection. In this paper, we demonstrate deep neural network architectures that are able to fuse information generated by multiple sensors and are robust to sensor failures at runtime. We evaluate our method on a search and pick task for a robot both in simulation and the real world."} {"_id": "29c352ca636950e2f84b488be6ff3060cfe3ca6a", "title": "Optimization Criteria, Sensitivity and Robustness of Motion and Structure Estimation", "text": "The prevailing efforts to study the standard formulation of motion and structure recovery have been recently focused on issues of sensitivity and robustness of existing techniques. While many cogent observations have been made and verified experimentally, many statements do not hold in general settings and make a comparison of existing techniques difficult. With an ultimate goal of clarifying these issues, we study the main aspects of the problem: the choice of objective functions, optimization techniques and the sensitivity and robustness issues in the presence of noise. We clearly reveal the relationship among different objective functions, such as \u201c(normalized) epipolar constraints\u201d, \u201creprojection error\u201d or \u201ctriangulation\u201d, which can all be unified in a new \u201coptimal triangulation\u201d procedure formulated as a constrained optimization problem. Regardless of various choices of the objective function, the optimization problems all inherit the same unknown parameter space, the so-called \u201cessential manifold\u201d, making the new optimization techniques on Riemannian manifolds directly applicable. Using these analytical results we provide a clear account of sensitivity and robustness of the proposed linear and nonlinear optimization techniques and study the analytical and practical equivalence of different objective functions. The geometric characterization of critical points of a function defined on the essential manifold and the simulation results clarify the difference between the effect of bas relief ambiguity and other types of local minima, leading to consistent interpretations of simulation results over a large range of signal-to-noise ratios and a variety of configurations. Furthermore, we justify the choice of the linear techniques for (re)initialization and for detection of incorrect local minima."} {"_id": "aab29a13185bee54a17573024c309c37b3d1456f", "title": "A comparison of loop closing techniques in monocular SLAM", "text": "Loop closure detection systems for monocular SLAM come in three broad categories: (i) map-to-map, (ii) image-to-image and (iii) image-to-map. In this paper, we have chosen an implementation of each and performed experiments allowing the three approaches to be compared. The sequences used include both indoor and outdoor environments and single and multiple loop trajectories. \u00a9 2009 Elsevier B.V.
All rights reserved."} {"_id": "8d281dd3432fea0e3ca4924cd760786008f43a07", "title": "A Simplified Model for Normal Mode Helical Antennas", "text": "Normal mode helical antennas are widely used for RFID and mobile communications applications due to their relatively small size and omni-directional radiation pattern. However, their highly curved geometry can make the design and analysis of helical antennas that are part of larger complex structures quite difficult. A simplified model is proposed that replaces the curved helix with straight wires and lumped elements. The simplified model can be used to reduce the complexity of full-wave models that include a helical antenna. It also can be used to estimate the performance of a helical antenna without full-wave modeling of the helical structure. Index Terms\u2014Helical Antennas, RFID."} {"_id": "0c2bbea0d28861b23879cec33f04125cf556173d", "title": "Likelihood-informed dimension reduction for nonlinear inverse problems", "text": "The intrinsic dimensionality of an inverse problem is affected by prior information, the accuracy and number of observations, and the smoothing properties of the forward operator. From a Bayesian perspective, changes from the prior to the posterior may, in many problems, be confined to a relatively low-dimensional subspace of the parameter space. We present a dimension reduction approach that defines and identifies such a subspace, called the \u201clikelihood-informed subspace\u201d (LIS), by characterizing the relative influences of the prior and the likelihood over the support of the posterior distribution. This identification enables new and more efficient computational methods for Bayesian inference with nonlinear forward models and Gaussian priors. In particular, we approximate the posterior distribution as the product of a lower-dimensional posterior defined on the LIS and the prior distribution marginalized onto the complementary subspace. Markov chain Monte Carlo sampling can then proceed in lower dimensions, with significant gains in computational efficiency. We also introduce a Rao-Blackwellization strategy that de-randomizes Monte Carlo estimates of posterior expectations for additional variance reduction. We demonstrate the efficiency of our methods using two numerical examples: inference of permeability in a groundwater system governed by an elliptic PDE, and an atmospheric remote sensing problem based on Global Ozone Monitoring System (GOMOS) observations."} {"_id": "cf6508043c418891c1e7299debccfc1527e4ca2a", "title": "A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge", "text": "We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE we want to identify automatically the type of a logical relation between two input texts. In particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for a high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and the accuracy of results. For RTE we use first-order logical inference employing model-theoretic techniques and automated reasoning tools.
The inference is supported by problem-relevant background knowledge extracted automatically and on demand from external sources like, e.g., WordNet, YAGO, and OpenCyc, or other, more experimental sources with, e.g., manually defined presupposition resolutions, or with axiomatized general and common sense knowledge. The results show that fine-grained and consistent knowledge coming from diverse sources is a necessary condition determining the correctness and traceability of results."} {"_id": "28b2723d1feb5373e3136d358210efc76f7d6f46", "title": "Building Consumer Trust Online", "text": "Moving some Web consumers along to the purchase click is proving to be difficult, despite the impressive recent growth in online shopping. Consumer online shopping revenues and related corporate profits are still meager, though the industry is optimistic, thanks to bullish forecasts of cyberconsumer activity for the new millennium. In 1996, Internet shopping revenues for U.S. users, excluding cars and real estate, were estimated by Jupiter Communications, an e-commerce consulting firm in New York, at approximately $707 million but are expected to reach nearly $37.5 billion by 2002 [1]. Meanwhile, the business-to-business side is taking off with more than $8 billion in revenues for 1997 and $327 billion predicted by 2002 just in the U.S., according to Forrester Research, an information consulting firm in Cambridge, Mass. [4]. On the consumer side, a variety of barriers are invoked to explain the continuing difficulties. There are, to be sure, numerous barriers. Such factors as the lack of standard technologies for secure payment and the lack of profitable business models play important roles in the relative dearth of commercial activity by businesses and consumers on the Internet compared to what analysts expect in the near future. Granted, the commercial development of the Web is still in its infancy, so few expect these barriers to commercial development to persist. Still, commercial development of the Web faces a far more formidable barrier\u2014consumers' fear of divulging their personal data\u2014to its ultimate commercialization. The reason more people have yet to shop online, or even to provide information to Web providers in exchange for access to information, is the fundamental lack of faith between most businesses and consumers on the Web today. In essence, consumers simply do not trust most Web providers enough to engage in \"relationship exchanges\" involving money and personal information with them. Our research reveals that this lack of trust arises from the fact that cyberconsumers feel they lack control over the access that Web merchants have to their"} {"_id": "9a79e11b4261aa601d7f77f4ba5aed22ca1f6ad6", "title": "Perception-Guided Multimodal Feature Fusion for Photo Aesthetics Assessment", "text": "Photo aesthetic quality evaluation is a challenging task in the multimedia and computer vision fields.
Conventional approaches suffer from the following three drawbacks: 1) the deemphasized role of semantic content that is many times more important than low-level visual features in photo aesthetics; 2) the difficulty of optimally fusing low-level and high-level visual cues in photo aesthetics evaluation; and 3) the absence of a sequential viewing path in the existing models, as humans perceive visually salient regions sequentially when viewing a photo.\n To solve these problems, we propose a new aesthetic descriptor that mimics humans sequentially perceiving visually/semantically salient regions in a photo. In particular, a weakly supervised learning paradigm is developed to project the local aesthetic descriptors (graphlets in this work) into a low-dimensional semantic space. Thereafter, each graphlet can be described by multiple types of visual features, both low-level and high-level. Since humans usually perceive only a few salient regions in a photo, a sparsity-constrained graphlet ranking algorithm is proposed that seamlessly integrates both the low-level and the high-level visual cues. Top-ranked graphlets are those visually/semantically prominent graphlets in a photo. They are sequentially linked into a path that simulates the process of humans actively viewing. Finally, we learn a probabilistic aesthetic measure based on such actively viewing paths (AVPs) from the training photos that are marked as aesthetically pleasing by multiple users. Experimental results show that: 1) the AVPs are 87.65% consistent with real human gaze shifting paths, as verified by the eye-tracking data; and 2) our photo aesthetic measure outperforms many of its competitors."} {"_id": "186336fb15a47ebdc6f0730d0cf4f56c58c5b906", "title": "Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models", "text": "Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give ADAS more time to avoid or prepare for the danger. In this work we anticipate driving maneuvers a few seconds before they occur. For this purpose we equip a car with cameras and a computing device to capture the driving context from both inside and outside of the car. We propose an Autoregressive Input-Output HMM to model the contextual information along with the maneuvers. We evaluate our approach on a diverse data set with 1180 miles of natural freeway and city driving and show that we can anticipate maneuvers 3.5 seconds before they occur with over 80% F1-score in real-time."} {"_id": "34ac076a96456860f674fb4dcd714762b847a341", "title": "Pattern recognition based techniques for fruit sorting : A survey", "text": "In most food industries, farms and fruit markets, quality sorting of fruits is done manually; only a few automated sorting systems have been developed for this task. Each fruit changes its color over its life span, passing through stages: unripe, semi-ripe, fully ripe and, at the end, rotten. Hence, we can predict the quality of a fruit by processing its color images and then applying pattern recognition techniques on those images. By quality of fruit we mean size, ripeness, sweetness, longevity, diseased/rotten, etc.
Some advanced technologies used for the classification of fruits are Infrared Imaging, Magnetic Resonance Imaging, Thermal Sensing, Electronic Nose sensing, etc.; but these are costlier than color image processing and pattern recognition techniques. This paper summarizes and compares various techniques of Pattern Recognition which are used effectively for fruit sorting. All fruits have their unique ways of passing through ripening stages; hence the pattern recognition techniques used for one fruit do not necessarily transfer to another fruit. This paper reviews the high quality research work done for the quality classification of Apples, Dates, Mangoes, Cranberry, Jatropha, Tomatoes, and some other fruits using pattern recognition techniques. Keywords: Classification & prediction, Pattern Recognition, Image Processing, Fruit Quality"} {"_id": "76d5a90f26e1270c952eac1fa048a83d63f1dd39", "title": "The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations.", "text": "In this article, we attempt to distinguish between the properties of moderator and mediator variables at a number of levels. First, we seek to make theorists and researchers aware of the importance of not using the terms moderator and mediator interchangeably by carefully elaborating, both conceptually and strategically, the many ways in which moderators and mediators differ. We then go beyond this largely pedagogical function and delineate the conceptual and strategic implications of making use of such distinctions with regard to a wide range of phenomena, including control and stress, attitudes, and personality traits. We also provide a specific compendium of analytic procedures appropriate for making the most effective use of the moderator and mediator distinction, both separately and in terms of a broader causal system that includes both moderators and mediators."} {"_id": "a3c3c084d4c30cf40e134314a5dcaf66b4019171", "title": "Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior", "text": ""} {"_id": "ba0644aa7569f33194090ade9f8f91fa51968b18", "title": "USER ACCEPTANCE OF COMPUTER TECHNOLOGY : A COMPARISON OF TWO THEORETICAL MODELS *", "text": "Computer systems cannot improve organizational performance if they aren't used. Unfortunately, resistance to end-user systems by managers and professionals is a widespread problem. To better predict, explain, and increase user acceptance, we need to better understand why people accept or reject computers. This research addresses the ability to predict people's computer acceptance from a measure of their intentions, and the ability to explain their intentions in terms of their attitudes, subjective norms, perceived usefulness, perceived ease of use, and related variables. In a longitudinal study of 107 users, intentions to use a specific system, measured after a one-hour introduction to the system, were correlated 0.35 with system use 14 weeks later. The intention-usage correlation was 0.63 at the end of this time period. Perceived usefulness strongly influenced people's intentions, explaining more than half of the variance in intentions at the end of 14 weeks. Perceived ease of use had a small but significant effect on intentions as well, although this effect subsided over time. Attitudes only partially mediated the effects of these beliefs on intentions.
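The determinants just described lend themselves to a simple regression illustration. The sketch below is an ordinary least-squares stand-in (not the study's actual analysis) that regresses usage intention on perceived usefulness (PU) and perceived ease of use (PEOU); the data, scale ranges, and coefficients are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 107                                   # sample size echoing the study
pu = rng.normal(5.0, 1.0, n)              # 7-point-scale style ratings (assumed)
peou = rng.normal(4.5, 1.0, n)
intention = 0.6 * pu + 0.15 * peou + rng.normal(0.0, 0.5, n)

# OLS fit of intention on an intercept, PU, and PEOU
X = np.column_stack([np.ones(n), pu, peou])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print("intercept, b_PU, b_PEOU:", beta)   # PU dominates, mirroring the finding
```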
Subjective norms had no effect on intentions. These results suggest the possibility of simple but powerful models of the determinants of user acceptance, with practical value for evaluating systems and guiding managerial interventions aimed at reducing the problem of underutilized computer technology. (INFORMATION TECHNOLOGY; USER ACCEPTANCE; INTENTION MODELS)"} {"_id": "d761aaacd25b762a1cd82413a34eee0992b86130", "title": "Self-perception: An alternative interpretation of cognitive dissonance phenomena.", "text": "A theory of self-perception is proposed to provide an alternative interpretation for several of the major phenomena embraced by Festinger's theory of cognitive dissonance and to explicate some of the secondary patterns of data that have appeared in dissonance experiments. It is suggested that the attitude statements which comprise the major dependent variables in dissonance experiments may be regarded as interpersonal judgments in which the observer and the observed happen to be the same individual and that it is unnecessary to postulate an aversive motivational drive toward consistency to account for the attitude change phenomena observed. Supporting experiments are presented, and metatheoretical contrasts between the \"radical\" behavioral approach utilized and the phenomenological approach typified by dissonance theory are discussed."} {"_id": "d9973f9367de4c2b1ac54c3157464e79ddeb3d6a", "title": "Attitudes, satisfaction and usage: Factors contributing to each in the acceptance of information technology", "text": "This research tests and develops the Technology Acceptance Model (TAM), introduced by Davis (1986), which attempts to explain end users\u2019 attitudes to computing technologies. It introduces several new variables, including compatibility, user characteristics, system rating and the end-user computing satisfaction (EUCS) construct, a surrogate measure for IT success and acceptance. A questionnaire with over seventy items was completed by a large number of users and LISREL, a technique for modelling a system of structural equations, was used to analyse the responses. The output shows the model as a whole fits the data very well and indicates significant relationships between variables in the model. These results confirm that TAM is a valuable tool for predicting attitudes, satisfaction, and usage from beliefs and external variables. They also show that relative advantage of the system contributed most to attitudes and satisfaction. Compatibility (of the system to the task performed) contributed most to usage and was the most important antecedent of the belief variables, including relative advantage."} {"_id": "cfa6986dd9f124aeb5c9430a5f5b638fb71d405b", "title": "Cardiorespiratory fitness estimation in free-living using wearable sensors", "text": "OBJECTIVE\nIn this paper we propose artificial intelligence methods to estimate cardiorespiratory fitness (CRF) in free-living using wearable sensor data.\n\n\nMETHODS\nOur methods rely on a computational framework able to contextualize heart rate (HR) in free-living, and use context-specific HR as a predictor of CRF without need for laboratory tests. In particular, we propose three estimation steps. Initially, we recognize activity primitives using accelerometer and location data. Using topic models, we group activity primitives and derive activity composites. We subsequently rank activity composites, and analyze the relation between ranked activity composites and CRF across individuals.
Finally, HR data in specific activity primitives and composites is used as a predictor in a hierarchical Bayesian regression model to estimate CRF level from the participant's habitual behavior in free-living.\n\n\nRESULTS\nWe show that by combining activity primitives and activity composites the proposed framework can adapt to the user and context, and outperforms other CRF estimation models, reducing estimation error by between 10.3% and 22.6% on a study population of 46 participants.\n\n\nCONCLUSIONS\nOur investigation showed that HR can be contextualized in free-living using activity primitives and activity composites, and that robust CRF estimation in free-living is feasible."} {"_id": "d65b6d948aa3ec4a1640a139362f7bab211a5f45", "title": "Sindhi Language Processing: A survey", "text": "In this era of information technology, natural language processing (NLP) has become a volatile field because of the digital reliance of today's communities. The growth of Internet usage is bringing communities, cultures, and languages online. In this regard, much of the work has been done on European and East Asian languages; as a result, these languages have reached a mature level in terms of computational processing. Despite the great importance of NLP, most South Asian languages are still in a developing phase. The Sindhi language is one of them, and it stands among the most ancient languages in the world. The Sindhi language has a great influence on the large community in the Sindh province of Pakistan, in some states of India, and in other countries. But unfortunately, it is at an infant level in terms of computational processing, because it has not received the attention of the language-engineering community, due to its complex morphological structure and the scarcity of language resources. Therefore, this study has been carried out in order to summarize the existing work on Sindhi Language Processing (SLP), to explore future research opportunities, and to identify some potential research problems. This paper will be helpful for researchers who want to find all the information regarding SLP in one place."} {"_id": "2c69688a2fc686cad14bfa15f8a0335b26b54054", "title": "Multi-View Representation Learning: A Survey from Shallow Methods to Deep Methods", "text": "Recently, multi-view representation learning has become a rapidly growing direction in machine learning and data mining areas. This paper introduces several principles for multi-view representation learning: correlation, consensus, and complementarity principles. Consequently, we first review the representative methods and theories of multi-view representation learning based on the correlation principle, especially on canonical correlation analysis (CCA) and its several extensions. Then from the viewpoint of consensus and complementarity principles we investigate the advancement of multi-view representation learning that ranges from shallow methods including multi-modal topic learning, multi-view sparse coding, and multi-view latent space Markov networks, to deep methods including multi-modal restricted Boltzmann machines, multi-modal autoencoders, and multi-modal recurrent neural networks. Further, we also provide an important perspective from manifold alignment for multi-view representation learning.
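As a pointer to how the correlation principle can be realized in code, here is a minimal textbook CCA sketch (whitening each view, then taking an SVD of the coupling matrix); it is a generic formulation under the stated regularization assumption, not code from any surveyed system.

```python
import numpy as np

def cca(X, Y, k=2, reg=1e-6):
    """Return the top-k canonical correlations between row-paired views X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each view; the singular values of the whitened cross-covariance
    # are exactly the canonical correlations.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    M = Wx @ Cxy @ Wy.T
    return np.linalg.svd(M, compute_uv=False)[:k]
```

The deep methods reviewed above replace the linear maps implicit in this sketch with learned nonlinear encoders, but the correlation objective is the same.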
Overall, this survey aims to provide an insightful overview of the theoretical basis and state-of-the-art developments in the field of multi-view representation learning and to help researchers find the most appropriate tools for particular applications."} {"_id": "21aebb53a45ccac7f6763d9c47477092599f6be1", "title": "A survey of the effects of aging on biometric identity verification", "text": ""} {"_id": "12e1923fb86ed06c702878bbed51b4ded2b16be1", "title": "Gesture recognition for smart home applications using portable radar sensors", "text": "In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time and frequency domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest neighbor based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high accuracy smart home and health monitoring purposes."} {"_id": "3a488bded1695743b34e39338ccc3c687ea771d0", "title": "Glomosim: a scalable network simulation environment", "text": "Large-scale hybrid networks that include wireless, wired, and satellite based communications are becoming common in both military and commercial situations. This paper describes a scalable simulation environment called GloMoSim (for Global Mobile Information System Simulator) that effectively utilizes parallel execution to reduce the simulation time of detailed high-fidelity models of large communication networks. The paper also presents a set of case studies that evaluate the performance of large wireless networks with thousands of nodes and compares the impact of different lower layer protocols on the performance of typical applications."} {"_id": "d55e7b0d6b4fb6e2fe7a74b66103b5866c1bfa84", "title": "School Leadership and Cyberbullying\u2014A Multilevel Analysis", "text": "Cyberbullying is a relatively new form of bullying, with both similarities and differences to traditional bullying. While earlier research has examined associations between school-contextual characteristics and traditional bullying, fewer studies have focused on the links to students' involvement in cyberbullying behavior. The aim of the present study is to assess whether school-contextual conditions in terms of teachers' ratings of the school leadership are associated with the occurrence of cyberbullying victimization and perpetration among students. The data are derived from two separate data collections performed in 2016: The Stockholm School Survey conducted among students in the second grade of upper secondary school (ages 17-18 years) in Stockholm municipality, and the Stockholm Teacher Survey which was carried out among teachers in the same schools. The data include information from 6067 students distributed across 58 schools, linked with school-contextual information based on reports from 1251 teachers. Cyberbullying victimization and perpetration are measured by students' self-reports.
Teachers' ratings of the school leadership are captured by an index based on 10 items; the mean value of this index was aggregated to the school level. Results from binary logistic multilevel regression models show that high teacher ratings of the school leadership are associated with less cyberbullying victimization and perpetration. We conclude that a strong school leadership potentially prevents cyberbullying behavior among students."} {"_id": "694210eec3a6262c5e7edbc2cfbf2d9341f9f426", "title": "Probabilistic graphical models for semi-supervised traffic classification", "text": "Traffic classification using machine learning continues to be an active research area. The majority of work in this area uses off-the-shelf machine learning tools and treats them as black-box classifiers. This approach turns all the modelling complexity into a feature selection problem. In this paper, we build a problem-specific solution to the traffic classification problem by designing a custom probabilistic graphical model. Graphical models are a modular framework to design classifiers which incorporate domain-specific knowledge. More specifically, our solution introduces semi-supervised learning, which means we learn from both labelled and unlabelled traffic flows. We show that our solution performs competitively compared to previous approaches while using less data and simpler features."} {"_id": "87df1f7da151df6797bf5a751f1cf5a3cdb1edcf", "title": "Risk Management: A Maturity Model Based on ISO 31000", "text": "Risk Management, according to the ISO Guide 73, is the set of \"coordinated activities to direct and control an organization with regard to risk\". In a nutshell, Risk Management is the business process used to manage risk in organizations. ISO 31000 defines a framework and process for risk management. However, implementing this standard without a detailed plan can become a burden on organizations. This paper presents a maturity model for the risk management process based on ISO 31000. The purpose of this model is to provide an assessment tool for organizations to use in order to get their current risk management maturity level. The results can then be used to create an improvement plan which will guide organizations to reach their target maturity level. This maturity model allows organizations to assess a risk management process according to the best practices defined in risk management references. The maturity model can also be used as a reference for improving this process since it sets a clear path of how a risk management process should be performed."} {"_id": "c81d2f308b80f8a008d329a22b63dc09503ae045", "title": "Recommending citations with translation model", "text": "Citation Recommendation is useful for an author to find out the papers or books that can support the materials she is writing about. It is a challenging problem since the vocabularies used in the content of papers and in citation contexts are usually quite different. To address this problem, we propose to use a translation model, which can bridge the gap between two heterogeneous languages.
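One standard way to realize such a translation model is IBM Model 1 estimated with EM; the sketch below learns word-level translation probabilities between citation contexts and cited-paper text on a toy corpus. The two-pair corpus and the choice of Model 1 are illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

# Toy parallel "corpus": (citation-context words, cited-paper words).
pairs = [("svm classifier".split(), "support vector machine".split()),
         ("topic model".split(), "latent dirichlet allocation".split())]

src_vocab = {w for s, _ in pairs for w in s}
t = defaultdict(lambda: 1.0 / len(src_vocab))    # t[(src, tgt)], uniform init

for _ in range(10):                              # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for src, tgt in pairs:
        for s in src:                            # E-step: expected alignments
            norm = sum(t[(s, w)] for w in tgt)
            for w in tgt:
                c = t[(s, w)] / norm
                count[(s, w)] += c
                total[w] += c
    for (s, w), c in count.items():              # M-step: re-estimate t
        t[(s, w)] = c / total[w]

print(t[("svm", "support")])   # bridged vocabulary: "svm" aligns to "support"
```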
We conduct an experiment and find that the translation model can provide much better citation candidates than the state-of-the-art methods."} {"_id": "bac06fd65318737c5d445a2a2679b0daca7766c2", "title": "An Affordable Augmented Reality Based Rehabilitation System for Hand Motions", "text": "Repetitive practices can facilitate the rehabilitation of the motor functions of the upper extremity for stroke survivors whose motor functions have been impaired. An affordable rehabilitation system for hand motion would be useful to provide intensive training to the patients. Augmented Reality (AR) can provide an intuitive interface to the users with a programmable environment and a sense of realism. In this research, a low cost viable rehabilitation system for hand motion is developed based on AR technology. In this application, seamless interaction between the virtual environment and the real hand is supported with markers and a self-designed data-glove. The data-glove is low cost (("} {"_id": "901f60e47aaf76e48a50b527d0ef249a16b4b11d", "title": "A Deep Learning Approach to IoT Authentication", "text": "At its peak, the Internet-of-Things will largely be composed of low-power devices with wireless radios attached. Yet, secure authentication of these devices amidst adversaries with much higher power and computational capability remains a challenge, even for advanced cryptographic and wireless security protocols. For instance, a high-power software radio could simply replay chunks of signals from a low-power device to emulate it. This paper presents a deep-learning classifier that learns hardware imperfections of low-power radios that are challenging to emulate, even for high-power adversaries. We build an LSTM framework, specifically sensitive to signal imperfections that persist over long durations. Experimental results from a testbed of 30 low-power nodes demonstrate high resilience to advanced software radio adversaries."} {"_id": "f395bfa935029944e13057e75b1f5525be4ca478", "title": "The Blockchain-Based Digital Content Distribution System", "text": "A blockchain-based digital content distribution system was developed. A decentralized, peer-to-peer authentication mechanism can be considered the ideal rights-management mechanism, and the blockchain has the potential to realize this ideal content distribution system. This is a successful model of the Superdistribution concept, which was announced almost 30 years ago. The proposed system was demonstrated and received much feedback toward a future practical system."} {"_id": "9a5c2e7a6058e627b24b6f5771215d8515769f3f", "title": "Applying Epidemic Algorithm for Financial Service Based on Blockchain Technology", "text": "Our global market is adopting emerging transformation strategies that can make the difference between success and failure. Through developments in technological innovation, smart contract systems are increasingly seen as alternative technologies that can significantly impact transactional processes. Blockchain is a smart contract protocol with trust, offering the potential for creating new transaction platforms, and it thus represents a radical change to the current core value creation by third parties. This results in enormous cost and time savings and reduced risk for the parties. This study proposed a method to improve the efficiency of distributed consensus in blockchains using an epidemic algorithm.
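To convey the flavor of epidemic dissemination, the toy simulation below pushes a new block through a network by random gossip; the network size and fan-out are assumed parameters, and real protocols add membership management and anti-entropy mechanisms on top of this core loop.

```python
import random

def gossip_rounds(n_nodes=1000, fanout=3, seed=42):
    """Count rounds until a block gossiped from node 0 reaches every node."""
    random.seed(seed)
    informed = {0}                       # node 0 holds the new block
    rounds = 0
    while len(informed) < n_nodes:
        nxt = set(informed)
        for _ in informed:               # each informed node pushes to peers
            for _ in range(fanout):
                nxt.add(random.randrange(n_nodes))
        informed = nxt
        rounds += 1
    return rounds                        # grows roughly like log(n_nodes)

print(gossip_rounds())
```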
The results showed that epidemic protocols can distribute information in a manner similar to a blockchain."} {"_id": "73a1116905643fad65c242c9f43e6b7fcc6b3aad", "title": "M4: A Visualization-Oriented Time Series Data Aggregation", "text": "Visual analysis of high-volume time series data is ubiquitous in many industries, including finance, banking, and discrete manufacturing. Contemporary, RDBMS-based systems for visualization of high-volume time series data have difficulty to cope with the hard latency requirements and high ingestion rates of interactive visualizations. Existing solutions for lowering the volume of time series data disregard the semantics of visualizations and result in visualization errors. In this work, we introduce M4, an aggregation-based time series dimensionality reduction technique that provides error-free visualizations at high data reduction rates. Focusing on line charts, as the predominant form of time series visualization, we explain in detail the drawbacks of existing data reduction techniques and how our approach outperforms the state of the art by respecting the process of line rasterization. We describe how to incorporate aggregation-based dimensionality reduction at the query level in a visualization-driven query rewriting system. Our approach is generic and applicable to any visualization system that uses an RDBMS as data source. Using real world data sets from high tech manufacturing, stock markets, and sports analytics domains we demonstrate that our visualization-oriented data aggregation can reduce data volumes by up to two orders of magnitude, while preserving perfect visualizations."} {"_id": "1e7cf9047604f39e517951d129b2b3eecf9e1cfb", "title": "Modeling Interestingness with Deep Neural Networks", "text": "This paper presents a deep semantic similarity model (DSSM), a special type of deep neural networks designed for text analysis, for recommending target documents to be of interest to a user based on a source document that she is reading. We observe, identify, and detect naturally occurring signals of interestingness in click transitions on the Web between source and target documents, which we collect from commercial Web browser logs. The DSSM is trained on millions of Web transitions, and maps source-target document pairs to feature vectors in a latent space in such a way that the distance between source documents and their corresponding interesting targets in that space is minimized. The effectiveness of the DSSM is demonstrated using two interestingness tasks: automatic highlighting and contextual entity search. The results on large-scale, real-world datasets show that the semantics of documents are important for modeling interestingness and that the DSSM leads to significant quality improvement on both tasks, outperforming not only the classic document models that do not use semantics but also state-of-the-art topic models."} {"_id": "4ae1a887623634d517e289ec676d90464996bd6a", "title": "Developments in Radar Imaging", "text": "Using range and Doppler information to produce radar images is a technique used in such diverse fields as air-to-ground imaging of objects, terrain, and oceans and ground-to-air imaging of aircraft, space objects, and planets.
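The M4 aggregation described above is compact enough to sketch directly: for each pixel-column-wide time bucket, keep only the first, last, minimum, and maximum points, which is all that line rasterization needs. The uniform in-memory bucketing below is an assumption for illustration; M4 itself is expressed as query rewriting inside the RDBMS.

```python
import numpy as np

def m4(t, v, width):
    """Reduce a time-sorted series (t, v) to <= 4*width points for a chart
    `width` pixels wide, keeping first/last/min/max per time bucket."""
    edges = np.linspace(t[0], t[-1], width + 1)
    keep = []
    for i in range(width):
        lo, hi = edges[i], edges[i + 1]
        m = (t >= lo) & (t <= hi) if i == width - 1 else (t >= lo) & (t < hi)
        idx = np.nonzero(m)[0]
        if idx.size == 0:
            continue                       # empty bucket: nothing to keep
        keep += [idx[0], idx[-1],
                 idx[np.argmin(v[idx])], idx[np.argmax(v[idx])]]
    keep = np.unique(keep)                 # de-duplicate, restore time order
    return t[keep], v[keep]
```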
A review of the range-Doppler technique is presented along with a description of radar imaging forms including details of data acquisition and processing techniques."} {"_id": "30420d73113586934cb680e999c8a9def7f5bccb", "title": "The Expressive Power of SPARQL", "text": "This paper studies the expressive power of SPARQL. The main result is that SPARQL and non-recursive safe Datalog with negation have equivalent expressive power, and hence, by classical results, SPARQL is equivalent from an expressive point of view to Relational Algebra. We present explicit generic rules of the transformations in both directions. Among other findings of the paper are the proof that negation can be simulated in SPARQL, that non-safe filters are superfluous, and that current SPARQL W3C semantics can be simplified to a standard"} {"_id": "c8934bd3b0d8825a2a9bb112653250956b3a2a67", "title": "Working Memory Deficits can be Overcome : Impacts of Training and Medication on Working Memory in Children with ADHD", "text": "This study evaluated the impact of two interventions\u2014a training program and stimulant medication\u2014on working memory (WM) function in children with attention deficit hyperactivity disorder (ADHD). Twenty-five children aged between 8 and 11 years participated in training that taxed WM skills to the limit for a minimum of 20 days, and completed other assessments of WM and IQ before and after training, and with and without prescribed drug treatment. While medication significantly improved visuo-spatial memory performance, training led to substantial gains in all components of WM across untrained tasks. Training gains associated with the central executive persisted over a 6-month period. IQ scores were unaffected by either intervention. These findings indicate that the WM impairments in children with ADHD can be differentially ameliorated by training and by stimulant medication."} {"_id": "4bd70c3a9f1455e59d5ff9a2cc477b3028784556", "title": "Toward Fast and Accurate Neural Discourse Segmentation", "text": "Discourse segmentation, which segments texts into Elementary Discourse Units, is a fundamental step in discourse analysis. Previous discourse segmenters rely on complicated hand-crafted features and are not practical in actual use. In this paper, we propose an end-to-end neural segmenter based on the BiLSTM-CRF framework. To improve its accuracy, we address the problem of data insufficiency by transferring a word representation model that is trained on a large corpus. We also propose a restricted self-attention mechanism in order to capture useful information within a neighborhood. Experiments on the RST-DT corpus show that our model is significantly faster than previous methods, while achieving new state-of-the-art performance."} {"_id": "b2d9a76773ba090210145e0ab72a6e339c953461", "title": "Apollonian Circle Packings: Geometry and Group Theory II. Super-Apollonian Group and Integral Packings", "text": "Apollonian circle packings arise by repeatedly filling the interstices between four mutually tangent circles with further tangent circles. Such packings can be described in terms of the Descartes configurations they contain, where a Descartes configuration is a set of four mutually tangent circles in the Riemann sphere, having disjoint interiors.
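The curvature arithmetic underlying such configurations is the Descartes circle theorem: given curvatures k1, k2, k3 of three mutually tangent circles, the fourth tangent circle satisfies k4 = k1 + k2 + k3 ± 2√(k1k2 + k2k3 + k3k1), with a negative curvature denoting an enclosing circle. A small sketch:

```python
import math

def fourth_curvatures(k1, k2, k3):
    """Both solutions of the Descartes circle theorem for three mutually
    tangent circles; the radicand is non-negative for valid configurations."""
    s = k1 + k2 + k3
    root = 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

# Three mutually tangent circles with curvatures 2, 2, 3 admit either a small
# inscribed circle of curvature 15 or an enclosing circle of curvature -1,
# giving the integral Descartes quadruple (-1, 2, 2, 3).
print(fourth_curvatures(2, 2, 3))   # -> (15.0, -1.0)
```

Iterating this substitution (reflecting one curvature at a time) is exactly the Apollonian group action the paper studies, which is why integrality propagates through the whole packing.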
Part I showed there exists a discrete group, the Apollonian group, acting on a parameter space of (ordered, oriented) Descartes configurations, such that the Descartes configurations in a packing form an orbit under the action of this group. It is observed that there exist infinitely many types of integral Apollonian packings in which all circles have integer curvatures, with the integral structure being related to the integral nature of the Apollonian group. Here we consider the action of a larger discrete group, the super-Apollonian group, also having an integral structure, whose orbits describe the Descartes quadruples of a geometric object we call a super-packing. The circles in a super-packing never cross each other but are nested to an arbitrary depth. Certain Apollonian packings and super-packings are strongly integral in the sense that the curvatures of all circles are integral and the curvature\u00d7centers of all circles are integral. We show that (up to scale) there are exactly 8 different (geometric) strongly integral super-packings, and that"} {"_id": "6753404e2515bc76f17016d0ec52d91b65eb1aa3", "title": "Addressing challenges in the clinical applications associated with CRISPR/Cas9 technology and ethical questions to prevent its misuse", "text": "Xiang Jin Kang, Chiong Isabella Noelle Caparas, Boon Seng Soh, Yong Fan 1 Key Laboratory for Major Obstetric Diseases of Guangdong Province, Key Laboratory of Reproduction and Genetics of Guangdong Higher Education Institutes, Center for Reproductive Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China 2 Disease Modeling and Therapeutics Laboratory, A*STAR Institute of Molecular and Cell Biology, 61 Biopolis Drive Proteos, Singapore 138673, Singapore & Correspondence: bssoh@imcb.a-star.edu.sg (B. S. Soh), yongfan011@gzhmu.edu.cn (Y. Fan)"} {"_id": "a8b0e7517ff5121eda0b60cf7cbf203d6fac525d", "title": "Common mode EMI reduction technique for interleaved MHz critical mode PFC converter with coupled inductor", "text": "Coupled inductors have been widely adopted in VR applications for many years because of benefits such as reducing current ripple and improving transient performance. In this presentation, the coupled inductor concept is applied to an interleaved MHz totem-pole CRM PFC converter with GaN devices. The coupled inductor in the CRM PFC converter can reduce switching frequency variation, help achieve ZVS, and reduce circulating energy. Therefore the coupled inductor can improve the efficiency of the PFC converter. In addition, a balance technique is applied to help minimize CM noise. This paper will introduce how to achieve balance with the coupled inductor. A novel PCB-winding inductor design will be provided. The PCB-winding coupled inductor has similar loss to a litz-wire inductor but requires much less labor in manufacture."} {"_id": "8ad12d3ee186403b856639b58d7797aa4b89a6c7", "title": "Temporal Relational Reasoning in Videos", "text": "Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets, Something-Something, Jester, and Charades, which fundamentally depend on temporal relational reasoning.
Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos."} {"_id": "2a628e4a9c5f78bc6dcdf16514353336547846cc", "title": "Achieving high utilization with software-driven WAN", "text": "We present SWAN, a system that boosts the utilization of inter-datacenter networks by centrally controlling when and how much traffic each service sends and frequently re-configuring the network's data plane to match current traffic demand. But done simplistically, these re-configurations can also cause severe, transient congestion because different switches may apply updates at different times. We develop a novel technique that leverages a small amount of scratch capacity on links to apply updates in a provably congestion-free manner, without making any assumptions about the order and timing of updates at individual switches. Further, to scale to large networks in the face of limited forwarding table capacity, SWAN greedily selects a small set of entries that can best satisfy current demand. It updates this set without disrupting traffic by leveraging a small amount of scratch capacity in forwarding tables. Experiments using a testbed prototype and data-driven simulations of two production networks show that SWAN carries 60% more traffic than the current practice."} {"_id": "35deb0910773b810a642ff3b546de4eecfdc3ac3", "title": "B4: experience with a globally-deployed software defined wan", "text": "We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge.\n These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work."} {"_id": "122560e02003d5c0f1de3e0083c7a0474c8f1a53", "title": "BotMiner: Clustering Analysis of Network Traffic for Protocol- and Structure-Independent Botnet Detection", "text": "Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most of the current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. 
In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence the botnet signatures, and C&C server names/addresses). We start from the definition and essential properties of botnets. We define a botnet as a coordinated group of malware instances that are controlled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers/peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross-cluster correlation to identify the hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets (IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm), and has a very low false positive rate."} {"_id": "1783cbfb965484403b6702d07ccde5dc74ebb132", "title": "Forwarding metamorphosis: fast programmable match-action processing in hardware for SDN", "text": "In Software Defined Networking (SDN) the control plane is physically separate from the forwarding plane. Control software programs the forwarding plane (e.g., switches and routers) using an open interface, such as OpenFlow. This paper aims to overcome two limitations in current switching chips and the OpenFlow protocol: i) current hardware switches are quite rigid, allowing ``Match-Action'' processing on only a fixed set of fields, and ii) the OpenFlow specification only defines a limited repertoire of packet processing actions. We propose the RMT (reconfigurable match tables) model, a new RISC-inspired pipelined architecture for switching chips, and we identify the essential minimal set of action primitives to specify how headers are processed in hardware. RMT allows the forwarding plane to be changed in the field without modifying hardware. As in OpenFlow, the programmer can specify multiple match tables of arbitrary width and depth, subject only to an overall resource limit, with each table configurable for matching on arbitrary fields. However, RMT allows the programmer to modify all header fields much more comprehensively than in OpenFlow. Our paper describes the design of a 64 port by 10 Gb/s switch chip implementing the RMT model. Our concrete design demonstrates, contrary to concerns within the community, that flexible OpenFlow hardware switch implementations are feasible at almost no additional cost or power."} {"_id": "2740b1c450c7b6fa0d7cb8b52681c2e5b6f72752", "title": "Stability and Generalization", "text": "We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization.
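As a small empirical companion to these stability notions, the sketch below measures how much a ridge (Tikhonov-regularized) solution moves when a single training point is removed; stronger regularization yields smaller leave-one-out perturbations, mirroring the stability-based generalization bounds. Data and constants are synthetic.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=50)

for lam in (0.01, 1.0, 100.0):
    w_full = ridge_fit(X, y, lam)
    # Leave-one-out perturbations of the learned predictor.
    diffs = [np.linalg.norm(ridge_fit(np.delete(X, i, 0),
                                      np.delete(y, i), lam) - w_full)
             for i in range(len(y))]
    print(f"lambda={lam:7.2f}  max ||w_loo - w||: {max(diffs):.5f}")
```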
We demonstrate how to apply the results to SVM for regression and classification."} {"_id": "cf28677d9a095fde0ea19cb90d1f84f8b8d2e56c", "title": "Zero-voltage switching flyback-boost converter with voltage-doubler rectifier for high step-up applications", "text": "A zero-voltage switching (ZVS) flyback-boost (FB) converter with a voltage-doubler rectifier (VDR) is proposed. By combining the common part of a flyback converter and a boost converter in a parallel-input/series-output (PISO) configuration, the proposed circuit can increase the step-up ratio and clamp the surge voltage of the switches. The secondary VDR further extends the step-up ratio and allows its voltage stress to be clamped. An auxiliary switch instead of a boost diode enables all switches to be turned on under ZVS conditions. The zero-current turn-off of the secondary VDR alleviates its reverse-recovery losses. The operation principles, the theoretical analysis, and the design consideration are investigated. The experimental results from a 250W and 42V-to-400V prototype are shown to verify the proposed scheme."} {"_id": "9d3133a2d3c536fe628fd1fe1d428371b414ba0e", "title": "Human Gait Recognition Using Patch Distribution Feature and Locality-Constrained Group Sparse Representation", "text": "In this paper, we propose a new patch distribution feature (PDF) (i.e., referred to as Gabor-PDF) for human gait recognition. We represent each gait energy image (GEI) as a set of local augmented Gabor features, which concatenate the Gabor features extracted from different scales and different orientations together with the X-Y coordinates. We learn a global Gaussian mixture model (GMM) (i.e., referred to as the universal background model) with the local augmented Gabor features from all the gallery GEIs; then, each gallery or probe GEI is further expressed as the normalized parameters of an image-specific GMM adapted from the global GMM. Observing that one video is naturally represented as a group of GEIs, we also propose a new classification method called locality-constrained group sparse representation (LGSR) to classify each probe video by minimizing the weighted l1,2 mixed-norm-regularized reconstruction error with respect to the gallery videos. In contrast to the standard group sparse representation method that is a special case of LGSR, the group sparsity and local smooth sparsity constraints are both enforced in LGSR. Our comprehensive experiments on the benchmark USF HumanID database demonstrate the effectiveness of the newly proposed feature Gabor-PDF and the new classification method LGSR for human gait recognition. Moreover, LGSR using the new feature Gabor-PDF achieves the best average Rank-1 and Rank-5 recognition rates on this database among all gait recognition algorithms proposed to date."} {"_id": "c6a2aaf173061731ec7f06f3a1d4b3734f10af18", "title": "A series resonant converter with constant on-time control for capacitor charging applications", "text": "A capacitor charging power supply (CCPS) has been assembled. A full-bridge series resonant topology using MOSFETs as switching elements makes up the bulk of the high-power section of the CCPS. A high-voltage transformer couples the output stage to the resonant inverter, and the secondary current of this transformer is rectified to provide the charging current. The CCPS uses constant on-time control with zero current switching. The control scheme is implemented with Unitrode's UC3860 resonant mode power supply controller.
The control circuitry also protects the CCPS against both overvoltage and overcurrent conditions. The CCPS has been tested in the laboratory, where it successfully charged capacitors ranging from 90 nF to 1 μF to 2000 V."} {"_id": "770ce8f2b66749c017a7e38ba7e8d504576afccf", "title": "A vehicular rooftop, shark-fin, multiband antenna for the GPS/LTE/cellular/DSRC systems", "text": "A shark-fin rooftop multiband antenna for the automotive industry is proposed. The antenna system receives Right Hand Circularly Polarized (RHCP) satellite signals for the Global Positioning System's (GPS) L1-Band and Vertically Linearly Polarized (VLP) signals for the Long Term Evolution (LTE), the cellular frequencies, the Universal Mobile Telecommunications Systems (UMTS) and the Dedicated Short Range Communications (DSRC) system. A ceramic patch antenna element was used for the GPS signals and a printed monopole utilizing a printed ground sleeve technique for the cellular and LTE signals. A study of four potential DSRC antenna elements to determine the appropriate antenna element location within the antenna cover to obtain optimum performance is also presented and discussed in detail. The antenna module was measured on a 1-meter diameter ground plane and on a vehicle roof model. The design and measurement results are presented and discussed below."} {"_id": "25b87d1d17adabe2923da63e0b93fb7d2bac73f7", "title": "Service specific anomaly detection for network intrusion detection", "text": "The constant increase of attacks against networks and their resources (as recently shown by the CodeRed worm) causes a necessity to protect these valuable assets. Firewalls are now a common installation to repel intrusion attempts in the first place. Intrusion detection systems (IDS), which try to detect malicious activities instead of preventing them, offer additional protection when the first defense perimeter has been penetrated. ID systems attempt to pin down attacks by comparing collected data to predefined signatures known to be malicious (signature based) or to a model of legal behavior (anomaly based). Anomaly-based systems have the advantage of being able to detect previously unknown attacks, but they suffer from the difficulty of building a solid model of acceptable behavior and from the high number of alarms caused by unusual but authorized activities. We present an approach that utilizes application specific knowledge of the network services that should be protected. This information helps to extend current, simple network traffic models to form an application model that allows to detect malicious content hidden in single network packets. We describe the features of our proposed model and present experimental data that underlines the efficiency of our systems."} {"_id": "dc1b9c537812b3542c08ab982394abca998fc211", "title": "Wideband and Unidirectional Cavity-Backed Folded Triangular Bowtie Antenna", "text": "In this paper, a cavity-backed folded triangular bowtie antenna (FTBA) is proposed and investigated. Comparisons show that it has much smaller fluctuation of input impedance and much larger bandwidth (BW) for stable radiation patterns than a cavity-backed ordinary TBA. The proposed antenna is fed by a conventional balun composed of a transition from a microstrip line to a parallel stripline. It achieves an impedance BW of 92.2% for |S11| ≤ -10 dB, stable gain of around 9.5 dBi and unidirectional radiation patterns over the whole operating band.
A close examination of the antenna aperture efficiency and the electric field distribution in the aperture provides an explanation for the stable gain across such a large frequency band. Comparisons are made against several conventional antennas, including short backfire and microstrip patch antennas, to illustrate the salient features of the proposed antenna. A design guideline is also given for practical applications."} {"_id": "e50d15d7d5a7435d709757fa29da1a752e977d3c", "title": "A new and rapid colorimetric determination of acetylcholinesterase activity.", "text": "A photometric method for determining acetylcholinesterase activity of tissue extracts, homogenates, cell suspensions, etc., has been described. The enzyme activity is measured by following the increase of yellow color produced from thiocholine when it reacts with dithiobisnitrobenzoate ion. It is based on the coupling of these reactions: (enzyme) acetylthiocholine → thiocholine + acetate; thiocholine + dithiobisnitrobenzoate → yellow color. The latter reaction is rapid and the assay is sensitive (i.e. a 10 μl sample of blood is adequate). The use of a recorder has been most helpful, but is not essential. The method has been used to study the enzyme in human erythrocytes and homogenates of rat brain, kidney, lungs, liver and muscle tissue. Kinetic constants determined by this system for erythrocyte cholinesterase are presented. The data obtained with acetylthiocholine as substrate are similar to those with acetylcholine. INTRODUCTION A few years ago Bonting and Featherstone1 introduced a modification of the Hestrin hydroxamic acid method2 suitable for the determination of cholinesterase levels in small quantities of cells cultured in vitro. This modification was used successfully in several studies involving the control of enzyme levels in cells by manipulating the levels of substrates or closely related compounds present in the medium.3 Several interesting areas of research were indicated by these studies. However, this modified method of enzyme assay, although scaled down to the micro-level, had several disadvantages. Among these was the fact that only a terminal figure could be obtained from the material in one tube of cultured cells. Thus, the time course of the reaction could not be followed without resorting to separate experimental tubes for each time interval desired. The method also had the disadvantage that the color measured was developed from the remainder of an added substrate, a procedure in which the possibility of error is relatively great when the level of enzyme activity is small. Consideration of the relative merits of various methods which might be useful in studying the time-course of acetylcholinesterase activity in very small tissue samples led us to combine a method reported by Koelle4 with a sulfhydryl reagent studied by Ellman.5 This new method, which is presented here, is extremely sensitive and is applicable to either small amounts of tissue or to low concentrations of enzyme. It makes detailed kinetic studies of acetylcholinesterase activity possible. The progress of the hydrolysis is followed by the measurement of a product of the reaction. Acetylthiocholine is used as the substrate. This analog of the natural substrate has been used most extensively by Koelle4 for histochemical localization. Other workers6,7 have used the sulfur analog in the enzyme assay.
Their work, in addition to data we shall present, suggests that this compound is a satisfactory substitute for the natural substrate, and differs much less than some of the synthetic substrates frequently used in assays of phosphatases, trypsin, chymotrypsin, pepsin, etc. The principle of the method is the measurement of the rate of production of thiocholine as acetylthiocholine is hydrolyzed. This is accomplished by the continuous reaction of the thiol with 5:5′-dithiobis-2-nitrobenzoate ion (I):5 (enzyme) H2O + (CH3)3N+CH2CH2SCOCH3 → (CH3)3N+CH2CH2S- + CH3COO- + 2H+; (CH3)3N+CH2CH2S- + RSSR (I) → (CH3)3N+CH2CH2SSR + RS- (II), to produce the yellow anion of 5-thio-2-nitro-benzoic acid (II). The rate of color production is measured at 412 mμ in a photometer. The reaction with the thiol has been shown to be sufficiently rapid so as not to be rate limiting in the measurement of the enzyme, and in the concentrations used does not inhibit the enzymic hydrolysis. By recording the output of the photometer continuously, records of the complete assay can be obtained (Fig. 1). We considered it desirable to establish that this method yields results comparable with other procedures. For this reason, the effects of inhibitors were examined; the kinetic constants were calculated and compared with those obtained by other methods. In addition, we were able to compare assays on blood samples by the ferric-hydroxamate method and the present one. METHODS The reaction rates were recorded with a Beckman DU spectrophotometer equipped with a Beckman adapter and a Minneapolis-Honeywell recorder. The general method used was to place buffer in the photocell and add concentrated solutions of reagents by means of micropipettes. The mixture was stirred by continued blowing through the pipettes while moving them around the bottom of the photometer cells. In this way, reagents were added, mixed, and the cover of the cell compartment replaced within 10-15 sec. Our photometer required 30-40 sec to become stabilized to new light conditions. Thus, there was about 1 min when the readings were due to a combination of factors (e.g. bubbles rising through the solutions, sulfhydryl material in the \u201cenzyme\u201d, etc.) which were unrelated to the desired measurements. Subsequent readings were strictly dependent on the absorption of the solution under consideration and even rapid changes were followed by the recorder faithfully, as evidenced by the reproducibility of time-transmission curves (Fig. 1). Solutions. Buffer. Phosphate, 0.1 M, pH 8.0. Substrate. Acetylthiocholine iodide,* 0.075 M (21.67 mg/ml). This solution was used successfully for 10-15 days if kept refrigerated. Reagent. Dithiobisnitrobenzoic acid (DTNB), 0.01 M: 39.6 mg of the 5:5′-dithiobis-2-nitrobenzoic acid† prepared as described previously5 were dissolved in 10 ml pH 7.0 phosphate buffer (0.1 M) and 15 mg of sodium bicarbonate were added. The reagent was made up in buffer of pH 7, in which it was more stable than in that of pH 8. Enzyme. Bovine erythrocyte cholinesterase (Nutritional Biochem. Corp., 20,000 units) was dissolved in 20 ml of 1% gelatin. This solution was diluted 1:200 with water for use, yielding a solution of 5 units/ml. General method. A typical run used: 3.0 ml pH 8.0 buffer, 20.0 μl substrate, 100.0 μl DTNB (reagent), 50.0 μl enzyme. The results of several runs are shown in Fig. 1. The blank for such a run consists of buffer, substrate, and DTNB solutions.
The absorbances were read from the strip charts and plotted on rectangular graph paper, the best line drawn through the points and the slope measured. In a run such as that described above, the linear portion of the curve describing the hydrolysis was observed during the first 15-20 min of the reaction; the slope is the rate in absorbance units/min. At this pH level, there is an appreciable non-enzymic hydrolysis of the substrate, and for long runs it was necessary to correct for this. The rate of non-enzymic hydrolysis of acetylthiocholine at 25° was 0.0016 absorbance units per min. The procedures have been extended to micro-size. A run comparable to those in Fig. 1 was made in a micro-cell (total solution volume was 0.317 ml). The rate was 0.102/min, the same as that determined in the larger cuvettes. Since the extinction coefficient of the yellow anion (II) is known,5 the rates can be converted to absolute units, viz.: rate (moles/l. per min) = (Δ absorbance/min) / (1.36 × 10^4). In dealing with cell extracts or suspensions, a blank consisting of extract or suspension, DTNB, and buffer may be required to correct for release of thiol material from the cells and the absorbance of the other materials in the suspension. Method for blood. A fairly stable suspension was formed from whole blood or washed human erythrocytes. Since the acetylcholinesterase is on the cell membrane, hemolysis was not necessary. The assay of blood was carried out as follows: (1) A suspension of the blood cells§ in phosphate buffer (pH 8.0, 0.1 M) was prepared. The most practical dilution was 1:600 (e.g. 10 μl blood into 6 ml buffer). (2) Exactly 3.0 ml of the suspension were pipetted into a cuvette. * California Corporation for Biochemical Research, Los Angeles, California. † This is now available from the Aldrich Chemical Co., 2369 No. 29th, Milwaukee 10, Wisconsin. ‡ Strip charts printed in absorbance units are available from Minneapolis-Honeywell Corporation, Chart 5871. § Red cell counts were performed by the clinical laboratory. Fig. 1. Photograph of strip chart record of two identical assays. At the arrow, physostigmine salicylate (final concentration, 3 × 10^-7 M) was added to a third replicate."} {"_id": "59b6d25b2e69f6114646e2913005263817fdf00e", "title": "Entity-Relationship Extraction from Wikipedia Unstructured Text", "text": "Wikipedia has been the primary source of information for many automatically-generated Semantic Web data sources. However, they suffer from incompleteness since they largely do not cover information contained in the unstructured texts of Wikipedia. Our goal is to extract structured entity-relationships in RDF from such unstructured texts, ultimately using them to enrich existing data sources. Our extraction technique is aimed to be topic-independent, leveraging grammatical dependency of sentences and semantic refinement. Preliminary evaluations of the proposed approach have shown some promising results."} {"_id": "71271e751e94ede85484250d0d8f7fc444423533", "title": "Future Internet of Things: open issues and challenges", "text": "Internet of Things (IoT) and its relevant technologies have been attracting the attention of researchers from academia, industry, and government in recent years.
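As a sketch of the rate conversion quoted in the assay above, the helper below divides the recorded slope by the extinction coefficient of the yellow anion (1.36 × 10^4 l/mole/cm at 412 mμ); the light-path default and the example slope (the micro-cell run) follow the text, while the function and constant names are ours.

```python
EXTINCTION = 1.36e4          # l / (mole * cm), 5-thio-2-nitrobenzoate anion

def rate_moles_per_litre_min(delta_absorbance_per_min, path_cm=1.0):
    """rate (moles/l per min) = (delta A / min) / (extinction * light path)."""
    return delta_absorbance_per_min / (EXTINCTION * path_cm)

# The 0.102 absorbance-units/min micro-cell run quoted above:
print(rate_moles_per_litre_min(0.102))   # ~7.5e-06 moles/l per min
```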
However, since the requirements of the IoT are quite different from what the Internet today can offer, several innovative techniques have been gradually developed and incorporated into IoT, which is referred to as the Future Internet of Things (FIoT). Among them, how to extract \u2018\u2018data\u2019\u2019 and transfer them into \u2018\u2018knowledge\u2019\u2019 from the sensing layer to the application layer has become a vital issue. This paper begins with an overview of IoT and FIoT, followed by discussions on how to apply data mining and computational intelligence to FIoT. An intelligent data management framework inspired by swarm optimization will then be given. Finally, open issues and future trends of this field"} {"_id": "e773cdd58f066db97b305fcb5dbad551aa38f00c", "title": "Reading and Writing as a Creative Cycle: the Need for a Computational Model", "text": "The field of computational narratology has produced many efforts aimed at generating narrative by computational means. In recent times, a number of such efforts have considered the task of modelling how a reader might consume the story. Whereas all these approaches are clearly different aspects of the task of generating narrative, so far the efforts to model them have occurred as separate and disjoint initiatives. There is an enormous potential for improvement if a way was found to combine results from these initiatives with one another. The present position paper provides a breakdown of the activity of creating stories into five stages that are conceptually different from a computational point of view and represent important aspects of the overall process as observed either in humans or in existing systems. These stages include a feedback loop that builds interpretations of an ongoing composition and provides feedback based on these to inform the composition process. This model provides a theoretical framework that can be employed first to understand how the various aspects of the task of generating narrative relate to one another, second to identify which of these aspects are being addressed by the different existing research efforts, and finally to point the way towards possible integrations of these aspects within progressively more complex systems."} {"_id": "df91209c3933b263885a42934f78e7216bc2dc74", "title": "Kinematics, Dynamics and Power Consumption Analyses for Turning Motion of a Six-Legged Robot", "text": "This paper deals with kinematics, dynamics and power consumption analyses of a six-legged robot generating turning motions to follow a circular path. Direct and inverse kinematics analysis has been carried out for each leg in order to develop an overall kinematics model of the six-legged robot. It aims to estimate energy-optimal feet forces and joint torques of the six-legged robot, which are necessary for its real-time control. To determine the optimum feet forces, two approaches are developed, namely minimization of the norm of feet forces and minimization of the norm of joint torques using a least square method, and their performances are compared. The developed kinematics and dynamics models are tested through computer simulations for generating turning motion of a statically stable six-legged robot over flat terrain with four different duty factors. The maximum values of feet forces and joint torques decrease with the increase of duty factor. A power consumption model has
been derived for the statically stable wave gaits to minimize the power requirement for both optimal foot force distributions and optimal foot-hold selection. The variations of average power consumption with the height of the trunk body and radial offset have been analyzed in order to find the energy-optimal foothold. A parametric study on energy consumption has been carried out by varying the angular velocity of the robot to minimize the total energy consumption during locomotion. It has been found that the energy consumption decreases with the increase of angular velocity for a particular traveled distance."} {"_id": "34ad25995c2e2ab2ef6e39d36db8808805b2914b", "title": "An Autonomous Reliability-Aware Negotiation Strategy for Cloud Computing Environments", "text": "The Cloud computing paradigm allows subscription-based access to computing and storage services over the Internet. Since, with advances of Cloud technology, operations such as discovery, scaling, and monitoring are accomplished automatically, negotiation between Cloud service requesters and providers can be a bottleneck if it is carried out by humans. Therefore, our objective is to offer a state-of-the-art solution to automate the negotiation process in Cloud environments. In previous works in the SLA negotiation area, requesters trust whatever QoS criteria values providers offer in the process of negotiation. However, the proposed negotiation strategy for requesters in this work is capable of assessing the reliability of offers received from Cloud providers. In addition, our proposed negotiation strategy for Cloud providers considers utilization of resources when it generates new offers during negotiation and concedes more on the price of less utilized resources. The experimental results show that our strategy helps Cloud providers to increase their profits when they are participating in parallel negotiation with multiple requesters."} {"_id": "790989568b2290298bcf15a7b06abc6c27720f75", "title": "Medication intake adherence with real time activity recognition on IoT", "text": "Usefulness of health care services is seriously affected by medication adherence. Medication intake is one of the cases where adherence may be difficult for patients who are willing to undertake the prescribed therapy. Internet of Things (IoT) infrastructure for activity monitoring is a strong candidate solution to maintain adherence of forgetful patients. In this study, we propose an IoT framework where medication intake is ensured with real time continuous activity recognition. We present our use case with a focus on the application and network layers. We utilize an activity classification scheme, which considers inter-activity detection consistency based on non-predefined feature extraction as the application layer. The network layer includes a gateway structure ensuring end-to-end reliability in the connection between a wireless sensor network (WSN) and the Internet.
Results obtained in a simulation environment suggest that the selected application and network layers provide a feasible solution for the medication intake use case."} {"_id": "c1ff5cdadaa8215d885c398fb0a3691550ad5770", "title": "An Overview of Small Unmanned Aerial Vehicles for Air Quality Measurements: Present Applications and Future Prospectives", "text": "Assessment of air quality has been traditionally conducted by ground based monitoring, and more recently by manned aircrafts and satellites. However, performing fast, comprehensive data collection near pollution sources is not always feasible due to the complexity of sites, moving sources or physical barriers. Small Unmanned Aerial Vehicles (UAVs) equipped with different sensors have been introduced for in-situ air quality monitoring, as they can offer new approaches and research opportunities in air pollution and emission monitoring, as well as for studying atmospheric trends, such as climate change, while ensuring urban and industrial air safety. The aims of this review were to: (1) compile information on the use of UAVs for air quality studies; and (2) assess their benefits and range of applications. An extensive literature review was conducted using three bibliographic databases (Scopus, Web of Knowledge, Google Scholar) and a total of 60 papers were found. This relatively small number of papers implies that the field is still in its early stages of development. We concluded that, while the potential of UAVs for air quality research has been established, several challenges still need to be addressed, including: the flight endurance, payload capacity, sensor dimensions/accuracy, and sensitivity. However, the challenges are not simply technological; in fact, policy and regulations, which differ between countries, represent the greatest challenge to facilitating the wider use of UAVs in atmospheric research."} {"_id": "2815c2835d6056479d64aa64d879df9cd2572d2f", "title": "Predicting India Volatility Index: An Application of Artificial Neural Network", "text": "Forecasting has always been an area of interest for researchers in various realms of finance, especially in the stock market, e.g. stock index, return on a stock, etc. Stock market volatility is one such area. Since the inception of the implied volatility index (VIX) by the Chicago Board of Options Exchange (CBOE) in 1993, the VIX index has generated a lot of interest. This study examines the predictive ability of several technical indicators related to the VIX index to forecast the next trading day's volatility. There is a wide set of methods available for forecasting in finance. In this study, the artificial neural network (ANN) modeling technique has been employed to forecast the upward or downward movement in the next trading day's volatility using indicators based on India VIX (a volatility index based on the NIFTY Index Option prices). The results of the study reveal that ANN models can be genuinely useful in forecasting downward movements in VIX. The knowledge of a more probable downward movement in volatility might be a significant value-add for investors and help them in making decisions related to trading."} {"_id": "873c4a435d52f803e8391dde9be89044ff630725", "title": "A Novel Transformer Structure for High power, High Frequency converter", "text": "Power transformer structure is a key factor in high-power, high-frequency converter performance, which includes efficiency, thermal performance and power density.
This paper proposes a novel transformer structure for kilowatt-level, high-frequency converters in which reinforced insulation is needed between the secondary side and the primary side. The transformer has spiral-wound primary layers using TIW (triple insulated wire) and PCB-winding secondary layers. All the windings are arranged in a fully interleaved structure to minimize the leakage inductance and eddy current loss. Furthermore, the secondary rectifiers and filter capacitors are mounted on the PCB-winding secondary layers to further minimize the termination effect. A 1.2 kW (output: 12 V/100 A, input: 400 V) megahertz LLC converter prototype employing the proposed transformer structure is constructed, and over 96% efficiency is achieved."} {"_id": "10338babf0119e3dba196aef44fa717a1d9a06df", "title": "Private local automation clouds built by CPS: Potential and challenges for distributed reasoning", "text": ""} {"_id": "bffe23d44eec0324f1be57877f05b06c379e77d5", "title": "Overview: The Design, Adoption, and Analysis of a Visual Document Mining Tool for Investigative Journalists", "text": "For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview, an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system \u201cin the wild\u201d, and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of \u201cexploring\u201d a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology."} {"_id": "8b9b7e1fb101a899b0309ec508ac5912787cc12d", "title": "Securing Bitcoin wallets via a new DSA / ECDSA threshold signature scheme", "text": "The Bitcoin ecosystem has suffered frequent thefts and losses affecting both businesses and individuals. Due to the irreversibility, automation, and pseudonymity of transactions, Bitcoin currently lacks support for the sophisticated internal control systems deployed by modern businesses to deter fraud. To address this problem, we present the first threshold signature scheme compatible with Bitcoin\u2019s ECDSA signatures and show how distributed Bitcoin wallets can be built using this primitive. For businesses, we show how our distributed wallets can be used to systematically eliminate single points of failure at every stage of the flow of bitcoins through the system.
For individuals, we design, implement, and evaluate a two-factor secure Bitcoin wallet."} {"_id": "a477778d8ae4cb70b67d02d094d030c44d0c6ffd", "title": "Joint Learning Templates and Slots for Event Schema Induction", "text": "Automatic event schema induction (AESI) means extracting meta-events from raw text, in other words, finding out what types (templates) of events may exist in the raw text and what roles (slots) may exist in each event type. In this paper, we propose a joint entity-driven model to learn templates and slots simultaneously based on the constraints of templates and slots in the same sentence. In addition, the entities\u2019 semantic information is also considered for the inner connectivity of the entities. We borrow the normalized cut criteria in image segmentation to divide the entities into more accurate template clusters and slot clusters. The experiments show that our model achieves better results than previous work."} {"_id": "70fc8f3b9727a028e23ae89bbf1bb4ec4feb8db6", "title": "Information & Environment: IoT-Powered Recommender Systems", "text": "Internet of Things (IoT) infrastructure within the physical library environment is the basis for an integrative, hybrid approach to digital resource recommenders. The IoT infrastructure provides mobile, dynamic wayfinding support for items in the collection, which includes features for location-based recommendations. The evaluation and analysis herein clarified the nature of users\u2019 requests for recommendations based on their location, and describes subject areas of the library for which users request recommendations. The results indicated that users of IoT-based recommendations are interested in a broad distribution of subjects, with a short-head distribution from this collection in American and English Literature. A long-tail finding showed a diversity of topics that are recommended to users in the library book stacks with IoT-powered recommendations."} {"_id": "36e41cdfddd190d7861b91b04a515967fd1541d9", "title": "Giving too much social support: social overload on social networking sites", "text": "As the number of messages and social relationships embedded in social networking sites (SNS) increases, the amount of social information demanding a reaction from individuals increases as well. We observe that, as a consequence, SNS users feel they are giving too much social support to other SNS users. Drawing on social support theory (SST), we call this negative association with SNS usage \u2018social overload\u2019 and develop a latent variable to measure it. We then identify the theoretical antecedents and consequences of social overload and evaluate the social overload model empirically using interviews with 12 and a survey of 571 Facebook users. The results show that extent of usage, number of friends, subjective social support norms, and type of relationship (online-only vs offline friends) are factors that directly contribute to social overload, while age has only an indirect effect. The psychological and behavioral consequences of social overload include feelings of SNS exhaustion by users, low levels of user satisfaction, and a high intention to reduce or even stop using SNS.
The resulting theoretical implications for SST and SNS acceptance research are discussed, and practical implications for organizations, SNS providers, and SNS users are drawn. European Journal of Information Systems advance online publication, 4 March 2014; doi:10.1057/ejis.2014.3; corrected online 11 March 2014"} {"_id": "d67cd96b0a30c5d2b2703491e02ec0f7d1954bb8", "title": "A taxonomy of sequential pattern mining algorithms", "text": "Owing to important applications such as mining web page traversal sequences, many algorithms have been introduced in the area of sequential pattern mining over the last decade, most of which have also been modified to support concise representations like closed, maximal, incremental or hierarchical sequences. This article presents a taxonomy of sequential pattern-mining techniques in the literature with web usage mining as an application. It investigates these algorithms by introducing a taxonomy for classifying sequential pattern-mining algorithms based on important key features supported by the techniques. This classification aims at enhancing understanding of sequential pattern-mining problems, the current status of provided solutions, and the direction of research in this area. This article also attempts to provide a comparative performance analysis of many of the key techniques and discusses theoretical aspects of the categories in the taxonomy."} {"_id": "b24799316c69f367fccbd7ce2c4c732f9f92d1b5", "title": "Predicting consumer intentions to use on-line shopping: the case for an augmented technology acceptance model", "text": "Derived from the theory of reasoned action, the technology acceptance model (TAM) focuses on two specific salient beliefs\u2014ease of use and usefulness. It has been applied in the study of user adoption of different technologies, and has emerged as a reliable and robust model. However, this has not discouraged researchers from incorporating additional constructs to the original model in their quest for increased predictive power. Here, an attempt is made in the context of explaining consumer intention to use on-line shopping. Besides ease of use and usefulness, compatibility, privacy, security, normative beliefs, and self-efficacy are included in an augmented TAM. A test of this model, with data collected from 281 consumers, shows support for seven of nine research hypotheses. Specifically, compatibility, usefulness, ease of use, and security were found to be significant predictors of attitude towards on-line shopping, but privacy was not. Further, intention to use on-line shopping was strongly influenced by attitude toward on-line shopping, normative beliefs, and self-efficacy. \u00a9 2003 Elsevier B.V. All rights reserved."} {"_id": "40457623fcdbaa253e2894c2f114837fde1c11e5", "title": "A new approach for sparse matrix vector product on NVIDIA GPUs", "text": "The sparse matrix vector product (SpMV) is a key operation in engineering and scientific computing and, hence, it has been subjected to intense research for a long time. The irregular computations involved in SpMV make its optimization challenging. Therefore, enormous effort has been devoted to devise data formats to store the sparse matrix with the ultimate aim of maximizing the performance. Graphics Processing Units (GPUs) have recently emerged as platforms that yield outstanding acceleration factors. SpMV implementations for NVIDIA GPUs have already appeared on the scene.
This work proposes and evaluates a new implementation of SpMV for NVIDIA GPUs based on a new format, ELLPACK-R, that allows storage of the sparse matrix in a regular manner. A comparative evaluation against a variety of storage formats previously proposed has been carried out based on a representative set of test matrices. The results show that, although the performance strongly depends on the specific pattern of the matrix, the implementation based on ELLPACK-R achieves higher overall performance. Moreover, a comparison with standard state-of-the-art superscalar processors reveals that significant speedup factors are achieved with GPUs. Copyright 2010 John Wiley & Sons, Ltd."} {"_id": "ffcb7146dce1aebf47a910b51a873cfec897d602", "title": "Fast scan algorithms on graphics processors", "text": "Scan and segmented scan are important data-parallel primitives for a wide range of applications. We present fast, work-efficient algorithms for these primitives on graphics processing units (GPUs). We use novel data representations that map well to the GPU architecture. Our algorithms exploit shared memory to improve memory performance. We further improve the performance of our algorithms by eliminating shared-memory bank conflicts and reducing the overheads in prior shared-memory GPU algorithms. Furthermore, our algorithms are designed to work well on general data sets, including segmented arrays with arbitrary segment lengths. We also present optimizations to improve the performance of segmented scans based on the segment lengths. We implemented our algorithms on a PC with an NVIDIA GeForce 8800 GPU and compared our results with prior GPU-based algorithms. Our results indicate up to 10x higher performance over prior algorithms on input sequences with millions of elements."} {"_id": "356869aa0ae8d598e956c7f2ae884bbf5009c98c", "title": "NVIDIA Tesla: A Unified Graphics and Computing Architecture", "text": "To enable flexible, programmable graphics and high-performance computing, NVIDIA has developed the Tesla scalable unified graphics and parallel computing architecture. Its scalable parallel array of processors is massively multithreaded and programmable in C or via graphics APIs."} {"_id": "3dcd4a259e47d171d0728e01ac71f2421ab8f7fe", "title": "Iterative methods for sparse linear systems", "text": "Sometimes we need to solve the linear equation Ax = b for a very big and very sparse A. For example, discretizing the Poisson equation yields a system in which only 5 entries of each row of the matrix A are non-zero. Standard methods such as inverting the matrix A (numerically unstable) or Gauss elimination do not take advantage of the sparsity of A. In this lesson we will discuss two iterative methods suitable for sparse linear systems: Jacobi and Gauss-Seidel."} {"_id": "6a640438a4e50fa31943462eeca716413891a773", "title": "Ranking , Boosting , and Model Adaptation", "text": "We present a new ranking algorithm that combines the strengths of two previous methods: boosted tree classification, and LambdaRank, which has been shown to be empirically optimal for a widely used information retrieval measure. The algorithm is based on boosted regression trees, although the ideas apply to any weak learners, and it is significantly faster in both train and test phases than the state of the art, for comparable accuracy. We also show how to find the optimal linear combination for any two rankers, and we use this method to solve the line search problem exactly during boosting.
In addition, we show that starting with a previously trained model, and boosting using its residuals, furnishes an effective technique for model adaptation, and we give results for a particularly pressing problem in Web Search: training rankers for markets for which only small amounts of labeled data are available, given a ranker trained on much more data from a larger market."} {"_id": "d78c26f2e2fe87ea600ef6f667020fd933b8060f", "title": "Heart rate variability biofeedback increases baroreflex gain and peak expiratory flow.", "text": "OBJECTIVE\nWe evaluated heart rate variability biofeedback as a method for increasing vagal baroreflex gain and improving pulmonary function among 54 healthy adults.\n\n\nMETHODS\nWe compared 10 sessions of biofeedback training with an uninstructed control. Cognitive and physiological effects were measured in four of the sessions.\n\n\nRESULTS\nWe found acute increases in low-frequency and total spectrum heart rate variability, and in vagal baroreflex gain, correlated with slow breathing during biofeedback periods. Increased baseline baroreflex gain also occurred across sessions in the biofeedback group, independent of respiratory changes, and peak expiratory flow increased in this group, independently of cardiovascular changes. Biofeedback was accompanied by fewer adverse relaxation side effects than the control condition.\n\n\nCONCLUSIONS\nHeart rate variability biofeedback had strong long-term influences on resting baroreflex gain and pulmonary function. It should be examined as a method for treating cardiovascular and pulmonary diseases. Also, this study demonstrates neuroplasticity of the baroreflex."} {"_id": "336222eedc745e17f85cc85141891ed3b48b9ef7", "title": "Moderating variables of music training-induced neuroplasticity: a review and discussion", "text": "A large body of literature now exists to substantiate the long-held idea that musicians' brains differ structurally and functionally from non-musicians' brains. These differences include changes in volume, morphology, density, connectivity, and function across many regions of the brain. In addition to the extensive literature that investigates these differences cross-sectionally by comparing musicians and non-musicians, longitudinal studies have demonstrated the causal influence of music training on the brain across the lifespan. However, there is a large degree of inconsistency in the findings, with discordance between studies, laboratories, and techniques. A review of this literature highlights a number of variables that appear to moderate the relationship between music training and brain structure and function. These include age at commencement of training, sex, absolute pitch (AP), type of training, and instrument of training. These moderating variables may account for previously unexplained discrepancies in the existing literature, and we propose that future studies carefully consider research designs and methodologies that control for these variables."} {"_id": "70fa13b31906c59c0b79d8c18a0614c5aaf77235", "title": "KPB-SIFT: a compact local feature descriptor", "text": "Invariant feature descriptors such as SIFT and GLOH have been demonstrated to be very robust for image matching and object recognition. However, such descriptors are typically of high dimensionality, e.g. 128 dimensions in the case of SIFT. This limits the performance of feature matching techniques in terms of speed and scalability.
A new compact feature descriptor, called Kernel Projection Based SIFT (KPB-SIFT), is presented in this paper. Like SIFT, our descriptor encodes the salient aspects of image information in the feature point's neighborhood. However, instead of using SIFT's smoothed weighted histograms, we apply kernel projection techniques to orientation gradient patches. The produced KPB-SIFT descriptor is more compact than the state-of-the-art, does not require the pre-training step needed by PCA-based descriptors, and shows clear advantages in terms of distinctiveness, invariance to scale, and tolerance of geometric distortions. We extensively evaluated the effectiveness of KPB-SIFT with datasets acquired under varying circumstances."} {"_id": "a29f2bd2305e11d8fe139444e733e9b50ea210d6", "title": "NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study", "text": "This paper introduces a novel large dataset for example-based single image super-resolution and studies the state-of-the-art as emerged from the NTIRE 2017 challenge. The challenge is the first challenge of its kind, with 6 competitions, hundreds of participants and tens of proposed solutions. Our newly collected DIVerse 2K resolution image dataset (DIV2K) was employed by the challenge. In our study we compare the solutions from the challenge to a set of representative methods from the literature and evaluate them using diverse measures on our proposed DIV2K dataset. Moreover, we conduct a number of experiments and draw conclusions on several topics of interest. We conclude that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on our newly proposed DIV2K."} {"_id": "b07e04eb64705f0f1b5e6403320e0230e2eadc4d", "title": "A multiagent reinforcement learning algorithm using extended optimal response", "text": "Stochastic games provide a theoretical framework for multiagent reinforcement learning. Based on this framework, a multiagent reinforcement learning algorithm for zero-sum stochastic games was proposed by Littman, and it was extended to general-sum games by Hu and Wellman. Given a stochastic game, if all agents learn with their algorithm, we can expect that the policies of the agents converge to a Nash equilibrium. However, agents with their algorithm always try to converge to a Nash equilibrium independent of the policies used by the other agents. In addition, in case there are multiple Nash equilibria, agents must agree on the equilibrium they want to reach. Thus, their algorithm lacks adaptability in a sense. In this paper, we propose a multiagent reinforcement learning algorithm. The algorithm uses the extended optimal response which we introduce in this paper. It will converge to a Nash equilibrium when other agents are adaptable; otherwise, it will make an optimal response. We also provide some empirical results in three simple stochastic games, which show that the algorithm can realize what we intend."} {"_id": "5c89841a22541d66f7bad32da6de2991b2d298ef", "title": "Applying MIPVU Metaphor Identification Procedure on Czech", "text": "This paper represents the current state of a research project aimed at modifying the MIPVU protocol for metaphor annotation for usage on Czech-language texts.
Three annotators were trained to use the metaphor identification procedure MIPVU and annotated 2 short text excerpts of about 600 tokens in length; the reliability of annotation was then measured using Fleiss\u2019 kappa. The resultant interannotator agreement of 0.70 was below the kappa values reported by annotators of the VU Amsterdam Metaphor Corpus (Steen et al., 2010) and very similar to the agreement that researchers (Badryzlova et al., 2013) obtained in their first reliability test with the unmodified MIPVU procedure applied to Russian texts. Some modifications of the annotation procedure are proposed in order for it to be more suitable for the Czech language. The modifications are based on the observations made by annotators in error analysis and by authors of similar projects aiming to transfer the MIPVU procedure to Slavic/inflected languages. The functionality of the annotation procedure refinements now has to be tested in a second reliability test."} {"_id": "fd88e20c0ddaa0964bfbb8a9d985475dadf994eb", "title": "A Novel Phase-Shift Control of Semibridgeless Active Rectifier for Wireless Power Transfer", "text": "A novel phase-shift control of a semibridgeless active rectifier (S-BAR) is investigated in order to utilize the S-BAR in wireless energy transfer applications. The standard receiver-side rectifier topology is developed by replacing the rectifier's lower diodes with synchronous switches controlled by a phase-shifted PWM signal. Theoretical and simulation results show that, with the proposed control technique, the output quantities can be regulated without communication between the receiver and transmitter. To confirm the performance of the proposed converter and control, experimental results are provided using an 8-, 15-, and 23-cm air-gap coreless transformer with dimensions of 76 cm \u00d7 76 cm, a 120-V input, and an output power range of 0 to 1 kW with a maximum efficiency of 94.4%."} {"_id": "3106bf5bb397125cb390daa006cb6f0fca853d6c", "title": "Creating Social Contagion Through Viral Product Design: A Randomized Trial of Peer Influence in Networks", "text": "We examine how firms can create word-of-mouth peer influence and social contagion by designing viral features into their products and marketing campaigns. Word-of-mouth (WOM) is generally considered to be more effective at promoting product contagion when it is personalized and active. Unfortunately, the relative effectiveness of different viral features has not been quantified, nor has their effectiveness been definitively established, largely because of difficulties surrounding econometric identification of endogenous peer effects. We therefore designed a randomized field experiment on a popular social networking website to test the effectiveness of a range of viral messaging capabilities in creating peer influence and social contagion among the 1.4 million friends of 9,687 experimental users. Overall, we find that viral product design features can indeed generate econometrically identifiable peer influence and social contagion effects. More surprisingly, we find that passive-broadcast viral messaging generates a 246% increase in local peer influence and social contagion effects, while adding active-personalized viral messaging only generates an additional 98% increase in contagion.
Although active-personalized messaging is more effective in encouraging adoption per message and is correlated with more user engagement and sustained product use, passive-broadcast messaging is used so much more often that it eclipses those benefits, generating more total peer adoption in the network. In addition to estimating the effects of viral product design on social contagion and product diffusion, our work also provides a model for how randomized trials can be used to identify peer influence effects in networks."} {"_id": "65ca6f17a7972fae19b12efdc88c9c9d6d0cf2e8", "title": "Multi-factor Authentication as a Service", "text": "An architecture for providing multi-factor authentication as a service is proposed, resting on the principle of a loose coupling and separation of duties between network entities and end user devices. The multi-factor authentication architecture leverages Identity Federation and Single-Sign-On technologies, such as the OpenID framework, in order to provide for a modular integration of various factors of authentication. The architecture is robust and scalable, enabling service providers to define risk-based authentication policies by way of assurance level requirements, which map to concrete authentication factor capabilities on user devices."} {"_id": "c4e416e2b5683306aebd07eb7d8854e6375a57bb", "title": "Trajectory planning for car-like vehicles: A modular approach", "text": "We consider the problem of trajectory planning with geometric constraints for a car-like vehicle. The vehicle is described by its dynamic model, considering such effects as lateral slipping and aerodynamic drag. We propose a modular solution, where three different problems are identified and solved by specific modules. The execution of the three modules can be orchestrated in different ways in order to produce efficient solutions to a variety of trajectory planning problems (e.g., obstacle avoidance, or overtaking). As a specific example, we show how to generate the optimal lap on a specified racing track. The numeric examples provided in the paper are good evidence of the effectiveness of our strategy."} {"_id": "219563417819f1129cdfcfa8a75c03d074577be1", "title": "Annotating Opinions in the World Press", "text": "In this paper we present a detailed scheme for annotating expressions of opinions, beliefs, emotions, sentiment and speculation (private states) in the news and other discourse. We explore inter-annotator agreement for individual private state expressions, and show that these low-level annotations are useful for producing higher-level subjective sentence annotations."} {"_id": "7bcc937e81d135061f1d4f7a466e908e0234093a", "title": "Behavioral and Neural Adaptation in Approach Behavior", "text": "People often make approachability decisions based on perceived facial trustworthiness. However, it remains unclear how people learn trustworthiness from a population of faces and whether this learning influences their approachability decisions. Here we investigated the neural underpinning of approach behavior and tested two important hypotheses: whether the amygdala adapts to different trustworthiness ranges and whether the amygdala is modulated by task instructions and evaluative goals. We showed that participants adapted to the stimulus range of perceived trustworthiness when making approach decisions and that these decisions were further modulated by the social context. The right amygdala showed both linear response and quadratic response to trustworthiness level, as observed in prior studies.
Notably, the amygdala's response to trustworthiness was not modulated by stimulus range or social context, a possible neural dynamic adaptation. Together, our data have revealed a robust behavioral adaptation to different trustworthiness ranges as well as a neural substrate underlying approach behavior based on perceived facial trustworthiness."} {"_id": "72691b1adb67830a58bebdfdf213a41ecd38c0ba", "title": "Clearing the Skies: A Deep Network Architecture for Single-Image Rain Removal", "text": "We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training."} {"_id": "48320a4be9cc741fdb28ad72f359c449e41309cc", "title": "Manga109 dataset and creation of metadata", "text": "We have created Manga109, a dataset of a variety of 109 Japanese comic books publicly available for use for academic purposes. This dataset provides numerous comic images but lacks the annotations of elements in the comics that are necessary for use by machine learning algorithms or evaluation of methods. In this paper, we present our ongoing project to build metadata for Manga109. We first define the metadata in terms of frames, texts and characters. We then present our web-based software for efficiently creating the ground truth for these images. In addition, we provide a guideline for the annotation with the intent of improving the quality of the metadata."} {"_id": "b743dafa3dcb8924244c14f0a719cde5e93d9155", "title": "Efficient 3D LIDAR based loop closing using deep neural network", "text": "Loop closure detection in 3D LIDAR data is an essential but challenging problem in SLAM systems. It is important for reducing global inconsistency and for re-localizing a robot that has lost its localization, but it is difficult due to the lack of prior information. We present a semi-handcrafted representation learning method for LIDAR point clouds using a siamese convolutional neural network, which casts loop closure detection as a similarity modeling problem. With the learned representation, the similarity between two LIDAR scans is measured by the Euclidean distance between their respective representations. Based on this, we furthermore build a kd-tree to accelerate the search for similar scans. To demonstrate the performance and effectiveness of the proposed method, the KITTI dataset is employed for comparison with other LIDAR loop closure detection methods.
The results show that our method achieves both higher accuracy and higher efficiency."} {"_id": "4ea80c206b8ad73a6d320c9d8ed0321d84fe6d85", "title": "Recursive Neural Networks for Learning Logical Semantics", "text": "Supervised recursive neural network models (RNNs) for sentence meaning have been successful in an array of sophisticated language tasks, but it remains an open question whether they can learn compositional semantic grammars that support logical deduction. We address this question directly by for the first time evaluating whether each of two classes of neural model \u2014 plain RNNs and recursive neural tensor networks (RNTNs) \u2014 can correctly learn relationships such as entailment and contradiction between pairs of sentences, where we have generated controlled data sets of sentences from a logical grammar. Our first experiment evaluates whether these models can learn the basic algebra of logical relations involved. Our second and third experiments extend this evaluation to complex recursive structures and sentences involving quantification. We find that the plain RNN achieves only mixed results on all three experiments, whereas the stronger RNTN model generalizes well in every setting and appears capable of learning suitable representations for natural language logical inference."} {"_id": "40d5019ecad67026db8bb4a813719f1d48222405", "title": "Analysis of dynamically scheduled tile algorithms for dense linear algebra on multicore architectures", "text": "The objective of this paper is to analyze the dynamic scheduling of dense linear algebra algorithms on shared-memory, multicore architectures. Current numerical libraries, e.g., LAPACK, show clear limitations on such emerging systems mainly due to their coarse granularity tasks. Thus, many numerical algorithms need to be redesigned to better fit the architectural design of the multicore platform. The PLASMA library (Parallel Linear Algebra for Scalable Multi-core Architectures) developed at the University of Tennessee tackles this challenge by using tile algorithms to achieve a finer task granularity. These tile algorithms can then be represented by Directed Acyclic Graphs (DAGs), where nodes are the tasks and edges are the dependencies between the tasks. The paramount key to achieving high performance is to implement a runtime environment that efficiently schedules the DAG across the multicore platform. This paper studies the impact on the overall performance of some parameters, both at the level of the scheduler, e.g., window size and locality, and the algorithms, e.g., Left Looking (LL) and Right Looking (RL) variants. The conclusion of this study claims that some commonly accepted rules for dense linear algebra algorithms may need to be revisited."} {"_id": "36340f1d2f1d6edb1303905225ef8dd7bd2b5ab6", "title": "Underwater electric field measurement and analysis", "text": "The University of Idaho (UI) is developing electromagnetic field sensor systems that are attachable to autonomous underwater vehicles with the intent of taking survey measurements in underwater ocean environments. This paper presents the testing of these sensors and compares measurements to predictions. Testing was conducted off the coast of Florida with a moving artificial electric field source and an electric field sensor equipped AUV.
At the closest pass, the peak value of the AUV-acquired electric field was consistent with the predicted field of the artificial source when the AUV position was known to within \u223c1 m."} {"_id": "2c75342fc4d809f4e84b997a4a605c985fdbe0fb", "title": "259 Second ring-down time and 4.45 million quality factor in 5.5 kHz fused silica birdbath shell resonator", "text": "The fused silica birdbath (BB) resonator is an axisymmetric 3D shell resonator that could be used in high-performance MEMS gyroscopes. We report a record quality factor (Q) for a 5-mm-radius resonator, which is expected to reduce gyroscope bias drift. We present measurement results for two sizes of resonators with long vibratory decay time constants (\u03c4), high Qs, and low damping asymmetry (\u0394\u03c4\u22121) between their n = 2 wine-glass (WG) modes. We find a reduction in damping for larger resonators and a correlation between higher Q and lower damping asymmetry, as well as evidence of a lower bound on Q for resonators with low damping asymmetry."} {"_id": "3981e2fc542b439a4e3f10c33f4fca2a445aa1c7", "title": "Feeling Love and Doing More for Distant Others : Specific Positive Emotions Differentially Affect Prosocial Consumption", "text": "Marketers often employ a variety of positive emotions to encourage consumption or promote a particular behavior (e.g., buying, donating, recycling) to benefit an organization or cause. The authors show that specific positive emotions do not universally increase prosocial behavior but, rather, encourage different types of prosocial behavior. Four studies show that whereas positive emotions (i.e., love, hope, pride, and compassion) all induce prosocial behavior toward close entities (relative to a neutral emotional state), only love induces prosocial behavior toward distant others and international organizations. Love\u2019s effect is driven by a distinct form of broadening, characterized by extending feelings of social connection and the boundary of caring to be more inclusive of others regardless of relatedness. Love\u2014as a trait and a momentary emotion\u2014is unique among positive emotions in fostering connectedness that other positive emotions (hope and pride) do not and in broadening behavior in a way that other connected emotions (compassion) do not. This research contributes to the broaden-and-build theory of positive emotion by demonstrating a distinct type of broadening for love and adds an important qualification to the general finding that positive emotions uniformly encourage prosocial behavior."} {"_id": "e2b7a371b7cfb5f2bdc2abeb41397aee03fd5480", "title": "On Compiling CNF into Decision-DNNF", "text": "Decision-DNNF is a strict subset of decomposable negation normal form (DNNF) that plays a key role in analyzing the complexity of model counters (the searches performed by these counters have their traces in Decision-DNNF). This paper presents a number of results on Decision-DNNF. First, we introduce a new notion of CNF width and provide an algorithm that compiles CNFs into Decision-DNNFs in time and space that are exponential only in this width. The new width strictly dominates the treewidth of the CNF primal graph: it is no greater and can be bounded when the treewidth of the primal graph is unbounded. This new result leads to a tighter bound on the complexity of model counting.
Second, we show that the output of the algorithm can be converted in linear time to a sentential decision diagram (SDD), which leads to a tighter bound on the complexity of compiling CNFs into SDDs."} {"_id": "34d1ba9476ae474f1895dbd84e8dc82b233bc32e", "title": "A Redundancy Dual-backup Method of the Battery Pack for the Two-wheeled Self-balanced Vehicle", "text": ""} {"_id": "35c931611b9ceab623839ce4f5c9000eaa41506a", "title": "Automatic mood detection of indian music using mfccs and k-means algorithm", "text": "This paper proposes a method of identifying the mood underlying a piece of music by extracting suitable and robust features from a music clip. To recognize the mood, K-means clustering and global thresholding were used. Three features were amalgamated to decide the mood tag of the musical piece. Mel frequency cepstral coefficients, frame energy and peak difference are the features of interest. These features were used for clustering and for obtaining a silhouette plot, which formed the basis for deciding the threshold limits for classification. Experiments were performed on a database of audio clips of various categories. The accuracy of the mood extracted is around 90%, indicating that the proposed technique provides encouraging results."} {"_id": "5176f5207ba955fa9a0927a242bdb7ec3cd319c0", "title": "Synthetic aperture radar interferometry", "text": "Synthetic aperture radar interferometry is an imaging technique for measuring the topography of a surface, its changes over time, and other changes in the detailed characteristics of the surface. By exploiting the phase of the coherent radar signal, interferometry has transformed radar remote sensing from a largely interpretive science to a quantitative tool, with applications in cartography, geodesy, land cover characterization, and natural hazards. This paper reviews the techniques of interferometry, systems and limitations, and applications in a rapidly growing area of science and engineering."} {"_id": "71731832d54d299a8582b22f78fd5eaf80c78589", "title": "The evolution of synthetic aperture radar systems and their progression to the EOS SAR", "text": "The spaceborne imaging radar program of the National Aeronautics and Space Administration (NASA) has evolved primarily through Seasat and the Shuttle Imaging Radar (SIR) to a multifrequency, multipolarization system capable of monitoring the Earth with a variety of imaging geometries, resolutions, and swaths. In particular, the ability to digitally process and calibrate the data has been a key factor in developing the algorithms which will operationally provide geophysical and biophysical information about the surface of the Earth using synthetic aperture radar (SAR). This paper describes the evolution of the spaceborne imaging radar starting with the Seasat SAR, through the SIR-A, SIR-B, and SIR-C/X-SAR missions, to the EOS SAR which is scheduled for launch as part of the Earth Observing System (EOS) at the end of the 1990s. A summary of the planned international missions, which may produce a permanent active microwave capability in space starting as early as 1991, is also presented, along with a description of the airborne systems which will be essential to the algorithm development and long-term calibration of the spaceborne data.
Finally, a brief summary of the planetary missions utilizing SAR and a comparison of their imaging capabilities with those available on Earth is presented."} {"_id": "65b6c31bccadf0bfb0427929439f2ae1dc2b45bc", "title": "On the derivation of coseismic displacement fields using differential radar interferometry: The Landers earthquake", "text": "We present a map of the coseismic displacement field resulting from the Landers, California, June 28, 1992, earthquake derived using data acquired from an orbiting high-resolution radar system. We achieve results more accurate than previous space studies and similar in accuracy to those obtained by conventional field survey techniques. Data from the ERS 1 synthetic aperture radar instrument acquired in April, July, and August 1992 are used to generate a high-resolution, wide area map of the displacements. The data represent the motion in the direction of the radar line of sight to centimeter level precision of each 30-m resolution element in a 113 km by 90 km image. Our coseismic displacement contour map gives a lobed pattern consistent with theoretical models of the displacement field from the earthquake. Fine structure observed as displacement tiling in regions several kilometers from the fault appears to be the result of local surface fracturing. Comparison of these data with Global Positioning System and electronic distance measurement survey data yields a correlation of 0.96; thus the radar measurements are a means to extend the point measurements acquired by traditional techniques to an area map format. The technique we use is (1) more automatic, (2) more precise, and (3) better validated than previous similar applications of differential radar interferometry. Since we require only remotely sensed satellite data with no additional requirements for ancillary information, the technique is well suited for global seismic monitoring and analysis."} {"_id": "daa549ac4fcf598f6c7016656f926978734096ac", "title": "Accuracy of topographic maps derived from ERS-1 interferometric radar", "text": "An interferometric radar technique for topographic mapping of surfaces promises a high-resolution approach to the generation of digital elevation models. We present here analyses of data collected by the synthetic aperture radar instrument on-board the ERS-1 satellite on successive orbits. Use of a single satellite in a nearly repeating orbit is attractive for reducing cost and spaceborne hardware complexity; also it permits inference of changes in the surface from the correlation properties of the radar echoes. The data have been reduced to correlation maps and digital elevation models. The correlation maps show that temporal correlation decreases significantly with time, but not necessarily at a constant well-defined rate, likely depending on environmental factors. When correlation among passes remains high, however, it is possible to form digital elevation models. Analyses of noise expected in ERS-1 interferometric data collected over Alaska and the southwestern United States indicate that maps with relative errors less than 5 m rms are possible in some regions. However, orbit uncertainties imply that tie points are required in order to reduce absolute height errors to a similar magnitude. We find that about 6 tie points per 40 \u00d7 40 km scene with 5 m rms or better height accuracy are needed to keep systematic map height errors below 5 m rms.
The performance of the ERS-1 radar system for topographic applications, though useful for a variety of regional and local discipline studies, may be improved with respect to temporal decorrelation errors and absolute height acuity by modifying the orbit repeat period and incorporating precise orbit determination techniques. The resulting implementation will meet many, but not all, objectives of a global mapping mission."} {"_id": "e40446ad0153b3703f8578a4bb10b551643f8d1e", "title": "Satellite radar interferometry for monitoring ice sheet motion: application to an antarctic ice stream.", "text": "Satellite radar interferometry (SRI) provides a sensitive means of monitoring the flow velocities and grounding-line positions of ice streams, which are indicators of response of the ice sheets to climatic change or internal instability. The detection limit is about 1.5 millimeters for vertical motions and about 4 millimeters for horizontal motions in the radar beam direction. The grounding line, detected by tidal motions where the ice goes afloat, can be mapped at a resolution of approximately 0.5 kilometer. The SRI velocities and grounding line of the Rutford Ice Stream, Antarctica, agree fairly well with earlier ground-based data. The combined use of SRI and other satellite methods is expected to provide data that will enhance the understanding of ice stream mechanics and help make possible the prediction of ice sheet behavior."} {"_id": "4b34c6fa16e48b0e5fc5b6c32514d4405382bbd3", "title": "Towards the enhancement of e-democracy: identifying the notion of the 'middleman paradox'", "text": "The challenge towards e-democracy, through the electronic transformation of political systems, has become increasingly evident within developed economies. It is regarded as an approach for increased and better-quality citizen participation in the democratic processes. E-democracy forms a component of overall e-government initiatives, where technology adoption and diffusion, to enhance wider access to, and the delivery of, government services, are apparent. However, previous research demonstrates that very few e-democracy proposals survive the stage of formal political decision-making to become substantive e-government projects within national or international agendas. Furthermore, the implementation of e-democracy projects is undertaken at a much slower pace and with dramatically less support than the implementation of other, so-called e-administration, activities in the public sector. The research in this paper considers the notion of the 'middleman paradox', presenting theoretical and empirical evidence that further investigates the phenomenon associated with potential e-democracy improvements. Specifically, the paper adds a new dimension to existing theories on the hesitant evolution of e-democracy that clearly identifies politicians as an inhibiting factor. Proposals are made for an enhancement of these processes, and suggestions for further applicable research are presented."} {"_id": "48444fdfe803bb62d3f1a5595f5821954f03b6ea", "title": "Burden of Depressive Disorders by Country, Sex, Age, and Year: Findings from the Global Burden of Disease Study 2010", "text": "BACKGROUND\nDepressive disorders were a leading cause of burden in the Global Burden of Disease (GBD) 1990 and 2000 studies.
Here, we analyze the burden of depressive disorders in GBD 2010 and present severity proportions, burden by country, region, age, sex, and year, as well as burden of depressive disorders as a risk factor for suicide and ischemic heart disease.\n\n\nMETHODS AND FINDINGS\nBurden was calculated for major depressive disorder (MDD) and dysthymia. A systematic review of epidemiological data was conducted. The data were pooled using a Bayesian meta-regression. Disability weights from population survey data quantified the severity of health loss from depressive disorders. These weights were used to calculate years lived with disability (YLDs) and disability adjusted life years (DALYs). Separate DALYs were estimated for suicide and ischemic heart disease attributable to depressive disorders. Depressive disorders were the second leading cause of YLDs in 2010. MDD accounted for 8.2% (5.9%-10.8%) of global YLDs and dysthymia for 1.4% (0.9%-2.0%). Depressive disorders were a leading cause of DALYs even though no mortality was attributed to them as the underlying cause. MDD accounted for 2.5% (1.9%-3.2%) of global DALYs and dysthymia for 0.5% (0.3%-0.6%). There was more regional variation in burden for MDD than for dysthymia, with higher estimates in females and adults of working age. Whilst burden increased by 37.5% between 1990 and 2010, this was due to population growth and ageing. MDD explained 16 million suicide DALYs and almost 4 million ischemic heart disease DALYs. This attributable burden would increase the overall burden of depressive disorders from 3.0% (2.2%-3.8%) to 3.8% (3.0%-4.7%) of global DALYs.\n\n\nCONCLUSIONS\nGBD 2010 identified depressive disorders as a leading cause of burden. MDD was also a contributor to burden allocated to suicide and ischemic heart disease. These findings emphasize the importance of including depressive disorders as a public-health priority and implementing cost-effective interventions to reduce their burden. Please see later in the article for the Editors' Summary."} {"_id": "3ea6601e24f37697ed0a3e9018f76952625ceb10", "title": "Enhance emotional and social adaptation skills for children with autism spectrum disorder: A virtual reality enabled approach", "text": "Deficits in social-emotional reciprocity, one of the diagnostic criteria of Autism Spectrum Disorder (ASD), greatly hinder children with ASD from responding appropriately and adapting themselves in various social situations. Although evidence has shown that virtual reality environments are a promising tool for emotional and social adaptation skills training in the ASD population, there is a lack of large-scale trials with intensive evaluations to support such findings. This paper presents a virtual reality enabled program for enhancing emotional and social adaptation skills for children with ASD. Six unique learning scenarios, of which one focuses on emotion control and relaxation strategies, four simulate various social situations, and one facilitates consolidation and generalization, are designed and developed with corresponding psychoeducation procedures and protocols. The learning scenarios are presented to the children via a 4-side immersive virtual reality environment (a.k.a. half-CAVE) with non-intrusive motion tracking. A total of 94 children between the ages of 6 and 12 with a clinical diagnosis of ASD participated in the 28-session program that lasted for 14 weeks.
By comparing pre- and post-assessments, results reported in this paper show significant improvements in the project\u2019s primary measures on children\u2019s emotion expression and regulation and social-emotional reciprocity, but not on other secondary measures."} {"_id": "6627b056d128e6c1d8e0b3eb7d2471c23ef99547", "title": "Real time Sign Language Recognition using PCA", "text": "Sign language is a method of communication for deaf-dumb people. This paper presents a Sign Language Recognition system capable of recognizing 26 gestures from the Indian Sign Language by using MATLAB. The proposed system has four modules: pre-processing and hand segmentation, feature extraction, sign recognition, and sign-to-text-and-voice conversion. Segmentation is done by using image processing. Different features, such as eigenvalues and eigenvectors, are extracted and used in recognition. The Principal Component Analysis (PCA) algorithm was used for gesture recognition, and the recognized gesture is converted into text and voice format. The proposed system helps to minimize the communication barrier between deaf-dumb people and normal people."} {"_id": "b8061918d0edbb5cd3042aa3d8b6ce7becc1d961", "title": "Noninvasive diagnosis of fetal aneuploidy by shotgun sequencing DNA from maternal blood.", "text": "We directly sequenced cell-free DNA with high-throughput shotgun sequencing technology from plasma of pregnant women, obtaining, on average, 5 million sequence tags per patient sample. This enabled us to measure the over- and underrepresentation of chromosomes from an aneuploid fetus. The sequencing approach is polymorphism-independent and therefore universally applicable for the noninvasive detection of fetal aneuploidy. Using this method, we successfully identified all nine cases of trisomy 21 (Down syndrome), two cases of trisomy 18 (Edward syndrome), and one case of trisomy 13 (Patau syndrome) in a cohort of 18 normal and aneuploid pregnancies; trisomy was detected at gestational ages as early as the 14th week. Direct sequencing also allowed us to study the characteristics of cell-free plasma DNA, and we found evidence that this DNA is enriched for sequences from nucleosomes."} {"_id": "72405632d3054f1ccb32ea102b1c159145413e39", "title": "Design and Implementation of Phalanges Based on Iteration of a Prosthesis", "text": "In this work an electromechanical hand is implemented based on the study and implementation of the phalanges of the five fingers of the hand. A microcontroller-based control system was implemented that takes as feedback the signals coming from flex sensors and strain gauges. The movement of the fingers was driven by myoelectric signals in real time; these signals were acquired through an innovative acquisition method based on patterns. The tests performed were measurements with the hand extended, the hand closed, and while pressing an object, which validate the present proposal."} {"_id": "8f2eb4dfe08f00faa0567c88b5ef1019c08714e3", "title": "Resilience and Sustainable Development: Theory of resilience, systems thinking and adaptive governance", "text": "The European Sustainable Development Network (ESDN) is an informal network of public administrators and other experts who deal with sustainable development strategies and policies.
The network covers all 27 EU Member States, plus other European countries. The ESDN is active in promoting sustainable development and facilitating the exchange of good practices in Europe and gives advice to policy-makers at the European and national levels. This ESDN Quarterly Report (QR) provides a condensed overview of the concept of resilience. Despite the complexity of the theory behind resilience, this QR tries to communicate the main notions behind this concept in a way that is understandable and not overly technical. The intention of this QR is to serve as a guide through the concept of resilience. The report does not aim at being exhaustive but intends to provide an overview of the links which are particularly relevant for sustainable development (SD) in general and SD governance in particular. A multitude of diverse sources have been used, mainly from the academic literature. The significant and decisive role of the Resilience Alliance in providing extensive knowledge has to be mentioned: the website it runs is an exceptionally good source of information for those who are interested and want to deepen their knowledge of resilience. Additionally, among all the scientific publications cited throughout the report, a special mention goes to the book by Walker and Salt (2006) entitled \"Resilience thinking: sustaining ecosystems and people in a changing world\", which is very much suggested as a practical source of information on resilience. The first chapter provides an executive summary: a short overview of the report with the essential notions that are depicted throughout the QR. The second chapter then introduces the concept of resilience and gives an extensive background on the notions behind it. It intends to provide guidance, especially to understand the linkages between the concept of resilience and sustainable development, and the importance of resilience and systems thinking for policy-makers and for those who work on SD governance. The third chapter summarizes the relationships among resilience, society, governance and policy. Therefore, the concept of 'adaptive governance' is advanced as a more appropriate way to deal with \u2026"} {"_id": "a6c8433d244b388b7ebbdeb81613a863dbbc532a", "title": "GRID-CLUSTERING: A FAST HIERARCHICAL CLUSTERING METHOD FOR VERY LARGE DATA SETS", "text": "This paper presents a new approach to hierarchical clustering of very large data sets, named Grid-Clustering. Unlike conventional methods, the method organizes the space surrounding the patterns rather than the patterns themselves. It uses a multidimensional grid data structure. The resulting block partitioning of the value space is clustered via a topological neighbor search. The Grid-Clustering method is able to deliver structural pattern distribution information for very large data sets. It supersedes all conventional hierarchical algorithms in runtime behavior and memory space requirements. The algorithm was analyzed within a testbed and suitable values for the tunable parameters of the algorithm are proposed.
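As a rough illustration of the grid idea (a minimal sketch, not the authors' implementation; the cell size, density threshold, and face-adjacency neighbor rule below are assumptions):

```python
# Illustrative sketch of grid-based clustering: partition the value space
# into cells, keep the dense cells, and merge face-adjacent dense cells via
# a topological neighbor search (BFS). All parameters are assumptions.
from collections import defaultdict, deque

def grid_cluster(points, cell_size=1.0, min_pts=2):
    # Assign each point to a grid cell: the space is organized, not the points.
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        cells[key].append(p)

    dense = {k for k, pts in cells.items() if len(pts) >= min_pts}
    clusters, seen = [], set()
    for start in dense:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            cell = queue.popleft()
            comp.extend(cells[cell])
            for dim in range(len(cell)):          # face-adjacent neighbors
                for step in (-1, 1):
                    nb = cell[:dim] + (cell[dim] + step,) + cell[dim + 1:]
                    if nb in dense and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(comp)
    return clusters

print(grid_cluster([(0.1, 0.2), (0.3, 0.4), (0.2, 0.9), (5.0, 5.1),
                    (5.2, 5.3), (5.1, 5.8), (9.9, 0.0)]))
```

Because only occupied cells are stored and merged, the cost grows with the number of occupied cells rather than with pairwise pattern distances, which is the intuition behind the runtime and memory claims above.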
A comparison of the execution times with other commonly used clustering algorithms and a heuristic runtime analysis are given."} {"_id": "7216da0fbb285a5de12c860ad6033087ce9392a4", "title": "Optimal Parallel Algorithms for Computing the Sum, the Prefix-Sums, and the Summed Area Table on the Memory Machine Models", "text": "The main contribution of this paper is to show optimal parallel algorithms to compute the sum, the prefix-sums, and the summed area table on two memory machine models, the Discrete Memory Machine (DMM) and the Unified Memory Machine (UMM). The DMM and the UMM are theoretical parallel computing models that capture the essence of the shared memory and the global memory of GPUs. These models have three parameters: the number p of threads, the width w of the memory, and the memory access latency l. We first show that the sum of n numbers can be computed in O(n/w + nl/p + l log n) time units on the DMM and the UMM. We then go on to show that \u03a9(n/w + nl/p + l log n) time units are necessary to compute the sum. We also present a parallel algorithm that computes the prefix-sums of n numbers in O(n/w + nl/p + l log n) time units on the DMM and the UMM. Finally, we show that the summed area table of size \u221an \u00d7 \u221an can be computed in O(n/w + nl/p + l log n) time units on the DMM and the UMM. Since the computation of the prefix-sums and the summed area table is at least as hard as the sum computation, these parallel algorithms are also optimal. key words: Memory machine models, prefix-sums computation, parallel algorithm, GPU, CUDA"} {"_id": "a1ea65c00fe1fb7d2c6f494a3114bf52ddbd5401", "title": "From checking on to checking in: designing for low socio-economic status older adults", "text": "In this paper we describe the design evolution of a novel technology that collects and displays presence information to be used in the homes of older adults. The first two iterations, the Ambient Plant and Presence Clock, were designed for higher socio-economic status (SES) older adults, whereas the Check-In Tree was designed for low SES older adults. We describe how feedback from older adult participants drove our design decisions, and give an in-depth account of how the Check-In Tree evolved from concept to a final design ready for in situ deployment."} {"_id": "bcc849022cf59b0d012a795178e4bdff3a9c18f7", "title": "Design of dual-polarized waveguide slotted antenna array for Ka-band application", "text": "A dual-polarized waveguide slotted array for Ka-band application is designed. Vertical polarization (VP) is realized by longitudinal slots on the broad wall of the ridge waveguide sub-array, while horizontal polarization (HP) is realized by tilted slot pairs in the narrow wall of the rectangular waveguide sub-array. The feed networks are also designed for each polarization; especially for VP, a method to improve the coupling is presented. The antenna has a low cross-polarization level and a high degree of isolation between the two ports."} {"_id": "27862ec7411fd4e7768e6a4e2da699fe09ea3a06", "title": "Analysis and Support of Lifestyle via Emotions Using Social Media", "text": "Using recent insights from Cognitive, Affective and Social Neuroscience, this paper addresses how affective states in social interactions can be used through social media to analyze and support behaviour for a certain lifestyle. A computational model is provided integrating mechanisms for the impact of one\u2019s emotions on behaviour, and for the impact of emotions of others on one\u2019s own emotion.
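The abstract does not spell out the update equations; purely as an illustration, a generic agent-based emotion-contagion rule (an assumed formulation, not necessarily the paper's model) can be sketched as:

```python
# Toy emotion-contagion update (assumed form): each agent's emotion level
# drifts toward a weighted average of the emotions of its social contacts.
# weights[i][j] is the (assumed) influence of agent j on agent i.
def step(emotions, weights, rate=0.2):
    new = []
    for i, e in enumerate(emotions):
        influence = sum(w * (emotions[j] - e) for j, w in weights[i].items())
        new.append(min(1.0, max(0.0, e + rate * influence)))  # clamp to [0, 1]
    return new

emotions = [0.9, 0.2, 0.4]                    # positivity of three users
weights = [{1: 0.5, 2: 0.5}, {0: 1.0}, {0: 0.7, 1: 0.3}]
for _ in range(10):
    emotions = step(emotions, weights)
print(emotions)  # levels converge as emotion spreads through the contacts
```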
The model is used to reason about and assess the state of a user with regard to a lifestyle goal (such as exercising frequently), based on extracted information about emotions exchanged in social interaction. Support is provided by proposing ways of affecting these social interactions, thereby indirectly influencing the impact of the emotions of others. An ambient intelligence system incorporating this has been implemented for the social medium Twitter."} {"_id": "052aef133dec5629446f1f6f46f294c52f07f123", "title": "Is Proof More Cost-Effective Than Testing?", "text": "This paper describes the use of formal development methods on an industrial safety-critical application. The Z notation was used for documenting the system specification and part of the design, and the SPARK subset of Ada was used for coding. However, perhaps the most distinctive nature of the project lies in the amount of proof that was carried out: proofs were carried out both at the Z level (approximately 150 proofs in 500 pages) and at the SPARK code level (approximately 9,000 verification conditions generated and discharged). The project was carried out under UK Interim Defence Standards 00-55 and 00-56, which require the use of formal methods on safety-critical applications. It is believed to be the first to be completed against the rigorous demands of the 1991 version of these standards. The paper includes comparisons of proof with the various types of testing employed, in terms of their efficiency at finding faults. The most striking result is that the Z proof appears to be substantially more efficient at finding faults than the most efficient testing phase. Given the importance of early fault detection, we believe this helps to show the significant benefit and practicality of large-scale proof on projects of this kind. Index Terms: Safety-critical software, formal specification, SPARK, specification proof, code proof, proof vs. testing, industrial case study."} {"_id": "49104b50403c410ba43bba458cd065ee9e05772b", "title": "An Immune Genetic Algorithm for Software Test Data Generation", "text": "This paper aims at incorporating immune operators in genetic algorithms as an advanced method for solving the problem of test data generation. The new proposed hybrid algorithm is called the immune genetic algorithm (IGA). A full description of this algorithm is presented before investigating its application in the context of software test data generation using some benchmark programs. Moreover, the algorithm is compared with other evolutionary algorithms."} {"_id": "9cb813f60a762795558e9d5621efc8afd6363d35", "title": "High speed and low power preset-able modified TSPC D flip-flop design and performance comparison with TSPC D flip-flop", "text": "Positron emission tomography (PET) is a nuclear functional imaging technique that produces a three-dimensional image of functional organs in the body. PET requires high-resolution, fast, and low-power multichannel analog-to-digital converters (ADCs). A typical multichannel ADC for a PET scanner architecture consists of several blocks. Most of the blocks can be designed by using fast, low-power D flip-flops. A preset-able true single phase clocked (TSPC) D flip-flop shows numerous glitches (noise) at the output due to unnecessary toggling at the intermediate nodes. Preset-able modified TSPC (MTSPC) D flip-flops have been proposed as an alternative solution to alleviate this problem.
However, the MTSPC D flip-flop requires one extra PMOS to suspend toggling of the intermediate nodes. In this work, we designed a 7-bit preset-able gray code counter by using the proposed D flip-flop. This work uses UMC 180 nm CMOS technology for the preset-able 7-bit gray code counter, where we achieved a 1 GHz maximum operating frequency with a most significant bit (MSB) delay of 0.96 ns, a power consumption of 244.2 \u03bcW, and a power-delay product (PDP) of 0.23 pJ from a 1.8 V power supply."} {"_id": "147668a915f6a7ae3acd7ea79bd7548de9e0555b", "title": "Multifocal planes head-mounted displays.", "text": "Stereoscopic head-mounted displays (HMDs) provide an effective capability to create dynamic virtual environments. For a user of such environments, virtual objects would ideally be displayed at the appropriate distances, and natural, concordant accommodation and convergence would be provided. Under such image display conditions, the user perceives these objects as if they were objects in a real environment. Current HMD technology requires convergent eye movements. However, it is currently limited by fixed visual accommodation, which is inconsistent with real-world vision. A prototype multiplanar volumetric projection display based on a stack of laminated planes was built for medical visualization as discussed in a paper presented at a 1999 Advanced Research Projects Agency workshop (Sullivan, Advanced Research Projects Agency, Arlington, Va., 1999). We show how such technology can be engineered to create a set of virtual planes appropriately configured in visual space to suppress conflicts of convergence and accommodation in HMDs. Although some scanning mechanism could be employed to create a set of desirable planes from a two-dimensional conventional display, multiplanar technology accomplishes such a function with no moving parts. Based on optical principles and human vision, we present a comprehensive investigation of the engineering specification of multiplanar technology for integration in HMDs. Using selected human visual acuity and stereoacuity criteria, we show that the display requires at most 27 equally spaced planes, which is within the capability of current research and development display devices, located within a maximal 26-mm-wide stack. We further show that the necessary in-plane resolution is on the order of 5 \u03bcm."} {"_id": "dbf86b0708407de8d90cdb3d35773d437e0bfe74", "title": "The dynamic kinetochore-microtubule interface.", "text": "The kinetochore is a control module that both powers and regulates chromosome segregation in mitosis and meiosis. The kinetochore-microtubule interface is remarkably fluid, with the microtubules growing and shrinking at their point of attachment to the kinetochore. Furthermore, the kinetochore itself is highly dynamic, its makeup changing as cells enter mitosis and as it encounters microtubules. Active kinetochores have yet to be isolated or reconstituted, and so the structure remains enigmatic. Nonetheless, recent advances in genetic, bioinformatic and imaging technology mean we are now beginning to understand how kinetochores assemble, bind to microtubules and release them when the connections made are inappropriate, and also how they influence microtubule behaviour. Recent work has begun to elucidate a pathway of kinetochore assembly in animal cells; the work has revealed that many kinetochore components are highly dynamic and that some cycle between kinetochores and spindle poles along microtubules.
Further studies of the kinetochore-microtubule interface are illuminating: (1) the role of the Ndc80 complex and components of the Ran-GTPase system in microtubule attachment, force generation and microtubule-dependent inactivation of kinetochore spindle checkpoint activity; (2) the role of chromosomal passenger proteins in the correction of kinetochore attachment errors; and (3) the function of microtubule plus-end tracking proteins, motor depolymerases and other proteins in kinetochore movement on microtubules and movement coupled to microtubule poleward flux."} {"_id": "1baff891e92bd7693bcb358296f2220137b352bb", "title": "Active compensation of current unbalance in paralleled silicon carbide MOSFETs", "text": "Current unbalance in paralleled power electronic devices can affect their performance and reliability. In this paper, the factors causing current unbalance in parallel-connected silicon carbide (SiC) MOSFETs are analyzed, and the threshold mismatch is identified as the major factor. Then the distribution and temperature dependence of SiC MOSFETs' threshold voltage are studied experimentally. Based on the analysis and study, an active current balancing (ACB) scheme is presented. The scheme directly measures the unbalance current, and eliminates it in closed loop by varying the gate delay to each device. The turn-on and turn-off current unbalance are sensed and independently compensated to yield an optimal performance at both switching transitions. The proposed scheme provides robust compensation of current unbalance in fast-switching wide-band-gap devices while keeping circuit complexity and cost low. The performance of the proposed ACB scheme is verified by both simulation and experimental results."} {"_id": "682f92a3deedbed883b7fb7faac0f4f29fa46877", "title": "Assessing The Impact Of Gender And Personality On Film Preferences", "text": "In this paper, the impact of gender and Big Five personality factors (Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism) on visual media preferences (for comedy, horror, action, romance and fantasy film genres) was examined. The analyses were carried out using data collected by Stillwell\u2019s (2007) myPersonality application on Facebook, and focused on a sub-sample of British residents aged 16-25 (n=29,197). It was confirmed that females prefer romantic films, whereas males favour action films. As predicted, there was also a marked interaction between gender and personality characteristics: females score higher on Neuroticism and Agreeableness than males. Individuals scoring high on the dimension of Openness were more likely to prefer comedy and fantasy films than romantic films. Conscientious individuals were more inclined to enjoy action and romance genres. Individuals scoring high on Neuroticism tended to prefer romantic films. Finally, significant interactions were noted between gender and Openness with regard to preferences for comedy, action and romance genres. Overall, it is certain that the combination of gender and Big Five personality factors contributes substantially to an understanding of people\u2019s film preferences. Do comedies have a sexually differentiated public? Are males and females who enjoy horror films similar in terms of personality? Regardless of gender, do personality characteristics influence preference for the fantasy genre?
In the present study, we have attempted to provide a meaningful answer to the aforementioned set of questions, based on the analysis of gender and personality interactions with regard to film preferences. In an attempt to complement previous research findings, we focused our analysis on the role of sex and personality in film preferences for British Facebook users aged 16-25. We paid attention to their scores on online Big Five personality questionnaires and their stated gender, using both single and multivariate measures to relate the two variables to film preferences, also listed on their profiles. The following hypotheses were tested: 1. Females and males differ with regard to film preferences. 2. Personality characteristics differ according to gender. 3. Film preferences vary in accordance with personality characteristics. 4. There is a significant interaction between gender and personality with regard to film preferences. The explanatory potential of online social networks: Online social networks are mass repositories of valuable social and psychological information for researchers. Facebook, ranked as the most popular social network by worldwide monthly users, hosts for instance more than 400 million user profiles, in which individuals\u2019 number of friends, media content preferences and socio-demographic background, amongst other things, are displayed. Their systematic use for data collection purposes is a relatively recent trend in social psychological research; Stillwell\u2019s creation of the myPersonality application (2007) on Facebook is a striking exemplar of the considerable explanatory potential of those online social networks. With myPersonality, users can take an online version of the Big Five personality questionnaire and agree to share their profile information for research purposes. As such, users\u2019 personalities can be matched to any other single or collection of individual information available on Facebook, opening up an array of possible areas of investigation. The predominance of visual media: The appeal of visual media is easily demonstrated in terms of the importance it has in the lives of most individuals. Data from the National Survey of Culture, Leisure and Sport (2007) indicates that watching television is the most common leisure activity for over eight out of ten men and women in England, taking precedence over spending time with friends and family, sport, etc. Another intriguing finding relates to TV and movie watching online: the number of Britons visiting such websites reached 21 million in 2007. Although it is clear that many people enjoy visual media offerings, it is much more difficult to pinpoint the specific viewer characteristics that predict genre enjoyment. In other words, who prefers which type of content? Visual media content: an explanation of viewer preferences: Initially the domain of inquiry of communication theorists, media preferences and their determinants are increasingly studied from the vantage point of a variety of disciplines, amongst which psychology and sociology feature predominantly (Kraaykamp et al, 2005). Their combined explanatory potential finds its articulation in the uses and gratification approach (Blumler & Katz, 1974; Rosengreen & Windhal, 1989), according to which preferences for specific media and/or specific content result from an individual\u2019s psychological and social attributes.
Gender differences in media content preferences: Research on media preferences has paid notable attention to sex differences in responses to different types of film. It has been established that various movie genres elicit differentiated affective responses between the two sexes. Rather interestingly, the widespread gender-stereotyping running through entertainment products seems to correspond to actual viewer preferences. Males indeed prefer action/adventure genres typically associated with the masculine sex, whilst females have a soft spot for romance/drama. Oliver et al. (2000) point to the explanatory potential of differential gender-linked expectations, which can be conceptualized along two dimensions: (1) communality, typically associated with females and (2) agency, often associated with males (Eagly, 1987). Depending on whether they are communal or agency-consistent, the themes preponderant in each genre make up for a sexually differentiated public. A number of studies highlighted the importance of sex in social affiliation with media characters: viewers tend to affiliate more strongly with and experience more intense emotional responses to same-sex characters (e.g. Deutsch, 1975; Hoffner & Cantor, 1991). Another potential explanation is that of sex differences in mean personality levels, which remain fairly constant across the lifespan (Feingold, 1994; McCrae & Costa, 1984). Personality differences in media preferences: Theorists have also focused considerable attention on the role of personality characteristics in modern mass communication theory (Blumler & Katz, 1974; Wober, 1986). In most personality/media use models, personality is conceptualized as the nexus of attitudes, beliefs and values that guide one\u2019s cognitive and affective interactions with the social environment (Weaver et al, 1993). Personality characteristics are thus believed to influence media preferences, which are essentially evaluative judgements pertaining to the gratifications consumers anticipate from their interaction with the media (Palmgreen, 1984). The Big Five framework of personality is most frequently employed by researchers seeking to demonstrate empirical connections between media gratifications and their psychological roots (e.g. Kraaykamp, 2001; Kraaykamp et al, 2005). It includes the following domains: Openness (traits like originality and open-mindedness), Conscientiousness (traits like hardworking and orderly), Extraversion (traits like energetic and sociable), Agreeableness (traits like affectionate and considerate) and Neuroticism (traits like nervous and tense) (Donellan & Lucas, 2008). With regard to media preferences, individuals scoring high on the dimension of Openness seem to prefer original and serious media content, whilst conscientious individuals like predictable and structured formats (Kraaykamp et al, 2005). The findings for extroverts are less straightforward: they seemingly favour sensory stimulation because of relatively high arousal levels (Costa & McCrae, 1988). Agreeableness appears to be the second most important personality trait, after Openness, in predicting visual preferences. Friendly individuals tend to watch popular content, devoid of upsetting and unconventional images (Kraaykamp, 2001). Finally, emotionally unstable individuals are thought to find easy means of escape from feelings of tension and stress in watching TV (Conway & Rubin, 1991).
Most studies on personality effects in media preferences tend to either overlook or control for participants\u2019 gender (e.g. Weaver et al, 1993; Kraaykamp, 2001). As a result, it is difficult to tell in the former case whether the differentiated association between media preferences and personality characteristics is partially caused by the fact that personality traits differ systematically between people of different sexes (Kraaykamp & Van Eijck, 2005). The latter design precludes the possibility of assessing the interaction between gender and personality when it comes to predicting media taste. It seemed to us as though no one has investigated gendered differences in personality in the context of similar genre preferences."} {"_id": "1cdc4ad61825d3a7527b85630fe60e0585fb9347", "title": "Learning analytics: drivers, developments and challenges", "text": "Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learning-focused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges."} {"_id": "41a5aa76a0d6b2ff234546a5e9efba48b403ce19", "title": "Color, Shape and Texture based Fruit Recognition System", "text": "The paper presents an automated system for classification of fruits. A dataset containing five different fruits was constructed using an ordinary camera. All the fruits were analyzed on the basis of their color (RGB space), shape and texture and then classified using different classifiers to find the classifier that gives the best accuracy. The Gray Level Co-occurrence Matrix (GLCM) is used to calculate texture features. Best accuracy was achieved by the support vector machine (SVM). All the processing was carried out in Matlab. Keywords\u2014computer vision, pattern recognition, support vector machine, texture features."} {"_id": "f3ac0d94ba2374e46dfa3a13effcc540205faf21", "title": "Exploring the relationships between college students' cell phone use, personality and leisure", "text": ""} {"_id": "f93a76ffaf8c824c4100557e78cbb208f5fe5efb", "title": "A New ADS-B Authentication Framework Based on Efficient Hierarchical Identity-Based Signature with Batch Verification", "text": "Automatic dependent surveillance-broadcast (ADS-B) has become a crucial part of next generation air traffic surveillance technology and will be mandatorily deployed for most of the airspaces worldwide by 2020. Each aircraft equipped with an ADS-B device keeps broadcasting plaintext messages to other aircraft and the ground station controllers once or twice per second. The lack of security measures in ADS-B systems makes it susceptible to different attacks. Among the various security issues, we investigate the integrity and authenticity of ADS-B messages.
We propose a new framework for providing ADS-B with authentication based on three-level hierarchical identity-based signature (HIBS) with batch verification. Previous signature-based ADS-B authentication protocols focused on how to generate signatures efficiently, while our schemes can also significantly reduce the verification cost, which is critical to ADS-B systems, since at any time an ADS-B receiver may receive lots of signatures. We design two concrete schemes. The basic scheme supports partial batch verification and the extended scheme provides full batch verification. We give a formal security proof for the extended scheme. Experimental results show that our schemes with batch verification are tremendously more efficient in batch verifying $n$ signatures than verifying $n$ signatures independently. For example, the running time of verifying 100 signatures is 502 and 484\u00a0ms for the basic scheme and the extended scheme respectively, while the time is 2500\u00a0ms if verifying the signatures independently."} {"_id": "ee29d24b5e5bc8a8a71d5b57368f9a69e537fda7", "title": "Dual-Band and Dual-Polarized SIW-Fed Microstrip Patch Antenna", "text": "A new dual-band and dual-polarized antenna is investigated in this letter. Two longitudinal and transverse slots on the broad wall of the substrate integrated waveguide (SIW) are responsible for creating two different frequency bands with distinct polarization. A frequency selective surface (FSS) is placed on top of the microstrip patches to reduce the cross-polarization level of the antenna. The resonance frequencies of the patch and slot are set near to each other to get a wider impedance bandwidth. A bandwidth of more than 7% is achieved in each band. The SIW feeding network increases the efficiency of the antenna. The simulated radiation efficiency of the antenna is better than 87% over the impedance bandwidth."} {"_id": "24c3961036ba2c0d3e548b2d94af3410ad8d7e6d", "title": "Simulation of lightning impulse voltage stresses in underground cables", "text": "The goal of this work is to study the transient behavior of the cable against the application of standard and non-standard lightning impulse voltage waveforms. A 66 kV cable model has been developed in MATLAB Simulink and standard and non-standard impulse voltages are applied to it. A preliminary comparative study on the obtained voltages indicated that non-standard impulse voltage waveforms developed higher voltage stress in the cable. Simulation results helped to investigate which impulses (standard or non-standard) represent the worst possible voltage stresses on the cables. This study provides the basis for further study of the effects of non-standard impulse voltage waveforms and for making necessary corrections in the existing impulse testing standards."} {"_id": "7fb8b44968e65b668ab09ad0e64763785c31bc1d", "title": "Skeletal Quads: Human Action Recognition Using Joint Quadruples", "text": "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skeletal feature, referred to as a skeletal quad. Further, the use of a Fisher kernel representation is suggested to describe the skeletal quads contained in a (sub)action.
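One plausible reading of this construction (a sketch under assumed details: the similarity transform is taken to send the first joint to the origin and the second to (1,1,1), with the mapped third and fourth joints forming the 6D code; the paper's exact normalisation may differ):

```python
# Sketch of a view-invariant quad code: a similarity transform sends joint
# j1 to the origin and j2 to (1,1,1); the mapped coordinates of j3 and j4
# then form the 6-D descriptor.
import numpy as np

def quad_descriptor(j1, j2, j3, j4):
    j1, j2, j3, j4 = map(np.asarray, (j1, j2, j3, j4))
    u = j2 - j1
    s = np.sqrt(3.0) / np.linalg.norm(u)          # scale so |j2 - j1| -> sqrt(3)
    u = u / np.linalg.norm(u)
    v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # target direction of j2
    # Rodrigues rotation taking u onto v (assumes u is not opposite to v).
    axis = np.cross(u, v)
    c, sn = float(np.dot(u, v)), np.linalg.norm(axis)
    if sn < 1e-9:
        R = np.eye(3)
    else:
        k = axis / sn
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + sn * K + (1 - c) * (K @ K)
    f = lambda x: s * (R @ (x - j1))
    return np.concatenate([f(j3), f(j4)])         # 6-D, view/scale invariant

print(quad_descriptor([0, 0, 0], [1, 0, 0], [0.5, 0.5, 0], [0.2, 0, 0.3]))
```

Because the code depends only on relative joint positions after the similarity normalisation, the same gesture seen from a different viewpoint or at a different body scale yields the same 6D feature.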
A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues."} {"_id": "c3833f53c947bf89e2c06fd152ca4c7e5a651d6e", "title": "Pedestrian detection and tracking with night vision", "text": "This paper presents a method for pedestrian detection and tracking using a single night-vision video camera installed on the vehicle. To deal with the nonrigid nature of human appearance on the road, a two-step detection/tracking method is proposed. The detection phase is performed by a support vector machine (SVM) with size-normalized pedestrian candidates and the tracking phase is a combination of Kalman filter prediction and mean shift tracking. The detection phase is further strengthened by information obtained by a road-detection module that provides key information for pedestrian validation. Experimental comparisons (e.g., grayscale SVM recognition versus binary SVM recognition and entire-body detection versus upper-body detection) have been carried out to illustrate the feasibility of our approach."} {"_id": "07f488bf2285b290058eb49cf8c25abfd3a13c7d", "title": "Video Google: A Text Retrieval Approach to Object Matching in Videos", "text": "We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching on two full length feature films."} {"_id": "a5134affedeeac45a1123167e0ecbf52aa96cc1e", "title": "Multi-Granularity Chinese Word Embedding", "text": "This paper considers the problem of learning Chinese word embeddings. In contrast to English, a Chinese word is usually composed of characters, and most of the characters themselves can be further divided into components such as radicals. While characters and radicals contain rich information and are capable of indicating semantic meanings of words, they have not been fully exploited by existing word embedding methods. In this work, we propose multi-granularity embedding (MGE) for Chinese words. The key idea is to make full use of such word-character-radical composition, and enrich word embeddings by further incorporating finer-grained semantics from characters and radicals. Quantitative evaluation demonstrates the superiority of MGE in word similarity computation and analogical reasoning. 
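As a toy sketch of the word-character-radical composition idea (the lookup tables, embedding size, and the additive combination rule below are fabricated for illustration and are not MGE's exact training objective):

```python
# Toy multi-granularity composition: enrich a word vector with averaged
# character and radical vectors. All vectors and decompositions here are
# made up; MGE's actual objective and combination may differ in detail.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
word_vec = {w: rng.normal(size=dim) for w in ["智能"]}
char_vec = {c: rng.normal(size=dim) for c in "智能"}
rad_vec = {r: rng.normal(size=dim) for r in ["日", "月"]}   # assumed radicals
radicals_of = {"智": ["日"], "能": ["月"]}                   # toy decomposition

def mge_embedding(word):
    chars = list(word)
    rads = [r for c in chars for r in radicals_of.get(c, [])]
    h = word_vec[word].copy()
    h += np.mean([char_vec[c] for c in chars], axis=0)      # character level
    if rads:
        h += np.mean([rad_vec[r] for r in rads], axis=0)    # radical level
    return h / np.linalg.norm(h)

print(mge_embedding("智能"))
```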
Qualitative analysis further shows its capability to identify finer-grained semantic meanings of words."} {"_id": "4212f7c3534e5c549504ffa602c985b08788301c", "title": "When is a linguistic metaphor a conceptual metaphor?", "text": "Volume 24 New Directions in Cognitive Linguistics Edited by Vyvyan Evans and St\u00e9phanie Pourcel Editorial Advisory Board Melissa F. Bowerman Nijmegen Wallace Chafe Santa Barbara, CA Philip R. Cohen Portland, OR Antonio Damasio Iowa City, IA Morton Ann Gernsbacher Madison, WI David McNeill Chicago, IL Eric Pederson Eugene, OR Fran\u00e7ois Recanati Paris Sally Rice Edmonton, Alberta Benny Shanon Jerusalem Lokendra Shastri Berkeley, CA Paul Thagard Waterloo, Ontario Human Cognitive Processing (HCP)"} {"_id": "49fd00a22f44a52f4699730403033416e0762e6d", "title": "Spam, Phishing, and Fraud Detection Using Random Projections, Adversarial Learning, and Semi-Supervised Learning", "text": ""} {"_id": "a20bd2c0b96b352a94a97f224bd2160a037f0e1f", "title": "Pathways for irony detection in tweets", "text": "Posts on Twitter allow users to express ideas and opinions in a very dynamic way. The volume of data available is incredibly high in this medium, and it may provide relevant clues regarding the judgement of the public on certain products, events, services etc. While in standard sentiment analysis the most common task is to classify the utterances according to their polarity, it is clear that detecting ironic senses represents a big challenge for Natural Language Processing. By observing a corpus constituted by tweets, we propose a set of patterns that might suggest ironic/sarcastic statements. Thus, we developed special clues for irony detection, through the implementation and evaluation of a set of patterns."} {"_id": "860d3d4114711fa4ce9a5a4ccf362b80281cc981", "title": "Developing an Ontology of the Cyber Security Domain", "text": "This paper reports on a trade study we performed to support the development of a Cyber ontology from an initial malware ontology. The goals of the Cyber ontology effort are first described, followed by a discussion of the ontology development methodology used. The main body of the paper then follows, which is a description of the potential ontologies and standards that could be utilized to extend the Cyber ontology from its initially constrained malware focus. These resources include, in particular, Cyber and malware standards, schemas, and terminologies that directly contributed to the initial malware ontology effort. Other resources are upper (sometimes called 'foundational') ontologies. Core concepts that any Cyber ontology will extend have already been identified and rigorously defined in these foundational ontologies. However, for lack of space, this section is profoundly reduced. In addition, utility ontologies that are focused on time, geospatial, person, events, and network operations are briefly described. These utility ontologies can be viewed as specialized super-domain or even mid-level ontologies, since they span many, if not most, ontologies, including any Cyber ontology. An overall view of the ontological architecture used by the trade study is also given. The report on the trade study concludes with some proposed next steps in the iterative evolution of the"} {"_id": "18640600aa6dfdc25ce89031939b14fa3fe85108", "title": "The Painting Fool Sees!
New Projects with the Automated Painter", "text": "We report the most recent advances in The Painting Fool project, where we have integrated machine vision capabilities from the DARCI system into the automated painter, to enhance its abilities before, during and after the painting process. This has enabled new art projects, including a commission from an Artificial Intelligence company, and we report on this collaboration, which is one of the first instances in Computational Creativity research where creative software has been commissioned directly. The new projects have advanced The Painting Fool as an independent artist able to produce more diverse styles which break away from simulating natural media. The projects have also raised a philosophical question about whether software artists need to see in the same way as people, which we discuss briefly."} {"_id": "e21c77eaf5330a8cec489d91743ce7688811b38b", "title": "Socializing by Gaming: Revealing Social Relationships in Multiplayer Online Games", "text": "Multiplayer Online Games (MOGs) like Defense of the Ancients and StarCraft II have attracted hundreds of millions of users who communicate, interact, and socialize with each other through gaming. In MOGs, rich social relationships emerge and can be used to improve gaming services such as match recommendation and game population retention, which are important for the user experience and the commercial value of the companies who run these MOGs. In this work, we focus on understanding social relationships in MOGs. We propose a graph model that is able to capture social relationships of a variety of types and strengths. We apply our model to real-world data collected from three MOGs that contain in total over ten years of behavioral history for millions of players and matches. We compare social relationships in MOGs across different game genres and with regular online social networks like Facebook. Taking match recommendation as an example application of our model, we propose SAMRA, a Socially Aware Match Recommendation Algorithm that takes social relationships into account. We show that our model not only improves the precision of traditional link prediction approaches, but also potentially helps players enjoy games to a higher extent."} {"_id": "6d255f6d3b99296545e435744a8745d085621167", "title": "Private Collaborative Neural Network Learning", "text": "Machine learning algorithms, such as neural networks, create better predictive models when having access to larger datasets. In many domains, such as medicine and finance, each institute has only access to limited amounts of data, and creating larger datasets typically requires collaboration. However, there are privacy related constraints on these collaborations for legal, ethical, and competitive reasons. In this work, we present a feasible protocol for learning neural networks in a collaborative way while preserving the privacy of each record. This is achieved by combining Differential Privacy and Secure Multi-Party Computation with Machine Learning."} {"_id": "617e96136117a10dd01445c8d5531d1be47f1e97", "title": "Supervised Hashing with Deep Neural Networks", "text": "In this paper, we propose training very deep neural networks (DNNs) for supervised learning of hash codes. Existing methods in this context train relatively \u201cshallow\u201d networks limited by the issues arising in back propagation (e.g. vanishing gradients) as well as computational efficiency. 
We propose a novel and efficient training algorithm inspired by the alternating direction method of multipliers (ADMM) that overcomes some of these limitations. Our method decomposes the training process into independent layer-wise local updates through auxiliary variables. Empirically we observe that our training algorithm always converges and its computational complexity is linearly proportional to the number of edges in the networks. Empirically we manage to train DNNs with 64 hidden layers and 1024 nodes per layer for supervised hashing in about 3 hours using a single GPU. Our proposed very deep supervised hashing (VDSH) method significantly outperforms the state-of-the-art on several benchmark datasets."} {"_id": "833d2d79d6563df0ee201f5067ab0081220d0d0d", "title": "Treatment of thoracolumbar burst fractures without neurologic deficit by indirect reduction and posterior instrumentation: bisegmental stabilization with monosegmental fusion", "text": "This study retrospectively reviews 20 sequential patients with thoracolumbar burst fractures without neurologic deficit. All patients were treated by indirect reduction, bisegmental posterior transpedicular instrumentation and monosegmental fusion. Clinical and radiological outcome was analyzed after an average follow-up of 6.4 years. Re-kyphosis of the entire segment including the cephalad disc was significant, with loss of the entire postoperative correction over time. This did not influence the generally benign clinical outcome. Compared to its normal height, the fused cephalad disc was reduced by 70% and the temporarily spanned caudal disc by 40%. Motion at the temporarily spanned segment could be detected in 11 patients at follow-up, with no relation to the clinical result. Posterior instrumentation of thoracolumbar burst fractures can initially reduce the segmental kyphosis completely. The loss of correction within the fractured vertebral body is small. However, disc space collapse leads to eventual complete loss of segmental reduction. Therefore, posterolateral fusion alone does not prevent disc space collapse. Nevertheless, clinical long-term results are favorable. However, if disc space collapse is to be prevented, an interbody disc clearance and fusion is recommended."} {"_id": "14c8609e5632205c6af493277b1bcc36db266411", "title": "Socioeconomic disparities in health: pathways and policies.", "text": "Socioeconomic status (SES) underlies three major determinants of health: health care, environmental exposure, and health behavior. In addition, chronic stress associated with lower SES may also increase morbidity and mortality. Reducing SES disparities in health will require policy initiatives addressing the components of socioeconomic status (income, education, and occupation) as well as the pathways by which these affect health. Lessons for U.S. policy approaches are taken from the Acheson Commission in England, which was charged with reducing health disparities in that country."} {"_id": "5d9fbab509a8fb09eac6eea88dbb8dcfb646f5df", "title": "Human wayfinding in information networks", "text": "Navigating information spaces is an essential part of our everyday lives, and in order to design efficient and user-friendly information systems, it is important to understand how humans navigate and find the information they are looking for.
We perform a large-scale study of human wayfinding, in which, given a network of links between the concepts of Wikipedia, people play a game of finding a short path from a given start to a given target concept by following hyperlinks. What distinguishes our setup from other studies of human Web-browsing behavior is that in our case people navigate a graph of connections between concepts, and that the exact goal of the navigation is known ahead of time. We study more than 30,000 goal-directed human search paths and identify strategies people use when navigating information spaces. We find that human wayfinding, while mostly very efficient, differs from shortest paths in characteristic ways. Most subjects navigate through high-degree hubs in the early phase, while their search is guided by content features thereafter. We also observe a trade-off between simplicity and efficiency: conceptually simple solutions are more common but tend to be less efficient than more complex ones. Finally, we consider the task of predicting the target a user is trying to reach. We design a model and an efficient learning algorithm. Such predictive models of human wayfinding can be applied in intelligent browsing interfaces."} {"_id": "366e55e7cc93240e8360b85dfec95124410c48bc", "title": "Designing and Packaging Wide-Band PAs: Wideband PA and Packaging, History, and Recent Advances: Part 1", "text": "In current applications such as communication, aerospace and defense, electronic warfare (EW), electromagnetic compatibility (EMC), and sensing, among others, there is an ever-growing demand for more linear power with increasingly greater bandwidth and efficiency. Critical for these applications are the design and packaging of wide-band, high-power amplifiers that are both compact in size and low in cost [1]. Most such applications, including EW, radar, and EMC testers, require high 1-dB compression point (P1dB) power with good linearity across a wide band (multi-octave to decade bandwidth) [2]. In addition to linear power, wide bandwidth is essential for high-data-rate communication and high resolution in radar and active imagers [3]. In modern electronics equipment such as automated vehicles, rapidly increasing complexity imposes strict EMC regulations for human safety and security [4]. This often requires challenging specifications for the power amplifier (PA), such as very high P1dB power [to kilowatt, continuous wave (CW)] across approximately a decade bandwidth with high linearity, reliability, and long life, even for 100% load mismatch [5]."} {"_id": "abe51b31659d4c45f10b99ada0c346be78d333b1", "title": "Grid-Connected Photovoltaic Generation System", "text": "This study addresses a grid-connected photovoltaic (PV) generation system. In order to make the PV generation system more flexible and expandable, the backstage power circuit is composed of a high step-up converter and a pulsewidth-modulation (PWM) inverter. In the dc-dc power conversion, the high step-up converter is introduced to improve the conversion efficiency of conventional boost converters and to allow the parallel operation of low-voltage PV modules. Moreover, an adaptive total sliding-mode control system is designed for the current control of the PWM inverter to maintain the output current with a higher power factor and less variation under load changes. 
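The abstract gives no control equations; as a generic illustration of sliding-mode current tracking for an inverter with an L-R output filter (all parameters and the control law below are assumptions, not the paper's adaptive total sliding-mode design):

```python
# Generic discrete-time sliding-mode current tracking for an inverter with
# an L-R output filter: L di/dt = u - R*i - e_grid. Sliding surface
# s = i_ref - i; control u = equivalent control + k*sign(s).
import math

L, R, k, dt = 2e-3, 0.5, 30.0, 1e-5          # illustrative values only
e_amp, i_amp, w = 155.0, 10.0, 2 * math.pi * 60

i, t, worst = 0.0, 0.0, 0.0
for _ in range(20000):                        # 0.2 s of simulated time
    e = e_amp * math.sin(w * t)               # grid voltage
    i_ref = i_amp * math.sin(w * t)           # in-phase current command
    di_ref = i_amp * w * math.cos(w * t)
    s = i_ref - i                             # sliding surface
    u = e + R * i + L * di_ref + k * (1 if s > 0 else -1)
    i += dt * (u - R * i - e) / L             # plant update (Euler step)
    t += dt
    if t > 0.1:                               # after the reaching phase
        worst = max(worst, abs(i_ref - i))
print(f"worst tracking error after 0.1 s: {worst:.4f} A")
```

With this law the error dynamics reduce to ds/dt = -(k/L)*sign(s), so the tracking error is driven to a small chattering band around zero regardless of bounded load variation, which is the robustness property sliding-mode control is used for here.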
In addition, an adaptive step-perturbation method is proposed to achieve the objective of maximum power point tracking, and an active sun tracking scheme without any light sensors is investigated to make PV plates face the sun directly in order to capture maximum irradiation and enhance system efficiency. Experimental results are given to verify the validity of the high step-up converter, the PWM inverter control, the ASP method, and the active sun tracker for a grid-connected PV generation system."} {"_id": "07ee87c822e9bb6deac504cf45439ff856352b5e", "title": "Development of the Perceived Stress Questionnaire: a new tool for psychosomatic research.", "text": "A 30-question Perceived Stress Questionnaire (PSQ) was validated, in Italian and English, among 230 subjects. Test-retest reliability was 0.82 for the General (past year or two) PSQ, while monthly Recent (past month) PSQs varied by a mean factor of 1.9 over 6 months; coefficient alpha > 0.9. General and/or Recent PSQ scores were associated with trait anxiety (r = 0.75), Cohen's Perceived Stress Scale (r = 0.73), depression (r = 0.56), self-rated stress (r = 0.56), and stressful life events (p < 0.05). The General PSQ was higher in in-patients than in out-patients (p < 0.05); both forms were correlated with a somatic complaints scale in a non-patient population (r > 0.5), and were higher, among 27 asymptomatic ulcerative colitis patients, in the seven who had rectal inflammation than in those with normal proctoscopy (p = 0.03). Factor analysis yielded seven factors, of which those reflecting interpersonal conflict and tension were significantly associated with health outcomes. The Perceived Stress Questionnaire may be a valuable addition to the armamentarium of psychosomatic researchers."} {"_id": "640a932208e0fc67b49019961bc420dfb91ac676", "title": "Dynamically pre-trained deep recurrent neural networks using environmental monitoring data for predicting PM2.5", "text": "Fine particulate matter (PM2.5) has a considerable impact on human health, the environment and climate change. It is estimated that with better predictions, US$9 billion can be saved over a 10-year period in the USA (State of the science fact sheet air quality. http://www.noaa.gov/factsheets/new, 2012). Therefore, it is crucial to keep developing models and systems that can accurately predict the concentration of major air pollutants. In this paper, our target is to predict PM2.5 concentration in Japan using environmental monitoring data obtained from physical sensors with improved accuracy over the currently employed prediction models. To do so, we propose a deep recurrent neural network (DRNN) that is enhanced with a novel pre-training method using an auto-encoder especially designed for time series prediction. Additionally, sensor selection is performed within the DRNN without harming the accuracy of the predictions by taking advantage of the sparsity found in the network. The numerical experiments show that the DRNN with our proposed pre-training method is superior to both a canonical and a state-of-the-art auto-encoder training method when applied to time series prediction. The experiments confirm that when compared against the PM2.5 prediction system VENUS (National Institute for Environmental Studies. Visual Atmospheric Environment Utility System.
http://envgis5.nies.go.jp/osenyosoku/, 2014), our technique improves the accuracy of PM2.5 concentration level predictions that are being reported in Japan."} {"_id": "6864a3cab546cf9de7dd183c73ae344fddb033b6", "title": "Feature selection in text classification", "text": "In recent years, text classification has been widely used. The dimensionality of text data has increased more and more. The behavior of almost all classification algorithms is directly related to dimensionality. On high-dimensional data sets, classification algorithms both take more time and suffer from overfitting. So feature selection is crucial for machine learning techniques. In this study, the frequently used feature selection metrics Chi Square (CHI), Information Gain (IG) and Odds Ratio (OR) have been applied. At the same time, Relevancy Frequency (RF), originally proposed as a term weighting method, has been used as a feature selection method in this study. tf.idf is used as the term weighting method, and Sequential Minimal Optimization (SMO) and Naive Bayes (NB) are used as the classification algorithms. Experimental results show that RF gives successful results."} {"_id": "f5feb2a151c54ec9699924d401a66c193ddd3c8b", "title": "Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle", "text": "This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several necessary functions of autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scans method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system."} {"_id": "49c88aa6a22a41eef4058578ce1470964439b35f", "title": "3D laser scan classification using web data and domain adaptation", "text": "Over the last years, object recognition has become a more and more active field of research in robotics. An important problem in object recognition is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google\u2019s 3D Warehouse to train classifiers for 3D laser scans collected by a robot navigating through urban environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled 3D laser scans and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real laser scans."} {"_id": "707012ec660e31ba101cc43609e2201a2bd833fa", "title": "Detecting human activities in retail surveillance using hierarchical finite state machine", "text": "Cashiers in retail stores usually exhibit certain repetitive and periodic activities when processing items. Detecting such activities plays a key role in most retail fraud detection systems.
In this paper, we propose a highly efficient, effective and robust vision technique to detect checkout-related primitive activities, based on a hierarchical finite state machine (FSM). Our deterministic approach uses visual features and prior spatial constraints on the hand motion to capture particular motion patterns performed in primitive activities. We also apply our approach to the problem of retail fraud detection. Experimental results on a large set of video data captured from retail stores show that our approach, while much simpler and faster, achieves significantly better results than state-of-the-art machine learning-based techniques both in detecting checkout-related activities and in detecting checkout-related fraudulent incidents."} {"_id": "49e78709e255efb02d9eca2e34b4a0edf8fcd026", "title": "COMPARING PROGRAMMER PRODUCTIVITY IN OPENACC AND CUDA: AN EMPIRICAL INVESTIGATION", "text": "OpenACC has been touted as a \"high productivity\" API designed to make GPGPU programming accessible to scientific programmers, but to date, no studies have attempted to verify this quantitatively. In this paper, we conduct an empirical investigation of programmer productivity, comparing OpenACC and CUDA in terms of programming time and execution time, and analyzing the independence of the OpenACC model in high-performance problems. Our results show that, for our programs and our subject pool, this claim is true. We created two assignments called Machine Problem 3 (MP3) and Machine Problem 4 (MP4) in the classroom environment and instrumented the WebCode website, which we developed ourselves, to record details of students\u2019 coding process. Three hypotheses were supported by the statistical data: for the same parallelizable problem, (1) the OpenACC programming time is at least 37% shorter than CUDA; (2) the CUDA running speed is 9x faster than OpenACC; (3) the OpenACC development work is not significantly affected by previous CUDA experience."} {"_id": "7d426470b5f705404d643b796c4b8b3826e8af99", "title": "PROFITING FROM ARBITRAGE AND ODDS BIASES OF THE EUROPEAN FOOTBALL GAMBLING MARKET", "text": "A gambling market is usually described as being inefficient if there are one or more betting strategies that generate profit, at a consistent rate, as a consequence of exploiting market flaws. This paper examines the online European football gambling market based on 14 European football leagues over a period of seven years, from season 2005/06 to 2011/12 inclusive, and takes into consideration the odds provided by numerous bookmaking firms. Contrary to common misconceptions, we demonstrate that the accuracy of bookmakers' odds has not improved over this period. More importantly, our results question market efficiency by demonstrating high profitability on the basis of consistent odds biases and numerous arbitrage opportunities."} {"_id": "7b0f2b6485fe5d4c7332030b98bd11f43dcd90fd", "title": "Low-cost, real-time obstacle avoidance for mobile robots", "text": "The goal of this project is to advance the field of automation and robotics by utilizing recently-released, low-cost sensors and microprocessors to develop a mechanism that provides depth-perception and autonomous obstacle avoidance in a plug-and-play fashion. We describe the essential hardware components that can enable such a low-cost solution and an algorithm to avoid static obstacles present in the environment. 
The mechanism utilizes a novel single-point LIDAR module that affords more robustness and invariance than popular approaches, such as Neural Networks and Stereo. When this hardware is coupled with the proposed efficient obstacle avoidance algorithm, this mechanism is able to accurately represent environments through point clouds and construct obstacle-free paths to a destination, in a small timeframe. A prototype mechanism has been installed on a quadcopter for visualization of how actual implementation may take place. We describe experimental results based on this prototype."} {"_id": "4767a0c9f7261a4265db650d3908c6dd1d10a076", "title": "Joint tracking and segmentation of multiple targets", "text": "Tracking-by-detection has proven to be the most successful strategy to address the task of tracking multiple targets in unconstrained scenarios [e.g. 40, 53, 55]. Traditionally, a set of sparse detections, generated in a preprocessing step, serves as input to a high-level tracker whose goal is to correctly associate these \u201cdots\u201d over time. An obvious shortcoming of this approach is that most information available in image sequences is simply ignored by thresholding weak detection responses and applying non-maximum suppression. We propose a multi-target tracker that exploits low level image information and associates every (super)-pixel to a specific target or classifies it as background. As a result, we obtain a video segmentation in addition to the classical bounding-box representation in unconstrained, real-world videos. Our method shows encouraging results on many standard benchmark sequences and significantly outperforms state-of-the-art tracking-by-detection approaches in crowded scenes with long-term partial occlusions."} {"_id": "6ae0a919ffee81098c9769a4503cc15e42c5b585", "title": "Parallel Formulations of Decision-Tree Classification Algorithms", "text": "Classification decision tree algorithms are used extensively for data mining in many domains such as retail target marketing, fraud detection, etc. Highly parallel algorithms for constructing classification decision trees are desirable for dealing with large data sets in a reasonable amount of time. Algorithms for building classification decision trees have a natural concurrency, but are difficult to parallelize due to the inherent dynamic nature of the computation. In this paper, we present parallel formulations of a classification decision tree learning algorithm based on induction. We describe two basic parallel formulations. One is based on the Synchronous Tree Construction Approach and the other is based on the Partitioned Tree Construction Approach. We discuss the advantages and disadvantages of using these methods and propose a hybrid method that employs the good features of these methods. We also provide the analysis of the cost of computation and communication of the proposed hybrid method. 
Moreover, experimental results on an IBM SP-2 demonstrate excellent speedups and scalability."} {"_id": "146bb2ea1fbdd86f81cd0dae7d3fd63decac9f5c", "title": "Genetic Algorithms in Search Optimization and Machine Learning", "text": "This book brings together, in an informal and tutorial fashion, the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems..."} {"_id": "97d3a8863cf2ea6a4670a911cf2fbeca6e043b36", "title": "Indian Sign Language gesture classification as single or double handed gestures", "text": "The development of a sign language recognition system can have a great impact on the daily lives of humans with hearing disabilities. Recognizing gestures from the Indian Sign Language (ISL) with a camera can be difficult due to the complexity of various gestures. The motivation behind the paper is to develop an approach to successfully classify gestures in the ISL under ambiguous conditions from static images. A novel approach involving the decomposition of gestures into single-handed or double-handed gestures has been presented in this paper. Classifying gestures into these subcategories simplifies the process of gesture recognition in the ISL due to the smaller number of gestures in each subcategory. Various approaches making use of Histogram of Gradients (HOG) features and geometric descriptors using KNN and SVM classifiers were tried on a dataset consisting of images of all 26 English alphabets present in the ISL under variable backgrounds. HOG features classified with a Support Vector Machine were found to be the most effective approach, resulting in an accuracy of 94.23%."} {"_id": "006c843a29a3c77f232638bf7aea63fc8601a73a", "title": "Estimation of the Mathematical Parameters of Double-Exponential Pulses Using the Nelder\u2013Mead Algorithm", "text": "Transient pulses for electromagnetic compatibility problems, such as the high-altitude electromagnetic pulse and ultrawideband pulses, are often described by a double-exponential pulse. Such a pulse shape is specified physically by the three characteristic parameters rise time tr, pulsewidth tfwhm (full-width at half-maximum), and maximum amplitude Emax. The mathematical description is a double-exponential function with the parameters \u03b1, \u03b2, and E0. In practice, it is often necessary to transform the two groups of parameters into each other. This paper shows a novel relationship between the physical parameters tr and tfwhm on the one hand and the mathematical parameters \u03b1 and \u03b2 on the other. It is shown that the least-squares method in combination with the Nelder-Mead simplex algorithm is appropriate to determine an approximate closed-form formula between these parameters. Therefore, the extensive analysis of double-exponential pulses is possible in a considerably shorter computation time. The overall approximation error is less than 3.8%."} {"_id": "3c32cd58de4ed693e196d51eb926396700ed8488", "title": "An Error-Oriented Approach to Word Embedding Pre-Training", "text": "We propose a novel word embedding pretraining approach that exploits writing errors in learners\u2019 scripts. We compare our method to previous models that tune the embeddings based on script scores and the discrimination between correct and corrupt word contexts in addition to the generic commonly-used embeddings pre-trained on large corpora. 
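As an aside to the double-exponential pulse record above: the kind of fit it relies on can be reproduced in miniature with a least-squares objective minimized by the derivative-free Nelder-Mead simplex. This is a hedged sketch, not the paper's procedure; the closed-form mapping between (tr, tfwhm) and (\u03b1, \u03b2) that the paper derives is not reproduced, and the parameter values below are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def pulse(t, E0, alpha, beta):
    # Double-exponential pulse: E(t) = E0 * (exp(-alpha*t) - exp(-beta*t)), beta > alpha
    return E0 * (np.exp(-alpha * t) - np.exp(-beta * t))

def fit_pulse(t, e_meas, x0=(1.0, 4.0e7, 6.0e8)):
    # Sum-of-squared-errors objective; Nelder-Mead requires no derivatives
    sse = lambda x: np.sum((pulse(t, *x) - e_meas) ** 2)
    return minimize(sse, x0, method="Nelder-Mead").x

# Recover illustrative (E0, alpha, beta) from noisy synthetic samples
t = np.linspace(0.0, 1e-6, 2000)
rng = np.random.default_rng(0)
e_meas = pulse(t, 1.3, 5.0e7, 7.0e8) + rng.normal(0.0, 0.005, t.size)
print(fit_pulse(t, e_meas))
```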
The comparison is achieved by using the aforementioned models to bootstrap a neural network that learns to predict a holistic score for scripts. Furthermore, we investigate augmenting our model with error corrections and monitor the impact on performance. Our results show that our error-oriented approach outperforms other comparable ones, which is further demonstrated when training on more data. Additionally, extending the model with corrections provides further performance gains when data sparsity is an issue."} {"_id": "851262970e8c26cb1fa0bef54eb21f22a0d5233f", "title": "Single-Channel and Multi-Channel Sinusoidal Audio Coding Using Compressed Sensing", "text": "Compressed sensing (CS) samples signals at a much lower rate than the Nyquist rate if they are sparse in some basis. In this paper, the CS methodology is applied to sinusoidally modeled audio signals. As this model is sparse by definition in the frequency domain (being equal to the sum of a small number of sinusoids), we investigate whether CS can be used to encode audio signals at low bitrates. In contrast to encoding the sinusoidal parameters (amplitude, frequency, phase) as current state-of-the-art methods do, we propose encoding a few randomly selected samples of the time-domain description of the sinusoidal component (per signal segment). The potential of applying compressed sensing both to single-channel and multi-channel audio coding is examined. The listening test results are encouraging, indicating that the proposed approach can achieve comparable performance to that of state-of-the-art methods. Given that CS can lead to novel coding systems where the sampling and compression operations are combined into one low-complexity step, the proposed methodology can be considered as an important step towards applying the CS framework to audio coding applications."} {"_id": "7ad5ccfc923be803044c91422e6537ce979f4f09", "title": "Knowledge Discovery in Biomedical Data Facilitated by Domain Ontologies", "text": "In some real-world areas, it is important to enrich the data with external background knowledge so as to provide context and to facilitate pattern recognition. These areas may be described as data rich but knowledge poor. There are two challenges to incorporating this biological knowledge into the data mining cycle: (1) generating the ontologies; and (2) adapting the data mining algorithms to make use of the ontologies. This chapter presents the state-of-the-art in bringing the background ontology knowledge into the pattern recognition task for biomedical data."} {"_id": "9bb9a6e6dcbe47417448b40557d7d7a200f02792", "title": "AUTOMATIC GENERATION OF PRESENTATION SLIDES FOR ACADEMIC PAPERS USING INTEGER LINEAR PROGRAMMING", "text": "Presentations are one of the most common and effective ways of communicating the overview of a work to the audience. Given a research paper, automatic generation of presentation slides reduces the effort of the presenter and helps in creating a structured summary of the paper. In this project, we propose the design of a new framework that does this task. Any paper that has an abstract and whose sections can be categorized under introduction, related work, model, experiments and conclusions can be given as input. This XML document is parsed and the data in it is extracted. A query-specific extractive summarizer has been used to generate slides. In many meetings and gatherings, a presenter takes the aid of slides to present his work systematically (pictorially). 
The presentation slides for the paper are then generated by utilizing the Integer Linear Programming (ILP) model, with intricately structured objective functions and constraints to select and align key phrases and sentences. The framework extracts topic and non-topic parts from the article in light of its basic discourse structure. The extracted topic and non-topic parts are placed on the slides with fitting indentation based on the analysis of their syntactic structure. Keywords\u2014 Abstracting methods, Integer Linear Programming, Support Vector Regression model, text mining."} {"_id": "8eefd28eb47e72794bb0355d8abcbebaac9d8ab1", "title": "Semi-Supervised Text Classification Using EM", "text": "For several decades, statisticians have advocated using a combination of labeled and unlabeled data to train classifiers by estimating parameters of a generative model through iterative Expectation-Maximization (EM) techniques. This chapter explores the effectiveness of this approach when applied to the domain of text classification. Text documents are represented here with a bag-of-words model, which leads to a generative classification model based on a mixture of multinomials. This model is an extremely simplistic representation of the complexities of written text. This chapter explains and illustrates three key points about semi-supervised learning for text classification with generative models. First, despite the simplistic representation, some text domains have a high positive correlation between generative model probability and classification accuracy. In these domains, a straightforward application of EM with the naive Bayes text model works well. Second, some text domains do not have this correlation. Here we can adopt a more expressive and appropriate generative model that does have a positive correlation. In these domains, semi-supervised learning again improves classification accuracy. Finally, EM suffers from the problem of local maxima, especially in high-dimensional domains such as text classification. We demonstrate that deterministic annealing, a variant of EM, can help overcome the problem of local maxima and increase classification accuracy further when the generative model is appropriate."} {"_id": "7cfa413af7b561f910edaf3cbaf5a75df4e60e92", "title": "Crown and post-free adhesive restorations for endodontically treated posterior teeth: from direct composite to endocrowns.", "text": "Coronal rehabilitation of endodontically treated posterior teeth is still a controversial issue. Although the classical crown supported by radicular metal posts remains widely spread in dentistry, its invasiveness has been largely criticized. New materials and therapeutic options based entirely on adhesion are nowadays available. They allow performing a more conservative, faster and less expensive dental treatment. All clinical cases presented in this paper are solved by using these modern techniques, from direct composite restorations to endocrowns."} {"_id": "5034d0c127d3b881b3afe4b68690608ceecf1b04", "title": "Advanced Safe PIN-Entry Against Human Shoulder-Surfing Ms", "text": "When users enter their passwords in a public area, they may be at risk of an aggressor stealing their password. The PIN entry can be observed by nearby adversaries, more effectively in a crowded place. Cryptographic prevention techniques have been established to cope with this problem. 
Among the alternative approaches, PIN entry remains attractive because of its simplicity and accessibility. The basic BW method is designed to withstand a human shoulder-surfing attack: in every round, a well-ordered numeric keypad is colored at random, and a user who knows the correct PIN digit can enter it by pressing the key of the corresponding color. The IBW method is considered secure against human adversaries due to the restricted cognitive abilities of humans, and it is also shown to be robust against hacking attacks."} {"_id": "b9e5a91a84d541097b42b60ba673b506a206f5a0", "title": "A recommender system using GA K-means clustering in an online shopping market", "text": "The Internet is emerging as a new marketing channel, so understanding the characteristics of online customers\u2019 needs and expectations is considered a prerequisite for activating the consumer-oriented electronic commerce market. In this study, we propose a novel clustering algorithm based on genetic algorithms (GAs) to effectively segment the online shopping market. In general, GAs are believed to be effective on NP-complete global optimization problems, and they can provide good near-optimal solutions in reasonable time. Thus, we believe that a clustering technique with GA can provide a way of finding the relevant clusters more effectively. The research in this paper applied K-means clustering whose initial seeds are optimized by GA, which is called GA K-means, to a real-world online shopping market segmentation case. In this study, we compared the results of GA K-means to those of a simple K-means algorithm and self-organizing maps (SOM). The results showed that GA K-means clustering may improve segmentation performance in comparison to other typical clustering algorithms. In addition, our study validated the usefulness of the proposed model as a preprocessing tool for recommendation systems."} {"_id": "e0f6a9dbe9a7eb5b0d899658579545fdfc3f1994", "title": "Large-scale deep learning at Baidu", "text": "In the past 30 years, tremendous progress has been achieved in building effective shallow classification models. Despite the success, we come to realize that, for many applications, the key bottleneck is not the quality of classifiers but that of features. Not being able to automatically get useful features has become the main limitation for shallow models. Since 2006, learning high-level features using deep architectures from raw data has become a huge wave of new learning paradigms. In the past two years, deep learning has made many performance breakthroughs, for example, in the areas of image understanding and speech recognition. In this talk, I will walk through some of the latest technology advances of deep learning within Baidu, and discuss the main challenges, e.g., developing effective models for various applications, and scaling up the model training using many GPUs. At the end of the talk I will discuss what might be interesting future directions."} {"_id": "6be9cd1b6be302e0440a5d1882e4b128172d6fd9", "title": "SD-Layer: Stain Deconvolutional Layer for CNNs in Medical Microscopic Imaging", "text": "Convolutional Neural Networks (CNNs) are typically trained in the RGB color space. However, in medical imaging, we believe that pixel stain quantities offer a fundamental view of the interaction between tissues and stain chemicals. 
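As an aside to the "recommender system using GA K-means clustering" record above: the core idea of seeding K-means with GA-optimized initial centroids can be sketched as below. This is an assumed minimal reading of the method (chromosomes are candidate seed sets drawn from the data; fitness is the within-cluster SSE after K-means refinement), not the authors' implementation.

```python
import numpy as np

def kmeans(X, seeds, iters=20):
    # Standard K-means refinement from a given set of initial centroids
    C = seeds.astype(float).copy()
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(len(C)):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    sse = ((X - C[lab]) ** 2).sum()
    return C, lab, sse

def ga_kmeans(X, k=3, pop=20, gens=30, seed=0):
    rng = np.random.default_rng(seed)
    # Each chromosome is a set of k candidate seed points sampled from the data
    P = [X[rng.choice(len(X), k, replace=False)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=lambda s: kmeans(X, s)[2])
        elite = scored[: pop // 2]                 # keep the fittest half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.choice(len(elite), 2, replace=False)
            mask = rng.random(k) < 0.5             # uniform crossover on seed rows
            child = np.where(mask[:, None], elite[a], elite[b])
            if rng.random() < 0.2:                 # mutation: resample one seed
                child[rng.integers(k)] = X[rng.integers(len(X))]
            children.append(child)
        P = elite + children
    best = min(P, key=lambda s: kmeans(X, s)[2])
    return kmeans(X, best)                         # final GA-seeded refinement
```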
Since the optical density (OD) colorspace allows computing pixel stain quantities from pixel RGB intensities using the Beer-Lambert law, we propose a stain deconvolutional layer, named the SD-Layer, affixed at the front of a CNN, that performs two functions: (1) it transforms the input RGB microscopic images to Optical Density (OD) space and (2) it deconvolves the OD image with the stain basis learned through backpropagation and provides tissue-specific stain absorption quantities as input to the following CNN layers. With the introduction of only nine additional learnable parameters in the proposed SD-Layer, we obtain a considerably improved performance on two standard CNN architectures: AlexNet and T-CNN. Using the T-CNN architecture prefixed with the proposed SD-Layer, we obtain a 5-fold cross-validation accuracy of 93.2% in the problem of differentiating malignant immature White Blood Cells (WBCs) from normal immature WBCs for cancer detection."} {"_id": "b554333fb0d318606d6663e237f59858584e42f2", "title": "Principles of spatial database analysis and design", "text": "This chapter covers the fundamentals of spatial database analysis and design, It begins by defining the most important concepts: \u2018spatial database\u2019, \u2018analysis\u2019, \u2018design\u2019, and \u2018model\u2019; and continues with a presentation of the rationale supporting the use of formal methods for analysis and design. The basic elements and approaches of such methods are described, in addition to the processes used. Emphasis is placed on the particularities of spatial databases and the improvements needed for non-spatial methods and tools in order to enhance their efficiency. Finally, the chapter presents a set of tools, called CASE (computer-assisted software engineering), which are built to support the formal analysis and design methods."} {"_id": "e8b109a3881eeb3c156568536f351a6d39fd2023", "title": "A BiCMOS Ultra-Wideband 3.1–10.6-GHz Front-End", "text": "This paper presents a direct-conversion receiver for FCC-compliant ultra-wideband (UWB) Gaussian-shaped pulses that are transmitted in one of fourteen 500-MHz-wide channels within the 3.1-10.6-GHz band. The receiver is fabricated in 0.18-μm SiGe BiCMOS. The packaged chip consists of an unmatched wideband low-noise amplifier (LNA), filter, phase-splitter, 5-GHz ISM band switchable notch filter, 3.1-10.6-GHz local oscillator (LO) amplifiers, mixers, and baseband channel-select filters/buffers. The required quadrature single-ended LO signals are generated externally. The average conversion gain and input P1dB are 32 dB and -41 dBm, respectively. The unmatched LNA provides a system noise figure of 3.3 to 5 dB over the entire band. The chip draws 30 mA from 1.8 V. To verify the unmatched LNA's performance in a complete system, wireless testing of the front-end embedded in a full receiver at 100 Mbps reveals a 10^-3 bit-error rate (BER) at -80 dBm sensitivity. The notch filter suppresses out-of-band interferers and reduces the effects of intermodulation products that appear in the baseband. BER improvements of an order of magnitude and greater are demonstrated with the filter."} {"_id": "696ad1c38b588dae3295668a0fa34021c4481030", "title": "Information-theoretical label embeddings for large-scale image classification", "text": "We present a method for training multi-label, massively multi-class image classification models that is faster and more accurate than supervision via a sigmoid cross-entropy loss (logistic regression). 
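As an aside to the SD-Layer record above: the fixed, non-learned part of what it describes, converting RGB intensities to optical density via the Beer-Lambert law and deconvolving against a stain basis, looks roughly as follows. This sketch solves for stain quantities by least squares; the SD-Layer instead learns the basis by backpropagation, and the eps and I0 constants here are assumptions.

```python
import numpy as np

def rgb_to_od(rgb, i0=255.0, eps=1.0):
    # Beer-Lambert: OD = -log10(I / I0); eps avoids log(0) for black pixels
    return -np.log10((rgb.astype(np.float64) + eps) / i0)

def od_to_stain(od, stain_basis):
    # Deconvolve OD pixels into per-stain absorption quantities by solving
    # od = stain_basis @ c in the least-squares sense; stain_basis has shape
    # (3, n_stains), od has shape (..., 3)
    c, *_ = np.linalg.lstsq(stain_basis, od.reshape(-1, 3).T, rcond=None)
    return c.T.reshape(od.shape[:-1] + (stain_basis.shape[1],))
```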
Our method consists in embedding high-dimensional sparse labels onto a lower-dimensional dense sphere of unit-normed vectors, and treating the classification problem as a cosine proximity regression problem on this sphere. We test our method on a dataset of 300 million high-resolution images with 17,000 labels, where it yields considerably faster convergence, as well as a 7% higher mean average precision compared to logistic regression."} {"_id": "e6cfbf052d38c2ac09d88f1114d775b5c71ddb72", "title": "Analysis of a microstrip patch array fed cylindric lens antenna for 77GHz automotive radar", "text": "Regarding gain and SLL, the new lens antenna concept has better focusing abilities compared to the column antenna. Pattern perturbations due to lens edge effects were analyzed with regard to digital array processing. They can be confined either with an appropriate choice for the lens length L and the xℓ,i of the radar sensor's receiving ULA or with narrower feeding beams in the H-plane."} {"_id": "941427ee3ce73c7855fee71833a93803e1fb3576", "title": "Towards Modelling Pedestrian-Vehicle Interactions: Empirical Study on Urban Unsignalized Intersection", "text": "The modelling and simulation of the interaction among vehicles and pedestrians during crosswalking is an open challenge for both research and practical computational solutions supporting urban/traffic decision makers and managers. The social cost of pedestrians\u2019 risky behaviour pushes the development of a new generation of computational models integrating analytical knowledge, data and experience about the complex dynamics occurring in pedestrian/vehicle interactions, which are not completely understood despite recent efforts. This paper presents the results of a significant data gathering campaign realised at an unsignalized zebra crossing. The selected area of the city of Milan (Italy) is characterised by a significant presence of elderly inhabitants and pedestrian-vehicle risky interactions, as testified by a high number of accidents involving pedestrians in the past years. The results concern the analysis of: (i) vehicular and pedestrian traffic volumes; (ii) level of service; (iii) pedestrian-vehicle interactions, considering the impact of ageing on crossing behaviour. Results showed that the phenomenon is characterised by three main phases: approaching, appraising (evaluation of the distance and speed of oncoming vehicles) and crossing. The final objective of the research is to support the development of a microscopic agent-based tool for simulating pedestrian behaviour at unsignalized crosswalks, focusing on the specific needs of the elderly pedestrians."} {"_id": "a2b5b8a05db3ffdc02db7d4c2b1e0e64ad2f2f2a", "title": "Status of market, regulation and research of genetically modified crops in Chile.", "text": "Agricultural biotechnology and genetically modified (GM) crops are effective tools to substantially increase productivity, quality, and environmental sustainability in agricultural farming. Furthermore, they may contribute to improving the nutritional content of crops, addressing needs related to public health. Chile has become one of the most important global players for GM seed production for counter-season markets and research purposes. It has a comprehensive regulatory framework to carry out this activity, while at the same time there are numerous regulations from different agencies addressing several aspects related to GM crops. 
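As an aside to the "Information-theoretical label embeddings" record above: the mechanics of regressing onto unit-normed label embeddings with a cosine proximity loss can be sketched as follows. The random projection used here is only a stand-in for the information-theoretical embedding the record describes; the dimensions and data are assumptions.

```python
import numpy as np

def embed_labels(label_matrix, dim, seed=0):
    # Map each sparse multi-hot label vector to a dense unit-normed target.
    # A random projection stands in for the paper's label embedding.
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(label_matrix.shape[1], dim))
    z = label_matrix @ proj
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def cosine_proximity_loss(pred, target):
    # Regression on the sphere: minimize 1 - cos(pred, target)
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    return (1.0 - (pred * target).sum(axis=1)).mean()

# Tiny usage example: 4 samples, 6 raw labels, 3-dim embedding space
labels = np.array([[1, 0, 1, 0, 0, 0], [0, 1, 0, 0, 1, 0],
                   [0, 0, 0, 1, 0, 1], [1, 1, 0, 0, 0, 0]], dtype=float)
targets = embed_labels(labels, dim=3)
print(cosine_proximity_loss(np.random.default_rng(1).normal(size=(4, 3)), targets))
```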
Despite imports of GM food/feed or ingredients for the food industry being allowed without restrictions, Chilean farmers are not using GM seeds for farming purposes because of a lack of clear guidelines. Chile is in a rather contradictory situation about GM crops. The country has invested considerable resources to fund research and development on GM crops, but the lack of clarity in the current regulatory situation precludes the use of such research to develop new products for Chilean farmers. Meanwhile, a larger scientific capacity regarding GM crop research continues to build up in the country. The present study maps and analyses the current regulatory environment for research and production of GM crops in Chile, providing an updated overview of the current status of GM seeds production, research and regulatory issues."} {"_id": "d0f95a9ae047ca69ca30b0ca88be7387b2074c23", "title": "A Machine Learning Approach to Software Requirements Prioritization", "text": "Deciding which, among a set of requirements, are to be considered first and in which order is a strategic process in software development. This task is commonly referred to as requirements prioritization. This paper describes a requirements prioritization method called Case-Based Ranking (CBRank), which combines project stakeholders' preferences with requirements ordering approximations computed through machine learning techniques, bringing promising advantages. First, the human effort to input preference information can be reduced, while preserving the accuracy of the final ranking estimates. Second, domain knowledge encoded as partial order relations defined over the requirement attributes can be exploited, thus supporting an adaptive elicitation process. The techniques CBRank rests on and the associated prioritization process are detailed. Empirical evaluations of properties of CBRank are performed on simulated data and compared with a state-of-the-art prioritization method, providing evidence of the method's ability to support the management of the tradeoff between elicitation effort and ranking accuracy and to exploit domain knowledge. A case study on a real software project complements these experimental measurements. Finally, a positioning of CBRank with respect to state-of-the-art requirements prioritization methods is proposed, together with a discussion of benefits and limits of the method."} {"_id": "57fa2f7be1f0ef44f738698d7088d8b281f5d806", "title": "Novel Feed Network for Circular Polarization Antenna Diversity", "text": "In this letter, we propose a novel feed network for circular polarization (CP) diversity antennas to enhance communication performance. Previous CP diversity systems adopted dual polarization diversity using right-hand circular polarization (RHCP) and left-hand circular polarization (LHCP) within a small space. However, the same rotational CPs require a larger antenna separation. To overcome the space limitation for the same-CP diversity antenna, we propose a feed network for a CP antenna that uses conventional CP with the orthogonal polarization radiation characteristics of a linear polarization (LP) diversity system, without additional antennas. The port isolation characteristics and impedance matching conditions are numerically verified for the diversity system and antenna characteristics. 
The proposed feed network with CP patch antennas is fabricated and measured in a reverberation chamber for diversity performance, and its validity is verified."} {"_id": "3fa9ab38d81d89e601fe7a704b8c693bf48bd4e8", "title": "Microcephaly and early-onset nephrotic syndrome \u2014confusion in Galloway-Mowat syndrome", "text": "We report a 2-year-old girl with nephrotic syndrome, microcephaly, seizures and psychomotor retardation. Histological studies of a renal biopsy revealed focal glomerular sclerosis with mesangiolysis and capillary microaneurysms. Dysmorphic features were remarkable: abnormal-shaped skull, coarse hair, narrow forehead, large low-set ears, almond-shaped eyes, low nasal bridge, pinched nose, thin lips and micrognathia. Cases with this rare combination of microcephaly and early onset of nephrotic syndrome with various neurological abnormalities have been reported. However, clinical manifestations and histological findings showed a wide variation, and there is considerable confusion in this syndrome. We therefore reviewed the previous reports and propose a new classification of this syndrome."} {"_id": "17598f2061549df6a28c2fca5fe6d244fd9b2ad0", "title": "Learning Parameterized Models of Image Motion", "text": "A framework for learning parameterized models of optical flow from image sequences is presented. A class of motions is represented by a set of orthogonal basis flow fields that are computed from a training set using principal component analysis. Many complex image motions can be represented by a linear combination of a small number of these basis flows. The learned motion models may be used for optical flow estimation and for model-based recognition. For optical flow estimation we describe a robust, multi-resolution scheme for directly computing the parameters of the learned flow models from image derivatives. As examples we consider learning motion discontinuities, non-rigid motion of human mouths, and articulated human motion."} {"_id": "c5e01b4bd9e7dbe22a0aaecd1311cede2087052c", "title": "Teaching cryptography with open-source software", "text": "Cryptography has become an important topic in undergraduate curricula in mathematics and computer science, not just for its intrinsic interest---``about the most fun you can have with mathematics''\cite{ferg04}, but for its current standing as the basis for almost all computer security. From wireless networking to secure email to password protection, cryptographic methods are used to secure information, to protect users, and to protect data.\n At Victoria University, cryptography has been taught as part of a mathematics and computer science degree for several years. The students all have had at least a year of tertiary mathematics, and some exposure to a computer algebra system (Maple). However, the cost of Maple, and the current licensing agreement, means that students are unable to experiment with the software away from the computer laboratories at the University. For this reason we have decided to investigate the use of the open-source computer algebra systems Maxima and Axiom. Although not as full-featured and powerful as the commercial systems Maple and Mathematica, we show they are in fact admirably suited for a subject such as cryptography. In some ways Maxima and Axiom even surpass Maple and Mathematica. 
Student response to the introduction of these systems has been very positive."} {"_id": "1bdc66bf35f7443cea550eb82a691966761f1111", "title": "Web-based models for natural language processing", "text": "Previous work demonstrated that Web counts can be used to approximate bigram counts, suggesting that Web-based frequencies should be useful for a wide variety of Natural Language Processing (NLP) tasks. However, only a limited number of tasks have so far been tested using Web-scale data sets. The present article overcomes this limitation by systematically investigating the performance of Web-based models for several NLP tasks, covering both syntax and semantics, both generation and analysis, and a wider range of n-grams and parts of speech than have been previously explored. For the majority of our tasks, we find that simple, unsupervised models perform better when n-gram counts are obtained from the Web rather than from a large corpus. In some cases, performance can be improved further by using backoff or interpolation techniques that combine Web counts and corpus counts. However, unsupervised Web-based models generally fail to outperform supervised state-of-the-art models trained on smaller corpora. We argue that Web-based models should therefore be used as a baseline for, rather than an alternative to, standard supervised models."} {"_id": "ad5974c04b316f4f379191e4dbea836fd766f47c", "title": "Large Language Models in Machine Translation", "text": "This paper reports on the benefits of large-scale statistical language modeling in machine translation. A distributed infrastructure is proposed which we use to train on up to 2 trillion tokens, resulting in language models having up to 300 billion n-grams. It is capable of providing smoothed probabilities for fast, single-pass decoding. We introduce a new smoothing method, dubbed Stupid Backoff, that is inexpensive to train on large data sets and approaches the quality of Kneser-Ney Smoothing as the amount of training data increases."} {"_id": "0c739b915d633cc3c162e4ef1e57b796c2dc2217", "title": "VerbOcean: Mining the Web for Fine-Grained Semantic Verb Relations", "text": "Broad-coverage repositories of semantic relations between verbs could benefit many NLP tasks. We present a semi-automatic method for extracting fine-grained semantic relations between verbs. We detect similarity, strength, antonymy, enablement, and temporal happens-before relations between pairs of strongly associated verbs using lexicosyntactic patterns over the Web. On a set of 29,165 strongly associated verb pairs, our extraction algorithm yielded 65.5% accuracy. Analysis of error types shows that on the relation strength we achieved 75% accuracy. We provide the resource, called VERBOCEAN, for download at http://semantics.isi.edu/ocean/."} {"_id": "12644d51a8ccbdc092ea322907989c098bd16813", "title": "Web question answering: is more always better?", "text": "This paper describes a question answering system that is designed to capitalize on the tremendous amount of data that is now available online. Most question answering systems use a wide variety of linguistic resources. We focus instead on the redundancy available in large corpora as an important resource. We use this redundancy to simplify the query rewrites that we need to use, and to support answer mining from returned snippets. Our system performs quite well given the simplicity of the techniques being utilized. 
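As an aside to the "Large Language Models in Machine Translation" record above: Stupid Backoff, as described there, returns relative frequencies and backs off with a fixed penalty rather than producing normalized probabilities. A minimal sketch, assuming simple in-memory count tables instead of the paper's distributed infrastructure; the 0.4 penalty is the value the paper recommends:

```python
from collections import Counter

ALPHA = 0.4  # fixed backoff penalty recommended in the paper

def stupid_backoff(ngram, counts, total_unigrams):
    # counts: Counter over word tuples of all orders up to n
    if len(ngram) == 1:
        return counts[ngram] / total_unigrams
    if counts[ngram] > 0 and counts[ngram[:-1]] > 0:
        return counts[ngram] / counts[ngram[:-1]]
    return ALPHA * stupid_backoff(ngram[1:], counts, total_unigrams)

# Toy corpus; scores are not normalized probabilities by design
corpus = "the cat sat on the mat the cat sat".split()
counts = Counter()
for n in (1, 2, 3):
    for i in range(len(corpus) - n + 1):
        counts[tuple(corpus[i:i + n])] += 1
print(stupid_backoff(("the", "cat", "sat"), counts, len(corpus)))
```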
Experimental results show that question answering accuracy can be greatly improved by analyzing more and more matching passages. Simple passage ranking and n-gram extraction techniques work well in our system, making it efficient to use with many backend retrieval engines."} {"_id": "216dcb818763ed3493536d44b0a9e4c8e16738b1", "title": "A Winnow-Based Approach to Context-Sensitive Spelling Correction", "text": "A large class of machine-learning problems in natural language requires the characterization of linguistic context. Two characteristic properties of such problems are that their feature space is of very high dimensionality, and their target concepts depend on only a small subset of the features in the space. Under such conditions, multiplicative weight-update algorithms such as Winnow have been shown to have exceptionally good theoretical properties. In the work reported here, we present an algorithm combining variants of Winnow and weighted-majority voting, and apply it to a problem in the aforementioned class: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a statistics-based method representing the state of the art for this task. We find: (1) When run with a full (unpruned) set of features, WinSpell achieves accuracies significantly higher than BaySpell was able to achieve in either the pruned or unpruned condition; (2) When compared with other systems in the literature, WinSpell exhibits the highest performance; (3) While several aspects of WinSpell's architecture contribute to its superiority over BaySpell, the primary factor is that it is able to learn a better linear separator than BaySpell learns; (4) When run on a test set drawn from a different corpus than the training set was drawn from, WinSpell is better able than BaySpell to adapt, using a strategy we will present that combines supervised learning on the training set with unsupervised learning on the (noisy) test set."} {"_id": "4478fb662060be76cefacbcd2b1f7529824245a8", "title": "Rule based Autonomous Citation Mining with TIERL", "text": "Citation management is an important task in managing digital libraries. Citations provide valuable information, e.g., for evaluating an author's influence or scholarly quality (the impact factor of research journals). Reliable and effective autonomous citation management is essential, as manual citation management can be extremely costly. Automatic citation mining, on the other hand, is a non-trivial task, mainly due to non-conforming citation styles, spelling errors and the difficulty of reliably extracting text from PDF documents. In this paper we propose a novel rule-based autonomous citation mining technique to address this important task. We define a set of common heuristics that together improve the state of the art in automatic citation mining. Moreover, by first disambiguating citations based on venues, our technique significantly enhances the correct discovery of citations. 
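As an aside to the WinSpell record above: the multiplicative weight-update at the heart of Winnow is compact. This sketch shows only the core promotion/demotion rule over sparse binary features; WinSpell's combination with weighted-majority voting, and its particular parameter choices, are not reproduced here.

```python
def winnow_train(examples, n_features, alpha=2.0, threshold=None):
    # Winnow for binary classification over sparse binary features;
    # each example is (active_feature_indices, label) with label in {0, 1}
    theta = threshold if threshold is not None else n_features / 2.0
    w = [1.0] * n_features                  # weights start at 1
    for active, label in examples:
        score = sum(w[i] for i in active)
        pred = 1 if score >= theta else 0
        if pred == 1 and label == 0:        # demotion: divide active weights
            for i in active:
                w[i] /= alpha
        elif pred == 0 and label == 1:      # promotion: multiply active weights
            for i in active:
                w[i] *= alpha
    return w, theta

# Usage: features 0 and 1 predict the positive class in this toy stream
examples = [([0, 1], 1), ([2, 3], 0), ([0, 1, 3], 1), ([2], 0)] * 5
weights, theta = winnow_train(examples, n_features=4)
print(weights, theta)
```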
Our experiments show that the proposed approach is indeed able to overcome limitations of current leading citation indexes such as ISI Web of Knowledge, Citeseer and Google Scholar."} {"_id": "ab200f31fe7c50f8b468cb3ed9274883140167d9", "title": "Feature selection for high-dimensional data", "text": "This paper offers a comprehensive approach to feature selection in the scope of classification problems, explaining the foundations, real application problems and the challenges of feature selection in the context of high-dimensional data. First, we focus on the basis of feature selection, providing a review of its history and basic concepts. Then, we address different topics in which feature selection plays a crucial role, such as microarray data, intrusion detection, or medical applications. Finally, we delve into the open challenges that researchers in the field have to deal with if they are interested in confronting the advent of \u201cBig Data\u201d and, more specifically, the \u201cBig Dimensionality\u201d."} {"_id": "8dbac520e4a89916cab47be23aab46793f39bb28", "title": "Is Memory Schematic ?", "text": "Abstraction. Information that has been selected because it is important and/or relevant to the schema is further reduced during the encoding process by abstraction. This process codes the meaning but not the format of a message (e.g., Bobrow, 1970; Bransford, Barclay, & Franks, 1972). Thus, details such as the lexical form of an individual word (e.g., Schank, 1972, 1976) and the syntactic form of a sentence (e.g., Sachs, 1967) will not be preserved in memory. Because memory for syntax appears to be particularly sparse as well as brief (e.g., J. R. Anderson, 1974; Begg & Wickelgren, 1974; Jarvella, 1971; Sachs, 1967, 1974), the abstraction process is thought to operate during encoding. Additional support for the notion that what is stored is an abstracted representation of the original stimulus comes from studies that demonstrate that after a passage is read, it takes subjects the same amount of time to verify information originally presented in a complex linguistic format as it does to verify that same information presented in a simpler format (e.g., King & Greeno, 1974; Kintsch & Monk, 1972). There are a considerable number of theories that assume that memory consists of sets of propositions and their relations (e.g., J. R. Anderson, 1976; Bransford, et al., 1972; Brewer, 1975; Frederiksen, 1975a; Kintsch, 1974; Norman & Rumelhart, 1975; Schank, 1972, 1976). One formalized presentation of this idea is Schank's conceptual dependency theory (1972). The theory asserts that all propositions can be expressed by a small set of primitive concepts. All lexical expressions that share an identical meaning will be represented in one way (and so stored economically) regardless of their presentation format. As a result, people should often incorrectly recall or misrecognize synonyms of originally presented words, and they do (e.g., Anderson & Bower, 1973; R. C. Anderson, 1974; Anisfeld & Knapp, 1968; Brewer, 1975; Graesser, 1978b; Sachs, 1974). Abstraction and memory theories. Since considerable detail is lost via the abstraction process, this process can easily account for the incompleteness that is characteristic of people's recall of complex events. In light of the abstraction process, the problem for schema theories becomes one of accounting for accurate recall. 
Schema theories do this by borrowing a finding from psycholinguistic research, to wit, that speakers of a language share preferred ways of expressing information. If both the creator and perceiver of a message are operating with the same preferences or under the same biases, the perceiver's reproduction of the input may appear to be accurate. The accuracy, however, is the product of recalling the semantic content of the message and imposing the preferred structure onto it. Thus, biases operate in a manner that is similar to the \"probable detail\" reconstruction process. Biases have been documented for both syntactic information (J. R. Anderson, 1974; Bock, 1977; Bock & Brewer, 1974; Clark & Clark, 1968; James, Thompson, & Baldwin, 1973) and lexical information (Brewer, 1975; Brewer & Lichtenstein, 1974). Distortions may result from the abstraction process if biases are not shared by the person who creates the message and the one who receives it. More importantly, the ab-"} {"_id": "3722384bd171b2245102ad2d99f2d0fd230910c9", "title": "Effects of a 7-Day Meditation Retreat on the Brain Function of Meditators and Non-Meditators During an Attention Task", "text": "Meditation as a cognitive enhancement technique is of growing interest in the field of health and research on brain function. The Stroop Word-Color Task (SWCT) has been adapted for neuroimaging studies as an interesting paradigm for the understanding of cognitive control mechanisms. Performance in the SWCT requires both attention and impulse control, which is trained in meditation practices. We presented the SWCT inside the MRI equipment to measure the performance of meditators compared with non-meditators before and after a meditation retreat. The aim of this study was to evaluate the effects of a 7-day Zen intensive meditation training (a retreat) on meditators and non-meditators in this task on performance level and neural mechanisms. Nineteen meditators and 14 non-meditators were scanned before and after a 7-day Zen meditation retreat. No significant differences were found between meditators and non-meditators in the number of correct responses and response time (RT) during the SWCT before and after the retreat. Probably due to meditators' training in attention, their brain activity in the contrast incongruent > neutral during the SWCT in the anterior cingulate, ventromedial prefrontal cortex/anterior cingulate, caudate/putamen/pallidum/temporal lobe (center), insula/putamen/temporal lobe (right) and posterior cingulate before the retreat was reduced compared with non-meditators. After the meditation retreat, non-meditators had reduced activation in these regions, becoming similar to meditators before the retreat. This result could be interpreted as an increase in the brain efficiency of non-meditators (less brain activation in attention-related regions and same behavioral response) promoted by their intensive training in meditation in only 7 days. On the other hand, meditators showed an increase in brain activation in these regions after the same training. Intensive meditation training (retreat) presented distinct effects on the attention-related regions in meditators and non-meditators, probably due to differences in expertise, attention processing as well as neuroplasticity."} {"_id": "3e832919bf4156dd2c7c191acbbc714535f1d8c2", "title": "LOMIT: LOCAL MASK-BASED IMAGE-TO-IMAGE TRANSLATION VIA PIXEL-WISE HIGHWAY ADAPTIVE INSTANCE NORMALIZATION", "text": "Recently, image-to-image translation has seen significant success. 
Among many approaches, image translation based on an exemplar image, which contains the target style information, has been popular, owing to its capability to handle multimodality as well as its suitability for practical use. However, most of the existing methods extract the style information from an entire exemplar and apply it to the entire input image, which introduces excessive image translation in irrelevant image regions. In response, this paper proposes a novel approach that jointly extracts the local masks of the input image and the exemplar as the target regions to be involved in image translation. In particular, the main novelty of our model lies in (1) co-segmentation networks for local mask generation and (2) the local mask-based highway adaptive instance normalization technique. We present quantitative and qualitative evaluation results to show the advantages of our proposed approach. Finally, the code is available at https://github.com/AnonymousIclrAuthor/ Highway-Adaptive-Instance-Normalization."} {"_id": "bbeacfbe469913de3549a8839abed0cb4415675e", "title": "Designing Algorithms To Aid Discovery by Chemical Robots", "text": "Recently, automated robotic systems have become very efficient, thanks to improved coupling between sensor systems and algorithms, of which the latter have been gaining significance thanks to the increase in computing power over the past few decades. However, intelligent automated chemistry platforms for discovery-oriented tasks need to be able to cope with the unknown, which is a profoundly hard problem. In this Outlook, we describe how recent advances in the design and application of algorithms, coupled with the increased amount of chemical data available, and automation and control systems may allow more productive chemical research and the development of chemical robots able to target discovery. This is shown through examples of workflow and data processing with automation and control, and through the use of both well-used and cutting-edge algorithms illustrated using recent studies in chemistry. Finally, several algorithms are presented in relation to chemical robots and chemical intelligence for knowledge discovery."} {"_id": "6cb45af3db1de2ba5466aedcb698deb6c4bb4678", "title": "CS 224N Assignment 4: Question Answering on SQuAD", "text": "In this project, we are interested in building an end-to-end neural network architecture for the Question Answering task on the well-known Stanford Question Answering Dataset (SQuAD). Our implementation is motivated by a recent high-performance method that combines a coattention encoder with a dynamic pointing decoder, known as the Dynamic Coattention Network. We explored different ensemble and test decoding techniques that we believe might improve the performance of such systems."} {"_id": "55d87169355378ab5bded7488c899dc0069219dd", "title": "Neural Text Generation: A Practical Guide", "text": "Deep learning methods have recently achieved great empirical success on machine translation, dialogue response generation, summarization, and other text generation tasks. At a high level, the technique has been to train end-to-end neural network models consisting of an encoder model to produce a hidden representation of the source text, followed by a decoder model to generate the target. While such models have significantly fewer pieces than earlier systems, significant tuning is still required to achieve good performance. 
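As an aside to the LOMIT record above: adaptive instance normalization, the building block its highway variant extends, re-styles a feature map with the exemplar's channel-wise statistics. A numpy sketch under an assumed (C, H, W) layout; the simple mask mixing below is a stand-in for LOMIT's learned pixel-wise highway gating.

```python
import numpy as np

def adain(x, y, mask=None, eps=1e-5):
    # x, y: feature maps of shape (C, H, W); mask: optional (H, W) in [0, 1]
    m = np.ones(x.shape[1:]) if mask is None else mask
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    sd_x = x.std(axis=(1, 2), keepdims=True) + eps
    mu_y = y.mean(axis=(1, 2), keepdims=True)
    sd_y = y.std(axis=(1, 2), keepdims=True) + eps
    # Normalize x, then rescale with the exemplar's statistics
    restyled = (x - mu_x) / sd_x * sd_y + mu_y
    # Blend: restyle only where the mask is on, keep the input elsewhere
    return m * restyled + (1.0 - m) * x

# Usage: restyle only the left half of a toy 8x8 feature map
rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 3, 8, 8))
mask = np.zeros((8, 8)); mask[:, :4] = 1.0
print(adain(x, y, mask).shape)
```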
For text generation models in particular, the decoder can behave in undesired ways, such as by generating truncated or repetitive outputs, outputting bland and generic responses, or in some cases producing ungrammatical gibberish. This paper is intended as a practical guide for resolving such undesired behavior in text generation models, with the aim of helping enable real-world applications."} {"_id": "25e2a24478c8c8c1b26482fa06d3a4fb445303e5", "title": "Compatibility Is Not Transparency: VMM Detection Myths and Realities", "text": "Recent work on applications ranging from realistic honeypots to stealthier rootkits has speculated about building transparent VMMs \u2013 VMMs that are indistinguishable from native hardware, even to a dedicated adversary. We survey anomalies between real and virtual hardware and consider methods for detecting such anomalies, as well as possible countermeasures. We conclude that building a transparent VMM is fundamentally infeasible, as well as impractical from a performance and engineering standpoint."} {"_id": "48c26d5edecb484ec1c34c2b148a1c843ab24327", "title": "Textual resource acquisition and engineering", "text": "A key requirement for high-performing question-answering (QA) systems is access to high-quality reference corpora from which answers to questions can be hypothesized and evaluated. However, the topic of source acquisition and engineering has received very little attention so far. This is because most existing systems were developed under organized evaluation efforts that included reference corpora as part of the task specification. The task of answering Jeopardy! questions, on the other hand, does not come with such a well-circumscribed set of relevant resources. Therefore, it became part of the IBM Watson effort to develop a set of well-defined procedures to acquire high-quality resources that can effectively support a high-performing QA system. To this end, we developed three procedures, i.e., source acquisition, source transformation, and source expansion. Source acquisition is an iterative development process of acquiring new collections to cover salient topics deemed to be gaps in existing resources based on principled error analysis. Source transformation refers to the process in which information is extracted from existing sources, either as a whole or in part, and is represented in a form that the system can most easily use. Finally, source expansion attempts to increase the coverage in the content of each known topic by adding new information as well as lexical and syntactic variations of existing information extracted from external large collections. In this paper, we discuss the methodology that we developed for IBM Watson for performing acquisition, transformation, and expansion of textual resources. We demonstrate the effectiveness of each technique through its impact on candidate recall and on end-to-end QA performance."} {"_id": "b321c6d6059e448d85e8ffb5cb2318cf7f2e9ebc", "title": "The definition and classification of glaucoma in prevalence surveys.", "text": "This review describes a scheme for diagnosis of glaucoma in population-based prevalence surveys. Cases are diagnosed on the grounds of both structural and functional evidence of glaucomatous optic neuropathy. 
The scheme also makes provision for diagnosing glaucoma in eyes with severe visual loss where formal field testing is impractical, and for blind eyes in which the optic disc cannot be seen because of media opacities."} {"_id": "3d29f938094672cb45a119e8b1a08e299672f7b6", "title": "Empathy circuits", "text": "The social neuroscientific investigation of empathy has revealed that the same neural networks engaged during first-hand experience of affect subserve empathic responses. Recent meta-analyses focusing on empathy for pain for example reliably identified a network comprising anterior insula and anterior midcingulate cortex. Moreover, recent studies suggest that the generation of empathy is flexibly supported by networks involved in action simulation and mentalizing depending on the information available in the environment. Further, empathic responses are modulated by many factors including the context they occur in. Recent work shows how this modulation can be afforded by the engagement of antagonistic motivational systems or by cognitive control circuits, and these modulatory systems can also be employed in efforts to regulate one's empathic responses."} {"_id": "511c905fcd908e46116c37dbb650e5609d4b3a14", "title": "Performance analysis of two open source intrusion detection systems", "text": "Several studies have been conducted where authors compared the performance of open source intrusion detection systems, namely Snort and Suricata. However, most studies were limited to either security indicators or performance measurements under the same operating system. The objective of this study is to give a comprehensive analysis of both products in terms of several security-related and performance-related indicators. In addition, we tested the products under two different operating systems. Several experiments were run to evaluate the effects of the open source intrusion detection and prevention systems Snort and Suricata, the operating systems Windows and Linux, and various attack types on system resource usage, dropped packet rate and ability to detect intrusions. The results show that Suricata has a higher CPU and RAM utilization than Snort in all cases on both operating systems, but a lower percentage of dropped packets when evaluated during five of six simulated attacks. Both products had the same number of correctly identified intrusions. The results show that Linux-based solutions consume more system resources, but Windows-based systems had a higher rate of dropped packets. This indicates that these two intrusion detection and prevention systems should be run on Linux. However, both systems are inappropriate for high volumes of traffic in a single-server setting."} {"_id": "211855f1de279c452858177331860cbc326351ab", "title": "Designing Neural Networks Using Genetic Algorithms with Graph Generation System", "text": "We present a new method of designing neural networks using the genetic algorithm. Recently there have been several reports claiming that attempts to design neural networks using genetic algorithms were successful. However, these methods have a problem in scalability, i.e., the convergence characteristic degrades significantly as the size of the network increases. This is because these methods employ direct mapping of chromosomes into network connectivities. As an alternative approach, we propose a graph grammatical encoding that encodes a graph generation grammar into the chromosome so that it generates more regular connectivity patterns with shorter chromosome length. 
Experimental results show that our new scheme provides a significant speedup in the convergence of neural network design and exhibits a desirable scaling property."} {"_id": "b91f7fe7a54159663946e5a88937af4e268edbb4", "title": "Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing", "text": "The primary sensory system requires the integrated function of multiple cell types, although its full complexity remains unclear. We used comprehensive transcriptome analysis of 622 single mouse neurons to classify them in an unbiased manner, independent of any a priori knowledge of sensory subtypes. Our results reveal eleven types: three distinct low-threshold mechanoreceptive neurons, two proprioceptive, and six principal types of thermosensitive, itch-sensitive, type C low-threshold mechanosensitive and nociceptive neurons with markedly different molecular and operational properties. Confirming previously anticipated major neuronal types, our results also classify and provide markers for new, functionally distinct subtypes. For example, our results suggest that itching during inflammatory skin diseases such as atopic dermatitis is linked to a distinct itch-generating type. We demonstrate single-cell RNA-seq as an effective strategy for dissecting sensory responsive cells into distinct neuronal types. The resulting catalog illustrates the diversity of sensory types and the cellular complexity underlying somatic sensation."} {"_id": "5bc7ddd557b655eb1115a354550254dfd0b0d826", "title": "Learning Semantically Coherent and Reusable Kernels in Convolution Neural Nets for Sentence Classification", "text": "The purpose of this work is to empirically study desirable properties such as semantic coherence, attention mechanism and kernel reusability in Convolution Neural Networks (CNNs) for learning sentence classification tasks. We propose to learn semantically coherent kernels using a clustering scheme combined with Word2Vec representations and domain knowledge such as SentiWordNet. We also suggest a technique to visualize the attention mechanism of CNNs. These ideas are useful for decision explanation purposes. The reusability property enables kernels learned on one problem to be used in another problem. This helps in efficient learning, as only a few additional domain-specific kernels may have to be learned. Experimental results demonstrate the usefulness of our approach. The performance of the proposed approach, which uses the semantic and reusability properties, is close to that of the state-of-the-art approaches on many real-world datasets."} {"_id": "09702e1165eeac3504faf7a32065ea4c08aad467", "title": "Standardization of the in-car gesture interaction space", "text": "Driven by technological advancements, gesture interfaces have recently found their way into vehicular prototypes of various kinds. Unfortunately, their application is less than perfect, and detailed information about preferred gesture execution regions, spatial extent, and time behavior is not available yet. Providing car (interior) manufacturers with gesture characteristics would allow them to design future in-vehicle concepts in a way that does not interfere with gestural interaction. To tackle the problem, this research serves as preliminary work toward a later standardization of the diverse properties of gestures and gesture classes, similar to what is already standardized in norms such as ISO 3958/4040 for the placement and reachability of traditional controls and indicators.
We have set up a real driving experiment recording trajectories and time behavior of gestures related to car and media control tasks. Data evaluation reveals that most of the subjects perform gestures in the same region (bounded by a "triangle" formed by the steering wheel, rear-view mirror, and gearshift) and with similar spatial extent and duration (on average below 2 sec.). The generated density plots can be further used for an initial discussion about gesture execution in the passenger compartment. The final aim is to propose a new standard on permitted gesture properties (time, space) in the car."} {"_id": "fd86074685e3da33e31b36914f2d4ccf64821491", "title": "The Modified Abbreviated Math Anxiety Scale: A Valid and Reliable Instrument for Use with Children", "text": "Mathematics anxiety (MA) can be observed in children from primary school age into the teenage years and adulthood, but many MA rating scales are only suitable for use with adults or older adolescents. We have adapted one such rating scale, the Abbreviated Math Anxiety Scale (AMAS), to be used with British children aged 8-13. In this study, we assess the scale's reliability, factor structure, and divergent validity. The modified AMAS (mAMAS) was administered to a very large (n = 1746) cohort of British children and adolescents. This large sample size meant that as well as conducting confirmatory factor analysis on the scale itself, we were also able to split the sample to conduct exploratory and confirmatory factor analysis of items from the mAMAS alongside items from child test anxiety and general anxiety rating scales. Factor analysis of the mAMAS confirmed that it has the same underlying factor structure as the original AMAS, with subscales measuring anxiety about Learning and Evaluation in math. Furthermore, both exploratory and confirmatory factor analysis of the mAMAS alongside scales measuring test anxiety and general anxiety showed that mAMAS items cluster onto one factor (perceived to represent MA). The mAMAS provides a valid and reliable scale for measuring MA in children and adolescents, from a younger age than is possible with the original AMAS. Results from this study also suggest that MA is truly a unique construct, separate from both test anxiety and general anxiety, even in childhood."} {"_id": "2cf3b8d8865fdeab8e251ef803ecde1ddbf6f6a3", "title": "Using RDF Summary Graph For Keyword-based Semantic Searches", "text": "The Semantic Web began to emerge as its standards and technologies developed rapidly in recent years. The continuing development of Semantic Web technologies has facilitated publishing explicit semantics with data on the Web in the RDF data model. This study proposes a semantic search framework to support efficient keyword-based semantic search on RDF data utilizing near neighbor explorations. The framework augments the search results with the resources in close proximity by utilizing the entity type semantics. Along with the search results, the system generates a relevance confidence score measuring the inferred semantic relatedness of returned entities based on the degree of similarity. Furthermore, evaluations assessing the effectiveness of the framework and the accuracy of the results are presented."} {"_id": "11ad7734bbb81e901f2e59b73456324b299d8980", "title": "Localization for mobile sensor networks", "text": "Many sensor network applications require location awareness, but it is often too expensive to include a GPS receiver in a sensor network node.
Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location, and protocols whereby other nodes estimate their location from the messages they receive. Several such localization techniques have been proposed, but none of them consider mobile nodes and seeds. Although mobility would appear to make localization more difficult, in this paper we introduce the sequential Monte Carlo Localization method and argue that it can exploit mobility to improve the accuracy and precision of localization. Our approach does not require additional hardware on the nodes and works even when the movement of seeds and nodes is uncontrollable. We analyze the properties of our technique and report experimental results from simulations. Our scheme outperforms the best known static localization schemes under a wide range of conditions."} {"_id": "e11d5a4edec55f5d5dc8ea25621ecbf89e9bccb7", "title": "Taxonomy and Survey of Collaborative Intrusion Detection", "text": "The dependency of our society on networked computers has become frightening: In the economy, all-digital networks have turned from facilitators to drivers; as cyber-physical systems are coming of age, computer networks are now becoming the central nervous systems of our physical world\u2014even of highly critical infrastructures such as the power grid. At the same time, the 24/7 availability and correct functioning of networked computers has become much more threatened: The number of sophisticated and highly tailored attacks on IT systems has significantly increased. Intrusion Detection Systems (IDSs) are a key component of the corresponding defense measures; they have been extensively studied and utilized in the past. Since conventional IDSs are not scalable to big company networks and beyond, nor to massively parallel attacks, Collaborative IDSs (CIDSs) have emerged. They consist of several monitoring components that collect and exchange data. Depending on the specific CIDS architecture, central or distributed analysis components mine the gathered data to identify attacks. Resulting alerts are correlated among multiple monitors in order to create a holistic view of the network monitored. This article first determines relevant requirements for CIDSs; it then differentiates distinct building blocks as a basis for introducing a CIDS design space and for discussing it with respect to requirements. Based on this design space, attacks that evade CIDSs and attacks on the availability of the CIDSs themselves are discussed. The entire framework of requirements, building blocks, and attacks as introduced is then used for a comprehensive analysis of the state of the art in collaborative intrusion detection, including a detailed survey and comparison of specific CIDS approaches."} {"_id": "3fa2b1ea36597f2b3055844dcf505bacd884f437", "title": "Confabulation based sentence completion for machine reading", "text": "Sentence completion and prediction refers to the capability of filling in missing words in incomplete sentences. It is one of the keys to reading comprehension, thus making sentence completion an indispensable component of machine reading. Cogent confabulation is a bio-inspired computational model that mimics human information processing. The building of the confabulation knowledge base uses an unsupervised machine learning algorithm that extracts the relations between objects at the symbolic level.
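As an illustration of the symbol-level relation extraction just described, the sketch below builds a co-occurrence knowledge base with a two-level (nested) hash table, the data structure the performance-improved training algorithm below is built around, and uses it to fill in a missing word. The window size and the additive scoring rule are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal sketch of confabulation-style training and recall: a two-level hash
# table (dict of dicts) maps source symbol -> target symbol -> co-occurrence
# count; recall picks the candidate best supported by the sentence context.
from collections import defaultdict

def train(sentences, window=3):
    kb = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    kb[w][words[j]] += 1   # level 1: w, level 2: its neighbor
    return kb

def recall(kb, context, candidates):
    """Fill a missing word with the candidate most supported by the context."""
    return max(candidates, key=lambda c: sum(kb[w].get(c, 0) for w in context))

kb = train(["the cat sat on the mat", "the dog sat on the rug"])
print(recall(kb, ["the", "sat", "on"], ["cat", "car"]))  # -> 'cat'
```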
In this work, we propose performance-improved training and recall algorithms that apply the cogent confabulation model to solve the sentence completion problem. Our training algorithm adopts a two-level hash table, which significantly improves the training speed, so that a large knowledge base can be built at relatively low computation cost. The proposed recall function fills missing words based on the sentence context. Experimental results show that our software can complete trained sentences with 100% accuracy. It also gives semantically correct answers to more than two thirds of the testing sentences that have not been trained before."} {"_id": "bf3302bec256bddceabda4d5185ec289ac38ea9f", "title": "Seismic Data Classification Using Machine Learning", "text": "Earthquakes around the world have been a cause of major destruction and loss of life and property. An early detection and prediction system using machine learning classification models can prove to be very useful for disaster management teams. The earthquake stations continuously collect data even when there is no event. From this data, we need to distinguish earthquakes from non-earthquakes. Machine learning techniques can be used to analyze continuous time series data to detect earthquakes effectively. Furthermore, the earthquake data can be used to predict the P-wave and S-wave arrival times."} {"_id": "8552f6e3f73db564a2e625cceb1d1348d70b598c", "title": "Learning Compact Appearance Representation for Video-based Person Re-Identification", "text": "This paper presents a novel approach for video-based person re-identification using multiple Convolutional Neural Networks (CNNs). Unlike previous work, we intend to extract a compact yet discriminative appearance representation from several frames rather than the whole sequence. Specifically, given a video, the representative frames are selected based on the walking profile of consecutive frames. A multiple CNN architecture incorporated with feature pooling is proposed to learn and compile the features of the selected representative frames into a compact description of the pedestrian for identification. Experiments are conducted on benchmark datasets to demonstrate the superiority of the proposed method over existing person re-identification approaches."} {"_id": "83a281e049de09f5b2e667786125da378fd7a14c", "title": "Extracting Social Network and Character Categorization From Bengali Literature", "text": "Literature network analysis is an emerging area in the computational research domain. A literature network is a type of social network with various distinct features. The analysis explores the significance of human behavior and complex social relationships. A story consists of characters who create an interconnected social system. Each character of the literature represents a node, and the edge between any two nodes represents the interaction between them. An annotation and a novel character categorization method are developed to extract an interactive social network from Bengali drama. We analyze Raktakarabi and Muktodhara, two renowned Bengali dramas of Rabindranath Tagore. Weighted degree, closeness, and betweenness centrality analyze the correlation among the characters. We propose an edge contribution-based centrality and diversity metric of a node to determine the influence of one character over others. Highly diverse nodes show a low clustering coefficient and vice versa.
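The standard centrality analyses mentioned in the literature-network abstract above can be reproduced with off-the-shelf graph tooling. The sketch below runs them on a small hypothetical character graph; the paper's own edge-contribution centrality and diversity metrics are not reproduced here.

```python
# Sketch: weighted degree, closeness, betweenness, and clustering coefficient
# on a toy character co-occurrence graph (names and weights are made up).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Nandini", "Ranjan", 5), ("Nandini", "King", 4),
    ("King", "Professor", 2), ("Ranjan", "Professor", 1),
])

degree = dict(G.degree(weight="weight"))
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)   # compare with a node's "diversity"

for node in G:
    print(node, degree[node], round(closeness[node], 2),
          round(betweenness[node], 2), round(clustering[node], 2))
```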
We propose a novel idea to analyze the characteristics of the protagonist and antagonist from the influential nodes of the complex graph. We also present a game theory-based community detection method that clusters the actors with a high degree of relationship. Evaluation on real-world networks demonstrates the superiority of the proposed method over the other existing algorithms. The interrelationships of the actors within the drama are also shown from the detected communities, as the underlying theme of the narrations is identical. The analytical results show that our method efficiently finds the protagonist and antagonist from the literature network. The method is unique, and the analytical results are more accurate and unbiased compared with the human perspective. Our approach establishes similar results compared with the benchmark analysis available in Tagore\u2019s Bengali literature."} {"_id": "1a54a8b0c7b3fc5a21c6d33656690585c46ca08b", "title": "Fast Feature Pyramids for Object Detection", "text": "Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures)."} {"_id": "6c5c61bc780b9a696ef72fb8f27873fa7ae33215", "title": "Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces", "text": "We propose a novel and fast multiscale feature detection and description approach that exploits the benefits of nonlinear scale spaces. Previous attempts to detect and describe features in nonlinear scale spaces are highly time-consuming due to the computational burden of creating the nonlinear scale space. In this paper we propose to use recent numerical schemes called Fast Explicit Diffusion (FED) embedded in a pyramidal framework to dramatically speed up feature detection in nonlinear scale spaces. In addition, we introduce a Modified-Local Difference Binary (M-LDB) descriptor that is highly efficient, exploits gradient information from the nonlinear scale space, is scale- and rotation-invariant, and has low storage requirements.
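The FED-based detector with the binary M-LDB descriptor described above is available in OpenCV under the name AKAZE, so a typical detect-and-match pipeline can be sketched as below; the image file names are placeholders.

```python
# Sketch: AKAZE (FED nonlinear scale space + M-LDB) detection and matching.
# Since M-LDB descriptors are binary, Hamming distance is the right metric.
import cv2

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, desc1 = akaze.detectAndCompute(img1, None)
kp2, desc2 = akaze.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance:.0f}")
```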
We present an extensive evaluation that shows the excellent compromise between speed and performance of our approach compared to state-of-the-art methods such as BRISK, ORB, SURF, SIFT and KAZE."} {"_id": "d716ea2b7975385d770162230d253d9a69ed6e07", "title": "Polynomial Eigenvalue Solutions to the 5-pt and 6-pt Relative Pose Problems", "text": "In this paper we provide new fast and simple solutions to two important minimal problems in computer vision, the five-point relative pose problem and the six-point focal length problem. We show that these two problems can easily be formulated as polynomial eigenvalue problems of degree three and two and solved using standard efficient numerical algorithms. Our solutions are somewhat more stable than state-of-the-art solutions by Nister and Stewenius and are in some sense more straightforward and easier to implement since polynomial eigenvalue problems are well studied with many efficient and robust algorithms available. The quality of the solvers is demonstrated in experiments."} {"_id": "128477ed907857a8ae96cfd25ea6b2bef74cf827", "title": "Multi-image matching using multi-scale oriented patches", "text": "This paper describes a novel multi-view matching framework based on a new type of invariant feature. Our features are located at Harris corners in discrete scale-space and oriented using a blurred local gradient. This defines a rotationally invariant frame in which we sample a feature descriptor, which consists of an 8 \u00d7 8 patch of bias/gain normalised intensity values. The density of features in the image is controlled using a novel adaptive non-maximal suppression algorithm, which gives a better spatial distribution of features than previous approaches. Matching is achieved using a fast nearest neighbour algorithm that indexes features based on their low frequency Haar wavelet coefficients. We also introduce a novel outlier rejection procedure that verifies a pairwise feature match based on a background distribution of incorrect feature matches. Feature matches are refined using RANSAC and used in an automatic 2D panorama stitcher that has been extensively tested on hundreds of sample inputs."} {"_id": "38fffdbb23a55d7e15fcb81fee69a5a644a1334d", "title": "The Option-Critic Architecture", "text": "We first studied the behavior of the intra-option policy gradient algorithm when the initiation sets and subgoals are fixed by hand. In this case, options terminate with probability 0.9 in a hallway state and four of its incoming neighboring states. We chose to parametrize the intra-option policies using the softmax distribution with a one-hot encoding of state-action pairs as basis functions. Learning both intra-option policies and terminations"} {"_id": "3f0196d0dd25ee48d3ed0f8f112c60cf9fdaf41d", "title": "Dark vs. Dim Silicon and Near-Threshold Computing", "text": "Due to limited scaling of supply voltage, power density is expected to grow in future technology nodes. This increasing power density potentially limits the number of transistors switching at full speed in the future. Near-threshold operation can increase the number of simultaneously active cores, at the expense of much lower operating frequency (\u201cdim silicon\u201d). Although promising to increase overall throughput, dim cores suffer from diminishing returns as the number of cores increases. At this point, hardware accelerators become more efficient alternatives.
To explore such a broad design space, we have developed a framework called Lumos to analytically quantify the performance limits of many-core, heterogeneous systems operating at near-threshold voltage. Lumos augments Amdahl\u2019s Law with detailed scaling of frequency and power, calibrated by circuit-level simulations using a modified Predictive Technology Model (PTM), and factors in the effects of process variations. While our results show that dim cores do indeed boost throughput, even in the presence of process variations, significant benefits are only achieved in applications with very high parallelism or with novel architectures to mitigate variation. A more beneficial and scalable approach is to use accelerators. However, reconfigurable logic that supports a variety of accelerators is more beneficial than a dedicated, fixed-logic accelerator, unless 1) the dedicated kernel has overwhelming coverage across applications (e.g. twice as large as the total of all others), or 2) the speedup of the dedicated accelerator over the reconfigurable equivalent is significant (e.g. 10x-50x)."} {"_id": "11ee2c34abc0262364d76e94ed7be7d7c885d197", "title": "AmiGO: online access to ontology and annotation data", "text": "AmiGO is a web application that allows users to query, browse and visualize ontologies and related gene product annotation (association) data. AmiGO can be used online at the Gene Ontology (GO) website to access the data provided by the GO Consortium; it can also be downloaded and installed to browse local ontologies and annotations. AmiGO is free open source software developed and maintained by the GO Consortium."} {"_id": "20a152d5e7dbd6523418d591b1700f09bbb8c752", "title": "Stress and hippocampal plasticity.", "text": "The hippocampus is a target of stress hormones, and it is an especially plastic and vulnerable region of the brain. It also responds to gonadal, thyroid, and adrenal hormones, which modulate changes in synapse formation and dendritic structure and regulate dentate gyrus volume during development and in adult life. Two forms of structural plasticity are affected by stress: Repeated stress causes atrophy of dendrites in the CA3 region, and both acute and chronic stress suppresses neurogenesis of dentate gyrus granule neurons. Besides glucocorticoids, excitatory amino acids and N-methyl-D-aspartate (NMDA) receptors are involved in these two forms of plasticity as well as in neuronal death that is caused in pyramidal neurons by seizures and by ischemia. The two forms of hippocampal structural plasticity are relevant to the human hippocampus, which undergoes a selective atrophy in a number of disorders, accompanied by deficits in declarative episodic, spatial, and contextual memory performance. It is important, from a therapeutic standpoint, to distinguish between a permanent loss of cells and a reversible atrophy."} {"_id": "2d2ea08f87d6b29119ea8b896689005dbea96f15", "title": "Conservative treatment of carpal tunnel syndrome: comparison between laser therapy and Fascial Manipulation(\u00ae).", "text": "The etiopathogenesis of Carpal Tunnel Syndrome (CTS) is multifactorial and most cases are classified as idiopathic (Thurston 2013). A randomized controlled trial was performed to compare the effectiveness of Fascial Manipulation(\u00ae) (FM) and Low-Level Laser Therapy (LLLT) for CTS. This prospective trial included 42 patients (70 hands with symptoms) with clinical and electroneuromyographic diagnosis of CTS.
The patients were randomly assigned to receive multiple sessions of FM or multiple sessions of LLLT. The Visual Analogue Scale (VAS) and the Boston Carpal Tunnel Questionnaire (BCTQ) were administered at baseline, at the end of treatment, and after three months. The group that received FM showed a significant reduction in subjective pain perception and increased function as assessed by the BCTQ at the end of the treatment and at follow-up. The group that received LLLT showed an improvement in the BCTQ at the end of the treatment, but the improvement was not sustained at the three-month follow-up. FM is a valid alternative treatment for CTS."} {"_id": "94b73b4bfcee507cc73937cd802e96dd91ae2790", "title": "An OFDMA based multiple access protocol with QoS guarantee for next generation WLAN", "text": "To provide better QoS guarantees for the next generation WLAN, the IEEE 802.11ax task group was founded in March 2014. As a promising technology for accommodating concurrent transmissions from multiple nodes in dense deployment scenarios, orthogonal frequency division multiple access (OFDMA) is very likely to be adopted in IEEE 802.11ax. In this paper, an OFDMA based multiple access protocol with QoS guarantee is proposed for the next generation WLAN. Firstly, a redundant access mechanism is given to increase the access success probability of video traffic, whereby the video stations can concurrently send multiple RTS packets on multiple subchannels. Secondly, a priority-based resource allocation scheme is presented to let the AP allocate more resources to the video stations. Simulation results show that our protocol outperforms the existing OFDMA based multiple access (OMAX) protocol for IEEE 802.11ax in terms of delay and delay jitter of video traffic in dense deployment scenarios."} {"_id": "f73038f3e683f87b7d360de982effb718a39a668", "title": "10.6 THz figure-of-merit phase-change RF switches with embedded micro-heater", "text": "We report on GeTe-based phase-change RF switches with embedded micro-heater for thermal switching. With heater parasitics reduced, GeTe RF switches show on-state resistance of 0.05 ohm*mm and off-state capacitance of 0.3 pF/mm. The RF switch figure-of-merit is estimated to be 10.6 THz, which is about 15 times better than state-of-the-art silicon-on-insulator switches. With on-state resistance of 1 ohm and off-state capacitance of 15 fF, RF insertion loss was measured at <0.2 dB and isolation at >25 dB at 20 GHz. RF power handling was >5.6 W for both on- and off-states of GeTe."} {"_id": "d9128cef2a167a5aca31def620d8aa46358509f1", "title": "The Mediator complex: a central integrator of transcription", "text": "The RNA polymerase II (Pol II) enzyme transcribes all protein-coding and most non-coding RNA genes and is globally regulated by Mediator \u2014 a large, conformationally flexible protein complex with a variable subunit composition (for example, a four-subunit cyclin-dependent kinase 8 module can reversibly associate with it). These biochemical characteristics are fundamentally important for Mediator's ability to control various processes that are important for transcription, including the organization of chromatin architecture and the regulation of Pol II pre-initiation, initiation, re-initiation, pausing and elongation.
Although Mediator exists in all eukaryotes, a variety of Mediator functions seem to be specific to metazoans, which is indicative of more diverse regulatory requirements."} {"_id": "cd373a4bb33e7a123f69de3d0ff697a0388cd2b8", "title": "From information security to ... business security?", "text": "This short opinion paper argues that information security, the discipline responsible for protecting a company's information assets against business risks, has now become such a crucial component of good Corporate Governance that it should rather be called Business Security instead of Information Security. During the last year or two, driven by developments in the field of Corporate Governance, including IT Governance, it became apparent that the scope of Information Security is much wider than just (directly) protecting the data, information and software of a business. Such data, information and software had become invaluable assets of the business as a whole, and not properly protecting this information could have profound business and legal implications. Basically, the data and information of the business became its 'life blood', and compromising this life blood could kill the business. Executive Management and Boards started realizing that Information Security Governance was becoming their direct responsibility, and that serious personal consequences, specifically legal ones, could flow from ignoring information security. Information security governance had become an"} {"_id": "34dbcb9b88ef298079b4632ec25e171895c6e7fe", "title": "Support Vector Machines For Synthetic Aperture Radar Automatic Target Recognition", "text": "Algorithms that produce classifiers with large margins, such as support vector machines (SVMs), AdaBoost, etc., are receiving more and more attention in the literature. This paper presents a real application of SVMs for synthetic aperture radar automatic target recognition (SAR/ATR) and compares the results with conventional classifiers. The SVMs are tested for classification both in closed and open sets (recognition). Experimental results showed that SVMs outperform conventional classifiers in target classification. Moreover, SVMs with Gaussian kernels are able to form a local \"bounded\" decision region around each class that provides better rejection of confusers."} {"_id": "713a8b22d15359e3e8380ee81da6de633eed9c8b", "title": "A SiC-Based High-Efficiency Isolated Onboard PEV Charger With Ultrawide DC-Link Voltage Range", "text": "In LLC-based onboard battery charging architectures used in plug-in electric vehicles (PEVs), the dc link voltage can be actively regulated to follow the battery pack voltage so that the LLC converter can operate in proximity of the resonant frequency and achieve high efficiencies over the wide range of battery pack voltage. However, conventional boost-type power factor correction (PFC) converters are unable to provide ultrawide dc link voltages since their output voltages must always be larger than their input voltages. This paper proposes a Silicon Carbide (SiC)-based onboard PEV charger using a single-ended primary-inductor converter (SEPIC) PFC stage followed by an isolated LLC resonant converter. With the proposed charger architecture, the SEPIC PFC converter is able to provide an ultrawide range for the dc link voltage, and consequently enhance the efficiency of the LLC stage by ensuring operation in proximity of the resonant frequency. A 1-kW SiC-based prototype is designed to validate the proposed idea.
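As context for the measured results that follow, a back-of-envelope sketch of why tracking the dc link helps the LLC stage: an LLC converter is most efficient near its resonant frequency f_r = 1/(2*pi*sqrt(Lr*Cr)), and regulating the dc link to the pack voltage keeps the required voltage gain near unity so operation can stay near f_r. The component values below are assumptions for illustration, not the prototype's design.

```python
# Back-of-envelope LLC resonance calculation; Lr and Cr are assumed values.
import math

Lr = 60e-6    # resonant inductance [H] (assumed)
Cr = 100e-9   # resonant capacitance [F] (assumed)

f_r = 1 / (2 * math.pi * math.sqrt(Lr * Cr))
print(f"resonant frequency ~ {f_r / 1e3:.0f} kHz")  # ~65 kHz for these values
```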
The experimental results show that the SEPIC PFC converter achieves unity power factor, 2.72% total harmonic distortion, and 95.3% peak conversion efficiency. The LLC converter achieves 97.1% peak efficiency and always demonstrates a very high efficiency across the ultrawide dc-link voltage range. The overall efficiency of the charger is 88.5% to 93.5% from 20% of the rated load to full load."} {"_id": "720158a53b79667e39c2caf2f7ebb2670b848693", "title": "EKG-based key agreement in Body Sensor Networks", "text": "Preserving a person's privacy in an efficient manner is very important for critical, life-saving infrastructures like body sensor networks (BSN). This paper presents a novel key agreement scheme which allows two sensors in a BSN to agree on a common key generated using electrocardiogram (EKG) signals. This EKG-based key agreement (EKA) scheme aims to bring the \"plug-n-play\" paradigm to BSN security, whereby simply deploying sensors on the subject can enable secure communication, without requiring any form of initialization such as pre-deployment of keys. Analysis of the scheme based on real EKG data (obtained from the MIT PhysioBank database) shows that keys resulting from EKA are random and time-variant, can be generated from short-duration EKG measurements, are identical for a given subject, and differ between separate individuals."} {"_id": "e29a55474198b61590dfe20399d04f9f5b6ee7c9", "title": "Wireless access monitoring and control system based on digital door lock", "text": "We propose a novel wireless access monitoring and control system based on the digital door lock, which is in explosively growing use as a digital consumer device. A digital door lock is an electronic locking system operated by a digital key, security password, or number code. This paper presents a prototype of the proposed system and shows a scheme for the implementation. To implement the system with the ZigBee network protocol, four types of modules are developed: a ZigBee module, a digital door lock module, a human detection module, and a ZigBee relay module. The ZigBee module is designed to support the wireless sensor network and is also used as the ZigBee tag to identify the access objects. The digital door lock module is implemented as a digital consumer device to control the access system as well as the locking system. It is a very convenient system for consumers and has extensible and flexible characteristics; that is, it can be used as a home security system by adding sensor devices to the ZigBee network. Therefore, it can be a good practical product for the realization of an access monitoring and control system and can be applied to the real market for home networking systems. Furthermore, the system can be extended to other services, such as a connection between a mobile phone and the home networking system."} {"_id": "f692c692d3426cc663f3ec9be0c7025b670b2e5c", "title": "Code Conjurer: Pulling Reusable Software out of Thin Air", "text": "For many years, the IT industry has sought to accelerate the software development process by assembling new applications from existing software assets. However, true component-based reuse of the form Douglas McIlroy envisaged in the 1960s is still the exception rather than the rule, and most of the systematic software reuse practiced today uses heavyweight approaches such as product-line engineering or domain-specific frameworks.
By component, we mean any cohesive and compact unit of software functionality with a well-defined interface - from simple programming language classes to more complex artifacts such as Web services and Enterprise JavaBeans."} {"_id": "8027ec37cbcc4bd5b87a608a2003db37f4d40385", "title": "Synthetic aperture radar at millimeter wavelength for UAV surveillance applications", "text": "The airborne monitoring of scenes using unmanned aircraft is becoming increasingly important. Several types of airborne sensors - in the optical, infrared or millimeter wave spectrum - are available for the different platforms. Besides the all-weather suitability of the sensors, the deployment scenarios often also demand the ability to look through dust clouds, smoke, and fog. The only sensor capable of coping with such environmental restrictions while delivering high-resolution images is the synthetic aperture radar (SAR). In this paper we focus on miniaturized SAR systems which were developed and optimized for utilization in a UAV (unmanned aerial vehicle) with a low loading capacity. This not only requires a compact and light radar sensor, but the processing also has to cope with the unstable flight conditions of a small aircraft. Therefore, a high-precision inertial measurement unit (IMU) and motion-compensating SAR algorithms are needed. Thanks to the utilization of a high transmit frequency of either 35 GHz or 94 GHz, the sensors are suitable for the detection of small-scale objects. A very high resolution of 15 cm \u00d7 15 cm can be achieved when using modern FMCW (frequency modulated continuous wave) generation with a high bandwidth (up to 1 GHz) in combination with small antennas."} {"_id": "1f1122f01db3a48b2df32d2c929bcf5193b0e89c", "title": "Effect of rule weights in fuzzy rule-based classification systems", "text": "This paper examines the effect of rule weights in fuzzy rule-based classification systems. Each fuzzy IF\u2013THEN rule in our classification system has antecedent linguistic values and a single consequent class. We use a fuzzy reasoning method based on a single winner rule in the classification phase. The winner rule for a new pattern is the fuzzy IF\u2013THEN rule that has the maximum compatibility grade with the new pattern. When we use fuzzy IF\u2013THEN rules with certainty grades (i.e., rule weights), the winner is determined as the rule with the maximum product of the compatibility grade and the certainty grade. In this paper, the effect of rule weights is illustrated by drawing classification boundaries using fuzzy IF\u2013THEN rules with/without certainty grades. It is also shown that certainty grades play an important role when a fuzzy rule-based classification system is a mixture of general rules and specific rules. Through computer simulations, we show that comprehensible fuzzy rule-based systems with high classification performance can be designed without modifying the membership functions of antecedent linguistic values when we use fuzzy IF\u2013THEN rules with certainty grades."} {"_id": "202891ef448753dd07cc42039440f35ce217df7d", "title": "FindingHuMo: Real-Time Tracking of Motion Trajectories from Anonymous Binary Sensing in Smart Environments", "text": "In this paper we have proposed and designed FindingHuMo (Finding Human Motion), a real-time user tracking system for Smart Environments.
FindingHuMo can perform device-free tracking of multiple (unknown and variable number of) users in hallway environments, just from a non-invasive and anonymous (not user-specific) binary motion sensor data stream. The significant features of our designed system are as follows: (a) fast tracking of individual targets from the binary motion data stream of a static wireless sensor network in the infrastructure, which requires resolving unreliable node sequences, system noise and path ambiguity; (b) scaling to multi-user tracking, where user motion trajectories may cross over with each other in all possible ways, which requires resolving path ambiguity to isolate overlapping trajectories. FindingHuMo applies the following techniques to the collected motion data stream: (i) a proposed motion-data-driven adaptive-order Hidden Markov Model with Viterbi decoding (called Adaptive-HMM), and then (ii) an innovative path disambiguation algorithm (called CPDA). Using this methodology the system accurately detects and isolates motion trajectories of individual users. The system performance is illustrated with results from real-time system deployment experience in a Smart Environment."} {"_id": "01e343f4253001cb8437ecea40c5e0b7c49d5109", "title": "On the role of color in the perception of motion in animated visualizations", "text": "Although luminance contrast plays a predominant role in motion perception, significant additional effects are introduced by chromatic contrasts. In this paper, relevant results from psychophysical and physiological research are described to clarify the role of color in motion detection. Interpreting these psychophysical experiments, we propose guidelines for the design of animated visualizations, and a calibration procedure that improves the reliability of visual motion representation. The guidelines are applied to examples from texture-based flow visualization, as well as graph and tree visualization."} {"_id": "8e843cb073ffd749d0786bb852b2a7920081ac34", "title": "Forward models and passive psychotic symptoms", "text": "Pickering and Clark (2014) present two ways of viewing the role of forward models in human cognition: the auxiliary forward model (AFM) account and the integral forward model (IFM) account. The AFM account \u201cassumes a dedicated prediction mechanism implemented by additional circuitry distinct from the core mechanisms of perception and action\u201d (p. 451). The standard AFM account exploits a corollary discharge from the motor command in order to compute the sensory consequences of the action. In contrast, on the IFM account, \u201cperception itself involves the use of a forward (generative) model, whose role is to construct the incoming sensory signal \u2018from the top down\u201d\u2019 (p. 453). Furthermore, within this account, motor commands are dispensed with: they are predictions that are fulfilled by movement as part of the prediction error minimization that is taken to govern all aspects of cognition (Adams et al., 2013). Pickering and Clark present two \u201ctesting grounds\u201d for helping us adjudicate between IFMs and AFMs, which are committed to the idea, derived from Pickering\u2019s own work, that one predicts others in a way that is similar to the way that one predicts oneself. Although I like this \u201cprediction by simulation\u201d account, in this commentary, I want to emphasize that neither the IFM nor the AFM accounts are necessarily wedded to it.
A less committal, and hence perhaps more compelling, testing ground is to be found in psychotic symptoms, and the capacity of the two frameworks to account for them. Indeed, using psychosis to illustrate forward modelling is not new: the inability to self-tickle was taken to be convincing data for the presence of forward models (viewed, by default, within an AFM account), and in particular for problems with them in patients with diagnoses of schizophrenia (Frith et al., 2000). The AFM account has been used more generally to explain symptoms of schizophrenia. Something goes wrong with the generation of the forward model, and so the sensory consequences of self-generated stimuli are poorly predicted, and hence fail to be attenuated, and are, ultimately, misattributed to an external source. Although most have accepted this for delusions of control, some have questioned the application of this model to passive symptoms (Stephens and Graham, 2000), namely those which do not involve action, such as auditory verbal hallucinations (AVHs). If the symptoms of schizophrenia are explainable in terms of problems with an AFM, and this is constructed out of a motor command, then non-motoric (\u201cpassive\u201d) symptoms cannot be so explained. One move has been to keep working within the AFM framework but claim that \u201cpassive\u201d symptoms merely look passive: they are actually \u201cactive.\u201d Several theorists (e.g., Jones and Fernyhough, 2007) have attempted to explain AVHs in terms of inner speech misattribution, where inner speech is taken to involve motoric elements. This motoric involvement has been empirically supported by several electromyographical (EMG) studies (which measured muscular activity during inner speech), some of which date as far back as the early 1930s (Jacobsen, 1931). Later experiments made the connection between inner speech and AVH, showing that similar muscular activation is involved in healthy inner speech and AVH (Gould, 1948). The involvement of motoric elements in both inner speech and in AVH is further supported by findings (Gould, 1950) showing that when subjects hallucinated, subvocalizations occurred which could be picked up with a throat microphone. That these subvocalizations were causally responsible for the inner speech allegedly implicated in AVHs, and not just echoing it, was suggested by data (Bick and Kinsbourne, 1987) demonstrating that if people experiencing hallucinations opened their mouths wide, stopping vocalizations, then the majority of AVHs stopped. However, this does not seem to capture all AVH subtypes. For example, Dodgson and Gordon (2009) convincingly present \u201chypervigilance hallucinations,\u201d which are not based on self-generated stimuli, but constitute hypervigilant boosting and molding of external stimuli. As I have argued (Wilkinson, 2014) recently, one can account for both inner speech-based and hypervigilance hallucinations within an IFM framework (although I called it a \u201cPredictive Processing Framework\u201d). Since it is good practice to support models that accommodate more phenomena, assuming (as seems plausible) that hypervigilance hallucinations are a genuine subtype of AVH, the IFM account is preferable to the AFM account.
In conclusion, although I agree with Pickering and Clark that the IFM account is preferable, I do so on the basis of a"} {"_id": "3f28447c7a8b85ae2f3b0966b3be839ec6c99f40", "title": "Finding objects for blind people based on SURF features", "text": "Nowadays computer vision technology is helping the visually impaired by recognizing objects in their surroundings. Unlike research on navigation and wayfinding, there are no camera-based systems available in the market to find personal items for the blind. This paper proposes an object recognition method to help blind people find missing items using Speeded-Up Robust Features (SURF). SURF can extract distinctive invariant features that can be utilized to perform reliable matching between different images in multiple scenarios. These features are invariant to image scale, translation, rotation, illumination, and partial occlusion. The proposed recognition process begins by matching individual features of the user-queried object to a database of features from different personal items which are saved in advance. Experiment results demonstrate the effectiveness and efficiency of the proposed method."} {"_id": "c29b2b2db4f1cbe4931a21ec8b95ce808ccff96b", "title": "UAV vision aided positioning system for location and landing", "text": "Research on and use of UAVs (Unmanned Aerial Vehicles) have attracted growing attention and new ideas in the last couple of decades, mainly due to technological improvements in their construction parts, sensors and controllers. When it comes to autonomous behavior, robust techniques are necessary to assure the safety of the aircraft and of the people around it. This work presents the use of computer vision techniques to improve the positioning of a quadrotor UAV in the landing moment, adding safety and precision and overcoming intrinsic errors of other positioning sensors, such as GPS. A landmark was built to be detected by the camera with contour detection, image thresholding and other mathematical techniques in order to obtain fast, simple and robust processing and results. A GoPro camera model was used with a developed algorithm to remove the image distortion. With this setup, the vehicle can be programmed to carry out any mission, with this work's control algorithm handling the vision-aided positioning in the landing moment. The quadrotor was conceived and built at the GRIn (Grupo de Rob\u00f3tica Inteligente) laboratory at UFJF (Juiz de Fora Federal University), and uses open source code for its control, with the MAVLink communication protocol to exchange messages with the vision-based control algorithm for mission and landing. This algorithm was developed in Python and is in charge of segmenting image features, extracting distances and altitude from the data. The control commands are sent to the quadrotor using the mentioned protocol. The system is structured around a remote laptop connected through a serial port to the quadrotor, and a router transmits messages back and forth between the GCS (Ground Control Station, on the laptop), the high-level control algorithm (mission planning and vision-based control), and the quadrotor itself through TCP ports. The camera is connected to the laptop via its built-in Wi-Fi network, so all the image processing is done on the laptop. The results obtained in outdoor tests show the efficacy and fast processing of the vision methods and ensure safety in the landing moment.
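A minimal sketch of the landmark-detection pipeline described above: undistort the frame, threshold it, and take the largest contour's centroid as the landmark position. The camera intrinsics, distortion coefficients, and file name are placeholders, not the work's actual GoPro calibration.

```python
# Sketch: contour-based landmark detection for vision-aided landing.
# K, dist, and the input frame are placeholder assumptions.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])   # assumed fisheye-like distortion

frame = cv2.imread("frame.png")                # placeholder image
undistorted = cv2.undistort(frame, K, dist)
gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
landmark = max(contours, key=cv2.contourArea)  # largest blob as landmark candidate
m = cv2.moments(landmark)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print(f"landmark centroid: ({cx:.0f}, {cy:.0f})")  # feeds the position controller
```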
Furthermore, the positioning techniques can be extended to other UAV applications, such as the inspection and monitoring of transmission lines and utility equipment. Future work looks toward embedding the process in a companion board attached to the UAV, so that the process becomes automatic and the risk of communication interference and loss is heavily reduced."} {"_id": "4a667c9c1154aae10980b559d30730ad225683c5", "title": "On semi-supervised linear regression in covariate shift problems", "text": "Semi-supervised learning approaches are trained using the full training (labeled) data and available testing (unlabeled) data. Demonstrations of the value of training with unlabeled data typically depend on a smoothness assumption relating the conditional expectation to high density regions of the marginal distribution and an inherent missing completely at random assumption for the labeling. So-called covariate shift poses a challenge for many existing semi-supervised or supervised learning techniques. Covariate shift models allow the marginal distributions of the labeled and unlabeled feature data to differ, but the conditional distribution of the response given the feature data is the same. An example of this occurs when a complete labeled data sample and then an unlabeled sample are obtained sequentially, as it would likely follow that the distributions of the feature data are quite different between samples. The value of using unlabeled data during training for the elastic net is justified geometrically in such practical covariate shift problems. The approach works by obtaining adjusted coefficients for unlabeled prediction which recalibrate the supervised elastic net to compromise: (i) maintaining elastic net predictions on the labeled data with (ii) shrinking unlabeled predictions to zero. Our approach is shown to dominate linear supervised alternatives on unlabeled response predictions when the unlabeled feature data are concentrated on a low dimensional manifold away from the labeled data and the true coefficient vector emphasizes directions away from this manifold. Large variance of the supervised predictions on the unlabeled set is reduced more than the increase in squared bias when the unlabeled responses are expected to be small, so an improved compromise within the bias-variance tradeoff is the rationale for this performance improvement. Performance is validated on simulated and real data."} {"_id": "6a47c811ec4174dd5aa6be5eb7b8e48777eb7b42", "title": "VIDEO GAMES: PERSPECTIVE, POINT-OF-VIEW, AND IMMERSION", "text": "Presently, video game designers, players, and often even video game theorists approach the representation of point-of-view in video games as intuitive and uncomplicated. Drawing on the current critical work on video games, theories of immersion and engagement, and theories of human-computer-interaction, I question the significance of optical perspective in video games in terms of player point-of-view. Because video games are an experiential medium, critical study of video games must include the study of the diegetic, visual-auditory, interface, and experiential (often termed interactive or participatory) aspects of video game play.
Through an analysis of the various points-of-view within different games and the differing points-of-view within a single game, I attempt to delineate the differences both in the creation of the experiential game space and in the possibilities for immersion within the game space. For this, I delimit two types of immersion: diegetic immersion, which corresponds to immersion in the game, and intra-diegetic or situated immersion, which corresponds to immersion"} {"_id": "0a2ebb26cfbd456ec6792e776d550671c16b5da7", "title": "Sensorless speed control of a PM synchronous motor based on sliding mode observer and extended Kalman filter", "text": "In this paper a method for rotor speed and position detection in permanent magnet synchronous motors is presented, suitable for applications where low-speed and standstill operations are not required. The approach is based on the estimation of the motor back-EMF through a sliding mode observer and cascaded Kalman filtering. Due to its characteristics, the technique can be applied to motors having distorted or arbitrarily shaped back-EMF waveforms. The equation error for the rotor position estimation is derived and discussed. Test results are presented showing the system performance, including start-up capability, which is one of the most critical conditions in back-EMF based sensorless schemes."} {"_id": "21371d12e9f1f900fffa78229768609a87556681", "title": "Back EMF Sensorless-Control Algorithm for High-Dynamic Performance PMSM", "text": "In this paper, a low-time-consuming and low-cost sensorless-control algorithm for high-dynamic performance permanent-magnet synchronous motors, both surface-mounted and interior permanent-magnet types, is introduced, discussed, and experimentally validated for position and speed estimation. This control algorithm is based on the estimation of rotor speed and angular position starting from the back electromotive force space-vector determination without voltage sensors, by using the reference voltages given by the current controllers instead of the actual ones. This choice obviously introduces some errors that must be eliminated by means of a compensating function. The novelties of the proposed estimation algorithm are the position-estimation equation and the process of compensation of the inverter phase lag that also suggests the final mathematical form of the estimation. The mathematical structure of the estimation guarantees a high degree of robustness against parameter variation, as shown by the sensitivity analysis reported in this paper. Experimental verifications of the proposed sensorless-control system have been made with the aid of a flexible test bench for brushless motor electrical drives. The test results presented in this paper show the validity of the proposed low-cost sensorless-control algorithm and, above all, underline the high dynamic performance of the sensorless-control system, even with reduced equipment."} {"_id": "b17a4d96b87422bb41681ccdfd7788011dc56bb8", "title": "Mechanical sensorless control of PMSM with online estimation of stator resistance", "text": "This paper provides an improvement in sensorless control performance of nonsalient-pole permanent-magnet synchronous machines. To ensure sensorless operation, most of the existing methods require that the initial position error as well as the parameter uncertainties be limited.
In order to overcome these drawbacks, we study them analytically and present a solution using an online identification method which is easy to implement and highly stable. A stability analysis based on Lyapunov's linearization method shows the stability of the closed-loop system with the proposed estimator combined with the sensorless algorithm. This approach does not need an accurately known initial rotor position and makes the sensorless control more robust with respect to stator resistance variations at low speed. The simulation and experimental results illustrate the validity of the analytical approach and the efficiency of the proposed method."} {"_id": "f50db0afc953ca5b24f197b21491c80737f22c89", "title": "Lyapunov-function-based flux and speed observer for AC induction motor sensorless control and parameters estimation", "text": "AC induction motors have become very popular for motion-control applications due to their simple and reliable construction. Control of drives based on ac induction motors is a quite complex task. Provided the vector-control algorithm is used, not only the rotor speed but also the position of the magnetic flux inside the motor during the control process should be known. In most applications, the flux sensors are omitted and the magnetic-flux phasor position has to be calculated. However, there are also applications in which even speed sensors should be omitted. In such a situation, the task of state reconstruction can be solved only from voltage and current measurements. In the current paper, a method based on deterministic evaluation of measurements using a state observer based on the Lyapunov function is presented. The method has been proven in testing on a real ac induction machine."} {"_id": "178881b589085ad4e0ac00817ae96598c117f831", "title": "DSP-based speed adaptive flux observer of induction motor", "text": "A method of estimating the speed of an induction motor is presented. This method is based on adaptive control theory. Experimental results of a direct field-oriented induction motor control without speed sensors are presented."} {"_id": "686d4e2aee9499136eb1ae7f21a3cb6f8b810ee3", "title": "Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping", "text": "The lack of reliable data in developing countries is a major obstacle to sustainable development, food security, and disaster relief. Poverty data, for example, is typically scarce, sparse in coverage, and labor-intensive to obtain. Remote sensing data such as high-resolution satellite imagery, on the other hand, is becoming increasingly available and inexpensive. Unfortunately, such data is highly unstructured and currently no techniques exist to automatically extract useful insights to inform policy decisions and help direct humanitarian efforts. We propose a novel machine learning approach to extract large-scale socioeconomic indicators from high-resolution satellite imagery. The main challenge is that training data is very scarce, making it difficult to apply modern techniques such as Convolutional Neural Networks (CNN). We therefore propose a transfer learning approach where nighttime light intensities are used as a data-rich proxy. We train a fully convolutional CNN model to predict nighttime lights from daytime imagery, simultaneously learning features that are useful for poverty prediction. The model learns filters identifying different terrains and man-made structures, including roads, buildings, and farmlands, without any supervision beyond nighttime lights.
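A minimal sketch of the transfer step just described, assuming feature vectors have already been extracted by the nightlight-trained CNN: a simple regularized regressor is fit on the scarce survey labels. Synthetic arrays stand in for the CNN features and the survey data.

```python
# Sketch: reuse CNN features learned on the nighttime-light proxy task to
# predict a poverty indicator from few labeled examples. Data is synthetic.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))   # stand-in for CNN image features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=300)  # stand-in poverty index

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```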
We demonstrate that these learned features are highly informative for poverty mapping, even approaching the predictive performance of survey data collected in the field."} {"_id": "0fd6d5727215283bca296fe75852475e7c6c7ca0", "title": "Deep learning methods for knowledge base population", "text": ""} {"_id": "d880c174661118ec23fc7e7cec0e2656cc0654ec", "title": "Structural Balance Theory-Based E-Commerce Recommendation over Big Rating Data", "text": "Recommending appropriate product items to the target user is becoming key to ensuring the continuous success of E-commerce. Today, many E-commerce systems adopt various recommendation techniques, e.g., Collaborative Filtering (abbreviated as CF)-based techniques, to realize product item recommendation. Overall, the present CF recommendation can perform very well if the target user has similar friends (user-based CF), or if the product items purchased and preferred by the target user have one or more similar product items (item-based CF). However, due to the sparsity of big rating data in E-commerce, similar friends and similar product items may both be absent from the user-product purchase network, which leads to a big challenge in recommending appropriate product items to the target user. Considering this challenge, we put forward a Structural Balance Theory-based Recommendation (i.e., SBT-Rec) approach. Concretely, (I) user-based recommendation: we look for the target user's \u201cenemy\u201d (i.e., the users having opposite preferences to the target user); afterwards, we determine the target user's \u201cpossible friends\u201d, according to the \u201cenemy's enemy is a friend\u201d rule of Structural Balance Theory, and recommend the product items preferred by the \u201cpossible friends\u201d of the target user to the target user. (II) Likewise, for the product items purchased and preferred by the target user, we determine their \u201cpossibly similar product items\u201d based on Structural Balance Theory and recommend them to the target user. Finally, the feasibility of SBT-Rec is validated through a set of experiments deployed on the MovieLens-1M dataset."} {"_id": "77ee479c9df201d7e93366c74198e9661316f1bb", "title": "Improved Localization of Cortical Activity By Combining EEG and MEG with MRI Cortical Surface Reconstruction", "text": "We describe a comprehensive linear approach to the problem of imaging brain activity with high temporal as well as spatial resolution based on combining EEG and MEG data with anatomical constraints derived from MRI images. The \u201cinverse problem\u201d of estimating the distribution of dipole strengths over the cortical surface is highly underdetermined, even given closely spaced EEG and MEG recordings. We have obtained much better solutions to this problem by explicitly incorporating both local cortical orientation as well as spatial covariance of sources and sensors into our for-"} {"_id": "4cd0ef755d5473415b5a99555c12f52ce7ce9329", "title": "Modified Firefly Algorithm", "text": "The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly.
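The standard movement rule just stated is easy to write down in code. The following is a generic sketch of the classic firefly update (attractiveness beta0*exp(-gamma*r^2) toward brighter individuals, a pure random walk for the brightest); the parameter values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def firefly_step(pos, objective, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One iteration of the standard firefly rule: each firefly moves toward
    every brighter one with attractiveness beta0*exp(-gamma*r^2); a firefly
    with no brighter neighbor (the brightest) moves randomly."""
    rng = rng or np.random.default_rng()
    brightness = np.array([-objective(p) for p in pos])  # minimize objective
    new = pos.copy()
    for i in range(len(pos)):
        moved = False
        for j in range(len(pos)):
            if brightness[j] > brightness[i]:
                r2 = np.sum((pos[j] - pos[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                new[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=pos.shape[1])
                moved = True
        if not moved:  # no brighter firefly: pure random walk
            new[i] += alpha * rng.normal(size=pos.shape[1])
    return new

# usage on a toy sphere function:
pos = np.random.default_rng(0).normal(size=(20, 2))
for _ in range(100):
    pos = firefly_step(pos, lambda p: np.sum(p ** 2))
```

The modification described next in the abstract replaces the brightest firefly's unconditional random walk with a trial of random directions, keeping only a direction that improves brightness.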
In this paper, we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases. If such a direction is not generated, it will remain in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time."} {"_id": "d5ba527534daf11748a5a1f2d417283ce88d5cd1", "title": "Real-time human gesture grading based on OpenPose", "text": "In this paper, we presented a real-time 2D human gesture grading system from monocular images based on OpenPose, a library for real-time multi-person keypoint detection. After capturing the 2D positions of a person's joints and the skeleton wireframe of the body, the system computed the motion trajectory equation for every joint. The similarity metric was defined as the distance between the motion trajectories of the standard and real-time videos. A modifiable scoring formula was used for simulating the gesture grading scenario. Experimental results showed that the system worked efficiently, with high real-time performance, low equipment cost, and strong robustness to noise interference."} {"_id": "25f3d15130c2e5591d7daf6f69e88edbcfc1ab9d", "title": "The effectiveness of yoga in modifying risk factors for cardiovascular disease and metabolic syndrome: A systematic review and meta-analysis of randomized controlled trials.", "text": "BACKGROUND\nYoga, a popular mind-body practice, may produce changes in cardiovascular disease (CVD) and metabolic syndrome risk factors.\n\n\nDESIGN\nThis was a systematic review and random-effects meta-analysis of randomized controlled trials (RCTs).\n\n\nMETHODS\nElectronic searches of MEDLINE, EMBASE, CINAHL, PsycINFO, and The Cochrane Central Register of Controlled Trials were performed for systematic reviews and RCTs through December 2013. Studies were included if they were English, peer-reviewed, focused on asana-based yoga in adults, and reported relevant outcomes. Two reviewers independently selected articles and assessed quality using Cochrane's Risk of Bias tool.\n\n\nRESULTS\nOut of 1404 records, 37 RCTs were included in the systematic review and 32 in the meta-analysis. Compared to non-exercise controls, yoga showed significant improvement for body mass index (-0.77\u2009kg/m(2) (95% confidence interval -1.09 to -0.44)), systolic blood pressure (-5.21\u2009mmHg (-8.01 to -2.42)), low-density lipoprotein cholesterol (-12.14\u2009mg/dl (-21.80 to -2.48)), and high-density lipoprotein cholesterol (3.20\u2009mg/dl (1.86 to 4.54)). Significant changes were seen in body weight (-2.32\u2009kg (-4.33 to -0.37)), diastolic blood pressure (-4.98\u2009mmHg (-7.17 to -2.80)), total cholesterol (-18.48\u2009mg/dl (-29.16 to -7.80)), triglycerides (-25.89\u2009mg/dl (-36.19 to -15.60)), and heart rate (-5.27 beats/min (-9.55 to -1.00)), but not fasting blood glucose (-5.91\u2009mg/dl (-16.32 to 4.50)) nor glycosylated hemoglobin (-0.06% Hb (-0.24 to 0.11)). No significant difference was found between yoga and exercise. One study found an impact on smoking abstinence.\n\n\nCONCLUSIONS\nThere is promising evidence that yoga improves cardio-metabolic health. 
Findings are limited by small trial sample sizes, heterogeneity, and the moderate quality of the RCTs."} {"_id": "0eb5872733e643f43a0c1a7ff78953dfea74dfea", "title": "Automated Scoring: Beyond Natural Language Processing", "text": "In this position paper, we argue that building operational automated scoring systems is a task that has disciplinary complexity above and beyond standard competitive shared tasks, which usually involve applying the latest machine learning techniques to publicly available data in order to obtain the best accuracy. Automated scoring systems warrant significant cross-discipline collaboration, of which natural language processing and machine learning are just two of many important components. Such systems have multiple stakeholders with different but valid perspectives that can often be at odds with each other. Our position is that it is essential for us as NLP researchers to understand and incorporate these perspectives into our research and work towards a mutually satisfactory solution in order to build automated scoring systems that are accurate, fair, unbiased, and useful. 1 What is Automated Scoring? Automated scoring is an NLP application usually deployed in the educational domain. It involves automatically analyzing a student\u2019s response to a question and generating either (a) a score in order to assess the student\u2019s knowledge and/or other skills and/or (b) some actionable feedback on how the student can improve the response (Page, 1966; Burstein et al., 1998; Burstein et al., 2004; Zechner et al., 2009; Bernstein et al., 2010). It is considered an NLP application since typically the core technology behind the automated analysis of the student response enlists NLP techniques. The student responses can include essays, short answers, or spoken responses, and the two most common kinds of automated scoring are the automated evaluation of writing quality and content knowledge. Both the scores and feedback are usually based on linguistic characteristics of the responses, including but not limited to: (i) Lower-level errors in the response (e.g., pronunciation errors in spoken responses and grammatical/spelling errors in written responses), (ii) The discourse structure and/or organization of the response, (iii) Relevance of the response to the question that was asked."} {"_id": "4ecbc7119802103dfcc21c243575e598b84fd108", "title": "Software risk management barriers: An empirical study", "text": "This paper reports results from a survey of experienced project managers on their perception of software risk management. From a sample of 18 experienced project managers, we have found good awareness of risk management, but low tool usage. We offer evidence that the main barriers to performing risk management are related to its perceived high cost and comparatively low value. Psychological issues are also important, but less so. Risk identification and monitoring, in particular, are perceived as effort-intensive and costly. The perception is that risk management is not prioritised highly enough. Our conclusion is that more must be done to visibly prove the value-to-cost ratio of risk management activities."} {"_id": "1b969b308baea3cfec03f8f08d6f5fe7493e55ad", "title": "Football analysis using spatio-temporal tools", "text": "Analysing a football match is without doubt an important task for coaches, talent scouts, players and even the media; and with current technologies more and more match data is collected. 
Several companies offer the ability to track the position of the players and the ball with high accuracy and high resolution. They also offer software that includes basic analysis tools, for example, straightforward statistics about distance run and number of passes. It is, however, a non-trivial task to perform more advanced analysis. We present a collection of tools that we developed specifically for analysing the performance of football players and teams."} {"_id": "f307c6b3058a09ba1bda5bafa89ad4c501e5079a", "title": "DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation", "text": "Cryptography is not only the science of applying complex mathematics and logic to design strong methods to hide data, called encryption, but also to retrieve the original data back, called decryption. The purpose of cryptography is to transmit a message between a sender and receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm, but also a strong key and a strong concept for the encryption and decryption process. We have introduced the concept of DNA Deep Learning Cryptography, which is defined as a technique of concealing data in terms of DNA sequences and deep learning. In the cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely Adenine (A), Cytosine (C), Guanine (G) and Thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not go beyond the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, first, a method and its implementation for key generation based on the theory of natural selection, using a Genetic Algorithm with the Needleman-Wunsch (NW) algorithm, and second, a method for implementing encryption and decryption based on DNA computing using the biological operations transcription, translation, DNA sequencing, and deep learning."} {"_id": "59ab10d40edcef929642881812d670ecdd86bf7a", "title": "Robust Non-Intrusive Load Monitoring (NILM) with unknown loads", "text": "A Non-Intrusive Load Monitoring (NILM) method, robust even in the presence of unlearned or unknown appliances (UUAs), is presented in this paper. In the absence of such UUAs, this NILM algorithm is capable of accurately identifying each of the turned-ON appliances as well as their energy levels. However, when a UUA or a set of UUAs is turned ON during a particular time window, the proposed NILM method detects their presence. This enables the operator to detect the presence of anomalies or unlearned appliances in a household. This quality increases the reliability of the NILM strategy and makes it more robust compared to existing NILM methods. The proposed Robust NILM strategy (RNILM) works accurately with a single active power measurement taken at a sampling rate as low as one sample per second. Here, first, a unique set of features for each appliance was extracted through decomposing their active power signal traces into uncorrelated subspace components (SCs) via a high-resolution implementation of the Karhunen-Loeve expansion (KLE). Next, in the appliance identification stage, through considering the power levels of the SCs, the number of possible appliance combinations was rapidly reduced. Finally, through a Maximum a Posteriori (MAP) estimation, the turned-ON appliance combination and/or the presence of a UUA was determined. 
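To make the identification stage concrete, here is a toy MAP search over ON/OFF appliance combinations in the spirit of the RNILM abstract above. It replaces the subspace-component features with plain power levels and uses a Gaussian likelihood, so it is a simplification for illustration only; the appliance levels, priors, and the UUA threshold are made up.

```python
import itertools
import numpy as np

def map_appliance_combo(p_obs, levels, prior, sigma=10.0, min_loglik=-20.0):
    """Pick the ON/OFF combination maximizing the posterior of the observed
    aggregate power. If even the best hypothesis explains the reading poorly,
    flag an unknown/unlearned appliance (UUA)."""
    names = list(levels)
    best, best_score, best_loglik = None, -np.inf, -np.inf
    for k in range(len(names) + 1):
        for combo in itertools.combinations(names, k):
            mu = sum(levels[a] for a in combo)
            loglik = -0.5 * ((p_obs - mu) / sigma) ** 2   # Gaussian likelihood
            score = loglik + np.log(prior.get(combo, 1e-3))
            if score > best_score:
                best, best_score, best_loglik = combo, score, loglik
    return best, best_loglik < min_loglik    # (combination, UUA flag)

levels = {"fridge": 120.0, "kettle": 2000.0, "tv": 80.0}   # typical watts (invented)
prior = {("fridge",): 0.5, ("fridge", "tv"): 0.3}
print(map_appliance_combo(2125.0, levels, prior))          # -> fridge + kettle, no UUA
```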
The proposed RNILM method was validated using real data from two public databases: the Reference Energy Disaggregation Dataset (REDD) and Tracebase. The presented results demonstrate the capability of the proposed RNILM method to identify the turned-ON appliance combinations and their energy-level disaggregation, as well as the presence of UUAs, accurately in real households."} {"_id": "e8a3306cceffd3d8f19c253137fc93664e4ef8f6", "title": "A novel SVM-kNN-PSO ensemble method for intrusion detection system", "text": "In machine learning, a combination of classifiers, known as an ensemble classifier, often outperforms individual ones. While many ensemble approaches exist, it remains a difficult task to find a suitable ensemble configuration for a particular dataset. This paper proposes a novel ensemble construction method that uses PSO-generated weights to create an ensemble of classifiers with better accuracy for intrusion detection. The local unimodal sampling (LUS) method is used as a meta-optimizer to find better behavioral parameters for PSO. For our empirical study, we took five random subsets from the well-known KDD99 dataset. Ensemble classifiers are created using the new approaches as well as the weighted majority algorithm (WMA) approach. Our experimental results suggest that the new approach can generate ensembles that outperform WMA in terms of classification accuracy."} {"_id": "8b20232f8df62fdda47906c00598712e18f25d47", "title": "The thoracolumbar fascia: anatomy, function and clinical considerations.", "text": "In this overview, new and existing material on the organization and composition of the thoracolumbar fascia (TLF) will be evaluated with respect to its anatomy, innervation, biomechanics and clinical relevance. The integration of the passive connective tissues of the TLF and the active muscular structures surrounding this structure is discussed, and the relevance of their mutual interactions to low back and pelvic pain is reviewed. The TLF is a girdling structure consisting of several aponeurotic and fascial layers that separates the paraspinal muscles from the muscles of the posterior abdominal wall. The superficial lamina of the posterior layer of the TLF (PLF) is dominated by the aponeuroses of the latissimus dorsi and the serratus posterior inferior. The deeper lamina of the PLF forms an encapsulating retinacular sheath around the paraspinal muscles. The middle layer of the TLF (MLF) appears to derive from an intermuscular septum that developmentally separates the epaxial from the hypaxial musculature. This septum forms during the fifth and sixth weeks of gestation. The paraspinal retinacular sheath (PRS) is in a key position to act as a 'hydraulic amplifier', assisting the paraspinal muscles in supporting the lumbosacral spine. This sheath forms a lumbar interfascial triangle (LIFT) with the MLF and PLF. Along the lateral border of the PRS, a raphe forms where the sheath meets the aponeurosis of the transversus abdominis. This lateral raphe is a thickened complex of dense connective tissue marked by the presence of the LIFT, and represents the junction of the hypaxial myofascial compartment (the abdominal muscles) with the paraspinal sheath of the epaxial muscles. The lateral raphe is in a position to distribute tension from the surrounding hypaxial and extremity muscles into the layers of the TLF. 
At the base of the lumbar spine all of the layers of the TLF fuse together into a thick composite that attaches firmly to the posterior superior iliac spine and the sacrotuberous ligament. This thoracolumbar composite (TLC) is in a position to assist in maintaining the integrity of the lower lumbar spine and the sacroiliac joint. The three-dimensional structure of the TLF and its caudally positioned composite will be analyzed in light of recent studies concerning the cellular organization of fascia, as well as its innervation. Finally, the concept of a TLC will be used to reassess biomechanical models of lumbopelvic stability, static posture and movement."} {"_id": "2b435ee691718d0b55d057d9be4c3dbb8a81526e", "title": "SURF-Face: Face Recognition Under Viewpoint Consistency Constraints", "text": "We analyze the usage of Speeded Up Robust Features (SURF) as local descriptors for face recognition. The effect of different feature extraction and viewpoint-consistency-constrained matching approaches is analyzed. Furthermore, a RANSAC-based outlier removal for system combination is proposed. The proposed approach allows matching faces under partial occlusion, and even if they are not perfectly aligned or illuminated. Current approaches are sensitive to registration errors and usually rely on a very good initial alignment and illumination of the faces to be recognized. A grid-based and dense extraction of local features in combination with a block-based matching accounting for different viewpoint constraints is proposed, as interest-point-based feature extraction approaches for face recognition often fail. The proposed SURF descriptors are compared to SIFT descriptors. Experimental results on the AR-Face and CMU-PIE databases using manually aligned faces, unaligned faces, and partially occluded faces show that the proposed approach is robust and can outperform current generic approaches."} {"_id": "96ea8f0927f87ab4be3a7fd5a3b1dd38eeaa2ed6", "title": "Trefoil Torus Knot Monopole Antenna", "text": "A wideband and simple torus knot monopole antenna is presented in this letter. The antenna is fabricated using additive manufacturing technology, commonly known as 3-D printing. The antenna is mechanically simple to fabricate and has a stable radiation pattern as well as an input reflection coefficient below -10 dB over the 1-2 GHz frequency range. A comparison of the measured and simulated performance of the antenna is also presented."} {"_id": "d7775931803aab23494937856bbfcb31233c2537", "title": "Human Body Posture Classification by a Neural Fuzzy Network and Home Care System Application", "text": "A new classification approach for human body postures based on a neural fuzzy network is proposed in this paper, and the approach is applied to detect emergencies that are caused by accidental falls. Four main body postures are used for posture classification, including standing, bending, sitting, and lying. After the human body is segmented from the background, the classification features are extracted from the silhouette. The body silhouette is projected onto the horizontal and vertical axes, and then a discrete Fourier transform is applied to each projected histogram. Magnitudes of significant Fourier transform coefficients together with the silhouette length-width ratio are used as features. The classifier is realized by a neural fuzzy network. The four postures can be classified with high accuracy according to experimental results. 
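The feature pipeline this abstract describes (axis projections, DFT magnitudes, length-width ratio) can be sketched in a few lines of NumPy. This is a hedged reconstruction from the abstract's wording, not the authors' code; the number of retained coefficients is an assumption.

```python
import numpy as np

def posture_features(silhouette, n_coeffs=8):
    """Project the binary body silhouette onto the horizontal and vertical
    axes, take DFT magnitudes of each projection histogram, and append the
    silhouette length-width ratio."""
    sil = np.asarray(silhouette, dtype=float)
    h_proj = sil.sum(axis=0)                      # projection onto horizontal axis
    v_proj = sil.sum(axis=1)                      # projection onto vertical axis
    h_mag = np.abs(np.fft.rfft(h_proj))[:n_coeffs]
    v_mag = np.abs(np.fft.rfft(v_proj))[:n_coeffs]
    rows, cols = np.nonzero(sil)
    ratio = (np.ptp(rows) + 1) / (np.ptp(cols) + 1)   # length-width ratio
    return np.concatenate([h_mag, v_mag, [ratio]])

# e.g. a crude "standing" blob: tall and narrow
sil = np.zeros((60, 40)); sil[5:55, 17:23] = 1
print(posture_features(sil).shape)                # (17,) feature vector
```

Such a vector would then be fed to the neural fuzzy classifier the paper designs.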
Experiments also show that the classification results are applicable to home-care emergency detection of a person who suddenly falls and remains in a lying posture for a period of time."} {"_id": "379696f21d6bc8829c425295b917824edf687e0a", "title": "CORE: Context-Aware Open Relation Extraction with Factorization Machines", "text": "We propose CORE, a novel matrix factorization model that leverages contextual information for open relation extraction. Our model is based on factorization machines and integrates facts from various sources, such as knowledge bases or open information extractors, as well as the context in which these facts have been observed. We argue that integrating contextual information\u2014such as metadata about extraction sources, lexical context, or type information\u2014significantly improves prediction performance. Open information extractors, for example, may produce extractions that are unspecific or ambiguous when taken out of context. Our experimental study on a large real-world dataset indicates that CORE has significantly better prediction performance than state-of-the-art approaches when contextual information is available."} {"_id": "e346cb8ecd797c7001bab004f10ebfca25d58074", "title": "Urethrocutaneous fistula repair after hypospadias surgery.", "text": "OBJECTIVE\nTo evaluate and compare the success rates of simple and layered repairs of urethrocutaneous fistulae after hypospadias repair.\n\n\nPATIENTS AND METHODS\nThe charts of 72 children who developed fistulae after hypospadias repair were reviewed; 39 had a simple closure of the fistula, whereas 32 had a 'pants over vest' repair, in all cases after excluding an impairment of urine outflow.\n\n\nRESULTS\nThe success rate at the first attempt was 74% for simple closure and 94% for the layered repair; at the second attempt it was 80% and 100%, the difference being statistically significant for both repairs.\n\n\nCONCLUSIONS\nAlthough probably far from an optimal technique for repairing urethrocutaneous fistulae, the pants-over-vest repair allows a good success rate for penile shaft fistulae."} {"_id": "745cd47479008f8f6f3a4262f5817901ee44c1a3", "title": "Directly Fabricating Soft Robotic Actuators With an Open-Source 3-D Printer", "text": "3-D printing silicone has been a long-sought goal for roboticists. Fused deposition manufacturing (FDM) is a readily accessible and simple 3-D printing scheme that could hold the key to printing silicone. This study details an approach to 3-D print silicone elastomer through the use of a thickening additive and heat-curing techniques. We fabricated an identical control actuator using molding and 3-D printing techniques for comparison. By comparing the free-space elongation and fixed-length force of both actuators, we were able to evaluate the quality of the print. We observed that the 3-D printed linear actuator was able to perform similarly to the molded actuator, with an average error of 5.08% in actuator response, establishing the feasibility of such a system. We envision that further development of this system would contribute to the way soft robotic systems are fabricated."} {"_id": "206b204618640917f278e72bd0e2a881d8cec7ad", "title": "A family of algorithms for approximate Bayesian inference", "text": "One of the major obstacles to using Bayesian methods for pattern recognition has been their computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. 
This method, \"Expectation Propagation,\" unifies and generalizes two previous techniques: assumeddensity filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with a simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation. Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction-propagating richer belief states which incorporate correlations between variables. This framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection. Thesis Supervisor: Rosalind Picard Title: Associate Professor of Media Arts and Sciences"} {"_id": "ccfec3649345ec324821313a3211560bed2ba5e1", "title": "Neural Extractive Text Summarization with Syntactic Compression", "text": "Recent neural network approaches to summarization are largely either sentence-extractive, choosing a set of sentences as the summary, or abstractive, generating the summary from a seq2seq model. In this work, we present a neural model for single-document summarization based on joint extraction and compression. Following recent successful extractive models, we frame the summarization problem as a series of local decisions. Our model chooses sentences from the document and then decides which of a set of compression options to apply to each selected sentence. We compute this set of options using discrete compression rules based on syntactic constituency parses; however, our approach is modular and can flexibly use any available source of compressions. For learning, we construct oracle extractive-compressive summaries that reflect uncertainty over our model\u2019s decision sequence, then learn both of our components jointly with this supervision. Experimental results on the CNN/Daily Mail and New York Times datasets show that our model achieves the state-of-the-art performance on content selection evaluated by ROUGE. 
Moreover, human and manual evaluations show that our model\u2019s output generally remains grammatical."} {"_id": "a9062945a0377878cc1c2161012375270bbb3ae3", "title": "Category-Based Induction", "text": "An argument is categorical if its premises and conclusion are of the form All members of C have property P, where C is a natural category like FALCON or BIRD, and P remains the same across premises and conclusion. An example is Grizzly bears love onions. Therefore, all bears love onions. Such an argument is psychologically strong to the extent that belief in its premises engenders belief in its conclusion. A subclass of categorical arguments is examined, and the following hypothesis is advanced: The strength of a categorical argument increases with (a) the degree to which the premise categories are similar to the conclusion category and (b) the degree to which the premise categories are similar to members of the lowest-level category that includes both the premise and the conclusion categories. A model based on this hypothesis accounts for 13 qualitative phenomena and the quantitative results of several experiments."} {"_id": "58d19026a8ee2c2b719d19b9fe12e1762da9b3fd", "title": "Knowledge Based Approach for Word Sense Disambiguation using Hindi Wordnet", "text": "Word sense disambiguation (WSD) is an open research area in natural language processing and artificial intelligence, and it is also grounded in computational linguistics. It is of considerable theoretical and practical interest. WSD analyzes word tokens in context and specifies exactly which sense of each word is being used. It can be described as a theoretically motivated set of methods and techniques. WSD is a major problem in natural language processing (NLP) and artificial intelligence. Here, the problem is to find the sense of a word given a context and a lexical relation. It is a natural language processing technique that operates on queries and documents in NLP or on texts from Machine Translation (MT). MT is automatic translation; it involves languages such as Marathi, Urdu, Bengali, Punjabi, Hindi, and English. Most of the work has been done for English, and the focus has now shifted to other languages. The applications of WSD include disambiguation of content in information retrieval (IR), machine translation (MT), speech processing, lexicography, and text processing. In our paper, we use a knowledge-based approach to WSD for the Hindi language. A knowledge-based approach uses external lexical resources such as dictionaries and thesauri, and involves incorporating word knowledge to disambiguate words. We have developed a WSD tool using a knowledge-based approach with the Hindi WordNet. WordNet is built from co-occurrence and collocation, and it includes synsets (sets of synonyms) of nouns, verbs, adjectives, and adverbs, grouped by part of speech (POS). In this paper we introduce the implementation of our tool and its evaluation."} {"_id": "10efde0973c9b8221202bacfcdb79a77e1a47fa0", "title": "Internet paradox. A social technology that reduces social involvement and psychological well-being?", "text": "The Internet could change the lives of average citizens as much as did the telephone in the early part of the 20th century and television in the 1950s and 1960s. 
Researchers and social critics are debating whether the Internet is improving or harming participation in community life and social relationships. This research examined the social and psychological impact of the Internet on 169 people in 73 households during their first 1 to 2 years on-line. We used longitudinal data to examine the effects of the Internet on social involvement and psychological well-being. In this sample, the Internet was used extensively for communication. Nonetheless, greater use of the Internet was associated with declines in participants' communication with family members in the household, declines in the size of their social circle, and increases in their depression and loneliness. These findings have implications for research, for public policy and for the design of technology."} {"_id": "4942001ded918f11eda28251522031c6e6bc68ae", "title": "Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and prosocial behavior: a meta-analytic review of the scientific literature.", "text": "Research on exposure to television and movie violence suggests that playing violent video games will increase aggressive behavior. A meta-analytic review of the video-game research literature reveals that violent video games increase aggressive behavior in children and young adults. Experimental and nonexperimental studies with males and females in laboratory and field settings support this conclusion. Analyses also reveal that exposure to violent video games increases physiological arousal and aggression-related thoughts and feelings. Playing violent video games also decreases prosocial behavior."} {"_id": "9d526feb12b50b8956432dde911513b55f708bdd", "title": "Development of a Compendium of Energy Expenditures for Youth", "text": "BACKGROUND\nThis paper presents a Compendium of Energy Expenditures for use in scoring physical activity questionnaires and estimating energy expenditure levels in youth.\n\n\nMETHOD/RESULTS\nModeled after the adult Compendium of Physical Activities, the Compendium of Energy Expenditures for Youth contains a list of over 200 activities commonly performed by youth and their associated MET intensity levels. A review of existing data collected on the energy cost of youth performing activities was undertaken and incorporated into the compendium. About 35% of the activity MET levels were derived from energy cost data measured in youth and the remaining MET levels estimated from the adult compendium.\n\n\nCONCLUSION\nThe Compendium of Energy Expenditures for Youth is useful to researchers and practitioners interested in identifying physical activity and energy expenditure values in children and adolescents in a variety of settings."} {"_id": "2fe9bd2c1ef121cffcb44924af219dd8505e20c8", "title": "Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life.", "text": "Two studies examined violent video game effects on aggression-related variables. Study 1 found that real-life violent video game play was positively related to aggressive behavior and delinquency. The relation was stronger for individuals who are characteristically aggressive and for men. Academic achievement was negatively related to overall amount of time spent playing video games. In Study 2, laboratory exposure to a graphically violent video game increased aggressive thoughts and behavior. In both studies, men had a more hostile view of the world than did women. 
The results from both studies are consistent with the General Affective Aggression Model, which predicts that exposure to violent video games will increase aggressive behavior in both the short term (e.g., laboratory aggression) and the long term (e.g., delinquency)."} {"_id": "ad40428b40b051164ade961bc841a0da2c44515d", "title": "The CES-D Scale : A Self-Report Depression Scale for Research in the General Population", "text": ""} {"_id": "9972d618fdac424c97e26205f364357d275b995d", "title": "Chapter 8 Location-Based Social Networks : Users", "text": "In this chapter, we introduce and define the meaning of location-based social network (LBSN) and discuss the research philosophy behind LBSNs from the perspective of users and locations. In the context of trajectory-centric LBSNs, we then explore two fundamental research points concerned with understanding users in terms of their locations. One is modeling the location history of an individual using the individual\u2019s trajectory data. The other is estimating the similarity between two different people according to their location histories. The inferred similarity represents the strength of connection between two users in a location-based social network, and can enable friend recommendations and community discovery. The general approaches for evaluating these applications are also presented."} {"_id": "f4d133d9933879f550d09955aa66e49a98110609", "title": "Personality Traits and Music Genres: What Do People Prefer to Listen To?", "text": "Personality-based personalized systems are increasingly gaining interest, as personality traits have been shown to be a stable construct within humans. In order to provide a personality-based experience to the user, users' behavior, preferences, and needs in relation to their personality need to be investigated. Although for a technologically mediated environment the search for these relationships is often new territory, there are findings from personality research in the real world that can be used in personalized systems. However, for these findings to be implementable, we need to investigate whether they hold in a technologically mediated environment. In this study we assess prior work on personality-based music genre preferences from traditional personality research. We analyzed a dataset consisting of music listening histories and personality scores of 1415 Last.fm users. Our results show agreement with prior work, but also important differences that can help to inform personalized systems."} {"_id": "3c24dbb4f1e49b6e8b13dc376cd4bb944aaf9968", "title": "DeepHTTP: Semantics-Structure Model with Attention for Anomalous HTTP Traffic Detection and Pattern Mining", "text": "In the Internet age, cyber-attacks occur frequently and take complex forms. Traffic generated by access activities can record website status and user request information, which brings a great opportunity for network attack detection. Among diverse network protocols, the Hypertext Transfer Protocol (HTTP) is widely used in government, organizations and enterprises. In this work, we propose DeepHTTP, a semantics-structure integration model utilizing Bidirectional Long Short-Term Memory (Bi-LSTM) with an attention mechanism to model HTTP traffic as a natural language sequence. In addition to extracting traffic content information, we integrate structural information to enhance the generalization capabilities of the model. 
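A minimal PyTorch sketch of the Bi-LSTM-with-attention classifier family the DeepHTTP abstract describes is shown below. The sizes, the tokenization, and the two-class setup are assumptions for illustration; the returned attention weights hint at how such a model can point to the critical parts of a request.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Bi-LSTM encoder with additive attention over token states."""
    def __init__(self, vocab=5000, emb=64, hid=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid, 1)
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, tokens):                    # tokens: (batch, seq)
        h, _ = self.lstm(self.embed(tokens))      # (batch, seq, 2*hid)
        w = torch.softmax(self.attn(h), dim=1)    # attention weight per token
        ctx = (w * h).sum(dim=1)                  # weighted context vector
        return self.out(ctx), w.squeeze(-1)       # logits + weights to inspect

model = BiLSTMAttention()
logits, weights = model(torch.randint(0, 5000, (8, 40)))  # toy tokenized requests
```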
Moreover, the attention mechanism can assist in discovering critical parts of anomalous traffic and in further mining attack patterns. Additionally, we demonstrate how to incrementally update the dataset and retrain the model so that it adapts to new anomalous traffic. Extensive experimental evaluations over large traffic data have illustrated that DeepHTTP has outstanding performance in traffic detection and pattern discovery."} {"_id": "9d2fbb950e218149ca8063a54468bacb837d9ce6", "title": "How to detect the Cuckoo Sandbox and to Strengthen it?", "text": "Nowadays a lot of malware is analyzed with virtual machines. The Cuckoo sandbox (Cuckoo DevTeam: Cuckoo sandbox. http://www.cuckoosandbox.org , 2013) offers the possibility to log every action performed by the malware on the virtual machine. To protect themselves and to evade detection, malware need to detect whether they are in an emulated environment or in a real one. With a few modifications and tricks on Cuckoo and the virtual machine, we can try to prevent malware from detecting that it is under analysis, or at least make detection harder. It is not necessary to apply all the modifications, because they may produce significant overhead, and if malware checks its execution time, it may detect an anomaly and conclude that it is running in a virtual machine. The present paper will show how malware can detect the Cuckoo sandbox and how we can counter that."} {"_id": "0b2267913519ffe3cdff8681d3c43ac1bb0aa35d", "title": "Synchronous Bidirectional Inference for Neural Sequence Generation", "text": "In sequence-to-sequence generation tasks (e.g. machine translation and abstractive summarization), inference is generally performed in a left-to-right manner to produce the result token by token. The neural approaches, such as LSTM and self-attention networks, are now able to make full use of all the predicted history hypotheses from the left side during inference, but cannot meanwhile access any future (right-side) information, and usually generate unbalanced outputs in which the left parts are much more accurate than the right ones. In this work, we propose a synchronous bidirectional inference model to generate outputs using both left-to-right and right-to-left decoding simultaneously and interactively. First, we introduce a novel beam search algorithm that facilitates synchronous bidirectional decoding. Then, we present the core approach which enables left-to-right and right-to-left decoding to interact with each other, so as to utilize both the history and future predictions simultaneously during inference. We apply the proposed model to both LSTM and self-attention networks. In addition, we propose two strategies for parameter optimization. The extensive experiments on machine translation and abstractive summarization demonstrate that our synchronous bidirectional inference model can achieve remarkable improvements over the strong baselines."} {"_id": "846a41ff58ec488ea798069d98eb0156a4b7fa0a", "title": "Towards Modeling False Memory With Computational Knowledge Bases", "text": "One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. 
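Spreading activation over a semantic network, the mechanism named in this abstract, can be illustrated with a small toy graph. The word associations and weights below are invented for a DRM-style example (studied associates activating a non-studied critical lure); the decay factor and step count are arbitrary.

```python
def spread_activation(graph, sources, decay=0.5, steps=3):
    """Minimal spreading-activation pass over a weighted semantic network:
    activation flows from studied words to their associates, attenuated
    by `decay` at each step."""
    act = {n: 0.0 for n in graph}
    for s in sources:
        act[s] = 1.0
    for _ in range(steps):
        nxt = dict(act)
        for node, edges in graph.items():
            for nbr, w in edges.items():
                nxt[nbr] += decay * w * act[node]
        act = nxt
    return act

graph = {
    "bed":   {"sleep": 0.8, "rest": 0.3},
    "rest":  {"sleep": 0.6, "bed": 0.2},
    "dream": {"sleep": 0.9},
    "sleep": {"bed": 0.4, "dream": 0.3, "rest": 0.2},
}
# 'sleep' (the critical lure) is never studied, yet ends up highly active:
print(spread_activation(graph, sources=["bed", "rest", "dream"]))
```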
In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling."} {"_id": "e4bd80adc5a3486c3a5c3d82cef91b70b67ae681", "title": "Structural Models of Corporate Bond Pricing : An Empirical Analysis", "text": "This article empirically tests five structural models of corporate bond pricing: those of Merton (1974), Geske (1977), Longstaff and Schwartz (1995), Leland and Toft (1996), and Collin-Dufresne and Goldstein (2001). We implement the models using a sample of 182 bond prices from firms with simple capital structures during the period 1986\u20131997. The conventional wisdom is that structural models do not generate spreads as high as those seen in the bond market, and true to expectations, we find that the predicted spreads in our implementation of the Merton model are too low. However, most of the other structural models predict spreads that are too high on average. Nevertheless, accuracy is a problem, as the newer models tend to severely overstate the credit risk of firms with high leverage or volatility and yet suffer from a spread underprediction problem with safer bonds. The Leland and Toft model is an exception in that it overpredicts spreads on most bonds, particularly those with high coupons. More accurate structural models must avoid features that increase the credit risk on the riskier bonds while scarcely affecting the spreads of the safest bonds."} {"_id": "9e60b8314ed2b9d651622a156fe4703cdff8ae05", "title": "A survey and comparative study of data deduplication techniques", "text": "The enormous increase in digital data demands more storage space, which in turn significantly increases the cost of backup and degrades its performance. Traditional backup solutions do not provide any inherent capability to prevent duplicate data from being backed up. Backing up duplicate data significantly increases the backup time and unnecessarily consumes resources. Data deduplication plays an important role in eliminating this redundant data and reducing storage consumption. Its main intent is to detect duplicates efficiently, remove them at high speed, and achieve a good duplicate removal ratio. Many mechanisms have been proposed to meet these objectives. It has been observed that achieving one objective may affect the others. In this paper we classify and review the existing deduplication techniques. We also present an evaluation of deduplication algorithms by measuring their performance in terms of complexity and efficiency with unstructured files. We propose an efficient means to achieve a high deduplication ratio with a minimum backup window. We also indicate some key issues to focus on in future work."} {"_id": "201c8d41fe9c56d319908041af5063570ea37bed", "title": "Detecting concept change in dynamic data streams", "text": "In this research we present a novel approach to the concept change detection problem. Change detection is a fundamental issue in data stream mining, as the classification models generated need to be updated when significant changes occur in the underlying data distribution. 
A number of change detection approaches have been proposed, but they all suffer from limitations with respect to one or more key performance factors, such as high computational complexity, poor sensitivity to gradual change, or the opposite problem of a high false positive rate. Our approach uses reservoir sampling to build a sequential change detection model that offers statistically sound guarantees on false positive and false negative rates but has much smaller computational complexity than the ADWIN concept drift detector (a simplified sketch of the idea appears below). Extensive experimentation on a wide variety of datasets reveals that the scheme also has a smaller false detection rate while maintaining a true detection rate competitive with ADWIN."} {"_id": "8bc6a2b77638eb934b107fb61e9a1ce5876aa9a0", "title": "Smarter Cities and Their Innovation Challenges", "text": "The transformation to smarter cities will require innovation in planning, management, and operations. Several ongoing projects around the world illustrate the opportunities and challenges of this transformation. Cities must get smarter to address an array of emerging urbanization challenges, and as the projects highlighted in this article show, several distinct paths are available. The number of cities worldwide pursuing smarter transformation is growing rapidly. However, these efforts face many political, socioeconomic, and technical hurdles. Changing the status quo is always difficult for city administrators, and smarter city initiatives often require extensive coordination, sponsorship, and support across multiple functional silos. The need to visibly demonstrate a continuous return on investment also presents a challenge. The technical obstacles will center on achieving system interoperability, ensuring security and privacy, accommodating a proliferation of sensors and devices, and adopting a new closed-loop human-computer interaction paradigm."} {"_id": "9fc8123aa2b5be1f802e83bb80b4c32ab97faa6f", "title": "Differential Impairment of Remembering the Past and Imagining Novel Events after Thalamic Lesions", "text": "Vividly remembering the past and imagining the future (mental time travel) seem to rely on common neural substrates, and mental time travel impairments in patients with brain lesions seem to encompass both temporal domains. However, because future thinking\u2014or more generally imagining novel events\u2014involves the recombination of stored elements into a new event, it requires additional resources that are not shared by episodic memory. We aimed to demonstrate this asymmetry in an event generation task administered to two patients with lesions in the medial dorsal thalamus. Because of the dense connection with pFC, this nucleus of the thalamus is implicated in executive aspects of memory (strategic retrieval), which are presumably more important for future thinking than for episodic memory. Compared with groups of healthy matched control participants, both patients could only produce novel events with extensive help from the experimenter (prompting), in the absence of episodic memory problems. Impairments were most pronounced for imagining personal fictitious and impersonal events. More precisely, the patients' descriptions of novel events lacked content and spatio-temporal relations. The observed impairment is unlikely to trace back to disturbances in self-projection, scene construction, or time concept and could be explained by a recombination deficit. 
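Returning to the change-detection abstract above, the following sketch combines Vitter's reservoir sampling (Algorithm R) with a crude mean-shift test between the reservoir (long-term summary) and a recent window. The test statistic and threshold are illustrative only; the paper's actual procedure provides statistically sound guarantees that this toy version does not.

```python
import random

class ReservoirDetector:
    """Reservoir-sampling-based change detection sketch: a fixed-size
    reservoir summarizes the stream so far, and a simple mean-shift test
    compares it against a sliding recent window."""
    def __init__(self, size=100, window=30, thresh=0.5, seed=0):
        self.size, self.window, self.thresh = size, window, thresh
        self.reservoir, self.recent, self.n = [], [], 0
        self.rng = random.Random(seed)

    def add(self, x):
        self.n += 1
        if len(self.reservoir) < self.size:
            self.reservoir.append(x)
        else:
            j = self.rng.randrange(self.n)     # keep x with prob size/n
            if j < self.size:
                self.reservoir[j] = x
        self.recent = (self.recent + [x])[-self.window:]
        if len(self.recent) == self.window:
            mu_r = sum(self.reservoir) / len(self.reservoir)
            mu_w = sum(self.recent) / len(self.recent)
            return abs(mu_w - mu_r) > self.thresh   # change flagged
        return False

det = ReservoirDetector()
stream = [0.0] * 200 + [1.0] * 50                   # abrupt concept change
print([i for i, x in enumerate(stream) if det.add(x)][:1])  # first detection
```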
Thus, although memory and the imagination of novel events are tightly linked, they also partly rely on different processes."} {"_id": "0bf69a49c2baed67fa9a044daa24b9e199e73093", "title": "Inducing Probabilistic Grammars by Bayesian Model Merging", "text": "We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad-hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is governed by the Bayesian posterior probability of the grammar given the data, which formalizes a trade-off between a close fit to the data and a default preference for simpler models (\u2018Occam\u2019s Razor\u2019). The general scheme is illustrated using three types of probabilistic grammars: Hidden Markov models, class-based n-grams, and stochastic context-free grammars."} {"_id": "e3f20444124a645b825afa1f79186e9547175fe9", "title": "Statistics: a brief overview.", "text": "The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter."} {"_id": "7262bc3674c4c063526eaf4d2dcf54eecea7bf77", "title": "ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations", "text": "We describe PARANMT-50M, a dataset of more than 50 million English-English sentential paraphrase pairs. We generated the pairs automatically by using neural machine translation to translate the non-English side of a large parallel corpus, following Wieting et al. (2017). Our hope is that PARANMT-50M can be a valuable resource for paraphrase generation and can provide a rich source of semantic knowledge to improve downstream natural language understanding tasks. To show its utility, we use PARANMT-50M to train paraphrastic sentence embeddings that outperform all supervised systems on every SemEval semantic textual similarity competition, in addition to showing how it can be used for paraphrase generation."} {"_id": "be59652a90db393242dc95bcde049fc14cf8faad", "title": "User-click modeling for understanding and predicting search-behavior", "text": "Recent advances in search users' click modeling consider both users' search queries and click/skip behavior on documents to infer the user's perceived relevance. Most of these models, including dynamic Bayesian networks (DBN) and user browsing models (UBM), use probabilistic models to understand user click behavior based on individual queries. The user behavior is more complex when her actions to satisfy her information needs form a search session, which may include multiple queries and subsequent click behaviors on various items on search result pages. 
Previous research is limited to treating each query within a search session in isolation, without paying attention to its dynamic interactions with other queries in a search session.\n Investigating this problem, we consider the sequence of queries and their clicks in a search session as a task and propose a task-centric click model (TCM). TCM characterizes user behavior related to a task as a collective whole. Specifically, we identify and consider two new biases in TCM as the basis for user modeling. The first indicates that users tend to express their information needs incrementally in a task, and thus perform more clicks as their needs become clearer. The other illustrates that users tend to click fresh documents that are not included in the results of previous queries. Using these biases, TCM is able to capture user search behavior more accurately. Extensive experimental results demonstrate that by considering all the task information collectively, TCM can better interpret user click behavior and achieve significant improvements in terms of the ranking metrics NDCG and perplexity."} {"_id": "6f8f096471cd696a0b95073776ed045e3c96fc4f", "title": "Distributed intelligence for multi-camera visual surveillance", "text": "Latest advances in hardware technology and the state of the art of computer vision and artificial intelligence research can be employed to develop autonomous and distributed monitoring systems. The paper proposes a multi-agent architecture for the understanding of scene dynamics, merging the information streamed by multiple cameras. A typical application would be the monitoring of a secure site, or any visual surveillance application deploying a network of cameras. Modular software (the agents) within such an architecture controls the different components of the system and incrementally builds a model of the scene by merging the information gathered over extended periods of time. The role of distributed artificial intelligence composed of separate and autonomous modules is justified by the need for scalable designs capable of co-operating to infer an optimal interpretation of the scene. Decentralizing intelligence means creating more robust and reliable sources of interpretation, but also allows easy maintenance and updating of the system. Results are presented to support the choice of a distributed architecture, and to prove that scene interpretation can be incrementally and efficiently built by modular software."} {"_id": "96b95da0ab88de23641014abff2a5c0b5fec00c9", "title": "O Bitcoin Where Art Thou? Insight into Large-Scale Transaction Graphs", "text": "Bitcoin is a rising digital currency and exemplifies the growing need for systematically gathering and analyzing public transaction data sets such as the blockchain. However, the blockchain in its raw form is just a large ledger listing transfers of currency units between alphanumeric character strings, without revealing contextually relevant real-world information. In this demo, we present GraphSense, which is a solution that applies a graph-centric perspective on digital currency transactions. It allows users to explore transactions and follow the money flow, facilitates analytics by semantically enriching the transaction graph, supports path and graph pattern search, and guides analysts to anomalous data points. 
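The "follow the money" style of exploration the GraphSense abstract mentions reduces, at its simplest, to path search over a directed transaction graph. The sketch below is a generic breadth-first traversal over invented addresses and amounts, not GraphSense's actual implementation.

```python
from collections import deque

def trace_flows(edges, source, max_hops=3):
    """Breadth-first 'follow the money' traversal from a source address,
    recording each path reached and the amount on its last hop."""
    out = {}
    for frm, to, amt in edges:
        out.setdefault(frm, []).append((to, amt))
    seen, paths = {source}, []
    queue = deque([(source, [source], 0)])
    while queue:
        node, path, hops = queue.popleft()
        if hops == max_hops:
            continue
        for to, amt in out.get(node, []):
            paths.append((path + [to], amt))
            if to not in seen:
                seen.add(to)
                queue.append((to, path + [to], hops + 1))
    return paths

edges = [("A", "B", 5.0), ("B", "C", 3.0), ("B", "D", 1.5), ("C", "E", 2.9)]
for path, amt in trace_flows(edges, "A"):
    print(" -> ".join(path), amt)
```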
To deal with the growing volume and velocity of transaction data, we implemented our solution on a horizontally scalable data processing and analytics infrastructure. Given the ongoing digital transformation in financial services and technologies, we believe that our approach contributes to the development of analytics solutions for digital currency ecosystems, which is relevant in fields such as financial analytics, law enforcement, and scientific research."} {"_id": "c1a38bed9cdba8c51b9562184296a0720115120c", "title": "Layering concepts in anterior composite restorations.", "text": "SUMMARY\nWith increasing patient demands for esthetic dentition, composite resin restorations enjoy great popularity due to excellent esthetics, acceptable longevity, and relatively low costs. Shading concepts used by manufacturers, however, may be confusing to many clinicians.\n\n\nPURPOSE\nTo review and describe the current main shading concepts, evaluate their potential for creating natural esthetics, and provide guidelines for application."} {"_id": "358425aa898ec0a3e8e129b94473a58d8bba366c", "title": "Distribution Free Prediction Bands", "text": "We study distribution-free, nonparametric prediction bands with a special focus on their finite-sample behavior. First, we investigate and develop different notions of finite-sample coverage guarantees. Then we give a new prediction band estimator by combining the idea of \u201cconformal prediction\u201d (Vovk et al., 2009) with nonparametric conditional density estimation. The proposed estimator, called COPS (Conformal Optimized Prediction Set), always has a finite-sample guarantee in a stronger sense than the original conformal prediction estimator. Under regularity conditions the estimator converges to an oracle band at a minimax optimal rate. A fast approximation algorithm and a data-driven method for selecting the bandwidth are developed. The method is illustrated first on simulated data. Then, an application shows that the proposed method gives desirable prediction intervals in an automatic way, as compared to classical linear regression modeling."} {"_id": "3a19f83c37afc6320315f4084618073514d481af", "title": "Harnessing the Influence of Social Proof in Online Shopping: The Effect of Electronic Word of Mouth on Sales of Digital Microproducts", "text": "Social commerce has taken the e-tailing world by storm. Business-to-consumer sites and, more important, intermediaries that facilitate shopping experience, continue to offer more and more innovative technologies to support social interaction among like-minded community members or friends who share the same shopping interests. Among these technologies, reviews, ratings, and recommendation systems have become some of the most popular social shopping platforms due to their ease of use and simplicity in sharing buying experience and aggregating evaluations. This paper studies the effect of electronic word of mouth (eWOM) communication among a closed community of book readers. We studied the entire market of Amazon Shorts e-books, which are digital microproducts sold at a low and uniform price. With the minimal role of price in the buying decision, social discussion via eWOM becomes a collective signal of reputation, and ultimately a significant demand driver. 
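For readers unfamiliar with the conformal idea that the COPS abstract builds on, here is a minimal split-conformal band around a least-squares fit. This is the textbook construction, not COPS itself (which additionally uses conditional density estimation); the miscoverage level alpha=0.1 is an example.

```python
import numpy as np

def split_conformal_band(x_tr, y_tr, x_cal, y_cal, x_new, alpha=0.1):
    """Fit on a training split, compute absolute residuals on a calibration
    split, and widen the fit by the conformal residual quantile to get a
    band with finite-sample marginal coverage of about 1 - alpha."""
    X = np.column_stack([np.ones_like(x_tr), x_tr])
    beta, *_ = np.linalg.lstsq(X, y_tr, rcond=None)
    pred = lambda x: beta[0] + beta[1] * x
    resid = np.abs(y_cal - pred(x_cal))               # calibration residuals
    k = int(np.ceil((len(resid) + 1) * (1 - alpha)))  # conformal quantile rank
    q = np.sort(resid)[min(k, len(resid)) - 1]
    return pred(x_new) - q, pred(x_new) + q

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200); y = 2 * x + rng.normal(0, 0.1, 200)
lo, hi = split_conformal_band(x[:100], y[:100], x[100:], y[100:], np.array([0.5]))
print(lo, hi)   # band around the prediction at x = 0.5
```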
Our empirical study suggests that eWOM can be used to convey the reputation of the product (e.g., the book), the reputation of the brand (i.e., the author), and the reputation of complementary goods (e.g., books in the same category). Until newer social shopping technologies gain acceptance, eWOM technologies should be considered by both e\u2010tailers and shoppers as the first and perhaps primary source of social buying experience."} {"_id": "47dec4bc234f03fd6dbc0728cfaae4cccde5cd7c", "title": "LEARNING INDEPENDENT CAUSAL MECHANISMS", "text": "Independent causal mechanisms are a central concept in the study of causality with implications for machine learning tasks. In this work we develop an algorithm to recover a set of (inverse) independent mechanisms relating a distribution transformed by the mechanisms to a reference distribution. The approach is fully unsupervised and based on a set of experts that compete for data to specialize and extract the mechanisms. We test and analyze the proposed method on a series of experiments based on image transformations. Each expert successfully maps a subset of the transformed data to the original domain, and the learned mechanisms generalize to other domains. We discuss implications for domain transfer and links to recent trends in generative modeling."} {"_id": "6efe199ccd3c46ea0dbd06a016e731d3f750b15a", "title": "Experiences in Improving Risk Management Processes Using the Concepts of the Riskit Method", "text": "This paper describes experiences from two organizations that have used the Riskit method for risk management in their software projects. This paper presents the Riskit method, the organizations involved, case study designs, and findings from the case studies. We focus on the experiences and insights gained through the application of the method in an industrial context and propose some general conclusions based on the case studies."} {"_id": "42f7bb3f5e07cf8bf3f73fc75bddc6e9b845b085", "title": "Convolutional Recurrent Deep Learning Model for Sentence Classification", "text": "As the amount of unstructured text data that humanity produces overall and on the Internet grows, so does the need to intelligently process it and extract different types of knowledge from it. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to natural language processing systems with remarkable results. The CNN is a notable approach to extract higher-level features that are invariant to local translation. However, it requires stacking multiple convolutional layers in order to capture long-term dependencies, due to the locality of the convolutional and pooling layers. In this paper, we describe a joint CNN and RNN framework to overcome this problem. Briefly, we use an unsupervised neural language model to train initial word embeddings that are further tuned by our deep learning network; then, the pre-trained parameters of the network are used to initialize the model. At a final stage, the proposed framework combines former information with a set of feature maps learned by a convolutional layer with long-term dependencies learned via long short-term memory. Empirically, we show that our approach, with slight hyperparameter tuning and static vectors, achieves outstanding results on multiple sentiment analysis benchmarks.
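As a loose illustration of the convolution-then-recurrence pattern the record above describes (embeddings, a convolutional feature extractor, and an LSTM in place of pooling), here is a minimal PyTorch sketch. All layer sizes are assumptions, and this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ConvRecClassifier(nn.Module):
    """Sketch of a convolutional-recurrent sentence classifier: a Conv1d
    extracts local n-gram features and an LSTM, substituted for pooling,
    summarizes long-term dependencies across the feature maps."""
    def __init__(self, vocab_size, embed_dim=128, n_filters=64,
                 kernel_size=5, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size, padding=2)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        x = self.embed(tokens)            # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)             # Conv1d expects (batch, C, L)
        x = torch.relu(self.conv(x))      # local feature maps
        x = x.transpose(1, 2)             # back to (batch, L, C)
        _, (h, _) = self.lstm(x)          # last hidden state summarizes sequence
        return self.out(h[-1])            # class logits

# Smoke test on random token ids (vocabulary size is a placeholder).
logits = ConvRecClassifier(vocab_size=10_000)(torch.randint(0, 10_000, (4, 32)))
```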
Our approach outperforms several existing approaches in terms of accuracy; our results are also competitive with the state-of-the-art results on the Stanford Large Movie Review data set with 93.3% accuracy, and on the Stanford Sentiment Treebank data set with 48.8% fine-grained and 89.2% binary accuracy, respectively. Constructing the convolutional layer followed by the recurrent layer as a substitute for the pooling layer also plays a significant role in reducing the number of parameters. Our results show that we were able to reduce the loss of detailed, local information and capture long-term dependencies with an efficient framework that has fewer parameters and a high level of performance."} {"_id": "da16883de44b888d983e1252315c78ae7fcdd9dc", "title": "Predicting The Next App That You Are Going To Use", "text": "Given the large number of installed apps and the limited screen size of mobile devices, it is often tedious for users to search for the app they want to use. Although some mobile OSs provide categorization schemes that enhance the visibility of useful apps among those installed, the emerging category of homescreen apps aims to take one step further by automatically organizing the installed apps in a more intelligent and personalized way. In this paper, we study how to improve homescreen apps' usage experience through a prediction mechanism that shows users which app they are going to use in the immediate future. The prediction technique is based on a set of features representing the real-time spatiotemporal contexts sensed by the homescreen app. We model the prediction of the next app as a classification problem and propose an effective personalized method to solve it that takes full advantage of human-engineered features and automatically derived features. Furthermore, we study how to solve the two naturally associated cold-start problems: app cold-start and user cold-start. We conduct large-scale experiments on log data obtained from Yahoo Aviate, showing that our approach can accurately predict the next app that a person is going to use."} {"_id": "441836b27ce2d1ac795ab5b42dd203514a5f46c3", "title": "Osteology of Ischikauia steenackeri (Teleostei: Cypriniformes) with comments on its systematic position", "text": "The lakeweed chub Ischikauia steenackeri is a medium-sized, herbivorous fish and the sole extant member of the genus Ischikauia, which is endemic to Lake Biwa and the Yodo River drainage, Japan. In order to clarify its systematic position, the skeletal anatomy of I. steenackeri is described and its relationships with related genera are discussed. The present data suggest the monophyly of Ischikauia and seven cultrine genera (Culter, Chanodichthys, Megalobrama, Sinibrama, Hemiculter, Toxabramis, and Anabarilius) based on a unique character, the metapterygoid elongated dorsally. Additionally, our data suggest that Ischikauia is closely related to Culter, Chanodichthys, Megalobrama, and Sinibrama.
This relationship is supported by three synapomorphies that are common to them: a narrow third infraorbital, dorsal extension of the third supraneural, and a large quadrate foramen."} {"_id": "1e4558354f9509ab9992001340f88be9f298debe", "title": "Feature Selection for Unsupervised Learning", "text": "In this paper, we identify two issues involved in developing an automated feature subset selection algorithm for unlabeled data: the need for finding the number of clusters in conjunction with feature selection, and the need for normalizing the bias of feature selection criteria with respect to dimension. We explore the feature selection problem and these issues through FSSEM (Feature Subset Selection using Expectation-Maximization (EM) clustering) and through two different performance criteria for evaluating candidate feature subsets: scatter separability and maximum likelihood. We present proofs on the dimensionality biases of these feature criteria, and present a cross-projection normalization scheme that can be applied to any criterion to ameliorate these biases. Our experiments show the need for feature selection, the need for addressing these two issues, and the effectiveness of our proposed solutions."} {"_id": "36dd3331060e5e6157d9558563b95253308709cb", "title": "Machine learning: a review of classification and combining techniques", "text": "Supervised classification is one of the tasks most frequently carried out by so-called Intelligent Systems. Thus, a large number of techniques have been developed based on Artificial Intelligence (Logic-based techniques, Perceptron-based techniques) and Statistics (Bayesian Networks, Instance-based techniques). The goal of supervised learning is to build a concise model of the distribution of class labels in terms of predictor features. The resulting classifier is then used to assign class labels to the testing instances where the values of the predictor features are known, but the value of the class label is unknown. This paper describes various classification algorithms and a recent approach for improving classification accuracy: ensembles of classifiers."} {"_id": "9971ff92d9d09b4736228bdbd7726e2e19b9aa2d", "title": "Radiomics: extracting more information from medical images using advanced feature analysis.", "text": "Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy-based molecular assays but gives huge potential for medical imaging, which has the ability to capture intra-tumoural heterogeneity in a non-invasive way. During the past decades, medical imaging innovations, with new hardware, new imaging agents and standardised protocols, have allowed the field to move towards quantitative imaging. The development of automated and reproducible analysis methodologies to extract more information from image-based features is therefore also required. Radiomics--the high-throughput extraction of large amounts of image features from radiographic images--addresses this problem and is one of the approaches that holds great promise but needs further validation in multi-centric settings and in the laboratory."} {"_id": "0345aa6780b7daf1f04698d8fe64f5aae9e0233a", "title": "Reduction Techniques for Instance-Based Learning Algorithms", "text": "Instance-based learning algorithms are often faced with the problem of deciding which instances to store for use during generalization. Storing too many instances can result in large memory requirements and slow execution speed, and can cause an oversensitivity to noise.
This paper has two main purposes. First, it provides a survey of existing algorithms used to reduce storage requirements in instance-based learning algorithms and other exemplar-based algorithms. Second, it proposes six additional reduction algorithms called DROP1\u2013DROP5 and DEL (three of which were first described in Wilson & Martinez, 1997c, as RT1\u2013RT3) that can be used to remove instances from the concept description. These algorithms and 10 algorithms from the survey are compared on 31 classification tasks. Of those algorithms that provide substantial storage reduction, the DROP algorithms have the highest average generalization accuracy in these experiments, especially in the presence of uniform class noise."} {"_id": "0bd261bac58d7633138350db3727d60be8a94f29", "title": "Decision Tree Induction Based on Efficient Tree Restructuring", "text": "The ability to restructure a decision tree efficiently enables a variety of approaches to decision tree induction that would otherwise be prohibitively expensive. Two such approaches are described here, one being incremental tree induction (ITI), and the other being non-incremental tree induction using a measure of tree quality instead of test quality (DMTI). These approaches and several variants offer new computational and classifier characteristics that lend themselves to particular applications."} {"_id": "140a6bdfb0564eb18a1f51a39dff36f20272a461", "title": "Automatic camera and range sensor calibration using a single shot", "text": "As a core robotic and vision problem, camera and range sensor calibration have been researched intensely over the last decades. However, robotic research efforts still often get heavily delayed by the requirement of setting up a calibrated system consisting of multiple cameras and range measurement units. With regard to removing this burden, we present a toolbox with web interface for fully automatic camera-to-camera and camera-to-range calibration. Our system is easy to set up and recovers intrinsic and extrinsic camera parameters as well as the transformation between cameras and range sensors within one minute. In contrast to existing calibration approaches, which often require user intervention, the proposed method is robust to varying imaging conditions, fully automatic, and easy to use since a single image and range scan proves sufficient for most calibration scenarios. Experimentally, we demonstrate that the proposed checkerboard corner detector significantly outperforms the current state-of-the-art. Furthermore, the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities. Experiments using a variety of sensors such as grayscale and color cameras, the Kinect 3D sensor and the Velodyne HDL-64 laser scanner show the robustness of our method in different indoor and outdoor settings and under various lighting conditions."} {"_id": "0a388e51dcc3ac6d38687e1e557e8a000b97b45a", "title": "A taxonomy for multi-agent robotics", "text": "A key difficulty in the design of multi-agent robotic systems is the size and complexity of the space of possible designs. In order to make principled design decisions, an understanding of the many possible system configurations is essential. To this end, we present a taxonomy that classifies multi-agent systems according to communication, computational and other capabilities. We survey existing efforts involving multi-agent systems according to their positions in the taxonomy.
We also present additional results concerning multi-agent systems, with the dual purposes of illustrating the usefulness of the taxonomy in simplifying discourse about robot collective properties, and demonstrating that a collective can be more powerful than a single unit of the collective."} {"_id": "da67375c8b6a250fbd5482bfbfce14f4eb7e506c", "title": "A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents", "text": "This survey presents an overview of the autonomous development of mental capabilities in computational agents. It does so based on a characterization of cognitive systems as systems which exhibit adaptive, anticipatory, and purposive goal-directed behavior. We present a broad survey of the various paradigms of cognition, addressing cognitivist (physical symbol systems) approaches, emergent systems approaches, encompassing connectionist, dynamical, and enactive systems, and also efforts to combine the two in hybrid systems. We then review several cognitive architectures drawn from these paradigms. In each of these areas, we highlight the implications and attendant problems of adopting a developmental approach, both from phylogenetic and ontogenetic points of view. We conclude with a summary of the key architectural features that systems capable of autonomous development of mental capabilities should exhibit."} {"_id": "c9904943c2d0dbbb8cf7dfc6649a9cd8ff10526b", "title": "Lexical Analysis for the Webshell Attacks", "text": "In this paper, we propose a design consideration for a web application firewall (WAF) system. Website management, web technology, and attack techniques evolve at very different paces: attackers keep moving forward, while company and government MIS staff, occupied with day-to-day operations, often cannot keep their systems and knowledge up to date. Moreover, the newest technologies must be checked for compatibility with older versions; for example, with the recent PHP 7, many MIS departments are unable to update their software to handle the variants of WAF bypass methods."} {"_id": "517f6372302bcbb458f253d840d4b3c467c889bd", "title": "Tagging The Web: Building A Robust Web Tagger with Neural Network", "text": "In this paper, we address the problem of web-domain POS tagging using a two-phase approach. The first phase learns representations that capture regularities underlying web text. The representation is integrated as features into a neural network that serves as a scorer for an easy-first POS tagger. Parameters of the neural network are trained using guided learning in the second phase. Experiments on the SANCL 2012 shared task show that our approach achieves 93.15% average tagging accuracy, which is the best accuracy reported so far on this data set, higher than those given by ensembled syntactic parsers."} {"_id": "f1309720a0c32f22f5a1d2c65b4f09d844ac8516", "title": "A hardware CABAC encoder for HEVC", "text": "This paper presents a hardware design of context-based adaptive binary arithmetic coding (CABAC) for the emerging High Efficiency Video Coding (HEVC) standard. While aiming at higher compression efficiency, the CABAC design in HEVC also puts considerable effort into parallelism and reduced hardware cost. Simulation results show that our design processes 1.18 bins per cycle on average. It can work at 357 MHz with 48.940K gates targeting a 0.13 \u03bcm CMOS process.
This processing rate (roughly 1.18 bins/cycle at 357 MHz, i.e., about 421 Mbins/s) can support real-time encoding for all sequences under the common test conditions of the HEVC standard, conforming to main profile level 6.1 of the main tier or main profile level 5.1 of the high tier."} {"_id": "73defb621377dfd4680b3ed1c4d7027cc1a796f0", "title": "Cervical Cancer Diagnosis Using Random Forest Classifier With SMOTE and Feature Reduction Techniques", "text": "Cervical cancer is the fourth most common malignant disease in women worldwide. In most cases, cervical cancer symptoms are not noticeable at its early stages. Many factors increase the risk of developing cervical cancer, such as human papillomavirus, sexually transmitted diseases, and smoking. Identifying those factors and building a classification model to classify whether the cases are cervical cancer or not is a challenging research problem. This study aims at using cervical cancer risk factors to build a classification model using the Random Forest (RF) classification technique with the synthetic minority oversampling technique (SMOTE) and two feature reduction techniques, recursive feature elimination and principal component analysis (PCA). Most medical data sets are often imbalanced because the number of patients is much smaller than the number of non-patients. Because the data set used is imbalanced, SMOTE is applied to solve this problem. The data set consists of 32 risk factors and four target variables: Hinselmann, Schiller, Cytology, and Biopsy. After comparing the results, we find that the combination of the random forest classification technique with SMOTE improves the classification performance."} {"_id": "c875fdb3efe590b6ccbbf7c27d67338be736c102", "title": "Social media-induced technostress: Its impact on the job performance of IT professionals and the moderating role of job characteristics", "text": "Using social media during work hours for non-work-related reasons is becoming commonplace. Organizations are therefore challenged with identifying and overcoming the consequences of such use. Social media-induced technostress has been identified as an important unintended consequence of using social media at work, as it could negatively impact job performance. This study draws on Person-Environment Fit to investigate the relationship between social media-induced technostress and job performance in IT professionals, and the moderating effect of job characteristics on this relationship. The results indicate that social media-induced technostress is negatively related to job performance and the negative impact of social media-induced technostress is intensified when the job characteristics are low. This work extends the literature on job stress, social media, technostress, and job characteristics."} {"_id": "a8b916062661ddacc54d586fa5676344130b84b5", "title": "Mining long-term search history to improve search accuracy", "text": "Long-term search history contains rich information about a user's search preferences, which can be used as search context to improve retrieval performance. In this paper, we study statistical language modeling based methods to mine contextual information from long-term search history and exploit it for a more accurate estimate of the query language model. Experiments on real web search data show that the algorithms are effective in improving search accuracy for both fresh and recurring queries.
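The cervical cancer record above combines SMOTE oversampling, feature reduction, and a Random Forest classifier. A minimal scikit-learn/imbalanced-learn sketch of such a pipeline follows; the synthetic data and all hyperparameters are placeholders, not the study's settings.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced stand-in for a risk-factor data set (32 features, rare positives).
X, y = make_classification(n_samples=800, n_features=32, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# One of the two reduction options named in the record: PCA (RFE is analogous).
pca = PCA(n_components=10).fit(X_tr)
X_tr_r, X_te_r = pca.transform(X_tr), pca.transform(X_te)

# Oversample the minority class on the training split only,
# so the test set stays untouched.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_r, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te_r)))
```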
The best performance is achieved when using clickthrough data of past searches that are related to the current query."} {"_id": "5113d4b4737841fbce92427e3252f81fa99397f2", "title": "Cobalt fill for advanced interconnects", "text": "Cobalt as a material promises to change the conductor landscape in many areas, particularly logic contact and interconnect. In this paper, we focus on Cobalt as the conductor for logic interconnect \u2014 a potential Cu replacement. We demonstrate Co fill capability down to 10nm CD, and show that Co-Cu resistivity cross-over will occur below 14nm CD. Single damascene Co-ULK structures are used to establish an optimized metallization stack with robust CMP performance that has electromigration and TDDB reliability better than copper interconnect."} {"_id": "0b48ef877d32694b4f34b618e3994d94203fe242", "title": "We're in it together: interpersonal management of disclosure in social network services", "text": "The workload needed for managing privacy and publicness in current social network services (SNSs) is placed on individuals, yet people have few means to control what others disclose about them. This paper considers SNS-users' concerns in relation to online disclosure and the ways in which they cope with these both individually and collaboratively. While previous work has focused mainly on individual coping strategies, our findings from a qualitative study with 27 participants suggest that collaborative strategies in boundary regulation are of additional importance. We present a framework of strategies for boundary regulation that informs both theoretical work and design practice related to management of disclosure in SNSs. The framework considers disclosure as an interpersonal process of boundary regulation, in which people are dependent on what others choose to disclose about them. The paper concludes by proposing design solutions supportive of collaborative and preventive strategies in boundary regulation that facilitate the management of disclosure online."} {"_id": "159b52158512481df7684c341401efbdbc5d8f02", "title": "Object Detection with Active Sample Harvesting", "text": "The work presented in this dissertation lies in the domains of image classification, object detection, and machine learning. Whether it is training image classifiers or object detectors, the learning phase consists in finding an optimal boundary between populations of samples. In practice, not all samples are equally important: some examples are trivially classified and do not bring much to the training, while others close to the boundary or misclassified are the ones that truly matter. Similarly, the images that samples originate from are not all rich in informative samples. However, most training procedures select samples and images uniformly or weigh them equally. The common thread of this dissertation is how to efficiently find the informative samples/images for training. Although we never consider all the possible samples \u201cin the world\u201d, our purpose is to select the samples in a smarter manner, without looking at all the available ones. The framework adopted in this work consists in organising the data (samples or images) in a tree to reflect the statistical regularities of the training samples, by putting \u201csimilar\u201d samples in the same branch. Each leaf carries a sample and a weight related to the \u201cimportance\u201d of the corresponding sample, and each internal node carries statistics about the weights below.
The tree is used to select the next sample/image for training, by applying a sampling policy, and the \u201cimportance\u201d weights are updated accordingly, to bias the sampling towards informative samples/images in future iterations. Our experiments show that, in the various applications, properly focusing on informative images or informative samples improves the learning phase by either reaching better performances faster or by reducing the training loss faster."} {"_id": "2e7d283fd0064da3a97fef9562a8cc1bd55fa84a", "title": "Behavioral resource-aware model inference", "text": "Software bugs often arise because of differences between what developers think their system does and what the system actually does. These differences frustrate debugging and comprehension efforts. We describe Perfume, an automated approach for inferring behavioral, resource-aware models of software systems from logs of their executions. These finite state machine models ease understanding of system behavior and resource use.\n Perfume improves on the state of the art in model inference by differentiating behaviorally similar executions that differ in resource consumption. For example, Perfume separates otherwise identical requests that hit a cache from those that miss it, which can aid understanding how the cache affects system behavior and removing cache-related bugs. A small user study demonstrates that using Perfume is more effective than using logs and another model inference tool for system comprehension. A case study on the TCP protocol demonstrates that Perfume models can help understand non-trivial protocol behavior. Perfume models capture key system properties and improve system comprehension, while being reasonably robust to noise likely to occur in real-world executions."} {"_id": "6b1790a5f326831727b41b29c9246bc78f928c78", "title": "High-level visual features for underwater place recognition", "text": "This paper reports on a method to perform robust visual relocalization between temporally separated sets of underwater images gathered by a robot. The place recognition and relocalization problem is more challenging in the underwater environment mainly due to three factors: 1) changes in illumination; 2) long-term changes in the visual appearance of features because of phenomena like biofouling on man-made structures and growth or movement in natural features; and 3) low density of visually salient features for image matching. To address these challenges, a patch-based feature matching approach is proposed, which uses image segmentation and local intensity contrast to locate salient patches and HOG description to make correspondences between patches. Compared to traditional point-based features that are sensitive to dramatic appearance changes underwater, patch-based features are able to encode higher level information such as shape or structure which tends to persist across years in underwater environments. The algorithm is evaluated on real data, from multiple years, collected by a Hovering Autonomous Underwater Vehicle for ship hull inspection. 
Relocalization performance across missions from different years is compared with that of traditional methods."} {"_id": "bced698f9a71ec586931c6deefd6d3c55f19d02d", "title": "Increasing individual upper alpha power by neurofeedback improves cognitive performance in human subjects.", "text": "The hypothesis was tested of whether neurofeedback training (NFT)--applied in order to increase upper alpha but decrease theta power--is capable of increasing cognitive performance. A mental rotation task was performed before and after upper alpha and theta NFT. Only those subjects who were able to increase their upper alpha power (responders) performed better on mental rotations after NFT. Training success (extent of NFT-induced increase in upper alpha power) was positively correlated with the improvement in cognitive performance. Furthermore, the EEG of NFT responders showed a significant increase in reference upper alpha power (i.e. in a time interval preceding mental rotation). This is in line with studies showing that increased upper alpha power in a prestimulus (reference) interval is related to good cognitive performance."} {"_id": "38c76a87a94818209e0da82f68a95d0947ce5d25", "title": "The Impact of Blogging and Scaffolding on Primary School Pupils' Narrative Writing: A Case Study", "text": "Narrative writing is a skill that all primary (elementary) school pupils in Singapore are required to develop in their learning of the English language. However, this is an area in which not all pupils excel. This study investigates if the use of blogging and scaffolding can improve pupils\u2019 narrative writing. Data were gathered from 36 primary five (grade five) pupils through pre-post writing tests, reflection sheets, and interviews. The pre-post writing tests were administered before and after the pupils had completed their blogging activities, while the blogs were used to draft their narrative writings and to comment on their peers\u2019 writings. The teacher also used a writing guide that served as a scaffold to help pupils plan their writing on their blogs. Overall, results showed a statistically significant difference of medium effect size between the pre-post test scores. Pupils\u2019 perceptions of using blogs as a tool for writing were also explored. In order to have success throughout life, verbal literacy is crucial, and especially so from the beginnings of education to the future employment of adults (Cassell, 2004). In Singapore, the primary education system consists of a six-year program: a four-year foundation stage from primary (grade) 1 to 4 and a two-year orientation stage from primary 5 to 6 (Ministry of Education, 2010). One of the overall aims of Singapore primary education is to give pupils a good grasp of the English Language. Literacy development such as reading and writing is at the heart of the English Language instructional programme in Singapore schools (Curriculum Planning and Development Division (CPDD), 2001).
As indicated in the 2001 English Language Syllabus"} {"_id": "8ff1f35121c13cabc0b070f23edc46ea88d09b38", "title": "Combined software-defined network (SDN) and Internet of Things (IoT)", "text": "The Internet of Things (IoT) is an emerging technology that has captured industry and research interest within a brief period. Presently, the number of Internet-connected devices is estimated in the billions, and by 2020 many studies expect around 50 billion devices to be connected to the Internet. IoT devices produce a significant amount of data, which can threaten security and complicate directing the flow of data in the network. One alternative solution to the problems caused by IoT is an approach based on programmability and centralized control. Software-Defined Networking (SDN) offers programmable, centralized control of the underlying network without changing the network's current architecture. This paper discusses solutions that combine SDN and IoT across various domains, along with current research directions; a comparative analysis of these solutions provides a brief overview of those directions."} {"_id": "107971a12550051421d4c570b3e9f285df09d6a8", "title": "Approximation Algorithms", "text": "There are many variations of local search:\n Hill-climbing: The name for the basic method described above.\n Metropolis: Pick a random y \u2208 N(x). If f(y) < f(x), set x = y. If f(y) > f(x), set x = y with some probability.\n Simulated annealing: Like Metropolis, but where the probability of moving to a higher-cost neighbor decreases over time.\n Tabu search: Like the previous two, but with memory in order to avoid getting stuck in suboptimal regions or in plateaus (\u201ctaboo\u201d areas).\n Parallel search: Do more than one search, and occasionally replace versions that are doing poorly with copies of versions that are doing well.\n Genetic algorithm: Keep a population of searches that changes over time via \u201cbreeding\u201d of the best-performing searches.\n A minimal Python sketch of the Metropolis rule appears below."} {"_id": "c7d0dc4a94b027b0242cd7cf82c9779dc0dd8b33", "title": "Vitamins, minerals, and mood.", "text": "In this article, the authors explore the breadth and depth of published research linking dietary vitamins and minerals (micronutrients) to mood. Since the 1920s, there have been many studies on individual vitamins (especially B vitamins and Vitamins C, D, and E), minerals (calcium, chromium, iron, magnesium, zinc, and selenium), and vitamin-like compounds (choline). Recent investigations with multi-ingredient formulas are especially promising. However, without a reasonable conceptual framework for understanding mechanisms by which micronutrients might influence mood, the published literature is too readily dismissed. Consequently, 4 explanatory models are presented, suggesting that mood symptoms may be expressions of inborn errors of metabolism, manifestations of deficient methylation reactions, alterations of gene expression by nutrient deficiency, and/or long-latency deficiency diseases. These models provide possible explanations for why micronutrient supplementation could ameliorate some mental symptoms."} {"_id": "8ff70a1aaa2aeeef42104b3b4f1e686f7be07243", "title": "NameClarifier: A Visual Analytics System for Author Name Disambiguation", "text": "In this paper, we present a novel visual analytics system called NameClarifier to interactively disambiguate author names in publications by keeping humans in the loop.
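Following the local-search list in the \u201cApproximation Algorithms\u201d record above, here is a minimal Python sketch of the Metropolis rule exactly as that record states it; the toy objective and neighborhood function are illustrative assumptions.

```python
import math
import random

def metropolis(f, x0, neighbors, T=1.0, steps=10_000, seed=0):
    """Metropolis rule from the record: pick a random y in N(x); accept it
    if f(y) < f(x), otherwise accept with probability exp(-(f(y) - f(x)) / T)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        y = rng.choice(neighbors(x))
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy                 # move, possibly uphill
            if fx < fbest:
                best, fbest = x, fx       # remember the best state seen
    return best, fbest

# Toy usage: minimize f(x) = x**2 over integer neighbors x - 1 and x + 1.
print(metropolis(lambda x: x * x, 50, lambda x: [x - 1, x + 1]))
```

Simulated annealing, as the record notes, is the same loop with the temperature T decreased over time.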
Specifically, NameClarifier quantifies and visualizes the similarities between ambiguous names and those that have been confirmed in digital libraries. The similarities are calculated using three key factors, namely, co-authorships, publication venues, and temporal information. Our system estimates all possible allocations, and then provides visual cues to users to help them validate every ambiguous case. By looping users in the disambiguation process, our system can achieve more reliable results than general data mining models for highly ambiguous cases. In addition, once an ambiguous case is resolved, the result is instantly added back to our system and serves as additional cues for all the remaining unidentified names. In this way, we open up the black box in traditional disambiguation processes, and help intuitively and comprehensively explain why the corresponding classifications should hold. We conducted two use cases and an expert review to demonstrate the effectiveness of NameClarifier."} {"_id": "2fb9660e25df30a61803f33b74d37021288afe14", "title": "LegUp: high-level synthesis for FPGA-based processor/accelerator systems", "text": "In this paper, we introduce a new open source high-level synthesis tool called LegUp that allows software techniques to be used for hardware design. LegUp accepts a standard C program as input and automatically compiles the program to a hybrid architecture containing an FPGA-based MIPS soft processor and custom hardware accelerators that communicate through a standard bus interface. Results show that the tool produces hardware solutions of comparable quality to a commercial high-level synthesis tool."} {"_id": "11764d0125d5687cef2945a3bf335775835d192b", "title": "Differential Evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces", "text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation."} {"_id": "777896b0f4cb6d2823e15a9265b890c0db9b6de5", "title": "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions", "text": "This paper presents a variant of particle swarm optimizers (PSOs) that we call the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy enables the diversity of the swarm to be preserved to discourage premature convergence.
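The CLPSO record above describes updating each particle's velocity from other particles' historical bests. Below is a simplified numpy sketch of one comprehensive-learning-style velocity update; real CLPSO selects exemplars by tournament with a per-particle learning probability, which this sketch replaces with a uniform random choice, so all parameter values here are illustrative assumptions.

```python
import numpy as np

def clpso_velocity(v, x, pbest, w=0.7, c=1.5, rng=None):
    """Simplified comprehensive-learning update: each dimension of each
    particle learns from the personal best of a randomly chosen particle,
    rather than from a single global best (helping preserve diversity).
    v, x, pbest: arrays of shape (n_particles, dim)."""
    rng = rng or np.random.default_rng(0)
    n, d = x.shape
    exemplar = rng.integers(0, n, size=(n, d))   # per-dimension exemplar choice
    learned = pbest[exemplar, np.arange(d)]      # exemplar's pbest, dimension-wise
    return w * v + c * rng.random((n, d)) * (learned - x)

# Toy usage on a 5-particle, 3-dimensional swarm.
rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))
v = np.zeros((5, 3))
pbest = x.copy()
v = clpso_velocity(v, x, pbest)
```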
Experiments were conducted (using codes available from http://www.ntu.edu.sg/home/epnsugan) on multimodal test functions such as Rosenbrock, Griewank, Rastrigin, Ackley, and Schwefel and composition functions both with and without coordinate rotation. The results demonstrate good performance of the CLPSO in solving multimodal problems when compared with eight other recent variants of the PSO."} {"_id": "1e4cddd9d4b89a9dbdf854aaf84b584ba47386ab", "title": "Hybrid Particle Swarm Optimiser with breeding and subpopulations", "text": "In this paper we present two hybrid Particle Swarm Optimisers combining the idea of the particle swarm with concepts from Evolutionary Algorithms. The hybrid PSOs combine the traditional velocity and position update rules with the ideas of breeding and subpopulations. Both hybrid models were tested and compared with the standard PSO and standard GA models. This is done to illustrate that PSOs with breeding strategies have the potential to achieve faster convergence and to find better solutions. The objective of this paper is to describe how to make the hybrids benefit from genetic methods and to test their potential and competitiveness on function optimisation."} {"_id": "2483949663c6ba1632924e5a891edf7c8e2d3674", "title": "The particle swarm - explosion, stability, and convergence in a multidimensional complex space", "text": "The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. Even though the algorithm, which is based on a metaphor of social interaction, has been shown to perform well, researchers have not adequately explained how it works. Further, traditional versions of the algorithm have had some undesirable dynamical properties, notably the particles\u2019 velocities needed to be limited in order to control their trajectories. The present paper analyzes a particle\u2019s trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system\u2019s convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions."} {"_id": "bcc63fa7900d45b6b939feb430cf154a6605be17", "title": "Simulated annealing: Practice versus theory", "text": "Simulated annealing (SA) presents an optimization technique with several striking positive and negative features. Perhaps its most salient feature, the statistical promise of delivering an optimal solution, is in current practice often set aside in favor of faster modified algorithms, \u201csimulated quenching\u201d (SQ). Using the author\u2019s Adaptive Simulated Annealing (ASA) code, some examples are given which demonstrate how SQ can be much faster than SA without sacrificing accuracy."} {"_id": "57a5b23825df99aa8ec0fc9f1eaacc2042a70200", "title": "Deep Image Segmentation by Quality Inference", "text": "Traditionally, convolutional neural networks are trained for semantic segmentation by having an image given as input and the segmented mask as output.
In this work, we propose a neural network trained by being given an image and mask pair, with the output being the quality of that pairing. The segmentation is then created afterwards through backpropagation on the mask. This allows enriching training with semi-supervised synthetic variations on the ground-truth. The proposed iterative segmentation technique allows improving an existing segmentation or creating one from scratch. We compare the performance of the proposed methodology with state-of-the-art deep architectures for image segmentation and achieve competitive results, being able to improve their segmentations."} {"_id": "4808afa14fcb89cffcba491fbae284383c39fb17", "title": "ACO Algorithms for the Traveling Salesman Problem", "text": "Ant algorithms [18, 14, 19] are a recently developed, population-based approach which has been successfully applied to several NP-hard combinatorial optimization problems [6, 13, 17, 23, 34, 40, 49]. As the name suggests, ant algorithms have been inspired by the behavior of real ant colonies, in particular, by their foraging behavior. One of the main ideas of ant algorithms is the indirect communication of a colony of agents, called (artificial) ants, based on pheromone trails (pheromones are also used by real ants for communication). The (artificial) pheromone trails are a kind of distributed numeric information which is modified by the ants to reflect their experience while solving a particular problem. Recently, the Ant Colony Optimization (ACO) metaheuristic has been proposed which provides a unifying framework for most applications of ant algorithms [15, 16] to combinatorial optimization problems. In particular, all the ant algorithms applied to the TSP fit perfectly into the ACO meta-heuristic and, therefore, we will call these algorithms also ACO algorithms. The first ACO algorithm, called Ant System (AS) [18, 14, 19], has been applied to the Traveling Salesman Problem (TSP). Starting from Ant System, several improvements of the basic algorithm have been proposed [21, 22, 17, 51, 53, 7]. Typically, these improved algorithms have been tested again on the TSP. All these improved versions of AS have in common a stronger exploita-"} {"_id": "1d262c871b775990a34dbfe8b8f07db69f31deb6", "title": "Variational shape approximation", "text": "A method for concise, faithful approximation of complex 3D datasets is key to reducing the computational cost of graphics applications. Despite numerous applications ranging from geometry compression to reverse engineering, efficiently capturing the geometry of a surface remains a tedious task. In this paper, we present both theoretical and practical contributions that result in a novel and versatile framework for geometric approximation of surfaces. We depart from the usual strategy by casting shape approximation as a variational geometric partitioning problem. Using the concept of geometric proxies, we drive the distortion error down through repeated clustering of faces into best-fitting regions. Our approach is entirely discrete and error-driven, and does not require parameterization or local estimations of differential quantities.
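The \u201cDeep Image Segmentation by Quality Inference\u201d record above creates segmentations by backpropagation on the mask. Below is a minimal PyTorch sketch of that idea under stated assumptions: the quality network's interface (scoring an image-mask pair) and the toy scorer are hypothetical stand-ins, not the paper's model.

```python
import torch
import torch.nn as nn

class ToyQualityNet(nn.Module):
    """Hypothetical scorer: higher output = better (image, mask) pairing."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 1, 3, padding=1)   # 3 image + 1 mask channels
    def forward(self, image, mask):
        return self.conv(torch.cat([image, mask], dim=1)).mean(dim=(1, 2, 3))

def segment_by_quality(quality_net, image, steps=100, lr=0.1):
    """Freeze a trained quality network and optimize the mask itself by
    gradient ascent on the predicted quality of the (image, mask) pairing."""
    for p in quality_net.parameters():
        p.requires_grad_(False)
    mask_logits = torch.zeros_like(image[:, :1], requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mask = torch.sigmoid(mask_logits)           # keep mask values in [0, 1]
        loss = -quality_net(image, mask).mean()     # ascend on predicted quality
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()

# Toy usage on a random image batch.
mask = segment_by_quality(ToyQualityNet(), torch.rand(1, 3, 32, 32))
```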
We also introduce a new metric based on normal deviation, and demonstrate its superior behavior at capturing anisotropy."} {"_id": "68711fe2a5d89e3f13855e6e9a83f735b3d6dfab", "title": "End-user feature labeling: Supervised and semi-supervised approaches based on locally-weighted logistic regression", "text": "When intelligent interfaces, such as intelligent desktop assistants, email classifiers, and recommender systems, customize themselves to a particular end user, such customizations can decrease productivity and increase frustration due to inaccurate predictions\u2014especially in early stages when training data is limited. The end user can improve the learning algorithm by tediously labeling a substantial amount of additional training data, but this takes time and is too ad hoc to target a particular area of inaccuracy. To solve this problem, we propose new supervised and semi-supervised learning algorithms based on locally weighted logistic regression for feature labeling by end users, enabling them to point out which features are important for a class, rather than provide new training instances. We first evaluate our algorithms against other feature labeling algorithms under idealized conditions using feature labels generated by an oracle. In addition, another of our contributions is an evaluation of feature labeling algorithms under real-world conditions using feature labels harvested from actual end users in our user study. Our user study is the first statistical user study for feature labeling involving a large number of end users (43 participants), all of whom have no background in machine learning. Our supervised and semi-supervised algorithms were among the best performers when compared to other feature labeling algorithms in the idealized setting, and they are also robust to poor-quality feature labels provided by ordinary end users in our study. We also perform an analysis to investigate the relative gains of incorporating the different sources of knowledge available: the labeled training set, the feature labels, and the unlabeled data. Together, our results strongly suggest that feature labeling by end users is both viable and effective for allowing end users to improve the learning algorithm behind their customized applications."} {"_id": "7d40c75a12b98263eb61701d3c670c147cdd0054", "title": "On the Kinematic Analysis of Robotic Mechanisms", "text": "The kinematic analyses of manipulators and other robotic devices composed of mechanical links usually depend on the solution of sets of nonlinear equations. There are a variety of both numerical and algebraic techniques available to solve such systems of equations and to give bounds on the number of solutions. These solution methods have also led to an understanding of how special choices of the various structural parameters of a mechanism influence the number of solutions inherent to the kinematic geometry of a given structure. In this paper, results from studying the kinematic geometry of such systems are reviewed, and the three most useful solution techniques are summarized. The solution techniques are polynomial continuation, Gr\u00f6bner bases, and elimination.
We then discuss the results that have been obtained with these techniques in the solution of two basic problems, namely, the inverse kinematics for serial-chain manipulators, and the direct kinematics of in-parallel platform devices. KEY WORDS\u2014inverse kinematics, direct kinematics, manipulators, in-parallel manipulators, series manipulators, polynomial continuation, characteristic polynomial"} {"_id": "24c495e35cd75a44d8cd0b59421c8977dbc75e0c", "title": "The spread of alcohol consumption behavior in a large social network.", "text": "Context A person's alcohol use might mirror that of his or her social contacts. Contribution Using the same group of Framingham Heart Study participants who helped to define the associations between social networks and other health behaviors, the researchers found that alcohol use was similar among individuals in a social cluster. Furthermore, changes in a person's alcohol intake over time followed changes in the alcohol intake of their social contacts. Caution Alcohol use was self-reported, and the researchers did not have access to social contacts who were not participating in the study. Implication Changing alcohol use may require intervening with social groups as well as with individuals. The Editors Alcohol use is common in the United States. In 2002, 55% of adults reported having had at least 1 drink in the previous month, and the prevalence of past-month alcohol consumption was somewhat higher for men (62%) than for women (48%) (1). The lifetime prevalence of alcohol use disorders has been measured at 14.6% (1). Excessive alcohol use, either in the form of heavy drinking or binge drinking, increases the risk for numerous health and social problems (2, 3), and approximately 75,000 deaths in 2001 were attributable to excessive alcohol use, which makes it the third-leading lifestyle-related cause of death (3). Alcohol consumption behavior has many determinants. Previous studies (3, 4) suggest that biological factors have a significant effect on the progression from experimentation to regular use and that social and cultural factors play a critical role in experimentation with alcohol and the development of drinking patterns over time. Given the social nature of this behavior, it is not surprising that previous work has identified interactions with friends and family members as key factors (4\u20138). Although this literature primarily focused on cross-sectional panels, some studies (6\u20138) have attempted to test whether social influences act over time. These studies, which focused on peer influence among college students, showed inconsistent results and tended to focus just on pairs of connected persons. The study of social influences on behavior has expanded in recent years to the study of networks of linked individuals over time (9). Recent work in this area has shown that various health-related phenomena, ranging from sexually transmitted diseases to obesity, smoking, and even suicide, may travel along and within social networks (10\u201315).
Using a longitudinal, dynamic network of 12067 persons, we analyzed the role of social networks in alcohol use, focusing on 1) whether clusters of heavy drinkers and abstainers existed within the network; 2) whether a person's alcohol consumption behavior was associated with that of his or her social contacts; 3) the extent to which such associations depended on the nature and direction of the social ties (for example, friends of different kinds, siblings, spouses, coworkers, or neighbors); and 4) whether gender affected the spread of alcohol consumption across social ties. Methods Source Data We used data from participants in the Framingham Heart Study (FHS). The FHS is a population-based, longitudinal, observational cohort study that was initiated in 1948 to prospectively investigate risk factors for cardiovascular disease. Four cohorts, who mostly represent different generations linked to an original cohort, are included in the entire FHS. Participant data, collected every 2 to 4 years, includes physical examinations, laboratory tests, noninvasive cardiac and vascular testing, battery testing (such as the Mini-Mental State Examination), questionnaire results, and basic demographic information. For our analyses, we aligned the examination waves for the original cohort with those of the second-generation offspring cohort, which allowed us to treat all participants as having been examined in 7 waves. The offspring cohort, initiated in 1971, is the source of our study's principals, or focal individuals in the network (16). However, we included other FHS participants whom the principals listed as social contacts and refer to them here as contacts. Therefore, even though principals come only from the offspring cohort, contacts are drawn from the entire set of both the original and offspring cohorts. To ascertain social network ties, we created a separate data set that linked individuals through self-described social ties, collected in each of the 7 waves of the study. We could then detect relationships between participants (for example, spouse, sibling, friend, coworker, or neighbor) and observe changes in these ties across time. Either party to a link between 2 people might identify his or her link to the other. This is most relevant to the friend link, which could exist if A nominated B or B nominated A as a friend. We also used complete records of participants' and their contacts' address in each wave since 1971 in our analyses, although we have no information about relationships that participants did not report. For each wave, we could determine who is whose neighbor and the geographic distance between persons (10, 17). Table 1 provides descriptive statistics for the 5124 principals in our sample. Table 1. Summary Statistics for Principals Measures Alcohol consumption was self-reported in all studied waves, with participants reporting their average number of drinks per week over the past year as well as the number of days within the past week during which they consumed alcohol (beer, wine, and liquor). Self-reported data are generally considered a valid and reliable source when assessing alcohol consumption, although recall measures, such as those used in this study, can be subject to recall bias from participants (18). 
We treated alcohol consumption as a continuous variable in some analyses (for example, number of drinks per day, calculated from participant responses) but conducted others with dichotomous cut-points, defining heavy drinkers as those who averaged more than 1 (for women) or 2 (for men) drinks per day; moderate drinkers as those whose alcohol consumption was less than the cutoff values for heavy drinkers; and abstainers as those who reported no alcohol consumption. We did not use self-reported number of days drinking in the past week as a measure in and of itself but rather as a means to calculate average number of drinks in a day. (These labels do not reflect clinical definitions of alcohol abuse or dependence.) Table 2 shows averages for the study population across time, including age, alcohol consumption, and percentages of abstainers and drinkers. Although the differences in how we measured heavy drinking made it difficult to compare our results with those for other population samples, the other averages for the mean-age groups in each year of the given waves are roughly similar to national averages of alcohol consumption behavior (1, 19, 20). Table 2. Average Age and Alcohol Consumption Behavior, by Examination Statistical Analysis Our first goal was to evaluate whether a person's alcohol consumption behavior was associated with that of his or her social network ties at various degrees of separation. To test this hypothesis, we took an observed clustering of persons (and their alcohol consumption behavior) within the whole network and compared them with 1000 simulated networks with the same topology and overall prevalence of drinking as the observed network, but with the incidence of drinking (for example, at least 1 drink per day) randomly distributed across the nodes (random drinking networks). If clustering occurs in drinking behavior, then the probability that a contact is a drinker given that a principal is a drinker should be higher in the observed network than in the random drinking networks (21). We used the Kamada\u2013Kawai algorithm, which iteratively repositions nodes to reduce the number of ties that cross each other, to draw the networks (22). Our second goal was to examine the possible determinants of any clustering in alcohol consumption behavior. We considered 3 explanations for nonrandom clustering of alcohol consumption behavior in the network: principals might choose to associate with like contacts (homophily) (23, 24); principals and contacts might share attributes or jointly experience unobserved contemporaneous events that cause their alcohol consumption behavior to covary (omitted variables or confounding); and contacts might exert social influence or peer effects on principals (induction). The availability of dynamic, longitudinal data on both network connections and drinking behavior allowed us to distinguish between interpersonal induction of drinking and homophily (25). Our basic statistical approach involved specifying longitudinal logistic regression models in which a principal's drinking status at time t + 1 is a function of his or her various attributes, such as age, sex, and education; his or her drinking status at time t; and the drinking status of his or her contacts at times t and t + 1 (25). We used generalized estimating equation procedures to account for multiple observations of the same principal across both waves and principal\u2013contact pairings (26). We assumed an independent working correlation structure for the clusters (27).
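The Framingham analysis above specifies longitudinal logistic regressions fit with generalized estimating equations, clustered on the principal, with the principal's lagged status and the contact's status at times t and t + 1 as predictors. A toy statsmodels sketch of that setup follows; the column names and simulated data are hypothetical stand-ins, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy stand-in for the longitudinal pairs: each row is one
# principal-contact observation at one wave (all names hypothetical).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "principal_id": rng.integers(0, 200, n),   # cluster identifier
    "drinks_lag":   rng.integers(0, 2, n),     # principal's status at t
    "contact_now":  rng.integers(0, 2, n),     # contact's status at t + 1
    "contact_lag":  rng.integers(0, 2, n),     # contact's status at t
    "age":          rng.normal(50, 10, n),
})
df["drinks_now"] = rng.integers(0, 2, n)       # outcome: principal at t + 1

# GEE logistic regression with an independent working correlation
# structure, clustering repeated observations on the principal.
exog = sm.add_constant(df[["drinks_lag", "contact_now", "contact_lag", "age"]])
model = sm.GEE(df["drinks_now"], exog, groups=df["principal_id"],
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Independence())
print(model.fit().summary())
```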
By using a time-lagged dependent variable (lagged to the previous examination) for alcohol consumption, we eliminated serial correlation in the errors (28) (evaluated with a Lagrange multiplier test) and substantially controlled for the principal's genetic endowment and any intrinsic, stable predilection to drink. In addition, the lagged independent variable for a contact's drinking status substantially controlled for homophily (25, 29). The key variable of interest is a contact's alcohol consumption behavior at time t + 1. A significant coefficient on this variable would suggest either that a contact's drinking affects a principal's drinking or that a principal and a contact experience contemporaneous"} {"_id": "e486a10208969181c82aada7dae2f365585a5219", "title": "Chapter 2 : Game Analytics \u2013 The Basics", "text": "Take Away Points: \u2022 Overview of important key terms in game analytics \u2022 Introduction to game telemetry as a source of business intelligence. \u2022 In-depth description and discussion of user-derived telemetry and metrics. \u2022 Introduction to feature selection in game analytics \u2022 Introduction to the knowledge discovery process in game analytics \u2022 References to essential further reading. 1 Analytics \u2013 a new industry paradigm Developing a profitable game in today's market is a challenging endeavor. Thousands of commercial titles are published yearly, across a number of hardware platforms and distribution channels, all competing for players' time and attention, and the game industry is decidedly competitive. In order to effectively develop games, a variety of tools and techniques from, e.g., business practices and project management to user testing have been developed in the game industry, or adopted and adapted from other IT sectors. One of these methods is analytics, which in recent years has decidedly impacted the game industry and game research environment. Analytics is the process of discovering and communicating patterns in data, towards solving problems in business or, conversely, making predictions for supporting enterprise decision management, driving action and/or improving performance. The methodological foundations for analytics are statistics, data mining, mathematics, programming and operations research, as well as data visualization in order to communicate insights learned to the relevant stakeholders. Analytics is not just the querying and reporting of BI data, but rests on actual analysis, e.g. statistical analysis, predictive modeling, optimization, forecasting, etc. (Davenport and Harris, 2007). Analytics typically relies on computational modeling. There are several branches or domains of analytics, e.g. marketing analytics, risk analytics, web analytics \u2013 and game analytics. Importantly, analytics is not the same thing as data analysis. Analytics is an umbrella term, covering the entire methodology of finding and communicating patterns in data, whereas analysis is used for individual applied instances, e.g. running"} {"_id": "d7257e47cdd257fa73af986342fd966d047b8e50", "title": "Building an Intrusion Detection System Using a Filter-Based Feature Selection Algorithm", "text": "Redundant and irrelevant features in data have caused a long-term problem in network traffic classification. These features not only slow down the process of classification but also prevent a classifier from making accurate decisions, especially when coping with big data. 
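As an illustration of the filter-based selection idea this abstract motivates (and that the next sentences propose), a minimal mutual-information ranking might look as follows; the equal-width binning and top-k cut are our illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_filter(X, y, k=10, bins=10):
    """Rank features by mutual information with the class label.

    X: (n_samples, n_features) array; y: class labels.
    Continuous features are discretized into equal-width bins so that
    both linear and nonlinear dependence contribute to the score.
    """
    scores = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins=bins)
        scores.append(mutual_info_score(y, np.digitize(X[:, j], edges)))
    # indices of the k most informative features, highest score first
    return np.argsort(scores)[::-1][:k]
```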
In this paper, we propose a mutual information based algorithm that analytically selects the optimal feature for classification. This mutual information based feature selection algorithm can handle linearly and nonlinearly dependent data features. Its effectiveness is evaluated in the cases of network intrusion detection. An Intrusion Detection System (IDS), named Least Square Support Vector Machine based IDS (LSSVM-IDS), is built using the features selected by our proposed feature selection algorithm. The performance of LSSVM-IDS is evaluated using three intrusion detection evaluation datasets, namely the KDD Cup 99, NSL-KDD and Kyoto 2006+ datasets. The evaluation results show that our feature selection algorithm contributes more critical features for LSSVM-IDS to achieve better accuracy and lower computational cost compared with the state-of-the-art methods."} {"_id": "df7a9c107ee81e00db7b782304c68ca4aa7aec63", "title": "Small form factor PIFA antenna design at 28 GHz for 5G applications", "text": "This paper presents the design and analysis of a small form factor planar inverted-F antenna at 28 GHz for 5G mobile applications. Feeding and shorting of the antenna are done using metallic strips. The important properties of this antenna are its small footprint (0.25\u03bbg), a good gain of 4.5 dBi, a 10 dB impedance bandwidth of 1.55 GHz and a radiation efficiency of 94%. The total size of the PIFA antenna is 0.25\u03bbg. The PIFA is one of the antennas whose performance greatly depends on ground plane size and its position on the ground plane. It shows a good radiation pattern when placed at the corner or edge of the ground plane."} {"_id": "821ea10539fefb9e911dd0f93a0fec2e9c0cbff3", "title": "Amharic-English Speech Translation in Tourism Domain", "text": "This paper describes speech translation from Amharic-to-English, particularly Automatic Speech Recognition (ASR) with a post-editing feature and Amharic-English Statistical Machine Translation (SMT). The ASR experiment is conducted using a morpheme language model (LM) and a phoneme acoustic model (AM). Likewise, SMT is conducted using words and morphemes as units. Morpheme-based translation shows a 6.29 BLEU score at 76.4% recognition accuracy, while word-based translation shows a 12.83 BLEU score at 77.4% word recognition accuracy. Further, after post-editing the Amharic ASR output using a corpus-based n-gram model, the word recognition accuracy increased by 1.42%. Since the post-editing approach reduces error propagation, the word-based translation accuracy improved by 0.25 (1.95%) BLEU. We are now working towards further reducing propagated errors through different algorithms at each unit of the cascaded speech translation components."} {"_id": "2381e81d414c7fec5157f4e7275082d66030979d", "title": "Optimal management of shoulder impingement syndrome", "text": "Shoulder impingement is a progressive orthopedic condition that occurs as a result of altered biomechanics and/or structural abnormalities. An effective nonoperative treatment for impingement syndrome is aimed at addressing the underlying causative factor or factors that are identified after a complete and thorough evaluation. The clinician devises an effective rehabilitation program to regain full glenohumeral range of motion, reestablish dynamic rotator cuff stability, and implement a progression of resistive exercises to fully restore strength and local muscular endurance in the rotator cuff and scapular stabilizers. 
The clinician can introduce stresses and forces via sport-specific drills and functional activities to allow a return to activity."} {"_id": "7a402e00e2264bfc80faaf1d913a69f654eaf6b6", "title": "Usefulness of altmetrics for measuring the broader impact of research: A case study using data from PLOS and F1000Prime", "text": "Purpose: The present case study investigates the usefulness of altmetrics for measuring the broad impact of research. Methods: This case study is based on a sample of 1,082 PLOS (the Public Library of Science) journal articles recommended in F1000. The dataset includes altmetrics which were provided by PLOS. The F1000 dataset contains tags on papers which were assigned by experts to characterise them. Findings: Results from the Facebook and Twitter models show higher predicted numbers of counts for \"good for teaching\" papers than for those papers where the tag is not set. Further model estimations show that saves by Mendeley users are particularly to be expected when a paper introduces a new practical/theoretical technique (tag: \"technical advance\"). The tag \"New finding\" is statistically significant in the model with which the Facebook counts are evaluated. Conclusions: The \"good for teaching\" tag is assigned to papers which could be of interest to a wider circle of readers than the peers in a specialized area. Thus, the results of the current study indicate that Facebook and Twitter, but not Figshare or Mendeley, might provide an indication of which papers are of interest to a broader circle of readers (and not only for the peers in a specialist area), and could therefore be useful for the measurement of the social impact of research."} {"_id": "12d6ce2b6a940862b18b664a754335de0942dbb9", "title": "Introduction of the automated assessment of homework assignments in a university-level programming course", "text": "Modern teaching paradigms promote active student participation, encouraging teachers to adapt the teaching process to involve more practical work. In the introductory programming course at the Faculty of Computer and Information Science, University of Ljubljana, Slovenia, homework assignments contribute approximately one half to the total grade, requiring a significant investment of time and human resources in the assessment process. This problem was alleviated by the automated assessment of homework assignments. In this paper, we introduce an automated assessment system for programming assignments that includes dynamic testing of student programs, plagiarism detection, and a proper presentation of the results. We share our experience and compare the introduced system with the manual assessment approach used before."} {"_id": "496a67013da1995d89f81a392ee545d713b4c544", "title": "Sediment quality criteria in use around the world", "text": "There have been numerous sediment quality guidelines (SQGs) developed during the past 20 years to assist regulators in dealing with contaminated sediments. Unfortunately, most of these have been developed in North America. Traditionally, sediment contamination was determined by assessing the bulk chemical concentrations of individual compounds and often comparing them with background or reference values. Since the 1980s, SQGs have attempted to incorporate biological effects in their derivation approach. 
These approaches can be categorized as empirical, frequency-based approaches to establish the relationship between sediment contamination and toxic response, and theoretically based approaches that attempt to account for differences in bioavailability through equilibrium partitioning (EqP) (i.e., using organic carbon or acid volatile sulfides). Some of these guidelines have been adopted by various regulatory agencies in several countries and are being used as cleanup goals in remediation activities and to identify priority polluted sites. The original SQGs, which compared bulk chemical concentrations to a reference or to background, provided little insight into the ecosystem impact of sediment contaminants. Therefore, SQGs for individual chemicals were developed that relied on field sediment chemistry paired with field or laboratory-based biological effects data. Although some SQGs have been found to be relatively good predictors of significant site contamination, they also have several limitations. False positive and false negative predictions are frequently in the 20% to 30% range for many chemicals and higher for others. The guidelines are chemical specific and do not establish causality where chemical mixtures occur. Equilibrium-based guidelines do not consider sediment ingestion as an exposure route. The guidelines do not consider spatial and temporal variability, and they may not apply in dynamic or larger-grained sediments. Finally, sediment chemistry and bioavailability are easily altered by sampling and subsequent manipulation processes, and therefore, measured SQGs may not reflect in situ conditions. All the assessment tools provide useful information, but some (such as SQGs, laboratory toxicity and bioaccumulation, and benthic indices) are prone to misinterpretation without the availability of specific in situ exposure and effects data. SQGs should be used only in a \u201cscreening\u201d manner or in a \u201cweight-of-evidence\u201d approach. Aquatic ecosystems (including sediments) must be assessed in a \u201cholistic\u201d manner in which multiple components are assessed (e.g., habitat, hydrodynamics, resident biota, toxicity, and physicochemistry, including SQGs) by using integrated approaches."} {"_id": "164e2afbb0a39993581fa3a05530a67c1e3e6da6", "title": "Mapping science through bibliometric triangulation: An experimental approach applied to water research", "text": "The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination \u2013 triangulation \u2013 of these different science maps. In this paper, we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method does individually. 
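One way to picture the triangulation step, assuming each approach (JJCR, SAK, TWRC) yields a similarity matrix over the same set of units; the normalization and averaging choices below are illustrative, not the paper's exact procedure:

```python
import numpy as np

def triangulate(maps):
    """Overlay several science maps given as similarity matrices.

    maps: list of (n, n) similarity matrices over the same units
    (e.g., one each from JJCR, SAK, TWRC). Each matrix is min-max
    normalized so that no single approach dominates, then the
    element-wise mean is returned as a combined map suitable for
    clustering or visualization.
    """
    combined = np.zeros_like(np.asarray(maps[0], dtype=float))
    for m in maps:
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        combined += (m - m.min()) / rng if rng > 0 else m
    return combined / len(maps)
```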
The outcomes from the three different approaches are associated with each other and can be systematically interpreted, and provide insights into the complex multidisciplinary structure of the field of water research"} {"_id": "2ef4a3f755beb0de66454678e2dce47d926eb0c9", "title": "Iterative Document Representation Learning Towards Summarization with Polishing", "text": "In this paper, we introduce Iterative Text Summarization (ITS), an iteration-based model for supervised extractive text summarization, inspired by the observation that it is often necessary for a human to read an article multiple times in order to fully understand and summarize its contents. Current summarization approaches read through a document only once to generate a document representation, resulting in a sub-optimal representation. To address this issue we introduce a model which iteratively polishes the document representation on many passes through the document. As part of our model, we also introduce a selective reading mechanism that decides more accurately the extent to which each sentence in the model should be updated. Experimental results on the CNN/DailyMail and DUC2002 datasets demonstrate that our model significantly outperforms state-of-the-art extractive systems when evaluated by machines and by humans."} {"_id": "14d8c69d5a6e1dae271a9763262a3a85b5cf90ee", "title": "TouchIn: Sightless two-factor authentication on multi-touch mobile devices", "text": "Mobile authentication is indispensable for preventing unauthorized access to multi-touch mobile devices. Existing mobile authentication techniques are often cumbersome to use and also vulnerable to shoulder-surfing and smudge attacks. This paper focuses on designing, implementing, and evaluating TouchIn, a two-factor authentication system on multi-touch mobile devices. TouchIn works by letting a user draw on the touchscreen with one or multiple fingers to unlock his mobile device, and the user is authenticated based on the geometric properties of his drawn curves as well as his behavioral and physiological characteristics. TouchIn allows the user to draw on arbitrary regions on the touchscreen without looking at it. This nice sightless feature makes TouchIn very easy to use and also robust to shoulder-surfing and smudge attacks. Comprehensive experiments on Android devices confirm the high security and usability of TouchIn."} {"_id": "2972719029a179e9cee3ac1a3f0d4fa519563d4d", "title": "Understanding and Using Patterns in Software Development", "text": "Patterns have been shown to be an effective means of capturing and communicating software design experience. However, there is more to patterns than software design patterns: We believe that patterns work for software development on several levels. In this paper we explore what we have come to understand as crucial aspects of the pattern concept, relate patterns to the different models built during software design, discuss pattern forms and how we think that patterns can form larger wholes like pattern handbooks."} {"_id": "c3f6bdea5a42fa86907fd02a329911bc48d9074d", "title": "Korean pop takes off! Social media strategy of Korean entertainment industry", "text": "The Korean Wave (K-wave), or Hallyu, is referred to as \u201cthe phenomenon of Korean pop culture, such as TV dramas, films, pop music, fashion, and online games being widely embraced and shared among the people of Japan, China, Hong Kong, Taiwan, and other Asian countries\u201d. 
After the drama-oriented craze of the first K-wave, Korean pop music (K-pop for short) became an integral part of the second K-wave. In addition, with the rapid spread of social media like YouTube and Twitter, K-pop has expanded its fandom outside of Asia to the West. The world-wide success of K-pop contributes to improving the `Korea' image and making a positive impact on the Korean economy. It is reported that around 1,000 entertainment agencies are active in Korea, while there exist the \u201cbig three\u201d record labels and entertainment agencies: SM Entertainment, YG Entertainment and JYP Entertainment. In this case study, we address the world-wide phenomenon of K-pop popularity and the role of social media in the recent boom of K-pop. Focusing on the major agencies above, we present lessons on how to manage social media strategically: align strategic business model with social media; maximize various social media channels; engage customers with on- and offline promotions; and stimulate the audience with exclusive content."} {"_id": "44f10dd2458e5f3c6f9e6a3f070df5191de8613b", "title": "Performance Evaluation of Channel Decoding with Deep Neural Networks", "text": "With the demand for high data rates and low latency in the fifth generation (5G), the deep neural network decoder (NND) has become a promising candidate due to its capability of one-shot decoding and parallel computing. In this paper, three types of NND, i.e., multi-layer perceptron (MLP), convolution neural network (CNN) and recurrent neural network (RNN), are proposed with the same parameter magnitude. The performance of these deep neural networks is evaluated through extensive simulation. Numerical results show that RNN has the best decoding performance, yet at the price of the highest computational overhead. Moreover, we find there exists a saturation length for each type of neural network, which is caused by their restricted learning abilities."} {"_id": "067420e38c16b83c3e3374dcb8d3f40dffea55c1", "title": "Impact Analysis of Flow Shaping in Ethernet-AVB/TSN and AFDX from Network Calculus and Simulation Perspective", "text": "Ethernet-AVB/TSN (Audio Video Bridging/Time-Sensitive Networking) and AFDX (Avionics Full DupleX switched Ethernet) are switched Ethernet technologies, which are both candidates for real-time communication in the context of transportation systems. AFDX implements a fixed priority scheduling strategy with two priority levels. Ethernet-AVB/TSN supports a similar fixed priority scheduling with an additional Credit-Based Shaper (CBS) mechanism. Besides, TSN can support a time-triggered scheduling strategy. One direct effect of the CBS mechanism is to increase the delay of its flows while decreasing the delay of other priority ones. The former effect can be seen as the shaping restriction and the latter effect can be seen as the shaping benefit from CBS. The goal of this paper is to investigate the impact of CBS on different priority flows, especially on the intermediate priority ones, as well as the effect of CBS bandwidth allocation. It is based on a performance comparison of AVB/TSN and AFDX by simulation in an automotive case study. Furthermore, the shaping benefit is modeled based on integral operations from a network calculus perspective. Combining the analysis of shaping restriction and shaping benefit, some configuration suggestions on the setting of CBS bandwidth are given. Results show that the effect of CBS depends on flow loads and CBS configurations. 
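The credit mechanism at the heart of CBS can be sketched in a few lines; the idle-slope/send-slope naming follows the standard's convention, while the backlogged single-queue model is a simplification of ours (the standard's rule of capping credit at zero for an empty queue is deliberately omitted here):

```python
def cbs_credit_trace(frames, idle_slope, send_slope, dt=1e-6):
    """Toy Credit-Based Shaper: track credit over time for one AVB queue.

    frames: list of frame transmission durations in seconds (FIFO order,
    assumed always backlogged). idle_slope (> 0) and send_slope (< 0)
    are credit rates. A frame may start only when credit >= 0; during
    transmission credit drains at send_slope, while waiting it accrues
    at idle_slope. This waiting is exactly the extra delay CBS imposes
    on its own flows, which in turn frees the link for other priorities.
    """
    credit, t, trace = 0.0, 0.0, []
    for dur in frames:
        while credit < 0:           # wait for credit to recover
            credit += idle_slope * dt
            t += dt
        credit += send_slope * dur  # transmit the frame
        t += dur
        trace.append((t, credit))
    return trace
```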
A larger load of high priority flows in AVB tends to yield better performance for the intermediate priority flows when compared with AFDX. The shaping benefit can be explained and calculated according to the change in the permitted maximum burst."} {"_id": "24c64dc1e20924d4c2f65bf1e71f59abe2195f2e", "title": "The empathic brain: how, when and why?", "text": "Recent imaging results suggest that individuals automatically share the emotions of others when exposed to their emotions. We question the assumption of automaticity and propose a contextual approach, suggesting several modulatory factors that might influence empathic brain responses. Contextual appraisal could occur early in emotional cue evaluation, which then might or might not lead to an empathic brain response, or not until after an empathic brain response is automatically elicited. We propose two major roles for empathy; its epistemological role is to provide information about the future actions of other people and important environmental properties. Its social role is to serve as the origin of the motivation for cooperative and prosocial behavior, as well as help for effective social communication."} {"_id": "1e91cc9eb0ba68a7167491876360027b3fe7819d", "title": "Marine animal classification using UMSLI in HBOI optical test facility", "text": "Environmental monitoring is a critical aspect of marine renewable energy project success. A new system called Unobtrusive Multistatic Serial LiDAR Imager (UMSLI) has been prepared to capture and classify marine life interaction with electrical generation equipment. We present both hardware and software innovations of the UMSLI system. Underwater marine animal imagery has been captured for the first time using red laser diode serial LiDAR, which has advantages over conventional optical cameras in many areas. Moreover, given the scarcity of existing underwater LiDAR data, a shape-matching-based classification algorithm is proposed which requires little training data. On top of applying shape descriptors, the algorithm also adopts information theoretical learning based affine shape registration, improving point correspondences found by shape descriptors as well as the final similarity measure. Within Florida Atlantic University\u2019s Harbor Branch Oceanographic Institute optical test facility, experimental LiDAR data are collected through the front end of the UMSLI prototype, on which the classification algorithm is validated."} {"_id": "744fdd298a2abff4273a5d1e8a9aa3c378fbf9d5", "title": "\u201cThe Internet is a Mask\u201d: High School Students' Suggestions for Preventing Cyberbullying", "text": "INTRODUCTION\nInteractions through technology have an important impact on today's youth. While some of these interactions are positive, there are concerns regarding students engaging in negative interactions like cyberbullying behaviors and the negative impact these behaviors have on others. The purpose of the current study was to explore participant suggestions for both students and adults for preventing cyberbullying incidents.\n\n\nMETHODS\nForty high school students participated in individual, semi-structured interviews. Participant experiences and perceptions were coded using constant comparative methods to illustrate ways in which students and adults may prevent cyberbullying from occurring within their school and community.\n\n\nRESULTS\nStudents reported that peers would benefit from increasing online security, as well as becoming more aware of their cyber-surroundings. 
Regarding adult-provided prevention services, participants often discussed that there is little adults can do to reduce cyberbullying. Reasons included the difficulties in restricting online behaviors or providing effective consequences. However, some students did discuss the use of in-school curricula while suggesting that adults blame people rather than technology as potential ways to prevent cyberbullying.\n\n\nCONCLUSION\nFindings from the current study indicate some potential ways to improve adult efforts to prevent cyberbullying. These strategies include parent/teacher training in technology and cyberbullying, interventions focused more on student behavior than technology restriction, and helping students increase their online safety and awareness."} {"_id": "a204471ad4722a5e4ade844f8a25aa1c1037e1c1", "title": "Brain regions with mirror properties: A meta-analysis of 125 human fMRI studies", "text": "Mirror neurons in macaque area F5 fire when an animal performs an action, such as a mouth or limb movement, and also when the animal passively observes an identical or similar action performed by another individual. Brain-imaging studies in humans conducted over the last 20 years have repeatedly attempted to reveal analogous brain regions with mirror properties in humans, with broad and often speculative claims about their functional significance across a range of cognitive domains, from language to social cognition. Despite such concerted efforts, the likely neural substrates of these mirror regions have remained controversial, and indeed the very existence of a distinct subcategory of human neurons with mirroring properties has been questioned. Here we used activation likelihood estimation (ALE), to provide a quantitative index of the consistency of patterns of fMRI activity measured in human studies of action observation and action execution. From an initial sample of more than 300 published works, data from 125 papers met our strict inclusion and exclusion criteria. The analysis revealed 14 separate clusters in which activation has been consistently attributed to brain regions with mirror properties, encompassing 9 different Brodmann areas. These clusters were located in areas purported to show mirroring properties in the macaque, such as the inferior parietal lobule, inferior frontal gyrus and the adjacent ventral premotor cortex, but surprisingly also in regions such as the primary visual cortex, cerebellum and parts of the limbic system. Our findings suggest a core network of human brain regions that possess mirror properties associated with action observation and execution, with additional areas recruited during tasks that engage non-motor functions, such as auditory, somatosensory and affective components."} {"_id": "fb4dcbd818e5839f025a6bc247b3bc5632be502f", "title": "Immersion and Emotion: Their Impact on the Sense of Presence", "text": "The present study is designed to test the role of immersion and media content in the sense of presence. Specifically, we are interested in the affective valence of the virtual environments. This paper describes an experiment that compares three immersive systems (a PC monitor, a rear projected video wall, and a head-mounted display) and two virtual environments, one involving emotional content and the other not. The purpose of the experiment was to test the interactive role of these two media characteristics (form and content). Scores on two self-report presence measurements were compared among six groups of 10 people each. 
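A 3 (immersion) x 2 (content) between-subjects design like the one just described is commonly analyzed with a two-way ANOVA; the sketch below is purely illustrative (the column names and the use of statsmodels are our assumptions, not the study's reported analysis):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# df has one row per participant: 'presence' (self-report score),
# 'display' (monitor / video wall / HMD) and 'content' (emotional / neutral).
def presence_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a two-way ANOVA and return main effects and the interaction."""
    model = ols("presence ~ C(display) * C(content)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)
```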
The results suggest that both immersion and affective content have an impact on presence. However, immersion was more relevant for non-emotional environments than for emotional ones."} {"_id": "1f315e1ba65cdce34d372ae445b737c0bcc4dac7", "title": "Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders", "text": "To examine mirror neuron abnormalities in autism, high-functioning children with autism and matched controls underwent fMRI while imitating and observing emotional expressions. Although both groups performed the tasks equally well, children with autism showed no mirror neuron activity in the inferior frontal gyrus (pars opercularis). Notably, activity in this area was inversely related to symptom severity in the social domain, suggesting that a dysfunctional 'mirror neuron system' may underlie the social deficits observed in autism."} {"_id": "f7785ec6cc6d443151dcbb7560da7d8bdf4e9ed0", "title": "The successive mean quantization transform", "text": "This paper presents the successive mean quantization transform (SMQT). The transform reveals the organization or structure of the data and removes properties such as gain and bias. The transform is described and applied in speech processing and image processing. The SMQT is considered an extra processing step for the mel frequency cepstral coefficients commonly used in speech recognition. In image processing the transform is applied in automatic image enhancement and dynamic range compression."} {"_id": "b18796d9ecbc3371bea65b136f77b7d65f129390", "title": "Global asymptotic and robust stability of recurrent neural networks with time delays", "text": "In this paper, two related problems, global asymptotic stability (GAS) and global robust stability (GRS) of neural networks with time delays, are studied. First, GAS of delayed neural networks is discussed based on the Lyapunov method and linear matrix inequality. New criteria are given to ascertain the GAS of delayed neural networks. In the designs and applications of neural networks, it is necessary to consider the deviation effects of bounded perturbations of network parameters. In this case, a delayed neural network must be formulated as an interval neural network model. Several sufficient conditions are derived for the existence, uniqueness, and GRS of equilibria for interval neural networks with time delays by use of a new Lyapunov function and matrix inequality. These results are less restrictive than those given in the earlier references."} {"_id": "871314d440dc55cfd2198e229cde92584964be0f", "title": "Evaluating iterative optimization across 1000 datasets", "text": "While iterative optimization has become a popular compiler optimization approach, it is based on a premise which has never been truly evaluated: that it is possible to learn the best compiler optimizations across data sets. Up to now, most iterative optimization studies find the best optimizations through repeated runs on the same data set. Only a handful of studies have attempted to exercise iterative optimization on a few tens of data sets.\n In this paper, we truly put iterative compilation to the test for the first time by evaluating its effectiveness across a large number of data sets. We therefore compose KDataSets, a data set suite with 1000 data sets for 32 programs, which we release to the public. 
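The cross-data-set evaluation reported next can be pictured as a simple search harness; the "fraction of the best possible speedup" criterion is paraphrased from the text, while the data layout and the worst-case selection rule are our assumptions for illustration:

```python
def most_robust_combination(speedups):
    """Pick the flag combination closest to optimal across all data sets.

    speedups[c][d] = measured speedup of flag combination c on data set d.
    For each combination, compute its speedup on every data set as a
    fraction of the best speedup any combination achieves there, and
    return the combination maximizing the worst-case fraction.
    """
    datasets = next(iter(speedups.values())).keys()
    best = {d: max(s[d] for s in speedups.values()) for d in datasets}

    def worst_fraction(c):
        return min(speedups[c][d] / best[d] for d in datasets)

    return max(speedups, key=worst_fraction)
```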
We characterize the diversity of KDataSets, and subsequently use it to evaluate iterative optimization. We demonstrate that it is possible to derive a robust iterative optimization strategy across data sets: for all 32 programs, we find that there exists at least one combination of compiler optimizations that achieves 86% or more of the best possible speedup across all data sets using Intel's ICC (83% for GNU's GCC). This optimal combination is program-specific and yields speedups up to 1.71 on ICC and 2.23 on GCC over the highest optimization level (-fast and -O3, respectively). This finding makes the task of optimizing programs across data sets much easier than previously anticipated, and it paves the way for the practical and reliable usage of iterative optimization. Finally, we derive pre-shipping and post-shipping optimization strategies for software vendors."} {"_id": "85f9441dfd90b1143c6a9afcdca1f279a2e555c0", "title": "SDN-Guard: DoS Attacks Mitigation in SDN Networks", "text": "Software Defined Networking (SDN) has recently emerged as a new networking technology offering an unprecedented programmability that allows network operators to dynamically configure and manage their infrastructures. The main idea of SDN is to move the control plane into a central controller that is in charge of taking all routing decisions in the network. However, despite all the advantages offered by this technology, Denial-of-Service (DoS) attacks are considered a major threat to such networks as they can easily overload the controller processing and communication capacity and flood switch CAM tables, resulting in a critical degradation of the overall network performance. To address this issue, we propose in this paper SDN-Guard, a novel scheme able to efficiently protect SDN networks against DoS attacks by dynamically (1) rerouting potential malicious traffic, (2) adjusting flow timeouts and (3) aggregating flow rules. Realistic experiments using Mininet show that the proposed solution succeeds in minimizing by up to 32% the impact of DoS attacks on the controller performance, switch memory usage and control plane bandwidth and thereby maintaining acceptable network performance during such attacks."} {"_id": "92b89ddca96c90b56410396bedcfaec68bf13578", "title": "Twitter Geolocation Prediction Shared Task of the 2016 Workshop on Noisy User-generated Text", "text": "This paper describes the shared task for the English Twitter geolocation prediction associated with WNUT 2016. We discuss details of the task settings, data preparation and participant systems. The derived dataset and performance figures from each system provide baselines for future research in this realm."} {"_id": "99fbfa347fa1ce0ac5828d5fde6177b9a2e3d47c", "title": "Feasibility of a DC network for commercial facilities", "text": "This paper analyzes the feasibility of direct current for the supply of offices and commercial facilities. This is done by analyzing a case study, i.e. the supply to a university department. Voltage drop calculations have been carried out for different voltage levels. A back-up system for reliable power supply is designed based on commercially available batteries. Finally, an economic evaluation of AC vs. DC is performed."} {"_id": "7b0bc241de12eeeeaacefa8cd8b86a81cfc0a87d", "title": "Sexual arousal: The correspondence of eyes and genitals", "text": "Men's, more than women's, sexual responses may include a coordination of several physiological indices in order to build their sexual arousal to relevant targets. 
Here, for the first time, genital arousal and pupil dilation to sexual stimuli were simultaneously assessed. These measures corresponded more strongly with each other, subjective sexual arousal, and self-reported sexual orientation in men than women. Bisexual arousal is more prevalent in women than men. We therefore predicted that if bisexual-identified men show bisexual arousal, the correspondence of their arousal indices would be more female-typical, thus weaker, than for other men. Homosexual women show more male-typical arousal than other women; hence, their correspondence of arousal indices should be stronger than for other women. Findings, albeit weak in effect, supported these predictions. Thus, if sex-specific patterns are reversed within one sex, they might affect more than one aspect of sexual arousal. Because pupillary responses reflected sexual orientation similar to genital responses, they offer a less invasive alternative for the measurement of sexual arousal."} {"_id": "dabb16e3bbcd382e2bc4c5f380d7bfa21b7cc9d6", "title": "Personal Safety is More Important Than Cost of Damage During Robot Failure", "text": "As robots become more common in everyday life it will be increasingly important to understand how non-experts will view robot failure. In this study, we found that severity of failure seems to be tightly coupled with perceived risk to self rather than risk to the robot's task and object. We initially thought perceived severity would be tied to the cost of damage. Instead, participants placed falling drinking glasses above a laptop when rating the severity of the failure. Related results reinforce the primacy of personal safety over the financial cost of damage and suggest the results were tied to proximity to breaking glass."} {"_id": "b7682634c8633822145193242e7a3e3739042768", "title": "MusicMixer: computer-aided DJ system based on an automatic song mixing", "text": "In this paper, we present MusicMixer, a computer-aided DJ system that helps DJs, specifically with song mixing. MusicMixer continuously mixes and plays songs using an automatic music mixing method that employs audio similarity calculations. By calculating similarities between song sections that can be naturally mixed, MusicMixer enables seamless song transitions. Though song mixing is the most fundamental and important factor in DJ performance, it is difficult for untrained people to seamlessly connect songs. MusicMixer realizes automatic song mixing using an audio signal processing approach; therefore, users can perform DJ mixing simply by selecting a song from a list of songs suggested by the system, enabling effective DJ song mixing and lowering entry barriers for the inexperienced. We also propose personalization for song suggestions using a preference memorization function of MusicMixer."} {"_id": "e7ee27816ade366584d411f4287e50bdc4771e56", "title": "Fast Discovery of Association Rules", "text": ""} {"_id": "38215c283ce4bf2c8edd597ab21410f99dc9b094", "title": "The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent", "text": "SEMAINE has created a large audiovisual database as a part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. 
Data used to build the agents came from interactions between users and an \u201coperator\u201d simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed an appropriate nonverbal behavior) and Semi-automatic SAL (designed so that users' experience approximated interacting with a machine). We then recorded user interactions with the developed system, Automatic SAL, comparing the most communicatively competent version to versions with reduced nonverbal skills. High quality recording was provided by five high-resolution, high-framerate cameras, and four microphones, recorded synchronously. Recordings total 150 participants, for a total of 959 conversations with individual SAL characters, lasting approximately 5 minutes each. Solid SAL recordings are transcribed and extensively annotated: 6-8 raters per clip traced five affective dimensions and 27 associated categories. Other scenarios are labeled on the same pattern, but less fully. Additional information includes FACS annotation on selected extracts, identification of laughs, nods, and shakes, and measures of user engagement with the automatic system. The material is available through a web-accessible database."} {"_id": "af77be128c7a0efb1e9c559b4e77fa0b1b1e77d6", "title": "Analysis and Simulation of Theme Park Queuing System", "text": "It has been an important issue to improve customers' satisfaction in theme parks, which have become a major source of recreation in our daily life. Waiting for rides has been identified as a factor decreasing satisfaction. A previous study indicated that a virtual queuing system can reduce the total waiting time so that customer satisfaction is improved. Results from the simulation tool Arena show that an index, the Satisfaction Value (SV), increases when the queuing system is introduced. In this study, a more complex theme park queuing system (TPQS) scenario is first designed, followed by a comparison of a number of ride combinations with various waiting-time and distribution factors. Analysis is also carried out."} {"_id": "802b80852996d87dc16082b86f6e77115eb6c9a6", "title": "Reverse Engineering Flash EEPROM Memories Using Scanning Electron Microscopy", "text": "In this article, a methodology to extract Flash EEPROM memory contents is presented. Samples are first backside prepared to expose the tunnel oxide of floating gate transistors. Then, a Scanning Electron Microscope (SEM) in the so called Passive Voltage Contrast (PVC) mode allows distinguishing \u20180\u2019 and \u20181\u2019 bit values stored in individual memory cells. Using operator-free SEM acquisition and standard image processing techniques, we demonstrate the possible automation of this technique over a full memory. The presented fast, efficient and low cost technique is successfully implemented on 0.35\u03bcm technology node microcontrollers and on a 0.21\u03bcm smart card type integrated circuit. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Without adequate protection an adversary could obtain the full memory array content within minutes. The technique is a first step for reverse engineering secure embedded systems."} {"_id": "b2c31108c126ec05d0741146dc5d639f30a12d11", "title": "Multi-stage beamforming codebook for 60GHz WPAN", "text": "Beamforming (BF) based on a codebook is regarded as an attractive solution to resolve the poor link budget of millimeter-wave 60GHz wireless communication. 
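Before the codebook discussion that follows, it may help to sketch what a beam codebook is; this steering-vector generator for a uniform linear array is a generic illustration under a half-wavelength-spacing assumption, not the IEEE 802.15.3c codebook itself:

```python
import numpy as np

def ula_codebook(n_elements, n_beams, spacing=0.5):
    """Steering-vector codebook for a uniform linear array (ULA).

    spacing is the element pitch in wavelengths (0.5 = half-wavelength).
    Returns an (n_beams, n_elements) matrix of unit-power complex weight
    vectors, one beam per quantized steering angle in [-90, 90] degrees.
    Coarser codebooks (fewer beams) search faster but with wider beams,
    which is the trade-off multi-stage searching exploits.
    """
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_beams)
    n = np.arange(n_elements)
    phases = 2j * np.pi * spacing * np.outer(np.sin(angles), n)
    return np.exp(phases) / np.sqrt(n_elements)
```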
Because the number of antenna array elements at 60 GHz increases, more beam patterns are generated than in common BF, which causes a long set-up time during beam pattern searching. In order to reduce the set-up time, a three-stage protocol, namely device-to-device (DEV-to-DEV) linking, sector-level searching and beam-level searching, has been adopted by the IEEE 802.15.3c as an optional functionality to realize Gbps communication systems. However, it is still a challenge to create codebooks of different patterns that support the three-stage protocol from a common beam pattern codebook. In this paper, we propose a multi-stage codebook design and a realization architecture to support three-stage BF. The multi-stage codebook can create beam patterns of different granularities and realize progressive searching. Simulation results for an eight-element uniform linear array (ULA) show that this design can divide beam searching into three stages without increasing the system complexity."} {"_id": "ca036c1e6a18931386016df9733a8c1366098235", "title": "Explanation of Two Anomalous Results in Statistical Mediation Analysis.", "text": "Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed."} {"_id": "dd5a7641b70120813cff6f3a08573a2215f4ad40", "title": "The Use Of Edmodo In Creating An Online Learning Community Of Practice For Learning To Teach Science", "text": "This study aimed to create an online community of practice by creating a virtual classroom in the Edmodo application and ascertain the opinions of pre-service primary teachers about the effects of Edmodo on their learning to teach science and the availability of Edmodo. The research used a case study, which is one method of descriptive research. 
During the implementation process, pre-service primary teachers used Edmodo to share activities they had designed that centred on the scientific concepts taught in primary science education programmes. They also shared their diary entries that outlined their experiences and views after they had practised their activities in a real classroom. 58 pre-service primary teachers participated in the study. The author developed a questionnaire and it included one closed-ended and six open-ended questions; the questionnaire was used as the data collection tool. The pre-service primary teachers used Edmodo for 12 weeks. Descriptive and content analysis methods were used to analyse the data obtained from the study. The results obtained from the data analysis showed that pre-service primary teachers generally had positive views about the use of Edmodo in teacher education programmes. Most pre-service primary teachers stated that Edmodo provides the possibility of sharing knowledge, experiences and views. However, some pre-service teachers stated that Edmodo has some limitations; for example, the fact that it requires the user to have internet access. As a result, it can be said that Edmodo can be used to create an online community of practice in teacher education programmes."} {"_id": "3d1f7de876b57952fded28adfbb74d5cd4249b02", "title": "QUANTUM COMPUTING : AN INTRODUCTION", "text": "After some remarks on the fundamental physical nature of information, Bennett and Fredkin's ideas of reversible computation are introduced. This leads on to the suggestions of Benioff and Feynman as to the possibility of a new type of essentially \u2018quantum computers\u2019. If we can build such devices, Deutsch showed that \u2018quantum parallelism\u2019 leads to new algorithms and new complexity classes. This is dramatically illustrated by Shor's quantum algorithm for factorization which is polynomial in time in contrast to algorithms for factorization on a classical Turing computer. This discovery has potentially important implications for the security of many modern cryptographic systems. The fundamentals of quantum computing are then introduced: reversible logic gates, qubits and quantum registers. The key quantum property of \u2018entanglement\u2019 is described, with due homage to Einstein and Bell. As an illustration of a quantum program, Grover's database search algorithm is described in some detail. After all this theory, the status of experimental attempts to build a quantum computer is reviewed: it will become evident that we have a long way to go before we can factorize even small numbers. Finally, we end with some thoughts about the process of \u2018quantum compilation\u2019 \u2013 translating a quantum algorithm into actual physical operations on a quantum system \u2013 and some comments on prospects for future progress."} {"_id": "b155c6ddeba5d094207d6dce1b55893210b65c2d", "title": "Classification methods for activity recognition", "text": "Activity recognition is an important subject in different areas of research, like e-health and ubiquitous computing. In this paper we give an overview of recent work in the field of activity recognition from both body mounted sensors and ambient sensors. 
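As a concrete illustration of the body-mounted-sensor approaches surveyed here, a common sliding-window classification pipeline can be sketched as follows; the window length, feature set, and classifier choice are illustrative assumptions, not a specific method from this survey:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, window=128, step=64):
    """Slide a window over an (n_samples, 3) accelerometer stream and
    extract simple per-axis statistics often used in activity recognition:
    mean, standard deviation, and mean absolute first difference."""
    feats = []
    for start in range(0, len(accel) - window + 1, step):
        w = accel[start:start + window]
        feats.append(np.concatenate([
            w.mean(axis=0),
            w.std(axis=0),
            np.abs(np.diff(w, axis=0)).mean(axis=0),
        ]))
    return np.array(feats)

# Usage sketch (train_stream and window_labels are hypothetical data):
clf = RandomForestClassifier(n_estimators=100)
# clf.fit(window_features(train_stream), window_labels)  # one label per window
```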
We discuss some of the differences among approaches and present guidelines for choosing a suitable approach under various circumstances."} {"_id": "a23e7b612245366c3f7934eebf5bac6205ff023a", "title": "Biocybernetic system evaluates indices of operator engagement in automated task", "text": "A biocybernetic system has been developed as a method to evaluate automated flight deck concepts for compatibility with human capabilities. A biocybernetic loop is formed by adjusting the mode of operation of a task set (e.g., manual/automated mix) based on electroencephalographic (EEG) signals reflecting an operator's engagement in the task set. A critical issue for the loop operation is the selection of features of the EEG to provide an index of engagement upon which to base decisions to adjust task mode. Subjects were run in the closed-loop feedback configuration under four candidate and three experimental control definitions of an engagement index. The temporal patterning of system mode switching was observed for both positive and negative feedback of the index. The indices were judged on the basis of their relative strength in exhibiting expected feedback control system phenomena (stable operation under negative feedback and unstable operation under positive feedback). Of the candidate indices evaluated in this study, an index constructed according to the formula, beta power/(alpha power + theta power), reflected task engagement best."} {"_id": "1a85b157bc6237aa1e556ccfb84a1e3c6c9d602f", "title": "AmphiBot I: an amphibious snake-like robot", "text": "This article presents a project that aims at constructing a biologically inspired amphibious snake-like robot. The robot is designed to be capable of anguilliform swimming like sea-snakes and lampreys in water and lateral undulatory locomotion like a snake on ground. Both the structure and the controller of the robot are inspired by elongate vertebrates. In particular, the locomotion of the robot is controlled by a central pattern generator (a system of coupled oscillators) that produces travelling waves of oscillations as limit cycle behavior. We present the design considerations behind the robot and its controller. Experiments are carried out to identify the types of travelling waves that optimize speed during lateral undulatory locomotion on ground. In particular, the optimal frequency, amplitude and wavelength are thus identified when the robot is crawling on a particular surface."} {"_id": "47c80e2ba776d82edfe038cb0e132e20170a79e6", "title": "Breakthrough in improving the skin sagging with focusing on the subcutaneous tissue structure , retinacula cutis", "text": "Skin sagging is one of the most prominent aging signs and a concerning issue for people over middle age. Although many cosmetic products attempt to reduce skin sagging by improving dermal elasticity, which decreases with age, the effects are insufficient to bring drastic changes to the facial morphology. This study focused on subcutaneous tissue for investigating a skin sagging mechanism. Subcutaneous tissue consists predominantly of adipose tissue with fibrous network structures, called retinacula cutis (RC), which are reported to possibly maintain the soft tissue structure and morphology. This study investigated the effect of subcutaneous tissue physical-property alteration due to RC deterioration on skin sagging. For evaluating RC structure noninvasively, the tomographic images of faces were obtained by magnetic resonance (MR) imaging. 
Subcutaneous tissue network structures observed by MR imaging were indicated to be RC by comparing MR images and the histological specimens of human skin. The density of RC was measured by image analysis. For evaluating sagging degree and physical properties of the skin, sagging scoring and the measurement of elasticity of deeper skin layers were performed. The density of RC was correlated with the elasticity data of deeper skin layers, and the sagging scores tended to increase with decreasing density. These results suggested that the sparse RC structure gave a decrease in the elasticity of the subcutaneous tissue layer, which consequently would be the cause of facial sagging. This study would be a pathfinder for the complete elimination of skin sagging."} {"_id": "3cb35dda9637a582c650ecfd80214bfd2914c9f3", "title": "Predicting defects using network analysis on dependency graphs", "text": "In software development, resources for quality assurance are limited by time and by cost. In order to allocate resources effectively, managers need to rely on their experience backed by code complexity metrics. But often dependencies exist between various pieces of code over which managers may have little knowledge. These dependencies can be construed as a low level graph of the entire system. In this paper, we propose to use network analysis on these dependency graphs. This allows managers to identify central program units that are more likely to face defects. In our evaluation on Windows Server 2003, we found that the recall for models built from network measures is 10 percentage points higher than for models built from complexity metrics. In addition, network measures could identify 60% of the binaries that the Windows developers considered critical, twice as many as identified by complexity metrics."} {"_id": "55289d3feef4bc1e4ff17008120e371eb7f55a24", "title": "Conditional Generation and Snapshot Learning in Neural Dialogue Systems", "text": "Recently a variety of LSTM-based conditional language models (LM) have been applied across a range of language generation tasks. In this work we study various model architectures and different ways to represent and aggregate the source information in an end-to-end neural dialogue system framework. A method called snapshot learning is also proposed to facilitate learning from supervised sequential signals by applying a companion cross-entropy objective function to the conditioning vector. The experimental and analytical results demonstrate firstly that competition occurs between the conditioning vector and the LM, and the differing architectures provide different trade-offs between the two. Secondly, the discriminative power and transparency of the conditioning vector is key to providing both model interpretability and better performance. Thirdly, snapshot learning leads to consistent performance improvements independent of which architecture is used."} {"_id": "027b0d240066f8d1560edcd75505f5650291cced", "title": "Solving optimization problems using black hole algorithm", "text": "Various meta-heuristic optimization approaches have been recently created and applied in different areas. Many of these approaches are inspired by swarm behaviors in nature. This paper studies solving optimization problems using the Black Hole Algorithm (BHA), which is a population-based algorithm. Since the performance of this algorithm had not been tested on mathematical functions, we have studied this issue using some standard functions. 
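The black-hole metaphor can be made concrete with a minimal sketch; the update rule and event-horizon radius below follow the commonly cited BHA formulation and may differ in detail from the paper's exact variant (positive fitness values are assumed for the radius formula):

```python
import numpy as np

def black_hole_step(stars, fitness, bounds):
    """One iteration of the Black Hole Algorithm on a list of 'stars'.

    stars: list of 1-D numpy position vectors; fitness: function to minimize;
    bounds: (lo, hi) scalars for the search box. The best star acts as the
    black hole; every other star drifts toward it, and stars crossing the
    event horizon are swallowed and re-initialized at random positions,
    which keeps the search exploring.
    """
    f = np.array([fitness(s) for s in stars])
    bh = stars[np.argmin(f)].copy()                 # best solution = black hole
    radius = f.min() / f.sum()                      # event-horizon radius
    lo, hi = bounds
    for i, s in enumerate(stars):
        stars[i] = s + np.random.rand() * (bh - s)  # move star toward black hole
        if np.linalg.norm(stars[i] - bh) < radius:  # swallowed: respawn randomly
            stars[i] = lo + np.random.rand(*s.shape) * (hi - lo)
    return stars
```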
The results of the BHA are compared with those of the GA and PSO algorithms, which indicates that the performance of BHA is better than that of the other two algorithms."} {"_id": "75c4b33059aa300e7b52d1b5dab37968ac927e89", "title": "A Wideband Dual-Polarized L-Probe Stacked Patch Antenna Array", "text": "A 2 \u00d7 1 dual-polarized L-probe stacked patch antenna array is presented. It employs a novel technique to achieve high isolation between the two input ports. The proposed antenna has a 14-dB return loss bandwidth of 19.8%, which ranges from 0.808 to 0.986 GHz, for both ports. Also, it has an input port isolation of more than 30 dB and an average gain of 10.5 dBi over this bandwidth. Moreover, its radiation patterns in the two principal planes have a cross-polarization level of less than -15 dB within the 3-dB beamwidths across the passband. Due to these features, this antenna array is highly suitable for the outdoor base station that is required to cover the operating bandwidths of both CDMA800 and GSM900 mobile communication systems."} {"_id": "9268b7db4bd7f94e5c55602a8f7e5c1befa3604f", "title": "High-Isolation XX-Polar Antenna", "text": "A new configuration of the dual-band and dual-polarized antenna is presented, which is fed by aperture coupling for the existing mobile communication systems working over the 870-960 MHz (GSM) and 1710-2180 MHz (DCS/UMTS) frequency bands. XX-Polar antenna stands for an antenna with dual-band and dual linear slant polarization. In this paper DU band stands for DCS/UMTS band. Measurement results show that the proposed antenna yields good broadside radiation characteristics including symmetric radiation patterns, low cross-polarization level (< -14 dB), low backlobe level (F/B > 20 dB) and high isolation (> 30 dB) at both bands. The designed antenna has an impedance bandwidth of 20.4% (790-970 MHz) for VSWR < 1.5 in the lower band and 29.3% for VSWR < 1.6 (1630-2190 MHz) in the upper band. The measured average gains are about 9.3-10.2 and 8.6-10 dBi in the lower and upper band, respectively. It is also promising for array antenna in various wireless systems."} {"_id": "aa33d8f540baf255529392d8d2a96127256307d1", "title": "A Compact Dual-Polarized Printed Dipole Antenna With High Isolation for Wideband Base Station Applications", "text": "A compact dual-polarized printed dipole antenna for wideband base station applications is presented in this communication. The proposed dipole antenna is etched on three assembled substrates. Four horizontal triangular patches are introduced to form two dipoles in two orthogonal polarizations. Two integrated baluns connected with 50 \u03a9 SMA launchers are used to excite the dipole antenna. The proposed dipole antenna achieves a more compact size than many reported wideband printed dipole and magneto-electric dipole antennas. Both simulated and measured results show that the proposed antenna has a port isolation higher than 35 dB over 52% impedance bandwidth (VSWR < 1.5). Moreover, a stable radiation pattern with a peak gain of 7-8.6 dBi is obtained within the operating band. The proposed dipole antenna is suitable as an array element and can be used for wideband base station antennas in the next generation IMT-advanced communications."} {"_id": "f20c6d8c99e06646a063ef71c747d1fe3759d701", "title": "Wideband Dual-Polarized Patch Antenna With Broadband Baluns", "text": "A pair of novel 180\u00b0 broadband microstrip baluns are used to feed a dual-polarized patch antenna. 
The 180\u00b0 broadband balun delivers equal-amplitude power division and consistent 180\u00b0 (\u00b15\u00b0) phase shifting over a wide bandwidth (>50%). We demonstrate that for a dual-polarized quadruple L-probe square patch antenna, the use of the proposed 180\u00b0 broadband balun pair, in place of a conventional 180\u00b0 narrowband balun pair, provides improved input port isolation and reduced H-plane cross-polarization levels over a wider frequency range, while maintaining low E-plane cross-polarization levels and stable E- and H-plane co-polarization patterns throughout the impedance passband."} {"_id": "fa0092adfb76508be15ec05fde2dcf854d4e5d02", "title": "Design of dual-polarized L-probe patch antenna arrays with high isolation", "text": "An experimental study of a dual-polarized L-probe patch antenna is presented. The antenna is designed to operate at around 1.8 GHz. A \"dual-feed\" technique is introduced to achieve high isolation between two input ports. The proposed antenna has an impedance bandwidth of 23.8% (SWR \u2264 2), 15% (SWR \u2264 1.5) and an isolation over 30 dB. In array designs, techniques for improving the isolation between two adjacent elements of an antenna array are also investigated. A two-element array with more than 30 dB isolation is designed and tested."} {"_id": "3cf0b2fa44c38feb0276d22e93f2719889b0667a", "title": "Mining process models with non-free-choice constructs", "text": "Process mining aims at extracting information from event logs to capture the business process as it is being executed. Process mining is particularly useful in situations where events are recorded but there is no system enforcing people to work in a particular way. Consider for example a hospital where the diagnosis and treatment activities are recorded in the hospital information system, but where health-care professionals determine the \u201ccareflow.\u201d Many process mining approaches have been proposed in recent years. However, in spite of many researchers\u2019 persistent efforts, there are still several challenging problems to be solved. In this paper, we focus on mining non-free-choice constructs, i.e., situations where there is a mixture of choice and synchronization. Although most real-life processes exhibit non-free-choice behavior, existing algorithms are unable to adequately deal with such constructs. Using a Petri-net-based representation, we will show that there are two kinds of causal dependencies between tasks, i.e., explicit and implicit ones. We propose an algorithm that is able to deal with both kinds of dependencies. The algorithm has been implemented in the ProM framework and experimental results show that the algorithm indeed significantly improves existing process mining techniques."} {"_id": "4dbe9b736de47d92127c8211bf38566ee45d6b43", "title": "EigenTrustp++: Attack resilient trust management", "text": "This paper argues that trust and reputation models should take into account not only direct experiences (local trust) and experiences from the circle of \u201cfriends\u201d, but also be attack resilient by design in the presence of dishonest feedbacks and sparse network connectivity. We first revisit EigenTrust, one of the most popular reputation systems to date, and identify the inherent vulnerabilities of EigenTrust in terms of its local trust vector, its global aggregation of local trust values, and its eigenvector based reputation propagating model. Then we present EigenTrust++, an attack resilient trust management scheme.
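For reference, the eigenvector-based propagation at the core of EigenTrust can be sketched as a damped power iteration over normalized local trust values. This is a generic sketch of the baseline model only (not the EigenTrust++ extensions), and the exact parameter names are our assumptions:

```python
import numpy as np

def eigentrust(C, p, alpha=0.15, tol=1e-9, max_iter=1000):
    """Baseline EigenTrust-style global trust propagation (sketch).

    C     : row-stochastic matrix of normalized local trust values,
            where C[i, j] is how much peer i trusts peer j.
    p     : probability distribution over pre-trusted peers.
    alpha : weight given to the pre-trusted peers at every step.
    """
    t = p.copy()
    for _ in range(max_iter):
        t_next = (1 - alpha) * C.T @ t + alpha * p
        if np.linalg.norm(t_next - t, 1) < tol:   # converged global trust vector
            return t_next
        t = t_next
    return t
```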
EigenTrust++ extends the eigenvector based reputation propagating model, the core of EigenTrust, and counters each of the identified vulnerabilities with alternative methods that are by design more resilient to dishonest feedbacks and sparse network connectivity under four known attack models. We conduct extensive experimental evaluation on EigenTrust++, and show that EigenTrust++ can significantly outperform EigenTrust in terms of both performance and attack resilience in the presence of dishonest feedbacks and sparse network connectivity against four representative attack models."} {"_id": "3102866d1d3f9e616a6437a719197a78c1590bb1", "title": "Expression and regulation of lincRNAs during T cell development and differentiation", "text": "Although intergenic long noncoding RNAs (lincRNAs) have been linked to gene regulation in various tissues, little is known about lincRNA transcriptomes in the T cell lineages. Here we identified 1,524 lincRNA clusters in 42 T cell samples, from early T cell progenitors to terminally differentiated helper T cell subsets. Our analysis revealed highly dynamic and cell-specific expression patterns for lincRNAs during T cell differentiation. These lincRNAs were located in genomic regions enriched for genes that encode proteins with immunoregulatory functions. Many were bound and regulated by the key transcription factors T-bet, GATA-3, STAT4 and STAT6. We found that the lincRNA LincR-Ccr2-5\u2032AS, together with GATA-3, was an essential component of a regulatory circuit in gene expression specific to the TH2 subset of helper T cells and was important for the migration of TH2 cells."} {"_id": "4c60b754ead9755121d7cf7ef31f86ff282cc8fb", "title": "Hierarchical Complex Activity Representation and Recognition Using Topic Model and Classifier Level Fusion", "text": "Human activity recognition is an important area of ubiquitous computing. Most current research in activity recognition focuses mainly on simple activities, e.g., sitting, running, walking, and standing. Compared with simple activities, complex activities are more complicated, with high-level semantics, e.g., working, commuting, and having a meal. This paper presents a hierarchical model to recognize complex activities as mixtures of simple activities and multiple actions. We generate the components of complex activities using a clustering algorithm, and represent and recognize complex activities by applying a topic model to these components. It is a data-driven method that can retain effective information for representing and recognizing complex activities. In addition, acceleration and physiological signals are fused at the classifier level to ensure the overall performance of complex activity recognition. The results of experiments show that our method has the ability to represent and recognize complex activities effectively."} {"_id": "5d85f0ab7acd995669a24dc93422325e0caea009", "title": "5G security architecture and light weight security authentication", "text": "The concept of future 5G (the 5th generation of mobile networks) is spreading widely in industry, but the related standards are still unclear. The definition of 5G is to provide adequate RF coverage, more bits/Hz and to interconnect all wireless heterogeneous networks to provide a seamless, consistent telecom experience to the user. In this paper we survey the 5G network architecture and propose a multi-tier security architecture and a lightweight authentication measure for 5G.
The proposed scheme aims at achieving fast authentication and minimizing the packet transmission overhead without compromising the security requirements."} {"_id": "43512258e2670a0abeba714643ec28f3d1c9f8ef", "title": "Higher-Order Inference for Multi-class Log-Supermodular Models", "text": "Higher-order models have been shown to be very useful for a plethora of computer vision tasks. However, existing techniques have focused mainly on MAP inference. In this paper, we present the first efficient approach towards approximate Bayesian marginal inference in a general class of high-order, multi-label attractive models, where previous techniques slow down exponentially with the order (clique size). We formalize this task as performing inference in log-supermodular models under partition constraints, and present an efficient variational inference technique. The resulting optimization problems are convex and yield bounds on the partition function. We also obtain a fully factorized approximation to the posterior, which can be used in lieu of the true complicated distribution. We empirically demonstrate the performance of our approach by comparing it to traditional inference methods on a challenging high-fidelity multi-label image segmentation dataset. We obtain state-of-the-art classification accuracy for MAP inference, and substantially improved ROC curves using the approximate marginals."} {"_id": "3562ed47225c461df47b1d69410e40d58e8860e0", "title": "EOMM: An Engagement Optimized Matchmaking Framework", "text": "Matchmaking connects multiple players to participate in online player-versus-player games. Current matchmaking systems depend on a single core strategy: create fair games at all times. These systems pair similarly skilled players on the assumption that a fair game is the best player experience. We will demonstrate, however, that this intuitive assumption sometimes fails and that matchmaking based on fairness is not optimal for engagement. In this paper, we propose an Engagement Optimized Matchmaking (EOMM) framework that maximizes overall player engagement. We prove that equal-skill based matchmaking is a special case of EOMM under a highly simplified assumption that rarely holds in reality. Our simulation on real data from a popular game made by Electronic Arts, Inc. (EA) supports our theoretical results, showing significant improvement in enhancing player engagement compared to existing matchmaking methods."} {"_id": "0c5b32acd045996716f66e327a2d39423446a220", "title": "Local information statistics of LBP and HOG for pedestrian detection", "text": "We present several methods of pedestrian detection in intensity images using different local statistical measures applied to two classes of features extensively used in pedestrian detection: uniform local binary patterns (LBP) and a modified version of histograms of oriented gradients (HOG). Our work extracts local binary patterns and the magnitude and orientation of the gradient image. Then we divide the image into blocks. Within each block we extract different statistics such as the histogram (weighted by the gradient magnitude in the case of HOG) and the information, entropy and energy of the local binary code.
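To make such block statistics concrete, here is a minimal sketch assuming scikit-image's uniform LBP implementation; the block layout (4 x 2) and the LBP parameters are illustrative choices of ours, not necessarily those used in the paper:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_statistics(gray, n_blocks=(4, 2), P=8, R=1):
    """Histogram, entropy and energy of uniform LBP codes per image block."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                      # P+1 uniform patterns plus one catch-all bin
    feats = []
    for rows in np.array_split(lbp, n_blocks[0], axis=0):
        for block in np.array_split(rows, n_blocks[1], axis=1):
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            prob = hist / max(hist.sum(), 1)
            nonzero = prob[prob > 0]
            entropy = -np.sum(nonzero * np.log2(nonzero))   # information content
            energy = np.sum(prob ** 2)
            feats.extend(list(prob) + [entropy, energy])
    return np.asarray(feats)   # one feature vector, e.g. for an AdaBoost classifier
```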
We use AdaBoost to train four classifiers, and we analyze the classification error of each method on the Daimler pedestrian benchmark dataset."} {"_id": "02dd5b65dfc005c30c2c4f49ba5afb586cd7770c", "title": "Event Detection in Social Media Detecting", "text": "The proliferation of social media and user-generated content in the Web has opened new opportunities for detecting and disseminating information quickly. The Twitter stream is one large source of information, but the magnitude of tweets posted and the noisy nature of its content makes the harvesting of knowledge from Twitter very hard. Aiming at overcoming some of the challenges and extracting some of the hidden information, this thesis proposes a system for real-time detection of news events from the Twitter stream. The first step of our approach is to let a classifier, based on an Artificial Neural Network and deep learning, detect news-relevant tweets from the stream. Next, a novel streaming data clustering algorithm is applied to the detected news tweets to form news events. Finally, the events of highest interest are retrieved based on events\u2019 sizes and rapid growth in tweet frequencies, before the news events are presented and visualized in a web user interface. We evaluate the proposed system on a large, publicly available corpus of annotated news events from Twitter. As part of the evaluation, we compare our approach with a related state-of-the-art solution. Overall, our experiments and user-based evaluation show that our approach to detecting current (real) news events delivers state-of-the-art performance."} {"_id": "63396a1a658863faeb9e3c8ee1f1934fd00a7512", "title": "Predicting statistics of asynchronous SGD parameters for a large-scale distributed deep learning system on GPU supercomputers", "text": "Many studies have shown that Deep Convolutional Neural Networks (DCNNs) exhibit great accuracies given large training datasets in image recognition tasks. The optimization technique known as asynchronous mini-batch Stochastic Gradient Descent (SGD) is widely used for deep learning because it gives fast training speed and good recognition accuracies, while it may increase generalization error if training parameters are in inappropriate ranges. We propose a performance model of a distributed DCNN training system called SPRINT that uses asynchronous GPU processing based on mini-batch SGD. The model considers the probability distribution of mini-batch size and gradient staleness that are the core parameters of asynchronous SGD training. Our performance model takes DCNN architecture and machine specifications as input parameters, and predicts the time to sweep the entire dataset, the mini-batch size and the staleness with 5%, 9% and 19% error on average, respectively, on several supercomputers with up to thousands of GPUs. Experimental results on two different supercomputers show that our model can steadily choose the fastest machine configuration that nearly meets a target mini-batch size."} {"_id": "3e28d80281d769a32ff5eb7f9a713b3ae01f6475", "title": "Analysis on robust H\u221e performance and stability for linear systems with interval time-varying state delays via some new augmented Lyapunov-Krasovskii functional", "text": "In this paper, the problem of H\u221e performance and stability analysis for linear systems with interval time-varying delays is considered.
First, by constructing a newly augmented Lyapunov\u2013Krasovskii functional that has not been proposed before, an improved H\u221e performance criterion within the framework of linear matrix inequalities (LMIs) is introduced. Next, the result and method are extended to the problem of a delay-dependent stability criterion. Finally, numerical examples are given to show the superiority of the proposed method."} {"_id": "257a795fa475c98906b319ffdb38d49d54e199f2", "title": "Existence and Stability of Equilibrium of DC Microgrid With Constant Power Loads", "text": "Constant power loads (CPLs) are often the cause of instability and loss of equilibrium in dc microgrids. In this study, we analyze the existence and stability of equilibrium of dc microgrids with CPLs, and sufficient conditions for both are provided. To derive the existence of system equilibrium, we transform the problem of quadratic equation solvability into the existence of a fixed point for an increasing fractional mapping. Then, a sufficient condition based on the Tarski fixed-point theorem is derived. It is less conservative compared with the existing results. Moreover, we adopt the small-signal model to predict the system's qualitative behavior around equilibrium. The analytic conditions of robust stability are determined by analyzing the associated quadratic eigenvalue problem. Overall, the obtained conditions provide references for building reliable dc microgrids. The simulation results verify the correctness of the proposed conditions."} {"_id": "9e2803a17936323ea7fcf2f9b1dd86612b176062", "title": "A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications", "text": "Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offer a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms."} {"_id": "91422d30847919aab2b80a7a14fc1a01f597a8a0", "title": "Multi-Scale Patch-Based Image Restoration", "text": "Many image restoration algorithms in recent years are based on patch processing. The core idea is to decompose the target image into fully overlapping patches, restore each of them separately, and then merge the results by a plain averaging.
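The decompose-restore-average pipeline just described can be sketched in a few lines; `denoise_patch` below is a hypothetical stand-in for whatever per-patch restoration operator is used:

```python
import numpy as np

def patch_average_restore(image, denoise_patch, patch=8, stride=4):
    """Restore a 2-D image by processing overlapping patches independently
    and merging the results by plain averaging (schematic illustration)."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            restored = denoise_patch(image[i:i + patch, j:j + patch])
            out[i:i + patch, j:j + patch] += restored
            weight[i:i + patch, j:j + patch] += 1.0
    return out / np.maximum(weight, 1.0)   # average overlapping contributions
```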
This concept has been demonstrated to be highly effective, often leading to state-of-the-art results in denoising, inpainting, deblurring, segmentation, and other applications. While the above is indeed effective, this approach has one major flaw: the prior is imposed on intermediate (patch) results, rather than on the final outcome, and this is typically manifested by visual artifacts. The expected patch log likelihood (EPLL) method by Zoran and Weiss was conceived for addressing this very problem. Their algorithm imposes the prior on the patches of the final image, which in turn leads to an iterative restoration of diminishing effect. In this paper, we propose to further extend and improve the EPLL by considering a multi-scale prior. Our algorithm imposes the very same prior on different scale patches extracted from the target image. While all the treated patches are of the same size, their footprint in the destination image varies due to subsampling. Our scheme also alleviates another shortcoming of patch-based restoration algorithms: the fact that a local (patch-based) prior serves as a model for a global stochastic phenomenon. We motivate the use of the multi-scale EPLL by restricting ourselves to the simple Gaussian case, comparing the aforementioned algorithms and showing a clear advantage to the proposed method. We then demonstrate our algorithm in the context of image denoising, deblurring, and super-resolution, showing an improvement in performance both visually and quantitatively."} {"_id": "7b7683a2f570cd3a4c8c139f25af2bad8e889dc9", "title": "Virtual machine placement strategies in cloud computing", "text": "The prominent technology that drives the industry nowadays is cloud computing. The growth of cloud computing has resulted in the setup of a large number of data centers around the world. Data centers consume large amounts of power, making them a source of carbon dioxide emissions and a major contributor to the greenhouse effect. This led to the deployment of virtualization. Infrastructure as a Service is one of the important services offered by cloud computing; it allows hardware to be virtualized by creating many instances of Virtual Machines (VMs) on a single Physical Machine (PM) and helps improve resource utilization. VM consolidation includes choosing the most appropriate algorithm for migrating VMs and placing VMs on the most suitable host. VM placement is a part of VM migration. Effective VM placement aims to improve performance and resource utilization and to reduce energy consumption in data centers without SLA violations. This paper focuses on various VM placement schemes."} {"_id": "b891a8df3d7b4a6b73c9de7194f7341b00d93f6f", "title": "Boosting Response Aware Model-Based Collaborative Filtering", "text": "Recommender systems are promising for providing personalized services to users. Collaborative filtering (CF) technologies, making predictions of users' preferences based on users' previous behaviors, have become one of the most successful techniques for building modern recommender systems.
Several challenging issues occur in previously proposed CF methods: (1) most CF methods ignore users' response patterns and may yield biased parameter estimation and suboptimal performance; (2) some CF methods adopt heuristic weight settings, which lack a systematic implementation; and (3) the multinomial mixture models may weaken the computational ability of matrix factorization for generating the data matrix, thus increasing the computational cost of training. To resolve these issues, we incorporate users' response models into the probabilistic matrix factorization (PMF), a popular matrix factorization CF model, to establish the response aware probabilistic matrix factorization (RAPMF) framework. More specifically, we model the user response as a Bernoulli distribution parameterized by the rating scores for the observed ratings, and as a step function for the unobserved ratings. Moreover, we speed up the algorithm by a mini-batch implementation and a crafted scheduling policy. Finally, we design different experimental protocols and conduct systematic empirical evaluation on both synthetic and real-world datasets to demonstrate the merits of the proposed RAPMF and its mini-batch implementation."} {"_id": "2da593f91bd50a4d7a49a3997a0704f4785124de", "title": "Interactive painterly stylization of images, videos and 3D animations", "text": "We introduce a real-time system that converts images, video, or 3D animation sequences to artistic renderings in various painterly styles. The algorithm, which is entirely executed on the GPU, can efficiently process 512 resolution frames containing 60,000 individual strokes at over 30 fps. In order to exploit the parallel nature of GPUs, our algorithm determines the placement of strokes entirely from local pixel neighborhood information. The strokes are rendered as point sprites with textures. Temporal coherence is achieved by treating the brush strokes as particles and moving them based on optical flow. Our system renders high-quality results while allowing the user interactive control over many stylistic parameters such as stroke size, texture and density."} {"_id": "25b63e08ac2a2c4d1dd2f66983a010955ea90192", "title": "Energy-traffic tradeoff cooperative offloading for mobile cloud computing", "text": "This paper presents a quantitative study of the energy-traffic tradeoff problem from the perspective of an entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for the energy-traffic tradeoff, which can ensure fairness of energy consumption across mobile devices, reduce repeated computation, and eliminate redundant Internet data traffic through cooperative execution and the sharing of computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable tradeoff between energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient.
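The role of such a tradeoff coefficient can be illustrated with the generic drift-plus-penalty pattern used in Lyapunov optimization; this is a toy sketch of that general pattern, not the paper's OTS algorithm, and all names and numbers are our assumptions:

```python
def drift_plus_penalty_choice(actions, queue_backlog, V):
    """Pick the action minimizing V * energy + backlog * traffic,
    the standard drift-plus-penalty tradeoff (generic sketch)."""
    return min(actions, key=lambda a: V * a[0] + queue_backlog * a[1])

# One toy scheduling slot: actions are (energy_cost, traffic_cost) pairs.
queue = 3.0
energy, traffic = drift_plus_penalty_choice(
    [(1.0, 0.2), (0.4, 0.9), (0.7, 0.5)], queue, V=2.0)
queue = max(queue + traffic - 0.6, 0.0)   # assumed service rate of 0.6 per slot
```

In this pattern, a larger V favors the penalty term (here, energy) at the cost of more backlogged traffic, while a smaller V does the opposite.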
Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN."} {"_id": "682d8bc7bc07cbfa3beed789afafea7e9702e928", "title": "Global Hypothesis Generation for 6D Object Pose Estimation", "text": "This paper addresses the task of estimating the 6D-pose of a known 3D object from a single RGB-D image. Most modern approaches solve this task in three steps: i) compute local features, ii) generate a pool of pose-hypotheses, iii) select and refine a pose from the pool. This work focuses on the second step. While all existing approaches generate the hypotheses pool via local reasoning, e.g. RANSAC or Hough-Voting, we are the first to show that global reasoning is beneficial at this stage. In particular, we formulate a novel fully-connected Conditional Random Field (CRF) that outputs a very small number of pose-hypotheses. Despite the potential functions of the CRF being non-Gaussian, we give a new, efficient two-step optimization procedure, with some guarantees for optimality. We utilize our global hypotheses generation procedure to produce results that exceed state-of-the-art for the challenging Occluded Object Dataset."} {"_id": "9916ddf04e038c7ca333ff94f8ff613a69778587", "title": "Decision-Making in a Robotic Architecture for Autonomy", "text": "This paper presents an overview of the intelligent decision-making capabilities of the CLARAty robotic architecture for autonomy. CLARAty is a two-layered architecture where the top Decision Layer contains techniques for autonomously creating a plan of robot commands and the bottom Functional Layer provides standard robot capabilities that interface to system hardware. This paper focuses on the Decision Layer organization and capabilities. Specifically, the Decision Layer provides a framework for utilizing AI planning and executive techniques, which provide onboard, autonomous command generation and replanning for planetary rovers. The Decision Layer also provides a flexible interface to the Functional Layer, which can be tailored based on user preferences and domain features. This architecture is currently being tested on several JPL rovers."} {"_id": "08d2858762644d85df9112886f10d3abb6465ccb", "title": "Optimal estimation of the mean function based on discretely sampled functional data: Phase transition", "text": "The problem of estimating the mean of random functions based on discretely sampled data arises naturally in functional data analysis. In this paper, we study optimal estimation of the mean function under both common and independent designs. Minimax rates of convergence are established and easily implementable rate-optimal estimators are introduced. The analysis reveals interesting and different phase transition phenomena in the two cases. Under the common design, the sampling frequency solely determines the optimal rate of convergence when it is relatively small and the sampling frequency has no effect on the optimal rate when it is large. On the other hand, under the independent design, the optimal rate of convergence is determined jointly by the sampling frequency and the number of curves when the sampling frequency is relatively small. When it is large, the sampling frequency has no effect on the optimal rate.
Another interesting contrast between the two settings is that smoothing is necessary under the independent design, while, somewhat surprisingly, it is not essential under the common design."} {"_id": "1e3182738045e85b289a90f2f6f53565f0d6f9ca", "title": "CTrigger: exposing atomicity violation bugs from their hiding places", "text": "Multicore hardware is making concurrent programs pervasive. Unfortunately, concurrent programs are prone to bugs. Among different types of concurrency bugs, atomicity violation bugs are common and important. Existing techniques to detect atomicity violation bugs suffer from one limitation: requiring bugs to manifest during monitored runs, which is an open problem in concurrent program testing.\n This paper makes two contributions. First, it studies the interleaving characteristics of the common practice in concurrent program testing (i.e., running a program over and over) to understand why atomicity violation bugs are hard to expose. Second, it proposes CTrigger to effectively and efficiently expose atomicity violation bugs in large programs. CTrigger focuses on a special type of interleavings (i.e., unserializable interleavings) that are inherently correlated to atomicity violation bugs, and uses trace analysis to systematically identify (likely) feasible unserializable interleavings with low occurrence-probability. CTrigger then uses minimum execution perturbation to exercise low-probability interleavings and expose difficult-to-catch atomicity violations.\n We evaluate CTrigger with real-world atomicity violation bugs from four server/desktop applications (Apache, MySQL, Mozilla, and PBZIP2) and three SPLASH2 applications on 8-core machines. CTrigger efficiently exposes the tested bugs within 1-235 seconds, two to four orders of magnitude faster than stress testing. Without CTrigger, some of these bugs do not manifest even after 7 full days of stress testing. In addition, without deterministic replay support, once a bug is exposed, CTrigger can help programmers reliably reproduce it for diagnosis. Our tested bugs are reproduced by CTrigger mostly within 5 seconds, 300 to over 60000 times faster than stress testing."} {"_id": "1459d4d16088379c3748322ab0835f50300d9a38", "title": "Cross-Domain Visual Matching via Generalized Similarity Measure and Feature Learning", "text": "Cross-domain visual data matching is one of the fundamental problems in many real-world vision tasks, e.g., matching persons across ID photos and surveillance videos. Conventional approaches to this problem usually involve two steps: i) projecting samples from different domains into a common space, and ii) computing (dis-)similarity in this space based on a certain distance. In this paper, we present a novel pairwise similarity measure that advances existing models by i) expanding traditional linear projections into affine transformations and ii) fusing affine Mahalanobis distance and Cosine similarity by a data-driven combination. Moreover, we unify our similarity measure with feature representation learning via deep convolutional neural networks. Specifically, we incorporate the similarity measure matrix into the deep architecture, enabling an end-to-end way of model optimization. We extensively evaluate our generalized similarity model in several challenging cross-domain matching tasks: person re-identification under different views and face verification over different modalities (i.e., faces from still images and videos, older and younger faces, and sketch and photo portraits).
The experimental results demonstrate the superior performance of our model over other state-of-the-art methods."} {"_id": "95167cfafcd1dd7d15c02d7bb21232b90f9682b1", "title": "Design of Bidirectional DC\u2013DC Resonant Converter for Vehicle-to-Grid (V2G) Applications", "text": "In this paper, a detailed design procedure is presented for a bidirectional CLLLC-type resonant converter for a battery charging application. This converter is similar to an LLC-type resonant converter with an extra inductor and capacitor in the secondary side. Soft-switching can be ensured in all switches without additional snubber or clamp circuitry. Because of soft-switching in all switches, very high-frequency operation is possible; thus, the size of the magnetics and the filter capacitors can be made small. To reduce the size and cost of the converter, a CLLC-type resonant network is derived from the original CLLLC-type resonant network. First, in this paper, an equivalent model for the bidirectional converter is derived for the steady-state analysis. Then, the design methodology is presented for the CLLLC-type resonant converter. Design of this converter includes determining the transformer turns ratio, design of the magnetizing inductance based on the zero-voltage switching condition, and design of the resonant inductances and capacitances. Then, the CLLC-type resonant network is derived from the CLLLC-type resonant network. To validate the design procedure, a 3.5-kW converter was designed following the guidelines in the proposed methodology. A prototype was built and tested in the laboratory. Experimental results verified the design procedure presented."} {"_id": "9b16e0fe76bc9a529d07557f44475b4fd851f71f", "title": "Evolutionary Computation Meets Machine Learning: A Survey", "text": "Evolutionary computation (EC) is a kind of optimization methodology inspired by the mechanisms of biological evolution and behaviors of living organisms. In the literature, the term evolutionary algorithms is frequently treated as synonymous with EC. This article surveys research on using ML techniques to enhance EC algorithms. In the framework of an ML-technique enhanced-EC algorithm (MLEC), the main idea is that the EC algorithm has stored ample data about the search space, problem features, and population information during the iterative search process; the ML technique is thus helpful in analyzing these data to enhance the search performance. The paper presents a survey of five categories: ML for population initialization, ML for fitness evaluation and selection, ML for population reproduction and variation, ML for algorithm adaptation, and ML for local search."} {"_id": "135772775121ba60b47b9f2f012e682fe4128761", "title": "Obstruction-Free Synchronization: Double-Ended Queues as an Example", "text": "We introduce obstruction-freedom, a new nonblocking property for shared data structure implementations. This property is strong enough to avoid the problems associated with locks, but it is weaker than previous nonblocking properties\u2014specifically lock-freedom and wait-freedom\u2014allowing greater flexibility in the design of efficient implementations. Obstruction-freedom admits substantially simpler implementations, and we believe that in practice it provides the benefits of wait-free and lock-free implementations.
To illustrate the benefits of obstruction-freedom, we present two obstruction-free CAS-based implementations of double-ended queues (deques); the first is implemented on a linear array, the second on a circular array. To our knowledge, all previous nonblocking deque implementations are based on unrealistic assumptions about hardware support for synchronization, have restricted functionality, or have operations that interfere with operations at the opposite end of the deque even when the deque has many elements in it. Our obstruction-free implementations have none of these drawbacks, and thus suggest that it is much easier to design obstruction-free implementations than lock-free and wait-free ones. We also briefly discuss other obstruction-free data structures and operations that we have implemented."} {"_id": "3ee0eec0553ca719f602b777f2b600a7aa5c31aa", "title": "Scheduling Multithreaded Computations by Work Stealing", "text": "This paper studies the problem of efficiently scheduling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is \u201cwork stealing,\u201d in which processors needing work steal computational threads from other processors. In this paper, we give the first provably good work-stealing scheduler for multithreaded computations with dependencies.\nSpecifically, our analysis shows that the expected time to execute a fully strict computation on P processors using our work-stealing scheduler is T1/P + O(T\u221e), where T1 is the minimum serial execution time of the multithreaded computation and T\u221e is the minimum execution time with an infinite number of processors. Moreover, the space required by the execution is at most S1P, where S1 is the minimum serial space requirement. We also show that the expected total communication of the algorithm is at most O(P T\u221e (1 + nd) Smax), where Smax is the size of the largest activation record of any thread and nd is the maximum number of times that any thread synchronizes with its parent. This communication bound justifies the folk wisdom that work-stealing schedulers are more communication efficient than their work-sharing counterparts. All three of these bounds are existentially optimal to within a constant factor."} {"_id": "42142c121b2dbe48d55e81c2ce198a5639645030", "title": "Linearizability: A Correctness Condition for Concurrent Objects", "text": "A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions.
This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable."} {"_id": "03a00248b7d5e2d89f5337e62c39fad277c66102", "title": "Introduction to Algorithms", "text": "To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a \u201cproblem\u201d is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = \u3008G, u, v, k\u3009 is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder. See Hopcroft and Ullman [156] or Lewis and Papadimitriou [204] for a thorough treatment of the Turing-machine model."} {"_id": "05c34e5fc12aadcbb309b36dc9f0ed309fd2dd50", "title": "An Axiomatic Basis for Computer Programming", "text": "In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics. This involves the elucidation of sets of axioms and rules of inference which can be used in proofs of the properties of computer programs. Examples are given of such axioms and rules, and a formal proof of a simple theorem is displayed. Finally, it is argued that important advantages, both theoretical and practical, may follow from a pursuance of these topics."} {"_id": "55e8fe04a8af5ad6eaeee8383bd9686dff1d0ab3", "title": "Production of first and second generation biofuels : A comprehensive review", "text": "Sustainable economic and industrial growth requires safe, sustainable resources of energy. For the future transition of a sustainable economy to biological raw materials, completely new approaches in research and development, production, and economy are necessary. The \u2018first-generation\u2019 biofuels appear unsustainable because of the potential stress that their production places on food commodities. For organic chemicals and materials, these need to follow a biorefinery model under environmentally sustainable conditions.
Where these operate at present, their product range is largely limited to simple materials (i.e. cellulose, ethanol, and biofuels). Second generation biorefineries need to build on the need for sustainable chemical products through modern and proven green chemical technologies such as bioprocessing, including pyrolysis, Fischer-Tropsch, and other catalytic processes, in order to make more complex molecules and materials on which a future sustainable society will be based. This review focuses on cost-effective technologies and processes to convert biomass into useful liquid biofuels and bioproducts, with particular focus on some biorefinery concepts based on different feedstocks, aiming at the integral utilization of these feedstocks for the production of value-added chemicals."} {"_id": "2c3bc86c5bceced5c1bb989227b099f46f7f478f", "title": "Dissolvable films of silk fibroin for ultrathin conformal bio-integrated electronics.", "text": "Electronics that are capable of intimate, non-invasive integration with the soft, curvilinear surfaces of biological tissues offer important opportunities for diagnosing and treating disease and for improving brain/machine interfaces. This article describes a material strategy for a type of bio-interfaced system that relies on ultrathin electronics supported by bioresorbable substrates of silk fibroin. Mounting such devices on tissue and then allowing the silk to dissolve and resorb initiates a spontaneous, conformal wrapping process driven by capillary forces at the biotic/abiotic interface. Specialized mesh designs and ultrathin forms for the electronics ensure minimal stresses on the tissue and highly conformal coverage, even for complex curvilinear surfaces, as confirmed by experimental and theoretical studies. In vivo, neural mapping experiments on feline animal models illustrate one mode of use for this class of technology. These concepts provide new capabilities for implantable and surgical devices."} {"_id": "2dac046a9303a568ed6253a7132a4e9194b6924c", "title": "FURTHER ANALYSIS OF THE HIPPOCAMPAL AMNESIC SYNDROME : 14-YEAR FOLLOW-UP STUDY OF H . M . *", "text": "The report attempts to delineate certain residual learning capacities of H.M., a young man who became amnesic in 1953 following a bilateral removal in the hippocampal zone. In addition to being able to acquire new motor skills (CORKIN [2]), this patient shows some evidence of perceptual learning. He also achieves some retention of very simple visual and tactual mazes in which the sequence of required turns is short enough to fit into his immediate memory span; even then, the rate of acquisition is extremely slow. These vestigial abilities, which have their occasional parallels in the patient's everyday life, are assessed against the background of his continuing profound amnesia for most on-going events, an amnesia that persists in spite of above-average intelligence and superior performance on many perceptual tasks. THE PRESENT report has three aims. In the first place, it describes the persistent features of a severe amnesic syndrome acquired 14 years ago, following bilateral mesial temporal lobectomy (SCOVILLE [28]). Secondly, the report attempts to give further substance to our previously held belief that the patient's perceptual and intellectual capacities remain intact, as manifested by normal or superior performance on a fairly wide range of experimental tasks.
Thirdly, we are exploring the nature of the memory defect in some detail by trying to discover which learning tasks the patient can master, as compared with those on which he always fails. INTERVAL HISTORY Since the onset of his amnesia in 1953, we have twice had the opportunity of bringing this patient under intensive observation. In 1962, he spent one week at the Montreal Neurological Institute, and most results of these and earlier examinations have already been reported (CORKIN [1]; MILNER [16, 18, 19]). Extensive testing was again carried out in 1966, during a two-week admission to the Clinical Research Center at M.I.T. Findings obtained during that period, supplemented later by visits to the patient's home, form the basis of the present report. * From the Montreal Neurological Institute and from the Psychophysiological Laboratory and Clinical Research Center, M.I.T. This work was supported in part by the Medical Research Council of Canada, and in part by grants to H.-L. TEUBER from the John A. Hartford Foundation, NASA, and the National Institutes of Health (under MH-05673, and a Clinical Center Grant, FR88). The follow-up study of K.M. was supported by United States Public Health Small Grant M5774A to BRENDA MILNER."} {"_id": "7a0714e12dc106ba33a11bc0daa202caf9e1f07d", "title": "Animation of dynamic legged locomotion", "text": "This paper is about the use of control algorithms to animate dynamic legged locomotion. Control could free the animator from specifying the details of joint and limb motion while producing both physically realistic and natural looking results. We implemented computer animations of a biped robot, a quadruped robot, and a kangaroo. Each creature was modeled as a linked set of rigid bodies with compliant actuators at its joints. Control algorithms regulated the running speed, organized use of the legs, and maintained balance. All motions were generated by numerically integrating equations of motion derived from the physical models. The resulting behavior included running at various speeds, traveling with several gaits (run, trot, bound, gallop, and hop), jumping, and traversing simple paths. Whereas the use of control permitted a variety of physically realistic animated behavior to be generated with limited human intervention, the process of designing the control algorithms was not automated: the algorithms were \"tweaked\" and adjusted for each new creature."} {"_id": "3a1d329c2f3c782e574bb9aece9e7b44214f28c6", "title": "A Wideband Dual Circularly Polarized Full-Corporate Waveguide Array Antenna Fed by Triple-Resonant Cavities", "text": "A dual circularly polarized (CP) waveguide array antenna is presented for Ka-band wireless communication. A novel triple-resonant cavity is proposed to implement mode conversion and impedance matching in the $2 \times 2$-element subarray. A ridge waveguide polarizer is integrated to form the left-hand circular polarization (LHCP) and the right-hand circular polarization (RHCP). A flyover-shaped full-corporate feeding network is designed to accommodate the dual polarizations and keep the wideband characteristic for a large-scale array. Subarray spacing is used to optimize the port-to-port isolation and decrease the sidelobe level at some definite angles. Computer numerically controlled machining technology is applied to fabricating a $16 \times 16$-element prototype.
Measured results demonstrate a bandwidth of 16% from 27.6 to 32.4 GHz for the dual polarizations, in terms of good reflection coefficient and axial ratio. A maximum gain of 32.8 dBic is achieved at 28.8 GHz. A total efficiency over 60% is obtained throughout the above band for both LHCP and RHCP ports."} {"_id": "809d484629228c94889c47ffcfd530e8edff17cc", "title": "Microfinance and Poverty : Evidence Using Panel Data from Bangladesh", "text": "Microfinance supports mainly informal activities that often have a low return and low market demand. It may therefore be hypothesized that the aggregate poverty impact of microfinance is modest or even nonexistent. If true, the poverty impact of microfinance observed at the participant level represents either income redistribution or short-run income generation from the microfinance intervention. This article examines the effects of microfinance on poverty reduction at both the participant and the aggregate levels using panel data from Bangladesh. The results suggest that access to microfinance contributes to poverty reduction, especially for female participants, and to overall poverty reduction at the village level. Microfinance thus helps not only poor participants but also the local economy."} {"_id": "5088f57ee6eeacb2b125239870a7f1f5ca3acde2", "title": "Millimeter-Wave Microstrip Array Antenna with High Efficiency for Automotive Radar Systems", "text": "Automotive radar systems utilizing the millimeter-wave band have been developed since they can detect targets even in bad weather. In automotive radar systems, linear polarization inclined at 45 degrees is utilized in order to avoid mutual radio interference between them. The antenna for automotive radar systems is required to have high efficiency, which means high gain for the aperture area determined by the size of the radar sensor, in order to detect targets at long distances. In addition, a low profile is required so as not to spoil the appearance of the car. Ease of manufacturing is also a significant factor for installing automotive radar systems in popular cars. As candidate antennas for automotive radar systems, a slotted waveguide array antenna and a triplate-type array antenna have been studied. These antennas have a low profile and high efficiency."} {"_id": "bc7120aaebcd69113ad8e905a9eebee1f6986e37", "title": "A Calibration Technique for Bang-Bang ADPLLs Using Jitter Distribution Monitoring", "text": "This brief presents a built-in self-calibration (BISC) technique for minimization of the total jitter in bang-bang all-digital phase-locked loops (ADPLLs). It is based on the addition of a monitoring phase-frequency detector (PFD) with tunable delay cells for the reference clock and the divider clock and a counter for this PFD output signal. This allows for on-chip binary comparison of the jitter distribution widths at the ADPLL PFD input, when ADPLL filter parameters are altered. Since only a relative comparison is performed, no accurate delay calibration is required. The statistical properties of this comparison of two random distributions are analyzed theoretically, and guidelines for circuit dimensioning are derived. The proposed method is used for BISC by adaptation of the ADPLL filter coefficients. This allows for jitter minimization under process, voltage and temperature variations as well as gain and period jitter of the digitally controlled oscillator.
The proposed calibration technique is verified by system simulations and measurements of a silicon prototype implementation in 28-nm CMOS technology."} {"_id": "0b3ef771ef086c115443ffd253f40a9f0d1436ac", "title": "Towards Future THz Communications Systems", "text": "Carrier frequencies beyond 300 GHz, belonging to the so-called THz range, have received attention as candidates for future multi-gigabit short-range communication systems. This review paper gives an overview of current issues in the emerging field of THz communications, targeting the delivery of wireless 100 Gbps over short distances. The paper starts by introducing scenarios and applications requiring such high data rates, followed by a discussion of the radio channel characteristics. In the 300 GHz frequency band, the path loss is even more significant than at 60 GHz, and appropriate measures to mitigate effects in non-line-of-sight (NLOS) cases, caused e.g. by the influence of moving people, are required. Advanced antenna techniques like beam forming or beam switching are a prerequisite to guarantee seamless service. In order to consider such techniques in the standards development, the propagation channel operating at these mm- and sub-mm wave bands in realistic environments must be well understood. Therefore, intensive channel measurement and modeling activities have been done at 300 GHz, which started a couple of years ago at Terahertz Communications Lab (TCL). Due to the short wavelength, the influence of the surface roughness of typical building materials plays an important role. Hence, the modeling of rough surface scattering has been one of the main areas in these investigations. In this contribution, the main results of the propagation research activities at TCL are summarized. In the last part of the paper, an overview of the state-of-the-art in technology development and successful demonstrations of data transmission is given, together with a report on the status quo of ongoing activities in standardization at IEEE 802.15 IG THz and the regulation of the spectrum beyond 300 GHz."} {"_id": "9ac5b66036da98f2c1e62c6ca2bdcc075083ef85", "title": "Election Analysis and Prediction of Election Results with Twitter", "text": ""} {"_id": "20070c1778da82394ddb1bc11968920ab94c3112", "title": "A soft hand model for physically-based manipulation of virtual objects", "text": "We developed a new hand model for increasing the robustness of finger-based manipulations of virtual objects. Each phalanx of our hand model consists of a number of deformable soft bodies, which dynamically adapt to the shape of grasped objects based on the applied forces. Stronger forces directly result in larger contact areas, which increase the friction between hand and object as would occur in reality. For a robust collision-based soft body simulation, we extended the lattice-shape matching algorithm to work with adaptive stiffness values, which are dynamically derived from force and velocity thresholds. Our implementation demonstrates that this approach allows very precise and robust grasping, manipulation and releasing of virtual objects and performs in real-time for a variety of complex scenarios.
Additionally, laborious tuning of object and friction parameters is not necessary for the wide range of objects that we typically grasp with our hands."} {"_id": "8ee38ec2d2da62ad96e36c7804b3bbf3a5153ab7", "title": "Stitching Web Tables for Improving Matching Quality", "text": "HTML tables on web pages (\u201cweb tables\u201d) cover a wide variety of topics. Data from web tables can thus be useful for tasks such as knowledge base completion or ad hoc table extension. Before table data can be used for these tasks, the tables must be matched to the respective knowledge base or base table. The challenges of web table matching are the high heterogeneity and the small size of the tables. Though it is known that the majority of web tables are very small, the gold standards that are used to compare web table matching systems mostly consist of larger tables. In this experimental paper, we evaluate T2K Match, a web table to knowledge base matching system, and COMA, a standard schema matching tool, using a sample of web tables that is more realistic than the gold standards that were previously used. We find that both systems fail to produce correct results for many of the very small tables in the sample. As a remedy, we propose to stitch (combine) the tables from each web site into larger ones and match these enlarged tables to the knowledge base or base table afterwards. For this stitching process, we evaluate different schema matching methods in combination with holistic correspondence refinement. Limiting the stitching procedure to web tables from the same web site decreases the heterogeneity and allows us to stitch tables with very high precision. Our experiments show that applying table stitching before running the actual matching method improves the matching results by 0.38 in F1-measure for T2K Match and by 0.14 for COMA. Also, stitching the tables allows us to reduce the amount of tables in our corpus from 5 million original web tables to as few as 100,000 stitched tables."} {"_id": "a42aff222c6c8bc248a0504d063ef7334fe5a177", "title": "Efficient adaptive density estimation per image pixel for the task of background subtraction", "text": "We analyze the computer vision task of pixel-level background subtraction. We present recursive equations that are used to constantly update the parameters of a Gaussian mixture model and to simultaneously select the appropriate number of components for each pixel. We also present a simple non-parametric adaptive density estimation method. The two methods are compared with each other and with some previously proposed algorithms."} {"_id": "82d0595da0a3f391fab5263fee3b0328916dad37", "title": "Automatic Topic Labeling in Asynchronous Conversations", "text": "Asynchronous conversations are conversations where participants collaborate with each other at different times (e.g., email, blog, forum). The huge amount of textual data generated every day in these conversations calls for automated methods of conversational text analysis. Topic segmentation and labeling is often considered a prerequisite for higher-level conversation analysis and has been shown to be useful in many NLP applications including summarization, information extraction and conversation visualization. Topic segmentation in asynchronous conversation refers to the task of clustering the sentences into a set of coherent topical clusters.
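Returning to the adaptive per-pixel density estimation entry above: OpenCV's MOG2 background subtractor follows this family of recursively updated Gaussian mixture models, so a minimal usage sketch looks as follows (the input file name and parameter values are illustrative, not from the paper):

```python
import cv2

# MOG2 maintains a per-pixel Gaussian mixture with a recursively updated
# number of components; detectShadows adds a gray shadow label to the mask.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("input.mp4")   # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 255 = foreground, 0 = background
cap.release()
```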
In [6, 7], we presented unsupervised and supervised topic segmentation models for asynchronous conversations, which to our knowledge are the state-of-the-art. This study concerns topic labeling, that is, given a set of topical clusters in a conversation, the task is to assign appropriate topic labels to the clusters. For example, five topic labels in a Slashdot [2] blog conversation about releasing a new game called Daggerfall are game contents and size, game design, bugs/faults, other gaming options and performance/speed issues. Such topic labels can serve as a concise summary of the conversation and can also be used for indexing. Ideally, topic labels should be meaningful, semantically similar to the underlying topic, general (i.e., broad coverage of the topic) and discriminative (or exclusive) when there are multiple topics [10]. Traditionally, the top K terms in a multinomial topic model (e.g., LDA [3]) are used to represent a topic. However, as pointed out by [10], word-level topic labels may become so general that they impose cognitive difficulties on a user, who must interpret the meaning of the topic by associating the words together. On the other hand, if the labels are expressed at the sentence level, they may become too specific to cover the whole theme of the topic. Based on these observations, [10] and other recent studies (e.g., [9]) advocate for phrase-level topic labels. This is also consistent with the monologue corpora built as part of the Topic Detection and Tracking (TDT) project [1] as well as with our own email and blog conversational corpora, in which human annotators without specific instructions spontaneously generated labels at the phrase level. Considering all these factors, we also treat the phrase level as the right level of granularity for a label in this work. A few prior studies have addressed the topic labeling problem in different settings [10, 9]. Common to their approaches is that they first mine topics in the form of topic-word distributions from the whole corpus using topic models like LDA. Then, they try to label the topics (i.e., topic-word distributions) with an appropriate label using statistical association metrics (e.g., point-wise mutual information, t-test) computed from either the source corpus or an external knowledge base (e.g., Wikipedia). In contrast, our task is to label the topical clusters in a given conversation, where topics are closely related and distributional variations are subtle (e.g., game contents and size, game design). Therefore, corpus-based statistical association metrics are not reliable in our case. Also, at the conversation level, the topics are too specific to find their labels in an external source. To our knowledge, no one has studied this problem before. Therefore, there is no standard corpus and no agreed-upon evaluation metrics available. Our contributions aim to remedy these problems. First, we present blog and email corpora annotated with topics. Second, we propose to generate topic labels using an extractive approach that finds the most representative phrases from the text without relying on an external source. Since graph-based key phrase ranking has proved to be the state-of-the-art (unsupervised) method [11], we adopt the same framework. We propose a novel biased random walk model that exploits the fact that the leading sentences in a topic often carry the most informative clues for its label.
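A biased random walk of this flavor can be sketched as a personalized PageRank over a phrase co-occurrence graph (networkx). The graph construction, bias value and helper names are assumptions for illustration, not the authors' exact model:

```python
import networkx as nx

def rank_phrases(cooccurrence_edges, lead_phrases, bias=0.8):
    """Biased TextRank: a random walk whose teleport distribution prefers
    phrases from the leading sentences of a topic.

    cooccurrence_edges: iterable of (phrase_a, phrase_b, weight) tuples.
    lead_phrases: set of phrases appearing in the topic's leading sentences.
    """
    graph = nx.Graph()
    graph.add_weighted_edges_from(cooccurrence_edges)
    # Teleport mostly to lead phrases; a small uniform mass keeps every node reachable.
    personalization = {
        node: bias if node in lead_phrases else (1.0 - bias)
        for node in graph.nodes
    }
    scores = nx.pagerank(graph, alpha=0.85, personalization=personalization)
    return sorted(scores, key=scores.get, reverse=True)
```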
However, the phrases extracted only by considering the sentences in a topic may ignore the global aspect of the conversation. As another contribution, we propose to re-rank the phrases extracted from the whole conversation with respect to the individual topics and include the relevant ones. Experimental results show that our approach outperforms other ranking models, including a general random walk model (i.e., TextRank) proposed by [11], a lead-oriented model and a frequency-based model, and that including the relevant conversation-level phrases improves the performance."} {"_id": "4e8ff5811686a5c8e45decfb168fd7ecd1cb088a", "title": "A framework for facial surgery simulation", "text": "The accurate prediction of the post-surgical facial shape is of paramount importance for surgical planning in facial surgery. In this paper we present a framework for facial surgery simulation which is based on volumetric finite element modeling. We contrast conventional procedures for surgical planning against our system by accompanying a patient during the entire process of planning, medical treatment and simulation. In various preprocessing steps a 3D physically based facial model is reconstructed from CT and laser range scans. All geometric and topological changes are modeled interactively using Alias\u2122. Applying fully 3D volumetric elasticity allows us to represent important volumetric effects such as incompressibility in a natural and physically accurate way. For computational efficiency, we devised a novel set of prismatic shape functions featuring a globally C1-continuous surface in combination with a C0 interior. Not only is it numerically accurate, but this construction enables us to compute smooth and visually appealing facial shapes."} {"_id": "11b11d5b58f9d9519031b79385d7e0a712390ff9", "title": "Design of an adaptive sliding mode controller for a novel spherical rolling robot", "text": "This paper presents a novel spherical rolling robot and the design procedure of an adaptive sliding mode controller for the proposed robot. To contain the uncertainties, a sliding mode controller is utilized, and simulation tests demonstrate the effectiveness of this method. However, the chattering phenomenon in the robot operation is the main shortcoming of this controller. Hence, in order to obtain a more accurate controller, the sliding mode controller is equipped with an identification method that works in an online manner to identify the exact model of the robot. The provided identification procedure is based on the Recursive Least Squares approach, one of the most promising approaches to model identification. It should also be noted that this adaptive controller identifies the model without taking into account any data from the derived model."} {"_id": "318c86751f018b5d7415dafc58e20c0ce06c68b6", "title": "A Memory Soft Error Measurement on Production Systems", "text": "Memory state can be corrupted by the impact of particles causing single-event upsets (SEUs). Understanding and dealing with these soft (or transient) errors is important for system reliability. Several earlier studies have provided field test measurement results on memory soft error rate, but no results were available for recent production computer systems. We believe the measurement results on real production systems are uniquely valuable due to various environmental effects.
This paper presents methodologies for memory soft error measurement on production systems where the performance impact on existing running applications must be negligible and system administrative control might or might not be available. We conducted measurements in three distinct system environments: a rack-mounted server farm for a popular Internet service (Ask.com search engine), a set of office desktop computers (Univ. of Rochester), and a geographically distributed network testbed (PlanetLab). Our preliminary measurement on over 300 machines for varying multi-month periods finds 2 suspected soft errors. In particular, our result on the Internet servers indicates that, with high probability, the soft error rate is at least two orders of magnitude lower than those reported previously. We provide discussions that attribute the low error rate to several factors in today\u2019s production system environments. As a contrast, our measurement unintentionally discovers permanent (or hard) memory faults on 9 out of 212 Ask.com machines, suggesting the relative commonness of hard memory faults."} {"_id": "bcc709bca17483239fb5e43cfdad62b0f6df9827", "title": "Winding Machine for Automated Production of an Innovative Air-Gap Winding for Lightweight Electric Machines", "text": "This paper presents a newly developed winding machine, which enables an automated production of stator-mounted air-gap windings with meander structure. This structure has very high accuracy requirements. Therefore, automation is realized by the interaction of 15 actuators and a compound construction with 13 degrees of freedom. The programming works with discrete open-loop motion control to generate the kinematics. Above all, a flexible prototype of the winding machine is developed, manufactured, and tested for a motor with an external rotor. Finally, experimental results of the developed automation for air-gap windings with meander structure are presented."} {"_id": "920ca8ae80f659328fe6248b9ba2d40a18c4a9c4", "title": "Classification of malignant melanoma and benign skin lesions: implementation of automatic ABCD rule", "text": "The ABCD (asymmetry, border irregularity, colour and dermoscopic structure) rule of dermoscopy is a scoring method used by dermatologists to quantify dermoscopy findings and effectively separate melanoma from benign lesions. Automatic detection of the ABCD features and separation of benign lesions from melanoma could enable earlier detection of melanoma. In this study, automatic ABCD scoring of dermoscopy lesions is implemented. Pre-processing enables automatic detection of hair using Gabor filters and lesion boundaries using geodesic active contours. Algorithms are implemented to extract the characteristics of the ABCD attributes. Methods used here combine existing methods with novel methods to detect colour asymmetry and dermoscopic structures. To classify lesions as melanoma or benign nevus, the total dermoscopy score is calculated. The experimental results, using 200 dermoscopic images, of which 80 are malignant melanomas and 120 benign lesions, show that the algorithm achieves a sensitivity of 91.25% and a specificity of 95.83%. This is comparable to the 92.8% sensitivity and 90.3% specificity reported for human implementation of the ABCD rule.
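For reference, the total dermoscopy score mentioned in the ABCD entry above combines the four criteria with fixed weights. The weights and cut-offs in this sketch come from the standard published clinical ABCD rule, not from the abstract itself, so treat it as background:

```python
def total_dermoscopy_score(asymmetry, border, colors, structures):
    """Total Dermoscopy Score with the standard clinical ABCD weightings
    (1.3, 0.1, 0.5, 0.5); these weights are from the published ABCD rule,
    not from the abstract above.

    Typical score ranges: asymmetry 0-2, border 0-8, colors 1-6, structures 1-5.
    """
    tds = 1.3 * asymmetry + 0.1 * border + 0.5 * colors + 0.5 * structures
    # Conventional cut-offs: < 4.75 benign, 4.75-5.45 suspicious, > 5.45 melanoma.
    if tds > 5.45:
        return tds, "highly suspicious for melanoma"
    if tds >= 4.75:
        return tds, "suspicious lesion"
    return tds, "likely benign"

print(total_dermoscopy_score(2, 5, 4, 3))  # (6.6, 'highly suspicious for melanoma')
```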
The experimental results show that the extracted features can be used to build a promising classifier for melanoma detection."} {"_id": "9fed98656b42d08e2f7bf2589f4af1d877b2b650", "title": "A Hybrid Framework for News Clustering Based on the DBSCAN-Martingale and LDA", "text": "Nowadays, journalists and media monitoring companies have an important need to cluster news from large amounts of web articles, in order to ensure fast access to their topics or events of interest. Our aim in this work is to identify groups of news articles that share a common topic or event, without a priori knowledge of the number of clusters. The estimation of the correct number of topics is a challenging issue, due to the existence of \u201cnoise\u201d, i.e. news articles which are irrelevant to all other topics. In this context, we introduce a novel density-based news clustering framework, in which the assignment of news articles to topics is done by the well-established Latent Dirichlet Allocation, but the estimation of the number of clusters is performed by our novel method, called \u201cDBSCAN-Martingale\u201d, which allows for extracting noise from the dataset and progressively extracts clusters from an OPTICS reachability plot. We evaluate our framework and the DBSCAN-Martingale on the 20newsgroups-mini dataset and on 220 web news articles, which are references to specific Wikipedia pages. Among twenty methods for news clustering, without knowing the number of clusters k, the framework of DBSCAN-Martingale provides the correct number of clusters and the highest Normalized Mutual Information."} {"_id": "25599e9f11ae6fa901299d44d9a2666c7072af1f", "title": "Overview of ImageCLEFcaption 2017 - Image Caption Prediction and Concept Detection for Biomedical Images", "text": "This paper presents an overview of the ImageCLEF 2017 caption tasks on the analysis of images from the biomedical literature. Two subtasks were proposed to the participants: a concept detection task and a caption prediction task, both using only images as input. The two subtasks tackle the problem of providing image interpretation by extracting concepts and predicting a caption based on the visual information of an image alone. A dataset of 184,000 figure-caption pairs from the biomedical open access literature (PubMed Central) is provided as a testbed, with the majority of them as training data and then 10,000 as validation and 10,000 as test data. Across the two tasks, 11 participating groups submitted 71 runs. While the domain remains challenging and the data highly heterogeneous, we note some surprisingly good results on this difficult task, with a quality that could be beneficial for health applications by better exploiting the visual content of biomedical figures."} {"_id": "286e3064e650ec49b4c2216b4cf27499d3aaceb3", "title": "A Hierarchical Cluster Synchronization Framework of Energy Internet", "text": "The concept of \u201cEnergy Internet\u201d has been recently proposed to improve the efficiency of renewable energy resources. Accordingly, this paper investigates the synchronization of the Energy Internet system. Specifically, an appropriate hierarchical cluster synchronization framework is first developed for the Energy Internet, which includes primary synchronization, secondary synchronization, and tertiary synchronization.
Furthermore, a secondary synchronization strategy (cluster synchronization) is proposed for the synchronization of multiple microgrid clusters in the Energy Internet, which is based on the consensus algorithm of multiagent networks. The detailed synchronization framework and control scheme are presented in this paper. Finally, some simulation results are provided and discussed to validate the performance and effectiveness of the proposed strategy."} {"_id": "35fcbcda34a18ed46399f2922f808e82ba729a55", "title": "Optimal Power Flow by Black Hole Optimization Algorithm", "text": "In this paper, a black hole optimization algorithm (BH) is utilized to solve the optimal power flow problem considering the generation fuel cost, reduction of voltage deviation and improvement of voltage stability as objective functions. The black hole algorithm simulates the black hole phenomenon, which relies on two operations: star absorption and star sucking. The IEEE 30-Bus and IEEE 57-Bus systems are used to illustrate the performance of the proposed algorithm, and results are compared with those in the literature."} {"_id": "8d243d0a43298f39f6bf1b4846b7161b3dfeb697", "title": "Extraction and classification texture of inflammatory cells and nuclei in normal pap smear images", "text": "The presence of inflammatory cells complicates the process of identifying the nuclei in the early detection of cervical cancer. Inflammatory cells need to be eliminated to assist pathologists in reading Pap smear slides. The Grey-Level Run-Length Matrix (GLRLM) textures of inflammatory cells and nuclei are investigated. Inflammatory cells and nuclei have different textures, which can be used to differentiate them. To extract all of the features, manual cropping of inflammatory cells and nuclei first needs to be done. All extracted features have been analyzed and selected by a Decision Tree classifier (J48). Initially, eleven features in the direction of 135\u00b0 are extracted to classify the cropped cells into inflammatory cells and nuclei. Based on the classification rules, the eleven features are then reduced to eight, namely: low gray-level run emphasis, gray-level non-uniformity, run-length non-uniformity, long run low gray-level emphasis, short run high gray-level emphasis, short run low gray-level emphasis, long run high gray-level emphasis and run percentage. The experiment is applied to 957 cells from 50 images, comprising 122 nuclei and 837 inflammatory cells. The proposed algorithm is applied to all of the cells; the resulting sensitivity rate of the classification using these eight texture features shows that some nuclei are still considered as inflammatory cells, in accordance with the difficulties faced by pathologists, while the specificity rate suggests that inflammatory cells are detected properly and few inflammatory cells are considered as nuclei."} {"_id": "f45eb5367bb9fa9a52fd4321a63308a37960e93a", "title": "Development of Autonomous Car\u2014Part II: A Case Study on the Implementation of an Autonomous Driving System Based on Distributed Architecture", "text": "Part I of this paper proposed a development process and a system platform for the development of autonomous cars based on a distributed system architecture.
The proposed development methodology enabled the design and development of an autonomous car with benefits such as a reduction in computational complexity, fault-tolerant characteristics, and system modularity. In this paper (Part II), a case study of the proposed development methodology is addressed by showing the implementation process of an autonomous driving system. In order to describe the implementation process intuitively, core autonomous driving algorithms (localization, perception, planning, vehicle control, and system management) are briefly introduced and applied to the implementation of an autonomous driving system. We are able to examine the advantages of a distributed system architecture and the proposed development process by conducting a case study on the autonomous system implementation. The validity of the proposed methodology is proved through the autonomous car A1 that won the 2012 Autonomous Vehicle Competition in Korea with all missions completed."} {"_id": "03f4141389da40a98517efc02b0fe910db17b8f0", "title": "Digital Image Processing", "text": "Image defects which could be caused by the digitization process or by faults in the imaging set-up (for example, bad lighting) can be corrected using Image Enhancement techniques. Once the image is in good condition, the Measurement Extraction operations can be used to obtain useful information from the image. Course objectives: \u201cIntroduction to Digital Image Processing\u201d is a two-day hands-on course that provides fundamental concepts and techniques for digital image processing and the software principles used in their practical implementation."} {"_id": "773e2eb3682962df329539fab6af2ab4148f3db7", "title": "Real-time computer vision with OpenCV", "text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces."} {"_id": "f9048bac0bd8cb284902eac1baa1bcbc4e610cf3", "title": "Python for Prototyping Computer Vision Applications", "text": "Python is a popular language widely adopted by the scientific community due to its clear syntax and an extensive number of specialized packages. For image processing or computer vision development, two libraries are prominently used: NumPy/SciPy and OpenCV with a Python wrapper. In this paper, we present a comparative evaluation of both libraries, assessing their performance and their usability. We also investigate the performance of OpenCV when accessed through a python wrapper versus directly using the native C implementation."} {"_id": "6181671fc9c9caef9e3d75b56241b202a011c55a", "title": "Python for Scientific Computing", "text": "Python is an excellent \"steering\" language for scientific codes written in other languages. However, with additional basic tools, Python transforms into a high-level language suited for scientific and engineering code that's often fast enough to be immediately useful but also flexible enough to be sped up with additional extensions."} {"_id": "719de8345edd73a78566e1267aeca60c9545e3df", "title": "Image Alignment and Stitching : A Tutorial 1", "text": "This tutorial reviews image alignment and image stitching algorithms. Image alignment algorithms can discover the correspondence relationships among images with varying degrees of overlap. They are ideally suited for applications such as video stabilization, summarization, and the creation of panoramic mosaics.
Image stitching algorithms take the alignment estimates produced by such registration algorithms and blend the images in a seamless manner, taking care to deal with potential problems such as blurring or ghosting caused by parallax and scene movement as well as varying image exposures. This tutorial reviews the basic motion models underlying alignment and stitching algorithms, describes effective direct (pixel-based) and feature-based alignment algorithms, and describes blending algorithms used to produce seamless mosaics. It closes with a discussion of open research problems in the area."} {"_id": "db17a183cb220ae8473bf1b25d62d5ef6fcfeac7", "title": "Substrate-Independent Microwave Components in Substrate Integrated Waveguide Technology for High-Performance Smart Surfaces", "text": "Although all existing air-filled substrate integrated waveguide (AFSIW) topologies yield a substrate-independent electrical performance, they rely on dedicated, expensive laminates to form air-filled regions that contain the electromagnetic fields. This paper proposes a novel substrate-independent AFSIW manufacturing technology, enabling straightforward integration of high-performance microwave components into a wide range of general-purpose commercially available surface materials by means of standard additive (3-D printing) or subtractive (computer numerically controlled milling/laser cutting) manufacturing processes. First, an analytical formula is derived for the effective permittivity and loss tangent of the AFSIW waveguide. This allows the designer to reduce substrate losses to levels typically encountered in high-frequency laminates. Then, several microwave components are designed and fabricated. Measurements of multiple AFSIW waveguides and a four-way power divider/combiner, both relying on a new coaxial-to-air-filled SIW transition, prove that this novel approach yields microwave components suitable for direct integration into everyday surfaces, with low insertion loss, and excellent matching and isolation over the entire [5.15\u20135.85] GHz band. Hence, this innovative approach paves the way for a new generation of cost-effective, high-performance, and invisibly integrated smart surface systems that efficiently exploit the area and the materials available in everyday objects."} {"_id": "7aee5d2bdb153be534685f2598ad1b2e09621ce9", "title": "VIPS: a Vision-based Page Segmentation Algorithm", "text": "A new web content structure analysis based on visual representation is proposed in this paper. Many web applications such as information retrieval, information extraction and automatic page adaptation can benefit from this structure. This paper presents an automatic top-down, tag-tree independent approach to detect web content structure. It simulates how a user understands web layout structure based on his visual perception. Compared to other existing techniques, our approach is independent of the underlying document representation such as HTML and works well even when the HTML structure is far different from the layout structure. Experiments show satisfactory results."} {"_id": "4d2cbdc1291c45ed9d87ad0c37e20967c97172e5", "title": "Modality Effects in Deception Detection and Applications in Automatic-Deception-Detection", "text": "Deception is context-dependent, and modality is an important contextual factor.
In order to build a reliable and flexible tool for automatic-deception-detection (ADD), we investigated the characteristics of verbal cues to deceptive behavior in three modalities: text, audio and face-to-face communication. Seven categories of verbal cues (21 cues) were studied: quantity, complexity, diversity, verb nonimmediacy, uncertainty, specificity and affect. After testing the interaction effects between modality and condition (deception or truth), we found significance only with specificity and observed that differences between deception and truth were in general consistent across the three modalities. However, modality had strong effects on verbal cues. For example, messages delivered face-to-face were largest in quantity (number of words, verbs, and sentences), followed by the audio modality. Text messages were the sparsest. These modality effects are an important factor in building baselines for ADD tools, because they make it possible to adjust the baseline for an unknown modality according to a known one, thereby simplifying the process of ADD. The paper discusses in detail the implications of these findings on modality effects in the three modalities."} {"_id": "b80ba9b52a6286199872c0f892c3b2fa1033ec1e", "title": "Model-based clustering of high-dimensional data: A review", "text": "Model-based clustering is a popular tool which is renowned for its probabilistic foundations and its flexibility. However, high-dimensional data are nowadays more and more frequent and, unfortunately, classical model-based clustering techniques show a disappointing behavior in high-dimensional spaces. This is mainly due to the fact that model-based clustering methods are dramatically over-parametrized in this case. However, high-dimensional spaces have specific characteristics which are useful for clustering and recent techniques exploit those characteristics. After recalling the basics of model-based clustering, this article reviews dimension reduction approaches, regularization-based techniques, parsimonious modeling, subspace clustering methods and clustering methods based on variable selection. Existing software for model-based clustering of high-dimensional data is also reviewed and its practical use is illustrated on real-world data sets."} {"_id": "624da56c523f93172b417291aa331e6a8b8e1442", "title": "Categorical Data Clustering using Frequency and Tf-Idf based Cosine Similarity", "text": "Clustering is the process of grouping a set of physical objects into classes of similar objects. Objects in the real world consist of both numerical and categorical data. Categorical data cannot be analyzed like numerical data due to the lack of an inherent ordering. This paper describes the Frequency and Tf-Idf Based Categorical Data Clustering (FTCDC) technique, which is based on the cosine similarity measure. The FTCDC system consists of four modules: data pre-processing, similarity matrix generation, cluster formation and validation. The system architecture of FTCDC is explained and its performance is examined using a simple example scenario. The performance on real-world data is measured using accuracy and error rate.
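A minimal sketch of the similarity-matrix module of an FTCDC-style pipeline: each record's categorical attribute values are treated as a tiny document, TF-IDF weighted, and compared pairwise by cosine similarity. The function and field names are hypothetical, not taken from the paper:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def categorical_similarity_matrix(records):
    """Encode each record's categorical values as a 'document' and compare
    records by TF-IDF-weighted cosine similarity."""
    docs = [" ".join(f"{attr}={val}" for attr, val in rec.items()) for rec in records]
    vectors = TfidfVectorizer(token_pattern=r"\S+").fit_transform(docs)
    return cosine_similarity(vectors)

records = [
    {"color": "red", "shape": "round"},
    {"color": "red", "shape": "square"},
    {"color": "blue", "shape": "round"},
]
print(np.round(categorical_similarity_matrix(records), 2))
```

Thresholding this matrix is what would drive the cluster-formation step, which is why the abstract stresses the sensitivity of results to the chosen similarity threshold.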
The performance of the system relies heavily on the similarity threshold selected for the clustering process."} {"_id": "545a2844de1a253357403c1fa37561f810bdb553", "title": "Transcription Methods for Consistency, Volume and Efficiency", "text": "This paper describes recent efforts at the Linguistic Data Consortium at the University of Pennsylvania to create manual transcripts as a shared resource for human language technology research and evaluation. Speech recognition and related technologies in particular call for substantial volumes of transcribed speech for use in system development, and for human gold standard references for evaluating performance over time. Over the past several years LDC has developed a number of transcription approaches to support the varied goals of speech technology evaluation programs in multiple languages and genres. We describe each transcription method in detail, and report on the results of a comparative analysis of transcriber consistency and efficiency, for two transcription methods in three languages and five genres. Our findings suggest that transcripts for planned speech are generally more consistent than those for spontaneous speech, and that careful transcription methods result in higher rates of agreement when compared to quick transcription methods. We conclude with a general discussion of factors contributing to transcription quality, efficiency and consistency."} {"_id": "767ad42da2e1dcd79dba2e5822eed65ac2fe1dea", "title": "GraphP: Reducing Communication for PIM-Based Graph Processing with Efficient Data Partition", "text": "Processing-In-Memory (PIM) is an effective technique that reduces data movements by integrating processing units within memory. The recent advances of \u201cbig data\u201d and 3D stacking technology make PIM a practical and viable solution for modern data processing workloads. It is exemplified by the recent research interest in PIM-based acceleration. Among them, TESSERACT is a PIM-enabled parallel graph processing architecture based on Micron\u2019s Hybrid Memory Cube (HMC), one of the most prominent 3D-stacked memory technologies. It implements a Pregel-like vertex-centric programming model, so that users can develop programs in the familiar interface while taking advantage of PIM. Despite the orders-of-magnitude speedup compared to DRAM-based systems, TESSERACT generates excessive cross-cube communication through SerDes links, whose bandwidth is much less than the aggregated local bandwidth of HMCs. Our investigation indicates that this is because of the restricted data organization required by the vertex programming model. In this paper, we argue that a PIM-based graph processing system should take data organization as a first-order design consideration. Following this principle, we propose GraphP, a novel HMC-based software/hardware co-designed graph processing system that drastically reduces communication and energy consumption compared to TESSERACT. GraphP features three key techniques. 1) \u201cSource-cut\u201d partitioning, which fundamentally changes the cross-cube communication from one remote put per cross-cube edge to one update per replica. 2) \u201cTwo-phase Vertex Program\u201d, a programming model designed for the \u201csource-cut\u201d partitioning with two operations: GenUpdate and ApplyUpdate. 3) Hierarchical communication and overlapping, which further improves performance with unique opportunities offered by the proposed partitioning and programming model.
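The effect of "source-cut" partitioning described in the GraphP entry can be illustrated by counting messages: one per cross-cube edge (edge-cut style) versus one update per remote replica of a source vertex. This toy count is an interpretation of the abstract, not GraphP code:

```python
from collections import defaultdict

def source_cut_messages(edges, cube_of):
    """Compare cross-cube traffic: per-edge messages vs. per-replica updates.

    edges: iterable of (src, dst) vertex pairs.
    cube_of: mapping from vertex id to the memory cube hosting it.
    """
    replicas = defaultdict(set)  # source vertex -> remote cubes needing a replica
    for src, dst in edges:
        if cube_of[src] != cube_of[dst]:
            replicas[src].add(cube_of[dst])
    per_edge = sum(1 for s, d in edges if cube_of[s] != cube_of[d])
    per_replica = sum(len(cubes) for cubes in replicas.values())
    return per_edge, per_replica

edges = [(0, 1), (0, 2), (0, 3), (1, 3)]
cube_of = {0: 0, 1: 0, 2: 1, 3: 1}
print(source_cut_messages(edges, cube_of))  # (3, 2): 3 cross-cube edges, 2 replica updates
```

The gap widens on real graphs, where high-degree vertices have many cross-cube edges that all collapse into a single update per remote replica.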
We evaluate GraphP using a cycle-accurate simulator with 5 real-world graphs and 4 algorithms. The results show that it provides on average a 1.7\u00d7 speedup and 89% energy saving compared to TESSERACT."} {"_id": "41ef0f5baf35793ac780885d5186d65d12f91df9", "title": "Architectural optimizations for high performance and energy efficient Smith-Waterman implementation on FPGAs using OpenCL", "text": "Smith-Waterman is a dynamic programming algorithm that plays a key role in the modern genomics pipeline as it is guaranteed to find the optimal local alignment between two strings of data. The state of the art presents many hardware acceleration solutions that have been implemented in order to exploit the high degree of parallelism available in this algorithm. The majority of these implementations use heuristics to increase the performance of the system at the expense of the accuracy of the result. In this work, we present an implementation of the pure version of the algorithm. We include the key architectural optimizations to achieve the highest possible performance for a given platform and leverage the Berkeley roofline model to track the performance and guide the optimizations. To achieve scalability, our custom design comprises systolic arrays, data compression features and shift registers, while a custom port mapping strategy aims to maximize performance. Our designs are built leveraging an OpenCL-based design entry, namely Xilinx SDAccel, in conjunction with a Xilinx Virtex 7 and Kintex Ultrascale platform. Our final design achieves a performance of 42.47 GCUPS (giga cell updates per second) with an energy efficiency of 1.6988 GCUPS/W. This represents an improvement of 1.72x in performance and energy efficiency over previously published FPGA implementations and an 8.49x improvement in energy efficiency over comparable GPU implementations."} {"_id": "338557d69772b2018793e026cda7afa091a638dd", "title": "An efficient Bayesian inference framework for coalescent-based nonparametric phylodynamics", "text": "MOTIVATION\nThe field of phylodynamics focuses on the problem of reconstructing population size dynamics over time using current genetic samples taken from the population of interest. This technique has been extensively used in many areas of biology but is particularly useful for studying the spread of quickly evolving infectious disease agents, e.g. influenza virus. Phylodynamic inference uses a coalescent model that defines a probability density for the genealogy of randomly sampled individuals from the population. When we assume that such a genealogy is known, the coalescent model, equipped with a Gaussian process prior on the population size trajectory, allows for nonparametric Bayesian estimation of population size dynamics. Although this approach is quite powerful, large datasets collected during infectious disease surveillance challenge the state-of-the-art of Bayesian phylodynamics and demand inferential methods with relatively low computational cost.\n\n\nRESULTS\nTo satisfy this demand, we provide a computationally efficient Bayesian inference framework based on Hamiltonian Monte Carlo for coalescent process models. Moreover, we show that by splitting the Hamiltonian function, we can further improve the efficiency of this approach.
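As background for the Smith-Waterman entry above: the pure (heuristic-free) algorithm fills a dynamic-programming matrix with the recurrence sketched below. The scoring parameters here are illustrative defaults, not the paper's configuration:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score via the Smith-Waterman recurrence:
    H[i][j] = max(0, H[i-1][j-1] + subst, H[i-1][j] + gap, H[i][j-1] + gap)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            subst = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + subst,  # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,        # gap in b
                          H[i][j - 1] + gap)        # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("TGTTACGG", "GGTTGACTA"))
```

Every cell on an anti-diagonal depends only on the previous two anti-diagonals, which is exactly the parallelism the systolic-array FPGA designs exploit.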
Using several simulated and real datasets, we show that our method provides accurate estimates of population size dynamics and is substantially faster than alternative methods based on the elliptical slice sampler and the Metropolis-adjusted Langevin algorithm.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe R code for all simulation studies and real data analysis conducted in this article is publicly available at http://www.ics.uci.edu/\u223cslan/lanzi/CODES.html and in the R package phylodyn available at https://github.com/mdkarcher/phylodyn.\n\n\nCONTACT\nS.Lan@warwick.ac.uk or babaks@uci.edu\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."} {"_id": "b1ce62b6831c93cb4c8d16ebf4c8614235cb3f38", "title": "Multiuser Millimeter Wave Communications With Nonorthogonal Beams", "text": "Recently, millimeter-wave (mmWave) and even terahertz wireless (with higher frequency) networks have attracted significant research interest as alternatives to support the formidable growth of wireless communication services. Normally, directional beamforming (BF) is shown as a promising technique to compensate for the high path loss. We focus on mmWave communications and assume that both mmWave base stations (MBSs) and user equipments (UEs) can support directional BF. As the mmWave spectrum has short wavelengths, massive antenna arrays can be deployed at MBSs to form multiple directional beams through BF training. Then, an MBS can transmit simultaneously to multiple UEs (SUEs) with different beams in the networks. However, the beams that serve different SUEs may transmit (almost) in the same path, especially when SUEs are densely distributed. Thus, the beams are not perfectly orthogonal. Due to the leakage of transmission power, the interference among these beams may be severe. To address this problem, the MBS could typically serve these SUEs in time division multiplexing, but this degrades the spectral efficiency. In this context, we investigate the effect of nonorthogonal beam interference and then propose two novel solutions (i.e., dynamic beam switching and static beam selection) to coordinate the transmitting beams effectively. Then, an improved downlink multiuser simultaneous transmission scheme is introduced. In the scheme, an MBS can serve multiple SUEs simultaneously with multiple orthogonal and/or nonorthogonal beams to guarantee SUEs\u2019 Quality of Service. The theoretical and numerical results have shown that our scheme can largely improve the achievable rate and, meanwhile, can serve many SUEs simultaneously."} {"_id": "fbac6141263b54f1009df931857d249a1299fa27", "title": "An Adaptive Privacy Protection Method for Smart Home Environments Using Supervised Learning", "text": "In recent years, smart home technologies have started to be widely used, bringing a great deal of convenience to people\u2019s daily lives. At the same time, privacy issues have become particularly prominent. Traditional encryption methods can no longer meet the needs of privacy protection in smart home applications, since attacks can be launched even without the need for access to the cipher. Rather, attacks can be successfully realized through analyzing the frequency of radio signals, as well as the timestamp series, so that the daily activities of the residents in the smart home can be learnt. Such types of attacks can achieve a very high success rate, making them a great threat to users\u2019 privacy.
In this paper, we propose an adaptive method based on sample data analysis and supervised learning (SDASL) to hide the patterns of residents\u2019 daily routines while adapting to dynamically changing network loads. Compared to some existing solutions, our proposed method exhibits advantages such as low energy consumption, low latency, strong adaptability, and effective privacy protection."} {"_id": "837c8d073dab369f284bf297682c44602d749a10", "title": "65GHz Doppler Sensor with On-Chip Antenna in 0.18\u03bcm SiGe BiCMOS", "text": "A single-chip 65GHz Doppler radar transceiver with on-chip patch antenna is reported. Implemented in a production 0.18\u03bcm SiGe BiCMOS process, it features a differential output transmit power of 4.3dBm, 16.5dB single-ended down-conversion gain and a double-sideband noise figure of 12.8dB. The radar includes a 65GHz 2-stage cascode LNA with S11 < -15dB at 50-94GHz and 14dB gain at 65GHz, a double-balanced down-convert mixer, a SiGe HBT IF amplifier with low 1/f noise, a VCO and a 65GHz output buffer. The LO is provided by an integrated varactor-tuned 61-67GHz VCO optimized for low phase noise. The patch antenna is designed to be impedance-matched for dual-band operation at 62.8 and 64.6GHz. The use of lumped inductors in all blocks and a vertically-stacked transformer for single-ended to differential conversion in the receive path help reduce the transceiver area to 1mm \u00d7 1mm"} {"_id": "4575716ba329ca8332787427dc19b3f6af5ca35e", "title": "In-class distractions: The role of Facebook and the primary learning task", "text": "While laptops and other Internet accessible technologies facilitate student learning in the classroom, they also increase opportunities for interruptions from off-task social networking sites such as Facebook (FB). A small number of correlational studies have suggested that FB has a detrimental effect on learning performance; however, these studies neglected to investigate student engagement in the primary learning task, how this affects task-switching to goal-irrelevant FB intrusions (distractions), and how purposeful deployment of attention to FB (goal-relevant interruptions) affects lecture comprehension on such tasks. This experiment fills a gap in the literature by manipulating lecture interest-value while controlling for duration of FB exposure, time of interruption, FB material and the order of FB posts. One hundred and fifty participants were randomly allocated to one of six conditions: (A) no FB intrusions, high-interest (HI) lecture; (B) no FB intrusions, low-interest (LI) lecture (C) goal-relevant FB intrusions, HI lecture (D) goal-relevant FB intrusions, LI lecture (E) goal-irrelevant FB intrusions, HI lecture (F) goal-irrelevant FB intrusions, LI lecture. As predicted, participants were more susceptible to FB distractions when the primary learning task was of low interest. The study also found that goal-relevant FB intrusions significantly reduced HI lecture comprehension compared to the control condition (A). The results highlight the need for resources that will help educators increase student engagement with their learning task. Implications for future research are discussed.
"} {"_id": "8216673632b897ec50db06358b77f13ddd432c47", "title": "Guidelines for human electromyographic research.", "text": ""} {"_id": "6869d0101515fd56b52e4370db52c30d1d279df6", "title": "Design and experimental validation of HyTAQ, a Hybrid Terrestrial and Aerial Quadrotor", "text": "This paper details the design, modeling, and experimental validation of a novel mobile robot capable of both aerial and terrestrial locomotion. Flight is achieved through a quadrotor configuration; four actuators provide the required thrust. Adding a rolling cage to the quadrotor makes terrestrial locomotion possible using the same actuator set and control system. Thus, neither the mass nor the system complexity is increased by inclusion of separate actuators for terrestrial and aerial locomotion. An analysis of the system's energy consumption demonstrates that during terrestrial locomotion, the robot only needs to overcome rolling resistance and consumes much less energy compared to the aerial mode. This solves one of the most vexing problems of quadrotors and rotorcraft in general - their short operation time. Experimental results show that the hybrid robot can travel a distance four times greater and operate almost six times longer than an aerial only system. It also solves one of the most challenging problems in terrestrial robot design - obstacle avoidance. When an obstacle is encountered, the system simply flies over it."} {"_id": "1d973294d8ed400806e728196b1b3d6ab4002dea", "title": "Inference and Analysis of Population Structure Using Genetic Data and Network Theory.", "text": "Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition's modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses.
The approach presented here provides a novel, computationally efficient, model-free method for inference about population structure that does not entail assumptions of prior conditions. The method is implemented in the software NetStruct (available at https://giligreenbaum.wordpress.com/software/)."} {"_id": "fb6281592efee6d356982137cf4579540881debf", "title": "A comprehensive review of firefly algorithms", "text": "The firefly algorithm has become an increasingly important tool of Swarm Intelligence that has been applied in almost all areas of optimization, as well as engineering practice. Many problems from various areas have been successfully solved using the firefly algorithm and its variants. In order to use the algorithm to solve diverse problems, the original firefly algorithm needs to be modified or hybridized. This paper carries out a comprehensive review of this living and evolving discipline of Swarm Intelligence, in order to show that the firefly algorithm could be applied to every problem arising in practice. On the other hand, it encourages new researchers and algorithm developers to use this simple and yet very efficient algorithm for problem solving. It often guarantees that the obtained results will meet expectations."} {"_id": "05eef019bac01e6520526510c2590cc1718f7fe6", "title": "Third-wave livestreaming: teens' long form selfie", "text": "Mobile livestreaming is now well into its third wave. From early systems such as Bambuser and Qik, to more popular apps Meerkat and Periscope, to today's integrated social streaming features in Facebook and Instagram, both technology and usage have changed dramatically. In this latest phase of livestreaming, cameras turn inward to focus on the streamer, instead of outwards on the surroundings. Teens are increasingly using these platforms to entertain friends, meet new people, and connect with others on shared interests. We studied teens' livestreaming behaviors and motivations on these new platforms through a survey completed by 2,247 American livestreamers and interviews with 20 teens, highlighting changing practices, teens' differences from the broader population, and implications for designing new livestreaming services."} {"_id": "40519f6c6732791339f29cf676cb4c841613a3eb", "title": "State-space solutions to standard H2 and H\u221e control problems", "text": "Simple state-space formulas are presented for a controller solving a standard H\u221e-problem. The controller has the same state-dimension as the plant, its computation involves only two Riccati equations, and it has a separation structure reminiscent of classical LQG (i.e., H2) theory. This paper is also intended to be of tutorial value, so a standard H2-solution is developed in parallel."} {"_id": "40bd61c220ec5d098ab6a10285249255c45421be", "title": "What Makes a Great Software Engineer?", "text": "Good software engineers are essential to the creation of good software. However, most of what we know about software-engineering expertise are vague stereotypes, such as 'excellent communicators' and 'great teammates'. The lack of specificity in our understanding hinders researchers from reasoning about them, employers from identifying them, and young engineers from becoming them. Our understanding also lacks breadth: what are all the distinguishing attributes of great engineers (technical expertise and beyond)?
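Returning briefly to the firefly-algorithm review above: the canonical movement rule that most variants modify or hybridize is a distance-weighted attraction step plus a random perturbation. The sketch below uses the textbook form of the update; the parameter values are illustrative, not drawn from the review:

```python
import math
import random

def firefly_step(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """One move of the standard firefly update: firefly i is attracted toward
    a brighter firefly j, with attractiveness decaying over distance.

    x_i, x_j: coordinate lists of the two fireflies.
    """
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with squared distance
    return [
        a + beta * (b - a) + alpha * (random.random() - 0.5)
        for a, b in zip(x_i, x_j)
    ]
```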
We took a first step in addressing these gaps by interviewing 59 experienced engineers across 13 divisions at Microsoft, uncovering 53 attributes of great engineers. We explain the attributes and examine how the most salient of these impact projects and teams. We discuss implications of this knowledge on research and the hiring and training of engineers."} {"_id": "bc69383a7d46cbaf80b5b5ef902a3dccf23df696", "title": "TF.Learn: TensorFlow's High-level Module for Distributed Machine Learning", "text": "TF.Learn is a high-level Python module for distributed machine learning inside TensorFlow (Abadi et al., 2015). It provides an easy-to-use Scikit-learn (Pedregosa et al., 2011) style interface to simplify the process of creating, configuring, training, evaluating, and experimenting with a machine learning model. TF.Learn integrates a wide range of state-of-the-art machine learning algorithms built on top of TensorFlow\u2019s low-level APIs for small to large-scale supervised and unsupervised problems. This module focuses on bringing machine learning to non-specialists using a general-purpose high-level language as well as researchers who want to implement, benchmark, and compare their new methods in a structured environment. Emphasis is put on ease of use, performance, documentation, and API consistency."} {"_id": "7e27a053c720d58d78ab32e236ee185a90c61b2a", "title": "Drunk person identification using thermal infrared images", "text": "Drunk person identification is carried out using thermal infrared images. The features used for this purpose are simply the pixel values at specific points on the face of the person. It is shown that, for a drunk person, the corresponding cluster in the feature space moves far away from its original position for the sober person. The concept behind the proposed approach is based on physiology-based face identification. For demonstration purposes, the Fisher Linear Discriminant approach is used for space dimensionality reduction. The feature space is found to be of very low dimensionality."} {"_id": "ac8db14cbc7ad0119d0130e88f98ccb3ec61780f", "title": "Big Data , Digital Media , and Computational Social Science : Possibilities and Perils", "text": "We live life in the network. We check our e-mails regularly, make mobile phone calls from almost any location ... make purchases with credit cards ... [and] maintain friendships through online social networks. ... These transactions leave digital traces that can be compiled into comprehensive pictures of both individual and group behavior, with the potential to transform our understanding of our lives, organizations, and societies."} {"_id": "0817e3f73342ce3c6c5a5739878dd1aeea6adec7", "title": "A combinatorial strongly polynomial algorithm for minimizing submodular functions", "text": "This paper presents a combinatorial polynomial-time algorithm for minimizing submodular functions, answering an open question posed in 1981 by Gr\u00f6tschel, Lov\u00e1sz, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest absolute function value.
The paper also presents a strongly polynomial version in which the number of steps is bounded by a polynomial in the size of the underlying set, independent of the function values."} {"_id": "08c30bbfb9ff90884f9d1f873a1eeb6bb616e761", "title": "The combinatorial assignment problem: approximate competitive equilibrium from equal incomes", "text": "Impossibility theorems suggest that the only efficient and strategyproof mechanisms for the problem of combinatorial assignment - e.g., assigning schedules of courses to students - are dictatorships. Dictatorships are mostly rejected as unfair: for any two agents, one chooses all their objects before the other chooses any. Any solution will involve compromise amongst efficiency, incentive and fairness considerations.\n This paper proposes a solution to the combinatorial assignment problem. It is developed in four steps. First, I propose two new criteria of outcome fairness, the maximin share guarantee and envy bounded by a single good, which weaken well-known criteria to accommodate indivisibilities; the criteria formalize why dictatorships are unfair. Second, I prove existence of an approximation to Competitive Equilibrium from Equal Incomes in which (i) incomes are unequal but arbitrarily close together; (ii) the market clears with error, which approaches zero in the limit and is small for realistic problems. Third, I show that this Approximate CEEI satisfies the fairness criteria. Last, I define a mechanism based on Approximate CEEI that is strategyproof for the zero-measure agents economists traditionally regard as price takers. The proposed mechanism is calibrated on real data and is compared to alternatives from theory and practice: all other known mechanisms are either manipulable by zero-measure agents or unfair ex-post, and most are both manipulable and unfair."} {"_id": "10bd0bab60e38e8052e167e3f7379ea0aeade2e4", "title": "On approximately fair allocations of indivisible goods", "text": "We study the problem of fairly allocating a set of indivisible goods to a set of people from an algorithmic perspective. Fair division has been a central topic in the economic literature and several concepts of fairness have been suggested. The criterion that we focus on is envy-freeness. In our model, a monotone utility function is associated with every player specifying the value of each subset of the goods for the player. An allocation is envy-free if every player prefers her own share to the share of any other player. When the goods are divisible, envy-free allocations always exist. In the presence of indivisibilities, we show that there exist allocations in which the envy is bounded by the maximum marginal utility, and present a simple algorithm for computing such allocations. We then look at the optimization problem of finding an allocation with minimum possible envy. In the general case the problem is not solvable or approximable in polynomial time unless P = NP. We consider natural special cases (e.g. additive utilities) which are closely related to a class of job scheduling problems. Approximation algorithms as well as inapproximability results are obtained. Finally we investigate the problem of designing truthful mechanisms for producing allocations with bounded envy."} {"_id": "2fd40688614f94dbecba7ef06dc37d41473328ed", "title": "Allocating indivisible goods", "text": "The problem of allocating divisible goods has enjoyed a lot of attention in both mathematics (e.g. the cake-cutting problem) and economics (e.g. market equilibria).
On the other hand, the natural case of indivisible goods has been somewhat neglected, perhaps because of its more complicated nature. In this work we study a fairness criterion, called the Max-Min Fairness problem, for k players who want to allocate among themselves m indivisible goods. Each player has a specified valuation function on the subsets of the goods and the goal is to split the goods between the players so as to maximize the minimum valuation. Viewing the problem from a game theoretic perspective, we show that for two players and additive valuations the expected minimum of the (randomized) cut-and-choose mechanism is a 1/2-approximation of the optimum. To complement this result we show that no truthful mechanism can compute the exact optimum. We also consider the algorithmic perspective when the (true) additive valuation functions are part of the input. We present a simple 1/(m - k + 1) approximation algorithm which allocates to every player at least a 1/k fraction of the value of all but the k - 1 heaviest items. We also give an algorithm with additive error against the fractional optimum bounded by the value of the largest item. The two approximation algorithms are incomparable in the sense that there exist instances when one outperforms the other."} {"_id": "7d2c7748359f57c2b4227b31eca9e5f7a70a6b5c", "title": "A polynomial-time approximation scheme for maximizing the minimum machine completion time", "text": ""} {"_id": "0d1fd04c0dec97bd0b1c4deeba21b8833f792651", "title": "Design Considerations and Performance Evaluation of Single-Stage TAIPEI Rectifier for HVDC Distribution Applications", "text": "Design considerations and performance evaluations of a three-phase, four-switch, single-stage, isolated zero-voltage-switching (ZVS) rectifier are presented. The circuit is obtained by integrating the three-phase, two-switch, ZVS, discontinuous-current-mode (DCM), boost power-factor-correction (PFC) rectifier, named for short the TAIPEI rectifier, with the ZVS full-bridge (FB) phase-shift dc/dc converter. The performance was evaluated on a three-phase 2.7-kW prototype designed for HVDC distribution applications with the line-to-line voltage range from 180 VRMS to 264 VRMS and with a tightly regulated variable dc output voltage from 200 V to 300 V. The prototype operates with ZVS over the entire input-voltage and load-current range and achieves less than 5% input-current THD with the efficiency in the 95% range."} {"_id": "48451a29f8627b2c7fa1d27e6c7a8c61dc5fda7c", "title": "The induction of apoptosis and autophagy by Wasabia japonica extract in colon cancer.", "text": "PURPOSE\nWasabia japonica (wasabi) has been shown to exhibit properties of detoxification, anti-inflammation and the induction of apoptosis in cancer cells. This study aimed to investigate the molecular mechanism of the cytotoxicity of wasabi extract (WE) in colon cancer cells to evaluate the potential of wasabi as a functional food for chemoprevention.\n\n\nMETHODS\nColo 205 cells were treated with different doses of WE, and the cytotoxicity was analyzed by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide.
Apoptosis and autophagy were detected by 4',6-diamidino-2-phenylindole, 5,5',6,6'-tetrachloro-1,1',3,3'-tetraethyl-imidacarbocyanine iodide and staining for acidic vesicular organelles (AVOs), along with Western blotting.\n\n\nRESULTS\nThe results demonstrated that WE induced the extrinsic pathway and mitochondrial death machinery through the activation of TNF-\u03b1, Fas-L, caspases, truncated Bid and cytochrome C. WE also induced autophagy by decreasing the phosphorylation of Akt and mTOR and promoting the expression of microtubule-associated protein 1 light chain 3-II and AVO formation. An in vivo xenograft model verified that tumor growth was delayed by WE treatment.\n\n\nCONCLUSION\nOur studies revealed that WE exhibits anti-colon cancer properties through the induction of apoptosis and autophagy. These results provide support for the application of WE as a chemopreventive functional food and as a prospective treatment of colon cancer."} {"_id": "304412a849e6ec260f15ad42d8205c43bcdea54f", "title": "Detecting Image Splicing using Geometry Invariants and Camera Characteristics Consistency", "text": "Recent advances in computer technology have made digital image tampering more and more common. In this paper, we propose an authentic vs. spliced image classification method making use of geometry invariants in a semi-automatic manner. For a given image, we identify suspicious splicing areas, compute the geometry invariants from the pixels within each region, and then estimate the camera response function (CRF) from these geometry invariants. The cross-fitting errors are fed into a statistical classifier. Experiments show a very promising accuracy, 87%, over a large data set of 363 natural and spliced images. To the best of our knowledge, this is the first work detecting image splicing by verifying camera characteristic consistency from a single-channel image."} {"_id": "a2c1176eccf51cd3d30d46757d44fcf7f6fcb160", "title": "Internet Reflections on Teenagers", "text": "The Internet is so much a part of today's culture that many young people cannot imagine what the world was like before it existed. The Internet is fun, informative and a great source of communication with others. It is an educational tool, and users can learn about almost anything. Sharing information through the Internet is easy, cheap and fast. Young people have access to billions of websites containing information in the form of text, pictures and videos [1-10]."} {"_id": "9bd6bff7c3eae7ec6da8ed7aeb70491bcd40177e", "title": "Optimal Mass Transport for Registration and Warping", "text": "Image registration is the process of establishing a common geometric reference frame between two or more image data sets possibly taken at different times. In this paper we present a method for computing elastic registration and warping maps based on the Monge\u2013Kantorovich theory of optimal mass transport. This mass transport method has a number of important characteristics. First, it is parameter free. Moreover, it utilizes all of the grayscale data in both images, places the two images on equal footing and is symmetrical: the optimal mapping from image A to image B being the inverse of the optimal mapping from B to A. The method does not require that landmarks be specified, and the minimizer of the distance functional involved is unique; there are no other local minimizers. Finally, optimal transport naturally takes into account changes in density that result from changes in area or volume. 
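The registration method above solves the L2 Monge-Kantorovich problem with a PDE approach; as a much simpler intuition pump (not the paper's algorithm), the 1-D L1 transport distance reduces to the area between cumulative distributions:

```python
import numpy as np

# Toy 1-D "Earth Mover's Distance" between two histograms on shared bins:
# in one dimension the L1 transport cost is the area between the CDFs.
def emd_1d(p, q):
    p = np.asarray(p, float) / np.sum(p)   # normalize to equal total mass
    q = np.asarray(q, float) / np.sum(q)
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

print(emd_1d([0, 1, 0, 0], [0, 0, 0, 1]))  # 2.0: one unit of mass moved 2 bins
```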
Although the optimal transport method is certainly not appropriate for all registration and warping problems, this mass preservation property makes the Monge\u2013Kantorovich approach quite useful for an interesting class of warping problems, as we show in this paper. Our method for finding the registration mapping is based on a partial differential equation approach to the minimization of the L2 Kantorovich\u2013Wasserstein or \u201cEarth Mover's Distance\u201d under a mass preservation constraint. We show how this approach leads to practical algorithms, and demonstrate our method with a number of examples, including those from the medical field. We also extend this method to take into account changes in intensity, and show that it is well suited for applications such as image morphing."} {"_id": "7e8668156c88eb77f5d397522c02fe528db333c3", "title": "Formation and suppression of acoustic memories during human sleep", "text": "Sleep and memory are deeply related, but the nature of the neuroplastic processes induced by sleep remains unclear. Here, we report that memory traces can be either formed or suppressed during sleep, depending on sleep phase. We played samples of acoustic noise to sleeping human listeners. Repeated exposure to a novel noise during Rapid Eye Movement (REM) or light non-REM (NREM) sleep leads to improvements in behavioral performance upon awakening. Strikingly, the same exposure during deep NREM sleep leads to impaired performance upon awakening. Electroencephalographic markers of learning extracted during sleep confirm a dissociation between sleep facilitating memory formation (light NREM and REM sleep) and sleep suppressing learning (deep NREM sleep). We can trace these neural changes back to transient sleep events, such as spindles for memory facilitation and slow waves for suppression. Thus, highly selective memory processes are active during human sleep, with intertwined episodes of facilitative and suppressive plasticity. Though memory and sleep are related, it is still unclear whether new memories can be formed during sleep. Here, authors show that people could learn new sounds during REM or light non-REM sleep, but that learning was suppressed when sounds were played during deep NREM sleep."} {"_id": "1045cc9599944e0088ccc6f6c13ee9f960ecffd7", "title": "Shape Segmentation by Approximate Convexity Analysis", "text": "We present a shape segmentation method for complete and incomplete shapes. The key idea is to directly optimize the decomposition based on a characterization of the expected geometry of a part in a shape. Rather than setting the number of parts in advance, we search for the smallest number of parts that admit the geometric characterization of the parts. The segmentation is based on an intermediate-level analysis, where first the shape is decomposed into approximate convex components, which are then merged into consistent parts based on a nonlocal geometric signature. Our method is designed to handle incomplete shapes, represented by point clouds. We show segmentation results on shapes acquired by a range scanner, and an analysis of the robustness of our method to missing regions. 
Moreover, our method yields results that are comparable to state-of-the-art techniques evaluated on complete shapes."} {"_id": "29c386498f174f4d20a326f8c779840ba88d394f", "title": "A Survey Paper : Areas , Techniques and Challenges of Opinion Mining", "text": "Opinion Mining is a promising discipline, defined as an intersection of information retrieval and computational linguistic techniques to deal with the opinions expressed in a document. The field aims at solving problems related to opinions about products, politicians in newsgroup posts, review sites, comments on Facebook posts, Twitter, etc. This paper covers the techniques, applications, long- and short-term research areas, research gaps and future challenges in opinion mining. Further, an attempt has been made to discuss in detail the use of supervised, unsupervised, machine learning and case-based reasoning techniques in opinion mining to perform computational treatment of sentiments."} {"_id": "b8358032fb3124dce867b42aabb5c532607e9d1e", "title": "A Study of Influencing Factors for Repurchase Intention in Internet Shopping Malls", "text": "This research studied the effect of 15 variables on the consumers' overall satisfaction and repurchase intention in Internet shopping malls. As the value of loyal customers is incomparably high in Electronic Commerce, winning customers\u2019 loyalty is vital to the success of shopping malls. In this study, a customer is defined as one who has purchased goods or services at least once from Internet shopping malls. The research variables are demographic factors, product perceptions, customer service, perceived ease of use, site image, promotion, perceived consumer risk, personal characteristics and Internet communications environments. The outcome of the research is as follows: Perceived consumer risk shows a negative relationship with the repurchase intention, and all the other variables (product perceptions, customer service, perceived ease of use, site image, promotion, and communications environments) are positively related with the repurchase intention. Also, the overall satisfaction level of customers for the Internet shopping malls positively influences repurchase intention."} {"_id": "24dd60ec6fbb2a25f6dd0c1955b3b0e0b29c7d42", "title": "Adding Chinese Captions to Images", "text": "This paper extends research on automated image captioning in the dimension of language, studying how to generate Chinese sentence descriptions for unlabeled images. To evaluate image captioning in this novel context, we present Flickr8k-CN, a bilingual extension of the popular Flickr8k set. The new multimedia dataset can be used to quantitatively assess the performance of Chinese captioning and English-Chinese machine translation. The possibility of re-using existing English data and models via machine translation is investigated. Our study reveals to some extent that a computer can master two distinct languages, English and Chinese, at a similar level for describing the visual world. Data is publicly available at http://tinyurl.com/flickr8kcn"} {"_id": "5417bd72d1b787ade0c485f1188189474c199f4d", "title": "MAGAN: Margin Adaptation for Generative Adversarial Networks", "text": "We propose a novel training procedure for Generative Adversarial Networks (GANs) to improve stability and performance by using an adaptive hinge loss objective function. 
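For a rough feel of what an adaptive hinge objective looks like, here is a toy sketch for an energy-based discriminator; tying the margin to the mean real-sample energy is an illustrative assumption, not the paper's exact update rule:

```python
import numpy as np

def hinge_losses(e_real, e_fake, margin):
    # Discriminator: low energy on real data, energy above `margin` on fakes.
    d_loss = e_real.mean() + np.maximum(0.0, margin - e_fake).mean()
    # Generator: drive the energy of generated samples down.
    g_loss = e_fake.mean()
    return d_loss, g_loss

e_real = np.array([0.20, 0.30, 0.25])  # energies of real samples (made up)
e_fake = np.array([0.90, 1.10, 1.00])  # energies of generated samples (made up)
margin = e_real.mean()                 # adapt the margin to the data (assumption)
print(hinge_losses(e_real, e_fake, margin))
```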
We estimate the appropriate hinge loss margin with the expected energy of the target distribution, and derive both a principled criterion for updating the margin and an approximate convergence measure. The resulting training procedure is simple yet robust on a diverse set of datasets. We evaluate the proposed training procedure on the task of unsupervised image generation, noting both qualitative and quantitative performance improvements."} {"_id": "96a3c95a069d20fda310c315692a2a96635f4384", "title": "Efficient and fast multi-modal foreground-background segmentation using RGBD data", "text": "This paper addresses the problem of foreground and background segmentation. Multi-modal data, specifically RGBD data, has gained traction in many computer vision tasks recently. However, techniques for background subtraction based only on a single modality still report state-of-the-art results in many benchmarks. Successfully fusing depth and color data for this task requires a robust formalization that allows both higher precision and faster processing. To this end, we propose to make use of the kernel density estimation technique to adapt multi-modal data. To speed up kernel density estimation, we explore the fast Gauss transform, which allows the summation of a mixture of M kernels at N evaluation points in O(M+N) time as opposed to O(MN) time for a direct evaluation. Extensive experiments have been carried out on four publicly available RGBD foreground/background datasets. Results demonstrate that our proposal outperforms state-of-the-art methods for almost all of the sequences acquired in challenging indoor and outdoor contexts with a fast and non-parametric operation."} {"_id": "2858c69e5d9432c8b9529e3d97017d3909eb0b9c", "title": "Semi-Supervised Ensemble Ranking", "text": "Ranking plays a central role in many Web search and information retrieval applications. Ensemble ranking, sometimes called meta-search, aims to improve the retrieval performance by combining the outputs from multiple ranking algorithms. Many ensemble ranking approaches employ supervised learning techniques to learn appropriate weights for combining multiple rankers. The main shortcoming with these approaches is that the learned weights for ranking algorithms are query independent. This is suboptimal since a ranking algorithm could perform well for certain queries but poorly for others. In this paper, we propose a novel semi-supervised ensemble ranking (SSER) algorithm that learns query-dependent weights when combining multiple rankers in document retrieval. The proposed SSER algorithm is formulated as an SVM-like quadratic program (QP), and therefore can be solved efficiently by taking advantage of optimization techniques that were widely used in existing SVM solvers. We evaluated the proposed technique on a standard document retrieval testbed and observed encouraging results by comparing to a number of state-of-the-art techniques."} {"_id": "08e4982410ebaa6dbd203a953113214bc9740b80", "title": "Comparative evaluation of text- and citation-based plagiarism detection approaches using guttenplag", "text": "Various approaches for plagiarism detection exist. All are based on more or less sophisticated text analysis methods such as string matching, fingerprinting or style comparison. 
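As a minimal illustration of the fingerprinting family just mentioned, a k-gram hashing comparison can be sketched in a few lines; real systems add winnowing, tokenization and robust hashing:

```python
# Toy k-gram fingerprinting for document similarity.
def fingerprints(text, k=5):
    text = "".join(text.lower().split())  # drop case and whitespace
    return {hash(text[i:i + k]) for i in range(max(0, len(text) - k + 1))}

def similarity(a, b, k=5):
    fa, fb = fingerprints(a, k), fingerprints(b, k)
    return len(fa & fb) / max(1, len(fa | fb))  # Jaccard overlap of fingerprints

print(similarity("the quick brown fox jumps", "a quick brown dog jumps"))
```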
In this paper a new approach called Citation-based Plagiarism Detection is evaluated using a doctoral thesis, in which a volunteer crowd-sourcing project called GuttenPlag identified substantial amounts of plagiarism through careful manual inspection. This new approach is able to identify similar and plagiarized documents based on the citations used in the text. It is shown that citation-based plagiarism detection performs significantly better than text-based procedures in identifying strong paraphrasing, translation and some idea plagiarism. Detection rates can be improved by combining citation-based with text-based plagiarism detection."} {"_id": "486b5cd2cdd15436c090431d0a442328dcb637f3", "title": "Routing, scheduling and channel assignment in Wireless Mesh Networks: Optimization models and algorithms", "text": "Wireless Mesh networks (WMNs) can partially replace the wired backbone of traditional wireless access networks and, similarly, they require careful planning of radio resource assignment in order to provide the same quality guarantees to traffic flows. In this paper we study the radio resource assignment optimization problem in Wireless Mesh Networks assuming a time division multiple access (TDMA) scheme, a dynamic power control able to vary emitted power slot-by-slot, and a rate adaptation mechanism that sets transmission rates according to the signal-to-interference-and-noise ratio (SINR). The proposed optimization framework includes routing, scheduling and channel assignment. Quality requirements of traffic demands are expressed in terms of minimum bandwidth and modeled with constraints defining the number of information units (packets) that must be delivered per frame. We consider an alternative problem formulation where decision variables represent compatible sets of links active in the same slot and channel, called configurations. We propose a two-phase solution approach where a set of configurations is first selected to meet traffic requirements along the best available paths, and then configurations are assigned to channels according to device characteristics and constraints. The optimization goal is to minimize the number of used slots, which is directly related to the global resource allocation efficiency. We provide a lower bound of the optimal solution solving the continuous relaxation of the problem formulation. Moreover, we propose a heuristic approach to determine practical integer solutions (upper bound). Since configuration variables are exponentially many, our solution approaches are based on the Column Generation technique. In order to assess the effectiveness of the proposed algorithms we show the numerical results obtained on a set of realistic-size randomly generated instances."} {"_id": "2ae60aba40b0af8b83f65bdbf1b556f12b7672ce", "title": "A high performance 76.5 GHz FMCW RADAR for advanced driving assistance system", "text": "Frequency Modulated Continuous Wave (FMCW) RADAR is a continuous-wave RADAR widely used for intelligent Adaptive Cruise Control (ACC) and Collision Warning Systems (CWS). The paper presents the design and simulation of a millimeter-wave FMCW RADAR and introduces a mathematical model of the RADAR, including its target model. 
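As a worked example of the FMCW relationship underlying such a model: for a linear chirp of bandwidth B and duration T, a target at range R produces a beat frequency f_b = (B/T)(2R/c), so R = f_b c T / (2B). A toy calculation with assumed parameters:

```python
c = 3e8         # speed of light, m/s
B = 1e9         # chirp bandwidth, Hz (illustrative)
T = 50e-6       # chirp duration, s (illustrative)
f_beat = 200e3  # measured beat frequency, Hz

R = f_beat * c * T / (2 * B)
print(f"target range: {R:.2f} m")  # 1.50 m
```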
A 76.5 GHz FMCW RADAR and its target model are designed using Advanced Design System (ADS) software, and target range and velocity under various conditions are used in the simulated FMCW RADAR system to verify its operation."} {"_id": "007ee2559d4a2a8c661f4f5182899f03736682a7", "title": "CANAuth - A Simple, Backward Compatible Broadcast Authentication Protocol for CAN bus", "text": "The Controller-Area Network (CAN) bus protocol [1] is a bus protocol invented in 1986 by Robert Bosch GmbH, originally intended for automotive use. By now, the bus can be found in devices ranging from cars and trucks, over lighting setups to industrial looms. Due to its nature, it is a system very much focused on safety, i.e., reliability. Unfortunately, there is no built-in way to enforce security, such as encryption or authentication. In this paper, we investigate the problems associated with implementing a backward compatible message authentication protocol on the CAN bus. We show which constraints such a protocol has to meet and why this eliminates, to the best of our knowledge, all the authentication protocols published so far. Furthermore, we present a message authentication protocol, CANAuth, that meets all of the requirements set forth and does not violate any constraint of the CAN bus. Keywords\u2014CAN bus, embedded networks, broadcast authentication, symmetric cryptography"} {"_id": "8d19205039af3056b4b57eb6637462d12982e2a2", "title": "Finding social roles in Wikipedia", "text": "This paper investigates some of the social roles people play in the online community of Wikipedia. We start from qualitative comments posted on community oriented pages, wiki project memberships, and user talk pages in order to identify a sample of editors who represent four key roles: substantive experts, technical editors, vandal fighters, and social networkers. Patterns in edit histories and egocentric network visualizations suggest potential \"structural signatures\" that could be used as quantitative indicators of role adoption. Using simple metrics based on edit histories we compare two samples of Wikipedians: a collection of long term dedicated editors, and a cohort of editors from a one month window of new arrivals. According to these metrics, we find that the proportions of editor types in the new cohort are similar to those observed in the sample of dedicated contributors. The number of new editors playing helpful roles in a single month's cohort nearly equals the number found in the dedicated sample. This suggests that informal socialization has the potential to provide sufficient role-related labor despite growth and change in Wikipedia. These results are preliminary, and we describe several ways that the method can be improved, including the expansion and refinement of role signatures and identification of other important social roles."} {"_id": "8dc4025cfa05e6aab60ce578f9c9b55e356aeb79", "title": "Active Magnetic Anomaly Detection Using Multiple Micro Aerial Vehicles", "text": "Magnetic anomaly detection (MAD) is an important problem in applications ranging from geological surveillance to military reconnaissance. MAD sensors detect local disturbances in the magnetic field, which can be used to detect the existence of and to estimate the position of buried, hidden, or submerged objects, such as ore deposits or mines. These sensors may experience false positive and false negative detections and, without prior knowledge of the targets, can only determine proximity to a target. 
The uncertainty in the sensors, coupled with a lack of knowledge of even the existence of targets, makes the estimation and control problems challenging. We utilize a hierarchical decomposition of the environment, coupled with an estimation algorithm based on random finite sets, to determine the number of and the locations of targets in the environment. The small team of robots follows the gradient of mutual information between the estimated set of targets and the future measurements, locally maximizing the rate of information gain. We present experimental results of a team of quadrotor micro aerial vehicles discovering and localizing an unknown number of permanent magnets."} {"_id": "c6654cd8a1ce6562330fad4e42ac5a5e86a52c98", "title": "Demand Side Management in Smart Grid Using Heuristic Optimization", "text": "Demand side management (DSM) is one of the important functions in a smart grid that allows customers to make informed decisions regarding their energy consumption, and helps the energy providers reduce the peak load demand and reshape the load profile. This results in increased sustainability of the smart grid, as well as reduced overall operational cost and carbon emission levels. Most of the existing demand side management strategies used in traditional energy management systems employ system specific techniques and algorithms. In addition, the existing strategies handle only a limited number of controllable loads of limited types. This paper presents a demand side management strategy based on load shifting technique for demand side management of future smart grids with a large number of devices of several types. The day-ahead load shifting technique proposed in this paper is mathematically formulated as a minimization problem. A heuristic-based Evolutionary Algorithm (EA) that easily adapts heuristics in the problem was developed for solving this minimization problem. Simulations were carried out on a smart grid which contains a variety of loads in three service areas, one with residential customers, another with commercial customers, and the third one with industrial customers. The simulation results show that the proposed demand side management strategy achieves substantial savings, while reducing the peak load demand of the smart grid."} {"_id": "a481ad59a970cd605265b4397de1bc1974d3a48e", "title": "Cookie Hijacking in the Wild : Security and Privacy Implications", "text": "The widespread demand for online privacy, also fueled by widely-publicized demonstrations of session hijacking attacks against popular websites, has spearheaded the increasing deployment of HTTPS. However, many websites still avoid ubiquitous encryption due to performance or compatibility issues. The prevailing approach in these cases is to force critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. In this work, we conduct an in-depth assessment of a diverse set of major websites and explore what functionality and information is exposed to attackers that have hijacked a user\u2019s HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS; service personalization inadvertently results in the exposure of private information. 
The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-session cookies. Our cookie hijacking study reveals a number of severe flaws: attackers can obtain the user\u2019s home and work address and visited websites from Google; Bing and Baidu expose the user\u2019s complete search history; and Yahoo allows attackers to extract the contact list and send emails from the user\u2019s account. Furthermore, e-commerce vendors such as Amazon and Ebay expose the user\u2019s purchase history (partial and full respectively), and almost every website exposes the user\u2019s name and email address. Ad networks like Doubleclick can also reveal pages the user has visited. To fully evaluate the practicality and extent of cookie hijacking, we explore multiple aspects of the online ecosystem, including mobile apps, browser security mechanisms, extensions and search bars. To estimate the extent of the threat, we run IRB-approved measurements on a subset of our university\u2019s public wireless network for 30 days, and detect over 282K accounts exposing the cookies required for our hijacking attacks. We also explore how users can protect themselves and find that, while mechanisms such as the EFF\u2019s HTTPS Everywhere extension can reduce the attack surface, HTTP cookies are still regularly exposed. The privacy implications of these attacks become even more alarming when considering how they can be used to deanonymize Tor users. Our measurements suggest that a significant portion of Tor users may currently be vulnerable to cookie hijacking."} {"_id": "f1c03f90c35d9c66baf0b14e7852890dedeed794", "title": "Learning Profiles in Duplicate Question Detection", "text": "This paper presents the results of systematic and comparative experimentation with major types of methodologies for automatic duplicate question detection when these are applied to datasets of progressively larger sizes, thus allowing us to study the learning profiles of this task under these different approaches and evaluate their merits. This study was made possible by resorting to the recent release for research purposes, by the Quora online question answering engine, of a new dataset with over 400,000 pairs labeled with respect to their elements being duplicate interrogative segments."} {"_id": "af4ab6727929ca80227e5edbbe7135f09eedc5f0", "title": "Neurobiology of Schizophrenia", "text": "With its hallucinations, delusions, thought disorder, and cognitive deficits, schizophrenia affects the most basic human processes of perception, emotion, and judgment. Evidence increasingly suggests that schizophrenia is a subtle disorder of brain development and plasticity. Genetic studies are beginning to identify proteins of candidate genetic risk factors for schizophrenia, including dysbindin, neuregulin 1, DAOA, COMT, and DISC1, and neurobiological studies of the normal and variant forms of these genes are now well justified. We suggest that DISC1 may offer especially valuable insights. Mechanistic studies of the properties of these candidate genes and their protein products should clarify the molecular, cellular, and systems-level pathogenesis of schizophrenia. This can help redefine the schizophrenia phenotype and shed light on the relationship between schizophrenia and other major psychiatric disorders. 
Understanding these basic pathologic processes may yield novel targets for the development of more effective treatments."} {"_id": "061f07269d1bbeffd14509043dd5bee6f425e494", "title": "The design and implementation of microdrivers", "text": "Device drivers commonly execute in the kernel to achieve high performance and easy access to kernel services. However, this comes at the price of decreased reliability and increased programming difficulty. Driver programmers are unable to use user-mode development tools and must instead use cumbersome kernel tools. Faults in kernel drivers can cause the entire operating system to crash. User-mode drivers have long been seen as a solution to this problem, but suffer from either poor performance or new interfaces that require a rewrite of existing drivers.\n This paper introduces the Microdrivers architecture that achieves high performance and compatibility by leaving critical path code in the kernel and moving the rest of the driver code to a user-mode process. This allows data-handling operations critical to I/O performance to run at full speed, while management operations such as initialization and configuration run at reduced speed in user-level. To achieve compatibility, we present DriverSlicer, a tool that splits existing kernel drivers into a kernel-level component and a user-level component using a small number of programmer annotations. Experiments show that as much as 65% of driver code can be removed from the kernel without affecting common-case performance, and that only 1-6 percent of the code requires annotations."} {"_id": "129359a872783b7c3a82c2c9dbef75df2956d2d3", "title": "XFI: Software Guards for System Address Spaces", "text": "XFI is a comprehensive protection system that offers both flexible access control and fundamental integrity guarantees, at any privilege level and even for legacy code in commodity systems. For this purpose, XFI combines static analysis with inline software guards and a two-stack execution model. We have implemented XFI for Windows on the x86 architecture using binary rewriting and a simple, stand-alone verifier; the implementation's correctness depends on the verifier, but not on the rewriter. We have applied XFI to software such as device drivers and multimedia codecs. The resulting modules function safely within both kernel and user-mode address spaces, with only modest enforcement overheads."} {"_id": "287d5bd4a085eac093591ce72c07f06b3c64acec", "title": "CuriOS: Improving Reliability through Operating System Structure", "text": "An error that occurs in a microkernel operating system service can potentially result in state corruption and service failure. A simple restart of the failed service is not always the best solution for reliability. Blindly restarting a service which maintains client-related state such as session information results in the loss of this state and affects all clients that were using the service. CuriOS represents a novel OS design that uses lightweight distribution, isolation and persistence of OS service state to mitigate the problem of state loss during a restart. The design also significantly reduces error propagation within client-related state maintained by an OS service. This is achieved by encapsulating services in separate protection domains and granting access to client-related state only when required for request processing. 
Fault injection experiments show that it is possible to recover from between 87% and 100% of manifested errors in OS services such as the file system, network, timer and scheduler while maintaining low performance overheads."} {"_id": "585706dc56e146c8fb42228fc5cbe1de0bb0a69d", "title": "CIL: Intermediate Language and Tools for Analysis and Transformation of C Programs", "text": "This paper describes the C Intermediate Language: a high-level representation along with a set of tools that permit easy analysis and source-to-source transformation of C programs. Compared to C, CIL has fewer constructs. It breaks down certain complicated constructs of C into simpler ones, and thus it works at a lower level than abstract-syntax trees. But CIL is also more high-level than typical intermediate languages (e.g., three-address code) designed for compilation. As a result, what we have is a representation that makes it easy to analyze and manipulate C programs, and emit them in a form that resembles the original source. Moreover, it comes with a front-end that translates to CIL not only ANSI C programs but also those using Microsoft C or GNU C extensions. We describe the structure of CIL with a focus on how it disambiguates those features of C that we found to be most confusing for program analysis and transformation. We also describe a whole-program merger based on structural type equality, allowing a complete project to be viewed as a single compilation unit. As a representative application of CIL, we show a transformation aimed at making code immune to stack-smashing attacks. We are currently using CIL as part of a system that analyzes and instruments C programs with run-time checks to ensure type safety. CIL has served us very well in this project, and we believe it can usefully be applied in other situations as well."} {"_id": "9090142233801801411a28b30c653aae5408182a", "title": "Thorough static analysis of device drivers", "text": "Bugs in kernel-level device drivers cause 85% of the system crashes in the Windows XP operating system [44]. One of the sources of these errors is the complexity of the Windows driver API itself: programmers must master a complex set of rules about how to use the driver API in order to create drivers that are good clients of the kernel. We have built a static analysis engine that finds API usage errors in C programs. The Static Driver Verifier tool (SDV) uses this engine to find kernel API usage errors in a driver. SDV includes models of the OS and the environment of the device driver, and over sixty API usage rules. SDV is intended to be used by driver developers \"out of the box.\" Thus, it has stringent requirements: (1) complete automation with no input from the user; (2) a low rate of false errors. We discuss the techniques used in SDV to meet these requirements, and empirical results from running SDV on over one hundred Windows device drivers."} {"_id": "0d441ab58a1027cb64084ad065cfea5e15b8e74c", "title": "Why We Need New Evaluation Metrics for NLG", "text": "The majority of NLG evaluation relies on automatic metrics, such as BLEU. In this paper, we motivate the need for novel, system- and data-independent automatic evaluation methods: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. We also show that metric performance is data- and system-specific. 
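The metric-versus-human analysis described in this abstract boils down to correlating two lists of scores per system or per output; a minimal sketch with made-up numbers:

```python
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.31, 0.55, 0.42, 0.70, 0.48]  # e.g., BLEU per output (made up)
human_ratings = [2.0, 4.5, 2.5, 4.0, 3.5]       # human judgements (made up)

print(pearsonr(metric_scores, human_ratings))   # linear correlation
print(spearmanr(metric_scores, human_ratings))  # rank correlation
```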
Nevertheless, our results also suggest that automatic metrics perform reliably at the system level and can support system development by finding cases where a system performs poorly."} {"_id": "045b415f039ca547e0720f6bbc7d80256c725681", "title": "Living in two homes - a Swedish national survey of wellbeing in 12 and 15\u00a0year olds with joint physical custody", "text": "BACKGROUND\nThe practice of joint physical custody, where children spend equal time in each parent's home after they separate, is increasing in many countries. It is particularly common in Sweden, where this custody arrangement applies to 30 per cent of children with separated parents. The aim of this study was to examine children's health-related quality of life after parental separation, by comparing children living with both parents in nuclear families to those living in joint physical custody and other forms of domestic arrangements.\n\n\nMETHODS\nData from a national Swedish classroom study of 164,580 children aged 12 and 15-years-old were analysed by two-level linear regression modelling. Z-scores were used to equalise scales for ten dimensions of wellbeing from the KIDSCREEN-52 and the KIDSCREEN-10 Index and analysed for children in joint physical custody in comparison with children living in nuclear families and mostly or only with one parent.\n\n\nRESULTS\nLiving in a nuclear family was positively associated with almost all aspects of wellbeing in comparison to children with separated parents. Children in joint physical custody experienced more positive outcomes, in terms of subjective wellbeing, family life and peer relations, than children living mostly or only with one parent. For the 12-year-olds, beta coefficients for moods and emotions ranged from -0.20 to -0.33 and peer relations from -0.11 to -0.20 for children in joint physical custody and living mostly or only with one parent. The corresponding estimates for the 15-year-olds varied from -0.08 to -0.28 and from -0.03 to -0.13 on these subscales. The 15-year-olds in joint physical custody were more likely than the 12-year-olds to report similar wellbeing levels on most outcomes to the children in nuclear families.\n\n\nCONCLUSIONS\nChildren who spent equal time living with both parents after a separation reported better wellbeing than children in predominantly single parent care. This was particularly true for the 15-year-olds, while the reported wellbeing of 12-year-olds was less satisfactory. There is a need for further studies that can account for the pre and post separation context of individual families and the wellbeing of younger age groups in joint physical custody."} {"_id": "2033551a919723b8b70fadfed2178909740a19ff", "title": "Reaching Agreement in the Presence of Faults", "text": "The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. 
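A toy simulation of the classic oral-messages style of solution to this problem may help; the faulty sender's behaviour here (alternating values across receivers) is just one arbitrary possibility:

```python
from collections import Counter

def majority(vals):
    return Counter(vals).most_common(1)[0][0]

def om(m, sender, receivers, value, faulty):
    # A faulty sender lies; here it alternates 0/1 across receivers.
    recv = {r: (i % 2 if sender in faulty else value)
            for i, r in enumerate(receivers)}
    if m == 0:
        return recv
    decided = {}
    for p in receivers:
        # Each other receiver q relays what it got, via a round of OM(m-1).
        relayed = [om(m - 1, q, [r for r in receivers if r != q],
                      recv[q], faulty)[p]
                   for q in receivers if q != p]
        decided[p] = majority([recv[p]] + relayed)
    return decided

# n = 4, m = 1: three lieutenants agree despite a faulty commander.
print(om(1, "C", ["L1", "L2", "L3"], 1, faulty={"C"}))  # all decide the same bit
```

With n = 3 and m = 1 the same scheme fails, matching the n \u2265 3m + 1 bound the abstract goes on to state.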
The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.\nIt is shown that the problem is solvable for, and only for, n \u2265 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n \u2265 m \u2265 0. This weaker assumption can be approximated in practice using cryptographic methods."} {"_id": "72a44aa53a3fe7d3a0edfc66b8e6fb0de2728cb6", "title": "User privacy risk calculator", "text": "User privacy is becoming an issue on the Internet due to common data breaches and various security threats. Services tend to require private user data in order to provide more personalized content and users are typically unaware of potential risks to their privacy. This paper proposes a risk calculator based on a feedforward neural network that will provide users with an ability to calculate risks to their privacy. The proposed calculator is evaluated on a set of real world example scenarios. Furthermore, to give more insight into privacy issues, each estimated risk is explained by several real life scenarios that might happen if the observed parameters are obtained by an attacker. In turn, this should raise user awareness and knowledge about privacy issues on the Internet."} {"_id": "ed8e145b1c1da64c4d30af097148e9798e712985", "title": "VALUE , RARENESS , COMPETITIVE ADVANTAGE , AND PERFORMANCE : A CONCEPTUAL-LEVEL EMPIRICAL INVESTIGATION OF THE RESOURCE-BASED VIEW OF THE FIRM", "text": "The resource-based view of the firm (RBV) hypothesizes that the exploitation of valuable, rare resources and capabilities contributes to a firm\u2019s competitive advantage, which in turn contributes to its performance. Despite this notion, few empirical studies test these hypotheses at the conceptual level. In response to this gap, this study empirically examines the relationships between value, rareness, competitive advantage, and performance. The results suggest that value and rareness are related to competitive advantage, that competitive advantage is related to performance, and that competitive advantage mediates the rareness-performance relationship. These findings have important academic and practitioner implications which are then discussed. Copyright \u00a9 2008 John Wiley & Sons, Ltd."} {"_id": "fecf0793c768fcb6e1d45fe52e2bfe43731ffa99", "title": "A Configurable 12\u2013237 kS/s 12.8 mW Sparse-Approximation Engine for Mobile Data Aggregation of Compressively Sampled Physiological Signals", "text": "Compressive sensing (CS) is a promising technology for realizing low-power and cost-effective wireless sensor nodes (WSNs) in pervasive health systems for 24/7 health monitoring. Due to the high computational complexity (CC) of the reconstruction algorithms, software solutions cannot fulfill the energy efficiency needs for real-time processing. In this paper, we present a 12-237 kS/s 12.8 mW sparse-approximation (SA) engine chip that enables the energy-efficient data aggregation of compressively sampled physiological signals on mobile platforms. The SA engine chip integrated in 40 nm CMOS can support the simultaneous reconstruction of over 200 channels of physiological signals while consuming less than 1% of a smartphone's power budget. 
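For context, the sparse-approximation problem that such an engine accelerates can be sketched with a toy orthogonal matching pursuit; this illustrates the problem class, not the chip's actual algorithm:

```python
import numpy as np

def omp(A, y, k):
    # Greedy sparse recovery: pick the best-correlated atom, refit, repeat.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary atoms
x_true = np.zeros(64); x_true[[3, 17]] = [1.0, -0.5]
x_hat = omp(A, A @ x_true, k=2)
print(np.flatnonzero(x_hat))                   # typically recovers support {3, 17}
```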
Such energy-efficient reconstruction enables two-to-three times energy savings at the sensor nodes in a CS-based health monitoring system as compared to traditional Nyquist-based systems, while providing timely feedback and bringing signal intelligence closer to the user."} {"_id": "8149e52a8c549aa9e1f4a93fe315263849bc92c1", "title": "Dragonfly wing nodus: A one-way hinge contributing to the asymmetric wing deformation.", "text": "Dragonfly wings are highly specialized locomotor systems, which are formed by a combination of several structural components. The wing components, also known as structural elements, are responsible for the various aspects of the wing functionality. Considering the complex interactions between the wing components, modelling of the wings as a whole is only possible with inevitable huge oversimplifications. In order to overcome this difficulty, we have recently proposed a new approach to model individual components of complex wings comparatively. Here, we use this approach to study nodus, a structural element of dragonfly wings which has been less studied to date. Using a combination of several imaging techniques including scanning electron microscopy (SEM), wide-field fluorescence microscopy (WFM), confocal laser scanning microscopy (CLSM) and micro-computed tomography (micro-CT) scanning, we aim to characterize the spatial morphology and material composition of fore- and hindwing nodi of the dragonfly Brachythemis contaminata. The microscopy results show the presence of resilin in the nodi, which is expected to help the deformability of the wings. The computational results based on three-dimensional (3D) structural data suggest that the specific geometry of the nodus restrains its displacements when subjected to pressure on the ventral side. This effect, resulting from an interlocking mechanism, is expected to contribute to the dorso-ventral asymmetry of wing deformation and to provide a higher resistance to aerodynamic forces during the downstroke. Our results provide an important step towards better understanding of the structure-property-function relationship in dragonfly wings.\n\n\nSTATEMENT OF SIGNIFICANCE\nIn this study, we investigate the wing nodus, a specialized wing component in dragonflies. Using a combination of modern imaging techniques, we demonstrate the presence of resilin in the nodus, which is expected to facilitate the wing deformability in flight. The specific geometry of the nodus, however, seems to restrain its displacements when subjected to pressure on the ventral side. This effect, resulting from an interlocking mechanism, is suggested to contribute to dorso-ventral asymmetry of wing deformations and to provide a higher resistance to aerodynamic forces during the downstroke. Our results provide an important step towards better understanding of the structure-property-function relationship in dragonfly wings and might help to design more efficient wings for biomimetic micro-air vehicles."} {"_id": "8c878f4e0e17be1a1ee840fa3479304f60f175fc", "title": "Single-stage 6.78 MHz power-amplifier design using high-voltage GaN power ICs for wireless charging applications", "text": "Traditional power amplifiers for wireless power transfer require a low voltage dc power supply generated by a power adapter. This multi-stage power conversion solution reduces the overall efficiency. 
A single-stage \u201cAC-RF\u201d power amplifier architecture using a phase-modulated full-bridge topology and high voltage GaN Power ICs is proposed to directly generate 6.78 MHz wireless power from a rectified ac source. A 50W power amplifier is built and achieves 90% ac-to-coil efficiency, which reduces overall power loss by half, compared to existing multi-stage solutions. The operating principle of a phase-shifted full-bridge power amplifier is discussed, and analytical equations are provided for the passive filter network design. A coupled ZVS-tank scheme is also proposed to improve efficiency."} {"_id": "3653be0cf0eafea9a43388e1be93e5520f0ad898", "title": "Batch Mode Active Learning for Regression With Expected Model Change", "text": "While active learning (AL) has been widely studied for classification problems, limited effort has been devoted to AL for regression. In this paper, we introduce a new AL framework for regression, expected model change maximization (EMCM), which aims at choosing the unlabeled data instances that result in the maximum change of the current model once labeled. The model change is quantified as the difference between the current model parameters and the updated parameters after the inclusion of the newly selected examples. In light of the stochastic gradient descent learning rule, we approximate the change as the gradient of the loss function with respect to each single candidate instance. Under the EMCM framework, we propose novel AL algorithms for the linear and nonlinear regression models. In addition, by simulating the behavior of the sequential AL policy when applied for $k$ iterations, we further extend the algorithms to batch mode AL to simultaneously choose a set of $k$ most informative instances at each query time. Extensive experimental results on both UCI and StatLib benchmark data sets have demonstrated that the proposed algorithms are highly effective and efficient."} {"_id": "ad5b3af1764a553a16d81d5dcb53a0432405e32b", "title": "End-to-End Sound Source Separation Conditioned On Instrument Labels", "text": "Can we perform an end-to-end sound source separation (SSS) with a variable number of sources using a deep learning model? This paper presents an extension of the Wave-U-Net [1] model, which allows end-to-end monaural source separation with a non-fixed number of sources. Furthermore, we propose multiplicative conditioning with instrument labels at the bottleneck of the Wave-U-Net and show its effect on the separation results. This approach can be further extended to other types of conditioning such as audio-visual SSS and score-informed SSS."} {"_id": "9b2ac79115d61001ddaee57597403e845c1a29c4", "title": "Applying Genre-Based Ontologies to Enterprise Architecture", "text": "This paper elaborates the approach of using ontologies as a conceptual base for enterprise architecture (EA) descriptions. The method focuses on recognising and modelling business critical information concepts, their content, and semantics used to operate the business. Communication genres and open and semi-structured information need interviews are used as a domain analysis method. Ontologies aim to explicate the results of domain analysis and to provide a common reference model for Business Information Architecture (BIA) descriptions. 
The results are generalised to model further aspects of EA."} {"_id": "d1c3a97700e42ad1f5aafe6bb22725b401558df0", "title": "Evaluation of LoRa receiver performance under co-technology interference", "text": "LoRa networks achieve a remarkably wide coverage as compared to that of the conventional wireless systems by employing the chirp spread spectrum (CSS). However, single-hop LoRa networks have limited penetration in indoor environments and cannot efficiently utilize the multiple-access dimensions arising from different spreading factors (SF) because the star topology is typically used for LoRa. On the other hand, a multi-hop LoRa network has the potential to realize extensive network coverage by utilizing packet relaying and improving the network capacity by utilizing different SFs. We evaluated the LoRa receiver performance under co-technology interference via simulation and experiments to realize these potentials. The results show that LoRa can survive interference from time-synchronized packets transmitted by multiple transmitters with an identical SF. Furthermore, we examined the orthogonality between different SFs by evaluating the required signal-to-interference ratio (SIR). Finally, we demonstrated the possibility of employing different SFs to construct the pipeline in a multi-hop relay network to increase network efficiency."} {"_id": "444b29b93d3e9828a69a60f6c1540d8130ae325d", "title": "A decision model for evaluating third-party logistics providers using fuzzy analytic hierarchy process", "text": "As a consequence of an increasing trend toward the outsourcing of logistics activities, shippers have been faced with the inevitability of selecting an appropriate third-party logistics (3PL) provider. The selection process for the identification of a 3PL provider that best fits user requirements involves multiple criteria and alternatives and may be one of the most complex decisions facing logistics users. In this regard, this study proposes an evaluation framework and methodology for selecting a suitable 3PL provider and illustrates the process of evaluation and selection through a case study. It is expected that the results of this study will provide a practical reference for logistics managers who want to engage the best 3PL provider. Future research using different datasets is warranted to verify the generalizability of the findings."} {"_id": "fc26b3804f27cd2e2d79f3f5db95ecf5c4370792", "title": "A Short-Term Rainfall Prediction Model Using Multi-task Convolutional Neural Networks", "text": "Precipitation prediction, such as short-term rainfall prediction, is a very important problem in the field of meteorological service. In practice, most of recent studies focus on leveraging radar data or satellite images to make predictions. However, there is another scenario where a set of weather features are collected by various sensors at multiple observation sites. The observations of a site are sometimes incomplete but provide important clues for weather prediction at nearby sites, which are not fully exploited in existing work yet. To solve this problem, we propose a multi-task convolutional neural network model to automatically extract features from the time series measured at observation sites and leverage the correlation between the multiple sites for weather prediction via multi-tasking. To the best of our knowledge, this is the first attempt to use multi-task learning and deep learning techniques to predict short-term rainfall amount based on multi-site features. 
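A minimal sketch of the shared-encoder, per-site-head idea just described, in PyTorch; the layer sizes and pooling are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiSiteRainfall(nn.Module):
    def __init__(self, n_features, n_sites, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(          # shared across all sites
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.heads = nn.ModuleList(            # one regression head per site
            [nn.Linear(hidden, 1) for _ in range(n_sites)]
        )

    def forward(self, x):                      # x: (batch, features, time)
        z = self.encoder(x).squeeze(-1)
        return torch.cat([head(z) for head in self.heads], dim=1)

model = MultiSiteRainfall(n_features=8, n_sites=5)
print(model(torch.randn(4, 8, 24)).shape)      # torch.Size([4, 5])
```

Sharing the encoder is what lets observations at one site inform predictions at correlated sites, while the separate heads keep per-site calibration.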
Specifically, we formulate the learning task as an end-to-end multi-site neural network model, which makes it possible to leverage the learned knowledge from one site at other correlated sites and to model the correlations between different sites. Extensive experiments show that the learned site correlations are insightful and the proposed model significantly outperforms a broad set of baseline models including the European Centre for Medium-range Weather Forecasts system (ECMWF)."} {"_id": "f05af9088a50918154628b54e5467ca5b365670d", "title": "Consumer Brand Engagement in Social Media : Conceptualization , Scale Development & Validation", "text": "In the last three decades an influential research stream has emerged, which highlights the dynamics of focal consumer/brand relationships. Specifically, more recently the \u2018consumer brand engagement\u2019 (CBE) concept has been postulated to more comprehensively reflect the nature of consumers\u2019 particular interactive brand relationships, relative to traditional concepts, including \u2018involvement.\u2019 However, despite the growing scholarly interest regarding the undertaking of marketing research addressing \u2018engagement,\u2019 studies have been predominantly exploratory in nature; thus generating a lack of empirical research in this area to date. By developing and validating a CBE scale in specific social media settings, we address this identified literature gap. Specifically, we conceptualize CBE as a consumer\u2019s positively valenced brand-related cognitive, emotional and behavioral activity during or related to focal consumer/brand interactions. We derive three CBE dimensions, including cognitive processing, affection, and activation. Within three different social media contexts, we employ exploratory and confirmatory factor analyses to develop a reliable, 10-item CBE scale, which we proceed to validate within a nomological net of conceptual relationships and a rival model. The findings suggest that while consumer brand \u2018involvement\u2019 acts as a CBE antecedent, consumer \u2018self-brand connection\u2019 and \u2018brand usage intent\u2019 represent key CBE consequences; thus providing a platform for further research in this emerging area. We conclude with an overview of key managerial and scholarly implications arising from this research."} {"_id": "523eea06d191218488f11b52bed5246e5cdb3f31", "title": "The efficiency frontier approach to economic evaluation of health-care interventions.", "text": "BACKGROUND\nIQWiG commissioned an international panel of experts to develop methods for the assessment of the relation of benefits to costs in the German statutory health-care system.\n\n\nPROPOSED METHODS\nThe panel recommended that IQWiG inform German decision makers of the net costs and value of additional benefits of an intervention in the context of relevant other interventions in that indication. To facilitate guidance regarding maximum reimbursement, this information is presented in an efficiency plot with costs on the horizontal axis and value of benefits on the vertical. The efficiency frontier links the interventions that are not dominated and provides guidance. A technology that places on the frontier or to the left is reasonably efficient, while one falling to the right requires further justification for reimbursement at that price. 
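The frontier construction just described is straightforward to compute: sort interventions by cost and keep those offering strictly more benefit than every cheaper option. A toy sketch over made-up (cost, benefit) pairs:

```python
def efficiency_frontier(interventions):
    # interventions: iterable of (cost, benefit) pairs
    pts = sorted(interventions, key=lambda cb: (cb[0], -cb[1]))
    front, best = [], float("-inf")
    for cost, benefit in pts:
        if benefit > best:       # not dominated by anything cheaper or equal
            front.append((cost, benefit))
            best = benefit
    return front

print(efficiency_frontier([(10, 2), (20, 5), (15, 1), (30, 4), (25, 6)]))
# [(10, 2), (20, 5), (25, 6)]: (15, 1) and (30, 4) fall right of the frontier
```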
This information does not automatically give the maximum reimbursement, as other considerations may be relevant. Given that the estimates are for a specific indication, they do not address priority setting across the health-care system.\n\n\nCONCLUSION\nThis approach informs decision makers about efficiency of interventions, conforms to the mandate and is consistent with basic economic principles. Empirical testing of its feasibility and usefulness is required."} {"_id": "abd2a82e0b2a41e5db311e66b53f8e8f947c9710", "title": "Lane detection and tracking system based on the MSER algorithm, hough transform and kalman filter", "text": "We present a novel lane detection and tracking system using a fusion of Maximally Stable Extremal Regions (MSER) and Progressive Probabilistic Hough Transform (PPHT). First, MSER is applied to obtain a set of blobs including noisy pixels (e.g., trees, cars and traffic signs) and the candidate lane markings. A scanning refinement algorithm is then introduced to enhance the results of MSER and filter out noisy data. After that, to achieve the requirements of real-time systems, the PPHT is applied. Compared to the Hough transform, which returns the parameters \u03c1 and \u03b8, PPHT returns the two end-points of the detected line markings. To track lane markings, two Kalman trackers are used to track both end-points. Several experiments are conducted on Ottawa roads to test the performance of our framework. The detection rate of the proposed system averages 92.7% and exceeds 84.9% in poor conditions."} {"_id": "1034bfdcdfd1ae0941bbee4660255a2130f56bd2", "title": "IT capability and organizational performance: the roles of business process agility and environmental factors", "text": "School of Business Administration, Southwestern University of Finance and Economics, Sichuan, P. R. of China; Business School, Shantou University, Guangdong, P. R. of China; Information Technology Management, School of Business, University at Albany, Albany, NY, U.S.A; Department of Management, Hong Kong University of Science and Technology, Hong Kong, P.R. China; Department of Finance and Decision Sciences, School of Business, Hong Kong Baptist University, Hong Kong, P.R. China"} {"_id": "a1126609dd3cceb494a3e2aba6adc7509c94479c", "title": "Maximum Performance Computing with Dataflow Engines", "text": "Multidisciplinary dataflow computing is a powerful approach to scientific computing that has led to orders-of-magnitude performance improvements for a wide range of applications."} {"_id": "1499fe40fdf50f1e85a2757b82b4538b5d2b2f9b", "title": "Crowdfunding : tapping the right crowd", "text": "The basic idea of crowdfunding is to raise external finance from a large audience (the \u201ccrowd\u201d), where each individual provides a very small amount, instead of soliciting a small group of sophisticated investors. The paper develops a model that associates crowdfunding with pre-ordering and price discrimination, and studies the conditions under which crowdfunding is preferred to traditional forms of external funding. 
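Returning to the lane-detection pipeline described above, its MSER and probabilistic Hough stages map directly onto OpenCV primitives; a minimal sketch, assuming a grayscale road image road.png (parameters are illustrative, and the refinement and Kalman stages are omitted):

```python
import cv2
import numpy as np

gray = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)

mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)      # candidate lane-marking blobs

mask = np.zeros_like(gray)
for region in regions:                     # rasterize MSER pixels into a mask
    mask[region[:, 1], region[:, 0]] = 255

lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print("lane segment:", (x1, y1), (x2, y2))  # end-points for a tracker
```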
Compared to traditional funding, crowdfunding has the advantage of offering an enhanced experience to some consumers and, thereby, of allowing the entrepreneur to practice menu pricing and extract a larger share of the consumer surplus; the disadvantage is that the entrepreneur is constrained in his/her choice of prices by the amount of capital that he/she needs to raise: the larger this amount, the more prices have to be twisted so as to attract a large number of \u201ccrowdfunders\u201d who pre-order, and the less profitable the menu pricing scheme."} {"_id": "411476007a7673c87b497e61848d0962fdb03d07", "title": "Opportunistic Flooding in Low-Duty-Cycle Wireless Sensor Networks with Unreliable Links", "text": "Intended for network-wide dissemination of commands, configurations and code binaries, flooding has been investigated extensively in wireless networks. However, little work has yet been done on low-duty-cycle wireless sensor networks in which nodes stay asleep most of the time and wake up asynchronously. In this type of network, a broadcasting packet is rarely received by multiple nodes simultaneously, a unique constraining feature that makes existing solutions unsuitable. Combined with unreliable links, flooding in low-duty-cycle networks is a new challenging issue.\n In this paper, we introduce Opportunistic Flooding, a novel design tailored for low-duty-cycle networks with unreliable wireless links and predetermined working schedules. The key idea is to make probabilistic forwarding decisions at a sender based on the delay distribution of next-hop nodes. Only opportunistically early packets are forwarded using links outside the energy-optimal tree to reduce the flooding delay and redundancy in transmission. To improve performance further, we propose a forwarder selection method to alleviate the hidden terminal problem and a link-quality-based backoff method to resolve simultaneous forwarding operations. We evaluate Opportunistic Flooding with extensive simulation and a test-bed implementation consisting of 30 MicaZ nodes. Evaluation shows our design is close to the optimal performance achievable by oracle flooding designs. Compared with improved traditional flooding, our design achieves significantly shorter flooding delay while consuming only 20% ~ 60% of the transmission energy in various low-duty-cycle network settings."} {"_id": "9471f2f4b13886516976d4ddc1bee223312dfe77", "title": "How can machine learning help stock investment ?", "text": "The million-dollar question for stock investors is whether the price of a stock will rise or fall. The stock market fluctuates violently, and there are many complicated financial indicators. Only people with extensive experience and knowledge can interpret these indicators and use them to make good predictions and earn a fortune. Most other people can only rely on luck to earn money from stock trading. Machine learning offers ordinary people an opportunity to profit steadily from the stock market, and it can also help experts dig out the most informative indicators and make better predictions."} {"_id": "fa4cdf31d062cae4d6425648a28b88f555ad71f4", "title": "Activities on Facebook Reveal the Depressive State of Users", "text": "BACKGROUND\nAs online social media have become prominent, much effort has been spent on identifying users with depressive symptoms, with the aim of early diagnosis, treatment, and even prevention, by using various online social media.
In this paper, we focused on Facebook to discern any correlations between the platform's features and users' depressive symptoms. This work may be helpful in trying to reach and detect large numbers of depressed individuals more easily.\n\n\nOBJECTIVE\nOur goal was to develop a Web application and identify depressive symptom-related features from users of Facebook, a popular social networking platform.\n\n\nMETHODS\n55 Facebook users (male=40, female=15, mean age 24.43, SD 3.90) were recruited through advertisement fliers distributed to students in a large university in Korea. Using EmotionDiary, the Facebook application we developed, we evaluated depressive symptoms using the Center for Epidemiological Studies-Depression (CES-D) scale. We also provided tips and facts about depression to participants and measured their responses using EmotionDiary. To identify the Facebook features related to depression, correlation analyses were performed between CES-D and participants' responses to tips and facts or Facebook social features. Last, a psychiatrist interviewed participants with probable depression (CES-D\u226525) to assess their depressive symptoms.\n\n\nRESULTS\nFacebook activities had predictive power in distinguishing depressed and nondepressed individuals. Participants' response to tips and facts, which can be explained by the number of app tips viewed and app points, had a positive correlation (P=.04 for both cases), whereas the number of friends and location tags had a negative correlation with the CES-D scale (P=.08 and P=.045 respectively). Furthermore, in finding group differences in Facebook social activities, app tips viewed and app points resulted in significant differences (P=.01 and P=.03 respectively) between probably depressed and nondepressed individuals.\n\n\nCONCLUSIONS\nOur results using EmotionDiary demonstrated that the more depressed one is, the more one will read tips and facts about depression. We also confirmed depressed individuals had significantly fewer interactions with others (eg, decreased number of friends and location tagging). Our app, EmotionDiary, can successfully evaluate depressive symptoms as well as provide useful tips and facts to users. These results open the door for examining Facebook activities to identify depressed individuals. We aim to conduct the experiment in multiple cultures as well."} {"_id": "6ba074a7114112ddb74a205540f568d9737a7f08", "title": "KidCAD: digitally remixing toys through tangible tools", "text": "Children have great facility in the physical world, and can skillfully model in clay and draw expressive illustrations. Traditional digital modeling tools have focused on mouse, keyboard and stylus input. These tools can be complicated, making it difficult for young users to create exciting designs easily and quickly. We seek to bring physical interaction to digital modeling, to allow users to use existing physical objects as tangible building blocks for new designs. We introduce KidCAD, a digital clay interface for children to remix toys. KidCAD allows children to imprint 2.5D shapes from physical objects into their digital models by deforming a malleable gel input device, deForm. Users can mash up existing objects, edit and sculpt or draw new designs on a 2.5D canvas using physical objects, hands and tools as well as 2D touch gestures.
We report on a preliminary user study with 13 children, ages 7 to 10, which provides feedback for our design and helps guide future work in tangible modeling for children."} {"_id": "7861d8ce85b5cbb66f8a9deccf5f08ff59e1c2a8", "title": "3D Human Body Reconstruction from a Single Image via Volumetric Regression", "text": "This paper proposes the use of an end-to-end Convolutional Neural Network for direct reconstruction of the 3D geometry of humans via volumetric regression. The proposed method does not require the fitting of a shape model and can be trained to work from a variety of input types, whether it be landmarks, images or segmentation masks. Additionally, non-visible parts, either self-occluded or otherwise, are still reconstructed, which is not the case with depth map regression. We present results that show that our method can handle both pose variation and detailed reconstruction given appropriate datasets for training."} {"_id": "0adfe845a05ba4887f9bf21949f6f276246b789f", "title": "Human hand modelling: kinematics, dynamics, applications", "text": "An overview of mathematical modelling of the human hand is given. We consider hand models from a specific background: rather than studying hands for surgical or similar goals, we target at providing a set of tools with which human grasping and manipulation capabilities can be studied, and hand functionality can be described. We do this by investigating the human hand at various levels: (1) at the level of kinematics, focussing on the movement of the bones of the hand, not taking corresponding forces into account; (2) at the musculotendon structure, i.e. by looking at the part of the hand generating the forces and thus inducing the motion; and (3) at the combination of the two, resulting in hand dynamics as well as the underlying neurocontrol. Our purpose is to not only provide the reader with an overview of current human hand modelling approaches but also to fill the gaps with recent results and data, thus allowing for an encompassing picture."} {"_id": "2fccaa0c8ad0c727f1f7ec948ba9256092c2a64d", "title": "Accurate, Robust, and Flexible Real-time Hand Tracking", "text": "We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis."} {"_id": "fec0aabf0adce530b83695b83312aef46176519d", "title": "Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input", "text": "Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. 
However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps, which make real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness."} {"_id": "38bb98c164c4c84f2c10c8ea97636075240b66c0", "title": "Terminological ontology learning and population using latent Dirichlet allocation", "text": "The success of the Semantic Web will heavily rely on the availability of formal ontologies to structure machine-understandable data. However, there is still a lack of general methodologies for automatic ontology learning and population, i.e. the generation of domain ontologies from various kinds of resources by applying natural language processing and machine learning techniques. In this paper, the authors present an ontology learning and population system that combines both statistical and semantic methodologies. Several experiments have been carried out, demonstrating the effectiveness of the proposed system. Keywords-Ontologies, Ontology Learning, Ontology Population, Latent Dirichlet Allocation"} {"_id": "d9fc07892b677f5365cb889e3da1f0335b01e1a8", "title": "Evolutionary Competition in Platform Ecosystems", "text": "Intraplatform competition has received scant attention in prior studies, which predominantly study interplatform competition. We develop a middle-range theory of how complementarity between input control and a platform extension\u2019s modularization\u2014by inducing evolution\u2014influences its performance in a platform market. Primary and archival data spanning five years from 342 Firefox extensions show that such complementarity fosters performance by accelerating an extension\u2019s perpetual evolution."} {"_id": "2d8acf4fa3ac825387ce416972af079b6562f5e0", "title": "Cache in the air: exploiting content caching and delivery techniques for 5G systems", "text": "The demand for rich multimedia services over mobile networks has been soaring at a tremendous pace over recent years. However, due to the centralized architecture of current cellular networks, the wireless link capacity as well as the bandwidth of the radio access networks and the backhaul network cannot practically cope with the explosive growth in mobile traffic. Recently, we have observed the emergence of promising mobile content caching and delivery techniques, by which popular contents are cached in the intermediate servers (or middleboxes, gateways, or routers) so that demands from users for the same content can be accommodated easily without duplicate transmissions from remote servers; hence, redundant traffic can be significantly eliminated.
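A toy illustration of why in-network caching eliminates duplicate transmissions; the EdgeCache class, the LRU policy, and the origin_fetch callback below are our own assumptions for the sketch, not the schemes surveyed in this article:

    from collections import OrderedDict

    class EdgeCache:
        # Toy LRU cache: hits are served locally; only misses cost a
        # duplicate transmission from the remote origin server.
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()
            self.hits = 0
            self.misses = 0

        def fetch(self, content_id, origin_fetch):
            if content_id in self.store:
                self.store.move_to_end(content_id)  # refresh recency on a hit
                self.hits += 1
                return self.store[content_id]
            self.misses += 1
            data = origin_fetch(content_id)  # backhaul transmission
            self.store[content_id] = data
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            return data

    cache = EdgeCache(capacity=2)
    for cid in ["a", "b", "a", "a", "c", "b"]:
        cache.fetch(cid, origin_fetch=lambda c: ("payload", c))
    print(cache.hits, cache.misses)  # 2 hits, 4 misses for this request trace

Every hit in the trace is a transmission the backhaul never has to carry, which is the saving the article attributes to caching popular content near users.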
In this article, we first study techniques related to caching in current mobile networks, and discuss potential techniques for caching in 5G mobile networks, including evolved packet core network caching and radio access network caching. A novel edge caching scheme based on the concept of content-centric networking or information-centric networking is proposed. Using trace-driven simulations, we evaluate the performance of the proposed scheme and validate the advantages of caching content in 5G mobile networks. Finally, we conclude the article by exploring relevant new opportunities and challenges."} {"_id": "22fe619996b59c09cb73be40103a123d2e328111", "title": "The German Traffic Sign Recognition Benchmark: A multi-class classification competition", "text": "The \u201cGerman Traffic Sign Recognition Benchmark\u201d is a multi-category classification competition held at IJCNN 2011. Automatic recognition of traffic signs is required in advanced driver assistance systems and constitutes a challenging real-world computer vision and pattern recognition problem. A comprehensive, lifelike dataset of more than 50,000 traffic sign images has been collected. It reflects the strong variations in visual appearance of signs due to distance, illumination, weather conditions, partial occlusions, and rotations. The images are complemented by several precomputed feature sets to allow for applying machine learning algorithms without background knowledge in image processing. The dataset comprises 43 classes with unbalanced class frequencies. Participants have to classify two test sets of more than 12,500 images each. Here, the results on the first of these sets, which was used in the first evaluation stage of the two-fold challenge, are reported. The methods employed by the participants who achieved the best results are briefly described and compared to human traffic sign recognition performance and baseline results."} {"_id": "9ab30776f4fbf24ae2d0ff56054e085b256d158e", "title": "American Sign Language Translation through Sensory Glove ; SignSpeak", "text": "To build a communication bridge, a highly accurate, cost-effective, and independent glove was designed for deaf/mute people to enable them to communicate. The glove translates sign language gestures into speech according to the American Sign Language standard. The glove contained flex and contact sensors to detect the movements of the fingers and bending of the palm. In addition, an accelerometer was built into the glove to measure the acceleration produced by the changing positions of the hand. Principal Component Analysis (PCA) was used to train the glove to recognize various gestures and later classify the gestures into alphabet letters in real time. The glove then established a Bluetooth link with an Android phone, which was used to display the received letters and words and convert the text into speech. The glove was found to have an accuracy of 92%."} {"_id": "78fb896b01cce8e5c6d47a090935169b1e0c276f", "title": "Extreme scale with full SQL language support in microsoft SQL Azure", "text": "Cloud SQL Server is an Internet-scale relational database service which is currently used by Microsoft-delivered services and also offered directly as a fully relational database service known as \"SQL Azure\".
One of the principal design objectives in Cloud SQL Server was to provide true SQL support with full ACID transactions within controlled-scale \"consistency domains\" and to provide a relaxed degree of consistency across consistency domains that remains viable for clusters of thousands of nodes. In this paper, we describe the implementation of Cloud SQL Server with an emphasis on this core design principle."} {"_id": "3ee02dff33c03d98fb5a2cecf298b77171f0d0dc", "title": "Multiwave: Doppler Effect Based Gesture Recognition in Multiple Dimensions", "text": "We constructed an acoustic, gesture-based recognition system called Multiwave, which leverages the Doppler Effect to translate multidimensional movements into user interface commands. Our system only requires the use of two speakers and a microphone to be operational. Since these components are already built into most end-user systems, our design makes gesture-based input more accessible to a wider range of end users. By generating a known high-frequency tone from multiple speakers and detecting movement using changes in the sound waves, we are able to calculate a Euclidean representation of hand velocity that is then used for more natural gesture recognition and thus more meaningful interaction mappings. We present the results of a user study of Multiwave to evaluate recognition rates for different gestures and report accuracy rates comparable to or better than the current state of the art. We also report subjective user feedback and some lessons learned from our system that provide additional insight for future applications of multidimensional gesture recognition."} {"_id": "1a3e334f16b6888843a8150ddf7e4f46b2b28fd5", "title": "An inventory for measuring clinical anxiety: psychometric properties.", "text": "The development of a 21-item self-report inventory for measuring the severity of anxiety in psychiatric populations is described. The initial item pool of 86 items was drawn from three preexisting scales: the Anxiety Checklist, the Physician's Desk Reference Checklist, and the Situational Anxiety Checklist. A series of analyses was used to reduce the item pool. The resulting Beck Anxiety Inventory (BAI) is a 21-item scale that showed high internal consistency (\u03b1 = .92) and test-retest reliability over 1 week, r(81) = .75. The BAI discriminated anxious diagnostic groups (panic disorder, generalized anxiety disorder, etc.) from nonanxious diagnostic groups (major depression, dysthymic disorder, etc.). In addition, the BAI was moderately correlated with the revised Hamilton Anxiety Rating Scale, r(150) = .51, and was only mildly correlated with the revised Hamilton Depression Rating Scale, r(153) = .25."} {"_id": "abc7254b751b124ff98cbf522526cf2ce5376e95", "title": "Manifold surface reconstruction of an environment from sparse Structure-from-Motion data", "text": "The majority of methods for the automatic surface reconstruction of an environment from an image sequence have two steps: Structure-from-Motion and dense stereo. From the computational standpoint, it would be interesting to avoid dense stereo and to generate a surface directly from the sparse cloud of 3D points and their visibility information provided by Structure-from-Motion. The previous attempts to solve this problem are currently very limited: the surface is non-manifold or has zero genus, and the experiments are done on small scenes or objects using a few dozen images. Our solution does not have these limitations.
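Relating back to the Multiwave record above: a minimal sketch of how a Doppler shift in a reflected pilot tone maps to hand velocity. The 18 kHz tone and the helper name are assumptions for illustration, not necessarily what the authors used:

    def doppler_velocity(observed_hz, emitted_hz, speed_of_sound=343.0):
        # Reflection off a moving hand shifts the tone twice (out and back):
        # f_obs ~= f_emit * (1 + 2*v/c)  =>  v = c * (f_obs - f_emit) / (2*f_emit)
        return speed_of_sound * (observed_hz - emitted_hz) / (2.0 * emitted_hz)

    print(doppler_velocity(18010.0, 18000.0))  # ~0.10 m/s toward the sensor

With one such estimate per speaker, several one-dimensional velocities can be combined into the multidimensional velocity vector the abstract describes.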
Furthermore, we experiment with hand-held or helmet-held catadioptric cameras moving in a city and generate 3D models such that the camera trajectory can be longer than one kilometer."} {"_id": "3b938f66d03559e1144fa2ab63a3a9a076a6b48b", "title": "A coordinate gradient descent method for \u21131-regularized convex minimization", "text": "In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., problems of minimizing \u21131-regularized linear least squares objectives. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general \u21131-regularized convex minimization problem, i.e., the problem of minimizing an \u21131-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale \u21131-regularized linear least squares problems arising in compressed sensing and image deconvolution as well as large-scale \u21131-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale \u21131-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms despite the fact that the CGD method is not specifically designed just to solve these special classes of problems."} {"_id": "8ad03b36ab3cba911699fe1699332c6353f227bc", "title": "Application of Computational Intelligence to Improve Education in Smart Cities", "text": "According to UNESCO, education is a fundamental human right and every nation's citizens should be granted universal access with equal quality to it. Because this goal is yet to be achieved in most countries, in particular in the developing and underdeveloped countries, it is extremely important to find more effective ways to improve education. This paper presents a model based on the application of computational intelligence (data mining and data science) that leads to the development of the student's knowledge profile and that can help educators in their decision making for best orienting their students. This model also tries to establish key performance indicators to monitor objectives' achievement within individual strategic planning assembled for each student. The model uses random forest for classification and prediction, graph description for data structure visualization, and recommendation systems to present relevant information to stakeholders. The results presented were built based on a real dataset obtained from a Brazilian private K-9 (elementary) school. The obtained results include correlations among key data, a model to predict student performance, and recommendations generated for the stakeholders."} {"_id": "bf45ece00d825c2a9a8cdf333965b7acded1a128", "title": "On self-aggrandizement and anger: a temporal analysis of narcissism and affective reactions to success and failure.", "text": "Narcissists are thought to display extreme affective reactions to positive and negative information about the self.
Two experiments were conducted in which high- and low-narcissistic individuals, as defined by the Narcissistic Personality Inventory (NPI), completed a series of tasks in which they both succeeded and failed. After each task, participants made attributions for their performance and reported their moods. High-NPI participants responded with greater changes in anxiety, anger, and self-esteem. Low self-complexity was examined, but it neither mediated nor moderated affective responses. High-NPI participants tended to attribute initial success to ability, leading to more extreme anger responses and greater self-esteem reactivity to failure. A temporal sequence model linking self-attribution and emotion to narcissistic rage is discussed."} {"_id": "a74720eeb8a9f6289dbece69902e85acf0e99871", "title": "Optimal and Low-Complexity Algorithms for Dynamic Spectrum Access in Centralized Cognitive Radio Networks with Fading Channels", "text": "In this paper, we develop a centralized spectrum sensing and Dynamic Spectrum Access (DSA) scheme for secondary users (SUs) in a Cognitive Radio (CR) network. Assuming that the primary channel occupancy follows a Markovian evolution, the channel sensing problem is modeled as a Partially Observable Markov Decision Process (POMDP). We assume that each SU can sense only one channel at a time by using energy detection, and the sensing outcomes are then reported to a central unit, called the secondary system decision center (SSDC), that determines the channel sensing/accessing policies. We derive both the optimal channel assignment policy for secondary users to sense the primary channels and the optimal channel access rule. Our proposed optimal sensing and accessing policies alleviate many shortcomings and limitations of existing proposals: (a) they allow fully utilizing all available primary spectrum white spaces; (b) our model, and thus the proposed solution, exploits the temporal and spatial diversity across different primary channels; and (c) they are based on realistic local sensing decisions rather than complete knowledge of the primary signalling structure. As an alternative to the high complexity of the optimal channel sensing policy, a suboptimal sensing policy is obtained by using the Hungarian algorithm iteratively, which reduces the complexity of the channel assignment from an exponential to a polynomial order. We also propose a heuristic algorithm that reduces the complexity of the sensing policy further to a linear order. The simulation results show that the proposed algorithms achieve near-optimal performance with a significant reduction in computational time."} {"_id": "b24a8abf5f70c1ac20e77b7646efdd6f2058ff16", "title": "Compact Design of a Hydraulic Driving Robot for Intraoperative MRI-Guided Bilateral Stereotactic Neurosurgery", "text": "In this letter, we present an intraoperative magnetic resonance imaging (MRI)-guided robot for bilateral stereotactic procedures. Its compact design enables the robot's operation within the constrained space of a standard imaging head coil. MR-safe and high-performance hydraulic transmissions are incorporated. A maximum stiffness coefficient of 24.35\u00a0N/mm can be achieved with transmission fluid preloaded at 2\u00a0bar. Sufficient targeting accuracy (average within \u22641.73\u00a0mm) has been demonstrated in a simulated needle insertion task of deep brain stimulation. A novel MR-based wireless tracking technique is adopted.
It is capable of offering real-time and continuous (30\u201340\u00a0Hz) three-dimensional (3-D) localization of the robotic instrument under the proper MR tracking sequence. It outperforms the conventional method of using low-contrast passive fiducials, which can only be revealed in the MR image domain. Two wireless tracking units/markers, miniaturized coil circuits measuring 1.5\u00a0\u00d7\u00a05\u00a0\u00d7\u00a00.2\u00a0mm3 and fabricated on flexible thin films, are integrated with the robot. A navigation test was performed under standard MRI settings in order to visualize the 3-D localization of the robotic instrument. An MRI compatibility test was also carried out to demonstrate the minimal interference of the presented hydraulic robotic platform with MR images."} {"_id": "0d5676d90f20215d08dfe7e71fb55303f23604f7", "title": "The Cricket location-support system", "text": "This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration."} {"_id": "8476714438669d5703690d4bbee9bfe751f61144", "title": "Debunking the Myths of Influence Maximization: An In-Depth Benchmarking Study", "text": "Influence maximization (IM) on social networks is one of the most active areas of research in computer science. While various IM techniques proposed over the last decade have definitely enriched the field, experimental reports on existing techniques unfortunately fall short in validity and integrity, since many comparisons are not based on a common platform or are merely discussed in theory. In this paper, we perform an in-depth benchmarking study of IM techniques on social networks. Specifically, we design a benchmarking platform, which enables us to evaluate and compare the existing techniques systematically and thoroughly under identical experimental conditions. Our benchmarking results analyze and diagnose the inherent deficiencies of the existing approaches and surface the open challenges in IM even after a decade of research. More fundamentally, we unearth and debunk a series of myths and establish that there is no single state-of-the-art technique in IM.
At best, a technique is the state of the art in only one aspect."} {"_id": "f8f5a4a3c1ad91d4971758f393f115af203ea8aa", "title": "A 5.8-Gb/s Adaptive Integrating Duobinary DFE Receiver for Multi-Drop Memory Interface", "text": "This paper describes a 5.8 Gb/s adaptive integrating duobinary decision-feedback equalizer (DFE) for use in next-generation multi-drop memory interfaces. The proposed receiver combines traditional interface techniques like integrated signaling and duobinary signaling, in which the duobinary signal is generated by current integration in the receiver. It can address issues such as input data dependence during integration, the need for precursor equalization, high equalizer gain boosting, and sensitivity to high-frequency noise. The proposed receiver also alleviates DFE critical timing to provide gain in speed, and embeds DFE taps in duobinary decoding to provide gain in power and area. Adaptation for adjusting the equalizer common-mode level, duobinary zero level, tap coefficient values, and timing recovery is incorporated. The proposed DFE receiver was fabricated in a 45 nm CMOS process. Measurement results indicate that it works at 5.8 Gb/s in a four-drop channel configuration with seven slave ICs, and the bathtub curve shows a 36% open eye at a $10^{-10}$ bit error rate."} {"_id": "0d81a9e8545dc31d441f2c58a548371e3d888a35", "title": "A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory", "text": "Since the definition of the busy beaver function by Rado in 1962, an interesting open question has been the smallest value of n for which BB(n) is independent of Zermelo\u2013Fraenkel set theory with the axiom of choice (ZFC). Is this n approximately 10, or closer to 1 000 000, or is it even larger? In this paper, we show that it is at most 7910 by presenting an explicit description of a 7910-state Turing machine Z with one tape and a two-symbol alphabet that cannot be proved to run forever in ZFC (even though it presumably does), assuming ZFC is consistent. The machine is based on work of Harvey Friedman on independent statements involving order-invariant graphs. In doing so, we give the first known upper bound on the highest provable busy beaver number in ZFC. To create Z, we develop and use a higher-level language, Laconic, which is much more convenient than direct state manipulation. We also use Laconic to design two Turing machines, G and R, that halt if and only if there are counterexamples to Goldbach\u2019s conjecture and the Riemann hypothesis, respectively."} {"_id": "43ea93b01be7d3eed2641b9393c6438d19b825a0", "title": "Graph analytics using vertica relational database", "text": "Graph analytics is becoming increasingly popular, with a number of new applications and systems developed in the past few years. In this paper, we study the Vertica relational database as a platform for graph analytics. We show that vertex-centric graph analysis can be translated to SQL queries, typically involving table scans and joins, and that modern column-oriented databases are very well suited to running such queries. Furthermore, we show how developers can trade memory footprint for significantly reduced I/O costs in Vertica.
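A toy rendering of the scan-and-join pattern just described, using SQLite (not Vertica) so the sketch is self-contained; the schema and data are invented for illustration. Each pass plays the role of one vertex-centric superstep of single-source shortest paths:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE edge(src INTEGER, dst INTEGER);
        CREATE TABLE dist(node INTEGER PRIMARY KEY, d INTEGER);
        INSERT INTO edge VALUES (1,2),(2,3),(1,3),(3,4);
        INSERT INTO dist VALUES (1,0),(2,999999),(3,999999),(4,999999);
    """)
    for _ in range(3):  # one superstep per pass: scan, join, update
        conn.executescript("""
            DROP TABLE IF EXISTS relaxed;
            CREATE TABLE relaxed AS
                SELECT e.dst AS node, MIN(s.d + 1) AS d
                FROM edge e JOIN dist s ON s.node = e.src
                GROUP BY e.dst;
        """)
        conn.execute("""
            UPDATE dist
            SET d = MIN(d, COALESCE(
                (SELECT r.d FROM relaxed r WHERE r.node = dist.node), d))
        """)
    print(conn.execute("SELECT node, d FROM dist ORDER BY node").fetchall())
    # [(1, 0), (2, 1), (3, 1), (4, 2)] -- hop counts from node 1

The message-passing step of a vertex-centric system becomes the edge-to-distance join, and the vertex update becomes the grouped MIN plus UPDATE, which is the translation the paper's claim rests on.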
We present an experimental evaluation of the Vertica relational database system on a variety of graph analytics, including iterative analysis, a combination of graph and relational analyses, and more complex 1-hop neighborhood graph analytics, showing that it is competitive with two popular vertex-centric graph analytics systems, namely Giraph and GraphLab."} {"_id": "6149339ccd74f9c38ec67281651ad8efa3b375f6", "title": "On the Reuse of Past Optimal Queries", "text": "Information Retrieval (IR) systems exploit user feedback by generating an optimal query with respect to a particular information need. Since obtaining an optimal query is an expensive process, the need for mechanisms to save and reuse past optimal queries for future queries is obvious. In this article, we propose the use of a query base, a set of persistent past optimal queries, and investigate similarity measures between queries. The query base can be used either to answer user queries or to formulate optimal queries. We justify the former case analytically and the latter case by experiment."} {"_id": "14423a62355ff3a1cff997f24534390921f854b5", "title": "Learning to map between ontologies on the semantic web", "text": "Ontologies play a prominent role on the Semantic Web. They make possible the widespread publication of machine-understandable data, opening myriad opportunities for automated information processing. However, because of the Semantic Web's distributed nature, data on it will inevitably come from many different ontologies. Information processing across ontologies is not possible without knowing the semantic mappings between their elements. Manually finding such mappings is tedious, error-prone, and clearly not possible at the Web scale. Hence, the development of tools to assist in the ontology mapping process is crucial to the success of the Semantic Web. We describe GLUE, a system that employs machine learning techniques to find such mappings. Given two ontologies, for each concept in one ontology GLUE finds the most similar concept in the other ontology. We give well-founded probabilistic definitions to several practical similarity measures, and show that GLUE can work with all of them. This is in contrast to most existing approaches, which deal with a single similarity measure. Another key feature of GLUE is that it uses multiple learning strategies, each of which exploits a different type of information either in the data instances or in the taxonomic structure of the ontologies. To further improve matching accuracy, we extend GLUE to incorporate commonsense knowledge and domain constraints into the matching process. For this purpose, we show that relaxation labeling, a well-known constraint optimization technique used in computer vision and other fields, can be adapted to work efficiently in our context. Our approach is thus distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge.
We describe a set of experiments on several real-world domains, and show that GLUE proposes highly accurate semantic mappings."} {"_id": "500923d2513d30299350a6a0e9b84b077250dc78", "title": "Determining Semantic Similarity among Entity Classes from Different Ontologies", "text": "Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent, entity classes."} {"_id": "1c58b4c7adee37874ac96f7d859d1a51f97bf6aa", "title": "Issues in Stacked Generalization", "text": "Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered to be a \u2018black art\u2019 in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We find that best results are obtained when the higher-level model combines the confidence (and not just the predictions) of the lower-level ones. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms for classification tasks. We also compare the performance of stacked generalization with majority vote and published results of arcing and bagging."} {"_id": "c96f0b75eabfdf1797b69ac3a92487b45c2d9855", "title": "PROMPT: Algorithm and Tool for Automated Ontology Merging and Alignment", "text": "Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the World-Wide Web where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. The processes of ontology alignment and merging are usually handled manually and often constitute a large and tedious portion of the sharing process. We have developed and implemented PROMPT, an algorithm that provides a semi-automatic approach to ontology merging and alignment. PROMPT performs some tasks automatically and guides the user in performing other tasks for which his intervention is required.
PROMPT also determines possible inconsistencies in the state of the ontology, which result from the user\u2019s actions, and suggests ways to remedy these inconsistencies. PROMPT is based on an extremely general knowledge model and therefore can be applied across various platforms. Our formative evaluation showed that a human expert followed 90% of the suggestions that PROMPT generated and that 74% of the total knowledge-base operations invoked by the user were suggested by PROMPT. 1 Ontologies in AI and on the Web Ontologies today are available in many different forms: as artifacts of a tedious knowledge-engineering process, as information that was extracted automatically from informal electronic sources, or as simple \u201clight-weight\u201d ontologies that specify semantic relationships among resources available on the World-Wide Web (Brickley and Guha 1999). But what does a user do when he finds several ontologies that he would like to use but that do not conform to one another? The user must establish correspondences among the source ontologies and determine the set of overlapping concepts: concepts that are similar in meaning but have different names or structure, and concepts that are unique to each of the sources. This work must be done regardless of whether the ultimate goal is to create a single coherent ontology that includes the information from all the sources (merging) or if the sources must be made consistent and coherent with one another but kept separately (alignment). Currently the work of mapping, merging, or aligning ontologies is performed mostly by hand, without any tools to automate the process fully or partially (Fridman Noy and Musen 1999). Our participation in the ontology-alignment effort within DARPA\u2019s High-Performance Knowledge Bases project (Cohen et al. 1999) was a strong motivation for developing semi-automated specialized tools for ontology merging and alignment. Several teams developed ontologies in the domain of military planning, which then needed to be aligned to one another. We found the experience of manually aligning the ontologies to be an extremely tedious and time-consuming process. At the same time we noticed many steps in the process that could be automated, many points where a tool could make reasonable suggestions, and many conflicts and constraint violations for which a tool could check. We developed a formalism-independent algorithm for ontology merging and alignment\u2014PROMPT (formerly SMART)\u2014which automates the process as much as possible. Where an automatic decision is not possible, the algorithm guides the user to the places in the ontology where his intervention is necessary, suggests possible actions, and determines the conflicts in the ontology and proposes solutions for these conflicts. We implemented the algorithm in an interactive tool based on the Prot\u00e9g\u00e9-2000 knowledge-modeling environment (Fridman Noy et al. 2000). Prot\u00e9g\u00e9-2000 is an ontology-design and knowledge-acquisition tool with an OKBC-compatible (Chaudhri et al. 1998) knowledge model, which allows domain experts (and not necessarily knowledge engineers) to develop ontologies and perform knowledge acquisition.
We have evaluated PROMPT, comparing its performance with human-expert performance and with the performance of another ontology-merging tool."} {"_id": "0bdb56c3e9141d654a4809f014d0369305f1e7fd", "title": "You are what you say: privacy risks of public mentions", "text": "In today's data-rich networked world, people express many aspects of their lives online. It is common to segregate different aspects in different places: you might write opinionated rants about movies in your blog under a pseudonym while participating in a forum or web site for scholarly discussion of medical ethics under your real name. However, it may be possible to link these separate identities, because the movies, journal articles, or authors you mention are from a sparse relation space whose properties (e.g., many items related to by only a few users) allow re-identification. This re-identification violates people's intentions to separate aspects of their life and can have negative consequences; it also may allow other privacy violations, such as obtaining a stronger identifier like name and address. This paper examines this general problem in a specific setting: re-identification of users from a public web movie forum in a private movie ratings dataset. We present three major results. First, we develop algorithms that can re-identify a large proportion of public users in a sparse relation space. Second, we evaluate whether private dataset owners can protect user privacy by hiding data; we show that this requires extensive and undesirable changes to the dataset, making it impractical. Third, we evaluate two methods for users in a public forum to protect their own privacy, suppression and misdirection. Suppression doesn't work here either. However, we show that a simple misdirection strategy works well: mention a few popular items that you haven't rated."} {"_id": "d5d034e16247370356ccca7a9920138fd70d0bbb", "title": "Richard Thaler and the Rise of Behavioral Economics", "text": "The emergence of behavioral economics is one of the most prominent conceptual developments in the social sciences in recent decades. The central figure in the field in its early years was Richard Thaler. In this article, I review and discuss his scientific contributions. JEL classification: B2, D9, G1"} {"_id": "594f60fdc0e99c136f527c152ff22cf625879abb", "title": "Development of a deep silicon phase Fresnel lens using Gray-scale lithography and deep reactive ion etching", "text": "We report the first fabrication and development of a deep phase Fresnel lens (PFL) in silicon through the use of gray-scale lithography and deep reactive ion etching (DRIE). A Gaussian tail approximation is introduced as a method of predicting the height of photoresist gray levels given the relative amount of transmitted light through a gray-scale optical mask. Device mask design is accomplished through command-line scripting in a CAD tool to precisely define the millions of pixels required to generate the appropriate profile in photoresist. Etch selectivity during DRIE pattern transfer is accurately controlled to produce the desired scaling factor between the photoresist and silicon profiles.
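For intuition about the lens geometry, a small sketch of the textbook Fresnel zone-plate radii, using the design values this record quotes (8.4 keV photons, 118 m focal length); this is standard zone-plate math, not the authors' Gaussian-tail resist model:

    import math

    def zone_radii(energy_keV, focal_m, n_zones):
        # Classic zone radii r_n = sqrt(n*lambda*f + (n*lambda/2)^2),
        # with hc ~= 12.398 keV*Angstrom to convert energy to wavelength.
        lam = (12.398 / energy_keV) * 1e-10  # wavelength in metres
        return [math.sqrt(n * lam * focal_m + (n * lam / 2.0) ** 2)
                for n in range(1, n_zones + 1)]

    for n, r in enumerate(zone_radii(8.4, 118.0, 4), start=1):
        print(f"zone {n}: r = {r * 1e6:.1f} um")  # innermost zones, ~100-300 um

With these values the zone radii reach roughly 0.8 mm after a few dozen zones, consistent with the 1.6-mm lens diameter reported next.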
As a demonstration of this technology, a 1.6-mm diameter PFL is etched 43 \u03bcm into silicon, with each grating profile designed to focus 8.4 keV photons at a distance of 118 m."} {"_id": "de864fce3469f0175fc0ff2869ddbe8d144cf53e", "title": "A Comparative Analysis of Enterprise Architecture Frameworks Based on EA Quality Attributes", "text": "Many Enterprise Architecture Frameworks (EAFs) are in use today to guide or serve as a model for enterprise architecture development. Researchers offer general comparative information about EAFs. However, there has been little work on the quality of Enterprise Architecture (EA). This study provides the characteristics of the five EAFs using a comparative analysis based on Enterprise Architecture Quality Attributes (EAQAs). The attributes of EA quality were extracted from the EAFs, and each EAQA was defined from the EA user viewpoint. The criteria for each quality attribute were developed to compare EAFs using four-dimensional concepts. This paper compared several frameworks using the criteria to provide guidance in the selection of an EAF that meets the quality requirements of EA."} {"_id": "3b55f6e637d13503bf7d0ff3b2a0f4b2f8814239", "title": "Rational Choice Theory : Toward a Psychological , Social , and Material Contextualization of Human Choice Behavior", "text": "The main purpose of this paper is to provide a brief overview of the rational choice approach, followed by an identification of several of the major criticisms of RCT and its conceptual and empirical limitations. It goes on to present a few key initiatives to develop alternative, more realistic approaches which transcend some of the limitations of Rational Choice Theory (RCT). Finally, the article presents a few concluding reflections and a table comparing similarities and differences between the mainstream RCT and some of the initial components of an emerging choice theory. Our method has been to conduct a brief selective review of rational choice theoretical formulations and applications as well as a review of diverse critical literature in the social sciences where rational choice has been systematically criticized. We have focused on a number of leading contributors (among others, several Nobel Prize recipients in economics who have addressed rational choice issues). So this article makes no claim for completeness. The review maps a few key concepts and assumptions underpinning the conceptual model and empirical applications of RCT. It also reviews a range of critical arguments and evidence of limitations. It identifies selected emerging concepts and theoretical revisions and adaptations to choice theory and what they entail. The results obtained, based on our literature reviews and analyses, are the identification of several major limitations of RCT as well as selected modifications and adaptations of choice theory which overcome or promise to overcome some of the RCT limitations. Thus, the article with Table 1 in hand provides a point of departure for follow-up systematic reviews and more precise questions for future theory development. The criticisms and adaptations of RCT have contributed to greater realism, empirical relevance, and increased moral considerations. The developments entail, among other things: the now well-known cognitive limitations (\u201cbounded rationality\u201d) and, for instance, the role of satisficing rather than maximizing in decision-making to deal with cognitive
complexity and the uncertainties of multiple values; choice situations are re-contextualized with psychological, social, economic, and material conditions and factors, which are taken into account explicitly and insightfully in empirical and theoretical work. Part of the contextualization concerns the place of multiple values, role and norm contradictions, and moral dilemmas in much choice behavior. In concluding, the article suggests that the adaptations and modifications made in choice theory have led to substantial fragmentation of choice theory, and as yet no integrated approach has appeared to simply displace RCT."} {"_id": "0aa2ba1d8ffd0802cc12468bb566da029c0a9230", "title": "SASI: A New Ultralightweight RFID Authentication Protocol Providing Strong Authentication and Strong Integrity", "text": "As low-cost RFIDs become more and more popular, it is imperative to design ultralightweight RFID authentication protocols to resist all possible attacks and threats. However, all of the previous ultralightweight authentication schemes are vulnerable to various attacks. In this paper, we propose a new ultralightweight RFID authentication protocol that provides strong authentication and strong integrity protection of its transmission and of updated data. The protocol requires only simple bit-wise operations on the tag and can resist all the possible attacks. These features make it very attractive to low-cost RFIDs and very low-cost RFIDs."} {"_id": "017ee86aa9be09284a2e07c9200192ab3bea9671", "title": "Self-Supervised Generative Adversarial Networks", "text": "Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labelled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, to close the gap between conditional and unconditional GANs. In particular, we allow the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game. The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training. We test empirically both the quality of the learned image representations and the quality of the synthesized images. Under the same conditions, the self-supervised GAN attains a similar performance to state-of-the-art conditional counterparts. Finally, we show that this approach to fully unsupervised learning can be scaled to attain an FID of 33 on unconditional IMAGENET generation."} {"_id": "0fa3f365a1e7803df60d7298e6410a0b8b4063fc", "title": "Digital Transformation in the Automotive Industry: towards a Generic Value Network", "text": "The emergence of digital innovations is accelerating and is intervening in existing business models by delivering opportunities for new services. In the automotive industry, leading trends like self-driving cars, connectivity, and car sharing are creating new business models. These are simultaneously giving rise to innovative market entrants, which begin to transform the automotive industry. However, the literature does not provide a generic value network of the automotive industry that includes new market players. The paper aims to visualize the current automotive ecosystem by evolving a generic value network using the e3-value method.
We define the different roles operating in the automotive industry by analyzing 650 companies reported in the Crunchbase database, and we present the value streams within the ecosystem. To validate the proposed generic value network, we conducted five preliminary interviews with experts from the automotive industry. Our results show the central role of mobility service platforms, the emergence of disruptive technology providers, and the convergence of industries, e.g., as OEMs collaborate with mobile payment providers. Scholars in this field can apply the developed generic value network for further research, while car manufacturers may apply the model to position themselves in their market and to identify possible disruptive actors or potential business opportunities."} {"_id": "ac1aa7aa49192ed51c03c862941ba04e420de7a8", "title": "Design of RYSEN: An Intrinsically Safe and Low-Power Three-Dimensional Overground Body Weight Support", "text": "Body weight support (BWS) systems are widely used in gait research and rehabilitation. This letter introduces a new three-dimensional overground BWS system, called the RYSEN. The RYSEN is designed to be intrinsically safe and low-power, while still performing at least as well as existing BWS systems regarding human\u2013robot interaction. These features are mainly achieved by decoupling degrees of freedom between motors: slow/high-torque motors for vertical motion and fast/low-torque motors for horizontal motion. This letter explains the design and evaluates its performance in terms of power consumption and safety. Power consumption is expressed as the sum of the positive mechanical output power of all motor axes. Safety is defined as the difference between the mechanical power available for horizontal and vertical movements and the mechanical power needed to perform the task. The results of the RYSEN are compared to the performance of three similar systems: a gantry, the FLOAT, and a classic cable robot. The results show that the RYSEN and a gantry consume approximately the same amount of power. This amount is approximately half the power consumed by the next-best system. For safety, the gantry is taken as the benchmark because of its perfect decoupling of directions. The RYSEN has a surplus of 268\u00a0W and 126 W for horizontal and vertical movements, respectively. This is significantly lower than the next-best system, which has a surplus of 1088 W and 1967 W, respectively."} {"_id": "ccebcfc9f1f5d4355bebf0756f6f1da965629dd9", "title": "Interpretable and effective opinion spam detection via temporal patterns mining across websites", "text": "Millions of ratings and reviews on online review websites influence business revenues and customer experiences. However, spammers are posting fake reviews in order to gain financial benefits, at the cost of harming honest businesses and customers. Such fake reviews can be illegal, and it is important to detect spamming attacks to eliminate unjust ratings and reviews. However, most current approaches can be ineffective, as they can only utilize data from individual websites independently or fail to detect more subtle attacks even when they can fuse data from multiple sources. Further, the revealed evidence fails to explain the more complicated real-world spamming attacks, hindering the detection processes that usually have human experts in the loop. We close this gap by introducing a novel framework that can jointly detect and explain the potential attacks.
The framework mines both macroscopic-level temporal sentimental patterns and microscopic-level features from multiple review websites. We construct multiple sentimental time series to detect atomic dynamics, based on which we mine various cross-site sentimental temporal patterns that can explain various attacking scenarios. To further identify individual spam reviews within the attacks with more evidence, we study and identify effective microscopic textual and behavioral features that are indicative of spam. We demonstrate via human annotations that the simple and effective framework can spot a sizable collection of spam reviews that have bypassed one of the current commercial anti-spam systems."} {"_id": "5c695f1810951ad1bbdf7da5f736790dca240e5b", "title": "Aspect based sentiment analysis in social media with classifier ensembles", "text": "The analysis of user generated content on social media and the accurate specification of user opinions towards products and events is quite valuable to many applications. With the proliferation of Web 2.0 and the rapid growth of user-generated content on the web, approaches to aspect-level sentiment analysis that yield fine-grained information are of great interest. In this work, a classifier ensemble approach for aspect based sentiment analysis is presented. The approach is generic and utilizes latent Dirichlet allocation to model a topic and to specify the main aspects that users address. Then, each comment is further analyzed and word dependencies that indicate the interactions between words and aspects are extracted. An ensemble classifier formulated from naive Bayes, maximum entropy and support vector machines is designed to recognize the polarity of the user's comment towards each aspect. The evaluation results show sound improvement compared to individual classifiers and indicate that the ensemble system is scalable and accurate in analyzing user generated content and in specifying users' opinions and attitudes."} {"_id": "f2942c955ad0200c43a4c6924784c6875ca3ece8", "title": "Bioaccumulation of nutrients and metals in sediment, water, and phoomdi from Loktak Lake (Ramsar site), northeast India: phytoremediation options and risk assessment.", "text": "In order to determine the potential of phoomdi to accumulate nutrients and metals, 11 dominant species belonging to 10 different families, sediment, and water were analyzed for a period of 2\u00a0years from the largest freshwater wetland of north-east India, Loktak (Ramsar site). Results revealed nutrient (TN and TP) and metal (Fe, Mn, Zn, and Cu) compartmentalization in the order phoomdi > sediment > water. Iron concentrations in water (0.37\u2009\u00b1\u20090.697 to 0.57\u2009\u00b1\u20091.010\u00a0mg\u00a0L(-1)) and sediments (81.8\u2009\u00b1\u20090.45 to 253.1\u2009\u00b1\u20090.51\u00a0mg\u00a0kg(-1)) show high metal discharge into the wetland. Metal accumulation in phoomdi ranged up to 212.3\u2009\u00b1\u20090.46-9461.4\u2009\u00b1\u20091.09\u00a0mg\u00a0kg(-1) for Fe; 85.9\u2009\u00b1\u20090.31-3565.1\u2009\u00b1\u20090.87\u00a0mg\u00a0kg(-1) for Mn; 9.6\u2009\u00b1\u20090.41-85.39\u2009\u00b1\u20090.58\u00a0mg\u00a0kg(-1) for Zn; and 0.31\u2009\u00b1\u20090.04-9.2\u2009\u00b1\u20090.04\u00a0mg\u00a0kg(-1) for Cu, respectively. High bioaccumulation factors (BAF) for metals (S. cucullata, 5.8\u2009\u00d7\u200910(4) Fe, 3.9\u2009\u00d7\u200910(4) Mn, and 1.7\u2009\u00d7\u200910(4) Cu, and O. javanica, 4.9\u2009\u00d7\u200910(3) Zn) and nutrients (S. polyrhiza, 9.7\u2009\u00d7\u200910(2) TN, and Z. 
latifolia, 7.9\u2009\u00d7\u200910(4) TP) revealed good accumulation in phoomdi compared to the wetland water column and indicated their potential to maintain a safe environment in Loktak. Further, the paper analyzed the health hazard of metals via consumption of phoomdi as a wild edible, with the results confirming potential risk. Thus, the paper showed the need for in-depth monitoring and sound management strategies to safeguard the nutritional safety of locals from the metals."} {"_id": "e529218f0071de888146f7efedf5db1d74999044", "title": "Me, myself, and lie: The role of self-awareness in deception", "text": "Deception has been studied extensively but still little is known about individual differences in deception ability. We investigated the relationship between self-awareness and deception ability. We enlisted novice actors to portray varying levels of deception. Forty-two undergraduates viewed the videotaped portrayals and rated the actors' believability. Actors with high private self-awareness were more effective deceivers, suggesting that high self-monitors are more effective at deceiving. Self-awareness may lead to knowledge of another's mental state (i.e., Theory of Mind), which may improve an individual's deception ability. \u00a9 2005 Elsevier Ltd. All rights reserved."} {"_id": "e5aa59866cd8df4f2b88ee6411866a46756b28e1", "title": "A Location-Sentiment-Aware Recommender System for Both Home-Town and Out-of-Town Users", "text": "Spatial item recommendation has become an important means to help people discover interesting locations, especially when people pay a visit to unfamiliar regions. Some current studies focus on modelling individual and collective geographical preferences for spatial item recommendation based on users' check-in records, but they fail to explore the phenomenon of user interest drift across geographical regions, i.e., users would show different interests when they travel to different regions. Besides, they ignore the influence of public comments on subsequent users' check-in behaviors. Specifically, it is intuitive that users would refuse to check in to a spatial item whose historical reviews seem negative overall, even though it might fit their interests. Therefore, it is necessary to recommend the right item to the right user at the right location. In this paper, we propose a latent probabilistic generative model called LSARS to mimic the decision-making process of users' check-in activities both in home-town and out-of-town scenarios by adapting to user interest drift and crowd sentiments, which can learn location-aware and sentiment-aware individual interests from the contents of spatial items and user reviews. Due to the sparsity of user activities in out-of-town regions, LSARS is further designed to incorporate the public preferences learned from local users' check-in behaviors. Finally, we deploy LSARS into two practical application scenes: spatial item recommendation and target user discovery. Extensive experiments on two large-scale location-based social networks (LBSNs) datasets show that LSARS achieves better performance than existing state-of-the-art methods."} {"_id": "0ae944eb32cdce405125f948b2eef2e7c0512fd3", "title": "HF, VHF, and UHF Systems and Technology", "text": "A wide variety of unique systems and components inhabits the HF, VHF, and UHF bands. Many communication systems (ionospheric, meteor-burst, and troposcatter) provide beyond-line-of-sight coverage and operate independently of external infrastructure. 
Broadcasting and over-the-horizon radar also operate in these bands. Magnetic-resonance imaging uses HF/VHF signals to see the interior of a human body, and RF heating is used in a variety of medical and industrial applications. Receivers typically employ a mix of analog and digital-signal-processing techniques. Systems for these frequencies make use of RF-power MOSFETs, p-i-n diodes, and ferrite-loaded transmission-line transformers."} {"_id": "b274ec2cd276a93d899ae923932d27293417b26e", "title": "Experimental study of desalination using direct contact membrane distillation: a new approach to flux enhancement", "text": "New membrane distillation configurations and a new membrane module were investigated to improve water desalination. The performances of three hydrophobic microporous membranes were evaluated under vacuum enhanced direct contact membrane distillation (DCMD) with a turbulent flow regime and with a feed water temperature of only 40 \u00b0C. The new configurations provide reduced temperature polarization effects due to better mixing and increased mass transport of water due to higher permeability through the membrane and due to a total pressure gradient across the membrane. Comparison with previously reported results in the literature reveals that mass transport of water vapors is substantially improved with the new approach. The performance of the new configuration was investigated with both NaCl and synthetic sea salt feed solutions. Salt rejection was greater than 99.9% in almost all cases. Salt concentrations in the feed stream had only a minor effect on water flux. The economic aspects of the enhanced DCMD process are briefly discussed and comparisons are made with the reverse osmosis (RO) process for desalination. \u00a9 2003 Elsevier B.V. All rights reserved."} {"_id": "dd9970b4a06ff90f287d761bf9e9cf09a6483400", "title": "Dynamic hand gesture recognition using hidden Markov models", "text": "Hand gesture has become a powerful means for human-computer interaction. Traditional gesture recognition considers only hand trajectory. For some specific applications, such as virtual reality, more natural gestures are needed, which are complex and contain movement in 3-D space. In this paper, we introduce an HMM-based method to recognize complex single-hand gestures. Gesture images are captured by a common web camera. Skin color is used to segment the hand area from the image to form a hand image sequence. Then we put forward a state-based spotting algorithm to split continuous gestures. After that, feature extraction is executed on each gesture. Features used in the system contain hand position, velocity, size, and shape. We propose a data-aligning algorithm to align feature vector sequences for training. Then an HMM is trained separately for each gesture. The recognition results demonstrate that our methods are effective and accurate."} {"_id": "6985a4cb132b23778cd74bd9abed8764d103ae59", "title": "Human Action Recognition Using Multi-Velocity STIPs and Motion Energy Orientation Histogram", "text": "Local image features in space-time or spatio-temporal interest points provide compact and abstract representations of patterns in a video sequence. In this paper, we present a novel human action recognition method based on multi-velocity spatio-temporal interest points (MVSTIPs) and a novel local descriptor called motion energy (ME) orientation histogram (MEOH). 
The MVSTIP detection includes three steps: first, filtering video frames with multi-direction ME filters at different speeds to detect significant changes at the pixel level; thereafter, a surround suppression model is employed to rectify the ME deviation caused by the camera motion and complicated backgrounds (e.g., dynamic texture); finally, MVSTIPs are obtained with local maximum filters at multiple speeds. After detection, we develop the MEOH descriptor to capture the motion features in local regions around interest points. The performance of the proposed method is evaluated on the KTH, Weizmann, and UCF sports human action datasets. Results show that our method is robust to both simple and complex backgrounds and that the method is superior to other methods that are based on local features."} {"_id": "02677b913f430f419ee8a8f2a8860d1af5e86b63", "title": "Pixelwise View Selection for Unstructured Multi-View Stereo", "text": "This work presents a Multi-View Stereo system for robust and efficient dense modeling from unstructured image collections. Our core contributions are the joint estimation of depth and normal information, pixelwise view selection using photometric and geometric priors, and a multi-view geometric consistency term for the simultaneous refinement and image-based depth and normal fusion. Experiments on benchmarks and large-scale Internet photo collections demonstrate state-of-the-art performance in terms of accuracy, completeness, and efficiency. Fig. 1. Reconstructions for Louvre, Todai-ji, Paris Opera, and Astronomical Clock."} {"_id": "b75f57b742cbfc12fe15790ce27e75ed5f4a9349", "title": "Hybrid Forest: A Concept Drift Aware Data Stream Mining Algorithm", "text": "Nowadays, with a growing number of online control systems in organizations and a high demand for monitoring and statistics facilities that use data streams to log and control their subsystems, data stream mining becomes more and more vital. Hoeffding Trees (also called Very Fast Decision Trees, a.k.a. VFDT), as a Big Data approach to dealing with data streams for classification and regression problems, showed good performance in handling these challenges and enabling any-time prediction. Although these methods outperform other methods, e.g., Artificial Neural Networks (ANN) and Support Vector Regression (SVR), they suffer from high latency in adapting to new concepts when the statistical distribution of incoming data changes. In this article, we introduce a new algorithm that can detect and handle the concept drift phenomenon properly. This algorithm also benefits from a fast startup ability, which helps systems predict earlier than other algorithms at the beginning of data stream arrival. We have also shown that our approach outperforms other competing approaches for classification and regression tasks."} {"_id": "f9bf3d4810d169014474cab501ffb09abf0ac9db", "title": "An induction furnace employing with half bridge series resonant inverter", "text": "In this paper, an induction furnace employing a half-bridge series resonant inverter built around IGBTs as its switching devices, suitable for melting 500 grams of brass, is presented. Melting times of 10 minutes were achieved at a power level of approximately 4 kW. The operating frequency is automatically tracked to maintain a small constant lagging phase angle using a dual phase-locked loop when brass is melted. The coil voltage is controlled to protect the resonant capacitors. 
The experimental results are presented."} {"_id": "d539082b5b951fb38a7d6e2a6d1dd73d2d67d366", "title": "A Review of Several Optimization Problems Related to Security in Networked System", "text": "Security issues are becoming more and more important to the activities of individuals, organizations and society in our modern networked computerized world. In this chapter we survey a few optimization frameworks for problems related to the security of various networked systems such as the internet or the power grid."} {"_id": "fb7dd133450e99abd8449ae1941e5ed4cf267eea", "title": "Improving deep learning performance using random forest HTM cortical learning algorithm", "text": "Deep Learning is an artificial intelligence function that imitates the mechanisms of the human mind in processing records and developing patterns to be used in decision making. The objective of the paper is to improve the performance of deep learning using a proposed algorithm called RFHTMC. This proposed algorithm is a merged version of the Random Forest and HTM Cortical Learning algorithms. The methodology for improving the performance of Deep Learning depends on minimizing the mean absolute percentage error, which is an indication of a high-performing forecast procedure, and on the overlap duty cycle, whose high percentage indicates fast processing by the classifier. The outcomes show that the proposed algorithm reduces the absolute percentage error by half and increases the overlap duty cycle by 15%."} {"_id": "036c70cd07e4a2024eb71b2b1e0c1bc872cff107", "title": "Scientific Paper Summarization Using Citation Summary Networks", "text": "Quickly moving to a new area of research is painful for researchers due to the vast amount of scientific literature in each field of study. One possible way to overcome this problem is to summarize a scientific topic. In this paper, we propose a model of summarizing a single article, which can be further used to summarize an entire topic. Our model is based on analyzing others\u2019 viewpoint of the target article\u2019s contributions and the study of its citation summary network using a clustering approach."} {"_id": "15ae0badc584a287fc51e5de46d1ef51495a2398", "title": "Finding Deceptive Opinion Spam by Any Stretch of the Imagination", "text": "Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam\u2014fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing."} {"_id": "17accbdd4aa3f9fad6af322bc3d7f4d5b648d9cd", "title": "Transductive Inference for Text Classification using Support Vector Machines", "text": "This paper introduces Transductive Support Vector Machines (TSVMs) for text classification. 
While regular Support Vector Machines (SVMs) try to induce a general decision function for a learning task, Transductive Support Vector Machines take into account a particular test set and try to minimize misclassifications of just those particular examples. The paper presents an analysis of why TSVMs are well suited for text classification. These theoretical findings are supported by experiments on three test collections. The experiments show substantial improvements over inductive methods, especially for small training sets, cutting the number of labeled training examples down to a twentieth on some tasks. This work also proposes an algorithm for training TSVMs efficiently, handling 10,000 examples and more."} {"_id": "4f1fe957a29a2e422d4034f4510644714d33fb20", "title": "Thumbs up? Sentiment Classification using Machine Learning Techniques", "text": "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging. Publication info: Proceedings of EMNLP 2002, pp. 79\u201386."} {"_id": "6fd2a76924cbd0619f60d0fcb9b651c6dc17dbe3", "title": "Judging Grammaticality with Tree Substitution Grammar Derivations", "text": "In this paper, we show that local features computed from the derivations of tree substitution grammars \u2014 such as the identity of particular fragments, and a count of large and small fragments \u2014 are useful in binary grammatical classification tasks. Such features outperform n-gram features and various model scores by a wide margin. Although they fall short of the performance of the hand-crafted feature set of Charniak and Johnson (2005) developed for parse tree reranking, they do so with an order of magnitude fewer features. Furthermore, since the TSGs employed are learned in a Bayesian setting, the use of their derivations can be viewed as the automatic discovery of tree patterns useful for classification. On the BLLIP dataset, we achieve an accuracy of 89.9% in discriminating between grammatical text and samples from an n-gram language model."} {"_id": "88997c8b7daa74cd822d82bf33581c8fa966a347", "title": "Code Review Quality: How Developers See It", "text": "In a large, long-lived project, an effective code review process is key to ensuring the long-term quality of the code base. In this work, we study code review practices of a large, open source project, and we investigate how the developers themselves perceive code review quality. We present a qualitative study that summarizes the results from a survey of 88 Mozilla core developers. The results provide developer insights into how they define review quality, what factors contribute to how they evaluate submitted code, and what challenges they face when performing review tasks. We found that the review quality is primarily associated with the thoroughness of the feedback, the reviewer's familiarity with the code, and the perceived quality of the code itself. 
Also, we found that while different factors are perceived to contribute to the review quality, reviewers often find it difficult to keep their technical skills up-to-date, manage personal priorities, and mitigate context switching."} {"_id": "4b03cee450788d27c5944af9978a94b6ac1858bb", "title": "EPOCHS: a platform for agent-based electric power and communication simulation built from commercial off-the-shelf components", "text": "This paper reports on the development and subsequent use of the electric power and communication synchronizing simulator (EPOCHS), a distributed simulation environment. Existing electric power simulation tools accurately model power systems of the past, which were controlled as large regional power pools without significant communication elements. However, as power systems increasingly turn to protection and control systems that make use of computer networks, these simulators are less and less capable of predicting the likely behavior of the resulting power grids. Similarly, the tools used to evaluate new communication protocols and systems have been developed without attention to the roles they might play in power scenarios. EPOCHS integrates multiple research and commercial off-the-shelf systems to bridge the gap."} {"_id": "b6b8268fd9f6263d3b2c24059031f383e93e951f", "title": "Closed-form Solution for IMU based LSD-SLAM Point Cloud Conversion into the Scaled 3D World Environment", "text": "SLAM is a very popular research stream in computer vision and robotics nowadays. For a more effective SLAM implementation it is necessary to have reliable information about the environment; the data should also be aligned and scaled according to the real-world coordinate system. Monocular SLAM research is an attractive sub-stream because of the low equipment cost, size and weight. In this paper we present a way to build a conversion from the LSD-SLAM coordinate space to real-world coordinates with a true metric scale using IMU sensor data. The causes of differences between the real and calculated spaces are explained and the possibility of conversions between the spaces is proved. Additionally, a closed-form solution for the inter-space transformation calculation is presented. A synthetic method for generating highly accurate and well-controlled input data for the LSD-SLAM algorithm is presented. Finally, the reconstructed 3D environment representation is delivered as an output of the implemented conversion."} {"_id": "3166c2fa13bfe125fc1477bba5769da4f04864ff", "title": "Scalable Iterative Classification for Sanitizing Large-Scale Datasets", "text": "Cheap ubiquitous computing enables the collection of massive amounts of personal data in a wide variety of domains. Many organizations aim to share such data while obscuring features that could disclose personally identifiable information. Much of this data exhibits weak structure (e.g., text), such that machine learning approaches have been developed to detect and remove identifiers from it. While learning is never perfect and relying on such approaches to sanitize data can leak sensitive information, a small risk is often acceptable. Our goal is to balance the value of published data and the risk of an adversary discovering leaked identifiers. 
We model data sanitization as a game between 1) a publisher who chooses a set of classifiers to apply to data and publishes only instances predicted as non-sensitive and 2) an attacker who combines machine learning and manual inspection to uncover leaked identifying information. We introduce a fast iterative greedy algorithm for the publisher that ensures a low utility for a resource-limited adversary. Moreover, using five text data sets we illustrate that our algorithm leaves virtually no automatically identifiable sensitive instances for a state-of-the-art learning algorithm, while sharing over 93 percent of the original data, and completes after at most five iterations."} {"_id": "7b86ca4f90a6f2d62247158b64f670a9df012d7c", "title": "The diverse CB1 and CB2 receptor pharmacology of three plant cannabinoids: delta9-tetrahydrocannabinol, cannabidiol and delta9-tetrahydrocannabivarin.", "text": "Cannabis sativa is the source of a unique set of compounds known collectively as plant cannabinoids or phytocannabinoids. This review focuses on the manner with which three of these compounds, (-)-trans-delta9-tetrahydrocannabinol (delta9-THC), (-)-cannabidiol (CBD) and (-)-trans-delta9-tetrahydrocannabivarin (delta9-THCV), interact with cannabinoid CB1 and CB2 receptors. Delta9-THC, the main psychotropic constituent of cannabis, is a CB1 and CB2 receptor partial agonist and in line with classical pharmacology, the responses it elicits appear to be strongly influenced both by the expression level and signalling efficiency of cannabinoid receptors and by ongoing endogenous cannabinoid release. CBD displays unexpectedly high potency as an antagonist of CB1/CB2 receptor agonists in CB1- and CB2-expressing cells or tissues, the manner with which it interacts with CB2 receptors providing a possible explanation for its ability to inhibit evoked immune cell migration. Delta9-THCV behaves as a potent CB2 receptor partial agonist in vitro. In contrast, it antagonizes cannabinoid receptor agonists in CB1-expressing tissues. This it does with relatively high potency and in a manner that is both tissue and ligand dependent. Delta9-THCV also interacts with CB1 receptors when administered in vivo, behaving either as a CB1 antagonist or, at higher doses, as a CB1 receptor agonist. Brief mention is also made in this review, first of the production by delta9-THC of pharmacodynamic tolerance, second of current knowledge about the extent to which delta9-THC, CBD and delta9-THCV interact with pharmacological targets other than CB1 or CB2 receptors, and third of actual and potential therapeutic applications for each of these cannabinoids."} {"_id": "1062a70701476fe243b8c66cd2991755e93982d5", "title": "Design of an IGBT-based LCL-Resonant Inverter for High-Frequency Induction Heating", "text": "A power electronic inverter is developed for a high-frequency induction heating application. The application requires up to 160 kW of power at a frequency of 100 kHz. This power-frequency product represents a significant challenge for today's power semiconductor technology. Voltage source and current source inverters both using ZCS or ZVS are analyzed and compared. To attain the level of performance required, an LCL load-resonant topology is selected to enable ZVS close to the zero current crossing of the load. This mode of soft-switching is suitable to greatly reduce the IGBT losses. Inverter control is achieved via a Phase Locked Loop (PLL). 
This paper presents the circuit design, modeling and control considerations."} {"_id": "e6049b61e7e963a206a3a8e3d6447b672a794370", "title": "Parents with doubts about vaccines: which vaccines and reasons why.", "text": "OBJECTIVES\nThe goals were (1) to obtain national estimates of the proportions of parents with indicators of vaccine doubt, (2) to identify factors associated with those parents, compared with parents reporting no vaccine doubt indicators, (3) to identify the specific vaccines that prompted doubt and the reasons why, and (4) to describe the main reasons parents changed their minds about delaying or refusing a vaccine for their child.\n\n\nMETHODS\nData were from the National Immunization Survey (2003-2004). Groups included parents who ever got a vaccination for their child although they were not sure it was the best thing to do (\"unsure\"), delayed a vaccination for their child (\"delayed\"), or decided not to have their child get a vaccination (\"refused\").\n\n\nRESULTS\nA total of 3924 interviews were completed. Response rates were 57.9% in 2003 and 65.0% in 2004. Twenty-eight percent of parents responded yes to ever experiencing \u22651 of the outcome measures listed above. In separate analyses for each outcome measure, vaccine safety concern was a predictor for unsure, refused, and delayed parents. The largest proportions of unsure and refused parents chose varicella vaccine as the vaccine prompting their concern, whereas delayed parents most often reported \"not a specific vaccine\" as the vaccine prompting their concern. Most parents who delayed vaccines for their child did so for reasons related to their child's illness, unlike the unsure and refused parents. The largest proportion of parents who changed their minds about delaying or not getting a vaccination for their child listed \"information or assurances from health care provider\" as the main reason.\n\n\nCONCLUSIONS\nParents who exhibit doubts about immunizations are not all the same. This research suggests encouraging children's health care providers to solicit questions about vaccines, to establish a trusting relationship, and to provide appropriate educational materials to parents."} {"_id": "4c5bddad70d30b0a57a5a8560a2406aabc65cfc0", "title": "A Review on Comparative analysis of different clustering and Decision Tree for Synthesized Data Mining Algorithm", "text": "Web mining is the sub-category, or application, of data mining techniques that extracts knowledge from Web data. With new advancements in technology, the use of new algorithms in the market has increased rapidly. Data mining is a fast-growing research field that is used in a wide range of applications. Data mining consists of classification algorithms, association algorithms and searching algorithms. Different classification and clustering algorithms are used for the synthetic datasets. In this paper, various techniques based on clustering and decision trees for synthesized data mining are discussed."} {"_id": "8bec0be257cd03a3cd1aba6627d533cf212dfbba", "title": "Poor Man's Methadone: A Case Report of Loperamide Toxicity.", "text": "Loperamide, a common over-the-counter antidiarrheal drug and opioid derivative, is formulated to act upon intestinal opioid receptors. However, at high doses, loperamide crosses the blood-brain barrier and reaches central opioid receptors in the brain, leading to central opiate effects including euphoria and respiratory depression. 
We report the case of a young man with a known history of drug abuse who was found dead in his residence. At autopsy, the only significant findings were a distended bladder and bloody oral purge. Drug screening found nontoxic levels of alprazolam, fluoxetine, and marijuana metabolites. Liquid chromatography time-of-flight mass spectrometry found an unusual set of split isotope peaks consistent with chlorine. On the basis of autopsy and toxicological findings, loperamide toxicity was suspected because of its opioid properties and molecular formula containing chlorine. A sample of loperamide was analyzed by liquid chromatography time-of-flight mass spectrometry, resulting in a matching mass and retention time to the decedent's sample. Subsequently, quantitative testing detected 63 ng/mL of loperamide, more than 6 times the therapeutic peak concentration. Cause of death was determined as \"toxic effects of loperamide with fluoxetine and alprazolam.\" Because of its opioid effects and easy accessibility, loperamide is known as \"poor man's methadone\" and may go undetected in medical and forensic drug screening."} {"_id": "3e75781a18158f6643115ba825034f9f95b4f13b", "title": "Feature Extraction and Classification for Automatic Speaker Recognition System \u2013 A Review", "text": "Automatic speaker recognition (ASR) has found immense application in industries like banking, security, and forensics for its advantages such as easy implementation, greater security, and user friendliness. A good recognition rate is a prerequisite for any ASR system and can be achieved by making an optimal choice among the available techniques. In this paper, different techniques for such a system are discussed: MFCC, LPCC, LPC, and wavelet decomposition for feature extraction, and VQ, GMM, SVM, DTW, and HMM for feature classification. All these techniques are also compared with each other to find the most suitable candidate among them. On the basis of the comparison done, MFCC has the upper edge over other feature extraction techniques, as it is more consistent with human hearing. GMM turns out to be the best among the classification models due to its good classification accuracy and low memory usage. Keywords\u2014 Automatic Speaker Recognition, Mel Frequency Cepstral Coefficients (MFCC), Linear Predictive Cepstral Coefficients, Gaussian Mixture Model (GMM), Vector Quantization (VQ), Dynamic Time Warping (DTW), Hidden Markov Model (HMM), Wavelet decomposition"} {"_id": "e406c7d0bf67ea13dc9553fd4514ceaa3a61f6df", "title": "Learning to Identify Review Spam", "text": "In the past few years, sentiment analysis and opinion mining have become popular and important tasks. These studies all assume that their opinion resources are real and trustworthy. However, they may encounter the fake opinion, or opinion spam, problem. In this paper, we study this issue in the context of our product review mining system. On product review sites, people may write fake reviews, called review spam, to promote their own products or defame their competitors\u2019 products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting or rating deviation, which limits the performance on this task. In this paper, we exploit machine learning methods to identify review spam. Toward this end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. 
We also observe that the review spammer consistently writes spam. This provides us another view to identify review spam: we can identify if the author of the review is a spammer. Based on this observation, we provide a two-view semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experiment results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines."} {"_id": "4d16fb4c3b17f60813757df4daf18aac71f01e8b", "title": "Enhanced cardiac perception is associated with benefits in decision-making.", "text": "In the present study we provide the first empirical evidence that viscero-sensory feedback from an internal organ is associated with decision-making processes. Participants with accurate vs. poor perception of their heart activity were compared with regard to their performance in the Iowa Gambling Task. During this task, participants have to choose between four card decks. Decks A and B yield high gains and high losses, and if played continuously, result in net loss. In contrast, decks C and D yield small gains and also small losses, but result in net profit if they are selected continuously. Accordingly, participants have to learn to avoid the net loss options in favor of the net gain options. In our study, participants with good cardiac perception chose significantly more of the net gain and fewer of the net loss options. Our findings document the substantial role of visceral feedback in decision-making processes in complex situations."} {"_id": "be5808e41e079ccee7a95c9950796032eee7e8d1", "title": "Novel blood pressure estimation method using single photoplethysmography feature", "text": "Continuous blood pressure (BP) monitoring is of great significance for the prevention and early diagnosis of cardiovascular disease. However, existing continuous BP monitoring approaches, especially cuff-less ones, are all contraptions that require complex and heavy computation. For example, the most sophisticated cuff-less BP monitoring method, based on pulse transit time (PTT), requires the simultaneous recording of a photoplethysmography (PPG) signal and electrocardiography (ECG), as well as the measurement of various characteristic points. These issues hinder the wide application of cuff-less BP measurement in wearable devices. In this study, a novel BP estimation method using a single PPG signal feature is proposed and its performance in BP estimation is tested. The results showed that the new approach proposed in this study has a mean error of \u22120.91 \u00b1 3.84 mmHg for SBP estimation and \u22120.36 \u00b1 3.36 mmHg for DBP estimation, respectively. This approach performed better than traditional PTT-based BP estimation, whose mean error was \u22120.31 \u00b1 4.78 mmHg for SBP estimation and \u22120.18 \u00b1 4.32 mmHg for DBP estimation. Further investigation revealed that this new BP estimation approach requires the measurement of only one characteristic point, greatly reducing the computation needed for implementation. These results demonstrated that this new approach might be more suitable for implementation in wearable BP monitoring devices."} {"_id": "b903c556c9f55916c53ad7a9446b326176b4cf16", "title": "Wideband printed rectangular monopole antenna for circular polarization", "text": "In this paper, a printed monopole antenna for circular polarization has been proposed. 
The relationship between the geometry of the antenna and the axial ratio is investigated. A wideband circularly polarized antenna is designed according to the parametric studies. The simulated 3-dB axial-ratio bandwidth extends from 2.06 GHz to 6.04 GHz (98%)."} {"_id": "79c8850c8be533c241a61a80883e6ed1d559a229", "title": "An Artificial Neural Network based Intrusion Detection System and Classification of Attacks", "text": "Network security is becoming an issue of paramount importance in the information technology era. Nowadays, with the dramatic growth of communication and computer networks, security has become a critical subject for computer systems. Intrusion detection is the art of detecting computer abuse and any attempt to break into networks. An intrusion detection system is an effective security tool that helps to prevent unauthorized access to network resources by analyzing the network traffic. Different algorithms, methods and applications are created and implemented to solve the problem of detecting attacks in intrusion detection systems. Most methods detect attacks and categorize them into two groups, normal or threat. One of the most promising areas of research in Intrusion Detection deals with the application of Artificial Intelligence (AI) techniques. This paper presents a new approach to intrusion detection based on an artificial neural network. A Multi-Layer Perceptron (MLP) architecture is used for the Intrusion Detection System. The performance evaluation is carried out using benchmark data from a KDD (Knowledge Discovery in Databases) dataset. The proposed system detects the attacks and classifies them into six groups."} {"_id": "2f056defdb9abf61acd7f8b49e50062cd62092ce", "title": "What can quantum theory bring to information retrieval", "text": "The probabilistic formalism of quantum physics is said to provide a sound basis for building a principled information retrieval framework. Such a framework can be based on the notion of information need vector spaces where events, such as document relevance or observed user interactions, correspond to subspaces. As in quantum theory, a probability distribution over these subspaces is defined through weighted sets of state vectors (density operators), and used to represent the current view of the retrieval system on the user information need. Tensor spaces can be used to capture different aspects of information needs. Our evaluation shows that the framework can lead to acceptable performance in an ad-hoc retrieval task. Going beyond this, we discuss the potential of the framework for three active challenges in information retrieval, namely, interaction, novelty and diversity."} {"_id": "348720586f503d69f6cd64be06d9191a7da0e137", "title": "The Formulation of Parameters for Type Design of Indian Script Based on Calligraphic Studies", "text": "A number of parameters were formulated for better analysing the anatomy of Indic letterforms. This methodology contributes to the understanding of the intricacies of type design of complex Indian scripts."} {"_id": "644cda71ed69a42e1e819a753ffa5cb5c6ccbaec", "title": "Using association rules to guide a search for best fitting transfer models of student learning", "text": "We say a transfer model is a mapping between the questions in an intelligent tutoring system and the knowledge components (i.e., skills, strategies, declarative knowledge, etc.) needed to answer a question correctly. 
[JKT00] showed how you could take advantage of 1) the Power Law of Learning, 2) an existing transfer model, and 3) a set of tutorial log files, to learn a function (using logistic regression) that will predict when a student will get a question correct. In the main conference proceedings, [CHK2004] gives an example of using this technique for transfer model selection. Koedinger and Junker [KJ99] also conceptualized a search space where each state is a new transfer model. The operators in this search space split, add or merge knowledge components based upon factors that are tagged to questions. Koedinger and Junker called this method learning factors analysis, emphasizing that this method can be used to study learning. Unfortunately, the search space is huge and searching for good-fitting transfer models is exponential. The main goal of this paper is to show a technique that will make searching for transfer models more efficient. Our procedure implements a search method using association rules as a means of guiding the search. The association rules are mined from a dataset derived from student-tutor interaction logs. The association rules found in the mining process determine what operations to perform on the current transfer model. We report on the speed-up achieved. Being able to find good transfer models more quickly will help intelligent tutoring system builders as well as cognitive science researchers better assess what makes certain problems hard and other problems easy for students."} {"_id": "48ceb89d2ccf24dc8774d702bdedee60e3edc935", "title": "A Survey of In-Band Full-Duplex Transmission: From the Perspective of PHY and MAC Layers", "text": "In-band full-duplex (IBFD) transmission represents an attractive option for increasing the throughput of wireless communication systems. A key challenge for IBFD transmission is reducing self-interference. Fortunately, the power associated with residual self-interference can be effectively canceled for feasible IBFD transmission with combinations of various advanced passive, analog, and digital self-interference cancellation schemes. In this survey paper, we first review the basic concepts of IBFD transmission with shared and separated antennas and advanced self-interference cancellation schemes. Furthermore, we also discuss the effects of IBFD transmission on system performance in various networks such as bidirectional, relay, and cellular topology networks. This survey covers a wide array of technologies that have been proposed in the literature as feasible for IBFD transmission and evaluates the performance of the IBFD systems compared to conventional half-duplex transmission in connection with theoretical aspects such as the achievable sum rate, network capacity, system reliability, and so on. We also discuss the research challenges and opportunities associated with the design and analysis of IBFD systems in a variety of network topologies. This work also explores the development of MAC protocols for an IBFD system in both infrastructure-based and ad hoc networks. 
Finally, we conclude our survey by reviewing the advantages of IBFD transmission when applied for different purposes, such as spectrum sensing, network secrecy, and wireless power transfer."} {"_id": "32bbef1c8bb76636839f27c016080744f5749317", "title": "KP-Miner: A keyphrase extraction system for English and Arabic documents", "text": "Automatic keyphrase extraction has many important applications including but not limited to summarization, cataloging/indexing, feature extraction for clustering and classification, and data mining. This paper presents the KP-Miner system, and demonstrates through experimentation and comparison with existing systems that it is effective in extracting keyphrases from both English and Arabic documents of varied length. Unlike other existing keyphrase extraction systems, the KP-Miner system has the advantage of being configurable, as the rules and heuristics adopted by the system are related to the general nature of documents and keyphrases. This implies that the users of this system can use their understanding of the document(s) being input into the system to fine-tune it to their particular needs."} {"_id": "313eceae658fb6c132a448c86f3abcc37931df09", "title": "Docker Cluster Management for the Cloud - Survey Results and Own Solution", "text": "Docker provides a good basis to run composite applications in the cloud, especially if those are not cloud-aware, or cloud-native. However, Docker concentrates on managing containers on one host, but SaaS providers need a container management solution for multiple hosts. Therefore, a number of tools emerged that claim to solve the problem. This paper classifies the solutions, maps them to requirements from a case study and identifies gaps and integration requirements. We close some of these gaps with our own integration components and tool enhancements, resulting in the currently most complete management suite."} {"_id": "a2b7dddd3a980da4e9d7496dec6121d5f36f56e7", "title": "Technology adoption: A conjoint analysis of consumers' preference on future online banking services", "text": "The importance of service delivery technology and online service adoption and usage in the banking industry has received increased discussion in the literature in recent years. Because strong online banking services are important drivers of bank performance and customer service delivery, several studies have been carried out on online banking service adoption or acceptance where services are already deployed, and on the factors that influence customers' adoption and use or intention to use those services. However, despite the increasing discussion in the literature, no attempt has been made to look at consumers' preference in terms of future online banking service adoption. This study used conjoint analysis and stated preference methods with a discrete choice model to analyze the technology adoption pattern regarding consumers' preference for potential future online banking services in the Nigerian banking industry. The results revealed that, to increase efficiency and strengthen competitiveness, banks need to promote smart and practical branded services, especially self-services, and at the same time promote universal adoption of e-banking services that add entertainment or extra convenience for customers, such as digital wallets, real-time interaction (video banking), ATMs integrated with smartphones, website customization, biometric services, and digital currency. 
These services can contribute to an increasing adoption of online services. \u00a9 2015 Elsevier Ltd. All rights reserved."} {"_id": "722e2f7894a1b62e0ab09913ce9b98654733d98e", "title": "Information overload and the message dynamics of online interaction spaces: a theoretical model and empirical exploration", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."} {"_id": "c7cf039d918ceeea88e99be3b522b44d1bc132f0", "title": "Decoding actions at different levels of abstraction.", "text": "Brain regions that mediate action understanding must contain representations that are action specific and at the same time tolerate a wide range of perceptual variance. Whereas progress has been made in understanding such generalization mechanisms in the object domain, the neural mechanisms to conceptualize actions remain unknown. In particular, there is ongoing dissent between motor-centric and cognitive accounts as to whether the premotor cortex or brain regions more closely related to perceptual systems, i.e., lateral occipitotemporal cortex, contain neural populations with such mapping properties. To date, it is unclear to what degree action-specific representations in these brain regions generalize from concrete action instantiations to abstract action concepts. However, such information would be crucial to differentiate between motor and cognitive theories. Using ROI-based and searchlight-based fMRI multivoxel pattern decoding, we sought brain regions in human cortex that manage the balancing act between specificity and generality. We investigated a concrete level that distinguishes actions based on perceptual features (e.g., opening vs closing a specific bottle), an intermediate level that generalizes across movement kinematics and specific objects involved in the action (e.g., opening different bottles with cork or screw cap), and an abstract level that additionally generalizes across object category (e.g., opening bottles or boxes). We demonstrate that the inferior parietal and occipitotemporal cortex code actions at abstract levels whereas the premotor cortex codes actions at the concrete level only. Hence, occipitotemporal, but not premotor, regions fulfill the necessary criteria for action understanding. This result is compatible with cognitive theories but strongly undermines motor theories of action understanding."} {"_id": "054bca7f2fa00c3d55f0e028b37513bebb9c4ea5", "title": "TRESOR Runs Encryption Securely Outside RAM", "text": "Current disk encryption techniques store necessary keys in RAM and are therefore susceptible to attacks that target volatile memory, such as Firewire and cold boot attacks. We present TRESOR, a Linux kernel patch that implements the AES encryption algorithm and its key management solely on the microprocessor. Instead of using RAM, TRESOR ensures that all encryption states as well as the secret key and any part of it are only stored in processor registers throughout the operational time of the system, thereby substantially increasing its security. Our solution takes advantage of Intel\u2019s new AES-NI instruction set and exploits the x86 debug registers in a non-standard way, namely as cryptographic key storage. 
TRESOR is compatible with all modern Linux distributions, and its performance is on a par with that of standard AES implementations."} {"_id": "0fd2467de521b52805eea902edc9587c87818276", "title": "Discoverer: Automatic Protocol Reverse Engineering from Network Traces", "text": "Application-level protocol specifications are useful for many security applications, including intrusion prevention and detection that performs deep packet inspection and traffic normalization, and penetration testing that generates network inputs to an application to uncover potential vulnerabilities. However, current practice in deriving protocol specifications is mostly manual. In this paper, we present Discoverer, a tool for automatically reverse engineering the protocol message formats of an application from its network trace. A key property of Discoverer is that it operates in a protocol-independent fashion by inferring protocol idioms commonly seen in message formats of many application-level protocols. We evaluated the efficacy of Discoverer over one text protocol (HTTP) and two binary protocols (RPC and CIFS/SMB) by comparing our inferred formats with true formats obtained from Ethereal [5]. For all three protocols, more than 90% of our inferred formats correspond to exactly one true format; one true format is reflected in five inferred formats on average; our inferred formats cover over 95% of messages, which belong to 30-40% of true formats observed in the trace."} {"_id": "30dd12a894eff2a488c83f565bc287b4dd03c0cc", "title": "Howard: A Dynamic Excavator for Reverse Engineering Data Structures", "text": "Even the most advanced reverse engineering techniques and products are weak in recovering data structures in stripped binaries\u2014binaries without symbol tables. Unfortunately, forensics and reverse engineering without data structures is exceedingly hard. We present a new solution, known as Howard, to extract data structures from C binaries without any need for symbol tables. Our results are significantly more accurate than those of previous methods \u2014 sufficiently so to allow us to generate our own (partial) symbol tables without access to source code. Thus, debugging such binaries becomes feasible and reverse engineering becomes simpler. Also, we show that we can protect existing binaries from popular memory corruption attacks, without access to source code. Unlike most existing tools, our system uses dynamic analysis (on a QEMU-based emulator) and detects data structures by tracking how a program uses memory."} {"_id": "36f05c011a7fdb74b0380e41ceada8632ba35f24", "title": "Transparent Runtime Shadow Stack: Protection against malicious return address modifications", "text": "Exploitation of buffer overflow vulnerabilities constitutes a significant portion of security attacks in computer systems. One of the most common types of buffer overflow attacks is the hijacking of the program counter by overwriting function return addresses in the process\u2019 stack so as to redirect the program\u2019s control flow to some malicious code injected into the process\u2019 memory. Previous solutions to this problem are based either on hardware or the compiler. The former requires special hardware while the latter requires the source code of the software. In this paper we introduce the use of a Transparent RUntime Shadow Stack (TRUSS) to protect against function return address modification. Our proposed scheme is built on top of DynamoRIO, a dynamic binary rewriting framework. 
DynamoRIO is implemented on both Windows and Linux. Hence, our scheme is able to protect applications on both operating systems. We have successfully tested our implementation on the SPECINT 2000 benchmark programs on both Windows and Linux, John Wilander\u2019s \u201cDynamic testbed for twenty buffer overflow attacks\u201d, as well as Microsoft Access, Powerpoint and Word 2002. This paper will discuss the implementation details of our scheme as well as provide a performance evaluation. The latter shows that TRUSS is able to operate with an average overhead of about 20% to 50%, which we believe is acceptable."} {"_id": "a21921eb0c5600562c8dad8e2bc40fff1ec8906b", "title": "Automatic Reverse Engineering of Data Structures from Binary Execution", "text": "With only the binary executable of a program, it is useful to discover the program\u2019s data structures and infer their syntactic and semantic definitions. Such knowledge is highly valuable in a variety of security and forensic applications. Although there exist efforts in program data structure inference, the existing solutions are not suitable for our targeted application scenarios. In this paper, we propose a reverse engineering technique to automatically reveal program data structures from binaries. Our technique, called REWARDS, is based on dynamic analysis. More specifically, each memory location accessed by the program is tagged with a timestamped type attribute. Following the program\u2019s runtime data flow, this attribute is propagated to other memory locations and registers that share the same type. During the propagation, a variable\u2019s type gets resolved if it is involved in a type-revealing execution point or \u201ctype sink\u201d. More importantly, besides the forward type propagation, REWARDS involves a backward type resolution procedure where the types of some previously accessed variables get recursively resolved starting from a type sink. This procedure is constrained by the timestamps of relevant memory locations to disambiguate variables reusing the same memory location. In addition, REWARDS is able to reconstruct in-memory data structure layout based on the type information derived. We demonstrate that REWARDS provides unique benefits to two applications: memory image forensics and binary fuzzing for vulnerability discovery."} {"_id": "e831d694790a2cb74db0d1fb90dc9cf623b4c47c", "title": "Diagnosis and debugging of programmable logic controller control programs by neural networks", "text": "The ladder logic diagram (LLD), the interfacing programming language of programmable logic controllers (PLCs), is utilized in modern discrete event control systems. However, LLD is hard to debug and maintain in practice. This is due to many factors such as the non-structured nature of LLD, the LLD programmers' background, and the huge sizes of real-world LLDs. In this paper, we introduce a recurrent neural network (RNN) based technique for PLC program diagnosis. A manufacturing control system example is presented to illustrate the applicability of the proposed algorithm. This method could be very advantageous in reducing the complexity of PLC control program diagnosis, because using the RNN is easier than debugging the LLD code."} {"_id": "34a3e79849443ee7fffac386f8c003040d04180c", "title": "Support Vector Machine-recursive feature elimination for the diagnosis of Parkinson disease based on speech analysis", "text": "Parkinson disease has become a serious problem among the elderly. 
There is currently no precise method to diagnose Parkinson disease. Considering the significance and difficulty of recognizing Parkinson disease, the measurement of subjects' voices is regarded as one of the best non-invasive ways to identify patients. The Support Vector Machine (SVM) is one of the most effective classification tools in machine learning, and it has been applied successfully in many areas. In this paper, we implement SVM-recursive feature elimination (SVM-RFE), which has not been used before for this task, to select the subset of the original features that is most important for classification. For comparison, we also implement SVM with PCA to select the principal components for diagnosis on the PD dataset with 22 features. Finally, we discuss the relationship between SVM-RFE and SVM with PCA, especially in the experiment. The experiments illustrate that SVM-RFE generally performs better than the other methods."} {"_id": "366f70bfa316afc5ae56139cacdbd65563b7eb59", "title": "Towards efficient content-aware search over encrypted outsourced data in cloud", "text": "With the increasing adoption of cloud computing, a growing number of users outsource their datasets into the cloud. The datasets usually are encrypted before outsourcing to preserve privacy. However, the common practice of encryption makes effective utilization difficult, for example, searching for given keywords in the encrypted datasets. Many schemes have been proposed to make encrypted data searchable based on keywords. However, keyword-based search schemes ignore the semantic information of users' queries and cannot fully capture users' search intentions. Therefore, how to design a content-based search scheme and make semantic search more effective and context-aware is a difficult challenge. In this paper, we propose an innovative semantic search scheme based on the concept hierarchy and the semantic relationship between concepts in the encrypted datasets. More specifically, our scheme first indexes the documents and builds trapdoors based on the concept hierarchy. To further improve the search efficiency, we utilize a tree-based index structure to organize all the document index vectors. Our experimental results on real-world datasets show the scheme is more efficient than previous schemes. We also study the threat model of our approach and prove it does not introduce any security risk."} {"_id": "09637c11a69ef063124a2518e2262f8ad7a1aa87", "title": "A new multiclass SVM algorithm and its application to crowd density analysis using LBP features", "text": "Crowd density analysis is a crucial component in visual surveillance for security monitoring. In this paper, we propose to estimate crowd density at patch level, where the size of each patch varies in such a way as to compensate for the effects of perspective distortion. The main contribution of this paper is two-fold: First, we propose to learn a discriminant subspace of the high-dimensional Local Binary Pattern (LBP) instead of using the raw LBP feature vector. Second, an alternative algorithm for multiclass SVM based on relevance scores is proposed. The effectiveness of the proposed approach is evaluated on the PETS dataset, and the results demonstrate the effect of the low-dimensional compact representation of LBP on the classification accuracy.
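The SVM-RFE procedure used in the Parkinson-diagnosis abstract above iteratively trains a linear SVM, ranks features by the magnitude of their weights, and drops the weakest ones. A minimal sketch with scikit-learn, on synthetic data rather than the 22-feature voice dataset:

```python
# Minimal SVM-RFE sketch with scikit-learn; synthetic data stands in for
# the 22-feature Parkinson voice dataset used in the paper.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=22, n_informative=6,
                           random_state=0)
# at each step, drop the feature with the smallest linear-SVM weight
selector = RFE(SVC(kernel="linear"), n_features_to_select=6, step=1).fit(X, y)
print("selected features:",
      [i for i, kept in enumerate(selector.support_) if kept])
```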
Also, the performance of the proposed multiclass SVM algorithm is compared to other frequently used multiclass classification algorithms; the proposed algorithm gives good results while reducing the complexity of classification."} {"_id": "ace8904509fa60c462632a33d85cabf3a4a0f359", "title": "A geometric view of optimal transportation and generative model", "text": "In this work, we show the intrinsic relations between optimal transportation and convex geometry, especially the variational approach to solving the Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. This leads to a geometric interpretation of generative models and to a novel framework for them. By using the optimal transportation view of the GAN model, we show that the discriminator computes the Kantorovich potential, while the generator calculates the transportation map. For a large class of transportation costs, the Kantorovich potential can give the optimal transportation map by a closed-form formula. Therefore, it is sufficient to solely optimize the discriminator. This shows that the adversarial competition can be avoided, and the computational architecture can be simplified. Preliminary experimental results show the geometric method outperforms WGAN for approximating probability measures with multiple clusters in low-dimensional space."} {"_id": "54d7ff1b20cb0f0596cc23769e5f6fe3045570a8", "title": "The challenges of SVM optimization using Adaboost on a phoneme recognition problem", "text": "The use of digital technology is growing at a very fast pace, which has led to the emergence of systems based on cognitive infocommunications. The expansion of this sector imposes the use of combined methods in order to ensure robustness in cognitive systems."} {"_id": "be48c3ac5eb233155158a3b8defed0e083cc2381", "title": "Dry electrodes for electrocardiography.", "text": "Patient biopotentials are usually measured with conventional disposable Ag/AgCl electrodes. These electrodes provide excellent signal quality but are irritating for long-term use. Skin preparation, such as shaving and cleansing with alcohol, is usually required prior to the application of electrodes. To overcome these difficulties, researchers and caregivers seek alternative electrodes that would be acceptable in clinical and research environments. Dry electrodes that operate without gel, adhesive or even skin preparation have been studied for many decades. They are used in research applications, but they have yet to achieve acceptance for medical use. So far, a complete comparison and evaluation of dry electrodes is not well described in the literature. This work compares dry electrodes for biomedical use and physiological research, and reviews some novel systems developed for cardiac monitoring. Lastly, the paper provides suggestions to develop a dry-electrode-based system for mobile and long-term cardiac monitoring applications."} {"_id": "062ebea1c4861cf90f18c369b3d729b79b076f8f", "title": "Object Category Understanding via Eye Fixations on Freehand Sketches", "text": "The study of eye gaze fixations on photographic images is an active research area. In contrast, the image sub-category of freehand sketches has not received as much attention for such studies. In this paper, we analyze the results of a free-viewing gaze fixation study conducted on 3904 freehand sketches distributed across 160 object categories.
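The optimal-transport abstract above argues that the transport map can be recovered from the Kantorovich potential. As a generic numerical illustration of discrete optimal transport, the sketch below runs entropically regularized Sinkhorn iteration between two discrete measures; this is a standard computational device, not the authors' geometric variational method.

```python
# Entropy-regularized optimal transport (Sinkhorn iteration) between two
# discrete measures; a generic illustration, not the paper's method.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    K = np.exp(-C / eps)            # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(iters):          # alternate marginal-matching scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # the transport plan

x = np.linspace(0.0, 1.0, 50)
a = np.exp(-(x - 0.2) ** 2 / 0.005) + np.exp(-(x - 0.8) ** 2 / 0.005)
a /= a.sum()                        # source: two clusters
b = np.full(50, 1.0 / 50)           # target: uniform
C = (x[:, None] - x[None, :]) ** 2  # squared-distance cost
P = sinkhorn(a, b, C)
print("transport cost:", float((P * C).sum()))
```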
Our analysis shows that fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories. This multi-level consistency is remarkable given the variability in depiction and extreme image content sparsity that characterizes hand-drawn object sketches. In this paper, we show that the multi-level consistency in the fixation data can be exploited to 1) predict a test sketch\u2019s category given only its fixation sequence and 2) build a computational model which predicts part-labels underlying fixations on objects. We hope that our findings motivate the community to deem sketch-like representations worthy of gaze-based studies vis-a-vis photographic images."} {"_id": "2186e013b7cebb5c3efec31d6339198885962e96", "title": "A rare case of mesh infection 3 years after a laparoscopic totally extraperitoneal (TEP) inguinal hernia repair.", "text": "Late complications after a laparoscopic inguinal hernia repair are extremely rare and have only recently entered into the literature. One such late complication is mesh infection, of which there have been a handful of cases reported in the literature. Mesh infections occurring many years after inguinal hernia repair are of significance not only because they are not well documented in the literature, but also because the pathogenesis and risk factors contributing to their development are not well understood. This report details a rare case of mesh infection 3 years after a laparoscopic totally extraperitoneal inguinal hernia repair, describes our management of the condition, highlights the current options for management, and attempts to define its pathophysiology."} {"_id": "d1a80c5b53bf4882090d1e1c31da134f1b1ffd48", "title": "Visual domain-specific modelling: Benefits and experiences of using metaCASE tools", "text": "Jackson (Jackson 95) recognises the vital difference between an application's domain and its code: two different worlds, each with its own language, experts, ways of thinking, etc. A finished application forms the intersection between these worlds. The difficult job of the software engineer is to build a bridge between these worlds, at the same time as solving problems in both worlds."} {"_id": "5726690fc48158f4806ca25b207130544ecee95d", "title": "Digital Image Forgeries and Passive Image Authentication Techniques: A Survey", "text": "Digital images are present everywhere: on magazine covers, in newspapers, in courtrooms as evidence, and all over the Internet, signifying one of the major means of communication nowadays. The trustworthiness of digital images has been questioned because of the ease with which these images can be manipulated in both their origin and content as a result of the tremendous growth of digital image manipulation tools. Digital image forensics is the latest research field, which aims to verify the genuineness of images. This survey attempts to provide an overview of various digital image forgeries and the state-of-the-art passive methods to authenticate digital images."} {"_id": "28fadbb09e3bf36e58660b30e626d870de43785a", "title": "A systematic review of neurobiological and clinical features of mindfulness meditations.", "text": "BACKGROUND\nMindfulness meditation (MM) practices constitute an important group of meditative practices that have received growing attention.
The aim of the present paper was to systematically review current evidence on the neurobiological changes and clinical benefits related to MM practice in psychiatric disorders, in physical illnesses and in healthy subjects.\n\n\nMETHOD\nA literature search was undertaken using Medline, ISI Web of Knowledge, the Cochrane collaboration database and references of retrieved articles. Controlled and cross-sectional studies with controls published in English up to November 2008 were included.\n\n\nRESULTS\nElectroencephalographic (EEG) studies have revealed a significant increase in alpha and theta activity during meditation. Neuroimaging studies showed that MM practice activates the prefrontal cortex (PFC) and the anterior cingulate cortex (ACC) and that long-term meditation practice is associated with an enhancement of cerebral areas related to attention. From a clinical viewpoint, Mindfulness-Based Stress Reduction (MBSR) has shown efficacy for many psychiatric and physical conditions and also for healthy subjects, Mindfulness-Based Cognitive Therapy (MBCT) is mainly efficacious in reducing relapses of depression in patients with three or more episodes, Zen meditation significantly reduces blood pressure and Vipassana meditation shows efficacy in reducing alcohol and substance abuse in prisoners. However, given the low-quality designs of current studies, it is difficult to establish whether clinical outcomes are due to specific or non-specific effects of MM.\n\n\nDISCUSSION\nDespite encouraging findings, several limitations affect current studies. Suggestions are given for future research based on better-designed methodology and for future directions of investigation."} {"_id": "35d6e101e3332aa7defc312db7b083a22ee13f7b", "title": "When D2D meets cloud: Hybrid mobile task offloadings in fog computing", "text": "In this paper we propose HyFog, a novel hybrid task offloading framework in fog computing, where device users have the flexibility of choosing among multiple options for task execution, including local mobile execution, Device-to-Device (D2D) offloaded execution, and Cloud offloaded execution. We further develop a novel three-layer graph matching algorithm for efficient hybrid task offloading among the devices. Specifically, we first construct a three-layer graph to capture the choice space enabled by these three execution approaches, and then the problem of minimizing the total task execution cost is recast as a minimum weight matching problem over the constructed three-layer graph, which can be efficiently solved using Edmonds's Blossom algorithm. Numerical results demonstrate that the proposed three-layer graph matching solution can achieve superior performance, with more than 50% cost reduction over the case of local task executions by all the devices."} {"_id": "d38c2006c0817f1ca7f7fb78015fc547004959c6", "title": "Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture", "text": "Recent studies have demonstrated the usefulness of optical indices from hyperspectral remote sensing in the assessment of vegetation biophysical variables both in forestry and agriculture. Those indices are, however, the combined response to variations of several vegetation and environmental properties, such as Leaf Area Index (LAI), leaf chlorophyll content, canopy shadows, and background soil reflectance.
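The HyFog abstract above recasts offloading as a minimum-weight matching problem solved with Edmonds's Blossom algorithm. As a simplified stand-in for that three-layer matching, the sketch below assigns tasks to execution options by bipartite minimum-cost assignment; the cost numbers are hypothetical.

```python
# Simplified offloading assignment: tasks x {local, D2D, cloud} with
# hypothetical costs, solved as a bipartite minimum-cost assignment.
# (HyFog's three-layer graph uses general matching via Blossom instead.)
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [4.0, 2.5, 3.0],   # task 0: local / D2D / cloud execution cost
    [5.0, 4.5, 2.0],   # task 1
    [3.5, 1.5, 6.0],   # task 2
])
tasks, options = linear_sum_assignment(cost)
for t, o in zip(tasks, options):
    print(f"task {t} -> option {o} (cost {cost[t, o]})")
print("total cost:", cost[tasks, options].sum())
```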
Of particular significance to precision agriculture is chlorophyll content, an indicator of photosynthesis activity, which is related to the nitrogen concentration in green vegetation and serves as a measure of the crop response to nitrogen application. This paper presents a combined modeling and indices-based approach to predicting the crop chlorophyll content from remote sensing data while minimizing LAI (vegetation parameter) influence and underlying soil (background) effects. This combined method was developed first using simulated data, followed by an evaluation of its quantitative predictive capability using real hyperspectral airborne data. Simulations consisted of leaf and canopy reflectance modeling with PROSPECT and SAILH radiative transfer models. In this modeling study, we developed an index that integrates advantages of indices minimizing soil background effects and indices that are sensitive to chlorophyll concentration. Simulated data have shown that the proposed index Transformed Chlorophyll Absorption in Reflectance Index/Optimized Soil-Adjusted Vegetation Index (TCARI/OSAVI) is both very sensitive to chlorophyll content variations and very resistant to the variations of LAI and solar zenith angle. It was therefore possible to generate a predictive equation to estimate leaf chlorophyll content from the combined optical index derived from above-canopy reflectance. This relationship was evaluated by application to hyperspectral CASI imagery collected over corn crops in three experimental farms from Ontario and Quebec, Canada. The results presented here are from the L\u2019Acadie, Quebec, Agriculture and AgriFood Canada research site. Images of predicted leaf chlorophyll content were generated. Evaluation showed chlorophyll variability over crop plots with various levels of nitrogen, and revealed an excellent agreement with ground truth, with a correlation of r = .81 between estimated and field-measured chlorophyll content data."} {"_id": "2485c98aa44131d1a2f7d1355b1e372f2bb148ad", "title": "The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations", "text": "In this paper, we describe the acquisition and contents of a large-scale Chinese face database: the CAS-PEAL face database. The goals of creating the CAS-PEAL face database include the following: 1) providing the worldwide researchers of face recognition with different sources of variations, particularly pose, expression, accessories, and lighting (PEAL), and exhaustive ground-truth information in one uniform database; 2) advancing the state-of-the-art face recognition technologies aiming at practical applications by using off-the-shelf imaging equipment and by designing normal face variations in the database; and 3) providing a large-scale face database of Mongolian. Currently, the CAS-PEAL face database contains 99 594 images of 1040 individuals (595 males and 445 females). A total of nine cameras are mounted horizontally on an arc arm to simultaneously capture images across different poses. Each subject is asked to look straight ahead, up, and down to obtain 27 images in three shots. Five facial expressions, six accessories, and 15 lighting changes are also included in the database. A selected subset of the database (CAS-PEAL-R1, containing 30 863 images of the 1040 subjects) is available to other researchers now.
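For the TCARI/OSAVI index named in the chlorophyll-prediction abstract above, the band combination is commonly published as TCARI = 3[(R700 - R670) - 0.2(R700 - R550)(R700/R670)] and OSAVI = (1 + 0.16)(R800 - R670)/(R800 + R670 + 0.16). Treating those formulas as an assumption, a small sketch:

```python
# TCARI/OSAVI from narrow-band reflectances; the band formulas below are
# the commonly published ones and should be treated as an assumption here.
def tcari_osavi(r550, r670, r700, r800):
    tcari = 3.0 * ((r700 - r670) - 0.2 * (r700 - r550) * (r700 / r670))
    osavi = (1.0 + 0.16) * (r800 - r670) / (r800 + r670 + 0.16)
    return tcari / osavi

# hypothetical canopy reflectances at 550, 670, 700, and 800 nm
print(tcari_osavi(r550=0.08, r670=0.05, r700=0.12, r800=0.45))
```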
We discuss the evaluation protocol based on the CAS-PEAL-R1 database and present the performance of four algorithms as a baseline to do the following: 1) provide an initial assessment of the difficulty of the database for face recognition algorithms; 2) provide reference evaluation results for researchers using the database; and 3) identify the strengths and weaknesses of the commonly used algorithms."} {"_id": "3229a96ffe8d305800b311cbdc2a5710e8f442f4", "title": "Recovering free space of indoor scenes from a single image", "text": "In this paper we consider the problem of recovering the free space of an indoor scene from its single image. We show that exploiting the box-like geometric structure of furniture and the constraints provided by the scene allows us to recover the extent of major furniture objects in 3D. Our \u201cboxy\u201d detector localizes box-shaped objects oriented parallel to the scene across different scales and object types, and thus blocks out the occupied space in the scene. To localize the objects more accurately in 3D we introduce a set of specially designed features that capture the floor contact points of the objects. Image-based metrics are not very indicative of performance in 3D. We make the first attempt to evaluate single-view-based occupancy estimates for 3D errors and propose several task-driven performance measures towards it. On our dataset of 592 indoor images marked with full 3D geometry of the scene, we show that: (a) our detector works well using image-based metrics; (b) our refinement method produces significant improvements in localization in 3D; and (c) if one evaluates using 3D metrics, our method offers major improvements over other single-view-based scene geometry estimation methods."} {"_id": "5af8b070ce47e4933442ebf563d836f118b124c8", "title": "Educational games for improving the teaching-learning process of a CLIL subject: Physics and chemistry in secondary education", "text": "The use of educational games in an academic context seems to be a superb alternative to traditional learning activities, such as drill-type exercises, in order to engage 21st-century students. Consequently, this work pursues the following objectives: to analyze the effectiveness of game-based learning, characterize game elements that may contribute to creating engaging play experiences, comprehend how different player types interact with games and, finally, design interactive games which may create challenges, set goals and provide feedback on progress while motivating learners to study physics and chemistry in a foreign language in the second cycle of Secondary Education (4th E.S.O., corresponding to ages 15-16). Specifically, we have used several Web 2.0 tools (Hot Potatoes, Scratch, What2Learn and SMART Notebook 11) and applications (Microsoft PowerPoint and Microsoft Excel) in order to create the games; and these games are based on the following contents: laboratory safety, laboratory equipment, stoichiometry, atomic structure, electronic configuration, the periodic table, forces, motion and energy."} {"_id": "73ce5b57f37858f13ec80df075b5d74164715421", "title": "Classification and Adaptive Novel Class Detection of Feature-Evolving Data Streams", "text": "Data stream classification poses many challenges to the data mining community. In this paper, we address four such major challenges, namely, infinite length, concept-drift, concept-evolution, and feature-evolution.
Since a data stream is theoretically infinite in length, it is impractical to store and use all the historical data for training. Concept-drift is a common phenomenon in data streams, which occurs as a result of changes in the underlying concepts. Concept-evolution occurs as a result of new classes evolving in the stream. Feature-evolution is a frequently occurring process in many streams, such as text streams, in which new features (i.e., words or phrases) appear as the stream progresses. Most existing data stream classification techniques address only the first two challenges, and ignore the latter two. In this paper, we propose an ensemble classification framework, where each classifier is equipped with a novel class detector, to address concept-drift and concept-evolution. To address feature-evolution, we propose a feature set homogenization technique. We also enhance the novel class detection module by making it more adaptive to the evolving stream, and enabling it to detect more than one novel class at a time. Comparison with state-of-the-art data stream classification techniques establishes the effectiveness of the proposed approach."} {"_id": "8d80f739382aa318c369b324b36a8361d7651d38", "title": "A Driving Right Leg Circuit (DgRL) for Improved Common Mode Rejection in Bio-Potential Acquisition Systems", "text": "The paper presents a novel Driving Right Leg (DgRL) circuit designed to mitigate the effect of common mode signals arising, for example, from power line interference. The DgRL drives the isolated ground of the instrumentation towards a voltage which is fixed with respect to the common mode potential on the subject, thereby minimizing the common mode voltage at the input of the front-end. The paper provides an analytical derivation of the common mode rejection performances of DgRL as compared to the usual grounding circuit or Driven Right Leg (DRL) loop. DgRL is integrated in a bio-potential acquisition system to show how it can reduce the common mode signal by more than 70 dB with respect to standard patient grounding. This value is at least 30 dB higher than the reduction achievable with DRL, making DgRL suitable for single-ended front-ends, like those based on active electrodes. EEG signal acquisition is performed to show how the system can successfully cancel power line interference without any need for differential acquisition, signal post-processing or filtering."} {"_id": "21880e93aa876fa01ab994d2fc05b86b525933e3", "title": "Head Nod Detection from a Full 3D Model", "text": "As a non-verbal communication mean, head gestures play an important role in face-to-face conversation and recognizing them is therefore of high value for social behavior analysis or Human-Robot Interaction (HRI) modelling. Among the various gestures, the head nod is the most common one and can convey agreement or emphasis. In this paper, we propose a novel nod detection approach based on a full 3D face centered rotation model. Compared to previous approaches, we make two contributions. Firstly, the head rotation dynamic is computed within the head coordinate frame instead of the camera coordinate frame, leading to pose-invariant gesture dynamics. Secondly, besides the rotation parameters, a feature related to the head rotation axis is proposed so that nod-like false positives due to body movements can be eliminated.
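The ensemble framework in the stream-classification abstract above equips each classifier with a novel class detector. A much-simplified sketch of the underlying idea is shown below: flag instances far from every known-class centroid (a hypothetical stand-in, not the paper's detection module).

```python
# Simplified novel-class detection: flag instances farther than `radius`
# from every known-class centroid (a stand-in for the paper's detector).
import numpy as np

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def detect_novel(X, centroids, radius):
    dists = np.stack([np.linalg.norm(X - mu, axis=1)
                      for mu in centroids.values()])
    return dists.min(axis=0) > radius   # True => candidate novel class

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)
centroids = fit_centroids(X_train, y_train)
X_new = rng.normal(10.0, 1.0, (5, 2))   # instances of an unseen class
print(detect_novel(X_new, centroids, radius=4.0))
```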
The experiments on two-party and four-party conversations demonstrate the validity of the approach."} {"_id": "c5eed6bde6e6a68215c0b4c494fe03f037bcd3d2", "title": "Algorithmic Differentiation in Python with AlgoPy", "text": "Many programs for scientific computing in Python are based on NumPy and therefore make heavy use of numerical linear algebra (NLA) functions, vectorized operations, slicing and broadcasting. AlgoPy provides the means to compute derivatives of arbitrary order and Taylor approximations of such programs. The approach is based on a combination of univariate Taylor polynomial arithmetic and matrix calculus in the (combined) forward/reverse mode of Algorithmic Differentiation (AD). In contrast to existing AD tools, vectorized operations and NLA functions are not considered to be a sequence of scalar elementary functions. Instead, dedicated algorithms for the matrix product, matrix inverse and the Cholesky, QR, and symmetric eigenvalue decomposition are implemented in AlgoPy. We discuss the reasons for this alternative approach and explain the underlying idea. Examples illustrate how AlgoPy can be used from a user's point of view."} {"_id": "4630706e9f864afd5c66cb1bda7d87c2522b4eee", "title": "Energy detection spectrum sensing on DPSK modulation transceiver using GNU radio", "text": "Cognitive radio (CR) can alleviate spectrum shortage by permitting secondary users to coexist with incumbent users in licensed spectrum bands, while causing no interference to incumbent communications. Spectrum sensing is the essential enabling technology of CR: it detects the presence of primary users' (PUs) or licensed users' signals and exploits the spectrum holes. Hence, this paper addresses the testing of an energy detection spectrum sensing method for a Differential Phase Shift Keying (DPSK) transceiver by utilizing the open-source software GNU Radio. Moreover, the effect of various simulated channels is examined, for example, the dynamic channel model with Rayleigh fading, the dynamic channel model with Rician fading, the frequency-selective fading channel model with Rayleigh fading, and the frequency-selective fading channel model with Rician fading."} {"_id": "b25bf94033b726d85b44446551e0b4936be2e4f7", "title": "Nerve root sedimentation sign: evaluation of a new radiological sign in lumbar spinal stenosis.", "text": "STUDY DESIGN\nRetrospective case-referent study.\n\n\nOBJECTIVE\nTo assess whether the new sedimentation sign discriminates between nonspecific low back pain (LBP) and symptomatic lumbar spinal stenosis (LSS).\n\n\nSUMMARY OF BACKGROUND DATA\nIn the diagnosis of LSS, radiologic findings do not always correlate with clinical symptoms, and additional diagnostic signs are needed. In patients without LSS, we observe the sedimentation of lumbar nerve roots to the dorsal part of the dural sac on supine magnetic resonance image scans. In patients with symptomatic and morphologic central LSS, this sedimentation is rarely seen. We named this phenomenon \"sedimentation sign\" and defined the absence of sedimenting nerve roots as a positive sedimentation sign for the diagnosis of LSS.\n\n\nMETHODS\nThis study included 200 patients. Patients in the LSS group (n = 100) showed claudication with or without LBP and leg pain, a cross-sectional area <80 mm\u00b2, and a walking distance <200 m; patients in the LBP group (n = 100) had LBP, no leg pain, no claudication, a cross-sectional area of the dural sac >120 mm\u00b2, and a walking distance >1000 m.
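The GNU Radio abstract above relies on energy detection: average the received signal energy and compare it against a threshold above the noise floor. A minimal NumPy sketch follows; the threshold factor is hypothetical, since real detectors derive it from a target false-alarm probability.

```python
# Energy-detection sensing: declare the band occupied when the average
# sample energy exceeds a threshold above the noise floor. The threshold
# factor here is hypothetical.
import numpy as np

def energy_detect(samples, noise_power, threshold_factor=1.2):
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold_factor * noise_power, energy

rng = np.random.default_rng(1)
n = 4096
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
carrier = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))  # primary user
occupied, e = energy_detect(noise + carrier, noise_power=1.0)
print("PU detected:", occupied, "| average energy:", round(e, 3))
```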
The frequency of a positive sedimentation sign was compared between the 2 groups, and intraobserver and interobserver reliability were assessed in a random subsample (n = 20).\n\n\nRESULTS\nA positive sedimentation sign was identified in 94 patients in the LSS group (94%; 95% confidence interval, 90%-99%) but none in the LBP group (0%; 95% confidence interval, 0%-4%). Reliability was kappa = 1.0 (intraobserver) and kappa = 0.93 (interobserver). There was no difference in the detection of the sign between segmental levels L1-L5 in the LSS group.\n\n\nCONCLUSION\nA positive sedimentation sign exclusively and reliably occurs in patients with LSS, suggesting its usefulness in clinical practice. Future accuracy studies will address its sensitivity and specificity. If they confirm the sign's high specificity, a positive sedimentation sign can rule in LSS, and, with a high sensitivity, a negative sedimentation sign can rule out LSS."} {"_id": "e8ad2e8e3aae9edf914b4d890d0b8bc8db47fbed", "title": "Combining Gradient Boosting Machines with Collective Inference to Predict Continuous Values", "text": "Gradient boosting of regression trees is a competitive procedure for learning predictive models of continuous data that fits the data with an additive non-parametric model. The classic version of gradient boosting assumes that the data is independent and identically distributed. However, relational data with interdependent, linked instances is now common and the dependencies in such data can be exploited to improve predictive performance. Collective inference is one approach to exploit relational correlation patterns and significantly reduce classification error. However, much of the work on collective learning and inference has focused on discrete prediction tasks rather than continuous. In this work, we investigate how to combine these two paradigms to improve regression in relational domains. Specifically, we propose a boosting algorithm for learning a collective inference model that predicts a continuous target variable. In the algorithm, we learn a basic relational model, collectively infer the target values, and then iteratively learn relational models to predict the residuals. We evaluate our proposed algorithm on a real network dataset and show that it outperforms alternative boosting methods. However, our investigation also revealed that the relational features interact to produce better predictions."} {"_id": "a0456c27cdd58f197032c1c8b4f304f09d4c9bc5", "title": "Multiple Classifier Systems", "text": "Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a weighted vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly."} {"_id": "22df3c0d055b7f65871068dfcd83d10f0a4fe2e4", "title": "Optimizing Java Bytecode Using the Soot Framework: Is It Feasible?", "text": "This paper presents Soot, a framework for optimizing Java bytecode.
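The weighted-vote combination described in the ensemble abstract above can be written in a few lines; in this sketch the per-classifier weights are hypothetical accuracies.

```python
# Weighted-vote combination of several classifiers' label predictions;
# the weights here are hypothetical per-classifier accuracies.
import numpy as np

def weighted_vote(predictions, weights):
    labels = np.unique(predictions)
    # score of label c on each sample = total weight of voters for c
    scores = np.stack([weights @ (predictions == c) for c in labels])
    return labels[scores.argmax(axis=0)]

preds = np.array([[0, 1, 1, 0],     # classifier 1
                  [0, 1, 0, 0],     # classifier 2
                  [1, 1, 0, 0]])    # classifier 3
print(weighted_vote(preds, weights=np.array([0.5, 0.3, 0.2])))
```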
The framework is implemented in Java and supports three intermediate representations for representing Java bytecode: Baf, a streamlined representation of Java's stack-based bytecode; Jimple, a typed three-address intermediate representation suitable for optimization; and Grimp, an aggregated version of Jimple. Our approach to class file optimization is to first convert the stack-based bytecode into Jimple, a three-address form more amenable to traditional program optimization, and then convert the optimized Jimple back to bytecode. In order to demonstrate that our approach is feasible, we present experimental results showing the effects of processing class files through our framework. In particular, we study the techniques necessary to effectively translate Jimple back to bytecode, without losing performance. Finally, we demonstrate that class file optimization can be quite effective by showing the results of some basic optimizations using our framework. Our experiments were done on ten benchmarks, including seven SPECjvm98 benchmarks, and were executed on five different Java virtual machine implementations."} {"_id": "3ef87e07d6ffc3c58cad602f792f96fe48fb0b8f", "title": "The Java Language Specification", "text": "class RawMembers<T> extends NonGeneric implements Collection<String> { static Collection<NonGeneric> cng = new ArrayList<NonGeneric>(); public static void main(String[] args) { RawMembers rw = null; Collection<Number> cn = rw.myNumbers(); // OK Iterator<String> is = rw.iterator(); // Unchecked warning Collection<NonGeneric> cnn = rw.cng; // OK, static member } } In this program (which is not meant to be run), RawMembers<T> inherits the method: Iterator<String> iterator() from the Collection<String> superinterface. The raw type RawMembers inherits iterator() from Collection, the erasure of Collection<String>, which means that the return type of iterator() in RawMembers is Iterator. As a result, the attempt to assign rw.iterator() to Iterator<String> requires an unchecked conversion, so a compile-time unchecked warning is issued. In contrast, RawMembers inherits myNumbers() from the NonGeneric class whose erasure is also NonGeneric. Thus, the return type of myNumbers() in RawMembers is not erased, and the attempt to assign rw.myNumbers() to Collection<Number> requires no unchecked conversion, so no compile-time unchecked warning is issued. Similarly, the static member cng retains its parameterized type even when accessed through an object of raw type. Note that access to a static member through an instance is considered bad style and is discouraged. This example reveals that certain members of a raw type are not erased, namely static members whose types are parameterized, and members inherited from a non-generic supertype. Raw types are closely related to wildcards. Both are based on existential types. Raw types can be thought of as wildcards whose type rules are deliberately unsound, to accommodate interaction with legacy code. Historically, raw types preceded wildcards; they were first introduced in GJ, and described in the paper Making the future safe for the past: Adding Genericity to the Java Programming Language by Gilad Bracha, Martin Odersky, David Stoutamire, and Philip Wadler, in Proceedings of the ACM Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 98), October 1998. 4.9 Intersection Types An intersection type takes the form T1 & ... & Tn (n > 0), where Ti (1 \u2264 i \u2264 n) are types.
Intersection types can be derived from type parameter bounds (\u00a74.4) and cast expressions (\u00a715.16); they also arise in the processes of capture conversion (\u00a75.1.10) and least upper bound computation (\u00a74.10.4). The values of an intersection type are those objects that are values of all of the types Ti for 1 \u2264 i \u2264 n. Every intersection type T1 & ... & Tn induces a notional class or interface for the purpose of identifying the members of the intersection type, as follows: \u2022 For each Ti (1 \u2264 i \u2264 n), let Ci be the most specific class or array type such that Ti <: Ci. Then there must be some Ck such that Ck <: Ci for any i (1 \u2264 i \u2264 n), or a compile-time error occurs. \u2022 For 1 \u2264 j \u2264 n, if Tj is a type variable, then let Tj' be an interface whose members are the same as the public members of Tj; otherwise, if Tj is an interface, then let Tj' be Tj. \u2022 If Ck is Object, a notional interface is induced; otherwise, a notional class is induced with direct superclass Ck. This class or interface has direct superinterfaces T1', ..., Tn' and is declared in the package in which the intersection type appears. The members of an intersection type are the members of the class or interface it induces. It is worth dwelling upon the distinction between intersection types and the bounds of type variables. Every type variable bound induces an intersection type. This intersection type is often trivial, consisting of a single type. The form of a bound is restricted (only the first element may be a class or type variable, and only one type variable may appear in the bound) to preclude certain awkward situations coming into existence. However, capture conversion can lead to the creation of type variables whose bounds are more general, such as array types."} {"_id": "53732eff5c29585bc84d0ec280b0923639bf740e", "title": "Reverse Engineering of the Interaction Diagrams from C++ Code", "text": "In object-oriented programming, the functionalities of a system result from the interactions (message exchanges) among the objects allocated by the system. While designing object interactions is far more complex than designing the object structure in forward engineering, the problem of understanding object interactions during code evolution is even harder, because the related information is spread across the code. In this paper, a technique for the automatic extraction of UML interaction diagrams from C++ code is proposed. The algorithm is based on a static, conservative flow analysis that approximates the behavior of the system in any execution and for any possible input. Applicability of the approach to large software is achieved by means of two mechanisms: partial analysis and focusing. Usage of our method on a real-world, large C++ system confirmed its viability."} {"_id": "9a292e0d862debccffa04396cd5bceb5d866de18", "title": "Compilers: Principles, Techniques, and Tools", "text": ""} {"_id": "e81ee8f90fa0e67d2e40dc794809dd1a942853aa", "title": "A Fast Algorithm for Finding Dominators in a Flowgraph", "text": "A fast algorithm for finding dominators in a flowgraph is presented. The algorithm uses depth-first search and an efficient method of computing functions defined on paths in trees. A simple implementation of the algorithm runs in O(m log n) time, where m is the number of edges and n is the number of vertices in the problem graph.
A more sophisticated implementation runs in O(m\u03b1(m, n)) time, where \u03b1(m, n) is a functional inverse of Ackermann's function.\nBoth versions of the algorithm were implemented in Algol W, a Stanford University version of Algol, and tested on an IBM 370/168. The programs were compared with an implementation by Purdom and Moore of a straightforward O(mn)-time algorithm, and with a bit vector algorithm described by Aho and Ullman. The fast algorithm beat the straightforward algorithm and the bit vector algorithm on all but the smallest graphs tested."} {"_id": "3aa8bccb391e05762f79b6ea0164c24f4fee38bf", "title": "P300 amplitude is determined by target-to-target interval.", "text": "P300 event-related brain potential (ERP) measures are affected by target stimulus probability, the number of nontargets preceding the target in the stimulus sequence structure, and interstimulus interval (ISI). Each of these factors contributes to the target-to-target interval (TTI), which also has been found to affect P300. The present study employed a variant of the oddball paradigm and manipulated the number of preceding nontarget stimuli (0, 1, 2, 3) and ISI (1, 2, 4 s) in order to systematically assess TTI effects on P300 values from auditory and visual stimuli. Number of preceding nontargets generally produced stronger effects than ISI in a manner suggesting that TTI determined P300 measures: Amplitude increased as TTI increased for both auditory and visual stimulus conditions, whereas latency tended to decrease with increased TTI. The finding that TTI is a critical determinant of P300 responsivity is discussed within a resource allocation theoretical framework."} {"_id": "8d02d62f324c354c899b217e77ab097d1ddb2a9d", "title": "Ultra Low Power Wake-Up Radios: A Hardware and Networking Survey", "text": "In wireless environments, transmission and reception costs dominate system power consumption, motivating research effort on new technologies capable of reducing the footprint of the radio, paving the way for the Internet of Things. The most important challenge is to reduce power consumption when receivers are idle, the so-called idle-listening cost. One approach proposes switching off the main receiver and introducing new wake-up circuitry capable of detecting an incoming transmission, optionally discriminating the packet destination using addressing, and switching on the main radio only when required. This wake-up receiver technology represents the ultimate frontier in low-power radio communication. In this paper, we present a comprehensive literature review of the research progress in wake-up radio (WuR) hardware and relevant networking software. First, we present an overview of the WuR system architecture, including challenges to hardware design and a comparison of solutions presented throughout the last decade. Next, we present various medium access control and routing protocols as well as diverse ways to exploit WuRs, both as an extension of pre-existing systems and as a new concept to manage low-power networking."} {"_id": "7525328002640456f2b641aa48c0d31660cb62f0", "title": "Experimentable Digital Twins\u2014Streamlining Simulation-Based Systems Engineering for Industry 4.0", "text": "Digital twins represent real objects or subjects with their data, functions, and communication capabilities in the digital world. As nodes within the internet of things, they enable networking and thus the automation of complex value-added chains.
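For the dominator problem in the flowgraph abstract above, the straightforward approach the authors beat is the iterative data-flow computation: Dom(n) equals {n} union the intersection of Dom(p) over all predecessors p of n. A sketch of that baseline (not the paper's near-linear algorithm):

```python
# Baseline iterative dominator computation: Dom(n) = {n} union the
# intersection of Dom(p) over predecessors p. This is the straightforward
# method the paper's near-linear algorithm outperforms.
def dominators(succ, entry):
    nodes = set(succ)
    pred = {n: set() for n in nodes}
    for n, targets in succ.items():
        for t in targets:
            pred[t].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in pred[n]))
                         if pred[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

cfg = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(dominators(cfg, "a"))   # "d" is dominated only by "a" and itself
```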
The application of simulation techniques brings digital twins to life and makes them experimentable; digital twins become experimentable digital twins (EDTs). Initially, these EDTs communicate with each other purely in the virtual world. The resulting networks of interacting EDTs model different application scenarios and are simulated in virtual testbeds, providing new foundations for comprehensive simulation-based systems engineering. Its focus is on EDTs, which become more detailed with every single application. Thus, complete digital representations of the respective real assets and their behaviors are created successively. The networking of EDTs with real assets leads to hybrid application scenarios in which EDTs are used in combination with real hardware, thus realizing complex control algorithms, innovative user interfaces, or mental models for intelligent systems."} {"_id": "f4a82dd4a4788e43baed10d8dda63545b81b6eb0", "title": "Design, kinematics and dynamics modeling of a lower-limb walking assistant robot", "text": "This paper focuses on a lower limb walking assistant robot. This robot has been designed and modeled for two tasks: facilitating walking for healthy people and assisting patients with walking. First, basic concepts of the motion mechanism are presented. Next, its kinematics and dynamics equations are developed. The kinematics analysis has been done using the Denavit-Hartenberg scheme to obtain the position, velocity, and acceleration of the center of mass (COM) of all links. The dynamics equations have been developed using the recursive Lagrange method, and finally the inverse dynamics problem has been solved. Simulation results that validate the proposed model are discussed."} {"_id": "e1dd6e567c6f476b5d2ce8adb2a11f001c88ee62", "title": "Metrics for text entry research: an evaluation of MSD and KSPC, and a new unified error metric", "text": "We describe and identify shortcomings in two statistics recently introduced to measure accuracy in text entry evaluations: the minimum string distance (MSD) error rate and keystrokes per character (KSPC). To overcome the weaknesses, a new framework for error analysis is developed and demonstrated. It combines the analysis of the presented text, input stream (keystrokes), and transcribed text. New statistics include a unified total error rate, combining two constituent error rates: the corrected error rate (errors committed but corrected) and the not corrected error rate (errors left in the transcribed text). The framework includes other measures, including error correction efficiency, participant conscientiousness, utilised bandwidth, and wasted bandwidth. A text entry study demonstrating the new methodology is described."} {"_id": "6e62da624a263aaff9b5c10ae08f1859a03cfb76", "title": "Lyapunov Approach for the stabilization of the Inverted Spherical Pendulum", "text": "A nonlinear controller is presented for the stabilization of the spherical inverted pendulum system. The control strategy is based on the Lyapunov approach in conjunction with LaSalle's invariance principle. The proposed controller is able to bring the pendulum to the unstable upright equilibrium point with the position of the movable base at the origin.
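The Denavit-Hartenberg kinematics used in the walking-assistant-robot abstract above builds each link-to-link transform from four parameters (theta, d, a, alpha). A sketch with hypothetical joint values:

```python
# Link-to-link homogeneous transform from classic Denavit-Hartenberg
# parameters (theta, d, a, alpha); joint values below are hypothetical.
import numpy as np

def dh_transform(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# chain two planar "leg" joints and read off the end-point position
T = dh_transform(np.pi / 6, 0.0, 0.4, 0.0) @ dh_transform(-np.pi / 4, 0.0, 0.4, 0.0)
print("end point:", T[:3, 3])
```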
The obtained closed-loop system has a very large domain of attraction, which can be made as large as desired, for any initial position of the pendulum above the horizontal plane."} {"_id": "24927a463e19c4348802f7f286acda31a035715b", "title": "A Data Structure for Dynamic Trees", "text": "We propose a data structure to maintain a collection of vertex-disjoint trees under a sequence of two kinds of operations: a link operation that combines two trees into one by adding an edge, and a cut operation that divides one tree into two by deleting an edge. Our data structure requires O(log n) time per operation when the time is amortized over a sequence of operations. Using our data structure, we obtain new fast algorithms for the following problems:\n (1) Computing deepest common ancestors.\n (2) Solving various network flow problems including finding maximum flows, blocking flows, and acyclic flows.\n (3) Computing certain kinds of constrained minimum spanning trees.\n (4) Implementing the network simplex algorithm for the transshipment problem.\n Our most significant application is (2); we obtain an O(mn log n)-time algorithm to find a maximum flow in a network of n vertices and m edges, beating by a factor of log n the fastest algorithm previously known for sparse graphs."} {"_id": "7e5fbba6437067da53be67c71ca6022b469a2e94", "title": "A survey of variability modeling in industrial practice", "text": "Over more than two decades, numerous variability modeling techniques have been introduced in academia and industry. However, little is known about the actual use of these techniques. While dozens of experience reports on software product line engineering exist, only very few focus on variability modeling. This lack of empirical data threatens the validity of existing techniques, and hinders their improvement. As part of our effort to improve empirical understanding of variability modeling, we present the results of a survey questionnaire distributed to industrial practitioners. These results provide insights into application scenarios and perceived benefits of variability modeling, the notations and tools used, the scale of industrial models, and experienced challenges and mitigation strategies."} {"_id": "610bc4ab4fbf7f95656b24330eb004492e63ffdf", "title": "NONCONVEX MODEL FOR FACTORING NONNEGATIVE MATRICES", "text": "We study the Nonnegative Matrix Factorization problem which approximates a nonnegative matrix by a low-rank factorization. This problem is particularly important in Machine Learning, and arises in a large number of applications. Unfortunately, the original formulation is ill-posed and NP-hard. In this paper, we propose a row sparse model based on Row Entropy Minimization to solve the NMF problem under the separability assumption, which states that each data point is a convex combination of a few distinct data columns. We utilize the concentration of the entropy function and the \u2113\u221e norm to concentrate the energy on the least number of latent variables. We prove that under the separability assumption, our proposed model robustly recovers data columns that generate the dataset, even when the data is corrupted by noise.
We empirically justify the robustness of the proposed model and show that it is significantly more robust than the state-of-the-art separable NMF algorithms."} {"_id": "af657f4f268d31e8f1ca8b9702bb72c2f788b161", "title": "Multi-label classification by exploiting label correlations", "text": "a Department of Computer Science and Technology, Tongji University, Shanghai 201804, PR China b Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2G7, Canada c Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 201804, PR China d System Research Institute, Polish Academy of Sciences, Warsaw, Poland e School of Software, Jiangxi Agricultural University, Nanchang 330013, PR China"} {"_id": "f829fa5686895ec831dd157f88949f79976664a7", "title": "Variational Bayesian Inference for Big Data Marketing Models", "text": "Hierarchical Bayesian approaches play a central role in empirical marketing as they yield individual-level parameter estimates that can be used for targeting decisions. MCMC methods have been the methods of choice for estimating hierarchical Bayesian models as they are capable of providing accurate individual-level estimates. However, MCMC methods are computationally prohibitive and do not scale well when applied to massive data sets that have become common in the current era of \u201cBig Data\u201d. We introduce to the marketing literature a new class of Bayesian estimation techniques known as variational Bayesian (VB) inference. These methods tackle the scalability challenge via a deterministic optimization approach to approximate the posterior distribution and yield accurate estimates at a fraction of the computational cost associated with simulation-based MCMC methods. We exploit and extend recent developments in variational Bayesian inference and highlight how two VB estimation approaches \u2013 Mean-field VB (which is analogous to Gibbs sampling) for conjugate models and Fixed-form VB (which is analogous to Metropolis-Hastings) for nonconjugate models \u2013 can be effectively combined for estimating complex marketing models. We also show how recent advances in parallel computing and in stochastic optimization can be used to further enhance the speed of these VB methods. Using simulated as well as real data sets, we apply the VB approaches to several commonly used marketing models (e.g. mixed linear, logit, selection, and hierarchical ordinal logit models), and demonstrate how the VB inference is widely applicable for marketing problems."} {"_id": "ee38164081011364ee9618abcc166b3dfd749740", "title": "Uplift Modeling with Multiple Treatments and General Response Types", "text": "Randomized experiments have been used to assist decision-making in many areas. They help people select the optimal treatment for the test population with a certain statistical guarantee. However, subjects can show significant heterogeneity in response to treatments. The problem of customizing treatment assignment based on subject characteristics is known as uplift modeling, differential response analysis, or personalized treatment learning in the literature. A key feature for uplift modeling is that the data is unlabeled. It is impossible to know whether the chosen treatment is optimal for an individual subject because response under alternative treatments is unobserved. This presents a challenge to both the training and the evaluation of uplift models.
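For reference alongside the row-entropy NMF abstract above, the classic multiplicative-update baseline (Lee-Seung style, not the paper's separable model) looks like this:

```python
# Classic multiplicative-update NMF (Lee-Seung style), shown only as a
# baseline; the paper's row-entropy separable model is different.
import numpy as np

def nmf(X, rank, iters=300, eps=1e-9):
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], rank))
    H = rng.random((rank, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H, stays nonnegative
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W, stays nonnegative
    return W, H

X = np.random.default_rng(1).random((30, 20))
W, H = nmf(X, rank=5)
print("reconstruction error:", np.linalg.norm(X - W @ H))
```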
In this paper we describe how to obtain an unbiased estimate of the key performance metric of an uplift model, the expected response. We present a new uplift algorithm which creates a forest of randomized trees. The trees are built with a splitting criterion designed to directly optimize their uplift performance based on the proposed evaluation method. Both the evaluation method and the algorithm apply to an arbitrary number of treatments and general response types. Experimental results on synthetic data and industry-provided data show that our algorithm leads to significant performance improvement over other applicable methods."} {"_id": "4328013cae88f6fe413bb4fe302bfd3909d5276e", "title": "Automated Diagnosis of Coronary Artery Disease Based on Data Mining and Fuzzy Modeling", "text": "A fuzzy rule-based decision support system (DSS) is presented for the diagnosis of coronary artery disease (CAD). The system is automatically generated from an initial annotated dataset, using a four stage methodology: 1) induction of a decision tree from the data; 2) extraction of a set of rules from the decision tree, in disjunctive normal form, and formulation of a crisp model; 3) transformation of the crisp set of rules into a fuzzy model; and 4) optimization of the parameters of the fuzzy model. The dataset used for the DSS generation and evaluation consists of 199 subjects, each one characterized by 19 features, including demographic and history data, as well as laboratory examinations. Tenfold cross-validation is employed, and the average sensitivity and specificity obtained is 62% and 54%, respectively, using the set of rules extracted from the decision tree (first and second stages), while the average sensitivity and specificity increase to 80% and 65%, respectively, when the fuzzification and optimization stages are used. The system offers several advantages since it is automatically generated, it provides CAD diagnosis based on easily and noninvasively acquired features, and is able to provide interpretation for the decisions made."} {"_id": "45bc88a5c627d5a54e536e3b8440e2ea76cee1c0", "title": "Predicting Source Code Quality with Static Analysis and Machine Learning", "text": "This paper investigates whether it is possible to predict source code quality based on static analysis and machine learning. The proposed approach includes an Eclipse plugin and uses a combination of peer review/human rating, static code analysis, and classification methods. As training data, public data and student hand-ins in programming are used. Based on this training data, new and uninspected source code can be accurately classified as \u201cwell written\u201d or \u201cbadly written\u201d. This is a step towards feedback in an interactive environment without peer assessment."} {"_id": "51a5fae9c638c33882b91fcf70dd40d0c3a75fcc", "title": "Traffic Accident Data Mining Using Machine Learning Paradigms", "text": "Engineers and researchers in the automobile industry have tried to design and build safer automobiles, but traffic accidents are unavoidable. Patterns involved in dangerous crashes could be detected if we develop a prediction model that automatically classifies the type of injury severity of various traffic accidents. These behavioral and roadway patterns are useful in the development of traffic safety control policy.
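The uplift-modeling abstract above learns treatment effects despite never observing counterfactual responses. A common simple baseline (a two-model "T-learner", not the paper's randomized-tree splitting criterion) estimates uplift as the difference between two outcome models, sketched below on synthetic data:

```python
# Two-model ("T-learner") uplift baseline: fit separate outcome models on
# treated and control subjects and subtract the predictions. This is a
# common reference point, not the paper's randomized-tree criterion.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
treated = rng.integers(0, 2, n).astype(bool)
# synthetic response: treatment helps only when feature 0 is positive
y = X[:, 1] + treated * np.maximum(X[:, 0], 0) + rng.normal(0, 0.1, n)

m_t = RandomForestRegressor(random_state=0).fit(X[treated], y[treated])
m_c = RandomForestRegressor(random_state=0).fit(X[~treated], y[~treated])
uplift = m_t.predict(X) - m_c.predict(X)
print("mean uplift, x0 > 0:", uplift[X[:, 0] > 0].mean())
print("mean uplift, x0 <= 0:", uplift[X[:, 0] <= 0].mean())
```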
We believe that to obtain the greatest possible accident reduction effects with limited budgetary resources, it is important that measures be based on scientific and objective surveys of the causes of accidents and severity of injuries. This paper presents some models to predict the severity of injury that occurred during traffic accidents using three machine-learning approaches. We considered neural networks trained using hybrid learning approaches, decision trees and a concurrent hybrid model involving decision trees and neural networks. Experimental results reveal that, among the machine learning paradigms considered, the hybrid decision tree-neural network approach outperformed the individual approaches."} {"_id": "a2ba37db8d15abf65c2286ad2ec526f47437b92f", "title": "Design Procedure of Dual-Stator Spoke-Array Vernier Permanent-Magnet Machines", "text": "The dual-stator spoke-array vernier permanent-magnet (DSSA VPM) machines proposed in the previous papers have been proven to have high torque density and high power factor. However, the design procedure for DSSA VPM machines has not been well established, and there is little design experience to be followed, which makes the DSSA VPM machine design quite difficult. This paper presents the detailed DSSA VPM machine design procedure, including the decision of initial values for the design parameters, the analytical sizing equation, geometric size relationships, and so on. In order to get reasonable initial values for the design parameters, which can reduce the number of design iteration loops, the influence of the key parameters, such as rotor/stator pole combination, slot opening, magnet thickness, etc., on the performances is analyzed based on the finite-element algorithm (FEA) in this paper, and the analysis results can be regarded as design experience during the selection process of the initial values. After that, the analytical sizing equation and geometric relationship formulas are derived and can be used to obtain and optimize the size data of the DSSA VPM machines with little time consumption. The combination of the analytical and FEA methods makes the design procedure time-effective and reliable. Finally, the design procedure is validated by experiments on a DSSA VPM prototype with 2000 N\u00b7m."} {"_id": "4686b6bdd80a721d052f3da92620c21035aa2c68", "title": "Agent-Based Modeling of the Spread of Influenza-Like Illness in an Emergency Department: A Simulation Study", "text": "The objective of this paper was to develop an agent-based modeling framework in order to simulate the spread of influenza virus infection on a layout based on a representative hospital emergency department in Winnipeg, Canada. In doing so, the study complements mathematical modeling techniques for disease spread, as well as modeling applications focused on the spread of antibiotic-resistant nosocomial infections in hospitals. Twenty different emergency department scenarios were simulated, with further simulation of four infection control strategies. The agent-based modeling approach represents systems modeling, in which the emergency department was modeled as a collection of agents (patients and healthcare workers) and their individual characteristics, behaviors, and interactions. The framework was coded in C++ using Qt4 libraries running under the Linux operating system. A simple ordinary least squares (OLS) regression was used to analyze the data, in which the percentage of patients that became infected in one day within the simulation was the dependent variable.
The results suggest that within the given instance context, patient-oriented infection control policies (alternate treatment streams, masking symptomatic patients) tend to have a larger effect than policies that target healthcare workers. The agent-based modeling framework is a flexible tool that can be made to reflect any given environment; it is also a decision support tool for practitioners and policymakers to assess the relative impact of infection control strategies. The framework illuminates scenarios worthy of further investigation, as well as counterintuitive findings."} {"_id": "900c24c6f783db6f6239fe44989814d4b7deaa9b", "title": "The Sorcerer's Apprentice Guide to Fault Attacks", "text": "The effect of faults on electronic systems has been studied since the 1970s when it was noticed that radioactive particles caused errors in chips. This led to further research on the effect of charged particles on silicon, motivated by the aerospace industry, which was becoming concerned about the effect of faults in airborne electronic systems. Since then various mechanisms for fault creation and propagation have been discovered and researched. This paper covers the various methods that can be used to induce faults in semiconductors and exploit such errors maliciously. Several examples of attacks stemming from the exploiting of faults are explained. Finally a series of countermeasures to thwart these attacks are described."} {"_id": "8c0062c200cfc4fb740d4c64d310068530dec58b", "title": "Health and well-being benefits of spending time in forests: systematic review", "text": "BACKGROUND\nNumerous studies have reported that spending time in nature is associated with the improvement of various health outcomes and well-being. This review evaluated the physical and psychological benefits of a specific type of exposure to nature, forest therapy.\n\n\nMETHOD\nA literature search was carried out using MEDLINE, PubMed, ScienceDirect, EMBASE, and ProQuest databases and manual searches from inception up to December 2016. Key words: \"Forest\" or \"Shinrin-Yoku\" or \"Forest bath\" AND \"Health\" or \"Wellbeing\". The methodological quality of each randomized controlled trial (RCT) was assessed according to the Cochrane risk of bias (ROB) tool.\n\n\nRESULTS\nSix RCTs met the inclusion criteria. Participants' ages ranged from 20 to 79\u00a0years. Sample size ranged from 18 to 99. Populations studied varied from young healthy university students to elderly people with chronic disease. Studies reported the positive impact of forest therapy on hypertension (n\u2009=\u20092), cardiac and pulmonary function (n\u2009=\u20091), immune function (n\u2009=\u20092), inflammation (n\u2009=\u20093), oxidative stress (n\u2009=\u20091), stress (n\u2009=\u20091), stress hormone (n\u2009=\u20091), anxiety (n\u2009=\u20091), depression (n\u2009=\u20092), and emotional response (n\u2009=\u20093). All studies included in this review had a high ROB.\n\n\nCONCLUSION\nForest therapy may play an important role in health promotion and disease prevention. However, the lack of high-quality studies limits the strength of results, rendering the evidence insufficient to establish clinical practice guidelines for its use.
More robust RCTs are warranted."} {"_id": "d8b096030b8f7bd0c39ef9e3217b533e298a8985", "title": "Character-based feature extraction with LSTM networks for POS-tagging task", "text": "In this paper we describe work in progress on designing continuous vector space word representations able to map unseen data adequately. We propose an LSTM-based feature extraction layer that reads in a sequence of characters corresponding to a word and outputs a single fixed-length real-valued vector. We then test our model on a POS tagging task on four typologically different languages. The results of the experiments suggest that the model can offer a solution to the out-of-vocabulary words problem, as in a comparable setting its OOV accuracy improves over that of a state-of-the-art tagger."} {"_id": "5f93df0c298a87e23be75f51fe27bd96d3f9d65b", "title": "Source Localization in Wireless Sensor Networks From Signal Time-of-Arrival Measurements", "text": "Recent advances in wireless sensor networks have led to renewed interest in the problem of source localization. Source localization has a broad range of applications such as emergency rescue, asset inventory, and resource management. Among various measurement models, one important and practical source signal measurement is the received signal time of arrival (TOA) at a group of collaborative wireless sensors. Without a time-stamp at the transmitter, in traditional approaches, these received TOA measurements are subtracted pairwise to form time-difference of arrival (TDOA) data for source localization, thereby leading to a 3-dB loss in signal-to-noise ratio (SNR). We take a different approach by directly applying the original measurement model without the subtraction preprocessing. We present two new methods that utilize semidefinite programming (SDP) relaxation for direct source localization. We further address the issue of robust estimation given measurement errors and inaccuracy in the locations of receiving sensors. Our results demonstrate some potential advantages of source localization based on the direct TOA data over time-difference preprocessing."} {"_id": "c1ff5842b61d20ed61c49d1080c2c44e5a0ac015", "title": "Design, modeling and control of an omni-directional aerial vehicle", "text": "In this paper we present the design and control of a novel six degrees-of-freedom aerial vehicle. Based on a static force and torque analysis for generic actuator configurations, we derive an eight-rotor configuration that maximizes the vehicle's agility in any direction. The proposed vehicle design possesses full force and torque authority in all three dimensions. A control strategy that allows for exploiting the vehicle's decoupled translational and rotational dynamics is introduced. A prototype of the proposed vehicle design is built using reversible motor-propeller actuators and is capable of flying at any orientation. Preliminary experimental results demonstrate the feasibility of the novel design and the capabilities of the vehicle."} {"_id": "bf8a0014ac21ba452c38d27bc7d930c265c32c60", "title": "High Level Sensor Data Fusion Approaches For Object Recognition In Road Environment", "text": "Application of high level fusion approaches demonstrates a sequence of significant advantages in multi sensor data fusion and automotive safety fusion systems are no exception to this. High level fusion can be applied to automotive sensor networks with complementary or/and redundant field of views.
The advantage of this approach is that it ensures system modularity and allows benchmarking, as it does not permit feedbacks and loops inside the processing. In this paper two specific high level data fusion approaches are described including a brief architectural and algorithmic presentation. These approaches differ mainly in their data association part: (a) track level fusion approach solves it with the point to point association with emphasis on object continuity and multidimensional assignment, and (b) grid based fusion approach that proposes a generic way to model the environment and to perform sensor data fusion. The test case for these approaches is a multi sensor equipped PReVENT/ProFusion2 truck demonstrator vehicle."} {"_id": "c873b2af7fd26bf6df7196bf077905da9bcb3b4d", "title": "A simple stimulus generator for testing single-channel monopulse processor", "text": "The monopulse antenna tracking system is the preferred choice where higher pointing accuracy is required because of its inherent advantages. There are three possible configurations for realizing a monopulse tracking system. One of the configurations is called the single channel monopulse tracking system as it requires only one down converter chain. In this configuration, the sum and both (AZ/EL) error signals are combined to reduce the required number of down converters. A single channel monopulse processor is a vital subsystem of a single channel monopulse tracking system which extracts the pointing error information from the IF signal. During development, these processors need to be tested for their functionality in the laboratory, which requires a stimulus generator. The stimulus generator generates an IF signal which mimics the real time signal and can be used for debugging and functional verification. This paper presents a simple approach for realizing a stimulus generator for a single channel monopulse processor. A stimulus generator has been developed using this approach and has been used for laboratory testing of a single channel monopulse processor. The tested single channel monopulse processor has been successfully integrated with the earth station antenna tracking chain at NRSC, Hyderabad, India and used for tracking LEO satellites."} {"_id": "4a0e875a97011e8c3789cf10d04d60a61a19f60a", "title": "Pulsed radiofrequency treatment in interventional pain management: mechanisms and potential indications\u2014a review", "text": "The objective of this review is to evaluate the efficacy of Pulsed Radiofrequency (PRF) treatment in chronic pain management in randomized clinical trials (RCTs) and well-designed observational studies. The physics, mechanisms of action, and biological effects are discussed to provide the scientific basis for this promising modality. We systematically searched for clinical studies on PRF. We searched the MEDLINE (PubMed) and EMBASE databases, using the free text terms: pulsed radiofrequency, radio frequency, radiation, isothermal radiofrequency, and combinations of these. We classified the information in two tables, one focusing only on RCTs and another containing prospective studies. Date of last electronic search was 30 May 2010. The methodological quality of the presented reports was scored using the original criteria proposed by Jadad et al. We found six RCTs that evaluated the efficacy of PRF, one against corticosteroid injection, one against sham intervention, and the rest against conventional RF thermocoagulation.
Two trials were conducted in patients with lower back pain due to lumbar zygapophyseal joint pain, one in cervical radicular pain, one in lumbosacral radicular pain, one in trigeminal neuralgia, and another in chronic shoulder pain. From the available evidence, the use of PRF to the dorsal root ganglion in cervical radicular pain is compelling. With regards to its lumbosacral counterpart, the use of PRF cannot be similarly advocated in view of the methodological quality of the included study. PRF application to the suprascapular nerve was found to be as efficacious as intra-articular corticosteroid in patients with chronic shoulder pain. The use of PRF in lumbar facet arthropathy and trigeminal neuralgia was found to be less effective than conventional RF thermocoagulation techniques."} {"_id": "1267fe36b5ece49a9d8f913eb67716a040bbcced", "title": "On the limited memory BFGS method for large scale optimization", "text": "We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir, which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint. The results show that for some problems the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method and prove global convergence on uniformly convex problems."} {"_id": "146f6f6ed688c905fb6e346ad02332efd5464616", "title": "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", "text": "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO."} {"_id": "2abe6b9ea1b13653b7384e9c8ef14b0d87e20cfc", "title": "RCV1: A New Benchmark Collection for Text Categorization Research", "text": "Reuters Corpus Volume I (RCV1) is an archive of over 800,000 manually categorized newswire stories recently made available by Reuters, Ltd. for research purposes. Use of this data for research on text categorization requires a detailed understanding of the real world constraints under which the data was produced. Drawing on interviews with Reuters personnel and access to Reuters documentation, we describe the coding policy and quality control procedures used in producing the RCV1 data, the intended semantics of the hierarchical category taxonomies, and the corrections necessary to remove errorful data. We refer to the original data as RCV1-v1, and the corrected data as RCV1-v2.
We benchmark several widely used supervised learning methods on RCV1-v2, illustrating the collection\u2019s properties, suggesting new directions for research, and providing baseline results for future studies. We make available detailed, per-category experimental results, as well as corrected versions of the category assignments and taxonomy structures, via online appendices."} {"_id": "2cb8497f9214735ffd1bd57db645794459b8ff41", "title": "Teaching Machines to Read and Comprehend", "text": "Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure."} {"_id": "19c60ef563531be3d89010f098e12aa90bea2b57", "title": "SEQUENCE BASED HIERARCHICAL CONFLICT-FREE ROUTING STRATEGY OF BI-DIRECTIONAL AUTOMATED GUIDED VEHICLES", "text": "Conflict-free routing is an important problem to be addressed in AGV Systems, in particular when they are bi-directional. This paper considers the problem of routing AGVs in the presence of contingencies. The importance of this work stems from the realization that path-planning approaches are ill-suited to AGVSs subject to contingencies. In this paper, a two stage closed-loop control strategy is proposed. In the first control stage, a pre-planning method proposed by Kim and Tanchoco (1991) is used to establish the fastest conflict-free routes for AGVs. The objective of the second stage is the avoidance of deadlocks in the presence of interruptions while maintaining the planned routes."} {"_id": "41545565e1efda90e1c993d4e308ef1a301e505b", "title": "Have I really met timing? - validating primetime timing reports with SPICE", "text": "At sign-off everybody is wondering about how good the accuracy of the static timing analysis timing reports generated with PrimeTime really is. Errors can be introduced by STA setup, interconnect modeling, library characterization etc. The claims that path timing calculated by PrimeTime usually is within a few percent of Spice don't help to ease your uncertainty. When the Signal Integrity features were introduced to PrimeTime, there was also a feature added that was hardly announced: PrimeTime can write out timing paths for simulation with Spice that can be used to validate the timing numbers calculated by PrimeTime. By comparing the numbers calculated by PrimeTime to a simulation with Spice for selected paths, the designers can verify the timing and build up confidence or identify errors. This paper will describe a validation flow for PrimeTime timing reports that is based on extraction of the Spice paths, starting the Spice simulation, parsing the simulation results, and creating a report comparing PrimeTime and Spice timing. All these steps are done inside the TCL environment of PrimeTime.
It will describe this flow, what is needed for the Spice simulation, how it can be set up, what can go wrong, and what kind of problems in the STA can be identified."} {"_id": "81780a68b394ba15112ba0be043a589a2e8563ab", "title": "A comparative review of tone-mapping algorithms for high dynamic range video", "text": "Tone-mapping constitutes a key component within the field of high dynamic range (HDR) imaging. Its importance is manifested in the vast number of tone-mapping methods that can be found in the literature, which are the result of active development in the area for more than two decades. Although these can accommodate most requirements for display of HDR images, new challenges arose with the advent of HDR video, calling for additional considerations in the design of tone-mapping operators (TMOs). Today, a range of TMOs exist that do support video material. We are now reaching a point where most camera captured HDR videos can be prepared in high quality without visible artifacts, for the constraints of a standard display device. In this report, we set out to summarize and categorize the research in tone-mapping as of today, distilling the most important trends and characteristics of the tone reproduction pipeline. While this gives a wide overview of the area, we then specifically focus on tone-mapping of HDR video and the problems this medium entails. First, we formulate the major challenges a video TMO needs to address. Then, we provide a description and categorization of each of the existing video TMOs. Finally, by constructing a set of quantitative measures, we evaluate the performance of a number of the operators, in order to give a hint on which can be expected to render the least amount of artifacts. This serves as a comprehensive reference, categorization and comparative assessment of the state-of-the-art in tone-mapping for HDR video."} {"_id": "a30a45a56911a9bfa845f9c86f3c9b29e11807b6", "title": "Energy Management Based on Frequency Approach for Hybrid Electric Vehicle Applications: Fuel-Cell/Lithium-Battery and Ultracapacitors", "text": "This paper presents the ultracapacitors and fuel-cell/lithium-battery connection with an original energy management method for hybrid electric vehicle (HEV) applications. The proposed method is focused on the frequency approach to meet the load energy requirement. The ultracapacitors are connected to the dc link through a buck-boost converter, and the fuel cell is connected to the dc link via a boost converter for the first topology. In the second topology, the lithium battery is connected to the dc link without a converter to avoid the dc-link voltage control. An asynchronous machine is used as the traction motor; it is connected to the dc link through a dc/ac converter (inverter). The main contribution of this paper is focused on HEV energy management according to the dynamics (frequency) of the hybrid sources using polynomial correctors. The performances of the proposed method are evaluated through simulations and experimental tests, using the New European Driving Cycle (NEDC). This study is extended to an aggressive test cycle, such as the U.S.
driving cycle (USDC), to understand the system response and the control performances."} {"_id": "83b01b5f8c127a081b74ff96952a3ec9695d9e50", "title": "Randomized clinical trial of cognitive behavioral therapy (CBT) versus acceptance and commitment therapy (ACT) for mixed anxiety disorders.", "text": "OBJECTIVE\nRandomized comparisons of acceptance-based treatments with traditional cognitive behavioral therapy (CBT) for anxiety disorders are lacking. To address this gap, we compared acceptance and commitment therapy (ACT) to CBT for heterogeneous anxiety disorders.\n\n\nMETHOD\nOne hundred twenty-eight individuals (52% female, mean age = 38, 33% minority) with 1 or more DSM-IV anxiety disorders began treatment following randomization to CBT or ACT; both treatments included behavioral exposure. Assessments at pre-treatment, post-treatment, and 6- and 12-month follow-up measured anxiety-specific (principal disorder Clinical Severity Ratings [CSRs], Anxiety Sensitivity Index, Penn State Worry Questionnaire, Fear Questionnaire avoidance) and non-anxiety-specific (Quality of Life Index [QOLI], Acceptance and Action Questionnaire-16 [AAQ]) outcomes. Treatment adherence, therapist competency ratings, treatment credibility, and co-occurring mood and anxiety disorders were investigated.\n\n\nRESULTS\nCBT and ACT improved similarly across all outcomes from pre- to post-treatment. During follow-up, ACT showed steeper linear CSR improvements than CBT (p < .05, d = 1.26), and at 12-month follow-up, ACT showed lower CSRs than CBT among completers (p < .05, d = 1.10). At 12-month follow-up, ACT reported higher AAQ than CBT (p = .08, d = 0.42; completers: p < .05, d = 0.56), whereas CBT reported higher QOLI than ACT (p < .05, d = 0.42). Attrition and comorbidity improvements were similar; ACT used more non-study psychotherapy at 6-month follow-up. Therapist adherence and competency were good; treatment credibility was higher in CBT.\n\n\nCONCLUSIONS\nOverall improvement was similar between ACT and CBT, indicating that ACT is a highly viable treatment for anxiety disorders."} {"_id": "5c91c11b7fd9847129f36fdc2787c3d6df3a803b", "title": "Phylogeography of Y-chromosome haplogroup I reveals distinct domains of prehistoric gene flow in europe.", "text": "To investigate which aspects of contemporary human Y-chromosome variation in Europe are characteristic of primary colonization, late-glacial expansions from refuge areas, Neolithic dispersals, or more recent events of gene flow, we have analyzed, in detail, haplogroup I (Hg I), the only major clade of the Y phylogeny that is widespread over Europe but virtually absent elsewhere. The analysis of 1,104 Hg I Y chromosomes, which were identified in the survey of 7,574 males from 60 population samples, revealed several subclades with distinct geographic distributions. Subclade I1a accounts for most of Hg I in Scandinavia, with a rapidly decreasing frequency toward both the East European Plain and the Atlantic fringe, but microsatellite diversity reveals that France could be the source region of the early spread of both I1a and the less common I1c. Also, I1b*, which extends from the eastern Adriatic to eastern Europe and declines noticeably toward the southern Balkans and abruptly toward the periphery of northern Italy, probably diffused after the Last Glacial Maximum from a homeland in eastern Europe or the Balkans. In contrast, I1b2 most likely arose in southern France/Iberia. 
Similarly to the other subclades, it underwent a postglacial expansion and marked the human colonization of Sardinia approximately 9,000 years ago."} {"_id": "7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf", "title": "Hierarchical Gaussian Descriptors with Application to Person Re-Identification", "text": "Describing the color and textural information of a person image is one of the most crucial aspects of person re-identification (re-id). In this paper, we present novel meta-descriptors based on a hierarchical distribution of pixel features. Although hierarchical covariance descriptors have been successfully applied to image classification, the mean information of pixel features, which is absent from the covariance, tends to be the major discriminative information for person re-id. To solve this problem, we describe a local region in an image via a hierarchical Gaussian distribution in which both means and covariances are included in their parameters. More specifically, the region is modeled as a set of multiple Gaussian distributions in which each Gaussian represents the appearance of a local patch. The characteristics of the set of Gaussians are again described by another Gaussian distribution. In both steps, we embed the parameters of the Gaussian into a point of Symmetric Positive Definite (SPD) matrix manifold. By changing the way to handle mean information in this embedding, we develop two hierarchical Gaussian descriptors. Additionally, we develop feature norm normalization methods with the ability to alleviate the biased trends that exist in the descriptors. The experimental results conducted on five public datasets indicate that the proposed descriptors achieve remarkably high performance on person re-id."} {"_id": "7a012d0eb6e19e2a40529314f58f4a31f4d5f8ff", "title": "On Covert Acoustical Mesh Networks in Air", "text": "Covert channels can be used to circumvent system and network policies by establishing communications that have not been considered in the design of the computing system. We construct a covert channel between different computing systems that utilizes audio modulation/demodulation to exchange data between the computing systems over the air medium. The underlying network stack is based on a communication system that was originally designed for robust underwater communication. We adapt the communication system to implement covert and stealthy communications by utilizing the ultrasonic frequency range. We further demonstrate how the scenario of covert acoustical communication over the air medium can be extended to multi-hop communications and even to wireless mesh networks. A covert acoustical mesh network can be conceived as a meshed botnet or malnet that is accessible via inaudible audio transmissions. Different applications of covert acoustical mesh networks are presented, including the use for remote keylogging over multiple hops. It is shown that the concept of a covert acoustical mesh network renders many conventional security concepts useless, as acoustical communications are usually not considered.
Finally, countermeasures against covert acoustical mesh networks are discussed, including the use of lowpass filtering in computing systems and a host-based intrusion detection system for analyzing audio input and output in order to detect any irregularities."} {"_id": "3e25f645e7806f3184f3572fea764396d1289c49", "title": "A developmental, mentalization-based approach to the understanding and treatment of borderline personality disorder.", "text": "The precise nature and etiopathogenesis of borderline personality disorder (BPD) continues to elude researchers and clinicians. Yet, increasing evidence from various strands of research converges to suggest that affect dysregulation, impulsivity, and unstable relationships constitute the core features of BPD. Over the last two decades, the mentalization-based approach to BPD has attempted to provide a theoretically consistent way of conceptualizing the interrelationship between these core features of BPD, with the aim of providing clinicians with a conceptually sound and empirically supported approach to BPD and its treatment. This paper presents an extended version of this approach to BPD based on recently accumulated data. In particular, we suggest that the core features of BPD reflect impairments in different facets of mentalization, each related to impairments in relatively distinct neural circuits underlying these facets. Hence, we provide a comprehensive account of BPD by showing how its core features are related to each other in theoretically meaningful ways. More specifically, we argue that BPD is primarily associated with a low threshold for the activation of the attachment system and deactivation of controlled mentalization, linked to impairments in the ability to differentiate mental states of self and other, which lead to hypersensitivity and increased susceptibility to contagion by other people's mental states, and poor integration of cognitive and affective aspects of mentalization. The combination of these impairments may explain BPD patients' propensity for vicious interpersonal cycles, and their high levels of affect dysregulation and impulsivity. Finally, the implications of this expanded mentalization-based approach to BPD for mentalization-based treatment and treatment of BPD more generally are discussed."} {"_id": "c8cc94dd21d78f4f0d07ccb61153bfb798aeef2c", "title": "Statistical Methods for Fighting Financial Crimes", "text": ""} {"_id": "69d324f27ca4af58737000c11700ddea9fb31508", "title": "Design and Measurement of Reconfigurable Millimeter Wave Reflectarray Cells With Nematic Liquid Crystal", "text": "Numerical simulations are used to study the electromagnetic scattering from phase agile microstrip reflectarray cells which exploit the voltage controlled dielectric anisotropy property of nematic state liquid crystals (LCs). In the computer model two arrays of equal size elements constructed on a 15 \u03bcm thick tuneable LC layer were designed to operate at center frequencies of 102 GHz and 130 GHz. Micromachining processes based on the metallization of quartz/silicon wafers and an industry compatible LCD packaging technique were employed to fabricate the grounded periodic structures. The loss and the phase of the reflected signals were measured using a quasi-optical test bench with the reflectarray inserted at the beam waist of the imaged Gaussian beam, thus eliminating some of the major problems associated with traditional free-space characterization at these frequencies.
By applying a low frequency AC bias voltage of 10 V, a 165\u00b0 phase shift with a loss of 4.5-6.4 dB at 102 GHz and a 130\u00b0 phase shift with a loss variation between 4.3 and 7 dB at 130 GHz were obtained. The experimental results are shown to be in close agreement with the computer model."} {"_id": "d4be9c65b86315c0801b56183a95e50766ec9c26", "title": "Reasoning about Categories in Conceptual Spaces", "text": "Understanding the process of categorization is a primary research goal in artificial intelligence. The conceptual space framework provides a flexible approach to modeling context-sensitive categorization via a geometrical representation designed for modeling and managing concepts. In this paper we show how algorithms developed in computational geometry and the Region Connection Calculus can be used to model important aspects of categorization in conceptual spaces. In particular, we demonstrate the feasibility of using existing geometric algorithms to build and manage categories in conceptual spaces, and we show how the Region Connection Calculus can be used to reason about categories and other conceptual regions."} {"_id": "c94d52a92b1da191f720cc4471d2f176e1d7ef0b", "title": "Towards the Development of Realistic Botnet Dataset in the Internet of Things for Network Forensic Analytics: Bot-IoT Dataset", "text": "The proliferation of IoT systems has seen them targeted by malicious third parties. To address this, realistic protection and investigation countermeasures need to be developed. Such countermeasures include network intrusion detection and network forensic systems. For that purpose, a well-structured and representative dataset is paramount for training and validating the credibility of the systems. Although there are several network datasets, in most cases not much information is given about the botnet scenarios that were used. This paper proposes a new dataset, Bot-IoT, which incorporates legitimate and simulated IoT network traffic, along with various types of attacks. We also present a realistic testbed environment for addressing the existing dataset drawbacks of capturing complete network information, accurate labeling, as well as recent and complex attack diversity. Finally, we evaluate the reliability of the Bot-IoT dataset using different statistical and machine learning methods for forensics purposes compared with the existing datasets. This work provides the baseline for allowing botnet identification across IoT-specific networks. The Bot-IoT dataset can be accessed at [1]."} {"_id": "afd3cba292f60b9a76b58ef731b50c812e4c544f", "title": "Binary Input Layer: Training of CNN models with binary input data", "text": "For the efficient execution of deep convolutional neural networks (CNN) on edge devices, various approaches have been presented which reduce the bit width of the network parameters down to 1 bit. Binarization of the first layer was always excluded, as it leads to a significant error increase. Here, we present the novel concept of binary input layer (BIL), which allows the usage of binary input data by learning bit specific binary weights. The concept is evaluated on three datasets (PAMAP2, SVHN, CIFAR-10).
Our results show that this approach is particularly beneficial for multimodal datasets (PAMAP2), where it outperforms networks using full precision weights in the first layer by 1.92 percentage points (pp) while consuming only 2% of the chip area."} {"_id": "434553e2a9b6048f1eb7780ec2cd828dc2644013", "title": "Leveraging Legacy Code to Deploy Desktop Applications on the Web", "text": "Xax is a browser plugin model that enables developers to leverage existing tools, libraries, and entire programs to deliver feature-rich applications on the web. Xax employs a novel combination of mechanisms that collectively provide security, OS-independence, performance, and support for legacy code. These mechanisms include memory-isolated native code execution behind a narrow syscall interface, an abstraction layer that provides a consistent binary interface across operating systems, system services via hooks to existing browser mechanisms, and lightweight modifications to existing tool chains and code bases. We demonstrate a variety of applications and libraries from existing code bases, in several languages, produced with various tool chains, running in multiple browsers on multiple operating systems. With roughly two person-weeks of effort, we ported 3.3 million lines of code to Xax, including a PDF viewer, a Python interpreter, a speech synthesizer, and an OpenGL pipeline."} {"_id": "f9023f510ca00a17b678e54af1187d5e5177d784", "title": "Stacked Convolutional and Recurrent Neural Networks for Music Emotion Recognition", "text": "This paper studies emotion recognition from musical tracks in the 2-dimensional valence-arousal (V-A) emotional space. We propose a method based on convolutional (CNN) and recurrent neural networks (RNN), having significantly fewer parameters compared with the state-of-the-art method for the same task. We utilize one CNN layer followed by two branches of RNNs trained separately for arousal and valence. The method was evaluated using the \u201cMediaEval2015 emotion in music\u201d dataset. We achieved an RMSE of 0.202 for arousal and 0.268 for valence, which is the best result reported on this dataset."} {"_id": "7df4f891caa55917469e27a8da23886d66f9ecca", "title": "Anchoring : accessibility as a cause of judgmental assimilation", "text": "Anchoring denotes assimilation of judgment toward a previously considered value \u2014 an anchor. The selective accessibility model argues that anchoring is a result of selective accessibility of information compatible with an anchor. The present review shows the similarities between anchoring and knowledge accessibility effects. Both effects depend on the applicability of the accessible information, which is also used similarly. Furthermore, both knowledge accessibility and anchoring influence the time needed for the judgment and both display temporal robustness. Finally, we offer recent evidence for the selective accessibility model and demonstrate how the model can be applied to reducing the anchoring effect."} {"_id": "7b500392be1d20bb7217d40c27a4d974430e3e31", "title": "POSTER: Watch Out Your Smart Watch When Paired", "text": "We coin a new term called \u201cdata transfusion\u201d for a phenomenon that a user experiences when pairing a wearable device with the host device. A large amount of data stored in the host device (e.g., a smartphone) is forcibly copied to the wearable device (e.g., a smart watch) due to pairing while the wearable device is usually less attended.
To the best of our knowledge, there is no previous work that investigates how sensitive data is transfused even without the user's consent and how users perceive and behave regarding such a phenomenon for smart watches. We tackle this problem by conducting an experimental study of data extraction from commodity devices, such as Android Wear, watchOS, and Tizen platforms, and a following survey study with 205 smart watch users, in two parts. The experimental studies have shown that a large amount of sensitive data was transfused, but there was not enough user notification. The survey results have shown that users have a lower perception of security and privacy on smart watches than on smartphones, but they tend to set the same passcode on both devices when needed. Based on the results, we perform risk assessment and discuss possible mitigation that involves volatile transfusion."} {"_id": "b45c69f287a764deda320160e186d3e9b12bcc2c", "title": "ARGoS: a modular, parallel, multi-engine simulator for multi-robot systems", "text": "We present a novel multi-robot simulator named ARGoS. ARGoS is designed to simulate complex experiments involving large swarms of robots of different types. ARGoS is the first multi-robot simulator that is at the same time both efficient (fast performance with many robots) and flexible (highly customizable for specific experiments). Novel design choices in ARGoS have enabled this breakthrough. First, in ARGoS, it is possible to partition the simulated space into multiple sub-spaces, managed by different physics engines running in parallel. Second, ARGoS\u2019 architecture is multi-threaded, thus designed to optimize the usage of modern multi-core CPUs. Finally, the architecture of ARGoS is highly modular, enabling easy addition of custom features and appropriate allocation of computational resources. We assess the efficiency of ARGoS and showcase its flexibility with targeted experiments. Experimental results demonstrate that simulation run-time increases linearly with the number of robots. A 2D-dynamics simulation of 10,000 e-puck robots can be performed in 60\u00a0% of the time taken by the corresponding real-world experiment. We show how ARGoS can be extended to suit the needs of an experiment in which custom functionality is necessary to achieve sufficient simulation accuracy. ARGoS is open source software licensed under GPL3 and is downloadable free of charge."} {"_id": "73a9e37e6fe9586982d041f2d5999d41affe9398", "title": "Storytelling in Information Visualizations: Does it Engage Users to Explore Data?", "text": "We present the results of three web-based field experiments, in which we evaluate the impact of using initial narrative visualization techniques and storytelling on user-engagement with exploratory information visualizations. We conducted these experiments on a popular news and opinion outlet, and on a popular visualization gallery website. While data-journalism exposes visualizations to a large public, we do not know how effectively this public makes sense of interactive graphics, and in particular if people explore them to gain additional insight to that provided by the journalists.
In contrast to our hypotheses, our results indicate that augmenting exploratory visualizations with introductory 'stories' does not seem to increase user-engagement in exploration."} {"_id": "b933c84d4c3bc27cda48eb47d68f3b1c197f2e5b", "title": "We can learn your #hashtags: Connecting tweets to explicit topics", "text": "In Twitter, users can annotate tweets with hashtags to indicate the ongoing topics. Hashtags provide users a convenient way to categorize tweets. From the system's perspective, hashtags play an important role in tweet retrieval, event detection, topic tracking, and advertising, etc. Annotating tweets with the right hashtags can lead to a better user experience. However, two problems remain unsolved during annotation: (1) Before the user decides to create a new hashtag, is there any way to help her/him find out whether some related hashtags have already been created and widely used? (2) Different users may have different preferences for categorizing tweets. However, little work has been done to study the personalization issue in hashtag recommendation. To address the above problems, we propose a statistical model for personalized hashtag recommendation in this paper. With millions of <tweet, hashtag> pairs being published every day, we are able to learn the complex mappings from tweets to hashtags with the wisdom of the crowd. Two questions are answered in the model: (1) Different from traditional item recommendation data, users and tweets in Twitter have rich auxiliary information like URLs, mentions, locations, social relations, etc. How can we incorporate these features for hashtag recommendation? (2) Different hashtags have different temporal characteristics. Hashtags related to breaking events in the physical world have a strong rise-and-fall temporal pattern while some other hashtags remain stable in the system. How can we incorporate hashtag related features to serve hashtag recommendation? With all the above factors considered, we show that our model successfully outperforms existing methods on real datasets crawled from Twitter."} {"_id": "24c143f88333bb20bf8b19df97520b21c68829ac", "title": "Environment-Driven Lexicon Induction for High-Level Instructions", "text": "As described in the main paper, we collected a dataset D = {(x^{(n)}, e^{(n)}, a^{(n)}, \u03c0^{(n)})}_{n=1}^{500}. Environment Complexity. Our environments are 3D scenarios consisting of complex objects such as fridge, microwave and television with many states. These objects can be in different spatial relations with respect to other objects. For example, \u201cbag of chips\u201d can be found behind the television. Figure 1 shows some sample environments from our dataset. For example, an object of category television consists of 6 channels, volume level and power status. An object can have different state values in different environments, and different environments consist of different sets of objects and their placement. For example, television might be powered on in one environment and closed in another, microwave might have an object inside it or not in different environments, etc. Moreover, there is often more than one object of the same category. For example, our environments typically have two books, five couches, four pillows etc. Objects of the same category can have different appearances. For example, a book can have the cover of a math book or a Guinness book of world records; resulting in complex object descriptions such as in \u201cthrow the sticky stuff in the bowl\u201d.
They can also have the same appearance, making people use spatial relations or other means while describing them, such as in \u201cget me the cup next to the microwave\u201d. This dataset is significantly more challenging compared to the 2D navigation dataset or the Windows GUI actions dataset considered earlier."} {"_id": "5c01842e5431bff0a47140c57b3dd5362107afc6", "title": "Early Breast Cancer Detection using Mammogram Images: A Review of Image Processing Techniques", "text": "Breast cancer is one of the most common cancers worldwide among women; one in eight women is affected by this disease during her lifetime. Mammography is the most effective imaging modality for detection of breast cancer in its early stages. Because of poor contrast and low visibility in mammographic images, early detection of breast cancer is a significant step towards efficient treatment of the disease. Different computer-aided detection algorithms have been developed to help radiologists provide an accurate diagnosis. This paper reviews the most common image processing approaches developed for detection of masses and calcifications. The main focus of this review is on image segmentation methods and the variables used for early breast cancer detection. Texture analysis is the crucial step in any image segmentation technique which is based on a local spatial variation of intensity or color. Therefore, various methods of texture analysis for mass and micro-calcification detection in mammography are discussed in detail."} {"_id": "4189eac6d7104e00323f78a8897167d50c815c80", "title": "Is beautiful really usable? Toward understanding the relation between usability, aesthetics, and affect in HCI", "text": "This paper analyzes the relation between usability and aesthetics. In a laboratory study, 80 participants used one of four different versions of the same online shop, differing in interface-aesthetics (low vs. high) and interface-usability (low vs. high). Participants had to find specific items and rate the shop before and after usage on perceived aesthetics and perceived usability, which were assessed using four validated instruments. Results show that aesthetics does not affect perceived usability. In contrast, usability has an effect on post-use perceived aesthetics. Our findings show that the \u201cwhat is beautiful is usable\u201d notion, which assumes that aesthetics enhances the perception of usability, can be reversed under certain conditions (here: strong usability manipulation combined with a medium to large aesthetics manipulation). Furthermore, our results indicate that the user\u2019s affective experience with the usability of the shop might serve as a mediator variable within the aesthetics-usability relation: The frustration of poor usability lowers ratings on perceived aesthetics. The significance of the results is discussed in the context of the existing research on the relation between aesthetics and usability."} {"_id": "2512182cf3c4d7b3df549456fbeceee0a77c3954", "title": "Trust-Aware Collaborative Filtering for Recommender Systems", "text": "Recommender Systems allow people to find the resources they need by making use of the experiences and opinions of their nearest neighbours. Costly annotations by experts are replaced by a distributed process where the users take the initiative. While the collaborative approach enables the collection of a vast amount of data, a new issue arises: the quality assessment.
The elicitation of trust values among users, termed \u201cweb of trust\u201d, allows a twofold enhancement of Recommender Systems. Firstly, the filtering process can be informed by the reputation of users, which can be computed by propagating trust. Secondly, the trust metrics can help to solve a problem associated with the usual method of similarity assessment, its reduced computability. An empirical evaluation on the Epinions.com dataset shows that trust propagation allows one to increase the coverage of Recommender Systems while preserving the quality of predictions. The greatest improvements are achieved for new users, who provided few ratings."} {"_id": "4c521025566e6afceb9adcf27105cd33e4022fb6", "title": "Fake Review Detection : Classification and Analysis of Real and Pseudo Reviews", "text": "In recent years, fake review detection has attracted significant attention from both businesses and the research community. For reviews to reflect genuine user experiences and opinions, detecting fake reviews is an important problem. Supervised learning has been one of the main approaches for solving the problem. However, obtaining labeled fake reviews for training is difficult because it is very hard if not impossible to reliably label fake reviews manually. Existing research has used several types of pseudo fake reviews for training. Perhaps the most interesting type is the pseudo fake reviews generated using the Amazon Mechanical Turk (AMT) crowdsourcing tool. Using AMT crafted fake reviews, [36] reported an accuracy of 89.6% using only word n-gram features. This high accuracy is quite surprising and very encouraging. However, although fake, the AMT generated reviews are not real fake reviews on a commercial website. The Turkers (AMT authors) are not likely to have the same psychological state of mind while writing such reviews as that of the authors of real fake reviews who have real businesses to promote or to demote. Our experiments attest to this hypothesis. Next, it is naturally interesting to compare fake review detection accuracies on pseudo AMT data and real-life data to see whether different states of mind can result in different writings and consequently different classification accuracies. For real review data, we use filtered (fake) and unfiltered (non-fake) reviews from Yelp.com (which are closest to ground truth labels) to perform a comprehensive set of classification experiments also employing only n-gram features. We find that fake review detection on Yelp\u2019s real-life data only gives 67.8% accuracy, but this accuracy still indicates that n-gram features are indeed useful. We then propose a novel and principled method to discover the precise difference between the two types of review data using the information theoretic measure KL-divergence and its asymmetric property. This reveals some very interesting psycholinguistic phenomena about forced and natural fake reviewers. To improve classification on the real Yelp review data, we propose an additional set of behavioral features about reviewers and their reviews for learning, which dramatically improves the classification result on real-life opinion spam data."} {"_id": "5fbbfbc05c0e57faf2feb02fb471160807dfc000", "title": "Towards More Confident Recommendations : Improving Recommender Systems Using Filtering Approach Based on Rating Variance", "text": "In the present age of information overload, it is becoming increasingly hard to find relevant content.
Recommender systems have been introduced to help people deal with these vast amounts of information and have been widely used in research as well as e-commerce applications. In this paper, we propose several new approaches to improve the accuracy of recommender systems by using rating variance to gauge the confidence of recommendations. We then empirically demonstrate how these approaches work with various recommendation techniques. We also show how these approaches can generate more personalized recommendations, as measured by the coverage metric. As a result, users can be given better control to choose whether to receive recommendations with higher coverage or higher accuracy."} {"_id": "6aa1c88b810825ee80b8ed4c27d6577429b5d3b2", "title": "Evaluating collaborative filtering recommender systems", "text": "Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated."} {"_id": "8743b9ce070406d8e44e778af93a564bfb68b330", "title": "Identify Online Store Review Spammers via Social Review Graph", "text": "Online shopping reviews provide valuable information for customers to compare the quality of products, store services, and many other aspects of future purchases. However, spammers are joining this community trying to mislead consumers by writing fake or unfair reviews to confuse the consumers. Previous attempts have used reviewers\u2019 behaviors, such as text similarity and rating patterns, to detect spammers. These studies are able to identify certain types of spammers, for instance, those who post many similar reviews about one target. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like normal reviewers, and thus cannot be detected by the available techniques.\n In this article, we propose a novel concept of a review graph to capture the relationships among all reviewers, reviews and stores that the reviewers have reviewed as a heterogeneous graph. We explore how interactions between nodes in this graph could reveal the cause of spam and propose an iterative computation model to identify suspicious reviewers. In the review graph, we have three kinds of nodes, namely, reviewer, review, and store. We capture their relationships by introducing three fundamental concepts, the trustiness of reviewers, the honesty of reviews, and the reliability of stores, and identifying their interrelationships: a reviewer is more trustworthy if the person has written more honest reviews; a store is more reliable if it has more positive reviews from trustworthy reviewers; and a review is more honest if many other honest reviews support it. This is the first time such intricate relationships have been identified for spam detection and captured in a graph model.
We further develop an effective computation method based on the proposed graph model. Different from any existing approaches, we do not use any review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results."} {"_id": "4152070bd6cd28cc44bc9e54ab3e641426382e75", "title": "A Survey of Text Classification Algorithms", "text": "The problem of classification has been widely studied in the data mining, machine learning, database, and information retrieval communities with applications in a number of diverse domains, such as target marketing, medical diagnosis, news group filtering, and document organization. In this paper we will provide a survey of a wide variety of text classification algorithms."} {"_id": "00758573be71bb03267d15e95474e361978e9717", "title": "RMiner: a tool set for role mining", "text": "Recently, many approaches have been proposed for mining roles using automated technologies. However, the field lacks a tool set that can be used to aid the application of role mining approaches and update role states. In this demonstration, we introduce a tool set, RMiner, which is based on the core of WEKA, an open source data mining tool. RMiner implements most of the classic and latest role mining algorithms and provides interactive tools for administrators to update role states. The running examples of RMiner are presented to demonstrate the effectiveness of the tool set."} {"_id": "6ed6ecaaf5e618b11305b6dad415b1b5546c9bb8", "title": "Leading us not unto temptation: momentary allurements elicit overriding goal activation.", "text": "The present research explored the nature of automatic associations formed between short-term motives (temptations) and the overriding goals with which they interfere. Five experimental studies, encompassing several self-regulatory domains, found that temptations tend to activate such higher priority goals, whereas the latter tend to inhibit the temptations. These activation patterns occurred outside of participants' conscious awareness and did not appear to tax their mental resources. Moreover, they varied as a function of subjective goal importance and were more pronounced for successful versus unsuccessful self-regulators in a given domain. Finally, priming by temptation stimuli was found not only to influence the activation of overriding goals but also to affect goal-congruent behavioral choices."} {"_id": "71715f3f4f7cfbde9f23e04a8f05b4cf87c8ec83", "title": "Hedonism , Achievement , and Power : Universal values that characterize the Dark Triad", "text": "Using a sample of Swedes and Americans (N = 385), we attempted to understand the Dark Triad traits (i.e., Machiavellianism, narcissism, and psychopathy) in terms of universal social values. The Dark Triad traits correlated significantly with all 10 value types, forming a sinusoid pattern corresponding to the value model circumplex. In regression analyses, Machiavellianism and narcissism were positively associated with the values Achievement and Power, while psychopathy was positively associated with the values Hedonism and Power. In addition, the Dark Triad traits explained significant variance over the Big Five traits in accounting for individual differences in social values. Differences between the Swedish and the US samples in the social value Achievement were mediated by the Dark Triad traits, as well as age.
Given the unique complex of values accounted for by the Dark Triad traits compared to the Big Five traits, we argue that the former account for a system of self-enhancing \u201cdark values\u201d, often hidden but constantly contributing to evaluations of others."} {"_id": "4a4e66f72890b0473de1d59f52ad3819bd0413a7", "title": "Parallel Training of Neural Networks for Speech Recognition", "text": "In this paper we describe a parallel implementation of an ANN training procedure based on the block-mode back-propagation learning algorithm. Two different approaches to parallelization were implemented. The first is data parallelization using POSIX threads, which is suitable for multi-core computers. The second is node parallelization using the high-performance SIMD architecture of a GPU with CUDA, suitable for CUDA-enabled computers. We compare the speed-up of both approaches by training a typically sized network on a real-world phoneme-state classification task, observing a nearly 10-fold reduction in training time with the CUDA version, while the multi-threaded version on an 8-core server gives only a 4-fold reduction. In both cases we compared against an already BLAS-optimized implementation. The training tool will be released as Open-Source software under the project name TNet."} {"_id": "9feed125ff3e8b92510eb979e37ea1bbc35156b6", "title": "Neurologic foundations of spinal cord fusion (GEMINI).", "text": "Cephalosomatic anastomosis has been carried out in both monkeys and mice with preservation of brain function. Nonetheless, the spinal cord was not reconstructed, leaving the animals unable to move voluntarily. Here we review the details of the GEMINI spinal cord fusion protocol, which aims at restoring electrophysiologic conduction across an acutely transected spinal cord. The existence of the cortico-truncoreticulo-propriospinal pathway, a little-known anatomic entity, is described, and its importance for spinal cord fusion emphasized. The use of fusogens and electrical stimulation as adjuvants for nerve fusion is addressed. The possibility of achieving cephalosomatic anastomosis in humans has, in principle, become reality."} {"_id": "45cd2f3575a42f6a611957cec4fa9b9a1359af89", "title": "High speed true random number generator based on open loop structures in FPGAs", "text": "Most hardware \u201cTrue\u201d Random Number Generators (TRNGs) take advantage of the thermal agitation around a flip-flop metastable state. In Field Programmable Gate Arrays (FPGAs), the classical TRNG structure uses at least two oscillators, built either from PLLs or ring oscillators. This creates good TRNGs, albeit limited in frequency by the interference rate, which cannot exceed a few Mbit/s. This article presents an architecture allowing higher bit rates while maintaining provable unconditional security. This speed requirement becomes stringent for secure communication applications such as cryptographic quantum key distribution protocols. The proposed architecture is very simple and generic as it is based on an open loop structure with no specific component such as a PLL.
"} {"_id": "dd6bb309fd83347026258baeb2cd554ec4fbf5a7", "title": "Sources of Customer Satisfaction and Dissatisfaction with Information Technology Help Desks", "text": "As the use, development, and control of information systems continue to become diffused throughout organizations and society, the information technology (IT) help desk function plays an increasingly important role in effective management and utilization of information resources. Variously referred to as information centers, software support centers, software hotlines, and PC help desks, such centers have been established to help end users resolve problems and obtain information about new functions in the information systems they use. This study investigates the determinants of customer satisfaction and dissatisfaction with service encounters involving information technology help desks. The IS satisfaction research that has been done to date has viewed satisfaction as an attitudinal, bipolar evaluative construct (Melone 1990; Heckman 1993). Satisfaction as viewed in this way is a relatively enduring, stable cognitive state. In the marketing literature, however, satisfaction has also been conceptualized as a less enduring post-consumption response. In this study, a conceptualization of the satisfaction construct based on the service encounter and consumer product satisfaction literatures (e.g., Bitner, Booms and Tetreault 1990) is adopted as a starting point. After responses to help desk service encounters have been analyzed from this perspective, an attempt is made to integrate these findings with attitudinal satisfaction constructs. The study employs the Critical Incident Technique (CIT), an inductive, qualitative methodology. It consists of a set of specifically defined procedures for collecting observations of human behavior and classifying them in such a way as to make them useful in addressing practical problems (Flanagan 1954). It is a method that is comparable to other inductive grouping procedures such as factor analysis, cluster analysis, and multidimensional scaling. Unlike other such procedures, however, CIT uses content analysis of stories rather than quantitative solutions in the data analysis stage of the procedure. The study addressed four research questions: 1. What specific behaviors and events lead to user/customer satisfaction and dissatisfaction with IT help desk service encounters? 2. Are the underlying events and behaviors that lead to satisfactory and dissatisfactory encounters similar? That is, are these events and behaviors opposites or mirror images of each other? 3. Are the underlying events and behaviors in help desk service encounters similar to those found in other contexts? 4. Can an understanding of user/customer responses to help desk service encounters shed light on the development and modification of attitudinal satisfaction constructs such as UIS (Ives, Olson and Baroudi 1983), EUIS (Doll and Torkzadeh 1988), and VMS Satisfaction (Heckman 1993)? Descriptions of approximately 500 incidents have been obtained to date and analyzed. A tentative classification scheme was developed from the preliminary analysis. It was modeled after the incident classification scheme developed by Bitner, Booms and Tetreault and uses the same three major categories: core service failure, special customer request, and extraordinary provider behavior. As in the Bitner, Booms and Tetreault analysis, results suggest that a core service failure does not inevitably lead to dissatisfaction.
Initial analysis also suggests that while the scheme is applicable in some ways, the knowledge-based nature of the IT help desk service encounter requires several additional constructs to account for various customer responses."} {"_id": "4840349d684f59640c2431de4f66659818e390c7", "title": "An evolutionary optimization framework for neural networks and neuromorphic architectures", "text": "As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures."} {"_id": "e050e89d01afffd5b854458fc48c9d6720a8072c", "title": "THE DOMAIN AND CONCEPTUAL FOUNDATIONS OF RELATIONSHIP MARKETING", "text": ""} {"_id": "e3de767356caa7960303558f7eb560c73fbe42ba", "title": "Development and design of a robotic manta ray featuring flexible pectoral fins", "text": "This paper presents our preliminary work on a fish robot with pectoral fin propulsion. The aim of this project is to develop a fish robot mimicking manta rays, which swim with flexible pectoral fins in oscillation motion. Based on an analysis of the motion features and skeletal structure of the pectoral fins of manta rays, a flexible pectoral fin made of silicone rubber was designed. Experiments on the propulsion of the fin prototype were carried out in a tank. The results showed that a flexible pectoral fin in oscillation motion could generate great thrust, and that the thrust increased with fin-beat frequency and amplitude. In the end, a robot fish with flexible pectoral fins was built. Its speed could reach a maximum of 1.4 body lengths (BL) per second. The experiments proved that it is feasible to develop a robot fish featuring flexible pectoral fins in oscillation motion, and that mimicking manta rays is a promising way to improve the performance of underwater vehicles."} {"_id": "712c13fbbabf086df5a6d795377d2b435e132a57", "title": "Video-Based Human Walking Estimation Using Joint Gait and Pose Manifolds", "text": "We study two fundamental issues about video-based human walking estimation, where the goal is to estimate 3D gait kinematics (i.e., joint positions) from 2D gait appearances (i.e., silhouettes). One is how to model the gait kinematics from different walking styles, and the other is how to represent the gait appearances captured under different views and from individuals of distinct walking styles and body shapes. Our research is conducted in three steps. First, we propose the idea of a joint gait-pose manifold (JGPM), which represents gait kinematics by coupling two nonlinear variables, pose (a specific walking stage) and gait (a particular walking style), in a unified latent space. We extend the Gaussian process latent variable model (GPLVM) for JGPM learning, where two heuristic topological priors, a torus and a cylinder, are considered and several JGPMs of different degrees of freedom (DoFs) are introduced for comparative analysis.
Second, we develop a validation technique and a series of benchmark tests to evaluate multiple JGPMs and recent GPLVMs in terms of their performance for gait motion modeling. It is shown that the toroidal prior is slightly better than the cylindrical one, and the JGPM of 4 DoFs that balances the toroidal prior with the intrinsic data structure achieves the best performance. Third, a JGPM-based visual gait generative model (JGPM-VGGM) is developed, where the JGPM plays a central role in bridging the gap between the gait appearances and the gait kinematics. Our proposed JGPM-VGGM is learned from Carnegie Mellon University MoCap data and tested on the HumanEva-I and HumanEva-II data sets. Our experimental results demonstrate the effectiveness and competitiveness of our algorithms compared with existing algorithms."} {"_id": "8bf72fb4edcb6974d3c4b0b2df63d9fd75c5dc4f", "title": "Semantic Sentiment Analysis Challenge at ESWC2017", "text": "Sentiment Analysis is a widely studied field in both research and industry, and there are different approaches for addressing sentiment analysis related tasks. Sentiment Analysis engines implement approaches ranging from lexicon-based techniques to machine learning and syntactic rule analysis. Such systems are already evaluated in international research challenges. However, Semantic Sentiment Analysis approaches, which take into account or also rely on large semantic knowledge bases and implement Semantic Web best practices, have not been subject to specific experimental evaluation and comparison by other international challenges. Such approaches may potentially deliver higher performance, since they are also able to analyze the implicit semantic features associated with natural language concepts. In this paper, we present the fourth edition of the Semantic Sentiment Analysis Challenge, in which systems implementing or relying on semantic features are evaluated in a competition involving large test sets and different sentiment tasks. Systems merely based on syntax/word-count or just lexicon-based approaches have been excluded from the evaluation. We then present the results of the evaluation for each task and show the winner of the most innovative approach award, which combines several knowledge bases for addressing the sentiment analysis task."} {"_id": "35c3a29076b939bfa1b4314e0affc8413446230e", "title": "Beidou combined-FFT acquisition algorithm for resource-constrained embedded platform", "text": "The length of the Beidou CB2I code is twice that of the Global Positioning System (GPS) C/A code, so resource consumption is doubled on embedded hardware with finite resources if it is processed with the traditional algorithm. Hence, this paper proposes an acquisition algorithm based on a combined Fast Fourier Transform (FFT), which separates a signal into odd and even sequences and processes them in serial. By doing so, the FFT length is halved and memory usage is decreased. The paper then goes on to analyze the space complexity and time complexity of the proposed algorithm and the traditional algorithm on the embedded platform. The results show that the memory usage of the proposed algorithm is reduced by 23.3% and the number of multiplications by 12%, while the acquisition accuracy remains stable, as the PTP values of these two algorithms are close.
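The odd/even split behind the combined-FFT idea above can be illustrated with the standard decimation-in-time identity, in which an N-point FFT is assembled from two N/2-point FFTs computed one after the other. This sketch mirrors only that halving step, not the authors' full acquisition pipeline:

```python
# Minimal sketch: compute an N-point FFT from two serial N/2-point FFTs
# of the even- and odd-indexed samples (decimation in time), so only
# half-length FFT buffers are needed at any one time.
import numpy as np

def combined_fft(x):
    n = len(x)  # n is assumed to be even
    even = np.fft.fft(x[0::2])  # N/2-point FFT of even-indexed samples
    odd = np.fft.fft(x[1::2])   # N/2-point FFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.randn(1024)
assert np.allclose(combined_fft(x), np.fft.fft(x))  # matches a full FFT
```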
In other words, the proposed algorithm consumes fewer resources with no loss of acquisition accuracy."} {"_id": "6e669e90a34c4179f9364406d8a7a7f855745086", "title": "Speeding up distributed request-response workflows", "text": "We found that interactive services at Bing have highly variable datacenter-side processing latencies because their processing consists of many sequential stages, parallelization across 10s-1000s of servers, and aggregation of responses across the network. To improve the tail latency of such services, we use a few building blocks: reissuing laggards elsewhere in the cluster, new policies to return incomplete results, and speeding up laggards by giving them more resources. Combining these building blocks to reduce the overall latency is non-trivial because, for the same amount of resource (e.g., number of reissues), different stages improve their latency by different amounts. We present Kwiken, a framework that takes an end-to-end view of latency improvements and costs. It decomposes the problem of minimizing latency over a general processing DAG into a manageable optimization over individual stages. Through simulations with production traces, we show sizable gains; the 99th percentile of latency improves by over 50% when just 0.1% of the responses are allowed to have partial results and by over 40% for 25% of the services when just 5% extra resources are used for reissues."} {"_id": "ed1369fbd9290cce28886c6fba560aff41913d18", "title": "Beyond Wise et al.: Neuroleptic-induced \u201canhedonia\u201d in rats: Pimozide blocks reward quality of food", "text": "Of all the neurotransmitters in the brain, the one best known to the lay public, by far, is dopamine (DA). To the public, and much of the media, DA is the brain\u2019s \u201cpleasure transmitter.\u201d By this popular view, it is a burst of DA that produces the pleasure associated with the receipt or anticipation of natural rewards \u2013 a chocolate cake, sex, even social rewards \u2013 as well as the euphoria produced by addictive drugs, such as amphetamine, cocaine or heroin. In the popular media it is often suggested that it is pursuit of this DA \u201crush\u201d that leads to a variety of impulse control disorders, such as over-eating, gambling and addiction. Of all the ideas, in all of neuroscience, this is one of the most widely known, and its origin can be traced to the 1978 paper by Roy Wise, Joan Spindler, Harriet deWit and Gary Gerber that is the topic of this chapter. This paper is an excellent example of how the interpretation of a seemingly simple experimental finding can profoundly influence the thinking and trajectory of an entire field, and the public imagination, for decades \u2013 even if it is wrong. But before getting to that conclusion, we should step back and consider the time leading up to the publication of this extremely influential paper. It was only a little over a decade earlier that DA was first recognized as a neurotransmitter in its own right (rather than just being a precursor of norepinephrine). This came about from a series of important studies by researchers in Sweden who developed a histofluorescence method to identify and map monoamine neurotransmitters in the brain (including DA). They thus effectively created the field of chemical neuroanatomy, as well as vividly mapping the location of dopamine in the brain.
They also produced useful methods to selectively lesion monoamine-containing cells with neurotoxins, such as 6-hydroxydopamine (6-OHDA), which would help reveal dopamine\u2019s psychological functions via behavioral consequences. As a result of these studies it became clear that DA-producing neurons located in the midbrain (the substantia nigra and ventral tegmental area) send dense projections to the dorsal and ventral striatum (the caudate-putamen and nucleus accumbens) in the"} {"_id": "8473a97a461625781d0a4747a840196b9a972f33", "title": "Benchmarking distributed data warehouse solutions for storing genomic variant information", "text": "Database URL\nhttps://github.com/ZSI-Bio/variantsdwh."} {"_id": "e413224e24626cf1e846ae14b5a6267a4105b470", "title": "Towards Automated Driving: Unmanned Protective Vehicle for Highway Hard Shoulder Road Works", "text": "Mobile road works on the hard shoulder of German highways bear an increased accident risk for the crew of the protective vehicle which safeguards road works against moving traffic. The project \u201cAutomated Unmanned Protective Vehicle for Highway Hard Shoulder Road Works\u201d aims at the unmanned operation of the protective vehicle in order to reduce this risk. Simultaneously, this means the very first unmanned operation of a vehicle on German roads in public traffic. This contribution introduces the project by pointing out the main objectives and demonstrates the current state of the system design regarding functionality, modes of operation, as well as the functional system architecture. Pivotal for the project, the scientific challenges raised by the unmanned operation - strongly related to the general challenges in the field of automated driving - are presented as well. The results of the project shall serve as a basis to stimulate an advanced discussion about ensuring safety for fully automated vehicles in public traffic operating at higher speeds and in less defined environments. Thus, this contribution aims at engaging the scientific community at an early stage of the project."} {"_id": "f8e534431c743d39b34c326b72c7bf86af3e554d", "title": "Overconfidence as a cause of diagnostic error in medicine.", "text": "The great majority of medical diagnoses are made using automatic, efficient cognitive processes, and these diagnoses are correct most of the time. This analytic review concerns the exceptions: the times when these cognitive processes fail and the final diagnosis is missed or wrong. We argue that physicians in general underappreciate the likelihood that their diagnoses are wrong and that this tendency to overconfidence is related to both intrinsic and systemically reinforced factors. We present a comprehensive review of the available literature and current thinking related to these issues. The review covers the incidence and impact of diagnostic error, data on physician overconfidence as a contributing cause of errors, strategies to improve the accuracy of diagnostic decision making, and recommendations for future research."} {"_id": "118422012ca38272ac766294f27ca83b5319d3cb", "title": "Forecasting the weather of Nevada: A deep learning approach", "text": "This paper compares two approaches for predicting air temperature from historical pressure, humidity, and temperature data gathered from meteorological sensors in Northwestern Nevada. We describe our data and our representation and compare a standard neural network against a deep learning network.
Our empirical results indicate that a deep neural network with Stacked Denoising Auto-Encoders (SDAE) outperforms a standard multilayer feed-forward network on this noisy time series prediction task. In addition, predicting air temperature from historical air temperature data alone can be improved by employing related weather variables like barometric pressure, humidity, and wind speed data in the training process."} {"_id": "661ca6c0f4ecbc08961c7f175f8e5446da224749", "title": "Equity price direction prediction for day trading: Ensemble classification using technical analysis indicators with interaction effects", "text": "We investigate the performance of complex trading rules in equity price direction prediction, over and above continuous-valued indicators and simple technical trading rules. Ten of the most popular technical analysis indicators are included in this research. We use Random Forest ensemble classifiers on minute-by-minute stock market data. Results show that our models have predictive power and yield better returns than the buy-and-hold strategy when disregarding transaction costs, both in terms of the number of stocks with profitable trades and in overall returns. Moreover, our findings show that two-way and three-way combinations, i.e., complex trading rules, are important to \u201cbeat\u201d the buy-and-hold strategy."} {"_id": "7ec4dcda2991371142798621ab865bde75edb2b9", "title": "10 Watts UHF broadband GaN based power amplifier for multi-band applications", "text": "The demand for broadband amplifiers has grown to meet the requirements of multi-mode and multi-band wireless applications. GaN HEMT is the next generation of RF power transistor technology. In this paper, we have designed a 10 W UHF broadband class-AB power amplifier (PA) based on a GaN HEMT. The proposed amplifier has been designed and developed for the frequency range of 200-500 MHz. A maximum drain efficiency of 71% is achieved."} {"_id": "5198e078fe7a9c5a9ed6ed9548ec1b1c34daf12b", "title": "Experimental evaluation of the schunk 5-Finger gripping hand for grasping tasks", "text": "In order to perform useful tasks, a service robot needs to manipulate objects in its environment. In this paper, we propose a method for experimental evaluation of the suitability of a robotic hand for grasping tasks in service robotics. The method is applied to the Schunk 5-Finger Gripping Hand, a mechatronic gripper designed for service robots. During evaluation, it is shown that the hand is able to grasp various common household objects and execute the grasps from the well-known Cutkosky grasp taxonomy [1]. The result is that it is a suitable hand for service robot tasks."} {"_id": "dd30bbd7149678d9be191df4e286842001be6fb0", "title": "Keyphrase Extraction Using Knowledge Graphs", "text": "Extracting keyphrases from documents automatically is an important and interesting task, since keyphrases provide a quick summarization of documents. Although much effort has been devoted to keyphrase extraction, most existing methods (the co-occurrence-based methods and the statistic-based methods) do not take semantics into full consideration. The co-occurrence-based methods heavily depend on the co-occurrence relations between two words in the input document, which may ignore many semantic relations. The statistic-based methods exploit an external text corpus to enrich the document, which inevitably introduces more unrelated relations.
In this paper, we propose a novel approach to extract keyphrases using knowledge graphs, based on which we can detect the latent relations between two keyterms (i.e., noun words and named entities) without introducing much noise. Extensive experiments over real data show that our method outperforms state-of-the-art methods, including the graph-based co-occurrence methods and the statistic-based clustering methods."} {"_id": "7dbdab80492fe692484f3267e50388ac4b26538b", "title": "The Situated Function - Behaviour - Structure Framework", "text": "This paper extends the Function-Behaviour-Structure (FBS) framework, which proposed eight fundamental processes involved in designing. This framework did not explicitly account for the dynamic character of the context in which designing takes place, described by the notion of situatedness. This paper describes this concept as a recursive interrelationship between different environments, which, together with a model of constructive memory, provides the foundation of a situated FBS framework. The eight fundamental processes are then reconstructed within this new framework to represent designing in a dynamic world."} {"_id": "15536ea6369d2a3df504605cc92d78a7c7170d65", "title": "Parametric Exponential Linear Unit for Deep Convolutional Neural Networks", "text": "Object recognition is an important task for improving the ability of visual systems to perform complex scene understanding. Recently, the Exponential Linear Unit (ELU) has been proposed as a key component for managing bias shift in Convolutional Neural Networks (CNNs), but it defines a parameter that must be set by hand. In this paper, we propose learning a parameterization of ELU in order to learn the proper activation shape at each layer in the CNNs. Our results on the MNIST, CIFAR-10/100 and ImageNet datasets using the NiN, Overfeat, All-CNN and ResNet networks indicate that our proposed Parametric ELU (PELU) performs better than the non-parametric ELU. We have observed as much as a 7.28% relative error improvement on ImageNet with the NiN network, with only a 0.0003% increase in parameters. Our visual examination of the non-linear behaviors adopted by Vgg using PELU shows that the network took advantage of the added flexibility by learning different activations at different layers."} {"_id": "0172cec7fef1815e460678d12eb51fa0d051a677", "title": "Unsupervised feature learning for audio classification using convolutional deep belief networks", "text": "In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks. In the case of speech data, we show that the learned features correspond to phones/phonemes. In addition, our feature representations learned from unlabeled audio data show very good performance for multiple audio classification tasks. We hope that this paper will inspire more research on deep learning approaches applied to a wide range of audio recognition tasks."} {"_id": "0585b80713848a5b54b82265a79f031a4fbd3332", "title": "Bottom-Up Segmentation for Top-Down Detection", "text": "In this paper we are interested in how semantic segmentation can help object detection.
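For the Parametric ELU described in the PELU abstract above, a common two-parameter formulation learns a slope/saturation pair (a, b), both kept positive. The exact parameterization and training details in the paper may differ, so treat this numpy sketch as an assumption-laden illustration rather than the paper's definition:

```python
# Sketch of a Parametric ELU (PELU) activation with learnable shape
# parameters a and b; gradients wrt a and b would drive their updates.
import numpy as np

def pelu(h, a=1.0, b=1.0):
    # (a / b) * h on the positive side, a * (exp(h / b) - 1) on the negative
    return np.where(h >= 0, (a / b) * h, a * np.expm1(h / b))

def pelu_param_grads(h, a, b):
    # d/da and d/db of the activation, used to learn the shape per layer
    da = np.where(h >= 0, h / b, np.expm1(h / b))
    db = np.where(h >= 0, -a * h / b**2, -a * h * np.exp(h / b) / b**2)
    return da, db

h = np.linspace(-3.0, 3.0, 7)
print(pelu(h, a=2.0, b=1.5))  # different (a, b) per layer give different shapes
```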
Towards this goal, we propose a novel deformable part-based model which exploits region-based segmentation algorithms that compute candidate object regions by bottom-up clustering followed by ranking of those regions. Our approach allows every detection hypothesis to select a segment (including void), and scores each box in the image using both traditional HOG filters and a set of novel segmentation features. Thus our model \u201cblends\u201d between the detector and segmentation models. Since our features can be computed very efficiently given the segments, we maintain the same complexity as the original DPM. We demonstrate the effectiveness of our approach on PASCAL VOC 2010, and show that when employing only a root filter our approach outperforms the Dalal & Triggs detector on all classes, achieving 13% higher average AP. When employing the parts, we outperform the original DPM in 19 out of 20 classes, achieving an improvement of 8% AP. Furthermore, we outperform the previous state-of-the-art on the VOC 2010 test set by 4%."} {"_id": "71d17d1a0d9c0b3863e5072bd032edc2e9dc03ce", "title": "Design of planar substrate integrated waveguide (SIW) phase shifter using air holes", "text": "In this paper, a substrate integrated waveguide (SIW) phase shifter is designed by placing multiple rows of air holes in a single substrate, which modifies the effective dielectric constant of the material and, as a result, the phase of the propagating wave, while maintaining an insertion loss of less than 0.8 dB. The proposed design operates in the X band (8-12 GHz), with a reflection coefficient better than -15 dB within a band of 3 GHz (31.5%) and a phase shift of 24.8\u00b0\u00b14.7\u00b0 throughout the operating band."} {"_id": "21da9ece5587df5a2ef79bf937ea19397abecfa0", "title": "Predictive coding under the free-energy principle.", "text": "This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs."} {"_id": "411582b1755a7c5424febdb2ec730a7239c5d6d5", "title": "Self-inflicted explosive death by intra-oral detonation of a firecracker: a case report.", "text": "Self-inflicted explosive deaths due to the detonation of fireworks are rare. In this case report, a peculiar case of an elderly male who discharged a firecracker inside his mouth, resulting in fatal blast-induced craniofacial injuries, is described. There is a paucity of published data describing fireworks-related suicidal and/or non-suicidal deaths.
Even scantier data exist specifically describing fireworks-related blast-induced neurotrauma and the mechanism(s) of injury involved in such cases. This case report emphasizes the severe damage that a commercially available explosive, the so-called \"Gorilla Bomb\", can cause, and raises questions about the relative ease of its acquisition."} {"_id": "3e03be1568929940696fb75f67d3209a26a568ca", "title": "COMPARISON OF DIFFERENT METHODS OF ANTIOXIDANT ACTIVITY EVALUATION OF GREEN AND ROAST C. ARABICA AND C. ROBUSTA COFFEE BEANS", "text": "a Department of Hygiene and Technology of Vegetable Foodstuff, University of Veterinary and Pharmaceutical Sciences Brno, Palack\u00e9ho t\u0159. 1/3, 612 42 Brno Kr\u00e1lovo Pole, Czech Republic; b Department of Biochemistry and Biophysics, University of Veterinary and Pharmaceutical Sciences Brno, Palack\u00e9ho t\u0159. 1/3, 612 42 Brno Kr\u00e1lovo Pole, Czech Republic; c Department of Food Science and Technology, Kaunas University of Technology, Radvil\u0117n\u0173 pl. 19, LT-50254 Kaunas, Lithuania"} {"_id": "dd0d47a6997a09370fb6007d70827934a509115d", "title": "Clustering algorithm for internet of vehicles (IoV) based on dragonfly optimizer (CAVDO)", "text": "The internet of vehicles (IoV) is a branch of the internet of things (IoT) which is used for communication among vehicles. As vehicular nodes are always in motion, the topology changes frequently. These changes cause major issues in IoV, such as scalability, dynamic topology changes, and shortest-path routing. Clustering is one of the solutions for such issues. In this paper, the focus is on the stability of IoV topology in a dynamic environment. The proposed metaheuristic dragonfly-based clustering algorithm CAVDO is used for cluster-based packet route optimization to make the topology stable, and a mobility-aware dynamic transmission range algorithm (MA-DTR) is used with CAVDO for transmission range adaptation on the basis of traffic density. The proposed CAVDO with MA-DTR is compared with the progressive baseline techniques ant colony optimization (ACO) and comprehensive learning particle swarm optimization (CLPSO). Numerous experiments were performed keeping in view the transmission dynamics for a stable topology. CAVDO performed better in many cases, providing the minimum number of clusters according to the current channel conditions. Important parameters involved in the clustering process are: the number of un-clustered nodes as a re-clustering criterion, clustering time, re-clustering delay, dynamic transmission range, direction, and speed. According to these parameters, results indicate that CAVDO outperformed ACO-based clustering and CLPSO in various network settings. Additionally, to improve network availability and to incorporate the functionalities of next-generation network infrastructure, a 5G-enabled architecture is also utilized."} {"_id": "30d6183851366a484d438385c2167a764edbab8a", "title": "Influence of PEEK Coating on Hip Implant Stress Shielding: A Finite Element Analysis", "text": "Stress shielding is a well-known failure factor in hip implants. This work proposes a design concept for hip implants, using a combination of a metallic stem with a polymer coating (polyether ether ketone (PEEK)). The proposed design concept is simulated using titanium alloy stems and PEEK coatings with thicknesses varying from 100 to 400\u2009\u03bcm. The Finite Element analysis of the cancellous bone surrounding the implant shows promising results.
The effective von Mises stress increases by between 81 and 92% for the complete volume of cancellous bone. When focusing on the proximal zone of the implant, the increased stress transmission to the cancellous bone reaches between 47 and 60%. This increment in load transferred to the bone can influence mineral bone loss due to stress shielding, minimizing this effect and thus prolonging the implant lifespan."} {"_id": "fb619c5a074393efbaa865f24631598350cf1fef", "title": "A Multi-aspect Analysis of Automatic Essay Scoring for Brazilian Portuguese", "text": "While several methods for automatic essay scoring (AES) have been proposed for the English language, systems for other languages are rare. To this end, we propose in this paper a multi-aspect AES system for Brazilian Portuguese, which we apply to a collection of essays that human experts evaluated according to the five aspects defined by the Brazilian Government for the National High School Exam (ENEM). These aspects are skills that students must master, and every skill is assessed separately from the others. In addition to prediction, we also performed feature analysis for each aspect. The proposed AES system employs several features already used by AES systems for the English language. Our results show that predictions for some aspects performed well with the employed features, while predictions for other aspects performed poorly. Furthermore, the detailed feature analysis we performed made it possible to note their independent impacts on each of the five aspects. Finally, aside from these contributions, our work reveals some challenges and directions for future research, related, for instance, to the fact that the ENEM has over eight million yearly enrollments."} {"_id": "001be2c9a96a82c1e380970e8460827ecce1cb97", "title": "Physician well-being: A powerful way to improve the patient experience.", "text": "Improving the patient experience\u2014or the patient-centeredness of care\u2014is a key focus of health care organizations. With a shift to reimbursement models that reward higher patient experience scores, increased competition for market share, and cost constraints emphasizing the importance of patient loyalty, many health care organizations consider optimizing the patient experience to be a key strategy for sustaining financial viability. A survey conducted by The Beryl Institute, which supports research on the patient experience, found that hospital executives ranked patient experience as a clear top priority, falling a close second behind quality and safety.1 According to the institute, engagement with employees, including physicians, is the most effective way to improve the patient experience, as more engaged, satisfied staffers provide better service and care to patients.2 Data on physician satisfaction support this supposition: Research has shown that physician career satisfaction closely correlates with patient satisfaction within a geographic region.3 Given the importance of physician satisfaction to the patient experience, it is concerning that dissatisfaction and burnout are on the rise among physicians. In a 2012 online poll of more than 24,000 physicians across the country, only 54 percent would choose medicine again as a career, down from 69 percent in 2011.4 Almost half of the more than 7,000 physicians responding to a recent online survey reported at least one symptom of burnout.5 Physician dissatisfaction and burnout have a profound negative effect on the patient experience of care.
Physician leaders can take steps within their organizations to foster physician well-being, improving the care experience for physicians and patients while strengthening the sustainability of their organizations."} {"_id": "38a935e212c8e10460545b74a7888e3966c03e74", "title": "Amodal Detection of 3D Objects: Inferring 3D Bounding Boxes from 2D Ones in RGB-Depth Images", "text": "This paper addresses the problem of amodal perception in 3D object detection. The task is to not only find object localizations in the 3D world, but also estimate their physical sizes and poses, even if only parts of them are visible in the RGB-D image. Recent approaches have attempted to harness the point cloud from the depth channel to exploit 3D features directly in the 3D space and have demonstrated superiority over traditional 2.5D representation approaches. We revisit the amodal 3D detection problem by sticking to the 2.5D representation framework, and directly relate 2.5D visual appearance to 3D objects. We propose a novel 3D object detection system that simultaneously predicts objects' 3D locations, physical sizes, and orientations in indoor scenes. Experiments on the NYUV2 dataset show that our algorithm significantly outperforms the state-of-the-art and indicate that the 2.5D representation is capable of encoding features for 3D amodal object detection. All source code and data are available at https://github.com/phoenixnn/Amodal3Det."} {"_id": "a825fa40f998cff4ca2908f867ba1d3757e355d5", "title": "An architecture of agent-based multi-layer interactive e-learning and e-testing platform", "text": "E-learning is the synthesis of multimedia and social media platforms powered by Internet and mobile technologies. The great popularity of e-learning is encouraging governments and educational institutions to adopt and develop e-learning cultures in societies in general and universities in particular. In traditional e-learning systems, all components (agents and services) are tightly coupled into a single system. In this study, we propose a new architecture for e-learning with two subsystems, namely, e-learning and e-testing. The motivation of the research is to improve the effectiveness of the learning process by extracting relevant features for an elastic learning and testing process. We employ a multi-agent system because it provides a five-layer architecture, including agents at various levels. We also propose a novel method for updating content through question and answer between e-learners and intelligent agents. To achieve optimization, we design a system that applies various technologies.
"} {"_id": "1f4789a2effea966c8fd10491fe859cfc7607137", "title": "A Dynamic Bayesian Network Model for Autonomous 3D Reconstruction from a Single Indoor Image", "text": "When we look at a picture, our prior knowledge about the world allows us to resolve some of the ambiguities that are inherent to monocular vision, and thereby infer 3d information about the scene. We also recognize different objects, decide on their orientations, and identify how they are connected to their environment. Focusing on the problem of autonomous 3d reconstruction of indoor scenes, in this paper we present a dynamic Bayesian network model capable of resolving some of these ambiguities and recovering 3d information for many images. Our model assumes a \"floor-wall\" geometry of the scene and is trained to recognize the floor-wall boundary in each column of the image. When the image is produced under perspective geometry, we show that this model can be used for 3d reconstruction from a single image. To our knowledge, this was the first monocular approach to automatically recover 3d reconstructions from single indoor images."} {"_id": "3746184fd7b82bbef7baee49daa9c7d79c37ea06", "title": "Forecasting exchange rate with deep belief networks", "text": "Forecasting exchange rates is an important financial problem which has received much attention. Nowadays, neural networks have become one of the effective tools in this research field. In this paper, we propose the use of a deep belief network (DBN) to tackle the exchange rate forecasting problem. A DBN is applied to predict both British Pound/US dollar and Indian rupee/US dollar exchange rates in our experiments. We use six evaluation criteria to evaluate its performance. We also compare our method to a feedforward neural network (FFNN), which is the state-of-the-art method for forecasting exchange rates with neural networks. Experiments indicate that deep belief networks (DBNs) are applicable to the prediction of foreign exchange rates, since they achieve better performance than feedforward neural networks (FFNNs)."} {"_id": "b9689f229371ce83d035d5286875a49bd15b7ca6", "title": "Applying the Clique Percolation Method to analyzing cross-market branch banking network structure: the case of Illinois", "text": "This study applies the Clique Percolation Method (CPM) to an investigation of the changing spatial organization of the Illinois cross-market branch banking network. Nonoverlapping community detection algorithms assign nodes into exclusive communities and, when results are mapped, these techniques may generate spatially disjointed geographical regions, an undesirable characteristic for geographical study. Alternative overlapping community detection algorithms allow overlapping membership, where a node can be a member of different communities. Such a structure simultaneously accommodates the spatial proximity and spatial separation which occur with respect to a node in relation to other nodes in the system. Applying such a structure in geographical analysis helps preserve well-established principles regarding spatial relationships within the geography discipline. The result can also be mapped for display and correct interpretation.
The CPM is chosen in this study due to the complete connectivity within cliques, which simulates the practice by banking institutions of forming highly connected networks through multi-location operations in order to diversify their business and hedge against risks. Applying the CPM helps reveal the spatial pattern of branch banking connections which would otherwise be difficult to see. However, the CPM has been shown not to be among the best-performing overlapping community detection algorithms. Future research should explore other possible algorithms for detecting overlapping communities. Detecting communities in a network only reveals certain characteristics of the spatial organization of the network, rather than providing an explanation of the spatial-network patterns revealed. Full interpretation of the pattern must rely on the attribute data and additional information. This may illustrate the value of an integrated approach in geographical analysis using both social network analysis and spatial analysis techniques."} {"_id": "83f6e857033a54c271fe178391ac23044c623b05", "title": "Multi-platform molecular profiling of a large cohort of glioblastomas reveals potential therapeutic strategies", "text": "Glioblastomas (GBM) are the most aggressive and prevalent form of gliomas, with abysmal prognosis and limited treatment options. We analyzed clinically relevant molecular aberrations suggestive of response to therapies in 1035 GBM tumors. Our analysis revealed mutations in 39 of 48 genes tested. IHC revealed expression of PD-L1 in 19% and PD-1 in 46%. MGMT-methylation was seen in 43%, EGFRvIII in 19% and 1p19q co-deletion in 2%. TP53 mutation was associated with concurrent mutations, while IDH1 mutation was associated with MGMT-methylation and TP53 mutation and was mutually exclusive of EGFRvIII mutation. Distinct biomarker profiles were seen in GBM compared with WHO grade III astrocytoma, suggesting different biology and potentially different treatment approaches. Analysis of 17 metachronous paired tumors showed frequent biomarker changes, including MGMT-methylation and EGFR aberrations, indicating the need for a re-biopsy for tumor profiling to direct subsequent therapy. MGMT-methylation, PR and TOPO1 appeared as significant prognostic markers in sub-cohorts of GBM defined by age. The current study represents the largest biomarker study on clinical GBM tumors using multiple technologies to detect gene mutation, amplification, protein expression and promoter methylation. These data will inform planning for future personalized biomarker-based clinical trials and help identify effective treatments based on tumor biomarkers."} {"_id": "83ee9921f6bae094ffaa8ddf20f8e908b3656a88", "title": "Soft Pneumatic Actuators for Rehabilitation", "text": "Pneumatic artificial muscles are pneumatic devices with practical and various applications as common actuators. Like human muscles, they work in an agonist-antagonist way, giving a traction force only when supplied with compressed air. The state of the art of soft pneumatic actuators is analyzed here: different models of pneumatic muscles are considered and evolution lines are presented. Then, the use of Pneumatic Muscles (PAM) in rehabilitation apparatus is described, and the general characteristics required in different applications are considered, analyzing the use of proper soft actuators with various technical properties.
Therefore, research activity carried out in the Department of Mechanical and Aerospace Engineering in the field of soft and textile actuators is presented here. In particular, pneumatic textile muscles useful for active suit design are described. These components are made of a tubular structure, with an inner layer of latex coated with a deformable outer fabric sewn along the edge. In order to increase pneumatic muscle forces and contractions, Braided Pneumatic Muscles are studied. In this paper, new prototypes are presented, based on a fabric construction and various kinds of geometry. Pressure-force-deformation tests are carried out and the results analyzed. These actuators are useful for rehabilitation applications. In order to reproduce whole upper limb movements, new kinds of soft actuators are studied, based on the same principle of planar membrane deformation. As an example, the bellows muscle model and worm muscle model are developed and described. In both cases, wide deformations are expected. Another issue for soft actuators is pressure therapy. Some textile sleeve prototypes developed for massage therapy on patients suffering from lymphedema are analyzed. Different types of fabric and assembly techniques have been tested. In general, these Pressure Soft Actuators are useful for upper/lower limb treatments, according to medical requirements. In particular, devices useful for arm massage treatments are considered. Finally, some applications are considered."} {"_id": "9d0b2ececb713e6e65ec1b693d7002a2ea21dece", "title": "Scalable Content-Aware Collaborative Filtering for Location Recommendation", "text": "Location recommendation plays an essential role in helping people find attractive places. Though recent research has studied how to recommend locations with social and geographical information, few studies have addressed the cold-start problem of new users. Because mobility records are often shared on social networks, semantic information can be leveraged to tackle this challenge. A typical method is to feed them into explicit-feedback-based content-aware collaborative filtering, but such methods require drawing negative samples for better learning performance, as users\u2019 negative preferences are not observable in human mobility. However, prior studies have empirically shown that sampling-based methods do not perform well. To this end, we propose a scalable Implicit-feedback-based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and to steer clear of negative sampling. We then develop an efficient optimization algorithm, scaling linearly with data size and feature size, and quadratically with the dimension of the latent space. We further establish its relationship with graph Laplacian regularized matrix factorization. Finally, we evaluate ICCF with a large-scale LBSN dataset in which users have profiles and textual content. The results show that ICCF outperforms several competing baselines, and that user information is not only effective for improving recommendations but also for coping with cold-start scenarios."} {"_id": "82c2a90da14727e171293906cfa927eda8d6e125", "title": "E-commerce web page classification based on automatic content extraction", "text": "Currently, there are many E-commerce websites on the internet. These E-commerce websites can be categorized into many types, one of which is C2C (Customer to Customer) websites such as eBay and Amazon.
The main objective of C2C websites is an online marketplace where everyone can buy or sell anything at any time. Since there are a lot of products on E-commerce websites and each product is classified into its category by humans, it is very hard to define their categories in an automatic manner when the data is very large. In this paper, we propose a method for classifying E-commerce web pages based on their product types. Firstly, we apply the proposed automatic content extraction to extract the contents of E-commerce web pages. Then, we apply automatic keyword extraction to select words from these extracted contents for generating the feature vectors that represent the E-commerce web pages. Finally, we apply a machine learning technique for classifying the E-commerce web pages based on their product types. The experimental results signify that our proposed method can classify the E-commerce web pages in an automatic fashion."} {"_id": "f70ffda954932ff474fa62f2ab0db4227ad614ef", "title": "MIMO for DVB-NGH, the next generation mobile TV broadcasting [Accepted From Open Call]", "text": "DVB-NGH (Digital Video Broadcasting - Next Generation Handheld) is the next generation technology for mobile TV broadcasting, which has been developed by the DVB project with the most advanced transmission technologies. DVB-NGH is the first broadcasting standard to incorporate multiple-input multiple-output (MIMO) as the key technology to overcome the Shannon limit of single antenna communications. MIMO techniques can be used to improve the robustness of the transmitted signal by exploiting the spatial diversity of the MIMO channel, but also to achieve increased data rates through spatial multiplexing. This article describes the benefits of MIMO that motivated its incorporation in DVB-NGH, reviews the MIMO schemes adopted, and discusses some aspects related to the deployment of MIMO networks in DVB-NGH. The article also provides a feature comparison with the multi-antenna techniques for 3GPP's LTE/LTE-Advanced for cellular networks. Finally, physical layer simulation results calibrated within the DVB-NGH standardization process are provided to illustrate the gain of MIMO for the next generation of mobile TV broadcasting."} {"_id": "e07b6211d8843575b9ea554ea9e2823387563f7e", "title": "Early sensitivity to arguments: how preschoolers weight circular arguments.", "text": "Observational studies suggest that children as young as 2 years can evaluate some of the arguments people offer them. However, experimental studies of sensitivity to different arguments have not yet targeted children younger than 5 years. The current study aimed at bridging this gap by testing the ability of preschoolers (3-, 4-, and 5-year-olds) to weight arguments. To do so, it focused on a common type of fallacy, circularity, to which 5-year-olds are sensitive. The current experiment asked children (and, as a group control, adults) to choose between the contradictory opinions of two speakers. In the first task, participants of all age groups favored an opinion supported by a strong argument over an opinion supported by a circular argument. In the second task, 4- and 5-year-olds, but not 3-year-olds or adults, favored the opinion supported by a circular argument over an unsupported opinion.
We suggest that the results of these tasks in 3- to 5-year-olds are best interpreted as resulting from the combination of two mechanisms: (a) basic skills of argument evaluation that process the content of arguments, allowing children as young as 3 years to favor non-circular arguments over circular arguments, and (b) a heuristic that leads older children (4- and 5-year-olds) to give some weight to circular arguments, possibly by interpreting these arguments as a cue to speaker dominance."} {"_id": "3605b37545544d057c0756e4ada3a336c93593f0", "title": "StackGuard: Simple Stack Smash Protection for GCC", "text": "Since 1998, StackGuard patches to GCC have been used to protect entire distributions from stack-smashing buffer overflows. Performance overhead and software compatibility issues have been minimal. In its history, the parts of GCC that StackGuard has operated in have twice changed enough to require complete overhauls of the StackGuard patch. Since StackGuard is a mature technology, even seeing re-implementations in other compilers, we propose that GCC adopt StackGuard as a standard feature. This paper describes our recent work to bring StackGuard fully up to date with current GCC, introduce architecture independence, and extend the protection of stack data structures, while keeping the StackGuard patch as small, simple, and modular as possible."} {"_id": "70ded7db743e08d99c5dc8803c5178b0c3ed55de", "title": "Barriers to Physicians\u2019 Adoption of Healthcare Information Technology: An Empirical Study on Multiple Hospitals", "text": "Prior research on technology usage has largely overlooked the issue of user resistance or barriers to technology acceptance. Prior research on Electronic Medical Records has largely focused on technical issues but rarely on managerial issues. Such oversight has prevented a better understanding of users\u2019 resistance to new technologies and the antecedents of technology rejection. Incorporating the enablers and the inhibitors of technology usage intention, this study explores physicians\u2019 reactions towards the electronic medical record. The main focus is on the barriers of perceived threat and perceived inequity. 115 physicians from 6 hospitals participated in the questionnaire survey. Structural Equation Modeling was employed to verify the measurement scale and research hypotheses. According to the results, perceived threat shows a direct and negative effect on perceived usefulness and behavioral intentions, as well as an indirect effect on behavioral intentions via perceived usefulness. Perceived inequity reveals a direct and positive effect on perceived threat, and it also shows a direct and negative effect on perceived usefulness. In addition, perceived inequity reveals an indirect effect on behavioral intentions via perceived usefulness, with perceived threat as the inhibitor. The research findings present better insight into physicians\u2019 rejection and its antecedents. For the healthcare industry, understanding the factors contributing to physicians\u2019 technology acceptance is important to ensure a smooth implementation of any new technology. The results of this study can also provide change managers with a reference for a smooth IT introduction into an organization. In addition, our proposed measurement scale can be applied as a diagnostic tool for them to better understand the status quo within their organizations and users\u2019 reactions to technology acceptance.
By doing so, barriers to physicians\u2019 acceptance can be identified earlier and more effectively before leading to technology rejection."} {"_id": "36e83ef515ef5d6cdb5db827f11bab22155d57a8", "title": "Learning Low-Dimensional Representations of Medical Concepts", "text": "We show how to learn low-dimensional representations (embeddings) of a wide range of concepts in medicine, including diseases (e.g., ICD9 codes), medications, procedures, and laboratory tests. We expect that these embeddings will be useful across medical informatics for tasks such as cohort selection and patient summarization. These embeddings are learned using a technique called neural language modeling from the natural language processing community. However, rather than learning the embeddings solely from text, we show how to learn the embeddings from claims data, which is widely available both to providers and to payers. We also show that with a simple algorithmic adjustment, it is possible to learn medical concept embeddings in a privacy preserving manner from co-occurrence counts derived from clinical narratives. Finally, we establish a methodological framework, arising from standard medical ontologies such as UMLS, NDF-RT, and CCS, to further investigate the embeddings and precisely characterize their quantitative properties."} {"_id": "3bf4ef3e38fc5a6b18db2aa6d1263f0373b604f2", "title": "Nanowire dye-sensitized solar cells.", "text": "Excitonic solar cells-including organic, hybrid organic-inorganic and dye-sensitized cells (DSCs)-are promising devices for inexpensive, large-scale solar energy conversion. The DSC is currently the most efficient and stable excitonic photocell. Central to this device is a thick nanoparticle film that provides a large surface area for the adsorption of light-harvesting molecules. However, nanoparticle DSCs rely on trap-limited diffusion for electron transport, a slow mechanism that can limit device efficiency, especially at longer wavelengths. Here we introduce a version of the dye-sensitized cell in which the traditional nanoparticle film is replaced by a dense array of oriented, crystalline ZnO nanowires. The nanowire anode is synthesized by mild aqueous chemistry and features a surface area up to one-fifth as large as a nanoparticle cell. The direct electrical pathways provided by the nanowires ensure the rapid collection of carriers generated throughout the device, and a full Sun efficiency of 1.5% is demonstrated, limited primarily by the surface area of the nanowire array."} {"_id": "4d7a8836b304a1ecebee19ff297f1850e81903b4", "title": "SECURE COMPUTER SYSTEMS : MATHEMATICAL FOUNDATIONS", "text": ""} {"_id": "9890ff8405f80042605040015fcdbe8139592c83", "title": "Daily and compulsive internet use and well-being in adolescence: a diathesis-stress model based on big five personality traits.", "text": "This study examined the associations between adolescents' daily Internet use and low well-being (i.e., loneliness, low self-esteem, and depressive moods). We hypothesized that (a) linkages between high levels of daily Internet use and low well-being would be mediated by compulsive Internet use (CIU), and (b) that adolescents with low levels of agreeableness and emotional stability, and high levels of introversion would be more likely to develop CIU and lower well-being. Data were used from a sample of 7888 Dutch adolescents (11-21 years). Results from structural equation modeling analyses showed that daily Internet use was indirectly related to low well-being through CIU. 
In addition, daily Internet use was found to be more strongly related to CIU in introverted, low-agreeable, and emotionally less-stable adolescents. In turn, CIU was more strongly linked to loneliness in introverted, emotionally less-stable, and less agreeable adolescents."} {"_id": "184676cb50d1c62b249e90a97e4edc8e42a1c024", "title": "Word Alignment Modeling with Context Dependent Deep Neural Network", "text": "In this paper, we explore a novel bilingual word alignment approach based on DNN (Deep Neural Network), which has been proven to be very effective in various machine learning tasks (Collobert et al., 2011). We describe in detail how we adapt and extend the CD-DNN-HMM (Dahl et al., 2012) method introduced in speech recognition to the HMM-based word alignment model, in which bilingual word embedding is discriminatively learnt to capture lexical translation information, and surrounding words are leveraged to model context information in bilingual sentences. While capable of modeling the rich bilingual correspondence, our method generates a very compact model with much fewer parameters. Experiments on a large-scale English-Chinese word alignment task show that the proposed method outperforms the HMM and IBM model 4 baselines by 2 points in F-score."} {"_id": "29c34a034f6f35915a141dac98cabf625bea2b3c", "title": "Contrastive Estimation: Training Log-Linear Models on Unlabeled Data", "text": "Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and named-entity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabeled data, we require unsupervised estimation methods for log-linear models; few exist. We describe a novel approach, contrastive estimation. We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. Applied to a sequence labeling problem\u2014POS tagging given a tagging dictionary and unlabeled text\u2014contrastive estimation outperforms EM (with the same feature set), is more robust to degradations of the dictionary, and can largely recover by modeling additional features."} {"_id": "368f3dea4f12c77dfc9b7203f3ab2b9efaecb635", "title": "Statistical Phrase-Based Translation", "text": "We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models outperform word-based models. Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. Learning only syntactically motivated phrases degrades the performance of our systems."} {"_id": "dbca66128d360286c6a5e99214101ae356b93ef2", "title": "HMM-Based Word Alignment in Statistical Translation", "text": "In this paper, we describe a new model for word alignment in statistical translation and present experimental results. 
The idea of the model is to make the alignment probabilities dependent on the differences in the alignment positions rather than on the absolute positions. To achieve this goal, the approach uses a first-order Hidden Markov model (HMM) for the word alignment problem, as such models are used successfully in speech recognition for the time alignment problem. The difference from the time alignment HMM is that there is no monotonicity constraint on the possible word orderings. We describe the details of the model and test the model on several bilingual corpora."} {"_id": "386dc0445ba9991e6f981ad407bc45f17900daf5", "title": "Background Removal in Image Indexing and Retrieval", "text": "This paper presents our research in image content based indexing and retrieval, a key technology in digital image libraries. In most of the existing image content-based techniques, image features used for indexing and retrieval are global, i.e., computed over the entire image. The major problem with the global image feature based retrieval methods is that background features can easily be mistaken for object features. When a user attempts to retrieve images using color features, he/she usually means the color feature of an object or objects of interest contained in the image. The approach we describe in this paper utilizes color clusters for image background analysis. Once the background regions are identified, they are removed from the image indexing procedure, so that they no longer interfere with the meaningful image content during the retrieval process. The algorithm consists of three major computation steps: fuzzy clustering, color image segmentation, and background analysis."} {"_id": "3f56b4e313441735f394b37ac74bc9b9acda6b9f", "title": "Induction of fuzzy rules and membership functions from training examples", "text": "Most fuzzy controllers and fuzzy expert systems must predefine membership functions and fuzzy inference rules to map numeric data into linguistic variable terms and to make fuzzy reasoning work. In this paper, we propose a general learning method as a framework for automatically deriving membership functions and fuzzy if-then rules from a set of given training examples to rapidly build a prototype fuzzy expert system. Based on the membership functions and the fuzzy rules derived, a corresponding fuzzy inference procedure to process inputs is also developed."} {"_id": "461ebcb7a274525b8efecf7990c85994248ab433", "title": "Routing Attacks and Countermeasures in the RPL-Based Internet of Things", "text": "The Routing Protocol for Low-Power and Lossy Networks (RPL) is a novel routing protocol standardized for constrained environments such as 6LoWPAN networks. Providing security in IPv6/RPL connected 6LoWPANs is challenging because the devices are connected to the untrusted Internet and are resource constrained, the communication links are lossy, and the devices use a set of novel IoT technologies such as RPL, 6LoWPAN, and CoAP/CoAPs. In this paper we provide a comprehensive analysis of IoT technologies and their new security capabilities that can be exploited by attackers or IDSs. One of the major contributions in this paper is our implementation and demonstration of well-known routing attacks against 6LoWPAN networks running RPL as a routing protocol. We implement these attacks in the RPL implementation in the Contiki operating system and demonstrate these attacks in the Cooja simulator. 
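As an aside to the HMM-based word alignment model summarized earlier, the following is a minimal sketch of its central idea: transition probabilities that depend only on the difference between alignment positions, not on the absolute positions, decoded here with a plain Viterbi pass. The jump distribution and emission table are made-up toy values, not parameters from the paper.

```python
import numpy as np

def jump_transition_matrix(I, jump_probs):
    """Build an I x I transition matrix from a dict {jump_width: prob}.

    Rows are renormalized, so only the *relative* position difference
    (i - i_prev) matters -- the core idea of the alignment HMM above.
    """
    T = np.zeros((I, I))
    for i_prev in range(I):
        for i in range(I):
            T[i_prev, i] = jump_probs.get(i - i_prev, 1e-6)
        T[i_prev] /= T[i_prev].sum()
    return T

def viterbi_alignment(T, emit):
    """emit[j, i] = p(f_j | e_i); return the best source position per target word."""
    J, I = emit.shape
    delta = np.log(emit[0]) - np.log(I)      # uniform start distribution
    back = np.zeros((J, I), dtype=int)
    logT = np.log(T)
    for j in range(1, J):
        scores = delta[:, None] + logT        # scores[i_prev, i]
        back[j] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(emit[j])
    path = [int(delta.argmax())]
    for j in range(J - 1, 0, -1):
        path.append(int(back[j, path[-1]]))
    return path[::-1]

# Toy example: 3 source words, 4 target words; forward jumps of +1 are
# preferred, but nothing enforces monotonicity, matching the description above.
T = jump_transition_matrix(3, {0: 0.2, 1: 0.6, -1: 0.1, 2: 0.1})
emit = np.array([[.7, .2, .1], [.2, .6, .2], [.1, .3, .6], [.1, .2, .7]])
print(viterbi_alignment(T, emit))
```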
Furthermore, we highlight novel security features in the IPv6 protocol and exemplify the use of these features for intrusion detection in the IoT by implementing a lightweight heartbeat protocol."} {"_id": "cfcf81f0bb3a88ff84a16905af46e9314e6e5fc1", "title": "Speech emotion classification using tree-structured sparse logistic regression", "text": "The extraction and selection of acoustic features are crucial steps in the development of a system for classifying emotions in speech. Most works in the field use some kind of prosodic features, often in combination with spectral and glottal features, and select appropriate features in classifying emotions. In these methods, feature choices are mostly made regardless of existing relationships and structures between features. However, considering them can be beneficial, potentially both for interpretability and to improve classification performance. To this end, a structured sparse logistic regression model incorporated with the hierarchical structure of features derived from prosody, spectral envelope, and glottal information is proposed in this paper. The proposed model simultaneously addresses tree-structured sparse feature selection and emotion classification. Evaluation of the proposed model on the Berlin emotional database showed substantial improvement over the conventional sparse logistic regression model."} {"_id": "7d1b4f1cc97dd9306ce3836420e0440ba8a7d25c", "title": "Quantum clustering algorithms", "text": "By the term \"quantization\", we refer to the process of using quantum mechanics in order to improve a classical algorithm, usually by making it go faster. In this paper, we initiate the idea of quantizing clustering algorithms by using variations on a celebrated quantum algorithm due to Grover. After having introduced this novel approach to unsupervised learning, we illustrate it with a quantized version of three standard algorithms: divisive clustering, k-medians and an algorithm for the construction of a neighbourhood graph. We obtain a significant speedup compared to the classical approach."} {"_id": "6e2bf58a3e04031aa08e6832cc0e7b845e5d706c", "title": "A Comparative Study of Polar Code Constructions for the AWGN Channel", "text": "We present a comparative study of the performance of various polar code constructions in an additive white Gaussian noise (AWGN) channel. A polar code construction is any algorithm that selects the K best among N possible polar bit-channels at the design signal-to-noise-ratio (design-SNR) in terms of bit error rate (BER). Optimal polar code construction is hard and therefore many suboptimal polar code constructions have been proposed at different computational complexities. Polar codes are also non-universal, meaning that the code changes significantly with the design-SNR. However, it is not known which construction algorithm at what design-SNR constructs the best polar codes. We first present a comprehensive survey of all the well-known polar code constructions along with their full implementations. We then propose a heuristic algorithm to find the best design-SNR for constructing the best possible polar codes from a given construction algorithm. The proposed algorithm involves a search among several possible design-SNRs. We finally use our algorithm to perform a comparison of different construction algorithms using extensive simulations. We find that all polar code construction algorithms generate equally good polar codes in an AWGN channel, if the design-SNR is optimized. 
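To make the notion of "selecting the K best bit-channels" above concrete, here is a small sketch of one classic construction from the family surveyed: the Bhattacharyya-parameter recursion, which is exact for the binary erasure channel and is often used as the simplest illustration. The mapping from a design-SNR to the starting parameter is omitted here.

```python
import numpy as np

def bec_bhattacharyya(n_levels, z0=0.5):
    """Recursive Bhattacharyya parameters for N = 2**n_levels bit-channels.

    For the binary erasure channel: Z(W-) = 2Z - Z**2 (degraded channel),
    Z(W+) = Z**2 (upgraded channel). Smaller Z means a more reliable channel.
    """
    z = np.array([z0])
    for _ in range(n_levels):
        z = np.concatenate([2 * z - z ** 2,   # "minus" (worse) channels
                            z ** 2])          # "plus" (better) channels
    return z

def select_bit_channels(N, K, z0=0.5):
    z = bec_bhattacharyya(int(np.log2(N)), z0)
    return np.sort(np.argsort(z)[:K])         # indices of the K most reliable channels

print(select_bit_channels(N=8, K=4))
```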
Keywords\u2014Bhattacharyya bounds, bit-channels, Gaussian approximation, polar codes"} {"_id": "ce08fc211d6cf1b572abed6f6540e36150daa4b2", "title": "False Data Injection Attacks in Control Systems", "text": "This paper analyzes the effects of false data injection attacks on control systems. We assume that the system, equipped with a Kalman filter and LQG controller, is used to monitor and control a discrete linear time invariant Gaussian system. We further assume that the system is equipped with a failure detector. An attacker wishes to destabilize the system by compromising a subset of sensors and sending corrupted readings to the state estimator. In order to inject fake sensor measurements without being detected the attacker needs to carefully design its inputs to fool the failure detector, since abnormal sensor measurements usually trigger an alarm from the failure detector. We will provide a necessary and sufficient condition under which the attacker could destabilize the system while successfully bypassing the failure detector. A design method for the defender to improve the resilience of the CPS against such false data injection attacks is also provided."} {"_id": "24c9646926edc4b33f39d85184cbfd46595050ed", "title": "Automatic Text Summarization based on Betweenness Centrality", "text": "Automatic text summarization plays an important role in information retrieval. With a large volume of information, presenting the user with only a summary greatly facilitates finding the most relevant content. This task can therefore provide a solution to the problem of information overload. Automatic text summarization is the process of automatically creating a compressed version of a certain text that provides useful information for users. This article presents an unsupervised extractive approach based on graphs. The method constructs an undirected weighted graph from the original text by adding a vertex for each sentence, and calculates a weighted edge between each pair of sentences that is based on a similarity/dissimilarity criterion. The main contribution of the work is a study of the impact of a well-known algorithm from social network analysis, which makes it possible to analyze large graphs efficiently. As a measure to select the most relevant sentences, we use betweenness centrality. The method was evaluated on the open DUC2002 reference data set using ROUGE scores."} {"_id": "637e37d06965e82e8f2456ce5d59b825a54b0ef7", "title": "The Abandoned Side of the Internet: Hijacking Internet Resources When Domain Names Expire", "text": "The vulnerability of the Internet has been demonstrated by prominent IP prefix hijacking events. Major outages such as the China Telecom incident in 2010 stimulate speculations about malicious intentions behind such anomalies. Surprisingly, almost all discussions in the current literature assume that hijacking incidents are enabled by the lack of security mechanisms in the inter-domain routing protocol BGP. In this paper, we discuss an attacker model that accounts for the hijacking of network ownership information stored in Regional Internet Registry (RIR) databases. We show that such threats emerge from abandoned Internet resources (e.g., IP address blocks, AS numbers). When DNS names expire, attackers gain the opportunity to take resource ownership by re-registering domain names that are referenced by corresponding RIR database objects. 
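A minimal sketch of the graph-based extractive summarizer described above: sentences become vertices, edge weights encode a dissimilarity criterion (networkx treats weights as distances when computing shortest paths), and betweenness centrality ranks the sentences. The word-overlap similarity below is an illustrative assumption, not the exact criterion used in the paper.

```python
import itertools
import networkx as nx

def overlap_similarity(s1, s2):
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / max(1, min(len(w1), len(w2)))

def summarize(sentences, k=2):
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        sim = overlap_similarity(sentences[i], sentences[j])
        if sim > 0:
            g.add_edge(i, j, weight=1.0 - sim)   # dissimilarity as distance
    scores = nx.betweenness_centrality(g, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]   # preserve original order

doc = ["The cat sat on the mat.",
       "A cat and a dog sat together.",
       "Stock prices rose sharply today.",
       "The dog chased the cat off the mat."]
print(summarize(doc, k=2))
```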
We argue that this kind of attack is more attractive than conventional hijacking, since the attacker can act in full anonymity on behalf of a victim. Although corresponding incidents have been observed in the past, current detection techniques are not equipped to deal with these attacks. We show that they are feasible with very little effort, and analyze the risk potential of abandoned Internet resources for the European service region: our findings reveal that currently 73 /24 IP prefixes and 7 ASes are vulnerable to attack. We discuss countermeasures and outline research directions towards preventive solutions."} {"_id": "8ae14aedfa58b9f1f1161b56e38f1a9e5190ec25", "title": "A Modular IoT Platform for Real-Time Indoor Air Quality Monitoring", "text": "The impact of air quality on health and on life comfort is well established. In many societies, vulnerable elderly and young populations spend most of their time indoors. Therefore, indoor air quality monitoring (IAQM) is of great importance to human health. Engineers and researchers are increasingly focusing their efforts on the design of real-time IAQM systems using wireless sensor networks. This paper presents an end-to-end IAQM system enabling measurement of CO\u2082, CO, SO\u2082, NO\u2082, O\u2083, Cl\u2082, ambient temperature, and relative humidity. In IAQM systems, remote users usually use a local gateway to connect wireless sensor nodes in a given monitoring site to the external world for ubiquitous access of data. In this work, the role of the gateway in processing collected air quality data and its reliable dissemination to end-users through a web-server is emphasized. A mechanism for the backup and the restoration of the collected data in the case of Internet outage is presented. The system is adapted to an open-source Internet-of-Things (IoT) web-server platform, called Emoncms, for live monitoring and long-term storage of the collected IAQM data. A modular IAQM architecture is adopted, which results in a smart scalable system that allows seamless integration of various sensing technologies, wireless sensor networks (WSNs) and smart mobile standards. The paper gives full hardware and software details of the proposed solution. Sample IAQM results collected in various locations are also presented to demonstrate the abilities of the system."} {"_id": "92f030e3c4c7a24a607054775d37e0c947808f8d", "title": "A Delegation Framework for Task-Role Based Access Control in WFMS", "text": "Access control is important for protecting information integrity in workflow management systems (WfMS). Compared to conventional access control technologies such as discretionary, mandatory, and role-based access control models, the task-role-based access control (TRBAC) model, an access control model based on both tasks and roles, meets more of the requirements of modern enterprise environments. However, delegation mechanisms for TRBAC have rarely been discussed. In this paper, a framework considering temporal constraints to improve delegation and support automatic delegation in TRBAC is presented. In the framework, the methodology for delegations requested from both users and the WfMS is discussed. The constraints for delegatee selection such as delegation loops and separation of duty (SOD) are addressed. With the framework, a sequence of algorithms for delegation and revocation of tasks is constructed step by step. 
Finally, a comparison is made between our approach and the representative related works."} {"_id": "46a1e47435bcaa949b105edc5ef2b3243225d238", "title": "Application of AHP and Taguchi loss functions in supply chain", "text": "Purpose \u2013 The purpose of this paper is to develop a decision model to help decision makers with selection of the appropriate supplier. Design/methodology/approach \u2013 Supplier selection is a multi-criteria decision-making process encompassing various tangible and intangible factors. Both risks and benefits of using a vendor in supply chain are identified for inclusion in the evaluation process. Since these factors can be objective and subjective, a hybrid approach that applies to both quantitative and qualitative factors is used in the development of the model. Taguchi loss functions are used to measure performance of each supplier candidate with respect to the risks and benefits. Analytical hierarchy process (AHP) is used to determine the relative importance of these factors to the decision maker. The weighted loss scores are then calculated for each supplier by using the relative importance as the weights. The composite weighted loss scores are used for ranking of the suppliers. The supplier with the smallest loss score is recommended for selection. Findings \u2013 Inclusion of both risk and benefit categories in the evaluation process provides a comprehensive decision tool. Practical implications \u2013 The proposed model provides guidelines for supply chain managers to make an informed decision regarding supplier selection. Originality/value \u2013 Combining Taguchi loss function and AHP provides a novel approach for ranking of potential suppliers for outsourcing purposes."} {"_id": "954b56b09b599c5fc3aa2fc180692c54f9ebeeee", "title": "Oscillator-based assistance of cyclical movements: model-based and model-free approaches", "text": "In this article, we propose a new method for providing assistance during cyclical movements. This method is trajectory-free, in the sense that it provides user assistance irrespective of the performed movement, and requires no other sensing than the assisting robot\u2019s own encoders. The approach is based on adaptive oscillators, i.e., mathematical tools that are capable of learning the high level features (frequency, envelope, etc.) of a periodic input signal. Here we present two experiments that we recently conducted to validate our approach: a simple sinusoidal movement of the elbow, that we designed as a proof-of-concept, and a walking experiment. In both cases, we collected evidence illustrating that our approach indeed assisted healthy subjects during movement execution. Owing to the intrinsic periodicity of daily life movements involving the lower-limbs, we postulate that our approach holds promise for the design of innovative rehabilitation and assistance protocols for the lower-limb, requiring little to no user-specific calibration."} {"_id": "4881281046f78e57cd4bbc16c517d019e9a2abb5", "title": "Wechsler Intelligence Scale for Children-IV Conceptual and Interpretive Guide", "text": "The Wechsler intelligence scales were developed by Dr. David Wechsler, a clinical psychologist with Bellevue Hospital. His initial test, the Wechsler-Bellevue Intelligence Scale, was published in 1939 and was designed to measure intellectual performance by adults. 
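Returning to the AHP/Taguchi supplier-selection approach summarized earlier: the sketch below scores each supplier on each criterion with a larger-is-better Taguchi loss function L(y) = k / y^2, weights the criteria with AHP-style priorities derived from a pairwise-comparison matrix (principal-eigenvector method, one standard variant), and recommends the supplier with the smallest weighted loss. All numbers are invented for illustration.

```python
import numpy as np

def taguchi_loss_larger_is_better(y, k=1.0):
    return k / (y ** 2)            # loss shrinks as performance y grows

def ahp_weights(pairwise):
    """Criterion weights from the principal eigenvector of a pairwise matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Criteria: quality, delivery reliability (both larger-is-better, scale 0..1).
pairwise = [[1.0, 3.0],            # quality judged 3x as important as delivery
            [1/3, 1.0]]
weights = ahp_weights(pairwise)

suppliers = {"A": [0.90, 0.70], "B": [0.80, 0.95]}
scores = {name: sum(w * taguchi_loss_larger_is_better(y)
                    for w, y in zip(weights, perf))
          for name, perf in suppliers.items()}
print(min(scores, key=scores.get), scores)     # smallest composite loss wins
```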
Wechsler constructed the WBIS based on his observation that, at the time, existing intelligence tests for adults were merely adaptations of tests for children and had little face validity for older age groups."} {"_id": "d3339336d8c66272321a39e802a96fcb2f717516", "title": "Percutaneous left atrial appendage closure vs warfarin for atrial fibrillation: a randomized clinical trial.", "text": "IMPORTANCE\nWhile effective in preventing stroke in patients with atrial fibrillation (AF), warfarin is limited by a narrow therapeutic profile, a need for lifelong coagulation monitoring, and multiple drug and diet interactions.\n\n\nOBJECTIVE\nTo determine whether a local strategy of mechanical left atrial appendage (LAA) closure was noninferior to warfarin.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nPROTECT AF was a multicenter, randomized (2:1), unblinded, Bayesian-designed study conducted at 59 hospitals of 707 patients with nonvalvular AF and at least 1 additional stroke risk factor (CHADS2 score \u22651). Enrollment occurred between February 2005 and June 2008 and included 4-year follow-up through October 2012. Noninferiority required a posterior probability greater than 97.5% and superiority a probability of 95% or greater; the noninferiority margin was a rate ratio of 2.0 comparing event rates between treatment groups.\n\n\nINTERVENTIONS\nLeft atrial appendage closure with the device (n\u2009=\u2009463) or warfarin (n\u2009=\u2009244; target international normalized ratio, 2-3).\n\n\nMAIN OUTCOMES AND MEASURES\nA composite efficacy end point including stroke, systemic embolism, and cardiovascular/unexplained death, analyzed by intention-to-treat.\n\n\nRESULTS\nAt a mean (SD) follow-up of 3.8 (1.7) years (2621 patient-years), there were 39 events among 463 patients (8.4%) in the device group for a primary event rate of 2.3 events per 100 patient-years, compared with 34 events among 244 patients (13.9%) for a primary event rate of 3.8 events per 100 patient-years with warfarin (rate ratio, 0.60; 95% credible interval, 0.41-1.05), meeting prespecified criteria for both noninferiority (posterior probability, >99.9%) and superiority (posterior probability, 96.0%). Patients in the device group demonstrated lower rates of both cardiovascular mortality (1.0 events per 100 patient-years for the device group [17/463 patients, 3.7%] vs 2.4 events per 100 patient-years with warfarin [22/244 patients, 9.0%]; hazard ratio [HR], 0.40; 95% CI, 0.21-0.75; P\u2009=\u2009.005) and all-cause mortality (3.2 events per 100 patient-years for the device group [57/466 patients, 12.3%] vs 4.8 events per 100 patient-years with warfarin [44/244 patients, 18.0%]; HR, 0.66; 95% CI, 0.45-0.98; P\u2009=\u2009.04).\n\n\nCONCLUSIONS AND RELEVANCE\nAfter 3.8 years of follow-up among patients with nonvalvular AF at elevated risk for stroke, percutaneous LAA closure met criteria for both noninferiority and superiority, compared with warfarin, for preventing the combined outcome of stroke, systemic embolism, and cardiovascular death, as well as superiority for cardiovascular and all-cause mortality.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00129545."} {"_id": "bd7c3791a20cba4ae6b1ec16212bd4b049fd1c41", "title": "Gunshot detection and localization using sensor networks", "text": "Acoustic sensors or sensor networks can be efficiently used to spot and observe targets of interest and have the ability to conclude the exact location of an object or event within the sensor network's coverage range. 
For shooter detection and location estimation, gunshot localization schemes require one or more sensing technologies. In the current situation, the importance of counter-sniper systems cannot be denied. An efficient counter-sniper system that can immediately provide the shooter location offers many benefits to security personnel. In this paper, the Time Difference Of Arrival (TDOA) of the shock wave and muzzle blast is used to estimate the shooter location. The proposed design and algorithm were validated, and the estimated shooter origin was very close to theoretical values. The results show that the scheme works well and is capable of locating sniper fire efficiently and effectively, with accurate results."} {"_id": "5b8869bb7afa5d8d3c183dfac0d0f26c2e218593", "title": "Fast Priority Queues for Cached Memory", "text": "The cache hierarchy prevalent in today's high-performance processors has to be taken into account in order to design algorithms that perform well in practice. This paper advocates the adaptation of external memory algorithms to this purpose. This idea and the practical issues involved are exemplified by engineering a fast priority queue suited to external memory and cached memory that is based on k-way merging. It improves previous external memory algorithms by constant factors crucial for transferring it to cached memory. Running in the cache hierarchy of a workstation, the algorithm is at least two times faster than an optimized implementation of binary heaps and 4-ary heaps for large inputs."} {"_id": "0e93d05017b495f205fbf5d27188bba7be9be5e4", "title": "Reverse Nearest Neighbors in Unsupervised Distance-Based Outlier Detection", "text": "Outlier detection in high-dimensional data presents various challenges resulting from the \u201ccurse of dimensionality.\u201d A prevailing view is that distance concentration, i.e., the tendency of distances in high-dimensional data to become indiscernible, hinders the detection of outliers by making distance-based methods label all points as almost equally good outliers. In this paper, we provide evidence supporting the opinion that such a view is too simple, by demonstrating that distance-based methods can produce more contrasting outlier scores in high-dimensional settings. Furthermore, we show that high dimensionality can have a different impact, by reexamining the notion of reverse nearest neighbors in the unsupervised outlier-detection context. Namely, it was recently observed that the distribution of points' reverse-neighbor counts becomes skewed in high dimensions, resulting in the phenomenon known as hubness. We provide insight into how some points (antihubs) appear very infrequently in k-NN lists of other points, and explain the connection between antihubs, outliers, and existing unsupervised outlier-detection methods. 
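A minimal sketch of the reverse-nearest-neighbor idea discussed above: points that appear in few other points' k-NN lists (antihubs) receive high outlier scores. Scoring by inverse in-degree is one illustrative choice, not the precise formulation evaluated in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def antihub_scores(X, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # column 0 is each point itself
    counts = np.zeros(len(X))
    for neighbors in idx[:, 1:]:
        counts[neighbors] += 1           # reverse-neighbor (in-degree) counts
    return 1.0 / (1.0 + counts)          # antihubs get the highest scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 10)),   # inlier cluster
               rng.normal(6, 1, (3, 10))])    # three planted outliers
print(antihub_scores(X).argsort()[-3:])       # indices of the strongest outliers
```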
By evaluating the classic k-NN method, the angle-based technique designed for high-dimensional data, the density-based local outlier factor and influenced outlierness methods, and antihub-based methods on various synthetic and real-world data sets, we offer novel insight into the usefulness of reverse neighbor counts in unsupervised outlier detection."} {"_id": "76a92112bceaf9c77acb2a77245ee2cfa478bddd", "title": "CONCEPTUAL REQUIREMENTS FOR THE AUTOMATIC RECONSTRUCTION OF BUILDING INFORMATION MODELS FROM UNINTERPRETED 3D MODELS", "text": "A multitude of new applications is quickly emerging in the field of Building Information Models (BIM). BIM models describe buildings with respect to their spatial and especially semantic and thematic characteristics. Since BIM models are manually created during the planning and construction phase, they are only available for newly planned or recently constructed buildings. In order to apply the new applications to already existing buildings, methods for the acquisition of BIM models for built-up sites are required. Primary data sources are 3D geometry models obtained from surveying, CAD, or computer graphics. Automation of this process is highly desirable, but faces a range of specific problems setting the bar very high for a reconstruction process. This paper discusses these problems and identifies consequential requirements on reconstruction methods. Moreover, a two-step strategy for BIM model reconstruction is proposed which incorporates CityGML as an intermediate layer between 3D graphics models and IFC/BIM models."} {"_id": "50cc848978797112a27d124af57991321f545069", "title": "Based Emotion Recognition : A Survey", "text": "Human-computer interaction is a powerful and highly current area of research because the human world is becoming increasingly digitized. This requires digital systems to imitate human behaviour correctly. Emotion is one aspect of human behaviour that plays an important role in human-computer interaction; computer interfaces need to recognize the emotion of their users in order to exhibit truly intelligent behaviour. Humans express emotion in the form of facial expressions, speech, and written text. Every day, massive amounts of textual data are gathered on the Internet, in blogs, social media, and elsewhere. This text presents a challenging style, as it is composed of both plain text and short messaging language. This paper mainly focuses on an overview of emotion detection from text and describes emotion detection methods. These methods are divided into the following four main categories: keyword-based, lexical-affinity-based, learning-based, and hybrid approaches. Limitations of these emotion recognition methods are presented in this paper, which also addresses text normalization using different handling techniques for both plain text and short messaging language."} {"_id": "2e890b62603b25038a36d01ef6b64a7be6160282", "title": "Efficient Planning in Non-Gaussian Belief Spaces and Its Application to Robot Grasping", "text": "The limited nature of robot sensors makes many important robotics problems partially observable. These problems may require the system to perform complex information-gathering operations. One approach to solving these problems is to create plans in belief-space, the space of probability distributions over the underlying state of the system. The belief-space plan encodes a strategy for performing a task while gaining information as necessary. 
Most approaches to belief-space planning rely upon representing belief state in a particular way (typically as a Gaussian). Unfortunately, this can lead to large errors between the assumed density representation of belief state and the true belief state. This paper proposes a new sample-based approach to belief-space planning that has fixed computational complexity while allowing arbitrary implementations of Bayes filtering to be used to track belief state. The approach is illustrated in the context of a simple example and compared to a prior approach. Then, we propose an application of the technique to an instance of the grasp synthesis problem where a robot must simultaneously localize and grasp an object given initially uncertain object parameters by planning information-gathering behavior. Experimental results are presented that demonstrate the approach to be capable of actively localizing and grasping boxes that are presented to the robot in uncertain and hard-to-localize configurations."} {"_id": "b032fdfcbf657377d875fe9b1633c7b374239a13", "title": "A model transformation from NL to SBVR", "text": "In requirements engineering, requirements are usually written as sentences of natural language, and natural languages are ambiguous and inconsistent, so requirements written in natural language also tend to be ambiguous. To avoid this problem of ambiguity, we present a model transformation approach to generate requirements based on SBVR (Semantics of Business Vocabulary and Business Rules). The information provided in the source metamodel (NL) is automatically transformed into the target metamodel (SBVR). The SBVR metamodel can not only be processed by machine but also provides a precise and reliable model for software design. The standard SBVR metamodel is already available, but for natural language we propose our own metamodel because there is no standard metamodel available for natural languages."} {"_id": "02df3d50dbd1d15c38db62ff58a5601ebf815d59", "title": "NLTK: The Natural Language Toolkit", "text": "NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset."} {"_id": "1f6ba0782862ec12a5ec6d7fb608523d55b0c6ba", "title": "Convolutional Neural Networks for Sentence Classification", "text": "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification."} {"_id": "45c56dc268a04c5fc7ca04d7edb985caf2a25093", "title": "Parameter estimation for text analysis", "text": "Presents parameter estimation methods common with discrete probability distributions, which are of particular interest in text modeling. 
Starting with maximum likelihood, maximum a posteriori, and Bayesian estimation, central concepts like conjugate distributions and Bayesian networks are reviewed. As an application, the model of latent Dirichlet allocation (LDA) is explained in detail with a full derivation of an approximate inference algorithm based on Gibbs sampling, including a discussion of Dirichlet hyperparameter estimation. History: version 1: May 2005, version 2.4: August 2008."} {"_id": "9e463eefadbcd336c69270a299666e4104d50159", "title": "A COEFFICIENT OF AGREEMENT FOR NOMINAL SCALES", "text": ""} {"_id": "ccce1cf96f641b3581fba6f4ce2545f4135a15e3", "title": "Least Squares Support Vector Machine Classifiers", "text": "In this letter we discuss a least squares version of support vector machine (SVM) classifiers. Due to equality type constraints in the formulation, the solution follows from solving a set of linear equations, instead of quadratic programming for classical SVMs. The approach is illustrated on a two-spiral benchmark classification problem."} {"_id": "5e120701b01f37d7e68dcc261212c4741d143153", "title": "Vehicle Routing with Drones", "text": "We introduce a package service model where trucks as well as drones can deliver packages. Drones can travel on trucks or fly; but while flying, drones can only carry one package at a time and have to return to a truck to charge after each delivery. We present a heuristic algorithm to solve the problem of finding a good schedule for all drones and trucks. The algorithm is based on two nested local searches; thus, the definition of suitable neighbourhoods of solutions is crucial for the algorithm. Empirical tests show that our algorithm performs significantly better than a natural greedy algorithm. Moreover, the savings compared to solutions without drones turn out to be substantial, suggesting that delivery systems might considerably benefit from using drones in addition to trucks."} {"_id": "0f7be87ca4608975a0a7f5c9e505688649f523c1", "title": "REINVENTING GROUNDED THEORY: SOME QUESTIONS ABOUT THEORY, GROUND AND DISCOVERY", "text": "Grounded theory's popularity persists after three decades of broad-ranging critique. We discuss here three problematic notions \u2013 \u201ctheory,\u201d \u201cground\u201d and \u201cdiscovery\u201d \u2013 which linger in the continuing use and development of grounded theory procedures. We argue that far from providing the epistemic security promised by grounded theory, these notions \u2013 embodied in continuing reinventions of grounded theory \u2013 constrain and distort qualitative inquiry. We argue that what is contrived is not in fact theory in any meaningful sense, that \u201cground\u201d is a misnomer when talking about interpretation and that what ultimately materializes following grounded theory procedures is less like discovery and more akin to invention. The procedures admittedly provide signposts for qualitative inquirers, but educational researchers should be wary, for the significance of interpretation, narrative and reflection can be undermined in the procedures of grounded theory."} {"_id": "d3e0ba556d1d1108aa823e7234b5bdd81a668e48", "title": "Ways of Asking and Replying in Duplicate Question Detection", "text": "This paper presents the results of systematic experimentation on the impact of different types of questions on duplicate question detection, across both a number of established approaches and a novel, superior one used to address this language processing task. 
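A compact sketch of the least-squares SVM formulation summarized above: the equality constraints reduce training to one linear system, [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1] with Omega_kl = y_k y_l K(x_k, x_l), solved here with an ordinary linear solver over an RBF kernel (the kernel choice and hyperparameters are illustrative assumptions).

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM linear system; returns bias b and multipliers alpha."""
    n = len(y)
    omega = rbf_kernel(X, X, sigma) * np.outer(y, y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]

def lssvm_predict(X, Xtr, ytr, b, alpha, sigma=1.0):
    return np.sign(rbf_kernel(X, Xtr, sigma) @ (alpha * ytr) + b)

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, X, y, b, alpha))   # should reproduce the labels
```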
This study makes it possible to gain novel insight into the different levels of robustness of the various detection methods under different conditions of application, including ones that approximate real usage scenarios."} {"_id": "c938af3d216d047c90b485530d70699568dc4c49", "title": "Exploitation of Dual-Polarization Diversity for 5G Millimeter-Wave MIMO Beamforming Systems", "text": "For the first time, to the best of our knowledge, this paper reports a complete millimeter wave (mmWave) wireless system, which coherently utilizes polarization diversity, multiple-input multiple-output (MIMO), and beamforming technologies applicable to upcoming fifth generation wireless mobile terminals. Using an interdisciplinary approach across the entire stack consisting of antennas, RF architecture, and digital modem, the polarization of the presented mmWave beamforming antenna system dynamically adapts to the channel environments and MIMO transmission modes. Empirical investigations, conducted in real-life environments, ascertain that implementing switchable dual-polarized array antennas can alleviate the complexity of the beam searching procedure and MIMO channelization. In addition, experimental results acquired from intensive system level tests indicate more than 22 dB averaged cross-polarization discrimination under polarized MIMO channel conditions at the 60 GHz band. The beam selection algorithm, which can easily be implemented based on a power metric, is confirmed to achieve 99.5% accuracy in comparison with the optimal selection algorithm derived from the theoretical maximal capacity."} {"_id": "b1a04ad062d980237523cc3502926130e05d2c75", "title": "The Overview of Research on Microgrid Protection Development", "text": "In general, a microgrid can operate in both grid-connected mode and islanded mode, which challenges the traditional over-current protection scheme in the distribution network. Novel research and development on protection, covering both the microgrid and the distribution network, is comprehensively analyzed in this paper, which mainly focuses on two aspects. The first is the impact of fault current characteristics on network protection while the microgrid is connected. The second is a discussion of several innovative protection schemes for the network and the microgrid. It is essential to improve traditional protection schemes by using communication techniques. In order to protect the microgrid in islanded mode, two innovative voltage protection schemes are discussed. However, the differential current protection scheme is possibly still the most effective solution for both the network and the microgrid."} {"_id": "d08022c927cbc8d52d6cb58d8e868d3b7df1c35d", "title": "An improved anomaly detection and diagnosis framework for mobile network operators", "text": "The ever increasing complexity of commercial mobile networks drives the need for methods capable of reducing the human workload required for network troubleshooting. In order to address this issue, several attempts have been made to develop automatic anomaly detection and diagnosis frameworks for mobile network operators. In this paper, the latest improvements introduced to one of those frameworks are discussed, including more sophisticated profiling and detection capabilities. The new algorithms further reduce the need for human intervention related to the proper configuration of the profiling and anomaly detection apparatus. 
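As a toy illustration in the spirit of the profiling-based framework described above (and not its actual algorithms), the sketch below learns a per-hour profile of a network KPI from history and flags new samples deviating by more than k standard deviations from the profiled mean. Bucketing by hour of day and the 3-sigma rule are assumptions made for the example.

```python
import numpy as np

def build_profile(history):
    """history: iterable of (hour_of_day, kpi_value) pairs."""
    buckets = {h: [] for h in range(24)}
    for hour, value in history:
        buckets[hour].append(value)
    return {h: (np.mean(v), np.std(v) + 1e-9)   # guard against zero spread
            for h, v in buckets.items() if v}

def is_anomalous(profile, hour, value, k=3.0):
    mean, std = profile[hour]
    return abs(value - mean) > k * std

rng = np.random.default_rng(1)
history = [(h, 50 + 10 * rng.standard_normal()) for h in range(24) for _ in range(30)]
profile = build_profile(history)
print(is_anomalous(profile, hour=12, value=52))   # typical load -> False
print(is_anomalous(profile, hour=12, value=120))  # strong deviation -> True
```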
The main concepts of the new approach are described and illustrated with an explanatory showcase featuring performance data from a live 3G network."} {"_id": "63132196af23969531569d5866fa3cda5b8f7a11", "title": "DFuzzy: a deep learning-based fuzzy clustering model for large graphs", "text": "Graph clustering is successfully applied in various applications for finding similar patterns. Recently, deep learning-based autoencoders have been used efficiently for detecting disjoint clusters. However, in real-world graphs, vertices may belong to multiple clusters. Thus, it is necessary to analyze the membership of vertices with respect to clusters. Furthermore, existing approaches are centralized and are inefficient in handling large graphs. In this paper, a deep learning-based model \u2018DFuzzy\u2019 is proposed for finding fuzzy clusters from large graphs in a distributed environment. It performs clustering in three phases. In the first phase, pre-training is performed by initializing the candidate cluster centers. Then, fine-tuning is performed to learn the latent representations by mining the local information and capturing the structure using PageRank. Further, modularity is used to redefine clusters. In the last phase, the reconstruction error is minimized and the final cluster centers are updated. Experiments are performed over real-life graph data, and the performance of DFuzzy is compared with four state-of-the-art clustering algorithms. Results show that DFuzzy scales up linearly to handle large graphs and produces better-quality clusters when compared to state-of-the-art clustering algorithms. It is also observed that deep structures can help in getting better graph representations and provide improved clustering performance."} {"_id": "cc51a87552b8e34a6562cc2c588fc0744e3aced6", "title": "A recommender system based on historical usage data for web service discovery", "text": "The tremendous growth in the number of available web services has prompted many researchers to propose recommender systems to help users discover services. Most of the proposed solutions analyze query strings and web service descriptions to generate recommendations. However, these text-based recommendation approaches depend mainly on the user's perspective, language, and notation, which easily decreases recommendation efficiency. In this paper, we present an approach that takes into account historical usage data instead of text-based analysis. We apply a collaborative filtering technique to users' interactions. We propose and implement four algorithms to validate our approach. We also provide evaluation methods based on precision and recall in order to assess the efficiency of our algorithms."} {"_id": "430d599e07e94fbae71b698a4098a2d615ab5608", "title": "Detection of leukemia in human blood sample based on microscopic images: A study", "text": "At the moment, identification of blood disorders is performed through visual inspection of microscopic images of blood cells. Identification of blood disorders can lead to the classification of certain blood-related diseases. This paper describes a preliminary study on developing detection of leukemia types using microscopic blood sample images. Analyzing images is very important, as diseases can be detected and diagnosed from images at an earlier stage. From there, further actions like controlling, monitoring and prevention of diseases can be taken. Images are used as they are cheap and do not require expensive testing and lab equipment. 
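A bare-bones user-based collaborative filter over historical usage data, in the spirit of the web service recommender described above (the paper's four algorithms are not spelled out in the abstract; this is one generic variant). R[u, s] = 1 means user u has invoked service s.

```python
import numpy as np

def recommend(R, user, top_n=3):
    norms = np.linalg.norm(R, axis=1, keepdims=True) + 1e-9
    sims = (R / norms) @ (R[user] / norms[user])   # cosine similarity to `user`
    sims[user] = 0.0                               # ignore self-similarity
    scores = sims @ R                              # similarity-weighted votes
    scores[R[user] > 0] = -np.inf                  # hide already-used services
    return np.argsort(scores)[::-1][:top_n]

R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
print(recommend(R, user=0, top_n=2))   # services 2 and 3 are the candidates
```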
The system focuses on a white blood cell disease, leukemia. The system uses features in microscopic images and examines changes in texture, geometry, color and statistical properties. Changes in these features are used as classifier input. A literature review has been done, and Reinforcement Learning is proposed to classify types of leukemia. A brief discussion of the issues faced by researchers is also provided."} {"_id": "dffc86f4646c71c2c67278ac55aaf67db5a55b21", "title": "A Surgical Palpation Probe With 6-Axis Force/Torque Sensing Capability for Minimally Invasive Surgery", "text": "A novel surgical palpation probe incorporating a miniature 6-axis force/torque (F/T) sensor is presented for robot-assisted minimally invasive surgery. The 6-axis F/T sensor is developed by using the capacitive transduction principle and a novel capacitance sensing method. The sensor consists of only three parts, namely a sensing printed circuit board, a deformable part, and a base part. The simple configuration leads to simpler manufacturing and assembly processes in conjunction with high durability and low weight. In this study, a surgical instrument equipped with the surgical palpation probe is implemented. The 6-axis F/T sensing capability of the probe has been experimentally validated by comparing it with a reference 6-axis F/T sensor. Finally, a tissue palpation task is performed in a simulated surgical environment with an animal organ and a relatively hard simulated cancer buried under the surface of the organ."} {"_id": "7796de8efb1866ffa45e7f8b6a7305cb8b582144", "title": "A review of essential standards and patent landscapes for the Internet of Things: A key enabler for Industry 4.0", "text": "This paper is a formal overview of standards and patents for the Internet of Things (IoT) as a key enabler for the next generation of advanced manufacturing, referred to as Industry 4.0 (I 4.0). IoT at the fundamental level is a means of connecting physical objects to the Internet as a ubiquitous network that enables objects to collect and exchange information. The manufacturing industry is seeking versatile manufacturing service provisions to overcome shortened product life cycles, increased labor costs, and fluctuating customer needs in competitive marketplaces. This paper depicts a systematic approach to reviewing IoT technology standards and patents. The thorough analysis and overview include the essential standard landscape and the patent landscape based on the governing standards organizations for America, Europe and China, where most global manufacturing bases are located. The literature on emerging IoT standards from the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Guobiao standards (GB), and global patents issued in the US, Europe, China and the World Intellectual Property Organization (WIPO), are systematically presented in this study."} {"_id": "0b1942ebdcde63ad10540fd9670661f421b44f26", "title": "TanDEM-X: A Satellite Formation for High-Resolution SAR Interferometry", "text": "TanDEM-X (TerraSAR-X add-on for digital elevation measurements) is an innovative spaceborne radar interferometer that is based on two TerraSAR-X radar satellites flying in close formation. The primary objective of the TanDEM-X mission is the generation of a consistent global digital elevation model (DEM) with an unprecedented accuracy, equaling or surpassing the HRTI-3 specification. 
Beyond that, TanDEM-X provides a highly reconfigurable platform for the demonstration of new radar imaging techniques and applications. This paper gives a detailed overview of the TanDEM-X mission concept, which is based on the systematic combination of several innovative technologies. The key elements are the bistatic data acquisition employing an innovative phase synchronization link, a novel satellite formation flying concept allowing for the collection of bistatic data with short along-track baselines, as well as the use of new interferometric modes for system verification and DEM calibration. The interferometric performance is analyzed in detail, taking into account the peculiarities of the bistatic operation. Based on this analysis, an optimized DEM data acquisition plan is derived which employs the combination of multiple data takes with different baselines. Finally, a collection of instructive examples illustrates the capabilities of TanDEM-X for the development and demonstration of new remote sensing applications."} {"_id": "15ca31fb78fa69f02aaad5d6ad2e5eb61088dddf", "title": "Human-Computer Interaction for Complex Pattern Recognition Problems", "text": "We review some applications of human-computer interaction that alleviate the complexity of visual recognition by partitioning it into human and machine tasks to exploit the differences between human and machine capabilities. Human involvement offers advantages, both in the design of automated pattern classification systems, and at the operational level of some image retrieval and classification tasks. Recent development of interactive systems has benefited from the convergence of computer vision and psychophysics in formulating visual tasks as computational processes. Computer-aided classifier design and exploratory data analysis are already well established in pattern recognition and machine learning, but interfaces and functionality are improving. On the operational side, earlier recognition systems made use of human talent only in preprocessing and in coping with rejects. Most current content-based image retrieval systems make use of relevance feedback without direct image interaction. In contrast, some visual object classification systems can exploit such interaction. They require, however, a domain-specific visible model that makes sense to both human and computer."} {"_id": "b353e8b6a976422d1200ed29d8c6ab01f0c0cc3d", "title": "Toward an integrated knowledge discovery and data mining process model", "text": "Enterprise decision making is continuously transforming in the wake of ever increasing amounts of data. Organizations are collecting massive amounts of data in their quest for knowledge nuggets in the form of novel, interesting, understandable patterns that underlie these data. The search for knowledge is a multi-step process comprising various phases, including development of domain (business) understanding, data understanding, data preparation, modeling, evaluation and, ultimately, the deployment of the discovered knowledge. These phases are represented in the form of Knowledge Discovery and Data Mining (KDDM) Process Models that are meant to provide explicit support towards execution of the complex and iterative knowledge discovery process. 
Review of existing KDDM process models reveals that they have certain limitations (fragmented design, only a checklist-type description of tasks, lack of support towards execution of tasks, especially those of the business understanding phase, etc.) which are likely to affect the efficiency and effectiveness with which KDDM projects are currently carried out. This dissertation addresses the various identified limitations of existing KDDM process models through an improved model (named the Integrated Knowledge Discovery and Data Mining Process Model) which presents an integrated view of the KDDM process and provides explicit support towards execution of each one of the tasks outlined in the model. We also evaluate the effectiveness and efficiency offered by the IKDDM model against CRISP-DM, a leading KDDM process model, in aiding data mining users to execute various tasks of the KDDM process. Results of statistical tests"} {"_id": "4c747fc7765b543a06f8e880c8a63a1aebf75ecf", "title": "CryptoDL : Towards Deep Learning over Encrypted Data", "text": "With the increasing growth of cloud services, machine learning services can be run on cloud providers\u2019 infrastructure, where training and deploying machine learning models are performed on cloud servers. However, machine learning solutions require access to the raw data, which can create potential security and privacy risks. In this work, we take the first steps towards developing a theoretical foundation for implementing deep learning algorithms in the encrypted domain and propose a method to adopt deep neural networks (NN) within the practical limitations of current homomorphic encryption schemes. We first design two methods for approximation of activation functions with low degree polynomials. Then, we train NNs with the generated polynomials and analyze the performance of the trained models. Finally, we run the low degree polynomials over encrypted values to estimate the computation costs and time."} {"_id": "ab27ddbbed5a7bd221de44355e3a058b5ea1b14f", "title": "Exploiting Topic based Twitter Sentiment for Stock Prediction", "text": "\u2022 The Web has seen a tremendous rise in social media. \u2022 People\u2019s experiences and opinions expressed in social media (e.g., Twitter, Facebook) contain rich social signals (e.g., about consumer sentiment, stock market, economic conditions, etc.). \u2022 The goal of this paper is to exploit public sentiment for stock market prediction. \u2022 Topic-based sentiment is an important factor in stock prediction because it reflects public opinions about current issues/topics. \u2022 We employ a Bayesian non-parametric topic-based sentiment time series approach learned from streaming tweets to predict the S&P100 Stock Index."} {"_id": "f9ad9115dac88872fb25a633fda3d87f547f75b0", "title": "Intelligent Maze Solving Robot Based on Image Processing and Graph Theory Algorithms", "text": "The most important task for a maze solving robot is the fast and reliable finding of its shortest path from its initial point to its final destination point. This paper proposes an intelligent maze solving robot that can determine its shortest path on a line maze based on image processing and artificial intelligence algorithms. 
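To make the activation-approximation step of the encrypted-inference work above concrete, here is a small sketch: fit a low-degree polynomial to a nonlinearity over an interval so that a homomorphic scheme, which supports only additions and multiplications, can evaluate it. Least-squares fitting on a grid is one possible choice; the paper's two approximation methods are not reproduced here.

```python
import numpy as np

x = np.linspace(-4, 4, 400)              # interval where activations are expected
relu = np.maximum(0, x)

coeffs = np.polyfit(x, relu, deg=3)      # degree-3 least-squares polynomial
poly = np.poly1d(coeffs)

print(np.max(np.abs(poly(x) - relu)))    # worst-case error on the interval
```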
The image of the line maze is captured by a camera and sent to the computer to be analyzed and processed by a program developed using Visual C++ and the OpenCV libraries and based on graph theory algorithms. The developed program solves the captured maze by examining all possible paths in the maze that could convey the robot to the required destination point. After that, the shortest path is determined, and the instructions that guide the car-like robot to its desired destination point are sent to the robot through Bluetooth. The robot follows the received guide path to reach its destination. The proposed approach works faster than traditional methods, which push the robot to move through the maze cell by cell in order to find its destination point. Moreover, the proposed method allows the maze solving robot to avoid becoming trapped in infinite loops. Applications of maze solving systems include intelligent traffic control that helps ambulances, firefighters, or rescue robots find their shortest path to their destination."} {"_id": "ea99f5d06f501f86de266534d420f6fdaa4f2112", "title": "Monte-Carlo Tree Search for the Game of \"7 Wonders\"", "text": "The Monte-Carlo Tree Search algorithm, in particular with the Upper Confidence Bounds formula, has provided huge improvements for AI in numerous games, particularly in Go, Hex, Havannah, Amazons and Breakthrough. In this work we study this algorithm on a more complex game, the game of \u201c7 Wonders\u201d. This card game gathers together several known difficult features, such as hidden information, N-player and stochasticity. It also includes an inter-player trading system that induces a combinatorial search to decide which decisions are legal. Moreover, it is difficult to hand-craft an efficient evaluation function since the card values are heavily dependent upon the stage of the game and upon the other players' decisions. We show that, in spite of the fact that \u201c7 Wonders\u201d is apparently not so related to abstract games, a lot of known results still hold."} {"_id": "2188276d512c11271ac08bf72994cbec5d7468d3", "title": "Collaboration, creativity and the co-construction of oral and written texts", "text": "In this paper we explore how primary school children 'learn to collaborate' and 'collaborate to learn' on creative writing projects by using diverse cultural artefacts, including oracy, literacy and ICT. We begin by reviewing some key socio-cultural concepts which serve as a theoretical framework for the research reported. Secondly, we describe the context in which the children talked and worked together to create their projects. This context is a 'learning community' developed as part of an innovative educational programme with the aim of promoting the social construction of knowledge among all participants. 
We then present microgenetic analyses of the quality of the interaction and dialogues taking place as peers worked together on their projects, and of how these collaborative processes and uses of the mediational artefacts were taken up by the children. In order to exemplify these processes, our analyses centre on a selection of examples of dialogues, texts and multimedia products of stories created by groups of 4th grade (9-10 year-old) children. Overall, the work reveals the dynamic functioning in educational settings of some central socio-cultural concepts. These include: co-construction; intertextuality and intercontextuality amongst oracy, literacy and uses of ICT; collaborative creativity; development of dialogical and text production strategies; and appropriation of diverse cultural artefacts for knowledge construction."} {"_id": "1311ccef82e70126bc0f9d69084030f19c79eac2", "title": "A comparative analysis of common YouTube comment spam filtering techniques", "text": "Ever since its development in 2005, YouTube has been providing a vital social media platform for video sharing. Unfortunately, YouTube users may have malicious intentions, such as disseminating malware and profanity. One way to do so is through the comment field. Although YouTube provides a built-in tool for spam control, it is insufficient for combating malicious and spam content within the comments. In this paper, a comparative study of the common filtering techniques used for YouTube comment spam is conducted. The study deploys datasets extracted from YouTube using its Data API. According to the obtained results, high filtering accuracy (more than 98%) can be achieved with low-complexity algorithms, implying the possibility of developing a suitable browser extension to alleviate comment spam on YouTube in the future."} {"_id": "1df4dba907cd6afadb172d9c454c9978f5150bf7", "title": "A Novel Method for Comparative Analysis of DNA Sequences by Ramanujan-Fourier Transform", "text": "Alignment-free sequence analysis approaches provide important alternatives to multiple sequence alignment (MSA) in biological sequence analysis because alignment-free approaches have low computation complexity and do not depend on a high level of sequence identity. However, most of the existing alignment-free methods do not employ the true full information content of sequences and thus cannot accurately reveal similarities and differences among DNA sequences. We present a novel alignment-free computational method for sequence analysis based on the Ramanujan-Fourier transform (RFT), in which the complete information of DNA sequences is retained. We represent DNA sequences as four binary indicator sequences and apply the RFT to the indicator sequences to convert them into the frequency domain. The Euclidean distance between the complete RFT coefficients of DNA sequences is used as the similarity measure. To address the different lengths of RFT coefficients in Euclidean space, we pad zeros to short DNA binary sequences so that all binary sequences match the length of the longest sequence being compared. Thus, the DNA sequences are compared in the same dimensional frequency space without information loss. We demonstrate the usefulness of the proposed method by presenting experimental results on hierarchical clustering of genes and genomes. 
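To make the pipeline just described concrete, here is a hedged sketch of the RFT-based comparison: binary indicator sequences per base, Ramanujan-Fourier coefficients via Carmichael's projection formula, and a Euclidean distance between the stacked coefficient vectors. The normalization and the number of coefficients Q are illustrative choices, not the paper's exact settings.

```python
# Sketch of alignment-free DNA comparison via the Ramanujan-Fourier transform.
from math import gcd
import numpy as np

def mobius(n):
    # Moebius function via trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def totient(q):
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

def ramanujan_sum(q, n):
    # c_q(n) = sum over d | gcd(n, q) of mobius(q/d) * d
    g = gcd(n, q)
    return sum(mobius(q // d) * d for d in range(1, g + 1) if g % d == 0)

def rft_coefficients(x, Q):
    # f_q = (1 / (phi(q) * N)) * sum_n x(n) c_q(n)  (Carmichael projection)
    N = len(x)
    return np.array([
        sum(x[n - 1] * ramanujan_sum(q, n) for n in range(1, N + 1))
        / (totient(q) * N)
        for q in range(1, Q + 1)])

def dna_feature(seq, length, Q=16):
    # Four zero-padded binary indicator sequences, one per base.
    feats = []
    for base in "ACGT":
        x = np.zeros(length)
        for i, ch in enumerate(seq):
            if ch == base:
                x[i] = 1.0
        feats.append(rft_coefficients(x, Q))
    return np.concatenate(feats)

a, b = "ACGTACGTAC", "ACGTTCGTAC"
L = max(len(a), len(b))
print(np.linalg.norm(dna_feature(a, L) - dna_feature(b, L)))
```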
The proposed method opens a new channel to biological sequence analysis, classification, and structural module identification."} {"_id": "1bc8a129675fbbb651d0a23f827ab867165088f9", "title": "Safe Exploration of State and Action Spaces in Reinforcement Learning", "text": "In this paper, we consider the important problem of safe exploration in reinforcement learning. While reinforcement learning is well-suited to domains with complex transition dynamics and high-dimensional state-action spaces, an additional challenge is posed by the need for safe and efficient exploration. Traditional exploration techniques are not particularly useful for solving dangerous tasks, where the trial and error process may lead to the selection of actions whose execution in some states may result in damage to the learning system (or any other system). Consequently, when an agent begins an interaction with a dangerous and high-dimensional state-action space, an important question arises; namely, that of how to avoid (or at least minimize) damage caused by the exploration of the state-action space. We introduce the PI-SRL algorithm, which safely improves suboptimal albeit robust behaviors for continuous state and action control tasks and which efficiently learns from the experience gained from the environment. We evaluate the proposed method in four complex tasks: automatic car parking, pole-balancing, helicopter hovering, and business management."} {"_id": "bb3a3836652c8a581e0ce92fdb3a7cd884efec40", "title": "Optical see-through head up displays' effect on depth judgments of real world objects", "text": "Recent research indicates that users consistently underestimate the depth of Augmented Reality (AR) graphics when viewed through optical see-through displays. However, to our knowledge, little work has examined how AR graphics may affect depth judgments of real world objects that have been overlaid or annotated with AR graphics. This study begins a preliminary analysis of whether AR graphics have directional effects on users' depth perception of real-world objects, as might be experienced in vehicle driving scenarios (e.g., as viewed via an optical see-through head-up display or HUD). Twenty-four participants were asked to judge the depth of a physical pedestrian proxy figure moving towards them at a constant rate of 1 meter/second. Participants were shown an initial target location that varied in distance from 11 to 20 m and were then asked to press a button to indicate when the moving target was perceived to be at the previously specified target location. Each participant experienced three different display conditions: no AR visual display (control), a conformal AR graphic overlaid on the pedestrian via a HUD, and the same graphic presented on a tablet physically located on the pedestrian. Participants completed 10 trials (one for each target distance between 11 and 20 m inclusive) per display condition for a total of 30 trials per participant. The judged distance from the correct location was recorded, and after each trial, participants' confidence in determining the correct distance was captured. Across all conditions, participants underestimated the distance of the physical object, consistent with existing literature. Greater variability was observed in the accuracy of distance judgments under the AR HUD condition relative to the other two display conditions. 
In addition, participant confidence levels were considerably lower in the AR HUD condition."} {"_id": "2f201c77e7ccdf1f37115e16accac3486a65c03d", "title": "Stochastic Activation Pruning for Robust Adversarial Defense", "text": "Neural networks have been found vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory. We cast the problem as a minimax zero-sum game between the adversary and the model. For such games, it is in general optimal for players to use a stochastic policy, also known as a mixed strategy. In light of this, we propose a mixed strategy for the model: stochastic activation pruning. SAP prunes a random subset of activations (preferentially pruning activations with smaller magnitude) and scales up the survivors to compensate. SAP can be applied to pre-trained neural networks\u2014even adversarially trained models\u2014without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness in the adversarial setting, increasing accuracy whilst preserving calibration."} {"_id": "0eb329b3291944ee358fcbad6d26a2e111addd6b", "title": "Restructuring of deep neural network acoustic models with singular value decomposition", "text": "The recently proposed deep neural network (DNN) obtains significant accuracy improvements in many large vocabulary continuous speech recognition (LVCSR) tasks. However, a DNN requires many more parameters than traditional systems, which incurs a huge cost during online evaluation and also limits the application of DNNs in many scenarios. In this paper we present our new effort on DNNs, aiming at reducing the model size while keeping the accuracy improvements. We apply singular value decomposition (SVD) to the weight matrices in the DNN and then restructure the model based on the inherent sparseness of the original matrices. After restructuring, we can reduce the DNN model size significantly with negligible accuracy loss. We also fine-tune the restructured model using the regular back-propagation method to recover the accuracy when the DNN model size is reduced heavily. The proposed method has been evaluated on two LVCSR tasks with a context-dependent DNN hidden Markov model (CD-DNN-HMM). Experimental results show that the proposed approach dramatically reduces the DNN model size by more than 80% without losing any accuracy."} {"_id": "40ea042765e037628bb2728e34f9800c24f93600", "title": "Deep neural networks for small footprint text-dependent speaker verification", "text": "In this paper we investigate the use of deep neural networks (DNNs) for a small footprint text-dependent speaker verification task. At the development stage, a DNN is trained to classify speakers at the frame level. During speaker enrollment, the trained DNN is used to extract speaker-specific features from the last hidden layer. The average of these speaker features, or d-vector, is taken as the speaker model. At the evaluation stage, a d-vector is extracted for each utterance and compared to the enrolled speaker model to make a verification decision. Experimental results show the DNN based speaker verification system achieves good performance compared to a popular i-vector system on a small footprint text-dependent speaker verification task. 
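A hedged sketch of the d-vector pipeline just described: average the final hidden-layer activations over frames, length-normalize, and score against an enrolled model by cosine similarity. The random projection standing in for the trained DNN, and all dimensions, are assumptions for illustration only.

```python
# d-vector enrollment and scoring sketch (numpy only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((40, 256))        # stand-in for trained DNN weights

def last_hidden(frames):
    # frames: (num_frames, 40) acoustic features -> (num_frames, 256) activations
    return np.maximum(frames @ W, 0.0)    # ReLU hidden layer

def d_vector(frames):
    v = last_hidden(frames).mean(axis=0)  # average over all frames
    return v / np.linalg.norm(v)          # length-normalize

def verify(enrolled, test_frames, threshold=0.5):
    score = float(enrolled @ d_vector(test_frames))   # cosine similarity
    return score, score > threshold

# Enrollment: average the d-vectors of a few enrollment utterances.
enroll_utts = [rng.standard_normal((200, 40)) for _ in range(3)]
model = np.mean([d_vector(u) for u in enroll_utts], axis=0)
model /= np.linalg.norm(model)

print(verify(model, rng.standard_normal((150, 40))))
```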
In addition, the DNN based system is more robust to additive noise and outperforms the i-vector system at low False Rejection operating points. Finally, the combined system outperforms the i-vector system by 14% and 25% relative in equal error rate (EER) for clean and noisy conditions, respectively."} {"_id": "5d6fca1c2dc1bb30b2bfcc131ec6e35a16374df8", "title": "An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices", "text": "Detecting and reacting to user behavior and ambient context are core elements of many emerging mobile sensing and Internet-of-Things (IoT) applications. However, extracting accurate inferences from raw sensor data is challenging within the noisy and complex environments where these systems are deployed. Deep learning is one of the most promising approaches for overcoming this challenge and achieving more robust and reliable inference. Techniques developed within this rapidly evolving area of machine learning are now state-of-the-art for many inference tasks (such as audio sensing and computer vision) commonly needed by IoT and wearable applications. But currently deep learning algorithms are seldom used in mobile/IoT class hardware because they often impose debilitating levels of system overhead (e.g., memory, computation and energy). Efforts to address this barrier to deep learning adoption are slowed by our lack of a systematic understanding of how these algorithms behave at inference time on resource constrained hardware. In this paper, we present the first -- albeit preliminary -- measurement study of common deep learning models (such as Convolutional Neural Networks and Deep Neural Networks) on representative mobile and embedded platforms. The aim of this investigation is to begin to build knowledge of the performance characteristics, resource requirements and the execution bottlenecks for deep learning models when being used to recognize categories of behavior and context. The results and insights of this study lay an empirical foundation for the development of optimization methods and execution environments that enable deep learning to be more readily integrated into next-generation IoT, smartphones and wearable systems."} {"_id": "5f274d318b1fac66a01a5ed31f4433de5e1fa032", "title": "EmotionSense: a mobile phones based adaptive platform for experimental social psychology research", "text": "Today's mobile phones represent a rich and powerful computing platform, given their sensing, processing and communication capabilities. Phones are also part of the everyday life of billions of people, and therefore represent an exceptionally suitable tool for conducting social and psychological experiments in an unobtrusive way.\nIn this paper we present EmotionSense, a mobile sensing platform for experimental social psychology research. Its key characteristics include the ability to sense individual emotions as well as activities, verbal and proximity interactions among members of social groups. Moreover, the system is programmable by means of a declarative language that can be used to express adaptive rules to improve power saving. We evaluate a system prototype on Nokia Symbian phones by means of several small-scale experiments aimed at testing performance in terms of accuracy and power consumption. Finally, we present the results of a real deployment where we study participants' emotions and interactions. We cross-validate our measurements with the results obtained through questionnaires filled in by the users, and with the results presented in social psychological studies using traditional methods. 
In particular, we show how speakers' and participants' emotions can be automatically detected by means of classifiers running locally on off-the-shelf mobile phones, and how speaking and interactions can be correlated with activity and location measures."} {"_id": "4b93cb61f17f7ee722dac1c68020d97a1699ace3", "title": "Multimodal Deep Learning for Cervical Dysplasia Diagnosis", "text": "To improve the diagnostic accuracy of cervical dysplasia, it is important to fuse multimodal information collected during a patient\u2019s screening visit. However, current multimodal frameworks suffer from low sensitivity at high specificity levels, due to their limitations in learning correlations among highly heterogeneous modalities. In this paper, we design a deep learning framework for cervical dysplasia diagnosis by leveraging multimodal information. We first employ the convolutional neural network (CNN) to convert the low-level image data into a feature vector fusible with other non-image modalities. We then jointly learn the non-linear correlations among all modalities in a deep neural network. Our multimodal framework is an end-to-end deep network which can learn better complementary features from the image and non-image modalities. It automatically gives the final diagnosis for cervical dysplasia with 87.83% sensitivity at 90% specificity on a large dataset, which significantly outperforms methods using any single source of information alone and previous multimodal frameworks."} {"_id": "8abccc77774967a48c3c0f7fb097297aad011d47", "title": "An End-to-End Framework for Evaluating Surface Reconstruction", "text": "We present a benchmark for the evaluation and comparison of algorithms which reconstruct a surface from point cloud data. Although a substantial amount of effort has been dedicated to the problem of surface reconstruction, a comprehensive means of evaluating this class of algorithms is noticeably absent. We propose a simple pipeline for measuring surface reconstruction algorithms, consisting of three main phases: surface modeling, sampling, and evaluation. We employ implicit surfaces for modeling shapes which are expressive enough to contain details of varying size, in addition to preserving sharp features. From these implicit surfaces, we produce point clouds by synthetically generating range scans which resemble realistic scan data. We validate our synthetic sampling scheme by comparing against scan data produced via a commercial optical laser scanner, wherein we scan a 3D-printed version of the original implicit surface. Last, we perform evaluation by comparing the output reconstructed surface to a dense uniformly-distributed sampling of the implicit surface. We decompose our benchmark into two distinct sets of experiments. The first set of experiments measures reconstruction against point clouds of complex shapes sampled under a wide variety of conditions. Although these experiments are quite useful for the comparison of surface reconstruction algorithms, they lack a fine-grain analysis. Hence, to complement this, the second set of experiments is designed to measure specific properties of surface reconstruction, both from a sampling and surface modeling viewpoint. Together, these experiments depict a detailed examination of the state of surface reconstruction algorithms. 
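The multimodal record above lends itself to a short sketch: a small CNN turns the image into a feature vector, which is concatenated with the non-image features and passed through a joint network. All layer sizes and the non-image dimensionality are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of CNN + non-image feature fusion in PyTorch.
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, non_image_dim=10):
        super().__init__()
        self.cnn = nn.Sequential(                      # image -> 32-dim vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.joint = nn.Sequential(                    # fused -> diagnosis logit
            nn.Linear(32 + non_image_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, image, non_image):
        fused = torch.cat([self.cnn(image), non_image], dim=1)
        return self.joint(fused)

net = MultimodalNet()
logit = net(torch.rand(2, 3, 64, 64), torch.rand(2, 10))
print(torch.sigmoid(logit))                            # probability-like output
```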
"} {"_id": "cd14329ef11aed723d528abb2d48f28e4295d695", "title": "Air-Traffic Complexity Resolution in Multi-Sector Planning", "text": "Using constraint programming, we effectively model and efficiently solve the problem of balancing and minimising the traffic complexities of an airspace of adjacent sectors. The traffic complexity of a sector is here defined in terms of the numbers of flights within it, near its border, and on non-level segments within it. The allowed forms of complexity resolution are the changing of the take-off times of not yet airborne flights, the changing of the remaining approach times into the chosen airspace of already airborne flights by slowing down and speeding up within the two layers of feeder sectors around that airspace, as well as the changing of the levels of passage over way-points in that airspace. Experiments with actual European flight profiles obtained from the Central Flow Management Unit (CFMU) show that these forms of complexity resolution can lead to significant complexity reductions and rebalancing."} {"_id": "968102a9e7329c829e2a5993dc1612ec50a9e4dc", "title": "Underwater image enhancement via dark channel prior and luminance adjustment", "text": "Underwater images are degraded mainly by light scattering and color distortion. Light scattering occurs because the light is reflected and deflected multiple times by the suspended particles in the water before reaching the camera. Color distortion arises because the degree of absorption varies with the light wavelength. As a result, underwater images are dominated by a bluish tone. In this paper, we propose a novel underwater image enhancement approach based on the dark channel prior and luminance adjustment. The dehazed underwater image obtained by the dark channel prior is characterized by low brightness. We therefore utilize a novel luminance adjustment method to enhance image contrast. The enhanced image shows improved global contrast and better preserved image details and edges. Experimental results demonstrate the effectiveness of our method compared to state-of-the-art methods."} {"_id": "8e54c3f24a46be98199a78df6b0ed2c974636fa9", "title": "Real-Time Recognition of Physical Activities and Their Intensities Using Wireless Accelerometers and a Heart Rate Monitor", "text": "In this paper, we present a real-time algorithm for automatic recognition of not only physical activities, but also, in some cases, their intensities, using five triaxial wireless accelerometers and a wireless heart rate monitor. The algorithm has been evaluated using datasets consisting of 30 physical gymnasium activities collected from a total of 21 people at two different labs. On these activities, we have obtained a recognition accuracy performance of 94.6% using subject-dependent training and 56.3% using subject-independent training. The addition of heart rate data improves subject-dependent recognition accuracy only by 1.2% and subject-independent recognition only by 2.1%. When recognizing activity type without differentiating intensity levels, we obtain a subject-independent performance of 80.6%. 
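For the underwater-enhancement record above, a hedged sketch of the dark-channel-prior dehazing step (in He et al.'s classic formulation) may help; the paper's luminance-adjustment stage is not reproduced, and the patch size and omega are typical values rather than the paper's settings.

```python
# Dark channel prior dehazing sketch (numpy + scipy).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # min over the color channels, then min over a local patch
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    dark = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = flat.argsort()[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the normalized image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = np.random.rand(64, 64, 3)   # stand-in for an underwater image in [0, 1]
print(dehaze(hazy).shape)
```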
We discuss why heart rate data has so little discriminatory power."} {"_id": "229dbc8d379f12d15597b17bb0502667b2ef3fa2", "title": "Brief Announcement: Dynamic Determinacy Race Detection for Task Parallelism with Futures", "text": "Existing dynamic determinacy race detectors for task-parallel programs are limited to programs with strict computation graphs, where a task can only wait for its descendant tasks to complete. In this paper, we present the first known determinacy race detector for non-strict computation graphs with futures. The space and time complexity of our algorithm are similar to those of the classical SP-bags algorithm when using only structured parallel constructs such as spawn-sync and async-finish. In the presence of point-to-point synchronization using futures, the complexity of the algorithm increases by a factor determined by the number of future operations, which includes future task creation and future get operations. The experimental results show that the slowdown factor observed for our algorithm relative to the sequential version is in the range of 1.00x to 9.92x, which is very much in line with slowdowns experienced for fully strict computation graphs."} {"_id": "b45a3a8e08b80295fa2913e66c673052b142d158", "title": "Generalized stacking of layerwise-trained Deep Convolutional Neural Networks for document image classification", "text": "This article presents our recent study of a lightweight Deep Convolutional Neural Network (DCNN) architecture for document image classification. Here, we concentrated on training a committee of generalized, compact and powerful base DCNNs. A support vector machine (SVM) is used to combine the outputs of the individual DCNNs. The main novelty of the present study is the introduction of supervised layerwise training of the DCNN architecture in document classification tasks for better initialization of the weights of the individual DCNNs. Each DCNN of the committee is trained on a specific part of the document or on the whole document. Also, we used the principle of generalized stacking for combining the normalized outputs of all the members of the DCNN committee. The proposed document classification strategy has been tested on the well-known Tobacco3482 document image dataset. Results of our experiments show that the proposed strategy, involving a considerably smaller network architecture, can produce document classification accuracies comparable with state-of-the-art architectures, making it more suitable for use in comparatively low-configuration mobile devices."} {"_id": "fb6f5cb26395608a3cf0e9c6c618293a4278a8ad", "title": "Facial Image Attributes Transformation via Conditional Recycle Generative Adversarial Networks", "text": "This study introduces a novel conditional recycle generative adversarial network for facial attribute transformation, which can transform high-level semantic face attributes without changing the identity. In our approach, we input a source facial image to the conditional generator with the target attribute condition to generate a face with the target attribute. Then we recycle the generated face back through the same conditional generator with the source attribute condition, generating a face that should match the source face in personal identity and facial attributes. Hence, we introduce a recycle reconstruction loss to enforce the final generated facial image and the source facial image to be identical. Evaluations on the CelebA dataset demonstrate the effectiveness of our approach. 
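A minimal sketch of the recycle reconstruction loss just described: translate to the target attribute, recycle back to the source attribute, and penalize the L1 distance to the original. The toy conditional generator and attribute dimensionality are placeholders; the adversarial and identity terms of the full objective are omitted.

```python
# Recycle reconstruction loss sketch in PyTorch.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, attr_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + attr_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, img, attr):
        # Broadcast the attribute vector over the spatial dimensions.
        a = attr[:, :, None, None].expand(-1, -1, *img.shape[2:])
        return self.net(torch.cat([img, a], dim=1))

G = CondGenerator()
x = torch.rand(4, 3, 64, 64) * 2 - 1        # source images in [-1, 1]
attr_src = torch.rand(4, 8)                 # source attribute condition
attr_tgt = torch.rand(4, 8)                 # target attribute condition

fake = G(x, attr_tgt)                        # translate to the target attribute
recycled = G(fake, attr_src)                 # recycle back to the source
recycle_loss = nn.functional.l1_loss(recycled, x)
print(recycle_loss.item())
```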
Qualitative results show that our approach can learn and generate high-quality identity-preserving facial images with specified attributes."} {"_id": "76264abe68ea6de6c14944990c5e01da1725c860", "title": "Less is more? How demographic sample weights can improve public opinion estimates based on Twitter data", "text": "Twitter data is widely acknowledged to hold great promise for the study of political behavior and public opinion. However, a key limitation in previous studies is the lack of information about the sociodemographic characteristics of individual users, which raises concerns about the validity of inferences based on this source of data. This paper addresses this challenge by employing supervised machine learning methods to estimate the age, gender, race, party affiliation, propensity to vote, and income of any Twitter user in the U.S. The training dataset for these classifiers was obtained by matching a large dataset of 1 billion geolocated Twitter messages with voting registration records and estimates of home values across 15 different states, resulting in a sample of nearly 250,000 Twitter users whose sociodemographic traits are known. To illustrate the value of this approach, I offer three applications that use information about the predicted demographic composition of a random sample of 500,000 U.S. Twitter users. First, I explore how attention to politics varies across demographic groups. Then, I apply multilevel regression and poststratification methods to recover valid estimates of presidential and candidate approval that can serve as early indicators of public opinion changes and thus complement traditional surveys. Finally, I demonstrate the value of Twitter data to study questions that may suffer from social desirability bias. Working paper. This version: September 21, 2016."} {"_id": "8e917a621d15dc37c71be1bdc3b43c41e56557c8", "title": "Effectiveness of Blog Advertising: Impact of Communicator Expertise, Advertising Intent, and Product Involvement", "text": "Blog advertising, which refers to the paid sponsorship of bloggers to review, promote, or sell products in their blog writing, is becoming prevalent. This paper investigates the impact of three critical factors on blog advertising effectiveness: communicator expertise, advertising intent, and product involvement. An experiment with a 2\u00d72\u00d72 factorial design was used to test their interaction effects on advertising effectiveness. The results indicate that, for low-involvement products, there is better advertising effectiveness when low-expertise communicators are explicit about the advertising intent or when high-expertise communicators are implicit about the advertising intent. But for high-involvement products, the results show that when low-expertise communicators are explicit about the advertising intent, the outcome is lower advertising effectiveness. For such products, advertising effectiveness does not differ when high-expertise communicators are implicit or explicit about the advertising intent. 
Based on these results, some implications for further research and practice are given."} {"_id": "f103fabe05026d93bc1033e0e03bf34b4da0932d", "title": "Feature Analysis for Fake Review Detection through Supervised Classification", "text": "Nowadays, review sites are more and more confronted with the spread of misinformation, i.e., opinion spam, which aims at promoting or damaging some target businesses by misleading either human readers or automated opinion mining and sentiment analysis systems. For this reason, in recent years, several data-driven approaches have been proposed to assess the credibility of user-generated content diffused through social media in the form of on-line reviews. Distinct approaches often consider different subsets of characteristics, i.e., features, connected to both reviews and reviewers, as well as to the network structure linking distinct entities on the review site under examination. This article aims at providing an analysis of the main review- and reviewer-centric features that have been proposed up to now in the literature to detect fake reviews, in particular by those approaches that employ supervised machine learning techniques. These solutions generally provide better results than purely unsupervised approaches, which are often based on graph-based methods that consider relational ties in review sites. Furthermore, this work proposes and evaluates some additional new features that can be suitable for classifying genuine and fake reviews. For this purpose, a supervised classifier based on Random Forests has been implemented, considering both well-known and new features, and a large-scale labeled dataset from which all these features have been extracted. The good results obtained show the effectiveness of the new features in detecting, in particular, singleton fake reviews, and in general the utility of this study."} {"_id": "e9ed90f57c55e1229b67628930f37f3b5ed5dc12", "title": "Miniature Continuous Coverage Antenna Array for GNSS Receivers", "text": "This letter presents a miniature conformal array that provides continuous coverage and good axial ratio from 1100-1600 MHz. Concurrently, it maintains greater than 1.5 dBic RHCP gain and return loss less than -10 dB. The four-element array is composed of two-arm slot spirals with lightweight polymer substrate dielectric loading and a novel termination resistor topology. Multiple elements provide the capability to suppress interfering signals common in GNSS applications. The array, including the feeding network, is 3.5\" \u00d7 3.5\" \u00d7 0.8\" in size and fits into the FRPA-3 footprint and radome."} {"_id": "6597eb92249e137c6de9d68013bd683b63234af4", "title": "A Survey on Preprocessing Methods for Web Usage Data", "text": "The World Wide Web is a huge repository of web pages and links. It provides an abundance of information for Internet users. The growth of the web is tremendous, as approximately one million pages are added daily. Users\u2019 accesses are recorded in web logs. Because of the tremendous usage of the web, log files are growing at a fast rate and their size is becoming huge. Web data mining is the application of data mining techniques to web data. Web Usage Mining applies mining techniques to log data to extract the behavior of users, which is used in various applications like personalized services, adaptive web sites, customer profiling, prefetching, and creating attractive web sites. Web usage mining consists of three phases: preprocessing, pattern discovery and pattern analysis. 
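For the fake-review record above, here is an illustrative sketch of the supervised set-up: a Random Forest over hand-crafted review- and reviewer-centric features. The feature columns and synthetic labels are placeholders, not the article's actual feature set or data.

```python
# Random Forest over review/reviewer features (scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns (illustrative): rating deviation, review length,
# reviewer's review count, reviewer's max reviews per day.
X = rng.random((200, 4))
y = rng.integers(0, 2, size=200)          # 1 = fake, 0 = genuine (synthetic)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # chance level on random data
```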
Web log data is usually noisy and ambiguous, so preprocessing is an important step before mining; to discover patterns, sessions must be constructed efficiently. This paper reviews existing work done in the preprocessing stage. A brief overview of various data mining techniques for discovering patterns, and of pattern analysis, is given. Finally, a glimpse of various applications of web usage mining is also presented. Keywords: Data Cleaning, Path Completion, Session Identification, User Identification, Web Log Mining"} {"_id": "4dd87fa846dc344dcce5fb12de283a0d51dfe140", "title": "Exploration-exploitation tradeoff using variance estimates in multi-armed bandits", "text": "Algorithms based on upper confidence bounds for balancing exploration and exploitation are gaining popularity since they are easy to implement, efficient and effective. This paper considers a variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms. In earlier experimental works, such algorithms were found to outperform the competing algorithms. We provide the first analysis of the expected regret for such algorithms. As expected, our results show that the algorithm that uses the variance estimates has a major advantage over its alternatives that do not use such estimates provided that the variances of the payoffs of the suboptimal arms are low. We also prove that the regret concentrates only at a polynomial rate. This holds for all the upper confidence bound based algorithms and for all bandit problems except those special ones where with probability one the payoff obtained by pulling the optimal arm is larger than the expected payoff for the second best arm. Hence, although upper confidence bound bandit algorithms achieve logarithmic expected regret rates, they might not be suitable for a risk-averse decision maker. We illustrate some of the results by computer simulations."} {"_id": "3342018e8defb402896d2133cda0417e49f1e9aa", "title": "Face Verification Across Age Progression", "text": "Human faces undergo considerable amounts of variations with aging. While face recognition systems have been proven to be sensitive to factors such as illumination and pose, their sensitivity to facial aging effects is yet to be studied. How does age progression affect the similarity between a pair of face images of an individual? What is the confidence associated with establishing the identity between a pair of age separated face images? In this paper, we develop a Bayesian age difference classifier that classifies face images of individuals based on age differences and performs face verification across age progression. Further, we study the similarity of faces across age progression. Since age separated face images invariably differ in illumination and pose, we propose preprocessing methods for minimizing such variations. Experimental results using a database comprising pairs of face images that were retrieved from the passports of 465 individuals are presented. 
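For the variance-aware bandit record above, here is a hedged sketch of a UCB-V-style index: the exploration bonus uses the empirical variance, plus a correction term that dominates when an arm has few pulls. Rewards are assumed in [0, 1], and the exploration function log(t) is one common choice, not necessarily the paper's exact tuning.

```python
# Variance-aware UCB (UCB-V-style) on Bernoulli arms.
import math
import random

def ucb_v_index(mean, var, n, t):
    e = math.log(t)                           # exploration function
    return mean + math.sqrt(2 * var * e / n) + 3 * e / n

arms = [0.3, 0.5, 0.55]                       # true means (unknown to the agent)
n = [0] * 3; s = [0.0] * 3; ss = [0.0] * 3    # pulls, sum, sum of squares

random.seed(0)
for t in range(1, 10001):
    if t <= len(arms):
        i = t - 1                             # play each arm once to initialize
    else:
        def index(i):
            mean = s[i] / n[i]
            var = max(ss[i] / n[i] - mean * mean, 0.0)
            return ucb_v_index(mean, var, n[i], t)
        i = max(range(len(arms)), key=index)
    r = float(random.random() < arms[i])      # Bernoulli reward
    n[i] += 1; s[i] += r; ss[i] += r * r

print("pulls per arm:", n)                    # the best arm should dominate
```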
For faces separated by as many as nine years, the verification system attains an equal error rate of 8.5%."} {"_id": "f0bc66a21a03190ce93165085ca313e25be7f1af", "title": "Design, Modeling, and Validation of a Soft Magnetic 3-D Force Sensor", "text": "Recent advances in robotics promise a future where robots co-exist and collaborate with humans in unstructured environments, which will require frequent physical interactions where accurate tactile information will be crucial for performance and safety. This article describes the design, fabrication, modeling, and experimental validation of a soft-bodied tactile sensor that accurately measures the complete 3-D force vector for both normal and shear loading conditions. Our research considers the detection of changes in the magnetic field vector due to the motion of a miniature magnet in a soft substrate to measure normal and shear forces with high accuracy and bandwidth. The proposed sensor is a pyramid-shaped tactile unit with a tri-axis Hall element and a magnet embedded in a silicone rubber substrate. The non-linear mapping between the 3-D force vector and the Hall effect voltages is characterized by training a neural network. We validate the proposed soft force sensor over static and dynamic loading experiments and obtain a mean absolute error below 11.7 mN, or 2.2% of the force range. These results were obtained for a soft force sensor prototype and loading conditions not included in the training process, indicating strong generalization of the model. To demonstrate its utility, the proposed sensor is used in a force-controlled pick-and-place experiment as a proof-of-concept case study."} {"_id": "755050d838b9b27d715c4bf1e8317294011fa5fc", "title": "Traffic engineering techniques for data center networks", "text": "Traffic engineering consists in improving the performance of telecommunication networks, which is evaluated by a large number of criteria. The ultimate objective is to avoid congestion in the network by keeping its links from being overloaded. In large Ethernet networks with thousands of servers, such as data centers, improving the performance of the traditional switching protocols is a crucial but very challenging task due to an explosion in the size of the solution space and the complexity. Thus, exact methods are inappropriate even for reasonably sized networks. Local Search (LS) is a powerful method for solving computational optimization problems such as Vertex Cover, Traveling Salesman, or Boolean Satisfiability. The advantage of LS for these problems is its ability to find an intelligent path from a low-quality solution to a high-quality one in a huge search space. In this thesis, we propose different approximate methods based on Local Search for solving the class of traffic engineering problems in data center networks that implement the Spanning Tree Protocol and the Multiple Spanning Tree Protocol. First, we tackle the minimization of the maximal link utilization in Ethernet networks with one spanning tree. Next, we cope with data center networks containing many spanning trees. We then deal with the minimization of service disruption and the worst-case maximal link utilization in data center networks with many spanning trees. Last, we develop a novel design of multi-objective algorithms for solving the traffic engineering problems in large data centers by taking into account three objectives to be minimized: maximal link utilization, network total load and number of used links. 
Our schemes significantly reduce the size of the search space by freeing the solutions from their dependence on the link cost computation used to obtain an intended spanning tree. We propose efficient incremental techniques to speed up the computation of objective values. Furthermore, our approaches show good results on credible data sets and are evaluated with strong assessment methods."} {"_id": "6b09fb02ace9507f5ff50e87cc7171a81f5b2e5f", "title": "Optimal Expectations and the Welfare Cost of Climate Variability", "text": "Uncertainty about the future is an important determinant of well-being, especially in developing countries where financial markets and other market failures result in ineffective insurance mechanisms. However, separating the effects of future uncertainty from realised events, and then measuring the impact of uncertainty on utility, presents a number of empirical challenges. This paper aims to address these issues and provides supporting evidence to show that increased climate variability (a proxy for future income uncertainty) reduces farmers\u2019 subjective well-being, consistent with the theory of optimal expectations (Brunnermeier & Parker, 2005 AER), using panel data from rural Ethiopia and a new data set containing daily atmospheric parameters. The magnitude of our result indicates that a one standard deviation (7%) increase in climate variability has an equivalent effect on life satisfaction to a two standard deviation (1-2%) decrease in consumption. This effect is one of the largest determinants of life satisfaction in rural Ethiopia. (JEL: C25, D60, I31.)"} {"_id": "b8a3c1c0ebe113ffb61b0066aa91ef03701e5550", "title": "Portable Option Discovery for Automated Learning Transfer in Object-Oriented Markov Decision Processes", "text": "We introduce a novel framework for option discovery and learning transfer in complex domains that are represented as object-oriented Markov decision processes (OO-MDPs) [Diuk et al., 2008]. Our framework, Portable Option Discovery (POD), extends existing option discovery methods, and enables transfer across related but different domains by providing an unsupervised method for finding a mapping between object-oriented domains with different state spaces. The framework also includes heuristic approaches for increasing the efficiency of the mapping process. We present the results of applying POD to Pickett and Barto\u2019s [2002] PolicyBlocks and MacGlashan\u2019s [2013] Option-Based Policy Transfer in two application domains. We show that our approach can discover options effectively, transfer options among different domains, and improve learning performance with low computational overhead."} {"_id": "8e2548e29b45fda7725605b71a21318d86f9b00e", "title": "Semantic component retrieval in software engineering", "text": "In the early days of programming, the concept of subroutines, and through this software reuse, was invented to spare limited hardware resources. Since then, software systems have become increasingly complex, and developing them would not have been possible without reusable software elements such as standard libraries and frameworks. Furthermore, other approaches commonly subsumed under the umbrella of software reuse, such as product lines and design patterns, have become very successful in recent years. 
However, there are still no software component markets available that would make buying software components as simple as buying parts in a do-it-yourself hardware store, and millions of software fragments are still lying un(re)used in configuration management repositories all over the world. The literature primarily blames this on the immense effort required so far to set up and maintain searchable component repositories, and on the weak mechanisms available for retrieving components from them, resulting in a severe usability problem. In order to address these issues within this thesis, we developed a proactive component reuse recommendation system, naturally integrated into test-first development approaches, which is able to propose semantically appropriate, reusable components according to the specification a developer is just working on. We have implemented an appropriate system as a plugin for the well-known Eclipse IDE and demonstrated its usefulness by carrying out a case study from a popular agile development book. Furthermore, we present a precision analysis for our approach and examples of how components can be retrieved based on a simplified semantics description in terms of standard test cases. Summary (translated from German): At the time of the first programming languages, the idea of subroutines, and with it the idea of software reuse, was conceived to save scarce hardware resources. Since then, software systems have become ever more complex, and their development would simply no longer be manageable without further reusable software elements such as libraries and frameworks. Other approaches commonly subsumed under the term software reuse, such as product lines and design patterns, have likewise been very successful in recent years; at the same time, however, there are still no marketplaces that would make buying software components as easy as buying small parts in a do-it-yourself store. Millions of un(re)used software fragments therefore currently lie dormant in configuration management systems all over the world. The literature attributes this primarily to the high effort that has so far been required to build and maintain searchable component repositories. Together with the imprecise algorithms available for searching such component stores, this makes the use of these systems too complicated and therefore unattractive. To lower this hurdle in the future, in this thesis we developed a proactive component recommendation system that is closely aligned with test-driven development processes and, building on them, can propose reusable components that provide exactly the functionality a developer currently needs. We implemented the system as a plugin for the well-known Eclipse IDE and demonstrated its usability by re-implementing an example from a well-known book on agile development. Furthermore, this work contains an analysis of the precision of our approach as well as numerous examples of how ordinary test cases can be used as a simplified semantic description of a component and as a starting point for the search for reusable components."} {"_id": "9b95747c0220379df9c59c29092f22e8e54a681c", "title": "An efficient automated design to generate UML diagram from Natural Language Specifications", "text": "The foremost problem that arises in the Software Development Cycle is during requirements specification and analysis. Errors encountered during this first phase of the cycle migrate to later phases, where correcting them becomes far more costly than fixing them at the source. The reason is that specifications of software requirements are expressed in natural language. One can easily transform the requirements specified into a computer model using UML. To minimize the errors that arise in the existing system, we have proposed a new technique that enhances the generation of UML models from Natural Language requirements, which can easily provide automatic assistance to the developers. The main aim of our paper is to focus on the production of Activity Diagrams and Sequence Diagrams from Natural Language Specifications. A standard POS tagger and parser analyze the input, i.e., requirements in the English language given by the users, and extract phrases, activities, etc. from the specified text. The technique is beneficial as it reduces the gap between informal natural language and the formal modeling language. The input is the requirements laid down by the users in the English language. Stages such as pre-processing, part-of-speech (POS) tagging, parsing, phrase identification and designing of UML diagrams are applied to the input. The application and its framework are developed in Java, and it is tested by applying it to a few technical documents."} {"_id": "94a4a4fdb58a6777f13bb60955470bb10a415d6f", "title": "Classification of Text Documents", "text": "The exponential growth of the internet has led to a great deal of interest in developing useful and efficient tools and software to assist users in searching the Web. Document retrieval, categorization, routing and filtering can all be formulated as classification problems. However, the complexity of natural languages and the extremely high dimensionality of the feature space of documents have made this classification problem very difficult. We investigate four different methods for document classification: the naive Bayes classifier, the nearest neighbour classifier, decision trees and a subspace method. These were applied to seven-class Yahoo news groups (business, entertainment, health, international, politics, sports and technology) individually and in combination. We studied three classifier combination approaches: simple voting, dynamic classifier selection and adaptive classifier combination. Our experimental results indicate that the naive Bayes classifier and the subspace method outperform the other two classifiers on our data sets. Combinations of multiple classifiers did not always improve the classification accuracy compared to the best individual classifier. Among the three different combination approaches, our adaptive classifier combination method introduced here performed the best. 
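As a minimal illustration of one of the individual classifiers compared in the text-classification record above, the following trains a multinomial naive Bayes model over bag-of-words features; the tiny toy corpus stands in for the seven-class Yahoo news data.

```python
# Multinomial naive Bayes text classification (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["stocks rally as markets close higher",
        "team wins the championship game",
        "new vaccine shows promising trial results",
        "quarterly earnings beat analyst forecasts"]
labels = ["business", "sports", "health", "business"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["playoff game goes to overtime"]))   # predicted topic label
```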
The best classification accuracy that we are able to achieve on this seven-class problem is approximately 83%, which is comparable to the performance of other similar studies. However, the classification problem considered here is more difficult because the pattern classes used in our experiments have a large overlap of words in their corresponding documents."} {"_id": "1b631abf2977c0efc186483658d7242098c921b8", "title": "Ontologies and Knowledge Bases. Towards a Terminological Clarification", "text": "The word \u201contology\u201d has recently gained considerable popularity within the knowledge engineering community. However, its meaning tends to remain a bit vague, as the term is used in very different ways. Limiting our attention to the various proposals made in the current debate in AI, we isolate a number of interpretations, which in our opinion deserve a suitable clarification. We elucidate the implications of these various interpretations, arguing for the need for clear terminological choices regarding the technical use of terms like \u201contology\u201d, \u201cconceptualization\u201d and \u201contological commitment\u201d. After some comments on the use of \u201cOntology\u201d (with a capital \u201cO\u201d) as a term which denotes a philosophical discipline, we analyse the possible confusion between an ontology intended as a particular conceptual framework at the knowledge level and an ontology intended as a concrete artifact at the symbol level, to be used for a given purpose. A crucial point in this clarification effort is the careful analysis of Gruber\u2019s definition of an ontology as a specification of a conceptualization."} {"_id": "47c9c4ea22d4d4a286e74ed1f8b8f62d9bea54fb", "title": "Knowledge Engineering: Principles and Methods", "text": "This paper gives an overview of the development of the field of Knowledge Engineering over the last 15 years. We discuss the paradigm shift from a transfer view to a modeling view and describe two approaches which considerably shaped research in Knowledge Engineering: Role-limiting Methods and Generic Tasks. To illustrate various concepts and methods which evolved in recent years, we describe three modeling frameworks: CommonKADS, MIKE, and PROT\u00c9G\u00c9-II. This description is supplemented by discussing some important methodological developments in more detail: specification languages for knowledge-based systems, problem-solving methods, and ontologies. We conclude by outlining the relationship of Knowledge Engineering to Software Engineering, Information Integration and Knowledge Management."} {"_id": "598909e8871a92a3c8a19eeaefdbe6dd8e271b55", "title": "What Is an Ontology?", "text": "The word \u201contology\u201d is used with different senses in different communities. The most radical difference is perhaps between the philosophical sense, which has of course a well-established tradition, and the computational sense, which emerged in recent years in the knowledge engineering community, starting from an early informal definition of (computational) ontologies as \u201cexplicit specifications of conceptualizations\u201d. 
In this paper we shall revisit the previous attempts to clarify and formalize this original definition, providing a detailed account of the notions of conceptualization and explicit specification, while discussing at the same time the importance of shared explicit specifications."} {"_id": "1c27cb8364a7655b2e4e8aa799970a08f90dea61", "title": "Building a Large-Scale Knowledge Base for Machine Translation", "text": "Knowledge-based machine translation (KBMT) systems have achieved excellent results in constrained domains, but have not yet scaled up to newspaper text. The reason is that knowledge resources (lexicons, grammar rules, world models) must be painstakingly handcrafted from scratch. One of the hypotheses being tested in the PANGLOSS machine translation project is whether or not these resources can be semi-automatically acquired on a very large scale. This paper focuses on the construction of a large ontology (or knowledge base, or world model) for supporting KBMT. It contains representations for some 70,000 commonly encountered objects, processes, qualities, and relations. The ontology was constructed by merging various online dictionaries, semantic networks, and bilingual resources, through semi-automatic methods. Some of these methods (e.g., conceptual matching of semantic taxonomies) are broadly applicable to problems of importing/exporting knowledge from one KB to another. Other methods (e.g., bilingual matching) allow a knowledge engineer to build up an index to a KB in a second language, such as Spanish or Japanese."} {"_id": "25663da276c3d0d913ecf31cde5a8e7f6f19151d", "title": "Toward principles for the design of ontologies used for knowledge sharing?", "text": "Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria."} {"_id": "9845cad436c22642e6c99d24b020f8ca0b528ceb", "title": "Cloud Computing Security: A Survey", "text": ""} {"_id": "408e8eecc14c5cc60bbdfc486ba7a7fc97031788", "title": "Discriminative Unsupervised Feature Learning with Convolutional Neural Networks", "text": "Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. 
In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled \u2018seed\u2019 image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101)."} {"_id": "9c3d3e64ffead7ea93cd21ab7ab89fc9afcd690c", "title": "Hidden in Plain Sight: Storing and Managing Secrets on a Public Ledger", "text": "Current blockchain systems are incapable of holding sensitive data securely on their public ledger while supporting accountability of data access requests and revocability of data access rights. Instead, they either keep the sensitive data off-chain as a semi-centralized solution or they just publish the data on the ledger, ignoring the problem altogether. In this work, we introduce SCARAB, the first secure decentralized access control mechanism for blockchain systems, which addresses the challenge of accountability by publicly logging each request before granting data access, and that of revocability by introducing collectively managed data access policies. SCARAB therefore introduces on-chain secrets, which utilize verifiable secret sharing to enable collectively managed secrets under a Byzantine adversary, and identity skipchains, which enable the dynamic management of identities and of access control policies. The evaluation of our SCARAB implementation shows that the latency of a single read/write request scales linearly with the number of access-securing trustees and is in the range of 200 ms to 8 seconds for 16 to 128 trustees."} {"_id": "e6a5ed2a0498d6e58d2fa370063ad3647b11e249", "title": "Controlling negative and positive power at the ankle with a soft exosuit", "text": "The soft exosuit is a new approach for applying assistive forces over the wearer's body through load paths configured by the textile architecture. In this paper, we present a body-worn lower-extremity soft exosuit and a new control approach that can independently control the level of assistance that is provided during negative- and positive-power periods at the ankle. The exosuit was designed to create load paths assisting ankle plantarflexion and hip flexion, and the actuation system transmits forces from the motors to the suit via Bowden cables. A load cell and two gyro sensors per leg are used to measure real-time data, and the controller performs position control of the cable on a step-by-step basis with respect to the power delivered to the wearer's ankle by controlling two force parameters, the pretension and the active force. Human subjects testing results demonstrate that the controller is capable of modulating the amount of power delivered to the ankle joint. Also, significant reductions in metabolic rate (11%-15%) were observed, which indicates the potential of the proposed control approach to provide benefit to the wearer during walking."} {"_id": "8215133f440fbed91aa319da2256d0b7efe9535d", "title": "The Challenge of Big Data in Public Health: An Opportunity for Visual Analytics", "text": "Public health (PH) data can generally be characterized as big data. 
The efficient and effective use of this data determines the extent to which PH stakeholders can sufficiently address societal health concerns as they engage in a variety of work activities. As stakeholders interact with data, they engage in various cognitive activities such as analytical reasoning, decision-making, interpreting, and problem solving. Performing these activities with big data is a challenge for the unaided mind as stakeholders encounter obstacles relating to the data's volume, variety, velocity, and veracity. Such being the case, computer-based information tools are needed to support PH stakeholders. Unfortunately, while existing computational tools are beneficial in addressing certain work activities, they fall short in supporting cognitive activities that involve working with large, heterogeneous, and complex bodies of data. This paper presents visual analytics (VA) tools, a nascent category of computational tools that integrate data analytics with interactive visualizations, to facilitate the performance of cognitive activities involving big data. Historically, PH has lagged behind other sectors in embracing new computational technology. In this paper, we discuss the role that VA tools can play in addressing the challenges presented by big data. In doing so, we demonstrate the potential benefit of incorporating VA tools into PH practice, in addition to highlighting the need for further systematic and focused research."} {"_id": "e1af23ebf650056824b7d779f4ef27be05989aa7", "title": "A Customizable Matrix Multiplication Framework for the Intel HARPv2 Xeon+FPGA Platform: A Deep Learning Case Study", "text": "General Matrix to Matrix multiplication (GEMM) is the cornerstone for a wide gamut of applications in high performance computing (HPC), scientific computing (SC) and more recently, deep learning. In this work, we present a customizable matrix multiplication framework for the Intel HARPv2 CPU+FPGA platform that includes support for both traditional single precision floating point and reduced precision workloads. Our framework supports arbitrary size GEMMs and consists of two parts: (1) a simple application programming interface (API) for easy configuration and integration into existing software and (2) a highly customizable hardware template. The API provides both compile and runtime options for controlling key aspects of the hardware template including dynamic precision switching; interleaving and block size control; and fused deep learning specific operations. The framework currently supports single precision floating point (FP32), 16, 8, 4 and 2 bit Integer and Fixed Point (INT16, INT8, INT4, INT2) and more exotic data types for deep learning workloads: INT16xTernary, INT8xTernary, BinaryxBinary.\n We compare our implementation to the latest NVIDIA Pascal GPU and evaluate the performance benefits provided by optimizations built into the hardware template. Using three neural networks (AlexNet, VGGNet and ResNet) we illustrate that reduced precision representations such as binary achieve the best performance, and that the HARPv2 enables fine-grained partitioning of computations over both the Xeon and FPGA. 
We observe up to a 50x improvement in execution time compared to single precision floating point, and find that runtime configuration options can improve the efficiency of certain layers in AlexNet by up to 4x, achieving an overall 1.3x improvement over the entire network."} {"_id": "b2b22dad2ae73278a8349f813f2e864614a04ed4", "title": "Designing inflatable structures", "text": "We propose an interactive, optimization-in-the-loop tool for designing inflatable structures. Given a target shape, the user draws a network of seams defining desired segment boundaries in 3D. Our method computes optimally-shaped flat panels for the segments, such that the inflated structure is as close as possible to the target while satisfying the desired seam positions. Our approach is underpinned by physics-based pattern optimization, accurate coarse-scale simulation using tension field theory, and a specialized constraint-optimization method. Our system is fast enough to warrant interactive exploration of different seam layouts, including internal connections, and their effects on the inflated shape. We demonstrate the resulting design process on a varied set of simulation examples, some of which we have fabricated, demonstrating excellent agreement with the design intent."} {"_id": "7e9db6bcc86f0e0359e3056c60aa7d96f4d8feed", "title": "Smartphone physics \u2013 a smart approach to practical work in science education?", "text": "In the form of teacher didactical design research, this work addresses a didactical issue encountered during physics teaching in a Swedish upper secondary school. A need for renewed practical laboratory work related to Newtonian mechanics is met by proposing and designing an activity based on high-speed photography using the nowadays omnipresent smartphone, thus bringing new technology into the classroom. The activity \u2013 video analysis of the collision physics of football kicks \u2013 is designed and evaluated by following a didactical design cycle. The work elaborates on how the proposed laboratory activity relates to the potential and complications of experimental activities in science education, as described in the vast literature on the topic. It is argued that the use of smartphones constitutes an interesting use of new technology for addressing known problems of practical work. Of particular interest is that smartphones offer a way to bridge the gap between the everyday life of students and the world of physics experiments (smartphones are powerful pocket laboratories). The use of smartphones also avoids using unfamiliar laboratory equipment that is known to hinder focus on intended content, while at the same time exploring a powerful tool for data acquisition and analysis. Overall, the use of smartphones (and computers) in this manner can be seen as the result of applying Occam\u2019s razor to didactics: only familiar and readily available instrumentation is used, and skills learned (movie handling and image analysis) are all educationally worthwhile. Although the activity was judged successful, a systematic investigation of learning outcome was out of scope. This means that no strong conclusions can be drawn based on this limited work. Nonetheless, the smartphone activity was well received by the students and should constitute a useful addition to the set of instructional approaches, especially since variation is known to benefit learning. The main failure of the design was an overestimation of student prior knowledge on motion physics (and its application to image data). 
As a consequence, the activity required more time and effort than originally anticipated. No severe pitfalls of smartphone usage were identified, but it should be noted that the proposed activity \u2013 with its lack of well-defined results due to variations in kick strength \u2013 requires that the teacher is capable of efficiently analysing multiple student films (to keep the feedback process from becoming overwhelmingly time consuming). If not all student films are evaluated, the feedback to the students may become of low quality, and misconceptions may pass under the radar. On the other hand, given that programming will become compulsory from 2018, an interesting development of the activity would be to include handling of images and videos using a high-level programming language like Python."} {"_id": "f7c9ee705b857cd093f00a531ba90bcf5ba5b25a", "title": "Mining HEXACO personality traits from Enterprise Social Media", "text": "In this paper we introduce a novel computational technique for extracting the personality traits (HEXACO) of employees from Enterprise Social Media posts. We deal with challenges such as not being able to use existing survey instruments for scoring, and not being able to directly apply existing psychological studies on written text due to the lack of overlapping words between the existing dictionary and the words used in Enterprise Social Media. Using our approach we are able to infer personality traits (HEXACO) from posts and find better coverage and usage of the extended dictionary."} {"_id": "2db631dd019b085a2744e04ed2e3658462ec57a4", "title": "Hawkes Processes with Stochastic Excitations", "text": "We propose an extension to Hawkes processes by treating the levels of self-excitation as a stochastic differential equation. Our new point process allows better approximation in application domains where events and intensities accelerate each other with correlated levels of contagion. We generalize a recent algorithm for simulating draws from Hawkes processes whose levels of excitation are stochastic processes, and propose a hybrid Markov chain Monte Carlo approach for model fitting. Our sampling procedure scales linearly with the number of required events and does not require stationarity of the point process. A modular inference procedure consisting of a combination of Gibbs and Metropolis\u2013Hastings steps is put forward. We recover expectation maximization as a special case. Our general approach is illustrated for contagion following geometric Brownian motion and exponential Langevin dynamics."} {"_id": "8681f43d3e28bbdcc0c36e2d5afc6e1b192e735d", "title": "ICP-based pose-graph SLAM", "text": "Odometry-like localization solutions can be built upon Light Detection And Ranging (LIDAR) sensors, by sequentially registering the point clouds acquired along a robot trajectory. Yet such solutions inherently drift over time: we propose an approach that adds a graphical model layer on top of such a LIDAR odometry layer, using the Iterative Closest Points (ICP) algorithm for registration. Reference frames called keyframes are defined along the robot trajectory, and ICP results are used to build a pose graph, that in turn is used to solve an optimization problem that provides updates for the keyframes upon loop closing, enabling the correction of the path of the robot and of the map of the environment. 
We present in detail the configuration used to register data from the Velodyne High Definition LIDAR (HDL), and a strategy to build local maps upon which current frames are registered, either when discovering new areas or revisiting previously mapped areas. Experiments show that it is possible to build the graph using data from ICP and that loop closings at the graph level reduce the overall drift of the system."} {"_id": "58f90ba279b804676141359773887cdad366fd0d", "title": "Panelrama: A Framework for Cross-Device Applications", "text": "We introduce Panelrama, a web framework for the construction of distributed user interfaces (DUI). Our implementation provides developers with low migration costs through built-in mechanisms for the synchronization of UI state and minimal changes to existing languages. Additionally, we describe a solution to categorize device characteristics and dynamically change UI allocation to best-fit devices. Finally, we illustrate the use of Panelrama through a sample application which demonstrates multidevice interaction techniques and collaborative potential."} {"_id": "5099c982707d23f94a352f2eafbd6fc48c7c45b8", "title": "Mitigating Poisoning Attacks on Machine Learning Models: A Data Provenance Based Approach", "text": "The use of machine learning models has become ubiquitous. Their predictions are used to make decisions about healthcare, security, investments and many other critical applications. Given this pervasiveness, it is not surprising that adversaries have an incentive to manipulate machine learning models to their advantage. One way of manipulating a model is through a poisoning or causative attack, in which the adversary feeds carefully crafted poisonous data points into the training set. Taking advantage of recently developed tamper-free provenance frameworks, we present a methodology that uses contextual information about the origin and transformation of data points in the training set to identify poisonous data, thereby enabling online and regularly re-trained machine learning applications to consume data sources in potentially adversarial environments. To the best of our knowledge, this is the first approach to incorporate provenance information as part of a filtering algorithm to detect causative attacks. We present two variations of the methodology - one tailored to partially trusted data sets and the other to fully untrusted data sets. Finally, we evaluate our methodology against existing methods to detect poison data and show an improvement in the detection rate."} {"_id": "e85f578f51a1068934e1d3fa3a832fdf62d5c770", "title": "Terahertz band communications: Applications, research challenges, and standardization activities", "text": "The terahertz frequency band, 0.1\u201310 THz, is envisioned as one of the possible resources to be utilized for wireless communications in networks beyond 5G. Communications over this band will feature a number of attractive properties, including potentially terabit-per-second link capacities, miniature transceivers and, potentially, high energy efficiency. Meanwhile, a number of specific research challenges have to be addressed to convert the theoretical estimations into commercially attractive solutions. Due to the diversity of the challenges, the research on THz communications at its early stages was mostly performed by independent communities from different areas. Therefore, the existing knowledge in the field is substantially fragmented. 
In this paper, we attempt to address this issue and provide a clear, easy-to-follow introduction to THz communications. A review of the state of the art in THz communications research is given by identifying the target applications and major open research challenges, as well as the recent achievements by industry, academia, and the standardization bodies. The potential of THz communications is presented by illustrating the basic tradeoffs in typical use cases. Based on the given summary, certain prospective research directions in the field are identified."} {"_id": "407e4b401395682b15c431347ec9b0f88ceec04b", "title": "Multi-target tracking by learning local-to-global trajectory models", "text": "The multi-target tracking problem is challenging when there exist occlusions, tracking failures of the detector and severe interference between detections. In this paper, we propose a novel detection-based tracking method that links detections into tracklets and further forms long trajectories. Unlike many previous hierarchical frameworks which split the data association into two separate optimization problems (linking detections locally and linking tracklets globally), we introduce a unified algorithm that can automatically relearn the trajectory models from the local and global information for finding the joint optimal assignment. In each temporal window, the trajectory models are initialized by the local information to link those easy-to-connect detections into a set of tracklets. Then the trajectory models are updated by the reliable tracklets and reused to link separated tracklets into long trajectories. We iteratively update the trajectory models with more information from more frames until the result converges. The iterative process gradually improves the accuracy of the trajectory models, which in turn improves the target ID inferences for all detections by the MRF model. Experimental results revealed that our proposed method achieved state-of-the-art multi-target tracking performance."} {"_id": "473cbc5ec2609175041e1410bc6602b187d03b23", "title": "Semantic Audiovisual Data Fusion for Automatic Emotion Recognition", "text": "The paper describes a novel technique for the recognition of emotions from multimodal data. We focus on the recognition of the six prototypic emotions. The results from the facial expression recognition and from the emotion recognition from speech are combined using a bi-modal semantic data fusion model that determines the most probable emotion of the subject. Two types of models based on geometric face features are used for facial expression recognition, depending on the presence or absence of speech. In our approach we define an algorithm that is robust to changes of face shape that occur during regular speech. The influence of phoneme generation on the face shape during speech is removed by using features that are only related to the eyes and the eyebrows. The paper includes results from testing the presented models."} {"_id": "c70a2de3fb74a351cac7585f3bddbf7db44d6f3b", "title": "Mobile Food Recognition with an Extreme Deep Tree", "text": "Food recognition is an emerging topic in the field of computer vision. The recent interest of the research community in this area is justified by the rise in popularity of food diary applications, where the users take note of their food intake for self-monitoring or to provide useful statistics to dietitians. 
However, manually annotating food intake can be a tedious task, thus explaining the need for a system that automatically recognizes food, and possibly its amount, from pictures acquired by mobile devices. In this work we propose an approach to food recognition which combines the strengths of different state-of-the-art classifiers, namely Convolutional Neural Networks, Extreme Learning Machines and Neural Trees. We show that the proposed architecture can achieve good results even with low computational power, as in the case of mobile devices."} {"_id": "263286281bb576d353fecebd6023b05effc15fbf", "title": "Efficient Selection of Vector Instructions Using Dynamic Programming", "text": "Accelerating program performance via SIMD vector units is very common in modern processors, as evidenced by the use of SSE, MMX, VSE, and VSX SIMD instructions in multimedia, scientific, and embedded applications. To take full advantage of the vector capabilities, a compiler needs to generate efficient vector code automatically. However, most commercial and open-source compilers fall short of using the full potential of vector units, and only generate vector code for simple innermost loops. In this paper, we present the design and implementation of an auto-vectorization framework in the back-end of a dynamic compiler that not only generates optimized vector code but is also well integrated with the instruction scheduler and register allocator. The framework includes a novel compile-time efficient, dynamic programming-based vector instruction selection algorithm for straight-line code that expands opportunities for vectorization in the following ways: (1) scalar packing explores opportunities for packing multiple scalar variables into short vectors, (2) judicious use of shuffle and horizontal vector operations, when possible, and (3) algebraic reassociation expands opportunities for vectorization by algebraic simplification. We report performance results on the impact of auto-vectorization on a set of standard numerical benchmarks using the Jikes RVM dynamic compilation environment. Our results show performance improvements of up to 57.71% on an Intel Xeon processor, compared to non-vectorized execution, with a modest increase in compile time in the range from 0.87% to 9.992%. An investigation of the SIMD parallelization performed by v11.1 of the Intel Fortran Compiler (IFC) on three benchmarks shows that our system achieves speedup with vectorization in all three cases while IFC does not. Finally, a comparison of our approach with an implementation of the Superword Level Parallelism (SLP) algorithm of Larsen and Amarasinghe shows that our approach yields a performance improvement of up to 13.78% relative to SLP."} {"_id": "391fa13fce3e4c814c61f3384d9979aba56df4bf", "title": "Body Part Detection for Human Pose Estimation and Tracking", "text": "Accurate 3-D human body pose tracking from a monocular video stream is important for a number of applications. We describe a novel hierarchical approach for tracking human pose that uses edge-based features during the coarse stage and later other features for global optimization. At first, humans are detected by motion and tracked by fitting an ellipse in the image. Then, body components are found using edge features and used to estimate the 2D positions of the body joints accurately. This helps to bootstrap the estimation of 3D pose using a sampling-based search method in the last stage. 
We present experimental results with sequences of different realistic scenes to illustrate the performance of the method."} {"_id": "53fab2769ef87e8153d905a5cf76ead9f9e46c22", "title": "The American Sign Language Lexicon Video Dataset", "text": "The lack of a written representation for American Sign Language (ASL) makes it difficult to do something as commonplace as looking up an unknown word in a dictionary. The majority of printed dictionaries organize ASL signs (represented in drawings or pictures) based on their nearest English translation; so unless one already knows the meaning of a sign, dictionary look-up is not a simple proposition. In this paper we introduce the ASL lexicon video dataset, a large and expanding public dataset containing video sequences of thousands of distinct ASL signs, as well as annotations of those sequences, including start/end frames and the class label of every sign. This dataset is being created as part of a project to develop a computer vision system that allows users to look up the meaning of an ASL sign. At the same time, the dataset can be useful for benchmarking a variety of computer vision and machine learning methods designed for learning and/or indexing a large number of visual classes, and especially approaches for analyzing gestures and human communication."} {"_id": "74249aaa95deaa5dcab72ffbd22657f9bde07fca", "title": "MHz-frequency operation of flyback converter with monolithic self-synchronized rectifier (SSR)", "text": "Synchronous rectification offers an effective solution to improve the efficiency of low-output-voltage flyback converters. Conventional synchronous flyback topologies using a discrete synchronous rectifier face challenges in precise timing control on the secondary side. In this paper, MHz-frequency operation of a flyback DC-DC converter using a new monolithic self-synchronized rectifier (SSR) is investigated and demonstrated. The SSR IC, acting solely on its own drain-source voltage instead of external control signals, considerably simplifies converter design, improves system efficiency, and enables MHz operating frequency. Analysis, modeling, and experimental results are presented."} {"_id": "efb1591c78b27f908c17fac0fdbbb4a9f8bfbdb4", "title": "Assessing The Feasibility Of Self-organizing Maps For Data Mining Financial Information", "text": "Analyzing financial performance in today\u2019s information-rich society can be a daunting task. With the evolution of the Internet, access to massive amounts of financial data, typically in the form of financial statements, is widespread. Managers and stakeholders are in need of a data-mining tool allowing them to quickly and accurately analyze this data. An emerging technique that may be suited for this application is the self-organizing map. The purpose of this study was to evaluate the performance of self-organizing maps for analyzing the financial performance of international pulp and paper companies. For the study, financial data, in the form of seven financial ratios, was collected, using the Internet as the primary source of information. A total of 77 companies, and six regional averages, were included in the study. The time frame of the study was the period 1995-2000. An example analysis was performed, and the results analyzed based on information contained in the annual reports. 
The results of the study indicate that self-organizing maps can be feasible tools for the financial analysis of large amounts of financial data."} {"_id": "80d492462d76c5a8305739b4339fd7af0277fc7a", "title": "A Self-Cloning Agents Based Model for High-Performance Mobile-Cloud Computing", "text": "The rise of the mobile-cloud computing paradigm in recent years has enabled mobile devices with processing power and battery life limitations to achieve complex tasks in real-time. While mobile-cloud computing is promising to overcome the limitations of mobile devices for real-time computing, the lack of frameworks compatible with standard technologies and techniques for dynamic performance estimation and program component relocation makes it harder to adopt mobile-cloud computing at large. Most of the available frameworks rely on strong assumptions such as the availability of a full clone of the application code and negligible execution time in the cloud. In this paper, we present a dynamic computation offloading model for mobile-cloud computing, based on autonomous agents. Our approach does not impose any requirements on the cloud platform other than providing isolated execution containers, and it alleviates the management burden of offloaded code by the mobile platform using stateful, autonomous application partitions. We also investigate the effects of different cloud runtime environment conditions on the performance of mobile-cloud computing, and present a simple and low-overhead dynamic makespan estimation model integrated into autonomous agents to enhance them with self-performance evaluation in addition to self-cloning capabilities. The proposed performance profiling model is used in conjunction with a cloud resource optimization scheme to ensure optimal performance. Experiments with two mobile applications demonstrate the effectiveness of the proposed approach for high-performance mobile-cloud computing."} {"_id": "1da8ecc3fe3a2d3a8057487dcaaf97a455bda117", "title": "QUBIC: a qualitative biclustering algorithm for analyses of gene expression data", "text": "When applied to gene expression data, biclustering extends traditional clustering techniques by attempting to find (all) subgroups of genes with similar expression patterns under to-be-identified subsets of experimental conditions. Still, the real power of this clustering strategy is yet to be fully realized due to the lack of effective and efficient algorithms for reliably solving the general biclustering problem. We report a QUalitative BIClustering algorithm (QUBIC) that can solve the biclustering problem in a more general form, compared to existing algorithms, through employing a combination of qualitative (or semi-quantitative) measures of gene expression data and a combinatorial optimization technique. One key unique feature of the QUBIC algorithm is that it can identify all statistically significant biclusters including biclusters with the so-called 'scaling patterns', a problem considered to be rather challenging; another key unique feature is that the algorithm solves such general biclustering problems very efficiently, capable of solving biclustering problems with tens of thousands of genes under up to thousands of conditions in a few minutes of CPU time on a desktop computer. We have demonstrated a considerably improved biclustering performance by our algorithm compared to the existing algorithms on various benchmark sets and data sets of our own. 
QUBIC was written in ANSI C and tested using GCC (version 4.1.2) on Linux. Its source code is available at: http://csbl.bmb.uga.edu/~maqin/bicluster. A server version of QUBIC is also available upon request."} {"_id": "0bddc08769e9ff5d42ed36336f69bb3b1f42e716", "title": "Autonomous Inverted Helicopter Flight via Reinforcement Learning", "text": "Helicopters have highly stochastic, nonlinear dynamics, and autonomous helicopter flight is widely regarded to be a challenging control problem. As helicopters are highly unstable at low speeds, it is particularly difficult to design controllers for low-speed aerobatic maneuvers. In this paper, we describe a successful application of reinforcement learning to designing a controller for sustained inverted flight on an autonomous helicopter. Using data collected from the helicopter in flight, we began by learning a stochastic, nonlinear model of the helicopter\u2019s dynamics. Then, a reinforcement learning algorithm was applied to automatically learn a controller for autonomous inverted hovering. Finally, the resulting controller was successfully tested on our autonomous helicopter platform."} {"_id": "2e268b70c7dcae58de2c8ff7bed1e58a5e58109a", "title": "Dynamic Programming and Optimal Control", "text": "This is an updated version of Chapter 4 of the author\u2019s Dynamic Programming and Optimal Control, Vol. II, 4th Edition, Athena Scientific, 2012. It includes new material, and it is substantially revised and expanded (it has more than doubled in size). The new material aims to provide a unified treatment of several models, all of which lack the contractive structure that is characteristic of the discounted problems of Chapters 1 and 2: positive and negative cost models, deterministic optimal control (including adaptive DP), stochastic shortest path models, and risk-sensitive models. Here is a summary of the new material:"} {"_id": "7a09464f26e18a25a948baaa736270bfb84b5e12", "title": "On-line Q-learning using connectionist systems", "text": "Reinforcement learning algorithms are a powerful machine learning technique. However, much of the work on these algorithms has been developed with regard to discrete finite-state Markovian problems, which is too restrictive for many real-world environments. Therefore, it is desirable to extend these methods to high-dimensional continuous state-spaces, which requires the use of function approximation to generalise the information learnt by the system. In this report, the use of back-propagation neural networks (Rumelhart, Hinton and Williams 1986) is considered in this context. We consider a number of different algorithms based around Q-Learning (Watkins 1989) combined with the Temporal Difference algorithm (Sutton 1988), including a new algorithm (Modified Connectionist Q-Learning), and Q(\u03bb) (Peng and Williams 1994). In addition, we present algorithms for applying these updates on-line during trials, unlike the backward replay used by Lin (1993) that requires waiting until the end of each trial before updating can occur. On-line updating is found to be more robust to the choice of training parameters than backward replay, and also enables the algorithms to be used in continuously operating systems where no end-of-trial conditions occur. We compare the performance of these algorithms on a realistic robot navigation problem, where a simulated mobile robot is trained to guide itself to a goal position in the presence of obstacles. 
The robot must rely on limited sensory feedback from its surroundings, and make decisions that can be generalised to arbitrary layouts of obstacles. These simulations show that on-line learning algorithms are less sensitive to the choice of training parameters than backward replay, and that the alternative update rules of MCQ-L and Q(\u03bb) are more robust than standard Q-learning updates."} {"_id": "13e2d5c4d39bc58b42f9004e5b03905f847dfa0f", "title": "Autonomous Helicopter Flight via Reinforcement Learning", "text": "Autonomous helicopter flight represents a challenging control problem, with complex, noisy dynamics. In this paper, we describe a successful application of reinforcement learning to autonomous helicopter flight. We first fit a stochastic, nonlinear model of the helicopter dynamics. We then use the model to learn to hover in place, and to fly a number of maneuvers taken from an RC helicopter competition."} {"_id": "6fa33dcf5bd93299443cb46208219e9365f53ab3", "title": "Latent syntactic structure-based sentiment analysis", "text": "People share their opinions about things like products, movies and services using social media channels. The analysis of these textual contents for sentiments is a gold mine for marketing experts, thus automatic sentiment analysis is a popular area of applied artificial intelligence. We propose a latent syntactic structure-based approach for sentiment analysis which requires only sentence-level polarity labels for training. Our experiments on three domains (movie, IT products, restaurant) show that a sentiment analyzer that exploits syntactic parses and has access only to sentence-level polarity annotation for in-domain sentences can outperform state-of-the-art models that were trained on out-domain parse trees with sentiment annotation for each node of the trees. In practice, millions of sentence-level polarity annotations are usually available for a particular domain, thus our approach is applicable for training a sentiment analyzer for a new domain while it can exploit the syntactic structure of sentences as well."} {"_id": "f9825a2d225b549f6c70b2faaeabd8537730a319", "title": "Boosting the Quality of Approximate String Matching by Synonyms", "text": "A string-similarity measure quantifies the similarity between two text strings for approximate string matching or comparison. For example, the strings \u201cSam\u201d and \u201cSamuel\u201d can be considered to be similar. Most existing work that computes the similarity of two strings only considers syntactic similarities, for example, number of common words or q-grams. While this is indeed an indicator of similarity, there are many important cases where syntactically-different strings can represent the same real-world object. For example, \u201cBill\u201d is a short form of \u201cWilliam,\u201d and \u201cDatabase Management Systems\u201d can be abbreviated as \u201cDBMS.\u201d Given a collection of predefined synonyms, the purpose of this article is to explore such existing knowledge to effectively evaluate the similarity between two strings and efficiently perform similarity searches and joins, thereby boosting the quality of approximate string matching.\n In particular, we first present an expansion-based framework to measure string similarities efficiently while considering synonyms. We then study efficient algorithms for similarity searches and joins by proposing two novel indexes, called SI-trees and QP-trees, which combine signature-filtering and length-filtering strategies. 
In order to improve the efficiency of our algorithms, we develop an estimator to estimate the size of candidates to enable an online selection of signature filters. This estimator provides strong low-error, high-confidence guarantees while requiring only logarithmic space and time costs, thus making our method attractive both in theory and in practice. Finally, the experimental results from a comprehensive study of the algorithms with three real datasets verify the effectiveness and efficiency of our approaches."} {"_id": "89a75e3ad11cbdc00df88d619422fbe1b75b95d6", "title": "Augmented Reality with Hololens: Experiential Architectures Embedded in the Real World", "text": "Early hands-on experiences with the Microsoft Hololens augmented/mixed reality device are reported and discussed, with a general aim of exploring basic 3D visualization. A range of usage cases are tested, including data visualization and immersive data spaces, in-situ visualization of 3D models and full-scale architectural form visualization. Ultimately, the Hololens is found to provide a remarkable tool for moving from traditional visualization of 3D objects on a 2D screen to fully experiential 3D visualizations embedded in the real world."} {"_id": "cd84e1e9890c5e10088376b56796718f79fa7e61", "title": "Aspect based sentiment analysis", "text": "In this fast-paced and social-media-driven world, decision making has been revolutionized, as there are lots of opinions floating on the internet in the form of blogs, social media updates, forums, etc. This paper focuses on how banks can utilize these reviews to improve their services. The reviews are scraped from the internet, and sentiment analysis is done on an aspect level to get a much clearer idea of where the banks are performing badly according to the customers or users."} {"_id": "9892ad453d0d7a6f260a0c6ca50c54c614cd2160", "title": "Real time object detection and tracking using the Kalman Filter embedded in single board in a robot", "text": "This paper presents an algorithm implemented in real time in a laboratory robot, designed to detect and track an object by its color in an indoor environment using the Kalman filter. The image capture is done by a Kinect camera installed in the robot. The algorithm is programmed into a single-board computer, a BeagleBone Black, which uses the ROS\u2013OpenCV system for image processing."} {"_id": "6d596cb55d99eae216840090b46bc5e49d7aeea5", "title": "A Case for Malleable Thread-Level Linear Algebra Libraries: The LU Factorization With Partial Pivoting", "text": "We propose two novel techniques for overcoming the load imbalance encountered when implementing so-called look-ahead mechanisms in relevant dense matrix factorizations for the solution of linear systems. Both techniques target the scenario where two thread teams are created/activated during the factorization, with each team in charge of performing an independent task/branch of execution. The first technique promotes worker sharing (WS) between the two tasks, allowing the threads of the task that completes first to be reallocated for use by the costlier task. The second technique allows a fast task to alert the slower task of completion, enforcing the early termination (ET) of the second task, and a smooth transition of the factorization procedure into the next iteration. 
The two mechanisms are instantiated via a new malleable thread-level implementation of the Basic Linear Algebra Subprograms (BLAS), and their benefits are illustrated via an implementation of the LU factorization with partial pivoting enhanced with look-ahead. Concretely, our experimental results on an Intel Xeon system with 12 cores show the benefits of combining WS+ET, reporting competitive performance in comparison with a task-parallel runtime-based solution."} {"_id": "7157dda72073ff66cc2de6ec5db056a3e8b326d7", "title": "A generic statistical approach for spam detection in Online Social Networks", "text": ""} {"_id": "8ee60491a7e54f0a347f301e53b5b4521f8714e7", "title": "Fast 3D mapping in highly dynamic environments using normal distributions transform occupancy maps", "text": "Autonomous vehicles operating in real-world industrial environments have to overcome numerous challenges, chief among which is the creation and maintenance of consistent 3D world models. This paper focuses on a particularly important challenge: mapping in dynamic environments. We introduce several improvements to the recently proposed Normal Distributions Transform Occupancy Map (NDT-OM) aimed at efficient mapping in dynamic environments. A careful consistency analysis is given based on convergence and similarity metrics specifically designed for the evaluation of NDT maps in dynamic environments. We show that in the context of mapping with known poses the proposed method results in improved consistency and in superior runtime performance, when compared against 3D occupancy grids of the same size and resolution. Additionally, we demonstrate that NDT-OM features real-time performance in a highly dynamic 3D mapping and tracking scenario with centimeter accuracy over a 1.5 km trajectory."} {"_id": "1121fed40a291ab796ee7f2ca1c6e59988087011", "title": "CHAPTER TWO Self-Distancing: Theory, Research, and Current Directions", "text": "When people experience negative events, they often try to understand their feelings to improve the way they feel. Although engaging in this meaning-making process leads people to feel better at times, it frequently breaks down, leading people to ruminate and feel worse. This raises the question: What factors determine whether people\u2019s attempts to \u201cwork through\u201d their negative feelings succeed or fail? In this article, we describe an integrative program of research that has addressed this issue by focusing on the role that self-distancing plays in facilitating adaptive self-reflection. We begin by describing the \u201cself-reflection puzzle\u201d that initially motivated this line of work. Next, we introduce the concept of self-distancing and describe the conceptual framework we developed to explain how this process should facilitate adaptive self-reflection. After describing the early studies that evaluated this framework, we discuss how these findings have been extended to broaden and deepen our understanding of the role that this process plays in self-regulation. We conclude by offering several parting thoughts that integrate the ideas discussed in this chapter. 1. THE SELF-REFLECTION PUZZLE Many people try to understand their feelings when they are upset, under the assumption that doing so will lead them to feel better. 
Indeed, it would seem that many of us reflexively heed Socrates\u2019 advice to \u201cknow thyself\u201d when we experience emotional pain. But are people\u2019s attempts to work through their feelings productive? Do they actually lead people to feel better? A great deal of research has addressed these questions over the past 40 years, and the results reveal a puzzle. On the one hand, several studies suggest that it is indeed helpful for people to reflect on their emotions when they experience distress."} {"_id": "3e789c93315b6bd9bd2493fc12ecb7381a598509", "title": "An Embedded Support Vector Machine", "text": "In this paper we work on the balance between hardware and software implementation of a machine learning algorithm, which belongs to the area of statistical learning theory. We use system-on-chip technology to demonstrate the potential usefulness of moving the critical sections of an algorithm into HW: the so-called hardware/software balance. Our experiments show that the approach can achieve speedups using a complex machine learning algorithm called a support vector machine. The experiments are conducted on a real-time Java virtual machine named the Java Optimized Processor."} {"_id": "adf242eae5e6793b5135fa7468479b8a1ad5008e", "title": "Toward an E-orientation platform: Using hybrid recommendation systems", "text": "Choosing the right career can be difficult for students, because they have to take many elements into consideration in order to be on the best path. Several studies in many disciplines have cooperated to help students make their career decisions. The purpose of this work is to investigate the subject of school orientation, identifying its background and studying some related works. This paper also aims to define the student's profile and to conceive the basis of a recommendation system that will play the advisor's role in helping students choose their career paths."} {"_id": "0b1447fda2c0c1a55df07ab1d405fffcedd0420f", "title": "Personalized News Recommendation: A Review and an Experimental Investigation", "text": "Online news articles, as a new format of press releases, have sprung up on the Internet. With its convenience and recency, more and more people prefer to read news online instead of reading the paper-format press releases. However, a gigantic amount of news events might be released at a rate of hundreds, even thousands per hour. A challenging problem is how to efficiently select specific news articles from a large corpus of newly-published press releases to recommend to individual readers, where the selected news items should match the reader's reading preference as much as possible. This issue refers to personalized news recommendation. Recently, personalized news recommendation has become a promising research direction as the Internet provides fast access to real-time information from multiple sources around the world. Existing personalized news recommendation systems strive to adapt their services to individual users by virtue of both user and news content information. A variety of techniques have been proposed to tackle personalized news recommendation, including content-based, collaborative filtering systems and hybrid versions of these two. In this paper, we provide a comprehensive investigation of existing personalized news recommenders. We discuss several essential issues underlying the problem of personalized news recommendation, and explore possible solutions for performance improvement. 
Further, we provide an empirical study on a collection of news articles obtained from various news websites, and evaluate the effect of different factors for personalized news recommendation. We hope our discussion and exploration will provide insights for researchers who are interested in personalized news recommendation."} {"_id": "77185982b410e6441dc1a2c87c6acc3362ac0f01", "title": "Paying for performance: Performance incentives increase desire for the reward object.", "text": "The current research examines how exposure to performance incentives affects one's desire for the reward object. We hypothesized that the flexible nature of performance incentives creates an attentional fixation on the reward object (e.g., money), which leads people to become more desirous of the rewards. Results from 5 laboratory experiments and 1 large-scale field study provide support for this prediction. When performance was incentivized with monetary rewards, participants reported being more desirous of money (Study 1), put in more effort to earn additional money in an ensuing task (Study 2), and were less willing to donate money to charity (Study 4). We replicated the result with nonmonetary rewards (Study 5). We also found that performance incentives increased attention to the reward object during the task, which in part explains the observed effects (Study 6). A large-scale field study replicated these findings in a real-world setting (Study 7). One laboratory experiment failed to replicate (Study 3)."} {"_id": "66903e95f84767a31beef430b2367492ac9cc750", "title": "Childhood sexual abuse and psychiatric disorder in young adulthood: II. Psychiatric outcomes of childhood sexual abuse.", "text": "OBJECTIVE\nThis is the second in a series of articles that describe the prevalence, correlates, and consequences of childhood sexual abuse (CSA) in a birth cohort of more than 1,000 New Zealand children studied to the age of 18 years. This article examines the associations between reports of CSA at age 18 and DSM-IV diagnostic classifications at age 18.\n\n\nMETHOD\nA birth cohort of New Zealand children was studied at annual intervals from birth to age 16 years. At age 18 years retrospective reports of CSA prior to age 16 and concurrently measured psychiatric symptoms were obtained.\n\n\nRESULTS\nThose reporting CSA had higher rates of major depression, anxiety disorder, conduct disorder, substance use disorder, and suicidal behaviors than those not reporting CSA (p < .002). There were consistent relationships between the extent of CSA and risk of disorder, with those reporting CSA involving intercourse having the highest risk of disorder. These results persisted when findings were adjusted for prospectively measured childhood family and related factors. Similar but less marked relationships between CSA and nonconcurrently measured disorders were found.\n\n\nCONCLUSIONS\nThe findings suggest that CSA, and particularly severe CSA, was associated with increased risk of psychiatric disorder in young adults even when due allowance was made for prospectively measured confounding factors."} {"_id": "c24333af7c45436c9da2121f11bb63444381aaff", "title": "Complementary Skyrmion Racetrack Memory With Voltage Manipulation", "text": "Magnetic skyrmions hold promise as information carriers in next-generation memory and logic devices, owing to their topological stability, small size, and the extremely low current needed to drive them. 
One of the most promising applications of skyrmions is the design of racetrack memory (RM), named Sk-RM, in which skyrmions replace domain walls. However, current studies face some key design challenges, e.g., skyrmion manipulation, data representation, and synchronization. To address these challenges, we propose here a complementary Sk-RM structure with voltage manipulation. Functionality and performance of the proposed design are investigated with micromagnetic simulations."} {"_id": "26f6693ade84ea613d4ffa380406db060dafe3ae", "title": "Power Attack on Small RSA Public Exponent", "text": "In this paper, we present a new attack on RSA when the public exponent is short, for instance 3 or 2^16 + 1, and when the classical exponent randomization is used. This attack works even if blinding is used on the messages. From a Simple Power Analysis (SPA) we study the problem of recovering the RSA private key when non-consecutive bits of it leak from the implementation. We also show that such information can be gained from sliding window implementations not protected against SPA."} {"_id": "c07d5ce6c340b3684eeb1d8e70ffcdf6b70652f9", "title": "Understanding and Supporting Cross-Device Web Search for Exploratory Tasks with Mobile Touch Interactions", "text": "Mobile devices enable people to look for information at the moment when their information needs are triggered. While experiencing complex information needs that require multiple search sessions, users may utilize desktop computers to fulfill information needs started on mobile devices. Under the context of mobile-to-desktop web search, this article analyzes users\u2019 behavioral patterns and compares them to the patterns in desktop-to-desktop web search. Then, we examine several approaches to using Mobile Touch Interactions (MTIs) to infer relevant content so that such content can be used for supporting subsequent search queries on desktop computers. The experimental data used in this article was collected through a user study involving 24 participants and six properly designed cross-device web search tasks. Our experimental results show that (1) users\u2019 mobile-to-desktop search behaviors do significantly differ from desktop-to-desktop search behaviors in terms of information exploration, sense-making and repeated behaviors. (2) MTIs can be employed to predict the relevance of click-through documents, but applying document-level relevant content based on the predicted relevance does not improve search performance. (3) MTIs can also be used to identify the relevant text chunks at a fine-grained subdocument level. Such relevant information can achieve better search performance than the document-level relevant content. In addition, such subdocument relevant information can be combined with document-level relevance to further improve the search performance. However, the effectiveness of these methods relies on the sufficiency of click-through documents. (4) MTIs can also be obtained from the Search Engine Results Pages (SERPs). The subdocument feedback inferred from this set of MTIs even outperforms the MTI-based subdocument feedback from the click-through documents."} {"_id": "b6450a9b119bece8058e7a43c03b21bd6522c220", "title": "Ethical Considerations in Artificial Intelligence Courses", "text": "The recent surge in interest in ethics in artificial intelligence may leave many educators wondering how to address moral, ethical, and philosophical issues in their AI courses. 
As instructors we want to develop curricula that not only prepare students to be artificial intelligence practitioners, but also to understand the moral, ethical, and philosophical impacts that artificial intelligence will have on society. In this article we provide practical case studies and links to resources for use by AI educators. We also provide concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course."} {"_id": "27eaf1c2047fba8a71540496d59f3fec9b7f46bb", "title": "Monocular Visual\u2013Inertial State Estimation With Online Initialization and Camera\u2013IMU Extrinsic Calibration", "text": "There have been increasing demands for developing microaerial vehicles with vision-based autonomy for search and rescue missions in complex environments. In particular, the monocular visual\u2013inertial system (VINS), which consists of only an inertial measurement unit (IMU) and a camera, forms a great lightweight sensor suite due to its low weight and small footprint. In this paper, we address two challenges for rapid deployment of monocular VINS: 1) the initialization problem and 2) the calibration problem. We propose a methodology that is able to initialize velocity, gravity, visual scale, and camera\u2013IMU extrinsic calibration on the fly. Our approach operates in natural environments and does not use any artificial markers. It also does not require any prior knowledge about the mechanical configuration of the system. It is a significant step toward plug-and-play and highly customizable visual navigation for mobile robots. We show through online experiments that our method leads to accurate calibration of camera\u2013IMU transformation, with errors less than 0.02 m in translation and 1\u00b0 in rotation. We compare our method with a state-of-the-art marker-based offline calibration method and show superior results. We also demonstrate the performance of the proposed approach in large-scale indoor and outdoor experiments."} {"_id": "f7e7b91480a5036562a6b2276066590c9cc9c4f1", "title": "Quasi-online reinforcement learning for robots", "text": "This paper describes quasi-online reinforcement learning: while a robot is exploring its environment, in the background a probabilistic model of the environment is built on the fly as new experiences arrive; the policy is trained concurrently based on this model using an anytime algorithm. Prioritized sweeping, directed exploration, and transformed reward functions provide additional speed-ups. The robot quickly learns goal-directed policies from scratch, requiring few interactions with the environment and making efficient use of available computation time. From an outside perspective it learns the behavior online and in real time. We describe comparisons with standard methods and show the individual utility of each of the proposed techniques."} {"_id": "40f5cc97ff200cfa17d7a3ce5a64176841075247", "title": "High speed network traffic analysis with commodity multi-core systems", "text": "Multi-core systems are the current dominant trend in computer processors. However, kernel network layers often do not fully exploit multi-core architectures. This is due to issues such as legacy code, resource competition of the RX-queues in network interfaces, as well as unnecessary memory copies between the OS layers. 
The result is that packet capture, the core operation in every network monitoring application, may even experience performance penalties when adapted to multi-core architectures. This work presents common pitfalls of network monitoring applications when used with multi-core systems, and presents solutions to these issues. We describe the design and implementation of a novel multi-core aware packet capture kernel module that enables monitoring applications to scale with the number of cores. We show that we can achieve high packet capture performance on modern commodity hardware."} {"_id": "35d6448436d4829c774b2cd62fdf1a2b9c81c376", "title": "RF Localization in Indoor Environment", "text": "In this paper, an indoor localization system based on RF power measurements of the Received Signal Strength (RSS) in a WLAN environment is presented. Today, the most viable solution for localization is the RSS fingerprinting based approach, where in order to establish a relationship between RSS values and location, different machine learning approaches are used. The advantage of this approach based on WLAN technology is that it does not need new infrastructure (it reuses already and widely deployed equipment), and the RSS measurement is part of the normal operating mode of wireless equipment. We derive the Cram\u00e9r-Rao Lower Bound (CRLB) of localization accuracy for RSS measurements. In the analysis of the bound we give insight into the localization performance and deployment issues of a localization system, which can help in designing an efficient localization system. To compare different machine learning approaches we developed a localization system based on an artificial neural network, k-nearest neighbors, a probabilistic method based on the Gaussian kernel, and the histogram method. We tested the developed system in a real-world indoor WLAN environment, where realistic RSS measurements were collected. The results were compared experimentally, and an average location estimation error of around 2 meters was obtained."} {"_id": "3890e38fd7018fa6afa80a25e6610a1976a7741c", "title": "Lateral prefrontal cortex and self-control in intertemporal choice", "text": "Disruption of function of left, but not right, lateral prefrontal cortex (LPFC) with low-frequency repetitive transcranial magnetic stimulation (rTMS) increased choices of immediate rewards over larger delayed rewards. rTMS did not change choices involving only delayed rewards or valuation judgments of immediate and delayed rewards, providing causal evidence for a neural lateral-prefrontal cortex\u2013based self-control mechanism in intertemporal choice."} {"_id": "6ebdb88c39787f4242e92504b6d2c60b8421193a", "title": "The association between psychological distance and construal level: evidence from an implicit association test.", "text": "According to construal level theory (N. Liberman, Y. Trope, & E. Stephan, in press; Y. Trope & N. Liberman, 2003), people use a more abstract, high construal level when judging, perceiving, and predicting more psychologically distal targets, and they judge more abstract targets as being more psychologically distal. The present research demonstrated that associations between more distance and higher level of construal also exist on a pure conceptual level. Eight experiments used the Implicit Association Test (IAT; A. G. Greenwald, D. E. McGhee, & J. L. K. Schwartz, 1998) to demonstrate an association between words related to construal level (low vs. 
high) and words related to four dimensions of distance (proximal vs. distal): temporal distance, spatial distance, social distance, and hypotheticality. In addition to demonstrating an association between level of construal and psychological distance, these findings also corroborate the assumption that all 4 dimensions of psychological distance are related to level of construal in a similar way and support the notion that they all are forms of psychological distance."} {"_id": "1f4412f8c0d2e491b2b4bf486d47d448d8f46858", "title": "The Implicit Association Test at Age 7: A Methodological and Conceptual Review", "text": "Among earthly organisms, humans have a unique propensity to introspect or look inward into the contents of their own minds, and to share those observations with others. With the ability to introspect comes the palpable feeling of \"knowing,\" of being objective or certain, of being mentally in control of one's thoughts, aware of the causes of one's thoughts, feelings, and actions, and of making decisions deliberately and rationally. Among the noteworthy discoveries of 20th century psychology was a challenge posed to this assumption of rationality. From the groundbreaking theorizing of Herbert Simon (1955) and the mind-boggling problems posed by Kahneman, Slovic, and Tversky (1982) to striking demonstrations of illusions of control (Wegner, 2002), the paucity of introspection (Nisbett and Wilson, 1977), and the automaticity of everyday thought (Bargh, 1997), psychologists have shown the frailties of the minds of their species. As psychologists have come to grips with the limits of the mind, there has been an increased interest in measuring aspects of thinking and feeling that may not be easily accessed or available to consciousness. Innovations in measurement have been undertaken with the purpose of bringing under scrutiny new forms of cognition and emotion that were previously undiscovered, especially by asking if traditional concepts such as attitude and preference, belief and stereotype, self-concept and self-esteem can be rethought based on what the new measures reveal. These newer measures do not require introspection on the part of the subject. For many constructs this is considered a valuable, if not essential, feature of measurement; for others, avoiding introspection is greeted with suspicion and skepticism. For example, one approach to measuring math ability would be to ask \"how good are you at math?\" whereas an alternative approach is to infer math ability via performance on a math skills test. The former requires introspection to assess the relevant construct, the latter does not. And yet, the latter is accepted"} {"_id": "2698f74468c49b29ac69e193d5aeaa09bb33faea", "title": "Can language restructure cognition? The case for space", "text": "Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies cross-culturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. 
This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains."} {"_id": "755b94b766dee3a34536f6b481a60f0d9f68aa0c", "title": "The Role of Feasibility and Desirability Considerations in Near and Distant Future Decisions: A Test of Temporal Construal Theory", "text": "Temporal construal theory states that distant future situations are construed on a higher level (i.e., using more abstract and central features) than near future situations. Accordingly, the theory suggests that the value associated with the high-level construal is enhanced over delay and that the value associated with the low-level construal is discounted over delay. In goal-directed activities, desirability of the activity's end state represents a high-level construal, whereas the feasibility of attaining this end state represents a low-level construal. Study 1 found that distant future activities were construed on a higher level than near future activities. Studies 2 and 3 showed that decisions regarding distant future activities, compared with decisions regarding near future activities, were more influenced by the desirability of the end state and less influenced by the feasibility of attaining the end state. Study 4 presented students with a real-life choice of academic assignments varying in difficulty (feasibility) and interest (desirability). In choosing a distant future assignment, students placed relatively more weight on the assignment's interest, whereas in choosing a near future assignment, they placed relatively more weight on difficulty. Study 5 found that distant future plans, compared with near future plans, were related to desirability of activities rather than to time constraints."} {"_id": "848e2f107ed6abe3c32c6442bcef4f6215f3f426", "title": "A Capacitor-DAC-Based Technique For Pre-Emphasis-Enabled Multilevel Transmitters", "text": "This brief presents a capacitor digital-to-analog converter (DAC) based technique that is suitable for pre-emphasis-enabled multilevel wireline transmitter design in voltage mode. Detailed comparisons between the proposed technique and conventional direct-coupling-based as well as resistor-DAC-based multilevel transmitter design techniques are given, revealing potential benefits in terms of speed, linearity, implementation complexity, and power consumption. A PAM-4 transmitter with 2-tap feed-forward equalization adopting the proposed technique is implemented in 65-nm CMOS technology. It achieves a 25-Gb/s data rate and an energy efficiency of 2 mW/Gb/s."} {"_id": "10b0ec0b2920a5e9d7b6527e5b87d8fde0b11e86", "title": "Toward defining the preclinical stages of Alzheimer\u2019s disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease", "text": "The pathophysiological process of Alzheimer's disease (AD) is thought to begin many years before the diagnosis of AD dementia. This long \"preclinical\" phase of AD would provide a critical opportunity for therapeutic intervention; however, we need to further elucidate the link between the pathological cascade of AD and the emergence of clinical symptoms. 
The National Institute on Aging and the Alzheimer's Association convened an international workgroup to review the biomarker, epidemiological, and neuropsychological evidence, and to develop recommendations to determine the factors which best predict the risk of progression from \"normal\" cognition to mild cognitive impairment and AD dementia. We propose a conceptual framework and operational research criteria, based on the prevailing scientific evidence to date, to test and refine these models with longitudinal clinical research studies. These recommendations are solely intended for research purposes and do not have any clinical implications at this time. It is hoped that these recommendations will provide a common rubric to advance the study of preclinical AD, and ultimately, aid the field in moving toward earlier intervention at a stage of AD when some disease-modifying therapies may be most efficacious."} {"_id": "6d2465be30dbcbf9b76509eed81cd5f32c4f8618", "title": "Fully integrated 54nm STT-RAM with the smallest bit cell dimension for high density memory application", "text": "A compact STT (Spin-Transfer Torque) RAM with a 14F2 cell was integrated using modified DRAM processes at the 54nm technology node. The basic switching performance (R-H and R-V) of the MTJs and the current drivability of the access transistors were characterized at the single bit cell level. Through the direct access capability and normal chip operation in our STT-RAM test blocks, the switching behavior of bit cell arrays was also analyzed statistically. From this data and from the scaling trend of STT-RAM, we estimate that a unit cell dimension below 30nm can be smaller than 8F2."} {"_id": "f70c724a299025fa58127b4bcd3426c565e1e7be", "title": "Single-Image Noise Level Estimation for Blind Denoising", "text": "Noise level is an important parameter to many image processing applications. For example, the performance of an image denoising algorithm can be much degraded due to poor noise level estimation. Most existing denoising algorithms simply assume the noise level is known, which largely prevents them from practical use. Moreover, even with the given true noise level, these denoising algorithms still cannot achieve the best performance, especially for scenes with rich texture. In this paper, we propose a patch-based noise level estimation algorithm and suggest that the noise level parameter should be tuned according to the scene complexity. Our approach includes the process of selecting low-rank patches without high frequency components from a single noisy image. The selection is based on the gradients of the patches and their statistics. Then, the noise level is estimated from the selected patches using principal component analysis. Because the true noise level does not always provide the best performance for nonblind denoising algorithms, we further tune the noise level parameter for nonblind denoising. Experiments demonstrate that both the accuracy and stability are superior to the state-of-the-art noise level estimation algorithm for various scenes and noise levels."} {"_id": "16bf6a111ada81d837da3de7227def58baa6b95b", "title": "Causal Structure Learning and Inference: A Selective Review", "text": "In this paper we give a review of recent causal inference methods. First, we discuss methods for causal structure learning from observational data when confounders are not present and have a close look at methods for exact identifiability. 
We then turn to methods which allow for a mix of observational and interventional data, where we also touch on active learning strategies. We also discuss methods which allow arbitrarily complex structures of hidden variables. Second, we present approaches for estimating the interventional distribution and causal effects given the (true or estimated) causal structure. We close with a note on available software and two examples on real data."} {"_id": "ec5bfa1d267d0797af46b6cc6f69f91748ae27c1", "title": "Activity recognition using a single accelerometer placed at the wrist or ankle.", "text": "PURPOSE\nLarge physical activity surveillance projects such as the UK Biobank and NHANES are using wrist-worn accelerometer-based activity monitors that collect raw data. The goal is to increase wear time by asking subjects to wear the monitors on the wrist instead of the hip, and then to use information in the raw signal to improve activity type and intensity estimation. The purpose of this work was to obtain an algorithm to process wrist and ankle raw data and to classify behavior into four broad activity classes: ambulation, cycling, sedentary, and other activities.\n\n\nMETHODS\nParticipants (N = 33) wearing accelerometers on the wrist and ankle performed 26 daily activities. The accelerometer data were collected, cleaned, and preprocessed to extract features that characterize 2-, 4-, and 12.8-s data windows. Feature vectors encoding information about frequency and intensity of motion extracted from analysis of the raw signal were used with a support vector machine classifier to identify a subject's activity. Results were compared with categories classified by a human observer. Algorithms were validated using a leave-one-subject-out strategy. The computational complexity of each processing step was also evaluated.\n\n\nRESULTS\nWith 12.8-s windows, the proposed strategy showed high classification accuracies for ankle data (95.0%) that decreased to 84.7% for wrist data. Shorter (4 s) windows only minimally decreased performances of the algorithm on the wrist to 84.2%.\n\n\nCONCLUSIONS\nA classification algorithm using 13 features shows good classification into the four classes given the complexity of the activities in the original data set. The algorithm is computationally efficient and could be implemented in real time on mobile devices with only 4-s latency."} {"_id": "5cbf2f236ac35434eda20239237ed84db491a28d", "title": "Toward Rapid Development of Multi-Party Virtual Human Negotiation Scenarios", "text": "This paper reports on an ongoing effort to enable the rapid development of multi-party virtual human negotiation scenarios. We present a case study in which a new scenario supporting negotiation between two human role players and two virtual humans was developed over a period of 12 weeks. We discuss the methodology and development process that were employed, from storyline design through role play and iterative development of the virtual humans\u2019 semantic and task representations and natural language processing capabilities. We analyze the effort, expertise, and time required for each development step, and discuss opportunities to further streamline the development process."} {"_id": "452bc3a5f547ba1c1678b20552e3cd465870a33a", "title": "Deep Convolutional Neural Networks [Lecture Notes]", "text": "Neural networks are a subset of the field of artificial intelligence (AI). 
The predominant types of neural networks used for multidimensional signal processing are deep convolutional neural networks (CNNs). The term deep refers generically to networks having from a \"few\" to several dozen or more convolution layers, and deep learning refers to methodologies for training these systems to automatically learn their functional parameters using data representative of a specific problem domain of interest. CNNs are currently being used in a broad spectrum of application areas, all of which share the common objective of being able to automatically learn features from (typically massive) data bases and to generalize their responses to circumstances not encountered during the learning phase. Ultimately, the learned features can be used for tasks such as classifying the types of signals the CNN is expected to process. The purpose of this \"Lecture Notes\" article is twofold: 1) to introduce the fundamental architecture of CNNs and 2) to illustrate, via a computational example, how CNNs are trained and used in practice to solve a specific class of problems."} {"_id": "40385eb9d1464bea8f13fb24ca58aee3e36bd634", "title": "Nearest prototype classifier designs: An experimental study", "text": "We compare eleven methods for finding prototypes upon which to base the nearest prototype classifier. Four methods for prototype selection are discussed: the Wilson+Hart condensation/error-editing method, and three types of combinatorial search (random search, genetic algorithm, and tabu search). Seven methods for prototype extraction are discussed: unsupervised vector quantization, supervised learning vector quantization (with and without training counters), decision surface mapping, a fuzzy version of vector quantization, c-means clustering, and bootstrap editing. These eleven methods can be usefully divided in two other ways: by whether they employ pre- or post-supervision; and by whether the number of prototypes found is user-defined or \u2018\u2018automatic.\u2019\u2019 Generalization error rates of the 11 methods are estimated on two synthetic and two real data sets. Offering the usual disclaimer that these are just a limited set of experiments, we feel confident in asserting that presupervised, extraction methods offer a better chance for success to the casual user than postsupervised, selection schemes. Finally, our calculations do not suggest that methods which find the \u2018\u2018best\u2019\u2019 number of prototypes \u2018\u2018automatically\u2019\u2019 are superior to methods for which the user simply specifies the number of prototypes. \u00a9 2001 John Wiley & Sons, Inc."} {"_id": "dbbbddfa44c0a091b7e15b7989fbf8af4fad3a78", "title": "Synthesizing evidence in software engineering research", "text": "Synthesizing the evidence from a set of studies that spans many countries and years, and that incorporates a wide variety of research methods and theoretical perspectives, is probably the single most challenging task of performing a systematic review. In this paper, we perform a tertiary review to assess the types and methods of research synthesis in systematic reviews in software engineering. Almost half of the 31 studies included in our review did not contain any synthesis; of the ones that did, two thirds performed a narrative or a thematic synthesis. The results show that, despite the focus on systematic reviews, there is, currently, limited attention to research synthesis in software engineering. 
This needs to change, and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice."} {"_id": "b5aa59085ebd6b0a23e0941efc2ab10efb7474bc", "title": "Spectral\u2013Spatial Classification and Shape Features for Urban Road Centerline Extraction", "text": "This letter presents a two-step method for urban main road extraction from high-resolution remotely sensed imagery by integrating spectral-spatial classification and shape features. In the first step, spectral-spatial classification segments the imagery into two classes, i.e., the road class and the nonroad class, using path openings and closings. The local homogeneity of the gray values obtained by local Geary's C is then fused with the road class. In the second step, the road class is refined by using shape features. The experimental results indicated that the proposed method was able to achieve a comparatively good performance in urban main road extraction."} {"_id": "29500203bfb404e5f2808c0342410a6b91e75e31", "title": "Automatic Optimization Computational Method for Unconventional S.W.A.T.H. Ships Resistance", "text": "The paper illustrates the main theoretical and computational aspects of an automatic computer-based procedure for the parametric shape optimization of a particular unconventional hull typology: that for a catamaran S.W.A.T.H. ship. The goal of the integrated computational procedure is to find the best shape of the submerged hulls of a new U.S.V. (Unmanned Surface Vehicle) S.W.A.T.H. (Small Waterplane Area Twin Hull) vessel, in terms of minimum wave pattern resistance. After dealing with the theoretical aspects, the paper presents the numerical aspects of the main software module of the automatic procedure, which integrates a parametric generation routine for innovative and unconventional S.W.A.T.H. (Small Waterplane Area Twin Hull) vessel geometry, a multi-objective, globally convergent and constrained optimization algorithm, and a Computational Fluid Dynamic (C.F.D.) solver. The integrated process is able to find the best shape of the submerged hull of the vessel, subject to the total displaced volume constraint. The hydrodynamic computation is carried out by means of a free surface potential flow method and is used to find the value of the wave resistance of each hull variant. Results of the application of the described computational procedure are presented for two optimization cases and the obtained best shapes are compared with a conventional one, featuring a typical torpedo-shaped body, proving the effectiveness of the method in reducing the resistance by a considerable extent, on the order of 40 percent. Keywords\u2014S.W.A.T.H., B.E.M., Wave Resistance, Parametric Modeling, Optimization, Genetic Algorithms"} {"_id": "244fa023adbbff31806ded21d5b2f36afd3ff988", "title": "BRAD 1.0: Book reviews in Arabic dataset", "text": "The availability of rich datasets is a pre-requisite for proposing robust sentiment analysis systems. A variety of such datasets exists in the English language. However, they are rare or nonexistent for the Arabic language, except for the recent LABR dataset, which consists of a little over 63,000 book reviews extracted from Goodreads.com. We introduce BRAD 1.0, the largest Book Reviews in Arabic Dataset for sentiment analysis and machine learning applications. BRAD comprises almost 510,600 book records. 
Each record corresponds to a single review and contains the review text in Arabic and the reviewer's rating on a scale of 1 to 5 stars. In this paper, we present and describe the properties of BRAD. Further, we provide two versions of BRAD: the complete unbalanced dataset and the balanced version of BRAD. Finally, we implement four sentiment analysis classifiers based on this dataset and report our findings. When training and testing the classifiers on BRAD as opposed to LABR, an improvement of 46% is reported. The highest accuracy attained is 91%. Our core contribution is to make this benchmark dataset available and accessible to the Arabic-language research community."} {"_id": "90a2a7a3d22c58c57e3b1a4248c7420933d7fe2f", "title": "An integrated approach to testing complex systems", "text": "The increasing complexity of today\u2019s testing scenarios for complex systems demands an integrated, open, and flexible approach to support the management of the overall test process. \u201cClassical\u201d model-based testing approaches, where a complete and precise formal specification serves as a reference for automatic test generation, are often impractical. Reasons are, on the one hand, the absence of a suitable formal specification. As complex systems are composed of several components, either hardware or software, often pre-built and third party, it is unrealistic to assume that a formal specification exists a priori. On the other hand, a sophisticated test execution environment is needed that can handle distributed test cases. This is because the test actions and observations can take place on different subsystems of the overall system. This thesis presents a novel approach to the integrated testing of complex systems. Our approach offers a coarse-grained test environment, realized in terms of a component-based test design on top of a library of elementary but intuitively understandable test case fragments. The relations between the fragments are treated orthogonally, delivering a test design and execution environment enhanced by means of light-weight formal verification methods. In this way we are able to shift the test design issues from experts of the overall system and the test tools used to experts of the system\u2019s logic only. We illustrate the practical usability of our approach by means of industrial case studies in two different application domains: Computer Telephony Integrated solutions and Web-based applications. As an enhancement of our integrated test approach we provide an algorithm for generating approximate models for complex systems a posteriori. This is done by optimizing a standard machine learning algorithm according to domain-specific structural properties, i.e., properties like prefix-closeness, input-determinism, as well as independence and symmetries of events. The resulting models can never be exact, i.e., they never reflect the complete and correct behaviour of the considered system. Nevertheless, they can be useful in practice to represent the cumulative knowledge of the system in a consistent description."} {"_id": "8df383aae16ce1003d57184d8e4bf729f265ab40", "title": "Axial-Ratio-Bandwidth Enhancement of a Microstrip-Line-Fed Circularly Polarized Annular-Ring Slot Antenna", "text": "The design of a new microstrip-line-fed wideband circularly polarized (CP) annular-ring slot antenna (ARSA) is proposed. Compared with existing ring slot antennas, the ARSAs designed here possess much larger CP bandwidths. 
The main features of the proposed design include a wider ring slot, a pair of grounded hat-shaped patches, and a deformed bent feeding microstrip line. The ARSAs designed using FR4 substrates in the L and S bands have 3-dB axial-ratio bandwidths (ARBWs) of as large as 46% and 56%, respectively, whereas the one designed using an RT5880 substrate in the L band achieves 65%. In these 3-dB axial-ratio bands, impedance matching with VSWR \u2264 2 is also achieved."} {"_id": "1b0af2ba6f22b43f9115c03a52e515b5d1e358d2", "title": "A Practical Approach to Differential Private Learning", "text": "Applying differential private learning to real-world data is currently impractical. Differential privacy (DP) introduces extra hyper-parameters for which no thorough good practices exist, while manually tuning these hyper-parameters on private data results in low privacy guarantees. Furthermore, the exact guarantees provided by differential privacy for machine learning models are not well understood. Current approaches use undesirable post-hoc privacy attacks on models to assess privacy guarantees. To improve this situation, we introduce three tools to make DP machine learning more practical. First, two sanity checks for differential private learning are proposed. These sanity checks can be carried out in a centralized manner before training, do not involve training on the actual data, and are easy to implement. Additionally, methods are proposed to reduce the effective number of tuneable privacy parameters by making use of an adaptive clipping bound. Lastly, existing methods regarding large batch training and differential private learning are combined. It is demonstrated that this combination improves model performance within a constant privacy budget."} {"_id": "d82c363d46e2d49028776da1092674efe4282d39", "title": "Privacy as a fuzzy concept: A new conceptualization of privacy for practitioners", "text": "\u2022 Users may freely distribute the URL that is used to identify this publication. \u2022 Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. \u2022 Users may use extracts from the document in line with the concept of \u2018fair dealing\u2019 under the Copyright, Designs and Patents Act 1988. \u2022 Users may not further distribute the material nor use it for the purposes of commercial gain."} {"_id": "e65881a89633b8d4955e9314e84b943e155da6a9", "title": "TG13 flowchart for the management of acute cholangitis and cholecystitis.", "text": "We propose a management strategy for acute cholangitis and cholecystitis according to the severity assessment. For Grade I (mild) acute cholangitis, initial medical treatment including the use of antimicrobial agents may be sufficient for most cases. For non-responders to initial medical treatment, biliary drainage should be considered. For Grade II (moderate) acute cholangitis, early biliary drainage should be performed along with the administration of antibiotics. For Grade III (severe) acute cholangitis, appropriate organ support is required. After hemodynamic stabilization has been achieved, urgent endoscopic or percutaneous transhepatic biliary drainage should be performed. In patients with Grade II (moderate) and Grade III (severe) acute cholangitis, treatment for the underlying etiology including endoscopic, percutaneous, or surgical treatment should be performed after the patient's general condition has been improved. 
In patients with Grade I (mild) acute cholangitis, treatment for the etiology, such as endoscopic sphincterotomy for choledocholithiasis, might be performed simultaneously, if possible, with biliary drainage. Early laparoscopic cholecystectomy is the first-line treatment in patients with Grade I (mild) acute cholecystitis, while in patients with Grade II (moderate) acute cholecystitis, delayed/elective laparoscopic cholecystectomy after initial medical treatment with an antimicrobial agent is the first-line treatment. In non-responders to initial medical treatment, gallbladder drainage should be considered. In patients with Grade III (severe) acute cholecystitis, appropriate organ support in addition to initial medical treatment is necessary. Urgent or early gallbladder drainage is recommended. Elective cholecystectomy can be performed after the improvement of the acute inflammatory process. Free full-text articles and a mobile application of TG13 are available via http://www.jshbps.jp/en/guideline/tg13.html."} {"_id": "08e1f479de40d66711f89e0926bc3f6d14f3dbc0", "title": "Disease Trajectory Maps", "text": "Medical researchers are coming to appreciate that many diseases are in fact complex, heterogeneous syndromes composed of subpopulations that express different variants of a related complication. Longitudinal data extracted from individual electronic health records (EHR) offer an exciting new way to study subtle differences in the way these diseases progress over time. In this paper, we focus on answering two questions that can be asked using these databases of longitudinal EHR data. First, we want to understand whether there are individuals with similar disease trajectories and whether there are a small number of degrees of freedom that account for differences in trajectories across the population. Second, we want to understand how important clinical outcomes are associated with disease trajectories. To answer these questions, we propose the Disease Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional representations of sparse and irregularly sampled longitudinal data. We propose a stochastic variational inference algorithm for learning the DTM that allows the model to scale to large modern medical datasets. To demonstrate the DTM, we analyze data collected on patients with the complex autoimmune disease, scleroderma. We find that DTM learns meaningful representations of disease trajectories and that the representations are significantly associated with important clinical outcomes."} {"_id": "95e873c3f64a9bd8346f5b5da2e4f14774536834", "title": "Wideband H-Plane Horn Antenna Based on Ridge Substrate Integrated Waveguide (RSIW)", "text": "A substrate integrated waveguide (SIW) H-plane sectoral horn antenna, with significantly improved bandwidth, is presented. A tapered ridge, consisting of a simple arrangement of vias on the flared side wall within the multilayer substrate, is introduced to enlarge the operational bandwidth. A simple feed configuration is suggested to provide the propagating wave for the antenna structure. The proposed antenna is simulated by two well-known full-wave packages, Ansoft HFSS and CST Microwave Studio, based on distinct numerical methods. Close agreement between the simulation results is reached. 
The designed antenna shows good radiation characteristics and a low VSWR, lower than 2.5, over the whole frequency range of 18-40 GHz."} {"_id": "8c030a736512456e9fd8d53763cbfcac0c014ab3", "title": "Multiscale Approaches To Music Audio Feature Learning", "text": "Content-based music information retrieval tasks are typically solved with a two-stage approach: features are extracted from music audio signals, and are then used as input to a regressor or classifier. These features can be engineered or learned from data. Although the former approach was dominant in the past, feature learning has started to receive more attention from the MIR community in recent years. Recent results in feature learning indicate that simple algorithms such as K-means can be very effective, sometimes surpassing more complicated approaches based on restricted Boltzmann machines, autoencoders or sparse coding. Furthermore, there has been increased interest in multiscale representations of music audio recently. Such representations are more versatile because music audio exhibits structure on multiple timescales, which are relevant for different MIR tasks to varying degrees. We develop and compare three approaches to multiscale audio feature learning using the spherical K-means algorithm. We evaluate them in an automatic tagging task and a similarity metric learning task on the Magnatagatune dataset."} {"_id": "a4ac001d1a11df51e05b1651d497c4e56dec3f51", "title": "Training Simplification and Model Simplification for Deep Learning: A Minimal Effort Back Propagation Method", "text": "We propose a simple yet effective technique to simplify the training and the resulting model of neural networks. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-k elements (in terms of magnitude) are kept. As a result, only k rows or columns (depending on the layout) of the weight matrix are modified, leading to a linear reduction in the computational cost. Based on the sparsified gradients, we further simplify the model by eliminating the rows or columns that are seldom updated, which will reduce the computational cost both in the training and decoding, and potentially accelerate decoding in real-world applications. Surprisingly, experimental results demonstrate that most of the time we only need to update fewer than 5% of the weights at each back propagation pass. More interestingly, the accuracy of the resulting models is actually improved rather than degraded, and a detailed analysis is given. The model simplification results show that we could adaptively simplify the model, which could often be reduced by around 9x, without any loss of accuracy or even with improved accuracy."} {"_id": "5aab22bf1f18e28fbb15af044f4dcf6f60524eb2", "title": "SeemGo: Conditional Random Fields Labeling and Maximum Entropy Classification for Aspect Based Sentiment Analysis", "text": "This paper describes our SeemGo system for the task of Aspect Based Sentiment Analysis in SemEval-2014. The subtask of aspect term extraction is cast as a sequence labeling problem modeled with Conditional Random Fields, obtaining an F-score of 0.683 for Laptops and 0.791 for Restaurants by exploiting both word-based features and context features. The other three subtasks are solved by the Maximum Entropy model, with the occurrence counts of unigram and bigram words of each sentence as features. 
The subtask of aspect category detection obtains the best result when applying the Boosting method on the Maximum Entropy model, with a precision of 0.869 for Restaurants. The Maximum Entropy model also shows good performance in the subtasks of both aspect term and aspect category polarity classification."} {"_id": "8188d1bac84d020595115a695ba436ceeb5437e0", "title": "Machine learning approach for automated screening of malaria parasite using light microscopic images.", "text": "The aim of this paper is to address the development of computer-assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection, and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker-controlled watershed transformation, and subsequently a total of ninety-six features describing the shape-size and texture of erythrocytes are extracted with respect to parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating the six classes. Here a feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while SVM provides its highest accuracy, i.e., 83.5%, with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared toward malaria parasite classification."} {"_id": "c29920f73686404a580fa2f9bff34548fd73125a", "title": "Multi-factor Authentication Security Framework in Cloud Computing", "text": "Data security is one of the most critical issues in a cloud computing environment. Authentication is a key technology for information security; it is a mechanism to establish proof of identity to get access to information in the system. Traditional password authentication does not provide enough security for information in a cloud computing environment against the most modern means of attack. In this paper, we propose a new multi-factor authentication framework for cloud computing. The features of various access control mechanisms are discussed, and a novel framework of access control is proposed for cloud computing, which provides multi-step and multi-factor authentication of a user. The proposed model is a well-organized and provably secure solution of access control for externally hosted applications."} {"_id": "57bb4066753a13f3c0f289941d3557ae144c9b6a", "title": "A Hybrid Model to Detect Phishing-Sites Using Supervised Learning Algorithms", "text": "Over the last decades, online technologies have revolutionized the modern computing world. However, as a result, security threats are increasing rapidly. A huge community uses online services, and everything from chatting to banking is done via online transactions. Customers of web technologies face various security threats, and phishing is one of the most important threats that need to be addressed. 
Therefore, security mechanisms must be enhanced. An attacker uses a phishing attack to obtain a victim's credential information, such as bank account numbers, passwords, or any other sensitive data, by mimicking the website of an enterprise, while the victim remains unaware that the website is fraudulent. In the literature, several approaches have been proposed for detecting and filtering phishing attacks. However, researchers are still searching for a solution that can provide better results to secure users from phishing attacks. Phishing websites have certain characteristics and patterns, and identifying those features can help us to detect phishing. Identifying such features is a classification task and can be solved using data mining techniques. In this paper, we present a hybrid classification model to overcome the phishing-sites problem. To evaluate this model, we have used a dataset from the UCI repository which contains 30 attributes and 11,055 instances. The experimental results showed that our proposed hybrid model outperforms in terms of high accuracy and a low error rate."} {"_id": "8462ca8f2459bcf35378d6dbb10dce70a6fba70a", "title": "Top 10 algorithms in data mining", "text": "This paper presents the top 10 data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM) in December 2006: C4.5, k-Means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, and CART. These top 10 algorithms are among the most influential data mining algorithms in the research community. With each algorithm, we provide a description of the algorithm, discuss the impact of the algorithm, and review current and further research on the algorithm. These 10 algorithms cover classification, clustering, statistical learning, association analysis, and link mining, which are all among the most important topics in data mining research and development."} {"_id": "12a376e621d690f3e94bce14cd03c2798a626a38", "title": "Rapid Object Detection using a Boosted Cascade of Simple Features", "text": "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \u201cIntegral Image\u201d which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers [6]. The third contribution is a method for combining increasingly more complex classifiers in a \u201ccascade\u201d which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object-specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection."} {"_id": "4d676af094b50974bd702c1e49d2ca363131e30b", "title": "An Associative Classification Data Mining Approach for Detecting Phishing Websites", "text": "Phishing websites are fake websites that are created by dishonest people to mimic webpages of real websites. 
Victims of phishing attacks may expose their sensitive financial information to the attacker, who might use this information for financial and criminal activities. Various approaches have been proposed to detect phishing websites, among which approaches that utilize data mining techniques have been shown to be more effective. The main goal of data mining is to analyze a large set of data to identify unsuspected relations and extract understandable, useful patterns. Associative Classification (AC) is a promising data mining approach that integrates association rule mining and classification to build classification models (classifiers). This paper proposes a new AC algorithm called Phishing Associative Classification (PAC) for detecting phishing websites. PAC employs a novel methodology in constructing the classifier, which results in moderate-size classifiers. The algorithm improves the effectiveness and efficiency of a known algorithm called MCAR by introducing a new prediction procedure and adopting a different rule pruning procedure. The conducted experiments compared PAC with four well-known data mining algorithms: a covering algorithm (Prism), a decision tree (C4.5), associative classification (CBA), and MCAR. Experiments are performed on a dataset that consists of 1,010 websites. Each website is represented using 17 features categorized into 4 sets. The features are extracted from the website contents and URL. The results on each feature set show that PAC is either equivalent to or more effective than the compared algorithms. When all features are considered, PAC outperformed the compared algorithms and correctly identified 99.31% of the tested websites. Furthermore, PAC produced fewer rules than MCAR, and is therefore more efficient."} {"_id": "33b53abdf2824b2cb0ee083c284000df4343a33e", "title": "Using Stories to Teach Human Values to Artificial Agents", "text": "Value alignment is a property of an intelligent agent indicating that it can only pursue goals that are beneficial to humans. Successful value alignment should ensure that an artificial general intelligence cannot intentionally or unintentionally perform behaviors that adversely affect humans. This is problematic in practice since such goals are difficult for human programmers to exhaustively enumerate. For successful value alignment, we argue that values should be learned. In this paper, we hypothesize that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate. We describe preliminary work on using stories to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior."} {"_id": "af930bef4d27fc83d0cb138b7e3351f26d696284", "title": "Supervised and Unsupervised methods to detect Insider Threat from Enterprise Social and Online Activity Data", "text": "Insider threat is a significant security risk for organizations, and detection of insider threat is of paramount concern to organizations. In this paper, we attempt to discover insider threat by analyzing enterprise social and online activity data of employees. To this end, we process and extract relevant features that are possibly indicative of insider threat behavior. These include features extracted from social data, including email communication patterns and content, and online activity data such as web browsing patterns, email frequency, and file and machine access patterns. 
Subsequently, we take two approaches to detect insider threat: (i) an unsupervised approach where we identify statistically abnormal behavior with respect to these features using state-of-the-art anomaly detection methods, and (ii) a supervised approach where we use labels indicating when employees quit the company as a proxy for insider threat activity to design a classifier. We test our approach on a real-world data set with artificially injected insider threat events. We obtain a ROC score of 0.77 for the unsupervised approach, and a classification accuracy of 73.4% for the supervised approach. These results indicate that our proposed approaches are fairly successful in identifying insider threat events. Finally, we build a visualization dashboard that enables managers and HR personnel to quickly identify employees with high threat risk scores, which will enable them to take suitable preventive measures and limit security risk."} {"_id": "30ebfab0dc11c065397ee22d1ad7c366a7638646", "title": "An analytical framework for data stream mining techniques based on challenges and requirements", "text": "A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Real-time surveillance systems, telecommunication systems, sensor networks and other dynamic environments are such examples. The imminent need for turning such data into useful information and knowledge augments the development of systems, algorithms and frameworks that address streaming challenges. The storage, querying and mining of such data sets are highly computationally challenging tasks. Mining data streams is concerned with extracting knowledge structures represented in models and patterns in non-stopping streams of information. Generally, the two main challenges are designing fast mining methods for data streams and the need to promptly detect changing concepts and data distributions because of the highly dynamic nature of data streams. The goal of this article is to analyze and classify the application of diverse data mining techniques in different challenges of data stream mining. In this paper, we present the theoretical foundations of data stream analysis and propose an analytical framework for data stream mining techniques."} {"_id": "355e35184d084abc712c5bfcceafc0fdfe78ceef", "title": "BLIS: A Framework for Rapidly Instantiating BLAS Functionality", "text": "The BLAS-like Library Instantiation Software (BLIS) framework is a new infrastructure for rapidly instantiating Basic Linear Algebra Subprograms (BLAS) functionality. Its fundamental innovation is that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed and optimized in terms of very simple kernels. While others have had similar insights, BLIS reduces the necessary kernels to what we believe is the simplest set that still supports the high performance that the computational science community demands. Higher-level framework code is generalized and implemented in ISO C99 so that it can be reused and/or reparameterized for different operations (and different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. 
Users of BLAS-dependent applications are given a choice of using the traditional Fortran-77 BLAS interface, a generalized C interface, or any other higher-level interface that builds upon this latter API. Preliminary performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL).\n Replicated Computational Results Certified: This article was subjected to an optional supplementary review process in which the computational results leading to the main conclusions of the paper were independently replicated. A separate report with the results of this process is published in this issue (see: http://dx.doi.org/10.1145/2738033). The certification process is described in Michael Heroux's editorial: http://dx.doi.org/10.1145/2743015."} {"_id": "28f0e0d3783659bc9adb2cec56f19b1f90cdd2be", "title": "Video2GIF: Automatic Generation of Animated GIFs from Video", "text": "We introduce the novel problem of automatically generating animated GIFs from video. GIFs are short looping videos with no sound, a perfect combination of image and video that really captures our attention. GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism. We pose the question: Can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user-generated GIF content? We propose a Robust Deep RankNet that, given a video, generates a ranked list of its segments according to their suitability as a GIF. We train our model to learn what visual content is often selected for GIFs by using over 100K user-generated GIFs and their corresponding video sources. We effectively deal with the noisy web data by proposing a novel adaptive Huber loss in the ranking formulation. We show that our approach is robust to outliers and picks up several patterns that are frequently present in popular animated GIFs. On our new large-scale benchmark dataset, we show the advantage of our approach over several state-of-the-art methods."} {"_id": "8cd3681c146ae525cc7f485f56cf6e432a36da28", "title": "SoilJ: An ImageJ Plugin for the Semiautomatic Processing of Three-Dimensional X-ray Images of Soils", "text": "Noninvasive three- and four-dimensional X-ray imaging approaches have proved to be valuable analysis tools for vadose zone research. One of the main bottlenecks for applying X-ray imaging to data sets with a large number of soil samples is the relatively large amount of time and expertise needed to extract quantitative data from the respective images. SoilJ is a plugin for the free and open imaging software ImageJ that aims at automating the corresponding processing steps for cylindrical soil columns. It includes modules for automatic column outline recognition, correction of image intensity bias, image segmentation, extraction of particulate organic matter and roots, soil surface topography detection, as well as morphology and percolation analyses. In this study, the functionality and precision of some key SoilJ features were demonstrated on five different image data sets of soils. SoilJ has proved to be useful for strongly decreasing the amount of time required for image processing of large image data sets. At the same time, it allows researchers with little experience in image processing to make use of X-ray imaging methods. The SoilJ source code is freely available and may be modified and extended at will by its users. 
It is intended to stimulate further community-driven development of this software."} {"_id": "c696f0584a45f56bff31399fb339aa9b6a38baff", "title": "Trade-Offs in PMU Deployment for State Estimation in Active Distribution Grids", "text": "Monitoring systems are expected to play a major role in active distribution grids, and the design of the measurement infrastructure is a critical element for an effective operation. The use of any available and newly installed, though heterogeneous, metering device providing more accurate and real-time measurement data offers a new paradigm for the distribution grid monitoring system. In this paper the authors study the meter placement problem for the measurement infrastructure of an active distribution network, where heterogeneous measurements provided by Phasor Measurement Units (PMUs) and other advanced measurement systems such as Smart Metering systems are used in addition to measurements that are typical of distribution networks, in particular substation measurements and a-priori knowledge. This work aims at defining a design approach for finding the optimal measurement infrastructure for an active distribution grid. The design problem is posed in terms of a stochastic optimization with the goal of bounding the overall uncertainty of the state estimation using heterogeneous measurements while minimizing the investment cost. The proposed method is also designed for computational efficiency so as to cover a wide set of scenarios."} {"_id": "ad93d3b55fb94827c4df45e9fc67c55a7d90d00b", "title": "Relative clustering validity criteria: A comparative overview", "text": "Many different relative clustering validity criteria exist that are very useful in practice as quantitative measures for evaluating the quality of data partitions, and new criteria are still being proposed from time to time. These criteria are endowed with particular features that may make each of them able to outperform others in specific classes of problems. In addition, they may have completely different computational requirements. It is therefore a hard task for the user to choose a specific criterion when he or she faces such a variety of possibilities. For this reason, a relevant issue within the field of clustering analysis consists of comparing the performances of existing validity criteria and, eventually, that of a new criterion to be proposed. In spite of this, the comparison paradigm traditionally adopted in the literature is subject to some conceptual limitations. The present paper describes an alternative, possibly complementary methodology for comparing clustering validity criteria and uses it to make an extensive comparison of the performances of 40 criteria over a collection of 962,928 partitions derived from five well-known clustering algorithms and 1080 different data sets of a given class of interest. A detailed review of the relative criteria under investigation is also provided that includes an original comparative asymptotic analysis of their computational complexities. This work is intended to be a complement of the classic study reported in 1985 by Milligan and Cooper as well as a thorough extension of a preliminary paper by the authors themselves. \u00a9 2010 Wiley Periodicals, Inc.
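To illustrate how a relative validity criterion is applied in practice, here is a minimal sketch assuming scikit-learn, with the silhouette criterion standing in for the 40 criteria surveyed above: each candidate partition is scored, and the best-scoring one is kept.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Score each candidate partition with a relative validity criterion.
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(f"best k by silhouette: {best_k}")
```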
"} {"_id": "90a762dfd34fe9e389f30318b9b9cb1571d9766e", "title": "A multi-resonant gate driver for Very-High-Frequency (VHF) resonant converters", "text": "This paper presents the design and implementation of a Very-High-Frequency (VHF) multi-resonant gate drive circuit. The design procedure is greatly simplified compared with some VHF self-oscillating multi-resonant gate drivers presented in previous works. The proposed circuit has the potential to reduce the long start-up time required in a self-oscillating resonant gate drive circuit and to better utilize the fast transient capability of VHF converters. A prototype resonant gate driver is demonstrated and reduces the gate driving loss of a 20 MHz, 32 W class-E power amplifier with an Si MOSFET by 60%."} {"_id": "5619de6b87691e2f2b9ea6665aee766d3eec6669", "title": "Automated segmentation of cell nuclei in PAP smear images", "text": "In this paper an automated method for cell nucleus segmentation in PAP smear images is presented. The method combines global knowledge about cell and nucleus appearance with the local characteristics of the nucleus area, in order to achieve an accurate nucleus boundary. Filters and morphological operators in all three channels of a color image result in the determination of the locations of nuclei in the image, even in cases where cell overlapping occurs. The nucleus boundary is determined by a deformable model. The results are very promising, even when images with a high degree of overlapping are used."} {"_id": "5c04b3178af0cc5f367c833030c118701c210229", "title": "Two-Stream Neural Networks for Tampered Face Detection", "text": "We propose a two-stream network for face tampering detection. We train GoogLeNet to detect tampering artifacts in a face classification stream, and train a patch-based triplet network to leverage features capturing local noise residuals and camera characteristics as a second stream. In addition, we use two different online face swapping applications to create a new dataset that consists of 2010 tampered images, each of which contains a tampered face. We evaluate the proposed two-stream network on our newly collected dataset. Experimental results demonstrate the effectiveness of our method."} {"_id": "3a900f71d2e74c91711a590dba4932504b21a3f5", "title": "Orestes: A scalable Database-as-a-Service architecture for low latency", "text": "Today, the applicability of database systems in cloud environments is considerably restricted because of three major problems: I) high network latencies for remote/mobile clients, II) lack of elastic horizontal scalability mechanisms, and III) missing abstraction of storage and data models. In this paper, we propose an architecture, a REST/HTTP protocol and a set of algorithms to solve these problems through a Database-as-a-Service middleware called Orestes (Objects RESTfully Encapsulated in Standard Formats). Orestes exposes cloud-hosted NoSQL database systems through a scalable tier of REST servers. These provide database-independent, object-oriented schema design, a client-independent REST-API for database operations, globally distributed caching, cache consistency mechanisms and optimistic ACID transactions.
By comparative evaluations we offer empirical evidence that the proposed Database-as-a-Service architecture indeed solves common latency, scalability and abstraction problems encountered in modern cloud-based applications."} {"_id": "7d4642da036febe5174da1390521444ef405c864", "title": "Estimating Continuous Distributions in Bayesian Classifiers", "text": "When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models."} {"_id": "28440bfbdeb74668ce5ff85568afa198d897394d", "title": "Semantic web-mining and deep vision for lifelong object discovery", "text": "Autonomous robots that are to assist humans in their daily lives must recognize and understand the meaning of objects in their environment. However, the open nature of the world means robots must be able to learn and extend their knowledge about previously unknown objects on-line. In this work we investigate the problem of unknown object hypotheses generation, and employ a semantic web-mining framework along with deep-learning-based object detectors. This allows us to make use of both visual and semantic features in combined hypotheses generation. Experiments on data from mobile robots in real-world application deployments show that this combination improves performance over the use of either method in isolation."} {"_id": "fb4531ce15b5813259bdd140a763361578688013", "title": "TDParse: Multi-target-specific sentiment recognition on Twitter", "text": "Existing target-specific sentiment recognition methods consider only a single target per tweet, and have been shown to miss nearly half of the actual targets mentioned. We present a corpus of UK election tweets, with an average of 3.09 entities per tweet and more than one type of sentiment in half of the tweets. This requires a method for multi-target-specific sentiment recognition, which we develop by using the context around a target as well as syntactic dependencies involving the target. We present results of our method on both a benchmark corpus of single targets and the multi-target election corpus, showing state-of-the-art performance in both corpora and outperforming previous approaches to the multi-target sentiment task as well as deep learning models for single-target sentiment."} {"_id": "74574cb26bcec6435170bb3e735e1140ac676942", "title": "Understanding the Factors Driving NFC-Enabled Mobile Payment Adoption: an Empirical Investigation", "text": "Though NFC mobile payment offers both convenience and benefits to consumers, its adoption rate is still very low. This study aims to explore the factors determining consumers\u2019 adoption of NFC mobile payment.
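A minimal sketch of the density-estimation comparison described in the "Estimating Continuous Distributions in Bayesian Classifiers" abstract above, assuming scikit-learn: class-conditional densities are modeled either with a single Gaussian (GaussianNB) or with a nonparametric kernel density estimate per class and feature; the dataset and bandwidth here are illustrative, not the paper's.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KernelDensity

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Method 1: a single Gaussian per class-conditional distribution.
gnb_acc = GaussianNB().fit(Xtr, ytr).score(Xte, yte)

# Method 2: naive Bayes with a 1-D kernel density estimate per class and feature.
classes = np.unique(ytr)
log_prior = {c: np.log(np.mean(ytr == c)) for c in classes}
kdes = {(c, j): KernelDensity(bandwidth=0.3).fit(Xtr[ytr == c][:, [j]])
        for c in classes for j in range(Xtr.shape[1])}

def predict(x):
    def log_post(c):
        # log prior plus the sum of per-feature log densities (naive independence).
        return log_prior[c] + sum(
            kdes[(c, j)].score_samples(x[[j]].reshape(1, 1))[0]
            for j in range(len(x)))
    return max(classes, key=log_post)

kde_acc = np.mean([predict(x) == t for x, t in zip(Xte, yte)])
print(f"GaussianNB: {gnb_acc:.3f}  KDE naive Bayes: {kde_acc:.3f}")
```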
In this study, building on TAM, an extended adoption intention model is developed by incorporating three sorts of variables into it, namely mobile payment system factors (compatibility and perceived complementarity), user characteristics (mobile payment knowledge), and a risk-related factor (perceived risk). The model is empirically tested with a sample of 377 validated respondents. Compatibility, perceived ease of use and mobile payment knowledge are found to be the factors determining individuals\u2019 intention to use NFC mobile payment. Against our expectations, perceived usefulness and perceived risk do not significantly affect consumers\u2019 intention to adopt NFC mobile payment. We discuss the theoretical and practical implications of these findings, and point out the limitations and need for future research."} {"_id": "3cc67f2347310b6b8f21080acb7b2b9cf5f91ca4", "title": "Development of a 4-joint 3-DOF robotic arm with anti-reaction force mechanism for a multicopter", "text": "In this paper, we propose a design method for the mechanism of a robotic arm suitable for a multicopter. The motion of an arm attached to a multicopter can possibly disturb the multicopter attitude. Our aim is to suppress this attitude disturbance by mechanical methods. In the proposed design method, the arm has the following features for suppressing the disturbance of the multicopter attitude. 1. The robotic arm can adjust its center of gravity (COG). 2. In the simplified model, the sum of the angular momentum of the rotating parts constituting the robot arm is equal to zero. These features can compensate for both the displacement of the COG of the arm and the counter torque generated from rotating parts, which cause the disturbance to the multicopter attitude. Since the disturbance of the multicopter attitude can be suppressed mechanically, the robotic arm does not require a special flight control law. In other words, this robotic arm has the advantage of being attached to a multicopter using a common, off-the-shelf, multipurpose flight controller. Furthermore, we discuss the mass distribution, power transmission method, and the moment of inertia of rotating parts, such as the speed reducer, motor, and arm. Additionally, we fabricate a prototype robotic arm with four joints and three degrees of freedom (DOFs), based on the proposed design method. Finally, the experiments showed that the robotic arm suppressed the disturbance of the multicopter attitude caused by the arm motion."} {"_id": "dc115ea8684f9869040dcd7f72a6f1146c626e9a", "title": "Operations for Learning with Graphical Models", "text": "This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided, including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification.
This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms."} {"_id": "a6f312917ad63749e38ca5f2f8120522ad0548c6", "title": "The Q-CHAT (Quantitative CHecklist for Autism in Toddlers): a normally distributed quantitative measure of autistic traits at 18-24 months of age: preliminary report.", "text": "We report a major revision of the CHecklist for Autism in Toddlers (CHAT). This quantitative CHAT (Q-CHAT) contains 25 items, scored on a 5-point scale (0-4). The Q-CHAT was completed by parents of n = 779 unselected toddlers (mean age 21 months) and n = 160 toddlers and preschoolers (mean age 44 months) with an Autism Spectrum Condition (ASC). The ASC group (mean (SD) = 51.8 (14.3)) scored higher on the Q-CHAT than controls (26.7 (7.8)). Boys in the control group (27.5 (7.8)) scored higher than girls (25.8 (7.7)). The intraclass correlation for test-retest reliability was 0.82 (n = 330). The distribution in the control group was close to normal. Full examination of the clinical validity of the Q-CHAT and test properties is underway."} {"_id": "519f7a6897f380ca9de4d971422f8209a5a1b67e", "title": "Leakage Current Elimination of Four-Leg Inverter for Transformerless Three-Phase PV Systems", "text": "Eliminating the leakage current is one of the most important issues for transformerless three-phase photovoltaic (PV) systems. In this paper, the leakage current elimination of a three-phase four-leg PV inverter is investigated. With the common-mode loop model established, the generation mechanism of the leakage current is clearly identified. Different typical carrier-based modulation methods and their corresponding common-mode voltages are discussed. A new modulation strategy with a Boolean logic function is proposed to achieve a constant common-mode voltage for leakage current reduction. Finally, the different modulation methods are implemented and tested on a TMS320F28335 DSP + XC3S400 FPGA digital control platform. The experimental results verify the effectiveness of the proposed solution."} {"_id": "f459c04bcb07b47342317f1dade894a9cb492fcb", "title": "Hybrid cryptography mechanism for securing self-organized wireless networks", "text": "Nowadays the use of wireless technologies has increased tremendously due to their continuous development for various applications. One such development has led to the deployment of networks in which all mobile nodes communicate directly via wireless links, without any base station or predefined infrastructural support. These self-organized networks are called mobile ad hoc networks (MANETs), and their architecture is either standalone, with one or a few nodes communicating with a base station, or distributed. As communication occurs over wireless links, these networks are more prone to several kinds of attacks. Authentication and encryption act as the first line of defense, while an intrusion detection system (IDS) acts as a second line of defense.
One of the intrusion detection systems used in MANETs is Enhanced Adaptive Acknowledgement (EAACK), which is based on acknowledgement packets for the detection of malicious activities. This system increases the network overhead significantly relative to other systems such as Watchdog. In this paper, we therefore adopt hybrid cryptography to reduce the network overhead and significantly enhance network security."} {"_id": "8ac4edf2d94a0a872e7ece90d12e718ac9ba3f00", "title": "Amazigh Isolated-Word speech recognition system using Hidden Markov Model toolkit (HTK)", "text": "This paper aims to build a speaker-independent automatic Amazigh isolated-word speech recognition system. The Hidden Markov Model Toolkit (HTK), which uses hidden Markov models, has been used to develop the system. The recognition vocabulary consists of the Amazigh letters and digits. The system has been trained to recognize the first 10 Amazigh digits and 33 letters. Mel-frequency cepstral coefficients (MFCCs) have been used for feature extraction. The training data has been collected from 60 speakers, including both males and females. The test data used for evaluating the system performance has been collected from 20 speakers. The experimental results show that the presented system achieves an overall word accuracy of 80%. The initial results are very satisfactory given the size of the training database, which encourages us to increase system performance to achieve a higher recognition rate."} {"_id": "d9259ccdbfe8b1816400efb8c44f5afb3cae4a3b", "title": "Identification of Flux Linkage Map of Permanent Magnet Synchronous Machines Under Uncertain Circuit Resistance and Inverter Nonlinearity", "text": "This paper proposes a novel scheme for the identification of the whole flux linkage map of permanent magnet synchronous machines, by which the map of dq-axis flux linkages at different load or saturation conditions can be identified by the minimization of a proposed estimation model. The proposed method works on a conventional three-phase inverter-based vector control system, and the immune-clonal-based quantum genetic algorithm is employed for the global search of the minimal point. Besides, it is also noteworthy that the influences of uncertain inverter nonlinearity and circuit resistance are cancelled during the modeling process. The proposed method is subsequently tested on two PMSMs and shows quite good performance compared with the finite element prediction results."} {"_id": "ffd9d08519b6b6518141313250fc9c386e20ba3d", "title": "Simple and strong: Twisted silver painted nylon artificial muscle actuated by Joule heating", "text": "Highly oriented nylon and polyethylene fibres shrink in length when heated and expand in diameter. By twisting and then coiling monofilaments of these materials to form helical springs, the anisotropic thermal expansion has recently been shown to enable tensile actuation of up to 49% upon heating. Joule heating, by passing a current through a conductive coating on the surface of the filament, is a convenient method of controlling actuation. In previously reported work this has been done using highly flexible carbon nanotube sheets or commercially available silver-coated fibres. In this work silver paint is used as the Joule heating element at the surface of the muscle. Up to 29% linear actuation is observed, with energy and power densities reaching 840 kJ m^-3 (528 J kg^-1) and 1.1 kW kg^-1 (operating at 0.1 Hz, 4% strain, 1.4 kg load).
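The Amazigh recognizer above is built with HTK; as a rough illustration of the same MFCC-plus-HMM pipeline, here is a sketch assuming librosa and hmmlearn as stand-ins for HTK: one Gaussian HMM is trained per vocabulary word, and recognition picks the word model with the highest likelihood. The file layout is hypothetical.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(wav_path):
    # 13 MFCCs per frame, frames as rows, as in a typical HTK-style front end.
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_word_model(wav_paths, n_states=5):
    # One Gaussian HMM per vocabulary word, trained on all its utterances.
    feats = [mfcc_features(p) for p in wav_paths]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
    return model

def recognize(wav_path, word_models):
    # Pick the word whose HMM assigns the utterance the highest log-likelihood.
    obs = mfcc_features(wav_path)
    return max(word_models, key=lambda w: word_models[w].score(obs))

# Hypothetical usage: training WAV paths grouped per word in `train_sets`.
# word_models = {w: train_word_model(paths) for w, paths in train_sets.items()}
# print(recognize("test_utterance.wav", word_models))
```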
This simple coating method is readily accessible and can be applied to any polymer filament. Effective use of this technique relies on uniform coating to avoid temperature gradients."} {"_id": "11dc5064baf01c211d48f1db872657344c961d50", "title": "WHY AGENTS? ON THE VARIED MOTIVATIONS FOR AGENT COMPUTING IN THE SOCIAL SCIENCES", "text": "The many motivations for employing agent-based computation in the social sciences are reviewed. It is argued that there exist three distinct uses of agent modeling techniques. One such use \u2014 the simplest \u2014 is conceptually quite close to traditional simulation in operations research. This use arises when equations can be formulated that completely describe a social process, and these equations are explicitly soluble, either analytically or numerically. In the former case, the agent model is merely a tool for presenting results, while in the latter it is a novel kind of Monte Carlo analysis. A second, more commonplace usage of computational agent models arises when mathematical models can be written down but not completely solved. In this case the agent-based model can shed significant light on the solution structure, illustrate dynamical properties of the model, serve to test the dependence of results on parameters and assumptions, and be a source of counter-examples. Finally, there are important classes of problems for which writing down equations is not a useful activity. In such circumstances, resort to agent-based computational models may be the only way available to explore such processes systematically, and constitute a third distinct usage of such models."} {"_id": "1a6699f52553e095cf5f8ec79d71013db8e0ee16", "title": "Agent-based modeling: methods and techniques for simulating human systems.", "text": "Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation.
For each category, one or several business applications are described and analyzed."} {"_id": "26b71580a1a07f8e555ffd26e7ea711e953815c7", "title": "The Swarm Simulation System: A Toolkit for Building Multi-Agent Simulations", "text": "Swarm is a multi-agent software platform for the simulation of complex adaptive systems. In the Swarm system, the basic unit of simulation is the swarm, a collection of agents executing a schedule of actions. Swarm supports hierarchical modeling approaches whereby agents can be composed of swarms of other agents in nested structures. Swarm provides object-oriented libraries of reusable components for building models and analyzing, displaying, and controlling experiments on those models. Swarm is currently available as a beta version in full, free source code form. It requires the GNU C Compiler, Unix, and X Windows. More information about Swarm can be obtained from our web pages: http://www.santafe.edu/projects/swarm."} {"_id": "51f0e3fe5335e2c3a55e673a6adae646f0ad6e11", "title": "From Factors to Actors: Computational Sociology and Agent-Based Modeling", "text": "Sociologists often model social processes as interactions among variables. We review an alternative approach that models social life as interactions among adaptive agents who influence one another in response to the influence they receive. These agent-based models (ABMs) show how simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action. Emergent social patterns can also appear unexpectedly and then just as dramatically transform or disappear, as happens in revolutions, market crashes, fads, and feeding frenzies. ABMs provide theoretical leverage where the global patterns of interest are more than the aggregation of individual attributes, but at the same time, the emergent pattern cannot be understood without a bottom-up dynamical model of the microfoundations at the relational level. We begin with a brief historical sketch of the shift from \u201cfactors\u201d to \u201cactors\u201d in computational sociology that shows how agent-based modeling differs fundamentally from earlier sociological uses of computer simulation. We then review recent contributions focused on the emergence of social structure and social order out of local interaction. Although sociology has lagged behind other social sciences in appreciating this new methodology, a distinctive sociological contribution is evident in the papers we review. First, theoretical interest focuses on dynamic social networks that shape and are shaped by agent interaction.
Second, ABMs are used to perform virtual experiments that test macrosociological theories by manipulating structural factors like network topology, social stratification, or spatial mobility. We conclude our review with a series of recommendations for realizing the rich sociological potential of this approach."} {"_id": "dc63ae3f4fdde0afb3f91b146e276fc663c22e6d", "title": "Complex suicide by self-stabbing with subsequent drowning in the sea.", "text": "The paper presents a unique case of a complex suicide committed by a young man, most probably triggered by a disappointment in love. The uniqueness of the suicide lies in the fact that the victim inflicted several deep stab wounds on himself, in the chest and abdomen, while standing partly submerged in the sea and, having done so, he dropped and disappeared in the water. The postmortem examination showed, apart from deep wounds in the trunk, characteristics of drowning that manifested themselves in the form of aqueous emphysema of the lungs. Suicide was clearly determined on the basis of the circumstances preceding death, the location and arrangement of the trunk wounds, and the testimony given by a witness of the incident. The circumstances preceding the suicidal act clearly suggest an underlying undiagnosed mental disorder."} {"_id": "10d21ca7728cb3dd15731accedda9ea711d8a0f4", "title": "An End-to-End Discriminative Approach to Machine Translation", "text": "We present a perceptron-style discriminative approach to machine translation in which large feature sets can be exploited. Unlike discriminative reranking approaches, our system can take advantage of learned features in all stages of decoding. We first discuss several challenges to error-driven discriminative approaches. In particular, we explore different ways of updating parameters given a training example. We find that making frequent but smaller updates is preferable to making fewer but larger updates. Then, we discuss an array of features and show both how they quantitatively increase BLEU score and how they qualitatively interact on specific examples. One particular feature we investigate is a novel way to introduce learning into the initial phrase extraction process, which has previously been entirely heuristic."} {"_id": "180bdb9ab62b589166f2bb3a854d934d4e12746d", "title": "EMPHASIS: An Emotional Phoneme-based Acoustic Model for Speech Synthesis System", "text": "We present EMPHASIS, an emotional phoneme-based acoustic model for a speech synthesis system. EMPHASIS includes a phoneme duration prediction model and an acoustic parameter prediction model. It uses a CBHG-based regression network to model the dependencies between linguistic features and acoustic features. We modify the input and output layer structures of the network to improve the performance. For the linguistic features, we apply a feature grouping strategy to enhance emotional and prosodic features. The acoustic parameters are designed to be suitable for the regression task and waveform reconstruction. EMPHASIS can synthesize speech in real time and generate expressive interrogative and exclamatory speech with high audio quality. EMPHASIS is designed to be a multi-lingual model and can synthesize Mandarin-English speech for now.
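The parameter-update strategy explored in the machine translation abstract above reduces to a generic perceptron-style step, sketched below. This is an illustration only: the system's decoder and feature set are not reproduced, and the feature vectors are made up.

```python
import numpy as np

def perceptron_step(w, feats_gold, feats_pred, lr=0.1):
    # Move the weights toward the gold derivation's features and away from the
    # (incorrectly) predicted one; "frequent but smaller updates" correspond to
    # a small learning rate applied after every training example.
    return w + lr * (feats_gold - feats_pred)

w = np.zeros(4)
feats_gold = np.array([1.0, 0.0, 2.0, 0.0])   # features of the reference output
feats_pred = np.array([0.0, 1.0, 1.0, 1.0])   # features of the model's current best
w = perceptron_step(w, feats_gold, feats_pred)
print(w)  # [ 0.1 -0.1  0.1 -0.1]
```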
In the experiment of emotional speech synthesis, it achieves better subjective results than other real-time speech synthesis systems."} {"_id": "191b8537a875f0171593a4356f2535d5f1bbceac", "title": "Tensor-Factorized Neural Networks", "text": "The growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NNs) crucial topics. Conventionally, the NN model is estimated from a set of one-way observations. Such a vectorized NN is not generalized for learning the representation from multiway observations. The classification performance using a vectorized NN is constrained, because the temporal or spatial information in neighboring ways is disregarded. More parameters are required to learn the complicated data structure. This paper presents a new tensor-factorized NN (TFNN), which tightly integrates TF and NN for multiway feature extraction and classification under a unified discriminative objective. This TFNN is seen as a generalized NN, where the affine transformation in an NN is replaced by multilinear and multiway factorization for a tensor-based NN. The multiway information is preserved through layerwise factorization. Tucker decomposition and nonlinear activation are performed in each hidden layer. The tensor-factorized error backpropagation is developed to train the TFNN with limited parameter size and computation time. This TFNN can be further extended to realize the convolutional TFNN (CTFNN) by looking at small subtensors through factorized convolution. Experiments on real-world classification tasks demonstrate that TFNN and CTFNN attain substantial improvement when compared with an NN and a convolutional NN, respectively."} {"_id": "30d8e493ae35a64b2bebbe6ec90dc190488f82fa", "title": "Using inaccurate models in reinforcement learning", "text": "In the model-based policy search approach to reinforcement learning (RL), policies are found using a model (or \"simulator\") of the Markov decision process. However, for high-dimensional continuous-state tasks, it can be extremely difficult to build an accurate model, and thus often the algorithm returns a policy that works in simulation but not in real life. The other extreme, model-free RL, tends to require infeasibly large numbers of real-life trials. In this paper, we present a hybrid algorithm that requires only an approximate model, and only a small number of real-life trials. The key idea is to successively \"ground\" the policy evaluations using real-life trials, but to rely on the approximate model to suggest local changes. Our theoretical results show that this algorithm achieves near-optimal performance in the real system, even when the model is only approximate. Empirical results also demonstrate that---when given only a crude model and a small number of real-life trials---our algorithm can obtain near-optimal performance in the real system."} {"_id": "b73cdb60b2fe9fb317fca4fb9f5e1106e13c2345", "title": "Distance Metric Learning for Large Margin Nearest Neighbor Classification", "text": ""} {"_id": "31821f81c091d2deceed17206528223a8a5b8822", "title": "Learning by Example: Training Users with High-quality Query Suggestions", "text": "The queries submitted by users to search engines often poorly describe their information needs and represent a potential bottleneck in the system.
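A schematic sketch of the hybrid loop described in "Using inaccurate models in reinforcement learning" above, with toy one-dimensional stand-ins for the policy, the inaccurate simulator, and the real system (not the authors' algorithm verbatim): the model proposes local changes, and a small number of real-life trials decide whether to keep them.

```python
import random

def real_return(theta):
    # Stand-in for an expensive real-life trial (true optimum at theta = 2.0).
    return -(theta - 2.0) ** 2 + random.gauss(0, 0.01)

def model_gradient(theta):
    # Stand-in for an inaccurate simulator: its optimum is biased (theta = 2.5),
    # but its *local* gradient direction is still informative.
    return -2.0 * (theta - 2.5)

theta, step = 0.0, 0.05
best_theta, best_ret = theta, real_return(theta)
for trial in range(50):                              # few real-life trials
    proposal = theta + step * model_gradient(theta)  # model suggests a local change
    ret = real_return(proposal)                      # real system evaluates it
    if ret > best_ret:
        best_theta, best_ret = proposal, ret
        theta = proposal                             # keep only changes that help for real
print(best_theta, best_ret)  # ends near the real optimum, not the model's biased one
```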
In this paper we investigate to what extent it is possible to aid users in learning how to formulate better queries by providing examples of high-quality queries interactively during a number of search sessions. By means of several controlled user studies we collect quantitative and qualitative evidence that shows: (1) study participants are able to identify and abstract qualities of queries that make them highly effective, (2) after seeing high-quality example queries participants are able to themselves create queries that are highly effective, and (3) those queries look similar to expert queries as defined in the literature. We conclude by discussing what the findings mean in the context of the design of interactive search systems."} {"_id": "b84b96cecb34c476749527ec2cd828fe0300e7a7", "title": "Monolithic CMOS distributed amplifier and oscillator", "text": "CMOS implementations for RF applications often employ technology modifications to reduce the silicon substrate loss at high frequencies. The most common techniques include the use of a high-resistivity substrate (\u03c1 > 10 \u03a9-cm) or silicon-on-insulator (SOI) substrate and precise bondwire inductors. However, these techniques are incompatible with low-cost CMOS manufacture. This design demonstrates use of CMOS with a conventional low-resistivity epi-substrate and on-chip inductors for applications above 10 GHz."} {"_id": "aa0c01e553d0a1ab40c204725d13fe528c514bba", "title": "Anticipating many futures: Online human motion prediction and synthesis for human-robot collaboration", "text": "Fluent and safe interactions of humans and robots require both partners to anticipate the others\u2019 actions. A common approach to human intention inference is to model specific trajectories towards known goals with supervised classifiers. However, these approaches do not take possible future movements into account nor do they make use of kinematic cues, such as legible and predictable motion. The bottleneck of these methods is the lack of an accurate model of general human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target-specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motions. Finally, we investigate how movements and kinematic cues are represented on the learned low-dimensional manifold."} {"_id": "4b40d6b77b332214eefc7d1e79e15fbc2d86d86a", "title": "Mapping Texts: Combining Text-Mining and Geo-Visualization To Unlock The Research Potential of Historical Newspapers", "text": "In this paper, we explore the task of automatic text processing applied to collections of historical newspapers, with the aim of assisting historical research. In particular, in this first stage of our project, we experiment with the use of topic models as a means to identify potential issues of interest for historians. 1 Newspapers in Historical Research Surviving newspapers are among the richest sources of information available to scholars studying peoples and cultures of the past 250 years, particularly for research on the history of the United States.
Throughout the nineteenth and twentieth centuries, newspapers served as the central venues for nearly all substantive discussions and debates in American society. By the mid-nineteenth century, nearly every community (no matter how small) boasted at least one newspaper. Within these pages, Americans argued with one another over politics, advertised and conducted economic business, and published articles and commentary on virtually all aspects of society and daily life. Only here can scholars find editorials from the 1870s on the latest political controversies, advertisements for the latest fashions, articles on the latest sporting events, and languid poetry from a local artist, all within one source. Newspapers, in short, document more completely the full range of the human experience than nearly any other source available to modern scholars, providing windows into the past available nowhere else. Despite their remarkable value, newspapers have long remained among the most underutilized historical resources. The reason for this paradox is quite simple: the sheer volume and breadth of information available in historical newspapers has, ironically, made it extremely difficult for historians to go through them page-by-page for a given research project. A historian, for example, might need to wade through tens of thousands of newspaper pages in order to answer a single research question (with no guarantee of stumbling onto the necessary information). Recently, both the research potential and the problem of scale associated with historical newspapers have expanded greatly due to the rapid digitization of these sources. The National Endowment for the Humanities (NEH) and the Library of Congress (LOC), for example, are sponsoring a nationwide historical digitization project, Chronicling America, geared toward digitizing all surviving historical newspapers in the United States, from 1836 to the present. This project recently digitized its one millionth page (and they project to have more than 20 million pages within a few years), opening a vast wealth of historical newspapers in digital form. While projects such as Chronicling America have indeed increased access to these important sources, they have also increased the problem of scale that has long prevented scholars from using these sources in meaningful ways. Indeed, without tools and methods capable of handling such large datasets \u2013 and thus sifting out meaningful patterns embedded within them \u2013 scholars find themselves confined to performing only basic word searches across enormous collections. These simple searches can, indeed, find stray information scattered in unlikely places. Such rudimentary search tools, however, become increasingly less useful to researchers as datasets continue to grow in size. If a search for a particular term yields 4,000,000 results, even those search results produce a dataset far too large for any single scholar to analyze in a meaningful way using traditional methods. The age of abundance, it turns out, can simply overwhelm historical scholars, as the sheer volume of available digitized historical newspapers is beginning to do. In this paper, we explore the use of topic modeling, in an attempt to identify the most important and potentially interesting topics over a given period of time.
Thus, instead of asking a historian to look through thousands of newspapers to identify what may be interesting topics, we take a reverse approach, where we first automatically cluster the data into topics, and then provide these automatically identified topics to the historian so she can narrow her scope to focus on the individual patterns in the dataset that are most applicable to her research. Of more utility would be cases where the modeling reveals unexpected topics that point toward previously unknown patterns, thus helping to shape a scholar\u2019s subsequent research. The topic modeling can be done for any periods of time, which can consist of individual years or can cover several years at a time. In this way, we can see the changes in the discussions and topics of interest over the years. Moreover, pre-filters can also be applied to the data prior to the topic modeling. For instance, since research being done in the History department at our institution is concerned with the \u201cU. S. cotton economy,\u201d we can use the same approach to identify the interesting topics mentioned in the news articles that talk about the issue of \u201ccotton.\u201d 2 Topic Modeling Topic models have been used by Newman and Block (2006) and Nelson (2010) on newspaper corpora to discover topics and trends over time. The former used the probabilistic latent semantic analysis (pLSA) model, and the latter used the latent Dirichlet allocation (LDA) model, a method introduced by Blei et al. (2003). LDA has also been used by Griffiths and Steyvers (2004) to find research topic trends by looking at abstracts of scientific papers. Hall et al. (2008) have similarly applied LDA to discover trends in the computational linguistics field. Both pLSA and LDA models are probabilistic models that look at each document as a mixture of multinomials or topics. The models decompose the document collection into groups of words representing the main topics. See for instance Table 1, which shows two topics extracted from our collection. Table 1: Example of two topic groups. Topic 1: worth price black white goods yard silk made ladies wool lot inch week sale prices pair suits fine quality. Topic 2: state states bill united people men general law government party made president today washington war committee country public york. Boyd-Graber et al. (2009) compared several topic models, including LDA, correlated topic model (CTM), and probabilistic latent semantic indexing (pLSI), and found that LDA generally worked comparably well or better than the other two at predicting topics that match topics picked by the human annotators. We therefore chose to use a parallel threaded SparseLDA implementation to conduct the topic modeling, namely UMass Amherst\u2019s MAchine Learning for LanguagE Toolkit (MALLET) (McCallum, 2002). MALLET\u2019s topic modeling toolkit has been used by Walker et al. (2010) to test the effects of noisy optical character recognition (OCR) data on LDA. It has been used by Nelson (2010) to mine topics from the Civil War era newspaper Dispatch, and it has also been used by Blevins (2010) to examine general topics and to identify emotional moments from Martha Ballard\u2019s Diary. 3 Dataset Our sample data comes from a collection of digitized historical newspapers, consisting of newspapers published in Texas from 1829 to 2008. Issues are segmented by pages with continuous text containing articles and advertisements. Table 2 provides more information about the dataset.
(Footnotes: 1 http://americanpast.richmond.edu/dispatch/ 2 http://mallet.cs.umass.edu/ 3 http://historying.org/2010/04/01/) Table 2: Properties of the newspaper collection. Number of titles: 114; number of years: 180; number of issues: 32,745; number of pages: 232,567; number of tokens: 816,190,453. 3.1 Sample Years and Categories From the wide range available, we sampled several historically significant dates in order to evaluate topic modeling. These dates were chosen for their unique characteristics (detailed below), which made it possible for a professional historian to examine and evaluate the relevancy of the results. These are the subcategories we chose as samples: \u2022 Newspapers from 1865-1901: During this period, Texans rebuilt their society in the aftermath of the American Civil War. With the abolition of slavery in 1865, Texans (both black and white) looked to rebuild their post-war economy by investing heavily in cotton production throughout the state. Cotton was considered a safe investment, and so Texans produced enough during this period to make Texas the largest cotton producer in the United States by 1901. Yet overproduction during that same period impoverished Texas farmers by driving down the market price for cotton, and thus a large percentage went bankrupt and lost their lands (over 50 percent by 1900). As a result, angry cotton farmers in Texas during the 1890s joined a new political party, the Populists, whose goal was to use the national government to improve the economic conditions of farmers. This effort failed by 1896, although it represented one of the largest third-party political revolts in American history. This period, then, was dominated by the rise of cotton as the foundation of the Texas economy, the financial failures of Texas farmers, and their unsuccessful political protests of the 1890s as cotton bankrupted people across the state. These are the issues we would expect to emerge as important topics from newspapers in this category. This dataset consists of 52,555 pages over 5,902 issues. \u2022 Newspapers from 1892: This was the year of the formation of the Populist Party, which a large portion of Texas farmers joined for the U. S. presidential election of 1892. The Populists sought to have the U. S. federal government become actively involved in regulating the economy in places like Texas (something never done before) in order to prevent cotton farmers from going further into debt. In the 1892 election, the Populists did surprisingly well (garnering about 10 percent of the vote nationally) and won a full 23 percent of the vote in Texas. This dataset consists of 1,303 pages over 223 issue"} {"_id": "5350676fae09092b42731448acae3469cba8919c", "title": "Building an Intelligent Assistant for Digital Forensics", "text": "Software tools designed for disk analysis play a critical role today in digital forensics investigations. However, these digital forensics tools are often difficult to use, usually task-specific, and generally require professionally trained users with IT backgrounds. The relevant tools are also often open source, requiring additional technical knowledge and proper configuration. This makes it difficult for investigators without some computer science background to easily conduct the needed disk analysis. In this dissertation, we present AUDIT, a novel automated disk investigation toolkit that supports investigations conducted by non-expert (in IT and disk technology) and expert investigators.
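A minimal sketch of the period-wise topic-modeling workflow used in the Mapping Texts study above, with gensim's LDA standing in for the MALLET toolkit the authors actually used (an assumption; the tokenized documents below are made-up placeholders):

```python
from gensim import corpora, models

# Hypothetical input: tokenized newspaper pages, grouped by time period.
periods = {
    "1865-1901": [["cotton", "price", "market", "farmer"],
                  ["party", "populist", "election", "vote"]],
    "1892":      [["populist", "party", "president", "vote"],
                  ["cotton", "goods", "price", "sale"]],
}

# Fit one LDA model per period so topic trends can be compared across time.
for period, docs in periods.items():
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)
    print(period)
    for topic_id, words in lda.print_topics(num_topics=2, num_words=4):
        print(" ", topic_id, words)
```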
Our system design and implementation of AUDIT intelligently integrates open source tools and guides non-IT professionals while requiring minimal technical knowledge about the disk structures and file systems of the target disk image. We also present a new hierarchical disk investigation model which leads AUDIT to systematically examine the disk in its totality based on its physical and logical structures. AUDIT\u2019s capabilities as an intelligent digital assistant are evaluated through a series of experiments comparing it with a human investigator as well as against standard benchmark disk images."} {"_id": "03aa3c66aad11069e79d73108a92eeef3a43b40a", "title": "ELDEN: Improved Entity Linking Using Densified Knowledge Graphs", "text": "Entity Linking (EL) systems aim to automatically map mentions of an entity in text to the corresponding entity in a Knowledge Graph (KG). The degree of connectivity of an entity in the KG directly affects an EL system\u2019s ability to correctly link mentions in text to the entity in the KG. This causes many EL systems to perform well for entities well connected to other entities in the KG, bringing into focus the role of KG density in EL. In this paper, we propose Entity Linking using Densified Knowledge Graphs (ELDEN). ELDEN is an EL system which first densifies the KG with co-occurrence statistics from a large text corpus, and then uses the densified KG to train entity embeddings. Entity similarity measured using these trained entity embeddings results in improved EL. ELDEN outperforms state-of-the-art EL systems on benchmark datasets. Due to such densification, ELDEN performs well for sparsely connected entities in the KG too. ELDEN\u2019s approach is simple, yet effective. We have made ELDEN\u2019s code and data publicly available."} {"_id": "ad5a08146e1b811369cf3f3c629de6157784ee58", "title": "What Do Stroke Patients Look for in Game-Based Rehabilitation", "text": "Stroke is one of the most common causes of physical disability, and early, intensive, and repetitive rehabilitation exercises are crucial to the recovery of stroke survivors. Unfortunately, research shows that only one third of stroke patients actually perform recommended exercises at home, because of the repetitive and mundane nature of conventional rehabilitation exercises. Thus, to motivate stroke survivors to engage in monotonous rehabilitation is a significant issue in the therapy process. Game-based rehabilitation systems have the potential to encourage patients to continue rehabilitation exercises at home. However, these systems are still rarely adopted in patients' homes. Discovering and eliminating the obstacles in promoting game-based rehabilitation at home is therefore essential. For this purpose, we conducted a study to collect and analyze the opinions and expectations of stroke patients and clinical therapists. The study is composed of 2 parts: Rehab-preference survey - interviews with both patients and therapists to understand the current practices, challenges, and expectations of game-based rehabilitation systems; and Rehab-compatibility survey - a gaming experiment with therapists to determine which commercial games are compatible with rehabilitation. The study is conducted with 30 outpatients with stroke and 19 occupational therapists from 2 rehabilitation centers in Taiwan. Our surveys show that game-based rehabilitation systems can make rehabilitation exercises more appealing and provide personalized motivation for various stroke patients.
Patients prefer to perform rehabilitation exercises with more diverse and fun games, and need cost-effective rehabilitation systems, which are often built on commodity hardware. Our study also sheds light on incorporating existing design-for-fun games into rehabilitation systems. We envision the results are helpful in developing a platform which enables rehab-compatible (i.e., existing, appropriately selected) games to be operated on commodity hardware and brings cost-effective rehabilitation systems to more and more patients' homes for long-term recovery."} {"_id": "47fdd1579f732dd6389f9342027560e385853180", "title": "Deep Sparse Subspace Clustering", "text": "In this paper, we present a deep extension of Sparse Subspace Clustering, termed Deep Sparse Subspace Clustering (DSSC). Regularized by the unit sphere distribution assumption for the learned deep features, DSSC can infer a new data affinity matrix by simultaneously satisfying the sparsity principle of SSC and the nonlinearity given by neural networks. One of the appealing advantages brought by DSSC is: when original real-world data do not meet the class-specific linear subspace distribution assumption, DSSC can employ neural networks to make the assumption valid with its hierarchical nonlinear transformations. To the best of our knowledge, this is among the first deep-learning-based subspace clustering methods. Extensive experiments are conducted on four real-world datasets to show the proposed DSSC is significantly superior to 12 existing methods for subspace clustering."} {"_id": "505234f56be43637a82761b2a7c3bd7c46f1e06c", "title": "Semi-Supervised Learning for Relation Extraction", "text": "This paper proposes a semi-supervised learning method for relation extraction. Given a small amount of labeled data and a large amount of unlabeled data, it first bootstraps a moderate number of weighted support vectors via SVM through a co-training procedure with random feature projection and then applies a label propagation (LP) algorithm via the bootstrapped support vectors. Evaluation on the ACE RDC 2003 corpus shows that our method outperforms the normal LP algorithm via all the available labeled data without SVM bootstrapping. Moreover, our method can largely reduce the computational burden. This suggests that our proposed method can integrate the advantages of both SVM bootstrapping and label propagation."} {"_id": "3fbc45152f20403266b02c4c2adab26fb367522d", "title": "Sequence-to-Sequence RNNs for Text Summarization", "text": "In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation (Bahdanau et al. (2014)). Our experiments show that the proposed architecture significantly outperforms the state-of-the-art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose additional extensions to the standard architecture, which we show contribute to further improvement in performance."} {"_id": "7eb1dca76c46bfbb4bd34c189c98472706ec31eb", "title": "Research on the Forecast of Shared Bicycle Rental Demand Based on Spark Machine Learning Framework", "text": "In recent years, the shared bicycle project has developed rapidly. As shared bicycles are used, a great deal of user riding information is recorded.
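The two-stage method in the "Semi-Supervised Learning for Relation Extraction" abstract above can be sketched with scikit-learn stand-ins (the paper's co-training with random feature projection is omitted): an SVM bootstrapped on the labeled data contributes its most confident predictions, and label propagation then spreads labels to the remaining unlabeled examples.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelPropagation
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
labels = np.full(len(y), -1)          # -1 marks unlabeled examples
labels[:30] = y[:30]                  # small amount of labeled data

# Stage 1: bootstrap with an SVM and trust only its most confident predictions.
svm = SVC(probability=True, random_state=0).fit(X[:30], y[:30])
proba = svm.predict_proba(X)
mask = (labels == -1) & (proba.max(axis=1) > 0.9)
labels[mask] = proba.argmax(axis=1)[mask]

# Stage 2: propagate labels from the (bootstrapped) labeled set to the rest.
lp = LabelPropagation().fit(X, labels)
print("transductive accuracy:", (lp.transduction_ == y).mean())
```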
How to extract useful knowledge from this vast amount of information, and how to use it to improve the shared bicycle system and the user experience, are open problems. Citi Bike is selected as the research target. Data on Citi Bike\u2019s user historical behavior, weather information, and holiday information are collected from three different sources, and converted into appropriate formats for model training. Spark MLlib is used to construct three different predictive models, and the advantages and disadvantages of the different forecasting models are compared. Some techniques are used to enhance the accuracy of the random forest model. The experimental results show that the root mean square error (RMSE) of the final model is reduced from 305.458 to 243.346."} {"_id": "cbbf72d487f5b645d50d7d3d94b264f6a881c96f", "title": "W-Band CMOS on-chip energy harvester and rectenna", "text": "This paper presents the first fully on-chip integrated energy harvester and rectenna at the W-Band in 65 nm CMOS technology. The designs are based on a 1-stage Dickson voltage multiplier. The rectenna consists of an on-chip integrated dipole antenna with a reflector underneath the substrate to enhance the directivity and realized gain. The energy harvester and rectenna achieve a power conversion efficiency of 10% and 2%, respectively, at 94 GHz. The stand-alone harvester occupies only 0.0945 mm2 including pads, while the fully integrated rectenna occupies a minimal chip area of 0.48 mm2."} {"_id": "915f50f7885dc47fe5a29cc64ee9902a6994b4a4", "title": "Spurious group differences due to head motion in a diffusion MRI study", "text": "Diffusion-weighted MRI (DW-MRI) has become a popular imaging modality for probing the microstructural properties of white matter and comparing them between populations in vivo. However, the contrast in DW-MRI arises from the microscopic random motion of water molecules in brain tissues, which makes it particularly sensitive to macroscopic head motion. Although this has been known since the introduction of DW-MRI, most studies that use this modality for group comparisons do not report measures of head motion for each group and rely on registration-based correction methods that cannot eliminate the full effects of head motion on the DW-MRI contrast. In this work we use data from children with autism and typically developing children to investigate the effects of head motion on differences in anisotropy and diffusivity measures between groups. We show that group differences in head motion can induce group differences in DW-MRI measures, and that this is the case even when comparing groups that include control subjects only, where no anisotropy or diffusivity differences are expected. We also show that such effects can be more prominent in some white-matter pathways than others, and that they can be ameliorated by including motion as a nuisance regressor in the analyses. Our results demonstrate the importance of taking head motion into account in any population study where one group might exhibit more head motion than the other."} {"_id": "81918e9416c1f9a910c10e6059a909fb81c22000", "title": "Tap water versus sterile saline solution in the colonisation of skin wounds.", "text": "Irrigating wounds with tap water does not increase colonisation, but controlled studies are required for further evidence. Microbial colonisation was assessed in skin wounds, before and after irrigation with tap water, and was compared with irrigation using 0\u00b79% sodium chloride sterile solution.
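A minimal sketch of the Spark MLlib modeling step described in the shared-bicycle abstract above, assuming PySpark; the file name and feature columns are hypothetical stand-ins for the weather, holiday, and riding-history data.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("bike-demand").getOrCreate()
df = spark.read.csv("rentals.csv", header=True, inferSchema=True)  # hypothetical file

# Assemble weather/holiday/history columns into a single feature vector.
assembler = VectorAssembler(
    inputCols=["temperature", "humidity", "is_holiday", "hour"],  # hypothetical columns
    outputCol="features")
data = assembler.transform(df)
train, test = data.randomSplit([0.8, 0.2], seed=42)

# Random forest regression, evaluated by RMSE as in the abstract.
rf = RandomForestRegressor(featuresCol="features", labelCol="rentals", numTrees=100)
model = rf.fit(train)
rmse = RegressionEvaluator(labelCol="rentals", predictionCol="prediction",
                           metricName="rmse").evaluate(model.transform(test))
print("RMSE:", rmse)
```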
The study included 120 subjects with chronic, traumatic, vascular, pressure or neuropathic wounds. A total of 60 wounds were randomly assigned to be irrigated with tap water (tap water group) and another 60 to be irrigated with 0\u00b79% sodium chloride sterile solution (saline group), at a pressure of 0\u00b746-0\u00b754 PSI. Samples were collected from the centre of each wound using Levine's technique, before and after irrigation, and cultivated in thioglycollate, hypertonic mannitol agar, eosin methylene blue (EMB) agar, blood agar and Sabouraud agar at 37\u00b0C for 72 hours. There was concordance (kappa test) and discordance (McNemar test) regarding the count of positive and/or negative samples before and after irrigation in each group. The proportion of reduction of positive samples was similar for both groups in all cultures. Colony-forming unit count before and after irrigation was similar in both groups and in all cultures, except for the culture in hypertonic mannitol agar from the tap water group, for which the count was lower after irrigation (Wilcoxon z = 2\u00b705, P = 0\u00b7041). It is concluded that skin wound irrigation with tap water leads to further reduction of Gram-positive bacteria compared with 0\u00b79% sodium chloride sterile solution, with no difference in colonisation of haemolytic bacteria, Gram-negative bacteria and fungi."} {"_id": "99ce379cff98f5d098f3eb0492191291fd712d7b", "title": "Analysis of level design 'push & pull' within 21 games", "text": "This paper investigates the differences between 3D level designs in 21 popular games. We have developed a framework to analyze 3D level designs based on patterns extracted from level designers and game play sessions. We then use these patterns to analyze several game play sessions. Results of this analysis reveal methods by which designers push and pull players through levels. We discuss an analysis of these patterns in terms of three level affordance configurations (combat, environmental resistance, and mixed goals) in 21 different games using one walkthrough play session per game. By looking at the variety of games, we can further explore the level similarities and differences between games."} {"_id": "e9bb3ca7558ea0ce9ca7d5c965e4179668d69d0a", "title": "Spinal Instability Neoplastic Score (SINS): Reliability Among Spine Fellows and Resident Physicians in Orthopedic Surgery and Neurosurgery", "text": "Study Design\nReliability analysis.\n\n\nObjectives\nThe Spinal Instability Neoplastic Score (SINS) was developed for assessing patients with spinal neoplasia. It identifies patients who may benefit from surgical consultation or intervention. It also acts as a prognostic tool for surgical decision making. Reliability of SINS has been established for spine surgeons, radiologists, and radiation oncologists, but not yet among spine surgery trainees. The purpose of our study is to determine the reliability of SINS among spine residents and fellows, and its role as an educational tool.\n\n\nMethods\nTwenty-three residents and 2 spine fellows independently scored 30 de-identified spine tumor cases on 2 occasions, at least 6 weeks apart. Intraclass correlation coefficient (ICC) measured interobserver and intraobserver agreement for total SINS scores.
Fleiss's kappa and Cohen's kappa analysis evaluated interobserver and intraobserver agreement of 6 component subscores (location, pain, bone lesion quality, spinal alignment, vertebral body collapse, and posterolateral involvement of spinal elements).\n\n\nResults\nTotal SINS scores showed near perfect interobserver (0.990) and intraobserver (0.907) agreement. Fleiss's kappa statistics revealed near perfect agreement for location; substantial for pain; moderate for alignment, vertebral body collapse, and posterolateral involvement; and fair for bone quality (0.948, 0.739, 0.427, 0.550, 0.435, and 0.382). Cohen's kappa statistics revealed near perfect agreement for location and pain, substantial for alignment and vertebral body collapse, and moderate for bone quality and posterolateral involvement (0.954, 0.814, 0.610, 0.671, 0.576, and 0.561, respectively).\n\n\nConclusions\nThe SINS is a reliable and valuable educational tool for spine fellows and residents learning to judge spinal instability."} {"_id": "5bf1debebe42befa82fbf51d5f75d4f6a756553d", "title": "MACA: a modified author co-citation analysis method combined with general descriptive metadata of citations", "text": "Author co-citation analysis (ACA) is a well-known and frequently-used method for sketching academic researchers and their professional fields according to co-citation relationships between authors in an article set. However, subtle visual examination is limited because ACA uses only author co-citation information. The proposed method, called modified author co-citation analysis (MACA), exploits author co-citation relationships, citation publication time, citation publication venues, and citation keywords to construct MACA-based co-citation matrices. According to the results of our experiments: (1) MACA shows good clustering results with more delicacy and clarity; (2) involving more information in co-citation analysis yields better visual acuity; (3) in the visualization of the co-citation network produced by MACA, points in different categories are farther apart, and points indicating authors in the same category are closer together.
As a result, the proposed MACA is found to yield more detailed and subtle information about the analyzed knowledge domain than ACA."} {"_id": "0ee392ad467b967c0a32d8ecb19fc20f7c1d62fe", "title": "Probabilistic Inference Using Markov Chain Monte Carlo Methods", "text": "Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Related problems in other fields have been tackled using Monte Carlo methods based on sampling using Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The Metropolis algorithm has been used to solve difficult problems in statistical physics for over forty years, and in the last few years the related method of Gibbs sampling has been applied to problems of statistical inference. Concurrently, an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well, and has recently been unified with the Metropolis algorithm to produce the hybrid Monte Carlo method. In computer science, Markov chain sampling is the basis of the heuristic optimization technique of simulated annealing, and has recently been used in randomized algorithms for approximate counting of large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, present the theory of Markov chains, and describe various Markov chain Monte Carlo algorithms, along with a number of supporting techniques. I try to present a comprehensive picture of the range of methods that have been developed, including techniques from the varied literature that have not yet seen wide application in artificial intelligence but which appear relevant. As illustrative examples, I use the problems of probabilistic inference in expert systems, discovery of latent classes from data, and Bayesian learning for neural networks."} {"_id": "30667550901b9420e02c7d61cdf8fa7d5db207af", "title": "Bayesian Learning for Neural Networks", "text": ""} {"_id": "f707a81a278d1598cd0a4493ba73f22dcdf90639", "title": "Generalization by Weight-Elimination with Application to Forecasting", "text": "Inspired by the information theoretic idea of minimum description length, we add a term to the back propagation cost function that penalizes network complexity. We give the details of the procedure, called weight-elimination, describe its dynamics, and clarify the meaning of the parameters involved. From a Bayesian perspective, the complexity term can be usefully interpreted as an assumption about the prior distribution of the weights. We use this procedure to predict the sunspot time series and the notoriously noisy series of currency exchange rates."} {"_id": "b23848d181bebd47d4bae1c782534424085bbadf", "title": "A case study of user participation in the information systems development process", "text": "There are many in the information systems discipline who believe that user participation is necessary for successful systems development. However, it has been suggested that this belief is neither grounded in theory nor substantiated by research data. This may indicate that researchers have not addressed fully the underlying complexity of the concept. If so, this is indicative of a deficiency in understanding user participation in information systems development as it occurs in organizations.
In order to enhance the extant understanding of participative information systems development, the present study adopts a qualitative, case-based approach to research so as to provide an in-depth description of the complex social nature of the phenomenon as manifested in one organization. The results of the study illustrate that a high degree of direct and indirect user participation did not guarantee the successful implementation and use of information systems in the organization studied. Such participatory development practices did, however, result in the development of systems that adequately captured user requirements and hence satisfied user informational needs. It was clear that despite the perceived negative impact that the new systems would have on user work-related roles and activities, the existence of an organization-wide participative policy, and associated participative structures, coupled with a favorable organization culture, generated a participatory development climate that was conducive to the successful development of information systems, while not guaranteeing it. That said, the central conclusion of this study was that user dissatisfaction with developed systems centered on the poor management of change in the organization."} {"_id": "fc3def67ba2c5956f7d4066df9f1d92a9e65bfd7", "title": "Spatial Data Mining Features between General Data Mining", "text": "Data mining is usually defined as searching, analyzing and sifting through large amounts of data to find relationships, patterns, or any significant statistical correlation. Spatial Data Mining (SDM) is the process of discovering interesting, useful, non-trivial patterns, information or knowledge from large spatial datasets. Extracting interesting and useful patterns from spatial datasets is more difficult than extracting the corresponding patterns from traditional numeric or categorical data due to the complexity of spatial data types, spatial relationships, and spatial auto-correlation. This paper overviews the unique features that distinguish spatial data mining from classical data mining, and presents major accomplishments of spatial data mining research. Extracting interesting patterns and rules from spatial datasets, such as remotely sensed imagery and associated ground data, can be of importance in precision agriculture, community planning, resource discovery and other areas."} {"_id": "4a7478652c1b4f27d1bbf18ea0f1cf47fc30779e", "title": "Continuous Integration and Its Tools", "text": "Continuous integration has been around for a while now, but the habits it suggests are far from common practice. Automated builds, a thorough test suite, and committing to the mainline branch every day sound simple at first, but they require a responsible team to implement and constant care. What starts with improved tooling can be a catalyst for long-lasting change in your company's shipping culture. Continuous integration is more than a set of practices; it's a mindset that has one thing in mind: increasing customer value.
The Web extra at http://youtu.be/tDl_cHfrJZo is an audio podcast of the Tools of the Trade column that discusses how continuous integration is more than a set of practices; it's a mindset with one goal in mind: increasing customer value."} {"_id": "1d11abe1ec659bf2a0b078f9e45796889e0c0740", "title": "The evaluation of children in the primary care setting when sexual abuse is suspected.", "text": "This clinical report updates a 2005 report from the American Academy of Pediatrics on the evaluation of sexual abuse in children. The medical assessment of suspected child sexual abuse should include obtaining a history, performing a physical examination, and obtaining appropriate laboratory tests. The role of the physician includes determining the need to report suspected sexual abuse; assessing the physical, emotional, and behavioral consequences of sexual abuse; providing information to parents about how to support their child; and coordinating with other professionals to provide comprehensive treatment and follow-up of children exposed to child sexual abuse."} {"_id": "60ade26a56f8f34d18b6e3ccfd0cd095b72e3013", "title": "Malaria Models with Spatial Effects", "text": "Malaria, a vector-borne infectious disease caused by the Plasmodium parasite, is still endemic in more than 100 countries in Africa, Southeast Asia, the Eastern Mediterranean, Western Pacific, Americas, and Europe. In 2010 there were about 219 million malaria cases, with an estimated 660,000 deaths, mostly children under 5 in sub-Saharan Africa (WHO 2012). The malaria parasite is transmitted to humans via the bites of infected female mosquitoes of the genus Anopheles. Mosquitoes can become infected when they feed on the blood of infected humans. Thus the infection goes back and forth between humans and mosquitoes. Mathematical modeling of malaria transmission has a long history. It has helped us to understand the transmission mechanism, design and improve control measures, forecast disease outbreaks, etc. The so-called Ross\u2013Macdonald model"} {"_id": "2819ca354f01bc60bbf0ca13611e0ad66487450a", "title": "Contextual Video Recommendation by Multimodal Relevance and User Feedback", "text": "With Internet delivery of video content surging to an unprecedented level, video recommendation, which suggests relevant videos to targeted users according to their historical and current viewings or preferences, has become one of the most pervasive online video services. This article presents a novel contextual video recommendation system, called VideoReach, based on multimodal content relevance and user feedback. We consider that an online video usually consists of different modalities (i.e., visual and audio tracks, as well as associated texts such as query, keywords, and surrounding text). Therefore, the recommended videos should be relevant to the current viewing in terms of multimodal relevance. We also consider that different parts of a video are of different degrees of interest to a user, and that different features and modalities contribute differently to the overall relevance. As a result, the recommended videos should also be relevant to current users in terms of user feedback (i.e., user click-through). We then design a unified framework for VideoReach which can seamlessly integrate both multimodal relevance and user feedback by relevance feedback and attention fusion.
VideoReach represents one of the first attempts toward contextual recommendation driven by video content and user click-through, without assuming that a sufficient collection of user profiles is available. We conducted experiments on large-scale real-world video data and reported the effectiveness of VideoReach."} {"_id": "805115059488fe0e5e1e3376f95ad2944bbff76d", "title": "CONSTRAINTS AND APPROACHES FOR DISTRIBUTED SENSOR NETWORK SECURITY", "text": "Executive Summary Confidentiality, integrity, and authentication services are critical to preventing an adversary from compromising the security of a distributed sensor network. Key management is likewise critical to establishing the keys necessary to provide this protection. However, providing key management is difficult due to the ad hoc nature, intermittent connectivity, and resource limitations of the sensor network environment. As part of the SensIT program, NAI Labs is addressing this problem by identifying and developing cryptographic protocols and mechanisms that efficiently provide key management security support services. This document describes our FY 2000 research on sensor network constraints and key management approaches. As a first step, NAI Labs has researched battlefield sensor and sensor network technology and the unique communications environment in which it will be deployed. We have identified the requirements specific to our problem of providing key management for confidentiality and group-level authentication. We have also identified constraints, particularly energy consumption, that render this problem difficult. NAI Labs has developed novel key management protocols specifically designed for the distributed sensor network environment, including Identity-Based Symmetric Keying and Rich Uncle. We have analyzed both existing and NAI Labs-developed keying protocols for their suitability at satisfying identified requirements while overcoming battlefield energy constraints. Our research has focused heavily on key management energy consumption, evaluating protocols based on total system, average sensor node, and individual sensor node energy consumption. We examined a number of secret-key-based protocols, determining some to be suitable for sensor networks, though all of the protocols have flexibility limitations. Secret-key-based protocols are generally energy-efficient, using encryption and hashing algorithms that consume relatively little energy. Security of secret-key-based protocols is generally determined by the granularity of established keys, which varies widely for the protocols described herein. During our examination of these protocols we noted that some of them are not sufficiently flexible for use in battlefield sensor networks, since they cannot efficiently handle unanticipated additions of sensor nodes to the network. Our Identity-Based Symmetric Keying protocol and the less efficient Symmetric Key Certificate Based Protocol are well suited for certain sensor networks, establishing granular keys while consuming relatively little energy. However, all of the secure secret-key-based protocols use special nodes that operate as Key Distribution Centers (or Translators). The sensor nodes communicate with these centers, exchanging information as part of the key establishment process.
Since these special nodes are expected to make up less than 1% of the sensor \u2026"} {"_id": "d7d901a9e311e6c434a8a39566d1ed2845546f5b", "title": "Evidences for the Anti-panic Actions of Cannabidiol", "text": "BACKGROUND\nPanic disorder (PD) is a disabling psychiatric condition that affects approximately 5% of the worldwide population. Currently, long-term selective serotonin reuptake inhibitors (SSRIs) are the first-line treatment for PD; however, the common side-effect profiles and drug interactions may provoke patients to abandon the treatment, leading to relapse of PD symptoms. Cannabidiol (CBD) is the major non-psychotomimetic constituent of the Cannabis sativa plant with antianxiety properties that has been suggested as an alternative for treating anxiety disorders. The aim of the present review was to discuss the effects and mechanisms involved in the putative anti-panic effects of CBD.\n\n\nMETHODS\nAn electronic database was used as the source of the studies, which were selected by crossing the following keywords: cannabidiol and panic disorder; cannabidiol and anxiety; cannabidiol and 5-HT1A receptor.\n\n\nRESULTS\nIn the present review, we included both experimental laboratory animal and human studies that have investigated the putative anti-panic properties of CBD. Taken together, the studies assessed clearly suggest an anxiolytic-like effect of CBD in both animal models and healthy volunteers.\n\n\nCONCLUSIONS\nCBD seems to be a promising drug for the treatment of PD. However, novel clinical trials involving patients with the PD diagnosis are clearly needed to clarify the specific mechanism of action of CBD and the safe and ideal therapeutic doses of this compound."} {"_id": "adf29144f925c58d55781307718080f62bc97de3", "title": "Radiation Search Operations using Scene Understanding with Autonomous UAV and UGV", "text": "Autonomously searching for hazardous radiation sources requires the ability of the aerial and ground systems to understand the scene they are scouting. In this paper, we present systems, algorithms, and experiments to perform radiation search using unmanned aerial vehicles (UAV) and unmanned ground vehicles (UGV) by employing semantic scene segmentation. The aerial data is used to identify radiological points of interest, generate an orthophoto along with a digital elevation model (DEM) of the scene, and perform semantic segmentation to assign a category (e.g., road, grass) to each pixel in the orthophoto. We perform semantic segmentation by training a model on a dataset of images we collected and annotated, using the model to perform inference on images of the test area unseen to the model, and then refining the results with the DEM to better reason about category predictions at each pixel. We then use all of these outputs to plan a path for a UGV carrying a LiDAR to map the environment and avoid obstacles not present during the flight, and a radiation detector to collect more precise radiation measurements from the ground. Results of the analysis for each scenario tested were favorable. We also note that our approach is general and has the potential to work for a variety of different sensing tasks."} {"_id": "7ef66d8adc7322218a2420066d27e154535a2460", "title": "Twitter games: how successful spammers pick targets", "text": "Online social networks, such as Twitter, have soared in popularity and in turn have become attractive targets of spam.
In fact, spammers have evolved their strategies to stay ahead of Twitter's anti-spam measures in this short period of time. In this paper, we investigate the strategies Twitter spammers employ to reach relevant target audiences. Due to their targeted approach to sending spam, we see evidence of a large number of the spam accounts forming relationships with other Twitter users, thereby becoming deeply embedded in the social network.\n We analyze nearly 20 million tweets from about 7 million Twitter accounts over a period of five days. We identify a set of 14,230 spam accounts that manage to live longer than 73% of the other spam accounts in our data set. We characterize their behavior, the types of tweets they use, and how they target their audience. We find that though spam campaigns changed little from those in a recent work by Thomas et al., spammer strategies evolved considerably in the same short time span, sometimes leading us to find spammer behavior that contradicts what was noted in Thomas et al.'s work. Specifically, we identify four major strategies used by two-thirds of the spammers in our data. The most popular of these was one where spammers targeted their own followers. The availability of various kinds of services that help garner followers only increases the popularity of this strategy. The evolution in spammer strategies we observed in our work suggests that studies like ours should be undertaken frequently to keep up with spammer evolution."} {"_id": "f9455bf42f61eb8c0ac14e17539366851f64c790", "title": "Structure Based User Identification across Social Networks", "text": "Identification of anonymous identical users across platforms refers to the recognition of the accounts belonging to the same individual among multiple Social Network (SN) platforms. Evidently, cross-platform exploration may help solve many problems in social computing, in both theory and practice. However, it is still an intractable problem due to the fragmentation, inconsistency, and disruption of the accessible information among SNs. Unlike efforts based on user profiles and users\u2019 content, many studies have noticed the accessibility and reliability of network structure in most of the SNs for addressing this issue. Although substantial achievements have been made, most of the current network structure-based solutions, requiring prior knowledge of some given identified users, are supervised or semi-supervised. It is laborious to label the prior knowledge manually in some scenarios where prior knowledge is hard to obtain. Noticing that friend relationships are reliable and consistent in different SNs, we proposed an unsupervised scheme, termed Friend Relationship-based User Identification algorithm without Prior knowledge (FRUI-P). The FRUI-P first extracts the friend feature of each user in an SN into a friend feature vector, and then calculates the similarities of all the candidate identical users between two SNs. Finally, a one-to-one map scheme is developed to identify the users based on the similarities. Moreover, FRUI-P is theoretically proved to be efficient. Results of extensive experiments demonstrated that FRUI-P performs much better than the current state-of-the-art network structure-based algorithm without prior knowledge. Due to its high precision, FRUI-P can additionally be utilized to generate prior knowledge for supervised and semi-supervised schemes.
In applications, this unsupervised identification of anonymous identical users accommodates more scenarios, including those where seed users are unobtainable."} {"_id": "1217c950ba6f73c3e4cbbfb4ed4546f8d6bf914b", "title": "libtissue - implementing innate immunity", "text": "In a previous paper the authors argued the case for incorporating ideas from innate immunity into artificial immune systems (AISs) and presented an outline for a conceptual framework for such systems. A number of key general properties observed in the biological innate and adaptive immune systems were highlighted, and how such properties might be instantiated in artificial systems was discussed in detail. The next logical step is to take these ideas and build a software system with which AISs with these properties can be implemented and experimentally evaluated. This paper reports on the results of that step - the libtissue system."} {"_id": "1797a323f1024ba17389befe19824c8d443a0940", "title": "A multi-centre evaluation of eleven clinically feasible brain PET/MRI attenuation correction techniques using a large cohort of patients", "text": "AIM\nTo accurately quantify the radioactivity concentration measured by PET, emission data need to be corrected for photon attenuation; however, the MRI signal cannot easily be converted into attenuation values, making attenuation correction (AC) in PET/MRI challenging. In order to further improve the current vendor-implemented MR-AC methods for absolute quantification, a number of prototype methods have been proposed in the literature. These can be categorized into three types: template/atlas-based, segmentation-based, and reconstruction-based. These proposed methods in general demonstrated improvements compared to vendor-implemented AC, and many studies report deviations in PET uptake after AC of only a few percent from a gold standard CT-AC. Using a unified quantitative evaluation with identical metrics, subject cohort, and common CT-based reference, the aims of this study were to evaluate a selection of novel methods proposed in the literature, and identify the ones suitable for clinical use.\n\n\nMETHODS\nIn total, 11 AC methods were evaluated: two vendor-implemented (MR-ACDIXON and MR-ACUTE), five based on template/atlas information (MR-ACSEGBONE (Koesters et al., 2016), MR-ACONTARIO (Anazodo et al., 2014), MR-ACBOSTON (Izquierdo-Garcia et al., 2014), MR-ACUCL (Burgos et al., 2014), and MR-ACMAXPROB (Merida et al., 2015)), one based on simultaneous reconstruction of attenuation and emission (MR-ACMLAA (Benoit et al., 2015)), and three based on image-segmentation (MR-ACMUNICH (Cabello et al., 2015), MR-ACCAR-RiDR (Juttukonda et al., 2015), and MR-ACRESOLUTE (Ladefoged et al., 2015)). We selected 359 subjects who were scanned using one of the following radiotracers: [18F]FDG (210), [11C]PiB (51), and [18F]florbetapir (98).
The comparison to AC with a gold standard CT was performed both globally and regionally, with a special focus on robustness and outlier analysis.\n\n\nRESULTS\nThe average performance in PET tracer uptake was within \u00b15% of CT for all of the proposed methods, with the average\u00b1SD global percentage bias in PET FDG uptake for each method being: MR-ACDIXON (-11.3\u00b13.5)%, MR-ACUTE (-5.7\u00b12.0)%, MR-ACONTARIO (-4.3\u00b13.6)%, MR-ACMUNICH (3.7\u00b12.1)%, MR-ACMLAA (-1.9\u00b12.6)%, MR-ACSEGBONE (-1.7\u00b13.6)%, MR-ACUCL (0.8\u00b11.2)%, MR-ACCAR-RiDR (-0.4\u00b11.9)%, MR-ACMAXPROB (-0.4\u00b11.6)%, MR-ACBOSTON (-0.3\u00b11.8)%, and MR-ACRESOLUTE (0.3\u00b11.7)%, ordered by average bias. The overall best performing methods (MR-ACBOSTON, MR-ACMAXPROB, MR-ACRESOLUTE and MR-ACUCL, ordered alphabetically) showed regional average errors within \u00b13% of PET with CT-AC in all regions of the brain with FDG, and the same four methods, as well as MR-ACCAR-RiDR, showed that for 95% of the patients, 95% of brain voxels had an uptake that deviated by less than 15% from the reference. Comparable performance was obtained with PiB and florbetapir.\n\n\nCONCLUSIONS\nAll of the proposed novel methods have an average global performance within likely acceptable limits (\u00b15% of CT-based reference), and the main difference among the methods was found in the robustness, outlier analysis, and clinical feasibility. Overall, the best performing methods were MR-ACBOSTON, MR-ACMAXPROB, MR-ACRESOLUTE and MR-ACUCL, ordered alphabetically. These methods all minimized the number of outliers, standard deviation, and average global and local error. The methods MR-ACMUNICH and MR-ACCAR-RiDR were both within acceptable quantitative limits, so these methods should be considered if processing time is a factor. The method MR-ACSEGBONE also demonstrates promising results, and performs well within the likely acceptable quantitative limits. For clinical routine scans where processing time can be a key factor, this vendor-provided solution currently outperforms most methods. With the performance of the methods presented here, it may be concluded that the challenge of improving the accuracy of MR-AC in adult brains with normal anatomy has been solved to a quantitatively acceptable degree, which is smaller than the quantification reproducibility in PET imaging."} {"_id": "0ce9ad941f6da90068759344abf07a4e15cf4ccf", "title": "A System for Video Surveillance and Monitoring", "text": "The Robotics Institute at Carnegie Mellon University (CMU) and the Sarnoff Corporation are developing a system for autonomous Video Surveillance and Monitoring. The technical objective is to use multiple, cooperative video sensors to provide continuous coverage of people and vehicles in cluttered environments. This paper presents an overview of the system and significant results achieved to date."} {"_id": "b60a86ee106946f74313535c809209a743080f30", "title": "Web Accessibility Evaluation", "text": "Web accessibility evaluation is a broad field that combines different disciplines and skills. It encompasses technical aspects such as the assessment of conformance to standards and guidelines, as well as non-technical aspects such as the involvement of end-users during the evaluation process. Since Web accessibility is a qualitative and experiential measure rather than a quantitative and concrete property, the evaluation approaches need to include different techniques and maintain flexibility and adaptability toward different situations.
At the same time, evaluation approaches need to be robust and reliable so that they can be effective. This chapter explores some of the techniques and strategies to evaluate the accessibility of Web content for people with disabilities. It highlights some of the common approaches to carry out and manage evaluation processes rather than list out individual steps for evaluating Web content. This chapter also provides an outlook on some of the future directions in which the field seems to be heading, and outlines some opportunities for research and"} {"_id": "475c278c88fa76db47d78d620863d5d9366ab1ef", "title": "Phish Phinder: A Game Design Approach to Enhance User Confidence in Mitigating Phishing Attacks", "text": "Phishing is an especially challenging cyber security threat as it does not attack computer systems, but targets the users who work on those systems by relying on the vulnerability of their decision-making ability. Phishing attacks can be used to gather sensitive information from victims and can have devastating impact if they are successful in deceiving the user. Several anti-phishing tools have been designed and implemented but they have been unable to solve the problem adequately. This failure is often due to security experts overlooking the human element and ignoring their fallibility in making trust decisions online. In this paper, we present Phish Phinder, a serious game designed to enhance the user\u2019s confidence in mitigating phishing attacks by providing them with both conceptual and procedural knowledge about phishing. The user is trained through a series of gamified challenges, designed to educate them about important phishing-related concepts, through an interactive user interface. Key elements of the game interface were identified through an empirical study with the aim of enhancing user interaction with the game. We also adopted several persuasive design principles while designing Phish Phinder to enhance phishing avoidance behaviour among users."} {"_id": "463bec3d0298e96e3702e071e241e3898f76eff2", "title": "Morsel-driven parallelism: a NUMA-aware query evaluation framework for the many-core age", "text": "With modern computer architecture evolving, two problems conspire against the state-of-the-art approaches in parallel query execution: (i) to take advantage of many-cores, all query work must be distributed evenly among (soon) hundreds of threads in order to achieve good speedup, yet (ii) dividing the work evenly is difficult even with accurate data statistics due to the complexity of modern out-of-order cores. As a result, the existing approaches for plan-driven parallelism run into load balancing and context-switching bottlenecks, and therefore no longer scale. A third problem faced by many-core architectures is the decentralization of memory controllers, which leads to Non-Uniform Memory Access (NUMA). In response, we present the morsel-driven query execution framework, where scheduling becomes a fine-grained run-time task that is NUMA-aware. Morsel-driven query processing takes small fragments of input data (morsels) and schedules these to worker threads that run entire operator pipelines until the next pipeline breaker. The degree of parallelism is not baked into the plan but can elastically change during query execution, so the dispatcher can react to the execution speed of different morsels but also adjust resources dynamically in response to newly arriving queries in the workload.
Further, the dispatcher is aware of data locality of the NUMA-local morsels and operator state, such that the great majority of executions takes place on NUMA-local memory. Our evaluation on the TPC-H and SSB benchmarks shows extremely high absolute performance and an average speedup of over 30 with 32 cores."} {"_id": "a112d3f6d056cc1d85f4a9902f8660fbeb898f90", "title": "Mean-shift Blob Tracking through Scale Space", "text": "The mean-shift algorithm is an efficient technique for tracking 2D blobs through an image. Although the scale of the mean-shift kernel is a crucial parameter, there is presently no clean mechanism for choosing or updating scale while tracking blobs that are changing in size. We adapt Lindeberg\u2019s theory of feature scale selection based on local maxima of differential scale-space filters to the problem of selecting kernel scale for mean-shift blob tracking. We show that a difference of Gaussian (DOG) mean-shift kernel enables efficient tracking of blobs through scale space. Using this kernel requires generalizing the mean-shift algorithm to handle images that contain negative sample weights."} {"_id": "e3cfdcaf0eb59af379d5081c1caef9b458cd1fba", "title": "Research on Chinese text classification based on Word2vec", "text": "The set of features selected by the traditional chi-square feature selection algorithm is incomplete, which lowers the performance of the final text classification. Therefore, this paper proposes a method that utilizes word vectors generated by word2vec to improve the chi-square feature selection algorithm. The algorithm applies the word vectors generated by word2vec to the traditional feature selection process and uses semantically similar words to supplement the set of features as appropriate. Ultimately, the set of features obtained by this method has better discriminatory power, because feature words with strong discriminatory power, as well as their semantically similar words, have a strong ability to distinguish categories. On this basis, multiple experiments have been carried out in this paper. The experimental results show that the performance of text classification increases after this extension of the feature words."} {"_id": "b20a5427d79c660fe55282da2533071629bfc533", "title": "Deep Learning Advances on Different 3D Data Representations: A Survey", "text": "3D data is a valuable asset in the field of computer vision as it provides rich information about the full geometry of sensed objects and scenes. With the recent availability of large 3D datasets and the increase in computational power, it is today possible to consider applying deep learning to learn specific tasks on 3D data such as segmentation, recognition and correspondence. Depending on the considered 3D data representation, different challenges may be foreseen in using existent deep learning architectures. In this paper, we provide a comprehensive overview of various 3D data representations highlighting the difference between Euclidean and non-Euclidean ones. We also discuss how deep learning methods are applied on each representation, analyzing the challenges to"} {"_id": "756822ece7da9c8b78997cf9001b0ec69a9a2112", "title": "INOCULANTS OF PLANT GROWTH-PROMOTING BACTERIA FOR USE IN AGRICULTURE", "text": "The current state of bacterial inoculants for contemporary agriculture in developed and developing countries is critically evaluated from the point of view of their actual status and future use.
Special emphasis is given to two new concepts of inoculation, as yet unavailable commercially: (i) synthetic inoculants under development for plant-growth promoting bacteria (PGPB) (Bashan and Holguin, 1998), and (ii) inoculation by groups of associated bacteria. This review contains: a brief historical overview of bacterial inoculants; the rationale for plant inoculation with emphasis on developing countries and semiarid agriculture, and the concept and application of mixed inoculants; discussion of microbial formulation including optimization of carrier-compound characteristics, types of existing carriers for inoculants, traditional formulations, future trends in formulations using unconventional materials, encapsulated synthetic formulations, macro and micro formulations of alginate, encapsulation of beneficial bacteria using other materials, regulation and contamination of commercial inoculants, and examples of modern commercial bacterial inoculants; and a consideration of time constraints and application methods for bacterial inoculants, commercial production, marketing, and the prospects of inoculants in modern agriculture."} {"_id": "f1737a54451e79f4c9cff1bc789fbfb0c83aee3a", "title": "Categorical Data Analysis: Away from ANOVAs (transformation or not) and towards Logit Mixed Models.", "text": "This paper identifies several serious problems with the widespread use of ANOVAs for the analysis of categorical outcome variables such as forced-choice variables, question-answer accuracy, choice in production (e.g. in syntactic priming research), et cetera. I show that even after applying the arcsine-square-root transformation to proportional data, ANOVA can yield spurious results. I discuss conceptual issues underlying these problems and alternatives provided by modern statistics. Specifically, I introduce ordinary logit models (i.e. logistic regression), which are well-suited to analyze categorical data and offer many advantages over ANOVA. Unfortunately, ordinary logit models do not include random effect modeling. To address this issue, I describe mixed logit models (Generalized Linear Mixed Models for binomially distributed outcomes, Breslow & Clayton, 1993), which combine the advantages of ordinary logit models with the ability to account for random subject and item effects in one step of analysis. Throughout the paper, I use a psycholinguistic data set to compare the different statistical methods."} {"_id": "70bda6d1377ad5bd4e9c2902f89dc934b856fcf6", "title": "Enterprise Content Management - A Literature Review", "text": "Managing information and content on an enterprise-wide scale is challenging. Enterprise content management (ECM) can be considered as an integrated approach to information management. While this concept received much attention from practitioners, ECM research is still an emerging field of IS research. Most authors that deal with ECM claim that there is little scholarly literature available. After approximately one decade of ECM research, this paper provides an in-depth review of the body of academic research: the ECM domain, its evolution, and main topics are characterized. An established ECM research framework is adopted, refined, and explained with its associated elements and working definitions. On this basis, 68 articles are reviewed, classified, and concepts are derived. Prior research is synthesized and findings are integrated in a concept-centric way.
Further, implications for research and practice, including future trends, are drawn."} {"_id": "254d59ab9cac1921687d2c0313b8f6fc66541dfb", "title": "A New Mura Defect Inspection Way for TFT-LCD Using Level Set Method", "text": "Mura is a typical visual defect of LCD panels, appearing as a local lightness variation with low contrast and a blurry contour. This letter presents a new machine vision inspection method for Mura defects based on the level set method. First, a set of real Gabor filters are applied to eliminate the global textured backgrounds. Then, the level set method is employed for image segmentation with a new region-based active contours model, which is an improvement of the Chan-Vese model, making it more suitable for the segmentation of Mura. Using some results from the level set based segmentation, the defects are quantified based on the SEMU method. Experiments show that the proposed method has better performance for Mura detection and quantification."} {"_id": "1792b2c06e47399ee2eb1a7905056f02d7b9bf24", "title": "Text Analytics in Social Media", "text": "The rapid growth of online social media in the form of collaboratively created content presents new opportunities and challenges to both producers and consumers of information. With the large amount of data produced by various social media services, text analytics provides an effective way to meet users\u2019 diverse information needs. In this chapter, we first introduce the background of traditional text analytics and the distinct aspects of textual data in social media. We next discuss the research progress of applying text analytics in social media from different perspectives, and show how to improve existing approaches to text representation in social media, using real-world examples."} {"_id": "538e03892217075a7c2347f088c727725ebc031d", "title": "Project Management Process Maturity (PM)2 Model", "text": "This paper presents the project management process maturity (PM)2 model that determines and positions an organization\u2019s relative project management level with respect to other organizations. The comprehensive model follows a systematic approach to establish an organization\u2019s current project management level. Each maturity level consists of major project management characteristics, factors, and processes. The model evolves from functionally driven organizational practices to a project-driven organization that incorporates continuous project learning. The (PM)2 model provides an orderly, disciplined process to achieve higher levels of project management maturity. DOI: 10.1061/(ASCE)0742-597X(2002)18:3(150). CE Database keywords: Project management; Models; Organizations."} {"_id": "682c434becc69b9dc70a4c18305f9d733d03f581", "title": "Users of the world, unite! The challenges and opportunities of Social Media", "text": "As of January 2009, the online social networking application Facebook registered more than 175 million active users. To put that number in perspective, this is only slightly less than the population of Brazil (190 million) and over twice the population of Germany (80 million)! At the same time, every minute, 10 hours of content were uploaded to the video sharing platform YouTube. And, the image hosting site Flickr provided access to over 3 billion photographs, making the world-famous Louvre Museum\u2019s collection of 300,000 objects seem tiny in comparison.
According to Forrester Research, 75% of Internet surfers used \u2018\u2018Social Media\u2019\u2019 in the second quarter of 2008 by joining social networks, reading blogs, or contributing reviews to shopping sites; this represents a significant rise from 56% in 2007. The growth is not limited to teenagers, either; members of Generation X, now 35\u201344 years old, increasingly populate the ranks of joiners, spectators, and critics. It is therefore reasonable to say that Social Media represent a revolutionary new trend that should be of interest to companies operating in online space, or any space, for that matter. Yet, not overly many firms seem to act comfortably in a world where consumers can speak so freely."} {"_id": "6cdb6ba83bfaca7b2865a53341106a71e1b3d2dd", "title": "Social Media Metrics \u2014 A Framework and Guidelines for Managing Social Media", "text": "Social media are becoming ubiquitous and need to be managed like all other forms of media that organizations employ to meet their goals. However, social media are fundamentally different from any traditional or other online media because of their social network structure and egalitarian nature. These differences require a distinct measurement approach as a prerequisite for proper analysis and subsequent management. To develop the right social media metrics and subsequently construct appropriate dashboards, we provide a tool kit consisting of three novel components. First, we theoretically derive and propose a holistic framework that covers the major elements of social media, drawing on theories from marketing, psychology, and sociology. We continue to support and detail these elements \u2014 namely \u2018motives,\u2019 \u2018content,\u2019 \u2018network structure,\u2019 and \u2018social roles & interactions\u2019 \u2014 with recent research studies. Second, based on our theoretical framework, the literature review, and practical experience, we suggest nine guidelines that may prove valuable for designing appropriate social media metrics and constructing a sensible social media dashboard. Third, based on the framework and the guidelines we derive managerial implications and suggest an agenda for future research."} {"_id": "535fc722b7e6275ed8101f9805cf06f39edeeb01", "title": "Fast Labeling and Transcription with the Speechalyzer Toolkit", "text": "We describe a software tool named \u201cSpeechalyzer\u201d which is optimized to process large speech data sets with respect to transcription, labeling and annotation. It is implemented as a client server based framework in Java and interfaces software for speech recognition, synthesis, speech classification and quality evaluation. It is mainly applied to the processing of training data for speech recognition and classification models and to benchmarking tests on speech-to-text, text-to-speech and speech categorization software systems."} {"_id": "3c9598a2be80a88fccecde80e6f266af7907d7e7", "title": "Vector Space Representations of Documents in Classifying Finnish Social Media Texts", "text": ""} {"_id": "47c3d413056fe8538cb5ce2d6adc860e70062bf6", "title": "The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables", "text": "Evaluating whether machines improve on human performance is one of the central questions of machine learning.
However, there are many domains where the data is selectively labeled, in the sense that the observed outcomes are themselves a consequence of the existing choices of the human decision-makers. For instance, in the context of judicial bail decisions, we observe the outcome of whether a defendant fails to return for their court appearance only if the human judge decides to release the defendant on bail. This selective labeling makes it harder to evaluate predictive models as the instances for which outcomes are observed do not represent a random sample of the population. Here we propose a novel framework for evaluating the performance of predictive models on selectively labeled data. We develop an approach called contraction which allows us to compare the performance of predictive models and human decision-makers without resorting to counterfactual inference. Our methodology harnesses the heterogeneity of human decision-makers and facilitates effective evaluation of predictive models even in the presence of unmeasured confounders (unobservables) which influence both human decisions and the resulting outcomes. Experimental results on real-world datasets spanning diverse domains such as health care, insurance, and criminal justice demonstrate the utility of our evaluation metric in comparing human decisions and machine predictions."} {"_id": "b927923d991f6e549376496362900b4ddd85b1a4", "title": "Out-of-order transmission enabled congestion and scheduling control for multipath TCP", "text": "With the development of wireless communication technologies, mobile devices are commonly equipped with multiple network interfaces and ready to adopt emerging transport layer protocols such as multipath TCP (MPTCP). The protocol is specifically useful for Internet of Things streaming applications with critical latency and bandwidth demands. To achieve the full potential of MPTCP, major challenges on congestion control, fairness, and path scheduling have been identified and draw considerable research attention. In this paper, we propose a joint congestion control and scheduling algorithm allowing out-of-order transmission as an overall solution. It is achieved by adaptive window coupling, congestion discrimination, and delay-aware packet ordering. The algorithm is implemented in the Linux kernel for real-world experiments. Favorable results are obtained in both shared and distinct bottleneck scenarios."} {"_id": "2d94165c007865a27e1e4dc76b29a25eef2c26bd", "title": "Neuronal factors determining high intelligence.", "text": "Many attempts have been made to correlate degrees of both animal and human intelligence with brain properties. With respect to mammals, a much-discussed trait concerns absolute and relative brain size, either uncorrected or corrected for body size. However, the correlation of both with degrees of intelligence yields large inconsistencies, because although they are regarded as the most intelligent mammals, monkeys and apes, including humans, have neither the absolutely nor the relatively largest brains. The best fit between brain traits and degrees of intelligence among mammals is reached by a combination of the number of cortical neurons, neuron packing density, interneuronal distance and axonal conduction velocity--factors that determine general information processing capacity (IPC), as reflected by general intelligence. The highest IPC is found in humans, followed by the great apes, Old World and New World monkeys.
The IPC of cetaceans and elephants is much lower because of a thin cortex, low neuron packing density and low axonal conduction velocity. By contrast, corvid and psittacid birds have very small and densely packed pallial neurons and relatively many neurons, which, despite very small brain volumes, might explain their high intelligence. The evolution of a syntactical and grammatical language in humans most probably has served as an additional intelligence amplifier, which may have happened in songbirds and psittacids in a convergent manner."} {"_id": "6ad11aed72ff31c0dbdaf8b9123b28b9bef422b7", "title": "Implications of modified waterfall model to the roles and education of health IT professionals", "text": "Electronic Health Records (EHRs) are believed to have the potential to enhance efficiency and provide better health care. However, the benefits could be easily compromised if EHRs are not used appropriately. This paper applies a modified waterfall life cycle model to evaluate the roles of health IT professionals in the adoption and management of EHRs. We then present our development of a Master's program in Medical Informatics for the education of health IT professionals. We conclude that health IT professionals serve key roles in addressing the problems and concerns and help fulfill the vision of EHRs."} {"_id": "1efa222a89838ba52fd38469704bd83aa3b4cad8", "title": "A SUMMARY REVIEW OF VIBRATION-BASED DAMAGE IDENTIFICATION METHODS", "text": "This paper provides an overview of methods to detect, locate, and characterize damage in structural and mechanical systems by examining changes in measured vibration response. Research in vibration-based damage identification has been rapidly expanding over the last few years. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is presented. The methods are categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. The methods are also described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of vibration-based damage identification."} {"_id": "364316a7639eacd00bffa5affb73b217524cf4d8", "title": "Context Based Approach for Second Language Acquisition", "text": "SLAM 2018 focuses on predicting a student\u2019s mistake while using the Duolingo application. In this paper, we describe the system we developed for this shared task. Our system uses a logistic regression model to predict the likelihood of a student making a mistake while answering an exercise on Duolingo in all three language tracks: English/Spanish (en/es), Spanish/English (es/en) and French/English (fr/en). We conduct an ablation study with several features during the development of this system and discover that context-based features play a major role in language acquisition modeling. Our model beats Duolingo\u2019s baseline scores in all three language tracks (AUROC scores for en/es = 0.821, es/en = 0.790 and fr/en = 0.812).
Our work makes a case for providing favourable textual context for students while learning a second language."} {"_id": "4f6fdfbb56543110950416be2493dde5dbfa0a49", "title": "Library Anxiety: A Grounded Theory and Its Development", "text": "This qualitative study explored the feelings of students about using the library for research. Personal writing, collected in beginning composition courses over a two-year period, was analyzed for recurrent themes. It was found that 75 to 85 percent of the students in these courses described their initial response to library research in terms of fear. Three concepts emerged from these descriptions: (1) students generally feel that their own library-use skills are inadequate while the skills of other students are adequate, (2) the inadequacy is shameful and should be hidden, and (3) the inadequacy would be revealed by asking questions. A grounded theory of library anxiety was constructed from these data."} {"_id": "502c7b19cd34f8ddeb9d8ffe4de7797538a442ca", "title": "VEGF-A stimulates lymphangiogenesis and hemangiogenesis in inflammatory neovascularization via macrophage recruitment.", "text": "Lymphangiogenesis, an important initial step in tumor metastasis and transplant sensitization, is mediated by the action of VEGF-C and -D on VEGFR3. In contrast, VEGF-A binds VEGFR1 and VEGFR2 and is an essential hemangiogenic factor. We re-evaluated the potential role of VEGF-A in lymphangiogenesis using a novel model in which both lymphangiogenesis and hemangiogenesis are induced in the normally avascular cornea. Administration of VEGF Trap, a receptor-based fusion protein that binds and neutralizes VEGF-A but not VEGF-C or -D, completely inhibited both hemangiogenesis and the outgrowth of LYVE-1(+) lymphatic vessels following injury. Furthermore, both lymphangiogenesis and hemangiogenesis were significantly reduced in mice transgenic for VEGF-A(164/164) or VEGF-A(188/188) (each of which expresses only one of the three principle VEGF-A isoforms). Because VEGF-A is chemotactic for macrophages and we demonstrate here that macrophages in inflamed corneas release lymphangiogenic VEGF-C/VEGF-D, we evaluated the possibility that macrophage recruitment plays a role in VEGF-A-mediated lymphangiogenesis. Either systemic depletion of all bone marrow-derived cells (by irradiation) or local depletion of macrophages in the cornea (using clodronate liposomes) prior to injury significantly inhibited both hemangiogenesis and lymphangiogenesis. We conclude that VEGF-A recruitment of monocytes/macrophages plays a crucial role in inducing inflammatory neovascularization by supplying/amplifying signals essential for pathological hemangiogenesis and lymphangiogenesis."} {"_id": "eeea4fa7448cc04bb11dc2241a357e1d6699f460", "title": "Psychological Needs as a Predictor of Cyber bullying: a Preliminary Report on College Students", "text": "Recent surveys show that cyber bullying is a pervasive problem in North America. Many news stories have reported cyber bullying incidents around the world. Reports on the prevalence of cyber bullying and victimization as a result of cyber bullying increase yearly. Although we know what cyber bullying is, it is important that we learn more about the psychological effects of it. Therefore, the aim of the current study is to investigate the relationship between psychological needs and cyber bullying.
Participants of the study included 666 undergraduate students (231 males and 435 females) from 15 programs in the Faculty of Education at Selcuk University, Turkey. Questions about demographics, engagement in and exposure to cyber bullying, and the Adjective Check List were administered. 22.5% of the students reported engaging in cyber bullying at least one time, and 55.3% of the students reported being victims of cyber bullying at least once in their lifetime. Males reported more cyber bullying behavior than females. Results indicate that aggression and succorance positively predict cyber bullying whereas intraception negatively predicts it. In addition, endurance and affiliation negatively predict cyber victimization. Only the need for change was found to be a positive but weak predictor of cyber victimization. In light of these findings, aggression and intraception should be investigated further in future research on cyber bullying."} {"_id": "4661794de39ac660a4f37a2ef44b0b1a009d5575", "title": "Machine Reading", "text": "Over the last two decades or so, Natural Language Processing (NLP) has developed powerful methods for low-level syntactic and semantic text processing tasks such as parsing, semantic role labeling, and text categorization. Over the same period, the fields of machine learning and probabilistic reasoning have yielded important breakthroughs as well. It is now time to investigate how to leverage these advances to understand text."} {"_id": "7d3e8733e7c2df397cfbb895b99448f27d87ca8f", "title": "Real-time detection of students' emotional states in the classroom", "text": "Today, as social media sharing increases, the instant emotional state of the students changes frequently. This situation is greatly affecting the learning process as well as the motivation of the students in the classroom. Sometimes, it is insufficient for the educator to observe the emotional states of students. Therefore, an automatic system is needed that can detect and analyze the emotional states of students in the classroom. In this study, an auxiliary information system, which uses image processing and human-computer interaction, has been developed for use in the field of education. In this system, the dataset obtained from the students' faces was tested using various machine learning algorithms. As a result, the accuracy of this system was found to be 97.15% using a support vector machine. This system is intended to direct the educator to communicate with the students and to increase their motivation when necessary."} {"_id": "3e8a2080e4c05706b0ea6759781ba52e00dcda07", "title": "Automatic parking identification and vehicle guidance with road awareness", "text": "Advanced driver assistance systems (ADAS) are becoming more common in safety and convenience applications. The computer vision based ADAS described in this paper is an add-on system suitable for a variety of cars. It detects vacant legal parking spots, and safely guides the vehicle into the selected parking. Detection can be performed in both indoor parking lots and along roadsides. The system is composed of three standard computer-connected webcams, which are attached to the vehicle. Upon slowing down, the system starts searching automatically for a right hand-side vacant parking spot, while being aware of parking color signs. Once detected, the parking orientation is determined, and the driver is notified. Once a parking is selected by the driver, the relative position between the vehicle and the parking spot is monitored.
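Stepping back to the classroom emotion-detection record above, its final stage is a standard SVM over face-derived features. A minimal sketch of that classification stage, assuming scikit-learn; the feature extraction (face detection, landmarks) is assumed to happen upstream, and the vectors here are synthetic placeholders:

```python
# Sketch of the final classification stage of a classroom emotion detector
# like the one described above: an SVM over precomputed facial features.
# Features and labels are synthetic placeholders for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 68 * 2))   # e.g. flattened 68-point face landmarks
y = rng.integers(0, 3, size=200)     # 3 emotion classes (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))  # ~chance on random data
```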
Vocal and visual parking guidance instructions are presented to the driver. In addition, if, during parking, an object is moving on the road towards the car, a safety alert is given. The system is universal in the sense that, as an add-on system, it can be installed on any private 4-wheeled vehicle, and is suited to urban driving environments."} {"_id": "728c36755e73758f9d2e18126b060099d1f97670", "title": "Deep Learning for Sensorless 3D Freehand Ultrasound Imaging", "text": "3D freehand ultrasound imaging is a very promising imaging modality but its acquisition is often neither portable nor practical because of the required external tracking hardware. Building a sensorless solution that is fully based on image analysis would thus have many potential applications. However, previously proposed approaches rely on physical models whose assumptions only hold on synthetic or phantom datasets, failing to translate to actual clinical acquisitions with sufficient accuracy. In this paper, we investigate the alternative approach of using statistical learning to circumvent this problem. To that end, we are leveraging the unique modeling capabilities of convolutional neural networks in order to build an end-to-end system where we directly predict the ultrasound probe motion from the images themselves. Based on thorough experiments using both phantom acquisitions and a set of 100 in-vivo long ultrasound sweeps for vein mapping, we show that our novel approach significantly outperforms the standard method and has direct clinical applicability, with an average drift error of merely 7% over the whole length of each ultrasound clip."} {"_id": "11dbb38b7ff8a54ac8387264abd36b017f50f202", "title": "EPBC: Efficient Public Blockchain Client for lightweight users", "text": "Public blockchains provide a decentralized method for storing transaction data and have many applications in different sectors. In order for a user to track transactions, a simple method is that every user keeps a local copy of the entire public ledger. Since the size of a ledger keeps growing, this method becomes increasingly less practical, especially for lightweight users such as IoT devices and smartphones. In order to deal with this problem, there have been some proposals. However, existing solutions either achieve a limited storage reduction (e.g., simple payment verification), or rely on some strong security assumption (e.g., the use of a trusted server). We propose EPBC, a novel and efficient transaction verification scheme for public ledgers, which only requires lightweight users to store a small amount of data that is independent of the size of the blockchain. We analyze EPBC's performance and security, and discuss its integration with existing public ledger systems. Experimental results confirm that EPBC is practical for lightweight users."} {"_id": "16a2899351f589174714a469ccd9f7ee264ecb37", "title": "A unifying computational framework for motor control and social interaction.", "text": "Recent empirical studies have implicated the use of the motor system during action observation, imitation and social interaction. In this paper, we explore the computational parallels between the processes that occur in motor control and in action observation, imitation, social interaction and theory of mind.
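The EPBC abstract above does not spell out its construction, but the general problem it targets, letting a client verify a transaction while storing far less than the ledger, can be illustrated with a simplified SPV-style Merkle-proof check using only the standard library. This is a generic sketch of the problem setting, not EPBC's actual scheme:

```python
# Simplified SPV-style Merkle-proof verification: a lightweight client checks
# transaction inclusion while storing only a block's Merkle root.
# Generic illustration of the setting EPBC addresses, NOT EPBC's scheme.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(tx_hash: bytes, proof: list, root: bytes) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left) pairs, leaf to root."""
    node = tx_hash
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Tiny 4-leaf tree built by a full node; the client keeps only the root.
leaves = [h(f"tx{i}".encode()) for i in range(4)]
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)

# Proof that tx2 is included: sibling leaf 3 (on the right), then l01 (left).
proof = [(leaves[3], False), (l01, True)]
print(verify_inclusion(leaves[2], proof, root))  # True
```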
In particular, we examine the extent to which motor commands acting on the body can be equated with communicative signals acting on other people and suggest that computational solutions for motor control may have been extended to the domain of social interaction."} {"_id": "965a753fab6e5d0bc271b6bbff473b506a3bb649", "title": "A Review of Wideband Wide-Angle Scanning 2-D Phased Array and Its Applications in Satellite Communication", "text": "In this review, research progress on wideband wide-angle scanning two-dimensional phased arrays is summarized. The importance of the wideband and the wide-angle scanning characteristics for satellite communication is discussed. Issues like grating lobe avoidance, active reflection coefficient suppression and gain fluctuation reduction are emphasized in this review. Besides, techniques to address these issues and methods to realize the wideband wide-angle scanning phased array are reviewed."} {"_id": "ffde7a4733f713d85643b2ec53fc258d9ce3b713", "title": "Impact of research on practice in the field of inspections, reviews and walkthroughs: learning from successful industrial uses", "text": "Software inspections, reviews, and walkthroughs have become a standard process component in many software development domains. Maturity level 3 of the CMM-I requires establishment of peer reviews [12] and substantial sustained improvements in quality and productivity have been reported as a result of using reviews ([16], [21], [22], [27]). The NSF Impact project identifies the degree to which these industrial success cases have been instigated and improved by research in software engineering.\n This research identifies that there is widespread adoption of inspections, reviews or walkthroughs but that companies do not generally exploit their full potential. However, there exist sustained industrial success cases with respect to their widespread and measurably successful application. It also identifies research in software engineering that can be credibly documented as having influenced the industrial success cases. Credible documentation may exist in the form of publications or documented reports by witnesses. Due to the semi-formal nature of inspections, reviews, and walkthroughs, a specific focus is given to empirical research results as motivators for adoption. Through the examination of one detailed case study, it is shown that software engineering research has had a significant impact on practice and that the impact can be traced in this case from research to that practice. The case study chosen provides evidence of both success and failure regarding sustained application in practice.\n Thus the analysis of historic impact chains of research reveals a clear impact of software engineering research on sustained industrial success for inspections, reviews and walkthroughs.
More importantly, in impact chains where the empirical results have not been established, we conclude that success has not been achieved or has not been sustained.\n The paper closes with (1) lessons learned for creating the sustained use and impact of semi-formal software engineering processes, (2) a request for researchers and practitioners to further consider how their work can improve the effectiveness of research and practice, and (3) a request to contribute additional success cases and impact factors to the authors' database for future enhancements of this paper."} {"_id": "b04175bb99d6beff0f201ed82971aeb91d2c081d", "title": "Exploring Deep Learning Methods for Discovering Features in Speech Signals", "text": "This thesis makes three main contributions to the area of speech recognition with Deep Neural Network Hidden Markov Models (DNN-HMMs). Firstly, we explore the effectiveness of features learnt from speech databases using Deep Learning for speech recognition. This contrasts with prior works that have largely confined themselves to using traditional features such as Mel Cepstral Coefficients and Mel log filter banks for speech recognition. We start by showing that features learnt on raw signals using Gaussian-ReLU Restricted Boltzmann Machines can achieve accuracy close to that achieved with the best traditional features. These features are, however, learnt using a generative model that ignores domain knowledge. We develop methods to discover features that are endowed with meaningful semantics that are relevant to the domain using capsules. To this end, we extend previous work on transforming autoencoders and propose a new autoencoder with a domain-specific decoder to learn capsules from speech databases. We show that capsule instantiation parameters can be combined with Mel log filter banks to produce improvements in phone recognition on TIMIT. On WSJ the word error rate does not improve, even though we get strong gains in classification accuracy. We speculate this may be because of the mismatched objectives of word error rate over an utterance and frame error rate on the sub-phonetic class for a frame. Secondly, we develop a method for data augmentation in speech datasets. Such methods result in strong gains in object recognition, but have largely been ignored in speech recognition. Our data augmentation encourages the learning of invariance to vocal tract length of speakers. The method is shown to improve the phone error rate on TIMIT and the word error rate on a 14-hour subset of WSJ. Lastly, we develop a method for learning and using a longer range model of targets, conditioned on the input. This method predicts the labels for multiple frames together and uses a geometric average of these predictions during decoding. It produces state-of-the-art results on phone recognition with TIMIT and also produces significant gains on WSJ."} {"_id": "4b23bba6706147da34044e041c2871719d9de1af", "title": "Collaborative Quantization for Cross-Modal Similarity Search", "text": "Cross-modal similarity search is a problem about designing a search system supporting querying across content modalities, e.g., using an image to search for texts or using a text to search for images.
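Quantization-based search methods of the kind this record goes on to describe rely on fast distance computation via lookup tables. A generic product-quantization-style sketch of that mechanism follows; it illustrates the family of techniques, not this paper's joint cross-modal quantizer, and all sizes are arbitrary:

```python
# Generic product-quantization distance-table lookup, sketching the fast
# search mechanism that quantization-based similarity search relies on.
import numpy as np

rng = np.random.default_rng(1)
D, M, K = 8, 2, 16            # dim, subspaces, centroids per subspace
sub = D // M

codebooks = rng.normal(size=(M, K, sub))   # in practice learned, not random
database = rng.normal(size=(1000, D))

# Encode: each vector becomes M small codes (nearest centroid per subspace).
codes = np.stack([
    np.argmin(((database[:, m*sub:(m+1)*sub, None] -
                codebooks[m].T[None]) ** 2).sum(axis=1), axis=1)
    for m in range(M)
], axis=1)

def search(query, topk=5):
    # Distance table: squared distance from each query subvector to each centroid.
    table = np.stack([
        ((query[m*sub:(m+1)*sub] - codebooks[m]) ** 2).sum(axis=1)
        for m in range(M)
    ])
    # Asymmetric distance = sum of table lookups per database item.
    dists = table[np.arange(M), codes].sum(axis=1)
    return np.argsort(dists)[:topk]

print(search(rng.normal(size=D)))
```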
This paper presents a compact coding solution for efficient search, with a focus on the quantization approach, which has already shown superior performance over hashing solutions in single-modal similarity search. We propose a cross-modal quantization approach, which is among the early attempts to introduce quantization into cross-modal search. The major contribution lies in jointly learning the quantizers for both modalities through aligning the quantized representations for each pair of image and text belonging to a document. In addition, our approach simultaneously learns the common space for both modalities in which quantization is conducted to enable efficient and effective search using the Euclidean distance computed in the common space with fast distance table lookup. Experimental results compared with several competitive algorithms over three benchmark datasets demonstrate that the proposed approach achieves state-of-the-art performance."} {"_id": "08f6b1a26715648b7d475e5ca85bda69b2cf86d5", "title": "Centrocestus formosanus (Digenea: Heterophyidae) encysted in the freshwater fish, Puntius brevis, from Lao PDR.", "text": "The metacercariae of Centrocestus formosanus, a minute intestinal trematode of mammals and birds, were detected in the freshwater fish, Puntius brevis, from Vientiane Municipality, Lao PDR. The metacercariae were experimentally fed to mice, and adult flukes were recovered in their small intestines 7 days later. The adult flukes were morphologically characterized by having 32 (rarely 34) circumoral spines arranged in 2 alternating rows, a large bipartite seminal vesicle, an oval-shaped ovary, and an X-shaped excretory bladder. Based on these characters, the adults were identified as Centrocestus formosanus (Nishigori, 1924). The taxonomic significance of C. formosanus, in relation to a closely related species, C. caninus (Leiper, 1913), is briefly discussed. It has been first verified by adult worm recovery that C. formosanus is prevalent in Vientiane areas of Lao PDR, taking the freshwater fish, P. brevis, as a second intermediate host."} {"_id": "6d604f26cef4cae50bd42d9fab02952cdf93de42", "title": "A Low-THD Class-D Audio Amplifier With Dual-Level Dual-Phase Carrier Pulsewidth Modulation", "text": "In this paper, a class-D audio amplifier which combines the advantages of the phase-shifted carrier pulsewidth modulation (PWM) and the multiple-level carrier PWM is proposed with a dual-level dual-phase carrier (DLDPC) PWM. The proposed closed-loop amplifier includes a second-order integrator and a DLDPC triangular wave generator. Two sets of 180\u00b0 out-of-phase triangular waves are used as carriers, and each set has its respective offset voltage level with nonoverlapping amplitude. By performing the double Fourier analysis, it can be found that the linearity can be enhanced and the distortion can be reduced with the proposed modulation. Experimental results show that the proposed fully differential DLDPC PWM class-D audio amplifier features a total harmonic distortion lower than 0.01% with an output voltage swing of \u00b15 V."} {"_id": "8ab2d7246cce944b9dbd87b291f60e09b64ad611", "title": "Interactions between working memory, attention and eye movements.", "text": "This paper reviews the recent findings on working memory, attention and eye movements.
We discuss the research that shows that many phenomena related to visual attention that take place when selecting relevant information from the environment are similar to processes needed to keep information active in working memory. We discuss new data that show that when retrieving information from working memory, people may allocate visual spatial attention to the empty location in space that used to contain the information that has to be retrieved. Moreover, we show that maintaining a location in working memory not only may involve attention rehearsal, but might also recruit the oculomotor system. Recent findings seem to suggest that remembering a location may involve attention-based rehearsal in higher brain areas, while at the same time there is inhibition of specific motor programs at lower brain areas. We discuss the possibility that working memory functions do not reside at a special area in the brain, but emerge from the selective recruitment of brain areas that are typically involved in spatial attention and motor control."} {"_id": "6a63f001aec8e5605fe11f11efce0780689a7323", "title": "Use of Design Patterns in PHP-Based Web Application Frameworks", "text": "It is known that design patterns of object-oriented programming are used in the design of Web applications, but there is insufficient information about which patterns are used, how often they are used, and the level of quality at which they are used. This paper describes the results concerning the use of design patterns in projects which develop PHP-based Web application frameworks. Documentation and source code were analysed for 10 frameworks, finding that design patterns are used in the development of Web applications, but not extensively and without much consistency. The results and conclusions can be of use when planning and developing new projects because the existing experience can be taken into account. The paper also offers information on which design patterns are not used because they may be artificial or hard-to-use in real projects. Alternatively, developers may simply lack information on the existence of the design patterns."} {"_id": "b813118f82ddb0ff21e3d3df31cddb815822b103", "title": "PHP FRAMEWORK FOR DATABASE MANAGEMENT BASED ON MVC PATTERN", "text": "PHP is a powerful language to develop dynamic and interactive web applications. One of the defining features of PHP is the ease for developers to connect and manipulate a database. PHP provides functions for database manipulation. However, database management is done by the Structured Query Language (SQL). Most novice programmers often have trouble with SQL syntax. In this paper, we present the PHP framework for database management based on the MVC pattern. The MVC pattern is very useful for the architecture of web applications, separating the model, view and controller of a web application. The PHP framework encapsulates common database operations: INSERT, UPDATE, DELETE and SELECT. Developers will not be required to consider the specific SQL statement syntax, just to call the corresponding method in the model module. In addition, we use White-Box testing for the code verification in the model module. Lastly, a web application example is shown to illustrate the process of the PHP framework."} {"_id": "7de20692c774798016e06d2eb0ade959c61df350", "title": "Patterns of Enterprise Application Architecture", "text": "protected void doInsert(DomainObject subject, PreparedStatement insertStatement) throws SQLException; class PersonMapper...
protected String insertStatement() { return \"INSERT INTO people VALUES (?, ?, ?, ?)\"; } protected void doInsert( DomainObject abstractSubject, PreparedStatement stmt) throws SQLException { Person subject = (Person) abstractSubject; stmt.setString(2, subject.getLastName()); stmt.setString(3, subject.getFirstName()); stmt.setInt(4, subject.getNumberOfDependents()); } Example: Separating the Finders (Java) To allow domain objects to invoke finder behavior I can use Separated Interface (476) to separate the finder interfaces from the mappers (Figure 10.5). I can put these finder interfaces in a separate package that's visible to the domain layer, or, as in this case, I can put them in the domain layer itself. Figure 10.5. Defining a finder interface in the domain package."} {"_id": "98c820611bfd24bc7e5752192182d991540ab939", "title": "The Research of PHP Development Framework Based on MVC Pattern", "text": "PHP is one of the leading web development languages; however, the existing PHP development model is organized without a clear structure, mixing data access code, business logic processing, and the web presentation layer together. As a result, it has brought about many problems in web applications and can no longer meet the demands of rapid web development. In this paper, an implementation of PHP based on the MVC design pattern, the FDF framework, is provided for PHP developers. It offers a framework for web applications that separates the data, view and control of web applications to achieve loose coupling, thereby enhancing the efficiency, reliability, maintainability and scalability of application development."} {"_id": "15b53584315083345e88f2d3c436197744f72a01", "title": "Co-regularized least square regression for multi-view multi-class classification", "text": "Many classification problems involve instances that are unlabeled, multi-view and multi-class. However, few techniques have been benchmarked for this complex scenario, with a notable exception that combines co-trained naive bayes (CoT-NB) with BCH coding. In this paper, we benchmark the performance of co-regularized least square regression (CoR-LS) for semi-supervised multi-view multi-class classification. We find it performed consistently and significantly better than CoT-NB over eight data sets at different scales. We also find that for CoR-LS, identity coding is optimal on large data sets and BCH coding is optimal on small data sets. Optimal scoring, a data-dependent coding scheme, often provides near-optimal performance."} {"_id": "ede538be81d0d2f7982500374a329be18438881d", "title": "Introduction to Fillers.", "text": "BACKGROUND\nOver the last few years, injectable soft-tissue fillers have become an integral part of cosmetic therapy, with a wide array of products designed to fill lines and folds and revolumize the face.\n\n\nMETHODS\nThis review describes cosmetic fillers currently approved by the Food and Drug Administration and discusses new agents under investigation for use in the United States.\n\n\nRESULTS\nBecause of product refinements over the last few years-greater ease of use and longevity, the flexibility of multiple formulations within one line of products, and the ability to reverse poor clinical outcomes-practitioners have gravitated toward the use of biodegradable agents that stimulate neocollagenesis for sustained aesthetic improvements lasting up to a year or more with minimal side effects.
Permanent implants provide long-lasting results but are associated with greater potential risk of complications and require the skilled hand of an experienced injector.\n\n\nCONCLUSIONS\nA variety of biodegradable and nonbiodegradable filling agents are available or under investigation in the United States. Choice of product depends on injector preference and the area to be filled. Although permanent agents offer significant clinical benefits, modern biodegradable fillers are durable and often reversible in the event of adverse effects."} {"_id": "ab3d0ea202b2641eeb66f1d6a391a43598ba22b9", "title": "Truncated Importance Sampling for Reinforcement Learning with Experience Replay", "text": "Reinforcement Learning (RL) is considered here as an adaptation technique for neural controllers of machines. The goal is to make Actor-Critic algorithms require less agent-environment interaction to obtain policies of the same quality, at the cost of additional background computations. We propose to achieve this goal in the spirit of experience replay. An estimation method for the improvement direction of a changing policy, based on preceding experience, is essential here. We propose one that uses truncated importance sampling. We derive bounds on the bias of this type of estimator and prove that this bias asymptotically vanishes. In the experimental study we apply our approach to the classic Actor-Critic and obtain a 20-fold increase in speed of learning."} {"_id": "3a2fed81b9ede781a2bb8c8db949a2edcec21aa8", "title": "EmEx, a Tool for Automated Emotive Face Recognition Using Convolutional Neural Networks", "text": "The work described in this paper is an attempt to contribute to one of the most stimulating and promising sectors in the field of emotion recognition: health care management. Multidisciplinary studies in artificial intelligence, augmented reality and psychology have stressed the importance of emotions in communication and awareness. The intent is to recognize human emotions, processing images streamed in real-time from a mobile device. The adopted techniques involve the use of open source libraries of visual recognition and machine learning approaches based on convolutional neural networks (CNN)."} {"_id": "71e258b1aeea7a0e2b2076a4fddb0679ad2ecf9f", "title": "A Novel Secure Architecture for the Internet of Things", "text": "The \"Internet of Things\"(IoT) opens opportunities for devices and software to share information on an unprecedented scale. However, such a large interconnected network poses new challenges to system developers and users. In this article, we propose a layered architecture for IoT systems. Using this model, we try to identify and assess each layer's challenges. We also discuss several existing technologies that can be used to make this architecture secure."} {"_id": "2d99efd269098d8de9f924076f1946150805aafb", "title": "MIMO Performance of Realistic UE Antennas in LTE Scenarios at 750 MHz", "text": "Multiple-input-multiple-output (MIMO) is a technique to achieve high data rates in mobile communication networks. Simulations are performed at both the antenna level and Long-Term Evolution (LTE) system level to assess the performance of realistic handheld devices with dual antennas at 750 MHz.
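The truncated importance-sampling estimator at the heart of the experience-replay record above caps each importance weight to bound variance at the cost of a bias that, per the abstract, vanishes asymptotically. A minimal sketch, with the truncation level c treated as an assumed free parameter:

```python
# Minimal sketch of truncated importance sampling for off-policy estimation,
# the core device of the experience-replay approach described above.
# pi_new / pi_old are action probabilities under the current and the
# data-generating policy; c is the truncation level (an assumed parameter).
import numpy as np

def truncated_is_estimate(returns, pi_new, pi_old, c=10.0):
    w = pi_new / pi_old                 # raw importance weights
    w_trunc = np.minimum(w, c)          # truncation bounds the variance,
                                        # introducing a (vanishing) bias
    return np.mean(w_trunc * returns)

rng = np.random.default_rng(0)
returns = rng.normal(1.0, 1.0, size=10_000)
pi_old = rng.uniform(0.05, 1.0, size=10_000)
pi_new = rng.uniform(0.05, 1.0, size=10_000)

print(truncated_is_estimate(returns, pi_new, pi_old))
```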
It is shown that MIMO works very well and gives substantial performance gain in user devices with a quarter-wavelength antenna separation."} {"_id": "39bc177ddbd0cd1c02f93340e84a6ca4973960d7", "title": "Implementation and verification of a generic universal memory controller based on UVM", "text": "This paper presents a coverage driven constraint random based functional verification method based on the Universal Verification Methodology (UVM) using SystemVerilog for a generic universal memory controller architecture. This universal memory controller aims to improve on the performance of existing memory controllers through a complete integration of their features, in addition to providing novel features. It also reduces power consumption by providing fine-grained power control through several proposed power levels that fit all power scenarios. When implementing an architecture like the proposed generic universal memory controller, UVM is the best choice for building a well-constructed, highly controlled and reusable verification environment to efficiently verify it. More than 200 coverage points have been covered to validate the integrated features, positioning the proposed universal memory controller to replace existing controllers, as it provides all of their powerful features in addition to novel features and controls two of the most dominant types of memory, FLASH and DRAM, through one memory controller."} {"_id": "5e671ff1d980aca18cb4859f7bbe38924eb6dd86", "title": "Towards Internet of Things : Survey and Future Vision", "text": "Internet of things is a promising research area due to its importance in many commerce, industry, and education applications. Recently, new applications and research challenges in numerous areas of Internet of things have emerged. In this paper, we discuss the history of Internet of things, different proposed architectures of Internet of things, research challenges and open problems related to the Internet of things. We also introduce the concept of an Internet of things database and discuss the future vision of Internet of things."} {"_id": "ea14548bc4ab5e5a8af29696bcd2c1eb7463d02a", "title": "Joint Semantic and Latent Attribute Modelling for Cross-Class Transfer Learning", "text": "A number of vision problems such as zero-shot learning and person re-identification can be considered as cross-class transfer learning problems. As mid-level semantic properties shared across different object classes, attributes have been studied extensively for knowledge transfer across classes. Most previous attribute learning methods focus only on human-defined/nameable semantic attributes, whilst ignoring the fact that there also exist undefined/latent shareable visual properties, or latent attributes. These latent attributes can be either discriminative or non-discriminative parts depending on whether they can contribute to an object recognition task. In this work, we argue that learning the latent attributes jointly with user-defined semantic attributes not only leads to better representation but also helps semantic attribute prediction.
A novel dictionary learning model is proposed which decomposes the dictionary space into three parts corresponding to semantic, latent discriminative and latent background attributes respectively. Such a joint attribute learning model is then extended by following a multi-task transfer learning framework to address a more challenging unsupervised domain adaptation problem, where annotations are only available on an auxiliary dataset and the target dataset is completely unlabelled. Extensive experiments show that the proposed models, though being linear and thus extremely efficient to compute, produce state-of-the-art results on both zero-shot learning and person re-identification."} {"_id": "0e1e19c8f1c6f2e0182a119756d3695900cdc18c", "title": "The Parallel Knowledge Gradient Method for Batch Bayesian Optimization", "text": "In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g. when evaluating the performances of several different neural networks in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm \u2014 the parallel knowledge gradient method. By construction, this method provides the one-step Bayes optimal batch of points to sample. We provide an efficient strategy for computing this Bayes-optimal batch of points, and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy."} {"_id": "a1a2878be2f0733878d1d53c362b116d13135c24", "title": "The structure of musical preferences: a five-factor model.", "text": "Music is a cross-cultural universal, a ubiquitous activity found in every known human culture. Individuals demonstrate manifestly different preferences in music, and yet relatively little is known about the underlying structure of those preferences. Here, we introduce a model of musical preferences based on listeners' affective reactions to excerpts of music from a wide variety of musical genres. The findings from 3 independent studies converged to suggest that there exists a latent 5-factor structure underlying music preferences that is genre free and reflects primarily emotional/affective responses to music. We have interpreted and labeled these factors as (a) a Mellow factor comprising smooth and relaxing styles; (b) an Unpretentious factor comprising a variety of different styles of sincere and rootsy music such as is often found in country and singer-songwriter genres; (c) a Sophisticated factor that includes classical, operatic, world, and jazz; (d) an Intense factor defined by loud, forceful, and energetic music; and (e) a Contemporary factor defined largely by rhythmic and percussive music, such as is found in rap, funk, and acid jazz. The findings from a fourth study suggest that preferences for the MUSIC factors are affected by both the social and the auditory characteristics of the music."} {"_id": "37ca01962475d9867b970d35a133584b3935b9c5", "title": "Coagulation and ablation patterns of high-intensity focused ultrasound on a tissue-mimicking phantom and cadaveric skin", "text": "High-intensity focused ultrasound (HIFU) can be applied noninvasively to create focused zones of tissue coagulation on various skin layers. 
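The five-factor structure reported in the musical-preferences record above is the kind of result standard latent-factor methods recover. A sketch of that analysis pattern using scikit-learn's FactorAnalysis on synthetic ratings; the data, listener counts and planted structure are illustrative, not the study's:

```python
# Sketch of recovering a latent factor structure from affective ratings of
# music excerpts, mirroring the analysis style behind the MUSIC five-factor
# model above. Ratings are synthetic; only the pattern is illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_listeners, n_excerpts, n_factors = 300, 52, 5

# Synthetic data with a planted 5-factor structure plus noise.
loadings = rng.normal(size=(n_factors, n_excerpts))
scores = rng.normal(size=(n_listeners, n_factors))
ratings = scores @ loadings + 0.5 * rng.normal(size=(n_listeners, n_excerpts))

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(ratings)
print(fa.components_.shape)   # (5, 52): one loading vector per latent factor
```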
We performed a comparative study of HIFU, evaluating patterns of focused tissue coagulation and ablation upon application thereof. A tissue-mimicking (TM) phantom was prepared with bovine serum albumin and polyacrylamide hydrogel to evaluate the geometric patterns of HIFU-induced thermal injury zones (TIZs) for five different HIFU devices. Additionally, for each device, we investigated histologic patterns of HIFU-induced coagulation and ablation in serial sections of cadaveric skin of the face and neck. All HIFU devices generated remarkable TIZs in the TM phantom, with different geometric values of coagulation for each device. Most of the TIZs seemed to be separated into two or more tiny parts. In cadaveric skin, characteristic patterns of HIFU-induced ablation and coagulation were noted along the mid to lower dermis at the focal penetration depth of 3\u00a0mm and along subcutaneous fat to the superficial musculoaponeurotic system or the platysma muscle of the neck at 4.5\u00a0mm. Additionally, remarkable pre-focal areas of tissue coagulation were observed in the upper and mid dermis at the focal penetration depth of 3\u00a0mm and mid to lower dermis at 4.5\u00a0mm. For five HIFU devices, we outlined various patterns of HIFU-induced TIZ formation along pre-focal, focal, and post-focal areas of TM phantom and cadaveric skin of the face and neck."} {"_id": "b6d111606afcd911afd8d9fb70857988f68bcb15", "title": "Revising Immersion: A Conceptual Model for the Analysis of Digital Game Involvement", "text": "Game studies literature has recently seen a renewed interest in game experience with the publication of a number of edited collections, dissertations and conferences focusing on the subject. This paper aims to contribute to that growing body of literature by presenting a summary of my doctoral research in digital game involvement and immersion. It outlines a segment of a conceptual model that describes and analyzes the moment-by-moment involvement with digital games on a variety of experiential dimensions corresponding to six broad categories of game features. The paper ends with a proposal to replace the metaphor of immersion with one of incorporation. Incorporation aims to avoid the binary notion of the player\u2019s plunge into the virtual environment characteristic of \u201cimmersion\u201d while dispelling the vagueness of application that all too often surrounds the term."} {"_id": "397e30b7a9fef1f210f29ab46eda013efb093fef", "title": "SpatialHadoop: towards flexible and scalable spatial processing using mapreduce", "text": "Recently, MapReduce frameworks, e.g., Hadoop, have been used extensively in different applications that include tera-byte sorting, machine learning, and graph processing. With the huge volumes of spatial data coming from different sources, there is an increasing demand to exploit the efficiency of Hadoop, coupled with the flexibility of the MapReduce framework, in spatial data processing. However, Hadoop falls short in supporting spatial data efficiently as the core is unaware of spatial data properties. This paper describes SpatialHadoop, a full-fledged MapReduce framework with native support for spatial data. SpatialHadoop is a comprehensive extension to Hadoop that injects spatial data awareness in each Hadoop layer, namely, the language, storage, MapReduce, and operations layers. In the language layer, SpatialHadoop adds a simple and expressive high level language for spatial data types and operations.
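As a toy single-machine analogue of the spatial indexing and range queries that the following layers provide (an illustration of the idea, not SpatialHadoop's distributed implementation):

```python
# Toy single-machine grid index with a range query, illustrating the kind of
# spatial indexing and operations SpatialHadoop builds into Hadoop's storage
# and operations layers. This is an illustration, not the actual system.
from collections import defaultdict

class GridIndex:
    def __init__(self, cell_size=10.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)      # (cx, cy) -> list of points

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, x, y, record):
        self.cells[self._cell(x, y)].append((x, y, record))

    def range_query(self, xmin, ymin, xmax, ymax):
        cx0, cy0 = self._cell(xmin, ymin)
        cx1, cy1 = self._cell(xmax, ymax)
        for cx in range(cx0, cx1 + 1):       # visit only overlapping cells
            for cy in range(cy0, cy1 + 1):
                for x, y, rec in self.cells[(cx, cy)]:
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        yield rec

idx = GridIndex()
idx.insert(3, 4, "a"); idx.insert(25, 30, "b"); idx.insert(7, 9, "c")
print(list(idx.range_query(0, 0, 10, 10)))   # ['a', 'c']
```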
In the storage layer, SpatialHadoop adapts traditional spatial index structures, Grid, R-tree and R+-tree, to form a two-level spatial index. SpatialHadoop enriches the MapReduce layer with two new components, SpatialFileSplitter and SpatialRecordReader, for efficient and scalable spatial data processing. In the operations layer, SpatialHadoop is already equipped with a dozen operations, including range query, kNN, and spatial join. The flexibility and open source nature of SpatialHadoop allow more spatial operations to be implemented efficiently using MapReduce. Extensive experiments on a real system prototype and real datasets show that SpatialHadoop achieves orders of magnitude better performance than Hadoop for spatial data processing."} {"_id": "5863433b7f1fd3e73aba0b747b530012f3679bc8", "title": "Impact of a workplace stress reduction program on blood pressure and emotional health in hypertensive employees.", "text": "OBJECTIVES\nThis study examined the impact of a workplace-based stress management program on blood pressure (BP), emotional health, and workplace-related measures in hypertensive employees of a global information technology company.\n\n\nDESIGN\nThirty-eight (38) employees with hypertension were randomly assigned to a treatment group that received the stress-reduction intervention or a waiting control group that received no intervention during the study period. The treatment group participated in a 16-hour program, which included instruction in positive emotion refocusing and emotional restructuring techniques intended to reduce sympathetic nervous system arousal, stress, and negative affect, increase positive affect, and improve performance. Learning and practice of the techniques was enhanced by heart rate variability feedback, which helped participants learn to self-generate physiological coherence, a beneficial physiologic mode associated with increased heart rhythm coherence, physiologic entrainment, parasympathetic activity, and vascular resonance. BP, emotional health, and workplace-related measures were assessed before and 3 months after the program.\n\n\nRESULTS\nThree months post-intervention, the treatment group exhibited a mean adjusted reduction of 10.6 mm Hg in systolic BP and of 6.3 mm Hg in diastolic BP. The reduction in systolic BP was significant in relation to the control group. The treatment group also demonstrated improvements in emotional health, including significant reductions in stress symptoms, depression, and global psychological distress and significant increases in peacefulness and positive outlook. Reduced systolic BP was correlated with reduced stress symptoms. Furthermore, the trained employees demonstrated significant increases in the work-related scales of workplace satisfaction and value of contribution.\n\n\nCONCLUSIONS\nResults suggest that a brief workplace stress management intervention can produce clinically significant reductions in BP and improve emotional health among hypertensive employees. Implications are that such interventions may produce a healthier and more productive workforce, enhancing performance and reducing losses to the organization resulting from cognitive decline, illness, and premature mortality."} {"_id": "7f91c4420181ccad208ddeb625d09ff7510e62df", "title": "IL-17 and Th17 Cells.", "text": "CD4+ T cells, upon activation and expansion, develop into different T helper cell subsets with different cytokine profiles and distinct effector functions.
Until recently, T cells were divided into Th1 or Th2 cells, depending on the cytokines they produce. A third subset of IL-17-producing effector T helper cells, called Th17 cells, has now been discovered and characterized. Here, we summarize the current information on the differentiation and effector functions of the Th17 lineage. Th17 cells produce IL-17, IL-17F, and IL-22, thereby inducing a massive tissue reaction owing to the broad distribution of the IL-17 and IL-22 receptors. Th17 cells also secrete IL-21 to communicate with the cells of the immune system. The differentiation factors (TGF-beta plus IL-6 or IL-21), the growth and stabilization factor (IL-23), and the transcription factors (STAT3, RORgammat, and RORalpha) involved in the development of Th17 cells have just been identified. The participation of TGF-beta in the differentiation of Th17 cells places the Th17 lineage in close relationship with CD4+CD25+Foxp3+ regulatory T cells (Tregs), as TGF-beta also induces differentiation of naive T cells into Foxp3+ Tregs in the peripheral immune compartment. The investigation of the differentiation, effector function, and regulation of Th17 cells has opened up a new framework for understanding T cell differentiation. Furthermore, we now appreciate the importance of Th17 cells in clearing pathogens during host defense reactions and in inducing tissue inflammation in autoimmune disease."} {"_id": "3c964f1810821aecd2a6a2b68aaa0deca4306492", "title": "Variational mesh segmentation via quadric surface fitting", "text": "We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods."} {"_id": "82ffd343c082700895f2a3810e7f52b0a7eb9b92", "title": "Extracting Opinion Expressions with semi-Markov Conditional Random Fields", "text": "Extracting opinion expressions from text is usually formulated as a token-level sequence labeling task tackled using Conditional Random Fields (CRFs). CRFs, however, do not readily model potentially useful segment-level information like syntactic constituent structure. Thus, we propose a semi-CRF-based approach to the task that can perform sequence labeling at the segment level. We extend the original semi-CRF model (Sarawagi and Cohen, 2004) to allow the modeling of arbitrarily long expressions while accounting for their likely syntactic structure when modeling segment boundaries. We evaluate performance on two opinion extraction tasks, and, in contrast to previous sequence labeling approaches to the task, explore the usefulness of segment-level syntactic parse features.
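Segment-level labeling of the kind described above scores whole spans rather than individual tokens and decodes with a semi-Markov Viterbi pass. A compact sketch with a hypothetical hand-written scoring function and a maximum segment length; a real semi-CRF would use learned feature weights:

```python
# Compact semi-Markov Viterbi decoder: chooses the best segmentation of a
# sentence into labeled spans, as in the semi-CRF opinion extractor above.
# The segment scorer is a stand-in; a real model would use learned weights.
import numpy as np

LABELS = ["O", "OPINION"]
L_MAX = 4  # maximum segment length

def seg_score(tokens, i, j, label):
    """Score for labeling tokens[i:j] as `label` (hypothetical features)."""
    span = tokens[i:j]
    if label == "OPINION":
        return sum(t in {"love", "hate", "really"} for t in span) - 0.1 * len(span)
    return 0.0

def semi_viterbi(tokens):
    n = len(tokens)
    best = np.full(n + 1, -np.inf); best[0] = 0.0
    back = [None] * (n + 1)
    for j in range(1, n + 1):
        for i in range(max(0, j - L_MAX), j):        # segment tokens[i:j]
            for lab in LABELS:
                s = best[i] + seg_score(tokens, i, j, lab)
                if s > best[j]:
                    best[j], back[j] = s, (i, lab)
    segs, j = [], n
    while j > 0:                                      # follow backpointers
        i, lab = back[j]
        segs.append((tokens[i:j], lab)); j = i
    return segs[::-1]

print(semi_viterbi("I really love this phone".split()))
```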
Experimental results demonstrate that our approach outperforms state-of-the-art methods for both opinion expression tasks."} {"_id": "178cc2fc3062b1aa789418448d197440349fdaa0", "title": "The GUM corpus: creating multilayer resources in the classroom", "text": "This paper presents the methodology, design principles and detailed evaluation of a new freely available multilayer corpus, collected and edited via classroom annotation using collaborative software. After briefly discussing corpus design for open, extensible corpora, five classroom annotation projects are presented, covering structural markup in TEI XML, multiple part of speech tagging, constituent and dependency parsing, information structural and coreference annotation, and Rhetorical Structure Theory analysis. Layers are inspected for annotation quality and together they coalesce to form a richly annotated corpus that can be used to study the interactions between different levels of linguistic description. The evaluation gives an indication of the expected quality of a corpus created by students with relatively little training. A multifactorial example study on lexical NP coreference likelihood is also presented, which illustrates some applications of the corpus. The results of this project show that high quality, richly annotated resources can be created effectively as part of a linguistics curriculum, opening new possibilities not just for research, but also for corpora in linguistics pedagogy."} {"_id": "c14acbdb281b54bd597ba7525ec6d24001e26b27", "title": "Situationally Aware In-Car Information Presentation Using Incremental Speech Generation: Safer, and More Effective", "text": "Holding non-co-located conversations while driving is dangerous (Horrey and Wickens, 2006; Strayer et al., 2006), much more so than conversations with physically present, \u201csituated\u201d interlocutors (Drews et al., 2004). In-car dialogue systems typically resemble non-co-located conversations more, and share their negative impact (Strayer et al., 2013). We implemented and tested a simple strategy for making in-car dialogue systems aware of the driving situation, by giving them the capability to interrupt themselves when a dangerous situation is detected, and resume when it is over. We show that this improves both driving performance and recall of system-presented information, compared to a non-adaptive strategy."} {"_id": "79ce69b138b893b825f2f164c4cc3af3be6eedad", "title": "Predictive maintenance techniques", "text": "This article discusses the importance of PdM for industrial process applications and investigates a number of emerging technologies that enable this approach, including online energy-efficiency evaluation and continuous condition monitoring. The article gives an overview of existing and future technologies that can be used in these areas. Two methods for bearing fault detection and energy-efficiency estimation are discussed. The article concludes with a focus on one pilot installation at Weyerhaeuser's Containerboard Packaging Plant in Manitowoc, Wisconsin, USA, monitoring three critical induction motors: a 75-hp blower motor, a 50-hp hydraulic pump motor, and a 200-hp compressor motor.
Finally, the field experience gained in this plant is presented as two case studies."} {"_id": "aedcf0f37d4f9122894c5794545c324f68c17c05", "title": "Spectral analysis of surface electromyography (EMG) of upper esophageal sphincter-opening muscles during head lift exercise.", "text": "Although recent studies have shown enhancement of deglutitive upper esophageal sphincter opening in healthy elderly patients performing an isometric/isotonic head lift exercise (HLE), the muscle groups affected by this process are not known. A shift in the spectral analysis of surface EMG activity seen with muscle fatigue can be used to identify muscles affected by an exercise. The objective of this study was to use spectral analysis to evaluate surface EMG activities in the suprahyoid (SHM), infrahyoid (IHM), and sternocleidomastoid (SCM) muscle groups during the HLE. Surface EMG signals were recorded continuously on a TECA Premiere II during two phases of the HLE protocol in eleven control subjects. In the first phase of the protocol, surface EMG signals were recorded simultaneously from the three muscle groups for a period of 20 s. In the second phase, a 60 s recording was obtained for each of three successive trials with individual muscle groups. The mean frequency (MNF), median frequency (MDF), root mean square (RMS), and average rectified value (ARV) were used as spectral variables to assess the fatigue of the three muscle groups during the exercise. Least squares regression lines were fitted to each variable data set. Our findings suggest that during the HLE the SHM, IHM, and SCM muscle groups all show signs of fatigue; however, the SCM muscle group fatigued faster than the SHM and IHM muscle groups. Because of its higher fatigue rate, the SCM muscle group may play a limiting role in the HLE."} {"_id": "291c9e85ff878659ed35b607a8177d4ba30bc009", "title": "ZStream: a cost-based query processor for adaptively detecting composite events", "text": "Composite (or Complex) event processing (CEP) systems search sequences of incoming events for occurrences of user-specified event patterns. Recently, they have gained more attention in a variety of areas due to their powerful and expressive query language and performance potential. Sequentiality (temporal ordering) is the primary way in which CEP systems relate events to each other. In this paper, we present a CEP system called ZStream to efficiently process such sequential patterns. Besides simple sequential patterns, ZStream is also able to detect other patterns, including conjunction, disjunction, negation and Kleene closure.\n Unlike most recently proposed CEP systems, which use non-deterministic finite automata (NFA's) to detect patterns, ZStream uses tree-based query plans for both the logical and physical representation of query patterns. By carefully designing the underlying infrastructure and algorithms, ZStream is able to unify the evaluation of sequence, conjunction, disjunction, negation, and Kleene closure as variants of the join operator. Under this framework, a single pattern in ZStream may have several equivalent physical tree plans, with different evaluation costs. We propose a cost model to estimate the computation costs of a plan. We show that our cost model can accurately capture the actual runtime behavior of a plan, and that choosing the optimal plan can result in a factor of four or more speedup versus an NFA based approach. 
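The ZStream record above describes a cost model over tree-shaped plans that is searched for the cheapest plan. A toy version of that search, a matrix-chain-style dynamic program over binary join trees for a sequential pattern, is sketched below; the arrival rates and selectivity are invented, and the real cost model is far richer:

```python
# Toy dynamic-programming search for the cheapest tree-shaped plan over a
# sequential event pattern, in the spirit of ZStream's cost-based optimizer.
# Rates and selectivities are invented; the real cost model is far richer.
from functools import lru_cache

rates = [100.0, 5.0, 50.0, 10.0]   # arrival rate of each primitive event
SEL = 0.01                          # selectivity of each pairwise join

@lru_cache(maxsize=None)
def best_plan(i, j):
    """Cheapest plan covering events i..j; returns (cost, output_rate, plan)."""
    if i == j:
        return (0.0, rates[i], str(i))
    best = None
    for k in range(i, j):                       # split point between subtrees
        lc, lr, lp = best_plan(i, k)
        rc, rr, rp = best_plan(k + 1, j)
        join_cost = lr * rr                     # pairs examined per time unit
        out_rate = lr * rr * SEL                # surviving composite events
        cand = (lc + rc + join_cost, out_rate, f"({lp} {rp})")
        if best is None or cand[0] < best[0]:
            best = cand
    return best

cost, rate, plan = best_plan(0, len(rates) - 1)
print(f"plan={plan} cost={cost:.1f} out_rate={rate:.4f}")
```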
Based on this cost model and using a simple set of statistics about operator selectivity and data rates, ZStream is able to adaptively and seamlessly adjust the order in which it detects patterns on the fly. Finally, we describe a dynamic programming algorithm used in our cost model to efficiently search for an optimal query plan for a given pattern."} {"_id": "acf68625946d2eb0e88e2da5daca4b3c86b0fd8d", "title": "Environmental implications of plastic debris in marine settings--entanglement, ingestion, smothering, hangers-on, hitch-hiking and alien invasions.", "text": "Over the past five or six decades, contamination and pollution of the world's enclosed seas, coastal waters and the wider open oceans by plastics and other synthetic, non-biodegradable materials (generally known as 'marine debris') has been an ever-increasing phenomenon. The sources of these polluting materials are both land- and marine-based, their origins may be local or distant, and the environmental consequences are many and varied. The more widely recognized problems are typically associated with entanglement, ingestion, suffocation and general debilitation, and are often related to stranding events and public perception. Among the less frequently recognized and recorded problems are global hazards to shipping, fisheries and other maritime activities. Today, there are rapidly developing research interests in the biota attracted to freely floating (i.e. pelagic) marine debris, commonly known as 'hangers-on and hitch-hikers' as well as material sinking to the sea floor despite being buoyant. Dispersal of aggressive alien and invasive species by these mechanisms leads one to reflect on the possibilities that ensuing invasions could endanger sensitive, or at-risk coastal environments (both marine and terrestrial) far from their native habitats."} {"_id": "23c425f022baa054c68683eaf81f5d482915ce13", "title": "A Fast Quantum Mechanical Algorithm for Database Search", "text": "Quantum mechanical computers were proposed in the early 1980\u2019s [Benioff80] and shown to be at least as powerful as classical computers, an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80\u2019s and early 90\u2019s [Deutsch85] [BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o(log N)."} {"_id": "3d98653c152e733fb690ccd3d307ac14bc879569", "title": "Quantum theory, the Church-Turing principle and the universal quantum computer", "text": "It is argued that underlying the Church-Turing hypothesis there is an implicit physical assertion. Here, this assertion is presented explicitly as a physical principle: \u2018every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means\u2019. Classical physics and the universal Turing machine, because the former is continuous and the latter discrete, do not obey the principle, at least in the strong form above.
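For reference, the quadratic-speedup search that the database-search record above introduces (Grover's algorithm) can be simulated classically on a small state vector; a minimal numpy sketch for N = 16 items:

```python
# Classical simulation of Grover's quantum search on N = 16 items,
# the algorithm introduced in the abstract above. After ~(pi/4)*sqrt(N)
# iterations, nearly all amplitude sits on the marked item.
import numpy as np

N = 16
marked = 11                                   # the item we are searching for
amp = np.full(N, 1 / np.sqrt(N))              # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    amp[marked] *= -1                         # oracle: flip marked amplitude
    amp = 2 * amp.mean() - amp                # diffusion: inversion about mean

probs = amp ** 2
print(f"{iterations} iterations, P(marked) = {probs[marked]:.3f}")  # ~0.96
```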
A class of model computing machines that is the quantum generalization of the class of Turing machines is described, and it is shown that quantum theory and the \u2018universal quantum computer\u2019 are compatible with the principle. Computing machines resembling the universal quantum computer could, in principle, be built and would have many remarkable properties not reproducible by any Turing machine. These do not include the computation of non-recursive functions, but they do include \u2018quantum parallelism\u2019, a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it. The intuitive explanation of these properties places an intolerable strain on all interpretations of quantum theory other than Everett\u2019s. Some of the numerous connections between the quantum theory of computation and the rest of physics are explored. Quantum complexity theory allows a physically more reasonable definition of the \u2018complexity\u2019 or \u2018knowledge\u2019 in a physical system than does classical complexity theory. 1 Computing machines and the Church-Turing principle The theory of computing machines has been extensively developed during the last few decades. Intuitively, a computing machine is any physical system whose dynamical evolution takes it from one of a set of \u2018input\u2019 states to one of a set of \u2018output\u2019 states. The states are labelled in some canonical way, the machine is prepared in a state with a given input label and then, following some motion, the output state is measured. For a classical deterministic system the measured output label is a definite function f of the prepared input label; moreover the value of that label can in principle be measured by an outside observer (the \u2018user\u2019) and the machine is said to \u2018compute\u2019 the function f. Two classical deterministic computing machines are \u2018computationally equivalent\u2019 under given labellings of their input and output states if they compute the same function under those labellings. But quantum computing machines, and indeed classical stochastic computing machines, do not \u2018compute functions\u2019 in the above sense: the output state of a stochastic machine is random with only the probability distribution function for the possible outputs depending on the input state. The output state of a quantum machine, although fully determined by the input state, is not an observable and so the user cannot in general discover its label. Nevertheless, the notion of computational equivalence can be generalized to apply to such machines also. Again we define computational equivalence under given labellings, but it is now necessary to specify more precisely what is to be labelled. As far as the input is concerned, labels must be given for each of the possible ways of preparing the machine, which correspond, by definition, to all the possible input states. This is identical with the classical deterministic case.
However, there is an asymmetry between input and output because there is an asymmetry between preparation and measurement: whereas a quantum system can be prepared in any desired permitted input state, measurement cannot in general determine its output state; instead one must measure the value of some observable. (Throughout this paper I shall be using the Schr\u00f6dinger picture, in which the quantum state is a function of time but observables are constant operators.) Thus what must be labelled is the set of ordered pairs consisting of an output observable and a possible measured value of that observable (in quantum theory, a Hermitian operator and one of its eigenvalues). Such an ordered pair contains, in effect, the specification of a possible experiment that could be made on the output, together with a possible result of that experiment. Two computing machines are computationally equivalent under given labellings if in any possible experiment or sequence of experiments in which their inputs were prepared equivalently under the input labellings, and observables corresponding to each other under the output labellings were measured, the measured values of these observables for the two machines would be statistically indistinguishable. That is, the probability distribution functions for the outputs of the two machines would be identical. In the sense just described, a given computing machine M computes at most one function. However, there ought to be no fundamental difference between altering the input state in which M is prepared, and altering systematically the constitution of M so that it becomes a different machine M\u2032 computing a different function. To formalize such operations, it is often useful to consider machines with two inputs, the preparation of one constituting a \u2018program\u2019 determining which function of the other is to be computed. To each such machine M there corresponds a set C(M) of \u2018M-computable functions\u2019. A function f is M-computable if M can compute f when prepared with some program. The set C(M) can be enlarged by enlarging the set of changes in the constitution of M that are labelled as possible M-programs. Given two machines M and M\u2032 it is possible to construct a composite machine whose set of computable functions contains the union of C(M) and C(M\u2032). There is no purely logical reason why one could not go on ad infinitum building more powerful"} {"_id": "f311ee943b38f6b0045d0f381aed421919581c15", "title": "Quantum Inspired Genetic Algorithms", "text": "A novel evolutionary computing method, quantum inspired genetic algorithms, is introduced, where concepts and principles of quantum mechanics are used to inform and inspire more efficient evolutionary computing methods. The basic terminology of quantum mechanics is introduced before a comparison is made between a classical genetic algorithm and a quantum inspired method for the travelling salesperson problem. It is informally shown that the quantum inspired genetic algorithm performs better than the classical counterpart for a small domain. 
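A minimal sketch of the Q-bit machinery that quantum-inspired GAs typically use follows; this follows the common Han-and-Kim-style formulation rather than this particular paper, whose exact operators are not given here. Each gene is an angle, observation collapses it to a classical bit, and a rotation step (simplified here relative to the usual lookup-table rule) nudges angles toward the best observed solution.

```python
import math, random

def observe(thetas):
    # Collapse each Q-bit: P(bit = 1) = sin^2(theta).
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in thetas]

def rotate(thetas, best, delta=0.05 * math.pi):
    # Nudge each angle toward the corresponding bit of the best solution.
    return [t + delta if b == 1 else t - delta for t, b in zip(thetas, best)]

def qiga(fitness, n_bits, iters=200):
    thetas = [math.pi / 4] * n_bits          # uniform superposition
    best_bits, best_fit = None, float("-inf")
    for _ in range(iters):
        bits = observe(thetas)
        f = fitness(bits)
        if f > best_fit:
            best_bits, best_fit = bits, f
        thetas = rotate(thetas, best_bits)
    return best_bits, best_fit

# Toy fitness: OneMax (count of 1-bits).
print(qiga(lambda b: sum(b), n_bits=16))
```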
The paper concludes with some speculative comments concerning the relationship between quantum inspired genetic algorithms and various complexity classes."} {"_id": "6902cb196ec032852ff31cc178ca822a5f67b2f2", "title": "Algorithms for Quantum Computation: Discrete Log and Factoring Extended Abstract", "text": "This paper gives algorithms for the discrete log and the factoring problems that take random polynomial time on a quantum computer (thus giving the first examples of quantum cryptanalysis)."} {"_id": "c2f61c998295caef43878fb46c91694a022ea00e", "title": "A Path-Tracking Criterion for an LHD Articulated Vehicle", "text": "A path-tracking criterion for the so-called LHD (load-haul-dump) truck used in underground mining is proposed in this paper. It exploits the particular configuration of this vehicle, composed of two units connected by an actuated articulation. The task is to follow the path represented by the middle of the tunnel, maintaining the whole vehicle at a reduced distance from the path itself, to decrease the risk of crashes against the walls of the tunnel. This is accomplished via feedback through the synthesis of an appropriate path-tracking criterion. The criterion is based on monitoring the distances of the midpoints of both axles of the vehicle from their orthogonal projections on the path, using two different moving frames simultaneously. Local asymptotic stability to paths of constant curvature is achieved by means of linear-state feedback. KEY WORDS\u2014mobile robots, articulated vehicles, mining trucks, path tracking, off-tracking, feedback stabilization"} {"_id": "b8d04c10b72b0a1de52d99057d2d2cb7a06e0261", "title": "A Multilingual Multimedia Indian Sign Language Dictionary Tool", "text": "This paper presents a cross-platform multilingual multimedia Indian Sign Language (ISL) dictionary building tool. ISL is a linguistically under-investigated language with no source of well documented electronic data. Research on ISL linguistics also gets hindered due to a lack of ISL knowledge and the unavailability of any educational tools. Our system can be used to associate signs corresponding to a given text. The current system also facilitates the phonological annotation of Indian signs in the form of HamNoSys structure. The generated HamNoSys string can be given as input to an avatar module to produce an animated sign representation."} {"_id": "68c95dd9665a3ca3be70f1aa5aea73136a0dc6cc", "title": "Detection and Recognition of Malaysian Special License Plate Based On SIFT Features", "text": "Automated car license plate recognition systems are developed and applied for the purpose of facilitating the surveillance, law enforcement, access control and intelligent transportation monitoring with least human intervention. In this paper, an algorithm based on SIFT feature points clustering and matching is proposed to address the issue of recognizing Malaysian special plates. These special plates do not follow the normal standard car plates\u2019 format as they may contain italic, cursive, connected and small letters. The algorithm is tested with 150 Malaysian special plate images under different environments and the promising experimental results demonstrate that the proposed algorithm is relatively robust. 
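The matching stage of a SIFT-based recognizer like the one above can be sketched with OpenCV. This is a generic keypoint-matching skeleton rather than the paper's full pipeline (which additionally clusters keypoints and estimates a homography); the image paths are placeholders.

```python
import cv2

def match_plate(query_path, template_path, ratio=0.75):
    """Count SIFT correspondences between a plate image and a character
    template, using Lowe's ratio test to filter ambiguous matches."""
    sift = cv2.SIFT_create()
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return len(good)
```

A template with many surviving matches would then be the candidate character; robust systems verify the surviving matches geometrically before accepting them.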
Keywords: license plate recognition, scale invariant feature transform (SIFT), feature extraction, homography, special license plate."} {"_id": "5df0750546ed2b544461c39ae6b17ee2277accd4", "title": "The quality of online social relationships", "text": "Online relationships are less valuable than offline ones. Indeed, their net benefit depends on whether they supplement or substitute for offline social relationships."} {"_id": "275105509cf1df8344d99bb349513e637346cbbf", "title": "Low power and low voltage VT extractor circuit and MOSFET radiation dosimeter", "text": "This work discusses two fundamental blocks of an in vivo MOS dosimeter, namely the radiation sensor and the VT-extractor circuit. It is shown that threshold extractor circuits based on an all-region MOSFET model are very appropriate for low power design. The accuracy of the extractor circuits allows using the PMOS transistors of the integrated circuit CD4007 as the radiation sensor in a dosimeter for radiotherapy applications."} {"_id": "1235bc3c1234d78a26779f60c81fe159a072280a", "title": "Control of hybrid AC/DC microgrid involving energy storage, renewable energy and pulsed loads", "text": "This paper proposes the coordinated control of a hybrid AC/DC power system with renewable energy source, energy storages and critical loads. The hybrid microgrid consists of both AC and DC sides. A synchronous generator and a PV farm supply power to the system's AC and DC sides, respectively. A bidirectional fully controlled AC/DC converter with active and reactive power decoupling technique is used to link the AC bus with the DC bus while regulating the system voltage and frequency. A DC/DC boost converter with a maximum power point tracking (MPPT) function is implemented to maximize the energy generation from the PV farm. Current controlled bidirectional DC/DC converters are applied to connect each lithium-ion battery bank to the DC bus. Lithium-ion battery banks act as energy storage devices that serve to increase the system stability by absorbing or injecting power to the grid as ancillary services. The proposed system can function in both grid-connected mode and islanding mode. Power electronic converters with different control strategies are analyzed in both modes in detail. Simulation results in MATLAB Simulink verify that the proposed topology is coordinated for power management in both AC and DC sides under critical loads with high efficiency, reliability, and robustness under both grid-connected and islanding modes."} {"_id": "18767cf566f3654318814a18f418325047167e06", "title": "Does the Tyndall effect describe the blue hue periodically observed in subdermal hyaluronic acid gel placement?", "text": "BACKGROUND\nThe blue hue of skin overlying injected hyaluronic acid (HA) fillers in certain cases has been hypothesized in the literature as related to the Tyndall effect. This investigation aims to understand the relevant optical concepts and to discuss the plausibility of this assertion.\n\n\nMETHODS\nTheoretical and physical aspects of relevant optical theories including the Tyndall effect, the Rayleigh criterion and the Mie Solution are discussed, with simple examples. The physical properties of the system (both HA and subcutaneous tissue) are explored. 
Alternate concepts of dermal hue generation are discussed.\n\n\nRESULTS\nThe Tyndall effect (and Rayleigh criterion) describe optical phenomena that occur as light passes through colloidal solutions containing uniform spherical particles of sizes less than the length of a wavelength of visible light. HA fillers are complex, large, non-spherical, cross-linked hydrogels, and thus are not well characterized by these theories. Skin is a complex optical surface in which shorter wavelengths of light are selectively filtered at superficial depths. Light passing through to subdermal HA would have low blue light amplitude, minimizing what light could be preferentially scattered. Further, should blue hues be 'generated' subdermally, the same skin filters work in reverse, making the blue light poorly detectable by an external observer.\n\n\nCONCLUSIONS\nThe Tyndall effect is unlikely to cause dermal hue changes in HA filler instillation. Optical and perceptual processes explaining superficial vein coloration may better describe subdermal HA hue changes. Vein coloration is thought to be related to three processes: the reflective properties of the skin, the absorptive properties of blood and the perceptive properties of an observer's eyes. Subdermal HA may simulate these phenomena by a number of undetermined, yet plausible mechanisms."} {"_id": "96e51a29148b08ff9d4f40ef93a0522522adf919", "title": "Character-Based Text Classification using Top Down Semantic Model for Sentence Representation", "text": "Despite the success of deep learning on many fronts, especially image and speech, its application in text classification often is still not as good as a simple linear SVM on n-gram TF-IDF representation, especially for smaller datasets. Deep learning tends to emphasize sentence-level semantics when learning a representation with models like recurrent neural networks or recursive neural networks; however, from the success of TF-IDF representation, it seems a bag-of-words type of representation has its strength. Taking advantage of both representations, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that considers both the word-level semantics, by linearly combining the words with attention weights, and the sentence-level semantics with BiLSTM, and use it on text classification. We apply the model on characters and our results show that our model is better than all the other character-based and word-based convolutional neural network models of Zhang et al. (2015) across seven different datasets with only 1% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news articles, in which typically deep learning models surrender."} {"_id": "9e549115438a49e468d2ee129c735058d1cc1623", "title": "Treating the Banana Fold with the Dermotuberal Anchorage Technique: Case Report", "text": "The banana fold, or the infragluteal fold, is a fat deposit on the posterior thigh close to the gluteal crease and parallel to it. A banana fold may form for different reasons, among which an iatrogenic cause is recurrent. Although banana fold is a common problem, unrelished by most women, few procedures are targeted specifically to fight it. This report presents a severe case of iatrogenic banana fold corrected by a modification of the dermotuberal anchorage buttock-lifting technique, as reported by the author for gluteal ptosis. 
The operation is performed by tucking in part of the banana fold tissue caudal to the gluteal crease, sliding that tissue, after depithelization, under the buttock, and pulling it up toward the ischial tuberosity until the redundant skin on the posterior thigh is tight and the banana fold is reduced. Assessment of the results 1 year after surgery showed that the technique provided a good scar kept within the subgluteal crease, and that it satisfactorily corrected the patient\u2019s major complaint: the banana-shaped fold."} {"_id": "7e0c812ee6725644404472088debd56e9f756b75", "title": "Sliding spotlight SAR processing for TerraSAR-X using a new formulation of the extended chirp scaling algorithm", "text": "This paper describes the sliding spotlight algorithm and the processing strategy to be applied for TerraSAR-X. The steering spotlight geometry is analysed. Analysis of scene size and resolution demonstrates the particularities of this mode. The Doppler frequencies for individual targets and for a whole sliding spotlight scene are analyzed and the result shows the applicability of azimuth subaperture processing to sliding spotlight data. A description of the Extended Chirp Scaling Algorithm for sliding spotlight is presented."} {"_id": "1c1b5fd282d3a1fe03a671f7d13092d49cb31139", "title": "Enhancing Knowledge Graph Embedding with Probabilistic Negative Sampling", "text": "Link Prediction using Knowledge graph embedding projects symbolic entities and relations into low dimensional vector space, thereby learning the semantic relations between entities. Among various embedding models, there is a series of translation-based models such as TransE [1], TransH [2], and TransR[3]. This paper proposes modifications in the TransR model to address the issue of skewed data which is common in real-world knowledge graphs. The enhancements enable the model to smartly generate corrupted triplets during negative sampling, which significantly improves the training time and performance of TransR. The proposed approach can be applied to other translationbased models."} {"_id": "6871999e2800a1d8b0edd7380795f4758fb1b3a5", "title": "A survey of wireless sensor technologies applied to precision agriculture", "text": "This paper gives a state-of-art of wireless sensor network (WSN) technologies and solutions applied to precision agriculture (PA). The paper first considers applications and existing experiences that show how WSN technologies have been introduced in to agricultural applications. Then, a survey in hardware and software solutions is related with special emphasis on technological aspects. Finally, the paper shows how five networking and technological solutions may impact the next generation of sensors. These are (i) scalar wireless sensor networks, (ii) wireless multimedia sensor networks, (iii) Mobility of nodes, (iv) tag-based systems, and (v) smart-phone applications."} {"_id": "101c0e09d533b738d83a3740f1f6e49ab2984e55", "title": "Neural Clinical Paraphrase Generation with Attention", "text": "Paraphrase generation is important in various applications such as search, summarization, and question answering due to its ability to generate textual alternatives while keeping the overall meaning intact. Clinical paraphrase generation is especially vital in building patient-centric clinical decision support (CDS) applications where users are able to understand complex clinical jargons via easily comprehensible alternative paraphrases. 
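The corrupted-triplet generation described in the knowledge graph embedding abstract above can be made concrete. The paper's exact probabilistic sampler is not reproduced here; what follows is the well-known Bernoulli trick from TransH, shown as a representative instance of relation-aware negative sampling, and the `relation_stats` table and triple format are assumptions.

```python
import random

def bernoulli_negative(triple, relation_stats, entities):
    """Corrupt the head or the tail of (h, r, t) with a relation-dependent
    bias; relation_stats maps each relation to its average
    (tails-per-head, heads-per-tail) counts over the training graph."""
    h, r, t = triple
    tph, hpt = relation_stats[r]
    p_corrupt_head = tph / (tph + hpt)   # many-to-one: prefer corrupting head
    if random.random() < p_corrupt_head:
        return (random.choice(entities), r, t)
    return (h, r, random.choice(entities))

# Toy usage: a one-to-many relation favors tail corruption.
stats = {"born_in": (1.1, 45.0)}
print(bernoulli_negative(("alice", "born_in", "paris"), stats,
                         ["alice", "bob", "paris", "rome"]))
```

Biasing the choice this way reduces false negatives on skewed (one-to-many or many-to-one) relations, which is exactly the data issue the abstract targets.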
This paper presents Neural Clinical Paraphrase Generation (NCPG), a novel approach that casts the task as a monolingual neural machine translation (NMT) problem. We propose an end-to-end neural network built on an attention-based bidirectional Recurrent Neural Network (RNN) architecture with an encoder-decoder framework to perform the task. Conventional bilingual NMT models mostly rely on word-level modeling and are often limited by out-of-vocabulary (OOV) issues. In contrast, we represent the source and target paraphrase pairs as character sequences to address this limitation. To the best of our knowledge, this is the first work that uses attention-based RNNs for clinical paraphrase generation and also proposes an end-to-end character-level modeling for this task. Extensive experiments on a large curated clinical paraphrase corpus show that the attention-based NCPG models achieve improvements of up to 5.2 BLEU points and 0.5 METEOR points over a non-attention based strong baseline for word-level modeling, whereas further gains of up to 6.1 BLEU points and 1.3 METEOR points are obtained by the character-level NCPG models over their word-level counterparts. Overall, our models demonstrate comparable performance relative to the state-of-the-art phrase-based non-neural models."} {"_id": "88d946cf47f322c1131efad3b72ff45e372c68f1", "title": "ViBe: A Disruptive Method for Background Subtraction", "text": "The proliferation of video surveillance cameras, which account for the vast majority of cameras worldwide, has resulted in the need to find methods and algorithms for dealing with the huge amount of information that is gathered every second. This encompasses processing tasks, such as raising an alarm or detouring moving objects, as well as some semantic tasks like event monitoring, trajectory or flow analysis, counting people, etc."} {"_id": "18d8f0b1ff6c3cf0f110daa2af9c6f91d260d326", "title": "A Large Scale Ranker-Based System for Search Query Spelling Correction", "text": "This paper makes three significant extensions to a noisy channel speller designed for standard written text to target the challenging domain of search queries. First, the noisy channel model is subsumed by a more general ranker, which allows a variety of features to be easily incorporated. Second, a distributed infrastructure is proposed for training and applying Web scale n-gram language models. Third, a new phrase-based error model is presented. This model places a probability distribution over transformations between multi-word phrases, and is estimated using large amounts of query-correction pairs derived from search logs. Experiments show that each of these extensions leads to significant improvements over the state-of-the-art baseline methods."} {"_id": "8b8ab057fec81fda064b0c315b239646635ca4fb", "title": "Unifying Logical and Statistical AI", "text": "Intelligent agents must be able to handle the complexity and uncertainty of the real world. Logical AI has focused mainly on the former, and statistical AI on the latter. Markov logic combines the two by attaching weights to first-order formulas and viewing them as templates for features of Markov networks. Inference algorithms for Markov logic draw on ideas from satisfiability, Markov chain Monte Carlo and knowledge-based model construction. Learning algorithms are based on the voted perceptron, pseudo-likelihood and inductive logic programming. 
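The core Markov logic scoring rule just described, P(X=x) proportional to exp(sum over i of w_i * n_i(x)) where n_i counts true groundings of formula i, can be made concrete with a toy "smokers" world. The constants, formulas, and weights below are illustrative textbook-style choices, not Alchemy's API.

```python
import math
from itertools import product

PEOPLE = ["anna", "bob"]
# A toy possible world: ground-atom truth values.
WORLD = {("smokes", "anna"): True, ("smokes", "bob"): False,
         ("cancer", "anna"): True, ("cancer", "bob"): False,
         ("friends", "anna", "bob"): True, ("friends", "bob", "anna"): True,
         ("friends", "anna", "anna"): False, ("friends", "bob", "bob"): False}

def n_smoke_cancer(w):
    # True groundings of: Smokes(p) => Cancer(p)
    return sum((not w[("smokes", p)]) or w[("cancer", p)] for p in PEOPLE)

def n_friends_smoke(w):
    # True groundings of: Friends(p, q) => (Smokes(p) <=> Smokes(q))
    return sum((not w[("friends", p, q)]) or
               (w[("smokes", p)] == w[("smokes", q)])
               for p, q in product(PEOPLE, PEOPLE))

FORMULAS = [(1.5, n_smoke_cancer), (1.1, n_friends_smoke)]

def score(world):
    # Unnormalized probability of the world under the weighted formulas.
    return math.exp(sum(w * n(world) for w, n in FORMULAS))

print(score(WORLD))
```

Normalizing would require summing this score over all possible worlds, which is what the MCMC-based inference algorithms mentioned above approximate.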
Markov logic has been successfully applied to a wide variety of problems in natural language understanding, vision, computational biology, social networks and others, and is the basis of the open-source Alchemy system."} {"_id": "cdefa98415b1f072989214b62e51073c82304e33", "title": "The role of autophagy in cancer development and response to therapy", "text": "Autophagy is a process in which subcellular membranes undergo dynamic morphological changes that lead to the degradation of cellular proteins and cytoplasmic organelles. This process is an important cellular response to stress or starvation. Many studies have shed light on the importance of autophagy in cancer, but it is still unclear whether autophagy suppresses tumorigenesis or provides cancer cells with a rescue mechanism under unfavourable conditions. What is the present state of our knowledge about the role of autophagy in cancer development, and in response to therapy? And how can the autophagic process be manipulated to improve anticancer therapeutics?"} {"_id": "0166193eba341df8580c22e36b155671dc1eccc5", "title": "Optical Character Recognition for Cursive Handwriting", "text": "In this paper, a new analytic scheme, which uses a sequence of segmentation and recognition algorithms, is proposed for the offline cursive handwriting recognition problem. First, some global parameters, such as slant angle, baselines, and stroke width and height are estimated. Second, a segmentation method finds character segmentation paths by combining gray scale and binary information. Third, a Hidden Markov Model (HMM) is employed for shape recognition to label and rank the character candidates. For this purpose, a string of codes is extracted from each segment to represent the character candidates. The estimation of feature space parameters is embedded in the HMM training stage together with the estimation of the HMM model parameters. Finally, the lexicon information and HMM ranks are combined in a graph optimization problem for word-level recognition. This method corrects most of the errors produced by the segmentation and HMM ranking stages by maximizing an information measure in an efficient graph search algorithm. The experiments indicate higher recognition rates compared to the available methods reported in the literature. Index Terms: Handwritten word recognition, preprocessing, segmentation, optical character recognition, cursive handwriting, hidden Markov model, search, graph, lexicon matching."} {"_id": "249031d9eedd92cc5fbe509dd6e7bf2faaea5bbb", "title": "IGLOO: Slicing the Features Space to Represent Long Sequences", "text": "Until recently, recurrent neural networks (RNNs) have been the standard go-to component when processing sequential data with neural networks. Issues related to vanishing gradients have been partly addressed by Long short-term memory (LSTM) and gated recurrent units (GRU), but in practice experiments show that very long-term dependencies (beyond 1000 time steps) are difficult to learn. We introduce IGLOO, a new neural network architecture which aims at being faster than both LSTM and GRU, as well as their respective CuDNN-optimized versions, when convergence happens, and provides an alternative in the case when there is no convergence at all. IGLOO\u2019s core idea is to use the relationships between patches sliced out of the feature maps of convolution layers at different levels of granularity to build a representation for the sequence. 
We show that the model can deal with dependencies of more than 25,000 steps in a reasonable time frame. Beyond the well bench-marked copy-memory and addition problems which show good results, we also achieve best recorded accuracy on permuted MNIST (98.4%). IGLOO is also applied on the IMDB set and on the TUH Abnormal EEG Corpus."} {"_id": "a65476a4fadd112b64f519a6f51a71f6077ed0ae", "title": "Wafer level fabrication method of hemispherical reflector coupled micro-led array stimulator for optogenetics", "text": "We report a monolithic wafer-level fabrication method for a hemispherical reflector coupled light-emitting-diode (LED) array using isotropic etching of silicon. These neural stimulators collect the backside as well as the front side emission of the \u03bc-LEDs and thus provide higher intensity, which is imperative for opsin expressions in optogenetics experiments. Aluminum was used as the reflective layer and the planarization of polymer on the reflector cavity was done using polydimethylsiloxane (PDMS). The lateral and vertical profiles of silicon etching were measured and the light intensity increase due to the reflector was investigated. It was found that the intensity increases by a minimum of 49% and maximum of 65% when coupling a reflector with the \u03bc-LEDs."} {"_id": "6f283845b860519af101e0e5c151cfebd528773f", "title": "Improved extraction of vegetable oils under high-intensity ultrasound and/or microwaves.", "text": "Ultrasound-assisted extraction (UAE) and microwave-assisted extraction (MAE) techniques have been employed as complementary techniques to extract oils from vegetable sources, viz, soybean germ and a cultivated marine microalga rich in docosahexaenoic acid (DHA). Ultrasound (US) devices developed by ourselves, working at several frequencies (19, 25, 40 and 300 kHz), were used for US-based protocols, while a multimode microwave (MW) oven (operating with both open and closed vessels) was used for MAE. Combined treatments were also studied, such as simultaneous double sonication (at 19 and 25 kHz) and simultaneous US/MW irradiation, achieved by inserting a non-metallic horn in a MW oven. Extraction times and yields were compared with those resulting from conventional procedures. With soybean germ the best yield was obtained with a 'cavitating tube' prototype (19 kHz, 80 W), featuring a thin titanium cylinder instead of a conventional horn. Double sonication, carried out by inserting an immersion horn (25 kHz) in the same tube, improved the yield only slightly but halved the extraction time. Almost comparable yields were achieved by closed-vessel MAE and simultaneous US/MW irradiation. Compared with conventional methods, extraction times were reduced by up to 10-fold and yields increased by 50-500%. In the case of marine microalgae, UAE worked best, as the disruption by US of the tough algal cell wall considerably improved the extraction yield from 4.8% in soxhlet to 25.9%. Our results indicate that US and MW, either alone or combined, can greatly improve the extraction of bioactive substances, achieving higher efficiency and shorter reaction times at low or moderate costs, with minimal added toxicity."} {"_id": "b824c38819f67f9a9d2e2e77e2767be340221173", "title": "The Complexity of Temporal Logic Model Checking", "text": "Temporal logic. Logical formalisms for reasoning about time and the timing of events appear in several fields: physics, philosophy, linguistics, etc. Not surprisingly, they also appear in computer science, a field where logic is ubiquitous. 
Here temporal logics are used in automated reasoning, in planning, in semantics of programming languages, in artificial intelligence, etc. There is one area of computer science where temporal logic has been unusually successful: the specification and verification of programs and systems, an area we shall just call \u201cprogramming\u201d for simplicity. In today\u2019s curricula, thousands of programmers first learn about temporal logic in a course on model checking! Temporal logic and programming. Twenty five years ago, Pnueli identified temporal logic as a very convenient formal language in which to state, and reason about, the behavioral properties of parallel programs and more generally reactive systems [Pnu77, Pnu81]. Indeed, correctness for these systems typically involves reasoning upon related events at different moments of a system execution [OL82]. Furthermore, when it comes to liveness properties, the expected behavior of reactive systems cannot be stated as a static property, or as an invariant one. Finally, temporal logic is well suited to expressing the whole variety of fairness properties that play such a prominent role in distributed systems [Fra86]. For these applications, one usually restricts oneself to propositional temporal logic: on the one hand, this does not appear to be a severe limitation in practice, and on the other hand, this restriction allows decision procedures for validity and entailment, so that, at least in principle, the above-mentioned reasoning can be automated. Model checking. Generally speaking, model checking is the algorithmic verification that a given logic formula holds in a given structure (the model"} {"_id": "be6789bd46d16afa45c8962560a56a89a9089355", "title": "Stereo-Based Autonomous Navigation and Obstacle Avoidance*", "text": "This paper presents a stereo vision-based autonomous navigation system using a GPS and a modified version of the VFH algorithm. In order to obtain a high-accuracy disparity map and meet the time constraints of the real time navigation system, this work proposes the use of a semi-global stereo method. By not suffering the same issues of the regularly used local stereo methods, the employed stereo technique enables the generation of a highly dense, efficient, and accurate disparity map. Obstacles are detected using a method that checks for relative slopes and heights differences. Experimental tests using an electric vehicle in an urban environment were performed to validate the proposed approach."} {"_id": "b73201c7da4de20aa3ea2a130f9558e19cbf33b3", "title": "Practical electrical parameter aware methodology for analog designers with emphasis on LDE aware for devices", "text": "The device layout structure has proven to have profound effects to its electrical characteristics for advanced technology nodes, which, if not taken into account during the design cycle, will have devastating impact to the circuit functionality. A new design methodology is presented in this paper, which can help circuit designers identify early in the design stage the performance implication due to shift of critical device instance parameters from its layout."} {"_id": "61f4f67fc0e73fa3aef8628aae53a4d9b502d381", "title": "Understanding Mutual Information and its Use in InfoGAN", "text": "Interpretable variables are useful in generative models. Generative Adversarial Networks (GANs) are generative models that are flexible in their input. 
The Information Maximizing GAN (InfoGAN) ties the output of the generator to a component of its input called the latent codes. By forcing the output to be tied to this input component, we can control some properties of the output representation. It is notoriously difficult to find the Nash equilibrium when jointly training the discriminator and generator in a GAN. We uncover some successful and unsuccessful configurations for generating images using InfoGAN."} {"_id": "1ba779d5a5c9553ee8ecee5cf6bafb4b494ea7bc", "title": "On lightweight mobile phone application certification", "text": "Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android to produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found five applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that security configuration bundled with Android applications provides a practical means of detecting malware."} {"_id": "41289566ac0176dced2312f813328ad4c0552618", "title": "DroidScope: Seamlessly Reconstructing the OS and Dalvik Semantic Views for Dynamic Android Malware Analysis", "text": "The prevalence of mobile platforms, the large market share of Android, plus the openness of the Android Market make it a hot target for malware attacks. Once a malware sample has been identified, it is critical to quickly reveal its malicious intent and inner workings. In this paper we present DroidScope, an Android analysis platform that continues the tradition of virtualization-based malware analysis. Unlike current desktop malware analysis platforms, DroidScope reconstructs both the OS-level and Java-level semantics simultaneously and seamlessly. To facilitate custom analysis, DroidScope exports three tiered APIs that mirror the three levels of an Android device: hardware, OS and Dalvik Virtual Machine. On top of DroidScope, we further developed several analysis tools to collect detailed native and Dalvik instruction traces, profile API-level activity, and track information leakage through both the Java and native components using taint analysis. These tools have proven to be effective in analyzing real world malware samples and incur reasonably low performance overheads."} {"_id": "9e1bcd6414fc6fdd3b63aab48cc3732dc761f538", "title": "AppIntent: analyzing sensitive data transmission in android for privacy leakage detection", "text": "Android phones often carry personal information, attracting malicious developers to embed code in Android applications to steal sensitive data. 
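Kirin-style install-time certification, described in the abstract above, can be caricatured as subset tests of an app's requested permissions against rule templates. The rules below are hypothetical examples written in the spirit of the paper, not its actual rule set; the permission strings are real Android permission names.

```python
# Each rule names a combination of permissions that must NOT all be
# requested together by a single application (hypothetical examples).
RULES = [
    {"android.permission.RECEIVE_SMS", "android.permission.SEND_SMS"},
    {"android.permission.RECORD_AUDIO", "android.permission.INTERNET",
     "android.permission.ACCESS_FINE_LOCATION"},
]

def certify(requested_permissions):
    """Install-time check: fail if any rule's full combination is present."""
    requested = set(requested_permissions)
    violations = [rule for rule in RULES if rule <= requested]
    return len(violations) == 0, violations

ok, why = certify(["android.permission.RECEIVE_SMS",
                   "android.permission.SEND_SMS"])
print(ok, why)   # False, with the violated rule listed
```

The conservative flavor of the approach is visible here: a rule fires on configuration alone, before any code runs, which trades some false positives for a very cheap check.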
With known techniques in the literature, one may easily determine if sensitive data is being transmitted out of an Android phone. However, transmission of sensitive data in itself does not necessarily indicate privacy leakage; a better indicator may be whether the transmission is by user intention or not. When transmission is not intended by the user, it is more likely a privacy leakage. The problem is how to determine if transmission is user intended. As a first solution in this space, we present a new analysis framework called AppIntent. For each data transmission, AppIntent can efficiently provide a sequence of GUI manipulations corresponding to the sequence of events that lead to the data transmission, thus helping an analyst to determine if the data transmission is user intended or not. The basic idea is to use symbolic execution to generate the aforementioned event sequence, but straightforward symbolic execution proves to be too time-consuming to be practical. A major innovation in AppIntent is to leverage the unique Android execution model to reduce the search space without sacrificing code coverage. We also present an evaluation of AppIntent with a set of 750 malicious apps, as well as 1,000 top free apps from Google Play. The results show that AppIntent can effectively help separate the apps that truly leak user privacy from those that do not."} {"_id": "05ca17ffa777f64991a8da04f2fd03880ac51236", "title": "Towards automatic generation of vulnerability-based signatures", "text": "In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself. We show the semantics of a vulnerability define a language which contains all and only those inputs that exploit the vulnerability. A vulnerability signature is a representation (e.g., a regular expression) of the vulnerability language. Unlike exploit-based signatures whose error rate can only be empirically measured for known test cases, the quality of a vulnerability signature can be formally quantified for all possible inputs. We provide a formal definition of a vulnerability signature and investigate the computational complexity of creating and matching vulnerability signatures. We also systematically explore the design space of vulnerability signatures. We identify three central issues in vulnerability-signature creation: how a vulnerability signature represents the set of inputs that may exercise a vulnerability, the vulnerability coverage (i.e., number of vulnerable program paths) that is subject to our analysis during signature creation, and how a vulnerability signature is then created for a given representation and coverage. We propose new data-flow analysis and novel adoption of existing techniques such as constraint solving for automatically generating vulnerability signatures. We have built a prototype system to test our techniques. Our experiments show that we can automatically generate a vulnerability signature using a single exploit which is of much higher quality than previous exploit-based signatures. 
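The exploit-signature versus vulnerability-signature distinction drawn above can be illustrated with a toy parser flaw; the byte pattern, field format, and 64-byte limit below are all invented for illustration and do not come from the paper.

```python
import re

# Exploit-based signature: matches one observed attack payload's byte
# pattern (a hypothetical NOP sled followed by breakpoint bytes).
EXPLOIT_SIG = re.compile(rb"\x90{16,}\xcc\xcc")

def exploit_matches(msg):
    return EXPLOIT_SIG.search(msg) is not None

def vulnerability_matches(msg, field_limit=64):
    """Vulnerability-based signature for a hypothetical parser that copies
    the first ':'-delimited field into a 64-byte buffer: ANY input whose
    field exceeds the limit exercises the flaw, polymorphic or not."""
    field = msg.split(b":", 1)[0]
    return len(field) > field_limit

payload = b"A" * 80 + b":rest"   # no NOP sled, so it evades the exploit sig
print(exploit_matches(payload), vulnerability_matches(payload))  # False True
```

This is the sense in which the vulnerability language "contains all and only" the exploiting inputs: membership is defined by the program's semantics, not by the syntax of any one observed attack.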
In addition, our techniques have several other security applications, and thus may be of independent interest."} {"_id": "07356c5477c83773bd062b525f45c433e5b044e8", "title": "Computer security technology planning study", "text": "Approved for public release; distribution unlimited. When U.S. Government drawings, specifications or other data are used for any purpose other than a definitely related government procurement operation, the government thereby incurs no responsibility nor any obligation whatsoever; and the fact that the government may have formulated, furnished, or in any way supplied the said drawings, specifications, or other data is not to be regarded by implication or otherwise as in any manner licensing the holder or any other person or conveying any rights or permission to manufacture, use, or sell any patented invention that may in any way be related thereto."} {"_id": "befc5d5391e2f17464f480383fda4672900882ec", "title": "Ghost hunters: ambient light and hand gesture utilization in mobile augmented reality games", "text": "This paper presents the current status of work-in-progress in developing Ghost Hunters, which is targeted to explore the possibilities of incorporating gesture control and ambient lighting in a mobile augmented reality game played with smart glasses. We present the design and implementation of the current prototype, together with a small feasibility evaluation of battery consumption and gesture control."} {"_id": "f4bff7e109e5c213a23b993614ef5d9431dca8ec", "title": "An Expressive (Zero-Knowledge) Set Accumulator", "text": "We present a new construction of an expressive set accumulator. Unlike existing cryptographic accumulators, ours provides succinct proofs for a large collection of operations over accumulated sets, including intersection, union, set difference, SUM, COUNT, MIN, MAX, and RANGE, as well as arbitrary nestings of the above. We also show how to extend our accumulator to be zero-knowledge. The security of our accumulator is based on extractability assumptions and other assumptions that hold in the generic group model. Our construction has asymptotically optimal verification complexity and proof size, constant update complexity, and public verifiability/updatability\u2014namely, any client who knows the public key and the last accumulator value can verify the supported operations and update the accumulator. The expressiveness of our accumulator comes at the cost of quadratic prover time. However, we show that the cryptographic operations involved are cheap compared to those incurred by generic approaches (e.g., SNARKs) that are equally expressive: our prover runs faster for sets of up to 5 million items. Our accumulator serves as a powerful cryptographic tool with many applications. For example, it can be applied to efficiently support verification of a rich collection of SQL queries when used as a drop-in replacement in existing verifiable database systems (e.g., IntegriDB, CCS 2015)."} {"_id": "71501c2b1d00229fbe7356e5aa0c6a343a1c1c6e", "title": "Effectiveness of fundamental frequency (F0) and strength of excitation (SOE) for spoofed speech detection", "text": "Current countermeasures used in spoof detectors (for speech synthesis (SS) and voice conversion (VC)) are generally phase-based (as vocoders in SS and VC systems lack phase information). These approaches may possibly fail for non-vocoder or unit-selection-based spoofs. 
In this work, we explore excitation source-based features, i.e., the fundamental frequency (F0) contour and the strength of excitation (SoE) at the glottis, as discriminative features using a GMM-based classification system. We use F0 and SoE1 estimated from the speech signal through the zero frequency (ZF) filtering method. Further, SoE2 is estimated from the negative peaks of the derivative of the glottal flow waveform (dGFW) at glottal closure instants (GCIs). On the evaluation set of the ASVspoof 2015 challenge database, the F0 and SoE features along with their dynamic variations achieve an Equal Error Rate (EER) of 12.41%. The source features are fused at the score level with MFCC and the recently proposed cochlear filter cepstral coefficients and instantaneous frequency (CFCCIF) features. On fusion with MFCC (CFCCIF), the EER decreases from 4.08% to 3.26% (2.07% to 1.72%). The decrease in EER was evident on both known and unknown vocoder-based attacks. When MFCC, CFCCIF and source features are combined, the EER further decreases to 1.61%. Thus, the source features capture complementary information to MFCC and CFCCIF used alone."} {"_id": "5d6c136219b69315e70ba2b6b07aaba30f0f568d", "title": "Quantitative trait loci for glucosinolate accumulation in Brassica rapa leaves.", "text": "Glucosinolates and their breakdown products have been recognized for their effects on plant defense, human health, flavor and taste of cruciferous vegetables. Despite this importance, little is known about the regulation of the biosynthesis and degradation in Brassica rapa. Here, the identification of quantitative trait loci (QTL) for glucosinolate accumulation in B. rapa leaves in two novel segregating double haploid (DH) populations is reported: DH38, derived from a cross between yellow sarson R500 and pak choi variety HK Naibaicai; and DH30, from a cross between yellow sarson R500 and Kairyou Hakata, a Japanese vegetable turnip variety. An integrated map of 1068 cM with 10 linkage groups, assigned to the internationally agreed nomenclature, is developed based on the two individual DH maps with the common parent using amplified fragment length polymorphism (AFLP) and single sequence repeat (SSR) markers. Eight different glucosinolate compounds were detected in parents and F(1)s of the DH populations and found to segregate quantitatively in the DH populations. QTL analysis identified 16 loci controlling aliphatic glucosinolate accumulation, three loci controlling total indolic glucosinolate concentration and three loci regulating aromatic glucosinolate concentrations. Both comparative genomic analyses based on Arabidopsis-Brassica rapa synteny and mapping of candidate orthologous genes in B. rapa allowed the selection of genes involved in the glucosinolate biosynthesis pathway that may account for the identified QTL."} {"_id": "e683e46e96ec91c8725b142b9f89c8bb46c68603", "title": "Can Smartwatches Replace Smartphones for Posture Tracking?", "text": "This paper introduces a human posture tracking platform to identify the human postures of sitting, standing or lying down, based on a smartwatch. This work develops such a system as a proof-of-concept study to investigate a smartwatch's ability to be used in future remote health monitoring systems and applications. This work validates the smartwatches' ability to track the posture of users accurately in a laboratory setting while reducing the sampling rate to potentially improve battery life, the first steps in verifying that such a system would work in future clinical settings. 
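The score-level fusion step mentioned in the spoofing abstract above is, in its simplest form, a weighted sum of per-system scores. The weights and scores below are placeholders, and the paper's actual fusion rule may differ.

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Weighted sum of per-system detection scores (e.g., GMM
    log-likelihood ratios from MFCC, CFCCIF and source-feature systems)."""
    s = np.asarray(score_lists, dtype=float)       # shape: (systems, trials)
    w = (np.ones(len(s)) / len(s)) if weights is None else np.asarray(weights)
    return w @ s                                   # one fused score per trial

# Toy scores from two subsystems over three trials.
fused = fuse_scores([[2.1, -0.3, 1.0], [1.7, -1.2, 0.4]], weights=[0.6, 0.4])
print(fused)
```

In practice the weights are tuned on a development set to minimize the EER of the fused system, which is how complementary features translate into the lower combined error rates the abstract reports.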
The algorithm developed classifies the transitions between three posture states of sitting, standing and lying down, by identifying these transition movements, as well as other movements that might be mistaken for these transitions. The system is trained and developed on a Samsung Galaxy Gear smartwatch, and the algorithm was validated through a leave-one-subject-out cross-validation of 20 subjects. The system can identify the appropriate transitions at only 10 Hz with an F-score of 0.930, indicating its ability to effectively replace smart phones, if needed."} {"_id": "364fb0677a5d7083e56c0e38629a78cb94836f53", "title": "API design for machine learning software: experiences from the scikit-learn project", "text": "scikit-learn is an increasingly popular machine learning library. Written in Python, it is designed to be simple and efficient, accessible to non-experts, and reusable in various contexts. In this paper, we present and discuss our design choices for the application programming interface (API) of the project. In particular, we describe the simple and elegant interface shared by all learning and processing units in the library and then discuss its advantages in terms of composition and reusability. The paper also comments on implementation details specific to the Python ecosystem and analyzes obstacles faced by users and developers of the library."} {"_id": "88239ce3e1b37544753b54cb03a92a50072d72fa", "title": "Social support: a conceptual analysis.", "text": "Using the methodology of Walker and Avant, the purpose of this paper was to identify the most frequently used theoretical and operational definitions of social support. A positive relationship between social support and health is generally accepted in the literature. However, the set of dimensions used to define social support is inconsistent. In addition, few measurement tools have established reliability and validity. Findings from this conceptual analysis suggested four of the most frequently used defining attributes of social support: emotional, instrumental, informational, and appraisal. Social network, social embeddedness, and social climate were identified as antecedents of social support. Social support consequences were subsumed under the general rubric of positive health states. Examples were personal competence, health maintenance behaviours, effective coping behaviours, perceived control, sense of stability, recognition of self-worth, positive affect, psychological well-being, and decreased anxiety and depression. Recommendations for future research were made."} {"_id": "c517f80aa000dd91f721f0ab759ceb703546c355", "title": "Child sexual abuse in sub-Saharan Africa: a literature review.", "text": "OBJECTIVE\nThis article reviews the English-language literature on child sexual abuse in sub-Saharan Africa (SSA). The focus is on the sexual abuse of children in the home/community, as opposed to the commercial sexual exploitation of children.\n\n\nMETHODS\nEnglish language, peer-reviewed papers cited in the Social Sciences Citation Index (SSCI) are examined. Reports from international and local NGOs and UN agencies are also examined.\n\n\nRESULTS\nFew published studies on the sexual abuse of children have been conducted in the region, with the exception of South Africa. Samples are predominantly clinical or University based. A number of studies report that approximately 5% of the sample reported penetrative sexual abuse during their childhood. No national survey of the general population has been conducted. 
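The shared estimator interface that the scikit-learn abstract above describes is easiest to see in code: every processing unit exposes fit(), predictors add predict(), and transformers compose freely in a Pipeline. The snippet uses the real scikit-learn API with toy data.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = [[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [3.0, 1.5]]
y = [0, 0, 1, 1]

# Composition and reusability in one line: scaling then classification,
# both driven through the same fit/predict contract.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.predict([[1.5, 1.5]]))
```

Because every step obeys the same small contract, swapping the classifier or inserting another transformer requires no change to the surrounding code, which is the design payoff the paper argues for.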
The most frequent explanations for the sexual abuse of children in SSA include rapid social change, AIDS/HIV avoidance strategies and the patriarchal nature of society. Child sexual abuse is most frequently perpetrated by family members, relatives, neighbors or others known to the child.\n\n\nCONCLUSIONS\nThere is nothing to support the widely held view that child sexual abuse is very rare in SSA; prevalence levels are comparable with studies reported from other regions. The high prevalence levels of AIDS/HIV in the region expose sexually abused children to high risks of infection. It is estimated that approximately 0.6-1.8% of all children in high HIV-incidence countries in Southern Africa will experience penetrative sexual abuse by an AIDS/HIV-infected perpetrator before 18 years of age."} {"_id": "6fece3ef2da2c2f13a66407615f2c9a5b3737c88", "title": "Stabilizing Dynamic Controllers for Hybrid Systems: A Hybrid Control Lyapunov Function Approach", "text": "This paper proposes a dynamic controller structure and a systematic design procedure for stabilizing discrete-time hybrid systems. The proposed approach is based on the concept of control Lyapunov functions (CLFs), which, when available, can be used to design a stabilizing state-feedback control law. In general, the construction of a CLF for hybrid dynamical systems involving both continuous and discrete states is extremely complicated, especially in the presence of non-trivial discrete dynamics. Therefore, we introduce the novel concept of a hybrid control Lyapunov function, which allows the compositional design of a discrete and a continuous part of the CLF, and we formally prove that the existence of a hybrid CLF guarantees the existence of a classical CLF. A constructive procedure is provided to synthesize a hybrid CLF, by expanding the dynamics of the hybrid system with specific controller dynamics. We show that this synthesis procedure leads to a dynamic controller that can be implemented by a receding horizon control strategy, and that the associated optimization problem is numerically tractable for a fairly general class of hybrid systems, useful in real world applications. Compared to classical hybrid receding horizon control algorithms, the proposed approach typically requires a shorter prediction horizon to guarantee asymptotic stability of the closed-loop system, which yields a reduction of the computational burden, as illustrated through two examples."} {"_id": "31605e24e5bf1f89e74fa11bd1f0eb0cbf3d9bde", "title": "Familial hypertrichosis cubiti: hairy elbows syndrome.", "text": "Genetically determined hypertrichosis is uncommon, but several forms of familial hairiness have been described. In some kindreds the unusual hairiness has been generalized, while in others, the hypertrichosis has been confined to a specific site on the body. The purpose of this paper is to report the occurrence of undue hairiness of the elbow regions in members of an Amish family, and to discuss the genetic significance of the condition. No previous descriptions of hypertrichosis of this distribution could be found in the literature, and it is therefore suggested that the condition be termed familial hypertrichosis cubiti or the hairy elbows syndrome."} {"_id": "3417bd36084fe39af70a2b6e6b2a71feb6218e41", "title": "Instance Based Clustering of Semantic Web Resources", "text": "The original Semantic Web vision was explicit in the need for intelligent autonomous agents that would represent users and help them navigate the Semantic Web. 
We argue that an essential feature for such agents is the capability to analyse data and learn. In this paper we outline the challenges and issues surrounding the application of clustering algorithms to Semantic Web data. We present several ways to extract instances from a large RDF graph and to compute the distance between them. We evaluate our approaches on three different data-sets, one representing a typical relational database to RDF conversion, one based on data from an ontologically rich Semantic Web enabled application, and one consisting of a crawl of FOAF documents; applying both supervised and unsupervised evaluation metrics. Our evaluation did not support choosing a single combination of instance extraction method and similarity metric as superior in all cases, and as expected the behaviour depends greatly on the data being clustered. Instead, we attempt to identify characteristics of data that make particular methods more suitable."} {"_id": "3cb196322801704ac37fc5c1b78469da88a957f8", "title": "An Evaluation of Fine and Gross Motor Skills in Adolescents with Down Syndromes", "text": "Down syndrome (DS) is the most common genetic cause of intellectual disability. Motor development of children with DS is delayed. The aim of the present study was to evaluate the fine and gross motor skills of adolescents with DS. The study sample of a total of 34 participants aged between 14 and 20 years comprised 16 adolescents with DS and a normally developing group of 18 adolescents without DS. Fine and gross motor skills of the participants were assessed by the Bruininks-Oseretsky Test of Motor Proficiency, second edition short form (BOT-2 SF). The highest score of the test is 88 and a higher score indicates higher performance. The average ages of adolescents with and without DS who participated in the study were 17.06\u00b12.79 and 16.56\u00b11.09 years, respectively. All participants were male. Significant differences were found between adolescents with and without DS for all BOT-2 SF subtests and total scores (p<0.05). Adolescents without DS had higher scores than adolescents with DS. In conclusion, both fine and gross motor skill performance of adolescents with DS is lower than that of normally developing adolescents. This study stresses the importance of interventions facilitating motor skills."} {"_id": "85d9aff092d860aebf8ea5aa255b06de25a1930e", "title": "Deep Continuous Fusion for Multi-sensor 3D Object Detection", "text": "In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. 
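One ingredient of such LIDAR-camera fusion, gathering an image feature vector for each 3D point by projecting it into the camera, can be sketched as below. This is a simplified nearest-neighbor version (the paper's continuous fusion layer is learned, using k-nearest neighbors and an MLP rather than a single-pixel lookup), and K is an assumed 3x4 camera projection matrix.

```python
import numpy as np

def gather_image_features(points_xyz, feat_map, K):
    """Project LIDAR points through K and gather the image feature vector
    at each projected pixel. feat_map: (H, W, C) CNN features."""
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])       # (N, 4)
    uvw = homog @ K.T                                      # (N, 3)
    z = np.maximum(uvw[:, 2], 1e-9)                        # guard division
    u = np.round(uvw[:, 0] / z).astype(int)
    v = np.round(uvw[:, 1] / z).astype(int)
    h, w, c = feat_map.shape
    valid = (uvw[:, 2] > 0) & (0 <= u) & (u < w) & (0 <= v) & (v < h)
    out = np.zeros((n, c))                                 # zeros if off-image
    out[valid] = feat_map[v[valid], u[valid]]
    return out
```

The gathered per-point features would then be scattered into the bird's-eye-view grid and concatenated with the LIDAR stream's own feature maps, which is where the learned fusion happens.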
Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art."} {"_id": "b78f5c25520e7212d34219b0c59d39cf62aaf373", "title": "Wideband Excitation Technology of ${\\rm TE}_{20}$ Mode Substrate Integrated Waveguide (SIW) and Its Applications", "text": "The higher order mode substrate integrated waveguide (SIW) has advantages in applications: it simplifies the structure with reduced fabrication cost and enhances the stability of performance through relaxed fabrication tolerance. In this paper, two wideband TE20 mode excitation structures are presented and investigated for the SIW. A slot along the mid line in the longitudinal direction of the SIW is employed to convert the electromagnetic field pattern between slotline and the TE20 mode SIW. The second structure is based on the slot aperture coupling and can realize wideband direct transition between the TE20 mode SIW and microstrip line. Both transitions have a simple and compact structure, as well as broadband characteristics. The wideband transition performance is demonstrated in this paper. As application examples of the transitions, two broadband substrate integrated baluns are presented and fabricated based on the TE20 mode excitation structures and TE20 mode characteristics. The 180\u00b0 out-of-phase and equal magnitude of the balun outputs in a wide frequency range can be achieved inherently. The balun based on the slotline excitation has a fractional bandwidth of 50.2%, from 7.3 to 12.2 GHz, with measured return loss, amplitude imbalance, and phase imbalance better than 10 and 0.45 dB and 3.8\u00b0. The balun based on the aperture coupling excitation shows a fractional bandwidth of 50.3%, from 7 to 11.7 GHz, with measured return loss, insertion loss, amplitude imbalance, and phase imbalance better than 10, 1, and 0.27 dB and 2.4\u00b0, respectively. Both baluns not only show good performance, but also demonstrate the existence of the TE20 mode in the SIW."} {"_id": "9097ad840707554c7a8a5d288fcb3ce8993d5ee9", "title": "Intuitionistic fuzzy sets and L-fuzzy sets", "text": "This paper proves that the concepts of intuitionistic fuzzy sets and intuitionistic L-fuzzy sets and the concept of L-fuzzy sets are equivalent."} {"_id": "c52591baa825539e897385e14e776cced0341c0b", "title": "Games and Agents: Designing Intelligent Gameplay", "text": "There is an attention shift within the gaming industry toward more natural (long-term) behavior of nonplaying characters (NPCs). Multiagent system research offers a promising technology to implement cognitive intelligent NPCs. However, the technologies used in game engines and multiagent platforms are not readily compatible due to some inherent differences of concerns. Where game engines focus on real-time aspects and thus propagate efficiency and central control, multiagent platforms assume autonomy of the agents. Increased autonomy and intelligence may offer benefits for more compelling gameplay and may even be necessary for serious games. However, it raises problems when current game design techniques are used to incorporate state-of-the-art multiagent system technology. In this paper, we will focus on three specific problem areas that arise from this difference of view: synchronization, information representation, and communication. We argue that the current attempts for integration still fall short on some of these aspects. 
We show that to fully integrate intelligent agents in games, one should not only use a technical solution, but also a design methodology that is amenable to agents. The game design should be adjusted to incorporate the possibilities of agents early on in the process."} {"_id": "0d80a29beb1af75efd75a8ffcde936c4d40054b4", "title": "Towards Enhancing the Security of OAuth Implementations in Smart Phones", "text": "With the roaring growth and wide adoption of smart mobile devices, users are continuously integrating into the culture of mobile applications (apps). These apps are not only gaining access to information on the smartphone, but are also able to gain users' authorization to access remote servers on their behalf. The Open standard for Authorization (OAuth) is widely used in mobile apps for gaining access to users' resources on remote service providers. In this paper, we analyze the different OAuth implementations adopted by the SDKs of the popular resource providers on smartphones and demonstrate possible attacks on most OAuth implementations. By analyzing the source code of more than 430 popular Android apps, we summarize the trends followed by the service providers and the OAuth development choices made by application developers. In addition, we propose an application-based OAuth Manager framework that provides a secure OAuth flow in smartphones; it is based on the concept of privilege separation and does not require high overhead."} {"_id": "f1899628c98049a4e33af3539cd844321c572e0f", "title": "Improving Cancer Treatment via Mathematical Modeling: Surmounting the Challenges Is Worth the Effort", "text": "Drug delivery schedules are key factors in the efficacy of cancer therapies, and mathematical modeling of population dynamics and treatment responses can be applied to identify better drug administration regimes as well as provide mechanistic insights. To capitalize on the promise of this approach, the cancer field must meet the challenges of moving this type of work into clinics."} {"_id": "3b3c153b09495e2f79dd973253f9d2ee763940a5", "title": "Unsupervised learning of feature hierarchies", "text": "The applicability of machine learning methods is often limited by the amount of available labeled data, and by the ability (or inability) of the designer to produce good internal representations and good similarity measures for the input data vectors. The aim of this thesis is to alleviate these two limitations by proposing algorithms to learn good internal representations, and invariant feature hierarchies from unlabeled data. These methods go beyond traditional supervised learning algorithms, and rely on unsupervised, and semi-supervised learning. In particular, this work focuses on \u201cdeep learning\u201d methods, a set of techniques and principles to train hierarchical models. Hierarchical models produce feature hierarchies that can capture complex non-linear dependencies among the observed data variables in a concise and efficient manner. After training, these models can be employed in real-time systems because they compute the representation by a very fast forward propagation of the input through a sequence of non-linear transformations. When the paucity of labeled data does not allow the use of traditional supervised algorithms, each layer of the hierarchy can be trained in sequence starting at the bottom by using unsupervised or semi-supervised algorithms. Once each layer has been trained, the whole system can be fine-tuned in an end-to-end fashion. 
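The greedy layer-wise scheme described at the end of the abstract above can be shown compactly. The snippet below is our illustration only: PCA (via SVD) stands in for an autoencoder as the per-layer unsupervised learner, and the final supervised fine-tuning pass is omitted.

```python
# Schematic greedy layer-wise unsupervised pretraining (our illustration, not
# the thesis code): each layer is fit on the output of the previous one with
# a simple unsupervised learner; a real system would fine-tune end-to-end.

import numpy as np

def fit_layer(X, n_components):
    """Fit one 'layer' without labels: center the data, keep top principal axes."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components].T        # projection learned unsupervised

def encode(X, mean, W):
    return np.tanh((X - mean) @ W)          # nonlinearity between layers

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))              # unlabeled data

layers, H = [], X
for width in (32, 16, 8):                   # train layers bottom-up, in sequence
    mean, W = fit_layer(H, width)
    layers.append((mean, W))
    H = encode(H, mean, W)

print("final representation:", H.shape)     # (500, 8)
```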
We propose several unsupervised algorithms that can be used as building blocks to train such feature hierarchies. We investigate algorithms that produce sparse overcomplete representations and features that are invariant to known and learned transformations. These algorithms are designed using the Energy-"} {"_id": "f75256662baf6da5b6d5e11dbc6ae7992cf6a168", "title": "Deep convolutional neural network for latent fingerprint enhancement", "text": "In this work, we propose a novel latent fingerprint enhancement method based on FingerNet, inspired by the recent development of Convolutional Neural Networks (CNNs). Although CNNs are achieving superior performance in many computer vision tasks from low-level image processing to high-level semantic understanding, limited attention has been paid in the fingerprint community. The proposed FingerNet has three major parts: one common convolution part shared by two different deconvolution parts, which are the enhancement branch and the orientation branch. The convolution part extracts fingerprint features particularly for enhancement purposes. The enhancement deconvolution branch is employed to remove structured noise and enhance fingerprints. The orientation deconvolution branch performs the task of guiding enhancement through a multi-task learning strategy. The network is trained in a pixels-to-pixels, end-to-end manner and can directly output the enhanced latent fingerprint. We also study some implementation details such as single-task learning, multi-task learning, and residual learning. Experimental results of the FingerNet system on the latent fingerprint dataset NIST SD27 demonstrate the effectiveness and robustness of the proposed method."} {"_id": "56df27ed921375fd8224c56a8cf764cf08e7b837", "title": "IRIS performer: a high performance multiprocessing toolkit for real-time 3D graphics", "text": "This paper describes the design and implementation of IRIS Performer, a toolkit for visual simulation, virtual reality, and other real-time 3D graphics applications. The principal design goal is to allow application developers to more easily obtain maximal performance from 3D graphics workstations which feature multiple CPUs and support an immediate-mode rendering library. To this end, the toolkit combines a low-level library for high-performance rendering with a high-level library that implements pipelined, parallel traversals of a hierarchical scene graph. While discussing the toolkit architecture, the paper illuminates and addresses performance issues fundamental to immediate-mode graphics and coarse-grained, pipelined multiprocessing. Graphics optimizations focus on efficient data transfer to the graphics subsystem, reduction of mode settings, and restricting state inheritance. The toolkit's multiprocessing features solve the problems of how to partition work among multiple processes, how to synchronize these processes, and how to manage data in a pipelined, multiprocessing environment. The paper also discusses support for intersection detection, fixed-frame rates, run-time profiling and special effects such as geometric morphing."} {"_id": "6bf187cf239e66767688ed7dd88f6a408bf465f0", "title": "Tversky loss function for image segmentation using 3D fully convolutional deep networks", "text": "Fully convolutional deep neural networks show excellent potential for fast and accurate image segmentation. 
One of the main challenges in training these networks is data imbalance, which is particularly problematic in medical imaging applications such as lesion segmentation, where the number of lesion voxels is often much lower than the number of non-lesion voxels. Training with unbalanced data can lead to predictions that are severely biased towards high precision but low recall (sensitivity), which is undesired especially in medical applications where false negatives are much less tolerable than false positives. Several methods have been proposed to deal with this problem, including balanced sampling, two-step training, sample re-weighting, and similarity loss functions. In this paper, we propose a generalized loss function based on the Tversky index to address the issue of data imbalance and achieve a much better trade-off between precision and recall in training 3D fully convolutional deep neural networks. Experimental results in multiple sclerosis lesion segmentation on magnetic resonance images show improved F2 score, Dice coefficient, and area under the precision-recall curve in test data. Based on these results we suggest the Tversky loss function as a generalized framework to effectively train deep neural networks."} {"_id": "8bab7b810adbd52252e11cbdc10a8781050be0e8", "title": "Implementation of FC-TCR for Reactive Power Control", "text": "This paper deals with the simulation and implementation of a fixed capacitor thyristor controlled reactor (FC-TCR) static var compensator (SVC) system. The TCR system is simulated using MATLAB and the simulation results are presented. An SVC is basically a shunt-connected static var generator whose output is adjusted to exchange capacitive or inductive current so as to maintain or control specific power system variables. In this paper, a simple circuit model of the thyristor controlled reactor is modeled and simulated using MATLAB. The current drawn by the TCR varies with the variation in the firing angle. The simulation results are compared with the theoretical results."} {"_id": "7393c8fc67411ab4101d5e03b4b1e2ef12f99f3c", "title": "Deep Automatic Portrait Matting", "text": "We propose an automatic image matting method for portrait images. This method does not need user interaction, which was, however, essential in most previous approaches. In order to accomplish this goal, a new end-to-end convolutional neural network (CNN) based framework is proposed, taking a portrait image as input and outputting the matte result. Our method considers not only image semantic prediction but also pixel-level image matte optimization. A new portrait image dataset is constructed with our labeled matting ground truth. Our automatic method achieves comparable results with state-of-the-art methods that require specified foreground and background regions or pixels. Many applications are enabled given the automatic nature of our system."} {"_id": "94198eccf0551f7dd787c68b6236a87bddeec236", "title": "Prediction of Primary Pupil Enrollment in Government School Using Data Mining Forecasting Technique", "text": "This research concentrates upon predictive analysis of pupil enrollment using a data-mining-based forecasting technique. The Microsoft SQL Server Data Mining Add-ins Excel 2007 was employed as the mining tool for predicting pupil enrollment. The time series algorithm was used for the experimental analysis. 
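The loss described in the Tversky abstract above admits a very small implementation. Below is a minimal soft Tversky loss (our sketch, not the paper's 3D network code); with alpha = beta = 0.5 it reduces to the Dice loss, and weighting false negatives more heavily (beta > alpha) trades precision for recall, which is the paper's point.

```python
# Minimal soft Tversky loss (our sketch). alpha weights false positives,
# beta weights false negatives; alpha = beta = 0.5 recovers the Dice loss.

import numpy as np

def tversky_loss(probs, targets, alpha=0.3, beta=0.7, eps=1e-7):
    """probs: predicted foreground probabilities; targets: {0,1} labels."""
    p = probs.ravel().astype(float)
    g = targets.ravel().astype(float)
    tp = (p * g).sum()                 # soft true positives
    fp = (p * (1 - g)).sum()           # soft false positives
    fn = ((1 - p) * g).sum()           # soft false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

probs = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 1, 0])
print(round(tversky_loss(probs, labels), 4))
```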
The U-DISE (Unified District Information System for Education) dataset of Aurangabad district in Maharashtra (India) was obtained from SSA (Sarva Shiksha Abhiyan) and was used for the analysis. Data mining for primary pupil enrollment research provides a feasible way to analyze the trend and to address the interests of pupils. The dataset was studied and analyzed to forecast pupils registered in government schools for upcoming years. We conclude that in upcoming years, pupil strength in government schools in the said district will drop while the number of teachers will be in surplus. Keywords\u2014 Data mining, Microsoft SQL Server Data Mining Add-ins Excel 2007, Time series algorithm, U-DISE"} {"_id": "3625936bab70324db3bf39bc5ae57cbbc0971f8d", "title": "60-GHz Circularly Polarized U-Slot Patch Antenna Array on LTCC", "text": "This communication presents a 60-GHz wideband circularly polarized (CP) U-slot patch antenna array of 4 \u00d7 4 elements on low temperature cofired ceramic (LTCC). A CP U-slot patch antenna is used as the array element to enhance the impedance bandwidth, and a stripline sequential rotation feeding scheme is applied to achieve a wide axial ratio (AR) bandwidth. Meanwhile, a grounded coplanar waveguide (GCPW) to stripline transition is designed for probe station measurement. The fabricated antenna array has dimensions of 14 \u00d7 16 \u00d7 1.1 mm3. The simulated and measured impedance bandwidths, AR bandwidths, and radiation patterns are investigated and compared. Measured results show that the proposed antenna array has a wide impedance bandwidth from 50.5 GHz to 67 GHz for |S11| < -10 dB, and a wide AR bandwidth from 54 GHz to 65.5 GHz for AR < 3 dB. In addition, it exhibits a peak gain of 16 dBi and a beam-shaped pattern with a 3-dB beamwidth of 20\u00b0. Moreover, its AR keeps below 3 dB within the 3-dB beamwidth."} {"_id": "447ce2aecdf742cf96137f8bf7355a7404489178", "title": "Wideband Millimeter-Wave Substrate Integrated Waveguide Cavity-Backed Rectangular Patch Antenna", "text": "In this letter, a new type of wideband substrate integrated waveguide (SIW) cavity-backed patch antenna and array for millimeter-wave (mmW) applications is investigated and implemented. The proposed antenna is composed of a rectangular patch with a backed SIW cavity. In order to enhance the bandwidth and radiation efficiency, the cavity is designed to resonate at its TE210 mode. Based on the proposed antenna, a 4 \u00d7 4 array is also designed. Both the proposed antenna and array are fabricated with a standard printed circuit board (PCB) process, which offers the advantage of easy integration with planar circuits. The measured bandwidth (|S11| \u2264 -10 dB) of the antenna element is larger than 15%, and that of the antenna array is about 8.7%. The measured peak gains are 6.5 dBi for the element and 17.8 dBi for the array, and the corresponding simulated radiation efficiencies are 83.9% and 74.9%, respectively. The proposed antenna and array are promising for millimeter-wave applications due to their merits of wide band, high efficiency, low cost, low profile, etc."} {"_id": "3501023a9f262abbfa0947a04a2403c48329d432", "title": "Millimeter-Wave Microstrip Comb-Line Antenna Using Reflection-Canceling Slit Structure", "text": "A microstrip comb-line antenna is developed in the millimeter-wave band. When the element spacing is one guide wavelength for the broadside beam under traveling-wave excitation, the reflections from all the radiating elements are synthesized in phase. 
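On the enrollment-forecasting abstract above: the study used the Microsoft SQL Server time series algorithm, which we do not reproduce here; the sketch below substitutes a plain least-squares trend extrapolation on hypothetical yearly counts to show the shape of the forecasting step.

```python
# Illustrative stand-in for the time-series forecasting step (hypothetical
# enrollment numbers; the study itself used the SQL Server mining add-in).

def linear_trend_forecast(series, horizon):
    """Least-squares line through (year_index, value), extrapolated forward."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series)) \
            / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + h) for h in range(horizon)]

enrollment = [52100, 51400, 50800, 49900, 49100]   # hypothetical yearly pupils
print([round(v) for v in linear_trend_forecast(enrollment, horizon=2)])
# The downward slope mirrors the study's conclusion of dropping pupil strength.
```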
Therefore, the total reflection increases significantly. Furthermore, re-radiation from the elements due to the reflected wave degrades the design accuracy of the required radiation pattern. We propose a way to improve the reflection characteristic of the antenna for arbitrary beam directions, including strictly the broadside direction. To suppress the reflection, we propose a reflection-canceling slit structure installed on the feeding line around each radiating element. A 27-element linear array antenna with a broadside beam is developed at 76.5 GHz. To confirm the feasibility of the simple design procedure, the performance is evaluated through measurements in the millimeter-wave band."} {"_id": "429d0dd7192450e2a52a8ae7f658a5d99222946e", "title": "Low Cost Planar Waveguide Technology-Based Dielectric Resonator Antenna (DRA) for Millimeter-Wave Applications: Analysis, Design, and Fabrication", "text": "A compact, low-cost, and high-radiation-efficiency antenna structure, the planar-waveguide, substrate integrated waveguide (SIW) fed dielectric resonator antenna (DRA), is presented in this paper. Since the SIW is a high-Q waveguide and the DRA is a low-loss radiator, the SIW-DRA forms an excellent antenna system with high radiation efficiency at the millimeter-wave band, where conductor loss dominates. The impact of different antenna parameters on the antenna performance is studied. Experimental data for SIW-DRAs, based on two different slot orientations, at the millimeter-wave band are introduced and compared to the simulated HFSS results to validate our proposed antenna model. Good agreement is obtained. The measured gain for the SIW-DRA single element showed a broadside gain of 5.51 dB, a -19 dB maximum cross-polarized radiation level, and an overall calculated (simulated using HFSS) radiation efficiency of greater than 95%."} {"_id": "21d054e67fb6d401c0c8d474eda6a2e6a22d4d93", "title": "Context-Aware Bandits", "text": "In this paper, we present a simple and efficient Context-Aware Bandit (CAB) algorithm. With CAB we attempt to craft a bandit algorithm that can capture collaborative effects and that can be easily deployed in a real-world recommendation system, where multi-armed bandits have been shown to perform well, in particular with respect to the cold-start problem. CAB utilizes a context-aware clustering technique augmenting exploration-exploitation strategies. CAB dynamically clusters the users based on the content universe under consideration. We provide a theoretical analysis in the standard stochastic multi-armed bandit setting. We demonstrate the efficiency of our approach on production and real-world datasets, showing the scalability and, more importantly, the significantly increased prediction performance against several existing state-of-the-art methods."} {"_id": "36d759b46c001c28247a03129db81bda2f3c190d", "title": "Occupational accidents aboard merchant ships.", "text": "OBJECTIVES\nTo investigate the frequency, circumstances, and causes of occupational accidents aboard merchant ships in international trade, and to identify risk factors for the occurrence of occupational accidents as well as dangerous working situations where possible preventive measures may be initiated.\n\n\nMETHODS\nThe study is a historical follow-up on occupational accidents among crew aboard Danish merchant ships in the period 1993-7. Data were extracted from the Danish Maritime Authority and insurance data. 
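A highly simplified sketch in the spirit of the CAB abstract above (not the published algorithm): users are grouped by a context key and an epsilon-greedy bandit is maintained per group, so feedback is shared within a cluster and the cold-start problem is softened.

```python
# Toy clustered epsilon-greedy bandit (our simplification of the CAB idea).

import random
from collections import defaultdict

class ClusteredEpsilonGreedy:
    def __init__(self, n_arms, epsilon=0.1):
        self.n_arms, self.epsilon = n_arms, epsilon
        self.counts = defaultdict(lambda: [0] * n_arms)    # per-cluster pulls
        self.values = defaultdict(lambda: [0.0] * n_arms)  # per-cluster means

    def select(self, cluster):
        if random.random() < self.epsilon:
            return random.randrange(self.n_arms)           # explore
        vals = self.values[cluster]
        return vals.index(max(vals))                       # exploit

    def update(self, cluster, arm, reward):
        self.counts[cluster][arm] += 1
        n = self.counts[cluster][arm]
        v = self.values[cluster][arm]
        self.values[cluster][arm] = v + (reward - v) / n   # running mean

random.seed(0)
bandit = ClusteredEpsilonGreedy(n_arms=3)
for t in range(1000):
    cluster = "young" if t % 2 else "senior"
    arm = bandit.select(cluster)
    best = 0 if cluster == "young" else 2                  # synthetic preference
    bandit.update(cluster, arm, reward=1.0 if arm == best else 0.0)
print(bandit.values["young"], bandit.values["senior"])
```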
Exact data on time at risk were available.\n\n\nRESULTS\nA total of 1993 accidents were identified during a total of 31 140 years at sea. Among these, 209 accidents resulted in permanent disability of 5% or more, and 27 were fatal. The mean risk of having an occupational accident was 6.4/100 years at sea and the risk of an accident causing a permanent disability of 5% or more was 0.67/100 years aboard. Relative risks for notified accidents and accidents causing permanent disability of 5% or more were calculated in a multivariate analysis including ship type, occupation, age, time on board, change of ship since last employment period, and nationality. Foreigners had a considerably lower recorded rate of accidents than Danish citizens. Age was a major risk factor for accidents causing permanent disability. Change of ship and the first period aboard a particular ship were identified as risk factors. Walking from one place to another aboard the ship caused serious accidents. The most serious accidents happened on deck.\n\n\nCONCLUSIONS\nIt was possible to clearly identify work situations and specific risk factors for accidents aboard merchant ships. Most accidents happened while performing daily routine duties. Preventive measures should focus on workplace instructions for all important functions aboard and also on the prevention of accidents caused by walking around aboard the ship."} {"_id": "549fc15ee760ceb7569c38888b21cee1c3806148", "title": "Dynamic intra- and interhemispheric interactions during unilateral and bilateral hand movements assessed with fMRI and DCM", "text": "Any motor action results from a dynamic interplay of various brain regions involved in different aspects of movement preparation and execution. Establishing a reliable model of how these areas interact is crucial for a better understanding of the mechanisms underlying motor function in both healthy subjects and patients. We used fMRI and dynamic causal modeling to reveal the specific excitatory and inhibitory influences within the human motor system for the generation of voluntary hand movements. We found an intrinsic balance of excitatory and inhibitory couplings among core motor regions within and across hemispheres. Neural coupling within this network was specifically modulated upon uni- and bimanual movements. During unimanual movements, connectivity towards the contralateral primary motor cortex was enhanced while neural coupling towards ipsilateral motor areas was reduced by both transcallosal inhibition and top-down modulation. Bimanual hand movements were associated with a symmetric facilitation of neural activity mediated by both increased intrahemispheric connectivity and enhanced transcallosal coupling of SMA and M1. The data suggest that especially the supplementary motor area represents a key structure promoting or suppressing activity in the cortical motor network driving uni- and bilateral hand movements. Our data demonstrate that fMRI in combination with DCM allows insights into intrinsic properties of the human motor system and task-dependent modulations thereof."} {"_id": "e66ba647600b74e05b79ac5de8a6c150e96e8f6b", "title": "Stereo HDR Disparity Map Computation Using Structured Light", "text": "In this paper, we present work in progress towards the generation of a ground truth data set for High Dynamic Range (HDR) stereo matching. 
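The incidence rates quoted in the maritime-accident abstract above follow directly from the raw counts; checking the arithmetic (ours):

```latex
\[
  \frac{1993\ \text{accidents}}{31\,140\ \text{years at sea}} \times 100
  \;\approx\; 6.4\ \text{per 100 years},
  \qquad
  \frac{209}{31\,140} \times 100 \;\approx\; 0.67\ \text{per 100 years}.
\]
```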
The development and evaluation of novel stereo matching algorithms that are tailored to the characteristics of HDR images would greatly benefit from the availability of such reference data. We describe our laboratory setup and processing steps for acquiring multi-exposed stereo images along with a corresponding reference disparity map computed by a structured light approach. We discuss the special requirements and challenges which HDR test scenes impose on the computation of the reference disparities and show some preliminary results."} {"_id": "5c8b6f2a503ae7179751dbebe3982594e08140ae", "title": "Metamodel for Service Design and Service Innovation: Integrating Service Activities, Service Systems, and Value Constellations", "text": "This paper presents a metamodel that addresses service system design and innovation by traversing and integrating three essential layers: service activities, service systems, and value constellations. The metamodel's approach to service systems as service-in-operation is an alternative to another currently used approach that views service systems as systems of economic exchange. The metamodel addresses service science topics including basic concepts of service science, design thinking for service systems, decomposition within service systems, and integration of IT service architecture with"} {"_id": "2c4a486acbf10c870629d9b980600c0492e6126e", "title": "Accessing Structured Health Information through English Queries and Automatic Deduction", "text": "While much health data is available online, patients who are not technically astute may be unable to access it because they may not know the relevant resources, they may be reluctant to confront an unfamiliar interface, and they may not know how to compose an answer from information provided by multiple heterogeneous resources. We describe ongoing research in using natural English text queries and automated deduction to obtain answers based on multiple structured data sources in a specific subject domain. Each English query is transformed using natural language technology into an unambiguous logical form; this is submitted to a theorem prover that operates over an axiomatic theory of the subject domain. Symbols in the theory are linked to relations in external databases known to the system. An answer is obtained from the proof, along with an English language explanation of how the answer was obtained. Answers need not be present explicitly in any of the databases, but rather may be deduced or computed from the information they provide. Although English is highly ambiguous, the natural language technology is informed by subject domain knowledge, so that readings of the query that are syntactically plausible but semantically impossible are discarded. When a question is still ambiguous, the system can interrogate the patient to determine what meaning was intended. Additional queries can clarify earlier ones or ask questions referring to previously computed answers. We describe a prototype system, Quadri, which answers questions about HIV treatment using the Stanford HIV Drug Resistance Database and other resources. Natural language processing is provided by PARC\u2019s Bridge, and the deductive mechanism is SRI\u2019s SNARK theorem prover. 
We discuss some of the problems that must be faced to make this approach work, and some of our solutions."} {"_id": "cccd96c81fb365bb5eab587c4adb1e40d9235b9b", "title": "Automatic categorization of medical images for content-based retrieval and data mining.", "text": "Categorization of medical images means selecting the appropriate class for a given image out of a set of pre-defined categories. This is an important step for data mining and content-based image retrieval (CBIR). So far, published approaches are capable of distinguishing up to 10 categories. In this paper, we evaluate automatic categorization into more than 80 categories describing the imaging modality and direction as well as the body part and biological system examined. Based on 6231 reference images from hospital routine, 85.5% correctness is obtained by combining global texture features with scaled images. With a frequency of 97.7%, the correct class is within the best ten matches, which is sufficient for medical CBIR applications."} {"_id": "0559d43f582548d767663f60ddb874ae3678885c", "title": "Capturing and analyzing low-level events from the code editor", "text": "In this paper, we present FLUORITE, a publicly available event logging plug-in for Eclipse which captures all of the low-level events when using the Eclipse code editor. FLUORITE captures not only what types of events occurred in the code editor, but also more detailed information such as the inserted and deleted text and the specific parameters for each command. This enables the detection of many usage patterns that could otherwise not be recognized, such as \"typo correction\", which requires knowing that the entered text is immediately deleted and replaced. Moreover, the snapshots of each source code file that has been opened during the session can be completely reproduced using the collected information. We also provide analysis and visualization tools which report various statistics about usage patterns, and we provide the logs in an XML format so others can write their own analyzers. FLUORITE can be used not only for evaluating existing tools, but also for discovering issues that motivate new tools."} {"_id": "fce01bf3245691f81026e5209fab684a9167fce5", "title": "Zero-Day Attack Identification in Streaming Data Using Semantics and Spark", "text": "Intrusion Detection Systems (IDS) have been in existence for many years now, but they fall short in efficiently detecting zero-day attacks. This paper presents an organic combination of Semantic Link Networks (SLN) and dynamic semantic graph generation for the on-the-fly discovery of zero-day attacks, using the Spark Streaming platform for parallel detection. In addition, a minimum redundancy maximum relevance (MRMR) feature selection algorithm is deployed to determine the most discriminating features of the dataset. Compared to previous studies on zero-day attack identification, the described method yields better results due to the semantic learning and reasoning on top of the training data and due to the use of collaborative classification methods. We also verified the scalability of our method in a distributed environment."} {"_id": "0cd95b98b590f4a5600f0ecca54f4d7409e2817c", "title": "Leveraging social grouping for trust building in foreign electronic commerce firms: An exploratory study", "text": "Internet development has fueled e-commerce firms\u2019 globalization efforts, but many have met with only limited success. 
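The MRMR step named in the zero-day abstract above can be sketched in a few lines. This is our illustration: a greedy selection that maximizes relevance minus redundancy, with absolute correlation as a cheap stand-in for the mutual information used in the full method.

```python
# Greedy mRMR-style feature selection (our sketch; correlation stands in for
# mutual information on these continuous toy features).

import numpy as np

def mrmr(X, y, k):
    """Pick k feature indices, greedily maximizing relevance - redundancy."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=200)        # near-duplicate of f0
y = X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=200)
print(mrmr(X, y, k=3))   # expect f1 and one of {f0, f3}, not both duplicates
```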
This often stems from the foreign firms\u2019 limited understanding of a focal country\u2019s local culture and idiosyncrasies. Foreign firms are usually viewed as out-group entities, which lowers consumers\u2019 trust in them. The extent of this phenomenon varies. In locations where people are more skeptical of out-groups, a critical question is whether it is possible to transform such foreign out-group firms into in-groups, specifically with the support of popular social networking media. Based on Social Identity Theory and the Trust Transference Process, five strategies leveraging social grouping and social ties to build trust for foreign electronic commerce firms were proposed. A survey was conducted to examine their effectiveness. The results suggest that social-grouping strategies are useful for in-grouping foreign out-group entities to build trust, and that the effectiveness of the strategies is determined by the social similarity and psychological distance between the consumer and the endorser. This has important implications for scholars and practitioners, both local and abroad, to leverage social grouping to boost Internet sales. \u00a9 2013 Elsevier Ltd. All rights reserved."} {"_id": "0cb2e8605a7b5ddb5f3006f71d19cb9da960db98", "title": "DSD: Dense-Sparse-Dense Training for Deep Neural Networks", "text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initializing the pruned parameters to zero, and retraining the whole dense network. Experiments show that DSD training can improve the performance of a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top-1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ\u201993 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn\u2019t change the network architecture or incur any inference overhead. The consistent and significant performance gains of the DSD experiments show the inadequacy of current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD."} {"_id": "ae18b6eecef980fb9d88534e98c5a974ecc34865", "title": "High quality information provisioning and data pricing", "text": "This paper presents ideas on how to advance the research on high-quality information provisioning and information pricing. To this end, the current state of the art in combining data curation and information provisioning, as well as in data pricing, is reviewed. 
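The dense-sparse-dense flow described in the DSD abstract above reduces to a short schedule over a weight tensor. The sketch below is ours, with a placeholder optimizer in place of real SGD; the pruning mask and the re-dense restart are the parts being illustrated.

```python
# Toy dense-sparse-dense (DSD) schedule (our sketch; `train` is a placeholder
# optimizer, not real SGD on a network).

import numpy as np

def train(W, mask=None, steps=100, lr=0.1):
    """Placeholder training step: pull weights toward a fixed target."""
    target = np.full_like(W, 0.5)
    for _ in range(steps):
        grad = W - target
        if mask is not None:
            grad = grad * mask          # pruned weights receive no updates
        W = W - lr * grad
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

W = train(W)                                     # D: dense training
threshold = np.quantile(np.abs(W), 0.5)          # S: prune small weights...
mask = (np.abs(W) > threshold).astype(float)
W = train(W * mask, mask=mask)                   # ...retrain under sparsity
W = train(W)                                     # re-D: pruned weights restart
                                                 # from zero, full capacity back
print(np.round(W, 2))
```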
Based on that, open issues, such as tailoring data to a user's need and determining a market value of data, are identified. As preliminary solutions, it is proposed to investigate the identified problems in an integrated manner."} {"_id": "d38fe8c30b17f9e5955e7185ac1fd832bbe1996a", "title": "Chukwa: A large-scale monitoring system", "text": "We describe the design and initial implementation of Chukwa, a data collection system for monitoring and analyzing large distributed systems. Chukwa is built on top of Hadoop, an open source distributed filesystem and MapReduce implementation, and inherits Hadoop\u2019s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying monitoring and analysis results, in order to make the best use of this collected data."} {"_id": "2556e8a166b24c9dc463fde2ffa43878de6dc018", "title": "Role of Procedural Justice, Organizational Commitment and Job Satisfaction on Job Performance: The Mediating Effects of Organizational Citizenship Behavior", "text": "The study examines the impact of procedural justice, organizational commitment, and job satisfaction on employee performance, and the potential mediating role played by organizational citizenship behavior in that process. This model was tested using a sample of 70 employees embedded in 2 groups from 15 branches of a large syariah (Islamic) bank in Malang. The sample was taken using proportional random sampling. Data were collected directly from respondents using questionnaires, and the data were analyzed using GeSCA. The results showed that both procedural justice and organizational commitment positively affected organizational citizenship behavior. Organizational commitment positively influenced job performance. Job satisfaction did not positively influence organizational citizenship behavior or job performance. Organizational citizenship behavior positively influenced job performance. Organizational citizenship behavior acted as a partial mediator between procedural justice, organizational commitment, and job performance. A number of suggestions on managerial theory and implementation are proposed."} {"_id": "dd04d0106102f3822ad65ae65089821e2c3fc716", "title": "Substrate-Integrated Two-Port Dual-Frequency Antenna", "text": "A two-port dual-frequency substrate-integrated antenna with a large frequency difference is presented. It consists of a differentially fed slot antenna and a substrate-integrated dielectric resonator antenna for low- and high-frequency radiation, respectively. The former is loaded by a hollow patch, whereas the latter is fabricated inside the hollow region of the patch by using air holes and metalized vias. Beneath the antenna substrate is a second substrate on which slot-coupled sources are printed to feed the two antennas. For demonstration, a two-port dual-frequency antenna working at the 5.2-GHz WLAN band and the 24-GHz ISM band was designed, fabricated, and measured. The S-parameters, radiation patterns, and antenna gains of the two antenna parts are reported. Reasonable agreement between the measured and simulated results is observed. 
Very good isolation of over 35 dB between the two antenna parts is observed."} {"_id": "08c766010a11737aa71d9e621fad6697093e4ded", "title": "Balanced Multithreading: Increasing Throughput via a Low Cost Multithreading Hierarchy", "text": "A simultaneous multithreading (SMT) processor can issue instructions from several threads every cycle, allowing it to effectively hide various instruction latencies; this effect increases with the number of simultaneous contexts supported. However, each added context on an SMT processor incurs a cost in complexity, which may lead to an increase in pipeline length or a decrease in the maximum clock rate. This paper presents new designs for multithreaded processors which combine a conservative SMT implementation with a coarse-grained multithreading capability. By presenting more virtual contexts to the operating system and user than are supported in the core pipeline, the new designs can take advantage of the memory parallelism present in workloads with many threads, while avoiding the performance penalties inherent in a many-context SMT processor design. A design with 4 virtual contexts, but which is based on a 2-context SMT processor core, gains an additional 26% throughput when 4 threads are run together."} {"_id": "76f695955fa7ba6f3edff4a9cec28426f4e75f78", "title": "Can machine learning aid in delivering new use cases and scenarios in 5G?", "text": "5G represents the next generation of communication networks and services, and will bring a new set of use cases and scenarios. These in turn will raise a new set of challenges from the network and service management perspective, such as network traffic and resource management, big data management, and energy efficiency. Consequently, novel techniques and strategies are required to address these challenges in a smarter way. In this paper, we present the limitations of current network and service management and describe in detail the challenges that 5G is expected to face from a management perspective. The main contribution of this paper is presenting a set of use cases and scenarios of 5G in which machine learning can aid in addressing their management challenges. It is expected that machine learning can provide a higher and more intelligent level of monitoring and management of networks and applications, improve operational efficiencies, and facilitate the requirements of the future 5G network."} {"_id": "234bca5bb8ee713d168b77f1d7729c23e9623c1d", "title": "Hierarchical Multi-label Conditional Random Fields for Aspect-Oriented Opinion Mining", "text": "A common feature of many online review sites is the use of an overall rating that summarizes the opinions expressed in a review. Unfortunately, these document-level ratings do not provide any information about the opinions contained in the review that concern a specific aspect (e.g., cleanliness) of the product being reviewed (e.g., a hotel). In this paper we study the finer-grained problem of aspect-oriented opinion mining at the sentence level, which consists of predicting, for all sentences in the review, whether the sentence expresses a positive, neutral, or negative opinion (or no opinion at all) about a specific aspect of the product. For this task we propose a set of increasingly powerful models based on conditional random fields (CRFs), including a hierarchical multi-label CRF scheme that jointly models the overall opinion expressed in the review and the set of aspect-specific opinions expressed in each of its sentences. 
We evaluate the proposed models against a dataset of hotel reviews (which we here make publicly available) in which the set of aspects and the opinions expressed concerning them are manually annotated at the sentence level. We find that both hierarchical and multi-label factors lead to improved predictions of aspect-oriented opinions."} {"_id": "ebe1db7281e17f6e353009efc01b00e4eacc3de3", "title": "International Telecommunication Union. Systems Management: Summarization function. Recommendation X.738", "text": "Prepaid mobile phone plans are gaining in importance in the U.S. and are already well established in Europe. Aggregation of a certain number of Micropayments is the preferred business model for wireless carriers to leverage the existing payment infrastructure with a mobile payment solution. There exist various Payment Scenarios that are based on the Wallet metaphor, and in general, they can be used to implement flexible and scalable solutions. Person-to-person transactions are already very popular in the U.S. as a consequence of the widespread use of a payment instrument for auctions at eBay. This scenario is most often discussed as a valuable alternative to check payments in the U.S. For several reasons, the U.S. has been slower to adopt wireless technology than Europe. First, landlines in the U.S. are dominant and high personal computer penetration prevents people from using mobile phones to communicate, either by voice calls or text messaging. Second, prices in the U.S. favor landlines, even though prices for mobile phone plans in the competitive market have been cut many times in the past. For mobile payment, the fact that social patterns matter could affect the adoption of this form of payment in Europe, because in several countries, solutions already exist. The factors that form the concept of Diffusion of Innovation Theory eventually can be seen as the critical success factors of mobile payment. Whereas some emerging and existing mobile payment systems will come along with their own processing infrastructure, several will be based on existing payment networks that have initially been designed for another purpose. In Europe, credit card operators launch mobile payment solutions to leverage their existing networks, while in the U.S., operators of ATM networks and leading credit card processors even try to bypass existing networks. Unlike in Europe, where checks have de facto been banned from the market by charging high prices for the processing of this non-cash payment instrument, the U.S. partly supports innovative check processing solutions to leverage the ACH infrastructure. The European bank approach shows that customers' payment behavior can be influenced by a pricing policy. Meanwhile, European countries and the U.S. have switched to 2.5G networks. With the transition to 3G, interoperability within the U.S. should finally become reality. Commercial launches of 3G services have already started in both regions. Notably, the first U.S. carrier to launch a broadband wireless service seems to be attracting business customers and offering high \u2026"} {"_id": "d190a3f1dd23dfc980e5081e64620c0b8a076f95", "title": "A Systematic Gap Analysis of Social Engineering Defence Mechanisms Considering Social Psychology", "text": "Social engineering is the acquisition of information about computer systems by methods that primarily rely on non-technical means. While the technical security of most critical systems is high, the systems remain vulnerable to attacks from social engineers. 
Social engineering is a technique that: (i) does not require any (advanced) technical tools, (ii) can be used by anyone, and (iii) is cheap. Traditional penetration testing approaches often focus on vulnerabilities in network or software systems. Few approaches even consider the exploitation of humans via social engineering. While the number of social engineering attacks and the damage they cause rise every year, the defences against social engineering do not evolve accordingly. Hence, employees' awareness of these attacks remains low. We examined the psychological principles of social engineering and identified which psychological techniques induce resistance to persuasion applicable to social engineering. The techniques examined are the enhancement of persuasion knowledge, attitude bolstering, and influencing decision making. While research exists elaborating on security awareness, the integration of resistance against persuasion has not been done. Therefore, we analysed current defence mechanisms and provide a gap analysis based on research in social psychology. Based on our findings we provide guidelines on how to improve social engineering defence mechanisms such as security awareness programs."} {"_id": "80fd671b8f887d4e0333d59366b361ff98ecca1f", "title": "An innovative oscillometric blood pressure measurement: Getting rid of the traditional envelope", "text": "Oscillometric blood pressure devices are popular and are considered part of the family's medical chest at home. The popularity of these devices for private use is not shared by physicians, mainly due to the fact that the blood pressures are computed instead of measured. The classical way to compute the systolic and diastolic blood pressures is based on the envelope of the oscillometric waveform. The algorithm to compute the blood pressures from the waveform is firm-dependent, often patented, and lacks scientific foundation. In this paper, we propose a totally new approach. Instead of the envelope of the oscillometric waveform, we use a statistical test to pinpoint the time instances where the systolic and diastolic blood pressures are measured in the cuff. This technique has the advantage of being mathematically well-posed, instead of the ill-posed problem of envelope fitting. Hence, in order to calibrate the oscillometric blood pressure monitor, it is sufficient to make the statistical test unbiased."} {"_id": "0a64de85c3ae878907e4f0d4f09432d4cdd34eda", "title": "A new ultrawideband printed monopole antenna: the planar inverted cone antenna (PICA)", "text": "A new antenna, the planar inverted cone antenna (PICA), provides ultrawideband (UWB) performance with a radiation pattern similar to monopole disk antennas, but is smaller in size. Extensive simulations and experiments demonstrate that the PICA antenna provides more than a 10:1 impedance bandwidth (for VSWR<2) and supports a monopole-type omnidirectional pattern over a 4:1 bandwidth. A second version of the PICA with two circular holes changes the current flow on the metal disk and extends the high end of the operating frequency range, improving the pattern bandwidth to 7:1."} {"_id": "dcecd65af9fce077bc8de294ad806a6439692d2c", "title": "Towards Implicit Content-Introducing for Generative Short-Text Conversation Systems", "text": "The study of human-computer conversation systems is a hot research topic nowadays. One of the prevailing methods to build such systems is using the generative Sequence-to-Sequence (Seq2Seq) model through neural networks. 
However, the standard Seq2Seq model is prone to generating trivial responses. In this paper, we aim to generate a more meaningful and informative reply when answering a given question. We propose an implicit content-introducing method which incorporates additional information into the Seq2Seq model in a flexible way. Specifically, we fuse the general decoding and the auxiliary cue word information through our proposed hierarchical gated fusion unit. Experiments on real-life data demonstrate that our model consistently outperforms a set of competitive baselines in terms of BLEU scores and human evaluation."} {"_id": "112d7ba8127dd340ed300e3efcb2fccd345f231f", "title": "A 100 watts L-band power amplifier", "text": "In this paper, a 1100\u20131350 MHz power amplifier is introduced. For the realization of the RF power amplifier, an LDMOS transistor is used. The amplifier delivers 100 W to a 50 ohm load with 50% efficiency. The transducer power gain of the amplifier is 16 dB at 100 W output power. A mixed lumped-element and transmission-line matching circuit is used to implement the matching network. The simulations are performed using the nonlinear model of the transistor, which is provided by the manufacturer. Finally, the results of the simulations are compared with the measurements."} {"_id": "583db69db244b8ddda81367bf41bb8c4419ed248", "title": "RF power amplifiers for wireless communications", "text": "A wide variety of semiconductor devices are used in wireless power amplifiers. The RF performance and other attributes of cellphone RF power amplifiers using Si and GaAs based technologies are reviewed and compared."} {"_id": "128e1d5ef2fe666c6ae88ce5ffe97005a40b157b", "title": "A design methodology for the realization of multi-decade baluns at microwave frequencies", "text": "A new methodology is presented for designing baluns exhibiting multi-decade bandwidths at microwave frequencies. Simulations show that resistors terminating the outer transmission line suppress the half-wavelength resonance and greatly extend the bandwidth. Using linear measurements at microwave frequencies, ferrite beads have been shown to behave as resistors with a small reactance, suitable for terminating the outer transmission line. This paper shows that ferrite beads can perform the dual role of improving magnetic coupling at low frequency and suppressing resonance at higher frequencies. The design methodology was applied to produce a balun that operates between 30 MHz and 6 GHz, displays less than 1 dB of power loss up to 4.4 GHz, and delivers an impedance transformation of 2\u22361."} {"_id": "be25b709e788a86be9ef878e9b60f7a7527e0d24", "title": "100 W GaN HEMT power amplifier module with > 60% efficiency over 100\u20131000 MHz bandwidth", "text": "We have demonstrated a decade-bandwidth 100 W GaN HEMT power amplifier module with 15.5\u201318.6 dB gain, 104\u2013121 W CW output power, and 61.4\u201376.6% drain efficiency over the 100\u20131000 MHz band. The 2 \u00d7 2 inch compact power amplifier module combines four 30 W lossy matched broadband GaN HEMT PAs packaged in a ceramic SO8 package. Each of the 4 devices is fully matched to 50 \u2126 and obtains 30.8\u201335.7 W with 68.6\u201379.6% drain efficiency over the band. The packaged amplifiers contain a GaN on SiC device operating at a 48 V drain voltage, alongside GaAs integrated passive matching circuitry. The four devices are combined using a broadband low-loss coaxial balun. 
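For the cue-word Seq2Seq abstract above, the fusion idea can be isolated and shown on toy vectors. The sketch below is ours, with random weights and a deliberately simplified single gate rather than the paper's full hierarchical unit: a sigmoid gate interpolates, per dimension, between the decoder state and a transformed cue-word embedding.

```python
# Toy gated fusion of a decoder state and a cue-word embedding (our sketch).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(s, c, Wg, Wc):
    """g = sigmoid(Wg [s; c]); output = g * tanh(Wc c) + (1 - g) * s."""
    gate = sigmoid(Wg @ np.concatenate([s, c]))
    cue = np.tanh(Wc @ c)
    return gate * cue + (1.0 - gate) * s

rng = np.random.default_rng(0)
d = 8                                  # hidden size
s = rng.normal(size=d)                 # decoder hidden state at step t
c = rng.normal(size=d)                 # embedding of the predicted cue word
Wg = 0.1 * rng.normal(size=(d, 2 * d))
Wc = 0.1 * rng.normal(size=(d, d))
print(np.round(gated_fusion(s, c, Wg, Wc), 3))
```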
We believe this combination of output power, bandwidth, and efficiency is the best reported to date. These amplifiers are targeted for use in multi-band public mobile radios and for instrumentation applications."} {"_id": "f7363d2ac142bcb66cdb481a70e7c9c5c60771dd", "title": "10W ultra-broadband power amplifier", "text": "We report the design and performance of an ultra-broadband power amplifier. It achieves 10 Watts output power with 21 dB \u00b1 1.5 dB gain from 20 MHz to 3000 MHz. At lower frequencies, from 20 to 1000 MHz, the output power is 15 Watts with 22% efficiency. To achieve this performance, we employ a new design concept that controls the device impedance and the power combiner impedance to be naturally 50 Ohms, such that no impedance matching is needed. Also, we developed a broadband microwave balun as a push-pull power combiner, which doubles as an impedance transformer. We believe the combination of output power, bandwidth, and efficiency is the best reported to date."} {"_id": "3f3a5303331a2e363ba930688daa113244fa11f0", "title": "Full-Body Compliant Human\u2013Humanoid Interaction: Balancing in the Presence of Unknown External Forces", "text": "This paper proposes an effective framework for human-humanoid robot physical interaction. Its key component is a new control technique for full-body balancing in the presence of external forces, which is presented and then validated empirically. We have adopted an integrated system approach to develop humanoid robots. Herein, we describe the importance of replicating human-like capabilities and responses during human-robot interaction in this context. Our balancing controller provides gravity compensation, making the robot passive and thereby facilitating safe physical interactions. The method operates by setting appropriate ground reaction forces and transforming them into full-body joint torques. It handles an arbitrary number of force interaction points on the robot. It does not require force measurement at the contact points of interest. It requires neither inverse kinematics nor inverse dynamics. It can adapt to uneven ground surfaces. It operates as a force control process, and can therefore accommodate simultaneous control processes using force-, velocity-, or position-based control. Forces are distributed over supporting contact points in an optimal manner. Joint redundancy is resolved by damping injection in the context of passivity. We present various force interaction experiments using our full-sized bipedal humanoid platform, including compliant balancing even when affected by unknown external forces, which demonstrates the effectiveness of the method."} {"_id": "5647734e0a7086e7b993cf711f71b24480c2b696", "title": "Neuropsychology and clinical neuroscience of persistent post-concussive syndrome.", "text": "On the mild end of the acquired brain injury spectrum, the terms concussion and mild traumatic brain injury (mTBI) have been used interchangeably, where persistent post-concussive syndrome (PPCS) has been the label given when symptoms persist for more than three months post-concussion. Whereas a brief history of concussion research is overviewed, the focus of this review is on the current status of PPCS as a clinical entity from the perspective of recent advances in the biomechanical modeling of concussion in human and animal studies, particularly directed at a better understanding of the neuropathology associated with concussion. 
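The force-to-torque transformation mentioned in the balancing abstract above is conventionally realized through the contact Jacobian transpose. A toy planar two-link sketch (ours, not the paper's whole-body controller): desired contact forces map to joint torques via tau = J(q)^T f, with gravity-compensation torques added.

```python
# Toy 2-link arm: contact force to joint torques plus gravity compensation.

import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """End-effector Jacobian of a planar 2-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def gravity_torque(q, m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    """Torques canceling gravity for point masses at the link ends."""
    t2 = m2 * g * l2 * np.cos(q[0] + q[1])
    t1 = (m1 + m2) * g * l1 * np.cos(q[0]) + t2
    return np.array([t1, t2])

q = np.array([0.4, 0.6])             # joint angles (rad)
f_desired = np.array([0.0, 20.0])    # desired contact force (N), pushing up
tau = jacobian_2link(q).T @ f_desired + gravity_torque(q)
print(np.round(tau, 2))
```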
These studies implicate common regions of injury, including the upper brainstem, base of the frontal lobe, hypothalamic-pituitary axis, medial temporal lobe, fornix, and corpus callosum. Limitations of current neuropsychological techniques for the clinical assessment of memory and executive function are explored, and recommendations for improved research designs are offered that may enhance the study of long-term neuropsychological sequelae of concussion."} {"_id": "8a8b5e450ee6e39bbc865c7619e29047640db05b", "title": "Computer-aided generation of multiple-choice tests", "text": "Summary form only given. The paper describes a novel automatic procedure for the generation of multiple-choice tests from electronic documents. In addition to employing various NLP techniques, including term extraction and shallow parsing, the system makes use of language resources such as corpora and ontologies. The system operates in a fully automatic mode and also in a semiautomatic environment where the user is offered the option to post-edit the generated test items. The results from the conducted evaluation suggest that the new procedure is very effective, saving time and labour considerably, and that the test items produced with the help of the program are not of inferior quality to those produced manually."} {"_id": "53d39691197a20f5f1fc2fb2d30d8a73032f9e49", "title": "Model-Driven Reverse Engineering Approaches: A Systematic Literature Review", "text": "This paper explores and describes the state of the art concerning the model-driven approaches proposed in the literature to support reverse engineering. We conducted a systematic literature review on this topic with the aim of answering three research questions. We focus on various solutions developed for model-driven reverse engineering, outlining in particular the models they use and the transformations applied to the models. We also consider the tools used for model definition, extraction, and transformation and the level of automation reached by the available tools. The model-driven reverse engineering approaches are also analyzed based on various features such as genericity, extensibility, automation of the reverse engineering process, and coverage of the full or partial source artifacts. We describe in detail and compare fifteen approaches applying model-driven reverse engineering. Based on this analysis, we identify and indicate some hints on choosing a model-driven reverse engineering approach from the available ones, and we outline open issues concerning the model-driven reverse engineering approaches."} {"_id": "c5212a90571a887f5841ce7aea0ba3fa26237ba2", "title": "Evaluation of Three Methods for MRI Brain Tumor Segmentation", "text": "Imaging plays a central role in the diagnosis and treatment planning of brain tumors. Accurate segmentation is critical, especially when the tumor's morphological changes remain subtle, irregular, and difficult to assess by clinical examination. Traditionally, segmentation is performed manually in the clinical environment; this is operator-dependent and very tedious, time-consuming, labor-intensive work. However, automated tumor segmentation in brain MRI poses many challenges with regard to the characteristics of the images. A comparison of three different semi-automated methods, viz., the modified gradient magnitude region growing technique (MGRRGT), level set, and a marker-controlled watershed method, is undertaken here for evaluating their relative performance in the segmentation of tumors. 
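Of the three families compared in the MRI abstract above, region growing is the simplest to demonstrate. The snippet below is a bare-bones intensity-based grower on a synthetic image (our illustration; MGRRGT itself grows regions on the gradient magnitude with further modifications).

```python
# Bare-bones intensity-based region growing on a synthetic image (our sketch).

import numpy as np
from collections import deque

def region_grow(image, seed, tol=20):
    """Flood from `seed`, accepting 4-neighbors within `tol` of the seed value."""
    h, w = image.shape
    seed_val = float(image[seed])
    grown = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown

img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200                       # bright synthetic "tumor"
mask = region_grow(img, seed=(30, 30))
print("segmented pixels:", int(mask.sum()))   # 400
```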
A study on 9 samples using MGRRGT reveals that all the errors are within 6 to 23% in comparison to the other two methods."} {"_id": "864ab6ebe254eeada6fee110da35a2bebcf9fbb3", "title": "Ways to React: Comparing Reactive Languages and Complex Event Processing", "text": "Reactive applications demand the detection of changes that occur in a domain of interest and timely reactions to them. Examples range from simple interactive applications to complex monitoring tasks involving distributed and heterogeneous systems. Over the last years, different programming paradigms and solutions have been proposed to support such applications. In this paper, we focus on two prominent approaches: event-based programming, specifically Complex Event Processing (CEP), and Reactive Languages (RLs). CEP systems enable the definition of high-level situations of interest from low-level primitive events detected in the external environment. On the other hand, RLs support time-changing values and their composition as dedicated language abstractions. These research fields have been investigated by different communities, belonging respectively to the database and distributed systems areas and to the programming language area. It is our belief that a deeper understanding of these research fields, including their benefits and limitations, their similarities and differences, could drive further developments in supporting reactive applications. For this reason, we propose a first comparison of the two fields. Despite huge differences, we believe that such a comparison can trigger an interesting discussion across the communities, favor knowledge sharing, and let new ideas emerge."} {"_id": "af7fe675a8bff39ea6b575860559b10a759cc1fe", "title": "Frontal fibrosing alopecia \u2013 A Case report * Alopecia frontal fibrosante-Relato de caso", "text": "Frontal fibrosing alopecia is a kind of progressive and frequently irreversible cicatricial alopecia marked by a lichenoid infiltrate on histology. Since its first description in Australia in 1994, a number of cases have been documented all over the world. This article reports, for the second time in the medical literature, a Brazilian case and reviews the main aspects of this dermatosis."} {"_id": "6047e9af00dcffbd2effbfa600735eb111f7de65", "title": "A Discriminative Representation of Convolutional Features for Indoor Scene Recognition", "text": "Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities that characterize such scenes. This paper presents a novel approach that exploits rich mid-level convolutional features to categorize indoor scenes. Traditional convolutional features retain the global spatial structure, which is a desirable property for general object recognition. We, however, argue that the structure-preserving property of convolutional neural network activations is not of substantial help in the presence of large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target data set but also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale data set of 1300 object categories that are commonly present in indoor scenes. 
Our proposed approach achieves a significant performance boost over the previous state-of-the-art approaches on five major scene classification data sets."} {"_id": "a5570096c7d3954d029b2ce6cd90ca9137c43191", "title": "An Artificial Light Source Influences Mating and Oviposition of Black Soldier Flies, Hermetia illucens", "text": "Current methods for mass-rearing black soldier flies, Hermetia illucens (L.) (Diptera: Stratiomyidae), in the laboratory are dependent on sunlight. Quartz-iodine lamps and rare earth lamps were examined as artificial light sources for stimulating H. illucens to mate and lay eggs. Sunlight was used as the control. Adults in the quartz-iodine lamp treatment had a mating rate of 61% of those in the sunlight control. No mating occurred when the rare earth lamp was used as a substitute. Egg hatch for the quartz-iodine lamp and sunlight treatments occurred in approximately 4 days, and the hatch rate was similar between these two treatments. Larval and pupal development under these treatments required approximately 18 and 15 days at 28\u00b0C, respectively. Development of methods for mass rearing of H. illucens using artificial light will enable production of this fly throughout the year without investing in greenhouse space or requiring sunlight."} {"_id": "614399e28cbc5ec2b3540de8a7acfb9c46c0de5e", "title": "Heart rate variability reflects self-regulatory strength, effort, and fatigue.", "text": "Experimental research reliably demonstrates that self-regulatory deficits are a consequence of prior self-regulatory effort. However, in naturalistic settings, although people know that they are sometimes vulnerable to saying, eating, or doing the wrong thing, they cannot accurately gauge their capacity to self-regulate at any given time. Because self-regulation and autonomic regulation colocalize in the brain, an autonomic measure, heart rate variability (HRV), could provide an index of self-regulatory strength and activity. During an experimental manipulation of self-regulation (eating carrots or cookies), HRV was elevated during high self-regulatory effort (eat carrots, resist cookies) compared with low self-regulatory effort (eat cookies, resist carrots). The experimental manipulation and higher HRV at baseline independently predicted persistence at a subsequent anagram task. HRV appears to index self-regulatory strength and effort, making it possible to study these phenomena in the field as well as the lab."} {"_id": "d3e1081e173cec3ee08cb123506c4402267345c8", "title": "Predicting Neural Activity Patterns Associated with Sentences Using a Neurobiologically Motivated Model of Semantic Representation.", "text": "We introduce an approach that predicts neural representations of word meanings contained in sentences and then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression.
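The decomposition-and-recombination pipeline described in the sentence-reading fMRI abstract above lends itself to a worked example. The following numpy sketch is an illustration under assumed shapes (240 sentences, 18 attributes, 5000 voxels) and an assumed attribute encoding; it is not the authors' code.

```python
import numpy as np

# Hypothetical shapes: 240 sentences, 18 semantic attributes, 5000 voxels.
# X[i, j] = weight of attribute j summed over the words of sentence i;
# Y[i, v] = fMRI activation of voxel v while reading sentence i.
rng = np.random.default_rng(0)
X = rng.random((240, 18))
Y = rng.standard_normal((240, 5000))

# Multiple regression: estimate one activation map per semantic attribute.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # B: (18, 5000) attribute maps

def predict_sentence(word_attr_vectors):
    """Synthesize a map for each word (attributes @ B), then average
    the word maps to predict the sentence-level activation pattern."""
    word_maps = [w @ B for w in word_attr_vectors]
    return np.mean(word_maps, axis=0)

# Usage: predict a new two-word "sentence" from its attribute vectors.
pred = predict_sentence([rng.random(18), rng.random(18)])
print(pred.shape)  # (5000,)
```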
This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences."} {"_id": "e997dcda4a10be86f520e88d6ca845dc5346b14f", "title": "Dynamic Action Repetition for Deep Reinforcement Learning", "text": "One of the long-standing goals of Artificial Intelligence (AI) is to build cognitive agents which can perform complex tasks from raw sensory inputs without explicit supervision (Lake et al. 2016). Recent progress in combining Reinforcement Learning objective functions and Deep Learning architectures has achieved promising results for such tasks. An important aspect of such sequential decision-making problems, which has largely been neglected, is for the agent to decide on the duration of time for which to commit to actions. Such action repetition is important for computational efficiency, which is necessary for the agent to respond in real-time to events (in applications such as self-driving cars). Action Repetition arises naturally in real life as well as simulated environments. The time scale of executing an action enables an agent (both humans and AI) to decide the granularity of control during task execution. Current state-of-the-art Deep Reinforcement Learning models, whether they are off-policy (Mnih et al. 2015; Wang et al. 2015) or on-policy (Mnih et al. 2016), consist of a framework with a static action repetition paradigm, wherein the action decided by the agent is repeated for a fixed number of time steps regardless of the contextual state while executing the task. In this paper, we propose a new framework, Dynamic Action Repetition, which changes the Action Repetition Rate (the time scale of repeating an action) from a hyper-parameter of an algorithm to a dynamically learnable quantity. At every decision-making step, our models allow the agent to commit to an action and the time scale of executing the action. We show empirically that such a dynamic time scale mechanism improves the performance on relatively harder games in the Atari 2600 domain, independent of the underlying Deep Reinforcement Learning algorithm used."} {"_id": "e56e27462286f39d518677bf3c930c9b21714778", "title": "Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning", "text": "Detection and rejection of adversarial examples in security-sensitive and safety-critical systems using deep CNNs is essential. In this paper, we propose an approach to augment CNNs with out-distribution learning in order to reduce the misclassification rate by rejecting adversarial examples. We empirically show that our augmented CNNs can either reject or classify correctly most adversarial examples generated using well-known methods (> 95% for MNIST and > 75% for CIFAR-10 on average).
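The shift from a fixed to a learnable action-repetition rate in the Dynamic Action Repetition abstract above can be sketched as an augmented action space. The wrapper below is a hedged illustration: the environment interface (`n_actions`, `step`) and the repetition set are assumptions, not the paper's implementation.

```python
# Minimal sketch: a wrapper that turns a base action space of size n_actions
# into n_actions * len(repeat_rates) choices, so the policy selects the
# primitive action and its time scale jointly.
class ActionRepeatWrapper:
    def __init__(self, env, repeat_rates=(1, 4)):   # illustrative rates
        self.env = env                               # assumed: .n_actions, .step()
        self.repeat_rates = repeat_rates
        self.n_actions = env.n_actions * len(repeat_rates)

    def step(self, extended_action):
        # Decode the joint choice into (primitive action, repetition count).
        action = extended_action % self.env.n_actions
        repeat = self.repeat_rates[extended_action // self.env.n_actions]
        total_reward, done = 0.0, False
        for _ in range(repeat):                      # commit for `repeat` steps
            obs, reward, done = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done
```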
Furthermore, we achieve this without requiring training on any specific type of adversarial examples and without significantly sacrificing the accuracy of the models on clean samples (< 4%)."} {"_id": "ee4ae9d87438e4b39eccd2ff3b509a489587c2b2", "title": "Integrated microstrip and rectangular waveguide in planar form", "text": "Usually, transitions from microstrip line to rectangular waveguide are made with complex three-dimensional mounting structures. In this paper, a new planar platform is developed in which the microstrip line and rectangular waveguide are fully integrated on the same substrate, and they are interconnected via a simple taper. Our experiments at 28 GHz show that an effective bandwidth of 12% at 20 dB return loss is obtained with an in-band insertion loss better than 0.3 dB. The new transition allows a complete integration of waveguide components on substrate with MICs and MMICs."} {"_id": "b6ba896ba86b8853fdbe02787fef9ade745a2cfe", "title": "Teaching Computer Organization and Architecture Using Simulation and FPGA Applications", "text": "This paper presents the design concepts and realization of incorporating micro-operation simulation and FPGA implementation into a teaching tool for computer organization and architecture. This teaching tool helps computer engineering and computer science students gain practical familiarity with computer organization and architecture through the development of their own instruction sets, computer programming, and interfacing experiments. A two-pass assembler has been designed and implemented to write assembly programs in this teaching tool. In addition to the micro-operation simulation, the complete configuration can be run on a Xilinx Spartan-3 FPGA board. Such an implementation offers good code density, easy customization, easily developed software, small area, and high performance at low cost."} {"_id": "9f2f648e544a3f60b00bc68dfcbdcf08eb851d63", "title": "A versatile wavelet domain noise filtration technique for medical imaging", "text": "We propose a robust wavelet domain method for noise filtering in medical images. The proposed method adapts itself to various types of image noise as well as to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The algorithm exploits generally valid knowledge about the correlation of significant image features across the resolution scales to perform a preliminary coefficient classification. This preliminary coefficient classification is used to empirically estimate the statistical distributions of the coefficients that represent useful image features on the one hand and mainly noise on the other. The adaptation to the spatial context in the image is achieved by using a wavelet domain indicator of the local spatial activity. The proposed method is of low complexity, both in its implementation and execution time. The results demonstrate its usefulness for noise suppression in medical ultrasound and magnetic resonance imaging.
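As a point of reference for the wavelet-domain filtering record above, here is a minimal PyWavelets sketch of plain soft-threshold shrinkage. The threshold rule and wavelet choice are generic assumptions; the paper's method replaces this baseline with a feature-versus-noise coefficient classification and a spatial-activity indicator.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db2", level=3, k=3.0):
    """Generic soft-threshold shrinkage in the wavelet domain; the
    k * sigma rule is a common baseline, not the paper's adaptive rule."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Estimate the noise sigma from the finest diagonal subband (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = k * sigma
    denoised = [coeffs[0]]  # keep the approximation subband untouched
    for detail_level in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thr, mode="soft")
                              for d in detail_level))
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.standard_normal((64, 64))
print(wavelet_denoise(noisy).shape)  # (64, 64)
```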
In these applications, the proposed method clearly outperforms single-resolution spatially adaptive algorithms in terms of quantitative performance measures as well as visual quality of the images."} {"_id": "023023c547b8b18d3aa8b50a3ad42e01c97e83f8", "title": "Estimating the class prior and posterior from noisy positives and unlabeled data", "text": "We develop a classification algorithm for estimating posterior distributions from positive-unlabeled data that is robust to noise in the positive labels and effective for high-dimensional data. In recent years, several algorithms have been proposed to learn from positive-unlabeled data; however, many of these contributions remain theoretical, performing poorly on real high-dimensional data that is typically contaminated with noise. We build on this previous work to develop two practical classification algorithms that explicitly model the noise in the positive labels and utilize univariate transforms built on discriminative classifiers. We prove that these univariate transforms preserve the class prior, enabling estimation in the univariate space and avoiding kernel density estimation for high-dimensional data. The theoretical development and parametric and nonparametric algorithms proposed here constitute an important step towards widespread use of robust classification algorithms for positive-unlabeled data."} {"_id": "e3640b779d2ff35aabfb0a529089b1df5457275f", "title": "Charging algorithms of lithium-ion batteries: An overview", "text": "This paper presents an overview of charging algorithms for lithium-ion batteries, which include constant current-constant voltage (CC/CV), variants of the CC/CV, multistage constant current, pulse current and pulse voltage. The CC/CV charging algorithm is well developed and widely adopted in charging lithium-ion batteries. It is used as a benchmark to compare with other charging algorithms in terms of the charging time, the charging efficiency, the influences on battery life and other aspects, which can serve as a convenient reference for future work in developing chargers for lithium-ion batteries."} {"_id": "322293bb0bbd47349c5fd605dce5c63f03efb6a8", "title": "Weighted PageRank algorithm", "text": "With the rapid growth of the Web, users easily get lost in the rich hyper structure. Providing the relevant information to users to cater to their needs is the primary goal of Website owners. Therefore, finding the content of the Web and retrieving the users' interests and needs from their behavior have become increasingly important. Web mining is used to categorize users and pages by analyzing user behavior, the content of the pages, and the order of the URLs that tend to be accessed. Web structure mining plays an important role in this approach. Two page ranking algorithms, HITS and PageRank, are commonly used in Web structure mining. Both algorithms treat all links equally when distributing rank scores. Several algorithms have been developed to improve the performance of these methods. The weighted PageRank algorithm (WPR), an extension to the standard PageRank algorithm, is introduced. WPR takes into account the importance of both the inlinks and the outlinks of the pages and distributes rank scores based on the popularity of the pages.
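A hedged sketch of the weighted PageRank idea in the record above: each inlink contribution is scaled by inlink- and outlink-popularity weights. The weight definitions used here (a page's inlink and outlink counts normalized over the referencing page's outgoing set) follow the commonly cited WPR formulation and are assumptions insofar as the abstract does not spell them out.

```python
def weighted_pagerank(links, d=0.85, iters=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = set(links) | {u for vs in links.values() for u in vs}
    inlinks = {p: [v for v in links if p in links[v]] for p in pages}
    n_in = {p: len(inlinks[p]) for p in pages}          # inlink counts
    n_out = {p: len(links.get(p, [])) for p in pages}   # outlink counts
    pr = {p: 1.0 for p in pages}
    for _ in range(iters):
        new = {}
        for u in pages:
            s = 0.0
            for v in inlinks[u]:
                refs = links[v]  # pages referenced by the linking page v
                # Popularity weights of u among v's reference set.
                w_in = n_in[u] / max(sum(n_in[p] for p in refs), 1)
                w_out = n_out[u] / max(sum(n_out[p] for p in refs), 1)
                s += pr[v] * w_in * w_out
            new[u] = (1 - d) + d * s
        pr = new
    return pr

print(weighted_pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```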
The results of our simulation studies show that WPR performs better than the conventional PageRank algorithm in terms of returning a larger number of relevant pages to a given query."} {"_id": "6fd329a1e7f513745e5fc462f146aa80c6090a1d", "title": "Signal-Quality Indices for the Electrocardiogram and Photoplethysmogram: Derivation and Applications to Wireless Monitoring", "text": "The identification of invalid data in recordings obtained using wearable sensors is of particular importance since data obtained from mobile patients is, in general, noisier than data obtained from nonmobile patients. In this paper, we present a signal quality index (SQI), which is intended to assess whether reliable heart rates (HRs) can be obtained from electrocardiogram (ECG) and photoplethysmogram (PPG) signals collected using wearable sensors. The algorithms were validated on manually labeled data. Sensitivities and specificities of 94% and 97% were achieved for the ECG and 91% and 95% for the PPG. Additionally, we propose two applications of the SQI. First, we demonstrate that, by using the SQI as a trigger for a power-saving strategy, it is possible to reduce the recording time by up to 94% for the ECG and 93% for the PPG with only minimal loss of valid vital-sign data. Second, we demonstrate how an SQI can be used to reduce the error in the estimation of respiratory rate (RR) from the PPG. The performance of the two applications was assessed on data collected from a clinical study on hospital patients who were able to walk unassisted."} {"_id": "9c43b59177cb5539ea649c188387fe374663bbb1", "title": "Learning Discriminative Latent Attributes for Zero-Shot Classification", "text": "Zero-shot learning (ZSL) aims to transfer knowledge from observed classes to the unseen classes, based on the assumption that both the seen and unseen classes share a common semantic space, among which attributes enjoy great popularity. However, few works study whether the human-designed semantic attributes are discriminative enough to recognize different classes. Moreover, attributes are often correlated with each other, which makes it less desirable to learn each attribute independently. In this paper, we propose to learn a latent attribute space, which is not only discriminative but also semantic-preserving, to perform the ZSL task. Specifically, a dictionary learning framework is exploited to connect the latent attribute space with the attribute space and similarity space. Extensive experiments on four benchmark datasets show the effectiveness of the proposed approach."} {"_id": "ef8070a37fb6f0959acfcee9d40f0b3cb912ba9f", "title": "Epistemological perspectives on IS research: a framework for analysing and systematizing epistemological assumptions", "text": "Over the last three decades, a methodological pluralism has developed within information systems (IS) research. Various disciplines, as well as many research communities, contribute to this discussion. However, working on the same research topic or studying the same phenomenon does not necessarily ensure mutual understanding. Especially within this multidisciplinary and international context, the epistemological assumptions made by different researchers may vary fundamentally. These assumptions exert a substantial impact on how concepts like validity, reliability, quality and rigour of research are understood. Thus, the extensive publication of epistemological assumptions is, in effect, almost mandatory.
Hence, the aim of this paper is to develop an epistemological framework which can be used for systematically analysing the epistemological assumptions in IS research. Rather than attempting to identify and classify IS research paradigms, this research aims at a comprehensive discussion of epistemology within the context of IS. It seeks to contribute to building the basis for identifying similarities as well as differences between distinct IS approaches and methods. In order to demonstrate the epistemological framework, the consensus-oriented interpretivist approach to conceptual modelling is used as an example."} {"_id": "25e989b45de04c6086364b376d29ec11008360a3", "title": "Learning physical parameters from dynamic scenes", "text": "Humans acquire their most basic physical concepts early in development, and continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical parameters at multiple levels. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model to human learners on a challenging task of estimating multiple physical parameters in novel microworlds given short movies. This task requires people to reason simultaneously about multiple interacting physical laws and properties. People are generally able to learn in this setting and are consistent in their judgments. Yet they also make systematic errors indicative of the approximations people might make in solving this computationally demanding problem with limited computational resources. We propose two approximations that complement the top-down Bayesian approach. One approximation model relies on a more bottom-up feature-based inference scheme. The second approximation combines the strengths of the bottom-up and top-down approaches, by taking the feature-based inference as its point of departure for a search in physical-parameter space."} {"_id": "215fae37282267974112a5a757a42382211360fa", "title": "Architectures for Natural Language Generation: Problems and Perspectives", "text": "Current research in natural language generation is situated in a computational linguistics tradition that was founded several decades ago. We critically analyse some of the architectural assumptions underlying existing systems and point out some problems in the domains of text planning and lexicalization. Guided by the identification of major generation challenges viewed from the angles of knowledge-based systems and cognitive psychology, we sketch some new directions for future research."} {"_id": "078b55e2f4899cf95a4c8d65613c340fa190acf8", "title": "The Second Dialog State Tracking Challenge", "text": "A spoken dialog system, while communicating with a user, must keep track of what the user wants from the system at each step. This process, termed dialog state tracking, is essential for a successful dialog system as it directly informs the system\u2019s actions. The first Dialog State Tracking Challenge allowed for evaluation of different dialog state tracking techniques, providing common testbeds and evaluation suites. 
This paper presents a second challenge, which continues this tradition and introduces some additional features – a new domain, changing user goals and a richer dialog state. The challenge received 31 entries from 9 research groups. The results suggest that while large improvements on a competitive baseline are possible, trackers are still prone to degradation in mismatched conditions. An investigation into ensemble learning demonstrates the most accurate tracking can be achieved by combining multiple trackers."} {"_id": "0b0cf7e00e7532e38238a9164f0a8db2574be2ea", "title": "Attention Is All You Need", "text": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.2 after training for 4.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."} {"_id": "5061f591aa8ff224cd20cdcb3b62d156fb187bed", "title": "One Model To Learn Them All", "text": "Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all."} {"_id": "523d3642853ddfb734614f5c6b5941e688d69bb1", "title": "Incremental LSTM-based dialog state tracker", "text": "A dialog state tracker is an important component in modern spoken dialog systems. We present an incremental dialog state tracker, based on LSTM networks. It directly uses automatic speech recognition hypotheses to track the state.
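The attention mechanism at the core of the Transformer abstract above reduces to a single well-known formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; a minimal numpy sketch follows (shapes are illustrative).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 4 query positions attending over 6 key/value positions.
rng = np.random.default_rng(1)
Q, K, V = rng.random((4, 8)), rng.random((6, 8)), rng.random((6, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 16)
```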
We also present the key non-standard aspects of the model that bring its performance close to the state-of-the-art, and experimentally analyze their contribution: the inclusion of ASR confidence scores, the abstraction of scarcely represented values, the inclusion of transcriptions in the training data, and model averaging."} {"_id": "1b1b36bcbbe3df2bd2b8c4a06f2b970e9b280355", "title": "Application of neural networks to inverse lens distortion modelling", "text": "The accurate and quick modelling of inverse lens distortion to rectify images or predict the image coordinates of real-world objects has long been a challenge. This work investigates the use of artificial neural networks to perform this modelling. Several architectures are investigated and multiple methods of training the networks are used to achieve the best results. The error is expressed as a physical displacement on the imaging chip so that a fair comparison can be made with other published results, which are based on cameras with different resolutions. It is shown that the novel application of locally optimised neural networks to residual lens calibration data yields inverse distortion modelling that is highly competitive with previously published results."} {"_id": "9b4ce2784b9ddc87d1ce669e1f524a58975cc39e", "title": "Quantitative function for community detection.", "text": "We propose a quantitative function for community partition: the modularity density, or D value. We demonstrate that this quantitative function is superior to the widely used modularity Q and also prove its equivalence with the objective function of kernel k-means. Both theoretical and numerical results show that optimizing the new criterion can not only resolve detailed modules that existing approaches cannot, but also correctly identify the number of communities."} {"_id": "427edff08dadb04913ad67c2831089e248eff059", "title": "Learning Rapid-Temporal Adaptations", "text": "A hallmark of human intelligence and cognition is its flexibility. One of the longstanding goals in AI research is to replicate this flexibility in a learning machine. In this work we describe a mechanism by which artificial neural networks can learn rapid-temporal adaptation \u2013 the ability to adapt quickly to new environments or tasks \u2013 which we call adaptive neurons. Adaptive neurons modify their activations with task-specific values retrieved from a working memory. On standard metalearning and few-shot learning benchmarks in both vision and language domains, models augmented with adaptive neurons achieve state-of-the-art results."} {"_id": "d10a558b9caa8bd41c0112434f3b19eb8aab2b5c", "title": "Exemplary Automotive Attack Scenarios: Trojan Horses for Electronic Throttle Control System (ETC) and Replay Attacks on the Power Window System", "text": "The consideration of targeted security attacks is not yet common in the automotive domain, where primarily safety requirements are considered. Hence this survey addresses the relatively new field of IT-security in automotive technology in order to motivate further research. With the emergence of automotive technologies like car-to-car (c2c) communication, the challenges increase. In order to show the necessity of such research, we describe how, in theory, a potential attack on the Electronic Throttle Control (ETC) could succeed if no countermeasures are taken, with potentially drastic consequences for the safety of the vehicle, its occupants, and its environment.
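For the community-detection record above, a small sketch of the D value, assuming the usual modularity-density definition D = sum_i (L(V_i, V_i) - L(V_i, complement of V_i)) / |V_i|, where L(A, B) sums adjacency entries between node sets A and B. This definition is an assumption drawn from the standard modularity-density literature, not quoted from the abstract.

```python
def modularity_density(adj, communities):
    """Modularity density D under the assumed definition above.
    adj: dense adjacency matrix (list of lists); communities: node-index lists."""
    nodes = range(len(adj))

    def L(A, B):
        return sum(adj[i][j] for i in A for j in B)

    D = 0.0
    for Vi in communities:
        Vi_set = set(Vi)
        rest = [v for v in nodes if v not in Vi_set]
        D += (L(Vi, Vi) - L(Vi, rest)) / len(Vi)
    return D

# Two triangles joined by one edge: the natural two-community split.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(modularity_density(adj, [[0, 1, 2], [3, 4, 5]]))  # 10/3
```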
An attack on the power window system is also shown, using implementation techniques already employed in the design of automotive components. Our goal is to show that such exploited security vulnerabilities are likely to have serious consequences for the vehicle's safety. Therefore such attacks need to be considered and suitable countermeasures have to be identified. In order to achieve this, the connection between the two disciplines Safety and Security for automotive IT components needs to be discussed. We introduce two exemplary attacks, a Trojan horse targeting the Electronic Throttle Control System (ETC) and a replay attack on the electric window lift. The first attack is mainly discussed in theory; the second is elaborated in a practical simulation scenario. Finally we introduce a general investigation of potential ways of intrusion and present some first practical results from tests with current automotive hardware."} {"_id": "5b2607087d33744edddfa596bf31dbb9ba4ed84e", "title": "Real-time object detection, localization and verification for fast robotic depalletizing", "text": "Depalletizing is a challenging task for manipulation robots. Key to successful application is not only the robustness of the approach, but also achievable cycle times that keep up with the rest of the process. In this paper, we propose a system for depalletizing and a complete pipeline for detecting and localizing objects as well as verifying that the found object does not deviate from the known object model, e.g., when it is not the object to pick. In order to achieve high robustness (e.g., with respect to different lighting conditions) and generality with respect to the objects to pick, our approach is based on multi-resolution surfel models. All components (both software and hardware) allow operation at high frame rates and, thus, allow for low cycle times. In experiments, we demonstrate depalletizing of automotive and other prefabricated parts with both high reliability (w.r.t. success rates) and efficiency (w.r.t. low cycle times)."} {"_id": "8eae4d70e2a0246d4a48ec33bc66083f10fd1df5", "title": "Tagging Patient Notes With ICD-9 Codes", "text": "There is substantial growth in the amount of medical data being generated in hospitals. With an adoption rate of over 96% [1], Electronic Medical/Health Records are used to store most of this medical data. If harnessed correctly, this medium provides a very convenient platform for secondary data analysis of these records to improve medical and patient care. One crucial feature of the information stored in these systems is ICD-9 diagnosis codes, which are used for billing purposes and integration with other databases. These codes are assigned to medical text and require expert annotators with experience and training. In this paper we formulate this problem as a multi-label classification problem and propose a deep learning framework to classify the ICD-9 codes a patient is assigned at the end of a visit. We demonstrate that a simple LSTM model with a single layer of non-linearity can learn to classify patient notes with their corresponding ICD-9 labels moderately well."} {"_id": "7a4741355af401a9b65297960593c645fde364a4", "title": "Visualization of patent analysis for emerging technology", "text": "Many methods have been developed to recognize the progress of technologies, and one of them is to analyze patent information. Visualization methods are considered well suited for representing patent information and its analysis results.
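The ICD-9 tagging record above describes a multi-label formulation with a simple LSTM and one layer of non-linearity; the PyTorch sketch below is a hedged rendering of that model family, with all sizes, names, and the tokenization assumed for illustration.

```python
import torch
import torch.nn as nn

class NoteTagger(nn.Module):
    """LSTM over note tokens, one non-linear layer, and independent
    sigmoids (via BCEWithLogitsLoss) for multi-label ICD-9 prediction."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden=128, n_codes=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, n_codes))

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        _, (h, _) = self.lstm(self.embed(tokens))
        return self.head(h[-1])                # logits: (batch, n_codes)

model = NoteTagger()
tokens = torch.randint(0, 5000, (8, 120))      # a toy batch of 8 notes
labels = torch.randint(0, 2, (8, 50)).float()  # multi-hot ICD-9 targets
loss = nn.BCEWithLogitsLoss()(model(tokens), labels)
loss.backward()
```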
However, current visualization methods for patent analysis (patent maps) have some drawbacks. Therefore, we propose an alternative visualization method in this paper. With keywords collected from patent documents of a target technology field, we cluster the patent documents with the k-Means algorithm. With the clustering results, we form a semantic network of keywords without regard to filing dates. We then build up a patent map by rearranging each keyword node of the semantic network according to its earliest filing date and frequency in patent documents. Our approach contributes to establishing a patent map which considers both structured and unstructured items of a patent document. Moreover, unlike previous visualization methods for patent analysis, ours is based on forming a semantic network of keywords from patent documents, and it thereby visualizes a clear overview of patent information in a more comprehensible way. As a result, it enables us to understand advances in emerging technologies and to forecast their future trends."} {"_id": "6f0144dc7ba19123ddce8cdd4ad0f6dc36dd4ef2", "title": "Perceptions of Sex, Gender, and Puberty Suppression: A Qualitative Analysis of Transgender Youth", "text": "International guidelines recommend the use of Gonadotropin-Releasing Hormone (GnRH) agonists in adolescents with gender dysphoria (GD) to suppress puberty. Little is known about the way gender dysphoric adolescents themselves think about this early medical intervention. The purpose of the present study was (1) to explicate the considerations of gender dysphoric adolescents in the Netherlands concerning the use of puberty suppression; (2) to explore whether the considerations of gender dysphoric adolescents differ from those of professionals working in treatment teams, and if so in what sense. This was a qualitative study designed to identify considerations of gender dysphoric adolescents regarding early treatment. All 13 adolescents, except for one, were treated with puberty suppression; five adolescents were trans girls and eight were trans boys. Their ages ranged between 13 and 18\u00a0years, with an average age of 16\u00a0years and 11\u00a0months, and a median age of 17\u00a0years and 4\u00a0months. Subsequently, the considerations of the adolescents were compared with views of clinicians treating youth with GD. From the interviews with the gender dysphoric adolescents, three themes emerged: (1) the difficulty of determining what is an appropriate lower age limit for starting puberty suppression. Most adolescents found it difficult to define an appropriate age limit and saw it as a dilemma; (2) the lack of data on the long-term effects of puberty suppression. Most adolescents stated that the lack of long-term data did not and would not stop them from wanting puberty suppression; (3) the role of the social context, for which there were two subthemes: (a) increased media-attention, on television, and on the Internet; (b) an imposed stereotype. Some adolescents were positive about the role of the social context, but others raised doubts about it. Compared to clinicians, adolescents were often more cautious in their treatment views. It is important to give voice to gender dysphoric adolescents when discussing the use of puberty suppression in GD. Otherwise, professionals might act based on assumptions about adolescents' opinions instead of their actual considerations.
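The first stage of the patent-map pipeline above (keyword collection plus k-Means clustering of patent documents) can be sketched with scikit-learn; the toy corpus, TF-IDF representation, and cluster count are assumptions, and the date-based rearrangement of keyword nodes would follow as a separate step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy patent "documents" standing in for abstracts of a target field.
patents = [
    "solar cell electrode deposition method",
    "thin film solar module encapsulation",
    "lithium battery anode coating process",
    "battery electrolyte additive composition",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(patents)                      # keyword vectors
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for c in range(2):
    # Top keywords per cluster, read off the cluster centroid weights;
    # these would become the nodes of the semantic network.
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(c, [terms[i] for i in top])
```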
We encourage gathering more qualitative research data from gender dysphoric adolescents in other countries."} {"_id": "9742cfb5015952ba4fd12b598d77a5f2b8b4463b", "title": "Good genes, complementary genes and human mate preferences", "text": "The past decade has witnessed a rapidly growing interest in the biological basis of human mate choice. Here we review recent studies that demonstrate preferences for traits which might reveal genetic quality to prospective mates, with potential but still largely unknown influence on offspring fitness. These include studies assessing visual, olfactory and auditory preferences for potential good-gene indicator traits, such as dominance or bilateral symmetry. Individual differences in these robust preferences mainly arise through within- and between-individual variation in condition and reproductive status. Another set of studies has revealed preferences for traits indicating complementary genes, focussing on discrimination of dissimilarity at genes in the major histocompatibility complex (MHC). As in animal studies, we are only just beginning to understand how preferences for specific traits vary and inter-relate, how consideration of good and compatible genes can lead to substantial variability in individual mate choice decisions and how preferences expressed in one sensory modality may reflect those in another. Humans may be an ideal model species in which to explore these interesting complexities."} {"_id": "08e7240e59719af7d29dea2683a271a8b525173e", "title": "Anatomy of Forehead, Glabellar, Nasal and Orbital Muscles, and Their Correlation with Distinctive Patterns of Skin Lines on the Upper Third of the Face: Reviewing Concepts", "text": "The purpose of this study is to establish a relationship between the skin lines on the upper third of the face in cadavers, which represent the muscle activity in life, and the skin lines achieved by voluntary contraction of the forehead, glabellar, and orbital muscles in patients. Anatomical dissection was performed in 20 fresh cadavers, 11 female and 9 male, with ages ranging from 53 to 77\u00a0years. Subcutaneous dissection identified the muscle shape and the continuity of the fibers of the eyebrow elevator and depressor muscles. Subgaleal dissection identified the cutaneous insertions of the muscles. They were correlated with skin lines on the upper third of the face of the cadavers that represent the muscle activity in life. Voluntary contraction was performed by 20 volunteer patients, 13 female and 7 male, with ages ranging from 35 to 62\u00a0years. Distinct patterns of skin lines on the forehead, glabellar and orbital areas, and eyebrow displacement were identified. The frontalis exhibited four anatomical shapes with four different patterns of horizontal parallel lines on the forehead skin. The corrugator supercilii showed three muscle shapes, creating six patterns of vertical glabellar lines, three symmetrical and three asymmetrical. The orbicularis oculi and procerus had single patterns. The skin lines exhibited in voluntary contraction of the upper third of the face in patients showed the same patterns as the skin lines identified in cadavers. Skin lines in cadavers, which are the expression of the muscle activity in life, were similar to those achieved in the voluntary contraction of patients, allowing us to assert that the muscle patterns of patients were similar to those identified in cadavers.
"} {"_id": "16827c8aa80f394a5117126e1ec548fd08410860", "title": "Internet addiction: a systematic review of epidemiological research for the last decade.", "text": "In the last decade, Internet usage has grown tremendously on a global scale. The increasing popularity and frequency of Internet use has led to an increasing number of reports highlighting the potential negative consequences of overuse. Over the last decade, research into Internet addiction has proliferated. This paper reviews 68 existing epidemiological studies of Internet addiction that (i) contain quantitative empirical data, (ii) have been published after 2000, (iii) include an analysis relating to Internet addiction, (iv) include a minimum of 1000 participants, and (v) provide a full-text article published in English using the database Web of Science. Assessment tools and conceptualisations, prevalence, and associated factors in adolescents and adults are scrutinised. The results reveal the following. First, no gold standard of Internet addiction classification exists as 21 different assessment instruments have been identified. They adopt official criteria for substance use disorders or pathological gambling, no or few criteria relevant for an addiction diagnosis, time spent online, or resulting problems. Second, reported prevalence rates differ as a consequence of different assessment tools and cut-offs, ranging from 0.8% in Italy to 26.7% in Hong Kong. Third, Internet addiction is associated with a number of sociodemographic, Internet use, and psychosocial factors, as well as comorbid symptoms and disorders in adolescents and adults. The results indicate that a number of core symptoms (i.e., compulsive use, negative outcomes and salience) appear relevant for diagnosis, which assimilates Internet addiction and other addictive disorders and also differentiates them, implying a conceptualisation as a syndrome with similar etiology and components, but different expressions of addiction. Limitations include the exclusion of studies with smaller sample sizes and studies focusing on specific online behaviours. Conclusively, there is a need for nosological precision so that ultimately those in need can be helped by translating the scientific evidence established in the context of Internet addiction into actual clinical practice."} {"_id": "2cef1dcc6a8ed9b252a095080f25d21a85cd3991", "title": "Analysis and Characterisation of Spoof Surface Plasmon Polaritons based Wideband Bandpass Filter at Microwave Frequency", "text": "This paper presents a wideband bandpass filter (BPF) for the microwave frequency domain. The realisation approach is based on the spoof surface plasmon polaritons (SSPPs) phenomenon using a plasmonic metamaterial. A novel unit cell is designed for the filter using an LC resonator concept. The SSPPs BPF is then realised using an optimised mode converter and five unit cells. This paper includes brief design details of the proposed novel unit cell.
The passband of the BPF extends from approximately 1.20 to 5.80 GHz, the 3 dB bandwidth is approximately 4.60 GHz, and the insertion loss is less than about 2 dB over the passband. The overall dimension of the fabricated filter is 90 x 45 mm. A basic transmission-line schematic representation is also proposed to evaluate the BPF structure."} {"_id": "03a495d17c3b984594f873715fef5bb2e7eac567", "title": "Hybrid Electric Vehicles: Architecture and Motor Drives", "text": "Electric traction is one of the most promising technologies that can lead to significant improvements in vehicle performance and energy utilization efficiency, and to significant reductions in polluting emissions. Among several technologies, hybrid electric vehicle (HEV) traction is the most promising technology that has the advantages of high performance, high fuel efficiency, low emissions, and long operating range. Moreover, the technologies of all the component hardware are technically and commercially available. At present, almost all the major automotive manufacturers are developing hybrid electric vehicles, and some of them have marketed their products, such as Toyota and Honda. This paper reviews the present technologies of HEVs in the areas of drivetrain configuration, electric motor drives, and energy storage."} {"_id": "046f6abc4f5d738d08fe9b6453951cd5f4efa3bf", "title": "Learning from other subjects helps reducing Brain-Computer Interface calibration time", "text": "A major limitation of Brain-Computer Interfaces (BCI) is their long calibration time, as much data from the user must be collected in order to tune the BCI for this target user. In this paper, we propose a new method to reduce this calibration time by using data from other subjects. More precisely, we propose an algorithm to regularize the Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA) algorithms based on the data from a subset of automatically selected subjects. An evaluation of our approach showed that our method significantly outperformed the standard BCI design, especially when the amount of data from the target user is small. Thus, our approach helps in reducing the amount of data needed to achieve a given performance level."} {"_id": "5bdf93e0ce9e75dadb26d0e0c044dc1a588397f5", "title": "Location Segmentation, Inference and Prediction for Anticipatory Computing", "text": "This paper presents an analysis of continuous cellular tower data representing five months of movement from 215 randomly sampled subjects in a major urban city. We demonstrate the potential of existing community detection methodologies to identify salient locations based on the network generated by tower transitions. The tower groupings from these unsupervised clustering techniques are subsequently validated using data from Bluetooth beacons placed in the homes of the subjects. We then use these inferred locations as states within several dynamic Bayesian networks to predict each subject\u2019s subsequent movements with over 90% accuracy. We also introduce the X-Factor model, a DBN with a latent variable corresponding to abnormal behavior. We conclude with a description of extensions for this model, such as incorporating additional contextual and temporal variables already being logged by the
However, process variations and thermal fluctuations greatly influence the operation reliability of STT-RAM and limit its scalability. In this paper, we propose a new field-assisted access scheme to improve the read/write reliability and performance of STT-RAM. During read operations, an external magnetic field is applied to a magnetic tunneling junction (MTJ) device, generating a resistive sense signal without referring to other devices. Such a self-reference scheme offers a very promising alternative approach to overcome the severe cell-to-cell variations at highly scaled technology nodes. Furthermore, the external magnetic field can be used to assist the MTJ switching during write operations without introducing extra hardware overhead."} {"_id": "8b76d0465143389a735d76422ddfaadd4ce0e625", "title": "Tendinous insertion of semimembranosus muscle into the lateral meniscus", "text": "Forty-two cadaver knees were used for morphologic and MRI observations of the tendinous distal expansions of the semimembranosus m. and the posterior capsular structures of the knee. A tendinous branch of the semimembranosus m. inserting into the posterior horn of the lateral meniscus was found in 43.2% of the knees dissected, in addition to the five already known insertional branches: capsular, direct, anterior, and inferior, as well as the oblique popliteal ligament. The tendon had three morphologic types: thin, broad, and round. All three types moved the lateral meniscus posteriorly when pulled on. Thus, the semimembranosus m. may also have a protective function for the lateral meniscus as well as the already well-established function of protecting the medial meniscus in knee flexion. When a semimembranosus tendon attachment to the posterior horn of the lateral meniscus is present, its normal insertion is difficult to differentiate from a lateral meniscus tear on MRI, and this may cause misdiagnosis."} {"_id": "173c80de279d6bf01418acf2216556f84243f640", "title": "Millisecond Coupling of Local Field Potentials to Synaptic Currents in the Awake Visual Cortex", "text": "The cortical local field potential (LFP) is a common measure of population activity, but its relationship to synaptic activity in individual neurons is not fully established. This relationship has been typically studied during anesthesia and is obscured by shared slow fluctuations. Here, we used patch-clamp recordings in visual cortex of anesthetized and awake mice to measure intracellular activity; we then applied a simple method to reveal its coupling to the simultaneously recorded LFP. LFP predicted membrane potential as accurately as synaptic currents, indicating a major role for synaptic currents in the relationship between cortical LFP and intracellular activity. During anesthesia, cortical LFP predicted excitation far better than inhibition; during wakefulness, it predicted them equally well, and visual stimulation further enhanced predictions of inhibition.
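One simple way to quantify the LFP-to-intracellular coupling described above is a lagged linear (ridge) filter predicting the intracellular signal from the LFP; the sketch below is an assumption-level stand-in for the paper's "simple method", with synthetic data, lag count, and penalty chosen purely for illustration.

```python
import numpy as np

def lagged_design(x, n_lags):
    """Design matrix of n_lags delayed copies of x (wrapped rows dropped)."""
    rows = [np.roll(x, k) for k in range(n_lags)]
    return np.stack(rows, axis=1)[n_lags:]

rng = np.random.default_rng(2)
lfp = rng.standard_normal(5000)
# Synthetic "membrane potential": a smoothed version of the LFP plus noise.
vm = np.convolve(lfp, np.hanning(20), mode="same") + 0.1 * rng.standard_normal(5000)

X, y = lagged_design(lfp, 40), vm[40:]
lam = 1.0                                          # ridge penalty (assumed)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ w
print(np.corrcoef(pred, y)[0, 1])                  # prediction quality
```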
These findings reveal a central role for synaptic currents, and especially inhibition, in the relationship between the subthreshold activity of individual neurons and the cortical LFP during wakefulness."} {"_id": "334d69c508cfc1aa58b896eb1a6a3892f3e848fd", "title": "The underlying structure of visuospatial working memory in children with mathematical learning disability.", "text": "This study examined visual, spatial-sequential, and spatial-simultaneous working memory (WM) performance in children with mathematical learning disability (MLD) and low mathematics achievement (LMA) compared with typically developing (TD) children. Groups were matched on reading decoding performance and verbal intelligence. Besides statistical significance testing, we used bootstrap confidence interval estimation and computed effect sizes. Children were individually tested with six computerized tasks, two for each visuospatial WM subcomponent. We found that both MLD and LMA children had low visuospatial WM function in both spatial-simultaneous and spatial-sequential WM tasks. The WM deficit was most pronounced in MLD children and less so in LMA children. This suggests that WM scores are distributed along a continuum, with TD children achieving the top scores and MLD children the low scores. The theoretical and practical significance of the findings is discussed. Statement of Contribution: What is already known on this subject? Working memory plays an important role in mathematical achievement. Children with mathematical learning disability (MLD) usually have low working memory resources. Conflicting results have been reported concerning the role of VSWM in individuals with MLD. What does the present study add? Children with different degrees of impairment in math achievement and typically developing children were tested. Visual, spatial-sequential, and spatial-simultaneous working memory tasks were examined. Only spatial-sequential and spatial-simultaneous working memory tasks discriminated the two impairment groups."} {"_id": "6a9cca9766c7177131881beb4022dfed9df4d2b0", "title": "The neuroscience of social decision-making.", "text": "Given that we live in highly complex social environments, many of our most important decisions are made in the context of social interactions. Simple but sophisticated tasks from a branch of experimental economics known as game theory have been used to study social decision-making in the laboratory setting, and a variety of neuroscience methods have been used to probe the underlying neural systems. This approach is informing our knowledge of the neural mechanisms that support decisions about trust, reciprocity, altruism, fairness, revenge, social punishment, social norm conformity, social learning, and competition. Neural systems involved in reward and reinforcement, pain and punishment, mentalizing, delaying gratification, and emotion regulation are commonly recruited for social decisions. This review also highlights the role of the prefrontal cortex in prudent social decision-making, at least when social environments are relatively stable. In addition, recent progress has been made in understanding the neural bases of individual variation in social decision-making."} {"_id": "50d6c73ff96076eaa0015c7b5d00969b56692676", "title": "Audit Log Management in MongoDB", "text": "In the past few years, web-based applications and their data management needs have changed dramatically.
Relational databases are often being replaced by other viable alternatives, such as NoSQL databases, for reasons of scalability and heterogeneity. MongoDB, a NoSQL database, is an agile database built for scalability, performance and high availability. It can be deployed in a single-server environment and also on complex multi-site architectures. MongoDB provides high performance for read and write operations by leveraging in-memory computing. Although researchers have motivated the need for MongoDB, not much appears in the area of log management. Efficient log management techniques are needed for various reasons including security, accountability, and improving the performance of the system. Towards this end, we analyze the different logging methods offered by MongoDB and compare them to the NIST standard. Our analysis indicates that profiling and mongosniff are useful for log management and we present a simple model that combines the two techniques."} {"_id": "4f1db9a1d579bf5906a356801526613eebe464e1", "title": "Bayesian Information Extraction Network", "text": "Dynamic Bayesian networks (DBNs) offer an elegant way to integrate various aspects of language in one model. Many existing algorithms developed for learning and inference in DBNs are applicable to probabilistic language modeling. To demonstrate the potential of DBNs for natural language processing, we employ a DBN in an information extraction task. We show how to assemble a wealth of emerging linguistic instruments for shallow parsing, syntactic and semantic tagging, morphological decomposition, named entity recognition, etc., in order to incrementally build a robust information extraction system. Our method outperforms previously published results on an established benchmark domain. 1 Information Extraction Information extraction (IE) is the task of filling in template information from previously unseen text which belongs to a pre-defined domain. The resulting database is suited for formal queries and filtering. IE systems generally work by detecting patterns in the text that help identify significant information. Researchers have shown [Freitag and McCallum, 1999; Ray and Craven, 2001] that a probabilistic approach allows the construction of robust and well-performing systems. However, the existing probabilistic systems are generally based on Hidden Markov Models (HMMs). Due to this relatively impoverished representation, they are unable to take advantage of the wide array of linguistic information used by many non-probabilistic IE systems. In addition, existing HMM-based systems model each target category separately, failing to capture relational information, such as typical target order, or the fact that each element only belongs to a single category. This paper shows how to incorporate a wide array of knowledge into a probabilistic IE system, based on dynamic Bayesian networks (DBN)\u2014a rich probabilistic representation that generalizes HMMs. Let us illustrate IE by describing seminar announcements, which became established as one of the most popular benchmark domains in the field [Califf and Mooney, 1999; Freitag and McCallum, 1999; Soderland, 1999; Roth and Yih, 2001; Ciravegna, 2001]. People receive dozens of seminar announcements weekly and need to manually extract information and paste it into personal organizers. The goal of an IE system is to automatically identify target fields such as location and topic of a seminar, date and starting time, ending time and speaker.
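The profiling mechanism named in the MongoDB audit-log record above can be exercised from Python. A hedged sketch using pymongo and the documented `profile` database command follows; the URI, database name, and collection are illustrative, and a running mongod is assumed.

```python
from pymongo import MongoClient

# Illustrative connection; adjust the URI for your deployment.
client = MongoClient("mongodb://localhost:27017")
db = client["audit_demo"]

# Level 2 logs all operations to the capped system.profile collection.
db.command("profile", 2)
db.things.insert_one({"user": "alice", "action": "login"})
db.things.find_one({"user": "alice"})

for entry in db["system.profile"].find().limit(5):
    # Each profiler document records the op type, namespace, and timing.
    print(entry.get("op"), entry.get("ns"), entry.get("millis"))
```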
Announcements come in many formats, but usually follow some pattern. We often find a header with a gist in the form \u201cPostedBy: john@host.domain; Who: Dr. Steals; When: 1 am;\u201d and so forth. Also in the body of the message, the speaker usually precedes both location and starting time, which in turn precedes ending time, as in: \u201cDr. Steals presents in Dean Hall at one am.\u201d The task is complicated since some fields may be missing or may contain multiple values. This kind of data falls into the so-called semi-structured text category. Instances obey a certain structure and usually contain information for most of the expected fields in some order. There are two other categories: free text and structured text. In structured text, the positions of the information fields are fixed and values are limited to a pre-defined set. Consequently, the IE systems focus on specifying the delimiters and order associated with each field. At the opposite end lies the task of extracting information from free text which, although unstructured, is assumed to be grammatical. Here IE systems rely more on syntactic, semantic and discourse knowledge in order to assemble relevant information potentially scattered all over a large document. IE algorithms face different challenges depending on the extraction targets and the kind of text they are embedded in. In some cases, the target is uniquely identifiable (single-slot), while in others, the targets are linked together in multi-slot association frames. For example, a conference schedule has several slots for related speaker, topic and time of the presentation, while a seminar announcement usually refers to a unique event. Sometimes it is necessary to identify each word in a target slot, while some benefit may be reaped from partial identification of the target, such as labeling the beginning or end of the slot separately. Many applications involve processing of domain-specific jargon like Internetese\u2014a style of writing prevalent in news groups, e-mail messages, bulletin boards and online chat rooms. Such documents do not follow a good grammar, spelling or literary style. Often these are more like a stream-of-consciousness ranting in which ASCII art and pseudo-graphic sketches are used and emphasis is provided by all-capitals, or using multiple exclamation signs. As we exemplify below, syntactic analysers easily fail on such text. [Table 1: Sample phrase and its representation in multiple feature values for ten tokens.]"} {"_id": "d2165efcf19b65649f0b2ebbe4af2e7902dca391", "title": "A Framework for Highly-Available Composed Real-Time Internet Services Bhaskaran Raman Qualifying Examination", "text": "Application services for the end-user are all important in today\u2019s communication networks, and could dictate the success or failure of technology or service providers [39]. It is important to develop and deploy application functionality quickly [18]. The ability to compose services from independent providers provides a flexible way to quickly build new end-to-end functionality.
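The sequence-labeling core shared by the HMM baselines and the DBN system in the Bayesian information extraction record above is decoding the most likely tag sequence for a token stream. Below is a generic log-space Viterbi sketch, not the paper's DBN; the tags and the transition and emission tables are toy assumptions.

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely state sequence under an HMM, in log space for stability.
    States could be IE tags such as O / SPEAKER / LOCATION / TIME."""
    lp = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = lp[:, None] + np.log(trans)   # (from_state, to_state)
        back.append(scores.argmax(axis=0))     # best predecessor per state
        lp = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(lp.argmax())]
    for bp in reversed(back):                  # backtrack
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy model: 2 tags, 3 observable word types.
start = np.array([0.7, 0.3])
trans = np.array([[0.8, 0.2], [0.3, 0.7]])
emit = np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])
print(viterbi([0, 1, 2, 2], start, trans, emit))
```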
Such composition of services across the network and across different service providers becomes especially important in the context of growing popularity and heterogeneity in 3G+ access networks and devices [25]. In this work, we provide a framework for constructing such composed services on the Internet. Robustness and high-availability are crucial for Internet services. While cluster-based models for resilience to failures have been built for web-servers [27] as well as proxy services [11, 4], these are inadequate in the context of composed services. This is especially so when the application session is long-lived, and failures have to be handled during a session. In the context of composed services, we address the important and challenging issues of resilience to failures, and adapting to changes in overall performance during long-lived sessions. Our framework is based on a connection-oriented overlay network of compute-clusters on the Internet. The overlay network provides the context for composing services over the wide-area, and monitoring for liveness and performance of a session. We have performed initial analyses of the feasibility of network failure detection over the wide-area Internet. And we have a preliminary evaluation of the overhead associated with such an overlay network. We present our plans for further evaluation and refinement of the architecture; and for examining issues related to the creation of the overlay topology. 1 Introduction Application functionality for the end-user is the driving force behind development of technology and its adoption. It is important to be able to develop and deploy new functionality quickly [18]. Composition of existing services to achieve new functionality enables such quick development through reuse of already deployed components. Consider, for example, the scenario shown in Figure 1. [Figure 1 labels: video-on-demand server, replicated instances, Service Provider A.]"} {"_id": "0cb8f50580cc69191144bd503e268451ce966fa6", "title": "Neural Message Passing for Quantum Chemistry", "text": "Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels."} {"_id": "bcbf90dd0d592ded9ef1255ac78d0f9479237207", "title": "Substrate integrated waveguide (SIW) power amplifier using CBCPW-to-SIW transition for matching network", "text": "In this paper, for the first time a novel substrate integrated waveguide (SIW)-based 10W power amplifier (PA), designed with conductor-backed coplanar waveguide (CBCPW)-to-SIW transition matching network (MN), is presented. 
Transition between CBCPW and SIW is employed for both input and output MN designs, so the proposed SIW PA can be easily connected with any microstrip or SIW-based circuit. Asymmetrical and symmetrical types of CBCPW-to-SIW transition MN are proposed. The output SIW-based MN is designed with an asymmetrical structure using one inductive metalized post, and the input SIW-based MN is designed with a symmetrical structure using two identical inductive metalized posts. An SIW-based 10 W PA using a GaN HEMT at 3.62 GHz is designed, fabricated, and measured. Measured results show that the maximum power added efficiency (PAE) is 54.24% with 39.74 dBm output power, and the maximum gain is 13.31 dB. At the design frequency of 3.6 GHz, the size of the proposed SIW-based PA is comparable with other microstrip-based PAs."} {"_id": "19a4b6d05019b5e1be16790b9306ed086f5ddfb0", "title": "High Performance Single-Chip FPGA Rijndael Algorithm Implementations", "text": "This paper describes high performance single-chip FPGA implementations of the new Advanced Encryption Standard (AES) algorithm, Rijndael. The designs are implemented on the Virtex-E FPGA family of devices. FPGAs have proven to be very effective in implementing encryption algorithms. They provide more flexibility than ASIC implementations and produce higher data-rates than equivalent software implementations. A novel, generic, parameterisable Rijndael encryptor core capable of supporting varying key sizes is presented. The 192-bit key and 256-bit key designs run at data rates of 5.8 Gbits/sec and 5.1 Gbits/sec respectively. The 128-bit key encryptor core has a throughput of 7 Gbits/sec which is 3.5 times faster than similar existing hardware designs and 21 times faster than known software implementations, making it the fastest single-chip FPGA Rijndael encryptor core reported to date. A fully pipelined single-chip 128-bit key Rijndael encryptor/decryptor core is also presented. This design runs at a data rate of 3.2 Gbits/sec on a Xilinx Virtex-E XCV3200E-8-CG1156 FPGA device. There are no other known single-chip FPGA implementations of an encryptor/decryptor Rijndael design."} {"_id": "446573a346acdbd2eb8f0527c5d73fc707f04527", "title": "AES Proposal : Rijndael", "text": ""} {"_id": "4d5e15c080dde7e5c50ae2697ee56d29218235b1", "title": "Very Compact FPGA Implementation of the AES Algorithm", "text": "In this paper a compact FPGA architecture for the AES algorithm with 128-bit key targeted for low-cost embedded applications is presented. Encryption, decryption and key schedule are all implemented using small resources of only 222 Slices and 3 Block RAMs. This implementation easily fits in a low-cost Xilinx Spartan II XC2S30 FPGA. This implementation can encrypt and decrypt data streams of 150 Mbps, which satisfies the needs of most embedded applications, including wireless communication. Specific features of Spartan II FPGAs enabling compact logic implementation are explored, and a new way of implementing MixColumns and InvMixColumns transformations using shared logic resources is presented."} {"_id": "e1f21059a73fc1b37c5e3db047f55fa61b07e6db", "title": "A compact FPGA implementation of the hash function whirlpool", "text": "Recent breakthroughs in cryptanalysis of standard hash functions like SHA-1 and MD5 raise the need for alternatives. A credible alternative to, for instance, SHA-1 or the SHA-2 family of hash functions is Whirlpool. Whirlpool is a hash function that has been evaluated and approved by NESSIE and is standardized by ISO/IEC. 
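For comparison with the FPGA Rijndael cores described above, a minimal software sketch of the same AES-128 cipher, assuming the third-party Python cryptography package is installed:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)  # 128-bit key, the key size of the 7 Gbits/sec core
    iv = os.urandom(16)

    # One CBC-mode encryption of a single 16-byte block; the hardware cores
    # pipeline exactly this block transformation to reach Gbit/s rates.
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(b"sixteen byte msg") + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == b"sixteen byte msg"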
To the best of our knowledge, only one FPGA implementation of Whirlpool has been published to date. This implementation is designed for high throughput rates, requiring a considerable amount of hardware resources. In this article we present a compact hardware implementation of the hash function Whirlpool. The proposed architecture uses an innovative state representation that makes it possible to reduce the required hardware resources remarkably. The complete implementation requires 1456 CLB-slices and, most notably, no block RAMs."} {"_id": "a6402417792005d38c99d29f9b32a4b27e32d7c8", "title": "House Price Prediction: Hedonic Price Model vs. Artificial Neural Network", "text": "The objective of this paper is to empirically compare the predictive power of the hedonic model with an artificial neural network model on house price prediction. A sample of 200 houses in Christchurch, New Zealand is randomly selected from the Harcourt website. Factors including house size, house age, house type, number of bedrooms, number of bathrooms, number of garages, amenities around the house and geographical location are considered. Empirical results support the potential of artificial neural networks on house price prediction, although previous studies have commented on their black box nature and achieved different conclusions."} {"_id": "62c3daf1899f6841f7092961193f062cc4fe1103", "title": "Search mode and purchase intention in online shopping behavior", "text": "This study focuses on the effect of website visitors' goal-oriented search mode on purchase intention in online environments. In a study of 874 respondents recruited from 13 online shops representing a diversity of product categories and customer segments, the effect of visitors' goal-oriented search mode on purchase intention is found to be moderated by product involvement and product risk. Furthermore, product involvement, product risk, product knowledge and Internet experience are found to have positive effects on the degree of goal-oriented search mode of the visitors. Also, product knowledge and Internet experience are reported to have direct positive effects on purchase intention. The results point to the importance of understanding the characteristics of website visitors, and of customizing the support and search services offered on the website to the characteristics and preferences of the individual visitor. Customization of this kind may partly be based on immediate visitor history (referrer) and may be used to increase purchase intention, and eventually online sales."} {"_id": "e103581df3c31b8a0518fe378920c48db51fe5a3", "title": "Data-driven Personas: Constructing Archetypal Users with Clickstreams and User Telemetry", "text": "User Experience (UX) research teams following a user-centered design approach harness personas to better understand a user's workflow by examining that user's behavior, goals, needs, wants, and frustrations. To create target personas these researchers rely on workflow data from surveys, self-reports, interviews, and user observation. However, this data is not directly related to user behavior, only weakly reflects a user's actual workflow in the product, is costly to collect, is limited to a few hundred responses, and is outdated as soon as a persona's workflows evolve. To address these limitations we present a quantitative, bottom-up, data-driven approach to create personas. 
First, we directly incorporate user behavior via clicks gathered automatically from telemetry data related to the actual product use in the field; since the data collection is automatic, it is also cost effective. Next, we aggregate 3.5 million clicks from 2400 users into 39,000 clickstreams and then structure them into 10 workflows via hierarchical clustering; we thus base our personas on a large data sample. Finally, we use mixed models, a statistical approach that incorporates these clustered workflows to create five representative personas; updating our mixed model ensures that these personas remain current. We also validated these personas with our product's user behavior experts to ensure that the workflows and the persona goals represent actual product use."} {"_id": "29b377cb5e960c8f0f7b880e00399fc6ae955c0f", "title": "Detecting Faulty Nodes with Data Errors for Wireless Sensor Networks", "text": "Wireless Sensor Networks (WSN) promise researchers a powerful instrument for observing sizable phenomena with fine granularity over long periods. Since the accuracy of data is important to the whole system's performance, detecting nodes with faulty readings is an essential issue in network management. As a complementary solution to detecting nodes with functional faults, this article proposes FIND, a novel method to detect nodes with data faults that neither assumes a particular sensing model nor requires costly event injections. After the nodes in a network detect a natural event, FIND ranks the nodes based on their sensing readings as well as their physical distances from the event. FIND works for systems where the measured signal attenuates with distance. A node is considered faulty if there is a significant mismatch between the sensor data rank and the distance rank. Theoretically, we show that average ranking difference is a provable indicator of possible data faults. FIND is extensively evaluated in simulations and two test bed experiments with up to 25 MicaZ nodes. Evaluation shows that FIND has a less than 5% miss detection rate and false alarm rate in most noisy environments."} {"_id": "c78f7a2e573d189c76d91958204cfe96248f97f9", "title": "5.2 Distributed system of digitally controlled microregulators enabling per-core DVFS for the POWER8TM microprocessor", "text": "Integrated voltage regulator modules (iVRMs) [1] provide a cost-effective path to realizing per-core dynamic voltage and frequency scaling (DVFS), which can be used to optimize the performance of a power-constrained multi-core processor. This paper presents an iVRM system developed for the POWER8™ microprocessor, which functions as a very fast, accurate low-dropout regulator (LDO), with 90.5% peak power efficiency (only 3.1% worse than an ideal LDO). At low output voltages, efficiency is reduced but still sufficient to realize beneficial energy savings with DVFS. Each iVRM features a bypass mode so that some of the cores can be operated at maximum performance with no regulator loss. 
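The rank-mismatch rule in the FIND abstract above can be sketched in a few lines; this is an illustrative reading of the method, not the authors' implementation, and the threshold value is hypothetical:

    import numpy as np

    def find_suspects(readings, distances, threshold):
        # For an attenuating signal, stronger readings should mean shorter
        # distances, so rank readings descending and distances ascending;
        # a large disagreement between the two ranks flags a suspect node.
        reading_rank = np.argsort(np.argsort(-np.asarray(readings)))
        distance_rank = np.argsort(np.argsort(np.asarray(distances)))
        rank_diff = np.abs(reading_rank - distance_rank)
        return np.nonzero(rank_diff > threshold)[0]

    # Node 2 reports an implausibly strong reading for its distance.
    print(find_suspects([0.9, 0.7, 0.95, 0.3], [10.0, 20.0, 80.0, 90.0], 1))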
With the iVRM area including the input decoupling capacitance (DCAP) (but not the output DCAP inherent to the cores), the iVRMs achieve a power density of 34.5 W/mm², which exceeds that of inductor-based or SC converters by at least 3.4× [2]."} {"_id": "89b127457cf4080e1592589f78ecbda4c102697f", "title": "New Word Detection in Ancient Chinese Literature", "text": "Mining an ancient Chinese corpus is not as convenient as mining modern Chinese, because there is no complete dictionary of ancient Chinese words, which leads to poor tokenizer performance. Finding new words in ancient Chinese texts is therefore significant. In this paper, the Apriori algorithm is improved and used to produce candidate character sequences, and a long short-term memory (LSTM) neural network is used to identify word boundaries. Furthermore, we design a word confidence feature to measure the confidence score of new words. The experimental results demonstrate that the improved Apriori-like algorithm can greatly improve the recall rate of valid candidate character sequences, and the average accuracy of our method on new word detection rises to 89.7%."} {"_id": "be99edb722435e465e469f0b09bddcc4e8bf33c9", "title": "Attacking Deterministic Signature Schemes Using Fault Attacks", "text": "Many digital signature schemes rely on random numbers that are unique and non-predictable per signature. Failures of random number generators may have catastrophic effects such as compromising private signature keys. In recent years, many widely-used cryptographic technologies adopted deterministic signature schemes because they are presumed to be safer to implement. In this paper, we analyze the security of deterministic ECDSA and EdDSA signature schemes and show that the elimination of random number generators in these schemes enables new kinds of fault attacks. We formalize these attacks and introduce practical attack scenarios against EdDSA using the Rowhammer fault attack. EdDSA is used in many widely used protocols such as TLS, SSH, and IPSec, and we show that these protocols are not vulnerable to our attack. We formalize the necessary requirements of protocols using these deterministic signature schemes to be vulnerable, and discuss mitigation strategies and their effect on fault attacks against deterministic signature schemes."} {"_id": "ff56ba67f29179fe65d99cf929dfec811abf175e", "title": "Development of a FACS-verified set of basic and self-conscious emotion expressions.", "text": "In 2 studies, the authors developed and validated a new set of standardized emotion expressions, which they referred to as the University of California, Davis, Set of Emotion Expressions (UCDSEE). The precise components of each expression were verified using the Facial Action Coding System (FACS). The UCDSEE is the first FACS-verified set to include the three \"self-conscious\" emotions known to have recognizable expressions (embarrassment, pride, and shame), as well as the 6 previously established \"basic\" emotions (anger, disgust, fear, happiness, sadness, and surprise), all posed by the same 4 expressers (African and White males and females). 
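A toy sketch of the Apriori-style candidate generation described in the new-word-detection abstract above, using downward closure over character n-grams (the thresholds are hypothetical, the paper's specific improvements are not reproduced, and the LSTM boundary model is omitted):

    from collections import Counter

    def candidate_sequences(text, max_n=4, min_count=2):
        # Length-1 candidates: characters that occur often enough.
        prev = {ch for ch, c in Counter(text).items() if c >= min_count}
        candidates = set(prev)
        for n in range(2, max_n + 1):
            grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
            # Downward closure: keep an n-gram only if both of its
            # (n-1)-gram subsequences were themselves frequent.
            prev = {g for g, c in grams.items()
                    if c >= min_count and g[:-1] in prev and g[1:] in prev}
            candidates |= prev
            if not prev:
                break
        return candidates

    print(sorted(candidate_sequences("abcabcabd", max_n=3)))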
This new set has numerous potential applications in future research on emotion and related topics."} {"_id": "eb82febc304260fae4a18e89f8d04bfca8823236", "title": "The use of Social Networking Sites (SNSs) by the faculty members of the School of Library & Information Science, PAAET, Kuwait", "text": "Purpose – The purpose of this study is to describe the usage of Social Networking Sites (SNSs) by the faculty members of the School of Library and Information Science (SLIS) at the College of Basic Education, the Public Authority for Applied Education and Training (PAAET), Kuwait. Design/methodology/approach – A survey was conducted to collect data from 33 faculty members, of whom 21 were using SNSs, representing 63.6 per cent of the total sample, and 12 were not using SNSs, representing 36.4 per cent of the total sample. This study revealed that SNSs are used moderately by the faculty members. Findings – This study showed that faculty members who were using SNSs tended to be male, aged between 41 and 50 years, PhD holders, ranked as assistant professors, full-time members, and specialized in information technologies, with relatively recent teaching experience ranging from one to five years; most of the faculty members who were not using SNSs also tended to be male, aged between 41 and 60 years, PhD holders, ranked as lecturers, and full-time members specialized in organization of information, with teaching experience ranging from 16 to 20 years. More than half of the faculty members had been using SNSs for three to less than six years, and a large number of them were using SNSs several times a week, accessing these sites mostly from their school office, home and school laboratory. There are no statistically significant differences between the demographic data of participants (gender, age and education level) and either their use or non-use of SNSs. There are no significant differences between the academic rank, teaching status and teaching experience of faculty and their use of SNSs. However, there is a significant relation between the faculty's area of teaching and their use of SNSs. Faculty members were interested in the use of SNSs. YouTube, Twitter, Facebook and blogs, respectively, were used most by faculty members, but Twitter, Facebook and YouTube were the SNSs they most often had profiles on. Faculty members have adopted SNSs mainly for the purpose of communicating with others and finding and sharing information with peers and students. Tasks performed by faculty members on SNSs were mostly communicating, sending/receiving messages, and finding general and specific information. Faculty members' profiles on SNSs were mostly on Twitter, Facebook, YouTube, blogs, wikis and podcasting, respectively. Faculty members confirmed that the use of YouTube, Facebook, blogs, Twitter, wikis and podcasting, respectively, was at least effective, and that the use of YouTube, Facebook, Twitter, blogs and wikis, respectively, was at least fairly useful and fairly easy for them. Faculty members are in general agreement about the effectiveness of SNSs, especially for disseminating and sharing information, communication and informal collaboration. The researcher would like to thank the Public Authority for Applied Education and Training (PAAET), the state of Kuwait, for supporting this paper (BE-11-10). 
"} {"_id": "a7f46ae35116f4c0b3aaa1c9b46d6e79e63b56c9", "title": "The Bluetooth radio system", "text": "A few years ago it was recognized that the vision of a truly low-cost, low-power radio-based cable replacement was feasible. Such a ubiquitous link would provide the basis for portable devices to communicate together in an ad hoc fashion by creating personal area networks which have similar advantages to their office environment counterpart, the local area network. Bluetooth™ is an effort by a consortium of companies to design a royalty-free technology specification enabling this vision. This article describes the radio system behind the Bluetooth concept. Designing an ad hoc radio system for worldwide usage poses several challenges. The article describes the critical system characteristics and motivates the design choices that have been made."} {"_id": "7c38d99373d68e8206878aa6f49fce6e0d4cfc6a", "title": "Patch Autocorrelation Features: a translation and rotation invariant approach for image classification", "text": "The autocorrelation is often used in signal processing as a tool for finding repeating patterns in a signal. In image processing, there are various image analysis techniques that use the autocorrelation of an image in a broad range of applications from texture analysis to grain density estimation. This paper provides an extensive review of two recently introduced and related frameworks for image representation based on autocorrelation, namely Patch Autocorrelation Features (PAF) and Translation and Rotation Invariant Patch Autocorrelation Features (TRIPAF). The PAF approach stores a set of features obtained by comparing pairs of patches from an image. More precisely, each feature is the Euclidean distance between a particular pair of patches. The proposed approach is successfully evaluated in a series of handwritten digit recognition experiments on the popular MNIST data set. However, the PAF approach has limited applications, because it is not invariant to affine transformations. More recently, the PAF approach was extended to become invariant to image transformations, including (but not limited to) translation and rotation changes. In the TRIPAF framework, several features are extracted from each image patch. Based on these features, a vector of similarity values is computed between each pair of patches. Then, the similarity vectors are clustered together such that the spatial offset between the patches of each pair is roughly the same. Finally, the mean and the standard deviation of each similarity value are computed for each group of similarity vectors. These statistics are concatenated to obtain the TRIPAF feature vector. The TRIPAF vector essentially records information about the repeating patterns within an image at various spatial offsets. After presenting the two approaches, several optical character recognition and texture classification experiments are conducted to evaluate the two approaches. Results are reported on the MNIST (98.93%), the Brodatz (96.51%), and the UIUCTex (98.31%) data sets. 
Both PAF and TRIPAF are fast to compute and produce compact representations in practice, while reaching accuracy levels similar to other state-of-the-art methods."} {"_id": "6e6f47c4b2109e7824cd475336c3676faf9b113e", "title": "Baby Talk : Understanding and Generating Image Descriptions", "text": "We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work."} {"_id": "b7954c6886bd953ebf677a96530a4e3923642260", "title": "Single camera lane detection and tracking", "text": "In this paper we present a method to detect and track straight lane boundaries, using a forward-looking single camera mounted on-board. The method proved to be robust, even under varying lighting conditions and in the presence of heavy shadowing cast by vegetation, vehicles, bridges, etc. Moreover, false positives hardly ever occur. The lane markings are continuously detected even when the vehicle is performing maneuvers such as an excursion or lane change. The performance is achieved by a novel combination of methods, namely, by first finding vanishing point candidates for the lane boundaries, later by the careful selection of line segments given a vanishing point, and finally by using a smart heuristic to select the whole set of four lane boundaries. Suppression of false positives is further achieved by tracking the vanishing point and by constraining the lane width based on recent history."} {"_id": "219d1747f0afb8b9d2c603d0f3503764d5257796", "title": "CLAMS: Bringing Quality to Data Lakes", "text": "With the increasing incentive of enterprises to ingest as much data as they can in what is commonly referred to as \"data lakes\", and with the recent development of multiple technologies to support this \"load-first\" paradigm, the new environment presents serious data management challenges. Among them, the assessment of data quality and cleaning large volumes of heterogeneous data sources become essential tasks in unveiling the value of big data. The coveted use of unstructured and semi-structured data in large volumes makes current data cleaning tools (primarily designed for relational data) not directly adoptable. We present CLAMS, a system to discover and enforce expressive integrity constraints from large amounts of lake data with very limited schema information (e.g., represented as RDF triples). This demonstration shows how CLAMS is able to discover the constraints and the schemas they are defined on simultaneously. CLAMS also introduces a scale-out solution to efficiently detect errors in the raw data. CLAMS interacts with human experts to both validate the discovered constraints and to suggest data repairs. CLAMS has been deployed in a real large-scale enterprise data lake and was experimented with a real data set of 1.2 billion triples. 
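A compact sketch of the PAF representation reviewed above, in which each feature is the Euclidean distance between one pair of image patches (illustrative only; the patch size and stride are hypothetical choices):

    import numpy as np

    def paf_features(image, patch=4, stride=4):
        # Slice the image into equally sized patches, then record the
        # Euclidean distance between every pair of patches as one feature.
        h, w = image.shape
        patches = [image[r:r + patch, c:c + patch].ravel()
                   for r in range(0, h - patch + 1, stride)
                   for c in range(0, w - patch + 1, stride)]
        feats = [np.linalg.norm(patches[i] - patches[j])
                 for i in range(len(patches))
                 for j in range(i + 1, len(patches))]
        return np.array(feats)

    # A 16x16 image yields 16 patches, hence 120 pairwise-distance features.
    print(paf_features(np.random.rand(16, 16)).shape)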
It has been able to spot multiple obscure data inconsistencies and errors early in the data processing stack, providing huge value to the enterprise."} {"_id": "62aa13c3c4cc34cb29edbf60cc30c0138d08e870", "title": "Distributed Detection of Sensor Worms Using Sequential Analysis and Remote Software Attestations", "text": "Recent work has demonstrated that self-propagating worms are a real threat to sensor networks. Since worms can enable an adversary to quickly compromise an entire sensor network, they must be detected and stopped as quickly as possible. To meet this need, we propose a worm propagation detection scheme for sensor networks. The proposed scheme applies a sequential analysis to detect worm propagation by leveraging the intuition that a worm's communication pattern is different from benign traffic. In particular, a worm in a sensor network requires a long sequence of packets propagating hop-by-hop to each new infected node in turn. We thus have detectors that observe communication patterns in the network; a worm spreading hop-by-hop will quickly create chains of connections that would not be seen in normal traffic. Once detector nodes identify the worm propagation pattern, they initiate remote software attestations to detect infected nodes. Through analysis and simulation, we demonstrate that the proposed scheme effectively and efficiently detects worm propagation. In particular, it blocks worm propagation while restricting the fraction of infected nodes to at most 13.5% with an overhead of at most 0.63 remote attestations per node per time slot."} {"_id": "288345979bd6b78534cd86228e246a5daed7e700", "title": "“Teach Me–Show Me”—End-User Personalization of a Smart Home and Companion Robot", "text": "Care issues and costs associated with an increasing elderly population are becoming a major concern for many countries. The use of assistive robots in “smart-home” environments has been suggested as a possible partial solution to these concerns. A challenge is the personalization of the robot to meet the changing needs of the elderly person over time. One approach is to allow the elderly person, or their carers or relatives, to make the robot learn activities in the smart home and teach it to carry out behaviors in response to these activities. The overriding premise is that such teaching is both intuitive and “nontechnical.” To evaluate these issues, a commercially available autonomous robot has been deployed in a fully sensorized but otherwise ordinary suburban house. We describe the design approach to the teaching, learning, robot, and smart home systems as an integrated unit and present results from an evaluation of the teaching component with 20 participants and a preliminary evaluation of the learning component with three participants in a human-robot interaction experiment. Participants reported findings using a system usability scale and ad-hoc Likert questionnaires. Results indicated that participants thought that this approach to robot personalization was easy to use, useful, and that they would be capable of using it in real-life situations both for themselves and for others."} {"_id": "b9e78ee951ad2e4ba32a0861744bc4cea3821d50", "title": "A 20 Gb/s 0.4 pJ/b Energy-Efficient Transmitter Driver Utilizing Constant- ${\rm G}_{\rm m}$ Bias", "text": "This paper describes a transmitter driver based on a CMOS inverter with a resistive feedback. 
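The worm-detection abstract above names "a sequential analysis" without spelling it out; Wald's sequential probability ratio test is the standard tool for this kind of hop-chain decision and serves here only as a stand-in sketch (all rates and error bounds are hypothetical):

    import math

    def sprt(observations, p0=0.05, p1=0.4, alpha=0.01, beta=0.01):
        # Decide between a benign rate p0 and a worm-like rate p1 of
        # observing hop-by-hop forwarding chains, stopping as soon as the
        # accumulated log-likelihood ratio crosses either boundary.
        upper = math.log((1 - beta) / alpha)
        lower = math.log(beta / (1 - alpha))
        llr = 0.0
        for i, x in enumerate(observations, 1):
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return f"worm suspected after {i} observations"
            if llr <= lower:
                return f"benign after {i} observations"
        return "undecided"

    print(sprt([1, 1, 0, 1, 1, 1]))  # chain-like traffic triggers an alarm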
By employing the proposed driver topology, the pre-driver can be greatly simplified, resulting in a remarkable reduction of the overall driver power consumption. It also offers the advantage that the implementation of equalization is straightforward, compared with a conventional voltage-mode driver. Furthermore, the output impedance remains relatively constant while the data is being transmitted, resulting in good signal integrity. For evaluation of the driver performance, a fully functional 20 Gb/s transmitter is implemented, including a PRBS generator, a serializer, and a half-rate clock generator. In order to enhance the overall speed of the digital circuits for 20 Gb/s data transmission, the resistive feedback is applied to the time-critical inverters, which enables shorter rise/fall times. The prototype chip is fabricated in a 65 nm CMOS technology. The implemented driver circuit operates up to the data rate of 20 Gb/s, exhibiting an energy efficiency of 0.4 pJ/b for the output swing of 250 mVpp,diff."} {"_id": "6322fb79165999fb2651b0f03d92ac8e066c11f5", "title": "A Technique for Data Deduplication using Q-Gram Concept with Support Vector Machine", "text": "Several systems that rely on consistent data to offer high quality services, such as digital libraries and e-commerce brokers, may be affected by the existence of duplicates, quasi-replicas, or near-duplicate entries in their repositories. Because of that, there have been significant investments from private and government organizations in developing methods for removing replicas from their data repositories. In this paper, we propose a deduplication technique accordingly. In previous work, duplicate record detection was done using three different similarity measures and a neural network: a feature vector was generated based on the similarity measures, and the neural network was then used to find the duplicate records. In this paper, we develop a Q-gram concept with a support vector machine for the deduplication process. For similarity measurement we use the Dice coefficient, the Damerau–Levenshtein distance, and the Tversky index. Finally, the support vector machine is used to test whether a data record is a duplicate or not. A set of data generated from these similarity measures is used as the input to the proposed system. Two processes characterize the proposed deduplication technique: the training phase and the testing phase. The experimental results showed that the proposed deduplication technique has higher accuracy than the existing method; the accuracy obtained for the proposed deduplication is 88%."} {"_id": "52816eaca8cc8624e497437325ae365cc13d5445", "title": "A New Travelling Wave Antenna in Microstrip", "text": "The radiation characteristics of the first higher order mode of microstrip lines are investigated. As a result, a simple travelling wave antenna element is described, having a larger bandwidth compared with resonator antennas. A method to excite the first higher order mode is shown. A single antenna element is treated theoretically and experimentally, and an array of four antenna elements is demonstrated."} {"_id": "76e02fef750d183623dbbc602033beca8f6ee51a", "title": "The pathogenesis of Campylobacter jejuni-mediated enteritis.", "text": "Campylobacter jejuni, a gram-negative, spiral-shaped bacterium, is a frequent cause of gastrointestinal food-borne illness in humans throughout the world. Illness with C. jejuni ranges from mild to severe diarrheal disease. 
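Of the three similarity measures named in the deduplication abstract above, the Dice coefficient over q-grams is the simplest to sketch; a record pair's vector of such similarities would then be fed to the SVM (the q value here is a hypothetical choice):

    def qgrams(s, q=2):
        return {s[i:i + q] for i in range(len(s) - q + 1)}

    def dice(a, b, q=2):
        # Dice coefficient: twice the shared q-grams over the total q-grams.
        ga, gb = qgrams(a, q), qgrams(b, q)
        if not ga and not gb:
            return 1.0
        return 2 * len(ga & gb) / (len(ga) + len(gb))

    print(round(dice("Jon Smith", "John Smith"), 3))  # high score, likely duplicates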
This article focuses on Campylobacter virulence determinants and their potential role in the development of C. jejuni-mediated enteritis. A model is presented that diagrams the interactions of C. jejuni with the intestinal epithelium. Additional work to identify and characterize C. jejuni virulence determinants is certain to provide novel insights into the diversity of strategies employed by bacterial pathogens to cause disease."} {"_id": "96aa80b7c66d7c4b8a4d3d4f7b29beea1be0939f", "title": "Real-Time Systems - Design Principles for Distributed Embedded Applications", "text": "Now, we come to offer you the right catalogue of books to open. Real-Time Systems: Design Principles for Distributed Embedded Applications is one of the literary works in this world that is suitable reading material. Not only does this book give references, but it will also show you the amazing benefits of reading a book. Developing your mind is needed; moreover, you are the kind of person with great curiosity. So, the book is very appropriate for you."} {"_id": "8bacf342c1893a58a0eec0343f9287fa1608a1eb", "title": "Performance evaluation of synchronous rectification in front-end full-bridge rectifiers", "text": "In this paper, performance evaluation of synchronous rectification in front-end full-bridge rectifiers is presented. Specifically, the implementation of the full-bridge rectifier with the two diode rectifiers connected to the negative output terminal replaced by two synchronous rectifiers (SRs) and the implementation with all four diode rectifiers replaced by four SRs are considered. In both implementations, the SRs are N-channel MOSFETs controlled by sensing the line voltage. Two methods of line-voltage sensing are presented. First, direct line-voltage sensing, and second, indirect line-voltage sensing, i.e., sensing the line voltage between an input terminal and the negative output terminal. The proposed implementations also include a protection method of preventing accidental short circuit between the input or output terminals. In addition, SR power-management methods such as turning off the SRs at light loads and at no load are provided. The protection of SRs at large inrush currents is also discussed. Experimental results obtained on a 90-W (19.5-V/4.6-A) laptop adapter for the universal line voltage range (90-264 Vrms) are given. In the experimental circuit with four SRs, the efficiency improvement at 90-Vrms line voltage and full load is 1.31%."} {"_id": "215aad1520ec1b087ab2ba4043f5e0ecc32e7482", "title": "Reducibility Among Combinatorial Problems", "text": "A large class of computational problems involves the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. Many problems with wide applicability – e.g., set cover, knapsack, hitting set, max cut, and satisfiability – lack a polynomial algorithm for solving them, but also lack a proof that no such polynomial algorithm exists. 
Hence, they remain “open problems.” This paper references the recent work, “On the Reducibility of Combinatorial Problems” [1]. Body: A large class of open problems are mutually convertible via poly-time reductions; hence, either all can be solved in poly-time, or none can. References: [1] R. Karp. Reducibility Among Combinatorial Problems. In Complexity of Computer Computations, 1972."} {"_id": "7c4824cc17bf735a7f80d128cc7ac6c7a8ab8aec", "title": "Grapheme-to-phoneme conversion using Long Short-Term Memory recurrent neural networks", "text": "Grapheme-to-phoneme (G2P) models are key components in speech recognition and text-to-speech systems as they describe how words are pronounced. We propose a G2P model based on a Long Short-Term Memory (LSTM) recurrent neural network (RNN). In contrast to traditional joint-sequence based G2P approaches, LSTMs have the flexibility of taking into consideration the full context of graphemes and transform the problem from a series of grapheme-to-phoneme conversions to a word-to-pronunciation conversion. Training joint-sequence based G2P models requires explicit grapheme-to-phoneme alignments, which are not straightforward since graphemes and phonemes don't correspond one-to-one. The LSTM based approach forgoes the need for such explicit alignments. We experiment with unidirectional LSTM (ULSTM) with different kinds of output delays and deep bidirectional LSTM (DBLSTM) with a connectionist temporal classification (CTC) layer. The DBLSTM-CTC model achieves a word error rate (WER) of 25.8% on the public CMU dataset for US English. Combining the DBLSTM-CTC model with a joint n-gram model results in a WER of 21.3%, which is a 9% relative improvement compared to the previous best WER of 23.4% from a hybrid system."} {"_id": "9a0fff9611832cd78a82a32f47b8ca917fbd4077", "title": "Text-To-Speech Synthesis", "text": ""} {"_id": "178631e0f0e624b1607c7a7a2507ed30d4e83a42", "title": "Speech recognition with deep recurrent neural networks", "text": "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. 
When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score."} {"_id": "841cebeab490ad455df3e7f7bf2cdff0076e59ca", "title": "A comparative analysis of data center network architectures", "text": "Advances in data intensive computing and high performance computing facilitate rapid scaling of data center networks, resulting in a growing body of research exploring new network architectures that enhance scalability, cost effectiveness and performance. Understanding the tradeoffs between these different network architectures could not only help data center operators improve deployments, but also assist system designers to optimize applications running on top of them. In this paper, we present a comparative analysis of several well-known data center network architectures using important metrics, and present our results on different network topologies. We show the tradeoffs between these topologies and present implications on practical data center implementations."} {"_id": "39ae3816ab8895c754112554b4a4112bc4b6c2c6", "title": "Tuning metaheuristics: A data mining based approach for particle swarm optimization", "text": "The paper is concerned with practices for tuning the parameters of metaheuristics. Settings such as the cooling factor in simulated annealing may greatly affect a metaheuristic's efficiency as well as effectiveness in solving a given decision problem. However, procedures for organizing parameter calibration are scarce and commonly limited to particular metaheuristics. We argue that the parameter selection task can appropriately be addressed by means of a data mining based approach. In particular, a hybrid system is devised, which employs regression models to learn suitable parameter values from past moves of a metaheuristic in an online fashion. In order to identify a suitable regression method and, more generally, to demonstrate the feasibility of the proposed approach, a case study of particle swarm optimization is conducted. Empirical results suggest that characteristics of the decision problem as well as search history data indeed embody information that allows suitable parameter values to be determined, and that this type of information can successfully be extracted by means of nonlinear regression models."} {"_id": "459696bdbd8af247d154cd8008aeacdd05fe59e1", "title": "A Unified Approach to the Change of Resolution: Space and Gray-Level", "text": "Multiple resolution analysis of images is a current trend in computer vision. In most cases, only spatial resolution has been considered. However, image resolution has an additional aspect: gray level, or color, resolution. Color resolution has traditionally been considered in the area of computer graphics. By defining a suitable measure for the comparison of images, changes in resolution can be treated with the same tools as changes in color resolution. A gray tone image, for example, can be compared to a halftone image having only two colors (black and white), but of higher spatial resolution. An important application can be in pyramids, one of the most commonly used multiple (spatial) resolution schemes, where this approach provides a tool to change the color resolution as well. 
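The case study in the parameter-tuning abstract above uses particle swarm optimization; a canonical PSO update, whose settings (inertia weight w and acceleration coefficients c1, c2) are exactly the kind of parameters the hybrid system learns, can be sketched as follows (the coefficient values here are hypothetical defaults):

    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        # Velocity blends inertia, attraction to each particle's own best
        # position, and attraction to the swarm's global best position.
        r1 = np.random.rand(*x.shape)
        r2 = np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v

    # Five particles in two dimensions, all attracted toward the origin.
    x = np.random.randn(5, 2); v = np.zeros((5, 2))
    x, v = pso_step(x, v, pbest=x.copy(), gbest=np.zeros(2))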
Increasing color resolution while reducing spatial resolution to retain more image details and prevent aliasing is an example of the possibility of finding optimal combinations of resolution reduction, spatial and color, to best fit an application."} {"_id": "31e2f7c599cd5cd621f13932186db352be4d8eea", "title": "PaperVis: Literature Review Made Easy", "text": "Reviewing the literature of a certain research field is always important for academics. One could use Google-like information seeking tools, but oftentimes he/she would end up obtaining too many possibly related papers, as well as the papers in the associated citation network. During such a process, a user may easily get lost after following a few links for searching or cross-referencing. It is also difficult for the user to identify relevant/important papers from the resulting huge collection of papers. Our work, called PaperVis, endeavors to provide a user-friendly interface to help users quickly grasp the intrinsic complex citation-reference structures among a specific group of papers. We modify the existing Radial Space Filling (RSF) and Bullseye View techniques to arrange involved papers as a node-link graph that better depicts the relationships among them while saving the screen space at the same time. PaperVis applies visual cues to present node attributes and their transitions among interactions, and it categorizes papers into semantically meaningful hierarchies to facilitate ensuing literature exploration. We conduct experiments on the InfoVis 2004 Contest Dataset to demonstrate the effectiveness of PaperVis."} {"_id": "40e8d23231469e6495d3e06086e64df93e9dcfa0", "title": "Front-End Factor Analysis for Speaker Verification", "text": "This paper presents an extension of our previous work which proposes a new speaker representation for speaker verification. In this modeling, a new low-dimensional speaker- and channel-dependent space is defined using a simple factor analysis. This space is named the total variability space because it models both speaker and channel variabilities. Two speaker verification systems are proposed which use this new representation. The first system is a support vector machine-based system that uses the cosine kernel to estimate the similarity between the input data. The second system directly uses the cosine similarity as the final decision score. We tested three channel compensation techniques in the total variability space, which are within-class covariance normalization (WCCN), linear discriminant analysis (LDA), and nuisance attribute projection (NAP). We found that the best results are obtained when LDA is followed by WCCN. We achieved an equal error rate (EER) of 1.12% and MinDCF of 0.0094 using the cosine distance scoring on the male English trials of the core condition of the NIST 2008 Speaker Recognition Evaluation dataset. We also obtained 4% absolute EER improvement for both-gender trials on the 10 s-10 s condition compared to the classical joint factor analysis scoring."} {"_id": "91bb3680cee8cd37b80e07644f66f9cccf1b1aff", "title": "PASCAL Boundaries: A Semantic Boundary Dataset with a Deep Semantic Boundary Detector", "text": "In this paper, we address the task of instance-level semantic boundary detection. 
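The cosine distance scoring used as the decision rule in the i-vector abstract above reduces to a one-line computation; this sketch assumes the two total-variability vectors have already been channel-compensated (e.g., by LDA followed by WCCN):

    import numpy as np

    def cosine_score(w_target, w_test):
        # Score is the cosine of the angle between the two i-vectors;
        # the trial is accepted if the score exceeds a tuned threshold.
        return float(np.dot(w_target, w_test) /
                     (np.linalg.norm(w_target) * np.linalg.norm(w_test)))

    rng = np.random.default_rng(0)
    w1, w2 = rng.standard_normal(400), rng.standard_normal(400)
    print(cosine_score(w1, w2))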
To this end, we generate a large database consisting of more than 10k images (which is 20× bigger than existing edge detection databases) along with ground truth boundaries between 459 semantic classes, including instances from both foreground objects and different types of background, and call it the PASCAL Boundaries dataset. This is a timely introduction of the dataset since results on the existing standard dataset for measuring edge detection performance, i.e., BSDS500, have started to saturate. In addition to generating a new dataset, we propose a deep network-based multi-scale semantic boundary detector and name it Multi-scale Deep Semantic Boundary Detector (M-DSBD). We evaluate M-DSBD on PASCAL Boundaries and compare it to baselines. We test the transfer capabilities of our model by evaluating on MS-COCO and BSDS500 and show that our model transfers well."} {"_id": "7ee214c411ca42c323af6e6cdc96058d5140aefe", "title": "Service-Oriented Architecture for Big Data Analytics in Smart Cities", "text": "A smart city has recently become an aspiration for many cities around the world. These cities are looking to apply the smart city concept to improve sustainability, quality of life for residents, and economic development. The smart city concept depends on employing a wide range of advanced technologies to improve the performance of various services and activities such as transportation, energy, healthcare, and education, while at the same time improving the city's resource utilization and initiating new business opportunities. One of the promising technologies to support such efforts is big data technology. Effective and intelligent use of big data accumulated over time in various sectors can offer many advantages to enhance decision making in smart cities. In this paper we identify the different types of decision making processes involved in smart cities. Then we propose a service-oriented architecture to support big data analytics for decision making in smart cities. This architecture allows for integrating different technologies such as fog and cloud computing to support different types of analytics and decision-making operations needed to effectively utilize available big data. It provides different functions and capabilities to use big data and provide smart capabilities as services that the architecture supports. As a result, different big data applications will be able to access and use these services for varying purposes within the smart city."} {"_id": "bbe0f0b3e2d60c4f96d9d84f97dc8a9be4f72802", "title": "Automatic C-to-CUDA Code Generation for Affine Programs", "text": "Graphics Processing Units (GPUs) offer tremendous computational power. CUDA (Compute Unified Device Architecture) provides a multi-threaded parallel programming model, facilitating high performance implementations of general-purpose computations. However, the explicitly managed memory hierarchy and multi-level parallel view make manual development of high-performance CUDA code rather complicated. Hence the automatic transformation of sequential input programs into efficient parallel CUDA programs is of considerable interest. This paper describes an automatic code transformation system that generates parallel CUDA code from input sequential C code, for regular (affine) programs. 
Using and adapting publicly available tools that have made polyhedral compiler optimization practically effective, we develop a C-to-CUDA transformation system that generates two-level parallel CUDA code that is optimized for efficient data access. The performance of automatically generated code is compared with manually optimized CUDA code for a number of benchmarks. The performance of the automatically generated CUDA code is quite close to hand-optimized CUDA code and considerably better than the benchmarks' performance on a multicore CPU."} {"_id": "42e0a535049b2183f167235fc79f0deace4f11c3", "title": "An Experimental Evaluation of Apple Siri and Google Speech Recognition", "text": "We perform an experimental evaluation of two popular cloud-based speech recognition systems. Cloud-based speech recognition systems enhance Web surfing, transportation, health care, etc. Using voice commands helps drivers stay connected to the Internet by avoiding traffic safety risks. The performance of these types of applications should be robust under difficult network conditions. User frustration with network traffic problems can affect the usability of these applications. We evaluate the performance of two popular cloud-based speech recognition applications, Apple Siri and Google Speech Recognition (GSR), under various network conditions. We evaluate transcription delay and accuracy of transcription of each application under different packet loss and jitter values. Results of our study show that the performance of cloud-based speech recognition systems can be affected by jitter and packet loss, which commonly occur over WiFi and cellular network connections. Keywords: Cloud Speech Recognition, Quality of Experience, Software Measurement, Streaming Media, Real-time Systems."} {"_id": "f3f191aa4b27789fef02e2a40e11c33ea408c86c", "title": "Interactive evolution for the procedural generation of tracks in a high-end racing game", "text": "We present a framework for the procedural generation of tracks for a high-end car racing game (TORCS) using interactive evolution. The framework maintains multiple populations and allows users to work either on their own population (in single-user mode) or to collaborate with other users on a shared population. Our architecture comprises a web frontend and an evolutionary backend. The former manages the interaction with users (e.g., logs registered and anonymous users, collects evaluations, provides access to all the evolved populations) and maintains the database server that stores all the present/past populations. The latter runs all the tasks related to evolution (selection, recombination and mutation) and all the tasks related to the target racing game (e.g., the track generation). We performed two sets of experiments involving five human subjects to evolve racing tracks alone (in a single-user mode) or cooperatively. Our preliminary results on five human subjects show that, in all the experiments, there is an increase of users' satisfaction as the evolution proceeds. Users stated that they perceived improvements in the quality of the individuals between subsequent populations and that, at the end, the process produced interesting tracks."} {"_id": "9e5a13f3bc2580fd16bab15e31dc632148021f5d", "title": "Bandwidth-Enhanced Low-Profile Cavity-Backed Slot Antenna by Using Hybrid SIW Cavity Modes", "text": "A bandwidth enhancement method for a low-profile substrate integrated waveguide (SIW) cavity-backed slot antenna is presented in this paper. 
Bandwidth enhancement is achieved by simultaneously exciting two hybrid modes in the SIW-backed cavity and merging them within the required frequency range. These two hybrid modes, whose dominant fields are located in different halves of the SIW cavity, are two different combinations of two cavity resonances. This design method has been validated by experiments. Compared with those of a previously presented SIW cavity-backed slot antenna, the fractional impedance bandwidth of the proposed antenna is enhanced from 1.4% to 6.3%, its gain and radiation efficiency are also slightly improved, to 6.0 dBi and 90%, and its SIW cavity size is reduced by about 30%. The proposed antenna exhibits a low cross-polarization level and a high front-to-back ratio. It still retains the advantages of a low profile, low fabrication cost, and easy integration with planar circuits."} {"_id": "88ac3acda24e771a8d4659b48205adb21c6933fa", "title": "Ontology semantic approach to extraction of knowledge from holy quran", "text": "With the continued demand for Islamic knowledge, which is mainly based on the Quran as a source of knowledge and wisdom, systems that facilitate an easy search of the content of the Quran remain a considerable challenge. Although in recent years there have been tools for Quran search, most of these tools are based on keyword search, meaning that the user needs to know the correct keywords before being able to retrieve the content of al-Quran. In this paper, we propose a system that supports the end user in querying and exploring the Quran ontology. The system comprises user query reformulation against the Quran ontology stored and annotated in the knowledge base. The Quran ontology consists of noun concepts identified in al-Quran, and the relationships that exist between these concepts. The user writes a query in natural language and the proposed system reformulates the query to match the content found in the knowledge base in order to retrieve the relevant answer. The answer is represented by the Quranic verse related to the user query."} {"_id": "ede93aff6b747938e4ed6cf2fae3daf6b66520f7", "title": "A survey on text mining techniques", "text": "Text mining is a technique to find meaningful patterns in available text documents. Pattern discovery from text and the organization of documents is a well-known problem in data mining. Analysis of text content and categorization of documents is a complex task of data mining. In order to find an efficient and effective technique for text categorization, various techniques of text categorization and classification have recently been developed. Some of them are supervised and some unsupervised approaches to document organization. This paper discusses different methods of text categorization and cluster analysis for text documents. In addition, a new text mining technique is proposed for future implementation. Keywords: text mining, classification, cluster analysis, survey"} {"_id": "28a500f4422032e42315e40b44bfb1db72d828d2", "title": "Bullying among young adolescents: the strong, the weak, and the troubled.", "text": "OBJECTIVES: Bullying and being bullied have been recognized as health problems for children because of their association with adjustment problems, including poor mental health and more extreme violent behavior. It is therefore important to understand how bullying and being bullied affect the well-being and adaptive functioning of youth. 
We sought to use multiple data sources to better understand the psychological and social problems exhibited by bullies, victims, and bully-victims.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nAnalysis of data from a community sample of 1985 mostly Latino and black 6th graders from 11 schools in predominantly low socioeconomic status urban communities (with a 79% response rate).\n\n\nMAIN OUTCOME MEASURES\nPeer reports of who bullies and who is victimized, self-reports of psychological distress, and peer and teacher reports of a range of adjustment problems.\n\n\nRESULTS\nTwenty-two percent of the sample was classified as involved in bullying as perpetrators (7%), victims (9%), or both (6%). Compared with other students, these groups displayed school problems and difficulties getting along with classmates. Despite increased conduct problems, bullies were psychologically strongest and enjoyed high social standing among their classmates. In contrast, victims were emotionally distressed and socially marginalized among their classmates. Bully-victims were the most troubled group, displaying the highest level of conduct, school, and peer relationship problems.\n\n\nCONCLUSIONS\nTo be able to intervene with bullying, it is important to recognize the unique problems of bullies, victims, and bully-victims. In addition to addressing these issues directly with their patients, pediatricians can recommend school-wide antibullying approaches that aim to change peer dynamics that support and maintain bullying."} {"_id": "1d758774416b44ad828ebf4ab35e23da14a273e8", "title": "The MagicBook - Moving Seamlessly between Reality and Virtuality", "text": "For more than a decade researchers have tried to create intuitive computer interfaces by blending reality and virtual reality. The goal is for people to interact with the digital domain as easily as with the real world. Various approaches help us achieve this\u2014in the area of tangible interfaces, we use real objects as interface widgets; in augmented reality, researchers overlay 3D virtual imagery onto the real world; and in VR interfaces, we entirely replace the real world with a computer-generated environment. As Milgram pointed out [1], these types of computer interfaces can be placed along a continuum according to how much of the user's environment is computer generated (Figure 1). Tangible interfaces lie far to the left on this reality\u2013virtuality line, while immersive virtual environments are at the right extremity. Most current user interfaces exist as discrete points along this continuum. However, human activity can't always be broken into discrete components and for many tasks users may prefer to move seamlessly along the reality\u2013virtuality continuum. This proves true when interacting with 3D graphical content, either creating virtual models or viewing them. For example, if people want to experience a virtual scene from different scales, then immersive virtual reality may be ideal. If they want to have a face-to-face discussion while viewing the virtual scene, an augmented reality interface may be best [2]. The MagicBook project is an early attempt to explore how we can use a physical object to smoothly transport users between reality and virtuality. Young children often fantasize about flying into the pages of a fairy tale and becoming part of the story. The MagicBook project makes this fantasy a reality using a normal book as the main interface object.
People can turn the pages of the book, look at the pictures, and read the text without any additional technology (Figure 2a). However, if a person looks at the pages through an augmented reality display, they see 3D virtual models appearing out of the pages (Figure 2b). The models appear attached to the real page so users can see the augmented reality scene from any perspective by moving themselves or the book. The virtual content can be any size and is animated, so the augmented reality view is an enhanced version of a traditional 3D pop-up book. Users can change the virtual models by turning the book pages. When they see a scene they particularly like, \u2026"} {"_id": "4c98202e345a55b7b1d8c2347def21cac05935e6", "title": "Three stage 6\u201318 GHz high gain and high power amplifier based on GaN technology", "text": "A monolithic three-stage HPA has been developed for wideband applications. This MMIC is fabricated in the UMS 0.25 \u00b5m GaN technology on a SiC substrate. At 18 GHz, the MMIC achieved 10 W of output power in CW mode, with 20 dB linear gain and 20% power-added efficiency. The HPA provided 6 to 10 W of output power over 6 to 18 GHz with a minimum small-signal gain of 18 dB. These performances are very promising and very close to the simulations, which should allow further improvement in the short term. This demonstration is the first MMIC in the UMS 0.25 \u00b5m GaN technology."} {"_id": "912ab353ff9f2baac0ec64f80a05477fba07b4a7", "title": "Emotion regulation for frustrating driving contexts", "text": "Driving is a challenging task because of the physical, attentional, and emotional demands. When drivers become frustrated by events, their negative emotional state can escalate dangerously. This study examines behavioral and attitudinal effects of cognitively reframing frustrating events. Participants (N = 36) were asked to navigate a challenging driving course that included frustrating events such as long lights and being cut off. Drivers were randomly assigned to three conditions. After encountering a frustrating event, drivers in a reappraisal-down condition heard voice prompts that reappraised the event in an effort to deflate negative reactions. Drivers in the second group, reappraisal-up, heard voice prompts that brought attention to the negative actions of vehicles and pedestrians. Drivers in a silent condition drove without hearing any voice prompts. Participants in the reappraisal-down condition had better driving behavior and reported fewer negative emotions than participants in the other conditions."} {"_id": "42338a0b160eb10decba0397aa83bda530d4803f", "title": "Facial Age Group Classification", "text": "Estimating a person's age group automatically via facial image analysis has many potential real-world applications, such as human-computer interaction and multimedia communication. However, it is still challenging for existing computer vision systems to estimate age group automatically and effectively. The aging process is determined not only by a person\u2019s genes, but also by many external factors, such as health, lifestyle, living location, and weather conditions. Males and females may also age differently. An age group classification system for facial images is proposed in this paper. Five age groups (babies, children, young adults, middle-aged adults, and old adults) are used in the classification system. The process of the system is divided into three phases: location, feature extraction, and age classification.
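A minimal sketch of the three-phase pipeline just described (location, feature extraction, age classification). The feature extractors and thresholds below are hypothetical stubs for illustration, not the paper's actual features.

```python
import numpy as np

def locate_face(image):
    # Phase 1: location. A real system would run a face detector here;
    # this stub assumes the image is already a cropped face.
    return image

def geometric_features(face):
    # Phase 2a: e.g., ratios of distances between facial landmarks,
    # which differ for babies/children. Hypothetical stub value.
    return np.array([0.45])

def wrinkle_features(face):
    # Phase 2b: e.g., density of edge responses in forehead/eye regions.
    # Hypothetical stub value.
    return np.array([0.12])

def classify_age_group(image):
    face = locate_face(image)
    geo, wrk = geometric_features(face), wrinkle_features(face)
    # Phase 3: rule-based classification (all thresholds are invented).
    if geo[0] > 0.5:
        return "baby" if geo[0] > 0.6 else "child"
    if wrk[0] < 0.2:
        return "young adult"
    return "middle-aged adult" if wrk[0] < 0.5 else "old adult"

print(classify_age_group(np.zeros((64, 64))))
```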
Geometric features are used to distinguish whether the face belongs to a baby or a child. Wrinkle features are used to classify the image into one of the three adult groups: young adults, middle-aged adults, and old adults."} {"_id": "e35e9dbadc9f38abdf49f620635438e3d6f9e8d2", "title": "Reflections on craft: probing the creative process of everyday knitters", "text": "Crafters today blend age-old techniques such as weaving and pottery with new information and communication technologies such as podcasts, online instructions, and blogs. This intersection of tradition and modernity provides an interesting site for understanding the adoption of new technology. We present a qualitative study of seven knitters introduced to Spyn - a system that enables the association of digitally recorded messages with physical locations on knit fabric. We gave knitters Spyn in order to elicit their reflections on their craft practices and learn from their interactions with material, people, and technology. While creating artifacts for friends and loved ones, knitters expanded the creative and communicative potential of their craftwork: knitters envisioned travel journals in knitted potholders and sung lullabies in knitted hats. We describe how these unusual craft activities provide a useful lens onto contemporary technological appropriation."} {"_id": "29fbe4a6c55f8eae8ff40841440e4cb198cd9aec", "title": "W-net: Bridged U-net for 2D Medical Image Segmentation", "text": "In this paper, we focus on three problems in deep learning based medical image segmentation. Firstly, U-net, a popular model for medical image segmentation, is difficult to train as the number of convolutional layers increases, even though a deeper network usually generalizes better because of its additional learnable parameters. Secondly, the exponential linear unit (ELU), as an alternative to ReLU, is not much different from ReLU when the network of interest gets deep. Thirdly, the Dice loss, one of the pervasive loss functions for medical image segmentation, is not effective when the prediction is close to the ground truth and causes oscillation during training. To address the aforementioned three problems, we propose and validate a deeper network that can fit medical image datasets, which are usually small in sample size. Meanwhile, we propose a new loss function to accelerate the learning process and a combination of different activation functions to improve the network performance. Our experimental results suggest that our network is comparable or superior to state-of-the-art methods."} {"_id": "13890c2699dd12a4f4d722b9f35a666c98b8de45", "title": "Multimodal imaging of optic disc drusen.", "text": "PURPOSE\nTo evaluate optic disc drusen, extracellular protein deposits known to contain numerous aggregates of mitochondria, using multimodal imaging featuring optical coherence tomography (OCT) and autofluorescence imaging.\n\n\nDESIGN\nRetrospective observational case series.\n\n\nMETHODS\nEyes with optic nerve drusen were examined with enhanced depth imaging (EDI)-OCT, swept source OCT, and fundus autofluorescence using a fundus camera.\n\n\nRESULTS\nTwenty-six eyes of 15 patients with optic disc drusen were evaluated. EDI-OCT and swept source OCT showed multiple optic disc drusen at different levels; most were located immediately anterior to the lamina cribrosa. The drusen were ovoid regions of lower reflectivity that were bordered by hyperreflective material, and in 12 eyes (46.2%) there were internal hyperreflective foci.
The mean diameter of the optic disc drusen as measured in OCT images was 686.8 (standard deviation \u00b1 395.2) \u03bcm. There was a significant negative correlation between the diameter of the optic disc drusen and the global retinal nerve fiber layer thickness (r = -0.61, P = .001). There was also a significant negative correlation between the proportion of the optic disc area occupied by optic nerve drusen, as detected by autofluorescence imaging, and the global retinal nerve fiber layer thickness (r = -0.63, P = .001).\n\n\nCONCLUSIONS\nDeeper-penetration OCT imaging demonstrated the internal characteristics of optic disc drusen and their relationship with the lamina cribrosa in vivo. This study also showed that both the larger the drusen and the more area of the optic canal occupied by drusen, the greater the associated retinal nerve fiber layer abnormalities."} {"_id": "91e6a1d594895ac2cbcbfc591b4244ca039d7109", "title": "A Compact Fractal Loop Rectenna for RF Energy Harvesting", "text": "This letter presents a compact fractal loop rectenna for RF energy harvesting at GSM1800 bands. First, a fractal loop antenna with novel in-loop ground-plane impedance matching is proposed for the rectenna design. Also, a high-efficiency rectifier is designed in the loop antenna to form a compact rectenna. Measured results show that an efficiency of 61% and an output dc voltage of 1.8\u00a0V have been achieved across a 12-k\u2126 load resistor for a 10\u00a0\u03bcW/cm2 power density at 1.8\u00a0GHz. The rectenna is able to power up a battery-less LCD watch at a distance of 10\u00a0m from the cell tower. The proposed rectenna is compact, easy to fabricate, and useful for various energy harvesting applications."} {"_id": "0e4963c7d2c6f0be422dbef0b45473dc43503ceb", "title": "Explore, Exploit or Listen: Combining Human Feedback and Policy Model to Speed up Deep Reinforcement Learning in 3D Worlds", "text": "We describe a method to use discrete human feedback to enhance the performance of deep learning agents in virtual three-dimensional environments by extending deep-reinforcement learning to model the confidence and consistency of human feedback. This enables deep reinforcement learning algorithms to determine the most appropriate time to listen to the human feedback, exploit the current policy model, or explore the agent\u2019s environment. Managing the trade-off between these three strategies allows DRL agents to be robust to inconsistent or intermittent human feedback. Through experimentation using a synthetic oracle, we show that our technique improves the training speed and overall performance of deep reinforcement learning in navigating three-dimensional environments using Minecraft. We further show that our technique is robust to highly inaccurate human feedback and can also operate when no human feedback is given."} {"_id": "4c68e7eff1da14003cc7efbfbd9a0a0a3d5d4968", "title": "Making sense of implementation theories, models and frameworks", "text": "BACKGROUND\nImplementation science has progressed towards increased use of theoretical approaches to provide better understanding and explanation of how and why implementation succeeds or fails.
The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers.\n\n\nDISCUSSION\nTheoretical approaches used in implementation science have three overarching aims: describing and/or guiding the process of translating research into practice (process models); understanding and/or explaining what influences implementation outcomes (determinant frameworks, classic theories, implementation theories); and evaluating implementation (evaluation frameworks). This article proposes five categories of theoretical approaches to achieve three overarching aims. These categories are not always recognized as separate types of approaches in the literature. While there is overlap between some of the theories, models and frameworks, awareness of the differences is important to facilitate the selection of relevant approaches. Most determinant frameworks provide limited \"how-to\" support for carrying out implementation endeavours since the determinants usually are too generic to provide sufficient detail for guiding an implementation process. And while the relevance of addressing barriers and enablers to translating research into practice is mentioned in many process models, these models do not identify or systematically structure specific determinants associated with implementation success. Furthermore, process models recognize a temporal sequence of implementation endeavours, whereas determinant frameworks do not explicitly take a process perspective of implementation."} {"_id": "314a508686906f48d55567694fdf3bff50a4604d", "title": "Example-based video color grading", "text": "In most professional cinema productions, the color palette of the movie is painstakingly adjusted by a team of skilled colorists -- through a process referred to as color grading -- to achieve a certain visual look. The time and expertise required to grade a video makes it difficult for amateurs to manipulate the colors of their own video clips. In this work, we present a method that allows a user to transfer the color palette of a model video clip to their own video sequence. We estimate a per-frame color transform that maps the color distributions in the input video sequence to that of the model video clip. Applying this transformation naively leads to artifacts such as bleeding and flickering. Instead, we propose a novel differential-geometry-based scheme that interpolates these transformations in a manner that minimizes their curvature, similarly to curvature flows. In addition, we automatically determine a set of keyframes that best represent this interpolated transformation curve, and can be used subsequently, to manually refine the color grade. We show how our method can successfully transfer color palettes between videos for a range of visual styles and a number of input video clips."} {"_id": "563e656203f29f0cbabc5cf0611355ba79ae4320", "title": "High Accuracy Optical Flow Estimation Based on a Theory for Warping", "text": "We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. 
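For reference, the continuous energy minimized by this class of warping methods is commonly written as follows, using a robust penalty function; the notation follows common presentations of Brox et al. rather than quoting the paper verbatim:

```latex
E(u,v) = \int_{\Omega} \Psi\!\left( \left| I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x}) \right|^2
       + \gamma \left| \nabla I(\mathbf{x}+\mathbf{w}) - \nabla I(\mathbf{x}) \right|^2 \right) \mathrm{d}\mathbf{x}
     + \alpha \int_{\Omega} \Psi\!\left( \left| \nabla_3 u \right|^2 + \left| \nabla_3 v \right|^2 \right) \mathrm{d}\mathbf{x},
\qquad \Psi(s^2) = \sqrt{s^2 + \varepsilon^2}
```

Here w = (u, v, 1) is the flow, the first term encodes brightness constancy, the gamma-weighted term encodes gradient constancy, the alpha-weighted term is the discontinuity-preserving spatio-temporal smoothness constraint (nabla_3 being the spatio-temporal gradient), and keeping the data terms non-linearised is what permits large displacements.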
We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping, which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise. In Proc. 8th European Conference on Computer Vision, Springer LNCS 3024, T. Pajdla and J. Matas (Eds.), vol. 4, pp. 25-36, Prague, Czech Republic, May 2004. Received the Longuet-Higgins Best Paper Award."} {"_id": "8ca53d187f6beb3d1e4fb0d1b68544d578c86c53", "title": "A Naturalistic Open Source Movie for Optical Flow Evaluation", "text": "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set, suggesting that further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image and flow statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available."} {"_id": "00a7370518a6174e078df1c22ad366a2188313b5", "title": "Determining Optical Flow", "text": "Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image."} {"_id": "b1ad220e3e7bc095fd02b02a800933258adc5c72", "title": "Biometric-oriented Iris Identification Based on Mathematical Morphology", "text": "A new method for biometric identification of human irises is proposed in this paper. The method is based on morphological image processing for the identification of unique skeletons of iris structures, which are then used for feature extraction.
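A small sketch of the skeleton representation this iris method relies on, assuming scikit-image and SciPy are available; the toy pattern and the neighbor-count rules for endpoints and branch points are illustrative, not the paper's implementation.

```python
import numpy as np
from skimage.morphology import skeletonize
from scipy.ndimage import convolve

# Toy binary "iris texture" pattern; a real pipeline would first segment and
# unwrap the iris region (those steps are not detailed in the abstract).
pattern = np.zeros((32, 32), dtype=bool)
pattern[8:24, 8:24] = True
pattern[12:20, 12:20] = False          # a ring-like structure

skeleton = skeletonize(pattern)

# Count skeleton neighbors in a 3x3 window (the count includes the pixel itself).
neighbors = convolve(skeleton.astype(int), np.ones((3, 3), int), mode="constant")
endpoints = skeleton & (neighbors == 2)    # self + exactly 1 neighbor
branches  = skeleton & (neighbors >= 4)    # self + 3 or more neighbors
print(endpoints.sum(), branches.sum())
```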
In this approach, local iris features are represented by the most stable nodes, branches and endpoints extracted from the identified skeletons. Assessment of the proposed method was done using subsets of images from the University of Bath Iris Image Database (1000 images) and the CASIA Iris Image Database (500 images). Compelling experimental results demonstrate the viability of using the proposed morphological approach for iris recognition when compared to a state-of-the-art algorithm that uses a global feature extraction approach."} {"_id": "a6034dba7b2e973eb17676933f472ddfe4ab7ad5", "title": "Genetic Programming + Unfolding Embryology in Automated Layout Planning", "text": "Automated layout planning aims to implement computational methods for the generation and optimization of floor plans, considering the spatial configuration and the assignment of activities. Sophisticated strategies such as Genetic Algorithms have been implemented as heuristics for finding good solutions. However, the generative forces that derive from social structures have often been neglected. This research aims to illustrate that the data that encode a layout\u2019s social and cultural generative forces can be implemented within an evolutionary system for the design of residential layouts. For that purpose a co-operative system was created, composed of a Genetic Programming algorithm and an agent-based unfolding embryology procedure that assigns activities to the spaces generated by the GP algorithm. The assignment of activities is a recursive process which follows instructions encoded as permeability graphs. Furthermore, the Ranking Sum Fitness evaluation method is proposed and applied to achieve multiobjective optimization. Its efficiency is tested against the Weighted-Sum Fitness function. The system\u2019s results, both numerical and spatial, are compared to the results of a conventional evolutionary approach. This comparison showed that, in general, the proposed system can yield better solutions."} {"_id": "0483a8aa5992185b38f69c87d2f5db15a9de3294", "title": "The Seattle Heart Failure Model: prediction of survival in heart failure.", "text": "BACKGROUND\nHeart failure has an annual mortality rate ranging from 5% to 75%. The purpose of the study was to develop and validate a multivariate risk model to predict 1-, 2-, and 3-year survival in heart failure patients with the use of easily obtainable characteristics relating to clinical status, therapy (pharmacological as well as devices), and laboratory parameters.\n\n\nMETHODS AND RESULTS\nThe Seattle Heart Failure Model was derived in a cohort of 1125 heart failure patients with the use of a multivariate Cox model. For medications and devices not available in the derivation database, hazard ratios were estimated from published literature. The model was prospectively validated in 5 additional cohorts totaling 9942 heart failure patients and 17,307 person-years of follow-up. The accuracy of the model was excellent, with predicted versus actual 1-year survival rates of 73.4% versus 74.3% in the derivation cohort and 90.5% versus 88.5%, 86.5% versus 86.5%, 83.8% versus 83.3%, 90.9% versus 91.0%, and 89.6% versus 86.7% in the 5 validation cohorts. For the lowest score, the 2-year survival was 92.8% compared with 88.7%, 77.8%, 58.1%, 29.5%, and 10.8% for scores of 0, 1, 2, 3, and 4, respectively. The overall receiver operating characteristic area under the curve was 0.729 (95% CI, 0.714 to 0.744).
The model also allowed estimation of the benefit of adding medications or devices to an individual patient's therapeutic regimen.\n\n\nCONCLUSIONS\nThe Seattle Heart Failure Model provides an accurate estimate of 1-, 2-, and 3-year survival with the use of easily obtained clinical, pharmacological, device, and laboratory characteristics."} {"_id": "6159347706d25b51ae35e030e818625a1dbbc09d", "title": "Open-Loop Precision Grasping With Underactuated Hands Inspired by a Human Manipulation Strategy", "text": "In this paper, we demonstrate an underactuated finger design and grasping method for precision grasping and manipulation of small objects. Taking inspiration from the human grasping strategy for picking up objects from a flat surface, we introduce the flip-and-pinch task, in which the hand picks up a thin object by flipping it into a stable configuration between two fingers. Despite the fact that finger motions are not fully constrained by the hand actuators, we demonstrate that the hand and fingers can interact with the table surface to produce a set of constraints that result in a repeatable quasi-static motion trajectory. Even when utilizing only open-loop kinematic playback, this approach is shown to be robust to variation in object size and hand position. Variation of up to 20\u00b0 in orientation and 10 mm in hand height still results in experimental success rates of 80% or higher. These results suggest that the advantages of underactuated, adaptive robot hands can be carried over from basic grasping tasks to more dexterous tasks."} {"_id": "74643ad2486e432b5ca55eee233913ec17a6269b", "title": "CHAPTER 5 SOFTWARE TESTING", "text": "As more extensively discussed in the Software Quality chapter of the Guide to the SWEBOK, the right attitude towards quality is one of prevention: it is obviously much better to avoid problems than to repair them. Testing must be seen as a means primarily for checking whether the prevention has been effective, but also for identifying anomalies in those cases in which, for some reason, it has not been. It is perhaps obvious, but worth recognizing, that even after successfully completing an extensive testing campaign, the software could still contain faults; nor is defect-free code synonymous with a quality product. The remedy for system failures that are experienced after delivery is provided by (corrective) maintenance actions. Maintenance topics are covered in the Software Maintenance chapter of the Guide to the SWEBOK."} {"_id": "d28b2a49386dfffa320cb835906e0b8dea1ea046", "title": "Warehousing and Protecting Big Data: State-Of-The-Art-Analysis, Methodologies, Future Challenges", "text": "This paper presents a comprehensive critical survey on the issues of warehousing and protecting big data, which are recognized as critical challenges of emerging big data research. Indeed, both are critical aspects to be considered in order to build truly high-performance and highly-flexible big data management systems. We report on state-of-the-art approaches, methodologies and trends, and finally conclude by providing open problems and challenging research directions to be considered by future efforts."} {"_id": "8eaa1a463b87030a72ee7c54d15b2993bc247f0d", "title": "Hierarchical Peer-To-Peer Systems", "text": "Structured peer-to-peer (P2P) lookup services\u2014such as Chord, CAN, Pastry and Tapestry\u2014organize peers into a flat overlay network and offer distributed hash table (DHT) functionality.
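A minimal sketch of the flat, Chord-style key-to-peer mapping these DHTs provide (SHA-1 onto an identifier ring, a key owned by its successor); the hierarchical scheme described next adds a top-level lookup that first resolves the group responsible for the key.

```python
import hashlib
from bisect import bisect_right

def chord_id(key, m=16):
    """Map a key onto a 2**m identifier ring via SHA-1 (Chord's hash)."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return h % (2 ** m)

class Ring:
    def __init__(self, peers, m=16):
        self.m = m
        self.ids = sorted(chord_id(p, m) for p in peers)
        self.by_id = {chord_id(p, m): p for p in peers}

    def successor(self, key):
        """The peer responsible for a key is the key's successor on the ring."""
        k = chord_id(key, self.m)
        i = bisect_right(self.ids, k)
        return self.by_id[self.ids[i % len(self.ids)]]  # wrap around the ring

ring = Ring(["peer-a", "peer-b", "peer-c", "peer-d"])
print(ring.successor("some-data-key"))
```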
In these systems, data is associated with keys and each peer is responsible for a subset of the keys. We study hierarchical DHTs, in which peers are organized into groups, and each group has its autonomous intra-group overlay network and lookup service. The groups themselves are organized in a top-level overlay network. To find a peer that is responsible for a key, the top-level overlay first determines the group responsible for the key; the responsible group then uses its intra-group overlay to determine the specific peer that is responsible for the key. After providing a general framework for hierarchical P2P lookup, we consider the specific case of a two-tier hierarchy that uses Chord for the top level. Our analysis shows that by designating the most reliable peers in the groups as superpeers, the hierarchical design can significantly reduce the expected number of hops in Chord. We also propose a scalable design for managing the groups and the superpeers."} {"_id": "915907859ff9c648c171702289d66454ab4e66e2", "title": "An Optical 6-axis Force Sensor for Brain Function Analysis using fMRI", "text": "This paper presents a 6-axis optical force sensor that can be used in fMRI. fMRI is widely used for studying human brain function. Simultaneous measurement of brain activity and peripheral information, such as grip force, enables more precise investigations in studies of motor function. However, conventional force sensors cannot be used in the fMRI environment, since their metal elements generate noise that severely contaminates the fMRI signals. An optical 2-axis force sensor developed by Tada et al. [1] using photo sensors and optical fibers resolved these problems: it removed all magnetic components from the sensing part and detected minute displacements by measuring the amount of light traveling through the optical fibers. However, several problems remained with this optical force sensor. Firstly, its accuracy was not high compared to conventional force sensors. Secondly, it was not sufficiently robust against contact forces on the optical fibers. In this paper, the accuracy and the stability of the sensor output are improved by novel methods of fixing the fibers and a new arithmetic circuit. Furthermore, an optical 6-axis force sensor is developed based on these improvements, and the usefulness of our sensor for brain function analysis is confirmed in fMRI experiments."} {"_id": "23e8f10b3b1c82191a43ea86331ae668d26efb0a", "title": "Pathologies in information bottleneck for deterministic supervised learning", "text": "Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y. To do so, IB identifies an intermediate \u201cbottleneck\u201d variable T that has low mutual information I(X;T) and high mutual information I(Y;T). The IB curve characterizes the set of bottleneck variables that achieve maximal I(Y;T) for a given I(X;T), and is typically explored by optimizing the IB Lagrangian, I(Y;T) \u2212 \u03b2I(X;T). Recently, there has been interest in applying IB to supervised learning, particularly for classification problems that use neural networks. In most classification problems, the output class Y is a deterministic function of the input X, which we refer to as \u201cdeterministic supervised learning\u201d.
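As a reference point for the IB quantities above, a minimal computation of mutual information for discrete variables; the toy joint distribution makes Y a deterministic function of X, in which case I(X;Y) = H(Y).

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

# Y a deterministic function of X: each x maps to exactly one y.
p_xy = np.array([[0.25, 0.0],
                 [0.25, 0.0],
                 [0.0, 0.25],
                 [0.0, 0.25]])
print(mutual_information(p_xy))  # I(X;Y) = H(Y) = 1 bit here
```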
We demonstrate three pathologies that arise when IB is used in any scenario where Y is a deterministic function of X: (1) the IB curve cannot be recovered by optimizing the IB Lagrangian for different values of \u03b2; (2) there are \u201cuninteresting\u201d solutions at all points of the IB curve; and (3) for classifiers that achieve low error rates, the activity of different hidden layers will not exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We finish by demonstrating these issues on the MNIST dataset."} {"_id": "60ec519d58ab29184513a1db1692973c40d87050", "title": "Convolution Aware Initialization", "text": "Initialization of parameters in deep neural networks has been shown to have a big impact on the performance of the networks (Mishkin & Matas, 2015). The initialization scheme devised by He et al. allowed convolution activations to carry a constrained mean, which allowed deep networks to be trained effectively (He et al., 2015a). Orthogonal initializations, and more generally orthogonal matrices in standard recurrent networks, have been proved to eradicate the vanishing and exploding gradient problem (Pascanu et al., 2012). The majority of current initialization schemes do not take fully into account the intrinsic structure of the convolution operator. Using the duality of the Fourier transform and the convolution operator, Convolution Aware Initialization builds orthogonal filters in the Fourier space, and using the inverse Fourier transform represents them in the standard space. With Convolution Aware Initialization we noticed not only higher accuracy and lower loss, but also faster convergence. We achieve new state of the art on the CIFAR10 dataset, and achieve close to state of the art on various other tasks."} {"_id": "fdbc19342cdd233dd90d9db9477cf603cf3149b5", "title": "A Comprehensive Survey on Bengali Phoneme Recognition", "text": "Various hidden Markov model based phoneme recognition methods for the Bengali language are reviewed. Automatic phoneme recognition for Bengali using multilayer neural networks is also reviewed, and the usefulness of multilayer neural networks over single-layer neural networks is discussed. Bangla phonetic feature table construction and enhancement for Bengali speech recognition are also discussed, and a comparison among these methods is given."} {"_id": "f07233b779ee85f46dd13cf33ece8d89f67ecc2b", "title": "Domain Ontology for Programming Languages", "text": "Ontologies have become a relevant representation formalism, and many application domains are considering adopting them. This attention calls for methods for reusing domain knowledge resources in the development of domain ontologies. Accordingly, in this paper we discuss a general methodology for creating a domain ontology that covers more than one object-oriented programming (OOP) language, such as Java, PHP and C++. Many software development methods, especially for Web applications, focus on the structure and security of distributed systems connected through networks and the Internet, where increasingly valuable business and critical assets are stored, searched and manipulated on the World Wide Web.
This study aims to build a domain ontology of OOP language classes for different OOP languages, or for different versions of the same language, giving researchers an opportunity to access the information they require despite the constant increase in the volume of information disseminated on the Internet. By creating a domain ontology for OOP, we can improve methods of viewing and organizing information, improve processing, and enrich the vocabulary, its relationship to terminology, and the rules that connect natural language with OOP languages. Clear identification of the properties and relations of terms is the starting point for building the domain ontology. The value of a domain ontology spanning object-oriented programming languages is that, through the synthesis of these relationships, the ontology can be used over the Web even by junior programmers."} {"_id": "e7ff2e6be6d379b77111cdaf65da1b597f7fce75", "title": "Cu pillar bumps as a lead-free drop-in replacement for solder-bumped, flip-chip interconnects", "text": "We evaluated Cu pillar bumps with a lead-free SnAg solder cap as 1st-level soldered interconnects in a standard organic BGA (ball grid array) flip chip package. Various pillar heights and solder volumes were investigated at a minimum pad pitch of 150 \u00b5m. Flip chip assembly followed the exact process as in the case of collapsible SnAg solder bumps, using an identical material set and identical equipment. Thanks to the properties of Cu pillar bumps, substrate design tolerances could be relaxed compared to solder bump interconnects of the same pitch. This, together with the fact that no solder-on-pad (SOP) is required for Cu pillar bumps, allows for lower substrate cost, which is a major factor in flip chip packaging cost. Cu pillar bumps also offer significantly higher current carrying capability and better thermal performance compared to solder bumps. We found that flip chip assembly with Cu pillar bumps is a robust process with regard to variations in assembly parameters, such as solder cap volume, flux depth, chip placement accuracy and substrate pad size. It is possible to attach the Cu pillar bumps to the metal lines of the non-SOP substrates with line spacing down to 40 \u00b5m \u00b1 15 \u00b5m without any solder bridging. It is shown that flip chip packages with suitable 150 \u00b5m-pitch Cu pillar bumps provide high mechanical and electrical reliability. No corrosion on the Cu pillars was found after humidity tests (MSL2 level and unbiased HAST for 192 h). Thermal cycling passed 2000 cycles, even at JEDEC JESD22-A104-B condition C (-65 \u00b0C to 150 \u00b0C). High Temperature Storage passed 2000 h at 150 \u00b0C. Cross sections reveal that after 1000 h at 150 \u00b0C all Sn in the solder is transformed into Cu-Sn intermetallic compounds (IMCs). Preliminary electromigration test results at highly accelerated conditions (Tbump \u2248 177 \u00b0C, I = 0.8 A) show almost 2 orders of magnitude longer lifetime compared to SnAg solder bumps at 200 \u00b5m pitch.
Cu pillar lifetimes at high current and temperature are expected to greatly exceed those of solder bumps, because current crowding in the solder is avoided and the solder is transformed into much more stable intermetallics."} {"_id": "0c2764756299a82659605b132aef9159f61a4171", "title": "Sarcasm Detection on Czech and English Twitter", "text": "This paper presents a machine learning approach to sarcasm detection on Twitter in two languages \u2013 English and Czech. Although there has been some research in sarcasm detection in languages other than English (e.g., Dutch, Italian, and Brazilian Portuguese), our work is the first attempt at sarcasm detection in the Czech language. We created a large Czech Twitter corpus consisting of 7,000 manually-labeled tweets and provide it to the community. We evaluate two classifiers with various combinations of features on both the Czech and English datasets. Furthermore, we tackle the issues of rich Czech morphology by examining different preprocessing techniques. Experiments show that our language-independent approach significantly outperforms adapted state-of-the-art methods in English (F-measure 0.947) and also represents a strong baseline for further research in Czech (F-measure 0.582)."} {"_id": "74d8eb801c838d1dce814a1e9ce1074bd2c47721", "title": "MIMIC-CXR: A large publicly available database of labeled chest radiographs", "text": "Chest radiography is an extremely powerful imaging modality, allowing for a detailed inspection of a patient\u2019s thorax, but requiring specialized training for proper interpretation. With the advent of high performance general purpose computer vision algorithms, the accurate automated analysis of chest radiographs is becoming increasingly of interest to researchers. However, a key challenge in the development of these techniques is the lack of sufficient data. Here we describe MIMIC-CXR, a large dataset of 371,920 chest x-rays associated with 227,943 imaging studies sourced from the Beth Israel Deaconess Medical Center between 2011 and 2016. Each imaging study can pertain to one or more images, but is most often associated with two images: a frontal view and a lateral view. Images are provided with 14 labels derived from a natural language processing tool applied to the corresponding free-text radiology reports. All images have been de-identified to protect patient privacy. The dataset is made freely available to facilitate and encourage a wide range of research in medical computer vision."} {"_id": "b6b3bdfd3fc4036e68ecae7c9700a659255e724a", "title": "Privacy and Security of Big Data: Current Challenges and Future Research Perspectives", "text": "Privacy and security of Big Data is gaining momentum in the research community, also due to emerging technologies like Cloud Computing, analytics engines and social networks. In response to this novel research challenge, several models, techniques and algorithms for the privacy and security of big data have been proposed recently, mostly adhering to algorithmic paradigms or model-oriented paradigms.
Following this major trend, in this paper we provide an overview of state-of-the-art research issues and achievements in the field of privacy and security of big data, by highlighting open problems and current research trends, and drawing novel research directions in this field."} {"_id": "95dcdc9f2de1f30a4f0c79c378f98f11aa618f40", "title": "A comparative study on content-based music genre classification", "text": "Content-based music genre classification is a fundamental component of music information retrieval systems and has been gaining importance and enjoying a growing amount of attention with the emergence of digital music on the Internet. Currently little work has been done on automatic music genre classification, and in addition, the reported classification accuracies are relatively low. This paper proposes a new feature extraction method for music genre classification, DWCHs. DWCHs stands for Daubechies Wavelet Coefficient Histograms. DWCHs capture the local and global information of music signals simultaneously by computing histograms on their Daubechies wavelet coefficients. Effectiveness of this new feature and of previously studied features is compared using various machine learning classification algorithms, including Support Vector Machines and Linear Discriminant Analysis. It is demonstrated that the use of DWCHs significantly improves the accuracy of music genre classification."} {"_id": "23d0ba7c85940553f994e8ed992585da5e2bffdb", "title": "Attention-Fused Deep Matching Network for Natural Language Inference", "text": "Natural language inference aims to predict whether a premise sentence can infer another hypothesis sentence. Recent progress on this task only relies on a shallow interaction between sentence pairs, which is insufficient for modeling complex relations. In this paper, we present an attention-fused deep matching network (AF-DMN) for natural language inference. Unlike existing models, AF-DMN takes two sentences as input and iteratively learns the attention-aware representations for each side by multi-level interactions. Moreover, we add a self-attention mechanism to fully exploit local context information within each sentence. Experiment results show that AF-DMN achieves state-of-the-art performance and outperforms strong baselines on Stanford natural language inference (SNLI), multigenre natural language inference (MultiNLI), and Quora duplicate questions datasets."} {"_id": "0d98aca44d4e4efc7e0458e6405b9c326137a631", "title": "Turing Computability with Neural Nets", "text": "This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of fewer than 10^5 synchronously evolving processors, interconnected linearly. High-order connections are not required."} {"_id": "c6f142f5011ddb90921f1185b14a147b807119f9", "title": "\u201cExascale Computing and Big Data: The Next Frontier,\u201d", "text": "For scientific and engineering computing, exascale (10^18 operations per second) is the next proxy in the long trajectory of exponential performance increases that has continued for over half a century. Similarly, large-scale data preservation and sustainability within and across disciplines, metadata creation and multidisciplinary fusion, and digital privacy and security define the frontiers of big data.
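Returning to the DWCH features described in the music-genre abstract above: a rough sketch of histogramming Daubechies wavelet coefficients per subband, assuming the PyWavelets package is available; the bin count, normalization, and the extra per-subband statistic are illustrative choices, since the abstract does not specify them.

```python
import numpy as np
import pywt  # PyWavelets

def dwch_features(signal, wavelet="db8", level=4, bins=8):
    """Histogram the wavelet coefficients of each subband and concatenate.
    All parameter values here are illustrative, not the paper's."""
    feats = []
    for coeffs in pywt.wavedec(signal, wavelet, level=level):
        hist, _ = np.histogram(coeffs, bins=bins, density=True)
        feats.extend(hist)
        feats.append(coeffs.std())   # a coarse global statistic per subband
    return np.array(feats)

rng = np.random.default_rng(0)
print(dwch_features(rng.standard_normal(2048)).shape)
```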
Solving the myriad technical, political and economic challenges of exascale computing will require coordinated planning across government, industry and academia, commitment to data sharing and sustainability, collaborative research and development, and recognition that both international competition and collaboration will be necessary."} {"_id": "461d5338d09c6b0c17c9e157a4725225e0bd6972", "title": "An event-based conceptual model for context-aware movement analysis", "text": "Current tracking technologies enable collection of data describing movements of various kinds of objects, including people, animals, icebergs, vehicles, containers with goods, etc. Analysis of movement data is now a hot research topic. However, most of the suggested analysis methods deal with movement data alone. Little has been done to support the analysis of movement in its spatio-temporal context, which includes various spatial and temporal objects as well as diverse properties associated with spatial locations and time moments. Comprehensive analysis of movement requires detection and analysis of relations that occur between moving objects and elements of the context in the process of the movement. We suggest a conceptual model in which movement is considered as a combination of spatial events of diverse types and extents in space and time. Spatial and temporal relations occur between movement events and elements of the spatial and temporal contexts. The model gives a ground to a generic approach based on extraction of interesting events from trajectories and treating the events as independent objects. By means of a prototype implementation, we tested the approach on complex real data about movement of wild animals. The testing showed the validity of the approach."} {"_id": "34a4b24c054f7e3fbe836a13165bf52838723dfb", "title": "Efficient Discovery of Ontology Functional Dependencies", "text": "Functional Dependencies (FDs) define attribute relationships based on syntactic equality, and, when used in data cleaning, they erroneously label syntactically different but semantically equivalent values as errors. We enhance dependency-based data cleaning with Ontology Functional Dependencies (OFDs), which express semantic attribute relationships such as synonyms and is-a hierarchies defined by an ontology. Our technical contributions are twofold: 1) theoretical foundations for OFDs, including a set of sound and complete axioms and a linear-time inference procedure, and 2) an algorithm for discovering OFDs (exact ones and ones that hold with some exceptions) from data that uses the axioms to prune the exponential search space in the number of attributes. We demonstrate the efficiency of our techniques on real datasets, and we show that OFDs can significantly reduce the number of false positive errors in data cleaning techniques that rely on traditional FDs."} {"_id": "7679ee792b4bc837d77677a44bf418bb2c73766c", "title": "Attention estimation by simultaneous analysis of viewer and view", "text": "This paper introduces a system for estimating the attention of a driver wearing a first person view camera using salient objects to improve gaze estimation. A challenging data set of pedestrians crossing intersections has been captured using Google Glass worn by a driver. A challenge unique to first person view from cars is that the interior of the car can take up a large part of the image. The proposed system automatically filters out the dashboard of the car, along with other parts of the instrumentation. 
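Returning to the ontology functional dependencies abstract above: a minimal sketch of checking a dependency X -> Y where Y-values are compared up to a synonym relation. The canonicalization map stands in for a real ontology lookup and is purely illustrative.

```python
def holds_ofd(rows, lhs, rhs, synonyms):
    """Check X -> Y where Y-values are compared up to a synonym relation.
    `synonyms` maps a value to its canonical form (e.g., from an ontology)."""
    canon = lambda v: synonyms.get(v, v)
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = canon(row[rhs])
        if seen.setdefault(key, val) != val:
            return False   # two semantically different Y-values for the same X
    return True

rows = [{"country": "US", "capital": "Washington"},
        {"country": "US", "capital": "Washington, D.C."}]
syn = {"Washington, D.C.": "Washington"}
print(holds_ofd(rows, ["country"], "capital", syn))  # True under synonyms
```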
The remaining area is used as a region of interest for a pedestrian detector. Two cameras looking at the driver are used to determine the direction of the driver's gaze, by examining the eye corners and the center of the iris. This coarse gaze estimation is then linked to the detected pedestrians to determine which pedestrian the driver is focused on at any given time."} {"_id": "f4e41dd5c158b8b8cef3d6bdd5592e9b7ae910f0", "title": "Distributed Neural Networks for Internet of Things: The Big-Little Approach", "text": "Nowadays deep neural networks are widely used to accurately classify input data. An interesting application area is the Internet of Things (IoT), where a massive amount of sensor data has to be classified. The processing power of the cloud is attractive; however, the variable latency imposes a major drawback for neural networks. In order to exploit the trade-off between utilizing the limited but stable-latency embedded computing power of IoT devices and the seemingly unlimited computing power of cloud computing, which comes at the cost of higher and variable latency, we propose a Big-Little architecture for deep neural networks. A small neural network trained on a subset of prioritized output classes can be used to calculate an output on the embedded devices, while a more specific classification can be calculated in the Cloud only when required. We show the applicability of this concept in the IoT domain by evaluating our approach for state of the art neural network classification problems on popular embedded devices such as the Raspberry Pi and Intel Edison."} {"_id": "4140e7481c2599604b14fcd04625274022583631", "title": "Availability: A Heuristic for Judging Frequency and Probability", "text": "This paper explores a judgmental heuristic in which a person evaluates the frequency of classes or the probability of events by availability, i.e., by the ease with which relevant instances come to mind. In general, availability is correlated with ecological frequency, but it is also affected by other factors. Consequently, the reliance on the availability heuristic leads to systematic biases. Such biases are demonstrated in the judged frequency of classes of words, of combinatorial outcomes, and of repeated events. The phenomenon of illusory correlation is explained as an availability bias. The effects of the availability of incidents and scenarios on subjective probability are discussed."} {"_id": "833a89df16e4b7bea5a2f9dc89333fb6a4dbd739", "title": "Towards value-based pricing \u2014 An integrative framework for decision making", "text": "Despite a recent surge of interest, the subject of pricing in general and value-based pricing in particular has received little academic investigation. Yet, pricing has a huge impact on financial results, both in absolute terms and relative to other instruments of the marketing mix. The objective of this paper is to present a comprehensive framework for pricing decisions which considers all relevant dimensions and elements for profitable and sustainable pricing decisions. The theoretical framework is useful for guiding new product pricing decisions as well as for implementing price-repositioning strategies for existing products. The practical application of this framework is illustrated by a case study involving the pricing decision for a major product launch at a global chemical company.
"} {"_id": "8dfa972ab1135505fa2d0e00f4b17df8e49f7557", "title": "Software Engineering Economics", "text": "This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation."} {"_id": "35911388f9542490b4e72f4656ab826fff1d99bb", "title": "Subjective Probability : A Judgment of Representativeness", "text": "This paper explores a heuristic, representativeness, according to which the subjective probability of an event, or a sample, is determined by the degree to which it: (i) is similar in essential characteristics to its parent population; and (ii) reflects the salient features of the process by which it is generated. This heuristic is explicated in a series of empirical examples demonstrating predictable and systematic errors in the evaluation of uncertain events. In particular, since sample size does not represent any property of the population, it is expected to have little or no effect on judgment of likelihood. This prediction is confirmed in studies showing that subjective sampling distributions and posterior probability judgments are determined by the most salient characteristic of the sample (e.g., proportion, mean) without regard to the size of the sample. The present heuristic approach is contrasted with the normative (Bayesian) approach to the analysis of the judgment of uncertainty."} {"_id": "4ee237d215c41374cb94227a44d1a51af7d3873f", "title": "RECOGNITION AND RETRIEVAL PROCESSES IN FREE RECALL", "text": "A model of free recall is described which identifies two processes in free recall: a retrieval process by which the subject accesses the words, and a recognition process by which the subject decides whether an implicitly retrieved word is a to-be-recalled word. Submodels for the recognition process and the retrieval process are described. The recognition model assumes that during the study phase, the subject associates \"list markers\" to the to-be-recalled words. The establishment of such associates is postulated to be an all-or-none stochastic process. In the test phase, the subject recognizes to-be-recalled words by deciding which words have relevant list markers as associates. A signal detectability model is developed for this decision process. The retrieval model is introduced as a computer program that tags associative paths between list words. In several experiments, subjects studied and were tested on a sequence of overlapping sublists sampled from a master set of common nouns. The two-process model predicts that the subject's ability to retrieve the words should increase as more overlapping sublists are studied, but his ability to differentiate the words on the most recent list should deteriorate. Experiments confirmed this predicted dissociation of recognition and retrieval. Further predictions derived from the free recall model were also supported."} {"_id": "3dae758d2bf62652a93edf9968cc085b822f13c4", "title": "X band septum polarizer as feed for parabolic antenna", "text": "RHCP and LHCP waves occur in several applications of microwave communication and measurement systems. From this point of view the septum polarizer can be useful.
The septum polarizer is a four-port waveguide device. The square waveguide at one end constitutes two ports because it can support two orthogonal modes. A sloping (or stepped) septum divides the square waveguide into two standard rectangular waveguides sharing a common broad wall. The dimensions of the septum, as well as two versions of the waveguide excitation, were analyzed and are described in this paper."} {"_id": "34f2f927a33ce01b3e5284428a483137cff4feba", "title": "Expanding cellular coverage via cell-edge deployment in heterogeneous networks: spectral efficiency and backhaul power consumption perspectives", "text": "Heterogeneous small-cell networks (HetNets) are considered to be a standard part of future mobile networks where operator/consumer deployed small-cells, such as femto-cells, relays, and distributed antennas (DAs), complement the existing macrocell infrastructure. This article proposes the need-oriented deployment of small-cells and device-to-device (D2D) communication around the edge of the macrocell such that the small-cell base stations (SBSs) and D2D communication serve the cell-edge mobile users, thereby expanding the network coverage and capacity. In this context, we present competitive network configurations, namely, femto-on-edge, DA-on-edge, relay-on-edge, and D2D-communication-on-edge, where femto base stations, DA elements, relay base stations, and D2D communication, respectively, are deployed around the edge of the macrocell. The proposed deployments ensure performance gains in the network in terms of spectral efficiency and power consumption by facilitating the cell-edge mobile users with small-cells and D2D communication. In order to calibrate the impact of power consumption on system performance and network topology, this article discusses the detailed breakdown of the end-to-end power consumption, which includes backhaul, access, and aggregation network power consumption. Several comparative simulation results quantify the improvements in spectral efficiency and power consumption of the D2D-communication-on-edge configuration to establish a greener network over the other competitive configurations."} {"_id": "dba9539b8d1025406767aa926f956592c416db53", "title": "Chapter 2 Challenges to the Adoption of E-commerce Technology for Supply Chain Management in a Developing Economy: A Focus on Nigerian SMEs", "text": "The evolution of Information Technology has enhanced consumers\u2019 and organisations\u2019 ability to gather information along with purchasing goods and services. Information Technology brings increased competition, lower prices of goods and services, the ability to compare products from different vendors, and easy access to various vendors anywhere, anytime. Therefore, Information Technology facilitates the global nature of commerce. In developing countries, e-commerce dominates economic activity. E-commerce is one of the leading non-oil sectors in Nigeria, accounting for 18.9% of GDP. In contrast with developed nations, e-commerce has not been as successfully adopted in developing countries. This chapter addresses the challenges and benefits of e-commerce technology in relation to SMEs in Nigeria. It presents quantitative evidence of SMEs\u2019 perceptions of e-commerce technology, benefits, and barriers. A number of hypotheses are presented and assessed.
Recommendations to mitigate barriers are suggested."} {"_id": "8453b9ffa450890653289d3c93f3bc2b2e1c2c63", "title": "The Rise of Bots: A Survey of Conversational Interfaces, Patterns, and Paradigms", "text": "This work documents the recent rise in popularity of messaging bots: chatterbot-like agents with simple, textual interfaces that allow users to access information, make use of services, or provide entertainment through online messaging platforms. Conversational interfaces have often been studied in their many facets, including natural language processing, artificial intelligence, human-computer interaction, and usability. In this work we analyze the recent trends in chatterbots and provide a survey of major messaging platforms, reviewing their support for bots and their distinguishing features. We then argue for what we call \"Botplication\", a bot interface paradigm that makes use of context, history, and structured conversation elements for input and output in order to provide a conversational user experience while overcoming the limitations of text-only interfaces."} {"_id": "d8ee9ea53d03d3c921b6aaab743c59fd52d2f5e4", "title": "Outpatient approach to palpitations.", "text": "Palpitations are a common problem seen in family medicine; most are of cardiac origin, although an underlying psychiatric disorder, such as anxiety, is also common. Even if a psychiatric comorbidity does exist, it should not be assumed that palpitations are of a noncardiac etiology. Discerning cardiac from noncardiac causes is important given the potential risk of sudden death in those with an underlying cardiac etiology. History and physical examination followed by targeted diagnostic testing are necessary to distinguish a cardiac cause from other causes of palpitations. Standard 12-lead electrocardiography is an essential initial diagnostic test. Cardiac imaging is recommended if history, physical examination, or electrocardiography suggests structural heart disease. An intermittent event (loop) monitor is preferred for documenting cardiac arrhythmias, particularly when they occur infrequently. Ventricular and atrial premature contractions are common cardiac causes of palpitations; prognostic significance is dictated by the extent of underlying structural heart disease. Atrial fibrillation is the most common arrhythmia resulting in hospitalization; such patients are at increased risk of stroke. Patients with supraventricular tachycardia, long QT syndrome, ventricular tachycardia, or palpitations associated with syncope should be referred to a cardiologist."} {"_id": "c14049c47b99e7e54d1f1c23fb7a5c9a82142748", "title": "Design and implementation of NN5 for Hong Kong stock price forecasting", "text": "A number of published techniques have emerged in the trading community for stock prediction tasks. Among them is the neural network (NN). In this paper, the theoretical background of NNs and the backpropagation algorithm is reviewed. Subsequently, an attempt to build a stock buying/selling alert system using a backpropagation NN, NN5, is presented. The system is tested with data from one Hong Kong stock, The Hong Kong and Shanghai Banking Corporation (HSBC) Holdings. The system is shown to achieve an overall hit rate of over 70%. A number of trading strategies are discussed. A best strategy for trading a non-volatile stock like HSBC is recommended. \u00a9 2006 Elsevier Ltd. 
All rights reserved."} {"_id": "5b4796fb14e0d7f65337585c8f2a789cd2d3c4c3", "title": "Rare case of isolated true complete diphallus \u2013 Case report and review of literature", "text": "Penile duplication is a very rare anomaly. True complete diphallia is mostly associated with severe anomalies. Isolated complete diphallia is extremely rare. This case is presented as true penile diphallia without associated anomalies. We discuss the diagnosis and management of such rare cases."} {"_id": "581cacfa3133513523b7ba8ee4e47177182f1bab", "title": "An ultra-low power capacitor-less LDO for always-on domain in NB-IoT applications", "text": "This paper presents an ultra-low power 55 nm CMOS capacitor-less Low-Dropout (LDO) voltage regulator for power management of an Always-On domain module for implementation of NB-IoT SoC applications. Compared with traditional IoT SoCs, the power consumption specification of NB-IoT SoCs is lower. One effective way to achieve low power operation is to reduce the power of the Always-On domain module in the SoC, where the LDO plays a critical role. Measurements of the taped-out chip validate a 0.89V output voltage from a 1.8V to 3.3V supply voltage, delivering a load current of 100uA. The peak-to-peak variation of the output voltage is less than 100mV from 1.8V to 3.3V supply voltage. The quiescent current is lower than 61.86nA, including a 2-transistor voltage reference, making the design suitable for NB-IoT SoC applications, with yield as high as 87.88%."} {"_id": "1ad45747a121055b6fd3be6ff8f3f933f88ab659", "title": "Microscopy cell segmentation via adversarial neural networks", "text": "We present a novel method for cell segmentation in microscopy images which is inspired by the Generative Adversarial Neural Network (GAN) approach. Our framework is built on a pair of competing artificial neural networks, with a unique architecture, termed Rib Cage, which are trained simultaneously and together define a min-max game resulting in an accurate segmentation of a given image. Our approach has two main strengths: similar to the GAN, the method does not require a formulation of a loss function for the optimization process, and it allows training on a limited amount of annotated data in a weakly supervised manner. Promising segmentation results on real fluorescent microscopy data are presented. The code is freely available at: https://github.com/arbellea/DeepCellSeg.git"} {"_id": "815aa52cfc02961d82415f080384594639a21984", "title": "Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification", "text": "Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic than that in 2D static image classification. Three main challenges exist, including spatial (image) feature representation, temporal information representation, and model/computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model/computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfitting. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. 
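The replacement of expensive 3D convolutions with cheaper factorized spatial/temporal convolutions described above can be made concrete in a few lines of PyTorch; this is a sketch of the general idea only, and the channel counts and kernel sizes are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

c_in, c_out = 64, 64

# Full 3D convolution over (time, height, width).
conv3d = nn.Conv3d(c_in, c_out, kernel_size=(3, 3, 3), padding=1)

# Separable spatial/temporal factorization: a 1x3x3 spatial convolution
# followed by a 3x1x1 temporal convolution (a "(2+1)D"-style block).
separable = nn.Sequential(
    nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
    nn.Conv3d(c_out, c_out, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
)

x = torch.randn(2, c_in, 8, 56, 56)  # (batch, channels, T, H, W)
assert conv3d(x).shape == separable(x).shape  # same output shape

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(conv3d), count(separable))  # the factorized block is much cheaper
```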
Rather surprisingly, the best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level \u201csemantic\u201d features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial/temporal convolution and feature gating, our design yields an effective video classification system that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24)."} {"_id": "526f97ff1a102b18bc331fa701a623980a05a509", "title": "Towards Portable Learning Analytics Dashboards", "text": "This paper proposes a novel approach to build and deploy learning analytics dashboards in multiple learning environments. Existing learning dashboards are barely portable: once deployed on a learning platform, it requires considerable effort to deploy the dashboard elsewhere. We suggest constructing dashboards from lightweight web applications, namely widgets. Our approach allows porting dashboards at no additional cost between learning environments that implement open specifications (Open Social and Activity Streams) for data access and use widget APIs. We propose to facilitate reuse by sharing the dashboards and widgets via a centralized analytics repository."} {"_id": "298687d98e16ba9c98739fe8cbe125ffbaa20b55", "title": "A comparative study of elasticsearch and CouchDB document oriented databases", "text": "With the advent of large complex datasets, NoSQL databases have gained immense popularity for their efficiency in handling such datasets in comparison to relational databases. There are a number of NoSQL data stores, e.g., MongoDB, Apache CouchDB, etc. Operations in these data stores are executed quickly. In this paper we aim to get familiar with two of the most popular NoSQL databases: Elasticsearch and Apache CouchDB. This paper also aims to analyze the performance of Elasticsearch and CouchDB on image data sets. This analysis is based on results from insert (instantiate), read, update, and delete operations on both document-oriented stores, showing that CouchDB is more efficient than Elasticsearch for insertion, update, and deletion operations, whereas Elasticsearch performs much better for selection operations. The implementation has been done on the LINUX platform."} {"_id": "52fbfa181770cbc8291d7ba0c040a55a81d10a7b", "title": "\"Is there anything else I can help you with?\": Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent", "text": "Intelligent conversational assistants, such as Apple\u2019s Siri, Microsoft\u2019s Cortana, and Amazon\u2019s Echo, have quickly become a part of our digital life. However, these assistants have major limitations, which prevent users from conversing with them as they would with human dialog partners. This limits our ability to observe how users really want to interact with the underlying system. To address this problem, we developed a crowd-powered conversational assistant, Chorus, and deployed it to see how users and workers would interact together when mediated by the system. Chorus sophisticatedly converses with end users over time by recruiting workers on demand, who in turn decide what might be the best response for each user sentence. 
During the first month of our deployment, 59 users held conversations with Chorus across 320 conversational sessions. In this paper, we present an account of Chorus\u2019 deployment, with a focus on four challenges: (i) identifying when conversations are over, (ii) malicious users and workers, (iii) on-demand recruiting, and (iv) settings in which consensus is not enough. Our observations could assist the deployment of crowd-powered conversation systems and crowd-powered systems in general."} {"_id": "3d53972fa96c73b6f7c6ca3430b0fc6eef649506", "title": "Kinematic-free position control of a 2-DOF planar robot arm", "text": "This paper challenges the well-established assumption in robotics that in order to control a robot it is necessary to know its kinematic information, that is, the arrangement of links and joints, the link dimensions and the joint positions. We propose a kinematic-free robot control concept that does not require any prior kinematic knowledge. The concept is based on our hypothesis that it is possible to control a robot without explicitly measuring its joint angles, by measuring instead the effects of the actuation on its end-effector. We implement a proof-of-concept encoderless robot controller and apply it for the position control of a physical 2-DOF planar robot arm. The prototype controller is able to successfully control the robot to reach a reference position, as well as to track a continuous reference trajectory. Notably, we demonstrate how this novel controller can cope with something that traditional control approaches fail to do: adapt to drastic kinematic changes such as 100% elongation of a link, 35-degree angular offset of a joint, and even a complete overhaul of the kinematics involving the addition of new joints and links."} {"_id": "f5ff1d285fb0779cc8541126a023fe35fe2fec35", "title": "Demand Queries with Preprocessing", "text": "Given a set of items and a submodular set-function f that determines the value of every subset of items, a demand query assigns prices to the items, and the desired answer is a set S of items that maximizes the profit, namely, the value of S minus its price. The use of demand queries is well motivated in the context of combinatorial auctions. However, answering a demand query (even approximately) is NP-hard. We consider the question of whether exponential time preprocessing of f prior to receiving the demand query can help in later answering demand queries in polynomial time. We design a preprocessing algorithm that leads to approximation ratios that are NP-hard to achieve without preprocessing. We also prove that there are limitations to the approximation ratios achievable after preprocessing, unless NP \u2282 P/poly."} {"_id": "ad106136bf0201105f197d501bd8625d7e9c2562", "title": "BotCensor: Detecting DGA-Based Botnet Using Two-Stage Anomaly Detection", "text": "Nowadays, most botnets utilize domain generation algorithms (DGAs) to build resilient and agile command and control (C&C) channels. Specifically, botmasters employ DGAs to dynamically produce a large number of random domains and only register a small subset for their actual C&C servers, with the purpose of defending them from takeovers and blacklisting attempts. While many approaches and models have been developed to detect DGA-based botnets, they suffer from several limitations, such as difficulties of DNS traffic collection, low feasibility and scalability, and so forth. 
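For illustration, a demand query as defined above can be answered by brute force when the item set is tiny; the following sketch (with a toy coverage-style valuation that is not from the paper) makes the profit objective explicit and shows why the exponential-time preprocessing question is interesting:

```python
from itertools import chain, combinations

def best_demand(items, value, prices):
    """Return the set S maximizing value(S) minus the total price of S."""
    best, best_profit = frozenset(), value(frozenset())
    subsets = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    for s in subsets:  # exponentially many subsets: NP-hard in general
        s = frozenset(s)
        profit = value(s) - sum(prices[i] for i in s)
        if profit > best_profit:
            best, best_profit = s, profit
    return best, best_profit

# Toy submodular (coverage) valuation over three items.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
value = lambda s: len(set().union(*(cover[i] for i in s))) if s else 0
print(best_demand(cover.keys(), value, {"a": 1.5, "b": 0.5, "c": 2.0}))
# (frozenset({'b'}), 1.5) for this toy instance
```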
In this paper, we present BotCensor, a new system that can determine if a host is infected with certain DGA malware with two-stage anomaly detection. In the first stage, we preliminarily attempt to identify malicious domains using a Markov model, and in the second stage, we re-examine the hosts that requested the aforementioned malicious domains using novelty detection algorithms. Our experimental results show that our approach performs very well on identifying previously unknown DGA-generated domains and detects DGA bots with high efficiency and efficacy. Our approach not only can serve as a security forensics tool, but also can be used to prevent malware infection and spread."} {"_id": "2301f3bebd0cebbf161a017bbb70faffbb2c2506", "title": "Deep Learning in Radiology: Does One Size Fit All?", "text": "Deep learning (DL) is a popular method that is used to perform many important tasks in radiology and medical imaging. Some forms of DL are able to accurately segment organs (essentially, trace the boundaries, enabling volume measurements or calculation of other properties). Other DL networks are able to predict important properties from regions of an image\u2014for instance, whether something is malignant, molecular markers for tissue in a region, even prognostic markers. DL is easier to train than traditional machine learning methods, but requires more data and much more care in analyzing results. It will automatically find the features of importance, but understanding what those features are can be a challenge. This article describes the basic concepts of DL systems and some of the traps that exist in building DL systems and how to identify those traps."} {"_id": "35a198cc4d38bd2db60cda96ea4cb7b12369fd3c", "title": "Knowledge transfer in learning to recognize visual objects classes", "text": "Learning to recognize object classes is one of the most important functionalities of vision. It is estimated that humans are able to learn tens of thousands of visual categories in their life. Given the photometric and geometric variabilities displayed by objects as well as the high degree of intra-class variability, we hypothesize that humans achieve such a feat by using knowledge and information accumulated throughout the learning process. In recent years, a handful of pioneering papers have applied various forms of knowledge transfer algorithms to the problem of learning object classes. We first review some of these papers by loosely grouping them into three categories: transfer through prior parameters, transfer through shared features or parts, and transfer through contextual information. In the second half of the paper, we detail a recent algorithm proposed by the author. This incremental learning scheme uses information from object classes previously learned in the form of prior models to train a new object class model. Training images can be presented in an incremental way. We present experimental results tested with this model on a large number of object categories."} {"_id": "b3957777c94d1133386a1de1bc2892dbea1e01cb", "title": "Designing an annotation scheme for summarizing Japanese judgment documents", "text": "We propose an annotation scheme for the summarization of Japanese judgment documents. This paper reports the details of the development of our annotation scheme for this task. We also conduct a human study where we compare the annotation of independent annotators. The end goal of our work is summarization, and our categories and link system are a consequence of this. 
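A minimal sketch of a first-stage domain scorer in the spirit of BotCensor's Markov model (the training list, alphabet, smoothing, and thresholding here are illustrative assumptions, not the system's actual parameters):

```python
import math
from collections import defaultdict

def train_bigram_model(domains):
    """Character-bigram transition log-probabilities with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for d in domains:
        for a, b in zip(d, d[1:]):
            counts[a][b] += 1
    alphabet = set("abcdefghijklmnopqrstuvwxyz0123456789-.")
    model = {}
    for a in alphabet:
        total = sum(counts[a].values()) + len(alphabet)
        model[a] = {b: math.log((counts[a][b] + 1) / total) for b in alphabet}
    return model

def score(model, domain):
    """Mean transition log-likelihood; very low scores look DGA-generated."""
    pairs = list(zip(domain, domain[1:]))
    return sum(model[a].get(b, -20.0) for a, b in pairs) / max(len(pairs), 1)

benign = ["google.com", "facebook.com", "wikipedia.org", "youtube.com"]
m = train_bigram_model(benign)
for d in ["google.com", "xj2kq9vz1t.com"]:
    print(d, round(score(m, d), 2))  # the random-looking domain scores lower
```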
We propose three types of generic summaries which are focused on specific legal issues relevant to a given legal case."} {"_id": "49acf191aa9f0c3b7ce996e8cfd1a405e21a11a7", "title": "Interpretability and Informativeness of Clustering Methods for Exploratory Analysis of Clinical Data", "text": "Clustering methods are among the most commonly used tools for exploratory data analysis. However, using clustering to perform data analysis can be challenging for modern datasets that contain a large number of dimensions, are complex in nature, and lack a ground-truth labeling. Traditional tools, like summarization and plotting of clusters, are of limited benefit in a high-dimensional setting. On the other hand, while many clustering methods have been studied theoretically, such analysis often has limited instructive value for practical purposes due to unverifiable assumptions, oracle-dependent tuning parameters, or unquantified finite-sample effects."} {"_id": "7412f2f83beebb352b59ed6fe50e79997e0ac808", "title": "Switched Reluctance Motor Modeling, Design, Simulation, and Analysis: A Comprehensive Review", "text": "Switched reluctance machines have emerged as an important technology in industrial automation; they represent a real alternative to conventional variable speed drives in many applications. This paper reviews the technology status and trends in switched reluctance machines. It covers the various aspects of modeling, design, simulation, analysis, and control. Finally, it discusses the impact of switched reluctance machine technology on intelligent motion control."} {"_id": "370ded29d3a213d9963c12a48141b2ed6c512bad", "title": "Mining term association patterns from search logs for effective query reformulation", "text": "Search engine logs are an emerging new type of data that offers interesting opportunities for data mining. Existing work on mining such data has mostly attempted to discover knowledge at the level of queries (e.g., query clusters). In this paper, we propose to mine search engine logs for patterns at the level of terms through analyzing the relations of terms inside a query. We define two novel term association patterns (i.e., context-sensitive term substitutions and term additions) and propose new methods for mining such patterns from search engine logs. These two patterns can be used to address the mis-specification and under-specification problems of ineffective queries. Experimental results on real search engine logs show that the mined context-sensitive term substitutions can be used to effectively reword queries and improve their accuracy, while the mined context-sensitive term addition patterns can be used to support query refinement in a more effective way."} {"_id": "66198fbee049a7cd1b462fa4912aa14cd9227c3e", "title": "Child-based personas: need, ability and experience", "text": "Interactive technologies are becoming ubiquitous in many children\u2019s lives. From school to home, technologies are changing the way children live. However, current methods of designing these technologies do not adequately consider children\u2019s needs and developmental abilities. This paper describes and illustrates a new approach for creating user abstractions of children called the child-persona technique. Child-personas integrate theoretical concepts, empirically generated data and experiential goals. 
An analysis of the utility of this technique provides insights into how it can benefit designers by generating realistic child-user abstractions through a process that supports child-centric design."} {"_id": "20739c96ed44ccfdc5352ea38e1a2a15137363f4", "title": "A global geometric framework for nonlinear dimensionality reduction.", "text": "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs\u201430,000 auditory nerve fibers or 10(6) optic nerve fibers\u2014a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure."} {"_id": "2315fc6c2c0c4abd2443e26a26e7bb86df8e24cc", "title": "ImageNet Classification with Deep Convolutional Neural Networks", "text": "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry."} {"_id": "44d26f78bcdee04a14896c8c6f3681313402d854", "title": "Advances in Spectral-Spatial Classification of Hyperspectral Images", "text": "Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. 
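The morphological profile mentioned above can be sketched compactly with scikit-image; note that this toy version uses plain openings and closings on a single band, whereas such profiles are often built with operators by reconstruction, and the radii chosen here are arbitrary:

```python
import numpy as np
from skimage.morphology import opening, closing, disk

def morphological_profile(band, radii=(1, 2, 4)):
    """Stack openings and closings of increasing size as extra features."""
    features = [band]
    for r in radii:
        se = disk(r)  # disk-shaped structuring element of radius r
        features.append(opening(band, se))
        features.append(closing(band, se))
    return np.stack(features, axis=-1)  # shape (H, W, 1 + 2*len(radii))

band = np.random.rand(64, 64)  # e.g., the first principal component of a cube
profile = morphological_profile(band)
print(profile.shape)  # (64, 64, 7): per-pixel spatial feature vector
```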
Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods."} {"_id": "ef8ae1effca9cd45677086034d8c7b06a69c03e5", "title": "Deep Learning-Based Classification of Hyperspectral Data", "text": "Classification is one of the most popular topics in hyperspectral remote sensing. In the last two decades, a huge number of methods have been proposed to deal with the hyperspectral data classification problem. However, most of them do not hierarchically extract deep features. In this paper, the concept of deep learning is introduced into hyperspectral data classification for the first time. First, we verify the eligibility of stacked autoencoders by following classical spectral information-based classification. Second, a new way of classifying with spatial-dominated information is proposed. We then propose a novel deep learning framework to merge the two features, from which we can get the highest classification accuracy. The framework is a hybrid of principal component analysis (PCA), deep learning architecture, and logistic regression. Specifically, as a deep learning architecture, stacked autoencoders are used to extract useful high-level features. Experimental results with widely-used hyperspectral data indicate that classifiers built in this deep learning-based framework provide competitive performance. In addition, the proposed joint spectral-spatial deep neural network opens a new window for future research, showcasing the deep learning-based methods' huge potential for accurate hyperspectral data classification."} {"_id": "7513f29210569444dd464c33147d9f824e6fbf08", "title": "How to Work a Crowd: Developing Crowd Capital Through Crowdsourcing", "text": "Traditionally, the term \u2018crowd\u2019 was used almost exclusively in the context of people who self-organized around a common purpose, emotion, or experience. Today, however, firms often refer to crowds in discussions of how collections of individuals can be engaged for organizational purposes. Crowdsourcing\u2014defined here as the use of information technologies to outsource business responsibilities to crowds\u2014can now significantly influence a firm\u2019s ability to leverage previously unattainable resources to build competitive advantage. Nonetheless, many managers are hesitant to consider crowdsourcing because they do not understand how its various types can add value to the firm. In response, we explain what crowdsourcing is, the advantages it offers, and how firms can pursue crowdsourcing. 
We begin by formulating a crowdsourcing typology and show how its four categories\u2014crowd voting, micro-task, idea, and solution crowdsourcing\u2014can help firms develop \u2018crowd capital,\u2019 an organizational-level resource harnessed from the crowd. We then present a three-step process model for generating crowd capital. Step one includes important considerations that shape how a crowd is to be constructed. Step two outlines the capabilities firms need to develop to acquire and assimilate resources (e.g., knowledge, labor, funds) from the crowd. Step three outlines key decision areas that executives need to address to effectively engage crowds. 1. Crowds and Crowdsourcing Not too long ago, the term \u2018crowd\u2019 was used almost exclusively in the context of people who self-organized around a common purpose, emotion, or experience. Crowds were sometimes seen as a positive occurrence\u2014for instance, when they formed for political rallies or to support sports teams\u2014but were more often associated negatively with riots, a mob mentality, or looting. Under today\u2019s lens, they are viewed more positively (Wexler, 2011). Crowds have become useful! It all started in 2006, when crowdsourcing was introduced as \u201ctaking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an \u2018open call\u2019\u201d (Howe, 2006, p. 1). The underlying concept of crowdsourcing, a combination of crowd and outsourcing, is that many hands make light work and that wisdom can be gleaned from crowds (Surowiecki, 2005) to overcome groupthink, leading to superior results (Majchrzak & Malhotra, 2013). Of course, such ambitions are not new, and organizations have long desired to make the most of dispersed knowledge whereby each individual has certain knowledge advantages over every other (Hayek, 1945). Though examples of using crowds to harness what is desired are abundant (for an interesting application, see Table 1), until recently, accessing and harnessing such resources at scale has been nearly impossible for organizations. Due in large part to the proliferation of the Internet, mobile technologies, and the recent explosion of social media (Kietzmann, Hermkens, McCarthy, & Silvestre, 2011), organizations today are in a much better position to engage distributed crowds (Lakhani & Panetta, 2007) of individuals for their innovation and problem-solving needs (Afuah & Tucci, 2012; Boudreau & Lakhani, 2013). As a result, more and more executives\u2014from small startups to Fortune 500 companies alike\u2014are trying to figure out what crowdsourcing really is, the benefits it can offer, and the processes they should follow to engage a crowd. In this formative stage of crowdsourcing, multiple streams of academic and practitioner-based literature\u2014each using its own language\u2014are developing independently of one another, without a unifying framework to understand the burgeoning phenomenon of crowd engagement. For executives who would like to explore crowd-based opportunities, this presents a multitude of options and possibilities, but also difficulties. One problem entails the lack of a clear understanding of crowds, the various forms they can take, and the value they can offer. Another problem entails the absence of a well-defined process to engage crowds. 
As a result, many executives are unable to develop strategies or are hesitant to allocate resources to crowdsourcing, leading to missed opportunities for new competitive advantages from engaging crowds. To help provide clarity, we submit an overview of the different types of crowdsourcing. Then we introduce the crowd capital framework, supplying a systematic template for executives to recognize the value of information from crowds, therein mapping the steps to acquire and assimilate resources from crowds. Finally, we discuss the unique benefits that can be gained from crowds before concluding with some advice on how to best \u2018work a crowd.\u2019 2. Types of crowdsourcing Crowdsourcing as an online, distributed problem-solving model (Brabham, 2008) suggests that approaching crowds and asking for contributions can help organizations develop solutions to a variety of business challenges. In this context, the crowd is often treated as a single construct: a general collection of people that can be targeted by firms. However, just as organizations and their problems vary, so do the types of crowds and the different kinds of contributions they can offer the firm. The following typology of crowdsourcing suggests that managers can begin by identifying a business problem and then working outward from there, considering (1) what type of contributions are required from members of the crowd and (2) how these contributions will collectively help find a solution to their business problem. First, the types of contributions required from the crowd could either call for specific objective contributions or for subjective content. Specific objective contributions help to achieve an impartial and unbiased result; here, bare facts matter and crowds can help find or create them. Subjective content contributions revolve around the judgments, opinions, perceptions, and beliefs of individuals in a crowd that are sought to collectively help solve a problem that calls for a subjective result. Second, contributions need to be processed collectively to add value. Depending on the problem to be solved, the contributions must either be aggregated or filtered. Under aggregation, contributions collectively yield value when they are simply combined at face value to inform a decision, without requiring any prior validation. For instance, political elections call for people to express their choices via electoral ballots, which are then tallied to calculate the sums and averages of their collective preferences; the reasons for their choices are not important at this stage. Other problems, however, are more complex and call for crowd contributions to be qualitatively evaluated and filtered before being considered on their relative merits (e.g., when politicians invite constituents\u2019 opinions before campaigning). Together, these two dimensions help executives distinguish among and understand the variety of crowdsourcing alternatives that exist today (see Figure 1). Two forms of crowdsourcing rely on aggregation as the primary process: crowd voting and micro-task crowdsourcing. In crowd voting, organizations pose an issue to a crowd and aggregate the subjective responses derived from crowd participants to make a decision. Consider the popular television show American Idol, which allows viewers to support their preferred contestants by submitting votes online or via telephone or text. These votes are tallied at the end of the show and contestants with the fewest votes are eliminated from the competition. 
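The aggregation step that distinguishes crowd voting and prediction markets from the filtered forms of crowdsourcing amounts to tallying or averaging contributions at face value; a toy sketch with hypothetical data:

```python
from collections import Counter
from statistics import mean

# Crowd voting: contributions are simply tallied at face value.
votes = ["contestant_a", "contestant_b", "contestant_a", "contestant_c", "contestant_a"]
tally = Counter(votes)
print(tally.most_common())  # leader first; fewest votes eliminated

# Prediction-market style aggregation: average independent estimates.
estimates = [0.62, 0.55, 0.71, 0.58]  # e.g., probability a campaign succeeds
print(mean(estimates))
```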
Similarly, so-called prediction markets (Arrow et al., 2008) activate the wisdom of the crowd through crowd voting. But rather than simply adding up votes, these markets arrive at specific predictions that can exceed the accuracy of experts by averaging the independent responses of crowd participants. For instance, Starwood Hotels and Resorts utilized an internal prediction market by asking a crowd of its own employees to select the best choice among a variety of potential marketing campaigns (Barlow, 2008). In micro-task crowdsourcing, organizations engage a crowd to undertake work that is often unachievable through standard procedures due to its sheer size or complexity. An organization may need to assemble a large data set, have numerous photos labeled and tagged, translate documents, or transcribe audio transcripts. Breaking such work into micro-tasks (Gino & Staats, 2012) allows daunting undertakings to be completed more quickly, cheaply, and efficiently. Consider how Google uses reCAPTCHA (von Ahn, Maurer, McMillen, Abraham, & Blum, 2008) and the little\u2014and admittedly annoying\u2014dialogue boxes that ask users to enter the text snippets they see of distorted images onscreen. It is commonly believed that this web utility is only for authenticating human users, thus keeping websites safe from spambots. However, every time the task of entering characters is completed, individuals are actually digitizing what optical character recognition (OCR) software has been unable to read. In this way, micro-task crowdsourcing is helping to digitize the archives of The New York Times and moving old manuscripts into Google Books. Similarly, crowdfunding (Stemler, 2013) endeavors are a form of micro-task crowdsourcing whereby an overall highly ambitious financial goal is broken into smaller funding tasks and contributions consist of objective resources (herein \u2018funds\u2019) that are simply aggregated for each venture. [Figure 1: Crowdsourcing alternatives.] Whether objective or subjective, crowdsourced contributions must be processed to be valuable. In idea crowdsourcing, organizations seek creativity from a crowd, hoping to leverage its diversity to generate unique solutions to problems/issues. An organization may receive many ideas from a crowd, which it will need to filter before one or more can be implemented. For instance, online artist community and e-comm"} {"_id": "5bf3bf0e7725133ccb3ee3bba6a2df52ec8d4abf", "title": "Comparative study of a new dermal filler Uma Jeunesse\u00ae and Juv\u00e9derm\u00ae.", "text": "INTRODUCTION\nInnovation in technology has resulted in the emergence of better, longer-lasting hyaluronic acid implants with fewer side effects. The new dermal implant Uma Jeunesse\u00ae was compared to Juv\u00e9derm\u00ae in this split-face study.\n\n\nMETHODS\nUma Jeunesse\u00ae is crosslinked with butanediol diglycidyl ether (BDDE) using a new crosslinking technology. Uma Jeunesse\u00ae and Juv\u00e9derm\u00ae Ultra 3 were injected in a split-face study on 17 healthy volunteers, whose ages ranged from 33 to 58 years. There were 14 women and three men with medium to deep nasolabial folds. All subjects randomly received either Uma Jeunesse\u00ae or Juv\u00e9derm\u00ae Ultra 3 on one half of their face. Patients were followed up for 9 months.\n\n\nRESULTS\nJuv\u00e9derm\u00ae was easier to inject, with less injection pain because of lidocaine, but late postinjection pain was much less with Uma Jeunesse\u00ae as compared to Juv\u00e9derm\u00ae. 
The overall rate of early and late complications as well as adverse events was lower with Uma Jeunesse\u00ae than with Juv\u00e9derm\u00ae. After 9 months of follow-up, Uma Jeunesse\u00ae lasted longer in the tissues than Juv\u00e9derm\u00ae, even in patients injected for the first time (P<0.0001). The patient acceptability rate of Uma Jeunesse\u00ae was also much higher. Perception of pain during injection was lower with Juv\u00e9derm\u00ae, probably because of the presence of lidocaine.\n\n\nCONCLUSION\nThe new dermal implant Uma Jeunesse\u00ae is a safe and patient-friendly product which resides in the tissues for longer with maintenance of aesthetic effect over and beyond 6 months, reaching 9 months in over 80% of patients, and Juv\u00e9derm\u00ae injection is less painful."} {"_id": "c621d3ddc939d8ac88347643fcaf93d0f7f49cca", "title": "Direct path and multipath cancellation with discrete distribution structure in passive radar", "text": "Direct-path and multipath interference (DPI and MPI) cancellation is a crucial issue in passive radar. In this paper, two improvements are made to the NLMS algorithm. One is to estimate the values and distribution of DPI and MPI roughly, and then remove them from the echo signal before filtering. The other one is to vary the step size of NLMS according to the cross correlation function of the reference signal and the output signal of the adaptive filter. The performance of the improved algorithm in passive radar DPI and MPI cancellation is reported. The computer simulation results show that the algorithm has a faster convergence rate and a smaller steady-state error."} {"_id": "2bcb9d3e155f03425087e5797a30bb0ef224a5cc", "title": "Interactive bookshelf surface for in situ book searching and storing support", "text": "We propose an interactive bookshelf surface to augment the human ability for in situ book searching and storing. In book searching support, when a user touches the edge of the bookshelf, the cover image of a stored book located above the touched position is projected directly onto the book spine. As a result, the user can search for a desired book by sliding his (or her) finger across the shelf edge. In book storing support, when a user brings a book close to the bookshelf, the place where the book should be stored is visually highlighted by a projection light. This paper also presents sensing technologies to achieve the above-mentioned interactive techniques. In addition, by considering the properties of the human visual system, we propose a simple visual effect to reduce the legibility degradation of the projected image contents by the complex textures and geometric irregularities of the spines. We confirmed the feasibility of the system and the effectiveness of the proposed interaction techniques through user studies."} {"_id": "e4a9c62427182eab2d4622c3dbb38f8ba247b481", "title": "Experiments with crowdsourced re-annotation of a POS tagging data set", "text": "Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have largely assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. 
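A rough sketch of the variable-step NLMS idea described in the passive radar abstract above (the correlation smoothing, step-size floor, and signal model here are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def variable_step_nlms(ref, echo, order=16, mu_max=1.0, lam=0.99, eps=1e-8):
    """Cancel direct-path/multipath copies of `ref` present in `echo`."""
    w = np.zeros(order)
    out = np.zeros_like(echo)
    r_xy, r_yy = 0.0, eps  # smoothed cross-correlation and output power
    for n in range(order, len(echo)):
        x = ref[n - order:n][::-1]         # reference tap vector
        y = w @ x                          # estimated interference
        e = echo[n] - y                    # residual after cancellation
        out[n] = e
        r_xy = lam * r_xy + (1 - lam) * y * echo[n]
        r_yy = lam * r_yy + (1 - lam) * y * y
        # Correlation-driven step size, floored so adaptation can start.
        mu = mu_max * max(min(abs(r_xy) / (r_yy + eps), 1.0), 0.05)
        w += mu * e * x / (x @ x + eps)    # NLMS weight update
    return out

rng = np.random.default_rng(1)
ref = rng.standard_normal(4000)
echo = 0.9 * ref + 0.4 * np.roll(ref, 5)   # direct path plus one multipath copy
print(round(np.var(echo), 3), round(np.var(variable_step_nlms(ref, echo)[500:]), 3))
# Residual variance after cancellation should be far below the echo variance.
```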
Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks."} {"_id": "1089eb54895338621217ed29e69b5104d56cad67", "title": "Sequence-discriminative training of deep neural networks", "text": "Sequence-discriminative training of deep neural networks (DNNs) is investigated on a standard 300-hour American English conversational telephone speech task. Different sequence-discriminative criteria \u2014 maximum mutual information (MMI), minimum phone error (MPE), state-level minimum Bayes risk (sMBR), and boosted MMI \u2014 are compared. Two different heuristics are investigated to improve the performance of the DNNs trained using sequence-based criteria \u2014 lattices are regenerated after the first iteration of training; and, for MMI and BMMI, the frames where the numerator and denominator hypotheses are disjoint are removed from the gradient computation. Starting from a competitive DNN baseline trained using cross-entropy, different sequence-discriminative criteria are shown to lower word error rates by 7-9% relative, on average. Little difference is noticed between the different sequence-based criteria that are investigated. The experiments are done using the open-source Kaldi toolkit, which makes it possible for the wider community to reproduce these results."} {"_id": "4691d99c6f08e719de161463c2b9210c6d8beecb", "title": "Diabetes Data Analysis and Prediction Model Discovery Using RapidMiner", "text": "Data mining techniques have been extensively applied in bioinformatics to analyze biomedical data. In this paper, we choose Rapid-I\u2019s RapidMiner as our tool to analyze a Pima Indians Diabetes Data Set, which collects information on patients who did and did not develop diabetes. The discussion follows the data mining process. The focus will be on the data preprocessing, including attribute identification and selection, outlier removal, data normalization and numerical discretization, visual data analysis, hidden relationship discovery, and diabetes prediction model construction."} {"_id": "5550bd0554b93cbea47d629a94810c4b531930fa", "title": "Production and structural elucidation of exopolysaccharide from endophytic Pestalotiopsis sp. BC55.", "text": "There is little information on exopolysaccharide production by endophytic fungi. In this investigation endophytic Pestalotiopsis sp. BC55 was used for optimization of exopolysaccharide production. The one-variable-at-a-time method and response surface methodology were adopted to find the best culture conditions and medium compositions for maximum exopolysaccharide production. The organism produced maximum exopolysaccharide (4.320 \u00b1 0.022 g/l EPS) in a 250 ml Erlenmeyer flask containing 75 ml potato dextrose broth supplemented with (g%/l) glucose, 7.66; urea, 0.29; CaCl2, 0.05 with medium pH 6.93, after 3.76 days of incubation at 24\u00b0C. The exopolysaccharide [EPS (EP-I)] produced by this organism has Mw \u223c2\u00d710(5) Da with a melting point range of 122-124\u00b0C. Structural elucidation of the EPS (PS-I) was carried out after a series of experiments. Results indicated the presence of only the (1\u21923)-linked \u03b2-d-glucopyranosyl moiety. 
The structure of the repeating unit was established as \u21923)-\u03b2-d-Glcp-(1\u2192."} {"_id": "a43d082f83d92f6affc8e21585a3eb194904c201", "title": "Nanopore sequencing technology and tools for genome assembly: computational analysis of the current state, bottlenecks and future directions.", "text": "Nanopore sequencing technology has the potential to render other sequencing technologies obsolete with its ability to generate long reads and provide portability. However, high error rates of the technology pose a challenge while generating accurate genome assemblies. The tools used for nanopore sequence analysis are of critical importance, as they should overcome the high error rates of the technology. Our goal in this work is to comprehensively analyze current publicly available tools for nanopore sequence analysis to understand their advantages, disadvantages and performance bottlenecks. It is important to understand where the current tools do not perform well to develop better tools. To this end, we (1) analyze the multiple steps and the associated tools in the genome assembly pipeline using nanopore sequence data, and (2) provide guidelines for determining the appropriate tools for each step. Based on our analyses, we make four key observations: (1) the choice of the tool for basecalling plays a critical role in overcoming the high error rates of nanopore sequencing technology. (2) Read-to-read overlap finding tools, GraphMap and Minimap, perform similarly in terms of accuracy. However, Minimap has a lower memory usage, and it is faster than GraphMap. (3) There is a trade-off between accuracy and performance when deciding on the appropriate tool for the assembly step. The fast but less accurate assembler Miniasm can be used for quick initial assembly, and further polishing can be applied on top of it to increase the accuracy, which leads to faster overall assembly. (4) The state-of-the-art polishing tool, Racon, generates high-quality consensus sequences while providing a significant speedup over another polishing tool, Nanopolish. We analyze various combinations of different tools and expose the trade-offs between accuracy, performance, memory usage and scalability. We conclude that our observations can guide researchers and practitioners in making conscious and effective choices for each step of the genome assembly pipeline using nanopore sequence data. Also, with the help of the bottlenecks we have found, developers can improve the current tools or build new ones that are both accurate and fast, to overcome the high error rates of the nanopore sequencing technology."} {"_id": "338651c90dcc60a191d23f499b02365baf9b16d3", "title": "Forensic analysis of video file formats", "text": "Video file format standards define only a limited number of mandatory features and leave room for interpretation. Design decisions of device manufacturers and software vendors are thus a fruitful resource for forensic video authentication. This paper explores AVI and MP4-like video streams of mobile phones and digital cameras in detail. We use customized parsers to extract all file format structures of videos from a total of 19 digital camera models, 14 mobile phone models, and 6 video editing toolboxes. We report considerable differences in the choice of container formats, audio and video compression algorithms, acquisition parameters, and internal file structure. 
In combination, such characteristics can help to authenticate digital video files in forensic settings by distinguishing between original and post-processed videos, verifying the purported source of a file, or identifying the true acquisition device model or the processing software used for video processing. \u00a9 2014 The Authors. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/)."} {"_id": "b23d79cdc235c88e9d96c9a239c81f58a08a1374", "title": "A Security approach for Data Migration in Cloud Computing", "text": "Cloud computing is a new paradigm that combines several computing concepts and technologies of the Internet, creating a platform for more agile and cost-effective business applications and IT infrastructure. The adoption of Cloud computing has been increasing for some time and the maturity of the market is steadily growing. Security is the question most consistently raised as consumers look to move their data and applications to the cloud. I justify the importance and motivation of security in the migration of legacy systems, and I present an approach to security in migration processes to the cloud, with the aim of identifying the needs, concerns, requirements, aspects, opportunities and benefits of security in the migration process of legacy systems."} {"_id": "1bc49abe5145055f1fa259bd4e700b1eb6b7f08d", "title": "SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents", "text": "We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels."} {"_id": "5d017ad8e7ec3dc67aca93970004782e6b99eb2c", "title": "Clustering Based URL Normalization Technique for Web Mining", "text": "URL (Uniform Resource Locator) normalization is an important activity in web mining. Web data can be retrieved more smoothly using an effective URL normalization technique. URL normalization also reduces a lot of computation in web mining activities. A web mining technique for URL normalization is proposed in this paper. The proposed technique is based on content, structure and semantic similarity and web page redirection and forwarding similarity of the given set of URLs. Web page redirection and forward graphs can be used to measure the similarities between the URLs and can also be used to form URL clusters. The URL clusters can be used for URL normalization. A data structure is also suggested to store the forward and redirect URL information."} {"_id": "23de293f939580eda037c6527a51ab7389400c4a", "title": "Towards Data-Driven Autonomics in Data Centers", "text": "Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. 
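For context, the kind of syntactic URL normalization that such clustering-based techniques build on can be sketched with the Python standard library; this is generic RFC 3986-style normalization, not the proposed clustering technique itself:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url):
    """Standard syntactic URL normalization: lowercase scheme and host,
    drop default ports, ensure a path, and sort query parameters."""
    s = urlsplit(url)
    scheme = s.scheme.lower()
    host = s.hostname.lower() if s.hostname else ""
    if s.port and not ((scheme == "http" and s.port == 80) or
                       (scheme == "https" and s.port == 443)):
        host = f"{host}:{s.port}"
    path = s.path or "/"
    query = urlencode(sorted(parse_qsl(s.query)))  # stable parameter order
    return urlunsplit((scheme, host, path, query, ""))

print(normalize("HTTP://Example.COM:80/a/b?y=2&x=1"))
# http://example.com/a/b?x=1&y=2
```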
Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict if machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88% with precision varying between 50% and 72%. We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for BigQuery and classification analyses are publicly available from the authors' website."} {"_id": "67fdbe15c597de4817c3e0a94d80fbc8963a7038", "title": "PyElph - a software tool for gel images analysis and phylogenetics", "text": "This paper presents PyElph, a software tool which automatically extracts data from gel images, computes the molecular weights of the analyzed molecules or fragments, compares DNA patterns which result from experiments with molecular genetic markers and, also, generates phylogenetic trees computed by five clustering methods, using the information extracted from the analyzed gel image. The software can be successfully used for population genetics, phylogenetics, taxonomic studies and other applications which require gel image analysis. Researchers and students working in molecular biology and genetics would benefit greatly from the proposed software because it is free, open source, easy to use, has a friendly Graphical User Interface and does not depend on specific image acquisition devices as other commercial programs with similar functionality do. The PyElph software tool is entirely implemented in Python, which is a very popular programming language in the bioinformatics community. It provides a very friendly Graphical User Interface which was designed in six steps that gradually lead to the results. The user is guided through the following steps: image loading and preparation, lane detection, band detection, molecular weights computation based on a molecular weight marker, band matching and finally, the computation and visualization of phylogenetic trees. A strong point of the software is the visualization component for the processed data. The Graphical User Interface provides operations for image manipulation and highlights lanes, bands and band matching in the analyzed gel image. All the data and images generated in each step can be saved. The software has been tested on several DNA patterns obtained from experiments with different genetic markers. 
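The threshold-picking step implied by the 5% false-positive cap described above is standard practice; a sketch with scikit-learn, using synthetic data in place of the Google trace features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-machine features and 24h failure labels.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # failure probability per machine

# Choose the decision threshold that keeps the false positive rate <= 5%.
fpr, tpr, thr = roc_curve(y_te, scores)
i = np.searchsorted(fpr, 0.05, side="right") - 1
print(f"threshold={thr[i]:.3f} fpr={fpr[i]:.3f} tpr={tpr[i]:.3f}")
```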
Examples of genetic markers which can be analyzed using PyElph are RFLP (Restriction Fragment Length Polymorphism), AFLP (Amplified Fragment Length Polymorphism), RAPD (Random Amplification of Polymorphic DNA) and STR (Short Tandem Repeat). The similarity between the DNA sequences is computed and used to generate phylogenetic trees which are very useful for population genetics studies and taxonomic classification. PyElph decreases the effort and time spent processing data from gel images by providing an automatic step-by-step gel image analysis system with a friendly Graphical User Interface. The proposed free software tool is suitable for researchers and students who do not have access to expensive commercial software and image acquisition devices."} {"_id": "79f3559e75ba15371f10b1b8e74ee7f67c97755e", "title": "Explicit Solution for Constrained Stochastic Linear-Quadratic Control with Multiplicative Noise", "text": "We study in this paper a class of constrained linear-quadratic (LQ) optimal control problem formulations for the scalar-state stochastic system with multiplicative noise, which has various applications, especially in financial risk management. The linear constraint on both the control and state variables considered in our model destroys the elegant structure of the conventional LQ formulation and has blocked the derivation of an explicit control policy so far in the literature. We successfully derive in this paper the analytical control policy for such a class of problems by utilizing the state separation property induced from its structure. We reveal that the optimal control policy is a piece-wise affine function of the state and can be computed off-line efficiently by solving two coupled Riccati equations. Under some mild conditions, we also obtain the stationary control policy for the infinite time horizon. We demonstrate the implementation of our method via some illustrative examples and show how to calibrate our model to solve dynamic constrained portfolio optimization problems."} {"_id": "5025427ebf7c57dd8495a2df46af595298a606e8", "title": "Efficient Multipattern Event Processing Over High-Speed Train Data Streams", "text": "Big data is becoming a key basis for productivity growth, innovation, and consumer surplus, but it also brings great challenges in its volume, velocity, variety, value, and veracity. The notion of an event is an important cornerstone for managing big data. High-speed railway is one of the most typical application domains for event-based systems, especially the train onboard system. There are usually numerous complex event patterns subscribed in the system sharing the same prefix, suffix, or subpattern; consequently, multipattern complex event detection often results in plenty of redundant detection operations and computations. In this paper, we propose a multipattern complex event detection model, multipattern event processing (MPEP), constructed from three parts: 1) multipattern state transition; 2) failure transition; and 3) state output. Based on MPEP, an intelligent onboard system for high-speed trains is preliminarily implemented. The system logic is described using our proposed complex event description model and compiled into a multipattern event detection model. Experimental results show that MPEP can effectively optimize the complex event detection process and improve its throughput by eliminating duplicate automata states and redundant computations. 
This intelligent onboard system also provides better detection ability than other models when processing real-time events stored in the high-speed train Juridical Recording Unit (JRU)."} {"_id": "b00c089cc70a7d28cb6e6e907f249eb67e092c39", "title": "Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls", "text": "Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, the motions of a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional types of data. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data, orientations and electromyograms (EMG). The most probable among the estimated motions is treated as the final estimated motion. Thus, recognition accuracy can be improved when compared to the traditional methods that employ only a single type of data. In our experiments, seven subjects perform five predefined motions. When orientation alone is measured by the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the motion estimation error is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by about 12 percentage points."} {"_id": "9afc33e194fb9670b4538b22138a16abd3d2320b", "title": "A Nonlinear Unified State-Space Model for Ship Maneuvering and Control in a Seaway", "text": "This article presents a unified state-space model for ship maneuvering, station-keeping, and control in a seaway. The frequency-dependent potential and viscous damping terms, which in classic theory result in a convolution integral not suited for real-time simulation, are compactly represented by using a state-space formulation. The separation of the vessel model into a low-frequency model (represented by zero-frequency added mass and damping) and a wave-frequency model (represented by motion transfer functions or RAOs), which is commonly used for simulation, is hence made superfluous."} {"_id": "15154da78d8c8a464dbc5f8857aad68ff7db8719", "title": "Tasks and scenario-based evaluation of information visualization techniques", "text": "Usability evaluation of an information visualization technique can only be done by the joint evaluation of both the visual representation and the interaction techniques. This work proposes task models as a key element for carrying out such evaluations in a structured way. We base our work on a taxonomy abstracting from rendering functions supported by information visualization techniques. CTTE is used to model these abstract visual tasks as well as to generate scenarios from this model for evaluation purposes. We conclude that the use of task models allows generating test scenarios which are more effective than informal and unstructured evaluations."} {"_id": "b41ec45c38eb118448b01f457eaad0f71aee02a9", "title": "VMHunt: A Verifiable Approach to Partially-Virtualized Binary Code Simplification", "text": "Code virtualization is a highly sophisticated obfuscation technique adopted by malware authors to stay under the radar.
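The MPEP model above is built from multipattern state transitions, failure transitions, and state outputs, which is the same skeleton as the classic Aho-Corasick automaton. The sketch below shows that textbook construction over event symbols; it illustrates the mechanics only, under that assumed correspondence, and is not the MPEP implementation.

```python
# Minimal Aho-Corasick-style automaton: shared prefixes collapse into one
# transition structure, failure links avoid re-scanning the stream, and state
# outputs report every pattern ending at the current event.
from collections import deque

def build(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                      # multipattern state transitions
        s = 0
        for sym in p:
            if sym not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][sym] = len(goto) - 1
            s = goto[s][sym]
        out[s].add(p)                       # state output
    q = deque(goto[0].values())
    while q:                                # failure transitions, built by BFS
        s = q.popleft()
        for sym, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and sym not in goto[f]:
                f = fail[f]
            nxt = goto[f].get(sym, 0)
            fail[t] = nxt if nxt != t else 0
            out[t] |= out[fail[t]]          # inherit outputs of the fail state
    return goto, fail, out

def detect(stream, goto, fail, out):
    s, hits = 0, []
    for i, sym in enumerate(stream):
        while s and sym not in goto[s]:
            s = fail[s]                     # follow failure links, no rescan
        s = goto[s].get(sym, 0)
        hits += [(i, p) for p in out[s]]
    return hits

goto, fail, out = build(["ab", "abc", "bc"])
# Reports 'ab' at index 2, then 'abc' and 'bc' at index 3 (order may vary).
print(detect("xabcb", goto, fail, out))
```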
However, the increasing complexity of code virtualization also becomes a \"double-edged sword\" for practical application. Due to its performance limitations and compatibility problems, code virtualization is seldom used on an entire program. Rather, it is mainly used only to safeguard the key parts of code such as security checks and encryption keys. Many techniques have been proposed to reverse engineer the virtualized code, but they share some common limitations. They assume the scope of virtualized code is known in advance and mainly focus on the classic structure of the code emulator. Also, little work verifies the correctness of the deobfuscation results. In this paper, with fewer assumptions on the type and scope of code virtualization, we present a verifiable method to address the challenge of partially-virtualized binary code simplification. Our key insight is that code virtualization is a kind of process-level virtual machine (VM), and the context switch patterns when entering and exiting the VM can be used to detect the VM boundaries. Based on the scope of the VM boundary, we simplify the virtualized code. We first ignore all the instructions in a given virtualized snippet that do not affect the final result of that snippet. To better revert the data obfuscation effect that encodes a variable through bitwise operations, we then run a new symbolic execution called multiple granularity symbolic execution to further simplify the trace snippet. The generated concise symbolic formulas facilitate the correctness testing of our simplification results. We have implemented our idea as an open source tool, VMHunt, and evaluated it with real-world applications and malware. The encouraging experimental results demonstrate that VMHunt is a significant improvement over the state of the art."} {"_id": "3e4bd583795875c6550026fc02fb111daee763b4", "title": "Convolutional Sketch Inversion", "text": "In this paper, we use deep neural networks for inverting face sketches to synthesize photorealistic face images. We first construct a semi-simulated dataset containing a very large number of computer-generated face sketches with different styles and corresponding face images by expanding existing unconstrained face data sets. We then train models achieving state-of-the-art results on both computer-generated sketches and hand-drawn sketches by leveraging recent advances in deep learning such as batch normalization, deep residual learning, perceptual losses and stochastic optimization in combination with our new dataset. We finally demonstrate potential applications of our models in fine arts and forensic arts. In contrast to existing patch-based approaches, our deep-neural-network-based approach can be used for synthesizing photorealistic face images by inverting face sketches in the wild."} {"_id": "d5cd72f923c1c6c3ae21eb705b084c35f3508bd0", "title": "A convolutional neural networks denoising approach for salt and pepper noise", "text": "The salt and pepper noise, especially the one with extremely high percentage of impulses, brings a significant challenge to image denoising. In this paper, we propose a non-local switching filter convolutional neural network denoising algorithm, named NLSF-CNN, for salt and pepper noise. As its name suggests, our NLSF-CNN consists of two steps, i.e., an NLSF processing step and a CNN training step. First, we develop an NLSF pre-processing step for noisy images using non-local information.
Then, the pre-processed images are divided into patches and used for CNN training, leading to a CNN denoising model for future noisy images. We conduct a number of experiments to evaluate the effectiveness of NLSF-CNN. Experimental results show that NLSF-CNN outperforms the state-of-the-art denoising algorithms with a few training images."} {"_id": "b0232b47186aa4fc50cc1d4698f9451764daa660", "title": "A Systematic Literature Review: Code Bad Smells in Java Source Code", "text": "A code smell is an indication of a software design problem. The presence of code smells can have a severe impact on software quality. Smells basically refer to structures in the code that violate some of the design principles and so have a negative effect on the quality of the software. The larger the source code, the more prevalent they are. Software needs to be reliable, robust and easily maintainable so that it can minimize the cost of its development as well as maintenance. Smells may increase the chances of failure of the system during maintenance. An SLR has been performed based on a search of digital libraries that covers publications from 1999 to 2016. The 60 most relevant research papers are analyzed in depth. The objective of this paper is to provide an extensive overview of existing research in the field of bad smells, identify the detection techniques and the correlations between them, and to find the code smells that need more attention in detection approaches. This SLR identified that code clone (a code smell) receives the most research attention. Our findings also show that very few papers report on the impact of code bad smells. Most of the papers focused on detection techniques and tools. A significant correlation between detection techniques has been calculated. Four code smells are not yet detected: Primitive Obsession, Inappropriate Intimacy, Incomplete Library Class and Comments."} {"_id": "2c0005665472b687cca2d1b527ad3b0254e905c0", "title": "Predicting defect-prone software modules using support vector machines", "text": "Effective prediction of defect-prone software modules can enable software developers to focus quality assurance activities and allocate effort and resources more efficiently. Support vector machines (SVM) have been successfully applied for solving both classification and regression problems in many applications. This paper evaluates the capability of SVM in predicting defect-prone software modules and compares its prediction performance against eight statistical and machine learning models in the context of four NASA datasets. The results indicate that the prediction performance of SVM is generally better than, or at least competitive with, the compared models."} {"_id": "3759516030dc3b150b6b048fc9d33a044e13d1dc", "title": "Preliminary comparison of techniques for dealing with imbalance in software defect prediction", "text": "Imbalanced data is a common problem in data mining when dealing with classification problems, where samples of one class vastly outnumber those of other classes. In this situation, many data mining algorithms generate poor models as they try to optimize the overall accuracy and perform badly in classes with very few samples.
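The NLSF step above "switches": it replaces only pixels detected as impulses and leaves every other pixel untouched. The sketch below substitutes a plain local median over valid neighbours for the paper's non-local filter, so it only illustrates the switching idea on synthetic noise.

```python
# Sketch of the switching idea behind NLSF pre-processing: only pixels that
# look like salt (255) or pepper (0) impulses are replaced, here with the
# median of their non-impulse neighbours. Local median is a stand-in for the
# paper's non-local filter.
import numpy as np

def switching_median(img, lo=0, hi=255, radius=1):
    out = img.astype(np.float64).copy()
    noisy = (img == lo) | (img == hi)              # impulse detection
    H, W = img.shape
    for y, x in zip(*np.where(noisy)):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        window = img[y0:y1, x0:x1]
        clean = window[(window != lo) & (window != hi)]
        if clean.size:                             # replace only detected impulses
            out[y, x] = np.median(clean)
    return out.astype(img.dtype)

rng = np.random.default_rng(0)
img = np.full((64, 64), 128, dtype=np.uint8)
mask = rng.random(img.shape)
img[mask < 0.25] = 0                               # pepper
img[mask > 0.75] = 255                             # salt
restored = switching_median(img)
print("impulses restored to 128:", int((restored[(img == 0) | (img == 255)] == 128).sum()))
```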
Software engineering data in general, and defect prediction datasets in particular, are no exception. In this paper, we compare different approaches to the problem of defect prediction, namely sampling, cost-sensitive, ensemble and hybrid approaches, across datasets with different preprocessing. We have used the well-known NASA datasets curated by Shepperd et al. There are differences in the results depending on the characteristics of the dataset and the evaluation metrics, especially if duplicates and inconsistencies are removed as a preprocessing step.\n Further results and replication package: http://www.cc.uah.es/drg/ease14"} {"_id": "5fcdd1970bf75e6c00088d0cabd2e19538c97257", "title": "Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings", "text": "Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers."} {"_id": "b0bbea300988928b7c2998e98e52670b2c365327", "title": "Applying machine learning to software fault-proneness prediction", "text": "The importance of software testing to quality assurance cannot be overemphasized. The estimation of a module\u2019s fault-proneness is important for minimizing cost and improving the effectiveness of the software testing process. Unfortunately, no general technique for estimating software fault-proneness is available. The observed correlation between some software metrics and fault-proneness has resulted in a variety of predictive models based on multiple metrics. Much work has concentrated on how to select the software metrics that are most likely to indicate fault-proneness. In this paper, we propose the use of machine learning for this purpose. Specifically, given historical data on software metric values and number of reported errors, an Artificial Neural Network (ANN) is trained. Then, in order to determine the importance of each software metric in predicting fault-proneness, a sensitivity analysis is performed on the trained ANN.
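Two of the treatment families compared above, sampling and cost-sensitive learning, can be contrasted in a few lines. The sketch below uses synthetic data and scikit-learn defaults rather than the NASA datasets, and reports balanced accuracy, since raw accuracy is misleading under class imbalance.

```python
# Illustrative comparison of two imbalance treatments on synthetic data
# (not the NASA/Shepperd datasets): (a) random undersampling of the majority
# class, (b) cost-sensitive class weighting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.9, 0.1], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# (a) sampling: shrink the majority class down to the minority-class size
rng = np.random.default_rng(1)
maj, mino = np.where(y_tr == 0)[0], np.where(y_tr == 1)[0]
keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
under = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])

# (b) cost-sensitive: weight errors inversely to class frequency
costsens = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

for name, clf in [("undersampling", under), ("cost-sensitive", costsens)]:
    print(name, round(balanced_accuracy_score(y_te, clf.predict(X_te)), 3))
```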
The software metrics that are deemed to be the most critical are then used as the basis of an ANN-based predictive model of a continuous measure of fault-proneness. We also view fault-proneness prediction as a binary classification task (i.e., a module can either contain errors or be error-free) and use Support Vector Machines (SVM) as a state-of-the-art classification method. We perform a comparative experimental study of the effectiveness of ANNs and SVMs on a data set obtained from NASA\u2019s Metrics Data Program data repository."} {"_id": "c11d0ba3e581e691f1bee0022dc0807ff4c428f2", "title": "A systematic analysis of performance measures for classification tasks", "text": ""} {"_id": "d76409a011c54a58235af6b409850af7fae8733a", "title": "Using Screen Brightness to Improve Security in Mobile Social Network Access", "text": "In today's mobile communications scenario, smartphones offer new capabilities to develop sophisticated applications that seem to make daily life easier and more convenient for users. Such applications, which may involve mobile ticketing, identification, access control operations, etc., are often accessible through social network aggregators, that assume a fundamental role in the federated identity management space. While this makes modern smartphones very powerful devices, it also makes them very attractive targets for spyware injection. This kind of malware is able to bypass classic authentication measures and steal user credentials even when a secure element is used, and can, therefore, perform unauthorized mobile access to social network services without the user's consent. Such an event allows stealing sensitive information or even full identity theft. In this work, we address this issue by introducing BrightPass, a novel authentication mechanism based on screen brightness. BrightPass allows users to authenticate safely with a PIN-based confirmation in the presence of specific operations on sensitive data. We compare BrightPass with existing schemes, in order to show its usability and security within the social network arena. Furthermore, we empirically assess the security of BrightPass through experimentation. Our tests indicate that BrightPass protects the PIN code against automatic submissions carried out by malware while granting fast authentication phases and reduced error rates."} {"_id": "9d39133f3079ba10f68ca0d9d60dd7bd18bce805", "title": "Using Design Thinking to Differentiate Useful From Misleading Evidence in Observational Research.", "text": "Few issues can be more important to physicians or patients than that treatment decisions are based on reliable information about benefits and harms. While randomized clinical trials (RCTs) are generally regarded as the most valid source of evidence about benefits and some harms, concerns about their generalizability, costs, and heterogeneity of treatment effects have led to the search for other sources of information to augment or possibly replace trials.
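One simple way to realize the sensitivity analysis described above is to perturb each input metric of the trained network and measure how much the prediction moves. The sketch below does this with a small scikit-learn MLP on synthetic data; the metric names are invented for illustration, and the procedure is a generic stand-in for the paper's analysis.

```python
# Perturbation-based sensitivity analysis of a trained neural network:
# shift one (synthetic) software metric at a time by one standard deviation
# and record the mean absolute change in predicted fault-proneness.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
names = ["loc", "cyclomatic", "fan_in", "comment_ratio"]   # hypothetical metrics
X = rng.normal(size=(800, 4))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=800)  # faults driven by first two

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)

base = ann.predict(X)
for j, name in enumerate(names):
    Xp = X.copy()
    Xp[:, j] += X[:, j].std()               # one-sigma perturbation of metric j
    sensitivity = np.mean(np.abs(ann.predict(Xp) - base))
    print(f"{name:14s} {sensitivity:.3f}")  # larger value = more influential metric
```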
This is embodied in the recently passed 21st Century Cures Act, which mandates that the US Food and Drug Administration develop rules for the use of \u201creal world evidence\u201d in drug approval, defined as \u201cdata...derived from sources other than randomized clinical trials.\u201d1 A second push toward the use of nontrial evidence is based on the perception that the torrent of electronic health-related data\u2014medical record, genomic, and lifestyle (ie, \u201cBig Data\u201d)\u2014can be transformed into reliable evidence with the use of powerful modern analytic tools. The use of nontrial evidence puts experts who weigh medical evidence, no less a practicing clinician, in a quandary, as the evaluation of such information requires skills well beyond those of basic critical review. The didactic tutorial by Agoritsas et al2 in this issue of JAMA partly addresses that problem, providing a clear, concise, and accessible introduction to several statistical approaches to adjust analyses of observational studies to minimize or eliminate confounding: matching, standard regression, propensity scores, and instrumental variable analyses. While understanding these technical tools is valuable, valid causal inference from nonrandomized studies about treatment effects depends on many factors other than confounding. These include whether the causal question motivating the study is clearly specified, whether the design matches that question and avoids design biases, whether the analysis matches the design, the appropriateness and quality of the data, the fit of adjustment models, and the potential for model searching to find spurious patterns in vast data streams. Some of these issues are recognizable and remediable, whereas others may defy solely analytic solutions, leading to the rejection of some nonrandomized studies to guide treatment. The hallmark of a well-posed causal question is that one can describe an RCT that would answer it. An RCT requires specifying at minimum an eligible population, an intervention, a comparator, and a start time. As posed by Hern\u00e1n and Taubman,3 an example of a noncausal question is \u201cDoes obesity shorten life?\u201d This is not causal because \u201cobesity\u201d is not a randomizable intervention, nor is weight loss. Randomizable interventions\u2014the goal of which might be weight loss\u2014include diet, exercise, gastric bypass surgery, liposuction, or drugs. Once the causal question is specified, the experimental design must mirror it, creating the foundation for a \u201clike vs like\u201d causal contrast on which a valid analysis is based. When observational data not collected for research are analyzed to estimate treatment effects, sometimes they are fed into statistical software without an explicit design and with a causal question that is either imprecise or not stated. This sets the stage for invalid results, even when the statistical methods appear to be properly implemented.4,5 The example of postmenopausal hormone therapy (HT) illustrates the criticality, when analyzing observational data, of posing a precise causal question, with a design and analysis to match. Observational studies, including one from the Nurses\u2019 Health Study, using some of the adjustment techniques described by Agoritsas et al, showed a strong cardiovascular protective association with HT. This result was famously contravened by a subsequent RCT, the Women\u2019s Health Initiative, which showed cardiovascular harm from HT. 
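Among the adjustment techniques the editorial points to, propensity-score matching is the easiest to sketch end to end. The toy simulation below shows a confounded naive contrast being corrected by nearest-neighbour matching on an estimated propensity score; the confounder, treatment rule, and effect size are all synthetic, and this is an illustration of the mechanics, not a recipe for real observational analyses.

```python
# Toy propensity-score matching: sicker patients are treated more often, which
# biases the naive treated-vs-untreated contrast; matching treated units to
# controls with similar propensity scores recovers the true effect (1.0 here).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
confounder = rng.normal(size=n)                        # e.g., baseline risk
p_treat = 1 / (1 + np.exp(-confounder))                # treatment depends on confounder
treated = rng.random(n) < p_treat
outcome = 1.0 * treated - 2.0 * confounder + rng.normal(size=n)

print("naive difference:", round(outcome[treated].mean() - outcome[~treated].mean(), 2))

# Estimate propensity scores, then match each treated unit to nearest control.
ps = LogisticRegression().fit(confounder.reshape(-1, 1), treated)
ps = ps.predict_proba(confounder.reshape(-1, 1))[:, 1]
ctrl = np.where(~treated)[0]
matches = ctrl[np.abs(ps[ctrl][None, :] - ps[treated][:, None]).argmin(axis=1)]
print("matched estimate:", round((outcome[treated] - outcome[matches]).mean(), 2))
```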
A subsequent reanalysis of the Nurses\u2019 Health Study data conformed with the RCT results.6 The reanalysis was structured to mirror the clinical question asked in the trial; viz, how does initiating HT in a postmenopausal woman affect her cardiovascular risk? The original observational analysis, using standard regression, effectively excluded women with coronary heart disease events that occurred shortly after initiation of HT, a remediable design bias known in pharmacoepidemiology as \u201cdepletion of susceptibles\u201d7 or in other fields as \u201csurvivorship bias.\u201d No amount of adjustment for baseline factors between ongoing HT users and nonusers can overcome that bias, but by refining the question, and then using the appropriately tailored design, an appropriate analysis can be crafted. Following the causal thinking reflected in RCT design, most successful examples of observational research that produce findings similar to those from RCTs have compared new drug users with new users of a comparator agent.8,9 The disproven nonrandomized studies referenced at the end of the article by Agoritsas et al all compared users with nonusers. In an RCT, a nonuser is always someone who was eligible and considered for treatment, but that is not guaranteed in an observational study. If instead 2 active treatments indicated for the same condition are compared, it is likely"} {"_id": "2d1853472ed4953fcca2211344d6a3e2f8e7d0cc", "title": "Classifying Photographic and Photorealistic Computer Graphic Images using Natural Image Statistics", "text": "As computer graphics (CG) is getting more photorealistic, for the purpose of image authentication, it becomes increasingly important to construct a detector for classifying photographic images (PIM) and photorealistic computer graphics (PRCG). To this end, we propose that photographic images contain natural-imaging quality (NIQ) and natural-scene quality (NSQ). NIQ is due to the imaging process, while NSQ is due to the subtle physical light transport in a real-world scene. We explicitly model NSQ of photographic images using natural image statistics (NIS). NIS has been used as an image prior in applications such as image compression and denoising. However, NIS has not been comprehensively and systematically employed for classifying PIM and PRCG. In this work, we study three types of NIS with different statistical order, i.e., NIS derived from the power spectrum, wavelet transform and local patch of images. The experiment shows that the classification is in line with the statistical order of the NIS. The local patch NIS achieves a classification accuracy of 83% which outperforms the features derived from modeling the characteristics of computer graphics."} {"_id": "28d50600bebe281b055fbbb49a6eb9b3acb2b7cf", "title": "Goal-Oriented Requirements Engineering: A Guided Tour", "text": "Goals capture, at different levels of abstraction, the various objectives the system under consideration should achieve. Goal-oriented requirements engineering is concerned with the use of goals for eliciting, elaborating, structuring, specifying, analyzing, negotiating, documenting, and modifying requirements. This area has received increasing attention over the past few years. The paper reviews various research efforts undertaken along this line of research. The arguments in favor of goal orientation are first briefly discussed.
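The lowest-order natural image statistic studied in the PIM/PRCG abstract above is the power spectrum: natural photographs tend toward a roughly 1/f^2 spectral falloff, so the fitted log-log spectral slope is a one-number feature. The sketch below computes that slope for two toy images; it illustrates the feature, not the paper's classifier or data.

```python
# Radially averaged power spectrum and its log-log slope, a simple
# natural-image-statistics feature. Toy inputs: white noise (flat spectrum,
# slope near 0) vs. doubly integrated noise (steep, natural-like falloff).
import numpy as np

def spectral_slope(img):
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2    # 2-D power spectrum
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)        # radial frequency bin
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)                      # skip DC and corners
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs] + 1e-12), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.normal(size=(128, 128))
natural_like = np.cumsum(np.cumsum(rng.normal(size=(128, 128)), axis=0), axis=1)
print("white-noise slope:  ", round(spectral_slope(white), 2))        # ~ 0
print("natural-like slope: ", round(spectral_slope(natural_like), 2))  # strongly negative
```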
The paper then compares the main approaches to goal modeling, goal specification and goal-based reasoning in the many activities of the requirements engineering process. To make the discussion more concrete, a real case study is used to suggest what a goal-oriented requirements engineering method may look like. Experience with such approaches and tool support are briefly discussed as well."} {"_id": "27eba11d57f7630b75fb67644ff48087472414f3", "title": "Do RNNs learn human-like abstract word order preferences?", "text": "RNN language models have achieved state-of-the-art results on various tasks, but what exactly they are representing about syntax is as yet unclear. Here we investigate whether RNN language models learn human-like word order preferences in syntactic alternations. We collect language model surprisal scores for controlled sentence stimuli exhibiting major syntactic alternations in English: heavy NP shift, particle shift, the dative alternation, and the genitive alternation. We show that RNN language models reproduce human preferences in these alternations based on NP length, animacy, and definiteness. We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations. We show that the RNNs\u2019 performance is similar to the human acceptability ratings and is not matched by an n-gram baseline model. Our results show that RNNs learn the abstract features of weight, animacy, and definiteness which underlie soft constraints on syntactic alternations. The best-performing models for many natural language processing tasks in recent years have been recurrent neural networks (RNNs) (Elman, 1990; Sutskever et al., 2014; Goldberg, 2017), but the black-box nature of these models makes it hard to know exactly what generalizations they have learned about their linguistic input: Have they learned generalizations stated over hierarchical structures, or only dependencies among relatively local groups of words (Linzen et al., 2016; Gulordava et al., 2018; Futrell et al., 2018)? Do they represent structures analogous to syntactic dependency trees (Williams et al., 2018), and can they represent complex relationships such as filler\u2013gap dependencies (Chowdhury and Zamparelli, 2018; Wilcox et al., 2018)? In order to make progress with RNNs, it is crucial to determine what RNNs actually learn given currently standard practices; then we can design network architectures, objective functions, and training practices to build on strengths and alleviate weaknesses (Linzen, 2018). In this work, we investigate whether RNNs trained on a language modeling objective learn certain syntactic preferences exhibited by humans, especially those involving word order. We draw on a rich literature from quantitative linguistics that has investigated these preferences in corpora and experiments (e.g., McDonald et al., 1993; Stallings et al., 1998; Bresnan et al., 2007; Rosenbach, 2008). Word order preferences are a key aspect of human linguistic knowledge. In many cases, they can be captured using local co-occurrence statistics: for example, the preference for subject\u2013verb\u2013object word order in English can often be captured directly in short word strings, as in the dramatic preference for I ate apples over I apples ate. However, some word order preferences are more abstract and can only be stated in terms of higher-order linguistic units and abstract features.
For example, humans exhibit a general preference for word orders in which words linked in syntactic dependencies are close to each other: such sentences are produced more frequently and comprehended more easily (Hawkins, 1994; Futrell et al., 2015; Temperley and Gildea, 2018). We are interested in whether RNNs learn abstract word order preferences as a way of probing their syntactic knowledge. If RNNs exhibit these preferences for appropriately controlled stimuli, then on some level they have learned the abstractions required to state them. Knowing whether RNNs show human-like word order preferences also bears on their suitability as language generation systems. White and Rajkumar (2012) have shown that language gener-"} {"_id": "d6391e2f688996a75c62869cd4c50711cdeaed9d", "title": "Sparse concept coding for visual analysis", "text": "We consider the problem of image representation for visual analysis. When representing images as vectors, the feature space is of very high dimensionality, which makes it difficult for applying statistical techniques for visual analysis. To tackle this problem, matrix factorization techniques, such as Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF), received an increasing amount of interest in recent years. Matrix factorization is an unsupervised learning technique, which finds a basis set capturing high-level semantics in the data and learns coordinates in terms of the basis set. However, the representations obtained by them are highly dense and cannot capture the intrinsic geometric structure in the data. In this paper, we propose a novel method, called Sparse Concept Coding (SCC), for image representation and analysis. Inspired from the recent developments on manifold learning and sparse coding, SCC provides a sparse representation which can capture the intrinsic geometric structure of the image space. Extensive experimental results on image clustering have shown that the proposed approach provides a better representation with respect to the semantic structure."} {"_id": "70c1da6627f0d7a93bdc2b88b33bcc22a782cd6c", "title": "Design and Modeling of A Crowdsource-Enabled System for Urban Parcel Relay and Delivery", "text": "This paper proposes a crowdsource-enabled system for urban parcel relay and delivery. We consider cyclists and pedestrians as crowdsources who are close to customers and interested in relaying parcels with a truck carrier and undertaking jobs for the last-leg parcel delivery and the first-leg parcel pickup. The crowdsources express their interests in doing so by submitting bids to the truck carrier. The truck carrier then selects bids and coordinates crowdsources\u2019 last-leg delivery (first-leg pickup) with its truck operations. The truck carrier\u2019s problem is formulated as a mixed integer non-linear program which simultaneously i) selects crowdsources to complete the last-leg delivery (first-leg pickup) between customers and selected points for crowdsource-truck relay; and ii) determines the relay points and truck routes and schedule. To solve the truck carrier problem, we first decompose the problem into a winner determination problem and a simultaneous pickup and delivery problem with soft time windows, and propose a Tabu Search based algorithm to iteratively solve the two subproblems. Numerical results show that this solution approach is able to yield close-to-optimum solutions in much less time than off-the-shelf solvers.
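The word-order study above scores alternation variants by language-model surprisal and includes an n-gram baseline. The sketch below reproduces the idea at baseline scale: an add-one-smoothed bigram model assigns total surprisal to two candidate word orders over a toy corpus; lower total surprisal means the order is preferred. The corpus and sentences are tiny stand-ins, not the paper's stimuli.

```python
# Bigram-surprisal preference scoring (the n-gram-baseline flavour of the
# paper's method, not the RNN): total surprisal in bits for each word order.
import math
from collections import Counter

corpus = ["<s> she gave the book to him </s>",
          "<s> she gave him the book </s>",
          "<s> he gave the book to her </s>"] * 10
tokens = " ".join(corpus).split()
unigrams, bigrams = Counter(tokens), Counter(zip(tokens, tokens[1:]))
V = len(unigrams)

def surprisal(sentence):
    words = sentence.split()
    total = 0.0
    for prev, word in zip(words, words[1:]):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)  # add-one smoothing
        total += -math.log2(p)
    return total

for s in ["<s> she gave him the book </s>",   # attested order, low surprisal
          "<s> she gave the book him </s>"]:  # dispreferred order, high surprisal
    print(f"{surprisal(s):6.2f} bits  {s}")
```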
By adopting this new system, truck vehicle miles traveled (VMT) and total cost can be reduced compared to pure-truck delivery. The advantage of the system over pure-truck delivery is sensitive to factors such as penalty for servicing outside customers\u2019 desired time windows, truck unit operating cost, time value of crowdsources, and the crowdsource mode."} {"_id": "49522df4fab1ebbeb831fc265196c2c129bf6087", "title": "Survey on data science with population-based algorithms", "text": "This paper discusses the relationship between data science and population-based algorithms, which include swarm intelligence and evolutionary algorithms. We reviewed two categories of literature: population-based algorithms solving data analysis problems, and data analysis methods utilized in population-based algorithms. With the exponential growth of data, data science, or more specifically big data analytics, has gained increasing attention among researchers. New and more efficient algorithms should be designed to handle this massive data problem. By combining population-based algorithms and data mining techniques, we can better understand the insights of data analytics and design more efficient algorithms to solve real-world big data analytics problems. Also, the weaknesses and strengths of population-based algorithms can be analyzed via data analytics along the optimization process, a crucial component of population-based algorithms."} {"_id": "d1a0b53f50d0e016372313aabfad9b76839df437", "title": "Discovering space - Grounding spatial topology and metric regularity in a naive agent's sensorimotor experience", "text": "In line with the sensorimotor contingency theory, we investigate the problem of the perception of space from a fundamental sensorimotor perspective. Despite its pervasive nature in our perception of the world, the origin of the concept of space remains largely mysterious. For example in the context of artificial perception, this issue is usually circumvented by having engineers pre-define the spatial structure of the problem the agent has to face. We here show that the structure of space can be autonomously discovered by a naive agent in the form of sensorimotor regularities, that correspond to so-called compensable sensory experiences: these are experiences that can be generated either by the agent or its environment. By detecting such compensable experiences the agent can infer the topological and metric structure of the external space in which its body is moving. We propose a theoretical description of the nature of these regularities and illustrate the approach on a simulated robotic arm equipped with an eye-like sensor, and which interacts with an object. Finally we show how these regularities can be used to build an internal representation of the sensor's external spatial configuration."} {"_id": "7fafb1fab4c164ffbbd7e99415cdd1754fe3aded", "title": "Sensory gain control (amplification) as a mechanism of selective attention: electrophysiological and neuroimaging evidence.", "text": "Both physiological and behavioral studies have suggested that stimulus-driven neural activity in the sensory pathways can be modulated in amplitude during selective attention. Recordings of event-related brain potentials indicate that such sensory gain control or amplification processes play an important role in visual-spatial attention.
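The parcel-relay solution approach above leans on Tabu Search for its routing subproblem. The skeleton below shows only the tabu-list mechanics on a toy single-route problem with swap moves; the paper's neighbourhood, penalties, and time windows are richer, so treat this as a generic sketch rather than the authors' algorithm.

```python
# Bare-bones Tabu Search: take the best non-tabu neighbour each iteration and
# forbid undoing recent moves for a fixed tenure, so the search can escape
# local optima. Toy objective: length of a path through random points.
import itertools
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(12)]

def cost(route):
    return sum(math.dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

def tabu_search(iters=200, tenure=15):
    route = list(range(len(cities)))
    best, best_cost = route[:], cost(route)
    tabu = {}                                    # move -> iteration it stays tabu until
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(len(route)), 2):
            if tabu.get((i, j), -1) >= it:       # skip recently used moves
                continue
            cand = route[:]
            cand[i], cand[j] = cand[j], cand[i]  # swap move
            candidates.append((cost(cand), (i, j), cand))
        c, move, cand = min(candidates)          # best admissible neighbour
        route = cand
        tabu[move] = it + tenure                 # forbid reversing this move for a while
        if c < best_cost:
            best, best_cost = cand[:], c
    return best, best_cost

print("best route cost:", round(tabu_search()[1], 3))
```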
Combined event-related brain potential and neuroimaging experiments provide strong evidence that attentional gain control operates at an early stage of visual processing in extrastriate cortical areas. These data support early selection theories of attention and provide a basis for distinguishing between separate mechanisms of attentional suppression (of unattended inputs) and attentional facilitation (of attended inputs)."} {"_id": "c1c74829d6430d468a1fe1f75eae217325253baf", "title": "Advanced Data Mining Techniques", "text": "The intent of this book is to describe some recent data mining tools that have proven effective in dealing with data sets which often involve uncertain description or other complexities that cause difficulty for the conventional approaches of logistic regression, neural network models, and decision trees. Among these traditional algorithms, neural network models often have a relative advantage when data is complex. We will discuss methods with simple examples, review applications, and evaluate relative advantages of several contemporary methods. Our intent is to cover the fundamental concepts of data mining, to demonstrate the potential of gathering large sets of data, and analyzing these data sets to gain useful business understanding. We have organized the material into three parts. Part I introduces concepts. Part II contains chapters on a number of different techniques often used in data mining. Part III focuses on business applications of data mining. Not all of these chapters need to be covered, and their sequence could be varied at the instructor's discretion. The book will include short vignettes of how specific concepts have been applied in real practice. A series of representative data sets will be generated to demonstrate specific methods and concepts. References to data mining software and sites such as www.kdnuggets.com will be provided. Chapter 1 gives an overview of data mining, and provides a description of the data mining process. An overview of useful business applications is provided. Chapter 2 presents the data mining process in more detail. It demonstrates this process with a typical set of data. Visualization of data through data mining software is addressed. Chapter 3 presents memory-based reasoning methods of data mining. Major real applications are described. Algorithms are demonstrated with prototypical data based on real applications. Chapter 4 discusses association rule methods. Application in the form of market basket analysis is discussed. A real data set is described, and a simplified version used to demonstrate association rule methods. Chapter 5 presents fuzzy data mining approaches. Fuzzy decision tree approaches are described, as well as fuzzy association rule applications. Real data mining applications are described and demonstrated. Chapter 6 presents Rough \u2026"} {"_id": "1b29144f0e63f8d9404a44b15623d8150ea76447", "title": "A cost analysis of RDL-first and mold-first fan-out wafer level packaging", "text": "Industry interest in fan-out wafer level packaging has been increasing steadily.
This is partially because of the potential behind panel-based fan-out packaging, and partially because wafer level packaging may be fulfilling needs not met by interposer-based and 3D technologies. As is the case with any technology, there are variations within the fan-out wafer level packaging category. This paper will focus on two of the primary processes: RDL-first and mold-first (also called chip-first). While these process flows have many of the same activities, those activities are carried out in a different order, and there are a few key steps that will differ. Each process has unique challenges and benefits, and these will be explored and analyzed."} {"_id": "fdec0d4159ceae88377288cb340341f6bf80bb15", "title": "Mostly Exploration-Free Algorithms for Contextual Bandits", "text": "The contextual bandit literature has traditionally focused on algorithms that address the exploration-exploitation tradeoff. In particular, greedy algorithms that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy algorithms are desirable in practical settings where exploration may be costly or unethical (e.g., clinical trials). Surprisingly, we find that a simple greedy algorithm can be rate-optimal (achieves asymptotically optimal regret) if there is sufficient randomness in the observed contexts (covariates). We prove that this is always the case for a two-armed bandit under a general class of context distributions that satisfy a condition we term covariate diversity. Furthermore, even absent this condition, we show that a greedy algorithm can be rate-optimal with nonzero probability. Thus, standard bandit algorithms may unnecessarily explore. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy algorithm or to explore. We prove that this algorithm is rate-optimal without any additional assumptions on the context distribution or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces experimentation and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling or UCB."} {"_id": "f20d2ffd2dcfffb0c1d22c11bf4861f5140b1a28", "title": "Unveiling patterns of international communities in a global city using mobile phone data", "text": "We analyse a large mobile phone activity dataset provided by Telecom Italia for the Telecom Big Data Challenge contest. The dataset reports the international country codes of every call/SMS made and received by mobile phone users in Milan, Italy, between November and December 2013, with a spatial resolution of about 200 meters. We first show that the observed spatial distribution of international codes well matches the distribution of international communities reported by official statistics, confirming the value of mobile phone data for demographic research. Next, we define an entropy function to measure the heterogeneity of the international phone activity in space and time. By comparing the entropy function to empirical data, we show that it can be used to identify the city\u2019s hotspots, defined by the presence of points of interests. Eventually, we use the entropy function to characterize the spatial distribution of international communities in the city. Adopting a topological data analysis approach, we find that international mobile phone users exhibit some robust clustering patterns that correlate with basic socio-economic variables.
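A tiny simulation of the phenomenon behind the bandit result above: a purely greedy linear contextual bandit whose per-arm least-squares estimates keep improving because the contexts themselves are diverse. This illustrates the covariate-diversity effect on synthetic data; it is not the paper's Greedy-First algorithm or its guarantees.

```python
# Two-armed linear contextual bandit run greedily (no exploration bonus).
# Diverse contexts supply "free" exploration, so greedy regret stays modest.
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 5000
theta = np.array([[1.0, -0.5, 0.2], [0.2, 0.8, -0.4]])   # true arm parameters

A = [np.eye(d) * 1e-3 for _ in range(2)]   # per-arm X^T X (ridge-stabilized)
b = [np.zeros(d) for _ in range(2)]        # per-arm X^T y
regret = 0.0
for t in range(T):
    x = rng.normal(size=d)                  # diverse contexts (covariate diversity)
    est = [np.linalg.solve(A[k], b[k]) @ x for k in range(2)]
    k = int(np.argmax(est))                 # purely greedy arm choice
    reward = theta[k] @ x + 0.1 * rng.normal()
    A[k] += np.outer(x, x)                  # update only the pulled arm
    b[k] += reward * x
    regret += max(theta[0] @ x, theta[1] @ x) - theta[k] @ x

print("cumulative regret of greedy:", round(regret, 1))
```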
Our results suggest that mobile phone records can be used in conjunction with topological data analysis tools to study the geography of migrant communities in a global city."} {"_id": "16048a04665a8cddd179a939d6cbeb4e0cdb6672", "title": "Automatic Radial Distortion Estimation from a Single Image", "text": "Many computer vision algorithms rely on the assumptions of the pinhole camera model, but lens distortion with off-the-shelf cameras is usually significant enough to violate this assumption. Many methods for radial distortion estimation have been proposed, but they all have limitations. Robust automatic radial distortion estimation from a single natural image would be extremely useful for many applications, particularly those in human-made environments containing abundant lines. For example, it could be used in place of an extensive calibration procedure to get a mobile robot or quadrotor experiment up and running quickly in an indoor environment. We propose a new method for automatic radial distortion estimation based on the plumb-line approach. The method works from a single image and does not require a special calibration pattern. It is based on Fitzgibbon\u2019s division model, robust estimation of circular arcs, and robust estimation of distortion parameters. We perform an extensive empirical study of the method on synthetic images. We include a comparative statistical analysis of how different circle fitting methods contribute to accurate distortion parameter estimation. We finally provide qualitative results on a wide variety of challenging real images. The experiments demonstrate the method\u2019s ability to accurately identify distortion parameters and remove distortion from images."} {"_id": "4194a2ddd9621b6068f4ea39ab0e9c180a7bbca0", "title": "Millimeter-Wave Substrate Integrated Waveguide Multibeam Antenna Based on the Parabolic Reflector Principle", "text": "A novel substrate integrated waveguide (SIW) multibeam antenna based on the parabolic reflector principle is proposed and implemented at 37.5 GHz, which takes the advantages of low loss, low profile, high gain, and mass producibility. A prototype of the proposed antenna with seven input ports generating a corresponding number of output beams is fabricated in a standard PCB process. The measurements are in good agreement with the simulations, and the results demonstrate that this type of printed multibeam antenna is a good choice for communications applications where mobility and high gain are simultaneously required."} {"_id": "84f904a71bee129a1cf00dc97f6cdbe1011657e6", "title": "Fashioning with Networks: Neural Style Transfer to Design Clothes", "text": "Convolutional Neural Networks have been highly successful in performing a host of computer vision tasks such as object recognition, object detection, image segmentation and texture synthesis. In 2015, Gatys et al. [7] showed how the style of a painter can be extracted from an image of the painting and applied to another normal photograph, thus recreating the photo in the style of the painter. The method has been successfully applied to a wide range of images and has since spawned multiple applications and mobile apps. In this paper, the neural style transfer algorithm is applied to fashion so as to synthesize new custom clothes. We construct an approach to personalize and generate new custom clothes based on a user\u2019s preference and by learning the user\u2019s fashion choices from a limited set of clothes from their closet.
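The entropy function in the mobile-phone study above can be pictured as Shannon entropy over each cell's distribution of international country codes: cells whose activity spreads across many communities score high, cells dominated by one code score low. The sketch below uses invented counts purely for illustration.

```python
# Shannon entropy of per-cell country-code activity, a stand-in for the
# heterogeneity measure used to locate hotspots. Counts are invented.
import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                 # normalize, drop empty codes
    return float(-(p * np.log2(p)).sum())

cells = {
    "station": [120, 95, 80, 60, 40, 30],  # many communities -> high entropy
    "suburb":  [300, 5, 2],                # dominated by one code -> low entropy
}
for name, counts in cells.items():
    print(f"{name:8s} H = {entropy(counts):.2f} bits")
```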
The approach is evaluated by analyzing the generated images of clothes and how well they align with the user\u2019s fashion style."} {"_id": "e1df33e5915dd610a6f3542b28dca4306ad62cd8", "title": "Affective Deprivation Disorder: Does it Constitute a Relational Disorder?", "text": "Harriet Simons is Adjunct Associate Professor at Smith College School for Social Work and runs a private social work practice for individuals and couples specializing in infertility and Asperger\u2019s relationships. Among her published works are Wanting Another Child: Coping with Secondary Infertility. Jossey-Bass, 1998; (Contributor), Infertility Counseling: A Handbook for Clinicians. Parthenon Press, 1997; and (Co-editor and Contributor) Infertility: Medical, Emotional and Social Considerations. Human Sciences Press, 1984. Jason Thompson is Community Support Officer at SPARAL Disability Services specializing in rehabilitation and lifestyle support for individuals with intellectual and physical disabilities. He has completed certified studies in Applied Behaviour Analysis and Normative Male Alexithymia. He recently published an essay on the subject of alexithymia Alexithymia: An Imaginative Approach, Psychotherapy Australia Journal, vol 14, No 4, Aug. 2008"} {"_id": "11b576b37a76833da3fabe2f485529f9210c6c05", "title": "Persistent Homology: An Introduction and a New Text Representation for Natural Language Processing", "text": "Persistent homology is a mathematical tool from topological data analysis. It performs multi-scale analysis on a set of points and identifies clusters, holes, and voids therein. These latter topological structures complement standard feature representations, making persistent homology an attractive feature extractor for artificial intelligence. Research on persistent homology for AI is in its infancy, and is currently hindered by two issues: the lack of an accessible introduction to AI researchers, and the paucity of applications. In response, the first part of this paper presents a tutorial on persistent homology specifically aimed at a broader audience without sacrificing mathematical rigor. The second part contains one of the first applications of persistent homology to natural language processing. Specifically, our Similarity Filtration with Time Skeleton (SIFTS) algorithm identifies holes that can be interpreted as semantic \u201ctie-backs\u201d in a text document, providing a new document structure representation. We illustrate our algorithm on documents ranging from nursery rhymes to novels, and on a corpus with child and adolescent writings."} {"_id": "ee03a87fceb56f36a94639724d0e7bcb0d5ffb92", "title": "Warpage Tuning Study for Multi-chip Last Fan Out Wafer Level Package", "text": "In recent years, the popularity of the IoT has pushed the package development of 3C products toward more functional and thinner targets. For packages where high I/O density and low cost are considered, the promising Fan-out Wafer Level Packaging (FOWLP) provides a solution that matches existing OSAT capability; besides, the chip-last process in FOWLP can further enhance the total yield through selectable known-good dies (KGDs). However, during processing, the large portion of molding compound induces high warpage that challenges fabrication limits. An additional leveling process is usually applied to lower the warpage caused by the mismatch of coefficient of thermal expansion and Young's modulus among carriers, dies, and molding compound.
This process increases the package cost and can even induce internal damage that affects device reliability. To avoid the leveling process and improve the warpage trend, in this paper we simulated several models with different designs of molding compound and dies, and then developed a multi-chip-last FOWLP test vehicle with a package dimension of 12x15 mm2 containing 8x9 and 4x9 mm2 dies. The test vehicle carried three redistribution layers (RDLs), including one fine-pitch RDL with line width/line spacing of 2um/2um, which is an advantage of multi-chip-last FOWLP, and also exhibited a ball-on-trace structure as another low-cost option. Regarding wafer warpage, the results showed that tuning the thickness of the molding compound can improve the warpage trend, especially in the application of a high-modulus carrier, which kept wafer warpage within 1mm; regarding package warpage, a thinner die can lower the warpage of the package. Through careful warpage control, the multi-chip-last FOWLP package with ball-on-trace design was successfully presented in this paper, and it passed package-level reliability tests of TCB 1000 cycles, HTSL 1000 hrs, and uHAST 96 hrs, as well as the drop test for board-level reliability."} {"_id": "809219e7c5d7623a67baed7389c9d8f502b385c9", "title": "Eccentric training in chronic painful impingement syndrome of the shoulder: results of a pilot study", "text": "Treatment with painful eccentric muscle training has been demonstrated to give good clinical results in patients with chronic Achilles tendinosis. The pain mechanisms in chronic painful shoulder impingement syndrome have not been scientifically clarified, but the histological changes found in the supraspinatus tendon have similarities with the findings in Achilles tendinosis. In this pilot study, nine patients (five females and four males, mean age 54\u00a0years) with a long duration of shoulder pain (mean 41\u00a0months), diagnosed as having shoulder impingement syndrome and on the waiting list for surgical treatment (mean 13\u00a0months), were included. Patients with arthrosis in the acromio-clavicular joint or with large calcifications causing mechanical impingement during horizontal shoulder abduction were not included. We prospectively studied the effects of a specially designed painful eccentric training programme for the supraspinatus and deltoideus muscles (3\u00d715 reps, 2\u00a0times/day, 7\u00a0days/week, for 12\u00a0weeks). The patients evaluated the amount of shoulder pain during horizontal shoulder activity on a visual analogue scale (VAS), and satisfaction with treatment. Constant score was assessed. After 12\u00a0weeks of treatment, five patients were satisfied with treatment, their mean VAS had decreased (62\u201318, P<0.05), and their mean Constant score had increased (65\u201380, P<0.05). At 52-week follow-up, the same five patients were still satisfied (had withdrawn from the waiting list for surgery), and their mean VAS and Constant score were 31 and 81, respectively. Among the satisfied patients, two had a partial supraspinatus tendon rupture, and three had a Type 3 shaped acromion. In conclusion, the material in this study is small and the follow-up is short, but it seems that although there is a long duration of pain, together with bone and tendon abnormalities, painful eccentric supraspinatus and deltoideus training might be effective.
The findings motivate further studies."} {"_id": "b6f758be954d34817d4ebaa22b30c63a4b8ddb35", "title": "A Proximity-Aware Hierarchical Clustering of Faces", "text": "In this paper, we propose an unsupervised face clustering algorithm called \u201cProximity-Aware Hierarchical Clustering\u201d (PAHC) that exploits the local structure of deep representations. In the proposed method, a similarity measure between deep features is computed by evaluating linear SVM margins. SVMs are trained using nearest neighbors of sample data, and thus do not require any external training data. Clusters are then formed by thresholding the similarity scores. We evaluate the clustering performance using three challenging unconstrained face datasets, including Celebrity in Frontal-Profile (CFP), IARPA JANUS Benchmark A (IJB-A), and JANUS Challenge Set 3 (JANUS CS3) datasets. Experimental results demonstrate that the proposed approach can achieve significant improvements over state-of-the-art methods. Moreover, we also show that the proposed clustering algorithm can be applied to curate a large-scale and noisy training dataset while maintaining a sufficient amount of images and their variations due to nuisance factors. The face verification performance on JANUS CS3 improves significantly by finetuning a DCNN model with the curated MS-Celeb-1M dataset which contains over three million face images."} {"_id": "96bab00761b857959f2f8baa86c99d8e0155c6eb", "title": "Touch, Taste, & Smell User Interfaces: The Future of Multisensory HCI", "text": "The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people's multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what the limitations are that come into play when users need to monitor more than one sense at a time. The aim of this workshop is to deepen and expand the discussion on touch, taste, and smell within the CHI community and promote the relevance of multisensory experience design and research in HCI."} {"_id": "5bce8a6b14811ab692cd24286249b1348b26ee75", "title": "Minimum Text Corpus Selection for Limited Domain Speech Synthesis", "text": "This paper concerns a limited domain TTS system based on the concatenative method, and presents an algorithm capable of extracting the minimal domain-oriented text corpus from the real data of the given domain, while still reaching the maximum coverage of the domain. The proposed approach ensures that the least amount of text is extracted, containing the most common phrases and (possibly) all the words from the domain. At the same time, it ensures that appropriate phrase overlapping is kept, allowing smooth concatenations to be found in the overlapped regions to reach high quality synthesized speech. In addition, several recommendations allowing a speaker to record the corpus more fluently and comfortably are presented and discussed.
The corpus building is tested and evaluated on several domains differing in size and nature, and the authors present the results of the algorithm and demonstrate the advantages of using the domain-oriented corpus for speech synthesis."} {"_id": "24cebabe2dc3ad558dc95e620dfaad71b11445cc", "title": "BreakDancer: An algorithm for high resolution mapping of genomic structural variation", "text": "Detection and characterization of genomic structural variation are important for understanding the landscape of genetic variation in human populations and in complex diseases such as cancer. Recent studies demonstrate the feasibility of detecting structural variation using next-generation, short-insert, paired-end sequencing reads. However, the utility of these reads is not entirely clear, nor are the analysis methods with which accurate detection can be achieved. The algorithm BreakDancer predicts a wide variety of structural variants including insertion-deletions (indels), inversions and translocations. We examined BreakDancer's performance in simulation, in comparison with other methods and in analyses of a sample from an individual with acute myeloid leukemia and of samples from the 1,000 Genomes trio individuals. BreakDancer sensitively and accurately detected indels ranging from 10 base pairs to 1 megabase pair that are difficult to detect via a single conventional approach."} {"_id": "2fd9f4d331d144f71baf2c66628b12c8c65d3ffb", "title": "Additive Logistic Regression: A Statistical View of Boosting", "text": "Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications."} {"_id": "573ae3286d050281ffe4f6c973b64df171c9d5a5", "title": "Sharing Visual Features for Multiclass and Multiview Object Detection", "text": "We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales.
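The boosting paper above reads AdaBoost as stagewise additive modeling on the logistic scale. The sketch below runs discrete AdaBoost with decision stumps and accumulates the additive model F(x) = sum_m alpha_m h_m(x) explicitly, so the correspondence is visible in code; data are synthetic and the training-set accuracy shown is only a sanity check.

```python
# Discrete AdaBoost with decision stumps, written to expose the stagewise
# additive model F(x) = sum_m alpha_m * h_m(x) discussed in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
y = 2 * y - 1                                   # labels in {-1, +1}
w = np.full(len(y), 1 / len(y))                 # uniform initial sample weights
F = np.zeros(len(y))                            # the additive model, built stagewise

for m in range(50):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    h = stump.predict(X)
    err = w[h != y].sum()                       # weighted training error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    F += alpha * h                              # add the new weak learner
    w *= np.exp(-alpha * y * h)                 # upweight the mistakes
    w /= w.sum()

print("training accuracy:", (np.sign(F) == y).mean())
```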
This can be slow and can require a lot of training data since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (runtime) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. We present a multitask learning procedure, based on boosted decision stumps, that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required and, therefore, the runtime cost of the classifier, is observed to scale approximately logarithmically with the number of classes. The features selected by joint training are generic edge-like features, whereas the features chosen by training each class separately tend to be more object-specific. The generic features generalize better and considerably reduce the computational cost of multiclass object detection."} {"_id": "8262bd0fb41956f136dd2bc04ff4863584a43ce7", "title": "A Hybrid Discriminative/Generative Approach for Modeling Human Activities", "text": "Accurate recognition and tracking of human activities is an important goal of ubiquitous computing. Recent advances in the development of multi-modal wearable sensors enable us to gather rich datasets of human activities. However, the problem of automatically identifying the most useful features for modeling such activities remains largely unsolved. In this paper we present a hybrid approach to recognizing activities, which combines boosting to discriminatively select useful features and learn an ensemble of static classifiers to recognize different activities, with hidden Markov models (HMMs) to capture the temporal regularities and smoothness of activities. We tested the activity recognition system using over 12 hours of wearable-sensor data collected by volunteers in natural unconstrained environments. The models succeeded in identifying a small set of maximally informative features, and were able to identify ten different human activities with an accuracy of 95%."} {"_id": "345e83dd58f26f51d75e2fef330c02c9aa01e61b", "title": "New Hash Functions and Their Use in Authentication and Set Equality", "text": "In this paper we exhibit several new classes of hash functions with certain desirable properties, and introduce two novel applications for hashing which make use of these functions. One class contains a small number of functions, yet is almost universal_2. If the functions hash n-bit long names into m-bit indices, then specifying a member of the class requires only O((m + log2 log2(n)) \u00b7 log2(n)) bits as compared to O(n) bits for earlier techniques. For long names, this is about a factor of m larger than the lower bound of m + log2 n - log2 m bits. An application of this class is a provably secure authentication technique for sending messages over insecure lines. A second class of functions satisfies a much stronger property than universal_2. We present the application of testing sets for equality. The authentication technique allows the receiver to be certain that a message is genuine. An \u201cenemy\u201d, even one with infinite computer resources, cannot forge or modify a message without detection.
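A toy illustration of the authentication idea in the hash-function abstract above, in the Wegman-Carter style: hash the message with a randomly keyed member of an almost-universal family, then mask the digest with a one-time pad. The polynomial family and the Mersenne-prime modulus are illustrative assumptions, not the paper's exact class.

import secrets

P = (1 << 61) - 1  # Mersenne prime modulus

def poly_hash(msg_blocks, key):
    """Polynomial-evaluation hash h_k(m) = sum m_i * k^i mod p, a toy
    member of an almost-universal family (not the paper's exact class)."""
    acc = 0
    for block in reversed(msg_blocks):
        acc = (acc * key + block) % P
    return acc

def authenticate(msg_blocks, key, pad):
    # Wegman-Carter style tag: universal hash, then one-time mask.
    return (poly_hash(msg_blocks, key) + pad) % P

key, pad = secrets.randbelow(P), secrets.randbelow(P)
msg = [314, 159, 265]
tag = authenticate(msg, key, pad)
forged = [314, 159, 266]
print(tag == authenticate(msg, key, pad))      # True: genuine message verifies
print(tag == authenticate(forged, key, pad))   # False w.h.p.: forgery detected

Because the key is secret and the family is almost universal, a forger cannot do better than guessing the tag, regardless of computing power.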
The set equality technique allows operations including \u201cadd member to set,\u201d \u201cdelete member from set\u201d and \u201ctest two sets for equality\u201d to be performed in expected constant time and with less than a specified probability of error."} {"_id": "9d3cd20b0a20cb93743a92314f2e92f584041ff0", "title": "The effect of active video games on cognitive functioning in clinical and non-clinical populations: A meta-analysis of randomized controlled trials", "text": "Physically-active video games ('exergames') have recently gained popularity for leisure and entertainment purposes. Using exergames to combine physical activity and cognitively-demanding tasks may offer a novel strategy to improve cognitive functioning. Therefore, this systematic review and meta-analysis was performed to establish effects of exergames on overall cognition and specific cognitive domains in clinical and non-clinical populations. We identified 17 eligible RCTs with cognitive outcome data for 926 participants. Random-effects meta-analyses found exergames significantly improved global cognition (g=0.436, 95% CI=0.18-0.69, p=0.001). Significant effects still existed when excluding waitlist-only controlled studies, and when comparing to physical activity interventions. Furthermore, benefits of exergames were observed for both healthy older adults and clinical populations with conditions associated with neurocognitive impairments (all p<0.05). Domain-specific analyses found exergames improved executive functions, attentional processing and visuospatial skills. The findings present the first meta-analytic evidence for effects of exergames on cognition. Future research must establish which patient/treatment factors influence efficacy of exergames, and explore neurobiological mechanisms of action."} {"_id": "08877666eb361ad712ed565d0b0f2cf7e7a3490d", "title": "The Construction of Attitudes", "text": "Attitudes have long been considered a central concept of social psychology. In fact, early writers have defined social psychology as the scientific study of attitudes (e.g., Thomas & Znaniecki, 1918) and in 1954 Gordon Allport noted, \"This concept is probably the most distinctive and indispensable concept in contemporary American social psychology\" (p. 43). As one may expect of any concept that has received decades of attention, the concept of attitudes has changed over the years (see Allport, 1954, for an early review). The initial definitions were broad and encompassed cognitive, affective, motivational, and behavioral components. For example, Allport (1935) defined an attitude as \"a mental and neural state of readiness, organized through experience, exerting a directive and dynamic influence upon the individual's response to all objects and situations with which it is related\" (p. 810). A decade later, Krech and Crutchfield (1948) wrote, \"An attitude can be defined as an enduring organization of motivational, emotional, perceptual, and cognitive processes with respect to some aspect of the individual's world\" (p. 152). These definitions emphasized the enduring nature of attitudes and their close relationship to individuals' \u2026; other writers defined attitudes simply in terms of the probability that a person will show a specified behavior in a specified situation. In subsequent decades, the attitude concept lost much of its breadth and was largely reduced to its evaluative component.
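For readers who want the mechanics behind pooled effects such as the g=0.436 reported in the exergame meta-analysis above, a compact DerSimonian-Laird random-effects pooling sketch (made-up effect sizes and variances, not the paper's data):

import numpy as np

def random_effects_pool(g, v):
    """DerSimonian-Laird random-effects pooling of effect sizes g with
    within-study variances v (illustrative numbers only)."""
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v
    g_fixed = (w * g).sum() / w.sum()
    Q = (w * (g - g_fixed) ** 2).sum()              # heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - (len(g) - 1)) / c)         # between-study variance
    w_star = 1.0 / (v + tau2)
    g_pooled = (w_star * g).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return g_pooled, (g_pooled - 1.96 * se, g_pooled + 1.96 * se)

g = [0.3, 0.6, 0.2, 0.5]       # hypothetical Hedges' g per trial
v = [0.04, 0.09, 0.05, 0.06]   # hypothetical within-trial variances
print(random_effects_pool(g, v))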
In the succinct words of Daryl Bem, \"Attitudes are likes and dislikes\"; Eagly and Chaiken defined attitudes as \"a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor\" (p. 1). Along the way, many functions that were initially ascribed to attitudes have been reassigned to other cognitive structures, and the accumulating body of empirical findings drew many of the classic assumptions into question. A growing body of literature suggests that attitudes may be much less enduring and stable than has traditionally been assumed. As we review below, self-reports of attitudes are highly context-dependent and can be profoundly influenced by minor changes in question wording, question format, or question order. For some researchers, these findings indicate that attitudes are difficult to measure and that their assessment is subject to contextual influences. For other researchers, the same findings indicate that all we assess in attitude measurement are evaluative judgments that respondents construct at the time they are asked, based on whatever information happens to be accessible (e.g., Schwarz & Strack, 1991). From this perspective, the traditional attitude concept may not be particularly useful and we may learn more \u2026"} {"_id": "5db22877630e0643f9727b0a05dd69cb1eb168d6", "title": "Towards Operational Measures of Computer Security", "text": "Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of \u2018the ability of the system to resist attack\u2019. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the extensiveness of safeguards introduced during the design and development of a system. Whilst we might expect a system developed to a higher level than another to exhibit \u2018more secure behaviour\u2019 in operation, this cannot be guaranteed; more particularly, we cannot infer what the actual security behaviour will be from knowledge of such a level. In the paper we discuss similarities between reliability and security with the intention of working towards measures of \u2018operational security\u2019 similar to those that we have for reliability of systems. Very informally, these measures could involve expressions such as the rate of occurrence of security breaches (cf rate of occurrence of failures in reliability), or the probability that a specified \u2018mission\u2019 can be accomplished without a security breach (cf reliability function). This new approach is based on the analogy between system failure and security breach. A number of other analogies to support this view are introduced. We examine this duality critically, and have identified a number of important open questions that need to be answered before this quantitative approach can be taken further. The work described here is therefore somewhat tentative, and one of our major intentions is to invite discussion about the plausibility and feasibility of this new approach."} {"_id": "ca9a678185f02807ae1b7d9a0bf89247a130d949", "title": "Estimation of ECG features using LabVIEW", "text": "Various methods have been proposed and used for ECG feature extraction with a considerable percentage of correct detection.
Nevertheless, the problem remains open, especially with respect to higher detection accuracy in noisy ECGs. In this work we have developed an algorithm based on a wavelet transform approach using LabVIEW 8.6. LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a graphical programming language that uses icons instead of lines of text to create programs. Unlike text-based programming languages, LabVIEW uses dataflow programming, where the flow of data determines execution. The flexibility, modular nature, and ease of programming possible with LabVIEW make it less complex. The proposed algorithm is executed in two stages. Firstly, it preprocesses (denoises) the signal to remove the noise from the ECG signal. Then it detects the P-wave, T-wave, QRS complex and their onsets and offsets. LabVIEW and the related toolkits, the Advanced Signal Processing Toolkit and MathScript, are used to build the graphical programme for both stages. The algorithm is evaluated with ECG data files, with different ECG morphologies and noise content, taken from the CSE multilead database, which contains 5000 samples per ECG, recorded at a sampling frequency of 500 Hz."} {"_id": "1e75a3bc8bdd942b683cf0b27d1e1ed97fa3b4c3", "title": "Computationally Efficient Bayesian Learning of Gaussian Process State Space Models", "text": "Gaussian processes allow for flexible specification of prior assumptions of unknown dynamics in state space models. We present a procedure for efficient Bayesian learning in Gaussian process state space models, where the representation is formed by projecting the problem onto a set of approximate eigenfunctions derived from the prior covariance structure. Learning under this family of models can be conducted using a carefully crafted particle MCMC algorithm. This scheme is computationally efficient and yet allows for a fully Bayesian treatment of the problem. Compared to conventional system identification tools or existing learning methods, we show competitive performance and reliable quantification of uncertainties in the model."} {"_id": "2584d7faf2183e4a11f0c8173bc3541a4eda00ee", "title": "Source Code Analysis: A Road Map", "text": "The automated and semi-automated analysis of source code has remained a topic of intense research for more than thirty years. During this period, algorithms and techniques for source-code analysis have changed, sometimes dramatically. The abilities of the tools that implement them have also expanded to meet new and diverse challenges. This paper surveys current work on source-code analysis. It also provides a road map for future work over the next five-year period and speculates on the development of source-code analysis applications, techniques, and challenges over the next 10, 20, and 50 years."} {"_id": "b8d9e2bb5b517f5b307045efd0cc3a9bf4967419", "title": "Efficient 3D Object Segmentation from Densely Sampled Light Fields with Applications to 3D Reconstruction", "text": "Precise object segmentation in image data is a fundamental problem with various applications, including 3D object reconstruction. We present an efficient algorithm to automatically segment a static foreground object from a highly cluttered background in light fields. A key insight and contribution of our article is that a significant increase of the available input data can enable the design of novel, highly efficient approaches.
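A sketch of the wavelet-based denoising stage described in the ECG abstract above, written in Python with PyWavelets rather than LabVIEW; the wavelet choice, decomposition level, and universal soft threshold are illustrative assumptions, not the paper's exact settings.

import numpy as np
import pywt

def wavelet_denoise(sig, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold; a
    Python analogue of the preprocessing stage, not the LabVIEW code."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(sig)))        # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

fs = 500                                               # CSE database sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)                    # toy stand-in for an ECG trace
noisy = clean + 0.2 * np.random.default_rng(2).normal(size=t.size)
print(np.abs(wavelet_denoise(noisy) - clean).mean())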
In particular, the central idea of our method is to exploit high spatio-angular sampling on the order of thousands of input frames, for example, captured as a hand-held video, such that new structures are revealed due to the increased coherence in the data. We first show how purely local gradient information contained in slices of such a dense light field can be combined with information about the camera trajectory to make efficient estimates of the foreground and background. These estimates are then propagated to textureless regions using edge-aware filtering in the epipolar volume. Finally, we enforce global consistency in a gathering step to derive a precise object segmentation in both 2D and 3D space, which captures fine geometric details even in very cluttered scenes. The design of each of these steps is motivated by efficiency and scalability, allowing us to handle large, real-world video datasets on a standard desktop computer. We demonstrate how the results of our method can be used for considerably improving the speed and quality of image-based 3D reconstruction algorithms, and we compare our results to state-of-the-art segmentation and multiview stereo methods."} {"_id": "aaeaa58bedf6fa9b42878bf5914f55f48cf26209", "title": "Odin: Team VictorTango's entry in the DARPA Urban Challenge", "text": "The DARPA Urban Challenge required robotic vehicles to travel more than 90 km through an urban environment without human intervention and included situations such as stop intersections, traffic merges, parking, and roadblocks. Team VictorTango separated the problem into three parts: base vehicle, perception, and planning. A Ford Escape outfitted with a custom drive-by-wire system and computers formed the basis for Odin. Perception used laser scanners, global positioning system, and a priori knowledge to identify obstacles, cars, and roads. Planning relied on a hybrid deliberative/reactive architecture to"} {"_id": "9af0342e9f6ea8276a4679596cf0e5fbbd7f9996", "title": "The Skiplist-Based LSM Tree", "text": "Log-Structured Merge (LSM) Trees provide a tiered data storage and retrieval paradigm that is attractive for write-optimized data systems. Maintaining an efficient buffer in memory and deferring updates past their initial write-time, the structure provides quick operations over hot data. Because each layer of the structure is logically separate from the others, the structure is also conducive to opportunistic and granular optimization. In this paper, we introduce the Skiplist-Based LSM Tree (sLSM), a novel system in which the memory buffer of the LSM is composed of a sequence of skiplists. We develop theoretical and experimental results that demonstrate that the breadth of tuning parameters inherent to the sLSM allows it broad flexibility for excellent performance across a wide variety of workloads."} {"_id": "a3f16a77962087f522bfe7efe7d2dfff72d85b34", "title": "Computer vision-based registration techniques for augmented reality", "text": "Augmented reality is a term used to describe systems in which computer-generated information is superimposed on top of the real world; for example, through the use of a see-through head-mounted display. A human user of such a system could still see and interact with the real world, but have valuable additional information, such as descriptions of important features or instructions for performing physical tasks, superimposed on the world.
For example, the computer could identify objects and overlay them with graphic outlines, labels, and schematics. The graphics are registered to the real-world objects and appear to be \u201cpainted\u201d onto those objects. Augmented reality systems can be used to make productivity aids for tasks such as inspection, manufacturing, and navigation. One of the most critical requirements for augmented reality is to recognize and locate real-world objects with respect to the person\u2019s head. Accurate registration is necessary in order to overlay graphics accurately on top of the real-world objects. At the Colorado School of Mines, we have developed a prototype augmented reality system that uses head-mounted cameras and computer vision techniques to accurately register the head to the scene. The current system locates and tracks a set of preplaced passive fiducial targets placed on the real-world objects. The system computes the pose of the objects and displays graphics overlays using a see-through head-mounted display. This paper describes the architecture of the system and outlines the computer vision techniques used."} {"_id": "e9078ffe889c7783af72667c4d002ec9f2edeea1", "title": "Lecture Notes on Network Information Theory", "text": "These lecture notes have been converted to a book titled Network Information Theory published recently by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/."} {"_id": "839fcb04fa560ccea8601089046b0d58b3bb3262", "title": "Economic modelling using computational intelligence techniques", "text": "Attempting to successfully and accurately predict the financial market has long attracted the interests and attention of economists, bankers, mathematicians and scientists alike. The financial markets form the bedrock of any economy. There are a large number of factors and parameters that influence the direction, volume, price and flow of traded stocks. This, coupled with the markets\u2019 vulnerability to external and non-finance-related factors and the resulting intrinsic volatility, makes the development of a robust and accurate financial market prediction model an interesting research and engineering problem. In an attempt to solve this engineering problem, the authors of this paper present a rough set theory based predictive model for the financial markets. Rough set theory has, as its base, imperfect data analysis and approximation. The theory is used to extract a set of reducts and a set of trading rules based on trading data of the Johannesburg Stock Exchange (JSE) for the period 1 April 2006 to 1 April 2011. To increase the efficiency of the model, four discretization algorithms were used on the data set, namely Equal Frequency Binning (EFB), Boolean Reasoning, Entropy, and the Na\u00efve Algorithm. The EFB algorithm gives the smallest number of rules and the highest accuracy. Next, the reducts are extracted using the Genetic Algorithm, and finally the set of dependency rules is generated from the set of reducts. A rough set confusion matrix is used to assess the accuracy of the model.
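Equal Frequency Binning, the discretization scheme the rough-set model above settles on, simply places (roughly) equal numbers of observations in each bin; pandas' qcut does exactly this (toy returns, not the JSE data):

import numpy as np
import pandas as pd

# Equal Frequency Binning: each bin receives about the same number of
# observations. Illustrative synthetic daily returns only.
rng = np.random.default_rng(3)
returns = pd.Series(rng.normal(0, 1.5, 1000), name="daily_return")
bins = pd.qcut(returns, q=4, labels=["low", "med_low", "med_high", "high"])
print(bins.value_counts())   # ~250 observations per bin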
The model gave a prediction accuracy of 80.4% using the Standard Voting classifier. Keywords: rough set theory; financial market modelling; neural networks; discretization; classification. INTRODUCTION Around the world, trading in the stock market has gained enormous popularity as a means through which one can reap huge profits. Attempting to successfully and accurately predict the financial market has long attracted the interests and attention of economists, bankers, mathematicians and scientists alike. Thus, stock price movement prediction has long been a cherished desire of investors, speculators and industries [1]. For a long time, statistical techniques such as Bayesian models, regression and some econometric techniques have dominated research activities in prediction [2]. The primary approach to financial forecasting has been the identification of a stock price trend and continuation of the investment strategy until evidence suggests that the trend has reversed [3]. One of the biggest problems with the use of regression methods is that they fail to give satisfactory forecasting results for some series because of their linear structure and other inherent limitations [8, 9]. However, the emergence of computational intelligence techniques as a viable alternative to the \u201ctraditional\u201d statistical models that have dominated this area since the 1930s [2, 3] has given impetus to the increasing usage of these techniques in fields such as economics and finance [3]. Apart from these, there have been many other successful applications of intelligent systems to decision support and complex automation tasks [4-6]. Since its inception in 1982, rough set theory has been extensively used as an effective data mining and knowledge discovery technique in numerous applications in the finance, investment and banking fields [4, 7, 11, 12, 20]. Data mining is a discipline in computational intelligence that deals with knowledge discovery, data analysis, and full and semi-autonomous decision making [7]. It entails the analysis of data sets such that unsuspected relationships among data objects are found. Predictive modeling is the practice of deriving future inferences based on these relationships. The financial markets form the bedrock of any economy. There are a large number of factors and parameters that influence the direction, volume, price and flow of traded stocks. This, coupled with the markets\u2019 vulnerability to external and non-finance-related factors and the resulting intrinsic volatility, makes the development of a robust and accurate financial market prediction model an interesting research and engineering problem. This paper presents a generic stock price prediction model based on rough set theory. The model is derived from data on the daily movements of the Johannesburg Stock Exchange\u2019s All Share Index. The data used was collected over a five-year period from 1 April 2006 to 1 April 2011. The methodology used in this paper is as follows: data preprocessing, data splitting, data discretization, redundant attribute elimination, reduct generation, rule generation and prediction. The rest of this paper is organized as follows: Section II covers the theoretical foundations of rough set theory, Section III gives the design of the proposed prediction model, while an analysis of the results and the conclusions are presented in Sections IV and V, respectively. THEORETICAL FOUNDATIONS OF ROUGH SETS Rough set theory (RST) was introduced by Pawlak in 1982.
The theory can be regarded as a mathematical tool used for imperfect data analysis [10]. Thus, RST has proved useful in applications spanning the engineering, financial and decision support domains, to mention but a few. It is based on the assumption that \u201cwith every object in the universe of discourse, some information (data or knowledge) is associated\u201d [10]. In practical applications, the \u201cuniverse of discourse\u201d described in [10] is usually a table, called the decision table, in which the rows are objects or data elements, the columns are attributes, and the entries are the attribute values [3]. The objects or data elements described by the same attribute values are said to be indiscernible (indistinguishable) by the attribute set. Any set of indiscernible data elements forms a granule or atom of knowledge about the entire \u201cuniverse of discourse\u201d (information system framework) [10]. A union of these elementary sets (granules) is said to be a precise or crisp set; otherwise, the set is said to be rough [1, 2, 7, 10]. Every rough set will have boundary cases, i.e., data objects which cannot be classified with certainty as belonging to the set or its complement when using the available information [10]. Associated with every rough set is a pair of sets called the lower and upper approximation of the rough set. The lower approximation consists of those objects which one can definitively say belong to the target set. The upper approximation consists of those objects which possibly belong to the target set. The difference between the two sets is the boundary region. A derived decision rule specifies an outcome based on certain conditions. Where the derived rule uniquely identifies outcomes based on some conditions, the rule is said to be certain; otherwise it is uncertain. Every decision rule has a pair of probabilities associated with it, the certainty and coverage coefficients [3]. These conditional probabilities also satisfy Bayes\u2019 theorem [7, 10]. The certainty coefficient is the conditional probability that an object belongs to the decision class outlined by the rule, given that it satisfies the conditions of the rule. The coverage coefficient, on the other hand, expresses the conditional probability of reasons given some decision [10]. Clearly, RST overlaps with many other theories in the realm of imperfect knowledge analysis, such as evidence theory, Bayesian inference, and fuzzy sets [1, 3, 4, 10, 11, 12]. To define rough sets mathematically, we begin by defining an information system S = (U, A), where U and A are finite and non-empty sets that represent the data objects and attributes, respectively. Every attribute a has a set of possible values Va, which is called the domain of a. A subset B of A determines a binary relation I(B) on U, called the indiscernibility relation. The relation is defined as follows: (x, y) \u2208 I(B) if and only if a(x) = a(y) for every a in B, where a(x) denotes the value of attribute a for data object x [10]. I(B) is an equivalence relation; the set of all equivalence classes of I(B) is denoted U/I(B), and the equivalence class of I(B) containing x is denoted B(x). If (x, y) belongs to I(B), then x and y are said to be indiscernible with respect to B. All equivalence classes of the indiscernibility relation I(B) are referred to as B-granules or B-elementary sets [10]. In the information system defined above, we now assign, following [10], to every subset X of U two sets, called the lower and the upper approximation of X.
The two sets are defined as follows [10]: B_*(X) = \u22c3 { B(x) : B(x) \u2286 X } (3) and B^*(X) = \u22c3 { B(x) : B(x) \u2229 X \u2260 \u2205 } (4). Thus, the lower approximation is the union of all B-elementary sets that are included in the target set, whilst the upper approximation is the union of all B-elementary sets that have a non-empty intersection with the target set. The difference between the two sets is called the boundary region of X."} {"_id": "f8785ba3a56b5ce90fb264e82dacaca1ac641091", "title": "Comparative study of Hough Transform methods for circle finding", "text": "A variety of circle detection methods which are based on variations of the Hough Transform are investigated. The five methods considered are the standard Hough Transform, the Fast Hough Transform of Li et al. [1], a two-stage Hough method, and two space-saving approaches based on the method devised by Gerig and Klein [2]. The performance of each of the methods has been compared on synthetic imagery and real images from a metallurgical application. Figures and comments are presented concerning the accuracy, reliability, computational efficiency and storage requirements of each of the methods."} {"_id": "295ad06bbaca2dc4262dd834d051e12324ef69c4", "title": "User Identity Linkage by Latent User Space Modelling", "text": "User identity linkage across social platforms is an important problem of great research challenge and practical value. In real applications, the task often assumes an extra degree of difficulty by requiring linkage across multiple platforms. While pair-wise user linkage between two platforms, which has been the focus of most existing solutions, provides reasonably convincing linkage, the result depends by nature on the order of platform pairs in execution with no theoretical guarantee on its stability. In this paper, we explore a new concept of ``Latent User Space'' to more naturally model the relationship between the underlying real users and their observed projections onto the varied social platforms, such that the more similar the real users, the closer their profiles in the latent user space. We propose two effective algorithms, a batch model (ULink) and an online model (ULink-On), based on latent user space modelling. Two simple yet effective optimization methods are used for optimizing the objective function: the first one based on the constrained concave-convex procedure (CCCP) and the second on accelerated proximal gradient. To the best of our knowledge, this is the first work to propose a unified framework to address the following two important aspects of the multi-platform user identity linkage problem: (I) the platform multiplicity and (II) online data generation. We present experimental evaluations on real-world data sets for not only traditional pairwise-platform linkage but also multi-platform linkage. The results demonstrate the superiority of our proposed method over the state-of-the-art ones."} {"_id": "305f03effa9bb384c386d479babcab09305be081", "title": "Electricity Smart Meters Interfacing the Households", "text": "The recent worldwide measures for energy savings call for a larger awareness of the household energy consumption, given the relevant contribution of domestic load to the national energy balance. On the other hand, electricity smart meters together with gas, heat, and water meters can be interconnected in a large network offering a potential value to implement energy savings and other energy-related services, as long as an efficient interface with the final user is implemented.
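The lower and upper approximations in equations (3) and (4) above are directly computable from a decision table; a minimal sketch with an invented toy table (the attribute names and values are illustrative only):

from collections import defaultdict

def approximations(objects, B, X):
    """Lower/upper approximations per equations (3)-(4): the union of
    B-elementary sets contained in X, and of those intersecting X."""
    granules = defaultdict(set)
    for obj, attrs in objects.items():
        granules[tuple(attrs[a] for a in B)].add(obj)   # B-elementary sets
    lower = set().union(*(g for g in granules.values() if g <= X), set())
    upper = set().union(*(g for g in granules.values() if g & X), set())
    return lower, upper

# Toy decision table: attribute values per object.
objects = {1: {"trend": "up", "vol": "hi"}, 2: {"trend": "up", "vol": "hi"},
           3: {"trend": "dn", "vol": "lo"}, 4: {"trend": "dn", "vol": "hi"}}
X = {1, 3}                                              # target concept
lower, upper = approximations(objects, ["trend", "vol"], X)
print(lower, upper)   # the boundary region is upper - lower

Here objects 1 and 2 are indiscernible, so only object 3 is certainly in X (the lower approximation), while {1, 2, 3} possibly belong to it (the upper approximation).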
Unfortunately, so far, the interface of such devices is mostly designed for and addressed to the utilities supervising the system, giving them relevant advantages, while the communication with the household is often underestimated. This paper addresses this topic by proposing the definition of a local interface for smart meters, by looking at the current European Union and international regulations, at the technological solutions available on the market, and at those implemented in different countries, and, finally, by proposing specific architectures for a proper consumer-oriented implementation of a smart meter network."} {"_id": "1db11bd3e2d0794cbb0fab25508b494e0f0a46ea", "title": "Multi-target tracking by online learning of non-linear motion patterns and robust appearance models", "text": "We describe an online approach to learn non-linear motion patterns and robust appearance models for multi-target tracking in a tracklet association framework. Unlike most previous approaches that use linear motion methods only, we build a non-linear motion map online to better explain direction changes and produce more robust motion affinities between tracklets. Moreover, based on the incrementally learned entry/exit map, a multiple instance learning method is devised to produce strong appearance models for tracking; positive sample pairs are collected from different tracklets so that training samples have high diversity. Finally, using online learned moving groups, a tracklet completion process is introduced to deal with tracklets not reaching entry/exit points. We evaluate our approach on three public data sets, and show significant improvements compared with state-of-the-art methods."} {"_id": "fd4ceb60d5f14a6cf450b5ff7be9015da1d12091", "title": "Is a Positive Review Always Effective? Advertising Appeal Effect in the Persuasion of Online Customer Reviews", "text": "Despite the expected value of online customer reviews as an emerging advertising medium, the manner of ensuring their effectiveness on customers\u2019 purchase behavior is still a question not well answered. To address the question, we propose a new central route cue called \u201cadvertising appeal\u201d to examine attitude and intention to purchase a product or service based on the reasoning of the elaboration likelihood model. We also propose that consumers\u2019 consumption goal and attitude certainty separately moderate the advertising appeal effect on purchase intention through a degree of favorable attitude. We test these hypotheses by conducting a laboratory experiment with 50 participants. In this experiment, we manipulate two kinds of advertising appeals and consumption goals. This study demonstrates that attribute-based appeal reviews are more influential than emotion-based appeal reviews in the persuasion process, regardless of the individuals\u2019 consumption goals. However, the advertising appeal effect on purchase intention is more pronounced for hedonic consumers than for utilitarian consumers."} {"_id": "b5f27d534168dde3667efcca5fedd2b61b40fce7", "title": "Model of Customer Churn Prediction on Support Vector Machine", "text": "To improve the prediction abilities of machine learning methods, a support vector machine (SVM) based on structural risk minimization was applied to customer churn prediction. Examining customer churn prediction cases for both home and foreign carriers, the method was compared with artificial neural networks, decision trees, logistic regression, and the naive Bayesian classifier.
It is found that the method enjoys the best accuracy rate, hit rate, covering rate, and lift coefficient, and therefore provides an effective measure for customer churn prediction."} {"_id": "bf0cf3d565c2e97f8832c87a608e74eed0965e91", "title": "Enhancing Windows Firewall Security Using Fuzzy Reasoning", "text": "Firewall is a standard security utility within the Microsoft Windows operating system. Most Windows users adopt it as the default security option due to its free availability. Moreover, Windows Firewall is a widely used security tool because of the large market share of the Microsoft Windows operating system. It can be customised for filtering of network traffic based on user-defined inbound and outbound rules. It is supplied with only basic functionality. As a result, it cannot be considered an effective tool for monitoring and analysing inbound and outbound traffic. Nonetheless, as a freely available and conventional end user security tool, with some enhancement it could serve as a much more effective security tool for millions of Windows users. Therefore, this paper presents an enhanced Windows Firewall for a more effective monitoring and analysis of network traffic, based upon an intuitive fuzzy reasoning approach. Consequently, it can be used to prevent a greater range of attacks beyond the simple filtering of inbound and outbound network traffic. In this paper, a simulation of ICMP flooding is demonstrated, where the created firewall inbound and outbound rules are insufficient to monitor ICMP flooding. However, the addition of the fuzzy reasoning system monitored it successfully and enhanced the standard Windows Firewall functionality to prevent ICMP flooding. The use of this Windows Fuzzy-Firewall can also be extended to prevent TCP flooding, UDP flooding and some other types of denial of service attacks."} {"_id": "142a799aac35f3b47df9fbfdc7547ddbebba0a91", "title": "Deep Model-Based 6D Pose Refinement in RGB", "text": "We present a novel approach for model-based 6D pose refinement in color data. Building on the established idea of contour-based pose tracking, we teach a deep neural network to predict a translational and rotational update. At the core, we propose a new visual loss that drives the pose update by aligning object contours, thus avoiding the definition of any explicit appearance model. In contrast to previous work our method is correspondence-free, segmentation-free, can handle occlusion and is agnostic to geometrical symmetry as well as visual ambiguities. Additionally, we observe a strong robustness towards rough initialization. The approach can run in real-time and produces pose accuracies that come close to 3D ICP without the need for depth data. Furthermore, our networks are trained from purely synthetic data and will be published together with the refinement code at http://campar.in.tum.de/Main/FabianManhardt to ensure reproducibility."} {"_id": "2b4916a48ad3e1f5559700829297a244c54f4e9b", "title": "The Use of Summation to Aggregate Software Metrics Hinders the Performance of Defect Prediction Models", "text": "Defect prediction models help software organizations to anticipate where defects will appear in the future. When training a defect prediction model, historical defect data is often mined from a Version Control System (VCS, e.g., Subversion), which records software changes at the file-level. Software metrics, on the other hand, are often calculated at the class- or method-level (e.g., McCabe's Cyclomatic Complexity).
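A toy version of the fuzzy-reasoning idea in the Windows Firewall abstract above: triangular memberships over the ICMP packet rate feed a small rule base whose weighted output grades the flooding threat. The thresholds and rule weights are invented for illustration and are not the paper's actual rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def icmp_threat_level(pkts_per_sec):
    """Toy fuzzy inference for ICMP flooding: low/medium/high rate
    memberships drive a weighted 'block' score in [0, 1]."""
    low = tri(pkts_per_sec, -1, 0, 50)
    med = tri(pkts_per_sec, 30, 100, 200)
    high = min(1.0, max(0.0, (pkts_per_sec - 150) / 150))  # ramp, saturates at 300
    # Rule base: low -> allow (0.0), medium -> monitor (0.5), high -> block (1.0)
    total = low + med + high
    return (0.0 * low + 0.5 * med + 1.0 * high) / total if total else 0.0

for rate in (10, 120, 400):
    print(rate, round(icmp_threat_level(rate), 2))

Unlike a crisp inbound/outbound rule, the graded score rises smoothly with the packet rate, which is what lets the fuzzy layer flag a flood that individual rules miss.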
To address the disagreement in granularity, the class- and method-level software metrics are aggregated to file-level, often using summation (i.e., McCabe of a file is the sum of the McCabe of all methods within the file). A recent study shows that summation significantly inflates the correlation between lines of code (Sloc) and cyclomatic complexity (Cc) in Java projects. While there are many other aggregation schemes (e.g., central tendency, dispersion), they have remained unexplored in the scope of defect prediction. In this study, we set out to investigate how different aggregation schemes impact defect prediction models. Through an analysis of 11 aggregation schemes using data collected from 255 open source projects, we find that: (1) aggregation schemes can significantly alter correlations among metrics, as well as the correlations between metrics and the defect count; (2) when constructing models to predict defect proneness, applying only the summation scheme (i.e., the most commonly used aggregation scheme in the literature) achieves the best performance (the best among the 12 studied configurations) in only 11 percent of the studied projects, while applying all of the studied aggregation schemes achieves the best performance in 40 percent of the studied projects; (3) when constructing models to predict defect rank or count, either applying only the summation or applying all of the studied aggregation schemes achieves similar performance, with both achieving the closest to the best performance more often than the other studied aggregation schemes; and (4) when constructing models for effort-aware defect prediction, the mean or median aggregation schemes yield performance values that are significantly closer to the best performance than any of the other studied aggregation schemes. Broadly speaking, the performance of defect prediction models is often underestimated due to our community's tendency to only use the summation aggregation scheme. Given the potential benefit of applying additional aggregation schemes, we advise that future defect prediction models should explore a variety of aggregation schemes."} {"_id": "45a44500a82918aae981fe7072b8a62e3b1f9a52", "title": "System-level fault-tolerance in large-scale parallel machines with buffered coscheduling", "text": "Summary form only given. As the number of processors for multiteraflop systems grows to tens of thousands, with proposed petaflops systems likely to contain hundreds of thousands of processors, the assumption of fully reliable hardware has been abandoned. Although the mean time between failures for the individual components can be very high, the large total component count will inevitably lead to frequent failures. It is therefore of paramount importance to develop new software solutions to deal with the unavoidable reality of hardware faults. We will first describe the nature of the failures of current large-scale machines, and extrapolate these results to future machines. Based on this preliminary analysis we will present a new technology that we are currently developing, buffered coscheduling, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency, requiring no changes to user applications.
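The aggregation-scheme comparison in the defect-prediction abstract above is easy to reproduce mechanically: aggregate method-level metrics to file level under several schemes instead of summation alone (toy metric values, hypothetical file names):

import pandas as pd

# Method-level metrics (invented values); defect prediction models need
# them at file level, so we aggregate under several schemes at once.
methods = pd.DataFrame({
    "file": ["A.java", "A.java", "A.java", "B.java", "B.java"],
    "mccabe": [1, 12, 3, 2, 2],
    "sloc": [10, 120, 25, 15, 18],
})
schemes = ["sum", "mean", "median", "std", "min", "max"]
file_level = methods.groupby("file").agg(schemes)
file_level.columns = ["_".join(c) for c in file_level.columns]
print(file_level)   # e.g. mccabe_sum vs. mccabe_median tell different stories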
Preliminary results show that this is attainable with current hardware."} {"_id": "794f63b0ddbd78156913272a7f4275366bbd24e4", "title": "A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems", "text": "Recent online services rely heavily on automatic personalization to recommend relevant content to a large number of users. This requires systems to scale promptly to accommodate the stream of new users visiting the online services for the first time. In this work, we propose a content-based recommendation system to address both the recommendation quality and the system scalability. We propose to use a rich feature set to represent users, according to their web browsing history and search queries. We use a Deep Learning approach to map users and items to a latent space where the similarity between users and their preferred items is maximized. We extend the model to jointly learn from features of items from different domains and user features by introducing a multi-view Deep Learning model. We show how to make this rich-feature based user representation scalable by reducing the dimension of the inputs and the amount of training data. The rich user feature representation allows the model to learn relevant user behavior patterns and give useful recommendations for users who do not have any interaction with the service, given that they have adequate search and browsing history. The combination of different domains into a single model for learning helps improve the recommendation quality across all the domains, as well as having a more compact and a semantically richer user latent feature vector. We experiment with our approach on three real-world recommendation systems acquired from different sources of Microsoft products: Windows Apps recommendation, News recommendation, and Movie/TV recommendation. Results indicate that our approach is significantly better than the state-of-the-art algorithms (up to 49% enhancement on existing users and 115% enhancement on new users). In addition, experiments on a publicly open data set also indicate the superiority of our method in comparison with traditional generative topic models, for modeling cross-domain recommender systems. Scalability analysis shows that our multi-view DNN model can easily scale to encompass millions of users and billions of item entries. Experimental results also confirm that combining features from all domains produces much better performance than building separate models for each domain."} {"_id": "890716fb7a205ed9f431b504633e26867f70ac9a", "title": "Millimeter Wave MIMO With Lens Antenna Array: A New Path Division Multiplexing Paradigm", "text": "Millimeter wave (mmWave) communication is a promising technology for future wireless systems, while one practical challenge is to achieve its large-antenna gains with only limited radio frequency (RF) chains for cost-effective implementation. To this end, we study in this paper a new lens antenna array enabled mmWave multiple-input multiple-output (MIMO) communication system. We first show that the array response of lens antenna arrays follows a \u201csinc\u201d function, where the antenna element with the peak response is determined by the angle of arrival (AoA)/departure (AoD) of the received/transmitted signal.
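A numeric illustration of the sinc-shaped lens-array response just described: under this idealized model, each propagation path concentrates its energy on one antenna element, which is what makes per-path processing (path division multiplexing) possible. The aperture parameter and the response model are illustrative assumptions, not the paper's exact system model.

import numpy as np

def lens_array_response(n_elems, sin_aoa, aperture):
    """Idealized lens-array response: element m responds as
    sinc(m - aperture * sin_aoa), so energy concentrates on one element."""
    m = np.arange(-(n_elems // 2), n_elems // 2 + 1)
    return np.sinc(m - aperture * sin_aoa)

resp = lens_array_response(n_elems=21, sin_aoa=0.35, aperture=10)
peak = int(np.argmax(np.abs(resp)))
print(peak, np.round(resp[peak], 3))   # one dominant element per path -> PDM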
By exploiting this unique property along with the multi-path sparsity of mmWave channels, we propose a novel low-cost and capacity-achieving spatial multiplexing scheme for both narrow-band and wide-band mmWave communications, termed path division multiplexing (PDM), where parallel data streams are transmitted over different propagation paths with simple per-path processing. We further propose a simple path grouping technique with group-based small-scale MIMO processing to effectively mitigate the inter-stream interference due to similar AoAs/AoDs. Numerical results are provided to compare the performance of the proposed mmWave lens MIMO against the conventional MIMO with uniform planar arrays (UPAs) and hybrid analog/digital processing. It is shown that the proposed design achieves significant throughput gains as well as complexity and cost reductions, thus leading to a promising new paradigm for mmWave MIMO communications."} {"_id": "47cc4c948d08a22639770ad3bbf1b41e18c6edfe", "title": "The Mutational Landscape of Lethal Castrate Resistant Prostate Cancer", "text": "Characterization of the prostate cancer transcriptome and genome has identified chromosomal rearrangements and copy number gains and losses, including ETS gene family fusions, PTEN loss and androgen receptor (AR) amplification, which drive prostate cancer development and progression to lethal, metastatic castration-resistant prostate cancer (CRPC). However, less is known about the role of mutations. Here we sequenced the exomes of 50 lethal, heavily pre-treated metastatic CRPCs obtained at rapid autopsy (including three different foci from the same patient) and 11 treatment-naive, high-grade localized prostate cancers. We identified low overall mutation rates even in heavily treated CRPCs (2.00 per megabase) and confirmed the monoclonal origin of lethal CRPC. Integrating exome copy number analysis identified disruptions of CHD1 that define a subtype of ETS gene family fusion-negative prostate cancer. Similarly, we demonstrate that ETS2, which is deleted in approximately one-third of CRPCs (commonly through TMPRSS2:ERG fusions), is also deregulated through mutation. Furthermore, we identified recurrent mutations in multiple chromatin- and histone-modifying genes, including MLL2 (mutated in 8.6% of prostate cancers), and demonstrate interaction of the MLL complex with the AR, which is required for AR-mediated signalling. We also identified novel recurrent mutations in the AR collaborating factor FOXA1, which is mutated in 5 of 147 (3.4%) prostate cancers (both untreated localized prostate cancer and CRPC), and showed that mutated FOXA1 represses androgen signalling and increases tumour growth. Proteins that physically interact with the AR, such as the ERG gene fusion product, FOXA1, MLL2, UTX (also known as KDM6A) and ASXL1 were found to be mutated in CRPC. In summary, we describe the mutational landscape of a heavily treated metastatic cancer, identify novel mechanisms of AR signalling deregulated in prostate cancer, and prioritize candidates for future study."} {"_id": "6d1caaff7af3c9281e423d198697372de2976154", "title": "Circulating mutant DNA to assess tumor dynamics", "text": "The measurement of circulating nucleic acids has transformed the management of chronic viral infections such as HIV. The development of analogous markers for individuals with cancer could similarly enhance the management of their disease. DNA containing somatic mutations is highly tumor specific and thus, in theory, can provide optimum markers. 
However, the number of circulating mutant gene fragments is small compared to the number of normal circulating DNA fragments, making it difficult to detect and quantify them with the sensitivity required for meaningful clinical use. In this study, we applied a highly sensitive approach to quantify circulating tumor DNA (ctDNA) in 162 plasma samples from 18 subjects undergoing multimodality therapy for colorectal cancer. We found that ctDNA measurements could be used to reliably monitor tumor dynamics in subjects with cancer who were undergoing surgery or chemotherapy. We suggest that this personalized genetic approach could be generally applied to individuals with other types of cancer."} {"_id": "fd06836db095e897ca83c43820d1d44fb481201a", "title": "ETS gene fusions in prostate cancer: from discovery to daily clinical practice.", "text": "CONTEXT\nIn 2005, fusions between the androgen-regulated transmembrane protease serine 2 gene, TMPRSS2, and E twenty-six (ETS) transcription factors were discovered in prostate cancer.\n\n\nOBJECTIVE\nTo review advances in our understanding of ETS gene fusions, focusing on challenges affecting translation to clinical application.\n\n\nEVIDENCE ACQUISITION\nThe PubMed database was searched for reports on ETS fusions in prostate cancer.\n\n\nEVIDENCE SYNTHESIS\nSince the discovery of ETS fusions, novel 5' and 3' fusion partners and multiple splice isoforms have been reported. The most common fusion, TMPRSS2:ERG, is present in approximately 50% of prostate-specific antigen (PSA)-screened localized prostate cancers and in 15-35% of population-based cohorts. ETS fusions can be detected noninvasively in the urine of men with prostate cancer, with a specificity rate in PSA-screened cohorts of >90%. Reports from untreated population-based cohorts suggest an association between ETS fusions and cancer-specific death and metastatic spread. In retrospective prostatectomy cohorts, conflicting results have been published regarding associations between ETS fusions and cancer aggressiveness. In addition to serving as a potential biomarker, tissue and functional studies suggest a specific role for ETS fusions in the transition to carcinoma. Finally, recent results suggest that the 5' and 3' ends of ETS fusions as well as downstream targets may be targeted therapeutically.\n\n\nCONCLUSIONS\nRecent studies suggest that the first clinical applications of ETS fusions are likely to be in noninvasive detection of prostate cancer and in aiding with difficult diagnostic cases. Additional studies are needed to clarify the association between gene fusions and cancer aggressiveness, particularly those studies that take into account the multifocal and heterogeneous nature of localized prostate cancer. Multiple promising strategies have been identified to potentially target ETS fusions. Together, these results suggest that ETS fusions will affect multiple aspects of prostate cancer diagnosis and management."} {"_id": "69d6775fe81a04711e68a942fd45e0a72ed34110", "title": "Advanced sequencing technologies: methods and goals", "text": "Nearly three decades have passed since the invention of electrophoretic methods for DNA sequencing. The exponential growth in the cost-effectiveness of sequencing has been driven by automation and by numerous creative refinements of Sanger sequencing, rather than through the invention of entirely new methods.
Various novel sequencing technologies are being developed, each aspiring to reduce costs to the point at which the genomes of individual humans could be sequenced as part of routine health care. Here, we review these technologies, and discuss the potential impact of such a 'personal genome project' on both the research community and on society."} {"_id": "5cf321385fcd87dfb0770b48180b38462038cbbf", "title": "Word sense disambiguation as a traveling salesman problem", "text": "Word sense disambiguation (WSD) is a difficult problem in Computational Linguistics, mostly because of the use of a fixed sense inventory and the deep level of granularity. This paper formulates WSD as a variant of the traveling salesman problem (TSP) to maximize the overall semantic relatedness of the context to be disambiguated. Ant colony optimization, a robust nature-inspired algorithm, was used in a reinforcement learning manner to solve the formulated TSP. We propose a novel measure based on the Lesk algorithm and Vector Space Model to calculate semantic relatedness. Our approach to WSD is comparable to state-of-the-art knowledge-based and unsupervised methods for benchmark datasets. In addition, we show that the combination of knowledge-based methods is superior to the most frequent sense heuristic and significantly reduces the difference between knowledge-based and supervised methods. The proposed approach could be customized for other lexical disambiguation tasks, such as Lexical Substitution or Word Domain Disambiguation."} {"_id": "3b574e22e2151b63a224e3e4f41387d6542210f4", "title": "An improved k-medoids clustering algorithm", "text": "In this paper, we present an improved k-medoids clustering algorithm based on CF-Tree. Based on the clustering features of the BIRCH algorithm, the concept of the k-medoids algorithm has been improved. We preserve all the training sample data in a CF-Tree, then use the k-medoids method to cluster the CFs in the leaf nodes of the CF-Tree. Eventually, we can get k clusters from the root of the CF-Tree. This algorithm clearly mitigates the drawbacks of the k-medoids algorithm, such as its time complexity, its poor scalability on large datasets, and its inability to find clusters whose sizes differ greatly or whose shapes are not convex. Experiments show that this algorithm enhances the quality and scalability of clustering."} {"_id": "82ad99ac9fdeb42bb01248e67d28ee05d9b33ca8", "title": "A Prospective, Open-Label Study of Hyaluronic Acid-Based Filler With Lidocaine (VYC-15L) Treatment for the Correction of Infraorbital Skin Depressions.", "text": "BACKGROUND\nInfraorbital skin depressions are one of the most troublesome facial areas for aesthetically aware patients.\n\n\nOBJECTIVE\nEvaluate effectiveness and safety of Juv\u00e9derm Volbella with Lidocaine (VYC-15L; Allergan plc, Dublin, Ireland) for correction of bilateral infraorbital depressions.\n\n\nMETHODS\nIn this 12-month, prospective, uncontrolled, open-label study, subjects aged \u226518 years with infraorbital depressions rated \u22651 on the Allergan Infra-oRbital Scale (AIRS) received injections of VYC-15L with optional touch-up treatment on Day 14. The primary efficacy measure was \u22651 AIRS grade improvement from baseline at month 1.\n\n\nRESULTS\nOf 80 subjects initially treated with VYC-15L, 75 (94%) completed the study. All injections were intentionally deep, most using the multiple microbolus technique. At 1 month, 99.3% of eyes achieved \u22651 AIRS grade improvement.
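A sketch of the CF-Tree-plus-k-medoids idea in the clustering abstract above, using scikit-learn's BIRCH as the CF-Tree and a plain alternating k-medoids over the leaf subcluster centroids; the paper's exact tree construction and update rules may differ.

import numpy as np
from sklearn.cluster import Birch

def k_medoids(X, k, iters=20, seed=0):
    """Simple alternating k-medoids; adequate for the small centroid set."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                # new medoid minimizes total distance within its cluster
                medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(0))]
    return X[medoids], labels

rng = np.random.default_rng(4)
data = np.vstack([rng.normal(c, 0.3, size=(500, 2)) for c in (0, 3, 6)])
cf = Birch(n_clusters=None, threshold=0.5).fit(data)   # CF-Tree compression
centroids = cf.subcluster_centers_                     # CF leaf entries
centers, _ = k_medoids(centroids, k=3)
print(len(centroids), centers.round(2))

Running k-medoids on a few dozen CF centroids instead of all raw points is what buys the scalability the abstract claims.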
The responder rate (subjects with \u22651 AIRS grade improvement in both eyes) was 99% at month 1, 92% at month 6, and 54% at month 12. Most injection site reactions (e.g., bruising, redness, irregularities/bumps) were mild and resolved by day 14. Late-onset mild to moderate edema was observed in 11% of eyes at month 6 and in 4% of eyes at month 12.\n\n\nCONCLUSION\nVYC-15L is effective and safe for the treatment of infraorbital depressions, with effectiveness lasting up to 12 months."} {"_id": "6a31bef6a2ae5a6440c981cd818271f6eab1b628", "title": "Spreadsheet Validation and Analysis through Content Visualization", "text": "Visualizing spreadsheet content provides analytic insight and visual validation of large amounts of spreadsheet data. Oculus Excel Visualizer is a point-and-click data visualization experiment which directly visualizes Excel data and re-uses the layout and formatting already present in the spreadsheet."} {"_id": "f32a35f6c0bae011aae40589488645b30dcb971e", "title": "A power-efficient 4-2 Adder Compressor topology", "text": "Addition is the most used arithmetic operation in Digital Signal Processing (DSP) algorithms, such as filters, transforms and predictions. These algorithms are increasingly present in audio and video processing of battery-powered mobile devices having, therefore, energy constraints. In the context of the addition operation, the efficient 4-2 adder compressor is capable of performing four additions simultaneously. This higher order of parallelism reduces the critical path and the internal glitching, thus mainly reducing the dynamic power dissipation. This work proposes two CMOS+ gate-based topologies to further reduce the power, area and delay of the 4-2 adder compressor. The proposed CMOS+ 4-2 adder compressor circuit topologies were implemented with the Cadence Virtuoso tool at 45 nm technology and simulated at both the electrical and layout levels. The results show that a proper choice of gates in the 4-2 adder compressor realization can reduce the power, delay and area by about 22.41%, 32.45% and 7.4%, respectively, when compared with the literature."} {"_id": "7cb9749578fcd1bdf8f238629da42fe75fcea426", "title": "IoTScanner: Detecting and Classifying Privacy Threats in IoT Neighborhoods", "text": "In the context of the emerging Internet of Things (IoT), a proliferation of wireless connectivity can be expected. That ubiquitous wireless communication will be hard to centrally manage and control, and can be expected to be opaque to end users. As a result, owners and users of physical space risk losing control over their digital environments. In this work, we propose the idea of an IoTScanner. The IoTScanner integrates a range of radios to allow local reconnaissance of existing wireless infrastructure and participating nodes. It enumerates such devices, identifies connection patterns, and provides valuable insights for technical support and home users alike. Using our IoTScanner, we attempt to classify actively streaming IP cameras from other non-camera devices using simple heuristics. We show that our classification approach achieves a high accuracy in an IoT setting consisting of a large number of IoT devices. While related work usually focuses either on detecting the infrastructure or on eavesdropping on traffic from a specific node, we focus on providing a general overview of operations in all observed networks.
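The defining property of the 4-2 adder compressor discussed above is the arithmetic identity x1+x2+x3+x4+cin = sum + 2*(carry+cout); a common realization cascades two full adders, which can be verified exhaustively (a gate-level sketch, not the paper's CMOS+ topologies):

from itertools import product

def full_adder(a, b, c):
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def compressor_4_2(x1, x2, x3, x4, cin):
    """4-2 compressor from two cascaded full adders (a common realization):
    guarantees x1+x2+x3+x4+cin == sum + 2*(carry + cout)."""
    s1, cout = full_adder(x1, x2, x3)
    s, carry = full_adder(s1, x4, cin)
    return s, carry, cout

# Exhaustively verify the defining identity over all 32 input combinations.
for bits in product((0, 1), repeat=5):
    s, carry, cout = compressor_4_2(*bits)
    assert sum(bits) == s + 2 * (carry + cout)
print("4-2 compressor identity holds for all 32 input combinations")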
We do not assume prior knowledge of used SSIDs, preshared passwords, or similar."} {"_id": "38c74ee4aa8b8069f39ce765ee6063423e3c146b", "title": "A new intra-disk redundancy scheme for high-reliability RAID storage systems in the presence of unrecoverable errors", "text": "Today's data storage systems are increasingly adopting low-cost disk drives that have higher capacity but lower reliability, leading to more frequent rebuilds and to a higher risk of unrecoverable media errors. We propose an efficient intradisk redundancy scheme to enhance the reliability of RAID systems. This scheme introduces an additional level of redundancy inside each disk, on top of the RAID redundancy across multiple disks. The RAID parity provides protection against disk failures, whereas the proposed scheme aims to protect against media-related unrecoverable errors. In particular, we consider an intradisk redundancy architecture that is based on an interleaved parity-check coding scheme, which incurs only negligible I/O performance degradation. A comparison between this coding scheme and schemes based on traditional Reed-Solomon codes and single-parity-check codes is conducted by analytical means. A new model is developed to capture the effect of correlated unrecoverable sector errors. The probability of an unrecoverable failure associated with these schemes is derived for the new correlated model, as well as for the simpler independent error model. We also derive closed-form expressions for the mean time to data loss of RAID-5 and RAID-6 systems in the presence of unrecoverable errors and disk failures. We then combine these results to characterize the reliability of RAID systems that incorporate the intradisk redundancy scheme. Our results show that in the practical case of correlated errors, the interleaved parity-check scheme provides the same reliability as the optimum, albeit more complex, Reed-Solomon coding scheme. Finally, the I/O and throughput performances are evaluated by means of analysis and event-driven simulation."} {"_id": "164ad83f36c50170351e3c0b58731e53ecd4b82c", "title": "Understanding and improving recurrent networks for human activity recognition by continuous attention", "text": "Deep neural networks, including recurrent networks, have been successfully applied to human activity recognition. Unfortunately, the final representation learned by recurrent networks might encode some noise (irrelevant signal components, unimportant sensor modalities, etc.). Besides, it is difficult to interpret the recurrent networks to gain insight into the models' behavior. To address these issues, we propose two attention models for human activity recognition: temporal attention and sensor attention. These two mechanisms adaptively focus on important signals and sensor modalities. To further improve the understandability and mean F1 score, we add continuity constraints, considering that continuous sensor signals are more robust than discrete ones. We evaluate the approaches on three datasets and obtain state-of-the-art results. Furthermore, qualitative analysis shows that the attention learned by the models agrees well with human intuition."} {"_id": "ef05eeab9d356214d928fadafd5ce2b5c52ef4ff", "title": "WASSA-2017 Shared Task on Emotion Intensity", "text": "We present the first shared task on detecting the intensity of emotion felt by the speaker of a tweet. We create the first datasets of tweets annotated for anger, fear, joy, and sadness intensities using a technique called best\u2013worst scaling (BWS). 
We show that the annotations lead to reliable fine-grained intensity scores (rankings of tweets by intensity). The data was partitioned into training, development, and test sets for the competition. Twenty-two teams participated in the shared task, with the best system obtaining a Pearson correlation of 0.747 with the gold intensity scores. We summarize the machine learning setups, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful for the task. The emotion intensity dataset and the shared task are helping improve our understanding of how we convey more or less intense emotions through language."} {"_id": "2563efea0a5acc4eae586632c94cca490428658d", "title": "The Structure and Dynamics of Co-Citation Clusters: A Multiple-Perspective Co-Citation Analysis", "text": "A multiple-perspective co-citation analysis method is introduced for characterizing and interpreting the structure and dynamics of co-citation clusters. The method facilitates analytic and sense making tasks by integrating network visualization, spectral clustering, automatic cluster labeling, and text summarization. Co-citation networks are decomposed into co-citation clusters. The interpretation of these clusters is augmented by automatic cluster labeling and summarization. The method focuses on the interrelations between a co-citation cluster\u2019s members and their citers. The generic method is applied to a three-part analysis of the field of Information Science as defined by 12 journals published between 1996 and 2008: 1) a comparative author co-citation analysis (ACA), 2) a progressive ACA of a time series of co-citation networks, and 3) a progressive document co-citation analysis (DCA). Results show that the multiple-perspective method increases the interpretability and accountability of both ACA and DCA networks."} {"_id": "6659f09d0b458dda15d6d990f216ba2b19b9fa00", "title": "Global consequences of land use.", "text": "Land use has generally been considered a local environmental issue, but it is becoming a force of global importance. Worldwide changes to forests, farmlands, waterways, and air are being driven by the need to provide food, fiber, water, and shelter to more than six billion people. Global croplands, pastures, plantations, and urban areas have expanded in recent decades, accompanied by large increases in energy, water, and fertilizer consumption, along with considerable losses of biodiversity. Such changes in land use have enabled humans to appropriate an increasing share of the planet's resources, but they also potentially undermine the capacity of ecosystems to sustain food production, maintain freshwater and forest resources, regulate climate and air quality, and ameliorate infectious diseases. We face the challenge of managing trade-offs between immediate human needs and maintaining the capacity of the biosphere to provide goods and services in the long term."} {"_id": "ac76c7e2624473dfd77fa350adf11b223655a34d", "title": "Plasmacytoid dendritic cells sense self-DNA coupled with antimicrobial peptide", "text": "Plasmacytoid dendritic cells (pDCs) sense viral and microbial DNA through endosomal Toll-like receptors to produce type 1 interferons. pDCs do not normally respond to self-DNA, but this restriction seems to break down in human autoimmune disease by an as yet poorly understood mechanism. 
Here we identify the antimicrobial peptide LL37 (also known as CAMP) as the key factor that mediates pDC activation in psoriasis, a common autoimmune disease of the skin. LL37 converts inert self-DNA into a potent trigger of interferon production by binding the DNA to form aggregated and condensed structures that are delivered to and retained within early endocytic compartments in pDCs to trigger Toll-like receptor 9. Thus, our data uncover a fundamental role of an endogenous antimicrobial peptide in breaking innate tolerance to self-DNA and suggest that this pathway may drive autoimmunity in psoriasis."} {"_id": "046bf6fb90438335eaee07594855efbf541a8aba", "title": "Urban Computing: Concepts, Methodologies, and Applications", "text": "Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social applications, the economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four areas: urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community."} {"_id": "44808fd8f2ffd19bb266708b8de835c28f5b8596", "title": "RaceTrack: efficient detection of data race conditions via adaptive tracking", "text": "Bugs due to data races in multithreaded programs often exhibit non-deterministic symptoms and are notoriously difficult to find. This paper describes RaceTrack, a dynamic race detection tool that tracks the actions of a program and reports a warning whenever a suspicious pattern of activity has been observed. RaceTrack uses a novel hybrid detection algorithm and employs an adaptive approach that automatically directs more effort to areas that are more suspicious, thus providing more accurate warnings for much less overhead. A post-processing step correlates warnings and ranks code segments based on how strongly they are implicated in potential data races. We implemented RaceTrack inside the virtual machine of Microsoft's Common Language Runtime (product version v1.1.4322) and monitored several major, real-world applications directly out of the box, without any modification. Adaptive tracking resulted in a slowdown ratio of about 3x on memory-intensive programs and typically much less than 2x on other programs, and a memory ratio of typically less than 1.2x. 
Several serious data race bugs were revealed, some previously unknown."} {"_id": "3be38924dd2a4b420338b3634e4eb7d7d22abbad", "title": "ESTIMATION METHODS FOR STOCHASTIC VOLATILITY MODELS : A SURVEY", "text": "Although stochastic volatility (SV) models have an intuitive appeal, their empirical application has been limited mainly due to difficulties involved in their estimation. The main problem is that the likelihood function is hard to evaluate. However, recently, several new estimation methods have been introduced and the literature on SV models has grown substantially. In this article, we review this literature. We describe the main estimators of the parameters and the underlying volatilities focusing on their advantages and limitations both from the theoretical and empirical point of view. We complete the survey with an application of the most important procedures to the S&P 500 stock price index."} {"_id": "6e09a291d61f0e26ce3522a1b0fce952fb811090", "title": "Generative Attention Model with Adversarial Self-learning for Visual Question Answering", "text": "Visual question answering (VQA) is arguably one of the most challenging multimodal understanding problems as it requires reasoning and deep understanding of the image, the question, and their semantic relationship. Existing VQA methods heavily rely on attention mechanisms to semantically relate the question words with the image contents for answering the related questions. However, most of the attention models are simplified as a linear transformation over the multimodal representation, which we argue is insufficient for capturing the complex nature of the multimodal data. In this paper we propose a novel generative attention model obtained by adversarial self-learning. The proposed adversarial attention produces more diverse visual attention maps and it is able to generalize the attention better to new questions. The experiments show the proposed adversarial attention leads to a state-of-the-art VQA model on the two VQA benchmark datasets, VQA v1.0 and v2.0."} {"_id": "970b4d2ed1249af97cdf2fffdc7b4beae458db89", "title": "HMDB: A large video database for human motion recognition", "text": "With nearly one billion online videos viewed every day, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. 
We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion."} {"_id": "47dd049e44c5e1c0e572284aa0e579900cf092a8", "title": "Automated detection of smuggled high-risk security threats using Deep Learning", "text": "The security infrastructure is ill-equipped to detect and deter the smuggling of non-explosive devices that enable terror attacks such as those recently perpetrated in western Europe. The detection of so-called \u201csmall metallic threats\u201d (SMTs) in cargo containers currently relies on statistical risk analysis, intelligence reports, and visual inspection of X-ray images by security officers. The latter is very slow and unreliable due to the difficulty of the task: objects potentially spanning less than 50 pixels have to be detected in images containing more than 2 million pixels against very complex and cluttered backgrounds. In this contribution, we demonstrate for the first time the use of Convolutional Neural Networks (CNNs), a type of Deep Learning, to automate the detection of SMTs in full-size X-ray images of cargo containers. Novel approaches for dataset augmentation allowed us to train CNNs from scratch despite the scarcity of available data. We report fewer than 6% false alarms when detecting 90% of SMTs synthetically concealed in stream-of-commerce images, which corresponds to an improvement of over an order of magnitude over conventional approaches such as Bag-of-Words (BoWs). The proposed scheme offers potentially super-human performance for a fraction of the time it would take for a security officer to carry out visual inspection (processing time is approximately 3.5s per container image)."} {"_id": "6aa12ca2bdfa18a8eac0c882b10d5c676efa05f7", "title": "Trends in Automotive Communication Systems", "text": "The use of networks for communications between the electronic control units (ECU) of a vehicle in production cars dates from the beginning of the 1990s. The specific requirements of the different car domains have led to the development of a large number of automotive networks such as Local Interconnect Network, J1850, CAN, TTP/C, FlexRay, media-oriented system transport, IDB1394, etc. This paper first introduces the context of in-vehicle embedded systems and, in particular, the requirements imposed on the communication systems. Then, a comprehensive review of the most widely used automotive networks, as well as the emerging ones, is given. Next, the current efforts of the automotive industry on middleware technologies, which may be of great help in mastering the heterogeneity, are reviewed. Finally, we highlight future trends in the development of automotive communication systems."} {"_id": "fa74238203e293e1a572b206d714729f6efd3444", "title": "Bank Customer Credit Scoring by Using Fuzzy Expert System", "text": "Granting banking facilities is one of the most important parts of the financial supplies for each bank. This activity is thus economically valuable but always carries a degree of risk. These days, various Artificial Intelligence systems, such as Neural Networks, Decision Trees, Logistic Regression Analysis, Linear Discriminant Analysis, etc., are used in the field of granting facilities, and each of these systems has its advantages and disadvantages. However, further study and work are needed to improve their accuracy and performance. 
In this article, among other AI methods, a fuzzy expert system is selected. This system is based on data and extracts its rules from the data, so the dependency on experts is removed while the interpretability of the rules is retained. The validity of these rules can be confirmed or rejected by banking experts. To investigate the performance of the proposed system, it and several other methods were run on various datasets. Results show that the proposed algorithm achieves the best performance among them."} {"_id": "b385debea84e4784f30747c9ec6fbbc7a17f1ec1", "title": "Semantics matters : cognitively plausible delineation of city centres from point of interest data", "text": "We sketch a workflow for cognitively plausible recognition of vague geographical concepts, such as a city centre. Our approach imitates a pedestrian strolling through the streets, and comparing his/her internal cognitive model of a city centre with the stimulus from the external world to decide whether he/she is in the city centre or outside. The cognitive model of a British city centre is elicited through an online questionnaire survey and used to delineate referents of city centre from point of interest data. We first compute a measure of \u2018city centre-ness\u2019 at each location within a city, and then merge the areas of high city centre-ness into a contiguous region. The process is illustrated on the example of the City of Bristol, and the computed city centre area for Bristol is evaluated by comparison to reference areas derived from alternative sources. The evaluation suggests that our approach performs well and produces a representation of a city centre that is near to people\u2019s conceptualisation. The benefits of our work are better (and user-driven) descriptions of complex geographical concepts. We see such models as a prerequisite for generalisation over large changes in detail, and for very specific purposes."} {"_id": "00ad8a79aaba6749789971972879bd591a7b23c5", "title": "Label Information Guided Graph Construction for Semi-Supervised Learning", "text": "In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into the state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning method. 
Experimental results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and is therefore more effective for semi-supervised learning tasks."} {"_id": "afa2696a7bb3ebde1cf3514382433a0b705b43d3", "title": "Flexible sensor for McKibben pneumatic actuator", "text": "A flexible electro-conductive rubber sensor for measuring the length of a McKibben pneumatic actuator has been developed. McKibben actuators are flexible, lightweight, and widely used to actuate power assisting devices for which compliance is required for safety. The actuator's length needs to be measured to control the devices accurately. However, the flexibility and lightweight properties might be lost if rigid sensors such as linear potentiometers or encoders are directly attached to the actuators. To keep the desirable properties of McKibben actuators, compact and flexible sensors are necessary. In this paper, a flexible sensor using electro-conductive flexible rubber is proposed to measure the length of McKibben actuators. Using this sensor, a higher accuracy can be obtained by measuring the circumferential displacement rather than directly measuring the axial displacement. The estimation accuracy is evaluated and the usefulness of the proposed method is verified by applying it to a multiple-link robot arm driven by McKibben actuators."} {"_id": "197c43315bdcec6785cb9834638140d9878ec131", "title": "Automated Intelligent Pilots for Combat Flight Simulation", "text": "TacAir-Soar is an intelligent, rule-based system that generates believable \u201chuman-like\u201d behavior for large scale, distributed military simulations. The innovation of the application is primarily a matter of scale and integration. The system is capable of executing most of the airborne missions that the United States military flies in fixed-wing aircraft. It accomplishes this by integrating a wide variety of intelligent capabilities, including real-time hierarchical execution of complex goals and plans, communication and coordination with humans and simulated entities, maintenance of situational awareness, and the ability to accept and respond to new orders while in flight. The system is currently deployed at the Oceana Naval Air Station WISSARD Lab and the Air Force Research Laboratory in Mesa, AZ. Its most dramatic use was in the Synthetic Theater of War 1997, which was an operational training exercise that ran for 48 continuous hours during which TacAir-Soar flew all U.S. fixed-wing aircraft."} {"_id": "3087289229146fc344560478aac366e4977749c0", "title": "THE INFORMATION CAPACITY OF THE HUMAN MOTOR SYSTEM IN CONTROLLING THE AMPLITUDE OF MOVEMENT", "text": "Information theory has recently been employed to specify more precisely than has hitherto been possible man's capacity in certain sensory, perceptual, and perceptual-motor functions (5, 10, 13, 15, 17, 18). The experiments reported in the present paper extend the theory to the human motor system. The applicability of only the basic concepts, amount of information, noise, channel capacity, and rate of information transmission, will be examined at this time. General familiarity with these concepts as formulated by recent writers (4, 11, 20, 22) is assumed. Strictly speaking, we cannot study man's motor system at the behavioral level in isolation from its associated sensory mechanisms. We can only analyze the behavior of the entire receptor-neural-effector system. 
How-"} {"_id": "0cd2285d00cc1337cc95ab120e558707b197862a", "title": "The mathematical theory of communication", "text": "T HE recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist 1 and Hartley2 on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information. The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure. The logarithmic measure is more convenient for various reasons:"} {"_id": "55ab2b325da10b4d46b69a7673e32823a9706a32", "title": "RoboCup: The Robot World Cup Initiative", "text": "The Robot World Cup Initiative (RoboCup) is an attempt to foster AI and intelligent robotics research by providing a standard problem where wide range of technologies can be integrated and examined. The rst RoboCup competition will be held at IJCAI-97, Nagoya. In order for a robot team to actually perform a soccer game, various technologies must be incorporated including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor-fusion. Unlike AAAI robot competition, which is tuned for a single heavyduty slow-moving robot, RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup's nal target is a world cup with real robots, RoboCup o ers a software platform for research on the software aspects of RoboCup. This paper describes technical challenges involved in RoboCup, rules, and simulation environment."} {"_id": "e37f60b230a6e7b6f4949cea85b4113aadaf4e0c", "title": "Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems Using Joint Intentions", "text": "One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. 
As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparative experiments in the real-world domain of electricity transportation management. Finally, the success of the approach of building a system with an explicit and grounded representation of cooperative problem solving is used to outline a proposal for the next generation of multi-agent systems."} {"_id": "021472c712d51f88b4175b7e2906ab11b7e28bb8", "title": "Laboratory Experiments for Tsunamis Generated by Underwater Landslides: Comparison with Numerical Modeling", "text": "Three-dimensional experiments and fully nonlinear computations are performed at the University of Rhode Island, to investigate tsunami generation by underwater landslides. Each experiment consists of a solid landslide of idealized smooth shape sliding over a plane slope. Surface elevations are measured using very accurate gages placed at strategic locations. Gage calibration is performed using a newly developed automated system. Landslide acceleration is measured with a micro-accelerometer. The repeatability of experiments is first investigated, and then by varying the initial depth of the landslide, different conditions of wave non-linearity and dispersion are generated and compared. The principle of numerical modeling, using an earlier developed model, is briefly explained. One application is presented and results compared to experiments. The agreement of computations with the latter is quite good. In the model, horizontal velocities are found quite non-uniform over depth above the moving landslide. This would preclude using a long wave model for such landslide tsunami wave generation."} {"_id": "12de03e2691c11d29a82f1c3fc7e97121c07cb5b", "title": "CopyCatch: stopping group attacks by spotting lockstep behavior in social networks", "text": "How can web services that depend on user generated content discern fraudulent input by spammers from legitimate input? In this paper we focus on the social network Facebook and the problem of discerning ill-gotten Page Likes, made by spammers hoping to turn a profit, from legitimate Page Likes. Our method, which we refer to as CopyCatch, detects lockstep Page Like patterns on Facebook by analyzing only the social graph between users and Pages and the times at which the edges in the graph (the Likes) were created. We offer the following contributions: (1) We give a novel problem formulation, with a simple concrete definition of suspicious behavior in terms of graph structure and edge constraints. (2) We offer two algorithms to find such suspicious lockstep behavior - one provably-convergent iterative algorithm and one approximate, scalable MapReduce implementation. (3) We show that our method severely limits \"greedy attacks\" and analyze the bounds from the application of the Zarankiewicz problem to our setting. 
Finally, we demonstrate and discuss the effectiveness of CopyCatch at Facebook and on synthetic data, as well as potential extensions to anomaly detection problems in other domains. CopyCatch is actively in use at Facebook, searching for attacks on Facebook's social graph of over a billion users, many millions of Pages, and billions of Page Likes."} {"_id": "1370b7ec6cb56b0ff25f512bd673acbab214708c", "title": "Automated concolic testing of smartphone apps", "text": "We present an algorithm and a system for generating input events to exercise smartphone apps. Our approach is based on concolic testing and generates sequences of events automatically and systematically. It alleviates the path-explosion problem by checking a condition on program executions that identifies subsumption between different event sequences. We also describe our implementation of the approach for Android, the most popular smartphone app platform, and the results of an evaluation that demonstrates its effectiveness on five Android apps."} {"_id": "541ff4c3fe4ca6bd9acf6b714e065c38d2584e53", "title": "SHEEP, GOATS, LAMBS and WOLVES: a statistical analysis of speaker performance in the NIST 1998 speaker recognition evaluation", "text": "Performance variability in speech and speaker recognition systems can be attributed to many factors. One major factor, which is often acknowledged but seldom analyzed, is inherent differences in the recognizability of different speakers. In speaker recognition systems such differences are characterized by the use of animal names for different types of speakers, including sheep, goats, lambs and wolves, depending on their behavior with respect to automatic recognition systems. In this paper we propose statistical tests for the existence of these animals and apply these tests to hunt for such animals using results from the 1998 NIST speaker recognition evaluation."} {"_id": "f7a6be26eff0698df6fcb6fdaad79715699fc8cd", "title": "Salient Band Selection for Hyperspectral Image Classification via Manifold Ranking", "text": "Saliency detection has been a hot topic in recent years, and many efforts have been devoted to this area. Unfortunately, the results of saliency detection can hardly be utilized in general applications. The primary reason, we think, is the unspecific definition of salient objects, which means that previously published methods cannot be extended to practical applications. To solve this problem, we claim that saliency should be defined in context, and salient band selection in hyperspectral images (HSI) is introduced as an example. Unfortunately, traditional salient band selection methods suffer from the problem of inappropriate measurement of band difference. To tackle this problem, we propose to eliminate the drawbacks of traditional salient band selection methods by manifold ranking. It puts the band vectors in a more accurate manifold space and treats the saliency problem from a novel ranking perspective, which we consider to be the main contribution of this paper. To justify the effectiveness of the proposed method, experiments are conducted on three HSIs, and our method is compared with six existing competitors. 
Results show that the proposed method is very effective and can achieve the best performance among the competitors."} {"_id": "407de9da58871cae7a6ded2f3a6162b9dc371f38", "title": "TraMNet - Transition Matrix Network for Efficient Action Tube Proposals", "text": "Current state-of-the-art methods solve spatio-temporal action localisation by extending 2D anchors to 3D-cuboid proposals on stacks of frames, to generate sets of temporally connected bounding boxes called action micro-tubes. However, they fail to consider that the underlying anchor proposal hypotheses should also move (transition) from frame to frame, as the actor or the camera do. Assuming we evaluate n 2D anchors in each frame, then the number of possible transitions from each 2D anchor to the next, for a sequence of f consecutive frames, is in the order of O(n^f), expensive even for small values of f. To avoid this problem we introduce a Transition-Matrix-based Network (TraMNet) which relies on computing transition probabilities between anchor proposals while maximising their overlap with ground truth bounding boxes across frames, and enforcing sparsity via a transition threshold. As the resulting transition matrix is sparse and stochastic, this reduces the proposal hypothesis search space from O(n^f) to the cardinality of the thresholded matrix. At training time, transitions are specific to cell locations of the feature maps, so that a sparse (efficient) transition matrix is used to train the network. At test time, a denser transition matrix can be obtained either by decreasing the threshold or by adding to it all the relative transitions originating from any cell location, allowing the network to handle transitions in the test data that might not have been present in the training data, and making detection translation-invariant. Finally, we show that our network is able to handle sparse annotations such as those available in the DALY dataset, while allowing for both dense (accurate) or sparse (efficient) evaluation within a single model. We report extensive experiments on the DALY, UCF101-24 and Transformed-UCF101-24 datasets to support our claims."} {"_id": "50e3b09c870dc93920f2e4ad5853e590c7b85ed7", "title": "Robust Scale Estimation in Real-Time Monocular SFM for Autonomous Driving", "text": "Scale drift is a crucial challenge for monocular autonomous driving to emulate the performance of stereo. This paper presents a real-time monocular SFM system that corrects for scale drift using a novel cue combination framework for ground plane estimation, yielding accuracy comparable to stereo over long driving sequences. Our ground plane estimation uses multiple cues like sparse features, dense inter-frame stereo and (when applicable) object detection. A data-driven mechanism is proposed to learn models from training data that relate observation covariances for each cue to error behavior of its underlying variables. During testing, this allows per-frame adaptation of observation covariances based on relative confidences inferred from visual data. Our framework significantly boosts not only the accuracy of monocular self-localization, but also that of applications like object localization that rely on the ground plane. 
Experiments on the KITTI dataset demonstrate the accuracy of our ground plane estimation, monocular SFM and object localization relative to ground truth, with detailed comparisons to prior art."} {"_id": "af82c83495c50c1603ff868c5335743fe286d144", "title": "The B2C group-buying model on the Internet", "text": "Internet-based group-buying has become a new growth point for China's e-commerce. By studying B2C group-buying model, this paper has collected data of 20 major Chinese group-buying websites, studied factors that might influence Internet users' group-buying behavior, proposed a new model for evaluating the value of Internet enterprises, and provided a feasible reference model for the evaluation and performance of Internet enterprises."} {"_id": "dbc7401e3e75c40d3c720e7db3c906d48bd742d7", "title": "Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection", "text": "Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score."} {"_id": "5b09c1880b6eca96b86a06d6a941a70c36623a23", "title": "Introducing qualitative research in psychology Adventures in theory and method", "text": "All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher or a licence from the Copyright Licensing Agency Limited. Details of such licences (for reprographic reproduction) may be obtained from the Copyright Introducing qualitative research in psychology: adventures in theory and method / Carla Willig. p. cm. 
Includes bibliographical references and index."} {"_id": "97d7a5321ded1563f07d233a874193a3c2e1ed01", "title": "Visual reaction time and high-speed ball games.", "text": "Laboratory measures of visual reaction time suggest that some aspects of high-speed ball games such as cricket are 'impossible' because there is insufficient time for the player to respond to unpredictable movements of the ball. Given the success with which some people perform these supposedly impossible acts, it has been assumed by some commentators that laboratory measures of reaction time are not applicable to skilled performers. An analysis of high-speed film of international cricketers batting on a specially prepared pitch which produced unpredictable movement of the ball is reported, and it is shown that, when batting, highly skilled professional cricketers show reaction times of around 200 ms, times similar to those found in traditional laboratory studies. Furthermore, professional cricketers take roughly as long as casual players to pick up ball flight information from film of bowlers. These two sets of results suggest that the dramatic contrast between the ability of skilled and unskilled sportsmen to act on the basis of visual information does not lie in differences in the speed of operation of the perceptual system. It lies in the organisation of the motor system that uses the output of the perceptual system."} {"_id": "517fa38cea1da39375b97ed1d6ab2f5398299fbd", "title": "Combining Confocal Imaging and Descattering", "text": "In translucent objects, light paths are affected by multiple scattering, which is polluting any observation. Confocal imaging reduces the influence of such global illumination effects by carefully focusing illumination and viewing rays from a large aperture to a specific location within the object volume. The selected light paths still contain some global scattering contributions, though. Descattering based on high frequency illumination serves the same purpose. It removes the global component from observed light paths. We demonstrate that confocal imaging and descattering are orthogonal and propose a novel descattering protocol that analyzes the light transport in a neighborhood of light transport paths. In combination with confocal imaging, our descattering method achieves optical sectioning in translucent media with higher contrast and better resolution."} {"_id": "cc0493c758e49c8bebeade01e3e685e2d370d90c", "title": "A Survey on Survey of Migration of Legacy Systems", "text": "Legacy systems are mission-critical, complex systems that are hard to maintain owing to a shortage of skill sets and a monolithic code architecture with tightly coupled tiers, all of which are indications of the obsolescence of the system technology. Thus, they have to be migrated to the latest technologies. Migration is an offspring of research in Software Engineering that is almost three decades old, and numerous publications have emerged on many topics in the migration domain, with focus areas of code migration, architecture migration, case studies on migration, and effort estimation for migration. In addition, various survey works surveying different aspects of migration have also emerged. This paper provides a survey of these survey works in order to consolidate them. 
As an outcome of this survey of surveys on migration, a road map of migration evolution, from the early stages to recent works and comprising the significant milestones in migration research, is presented in this work."} {"_id": "35e83667966d56389f7d69cf1d434a14d566bf84", "title": "Distance Preserving Dimension Reduction for Manifold Learning", "text": "Manifold learning is an effective methodology for extracting nonlinear structures from high-dimensional data with many applications in image analysis, computer vision, text data analysis and bioinformatics. The focus of this paper is on developing algorithms for reducing the computational complexity of manifold learning algorithms, in particular, we consider the case when the number of features is much larger than the number of data points. To handle the large number of features, we propose a preprocessing method, distance preserving dimension reduction (DPDR). It produces t-dimensional representations of the high-dimensional data, where t is the rank of the original dataset. It exactly preserves the Euclidean L2-norm distances as well as cosine similarity measures between data points in the original space. With the original data projected to the t-dimensional space, manifold learning algorithms can be executed to obtain lower dimensional parameterizations with substantial reduction in computational cost. Our experimental results illustrate that DPDR significantly reduces computing time of manifold learning algorithms and produces low-dimensional parameterizations as accurate as those obtained from the original datasets."} {"_id": "f6fec61221604359e65ccacc98030d900c5b3496", "title": "Quantum Field Theory in Curved Spacetime", "text": "We review the mathematically rigorous formulation of the quantum theory of a linear field propagating in a globally hyperbolic spacetime. This formulation is accomplished via the algebraic approach, which, in essence, simultaneously admits all states in all possible (unitarily inequivalent) Hilbert space constructions. The physically nonsingular states are restricted by the requirement that their two-point function satisfy the Hadamard condition, which insures that the ultra-violet behavior of the state be similar to that of the vacuum state in Minkowski spacetime, and that the expected stress-energy tensor in the state be finite. We briefly review the Unruh and Hawking effects from the perspective of the theoretical framework adopted here. A brief discussion also is given of several open issues and questions in quantum field theory in curved spacetime regarding the treatment of \u201cbackreaction\u201d, the validity of some version of the \u201caveraged null energy condition\u201d, and the formulation and properties of quantum field theory in causality violating"} {"_id": "642c1b4a9da95ea4239708afc5929a5007a1870d", "title": "Tensor2Tensor for Neural Machine Translation", "text": "Tensor2Tensor is a library for deep learning models that is very well-suited for neural machine translation and includes the reference implementation of the state-of-the-art Transformer model. 1 Neural Machine Translation Background. Machine translation using deep neural networks achieved great success with sequence-to-sequence models Sutskever et al. (2014); Bahdanau et al. (2014); Cho et al. (2014) that used recurrent neural networks (RNNs) with LSTM cells Hochreiter and Schmidhuber (1997). 
The basic sequence-to-sequence architecture is composed of an RNN encoder which reads the source sentence one token at a time and transforms it into a fixed-sized state vector. This is followed by an RNN decoder, which generates the target sentence, one token at a time, from the state vector. While a pure sequence-to-sequence recurrent neural network can already obtain good translation results Sutskever et al. (2014); Cho et al. (2014), it suffers from the fact that the whole input sentence needs to be encoded into a single fixed-size vector. This clearly manifests itself in the degradation of translation quality on longer sentences and was partially overcome in Bahdanau et al. (2014) by using a neural model of attention. Convolutional architectures have been used to obtain good results in word-level neural machine translation starting from Kalchbrenner and Blunsom (2013) and later in Meng et al. (2015). These early models used a standard RNN on top of the convolution to generate the output, which creates a bottleneck and hurts performance. Fully convolutional neural machine translation without this bottleneck was first achieved in Kaiser and Bengio (2016) and Kalchbrenner et al. (2016). The model in Kaiser and Bengio (2016) (Extended Neural GPU) used a recurrent stack of gated convolutional layers, while the model in Kalchbrenner et al. (2016) (ByteNet) did away with recursion and used left-padded convolutions in the decoder. This idea, introduced in WaveNet van den Oord et al. (2016), significantly improves efficiency of the model. The same technique was improved in a number of neural translation models recently, including Gehring et al. (2017) and Kaiser et al. (2017)."} {"_id": "108652cc6c3cdfd754ac8622794d85e945996b3c", "title": "Semantics in Visual Information Retrieval", "text": "Visual information retrieval systems have entered a new era. First-generation systems allowed access to images and videos through textual data. 1,2 Typical searches for these systems include, for example, \"all images of paintings of the Florentine school of the 15th century\" or \"all images by Cezanne with landscapes.\" Such systems expressed information through alphanumeric keywords or scripts. They employed representation schemes like relational models, frame models, and object-oriented models. On the other hand, current-generation retrieval systems support full retrieval by visual content. 3,4 Access to visual information is not only performed at a conceptual level, using keywords as in the textual domain, but also at a perceptual level, using objective measurements of visual content. In these systems, image processing, pattern recognition, and computer vision constitute an integral part of the system's architecture and operation. They objectively analyze pixel distribution and extract the content descriptors automatically from raw sensory data. Image content descriptors are commonly represented as feature vectors, whose elements correspond to significant parameters that model image attributes. Therefore, visual attributes are regarded as points in a multidimensional feature space, where point closeness reflects feature similarity. These advances (for comprehensive reviews of the field, see the \"Further Reading\" sidebar) have paved the way for third-generation systems, featuring full multimedia data management and networking support. 
Forthcoming standards such as MPEG-4 and MPEG-7 (see the Nack and Lindsay article in this issue) provide the framework for efficient representation, processing, and retrieval of visual information. Yet many problems must still be addressed and solved before these technologies can emerge. An important issue is the design of indexing structures for efficient retrieval from large, possibly distributed, multimedia data repositories. To achieve this goal, image and video content descriptors can be internally organized and accessed through multidimensional index structures. 5 A second key problem is to bridge the semantic gap between the system and users. That is, devise representations capturing visual content at high semantic levels especially relevant for retrieval tasks. Specifically, automatically obtaining a representation of high-level visual content remains an open issue. Virtually all the systems based on automatic storage and retrieval of visual information proposed so far use low-level perceptual representations of pictorial data, which have limited semantics. Building up a representation proves tantamount to defining a model of the world, possibly through a formal description language, whose semantics capture only a few \u2026"} {"_id": "09d2f9f868a70e59d06fcf7bf721fbc9d62ccd06", "title": "A Dynamic Method to Forecast the Wheel Slip for Antilock Braking System and Its Experimental Evaluation", "text": "The control of an antilock braking system (ABS) is a difficult problem due to its strongly nonlinear and uncertain characteristics. To overcome this difficulty, the integration of gray-system theory and sliding-mode control is proposed in this paper. This way, the prediction capabilities of the former and the robustness of the latter are combined to regulate optimal wheel slip depending on the vehicle forward velocity. The design approach described is novel, considering that a point, rather than a line, is used as the sliding control surface. The control algorithm is derived and subsequently tested on a quarter vehicle model. Encouraged by the simulation results indicating the ability to overcome the stated difficulties with fast convergence, experimental results are carried out on a laboratory setup. The results presented indicate the potential of the approach in handling difficult real-time control problems."} {"_id": "64305508a53cc99e62e6ff73592016d0b994afd4", "title": "A survey of RDF data management systems", "text": "RDF is increasingly being used to encode data for the semantic web and data exchange. There have been a large number of works that address RDF data management following different approaches. In this paper we provide an overview of these works. This review considers centralized solutions (what are referred to as warehousing approaches), distributed solutions, and the techniques that have been developed for querying linked data. In each category, further classifications are provided that would assist readers in understanding the identifying characteristics of different approaches."} {"_id": "254ded254065f2d26ca24ec024cefd7604bd74e7", "title": "Efficient Parallel Graph Exploration on Multi-Core CPU and GPU", "text": "Graphs are a fundamental data representation that has been used extensively in various domains. In graph-based applications, a systematic exploration of the graph such as a breadth-first search (BFS) often serves as a key component in the processing of their massive data sets. 
In this paper, we present a new method for implementing the parallel BFS algorithm on multi-core CPUs which exploits a fundamental property of randomly shaped real-world graph instances. By utilizing memory bandwidth more efficiently, our method shows improved performance over the current state-of-the-art implementation and increases its advantage as the size of the graph increases. We then propose a hybrid method which, for each level of the BFS algorithm, dynamically chooses the best implementation from: a sequential execution, two different methods of multicore execution, and a GPU execution. Such a hybrid approach provides the best performance for each graph size while avoiding poor worst-case performance on high-diameter graphs. Finally, we study the effects of the underlying architecture on BFS performance by comparing multiple CPU and GPU systems; a high-end GPU system performed as well as a quad-socket high-end CPU system."} {"_id": "30b41155d8c4dc7abc8eb557fd391d2a87640ac5", "title": "Optimising Results from Minimal Access Cranial Suspension Lifting (MACS-Lift)", "text": "Between November 1999 and February 2005, 450 minimal access cranial suspension (MACS) lifts were performed. Starting with the idea of suspension for sagging soft tissues using permanent purse-string sutures, a new comprehensive approach to facial rejuvenation was developed in which the vertical vector appeared to be essential. The neck is corrected by extended submental liposuction and strong vertical traction on the lateral part of the platysma by means of a first vertical purse-string suture. The volume of the jowls and the cheeks is repositioned in a cranial direction with a second, slightly oblique purse-string suture. The descent of the midface is corrected by suspending the malar fat pad in a nearly vertical direction. In 23 cases (5.1%), the result in the neck was unsatisfactory, and additional work had to be done secondarily, or in later cases, primarily. The problem that appeared was unsatisfactory correction of platysmal bands (resolved with an additional anterior cervicoplasty) or vertical skin folds that appeared in the infralobular region (corrected with an additional posterior cervicoplasty). This article describes two ancillary procedures that, although not frequently necessary, can optimise the result of MACS lifting."} {"_id": "d247b14fef93a2d6e87a555b0b992a41996a387f", "title": "A Survey of Mobile Cloud Computing Applications: Perspectives and Challenges", "text": "As mobile computing has developed over decades, a new model for mobile computing, namely, mobile cloud computing, has emerged from the marriage of powerful yet affordable mobile devices and cloud computing. In this paper we survey existing mobile cloud computing applications, as well as speculate on future-generation mobile cloud computing applications. We provide insights for the enabling technologies and challenges that lie ahead for us to move forward from mobile computing to mobile cloud computing for building the next generation of mobile cloud applications. For each of the challenges, we provide a survey of existing solutions, identify research gaps, and suggest future research areas."} {"_id": "4a017a4b92d76b1706f156a42fde89a37762a067", "title": "Human activity recognition using inertial sensors with invariance to sensor orientation", "text": "This work deals with the task of human daily activity recognition using miniature inertial sensors. 
The proposed method reduces sensitivity to the position and orientation of the sensor on the body, which is inherent in traditional methods, by transforming the observed signals to a \u201cvirtual\u201d sensor orientation. By means of this computationally low-cost transform, the inputs to the classification algorithm are made invariant to sensor orientation, despite the signals being recorded from arbitrary sensor placements. Classification results show that improved performance, in terms of both precision and recall, is achieved with the transformed signals, relative to classification using raw sensor signals, and that the algorithm performs competitively with the state of the art. Activity recognition using data from a sensor with completely unknown orientation is shown to perform very well over a long-term recording in a real-life setting."} {"_id": "721054a766693d567ebebe36258eb7882540fc7e", "title": "A simple circularly polarized loop tag antenna for increased reading distance", "text": "A simple circularly polarized (CP) loop tag antenna is proposed to overcome the polarization mismatch problem between reader antenna and tag antenna. Good CP radiation can be achieved by introducing an open gap onto the square loop. Also, the technique of loading a matching stub across the RFID chip is applied to the proposed loop tag antenna, and desirable impedance matching between the antenna and RFID chip can be realized by tuning the matching stub. The proposed loop tag antenna shows good reading distance performance, and its measured reading distance reaches 17.6 m."} {"_id": "5e23ba80d737e3f5aacccd9e32fc3de4887accec", "title": "Business artifacts: An approach to operational specification", "text": "Any business, no matter what physical goods or services it produces, relies on business records. It needs to record details of what it produces in terms of concrete information. Business artifacts are a mechanism to record this information in units that are concrete, identifiable, self-describing, and indivisible. We developed the concept of artifacts, or semantic objects, in the context of a technique for constructing formal yet intuitive operational descriptions of a business. This technique, called OpS (Operational Specification), was developed over the course of many business-transformation and business-process-integration engagements for use in IBM\u2019s internal processes as well as for use with customers. Business artifacts (or business records) are the basis for the factorization of knowledge that enables the OpS technique. In this paper we present a comprehensive discussion of business artifacts\u2014what they are, how they are represented, and the role they play in operational business modeling. Unlike the more familiar and popular concept of business objects, business artifacts are pure instances rather than instances of a taxonomy of types. Consequently, the key operation on business artifacts is recognition rather than classification."} {"_id": "01f187c3f0390123e70e01f824101bf771e76b8f", "title": "Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies", "text": "Besides attracting a billion-dollar economy, Bitcoin revolutionized the field of digital currencies and influenced many adjacent areas. This also induced significant scientific interest. In this survey, we unroll and structure the manifold results and research directions. We start by introducing the Bitcoin protocol and its building blocks.
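To make one of those building blocks concrete, here is a toy proof-of-work loop of the kind Bitcoin miners run; the 20-bit difficulty and single SHA-256 pass are simplifications for illustration (Bitcoin uses double SHA-256 against a full target):

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int = 20):
    """Toy proof-of-work: find a nonce such that SHA-256(header || nonce)
    has `difficulty_bits` leading zero bits. A simplified illustration of
    the mining building block, not Bitcoin's exact rules.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest_hex = mine(b"example block header")
print(nonce, digest_hex)
```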
From there we continue to explore the design space by discussing existing contributions and results. In the process, we deduce the fundamental structures and insights at the core of the Bitcoin protocol and its applications. As we show and discuss, many key ideas are likewise applicable in various other fields, so that their impact reaches far beyond Bitcoin itself."} {"_id": "17458e303096cb883d33c29a1f9dd646b8c319be", "title": "QoS-aware middleware for Web services composition", "text": "The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online business-to-business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining a significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different quality of service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services and the other based on global allocation of tasks to services using integer programming."} {"_id": "9cd71e490ea9113750a2512a0596c35a261c79dc", "title": "Cryptocurrencies, smart contracts, and artificial intelligence", "text": "Recent developments in \"cryptocurrencies\" and \"smart contracts\" are creating new opportunities for applying AI techniques. These economic technologies would benefit from greater real world knowledge and reasoning as they become integrated with everyday commerce. Cryptocurrencies and smart contracts may also provide an infrastructure for ensuring that AI systems follow specified legal and safety regulations as they become more integrated into human society."} {"_id": "a84627299366e60f3fa157ed7cfcf02d52b9e21b", "title": "To See or not to See \u2013 Innovative Display Technologies as Enablers for Ergonomic Cockpit Concepts", "text": "Introduction: Given that the driving task will change remarkably with the increase of assistance and information systems, innovative display technologies will play a central role in the ergonomic realization of driver-vehicle interaction in the future. Increasing assistance and even automation of the driving task do not lead to a decrease or disappearance of visual information; instead, they demand new and in some cases revolutionary concepts to close the loop between driver, vehicle and traffic environment. Augmenting information in contact-analog head-up displays for navigation and driver assistance is a very promising approach. Replacement of mirrors via camera monitor systems is a further example.
Freely programmable cluster instruments in combination with HUDs promise to resolve the problem of information density produced by an increasing amount of ADAS and IVIS functionality."} {"_id": "84ae50626d5315fe5b8bf93548fda4c3a152d885", "title": "An electrostatic gripper for flexible objects", "text": "We demonstrate a flexible, electrostatic adhesive gripper designed to controllably grasp and manipulate soft goods in space. The 8-fingered gripper has 50 cm2 of active electrodes operating at 3 kV. It generates electrostatic adhesion forces up to 3.5 N (0.70 kPa) on Ge-coated polyimide film and 1.2 N on MLI blanket, a film composite used for satellite thermal insulation. Extremely low-force gripper engagement (0.08 N) and release (0.04 N) of films is ideal for micro-gravity. Individual fingers generate shear adhesion forces up to 4.76 N (5.04 kPa) using electrostatic adhesive and 45.0 N (47.6 kPa) with a hybrid electrostatic/gecko adhesive. To simulate a satellite servicing task, the gripper was mounted on a 7-DoF robot arm and performed a supervised grasp, manipulate, and release sequence on a hanging, Al-coated PET film."} {"_id": "022a98d6cfcb29b803ceb47297a26e673af20434", "title": "The optimum received power levels of uplink non-orthogonal multiple access (NOMA) signals", "text": "Non-orthogonal multiple access (NOMA) has been recently considered as a promising multiple access technique for fifth generation (5G) mobile networks as an enabling technology to meet the demands of low latency, high reliability, massive connectivity, and high throughput. The two dominant types of NOMA are: power-domain and code-domain. The key feature of power-domain NOMA is to allow different users to share the same time, frequency, and code, but with different power levels. In code-domain NOMA, different spread-spectrum codes are assigned to different users and are then multiplexed over the same time-frequency resources. This paper concentrates on power-domain NOMA. In power-domain NOMA, Successive Interference Cancellation (SIC) is employed at the receiver. In this paper, the optimum received uplink power levels using a SIC detector are determined analytically for any number of transmitters. The optimum uplink received power levels using the SIC decoder in NOMA strongly resemble the \u03bc-law encoding used in pulse code modulation (PCM) speech companders."} {"_id": "453eab87d80e27eedddbf223303a947144655b27", "title": "Analysis, synthesis, and perception of voice quality variations among female and male talkers.", "text": "Voice quality variations include a set of voicing sound source modifications ranging from laryngealized to normal to breathy phonation. Analysis of reiterant imitations of two sentences by ten female and six male talkers has shown that the potential acoustic cues to this type of voice quality variation include: (1) increases to the relative amplitude of the fundamental frequency component as open quotient increases; (2) increases to the amount of aspiration noise that replaces higher frequency harmonics as the arytenoids become more separated; (3) increases to lower formant bandwidths; and (4) introduction of extra pole zeros in the vocal-tract transfer function associated with tracheal coupling. Perceptual validation of the relative importance of these cues for signaling a breathy voice quality has been accomplished using a new voicing source model for synthesis of more natural male and female voices. The new formant synthesizer, KLSYN88, is fully documented here.
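As a rough sketch of how a voicing source model can trade harmonic energy for aspiration noise (the controls of KLSYN88 itself are far richer), consider mixing a harmonic pulse train with white noise; every parameter value here is an illustrative assumption:

```python
import numpy as np

def breathy_source(f0=120.0, seconds=1.0, sr=16000, breathiness=0.3):
    """Toy voicing source: a sum of harmonics mixed with white noise.

    `breathiness` in [0, 1] shifts energy from harmonics to aspiration
    noise; a simplified stand-in for the open-quotient and noise controls
    of a full formant synthesizer such as KLSYN88.
    """
    t = np.arange(int(seconds * sr)) / sr
    # crude glottal source: a few harmonics with 1/k amplitude roll-off
    harmonics = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 9))
    noise = np.random.randn(t.size)
    return (1.0 - breathiness) * harmonics + breathiness * noise
```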
Results of the perception study indicate that, contrary to previous research which emphasizes the importance of increased amplitude of the fundamental component, aspiration noise is perceptually most important. Without its presence, increases to the fundamental component may induce the sensation of nasality in a high-pitched voice. Further results of the acoustic analysis include the observations that: (1) over the course of a sentence, the acoustic manifestations of breathiness vary considerably--tending to increase for unstressed syllables, in utterance-final syllables, and at the margins of voiceless consonants; (2) on average, females are more breathy than males, but there are very large differences between subjects within each gender; (3) many utterances appear to end in a \"breathy-laryngealized\" type of vibration; and (4) diplophonic irregularities in the timing of glottal periods occur frequently, especially at the end of an utterance. Diplophonia and other deviations from perfect periodicity may be important aspects of naturalness in synthesis."} {"_id": "75737d3b6cc483edb9f55c452e661c89ae2ffa60", "title": "FanStore: Enabling Efficient and Scalable I/O for Distributed Deep Learning", "text": "Emerging Deep Learning (DL) applications introduce heavy I/O workloads on computer clusters. The inherent long-lasting, repeated, and random file access pattern can easily saturate the metadata and data service and negatively impact other users. In this paper, we present FanStore, a transient runtime file system that optimizes DL I/O on existing hardware/software stacks. FanStore distributes datasets to the local storage of compute nodes, and maintains a global namespace. With the techniques of system call interception, distributed metadata management, and generic data compression, FanStore provides a POSIX-compliant interface with native hardware throughput in an efficient and scalable manner. Users do not have to make intrusive code changes to use FanStore and take advantage of the optimized I/O. Our experiments with benchmarks and real applications show that FanStore can scale DL training to 512 compute nodes with over 90% scaling efficiency."} {"_id": "0302dd22168fb3c22a57a6d5d05ba94ace0c20cf", "title": "BLE-Based Accurate Indoor Location Tracking for Home and Office", "text": "Nowadays the use of smart mobile devices, and the accompanying need for emerging services relying on indoor location-based services (LBS), is rapidly increasing. For more accurate location tracking using Bluetooth Low Energy (BLE), this paper proposes a novel trilateration-based algorithm and presents experimental results that demonstrate its effectiveness."} {"_id": "d1e8250f3306613d39f2fed435a44b0abb4a0936", "title": "Automated text summarisation and evidence-based medicine: A survey of two domains", "text": "The practice of evidence-based medicine (EBM) urges medical practitioners to utilise the latest research evidence when making clinical decisions. Because of the massive and growing volume of published research on various medical topics, practitioners often find themselves overloaded with information. As such, natural language processing research has recently commenced exploring medical domain-specific automated text summarisation (ATS) techniques, targeted towards the task of condensing large medical texts. However, the development of effective summarisation techniques for this task requires cross-domain knowledge.
We present a survey of EBM, the domain-specific needs for EBM, automated summarisation techniques, and how they have been applied hitherto. We envision that this survey will serve as a first resource for the development of future operational text summarisation techniques for EBM."} {"_id": "4f28c74c0db152862f9c35a0e16bba88e620f7c7", "title": "Semiautomatic White Blood Cell Segmentation Based on Multiscale Analysis", "text": "This paper presents novel methods to segment the nucleus and cytoplasm of white blood cells (WBC). This information is the basis to perform higher level tasks such as automatic differential counting, which plays an important role in the diagnosis of different diseases. We explore the image simplification and contour regularization resulting from the application of the self-dual multiscale morphological toggle (SMMT), an operator with scale-space properties. To segment the nucleus, the image preprocessing with SMMT has shown to be essential to ensure the accuracy of two well-known image segmentation techniques, namely, watershed transform and Level-Set methods. To identify the cytoplasm region, we propose two different schemes, based on granulometric analysis and on morphological transformations. The proposed methods have been successfully applied to a large number of images, showing promising segmentation and classification results for varying cell appearance and image quality, encouraging future works."} {"_id": "5b43b58ae00d6746ccca82c16ca39ca34973c167", "title": "Hemi-cylinder unwrapping algorithm of fish-eye image based on equidistant projection model", "text": "This paper presents a novel fish-eye image unwrapping algorithm based on the equidistant projection model. We discuss a framework to estimate the image distortion center using vanishing point extraction. Then we propose a fish-eye image unwrapping algorithm using hemi-cylinder projection in 3D space. Experimental results show that our algorithm is efficient and effective. In particular, the hemi-cylinder unwrapping results do not reduce the horizontal field of view, which is very useful for panoramic surveillance of important sites, compared with other fish-eye image correction methods."} {"_id": "0e7122bd7137fd166b278e34db3422d3174aedc7", "title": "III-nitride micro-emitter arrays : development and applications", "text": "III-nitride micro-emitter array technology was developed in the authors\u2019 laboratory around 1999. Since its inception, much progress has been made by several groups and the technology has led to the invention of several novel devices. This paper provides an overview on recent progress in single-chip ac-micro-size light emitting diodes (\u03bcLEDs) that can be plugged directly into standard high ac voltage power outlets, self-emissive microdisplays and interconnected \u03bcLEDs for boosting light emitting diodes\u2019 wall-plug efficiency, all of which evolved from III-nitride micro-emitter array technology. Finally, potential applications of III-nitride visible micro-emitter arrays as a light source for DNA microarrays and future prospects of III-nitride deep ultraviolet micro-emitter arrays for label-free protein analysis in microarray format by taking advantage of the direct excitation of intrinsic protein fluorescence are discussed.
"} {"_id": "131b3fa7f7c1f35fb81e2860523650750d6ff10e", "title": "Collaging on Internal Representations: An Intuitive Approach for Semantic Transfiguration", "text": "We present a novel CNN-based image editing method that allows the user to change the semantic information of an image over a user-specified region. Our method makes this possible by combining the idea of manifold projection with spatial conditional batch normalization (sCBN), a version of conditional batch normalization with user-specifiable spatial weight maps. With sCBN and manifold projection, our method lets the user perform (1) spatial class translation that changes the class of an object over an arbitrary region of user\u2019s choice, and (2) semantic transplantation that transplants semantic information contained in an arbitrary region of the reference image to an arbitrary region in the target image. These two transformations can be used simultaneously, and can realize a complex composite image-editing task like \u201cchange the nose of a beagle to that of a bulldog, and open her mouth\u201d. The user can also use our method with intuitive copy-paste-style manipulations. We demonstrate the power of our method on various images. Code will be available at https://github.com/pfnet-research/neural-collage."} {"_id": "e0f73e991514450bb0f14f799878d84adc8601f9", "title": "A study of deep convolutional auto-encoders for anomaly detection in videos", "text": "The detection of anomalous behaviors in automated video surveillance is a recurrent topic in recent computer vision research. Depending on the application field, anomalies can present different characteristics and challenges. Convolutional Neural Networks have achieved the state-of-the-art performance for object recognition in recent years, since they learn features automatically during the training process. From the anomaly detection perspective, the Convolutional Autoencoder (CAE) is an interesting choice, since it captures the 2D structure in image sequences during the learning process. This work uses a CAE in the anomaly detection context, by applying the reconstruction error of each frame as an anomaly score. By exploring the CAE architecture, we also propose a method for aggregating high-level spatial and temporal features with the input frames and investigate how they affect the CAE performance. An easy-to-use measure of video spatial complexity was devised and correlated with the classification performance of the CAE. The proposed methods were evaluated by means of several experiments with public-domain datasets. The promising results support further research in this area."} {"_id": "f3917c8fa5eecb67318bd43c37260560fae531a6", "title": "Novelty Detection via Topic Modeling in Research Articles", "text": "In today\u2019s world, redundancy is the most vital problem faced in almost all domains. Novelty detection is the identification of new or unknown data or signals that a machine learning system is not aware of during training. The problem becomes more intense when it comes to \u201cResearch Articles\u201d. A method of identifying novelty at each section of an article is highly required for determining the novel idea proposed in a research paper. Since research articles are semi-structured, detecting novelty of information from them requires more accurate systems.
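Where topic models are used for such analysis (as in the comparison below), a minimal LDA run with gensim looks like the following; the toy corpus is purely illustrative, and the Pachinko Allocation Model would have to be swapped in via another library:

```python
from gensim import corpora, models

# toy corpus of pre-tokenized documents (illustrative only)
docs = [["novelty", "detection", "research", "articles"],
        ["topic", "model", "latent", "dirichlet", "allocation"],
        ["hierarchical", "pachinko", "allocation", "model"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```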
Topic models provide a useful means to process them and a simple way to analyze them. This work compares the most predominantly used topic model, Latent Dirichlet Allocation, with the hierarchical Pachinko Allocation Model. The results obtained favour the hierarchical Pachinko Allocation Model when used for document retrieval."} {"_id": "6521560a99dd3c4abeec8ad9634e949d5a0e77cd", "title": "Making B+-Trees Cache Conscious in Main Memory", "text": "Previous research has shown that cache behavior is important for main memory index structures. Cache conscious index structures such as Cache Sensitive Search Trees (CSS-Trees) perform lookups much faster than binary search and T-Trees. However, CSS-Trees are designed for decision support workloads with relatively static data. Although B+-Trees are more cache conscious than binary search and T-Trees, their utilization of a cache line is low since half of the space is used to store child pointers. Nevertheless, for applications that require incremental updates, traditional B+-Trees perform well.\nOur goal is to make B+-Trees as cache conscious as CSS-Trees without increasing their update cost too much. We propose a new indexing technique called \u201cCache Sensitive B+-Trees\u201d (CSB+-Trees). It is a variant of B+-Trees that stores all the child nodes of any given node contiguously, and keeps only the address of the first child in each node. The rest of the children can be found by adding an offset to that address. Since only one child pointer is stored explicitly, the utilization of a cache line is high. CSB+-Trees support incremental updates in a way similar to B+-Trees.\nWe also introduce two variants of CSB+-Trees. Segmented CSB+-Trees divide the child nodes into segments. Nodes within the same segment are stored contiguously and only pointers to the beginning of each segment are stored explicitly in each node. Segmented CSB+-Trees can reduce the copying cost when there is a split since only one segment needs to be moved. Full CSB+-Trees preallocate space for the full node group and thus reduce the split cost. Our performance studies show that CSB+-Trees are useful for a wide range of applications."} {"_id": "c7aa8436c6c9536d53f2fcd24e79795ce8c6ea12", "title": "H-TCP : TCP for high-speed and long-distance networks", "text": "In this paper we present a congestion control protocol that is suitable for deployment in high-speed and long-distance networks. The new protocol, H-TCP, is shown to be fair when deployed in homogeneous networks, to be friendly when competing with conventional TCP sources, to rapidly respond to bandwidth as it becomes available, and to utilise link bandwidth in an efficient manner. Further, when deployed in conventional networks, H-TCP behaves as a conventional TCP-variant."} {"_id": "40bc7c8ff3d188cff3b234191d73098ec3dbb1d1", "title": "A client-side analysis of TLS usage in mobile apps", "text": "As mobile applications become more pervasive, they provide us with a variety of online services that range from social networking to banking and credit card management. Since many of these services involve communicating and handling private user information \u2013 and also due to increasing security demands from users \u2013 the use of TLS connections has become a necessity for today\u2019s mobile applications. However, improper use of TLS and failure to adhere to TLS security guidelines by app developers expose users to agents performing TLS interception, thus giving them a false sense of security.
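On the client side, correct TLS usage reduces to enabling certificate-chain and hostname verification; a minimal Python sketch of a properly validated connection (the hostname is an arbitrary example) is:

```python
import socket
import ssl

def fetch_https_cert(host: str, port: int = 443):
    """Open a TLS connection with full certificate and hostname
    verification enabled (the default context) and return the peer
    certificate. Apps that disable these checks are exactly the ones
    exposed to interception.
    """
    ctx = ssl.create_default_context()          # verifies chain + hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

print(fetch_https_cert("example.com")["subject"])
```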
Unfortunately, researchers and users alike lack information and easy-to-deploy mechanisms to analyze how securely mobile apps implement TLS. Hence, in order to understand and assess the security of mobile app communications, it is crucial to study their use of the TLS protocol. In this poster we present a method to study the use of TLS in mobile apps using the data provided by the ICSI Haystack app [2], a mobile measurement platform that enables on-device analysis of mobile traffic without requiring root access. The unique vantage point provided by the Haystack platform enables a variety of measurements from the edge of the network with real user workload and the added bonus of having contextual information on the device to supplement the data collection."} {"_id": "b510de361f440c2b3234077d7ad78deb4fefa27a", "title": "The State-of-the-Art in Twitter Sentiment Analysis: A Review and Benchmark Evaluation", "text": "Twitter has emerged as a major social media platform and generated great interest from sentiment analysis researchers. Despite this attention, state-of-the-art Twitter sentiment analysis approaches perform relatively poorly with reported classification accuracies often below 70%, adversely impacting applications of the derived sentiment information. In this research, we investigate the unique challenges presented by Twitter sentiment analysis and review the literature to determine how the devised approaches have addressed these challenges. To assess the state-of-the-art in Twitter sentiment analysis, we conduct a benchmark evaluation of 28 top academic and commercial systems in tweet sentiment classification across five distinctive data sets. We perform an error analysis to uncover the causes of commonly occurring classification errors. To further the evaluation, we apply select systems in an event detection case study. Finally, we summarize the key trends and takeaways from the review and benchmark evaluation and provide suggestions to guide the design of the next generation of approaches."} {"_id": "b1b99af0353c836ac44cc68c43e3918b0b12c5a2", "title": "A selectional auto-encoder approach for document image binarization", "text": "Binarization plays a key role in the automatic information retrieval from document images. This process is usually performed in the first stages of documents analysis systems, and serves as a basis for subsequent steps. Hence it has to be robust in order to allow the full analysis workflow to be successful. Several methods for document image binarization have been proposed so far, most of which are based on hand-crafted image processing strategies. Recently, Convolutional Neural Networks have shown amazing performance in many disparate tasks related to computer vision. In this paper we discuss the use of convolutional auto-encoders devoted to learning an end-to-end map from an input image to its selectional output, in which activations indicate the likelihood of pixels to be either foreground or background. Once trained, documents can therefore be binarized by parsing them through the model and applying a global threshold.
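That final thresholding step is a one-liner once the model emits per-pixel foreground likelihoods; here is a sketch assuming a Keras-style model object and a 0.5 threshold (both illustrative assumptions):

```python
import numpy as np

def binarize(model, image, threshold=0.5):
    """Run a trained selectional auto-encoder over a grayscale page and
    apply a global threshold to its per-pixel activations.

    `model` is any object with a Keras-like predict() returning values in
    [0, 1]; the 0.5 threshold is an assumption, typically tuned on a
    validation set.
    """
    x = image.astype("float32")[None, ..., None] / 255.0   # add batch/channel dims
    likelihood = model.predict(x)[0, ..., 0]
    return (likelihood > threshold).astype(np.uint8)       # 1 = foreground ink
```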
This approach has proven to outperform existing binarization strategies in a number of document types."} {"_id": "3611b5f7e169f24a9f9c0915ab21a7cc40009ea9", "title": "Converting Pairing-Based Cryptosystems from Composite-Order Groups to Prime-Order Groups", "text": "We develop an abstract framework that encompasses the key properties of bilinear groups of composite order that are required to construct secure pairing-based cryptosystems, and we show how to use prime-order elliptic curve groups to construct bilinear groups with the same properties. In particular, we define a generalized version of the subgroup decision problem and give explicit constructions of bilinear groups in which the generalized subgroup decision assumption follows from the decision Diffie-Hellman assumption, the decision linear assumption, and/or related assumptions in prime-order groups. We apply our framework and our prime-order group constructions to create more efficient versions of cryptosystems that originally required composite-order groups. Specifically, we consider the Boneh-Goh-Nissim encryption scheme, the Boneh-Sahai-Waters traitor tracing system, and the Katz-Sahai-Waters attribute-based encryption scheme. We give a security theorem for the prime-order group instantiation of each system, using assumptions of comparable complexity to those used in the composite-order setting. Our conversion of the last two systems to prime-order groups answers a problem posed by Groth and Sahai."} {"_id": "6162ab446003a91fc5d53c3b82739631c2e66d0f", "title": "Enzyme-Enhanced Microbial Fuel Cells", "text": ""} {"_id": "f87fd4e849775c8b1b1caa8432ca80f09c383923", "title": "Automating Intention Mining", "text": "Developers frequently discuss aspects of the systems they are developing online. The comments they post to discussions form a rich information source about the system. Intention mining, a process introduced by Di Sorbo et al., classifies sentences in developer discussions to enable further analysis. As one example of use, intention mining has been used to help build various recommenders for software developers. The technique introduced by Di Sorbo et al. to categorize sentences is based on linguistic patterns derived from two projects. The limited number of data sources used in this earlier work introduces questions about the comprehensiveness of intention categories and whether the linguistic patterns used to identify the categories are generalizable to developer discussion recorded in other kinds of software artifacts (e.g., issue reports). To assess the comprehensiveness of the previously identified intention categories and the generalizability of the linguistic patterns for category identification, we manually created a new dataset, categorizing 5,408 sentences from issue reports of four projects in GitHub. Based on this manual effort, we refined the previous categories. We assess Di Sorbo et al.\u2019s patterns on this dataset, finding that the accuracy rate achieved is low (0.31). To address the deficiencies of Di Sorbo et al.\u2019s patterns, we propose and investigate a convolution neural network (CNN)-based approach to automatically classify sentences into different categories of intentions. Our approach optimizes the CNN by integrating batch normalization to accelerate the training speed, and an automatic approach to tune appropriate CNN hyperparameters. Our approach achieves an accuracy of 0.84 on the new dataset, improving Di Sorbo et al.\u2019s approach by 171%.
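A small 1-D CNN of the kind described, with batch normalization folded in, might look like this Keras sketch; the layer sizes and number of intention categories are assumptions, not the paper's exact configuration:

```python
from tensorflow.keras import layers, models

def build_sentence_cnn(vocab_size=20000, max_len=60, num_classes=7):
    """A small 1-D CNN for sentence-level intention classification, with
    batch normalization to speed up training. All sizes are illustrative.
    """
    model = models.Sequential([
        layers.Embedding(vocab_size, 128, input_length=max_len),
        layers.Conv1D(128, 5),
        layers.BatchNormalization(),        # accelerates/stabilizes training
        layers.Activation("relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```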
We also apply our approach to an automated software engineering task, using it to rectify misclassified issue reports and thus reduce the bias introduced by such data to other studies. A case study on four open source projects with 2,076 issue reports shows that our approach achieves an average AUC score of 0.687, which improves other baselines by at least 16%."} {"_id": "463227cf41949ab09747fa890e3b05b375fc0b5b", "title": "Direction matters: hand pose estimation from local surface normals", "text": "We present a hierarchical regression framework for estimating hand joint positions from single depth images based on local surface normals. The hierarchical regression follows the tree-structured topology of the hand from wrist to fingertips. We propose a conditional regression forest, i.e. the Frame Conditioned Regression Forest (FCRF), which uses a new normal difference feature. At each stage of the regression, the frame of reference is established from either the local surface normal or previously estimated hand joints. By making the regression with respect to the local frame, the pose estimation is more robust to rigid transformations. We also introduce a new efficient approximation to estimate surface normals. We verify the effectiveness of our method by conducting experiments on two challenging real-world datasets and show consistent improvements over previous discriminative pose estimation methods."} {"_id": "b20001fb32dc5068afdd212c7a19c80a9d50eb3d", "title": "Estimation of induction motor double-cage model parameters from manufacturer data", "text": "This paper presents a numerical method for the estimation of induction motor double-cage model parameters from standard manufacturer data: full load mechanical power (rated power), full load reactive electrical power, breakdown torque, and starting current and torque. A model sensitivity analysis for the various electrical parameters shows that stator resistance is the least significant parameter. The nonlinear equations to be solved for the parameter determination are formulated as a minimization problem with restrictions. The method has been tested with 223 motors from different manufacturers, with an average value of the normalized residual error of 1.39\u00d710^-2. The estimated parameters of these motors are graphically represented as a function of the rated power."} {"_id": "9d3836aaf0c74efd5ec9aa3eda16989026ac9030", "title": "Approximate Inference with Amortised MCMC", "text": "We propose a novel approximate inference algorithm that approximates a target distribution by amortising the dynamics of a user-selected MCMC sampler. The idea is to initialise MCMC using samples from an approximation network, apply the MCMC operator to improve these samples, and finally use the samples to update the approximation network thereby improving its quality. This provides a new generic framework for approximate inference, allowing us to deploy highly complex, or implicitly defined approximation families with intractable densities, including approximations produced by warping a source of randomness through a deep neural network. Experiments consider image modelling with deep generative models as a challenging test for the method.
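The training loop implied by this scheme can be sketched generically as follows; the network, MCMC kernel, and squared-error matching rule are placeholders rather than the paper's exact choices:

```python
import torch

def amortised_mcmc_step(net, mcmc_kernel, opt, noise_dim=64, batch=128, k=5):
    """One update of amortised MCMC: draw samples from the approximation
    network, improve them with k MCMC transitions targeting the true
    distribution, then regress the network toward the improved samples.

    `mcmc_kernel(x)` stands for any user-selected MCMC operator (e.g. a
    few Langevin steps); squared-error matching is one simple choice.
    """
    z = torch.randn(batch, noise_dim)
    x_improved = net(z).detach()         # proposals from the approximation
    for _ in range(k):                   # apply the MCMC operator
        x_improved = mcmc_kernel(x_improved)
    loss = ((net(z) - x_improved) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```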
Deep models trained using amortised MCMC are shown to generate realistic-looking samples as well as producing diverse imputations for images with regions of missing pixels."} {"_id": "4cab37444a947e9f592b0d10a01b61b788808ad1", "title": "Standard Compliant Hazard and Threat Analysis for the Automotive Domain", "text": "The automotive industry has successfully collaborated to release the ISO 26262 standard for developing safe software for cars. The standard describes in detail how to conduct hazard analysis and risk assessments to determine the necessary safety measures for each feature. However, the standard does not concern threat analysis for malicious attackers or how to select appropriate security countermeasures. We propose to apply ISO 27001 for this purpose and show how it can be applied together with ISO 26262. We show how ISO 26262 documentation can be re-used and enhanced to satisfy the analysis and documentation demands of the ISO 27001 standard. We illustrate our approach based on an electronic steering column lock system."} {"_id": "3f200c41618d0c3d75c4cd287b4730aadcf596f7", "title": "PCC: Re-architecting Congestion Control for Consistent High Performance", "text": "TCP and its variants have suffered from surprisingly poor performance for decades. We argue the TCP family has little hope to achieve consistent high performance due to a fundamental architectural deficiency: hardwiring packet-level events to control responses without understanding the real performance result of its actions. We propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender continuously observes the connection between its actions and empirically experienced performance, enabling it to consistently adopt actions that result in high performance. We prove that PCC converges to a stable and fair equilibrium. Across many real-world and challenging environments, PCC shows consistent and often 10\u00d7 performance improvement, with better fairness and stability than TCP. PCC requires no router hardware support or new packet format."} {"_id": "47aa3758c0ac35bfb2a3d2bbeff1e0ac28e623c2", "title": "TCP ex machina: computer-generated congestion control", "text": "This paper describes a new approach to end-to-end congestion control on a multi-user network. Rather than manually formulate each endpoint's reaction to congestion signals, as in traditional protocols, we developed a program called Remy that generates congestion-control algorithms to run at the endpoints.\n In this approach, the protocol designer specifies their prior knowledge or assumptions about the network and an objective that the algorithm will try to achieve, e.g., high throughput and low queueing delay. Remy then produces a distributed algorithm---the control rules for the independent endpoints---that tries to achieve this objective.\n In simulations with ns-2, Remy-generated algorithms outperformed human-designed end-to-end techniques, including TCP Cubic, Compound, and Vegas. In many cases, Remy's algorithms also outperformed methods that require intrusive in-network changes, including XCP and Cubic-over-sfqCoDel (stochastic fair queueing with CoDel for active queue management).\n Remy can generate algorithms both for networks where some parameters are known tightly a priori, e.g. datacenters, and for networks where prior knowledge is less precise, such as cellular networks.
We characterize the sensitivity of the resulting performance to the specificity of the prior knowledge, and the consequences when real-world conditions contradict the assumptions supplied at design-time."} {"_id": "f7f72aefdf053df9e7fb2784af66f0e477272b44", "title": "Algorithms for multi-armed bandit problems", "text": "The stochastic multi-armed bandit problem is an important model for studying the exploration-exploitation tradeoff in reinforcement learning. Although many algorithms for the problem are well-understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as ε-greedy and Boltzmann exploration outperform theoretically sound algorithms on most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly. These properties are not described by current theory, even though they can be exploited in practice in the design of heuristics. Thirdly, the algorithms\u2019 performance relative to each other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50% more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could have still been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies."} {"_id": "b6e56ec46f0cd27abee15bd3b7d7cf0c13b3b56b", "title": "Dynamic simulation of die pickup from wafer tape by a multi-disc ejector using peel-energy to peel-velocity coupling", "text": "A 2D simulation of thin die peeling from wafer tape is presented for a Multi Disc ejection system. The simulation models the dynamics of peeling, and visualizes the time-dependency of peel front propagation and target die stress. It is based on a series of static snapshots, strung together like a movie. A coupling of peel energy and peel velocity is defined. This allows the geometry of the actual snapshot to be calculated from the peel energy of the preceding one. As a result, experimental data of a peel process can be verified. It is shown why an increase in disc velocity leads to a slow-down of peel front propagation, and thus to a pickup failure."} {"_id": "4eacf020f7eae7673a746eccdd5819a6a1be9e85", "title": "Historical Document Layout Analysis Competition", "text": "This paper presents an objective comparative evaluation of layout analysis methods for scanned historical documents.
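To make the two bandit heuristics highlighted in the preceding abstract concrete, here is a generic sketch of ε-greedy and Boltzmann arm selection (illustrative code, not from the paper):

```python
import math
import random

def epsilon_greedy(values, eps=0.1):
    """ε-greedy arm choice: explore uniformly with probability eps,
    otherwise exploit the arm with the highest estimated value."""
    if random.random() < eps:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda a: values[a])

def boltzmann(values, temperature=0.5):
    """Boltzmann (softmax) exploration: arms with higher estimated value
    are chosen exponentially more often; temperature tunes exploration."""
    weights = [math.exp(v / temperature) for v in values]
    r, acc = random.random() * sum(weights), 0.0
    for arm, w in enumerate(weights):
        acc += w
        if r <= acc:
            return arm
    return len(weights) - 1
```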
It describes the competition (modus operandi, dataset and evaluation methodology) held in the context of ICDAR2011 and the International Workshop on Historical Document Imaging and Processing (HIP2011), presenting the results of the evaluation of four submitted methods. A commercial state-of-the-art system is also evaluated for comparison. Two scenarios are reported in this paper, one evaluating the ability of methods to accurately segment regions and the other evaluating the whole pipeline of segmentation and region classification (with a text extraction goal). The results indicate that there is a convergence to a certain methodology with some variations in the approach. However, there is still a considerable need to develop robust methods that deal with the idiosyncrasies of historical documents."} {"_id": "88ac44d10f929a7a5d50af2d42c8817cce4f178e", "title": "A compact branch-line coupler using folded microstrip lines", "text": "This paper presents a compact 3 dB quadrature branch-line coupler using folded microstrip line geometry. Commercially available IE3D software has been used to design and simulate the structure. The proposed structure is simple and takes less optimization time. It has been shown that the structure provides a size reduction of about 43.63% compared to the conventional branch-line coupler. The structure, therefore, can be incorporated in microwave circuit design where a medium level of compactness and faster design are preferred."} {"_id": "6a2b83c4ae18651f1a3496e48a35b0cd7a2196df", "title": "Top Rank Supervised Binary Coding for Visual Search", "text": "In recent years, binary coding techniques are becoming increasingly popular because of their high efficiency in handling large-scale computer vision applications. It has been demonstrated that supervised binary coding techniques that leverage supervised information can significantly enhance the coding quality, and hence greatly benefit visual search tasks. Typically, a modern binary coding method seeks to learn a group of coding functions which compress data samples into binary codes. However, few methods pursued the coding functions such that the precision at the top of a ranking list according to Hamming distances of the generated binary codes is optimized. In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list towards preserving the supervision information. The core idea is to train the disciplined coding functions, by which the mistakes at the top of a Hamming-distance ranking list are penalized more than those at the bottom. To solve such coding functions, we relax the original discrete optimization objective with a continuous surrogate, and derive a stochastic gradient descent to optimize the surrogate objective. To further reduce the training time cost, we also design an online learning algorithm to optimize the surrogate objective more efficiently. Empirical studies based upon three benchmark image datasets demonstrate that the proposed binary coding approach achieves superior image search accuracy over the state of the art."} {"_id": "ba2b7fb8730900bace0e73ee17ea216f2bd559bd", "title": "Joint Optimization of Service Function Chaining and Resource Allocation in Network Function Virtualization", "text": "Network function virtualization (NFV) has emerged as a new paradigm for network architectures.
By migrating NFs from dedicated hardware to a virtualization platform, NFV can effectively improve the flexibility to deploy and manage service function chains (SFCs). However, resource allocation for a requested SFC in NFV-based infrastructures is not trivial, as it mainly consists of three phases: virtual network function (VNF) chain composition, VNF forwarding graph embedding, and VNF scheduling. The decisions in these three phases can be mutually dependent, which also makes resource allocation a tough task. Therefore, a coordinated approach is studied in this paper to jointly optimize NFV resource allocation in these three phases. We apply a general cost model that considers both network costs and service performance. The coordinated NFV-RA is formulated as a mixed-integer linear program, and a heuristic-based algorithm (JoraNFV) is proposed to get a near-optimal solution. To make the coordinated NFV-RA more tractable, JoraNFV is divided into two sub-algorithms: one-hop optimal traffic scheduling and a multi-path greedy algorithm for VNF chain composition and VNF forwarding graph embedding. Lastly, extensive simulations are performed to evaluate the performance of JoraNFV, and results show that JoraNFV can get a solution within 1.25 times the optimal solution with reasonable execution time, which indicates that JoraNFV can be used for online NFV planning."} {"_id": "29f5ecc324e934d21fe8ddde814fca36cfe8eaea", "title": "Using Three Machine Learning Techniques for Predicting Breast Cancer Recurrence", "text": "Introduction: Breast cancer (BC) is the most common cancer in women, affecting about 10% of all women at some stage of their life. In recent years, the incidence rate has kept increasing, and data show that the survival rate is 88% after five years from diagnosis and 80% after 10 years from diagnosis [1]. Early prediction of breast cancer is one of the most crucial tasks in the follow-up process. Data mining methods can help to reduce the number of false positive and false negative decisions [2,3]. Consequently, new methods such as knowledge discovery in databases (KDD) have become a popular research tool for medical researchers who try to identify and exploit patterns and relationships among a large number of variables, and predict the outcome of a disease using historical cases stored in datasets [4]."} {"_id": "8fa5f46bfb5869bdda49e44ed821cb2d2f3b2cc2", "title": "Usability evaluation for history educational games", "text": "The potential for integrating digital games and learning has become ever more significant recently. One of the goals in educational game design is to create engaging and immersive learning experiences for delivering specified learning goals, outcomes and experiences. However, there is a limited amount of research on game usability or the quality of game user interfaces. Failure to design usable game interfaces can interfere with the larger goal of creating a compelling experience for users and can have a negative effect on the overall quality and success of a game. In this paper, we review usability problems identified by previous researchers and propose a history educational game design which includes pedagogical and game design components. Some snapshots of our game module are also presented. Finally we present a usability evaluation method for history educational game design.
From our critical literature review, we also propose six constructs for usability evaluation: interface, mechanics, gameplay, playability, feedback, and immersion."} {"_id": "eb95a7e69a34699ac75ad2cdd107a38e322f7f7b", "title": "Characterizing Task Completion Latencies in Fog Computing", "text": "Fog computing, which distributes computing resources to multiple locations between the Internet of Things (IoT) devices and the cloud, is attracting considerable attention from academia and industry. Yet, despite the excitement about the potential of fog computing, few comprehensive quantitative characterizations of the properties of fog computing architectures have been conducted. In this paper we examine the properties of task completion latencies in fog computing. First, we present the results of our empirical benchmarking-based study of task completion latencies. The study covered a range of settings, and uniquely considered both traditional and serverless fog computing execution points. It demonstrated the range of execution point characteristics in different locations and the relative stability of latency characteristics for a given location. It also highlighted properties of serverless execution that are not incorporated in existing fog computing algorithms. Second, we present a framework we developed for co-optimizing task completion quality and latency, which was inspired by the insights of our empirical study. We describe fog computing task assignment problems we formulated under this framework, and present the algorithms we developed for solving them."} {"_id": "261e841c8e0175586fb193b1a199cefaa8ecf169", "title": "Weighting Finite-State Transductions With Neural Context", "text": "How should one apply deep learning to tasks such as morphological reinflection, which stochastically edit one string to get another? A recent approach to such sequence-to-sequence tasks is to compress the input string into a vector that is then used to generate the output string, using recurrent neural networks. In contrast, we propose to keep the traditional architecture, which uses a finite-state transducer to score all possible output strings, but to augment the scoring function with the help of recurrent networks. A stack of bidirectional LSTMs reads the input string from left-to-right and right-to-left, in order to summarize the input context in which a transducer arc is applied. We combine these learned features with the transducer to define a probability distribution over aligned output strings, in the form of a weighted finite-state automaton. This reduces hand-engineering of features, allows learned features to examine unbounded context in the input string, and still permits exact inference through dynamic programming. We illustrate our method on the tasks of morphological reinflection and lemmatization."} {"_id": "b3cf089a45f03703ad1ba33872866f40006912e3", "title": "Consciousness and anesthesia.", "text": "When we are anesthetized, we expect consciousness to vanish. But does it always? Although anesthesia undoubtedly induces unresponsiveness and amnesia, the extent to which it causes unconsciousness is harder to establish. For instance, certain anesthetics act on areas of the brain's cortex near the midline and abolish behavioral responsiveness, but not necessarily consciousness. Unconsciousness is likely to ensue when a complex of brain regions in the posterior parietal area is inactivated.
Consciousness vanishes when anesthetics produce functional disconnection in this posterior complex, interrupting cortical communication and causing a loss of integration; or when they lead to bistable, stereotypic responses, causing a loss of information capacity. Thus, anesthetics seem to cause unconsciousness when they block the brain's ability to integrate information."} {"_id": "2bc8b1c38d180cdf11277001d3b3f2f9822a800f", "title": "Lightweight eye capture using a parametric model", "text": "Facial scanning has become ubiquitous in digital media, but so far most efforts have focused on reconstructing the skin. Eye reconstruction, on the other hand, has received only little attention, and the current state-of-the-art method is cumbersome for the actor, time-consuming, and requires carefully set up and calibrated hardware. These constraints currently make eye capture impractical for general use. We present the first approach for high-quality lightweight eye capture, which leverages a database of pre-captured eyes to guide the reconstruction of new eyes from much less constrained inputs, such as traditional single-shot face scanners or even a single photo from the internet. This is accomplished with a new parametric model of the eye built from the database, and a novel image-based model fitting algorithm. Our method provides both automatic reconstructions of real eyes, as well as artistic control over the parameters to generate user-specific eyes."} {"_id": "2933ec1f51cd30516491f414d424297bd12f2fd2", "title": "OntoVPA - An Ontology-Based Dialogue Management System for Virtual Personal Assistants", "text": "Dialogue management (DM) is a difficult problem. We present OntoVPA, an Ontology-Based Dialogue Management System (DMS) for Virtual Personal Assistants (VPAs). The features of OntoVPA are offered as potential solutions to core DM problems. We illustrate OntoVPA\u2019s solutions to these problems by means of a running VPA example domain. To the best of our knowledge, OntoVPA is the first commercially available, fully implemented DMS that employs ontologies, reasoning, and ontology-based rules for a) domain model representation and reasoning, b) dialogue representation and state tracking, and c) response generation. OntoVPA is a declarative, knowledge-based system which, consequently, can be customized to a new VPA domain by swapping in and out ontologies and rule bases, with very little to no conventional programming required. OntoVPA relies on its domain-independent (generic), but dialogue-specific upper-level ontologies and DM rules, which implement typical, re-occurring (and usually expensive to address) dialogue system core capabilities, such as anaphora (coreference) resolution, slot-filling, inquiring about missing slot values, and so on. We argue that ontologies and ontology-based rules provide a flexible and expressive framework for realizing DMSs for VPAs, with a potential to significantly reduce development time."} {"_id": "1cf424a167f16d8d040cfb0d76b7cbfd87822672", "title": "Deconfounded Lexicon Induction for Interpretable Social Science", "text": "NLP algorithms are increasingly used in computational social science to take linguistic observations and predict outcomes like human preferences or actions. Making these social models transparent and interpretable often requires identifying features in the input that predict outcomes while also controlling for potential confounds.
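Before the deep models introduced next, a simple residualization baseline makes the goal concrete: regress the outcome on the confounds, then score words against the residual. This sketch is a stand-in baseline under those assumptions, not the paper's method:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def deconfounded_word_scores(X, y, confounds):
    """Score each word's association with the target, net of confounds.

    X: (docs x vocab) count matrix; y: target values; confounds: (docs x k).
    We regress y on the confounds and correlate word counts with the
    residual (a Pearson-style score). Illustrative baseline only.
    """
    resid = y - LinearRegression().fit(confounds, y).predict(confounds)
    Xc = X - X.mean(axis=0)
    rc = resid - resid.mean()
    scores = (Xc * rc[:, None]).mean(axis=0) / (
        (Xc.std(axis=0) + 1e-9) * (rc.std() + 1e-9))
    return scores   # higher |score| = more predictive of y net of confounds
```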
We formalize this need as a new task: inducing a lexicon that is predictive of a set of target variables yet uncorrelated to a set of confounding variables. We introduce two deep learning algorithms for the task. The first uses a bifurcated architecture to separate the explanatory power of the text and confounds. The second uses an adversarial discriminator to force confound-invariant text encodings. Both elicit lexicons from learned weights and attentional scores. We use them to induce lexicons that are predictive of timely responses to consumer complaints (controlling for product), enrollment from course descriptions (controlling for subject), and sales from product descriptions (controlling for seller). In each domain our algorithms pick words that are associated with narrative persuasion; more predictive and less confound-related than those of standard feature weighting and lexicon induction techniques like regression and log odds."} {"_id": "02c73d24cd48184d53cf20d30682e8e10015848f", "title": "Additively Manufactured Scaffolds for Bone Tissue Engineering and the Prediction of their Mechanical Behavior: A Review", "text": "Additive manufacturing (AM), nowadays commonly known as 3D printing, is a revolutionary materials processing technology, particularly suitable for the production of low-volume parts with high shape complexities and often with multiple functions. As such, it holds great promise for the fabrication of patient-specific implants. In recent years, remarkable progress has been made in implementing AM in the bio-fabrication field. This paper presents an overview of state-of-the-art AM technology for bone tissue engineering (BTE) scaffolds, with a particular focus on the AM scaffolds made of metallic biomaterials. It starts with a brief description of architecture design strategies to meet the biological and mechanical property requirements of scaffolds. Then, it summarizes the working principles, advantages and limitations of each of the AM methods suitable for creating porous structures and manufacturing scaffolds from powdered materials. It elaborates on the finite-element (FE) analysis applied to predict the mechanical behavior of AM scaffolds, as well as the effect of the architectural design of a porous structure on its mechanical properties. The review ends with the authors' view on the current challenges and further research directions."} {"_id": "7a0e5002fbf02965b30c50540eabcaf6e2117e10", "title": "Discovering evolutionary theme patterns from text: an exploration of temporal text mining", "text": "Temporal Text Mining (TTM) is concerned with discovering temporal patterns in text information collected over time. Since most text information bears some time stamps, TTM has many applications in multiple domains, such as summarizing events in news articles and revealing research trends in scientific literature. In this paper, we study a particular TTM task -- discovering and summarizing the evolutionary patterns of themes in a text stream. We define this new text mining problem and present general probabilistic methods for solving this problem through (1) discovering latent themes from text; (2) constructing an evolution graph of themes; and (3) analyzing life cycles of themes.
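To give one concrete flavor of step (2), themes from adjacent time windows can be linked into an evolution graph whenever their word distributions are similar; cosine similarity and the threshold below are illustrative choices, not necessarily the paper's exact measure:

```python
import numpy as np

def evolution_edges(themes_by_window, threshold=0.6):
    """Link themes in consecutive time windows into an evolution graph.

    `themes_by_window` is a list of arrays, one per time window, each of
    shape (num_themes, vocab_size) holding theme word distributions.
    Returns edges ((window, theme), (next_window, theme), similarity).
    """
    edges = []
    for t in range(len(themes_by_window) - 1):
        a, b = themes_by_window[t], themes_by_window[t + 1]
        for i, p in enumerate(a):
            for j, q in enumerate(b):
                sim = p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
                if sim >= threshold:
                    edges.append(((t, i), (t + 1, j), float(sim)))
    return edges
```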
Evaluation of the proposed methods on two different domains (i.e., news articles and literature) shows that the proposed methods can discover interesting evolutionary theme patterns effectively."} {"_id": "441d2e918d3471da524619192f734ba8273e3aa2", "title": "SeeNav: Seamless and Energy-Efficient Indoor Navigation using Augmented Reality", "text": "Augmented Reality (AR) based navigation has emerged as an impressive, yet seamless way of guiding users in unknown environments. Its quality of experience depends on many factors, including the accuracy of camera pose estimation, response delay, and energy consumption. In this paper, we present SeeNav - a seamless and energy-efficient AR navigation system for indoor environments. SeeNav combines image-based localization and inertial tracking to provide an accurate and robust camera pose estimation. As vision processing is much more compute intensive than the processing of inertial sensor data, SeeNav offloads the former from resource-constrained mobile devices to a cloud to improve tracking performance and reduce power consumption. Moreover, SeeNav implements a context-aware task scheduling algorithm that further minimizes energy consumption while maintaining the accuracy of camera pose estimation. Our experimental results, including a user study, show that SeeNav provides a seamless navigation experience and reduces the overall energy consumption by 21.56% with context-aware task scheduling."} {"_id": "9f735132119e871da9743ab8af635cd1c9f536af", "title": "Alleviation of Interest flooding attacks using Token bucket with per interface fairness and Countermeasures in Named Data Networking", "text": "Distributed Denial of Service (DDoS) attacks are an ongoing problem in today's Internet. In this paper we focus on DDoS attacks in Named Data Networking (NDN). NDN is a specific candidate for next-generation Internet architecture designs. In NDN, data are named instead of locations, so NDN turns data into a first-class entity and makes itself an attractive and practical approach to meeting the needs of many applications. In an NDN network, end users request data by sending Interest packets, and the network delivers Data packets upon request only, effectively eliminating many existing DDoS attacks. However, an NDN network faces a new type of DDoS attack, namely Interest packet flooding. In this paper we try to alleviate Interest flooding using a token bucket with per-interface fairness algorithm, and we also investigate countermeasures for Interest flooding attacks."} {"_id": "003000c999f12997d7cb7d317a13c54b0092da9f", "title": "Metasurfaces for general transformations of electromagnetic fields.", "text": "In this review paper I discuss electrically thin composite layers, designed to perform desired operations on applied electromagnetic fields. Starting from a historical overview and based on a general classification of metasurfaces, I give an overview of possible functionalities of the most general linear metasurfaces. The review is concluded with a short research outlook discussion."} {"_id": "ece49135b57d3a8b15f374155942ad48056306e7", "title": "Thompson Sampling for Dynamic Pricing", "text": "In this paper we apply active learning algorithms for dynamic pricing in a prominent e-commerce website. Dynamic pricing involves changing the price of items on a regular basis, and uses the feedback from the pricing decisions to update prices of the items. 
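The Interest-flooding abstract above rests on a standard primitive: a token bucket kept per ingress interface, so that no single face can exhaust forwarding capacity. A minimal sketch (not the paper's exact algorithm; rates, capacities and interface names are illustrative):

```python
import time

class TokenBucket:
    """Minimal token bucket; one instance per router interface gives
    per-interface fairness when rate-limiting forwarded Interests."""
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets = {"eth0": TokenBucket(rate=100, capacity=20),
           "eth1": TokenBucket(rate=100, capacity=20)}
# Forward an Interest only if its ingress interface still has a token.
print(buckets["eth0"].allow())
```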
Most popular approaches to dynamic pricing use a passive learning approach, where the algorithm uses historical data to learn various parameters of the pricing problem, and uses the updated parameters to generate a new set of prices. We show that one can use active learning algorithms such as Thompson sampling to more efficiently learn the underlying parameters in a pricing problem. We apply our algorithms to a real e-commerce system and show that the algorithms indeed improve revenue compared to pricing algorithms that use passive learning."} {"_id": "6befc9fac6846df9f27e91ffbad5de58b77ffef1", "title": "Structured Receptive Fields in CNNs", "text": "Learning powerful feature representations with CNNs is hard when training data are limited. Pre-training is one way to overcome this, but it requires large datasets sufficiently similar to the target domain. Another option is to design priors into the model, which can range from tuned hyperparameters to fully engineered representations like Scattering Networks. We combine these ideas into structured receptive field networks, a model which has a fixed filter basis and yet retains the flexibility of CNNs. This flexibility is achieved by expressing receptive fields in CNNs as a weighted sum over a fixed basis which is similar in spirit to Scattering Networks. The key difference is that we learn arbitrary effective filter sets from the basis rather than modeling the filters. This approach explicitly connects classical multiscale image analysis with general CNNs. With structured receptive field networks, we improve considerably over unstructured CNNs for small and medium dataset scenarios as well as over Scattering for large datasets. We validate our findings on ILSVRC2012, Cifar-10, Cifar-100 and MNIST. As a realistic small dataset example, we show state-of-the-art classification results on popular 3D MRI brain-disease datasets where pre-training is difficult due to a lack of large public datasets in a similar domain."} {"_id": "7d6a34508b091ba8cde8a403e26ec791325c60d1", "title": "Net!works European Technology Platform Expert Working Group on Smart Cities Applications and Requirements Executive Summary List of Contributors Contributors Company/institute Emcanta, Es Alcatel-lucent/bell Labs, De Gowex, Es List of Acronyms 3d Three Dimensional Api Application Programming Interfa", "text": "Smart Cities have gained importance as a means of making ICT enabled services and applications available to the citizens, companies and authorities that are part of a city's system. The aim is to increase citizens' quality of life, and to improve the efficiency and quality of the services provided by governing entities and businesses. This perspective requires an integrated vision of a city and of its infrastructures, in all its components. A Smart City can be characterized according to six characteristics. All these domains raise new challenges in security and privacy, since users implicitly expect systems to be secure and privacy-preserving. One of the critical elements is which role(s) the city will take up as an actor within an increasingly complex value network. New players enter the market, actors shift their business strategies, roles change, different types of platforms emerge and vie for market dominance, technological developments create new threats and opportunities, etc. An element related to the trend of platformisation is cloud computing, which is increasingly helping the private sector to reduce cost, increase efficiency, and work smarter. 
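To make the Thompson-sampling pricing idea above concrete, here is a minimal Bernoulli-demand sketch: keep one Beta posterior per candidate price, sample a conversion rate for each, and post the revenue-maximizing price. The price grid and demand curve are invented for the demo; the production system the abstract describes is of course richer.

```python
import random

prices = [9.99, 12.99, 14.99]
# Beta(alpha, beta) posterior over the purchase probability at each price.
alpha = {p: 1.0 for p in prices}
beta = {p: 1.0 for p in prices}

def choose_price():
    # Sample a conversion rate per price; pick the best expected revenue.
    return max(prices, key=lambda p: p * random.betavariate(alpha[p], beta[p]))

def update(price, purchased):
    if purchased:
        alpha[price] += 1
    else:
        beta[price] += 1

# Simulated interaction loop against a hypothetical demand curve.
true_rate = {9.99: 0.30, 12.99: 0.25, 14.99: 0.10}
for _ in range(2000):
    p = choose_price()
    update(p, random.random() < true_rate[p])
# Posterior-mean revenue favours 12.99 under this demand curve.
print(max(prices, key=lambda p: alpha[p] / (alpha[p] + beta[p]) * p))
```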
One particular challenge relates to open data business models. Activities necessary for Public Sector Information provision can be identified. The development of efficient and effective e-government is a prerequisite. Transnational authentication systems for citizens and businesses, agreed frameworks for data privacy, and the sharing and collection of individual and business data, are key. Smart Cities need to be able to integrate themselves into national, regional and international infrastructures. Although the implementation aspects depend strongly on the authorities of these infrastructures, Europe-wide recommendations and directives will definitely contribute to accelerating the deployment of Smart Cities. Health, inclusion and assisted living will play an essential role, since the demand for related services is rising as ageing changes disease composition. Requirements address a number of technologies beyond the ones related to mobile and fixed networks. An integrated perspective on healthcare solutions for the near- to long-term can be foreseen, bridging a gap between the health area and the technological development of communications (radio and network components). The need for mobility in urban areas results in a number of problems, such as traffic congestion and energy consumption, which can be alleviated by exploiting Intelligent Transportation Systems and further adoption of vehicle-to-vehicle and vehicle-to-infrastructure communication networks. The information being managed in this area can be relevant to other domains, which increases its potential. An effective deployment \u2026"} {"_id": "6ada4de6b4c409fe72824bb48020d79aa7e8936a", "title": "Learning to Generate Corrective Patches using Neural Machine Translation", "text": "Bug fixing is generally a manually-intensive task. However, recent work has proposed the idea of automated program repair, which aims to repair (at least a subset of) bugs in different ways such as code mutation, etc. Following in the same line of work as automated bug repair, in this paper we aim to leverage past fixes to propose fixes for current and future bugs. Specifically, we propose Ratchet, a corrective patch generation system using neural machine translation. By learning corresponding pre-correction and post-correction code in past fixes with a neural sequence-to-sequence model, Ratchet is able to generate a fix for a given bug-prone code query. We perform an empirical study with five open source projects, namely Ambari, Camel, Hadoop, Jetty and Wicket, to evaluate the effectiveness of Ratchet. Our findings show that Ratchet can generate syntactically valid statements 98.7% of the time, and achieve an F1-measure between 0.41-0.83 with respect to the actual fixes adopted in the code base. In addition, we perform a qualitative validation with 20 participants to see whether the generated statements can be helpful in correcting bugs. Our survey showed that Ratchet\u2019s output was considered to be helpful in fixing the bugs on many occasions, even if the fix was not 100%"} {"_id": "4d709d05f3ec6de94601e0642058180ac23dee88", "title": "Colour Dynamic Photometric Stereo for Textured Surfaces", "text": "In this paper we present a novel method to apply photometric stereo to textured dynamic surfaces. We aim to exploit the high accuracy of photometric stereo and reconstruct local surface orientation from illumination changes. 
The main difficulty derives from the fact that photometric stereo requires varying illumination while the object remains still, which makes it quite impractical to use for dynamic surfaces. Using coloured lights gives a clear solution to this problem; however, the system of equations is still ill-posed and it is ambiguous whether the change of an observed surface colour is due to the change of the surface gradient or of the surface reflectance. In order to separate surface orientation from reflectance, our method tracks texture changes over time and exploits surface reflectance\u2019s temporal constancy. This additional constraint allows us to reformulate the problem as an energy functional minimisation, solved by a standard quasi-Newton method. Our method is tested both on real and synthetic data, quantitatively evaluated and compared to a state-of-the-art method."} {"_id": "311e45968b55462ac3dd3fa7466f15857bae5f2c", "title": "All learning is Local: Multi-agent Learning in Global Reward Games", "text": "In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent\u2019s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique."} {"_id": "01ff3819c40d0e962a533a9c924cafbfd7f36775", "title": "Growers versus showers: a meaningful (or useful) distinction?", "text": "Yafi et al. have conducted a study that will be of great interest to the lay community and also of import to practicing urologists who routinely encounter patients with concerns about the appearance of their phallus [1]. In one study 14% of men expressed dissatisfaction with their genitals with flaccid penile length being the most common source of dissatisfaction [2]. The concerns of such men tend to be the perception that their penis is small, or at least smaller than the penises of other men."} {"_id": "2c3eb8bcfe5f0bf7666a8719d89784a60ffa3c33", "title": "Multi-sensor multi-object tracking of vehicles using high-resolution radars", "text": "Recent advances in automotive radar technology have led to increasing sensor resolution and hence a more detailed image of the environment with multiple measurements per object. This poses several challenges for tracking systems: new algorithms are necessary to fully exploit the additional information and algorithms need to resolve measurement-to-object association ambiguities in cluttered multi-object scenarios. Also, the information has to be fused if multi-sensor setups are used to obtain redundancy and increased fields of view. In this paper, a Labeled Multi-Bernoulli filter for tracking multiple vehicles using multiple high-resolution radars is presented. This finite-set-statistics-based filter tackles all three challenges in a fully probabilistic fashion and is the first Monte Carlo implementation of its kind. The filter performance is evaluated using radar data from an experimental vehicle."} {"_id": "783a911805ec8fa146ede0f29c4f8d90e42db509", "title": "Efficient occupancy grid computation on the GPU with lidar and radar for road boundary detection", "text": "Accurate maps of the static environment are essential for many advanced driver-assistance systems. 
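For reference alongside the colour dynamic photometric stereo abstract above, this is the classic static Lambertian baseline the method builds on: with three known distant light directions L and observed per-pixel intensities I, solve L g = I for the albedo-scaled normal g. Light directions and intensities below are made up for the demo.

```python
import numpy as np

# Classic three-light Lambertian photometric stereo for one pixel.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])     # one light direction per row
I = np.array([0.9, 1.1, 0.8])       # observed intensities under each light

g = np.linalg.solve(L, I)            # g = albedo * surface normal
albedo = np.linalg.norm(g)
normal = g / albedo
print(albedo.round(3), normal.round(3))
```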
A new method for the fast computation of occupancy grid maps with laser range-finders and radar sensors is proposed. The approach utilizes the Graphics Processing Unit to overcome the limitations of classical occupancy grid computation in automotive environments. It is possible to generate highly accurate grid maps in just a few milliseconds without the loss of sensor precision. Moreover, in the case of a lower-resolution radar sensor, it is shown that super-resolution algorithms can be applied to achieve the accuracy of a higher-resolution laser scanner. Finally, a novel histogram-based approach for road boundary detection with lidar and radar sensors is presented."} {"_id": "8f69384b197a424dfbd0f60d7c48c110faf2b982", "title": "High resolution maps from wide angle sonar", "text": ""} {"_id": "9cd1166aa66b0798bf452f48feab44d7082c1bf6", "title": "Detection of parked vehicles from a radar based occupancy grid", "text": "For autonomous parking applications to become possible, knowledge about the parking environment is required. Therefore, a real-time algorithm for detecting parked vehicles from radar data is presented. These data are first accumulated in an occupancy grid, from which objects are detected by applying techniques borrowed from the computer vision field. Two random forest classifiers are trained to recognize two categories of objects: parallel-parked vehicles and cross-parked vehicles. The performance of the classifiers is evaluated, as well as the capacity of the complete system to detect parked vehicles in real-world scenarios."} {"_id": "014b191f412f8496813d7c358ddd11d8512f2005", "title": "Instantaneous lateral velocity estimation of a vehicle using Doppler radar", "text": "High-resolution image radars open new opportunities for estimating velocity and direction of movement of extended objects from a single observation. Since radar sensors only measure the radial velocity, a tracking system is normally used to determine the velocity vector of the object. A stable velocity is estimated after several frames at the earliest, resulting in a significant loss of time for reacting to certain situations such as cross-traffic. The following paper presents a robust and model-free approach to determine the velocity vector of an extended target. In contrast to the Kalman filter, it does not require data association in time and space. An instant (~50 ms) and bias-free estimation of its velocity vector is possible. Our approach can handle noise and systematic variations (e.g., micro-Doppler of wheels) in the signal. It is optimized to deal with measurement errors of the radar sensor not only in the radial velocity, but in the azimuth position as well. The accuracy of this method is increased by the fusion of multiple radar sensors."} {"_id": "dcbbb4509c7256f20ec2ec3c1450cd09290518f5", "title": "Food, livestock production, energy, climate change, and health", "text": "Food provides energy and nutrients, but its acquisition requires energy expenditure. In post-hunter-gatherer societies, extra-somatic energy has greatly expanded and intensified the catching, gathering, and production of food. Modern relations between energy, food, and health are very complex, raising serious, high-level policy challenges. Together with persistent widespread under-nutrition, over-nutrition (and sedentarism) is causing obesity and associated serious health consequences. 
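The instantaneous-velocity abstract above exploits the fact that every reflection of an extended target constrains the velocity vector through its radial component. A common way to make this concrete (a least-squares sketch, not necessarily the paper's exact estimator) solves v_r = v_x cos(a) + v_y sin(a) over all reflections; in practice a robust variant such as RANSAC would down-weight micro-Doppler outliers from the wheels.

```python
import numpy as np

def velocity_from_doppler(azimuths, radial_speeds):
    """Least-squares fit of v = (vx, vy) to v_r = vx*cos(a) + vy*sin(a),
    one equation per radar reflection of the extended object."""
    A = np.column_stack([np.cos(azimuths), np.sin(azimuths)])
    v, *_ = np.linalg.lstsq(A, radial_speeds, rcond=None)
    return v

az = np.deg2rad([-10, 0, 12, 25])                 # reflection azimuths
true_v = np.array([3.0, -8.0])                     # ground-truth velocity
vr = np.cos(az) * true_v[0] + np.sin(az) * true_v[1]
print(velocity_from_doppler(az, vr))               # recovers [3, -8]
```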
Worldwide, agricultural activity, especially livestock production, accounts for about a fifth of total greenhouse-gas emissions, thus contributing to climate change and its adverse health consequences, including the threat to food yields in many regions. Particular policy attention should be paid to the health risks posed by the rapid worldwide growth in meat consumption, both by exacerbating climate change and by directly contributing to certain diseases. To prevent increased greenhouse-gas emissions from this production sector, both the average worldwide consumption level of animal products and the intensity of emissions from livestock production must be reduced. An international contraction and convergence strategy offers a feasible route to such a goal. The current global average meat consumption is 100 g per person per day, with about a ten-fold variation between high-consuming and low-consuming populations. 90 g per day is proposed as a working global target, shared more evenly, with not more than 50 g per day coming from red meat from ruminants (i.e., cattle, sheep, goats, and other digastric grazers)."} {"_id": "5e4f7a53314103efb9b1bae9a3189560839afffb", "title": "Message in a Sealed Bottle: Privacy Preserving Friending in Social Networks", "text": "Many proximity-based mobile social networks are developed to facilitate connections between any two people, or to help a user to find people with a matched profile within a certain distance. A challenging task in these applications is to protect the privacy of the participants' profiles and personal interests. In this paper, we design novel mechanisms that, given a preference-profile submitted by a user, search for persons with a matching profile in decentralized multi-hop mobile social networks. Our mechanisms also establish a secure communication channel between the initiator and matching users at the time when the matching user is found. Our rigorous analysis shows that our mechanism is privacy-preserving (no participants' profile and the submitted preference-profile are exposed), verifiable (both the initiator and the unmatched user cannot cheat each other to pretend to be matched), and efficient in both communication and computation. Extensive evaluations using real social network data, and an actual system implementation on smart phones, show that our mechanisms are significantly more efficient than existing solutions."} {"_id": "ad6d5e4545c60ec559d27a09fbef13fa538172e1", "title": "A direct scattering model for tracking vehicles with high-resolution radars", "text": "In advanced driver assistance systems and autonomous driving, reliable environment perception and object tracking based on radar is fundamental. High-resolution radar sensors often provide multiple measurements per object. Since in this case traditional point tracking algorithms are not applicable any more, novel approaches for extended object tracking emerged in the last few years. However, they are primarily designed for lidar applications or omit the additional Doppler information of radars. Classical radar based tracking methods using the Doppler information are mostly designed for point tracking of parallel traffic. The measurement model presented in this paper is developed to track vehicles of approximately rectangular shape in arbitrary traffic scenarios including parallel and cross traffic. In addition to the kinematic state, it allows determining and tracking the geometric state of the object. Using the Doppler information is an important component in the model. 
Furthermore, it requires neither measurement preprocessing, data clustering, nor explicit data association. For object tracking, a Rao-Blackwellized particle filter (RBPF) adapted to the measurement model is presented."} {"_id": "43216287965da7bfbe56b2e50a552cf0ac3144f0", "title": "LIBOL: a library for online learning algorithms", "text": "LIBOL is an open-source library for large-scale online learning, which consists of a large family of efficient and scalable state-of-the-art online learning algorithms for large-scale online classification tasks. We have offered easy-to-use command-line tools and examples for users and developers, and also have made comprehensive documents available for both beginners and advanced users. LIBOL is not only a machine learning toolbox, but also a comprehensive experimental platform for conducting online learning research."} {"_id": "b6975005a606d58d461eb507cf194c96ad1f05fd", "title": "Basic Concepts of Lexical Resource Semantics", "text": "Semanticists use a range of highly expressive logical languages to characterize the meaning of natural language expressions. The logical languages are usually taken from an inventory of standard mathematical systems, with which generative linguists are familiar. They are, thus, easily accessible beyond the borders of a given framework such as Categorial Grammar, Lexical Functional Grammar, or Government and Binding Theory. Linguists working in the HPSG framework, on the other hand, often use rather idiosyncratic and specialized semantic representations. Their choice is sometimes motivated by computational applications in parsing, generation, or machine translation. Naturally, the intended areas of application influence the design of semantic representations. A typical property of semantic representations in HPSG that is concerned with computational applications is underspecification, and other properties come from the particular unification or constraint solving algorithms that are used for processing grammars. While the resulting semantic representations have properties that are motivated by, and are adequate for, certain practical applications, their relationship to standard languages is sometimes left on an intuitive level. In addition, the theoretical and ontological status of the semantic representations is often neglected. This vagueness tends to be unsatisfying to many semanticists, and the idiosyncratic shape of the semantic representations confines their usage to HPSG. Since their entire architecture is highly dependent on HPSG, hardly anyone working outside of that framework is interested in studying them. With our work on Lexical Resource Semantics (LRS), we want to contribute to the investigation of a number of important theoretical issues surrounding semantic representations and possible ways of underspecification. While LRS is formulated in a constraint-based grammar environment and takes advantage of the tight connection between syntax proper and logical representations that can easily be achieved in HPSG, the architecture of LRS remains independent of that framework, and combines attractive properties of various semantic systems. 
We will explore the types of semantic frameworks which can be specified in Relational Speciate Re-entrant Language (RSRL), the formalism that we choose to express our grammar principles, and we evaluate the semantic frameworks with respect to their potential for providing empirically satisfactory analyses of typical problems in the semantics of natural languages. In LRS, we want to synthesize a flexible meta-theory that can be applied to different interesting semantic representation languages and make computing with them feasible. We will start our investigation with a standard semantic representation language from natural language semantics, Ty2 (Gallin, 1975). We are well aware of the debate about the appropriateness of Montagovian-style intensionality for the analysis of natural language semantics, but we believe that it is best to start with a semantic representation that most generative linguists are familiar with. As will become clear in the course of our discussion, the LRS framework is a meta-theory of semantic representations, and we believe that it is suitable for various representation languages. This paper can be regarded as a snapshot of our work on LRS. It was written as material for the authors\u2019 course Constraint-based Combinatorial Semantics at the 15th European Summer School in Logic, Language and Information in Vienna in August 2003. It is meant as background reading and as a basis of discussion for our class. Its air of a work in progress is deliberate. As we see continued development in LRS, its application to a wider range of languages and empirical phenomena, and especially the implementation of an LRS module as a component of the TRALE grammar development environment, we expect further modifications and refinements to the theory. The implementation of LRS is realized in collaboration with Gerald Penn of the University of Toronto. We would like to thank Carmella Payne for proofreading various versions of this paper."} {"_id": "20cf4cf5bdd8db1ebabe8e6de258deff08d9b855", "title": "Design of a scalable InfiniBand topology service to enable network-topology-aware placement of processes", "text": "Over the last decade, InfiniBand has become an increasingly popular interconnect for deploying modern super-computing systems. However, there exists no detection service that can discover the underlying network topology in a scalable manner and expose this information to runtime libraries and users of the high performance computing systems in a convenient way. In this paper, we design a novel and scalable method to detect the InfiniBand network topology by using Neighbor-Joining techniques (NJ). To the best of our knowledge, this is the first instance where the neighbor joining algorithm has been applied to solve the problem of detecting InfiniBand network topology. We also design a network-topology-aware MPI library that takes advantage of the network topology service. The library places processes taking part in the MPI job in a network-topology-aware manner with the dual aim of increasing intra-node communication and reducing long-distance inter-node communication across the InfiniBand fabric."} {"_id": "3a09d0f6cd5da2178419d7e6c346ef9f6a82863f", "title": "Techniques to Detect Spammers in Twitter- A Survey", "text": "With the rapid growth of social networking sites for communicating, sharing, storing and managing significant information, such sites are attracting cybercriminals who misuse the Web to exploit vulnerabilities for their illicit benefits. 
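For intuition about the Neighbor-Joining primitive used by the InfiniBand topology service above, the sketch below performs one NJ selection step on a toy hop-distance matrix: compute the Q-criterion and pick the closest pair to join. This is textbook NJ on invented distances, not the paper's full service.

```python
import numpy as np

def nj_pick_join(D):
    """One neighbor-joining step: return the pair (i, j) minimizing the
    NJ Q-criterion Q(i,j) = (n-2)*D[i,j] - r[i] - r[j]."""
    n = D.shape[0]
    r = D.sum(axis=1)
    Q = (n - 2) * D - r[:, None] - r[None, :]
    np.fill_diagonal(Q, np.inf)           # never join a node with itself
    return np.unravel_index(np.argmin(Q), Q.shape)

# Toy hop-distance matrix between four end nodes; nodes 0/1 and 2/3
# hang off the same leaf switches.
D = np.array([[0, 2, 4, 4],
              [2, 0, 4, 4],
              [4, 4, 0, 2],
              [4, 4, 2, 0]], dtype=float)
print(nj_pick_join(D))                     # (0, 1): siblings under one switch
```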
Forged online accounts are created every day. Impersonators, phishers, scammers and spammers crop up all the time in Online Social Networks (OSNs), and are hard to identify. Spammers are users who send unsolicited messages to a large audience with the intention of advertising some product, luring victims into clicking on malicious links, or infecting users' systems, just for the purpose of making money. A lot of research has been done to detect spam profiles in OSNs. In this paper we review the existing techniques for detecting spam users in the Twitter social network. Features for the detection of spammers can be user-based, content-based, or both. The current study provides an overview of the methods, the features used, the detection rates, and the limitations (if any) of techniques for detecting spam profiles, mainly on Twitter."} {"_id": "c9412e0fe79b51a9289cf1712a52fd4c7e9cd87b", "title": "A New Similarity Measure to Understand Visitor Behavior in a Web Site", "text": "The behavior of visitors browsing in a web site offers a lot of information about their requirements and the way they use the respective site. Analyzing such behavior can provide the necessary information in order to improve the web site\u2019s structure. The literature already contains several suggestions on how to characterize web site usage and to identify the respective visitor requirements based on clustering of visitor sessions. Here we propose to combine visitor behavior with the content of the respective web pages and the similarity between different page sequences in order to define a similarity measure between different visits. This similarity serves as input for clustering of visitor sessions. The application of our approach to a bank\u2019s web site and its visitor sessions shows its potential for internet-based businesses. key words: web mining, browsing behavior, similarity measure, clustering."} {"_id": "0f899b92b7fb03b609fee887e4b6f3b633eaf30d", "title": "Variational Inference with Normalizing Flows", "text": "The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference."} {"_id": "5028cd9a7003212f307e683252beaf1ce94f72d5", "title": "Testing Django Configurations Using Combinatorial Interaction Testing", "text": "Combinatorial Interaction Testing (CIT) is important because it tests the interactions between the many parameters that make up the configuration space of software systems. 
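The normalizing-flows abstract above rests on the change-of-variables rule: push base samples through an invertible map and subtract log|det J| from the base log-density. A one-layer affine sketch (flow parameters chosen arbitrarily for the demo; real flows stack many richer invertible maps):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a, b = 2.0, -1.0                              # affine flow z1 = a*z0 + b

z0 = rng.normal(size=10_000)                  # base samples ~ N(0, 1)
z1 = a * z0 + b                               # transformed samples
# Change of variables: log q1(z1) = log q0(z0) - log|det dz1/dz0|.
log_q1 = norm.logpdf(z0) - np.log(abs(a))

# Sanity check against the analytic answer: z1 ~ N(b, a^2).
print(np.allclose(log_q1, norm.logpdf(z1, loc=b, scale=abs(a))))
```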
We apply this testing paradigm to a Python-based framework for the rapid development of web-based applications called Django. In particular, we automatically create a CIT model for Django website configurations and run a state-of-the-art tool for CIT test suite generation to obtain sets of test configurations. Our automatic CIT-based approach is able to efficiently detect invalid configurations."} {"_id": "41c52fd45b209a1a72f45fe7b99afddd653c5a12", "title": "A Top-Down Compiler for Sentential Decision Diagrams", "text": "The sentential decision diagram (SDD) has been recently proposed as a new tractable representation of Boolean functions that generalizes the influential ordered binary decision diagram (OBDD). Empirically, compiling CNFs into SDDs has yielded significant improvements in both time and space over compiling them into OBDDs, using a bottom-up compilation approach. In this work, we present a top-down CNF to SDD compiler that is based on techniques from the SAT literature. We compare the presented compiler empirically to the state-of-the-art, bottom-up SDD compiler, showing orders-of-magnitude improvements in compilation time."} {"_id": "5b832f8a5807b66921c151716adfb55fe7fadc15", "title": "Collaboration- and Fairness-Aware Big Data Management in Distributed Clouds", "text": "With the advancement of information and communication technology, data are being generated at an exponential rate via various instruments and collected at an unprecedented scale. Such a large volume of generated data is referred to as big data, which is now revolutionizing all aspects of our life, ranging from enterprises to individuals, from science communities to governments, as it exhibits great potential to improve the efficiency of enterprises and the quality of life. To obtain nontrivial patterns and derive valuable information from big data, a fundamental problem is how to properly place the data collected by different users in distributed clouds and to efficiently analyze the collected data to save user costs in data storage and processing, particularly the cost savings of users who share data. Doing so requires close collaboration among the users, who share and utilize the big data in distributed clouds, due to the complexity and volume of big data. Since computing, storage and bandwidth resources in a distributed cloud usually are limited, and such resource provisioning typically is expensive, the collaborative users are required to make use of the resources fairly. In this paper, we study a novel collaboration- and fairness-aware big data management problem in distributed cloud environments that aims to maximize the system throughput, while minimizing the operational cost of service providers needed to achieve that throughput, subject to resource capacity and user fairness constraints. We first propose a novel optimization framework for the problem. We then devise a fast yet scalable approximation algorithm based on the built optimization framework. We also analyze the time complexity and approximation ratio of the proposed algorithm. We finally conduct experiments by simulation to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm is promising, and outperforms other heuristics."} {"_id": "d6b66236d802a6915077583aa102c03aa170392d", "title": "High-Q 3D RF solenoid inductors in glass", "text": "In this paper, we demonstrate the fabrication and characterization of various 3D solenoid inductors using a glass core substrate. 
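For a feel of what a CIT test-suite generator produces in the Django setting above, here is a brute-force greedy pairwise generator: repeatedly pick the full configuration that covers the most uncovered parameter-value pairs. Real CIT tools (and the paper's models) are far more sophisticated, and the configuration space below is invented.

```python
from itertools import combinations, product

def pairs(names, cfg):
    """All parameter-value pairs exercised by one full configuration."""
    return {((names[i], cfg[i]), (names[j], cfg[j]))
            for i, j in combinations(range(len(names)), 2)}

def pairwise_suite(params):
    names = list(params)
    uncovered = {((n1, v1), (n2, v2))
                 for n1, n2 in combinations(names, 2)
                 for v1 in params[n1] for v2 in params[n2]}
    suite = []
    while uncovered:
        best = max(product(*params.values()),
                   key=lambda cfg: len(pairs(names, cfg) & uncovered))
        suite.append(dict(zip(names, best)))
        uncovered -= pairs(names, best)
    return suite

cfg_space = {"DEBUG": [True, False],
             "USE_TZ": [True, False],
             "DB": ["sqlite", "postgres", "mysql"]}
for cfg in pairwise_suite(cfg_space):
    print(cfg)    # covers all pairs with fewer configs than the full product
```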
Solenoid inductors were fabricated in glass by drilling through-holes in glass and using semi-additive copper plating for metallization. This topology is compared to similar solenoid structures in terms of Q-factor performance and inductance density. Inductances of 1.8-4.5nH with Q ~ 60 at 1GHz were demonstrated."} {"_id": "93f5a28d16e04334fcb71cb62d0fd9b1c68883bb", "title": "Amortized Inference in Probabilistic Reasoning", "text": "Recent studies of probabilistic reasoning have postulated general-purpose inference algorithms that can be used to answer arbitrary queries. These algorithms are memoryless, in the sense that each query is processed independently, without reuse of earlier computation. We argue that the brain operates in the setting of amortized inference, where numerous related queries must be answered (e.g., recognizing a scene from multiple viewpoints); in this setting, memoryless algorithms can be computationally wasteful. We propose a simple form of flexible reuse, according to which shared inferences are cached and composed together to answer new queries. We present experimental evidence that humans exploit this form of reuse: the answer to a complex query can be systematically predicted from a person\u2019s response to a simpler query if the simpler query was presented first and entails a sub-inference (i.e., a sub-component of the more complex query). People are also faster at answering a complex query when it is preceded by a sub-inference. Our results suggest that the astonishing efficiency of human probabilistic reasoning may be supported by interactions between inference and memory."} {"_id": "acec5113c70016d38605bdbd3e70b1209f5fcecd", "title": "Efficient Phrase-Based Document Similarity for Clustering", "text": "In this paper, we propose a phrase-based document similarity to compute the pair-wise similarities of documents based on the suffix tree document (STD) model. By mapping each node in the suffix tree of the STD model into a unique feature term in the vector space document (VSD) model, the phrase-based document similarity naturally inherits the term tf-idf weighting scheme in computing the document similarity with phrases. We apply the phrase-based document similarity to the group-average Hierarchical Agglomerative Clustering (HAC) algorithm and develop a new document clustering approach. Our evaluation experiments indicate that the new clustering approach is very effective in clustering the documents of two standard document benchmark corpora, OHSUMED and RCV1. The quality of the clustering results significantly surpasses that of the traditional single-word tf-idf similarity measure in the same HAC algorithm, especially in large document data sets. Furthermore, by studying the properties of the STD model, we conclude that the feature vector of phrase terms in the STD model can be considered as an expanded feature vector of the traditional single-word terms in the VSD model. This conclusion helps explain why the phrase-based document similarity works much better than the single-word tf-idf similarity measure."} {"_id": "904229ce8b8d53eb7e7a65dd52df887cf739a4ba", "title": "An Edge Detection Algorithm for Online Image Analysis", "text": "Online image analysis is used in a wide variety of applications. Edge detection is a fundamental tool used to obtain features of objects as a prerequisite step to object segmentation. This paper presents a simple and relatively fast online edge detection algorithm based on the second derivative. 
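The phrase-based similarity abstract above combines phrase features with tf-idf weighting and cosine similarity. A self-contained sketch, approximating suffix-tree phrase terms by word n-grams (an assumption made here for brevity; the paper derives the terms from suffix-tree nodes):

```python
import math
from collections import Counter

def phrase_terms(doc, n=2):
    """Unigram and bigram 'phrase' features standing in for STD nodes."""
    words = doc.lower().split()
    return [" ".join(words[i:i + k])
            for k in range(1, n + 1)
            for i in range(len(words) - k + 1)]

docs = ["suffix tree document model",
        "vector space document model",
        "suffix tree clustering"]
tf = [Counter(phrase_terms(d)) for d in docs]
df = Counter(t for c in tf for t in c)
N = len(docs)

def tfidf(counts):
    return {t: f * math.log(N / df[t]) for t, f in counts.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine(tfidf(tf[0]), tfidf(tf[1])))   # shares "document model"
print(cosine(tfidf(tf[0]), tfidf(tf[2])))   # shares "suffix tree"
```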
The proposed edge detector is less sensitive to noise and may be applied to color, grayscale and binary images without preprocessing requirements. The merits of the algorithm are demonstrated by comparison with Canny\u2019s and Sobel\u2019s edge detectors."} {"_id": "2966ce6356da4ca4ab9422d9233253dc433b2700", "title": "HermitCore: A Unikernel for Extreme Scale Computing", "text": "We expect that the size and the complexity of future supercomputers will increase on their path to exascale systems and beyond. Therefore, system software has to adapt to the complexity of these systems to simplify the development of scalable applications. In this paper, we present a unikernel operating system design for HPC. It extends the multi-kernel approach while providing better programmability and scalability for hierarchical systems, such as HLRS' Hazel Hen, which are based on multiple cluster-on-a-chip processors. We demonstrate the scalability of the design via micro-benchmarks, taking the example of HermitCore---our prototype implementation of the new design."} {"_id": "5d37014068ec3113d9b403556c1fdf861bec0162", "title": "Sugar: Secure GPU Acceleration in Web Browsers", "text": "Modern personal computers have embraced increasingly powerful Graphics Processing Units (GPUs). Recently, GPU-based graphics acceleration in web apps (i.e., applications running inside a web browser) has become popular. WebGL is the main effort to provide OpenGL-like graphics for web apps and it is currently used in 53% of the top-100 websites. Unfortunately, WebGL has posed serious security concerns as several attack vectors have been demonstrated through WebGL. Web browsers\u2019 solutions to these attacks have been reactive: discovered vulnerabilities have been patched and new runtime security checks have been added. Unfortunately, this approach leaves the system vulnerable to zero-day vulnerability exploits, especially given the large size of the Trusted Computing Base of the graphics plane. We present Sugar, a novel operating system solution that enhances the security of GPU acceleration for web apps by design. The key idea behind Sugar is using a dedicated virtual graphics plane for a web app by leveraging modern GPU virtualization solutions. A virtual graphics plane consists of a dedicated virtual GPU (or vGPU) as well as all the software graphics stack (including the device driver). Sugar enhances the system security since a virtual graphics plane is fully isolated from the rest of the system. Despite GPU virtualization overhead, we show that Sugar achieves high performance. Moreover, unlike current systems, Sugar is able to use two underlying physical GPUs, when available, to co-render the User Interface (UI): one GPU is used to provide virtual graphics planes for web apps and the other to provide the primary graphics plane for the rest of the system. Such a design not only provides strong security guarantees, it also provides enhanced performance isolation."} {"_id": "d43d9dfb35e3399b0ef8d842381867a50e7dc349", "title": "3D-stackable crossbar resistive memory based on Field Assisted Superlinear Threshold (FAST) selector", "text": "We report the integration of 3D-stackable 1S1R passive crossbar RRAM arrays utilizing a Field Assisted Superlinear Threshold (FAST) selector. The sneak path issue in crossbar memory integration has been solved using the highest reported selectivity of 10^10. 
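A minimal second-derivative edge detector in the spirit of the edge-detection abstract above: smooth, apply a Laplacian kernel, and mark zero crossings. The kernel, sigma, and horizontal-only crossing test are simplifications for the demo, not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def second_derivative_edges(img, sigma=1.0, thresh=0.0):
    """Smooth, take the Laplacian, and flag sign changes between
    horizontal neighbours as edge pixels."""
    smoothed = gaussian_filter(img.astype(float), sigma)
    lap = convolve(smoothed, np.array([[0, 1, 0],
                                       [1, -4, 1],
                                       [0, 1, 0]], dtype=float))
    zc = np.zeros_like(lap, dtype=bool)
    zc[:, 1:] = (np.sign(lap[:, 1:]) != np.sign(lap[:, :-1])) & \
                (np.abs(lap[:, 1:] - lap[:, :-1]) > thresh)
    return zc

img = np.zeros((8, 8)); img[:, 4:] = 255.0    # vertical step edge
print(second_derivative_edges(img, sigma=0.8).any(axis=0))
```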
Excellent selector performance is presented, such as an extremely sharp switching slope of < 5 mV/dec., a selectivity of 10^10, sub-50ns operation, > 100M endurance and a processing temperature of less than 300\u00b0C. Measurements on the 4Mb 1S1R crossbar array show that the sneak current is suppressed below 0.1nA, while maintaining a 10^2 memory on/off ratio and > 10^6 selectivity during cycling, enabling high-density memory applications."} {"_id": "e2ad147ca0342795c853ede7491f5c51ebd1d1a3", "title": "Towards detecting fake user accounts in facebook", "text": "People are highly dependent on online social networks (OSNs), which have attracted the interest of cyber criminals for carrying out a number of malicious activities. In our work, we therefore focus on detecting fake accounts on Facebook, a very popular online social network that is also difficult to collect data from. The key contributions of our work are as follows. Our first contribution is the collection of data related to real and fake accounts on Facebook. Due to strict privacy settings and the ever-evolving API of Facebook, with each version adding more restrictions, collecting user account data was a major challenge. Our second contribution is the use of user-feed information on Facebook to understand user profile activity, and the identification of an extensive set of 17 features which play a key role in discriminating fake users from real users on Facebook. Our third contribution is the use of these features to identify which of the 12 machine-learning classifiers employed perform well in the detection task. Our fourth contribution is identifying which types of activities (likes, comments, tagging, sharing, etc.) contribute the most to fake-user detection. Results exhibit a classification accuracy of 79% among the best-performing classifiers. In terms of activities, likes and comments contribute well towards the detection task. Although the accuracy is not very high, our work forms a baseline for further improvement. Our results indicate that many fake users are classified as real, clearly suggesting that fake accounts mimic real user behavior to evade detection mechanisms. Our work concludes by listing a number of future courses of action that can be undertaken."} {"_id": "de83aeaea47d40f0838d87637c4843246d1f37c0", "title": "Telemanipulation with force-based display of proximity fields", "text": "In this paper we show and evaluate the design of a novel telemanipulation system that maps proximity values, acquired inside of a gripper, to forces a user can feel through a haptic input device. The command console is complemented by input devices that give the user intuitive control over parameters relevant to the system. Furthermore, proximity sensors enable the autonomous alignment/centering of the gripper to objects in user-selected DoFs, with the potential of aiding the user and lowering the workload. 
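A sketch of the classification stage from the fake-account abstract above: per-account activity counts feeding a random forest with cross-validation. The three features and the synthetic data below are placeholders for the paper's 17 features and its collected dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic activity features per account: (likes, comments, shares).
rng = np.random.default_rng(0)
n = 400
real = np.column_stack([rng.poisson(20, n), rng.poisson(8, n), rng.poisson(3, n)])
fake = np.column_stack([rng.poisson(35, n), rng.poisson(2, n), rng.poisson(1, n)])
X = np.vstack([real, fake])
y = np.array([0] * n + [1] * n)          # 0 = real, 1 = fake

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # mean CV accuracy
```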
We evaluate our approach in a user study that shows that the telemanipulation system benefits from the supplementary proximity information and that the workload can indeed be reduced when the system operates with partial autonomy."} {"_id": "051f689825d4f118a39a286cf72888d2d1a84438", "title": "Learning to Decode Cognitive States from Brain Images", "text": "Over the past decade, functional Magnetic Resonance Imaging (fMRI) has emerged as a powerful new instrument to collect vast quantities of data about activity in the human brain. A typical fMRI experiment can produce a three-dimensional image related to the human subject's brain activity every half second, at a spatial resolution of a few millimeters. As in other modern empirical sciences, this new instrumentation has led to a flood of new data, and a corresponding need for new data analysis methods. We describe recent research applying machine learning methods to the problem of classifying the cognitive state of a human subject based on fMRI data observed over a single time interval. In particular, we present case studies in which we have successfully trained classifiers to distinguish cognitive states such as (1) whether the human subject is looking at a picture or a sentence, (2) whether the subject is reading an ambiguous or non-ambiguous sentence, and (3) whether the word the subject is viewing is a word describing food, people, buildings, etc. This learning problem provides an interesting case study of classifier learning from extremely high dimensional (10^5 features), extremely sparse (tens of training examples), noisy data. This paper summarizes the results obtained in these three case studies, as well as lessons learned about how to successfully apply machine learning methods to train classifiers in such settings."} {"_id": "7855705b976b0545067709e5aff54d9d8e2f2c02", "title": "Situation models in language comprehension and memory.", "text": "This article reviews research on the use of situation models in language comprehension and memory retrieval over the past 15 years. Situation models are integrated mental representations of a described state of affairs. Significant progress has been made in the scientific understanding of how situation models are involved in language comprehension and memory retrieval. Much of this research focuses on establishing the existence of situation models, often by using tasks that assess one dimension of a situation model. However, the authors argue that the time has now come for researchers to begin to take the multidimensionality of situation models seriously. The authors offer a theoretical framework and some methodological observations that may help researchers to tackle this issue."} {"_id": "08c370eb9ba13bfb836349e7f3ea428be4697818", "title": "Factor graphs and the sum-product algorithm", "text": "Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of \u201clocal\u201d functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph. In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes\u2014either exactly or approximately\u2014various marginal functions derived from the global function. 
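The fMRI case studies above sit in an unusual regime: roughly 10^5 voxel features and only tens of training examples. A heavily regularized linear classifier is a standard baseline there; the sketch below uses synthetic data (and a reduced 10^4 voxels to keep the demo fast) and is not the studies' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 10_000
X = rng.normal(size=(n_trials, n_voxels))       # synthetic voxel activations
y = np.array([0, 1] * (n_trials // 2))           # e.g., picture vs. sentence
X[y == 1, :50] += 0.8                            # a few informative voxels

# Strong L2 regularization (small C) to cope with p >> n.
clf = LogisticRegression(penalty="l2", C=0.01, max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```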
A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative \u201cturbo\u201d decoding algorithm, Pearl\u2019s belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms."} {"_id": "a2ca4a76a7259da6921ab41eae8858513cbb1af1", "title": "Survey propagation: An algorithm for satisfiability", "text": "We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N the problem is known to be most difficult when \u03b1 = M/N is close to the experimental threshold \u03b1c separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when \u03b1 is close to (but smaller than) \u03b1c. We introduce a new type of message passing algorithm which allows us to efficiently find a satisfying assignment of the variables in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: It passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic."} {"_id": "0e8933300a20f3d799dc9f19e352967f41d8efcc", "title": "The generalized distributive law", "text": "In this semitutorial paper we discuss a general message passing algorithm, which we call the generalized distributive law (GDL). The GDL is a synthesis of the work of many authors in the information theory, digital communications, signal processing, statistics, and artificial intelligence communities. It includes as special cases the Baum\u2013Welch algorithm, the fast Fourier transform (FFT) on any finite Abelian group, the Gallager\u2013Tanner\u2013Wiberg decoding algorithm, Viterbi\u2019s algorithm, the BCJR algorithm, Pearl\u2019s \u201cbelief propagation\u201d algorithm, the Shafer\u2013Shenoy probability propagation algorithm, and the turbo decoding algorithm. Although this algorithm is guaranteed to give exact answers only in certain cases (the \u201cjunction tree\u201d condition), unfortunately not including the cases of GTW with cycles or turbo decoding, there is much experimental evidence, and a few theorems, suggesting that it often works approximately even when it is not supposed to."} {"_id": "157218bae792b6ef550dfd0f73e688d83d98b3d7", "title": "A recursive approach to low complexity codes", "text": "A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. 
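A minimal concrete instance of the sum-product algorithm described above: on a three-variable chain factor graph x1 - f12 - x2 - f23 - x3, one forward and one backward message reproduce the exact marginal obtained by brute-force summation. The factor tables are arbitrary.

```python
import numpy as np

K = 2
f12 = np.array([[2.0, 1.0], [1.0, 3.0]])   # factor over (x1, x2)
f23 = np.array([[1.0, 2.0], [4.0, 1.0]])   # factor over (x2, x3)

# Messages into x2 from both sides of the chain (leaf messages are ones).
m_f12_x2 = f12.T @ np.ones(K)               # sum over x1
m_f23_x2 = f23 @ np.ones(K)                 # sum over x3
marg_x2 = m_f12_x2 * m_f23_x2
marg_x2 /= marg_x2.sum()

# Brute-force check: marginalize the full product over x1 and x3.
joint = f12[:, :, None] * f23[None, :, :]
bf = joint.sum(axis=(0, 2)); bf /= bf.sum()
print(np.allclose(marg_x2, bf))             # True
```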
Both the encoders and decoders proposed are shown to take advantage of the code\u2019s explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles are illustrated by several examples."} {"_id": "5af12524db0186bcdcafb7a342556d421cf342bf", "title": "Exploiting Passive Dynamics with Variable Stiffness Actuation in Robot Brachiation", "text": "This paper explores a passive control strategy with variable stiffness actuation for swing movements. We consider brachiation as an example of a highly dynamic task which requires exploitation of gravity in an efficient manner for successful task execution. First, we present our passive control strategy considering a pendulum with variable stiffness actuation. Then, we formulate the problem based on an optimal control framework with temporal optimization in order to simultaneously find an appropriate stiffness profile and movement duration such that the resultant movement will be able to exploit the passive dynamics of the robot. Finally, numerical evaluations on a two-link brachiating robot with a variable stiffness actuator (VSA) model are provided to demonstrate the effectiveness of our approach under different task requirements, modelling errors and switching in the robot dynamics. In addition, we discuss the issue of task description in terms of the choice of cost function for successful task execution in optimal control."} {"_id": "3056721cc51032f84791a2d7469601afb65b12ee", "title": "Arabic Script Recognition : A Survey", "text": "Optical character recognition (OCR) is essential in various real-world applications, such as digitizing learning resources to assist visually impaired people and transforming printed resources into electronic media. However, the development of OCR for printed Arabic script is a challenging task. These challenges are due to the specific characteristics of Arabic script. Therefore, different methods have been proposed for developing Arabic OCR systems, and this paper aims to provide a comprehensive review of these methods. This paper also discusses relevant issues of printed Arabic OCR including the challenges of printed Arabic script and performance evaluation. It concludes with a discussion of the current status of printed Arabic OCR, analyzing the remaining problems in the field of printed Arabic OCR and providing several directions for future research. Keywords\u2014Optical character recognition; Arabic printed OCR; Arabic text recognition; Arabic OCR survey; feature extraction; segmentation; classification"} {"_id": "2e87c0613077f7faeee45bbf5f9d209017e30771", "title": "SIFT-based local spectrogram image descriptor: a novel feature for robust music identification", "text": "Music identification via audio fingerprinting has been an active research field in recent years. 
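In the spirit of the graph-based code construction above, where each check is a trivial subcode constraint on a subset of the digits, here is a Gallager-style bit-flipping decoder on a toy parity-check matrix. This is a hard-decision sketch; the paper's decoders additionally exploit soft reliability information from the receiver.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],     # each row: one parity "subcode"
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(y, max_iters=10):
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                   # all checks satisfied
        # Flip the bit involved in the most unsatisfied checks.
        votes = H.T @ syndrome
        y[np.argmax(votes)] ^= 1
    return y

code = np.array([1, 1, 0, 0, 1, 1])    # valid codeword: H @ code % 2 == 0
recv = code.copy(); recv[2] ^= 1        # single bit error
print(bit_flip_decode(recv))            # recovers the codeword
```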
In the real-world environment, music queries are often deformed by various interferences, which typically include signal distortions and time-frequency misalignments caused by time stretching, pitch shifting, etc. Therefore, robustness plays a crucial role in music identification techniques. In this paper, we propose to use scale invariant feature transform (SIFT) local descriptors computed from a spectrogram image as sub-fingerprints for music identification. Experiments show that these sub-fingerprints exhibit strong robustness against serious time stretching and pitch shifting simultaneously. In addition, a locality sensitive hashing (LSH)-based nearest sub-fingerprint retrieval method and a matching determination mechanism are applied for robust sub-fingerprint matching, which makes the identification efficient and precise. Finally, as an auxiliary function, we demonstrate that by comparing the time-frequency locations of corresponding SIFT keypoints, the factor of time stretching and pitch shifting that music queries might have experienced can be accurately estimated."} {"_id": "2f5f689cb289878543ae86c78b809ab924983e46", "title": "Improving the Coverage and Spectral Efficiency of Millimeter-Wave Cellular Networks Using Device-to-Device Relays", "text": "The susceptibility of millimeter-wave propagation to blockages limits the coverage of millimeter-wave (mmWave) signals. To overcome blockages, we propose to leverage two-hop device-to-device (D2D) relaying. Using stochastic geometry, we derive expressions for the downlink coverage probability of relay-assisted mmWave cellular networks when the D2D links are implemented in either uplink mmWave or uplink microwave bands. We further investigate the spectral efficiency (SE) improvement in the cellular downlink, and the effect of D2D transmissions on the cellular uplink. For mmWave links, we derive the coverage probability using dominant interferer analysis while accounting for both blockages and beamforming gains. For microwave D2D links, we derive the coverage probability considering both line-of-sight and non-line-of-sight (NLOS) propagation. Numerical results show that downlink coverage and SE can be improved using two-hop D2D relaying. Specifically, microwave D2D relays achieve better coverage because D2D connections can be established under NLOS conditions. However, mmWave D2D relays achieve better coverage when the density of interferers is large because blockages eliminate interference from NLOS interferers. The SE on the downlink depends on the relay mode selection strategy, and mmWave D2D relays use a significantly smaller fraction of uplink resources than microwave D2D relays."} {"_id": "a0cc11865ca9f456724eacb711550bd49fa507d1", "title": "From Characters to Understanding Natural Language ( C 2 NLU ) : Robust End-to-End Deep Learning for NLP Organized by", "text": "This report documents the program and the outcomes of Dagstuhl Seminar 17042 \u201cFrom Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP\u201d. The seminar brought together researchers from different fields, including natural language processing, computational linguistics, deep learning and general machine learning. 31 participants from 22 academic and industrial institutions discussed advantages and challenges of using characters, i.e., \u201craw text\u201d, as input for deep learning models instead of language-specific tokens. 
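A sketch of the fingerprint-extraction front end from the SIFT-spectrogram abstract above: render a log-magnitude spectrogram as an 8-bit image and run SIFT on it. Assumes OpenCV >= 4.4 (where cv2.SIFT_create is available) and substitutes a synthetic sweep for real audio; the LSH matching stage described in the abstract is omitted.

```python
import numpy as np
import cv2
from scipy.signal import spectrogram

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
audio = np.sin(2 * np.pi * (440 + 200 * t) * t)     # synthetic chirp

f, ts, S = spectrogram(audio, fs=fs, nperseg=512, noverlap=384)
img = 10 * np.log10(S + 1e-12)                       # log-magnitude
img = (img - img.min()) / (img.max() - img.min()) * 255
img = img.astype(np.uint8)                           # spectrogram image

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
# Each 128-d descriptor plus its (time, frequency) location would serve
# as one sub-fingerprint for LSH-based nearest-neighbour matching.
```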
Eight talks provided overviews of different topics, approaches and challenges in current natural language processing research. In five working groups, the participants discussed current natural language processing/understanding topics in the context of character-based modeling, namely, morphology, machine translation, representation learning, end-to-end systems and dialogue. In most of the discussions, the need for a more detailed model analysis was pointed out. Especially for character-based input, it is important to analyze what a deep learning model is able to learn about language \u2013 about tokens, morphology or syntax in general. For an efficient and effective understanding of language, it might furthermore be beneficial to share representations learned from multiple objectives to enable the models to focus on their specific understanding task instead of needing to learn syntactic regularities of language first. Therefore, benefits and challenges of transfer learning were an important topic of the working groups as well as of the panel discussion and the final plenary discussion. Seminar January 22\u201327, 2017 \u2013 http://www.dagstuhl.de/17042. 1998 ACM Subject Classification: I.2 Artificial Intelligence, I.2.7 Natural Language Processing"} {"_id": "ccb0c0394bca75a59b2a265acdaaf47b8f86c7ce", "title": "Intentional behaviour in dog-human communication: an experimental analysis of \u201cshowing\u201d behaviour in the dog", "text": "Despite earlier scepticism there is now evidence for simple forms of intentional and functionally referential communication in many animal species. Here we investigate whether dogs engage in functional referential communication with their owners. \u201cShowing\u201d is defined as a communicative action consisting of both a directional component related to an external target and an attention-getting component that directs the attention of the perceiver to the informer or sender. In our experimental situation dogs witness the hiding of a piece of food (or a favourite toy) which they cannot get access to. We asked whether dogs would engage in \u201cshowing\u201d in the presence of their owner. To control for the motivational effects of both the owner and the food on the dogs\u2019 behaviour, control observations were also staged where only the food (or the toy) or the owner was present. Dogs\u2019 gazing frequency at both the food (toy) and the owner was greater than when only one of these was present. In other words, dogs looked more frequently at their owner when the food (toy) was present, and they looked more at the location of the food (toy) when the owner was present. When both the food (toy) and the owner were present a new behaviour, \u201cgaze alternation\u201d, emerged which was defined as changing the direction of the gaze from the location of the food (toy) to looking at the owner (or vice versa) within 2 s. Vocalisations that occurred in this phase were always associated with gazing at the owner or the location of the food. This behaviour, which was specific to this situation, has also been described in chimpanzees, a gorilla and humans, and has often been interpreted as a form of functionally referential communication. Based on our observations we argue that dogs might be able to engage in functionally referential communication with their owner, and their behaviour could be described as a form of \u201cshowing\u201d. 
The contribution of domestication and individual learning to the well-developed communicative skills in dogs is discussed and will be the subject of further studies."} {"_id": "f4c5f7bdf3f7ce924cd42f26d2a9eb97ab8da4a3", "title": "Full-resolution interactive CPU volume rendering with coherent BVH traversal", "text": "We present an efficient method for volume rendering by raycasting on the CPU. We employ coherent packet traversal of an implicit bounding volume hierarchy, heuristically pruned using preintegrated transfer functions, to exploit empty or homogeneous space. We also detail SIMD optimizations for volumetric integration, trilinear interpolation, and gradient lighting. The resulting system performs well on low-end and laptop hardware, and can outperform out-of-core GPU methods by orders of magnitude when rendering large volumes without level-of-detail (LOD) on a workstation. We show that, while slower than GPU methods for low-resolution volumes, an optimized CPU renderer does not require LOD to achieve interactive performance on large data sets."} {"_id": "6b8966f0e8f368fea84d6c3e135011fc2b050312", "title": "Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: is there a \u201cface\u201d area?", "text": "Haxby et al. [Science 293 (2001) 2425] recently argued that category-related responses in the ventral temporal (VT) lobe during visual object identification were overlapping and distributed in topography. This observation contrasts with prevailing views that object codes are focal and localized to specific areas such as the fusiform and parahippocampal gyri. We provide a critical test of Haxby's hypothesis using a neural network (NN) classifier that can detect more general topographic representations and achieves 83% correct generalization performance on patterns of voxel responses in out-of-sample tests. Using voxel-wise sensitivity analysis we show that substantially the same VT lobe voxels contribute to the classification of all object categories, suggesting the code is combinatorial. Moreover, we found no evidence for local single category representations. The neural network representations of the voxel codes were sensitive to both category and superordinate level features that were only available implicitly in the object categories."} {"_id": "03acfc428d38e4e6136513658695cfc1956b5945", "title": "Machine-Learning Research: Four Current Directions", "text": "Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are: (a) improving classification accuracy by learning ensembles of classifiers; (b) methods for scaling up supervised learning algorithms; (c) reinforcement learning; and (d) learning complex stochastic models"} {"_id": "5b93ae450e65b6ae70c58dad3a9e754161374b50", "title": "A Study of Fonts Designed for Screen Display", "text": "This study examined the readability and subjective preferences of a set of fonts designed for screen display. Two new binary bitmap fonts performed well, suggesting that designers should consider incorporating similar attributes into default fonts for online type."} {"_id": "965f8bb9a467ce9538dec6bef57438964976d6d9", "title": "Recognizing human faces under disguise and makeup", "text": "The accuracy of automated human face recognition algorithms can significantly degrade while recognizing the same subjects under make-up and disguised appearances. 
Increasing constraints on enhanced security and surveillance require enhanced accuracy from face recognition algorithms for faces under disguise and/or makeup. This paper presents a new database of face images under disguised and make-up appearances to support the development of face recognition algorithms under such covariates. This database has 2460 images from 410 different subjects, is acquired in a real environment, focuses on make-up and disguise covariates, and also provides ground truth (eye glasses, goggles, mustache, beard) for every image. This can enable developed algorithms to automatically quantify their capability for identifying such important disguise attributes during face recognition. We also present comparative experimental results from two popular commercial matchers and from recent publications. Our experimental results suggest significant performance degradation in the capability of these matchers in automatically recognizing these faces. We also analyze face detection accuracy from these matchers. The experimental results underline the challenges in recognizing faces under these covariates. Availability of this new database in the public domain will help to advance much needed research and development in recognizing make-up and disguised faces."} {"_id": "fe8a9985d8b2df7a4d78e02beaa234084e3790b9", "title": "Saddles in Deep Networks: Background and Motivation", "text": "Recent years have seen a growing interest in understanding deep neural networks from an optimization perspective. It is understood now that converging to low-cost local minima is sufficient for such models to become effective in practice. However, in this work, we propose a new hypothesis based on recent theoretical findings and empirical studies that deep neural network models actually converge to saddle points with high degeneracy. Our findings from this work are new, and can have a significant impact on the development of gradient descent based methods for training deep networks. We validated our hypotheses using an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also showed that recent efforts that attempt to escape saddles finally converge to saddles with high degeneracy, which we define as \u2018good saddles\u2019. We also verified the famous Wigner\u2019s Semicircle Law in our experimental results."} {"_id": "b044d249455e315a9642b2265ede035c4d9b390f", "title": "The happy and unhappy faces of narcissism", "text": "Several theorists have argued in favor of a distinction between overt and covert narcissism, and factor analytic studies have supported this distinction. In this paper I demonstrate that overt narcissists report higher self-esteem and higher satisfaction with life, whereas covert narcissists report lower self-esteem and lower satisfaction with life. I also present mediational models to explain why overt narcissists are relatively happy and covert narcissists are relatively unhappy. In analyses using both partial correlations and structural equation modeling, self-esteem consistently mediated the associations between both types of narcissism and happiness, whereas self-deception did not. These results further demonstrate some of the self-centered benefits associated with overt narcissism and some of the strong psychological costs associated with covert narcissism. \u00a9 2002 Elsevier Science Ltd. 
All rights reserved."} {"_id": "f8e32c5707df46bfcd683f723ad27d410e7ff37d", "title": "Some Comments on Cp", "text": ""} {"_id": "7a87289fb660d729bf0d9c9417ffb58cdea15450", "title": "A Research on Iron Loss of IPMSM With a Fractional Number of Slot Per Pole", "text": "In this paper, we investigated rotor iron loss of an interior permanent magnet synchronous machine (IPMSM), which has distributed armature windings. In order to study the iron loss with the effect of slot-pole combination, two machines for high-speed operation, such as electric vehicles, were designed. One uses a fractional slot winding and the other a conventional one. In the analysis, we developed a new iron loss model of electrical machines for high-speed operation. The calculated iron loss was compared with the experimental data. It was clarified that the proposed method can estimate iron loss effectively at high-speed operation. Based on this newly proposed method, we analyzed iron loss of the two machines according to their driving conditions. From the analysis results, it was shown that the rotor iron loss of the machine employing the fractional slot winding is considerably large under load conditions. In addition, it is interesting that the ratio (rotor iron loss/stator iron loss) becomes larger as the speed of the machines increases if the number of slots per pole is fractional."} {"_id": "89fb51f228b2e466cf690028ec96940f3d7b4fb0", "title": "Accelerometer based wireless air mouse using Arduino micro-controller board", "text": "With the day-to-day advancements in technology, the gap between humans and the digital world is diminishing. A lot of improvement has taken place in mobile technology, from button keypads to touch screens. However, current PCs still need a pad to operate their mouse and are wired most of the time. The idea is to develop a wireless mouse which works on the hand gestures of the user without the need of a pad, making for effortless interaction between the human and the computer. The implementation is done using a sensor called an accelerometer to sense the hand gestures. An accelerometer is a motion sensor which senses changes in motion along any of the three axes. Here, the accelerometer is on the user\u2019s side, attached to the hand to sense the movement, and gives the output to a micro-controller to process it. Necessary modifications are made by the micro-controller to these values, which are then transmitted through an RF module to the PC. At the receiving end, a mouse control program, which contains functions to control the mouse, reads these values and performs the necessary action."} {"_id": "7ff77bd352aceaea250bab51d0c09e5b4ed7fc25", "title": "Multisensor Fusion and Integration: A Review on Approaches and Its Applications in Mechatronics", "text": "The objective of this paper is to review the theories and approaches of multisensor fusion and integration (MFI) with its application in mechatronics. MFI helps a system perceive changes in the environment and monitor the system itself. Since each individual sensor has its own inherent defects and limitations, MFI merges the redundant information acquired by multiple sensors synergistically to provide a more accurate perception and make optimal decisions. The wide application spectrum of MFI in mechatronics includes industrial automation, the development of intelligent robots, military applications, biomedical applications, etc. 
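The fusion idea just described can be made concrete with the simplest estimation-level method, inverse-variance weighting of two noisy readings of the same quantity. This is a minimal sketch of our own, not an algorithm taken from the survey; the sensor values and variances are invented.

```python
# Minimal sketch of estimation-level sensor fusion: two noisy sensors
# measuring the same quantity are combined by inverse-variance weighting,
# which yields a lower-variance estimate than either sensor alone.
def fuse(z1, var1, z2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Example: a precise lidar-like sensor and a noisier sonar-like sensor.
estimate, variance = fuse(z1=10.2, var1=0.04, z2=9.6, var2=0.25)
print(estimate, variance)  # ~10.12, ~0.034 -- better than either sensor alone
```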
In this paper, the architecture and algorithms of MFI are reviewed, and some implementation examples in industrial automation and robotic applications are presented. Furthermore, sensor fusion methods at different levels, namely estimation methods, classification methods, and inference methods (the algorithms most frequently used in previous research), are summarized along with their advantages and limitations. Applications of MFI in robotics and mechatronics are discussed. Future perspectives of MFI deployment are included in the concluding remarks."} {"_id": "eb639811df16f49bbe44948a9bf6e937eef47d8b", "title": "E-shopping in a multiple channel environment", "text": "In the present study, the authors propose a segmentation schema based on patterns of e-browsing and e-purchasing. We examine self-reports of browsing and purchasing using five specific non-store channels: the Internet, television infomercials, advertising that accompanies regular television programming, television shopping channels, and print catalogs. Our findings indicate that shoppers who browse and/or purchase on the Internet differ in their use of multi-channel options related to their perceptions of convenience. Some shoppers clearly want to purchase in the store setting and reject multiple forms of non-store shopping. Others like to browse various non-store media and have extended their browsing to the Internet, yet maintain their loyalty to in-store purchases. Retailers who attempt to \u201cconvert\u201d such shoppers to Internet-only purchasing may alienate the shoppers who rely on the Internet solely for information. Introduction. The explosive growth of the Internet has revolutionized many aspects of daily life (Fetto, 1999; Rutledge, 2000). Recent statistics tell us that people the world over are using the Internet in ever-increasing numbers, with estimates ranging from 505 million (Global Reach, 2001) to 513.41 million people online throughout the world (NUA Ltd, 2001). There is much to be learned about how the Internet fits in people\u2019s lives, how they use it as part of a set of choices, and what deters them from using it for certain purposes, such as making purchases. Despite increased use of the Web, recent industry studies have documented problems such as an ongoing trend in online shopping cart \u201cabandonment\u201d in which apparent planned purchases are never completed online (Hurwicz, 1999). In fact, substantial numbers of online shoppers return to physical stores after experiencing problems with slow load times, an inability to locate items, incomplete information, lack of human interaction, and missed or late deliveries (Mardesich, 1999; McCarthy, 2000). Failures with account setups and confusing error messages also caused about 40 percent of shoppers to have problems during checkout (Enos, 2000). While consumers have verified these reasons, we argue that another reason may explain the apparent abandonment that takes place. That is, some Internet browsers may have never intended to complete their purchases online, preferring to shop in a bricks and mortar setting. Perhaps the notion of \u201cabandonment\u201d is an oversimplification. Some consumers may simply use shopping carts to investigate and tally possible future purchases, with no intent to purchase at the specific time that they are online. 
The purpose of the present study is to empirically investigate the connections among Internet \u201cuse\u201d and other non-store media options in a multi-channel environment. In the present study, the authors propose a segmentation schema based on patterns of e-browsing and e-purchasing, including browsing on the Internet with planned purchasing in an offline channel. Using that schema, our study examines the self-reports of browsing and purchasing of 250 Internet users with respect to five specific non-store media: the Internet, television infomercials, advertising that accompanies regular television programming, television shopping channels, and print catalogs. Our findings indicate that shoppers who browse and/or purchase on the Internet differ in their use of multi-channel options related to their perceptions of convenience. Some shoppers clearly want to purchase in the store setting and reject multiple forms of non-store shopping. Others like to browse various non-store media and have extended their browsing to the Internet, yet maintain their loyalty to in-store purchases. Background: the growth and potential of the Internet. There is a need for a \u201cbetter understanding of the Web user\u201d and e-shopping (Korgaonkar and Wolin, 1999). Current studies vary in predicting the characteristics of Web users, ranging from male, well-educated, middle income and middle-aged or younger (Emmanouilides and Hammond, 2000; Korgaonkar and Wolin, 1999; Wilson, 2000), to substantial increases among women and the elderly (Harris, 1998; Rosen and Howard, 2000; Rosen and Weil, 1995). Recent studies by Media Metrix and Jupiter Communications indicate that more women are online than men, with the greatest increases among teenaged girls and women over 55 (Hamilton, 2000). Many studies have found that typical online buyers have used the Web for several years, and because of their familiarity, they search online for product information and purchase options (Bellman et al., 1999). In many cases, convenience and time-management are the key drivers among e-shoppers (Chang and McFarland, 1999), while some \u201cwired but wary\u201d use the Internet for e-mail, but do not shop online. Still others have no regular Internet access at all. Many studies examine Internet shopping exclusively in determining consumer usage and satisfaction (Szymanski and Hise, 2000; VanTassel and Weitz, 1997). However, it is not likely that shoppers view the Internet as separate and distinct from other more familiar forms of shopping. We argue that it is more realistic to examine e-shopping in the context of multi-channel alternatives that are also available to shoppers (Grant, 2000; Greco, 1996). The interrelationships among various types of non-store methods are not well understood, nor are their impacts on \u201cbricks and mortar\u201d retail outlets (Achenbaum, 1999). 
We also contend that insights can be gained by organizing the study of e-shopping based on sound theoretical relationships that have been used to describe other components of the multiple channel environment. We turn to the literature on recreational shopping and convenience in order to build a foundation for the study. Browsing and purchasing. Some studies have attempted to understand whether browsing on the Internet is correlated with purchasing on the Internet (Lindquist and Kaufman-Scarborough, 2000). It is questionable whether a perfect match is possible, since some shoppers enjoy browsing as a separate activity, while others buy without browsing if their choice is clear and determined in advance. Because of problems like security fears, lack of skill with computers, slow response time by e-tailers, and confusing Web sites, studies report that numerous shoppers use non-store methods to search and compare, while going to the \u201cbricks and mortar\u201d setting to make their purchases (Koprowski, 2000; Levy and Nilson, 1999). Earlier pre-Internet studies investigated why people shop, why they go to the store, and why they look but do not buy. For instance, Tauber (1972, 1995) went beyond retail patronage and demonstrated that people have numerous motives for shopping that are unrelated to the actual purchasing of products. The shoppers in his sample reported that their shopping trips included carrying out expected roles, diversion from daily routine, self-gratification and response to moods, learning about new trends, physical activity, sensory stimulation, meeting others with similar interests, interaction with peer groups, and the pleasure of bargaining. Such motives are likely to result in browsing that does not necessarily lead to purchasing. Other studies have identified persons as \u201crecreational shoppers\u201d, who enjoy shopping as a leisure activity and tend to browse in retail outlets \u201cwithout an upcoming purchase in mind\u201d (Bellinger and Korgaonkar, 1980; Ohanian and Tashchian, 1992). Such shoppers report being interested in gaining knowledge about specific product classes and actively seek information about topics such as merchandise, prices, and quality. Such shoppers do not generally gather such information in preparation for an upcoming purchase, but instead appear to enjoy gathering information for its own sake. They engage in word-of-mouth activities more than other shoppers (Bloch and Richins, 1983), and enjoy giving advice and influencing other consumers. These earlier researchers explicitly considered browsing and shopping behaviors in the \u201cbricks and mortar\u201d setting. We suggest that \u201crecreational e-shoppers\u201d are also likely to virtually \u201cstroll\u201d through online shopping sites for learning, social, or diversion-related purposes. Recreational e-shoppers may also enjoy gathering online information and share their knowledge through online chat rooms and buyer forums. If a comparable pattern of information gathering and sharing exists, browsers who do not \u201cconvert\u201d to purchasers may be found to exhibit similar characteristics to in-store recreational shoppers who may be lonely, bored, or simply curious. 
Browsing convenience versus purchasing convenience. Surveys of customers indicate their frustration with the lack of convenience provided by \u201cbricks and mortar\u201d stores. They report problems with crowded store conditions, out-of-stock merchandise, and poorly-trained salespersons, prompting shoppers to search for more favorable ways to browse and to purchase. In fact, retailers have been criticized for developing in-store strategies based on their own convenience, rather than that of their customers (Seiders et al., 2000). Convenience is a more complex notion than simply providing quick checkouts or locations close to home. In fact, shoppers are thought to clearly differentiate among various dimensions of convenience."} {"_id": "effaec031a97ae64f965d0f0616438774fc11bc5", "title": "A Neural Network Color Classifier in HSV Color Space", "text": "In this paper, a neural network approach for color classification in HSV color space based on color JND (just noticeable difference) is presented. The HSV color samples are generated using an interactive tool based on the JND concept, and these samples are used for supervised training. A four-layer feed-forward neural network is trained for classifying a given color pair as a similar or dissimilar pair. An interactive tool for color pixel comparison is developed for testing the color classifier on real-life images. Research shows that a neural network classifier in HSV color space works better than an RGB classifier in terms of efficiency for segmentation purposes. Thus, the experimental results can be used for segmenting real-life images."} {"_id": "29d0c439241e65c51e19f7bd9430f50e900a6e32", "title": "F-DES: Fast and Deep Event Summarization", "text": "In the multimedia era, a large volume of video data can be recorded during a certain period of time by multiple cameras. Such a rapid growth of video data requires both effective and efficient multiview video summarization techniques, so that users can quickly browse and comprehend a large amount of audiovisual data. It is very difficult in real time to manage and access such a huge amount of video content while handling issues of inter-view dependencies, significant variations in illumination, and the presence of many unimportant frames with low activity. In this paper we propose a local-alignment-based FASTA approach to summarize the events in multiview videos as a solution to the aforementioned problems. A deep learning framework is used to extract the features to resolve the problem of variations in illumination and to remove fine texture details and detect the objects in a frame. Inter-view dependencies among multiple views of video are then captured via the FASTA algorithm through local alignment. Finally, object tracking is applied to extract the frames with low activity. Subjective as well as objective evaluations clearly indicate the effectiveness of the proposed approach. Experiments show that the proposed summarization method successfully reduces the video content while keeping momentous information in the form of events. A computational analysis of the system also shows that it meets the requirement of real-time applications."} {"_id": "c504c88dbea0c1fe9383710646c8180ef44b9bc9", "title": "Image segmentation via adaptive K-mean clustering and knowledge-based morphological operations with biomedical applications", "text": "Image segmentation remains one of the major challenges in image analysis. 
In medical applications, skilled operators are usually employed to extract the desired regions that may be anatomically separate but statistically indistinguishable. Such manual processing is subject to operator errors and biases, is extremely time consuming, and has poor reproducibility. We propose a robust algorithm for the segmentation of three-dimensional (3-D) image data based on a novel combination of adaptive K-mean clustering and knowledge-based morphological operations. The proposed adaptive K-mean clustering algorithm is capable of segmenting the regions of smoothly varying intensity distributions. Spatial constraints are incorporated in the clustering algorithm through the modeling of the regions by Gibbs random fields. Knowledge-based morphological operations are then applied to the segmented regions to identify the desired regions according to the a priori anatomical knowledge of the region-of-interest. This proposed technique has been successfully applied to a sequence of cardiac CT volumetric images to generate the volumes of left ventricle chambers at 16 consecutive temporal frames. Our final segmentation results compare favorably with the results obtained using manual outlining. Extensions of this approach to other applications can be readily made when a priori knowledge of a given object is available."} {"_id": "c2912ac3c918f3dbd997e2a3454b6b8b6b17c37f", "title": "UTILIZING A GAME THEORETICAL APPROACH TO PREVENT COLLUSION AND INCENTIVIZE COOPERATION IN CYBERSECURITY CONTEXTS", "text": "Author: Arash Golchubian. Title: Utilizing a Game Theoretical Approach to Prevent Collusion and Incentivize Cooperation in Cybersecurity Contexts. Institution: Florida Atlantic University. Thesis Advisor: Dr. Mehrdad Nojoumian. Degree: Master of Science. Year: 2017. In this research, a new reputation-based model is utilized to disincentivize collusion of defenders and attackers in Software Defined Networks (SDN), and also to disincentivize dishonest mining strategies in Blockchain. In the context of SDN, the model uses the reputation values assigned to each entity to disincentivize collusion with an attacker. Our analysis shows that not-colluding actions become a Nash Equilibrium using the reputation-based model within a repeated game setting. In the context of Blockchain and mining, we illustrate that by using the same socio-rational model, miners not only are incentivized to conduct honest mining but also disincentivized to commit to any malicious activities against other mining pools. We therefore show that honest mining strategies become a Nash Equilibrium in our setting. This thesis is laid out in the following manner. In chapter 2, an introduction to game theory is provided, followed by a survey of previous works in game-theoretic network security; in chapter 3, a new reputation-based model is introduced to be used within the context of a Software Defined Network (SDN); in chapter 4, a reputation-based solution concept is introduced to force cooperation by each mining entity in Blockchain; and finally, in chapter 5, the concluding remarks and future works are presented.
"} {"_id": "1679710eb9fb5d004cfe2e8e7aa322cc3873647e", "title": "Curing regular expressions matching algorithms from insomnia, amnesia, and acalculia", "text": "The importance of network security has grown tremendously and a collection of devices have been introduced, which can improve the security of a network. Network intrusion detection systems (NIDS) are among the most widely deployed such systems; popular NIDS use a collection of signatures of known security threats and viruses, which are used to scan each packet's payload. Today, signatures are often specified as regular expressions; thus the core of a NIDS comprises a regular expression parser; such parsers are traditionally implemented as finite automata. Deterministic Finite Automata (DFA) are fast; therefore, they are often desirable at high network link rates. However, DFAs for the signatures used in current security devices require prohibitive amounts of memory, which limits their practical use.\n In this paper, we argue that the traditional DFA-based NIDS has three main limitations: first, they fail to exploit the fact that normal data streams rarely match any virus signature; second, DFAs are extremely inefficient in following multiple partially matching signatures and explode in size; and third, finite automata are incapable of efficiently keeping track of counts. We propose mechanisms to solve each of these drawbacks and demonstrate that our solutions can implement a NIDS much more securely and economically, and at the same time substantially improve the packet throughput."} {"_id": "46e49432fc6c360376cad11367c6d2411fb3e0ea", "title": "On the hardness of approximating minimum vertex cover", "text": "We prove the Minimum Vertex Cover problem to be NP-hard to approximate to within a factor of 1.3606, extending previous PCP and hardness-of-approximation techniques. To that end, one needs to develop a new proof framework, and to borrow and extend ideas from several fields."} {"_id": "0ca2b92a4f992b35683c7fffcd49b4c883772a29", "title": "CAWA: Coordinated warp scheduling and Cache Prioritization for critical warp acceleration of GPGPU workloads", "text": "The ubiquity of graphics processing unit (GPU) architectures has made them efficient alternatives to chip-multiprocessors for parallel workloads. GPUs achieve superior performance by making use of massive multi-threading and fast context-switching to hide pipeline stalls and memory access latency. However, recent characterization results have shown that general purpose GPU (GPGPU) applications commonly encounter long stall latencies that cannot be easily hidden with the large number of concurrent threads/warps. This results in varying execution time disparity between different parallel warps, hurting the overall performance of GPUs -- the warp criticality problem.\n To tackle the warp criticality problem, we propose a coordinated solution, criticality-aware warp acceleration (CAWA), that efficiently manages compute and memory resources to accelerate the critical warp execution. Specifically, we design (1) an instruction-based and stall-based criticality predictor to identify the critical warp in a thread-block, (2) a criticality-aware warp scheduler that preferentially allocates more time resources to the critical warp, and (3) a criticality-aware cache reuse predictor that assists critical warp acceleration by retaining latency-critical and useful cache blocks in the L1 data cache. 
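To make the criticality predictor just enumerated concrete, here is a toy software sketch, assuming a simple linear combination of remaining instructions and accumulated stall cycles; the weights and warp fields are our own illustrative assumptions, and the real CAWA predictor is a hardware mechanism rather than the code below.

```python
# Toy sketch of instruction- and stall-based warp criticality scoring.
# Weights and fields are illustrative assumptions, not the paper's exact design.
def criticality(warp, w_inst=1.0, w_stall=2.0):
    remaining = warp["total_inst"] - warp["executed_inst"]
    return w_inst * remaining + w_stall * warp["stall_cycles"]

warps = [
    {"id": 0, "total_inst": 1000, "executed_inst": 900, "stall_cycles": 40},
    {"id": 1, "total_inst": 1000, "executed_inst": 400, "stall_cycles": 10},
    {"id": 2, "total_inst": 1000, "executed_inst": 700, "stall_cycles": 300},
]

critical = max(warps, key=criticality)
print("prioritize warp", critical["id"])  # the scheduler would favor this warp
```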
CAWA aims to remove the significant execution time disparity in order to improve resource utilization for GPGPU workloads. Our evaluation results show that, under the proposed coordinated scheduler and cache prioritization management scheme, the performance of the GPGPU workloads can be improved by 23%, while other state-of-the-art schedulers, GTO and 2-level schedulers, improve performance by 16% and -2%, respectively."} {"_id": "180189c3e8b0f783a8df6a1887a94a5e3f82148b", "title": "Value Locality and Load Value Prediction", "text": "Since the introduction of virtual memory demand-paging and cache memories, computer systems have been exploiting spatial and temporal locality to reduce the average latency of a memory reference. In this paper, we introduce the notion of value locality, a third facet of locality that is frequently present in real-world programs, and describe how to effectively capture and exploit it in order to perform load value prediction. Temporal and spatial locality are attributes of storage locations, and describe the future likelihood of references to those locations or their close neighbors. In a similar vein, value locality describes the likelihood of the recurrence of a previously-seen value within a storage location. Modern processors already exploit value locality in a very restricted sense through the use of control speculation (i.e. branch prediction), which seeks to predict the future value of a single condition bit based on previously-seen values. Our work extends this to predict entire 32- and 64-bit register values based on previously-seen values. We find that, just as condition bits are fairly predictable on a per-static-branch basis, full register values being loaded from memory are frequently predictable as well. Furthermore, we show that simple microarchitectural enhancements to two modern microprocessor implementations (based on the PowerPC 620 and Alpha 21164) that enable load value prediction can effectively exploit value locality to collapse true dependencies, reduce average memory latency and bandwidth requirements, and provide measurable performance gains."} {"_id": "67bf737ceccf387cdd05c379487da8301f55e93d", "title": "Neither more nor less: Optimizing thread-level parallelism for GPGPUs", "text": "General-purpose graphics processing units (GPGPUs) are at their best in accelerating computation by exploiting abundant thread-level parallelism (TLP) offered by many classes of HPC applications. To facilitate such high TLP, emerging programming models like CUDA and OpenCL allow programmers to create work abstractions in terms of smaller work units, called cooperative thread arrays (CTAs). CTAs are groups of threads and can be executed in any order, thereby providing ample opportunities for TLP. The state-of-the-art GPGPU schedulers allocate the maximum possible number of CTAs per core (limited by available on-chip resources) to enhance performance by exploiting TLP. However, we demonstrate in this paper that executing the maximum possible number of CTAs on a core is not always the optimal choice from the performance perspective. A high number of concurrently executing threads might cause more memory requests to be issued, and create contention in the caches, network and memory, leading to long stalls at the cores. To reduce resource contention, we propose a dynamic CTA scheduling mechanism, called DYNCTA, which modulates the TLP by allocating an optimal number of CTAs, based on application characteristics. 
To minimize resource contention, DYNCTA allocates fewer CTAs for applications suffering from high contention in the memory sub-system, compared to applications demonstrating high throughput. Simulation results on a 30-core GPGPU platform with 31 applications show that the proposed CTA scheduler provides 28% average improvement in performance compared to the existing CTA scheduler."} {"_id": "32c8c7949a6efa2c114e482c830321428ee58d70", "title": "GPUs and the Future of Parallel Computing", "text": "This article discusses the capabilities of state-of-the-art GPU-based high-throughput computing systems and considers the challenges to scaling single-chip parallel-computing systems, highlighting high-impact areas that the computing research community can address. Nvidia Research is investigating an architecture for a heterogeneous high-performance computing system that seeks to address these challenges."} {"_id": "feb263c99a65a94ed1dec7573f288af0f67b3f66", "title": "Mechatronic model of a novel slotless permanent magnet DC-motor with air gap winding design", "text": "This paper presents a mechatronic model of a novel slotless permanent magnet DC-motor with air gap winding. Besides technical advantages of this type of motor, like high power density, high torque, very low weight and high efficiency, the motor design allows very precise and efficient modelling with limited effort. A nonlinear model of magnetic field density can be extracted from a detailed nonlinear FE-model built in ANSYS/Maxwell, approximated by Fourier series and then used to model driving torque and back EMF, representing the coupling between electrical and mechanical subsystems. Analytically founded numerical models for driving torque and back EMF will be given. Real geometry of the phase winding is taken into account to improve model accuracy. The electrical subsystem will be described as a coupled three-phase system, whose parameters can also be extracted from the nonlinear FE-model with high accuracy. Together with a mechanical model of the rotor, a MATLAB/Simulink model is built and extended by models of the Hall sensors to detect rotor position and commutation logic to control the HEX-Bridge during operation. Finally, results of a complex simulation model, based on the parameters of the prototype of a wheel-hub motor implementing the new motor design, are shown. Simulation results compare very well to measured data. Simulation time is very short due to the efficient approximation of magnetic flux density."} {"_id": "6ef99de74e6da9be3b2ece569c7be2a15c1db5db", "title": "Reverse Engineering of Embedded Software Using Syntactic Pattern Recognition", "text": "When a secure component executes sensitive operations, the information carried by the power consumption can be used to recover secret information. Many different techniques have been developed to recover this secret, but only a few of them focus on recovering the executed code itself. Indeed, the code knowledge acquired through this step of Simple Power Analysis (SPA) can help to identify implementation weaknesses and to improve further kinds of attacks. In this paper we present a new approach improving the SPA based on a pattern recognition methodology that can be used to automatically identify the processed instructions that leak through power consumption. We first perform a geometrical classification with chosen instructions to enable the automatic identification of any sequence of instructions. 
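A minimal sketch of that template-style instruction identification: power-trace windows recorded for known instructions serve as templates, and an unknown window is labeled by its nearest template under normalized correlation. The traces and instruction names below are synthetic assumptions, not data or code from the paper.

```python
# Sketch: classify a power-trace window by nearest correlation to templates
# built from chosen (known) instructions. All traces here are synthetic.
import numpy as np

def ncc(a, b):
    """Normalized correlation between two equal-length trace windows."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

rng = np.random.default_rng(1)
templates = {                      # mean traces recorded per instruction
    "MOV": np.sin(np.linspace(0, 3, 50)),
    "XOR": np.cos(np.linspace(0, 3, 50)),
}

unknown = templates["XOR"] + 0.1 * rng.normal(size=50)  # noisy capture
label = max(templates, key=lambda k: ncc(templates[k], unknown))
print(label)  # "XOR"
```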
Such an analysis is used to reverse-engineer general-purpose code executions of a recent secure component."} {"_id": "5fe18d35bad4238b80a99ec8c4b98aca99a7e389", "title": "Communicating Algorithmic Process in Online Behavioral Advertising", "text": "Advertisers develop algorithms to select the most relevant advertisements for users. However, the opacity of these algorithms, along with their potential for violating user privacy, has decreased user trust and preference in behavioral advertising. To mitigate this, advertisers have started to communicate algorithmic processes in behavioral advertising. However, how revealing parts of the algorithmic process affects users' perceptions towards ads and platforms is still an open question. To investigate this, we exposed 32 users to why an ad is shown to them, what advertising algorithms infer about them, and how advertisers use this information. Users preferred interpretable, non-creepy explanations about why an ad is presented, along with a recognizable link to their identity. We further found that exposing users to their algorithmically-derived attributes led to algorithm disillusionment---users found that advertising algorithms they thought were perfect were far from it. We propose design implications to effectively communicate information about advertising algorithms."} {"_id": "ea4a2dfa0eab0260a184e5aad0046d62c0dd52bc", "title": "Effective Heart Sound Segmentation and Murmur Classification Using Empirical Wavelet Transform and Instantaneous Phase for Electronic Stethoscope", "text": "Accurate measurement of heart sound and murmur parameters is of great importance in the automated analysis of phonocardiogram (PCG) signals. In this paper, we propose a novel unified PCG signal delineation and murmur classification method, without the use of a reference signal, for automatic detection and classification of heart sounds and murmurs. The major components of the proposed method are the empirical wavelet transform-based PCG signal decomposition for discriminating heart sounds from heart murmurs and suppressing background noises, the Shannon entropy envelope extraction, the instantaneous phase-based boundary determination, heart sound and murmur parameter extraction, the systole/diastole discrimination, and the decision-rules-based murmur classification. The accuracy and robustness of the proposed method are evaluated using a wide variety of normal and abnormal PCG signals taken from the standard PCG databases, including the PASCAL heart sounds challenge database and the PhysioNet/CinC challenge heart sound database, as well as real-time PCG signals. Evaluation results show that the proposed method achieves an average sensitivity (Se) of 94.38%, positive predictivity (Pp) of 97.25%, and overall accuracy (OA) of 91.92% for heart sound segmentation and Se of 97.58%, Pp of 96.46%, and OA of 94.21% in detecting the presence of heart murmurs for SNR of 10 dB. The method yields an average classification accuracy of 95.5% for the PCG signals with SNR of 20 dB. Results show that the proposed method outperforms other existing heart sound segmentation and murmur classification methods."} {"_id": "eb7ee320f66d62099027461ba16b5d0f898d5e18", "title": "Color categories: Evidence for the cultural relativity hypothesis", "text": "The question of whether language affects our categorization of perceptual continua is of particular interest for the domain of color where constraints on categorization have been proposed both within the visual system and in the visual environment. 
Recent research (Roberson, Davies, & Davidoff, 2000; Roberson et al., in press) found substantial evidence of cognitive color differences between different language communities, but concerns remained as to how representative a tiny, extremely remote community might be. The present study replicates and extends previous findings using additional paradigms among a larger community in a different visual environment. Adult semi-nomadic tribesmen in Southern Africa carried out similarity judgments, short-term memory and long-term learning tasks. They showed different cognitive organization of color to both English and another language with five color terms. Moreover, Categorical Perception effects were found to differ even between languages with broadly similar color categories. The results provide further evidence of the tight relationship between language and cognition."} {"_id": "735a955d25a5ac194c7c998479a57250ec571bd7", "title": "The Airport Gate Assignment Problem: Mathematical Model and a Tabu Search Algorithm", "text": "In this paper, we consider an Airport Gate Assignment Problem that dynamically assigns airport gates to scheduled flights based on passengers' daily origin and destination flow data. The objective of the problem is to minimize the overall connection times that passengers walk to catch their connection flights. We formulate this problem as a mixed 0-1 quadratic integer programming problem and then reformulate it as a mixed 0-1 integer problem with a linear objective function and constraints. We design a simple tabu search meta-heuristic to solve the problem. The algorithm exploits the special properties of different types of neighborhood moves, and creates highly effective candidate list strategies. We also address issues of tabu short-term memory, dynamic tabu tenure, aspiration rule, and various intensification and diversification strategies. Preliminary computational experiments are conducted and the results are presented and analyzed."} {"_id": "b982f80346ca617170c992191905b5c0be2e3db6", "title": "A Continuous Beam Steering Slotted Waveguide Antenna Using Rotating Dielectric Slabs", "text": "The design, simulation and measurement of a beam steerable slotted waveguide antenna operating in X band are presented. The proposed beam steerable antenna consists of a standard rectangular waveguide (RWG) section with longitudinal slots in the broad wall. The beam steering in this configuration is achieved by rotating two dielectric slabs inside the waveguide and consequently changing the phase of the slot excitations. In order to confirm the usefulness of this concept, a non-resonant 20-slot waveguide array antenna with an element spacing of d = 0.58\u03bb0 has been designed, built and measured. A 14\u00b0 beam scan from near broadside (\u03b8 = 4\u00b0) toward the end-fire (\u03b8 = 18\u00b0) direction is observed. The gain varies from 18.33 dB to 19.11 dB, which corresponds to radiation efficiencies between 95% and 79%. The side-lobe level is -14 dB at the design frequency of 9.35 GHz. The simulated co-polarized realized gain closely matches the fabricated prototype patterns."} {"_id": "8890bb44abb89601c950eb5e56172bb58d5beea8", "title": "End-to-End Offline Goal-Oriented Dialog Policy Learning via Policy Gradient", "text": "Learning a goal-oriented dialog policy is generally performed offline with supervised learning algorithms or online with reinforcement learning (RL). 
Additionally, as companies accumulate massive quantities of dialog transcripts between customers and trained human agents, encoder-decoder methods have gained popularity as agent utterances can be directly treated as supervision without the need for utterance-level annotations. However, one potential drawback of such approaches is that they myopically generate the next agent utterance without regard for dialog-level considerations. To resolve this concern, this paper describes an offline RL method for learning from unannotated corpora that can optimize a goal-oriented policy at both the utterance and dialog level. We introduce a novel reward function and use both on-policy and off-policy policy gradient to learn a policy offline without requiring online user interaction or an explicit state space definition."} {"_id": "589d84d528d353a382a42e5b58dc48a57d332be8", "title": "Post-Stroke Rehabilitation with the Rutgers Ankle System: A Case Study", "text": "The Rutgers Ankle is a Stewart platform-type haptic interface designed for use in rehabilitation. The system supplies six-degree-of-freedom (DOF) resistive forces on the patient's foot, in response to virtual reality-based exercises. The Rutgers Ankle controller contains an embedded Pentium board, pneumatic solenoid valves, valve controllers, and associated signal conditioning electronics. The rehabilitation exercise used in our case study consists of piloting a virtual airplane through loops. The exercise difficulty can be selected based on the number and placement of loops, the airplane speed in the virtual environment, and the degree of resistance provided by the haptic interface. Exercise data is stored transparently, in real time, in an Oracle database. These data consist of ankle position, forces, and mechanical work during an exercise, and over subsequent rehabilitation sessions. The number of loops completed and the time it took to do that are also stored online. A case study is presented of a patient nine months post-stroke using this system. Results showed that, over six rehabilitation sessions, the patient improved on clinical measures of strength and endurance, which corresponded well with torque and power output increases measured by the Rutgers Ankle. There were also substantial improvements in task accuracy and coordination during the simulation and the patient's walking and stair-climbing ability."} {"_id": "e56ae9ca0f66897d0bb3cc1219e347054a4d2c17", "title": "Signal Processing Methods for the Automatic Transcription of Music", "text": "Signal processing methods for the automatic transcription of music are developed in this thesis. Music transcription is here understood as the process of analyzing a music signal so as to write down the parameters of the sounds that occur in it. The applied notation can be the traditional musical notation or any symbolic representation which gives sufficient information for performing the piece using the available musical instruments. Recovering the musical notation automatically for a given acoustic signal allows musicians to reproduce and modify the original performance. Another principal application is structured audio coding: a MIDI-like representation is extremely compact yet retains the identifiability and characteristics of a piece of music to an important degree. The scope of this thesis is in the automatic transcription of the harmonic and melodic parts of real-world music signals. 
Detecting or labeling the sounds of percussive instruments (drums) is not attempted, although the presence of these is allowed in the target signals. Algorithms are proposed that address two distinct subproblems of music transcription. The main part of the thesis is dedicated to multiple fundamental frequency (F0) estimation, that is, estimation of the F0s of several concurrent musical sounds. The other subproblem addressed is musical meter estimation. This has to do with rhythmic aspects of music and refers to the estimation of the regular pattern of strong and weak beats in a piece of music. For multiple-F0 estimation, two different algorithms are proposed. Both methods are based on an iterative approach, where the F0 of the most prominent sound is estimated, the sound is cancelled from the mixture, and the process is repeated for the residual. The first method is derived in a pragmatic manner and is based on the acoustic properties of musical sound mixtures. For the estimation stage, an algorithm is proposed which utilizes the frequency relationships of simultaneous spectral components, without assuming ideal harmonicity. For the cancelling stage, a new processing principle, spectral smoothness, is proposed as an efficient new mechanism for separating the detected sounds from the mixture signal. The other method is derived from known properties of the human auditory system. More specifically, it is assumed that the peripheral parts of hearing can be modelled by a bank of bandpass filters, followed by half-wave rectification and compression of the subband signals. It is shown that this basic structure allows the combined use of time-domain periodicity and frequency-domain periodicity for F0 extraction. In the derived algorithm, the higher-order (unresolved) harmonic partials of a sound are processed collectively, without the need to detect or estimate individual partials. This has the consequence that the method works reasonably accurately for short analysis frames. Computational efficiency of the method is based on calculating a frequency-domain approximation of the summary autocorrelation function, a physiologically-motivated representation of sound. Both of the proposed multiple-F0 estimation methods operate within a single time frame and arrive at approximately the same error rates. However, the auditorily-motivated method is superior in short analysis frames. On the other hand, the pragmatically-oriented method is \u201ccomplete\u201d in the sense that it includes mechanisms for suppressing additive noise (drums) and for estimating the number of concurrent sounds in the analyzed signal. In musical interval and chord identification tasks, both algorithms outperformed the average of ten trained musicians."} {"_id": "87aae79852c15e1da4479185e3697ac91df844a1", "title": "The Development and Validation of a Measure of Student Attitudes Toward Science, Technology, Engineering, and Math (S-STEM)", "text": "Using an iterative design along with multiple methodological approaches and a large representative sample, this study presents reliability, validity, and fairness evidence for two surveys measuring student attitudes toward science, technology, engineering, and math (S-STEM) and interest in STEM careers for (a) 4th- through 5th-grade students (Upper Elementary S-STEM) and (b) 6th- through 12th-grade students (Middle/High S-STEM). 
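The reliability evidence mentioned above is conventionally summarized with Cronbach's alpha; as a hedged illustration, the sketch below computes it for an invented response matrix (rows are students, columns are subscale items), not data from the study.

```python
# Sketch: Cronbach's alpha, the standard internal-consistency reliability
# coefficient used when validating survey subscales like those described above.
# The response matrix is invented sample data, not data from the S-STEM study.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the subscale
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 4, 5], [3, 3, 3, 4], [1, 2, 1, 2]]
print(round(cronbach_alpha(responses), 3))  # ~0.96, a highly consistent subscale
```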
Findings from exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) suggested the use of a four-factor structure to measure student attitudes toward science, math, engineering/technology, and 21st century skills. Subject matter experts and literature reviews provided evidence of content validity. Reliability levels were high for both versions. Furthermore, both the Upper Elementary S-STEM and Middle/High S-STEM Surveys demonstrated evidence of configural, metric, and scalar invariance across grade levels, races/ethnicities, and genders. The findings support the validity of interpretations and inferences made from scores on the instruments\u2019 items and subscales."} {"_id": "538101a20ea7f125c673b076409b75fa637493a0", "title": "VCDB: A Large-Scale Database for Partial Copy Detection in Videos", "text": "The task of partial copy detection in videos aims at finding if one or more segments of a query video have (transformed) copies in a large dataset. Since collecting and annotating large datasets of real partial copies are extremely time-consuming, previous video copy detection research used either small-scale datasets or large datasets with simulated partial copies by imposing several pre-defined transformations (e.g., photometric or geometric changes). While the simulated datasets were useful for research, it is unknown how well the techniques developed on such data work on real copies, which are often too complex to be simulated. In this paper, we introduce a large-scale video copy database (VCDB) with over 100,000 Web videos, containing more than 9,000 copied segment pairs found through careful manual annotation. We further benchmark a baseline system on VCDB, which has demonstrated state-of-the-art results in recent copy detection research. Our evaluation suggests that existing techniques\u2014which have shown near-perfect results on the simulated benchmarks\u2014are far from satisfactory in detecting complex real copies. We believe that the release of VCDB will largely advance the research around this challenging problem."} {"_id": "3a463db8048c67b48c7f4f019a4ab3a2f01f25fc", "title": "Applying Winnow to Context-Sensitive Spelling Correction", "text": "Multiplicative weight-updating algorithms such as Winnow have been studied extensively in the COLT literature, but only recently have people started to use them in applications. In this paper, we apply a Winnow-based algorithm to a task in natural language: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. Previous approaches to this problem have been statistics-based; we compare Winnow to one of the more successful such approaches, which uses Bayesian classifiers. We find that: (1) When the standard (heavily-pruned) set of features is used to describe problem instances, Winnow performs comparably to the Bayesian method; (2) When the full (unpruned) set of features is used, Winnow is able to exploit the new features and convincingly outperform Bayes; and (3) When a test set is encountered that is dissimilar to the training set, Winnow is better than Bayes at adapting to the unfamiliar test set, using a strategy we will present for combining learning on the training set with unsupervised learning on the (noisy) test set. In Machine Learning: Proceedings of the 13th International Conference, Lorenza Saitta, ed., Morgan Kaufmann, San Francisco, CA, 1996, pages 182-190. 
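To illustrate the multiplicative weight updates at the heart of Winnow, here is a compact sketch of the classic algorithm on binary feature vectors; the promotion factor and threshold are textbook defaults, and the toy data set is our own invention rather than the paper's spelling-correction features.

```python
# Classic Winnow sketch: multiplicative promotion/demotion of feature weights.
# Predict positive when the weighted sum of active features crosses a threshold.
def winnow_train(examples, n_features, alpha=2.0):
    w = [1.0] * n_features
    theta = n_features / 2.0
    for x, y in examples:                      # x: 0/1 features, y: 0/1 label
        y_hat = 1 if sum(w[i] for i in range(n_features) if x[i]) >= theta else 0
        if y_hat == 0 and y == 1:              # promote weights of active features
            for i in range(n_features):
                if x[i]:
                    w[i] *= alpha
        elif y_hat == 1 and y == 0:            # demote them on false positives
            for i in range(n_features):
                if x[i]:
                    w[i] /= alpha
    return w, theta

# Toy target concept: feature 0 OR feature 2 (e.g., two context words near "too").
data = [([1, 0, 0, 0], 1), ([0, 1, 0, 0], 0), ([0, 0, 1, 0], 1), ([0, 1, 0, 1], 0)] * 5
w, theta = winnow_train(data, 4)
print(w, theta)  # weights for features 0 and 2 grow past the threshold
```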
"} {"_id": "e63d53bc8bb4615dce8bb3e1d2aedb3dbe16c044", "title": "Illuminating cell signalling with optogenetic tools", "text": "The light-based control of ion channels has been transformative for the neurosciences, but the optogenetic toolkit does not stop there. An expanding number of proteins and cellular functions have been shown to be controlled by light, and the practical considerations in deciding between reversible optogenetic systems (such as systems that use light-oxygen-voltage domains, phytochrome proteins, cryptochrome proteins and the fluorescent protein Dronpa) are well defined. The field is moving beyond proof of concept to answering real biological questions, such as how cell signalling is regulated in space and time, that were difficult or impossible to address with previous tools."} {"_id": "b1ffbbbdefb2484d61a182efc2af4243a60d87a3", "title": "Anytime parallel density-based clustering", "text": "The density-based clustering algorithm DBSCAN is a state-of-the-art data clustering technique with numerous applications in many fields. However, DBSCAN requires neighborhood queries for all objects and propagation of labels from object to object. This scheme is time consuming and thus limits its applicability for large datasets. In this paper, we propose a novel anytime approach to cope with this problem by reducing both the range query and the label propagation time of DBSCAN. Our algorithm, called AnyDBC, compresses the data into smaller density-connected subsets called primitive clusters and labels objects based on connected components of these primitive clusters to reduce the label propagation time. Moreover, instead of passively performing range queries for all objects as in existing techniques, AnyDBC iteratively and actively learns the current cluster structure of the data and selects a few most promising objects for refining clusters at each iteration. Thus, in the end, it performs substantially fewer range queries compared to DBSCAN while still satisfying the cluster definition of DBSCAN. Moreover, by processing queries in block and merging the results into the current cluster structure, AnyDBC can be efficiently parallelized on shared memory architectures to further accelerate the performance, uniquely making it a parallel and anytime technique at the same time. 
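To make concrete what AnyDBC is optimizing away, here is a minimal sketch of baseline DBSCAN, whose per-object `region_query` calls and point-by-point label propagation are exactly the costs the abstract above targets. This is an illustrative simplification (brute-force neighborhood search, simplified border-point handling), not the AnyDBC algorithm itself.

```python
import numpy as np

def region_query(X, i, eps):
    """The neighborhood query that AnyDBC avoids issuing for every point."""
    return np.flatnonzero(np.linalg.norm(X - X[i], axis=1) <= eps)

def dbscan(X, eps=0.5, min_pts=5):
    labels = np.full(len(X), -1)        # -1 = noise / not yet clustered
    cluster = 0
    for i in range(len(X)):
        if labels[i] != -1:
            continue
        neigh = region_query(X, i, eps)
        if len(neigh) < min_pts:
            continue                    # not a core point (stays noise for now)
        labels[i] = cluster
        seeds = list(neigh)
        while seeds:                    # label propagation from core points
            j = seeds.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            jn = region_query(X, j, eps)
            if len(jn) >= min_pts:      # expand only from core points
                seeds.extend(jn)
        cluster += 1
    return labels
```

AnyDBC's primitive clusters let it label whole density-connected groups at once and to skip most of these range queries, which is where the reported orders-of-magnitude speedups come from.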
Experiments show speedup factors of orders of magnitude compared to DBSCAN and its fastest variants as well as a high parallel scalability on multicore processors for very large real and synthetic complex datasets."} {"_id": "2aa66587fa04cdc05e38dc4529f61c18de0f378e", "title": "The Interaction of Architecture and Operating System Design", "text": "Today\u2019s high-performance RISC microprocessors have been highly tuned for integer and floating point application performance. These architectures have paid less attention to operating system requirements. At the same time, new operating system designs often have overlooked modern architectural trends which may unavoidably change the relative cost of certain primitive operations. The result is that operating system performance is well below application code performance on contemporary RISCs. This paper examines recent directions in computer architecture and operating systems, and the implications of changes in each domain for the other. The requirements of three components of operating system design are discussed in detail: interprocess communication, virtual memory, and thread management. For each component, we relate operating system functional and performance needs to the mechanisms available on commercial RISC architectures such as the MIPS R2000 and R3000, Sun SPARC, IBM RS6000, Motorola 88000, and Intel i860. Our analysis reveals a number of specific reasons why the performance of operating system primitives on RISCs has not scaled with integer performance. In addition, we identify areas in which architectures could better (and cost-effectively) accommodate operating system needs, and areas in which operating system design could accommodate certain necessary characteristics of cost-effective high-performance microprocessors."} {"_id": "1fe52bbc61aa7e9854d5daad25b12943d97aed25", "title": "Frapp\u00e9: Functional Reactive Programming in Java", "text": "Functional Reactive Programming (FRP) is a declarative programming model for constructing interactive applications based on a continuous model of time. FRP programs are described in terms of behaviors (continuous, time-varying, reactive values), and events (conditions that occur at discrete points in time). This paper presents Frapp\u00e9, an implementation of FRP in the Java programming language. The primary contribution of Frapp\u00e9 is its integration of the FRP event/behavior model with the Java Beans event/property model. At the interface level, any Java Beans component may be used as a source or sink for the FRP event and behavior combinators. 
This provides a mechanism for extending Frapp\u00e9 with new kinds of I/O connections and allows FRP to be used as a high-level declarative model for composing applications from Java Beans components. At the implementation level, the Java Beans event model is used internally by Frapp\u00e9 to propagate FRP events and changes to FRP behaviors. This allows Frapp\u00e9 applications to be packaged as Java Beans components for use in other applications, and yields an implementation of FRP well-suited to the requirements of event-driven applications (such as graphical user interfaces)."} {"_id": "432a96eaa74c972544a75c1eaa6bc5e15ffdbe94", "title": "Colour spaces: perceptual, historical and applicational background", "text": "Abstract\u2014In this paper we present an overview of colour spaces used in electrical engineering and image processing. We stress the importance of the perceptual, historical and applicational background that led to a colour space. The colour spaces presented are: RGB; opponent-colour spaces; phenomenal colour spaces; CMY; CMYK; TV colour spaces (YUV and YIQ); PhotoYCC; CIE XYZ, Lab and Luv colour spaces. Keywords\u2014colour spaces, RGB, HSV, CIE"} {"_id": "283df50be1d1a5fde310a9252ead5af2189f2720", "title": "Autonomous indoor object tracking with the Parrot AR.Drone", "text": "This article presents an image-based visual servoing system for indoor visual tracking of 3D moving objects by an Unmanned Aerial Vehicle. This system autonomously follows a 3D moving target object, maintaining it at a fixed distance and centered on its image plane. The initial setup is tested in a detailed simulation environment. The system is then validated on flights in indoor scenarios using the Parrot AR.Drone and the CMT tracker, demonstrating the robustness of the system to differences in object features, environmental clutter, and target trajectory. The obtained results indicate that the proposed system is suitable for complex control tasks, such as object surveillance and pursuit."} {"_id": "fef2c647b30a0ec40a59272444143891558e2e9b", "title": "Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey", "text": "Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas, deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in real-world scenarios. 
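As a concrete companion to the colour-space overview above, the RGB-to-YIQ transform used in NTSC television is a single matrix multiplication. The coefficients below are the standard NTSC values; the function name and the [0, 1] component range are conventions assumed for this sketch.

```python
import numpy as np

# NTSC RGB -> YIQ transform matrix (standard coefficients)
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luma
    [0.596, -0.274, -0.322],   # I: orange-blue chroma axis
    [0.211, -0.523,  0.312],   # Q: purple-green chroma axis
])

def rgb_to_yiq(rgb):
    """rgb: array shaped (..., 3) with components in [0, 1]."""
    return np.asarray(rgb) @ RGB_TO_YIQ.T

print(rgb_to_yiq([1.0, 1.0, 1.0]))  # white: Y = 1, I and Q near 0
```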
Finally, drawing on the reviewed literature, we provide a broader outlook on this research direction."} {"_id": "2aaa2de300a29becd569fce826677e491ea0ea70", "title": "Learning Category-Specific Deformable 3D Models for Object Reconstruction", "text": "We address the problem of fully automatic object localization and reconstruction from a single image. This is both a very challenging and very important problem which has, until recently, received limited attention due to difficulties in segmenting objects and predicting their poses. Here we leverage recent advances in learning convolutional networks for object detection and segmentation and introduce a complementary network for the task of camera viewpoint prediction. These predictors are very powerful, but still not perfect given the stringent requirements of shape reconstruction. Our main contribution is a new class of deformable 3D models that can be robustly fitted to images based on noisy pose and silhouette estimates computed upstream and that can be learned directly from 2D annotations available in object detection datasets. Our models capture top-down information about the main global modes of shape variation within a class providing a \u201clow-frequency\u201d shape. In order to capture fine instance-specific shape details, we fuse it with a high-frequency component recovered from shading cues. A comprehensive quantitative analysis and ablation study on the PASCAL 3D+ dataset validates the approach as we show fully automatic reconstructions on PASCAL VOC as well as large improvements on the task of viewpoint prediction."} {"_id": "5f50b98b88b3a5d6127022391984643833b563a5", "title": "A Prism-Free Method for Silhouette Rendering in Inverse Displacement Mapping", "text": "Silhouette is a key feature that distinguishes displacement mapping from normal mapping. However, the silhouette rendering in the GPU implementation of displacement mapping (which is often called inverse displacement mapping) is tricky. Previous approaches rely mostly on construction of additional extruding prism-like geometry, which slows down the rendering significantly. In this paper, we propose a method for solving the silhouette rendering problem in inverse displacement mapping without using any extruding prism-like geometry. At each step of intersection finding, we continuously bend the viewing ray according to the current local tangent space associated with the surface. Thus, it allows mapping a displacement map onto an arbitrary curved surface with a more accurate silhouette. While our method is simple, it offers surprisingly good results over Curved Relief Map (CRM) [OP05] in many difficult or degenerate cases."} {"_id": "432153c41e9f388a7a59d2624f48ffc6d291acc0", "title": "Chemical Process Control Education and Practice", "text": "Chemical process control textbooks and courses differ significantly from their electrical or mechanical-oriented brethren. It is our experience that colleagues in electrical engineering (EE) and mechanical engineering (ME) assume that we teach the same theory in our courses and merely have different application examples. The primary goals of this article are to i) emphasize the distinctly challenging characteristics of chemical processes, ii) present a typical process control curriculum, and iii) discuss how chemical process control courses can be revised to better meet the needs of a typical B.S.-level chemical engineer. 
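One canonical attack family from the adversarial-attacks survey above is the Fast Gradient Sign Method (FGSM), which perturbs each input pixel one step along the sign of the loss gradient. The following is a minimal PyTorch sketch under assumed conventions (inputs normalized to [0, 1], a small epsilon such as 0.03); production attack implementations add iteration, random starts, and norm constraints.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Fast Gradient Sign Method: one-step perturbation along the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # move each pixel by +/- eps in the direction that increases the loss
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```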
In addition to a review of material covered in a standard process control course, we discuss innovative approaches in process control education, including the use of case studies, distributed control systems in laboratories, identification and control simulation packages, and studio-based approaches combining lecture, simulation, and experiments in the same room. We also provide perspectives on needed developments in process control education."} {"_id": "38cb9546c19be6b94cf760d48dcd9c3c9ae1bed7", "title": "Solving the Search for Source Code", "text": "Programmers frequently search for source code to reuse using keyword searches. The search effectiveness in facilitating reuse, however, depends on the programmer's ability to specify a query that captures how the desired code may have been implemented. Further, the results often include many irrelevant matches that must be filtered manually. More semantic search approaches could address these limitations, yet existing approaches are either not flexible enough to find approximate matches or require the programmer to define complex specifications as queries.\n We propose a novel approach to semantic code search that addresses several of these limitations and is designed for queries that can be described using a concrete input/output example. In this approach, programmers write lightweight specifications as inputs and expected output examples. Unlike existing approaches to semantic search, we use an SMT solver to identify programs or program fragments in a repository, which have been automatically transformed into constraints using symbolic analysis, that match the programmer-provided specification.\n We instantiated and evaluated this approach in subsets of three languages, the Java String library, Yahoo! Pipes mashup language, and SQL select statements, exploring its generality, utility, and trade-offs. The results indicate that this approach is effective at finding relevant code, can be used on its own or to filter results from keyword searches to increase search precision, and is adaptable to find approximate matches and then guide modifications to match the user specifications when exact matches do not already exist. These gains in precision and flexibility come at the cost of performance, for which underlying factors and mitigation strategies are identified."} {"_id": "ea83697999076d473fae2db48ed4abea38f609b0", "title": "Android Malware Detection Using Feature Fusion and Artificial Data", "text": "For Android malware detection and classification, the anti-malware community has relied on traditional malware detection methods as a countermeasure. However, traditional detection methods are developed for detecting computer malware, which differs from Android malware in structure and characteristics. Thus, they may not be useful for Android malware detection. Moreover, the majority of suggested detection approaches may not generalize and are incapable of detecting zero-day malware, for reasons such as data sets built around specific sets of examples. Thus, their detection accuracy may be questionable. To address this problem, this paper presents a malware classification approach with reliable detection accuracy and evaluates the approach using artificially generated examples. The suggested approach generates the signature profiles and behavior profiles of each application in the data set, which are further used as input for the classification task. 
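The input/output-example query model in the code-search abstract above can be illustrated with a deliberately simplified sketch: instead of encoding candidates as SMT constraints via symbolic analysis (as the paper does), this version simply executes each candidate on the example and keeps the ones that match. The repository contents and names here are hypothetical.

```python
def search_by_example(repository, example_input, expected_output):
    """Return candidate functions whose behavior matches the I/O example.

    The actual system encodes candidates as SMT constraints instead of
    executing them; direct execution here is a deliberate simplification.
    """
    matches = []
    for name, fn in repository.items():
        try:
            if fn(example_input) == expected_output:
                matches.append(name)
        except Exception:
            pass  # candidate not applicable to this input
    return matches

# hypothetical repository of Java-String-like helpers
repo = {
    "upper": str.upper,
    "strip": str.strip,
    "reverse": lambda s: s[::-1],
}
print(search_by_example(repo, "  abc  ", "abc"))  # -> ['strip']
```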
To improve the detection accuracy, fusion of features from filter and wrapper methods, together with algorithm fusion, is investigated. Without affecting the detection accuracy, the optimal balance between real-world examples and synthetic examples is also investigated. The experimental results suggest that AUC and F1 scores of up to 0.94 can be obtained for both known and unknown malware using original and synthetic examples."} {"_id": "160c5acd01876a95f7f067c800aba2a1b2e6c84c", "title": "Service oriented architectures: approaches, technologies and research issues", "text": "Service-oriented architectures (SOA) is an emerging approach that addresses the requirements of loosely coupled, standards-based, and protocol-independent distributed computing. Typically business operations running in an SOA comprise a number of invocations of these different components, often in an event-driven or asynchronous fashion that reflects the underlying business process needs. To build an SOA a highly distributable communications and integration backbone is required. This functionality is provided by the Enterprise Service Bus (ESB) that is an integration platform that utilizes Web services standards to support a wide variety of communications patterns over multiple transport protocols and deliver value-added capabilities for SOA applications. This paper reviews technologies and approaches that unify the principles and concepts of SOA with those of event-based programming. The paper also focuses on the ESB and describes a range of functions that are designed to offer a manageable, standards-based SOA backbone that extends middleware functionality throughout by connecting heterogeneous components and systems and offers integration services. Finally, the paper proposes an approach to extend the conventional SOA to cater for essential ESB requirements that include capabilities such as service orchestration, \u201cintelligent\u201d routing, provisioning, integrity and security of message as well as service management. The layers in this extended SOA, in short xSOA, are used to classify research issues and current research activities."} {"_id": "1d26137926a698f02dad8a87df2953b3bf9a339c", "title": "A survey of trust and reputation systems for online service provision", "text": "Trust and reputation systems represent a significant trend in decision support for Internet-mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. A natural side effect is that it also provides an incentive for good behaviour, and therefore tends to have a positive effect on market quality. Reputation systems can be called collaborative sanctioning systems to reflect their collaborative nature, and are related to collaborative filtering systems. Reputation systems are already being used in successful commercial online applications. There is also a rapidly growing literature around trust and reputation systems, but unfortunately this activity is not very coherent. The purpose of this article is to give an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions, to analyze the current trends and developments in this area, and to propose a research agenda for trust and reputation systems. 
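The filter/wrapper feature-fusion idea in the malware abstract above can be sketched with scikit-learn. This is an illustrative pipeline on synthetic stand-in data, not the paper's system: the chi-squared filter, RFE wrapper, union-based fusion, and random-forest classifier are all assumptions chosen to show the pattern.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# stand-in for permission/API-call features extracted from apps
X, y = make_classification(n_samples=600, n_features=80, n_informative=12,
                           random_state=0)
X = np.abs(X)                        # chi2 requires non-negative features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# filter method: chi-squared ranking; wrapper method: recursive elimination
filt = SelectKBest(chi2, k=20).fit(X_tr, y_tr)
wrap = RFE(LogisticRegression(max_iter=1000),
           n_features_to_select=20).fit(X_tr, y_tr)

# feature fusion: union of the two selected subsets
fused = filt.get_support() | wrap.get_support()

clf = RandomForestClassifier(random_state=0).fit(X_tr[:, fused], y_tr)
proba = clf.predict_proba(X_te[:, fused])[:, 1]
print("AUC:", roc_auc_score(y_te, proba),
      "F1:", f1_score(y_te, proba > 0.5))
```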
Citation: Josang, A., R. Ismail, and C. Boyd (2007). A Survey of Trust and Reputation Systems for Online Service Provision, Journal of Decision Support Systems, 43(2):618-644. Outline of Presentation \u2022 Section 1: Introduction \u2022 Section 2: Define trust and reputation \u2022 Section 3: Trust and Reputation relationship as security mechanisms \u2022 Section 4: Collaborative filtering and reputation \u2022 Section 5: Trust Classes \u2022 Section 6: Four categories of reputation and trust semantics \u2022 Section 7: Centralized and distributed architectures for reputation Outline of Presentation Continued \u2022 Section 8: Reputation computation methods (Day 1 Goal) \u2022 Section 9: Reputation systems used in commercial applications \u2022 Section 10: Description of main problems in reputation systems \u2022 Section 11: Ending discussion Section 1: Introduction \u2022 Online transactions differ from in-person transactions because of the inherent asymmetry in the transaction; the seller has all the power, so to speak. \u2022 The nature of online transactions obscures the traditional metrics used to establish if a brick-and-mortar store is trustworthy. Example: a brick-and-mortar store takes time to establish, whereas a web site takes very little time to set up. \u2013 These reasons make it hard to determine whether or not a particular online venue is trustworthy, and this is why this trust issue is receiving so much attention from an academic point of view. \u2022 The authors of this paper wrote it in part because of the rapidly growing interest in this topic and because they felt that the prior overviews used inconsistent terminology. Section 2: Define trust and reputation \u2022 Two kinds of trust: reliability and decision trust \u2022 Reliability Trust: Trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends. \u2022 Decision Trust: Trust is the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible. \u2022 The authors mention that the previously mentioned definitions are not as simple as they seem. \u2013 For example: Trust in an individual is not necessarily enough to enter into a state of dependence with a person. In other words, the danger might seem to the agent an intolerable risk. \u2022 The authors mention that only a few papers deal with trust and that in economic circles there are some who reject trust as a computational model. \u2022 Someone by the name of Williamson argues that the notion of trust should be avoided when modeling economic interactions, because it adds nothing new, and that well known notions such as reliability, utility and risk are adequate and sufficient for that purpose. \u2022 Williamson argues however that personal trust is still important for modeling, and that non-computation models for trust can be meaningful for studying certain relationships. \u2022 Concerning reputation, the authors mention two aspects: trust because of good reputation and trust despite bad reputation. \u2022 These two statements shed light on the fact that trust decisions are often made with outside information, knowledge about the relationship that is not generally known, instincts and feelings, etc. \u2022 Reputation can also be considered as a collective measure of trustworthiness based on referrals from the community. 
\u2022 Research in Trust and Reputation Systems should have two foci: \u2013 Finding adequate online substitutes for traditional cues in the physical world and identifying new elements specific to the online applications which are suitable for measurements. \u2013 Taking advantage of IT and the Internet to create efficient systems for collecting information and deriving measurements of trust and reputation in order to aid decision making and improve online markets. \u2022 These simple principles invite rigorous research in order to answer some fundamental questions: What information elements are most suitable for deriving measures of trust and reputation in a given application? How can these information elements be captured and collected? What are the best principles for designing such systems from a theoretic and from a usability point of view? Can they be made resistant to attacks of manipulation by strategic agents? How should users include the information provided by such systems into their decision process? What role can these systems play in the business model of commercial companies? Do these systems truly improve the quality of online trade and interactions? These are important questions that need good answers in order to determine the potential for trust and reputation systems in online environments. \u2022 According to a cited reference in the paper, Resnick, a reputation system must have the following: 1. Entities must be long-lived, so that with every interaction there is always an expectation of future interactions. 2. Ratings about current interactions are captured and distributed. 3. Ratings about past interactions must guide decisions about current interactions. Example of how trust is derived. (Fig. 1 from paper) Section 3: Trust and Reputation relationship as security mechanisms \u2022 In a general sense, the purpose of security mechanisms is to provide protection against malicious parties. \u2022 In many situations we have to protect ourselves from those who offer resources so that the problem in fact is reversed. Information providers can for example act deceitfully by providing false or misleading information, and traditional security mechanisms are unable to protect against this type of threat. Trust and reputation systems on the other hand can provide protection against such threats. \u2022 To summarize this section: the author basically says that a computer system that appears to have robust security appears more trustworthy to the user. Listing known security vulnerabilities and using encryption techniques make the system appear to be more trustworthy. Section 4: Collaborative filtering and reputation \u2022 Collaborative filtering systems are a mechanism that shares traits with a reputation system but they are different at the same time. \u2022 Collaborative filtering systems (henceforth CF) are a mechanism that attempts to take into consideration that different people have different tastes. \u2022 If two separate people rate two items similarly then they are called neighbours in CF terminology. \u2022 This new fact can be used to recommend to one something that the other liked, a technique called a recommender system. \u2022 This is the opposite of the assumption made by reputation systems, which assume that all people will judge the same performance or transaction consistently. \u2022 The example provided by the article is that in CF systems users might rate a video or music file differently based on tastes but one containing a virus would be universally rated poorly. 
\u2022 Another caveat about CF vs reputation systems is that CF systems assume an optimistic world view and reputation systems assume a pessimistic world view. \u2022 Specifically, CF systems assume all participants are trustworthy and sincere, meaning that all participants report their genuine opinion. \u2022 Conversely, reputation systems assume that participants will try to misrepresent the quality of services in order to make more profit and will lie to achieve said goals. \u2022 This dual, opposing nature of these types of systems can make it very advantageous to combine them, as will be explored in the study of Amazon in section 9, which does this to some extent. Section 5: Trust Classes \u2022 Types of Trust classes: \u2013 Provision \u2013 Access \u2013 Delegation \u2013 Identity \u2013 Context \u2022 Paper mentions them in order to get specific about trust semantics. \u2022 Paper focuses on provision trust so it is emphasized. \u2022 Provision trust describes the relying party\u2019s trust in a service or resource provider. It is relevant when the relying party is a user seeking protection from malicious or unreliable service providers. I extrapolated from the paper that this is the type of trust that would be studied in business through subjects like contract law. \u2022 Access trust describes trust in principals for the purpose of accessing resources owned by or under the responsibility of the relying party. This relates to the access control paradigm which is a central element in computer security. \u2022 Delegation trust describes trust in an agent (the delegate) that acts and makes decisions on behalf of the relying party. \u2022 Identity trust describes the belief that an agent's identity is as claimed. Identity trust systems have been discussed mostly in the information security community. An example mentioned in the paper is PGP encryption technology. \u2022 Context trust describes the extent to which the relying party believes that the necessary systems and institutions ar"} {"_id": "6b67a2eb179cad467d4433c153a4b83fdca6cee8", "title": "Survey on Energy Consumption Entities on the Smartphone Platform", "text": "The full degree of freedom in mobile systems heavily depends on the energy provided by the mobile phone's batteries. Their capacity is in general limited and is not keeping pace as mobile devices are crammed with new functionalities. The discrepancy between Moore's law, offering twice the processing power at least every second year, and the development in batteries, which did not even double over the last decade, forces a shift in the way researchers design networks, protocols, and the mobile device itself. The bottleneck to address in the design process of mobile systems is not only the wireless data rate, but even more the energy limitation, as customers ask for new energy-hungry services, e.g., requiring faster connections or even multiple air interfaces, and longer standby or operational times of their mobile devices at the same time. In this survey, the energy-consuming entities of a mobile device such as wireless air interfaces, display, mp3 player and others are measured and compared. The presented measurement results allow the reader to understand what the energy-hungry parts of a mobile device are and use those findings for the design of future mobile protocols and applications. 
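Among the "reputation computation methods" listed in the outline above, one of the simplest is a Bayesian score based on the beta distribution, where a party's reputation is the expected value of a Beta posterior over its positive and negative ratings. This is a generic sketch of that common scheme, not the specific formulas surveyed in the paper.

```python
def beta_reputation(positive, negative):
    """Expected value of a Beta(positive + 1, negative + 1) posterior.

    Starts at 0.5 with no evidence and moves toward the observed
    fraction of positive ratings as evidence accumulates.
    """
    return (positive + 1) / (positive + negative + 2)

print(beta_reputation(0, 0))    # 0.5  (no evidence yet)
print(beta_reputation(90, 10))  # ~0.89 after 100 ratings
```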
All results presented in this work and further results are made public on our web page [2]."} {"_id": "2efd3f1cfc20fc17771612630dc92582ae5afe53", "title": "Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior", "text": "Reputation reporting systems have emerged as an important risk management mechanism in online trading communities. However, the predictive value of these systems can be compromised in situations where conspiring buyers intentionally give unfair ratings to sellers or where sellers discriminate on the quality of service they provide to different buyers. This paper proposes and evaluates a set of mechanisms which eliminate, or significantly reduce, the negative effects of such fraudulent behavior. The proposed mechanisms can be easily integrated into existing online reputation systems in order to safeguard their reliability in the presence of potentially deceitful buyers and sellers."} {"_id": "604f7fe4a26986a37bba52388cb9379bf1758969", "title": "Web services and business process management", "text": "Web services based on the service-oriented architecture framework provide a suitable technical foundation for making business processes accessible within enterprises and across enterprises. But to appropriately support dynamic business processes and their management, more is needed, namely, the ability to prescribe how Web services are used to implement activities within a business process, how business processes are represented as Web services, and also which business partners perform what parts of the actual business process. In this paper, the relationship between Web services and the management of business processes is worked out and presented in a tutorial-like manner."} {"_id": "a46eb7ff025ec24eb643baf98f8ce911b9986b9c", "title": "An implementation of cloud-based platform with R packages for spatiotemporal analysis of air pollution", "text": "Recently, R has become a popular tool for big data analysis due to its many mature software packages for data analysis and visualization, including the analysis of air pollution. The air pollution problem is of increasing global concern as it greatly impacts the environment and human health. With the rapid development of IoT and the increase in the accuracy of geographical information collected by sensors, a huge amount of air pollution data has been generated. Thus, it is difficult to analyze the air pollution data effectively and reliably in a single-machine environment due to R's inherent in-memory design. In this work, we construct a distributed computing environment based on both RHadoop and SparkR for performing the analysis and visualization of air pollution with R more reliably and effectively. We first use sensors, called EdiGreen AirBox, to collect the air pollution data in Taichung, Taiwan. Then, we adopt the Inverse Distance Weighting method to transform the sensors\u2019 data into a density map. Finally, the experimental results show the accuracy of short-term PM2.5 predictions using the ARIMA model. 
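The Inverse Distance Weighting step mentioned above is straightforward to sketch: each grid point's value is a weighted average of sensor readings, with weights falling off as an inverse power of distance. The paper works in R; this Python sketch uses hypothetical coordinates and readings, and the power parameter of 2 is a common default rather than the paper's setting.

```python
import numpy as np

def idw(sample_xy, sample_values, query_xy, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights fall off as 1 / distance**power."""
    d = np.linalg.norm(sample_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps avoids division by zero
    return (w @ sample_values) / w.sum(axis=1)

# hypothetical sensor coordinates (e.g., AirBox sites) and PM2.5 readings
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pm25 = np.array([20.0, 35.0, 28.0])
grid = np.array([[0.5, 0.5], [0.1, 0.1]])
print(idw(stations, pm25, grid))
```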
In addition, the prediction accuracy is verified with the MAPE method in the experimental results."} {"_id": "e6af72425185b9b7f9854b4ea818ac6a065a4777", "title": "Contrast-limited adaptive histogram equalization: speed and effectiveness", "text": "The Contrast-Limited Adaptive Histogram Equalization (CLAHE) method for assigning displayed intensity levels in medical images is supported by anecdotal evidence and evidence from detection experiments. Despite that, the method requires clinical evaluation and implementation achieving few-second application before it can be clinically adopted. Experiments attempting to produce this evaluation and a machine providing the required performance are"} {"_id": "4cd7c7c626eed9bd342f5c916d92a0ec3aaae095", "title": "Miniature Dual-Mode Microstrip Filters", "text": "In order to reduce the size of dual-mode microstrip filters while keeping excellent performance, a novel filter geometry is proposed. The novel filter includes a dual-mode resonator based on a meander structure. The effect of input/output feed lines located along a straight line on filter response is investigated for the dual-mode filter. The coupling between degenerate modes of the proposed dual-mode microstrip resonator is discussed depending on the perturbation size. Two dual-mode microstrip bandpass filters with real-axis and imaginary-axis transmission zeros (TZs) are designed, fabricated, and measured to document the validity of the description of the positive and negative coupling coefficient"} {"_id": "7ee5d9c87a470391f3b027eb44a7859a7cb6090f", "title": "EVS 25 Shenzhen, China, Nov. 5-9, 2010 Control of a High-Performance Z-Source Inverter for Fuel Cell/Supercapacitor Hybrid Electric Vehicles", "text": "This paper presents a supercapacitor (SC) module connected in parallel with a fuel cell (FC) stack to supply a high-performance Z-Source Inverter (HP-ZSI) feeding a three-phase induction motor for hybrid electric vehicle applications. The supercapacitor is connected between the input diode and the bidirectional switch of the high-performance ZSI topology. The indirect field-oriented control (IFOC) method is used to control the induction motor speed during motoring and regenerative braking operations to produce the modulation index, and a dual-loop controller is used to control the Z-network capacitor voltage to produce the shoot-through duty ratio. MATLAB simulation results verified the validity of the proposed control strategy during motoring and regenerative braking operations."} {"_id": "405e880a3c632255481c59d6fc04206a0a6c2fcb", "title": "How do Retailers Adjust Prices?: Evidence from Store-Level Data", "text": "Recent theoretical work on retail pricing dynamics suggests that retailers periodically hold sales (periodic, temporary reductions in price) even when their costs are unchanged. In this paper we extend existing theory to predict which items will go on sale, and use a new data set from the BLS to document the frequency of sales across a wide range of goods and geographic areas. We find a number of pricing regularities for the 20 categories of goods we examine. First, retailers seem to have a \u201cregular\u201d price, and most deviations from that price are downward. Second, there is considerable heterogeneity in sale behavior across goods within a category (e.g. cereal); some items are regularly put on sale, while other items rarely are on sale. Third, items are more likely to go on sale when demand is highest. 
Fourth, for a limited number of items for which we know market shares, products with larger market shares go on sale more often. These final three observations are consistent with our theoretical result that popular products are most likely to be placed on sale."} {"_id": "a51795160e07aabf7bce59e79502507c60d06a5b", "title": "Conditional Dynamic Mutual Information-Based Feature Selection", "text": "With the emergence of new techniques, data in many fields are getting larger and larger, especially in the dimensionality aspect. The high dimensionality of data may pose great challenges to traditional learning algorithms. In fact, many of the features in large volumes of data are redundant and noisy. Their presence not only degrades the performance of learning algorithms, but also confuses end-users in the post-analysis process. Thus, it is necessary to eliminate irrelevant features from data before they are fed into learning algorithms. Currently, many endeavors have been attempted in this field and many outstanding feature selection methods have been developed. Among different evaluation criteria, mutual information has also been widely used in feature selection because of its good capability of quantifying uncertainty of features in classification tasks. However, the mutual information estimated on the whole dataset cannot exactly represent the correlation between features. To cope with this issue, in this paper we first re-estimate mutual information on identified instances dynamically, and then introduce a new feature selection method based on conditional mutual information. Performance evaluations on sixteen UCI datasets show that our proposed method achieves comparable performance to other well-established feature selection algorithms in most cases."} {"_id": "099d85f25e9336f48ff64287a4b53ee5fb64ab51", "title": "Learning Sparse Feature Representations for Music Annotation and Retrieval", "text": "We present a data-processing pipeline based on sparse feature learning and describe its applications to music annotation and retrieval. Content-based music annotation and retrieval systems process audio starting with features. While commonly used features, such as MFCC, are handcrafted to extract characteristics of the audio in a succinct way, there is increasing interest in learning features automatically from data using unsupervised algorithms. We describe a systematic approach applying feature-learning algorithms to music data, in particular, focusing on a high-dimensional sparse-feature representation. Our experiments show that, using only a linear classifier, the newly learned features produce results on the CAL500 dataset comparable to state-of-the-art music annotation and retrieval systems."} {"_id": "19c3fcffda8e6e5870b3a533c483bca024501ab5", "title": "Cost-Aware WWW Proxy Caching Algorithms", "text": "Difference between Web Caching & Conventional Paging Problems: Web caching is variable-size caching (web documents vary dramatically in size depending on the information they carry (text, image, video, etc.)). Web pages take different amounts of time to download, even if they are of the same size (download latency). 
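The mutual-information-based selection idea above can be sketched with a greedy relevance-minus-redundancy loop. This is a simplified stand-in, not the paper's dynamic conditional re-estimation: it assumes inputs are already discretized, and the max-redundancy penalty is one common heuristic (in the style of CMIM/mRMR).

```python
import numpy as np

def mutual_info(a, b):
    """MI between two discrete variables, estimated from joint counts."""
    joint = np.histogram2d(a, b, bins=(np.unique(a).size, np.unique(b).size))[0]
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

def greedy_mi_selection(X, y, k=5):
    """Pick features maximizing MI(f; y) minus redundancy with chosen features."""
    chosen = []
    candidates = list(range(X.shape[1]))
    while len(chosen) < k and candidates:
        def score(f):
            red = max((mutual_info(X[:, f], X[:, c]) for c in chosen),
                      default=0.0)
            return mutual_info(X[:, f], y) - red
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return chosen
```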
Access streams seen by the proxy cache are the union of web access streams from tens to thousands of users (instead of coming from a few programmed sources as in the case of virtual memory paging)"} {"_id": "c2c9d4ec874b3ef4e84c3ac9b4f6c66a6697daaa", "title": "Deep into the Brain: Artificial Intelligence in Stroke Imaging", "text": "Artificial intelligence (AI), a computer system aiming to mimic human intelligence, is gaining increasing interest and is being incorporated into many fields, including medicine. Stroke medicine is one such area of application of AI, for improving the accuracy of diagnosis and the quality of patient care. For stroke management, adequate analysis of stroke imaging is crucial. Recently, AI techniques have been applied to decipher the data from stroke imaging and have demonstrated some promising results. In the very near future, such AI techniques may play a pivotal role in determining the therapeutic methods and predicting the prognosis for stroke patients in an individualized manner. In this review, we offer a glimpse at the use of AI in stroke imaging, specifically focusing on its technical principles, clinical application, and future perspectives."} {"_id": "a7e194c7f18f2c4f83993207f737733ff31f03b1", "title": "Syntactic Stylometry: Using Sentence Structure for Authorship Attribution", "text": "Most approaches to statistical stylometry have concentrated on lexical features, such as relative word frequencies or type-token ratios. Syntactic features have been largely ignored. This work attempts to fill that void by introducing a technique for authorship attribution based on dependency grammar. Syntactic features are extracted from texts using a common dependency parser, and those features are used to train a classifier to identify texts by author. While the method described does not outperform existing methods on most tasks, it does demonstrate that purely syntactic features carry information which could be useful for stylometric analysis. Index words: stylometry, authorship attribution, dependency grammar, machine learning"} {"_id": "67161d331d496ad5255ad8982759a1c853856932", "title": "Cooperative flood detection using GSMD via SMS", "text": "This paper proposes an architecture for an early-warning flood system to alert the public against flood disasters. An effective early warning system must be developed with linkages between four elements, which are accurate data collection to undertake risk assessments, development of hazard monitoring services, communication of risk-related information, and the existence of community response capabilities. This project focuses on monitoring water levels remotely using a wireless sensor network. The project also utilizes Global System for Mobile communication (GSM) and short message service (SMS) to relay data from sensors to computers or directly alert the respective victims through their mobile phones. It is hoped that the proposed architecture can be further developed into a functioning system, which would be beneficial to the community and act as a precautionary measure to save lives in the case of a flood disaster."} {"_id": "e78294368171f473a8a2b9bcbf230dd35573de65", "title": "Energy-Efficient Hierarchical Routing for Wireless Sensor Networks: A Swarm Intelligence Approach", "text": "Energy-efficient routing in wireless sensor networks (WSNs) requires a non-conventional paradigm for the design and development of power-aware protocols. 
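The variable-size, variable-cost setting described in the caching abstract above is what GreedyDual-Size, the policy associated with this line of work, was designed for: each cached document carries a value H = L + cost/size, the document with the smallest H is evicted, and L is inflated to the evicted H so that long-resident documents age out. A minimal sketch follows; the class shape and linear-scan eviction are simplifications (a priority queue would be used in practice).

```python
class GreedyDualSize:
    """Cost/size-aware eviction: H(doc) = L + cost/size; evict smallest H."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0        # inflation value: H of the last evicted document
        self.H = {}         # doc -> current H value
        self.size = {}

    def access(self, doc, size, cost):
        if doc in self.H:                      # hit: refresh H back up
            self.H[doc] = self.L + cost / size
            return True
        while self.used + size > self.capacity and self.H:
            victim = min(self.H, key=self.H.get)
            self.L = self.H.pop(victim)        # inflate the baseline
            self.used -= self.size.pop(victim)
        self.H[doc] = self.L + cost / size     # miss: fetch and insert
        self.size[doc] = size
        self.used += size
        return False
```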
Swarm intelligence (SI) based metaheuristics can be applied for optimal routing of data in an energy-constrained WSN environment. In this paper, we present BeeSwarm, an SI-based energy-efficient hierarchical routing protocol for WSNs. Our protocol consists of three phases: (1) Set-up phase-BeeCluster, (2) Route discovery phase-BeeSearch and (3) Data transmission phase-BeeCarrier. Integration of three phases for clustering, data routing and transmission is the key aspect of our proposed protocol, which ultimately contributes to its robustness. Evaluation of simulation results shows that BeeSwarm performs better in terms of packet delivery, energy consumption and throughput with increased network life compared to other SI-based hierarchical routing protocols."} {"_id": "97ddadb162ba02ddb57f4aa799041d88826634b7", "title": "Children's engagement with educational iPad apps: Insights from a Spanish classroom", "text": "This study investigates the effects of a story-making app called Our Story and a selection of other educational apps on the learning engagement of forty-one Spanish 4\u20135-year-olds. Children were observed interacting in small groups with the story-making app and this was compared to their engagement with a selection of construction and drawing apps. Children\u2019s engagement was analysed in two ways: it was categorised using Bangert-Drowns and Pyke\u2019s taxonomy for individual hands-on engagement with educational software, and using the concept of exploratory talk as developed by Mercer et al. to analyse peer engagement. For both approaches, quantitative and qualitative indices of children\u2019s engagement were considered. The overall findings suggested that in terms of the Bangert-Drowns and Pyke taxonomy, the quality of children\u2019s individual engagement was higher with the OS app in contrast to their engagement with other app software. The frequency of children\u2019s use of exploratory talk was similar with the OS and colouring and drawing apps, and a detailed qualitative analysis of the interaction transcripts revealed several instances of the OS and drawing apps supporting joint problem-solving and collaborative engagement. We suggest that critical indices of an app\u2019s educational value are the extent to which the app supports opportunities for open-ended content and children\u2019s independent use of increasingly difficult features."} {"_id": "6a013bf5f2e90eee5a98d276e65679ca4e622787", "title": "Integrating the Quality Attribute Workshop (QAW) and the Attribute-Driven Design (ADD) Method", "text": ""} {"_id": "3e84d803ed9fbfc4beb76005a33eeee1691c2db7", "title": "Action-Reaction: Forecasting the Dynamics of Human Interaction", "text": "Forecasting human activities from visual evidence is an emerging area of research which aims to allow computational systems to make predictions about unseen human actions. We explore the task of activity forecasting in the context of dual-agent interactions to understand how the actions of one person can be used to predict the actions of another. We model dual-agent interactions as an optimal control problem, where the actions of the initiating agent induce a cost topology over the space of reactive poses \u2013 a space in which the reactive agent plans an optimal pose trajectory. 
The technique developed in this work employs a kernel-based reinforcement learning approximation of the soft maximum value function to deal with the high-dimensional nature of human motion and applies a mean-shift procedure over a continuous cost function to infer a smooth reaction sequence. Experimental results show that our proposed method is able to properly model human interactions in a high dimensional space of human poses. When compared to several baseline models, results show that our method is able to generate highly plausible simulations of human interaction."} {"_id": "44cf7f45ddf24c178c5523a64a9aaed76cbf8f0f", "title": "Modeling Human Education Data: From Equation-Based Modeling to Agent-Based Modeling", "text": "Agent-based simulation is increasingly used to analyze the performance of complex systems. In this paper we describe results of our work on one specific agent-based model, showing how it can be validated against the equation-based model from which it was derived, and demonstrating the extent to which it can be used to derive additional results over and above those that the equation-based model can provide."} {"_id": "665bb05b43dec97c905c387c267302a27599f324", "title": "LITMUS: Landslide detection by integrating multiple sources", "text": "Disasters often lead to other kinds of disasters, forming multi-hazards such as landslides, which may be caused by earthquakes, rainfall, water erosion, among other reasons. Effective detection and management of multi-hazards cannot rely only on one information source. In this paper, we evaluate a landslide detection system, LITMUS, which combines multiple physical sensors and social media to handle the inherent varied origins and composition of multi-hazards. LITMUS integrates near real-time data from the USGS seismic network, the NASA TRMM rainfall network, Twitter, YouTube, and Instagram. The landslide detection process consists of several stages of social media filtering and integration with physical sensor data, with a final ranking of relevance by integrated signal strength. Applying LITMUS to data collected in October 2013, we analyzed and filtered 34.5k tweets, 2.5k video descriptions and 1.6k image captions containing landslide keywords, followed by integration with physical sources based on a Bayesian model strategy. This resulted in the detection of all 11 landslides reported by USGS and 31 more landslides unreported by USGS. An illustrative example is provided to demonstrate how LITMUS\u2019 functionality can be used to determine landslides related to the recent Typhoon Haiyan."} {"_id": "25239ec7fb6159166dfe15adf229fc2415f071df", "title": "An ontology-based model to determine the automation level of an automated vehicle for co-driving", "text": "Full autonomy of ground vehicles is a major goal of the ITS (Intelligent Transportation Systems) community. However, reaching such a high autonomy level in all situations (weather, traffic, ...) may seem difficult in practice, despite recent results regarding driverless cars (e.g., Google Cars). In addition, an automated vehicle should also self-assess its own perception abilities, and not only perceive its environment. In this paper, we propose an intermediate approach towards full automation, by defining a spectrum of automation layers, from fully manual (the car is driven by a driver) to fully automated (the car is driven by a computer), based on an ontological model for representing knowledge. 
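The "Bayesian model strategy" for combining physical and social sources in LITMUS can be illustrated with a naive-Bayes-style fusion of per-source likelihood ratios. This is a generic sketch under assumed numbers, not the paper's actual model: the prior and the per-source ratios below are purely illustrative.

```python
def fuse_sources(prior, likelihood_ratios):
    """Naive-Bayes fusion: multiply the prior odds by each source's LR."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# hypothetical per-source likelihood ratios for one geo-cell:
# P(signal | landslide) / P(signal | no landslide)
sources = {"seismic": 4.0, "rainfall": 2.5, "twitter": 3.0, "instagram": 1.5}
p = fuse_sources(prior=0.01, likelihood_ratios=sources.values())
print(f"posterior landslide probability: {p:.3f}")
```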
We also propose a second ontology for situation assessment (what does the automated car perceive?), including the state of sensors/actuators, environmental conditions and the driver's state. Finally, we also define inference rules to link the situation assessment ontology to the automation-level ontology. Both ontological models have been built and first results are presented."} {"_id": "ac912f1c708ea00e07d9d55ab076582436a33ceb", "title": "Optimization for allocating BEV recharging stations in urban areas by using hierarchical clustering", "text": "Battery Electric Vehicle (BEV) prototypes are evolving into a reality in the foreseeable future. Before BEVs can embark on mass adoption by road commuters, a new infrastructure for battery charging will have to be in place. By 2015, access to BEV charging will be available at nearly one million charge points in the United States, as claimed by CleanMPG. Early BEV adopters will primarily charge their vehicles at home, in their private garages. However, for many Asia-Pacific regions where people live in concrete ghettos, public charging will play a more central role due to reduced access to convenient home charging. This research focuses on planning BEV charging locations in urbanized areas, which are usually characterized by dense traffic concentrations, restricted street spaces and other complex factors such as the distribution of power grids. We propose a two-step model that first quantifies road information into data points, which are subsequently converged into 'demand clusters' over an urbanized area by hierarchical clustering analysis. Optimization techniques are then applied to the demand clusters with the aim of matching supply and demand, while certain constraints and cost factors are considered. The model is believed to be an important decision-support tool for city planning and BEV charging station allocation."} {"_id": "37babcee68e1a4ba8ab79f9072a5b0cb971472cd", "title": "Relevance and lexical pragmatics *", "text": "The goal of lexical pragmatics is to explain how linguistically specified (\u2018literal\u2019) word meanings are modified in use. While lexical-pragmatic processes such as narrowing, broadening and metaphorical extension are generally studied in isolation from each other, relevance theorists (Carston 2002, Wilson & Sperber 2002) have been arguing for a unified approach. I will continue this work by underlining some of the problems with more standard treatments, and show how a variety of lexical-pragmatic processes may be analysed as special cases of a general pragmatic adjustment process which applies spontaneously, automatically and unconsciously to fine-tune the interpretation of virtually every word."} {"_id": "670229acc298aa3db2dc24f2871b8a05cee158c8", "title": "Underwater sensor networks: applications, advances and challenges.", "text": "This paper examines the main approaches and challenges in the design and implementation of underwater wireless sensor networks. We summarize key applications and the main phenomena related to acoustic propagation, and discuss how they affect the design and operation of communication systems and networking protocols at various layers. 
We also provide an overview of communications hardware, testbeds and simulation tools available to the research community."} {"_id": "c5a8e3ff3c60440ac5e5a4573f76851a1061e33e", "title": "An improvement on the Euler number computing algorithm used in MATLAB", "text": "Computation of the Euler number of a binary image is often necessary in image matching, image database retrieval, image analysis, pattern recognition, and computer vision. This paper proposes an improvement on the Euler number computing algorithm used in the famous image processing tool MATLAB. By using the information obtained while processing the previous pixel, the number of neighbor-pixel checks required to process a pixel decreases from four to two. Our method is very simple in principle and easily implemented. The experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms."} {"_id": "977fe5853db16e320917a43fb00f334456625a1e", "title": "DIP: the Database of Interacting Proteins", "text": "The Database of Interacting Proteins (DIP; http://dip.doe-mbi.ucla.edu) is a database that documents experimentally determined protein-protein interactions. This database is intended to provide the scientific community with a comprehensive and integrated tool for browsing and efficiently extracting information about protein interactions and interaction networks in biological processes. Beyond cataloging details of protein-protein interactions, the DIP is useful for understanding protein function and protein-protein relationships, studying the properties of networks of interacting proteins, benchmarking predictions of protein-protein interactions, and studying the evolution of protein-protein interactions."} {"_id": "da86e24280d596ecda32cbed4f4a73d0f3695b67", "title": "Effects of wine, alcohol and polyphenols on cardiovascular disease risk factors: evidences from human studies.", "text": "AIMS\nThe aim of this review was to focus on the knowledge of the cardiovascular benefits of moderate alcohol consumption, as well as to analyze the effects of the different types of alcoholic beverages.\n\n\nMETHODS\nSystematic revision of human clinical studies and meta-analyses related to moderate alcohol consumption and cardiovascular disease (CVD) from 2000 to 2012.\n\n\nRESULTS\nHeavy or binge alcohol consumption unquestionably leads to increased morbidity and mortality. Nevertheless, moderate alcohol consumption, especially alcoholic beverages rich in polyphenols, such as wine and beer, seems to confer cardiovascular protective effects in patients with documented CVD and even in healthy subjects.\n\n\nCONCLUSIONS\nIn conclusion, wine and beer (but especially red wine) seem to confer greater cardiovascular protection than spirits because of their polyphenolic content. However, caution should be taken when making recommendations related to alcohol consumption."} {"_id": "a34c334d4cc8dfce03ec48fa7a88ab0c3817ace7", "title": "Powering MEMS portable devices \u2014 a review of non-regenerative and regenerative power supply systems with special emphasis on piezoelectric energy harvesting systems", "text": "Power consumption is forecast by the International Technology Roadmap for Semiconductors (ITRS) to pose long-term technical challenges for the semiconductor industry. 
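For context on the Euler-number computation discussed above, the classical approach (used by MATLAB-style implementations) counts 2x2 "bit-quad" patterns and combines the counts with Gray's formula; the paper's contribution is reducing the per-pixel neighbor checks within such a scan. A minimal bit-quad sketch, assuming a 0/1 image and standard 4-/8-connectivity formulas:

```python
import numpy as np

def euler_number(img, connectivity=8):
    """Euler number of a binary image via Gray's 2x2 bit-quad counts."""
    img = np.pad(img.astype(np.uint8), 1)                # zero border
    # number of set pixels in each 2x2 quad
    q = img[:-1, :-1] + img[:-1, 1:] + img[1:, :-1] + img[1:, 1:]
    c1 = np.count_nonzero(q == 1)                        # quads with one pixel
    c3 = np.count_nonzero(q == 3)                        # quads with three pixels
    diag = (img[:-1, :-1] == img[1:, 1:]) \
         & (img[:-1, 1:] == img[1:, :-1]) \
         & (img[:-1, :-1] != img[:-1, 1:])               # checkerboard quads
    cd = np.count_nonzero(diag)
    if connectivity == 4:
        return (c1 - c3 + 2 * cd) // 4
    return (c1 - c3 - 2 * cd) // 4                       # 8-connectivity

# one solid square: Euler number 1 (one object, no holes)
print(euler_number(np.ones((4, 4), dtype=int)))
```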
The purpose of this paper is threefold: (1) to provide an overview of strategies for powering MEMS via non-regenerative and regenerative power supplies; (2) to review the fundamentals of piezoelectric energy harvesting, along with recent advancements; and (3) to discuss future trends and applications for piezoelectric energy harvesting technology. The paper concludes with a discussion of research needs that are critical for the enhancement of piezoelectric energy harvesting devices."} {"_id": "06f25b058b85486ec1ba185dd545c8b74e7e594f", "title": "Cyber-Physical Systems in the SmartGrid", "text": "Radical changes are expected in the coming years in the electricity domain and in the grid itself, which has remained almost unchanged for the last 100 years. Value is created when interactions exist, and this is the main thrust of the emerging SmartGrid, which will rely heavily on IT technologies at several layers for monitoring and control. The basic building blocks are the existing efforts in the domain of the Internet of Things and the Internet of Services, which come together with cooperation as the key enabler. The SmartGrid is a complex ecosystem of heterogeneous (cooperating) entities that interact in order to provide the envisioned functionality. Advanced business services will take advantage of the near real-time information flows among all participants. In order to realize the SmartGrid promise, we will have to depend heavily on Cyber-Physical Systems (CPS) that will be able to monitor, share and manage information and actions in the business world as well as the real world. CPS are seen as an integral part of the SmartGrid; hence, several open issues will need to be effectively addressed."} {"_id": "5125b385447ad58b19960a2652b555373f640331", "title": "The New Frontier of Smart Grids", "text": "The power grid is a massive interconnected network used to deliver electricity from suppliers to consumers and has been a vital energy supply. To minimize the impact of climate change while at the same time maintaining social prosperity, smart energy must be embraced to ensure balanced economic growth and environmental sustainability. Therefore, in the last few years, the new concept of a smart grid (SG) has become a critical enabler in the contemporary world and has attracted increasing attention from policy makers and engineers. This article introduces the main concepts and technological challenges of SGs and presents the authors' views on the challenges and opportunities presented to the IEEE Industrial Electronics Society (IES) in this new and exciting frontier."} {"_id": "7b9ab27ad78899b6b284a17c38aa75fb0e1d1765", "title": "An information-centric energy infrastructure: The Berkeley view", "text": "We describe an approach for designing an essentially more scalable, flexible and resilient electric power infrastructure \u2013 one that encourages efficient use, integrates local generation, and manages demand through omnipresent awareness of energy availability and use over time. We are inspired by how the Internet has revolutionized communications infrastructure, by pushing intelligence to the edges while hiding the diversity of underlying technologies through well-defined interfaces. 
Any end device is a traffic source or sink and intelligent endpoints adapt their traffic to what the infrastructure can"} {"_id": "1be0ccc5fbb9c574c3f99b382cb8171ead2e6f68", "title": "Power and energy management for server systems", "text": "This survey shows that heterogeneous server clusters can be made more efficient by conserving power and energy while exploiting information from the service level, such as request priorities established by service-level agreements."} {"_id": "24ab79f38cb17dc0a62b98ed23122a4062f8f682", "title": "A Multilevel Inverter for Photovoltaic Systems With Fuzzy Logic Control", "text": "Converters for photovoltaic (PV) systems usually consist of two stages: a dc/dc booster and a pulsewidth modulated (PWM) inverter. This cascade of converters presents efficiency issues, interactions between its stages, and problems with the maximum power point tracking. Therefore, only part of the produced electrical energy is utilized. In this paper, the authors propose a single-phase H-bridge multilevel converter for PV systems governed by a new integrated fuzzy logic controller (FLC)/modulator. The novelties of the proposed system are the use of a fully fuzzy logic-based controller (requiring neither an optimal PWM switching-angle generator nor a proportional-integral controller) and the use of an H-bridge power-sharing algorithm. Most of the required signal processing is performed by a mixed-mode field-programmable gate array, resulting in a fully integrated System-on-Chip controller. The general architecture of the system and its main performance in a large spectrum of practical situations are presented and discussed. The proposed system offers improved performance over two-level inverters, particularly at low-medium power."} {"_id": "0154a72df1b9929743145794dd2b1ee4d8b23a60", "title": "Robot Programming by Demonstration with Crowdsourced Action Fixes", "text": "Programming by Demonstration (PbD) can allow end-users to teach robots new actions simply by demonstrating them. However, learning generalizable actions requires a large number of demonstrations that is unreasonable to expect from end-users. In this paper, we explore the idea of using crowdsourcing to collect action demonstrations from the crowd. We propose a PbD framework in which the end-user provides an initial seed demonstration, and then the robot searches for scenarios in which the action will not work and requests the crowd to fix the action for these scenarios. We use instance-based learning with a simple yet powerful action representation that allows an intuitive visualization of the action. Crowd workers directly interact with these visualizations to fix them. We demonstrate the utility of our approach with a user study involving local crowd workers (N=31) and analyze the collected data and the impact of alternative design parameters so as to inform a real-world deployment of our system."} {"_id": "726c76d5c13b2ad763c7c1ba52e08ef5e7078bfc", "title": "Efficient online structured output learning for keypoint-based object tracking", "text": "Efficient keypoint-based object detection methods are used in many real-time computer vision applications. These approaches often model an object as a collection of keypoints and associated descriptors, and detection then involves first constructing a set of correspondences between object and image keypoints via descriptor matching, and subsequently using these correspondences as input to a robust geometric estimation algorithm such as RANSAC to find the transformation of the object in the image. 
In such approaches, the object model is generally constructed offline, and does not adapt to a given environment at runtime. Furthermore, the feature matching and transformation estimation stages are treated entirely separately. In this paper, we introduce a new approach to address these problems by combining the overall pipeline of correspondence generation and transformation estimation into a single structured output learning framework. Following the recent trend of using efficient binary descriptors for feature matching, we also introduce an approach to approximate the learned object model as a collection of binary basis functions which can be evaluated very efficiently at runtime. Experiments on challenging video sequences show that our algorithm significantly improves over state-of-the-art descriptor matching techniques using a range of descriptors, as well as recent online learning based approaches."} {"_id": "e3454ea07e4e618989b9d6c658c3653c4577b81b", "title": "Basics of dermal filler rheology.", "text": "BACKGROUND\nHyaluronic acid injectable fillers are the most widely used dermal fillers to treat facial volume deficits, providing long-term facial aesthetic enhancement outcomes for the signs of aging and/or facial contouring.\n\n\nOBJECTIVES\nThe purpose of this article was to explain how rheology, the study of the flow of matter, can be used to help physicians differentiate between dermal fillers targeted to certain areas of the face.\n\n\nMETHODS\nThis article describes how rheological properties affect performance when filler is used in various parts of the face and exposed to mechanical stress (shear deformation and compression/stretching forces) associated with daily facial animation and other commonly occurring external forces.\n\n\nRESULTS\nImproving facial volume deficits with filler is linked mainly to gel viscoelasticity and cohesivity. These 2 properties set the level of resistance to lateral and vertical deformations of the filler and influence filler tissue integration through control of gel spreading.\n\n\nCONCLUSION\nSelection of dermal filler with the right rheological properties is a key factor in achieving a natural-looking long-lasting desired aesthetic outcome."} {"_id": "56fe16922b87f258ea511344d29a48d74adde182", "title": "Smart agent based prepaid wireless energy meter", "text": "Prepaid meters (PM) are becoming very popular, especially in developing countries. There are many advantages to using a prepaid meter as opposed to a postpaid meter, both for the utility provider and for the consumer. Brunei Darussalam has adopted PM, but it is neither intelligent nor wireless-enabled. Reading meters and topping up balances are still done manually. The utility provider has no information on usage statistics and only limited functionality for grid control. Accordingly, a novel software-agent-based wireless prepaid energy meter was developed using the Java Agent Development Environment (JADE-LEAP), allowing an agent from the utility provider to query the wireless energy meter for the energy values of every household. These statistics can be used for statistical computation of the power consumed and for policy and future planning."} {"_id": "498f3f655009f47981a1a48a94720e77f7f2608b", "title": "Online Keyword Spotting with a Character-Level Recurrent Neural Network", "text": "In this paper, we propose a context-aware keyword spotting model employing a character-level recurrent neural network (RNN) for spoken term detection in continuous speech. 
The RNN is end-to-end trained with connectionist temporal classification (CTC) to generate the probabilities of character and word-boundary labels. There is no need for phonetic transcription, senone modeling, or a system dictionary in training and testing. Also, keywords can easily be added and modified by editing the text-based keyword list without retraining the RNN. Moreover, the unidirectional RNN processes infinitely long input audio streams without pre-segmentation, and keywords are detected with low latency before the utterance is finished. Experimental results show that the proposed keyword spotter significantly outperforms the deep neural network (DNN) and hidden Markov model (HMM) based keyword-filler model even with less computation."} {"_id": "71f5afa6711410e53b14e4b7f10bbd74067ae0b4", "title": "RACOG and wRACOG: Two Probabilistic Oversampling Techniques", "text": "As machine learning techniques mature and are used to tackle complex scientific problems, challenges arise such as the imbalanced class distribution problem, where one of the target class labels is under-represented in comparison with other classes. Existing oversampling approaches for addressing this problem typically do not consider the probability distribution of the minority class while synthetically generating new samples. As a result, the minority class is not represented well, which leads to high misclassification error. We introduce two probabilistic oversampling approaches, namely RACOG and wRACOG, to synthetically generate and strategically select new minority class samples. The proposed approaches use the joint probability distribution of data attributes and Gibbs sampling to generate new minority class samples. While RACOG selects samples produced by the Gibbs sampler based on a predefined lag, wRACOG selects those samples that have the highest probability of being misclassified by the existing learning model. We validate our approach using nine UCI data sets that were carefully modified to exhibit class imbalance and one new application domain data set with inherent extreme class imbalance. In addition, we compare the classification performance of the proposed methods with three other existing resampling techniques."} {"_id": "b32e2a4d4894ac81d66211349320cd1f79b22942", "title": "Enhanced SAR ADC energy efficiency from the early reset merged capacitor switching algorithm", "text": "The early reset merged capacitor switching algorithm (EMCS) is proposed as an energy-reducing switching technique for a binary-weighted, capacitive successive approximation (SAR) analog-to-digital converter (ADC). The method uses the merged capacitor switching (MCS) architecture and optimizes the use of the VCM level during the SAR conversion. This algorithm can reduce switching power by over 12% with no additional DAC driver activity when compared to the MCS scheme. The MCS and EMCS approaches are analyzed mathematically and the EMCS energy consumption is shown to be lower than or equal to that of the MCS technique for every digital code. Static linearity improvements for this structure are also shown with the integral non-linearity (INL) reducing by a factor of two due to the utilization of the MCS three-level DAC. 
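On the keyword-spotting record above: the core training recipe it describes, a unidirectional character-level RNN trained with CTC, can be sketched compactly. The fragment below is illustrative only; the label set, feature shapes, and all hyperparameters are assumptions, and PyTorch's CTCLoss stands in for whatever implementation the authors used.

```python
# Sketch: a character-level RNN trained with CTC, in the spirit of the
# keyword spotter described above. Not the authors' code.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz' "      # assumed label inventory
BLANK = 0                                    # CTC blank index
N_CLASSES = len(CHARS) + 1                   # characters + blank

class CharSpotter(nn.Module):
    def __init__(self, n_mels=40, hidden=128):
        super().__init__()
        # Unidirectional GRU so the model can run on a live audio stream.
        self.rnn = nn.GRU(n_mels, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, N_CLASSES)

    def forward(self, feats):                # feats: (batch, time, n_mels)
        h, _ = self.rnn(feats)
        return self.out(h).log_softmax(-1)   # (batch, time, classes)

model = CharSpotter()
ctc = nn.CTCLoss(blank=BLANK)

feats = torch.randn(4, 200, 40)              # fake log-mel features
targets = torch.randint(1, N_CLASSES, (4, 12))
in_lens = torch.full((4,), 200, dtype=torch.long)
tgt_lens = torch.full((4,), 12, dtype=torch.long)

# CTCLoss expects (time, batch, classes) log-probabilities.
log_probs = model(feats).transpose(0, 1)
loss = ctc(log_probs, targets, in_lens, tgt_lens)
loss.backward()
print(float(loss))
```

Because no senone or dictionary supervision appears anywhere in the loss, editing the keyword list at inference time requires no retraining, which is the property the abstract emphasizes.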
The EMCS implementation methodology is also described."} {"_id": "0c6fa98b7b99d807df7c027e8e97751f1bbb9140", "title": "Data programming with DDLite: putting humans in a different part of the loop", "text": "Populating large-scale structured databases from unstructured sources is a critical and challenging task in data analytics. As automated feature engineering methods grow increasingly prevalent, constructing sufficiently large labeled training sets has become the primary hurdle in building machine learning information extraction systems. In light of this, we have taken a new approach called data programming [7]. Rather than hand-labeling data, in the data programming paradigm, users generate large amounts of noisy training labels by programmatically encoding domain heuristics as simple rules. Using this approach over more traditional distant supervision methods and fully supervised approaches using labeled data, we have been able to construct knowledge base systems more rapidly and with higher quality. Since the ability to quickly prototype, evaluate, and debug these rules is a key component of this paradigm, we introduce DDLite, an interactive development framework for data programming. This paper reports feedback collected from DDLite users across a diverse set of entity extraction tasks. We share observations from several DDLite hackathons in which 10 biomedical researchers prototyped information extraction pipelines for chemicals, diseases, and anatomical named entities. Initial results were promising, with the disease tagging team obtaining an F1 score within 10 points of the state-of-the-art in only a single day-long hackathon's work. Our key insights concern the challenges of writing diverse rule sets for generating labels, and exploring training data. These findings motivate several areas of active data programming research."} {"_id": "a5de09243b4b12fc4bcf4db56c8e38fc3beddf4f", "title": "Governance of an Enterprise Social Intranet Implementation: The Statkraft Case", "text": "Recent studies demonstrate that the implementation of enterprise social systems (ESSs) will move organizations into a new paradigm of social business, which results in enormous economic returns and competitive advantage. Social business creates a completely new way of working and organizing, characterised by social collaboration, intrinsic knowledge sharing, and voluntary mass participation, to name just a few. Thus, implementation of ESSs should tackle the uniqueness of the new way of working and organizing. However, there is a shortage of knowledge about implementation of these large enterprise systems. The purpose of this paper is to study the governance model of ESS implementation. A case study is conducted to investigate the implementation of the social intranet called the \u2018Stream\u2019 at Statkraft, which is a world-leading energy company in Norway. The governance model of \u2018Stream\u2019 emphasizes the close cooperation and accountability between corporate communication, human resources and IT, which implies a paradigm shift in the governance of implementing ESSs. Benefits and challenges in the implementation are also identified. Based on the knowledge and insights gained in the study, recommendations are proposed to assist the company in improving governance of ESS implementation. 
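A note on the DDLite record above: the data-programming idea it builds on, users writing small labeling functions whose noisy votes are combined into training labels, lends itself to a tiny sketch. The heuristics and the majority-vote combiner below are illustrative stand-ins, not DDLite's actual API.

```python
# Sketch: labeling functions for a toy disease-tagging task, combined
# by majority vote. Real systems learn a weighted label model instead.
ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_disease_suffix(span: str) -> int:
    # Heuristic: many disease names end in these suffixes.
    return POSITIVE if span.lower().endswith(("itis", "osis", "emia")) else ABSTAIN

def lf_all_caps_short(span: str) -> int:
    # Heuristic: short all-caps tokens are often gene symbols, not diseases.
    return NEGATIVE if span.isupper() and len(span) <= 5 else ABSTAIN

def lf_in_lexicon(span: str) -> int:
    lexicon = {"diabetes", "nephritis", "anemia"}      # toy dictionary
    return POSITIVE if span.lower() in lexicon else ABSTAIN

LFS = [lf_disease_suffix, lf_all_caps_short, lf_in_lexicon]

def majority_label(span: str) -> int:
    votes = [v for v in (lf(span) for lf in LFS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return POSITIVE if sum(v == POSITIVE for v in votes) > len(votes) / 2 else NEGATIVE

for span in ["nephritis", "TP53", "anemia", "table"]:
    print(span, majority_label(span))
```

The point of the paradigm is that iterating on these few lines of heuristics replaces hand-labeling thousands of examples.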
The study contributes knowledge/know-how on governance of ESS implementation."} {"_id": "26dda18412365d6c59866cf8cbc867a911727141", "title": "Do Better ImageNet Models Transfer Better?", "text": "Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy (r = 0.99 and 0.96, respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield penultimate layer features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested."} {"_id": "e9487dfc2fc9ff2dfdebab250560d567e6800f57", "title": "A randomized, efficient, and distributed protocol for the detection of node replication attacks in wireless sensor networks", "text": "Wireless sensor networks are often deployed in hostile environments, where an adversary can physically capture some of the nodes. Once a node is captured, the attacker can re-program it and replicate the node in a large number of clones, thus easily taking over the network. The detection of node replication attacks in a wireless sensor network is therefore a fundamental problem. A few distributed solutions have recently been proposed. However, these solutions are not satisfactory. First, they are energy and memory demanding: a serious drawback for any protocol that is to be used in a resource-constrained environment such as a sensor network. Further, they are vulnerable to specific adversary models introduced in this paper.\n The contributions of this work are threefold. First, we analyze the desirable properties of a distributed mechanism for the detection of node replication attacks. Second, we show that the known solutions for this problem do not completely meet our requirements. Third, we propose a new Randomized, Efficient, and Distributed (RED) protocol for the detection of node replication attacks and we show that it is completely satisfactory with respect to the requirements. Extensive simulations also show that our protocol is highly efficient in communication, memory, and computation, that it sets out an improved attack detection probability compared to the best solutions in the literature, and that it is resistant to the new kind of attacks we introduce in this paper, while other solutions are not."} {"_id": "6f7745843ca4207567fc55f1023211f3fdde3ac2", "title": "TechWare: Financial Data and Analytic Resources [Best of the Web]", "text": "In this issue, \u201cBest of the Web\u201d focuses on data resources and analytic tools for quantitative analysis of financial markets. 
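On the ImageNet-transfer record above: the "fixed feature extractor" setting it evaluates amounts to freezing a pretrained network, taking penultimate-layer features, and fitting a simple classifier on the target dataset. A minimal sketch follows; the random tensors stand in for a real target dataset, and the choice of ResNet-50 plus logistic regression is an assumption for illustration.

```python
# Sketch: transfer via a frozen ImageNet backbone's penultimate features.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Pretrained ResNet-50 with the classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()           # expose 2048-d penultimate features
backbone.eval()

@torch.no_grad()
def extract(batch):                          # batch: (n, 3, 224, 224), normalized
    return backbone(batch).numpy()

# Fake stand-ins for a small target dataset.
train_x = extract(torch.randn(32, 3, 224, 224))
train_y = [i % 4 for i in range(32)]         # 4 hypothetical classes

clf = LogisticRegression(max_iter=1000).fit(train_x, train_y)
print("train accuracy:", clf.score(train_x, train_y))
```

The paper's finding is that how the backbone was regularized during ImageNet training strongly affects how useful these frozen features are, even when ImageNet accuracy barely changes.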
An abundance of financial data is reshaping trading, investment research, and risk management. The broad availability of this information creates opportunities to introduce analysis techniques that are new to the financial industry. The financial industry is currently dominated by a handful of workhorse models, such as the capital asset pricing model and the Black-Scholes options pricing model."} {"_id": "391122d9722b376612e22a6f102ddc5940b80b6b", "title": "Web Mining Techniques in E-Commerce Applications", "text": "Today, the web is the best medium of communication in modern business. Many companies are redefining their business strategies to improve business output. Business over the Internet provides customers and partners with a place where their products and specific businesses can be found. Nowadays, online business breaks the barriers of time and space compared to the physical office. Big companies around the world are realizing that e-commerce is not just buying and selling over the Internet; rather, it improves their efficiency in competing with other giants in the market. For this purpose, data mining, sometimes called knowledge discovery, is used. Web mining is a data mining technique applied to the WWW. There are vast quantities of information available over the Internet."} {"_id": "bbfb87e5695e84e36a55719301f0d70ca16e1cc6", "title": "You Are What You Post: What the Content of Instagram Pictures Tells About Users' Personality", "text": "Instagram is a popular social networking application that allows users to express themselves through the uploaded content and the different filters they can apply. In this study we look at the relationship between the content of the uploaded Instagram pictures and the personality traits of users. To collect data, we conducted an online survey where we asked participants to fill in a personality questionnaire, and grant us access to their Instagram account through the Instagram API. We gathered 54,962 pictures of 193 Instagram users. Through the Google Vision API, we analyzed the pictures' content and clustered the returned labels with the k-means clustering approach. With a total of 17 clusters, we analyzed the relationship with users\u2019 personality traits. Our findings suggest a relationship between personality traits and picture content. This allows for new ways to extract personality traits from social media trails, and new ways to facilitate personalized systems."} {"_id": "d333d23a0c178cce0132a7f2cc28b809115e3446", "title": "The stressed hippocampus, synaptic plasticity and lost memories", "text": "Stress is a biologically significant factor that, by altering brain cell properties, can disturb cognitive processes such as learning and memory, and consequently limit the quality of human life. Extensive rodent and human research has shown that the hippocampus is not only crucially involved in memory formation, but is also highly sensitive to stress. So, the study of stress-induced cognitive and neurobiological sequelae in animal models might provide valuable insight into the mnemonic mechanisms that are vulnerable to stress. 
Here, we provide an overview of the neurobiology of stress\u2013memory interactions, and present a neural\u2013endocrine model to explain how stress modifies hippocampal functioning."} {"_id": "099b097ecbadf722489f2ff9c1a1fcfc28ac7dbb", "title": "Physics 101: Learning Physical Object Properties from Unlabeled Videos", "text": "We study the problem of learning physical properties of objects from unlabeled videos. Humans can learn basic physical laws when they are very young, which suggests that such tasks may be important goals for computational vision systems. We consider various scenarios: objects sliding down an inclined surface and colliding; objects attached to a spring; objects falling onto various surfaces, etc. Many physical properties like mass, density, and coefficient of restitution influence the outcome of these scenarios, and our goal is to recover them automatically. We have collected 17,408 video clips containing 101 objects of various materials and appearances (shapes, colors, and sizes). Together, they form a dataset, named Physics 101, for studying object-centered physical properties. We propose an unsupervised representation learning model, which explicitly encodes basic physical laws into the structure and uses them, with automatically discovered observations from videos, as supervision. Experiments demonstrate that our model can learn physical properties of objects from video. We also illustrate how its generative nature enables solving other tasks such as outcome prediction."} {"_id": "1c18f02b5247c6de4b319f2638707d63b11d5cd7", "title": "A tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the \"echo state network\" approach", "text": "This tutorial is a worked-out version of a 5-hour course originally held at AIS in September/October 2002. It has two distinct components. First, it contains a mathematically-oriented crash course on traditional training methods for recurrent neural networks, covering back-propagation through time (BPTT), real-time recurrent learning (RTRL), and extended Kalman filtering approaches (EKF). This material is covered in Sections 2 \u2013 5. The remaining sections 1 and 6 \u2013 9 are much more gentle, more detailed, and illustrated with simple examples. They are intended to be useful as a stand-alone tutorial for the echo state network (ESN) approach to recurrent neural network training."} {"_id": "9fca534df83ada8e7ddeb68919f12126d0098082", "title": "Advanced Features for Enterprise-Wide Role-Based Access Control", "text": "The administration of users and access rights in large enterprises is a complex and challenging task. Roles are a powerful concept for simplifying access control, but their implementation is normally restricted to single systems and applications. In this article we define Enterprise Roles capable of spanning all IT systems in an organisation. We show how the Enterprise Role-Based Access Control (ERBAC) model exploits the RBAC model outlined in the NIST standard draft [5] and describe its extensions. We have implemented ERBAC as a basic concept of SAM Jupiter, a commercial security administration tool. Based on practical experience with the deployment of Enterprise Roles during SAM implementation projects in large organisations, we have enhanced the ERBAC model by including different ways of parametrising the roles. 
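As a companion to the RNN-training tutorial record above: the echo state network approach it covers trains only a linear readout on top of a fixed random reservoir. Below is a minimal numpy sketch; the reservoir size, spectral radius, ridge penalty, and the toy sine-prediction task are all illustrative choices, not anything prescribed by the tutorial.

```python
# Sketch: a minimal echo state network -- fixed random reservoir,
# ridge-regression readout. All sizes and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random input and reservoir weights; only W_out is learned.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius to 0.9

def run_reservoir(u):                         # u: (T, n_in)
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W_in @ ut + W @ x)        # basic (non-leaky) ESN update
        states[t] = x
    return states

# Toy task: predict sin(t + dt) from sin(t).
t = np.arange(0, 60, 0.1)
u, y = np.sin(t)[:-1, None], np.sin(t)[1:]
X = run_reservoir(u)

washout = 100                                 # discard the initial transient
A = X[washout:]
# Ridge-regression readout (regularized least squares).
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ y[washout:])
pred = A @ W_out
print("MSE:", float(np.mean((pred - y[washout:]) ** 2)))
```

Keeping the spectral radius below 1 is what gives the reservoir the fading memory ("echo state") property the approach relies on.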
We show that using parameters can significantly reduce the number of roles needed in an enterprise and simplify the role structure, thereby reducing the administration effort considerably. The enhanced ERBAC features are illustrated by real-life examples."} {"_id": "ff60d4601adabe04214c67e12253ea3359f4e082", "title": "Video-based emotion recognition in the wild using deep transfer learning and score fusion", "text": "Multimodal recognition of affective states is a difficult problem, unless the recording conditions are carefully controlled. For recognition \u201cin the wild\u201d, large variances in face pose and illumination, cluttered backgrounds, occlusions, audio and video noise, as well as issues with subtle cues of expression are some of the issues to target. In this paper, we describe a multimodal approach for video-based emotion recognition in the wild. We propose using summarizing functionals of complementary visual descriptors for video modeling. These features include deep convolutional neural network (CNN) based features obtained via transfer learning, for which we illustrate the importance of flexible registration and fine-tuning. Our approach combines audio and visual features with least squares regression based classifiers and weighted score level fusion. We report state-of-the-art results on the EmotiW Challenge for \u201cin the wild\u201d facial expression recognition. Our approach scales to other problems, and ranked top in the ChaLearn-LAP First Impressions Challenge 2016 from video clips collected in the wild."} {"_id": "53275aea89844c503daf8e4d0864764201def8f3", "title": "Learning deep representations by mutual information estimation and maximization", "text": "This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation\u2019s suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals."} {"_id": "4af39befa8fa3d1e4d8f40e0a2460937df3dac4d", "title": "Clinical practice guidelines for the management of pain, agitation, and delirium in adult patients in the intensive care unit.", "text": "OBJECTIVE\nTo revise the \"Clinical Practice Guidelines for the Sustained Use of Sedatives and Analgesics in the Critically Ill Adult\" published in Critical Care Medicine in 2002.\n\n\nMETHODS\nThe American College of Critical Care Medicine assembled a 20-person, multidisciplinary, multi-institutional task force with expertise in guideline development, pain, agitation and sedation, delirium management, and associated outcomes in adult critically ill patients. The task force, divided into four subcommittees, collaborated over 6 yr in person, via teleconferences, and via electronic communication. 
Subcommittees were responsible for developing relevant clinical questions, using the Grading of Recommendations Assessment, Development and Evaluation method (http://www.gradeworkinggroup.org) to review, evaluate, and summarize the literature, and to develop clinical statements (descriptive) and recommendations (actionable). With the help of a professional librarian and Refworks database software, they developed a Web-based electronic database of over 19,000 references extracted from eight clinical search engines, related to pain and analgesia, agitation and sedation, delirium, and related clinical outcomes in adult ICU patients. The group also used psychometric analyses to evaluate and compare pain, agitation/sedation, and delirium assessment tools. All task force members were allowed to review the literature supporting each statement and recommendation and provided feedback to the subcommittees. Group consensus was achieved for all statements and recommendations using the nominal group technique and the modified Delphi method, with anonymous voting by all task force members using E-Survey (http://www.esurvey.com). All voting was completed in December 2010. Relevant studies published after this date and prior to publication of these guidelines were referenced in the text. The quality of evidence for each statement and recommendation was ranked as high (A), moderate (B), or low/very low (C). The strength of recommendations was ranked as strong (1) or weak (2), and either in favor of (+) or against (-) an intervention. A strong recommendation (either for or against) indicated that the intervention's desirable effects either clearly outweighed its undesirable effects (risks, burdens, and costs) or it did not. For all strong recommendations, the phrase \"We recommend \u2026\" is used throughout. A weak recommendation, either for or against an intervention, indicated that the trade-off between desirable and undesirable effects was less clear. For all weak recommendations, the phrase \"We suggest \u2026\" is used throughout. In the absence of sufficient evidence, or when group consensus could not be achieved, no recommendation (0) was made. Consensus based on expert opinion was not used as a substitute for a lack of evidence. A consistent method for addressing potential conflict of interest was followed if task force members were coauthors of related research. The development of this guideline was independent of any industry funding.\n\n\nCONCLUSION\nThese guidelines provide a roadmap for developing integrated, evidence-based, and patient-centered protocols for preventing and treating pain, agitation, and delirium in critically ill patients."} {"_id": "5772ebf6fd60e5e86081799934891815f02173fd", "title": "Emotional Facial Expression Classification for Multimodal User Interfaces", "text": "We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (that are part of the MPEG4 feature points) to extract relevant emotional information (basically five distances, presence of wrinkles and mouth shape). The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned with a database of 399 images. For the moment, the method is applied to static images. Application to sequences is now being developed. 
The extraction of such information about the user is of great interest for the development of new multimodal user interfaces."} {"_id": "07283e28e324149afb0e2ed36c67c55a640d4a4f", "title": "A framework for evaluating multimodal music mood classification", "text": "This research proposes a framework of music mood classification that utilizes multiple and complementary information sources, namely, music audio, lyric text and social tags associated with music pieces. This article presents the framework and a thorough evaluation on each of its components. Experiment results on a large dataset of 18 mood categories show that combining lyrics and audio significantly outperformed systems using audio-only features. Automatic feature selection techniques were further proved to have reduced the feature space. In addition, the examination of learning curves shows that the hybrid systems using lyrics and audio needed fewer training samples and shorter audio clips to achieve the same or better classification accuracies than systems using lyrics or audio singularly. Last but not least, performance comparisons reveal the relative importance of audio and lyric features across mood categories. Introduction. Music is an essential information type in people\u2019s everyday life. Nowadays, there are a large number of music collections, repositories, and websites that strive to provide convenient access to music for various users, from musicians to the general public. These repositories and users often use different types of metadata to describe music such as genre, artist, country of source and music mood (Vignoli, 2004; Hu & Downie, 2007). Many of these music repositories have been relying on manual supply of music metadata, but the increasing amount of music data calls for tools that can automatically classify music pieces. Music mood classification has thus been attracting researchers\u2019 attention in the last decade, but many existing classification systems are solely based on information extracted from the audio recordings of music and have achieved suboptimal performances or reached a \u201cglass ceiling\u201d of performance (Lu, Liu, & Zhang, 2006; Trohidis, Tsoumakas, Kalliris, & Vlahavas, 2008; Hu, Downie, Laurier, Bay, & Ehmann, 2008, Yang & Chen, 2012, Barthet, Fazekas, & Sandler, 2013). At roughly the same time, studies have reported that lyrics and social tags associated with music have important value in Music Information Retrieval (MIR) research. For example, Cunningham, Downie, and Bainbridge (2005) reported lyrics as the most mentioned feature by respondents in answering why they hated a song. Geleijnse, Schedl, and Knees (2007) proposed an effective method of measuring artist similarity using social tags associated with the artists. As lyrics often carry the semantics of human language, they have been exploited in music classification as well (e.g., He et al, 2008; Hu, Chen, & Yang, 2009; Van Zaanen & Kanters, 2010; Dakshina & Sridhar, 2014). Furthermore, based on the hypothesis that lyrics and music audio are different enough and thus may complement each other, researchers have started to combine lyrics and audio for improved classification performances (Laurier, Grivolla, & Herrera, 2008; Yang et al., 2008; Bj\u00f6rn, Johannes, & Gerhard, 2010; Brilis et al., 2012). Such an approach of combining multiple information sources in solving classification problems is commonly called multimodal classification (Kim, et al., 2010; Yang & Chen, 2012; Barthet et al., 2013). 
Multimodal classification approaches in general are reported to have improved classification performances over those based on a single source. However, there are many options and decisions involved in a multimodal classification approach, and to date, there has not been any general guidance on how to make these decisions in order to achieve more effective classifications. This study proposes a framework of multimodal music mood classification where research questions on each specific stage or component of the classification process can be answered. This is one of the first studies presenting a comprehensive experiment on a multimodal dataset of 5,296 unique songs, which exemplifies every stage of the framework and demonstrates how the performance of music mood classification can be improved by a multimodal approach. Specifically, the novelty and contributions of this study can be summarized as follows: 1. Conceptualize a framework for the entire process of automatic music mood classification using multiple information sources. The framework is flexible in that each component can be easily extended by adding new methods, algorithms and tools. Under the framework, this study systematically answers questions often involved in multimodal classification: feature extraction, feature selection, ensemble methods, etc. 2. Following a previous study evaluating a wide range of lyric features and their combinations (Hu & Downie, 2010), this study further explores feature selection and the effect of dimension reduction of feature spaces. Thus, it pushes forward the state-of-the-art on sentiment analysis for music lyrics; 3. Examine the reduction of training data brought by the multimodal approach. This aspect of improvement has rarely been addressed by previous studies on multimodal music classification. Both the effect on the number of training examples and that on the length of audio clips are evaluated in this study. 4. Compare relative advantages of lyrics and audio across different mood categories. To date, there is little evidence on which information source works better for which mood category(ies). Gaining insight into this question can contribute to a deeper understanding of the sources and components of music mood. 5. Build a large ground truth dataset for the task of multimodal music mood classification. The dataset contains 5,296 unique songs in 18 mood categories. This is one of the largest experimental datasets in music mood classification with both audio and lyrics available (Kim, et al., 2010; Yang & Chen, 2012; Barthet et al., 2013). Results from a large dataset with realistic and representative mood categories are more generalizable and of higher practical value. The rest of the paper is organized as follows. Related work is reviewed and research questions are stated. After that, a framework for multimodal music mood classification is proposed. We then report an experiment with ternary information and conclude by discussing the findings and pointing out future work on enriching the proposed framework."} {"_id": "5ca6217b3e8353778d05fe58bcc5a9ea79707287", "title": "Malaysian E-government: Issues and Challenges in Public Administration", "text": "E-government has become part and parcel of every government\u2019s agenda. Many governments have embraced its significant impacts and influences on governmental operations. 
As the technology mantra has become more ubiquitous, governments have decided to inaugurate e-government policies in their agencies and departments in order to enhance the quality of services, improve transparency and provide greater accountability. As for Malaysia, the government is inspired by the wave of e-government, as its establishment can improve the quality of public service delivery, and also its internal operations. This qualitative study will explore the implementation status of e-government initiatives as a case study, and will also provide a comparative evaluation of these findings, using the South Korean government as a benchmark, given its outstanding performance in e-government. The findings of this study highlight potential areas for improvement from the public administration perspective, and through this comparative approach Malaysia can learn lessons from South Korea\u2019s practices to ensure the success of e-government projects."} {"_id": "3af3f7b4f48e4aa6b9d1c1748b746ed2d8457b74", "title": "Affect in Human-Robot Interaction", "text": "More and more, robots are expected to interact with humans in a social, easily understandable manner, which presupposes effective use of robot affect. This chapter provides a brief overview of research advances into this important aspect of human-robot interaction. Keywords: human-robot interaction, affective robotics, robot behavior. I. Introduction and Motivation. Humans possess an amazing capability of attributing life and affect to inanimate objects (Reeves and Nass 96, Melson et al 09). Robots take this to the next level, even beyond that of virtual characters due to their embodiment and situatedness. They offer the opportunity for people to bond with them by maintaining a physical presence in their world, in some ways comparable to other beings, such as fellow humans and pets. This raises a broad range of questions in terms of the role of affect in human-robot interaction (HRI), which will be discussed in this article: \u2022 What is the role of affect for a robot and in what ways can it add value and risk to human-robot relationships? Can robots be companions, friends, even intimates to people? \u2022 Is it necessary for a robot to actually experience emotion in order to convey its internal state to a person? Is emotion important in enhancing HRI and if so when and where? \u2022 What approaches, theories, representations, and experimental methods inform affective HRI research?
"} {"_id": "12f7b71324ee8e1796a9ef07af05b66674fe6af0", "title": "Collective annotation of Wikipedia entities in web text", "text": "To take the first step beyond keyword-based search toward entity-based search, suitable token spans (\"spots\") on documents must be identified as references to real-world entities from an entity catalog. Several systems have been proposed to link spots on Web pages to entities in Wikipedia. They are largely based on local compatibility between the text around the spot and textual metadata associated with the entity. Two recent systems exploit inter-label dependencies, but in limited ways. We propose a general collective disambiguation approach. Our premise is that coherent documents refer to entities from one or a few related topics or domains. We give formulations for the trade-off between local spot-to-entity compatibility and measures of global coherence between entities. Optimizing the overall entity assignment is NP-hard. We investigate practical solutions based on local hill-climbing, rounding integer linear programs, and pre-clustering entities followed by local optimization within clusters. In experiments involving over a hundred manually-annotated Web pages and tens of thousands of spots, our approaches significantly outperform recently-proposed algorithms."} {"_id": "77d2698e8efadda698b0edb457cd8de75224bfa0", "title": "Knowledge Base Population: Successful Approaches and Challenges", "text": "In this paper we give an overview of the Knowledge Base Population (KBP) track at the 2010 Text Analysis Conference. The main goal of KBP is to promote research in discovering facts about entities and augmenting a knowledge base (KB) with these facts. This is done through two tasks, Entity Linking \u2013 linking names in context to entities in the KB \u2013 and Slot Filling \u2013 adding information about an entity to the KB. A large source collection of newswire and web documents is provided from which systems are to discover information. Attributes (\u201cslots\u201d) derived from Wikipedia infoboxes are used to create the reference KB. In this paper we provide an overview of the techniques which can serve as a basis for a good KBP system, lay out the remaining challenges by comparison with traditional Information Extraction (IE) and Question Answering (QA) tasks, and provide some suggestions to address these challenges."} {"_id": "0638d1f7d37f6bda49f6ec951de37aca0e53b98a", "title": "Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation", "text": "Conversation between two people is usually mixed-initiative, with control over the conversation being transferred from one person to another. We apply a set of rules for the transfer of control to 4 sets of dialogues consisting of a total of 1862 turns. The application of the control rules lets us derive domain-independent discourse structures. The derived structures indicate that initiative plays a role in the structuring of discourse. In order to explore the relationship of control and initiative to discourse processes like centering, we analyze the distribution of four different classes of anaphora for two data sets. This distribution indicates that some control segments are hierarchically related to others. The analysis suggests that discourse participants often mutually agree to a change of topic. 
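On the collective-annotation record above: the trade-off it formulates, local spot-to-entity compatibility plus global entity-entity coherence, and the local hill-climbing heuristic it investigates can be sketched in a few lines. All scores, spots, and candidate entities below are toy values invented for illustration.

```python
# Sketch: joint entity disambiguation as local compatibility + global
# coherence, optimized by one-spot-at-a-time hill climbing.
from itertools import combinations

# Local compatibility phi(spot, entity) and pairwise coherence psi(e1, e2).
phi = {
    ("jaguar", "Jaguar_(animal)"): 0.5, ("jaguar", "Jaguar_Cars"): 0.4,
    ("xk", "Jaguar_XK"): 0.9, ("xk", "XK_(gene)"): 0.3,
}
psi = {frozenset({"Jaguar_Cars", "Jaguar_XK"}): 1.0}
candidates = {"jaguar": ["Jaguar_(animal)", "Jaguar_Cars"],
              "xk": ["Jaguar_XK", "XK_(gene)"]}

def score(assign):
    local = sum(phi.get((s, e), 0.0) for s, e in assign.items())
    coherence = sum(psi.get(frozenset({e1, e2}), 0.0)
                    for e1, e2 in combinations(assign.values(), 2))
    return local + coherence

# Start from the best purely-local choice, then hill-climb.
assign = {s: max(es, key=lambda e: phi.get((s, e), 0.0))
          for s, es in candidates.items()}
improved = True
while improved:
    improved = False
    for s, es in candidates.items():
        best = max(es, key=lambda e: score({**assign, s: e}))
        if best != assign[s]:
            assign[s], improved = best, True

print(assign, score(assign))
# Coherence with "Jaguar_XK" flips "jaguar" from the animal to the car maker.
```

The exact objective is NP-hard to optimize, which is why the paper also studies LP-rounding and pre-clustering alternatives to this greedy search.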
We also compared initiative in task-oriented and advice-giving dialogues and found that both the allocation of control and the manner in which control is transferred are radically different for the two dialogue types. These differences can be explained in terms of collaborative planning principles."} {"_id": "1c909ac1c331c0c246a88da047cbdcca9ec9b7e7", "title": "Large-Scale Named Entity Disambiguation Based on Wikipedia Data", "text": "This paper presents a large-scale system for the recognition and semantic disambiguation of named entities based on information extracted from a large encyclopedic collection and Web search results. It describes in detail the disambiguation paradigm employed and the information extraction process from Wikipedia. Through a process of maximizing the agreement between the contextual information extracted from Wikipedia and the context of a document, as well as the agreement among the category tags associated with the candidate entities, the implemented system shows high disambiguation accuracy on both news stories and Wikipedia articles."} {"_id": "2b2c30dfd3968c5d9418bb2c14b2382d3ccc64b2", "title": "DBpedia: A Nucleus for a Web of Open Data", "text": "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data."} {"_id": "557e71f87073be437908c1d033500acbe7670712", "title": "On error management: lessons from aviation.", "text": "Copies of the full protocol and details of training programmes are available from Association of Litigation and Risk Management (ALARM), Royal Society of Medicine, 1 Wimpole Street, London W1. Contributors: CV and ST-A carried out the research on which the original protocol was based. All authors participated equally in the development of the protocol, in which successive versions were tested in clinical practice and refined in the light of experience. The writing of the original protocol and present paper was primarily carried out by CV, ST-A, EJC, and DH, but all authors contributed to the final version. CV and DH are the guarantors. Competing interests: CV received funding from Healthcare Risk Resources International to support the work of ST-A during the development of the protocol."} {"_id": "3f007c43a7b4bb5a052b7a16d4e33c4842a8244a", "title": "Self-assembly of neural networks viewed as swarm intelligence", "text": "While self-assembly is a fairly active area of research in swarm intelligence, relatively little attention has been paid to the issues surrounding the construction of network structures. In this paper we extend methods developed previously for controlling collective movements of agent teams to serve as the basis for self-assembly or \u201cgrowth\u201d of networks, using neural networks as a concrete application to evaluate our approach. 
Our central innovation is having network connections arise as persistent \u201ctrails\u201d left behind by moving agents, trails that are reminiscent of pheromone deposits made by agents in ant colony optimization models. The resulting network connections are thus essentially a record of agent movements. We demonstrate our model\u2019s effectiveness by using it to produce two large networks that support subsequent learning of topographic and feature maps. Improvements produced by the incorporation of collective movements are also examined through computational experiments. These results indicate that methods for directing collective movements can be adopted to facilitate network self-assembly."} {"_id": "92930f4279b48f7e4e8ec2edc24e8aa65c5954fd", "title": "Client Profiling for an Anti-Money Laundering System", "text": "We present a data mining approach for profiling bank clients in order to support the process of detection of anti-money laundering operations. We first present the overall system architecture, and then focus on the relevant component for this paper. We detail the experiments performed on real world data from a financial institution, which allowed us to group clients in clusters and then generate a set of classification rules. We discuss the relevance of the discovered client profiles and of the generated classification rules. According to the defined overall agent-based architecture, these rules will be incorporated in the knowledge base of the intelligent agents responsible for the signaling of suspicious transactions."} {"_id": "f64d11fa836bf5bde5dc730f4311e899aea1047c", "title": "Strength Training for Endurance Athletes: Theory to Practice", "text": "The purpose of this review is twofold: to elucidate the utility of resistance training for endurance athletes, and provide the practitioner with evidence-based periodization strategies for concurrent strength and endurance training in athletic populations. Both low-intensity exercise endurance (LIEE) and high-intensity exercise endurance (HIEE) have been shown to improve as a result of maximal, high-force, low-velocity (HFLV) and explosive, low-force, high-velocity strength training. HFLV strength training is recommended initially to develop a neuromuscular base for endurance athletes with limited strength training experience. A sequenced approach to strength training involving phases of strength-endurance, basic strength, strength, and power will provide further enhancements in LIEE and HIEE for high-level endurance athletes."} {"_id": "526fb409521b6e7ef1afa771b30afe30e71e8c2b", "title": "Dual Switches DC/DC Converter With Three-Winding-Coupled Inductor and Charge Pump", "text": "In order to obtain a converter with high step-up voltage gain and high efficiency, this paper proposes a dual-switch dc/dc converter with a three-winding-coupled inductor and a charge pump. The proposed converter is composed of a dual-switch structure, a three-winding-coupled inductor, and a charge pump. This combination facilitates realization of high step-up voltage gain with a low voltage/current stress on the power switches. Meanwhile, the voltage across the diodes is low and the diode reverse-recovery problem is alleviated by the leakage inductance of the three-winding-coupled inductor. Taking all these into consideration, the efficiency can be high. 
This paper illustrates the operation principle of the proposed converter, discusses the effect of leakage inductance on the voltage gain, presents the conditions for zero-current turn-off of the diodes, shows the voltage and current stress of the power devices, and conducts a comparison between the performance of the proposed converter and previous high step-up converters. Finally, a prototype rated at 500 W has been built, and the experimental results verify the correctness of the analysis."} {"_id": "5b7addfb161b6e43937c9b8db3c85f10de671d0c", "title": "Learning from positive and unlabeled examples", "text": "In many machine learning settings, labeled examples are difficult to collect while unlabeled data are abundant. Also, for some binary classification problems, positive examples which are elements of the target concept are available. Can these additional data be used to improve the accuracy of supervised learning algorithms? We investigate in this paper the design of learning algorithms from positive and unlabeled data only. Many machine learning and data mining algorithms, such as decision tree induction algorithms and naive Bayes algorithms, use examples only to evaluate statistical queries (SQ-like algorithms). Kearns designed the statistical query learning model in order to describe these algorithms. Here, we design an algorithm scheme which transforms any SQ-like algorithm into an algorithm based on positive statistical queries (estimates of probabilities over the set of positive instances) and instance statistical queries (estimates of probabilities over the instance space). We prove that any class learnable in the statistical query learning model is learnable from positive statistical queries and instance statistical queries only if a lower bound on the weight of any target concept f can be estimated in polynomial time. Then, we design a decision tree induction algorithm POSC4.5, based on C4.5, that uses only positive and unlabeled examples and we give experimental results for this algorithm. In the case of imbalanced classes in the sense that one of the two classes (say the positive class) is heavily underrepresented compared to the other class, the learning problem remains open. This problem is challenging because it is encountered in many real-world applications."} {"_id": "c2bd37348784b4e6c16c7ab5ca8317987f3a73dd", "title": "Specificity of genetic and environmental risk factors for use and abuse/dependence of cannabis, cocaine, hallucinogens, sedatives, stimulants, and opiates in male twins.", "text": "OBJECTIVE\nData on use and misuse of six classes of illicit substances by male twin pairs were used to examine whether genetic and shared environmental risk factors for substance use disorders are substance-specific or -nonspecific in their effect.\n\n\nMETHOD\nLifetime history of use and abuse/dependence of cannabis, cocaine, hallucinogens, sedatives, stimulants, and opiates was assessed at personal interview in both members of 1,196 male-male twin pairs ascertained by the Virginia Twin Registry. Multivariate twin modeling of substance-nonspecific (common) and substance-specific genetic, shared environmental, and unique environmental risk factors was performed by using the program Mx.\n\n\nRESULTS\nHigh levels of comorbidity involving the different substance categories were observed for both use and abuse/dependence. 
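A note on the positive-unlabeled learning record above: the setting is easy to demonstrate, though the sketch below uses the classic Elkan-Noto correction (fit a labeled-vs-unlabeled classifier, then rescale its probabilities) rather than the paper's statistical-query POSC4.5 algorithm. The synthetic data and the labeling fraction are invented for illustration.

```python
# Sketch: positive-unlabeled (PU) learning with the Elkan-Noto
# correction -- not the paper's POSC4.5 method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic ground truth: two Gaussian blobs (1 = positive concept).
X = np.vstack([rng.normal(2, 1, (500, 2)), rng.normal(-2, 1, (500, 2))])
y_true = np.array([1] * 500 + [0] * 500)

# PU setting: only a fraction c of the positives carry a label.
c = 0.3
s = (y_true == 1) & (rng.random(1000) < c)    # s=1: labeled positive

# Step 1: fit a "labeled vs. unlabeled" classifier g(x) ~ P(s=1 | x).
g = LogisticRegression().fit(X, s)

# Step 2: estimate c = P(s=1 | y=1) as the mean of g over labeled positives.
c_hat = g.predict_proba(X[s])[:, 1].mean()

# Step 3: under the Elkan-Noto assumptions, P(y=1 | x) = P(s=1 | x) / c.
p_pos = np.clip(g.predict_proba(X)[:, 1] / c_hat, 0, 1)
pred = (p_pos > 0.5).astype(int)
print("accuracy vs. hidden truth:", (pred == y_true).mean())
```

The key assumption, that labeled positives are selected completely at random from all positives, is what makes the simple division by c_hat valid.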
One common genetic factor was found to have a strong influence on risk for illicit use and abuse/dependence for all six substance classes. A modest influence of substance-specific genetic factors was seen for use but not for abuse/dependence. Shared environmental factors were more important for use than for abuse/dependence and were mediated entirely through a single common factor.\n\n\nCONCLUSIONS\nIn an adult population-based sample of male twins, both the genetic and the shared environmental effects on risk for the use and misuse of six classes of illicit substances were largely or entirely nonspecific in their effect. Environmental experiences unique to the person largely determine whether predisposed individuals will use or misuse one class of psychoactive substances rather than another."} {"_id": "cae8f2af7a25480479811453f27b4189ba5cc801", "title": "Question Answering on SQuAD", "text": "In this project, we exploit several deep learning architectures in the Question Answering field, based on the newly released Stanford Question Answering dataset (SQuAD)[7]. We introduce a multi-stage process that encodes context paragraphs at different levels of granularity, uses a co-attention mechanism to fuse representations of questions and context paragraphs, and finally decodes the co-attention vectors to get the answers. Our best model gets 62.23% F1 score and 48.72% EM score on the test set."} {"_id": "c8ea1664d0cf4823b5ffb61a0d2f6dbda2441c49", "title": "Genetic Evidence That Carbohydrate-Stimulated Insulin Secretion Leads to Obesity.", "text": "BACKGROUND\nA fundamental precept of the carbohydrate-insulin model of obesity is that insulin secretion drives weight gain. However, fasting hyperinsulinemia can also be driven by obesity-induced insulin resistance. We used genetic variation to isolate and estimate the potentially causal effect of insulin secretion on body weight.\n\n\nMETHODS\nGenetic instruments of variation of insulin secretion [assessed as insulin concentration 30 min after oral glucose (insulin-30)] were used to estimate the causal relationship between increased insulin secretion and body mass index (BMI), using bidirectional Mendelian randomization analysis of genome-wide association studies. Data sources included summary results from the largest published meta-analyses of predominantly European ancestry for insulin secretion (n = 26037) and BMI (n = 322154), as well as individual-level data from the UK Biobank (n = 138541). Data from the Cardiology and Metabolic Patient Cohort study at Massachusetts General Hospital (n = 1675) were used to validate genetic associations with insulin secretion and to test the observational association of insulin secretion and BMI.\n\n\nRESULTS\nHigher genetically determined insulin-30 was strongly associated with higher BMI (\u03b2 = 0.098, P = 2.2 \u00d7 10^-21), consistent with a causal role in obesity. Similar positive associations were noted in sensitivity analyses using other genetic variants as instrumental variables. 
By contrast, higher genetically determined BMI was not associated with insulin-30.\n\n\nCONCLUSIONS\nMendelian randomization analyses provide evidence for a causal relationship of glucose-stimulated insulin secretion on body weight, consistent with the carbohydrate-insulin model of obesity."} {"_id": "a52d4736bb9728e6993cd7b3190271f6728706fd", "title": "Slip-aware Model Predictive optimal control for Path following", "text": "Traditional control and planning algorithms for wheeled mobile robots (WMR) either totally ignore or make simplifying assumptions about the effects of wheel slip on the motion. While this approach works reasonably well in practice on benign terrain, it fails very quickly when the WMR is deployed in terrain that induces significant wheel slip. We contribute a novel control framework that predictively corrects for the wheel slip to effectively minimize path following errors. Our framework, the Receding Horizon Model Predictive Path Follower (RHMPPF), specifically addresses the problem of path following in challenging environments where the wheel slip substantially affects the vehicle mobility. We formulate the solution to the problem as an optimal controller that utilizes a slip-aware model predictive component to effectively correct the controls generated by a strictly geometric pure-pursuit path follower. We present extensive experimental validation of our approach using a simulated 6-wheel skid-steered robot in a high-fidelity data-driven simulator, and on a real 4-wheel skid-steered robot. Our results show substantial improvement in the path following performance in both simulation and real world experiments."} {"_id": "c7b8ad27e2ddbabcc5d785b51b967f4ccb824bc0", "title": "Datum: Managing Data Purchasing and Data Placement in a Geo-Distributed Data Market", "text": "This paper studies two design tasks faced by a geo-distributed cloud data market: which data to purchase (data purchasing) and where to place/replicate the data for delivery (data placement). We show that the joint problem of data purchasing and data placement within a cloud data market can be viewed as a facility location problem and is thus NP-hard. However, we give a provably optimal algorithm for the case of a data market made up of a single data center and then generalize the structure from the single data center setting in order to develop a near-optimal, polynomial-time algorithm for a geo-distributed data market. The resulting design, $\mathsf{Datum}$, decomposes the joint purchasing and placement problem into two subproblems, one for data purchasing and one for data placement, using a transformation of the underlying bandwidth costs. We show, via a case study, that $\mathsf{Datum}$ is near-optimal (within 1.6%) in practical settings."} {"_id": "0db7dcb8f91604a6b9f74bd789e70188377984ea", "title": "Reducing Length of Stay Using a Robotic-assisted Approach for Retromuscular Ventral Hernia Repair: A Comparative Analysis From the Americas Hernia Society Quality Collaborative.", "text": "OBJECTIVE\nThe aim of this study was to compare length of stay (LOS) after robotic-assisted and open retromuscular ventral hernia repair (RVHR).\n\n\nBACKGROUND\nRVHR has traditionally been performed by open techniques. Robotic-assisted surgery enables surgeons to perform minimally invasive RVHR, but with unknown benefit. 
Using real-world evidence, this study compared LOS after open (o-RVHR) and robotic-assisted (r-RVHR) approaches.\n\n\nMETHODS\nMulti-institutional data from patients undergoing elective RVHR in the Americas Hernia Society Quality Collaborative between 2013 and 2016 were analyzed. Propensity score matching was used to compare median LOS between o-RVHR and r-RVHR groups. This work was supported by an unrestricted grant from Intuitive Surgical, and all clinical authors have declared direct or indirect relationships with Intuitive Surgical.\n\n\nRESULTS\nIn all, 333 patients met inclusion criteria for a 2:1 match performed on 111 r-RVHR patients using propensity scores, with 222 o-RVHR patients having similar characteristics as the robotic-assisted group. Median LOS [interquartile range (IQR)] was significantly decreased for r-RVHR patients [2 days (IQR 2)] compared with o-RVHR patients [3 days (IQR 3), P < 0.001]. No differences in 30-day readmissions or surgical site infections were observed. Higher surgical site occurrences were noted with r-RVHR, consisting mostly of seromas not requiring intervention.\n\n\nCONCLUSIONS\nUsing real-world evidence, a robotic-assisted approach to RVHR offers the clinical benefit of reduced postoperative LOS. Ongoing monitoring of this technique should be employed through continuous quality improvement to determine the long-term effect on hernia recurrence, complications, patient satisfaction, and overall cost."} {"_id": "d9718e8745bf9bd70727dd0aec48b007593e00bc", "title": "Possibilistic interest discovery from uncertain information in social networks", "text": "User generated content on the microblogging social network Twitter continues to grow with a significant amount of information. The semantic analysis offers the opportunity to discover and model latent interests in the users\u2019 publications. This article focuses on the problem of uncertainty in the users\u2019 publications, which has not been previously treated. It proposes a new approach for users\u2019 interest discovery from uncertain information that augments traditional methods using possibilistic logic. The possibility theory provides a solid theoretical base for the treatment of incomplete and imprecise information and for inferring the reliable expressions from a knowledge base. More precisely, this approach uses a product-based possibilistic network to model the knowledge base and discover possibilistic interests. DBpedia ontology is integrated into the interest-discovery process for selecting the significant topics. The empirical analysis and the comparison with the most known methods prove the significance of this approach."} {"_id": "6b27f7ccbc68e6bf9bc1538f7ed8d1ca9d8e563a", "title": "Mechanisms of emotional arousal and lasting declarative memory", "text": "Neuroscience is witnessing growing interest in understanding brain mechanisms of memory formation for emotionally arousing events, a development closely related to renewed interest in the concept of memory consolidation. Extensive research in animals implicates stress hormones and the amygdaloid complex as key, interacting modulators of memory consolidation for emotional events. Considerable evidence suggests that the amygdala is not a site of long-term explicit or declarative memory storage, but serves to influence memory-storage processes in other brain regions, such as the hippocampus, striatum and neocortex. 
Human-subject studies confirm the prediction of animal work that the amygdala is involved with the formation of enhanced declarative memory for emotionally arousing events."} {"_id": "8985000860dbb88a80736cac8efe30516e69ee3f", "title": "Human Activity Recognition Using Recurrent Neural Networks", "text": "Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living. The increasingly large amount of data sets calls for machine learning methods. In this paper, we introduce a deep learning model that learns to classify human activities without using any prior knowledge. For this purpose, a Long Short Term Memory (LSTM) Recurrent Neural Network was applied to three real world smart home datasets. The results of these experiments show that the proposed approach outperforms the existing ones in terms of accuracy and performance."} {"_id": "0658264c335017587a906ceb202da417b5521a92", "title": "An Introductory Study on Time Series Modeling and Forecasting", "text": "Time series modeling and forecasting has fundamental importance to various practical domains, and a lot of active research work has been going on in this subject over several years. Many important models have been proposed in the literature for improving the accuracy and efficiency of time series modeling and forecasting. The aim of this book is to present a concise description of some popular time series forecasting models used in practice, with their salient features. In this book, we have described three important classes of time series models, viz. stochastic, neural network, and SVM-based models, together with their inherent forecasting strengths and weaknesses. We have also discussed the basic issues related to time series modeling, such as stationarity, parsimony, overfitting, etc. Our discussion about different time series models is supported by experimental forecast results performed on six real time series datasets. While fitting a model to a dataset, special care is taken to select the most parsimonious one. 
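To make the parsimony criterion above concrete, here is a minimal sketch of order selection by AIC on a synthetic series (the dataset and the candidate grid are invented for illustration and are not the book's own experiments):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic AR(1) series standing in for one of the six real datasets.
rng = np.random.default_rng(1)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()

# Fit several candidate orders and keep the most parsimonious adequate fit,
# here judged by AIC (the candidate grid is arbitrary for this sketch).
candidates = [(1, 0, 0), (2, 0, 0), (1, 0, 1), (2, 0, 1)]
fits = {order: ARIMA(y, order=order).fit().aic for order in candidates}
best = min(fits, key=fits.get)
print(f"selected ARIMA{best} with AIC {fits[best]:.1f}")
```

In this toy setup the AR(1) order usually wins, since the extra parameters of the larger models buy little likelihood and AIC penalizes them.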
To evaluate forecast accuracy as well as to compare among different models \u2026"} {"_id": "3db9b57efd0e0c64e3fdc57afb1d8e018b30b0b6", "title": "On the development of name search techniques for Arabic", "text": "The need for effective identity matching systems has led to extensive research in the area of name search. For the most part, such work has been limited to English and other Latin-based languages. Consequently, algorithms such as Soundex and n-gram matching are of limited utility for languages such as Arabic, which has a vastly different morphology that relies heavily on phonetic information. The dearth of work in this field is partly due to the lack of standardized test data. Consequently, we built a collection of 7,939 Arabic names, along with 50 training queries and 111 test queries. We use this collection to evaluate a variety of algorithms, including a derivative of Soundex tailored to Arabic (ASOUNDEX), measuring effectiveness using standard information retrieval measures. Our results show an improvement of 70% over existing approaches.\n\nIntroduction\nIdentity matching systems frequently employ name search algorithms to effectively locate relevant information about a given person. Such systems are used for applications as diverse as tax fraud detection and immigration control. Using names to retrieve information makes such systems susceptible to problems arising from typographical errors. That is, exact match search approaches will not find instances of misspelled names or those names that have more than one accepted spelling. An example of the severity of the problem is noted in an NCR (1998) report that estimates that the state of Texas saved $43 million over 18 months in the field of tax compliance using an improved name search approach. Thus, the importance of such name-based search applications has resulted in improved name matching algorithms for English that make use of phonetic information, but these language-dependent techniques have not been extended to Arabic."} {"_id": "becb5fbd24881dd78793686bbe30b153b5745fb8", "title": "Tax Fraud Detection for Under-Reporting Declarations Using an Unsupervised Machine Learning Approach", "text": "Tax fraud is the intentional act of lying on a tax return form with intent to lower one's tax liability. Under-reporting is one of the most common types of tax fraud; it consists of filing a tax return form with a lesser tax base. As a result of this act, fiscal revenues are reduced, undermining public investment.\n Detecting tax fraud is one of the main priorities of local tax authorities which are required to develop cost-efficient strategies to tackle this problem. Most of the recent works in tax fraud detection are based on supervised machine learning techniques that make use of labeled or audit-assisted data. Regrettably, auditing tax declarations is a slow and costly process; therefore, access to labeled historical information is extremely limited. For this reason, the applicability of supervised machine learning techniques for tax fraud detection is severely hindered.\n Such limitations motivate the contribution of this work. We present a novel approach for the detection of potential fraudulent tax payers using only unsupervised learning techniques and allowing the future use of supervised learning techniques. We demonstrate the ability of our model to identify under-reporting taxpayers on real tax payment declarations, reducing the number of potential fraudulent tax payers to audit. 
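As a rough illustration of this kind of unsupervised flagging (the features, cluster count, and audit threshold below are invented for the example and are not the paper's actual pipeline), one could cluster declarations and flag those that sit far from their cluster centroid:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-taxpayer features: declared income, deductions ratio,
# declared tax base relative to the sector average (all invented here).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))

X_std = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_std)

# Distance of each declaration to its own cluster centroid.
dist = np.linalg.norm(X_std - km.cluster_centers_[km.labels_], axis=1)

# Flag declarations unusually far from their cluster as audit candidates
# (the 95th-percentile threshold is arbitrary for this sketch).
suspicious = np.where(dist > np.quantile(dist, 0.95))[0]
print(f"{len(suspicious)} declarations flagged for audit")
```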
The obtained results demonstrate that our model misses none of the declarations known to be suspicious, labels previously undetected tax declarations as suspicious, and increases the operational efficiency of the tax supervision process without needing historic labeled data."} {"_id": "b31f0085b7dd24bdde1e5cec003589ce4bf4238c", "title": "Discriminative Label Consistent Domain Adaptation", "text": "Domain adaptation (DA) is a form of transfer learning that aims to learn an effective predictor on target data from source data despite data distribution mismatch between source and target. We present in this paper a novel unsupervised DA method for cross-domain visual recognition which simultaneously optimizes the three terms of a theoretically established error bound. Specifically, the proposed DA method iteratively searches a latent shared feature subspace where not only the divergence of data distributions between the source domain and the target domain is decreased as most state-of-the-art DA methods do, but also the inter-class distances are increased to facilitate discriminative learning. Moreover, the proposed DA method sparsely regresses class labels from the features achieved in the shared subspace while minimizing the prediction errors on the source data and ensuring label consistency between source and target. Data outliers are also accounted for to further avoid negative knowledge transfer. Comprehensive experiments and in-depth analysis verify the effectiveness of the proposed DA method which consistently outperforms the state-of-the-art DA methods on standard DA benchmarks, i.e., 12 cross-domain image classification tasks."} {"_id": "9bfb04bb15f7cc414108945571bd1d6d1f77b4ad", "title": "Feature Selection by Joint Graph Sparse Coding", "text": "This paper takes manifold learning and regression simultaneously into account to perform unsupervised spectral feature selection. We first extract the bases of the data, and then represent the data sparsely using the extracted bases by proposing a novel joint graph sparse coding model, JGSC for short. We design a new algorithm TOSC to compute the resulting objective function of JGSC, and then theoretically prove that the proposed objective function converges to its global optimum via the proposed TOSC algorithm. We repeat the extraction and the TOSC calculation until the value of the objective function of JGSC satisfies pre-defined conditions. Eventually the derived new representation of the data may only have a few non-zero rows, and we delete the zero rows (a.k.a. zero-valued features) to conduct feature selection on the new representation of the data. Our empirical studies demonstrate that the proposed method outperforms several state-of-the-art algorithms on real datasets in terms of the kNN classification performance."} {"_id": "0f03074cc5ef0e2bd0f16de62a1e170531474eae", "title": "Discovering Available Drinks Through Natural, Robot-Led, Human-Robot Interaction Between a Waiter and a Bartender", "text": "This research focuses on natural, robot-led, human-robot interaction that enables a robot to discover what drinks a barman can prepare through continuous application of speech recognition, understanding and generation. Speech was recognised using Google Cloud\u2019s speech-to-text API, understood by matching either the object or main verb of a sentence against a list of key words and, finally, generated using templates with variable parts. 
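A toy sketch of the verb/object keyword matching just described (the drink list, phrasing, and response templates are invented for illustration and are not the system's actual implementation):

```python
# Key words here are drink names; the real system derives them from the
# properties of the ordered drinks (this list is an assumption).
AVAILABLE_DRINKS = {"mojito", "cola", "beer"}

def understand(sentence: str) -> str:
    """Match tokens of a recognised sentence against drink key words."""
    tokens = sentence.lower().replace("?", "").split()
    for token in tokens:
        if token in AVAILABLE_DRINKS:
            return f"Yes, I can prepare a {token}."  # template + variable part
    return "Sorry, I cannot prepare that drink."

print(understand("Can you make a mojito?"))   # -> Yes, I can prepare a mojito.
print(understand("Do you have whisky?"))      # -> Sorry, I cannot prepare that drink.
```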
The difficulty lies in the large quantity of key words, as they are based on the properties of the ordered drinks. The results show that the aforementioned interaction works well to some extent; e.g., the naturalness of the interaction was rated 5.5 on average. Furthermore, the obtained precision when identifying the unavailable drinks was 0.625 and the obtained recall was 1, resulting in an F1 measure of 0.769."} {"_id": "2d8d089d368f2982748fde93a959cf5944873673", "title": "Visually Guided Spatial Relation Extraction from Text", "text": "Extraction of spatial relations from sentences with complex/nesting relationships is very challenging, as it often requires resolving inherent semantic ambiguities. We seek help from the visual modality to fill the information gap in the text modality and resolve spatial semantic ambiguities. We use various recent vision and language datasets and techniques to train inter-modality alignment models and visual relationship classifiers, and propose a novel global inference model to integrate these components into our structured output prediction model for spatial role and relation extraction. Our global inference model enables us to utilize the visual and geometric relationships between objects and improves the state-of-the-art results of spatial information extraction from text."} {"_id": "852203bfc1694fc2ea4bbe0eea4c2f830df85d31", "title": "Does Microfinance Really Help the Poor? New Evidence from Flagship Programs in Bangladesh", "text": "The microfinance movement has built on innovations in financial intermediation that reduce the costs and risks of lending to poor households. Replications of the movement\u2019s flagship, the Grameen Bank of Bangladesh, have now spread around the world. While programs aim to bring social and economic benefits to clients, few attempts have been made to quantify benefits rigorously. This paper draws on a new cross-sectional survey of nearly 1800 households, some of which are served by the Grameen Bank and two similar programs, and some of which have no access to programs. Households that are eligible to borrow and have access to the programs do not have notably higher consumption levels than control households, and, for the most part, their children are no more likely to be in school. Men also tend to work harder, and women less. More favorably, relative to controls, households eligible for programs have substantially (and significantly) lower variation in consumption and labor supply across seasons. The most important potential impacts are thus associated with the reduction of vulnerability, not of poverty per se. The consumption-smoothing appears to be driven largely by income-smoothing, not by borrowing and lending. The evaluation holds lessons for studies of other programs in low-income countries. While it is common to use fixed effects estimators to control for unobservable variables correlated with the placement of programs, using fixed effects estimators can exacerbate biases when, as here, programs target their programs to specific populations within larger communities."} {"_id": "712115f791de02aafc675ee84090fdf8e4ed88a5", "title": "Elastic Bands: Connecting Path Planning and Control", "text": "Elastic bands are proposed as the basis for a new framework to close the gap between global path planning and real-time sensor-based robot control. An elastic band is a deformable collision-free path. The initial shape of the elastic is the free path generated by a planner. 
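The record continues below with how the band deforms under artificial forces; as a toy illustration of such a relaxation (the force models, gains, and point obstacle are invented here and are not taken from the paper):

```python
import numpy as np

# Toy relaxation of an elastic band path: an internal contraction term pulls
# the path taut while a repulsive term pushes it away from a point obstacle.
path = np.linspace([0.0, 0.0], [10.0, 0.0], 21)  # initial planner path
obstacle, k_int, k_rep, radius = np.array([5.0, 0.5]), 0.4, 2.0, 2.0

for _ in range(200):
    internal = 0.5 * (path[:-2] + path[2:]) - path[1:-1]  # contraction force
    diff = path[1:-1] - obstacle
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    repulse = np.where(dist < radius, k_rep * (radius - dist) * diff / dist, 0.0)
    path[1:-1] += k_int * internal + 0.05 * repulse  # endpoints stay fixed

print(path[10])  # the middle of the band has been pushed off the obstacle
```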
Subjected to artificial forces, the elastic band deforms in real time to a short and smooth path that maintains clearance from the obstacles. The elastic continues to deform as changes in the environment are detected by sensors, enabling the robot to accommodate uncertainties and react to unexpected and moving obstacles. While providing a tight connection between the robot and its environment, the elastic band preserves the global nature of the planned path. This paper outlines the framework and discusses an efficient implementation based on bubbles."} {"_id": "b9bc9a32791dba1fc85bb9d4bfb9c52e6f052d2e", "title": "RRT-Connect: An Efficient Approach to Single-Query Path Planning", "text": "A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces. The method works by incrementally building two Rapidly-exploring Random Trees (RRTs) rooted at the start and the goal configurations. The trees each explore space around them and also advance towards each other through the use of a simple greedy heuristic. Although originally designed to plan motions for a human arm (modeled as a 7-DOF kinematic chain) for the automatic graphic animation of collision-free grasping and manipulation tasks, the algorithm has been successfully applied to a variety of path planning problems. Computed examples include generating collision-free motions for rigid objects in 2D and 3D, and collision-free manipulation motions for a 6-DOF PUMA arm in a 3D workspace. Some basic theoretical analysis is also presented."} {"_id": "d967d9550f831a8b3f5cb00f8835a4c866da60ad", "title": "Rapidly-Exploring Random Trees: A New Tool for Path Planning", "text": ""} {"_id": "09194121d81be3783fa1719afe51a7056f04e309", "title": "A Random Sampling Scheme for Path Planning", "text": "Several randomized path planners have been proposed during the last few years. Their attractiveness stems from their applicability to virtually any type of robot, and their empirically observed success. In this paper we attempt to present a unifying view of these planners and to theoretically explain their success. First, we introduce a general planning scheme that consists of randomly sampling the robot\u2019s configuration space. We then describe two previously developed planners as instances of planners based on this scheme, but applying very different sampling strategies. These planners are probabilistically complete: if a path exists, they will find one with high probability if we let them run long enough. Next, for one of the planners, we analyze the relation between the probability of failure and the running time. Under assumptions characterizing the \u201cgoodness\u201d of the robot\u2019s free space, we show that the running time only grows as the absolute value of the logarithm of the probability of failure that we are willing to tolerate. We also show that it increases at a reasonable rate as the space goodness degrades. In the last section we suggest directions for future research."} {"_id": "0e6c774dd0d11e4a32fc94459ee5f4a9f906fcda", "title": "V-Clip: Fast and Robust Polyhedral Collision Detection", "text": "This article presents the Voronoi-clip, or V-Clip, collision detection algorithm for polyhedral objects specified by a boundary representation. V-Clip tracks the closest pair of features between convex polyhedra, using an approach reminiscent of the Lin-Canny closest features algorithm. V-Clip is an improvement over the latter in several respects. 
Coding complexity is reduced, and robustness is significantly improved; the implementation has no numerical tolerances and does not exhibit cycling problems. The algorithm also handles penetrating polyhedra, and can therefore be used to detect collisions between nonconvex polyhedra described as hierarchies of convex pieces. The article presents the theoretical principles of V-Clip, and gives a pseudocode description of the algorithm. It also documents various tests that compare V-Clip, Lin-Canny, and the Enhanced GJK algorithm, a simplex-based algorithm that is widely used for the same application. The results show V-Clip to be a strong contender in this field, comparing favorably with the other algorithms in most of the tests, in terms of both performance and robustness."} {"_id": "03f3197f5f2c0415f972111173ed452a39d436d2", "title": "Compiling Bayesian Networks Using Variable Elimination", "text": "Compiling Bayesian networks has proven an effective approach for inference that can utilize both global and local network structure. In this paper, we define a new method of compiling based on variable elimination (VE) and Algebraic Decision Diagrams (ADDs). The approach is important for the following reasons. First, it exploits local structure much more effectively than previous techniques based on VE. Second, the approach allows any of the many VE variants to compute answers to multiple queries simultaneously. Third, the approach makes a large body of research into more structured representations of factors relevant in many more circumstances than it has been previously. Finally, experimental results demonstrate that VE can exploit local structure as effectively as state-of-the-art algorithms based on conditioning on the networks considered, and can sometimes lead to much faster compilation times."} {"_id": "6d48640f9f9a1702c13f30a662b115b16ca8ab07", "title": "An Examination of Undergraduate Student's Perceptions and Predilections of the Use of YouTube in the Teaching and Learning Process", "text": "Pervasive social networking and media sharing technologies have augmented perceptual understanding and information gathering and, while text-based resources have remained the standard for centuries, they do not appeal to the hyper-stimulated visual learners of today. In particular, the research suggests that targeted YouTube videos enhance student engagement, depth of understanding, and overall satisfaction in higher education courses. In order to investigate student perceptions and preferences regarding the implications of YouTube, a study was conducted at a Mid-Atlantic minority serving institution that examined student opinions regarding the usage of YouTube videos to augment instruction in online and classroom-based courses. According to the findings, use of YouTube in the teaching and learning process enhances instruction, with students most likely to visit video-sharing services from mobile devices. Further, length has an impact on student decisions whether or not to watch a video, and course delivery format impacts length and audio preferences. 
Finally, there is no relationship between personal use of social media and the perceived value of the use of YouTube in the instructional process."} {"_id": "cd43f34d4ce48d4a77b84f1a62d469e8596078d0", "title": "Plant-growth-promoting compounds produced by two agronomically important strains of Azospirillum brasilense, and implications for inoculant formulation", "text": "We evaluated phytohormone and polyamine biosynthesis, siderophore production, and phosphate solubilization in two strains (Cd and Az39) of Azospirillum brasilense used for inoculant formulation in Argentina during the last 20\u00a0years. Siderophore production and phosphate solubilization were evaluated in a chemically defined medium, with negative results. Indole 3-acetic acid (IAA), gibberellic acid (GA3), and abscisic acid (ABA) production were analyzed by gas chromatography-mass spectrometry. Ethylene, polyamine, and zeatin (Z) biosynthesis were determined by gas chromatography-flame ionization detector and high performance liquid chromatography (HPLC-fluorescence and -UV), respectively. Phytohormones IAA, Z, GA3, ABA, ethylene, and growth regulators putrescine, spermine, spermidine, and cadaverine (CAD) were found in the culture supernatant of both strains. IAA, Z, and GA3 were found in both strains; however, their levels were significantly higher (p\u2009<\u20090.01) in Cd (10.8, 2.32, 0.66\u00a0\u03bcg ml\u22121). ABA biosynthesis was significantly higher (p\u2009<\u20090.01) in Az39 (0.077\u00a0\u03bcg ml\u22121). Ethylene and polyamine CAD were found in both strains, with the highest production in Cd cultured in NFb plus l-methionine (3.94\u00a0ng ml\u22121 h\u22121) and Az39 cultured in NFb plus l-lysine (36.55\u00a0ng ml\u22121 h\u22121). This is the first report on the evaluation of important bioactive molecules in strains of A. brasilense as potentially capable of direct plant growth promotion or agronomic yield increase. Az39 and Cd showed differential capability to produce the five major phytohormones and CAD in chemically defined medium. This fact has important technological implications for inoculant formulation as different concentrations of growth regulators are produced by different strains or culture conditions."} {"_id": "598f98fefa56afe0073d58fd36cc86017f0a4c31", "title": "A 1.2\u20136.6GHz LNA using transformer feedback for wideband input matching and noise cancellation in 0.13\u00b5m CMOS", "text": "A novel transformer feedback featuring wideband input matching and noise cancellation is proposed and demonstrated in a wideband differential LNA for software-defined-radio (SDR) applications. Implemented in 0.13\u03bcm CMOS with an area of 0.32mm2, the LNA prototype measures a wideband input matching S11 of less than -10dB from 1.2GHz to 6.6GHz and minimum NF of 1.8dB while consuming 11mA at 1.2V supply."} {"_id": "fc10f1ccd2396c1adb4652c807a0c4f6e7534624", "title": "Hamming Clustering: A New Approach to Rule Extraction", "text": "A new algorithm, called Hamming Clustering (HC), is proposed to extract a set of rules underlying a given classification problem. It is able to reconstruct the and-or expression associated with any Boolean function from a training set of"} {"_id": "231be9ffe942d000891197ee45f41ae62adfe9a8", "title": "Extracting and Analyzing Hidden Graphs from Relational Databases", "text": "Analyzing interconnection structures among underlying entities or objects in a dataset through the use of graph analytics can provide tremendous value in many application domains. 
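Before this record continues, a toy example of extracting a graph from relational tuples (the schema and the co-authorship semantics are invented for illustration and are not from this paper):

```python
from collections import defaultdict
from itertools import combinations

# Relational rows (paper_id, author) standing in for database tuples; a
# co-authorship graph is extracted by effectively joining the table with itself.
rows = [(1, "ana"), (1, "bo"), (2, "bo"), (2, "cy"), (2, "ana")]

by_paper = defaultdict(list)
for paper, author in rows:
    by_paper[paper].append(author)

edges = set()
for authors in by_paper.values():
    for u, v in combinations(sorted(authors), 2):
        edges.add((u, v))                # one edge per co-author pair

print(sorted(edges))
# [('ana', 'bo'), ('ana', 'cy'), ('bo', 'cy')]
```

Note how even this tiny example hints at the density problem the record goes on to describe: the number of extracted edges can grow quadratically in the number of authors per paper.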
However, graphs are not the primary representation choice for storing most data today, and in order to have access to these analyses, users are forced to manually extract data from their data stores, construct the requisite graphs, and then load them into some graph engine in order to execute their graph analysis task. Moreover, in many cases (especially when the graphs are dense), these graphs can be significantly larger than the initial input stored in the database, making it infeasible to construct or analyze such graphs in memory. In this paper we address both of these challenges by building a system that enables users to declaratively specify graph extraction tasks over a relational database schema and then execute graph algorithms on the extracted graphs. We propose a declarative domain-specific language for this purpose, and pair it up with a novel condensed, in-memory representation that significantly reduces the memory footprint of these graphs, permitting analysis of larger-than-memory graphs. We present a general algorithm for creating such a condensed representation for a large class of graph extraction queries against arbitrary schemas. We observe that the condensed representation suffers from a duplication issue that results in inaccuracies for most graph algorithms. We then present a suite of in-memory representations that handle this duplication in different ways and allow trading off the memory required and the computational cost for executing different graph algorithms. We also introduce several novel deduplication algorithms for removing this duplication in the graph, which are of independent interest for graph compression, and provide a comprehensive experimental evaluation over several real-world and synthetic datasets illustrating these trade-offs."} {"_id": "9a870af1c356f14c51c0af75bd9fec5f2e20f8f7", "title": "Debugging Machine Learning Tasks", "text": "Unlike traditional programs (such as operating systems or word processors) which have large amounts of code, machine learning tasks use programs with relatively small amounts of code (written in machine learning libraries), but voluminous amounts of data. Just like developers of traditional programs debug errors in their code, developers of machine learning tasks debug and fix errors in their data. However, algorithms and tools for debugging and fixing errors in data are less common when compared to their counterparts for detecting and fixing errors in code. In this paper, we consider classification tasks where errors in training data lead to misclassifications in test points, and propose an automated method to find the root causes of such misclassifications. Our root cause analysis is based on Pearl\u2019s theory of causation, and uses Pearl\u2019s PS (Probability of Sufficiency) as a scoring metric. Our implementation, Psi, encodes the computation of PS as a probabilistic program, and uses recent work on probabilistic programs and transformations on probabilistic programs (along with gray-box models of machine learning algorithms) to efficiently compute PS. Psi is able to identify root causes of data errors in interesting data sets."} {"_id": "7dd2936b73377bf30c91f9a85e17f602e7b53ec5", "title": "Interactive cloth rendering of microcylinder appearance model under environment lighting", "text": "This paper proposes an interactive rendering method of cloth fabrics under environment lighting. 
The outgoing radiance from cloth fabrics in the microcylinder model is calculated by integrating the product of the distant environment lighting, the visibility function, the weighting function that includes shadowing/masking effects of threads, and the light scattering function of threads. The radiance calculation at each shading point of the cloth fabrics is simplified to a linear combination of triple product integrals of two circular Gaussians and the visibility function, multiplied by precomputed spherical Gaussian convolutions of the weighting function. We propose an efficient calculation method of the triple product of two circular Gaussians and the visibility function by using the gradient of the signed distance function to the visibility boundary where the binary visibility changes in the angular domain of the hemisphere. Our GPU implementation enables interactive rendering of static cloth fabrics with dynamic viewpoints and lighting. In addition, interactive editing of parameters for the scattering function (e.g. thread\u2019s albedo) that controls the visual appearances of cloth fabrics can be achieved."} {"_id": "a095f249e45856953d3e7878ccf172dd83aefdcd", "title": "GPU Accelerated Sub-Sampled Newton's Method", "text": "First order methods, which solely rely on gradient information, are commonly used in diverse machine learning (ML) and data analysis (DA) applications. This is attributed to the simplicity of their implementations, as well as low per-iteration computational/storage costs. However, they suffer from significant disadvantages; most notably, their performance degrades with increasing problem ill-conditioning. Furthermore, they often involve a large number of hyper-parameters, and are notoriously sensitive to parameters such as the step-size. By incorporating additional information from the Hessian, second-order methods have been shown to be resilient to many such adversarial effects. However, these advantages of using curvature information come at the cost of higher per-iteration costs, which in \u201cbig data\u201d regimes can be computationally prohibitive. In this paper, we show that, contrary to conventional belief, second-order methods, when implemented appropriately, can be more efficient than first-order alternatives in many large-scale ML/DA applications. In particular, in convex settings, we consider variants of classical Newton\u2019s method in which the Hessian and/or the gradient are randomly sub-sampled. We show that by effectively leveraging the power of GPUs, such randomized Newton-type algorithms can be significantly accelerated, and can easily outperform state-of-the-art implementations of existing techniques in popular ML/DA software packages such as TensorFlow. Additionally, these randomized methods incur a small memory overhead compared to first-order methods. In particular, we show that for million-dimensional problems, our GPU accelerated sub-sampled Newton\u2019s method achieves a higher test accuracy in milliseconds as compared with tens of seconds for first order alternatives."} {"_id": "c8f4b2695333abafde9ea253afc5b6a651d1fb54", "title": "Nobody\u2019s watching? Subtle cues affect generosity in an anonymous economic game", "text": "Models indicate that opportunities for reputation formation can play an important role in sustaining cooperation and prosocial behavior. Results from experimental economic games support this conclusion, as manipulating reputational opportunities affects prosocial behavior. 
Noting that some prosocial behavior remains even in anonymous non-iterated games, some investigators argue that humans possess a propensity for prosociality independent of reputation management. However, decision-making processes often employ both explicit propositional knowledge and intuitive or affective judgments elicited by tacit cues. Manipulating game parameters alters explicit information employed in overt strategizing but leaves intact cues that may affect intuitive judgments relevant to reputation formation. To explore how subtle cues of observability impact prosocial behavior, we conducted five dictator games, manipulating both auditory cues of the presence of others (via the use of sound-deadening earmuffs) and visual cues (via the presentation of stylized eyespots). Although earmuffs appeared to reduce generosity, this effect was not significant. However, as predicted, eyespots substantially increased generosity, despite no differences in actual anonymity; when using a computer displaying eyespots, almost twice as many participants gave money to their partners compared with the control condition."} {"_id": "739f26126145cbbf1f3d484be429a81e812fd40a", "title": "An Anomaly Detection Method for Spacecraft Using Relevance Vector Learning", "text": "This paper proposes a novel anomaly detection system for spacecraft based on data mining techniques. It constructs a nonlinear probabilistic model of the behavior of a spacecraft by applying relevance vector regression and autoregression to massive telemetry data, and then monitors the on-line telemetry data using the model and detects anomalies. A major advantage over conventional anomaly detection methods is that this approach requires little a priori knowledge of the system."} {"_id": "e7030ed729ac8250d022394ec1df3ed216144f9f", "title": "Positron-Emission Tomography and Personality Disorders", "text": "This study used positron-emission tomography to examine cerebral metabolic rates of glucose (CMRG) in 17 patients with DSM III-R diagnoses of personality disorder. Within the group of 17 personality disorder patients, there was a significant inverse correlation between a life history of aggressive impulse difficulties and regional CMRG in the frontal cortex of the transaxial plane approximately 40 mm above the canthomeatal line (CML) (r = \u2212.56, p = 0.17). Diagnostic groups included antisocial (n = 6), borderline (n = 6), dependent (n = 2), and narcissistic (n = 3). Regional CMRG in the six antisocial patients and in the six borderline patients was compared to a control group of 43 subjects using an analysis of covariance with age and sex as covariates. In the borderline personality disorder group, there was a significant decrease in frontal cortex metabolism in the transaxial plane approximately 81 mm above the CML and a significant increase in the transaxial plane approximately 53 mm above the CML (F[1,45] = 8.65, p = .005; and F[1,45] = 7.68, p = .008, respectively)."} {"_id": "1721e4529c5e222dc7070ff318f7e1d815bfb27b", "title": "Benchmarking personal cloud storage", "text": "Personal cloud storage services are data-intensive applications already producing a significant share of Internet traffic. Several solutions offered by different companies attract more and more people. However, little is known about each service's capabilities, architecture and, most of all, the performance implications of design choices. This paper presents a methodology to study cloud storage services. 
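A skeletal sketch of the kind of transfer benchmark such a methodology might build on (the upload function below is a local stand-in so the harness runs offline; real runs would call each provider's client):

```python
import time

def upload(payload: bytes) -> None:
    # Stand-in for a real client call to a storage provider; we simulate
    # transfer time at an assumed 50 MB/s uplink.
    time.sleep(len(payload) / 50_000_000)

def benchmark(file_sizes: list[int], repeats: int = 3) -> float:
    start = time.perf_counter()
    for _ in range(repeats):
        for size in file_sizes:
            upload(b"x" * size)
    return time.perf_counter() - start

# Same total bytes, different file sets: many small files vs one big file.
small = benchmark([64_000] * 100)
big = benchmark([6_400_000])
print(f"small files: {small:.2f}s, single file: {big:.2f}s")
```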
We apply our methodology to compare 5 popular offers, revealing different system architectures and capabilities. The performance implications of different designs are assessed by executing a series of benchmarks. Our results show no clear winner, with all services suffering from some limitations or having potential for improvement. In some scenarios, the upload of the same file set can take seven times longer, wasting twice as much capacity. Our methodology and results are thus useful both as a benchmark and as a guideline for system design."} {"_id": "7855c55eae35e81ba84519aa60b97c51bb6281d5", "title": "On-line Fault Detection of Sensor Measurements", "text": "On-line fault detection in sensor networks is of paramount importance due to the convergence of a variety of challenging technological, application, conceptual, and safety-related factors. We introduce a taxonomy for classification of faults in sensor networks and the first on-line model-based testing technique. The approach is generic in the sense that it can be applied on an arbitrary system of heterogeneous sensors with an arbitrary type of fault model, while it provides a flexible tradeoff between accuracy and latency. The key idea is to formulate on-line testing as a set of instances of a non-linear function minimization and consequently apply nonparametric statistical methods to identify the sensors that have the highest probability to be faulty. The optimization is conducted using the Powell nonlinear function minimization method. The effectiveness of the approach is evaluated in the presence of random noise using a system of light sensors."} {"_id": "7d3808cc062d5e25eafa92256c72f490be2bff7f", "title": "Development of support system for forward tilting of the upper body", "text": "We have been developing a wearable robot that directly and physically supports human movement. A \u201cmuscle suit\u201d for the arm that will provide muscular support for the paralyzed or those otherwise unable to move unaided is one example. Low back pain is one of the most serious issues for manual workers, and in this paper we focus on developing a support system for forward tilting of the upper body. The concept and mechanical structure of the support system are presented. The experimental results for maintaining a forward-tilting posture under load indicate that our system is very effective for keeping a forward-tilting posture, making clear that a reduction of more than 60% in muscular effort is possible."} {"_id": "4f3d6fdb74ebb2a64369269d402210c83060a8d7", "title": "Randomized Controlled Caregiver Mediated Joint Engagement Intervention for Toddlers with Autism", "text": "This study aimed to determine if a joint attention intervention would result in greater joint engagement between caregivers and toddlers with autism. The intervention consisted of 24 caregiver-mediated sessions with follow-up 1 year later. Compared to caregivers and toddlers randomized to the waitlist control group, the immediate treatment (IT) group made significant improvements in targeted areas of joint engagement. The IT group demonstrated significant improvements with medium to large effect sizes in their responsiveness to joint attention and their diversity of functional play acts after the intervention, with maintenance of these skills 1 year post-intervention. These are among the first randomized controlled data to suggest that short-term parent-mediated interventions can have important effects on core impairments in toddlers with autism. 
Clinical Trials #: NCT00065910."} {"_id": "7e1f274a73803c97db5284da0eb295b2cab12486", "title": "Developmental dyslexia: genetic dissection of a complex cognitive trait", "text": "Developmental dyslexia, a specific impairment of reading ability despite adequate intelligence and educational opportunity, is one of the most frequent childhood disorders. Since the first documented cases at the beginning of the last century, it has become increasingly apparent that the reading problems of people with dyslexia form part of a heritable neurobiological syndrome. As for most cognitive and behavioural traits, phenotypic definition is fraught with difficulties and the genetic basis is complex, making the isolation of genetic risk factors a formidable challenge. Against such a background, it is notable that several recent studies have reported the localization of genes that influence dyslexia and other language-related traits. These investigations exploit novel research approaches that are relevant to many areas of human neurogenetics."} {"_id": "6243769976e9a2e8c33ee9fc28ac0308cb052ebf", "title": "Neural Probabilistic Model for Non-projective MST Parsing", "text": "In this paper, we propose a probabilistic parsing model that defines a proper conditional probability distribution over nonprojective dependency trees for a given sentence, using neural representations as inputs. The neural network architecture is based on bi-directional LSTM-CNNs, which automatically benefits from both word- and character-level representations, by using a combination of bidirectional LSTMs and CNNs. On top of the neural network, we introduce a probabilistic structured layer, defining a conditional log-linear model over nonprojective trees. By exploiting Kirchhoff\u2019s Matrix-Tree Theorem (Tutte, 1984), the partition functions and marginals can be computed efficiently, leading to a straightforward end-to-end model training procedure via back-propagation. We evaluate our model on 17 different datasets, across 14 different languages. Our parser achieves state-of-the-art parsing performance on nine datasets."} {"_id": "042810cdcfb8f8af2f46e13543ccc9a3c9476f69", "title": "Private memoirs of a smart meter", "text": "Household smart meters that measure power consumption in real-time at fine granularities are the foundation of a future smart electricity grid. However, the widespread deployment of smart meters has serious privacy implications since they inadvertently leak detailed information about household activities. In this paper, we show that even without a priori knowledge of household activities or prior training, it is possible to extract complex usage patterns from smart meter data using off-the-shelf statistical methods. Our analysis uses two months of data from three homes, which we instrumented to log aggregate household power consumption every second. With the data from our small-scale deployment, we demonstrate the potential for power consumption patterns to reveal a range of information, such as how many people are in the home, sleeping routines, eating routines, etc. We then sketch out the design of a privacy-enhancing smart meter architecture that allows an electric utility to achieve its net metering goals without compromising the privacy of its customers."} {"_id": "9a0c3991537f41238237bd6e78747a6b4512f2b3", "title": "Audio-Visual Speech Enhancement based on Multimodal Deep Convolutional Neural Network", "text": "Speech enhancement (SE) aims to reduce noise in speech signals. 
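For contrast with the audio-visual model this record introduces, a bare-bones audio-only baseline can be sketched with classical spectral subtraction (synthetic signal and all parameters invented; this is not the paper's CNN baseline):

```python
import numpy as np

# Toy spectral subtraction: estimate the noise magnitude spectrum from a
# noise-only segment and subtract it from the noisy signal's spectrum.
rng = np.random.default_rng(2)
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)             # 1 s of a 440 Hz tone
noise = 0.5 * rng.normal(size=fs)
noisy = clean + noise

noise_mag = np.abs(np.fft.rfft(noise))          # oracle noise estimate (toy)
spec = np.fft.rfft(noisy)
mag = np.maximum(np.abs(spec) - noise_mag, 0.0) # subtract, floor at zero
enhanced = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=fs)

snr = 10 * np.log10(np.sum(clean**2) / np.sum((enhanced - clean)**2))
print(f"output SNR: {snr:.1f} dB")
```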
Most SE techniques focus on addressing audio information only. In this work, inspired by multimodal learning, which utilizes data from different modalities, and the recent success of convolutional neural networks (CNNs) in SE, we propose an audio-visual deep CNN (AVDCNN) SE model, which incorporates audio and visual streams into a unified network model. In the proposed AVDCNN SE model, audio and visual features are first processed using individual CNNs, and then fused into a joint network to generate enhanced speech at an output layer. The AVDCNN model is trained in an end-to-end manner, and parameters are jointly learned through backpropagation. We evaluate enhanced speech using five objective criteria. Results show that the AVDCNN yields notably better performance as compared to an audio-only CNN-based SE model, confirming the effectiveness of integrating visual information into the SE process."} {"_id": "39df57cefdf38c2c9a76b85d7f67505959cfd6e7", "title": "Multi-Agent Reinforcement Learning: A Report on Challenges and Approaches", "text": "Reinforcement Learning (RL) is a learning paradigm concerned with learning to control a system so as to maximize an objective over the long term. This approach to learning has received immense interest in recent times and success manifests itself in the form of human-level performance on games like Go. While RL is emerging as a practical component in real-life systems, most successes have been in Single Agent domains. This report will instead specifically focus on challenges that are unique to Multi-Agent Systems interacting in mixed cooperative and competitive environments. The report concludes with advances in the paradigm of training Multi-Agent Systems called Decentralized Actor, Centralized Critic, based on an extension of MDPs called Decentralized Partially Observable MDPs, which has seen a renewed interest lately."} {"_id": "1cc920998208f988a873dbbfa0315274d0b51b57", "title": "Introduction to Robotics: Mechanics and Control", "text": "How can you change your mind to be more open? There are many sources that can help you improve your thoughts. They can come from other people's experiences and stories. A book is one of the trusted sources. You can find many books that we share here on this website. And now, we show you one of the best: the introduction to robotics mechanics and control john j craig solution manual."} {"_id": "2c935d9e04583f9fb29fdc8b570e30996cbceed9", "title": "Development of humanoid robot platform KHR-2 (KAIST humanoid robot-2)", "text": "In this paper, we present the mechanical and electrical system design and the system integration of controllers, including sensory devices, of the humanoid KHR-2. The concept and the objective of the design will be described. We have been developing KHR-2, which has 41 DOF (degrees of freedom), since 2003. Each arm including a hand and a wrist has 11 DOF (5+2 DOF/hand (finger + wrist), 4 DOF/arm) and each leg has 6 DOF. Head and trunk have 6 DOF (2 DOF/eye and 2 DOF/neck) and 1 DOF respectively. The mechanical part of the robot is designed to have a human-friendly appearance and a wide movable angle range. Joint actuators are designed to have negligible uncertainties such as backlash. To control all axes, a distributed control architecture is adopted to reduce the computation burden of the main controller (PC) and to expand the devices easily. 
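A schematic of the main/sub-controller split described above (the message format, rates, and the queue standing in for the CAN bus are all invented; the real robot uses CAN hardware and RTX, not Python):

```python
import queue
import threading
import time

# Toy stand-in for the CAN bus between the main controller and a
# sub-controller: the main loop posts joint targets, the sub-controller
# consumes them and would drive the servo locally at a higher rate.
bus: "queue.Queue[tuple[int, float]]" = queue.Queue()

def sub_controller() -> None:
    while True:
        joint_id, target = bus.get()
        if joint_id < 0:                 # shutdown sentinel
            break
        # A local servo loop would run here on the microprocessor.
        print(f"joint {joint_id}: moving to {target:.2f} rad")

worker = threading.Thread(target=sub_controller)
worker.start()
for i, angle in enumerate([0.1, 0.2, 0.15]):
    bus.put((i % 2, angle))              # main controller sends targets
    time.sleep(0.01)                     # pretend 100 Hz control tick
bus.put((-1, 0.0))
worker.join()
```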
We developed a sub-controller, a microprocessor-based servo motor controller with sensor-interfacing devices. The main controller (PC) attached on the back of the robot communicates with the sub-controllers in real time by using CAN (controller area network). Windows XP is used as the OS (operating system) for fast development of the main control program. RTX (real time extension) HAL extension software is used to realize the real-time control in the Windows XP environment. KHR-2 has several sensor types, which are 3-axis F/T (force/torque) sensors at the feet and wrists, an inertia sensor system (accelerometer and rate gyro) and a CCD camera. The F/T sensor at the foot is the most fundamental sensor for stable walking. The inertia sensor system is essential for determining the inclination between the ground and the robot. Finally, the CCD camera will be used for navigation and stabilization of the robot in the future. We described the details of the KHR-2 in this paper."} {"_id": "943bcaa387edf521cec61f7029eb139772873bba", "title": "Development of a biped walking robot compensating for three-axis moment by trunk motion", "text": "The authors have been using the ZMP (Zero Moment Point) as a criterion to distinguish the stability of walking for a biped walking robot which has a trunk. In this paper, the authors introduce a control method of dynamic biped walking for a biped walking robot to compensate for the three-axis (pitch, roll and yaw-axis) moment on an arbitrarily planned ZMP by trunk motion. The authors developed a biped walking robot and performed a walking experiment with the robot using the control method. The result was a fast dynamic biped walking at the walking speed of 0.54 s/step with a 0.3 m step on a flat floor. This walking speed is about 50 percent faster than that with the robot which compensates for only the two-axis (pitch and roll-axis) moment by trunk motion. In 1986, the authors developed a biped walking robot WL-12 (Waseda Leg No. 12) which has a trunk for the stabilization of walking. At the same time the authors proposed a control method of dynamic biped walking for a biped walking robot to compensate for the two-axis (pitch and roll-axis) moment on an arbitrarily planned ZMP by the trunk [1]-[3]. In 1990, by using this walking control method, the authors achieved dynamic biped walking at the fastest walking speed of 0.8 s/step with 0.3 m step length on a flat floor [5]. The method was based on the assumption that the contact point between the robot foot and the floor would not slide. That is to say, only the pitch-axis and roll-axis moments were compensated for by trunk motion, and the yaw-axis moment was not taken into account in the stabilization of walking. In the walking experiment, however, as the robot walked faster, the yaw-axis moment began to give the robot a spin on the yaw-axis, which became a considerable problem that affected the stability of walking. In contrast, it was considered that human beings compensate not only for the pitch and roll-axis moment but also for the yaw-axis moment by swinging both arms and rotating the waist to maintain the total stability of walking. 
Therefore, the objective of this study was to develop a biped walking robot which has an ability to compensate for the three-axis moment by trunk motion, to work out a control method of dynamic biped walking for the robot and to realize faster walking than before."} {"_id": "d4319e6c668b97bcd3d2698ce1bf91a5a8d3e340", "title": "The Development of Honda Humanoid Robot", "text": "In this paper, we present the mechanism, system configuration, basic control algorithm and integrated functions of the Honda humanoid robot. Like its human counterpart, this robot has the ability to move forward and backward, sideways to the right or the left, as well as diagonally. In addition, the robot can turn in any direction and walk up and down stairs continuously. Furthermore, due to its unique posture stability control, the robot is able to maintain its balance despite unexpected complications such as uneven ground surfaces. As a part of its integrated functions, this robot is able to move on a planned path autonomously and to perform simple operations via wireless tele-operation."} {"_id": "da0d6c069a4032294b5acf6c0c84ff206378a070", "title": "Design of prototype humanoid robotics platform for HRP", "text": "This paper presents a prototype humanoid robotics platform developed for HRP-2. HRP-2 is a new humanoid robotics platform, which we have been developing in phase two of HRP. HRP is a humanoid robotics project, which was launched by the Ministry of Economy, Trade and Industry (METI) of Japan from FY1998 to FY2002 for five years. The ability of the biped locomotion of HRP-2 is improved so that HRP-2 can cope with rough terrain in the open air and can prevent possible damage to itself in the event of tipping over. The ability of whole body motion of HRP-2 is also improved so that HRP-2 can get up by itself even though it tips over. In this paper, the mechanisms and specifications of the developed prototype humanoid robotics platform, and its electrical system, are introduced."} {"_id": "c8f1975a08a42bf5b4c804a9e93ff97b18b50220", "title": "Un-Crafting: De-Constructive Engagements with Interactive Artifacts", "text": "Crafting interactive artifacts is typically associated with synthesizing, making, constructing, putting things together. In this paper, we discuss how these activities can be contrasted with de-synthesizing activities that revolve around decomposition, dissection, and taking things apart. Based on framings that emphasize how related practices are valuable in engineering processes, we aim to unlock the potential of de-constructive engagements with interactive technologies as a material driven (design) practice. We propose un-crafting as a framework of four modes of taking interactive artifacts apart (i.e., un-crafting for material exposition, material inspiration, material inquiry, and material exploration) aiming to pinpoint de-constructive episodes inherent to design processes, as well as to encourage the refinement of respective techniques and methods."} {"_id": "9bfd728d0d804d5d8b71239113460eff3e95e2dc", "title": "Heart rate variability and autonomic activity at rest and during exercise in various physiological conditions", "text": "The rhythmic components of heart rate variability (HRV) can be separated and quantitatively assessed by means of power spectral analysis. The powers of the high frequency (HF) and low frequency (LF) components of HRV have been shown to estimate cardiac vagal and sympathetic activities.
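A minimal sketch of the HF/LF band-power estimation behind the HRV abstract above, assuming SciPy. The 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) bands are the conventional choices; the 4 Hz resampling rate is an assumption, not taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_power(rr_ms, fs=4.0):
    """HF/LF band powers from an RR-interval series (milliseconds).
    The irregularly sampled tachogram is interpolated to fs Hz,
    then Welch's method estimates the power spectrum."""
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr = np.interp(grid, t, rr_ms)                 # evenly sampled tachogram
    rr = rr - rr.mean()
    f, pxx = welch(rr, fs=fs, nperseg=min(256, len(rr)))
    lf_band = (f >= 0.04) & (f < 0.15)             # conventional LF band
    hf_band = (f >= 0.15) & (f < 0.40)             # conventional HF band
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf, hf, lf / hf
```

The LF/HF ratio returned here is exactly the quantity whose interpretation at rest and during exercise the review discusses.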
The reliability of these spectral indices, as well as that of the LF/HF ratio as a marker of autonomic interaction at rest and during exercise, is briefly reviewed. Modifications in autonomic activities induced by different physiological conditions, e.g. hypoxia exposure, training, and water immersion, have been found in HRV power spectra at rest. The changes in HF and LF powers and in the LF/HF ratio observed during exercise have been shown not to reflect the decrease in vagal activity and the activation of the sympathetic system occurring at increasing loads. The HF peak was recognised in power spectra in the entire range of relative intensity, being responsible for the most part of HR variability at maximal load. LF power did not change during low intensity exercise and decreased to negligible values at medium–high intensity, where sympathetic activity was enhanced. There was no influence from factors such as fitness level, age, hypoxia, and blood distribution. In contrast, a dramatic effect of body position has been suggested by the observation that LF power increased at medium–high intensities when exercising in the supine position. The increased respiratory activity due to exercise would be responsible for HF modulation of HR via a direct mechanical effect. The changes in LF power observed at medium–high intensity might be the expression of the modifications in arterial pressure control mechanisms occurring with exercise. The finding of opposite trends for LF rhythm in supine and sitting exercises suggests that different readjustments might have occurred in relation to different muscular inputs in the two positions."} {"_id": "c62beef9b5e68e0af8b9c77d8558f8f53ac47aa3", "title": "Signal compensation and extraction of high resolution position for sinusoidal magnetic encoders", "text": "MEs (magnetic encoders) are widely used in industrial control systems because of their significant advantages, such as high resolution, high speed, operation in harsh environments, and low cost. This paper describes a method to extract the position information with high resolution for sinusoidal MEs. A code compensator based on optimization theories is applied to correct non-ideal signals of the encoder outputs. This algorithm can effectively eliminate the DC offset, phase-shift, amplitude difference and sinusoidal deformation from these outputs. Then, a size-reduced look-up table is built up off-line to generate the high resolution quadrature pulses suitable for standard incremental encoder signals. These LUTs (look-up tables) rely on the approximately linear property of the amplitude of sinusoids in the section [-sin(pi/4), sin(pi/4)], and their size is decreased by converting the values of the LUTs into binary values (0 and 1). The firmware is implemented on a 16-bit DSP (digital signal processor) TMS320F2812 hardware platform with minimum computation. The experimental results are also presented in this paper."} {"_id": "ba6f87a867f915d43e67bbc1e3230a221b9645d2", "title": "Love, hate, arousal and engagement: exploring audience responses to performing arts", "text": "Understanding audience responses to art and performance is a challenge. New sensors are promising for measurement of implicit and explicit audience engagement. However, the meaning of biometric data, and its relationship to engagement, is unclear. We conceptually explore the audience engagement domain to uncover opportunities and challenges in the assessment and interpretation of audience engagement data.
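For the magnetic-encoder abstract above, a stripped-down compensation-plus-angle sketch. It replaces the paper's optimization-based compensator and size-reduced LUT with simple min/max normalization and atan2, so it illustrates the idea rather than the published method, and it assumes each captured channel spans at least one full electrical cycle.

```python
import numpy as np

def compensate(sig):
    """Remove DC offset and normalize amplitude of one encoder channel.
    Assumes the captured waveform covers at least one full cycle."""
    offset = (sig.max() + sig.min()) / 2.0
    amplitude = (sig.max() - sig.min()) / 2.0
    return (sig - offset) / amplitude

def electrical_angle(sin_raw, cos_raw):
    """High-resolution electrical angle from compensated quadrature signals.
    A real DSP implementation would use a small LUT instead of atan2."""
    return np.arctan2(compensate(sin_raw), compensate(cos_raw))

# Synthetic check: distorted channels still yield the true angle.
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
s = 0.9 * np.sin(theta) + 0.1          # amplitude error + DC offset
c = 1.1 * np.cos(theta) - 0.05
err = np.angle(np.exp(1j * (electrical_angle(s, c) - theta)))
print(np.max(np.abs(err)))             # small residual angle error
```

Phase-shift and harmonic (sinusoidal-deformation) errors, which the paper's optimization-based compensator also removes, are deliberately left out of this sketch.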
We developed a display that linked performance videos with audience biometric data and presented it to 7 performing arts experts, to explore the measurement, interpretation and application of biometric data. Experts were intrigued by the response data and reflective in interpreting it. We deepened our inquiry with an empirical study with 49 participants who watched a video of a dance performance. We related temporal galvanic skin response (GSR) data to two self-report scales, which provided insights on interpreting this measure. Our findings, which include strong correlations, support the interpretation of GSR as a valid representation of audience engagement."} {"_id": "a20efead95e227d405bafb39f14474902104ab79", "title": "Micro-Doppler effect in radar: phenomenon, model, and simulation study", "text": "When, in addition to the constant Doppler frequency shift induced by the bulk motion of a radar target, the target or any structure on the target undergoes micro-motion dynamics, such as mechanical vibrations or rotations, the micro-motion dynamics induce Doppler modulations on the returned signal, referred to as the micro-Doppler effect. We introduce the micro-Doppler phenomenon in radar, develop a model of Doppler modulations, derive formulas of micro-Doppler induced by targets with vibration, rotation, tumbling and coning motions, and verify them by simulation studies, analyze time-varying micro-Doppler features using high-resolution time-frequency transforms, and demonstrate the micro-Doppler effect observed in real radar data."} {"_id": "6a686b525a84a87ca3e4d90a6704da8588e84344", "title": "A Wideband Sequential-Phase Fed Circularly Polarized Patch Array", "text": "This communication presents a wideband circularly polarized (CP) 2 \u00d7 2 patch array using a sequential-phase feeding network. By combining three operating modes, both axial ratio (AR) and impedance bandwidths are enhanced and wider than those of previous published sequential-fed single-layer patch arrays. These three CP operating modes are tuned and matched by optimizing the truncated corners of patch elements and the sequential-phase feeding network. A prototype of the proposed patch array is built to validate the design experimentally. The measured -10-dB impedance bandwidth is 1.03 GHz (5.20-6.23 GHz), and the measured 3-dB AR bandwidth is 0.7 GHz (5.25-5.95 GHz), or 12.7% corresponding to the center frequency of 5.5 GHz. The measured peak gain is about 12 dBic and the gain variation is less than 3 dB within the AR bandwidth."} {"_id": "d97e3655f50ee9b679ac395b2637f6fa66af98c7", "title": "The effects of feedback on energy conservation: A meta-analysis.", "text": "Feedback has been studied as a strategy for promoting energy conservation for more than 30 years, with studies reporting widely varying results. Literature reviews have suggested that the effectiveness of feedback depends on both how and to whom it is provided; yet variations in both the type of feedback provided and the study methodology have made it difficult for conclusions to be drawn. The current article analyzes past theoretical and empirical research on both feedback and proenvironmental behavior to identify unresolved issues, and utilizes a meta-analysis of 42 feedback studies published between 1976 and 2010 to test a set of hypotheses about when and how feedback about energy usage is most effective. Results indicate that feedback is effective overall, r = .071, p < .001, but with significant variation in effects (r varied from -.080 to .480). 
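As a worked illustration of pooling correlation effect sizes like those reported in the feedback meta-analysis above: the fixed-effect Fisher-z recipe below is a standard textbook approach and an assumption here, not necessarily the weighting scheme the authors used.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation via Fisher's z transform.
    rs: per-study correlations; ns: per-study sample sizes.
    Weights are n - 3, the inverse variance of z."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                       # Fisher z transform
    w = ns - 3.0
    z_bar = np.sum(w * z) / np.sum(w)        # weighted mean in z-space
    se = 1.0 / np.sqrt(np.sum(w))
    # Back-transform the estimate and a 95% confidence interval.
    return (np.tanh(z_bar),
            np.tanh(z_bar - 1.96 * se),
            np.tanh(z_bar + 1.96 * se))

# Illustrative inputs only; not the 42 studies from the meta-analysis.
r, lo, hi = pooled_correlation([0.05, 0.12, -0.02], [200, 150, 300])
print(round(r, 3), round(lo, 3), round(hi, 3))
```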
Several treatment variables were found to moderate this relationship, including frequency, medium, comparison message, duration, and combination with other interventions (e.g., goal, incentive). Overall, results provide further evidence of feedback as a promising strategy to promote energy conservation and suggest areas in which future research should focus to explore how and for whom feedback is most effective."} {"_id": "a1a4bcd65489cbe982e2948fa129775d7dd1393f", "title": "Modeling Cellular-to-UAV Path-Loss for Suburban Environments", "text": "Operating unmanned aerial vehicles (UAVs) over cellular networks would open the barriers of remote navigation and far-flung flying by combining the benefits of UAVs and the ubiquitous availability of cellular networks. In this letter, we provide an initial insight on the radio propagation characteristics of the cellular-to-UAV (CtU) channel. In particular, we model the statistical behavior of the path-loss from a cellular base station toward a flying UAV, where we report the value of the path-loss as a function of the depression angle and the terrestrial coverage beneath the UAV. The provided model is derived based on extensive experimental data measurements conducted in a typical suburban environment for both terrestrial (by drive test) and aerial coverage (using a UAV). The model provides simple and accurate prediction of CtU path-loss that can be useful for both researchers and network operators alike."} {"_id": "8e8ddf3c38d6773202556900bd3c0553e2021dd2", "title": "Behavioral Game Theory on Online Social Networks : Colonel Blotto is on Facebook", "text": "We show how online social networks such as Facebook can be used in Behavioral Game Theory research. We report the deployment of a Facebook application ‘Project Waterloo’ that allows users to play the Colonel Blotto game against their friends and strangers. Unlike conventional studies performed in the laboratory environment, which rely on monetary incentives to attract human subjects to play games, our framework does not use money and instead relies on reputation and entertainment incentives. We describe the Facebook application we created for conducting this experiment, and perform a preliminary analysis of the data collected in the game. We conclude by discussing the advantages of our approach and list some ideas for future work."} {"_id": "b9bab0bc4a189fd5f05b93fd22046ae3ac06d7d9", "title": "Historical vignettes of the thyroid gland.", "text": "Although "glands" in the neck corresponding to the thyroid were known for thousands of years, they were mainly considered pathological when encountered. Recognition of the thyroid gland as an anatomical and physiological entity required human dissection, which began in earnest in the 16th century. Leonardo Da Vinci is generally credited as the first to draw the thyroid gland as an anatomical organ. The drawings were subsequently "lost" to medicine for nearly 260 years. The drawings were probably of a nonhuman specimen. Da Vinci vowed to produce an anatomical atlas, but it was never completed. Michelangelo Buonarroti promised to complete drawings for the anatomical work of Realdus Columbus, De Re Anatomica, but these were also never completed. Andreas Vesalius established the thyroid gland as an anatomical organ with his description and drawings in the Fabrica. The thyroid was still depicted in a nonhuman form during this time.
The copper etchings of Bartholomew Eustachius made in the 1560s were obviously of humans, but were not actually published until 1714 with a description by Johannes Maria Lancisius. These etchings also depicted some interesting anatomy, which we describe. The Adenographia by Thomas Wharton in 1656 named the thyroid gland for the first time and more fully described it. The book also attempted to assign a function to the gland. The thyroid gland's interesting history thus touches a number of famous men from diverse backgrounds."} {"_id": "392e2848788f596a0f5426fc2b52f1ca3b79e20f", "title": "Database updating through user feedback in fingerprint-based Wi-Fi location systems", "text": "Wi-Fi fingerprinting is a technique which can provide location in GPS-denied environments, relying exclusively on Wi-Fi signals. It first requires the construction of a database of \u201cfingerprints\u201d, i.e. signal strengths from different access points (APs) at different reference points in the desired coverage area. The location of the device is then obtained by measuring the signal strengths at its location, and comparing it with the different reference fingerprints in the database. The main disadvantage of this technique is the labour required to build and maintain the fingerprints database, which has to be rebuilt every time a significant change in the wireless environment occurs, such as installation or removal of new APs, changes in the layout of a building, etc. This paper investigates a new method to utilise user feedback as a way of monitoring changes in the wireless environment. It is based on a system of \u201cpoints\u201d given to each AP in the database. When an AP is switched off, the number of points associated with that AP will gradually reduce as the users give feedback, until it is eventually deleted from the database. If a new AP is installed, the system will detect it and update the database with new fingerprints. Our proposed system has two main advantages. First it can be used as a tool to monitor the wireless environment in a given place, detecting faulty APs or unauthorised installation of new ones. Second, it regulates the size of the database, unlike other systems where feedback is only used to insert new fingerprints in the database."} {"_id": "f960f58ae420db5e18a4c1a11730dd0d0a05e3c9", "title": "Interleaved-Boost Converter With High Voltage Gain", "text": "This paper presents an interleaved-boost converter, magnetically coupled to a voltage-doubler circuit, which provides a voltage gain far higher than that of the conventional boost topology. Besides, this converter has low-voltage stress across the switches, natural-voltage balancing between output capacitors, low-input current ripple, and magnetic components operating with the double of switching frequency. These features make this converter suitable to applications where a large voltage step-up is demanded, such as grid-connected systems based on battery storage, renewable energies, and uninterruptible power system applications. Operation principle, main equations, theoretical waveforms, control strategy, dynamic modeling, and digital implementation are provided. 
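The points mechanism described in the Wi-Fi fingerprinting abstract above can be sketched as simple bookkeeping over user feedback reports. Initial points, the drop threshold, and the refresh rule below are illustrative assumptions, not values from the paper.

```python
class FingerprintDB:
    """Points-based AP bookkeeping for user-feedback database updating:
    APs missing from confirmed reports lose points and are eventually
    purged; previously unknown APs trigger new database entries."""
    def __init__(self, start_points=10):
        self.points = {}                 # ap_id -> remaining points
        self.start_points = start_points

    def feedback(self, seen_aps):
        """Process one user report: the set of AP ids heard at a
        location the user has confirmed."""
        seen = set(seen_aps)
        for ap in list(self.points):
            if ap in seen:
                self.points[ap] = self.start_points   # refresh on sighting
            else:
                self.points[ap] -= 1                  # likely off or removed
                if self.points[ap] <= 0:
                    del self.points[ap]               # purge stale AP
        for ap in seen - set(self.points):
            self.points[ap] = self.start_points       # newly installed AP

db = FingerprintDB()
db.feedback({"ap1", "ap2"})   # both APs registered
db.feedback({"ap2"})          # ap1 starts losing points
print(db.points)              # {'ap2': 10, 'ap1': 9}
```

Repeated reports without `ap1` would eventually delete it, which is how the system both regulates database size and flags switched-off or unauthorised APs.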
Experimental results are also presented, validating the proposed topology."} {"_id": "f55f5914f856be46030219ddc4d1fecd915be7c1", "title": "Internal Coupled-Fed Dual-Loop Antenna Integrated With a USB Connector for WWAN/LTE Mobile Handset", "text": "A coupled-fed dual-loop antenna capable of providing eight-band WWAN/LTE operation and suitable to integrate with a USB connector in the mobile handset is presented. The antenna integrates with a protruded ground, which is extended from the main ground plane of the mobile handset to accommodate a USB connector functioning as a data port of the handset. To consider the presence of the integrated protruded ground, the antenna uses two separate shorted strips and a T-shape monopole encircled therein as a coupling feed and a radiator as well. The shorted strips are short-circuited through a common shorting strip to the protruded ground and coupled-fed by the T-shape monopole to generate two separate quarter-wavelength loop resonant modes to form a wide lower band to cover the LTE700/GSM850/900 operation (704-960 MHz). An additional higher-order loop resonant mode is also generated to combine with two wideband resonant modes contributed by the T-shape monopole to form a very wide upper band of larger than 1 GHz to cover the GSM1800/1900/UMTS/LTE2300/2500 operation (1710-2690 MHz). Details of the proposed antenna are presented. For the SAR (specific absorption rate) requirement in practical mobile handsets to meet the limit of 1.6 W/kg for 1-g human tissue, the SAR values of the antenna are also analyzed."} {"_id": "7b93119414a02b332b1a6a51cf68b7e6ae3c42be", "title": "Optimal spatial filtering of single trial EEG during imagined hand movement.", "text": "The development of an electroencephalograph (EEG)-based brain-computer interface (BCI) requires rapid and reliable discrimination of EEG patterns, e.g., associated with imaginary movement. One-sided hand movement imagination results in EEG changes located at contra- and ipsilateral central areas. We demonstrate that spatial filters for multichannel EEG effectively extract discriminatory information from two populations of single-trial EEG, recorded during left- and right-hand movement imagery. The best classification results for three subjects are 90.8%, 92.7%, and 99.7%. The spatial filters are estimated from a set of data by the method of common spatial patterns and reflect the specific activation of cortical areas. The method performs a weighting of the electrodes according to their importance for the classification task. The high recognition rates and computational simplicity make it a promising method for an EEG-based brain-computer interface."} {"_id": "f4e3854fb10fcd29c9f382b7934c012fdeb8fc27", "title": "A Semantic Web Methodology for Situation-Aware Curative Food Service Recommendation System", "text": "Recently curative food is becoming more popular, as more people realize its benefits. Based on the theory of Chinese medicine, food itself is medicine. Curative food, which is an ideal nutritious food, can help to lose weight, increase immunity, and also has good curative effects in patients. In this paper, we propose a new curative food service (CFS) recommendation system, the Situation-aware Curative Food Service Recommendation System (SCFSRS), which provides a cooperative web-based platform for all related mobile users (MUs) and curative food service providers (CFSPs) and could strengthen the ability of CFS suggestion.
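The method of common spatial patterns named in the EEG abstract above has a compact closed-form recipe: a generalized eigendecomposition of class-averaged covariance matrices. A sketch assuming SciPy and two lists of band-pass-filtered single trials; the log-variance feature step is the usual companion to CSP, not necessarily the paper's exact pipeline.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial patterns for two-class EEG.
    trials_x: iterable of (channels, samples) single-trial arrays."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalized
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w.
    vals, vecs = eigh(ca, ca + cb)       # eigenvalues in ascending order
    # Extreme eigenvectors maximize variance for one class while
    # minimizing it for the other.
    picks = np.r_[np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))]
    return vecs[:, picks].T              # (2 * n_pairs, channels)

def csp_features(W, trial):
    """Per-trial feature vector: normalized log-variance of filtered signals."""
    z = W @ trial
    v = np.var(z, axis=1)
    return np.log(v / v.sum())
```

The rows of `W` are exactly the electrode weightings the abstract refers to; feeding `csp_features` into any linear classifier reproduces the standard left-vs-right imagery pipeline.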
SCFSRS is a five-tier system composed of the MUs, UDDI registries (UDDIRs), CFSPs, curative food services server (CFSS), and database server (DS)."} {"_id": "a3ecf44d904d404bd1368ffb56e7bf4439656fd9", "title": "Pattern-based synonym and antonym extraction", "text": "Many research studies adopt manually selected patterns for semantic relation extraction. However, manually identifying and discovering patterns is time consuming and it is difficult to discover all potential candidates. Instead, we propose an automatic pattern construction approach to extract verb synonyms and antonyms from English newspapers. Instead of relying on a single pattern, we combine results indicated by multiple patterns to maximize the recall."} {"_id": "4c0179f164fbc250a22b5dd449c408aa9dbd6727", "title": "Improving Back-Propagation by Adding an Adversarial Gradient", "text": "The back-propagation algorithm is widely used for learning in artificial neural networks. A challenge in machine learning is to create models that generalize to new data samples not seen in the training data. Recently, a common flaw in several machine learning algorithms was discovered: small perturbations added to the input data lead to consistent misclassification of data samples. Samples that easily mislead the model are called adversarial examples. Training a “maxout” network on adversarial examples has been shown to decrease this vulnerability, and also to increase classification performance. This paper shows that adversarial training has a regularizing effect also in networks with logistic, hyperbolic tangent and rectified linear units. A simple extension to the back-propagation method is proposed that adds an adversarial gradient to the training. The extension requires an additional forward and backward pass to calculate a modified input sample, or mini batch, used as input for standard back-propagation learning. The first experimental results on MNIST show that the “adversarial back-propagation” method increases the resistance to adversarial examples and boosts the classification performance. The extension reduces the classification error on the permutation invariant MNIST from 1.60% to 0.95% in a logistic network, and from 1.40% to 0.78% in a network with rectified linear units. Based on these promising results, adversarial back-propagation is proposed as a stand-alone regularizing method that should be further investigated."} {"_id": "5fb5caca9bb186321bde420529a5e77b4e019dea", "title": "EVOLUTION OF SOLID WASTE MANAGEMENT IN MALAYSIA", "text": "This paper seeks to examine the policy evolution of solid waste management in Malaysia and to determine its challenges and opportunities by assessing policy gaps, trends and stakeholders' perception of solid waste management in Malaysia. Malaysian solid waste generation has been increasing drastically: solid waste generation was projected to increase from about 9.0 million tonnes in 2000 to about 10.9 million tonnes in 2010, to about 12.8 million tonnes in 2015 and finally to about 15.6 million tonnes in 2020, though the national recycling rate is only about 3-5%. This projected increasing rate of solid waste generation is expected to burden the country’s resources and environment in managing these wastes in a sustainable manner.
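A minimal sketch of the extra forward/backward pass described in the adversarial back-propagation abstract above, written in PyTorch. The sign-of-gradient perturbation and the eps value are assumptions in the spirit of the method, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def adversarial_backprop_step(model, opt, x, y, eps=0.08):
    """One training step: an extra forward/backward pass builds a
    perturbed input, which is then used for standard back-propagation.
    eps is an assumed perturbation size."""
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()                               # extra pass: input gradient
    x_adv = (x + eps * x.grad.sign()).detach()    # gradient-sign perturbation
    opt.zero_grad()                               # discard the extra-pass grads
    F.cross_entropy(model(x_adv), y).backward()   # standard pass on x_adv
    opt.step()

# Usage sketch: any classifier works, e.g. a small MLP on flattened MNIST.
model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
adversarial_backprop_step(model, opt, torch.randn(32, 784),
                          torch.randint(0, 10, (32,)))
```

The doubled forward/backward cost per step is the "additional pass" the abstract mentions; everything after the perturbation is plain back-propagation.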
Solid waste management policies in Malaysia have evolved from simple informal policies, to supplementary provisions in legislation such as the Local Government Act 1976 and the Environmental Quality Act 1974, to formal policies such as the National Strategic Plan for Solid Waste Management (NSP) 2005, the Master Plan on National Waste Minimization (MWM) in 2006, the National Solid Waste Management Policy 2006 and the Solid Waste and Public Cleansing Management Act (SWMA) 2007. Policy gap analysis indicates challenges in the area of policy implementation, potentially due to lack of political will, weak stakeholder acceptance and policy impracticality due to direct adoption of policy practices from developed countries, while potential opportunities are in the area of legislation for mandatory recycling and source separation as well as government green procurement initiatives. In conclusion, the policy evolution of solid waste management in Malaysia may be shifting from a focus on basic solid waste management issues of proper collection, disposal and infrastructure requirements towards sustainable waste management."} {"_id": "58a93e9cd60ce331606d31ebed62599a2b7db805", "title": "The SWISS-PROT protein sequence database and its supplement TrEMBL in 2000", "text": "SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation (such as the description of the function of a protein, its domain structure, post-translational modifications, variants, etc.), a minimal level of redundancy and a high level of integration with other databases. Recent developments of the database include format and content enhancements, cross-references to additional databases, new documentation files and improvements to TrEMBL, a computer-annotated supplement to SWISS-PROT. TrEMBL consists of entries in SWISS-PROT-like format derived from the translation of all coding sequences (CDSs) in the EMBL Nucleotide Sequence Database, except the CDSs already included in SWISS-PROT. We also describe the Human Proteomics Initiative (HPI), a major project to annotate all known human sequences according to the quality standards of SWISS-PROT. SWISS-PROT is available at: http://www.expasy.ch/sprot/ and http://www.ebi.ac.uk/swissprot/"} {"_id": "63116ca22d65d15a178091ed5b20d89482d7b3db", "title": "MAFFT: a novel method for rapid multiple sequence alignment based on fast Fourier transform.", "text": "A multiple sequence alignment program, MAFFT, has been developed. The CPU time is drastically reduced as compared with existing methods. MAFFT includes two novel techniques. (i) Homologous regions are rapidly identified by the fast Fourier transform (FFT), in which an amino acid sequence is converted to a sequence composed of volume and polarity values of each amino acid residue. (ii) We propose a simplified scoring system that performs well for reducing CPU time and increasing the accuracy of alignments even for sequences having large insertions or extensions as well as distantly related sequences of similar length. Two different heuristics, the progressive method (FFT-NS-2) and the iterative refinement method (FFT-NS-i), are implemented in MAFFT. The performances of FFT-NS-2 and FFT-NS-i were compared with other methods by computer simulations and benchmark tests; the CPU time of FFT-NS-2 is drastically reduced as compared with CLUSTALW with comparable accuracy.
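The FFT trick in the MAFFT abstract above: encode residues as physicochemical values and locate homologous stretches as peaks of a cross-correlation computed in the frequency domain. The sketch below uses placeholder values (MAFFT uses normalized volume and polarity tables that are not reproduced here) and a single encoding instead of the paper's sum over both properties.

```python
import numpy as np

# Placeholder physicochemical values; illustrative only, not MAFFT's tables.
VOLUME = {"A": 0.1, "G": -0.3, "L": 0.6, "K": 0.5, "D": 0.0}

def correlate_fft(seq1, seq2, table=VOLUME):
    """Cross-correlation of two encoded sequences via the FFT.
    Peaks at lag k indicate candidate homologous regions offset by k."""
    a = np.array([table.get(c, 0.0) for c in seq1], float)
    b = np.array([table.get(c, 0.0) for c in seq2], float)
    a -= a.mean()
    b -= b.mean()
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two
    # corr[k] ~ sum_i a[i] * b[i + k] for k >= 0; negative lags wrap
    # to the end of the array because the FFT correlation is circular.
    corr = np.fft.irfft(np.fft.rfft(a, nfft).conj() * np.fft.rfft(b, nfft), nfft)
    return corr[:n]

peaks = correlate_fft("GALKDLAK", "KDLAKGAL")
print(np.argmax(peaks))   # lag with the strongest segment match
```

The O(n log n) cost of this step, versus O(n^2) for direct comparison, is the source of the CPU-time reduction the abstract reports.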
FFT-NS-i is over 100 times faster than T-COFFEE when the number of input sequences exceeds 60, without sacrificing accuracy."} {"_id": "9350c8bd303102f51235673c8283088e5e0972b0", "title": "WorkerRank: Using Employer Implicit Judgements to Infer Worker Reputation", "text": "In online labor marketplaces two parties are involved: employers and workers. An employer posts a job in the marketplace to receive applications from interested workers. After evaluating the match to the job, the employer hires one (or more) workers to accomplish the job via an online contract. At the end of the contract, the employer can provide his worker with some rating that becomes visible in the worker's online profile. This form of explicit feedback guides future hiring decisions, since it is indicative of the worker's true ability. In this paper, first we discuss some of the shortcomings of the existing reputation systems that are based on the end-of-contract ratings. Then we propose a new reputation mechanism that uses Bayesian updates to combine employer implicit feedback signals in a link-analysis approach. The new system addresses the shortcomings of existing approaches, while yielding a better signal of worker quality for hiring decisions."} {"_id": "1196b2ae45f29c03db542601bee42161c48b6c36", "title": "Implementing deep neural networks for financial market prediction on the Intel Xeon Phi", "text": "Deep neural networks (DNNs) are powerful types of artificial neural networks (ANNs) that use several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition community (Krizhevsky et al., 2012) for their superior predictive properties including robustness to overfitting. However, their application to financial market prediction has not been previously researched, partly because of their computational complexity. This paper describes the application of DNNs to predicting financial market movement directions. A critical step in the viability of the approach in practice is the ability to effectively deploy the algorithm on general purpose high performance computing infrastructure. Using an Intel Xeon Phi co-processor with 61 cores, we describe the process for efficient implementation of the batched stochastic gradient descent algorithm and demonstrate an 11.4x speedup on the Intel Xeon Phi over a serial implementation on the Intel Xeon."} {"_id": "1dedf2986b7b34e6541d2f08d4bd77d67a92fabe", "title": "Twitter Geolocation and Regional Classification via Sparse Coding", "text": "Twitter's location service enables users to add location information to their Tweets. Location information of a message is useful (e.g., for consumer marketing); however, fewer than 3% of messages are geotagged. Machine learning can geolocate users based on message content, and the best results so far come from supervised model-based approaches. We instead focus on an unsupervised, data-driven approach, namely sparse coding, to take advantage of the abundance of unlabeled messages for geolocating Twitter users. Three types of embedding schemes are considered (binary bag-of-words, word counts, word sequence), and for a fair comparison we follow the identical experimental methodology as Eisenstein et al. (2010)."} {"_id": "235783b8d52893b0dc93ce977d41544cbf9b5d18", "title": "The Golden Beauty: Brain Response to Classical and Renaissance Sculptures", "text": "Is there an objective, biological basis for the experience of beauty in art? Or is aesthetic experience entirely subjective?
Using fMRI technique, we addressed this question by presenting viewers, na\u00efve to art criticism, with images of masterpieces of Classical and Renaissance sculpture. Employing proportion as the independent variable, we produced two sets of stimuli: one composed of images of original sculptures; the other of a modified version of the same images. The stimuli were presented in three conditions: observation, aesthetic judgment, and proportion judgment. In the observation condition, the viewers were required to observe the images with the same mind-set as if they were in a museum. In the other two conditions they were required to give an aesthetic or proportion judgment on the same images. Two types of analyses were carried out: one which contrasted brain response to the canonical and the modified sculptures, and one which contrasted beautiful vs. ugly sculptures as judged by each volunteer. The most striking result was that the observation of original sculptures, relative to the modified ones, produced activation of the right insula as well as of some lateral and medial cortical areas (lateral occipital gyrus, precuneus and prefrontal areas). The activation of the insula was particularly strong during the observation condition. Most interestingly, when volunteers were required to give an overt aesthetic judgment, the images judged as beautiful selectively activated the right amygdala, relative to those judged as ugly. We conclude that, in observers na\u00efve to art criticism, the sense of beauty is mediated by two non-mutually exclusive processes: one based on a joint activation of sets of cortical neurons, triggered by parameters intrinsic to the stimuli, and the insula (objective beauty); the other based on the activation of the amygdala, driven by one's own emotional experiences (subjective beauty)."} {"_id": "198b711915429fa55162e749a0b964755b36a62e", "title": "Fine Grained Classification of Named Entities", "text": "While Named Entity extraction is useful in many natural language applications, the coarse categories that most NE extractors work with prove insufficient for complex applications such as Question Answering and Ontology generation. We examine one coarse category of named entities, persons, and describe a method for automatically classifying person instances into eight finergrained subcategories. We present a supervised learning method that considers the local context surrounding the entity as well as more global semantic information derived from topic signatures and WordNet. We reinforce this method with an algorithm that takes advantage of the presence of entities in multiple contexts."} {"_id": "4cb97088ff4c9adfb40559d23c20af49fdb6bc8b", "title": "User Association for Load Balancing in Heterogeneous Cellular Networks", "text": "For small cell technology to significantly increase the capacity of tower-based cellular networks, mobile users will need to be actively pushed onto the more lightly loaded tiers (corresponding to, e.g., pico and femtocells), even if they offer a lower instantaneous SINR than the macrocell base station (BS). Optimizing a function of the long-term rate for each user requires (in general) a massive utility maximization problem over all the SINRs and BS loads. On the other hand, an actual implementation will likely resort to a simple biasing approach where a BS in tier j is treated as having its SINR multiplied by a factor Aj \u2265 1, which makes it appear more attractive than the heavily-loaded macrocell. 
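The per-tier biasing just described in the user-association abstract reduces to an argmax over bias-weighted SINRs. A sketch (tier labels and bias values are illustrative, and real deployments compare biased received powers or rates rather than raw SINR alone):

```python
import numpy as np

def associate(sinr, tier, bias):
    """Pick a serving BS per user: argmax of bias-weighted SINR.
    sinr: (n_users, n_bs) array; tier: tier index of each BS;
    bias: dict tier -> A_j >= 1 (A_j > 1 pushes users onto that tier)."""
    a = np.array([bias[t] for t in tier])   # per-BS multiplier
    return np.argmax(sinr * a, axis=1)

# Example: macro tier 0 unbiased, pico tier 1 biased 5x.
sinr = np.array([[8.0, 2.5],
                 [6.0, 1.0]])
print(associate(sinr, tier=[0, 1], bias={0: 1.0, 1: 5.0}))  # [1 0]
```

User 0 is offloaded to the pico cell despite its lower instantaneous SINR, which is exactly the load-balancing behavior the biasing heuristic is meant to approximate.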
This paper bridges the gap between these approaches through several physical relaxations of the network-wide association problem, whose solution is NP hard. We provide a low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee, and we observe that simple per-tier biasing loses surprisingly little, if the bias values Aj are chosen carefully. Numerical results show a large (3.5x) throughput gain for cell-edge users and a 2x rate gain for median users relative to a maximizing received power association."} {"_id": "d7b2cf873d0c4841887e82a2be0b54013b34080d", "title": "A practical traffic management system for integrated LTE-WiFi networks", "text": "Mobile operators are leveraging WiFi to relieve the pressure posed on their networks by the surging bandwidth demand of applications. However, operators often lack intelligent mechanisms to control the way users access their WiFi networks. This lack of sophisticated control creates poor network utilization, which in turn degrades the quality of experience (QoE). To meet user traffic demands, it is evident that operators need solutions that optimally balance user traffic across cellular and WiFi networks. Motivated by the lack of practical solutions in this space, we design and implement ATOM - an end-to-end system for adaptive traffic offloading for WiFi-LTE deployments. ATOM has two novel components: (i) A network interface selection algorithm that maps user traffic across WiFi and LTE to optimize user QoE and (ii) an interface switching service that seamlessly re-directs ongoing user sessions in a cost-effective and standards-compatible manner. Our evaluations on a real LTE-WiFi testbed using YouTube traffic reveals that ATOM reduces video stalls by 3-4 times compared to naive solutions."} {"_id": "6140528bd1c4c8637f9d826a11368c0704819019", "title": "Distributed Antennas for Indoor Radio Communications", "text": "The idea of implementing an indoor radio communications system serving an entire building from a single central antenna appears to be an attractive proposition. However, based on various indoor propagation measurements of the signal attenuation and the multipath delay spread, such a centralized approach appears to be limited to small buildings and to narrow-band FDMA-type systems with limited reliability and flexibility. In this paper, we present the results of indoor radio propagation measurements of two signal distribution approaches that improve the picture dramatically. In the first, the building is divided into many small cells, each served from an antenna located in its own center, and with adjacent cells operating in different frequency bands. In the second approach, the building is divided into one or more large cells, each served from a distributed antenna system or a \u201cleaky feeder\u201d that winds its way through the hallways. This approach eliminates the frequency cell handoff problem that is bound to exist in the first approach, while still preserving the dramatic reductions in multipath delay spread and signal attenuation compared to a centralized system. For example, the measurements show that, with either approach, the signal attenuation can be reduced by as much as a few tens of decibels and the rms delay spread becomes limited to 20 to 50 ns, even in large buildings. 
This can make possible the implementation of sophisticated broad-band TDMA-type systems that are flexible, robust, and virtually building-independent."} {"_id": "83a1a478c68e21ee83dd0225091bf9d3444a8120", "title": "Design, Implementation and Evaluation of Congestion Control for Multipath TCP", "text": "Multipath TCP, as proposed by the IETF working group mptcp, allows a single data stream to be split across multiple paths. This has obvious benefits for reliability, and it can also lead to more efficient use of networked resources. We describe the design of a multipath congestion control algorithm, we implement it in Linux, and we evaluate it for multihomed servers, data centers and mobile clients. We show that some ‘obvious’ solutions for multipath congestion control can be harmful, but that our algorithm improves throughput and fairness compared to single-path TCP. Our algorithm is a drop-in replacement for TCP, and we believe it is safe to deploy."} {"_id": "9482abb8261440ce4d4c235e79eadc94561da2f4", "title": "Rate control for communication networks: shadow prices, proportional fairness and stability", "text": "F. P. Kelly, A. K. Maulloo and D. K. H. Tan, The Journal of the Operational Research Society, Vol. 49, No. 3 (Mar. 1998), pp. 237-252. Stable URL: http://www.jstor.org/stable/3010473"} {"_id": "b974d0999a8d3dc6153e005687b307df55615fa2", "title": "Internet of Things and LoRa™ Low-Power Wide-Area Networks: A survey", "text": "Nowadays there is a lot of effort on the study, analysis and finding of new solutions related to high density sensor networks used as part of the IoT (Internet of Things) concept. LoRa (Long Range) is a modulation technique that enables the long-range transfer of information with a low transfer rate. This paper presents a review of the challenges and the obstacles of the IoT concept with emphasis on the LoRa technology. A LoRaWAN (Long Range Wide Area Network) network is of the Low Power Wide Area Network (LPWAN) type and encompasses battery powered devices that ensure bidirectional communication. The main contribution of the paper is the evaluation of the LoRa technology considering the requirements of IoT. Following the introduction, Section II discusses the main obstacles of IoT development, Section III addresses the challenges and the need for solutions to different problems in WSN research, Section IV presents the LoRaWAN communication protocol architecture requirements, and Section V evaluates and discusses the LoRa modulation performance. In conclusion, LoRa can be considered a good candidate for addressing the IoT challenges."} {"_id": "eaef0ce6a0ba2999943e99a5c46d624f50948edf", "title": "The Impact of Performance-Contingent Rewards on Perceived Autonomy and Competence 1", "text": "Two studies examined the impact of performance-contingent rewards on perceived autonomy, competence, and intrinsic motivation. Autonomy was measured in terms of both decisional and affective reports. The first study revealed an undermining effect of performance-contingent rewards on affective reports of autonomy among university students, and an increase in reports of competence. Decisional autonomy judgements were unaffected by rewards. The second study replicated this pattern of findings among elementary school children.
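The rate-control record above (Kelly, Maulloo and Tan) is known for its primal algorithm, which drives rates toward weighted proportional fairness; a few lines make the dynamics concrete. The quadratic link price below is one admissible increasing price function chosen for illustration, not the paper's only choice, and the constants are arbitrary.

```python
import numpy as np

def kelly_primal(A, w, kappa=0.01, steps=50000):
    """Primal rate-control sketch: each user's rate x_r follows
    dx_r = kappa * (w_r - x_r * sum of link prices along route r).
    A: (n_users, n_links) 0/1 routing matrix; w: willingness-to-pay."""
    A = np.asarray(A, float)
    x = np.full(A.shape[0], 0.1)
    for _ in range(steps):
        price = (A.T @ x) ** 2                    # per-link price from load
        x = np.maximum(x + kappa * (w - x * (A @ price)), 1e-9)
    return x

# Two users share link 0; user 1 also traverses link 1, so at the
# proportionally fair point it settles on a lower rate.
print(kelly_primal([[1, 0], [1, 1]], w=np.array([1.0, 1.0])))
```

At the fixed point each user equates its willingness-to-pay with its rate times the route's total shadow price, which is the proportional-fairness optimality condition the paper establishes.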
These results help resolve Cognitive Evaluation Theory’s (E. L. Deci & R. M. Ryan, 1985; R. M. Ryan, V. Mims, & R. Koestner, 1983) and Eisenberger, Rhoades, et al.’s (R. Eisenberger, L. Rhoades, & J. Cameron, 1999) divergent positions on the impact of performance-contingent rewards on autonomy. The studies also included measures of intrinsic motivation."} {"_id": "a824f364c427da49499ab9d3133782e0221f2e1f", "title": "A Multi-tenant Web Application Framework for SaaS", "text": "Software as a Service (SaaS) is a software delivery model in which software resources are accessed remotely by users. Enterprises find SaaS attractive because of its low cost. SaaS requires sharing of application servers among multiple tenants for low operational costs. Besides the sharing of application servers, customizations are needed to meet requirements of each tenant. Supporting various levels of configuration and customization is desirable for SaaS frameworks. This paper describes a multi-tenant web application framework for SaaS. The proposed framework supports runtime customizations of user interfaces and business logics by use of file-level namespaces, inheritance, and polymorphism. It supports various client-side web application technologies."} {"_id": "fb41177076327c40dee612f30996739e20cf1bd7", "title": "Comparing deep neural networks against humans: object recognition when the signal gets weaker", "text": "Human visual object recognition is typically rapid and seemingly effortless, as well as largely independent of viewpoint and object orientation. Until very recently, animate visual systems were the only ones capable of this remarkable computational feat. This has changed with the rise of a class of computer vision algorithms called deep neural networks (DNNs) that achieve human-level classification performance on object recognition tasks. Furthermore, a growing number of studies report similarities in the way DNNs and the human visual system process objects, suggesting that current DNNs may be good models of human visual object recognition. Yet there clearly exist important architectural and processing differences between state-of-the-art DNNs and the primate visual system. The potential behavioural consequences of these differences are not well understood. We aim to address this issue by comparing human and DNN generalisation abilities towards image degradations. We find the human visual system to be more robust to image manipulations like contrast reduction, additive noise or novel eidolon-distortions. In addition, we find progressively diverging classification error-patterns between man and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition. We envision that our findings as well as our carefully measured and freely available behavioural datasets provide a new useful benchmark for the computer vision community to improve the robustness of DNNs and a motivation for neuroscientists to search for mechanisms in the brain that could facilitate this robustness. Data and materials available at https://github.com/rgeirhos/object-recognition"} {"_id": "85959eb06cdc1cd22b074e9173e9afa2126f06f1", "title": "Retrograde amnesia and remote memory impairment", "text": "The status of remote memory in case N.A., patients receiving bilateral ECT, and patients with Korsakoff syndrome was assessed using seven different tests.
The extent and pattern of remote memory impairment depended upon the etiology of amnesia, appearing as a relatively brief retrograde amnesia for N.A. and patients receiving ECT and as an extensive impairment covering many decades for patients with Korsakoff syndrome. Differences in remote memory impairment could not be explained by differences in the severity of anterograde amnesia, indicating that amnesia is not a unitary disorder. We suggest that brief retrograde amnesia is typically present whenever amnesia occurs and that extensive remote memory impairment is a distinct entity related to superimposed cognitive deficits. These ideas and a review of previous studies lead to a comprehensive proposal for understanding retrograde amnesia and remote memory impairment. The amnesic syndrome has been the subject of much interest in neuropsychology because of what it might reveal about the organization and neurological foundations of normal memory. The phenomenon of retrograde amnesia, that loss of memory can occur for the period before the precipitating incident, has had a particularly large impact on ideas about normal memory function [1-5]. However, conflicting views exist regarding both the extent and pattern of retrograde amnesia and these have been used to support different ideas about the nature of amnesia. Specifically, clinical assessment of memory loss in patients with medial temporal lobectomy has suggested that a relatively brief retrograde amnesia extends at least to several months and at most to a few years prior to surgery [6, 7]. Formal testing of patients receiving bilateral electroconvulsive therapy (ECT) has revealed a brief retrograde amnesia affecting memory for public events and former television programs that occurred during the few years prior to treatment [8-10]. These findings, together with studies of anterograde amnesia, are consistent with theories of amnesia that emphasize deficiency in the storage or consolidation of new information [1, 11, 12]. By contrast, formal testing of patients with Korsakoff syndrome has revealed an extensive impairment of remote memory for public events [13-15], famous faces [2, 13, 15], and famous voices [16] that covers several decades."} {"_id": "00bfa802025014fc9c55e316e82da7c227c246bd", "title": "Multiview Photometric Stereo", "text": "This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialize a multiview photometric stereo scheme to obtain a closed surface reconstruction.
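The per-pixel core that a scheme like the one just described builds on is classic single-view Lambertian photometric stereo with known lights, which is a linear least-squares problem. The multiview fusion and illumination estimation from the abstract are beyond this sketch.

```python
import numpy as np

def lambertian_photometric_stereo(I, L):
    """Classic photometric stereo: m images I (m, h, w) of a Lambertian
    surface under m known directional lights L (m, 3). Solves L @ b = I
    per pixel; the albedo is |b| and the unit normal is b / |b|."""
    m, h, w = I.shape
    b, *_ = np.linalg.lstsq(L, I.reshape(m, -1), rcond=None)  # (3, h*w)
    rho = np.linalg.norm(b, axis=0)                           # albedo map
    n = b / np.maximum(rho, 1e-12)                            # unit normals
    return n.T.reshape(h, w, 3), rho.reshape(h, w)

# Synthetic sanity check: a flat patch facing the camera.
L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.87], [0.0, 0.5, 0.87]])
true_n = np.array([0.0, 0.0, 1.0])
I = np.einsum("lk,k->l", L, true_n)[:, None, None] * np.ones((3, 4, 4))
normals, albedo = lambertian_photometric_stereo(I, L)
print(normals[0, 0])   # ~ [0, 0, 1]
```

At least three non-coplanar light directions are needed for the system to be well posed, which is why multi-illumination capture is part of the acquisition setup.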
There are two main contributions in this paper: First, we describe a robust technique to estimate light directions and intensities and, second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and, hence, allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multiview stereo results."} {"_id": "83f2fb84f526ebbd6dbf1ceba6843634e5e99951", "title": "Bicycle-Sharing System Analysis and Trip Prediction", "text": "Bicycle-sharing systems, which can provide shared bike usage services for the public, have been launched in many big cities. In bicycle-sharing systems, people can borrow and return bikes at any stations in the service region very conveniently. Therefore, bicycle-sharing systems are normally used as a short distance trip supplement for private vehicles as well as regular public transportation. Meanwhile, for stations located at different places in the service region, the bike usages can be quite skewed and imbalanced. Some stations have too many incoming bikes and get jammed without enough docks for upcoming bikes, while some other stations get empty quickly and lack enough bikes for people to check out. Therefore, inferring the potential destinations and arrival time of each individual trip beforehand can effectively help the service providers schedule manual bike re-dispatch in advance. In this paper, we will study the individual trip prediction problem for bicycle-sharing systems. To address the problem, we study a real-world bicycle-sharing system and analyze individuals' bike usage behaviors first. Based on the analysis results, a new trip destination prediction and trip duration inference model will be introduced. Experiments conducted on a real-world bicycle-sharing system demonstrate the effectiveness of the proposed model."} {"_id": "a066c755059ec6fe1d9def55fb4c554474eb343f", "title": "Wavelet-Based Neural Network for Power Disturbance Recognition and Classification", "text": "In this paper, a prototype wavelet-based neural-network classifier for recognizing power-quality disturbances is implemented and tested under various transient events. The discrete wavelet transform (DWT) technique is integrated with the probabilistic neural-network (PNN) model to construct the classifier. First, the multiresolution-analysis technique of DWT and Parseval’s theorem are employed to extract the energy distribution features of the distorted signal at different resolution levels. Then, the PNN classifies these extracted features to identify the disturbance type according to the transient duration and the energy features. Since the proposed methodology can reduce a great quantity of the distorted signal features without losing its original property, less memory space and computing time are required.
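The Parseval-based energy features just described can be sketched with the PyWavelets package: by Parseval's theorem the squared coefficients at each decomposition level partition the signal energy, so the normalized per-level energies form a compact feature vector. The 'db4' wavelet and 6 levels below are assumptions, not necessarily the paper's settings.

```python
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=6):
    """Per-level energy distribution of a disturbance waveform via a
    multilevel DWT. Returns one normalized energy per level, i.e. the
    low-dimensional feature vector a PNN classifier would consume."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n..cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Example: a 60 Hz waveform with a short high-frequency disturbance
# concentrates extra energy in the fine-detail levels.
t = np.linspace(0, 0.5, 4000)
clean = np.sin(2 * np.pi * 60 * t)
disturbed = clean + 0.5 * np.sin(2 * np.pi * 900 * t) * (t > 0.25) * (t < 0.3)
print(wavelet_energy_features(disturbed))
```

Collapsing thousands of samples into a handful of band energies is what yields the memory and computing-time savings the abstract claims.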
Various transient events tested, such as momentary interruption, capacitor switching, voltage sag/swell, harmonic distortion, and flicker, show that the classifier can detect and classify different power disturbance types efficiently."} {"_id": "9d0870dfbf2b06da226403430dc77b9c82b05457", "title": "A time window neural network based framework for Remaining Useful Life estimation", "text": "This paper develops a framework for determining the Remaining Useful Life (RUL) of aero-engines. The framework includes the following modular components: creating a moving time window, a suitable feature extraction method and a multi-layer neural network as the main machine learning algorithm. The proposed framework is evaluated on the publicly available C-MAPSS dataset. The prognostic accuracy of the proposed algorithm is also compared against other state-of-the-art methods available in the literature and it has been shown that the proposed framework has the best overall performance."} {"_id": "4f298d6d0c8870acdbf94fe473ebf6814681bd1f", "title": "Going Deeper into Action Recognition: A Survey", "text": "We provide a detailed review of the work on human action recognition over the past decade. We refer to “actions” as meaningful human motions. Starting with methods that are based on handcrafted representations, we review the impact of revamped deep neural networks on action recognition. We follow a systematic taxonomy of action recognition approaches to present a coherent discussion over their improvements and fall-backs."} {"_id": "0bca0ca7bb642b747797c17a4899206116fb0b25", "title": "Color attributes for object detection", "text": "State-of-the-art object detectors typically use shape information as a low level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and when combined with traditional shape features provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods."} {"_id": "8f02f8d04675bb91866b5c2c4a39975b01b4aeef", "title": "Cyber security for home users: A new way of protection through awareness enforcement", "text": "We are currently living in an age where the use of the Internet has become second nature to millions of people. Not only do businesses depend on the Internet for all types of electronic transactions, but more and more home users are experiencing the immense benefit of the Internet. However, this dependence and use of the Internet bring new and dangerous risks. This is due to increasing attempts from unauthorised third parties to compromise private information for their own benefit – the whole wide area of cyber crime.
It is therefore essential that all users understand the risks of using the Internet, the importance of securing their personal information and the consequences if this is not done properly. It is well known that home users are specifically vulnerable, and that cyber criminals have such users squarely in their sights. This vulnerability of home users is due to many factors, but one of the most important ones is the fact that such home users are in many cases not aware of the risks of using the Internet, and often venture into cyber space without any awareness preparation for this journey. This paper specifically investigates the position of the home user, and proposes a new model, The E-Awareness Model (E-AM), in which home users can be forced to acquaint themselves with the risks involved in venturing into cyber space. The E-AM consists of two components: the awareness component housed in the E-Awareness Portal, and the enforcement component. This model proposes a way to improve information security awareness amongst home users by presenting some information security content and enforcing the absorption of this content. The main difference between the presented model and other existing information security awareness models is that in the presented model the acquiring/absorption of the awareness content is compulsory: the user is forced to proceed via the E-Awareness Portal without the option of bypassing it."} {"_id": "bec987bf3dfbb158900cfad30418e24356d7abf3", "title": "A RSSI-based and calibrated centralized localization technique for wireless sensor networks", "text": "This paper presents a multi-hop localization technique for WSNs exploiting acquired received signal strength indications. The proposed system aims at providing an effective solution for the self-localization of nodes in static/semi-static wireless sensor networks without requiring previous deployment information."} {"_id": "7d7499183128fd032319cf4cbdac6eeab4c0362b", "title": "Loosely-Coupled Benchmark Framework Automates Performance Modeling on IaaS Clouds", "text": "Cloud computing is under rapid development, which brings the urgent need of evaluation and comparison of cloud systems. Performance testers often struggle with tedious manual operations when carrying out many similar experiments on cloud systems. However, few current benchmark tools provide both a flexible workflow controlling methodology and an extensible workload abstraction at the same time. We present a modeling methodology to compare the performance from multiple aspects based on a loosely coupled benchmark framework, which automates experiments under agile workflow controlling and achieves broad cloud support, as well as good workload extensibility. With several built-in workloads and scenario templates, we performed a series of tests on Amazon EC2 services and our private OpenStack-based cloud, and analyzed the elasticity and scalability based on the performance models. Experiments show the robustness and compatibility of our framework, which provides a strong indication that it can be leveraged in practice by researchers and testers for further study.
In the research field of wireless sensor networks, power efficiency is a major issue. This problem can be mitigated by using ZigBee technology. The main idea is to understand how data travels over a wireless transmission medium using a wireless sensor network and a monitoring system. This paper designs an irrigation system that is automated using controllable parameters such as temperature, soil moisture and air humidity, because they are the important factors to be controlled in precision agriculture (PA)."} {"_id": "4064696e69b0268003879c0bcae6527d3b786b85", "title": "Winner-Take-All Autoencoders", "text": "In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We will show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance."} {"_id": "7553667a0e392e93a9bfecae3e34201a0c1a1266", "title": "Joint optimization of LCMV beamforming and acoustic echo cancellation", "text": "Full-duplex hands-free acoustic human/machine interfaces often require the combination of acoustic echo cancellation and speech enhancement in order to suppress acoustic echoes, local interference, and noise. In order to optimally exploit positive synergies between acoustic echo cancellation and speech enhancement, we present in this contribution a combined least-squares (LS) optimization criterion for the integration of acoustic echo cancellation and adaptive linearly-constrained minimum variance (LCMV) beamforming. Based on this optimization criterion, we derive a computationally efficient system based on the generalized sidelobe canceller (GSC), which effectively deals with scenarios with time-varying acoustic echo paths and the simultaneous presence of double-talk of acoustic echoes, local interference, and desired speakers."} {"_id": "a3822dcdc70c4ef2a2b91ceba0902f8cf75041d1", "title": "Nonstationary Function Optimization Using Genetic Algorithms with Dominance and Diploidy", "text": "Specifically, we apply genetic algorithms that include diploid genotypes and dominance operators to a simple nonstationary problem in function optimization: an oscillating, blind knapsack problem. In doing this, we find that diploidy and dominance induce a form of long term distributed memory that stores and occasionally remembers good partial solutions that were once desirable. This memory permits faster adaptation to drastic environmental shifts than is possible without the added structures and operators. This paper investigates the use of diploid representations and dominance operators in genetic algorithms (GAs) to improve performance in environments that vary with time. 
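(For the winner-take-all autoencoder record above, the lifetime-sparsity constraint enforced from mini-batch statistics can be sketched as below; the 5% rate and the hard-threshold form are illustrative assumptions.)

```python
import numpy as np

def lifetime_sparsity(hidden, rate=0.05):
    """For each hidden unit (column), keep only its top `rate` fraction of
    activations across the mini-batch and zero out the rest."""
    batch = hidden.shape[0]
    k = max(1, int(rate * batch))
    # per-unit threshold = k-th largest activation within the batch
    thresh = np.partition(hidden, batch - k, axis=0)[batch - k]
    return np.where(hidden >= thresh, hidden, 0.0)
```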
The mechanics of diploidy and dominance in natural genetics are briefly discussed, and the usage of these structures and operators in other GA investigations is reviewed. An extension of the schema theorem is developed which illustrates the ability of diploid GAs with dominance to hold alternative alleles in abeyance. Both haploid and diploid GAs are applied to a simple time varying problem: an oscillating, blind knapsack problem. Simulation results show that a diploid GA with an evolving dominance map adapts more quickly to the sudden changes in this problem environment than either a haploid GA or a diploid GA with a fixed dominance map. These proof-of-principle results indicate that diploidy and dominance can be used to induce a form of long term distributed memory within a population of structures. In the remainder of this paper, we explore the mechanism, theory, and implementation of dominance and diploidy in artificial genetic search. We start by examining the role of diploidy and dominance in natural genetics, and we briefly review examples of their usage in genetic algorithm circles. We extend the schema theorem to analyze the effect of these structures and mechanisms. We present results from computational experiments on a 17-object, oscillating, blind 0-1 knapsack problem. Simulations with adaptive dominance maps and diploidy are able to adapt more quickly to sudden environmental shifts than either a haploid genetic algorithm or a diploid genetic algorithm with fixed dominance map. These results are encouraging and suggest the investigation of dominance and diploidy in other GA applications in search and machine learning. INTRODUCTION Real world problems are seldom independent of time. If you don't like the weather, wait five minutes and it will change. If this week gasoline costs $1.30 a gallon, next week it may cost $0.89 a gallon or perhaps $2.53 a gallon. In these and many more complex ways, real world environments are both nonstationary and noisy. Searching for good solutions or good behavior under such conditions is a difficult task; yet, despite the perpetual change and uncertainty, all is not lost. History does repeat itself, and what goes around does come around. The horrors of Malthusian extrapolation rarely come to pass, and solutions that worked well yesterday are at least somewhat likely to be useful when circumstances are somewhat similar tomorrow or the day after. The temporal regularity implied in these observations places a premium on search augmented by selective memory. In other words, a system which does not learn the lessons of its history is doomed to repeat its mistakes. THE MECHANICS OF NATURAL DOMINANCE AND DIPLOIDY It is surprising to some genetic algorithm newcomers that the most commonly used GA is modeled after the mechanics of haploid genetics. After all, don't most elementary genetics textbooks start off with a discussion of Mendel's pea plants and some mention of diploidy and dominance? The reason for this disparity between genetic algorithm practice and genetics textbook coverage is due to the success achieved by early GA investigators (Hollstien, 1971; De Jong, 1975) using haploid chromosome models on stationary problems. It was found that surprising efficacy and efficiency could be obtained using single stranded (haploid) chromosomes under the action of reproduction and crossover. As a result, later investigators of artificial genetic search have tended to ignore diploidy and dominance. 
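(A minimal sketch of the diploid machinery and the oscillating knapsack evaluation described in this record; the boolean per-locus dominance map is a simplification of the paper's evolving dominance scheme, and all parameters are illustrative.)

```python
def express(chrom_a, chrom_b, dominance):
    """Phenotype of a diploid individual: where the two strands disagree,
    the dominance map decides which allele is expressed."""
    return [a if dominance[i] else b
            for i, (a, b) in enumerate(zip(chrom_a, chrom_b))]

def knapsack_fitness(phenotype, values, weights, capacity):
    """Blind 0-1 knapsack: overweight solutions score zero. Oscillating the
    capacity between two settings makes the environment nonstationary."""
    w = sum(wi for wi, g in zip(weights, phenotype) if g)
    v = sum(vi for vi, g in zip(values, phenotype) if g)
    return v if w <= capacity else 0.0
```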
In this section we examine the mechanics of diploidy and dominance to understand their roles in shielding alternate alleles. In this paper, we investigate the behavior of a genetic algorithm augmented by structures and operators capable of exploiting the regularity and repeatability of many nonstationary environments."} {"_id": "6c73f9567eefbfdea469365bea9faa579b4b9499", "title": "Spin Transfer Torques", "text": "This tutorial article introduces the physics of spin transfer torques in magnetic devices. We provide an elementary discussion of the mechanism of spin transfer torque, and review the theoretical and experimental progress in this field. Our intention is to be accessible to beginning graduate students. This is the introductory paper for a cluster of \u201cCurrent Perspectives\u201d articles on spin transfer torques published in volume 320 of the Journal of Magnetism and Magnetic Materials. This article is meant to set the stage for the others which follow it in this cluster; they focus in more depth on particularly interesting aspects of spin-torque physics and highlight unanswered questions that might be productive topics for future research."} {"_id": "6ee2bf4f9647e110fa7788c932a9592cf8eaeee0", "title": "Group-sensitive multiple kernel learning for object categorization", "text": "In this paper, we propose a group-sensitive multiple kernel learning (GS-MKL) method to accommodate the intra-class diversity and the inter-class correlation for object categorization. By introducing an intermediate representation \u201cgroup\u201d between images and object categories, GS-MKL attempts to find an appropriate kernel combination for each group to get a finer depiction of object categories. For each category, images within a group share a set of kernel weights while images from different groups may employ distinct sets of kernel weights. In GS-MKL, such group-sensitive kernel combinations together with the multi-kernel based classifier are optimized in a joint manner to seek a trade-off between capturing the diversity and keeping the invariance for each category. Extensive experiments show that our proposed GS-MKL method has achieved encouraging performance over three challenging datasets."} {"_id": "c38f93b80b41a1a95662d0734e67339d05e8f2c7", "title": "MicroRNA Expression Changes during Interferon-Beta Treatment in the Peripheral Blood of Multiple Sclerosis Patients", "text": "MicroRNAs (miRNAs) are small non-coding RNA molecules acting as post-transcriptional regulators of gene expression. They are involved in many biological processes, and their dysregulation is implicated in various diseases, including multiple sclerosis (MS). Interferon-beta (IFN-beta) is widely used as a first-line immunomodulatory treatment of MS patients. Here, we present the first longitudinal study on the miRNA expression changes in response to IFN-beta therapy. Peripheral blood mononuclear cells (PBMC) were obtained before treatment initiation as well as after two days, four days, and one month, from patients with clinically isolated syndrome (CIS) and patients with relapsing-remitting MS (RRMS). We measured the expression of 651 mature miRNAs and about 19,000 mRNAs in parallel using real-time PCR arrays and Affymetrix microarrays. We observed that the up-regulation of IFN-beta-responsive genes is accompanied by a down-regulation of several miRNAs, including members of the mir-29 family. These differentially expressed miRNAs were found to be associated with apoptotic processes and IFN feedback loops. 
A network of miRNA-mRNA target interactions was constructed by integrating the information from different databases. Our results suggest that miRNA-mediated regulation plays an important role in the mechanisms of action of IFN-beta, not only in the treatment of MS but also in normal immune responses. miRNA expression levels in the blood may serve as a biomarker of the biological effects of IFN-beta therapy that may predict individual disease activity and progression."} {"_id": "4917a802cab040fc3f6914c145fb59c1d738aa84", "title": "The potential environmental impact of engineered nanomaterials", "text": "With the increased presence of nanomaterials in commercial products, a growing public debate is emerging on whether the environmental and social costs of nanotechnology outweigh its many benefits. To date, few studies have investigated the toxicological and environmental effects of direct and indirect exposure to nanomaterials, and no clear guidelines exist to quantify these effects."} {"_id": "bd483eefea0e352cdc86288a9b0c27bad45143f7", "title": "SDN-based solutions for Moving Target Defense network protection", "text": "Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD), on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged in order to provide attack surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit and we show that they significantly increase the attacker's overheads."} {"_id": "5238190eb598fb3352c51ee07b7ee8ec714f3c38", "title": "OPEM: A Static-Dynamic Approach for Machine-Learning-Based Malware Detection", "text": "Malware is any computer software potentially harmful to both computers and networks. The amount of malware is growing every year and poses a serious global security threat. Signature-based detection is the most extended method in commercial antivirus software; however, it consistently fails to detect new malware. Supervised machine learning has been adopted to solve this issue. There are two types of features that supervised malware detectors use: (i) static features and (ii) dynamic features. Static features are extracted without executing the sample whereas dynamic ones require an execution. Both approaches have their advantages and disadvantages. In this paper, we propose, for the first time, OPEM, a hybrid unknown-malware detector which combines the frequency of occurrence of operational codes (statically obtained) with the information of the execution trace of an executable (dynamically obtained). We show that this hybrid approach enhances the performance of both approaches when run separately."} {"_id": "e5acdb0246b33d33c2a34a4a23faaf21e2f9b924", "title": "An Efficient Non-Negative Matrix-Factorization-Based Approach to Collaborative Filtering for Recommender Systems", "text": "Matrix-factorization (MF)-based approaches prove to be highly accurate and scalable in addressing collaborative filtering (CF) problems. 
During the MF process, the non-negativity, which ensures good representativeness of the learnt model, is critically important. However, current non-negative MF (NMF) models are mostly designed for problems in computer vision, while CF problems differ from them due to the extreme sparsity of the target rating-matrix. Currently available NMF-based CF models are based on matrix manipulation and lack practicability for industrial use. In this work, we focus on developing an NMF-based CF model with a single-element-based approach. The idea is to investigate the non-negative update process depending on each involved feature rather than on the whole feature matrices. With the non-negative single-element-based update rules, we subsequently integrate the Tikhonov regularizing terms, and propose the regularized single-element-based NMF (RSNMF) model. RSNMF is especially suitable for solving CF problems subject to the constraint of non-negativity. The experiments on large industrial datasets show the high accuracy and low computational complexity achieved by RSNMF."} {"_id": "11efa6998c2cfd3de59cf0ec0321a9e17418915d", "title": "Toward Automated Dynamic Malware Analysis Using CWSandbox", "text": "Malware is notoriously difficult to combat because it appears and spreads so quickly. In this article, we describe the design and implementation of CWSandbox, a malware analysis tool that fulfills our three design criteria of automation, effectiveness, and correctness for the Win32 family of operating systems"} {"_id": "129ed742b496b23efdf745aaf0c48958ef64d2c6", "title": "Exploring Multiple Execution Paths for Malware Analysis", "text": "Malicious code (or Malware) is defined as software that fulfills the deliberately harmful intent of an attacker. Malware analysis is the process of determining the behavior and purpose of a given Malware sample (such as a virus, worm, or Trojan horse). This process is a necessary step to be able to develop effective detection techniques and removal tools. Currently, Malware analysis is mostly a manual process that is tedious and time-intensive. To mitigate this problem, a number of analysis tools have been proposed that automatically extract the behavior of an unknown program by executing it in a restricted environment and recording the operating system calls that are invoked. The problem of dynamic analysis tools is that only a single program execution is observed. Unfortunately, however, it is possible that certain malicious actions are only triggered under specific circumstances (e.g., on a particular day, when a certain file is present, or when a certain command is received). In this paper, we propose a system that allows us to explore multiple execution paths and identify malicious actions that are executed only when certain conditions are met. This enables us to automatically extract a more complete view of the program under analysis and identify under which circumstances suspicious actions are carried out. Our experimental results demonstrate that many Malware samples show different behavior depending on input read from the environment. 
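(The single-element-based non-negative update in the RSNMF record above might be realized roughly as follows; this multiplicative form with a Tikhonov term is a sketch of the idea, not the paper's exact rule, and the symmetric update for the item factors is omitted.)

```python
import numpy as np

def rsnmf_user_sweep(R, P, Q, lam=0.05):
    """One sweep of element-wise non-negative updates of the user factors P.
    R: sparse ratings as a dict keyed by (user, item); P, Q: non-negative
    factor matrices; lam: Tikhonov regularization weight."""
    num = np.zeros_like(P)
    den = np.zeros_like(P)
    for (u, i), r in R.items():          # only observed entries are visited
        pred = P[u] @ Q[i]
        num[u] += Q[i] * r
        den[u] += Q[i] * pred + lam * P[u]
    # multiplicative update keeps P non-negative; untouched rows stay as-is
    P *= np.where(den > 0, num / np.maximum(den, 1e-12), 1.0)
    return P
```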
Thus, by exploring multiple execution paths, we can obtain a more complete picture of their actions."} {"_id": "66651bb47e7d16b49f677decfb60cbe1939e7a91", "title": "Optimizing number of hidden neurons in neural networks", "text": "In this paper, a novel and effective criterion based on the estimation of the signal-to-noise-ratio figure (SNRF) is proposed to optimize the number of hidden neurons in neural networks to avoid overfitting in function approximation. SNRF can quantitatively measure the useful information left unlearned so that overfitting can be automatically detected from the training error alone, without use of a separate validation set. It is illustrated by optimizing the number of hidden neurons in a multi-layer perceptron (MLP) using benchmark datasets. The criterion can be further utilized in the optimization of other parameters of neural networks when overfitting needs to be considered."} {"_id": "03bb84352b89691110f112b5515766c55bcc5720", "title": "Neural Relation Extraction via Inner-Sentence Noise Reduction and Transfer Learning", "text": "Extracting relations is critical for knowledge base completion and construction, in which distant supervised methods are widely used to extract relational facts automatically with the existing knowledge bases. However, the automatically constructed datasets comprise large amounts of low-quality sentences containing noisy words, which are neglected by current distant supervised methods, resulting in unacceptable precision. To mitigate this problem, we propose a novel word-level distant supervised approach for relation extraction. We first build a Sub-Tree Parse (STP) to remove noisy words that are irrelevant to relations. Then we construct a neural network that takes the sub-tree as input while applying entity-wise attention to identify the important semantic features of relational words in each instance. To make our model more robust against noisy words, we initialize our network with a priori knowledge learned from the relevant task of entity classification by transfer learning. We conduct extensive experiments using the corpora of New York Times (NYT) and Freebase. Experiments show that our approach is effective and improves the area under the Precision/Recall (PR) curve from 0.35 to 0.39 over the state-of-the-art work."} {"_id": "5c7f700d28c9e5b5ac115c1409262ecfe89812db", "title": "An Evaluation of Aggregation Techniques in Crowdsourcing", "text": "As the volumes of AI problems involving human knowledge are likely to soar, crowdsourcing has become essential in a wide range of world-wide-web applications. One of the biggest challenges of crowdsourcing is aggregating the answers collected from the crowd, since the workers might have wide-ranging levels of expertise. In order to tackle this challenge, many aggregation techniques have been proposed. These techniques, however, have never been compared and analyzed under the same setting, rendering a \u2018right\u2019 choice for a particular application very difficult. Addressing this problem, this paper presents a benchmark that offers a comprehensive empirical study on the performance comparison of the aggregation techniques. Specifically, we integrated several state-of-the-art methods in a comparable manner, and measured various performance metrics with our benchmark, including computation time, accuracy, robustness to spammers, and adaptivity to multi-labeling. We then provide in-depth analysis of benchmarking results, obtained by simulating the crowdsourcing process with different types of workers. 
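(Two of the simplest aggregation baselines such a benchmark would cover, sketched for one task; the worker-accuracy weighting is a generic illustration, not a specific method from the paper.)

```python
from collections import Counter, defaultdict

def majority_vote(answers):
    """answers: list of (worker_id, label) pairs for one task."""
    return Counter(label for _, label in answers).most_common(1)[0][0]

def weighted_vote(answers, accuracy):
    """Weight each worker's label by an estimated accuracy in [0, 1];
    unknown workers default to 0.5 (uninformative)."""
    score = defaultdict(float)
    for worker, label in answers:
        score[label] += accuracy.get(worker, 0.5)
    return max(score, key=score.get)
```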
We believe that the findings from the benchmark will be able to serve as a practical guideline for crowdsourcing applications."} {"_id": "6ebd2b2822bcd32b58af033a26fd08a39ba777f7", "title": "ONTOLOGY ALIGNMENT USING MACHINE LEARNING TECHNIQUES", "text": "In the semantic web, ontology plays an important role in providing formal definitions of concepts and relationships. Therefore, relating similar ontologies becomes essential to provide ontology interpretability and extendibility. It is inevitable to have similar but not identical ontologies in a particular domain, since there might be several definitions for a given concept. This paper presents a method to combine similarity measures of different categories, without ontology instances or any user feedback, towards aligning two given ontologies. To align different ontologies efficiently, K Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (DT) and AdaBoost classifiers are investigated. Each classifier is optimized based on lower cost and better classification rate. Experimental results demonstrate that the F-measure criterion improves to 99% using feature selection and a combination of AdaBoost and DT classifiers, which is highly comparable with, and outperforms, previously reported F-measures."} {"_id": "d09b70d5029f60bef888a8e73f9a18e2fc2db2db", "title": "Pictogrammar: an AAC device based on a semantic grammar", "text": "As many as two-thirds of individuals with an Autism Spectrum Disorder (ASD) also have language impairments, which can range from mild limitations to complete non-verbal behavior. For such cases, there are several Augmentative and Alternative Communication (AAC) devices available. These are computer-designed tools intended to help people with ASD palliate or overcome such limitations, at least partially. Some of the most popular AAC devices are based on pictograms, so that a pictogram is the graphical representation of a simple concept and sentences are composed by concatenating a number of such pictograms. Usually, these tools have to manage a vocabulary made up of hundreds of pictograms/concepts, with no or very poor knowledge of the language at the semantic and pragmatic levels. In this paper we present Pictogrammar, an AAC system which takes advantage of SUpO and PictOntology. SUpO (Simple Upper Ontology) is a formal semantic ontology which is made up of detailed knowledge of facts of everyday life such as simple words, with special interest in linguistic issues, allowing automated grammatical supervision. PictOntology is an ontology developed to manage sets of pictograms, linked to SUpO. Both ontologies make possible the development of tools which are able to take advantage of a formal semantics."} {"_id": "b0a1f562a55aae189d6a5cb826582b2e7fb06d3c", "title": "Multi-modal Mean-Fields via Cardinality-Based Clamping", "text": "Mean Field inference is central to statistical physics. It has attracted much interest in the Computer Vision community to efficiently solve problems expressible in terms of large Conditional Random Fields. However, since it models the posterior probability distribution as a product of marginal probabilities, it may fail to properly account for important dependencies between variables. We therefore replace the fully factorized distribution of Mean Field by a weighted mixture of such distributions, that similarly minimizes the KL-Divergence to the true posterior. 
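(Returning to the ontology-alignment record above: combining similarity measures with AdaBoost over decision trees might look like the toy sketch below; the three similarity features and the data are invented for illustration.)

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data: each row describes a candidate concept pair with three
# similarity scores (e.g., string-based, structural, linguistic).
X = np.array([[0.9, 0.8, 0.7], [0.2, 0.1, 0.3],
              [0.8, 0.9, 0.6], [0.1, 0.2, 0.2]])
y = np.array([1, 0, 1, 0])  # 1 = the pair should be aligned

clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=50)
clf.fit(X, y)
print(clf.predict([[0.85, 0.7, 0.65]]))  # likely predicted as an alignment
```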
By introducing two new ideas, namely conditioning on groups of variables instead of single ones, and using a parameter of the conditional random field potentials, which we identify with the temperature in the sense of statistical physics, to select such groups, we can perform this minimization efficiently. Our extension of the clamping method proposed in previous works allows us to both produce a more descriptive approximation of the true posterior and, inspired by the diverse MAP paradigms, fit a mixture of Mean Field approximations. We demonstrate that this positively impacts real-world algorithms that initially relied on mean fields."} {"_id": "aded779fe2ec65b9d723435182e093f9f18ad80f", "title": "Design and Implementation of a Reliable Wireless Real-Time Home Automation System Based on Arduino Uno Single-Board Microcontroller", "text": "This paper presents design and implementation concepts for a wireless real-time home automation system based on the Arduino Uno microcontroller as the central controller. The proposed system has two operational modes. The first one is denoted as a manually\u2013automated mode in which the user can monitor and control the home appliances from anywhere in the world using a cellular phone through Wi-Fi communication technology. The second one is referred to as a self-automated mode that makes the controllers capable of monitoring and controlling different appliances in the home automatically in response to the signals that come from the related sensors. To support the usefulness of the proposed technique, a hardware implementation with a Matlab-GUI platform for the proposed system is carried out and the reliability of the system is introduced. The proposed system is shown to be simple, cost-effective and flexible, making it a suitable and good candidate for the smart home future. I. INTRODUCTION Recently, man's work and life have become increasingly tied to the rapid growth in communications and information technology. The informationized society has changed human beings' way of life as well as challenged the traditional residence. Following rapid economic expansion, living standards keep rising day by day, and people have higher requirements for dwelling functions. The intellectualized society brings diversified information where a safe, economic, comfortable and convenient life has become the ideal for every modern family [1]."} {"_id": "b6741f7862be64a5435d2625ea46f0508b9d3fee", "title": "Security Keys: Practical Cryptographic Second Factors for the Modern Web", "text": "\u201cSecurity Keys\u201d are second-factor devices that protect users against phishing and man-in-the-middle attacks. Users carry a single device and can self-register it with any online service that supports the protocol. The devices are simple to implement and deploy, simple to use, privacy preserving, and secure against strong attackers. We have shipped support for Security Keys in the Chrome web browser and in Google\u2019s online services. We show that Security Keys lead to both an increased level of security and user satisfaction by analyzing a two year deployment which began within Google and has extended to our consumer-facing web applications. The Security Key design has been standardized by the FIDO Alliance, an organization with more than 250 member companies spanning the industry. Currently, Security Keys have been deployed by Google, Dropbox, and GitHub. 
An updated and extended tech report is available at https://github.com/google/u2fref-code/docs/SecurityKeys_TechReport.pdf."} {"_id": "f98d91492335e74621837c01c860cbc801a2acbb", "title": "A web-based bayesian intelligent tutoring system for computer programming", "text": "In this paper, we present a Web-based intelligent tutoring system, called BITS. The decision making process conducted in our intelligent system is guided by a Bayesian network approach to support students in learning computer programming. Our system takes full advantage of Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. We discuss how to employ Bayesian networks as an inference engine to guide the students\u2019 learning processes. In addition, we describe the architecture of BITS and the role of each module in the system. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences."} {"_id": "0204dee746e1f55d87409c5f482d1e7c74c48baa", "title": "Adjustable Autonomy : From Theory to Implementation", "text": "Recent exciting, ambitious applications in agent technology involve agents acting individually or in teams in support of critical activities of individual humans or entire human organizations. Applications range from intelligent homes [13], to \"routine\" organizational coordination [16], to electronic commerce [4], to long-term space missions [12, 6]. These new applications have brought forth an increasing interest in agents\u2019 adjustable autonomy (AA), i.e., in agents\u2019 dynamically adjusting their own level of autonomy based on the situation [8]. In fact, many of these applications will not be deployed unless reliable AA reasoning is a central component. At the heart of AA is the question of whether and when agents should make autonomous decisions and when they should transfer decision-making control to other entities (e.g., human users). Unfortunately, previous work in adjustable autonomy has focused on individual agent-human interactions and the techniques developed fail to scale up to complex heterogeneous organizations. Indeed, as a first step, we focused on a small-scale, but real-world agent-human organization called Electric Elves, where an individual agent and human worked together within a larger multiagent context. Although the application limits the interactions among entities, key weaknesses of previous approaches to adjustable autonomy are readily apparent. In particular, previous approaches to transfer of control are seen to be too rigid, employing one-shot transfers of control that can result in unacceptable coordination failures. Furthermore, the previous approaches ignore potential costs (e.g., from delays) to an agent\u2019s team due to such transfers of control. To remedy such problems, we propose a novel approach to AA, based on the notion of a transfer-of-control strategy. A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from the agent to the user or vice versa) and (ii) actions to change an agent\u2019s pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high quality individual decisions to be made with minimal disruption to the coordination of the team. 
We operationalize such strategies via Markov decision processes (MDPs) which select the optimal strategy given an uncertain environment and costs to individuals and teams. We have developed a general reward function and state representation for such an MDP, to facilitate application of the approach to different domains. We present results from a careful evaluation of this approach, including via its use in our real-world, deployed Electric Elves system."} {"_id": "697754f7e62236f6a2a069134cbc62e3138ac89f", "title": "The Granular Tabu Search and Its Application to the Vehicle-Routing Problem", "text": ""} {"_id": "8d3c794bf910f9630e9516f3ebb1fa4995a5c0c1", "title": "An Item-based Collaborative Filtering Recommendation Algorithm Using Slope One Scheme Smoothing", "text": "Collaborative filtering is one of the most important technologies in electronic commerce. With the development of recommender systems, the numbers of users and items grow rapidly, resulting in the extreme sparsity of the user rating data set. Traditional similarity measure methods work poorly in this situation, degrading the quality of the recommendation system dramatically. Poor quality is one major challenge in collaborative filtering recommender systems, and sparsity of users\u2019 ratings is the major reason for it. To address this issue, an item-based collaborative filtering recommendation algorithm using Slope One scheme smoothing is presented. This approach predicts ratings for items that users have not rated by employing the Slope One scheme, then uses the Pearson correlation similarity measurement to find the target items\u2019 neighbors, and lastly produces the recommendations. The experiments are made on a common data set using different recommender algorithms. The results show that the proposed approach can improve the accuracy of the collaborative filtering recommender system."} {"_id": "2f56548a5a8d849e17e4186393c698e5a735aa91", "title": "Design methodology using inversion coefficient for low-voltage low-power CMOS voltage reference", "text": "This paper presents an analog design methodology, using the selection of the inversion coefficient of MOS devices, to design low-voltage low-power (LVLP) CMOS voltage references. These circuits often work under subthreshold operation. Hence, there is a demand for analog design methods that optimize the sizing process of transistors working in weak and moderate inversion. The advantage of the presented method -- compared with the traditional approach to design circuits -- is the reduction of design cycle time and the minimization of trial-and-error simulations, if the proposed equations are used. As a case study, a LVLP voltage reference based on subthreshold MOSFETs with a supply voltage of 0.7 V was designed for 0.18-\u00b5m CMOS technology."} {"_id": "42dcf7c039bf8bb36b8fa3a658bd66e834128c1b", "title": "Exploring commonalities across participants in the neural representation of objects.", "text": "The question of whether the neural encodings of objects are similar across different people is one of the key questions in cognitive neuroscience. This article examines the commonalities in the internal representation of objects, as measured with fMRI, across individuals in two complementary ways. 
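(A compact sketch of the weighted Slope One predictor used in the item-based CF record above, following the common presentation of the scheme; the dict-of-dicts rating layout is an assumption.)

```python
from collections import defaultdict

def slope_one_predict(ratings, user, item):
    """ratings: dict user -> {item: rating}. Predict `user`'s rating of
    `item` from average rating differentials with co-rated items."""
    diffs = defaultdict(float)   # sum of (r_item - r_j) over co-raters
    counts = defaultdict(int)    # number of co-raters per item j
    for r in ratings.values():
        if item in r:
            for j, rj in r.items():
                if j != item:
                    diffs[j] += r[item] - rj
                    counts[j] += 1
    num = den = 0.0
    for j, rj in ratings[user].items():
        if counts[j]:
            num += (rj + diffs[j] / counts[j]) * counts[j]
            den += counts[j]
    return num / den if den else None  # None if no co-rated evidence
```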
First, we examine the commonalities in the internal representation of objects across people at the level of interobject distances, derived from whole brain fMRI data, and second, at the level of spatially localized anatomical brain regions that contain sufficient information for identification of object categories, without making the assumption that their voxel patterns are spatially matched in a common space. We examine the commonalities in internal representation of objects on 3T fMRI data collected while participants viewed line drawings depicting various tools and dwellings. This exploratory study revealed the extent to which the representation of individual concepts, and their mutual similarity, is shared across participants."} {"_id": "ee654db227dcb7b39d26bec7cc06e2b43b525826", "title": "Hierarchy of fibrillar organization levels in the polytene interphase chromosomes of Chironomus.", "text": ""} {"_id": "579b2962ac567a39742601cafe3fc43cf7a7109c", "title": "Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks", "text": "We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively."} {"_id": "8772e0057d0adc7b1e8d9f09473fdc1de05a8148", "title": "Autonomous Self-Assembly in Swarm-Bots", "text": "In this paper, we discuss the self-assembling capabilities of the swarm-bot, a distributed robotics concept that lies at the intersection between collective and self-reconfigurable robotics. A swarm-bot is comprised of autonomous mobile robots called s-bots. S-bots can either act independently or self-assemble into a swarm-bot by using their grippers. We report on experiments in which we study the process that leads a group of s-bots to self-assemble. In particular, we present results of experiments in which we vary the number of s-bots (up to 16 physical robots), their starting configurations, and the properties of the terrain on which self-assembly takes place. In view of the very successful experimental results, swarm-bot qualifies as the current state of the art in autonomous self-assembly"} {"_id": "926e97d5ce2a6e070f8ec07c5aa7f91d3df90ba0", "title": "Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural Networks", "text": "Deep Neural Networks (DNNs) have been shown to outperform traditional methods in various visual recognition tasks, including Facial Expression Recognition (FER). In spite of efforts made to improve the accuracy of FER systems using DNNs, existing methods are still not generalizable enough in practical applications. This paper proposes a 3D Convolutional Neural Network method for FER in videos. 
This new network architecture consists of 3D Inception-ResNet layers followed by an LSTM unit that together extract the spatial relations within facial images as well as the temporal relations between different frames in the video. Facial landmark points are also used as inputs to our network, which emphasizes the importance of facial components rather than facial regions that may not contribute significantly to generating facial expressions. Our proposed method is evaluated using four publicly available databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods."} {"_id": "41c419e663aa9b301a951bab245a7a6543a3e5f6", "title": "A Java API and Web Service Gateway for wireless M-Bus", "text": "M-Bus systems are used to support remote data exchange with meter units through a network. They provide an easily extendable and cost-effective method to connect many meter devices into one coherent network. Master applications operate as concentrated data collectors and are used to process measured values further. As modern approaches facilitate the possibility of a Smart Grid, M-Bus can be seen as the foundation of this technology. With the current focus on a more effective power grid, Smart Meters and Smart Grids are an important research topic. This bachelor thesis first gives an overview of the M-Bus standard and then presents a Java library and API to access M-Bus devices remotely in a standardized way through Web Services. Integration into common IT applications requires interoperable interfaces which also facilitate automated machine-to-machine communication. The oBIX (Open Building Information Exchange) standard provides such standardized objects and thus is used to create a Web Service gateway for the Java API."} {"_id": "97398e974e6965161ad9b9f85e00fd7a8e7b3d83", "title": "AWEsome: An open-source test platform for airborne wind energy systems", "text": "In this paper we present AWEsome (Airborne Wind Energy Standardized Open-source Model Environment), a test platform for airborne wind energy systems that consists of low-cost hardware and is entirely based on open-source software. It can hence be used without the need of large financial investments, in particular by research groups and startups to acquire first experiences in their flight operations, to test novel control strategies or technical designs, or for usage in public relations. Our system consists of a modified off-the-shelf model aircraft that is controlled by the Pixhawk autopilot hardware and the ArduPilot software for fixed wing aircraft. The aircraft is attached to the ground by a tether. We have implemented new flight modes for the autonomous tethered flight of the aircraft along periodic patterns. We present the principal functionality of our algorithms. We report on first successful tests of these modes in real flights. Project website: awesome.physik.uni-bonn.de."} {"_id": "e93d2e8066087b9756a585b28244e7335229f1e4", "title": "Optimization of Belt-Type Electrostatic Separation of Triboaerodynamically Charged Granular Plastic Mixtures", "text": "Electrostatic separation is frequently employed for the selective sorting of conducting and insulating materials for waste electric and electronic equipment. In a series of recent papers, the authors have presented several novel triboelectrostatic separation installations for the recycling of the main categories of plastics contained in information technology wastes. The aim of the present work is to optimize the design and the operation of such an installation, composed of a belt-type electrostatic separator associated with a propeller-type tribocharging device. The experimental design method is employed for modeling and optimizing the separation of a mixture of high-impact polystyrene and acrylonitrile butadiene styrene granules originating from shredded out-of-use computer cases. A distinct experiment is carried out on three synthetic mixtures of virgin plastic granules: 75% polyamide (PA) + 25% polycarbonate (PC), 50% PA + 50% PC, and 25% PA + 75% PC. The best results are obtained for the mixtures containing equal quantities of the two constituents."} {"_id": "aeddb6d9568171cc9ca0297b2adfd6c8e23fe797", "title": "Transformerless micro-inverter for grid-connected photovoltaic systems", "text": "The leakage currents caused by high-frequency common-mode (CM) voltage have become a major concern in transformerless photovoltaic (PV) inverters. This paper provides a review of dc-ac converters applied to PV systems that can avoid the circulation of leakage currents. Looking for a lower cost and higher reliability solution, a 250 W transformerless PV micro-inverter prototype based on the bipolar full-bridge topology was built and tested. As confirmed by experimental results, this topology is able to maintain the CM voltage constant and prevent the circulation of CM currents through the circuit."} {"_id": "08f991c35f2b3d67cfe219f19e4e791b049f398d", "title": "GQ ( \u03bb ) : A general gradient algorithm for temporal-difference prediction learning with eligibility traces", "text": "A new family of gradient temporal-difference learning algorithms has recently been introduced by Sutton, Maei and others in which function approximation is much more straightforward. In this paper, we introduce the GQ(\u03bb) algorithm, which can be seen as an extension of that work to a more general setting including eligibility traces and off-policy learning of temporally abstract predictions. These extensions bring us closer to the ultimate goal of this work\u2014the development of a universal prediction learning algorithm suitable for learning experientially grounded knowledge of the world. Eligibility traces are essential to this goal because they bridge the temporal gaps in cause and effect when experience is processed at a temporally fine resolution. Temporally abstract predictions are also essential as the means for representing abstract, higher-level knowledge about courses of action, or options. GQ(\u03bb) can be thought of as an extension of Q-learning. We extend existing convergence results for policy evaluation to this setting and carry out a forward-view/backward-view analysis to derive and prove the validity of the new algorithm. 
Introduction One of the main challenges in artificial intelligence (AI) is to connect low-level experience to high-level representations (grounded world knowledge). Low-level experience refers to the rich signals received back and forth between the agent and the world. Recent theoretical developments in temporal-difference learning, combined with mathematical ideas developed for temporally abstract options, known as intra-option learning, can be used to address this challenge (Sutton, 2009). Intra-option learning (Sutton, Precup, and Singh, 1998) is seen as a potential method for temporal abstraction in reinforcement learning. Intra-option learning is a type of off-policy learning. Off-policy learning refers to learning about a target policy while following another policy, known as the behavior policy. Off-policy learning arises in Q-learning, where the target policy is a greedy optimal policy while the behavior policy is exploratory. It is also needed for intra-option learning. Intra-option methods look inside options and allow an AI agent to learn about multiple different options simultaneously from a single stream of received data. An option refers to a temporally extended course of actions with a termination condition. Options are ubiquitous in our everyday life. For example, to go hiking, we need to consider and evaluate multiple options such as transportation options to the hiking trail. Each option includes a course of primitive actions and is only applicable in particular states. The main feature of intra-option learning is its ability to predict the consequences of each option policy without executing it, while data is received from a different policy. Temporal difference (TD) methods in reinforcement learning are considered powerful techniques for prediction problems. In this paper, we consider predictions always in the form of answers to questions. Questions are like \u201cIf I follow this trail, would I see a creek?\u201d The answers to such questions are in the form of a single scalar (value function) that tells us about the expected future consequences given the current state. In general, due to the large number of states, it is not feasible to compute the exact value of each state entry. One of the key features of TD methods is their ability to generalize predictions to states that may not have been visited; this is known as function approximation. Recently, Sutton et al. (2009b) and Maei et al. (2009) introduced a new family of gradient TD methods in which function approximation is much more straightforward than in conventional methods. Prior to their work, the existing classical TD algorithms (e.g., TD(\u03bb) and Q-learning) with function approximation could become unstable and diverge (Baird, 1995; Tsitsiklis and Van Roy, 1997). In this paper, we extend their work to a more general setting that includes off-policy learning of temporally abstract predictions and eligibility traces. Temporally abstract predictions are essential for representing higher-level knowledge about courses of action, or options (Sutton et al., 1998). Eligibility traces bridge the temporal gaps when experience is processed at a temporally fine resolution. In this paper, we introduce the GQ(\u03bb) algorithm, which can be thought of as an extension to Q-learning (Watkins and Dayan, 1989), one of the most popular off-policy learning algorithms in reinforcement learning. Our algorithm incorporates gradient-descent ideas originally developed by Sutton et al. 
(2009a,b), for option conditional predictions with varying eligibility traces. We extend existing convergence results for policy evaluation to this setting and carry out a forward-view/backward-view analysis to prove the validity of the new algorithm. The organization of the paper is as follows: First, we describe the problem setting and define our notation. Then we introduce the GQ(\u03bb) algorithm and describe how to use it. In the next sections we provide the derivation of the algorithm and carry out an analytical analysis of the equivalence of the TD forward-view/backward-view. We finish the paper with the convergence proof and a conclusion section. Notation and background We consider the problem of policy evaluation in a finite state-action Markov Decision Process (MDP). Under standard conditions, however, our results can be extended to MDPs with infinitely many state\u2013action pairs. We use a standard reinforcement learning (RL) framework. In this setting, data is obtained from a continually evolving MDP with states s_t \u2208 S, actions a_t \u2208 A, and rewards r_t \u2208 \u211d, for t = 1, 2, . . ., with each state and reward as a function of the preceding state and action. Actions are chosen according to the behavior policy b, which is assumed fixed and exciting, b(s, a) > 0, \u2200s, a. We consider the transition probabilities between state\u2013action pairs, and for simplicity we assume there is a finite number N of state\u2013action pairs. Suppose the agent finds itself at time t in a state\u2013action pair s_t, a_t. The agent would like its answer at that time to tell it something about the future sequence s_{t+1}, a_{t+1}, . . . , s_{t+k} if actions from t + 1 on were taken according to the option until it terminated at time t + k. The option policy is denoted \u03c0 : S \u00d7 A \u2192 [0, 1] and its termination condition is denoted \u03b2 : S \u2192 [0, 1]. The answer is always in the form of a single number, and of course we have to be more specific about what we are trying to predict. There are two common cases: 1) we are trying to predict the outcome of the option; we want to know the expected value of some function of the state at the time the option terminates. We call this function the outcome target function, and denote it z : S \u2192 \u211d; 2) we are trying to predict the transient, that is, what happens during the option rather than at its end. The most common thing to predict about the transient is the total or discounted reward during the option. We denote the reward function r : S \u00d7 A \u2192 \u211d. Finally, the answer could conceivably be a mixture of both a transient and an outcome. Here we will present the algorithm for answering questions with both an outcome part z and a transient part r, with the two added together. In the common case where one wants only one of the two, the other is set to zero. Now we can start to state the goal of learning more precisely. In particular, we would like our answer to be equal to the expected value of the outcome target function at termination plus the cumulative sum of the transient reward function along the way: Q^\u03c0(s_t, a_t) \u2261 E[ r_{t+1} + \u03b3 r_{t+2} + \u00b7\u00b7\u00b7 + \u03b3^{k-1} r_{t+k} + \u03b3^{k-1} z_{t+k} | \u03c0, \u03b2 ] (1), where \u03b3 \u2208 (0, 1] is the discount factor and Q^\u03c0(s, a) denotes the action-value function that evaluates policy \u03c0 given state\u2013action pair (s, a). To simplify the notation, from now on, we drop the superscript \u03c0 on action values. 
In many problems the number of state-action pairs is large and therefore it is not feasible to compute the action values for each state-action entry. Therefore, we need to approximate the action values through generalization techniques. Here, we use linear function approximation; that is, the answer to a question is always formed linearly as Q_\u03b8(s, a) = \u03b8^\u22a4\u03c6(s, a) \u2248 Q(s, a) for all s \u2208 S and a \u2208 A, where \u03b8 \u2208 \u211d^n is a learned weight vector and \u03c6(s, a) \u2208 \u211d^n indicates a state\u2013action feature vector. The goal is to learn the parameter vector \u03b8 through a learning method such as TD learning. The above (1) describes the target in a Monte Carlo sense, but of course we want to include the possibility of temporal-difference learning, one of the most widely used techniques in reinforcement learning. To do this, we provide an eligibility-trace function \u03bb : S \u2192 [0, 1] as described in Sutton and Barto (1998). We let the eligibility-trace function \u03bb vary over different states. In the next section, first we introduce GQ(\u03bb), a general temporal-difference learning algorithm that is stable under off-policy training, and show how to use it. Then in later sections we provide the derivation of the algorithm and the convergence proof. The GQ(\u03bb) algorithm In this section we introduce the GQ(\u03bb) algorithm for off-policy learning about the outcomes and transients of options, in other words, intra-option GQ(\u03bb) for learning the answer to a question chosen from a wide (possibly universal) class of option-conditional predictive questions. To specify the question one provides four functions: \u03c0 and \u03b2, for the option, and z and r, for the target functions. To specify how the answers will be formed one provides their functional form (here in linear form), the feature vectors \u03c6(s, a) for all state\u2013action pairs, and the eligibility-trace function \u03bb. The discount factor \u03b3 can be taken to be 1, and thus ignored; the same effect as discounting can be achieved through the choice of \u03b2. Now, we specify the GQ(\u03bb) algorithm as follows: The weight vector \u03b8 \u2208 \u211d^n is initialized arbitrarily. The secondary weight vector w \u2208 \u211d^n is initialized to zero."} {"_id": "1721801db3a467adf7a10c69f21c21896a80c6dd", "title": "Efficient Program Analyses Using Deductive and Semantic Methodologies", "text": "Program analysis is the process of gathering deeper insights about a source code and analysing them to resolve software problems of arbitrary complexity. The key challenge in program analysis is to keep it fast, precise and straightforward. This research focuses on three key objectives to achieve an efficient program analysis: (i) expressive data representation, (ii) optimised data structure and (iii) fast data processing mechanisms. State-of-the-art technologies such as the Resource Description Framework (RDF) as the data representation format, triplestores as the storage & processing layer, and Datalog to represent program analysis rules are considered in our research, along with a binary decision diagram (BDD) to be embedded in the triplestore. 
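(Returning to the GQ(\u03bb) record above: one linear gradient-TD update in the spirit of GQ(\u03bb) can be sketched as below, cf. Maei & Sutton. The trace recursion, step sizes, and the omission of option termination are simplifications, so treat this as an illustration rather than the paper's exact algorithm.)

```python
import numpy as np

def gq_lambda_step(theta, w, e, phi, phi_bar_next, reward,
                   gamma, lam, rho, alpha=0.01, beta=0.001):
    """One simplified GQ(lambda)-style update with linear features.
    phi: features of (s_t, a_t); phi_bar_next: expected next features under
    the target policy; rho: importance-sampling ratio pi/b; e: trace."""
    delta = reward + gamma * theta @ phi_bar_next - theta @ phi  # TD error
    e = phi + gamma * lam * rho * e                              # trace
    theta = theta + alpha * (delta * e
                             - gamma * (1 - lam) * (w @ e) * phi_bar_next)
    w = w + beta * (delta * e - (w @ phi) * phi)  # secondary weights
    return theta, w, e
```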
Additionally, an ontology is being designed to standardise the definitions of concepts and the representation of knowledge in the program analysis domain."} {"_id": "483d6d11a8602166ae33898c8f00f44d6ddf7bf6", "title": "Neuroimaging: Decoding mental states from brain activity in humans", "text": "Recent advances in human neuroimaging have shown that it is possible to accurately decode a person's conscious experience based only on non-invasive measurements of their brain activity. Such 'brain reading' has mostly been studied in the domain of visual perception, where it helps reveal the way in which individual experiences are encoded in the human brain. The same approach can also be extended to other types of mental state, such as covert attitudes and lie detection. Such applications raise important ethical issues concerning the privacy of personal thought."} {"_id": "54e7e6348fc8eb27dd6c34e0afbe8881eeb0debd", "title": "Content-based multimedia information retrieval: State of the art and challenges", "text": "Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100+ recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future."} {"_id": "1064641f13e2309ad54dd05ed9ca5d0c09e82376", "title": "Machine learning classifiers and fMRI: A tutorial overview", "text": "Interpreting brain image experiments requires analysis of complex, multivariate data. In recent years, one analysis approach that has grown in popularity is the use of machine learning algorithms to train classifiers to decode stimuli, mental states, behaviours and other variables of interest from fMRI data and thereby show the data contain information about them. In this tutorial overview we review some of the key choices faced in using this approach as well as how to derive statistically significant results, illustrating each point from a case study. Furthermore, we show how, in addition to answering the question of 'is there information about a variable of interest' (pattern discrimination), classifiers can be used to tackle other classes of question, namely 'where is the information' (pattern localization) and 'how is that information encoded' (pattern characterization)."} {"_id": "fbe829a9ee818b9fe919bad521f0502c6770df89", "title": "Evidence in Management and Organizational Science : Assembling the Field \u2019 s Full Weight of Scientific Knowledge Through Syntheses", "text": "This chapter advocates the good scientific practice of systematic research syntheses in Management and Organizational Science (MOS). A research synthesis is the systematic accumulation, analysis and reflective interpretation of the full body of relevant empirical evidence related to a question. It is the critical first step in effective use of scientific evidence. Synthesis is not a conventional literature review. Literature reviews are often position papers, cherry-picking studies to advocate a point of view. 
Instead, syntheses systematically identify where research findings are clear (and where they aren't), a key first step to establishing the conclusions science supports. Syntheses are also important for identifying contested findings and productive lines for future research. Uses of MOS evidence, that is, the motives for undertaking a research synthesis, include scientific discovery and explanation, improved management practice guidelines, and formulating public policy. We identify six criteria for establishing the evidentiary value of a body of primary studies in MOS. We then pinpoint the stumbling blocks currently keeping the field from making effective use of its ever-expanding base of empirical studies. Finally, this chapter outlines (a) an approach to research synthesis suitable to the domain of MOS; and (b) supporting practices to make synthesis a collective MOS project. Evidence in Management and Organizational Science: It is the nature of the object that determines the form of its possible science. (Bhaskar, 1998, p. 3) Uncertain knowledge is better than ignorance. (Mitchell, 2000, p. 9) This chapter is motivated by the failure of Management and Organizational Science (MOS) to date to make full effective use of its available research evidence. Failure to make effective use of scientific evidence is a problem both management scholars and practitioners face. Effective use of evidence, as we employ the term here, means to assemble and interpret a body of primary studies relevant to a question of fact, and then take appropriate action based upon the conclusions drawn. For science, appropriate action might be to direct subsequent research efforts elsewhere if the science is clear, or to recommend a new tack if findings are inconclusive. For practice, appropriate action might begin with a summary of key findings to share with educators, thought leaders, consultants, and the broader practice community. Unfortunately, bodies of evidence in MOS are seldom assembled or interpreted in the systematic fashion needed to permit their confident use. A systematic review of the full body of evidence is the key first step in formulating a science-based conclusion. As a consequence, at present neither the MOS scholar nor the practitioner can readily claim to be well-informed. This lapse has many causes. Two are central to our failure to use MOS evidence well: (1) overvaluing novelty to the detriment of accumulating convergent findings; and (2) the general absence of systematic research syntheses. These two causes are intertwined in that, as we shall show, use of research syntheses ties closely with how a field gauges the value of its research. This chapter's subject, the systematic research synthesis, is not to be confused with a conventional literature review, its less systematic, non-representative counterpart. Systematic research syntheses assemble, analyze and interpret a comprehensive body of evidence in a highly reflective fashion according to six evidentiary criteria we describe. The why, what, and how of research synthesis in MOS is this chapter's focus. The explosion of management research since World War II has created knowledge products at a rate far outpacing our current capacity for recall, sense-making, and use. In all likelihood, MOS's knowledge base will continue to expand. We estimate over 200 peer-reviewed journals currently publish MOS research.
These diverse outlets reflect the fact that MOS is not a discipline; it is an area of inter-related research activities cutting across numerous disciplines and subfields. The area's expansion translates into a body of knowledge that is increasingly fragmented (Rousseau, 1997), transdisciplinary (Whitley, 2000), and interdependent with advancements in other social sciences (Tranfield, Denyer & Smart, 2003). The complicated state of MOS research makes it tough to know what we know, especially as specialization spawns research communities that often don't and sometimes can't talk with each other."} {"_id": "2902e0a4b12cf8269bb32ef6a4ebb3f054cd087e", "title": "A Bridging Framework for Model Optimization and Deep Propagation", "text": "Optimizing task-related mathematical models is one of the most fundamental methodologies in the statistics and learning areas. However, generically designed schematic iterations may be unable to investigate the complex data distributions found in real-world applications. Recently, training deep propagations (i.e., networks) has gained promising performance in some particular tasks. Unfortunately, existing networks are often built in heuristic manners and thus lack principled interpretations and solid theoretical support. In this work, we provide a new paradigm, named Propagation and Optimization based Deep Model (PODM), to bridge the gaps between these different mechanisms (i.e., model optimization and deep propagation). On the one hand, we utilize PODM as a deeply trained solver for model optimization. Different from existing network-based iterations, which often lack theoretical investigation, we provide strict convergence analysis for PODM in the challenging nonconvex and nonsmooth scenarios. On the other hand, by relaxing the model constraints and performing end-to-end training, we also develop a PODM-based strategy to integrate domain knowledge (formulated as models) and real data distributions (learned by networks), resulting in a generic ensemble framework for challenging real-world applications. Extensive experiments verify our theoretical results and demonstrate the superiority of PODM against state-of-the-art approaches."} {"_id": "6f6f1714af8551f5f18d419f00c8d3411802ee7a", "title": "Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations", "text": "CANDECOMP/PARAFAC (CP) has found numerous applications in a wide variety of areas such as chemometrics, telecommunication, data mining, neuroscience, and separated representations. For an order-N tensor, most CP algorithms can be computationally demanding due to the computation of gradients, which involve products between tensor unfoldings and Khatri-Rao products of all factor matrices except one. These products constitute the largest workload in most CP algorithms. In this paper, we propose a fast method to deal with this issue. The method also reduces the extra memory requirements of CP algorithms. As a result, we can accelerate the standard alternating CP algorithms 20-30 times for order-5 and order-6 tensors, and even higher ratios can be obtained for higher order tensors (e.g., N ≥ 10).
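To make the products described in the CP abstract above concrete (tensor unfoldings times Khatri-Rao products of all factor matrices except one), the following Python/NumPy sketch shows one plain ALS sweep for a 3-way CP model. This is the textbook baseline whose cost the paper reduces, not the authors' fast method; function names are assumptions.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: (I x R), (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als_sweep(X, A, B, C):
    """One ALS sweep for a 3-way CP model X ~ [[A, B, C]] with R components.

    Each mode update multiplies an unfolding of X by the Khatri-Rao product
    of the other two factors -- the dominant cost discussed in the abstract.
    """
    I, J, K = X.shape
    A = X.reshape(I, J * K) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```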
The proposed method is more efficient than the state-of-the-art ALS algorithm that operates on two modes at a time (ALSo2) in the Eigenvector PLS toolbox, especially for tensors of order N ≥ 5 and high rank."} {"_id": "ca8cc908698f1e215f73cf892bd2dd7ad420e787", "title": "Planning With Agents : An Efficient Approach Using Hierarchical Dynamic Decision Networks", "text": "To be useful in solving real world problems, agents need to be able to act in environments in which it may not be possible to be completely aware of the current state and where actions do not always work as planned. Additional complexity is added to the problem when one considers groups of agents working together. By casting the agent planning problem as a partially observable Markov decision problem (POMDP), optimal policies can be generated for partially observable and stochastic environments. Exact solutions, however, are notoriously difficult to find for problems of a realistic nature. We introduce a hierarchical decision network-based planning algorithm that can generate high quality plans during execution while demonstrating significant time savings. We also discuss how this approach is particularly applicable to planning in a multiagent environment as compared to other POMDP-based planning algorithms. We present experimental results comparing our algorithm with results obtained by current POMDP and hierarchical POMDP (HPOMDP) methods."} {"_id": "318a81acdd15a0ab2f706b5f53ee9d4d5d86237f", "title": "Multi-label learning: a review of the state of the art and ongoing research", "text": "Multi-label learning is quite a recent supervised learning paradigm. Owing to its capability to improve performance in problems where a pattern may have more than one associated class, it has attracted the attention of researchers, producing an increasing number of publications. The paper presents an up-to-date overview of multi-label learning with the aim of sorting and describing the main approaches developed to date. The formal definition of the paradigm, the analysis of its impact on the literature, its main applications, works developed and ongoing research are presented."} {"_id": "edd83d028b2ac0a6dce2182a2d9edaac2f573c5c", "title": "Occlusion-capable multiview volumetric three-dimensional display.", "text": "Volumetric 3D displays are frequently purported to lack the ability to reconstruct scenes with viewer-position-dependent effects such as occlusion. To counter these claims, a swept-screen 198-view horizontal-parallax-only 3D display is reported here that is capable of viewer-position-dependent effects. A digital projector illuminates a rotating vertical diffuser with a series of multiperspective 768 × 768 pixel renderings of a 3D scene. Evidence of near-far object occlusion is reported. The aggregate virtual screen surface for a stationary observer is described, as are guidelines to construct a full-parallax system and the theoretical ability of the present system to project imagery outside of the volume swept by the screen."} {"_id": "9adf319c9aadfbd282244cfda59a140ce0bb7df6", "title": "Ranking Attacks Based on Vulnerability Analysis", "text": "Now that multiple known attacks can affect one software product at the same time, it is necessary to rank and prioritize those attacks in order to establish a better defense. The purpose of this paper is to provide a set of security metrics to rank attacks based on vulnerability analysis.
The vulnerability information is retrieved from a vulnerability management ontology, which integrates commonly used standards like CVE, CWE, CVSS, and CAPEC. Among the benefits of ranking attacks through the method proposed here are: a more effective mitigation or prevention of attack patterns against systems, a better foundation for testing software products, and a better understanding of vulnerabilities and attacks."} {"_id": "d64a414d730e5effd9f3d211bf2ee4aad2fb6001", "title": "Stability analysis of negative impedance converter", "text": "Negative impedance converters (NIC) have received much attention as a means of overcoming the restriction of the antenna gain-bandwidth tradeoff relationship. However, a significant problem with NICs is the potential instability of unwanted oscillation due to the presence of positive feedback. To solve this problem, we propose a NIC circuit technique which is stable and has wideband negative impedance characteristics."} {"_id": "459ef4fbe6b83247515e1eb9593fb30c4e33a7cc", "title": "Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy", "text": "The regulatory, ethical, and legal barriers imposed on medical robots necessitate careful consideration of different levels of autonomy, as well as the context for use."} {"_id": "81d1b5584befd407111958a605c5fa88948900cb", "title": "Activity recognition from acceleration data based on discrete cosine transform and SVM", "text": "This paper develops a high-accuracy human activity recognition system based on a single tri-axial accelerometer for use in a naturalistic environment. The system exploits the discrete cosine transform (DCT), principal component analysis (PCA) and support vector machines (SVM) to classify different human activities. First, effective features are extracted from the accelerometer data using the DCT. Next, the feature dimension is reduced by PCA in the DCT domain. After applying PCA, the most invariant and discriminating information for recognition is retained. Finally, multi-class support vector machines are adopted to distinguish between different human activities. Experimental results show that the proposed system achieves a best accuracy of 97.51%, which is better than other approaches."} {"_id": "1932ebba4012827bf6172bc03a698ec86f38cfe9", "title": "Language Problems and ADHD Symptoms: How Specific Are the Links?", "text": "Symptoms of inattention and hyperactivity frequently co-occur with language difficulties in both clinical and community samples. We explore the specificity and strength of these associations in a heterogeneous sample of 254 children aged 5 to 15 years identified by education and health professionals as having problems with attention, learning and/or memory. Parents/carers rated pragmatic and structural communication skills and behaviour, and children completed standardised assessments of reading, spelling, vocabulary, and phonological awareness. A single dimension of behavioural difficulties including both hyperactivity and inattention captured behaviour problems. This was strongly and negatively associated with pragmatic communication skills. There was less evidence for a relationship between behaviour and language structure: behaviour ratings were more weakly associated with the use of structural language in communication, and there were no links with direct measures of literacy.
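The DCT, PCA and SVM stages of the activity-recognition abstract above can be sketched roughly as follows, assuming scikit-learn and SciPy are available; the window length, the number of retained coefficients and components, and the kernel are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dct_features(windows, n_coeffs=64):
    """Keep the leading DCT coefficients of each tri-axial acceleration window.

    windows: array of shape (n_samples, window_len, 3).
    """
    coeffs = dct(windows, axis=1, norm='ortho')[:, :n_coeffs, :]
    return coeffs.reshape(len(windows), -1)

# Hypothetical usage: X_train/X_test are raw windows, y_* the activity labels.
# clf = make_pipeline(PCA(n_components=30), SVC(kernel='rbf'))
# clf.fit(dct_features(X_train), y_train)
# accuracy = clf.score(dct_features(X_test), y_test)
```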
These behaviour problems and pragmatic communication difficulties co-occur in this sample, but impairments in the more formal use of language that impact on literacy and structural communication skills are tied less strongly to behavioural difficulties. One interpretation is that impairments in executive function give rise to both behavioural and social communication problems, while additional or alternative deficits in other cognitive abilities impact on the development of structural language skills."} {"_id": "d6eb9e15aa32938675ea42c76f8c818261449577", "title": "Soft tissue cephalometric analysis: diagnosis and treatment planning of dentofacial deformity.", "text": "This article presents a new soft tissue cephalometric analysis tool. This analysis may be used by the orthodontist and surgeon as an aid in diagnosis and treatment planning. The analysis is a radiographic instrument that was developed directly from the philosophy expressed in Arnett and Bergman, \"Facial keys to orthodontic diagnosis and treatment planning, Parts I and II\" (Am J Orthod Dentofacial Orthop 1993;103:299-312 and 395-411). The novelty of this approach, as with the \"Facial Keys\" articles, is an emphasis on soft tissue facial measurement."} {"_id": "8bde3d5f265759e3cdcff80d55e5a0a6bb1b74d3", "title": "A DTN-Based Sensor Data Gathering for Agricultural Applications", "text": "This paper presents our field experience in data collection from remote sensors. By letting tractors, farmers, and sensors carry short-range radio communication devices with delay-disruption tolerant networking (DTN), we can collect data from those sensors into our central database. Although several implementations have been made with cellular phones or mesh networks in the past, DTN-based systems for such applications are still underexplored. The main objective of this paper is to present our practical implementation and experiences in DTN-based data collection from remote sensors. The software, which we have developed for this research, has a footprint of about 50 kbytes, much smaller than any other DTN implementation. We carried out an experiment with 39 DTN nodes at the University of Tokyo assuming an agricultural scenario. They achieved a 99.8% success rate for data gathering with moderate latency, showing sufficient usefulness in data granularity."} {"_id": "e16f74df073ae7c8a29e07ed824a438580ea8534", "title": "Detection of Unhealthy Region of plant leaves using Neural Network", "text": "A leaf is an organ of a vascular plant and is the principal lateral appendage of the stem. Each leaf has a set of features that differentiate it from other leaves, such as margin and shape. This work proposes a comparison of supervised plant leaf classification using different approaches, based on different representations of the leaves and the chosen algorithm. For the representation of leaves, we describe each leaf by a fine-scale margin feature histogram, by a Centroid Contour Distance Curve shape signature, or by an interior texture feature histogram, each in a 64-element vector; we then tried different combinations of these features to optimize results. We classified the obtained vectors and evaluated the classification using cross-validation. The obtained results are very interesting and show the importance of each feature. The work addresses the classification of plant leaf images with biometric features. Traditionally, trained taxonomists perform this process by following various tasks.
Taxonomists usually classify plants based on flowering and associated phenomena. It was found that this process was time consuming and difficult. Biometric features of plant leaves, such as venation, make this classification easier. Leaf biometric features are analyzed using computer-based methods such as morphological feature analysis, artificial neural networks and Naive Bayes classifiers. A KNN classification model takes the leaf venation morphological features as input and classifies them into four different species. Classification based on leaf venation achieved 96.53% accuracy in training the model and 91% accuracy in testing to classify the leaf images. Keywords: Artificial neural network, K-NN classification, Leaf venation pattern, Morphological features"} {"_id": "65616754cbd08935c0f17c2f34258670b61a5fc9", "title": "Earlier Identification of Children with Autism Spectrum Disorder: An Automatic Vocalisation-Based Approach", "text": "Autism spectrum disorder (ASD) is a neurodevelopmental disorder usually diagnosed in or beyond toddlerhood. ASD is defined by repetitive and restricted behaviours, and deficits in social communication. The early speech-language development of individuals with ASD has been characterised as delayed. However, little is known about ASD-related characteristics of pre-linguistic vocalisations at the feature level. In this study, we examined pre-linguistic vocalisations of 10-month-old individuals later diagnosed with ASD and a matched control group of typically developing individuals (N = 20). We segmented 684 vocalisations from parent-child interaction recordings. All vocalisations were annotated and signal-analytically decomposed. We analysed ASD-related vocalisation specificities on the basis of a standardised set (eGeMAPS) of 88 acoustic features selected for clinical speech analysis applications. 54 features showed evidence for a differentiation between vocalisations of individuals later diagnosed with ASD and controls. In addition, we evaluated the feasibility of automated, vocalisation-based identification of individuals later diagnosed with ASD. We compared linear kernel support vector machines and a 1-layer bidirectional long short-term memory neural network. Both classification approaches achieved an accuracy of 75% for subject-wise identification in a subject-independent 3-fold cross-validation scheme. Our promising results may be an important contribution en route to facilitating earlier identification of ASD."} {"_id": "d0f22d6e7b03d7fa6d958c71bb14438ba3af52be", "title": "High sigma measurement of random threshold voltage variation in 14nm Logic FinFET technology", "text": "Random variation of threshold voltage (Vt) in MOSFETs plays a central role in determining the minimum operating voltage of products in a given process technology. Properly characterizing Vt variation requires a large volume of measurements on minimum-size devices to understand the high-sigma behavior. At the same time, a rapid measurement approach is required to keep the total measurement time practical. Here we describe a new test structure and measurement approach that enables practical characterization of Vt distributions to high sigma, and its application to 14nm Logic FinFET technology.
We show that both NMOS and PMOS single-fin devices have very low random Vt variation of 19 mV and 24 mV respectively, normally distributed out to ±5σ."} {"_id": "480d0ca73e0a7acd5f48e8e0139776ad96e9f57b", "title": "Medicinal and pharmaceutical uses of seaweed natural products: A review", "text": "In the last three decades the discovery of metabolites with biological activities from macroalgae has increased significantly. However, despite the intense research effort by academic and corporate institutions, very few products with real potential have been identified or developed. Based on Silverplatter MEDLINE and Aquatic Biology, Aquaculture & Fisheries Resources databases, the literature was searched for natural products from marine macroalgae in the Rhodophyta, Phaeophyta and Chlorophyta with biological and pharmacological activity. Substances that currently receive most attention from pharmaceutical companies for use in drug development, or from researchers in the field of medicine-related research, include: sulphated polysaccharides as antiviral substances, halogenated furanones from Delisea pulchra as antifouling compounds, and kahalalide F from a species of Bryopsis as a possible treatment of lung cancer, tumours and AIDS. Other substances such as macroalgal lectins, fucoidans, kainoids and aplysiatoxins are routinely used in biomedical research and a multitude of other substances have known biological activities. The potential pharmaceutical, medicinal and research applications of these compounds are discussed."} {"_id": "b4b44a6bca2cc9d4c5efb41b14a6585fc4872637", "title": "Black cattle body shape and temperature measurement using thermography and KINECT sensor", "text": "A black cattle body shape and temperature measurement system is introduced. It is important to evaluate the quality of Japanese black cattle periodically during their growth process. Not only the weight and size of the cattle, but also their posture, shape, and temperature need to be tracked as primary evaluation criteria. In the present study, a KINECT sensor and a thermal camera obtain the body shape and its temperature. The whole system is calibrated to operate in a common coordinate system. Point cloud data are obtained from different angles and reconstructed in a computer. The thermal data are captured as well. Point cloud data and thermal information are combined by considering the orientation of the cow. The collected information is used to evaluate and estimate cattle conditions."} {"_id": "5dca5aa024f513801a53d9738161b8a01730d395", "title": "Adaptive Mobile Robot Navigation and Mapping", "text": "The task of building a map of an unknown environment and concurrently using that map to navigate is a central problem in mobile robotics research. This paper addresses the problem of how to perform concurrent mapping and localization (CML) adaptively using sonar. Stochastic mapping is a feature-based approach to CML that generalizes the extended Kalman filter to incorporate vehicle localization and environmental mapping. We describe an implementation of stochastic mapping that uses a delayed nearest-neighbor data association strategy to initialize new features into the map, match measurements to map features, and delete out-of-date features. We introduce a metric for adaptive sensing which is defined in terms of Fisher information and represents the sum of the areas of the error ellipses of the vehicle and feature estimates in the map.
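One plausible reading of the adaptive-sensing metric just described, assuming 2-D position estimates, is the sum of confidence-ellipse areas computed from blocks of the stochastic-map covariance; the sketch below is an interpretation, not the authors' code, and all names are assumptions.

```python
import numpy as np

def ellipse_area(P, chi2=5.991):
    """Area of the confidence ellipse of a 2x2 position covariance P.

    The ellipse axes scale with the square roots of the eigenvalues of P,
    so the area is pi * chi2 * sqrt(det(P)); chi2 = 5.991 gives the 95%
    region for two degrees of freedom.
    """
    return np.pi * chi2 * np.sqrt(np.linalg.det(P))

def adaptive_sensing_metric(P_map, blocks):
    """Sum of error-ellipse areas over the vehicle and all feature estimates.

    P_map  -- full stochastic-map covariance matrix
    blocks -- list of index pairs, one pair of row/column indices per
              2-D estimate (vehicle first, then each feature)
    """
    return sum(ellipse_area(P_map[np.ix_(b, b)]) for b in blocks)
```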
Predicted sensor readings and expected dead-reckoning errors are used to estimate the metric for each potential action of the robot, and the action which yields the lowest cost (i.e., the maximum information) is selected. This technique is demonstrated via simulations, in-air sonar experiments, and underwater sonar experiments. Results are shown for 1) adaptive control of motion and 2) adaptive control of motion and scanning. The vehicle tends to selectively explore different objects in the environment. The performance of this adaptive algorithm is shown to be superior to straight-line motion and random motion."} {"_id": "79bb020994c837be3c0ea5746a8e7a7c864c9170", "title": "A comparison of track-to-track fusion algorithms for automotive sensor fusion", "text": "In exteroceptive automotive sensor fusion, sensor data are usually only available as processed, tracked object data and not as raw sensor data. Applying a Kalman filter to such data leads to additional delays and generally underestimates the fused objects' covariance due to temporal correlations of individual sensor data as well as inter-sensor correlations. We compare the performance of a standard asynchronous Kalman filter applied to tracked sensor data to several algorithms for the track-to-track fusion of sensor objects of unknown correlation, namely covariance union, covariance intersection, and use of cross-covariance. For the simulation setup used in this paper, covariance intersection and use of cross-covariance turn out to yield significantly lower errors than a Kalman filter at a comparable computational load."} {"_id": "add441844e5720d85058baf77c845a831d9d2a47", "title": "A Reverse Approach to Named Entity Extraction and Linking in Microposts", "text": "In this paper, we present a pipeline for named entity extraction and linking that is designed specifically for noisy, grammatically inconsistent domains where traditional named entity techniques perform poorly. Our approach leverages a large knowledge base to improve entity recognition, while maintaining the use of traditional NER to identify mentions that are not co-referent with any entities in the knowledge base."} {"_id": "a053423b39a990c61a4805d189c75a89f8986976", "title": "The Study of a Dual-Mode Ring Oscillator", "text": "An analytical investigation of a dual-mode ring oscillator is presented. The ring oscillator is designed in a 0.18-μm CMOS technology with differential eight-stage delay cells employing auxiliary input devices. With a proper startup control, the oscillator operates in two different modes covering two different frequency bands. A nonlinear model, along with the linearization method, is used to obtain the transient and steady-state behaviors of the dual-mode ring oscillator. The analytical derivations are verified through HSPICE simulation. The oscillator operates in two frequency bands, from 2-5 GHz and from 0.1-2 GHz, respectively."} {"_id": "87094ebab924f893160716021a8a5bc645b3ff1f", "title": "Deep Learning for Wind Speed Forecasting in Northeastern Region of Brazil", "text": "Deep Learning is one of the latest approaches in the field of artificial neural networks. Since they were first proposed in mid-2006, Deep Learning models have obtained state-of-the-art results in some classification and pattern recognition problems. However, such models have been little used in time series forecasting.
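Referring back to the track-to-track fusion abstract above, covariance intersection (one of the compared algorithms) can be sketched as follows; this is the standard textbook formulation for fusing estimates of unknown cross-correlation, not the paper's specific implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates whose cross-correlation is unknown.

    Covariance intersection forms P^-1 = w*P1^-1 + (1 - w)*P2^-1 and picks
    the weight w that minimizes the determinant (volume) of the fused P.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

    def fused_det(w):
        return np.linalg.det(np.linalg.inv(w * I1 + (1.0 - w) * I2))

    w = minimize_scalar(fused_det, bounds=(1e-6, 1.0 - 1e-6), method='bounded').x
    P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
    return x, P
```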
This work aims to investigate the use of some of these architectures in this kind of problem, specifically in predicting the hourly average wind speed in the Northeastern region of Brazil. The results show that Deep Learning offers a good alternative for performing this task, improving on some results of previous works."} {"_id": "3699fbf9468dadd3ebb1069bbeb5a359cd725f3e", "title": "Characterizing large-scale use of a direct manipulation application in the wild", "text": "Examining large-scale, long-term application use is critical to understanding how an application meets the needs of its user community. However, there have been few published analyses of long-term use of desktop applications, and none that have examined applications that support creating and modifying content using direct manipulation. In this paper, we present an analysis of 2 years of usage data from an instrumented version of the GNU Image Manipulation Program, including data from over 200 users. In the course of our analysis, we show that previous findings concerning the sparseness of command use and the idiosyncrasy of users' command vocabularies extend to a new domain and interaction style. These findings motivate continued research in adaptive and mixed-initiative interfaces. We also describe the novel application of a clustering technique to characterize a user community's higher-level tasks from low-level logging data."} {"_id": "5eb1e4bb87b0d99d62f171f1eede90c98bf266ab", "title": "On traveling path and related problems for a mobile station in a rechargeable sensor network", "text": "Wireless power transfer is a promising technology to fundamentally address energy problems in a wireless sensor network. To make such a technology work effectively, a vehicle is needed to carry a charger to travel inside the network. On the other hand, it has been well recognized that a mobile base station offers significant advantages over a fixed one. In this paper, we investigate an interesting problem of co-locating the mobile base station on the wireless charging vehicle. We study an optimization problem that jointly optimizes traveling path, stopping points, charging schedule, and flow routing. Our study is carried out in two steps. First, we study an idealized problem that assumes zero traveling time, and develop a provably near-optimal solution to this idealized problem. In the second step, we show how to develop a practical solution with non-zero traveling time and quantify the performance gap between this solution and the unknown optimal solution to the original problem."} {"_id": "505ddb0a2a4d9742726bf4c40a7662e58799b945", "title": "An LLCL Power Filter for Single-Phase Grid-Tied Inverter", "text": "This paper presents a new topology of higher-order power filter for grid-tied voltage-source inverters, named the LLCL filter, which inserts a small inductor in the branch loop of the capacitor in the traditional LCL filter to compose a series resonant circuit at the switching frequency. In particular, it can attenuate the switching-frequency current ripple components much better than an LCL filter, leading to a decrease in the total inductance and volume. Furthermore, by decreasing the inductance of the grid-side inductor, it raises the characteristic resonance frequency, which is beneficial to the inverter system control. The parameter design criteria of the proposed LLCL filter are also introduced.
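The series-resonance idea behind the LLCL filter just described admits a one-line sizing rule: the small inserted inductor L_f is chosen so that the L_f-C_f branch resonates at the switching frequency. A minimal sketch with hypothetical component values follows; the function name and the numbers are assumptions, not values from the paper.

```python
import math

def llcl_series_inductor(f_sw, C_f):
    """Size L_f so the L_f-C_f branch resonates at the switching frequency:
    f_sw = 1 / (2*pi*sqrt(L_f*C_f))  =>  L_f = 1 / ((2*pi*f_sw)**2 * C_f)."""
    return 1.0 / ((2.0 * math.pi * f_sw) ** 2 * C_f)

# Hypothetical values: 10 kHz switching frequency, 4.7 uF filter capacitor.
L_f = llcl_series_inductor(10e3, 4.7e-6)   # ~53.9 uH
```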
The comparative analysis and discussion of the traditional LCL filter and the proposed LLCL filter are presented and evaluated through experiments on a 1.8-kW single-phase grid-tied inverter prototype."} {"_id": "c5e78e8a975e31fb6200bf33275152ccdca2b1dc", "title": "Development of a SiC JFET-Based Six-Pack Power Module for a Fully Integrated Inverter", "text": "In this paper, a fully integrated silicon carbide (SiC)-based six-pack power module is designed and developed. With a 1200-V, 100-A module rating, each switching element is composed of four paralleled SiC junction gate field-effect transistors (JFETs) with two antiparallel SiC Schottky barrier diodes. The stability of the module assembly processes is confirmed with 1000 cycles of -40°C to +200°C thermal shock tests with 1.3°C/s temperature change. The static characteristics of the module are evaluated, and the results show 55 mΩ on-state resistance of the phase leg at 200°C junction temperature. For switching performance, the experiments demonstrate that, at 650 V and 60 A, the module switching loss decreases as the junction temperature increases up to 150°C. The test setup over a large temperature range is also described. Meanwhile, the shoot-through influenced by the SiC JFET internal capacitance as well as package parasitic inductances is discussed. Additionally, a liquid-cooled three-phase inverter with 22.9 cm × 22.4 cm × 7.1 cm volume and 3.53-kg weight, based on this power module, is designed and developed for electric vehicle and hybrid electric vehicle applications. A conversion efficiency of 98.5% is achieved at 10 kHz switching frequency and 5 kW output power. The inverter is successfully evaluated with coolant temperatures up to 95°C."} {"_id": "390daa205c45d196d62f0b89c30e6b8164fb13a8", "title": "Stability of photovoltaic and wind turbine grid-connected inverters for a large set of grid impedance values", "text": "The aim of this paper is to analyze the stability problems of grid-connected inverters used in distributed generation. Complex controllers (e.g., multiple rotating dq-frames or resonant-based) are often required to compensate low-frequency grid voltage background distortion, and an LCL-filter is usually adopted for the high-frequency one. The possibly wide range of grid impedance values (distributed generation is suited for remote areas with radial distribution plants) challenges the stability and the effectiveness of the LCL-filter-based current controlled system. It has been found, and it will be demonstrated in this paper, that the use of active damping helps to stabilise the system with respect to many different kinds of resonances. The use of active damping results in an easy plug-in feature of the generation system in a vast range of grid conditions and in a more flexible operation of the overall system, able to manage sudden grid changes. In the paper, a vast measurement campaign made on a single-phase system and on a three-phase system, used as scale prototypes for photovoltaic and wind turbines respectively, validates the analysis."} {"_id": "7108df07165ae5faca7122fe11f3a62c95a4e919", "title": "10 kV/120 A SiC DMOSFET half H-bridge power modules for 1 MVA solid state power substation", "text": "In this paper, the extension of SiC power technology to higher voltage 10 kV/10 A SiC DMOSFETs and SiC JBS diodes is discussed.
A new 10 kV/120 A SiC power module using these 10 kV SiC devices is also described, which enables a compact 13.8 kV to 465/√3 V solid-state power substation (SSPS) rated at 1 MVA."} {"_id": "7278b8287c5ccd3334c4fa67e82d368c94a5af21", "title": "Overview of Control and Grid Synchronization for Distributed Power Generation Systems", "text": "Renewable energy sources like wind, sun, and hydro are seen as a reliable alternative to traditional energy sources such as oil, natural gas, or coal. Distributed power generation systems (DPGSs) based on renewable energy sources are experiencing large development worldwide, with Germany, Denmark, Japan, and the USA as leaders in this field. Due to the increasing number of DPGSs connected to the utility network, new and stricter standards in respect to power quality, safe running, and islanding protection are issued. As a consequence, the control of distributed generation systems should be improved to meet the requirements for grid interconnection. This paper gives an overview of the structures for DPGSs based on fuel cells, photovoltaics, and wind turbines. In addition, control structures of the grid-side converter are presented, and the possibility of compensation for low-order harmonics is also discussed. Moreover, control strategies when running on grid faults are treated. This paper ends with an overview of synchronization methods and a discussion of their importance in the control"} {"_id": "c7a9a6f421c7ee9390150aa0b48e6a35761e1cd6", "title": "Facial recognition using histogram of gradients and support vector machines", "text": "Face recognition is widely used in computer vision and in many other biometric applications where security is a major concern. The most common problems in recognizing a face arise due to pose variations, different illumination conditions, and so on. The main focus of this paper is to recognize whether a given face input corresponds to a registered person in the database. Face recognition is done using the Histogram of Oriented Gradients (HOG) technique on the AT&T database, with the inclusion of a real-time subject to evaluate the performance of the algorithm. The feature vectors generated by the HOG descriptor are used to train Support Vector Machines (SVM), and results are verified against a given test input. The proposed method checks whether a test image in different pose and lighting conditions is matched correctly with the trained images in the facial database. The results of the proposed approach show minimal false positives and improved detection accuracy."} {"_id": "97d6b836cf64aa83c0178448b4e634826e3cc4c4", "title": "Straight to the Tree: Constituency Parsing with Neural Syntactic Distance", "text": "In this work, we propose a novel constituency parsing scheme. The model predicts a vector of real-valued scalars, named syntactic distances, for each split position in the input sentence. The syntactic distances specify the order in which the split points will be selected, recursively partitioning the input in a top-down fashion. Compared to traditional shift-reduce parsing schemes, our approach is free from the potential problem of compounding errors, while being faster and easier to parallelize.
Our model achieves competitive performance amongst single-model discriminative parsers on the PTB dataset and outperforms previous models"} {"_id": "47f81a0fe08310cc732fbd6ad16ab9da323f395c", "title": "Performance Analysis of Software-Defined Networking (SDN)", "text": "Software-Defined Networking (SDN) approaches were introduced as early as the mid-1990s, but have just recently become a well-established industry standard. Many network architectures and systems have adopted SDN, and vendors are choosing SDN as an alternative to the fixed, predefined, and inflexible protocol stack. SDN offers flexible, dynamic, and programmable functionality of network systems, as well as many other advantages such as centralized control, reduced complexity, better user experience, and a dramatic decrease in network systems and equipment costs. However, SDN characterization and capabilities, as well as the workload of the network traffic that SDN-based systems handle, determine the level of these advantages. Moreover, the enabled flexibility of SDN-based systems comes with a performance penalty. The design and capabilities of the underlying SDN infrastructure influence the performance of common network tasks, compared to a dedicated solution. In this paper we analyze two issues: a) the impact of SDN on raw performance (in terms of throughput and latency) under various workloads, and b) whether there is an inherent performance penalty for a complex, more functional, SDN infrastructure. Our results indicate that SDN does have a performance penalty; however, it is not necessarily related to the complexity level of the underlying SDN infrastructure."} {"_id": "30be3f9d1ffcf94f2a7f8819a3dfe93e7f161050", "title": "Applications of Lucid Dreaming in Sports", "text": "The following article has, above all, a practical orientation. The various possibilities for the application of lucid dreaming in sports training are presented and briefly illustrated. These theses are based on findings from experiments with experienced lucid dreamers who were instructed to carry out various routine and sport-related actions while lucid dreaming, with the object of observing the effects on both dreaming and waking states (Tholey, 1981a). Additionally, I report here on numerous spontaneous accounts of athletes who have mastered lucid dreaming, as well as my own years of experience as a lucid dreamer and as an active competitor in different types of sports."} {"_id": "140e049649229cc317799b512de18857ce09a505", "title": "A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approaches", "text": "Artificial neural networks have earned popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and the network output. The error surface is nonconvex and highly multimodal. Finding the minimum of a multimodal function is an NP-complete problem and cannot, in general, be solved exactly. Thus the application of heuristic global optimization algorithms that compute a good global minimum to neural network training is of interest. This paper reviews the various heuristic global optimization algorithms used for training feedforward neural networks and recurrent neural networks. The training algorithms are compared in terms of the learning rate, convergence speed and accuracy of the output produced by the neural network.
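The top-down partitioning scheme of the constituency-parsing abstract above reduces to a compact recursion: split the sentence at the largest syntactic distance, then recurse on both sides. The sketch below covers only this tree-construction step, not the neural model that predicts the distances; the example distances are invented.

```python
def build_tree(words, distances):
    """Recursively partition the sentence at the largest syntactic distance.

    distances[i] scores the split point between words[i] and words[i+1];
    larger distances are split first, yielding a binary tree top-down.
    """
    if len(words) == 1:
        return words[0]
    split = max(range(len(distances)), key=distances.__getitem__)
    left = build_tree(words[:split + 1], distances[:split])
    right = build_tree(words[split + 1:], distances[split + 1:])
    return (left, right)

# build_tree(['she', 'enjoys', 'playing', 'tennis'], [3.0, 1.0, 2.0])
# -> ('she', (('enjoys', 'playing'), 'tennis'))
```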
The paper concludes by suggesting directions for novel ANN training algorithms based on recent advances in global optimization."} {"_id": "a6507332b400340cce408e0be948c118d41f440a", "title": "Nursing bedside clinical handover - an integrated review of issues and tools.", "text": "AIMS AND OBJECTIVES\nThis article reviews the available literature that supports implementing bedside clinical handover in nursing clinical practice and then seeks to identify key issues, if any.\n\n\nBACKGROUND\nClinical handover practices are recognised as an essential component in the effective transfer of clinical care between health practitioners. It is recognised that the point where a patient is 'handed over' from one clinician to another is significant in maintaining continuity of care and that doing this poorly can create significant safety issues for the patient.\n\n\nDESIGN\nAn integrated literature review.\n\n\nMETHOD\nA literature review of 45 articles was undertaken to understand bedside clinical handover and the issues related to the implementation of this process.\n\n\nRESULTS\nIt was identified that there are a number of clinical handover mnemonics available that provide structure to the process, and that areas such as confidentiality, inclusion of the patient/carer and involving the multidisciplinary team remain topical issues for practitioners in implementing good clinical handover practices.\n\n\nCONCLUSIONS\nThis literature review identified a lack of literature about the transfer of responsibility and accountability during clinical handover and about auditing practices of the clinical handover process. Nurses were more concerned about confidentiality issues than were patients. The use of a structured tool was strongly supported; however, no single tool was considered suitable for all clinical areas.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing clinicians seeking to implement best practice within their professional speciality should consider some of the issues raised within this article and seek to address them by developing strategies to overcome them."} {"_id": "d77ea9a5082081ab0e69f1dfda40546ea332ec3d", "title": "Forward Error Correction for DNA Data Storage", "text": "We report on a strong capacity boost in storing digital data in synthetic DNA. In principle, synthetic DNA is an ideal medium for archiving digital data for very long times, because the achievable data density and longevity outperform today's digital data storage media by far. On the other hand, neither the synthesis, nor the amplification, nor the sequencing of DNA strands can be performed error-free today or in the foreseeable future. In order to make synthetic DNA available as a digital data storage medium, specifically tailored forward error correction schemes have to be applied. For the purpose of realizing DNA data storage, we have developed an efficient and robust forward-error-correcting scheme adapted to the DNA channel. We based the design of the DNA channel model on data from a proof of concept conducted in 2012 by a team from the Harvard Medical School [1]. Our forward error correction scheme is able to cope with all error types of today's DNA synthesis, amplification and sequencing processes, e.g., insertion, deletion, and swap errors. In a successful experiment, we recently stored and retrieved error-free 22 MByte of digital data in synthetic DNA.
The residual error probability found is already of the same order as that of hard disk drives and can easily be improved further. This proves the feasibility of using synthetic DNA as a long-term digital data storage medium."} {"_id": "f07744725049c566f2f6916b79dc9b7ed8a8c6e1", "title": "CLIP: Continuous Location Integrity and Provenance for Mobile Phones", "text": "Many location-based services require a mobile user to continuously prove his location. In the absence of a secure mechanism, malicious users may lie about their locations to get these services. A mobility trace, a sequence of past mobility points, provides evidence for the user's locations. In this paper, we propose a Continuous Location Integrity and Provenance (CLIP) scheme to provide authentication for mobility traces and protect users' privacy. CLIP uses a low-power inertial accelerometer sensor with a lightweight entropy-based commitment mechanism and is able to authenticate the user's mobility trace without any cost of trusted hardware. CLIP maintains the user's privacy, allowing the user to submit a portion of his mobility trace with which the commitment can also be verified. Wireless Access Points (APs) or co-located mobile devices are used to generate the location proofs. We also propose a lightweight spatial-temporal trust model to detect fake location proofs from collusion attacks. The prototype implementation on Android demonstrates that CLIP requires low computational and storage resources. Our extensive simulations show that the spatial-temporal trust model can achieve high (> 0.9) detection accuracy against collusion attacks."} {"_id": "1398f5aaaa5abfeef9dc1dd67d323b8004b0e951", "title": "Security and privacy in mobile crowdsourcing networks: challenges and opportunities", "text": "The mobile crowdsourcing network (MCN) is a promising network architecture that applies the principles of crowdsourcing to perform tasks with human involvement and powerful mobile devices. However, it also raises some critical security and privacy issues that impede the application of MCNs. In this article, in order to better understand these critical security and privacy challenges, we first propose a general architecture for a mobile crowdsourcing network comprising both crowdsourcing sensing and crowdsourcing computing. After that, we set forth several critical security and privacy challenges that essentially capture the characteristics of MCNs. We also formulate some research problems leading to possible research directions. We expect this work will bring more attention to further investigation of security and privacy solutions for mobile crowdsourcing networks."} {"_id": "4bb507d3d74354ecab6dad25f211bbb856bbb392", "title": "DAC-MACS: Effective Data Access Control for Multiauthority Cloud Storage Systems", "text": "Data access control is an effective way to ensure data security in the cloud. However, due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Existing access control schemes are no longer applicable to cloud storage systems, because they either produce multiple encrypted copies of the same data or require a fully trusted cloud server. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising technique for access control over encrypted data.
However, due to the inefficiency of decryption and revocation, existing CP-ABE schemes cannot be directly applied to construct a data access control scheme for multiauthority cloud storage systems, where users may hold attributes from multiple authorities. In this paper, we propose data access control for multiauthority cloud storage (DAC-MACS), an effective and secure data access control scheme with efficient decryption and revocation. Specifically, we construct a new multiauthority CP-ABE scheme with efficient decryption, and we also design an efficient attribute revocation method that can achieve both forward security and backward security. We further propose an extensive data access control scheme (EDAC-MACS), which is secure under weaker security assumptions."} {"_id": "b56d065e56866dce8ceaca4af78a8f058aecd6a7", "title": "Learning Long Term Dependencies via Fourier Recurrent Units", "text": "It is a known fact that training recurrent neural networks for tasks that have long-term dependencies is challenging. One of the main reasons is the vanishing or exploding gradient problem, which prevents gradient information from propagating to early layers. In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), that stabilizes the gradients that arise in its training while giving us stronger expressive power. Specifically, FRU summarizes the hidden states h along the temporal dimension with Fourier basis functions. This allows gradients to easily reach any layer due to FRU's residual learning structure and the global support of trigonometric functions. We show that FRU has gradient lower and upper bounds independent of the temporal dimension. We also show the strong expressivity of the sparse Fourier basis, from which FRU obtains its strong expressive power. Our experimental study also demonstrates that, with fewer parameters, the proposed architecture outperforms other recurrent architectures on many tasks."} {"_id": "2230e4f5abfe76608fd257baca9eea043e3f1750", "title": "The Role of Classification in Knowledge Representation and Discovery", "text": "The link between classification and knowledge is explored. Classification schemes have properties that enable the representation of entities and relationships in structures that reflect knowledge of the domain being classified. The strengths and limitations of four classificatory approaches are described in terms of their ability to reflect, discover, and create new knowledge. These approaches are hierarchies, trees, paradigms, and faceted analysis. Examples are provided of the way in which knowledge and the classification process affect each other. INTRODUCTION Developments in our ability to store and retrieve large amounts of information have stimulated an interest in new ways to exploit this information for advancing human knowledge. This article describes the relationship between knowledge representation (as manifested in classifications) and the processes of knowledge discovery and creation. How does the classification process enable or constrain knowing something or discovering new knowledge about something? In what ways might we develop classifications that will enhance our ability to discover meaningful information in our data stores?
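The FRU idea from the abstract above, summarizing hidden states along the temporal dimension with Fourier basis functions, can be sketched as follows; the paper's exact parameterization differs, and the frequency set here is an assumption.

```python
import numpy as np

def fourier_summaries(H, freqs=(0, 1, 2, 4)):
    """Summarize hidden states along time with cosine basis functions.

    H     -- array of shape (T, d): hidden state at each time step
    freqs -- frequencies k of the basis cos(2*pi*k*t/T) (an assumed set)
    Returns one d-dimensional summary per frequency; the global support of
    the basis lets every time step contribute to every summary.
    """
    T = len(H)
    t = np.arange(T)
    basis = np.cos(2.0 * np.pi * np.outer(freqs, t) / T)  # shape (len(freqs), T)
    return basis @ H / T
```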
The first part of the article describes several representative classificatory structures (hierarchies, trees, paradigms, and faceted analysis) with the aim of identifying how these structures serve as knowledge representations and in what ways they can be used for knowledge discovery and creation. The second part of the discussion includes examples from existing classification schemes and discusses how the schemes reflect or fail to reflect knowledge. KNOWLEDGE, THEORY, AND CLASSIFICATION Scholars in many fields, from philosophy to cybernetics, have long discussed the concept of knowledge and the problems of representing knowledge in information systems. The distinction is drawn between merely observing, perceiving, or even describing things and truly knowing them. To know implies a process of integration of facts about objects and the context in which the objects and processes exist. Even in colloquial usage, knowledge about someone or something is always expressed in terms of deep relationships and meanings as well as its place in time and space. To know cars means not only understanding car mechanics but also knowledge of the interplay of the mechanical processes and perhaps even factors such as aesthetics, economics, and psychology. The process of knowledge discovery and creation in science has traditionally followed the path of systematic exploration, observation, description, analysis, and synthesis and testing of phenomena and facts, all conducted within the communication framework of a particular research community with its accepted methodology and set of techniques. We know the process is not entirely rational but often is sparked and then fueled by insight, hunches, and leaps of faith (Bronowski, 1978). Moreover, research is always conducted within a particular political and cultural reality (Olson, 1998). Each researcher and, on a larger scale, each research community at various points must gather up the disparate pieces and in some way communicate what is known, expressing it in such a way as to be useful for further discovery and understanding. A variety of formats exist for the expression of knowledge, e.g., theories, models, formulas, descriptive reportage of many sorts, and polemical essays. Of these formats, science particularly values theories and models because they are a "symbolic dimension of experience as opposed to the apprehension of brute fact" (Kaplan, 1963, p. 294) and can therefore be symbolically extended to cover new experiences. A theory thus explains a particular fact by abstracting the relationship of that fact to other facts. Grand, or covering, theories explain facts in an especially eloquent way and in a very wide (some would say, universal) set of situations. Thus, Darwinian, Marxist, or Freudian theories, for example, attempt to explain processes and behaviors in many contexts, but they do so at a high level of abstraction. There are relatively few grand theories, however, and we rely on the explanatory and descriptive usefulness of more "local" theories: theories that explain a more limited domain but with greater specificity. CLASSIFICATION AS KNOWLEDGE REPRESENTATION How are theories built? How does knowledge accumulate and then get shaped into a powerful representation? There are, of course, many processes involved, but often one of them is the process of classification. Classification is the meaningful clustering of experience.
The process of classification can be used in a formative way and is thus useful during the preliminary stages of inquiry as a heuristic tool in discovery, analysis, and theorizing (Davies, 1989). Once concepts gel and the relationships among concepts become understood, a classification can be used as a rich representation of what is known and is thus useful in communication and in generating a fresh cycle of exploration, comparison, and theorizing. Kaplan (1963) states that "theory is not the aggregate of the new laws but their connectedness, as a bridge consists of girders only in that the girders are joined together in a particular way" (p. 297). A good classification functions in much the same way that a theory does, connecting concepts in a useful structure. If successful, it is, like a theory, descriptive, explanatory, heuristic, fruitful, and perhaps also elegant, parsimonious, and robust (Kwasnik, 1992b). There are many approaches to the process of classification and to the construction of the foundation of classification schemes. Each kind of classification process has different goals, and each type of classification scheme has different structural properties as well as different strengths and weaknesses in terms of knowledge representation and knowledge discovery. The following is a representative sample of some common approaches and structures. HIERARCHIES We have inherited our understanding of hierarchical classifications from Aristotle (Ackrill, 1963), who posited that all nature comprised a unified whole. The whole could be subdivided, like a chicken leg at the joint, into "natural" classes, and each class further into subclasses, and so on, this process following an orderly and systematic set of rules of association and distinction. How do we know what a natural dividing place is, and how do we arrive at the rules for division and subdivision? According to Aristotle, only exhaustive observation can reveal each entity's true (essential) attributes, and only philosophy can guide us in determining the necessary and sufficient attributes for membership in any given class. In fact, according to Aristotle's philosophy, it is only when an entity is properly classed, and its essential properties identified, that we can say we truly know it. This is the aim of science, he claims, i.e., to unambiguously classify all phenomena by their essential (true) qualities. While Aristotle's legacy is alive in spirit in modern applications of classification, most practitioners recognize that a pure and complete hierarchy is essentially possible only in the ideal. Nevertheless, in knowledge domains that have theoretical foundations (such as germ theory in medicine and the theory of evolution in biology), hierarchies are the preferred structures for knowledge representation (see, for example, the excerpt from the Medical Subject Headings [MeSH] in Figure 1). Based on the MeSH excerpt in Figure 1, note that hierarchies have strict structural requirements: Inclusiveness: The top class (in this case, EYE DISEASES) is the most inclusive class and describes the domain of the classification. The top class includes all its subclasses and sub-subclasses. Put another way, all the classes in the example are included in the top class: EYE DISEASES. Species/differentia: A true hierarchy has only one type of relationship between its super- and subclasses, and this is the generic relationship, also known as species/differentia, or more colloquially as the is-a relationship.
In a generic relationship, ALLERGIC CONJUNCTIVITIS is a kind of CONJUNCTIVITIS, which in turn is a kind of CONJUNCTIVAL DISEASE, which in turn is a kind of EYE DISEASE. Inheritance: This requirement of strict class inclusion ensures that everything that is true for entities in any given class is also true for entities in its subclasses and sub-subclasses. Thus whatever is true of EYE DISEASES (as a whole) is also true of CONJUNCTIVAL DISEASES. Whatever is true of CONJUNCTIVAL DISEASES (as a whole) is also true of CONJUNCTIVITIS, and so on. This property is called inheritance, that is, attributes are inherited by a subclass from its superclass. Transitivity: Since attributes are inherited, all sub-subclasses are members of not only their immediate superclass but of every superclass above that one. Thus if BACTERIAL CONJUNCTIVITIS is a kind of CONJUNCTIVITIS, and CONJUNCTIVITIS is a kind of CONJUNCTIVAL DISEASE, then, by the rules of transitivity, BACTERIAL CONJUNCTIVITIS is also a kind of CONJUNCTIVAL DISEASE, and so on. This property is called transitivity. Systematic and predictable rules for association and distinction: The rules for grouping entities in a class (i.e., creating a species) are determined beforehand, as are the rules for creating distinct subclasses (differentia). Thus all entities in a given class are like each other in some predictable (and predetermined) way, and these entities differ from entities in sibling classes in some predictable (and predetermined) way. In the example above, CONJUNCTIVAL and CORNEAL DISEASES are alike in that they are both kinds of EYE DISEASES. They are differentiated from each other along some predictable and systematic criterion of distinction (in this case \u201cpart of the eye affected\u201d). Mutual exclusivity: A given entity can belong to only one class. This property is called mutual exclusivity. Necessary and sufficient criteria: In a pure"} {"_id": "d1d120bc98e536dd33e37c876aaba57e584d252e", "title": "A soft, bistable valve for autonomous control of soft actuators", "text": "Almost all pneumatic and hydraulic actuators useful for mesoscale functions rely on hard valves for control. This article describes a soft, elastomeric valve that contains a bistable membrane, which acts as a mechanical \u201cswitch\u201d to control air flow. A structural instability\u2014often called \u201csnap-through\u201d\u2014enables rapid transition between two stable states of the membrane. The snap-upward pressure, \u0394P1 (kilopascals), of the membrane differs from the snap-downward pressure, \u0394P2 (kilopascals). The values \u0394P1 and \u0394P2 can be designed by changing the geometry and the material of the membrane. The valve does not require power to remain in either \u201copen\u201d or \u201cclosed\u201d states (although switching does require energy), can be designed to be bistable, and can remain in either state without further applied pressure. When integrated in a feedback pneumatic circuit, the valve functions as a pneumatic oscillator (between the pressures \u0394P1 and \u0394P2), generating periodic motion using air from a single source of constant pressure. The valve, as a component of pneumatic circuits, enables (i) a gripper to grasp a ball autonomously and (ii) autonomous earthworm-like locomotion using an air source of constant pressure. 
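The oscillation mechanism in the valve record above follows directly from the hysteresis between \u0394P1 and \u0394P2. A toy simulation (editorial sketch; the threshold and flow-rate values are invented, not taken from the paper) shows how a constant supply plus a hysteretic switch yields periodic motion:

```python
# Toy simulation of the hysteretic ("snap-through") valve as a pneumatic oscillator:
# chamber pressure rises under a constant supply while the valve is closed, and
# vents once pressure snaps past dP1; the valve recloses once pressure falls to dP2.
dP1, dP2 = 12.0, 4.0              # snap-up / snap-down thresholds (kPa), illustrative
fill_rate, vent_rate = 2.0, 5.0   # pressure change per time step (kPa), illustrative
pressure, valve_open = 0.0, False

for step in range(40):
    if valve_open:
        pressure -= vent_rate
        if pressure <= dP2:       # snap downward: valve closes
            valve_open = False
    else:
        pressure += fill_rate
        if pressure >= dP1:       # snap upward: valve opens
            valve_open = True
    print(f"t={step:2d}  P={pressure:5.1f} kPa  {'OPEN' if valve_open else 'closed'}")
```

The printed trace cycles between the two thresholds indefinitely, which is the oscillator behavior the abstract describes, obtained from a single constant-pressure source.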
These valves are fabricated using straightforward molding and offer a way of integrating simple control and logic functions directly into soft actuators and robots."} {"_id": "181092c651f540045042e7006f4837d01aac4122", "title": "Malware detection by text and data mining", "text": "Cyber frauds are a major security threat to the banking industry worldwide. Malware is one of the manifestations of cyber frauds. Malware authors use Application Programming Interface (API) calls to perpetrate these crimes. In this paper, we propose a static analysis method to detect malware based on API call sequences using text and data mining in tandem. We analyzed the dataset available at CSMINING group. First, we employed text mining to extract features from the dataset consisting of a series of API calls. Further, mutual information is invoked for feature selection. Then, we resorted to over-sampling to balance the data set. Finally, we employed various data mining techniques such as Decision Tree (DT), Multi Layer Perceptron (MLP), Support Vector Machine (SVM), Probabilistic Neural Network (PNN) and Group Method for Data Handling (GMDH). We also applied One Class SVM (OCSVM). Throughout the paper, we used the 10-fold cross-validation technique for testing the techniques. We observed that SVM and OCSVM achieved 100% sensitivity after balancing the dataset."} {"_id": "6ff87b9a9ecec007fdf46ee81ed98571f881796f", "title": "The changing paradigm of air pollution monitoring.", "text": "The air pollution monitoring paradigm is rapidly changing due to recent advances in (1) the development of portable, lower-cost air pollution sensors reporting data in near-real time at a high-time resolution, (2) increased computational and visualization capabilities, and (3) wireless communication/infrastructure. It is possible that these advances can support traditional air quality monitoring by supplementing ambient air monitoring and enhancing compliance monitoring. Sensors are beginning to provide individuals and communities the tools needed to understand their environmental exposures; with these data, individual and community-based strategies can be developed to reduce pollution exposure as well as understand linkages to health indicators. Each of these areas as well as corresponding challenges (e.g., quality of data) and potential opportunities associated with development and implementation of air pollution sensors are discussed."} {"_id": "8faecc97853db2f9a098adb82ffbc842178dfc5b", "title": "Quantitative analysis of small-plastic debris on beaches in the Hawaiian Archipelago.", "text": "Small-plastic beach debris from nine coastal locations throughout the Hawaiian Archipelago was analyzed. At each beach, replicate 20 l samples of sediment were collected, sieved for debris between 1 and 15 mm in size, sorted by type, counted and weighed. Small-plastic debris occurred on all of the beaches, but the greatest quantity was found at three of the most remote beaches on Midway Atoll and Moloka'i. Of the debris analyzed, 72% by weight was plastic. A total of 19100 pieces of plastic were collected from the nine beaches, 11% of which was pre-production plastic pellets. 
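The malware-detection record above outlines a concrete pipeline: n-gram text features over API-call sequences, mutual-information feature selection, over-sampling to balance classes, then a classifier. A minimal sketch of that pipeline with scikit-learn follows (editorial illustration; the toy API sequences, labels, and all parameter choices are invented, and naive random over-sampling stands in for whatever balancing scheme the paper used):

```python
# Sketch: API-call sequences -> n-gram counts -> mutual-information feature
# selection -> naive over-sampling of the minority class -> SVM classifier.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

sequences = [
    "LoadLibrary GetProcAddress VirtualAlloc WriteProcessMemory",
    "CreateFile ReadFile CloseHandle",
    "VirtualAlloc WriteProcessMemory CreateRemoteThread",
    "CreateFile WriteFile CloseHandle",
    "ReadFile ReadFile CloseHandle",
    "CreateFile ReadFile WriteFile",
]
labels = np.array([1, 0, 1, 0, 0, 0])  # 1 = malware, 0 = benign (toy labels)

X = CountVectorizer(ngram_range=(1, 2)).fit_transform(sequences)   # text mining
X = SelectKBest(mutual_info_classif, k=5).fit_transform(X, labels) # MI selection
X = X.toarray()

# Naive over-sampling: repeat randomly chosen minority rows until classes balance.
rng = np.random.default_rng(0)
deficit = (labels == 0).sum() - (labels == 1).sum()
extra = X[labels == 1][rng.integers((labels == 1).sum(), size=deficit)]
X_bal = np.vstack([X, extra])
y_bal = np.concatenate([labels, np.ones(deficit, dtype=int)])

clf = SVC(kernel="linear").fit(X_bal, y_bal)
print(clf.predict(X))  # sanity check on the (toy) training sequences
```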
This study documents for the first time the presence of small-plastic debris on Hawaiian beaches and corroborates estimates of the abundance of plastics in the marine environment in the North Pacific."} {"_id": "a55c57bd59691d0ac17106c757853eb5d546c84e", "title": "Insights on representational similarity in neural networks with canonical correlation", "text": "Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building off of SVCCA, a recently proposed method [22]. We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrow networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms. Together, these results provide new insights into the function of CNNs and RNNs, and demonstrate the utility of using CCA to understand representations."} {"_id": "f4136bd7f3948be30c4c11876a1bf933e3cc8549", "title": "Zara The Supergirl: An Empathetic Personality Recognition System", "text": "Zara the Supergirl is an interactive system that, while having a conversation with a user, uses its built-in sentiment analysis, emotion recognition, facial and speech recognition modules, to exhibit the human-like response of sharing emotions. In addition, at the end of a 5-10 minute conversation with the user, it can give a comprehensive personality analysis based on the user\u2019s interaction with Zara. This is a first prototype that has incorporated a full empathy module, the recognition and response of human emotions, into a spoken language interactive system that enhances human-robot understanding. Zara was shown at the World Economic Forum in Dalian in September 2015."} {"_id": "3b1faf9c554e857eb1e35572215d59d96810b09a", "title": "A CNN-Based Supermarket Auto-Counting System", "text": "Deep learning has made significant breakthroughs in the past decade. In certain application domains, its detection accuracy has surpassed human beings in the same tasks, e.g., voice recognition and object detection. Various novel applications have been developed and have achieved good performance by leveraging the latest advances in deep learning. In this paper, we propose to utilize a deep-learning-based technique, specifically a Convolutional Neural Network (CNN), to develop an auto-counting system for the supermarket scenario. Given a picture, the system can automatically detect the specified categories of goods (e.g., Head & Shoulders bottles) and their respective numbers. 
To improve the detection accuracy of the system, we propose to combine hard example mining and multi-scale feature extraction within the Faster R-CNN framework. Experimental results demonstrate the efficacy of the proposed system. Specifically, our system achieves an mAP of 92.1%, which is better than the state-of-the-art, and the response time is about 250 ms per image, including all steps on a GTX 1080 GPU."} {"_id": "2b692907212a8f6eaf3d187b8370d6285dbbfaa5", "title": "A unified API gateway for high availability clusters", "text": "High-availability (HA) clusters are widely used to provide high availability services. Recently, many HA cluster solutions and products have been proposed or developed by different organizations. However, each HA cluster system has its own administrative tool and application programming interface (API). Besides, vendor lock-in makes customers dependent on the specific vendors' own high availability clusters. Therefore, it is very complicated to simultaneously manage various HA clusters. To solve this problem, a novel SOA-based Unified API Gateway for high-availability Clusters (UAGC) is proposed in this paper. Under UAGC, cluster services are conveniently managed in a unified way, which is independent of platforms or programming languages. Six web service interfaces are implemented in UAGC to cover most cluster functions. A UAGC-based web service (UAGCService) is implemented with WCF. The experimental results show that UAGCService has good performance."} {"_id": "52b3bd5cc6e1838a894b63dc378807dab25d2b38", "title": "Concurrent multi-target localization, data association, and navigation for a swarm of flying sensors", "text": "We are developing a probabilistic technique for performing multiple target detection and localization based on data from a swarm of flying sensors, for example to be mounted on a group of micro-UAVs (unmanned aerial vehicles). Swarms of sensors can facilitate detecting and discriminating low signal-to-clutter targets by allowing correlation between different sensor types and/or different aspect angles. However, for deployment of swarms to be feasible, UAVs must operate more autonomously. The current approach is designed to reduce the load on humans controlling UAVs by providing computerized interpretation of a set of images from multiple sensors. We consider a complex case in which target detection and localization are performed concurrently with sensor fusion, multi-target signature association, and improved UAV navigation. This method yields the bonus feature of estimating precise tracks for UAVs, which may be applicable for automatic collision avoidance. We cast the problem in a probabilistic framework known as modeling field theory (MFT), in which the pdf of the data is composed of a mixture of components, each conditional upon parameters including target positions as well as sensor kinematics. The most likely set of parameters is found by maximizing the log-likelihood function using an iterative approach related to expectation-maximization. In terms of computational complexity, this approach scales linearly with the number of targets and sensors, which represents an improvement over most existing methods. Also, since data association is treated probabilistically, this method is not prone to catastrophic failure if data association is incorrect. 
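The mixture-model formulation in the record above can be illustrated with a small sketch (editorial illustration; the data, component count, and fixed noise variance are synthetic, and plain EM for an isotropic Gaussian mixture stands in for the full modeling-field-theory machinery): detections from multiple targets are softly assigned to targets in the E-step, which is exactly the probabilistic data association the abstract credits for robustness.

```python
# EM for a K-component Gaussian mixture as a stand-in for probabilistic
# multi-target localization: E-step = soft data association, M-step = update
# of the estimated target positions. Cost per iteration is linear in K * N.
import numpy as np

rng = np.random.default_rng(1)
true_targets = np.array([[0.0, 0.0], [5.0, 5.0]])
# Each "sensor report" is a noisy detection of one target; association is unknown.
obs = np.vstack([t + 0.4 * rng.standard_normal((60, 2)) for t in true_targets])

K, var = 2, 0.5                        # component count and fixed noise variance
mu = rng.standard_normal((K, 2)) * 3   # random initial target estimates

for _ in range(50):
    # E-step: responsibility of target k for each observation (data association).
    d2 = ((obs[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    d2 -= d2.min(axis=1, keepdims=True)          # stabilize the exponentials
    r = np.exp(-0.5 * d2 / var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: each target position becomes a responsibility-weighted mean.
    mu = (r[:, :, None] * obs[:, None, :]).sum(0) / r.sum(0)[:, None]

print("estimated target positions:\n", np.round(mu, 2))
```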
Results from computer simulations are described which quantitatively show the advantages of increasing the number of sensors in the swarm, both in terms of clutter suppression and more accurate target localization."} {"_id": "229547ed3312ee6195104cdec7ce47578f92c2c6", "title": "DYNAMIC CAPABILITIES AND THE EMERGENCE OF INTRA-INDUSTRY DIFFERENTIAL FIRM PERFORMANCE: INSIGHTS FROM A SIMULATION STUDY", "text": "This paper explores how the dynamic capabilities of firms may account for the emergence of differential firm performance within an industry. Synthesizing insights from both strategic and organizational theory, four performance-relevant attributes of dynamic capabilities are proposed: timing of dynamic capability deployment, imitation as part of the search for alternative resource configurations, cost of dynamic capability deployment, and learning to deploy dynamic capabilities. Theoretical propositions are developed suggesting how these attributes contribute to the emergence of differential firm performance. A formal model is presented in which dynamic capability is modeled as a set of routines guiding a firm\u2019s evolutionary processes of change. Simulation of the model yields insights into the process of change through dynamic capability deployment, and permits refinement of the theoretical propositions. One of the interesting findings of this study is that even if dynamic capabilities are equifinal across firms, robust performance differences may arise across firms if the costs and timing of dynamic capability deployment differ across firms."} {"_id": "fdf72e14c2c22960cdd0c3d1109b71b725fc0d0b", "title": "Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions", "text": "Vehicular networking has significant potential to enable diverse applications associated with traffic safety, traffic efficiency and infotainment. In this survey and tutorial paper we introduce the basic characteristics of vehicular networks, provide an overview of applications and associated requirements, along with challenges and their proposed solutions. In addition, we provide an overview of the current and past major ITS programs and projects in the USA, Japan and Europe. Moreover, vehicular networking architectures and protocol suites employed in such programs and projects in the USA, Japan and Europe are discussed."} {"_id": "a0e4730c864e7ca035dde6ba7b19207d1e7daa6a", "title": "Force-mode control of rotary series elastic actuators in a lower extremity exoskeleton using model-inverse time delay control (MiTDC)", "text": "For physical human-robot interaction (pHRI), controlling the output force of actuators has been an important issue. The aim of this study was to apply a new control strategy, named model-inverse time delay control (MiTDC), to series elastic actuators (SEA) in a lower extremity exoskeleton, even in the presence of uncertainties from pHRI. The law for time delay control (TDC) is derived and implementation issues are discussed, including the design of the state observer and the selection of the nominal value of the control distribution coefficient. Additionally, a new concept, a new reference position using the inverse of model dynamics, is introduced to realize satisfactory tracking performance without delay. 
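The core TDC idea referenced in the MiTDC record can be sketched in a few lines (editorial illustration; the scalar plant, gains, reference trajectory, and the nominal coefficient b_hat below are all invented, and this is emphatically not the paper's controller): the unknown dynamics are estimated from a one-step-delayed acceleration and control sample, so the controller needs only a nominal control-distribution coefficient rather than an accurate plant model.

```python
# Minimal time-delay-control (TDC) sketch: cancel unknown dynamics using
# delayed measurements, f_est = a(t-L) - b_hat * u(t-L), then apply PD tracking.
import numpy as np

dt, T = 0.001, 4.0
b_true, b_hat = 2.0, 1.2      # true vs. nominal control-distribution coefficient
kp, kd = 400.0, 40.0          # PD gains on the tracking error
x = v = 0.0
u_prev = a_prev = 0.0         # one-sample-delayed control and acceleration

for k in range(int(T / dt)):
    t = k * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    e, edot = xd - x, vd - v
    f_est = a_prev - b_hat * u_prev                 # TDC estimate of unknown dynamics
    u = (ad + kd * edot + kp * e - f_est) / b_hat   # TDC + PD control law
    # "True" plant, unknown to the controller: a = f(x, v) + b*u.
    a = -3.0 * v - 5.0 * np.sin(x) + b_true * u
    x, v = x + v * dt, v + a * dt
    u_prev, a_prev = u, a

print(f"final tracking error: {abs(np.sin(T) - x):.4f}")
```

Note the stability caveat familiar from the TDC literature: the scheme tolerates a wrong b_hat only within limits (roughly, b_true/b_hat must stay below 2 for this scalar toy), which is why selecting the nominal coefficient is singled out as an implementation issue in the record above.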
Experimental results showed that the suggested controller achieved satisfactory performance without accurate information on system parameters, requiring only a nominal value of the control distribution coefficient."} {"_id": "144c1e6fc6439038d709be70aa5484506765b2c2", "title": "Toward an SDN-enabled NFV architecture", "text": "This article presents the progressive evolution of NFV from the initial SDN-agnostic initiative to a fully SDN-enabled NFV solution, where SDN is not only used as infrastructure support but also influences how virtual network functions (VNFs) are designed. In the latest approach, when possible, stateless processing in the VNF shifts from the computing element to the networking element. To support these claims, the article presents the implementation of a flow-based network access control solution, with an SDN-enabled VNF built on IEEE 802.1x, which establishes services as sets of flow definitions that are authorized as the result of an end user authentication process. Enforcing the access to the network is done at the network element, while the authentication and authorization state is maintained at the compute element. The application of this proposal allows the performance to be enhanced, while traffic in the control channel is reduced to a minimum. The SDN-enabled NFV approach sets the foundation to increase the areas of application of NFV, in particular in those areas where massive stateless processing of packets is expected."} {"_id": "c5153bc17fe1ab323c41325ba010192030d512f0", "title": "Software-defined network virtualization: an architectural framework for integrating SDN and NFV for service provisioning in future networks", "text": "SDN and NFV are two significant innovations in networking. The evolution of both SDN and NFV has shown strong synergy between these two paradigms. Recent research efforts have been made toward combining SDN and NFV to fully exploit the advantages of both technologies. However, integrating SDN and NFV is challenging due to the variety of intertwined network elements involved and the complex interaction among them. In this article, we attempt to tackle this challenging problem by presenting an architectural framework called SDNV. This framework offers a clear holistic vision of integrating key principles of both SDN and NFV into a unified network architecture, and provides guidelines for synthesizing research efforts toward combining SDN and NFV in future networks. Based on this framework, we also discuss key technical challenges to realizing SDN-NFV integration and identify some important topics for future research, with a hope to arouse the research community's interest in this emerging area."} {"_id": "1f2b28dc48c8f2c0349dce728d7b6a0681f58aea", "title": "A Dataset for Lane Instance Segmentation in Urban Environments", "text": "Autonomous vehicles require knowledge of the surrounding road layout, which can be predicted by state-of-the-art CNNs. This work addresses the current lack of data for determining lane instances, which are needed for various driving manoeuvres. The main issue is the time-consuming manual labelling process, typically applied per image. We notice that driving the car is itself a form of annotation. Therefore, we propose a semi-automated method that allows for efficient labelling of image sequences by utilising an estimated road plane in 3D based on where the car has driven and projecting labels from this plane into all images of the sequence. 
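The projection step in the lane-dataset record above is standard pinhole-camera geometry: a label placed once on the 3D road plane is reprojected into every frame using that frame's camera pose. A minimal sketch (editorial illustration; the intrinsics, pose, and lane coordinates are invented toy values):

```python
# Project 3D road-plane points into pixel coordinates via x ~ K (R X + t).
import numpy as np

K = np.array([[800.0, 0.0, 640.0],    # toy pinhole intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def project(points_world, R, t):
    """Project Nx3 world points to pixels: world -> camera frame -> perspective divide."""
    cam = points_world @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

# World frame: x lateral, y forward, z up; lane boundary on the road plane z = 0.
lane = np.array([[-1.8, y, 0.0] for y in np.arange(5.0, 25.0, 2.0)])
# Camera frame: x right, y down, z forward; camera 1.5 m above the road.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
t = np.array([0.0, 1.5, 0.0])
print(np.round(project(lane, R, t), 1))   # one image's lane label, in pixels
```

Repeating this with each frame's estimated pose is what turns one 3D annotation into labels for a whole sequence, which is the source of the quoted labelling-time reduction.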
The average labelling time per image is reduced to 5 seconds and only an inexpensive dash-cam is required for data capture. We are releasing a dataset of 24,000 images and additionally show experimental semantic segmentation and instance segmentation results."} {"_id": "b533b13910cc0de21054116715988783fbea87cc", "title": "Intrusion Detection Using Data Mining Techniques", "text": "Nowadays an increasing number of public and commercial services are used through the Internet, so the security of information has become an increasingly important issue in society; Intrusion Detection Systems (IDS) are used to protect computer networks against attacks. In addition, some data mining techniques also contribute to intrusion detection. Data mining techniques used for intrusion detection can be classified into two classes: misuse intrusion detection and anomaly intrusion detection. Misuse refers to known attacks and harmful activities that exploit known vulnerabilities of the system. Anomaly generally refers to activity that may indicate an intrusion. In this paper, a comparison is made among 23 related papers that use data mining techniques for intrusion detection. Our work provides an overview of data mining and soft computing techniques such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Multivariate Adaptive Regression Spline (MARS), etc. The paper also compares the IDS data mining techniques and the datasets used for intrusion detection. Of those 23 related papers, 7 use ANN and 4 use SVM, because ANN and SVM are more reliable than other models and structures. In addition, 8 use the DARPA1998 dataset and 13 use the KDDCup1999 dataset, because standard datasets are much more credible than others. There is no best intrusion detection model at present; future research directions for intrusion detection are also explored in this paper. Keywords\u2014 intrusion detection, data mining, ANN"} {"_id": "a69fd2ad66791ad9fa8722a3b2916092d0f37967", "title": "Interactive example-based urban layout synthesis", "text": "We present an interactive system for synthesizing urban layouts by example. Our method simultaneously performs both a structure-based synthesis and an image-based synthesis to generate a complete urban layout with a plausible street network and with aerial-view imagery. Our approach uses the structure and image data of real-world urban areas and a synthesis algorithm to provide several high-level operations to easily and interactively generate complex layouts by example. The user can create new urban layouts by a sequence of operations such as join, expand, and blend without being concerned about low-level structural details. Further, the ability to blend example urban layout fragments provides a powerful way to generate new synthetic content. We demonstrate our system by creating urban layouts using example fragments from several real-world cities, each ranging from hundreds to thousands of city blocks and parcels."} {"_id": "4be6deba004cd99fa5434cd3c297747be9d45bb5", "title": "Machine Ethics and Automated Vehicles", "text": "Road vehicle travel at a reasonable speed involves some risk, even when using computer-controlled driving with failure-free hardware and perfect sensing. A fully-automated vehicle must continuously decide how to allocate this risk without a human driver\u2019s oversight. 
These are ethical decisions, particularly in instances where an automated vehicle cannot avoid crashing. In this chapter, I introduce the concept of moral behavior for an automated vehicle, argue the need for research in this area through responses to anticipated critiques, and discuss relevant applications from machine ethics and moral modeling research. 1 Ethical Decision Making for Automated Vehicles Vehicle automation has progressed rapidly this millennium, mirroring improvements in machine learning, sensing, and processing. Media coverage often focuses on the anticipated safety benefits from automation, as computers are expected to be more attentive, precise, and predictable than human drivers. Mentioned less often are the novel problems from automated vehicle crashes. The first problem is liability, as it is currently unclear who would be at fault if a vehicle crashed while self-driving. The second problem is the ability of an automated vehicle to make ethically-complex decisions when driving, particularly prior to a crash. This chapter focuses on the second problem, and the application of machine ethics to vehicle automation. Driving at any significant speed can never be completely safe. A loaded tractor trailer at 100 km/hr requires eight seconds to come to a complete stop, and a passenger car requires three seconds [1]. Truly safe travel requires accurate predictions of other vehicle behavior over this time frame, something that is simply not possible given the close proximities of road vehicles. To ensure its own safety, an automated vehicle must continually assess risk: the risk of traveling a certain speed on a certain curve, of crossing the centerline to"} {"_id": "e5b302ee968ddc72263bc9a403a744bc477c91b3", "title": "Daylight Design of Office Buildings: Optimisation of External Solar Shadings by Using Combined Simulation Methods", "text": "Integrating daylight and energy performance with optimization into the design process has always been a challenge for designers. Most of the building environmental performance simulation tools require a considerable amount of time and iterations for achieving accurate results. Moreover, the combination of daylight and energy performances has always been an issue, as different software packages are needed to perform detailed calculations. A simplified method to overcome both issues using recent advances in software integration is explored here. As a case study, the optimization of external shadings in a typical office space in Australia is presented. Results are compared against common solutions adopted as industry standard practices. Visual comfort and energy efficiency are analysed in an integrated approach. The DIVA (Design, Iterate, Validate and Adapt) plug-in for Rhinoceros/Grasshopper software is used as the main tool, given its ability to effectively calculate daylight metrics (using the Radiance/Daysim engine) and energy consumption (using the EnergyPlus engine). The optimization process is carried out by parametrically controlling the shadings\u2019 geometries. Genetic Algorithms (GA) embedded in the evolutionary solver Galapagos are adopted in order to achieve close to optimum results by controlling iteration parameters. The optimized result, in comparison with conventional design techniques, reveals significant enhancement of comfort levels and energy efficiency. Benefits and drawbacks of the proposed strategy are then discussed. 
"} {"_id": "2e939ed3bb378ea966bf9f710fc1138f4e16ef38", "title": "Optimizing the CVaR via Sampling", "text": "Conditional Value at Risk (CVaR) is a prominent risk measure that is being used extensively in various domains. We develop a new formula for the gradient of the CVaR in the form of a conditional expectation. Based on this formula, we propose a novel sampling-based estimator for the gradient of the CVaR, in the spirit of the likelihood-ratio method. We analyze the bias of the estimator, and prove the convergence of a corresponding stochastic gradient descent algorithm to a local CVaR optimum. Our method allows us to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risk-sensitive controller for the game of Tetris."} {"_id": "7003d7252358bf82c8767d6416ef70cb422a82d1", "title": "Multidisciplinary Instruction with the Natural Language Toolkit", "text": "The Natural Language Toolkit (NLTK) is widely used for teaching natural language processing to students majoring in linguistics or computer science. This paper describes the design of NLTK, and reports on how it has been used effectively in classes that involve different mixes of linguistics and computer science students. We focus on three key issues: getting started with a course, delivering interactive demonstrations in the classroom, and organizing assignments and projects. In each case, we report on practical experience and make recommendations on how to use NLTK to maximum effect."} {"_id": "bee18c795cb6299f2f83636dd90a5914c66096f6", "title": "Barbed sutures in facial rejuvenation.", "text": "Self-retaining barbed sutures, innovations for nonsurgical facial and neck rejuvenation, are currently available as short APTOS threads or long WOFFLES threads. The author uses APTOS threads for malar rounding, facial tightening and firming, and uses WOFFLES threads as a sling, suspending ptotic facial tissues to the firm, dense tissues of the temporal scalp."} {"_id": "a2aa272b32c356ec9933b32ca5809c09f2d21b9f", "title": "Clockwork Convnets for Video Semantic Segmentation", "text": "Recent years have seen tremendous progress in still-image segmentation; however the na\u00efve application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video. We propose a video recognition framework that relies on two key observations: 1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and 2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of \u201cclockwork\u201d convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video. 
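The fixed-rate scheduling idea in the clockwork record above reduces to a simple control pattern: re-execute the slowly changing deep stages only every k-th frame and reuse cached outputs in between. A minimal sketch (editorial illustration; the two stand-in stage functions and the period are invented, not the paper's architecture):

```python
# Fixed-rate "clockwork" scheduling sketch: shallow features run every frame,
# deep semantic features are refreshed only on the slow clock and cached.
def stage_low(frame):             # shallow features: change quickly
    return f"low({frame})"

def stage_high(features):         # deep semantic features: change slowly
    return f"high({features})"

def clockwork_video(frames, period=3):
    cached_high, outputs = None, []
    for i, frame in enumerate(frames):
        low = stage_low(frame)                     # always updated
        if cached_high is None or i % period == 0:
            cached_high = stage_high(low)          # refreshed on the slow clock
        outputs.append((low, cached_high))         # fuse fresh low + cached high
    return outputs

for i, out in enumerate(clockwork_video([f"f{i}" for i in range(7)])):
    print(i, out)
```

With period k, the expensive deep stages run on roughly 1/k of the frames, which is the overall-computation saving the abstract attributes to the fixed-rate schedule.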
The accuracy and efficiency of clockwork convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video datasets."} {"_id": "c7b3f5bccb19f1a224eb87c6924f244b1511e680", "title": "Nonlinear Extensions of Reconstruction ICA", "text": "In a recent paper [1] it was observed that unsupervised feature learning with overcomplete features could be achieved using linear autoencoders (named Reconstruction Independent Component Analysis). This algorithm has been shown to outperform other well-known algorithms by penalizing the lack of diversity (or orthogonality) amongst features. In our project, we wish to extend and improve this algorithm to include other non-linearities. In this project we have considered three unsupervised learning algorithms: (a) Sparse Autoencoder (b) Reconstruction ICA (RICA), a linear autoencoder proposed in [1], and (c) Nonlinear RICA, a proposed extension of RICA for capturing nonlinearities in feature detection. Our research indicates that exploring non-linear extensions of RICA holds promise; preliminary results with the hyperbolic tangent function on the MNIST dataset showed impressive accuracy (comparable with sparse autoencoders), robustness, and required a fraction of the computational effort."} {"_id": "8c1ee9c9653b9e711000641b905971f65432ab95", "title": "Accelerating cine phase-contrast flow measurements using k-t BLAST and k-t SENSE.", "text": "Conventional phase-contrast velocity mapping in the ascending aorta was combined with k-t BLAST and k-t SENSE. Up to 5.3-fold net acceleration was achieved, enabling single breath-hold acquisitions. A standard phase-contrast (PC) sequence with interleaved acquisition of the velocity-encoded segments was modified to collect data in two stages: a high-resolution undersampled stage and a low-resolution fully sampled training stage. In addition, a modification of the k-t reconstruction strategy was tested. This strategy, denoted as \"plug-in,\" incorporates data acquired in the training stage into the final reconstruction for improved data consistency, similar to conventional keyhole. \"k-t SENSE plug-in\" was found to provide the best image quality and most accurate flow quantification. For this strategy, at least 10 training profiles are required to yield accurate stroke volumes (relative deviation <5%) and good image quality. In vivo 2D cine velocity mapping was performed in 6 healthy volunteers with 30-32 cardiac phases (spatial resolution 1.3 x 1.3 x 8-10 mm(3), temporal resolution of 18-38 ms), yielding relative stroke volumes of 106 +/- 18% (mean +/- 2*SD) and 112 +/- 15% for 3.8 x and 5.3 x net accelerations, respectively. In summary, k-t BLAST and k-t SENSE are promising approaches that permit significant scan-time reduction in PC velocity mapping, thus making high-resolution breath-held flow quantification possible."} {"_id": "97c862ac9f0b2df304a2a6d0af697c02d30da6a6", "title": "Cost-effective low-loss flexible optical engine with microlens-imprinted film for high-speed on-board optical interconnection", "text": "There is a strong demand for optical interconnection technology to overcome bandwidth bottlenecks in high-end server systems. The interconnection speed in present systems is approaching 10 Gb/s, and higher-speed interconnections over 25 Gb/s are being discussed. To achieve such optical interconnections in commercial production, it is necessary to develop lower-cost and higher-speed optical transceiver modules. 
We propose a flexible printed circuit optical engine (FPC-OE) with a microlens-imprinted film and a polymer waveguide to achieve low-cost and high-speed operation. The microlens-imprinted film can be produced at low cost by using nanoimprint technology and can drastically reduce the optical loss of the FPC-OE with the polymer waveguide. We successfully demonstrated error-free operation at 25 Gb/s with the fabricated optical transceiver that contains an FPC-OE, microlens-imprinted film, and a polymer waveguide."} {"_id": "680c730a9a184c8b54fa036cc19c980afe192274", "title": "Pulsewidth modulation for electronic power conversion", "text": "The efficient and fast control of electric power forms part of the key technologies of modern automated production. It is performed using electronic power converters. The converters transfer energy from a source to a controlled process in a quantized fashion, using semiconductor switches which are turned on and off at fast repetition rates. The algorithms which generate the switching functions \u2013 pulsewidth modulation techniques \u2013 are manifold. They range from simple averaging schemes to involved methods of real-time optimization. This paper gives an overview."} {"_id": "a59dda7562eda2cb57eeb95158d75c1da3e73e2e", "title": "Fuzzy Cognitive Map to model project management problems", "text": "Project management is a complex process impacted by numerous factors, from the external environment as well as internal factors completely or partially under the project manager's control. Managing projects successfully involves a complex amalgamation of comprehensive, informed planning, dynamic assessment and analysis of changes in external and internal factors, and the development and communication of updated strategies over the life of the project. Project management involves the interaction and analysis of many systems and requires the continuous integration and evaluation of large amounts of information. Fuzzy Cognitive Maps (FCM) allow us to encode project management knowledge and experiential results to create a useful model of the interacting systems. This paper covers the representation and development of a construction project management FCM that provides an integrated view of the most important concepts affecting construction project management and risk management. This paper then presents the soft computing approach of FCM to project management (PM) modeling and analysis. The resulting PM-FCM models the interaction of internal and external factors and offers an abstract conceptual model of interacting concepts for construction project management application."} {"_id": "b8b309631aff6f9e3ff9c4ab57221ff51353e04d", "title": "Design of an endoluminal NOTES robotic system", "text": "Natural orifice transluminal endoscopic surgery, or NOTES, allows for exceedingly minimally invasive surgery but has high requirements for the dexterity and force capabilities of the tools. An overview of the ViaCath System is presented. This system is a first-generation teleoperated robot for endoluminal surgery and consists of a master console with haptic interfaces, slave drive mechanisms, and 6 degree-of-freedom, long-shafted flexible instruments that run alongside a standard gastroscope or colonoscope. The system was validated through animal studies. It was discovered that the devices were difficult to introduce into the GI tract and manipulation forces were insufficient. 
The design of a second generation system is outlined with improvements to the instrument articulation section and a steerable overtube. Results of basic evaluation tests performed on the tools are also presented."} {"_id": "060a42937eed7ad1388c165feba0ebc0511952b2", "title": "Block-wise construction of tree-like relational features with monotone reducibility and redundancy", "text": "We describe an algorithm for constructing a set of tree-like conjunctive relational features by combining smaller conjunctive blocks. Unlike traditional level-wise approaches which preserve the monotonicity of frequency, our block-wise approach preserves monotonicity of feature reducibility and redundancy, which are important in propositionalization employed in the context of classification learning. With pruning based on these properties, our block-wise approach efficiently scales to features including tens of first-order atoms, far beyond the reach of state-of-the-art propositionalization or inductive logic programming systems."} {"_id": "443418d497a45a197a1a1a96d84ae54078ce3d8f", "title": "Bijective parameterization with free boundaries", "text": "We present a fully automatic method for generating guaranteed bijective surface parameterizations from triangulated 3D surfaces partitioned into charts. We do so by using a distortion metric that prevents local folds of triangles in the parameterization and a barrier function that prevents intersection of the chart boundaries. In addition, we show how to modify the line search of an interior point method to directly compute the singularities of the distortion metric and barrier functions to maintain a bijective map. By using an isometric metric that is efficient to compute and a spatial hash to accelerate the evaluation and gradient of the barrier function for the boundary, we achieve fast optimization times. Unlike previous methods, we do not require the boundary be constrained by the user to a non-intersecting shape to guarantee a bijection, and the boundary of the parameterization is free to change shape during the optimization to minimize distortion."} {"_id": "110caa791362b26dfeac76060e052c9ccc5c2356", "title": "Adaptive Parser-Centric Text Normalization", "text": "Text normalization is an important first step towards enabling many Natural Language Processing (NLP) tasks over informal text. While many of these tasks, such as parsing, perform best over fully grammatically correct text, most existing text normalization approaches narrowly define the task in the word-to-word sense; that is, the task is seen as that of mapping all out-of-vocabulary non-standard words to their in-vocabulary standard forms. In this paper, we take a parser-centric view of normalization that aims to convert raw informal text into grammatically correct text. To understand the real effect of normalization on the parser, we tie normalization performance directly to parser performance. Additionally, we design a customizable framework to address the often overlooked concept of domain adaptability, and illustrate that the system allows for transfer to new domains with a minimal amount of data and effort. 
Our experimental study over datasets from three domains demonstrates that our approach outperforms not only the state-of-the-art word-to-word normalization techniques, but also manual word-to-word annotations."} {"_id": "f4136bd7f3948be30c4c11876a1bf933e3cc8549", "title": "Path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments", "text": "Highly dynamic environments pose a particular challenge for motion planning due to the need for constant evaluation or validation of plans. However, due to the wide range of applications, an algorithm to safely plan in the presence of moving obstacles is required. In this paper, we propose a novel technique that provides computationally efficient planning solutions in environments with static obstacles and several dynamic obstacles with stochastic motions. Path-Guided APF-SR works by first applying a sampling-based technique to identify a valid, collision-free path in the presence of static obstacles. Then, an artificial potential field planning method is used to safely navigate through the moving obstacles using the path as an attractive intermediate goal bias. In order to improve the safety of the artificial potential field, repulsive potential fields around moving obstacles are calculated with stochastic reachable sets, a method previously shown to significantly improve planning success in highly dynamic environments. We show that Path-Guided APF-SR outperforms other methods that have high planning success in environments with 300 stochastically moving obstacles. Furthermore, planning is achievable in environments in which previously developed methods have failed."} {"_id": "37c998ede7ec9eeef7016a206308081bce0fc414", "title": "ITGovA: Proposition of an IT governance Approach", "text": "To cope with issues related to optimization, rationalization, risk management, economic value of technology and information assets, the implementation of appropriate IT governance seems an important need in public and private organizations. It is one of those concepts that suddenly emerged and became an important issue in the information technology area. Many organizations started with the implementation of IT governance to achieve a better alignment between business and IT; however, there is no established method for applying IT governance principles in companies. This paper proposes a new approach to implement IT governance based on five iterative phases. This approach is a critical business process that ensures that the business meets its strategic objectives while depending on IT resources for execution. Keywords\u2014Information technology, IT governance, lifecycle, strategic alignment, value"} {"_id": "148753559db7b59462dd24d4c9af60b2a73a66bf", "title": "Chemical Similarity Searching", "text": "This paper reviews the use of similarity searching in chemical databases. It begins by introducing the concept of similarity searching, differentiating it from the more common substructure searching, and then discusses the current generation of fragment-based measures that are used for searching chemical structure databases. The next sections focus upon two of the principal characteristics of a similarity measure: the coefficient that is used to quantify the degree of structural resemblance between pairs of molecules and the structural representations that are used to characterize molecules that are being compared in a similarity calculation. 
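A standard instance of the fragment-based similarity coefficient discussed in the chemical-similarity record is the Tanimoto coefficient over fingerprint bit sets. A minimal sketch (editorial illustration; the fragment identifiers and molecule names below are invented, not real fingerprints):

```python
# Fragment-based similarity search sketch: molecules as sets of fragment bits,
# ranked against a query by the Tanimoto (Jaccard) coefficient.
def tanimoto(fp_a, fp_b):
    """|A intersect B| / |A union B| for two fragment-bit sets."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

query = {1, 4, 9, 12, 17, 23}                      # toy query fingerprint
database = {
    "molecule_A": {1, 4, 9, 17, 31},
    "molecule_B": {2, 5, 11, 19},
    "molecule_C": {1, 4, 12, 23, 40, 41},
}

# Similarity searching: sort database molecules by decreasing Tanimoto score.
for name, fp in sorted(database.items(), key=lambda kv: -tanimoto(query, kv[1])):
    print(name, round(tanimoto(query, fp), 3))
```

Unlike substructure searching, which returns only exact containment matches, this ranking degrades gracefully: every database molecule receives a score, which is the practical appeal of similarity searching noted in the record.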
New types of similarity measure are then compared with current approaches, and examples are given of several applications that are related to similarity searching."} {"_id": "34e609610872e6ee8b5a46f6cf5ddc928d785385", "title": "Neural network model for time series prediction by reinforcement learning", "text": "Two important issues arise when constructing a neural network (NN) for time series prediction: proper selection of (1) the input dimension and (2) the time delay between the inputs. These two parameters determine the structure, computing complexity and accuracy of the NN. This paper formulates an autonomous data-driven approach to identify a parsimonious structure for the NN so as to reduce the prediction error and enhance the modeling accuracy. The reinforcement learning based dimension and delay estimator (RLDDE) is proposed. It involves a trial-and-error learning process to formulate a selection policy for designating the above-mentioned two parameters. The proposed method is evaluated by the prediction of the benchmark sunspot time series."} {"_id": "545e074f8aa21153f7b5d298aa5ba4fc7c89bd36", "title": "Violent Scenes Detection Using Mid-Level Violence Clustering", "text": "This work proposes a novel system for Violent Scenes Detection, which is based on the combination of visual and audio features with machine learning at segment-level. Multiple Kernel Learning is applied so that multimodality of videos can be maximized. In particular, Mid-level Violence Clustering is proposed in order for mid-level concepts to be implicitly learned, without using manually tagged annotations. Finally, a violence score for each shot is calculated. The whole system is trained on a dataset from the MediaEval 2013 Affect Task and evaluated by its official metric. The obtained results outperformed its best score."} {"_id": "0a1e664b66aae97d2f57b45d86dd7ac152e8fd92", "title": "End-to-End Waveform Utterance Enhancement for Direct Evaluation Metrics Optimization by Fully Convolutional Neural Networks", "text": "A speech enhancement model is used to map noisy speech to clean speech. In the training stage, an objective function is often adopted to optimize the model parameters. However, in the existing literature, there is an inconsistency between the model optimization criterion and the evaluation criterion for the enhanced speech. For example, in measuring speech intelligibility, most evaluation metrics are based on the short-time objective intelligibility (STOI) measure, while the frame-based mean square error (MSE) between estimated and clean speech is widely used in optimizing the model. Due to the inconsistency, there is no guarantee that the trained model can provide optimal performance in applications. In this study, we propose an end-to-end utterance-based speech enhancement framework using fully convolutional neural networks (FCN) to reduce the gap between the model optimization and the evaluation criterion. Because of the utterance-based optimization, temporal correlation information of long speech segments, or even at the entire utterance level, can be considered to directly optimize perception-based objective functions. As an example, we implemented the proposed FCN enhancement framework to optimize the STOI measure. Experimental results show that the STOI of test speech processed by the proposed approach is better than that of conventional MSE-optimized speech due to the consistency between the training and the evaluation targets. 
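The train/evaluate inconsistency described in this record can be demonstrated numerically. The sketch below (editorial illustration; the segment-wise correlation "proxy" is a stand-in for the actual STOI computation, and the signals are synthetic) shows two enhanced signals with nearly identical frame MSE but very different correlation-based scores:

```python
# Why frame MSE and an intelligibility-style objective can disagree:
# compare MSE against a mean per-segment correlation proxy.
import numpy as np

def frame_mse(clean, enhanced):
    return float(np.mean((clean - enhanced) ** 2))

def correlation_proxy(clean, enhanced, seg=256):
    """Mean per-segment Pearson correlation over the whole utterance."""
    scores = []
    for i in range(0, len(clean) - seg + 1, seg):
        c, e = clean[i:i + seg], enhanced[i:i + seg]
        c, e = c - c.mean(), e - e.mean()
        scores.append((c @ e) / (np.linalg.norm(c) * np.linalg.norm(e) + 1e-8))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 40 * np.pi, 4096))
noisy = clean + 0.5 * rng.standard_normal(4096)   # noisy "enhancement"
scaled = 0.3 * clean                               # attenuated but undistorted

print("MSE  :", round(frame_mse(clean, noisy), 3), round(frame_mse(clean, scaled), 3))
print("proxy:", round(correlation_proxy(clean, noisy), 3),
                round(correlation_proxy(clean, scaled), 3))
# The scaled signal scores a perfect correlation despite nonzero MSE,
# so minimizing MSE need not maximize the evaluation-style objective.
```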
Moreover, by integrating the STOI into model optimization, the intelligibility of the enhanced speech, for both human subjects and an automatic speech recognition system, is also substantially improved compared to speech generated based on the minimum MSE criterion."} {"_id": "9af4ef7b5b075e9dc88ed0294891c2a9146adca6", "title": "Impact of User Pairing on 5G Non-Orthogonal Multiple Access", "text": "Non-orthogonal multiple access (NOMA) represents a paradigm shift from conventional orthogonal multiple access (MA) concepts, and has been recognized as one of the key enabling technologies for 5G systems. In this paper, the impact of user pairing on the performance of two NOMA systems, NOMA with fixed power allocation (F-NOMA) and cognitive radio inspired NOMA (CR-NOMA), is characterized. For F-NOMA, both analytical and numerical results are provided to demonstrate that F-NOMA can offer a larger sum rate than orthogonal MA, and the performance gain of F-NOMA over conventional MA can be further enlarged by selecting users whose channel conditions are more distinctive. For CR-NOMA, the quality of service (QoS) for users with the poorer channel condition can be guaranteed since the transmit power allocated to other users is constrained following the concept of cognitive radio networks. Because of this constraint, CR-NOMA has different behavior compared to F-NOMA. For example, for the user with the best channel condition, CR-NOMA prefers to pair it with the user with the second best channel condition, whereas the user with the worst channel condition is preferred by F-NOMA. I. INTRODUCTION Multiple access in 5G mobile networks is an emerging research topic, since it is key for the next generation network to keep pace with the exponential growth of mobile data and multimedia traffic [1] and [2]. Non-orthogonal multiple access (NOMA) has recently received considerable attention as a promising candidate for 5G multiple access [3]\u2013[6]. Particularly, NOMA uses the power domain for multiple access, where different users are served at different power levels. The users with better channel conditions employ successive interference cancellation (SIC) to remove the messages intended for other users before decoding their own [7]. The benefit of using NOMA can be illustrated by the following example. Consider that there is a user close to the edge of its cell, denoted by A, whose channel condition is very poor. For conventional MA, an orthogonal bandwidth channel, e.g., a time slot, will be allocated to this user, and the other users cannot use this time slot. The key idea of NOMA is to squeeze another user with better channel condition, denoted by B, into this time slot. Since A\u2019s channel condition is very poor, the interference from B will not cause much performance degradation to A, but the overall system throughput can be significantly improved since additional information can be delivered between the base station (BS) and B. The design of NOMA for uplink transmissions has been proposed in [4], and the performance of NOMA with randomly deployed mobile stations has been characterized in [5]. The combination of cooperative diversity with NOMA has been considered in [8]. 
Since multiple users are admitted at the same time, frequency and spreading code, co-channel interference will be strong in NOMA systems, i.e., a NOMA system is interference limited. As a result, it may not be realistic to ask all the users in the system to perform NOMA jointly. A promising alternative is to build a hybrid MA system, in which NOMA is combined with conventional MA. In particular, the users in the system can be divided into multiple groups, where NOMA is implemented within each group and different groups are allocated with orthogonal bandwidth resources. Obviously the performance of this hybrid MA scheme is very dependent on which users are grouped together, and the aim of this paper is to investigate the effect of this grouping. Particularly, in this paper, we focus on a downlink communication scenario with one BS and multiple users, where the users are ordered according to their connections to the BS, i.e., the m-th user has the m-th worst connection to the BS. Consider that two users, the m-th user and the n-th user, are selected for performing NOMA jointly, where m < n. The impact of user pairing on the performance of NOMA will be characterized in this paper, where two types of NOMA will be considered. One is based on fixed power allocation, termed F-NOMA, and the other is cognitive radio inspired NOMA, termed CR-NOMA. For the F-NOMA scheme, the probability that F-NOMA can achieve a larger sum rate than conventional MA is first studied, where an exact expression for this probability as well as its high signal-to-noise ratio (SNR) approximation are obtained. These developed analytical results demonstrate that it is almost certain for F-NOMA to outperform conventional MA, and the channel quality of the n-th user is critical to this probability. In addition, the gap between the sum rates achieved by F-NOMA and conventional MA is also studied, and it is shown that this gap is determined by how different the two users\u2019 channel conditions are, as initially reported in [8]. For example, if n = M, it is preferable to choose m = 1, i.e., pairing the user with the best channel condition with the user with the worst channel condition. The reason for this phenomenon can be explained as follows. When m is small, the m-th user\u2019s channel condition is poor, and the data rate supported by this user\u2019s channel is also small. Therefore the spectral efficiency of conventional MA is low, since the bandwidth allocated to this user cannot be accessed by other users. The use of F-NOMA ensures that the n-th user will have access to the resource allocated to the m-th user. If (n\u2212m) is small, the n-th user\u2019s channel quality is similar to the m-th user\u2019s, and the benefit to use NOMA is limited. But if n >> m, the n-th user can use the bandwidth resource much more efficiently than the m-th user, i.e., a larger (n\u2212m) will result in a larger performance gap between F-NOMA and conventional MA. The key idea of CR-NOMA is to opportunistically serve the n-th user on the condition that the m-th user\u2019s quality of service (QoS) is guaranteed. Particularly the transmit power"} {"_id": "229b7759e2ee9e03712836a9504d50bd6c66a973", "title": "Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences", "text": "We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with M \u2265 3 components. 
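The F-NOMA versus orthogonal-MA comparison in the NOMA record above is easy to reproduce numerically. A minimal sketch (editorial illustration; the SNR, channel gains, and power split are invented example values, not figures from the paper):

```python
# Two-user downlink: F-NOMA (weak user gets more power, strong user uses SIC)
# versus orthogonal MA (each user gets half the bandwidth at full power).
import numpy as np

snr = 100.0                    # transmit SNR (linear), illustrative
h_weak, h_strong = 0.1, 2.0    # channel gains |h|^2 of the m-th and n-th users
a_weak, a_strong = 0.8, 0.2    # fixed NOMA power allocation (sums to 1)

# Weak user decodes its own signal treating the strong user's as noise;
# strong user first removes the weak user's signal via SIC.
r_weak_noma = np.log2(1 + a_weak * snr * h_weak / (a_strong * snr * h_weak + 1))
r_strong_noma = np.log2(1 + a_strong * snr * h_strong)

r_weak_oma = 0.5 * np.log2(1 + snr * h_weak)
r_strong_oma = 0.5 * np.log2(1 + snr * h_strong)

print("F-NOMA sum rate:", round(r_weak_noma + r_strong_noma, 2), "bits/s/Hz")
print("OMA    sum rate:", round(r_weak_oma + r_strong_oma, 2), "bits/s/Hz")
# Increasing the gain gap h_strong / h_weak widens F-NOMA's advantage,
# matching the pairing guidance (pair most distinctive users) in the record.
```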
Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [21]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least 1 \u2212 e^{\u2212\u03a9(M)}. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings."} {"_id": "5c905b298c074d17be3fdbc47705fe5cd0797c2e", "title": "Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications", "text": "We describe a generalization of the PAC learning model that is based on statistical decision theory. In this model the learner receives randomly drawn examples, each example consisting of an instance x \u2208 X and an outcome y \u2208 Y, and tries to find a decision rule h: X \u2192 A, where h \u2208 H, that specifies the appropriate action a \u2208 A to take for each instance x in order to minimize the expectation of a loss l(y, a). Here X, Y, and A are arbitrary sets, l is a real-valued function, and examples are generated according to an arbitrary joint distribution on X \u00d7 Y. Special cases include the problem of learning a function from X into Y, the problem of learning the conditional probability distribution on Y given X (regression), and the problem of learning a distribution on X (density estimation). We give theorems on the uniform convergence of empirical loss estimates to true expected loss rates for certain decision rule spaces H, and show how this implies learnability with bounded sample size, disregarding computational complexity. As an application, we give distribution-independent upper bounds on the sample size needed for learning with feedforward neural networks. Our theorems use a generalized notion of VC dimension that applies to classes of real-valued functions, adapted from Vapnik and Pollard\u2019s work, and a notion of capacity and metric dimension for classes of functions that map into a bounded metric space. \u00a9 1992 Academic Press, Inc."} {"_id": "70cd98a5710179eb20b7987afed80a044af0523e", "title": "Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima", "text": "We establish theoretical results concerning all local optima of various regularized M-estimators, where both loss and penalty functions are allowed to be nonconvex. Our results show that as long as the loss function satisfies restricted strong convexity and the penalty function satisfies suitable regularity conditions, any local optimum of the composite objective function lies within statistical precision of the true parameter vector. Our theory covers a broad class of nonconvex objective functions, including corrected versions of the Lasso for errors-in-variables linear models; regression in generalized linear models using nonconvex regularizers such as SCAD and MCP; and graph and inverse covariance matrix estimation. 
{"_id": "70cd98a5710179eb20b7987afed80a044af0523e", "title": "Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima", "text": "We establish theoretical results concerning all local optima of various regularized M-estimators, where both loss and penalty functions are allowed to be nonconvex. Our results show that as long as the loss function satisfies restricted strong convexity and the penalty function satisfies suitable regularity conditions, any local optimum of the composite objective function lies within statistical precision of the true parameter vector. Our theory covers a broad class of nonconvex objective functions, including corrected versions of the Lasso for errors-in-variables linear models; regression in generalized linear models using nonconvex regularizers such as SCAD and MCP; and graph and inverse covariance matrix estimation. On the optimization side, we show that a simple adaptation of composite gradient descent may be used to compute a global optimum up to the statistical precision \u03b5stat in log(1/\u03b5stat) iterations, which is the fastest possible rate of any first-order method. We provide a variety of simulations to illustrate the sharpness of our theoretical predictions."} {"_id": "7f0125544c265cd97545d3ed6f9f7a58d7e93f6d", "title": "On the Quality of the Initial Basin in Overspecified Neural Networks", "text": "Deep learning, in the form of artificial neural networks, has achieved remarkable practical success in recent years, for a variety of difficult machine learning applications. However, a theoretical explanation for this remains a major open problem, since training neural networks involves optimizing a highly non-convex objective function, and is known to be computationally hard in the worst case. In this work, we study the geometric structure of the associated non-convex objective function, in the context of ReLU networks and starting from a random initialization of the network parameters. We identify some conditions under which it becomes more favorable to optimization, in the sense of (i) High probability of initializing at a point from which there is a monotonically decreasing path to a global minimum; and (ii) High probability of initializing at a basin (suitably defined) with a small minimal objective value. A common theme in our results is that such properties are more likely to hold for larger (\u201coverspecified\u201d) networks, which accords with some recent empirical and theoretical observations."} {"_id": "9b8be6c3ebd7a79975067214e5eaea05d4ac2384", "title": "Gradient Descent Converges to Minimizers", "text": "We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory."}
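The composite gradient method referenced in the M-estimation abstract above reduces, in the convex \u21131 case, to the familiar proximal gradient (ISTA) iteration; the sketch below shows that special case on a toy Lasso problem. The design matrix, noise level, and penalty weight are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 50, 0.1
beta_true = np.zeros(p); beta_true[:5] = 1.0
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.1 * rng.normal(size=n)

def soft(v, t):                       # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(X, 2) ** 2 / n     # Lipschitz constant of the gradient
beta = np.zeros(p)
for _ in range(300):                  # composite (proximal) gradient descent
    grad = X.T @ (X @ beta - y) / n   # gradient of the smooth loss
    beta = soft(beta - grad / L, lam / L)

print("support:", np.flatnonzero(np.abs(beta) > 1e-3))
print("error:", np.linalg.norm(beta - beta_true))
```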
{"_id": "ff457fd8330129f8069c5c427c8be3228dc17884", "title": "2D electronics - opportunities and limitations", "text": "2D materials (2Ds), in particular the 2Ds beyond graphene, have attracted considerable attention in the transistor community. While rapid progress on 2D transistors has been achieved recently, the prospects of 2D electronics are still under debate. In the present paper we discuss our view of the potential of 2D transistors for digital CMOS and for flexible electronics. We show that 2D MOSFETs show promise for ultimately scaled CMOS due to their excellent electrostatic integrity and their ability to suppress source-drain tunneling. Moreover, the 2Ds represent a promising option for radio frequency (RF) flexible electronics since they offer an attractive combination of mechanical (bendability, stretchability) and electrical (reasonable mobility) properties."} {"_id": "ea236e24b5b938ae7d7d382d4378f3b311c011a0", "title": "Advancing Land-Sea Conservation Planning: Integrating Modelling of Catchments, Land-Use Change, and River Plumes to Prioritise Catchment Management and Protection", "text": "Human-induced changes to river loads of nutrients and sediments pose a significant threat to marine ecosystems. Ongoing land-use change can further increase these loads, and amplify the impacts of land-based threats on vulnerable marine ecosystems. Consequently, there is a need to assess these threats and prioritise actions to mitigate their impacts. A key question regarding prioritisation is whether actions in catchments to maintain coastal-marine water quality can be spatially congruent with actions for other management objectives, such as conserving terrestrial biodiversity. In selected catchments draining into the Gulf of California, Mexico, we employed Land Change Modeller to assess the vulnerability of areas with native vegetation to conversion into crops, pasture, and urban areas. We then used SedNet, a catchment modelling tool, to map the sources and estimate pollutant loads delivered to the Gulf by these catchments. Following these analyses, we used modelled river plumes to identify marine areas likely influenced by land-based pollutants. Finally, we prioritised areas for catchment management based on objectives for conservation of terrestrial biodiversity and objectives for water quality that recognised links between pollutant sources and affected marine areas. Our objectives for coastal-marine water quality were to reduce sediment and nutrient discharges from anthropic areas, and minimise future increases in coastal sedimentation and eutrophication. Our objectives for protection of terrestrial biodiversity covered species of vertebrates. We used Marxan, a conservation planning tool, to prioritise interventions and explore spatial differences in priorities for both objectives. Notable differences in the distributions of land values for terrestrial biodiversity and coastal-marine water quality indicated the likely need for trade-offs between catchment management objectives. However, there were priority areas that contributed to both sets of objectives. Our study demonstrates a practical approach to integrating models of catchments, land-use change, and river plumes with conservation planning software to inform prioritisation of catchment management."} {"_id": "e12fd1721516d0f90de39d63a4e273b9d13d0a0c", "title": "Machine Learning in Medical Applications", "text": "Research in Machine Learning methods to-date remains centered on technological issues and is mostly application driven. This letter summarizes successful applications of machine learning methods that were presented at the Workshop on Machine Learning in Medical Applications. The goals of the workshop were to foster fundamental and applied research in the application of machine learning methods to medical problem solving and to medical research, to provide a forum for reporting significant results, to determine whether Machine Learning methods are able to underpin the research and development on intelligent systems for medical applications, and to identify those areas where increased research is likely to yield advances.
A number of recommendations for a research agenda were produced, including both technical and human-centered issues."} {"_id": "87ce789dbddfebd296993597d72e1950b846e99f", "title": "Multi-view Clustering via Multi-manifold Regularized Nonnegative Matrix Factorization", "text": "Multi-view clustering integrates complementary information from multiple views to gain better clustering performance rather than relying on a single view. NMF-based multi-view clustering algorithms have shown their competitiveness among different multi-view clustering algorithms. However, NMF fails to preserve the locally geometrical structure of the data space. In this paper, we propose a multi-manifold regularized nonnegative matrix factorization framework (MMNMF) which can preserve the locally geometrical structure of the manifolds for multi-view clustering. MMNMF assumes that the intrinsic manifold of the dataset is embedded in a convex hull of all the views' manifolds, and incorporates such an intrinsic manifold and an intrinsic (consistent) coefficient matrix with a multi-manifold regularizer to preserve the locally geometrical structure of the multi-view data space. We use a linear combination to construct the intrinsic manifold, and propose two strategies to find the intrinsic coefficient matrix, which lead to two instances of the framework. Experimental results show that the proposed algorithms outperform existing NMF-based algorithms for multi-view clustering."} {"_id": "6cfdc673b5e0708eedf3fad9fb550a28e672b8aa", "title": "Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks", "text": "Vasculature is known to be of key biological significance, especially in the study of cancer. As such, considerable effort has been focused on the automated measurement and analysis of vasculature in medical and pre-clinical images. In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods is well suited to microscopy images of tumor vasculature. In order to address this we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators."}
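MMNMF itself is multi-view, but its key ingredient, a manifold regularizer added to NMF, is easy to show in the single-view case. The sketch below implements the classic graph-regularized multiplicative updates under a hypothetical affinity graph; it is a simplified stand-in, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, lam = 100, 40, 5, 0.5
X = np.abs(rng.normal(size=(d, n)))            # nonnegative data, columns = samples
# Hypothetical affinity graph over samples (a crude similarity, then sparsified).
S = np.exp(-np.square(X.T[:, None] - X.T[None]).sum(-1) / d)
W_graph = S * (S > np.quantile(S, 0.9))
D = np.diag(W_graph.sum(1))                    # degree matrix; L = D - W_graph

U = np.abs(rng.normal(size=(d, k)))
V = np.abs(rng.normal(size=(n, k)))
for _ in range(200):
    U *= (X @ V) / (U @ (V.T @ V) + 1e-9)
    # Multiplicative update including the manifold term lam * tr(V^T L V):
    V *= (X.T @ U + lam * W_graph @ V) / (V @ (U.T @ U) + lam * D @ V + 1e-9)

print("reconstruction error:", np.linalg.norm(X - U @ V.T))
```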
{"_id": "07528274309b357651919c59bea8fdafa1116277", "title": "Building fast and compact convolutional neural networks for offline handwritten Chinese character recognition", "text": "Like other problems in computer vision, offline handwritten Chinese character recognition (HCCR) has achieved impressive results using convolutional neural network (CNN)-based methods. However, larger and deeper networks are needed to deliver state-of-the-art results in this domain. Such networks intuitively appear to incur high computational cost, and require the storage of a large number of parameters, which renders them infeasible for deployment in portable devices. To solve this problem, we propose a Global Supervised Low-rank Expansion (GSLRE) method and an Adaptive Drop-weight (ADW) technique to address the problems of speed and storage capacity. We design a nine-layer CNN for HCCR consisting of 3,755 classes, and devise an algorithm that can reduce the network\u2019s computational cost by nine times and compress the network to 1/18 of the original size of the baseline model, with only a 0.21% drop in accuracy. In tests, the proposed algorithm can still surpass the best single-network performance reported thus far in the literature while requiring only 2.3 MB for storage. Furthermore, when integrated with our effective forward implementation, the recognition of an offline character image takes only 9.7 ms on a CPU. Compared with the state-of-the-art CNN model for HCCR,"} {"_id": "d9ef20252f9d90295460953e8ab78667b66919ad", "title": "Circuit Fingerprinting Attacks: Passive Deanonymization of Tor Hidden Services", "text": "This paper sheds light on crucial weaknesses in the design of hidden services that allow us to break the anonymity of hidden service clients and operators passively. In particular, we show that the circuits, paths established through the Tor network, used to communicate with hidden services exhibit a very different behavior compared to a general circuit. We propose two attacks, under two slightly different threat models, that could identify a hidden service client or operator using these weaknesses. We found that we can identify the users\u2019 involvement with hidden services with more than 98% true positive rate and less than 0.1% false positive rate with the first attack, and 99% true positive rate and 0.07% false positive rate with the second. We then revisit the threat model of previous website fingerprinting attacks, and show that previous results are directly applicable, with greater efficiency, in the realm of hidden services. Indeed, we show that we can correctly determine which of the 50 monitored pages the client is visiting with 88% true positive rate and false positive rate as low as 2.9%, and correctly deanonymize 50 monitored hidden service servers with true positive rate of 88% and false positive rate of 7.8% in an open world setting."}
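GSLRE and ADW are not spelled out in the abstract above, but the low-rank idea underlying such compression schemes can be sketched generically: factor a trained weight matrix through a truncated SVD so one layer becomes two thinner ones, trading a small accuracy drop for fewer multiply-adds and parameters. The dimensions and rank below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 256, 32
W = rng.normal(size=(d_out, d_in))             # pretrained dense layer (stand-in)

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]                     # (d_out, rank): second thin layer
B = Vt[:rank]                                  # (rank, d_in): first thin layer

x = rng.normal(size=d_in)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
params = (A.size + B.size) / W.size
print(f"relative output error {err:.3f} with {params:.2%} of the parameters")
```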
{"_id": "e23222907f95c1fcdc87dc3d3cd93edeaa56fa66", "title": "MarrNet: 3D Shape Reconstruction via 2.5D Sketches", "text": "3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations are scarce in real images. Previous work chose to train on synthetic data with ground truth 3D information, but suffered from domain adaptation issues when tested on real data. In this work, we propose MarrNet, an end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shape, 2.5D sketches are much easier to recover from a 2D image; models that recover 2.5D sketches are also more likely to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, systems can learn purely from synthetic data. This is because we can easily render realistic 2.5D sketches without modeling object appearance variations in real images, including lighting, texture, etc. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches; the framework is therefore end-to-end trainable on real images, requiring no human annotations. Our model achieves state-of-the-art performance on 3D shape reconstruction."} {"_id": "8eea2b1ca6f62ffaaa96f6d25990ad433eb52588", "title": "Exploring the influential factors in continuance usage of mobile social Apps: Satisfaction, habit, and customer value perspectives", "text": "Mobile application software (Apps) has grown explosively in conjunction with the worldwide use of smartphones in recent years. Among numerous categories of mobile Apps, social Apps were one of those with the greatest growth in 2013. Despite abundant research on users\u2019 behavior intention of mobile App usage, few studies have focused on investigating key determinants of users\u2019 continuance intention regarding social Apps. To fill this gap, we integrated customer value perspectives to explore the influential factors in the continuance intention of social App use. Moreover, users\u2019 satisfaction and habit from both the marketing and psychology literature were also incorporated into the research model. A total of 378 valid questionnaires were collected by survey method, and structural equation modeling was employed in the subsequent data analysis. The results indicate that the continuance usage of social Apps is driven by users\u2019 satisfaction, tight connection with others, and hedonic motivation to use the Apps. In addition, full mediation effects of satisfaction and habit were found between perceived usefulness and intention to continue use. These findings extend our understanding of users\u2019 continuance intention in the context of social Apps. Discussion and implications are provided."} {"_id": "eecf83a2ed7765fc2729bc9e987a56a3d848dbb2", "title": "Perceivable Light Fields: Matching the Requirements Between the Human Visual System and Autostereoscopic 3-D Displays", "text": "Recently, there has been a substantial increase in efforts to develop 3-D visualization technologies that can provide the viewers with a realistic 3-D visual experience. Various terms such as \u201creality communication\u201d have been used to categorize these efforts. In order to provide the viewers with a complete and realistic visual sensation, the display or visualization system and the displayed content need to match the physiological 3-D information sensing capabilities of the human visual system, which can be quite complex. These may include spatial and temporal resolutions, depth perception, dynamic range, spectral contents, nonlinear effects, and vergence accommodation effects. In this paper, first we present an overview of some of the 3-D display research efforts which have been extensively pursued in Asia, Europe, and North America among other areas. Based on the limitations and comfort-based requirements of the human visual system when viewing a nonnatural visual input from 3-D displays, we present an analytical framework that combines main perception and human visual requirements with analytical tools and principles used in related disciplines such as optics, computer graphics, computational imaging, and signal processing.
Building on the widely used notion of light fields, we define a notion of perceivable light fields to account for the physiological requirements of the human visual system, and propagate it back to the display device to determine the display device specifications. This helps us clarify the fundamental and practical requirements of 3-D display devices for reality-viewing communication. In view of the proposed analytical framework, we overview various methods that can be applied to handle the extensive information that needs to be displayed in order to meet the requirements imposed by the human visual system."} {"_id": "75235e03ac0ec643e8a784f432e6d1567eea81b7", "title": "Advances in data stream mining", "text": "Mining data streams has been a focal point of research interest over the past decade. Hardware and software advances have contributed to the significance of this area of research by introducing faster than ever data generation. This rapidly generated data has been termed as data streams. Credit card transactions, Google searches, phone calls in a city, and many others are typical data streams. In many important applications, it is inevitable to analyze this streaming data in real time. Traditional data mining techniques have fallen short in addressing the needs of data stream mining. Randomization, approximation, and adaptation have been used extensively in developing new techniques or adapting existing ones to enable them to operate in a streaming environment. This paper reviews key milestones and the state of the art in the data stream mining area. Future insights are also presented."} {"_id": "29232c81c51b961ead3d38e6838fe0fb9c279e01", "title": "Visual tracking with online Multiple Instance Learning", "text": "In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called \u201ctracking by detection\u201d have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance."}
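The core MIL ingredient used by such trackers is the bag likelihood; under the common Noisy-OR model it takes one line. This is a generic sketch of that ingredient, not the paper's online boosting implementation.

```python
import numpy as np

def bag_probability(instance_probs):
    """Noisy-OR: a bag is positive if at least one instance is positive."""
    return 1.0 - np.prod(1.0 - np.asarray(instance_probs))

# A positive bag of image patches cropped around an imprecise tracker
# location: one patch truly contains the object (p=0.9), the rest are
# background, yet the bag as a whole is still confidently positive.
print(bag_probability([0.9, 0.1, 0.05]))   # high bag probability
print(bag_probability([0.1, 0.1, 0.05]))   # low: likely a negative bag
```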
{"_id": "e7965fe4d25e59bc39870c760ef0d62024c9e2a3", "title": "Frontal terminations for the inferior fronto-occipital fascicle: anatomical dissection, DTI study and functional considerations on a multi-component bundle", "text": "The anatomy and functional role of the inferior fronto-occipital fascicle (IFOF) remain poorly known. We accurately analyze its course and the anatomical distribution of its frontal terminations. We propose a classification of the IFOF in different subcomponents. Ten hemispheres (5 left, 5 right) were dissected with Klingler\u2019s technique. In addition to the IFOF dissection, we performed a 4-T diffusion tensor imaging study on a single healthy subject. We identified two layers of IFOF. The first one is superficial and antero-superiorly directed, terminating in the inferior frontal gyrus. The second is deeper and consists of three portions: posterior, middle and anterior. The posterior component terminates in the middle frontal gyrus (MFG) and dorso-lateral prefrontal cortex. The middle component terminates in the MFG and lateral orbito-frontal cortex. The anterior one is directed to the orbito-frontal cortex and frontal pole. An in vivo tractography study confirmed these anatomical findings. We suggest that the distribution of IFOF fibers within the frontal lobe corresponds to a fine functional segmentation. The IFOF can be considered a \u201cmulti-function\u201d bundle, with each anatomical subcomponent subserving different brain processing. The superficial layer and the posterior component of the deep layer, which connect the occipital extrastriate, temporo-basal and inferior frontal cortices, might subserve semantic processing. The middle component of the deep layer could play a role in multimodal sensory\u2013motor integration. Finally, the anterior component of the deep layer might be involved in emotional and behavioral aspects."} {"_id": "c8052beb3f7c41323d4de5930ef602511dbd553b", "title": "Digital Marketing Maturity Models : Overview and Comparison", "text": "The variety of available digital tools, strategies and activities might confuse and disorient even an experienced marketer. This applies in particular to B2B companies, which are usually less flexible than B2C companies in taking up digital technology. B2B companies lack a framework that corresponds to the specifics of the B2B business, and which helps to evaluate a company\u2019s capabilities and to choose an appropriate path. A B2B digital marketing maturity model helps to fill this gap. However, modern marketing offers no widely approved digital marketing maturity model, and thus some marketing institutions provide their own tools. The purpose of this paper is building an optimized B2B digital marketing maturity model based on a SWOT (strengths, weaknesses, opportunities, and threats) analysis of existing models. The current study provides an analytical review of the existing digital marketing maturity models with open access. The results of the research are twofold. First, the provided SWOT analysis outlines the main advantages and disadvantages of existing models. Second, the strengths of existing digital marketing maturity models help to identify the main characteristics and the structure of an optimized B2B digital marketing maturity model. The research findings indicate that only one out of three analyzed models could be used as a separate tool. This study is among the first examining the use of maturity models in digital marketing. It helps businesses to choose the most effective of the existing digital marketing models. Moreover, it creates a base for future research on digital marketing maturity models. This study contributes to the emerging B2B digital marketing literature by providing a SWOT analysis of the existing digital marketing maturity models and suggesting a structure and main characteristics of an optimized B2B digital marketing maturity model. Keywords\u2014B2B digital marketing strategy, digital marketing, digital marketing maturity model, SWOT analysis."} {"_id": "2aa5c1340d7f54e38c707ff3b2275e4e3052150f", "title": "Watershed Cuts: Minimum Spanning Forests and the Drop of Water Principle", "text": "We study the watersheds in edge-weighted graphs.
We define the watershed cuts following the intuitive idea of drops of water flowing on a topographic surface. We first establish the consistency of these watersheds: they can be equivalently defined by their \u201ccatchment basins\u201d (through a steepest descent property) or by the \u201cdividing lines\u201d separating these catchment basins (through the drop of water principle). Then, we prove, through an equivalence theorem, their optimality in terms of minimum spanning forests. Afterward, we introduce a linear-time algorithm to compute them. To the best of our knowledge, similar properties are not verified in other frameworks and the proposed algorithm is the most efficient existing algorithm, both in theory and in practice. Finally, the defined concepts are illustrated in image segmentation, leading to the conclusion that the proposed approach improves, on the tested images, the quality of watershed-based segmentations."} {"_id": "7097d025d2ac36154de75f65b3c33e213a53f037", "title": "Eating when bored: revision of the emotional eating scale with a focus on boredom.", "text": "OBJECTIVE\nThe current study explored whether eating when bored is a distinct construct from other negative emotions by revising the emotional eating scale (EES) to include a separate boredom factor. Additionally, the relative endorsement of eating when bored compared to eating in response to other negative emotions was examined.\n\n\nMETHOD\nA convenience sample of 139 undergraduates completed open-ended questions regarding their behaviors when experiencing different levels of emotions. Participants were then given the 25-item EES with 6 additional items designed to measure boredom.\n\n\nRESULTS\nOn the open-ended items, participants more often reported eating in response to boredom than the other emotions. Exploratory factor analysis showed that boredom is a separate construct from other negative emotions. Additionally, the most frequently endorsed item on the EES was \"eating when bored\".\n\n\nCONCLUSIONS\nThese results suggest that boredom is an important construct, and that it should be considered a separate dimension of emotional eating."}
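The equivalence proved in the watershed-cut abstract above suggests a compact implementation: grow a minimum spanning forest from marker (regional-minimum) nodes using Kruskal's edge order with union-find, never merging two differently labeled trees. The graph encoding and explicit markers below are assumptions for illustration, and this sketch pays for sorting, whereas the paper's algorithm is linear-time.

```python
# Minimal sketch: a watershed cut as a minimum spanning forest rooted at
# marker nodes, via Kruskal's edge ordering with path-compressed union-find.
def watershed_msf(n_nodes, edges, markers):
    # edges: (weight, u, v) tuples; markers: node -> basin label
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    label = {find(m): basin for m, basin in markers.items()}
    for w, u, v in sorted(edges):         # steepest (cheapest) edges first
        ru, rv = find(u), find(v)
        if ru == rv or (ru in label and rv in label):
            continue                      # never merge two different basins
        parent[ru] = rv
        if ru in label:
            label[rv] = label.pop(ru)     # carry the basin label to the root
    return [label.get(find(x)) for x in range(n_nodes)]

edges = [(1, 0, 1), (5, 1, 2), (1, 2, 3), (2, 0, 2)]
print(watershed_msf(4, edges, {0: "A", 3: "B"}))   # ['A', 'A', 'B', 'B']
```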
{"_id": "f4d65af751f0a6448f40087267dfd505e885405c", "title": "Classification of seizure based on the time-frequency image of EEG signals using HHT and SVM", "text": "The detection of seizure activity in electroencephalogram (EEG) signals is crucial for the classification of epileptic seizures. However, epileptic seizures occur irregularly and unpredictably, so automatic seizure detection in EEG recordings is highly required. In this work, we present a new technique for seizure classification of EEG signals using the Hilbert\u2013Huang transform (HHT) and a support vector machine (SVM). In our method, the HHT-based time-frequency representation (TFR) is considered as a time-frequency image (TFI), the segmentation of the TFI is implemented based on the frequency bands of the rhythms of EEG signals, and the histogram of grayscale sub-images is computed. Statistical features such as mean, variance, skewness and kurtosis of pixel intensity in the histogram have been extracted. The SVM with radial basis function (RBF) kernel has been employed for classification of seizure and nonseizure EEG signals. The classification accuracy and receiver operating characteristics (ROC) curve have been used for evaluating the performance of the classifier. Experimental results show that the best average classification accuracy of this algorithm can reach 99.125% with the theta rhythm of EEG signals."} {"_id": "210824e7e4c8fb59afa4e2533498e84889409e5d", "title": "MagicFuzzer: Scalable deadlock detection for large-scale applications", "text": "We present MagicFuzzer, a novel dynamic deadlock detection technique. Unlike existing techniques that locate potential deadlock cycles from an execution, it iteratively prunes lock dependencies that have no incoming or outgoing edge. Combined with a novel thread-specific strategy, it dramatically shrinks the size of the lock dependency set for cycle detection, improving the efficiency and scalability of such detection significantly. In the real deadlock confirmation phase, it uses a new strategy to actively schedule threads of an execution against the whole set of potential deadlock cycles. We have implemented a prototype and evaluated it on large-scale C/C++ programs. The experimental results confirm that our technique is significantly more effective and efficient than existing techniques."} {"_id": "7ae45875848ded54d99077e73038c482ea87934f", "title": "Positive Definite Kernels in Machine Learning", "text": "This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hilbert spaces, the natural extension of the set of functions {k(x, \u00b7), x \u2208 X} associated with a kernel k defined on a space X. We discuss at length the construction of kernel functions that take advantage of well-known statistical models. We provide an overview of numerous data-analysis methods which take advantage of reproducing kernel Hilbert spaces and discuss the idea of combining several kernels to improve the performance on certain tasks. We also provide a short cookbook of different kernels which are particularly useful for certain datatypes such as images, graphs or speech segments. Remark: This report is a draft. I apologize in advance for the numerous mistakes, typos, and not always well written parts it contains. Comments and suggestions will be highly appreciated."}
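An end-to-end toy version of such a pipeline is easy to assemble with scikit-learn. The band-power features below stand in for the paper's HHT-based time-frequency image histograms (an explicit simplification), and the synthetic "seizure" signals are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 4, 1 / fs)                      # 4-second epochs
bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]    # delta, theta, alpha, beta

def band_powers(sig):
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    p = np.abs(np.fft.rfft(sig)) ** 2
    return [p[(f >= lo) & (f < hi)].sum() for lo, hi in bands]

def make_epoch(seizure):
    sig = rng.normal(size=t.size)                # background EEG stand-in
    if seizure:                                  # crude stand-in: theta burst
        sig += 3 * np.sin(2 * np.pi * 6 * t)
    return band_powers(sig)

X = np.array([make_epoch(i % 2 == 1) for i in range(400)])
y = np.arange(400) % 2
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```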
{"_id": "481ca924dcf4e0c758534467621f2477adfb6379", "title": "Single imputation with multilayer perceptron and multiple imputation combining multilayer perceptron and k-nearest neighbours for monotone patterns", "text": "The knowledge discovery process is supported by information gathered from collected data sets, which often contain errors in the form of missing values. Data imputation is the activity aimed at estimating values for missing data items. This study focuses on the development of automated data imputation models, based on artificial neural networks, for monotone patterns of missing values. The present work proposes a single imputation approach relying on a multilayer perceptron whose training is conducted with different learning rules, and a multiple imputation approach based on the combination of multilayer perceptron and k-nearest neighbours. Eighteen real and simulated databases were exposed to a perturbation experiment with random generation of a monotone missing data pattern. An empirical test was accomplished on these data sets, including both approaches (single and multiple imputations), and three classical single imputation procedures \u2013 mean/mode imputation, regression and hot-deck \u2013 were also considered. Therefore, the experiments involved five imputation methods. The results, considering different performance measures, demonstrated that, in comparison with traditional tools, both proposals improve the automation level and data quality, offering a satisfactory performance."} {"_id": "8d137ca166879949120568810c15d422abec4a1e", "title": "Who Uses Bitcoin? An exploration of the Bitcoin community", "text": "Many cryptocurrencies have come into existence in recent years, with Bitcoin the most prominent among them. Although its short history has been volatile, the virtual currency maintains a core group of committed users. This paper presents an exploratory analysis of Bitcoin users. As a virtual currency and peer-to-peer payment system, Bitcoin may signal future challenges to state oversight and financial powers through its decentralized structure and offer of instantaneous transactions with relative anonymity. Very little is known about the users of Bitcoin, however. Utilizing publicly available survey data of Bitcoin users, this analysis explores the structure of the Bitcoin community in terms of wealth accumulation, optimism about the future of Bitcoin, and themes that attract users to the cryptocurrency. Results indicate that age, time of initial use, geographic location, mining status, engagement in online discourse, and political orientation are all relevant factors that help explain various aspects of Bitcoin wealth, optimism, and attraction."} {"_id": "2b1ceea2ce803ebe58c448daea0f8d1571cdd3ba", "title": "3-D Reciprocal Collision Avoidance on Physical Quadrotor Helicopters with On-Board Sensing for Relative Positioning", "text": "In this paper, we present an implementation of 3D reciprocal collision avoidance on real quadrotor helicopters where each quadrotor senses the relative position and velocity of other quadrotors using an on-board camera. We show that using our approach, quadrotors are able to successfully avoid pairwise collisions in GPS- and motion-capture-denied environments, without communication between the quadrotors, and even when human operators deliberately attempt to induce collision. To our knowledge, this is the first time that reciprocal collision avoidance has been successfully implemented on real robots where each agent independently observes the others using on-board sensors. We theoretically analyze the response of the collision-avoidance algorithm to the assumptions violated by the use of real robots. We quantitatively analyze our experimental results. A particularly striking observation is that at times the quadrotors exhibit \u201creciprocal dance\u201d behavior, which is also observed when humans move past each other in constrained environments. This seems to be the result of sensing uncertainty, which causes both robots involved to have a different belief about the relative positions and velocities and, as a result, choose the same side on which to pass."}
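A minimal sketch of the baselines discussed in the imputation abstract above, on a toy monotone pattern (one trailing incomplete column): scikit-learn's SimpleImputer plays the mean/mode role, and averaging several KNNImputer fills gives a crude multiple-imputation flavor. The paper's MLP-based models are not reproduced here.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 4] += X[:, :4].sum(1)                 # make column 4 predictable
X_miss = X.copy()
X_miss[rng.random(200) < 0.3, 4] = np.nan  # monotone pattern: last column only

mean_fill = SimpleImputer(strategy="mean").fit_transform(X_miss)
# A crude "multiple imputation": average kNN fills over several k values.
knn_fill = np.mean([KNNImputer(n_neighbors=k).fit_transform(X_miss)
                    for k in (3, 5, 7)], axis=0)

mask = np.isnan(X_miss[:, 4])
for name, Z in [("mean", mean_fill), ("kNN-avg", knn_fill)]:
    rmse = np.sqrt(np.mean((Z[mask, 4] - X[mask, 4]) ** 2))
    print(name, "RMSE:", round(rmse, 3))
```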
{"_id": "2327ad6f237b37150e84f0d745a05565ebf0b24d", "title": "Zerocash: Decentralized Anonymous Payments from Bitcoin", "text": "Bitcoin is the first digital currency to see widespread adoption. While payments are conducted between pseudonyms, Bitcoin cannot offer strong privacy guarantees: payment transactions are recorded in a public decentralized ledger, from which much information can be deduced. Zerocoin (Miers et al., IEEE S&P 2013) tackles some of these privacy issues by unlinking transactions from the payment's origin. Yet, it still reveals payments' destinations and amounts, and is limited in functionality. In this paper, we construct a full-fledged ledger-based digital currency with strong privacy guarantees. Our results leverage recent advances in zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs). First, we formulate and construct decentralized anonymous payment schemes (DAP schemes). A DAP scheme enables users to directly pay each other privately: the corresponding transaction hides the payment's origin, destination, and transferred amount. We provide formal definitions and proofs of the construction's security. Second, we build Zerocash, a practical instantiation of our DAP scheme construction. In Zerocash, transactions are less than 1 kB and take under 6 ms to verify - orders of magnitude more efficient than the less-anonymous Zerocoin and competitive with plain Bitcoin."} {"_id": "3d08280ae82c2044c8dcc66d2be5a72c738e9cf9", "title": "Metadata Embeddings for User and Item Cold-start Recommendations", "text": "I present a hybrid matrix factorisation model representing users and items as linear combinations of their content features\u2019 latent factors. The model outperforms both collaborative and content-based models in cold-start or sparse interaction data scenarios (using both user and item metadata), and performs at least as well as a pure collaborative matrix factorisation model where interaction data is abundant. Additionally, feature embeddings produced by the model encode semantic information in a way reminiscent of word embedding approaches, making them useful for a range of related tasks such as tag recommendations."}
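The central idea of the metadata-embedding model above fits in a few lines of numpy: score a user-item pair through the sum of the latent vectors of their content features, so a cold-start item with known features inherits sensible scores. The dimensions, squared loss, and learning rate are illustrative choices, not the paper's training objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n_user_feats, n_item_feats, k = 30, 40, 8
U = 0.1 * rng.normal(size=(n_user_feats, k))   # per-feature latent factors
I = 0.1 * rng.normal(size=(n_item_feats, k))

def score(user_feats, item_feats):
    # A user/item is the sum of its content features' embeddings.
    return U[user_feats].sum(0) @ I[item_feats].sum(0)

def sgd_step(user_feats, item_feats, y, lr=0.05):
    err = score(user_feats, item_feats) - y       # squared-loss residual
    u, v = U[user_feats].sum(0), I[item_feats].sum(0)
    U[user_feats] -= lr * err * v                 # every active feature shares
    I[item_feats] -= lr * err * u                 # the same gradient direction

# Fit one observed interaction, then score a cold-start item that shares
# metadata feature 9 with the trained item.
for _ in range(100):
    sgd_step([0, 3], [7, 9], 1.0)
print(score([0, 3], [7, 9]))    # fitted interaction
print(score([0, 3], [9, 11]))   # unseen item, nonzero thanks to feature 9
```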
{"_id": "6659c47db31129a60df2b113b46f45614197d5d8", "title": "Extremely Low-Profile, Single-Arm, Wideband Spiral Antenna Radiating a Circularly Polarized Wave", "text": "The antenna characteristics of a single-arm spiral (SAS) antenna are described. A balun circuit, required for a conventional two-arm balanced-mode spiral (TAS), is not necessary for feeding this SAS. First, the radiation pattern of an SAS having a disc/ground-plane is investigated. When the radius of the disc is smaller than the radius of the first-mode active region on the spiral and the spacing between the disc and the spiral plane is small, the SAS is found to radiate a circularly polarized (CP) bidirectional beam, whose radiation pattern is almost symmetric with respect to the antenna axis normal to the spiral plane. Second, from a practical standpoint, the CP bidirectional beam is transformed into a CP unidirectional beam by placing a conducting cavity behind an SAS with a disc. It is revealed that the SAS has a good VSWR (less than 2) and a good axial ratio (less than 3 dB) over a design frequency range of 3 GHz to 10 GHz, where a cavity/antenna height of 7 mm (0.07 wavelength at the lowest design frequency) is chosen. The frequency response of the gain for the SAS is found to be similar to that for a conventional TAS, and the radiation efficiency for the SAS is slightly larger than that for the TAS. It is concluded that the SAS with a small disc backed by a cavity realizes a circularly polarized, low-profile, wideband antenna with a simple feed system that does not require a balun circuit."} {"_id": "3692d1c5e36145a5783135f3f077f6486e263d23", "title": "Vocabulary acquisition from extensive reading : A case study", "text": "A number of studies have shown that second language learners acquire vocabulary through reading, but only relatively small amounts. However, most of these studies used only short texts, measured only the acquisition of meaning, and did not credit partial learning of words. This case study of a learner of French explores whether an extensive reading program can enhance lexical knowledge. The study assessed a relatively large number of words (133), and examined whether one month of extensive reading enhanced knowledge of these target words' spelling, meaning, and grammatical characteristics. The measurement procedure was a one-on-one interview that allowed a very good indication of whether learning occurred. The study also explores how vocabulary acquisition varies according to how often words are encountered in the texts. The results showed that knowledge of 65% of the target words was enhanced in some way, for a pickup rate of about 1 of every 1.5 words tested. Spelling was strongly enhanced, even from a small number of exposures. Meaning and grammatical knowledge were also enhanced, but not to the same extent. Overall, the study indicates that more vocabulary acquisition is possible from extensive reading than previous studies have suggested."} {"_id": "46977c2e7a812e37f32eb05ba6ad16e03ee52906", "title": "Gated End-to-End Memory Networks", "text": "Machine reading using differentiable reasoning models has recently shown remarkable progress. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, other tasks, namely multi-fact question-answering, positional reasoning or dialog related tasks, remain challenging particularly due to the necessity of more complex interactions between the memory and controller modules composing this family of models. In this paper, we introduce a novel end-to-end memory access regulation mechanism inspired by the current progress on the connection short-cutting principle in the field of computer vision. Concretely, we develop a Gated End-to-End trainable Memory Network architecture (GMemN2N). From the machine learning perspective, this new capability is learned in an end-to-end fashion without the use of any additional supervision signal which is, as far as our knowledge goes, the first of its kind. Our experiments show significant improvements on the most challenging tasks in the 20 bAbI dataset, without the use of any domain knowledge. Then, we show improvements on the Dialog bAbI tasks including the real human-bot conversation-based Dialog State Tracking Challenge (DSTC-2) dataset. On these two datasets, our model sets the new state of the art."}
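One gated hop of a GMemN2N-style controller update can be sketched directly from the description above: an attention read from memory, then a sigmoid gate blends the read-out with the previous controller state, in the spirit of highway-network shortcuts. The weights here are random stand-ins rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_mem = 16, 10
memories = rng.normal(size=(n_mem, d))   # embedded story sentences
q = rng.normal(size=d)                   # embedded question (controller state)
W_g, b_g = rng.normal(size=(d, d)), np.zeros(d)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_hop(u):
    p = softmax(memories @ u)                # attention over memory slots
    o = p @ memories                         # memory read-out
    g = 1 / (1 + np.exp(-(W_g @ u + b_g)))   # gate: how much memory to admit
    return g * o + (1 - g) * u               # gated controller update

u = q
for _ in range(3):                           # three memory hops
    u = gated_hop(u)
print(u[:4])
```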
{"_id": "a3348ee4cc93e68485df4d06e205e064f12c0239", "title": "Feature Selection for Maximizing the Area Under the ROC Curve", "text": "Feature selection is an important pre-processing step for solving classification problems. A good feature selection method may not only improve the performance of the final classifier, but also reduce its computational complexity. Traditionally, feature selection methods were developed to maximize the classification accuracy of a classifier. Recently, both theoretical and experimental studies revealed that a classifier with the highest accuracy might not be ideal in real-world problems. Instead, the Area Under the ROC Curve (AUC) has been suggested as an alternative metric, and many existing learning algorithms have been modified in order to seek the classifier with maximum AUC. However, little work has been done to develop new feature selection methods suited to the requirement of AUC maximization. To fill this gap in the literature, we propose in this paper a novel algorithm, called the AUC and Rank Correlation coefficient Optimization (ARCO) algorithm. ARCO adopts the general framework of a well-known method, namely the minimal-redundancy-maximal-relevance (mRMR) criterion, but defines the terms \u201crelevance\u201d and \u201credundancy\u201d in totally different ways. Such a modification looks trivial from the perspective of algorithmic design. Nevertheless, an experimental study on four gene expression data sets showed that feature subsets obtained by ARCO resulted in classifiers with significantly larger AUC than the feature subsets obtained by mRMR. Moreover, ARCO also outperformed the Feature Assessment by Sliding Thresholds algorithm, which was recently proposed for AUC maximization, and thus the efficacy of ARCO was validated."} {"_id": "49f5671cdf3520e04104891265c74b34c22ebccc", "title": "Overconfidence and Trading Volume", "text": "Theoretical models predict that overconfident investors will trade more than rational investors. We directly test this hypothesis by correlating individual overconfidence scores with several measures of trading volume of individual investors. Approximately 3,000 online broker investors were asked to answer an internet questionnaire which was designed to measure various facets of overconfidence (miscalibration, volatility estimates, better than average effect). The measures of trading volume were calculated from the trades of 215 individual investors who answered the questionnaire. We find that investors who think that they are above average in terms of investment skills or past performance (but who did not have above average performance in the past) trade more. Measures of miscalibration are, contrary to theory, unrelated to measures of trading volume. This result is striking as theoretical models that incorporate overconfident investors mainly motivate this assumption by the calibration literature and model overconfidence as underestimation of the variance of signals. In connection with other recent findings, we conclude that the usual way of motivating and modeling overconfidence, which is mainly based on the calibration literature, has to be treated with caution. Moreover, our way of empirically evaluating behavioral finance models, namely the correlation of economic and psychological variables and the combination of psychometric measures of judgment biases (such as overconfidence scores) with field data, seems to be a promising way to better understand which psychological phenomena actually drive economic behavior."} {"_id": "5703617b9d9d40e90b6c8ffa21a52734d9822d60", "title": "Defining Computational Thinking for Mathematics and Science Classrooms", "text": "Science and mathematics are becoming computational endeavors.
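AUC-oriented feature scoring like ARCO's relevance term rests on the fact that the AUC of a single feature equals a normalized Mann\u2013Whitney U statistic, computable from ranks alone. A per-feature scorer follows (ties are ignored for brevity, and the data are synthetic); one could plug such a scorer into an mRMR-style selection loop.

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (no tie correction)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
good = y + 0.8 * rng.normal(size=500)     # informative feature
noise = rng.normal(size=500)              # irrelevant feature
print(auc(good, y), auc(noise, y))        # roughly 0.8 vs 0.5
```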
This fact is reflected in the recently released Next Generation Science Standards and the decision to include \u201ccomputational thinking\u201d as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new urgency has come to the challenge of defining computational thinking and providing a theoretical grounding for what form it should take in school science and mathematics classrooms. This paper presents a response to this challenge by proposing a definition of computational thinking for mathematics and science in the form of a taxonomy consisting of four main categories: data practices, modeling and simulation practices, computational problem solving practices, and systems thinking practices. In formulating this taxonomy, we draw on the existing computational thinking literature, interviews with mathematicians and scientists, and exemplary computational thinking instructional materials. This work was undertaken as part of a larger effort to infuse computational thinking into high school science and mathematics curricular materials. In this paper, we argue for the approach of embedding computational thinking in mathematics and science contexts, present the taxonomy, and discuss how we envision the taxonomy being used to bring current educational efforts in line with the increasingly computational nature of modern science and mathematics."} {"_id": "5fee30aa381e5382ac844c48cdc7bf65f316b3cb", "title": "Flexible Learning on the Sphere Via Adaptive Needlet Shrinkage and Selection", "text": "This paper introduces an approach for flexible, robust Bayesian modeling of structure in spherical data sets. Our method is based upon a recent construction called the needlet, which is a particular form of spherical wavelet with many favorable statistical and computational properties. We perform shrinkage and selection of needlet coefficients, focusing on two main alternatives: empirical-Bayes thresholding for selection, and the horseshoe prior for shrinkage. We study the performance of the proposed methodology both on simulated data and on a real data set involving the cosmic microwave background radiation. Horseshoe shrinkage of needlet coefficients is shown to yield the best overall performance against some common benchmarks."} {"_id": "129298d1dd8a9ec398d7f14a706e5d06d3aead01", "title": "A privacy-compliant fingerprint recognition system based on homomorphic encryption and Fingercode templates", "text": "The privacy protection of biometric data is an important research topic, especially in the case of distributed biometric systems. In this scenario, it is very important to guarantee that biometric data cannot be stolen by anyone, and that the biometric clients are unable to gather any information different from the single user verification/identification. In a biometric system with a high level of privacy compliance, the server that processes the biometric matching should also not learn anything about the database, and it should be impossible for the server to exploit the resulting matching values in order to extract any knowledge about the user presence or behavior. Within this conceptual framework, in this paper we propose a novel complete demonstrator based on a distributed biometric system that is capable of protecting the privacy of the individuals by exploiting cryptosystems.
The implemented system computes the matching task in the encrypted domain by exploiting homomorphic encryption and using Fingercode templates. The paper describes the design methodology of the demonstrator and the obtained results. The demonstrator has been fully implemented and tested in real applicative conditions. Experimental results show that this method is feasible in cases where the privacy of the data is more important than the accuracy of the system, and the obtained computational time is satisfactory."} {"_id": "69852901bb1d411717d59f7647b1c332008393c1", "title": "Extended Kalman filter based grid synchronization in the presence of voltage unbalance for smart grid", "text": "In this paper, grid synchronization for grid-connected power generation systems in the presence of voltage unbalance and frequency variation is considered. A new extended Kalman filter (EKF) based synchronization algorithm is proposed to track the phase angle of the utility network. Instead of processing the three-phase voltage signal in the abc natural reference frame and resorting to the symmetrical component transformation as in the traditional way, the proposed algorithm separates the positive and negative sequences in the transformed \u03b1\u03b2 stationary reference frame. Based on the obtained expressions in the \u03b1\u03b2 domain, an EKF is developed to track both the in-phase and quadrature sinusoidal signals together with the unknown frequency. An estimate of the phase angle of the positive sequence is then obtained. As a by-product, estimates of the phase angle of the negative sequence and the grid frequency are also computed. Compared to the commonly used scheme, the proposed algorithm has a simplified structure. The good performance is supported by computer simulations."} {"_id": "54efb068debeea58fd05951f86db797b5b5e4788", "title": "Frequency split metal artifact reduction (FSMAR) in computed tomography.", "text": "PURPOSE\nThe problem of metal artifact reduction (MAR) is almost as old as the clinical use of computed tomography itself. When metal implants are present in the field of measurement, severe artifacts degrade the image quality and the diagnostic value of CT images. Up to now, no generally accepted solution to this issue has been found. In this work, a method based on a new MAR concept is presented: frequency split metal artifact reduction (FSMAR). It ensures efficient reduction of metal artifacts at high image quality with enhanced preservation of details close to metal implants.\n\n\nMETHODS\nFSMAR combines a raw data inpainting-based MAR method with an image-based frequency split approach. Many typical methods for metal artifact reduction are inpainting-based MAR methods and simply replace unreliable parts of the projection data, for example, by linear interpolation. Frequency split approaches were used in CT, for example, by combining two reconstruction methods in order to reduce cone-beam artifacts. FSMAR combines the high frequencies of an uncorrected image, where all available data were used for the reconstruction, with the more reliable low frequencies of an image which was corrected with an inpainting-based MAR method. The algorithm is tested in combination with normalized metal artifact reduction (NMAR) and with a standard inpainting-based MAR approach. NMAR is a more sophisticated inpainting-based MAR method, which introduces fewer new artifacts that may result from interpolation errors.
A quantitative evaluation was performed using the examples of a simulation of the XCAT phantom and a scan of a spine phantom. Further evaluation includes patients with different types of metal implants: hip prostheses, dental fillings, neurocoil, and spine fixation, which were scanned with a modern clinical dual source CT scanner.\n\n\nRESULTS\nFSMAR ensures sharp edges and a preservation of anatomical details which is in many cases better than after applying an inpainting-based MAR method only. In contrast to other MAR methods, FSMAR yields images without the usual blurring close to implants.\n\n\nCONCLUSIONS\nFSMAR should be used together with NMAR, a combination which ensures an accurate correction of both high and low frequencies. The algorithm is computationally inexpensive compared to iterative methods and methods with complex inpainting schemes. No parameters were chosen manually; it is ready for an application in clinical routine."} {"_id": "7a3f1ea18bf3e8223890b122bc31fb79db758c6e", "title": "Tagging Urdu Sentences from English POS Taggers", "text": "Being a global language, English has attracted a majority of researchers and academia to work on several Natural Language Processing (NLP) applications. The remaining languages have not received as much attention. Part-of-speech (POS) tagging is a necessary component for several NLP applications, and an accurate POS tagger for a particular language is not easy to construct due to the diversity of that language. For the global language English, POS taggers are more mature and widely used by researchers and academia for NLP processing. In this paper, the idea of reusing English POS taggers for tagging non-English sentences is proposed. As an exemplary case, Urdu sentences are tagged using 11 well-known English POS taggers. State-of-the-art English POS taggers were explored from the literature, and 11 of them were applied to the Urdu sentences. The well-known Google translator is used to translate the sentences across the languages. Data from twitter.com are extracted for evaluation purposes. A confusion matrix with the kappa statistic is used to measure the agreement between actual and predicted tagging. The two best English POS taggers for tagging Urdu sentences were the Stanford POS Tagger and the MBSP POS Tagger, with accuracies of 96.4% and 95.7%, respectively. The system can be generalized for multilingual sentence tagging. Keywords\u2014Stanford part-of-speech (POS) tagger; Google translator; Urdu POS tagging; kappa statistic"}
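The frequency-split step itself, taking low frequencies from the MAR-corrected image and high frequencies from the uncorrected one, is a few lines once the two reconstructions exist. The Gaussian-blur band splitter, the optional weight map, and the toy images below are implementation assumptions, not the paper's exact filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split_mar(img_uncorrected, img_mar, sigma=3.0, weight=None):
    low_mar = gaussian_filter(img_mar, sigma)          # reliable low frequencies
    high_raw = img_uncorrected - gaussian_filter(img_uncorrected, sigma)
    if weight is None:
        weight = np.ones_like(img_mar)                 # could damp near metal
    return low_mar + weight * high_raw

rng = np.random.default_rng(0)
truth = rng.normal(size=(64, 64))                      # toy stand-in image
uncorrected = truth + 0.5 * np.sin(np.linspace(0, 40, 64))[None, :]  # streaks
mar = gaussian_filter(truth, 1.0)                      # detail lost by inpainting
fused = frequency_split_mar(uncorrected, mar)
print("MAR-only error:", float(np.abs(mar - truth).mean()))
print("FSMAR error:   ", float(np.abs(fused - truth).mean()))
```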
{"_id": "21677e1dda607f649cd63b2311795ce6e2653b33", "title": "IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels", "text": "We report on the IR sensitivity enhancement of back-illuminated CMOS Image Sensor (BI-CIS) with 2-dimensional diffractive inverted pyramid array structure (IPA) on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinitely thick c-Si having 2D IPAs with pitches over 400\u2009nm on its surface show more than 30% improvement of light absorption at \u03bb\u2009=\u2009850\u2009nm, and a maximum enhancement of 43% is confirmed with the 540\u2009nm pitch at that wavelength. A prototype BI-CIS sample with pixel size of 1.2\u2009\u03bcm square containing 400\u2009nm pitch IPAs shows 80% sensitivity enhancement at \u03bb\u2009=\u2009850\u2009nm compared to the reference sample with flat surface. This is due to diffraction with the IPA and total reflection at the pixel boundary. The NIR images taken by the demo camera equipped with a C-mount lens show 75% sensitivity enhancement in the \u03bb\u2009=\u2009700\u20131200\u2009nm wavelength range with negligible spatial resolution degradation. Light trapping CIS pixel technology promises to improve NIR sensitivity and appears to be applicable to many different image sensor applications including security cameras, personal authentication, and range-finding Time-of-Flight cameras with IR illumination."} {"_id": "45a1a5dd0b9186d6476b56ab10bc582f003da2c5", "title": "The Gambler's Ruin Problem, Genetic Algorithms, and the Sizing of Populations", "text": "This paper presents a model to predict the convergence quality of genetic algorithms based on the size of the population. The model is based on an analogy between selection in GAs and one-dimensional random walks. Using the solution to a classic random walk problem, the gambler's ruin, the model naturally incorporates previous knowledge about the initial supply of building blocks (BBs) and correct selection of the best BB over its competitors. The result is an equation that relates the size of the population with the desired quality of the solution, as well as the problem size and difficulty. The accuracy of the model is verified with experiments using additively decomposable functions of varying difficulty. The paper demonstrates how to adjust the model to account for noise present in the fitness evaluation and for different tournament sizes."} {"_id": "79ce9533944cdee059232495fc9f94f1d47eb900", "title": "Deep Learning Approaches to Semantic Relevance Modeling for Chinese Question-Answer Pairs", "text": "The human-generated question-answer pairs in the Web social communities are of great value for research on automatic question-answering techniques. Due to the large amount of noise information involved in such corpora, it is still a problem to detect the answers even though the questions are exactly located. Quantifying the semantic relevance between questions and their candidate answers is essential to answer detection in social media corpora. Since both the questions and their answers usually contain a small number of sentences, the relevance modeling methods have to overcome the problem of word feature sparsity. In this article, the deep learning principle is introduced to address the semantic relevance modeling task. We propose two deep belief networks with different architectures to model the semantic relevance for the question-answer pairs. According to the investigation of the textual similarity between the community-driven question-answering (cQA) dataset and the forum dataset, a learning strategy is adopted to promote our models\u2019 performance on the social community corpora without hand-annotating work. The experimental results show that our method outperforms the traditional approaches on both the cQA and the forum corpora."} {"_id": "bf4aa4acdfa83586bbdbd6351b1a96dbb9672c4c", "title": "JINS MEME algorithm for estimation and tracking of concentration of users", "text": "Activity tracking using a wearable device is an emerging research field. Large-scale studies on activity tracking performed with eyewear-type wearable devices remain challenging owing to the negative effect such devices have on users' looks. To cope with this challenge, JINS Inc., an eyewear retailer in Japan, has developed a state-of-the-art smart eyewear called JINS MEME.
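The gambler's-ruin analogy above is easy to reproduce: a building block starts with x copies out of a population of size N and wins each competition with probability p > 0.5. The closed form below is the textbook ruin probability, checked by simulation; it illustrates the analogy only, not the paper's full sizing equation.

```python
import numpy as np

def success_prob(x, N, p):
    """P(random walk from x hits N before 0), win probability p per step."""
    if p == 0.5:
        return x / N
    r = (1 - p) / p
    return (1 - r ** x) / (1 - r ** N)

def simulate(x, N, p, trials=5_000, rng=np.random.default_rng(0)):
    wins = 0
    for _ in range(trials):
        s = x
        while 0 < s < N:
            s += 1 if rng.random() < p else -1
        wins += s == N
    return wins / trials

# The initial BB supply x grows with population size N; p > 0.5 models a
# selection bias toward the correct building block.
for N in (20, 50, 100):
    x = N // 10
    print(N, round(success_prob(x, N, 0.55), 3), simulate(x, N, 0.55))
```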
The design of JINS MEME is almost the same as that of Wellington-type eyeglasses so that people can use it in their daily lives. JINS MEME is equipped with sensors to detect a user's eye movement and head motion. In addition to these functions of JINS MEME, JINS developed an application to measure concentration levels of users. In this demonstration, users will experience wearing the JINS MEME glasses and their concentration will be measured while they perform a certain task at our booth."} {"_id": "75a4944af32cd53aa30bf3af533556401b9f3759", "title": "A 5GS/s voltage-to-time converter in 90nm CMOS", "text": "A voltage-to-time converter (VTC) is presented for use in a time-based analog-to-digital converter (ADC). The converter runs with a 5 GHz input clock to provide a maximum conversion rate of 5 GS/s. A novel architecture enables the VTC to provide an adjustable linear delay versus input voltage characteristic. The circuit is realized in a 90nm CMOS process. After calibration, the converter achieves better than 3.7 effective bits for input frequencies up to 1.75 GHz, making it suitable for use in a time-based ADC with up to 4-bit resolution."} {"_id": "03809a85789f7aeb39002fdcd7c3cdf33cc7370f", "title": "A Client-Driven Approach for Channel Management in Wireless LANs", "text": "We propose an efficient client-based approach for channel management (channel assignment and load balancing) in 802.11-based WLANs that leads to better usage of the wireless spectrum. This approach is based on a \u201cconflict set coloring\u201d formulation that jointly performs load balancing along with channel assignment. Such a formulation has a number of advantages. First, it explicitly captures interference effects at clients. Next, it intrinsically exposes opportunities for better channel re-use. Finally, algorithms based on this formulation do not depend on specific physical RF models and hence can be applied efficiently to a wide range of in-building as well as outdoor scenarios. We have performed extensive packet-level simulations and measurements on a deployed wireless testbed of 70 APs to validate the performance of our proposed algorithms. We show that in addition to single network scenarios, the conflict set coloring formulation is well suited for channel assignment where multiple wireless networks share and contend for spectrum in the same physical space. Our results over a wide range of both simulated topologies and in-building testbed experiments indicate that our approach improves application level performance at the clients by up to three times (and at least 50%) in comparison to current best-known techniques."} {"_id": "cfb3d339a6b369144c356a93e3b519f22928e238", "title": "Meta-Learning with Hessian Free Approach in Deep Neural Nets Training", "text": "Meta-learning is a promising method for achieving efficient training of deep neural nets and has been attracting increasing interest in recent years. However, most current methods are still not capable of training complex neural net models over a long training process. In this paper, a novel second-order meta-optimizer, named Meta-learning with Hessian-Free (MLHF) approach, is proposed based on the Hessian-Free approach. Two recurrent neural networks are established to generate the damping and the precondition matrix of this Hessian-Free framework. A series of techniques is introduced to stabilize and reinforce the meta-training of this optimizer, including the gradient calculation of H.
Numerical experiments on deep convolution neural nets, including CUDA-convnet and ResNet18(v2), with the CIFAR10 and ILSVRC2012 datasets, indicate that the MLHF shows good and continuous training performance during the whole long-time training process, i.e., both the rapidly-decreasing early stage and the steadily-decreasing later stage, and so is a promising meta-learning framework towards elevating the training efficiency in real-world deep neural nets."} {"_id": "2449a067370ca24353ee8e9fd5e8187cf08ca8f7", "title": "Thoth: Comprehensive Policy Compliance in Data Retrieval Systems", "text": "Data retrieval systems process data from many sources, each subject to its own data use policy. Ensuring compliance with these policies despite bugs, misconfiguration, or operator error in a large, complex, and fast evolving system is a major challenge. Thoth provides an efficient, kernel-level compliance layer for data use policies. Declarative policies are attached to the systems\u2019 input and output files, key-value tuples, and network connections, and specify the data\u2019s integrity and confidentiality requirements. Thoth tracks the flow of data through the system, and enforces policy regardless of bugs, misconfigurations, compromises in application code, or actions by unprivileged operators. Thoth requires minimal changes to an existing system and has modest overhead, as we show using a prototype Thoth-enabled data retrieval system based on the popular Apache Lucene."} {"_id": "7112dd4ab26fe7d0f8908f1352e4ba279b2f521d", "title": "A multiparadigm intelligent tutoring system for robotic arm training", "text": "To assist learners during problem-solving activities, an intelligent tutoring system (ITS) has to be equipped with domain knowledge that can support appropriate tutoring services. Providing domain knowledge is usually done by adopting one of the following paradigms: building a cognitive model, specifying constraints, integrating an expert system, and using data mining algorithms to learn domain knowledge. However, for some ill-defined domains, each single paradigm may present some advantages and limitations in terms of the required resources for deploying it, and tutoring support that can be offered. To address this issue, we propose using a multiparadigm approach. In this paper, we explain how we have applied this idea in CanadarmTutor, an ITS for learning to operate the Canadarm2 robotic arm. To support tutoring services in this ill-defined domain, we have developed a multiparadigm model combining: 1) a cognitive model to cover well-defined parts of the task and spatial reasoning, 2) a data mining approach for automatically building a task model from user solutions for ill-defined parts of the task, and 3) a 3D path-planner to cover other parts of the task for which no user data are available. The multiparadigm version of CanadarmTutor allows providing a richer set of tutoring services than what could be offered with previous single paradigm versions of CanadarmTutor."} {"_id": "014004e84fbc6e4c7c4aced2e69fc4a5d28daabf", "title": "Intellectual Capital (IC) Measurement in the Mass Media Context", "text": "Mass media is the key influencer of public opinion. The influence is not limited to the political and social spheres, but also relates to organisational and economic reputation and brands.
Within public opinion, organisations must manage how they are represented competitively within mass media so that they can develop their brand strategically to grow and compete in the current global knowledge economy. This is where the link to Intellectual Capital (IC) Measurement is significant. IC, as the sum of all an organisation's intangible assets, drives a company's presence and value within the media, whether related to human, structural or relational capital attributes. The measurement, therefore, of IC in the mass media context is invaluable to understand how a company is placed strategically and competitively in the external space, and how this links to internal activities, goals and outcomes. This paper is an attempt to address some of the issues related to IC measurements in the mass media context by suggesting a framework that provides a multidisciplinary and holistic approach to the understanding and contextualising of the organisation's presence in the public space."} {"_id": "25d1a2c364b05e0db056846ec397fbf0eacdca5c", "title": "Orthogonal nonnegative matrix tri-factorization for co-clustering: Multiplicative updates on Stiefel manifolds", "text": "Matrix factorization-based methods have become popular in dyadic data analysis, where a fundamental problem, for example, is to perform document clustering or co-clustering words and documents given a term-document matrix. Nonnegative matrix tri-factorization (NMTF) emerges as a promising tool for co-clustering, seeking a 3-factor decomposition X \u2248 USV\u22a4 with all factor matrices restricted to be nonnegative, i.e., U \u2265 0, S \u2265 0, V \u2265 0. In this paper we develop multiplicative updates for orthogonal NMTF, where X \u2248 USV\u22a4 is pursued with the orthogonality constraints U\u22a4U = I and V\u22a4V = I, exploiting true gradients on Stiefel manifolds. Experiments on various document data sets demonstrate that our method works well for document clustering and is useful in revealing polysemous words via co-clustering words and documents. \u00a9 2010 Elsevier Ltd. All rights reserved."} {"_id": "019615181a80b6dca5d79a8e72e5eec21eb5cfde", "title": "Metal Additive Manufacturing : A Review", "text": "This paper reviews the state-of-the-art of an important, rapidly emerging, manufacturing technology that is alternatively called additive manufacturing (AM), direct digital manufacturing, free form fabrication, or 3D printing, etc. A broad contextual overview of metallic AM is provided. AM has the potential to revolutionize the global parts manufacturing and logistics landscape. It enables distributed manufacturing and the production of parts on demand while offering the potential to reduce cost, energy consumption, and carbon footprint. This paper explores the material science, processes, and business considerations associated with achieving these performance gains. It is concluded that a paradigm shift is required in order to fully exploit the potential of AM."} {"_id": "3ca4cbd7bf47ad1d20224b8ebc19980df337697e", "title": "Measurement Science Needs for Real-time Control of Additive Manufacturing Powder Bed Fusion Processes", "text": "Additive Manufacturing is increasingly used in the development of new products: from conceptual design to functional parts and tooling. However, today, variability in part quality due to inadequate dimensional tolerances, surface roughness, and defects limits its broader acceptance for high-value or mission-critical applications.
While process control in general can limit this variability, it is impeded by a lack of adequate process measurement methods. Process control today is based on heuristics and experimental data, yielding limited improvement in part quality. The overall goal is to develop the measurement science [1] necessary to make in-process measurement and real-time control possible in additive manufacturing. Traceable dimensional and thermal metrology methods must be developed for real-time closed-loop control of additive manufacturing processes. As a precursor, this report presents a review of additive manufacturing control schemes, process measurements, and modeling and simulation methods as they apply to the powder bed fusion process, though results from other processes are reviewed where applicable. The aim of the review is to identify and summarize the measurement science needs that are critical to real-time process control. We organize our research findings to identify the correlations between process parameters, process signatures, and product quality. The intention of this report is to serve as a background reference and a go-to place for our work to identify the most suitable measurement methods and corresponding measurands for real-time control. [1] Measurement science broadly includes: development of performance metrics, measurement and testing methods, predictive modeling and simulation tools, knowledge modeling, protocols, technical data, and reference materials and artifacts; conduct of inter-comparison studies and calibrations; evaluation of technologies, systems, and practices, including uncertainty analysis; development of the technical basis for standards, codes, and practices in many instances via test-beds, consortia, standards and codes development organizations, and/or other partnerships with industry and academia."} {"_id": "dafdf1c7dec045bee37d4b97aa897717f590e47a", "title": "A Threshold Selection Method from Gray-Level Histograms", "text": "A nonparametric and unsupervised method of automatic threshold selection for picture segmentation is presented. An optimal threshold is selected by the discriminant criterion, namely, so as to maximize the separability of the resultant classes in gray levels. The procedure is very simple, utilizing only the zeroth- and first-order cumulative moments of the gray-level histogram. It is straightforward to extend the method to multithreshold problems. Several experimental results are also presented to support the validity of the method."} {"_id": "ea389cc6a91d365d003562df896fa5931fb444c6", "title": "Additive manufacturing of polymer-derived ceramics", "text": "The extremely high melting point of many ceramics adds challenges to additive manufacturing as compared with metals and polymers. Because ceramics cannot be cast or machined easily, three-dimensional (3D) printing enables a big leap in geometrical flexibility. We report preceramic monomers that are cured with ultraviolet light in a stereolithography 3D printer or through a patterned mask, forming 3D polymer structures that can have complex shape and cellular architecture. These polymer structures can be pyrolyzed to a ceramic with uniform shrinkage and virtually no porosity. Silicon oxycarbide microlattice and honeycomb cellular materials fabricated with this approach exhibit higher strength than ceramic foams of similar density.
Additive manufacturing of such materials is of interest for propulsion components, thermal protection systems, porous burners, microelectromechanical systems, and electronic device packaging."} {"_id": "156b0bcef5d03cdf0a35947030c1c0729ca923d3", "title": "Additive Manufacturing of Metals: A Review", "text": "Over the past 20 years, additive manufacturing (AM) has evolved from 3D printers used for rapid prototyping to sophisticated rapid manufacturing that can create parts directly without the use of tooling. AM technologies build near net shape components layer by layer using 3D model data. AM could revolutionize many sectors of manufacturing by reducing component lead time, material waste, energy usage, and carbon footprint. Furthermore, AM has the potential to enable novel product designs that could not be fabricated using conventional processes. This proceedings paper is a review that assesses available AM technologies for direct metal fabrication. Included is an outline of the definition of AM, a review of the commercially available and under-development technologies for direct metal fabrication, possibilities, and an overall assessment of the state of the art. A perspective on future research opportunities is also presented."} {"_id": "a1746d4e1535564e02a7a4d5e4cdd1fa7bedc571", "title": "Feature engineering in Context-Dependent Deep Neural Networks for conversational speech transcription", "text": "We investigate the potential of Context-Dependent Deep-Neural-Network HMMs, or CD-DNN-HMMs, from a feature-engineering perspective. Recently, we showed that for speaker-independent transcription of phone calls (NIST RT03S Fisher data), CD-DNN-HMMs reduced the word error rate by as much as one third\u2014from 27.4%, obtained by discriminatively trained Gaussian-mixture HMMs with HLDA features, to 18.5%\u2014using 300+ hours of training data (Switchboard), 9000+ tied triphone states, and up to 9 hidden network layers."} {"_id": "2760ee22dcea47ac49381ade8edb01d1094d3a33", "title": "An Exploratory Review of Design Principles in Constructivist Gaming Learning Environments.", "text": "Creating a design theory for Constructivist Gaming Learning Environments necessitates, among other things, the establishment of design principles. These principles have the potential to help designers produce games where users achieve higher levels of learning. This paper focuses on twelve design principles: Probing, Distributed, Multiple Routes, Practice, Psychosocial Moratorium, Regime of Competence, Self-Knowledge, Collective Knowledge, Engaging, User Interface Ease of Use, On Demand and Just-in-Time Tutorial, and Achievement. We report on two pilot studies of a qualitative nature in which we test our design principles. Game play testing and observations were carried out on five Massively Multiplayer Online Games (MMOGs): RuneScape, GuildWars, Ragnarok, World of WarCraft, and Final Fantasy XI. Two educational games, Carabella Goes to College and Outbreak at WatersEdge, were also observed. Our findings indicate that not all of the popular MMOGs and educational games support all of these principles."} {"_id": "9ca32800987b1f6092c2cacf028dee7298e73084", "title": "Evaluation of machine learning techniques for network intrusion detection", "text": "A network traffic anomaly may indicate a possible intrusion in the network, and therefore anomaly detection is important for detecting and preventing security attacks.
The early research work in this area and commercially available Intrusion Detection Systems (IDS) are mostly signature-based. The problem with signature-based methods is that the signature database needs to be updated as new attack signatures become available, and therefore they are not suitable for real-time network anomaly detection. The recent trend in anomaly detection is based on machine learning classification techniques. We apply seven different machine learning techniques with information entropy calculation to the Kyoto 2006+ data set and evaluate the performance of these techniques. Our findings show that, for this particular data set, most machine learning techniques provide higher than 90% precision, recall and accuracy. However, using the area under the Receiver Operating Characteristic (ROC) curve as the metric, we find that the Radial Basis Function (RBF) performs the best among the seven algorithms studied in this work."} {"_id": "3ced8df3b0a63c845cabc7971a7464c4905ec8ab", "title": "Visible light communications: Challenges and possibilities", "text": "Solid-state lighting is a rapidly developing field. White-light and other visible LEDs are becoming more efficient, have high reliability and can be incorporated into many lighting applications. Recent examples include car head-lights based on white LEDs, and LED illumination as an architectural feature. The prediction that general illumination will use white LEDs in the future has been made, due to the increased energy efficiency that such an approach may have. Such sources can also be modulated at high speed, offering the possibility of using such sources for simultaneous illumination and data communications. Such visible light communications (VLC) was pioneered in Japan, and there is now growing interest worldwide, including within bodies such as the Visible Light Communications Consortium (VLCC) and the Wireless World Research Forum (WWRF). In this paper we outline the basic components in these systems, review the state of the art and discuss some of the challenges and possibilities for this new wireless transmission technique."} {"_id": "57f9aa83737f2062defaf80afe11448ea5a33ea6", "title": "A conceptual framework to develop Green IT \u2013 going beyond the idea of environmental sustainability", "text": "This paper presents a conceptual framework aiming to better research, understand and develop Green IT within organizations. Based on a literature review on Green IT, regarding the concepts of responsibility and sustainability, we propose an initial framework with five dimensions: ethical, technological, economic, social and environmental. These dimensions compose Green IT strategies and practices. Additionally, the framework considers that changing environmental requirements, strategic requirements and dynamic capabilities are the forces which move organizations toward green practices and foster innovation. This framework is part of a five-year horizon project that began in 2009 and aims to contribute to the theory in the field, with the initial goal of identifying the constructs associated with Green IT."} {"_id": "e6a5a2d440446c1d6e1883cdcb3a855b94b708e7", "title": "Epidermal electronics: Skin sweat patch", "text": "An ultrathin, stretchable, and conformal sensor system for skin-mounted sweat measurement is characterized and demonstrated in this paper. As an epidermal device, the sweat sensor is mechanically designed for comfortable wear on the skin by employing interdigitated electrodes connected via stretchable serpentine-shaped conductors.
Experimental results show that the sensor is sensitive to measuring frequency, sweat level and stretching deformation. It was found that 20 kHz signals provide the most sensitive performance: the electrical impedance changes by 50% while the sweat level increases from 20 to 80. In addition, sensor elongation from 15% up to 50% affected the measurement sensitivity of both electrical impedance and capacitance."} {"_id": "e435ccffa5ae89d22b5062c1e8e3dcd2a0908ee8", "title": "A Survey of Utility-Oriented Pattern Mining", "text": "The main purpose of data mining and analytics is to find novel, potentially useful patterns that can be utilized in real-world applications to derive beneficial knowledge. For identifying and evaluating the usefulness of different kinds of patterns, many techniques/constraints have been proposed, such as support, confidence, sequence order, and utility parameters (e.g., weight, price, profit, quantity, etc.). In recent years, there has been an increasing demand for utility-oriented pattern mining (UPM). UPM is a vital task, with numerous high-impact applications, including cross-marketing, e-commerce, finance, medical, and biomedical applications. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods of UPM. First, we introduce an in-depth understanding of UPM, including concepts, examples, and comparisons with related concepts. A taxonomy of the most common and state-of-the-art approaches for mining different kinds of high-utility patterns is presented, including Apriori-based, tree-based, projection-based, vertical-/horizontal-data-format-based, and other hybrid approaches. A comprehensive review of advanced topics of existing high-utility pattern mining techniques is offered, with a discussion of their pros and cons. Finally, we present several well-known open-source software packages for UPM. We conclude our survey with a discussion on open and practical challenges in this field."} {"_id": "0de16a3505e587589fdf35d7f99d26848f362f55", "title": "Revising the Newman-Girvan Algorithm", "text": "One of the common approaches to community detection in complex networks is the Girvan-Newman algorithm [5], which is based on repeated deletion of edges having the maximum edge betweenness centrality. Although widely used, it may result in different dendrograms of hierarchies of communities if there are several edges eligible for removal (see [6]), thus leaving an ambiguous formation of network subgroups. We present possible ways to overcome these issues by using, instead of edge betweenness computation for single edges, the group edge betweenness for subsets of edges to be subjects of removal."} {"_id": "13def1f1f6598d4d0c7529e8ef6a7999165fe955", "title": "Ontology-based deep learning for human behavior prediction with explanations in health social networks", "text": "Human behavior modeling is a key component in application domains such as healthcare and social behavior research. In addition to accurate prediction, having the capacity to understand the roles of human behavior determinants and to provide explanations for the predicted behaviors is also important. Having this capacity increases trust in the systems and the likelihood that the systems actually will be adopted, thus driving engagement and loyalty. However, most prediction models do not provide explanations for the behaviors they predict.
In this paper, we study the research problem, human behavior prediction with explanations, for healthcare intervention systems in health social networks. We propose an ontology-based deep learning model (ORBM+) for human behavior prediction over undirected, node-attributed graphs. We first propose a bottom-up algorithm to learn the user representation from health ontologies. Then the user representation is utilized to incorporate self-motivation, social influences, and environmental events together in a human behavior prediction model, which extends a well-known deep learning method, the Restricted Boltzmann Machine. ORBM+ not only predicts human behaviors accurately, but also generates explanations for each predicted behavior. Experiments conducted on both real and synthetic health social networks have shown the tremendous effectiveness of our approach compared with conventional methods."} {"_id": "8be203797e78aa8351c560f067190ce596feec6b", "title": "Edge Agreement: Graph-Theoretic Performance Bounds and Passivity Analysis", "text": "This work explores the properties of the edge variant of the graph Laplacian in the context of the edge agreement problem. We show that the edge Laplacian, and its corresponding agreement protocol, provides a useful perspective on the well-known node agreement, or the consensus algorithm. Specifically, the dynamics induced by the edge Laplacian facilitate a better understanding of the role of certain subgraphs, e.g., cycles and spanning trees, in the original agreement problem. Using the edge Laplacian, we proceed to examine graph-theoretic characterizations of the H2 and H\u221e performance for the agreement protocol. These results are subsequently applied in the context of optimal sensor placement for consensus-based applications. Finally, the edge Laplacian is employed to provide new insights into the nonlinear extension of linear agreement to agents with passive dynamics."} {"_id": "37a26f4601f59ac02136e8f25f94f5f44ca8173a", "title": "Tracking-based interaction for object creation in mobile augmented reality", "text": "In this work, we evaluate the feasibility of tracking-based interaction using a mobile phone's or tablet's camera in order to create and edit 3D objects in augmented reality applications. We present a feasibility study investigating if and how gestures made with a finger can be used to create such objects. A revised interface design is evaluated in a user study with 24 subjects that reveals high usability and entertainment value, but also identifies issues such as ergonomic discomfort and imprecise input for complex tasks. Hence, our results suggest huge potential for this type of interaction in the entertainment, edutainment, and leisure domains, but limited usefulness for serious applications."} {"_id": "aeaa7a8b4ef97bbf2b0ea2ad2ee88b13fcb2b797", "title": "LAN security: problems and solutions for Ethernet networks", "text": "Despite many research and development efforts in the area of data communications security, the importance of internal (local area network, LAN) security is still underestimated. This paper discusses why many traditional approaches to network security (e.g. firewalls, the modern IPSec or various application level protocols) are not so effective in local networks and proposes a prospective solution for building secure encrypted Ethernet LANs.
The architecture presented allows for the employment of the existing office network infrastructure, does not require changes to workstations\u2019 software, and provides a high level of protection. The core idea is to apply security measures in the network interface devices connecting individual computers to the network, i.e. the network interface cards (NICs). This eliminates the physical possibility of sending unprotected information through the network. Implementation details are given for data and key exchange, network management, and interoperability issues. An in-depth security analysis of the proposed architecture is presented and some conclusions are drawn. \u00a9 2000 Elsevier Science B.V. All rights reserved."} {"_id": "f8ccf1f4c8002ff35ea675122e7c04b35f2000b4", "title": "Fault tolerant three-phase AC motor drive topologies: a comparison of features, cost, and limitations", "text": "This paper compares the many fault tolerant three-phase ac motor drive topologies that have been proposed to provide output capacity for the inverter faults of switch short or open-circuits, phase-leg short-circuits, and single-phase open-circuits. Also included is a review of the respective control methods for fault tolerant inverters including two-phase and unipolar control methods. The output voltage and current space in terms of dq components is identified for each topology and fault. These quantities are then used to normalize the power capacity of each system during a fault to a standard inverter during normal operation. A silicon overrating cost factor is adopted as a metric to compare the relative switching device costs of the topologies compared to a standard three-phase inverter."} {"_id": "fabc3f0f51d2bdd1484e36b9b08221b52be6826f", "title": "Interactive Plan Explicability in Human-Robot Teaming", "text": "Human-robot teaming is one of the most important applications of artificial intelligence in the fast-growing field of robotics. For effective teaming, a robot must not only maintain a behavioral model of its human teammates to project the team status, but also be aware of its human teammates' expectation of itself. Being aware of the human teammates' expectation leads to robot behaviors that better align with the human expectation, thus facilitating more efficient and potentially safer teams. Our work addresses the problem of human-robot interaction with the consideration of such teammate models in sequential domains by leveraging the concept of plan explicability. In plan explicability, however, the human is considered solely as an observer. In this paper, we extend plan explicability to consider interactive settings where the human and robot's behaviors can influence each other. We term this new measure Interactive Plan Explicability (IPE). We compare the joint plan generated by our approach with the consideration of this measure using the fast forward (FF) planner, with the plan generated by FF without such consideration, as well as with the plan created with human subjects interacting with a robot running an FF planner. Since the human subject is expected to adapt to the robot's behavior dynamically when it deviates from her expectation, the plan created with human subjects is expected to be more explicable than the FF plan, and comparable to the explicable plan generated by our approach.
Results indicate that the explicability score of plans generated by our algorithm is indeed closer to that of the human interactive plan than to that of the plan generated by FF, implying that the plans generated by our algorithm align better with the expected plans of the human during execution. This can lead to more efficient collaboration in practice."} {"_id": "604ec6924397f793c3286f4e7e2b4ba17a27ee24", "title": "Neighboring gray level dependence matrix for texture classification", "text": "A new approach, neighboring gray level dependence matrix (NGLDM), for texture classification is presented. The major properties of this approach are as follows: (a) texture features can be easily computed; (b) they are essentially invariant under spatial rotation; (c) they are invariant under linear gray level transformation and can be made insensitive to monotonic gray level transformation. These properties have enhanced the practical applications of the texture features. The accuracies of the classification are comparable with those found in the literature."} {"_id": "2b664988a390637f53ade45a8a1c0cd9cc5f0628", "title": "A Fast Calibration Method for Triaxial Magnetometers", "text": "This paper presents a novel iterative calibration algorithm for triaxial magnetometers. The proposed algorithm estimates and compensates the effects of deterministic interference parameters using only nine distinct measurements. The results of our extensive simulations and empirical evaluations confirm that the proposed method outperforms conventional ellipsoid fitting based models both in terms of accuracy and reliability even in the presence of a moderate wideband noise. The algorithm also achieves considerably faster convergence, which makes it suitable for real-time applications. The algorithm performance is also shown to be independent of the initial guesses of the interference parameters."} {"_id": "2c1bfdd311742ec077cdc6e5367e34cfe0acc4c0", "title": "An integrated approach for warehouse analysis and optimization: A case study", "text": "The paper focuses on the analysis and optimization of production warehouses, proposing a novel approach to reduce inefficiencies which employs three lean manufacturing tools in an integrated and iterative framework. The proposed approach integrates the Unified Modeling Language (UML) \u2013 providing a detailed description of the warehouse logistics \u2013 the Value Stream Mapping (VSM) tool \u2013 identifying non-value adding activities \u2013 and a mathematical formulation of the so-called Genba Shikumi philosophy \u2013 ranking such system anomalies and assessing how they affect the warehouse. The subsequent reapplication of the VSM produces a complete picture of the reengineered warehouse, and using the UML tool allows describing in detail the updated system. By applying the presented methodology to the warehouse of an Italian interior design producer, we show that it represents a useful tool to systematically and dynamically improve warehouse management. Indeed, the application of the approach to the company leads to an innovative proposal for warehouse analysis and optimization: a warehouse management system that leads to increased profitability and quality as well as to reduced errors. \u00a9 2014 Elsevier B.V.
All rights reserved."} {"_id": "49e90a0110123d5644fb6840683e0a05d2f54371", "title": "Supervised Hash Coding With Deep Neural Network for Environment Perception of Intelligent Vehicles", "text": "Image content analysis is an important surround perception modality of intelligent vehicles. In order to efficiently recognize the on-road environment based on image content analysis from the large-scale scene database, relevant image retrieval becomes one of the fundamental problems. To improve the efficiency of calculating similarities between images, hashing techniques have received increasing attention. For most existing hash methods, suboptimal binary codes are generated, as the hand-crafted feature representation is not optimally compatible with the binary codes. In this paper, a one-stage supervised deep hashing framework (SDHP) is proposed to learn high-quality binary codes. A deep convolutional neural network is implemented, and we enforce the learned codes to meet the following criteria: 1) similar images should be encoded into similar binary codes, and vice versa; 2) the quantization loss from Euclidean space to Hamming space should be minimized; and 3) the learned codes should be evenly distributed. The method is further extended into SDHP+ to improve the discriminative power of binary codes. Extensive experimental comparisons with state-of-the-art hashing algorithms are conducted on CIFAR-10 and NUS-WIDE: the MAP of SDHP reaches 87.67% and 77.48% with 48 b, respectively, and the MAP of SDHP+ reaches 91.16% and 81.08% with 12 b and 48 b on CIFAR-10 and NUS-WIDE, respectively. This illustrates that the proposed method can markedly improve search accuracy."} {"_id": "10146dfeffa2be4267525ae74fda5c644205f497", "title": "Evidence and solution of over-RESET problem for HfOX based resistive memory with sub-ns switching speed and high endurance", "text": "The memory performance of the HfOX based bipolar resistive memory, including switching speed and memory reliability, is greatly improved in this work. A record high switching speed down to 300 ps is achieved. The cycling test sheds a clear light on the wearing behavior of resistance states, and the correlation between the over-RESET phenomenon and the worn low resistance state in the devices is discussed. A modified bottom electrode is proposed for the memory device to maintain the memory window and to endure resistive switching up to 10^10 cycles."} {"_id": "b83cdc50b0006066d7dbfd4f8bc1aeecf81ddf1c", "title": "Media and children's aggression, fear, and altruism.", "text": "Noting that the social and emotional experiences of American children today often heavily involve electronic media, Barbara Wilson takes a close look at how exposure to screen media affects children's well-being and development. She concludes that media influence on children depends more on the type of content that children find attractive than on the sheer amount of time they spend in front of the screen. Wilson begins by reviewing evidence on the link between media and children's emotions. She points out that children can learn about the nature and causes of different emotions from watching the emotional experiences of media characters and that they often experience empathy with those characters. Although research on the long-term effects of media exposure on children's emotional skill development is limited, a good deal of evidence shows that media exposure can contribute to children's fears and anxieties.
Both fictional and news programming can cause lasting emotional upset, though the themes that upset children differ according to a child's age. Wilson also explores how media exposure affects children's social development. Strong evidence shows that violent television programming contributes to children's aggressive behavior. And a growing body of work indicates that playing violent video games can have the same harmful effect. Yet if children spend time with educational programs and situation comedies targeted to youth, media exposure can have more prosocial effects by increasing children's altruism, cooperation, and even tolerance for others. Wilson also shows that children's susceptibility to media influence can vary according to their gender, their age, how realistic they perceive the media to be, and how much they identify with characters and people on the screen. She concludes with guidelines to help parents enhance the positive effects of the media while minimizing the risks associated with certain types of content."} {"_id": "8a53c7efcf3b0b0cee5e6dfcea0a3eb0b4b81b01", "title": "Curvature preserving fingerprint ridge orientation smoothing using Legendre polynomials", "text": "Smoothing fingerprint ridge orientation involves a principal discrepancy. Too little smoothing can result in noisy orientation fields (OF); too much smoothing will harm high curvature areas, especially singular points (SP). In this paper we present a fingerprint ridge orientation model based on Legendre polynomials. The motivation for the proposed method can be found by analysing the almost exclusively used method in the literature for orientation smoothing, proposed by Witkin and Kass (1987) more than two decades ago. The main contribution of this paper argues that the vectorial data (sine and cosine data) should be smoothed in a coupled way and the approximation error should not be evaluated employing vectorial data. For evaluating the proposed method we use a Poincare-Index based SP detection algorithm. The experiments show that, in comparison to competing methods, the proposed method has improved orientation smoothing capabilities, especially in high curvature areas."} {"_id": "68102f37d6da41530b63dfd232984f93d8685693", "title": "On the Generalization Ability of Online Strongly Convex Programming Algorithms", "text": "This paper examines the generalization properties of online convex programming algorithms when the loss function is Lipschitz and strongly convex. Our main result is a sharp bound, that holds with high probability, on the excess risk of the output of an online algorithm in terms of the average regret. This allows one to use recent algorithms with logarithmic cumulative regret guarantees to achieve fast convergence rates for the excess risk with high probability. As a corollary, we characterize the convergence rate of PEGASOS (with high probability), a recently proposed method for solving the SVM optimization problem."} {"_id": "0329e02fc187d9d479e6310fa85e3c948b09565d", "title": "A Tri-Space Visualization Interface for Analyzing Time-Varying Multivariate Volume Data", "text": "The dataset generated by a large-scale numerical simulation may include thousands of timesteps and hundreds of variables describing different aspects of the modeled physical phenomena. In order to analyze and understand such data, scientists need the capability to explore simultaneously in the temporal, spatial, and variable domains of the data.
Such capability, however, is not generally provided by conventional visualization tools. This paper presents a new visualization interface addressing this problem. The interface consists of three components which abstract the complexity of exploring the temporal, variable, and spatial domains, respectively. The first component displays time histograms of the data, helps the user identify timesteps of interest, and also helps specify time-varying features. The second component displays correlations between variables in parallel coordinates and enables the user to verify those correlations and possibly identify unanticipated ones. The third component allows the user to more closely explore and validate the data in the spatial domain while rendering multiple variables into a single visualization in a user-controllable fashion. Each of these three components is not only an interface but is also the visualization itself, thus enabling efficient screen-space usage. The three components are tightly linked to facilitate tri-space data exploration, which offers scientists new power to study their time-varying, multivariate volume data."} {"_id": "2abf2c3e7ebed04e8c09e478157372dda5cb8bc5", "title": "Real-time rigid-body visual tracking in a scanning electron microscope", "text": "Robotics continues to provide researchers with an increasing ability to interact with objects at the nano scale. As micro- and nanorobotic technologies mature, more interest is given to computer-assisted or automated approaches to manipulation at these scales. Although actuators are currently available that enable displacement resolutions in the subnanometer range, improvements in feedback technologies have not kept pace. Thus, many actuators that are capable of performing nanometer displacements are limited in automated tasks by the lack of suitable feedback mechanisms. This paper proposes the use of a rigid-model-based method for end effector tracking in a scanning electron microscope to aid in enabling more precise automated manipulations and measurements. These models allow the system to leverage domain-specific knowledge to increase performance in a challenging tracking environment."} {"_id": "2c6c6d3c94322e9ff75ff2143f7028bfab2b3c5f", "title": "Extension of phase correlation to subpixel registration", "text": "In this paper, we have derived analytic expressions for the phase correlation of downsampled images. We have shown that for downsampled images the signal power in the phase correlation is not concentrated in a single peak, but rather in several coherent peaks mostly adjacent to each other. These coherent peaks correspond to the polyphase transform of a filtered unit impulse centered at the point of registration. The analytic results provide a closed-form solution to subpixel translation estimation, and are used for detailed error analysis. Excellent results have been obtained for subpixel translation estimation of images of different nature and across different spectral bands."} {"_id": "42d60f7faaa2f6fdd2b928c352d65eb57b4791aa", "title": "Improving resolution by image registration", "text": "Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography.
Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence."} {"_id": "90614cea8c2ab2bff0343231a26d6d0c9315d6c7", "title": "A Comparison of Affine Region Detectors", "text": "The paper gives a snapshot of the state of the art in affine covariant region detectors, and compares their performance on a set of test images under varying imaging conditions. Six types of detectors are included: detectors based on affine normalization around Harris (Mikolajczyk and Schmid, 2002; Schaffalitzky and Zisserman, 2002) and Hessian points (Mikolajczyk and Schmid, 2002), a detector of \u2018maximally stable extremal regions\u2019, proposed by Matas et al. (2002); an edge-based region detector (Tuytelaars and Van Gool, 1999) and a detector based on intensity extrema (Tuytelaars and Van Gool, 2000), and a detector of \u2018salient regions\u2019, proposed by Kadir, Zisserman and Brady (2004). The performance is measured against changes in viewpoint, scale, illumination, defocus and image compression. The objective of this paper is also to establish a reference test set of images and performance software, so that future detectors can be evaluated in the same framework."} {"_id": "db0851926cabc588e1cecb879ed7ce0ce4b9a298", "title": "PROCEDURAL GENERATION OF IMAGINATIVE TREES USING A SPACE COLONIZATION ALGORITHM", "text": "The modeling of trees is challenging due to their complex branching structures. Three different ways to generate trees are: using real world data for reconstruction, interactive modeling methods, and modeling with procedural or rule-based systems. Procedural content generation is the idea of using algorithms to automate content creation processes, and it is useful in plant modeling since it can generate a wide variety of plants that can adapt and react to the environment and changing conditions. This thesis focuses on and extends a procedural tree generation technique that uses a space colonization algorithm to model the tree branches\u2019 competition for space, and shifts the previous works\u2019 focus from realism to fantasy. The technique satisfies the idea of using interaction between the tree\u2019s internal and external factors to determine its final shape, by letting the designer control the where and the how of the tree\u2019s growth process. The implementation resulted in a tree generation application where the user\u2019s imagination decides the limit of what can be produced; if that limit is reached, the application can be used to randomly generate a wide variety of trees and tree-like structures. A motivation for many researchers in the procedural content generation area is how it can be used to augment human imagination. The result of this thesis can be used for that: by stepping away from the restrictions of realism, it lets the user easily generate widely diverse trees that are not necessarily realistic but, in most cases, adapt to the idea of a tree."} {"_id": "a6f72b3dabe64d1ede9e1f68d1bb11750bcc5166", "title": "Thin dual-resonant stacked shorted patch antenna for mobile communications", "text": "Introduction: Short-circuited microstrip patch antennas, or PIFAs, can be used as directive internal handset antennas in various communication systems.
They provide advantages over traditional external whip and helix antennas in terms of increased total efficiency (when the handset is near the user\u2019s head), decreased radiation towards the user, and increased mechanical reliability. The main disadvantage is their narrow impedance bandwidth compared to the volume occupied by the antenna structure. Typical requirements for directive internal antennas designed for cellular handsets (with a thickness \u2264 20mm) are antenna thickness \u2264 5mm and bandwidth \u2265 10%. It is difficult to design such a small, directive, low-profile handset antenna with high radiation efficiency and an impedance bandwidth greater than 10%. Small antenna design is always a compromise between size, bandwidth, and efficiency. Often, the price for improved performance is increased complexity. If the total dimensions of a patch antenna element are fixed, there are two effective ways to enhance its bandwidth: either dissipative loading [1], which reduces the radiation efficiency, or the addition of more resonators into the antenna structure (matching networks or parasitic elements). The latter method may increase the manufacturing complexity of the antenna. In this Letter, a thin dual-resonant stacked shorted patch antenna is presented. Previously, stacked short-circuited patch antenna elements have been reported in [2, 3]. More recently, similar elements have also been discussed in [4, 5]. The presented antenna is dual-resonant and has a very low profile. The thickness is only one fifth of that of the antenna reported in [5], whereas the surface area reserved for the patches is approximately equal. The small size of the antenna presented in this Letter makes it relatively easy to fit inside a cellular handset. The bandwidth is sufficient for many communication systems. With a common short circuit element which connects both patches to the ground plane, the presented antenna is the simplest example of a stacked shorted microstrip patch antenna. No extra tuning or shorting posts inside the patch area are needed, which makes its manufacturing easier."} {"_id": "8cc7f8891cf1ed5d22e74ff6fc57d1c23faf9048", "title": "Mapping continued brain growth and gray matter density reduction in dorsal frontal cortex: Inverse relationships during postadolescent brain maturation.", "text": "Recent in vivo structural imaging studies have shown spatial and temporal patterns of brain maturation between childhood, adolescence, and young adulthood that are generally consistent with postmortem studies of cellular maturational events such as increased myelination and synaptic pruning. In this study, we conducted detailed spatial and temporal analyses of growth and gray matter density at the cortical surface of the brain in a group of 35 normally developing children, adolescents, and young adults. To accomplish this, we used high-resolution magnetic resonance imaging and novel computational image analysis techniques. For the first time, in this report we have mapped the continued postadolescent brain growth that occurs primarily in the dorsal aspects of the frontal lobe bilaterally and in the posterior temporo-occipital junction bilaterally. 
Notably, maps of the spatial distribution of postadolescent cortical gray matter density reduction are highly consistent with maps of the spatial distribution of postadolescent brain growth, showing an inverse relationship between cortical gray matter density reduction and brain growth primarily in the superior frontal regions that control executive cognitive functioning. Inverse relationships are not as robust in the posterior temporo-occipital junction, where gray matter density reduction is much less prominent despite late brain growth in these regions between adolescence and adulthood. Overall brain growth is not significant between childhood and adolescence, but close spatial relationships between gray matter density reduction and brain growth are observed in the dorsal parietal and frontal cortex. These results suggest that progressive cellular maturational events, such as increased myelination, may play as prominent a role during the postadolescent years as regressive events, such as synaptic pruning, in determining the ultimate density of mature frontal lobe cortical gray matter."} {"_id": "4b8b23c8b2364e25f2bc4e2ad7c8737797f59d19", "title": "Broadband antenna design using different 3D printing technologies and metallization processes", "text": "The purpose of this paper is to provide a comprehensive evaluation of how plastic materials, different metallization processes, and the thickness of the metal can influence the performance of 3D printed broadband antenna structures. A set of antennas was manufactured using Fused Deposition Modeling technology and Polyjet technology. Three different plastic materials, ABS, PLA and Vero, were employed in the tests. The antenna structures were metallized using three common metallization processes: vacuum metallization, electroplating, and conductive paint. In this project, the broadband performances of the metallized plastic antennas are compared to an original metal antenna by measuring VSWR, radiation patterns (H- and E-planes), and gain. Measurements show that the performances of these plastic structures, regardless of production method, are very similar to the original metal antenna."} {"_id": "99ad0533f84c110da2d0713d5798e6e14080b159", "title": "Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences", "text": "We present a reading comprehension challenge in which questions can only be answered by taking into account information from multiple sentences. We solicit and verify questions and answers for this challenge through a 4-step crowdsourcing experiment. Our challenge dataset contains \u223c6k questions for +800 paragraphs across 7 different domains (elementary school science, news, travel guides, fiction stories, etc.), bringing linguistic diversity to the texts and to the questions' wordings. On a subset of our dataset, we found human solvers to achieve an F1-score of 86.4%. We analyze a range of baselines, including a recent state-of-the-art reading comprehension system, and demonstrate the difficulty of this challenge, despite a high human performance. The dataset is the first to study multi-sentence inference at scale, with an open-ended set of question types that requires reasoning skills."} {"_id": "25eb08e6985ded20ae723ec668014a2bad789e0f", "title": "Kd-Jump: a Path-Preserving Stackless Traversal for Faster Isosurface Raytracing on GPUs", "text": "Stackless traversal techniques are often used to circumvent memory bottlenecks by avoiding a stack and replacing return traversal with extra computation.
This paper addresses whether the stackless traversal approaches are useful on newer hardware and technology (such as CUDA). To this end, we present a novel stackless approach for implicit kd-trees, which exploits the benefits of index-based node traversal, without incurring extra node visitation. This approach, which we term Kd-Jump, enables the traversal to immediately return to the next valid node, like a stack, without incurring extra node visitation (kd-restart). Also, Kd-Jump does not require global memory (stack) at all and only requires a small matrix in fast constant-memory. We report that Kd-Jump outperforms a stack by 10 to 20% and kd-restart by 100%. We also present a Hybrid Kd-Jump, which utilizes a volume stepper for leaf testing and a run-time depth threshold to define where kd-tree traversal stops and volume-stepping occurs. By using both methods, we gain the benefits of empty space removal, fast texture-caching and real-time ability to determine the best threshold for current isosurface and view direction."} {"_id": "03bf26d72d8cc0cf401c31e31c242e1894bd0890", "title": "Focused Trajectory Planning for autonomous on-road driving", "text": "On-road motion planning for autonomous vehicles is in general a challenging problem. Past efforts have proposed solutions for urban and highway environments individually. We identify the key advantages/shortcomings of prior solutions, and propose a novel two-step motion planning system that addresses both urban and highway driving in a single framework. Reference Trajectory Planning (I) makes use of dense lattice sampling and optimization techniques to generate an easy-to-tune and human-like reference trajectory accounting for road geometry, obstacles and high-level directives. By focused sampling around the reference trajectory, Tracking Trajectory Planning (II) generates, evaluates and selects parametric trajectories that further satisfy kinodynamic constraints for execution. The described method retains most of the performance advantages of an exhaustive spatiotemporal planner while significantly reducing computation."} {"_id": "86b164233e22e9edbe56ba32c48e20e53c9a71b6", "title": "Pattern Recognition for Downhole Dynamometer Card in Oil Rod Pump System using Artificial Neural Networks", "text": "This paper presents the development of an Artificial Neural Network system for Dynamometer Card pattern recognition in oil well rod pump systems. It covers the establishment of pattern classes and a set of standards for training and validation, the study of descriptors which allow the design and implementation of the feature extractor, training, analysis and finally the validation and performance test with"} {"_id": "3bb0a4630a7055858d669b9be194c643a95af3e2", "title": "Relations such as Hypernymy: Identifying and Exploiting Hearst Patterns in Distributional Vectors for Lexical Entailment", "text": "We consider the task of predicting lexical entailment using distributional vectors. We focus experiments on one previous classifier which was shown to only learn to detect prototypicality of a word pair. Analysis shows that the model single-mindedly learns to detect Hearst Patterns, which are well known to be predictive of lexical relations.
We present a new model which exploits this Hearst Detector functionality, matching or outperforming prior work on multiple data sets."} {"_id": "9006a6b2544162c4bb66cc7f041208e0fe4c0359", "title": "A Subsequence Interleaving Model for Sequential Pattern Mining", "text": "Recent sequential pattern mining methods have used the minimum description length (MDL) principle to define an encoding scheme which describes an algorithm for mining the most compressing patterns in a database. We present a novel subsequence interleaving model based on a probabilistic model of the sequence database, which allows us to search for the most compressing set of patterns without designing a specific encoding scheme. Our proposed algorithm is able to efficiently mine the most relevant sequential patterns and rank them using an associated measure of interestingness. The efficient inference in our model is a direct result of our use of a structural expectation-maximization framework, in which the expectation-step takes the form of a submodular optimization problem subject to a coverage constraint. We show on both synthetic and real world datasets that our model mines a set of sequential patterns with low spuriousness and redundancy, high interpretability and usefulness in real-world applications. Furthermore, we demonstrate that the quality of the patterns from our approach is comparable to, if not better than, existing state of the art sequential pattern mining algorithms."} {"_id": "58a28a12d9cf44bf7a0504b6a9c3194ca210659a", "title": "Towards a theory of supply chain management : the constructs and measurements", "text": "Rising international cooperation, vertical disintegration, along with a focus on core activities have led to the notion that firms are links in a networked supply chain. This novel perspective has created the challenge of designing and managing a network of interdependent relationships developed and fostered through strategic collaboration. Although research interests in supply chain management (SCM) are growing, no research has been directed towards a systematic development of SCM instruments. This study identifies and consolidates various supply chain initiatives and factors to develop key SCM constructs conducive to advancing the field. To this end, we analyzed over 400 articles and synthesized the large, fragmented body of work dispersed across many disciplines. The result of this study, through successive stages of measurement analysis and refinement, is a set of reliable, valid, and unidimensional measurements that can be subsequently used in different contexts to refine or extend conceptualization and measurements or to test various theoretical models, paving the way for theory building in SCM. \u00a9 2004 Elsevier B.V. All rights reserved."} {"_id": "14b6b458a931888c7629d47d3e158acd0da13f02", "title": "Sieve: Cryptographically Enforced Access Control for User Data in Untrusted Clouds", "text": "Modern web services rob users of low-level control over cloud storage; a user\u2019s single logical data set is scattered across multiple storage silos whose access controls are set by the web services, not users. The result is that users lack the ultimate authority to determine how their data is shared with other web services. In this thesis, we introduce Sieve, a new architecture for selectively exposing user data to third party web services in a provably secure manner. 
Sieve starts with a user-centric storage model: each user uploads encrypted data to a single cloud store, and by default, only the user knows the decryption keys. Given this storage model, Sieve defines an infrastructure to support rich, legacy web applications. Using attribute-based encryption, Sieve allows users to define intuitive, understandable access policies that are cryptographically enforceable. Using key homomorphism, Sieve can re-encrypt user data on storage providers in situ, revoking decryption keys from web services without revealing new ones to the storage provider. Using secret sharing and two-factor authentication, Sieve protects against the loss of user devices like smartphones and laptops. The result is that users can enjoy rich, legacy web applications, while benefitting from cryptographically strong controls over what data the services can access."} {"_id": "a6bff7b89af697508d23353148530ea43c6c36e1", "title": "Segmentation of Urban Areas Using Road Networks", "text": "Region-based analysis is fundamental and crucial in many geospatial-related applications and research themes, such as trajectory analysis, human mobility study and urban planning. In this paper, we report on an image-processing-based approach to segment urban areas into regions by road networks. Here, each segmented region is bounded by the high-level road segments, covering some neighborhoods and low-level streets. Typically, road segments are classified into different levels (e.g., highways and expressways are usually high-level roads), providing us with a more natural and semantic segmentation of urban spaces than the grid-based partition method. We show that through simple morphological operators, an urban road network can be efficiently segmented into regions. In addition, we present a case study in trajectory mining to demonstrate the usability of the proposed segmentation method."} {"_id": "00ae3f736b28e2050e23acc65fcac1a516635425", "title": "Collaborative Deep Learning Across Multiple Data Centers", "text": "Valuable training data is often owned by independent organizations and located in multiple data centers. Most deep learning approaches require centralizing the multi-datacenter data for performance purposes. In practice, however, it is often infeasible to transfer all data to a centralized data center due not only to bandwidth limitations but also to the constraints of privacy regulations. Model averaging is a conventional choice for data-parallelized training, but its ineffectiveness is claimed by previous studies as deep neural networks are often non-convex. In this paper, we argue that model averaging can be effective in the decentralized environment by using two strategies, namely, the cyclical learning rate and the increased number of epochs for local model training. With the two strategies, we show that model averaging can provide competitive performance in the decentralized mode compared to the data-centralized one. In a practical environment with multiple data centers, we conduct extensive experiments using state-of-the-art deep network architectures on different types of data.
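The two strategies named in the multi-data-center abstract above compose naturally into one communication round; a schematic numpy sketch follows, where the local SGD step and shard format are placeholders rather than the paper's setup:

```python
import numpy as np

def cyclical_lr(step, base=0.001, peak=0.1, cycle=100):
    """Triangular cyclical learning rate: ramps base -> peak -> base each cycle."""
    phase = abs((step % cycle) / (cycle / 2.0) - 1.0)   # 1 -> 0 -> 1
    return base + (peak - base) * (1.0 - phase)

def communication_round(weights, shards, local_steps, local_sgd):
    """One round of decentralized training: every data center runs several
    local steps on its own shard (raw data never leaves the center), then
    only the parameter vectors are averaged."""
    local_models = []
    for shard in shards:
        w = weights.copy()
        for step in range(local_steps):
            w = local_sgd(w, shard, cyclical_lr(step))  # placeholder SGD step
        local_models.append(w)
    return np.mean(local_models, axis=0)                # model averaging
```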
Results demonstrate the effectiveness and robustness of the proposed method."} {"_id": "2abcd3b82f51722a1ef25d7f408c06f43210c809", "title": "An improved box-counting method for image fractal dimension estimation", "text": ""} {"_id": "d05e42fa59d1f20f0d60ae89225b826204ef1216", "title": "BanditSum: Extractive Summarization as a Contextual Bandit", "text": "In this work, we propose a novel method for training neural networks to perform single-document extractive summarization without heuristically-generated extractive labels. We call our approach BANDITSUM as it treats extractive summarization as a contextual bandit (CB) problem, where the model receives a document to summarize (the context), and chooses a sequence of sentences to include in the summary (the action). A policy gradient reinforcement learning algorithm is used to train the model to select sequences of sentences that maximize ROUGE score. We perform a series of experiments demonstrating that BANDITSUM is able to achieve ROUGE scores that are better than or comparable to the state-of-the-art for extractive summarization, and converges using significantly fewer update steps than competing approaches. In addition, we show empirically that BANDITSUM performs significantly better than competing approaches when good summary sentences appear late in the source document."} {"_id": "5dc612abb0b26c734047af8ae8a764819a6f579e", "title": "Introduction to the special section on educational data mining", "text": "Educational Data Mining (EDM) is an emerging multidisciplinary research area, in which methods and techniques for exploring data originating from various educational information systems have been developed. EDM is both a learning science and a rich application area for data mining, due to the growing availability of educational data. EDM contributes to the study of how students learn, and the settings in which they learn. It enables data-driven decision making for improving the current educational practice and learning material. We present a brief overview of EDM and introduce four selected EDM papers representing a crosscut of different application areas for data mining in education."} {"_id": "34dadeab830f902edb1213f81336f5cfbf753dcf", "title": "Polynomial method for Procedural Terrain Generation", "text": "A systematic fractal Brownian motion approach is proposed for generating coherent noise, aiming at procedurally generating realistic terrain and textures. Two models are tested and compared to the Perlin noise method for two-dimensional height map generation. A fractal analysis is performed in order to compare fractal behaviour of generated data to real terrain coastlines from the point of view of fractal dimension. Performance analysis shows that one of the described schemes requires half as many primitive operations as Perlin noise while producing data of equivalent quality."} {"_id": "9b678aa28facf4f90081d41c2c484c6addddb86d", "title": "Fully Convolutional Attention Networks for Fine-Grained Recognition", "text": "Fine-grained recognition is challenging due to its subtle local inter-class differences versus large intra-class variations such as poses. A key to address this problem is to localize discriminative parts to extract pose-invariant features. However, ground-truth part annotations can be expensive to acquire. Moreover, it is hard to define parts for many fine-grained classes.
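Since the box-counting record above retained only front matter, the classic method that the paper improves on is worth sketching: count the grid boxes that touch the set at several scales, then regress log counts against log scale. This is the textbook baseline, not the paper's improved variant, and it assumes a non-empty binary image at every scale.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Classic box-counting estimate of fractal dimension for a binary image:
    the slope of log N(s) versus log(1/s) over several box sizes s."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))   # boxes touching the set
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```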
This work introduces Fully Convolutional Attention Networks (FCANs), a reinforcement learning framework to optimally glimpse local discriminative regions adaptive to different fine-grained domains. Compared to previous methods, our approach enjoys three advantages: 1) the weakly-supervised reinforcement learning procedure requires no expensive part annotations; 2) the fully-convolutional architecture speeds up both training and testing; 3) the greedy reward strategy accelerates the convergence of the learning. We demonstrate the effectiveness of our method with extensive experiments on four challenging fine-grained benchmark datasets, including CUB-200-2011, Stanford Dogs, Stanford Cars and Food-101."} {"_id": "18163ea60e83cd3266150bd4b7d7fd3a849ee2db", "title": "Consumption of ultra-processed foods and likely impact on human health. Evidence from Canada.", "text": "OBJECTIVE\nTo investigate consumption of ultra-processed products in Canada and to assess their association with dietary quality.\n\n\nDESIGN\nApplication of a classification of foodstuffs based on the nature, extent and purpose of food processing to data from a national household food budget survey. Foods are classified as unprocessed/minimally processed foods (Group 1), processed culinary ingredients (Group 2) or ultra-processed products (Group 3).\n\n\nSETTING\nAll provinces and territories of Canada, 2001.\n\n\nSUBJECTS\nHouseholds (n 5643).\n\n\nRESULTS\nFood purchases provided a mean per capita energy availability of 8908 (se 81) kJ/d (2129 (se 19) kcal/d). Over 61\u00b77 % of dietary energy came from ultra-processed products (Group 3), 25\u00b76 % from Group 1 and 12\u00b77 % from Group 2. The overall diet exceeded WHO upper limits for fat, saturated fat, free sugars and Na density, with less fibre than recommended. It also exceeded the average energy density target of the World Cancer Research Fund/American Institute for Cancer Research. Group 3 products taken together are more fatty, sugary, salty and energy-dense than a combination of Group 1 and Group 2 items. Only the 20 % lowest consumers of ultra-processed products (who consumed 33\u00b72 % of energy from these products) were anywhere near reaching all nutrient goals for the prevention of obesity and chronic non-communicable diseases.\n\n\nCONCLUSIONS\nThe 2001 Canadian diet was dominated by ultra-processed products. As a group, these products are unhealthy. The present analysis indicates that any substantial improvement of the diet would involve much lower consumption of ultra-processed products and much higher consumption of meals and dishes prepared from minimally processed foods and processed culinary ingredients."} {"_id": "4157e45f616233a0874f54a59c3df001b9646cd7", "title": "Diagnostically relevant facial gestalt information from ordinary photos", "text": "Craniofacial characteristics are highly informative for clinical geneticists when diagnosing genetic diseases. As a first step towards the high-throughput diagnosis of ultra-rare developmental diseases we introduce an automatic approach that implements recent developments in computer vision. This algorithm extracts phenotypic information from ordinary non-clinical photographs and, using machine learning, models human facial dysmorphisms in a multidimensional 'Clinical Face Phenotype Space'. The space locates patients in the context of known syndromes and thereby facilitates the generation of diagnostic hypotheses. 
Consequently, the approach will aid clinicians by greatly narrowing (by 27.6-fold) the search space of potential diagnoses for patients with suspected developmental disorders. Furthermore, this Clinical Face Phenotype Space allows the clustering of patients by phenotype even when no known syndrome diagnosis exists, thereby aiding disease identification. We demonstrate that this approach provides a novel method for inferring causative genetic variants from clinical sequencing data through functional genetic pathway comparisons. DOI: http://dx.doi.org/10.7554/eLife.02020.001."} {"_id": "fef80706b29fc2345c8b7aa53af1076a6afff5f1", "title": "Effect of Control-Loops Interactions on Power Stability Limits of VSC Integrated to AC System", "text": "This paper investigates the effect of control-loops interactions on power stability limits of the voltage-source converter (VSC) as connected to an ac system. The focus is put on the physical mechanism of the control-loops interactions in the VSC, revealing that interactions among the control loops result in the production of an additional loop. The open-loop gain of the additional loop is employed to quantify the severity of the control-loop interactions. Furthermore, the power current sensitivity, closely related to control-loops interactions, is applied to estimate the maximum transferrable power of the VSC connected to an ac grid. On that basis, stability analysis results show that interactions between dc-link voltage control and phase-locked loop restrict the power angle to about 51\u00b0 for stable operation with no dynamic reactive power supported. Conversely, the system is capable of approaching the ac-side maximum power transfer limits with alternating voltage control included. Simulations in MATLAB/Simulink are conducted to validate the stability analysis."} {"_id": "461ac81b6ce10d48a6c342e64c59f86d7566fa68", "title": "Social network sites: definition, history, and scholarship", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."} {"_id": "c03fb606432af6637d9d7d31f447e62a855b77a0", "title": "Facebook in higher education promotes social but not academic engagement", "text": "Although there is evidence that academically successful students are engaged with their studies, it has proved difficult to define student engagement clearly. Student engagement is commonly construed as having two dimensions, social and academic. The rapid adoption of social media and digital technologies has ensured increasing interest in using them for improving student engagement. This paper examines Facebook usage among a first year psychology student cohort and reports that although the majority of students (94%) had Facebook accounts and spent an average of one hour per day on Facebook, usage was found to be predominantly social. Personality factors influenced usage patterns, with more conscientious students tending to use Facebook less than less conscientious students. This paper argues that, rather than promoting social engagement in a way that might increase academic engagement, it appears that Facebook is more likely to operate as a distracting influence."} {"_id": "171071069cb3b58cfe8e38232c25bfa99f1fbdf5", "title": "Self-Presentation 2.0: Narcissism and Self-Esteem on Facebook", "text": "Online social networking sites have revealed an entirely new method of self-presentation.
This cyber social tool provides a new site of analysis to examine personality and identity. The current study examines how narcissism and self-esteem are manifested on the social networking Web site Facebook.com. Self-esteem and narcissistic personality self-reports were collected from 100 Facebook users at York University. Participant Web pages were also coded based on self-promotional content features. Correlation analyses revealed that individuals higher in narcissism and lower in self-esteem were associated with greater online activity as well as some self-promotional content. Gender differences were found to influence the type of self-promotional content presented by individual Facebook users. Implications and future research directions of narcissism and self-esteem on social networking Web sites are discussed."} {"_id": "5e30227914559ce088a750885761adbb7d2edbbf", "title": "A privacy paradox: Social networking in the United States", "text": "Teenagers will freely give up personal information to join social networks on the Internet. Afterwards, they are surprised when their parents read their journals. Communities are outraged by the personal information posted by young people online and colleges keep track of student activities on and off campus. The posting of personal information by teens and students has consequences. This article will discuss the uproar over privacy issues in social networks by describing a privacy paradox; private versus public space; and, social networking privacy issues. It will finally discuss proposed privacy solutions and steps that can be taken to help resolve the privacy paradox."} {"_id": "95cbfbae52245f7c737f345daa774e6c2a5b7cc5", "title": "The effects of modern mathematics computer games on mathematics achievement and class motivation", "text": "This study examined the effects of a computer game on students' mathematics achievement and motivation, and the role of prior mathematics knowledge, computer skill, and English language skill on their achievement and motivation as they played the game. A total of 193 students and 10 teachers participated in this study. The teachers were randomly assigned to experimental and control groups. A mixed method combining quantitative measures and interviews was used, with Multivariate Analysis of Co-Variance to analyze the data. The results indicated significant improvement in the achievement of the experimental versus the control group. No significant improvement was found in the motivation of the groups. Students who played the games in their classrooms and school labs reported greater motivation compared to the ones who played the games only in the school labs. Prior knowledge, computer and English language skill did not play significant roles in achievement and motivation of the experimental group. 2010 Elsevier Ltd. All rights reserved."} {"_id": "50733aebb9fa0397f6fda33f39d4827feb74a0a6", "title": "Improving Matching Models with Contextualized Word Representations for Multi-turn Response Selection in Retrieval-based Chatbots", "text": "We consider matching with pre-trained contextualized word vectors for multi-turn response selection in retrieval-based chatbots. When directly applied to the task, state-of-the-art models, such as CoVe and ELMo, do not work as well as they do on other tasks, due to the hierarchical nature, casual language, and domain-specific word use of conversations.
To tackle the challenges, we propose pre-training sentence-level and session-level contextualized word vectors by learning a dialogue generation model from large-scale human-human conversations with a hierarchical encoder-decoder architecture. The two levels of vectors are then integrated into the input layer and the output layer of a matching model respectively. Experimental results on two benchmark datasets indicate that the proposed contextualized word vectors can significantly and consistently improve the performance of existing matching models for response selection."} {"_id": "40d50a0cb39012bb1aae8b2a8358d41e1786df87", "title": "Low-rank matrix completion using alternating minimization", "text": "Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge [17].\n In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. X = UV\u2020; the algorithm then alternates between finding the best U and the best V. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and is prone to local minima. In fact, there has been almost no theoretical understanding of when this approach yields a good result.\n In this paper we present one of the first theoretical analyses of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a significantly simpler analysis."} {"_id": "6c394f5eecc0371b43331b54ed118c8637b8b60d", "title": "A general design formula of multi-section power divider based on singly terminated filter design theory", "text": "A novel design formula of multi-section power divider is derived to obtain wide isolation performance. The derived design formula is based on the singly terminated filter design theory. This paper presents several simulation and experimental results of multi-section power divider to show the validity of the proposed design formula. Experiments show excellent performance of multi-section power divider with multi-octave isolation characteristic."} {"_id": "2de68ac42d7b54b3ef0596c18e3d4a2b3a274f72", "title": "ASAP: Prioritizing Attention via Time Series Smoothing", "text": "Time series visualization of streaming telemetry (i.e., charting of key metrics such as server load over time) is increasingly prevalent in modern data platforms and applications. However, many existing systems simply plot the raw data streams as they arrive, often obscuring large-scale trends due to small-scale noise. We propose an alternative: to better prioritize end users\u2019 attention, smooth time series visualizations as much as possible to remove noise, while retaining large-scale structure to highlight significant deviations.
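The alternate-and-solve loop from the matrix-completion abstract above is short enough to sketch. Each half-step is a ridge least-squares solve over observed entries only; the ridge term and mask convention are standard additions of mine, and every row and column is assumed to contain at least one observation.

```python
import numpy as np

def als_complete(M, observed, rank=10, iters=50, lam=1e-3):
    """Alternating minimization for matrix completion: write X = U @ V.T,
    then alternate two convex regression solves (rows of U, then rows of V)
    restricted to the observed entries. 'observed' is a boolean mask."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        for i in range(m):                       # best U given V
            c = observed[i]
            G = V[c].T @ V[c] + lam * np.eye(rank)
            U[i] = np.linalg.solve(G, V[c].T @ M[i, c])
        for j in range(n):                       # best V given U
            r = observed[:, j]
            G = U[r].T @ U[r] + lam * np.eye(rank)
            V[j] = np.linalg.solve(G, U[r].T @ M[r, j])
    return U @ V.T
```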
We develop a new analytics operator called ASAP that automatically smooths streaming time series by adaptively optimizing the trade-off between noise reduction (i.e., variance) and trend retention (i.e., kurtosis). We introduce metrics to quantitatively assess the quality of smoothed plots and provide an efficient search strategy for optimizing these metrics that combines techniques from stream processing, user interface design, and signal processing via autocorrelation-based pruning, pixel-aware preaggregation, and on-demand refresh. We demonstrate that ASAP can improve users\u2019 accuracy in identifying long-term deviations in time series by up to 38.4% while reducing response times by up to 44.3%. Moreover, ASAP delivers these results several orders of magnitude faster than alternative search strategies."} {"_id": "506dd23fd3d39fb3f9e674a9c95caa9b0440f92f", "title": "LD-BSCA: A local-density based spatial clustering algorithm", "text": "Density-based clustering algorithms are very powerful for discovering arbitrary-shaped clusters in large spatial databases. However, in many cases, varied local-density clusters exist in different regions of the data space. In this paper, a new algorithm LD-BSCA is proposed, introducing the concept of a local MinPts (a minimum number of points) and a new cluster expanding condition: ExpandConClId (Expanding Condition of the ClId-th Cluster). We minimize the algorithm input down to only one parameter and let the local MinPts vary as clusters change from one to another. Experiments show the LD-BSCA algorithm is powerful in discovering all clusters in gradient-distributed databases. In addition, we introduce an efficient searching method to reduce the runtime of our algorithm. Using several databases, we demonstrate the high quality of the proposed algorithm in clustering the implicit knowledge in asymmetric distribution databases."} {"_id": "74cbdce4e28ea42d0b7168e91effe450460a2d09", "title": "An Optimal Algorithm for Finding the Kernel of a Polygon", "text": "The kernel K(P) of a simple polygon P with n vertices is the locus of the points internal to P from which all vertices of P are visible. Equivalently, K(P) is the intersection of appropriate half-planes determined by the polygon's edges. Although it is known that to find the intersection of n generic half-planes requires time O(n log n), we show that one can exploit the ordering of the half-planes corresponding to the sequence of the polygon's edges to obtain a kernel finding algorithm which runs in time O(n) and is therefore optimal"} {"_id": "4aa638ebc72c47841fbe55b0a57efedd7d70cbc1", "title": "Teachers' perceptions of the barriers to technology integration and practices with technology under situated professional development", "text": "This case study examines 18 elementary school teachers\u2019 perceptions of the barriers to technology integration (access, vision, professional development, time, and beliefs) and instructional practices with technology after two years of situated professional development. Months after transitioning from mentoring to teacher-led communities of practice, teachers continued to report positive perceptions of several barriers and were observed engaging in desirable instructional practices. Interviews suggest that the situated professional development activities helped create an environment that supported teachers\u2019 decisions to integrate technology.
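Returning to the ASAP abstract above, the variance/kurtosis trade-off it optimizes can be stated as a naive grid search over sliding-mean windows; ASAP's contribution is pruning this search (autocorrelation, pixel-aware preaggregation), which the sketch below deliberately omits.

```python
import numpy as np
from scipy.stats import kurtosis

def asap_window(series, max_window=200):
    """Pick the sliding-mean window that minimizes variance (noise) while
    keeping the smoothed series' kurtosis at least that of the raw series
    (so large deviations stay visible). Naive O(n * max_window) form."""
    x = np.asarray(series, dtype=float)
    k0, best_w, best_var = kurtosis(x), 1, np.var(x)
    for w in range(2, min(max_window, len(x) // 2) + 1):
        sm = np.convolve(x, np.ones(w) / w, mode="valid")
        if kurtosis(sm) >= k0 and np.var(sm) < best_var:
            best_w, best_var = w, np.var(sm)
    return best_w
```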
Implications for teacher professional development and technology integration are discussed in conjunction with the strengths and limitations of the study. 2012 Elsevier Ltd. All rights reserved."} {"_id": "c8ac5e08d8e2dfb66982de431f429b900248a8da", "title": "Looking at Vehicles in the Night : Detection & Dynamics of Rear Lights", "text": "Existing nighttime vehicle detection methods use color as the primary cue for detecting vehicles. However, complex road and ambient lighting conditions, and camera configurations can influence the effectiveness of such explicit rule and threshold based methods. In this paper, there are three main contributions. First, we present a novel method to detect vehicles during nighttime involving both learned classifiers and explicit rules, which can operate in the presence of varying ambient lighting conditions. The proposed method, titled VeDANt (Vehicle Detection using Active-learning during Nighttime), employs a modified form of active learning for training Adaboost classifiers with Haar-like features using gray-level input images. The hypothesis windows are then further verified using the proposed techniques involving perspective geometries and color information of the taillights. Second, VeDANt is extended to analyze the dynamics of the vehicles during nighttime by detecting three taillight activities: braking, turning left and turning right. Third, we release three new and fully annotated LISA-Night datasets with over 5000 frames for evaluation and benchmarking, which capture a variety of complex traffic and lighting conditions. Such comprehensively annotated and complex public datasets are a first in the area of nighttime vehicle detection. We show that VeDANt is able to detect vehicles during nighttime with over 98% accuracy and less than 1% false detections."} {"_id": "8f77c1df86d0f62653405f0cce85e94bce76ad7c", "title": "RASSH - Reinforced adaptive SSH honeypot", "text": "The wide spread of cyber-attacks has made gathering as much information as possible about them a real demand in today's global context. Honeypot systems have become a powerful tool on the way to accomplishing that. Researchers have already focused on the development of various honeypot systems, but the fact that their administration is time consuming has made clear the need for self-adaptive honeypot systems capable of learning from their interactions with attackers. This paper presents a self-adaptive honeypot system we are developing that tries to overcome some of the disadvantages of existing systems. The proposed honeypot is a medium-interaction system developed using Python and it emulates an SSH (Secure Shell) server. The system is capable of interacting with the attackers by means of reinforcement learning algorithms."} {"_id": "485b014bb5f1cfa09f30b7e1e0a553ad2889de46", "title": "Digital Facial Engraving", "text": "This contribution introduces the basic techniques for digital facial engraving, which imitates traditional copperplate engraving. Inspired by traditional techniques, we first establish a set of basic rules thanks to which separate engraving layers are built on the top of the original photo. Separate layers are merged according to simple merging rules and according to range shift/scale masks specially introduced for this purpose.
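To ground the detection stage named in the VeDANt abstract above (Adaboost over Haar-like features on gray-level images), here is a generic OpenCV sketch. The cascade file name is hypothetical: OpenCV ships no vehicle or taillight cascade, so one would be trained on annotated nighttime frames; the later verification with perspective geometry and taillight color is omitted.

```python
import cv2

# hypothetical cascade trained on gray-level nighttime vehicle windows
cascade = cv2.CascadeClassifier("vehicles_night.xml")

def detect_candidates(frame_gray):
    """Classifier-driven stage only: return hypothesis windows (x, y, w, h).
    A VeDANt-style pipeline would further verify these candidates using
    perspective geometry and taillight color information."""
    return cascade.detectMultiScale(frame_gray, scaleFactor=1.1,
                                    minNeighbors=3, minSize=(24, 24))
```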
We illustrate the introduced technique by a set of black/white and color engravings, showing different features such as engraving-specific image enhancements, mixing different regular engraving lines with mezzotint, irregular perturbations of engraving lines etc. We introduce the notion of engraving style which comprises a set of separate engraving layers together with a set of associated range shift/scale masks. The engraving style helps to port the look and feel of one engraving to another. Once different libraries of pre-defined mappable engraving styles and an appropriate user interface are added to the basic system, producing a decent gravure starting from a simple digital photo will be a matter of seconds. The engraving technique described in this contribution opens new perspectives for digital art, adding unprecedented power and precision to the engraver's work. 1. Rationale Engraving is among the most important traditional graphical techniques. It first appeared in the fifteenth century as an illustrative support for budding book-printing, but very quickly became an art in its own right, thanks to its specific expressive power. Actually, four main classes of engraving are used by artists: letterpress or relief printing, intaglio or in-hollow printing, silk screen process and lithography, with several different techniques in each class. The history of printmaking was punctuated by prosperous periods of techniques which later declined for various reasons. Facial engraving is one such example. Extremely popular in seventeenth and eighteenth centuries, when photography did not exist, this wonderful art became almost unused, due to the extreme technical demands that it made on the engraver. Professional copperplate engravers are rare today, and the cost of true engravings is simply too prohibitive to be used in everyday printing. At the same time, traditional facial engraving has no doubt very specific appeal: its neat, sharp appearance distinguishes it advantageously from photos. To appreciate the graphical impact of engravings it's enough to compare the engraved portraits in the Wall Street Journal with portraits in other newspapers produced with traditional impersonal screening. Does it mean that this enjoyable art is condemned to disappear \u2026"} {"_id": "48c298565b7906dcc6f8555848201202660d3428", "title": "Correcting a metacognitive error: feedback increases retention of low-confidence correct responses.", "text": "Previous studies investigating posttest feedback have generally conceptualized feedback as a method for correcting erroneous responses, giving virtually no consideration to how feedback might promote learning of correct responses. Here, the authors show that when correct responses are made with low confidence, feedback serves to correct this initial metacognitive error, enhancing retention of low-confidence correct responses. In 2 experiments, subjects took an initial multiple-choice test on general knowledge facts and made a confidence judgment after each response. Feedback was provided for half of the questions, and retention was assessed by a final cued-recall test. Taking the initial test improved retention relative to not testing, and feedback further enhanced performance. Consistent with prior research, feedback improved retention by allowing subjects to correct initially erroneous responses. Of more importance, feedback also doubled the retention of correct low-confidence responses, relative to providing no feedback. 
The function of feedback is to correct both memory errors and metacognitive errors."} {"_id": "561a28d8640394879e4cea6fee9ece53a2480832", "title": "A multi-structural framework for adaptive supply chain planning and operations control with structure dynamics considerations", "text": "A trend in up-to-date developments in supply chain management (SCM) is to make supply chains more agile, flexible, and responsive. In supply chains, different structures (functional, organizational, informational, financial etc.) are (re)formed. These structures interrelate with each other and change dynamically. The paper introduces a new conceptual framework for multi-structural planning and operations of adaptive supply chains with structure dynamics considerations. We elaborate a vision of adaptive supply chain management (A-SCM), a new dynamic model and tools for the planning and control of adaptive supply chains. SCM is addressed from perspectives of execution dynamics under uncertainty. Supply chains are modelled in terms of dynamic multi-structural macro-states, based on simultaneous consideration of the management as a function of both states and structures. The research approach is theoretically based on the combined application of control theory, operations research, and agent-based modelling. The findings suggest constructive ways to implement multi-structural supply chain management and to transition from a \"one-way\" partial optimization to the feedback-based, closed-loop adaptive supply chain optimization and execution management for value chain adaptability, stability and crisis-resistance. The proposed methodology enhances managerial insight into advanced supply chain management."} {"_id": "0d060f1edd7e79865f1a48a9c874439c4b10caea", "title": "Artificial Neural Networks Applied to Taxi Destination Prediction", "text": "We describe our first-place solution to the ECML/PKDD discovery challenge on taxi destination prediction. The task consisted in predicting the destination of a taxi based on the beginning of its trajectory, represented as a variable-length sequence of GPS points, and diverse associated meta-information, such as the departure time, the driver id and client information. Contrary to most published competitor approaches, we used an almost fully automated approach based on neural networks and we ranked first out of 381 teams. The architectures we tried use multi-layer perceptrons, bidirectional recurrent neural networks and models inspired from recently introduced memory networks. Our approach could easily be adapted to other applications in which the goal is to predict a fixed-length output from a variable-length sequence."} {"_id": "740ab1887f02b4986beb99cd39591480372e9199", "title": "Data Compression Techniques in Wireless Sensor Networks", "text": "Wireless sensor networks (WSNs) open a new research field for pervasive computing and context-aware monitoring of physical environments. Many WSN applications aim at long-term environmental monitoring. In these applications, energy consumption is a principal concern because sensor nodes have to regularly report their sensing data to the remote sink(s) for a very long time.
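One detail from the taxi abstract above worth making concrete is how a variable-length GPS sequence feeds a fixed-size network input. A common recipe is to keep only the first and last few points; the exact k and the padding rule below are my assumptions for illustration, and metadata embeddings would be concatenated to this vector.

```python
import numpy as np

def trip_features(gps_points, k=5):
    """Fixed-length encoding of a variable-length trajectory: the first k and
    last k (lat, lon) points, with short prefixes padded by repeating the
    most recent point."""
    pts = list(gps_points)
    while len(pts) < 2 * k:
        pts.append(pts[-1])                    # pad short prefixes
    flat = np.asarray(pts[:k] + pts[-k:], dtype=float)
    return flat.reshape(-1)                    # shape: (4 * k,)
```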
Since data transmission is a primary factor in the energy consumption of sensor nodes, many research efforts focus on reducing the amount of data transmissions through data compression techniques. In this chapter, we discuss the data compression techniques in WSNs, which can be classified into five categories: 1) The string-based compression techniques treat sensing data as a sequence of characters and then adopt text data compression schemes to compress them. 2) The image-based compression techniques hierarchically organize WSNs and then borrow the idea from the image compression solutions to handle sensing data. 3) The distributed source coding techniques extend the Slepian-Wolf theorem to encode multiple correlated data streams independently at sensor nodes and then jointly decode them at the sink. 4) The compressed sensing techniques adopt a small number of nonadaptive and randomized linear projection samples to compress sensing data. 5) The data aggregation techniques select a subset of sensor nodes in the network to be responsible for fusing the sensing data from other sensor nodes to reduce the amount of data transmissions. A comparison of these data compression techniques is also given in this chapter."} {"_id": "ddf69c7126022bf99f6d0d5f8b4f272915e232cc", "title": "A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management.", "text": "A meta-analysis was conducted to systematically evaluate the current state of research on human-system interfaces for users controlling semi-autonomous swarms composed of groups of drones or unmanned aerial vehicles (UAVs). UAV swarms pose several human factors challenges, such as high cognitive demands, non-intuitive behavior, and serious consequences for errors. This article presents findings from a meta-analysis of 27 UAV swarm management papers focused on the human-system interface and human factors concerns, providing an overview of the advantages, challenges, and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated. In general, allowing user- and mission-specific customization of interfaces and raising the swarm's level of autonomy reduce operator cognitive workload and improve situation awareness (SA). It is clear that more research is needed in this rapidly evolving field."} {"_id": "edf5861705647e0670f841cc6abcd2371823b13e", "title": "On HMM static hand gesture recognition", "text": "A real-time static isolated gesture recognition application using a Hidden Markov Model approach with features extracted from gestures' silhouettes is presented. Nine different hand poses with various degrees of rotation are considered. The system, both simple and effective, uses color images of the hands to be recognized directly from the camera and is capable of processing 23 frames per second on a Quad Core Intel Processor."} {"_id": "95347596715641683678283b06f6b5ce3a09cba3", "title": "Teaching Programming in Secondary Education Through Embodied Computing Platforms: Robotics and Wearables", "text": "Pedagogy has emphasized that physical representations and tangible interactive objects benefit learning, especially for young students. There are many tangible hardware platforms for introducing computer programming to children, but there is limited comparative evaluation of them in the context of a formal classroom.
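As a toy instance of category 1 from the compression chapter above (string-based compression of slowly varying readings), delta coding followed by run-length coding exploits temporal redundancy; this is an illustrative sketch of the general idea, not one of the chapter's surveyed schemes.

```python
def delta_rle_encode(readings):
    """Delta-code consecutive integer samples, then run-length code the
    deltas; slowly varying signals yield long, highly compressible runs.
    E.g. [20]*10 + [21, 22, 23]  ->  [(20, 1), (0, 9), (1, 3)]."""
    deltas = [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]
    runs, prev, count = [], deltas[0], 1
    for d in deltas[1:]:
        if d == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = d, 1
    runs.append((prev, count))
    return runs

def delta_rle_decode(runs):
    """Invert delta_rle_encode: expand runs, then integrate the deltas."""
    values = []
    for delta, count in runs:
        for _ in range(count):
            values.append(delta if not values else values[-1] + delta)
    return values
```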
In this work, we explore the benefits of learning to code for tangible computers, such as robots and wearable computers, in comparison to programming for the desktop computer. For this purpose, 36 students participated in a within-groups study that involved three types of target computer platform tangibility: (1) desktop, (2) wearable, and (3) robotic. We employed similar blocks-based visual programming environments, and we measured emotional engagement, attitudes, and computer programming performance. We found that students were more engaged by and had a higher intention of learning programming with the robotic rather than the desktop computer. Furthermore, tangible computing platforms, either robot or wearable, did not affect the students\u2019 performance in learning basic computational concepts (e.g., sequence, repeat, and decision). Our findings suggest that computer programming should be introduced through multiple target platforms (e.g., robots, smartphones, wearables) to engage children."} {"_id": "508095dabb20caf72cfbf342c206161b54c97ecd", "title": "A Dynamic Programming Algorithm for Computing N-gram Posteriors from Lattices", "text": "Efficient computation of n-gram posterior probabilities from lattices has applications in lattice-based minimum Bayes-risk decoding in statistical machine translation and the estimation of expected document frequencies from spoken corpora. In this paper, we present an algorithm for computing the posterior probabilities of all n-grams in a lattice and constructing a minimal deterministic weighted finite-state automaton associating each n-gram with its posterior for efficient storage and retrieval. Our algorithm builds upon the best known algorithm in the literature for computing n-gram posteriors from lattices and leverages the following observations to significantly improve the time and space requirements: i) the n-grams for which the posteriors will be computed typically comprise all n-grams in the lattice up to a certain length, ii) the posterior is equivalent to the expected count for an n-gram that does not repeat on any path, iii) there are efficient algorithms for computing n-gram expected counts from lattices. We present experimental results comparing our algorithm with the best known algorithm in the literature as well as a baseline algorithm based on weighted finite-state automata operations."} {"_id": "9e245f05f47a818f6321a0c2f67174f32308bc04", "title": "Magnet design based on transient behavior of an IPMSM in event of malfunction", "text": "This paper deals with an approach for analyzing the magnet design based on the transient behavior of an IPMSM during switching processes in the event of malfunction for automotive applications. Depending on the maximal current increase during the transient process, the needed percentage of Dysprosium for the magnet design is determined. A switch-off strategy is introduced for both Voltage-Source and Current-Source Inverters for automotive applications. Both inverters are compared with respect to the transient current increase and, respectively, the Dy-content for the magnet design."} {"_id": "545067e6fcbbcf1d2b5557002cac9948893b97c8", "title": "NAIDS design using ChiMIC-KGS", "text": "The task of an IDS is to detect intrusion activities by unauthorized parties attempting to gain unlawful access to information in a network environment. Changes in intrusion strategies affect the behavior of networks where the representation of network traffic data changes over time.
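The quantity computed in the n-gram posterior abstract above has a simple explicit form over a posterior-weighted hypothesis list; the paper evaluates the same semantics over lattices with dynamic programming instead of enumeration. A toy Python rendering of the semantics:

```python
from collections import defaultdict

def expected_ngram_counts(nbest, n=2):
    """Expected n-gram counts over a weighted n-best list (weights are
    normalized into posteriors). For an n-gram that never repeats within a
    hypothesis, the expected count equals its posterior probability, which
    is observation ii in the abstract above."""
    counts = defaultdict(float)
    z = sum(w for w, _ in nbest)
    for w, words in nbest:
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += w / z
    return dict(counts)

hyps = [(0.6, "a b c".split()), (0.4, "a b b".split())]
print(expected_ngram_counts(hyps))
# {('a', 'b'): 1.0, ('b', 'c'): 0.6, ('b', 'b'): 0.4}
```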
Network data is high in volume and dimensionality and contains redundancy. This study proposes the use of ChiMIC as a method to remove redundant features and select the relevant features to be used in detecting anomalies on the network, where KGS is used as an anomaly detection engine. Both approaches are used to build an NAIDS that is capable of detecting the emergence of new attacks over time."} {"_id": "122e4dae668618118e76d3e905b2317fb0dec9a4", "title": "Supply Chain Management ( SCM ) and Organizational Key Factors for Successful Implementation of Enterprise Resource Planning ( ERP ) Systems", "text": "This paper reports some findings of on-going research into ERP implementation issues. A part of the research, which this paper reports, consists of reviewing several cases of successful implementations of ERP systems. The analysis of all these cases revealed that the critical factors for successful implementation of ERP systems fall into two main categories: technological and organizational factors. However, the organizational factors seem to be more important than the technological ones as far as the successful implementation of ERP systems under SCM is concerned."} {"_id": "69418ff5d4eac106c72130e152b807004e2b979c", "title": "Semantically Smooth Knowledge Graph Embedding", "text": "This paper considers the problem of embedding Knowledge Graphs (KGs) consisting of entities and relations into low-dimensional vector spaces. Most of the existing methods perform this task based solely on observed facts. The only requirement is that the learned embeddings should be compatible within each individual fact. In this paper, aiming at further discovering the intrinsic geometric structure of the embedding space, we propose Semantically Smooth Embedding (SSE). The key idea of SSE is to take full advantage of additional semantic information and enforce the embedding space to be semantically smooth, i.e., entities belonging to the same semantic category will lie close to each other in the embedding space. Two manifold learning algorithms, Laplacian Eigenmaps and Locally Linear Embedding, are used to model the smoothness assumption. Both are formulated as geometrically based regularization terms to constrain the embedding task. We empirically evaluate SSE in two benchmark tasks of link prediction and triple classification, and achieve significant and consistent improvements over state-of-the-art methods. Furthermore, SSE is a general framework. The smoothness assumption can be imposed on a wide variety of embedding models, and it can also be constructed using other information besides entities\u2019 semantic categories."} {"_id": "90434160915f6a0279aeab96a1fcf8927f221ce6", "title": "Automatic Chinese food identification and quantity estimation", "text": "Computer-aided food identification and quantity estimation have caught more attention in recent years because of growing health concerns. The identification problem is usually defined as an image categorization or classification problem and several approaches have been proposed. In this paper, we address the issues of feature descriptors in the food identification problem and introduce a preliminary approach for the quantity estimation using depth information. Sparse coding is utilized in the SIFT and Local binary pattern feature descriptors, and these features combined with Gabor and color features are used to represent food items.
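The geometric regularizer in the SSE abstract above reduces, in its Laplacian Eigenmaps form, to a weighted penalty over same-category entity pairs; a minimal numpy rendering follows, where the edge list and weights are assumptions about how category co-membership is encoded.

```python
import numpy as np

def laplacian_penalty(E, edges, weights):
    """Laplacian-Eigenmaps-style smoothness term: sum_ij w_ij ||e_i - e_j||^2.
    Added to the fact-compatibility loss, it pulls entities sharing a
    semantic category toward each other in the embedding space."""
    return sum(w * np.sum((E[i] - E[j]) ** 2)
               for (i, j), w in zip(edges, weights))

# usage sketch: total_loss = fact_loss + lam * laplacian_penalty(E, edges, weights)
```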
A multi-label SVM classifier is trained for each feature, and these classifiers are combined with the multi-class AdaBoost algorithm. For evaluation, 50 categories of worldwide food are used, and each category contains 100 photographs from different sources, such as manually taken or from Internet web albums. An overall accuracy of 68.3% is achieved, and success at top-N candidates achieved 80.6%, 84.8%, and 90.9% accuracy respectively when N equals 2, 3, and 5, thus making a mobile application practical. The experimental results show that the proposed methods greatly improve the performance of the original SIFT and LBP feature descriptors. On the other hand, for quantity estimation using depth information, a straightforward method is proposed for certain food, while transparent food ingredients such as pure water and cooked rice are temporarily excluded."} {"_id": "9bb14f91b269e72a98d9063d22da2092dc5f552d", "title": "Implementation and Performance Analysis of Firewall on Open vSwitch", "text": "Software Defined Networking (SDN) is a current research trend that follows the ideology of physical separation of the control and data plane of the forwarding devices. SDN mainly involves two types of devices: (1) controllers, which implement the control plane, and (2) switches, which perform the data plane operations. OpenFlow protocol (OFP) is the current standard through which controllers and switches can communicate with each other. Using OpenFlow, SDN controllers can manage forwarding behaviors of SDN switches by managing Flow Table entries. Switches use these low-level Flow Table entries to forward packets to appropriate hosts. Firewalls are an integral part of today\u2019s networks. We can\u2019t imagine our network without a Firewall which protects our network from potential threats. As SDN gains pace in replacing traditional architectures, it would be very interesting to see how many security features can be provided by OpenFlow-enabled switches. Hence, it will be very important to see if SDN, on top of OpenFlow, can efficiently implement Firewalls and provide support for an advanced feature like connection tracking. The task is straightforward: the Controller will add Flow Table entries on switches based upon Firewall rules. In this way, we can enhance packet processing by providing security. In this document, one strategy for implementing a Firewall on SDN is presented. We can write controller applications that work as a Firewall and inspect incoming packets against the Firewall rules. These applications are also able to implement a connection-tracking mechanism. As SDN devices for the experiments, we selected the Ryu controller and Open vSwitch. Initially, such applications are tested on a local machine with a small Firewall ruleset. Later, they are tested with real-world traffic and a comparatively large Firewall ruleset. The test results show that this strategy can be used as a first step in implementing security features (including connection tracking) in an SDN environment."} {"_id": "d12e3606d94050d382306761eb43b58c042ac390", "title": "Predicting and analyzing secondary education placement-test scores: A data mining approach", "text": "Understanding the factors that lead to success (or failure) of students at placement tests is an interesting and challenging problem. Since the centralized placement tests and future academic achievements are considered to be related concepts, analysis of the success factors behind placement tests may help understand and potentially improve academic achievement.
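To ground the firewall strategy described in the Open vSwitch abstract above (controller translates rules into flow entries), here is a minimal Ryu sketch that installs one drop rule when a switch connects. The blocked address, priority value, and single-rule scope are illustrative assumptions; the paper's connection-tracking logic is not shown.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MiniFirewall(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        # example firewall rule: block all TCP traffic from 10.0.0.1
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                ipv4_src="10.0.0.1")
        # an empty action list means drop; priority sits above table-miss
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, [])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```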
In this study, using a large and feature-rich dataset from the Secondary Education Transition System in Turkey, we developed models to predict secondary education placement test results, and using sensitivity analysis on those prediction models we identified the most important predictors. The results showed that the C5 decision tree algorithm is the best predictor, with 95% accuracy on the hold-out sample, followed by support vector machines (with an accuracy of 91%) and artificial neural networks (with an accuracy of 89%). Logistic regression models came out to be the least accurate of the four, with an overall accuracy of 82%. The sensitivity analysis revealed that previous test experience, whether a student has a scholarship, the student\u2019s number of siblings, and previous years\u2019 grade point average are among the most important predictors of the placement test scores. 2012 Elsevier Ltd. All rights reserved."} {"_id": "0a9621c60889c34e2ccc4848a58cbbcef924cf4d", "title": "Introduction to Data Mining", "text": ""} {"_id": "440deffc89b338fcf1662d540a8c4cfc56aa4ccb", "title": "A Data Mining view on Class Room Teaching Language", "text": "From ancient times in India, educational institutions have relied on classroom teaching, where a teacher explains the material and students understand and learn the lesson. There is no absolute scale for measuring knowledge, but the examination score is one scale which indicates the performance of students. So it is important that appropriate material is taught, but it is also vital which language is chosen for teaching, that class notes are prepared, and that attendance is kept. This study analyses the impact of language on the presence of students in the classroom. The main idea is to find out the support, confidence and interestingness level for the appropriate language and attendance in the classroom. For this purpose association rules are used."} {"_id": "7231c66eea6699456ff434311a79996084e257a1", "title": "Gray level image processing using contrast enhancement and watershed segmentation with quantitative evaluation", "text": "Both image enhancement and image segmentation are among the most practical approaches in virtually all automated image recognition systems. Feature extraction and recognition have numerous applications in telecommunication, weather forecasting, environment exploration and medical diagnosis. The adaptive image contrast stretching is a typical image enhancement approach and watershed segmentation is a typical image segmentation approach. Under conditions of improper or disturbed illumination, adaptive contrast stretching, which adapts to intensity distributions, should be conducted. Watershed segmentation is a feasible approach to separate different objects automatically, where watershed lines separate the catchment basins. The erosion and dilation operations are essential procedures involved in watershed segmentation. To avoid over-segmentation, the markers for foreground and background can be selected accordingly.
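The marker-based recipe in the watershed abstract above is directly expressible with scikit-image; the gray-level thresholds used to pick "confident" background and foreground markers are per-image assumptions to be tuned.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_watershed(gray, low=30, high=150):
    """Marker-controlled watershed: explicit background/foreground markers
    (instead of every local minimum) suppress over-segmentation."""
    markers = np.zeros_like(gray, dtype=np.int32)
    markers[gray < low] = 1                    # confident background
    markers[gray > high] = 2                   # confident foreground
    gradient = sobel(gray.astype(float))       # flood the gradient surface
    return watershed(gradient, markers)        # label image: 1 = bg, 2 = object
```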
Quantitative measures (gray level energy, discrete entropy, relative entropy and mutual information) are proposed to evaluate the actual improvement via two techniques. These methodologies can be easily expanded to many other image processing approaches."} {"_id": "ec9c4bb179abcf019255a33357c4fa2e66b09873", "title": "Grassmannian beamforming for multiple-input multiple-output wireless systems", "text": "Transmit and receive beamforming is an attractive, low-complexity technique for exploiting the significant diversity that is available in multiple-input and multiple-output (MIMO) wireless systems. Unfortunately, optimal performance requires either complete channel knowledge or knowledge of the optimal beamforming vector which is difficult to realize in practice. In this paper, we propose a quantized maximum signal-to-noise ratio beamforming technique where the receiver only sends the label of the best weight vector in a predetermined codebook to the transmitter. We develop optimal codebooks for i.i.d. Rayleigh fading matrix channels. We derive the distribution of the optimal transmit weight vector and use this result to bound the SNR degradation as a function of the quantization. A codebook design criterion is proposed by exploiting the connections between the quantization problem and Grassmannian line packing. The design criterion is flexible enough to allow for side constraints on the codebook vectors. Bounds on the maximum distortion with the resulting Grassmannian codebooks follow naturally from the nature of the code. A proof is given that any system using an overcomplete codebook for transmit diversity and maximum ratio combining obtains a diversity on the order of the product of the number of transmit and receive antennas. Bounds on the loss in capacity due to quantization are derived. Monte Carlo simulations are presented that compare the symbol error probability for different quantization strategies."} {"_id": "2eff823bdecb1a506ba88e1127fa3cdb1a263682", "title": "Efficient Memory Disaggregation with Infiniswap", "text": "Memory-intensive applications suffer large performance loss when their working sets do not fully fit in memory. Yet, they cannot leverage otherwise unused remote memory when paging out to disks even in the presence of large imbalance in memory utilizations across a cluster. Existing proposals for memory disaggregation call for new architectures, new hardware designs, and/or new programming models, making them infeasible. This paper describes the design and implementation of INFINISWAP, a remote memory paging system designed specifically for an RDMA network. INFINISWAP opportunistically harvests and transparently exposes unused memory to unmodified applications by dividing the swap space of each machine into many slabs and distributing them across many machines\u2019 remote memory. Because one-sided RDMA operations bypass remote CPUs, INFINISWAP leverages the power of many choices to perform decentralized slab placements and evictions. We have implemented and deployed INFINISWAP on an RDMA cluster without any modifications to user applications or the OS and evaluated its effectiveness using multiple workloads running on unmodified VoltDB, Memcached, PowerGraph, GraphX, and Apache Spark. Using INFINISWAP, throughputs of these applications improve between 4\u00d7 (0.94\u00d7) and 15.4\u00d7 (7.8\u00d7) over disk (Mellanox nbdX), and median and tail latencies between 5.4\u00d7 (2\u00d7) and 61\u00d7 (2.3\u00d7).
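Two of the quantitative measures named at the top of this record, gray-level energy and discrete entropy, are short computations over the normalized histogram; a sketch follows (relative entropy and mutual information would additionally require the reference image's histogram).

```python
import numpy as np

def gray_metrics(img):
    """Gray-level energy and discrete entropy of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    nz = p[p > 0]                                 # skip empty bins for log
    energy = float(np.sum(p ** 2))                # gray-level energy
    entropy = float(-np.sum(nz * np.log2(nz)))    # discrete entropy, in bits
    return energy, entropy
```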
INFINISWAP achieves these gains with negligible remote CPU usage, whereas nbdX becomes CPU-bound. INFINISWAP increases the overall memory utilization of a cluster and works well at scale."} {"_id": "4076cf9e5c31446f56748352c23c3e5d3d6d69cb", "title": "Hate Source: White Supremacist Hate Groups and Hate Crime", "text": "The relationship between hate group activity and hate crime is theoretically ambiguous. Hate groups may incite criminal behavior in support of their beliefs. On the other hand, hate groups may reduce hate crime by serving as a forum for members to verbally vent their frustrations or as protection from future biased violence. I find that the presence of an active white supremacist hate group chapter is associated with an 18.7 percent higher hate crime rate. White supremacist groups are not associated with the level of anti-white hate crimes committed by non-whites, nor do they form in expectation of future hate crimes by non-whites. JEL Codes: K14, J15, D71"} {"_id": "581685e210b2b4d157ec5cc6196e35d4af4d3a2c", "title": "Relating Newcomer Personality to Survival and Activity in Recommender Systems", "text": "In this work, we explore the degree to which personality information can be used to model newcomer retention, investment, intensity of engagement, and distribution of activity in a recommender community. Prior work shows that Big-Five Personality traits can explain variation in user behavior in other contexts. Building on this, we carry out and report on an analysis of 1008 MovieLens users with identified personality profiles. We find that Introverts and low Agreeableness users are more likely to survive into the second and subsequent sessions compared to their respective counterparts; Introverts and low Conscientiousness users are a significantly more active population compared to their respective counterparts; High Openness and High Neuroticism users contribute (tag) significantly more compared to their counterparts, but their counterparts consume (browse and bookmark) more; and low Agreeableness users are more likely to rate whereas high Agreeableness users are more likely to tag. These results show how modeling newcomer behavior from user personality can be useful for recommender systems designers as they customize the system to guide people towards tasks that need to be done or tasks the users will find rewarding and also decide which users to invest retention efforts in."} {"_id": "8a5e5aaf26db3acdcafbc2cd5f44680d4f84d4fc", "title": "Stealthy Denial of Service Strategy in Cloud Computing", "text": "The success of the cloud computing paradigm is due to its on-demand, self-service, and pay-by-use nature. According to this paradigm, the effects of Denial of Service (DoS) attacks involve not only the quality of the delivered service, but also the service maintenance costs in terms of resource consumption. Specifically, the longer the detection delay is, the higher the costs to be incurred. Therefore, particular attention has to be paid to stealthy DoS attacks. They aim at minimizing their visibility, and at the same time, they can be as harmful as brute-force attacks. They are sophisticated attacks tailored to leverage the worst-case performance of the target system through specific periodic, pulsing, and low-rate traffic patterns. 
In this paper, we propose a strategy to orchestrate stealthy attack patterns, which exhibit a slowly-increasing-intensity trend designed to inflict the maximum financial cost on the cloud customer, while respecting the job size and the service arrival rate imposed by the detection mechanisms. We describe both how to apply the proposed strategy, and its effects on the target system deployed in the cloud."} {"_id": "75859ac30f5444f0d9acfeff618444ae280d661d", "title": "Multibiometric Cryptosystems Based on Feature-Level Fusion", "text": "Multibiometric systems are being increasingly deployed in many large-scale biometric applications (e.g., FBI-IAFIS, UIDAI system in India) because they have several advantages such as lower error rates and larger population coverage compared to unibiometric systems. However, multibiometric systems require storage of multiple biometric templates (e.g., fingerprint, iris, and face) for each user, which results in increased risk to user privacy and system security. One method to protect individual templates is to store only the secure sketch generated from the corresponding template using a biometric cryptosystem. This requires storage of multiple sketches. In this paper, we propose a feature-level fusion framework to simultaneously protect multiple templates of a user as a single secure sketch. Our main contributions include: (1) practical implementation of the proposed feature-level fusion framework using two well-known biometric cryptosystems, namely, fuzzy vault and fuzzy commitment, and (2) detailed analysis of the trade-off between matching accuracy and security in the proposed multibiometric cryptosystems based on two different databases (one real and one virtual multimodal database), each containing the three most popular biometric modalities, namely, fingerprint, iris, and face. Experimental results show that both the multibiometric cryptosystems proposed here have higher security and matching performance compared to their unibiometric counterparts."} {"_id": "81961a5a9dbfc08b1e0ead1886790e9b8700af60", "title": "A virtual translation technique to improve current regulator for salient-pole AC machines", "text": "This paper presents a synchronous frame current regulator for salient-pole AC machines, which is insensitive to both parameters and synchronous frequencies. The transformation from stationary to synchronous frame is known to produce a frame-dependent cross-coupling term, which results in degradation in current response at high speeds and is represented as an asymmetric pole in the complex vector form of such systems. The asymmetric pole close to the right half plane (RHP) in the complex vector s-domain cannot be perfectly canceled by parameter-based design of PI controllers due to parameter variation. A complex vector design of the synchronous frame PI current regulator with a virtually translated asymmetric pole is shown in this paper to reduce the parameter sensitivity and to improve disturbance rejection. The current regulator with the virtually translated pole has been analyzed in complex vector form for nonsalient-pole AC machines and in scalar form for salient-pole AC machines. It has been shown to yield comparable performances in both cases. 
This paper includes experimental verification of the performance of the proposed current regulator for an IPMSM."} {"_id": "bceea31282c53bc18822c0974f51a08b98e3d4d9", "title": "News-based trading strategies", "text": "The marvel of markets lies in the fact that dispersed information is instantaneously processed and used to adjust the price of goods, services and assets. Financial markets are particularly efficient when it comes to processing information; such information is typically embedded in textual news that is then interpreted by investors. Quite recently, researchers have started to automatically determine news sentiment in order to explain stock price movements. Interestingly, this so-called news sentiment works fairly well in explaining stock returns. In this paper, we design trading strategies that utilize textual news in order to obtain profits on the basis of novel information entering the market. We thus propose approaches for automated decision-making based on supervised and reinforcement learning. Altogether, we demonstrate how news-based data can be incorporated into an investment system."} {"_id": "b3dbbae9257f7a1b0b5c2138aee7701899c219a9", "title": "Handcrafted features with convolutional neural networks for detection of tumor cells in histology images", "text": "Detection of tumor nuclei in cancer histology images requires sophisticated techniques due to the irregular shape, size and chromatin texture of the tumor nuclei. Some very recently proposed methods employ deep convolutional neural networks (CNNs) to detect cells in H&E stained images. However, all such methods use some form of raw pixel intensities as input and rely on the CNN to learn the deep features. In this work, we extend a recently proposed spatially constrained CNN (SC-CNN) by proposing features that capture texture characteristics, and show that although a CNN produces good results on automatically learned features, it can perform better if the input consists of a combination of handcrafted features and the raw data. The handcrafted features are computed through the scattering transform, which gives non-linear invariant texture features. The combination of handcrafted features with raw data produces sharper proximity maps and better detection results than raw intensities alone with a similar kind of CNN architecture."} {"_id": "9a46168d008e5b456beba05ad23b8bb3446b5548", "title": "Inverse procedural modeling of 3D models for virtual worlds", "text": "This course presents a collection of state-of-the-art approaches for modeling and editing of 3D models for virtual worlds, simulations, and entertainment, in addition to real-world applications. The first contribution of this course is a coherent review of inverse procedural modeling (IPM) (i.e., proceduralization of provided 3D content). We describe different formulations of the problem as well as solutions based on those formulations. We show that although the IPM framework seems under-constrained, the state-of-the-art solutions actually use simple analogies to convert the problem into a set of fundamental computer science problems, which are then solved by corresponding algorithms or optimizations. The second contribution includes a description and categorization of results and applications of the IPM frameworks. 
Moreover, a substantial part of the course is devoted to summarizing domain-specific IPM frameworks for practical content generation in modeling and animation."} {"_id": "38a7b87d89605d244e26bdeb6d69a61d7263b7e8", "title": "Lack of sleep affects the evaluation of emotional stimuli", "text": "Sleep deprivation (SD) negatively affects various cognitive performances, but surprisingly, evidence about a specific impact of sleep loss on the subjective evaluation of emotional stimuli remains sparse. In the present study, we assessed the effect of SD on the emotional rating of standardized visual stimuli selected from the International Affective Picture System. Forty university students were assigned to the sleep group (n=20), tested before and after one night of undisturbed sleep at home, or to the deprivation group, tested before and after one night of total SD. One-hundred and eighty pictures (90 test, 90 retest) were selected and categorized as pleasant, neutral and unpleasant. Participants were asked to judge their emotional reactions while viewing pictures by means of the Self-Assessment Manikin. Subjective mood ratings were also obtained by means of Visual Analog Scales. No significant effect of SD was observed on the evaluation of pleasant and unpleasant stimuli. On the contrary, SD subjects perceived the neutral pictures more negatively and showed an increase of negative mood and a decrease of subjective alertness compared to non-deprived subjects. Finally, an analysis of covariance on mean valence ratings of neutral pictures using negative mood as covariate confirmed the effect of SD. Our results indicate that sleep is involved in regulating emotional evaluation. The emotional labeling of neutral stimuli biased toward negative responses was not mediated by the increase of negative mood. This effect can be interpreted as an adaptive reaction supporting the \"better safe than sorry\" principle. It may also have applied implications for healthcare workers, military and law-enforcement personnel."} {"_id": "8099e1ec5ab75736566c64588690a2fe3abd934d", "title": "A MODERN TAKE", "text": "We revisit the bias-variance tradeoff for neural networks in light of modern empirical findings. The traditional bias-variance tradeoff in machine learning suggests that as model complexity grows, variance increases. Classical bounds in statistical learning theory point to the number of parameters in a model as a measure of model complexity, which means the tradeoff would indicate that variance increases with the size of neural networks. However, we empirically find that variance due to training set sampling is roughly constant (with both width and depth) in practice. Variance caused by the non-convexity of the loss landscape is different. We find that it decreases with width and increases with depth, in our setting. We provide theoretical analysis, in a simplified setting inspired by linear models, that is consistent with our empirical findings for width. 
We view bias-variance as a useful lens through which to study generalization, and we encourage further theoretical explanation from this perspective."} {"_id": "6d2c46c04ab0d41088866e6758b1bf838035a9a4", "title": "A Foundation for the Study of IT Effects: A New Look at DeSanctis and Poole's Concepts of Structural Features and Spirit", "text": "Special Issue"} {"_id": "c1e9c4c5637c2d67863ee53eef3aa2df20a6e56d", "title": "Visibility estimation and joint inpainting of lidar depth maps", "text": "This paper presents a novel variational image inpainting method to solve the problem of generating, from 3-D lidar measures, a dense depth map coherent with a given color image, tackling visibility issues. When projecting the lidar point cloud onto the image plane, we generally obtain a sparse depth map, due to undersampling. Moreover, lidar and image sensor positions generally differ during acquisition, such that depth values referring to objects that are hidden from the image viewpoint might appear with a naive projection. The proposed algorithm estimates the complete depth map, while simultaneously detecting and excluding those hidden points. It consists of a primal-dual optimization method, where a coupled total variation regularization term is included to match the depth and image gradients and a visibility indicator handles the selection of visible points. Tests with real data prove the effectiveness of the proposed strategy."} {"_id": "e32663c27ff19fbd84898d2a5e41d7fc985c95fe", "title": "Review of Electromagnetic Techniques for Breast Cancer Detection", "text": "Breast cancer is anticipated to be responsible for almost 40,000 deaths in the USA in 2011. The current clinical detection techniques suffer from limitations, which has motivated researchers to investigate alternative modalities for the early detection of breast cancer. This paper focuses on reviewing the main electromagnetic techniques for breast cancer detection. More specifically, this work reviews the cutting edge research in microwave imaging, electrical impedance tomography, diffuse optical tomography, microwave radiometry, biomagnetic detection, biopotential detection, and magnetic resonance imaging (MRI). The goal of this paper is to provide biomedical researchers with an in-depth review that includes all main electromagnetic techniques in the literature and the latest progress in each of these techniques."} {"_id": "69544f17534db93e48d06b84e8f7cb611bd88c69", "title": "Mapping CMMI Project Management Process Areas to SCRUM Practices", "text": "Over the past years, the capability maturity model (CMM) and capability maturity model integration (CMMI) have been broadly used for assessing organizational maturity and process capability throughout the world. However, the rapid pace of change in information technology has caused increasing frustration with the heavyweight plans, specifications, and other documentation imposed by contractual inertia and maturity model compliance criteria. In light of that, agile methodologies have been adopted to tackle this challenge. The aim of our paper is to present a mapping between CMMI and one of these methodologies, Scrum. It shows how Scrum addresses the Project Management Process Areas of CMMI. 
This is useful for organizations that have their plan-driven process based on the CMMI model and are planning to improve their processes toward agility, and it can also help organizations define a new project management framework based on both CMMI and Scrum practices."} {"_id": "134021cfb9f082f4e8b58f31bcbb41eb990ab874", "title": "DepSky: Dependable and Secure Storage in a Cloud-of-Clouds", "text": "The increasing popularity of cloud storage services has led companies that handle critical data to think about using these services for their storage needs. Medical record databases, large biomedical datasets, historical information about power systems and financial data are some examples of critical data that could be moved to the cloud. However, the reliability and security of data stored in the cloud still remain major concerns. In this work we present DepSky, a system that improves the availability, integrity, and confidentiality of information stored in the cloud through the encryption, encoding, and replication of the data on diverse clouds that form a cloud-of-clouds. We deployed our system using four commercial clouds and used PlanetLab to run clients accessing the service from different countries. We observed that our protocols improved the perceived availability, and in most cases, the access latency, when compared with cloud providers individually. Moreover, the monetary costs of using DepSky in this scenario are at most twice the cost of using a single cloud, which is optimal and seems to be a reasonable cost, given the benefits."} {"_id": "be32610fb5cdc850538ace1781fbfd5b96698618", "title": "A descent modified Polak\u2013Ribi\u00e8re\u2013Polyak conjugate gradient method", "text": "In this paper, we propose a modified Polak\u2013Ribi\u00e8re\u2013Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with Armijo-type line search is globally convergent. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method."} {"_id": "66e66467ae444fabea0d1d82dca77f524b9aa797", "title": "Secure Border Gateway Protocol (Secure-BGP)", "text": "The Border Gateway Protocol (BGP), which is used to distribute routing information between autonomous systems (ASes), is a critical component of the Internet\u2019s routing infrastructure. It is highly vulnerable to a variety of malicious attacks, due to the lack of a secure means of verifying the authenticity and legitimacy of BGP control traffic. This document describes a secure, scalable, deployable architecture (S-BGP) for an authorization and authentication system that addresses most of the security problems associated with BGP. The paper discusses the vulnerabilities and security requirements associated with BGP, describes the S-BGP countermeasures, and explains how they address these vulnerabilities and requirements. 
In addition, this paper provides a comparison of this architecture with other approaches that have been proposed, analyzes the performance implications of the proposed countermeasures, and addresses operational issues."} {"_id": "3885e1cf675f1fbf32823c981b3f7417c56c5412", "title": "Empowering personalized medicine with big data and semantic web technology: Promises, challenges, and use cases", "text": "In healthcare, big data tools and technologies have the potential to create significant value by improving outcomes while lowering costs for each individual patient. Diagnostic images, genetic test results and biometric information are increasingly generated and stored in electronic health records, presenting us with challenges in data that is by nature high in volume, variety and velocity, thereby necessitating novel ways to store, manage and process big data. This presents an urgent need to develop new, scalable and expandable big data infrastructure and analytical methods that can enable healthcare providers to access knowledge for the individual patient, yielding better decisions and outcomes. In this paper, we briefly discuss the nature of big data and the role of semantic web and data analysis for generating \u201csmart data\u201d which offer actionable information that supports better decisions for personalized medicine. In our view, the biggest challenge is to create a system that makes big data robust and smart for healthcare providers and patients, which can lead to more effective clinical decision-making, improved health outcomes, and, ultimately, better management of healthcare costs. We highlight some of the challenges in using big data and propose the need for a semantic data-driven environment to address them. We illustrate our vision with practical use cases, and discuss a path for empowering personalized medicine using big data and semantic web technology."} {"_id": "98e03d35857f66c34fa79f3ea0dd2b4e3b670044", "title": "Series No. 16 Profiting from Innovation in the Digital Economy: Standards, Complementary Assets, and Business Models in the Wireless World", "text": ""} {"_id": "b6c6fda47921d7b6bf76c8cb28c316d9ee0c2b64", "title": "10 user interface elements for mobile learning application development", "text": "Mobile learning applications are a new and fashionable trend. More than a thousand learning applications have been designed and developed. Therefore, the focus on mobile learning applications needs to be refined and improved in order to facilitate users' interaction and response towards the learning process. To develop an effective mobile learning application, one should start from the User Interface (UI), because an effective UI can play a role in constantly keeping the user focused on an object and subject. The purpose of this study is to investigate and identify the best UI elements to use in mobile learning application development. Four existing UI guidelines and 12 selected learning applications were analysed, and we identified 10 UI elements for developing the next mobile learning applications. All 10 elements are described accordingly, and they have implications for those designing and implementing UI in mobile learning applications."} {"_id": "c0ad4773aea744a70aaf995e4768b23d5deb7963", "title": "Automated Accessibility Testing of Mobile Apps", "text": "It is important to make mobile apps accessible, so as not to exclude users with common disabilities such as blindness, low vision, or color blindness. 
Even when developers are aware of these accessibility needs, the lack of tool support makes the development and assessment of accessible apps challenging. Some accessibility properties can be checked statically, but user interface widgets are often created dynamically and are not amenable to static checking. Some accessibility checking frameworks analyze accessibility properties at runtime, but have to rely on existing thorough test suites. In this paper, we introduce the idea of using automated test generation to explore the accessibility of mobile apps. We present the MATE tool (Mobile Accessibility Testing), which automatically explores apps while applying different checks for accessibility issues related to visual impairment. For each issue, MATE generates a detailed report that supports the developer in fixing the issue. Experiments on a sample of 73 apps demonstrate that MATE detects more basic accessibility problems than static analysis, and many additional types of accessibility problems that cannot be detected statically at all. Comparison with existing accessibility testing frameworks demonstrates that independence from an existing test suite leads to the identification of many more accessibility problems. Even when enabling Android's assistive features like contrast enhancement, MATE can still find many accessibility issues."} {"_id": "26aaaf780c09b831c55b38e21ee04c2463cb1406", "title": "Examining the Testing Effect with Open- and Closed-Book Tests", "text": "Two experiments examined the testing effect with open-book tests, in which students view notes and textbooks while taking the test, and closed-book tests, in which students take the test without viewing notes or textbooks. Subjects studied prose passages and then restudied or took an open- or closed-book test. Taking either kind of test, with feedback, enhanced long-term retention relative to conditions in which subjects restudied material or took a test without feedback. Open-book testing led to better initial performance than closed-book testing, but this benefit did not persist and both types of testing produced equivalent retention on a delayed test. Subjects predicted they would recall more after repeated studying, even though testing enhanced long-term retention more than restudying. These experiments demonstrate that the testing effect occurs with both open- and closed-book tests, and that subjects fail to predict the effectiveness of testing relative to studying in enhancing later recall."} {"_id": "408cddda023aa02d29783b7ccf99806c7ec8b765", "title": "The Power of Testing Memory: Basic Research and Implications for Educational Practice.", "text": "A powerful way of improving one's memory for material is to be tested on that material. Tests enhance later retention more than additional study of the material, even when tests are given without feedback. This surprising phenomenon is called the testing effect, and although it has been studied by cognitive psychologists sporadically over the years, today there is a renewed effort to learn why testing is effective and to apply testing in educational settings. In this article, we selectively review laboratory studies that reveal the power of testing in improving retention and then turn to studies that demonstrate the basic effects in educational settings. We also consider the related concepts of dynamic testing and formative assessment as other means of using tests to improve learning. 
Finally, we consider some negative consequences of testing that may occur in certain circumstances, though these negative effects are often small and do not cancel out the large positive effects of testing. Frequent testing in the classroom may boost educational achievement at all levels of education."} {"_id": "ae843e607257efc4a106343a774e2927da974c6a", "title": "METAMEMORY: A THEORETICAL FRAMEWORK AND NEW FINDINGS", "text": "Although there has been excellent research by many investigators on the topic of metamemory, here we will focus on our own research program. This article will begin with a description of a theoretical framework that has evolved out of metamemory research, followed by a few remarks about our methodology, and will end with a review of our previously unpublished findings. (Our published findings will not be systematically reviewed here; instead, they will be mentioned only when necessary for continuity.)"} {"_id": "33efb18854d6acc05fab5c23293a218a03f30a51", "title": "Remembering can cause forgetting: retrieval dynamics in long-term memory.", "text": "Three studies show that the retrieval process itself causes long-lasting forgetting. Ss studied 8 categories (e.g., Fruit). Half the members of half the categories were then repeatedly practiced through retrieval tests (e.g., Fruit Or_____). Category-cued recall of unpracticed members of practiced categories was impaired on a delayed test. Experiments 2 and 3 identified 2 significant features of this retrieval-induced forgetting: The impairment remains when output interference is controlled, suggesting a retrieval-based suppression that endures for 20 min or more, and the impairment appears restricted to high-frequency members. Low-frequency members show little impairment, even in the presence of strong, practiced competitors that might be expected to block access to those items. These findings suggest a critical role for suppression in models of retrieval inhibition and implicate the retrieval process itself in everyday forgetting."} {"_id": "8ee7f9c4cdb93a2c04fc9c0121158e7243b489b1", "title": "Distributed practice in verbal recall tasks: A review and quantitative synthesis.", "text": "The authors performed a meta-analysis of the distributed practice effect to illuminate the effects of temporal variables that have been neglected in previous reviews. This review found 839 assessments of distributed practice in 317 experiments located in 184 articles. Effects of spacing (consecutive massed presentations vs. spaced learning episodes) and lag (less spaced vs. more spaced learning episodes) were examined, as were expanding interstudy interval (ISI) effects. Analyses suggest that ISI and retention interval operate jointly to affect final-test retention; specifically, the ISI producing maximal retention increased as retention interval increased. Areas needing future research and theoretical implications are discussed."} {"_id": "73623d7b97157358f279fef53ba37b8d2c9908f6", "title": "An ontology for clinical questions about the contents of patient notes", "text": "OBJECTIVE\nMany studies have been completed on question classification in the open domain; however, only limited work focuses on the medical domain. As well, to the best of our knowledge, most of these medical question classifications were designed for literature-based question answering systems. 
This paper focuses on a new direction, which is to design a novel question processing and classification model for answering clinical questions applied to electronic patient notes.\n\n\nMETHODS\nThere are four main steps in the work. Firstly, a relatively large set of clinical questions was collected from staff in an Intensive Care Unit. Then, a clinical question taxonomy was designed for question answering purposes. Subsequently, an annotation guideline was created and used to annotate the question set. Finally, a multilayer classification model was built to classify the clinical questions.\n\n\nRESULTS\nThrough the initial classification experiments, we realized that the general features cannot contribute to high performance of a minimum classifier (a small data set with multiple classes). Thus, an automatic knowledge discovery and knowledge reuse process was designed to boost the performance by extracting and expanding the specific features of the questions. In the evaluation, the results show around 90% accuracy can be achieved in the answerable subclass classification and generic question templates classification. On the other hand, the machine learning method does not perform well at identifying the category of unanswerable questions, due to the asymmetric distribution.\n\n\nCONCLUSIONS\nIn this paper, a comprehensive study on clinical questions has been completed. A major outcome of this work is the multilayer classification model. It serves as a major component of a patient-records-based clinical question answering system as our studies continue. As well, the question collections can be reused by the research community to improve the efficiency of their own question answering systems."} {"_id": "a4fa9754b555f9c2c2d1e10aecfb3153aea46bf6", "title": "Deep Learning for Intelligent Video Analysis", "text": "Analyzing videos has been one of the fundamental problems of computer vision and multimedia content analysis for decades. The task is very challenging as video is an information-intensive medium with large variations and complexities. Thanks to the recent development of deep learning techniques, researchers in both computer vision and multimedia communities are now able to boost the performance of video analysis significantly and initiate new research directions to analyze video content. This tutorial will present recent advances under the umbrella of video understanding, which start from a unified deep learning toolkit--Microsoft Cognitive Toolkit (CNTK) that supports popular model types such as convolutional nets and recurrent networks, to fundamental challenges of video representation learning and video classification, recognition, and finally to an emerging area of video and language."} {"_id": "3c79c967c2cb2e5e69f4b20688d0102a3bb28be3", "title": "Glance: rapidly coding behavioral video with the crowd", "text": "Behavioral researchers spend a considerable amount of time coding video data to systematically extract meaning from subtle human actions and emotions. In this paper, we present Glance, a tool that allows researchers to rapidly query, sample, and analyze large video datasets for behavioral events that are hard to detect automatically. Glance takes advantage of the parallelism available in paid online crowds to interpret natural language queries and then aggregates responses in a summary view of the video data. Glance provides analysts with rapid responses when initially exploring a dataset, and reliable codings when refining an analysis. 
Our experiments show that Glance can code nearly 50 minutes of video in 5 minutes by recruiting over 60 workers simultaneously, and can get initial feedback to analysts in under 10 seconds for most clips. We present and compare new methods for accurately aggregating the input of multiple workers marking the spans of events in video data, and for measuring the quality of their coding in real-time before a baseline is established by measuring the variance between workers. Glance's rapid responses to natural language queries, feedback regarding question ambiguity and anomalies in the data, and ability to build on prior context in follow-up queries allow users to have a conversation-like interaction with their data - opening up new possibilities for naturally exploring video data."} {"_id": "7476b749b07a750224c1fe775c4bc3927bc09bbf", "title": "An Unsupervised User Behavior Prediction Algorithm Based on Machine Learning and Neural Network For Smart Home", "text": "Users operate smart home devices year in and year out, and these operations have produced a mass of data that was not well utilized in the past. Nowadays, with the development of big data and machine learning technologies, these data can be used to predict users\u2019 behavioral habits, and the prediction results can then be employed to enhance the intelligence of a smart home system. In view of this, this paper proposes a novel unsupervised user behavior prediction (UUBP) algorithm, which employs an artificial neural network and introduces a forgetting factor to overcome the shortcomings of previous prediction algorithms. The algorithm has a high level of autonomous and self-organizing learning ability and does not require much human intervention. Furthermore, because of the forgetting factor, the algorithm can better avoid the influence of a user\u2019s infrequent and out-of-date operation records. Finally, real end users\u2019 operation records are used to demonstrate that the UUBP algorithm achieves a better level of performance than other algorithms in terms of effectiveness."} {"_id": "97ead728b450275127a4b599ce7bebad29a17b37", "title": "Zero-Shot Question Generation from Knowledge Graphs for Unseen Predicates and Entity Types", "text": "We present a neural model for question generation from knowledge base triples in a \u201cZero-Shot\u201d setup, that is, generating questions for triples containing predicates, subject types or object types that were not seen at training time. Our model leverages triple occurrences in the natural language corpus in an encoder-decoder architecture, paired with an original part-of-speech copy action mechanism to generate questions. Benchmark and human evaluation show that our model sets a new state-of-the-art for zero-shot QG."} {"_id": "e303ef5a877490b948aea8aead9f22bf3c5131fe", "title": "Bulk current injection test modeling using an equivalent circuit for 1.8V mobile ICs", "text": "This paper presents a novel simulation method for bulk current injection (BCI) tests of I/O buffer circuits of mobile system memory. The simulation model consists of the BCI probe, directional coupler, PCB, PKG, and IC. The proposed method is based on a behavioural I/O buffer model using a pulse generator as an input. 
A detailed simulation flow is introduced and validated through simulations performed on several injection probe loading conditions using a power decoupling capacitor and an on-chip decoupling capacitor."} {"_id": "f852f6d52948cc94f04225b8268f09a543727684", "title": "HyperUAS - Imaging Spectroscopy from a Multirotor Unmanned Aircraft System", "text": "One of the key advantages of a low-flying unmanned aircraft system (UAS) is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. Remote sensing of quantitative biochemical and biophysical characteristics of small-sized spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral (i.e., hyperspectral) resolution. In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system (HyperUAS). HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual frequency GPS and an inertial movement unit. The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2\u20135 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved the operability of the system in standard field conditions, but also in a remote and harsh, low-temperature environment of East Antarctica. Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm."} {"_id": "790db62e0f4ac1e0141d1cb207655a7f6f1460a7", "title": "Sparse Dueling Bandits", "text": "The dueling bandit problem is a variation of the classical multi-armed bandit in which the allowable actions are noisy comparisons between pairs of arms. This paper focuses on a new approach for finding the \u201cbest\u201d arm according to the Borda criterion using noisy comparisons. We prove that in the absence of structural assumptions, the sample complexity of this problem is proportional to the sum of the inverse squared gaps between the Borda scores of each suboptimal arm and the best arm. We explore this dependence further and consider structural constraints on the pairwise comparison matrix (a particular form of sparsity natural to this problem) that can significantly reduce the sample complexity. This motivates a new algorithm called Successive Elimination with Comparison Sparsity (SECS) that exploits sparsity to find the Borda winner using fewer samples than standard algorithms. We also evaluate the new algorithm experimentally with synthetic and real data. The results show that the sparsity model and the new algorithm can provide significant improvements over standard approaches."} {"_id": "b51c0ccd471bdaa138e8afe1cc201c1ebb7a51e1", "title": "Focus plus context screens: combining display technology with visualization techniques", "text": "Computer users working with large visual documents, such as large layouts, blueprints, or maps, perform tasks that require them to simultaneously access overview information while working on details. 
To avoid the need for zooming, users currently have to choose between using a sufficiently large screen and applying appropriate visualization techniques. Currently available hi-res \"wall-size\" screens, however, are cost-intensive, space-intensive, or both. Visualization techniques allow the user to more efficiently use the given screen space, but in exchange they either require the user to switch between multiple views or they introduce distortion. In this paper, we present a novel approach to simultaneously displaying focus and context information. Focus plus context screens consist of a hi-res display and a larger low-res display. Image content is displayed such that the scaling of the display content is preserved, while its resolution may vary according to which display region it is displayed in. Focus plus context screens are applicable to practically all tasks that currently use overviews or fisheye views, but unlike these visualization techniques, focus plus context screens provide a single, non-distorted view. We present a prototype that seamlessly integrates an LCD with a projection screen and demonstrate four applications that we have adapted so far."} {"_id": "9d9eddf405b1aa4eaf6c35b0d16e0b96422f0666", "title": "Treatment of posterolateral tibial plateau fractures with modified Frosch approach: a cadaveric study and case series", "text": "This study aimed to investigate the surgical techniques and the clinical efficacy of a modified Frosch approach in the treatment of posterolateral tibial plateau fractures. The standard Frosch approach was performed on 5 fresh-frozen cadavers. The mean bony surface area was measured upon adequate exposure of the proximal tibial cortex. A lateral proximal tibial plate and posterolateral T-plates were placed, and the ease of the procedure was noted. In the study, 12 clinical cases of posterolateral tibial plateau fractures were treated via modified or standard Frosch approaches. The outcome was assessed over a short to medium follow-up period. The cadaver studies allowed inspection of the posterolateral joint surface of all specimens from the lateral side. The mean bony surface areas of the exposed lateral and posterolateral tibial plateau were (6.78 \u00b1 1.13) cm2 and (3.59 \u00b1 0.65) cm2, respectively. Lateral and posterolateral plates were implanted successfully. Lateral proximal tibial plates were fixed in 10 patients via the modified Frosch approach, while posterolateral plates were fixed in 2 patients via the standard Frosch approach. Patients were followed up for 10 to 24 months (average: 15.7 months) and no complications were observed during this period. Based on the Rasmussen knee function score system, the results were recorded as excellent, good, and fair in 6, 4, and 2 patients, respectively. In conclusion, the modified Frosch approach offers the advantages of clear exposure and convenient reduction and internal fixation of the fracture, and it yields good clinical results compared with the standard approach."} {"_id": "a1bf3e73cb6a1e1430c60dcc2aacb45a405223ba", "title": "Superficial dorsal penile vein thrombosis (penile Mondor's disease)", "text": "In our center, between 1992 and 1994, penile Mondor's disease (superficial dorsal penile vein thrombosis) was diagnosed in 5 patients aged 20\u201339 years. In all patients the thromboses were noted 24\u201348 hours after a prolonged sexual act with or without intercourse. The main symptom was a cord-like thickening of the superficial veins, which were painless or slightly painful. 
Doppler examination of the superficial dorsal vein revealed obstruction of the vessels. In 2 patients the retroglandular plexus was also involved. Patients were treated with anti-inflammatory medications (Tenoxicam or Ibuprofen). The resolution of the thrombosis occurred uneventfully within 4\u20136 weeks. No recurrence or erectile dysfunction was noted in any of the patients. Penile Mondor's disease is a benign pathology of the superficial dorsal penile vein and should be taken into account in the differential diagnosis of penile pathologies."} {"_id": "d499b63fba609f918aa1d9098510bd0ac11418d8", "title": "Opinion-aware Knowledge Graph for Political Ideology Detection", "text": "Identifying an individual\u2019s political ideology from their speeches and written texts is important for analyzing political opinions and user behavior on social media. Traditional opinion mining methods rely on bag-of-words representations to classify texts into different ideology categories. Such methods are too coarse for understanding political ideologies. The key to identifying different ideologies is to recognize the different opinions expressed toward a specific topic. To model this insight, we classify ideologies based on the distribution of opinions expressed towards real-world entities or topics. Specifically, we propose a novel approach to political ideology detection that makes predictions based on an opinion-aware knowledge graph. We show how to construct such a graph by integrating the opinions and targeted entities extracted from text into an existing structured knowledge base, and show how to perform ideology inference by information propagation on the graph. Experimental results demonstrate that our method achieves high accuracy in detecting ideologies compared to baselines including LR, SVM and RNN."} {"_id": "78b1dc7995f9f9b7ee7ef6e193374b9fe3487d30", "title": "Deep Convolution Neural Networks for Twitter Sentiment Analysis", "text": "Twitter sentiment analysis technology provides methods for surveying public emotion about the events or products related to them. Most current research focuses on obtaining sentiment features by analyzing lexical and syntactic features. These features are expressed explicitly through sentiment words, emoticons, exclamation marks, and so on. In this paper, we introduce a word embeddings method obtained by unsupervised learning based on large twitter corpora; this method uses latent contextual semantic relationships and co-occurrence statistical characteristics between words in tweets. These word embeddings are combined with n-gram features and word sentiment polarity score features to form a sentiment feature set of tweets. The feature set is integrated into a deep convolution neural network for training and predicting sentiment classification labels. We experimentally compare the performance of our model with a baseline word n-grams model on five Twitter data sets; the results indicate that our model performs better on accuracy and F1-measure for twitter sentiment classification."} {"_id": "40f979c38176a1c0ccb6f1e8584e52bd7f581273", "title": "Arm pose copying for humanoid robots", "text": "Learning by imitation is becoming increasingly important for teaching humanoid robots new skills. The simplest form of imitation is behavior copying, in which the robot minimizes the difference between its perceived motion and that of the imitated agent. 
One problem that must be solved even in this simplest of all imitation tasks is calculating the learner's pose corresponding to the perceived pose of the agent it is imitating. This paper presents a general framework for solving this problem in closed form for the arms of a generalized humanoid robot, of which most available humanoids are special cases. The paper also reports the evaluation of the proposed system for real and simulated robots."} {"_id": "515b4bbd7a423b0ee8eb1da0eddc8f6ee4592742", "title": "In-vehicle information system as a driver's secondary activity: Case study", "text": "A robust methodology for detecting and evaluating driver distraction induced by an in-vehicle information system, using an artificial neural network and fuzzy logic, is introduced in this paper. An artificial neural network is used to predict the driver's performance on a specific road segment. The predicted performance-based measures are compared to driving with secondary task accomplishment. Fuzzy logic is applied to fuse the variables into a single output, which constitutes a level of driver distraction in percentage. The technique was tested on a vehicle simulator by ten drivers who used the in-vehicle information system as a secondary activity. The driver-in-the-loop experiment outcomes are discussed."} {"_id": "9283e274236af381cfb20e7dda79f249936b02ab", "title": "Short-Interval Detailed Production Scheduling in 300mm Semiconductor Manufacturing using Mixed Integer and Constraint Programming", "text": "Fully automated 300mm manufacturing requires the adoption of a real-time lot dispatching paradigm. Automated dispatching has provided significant improvements over manual dispatching by removing variability from the thousands of dispatching decisions made every day in a fab. Real-time resolution of tool queues, with consideration of changing equipment states, process restrictions, physical and logical location of WIP, supply chain objectives and a myriad of other parameters, is required to ensure successful dispatching in the dynamic fab environment. However, the real-time dispatching decision in semiconductor manufacturing generally remains a reactive, heuristic response in existing applications, limited to the current queue of each tool. The shortcomings of this method of assigning WIP to tools, aptly named \"opportunistic scavenging\" as stated in G. Sullivan (1987), have become more apparent in lean manufacturing environments where lower WIP levels present fewer obvious opportunities for beneficial lot sequencing or batching. Recent advancements in mixed integer programming (MIP) and constraint programming (CP) have raised the possibility of integrating optimization software, commonly used outside of the fab environment to compute optimal solutions for scheduling scenarios ranging from order fulfillment systems to crew-shift-equipment assignments, with a real-time dispatcher to create a short-interval scheduler. The goal of such a scheduler is to optimize WIP flow through various sectors of the fab by expanding the analysis beyond the current WIP queue to consider upstream and downstream flow across the entire tool group or sector. 
This article describes the production implementation of a short-interval local area scheduler in IBM's leading-edge 300mm fab located in East Fishkill, New York, including motivation, approach, and initial results."} {"_id": "9dcc45ca582e506994e62d255d006435967608a5", "title": "AODVSEC: A Novel Approach to Secure Ad Hoc on-Demand Distance Vector (AODV) Routing Protocol from Insider Attacks in MANETs", "text": "Mobile Ad hoc Network (MANET) is a collection of mobile nodes that can communicate with each other using multihop wireless links without requiring any fixed base-station infrastructure or centralized management. Each node in the network acts as both a host and a router. In such a scenario, designing an efficient, reliable and secure routing protocol has been a major challenging issue over the last many years. Numerous schemes have been proposed for secure routing protocols, and most of the research work has so far focused on providing security for routing using cryptography. In this paper, we propose a novel approach to secure the Ad hoc On-demand Distance Vector (AODV) routing protocol from insider attacks launched through active forging of its Route Reply (RREP) control message. The AODV routing protocol does not have any security provision, which makes it less reliable in a publicly open ad hoc network. To deal with the concerned security attacks, we have proposed the AODV Security Extension (AODVSEC), which enhances the scope of AODV for the security provision. We have compared AODVSEC with AODV and Secure AODV (SAODV) in a normal situation as well as in the presence of the concerned attacks, viz. Resource Consumption (RC) attack, Route Disturb (RD) attack, Route Invasion (RI) attack and Blackhole (BH) attack. To evaluate the performances, we have considered Packet Delivery Fraction (PDF), Average End-to-End Delay (AED), Average Throughput (AT), Normalized Routing Load (NRL), Average Jitter and Accumulated Average Processing Time."} {"_id": "5a355396369385dbd28434f80f4b9c3ca3aff645", "title": "What's basic about basic emotions?", "text": "A widespread assumption in theories of emotion is that there exists a small set of basic emotions. From a biological perspective, this idea is manifested in the belief that there might be neurophysiological and anatomical substrates corresponding to the basic emotions. From a psychological perspective, basic emotions are often held to be the primitive building blocks of other, nonbasic emotions. The content of such claims is examined, and the results suggest that there is no coherent nontrivial notion of basic emotions as the elementary psychological primitives in terms of which other emotions can be explained. Thus, the view that there exist basic emotions out of which all other emotions are built, and in terms of which they can be explained, is questioned, raising the possibility that this position is an article of faith rather than an empirically or theoretically defensible basis for the conduct of emotion research. This suggests that perhaps the notion of basic emotions will not lead to significant progress in the field. An alternative approach to explaining the phenomena that appear to motivate the postulation of basic emotions is presented."} {"_id": "3186970b4723ba456b18d4edf600b635f30a1dfd", "title": "Domestic violence and psychological well-being of survivor women in Punjab, Pakistan", "text": "Violence against women is becoming a very critical issue, especially domestic violence. 
Public health practitioners in developing countries have over the years been deeply concerned with trying to study and mitigate its impact on female survivors. It is defined as gender-based violence that results in, or is likely to result in, physical, sexual or mental harm/suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life.1,2 Domestic violence is a global issue, and the term is commonly used to describe the abuse of women who suffer at the hands of their male partner, because a wife abused by her husband is the most common form of violence against women (e.g. wife beating, burning of women, acid throwing)3\u22126 and, according to the American Medical Association, domestic violence is perceived as a pattern of physical, sexual and/or psychological abuse by a person with whom the survivor has had an intimate relationship.7\u221210 Domestic violence is also considered an important cause of mortality and morbidity of women in every country where these associations have been studied.2 It is considered an important cause of intentional injuries in women, which consequently reduces women\u2019s confidence by decreasing their desire to participate fully in life.11\u221214 It occurs in almost all social and economic classes, races and religions in multiple patterns/trends; however, not a single society could be declared free of this notion.15"} {"_id": "1432c7d273275d3bda997fcafb194720fb98fd12", "title": "Constraint-based Round Robin Tournament Planning", "text": "Sport tournament planning becomes a complex task in the presence of heterogeneous requirements from teams, media, fans and other parties. Existing approaches to sport tournament planning often rely on precomputed tournament schemes which may be too rigid to cater for these requirements. Existing work on sport tournaments suggests a separation of the planning process into three phases. In this work, it is shown that all three phases can be solved using finite-domain constraint programming. The design of Friar Tuck, a generic constraint-based round robin planning tool, is outlined. New numerical results on round robin tournaments obtained with Friar Tuck underline the potential of constraints over finite domains in this area."} {"_id": "82ae74db79fad1916c5c1c5d2a6fb7ae7c59c8e1", "title": "Authentication of FPGA Bitstreams: Why and How", "text": "Encryption of volatile FPGA bitstreams provides confidentiality to the design but does not ensure its authenticity. This paper motivates the need for adding authentication to the configuration process by providing application examples where this functionality is useful. An examination of possible solutions is followed by suggesting a practical one in consideration of the FPGA\u2019s configuration environment constraints. The solution presented here involves two symmetric-key encryption cores running in parallel to provide both authentication and confidentiality while sharing resources for efficient implementation."} {"_id": "899b8bac810d3fc50e59425a3b6d7faf96470895", "title": "Iterative Procedures for Nonlinear Integral Equations", "text": "The numerical solution of nonlinear integral equations involves the iterative solution of finite systems of nonlinear algebraic or transcendental equations. Certain conventional techniques for treating such systems are reviewed in the context of a particular class of nonlinear equations. 
A procedure is synthesized to offset some of the disadvantages of these techniques in this context; however, the procedure is not restricted to this particular class of systems of nonlinear equations."} {"_id": "b0bf2484f2ec40a8c3eaa8bfcaa5ce83797f8e71", "title": "Beam selection for performance-complexity optimization in high-dimensional MIMO systems", "text": "Millimeter-wave (mm-wave) communications systems offer a promising solution to meeting the increasing data demands on wireless networks. Not only do mm-wave systems allow orders of magnitude larger bandwidths, they also create a high-dimensional spatial signal space due to the small wavelengths, which can be exploited for beamforming and multiplexing gains. However, the complexity of digitally processing the entire high-dimensional signal is prohibitive. By exploiting the inherent channel sparsity in beamspace due to highly directional propagation at mm-wave, it is possible to design near-optimal transceivers with dramatically lower complexity. In such beamspace MIMO systems, it is first necessary to determine the set of beams which define the low-dimensional communication subspace. In this paper, we address this beam selection problem and introduce a simple power-based classifier for determining the beamspace sparsity pattern that characterizes the communication subspace. We first introduce a physical model for a small cell which will serve as the setting for our analysis. We then develop a classifier for the physical model, and show its optimality for a class of ideal signals. Finally, we present illustrative numerical results and show the feasibility of the classifier in mobile settings."} {"_id": "05252b795f0f1238ac7e0d7af7fc2372c34a181d", "title": "Authoritative Sources in a Hyperlinked Environment", "text": "The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of \u201cauthoritative\u201d information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of \u201chub pages\u201d that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis."} {"_id": "3b1953ff2c0c9dd045afe6766afb91599522052b", "title": "Efficient Computation of PageRank", "text": "This paper discusses efficient techniques for computing PageRank, a ranking metric for hypertext documents. We show that PageRank can be computed for very large subgraphs of the web (up to hundreds of millions of nodes) on machines with limited main memory. Running-time measurements on various memory configurations are presented for PageRank computation over the 24-million-page Stanford WebBase archive. We discuss several methods for analyzing the convergence of PageRank based on the induced ordering of the pages. 
We present convergence results helpful for determining the number of iterations necessary to achieve a useful PageRank assignment, both in the absence and presence of search queries."} {"_id": "4ba18b2f35515f7f3ad3bc38100730c5808a52af", "title": "The Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity", "text": "We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics are mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis."} {"_id": "65227ddbbd12015ba8a45a81122b1fa540e79890", "title": "The PageRank Citation Ranking: Bringing Order to the Web.", "text": "The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation."} {"_id": "9ac34c7040d08a27e7dc75cfa46eb0144de3a284", "title": "The Anatomy of a Large-Scale Hypertextual Web Search Engine", "text": "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine - the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. 
Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want."} {"_id": "9bd362017e2592eec65da61d758d2c5d55237706", "title": "Threat, Authoritarianism, and Selective Exposure to Information", "text": "We examined the hypothesis that threat alters the cognitive strategies used by high authoritarians in seeking out new political information from the environment. In a laboratory experiment, threat was manipulated through a \u201cmortality salience\u201d manipulation used in research on terror management theory (Pyszczynski, Solomon, & Greenberg, 2003). Subjects (N = 92) were then invited to read one of three editorial articles on the topic of capital punishment. We found that in the absence of threat, both low and high authoritarians were responsive to salient norms of evenhandedness in information selection, preferring exposure to a two-sided article that presents the merits of both sides of an issue to an article that selectively touts the benefits of the pro or con side of the issue. However, in the presence of threat, high but not low authoritarians became significantly more interested in exposure to an article containing uniformly pro-attitudinal arguments, and significantly less interested in a balanced, two-sided article. Finally, a path analysis indicated that selective exposure to attitude-congruent information led to more internally consistent policy attitudes and inhibited attitude change. Discussion focuses on the role of threat in conditioning the cognitive and attitudinal effects of authoritarianism."} {"_id": "19b758d14b6ff537593108615c609dd821d0efc8", "title": "Adaptive Learning Hybrid Model for Solar Intensity Forecasting", "text": "Energy management is indispensable in the smart grid, which integrates more renewable energy resources, such as solar and wind. Because of the intermittent power generation from these resources, precise power forecasting has become crucial to achieve efficient energy management. In this paper, we propose a novel adaptive learning hybrid model (ALHM) for precise solar intensity forecasting based on meteorological data. We first present a time-varying multiple linear model (TMLM) to capture the linear and dynamic property of the data. We then construct simultaneous confidence bands for variable selection. Next, we apply the genetic algorithm back propagation neural network (GABP) to learn the nonlinear relationships in the data. We further propose ALHM by integrating TMLM, GABP, and the adaptive learning online hybrid algorithm. The proposed ALHM captures the linear, temporal, and nonlinear relationships in the data, and keeps improving the predicting performance adaptively online as more data are collected. Simulation results show that ALHM outperforms several benchmarks in both short-term and long-term solar intensity forecasting."} {"_id": "af777f8b1c694e353a57d81c3c1b4620e2ae61b1", "title": "A Survey and Typology of Human Computation Games", "text": "Human computation games (HCGs) can be seen as a paradigm that makes use of human brain power to solve computational problems as a byproduct of game play. In recent years, HCGs have been increasing in popularity within various application areas which aim to harvest human problem solving skills by seamlessly integrating computational tasks into games. 
Thus, a proper system of describing such games is necessary in order to obtain a better understanding of the current body of work and identify new opportunities for future research directions. As a starting point, in this paper, we review existing literature on HCGs and present a typology consisting of 12 dimensions based on the key features of HCGs. With this typology, all HCGs can be classified and compared in a systematic manner."} {"_id": "c2da441587e7cc11497d89d40bb4ad3ac080686f", "title": "Gossip along the way: Order-Optimal Consensus through Randomized Path Averaging", "text": "Gossip algorithms have recently received significant attention, mainly because they constitute simple and robust algorithms for distributed information processing over networks. However for many topologies that are realistic for wireless ad-hoc and sensor networks (like grids and random geometric graphs), the standard nearest-neighbor gossip converges very slowly. A recently proposed algorithm called geographic gossip improves gossip efficiency by a \u221a(n/log n) factor for random geometric graphs, by exploiting geographic information of node locations. In this paper we prove that a variation of geographic gossip that averages along routed paths improves efficiency by an additional \u221a(n/log n) factor and is order-optimal for grids and random geometric graphs. Our analysis provides some general techniques and can be used to provide bounds on the performance of randomized message passing algorithms operating over various graph topologies."} {"_id": "2673a3614b4239f1a62fc577922f0db7ec2b8c27", "title": "Evaluation of games for teaching computer science", "text": "Well-designed educational games (both online and offline) can be a valuable way to engage students with a topic. However, games can also be a time-consuming distraction, so it is useful to be able to identify which are suitable. This paper provides guidelines for evaluating games, and applies them to identify promising games. We begin by defining what is meant by a game, ruling out simple animations or quizzes, and focusing on a more playful approach to education. Based on this definition, we have identified games that are available for teaching topics in computer science, and classified them. We have developed criteria for evaluating games based on existing research on educational games, combined with educational theory and considering them in the context of CS. The criteria developed cover the topics taught, how easy it is to install and run, how engaging the game is, and how much time students might invest to use the game. We find that although there are a number of worthwhile games in the field of CS and most are free, the coverage of topics is very patchy, with most focused on programming concepts and binary numbers."} {"_id": "e1a23a0c7c4b82b9c9a72934da04b4e743116554", "title": "Detecting and Tracking The Real-time Hot Topics: A Study on Computational Neuroscience", "text": "In this study, following the idea of our previous paper (Wang et al., 2013a), we improve the method to detect and track hot topics in a specific field by using the real-time article usage data. With the \u201cusage count\u201d data provided by Web of Science, we take the field of computational neuroscience as an example for analysis. 
About 10 thousand articles in the field of Computational Neuroscience were queried in Web of Science, and the records, including the usage count data of each paper, were harvested and updated weekly from October 19, 2015 to March 21, 2016. The hot topics are defined by the most frequently used keywords aggregated from the articles. The analysis reveals that hot topics in Computational Neuroscience are related to the key technologies, like \u201cfmri\u201d, \u201ceeg\u201d, \u201cerp\u201d, etc. Furthermore, using the weekly updated data, we track the dynamical changes of the topics. The immediacy of usage data makes it possible to track the \u201cheat\u201d of hot topics in a timely and dynamic manner."} {"_id": "f9ea177533fe3755f62737842c203bea2b2a8adc", "title": "A probabilistic rating inference framework for mining user preferences from reviews", "text": "We propose a novel Probabilistic Rating infErence Framework, known as Pref, for mining user preferences from reviews and then mapping such preferences onto numerical rating scales. Pref applies existing linguistic processing techniques to extract opinion words and product features from reviews. It then estimates the sentimental orientations (SO) and strength of the opinion words using our proposed relative-frequency-based method. This method allows semantically similar words to have different SO, thereby addressing a major limitation of existing methods. Pref takes the intuitive relationships between class labels, which are scalar ratings, into consideration when assigning ratings to reviews. Empirical results validated the effectiveness of Pref against several related algorithms, and suggest that Pref can produce reasonably good results using a small training corpus. We also describe a useful application of Pref as a rating inference framework. Rating inference transforms user preferences described as natural language texts into numerical rating scales. This allows Collaborative Filtering (CF) algorithms, which operate mostly on databases of scalar ratings, to utilize textual reviews as an additional source of user preferences. We integrated Pref with a classical CF algorithm, and empirically demonstrated the advantages of using rating inference to augment ratings for CF."} {"_id": "2e03c444e7854188dc7badec33bc9a52cac91885", "title": "Building Information Modeling and real world knowledge: A methodological approach to accurate semantic documentation for the built environment", "text": "Building Information Modeling is considered by the scientific literature as an emerging trend in the architectural documentation scenario, as it is basically a digital representation of physical and functional features of facilities, serving as a shared knowledge resource during their whole life cycle. BIM is actually a process (not a software product, as some have suggested), in which different players act, sharing data through digital models in a coordinated, consistent and always up-to-date workflow, in order to reach reliability and higher quality throughout the construction process. This way BIM tools were originally meant to ease the design of new architectures, generated by parametric geometries connected through hierarchical relationships of \u201csmart objects\u201d (components self-aware of their identity and conscious of their interactions with each other). 
However, this approach can also be successfully applied to what already exists: TLS (Terrestrial Laser Scanning) or digital photogrammetry are supposed to be the first abstraction step in a methodology proposal intended as a scientific strategy in which BIM, relying on its own semantic splitting attitude and its topological structure, is explicitly used in the representation of existing buildings belonging to the Cultural Heritage. Presenting some progress in the development of a specific free Autodesk Revit plug-in, nicknamed GreenSpider after its capability to lay out points in the digital domain as if they were nodes of an ideal cobweb, this paper examines how point clouds collected during high definition surveys can be processed with accuracy in a BIM environment, highlighting critical aspects and advantages deriving from the application of parametric techniques to the real world domain representation."} {"_id": "af0cf9da53bf3e1e6d349faed14cb68ad71aa2a4", "title": "Graph Convolutional Networks With Argument-Aware Pooling for Event Detection", "text": "The current neural network models for event detection have only considered the sequential representation of sentences. Syntactic representations have not been explored in this area although they provide an effective mechanism to directly link words to their informative context for event detection in the sentences. In this work, we investigate a convolutional neural network based on dependency trees to perform event detection. We propose a novel pooling method that relies on entity mentions to aggregate the convolution vectors. The extensive experiments demonstrate the benefits of the dependency-based convolutional neural networks and the entity mention-based pooling method for event detection. We achieve the state-of-the-art performance on widely used datasets with both perfect and predicted entity mentions."} {"_id": "6fb0452f63918c1daab8f515c6f52444054fd078", "title": "Exploring the comparative salience of restaurant attributes: A conjoint analysis approach", "text": "This study explores how travelers select a restaurant for dining out, given that restaurant customers consider diverse attributes in making a selection. By applying a conjoint analysis, an exploratory multiple-case study is conducted for three restaurants in New York City. Findings from Study 1 (an overall travelers group) and Study 2 (two different country-of-residence groups: foreign and domestic travelers) show that food, value, atmosphere, and service are considered as substantially important criteria in selecting restaurants, in that order. However, results from Study 3 examining different restaurant types (low-priced food stand, low-priced indoor, and high-priced indoor) reveal that the food attribute is the most important factor, regardless of restaurant types, whereas the other attributes\u2019 rankings vary."} {"_id": "6c5a1e4692c07f4069969c423e0767cbb5da6f8e", "title": "Integrality and Separability of Input Devices", "text": "Current input device taxonomies and other frameworks typically emphasize the mechanical structure of input devices. We suggest that selecting an appropriate input device for an interactive task requires looking beyond the physical structure of devices to the deeper perceptual structure of the task, the device, and the interrelationship between the perceptual structure of the task and the control properties of the device. 
We affirm that perception is key to understanding performance of multidimensional input devices on multidimensional tasks. We have therefore extended the theory of processing of perceptual structure to graphical interactive tasks and to the control structure of input devices. This allows us to predict task and device combinations that lead to better performance and hypothesize that performance is improved when the perceptual structure of the task matches the control structure of the device. We conducted an experiment in which subjects performed two tasks with different perceptual structures, using two input devices with correspondingly different control structures, a three-dimensional tracker and a mouse. We analyzed both speed and accuracy, as well as the trajectories generated by subjects as they used the unconstrained three-dimensional tracker to perform each task. The results support our hypothesis and confirm the importance of matching the perceptual structure of the task and the control structure of the input device."} {"_id": "ef529a661104ebe3c8a68f52d40a2cda81685794", "title": "Toward a Universal Underspecified Semantic Representation", "text": "We define Canonical Form Minimal Recursion Semantics (CF-MRS) and prove that all the well-formed MRS structures generated by the MRS semantic composition algorithm are in this form. We prove that the qeq relationships are equivalent to outscoping relations when MRS structures are in this form. This result fills the gap between some underspecification formalisms and motivates defining a Canonical Form Underspecified Representation (CF-UR) which brings those underspecification formalisms together."} {"_id": "60ae024362ce54b8c587f5d6ff1de25cf6616297", "title": "MIMO radar: an idea whose time has come", "text": "It has recently been shown that multiple-input multiple-output (MIMO) antenna systems have the potential to improve dramatically the performance of communication systems over single antenna systems. Unlike beamforming, which presumes a high correlation between signals either transmitted or received by an array, the MIMO concept exploits the independence between signals at the array elements. In conventional radar, target scintillations are regarded as a nuisance parameter that degrades radar performance. The novelty of MIMO radar is that it takes the opposite view; namely, it capitalizes on target scintillations to improve the radar's performance. We introduce the MIMO concept for radar. The MIMO radar system under consideration consists of a transmit array with widely-spaced elements such that each views a different aspect of the target. The array at the receiver is a conventional array used for direction finding (DF). The system performance analysis is carried out in terms of the Cramer-Rao bound of the mean-square error in estimating the target direction. It is shown that MIMO radar leads to significant performance improvement in DF accuracy."} {"_id": "04442c483964e14f14a811c46b5772f0f0b79bd7", "title": "Implementation of multiport dc-dc converter-based Solid State Transformer in smart grid system", "text": "A solid-state transformer (SST) would be at least as efficient as a conventional version but would provide other benefits as well, particularly as renewable power sources become more widely used. Among its more notable strong points are on-demand reactive power support for the grid, better power quality, current limiting, management of distributed storage devices and a dc bus. 
Most of the nation's power grid currently operates one way - power flows from the utility to the consumer - and traditional transformers simply change voltage from one level to another. But smart transformers, based on power semiconductor switches, are more versatile. Not only can they change voltage levels, but they can also effectively control the power flow in both directions. The development of a Solid State Transformer (SST) that incorporates a DC-DC multiport converter to integrate both photovoltaic (PV) power generation and battery energy storage is presented in this paper. The DC-DC stage is based on a quad active-bridge (QAB) converter which not only provides isolation for the load, but also for the PV and storage. The AC-DC stage is implemented with a pulse-width-modulated (PWM) single phase rectifier. A novel technique that complements the SISO controller by taking into account the cross-coupling characteristics of the QAB converter is also presented herein. Cascaded SISO controllers are designed for the AC-DC stage. The QAB demanded power is calculated at the QAB controls and then fed into the rectifier controls in order to minimize the effect of the interaction between the two SST stages. The dynamic performance of the designed control loops based on the proposed control strategies is verified through extensive simulation of the SST average and switching models."} {"_id": "6089240c5ccf5e5e740f7c9a8ff82c8878d947bc", "title": "Compact Printed Wide-Slot UWB Antenna With 3.5/5.5-GHz Dual Band-Notched Characteristics", "text": "A novel compact wide-slot ultrawideband (UWB) antenna with dual band-notched characteristics is presented. The antenna consists of an inverted U-shaped slot on the ground plane and a radiation patch similar to the slot that is fed by a 50-\u03a9 microstrip line. By etching a C-shaped slot on the radiation patch and extruding an L-shaped stub from the ground plane, dual band-notched properties in the WiMAX (3.4-3.69 GHz) and WLAN (5.15-5.825 GHz) bands are achieved. The proposed antenna has a compact size of 20\u00d727 mm\u00b2 and operates from 2.89 to 11.52 GHz. Furthermore, nearly omnidirectional radiation patterns and constant gain are obtained in the working band."} {"_id": "563b03dea20a3c5a5488917a5a26d0377e6d4862", "title": "Connecting and separating mind-sets: culture as situated cognition.", "text": "People perceive meaningful wholes and later separate out constituent parts (D. Navon, 1977). Yet there are cross-national differences in whether a focal target or integrated whole is first perceived. Rather than construe these differences as fixed, the proposed culture-as-situated-cognition model explains these differences as due to whether a collective or individual mind-set is cued at the moment of observation. Eight studies demonstrated that when cultural mind-set and task demands are congruent, easier tasks are accomplished more quickly and more difficult or time-constrained tasks are accomplished more accurately (Study 1: Koreans, Korean Americans; Study 2: Hong Kong Chinese; Study 3: European- and Asian-heritage Americans; Study 4: Americans; Study 5: Hong Kong Chinese; Study 6: Americans; Study 7: Norwegians; Study 8: African-, European-, and Asian-heritage Americans). Meta-analyses (d = .34) demonstrated homogeneous effects across geographic place (East-West), racial-ethnic group, task, and sensory mode - differences are cued in the moment. 
Contrast and separation are salient individual mind-set procedures, resulting in focus on a single target or main point. Assimilation and connection are salient collective mind-set procedures, resulting in focus on multiplicity and integration."} {"_id": "23c035073654412a6a9ec486739dd9bd16dd663d", "title": "Multiview Image Completion with Space Structure Propagation", "text": "We present a multiview image completion method that provides geometric consistency among different views by propagating space structures. Since a user specifies the region to be completed in one of multiview photographs casually taken in a scene, the proposed method enables us to complete the set of photographs with geometric consistency by creating or removing structures on the specified region. The proposed method incorporates photographs to estimate dense depth maps. We initially complete color as well as depth from a view, and then facilitate two stages of structure propagation and structure-guided completion. Structure propagation optimizes space topology in the scene across photographs, while structure-guided completion enhances and completes local image structure of both depth and color in multiple photographs with structural coherence by searching nearest neighbor fields in relevant views. We demonstrate the effectiveness of the proposed method in completing multiview images."} {"_id": "497d41eb37484345332e7200438f51ed923bcd2c", "title": "Effectiveness and Adoption of a Drawing-to-Learn Study Tool for Recall and Problem Solving: Minute Sketches with Folded Lists", "text": "Drawing by learners can be an effective way to develop memory and generate visual models for higher-order skills in biology, but students are often reluctant to adopt drawing as a study method. We designed a nonclassroom intervention that instructed introductory biology college students in a drawing method, minute sketches in folded lists (MSFL), and allowed them to self-assess their recall and problem solving, first in a simple recall task involving non-European alphabets and later using unfamiliar biology content. In two preliminary ex situ experiments, students had greater recall on the simple learning task, non-European alphabets with associated phonetic sounds, using MSFL in comparison with a preferred method, visual review (VR). In the intervention, students studying using MSFL and VR had \u223c50-80% greater recall of content studied with MSFL and, in a subset of trials, better performance on problem-solving tasks on biology content. Eight months after beginning the intervention, participants had shifted self-reported use of drawing from 2% to 20% of study time. For a small subset of participants, MSFL had become a preferred study method, and 70% of participants reported continued use of MSFL. This brief, low-cost intervention resulted in enduring changes in study behavior."} {"_id": "dc5d04d34b278b944097b8925a9147773bbb80cc", "title": "A Temporal Sequence Learning for Action Recognition and Prediction", "text": "In this work, we present a method to represent a video with a sequence of words, and learn the temporal sequencing of such words as the key information for predicting and recognizing human actions. We leverage core concepts from the Natural Language Processing (NLP) literature used in sentence classification to solve the problems of action prediction and action recognition. Each frame is converted into a word that is represented as a vector using the Bag of Visual Words (BoW) encoding method. 
The words are then combined into a sentence that represents the video. The sequence of words in different actions is learned with a simple but effective Temporal Convolutional Neural Network (T-CNN) that captures the temporal sequencing of information in a video sentence. We demonstrate that a key characteristic of the proposed method is its low latency, i.e., its ability to predict an action accurately with a partial sequence (sentence). Experiments on two datasets, UCF101 and HMDB51, show that the method on average reaches 95% of its accuracy within half the video frames. Results also demonstrate that our method achieves comparable state-of-the-art performance in action recognition (i.e. at the completion of the sentence) in addition to action prediction."} {"_id": "04850809e4e31437039833753226b440a4fc8864", "title": "Deep Memory Networks for Attitude Identification", "text": "We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection that identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exact sentiment towards an identified entity (the target) into positive, negative, or neutral.\n Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and conversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments for the set of targets also influence each other -- the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, the AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and the state-of-the-art deep learning models."} {"_id": "a76b92f17593358d21c8dd9d1c058b7658086123", "title": "Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks", "text": "Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, named adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and verify the transferability of adversarial examples across different DQN models. Furthermore, we present a novel class of attacks based on this vulnerability that enable policy manipulation and induction in the learning process of DQNs. We propose an attack mechanism that exploits the transferability of adversarial examples to implement policy induction attacks on DQNs, and demonstrate its efficacy and impact through experimental study of a game-learning scenario."} {"_id": "0a202f1dfc6991a6a204eaa5e6b46d6223a4d98a", "title": "Good features to track", "text": "No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. 
We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments."} {"_id": "116d7798c1123cf7fad4176e98f58fd49de4f8f1", "title": "Planning and Acting in Partially Observable Stochastic Domains", "text": "In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (MDPs) and partially observable MDPs (POMDPs). We then outline a novel algorithm for solving POMDPs offline and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to POMDPs, and some possibilities for finding approximate solutions."} {"_id": "12099545a31155585a813b840ed711de9d83cace", "title": "Simultaneous Mosaicing and Tracking with an Event Camera", "text": "An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering."} {"_id": "3d57515445c00c635e15222767fc0430069ed200", "title": "Depth Transfer: Depth Extraction from Video Using Non-Parametric Sampling", "text": "We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. 
Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade."} {"_id": "139abeefba9ab62418ba69bc74326d3199d33824", "title": "Aerial Lidar Data Classification using AdaBoost", "text": "We use the AdaBoost algorithm to classify 3D aerial lidar scattered height data into four categories: road, grass, buildings, and trees. To do so we use five features: height, height variation, normal variation, lidar return intensity, and image intensity. We also use only lidar-derived features to organize the data into three classes (the road and grass classes are merged). We apply and test our results using ten regions taken from lidar data collected over an area of approximately eight square miles, obtaining higher than 92% accuracy. We also apply our classifier to our entire dataset, and present visual classification results both with and without uncertainty. We implement and experiment with several variations within the AdaBoost family of algorithms. We observe that our results are robust and stable over all the various tests and algorithmic variations. We also investigate features and values that are most critical in distinguishing between the classes. This insight is important in extending the results from one geographic region to another."} {"_id": "3e42557a8a8120acbe7ab9c1f43c05a529254e04", "title": "Bio-inspired design and dynamic maneuverability of a minimally actuated six-legged robot", "text": "Rapidly running arthropods like cockroaches make use of passive dynamics to achieve remarkable locomotion performance with regard to stability, speed, and maneuverability. In this work, we take inspiration from these organisms to design, fabricate, and control a 10cm, 24 gram underactuated hexapedal robot capable of running at 14 body lengths per second and performing dynamic turning maneuvers. Our design relies on parallel kinematic mechanisms fabricated using the scaled smart composite microstructures (SCM) process and viscoelastic polymer legs with tunable stiffness. In addition to the novel robot design, we present experimental validation of the lateral leg spring (LLS) locomotion model's prediction that dynamic turning can be achieved by modulating leg stiffness. Finally, we present and validate a leg design for active stiffness control using shape memory alloy and demonstrate the ability of the robot to execute near-gymnastic 90\u00b0 turns in the span of five strides."} {"_id": "7a66b85f5c843ff1b5788ad651f700313993e305", "title": "C1-Almost Periodic Solutions of BAM Neural Networks with Time-Varying Delays on Time Scales", "text": "On a new type of almost periodic time scales, a class of BAM neural networks is considered. By employing a fixed point theorem and differential inequality techniques, some sufficient conditions ensuring the existence and global exponential stability of C1-almost periodic solutions for this class of networks with time-varying delays are established. Two examples are given to show the effectiveness of the proposed method and results."} {"_id": "a6774549061a800427d5f54fe606e80fab0364a3", "title": "Understanding IEEE 1451-Networked smart transducer interface standard - What is a smart transducer?", "text": "This article introduces the IEEE 1451 standard for networked smart transducers. 
It discusses the concepts of smart transducers, IEEE 1451 smart transducers, the architecture of the IEEE 1451 family of standards, applications of IEEE 1451, and example implementations of the IEEE 1451 standards. In conclusion, the IEEE 1451 suite of standards provides a set of standard interfaces for networked smart transducers, helping to achieve sensor plug and play and interoperability for industry and government."} {"_id": "9488372708f0b6dbf055c2a73ddb1c082934afb6", "title": "The Past, Present, and Future of Silicon Photonics", "text": "The pace of the development of silicon photonics has quickened since 2004 due to investment by industry and government. Commercial state-of-the-art CMOS silicon-on-insulator (SOI) foundries are now being utilized in a crucial test of 1.55-\u03bcm monolithic optoelectronic (OE) integration, a test sponsored by the Defense Advanced Research Projects Agency (DARPA). The preliminary results indicate that the silicon photonics are truly CMOS compatible. R&D groups have now developed 10-100-Gb/s electro-optic modulators, ultrafast Ge-on-Si photodetectors, efficient fiber-to-waveguide couplers, and Si Raman lasers. Electrically pumped silicon lasers are under intense investigation, with several approaches being tried; however, lasing has not yet been attained. The new paradigm for the Si-based photonic and optoelectronic integrated circuits is that these chip-scale networks, when suitably designed, will operate at a wavelength anywhere within the broad spectral range of 1.2-100 \u03bcm, with cryocooling needed in some cases."} {"_id": "22f656d0f8426c84a33a267977f511f127bfd7f3", "title": "From Facial Expression Recognition to Interpersonal Relation Prediction", "text": "Interpersonal relation defines the association, e.g., warm, friendliness, and dominance, between two or more people. We investigate if such fine-grained and high-level relation traits can be characterized and quantified from face images in the wild. We address this challenging problem by first studying a deep network architecture for robust recognition of facial expressions. Unlike existing models that typically learn from facial expression labels alone, we devise an effective multitask network that is capable of learning from rich auxiliary attributes such as gender, age, and head pose, beyond just facial expression data. While conventional supervised training requires datasets with complete labels (e.g., all samples must be labeled with gender, age, and expression), we show that this requirement can be relaxed via a novel attribute propagation method. The approach further allows us to leverage the inherent correspondences between heterogeneous attribute sources despite the disparate distributions of different datasets. With the network we demonstrate state-of-the-art results on existing facial expression recognition benchmarks. To predict interpersonal relation, we use the expression recognition network as branches for a Siamese model. Extensive experiments show that our model is capable of mining mutual context of faces for accurate fine-grained interpersonal prediction."} {"_id": "c9a48b155943f05c25da8cc957a7bb63c63a458c", "title": "The subepithelial connective tissue graft palatal donor site: anatomic considerations for surgeons.", "text": "Surgeons must become completely familiar with the anatomy of the palatal donor site to feel confident in providing the subepithelial connective tissue graft procedure. 
Variations in the size and shape of the hard palate affect the dimensions of donor tissue harvested, as well as the location of the greater palatine neurovascular bundle. This article classifies palatal vaults according to height as high, average, and shallow. Illustrations and cadaver dissection are utilized to demonstrate that surgeons can gain substantial donor tissue specimens without encountering the neurovascular bundle. Actions to be followed in the unlikely event that the neurovasculature is encountered are reviewed."} {"_id": "47ad62463adc468554da06c836133e23e4e0dce7", "title": "The biology of facial fillers.", "text": "The biologic behavior of a facial filler determines its advantages and disadvantages. The purpose of this article is to look at the relevant biology as part of a logical basis for making treatment decisions. Historical perspectives and biologic characteristics such as local tissue reaction (including phagocytosis and granulomatous inflammation), cross-linking, particle concentration, immunogenicity, biofilm formation, gel hardness, and collagen neogenesis are considered. Bovine collagen is the most immunogenic facial filler. Porcine and bioengineered human collagen implants have very low immunogenicity, but allergic reactions and elevations of IgG are possible. Cross-linking and concentration affect the longevity of collagen and hyaluronic acid fillers. Gel hardness affects how a hyaluronic acid filler flows through the syringe and needle. Calcium hydroxylapatite, poly-L-lactic acid, and polymethylmethacrylate fillers have been shown to stimulate collagen neogenesis. It appears that any facial filler can form a granuloma. Bacterial biofilms may play a role in the activation of quiescent granulomas. Various authors interpret the definition and significance of a granuloma differently."} {"_id": "34bbe565d9538ffdf4c8ef4e891411edf8d29447", "title": "Robust entropy-based endpoint detection for speech recognition in noisy environments", "text": "This paper presents an entropy-based algorithm for accurate and robust endpoint detection for speech recognition under noisy environments. Instead of using the conventional energy-based features, the spectral entropy is developed to identify the speech segments accurately. Experimental results show that this algorithm outperforms the energy-based algorithms in both detection accuracy and recognition performance under noisy environments, with an average error rate reduction of more than 16%."} {"_id": "76e04f695a3f5593858766841b284175ac72e5d3", "title": "Customer-Centric Strategic Planning: Integrating CRM in Online Business Systems", "text": "Customer Relationship Management (CRM) is increasingly found at the top of corporate agendas. Online companies in particular are embracing CRM as a major element of corporate strategy, because online technological applications permit a precise segmentation, profiling and targeting of customers, and the competitive pressures of the digital markets require a customer-centric corporate culture. The implementation of CRM systems in online organisations determines a complex restructuring of all organisational elements and processes. The strategic planning process will have to adapt to new customer-centric procedures. 
The present paper analyses the implementation process of a CRM system in online retail businesses and develops a model of the strategic planning function in a customer-centric context."} {"_id": "5720a5015ca67400fadd0ff6863519f4b030e731", "title": "A Generic Coordinate Descent Framework for Learning from Implicit Feedback", "text": "In recent years, interest in recommender research has shifted from explicit feedback towards implicit feedback data. A diversity of complex models has been proposed for a wide variety of applications. Despite this, learning from implicit feedback is still computationally challenging. So far, most work relies on stochastic gradient descent (SGD) solvers which are easy to derive, but in practice challenging to apply, especially for tasks with many items. For the simple matrix factorization model, an efficient coordinate descent (CD) solver has been previously proposed. However, efficient CD approaches have not been derived for more complex models. In this paper, we provide a new framework for deriving efficient CD algorithms for complex recommender models. We identify and introduce the property of k-separable models. We show that k-separability is a sufficient property to allow efficient optimization of implicit recommender problems with CD. We illustrate this framework on a variety of state-of-the-art models including factorization machines and Tucker decomposition. To summarize, our work provides the theory and building blocks to derive efficient implicit CD algorithms for complex recommender models."} {"_id": "5a8399a28aa322ed6b27b6408d34f44abdbf7b46", "title": "Object-Extraction and Question-Parsing using CCG", "text": "Accurate dependency recovery has recently been reported for a number of wide-coverage statistical parsers using Combinatory Categorial Grammar (CCG). However, overall figures give no indication of a parser\u2019s performance on specific constructions, nor how suitable a parser is for specific applications. In this paper we give a detailed evaluation of a CCG parser on object extraction dependencies found in WSJ text. We also show how the parser can be used to parse questions for Question Answering. The accuracy of the original parser on questions is very poor, and we propose a novel technique for porting the parser to a new domain, by creating new labelled data at the lexical category level only. Using a supertagger to assign categories to words, trained on the new data, leads to a dramatic increase in question parsing accuracy."} {"_id": "4f640c1338840f3740187352531dfeca9381b5c3", "title": "Mining Sequential Patterns: Generalizations and Performance Improvements", "text": "The problem of mining sequential patterns was recently introduced in [AS95]. We are given a database of sequences, where each sequence is a list of transactions ordered by transaction-time, and each transaction is a set of items. The problem is to discover all sequential patterns with a user-specified minimum support, where the support of a pattern is the number of data-sequences that contain the pattern. An example of a sequential pattern is \u201c5% of customers bought 'Foundation' and 'Ringworld' in one transaction, followed by 'Second Foundation' in a later transaction\u201d. We generalize the problem as follows. First, we add time constraints that specify a minimum and/or maximum time period between adjacent elements in a pattern. 
Second, we relax the restriction that the items in an element of a sequential pattern must come from the same transaction, instead allowing the items to be present in a set of transactions whose transaction-times are within a user-specified time window. Third, given a user-defined taxonomy (is-a hierarchy) on items, we allow sequential patterns to include items across all levels of the taxonomy. We present GSP, a new algorithm that discovers these generalized sequential patterns. Empirical evaluation using synthetic and real-life data indicates that GSP is much faster than the AprioriAll algorithm presented in [AS95]. GSP scales linearly with the number of data-sequences, and has very good scale-up properties with respect to the average data-sequence size."} {"_id": "024006d4c2a89f7acacc6e4438d156525b60a98f", "title": "Continuous control with deep reinforcement learning", "text": "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies \u201cend-to-end\u201d: directly from raw pixel inputs."} {"_id": "9afcff746d84b9a71c2ea7e9950c53a8757fcfc2", "title": "MCMCTS PCG 4 SMB: Monte Carlo Tree Search to Guide Platformer Level Generation", "text": "Markov chains are an enticing option for machine learned generation of platformer levels, but offer poor control for designers and are likely to produce unplayable levels. In this paper we present a method for guiding Markov chain generation using Monte Carlo Tree Search that we call Markov Chain Monte Carlo Tree Search (MCMCTS). We demonstrate an example use for this technique by creating levels trained on a corpus of levels from Super Mario Bros. We then present a player modeling study that was run with the hopes of using the data to better inform the generation of levels in future work."} {"_id": "60fd71bf5c6a799aba9742a6f8176189884a1196", "title": "Peak-End Effects on Player Experience in Casual Games", "text": "The peak-end rule is a psychological heuristic observing that people's retrospective assessment of an experience is strongly influenced by the intensity of the peak and final moments of that experience. We examine how aspects of game player experience are influenced by peak-end manipulations to the sequence of events in games that are otherwise objectively identical. A first experiment examines players' retrospective assessments of two games (a pattern matching game based on Bejeweled and a point-and-click reaction game) when the sequence of difficulty is manipulated to induce positive, negative and neutral peak-end effects. A second experiment examines assessments of a shootout game in which the balance between challenge and skill is similarly manipulated. 
Results across the games show that recollection of challenge was strongly influenced by peak-end effects; however, results for fun, enjoyment, and preference to repeat were varied -- sometimes significantly in favour of the hypothesized effects, sometimes insignificant, but never against the hypothesis."} {"_id": "14748beb5255da1256873f45a4ce71052d2ba40a", "title": "Efficient Spectral Feature Selection with Minimum Redundancy", "text": "Spectral feature selection identifies relevant features by measuring their capability of preserving sample similarity. It provides a powerful framework for both supervised and unsupervised feature selection, and has been proven to be effective in many real-world applications. One common drawback associated with most existing spectral feature selection algorithms is that they evaluate features individually and cannot identify redundant features. Since redundant features can have significant adverse effect on learning performance, it is necessary to address this limitation for spectral feature selection. To this end, we propose a novel spectral feature selection algorithm to handle feature redundancy, adopting an embedded model. The algorithm is derived from a formulation based on a sparse multi-output regression with an L2,1-norm constraint. We conduct theoretical analysis on the properties of its optimal solutions, paving the way for designing an efficient path-following solver. Extensive experiments show that the proposed algorithm can do well in both selecting relevant features and removing redundancy."} {"_id": "cce60aed492a1107aec818bdb07b04cb066680ed", "title": "Reduction of Fuzzy Rules and Membership Functions and Its Application to Fuzzy PI and PD Type Controllers", "text": "A fuzzy controller\u2019s design depends mainly on the rule base and membership functions over the controller\u2019s input and output ranges. This paper presents two different approaches to deal with these design issues. A simple and efficient approach, namely Fuzzy Subtractive Clustering, is used to identify the rule base needed to realize Fuzzy PI and PD type controllers. This technique provides a mechanism to obtain the reduced rule set covering the whole input/output space as well as membership functions for each input variable. But it is found that some membership functions projected from different clusters have a high degree of similarity. The number of membership functions of each input variable is then reduced using a similarity measure. In this paper, the fuzzy subtractive clustering approach is shown to reduce 49 rules to 8 rules and the number of membership functions to 4 and 6 for the input variables (error and change in error) while maintaining almost the same level of performance. Simulation on a wide range of linear and nonlinear processes is carried out and results are compared with fuzzy PI and PD type controllers without clustering in terms of several performance measures such as peak overshoot, settling time, rise time, integral absolute error (IAE) and integral-of-time multiplied absolute error (ITAE), and in each case the proposed scheme shows an identical performance."} {"_id": "690aa390ee29fb7fe2822c5465db65303f2e37ac", "title": "Removal of fixed impulse noise from digital images using Bezier curve based interpolation", "text": "In this paper, we propose a Bezier curve based interpolation technique to eliminate fixed impulse noise from digital images while maintaining the edges of the image. 
To eliminate the noise, we pass the noisy image through two steps: in the first step we identify the pixels affected by the impulse noise, and in the second step, an edge-protecting process is carried out using the Bezier interpolation technique. Promising results were obtained even for images in which more than 80 percent of the pixels are affected. Our proposed algorithm produces better results than existing algorithms."} {"_id": "e0de791e1d6540a25e2e31ebae882c3b628ece0c", "title": "Artificial Intelligence for Space Applications", "text": "The ambitious short-term and long-term goals set down by the various national space agencies call for radical advances in several of the main space engineering areas, the design of intelligent space agents certainly being one of them. In recent years, this has led to an increasing interest in artificial intelligence by the entire aerospace community. However, in the current state of the art, several open issues and showstoppers can be identified. In this chapter, we review applications of artificial intelligence in the field of space engineering and space technology and identify open research questions and challenges. In particular, the following topics are identified and discussed: distributed artificial intelligence, enhanced situation self-awareness, and decision support for spacecraft system design."} {"_id": "f5a15079faaa34fb0b9775c17fcbcc0f10245725", "title": "Knowledge, Strategy, and the Theory of the Firm", "text": "This paper argues that firms have particular institutional capabilities that allow them to protect knowledge from expropriation and imitation more effectively than market contracting. I argue that it is these generalized institutional capabilities that allow firms to generate and protect the unique resources and capabilities that are central to the strategic theory of the firm."} {"_id": "bae5575284776cabed101750ac41848c700af431", "title": "The social structure of free and open source software development", "text": "Metaphors, such as the Cathedral and Bazaar, used to describe the organization of FLOSS projects typically place them in sharp contrast to proprietary development by emphasizing FLOSS\u2019s distinctive social and communications structures. But what do we really know about the communication patterns of FLOSS projects? How generalizable are the projects that have been studied? Is there consistency across FLOSS projects? Questioning the assumption of distinctiveness is important because practitioner-advocates from within the FLOSS community rely on features of social structure to describe and account for some of the advantages of FLOSS production. To address this question, we examined 120 project teams from SourceForge, representing a wide range of FLOSS project types, for their communications centralization as revealed in the interactions in the bug tracking system. 
We found that FLOSS development teams vary widely in their communications centralization, from projects completely centered on one developer to projects that are highly decentralized and exhibit a distributed pattern of conversation between developers and active users. We suggest, therefore, that it is wrong to assume that FLOSS projects are distinguished by a particular social structure merely because they are FLOSS. Our findings suggest that FLOSS projects might have to work hard to achieve the expected development advantages which have been assumed to flow from \u201cgoing open.\u201d In addition, the variation in communications structure across projects means that communications centralization is useful for comparisons between FLOSS teams. We"} {"_id": "231878207a8641e605dc255f2c557fa4e8bb99bf", "title": "The Cathedral and the Bazaar", "text": "Permission is granted to copy, distribute and/or modify this document under the terms of the Open Publication License, version 2.0. $Date: 2002/08/02 09:02:14 $ Revision History Revision 1.57 11 September 2000 esr New major section \u201cHow Many Eyeballs Tame Complexity\u201d. Revision 1.52 28 August 2000 esr MATLAB is a reinforcing parallel to Emacs. Corbat\u00f3 & Vyssotsky got it in 1965. Revision 1.51 24 August 2000 esr First DocBook version. Minor updates to Fall 2000 on the time-sensitive material. Revision 1.49 5 May 2000 esr Added the HBS note on deadlines and scheduling. Revision 1.51 31 August 1999 esr This is the version that O\u2019Reilly printed in the first edition of the book. Revision 1.45 8 August 1999 esr Added the endnotes on the Snafu Principle, (pre)historical examples of bazaar development, and originality in the bazaar. Revision 1.44 29 July 1999 esr Added the \u201cOn Management and the Maginot Line\u201d section, some insights about the usefulness of bazaars for exploring design space, and substantially improved the Epilog. Revision 1.40 20 Nov 1998 esr Added a correction of Brooks based on the Halloween Documents. Revision 1.39 28 July 1998 esr I removed Paul Eggert\u2019s \u2019graph on GPL vs. bazaar in response to cogent arguments from RMS on Revision 1.31 February 1"} {"_id": "2469fd136aaf16c49bbe6814d6153da1dc6c7c23", "title": "Social translucence: an approach to designing systems that support social processes", "text": "We are interested in designing systems that support communication and collaboration among large groups of people over computing networks. We begin by asking what properties of the physical world support graceful human-human communication in face-to-face situations, and argue that it is possible to design digital systems that support coherent behavior by making participants and their activities visible to one another. We call such systems \u201csocially translucent systems\u201d and suggest that they have three characteristics\u2014visibility, awareness, and accountability\u2014which enable people to draw upon their experience and expertise to structure their interactions with one another. To motivate and focus our ideas we develop a vision of knowledge communities, conversationally based systems that support the creation, management and reuse of knowledge in a social context. We describe our experience in designing and deploying one layer of functionality for knowledge communities, embodied in a working system called \u201cBabble\u201d and discuss research issues raised by a socially translucent approach to design."} {"_id": "2fc0516f700b490b7e13db0f0d73d05afa5e346c", "title": "Cave or Community? 
An Empirical Examination of 100 Mature Open Source Projects", "text": "Starting with Eric Raymond\u2019s groundbreaking work, The Cathedral and the Bazaar, open-source software (OSS) has commonly been regarded as work produced by a community of developers. Yet, given the nature of software programs, one also hears of developers with no lives that work very hard to achieve great product results. In this paper, I sought empirical evidence that would help us understand which is more common: the cave (i.e., lone producer) or the community. Based on a study of the top 100 mature products on Sourceforge, I find a few surprising things. First, most OSS programs are developed by individuals, rather than communities. The median number of developers in the 100 projects I looked at was 4 and the mode was 1, numbers much lower than previous ones reported for highly successful projects! Second, most OSS programs do not generate a lot of discussion. Third, products with more developers tend to be viewed and downloaded more often. Fourth, the number of developers associated with a project is unrelated to the age of the project. Fifth, the larger the project, the smaller the percent of project administrators."} {"_id": "4282abe7e08bcfb2d282c063428fb187b2802e9c", "title": "Case Reports of Adipose-derived Stem Cell Therapy for Nasal Skin Necrosis after Filler Injection", "text": "With the gradual increase of cases using fillers, cases of patients treated by non-medical professionals or inexperienced physicians resulting in complications are also increasing. We herein report 2 patients who experienced acute complications after receiving filler injections and were successfully treated with adipose-derived stem cell (ADSC) therapy. Case 1 was a 23-year-old female patient who received a filler (Restylane) injection in her forehead, glabella, and nose by a non-medical professional. The day after her injection, inflammation was observed with a 3\u00d73 cm skin necrosis. Case 2 was a 30-year-old woman who received a filler injection of hyaluronic acid gel (Juvederm) on her nasal dorsum and tip at a private clinic. She developed erythema and swelling in the filler-injected area. A solution containing ADSCs harvested from each patient's abdominal subcutaneous tissue was injected into the lesion at the subcutaneous and dermis levels. The wounds healed without additional treatment. With continuous follow-up, both patients experienced only fine linear scars 6 months postoperatively. By using adipose-derived stem cells, we successfully treated the acute complications of skin necrosis after the filler injection, resulting in much less scarring, and more satisfactory results were achieved not only in wound healing, but also in esthetics."} {"_id": "c160ae4b1eed860e96250df2d7ecd86a0120c0a2", "title": "Peer support services for individuals with serious mental illnesses: assessing the evidence.", "text": "OBJECTIVE\nThis review assessed the level of evidence and effectiveness of peer support services delivered by individuals in recovery to those with serious mental illnesses or co-occurring mental and substance use disorders.\n\n\nMETHODS\nAuthors searched PubMed, PsycINFO, Applied Social Sciences Index and Abstracts, Sociological Abstracts, Social Services Abstracts, Published International Literature on Traumatic Stress, the Educational Resources Information Center, and the Cumulative Index to Nursing and Allied Health Literature for outcome studies of peer support services from 1995 through 2012. 
They found 20 studies across three service types: peers added to traditional services, peers in existing clinical roles, and peers delivering structured curricula. Authors judged the methodological quality of the studies using three levels of evidence (high, moderate, and low). They also described the evidence of service effectiveness.\n\n\nRESULTS\nThe level of evidence for each type of peer support service was moderate. Many studies had methodological shortcomings, and outcome measures varied. The effectiveness varied by service type. Across the range of methodological rigor, a majority of studies of two service types--peers added and peers delivering curricula--showed some improvement favoring peers. Compared with professional staff, peers were better able to reduce inpatient use and improve a range of recovery outcomes, although one study found a negative impact. Effectiveness of peers in existing clinical roles was mixed.\n\n\nCONCLUSIONS\nPeer support services have demonstrated many notable outcomes. However, studies that better differentiate the contributions of the peer role and are conducted with greater specificity, consistency, and rigor would strengthen the evidence."} {"_id": "33224ad0cdf6e2dc4893194dd587309c7887f0ba", "title": "Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990\u20132010)", "text": "The simple, but general formal theory of fun and intrinsic motivation and creativity (1990-2010) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression. It generalizes the traditional field of active learning, and is related to old, but less formal ideas in aesthetics theory and developmental psychology. It has been argued that the theory explains many essential aspects of intelligence including autonomous development, science, art, music, and humor. This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities. Emphasis is put on the importance of limited computational resources for online prediction and compression. Discrete and continuous time formulations are given. Previous practical, but nonoptimal implementations (1991, 1995, and 1997-2002) are reviewed, as well as several recent variants by others (2005-2010). A simplified typology addresses current confusion concerning the precise nature of intrinsic motivation."} {"_id": "e88929f3b08e8c64f81aa9475bfab5429b7cb051", "title": "Deep neural network for manufacturing quality prediction", "text": "Expected product quality is affected by multiple parameters in complex manufacturing processes. Product quality prediction can offer the possibility of designing better system parameters at the early production stage. Many existing approaches fail to provide favorable results due to the shallow architectures of their prediction models, which cannot sufficiently learn the features of multiple parameters. To address this issue, a deep neural network (DNN), consisting of a deep belief network (DBN) at the bottom and a regression layer on the top, is proposed in this paper. The DBN uses a greedy algorithm for unsupervised feature learning. 
It can learn effective features for manufacturing quality prediction in an unsupervised manner, which has been proven effective in many fields. The learned features are then fed into the regression layer, and the quality predictions are obtained. One type of manufacturing system with multiple parameters is investigated using the proposed DNN model. The experiments show that the DNN benefits from its deep architecture and outperforms peer shallow models. This study suggests that deep learning is a promising technique for manufacturing quality prediction."} {"_id": "946dabbc13f06070f7618cd4ca6733a95b4b03c3", "title": "A linguistic ontology of space for natural language processing", "text": "We present a detailed semantics for linguistic spatial expressions supportive of computational processing that draws substantially on the principles and tools of ontological engineering and formal ontology. We cover language concerned with space, actions in space and spatial relationships and develop an ontological organization that relates such expressions to general classes of fixed semantic import. The result is given as an extension of a linguistic ontology, the Generalized Upper Model, an organization which has been used for over a decade in natural language processing applications. We describe the general nature and features of this ontology and show how we have extended it for working particularly with space. Treating the semantics of natural language expressions concerning space in this way offers a substantial simplification of the general problem of relating natural spatial language to its contextualized interpretation. Example specifications based on natural language examples are presented, as well as an evaluation of the ontology's coverage, consistency, predictive power, and applicability."} {"_id": "d95a8c90256a4f3008e3b1b8d089e5fbf46eb5e8", "title": "A Survey of Intrusion Detection Systems for Mobile Ad Hoc Networks", "text": "Tactical Mobile Ad-hoc Networks (MANETs) are widely deployed in military organizations that have critical needs for communication even in the absence of fixed infrastructure. The lack of communication infrastructure and dynamic topology makes MANETs vulnerable to a wide range of attacks. Tactical military operations are among the prime users of ad-hoc networks. The military holds higher standards of security requirements and thus requires special intrusion detection applications. Conventional Intrusion Detection Systems (IDS) require central control and monitoring entities and thus cannot be applied to MANETs. Solutions to secure these networks are based on distributed and cooperative security. This paper presents a survey of IDS specifically for MANETs and also highlights the strengths and weaknesses of each model. We present a unique evaluation matrix for measuring the effectiveness of IDS for MANETs in an emergency response scenario."} {"_id": "e3b2990079f630e0821f38714dbc1bfd1f3e9c87", "title": "Enabling agricultural automation to optimize utilization of water, fertilizer and insecticides by implementing Internet of Things (IoT)", "text": "With the proliferation of smart devices, the Internet can be extended into the physical realm of the Internet-of-Things (IoT) by deploying them into a communicating-actuating network. In IoT, sensors and actuators blend seamlessly with the environment and collaborate globally with each other through the internet to accomplish a specific task. 
Wireless Sensor Networks (WSNs) can be integrated into IoT to meet the challenges of seamless communication between any things (e.g., humans or objects). The potentialities of IoT can be brought to the benefit of society by developing novel applications in transportation and logistics, healthcare, agriculture, and smart environments (home, office or plant). This research gives a framework for optimizing resources (water, fertilizers, insecticides and manual labour) in agriculture through the use of IoT. The issues involved in the implementation of applications are also investigated in the paper. This framework is named AgriTech."} {"_id": "56c9c6b4e7bc658e065f80617e4e0278f40d6b26", "title": "A review on stress inducement stimuli for assessing human stress using physiological signals", "text": "Assessing human stress in real time is difficult and challenging. The present review deals with the measurement of stress in a laboratory environment using different stress-inducement stimuli with the help of physiological signals. Previous researchers have used different stress-inducement stimuli such as the Stroop colour word test (CWT), mental arithmetic tests, public speaking tasks, the cold pressor test, and computer games and work tasks to induce stress. Most researchers have analyzed stress using questionnaire-based approaches and physiological signals. Several physiological signals, such as Electrocardiogram (ECG), Electromyogram (EMG), Galvanic Skin Response (GSR), Blood Pressure (BP), Skin Temperature (ST), Blood Volume Pulse (BVP), respiration rate (RIP) and Electroencephalogram (EEG), were investigated to identify stress. Different statistical methods like Analysis of variance (ANOVA), two-way ANOVA, Multivariate analysis of variance (MANOVA), t-tests, paired t-tests and Student's t-tests have been used to describe the correlation between stress-inducement stimuli, subject parameters (age, gender, etc.) and physiological signals. The present work aims to find the most appropriate stress-inducement stimuli, physiological signals and statistical methods to efficiently assess human stress."} {"_id": "9a700c7a7e7468e436f00c34551fbe3e0f70e42f", "title": "Towards Principled Methods for Training Generative Adversarial Networks", "text": "The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to rigorously studying and proving the problems, including instability and saturation, that arise when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them."} {"_id": "0cf61f46f76e24beec8a1fe38fc4ab9dd6ac5abd", "title": "Design Exploration of Hybrid CMOS and Memristor Circuit by New Modified Nodal Analysis", "text": "Design of hybrid circuits and systems based on CMOS and nano-devices requires rethinking fundamental circuit analysis to aid design exploration. Conventional circuit analysis with modified nodal analysis (MNA) cannot consider new nano-devices such as the memristor together with traditional CMOS devices. 
This paper introduces a new MNA method with magnetic flux (\u03a6) as a new state variable. A new SPICE-like circuit simulator is thereby developed for the design of hybrid CMOS and memristor circuits. A number of CMOS and memristor-based designs are explored, such as oscillator, chaotic circuit, programmable logic, analog-learning circuit, and crossbar memory, where their functionality, performance, reliability and power can be efficiently verified by the newly developed simulator. Specifically, one new 3-D-crossbar architecture with a diode-added memristor is also proposed to improve integration density and to avoid sneak paths during read-write operations."} {"_id": "947b9250c2f6c41203468506255c3257d3decef3", "title": "Facial Expression Recognition with Convolutional Neural Networks", "text": "Facial expression recognition systems have attracted much research interest within the field of artificial intelligence. Many established facial expression recognition (FER) systems apply standard machine learning to extracted image features, and these methods generalize poorly to previously unseen data. This project builds upon recent research to classify images of human faces into discrete emotion categories using convolutional neural networks (CNNs). We experimented with different architectures and methods such as fractional max-pooling and fine-tuning, ultimately achieving an accuracy of 0.48 in a seven-class classification task."} {"_id": "5941bf3d86ebb8347dfa0b40af028e7f61339501", "title": "Object Constraint Language (OCL): A Definitive Guide", "text": "The Object Constraint Language (OCL) started as a complement to the UML notation with the goal to overcome the limitations of UML (and in general, any graphical notation) in terms of precisely specifying detailed aspects of a system design. Since then, OCL has become a key component of any model-driven engineering (MDE) technique as the default language for expressing all kinds of (meta)model query, manipulation and specification requirements. Among many other applications, OCL is frequently used to express model transformations (as part of the source and target patterns of transformation rules), well-formedness rules (as part of the definition of new domain-specific languages), or code-generation templates (as a way to express the generation patterns and rules). This chapter aims to provide a comprehensive view of this language, its many applications and available tool support as well as the latest research developments and open challenges around it."} {"_id": "3198e5de8eb9edfd92e5f9c2cb325846e25f22aa", "title": "Topic models in information retrieval", "text": ""} {"_id": "3c8f916264e8d15ba1bc618c6adf395e86dd7b40", "title": "Generating Descriptions with Grounded and Co-referenced People", "text": "Learning how to generate descriptions of images or videos has received major interest both in the Computer Vision and Natural Language Processing communities. While a few works have proposed to learn a grounding during the generation process in an unsupervised way (via an attention mechanism), it remains unclear how good the quality of the grounding is and whether it benefits the description quality. In this work we propose a movie description model which learns to generate descriptions and jointly ground (localize) the mentioned characters as well as do visual co-reference resolution between pairs of consecutive sentences/clips. 
We also propose to use weak localization supervision through character mentions provided in movie descriptions to learn the character grounding. At training time, we first learn how to localize characters by relating their visual appearance to mentions in the descriptions via a semi-supervised approach. We then provide this (noisy) supervision to our description model, which greatly improves its performance. Our proposed description model improves over prior work w.r.t. generated description quality and additionally provides grounding and local co-reference resolution. We evaluate it on the MPII Movie Description dataset using automatic and human evaluation measures and using our newly collected grounding and co-reference data for characters."} {"_id": "96dc6e4960eaa41b0089a0315fe598afacc52470", "title": "Water content of latent fingerprints - Dispelling the myth.", "text": "Changing procedures in the handling of rare and precious documents in museums and elsewhere, based on assumptions about constituents of latent fingerprints, have led the author to an examination of available data. These changes appear to have been triggered by one paper using general biological data regarding eccrine sweat production to infer that deposited fingerprints are mostly water. Searching the fingerprint literature has revealed a number of reference works similarly quoting figures for average water content of deposited fingerprints of 98% or more. Whilst accurate estimation is difficult, there is no evidence that the residue on fingers could be anything like 98% water, even if there were no contamination from sebaceous glands. Consideration of published analytical data of real fingerprints, and several theoretical considerations regarding evaporation and replenishment rates, indicates a probable initial average water content of a fingerprint, soon after deposition, of 20% or less."} {"_id": "9f22a67fa2d6272cec590d4e8ec2ba75cf41df9a", "title": "A multiscale measure for mixing", "text": "We present a multiscale measure for mixing that is based on the concept of weak convergence and averages the \u201cmixedness\u201d of an advected scalar field at various scales. This new measure, referred to as the Mix-Norm, resolves the inability of the L2 variance of the scalar density field to capture small-scale variations when advected by chaotic maps or flows. In addition, the Mix-Norm succeeds in capturing the efficiency of a mixing protocol in the context of a particular initial scalar field, wherein Lyapunov-exponent based measures fail to do so. We relate the Mix-Norm to the classical ergodic theoretic notion of mixing and present its formulation in terms of the power spectrum of the scalar field. We demonstrate the utility of the Mix-Norm by showing how it measures the efficiency of mixing due to various discrete dynamical systems and to diffusion. In particular, we show that the Mix-Norm can capture known exponential and algebraic mixing properties of certain maps. We also analyze numerically the behaviour of scalar fields evolved by the Standard Map using the Mix-Norm."} {"_id": "943d17f36d320ad9fcc3ae82c78914c0111cef1d", "title": "Artificial cooperative search algorithm for numerical optimization problems", "text": "In this paper, a new two-population based global search algorithm, the Artificial Cooperative Search Algorithm (ACS), is introduced. The ACS algorithm has been developed to be used in solving real-valued numerical optimization problems. 
For purposes of examining the success of the ACS algorithm in solving numerical optimization problems, 91 benchmark problems with different specifications were used in the detailed tests. The success of the ACS algorithm on these benchmark problems was compared to that obtained by the PSO, SADE, CLPSO, BBO, CMA-ES, CK and DSA algorithms using the Wilcoxon Signed-Rank statistical test with Bonferroni-Holm correction. The results obtained in the statistical analysis demonstrate that the success achieved by the ACS algorithm in solving numerical optimization problems is better in comparison to the other computational intelligence algorithms used in this paper."} {"_id": "a9166e3223daed5655f4f57e911a8e5f91c6ec37", "title": "Abstraction Refinement for Probabilistic Software", "text": "We present a methodology and implementation for verifying ANSI-C programs that exhibit probabilistic behaviour, such as failures or randomisation. We use abstraction-refinement techniques that represent probabilistic programs as Markov decision processes and their abstractions as stochastic two-player games. Our techniques target quantitative properties of software such as \u201cthe maximum probability of file-transfer failure\u201d or \u201cthe minimum expected number of loop iterations\u201d and the abstractions we construct yield lower and upper bounds on these properties, which then guide the refinement process. We build upon state-of-the-art techniques and tools, using SAT-based predicate abstraction, symbolic implementations of probabilistic model checking and components from GOTO-CC, SATABS and PRISM. Experimental results show that our approach performs very well in practice, successfully verifying actual networking software whose complexity is significantly beyond the scope of existing probabilistic verification tools."} {"_id": "a70bbc4c6c3ac0c77526a64bf11073bc8f45bd48", "title": "A study of SSL Proxy attacks on Android and iOS mobile applications", "text": "According to recent articles in popular technology websites, some mobile applications function in an insecure manner when presented with untrusted SSL certificates. These non-browser based applications seem to, in the absence of a standard way of alerting a user of an SSL error, accept any certificate presented to them. This paper intends to research these claims and show whether or not an invisible proxy based SSL attack can indeed steal a user's credentials from mobile applications, and which types of applications are most likely to be vulnerable to this attack vector. To ensure coverage of the most popular platforms, applications on both Android 4.2 and iOS 6 are tested. The results of our study showed that stealing credentials is indeed possible using invisible proxy man-in-the-middle attacks."} {"_id": "a1221b0fd74212382c7387e6f6fd957918576dda", "title": "Transient effects in application of PWM inverters to induction motors", "text": "Standard squirrel cage induction motors are subjected to nonsinusoidal wave shapes when supplied from adjustable frequency inverters. 
In addition to causing increased heating, these wave patterns can be destructive to the insulation. Pulse width modulated (PWM) inverter output amplitudes and risetimes are investigated, and motor insulation capabilities are discussed. Voltage reflections are simulated for various cable lengths and risetimes and are presented graphically. Simulations confirm potential problems with long cables and short risetimes. Application precautions are also discussed."} {"_id": "bdf434f475654ee0a99fe11fd63405b038244f69", "title": "Achieving Fairness through Adversarial Learning: an Application to Recidivism Prediction", "text": "Recidivism prediction scores are used across the USA to determine sentencing and supervision for hundreds of thousands of inmates. One such generator of recidivism prediction scores is Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) score, used in states like California and Florida, which past research has shown to be biased against black inmates according to certain measures of fairness. To counteract this racial bias, we present an adversarially-trained neural network that predicts recidivism and is trained to remove racial bias. When comparing the results of our model to COMPAS, we gain predictive accuracy and get closer to achieving two out of three measures of fairness: parity and equality of odds. Our model can be generalized to any prediction and demographic. This piece of research contributes an example of scientific replication and simplification in a high-stakes real-world application like recidivism prediction."} {"_id": "eb7cf446e983e98c1400c8181949f038caf0c8a8", "title": "Perpetual development: A model of the Linux kernel life cycle", "text": "Software evolution is widely recognized as an important and common phenomenon, whereby the system follows an ever-extending development trajectory with intermittent releases. Nevertheless there have been only a few lifecycle models that attempt to portray such evolution. We use the evolution of the Linux kernel as the basis for the formulation of such a model, integrating the progress in time with growth of the codebase, and differentiating between development of new functionality and maintenance of production versions. A unique element of the model is the sequence of activities involved in releasing new production versions, and how this has changed with the growth of Linux. In particular, the release follow-up phase before the forking of a new development version, which was prominent in early releases of production versions, has been eliminated in favor of a concurrent merge window in the release of 2.6.x versions. We also show that a piecewise linear model with increasing slopes provides the best description of the growth of Linux. The perpetual development model is used as a framework in which commonly recognized benefits of incremental and evolutionary development may be demonstrated, and to comment on issues such as architecture, conservation of familiarity, and failed projects. We suggest that this model and variants thereof may apply to many other projects in addition to Linux."} {"_id": "0672cf615e621624cb4820ea1b4c8c6997d6093b", "title": "Robust Monte Carlo localization for mobile robots", "text": "Mobile robot localization is the problem of determining a robot's pose from sensor data. Monte Carlo Localization is a family of algorithms for localization based on particle filters, which are approximate Bayes filters that use random samples for posterior estimation. 
Recently, they have been applied with great success for robot localization. Unfortunately, regular particle filters perform poorly in certain situations. Mixture-MCL, the algorithm described here, overcomes these problems by using a "dual" sampler, integrating two complementary ways of generating samples in the estimation. To apply this algorithm for mobile robot localization, a kd-tree is learned from data that permits fast dual sampling. Systematic empirical results obtained using data collected in crowded public places illustrate superior performance, robustness, and efficiency, when compared to other state-of-the-art localization algorithms."} {"_id": "09f2af091f6bf5dfe25700c5a8c82f220fac5631", "title": "Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories", "text": "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting "spatial pyramid" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba\u2019s "gist" and Lowe\u2019s SIFT descriptors."} {"_id": "22e594d4606c94192743ca6f51ebf15b32a3b72a", "title": "What, where and who? Classifying events by scene and object recognition", "text": "We propose a first attempt to classify events in static images by integrating scene and object categorizations. We define an event in a static image as a human activity taking place in a specific environment. In this paper, we use a number of sport games such as snow boarding, rock climbing or badminton to demonstrate event classification. Our goal is to classify the event in the image as well as to provide a number of semantic labels to the objects and scene environment within the image. For example, given a rowing scene, our algorithm recognizes the event as rowing by classifying the environment as a lake and recognizing the critical objects in the image as athletes, rowing boat, water, etc. We achieve this integrative and holistic recognition through a generative graphical model. We have assembled a highly challenging database of 8 widely varied sport events. We show that our system is capable of classifying these event classes at 73.4% accuracy. While each component of the model contributes to the final recognition, using scene or objects alone cannot achieve this performance."} {"_id": "33fad977a6b317cfd6ecd43d978687e0df8a7338", "title": "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns", "text": "This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. 
The method is based on recognizing that certain local binary patterns, termed \u201cuniform,\u201d are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the \u201cuniform\u201d patterns for any quantization of the angular space and for any spatial resolution, and present a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Excellent experimental results obtained in true problems of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns. These operators characterize the spatial configuration of local image texture and the performance can be further improved by combining them with rotation invariant variance measures that characterize the contrast of local image texture. The joint distributions of these orthogonal measures are shown to be very powerful tools for rotation invariant texture analysis."} {"_id": "8ade5d29ae9eac7b0980bc6bc1b873d0dd12a486", "title": "Robust Real-Time Face Detection", "text": ""} {"_id": "83f3dc086cb5cafd5b4fbce6062c679b4373e845", "title": "Emotion recognition system using brain and peripheral signals: Using correlation dimension to improve the results of EEG", "text": "This paper proposes a multimodal fusion between brain and peripheral signals for emotion detection. The input signals were electroencephalogram, galvanic skin resistance, temperature, blood pressure and respiration, which can reflect the influence of emotion on the central nervous system and autonomic nervous system respectively. The acquisition protocol is based on a subset of pictures which correspond to three specific areas of valence-arousal emotional space (positively excited, negatively excited, and calm). Features were extracted from the input signals, and to improve the results, the correlation dimension, a strong nonlinear feature, is used for the brain signals. The performance of the Quadratic Discriminant Classifier has been evaluated on different feature sets: peripheral signals, EEGs, and both. In comparison among the results of the different feature sets, EEG signals seem to perform better than the other physiological signals, and the results confirm the interest of using brain signals as well as peripherals in emotion assessment. According to the improvement in the EEG results compared in each row of the table, it seems that nonlinear features would lead to a better understanding of how emotional activities work."} {"_id": "1b7189be0287321814c86405181e6951f815e802", "title": "A Geospatial Decision Support System for Drought Risk Management", "text": "We are developing an advanced Geospatial Decision Support System (GDSS) to improve the quality and accessibility of drought-related data for drought risk management. 
This is part of a Digital Government project aimed at developing and integrating new information technologies for improved government services in the USDA Risk Management Agency (RMA) and the Natural Resources Conservation Service (NRCS). Our overall goal is to substantially improve RMA's delivery of risk management services in the near-term and provide a foundation and directions for the future. We integrate spatio-temporal knowledge discovery techniques into our GDSS using a combination of data mining techniques applied to rich, geospatial, time-series data. Our data mining objectives are to: 1) find relationships between user-specified target episodes and other climatic events and 2) predict the target episodes. Understanding relationships between changes in soil moisture regimes and global climatic events such as El Ni\u00f1o could provide a reasonable drought mitigation strategy for farmers to adjust planting dates, hybrid selection, plant populations, tillage practices or crop rotations. This work highlights the innovative data mining approaches integral to our project's success and provides preliminary results that indicate our system\u2019s potential to substantially improve RMA's delivery of drought risk management services."} {"_id": "beaed5ff6cae9f8311c2158ba2badb7581710ca7", "title": "A Secure Microservice Framework for IoT", "text": "The Internet of Things (IoT) has connected an incredible diversity of devices in novel ways, which has enabled exciting new services and opportunities. Unfortunately, IoT systems also present several important challenges to developers. This paper proposes a vision for how we may build IoT systems in the future by reconceiving IoT's fundamental unit of construction not as a "thing", but rather as a widely and finely distributed "microservice" already familiar to web service engineering circles. Since IoT systems are quite different from more established uses of microservice architectures, success of the approach depends on adaptations that enable them to meet the key challenges that IoT systems present. We argue that a microservice approach to building IoT systems can combine in a mutually reinforcing way with patterns for microservices, API gateways, distribution of services, uniform service discovery, containers, and access control. The approach is illustrated using two case studies of IoT systems in personal health management and connected autonomous vehicles. Our hope is that the vision of a microservices approach will help focus research that can fill in current gaps preventing more effective, interoperable, and secure IoT services and solutions in a wide variety of contexts."} {"_id": "c1e0763f6ce8b8e3464a3bfc16ce3c94f530197b", "title": "Doubly Fed Induction Generator Systems For Variable Speed Wind Turbine", "text": "This paper presents results of a study concerning the dynamic behaviour of a wind energy system powered by a doubly fed induction generator with the rotor connected to the electric network through an AC-AC converter. A tendency to put up more and more wind turbines can be observed all over the world. There is also an awareness of the requirements of a clean environment, due to the need to preserve our habitat. Renewable energy sources not contributing to the enhanced greenhouse effect, especially wind power, are becoming an important component of the total generation. 
Hence, research concerning the dynamic behaviour of wind energy systems is important to achieve a better understanding of them."} {"_id": "2c0a634a71ade1bb8458db124dc1cc9f7e452627", "title": "Local Monotonic Attention Mechanism for End-to-End Speech And Language Processing", "text": "Recently, encoder-decoder neural networks have shown impressive performance on many sequence-related tasks. The architecture commonly uses an attentional mechanism which allows the model to learn alignments between the source and the target sequence. Most attentional mechanisms used today are based on a global attention property, which requires computing a weighted summarization of the whole input sequence generated by the encoder states. However, it is computationally expensive and often produces misalignment on longer input sequences. Furthermore, it does not fit the monotonic or left-to-right nature of several tasks, such as automatic speech recognition (ASR), grapheme-to-phoneme conversion (G2P), etc. In this paper, we propose a novel attention mechanism that has local and monotonic properties. Various ways to control those properties are also explored. Experimental results on ASR, G2P and machine translation between two languages with similar sentence structures demonstrate that the proposed encoder-decoder model with local monotonic attention achieves significant performance improvements and reduces the computational complexity in comparison with the one that used the standard global attention architecture."} {"_id": "f77c9bf5beec7c975584e8087aae8d679664a1eb", "title": "Local Deep Neural Networks for Age and Gender Classification", "text": "Local deep neural networks have been recently introduced for gender recognition. Although they achieve very good performance, they are very computationally expensive to train. In this work, we introduce a simplified version of local deep neural networks which significantly reduces the training time. Instead of using hundreds of patches per image, as suggested by the original method, we propose to use 9 overlapping patches per image which cover the entire face region. This results in a much reduced training time, since just 9 patches are extracted per image instead of hundreds, at the expense of a slightly reduced performance. We tested the proposed modified local deep neural networks approach on the LFW and Adience databases for the task of gender and age classification. For both tasks and both databases the performance is up to 1% lower compared to the original version of the algorithm. We have also investigated which patches are more discriminative for age and gender classification. It turns out that the mouth and eyes regions are useful for age classification, whereas just the eye region is useful for gender classification."} {"_id": "4f564267ba850f58a310b74b228b0f59409f2202", "title": "Neuroprotective effect of Bacopa monnieri on beta-amyloid-induced cell death in primary cortical culture.", "text": "AIM OF THE STUDY\nBacopa monnieri (Brahmi) is extensively used in traditional Indian medicine as a nerve tonic and thought to improve memory. To examine the neuroprotective effects of Brahmi extract, we tested its protection against the beta-amyloid protein (25-35) and glutamate-induced neurotoxicity in primary cortical cultured neurons.\n\n\nMATERIALS AND METHODS\nNeuroprotective effects were determined by measuring neuronal cell viability following beta-amyloid and glutamate treatment with and without Brahmi extract. 
Mechanisms of neuroprotection were evaluated by monitoring cellular oxidative stress and acetylcholinesterase activity.\n\n\nRESULTS\nOur results demonstrated that Brahmi extract protected neurons from beta-amyloid-induced cell death, but not glutamate-induced excitotoxicity. This neuroprotection was possibly due to its ability to suppress cellular acetylcholinesterase activity but not the inhibition of glutamate-mediated toxicity. In addition, culture medium containing Brahmi extract appeared to promote cell survival compared to neuronal cells growing in regular culture medium. Further study showed that Brahmi-treated neurons expressed a lower level of reactive oxygen species, suggesting that Brahmi restrained intracellular oxidative stress, which in turn prolonged the lifespan of the cultured neurons. Brahmi extract also exhibited both reducing and lipid peroxidation inhibitory activities.\n\n\nCONCLUSIONS\nFrom this study, the neuroprotective effect of Brahmi appeared to result from its antioxidant activity, which suppresses neuronal oxidative stress, and from its acetylcholinesterase inhibitory activity. Therefore, treating patients with Brahmi extract may be an alternative direction for ameliorating neurodegenerative disorders associated with overwhelming oxidative stress, as well as Alzheimer's disease."} {"_id": "15f6a7e3359f470f0a0f6d5b9144f70ef130ca56", "title": "On the control of planar cable-driven parallel robot via classic controllers and tuning with intelligent algorithms", "text": "This paper presents different classical control approaches for planar cable-driven parallel robots. For the proposed robot, PD and PID controllers are designed based on the pole placement method. In order to optimize and tune the controller parameters of the planar cable-driven parallel robot, Differential Evolution, Particle Swarm Optimization and Genetic algorithms are applied as optimization techniques. The simulation results reveal that the outputs of the controllers tuned with Particle Swarm Optimization and Differential Evolution are more similar to each other than to those of the Genetic algorithm, and the processing time of Differential Evolution is less than that of the Genetic algorithm and Particle Swarm Optimization. Moreover, the performance of the Particle Swarm Optimization and Differential Evolution algorithms is better than that of the Genetic algorithm for tuning the controller parameters."} {"_id": "12a97799334e3a455e278f2a995a93a6e0c034bf", "title": "Accurate Linear-Time Chinese Word Segmentation via Embedding Matching", "text": "This paper proposes an embedding matching approach to Chinese word segmentation, which generalizes the traditional sequence labeling framework and takes advantage of distributed representations. The training and prediction algorithms have linear-time complexity. Based on the proposed model, a greedy segmenter is developed and evaluated on benchmark corpora. 
Experiments show that our greedy segmenter achieves improved results over previous neural network-based word segmenters, and its performance is competitive with state-of-the-art methods, despite its simple feature set and the absence of external resources for training."} {"_id": "5c1ab86362dec6f9892a5e4055a256fa5c1772af", "title": "Handover Performance in 3GPP Long Term Evolution (LTE) Systems", "text": "The specification of the long term evolution (LTE) of 3G systems is currently ongoing in 3GPP with a target date for a ready specification at the end of 2007. The evolved radio access network (RAN) involves a new radio interface based on OFDM technology and a radically different RAN architecture, where radio functionality is distributed into the base stations. The distributed nature of the RAN architecture calls for new radio control algorithms and procedures that operate in a distributed manner, including a distributed handover scheme as well. The most important aspects of the handover procedure in LTE have already been settled in 3GPP except for a few details. In this paper we give an overview of the LTE intra-access handover procedure and evaluate its performance, focusing on its user-perceived performance aspects. We investigate the necessity of packet forwarding from a TCP throughput point of view; we analyse the problem of out-of-order packet delivery during handover and propose a simple solution for it. Finally, we investigate the impact of HARQ/ARQ state discard at handover on the radio efficiency. The results show that neither the user perceived performance nor the radio efficiency are compromised by the relocation based handover procedure of LTE."} {"_id": "51039a6ff686d74a4dec88b3b0e4b85f07222912", "title": "A bandwidth-tunable bioamplifier with voltage-controlled symmetric pseudo-resistors", "text": "This paper describes a bioamplifier that employs a voltage-controlled pseudo-resistor to achieve tunable bandwidth and a wide operating voltage range for biomedical applications. The versatile pseudo-resistor employed provides ultra-high resistance for ac coupling to cancel the dc offset from the electrode\u2013tissue interface. The voltage-controlled pseudo-resistor consists of series-connected PMOS transistors working in the subthreshold region and an auto-tuning circuit that ensures a constant (time-invariant) control voltage for the pseudo-resistor. This bandwidth-tunable bioamplifier is designed in a 0.18-\u03bcm standard CMOS process, achieving a gain of 40.2 dB with 10.35-\u03bcW power consumption. The designed chip was also used to develop a proof-of-concept prototype. An operation bandwidth of 9.5 kHz, input-referred noise of 5.2 \u03bcVrms from 6.3 Hz to 9.5 kHz and 5.54 \u03bcVrms from 250 Hz to 9.5 kHz, and a tunable cutoff frequency from 6.3\u2013600 Hz were demonstrated to prove our design."} {"_id": "5342ed6e2faec80d85aaaccd5ab4db0abcf355ea", "title": "Broadband Planar Fully Integrated 8 \u00d7 8 Butler Matrix Using Coupled-Line Directional Couplers", "text": "An innovative approach that allows for realization of broadband planar fully integrated 8 \u00d7 8 Butler matrices is presented. Coupled-line directional couplers have been utilized, which enable a broad operational band. A novel arrangement of the network has been proposed that allows the creation of an entirely planar design having two metallization layers and no interconnections between these layers. 
Four selected crossovers have been realized as a tandem connection of two 3-dB/90\u00b0 coupled-line directional couplers, which, together with reference lines having appropriate electrical lengths, simultaneously perform crossovers of signal lines and realize all needed broadband constant-value phase shifters. Moreover, two of the needed 3-dB/90\u00b0 directional couplers have been designed as tandem connections of two 8.34-dB directional couplers, acting as 3-dB/90\u00b0 directional couplers having crossed outputs. With such a modification, a fully planar design with no inter-layer connections is possible. The proposed network arrangement has been experimentally tested with the design of an 8 \u00d7 8 Butler matrix operating at the center frequency of 3 GHz. The obtained measurement results fully confirm the validity of the proposed approach."} {"_id": "c19fc9aa2d5d57139ac75869aa12d113f3e288a2", "title": "Design and Development of GPS-GSM Based Tracking System with Google Map Based Monitoring", "text": "GPS is one of the technologies that are used in a huge number of applications today. One of these applications is tracking your vehicle and keeping regular monitoring of it. This tracking system can inform you of the location and route travelled by the vehicle, and that information can be observed from any other remote location. It also includes a web application that provides you the exact location of the target. This system enables us to track the target in any weather conditions. This system uses GPS and GSM technologies. The paper describes the hardware part, which comprises GPS, GSM, an ATmega microcontroller, MAX 232 and a 16x2 LCD, and the software part, which is used for interfacing all the required modules; a web application is also developed on the client side. The main objective is to design a system that can be easily installed and to provide a platform for further enhancement."} {"_id": "342ebf89fb87bd271ae3abc4512129f8db4e258d", "title": "The DRAM Latency PUF: Quickly Evaluating Physical Unclonable Functions by Exploiting the Latency-Reliability Tradeoff in Modern Commodity DRAM Devices", "text": "Physically Unclonable Functions (PUFs) are commonly used in cryptography to identify devices based on the uniqueness of their physical microstructures. DRAM-based PUFs have numerous advantages over PUF designs that exploit alternative substrates: DRAM is a major component of many modern systems, and a DRAM-based PUF can generate many unique identifiers. However, none of the prior DRAM PUF proposals provide implementations suitable for runtime-accessible PUF evaluation on commodity DRAM devices. Prior DRAM PUFs exhibit unacceptably high latencies, especially at low temperatures (e.g., >125.8s on average for a 64KiB memory segment below 55\u00b0C), and they cause high system interference by keeping part of DRAM unavailable during PUF evaluation. In this paper, we introduce the DRAM latency PUF, a new class of fast, reliable DRAM PUFs. The key idea is to reduce DRAM read access latency below the reliable datasheet specifications using software-only system calls. Doing so results in error patterns that reflect the compound effects of manufacturing variations in various DRAM structures (e.g., capacitors, wires, sense amplifiers). Based on a rigorous experimental characterization of 223 modern LPDDR4 DRAM chips, we demonstrate that these error patterns 1) satisfy runtime-accessible PUF requirements, and 2) are quickly generated (i.e., within 88.2ms) irrespective of operating temperature using a real system with no additional hardware modifications. 
We show that, for a constant DRAM capacity overhead of 64KiB, our implementation of the DRAM latency PUF enables an average (minimum, maximum) PUF evaluation time speedup of 152x (109x, 181x) at 70\u00b0C and 1426x (868x, 1783x) at 55\u00b0C when compared to a DRAM retention PUF and achieves greater speedups at even lower temperatures."} {"_id": "b9e4469ef36e3ce41e1052563e8dbce1d4762368", "title": "CrackIT \u2014 An image processing toolbox for crack detection and characterization", "text": "This paper presents a comprehensive set of image processing algorithms for detection and characterization of road pavement surface crack distresses, which is being made available to the research community. The toolbox, in the Matlab environment, includes algorithms to preprocess images, to detect cracks and characterize them into types, based on image processing and pattern recognition techniques, as well as modules devoted to the performance evaluation of crack detection and characterization solutions. A sample database of 84 pavement surface images taken during a traditional road survey is provided with the toolbox, since no pavement image databases are publicly available for crack detection and characterization evaluation purposes. Results achieved applying the proposed toolbox to the sample database are discussed, illustrating the potential of the available algorithms."} {"_id": "adfc2b7dc3eb7bd9811d17fe4abd5d6151e97018", "title": "Local-Learning-Based Feature Selection for High-Dimensional Data Analysis", "text": "This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then learn feature relevance globally within the large margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, without making any assumptions about the underlying data distribution. It is capable of processing many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses of the algorithm's sample complexity suggest that the algorithm has a logarithmic sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm."} {"_id": "7e8064a35914d301eb41d2211e770b1d1ad5f9b8", "title": "Nasal bone length throughout gestation: normal ranges based on 3537 fetal ultrasound measurements.", "text": "OBJECTIVE\nTo establish normal ranges for nasal bone length measurements throughout gestation and to compare measurements in two subsets of patients of different race (African-American vs. Caucasian) to determine whether a different normal range should be used in these populations.\n\n\nMETHOD\nNormal nasal bone length reference ranges were generated using prenatal measurements by a standardized technique in 3537 fetuses.\n\n\nRESULTS\nThe nasal bone lengths were found to correlate positively with advancing gestation (R\u00b2 = 0.77, second-order polynomial).
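The nasal-bone abstract above reports that length grows with gestational age following a second-order polynomial (R^2 = 0.77). A minimal sketch of fitting and scoring such a curve; the data points below are invented placeholders, not the paper's 3537 measurements.

```python
import numpy as np

# Fit the second-order polynomial growth curve the abstract describes.
# Placeholder points only; the paper reports R^2 = 0.77 on its own data.
ga_weeks = np.array([16, 20, 24, 28, 32, 36])        # gestational age
nbl_mm = np.array([3.6, 5.2, 6.9, 8.0, 8.9, 9.5])    # nasal bone length

coeffs = np.polyfit(ga_weeks, nbl_mm, deg=2)          # quadratic fit
model = np.poly1d(coeffs)

pred = model(ga_weeks)
ss_res = np.sum((nbl_mm - pred) ** 2)
ss_tot = np.sum((nbl_mm - nbl_mm.mean()) ** 2)
print("fitted curve:", model)
print("R^2 on placeholder data:", 1 - ss_res / ss_tot)
```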
No statistical difference was found between African-American and Caucasian subjects.\n\n\nCONCLUSION\nThese reference ranges may prove to be useful in prenatal screening and diagnosis of syndromes known to be associated with nasal hypoplasia. Different normal ranges for African-American and Caucasian women are not required."} {"_id": "fe6ade6920b212bd32af011bc6efa94b66107e1f", "title": "A Laminated Waveguide Magic-T With Bandpass Filter Response in Multilayer LTCC", "text": "A laminated waveguide magic-T with embedded Chebyshev filter response is developed in a multilayer low-temperature co-fired ceramic technology by vertically stacked rectangular cavity resonators with highly symmetric coupling structures. The cavities provide even- and odd-symmetric field distributions by simultaneously resonating TE102 and TE201 modes at the center frequency of the magic-T. Thus, the in-phase and out-of-phase responses of the magic-T function are accomplished according to the induced mode. Meanwhile, the filter frequency response is realized by the cascaded cavities with proper coupling strengths according to the filter specification. With the degenerate but orthogonal cavity modes, fewer cavities are required and the circuit size is further reduced. A third-order bandpass magic-T is designed and fabricated to operate at 24 GHz with a 6% fractional bandwidth. The highly symmetric structure of the implemented magic-T provides a low in-band magnitude imbalance ( \u00b10.25 dB) and phase imbalance (0\u00b0-6\u00b0). The sum and difference ports also provide an isolation greater than 30 dB in the operation frequency range. Measured and simulated results are in good agreement, both validating the design concept."} {"_id": "bd428c0e44a0743d10e4c0ec5bfe243e2a64f889", "title": "Demand Side Management in Smart Grids", "text": "Smart grids are considered to be the next generation electric grids. Drivers for this development include, among others, an increased electricity demand, an ageing network infrastructure and an increased share of renewable energy sources. Sustainability and the European 20/20/20 climate and energy targets are also main drivers for this development. Some of the important aspects of smart grids are that they allow two-way communication between the actors in the grid and enable optimisation of electricity use and production and customer participation. Demand side management is associated with smart grids and means adapting the electricity demand to the electricity production and the available electricity in the grid. It is considered that smart grids and demand side management hold potential to facilitate an increased share of renewable energy sources and to reduce the need for the current power reserve, which is based on fossil energy sources. The aim of this study was to compile and analyse some of the work that has been done and is being implemented around the world concerning demand side management in smart grids, with a focus on market rules and business models in relation to Swedish conditions. The study includes a review of selected research and demonstration projects. Success factors and knowledge gaps have been identified through an analysis of the project findings and recommendations. A description of the current Nordic market conditions is also given.
The conclusion regarding the conditions on the Swedish electricity market is that it appears to be relatively well adjusted to demand side management, with the possibility of hourly meter readings and a full deployment of smart meters. The Nord Pool electricity market allows both day-ahead trading and intra-day trading on the elspot and the elbas markets. The review of the projects shows that there is potential for achieving flexible load from private customers. Hourly metering is seen as a prerequisite for demand side management, and active customers are also considered crucial. Visualisation of electricity consumption and current costs is shown to be an important tool for encouraging customers to take an active part in their electricity consumption. Some of the other methods for achieving flexibility are, for example, different types of pricing models, a signal for renewable energy sources, and direct and indirect control of smart appliances and heating systems. The aggregation concept is an example of direct control that has been used and shows potential for achieving flexibility in the grid. The concept means that a so-called aggregator, which could for example be the supplier or the distribution system operator, compiles and controls the electrical load for several customers. A key challenge that has been identified is the issue of standardisation. Exchange of experiences from various research and demonstration projects is also considered desirable to support the development of smart grids and demand response."} {"_id": "5f0157e8a852fc2b1b548342102405aa53c39eb9", "title": "Usability Engineering for Augmented Reality: Employing User-Based Studies to Inform Design", "text": "A major challenge, and thus opportunity, in the field of human-computer interaction and specifically usability engineering (UE) is designing effective user interfaces for emerging technologies that have no established design guidelines or interaction metaphors or introduce completely new ways for users to perceive and interact with technology and the world around them. Clearly, augmented reality (AR) is one such emerging technology. We propose a UE approach that employs user-based studies to inform design by iteratively inserting a series of user-based studies into a traditional usability-engineering life cycle to better inform initial user interface designs. We present an exemplar user-based study conducted to gain insight into how users perceive text in outdoor AR settings and to derive implications for design in outdoor AR. We also describe \"lessons learned\" from our experiences conducting user-based studies as part of the design process."} {"_id": "877d81886b57db2980bd87a0527508232f1f2a24", "title": "Dataless Text Classification: A Topic Modeling Approach with Document Manifold", "text": "Recently, dataless text classification has attracted increasing attention. It trains a classifier using seed words of categories, rather than labeled documents that are expensive to obtain. However, a small set of seed words may provide very limited and noisy supervision information, because many documents contain no seed words or only irrelevant seed words. In this paper, we address these issues using the document manifold, assuming that neighboring documents tend to be assigned to the same category label. Following this idea, we propose a novel Laplacian seed word topic model (LapSWTM). In LapSWTM, we model each document as a mixture of hidden category topics, each of which corresponds to a distinctive category.
Also, we assume that neighboring documents tend to have similar category topic distributions. This is achieved by incorporating a manifold regularizer into the log-likelihood function of the model, and then maximizing this regularized objective. Experimental results show that our LapSWTM significantly outperforms the existing dataless text classification algorithms and is even competitive with supervised algorithms to some extent. More importantly, it performs extremely well when the seed words are scarce."} {"_id": "e8d45eee001ef838c7f8f4eef41fae98de5246f4", "title": "The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants", "text": "The two standardized tests of face recognition that are widely used suffer from serious shortcomings [Duchaine, B. & Weidenfeld, A. (2003). An evaluation of two commonly used tests of unfamiliar face recognition. Neuropsychologia, 41, 713-720; Duchaine, B. & Nakayama, K. (2004). Developmental prosopagnosia and the Benton Facial Recognition Test. Neurology, 62, 1219-1220]. Images in the Warrington Recognition Memory for Faces test include substantial non-facial information, and the simultaneous presentation of faces in the Benton Facial Recognition Test allows feature matching. Here, we present results from a new test, the Cambridge Face Memory Test, which builds on the strengths of the previous tests. In the test, participants are introduced to six target faces, and then they are tested with forced choice items consisting of three faces, one of which is a target. For each target face, three test items contain views identical to those studied in the introduction, five present novel views, and four present novel views with noise. There are a total of 72 items, and 50 controls averaged 58. To determine whether the test requires the special mechanisms used to recognize upright faces, we conducted two experiments. We predicted that controls would perform much more poorly when the face images are inverted, and as predicted, inverted performance was much worse with a mean of 42. Next we assessed whether eight prosopagnosics would perform poorly on the upright version. The prosopagnosic mean was 37, and six prosopagnosics scored outside the normal range. In contrast, the Warrington test and the Benton test failed to classify a majority of the prosopagnosics as impaired. These results indicate that the new test effectively assesses face recognition across a wide range of abilities."} {"_id": "38ba24eb2242e073a48caa5b1d34a04046e09a83", "title": "(s|qu)eries: Visual Regular Expressions for Querying and Exploring Event Sequences", "text": "Many different domains collect event sequence data and rely on finding and analyzing patterns within it to gain meaningful insights. Current systems that support such queries either provide limited expressiveness, hinder exploratory workflows or present interaction and visualization models which do not scale well to large and multi-faceted data sets. In this paper we present (s|qu)eries (pronounced \"Squeries\"), a visual query interface for creating queries on sequences (series) of data, based on regular expressions. (s|qu)eries is a touch-based system that exposes the full expressive power of regular expressions in an approachable way and interleaves query specification with result visualizations. 
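The (s|qu)eries abstract above rests on a simple but powerful idea: once each event type is mapped to a symbol, a sequence query becomes an ordinary regular expression over the encoded session string. A minimal sketch of that encoding step, with illustrative event names and alphabet that are not from the paper:

```python
import re

# Map each event type to one character, join a session into a string,
# then express a sequence pattern as an ordinary regex.
alphabet = {"search": "s", "click": "c", "purchase": "p", "back": "b"}

session = ["search", "click", "back", "search", "click", "purchase"]
encoded = "".join(alphabet[e] for e in session)   # -> "scbscp"

# "a search followed by one or more clicks, eventually a purchase"
query = re.compile(r"s(c+).*?p")
for m in query.finditer(encoded):
    print("match covering events", m.start(), "through", m.end() - 1)
```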
Being able to visually investigate the results of different query-parts supports debugging and encourages iterative query-building as well as exploratory workflows. We validate our design and implementation through a set of informal interviews with data scientists who analyze event sequences on a daily basis."} {"_id": "45f2a3c6faf4ea158121c9a1ea336400d0776dc5", "title": "SnipSuggest: Context-Aware Autocompletion for SQL", "text": "In this paper, we present SnipSuggest, a system that provides on-the-go, context-aware assistance in the SQL composition process. SnipSuggest aims to help the increasing population of non-expert database users, who need to perform complex analysis on their large-scale datasets, but have difficulty writing SQL queries. As a user types a query, SnipSuggest recommends possible additions to various clauses in the query using relevant snippets collected from a log of past queries. SnipSuggest\u2019s current capabilities include suggesting tables, views, and table-valued functions in the FROM clause, columns in the SELECT clause, predicates in the WHERE clause, columns in the GROUP BY clause, aggregates, and some support for sub-queries. SnipSuggest adjusts its recommendations according to the context: as the user writes more of the query, it is able to provide more accurate suggestions. We evaluate SnipSuggest over two query logs: one from an undergraduate database class and another from the Sloan Digital Sky Survey database. We show that SnipSuggest is able to recommend useful snippets with up to 93.7% average precision, at interactive speed. We also show that SnipSuggest outperforms na\u00efve approaches, such as recommending popular snippets."} {"_id": "4724a2274b957939e960e1ea4cdfb5a319ff3d63", "title": "Profiler: integrated statistical analysis and visualization for data quality assessment", "text": "Data quality issues such as missing, erroneous, extreme and duplicate values undermine analysis and are time-consuming to find and fix. Automated methods can help identify anomalies, but determining what constitutes an error is context-dependent and so requires human judgment. While visualization tools can facilitate this process, analysts must often manually construct the necessary views, requiring significant expertise. We present Profiler, a visual analysis tool for assessing quality issues in tabular data. Profiler applies data mining methods to automatically flag problematic data and suggests coordinated summary visualizations for assessing the data in context. The system contributes novel methods for integrated statistical and visual analysis, automatic view suggestion, and scalable visual summaries that support real-time interaction with millions of data points. We present Profiler's architecture --- including modular components for custom data types, anomaly detection routines and summary visualizations --- and describe its application to motion picture, natural disaster and water quality data sets."} {"_id": "09d9e89983b07c589e35196f4a0c161987042670", "title": "Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship", "text": "Spoken dialogue systems have received increased interest because they are potentially much more natural and powerful methods of communicating with machines than are current graphics-based interfaces. Wired for Speech presents basic research in the psychological and sociological aspects of voice synthesis and recognition.
Its major lesson is that people attribute human characteristics to spoken dialogue systems for reasons related to human evolution. But although it contains interesting basic research, the book is mainly aimed at giving technological or marketing advice to those seeking to use voice interfaces when creating commercial applications. The book is oriented around a series of simple experiments designed to show just how pervasive psychological and social influences can be on the opinions and behaviors of people confronted with voice interfaces. Each chapter describes a basic research hypothesis, introduces an experiment to test it, and discusses its implications for designing voice interfaces: gender, personality, accent, ethnicity, emotion, number of distinct voices, use of \"I\" by the system, voices in concert with faces, mixed synthetic and recorded voices, context, and the effects of errors in human\u2013computer cooperation. Although Wired for Speech is very accessible, especially to the non-scientist, it is written with an unusual bibliography style for an academic book: All references and details are given in a notes section at the end of the book, making up one third of the content. This narrative exposition style will probably not satisfy either type of reader: Scientists will be frustrated at the imprecision in argumentation, lack of detail in the book itself, and continually having to refer to the notes. The lack of detail also prevents the book from serving as a reference work. Meanwhile, those needing advice when implementing voice interfaces will be puzzled at references to Chomsky, Grice, and Zemlin. Thus the book seems to suffer from not knowing its audience well, which is odd as this is precisely the lesson that the book tries to impart. Another complaint from the scientific point of view is the obsession this particular book has with casting every experimental result as advice for the business and marketing side of voice interfaces, typically concentrating on Web-based e-marketing examples such as buying books online. Most of the experiments have sample sizes of between 40 and 50, but the authors seem ready to invite multi-billion-dollar businesses to immediately base their deployed systems on these results. Finally, and fundamentally, this \u2026"} {"_id": "1a41157538ac14718d9ec74f52050891330e6639", "title": "Characterizing the usability of interactive applications through query log analysis", "text": "People routinely rely on Internet search engines to support their use of interactive systems: they issue queries to learn how to accomplish tasks, troubleshoot problems, and otherwise educate themselves on products. Given this common behavior, we argue that search query logs can usefully augment traditional usability methods by revealing the primary tasks and needs of a product's user population. We term this use of search query logs CUTS - characterizing usability through search. In this paper, we introduce CUTS and describe an automated process for harvesting, ordering, labeling, filtering, and grouping search queries related to a given product. Importantly, this data set can be assembled in minutes, is timely, has a high degree of ecological validity, and is arguably less prone to self-selection bias than data gathered via traditional usability methods.
We demonstrate the utility of this approach by applying it to a number of popular software and hardware systems."} {"_id": "6be9447feb1bf824d6a1c827e89e6db3f59566de", "title": "Multithreaded Sliding Window Approach to Improve Exact Pattern Matching Algorithms", "text": "In this paper, an efficient pattern matching approach, based on a multithreaded sliding window technique, is proposed to improve the efficiency of the common sequential exact pattern matching algorithms including: (i) Brute Force, (ii) Knuth-Morris-Pratt and (iii) Boyer-Moore. The idea is to divide the text under search into blocks; each block is allocated one or two threads running concurrently. Reported experimental results indicated that the proposed approach improves the performance of the well-known pattern matching algorithms, in terms of search time, especially when the searched patterns are located at the middle or at the end of the text. Keywords: pattern matching; multithreading; sliding window; Brute Force; Knuth-Morris-Pratt; Boyer-Moore"} {"_id": "3fb91bbffa86733fc68d4145e7f081353eb3dcd8", "title": "Techniques of EMG signal analysis: detection, processing, classification and applications", "text": "Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications."} {"_id": "6d976d3171e1bf655c0347dc17ac29c55d2afb6c", "title": "A particle swarm optimization algorithm for flexible job shop scheduling problem", "text": "The classical job shop scheduling problem (JSP) is the most popular machine scheduling model in practice and is well known as NP-hard. The formulation of the JSP is based on the assumption that for each part type or job there is only one process plan that prescribes the sequence of operations and the machine on which each operation has to be performed. The flexible job shop scheduling problem (FJSP) is an extension of the JSP, which allows an operation to be processed by any machine from a given set. Since the FJSP requires an additional machine-allocation decision during scheduling, it is a much more complex problem than the JSP. To solve such NP-hard problems, heuristic approaches are commonly preferred over the traditional mathematical techniques. This paper proposes a particle swarm optimization (PSO) based heuristic for solving the FJSP for the minimum makespan criterion. The performance of the proposed PSO is evaluated by comparing its results with the results obtained using ILOG Solver, a constraint-programming tool.
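Several records in this collection build on particle swarm optimization, including the FJSP abstract above (which concludes below). The sketch shows the canonical continuous PSO velocity/position update on a toy objective; the paper's FJSP-specific particle encoding, makespan objective and parameter choices are not reproduced, and the coefficients here are conventional defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Textbook PSO update, not the paper's FJSP-specific variant."""
    x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull toward pbest + social pull toward gbest.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best, val = pso_minimize(lambda p: np.sum(p ** 2))  # toy sphere objective
print("best point:", best, "value:", val)
```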
The comparison of the results proves the effectiveness of the proposed PSO for solving FJSP instances."} {"_id": "605a105608bb7aff888360879c53ed02ac116577", "title": "Successfully Managing Impending Skin Necrosis following Hyaluronic Acid Filler Injection, using High-Dose Pulsed Hyaluronidase", "text": "Facial fillers are becoming increasingly popular as aesthetic procedures to temporarily reduce the depth of wrinkles or to contour faces. However, even in the hands of very experienced injectors, there is always a small possibility of vascular complications like intra-arterial injection of filler substance. We present a case report of a patient who developed features of vascular obstruction in the right infraorbital artery and tell-tale signs of impending skin necrosis after hyaluronic acid filler injection by an experienced injector. The diagnosis of a vascular complication was made quickly with the help of clinical features like blanching, livedo reticularis, and poor capillary refill. The patient was treated promptly with the \"high-dose pulsed hyaluronidase protocol\" comprising three 1,000-unit pulses of hyaluronidase, administered hourly. There was no further increase in the size of the involved area after the first dose of hyaluronidase. All of the involved area, along with a 1\u2009cm overlap into uninvolved skin, was injected during each injection pulse, using a combination of cannula and needle. Complete reperfusion and good capillary filling were achieved after completion of the 3 pulses, and these were taken as the end-point of the high-dose pulsed hyaluronidase treatment. Immediate skin changes after filler injections, as well as after hyaluronidase injections and during the 3-week recovery period, were documented with photographs and clinical notes. The involved skin was found to have fully recovered from this vascular episode, thus indicating that complete recovery from ischemic skin changes secondary to possible intra-arterial injection can be achieved using the high-dose pulsed hyaluronidase protocol."} {"_id": "c2b4835746ba788b81a2906ac228f86ac6c8162b", "title": "Characterizing and visualizing physical world accessibility at scale using crowdsourcing, computer vision, and machine learning", "text": "Poorly maintained sidewalks and street intersections pose considerable accessibility challenges for people with mobility impairments [13,14]. According to the most recent U.S. Census (2010), roughly 30.6 million adults have physical disabilities that affect their ambulatory activities [22]. Of these, nearly half report using an assistive aid such as a wheelchair (3.6 million) or a cane, crutches, or walker (11.6 million) [22]. Despite comprehensive civil rights legislation for Americans with Disabilities (e.g., [25,26]), many city streets, sidewalks, and businesses in the U.S. remain inaccessible. The problem is not just that street-level accessibility fundamentally affects where and how people travel in cities, but also that there are few, if any, mechanisms to determine accessible areas of a city a priori. Indeed, in a recent report, the National Council on Disability noted that they could not find comprehensive information on the \"degree to which sidewalks are accessible\" across the US [15].
This lack of information can have a significant negative impact on the independence and mobility of citizens [13,16]. For example, in our own initial formative interviews with wheelchair users, we uncovered a prevailing view about navigating to new areas of a city: \"I usually don't go where I don't know [about accessible routes]\" (Interviewee 3, congenital polyneuropathy). Our overarching research vision is to transform the way in which street-level accessibility information is collected and used to support new types of assistive map-based tools."} {"_id": "f13c1df14bb289807325f4431e81c2ea10be952f", "title": "Capacitance and Force Computation Due to Direct and Fringing Effects in MEMS/NEMS Arrays", "text": "An accurate computation of electrical force is significant in analyzing the performance of microelectromechanical systems and nanoelectromechanical systems. Many analytical and empirical models are available for computing the forces, especially for a single set of parallel plates. In general, these forces are computed based on the direct electric field between the overlapping areas of the plates and the fringing field effects. Most of the models, which are based on the direct electric field effect, consider only the trivial cases of the fringing field effects. In this paper, we propose different models which are obtained from numerical simulations. They are found to be useful in computing capacitance as well as force in simple and complex configurations consisting of an array of beams and electrodes. For the given configurations, the analytical models are compared with the available models and numerical results. While the percentage error of the proposed model is found to be under 6% with respect to the numerical results, the error associated with the analytical model without the fringing field effects is ~50%. The proposed model can be applied to devices in which the fringing field effects are dominant."} {"_id": "8bbc92a1b8a60c8a3460fa20a45e8063fbf88324", "title": "Self-Supervision for Reinforcement Learning by Parsa Mahmoudieh", "text": "Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pretraining and joint optimization improve the data efficiency and policy returns of end-to-end reinforcement learning."} {"_id": "04bd349fd466ddd8224cff34cba200b5184f969d", "title": "Vertical and horizontal elasticity for dynamic virtual machine reconfiguration", "text": "Today, cloud computing applications are rapidly constructed by services belonging to different cloud providers and service owners. This work presents the inter-cloud elasticity framework, which focuses on cloud load balancing based on dynamic virtual machine reconfiguration when variations in load or in the volume of user requests are observed.
We design a dynamic reconfiguration system, called the inter-cloud load balancer (ICLB), that allows scaling the virtual resources up or down (thus providing automated elasticity) by eliminating service downtimes and communication failures. It includes an inter-cloud load balancer for distributing incoming user HTTP traffic across multiple instances of inter-cloud applications and services, and we perform dynamic reconfiguration of resources according to real-time requirements. The experimental analysis includes different topologies, showing how real-time traffic variation (using real-world workloads) affects resource utilization and achieving better resource usage in the inter-cloud."} {"_id": "77ccf604ca460ac65d2bd14792c901879c4a0153", "title": "Harmony Search Algorithm for Solving Sudoku", "text": "The harmony search (HS) algorithm was applied to solving the Sudoku puzzle. HS is an evolutionary algorithm which mimics musicians\u2019 behaviors such as random play, memory-based play, and pitch-adjusted play when they perform improvisation. Sudoku puzzles in this study were formulated as an optimization problem with number-uniqueness penalties. HS could successfully solve the optimization problem after 285 function evaluations, taking 9 seconds. Also, a sensitivity analysis of HS parameters was performed to obtain a better idea of algorithm parameter values."} {"_id": "5248faae83b3215e0a6f666bf06a325b265633cb", "title": "Introduction Advances in Text Comprehension : Model , Process and Development", "text": "To a very large extent, children learn in and out of school from written text. Information Communications Technologies (ICT) offers many possibilities to facilitate learning by confronting children with multimodal texts. In order to be able to implement learning environments that optimally facilitate children\u2019s learning, insight is needed into the cognitive processes underlying text comprehension. In this light, the aim of this special issue is to report on new advances in text comprehension research from the perspective of educational implications. Starting from recent theoretical frameworks on the cognitive processes underlying text comprehension, the online processing of text will be discussed in adults and in school children. Copyright \u00a9 2008 John Wiley & Sons, Ltd."} {"_id": "9241288953ded43ea7c8f984ecfaef3fdcc942aa", "title": "Food Image Recognition Using Very Deep Convolutional Networks", "text": "We evaluated the effectiveness in classifying food images of a deep-learning approach based on the specifications of Google's image recognition architecture Inception. The architecture is a deep convolutional neural network (DCNN) having a depth of 54 layers. In this study, we fine-tuned this architecture for classifying food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% as top-1 accuracy and 96.88%, 97.27%, and 92.58% as top-5 accuracy. To the best of our knowledge, these results significantly improve the best published results obtained on the same datasets, while requiring less computation power, since the number of parameters and the computational complexity are much smaller than the competitors\u2019.
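A sketch of the fine-tuning recipe the food-recognition abstract describes (the abstract concludes below), using torchvision's InceptionV3 as a stand-in for the paper's 54-layer Inception architecture. The class count matches ETH Food-101, but the optimizer settings and the 0.4 auxiliary-loss weight are conventional defaults, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: torchvision InceptionV3 (requires torchvision >= 0.13
# for the string weights API). ETH Food-101 has 101 classes.
num_classes = 101
net = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
net.fc = nn.Linear(net.fc.in_features, num_classes)               # new head
net.AuxLogits.fc = nn.Linear(net.AuxLogits.fc.in_features, num_classes)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step; images must be (B, 3, 299, 299)."""
    net.train()
    optimizer.zero_grad()
    out, aux = net(images)            # InceptionV3 returns two heads in train mode
    loss = criterion(out, labels) + 0.4 * criterion(aux, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```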
Because of this, even though it is still rather large, the deep network based on this architecture comes at least closer to the requirements of mobile systems."} {"_id": "f609da5448e8dd5cd04095010518dcb4dca22bc0", "title": "Patterns of HCI Design and HCI Design of Patterns", "text": "This chapter introduces the concept of human\u2013computer interaction (HCI) design pattern\u2014also called UI design pattern, interaction design patterns, HCI patterns, user experience pattern and usability engineering pattern. In this book, we mainly use the term HCI design pattern, but we also use all these terms interchangeably to refer to HCI design pattern. HCI design patterns have been introduced as a medium to capture and represent users\u2019 experiences as well as to disseminate them as design knowledge. Patterns have the potential of transferring the design expertise of HCI designers to software engineers, who are usually unfamiliar with UI design and usability principles. What is the difference between design patterns and HCI design patterns? Why are they important among HCI designers and SE practitioners? Why have design patterns been considered an HCI design tool? Why and how can HCI design patterns make a difference? This chapter provides a first answer to these questions, which are the key objectives of this book. 1.1 Original Ideas About Design Pattern Among the early attempts to capture and use design knowledge in the format of design patterns, the first major milestone is often attributed to the architect Christopher Alexander, in the late 1970s. In his two books, A Pattern Language (Alexander 1977) and A Timeless Way of Building (Alexander 1979), Alexander, the father of patterns, discusses the avenues to capture and use design knowledge, and presents a large collection of pattern examples to help architects and engineers with the design of buildings, towns, and other urban entities. To illustrate the concept of pattern, Alexander proposes an architectural pattern called Wings of Light (Alexander 1977), where the problem is: Modern buildings are often shaped with no concern for natural light\u2014they depend almost entirely on artificial light. But, buildings which displace natural light as the major source of illumination are not fit places to spend the day. Amongst other information such as design rationale, examples, and links to related patterns, the solution statement to this problem is: Arrange each building so that it breaks down into wings which correspond, approximately, to the most important natural social groups within the building. Make each wing long and as narrow as you can\u2014never more than 25 ft wide. According to Alexander, every pattern has three essential elements illustrated in Fig. 1.1, which are: a context, a problem, and a solution. The context describes a recurring set of situations in which the pattern can be applied. The problem refers to a set of forces, i.e., goals and constraints, which occur in the context. Generally, the problem describes when to apply the pattern. The solution refers to a design form or a design rule that can be applied to resolve the forces. It also describes the elements that constitute a pattern, relationships among these elements, as well as responsibilities and collaboration. All of Alexander\u2019s patterns address recurrent problems that designers face by providing a possible solution within a specific context.
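Alexander's three essential pattern elements (context, problem, solution) map naturally onto a small record type. A minimal sketch using the Wings of Light example from the chapter; the fields beyond the three essential elements follow the chapter's description of pattern attributes, and the exact field names are illustrative choices, not a standard schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignPattern:
    name: str
    context: str            # recurring situations where the pattern applies
    problem: str            # the forces (goals and constraints) at play
    solution: str           # abstract design rule resolving the forces
    rationale: str = ""     # optional design rationale
    related: List[str] = field(default_factory=list)  # links to other patterns

wings = DesignPattern(
    name="Wings of Light",
    context="Buildings laid out with no concern for natural light",
    problem="Artificial light displaces daylight as the main illumination",
    solution="Break the building into long, narrow wings (never more than "
             "25 ft wide) matching its natural social groups",
)
print(wings.name, "->", wings.solution)
```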
They follow a similar structure, and the presented information is organized into pattern attributes, such as Problem and Design Rationale. Most noteworthy, the presented solution statement is abstract enough to capture only the invariant properties of good design. The specific pattern implementation is dependent on the design details and the designer\u2019s creativity (Dix et al. 2003). In the example above, there is no mention of specific details such as the corresponding positions of wings to one another, or even the number of wings. These implementation details are left to the designer, allowing different instances of the same pattern solution. In addition, Alexander (1977) recognized that the design and construction of buildings required all stakeholders to make use of a common language for facilitating the implementation of the project from its very beginnings to its completion. If organized properly, patterns could achieve this for all the participants of a design project, acting as a communication tool for design. The pattern concept was not well known until 1987, when patterns appeared again at Object-Oriented Programming, Systems, Languages & Applications (OOPSLA), the object orientation conference in Orlando. There Kent Beck and Ward Cunningham (1987) introduced pattern languages for object-oriented program construction in a seminal paper. Since then many papers and presentations have appeared, authored by renowned software design practitioners such as Grady Booch, Richard Helm, Erich Gamma, and Kent Beck. In 1993, the formation of the Hillside Group by Beck, Cunningham, Coplien, Booch, Johnson and others was the first step toward forming a design patterns community in the field of software engineering."} {"_id": "cc51aa9963bbb8cbcf26f4033e707e5d04052186", "title": "Construct validity of the Self-Compassion Scale-Short Form among psychotherapy clients", "text": "Jeffrey A. Hayes, Allison J. Lockard, Rebecca A. Janis & Benjamin D. Locke (2016): Construct validity of the Self-Compassion Scale-Short Form among psychotherapy clients, Counselling Psychology Quarterly, DOI: 10.1080/09515070.2016.1138397"} {"_id": "2bde586f6bbf1de3526f08f6c06976f9e535d4d3", "title": "An evolutionary game-theoretic modeling for heterogeneous information diffusion", "text": "In this paper, we model and analyze the information diffusion in heterogeneous social networks from an evolutionary game perspective. Users interact with each other according to their individual fitness, which is heterogeneous across user types. We first study a model where in each social interaction the payoff of a user is independent of the type of the interacted user. In such a case, we derive the information diffusion dynamics of each type of user as well as that of the overall network. The evolutionarily stable states (ESSs) of the dynamics are determined accordingly. Afterwards, we investigate a more general model where in each interaction the payoff of a user depends on the type of the interacted user. We show that the local influence dynamics change much more quickly than the global strategy population dynamics and the former keeps track of the latter throughout the information diffusion process.
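The evolutionary-game abstract above (which concludes below) analyzes strategy population dynamics and their evolutionarily stable states. As a simplified stand-in for its heterogeneous model, here are generic two-strategy replicator dynamics, with two strategies such as forwarding vs. ignoring a piece of information; the payoff numbers are arbitrary and not from the paper.

```python
import numpy as np

# Payoff matrix A[i][j]: payoff of strategy i against strategy j.
# Arbitrary numbers chosen to give an interior (unstable) equilibrium.
A = np.array([[3.0, 1.0],     # strategy 0: forward
              [2.0, 2.5]])    # strategy 1: ignore

x = 0.2                       # initial fraction playing "forward"
dt = 0.01
for _ in range(2000):
    p = np.array([x, 1 - x])
    fitness = A @ p           # expected payoff of each strategy
    avg = p @ fitness         # population-average payoff
    # Replicator equation: a strategy's share grows in proportion to
    # how much its payoff exceeds the population average.
    x += dt * x * (fitness[0] - avg)

print("forwarding share after integration:", round(x, 3))  # settles at an ESS
```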
Based on this observation, the global strategy population dynamics are derived. Finally, simulations are conducted to verify the theoretical results."} {"_id": "cbb866b10674bf7461b768ec154d4b9478e32e82", "title": "Comparative study on power conversion methods for wireless battery charging platform", "text": "In this paper, four different power conversion methods (voltage control, duty-cycle control, frequency control and phase-shift control) are compared for wireless power transfer applications by considering the energy transfer efficiency, electromagnetic interference, stability, and implementation complexity. The phase-shift control is found to be the optimal scheme, with good efficiency and low electromagnetic interference. Its constant-frequency feature is also within the framework of the new international wireless charging standard called \u2018Qi\u2019. A high system efficiency of 72% for 5W wireless charging applications has been achieved practically."} {"_id": "7b3f48d53e5203b76df5997926522276c91658c9", "title": "A general procedure for the construction of Gorges polygons for multi-phase windings of electrical machines", "text": "This paper presents a simple and effective procedure for the determination of the Gorges polygon, suitable for all possible winding configurations in electrical machines. The methodology relies on the determination of a Winding Distribution Table (WDT), in which all the information about the distribution of the currents along the stator periphery is computed and from which the G\u00f6rges polygons are easily derived. The proposed method can be applied to both symmetrical and asymmetrical multi-phase windings, including concentrated, fractional, reduced and dead-coil ones. The examples provided in this paper demonstrate the versatility of the proposed method."} {"_id": "25549a1678eb8a5b95790b6d72a54970d7aa697d", "title": "Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction", "text": "We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SCIERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SCIIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature."} {"_id": "0b951037b7b6d94a4f35ba096d83ee43e4721c66", "title": "Topic discovery and future trend forecasting for texts", "text": "Finding topics from a collection of documents, such as research publications, patents, and technical reports, is helpful for summarizing large scale text collections and the world wide web. It can also help forecast topic trends in the future. This can be beneficial for many applications, such as modeling the evolution of the direction of research and forecasting future trends of the IT industry. In this paper, we propose using association analysis and ensemble forecasting to automatically discover topics from a set of text documents and forecast their evolving trend in the near future.
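The topic-discovery abstract above (continued below) ends with an ensemble forecasting step for topic popularity. A toy sketch of the ensemble idea, averaging three simple base forecasters over an invented publication-count series; none of the base models or numbers come from the paper.

```python
import numpy as np

# Invented yearly publication counts for one discovered topic.
history = np.array([5, 8, 12, 20, 26, 35, 41], dtype=float)

def naive_last(h):
    return h[-1]                                  # persistence baseline

def linear_trend(h):
    # Extrapolate a least-squares line one step ahead.
    t = np.arange(len(h))
    slope, intercept = np.polyfit(t, h, 1)
    return slope * len(h) + intercept

def mean_growth(h):
    return h[-1] * (h[1:] / h[:-1]).mean()        # average growth ratio

forecasts = [f(history) for f in (naive_last, linear_trend, mean_growth)]
print("ensemble forecast for next year:", np.mean(forecasts))
```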
In order to discover meaningful topics, we collect publications from a particular research area, data mining and machine learning, as our data domain. An association analysis process is applied to the collected data to first identify a set of topics, followed by a temporal correlation analysis to help discover correlations between topics and identify a network of topics and communities. After that, an ensemble forecasting approach is proposed to predict the popularity of research topics in the future. Experiments and validations on data with 9\u00a0years of publication records confirm the effectiveness of the proposed design."} {"_id": "dc0d979f7c352b40c5076039a16e1002bd8b38c3", "title": "Fake News in Social Networks", "text": "We model the spread of news as a social learning game on a network. Agents can either endorse or oppose a claim made in a piece of news, which itself may be either true or false. Agents base their decision on a private signal and their neighbors\u2019 past actions. Given these inputs, agents follow strategies derived via multi-agent deep reinforcement learning and receive utility from acting in accordance with the veracity of claims. Our framework yields strategies with agent utility close to a theoretical, Bayes optimal benchmark, while remaining flexible to model re-specification. Optimized strategies allow agents to correctly identify most false claims when all agents receive unbiased private signals. However, an adversary\u2019s attempt to spread fake news by targeting a subset of agents with a biased private signal can be successful. The attack is even more effective when the adversary has information about agents\u2019 network positions or private signals. When agents are aware of the presence of an adversary, they re-optimize their strategies in the training stage and the adversary\u2019s attack is less effective. Hence, exposing agents to the possibility of fake news can be an effective way to curtail the spread of fake news in social networks. Our results also highlight that information about the users\u2019 private beliefs and their social network structure can be extremely valuable to adversaries and should be well protected."} {"_id": "4290a4451c87281c55b79a8aaf79c8261a55589f", "title": "Comparative Study on Vision Based Rice Seed Varieties Identification", "text": "This paper presents a system for automated classification of rice variety for rice seed production using computer vision and image processing techniques. Rice seeds of different varieties are visually very similar in color, shape and texture, which makes classifying rice seed varieties at high accuracy challenging. We investigated various feature extraction techniques for efficient rice seed image representation. We analyzed the performance of powerful classifiers on the extracted features to find the most robust one. Images of six different rice seed varieties in northern Vietnam were acquired and analyzed. Our experiments have demonstrated that the average accuracy of our classification system can reach 90.54% using the Random Forest method with a simple feature extraction technique. This result can be used for developing a computer-aided machine vision system for automated assessment of rice seed purity."} {"_id": "162875171336ed0f0dbf63926ee144372a47dc2e", "title": "Anatomical Variants in Prostate Artery Embolization: A Pictorial Essay", "text": "Prostate artery embolization (PAE) has emerged as a new treatment option for patients with symptomatic benign prostatic hyperplasia.
The main challenges related to this procedure are navigating arteries with atherosclerosis and anatomical variations, and the potential risk of non-target embolization of pelvic structures due to the presence of collateral shunts and reflux of microparticles. Knowledge of classical vascular anatomy and the most common variations is essential for safe embolization, good clinical practice, and optimal outcomes. The aim of this pictorial essay is to illustrate the pelvic vascular anatomy relevant to PAE in order to provide a practical guide that includes the most common anatomical variants, as well as to discuss the technical details related to each."} {"_id": "327cbb1da2652b430a52171d510cf72235b890b6", "title": "Lock-free linked lists and skip lists", "text": "Lock-free shared data structures implement distributed objects without the use of mutual exclusion, thus providing robustness and reliability. We present a new lock-free implementation of singly-linked lists. We prove that the worst-case amortized cost of the operations on our linked lists is linear in the length of the list plus the contention, which is better than in previous lock-free implementations of this data structure. Our implementation uses backlinks that are set when a node is deleted so that concurrent operations visiting the deleted node can recover. To avoid performance problems that would arise from traversing long chains of backlink pointers, we introduce flag bits, which indicate that a deletion of the next node is underway. We then give a lock-free implementation of a skip list dictionary data structure that uses the new linked list algorithms to implement individual levels. Our algorithms use the single-word C&S synchronization primitive."} {"_id": "65a8e97f46a44ce5c574747fffc35f9058aa364f", "title": "A study into the believability of animated characters in the context of bullying intervention", "text": "The VICTEC (Virtual ICT with Empathic Characters) project provides an opportunity to explore the use of animated characters in a virtual environment for educational issues such as bullying behaviour. Our research aim was to evaluate whether an early prototype of the VICTEC demonstrator could provide useful information about character and story believability, physical aspects of the characters and story comprehensibility. Results from an evaluation with 76 participants revealed high levels of bullying story believability, and character conversation was rated as convincing and interesting. In contrast, character movement was poorly rated. Overall, the results imply that poor physical aspects of characters do not have detrimental effects on story believability and interest levels with the demonstrator. It is concluded that, even at this early design phase, the demonstrator provides a suitable means to explore how character and storyline believability in virtual environments may assist in the development of systems to deal with the educational issue of bullying."} {"_id": "280cd4a04cf7ecc36f84a7172483916a41403f5e", "title": "Multi-class AdaBoost \u2217", "text": "Boosting has been a very successful technique for solving the two-class classification problem. In going from two-class to multi-class classification, most algorithms have been restricted to reducing the multi-class classification problem to multiple two-class problems. In this paper, we develop a new algorithm that directly extends the AdaBoost algorithm to the multi-class case without reducing it to multiple two-class problems.
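The multi-class AdaBoost abstract above (it concludes below) centers on directly extending the binary classifier-weight rule to K classes. In the SAMME-style update associated with this line of work, a weak learner only needs to beat random K-class guessing to receive positive weight; a minimal sketch, with illustrative error values:

```python
import numpy as np

def samme_alpha(err, n_classes):
    """Classifier weight in SAMME-style multi-class AdaBoost.

    The extra log(K - 1) term lets a weak learner that merely beats
    random K-class guessing (err < 1 - 1/K) receive positive weight,
    relaxing the binary requirement err < 1/2.
    """
    return np.log((1.0 - err) / err) + np.log(n_classes - 1)

# With K = 4 classes, a 60%-error learner still beats the 75% error of
# random guessing, so it gets positive weight; under the binary rule
# (K = 2, where log(K - 1) = 0) the same learner would be rejected.
print(samme_alpha(err=0.60, n_classes=4))   # ~ +0.69
print(samme_alpha(err=0.60, n_classes=2))   # ~ -0.41
```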
We show that the proposed multi-class AdaBoost algorithm is equivalent to a forward stagewise additive modeling algorithm that minimizes a novel exponential loss for multi-class classification. Furthermore, we show that the exponential loss is a member of a class of Fisher-consistent loss functions for multi-class classification. As shown in the paper, the new algorithm is extremely easy to implement and is highly competitive in terms of misclassification error rate."} {"_id": "3215a900e4fb9499c8904bfe662c59de042da67d", "title": "Predicting Movie Sales from Blogger Sentiment", "text": "The volume of discussion about a product in weblogs has recently been shown to correlate with the product\u2019s financial performance. In this paper, we study whether applying sentiment analysis methods to weblog data results in better correlation than volume only, in the domain of movies. Our main finding is that positive sentiment is indeed a better predictor for movie success when applied to a limited context around references to the movie in weblogs, posted prior to its release. If my film makes one more person miserable, I\u2019ve done my job."} {"_id": "46a7464a8926241c8ed78b243ca0bf24253f8786", "title": "Early Prediction of Movie Box Office Success Based on Wikipedia Activity Big Data", "text": "Use of socially generated \"big data\" to access information about collective states of the minds in human societies has become a new paradigm in the emerging field of computational social science. A natural application of this would be the prediction of the society's reaction to a new product in the sense of popularity and adoption rate. However, bridging the gap between \"real time monitoring\" and \"early predicting\" remains a big challenge. Here we report on an endeavor to build a minimalistic predictive model for the financial success of movies based on collective activity data of online users. We show that the popularity of a movie can be predicted much before its release by measuring and analyzing the activity level of editors and viewers of the corresponding entry to the movie in Wikipedia, the well-known online encyclopedia."} {"_id": "38d76b64705d193ee8017993f7fc4c6e8d4bdbc8", "title": "A Live-User Study of Opinionated Explanations for Recommender Systems", "text": "This paper describes an approach for generating rich and compelling explanations in recommender systems, based on opinions mined from user-generated reviews. The explanations highlight the features of a recommended item that matter most to the user and also relate them to other recommendation alternatives and the user's past activities to provide a context."} {"_id": "2d81296e2894baaade499d9f1ed163f339943ddc", "title": "MO-SLAM: Multi object SLAM with run-time object discovery through duplicates", "text": "In this paper, we present MO-SLAM, a novel visual SLAM system that is capable of detecting duplicate objects in the scene during run-time without requiring an offline training stage to pre-populate a database of objects. Instead, we propose a novel method to detect landmarks that belong to duplicate objects. Further, we show how landmarks belonging to duplicate objects can be converted to first-order entities which generate additional constraints for optimizing the map. 
We evaluate the performance of MO-SLAM with extensive experiments on both synthetic and real data, where the experimental results verify the capabilities of MO-SLAM in detecting duplicate objects and using these constraints to improve the accuracy of the map."} {"_id": "2172134ed38d28fb910879b39a13832c5fb7998b", "title": "Speed planning for solar-powered electric vehicles", "text": "Electric vehicles (EVs) are the trend for future transportation. The major obstacle is range anxiety due to the poor availability of charging stations and long charging times. Solar-powered EVs, which mostly rely on solar energy, are free of charging limitations. However, the range anxiety problem is more severe due to the availability of sunlight. For example, shading from buildings or trees may cause a solar-powered EV to stop halfway through a trip. In this paper, we show that by optimally planning the speed on different road segments and thus balancing energy harvesting and consumption, we can enable a solar-powered EV to successfully reach the destination in the shortest travel time. The speed planning problem is essentially a constrained non-linear programming problem, which is generally difficult to solve. We have identified an optimality property that allows us to compute an optimal speed assignment for a partition of the path; then, a dynamic programming method is developed to efficiently compute the optimal speed assignment for the whole trip with significantly lower computation overhead compared to the state-of-the-art non-linear programming solver. To evaluate the usability of the proposed method, we have also developed a solar-powered EV prototype. Experiments show that the predictions by the proposed technique match well with the data collected from the physical EV. Issues on practical implementation are also discussed."} {"_id": "f3396763e2c3ec1e0c80fddad8b08177960ce34d", "title": "Increase Physical Fitness and Create Health Awareness through Exergames and Gamification - The Role of Individual Factors, Motivation and Acceptance", "text": "Demographic change and the aging population push the health and welfare system to its limits. Increased physical fitness and increased awareness of health issues will help the elderly live independently for longer and will thereby reduce the costs in the health care system. Exergames seem to be a promising solution for promoting physical fitness. Still, there is little evidence about the conditions under which Exergames will be accepted and used by the elderly. To investigate promoting and hindering factors, we conducted a user study with a prototype of an Exergame. We contrasted young vs. elderly players and investigated the role of gamer types, personality factors and technical expertise on the performance within the game and changes in the attitude towards individual health after the game. Surprisingly, performance within the game is not affected by performance motivation but by gamer type. More importantly, a universal positive effect on perceived pain was detected after the Exergame"} {"_id": "4e71af17f3b0ec59aa3a1c7ea3f2680a1c9d9f6f", "title": "A CNN-based segmentation model for segmenting foreground by a probability map", "text": "This paper proposes a CNN-based segmentation model to segment the foreground from an image and a prior probability map. Our model is constructed based on the FCN model, in which we simply replace the original RGB-based three-channel input layer by a four-channel one, i.e., RGB and a prior probability map.
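The segmentation abstract above (continued below) hinges on widening the input layer from three channels to four (RGB plus a prior probability map). A sketch of that surgery on torchvision's FCN-ResNet50, which stands in for the paper's FCN variant; the extra-channel initialization heuristic is a common trick, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Stand-in FCN with a binary (foreground/background) head.
model = fcn_resnet50(num_classes=2)

old = model.backbone.conv1                 # original 3-channel stem conv
new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                stride=old.stride, padding=old.padding, bias=False)
with torch.no_grad():
    new.weight[:, :3] = old.weight         # reuse the RGB filters as-is
    # Common heuristic: initialize the 4th channel (prior map) with the
    # mean of the RGB filters.
    new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)
model.backbone.conv1 = new

x = torch.rand(1, 4, 224, 224)             # RGB + prior probability map
print(model(x)["out"].shape)               # -> torch.Size([1, 2, 224, 224])
```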
We then train the model by constructing various images, prior probability maps, and ground truths from the PASCAL VOC dataset, and finally obtain a CNN-based foreground segmentation model that is suitable for general images. Our proposed method is motivated by the observation that the classical graph-cut algorithm using GMM for modeling the priors cannot capture semantic segmentation from the prior probability, and thus leads to low segmentation performance. Furthermore, the efficient FCN segmentation model is for specific objects rather than general objects. We therefore improve the graph-cut like foreground segmentation by extending the FCN segmentation model. We verify the proposed model by various prior probability maps such as artificial maps, saliency maps, and discriminative maps. The iCoseg dataset, which is different from the PASCAL VOC dataset, is used for verification. Experimental results demonstrate that our method clearly outperforms the graph-cut algorithms and FCN models."} {"_id": "5906297bd4108376a032cb4c610d3e2926750d47", "title": "Clothes Co-Parsing Via Joint Image Segmentation and Labeling With Application to Clothing Retrieval", "text": "This paper aims at developing an integrated system for clothing co-parsing (CCP), in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. A novel data-driven system consisting of two phases of inference is proposed. The first phase, referred to as \u201cimage cosegmentation,\u201d iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM technique [1]. In the second phase (i.e., \u201cregion colabeling\u201d), we construct a multiimage graphical model by taking the segmented regions as vertices, and incorporating several contexts of clothing configuration (e.g., item locations and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [2], we construct a dataset called the SYSU-Clothes dataset consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29%/88.23% segmentation accuracy and 65.52%/63.89% recognition rate on the Fashionista and the SYSU-Clothes datasets, respectively, which are superior to the previous methods. Furthermore, we apply our method on a challenging task, i.e., cross-domain clothing retrieval: given a user photo depicting a clothing image, retrieving the same clothing items from online shopping stores based on the fine-grained parsing results."} {"_id": "ea5fdb7f6c2e94d1a67bff622c566ddc66be19ab", "title": "Face recognition based on convolutional neural network and support vector machine", "text": "Face recognition is an important embodiment of human-computer interaction, which has been widely used in access control systems, monitoring systems and identity verification. However, since face images vary with expressions, ages, as well as poses of people and illumination conditions, the face images of the same sample might be different, which makes face recognition difficult. There are two main requirements in face recognition: a high recognition rate and less training time. In this paper, we combine Convolutional Neural Network (CNN) and Support Vector Machine (SVM) to recognize face images. CNN is used as a feature extractor to acquire remarkable features automatically.
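For the four-channel FCN idea in the segmentation abstract above, one common way to realize it is to widen the first convolution from 3 to 4 input channels. A hedged PyTorch sketch follows, assuming a torchvision FCN backbone; this is an approximation of the idea, not the authors' code, and the prior-channel initialization is one plausible choice.

```python
# Sketch: extend an FCN's 3-channel input conv to accept RGB + prior map.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.segmentation.fcn_resnet50(weights=None)
old = model.backbone.conv1                      # Conv2d(3, 64, 7, stride=2, pad=3)
new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                stride=old.stride, padding=old.padding,
                bias=old.bias is not None)
with torch.no_grad():
    new.weight[:, :3] = old.weight              # keep the RGB filters
    new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)  # init prior channel
model.backbone.conv1 = new

x = torch.randn(1, 4, 224, 224)                 # RGB + prior probability map
out = model(x)["out"]                           # per-pixel class logits
```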
We first pre-train our CNN with ancillary data to get the updated weights, and then train the CNN on the target dataset to extract more hidden facial features. Finally, we use SVM as our classifier instead of CNN to recognize all the classes. With the input of facial features extracted from CNN, SVM will recognize face images more accurately. In our experiments, some face images in the Casia-Webfaces database are used for pre-training, and the FERET database is used for training and testing. The experimental results demonstrate the method's efficiency, achieving a high recognition rate with less training time."} {"_id": "45e3d69e23bf3b3583a8a71022a8e161a37e9571", "title": "Progress on a cognitive-motivational-relational theory of emotion.", "text": "The 2 main tasks of this article are 1st, to examine what a theory of emotion must do and basic issues that it must address. These include definitional issues, whether or not physiological activity should be a defining attribute, categorical versus dimensional strategies, the reconciliation of biological universals with sociocultural sources of variability, and a classification of the emotions. The 2nd main task is to apply an analysis of appraisal patterns and the core relational themes that they produce to a number of commonly identified emotions. Anger, anxiety, sadness, and pride (to include 1 positive emotion) are used as illustrations. The purpose is to show the capability of a cognitive-motivational-relational theory to explain and predict the emotions. The role of coping in emotion is also discussed, and the article ends with a response to criticisms of a phenomenological, folk-theory outlook."} {"_id": "6ee9fcef938389c4cc48fd3b1875d680161ffe0a", "title": "A Survey of Logical Models for OLAP Databases", "text": "In this paper, we present different proposals for multidimensional data cubes, which are the basic logical model for OLAP applications. We have grouped the work in the field in two categories: commercial tools (presented along with terminology and standards) and academic efforts. We further divide the academic efforts in two subcategories: the relational model extensions and the cube-oriented approaches. Finally, we attempt a comparative analysis of the various efforts."} {"_id": "a9f1838de25c17a38fae9738d137d7c7644b3be1", "title": "Who Is Going to Get Hurt? Predicting Injuries in Professional Soccer", "text": "Injury prevention has a fundamental role in professional soccer due to the high cost of recovery for players and the strong influence of injuries on a club\u2019s performance. In this paper we provide a predictive model to prevent injuries of soccer players using a multidimensional approach based on GPS measurements and machine learning. In an evolutive scenario, where a soccer club starts collecting the data for the first time and updates the predictive model as the season goes by, our approach can detect around half of the injuries, allowing the soccer club to save 70% of a season\u2019s economic costs related to injuries. The proposed approach can be a valuable support for coaches, helping the soccer club to reduce injury incidence, save money and increase team performance."} {"_id": "cbf4040cb14a019ff3556fad5c455e99737f169f", "title": "Answering Schr\u00f6dinger's question: A free-energy formulation", "text": "The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour.
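The CNN-plus-SVM pipeline in the face recognition abstract above can be prototyped by stripping a CNN's classification head and feeding its features to an SVM. A minimal sketch, assuming torchvision and scikit-learn; the backbone choice and the stand-in data are illustrative only.

```python
# Sketch: pretrained CNN as a feature extractor, SVM as the final classifier.
import torch
import torchvision
from sklearn.svm import SVC

cnn = torchvision.models.resnet18(weights=None)
cnn.fc = torch.nn.Identity()          # drop the classification head: 512-d features
cnn.eval()

def features(batch):                  # batch: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return cnn(batch).numpy()

# Stand-in face crops and identity labels; real use would load aligned faces.
X_train = torch.randn(8, 3, 224, 224)
y_train = [0, 1] * 4
clf = SVC(kernel="linear").fit(features(X_train), y_train)
print(clf.predict(features(X_train)))
```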
More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen's four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery."} {"_id": "ab71da348979c50d33700bc2f6ddcf25b4c8cfd0", "title": "Reconnaissance with ultra wideband UHF synthetic aperture radar", "text": ""} {"_id": "ecb4d6621662f6c8eecfe9aab366d36146f0a6da", "title": "Unseen Noise Estimation Using Separable Deep Auto Encoder for Speech Enhancement", "text": "Unseen noise estimation is a key yet challenging step to make a speech enhancement algorithm work in adverse environments. At worst, the only prior knowledge we have about the encountered noise is that it is different from the involved speech. Therefore, by subtracting the components which cannot be adequately represented by a well-defined speech model, the noises can be estimated and removed. Given the good performance of deep learning in signal representation, a deep auto encoder (DAE) is employed in this work for accurately modeling the clean speech spectrum. In the subsequent stage of speech enhancement, an extra DAE is introduced to represent the residual part obtained by subtracting the estimated clean speech spectrum (by using the pre-trained DAE) from the noisy speech spectrum. By adjusting the estimated clean speech spectrum and the unknown parameters of the noise DAE, one can reach a stationary point to minimize the total reconstruction error of the noisy speech spectrum. The enhanced speech signal is thus obtained by transforming the estimated clean speech spectrum back into the time domain. The above proposed technique is called separable deep auto encoder (SDAE). Given the under-determined nature of the above optimization problem, the clean speech reconstruction is confined in the convex hull spanned by a pre-trained speech dictionary. New learning algorithms are investigated to respect the non-negativity of the parameters in the SDAE. Experimental results on TIMIT with 20 noise types at various noise levels demonstrate the superiority of the proposed method over the conventional baselines."} {"_id": "488fc01e8663d67de2cb76b33167a60906b81eba", "title": "A variable splitting augmented Lagrangian approach to linear spectral unmixing", "text": "This paper presents a new linear hyperspectral unmixing method of the minimum volume class, termed simplex identification via split augmented Lagrangian (SISAL). Following Craig's seminal ideas, hyperspectral linear unmixing amounts to finding the minimum volume simplex containing the hyperspectral vectors. This is a nonconvex optimization problem with convex constraints. In the proposed approach, the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The obtained problem is solved by a sequence of augmented Lagrangian optimizations. The resulting algorithm is very fast and able to solve problems far beyond the reach of the current state-of-the-art algorithms.
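The separable-autoencoder idea in the speech enhancement abstract above can be caricatured with two small autoencoders: a frozen one standing in for the pre-trained clean-speech DAE and a trainable one modeling the residual noise, optimized to reduce the total reconstruction error. A much-simplified PyTorch sketch; the dimensions and architectures are invented, and the paper's non-negativity and convex-hull (speech dictionary) constraints are omitted.

```python
# Simplified sketch of the separable-DAE decomposition of a noisy spectrum.
import torch
import torch.nn as nn

def dae(dim, hid):                   # tiny fully connected autoencoder
    return nn.Sequential(nn.Linear(dim, hid), nn.ReLU(), nn.Linear(hid, dim))

D = 257                              # spectrum bins (illustrative)
speech_dae, noise_dae = dae(D, 128), dae(D, 64)
for p in speech_dae.parameters():    # pretend it was pre-trained on clean speech
    p.requires_grad_(False)

noisy = torch.rand(32, D)            # stand-in noisy magnitude spectra
opt = torch.optim.Adam(noise_dae.parameters(), lr=1e-3)
for _ in range(100):
    s_hat = speech_dae(noisy)                     # speech part of the mixture
    n_hat = noise_dae(torch.relu(noisy - s_hat))  # model the residual as noise
    loss = ((noisy - (s_hat + n_hat)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
enhanced = speech_dae(noisy).clamp(min=0)         # estimated clean spectrum
```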
The effectiveness of SISAL is illustrated with simulated data."} {"_id": "0d21449cba8735032af2f6dfc46e18641e6fa3d3", "title": "Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation", "text": "We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes."} {"_id": "5fecf8411f1063a08af028effae9d3d208ef1101", "title": "How to put probabilities on homographies", "text": "We present a family of \"normal\" distributions over a matrix group together with a simple method for estimating its parameters. In particular, the mean of a set of elements can be calculated. The approach is applied to planar projective homographies, showing that using priors defined in this way improves object recognition."} {"_id": "6d74c216d8246c2a356b00426af715102af2a172", "title": "From Skimming to Reading: A Two-stage Examination Model for Web Search", "text": "User's examination of search results is a key concept involved in all the click models. However, most studies assumed that eye fixation means examination and no further study has been carried out to better understand user's examination behavior. In this study, we design an experimental search engine to collect both the user's feedback on their examinations and the eye-tracking/click-through data. To our surprise, a large proportion (45.8%) of the results fixated by users are not recognized as being \"read\". Looking into the tracking data, we found that before the user actually \"reads\" the result, there is often a \"skimming\" step in which the user quickly looks at the result without reading it. We thus propose a two-stage examination model which composes of a first \"from skimming to reading\" stage (Stage 1) and a second \"from reading to clicking\" stage (Stage 2). We found that the biases (e.g. position bias, domain bias, attractiveness bias) considered in many studies impact in different ways in Stage 1 and Stage 2, which suggests that users make judgments according to different signals in different stages. 
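Echoing the 3D convolutional encoder network with shortcut connections described in the lesion-segmentation abstract above, here is a deliberately tiny PyTorch sketch of an encoder-decoder with one shortcut that fuses low- and high-level features; the published model is far deeper, so treat this purely as a structural illustration.

```python
# Sketch: minimal 3D conv encoder-decoder with a shortcut connection.
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU())
        self.down = nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(feat * 2, feat, 2, stride=2)
        self.head = nn.Conv3d(feat, 1, 1)     # voxel-wise lesion logits

    def forward(self, x):
        e = self.enc(x)                       # low-level features
        d = self.up(torch.relu(self.down(e))) # higher-level, decoded back up
        d = d + e                             # shortcut: integrate both scales
        return self.head(d)

logits = Tiny3DSegNet()(torch.randn(1, 1, 32, 64, 64))  # (1, 1, 32, 64, 64)
```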
We also show that the two-stage examination behaviors can be predicted with mouse movement behavior, which can be collected at large scale. Relevance estimation with the two-stage examination model also outperforms that with a single-stage examination model. This study shows that the user's examination of search results is a complex cognitive process that needs to be investigated in greater depth, and this may have a significant impact on Web search."} {"_id": "638867fba638d4088b83e58215ac7682f1c55699", "title": "Artificial Intelligence Technique Applied to Intrusion Detection", "text": "Communication networks rely on different protocols, each intended to increase network performance in a secure manner. During communication, user connectivity and violations of information-access policy are handled through intrusion detection. Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents. It focuses on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators. Organizations also use intrusion detection and prevention systems (IDPS) for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies. The communication architecture is built on IP, and the Internet Control Message Protocol (ICMP) is tightly integrated with IP. ICMP messages, delivered in IP packets, are used for out-of-band messages related to network operation or mis-operation. This paper describes how HDLC and ICMP protocol sequences are used to detect intrusions on a hybrid network, and their attributes are used to recommend a standardized protocol for the intrusion detection process. This standardized protocol is compared with the earlier HDLC and ICMP research in this line of work."} {"_id": "ab81be51a3a96237ea24375d7b4fd0c244e5ffa8", "title": "Enhanced LSB technique for audio steganography", "text": "The idea of this paper is to devise a new steganography strategy that minimizes the effect on the audio used to hide the data. As Frederick Wilcox observed, \u201cprogress always involves risk\u201d; technological progress in computer science and the Internet has altered the way we live, and will continue to shape our lives [1]. In this paper we present a steganography method for embedding text data in an audio file. The basic approach is to provide a good, well-organized method for hiding data and sending it to the destination in a safer manner. In the proposed technique, the audio file is first sampled and then an appropriate bit is modified. In each selected sample, one bit is modified at the least significant position; the remaining bits could also be used, but doing so may cause audible noise. We attempt to provide an overview and theoretical framework of audio steganography techniques, together with a novel approach for hiding data in audio using the least significant bit (LSB)."} {"_id": "85d31ba076c2620fbea3c86595fb0ff6c44b1efa", "title": "A stable-isotope dilution GC-MS approach for the analysis of DFRC (derivatization followed by reductive cleavage) monomers from low-lignin plant materials.", "text": "The derivatization followed by reductive cleavage (DFRC) method is a well-established tool to characterize the lignin composition of plant materials.
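The LSB embedding described in the audio steganography abstract above amounts to overwriting the least significant bit of selected PCM samples. A minimal NumPy sketch (one bit per consecutive sample, with no key or enhancement step, so it shows plain LSB rather than the paper's enhanced variant):

```python
# Sketch: embed/extract a text message in the LSBs of 16-bit PCM samples.
import numpy as np

def embed_lsb(samples, message):
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    out = samples.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | bits   # overwrite only the LSB
    return out

def extract_lsb(samples, n_chars):
    bits = (samples[:n_chars * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode()

pcm = np.random.randint(-2**15, 2**15, 4096, dtype=np.int16)  # stand-in audio
stego = embed_lsb(pcm, "hidden")
assert extract_lsb(stego, 6) == "hidden"   # message survives; waveform barely changes
```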
However, the application of the original procedure, especially the chromatographic determination of the DFRC monomers, is problematic for low-lignin foods. To overcome these problems, a modified sample cleanup and a stable-isotope dilution approach were developed and validated. To quantitate the diacetylated DFRC monomers, their corresponding hexadeuterated analogs were synthesized and used as internal standards. By using the selected-ion monitoring mode, matrix-associated interferences can be minimized, resulting in higher selectivity and sensitivity. The modified method was applied to four low-lignin samples. Lignin from carrot fibers was classified as guaiacyl-rich, whereas the lignins from radish, pear, and asparagus fibers were classified as balanced lignins (guaiacyl/syringyl ratio=1-2)."} {"_id": "8eb62f0e3528f39e93f4073226c044235b45dce8", "title": "Action Design Research and Visualization Design", "text": "In applied visualization research, artifacts are shaped by a series of small design decisions, many of which are evaluated quickly and informally via methods that often go unreported and unverified. Such design decisions are influenced not only by visualization theory, but also by the people and context of the research. While existing applied visualization models support a level of reliability throughout the design process, they fail to explicitly account for the influence of the research context in shaping the resulting design artifacts. In this work, we look to action design research (ADR) for insight into addressing this issue. In particular, ADR offers a framework along with a set of guiding principles for navigating and capitalizing on the disruptive, subjective, human-centered nature of applied design work, while aiming to ensure reliability of the process and design, and emphasizing opportunities for conducting research. We explore the utility of ADR in increasing the reliability of applied visualization design research by: describing ADR in the language and constructs developed within the visualization community; comparing ADR to existing visualization methodologies; and analyzing a recent design study retrospectively through the lens of ADR's framework and principles."} {"_id": "6cc4a3d0d8a278d30e05418afeaf6b8e5d04d3d0", "title": "Econometric Modelling of Markov-Switching Vector Autoregressions using MSVAR for Ox", "text": ""} {"_id": "2050e3ecf3919b05aacf53ab6bed8c004c2b6872", "title": "An X-band to Ka-band SPDT switch using 200 nm SiGe HBTs", "text": "This paper presents the design and measured performance of an X-band to Ka-band SiGe HBT SPDT switch. The proposed SPDT switch was fabricated using a 200 nm, 150 GHz peak fT silicon-germanium (SiGe) heterojunction bipolar transistor (HBT) BiCMOS technology. The SPDT switch design uses diode-connected SiGe HBTs in a series-shunt configuration to improve the switch bandwidth and isolation. Between 8 and 40 GHz, this SPDT switch achieves an insertion loss of less than 4.3 dB, an isolation of more than 20.3 dB, and a return loss of more than 9 dB."} {"_id": "a2eb1abd58e1554dc6bac5a8ea2f9876e9f4d36b", "title": "Measuring dimensions of intergenerational contact: factor analysis of the Queen's University Scale.", "text": "OBJECTIVES\nIntergenerational contact has been linked to a range of health outcomes, including greater engagement and lower depression. Measures of contact are limited. Informed by Allport's contact theory, the Queen's University Scale consists of items rating contact with elders.
We administered the survey to a young adult sample (N = 606) to identify factors that may optimize intervention programming and enhance young persons' health as they age.\n\n\nMETHODS\nWe conducted exploratory factor analysis (EFA) in the structural equation modeling framework and then confirmatory factor analysis with items pertaining to the general elder population.\n\n\nRESULTS\nEFAs did not yield an adequate factor structure. We tested two alternative confirmatory models based on findings from the EFA. Neither a second-order model nor a first-order model allowing double loadings and correlated errors proved adequate.\n\n\nCONCLUSION\nDifficulty finding an adequate factor solution reflects challenges to measuring intergenerational contact with this scale. Items reflect relevant topics but subscale models are limited in interpretability. Knox and colleagues' analyses led them to recommend a brief, global scale, but we did not find empirical support for such a measure. Next steps include development and testing of a reliable, valid scale measuring dimensions of contact as perceived by both youth and elders."} {"_id": "28215a7294a621b3ff0098b3c7b7a4760f9e3e59", "title": "End of Moore\u2019s law: thermal (noise) death of integration in micro and nano electronics", "text": "The exponential growth of memory size and clock frequency in computers has a great impact on everyday life. The growth is empirically described by Moore\u2019s law of miniaturization. Physical limitations of this growth would have a serious impact on technology and economy. A thermodynamical effect, the increasing thermal noise voltage (Johnson\u2013Nyquist noise) on decreasing characteristic capacitances, together with the constraint of using lower supply voltages to keep power dissipation manageable while clock frequency keeps increasing, has the potential to abruptly break Moore\u2019s law within 6\u20138 years, or earlier. \u00a9 2002 Elsevier Science B.V. All rights reserved."} {"_id": "5b63a03a937507838d4d85623d72fc1d1a852cc5", "title": "Low-power 4-2 and 5-2 compressors", "text": "This paper explores various low power higher order compressors such as 4-2 and 5-2 compressor units. These compressors are building blocks for binary multipliers. Various circuit architectures for 4-2 compressors are compared with respect to their delay and power consumption. The different circuits are simulated using HSPICE. A new circuit for a 5-2 compressor is then presented which is 12% faster and consumes 37% less power."} {"_id": "617d424beef3c633d55de45f9009bbc649969877", "title": "Analysis of jitter due to power-supply noise in phase-locked loops", "text": "Phase-locked loops (PLL) in RF and mixed signal VLSI circuits experience supply noise which translates into timing jitter. In this paper an analysis of the timing jitter due to the noise on the power supply rails is presented. Stochastic models of the power supply noise in VLSI circuits for different values of on-chip decoupling capacitances are presented first. This is followed by calculation of the phase noise of the voltage-controlled oscillator (VCO) in terms of the statistical properties of supply noise. Finally, the timing jitter of the PLL is predicted in response to the VCO phase noise. A PLL circuit has been designed in a 0.35\u03bc CMOS process, and our mathematical model was applied to determine the timing jitter.
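The thermal-noise argument in the Moore's-law abstract above rests on the kT/C relation: the rms Johnson-Nyquist voltage on a node grows as its capacitance shrinks, while supply voltages must keep falling. A quick illustration of that scaling:

```python
# Sketch: kT/C noise voltage versus node capacitance at room temperature.
from math import sqrt

k_B, T = 1.380649e-23, 300.0                 # Boltzmann constant (J/K), kelvin
for C in (1e-12, 1e-13, 1e-14, 1e-15):       # 1 pF down to 1 fF
    v_rms = sqrt(k_B * T / C)                # rms thermal noise on the node
    print(f"C = {C:.0e} F  ->  v_noise = {v_rms * 1e3:.2f} mV rms")
# Noise climbs from ~0.06 mV at 1 pF to ~2 mV at 1 fF, eating into the
# shrinking margin between logic levels as supply voltages drop.
```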
Experimental results confirm the accuracy of the prediction model."} {"_id": "86b732702d3db71c7865d4e761c8aaa1d6930caa", "title": "Performance analysis of low-power 1-bit CMOS full adder cells", "text": "A performance analysis of 1-bit full-adder cells is presented. The adder cell is anatomized into smaller modules. The modules are studied and evaluated extensively. Several designs of each of them are developed, prototyped, simulated and analyzed. Twenty different 1-bit full-adder cells are constructed (most of them are novel circuits) by connecting combinations of different designs of these modules. Each of these cells exhibits different power consumption, speed, area, and driving capability figures. Two realistic circuit structures that include adder cells are used for simulation. A library of full-adder cells is developed and presented so that circuit designers can pick the full-adder cell that best satisfies their specific application."} {"_id": "ef153bd9092f142b3845568937145fdb203fd978", "title": "Harvesting Big Data to Enhance Supply Chain Innovation Capabilities : An Analytic Infrastructure Based on Deduction Graph", "text": "Today, firms can access big data (tweets, videos, click streams, and other unstructured sources) to extract new ideas or understanding about their products, customers, and markets. Thus, managers increasingly view data as an important driver of innovation and a significant source of value creation and competitive advantage. To get the most out of big data (in combination with a firm\u2019s existing data), a more sophisticated way of handling, managing, analysing and interpreting data is necessary. However, there is a lack of data analytics techniques to assist firms in capturing the potential of innovation afforded by data and in gaining competitive advantage. This research aims to address this gap by developing and testing an analytic infrastructure based on the deduction graph technique. The proposed approach provides an analytic infrastructure for firms to incorporate their own competence sets with those of other firms. Case study results indicate that the proposed data analytic approach enables firms to utilise big data to gain competitive advantage by enhancing their supply chain innovation capabilities."} {"_id": "0bd6a7a179f1c6c2b59bf1cc9227703a137767d9", "title": "Capturing the mood: facebook and face-to-face encounters in the workplace", "text": "What makes people feel happy, engaged and challenged at work? We conducted an in situ study of Facebook and face-to-face interactions examining how they influence people's mood in the workplace. Thirty-two participants in an organization were each observed for five days in their natural work environment using automated data capture and experience sampling. Our results show that online and offline social interactions are associated with different moods, suggesting that they serve different purposes at work. Face-to-face interactions are associated with a positive mood throughout the day whereas Facebook use and engagement in work contribute to a positive feeling at the end of the day. Email use is associated with negative affect and, along with multitasking, with a feeling of engagement and challenge throughout the day.
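For the full-adder survey above, the modules that the cells are anatomized into correspond to the standard Boolean decomposition of the sum and carry outputs. A small truth-table check of that decomposition:

```python
# Sketch: Boolean decomposition of a 1-bit full adder into XOR and carry modules.
def full_adder(a, b, cin):
    p = a ^ b                    # "propagate" intermediate signal (first XOR module)
    s = p ^ cin                  # sum output (second XOR module)
    cout = (a & b) | (p & cin)   # carry output (carry module)
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```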
Our findings provide initial evidence of how online and offline interactions affect workplace mood, and could inform practices to improve employee morale."} {"_id": "c3b1c33848e93ad6a853a62412494bba36aa7f2a", "title": "A Survey of Non-conventional Techniques for Low-voltage Low-power Analog Circuit Design", "text": "Designing integrated circuits able to work under low-voltage (LV) low-power (LP) conditions is currently undergoing a very considerable boom. Reducing the voltage supply and power consumption of integrated circuits is a crucial factor, since in general it ensures device reliability, prevents overheating of the circuits and, in particular, prolongs the operation period for battery-powered devices. Recently, non-conventional techniques, i.e. bulk-driven (BD), floating-gate (FG) and quasi-floating-gate (QFG) techniques, have been proposed as powerful ways to reduce the design complexity and push the voltage supply towards the threshold voltage of the MOS transistors (MOST). Therefore, this paper presents the operation principle and the advantages and disadvantages of each of these techniques, enabling circuit designers to choose the proper design technique based on application requirements. As an example of application, three operational transconductance amplifiers (OTAs) based on these non-conventional techniques are presented; the voltage supply is only \u00b10.4 V and the power consumption is 23.5 \u03bcW. PSpice simulation results using the 0.18 \u03bcm CMOS technology from TSMC are included to verify the design functionality and correspondence with theory."} {"_id": "fff5273db715623150986d66ec370188a630b4a5", "title": "Listening to music and physiological and psychological functioning: the mediating role of emotion regulation and stress reactivity.", "text": "Music listening has been suggested to have short-term beneficial effects. The aim of this study was to investigate the association and potential mediating mechanisms between various aspects of habitual music-listening behaviour and physiological and psychological functioning. An internet-based survey was conducted in university students, measuring habitual music-listening behaviour, emotion regulation, stress reactivity, as well as physiological and psychological functioning. A total of 1230 individuals (mean\u2009=\u200924.89\u2009\u00b1\u20095.34 years, 55.3% women) completed the questionnaire. Quantitative aspects of habitual music-listening behaviour, i.e. average duration of music listening and subjective relevance of music, were not associated with physiological and psychological functioning. In contrast, qualitative aspects, i.e. reasons for listening (especially 'reducing loneliness and aggression', and 'arousing or intensifying specific emotions') were significantly related to physiological and psychological functioning (all p\u2009=\u20090.001). These direct effects were mediated by distress-augmenting emotion regulation and individual stress reactivity. Habitual music-listening behaviour appears to be a multifaceted behaviour that is further influenced by dispositions that are usually not related to music listening. Consequently, habitual music-listening behaviour is not obviously linked to physiological and psychological functioning."} {"_id": "5b37a24a31e4f83d056fa1da4048ea0d4dd6c5e1", "title": "Product type and consumers' perception of online consumer reviews", "text": "Consumers hesitate to buy experience products online because it is hard to get enough information about experience products via the Internet.
Online consumer reviews may change that, as they offer consumers indirect experiences about dominant attributes of experience products, transforming them into search products. When consumers are exposed to an online consumer review, it should be noted that there are different kinds of review sources. This study investigates the effects of review source and product type on consumers\u2019 perception of a review. The result of the online experiment suggests that product type can moderate consumers\u2019 perceived credibility of a review from different review sources, and the major findings are: (1) consumers are more influenced by a review for an experience product than for a search product when the review comes from consumer-developed review sites, and (2) a review from an online community is perceived to be the most credible for consumers seeking information about an experience product. The findings provide managerial implications for marketers as to how they can better manage online consumer reviews."} {"_id": "9ae85a3588fce6e500ddef14313e29268b6ce775", "title": "High-Throughput and Language-Agnostic Entity Disambiguation and Linking on User Generated Data", "text": "The Entity Disambiguation and Linking (EDL) task matches entity mentions in text to a unique Knowledge Base (KB) identifier such as a Wikipedia or Freebase id. It plays a critical role in the construction of a high quality information network, and can be further leveraged for a variety of information retrieval and NLP tasks such as text categorization and document tagging. EDL is a complex and challenging problem due to ambiguity of the mentions and real world text being multi-lingual. Moreover, EDL systems need to have high throughput and should be lightweight in order to scale to large datasets and run on off-the-shelf machines. More importantly, these systems need to be able to extract and disambiguate dense annotations from the data in order to enable an Information Retrieval or Extraction task running on the data to be more efficient and accurate. In order to address all these challenges, we present the Lithium EDL system and algorithm, a high-throughput, lightweight, language-agnostic EDL system that extracts and correctly disambiguates 75% more entities than state-of-the-art EDL systems and is significantly faster than them."} {"_id": "a7141592d15578bff7bf3b694900c929d4e6a5d8", "title": "A hybrid factor analysis and probabilistic PCA-based system for dictionary learning and encoding for robust speaker recognition", "text": "Probabilistic Principal Component Analysis (PPCA) based low dimensional representation of speech utterances is found to be useful for speaker recognition. Although the performance of the FA (Factor Analysis)-based total variability space model is found to be superior, the hyperparameter estimation procedure in PPCA is computationally efficient. In this work, recent insight on the FA-based approach as a combination of dictionary learning and encoding is explored to use its encoding procedure in the PPCA framework. With the use of an alternate encoding technique on dictionaries learnt using PPCA, the performance of the state-of-the-art FA-based i-vector approach is matched by using the proposed procedure. A speed-up of 4x is obtained in hyperparameter estimation, at the cost of a 0.51% deterioration in performance in terms of the Equal Error Rate (EER) in the worst case.
Compared to the conventional PPCA model, absolute improvements of 2.1% and 2.8% are observed on two telephone conditions of NIST 2008 SRE database. Using Canonical Correlational Analysis, it is shown that the i-vectors extracted from the conventional FA model and the proposed approach are highly correlated."} {"_id": "70e8634a46013c2eb4140ed3917743910f90657d", "title": "Remarkable archaeal diversity detected in a Yellowstone National Park hot spring environment.", "text": "Of the three primary phylogenetic domains--Archaea (archaebacteria), Bacteria (eubacteria), and Eucarya (eukaryotes)--Archaea is the least understood in terms of its diversity, physiologies, and ecological panorama. Although many species of Crenarchaeota (one of the two recognized archaeal kingdoms sensu Woese [Woese, C. R., Kandler, O. & Wheelis, M. L. (1990) Proc. Natl. Acad. Sci. USA 87, 4576-4579]) have been isolated, they constitute a relatively tight-knit cluster of lineages in phylogenetic analyses of rRNA sequences. It seemed possible that this limited diversity is merely apparent and reflects only a failure to culture organisms, not their absence. We report here phylogenetic characterization of many archaeal small subunit rRNA gene sequences obtained by polymerase chain reaction amplification of mixed population DNA extracted directly from sediment of a hot spring in Yellowstone National Park. This approach obviates the need for cultivation to identify organisms. The analyses document the existence not only of species belonging to well-characterized crenarchaeal genera or families but also of crenarchaeal species for which no close relatives have so far been found. The large number of distinct archaeal sequence types retrieved from this single hot spring was unexpected and demonstrates that Crenarchaeota is a much more diverse group than was previously suspected. The results have impact on our concepts of the phylogenetic organization of Archaea."} {"_id": "2539fad11485821d2bfd573dbabe2f6802d24e07", "title": "Interfacing industrial robots using Realtime Primitives", "text": "Today, most industrial robots are interfaced using text-based programming languages. These languages offer the possibility to declare robotic-specific data types, to specify simple motions, and to interact with tools and sensors via I/O operations. While tailored to the underlying robot controller, they usually only offer a fixed and controller-specific set of possible instructions. The specification of complex motions, the synchronization of cooperating robots and the advanced use of sensors is often very difficult or not even feasible. To overcome these limitations, this paper presents a generic and extensible interface for industrial robots, the Realtime Primitives Interface, as part of a larger software architecture. It allows a flexible specification of complex control instructions and can facilitate the development of sustainable robot controllers. The advantages of this approach are illustrated with several examples."} {"_id": "b4702ebf71b74023d9769677aff2b2f30900a6cf", "title": "Adaptive Quantization for Deep Neural Network", "text": "In recent years Deep Neural Networks (DNNs) have been rapidly developed in various applications, together with increasingly complex architectures. The performance gain of these DNNs generally comes with high computational costs and large memory consumption, which may not be affordable for mobile platforms. 
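One concrete instance of the per-layer operation that the adaptive-quantization abstract above optimizes is symmetric uniform ("fake") quantization of a weight tensor to b bits. A sketch showing how the quantization error grows as the bit-width shrinks; the paper's bit-width selection measurement is not reproduced here.

```python
# Sketch: symmetric uniform quantization of weights to a given bit-width.
import numpy as np

def quantize(w, bits):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax              # symmetric dynamic range
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                            # de-quantized ("fake") weights

w = np.random.randn(1000).astype(np.float32)    # stand-in layer weights
for b in (8, 4, 2):
    err = np.mean((w - quantize(w, b)) ** 2)    # per-layer quantization error
    print(f"{b}-bit quantization MSE: {err:.5f}")
```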
Deep model quantization can be used for reducing the computation and memory costs of DNNs, and for deploying complex DNNs on mobile equipment. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement for finding the optimal quantization bit-width for each layer. This is the first work that theoretically analyses the relationship between parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, and achieves a 20-40% higher compression rate compared to equal bit-width quantization at the same model prediction accuracy."} {"_id": "a494042b43e6a15b23da1b6d3e422b5a932f0aa4", "title": "Toroidal skin drive for snake robot locomotion", "text": "Small robots have the potential to access confined spaces where humans cannot go. However, the mobility of wheeled and tracked systems is severely limited in cluttered environments. Snake robots using biologically inspired gaits for locomotion can provide better access in many situations, but are slow and can easily snag. This paper introduces an alternative approach to snake robot locomotion, in which the entire surface of the robot provides continuous propulsive force to significantly improve speed and mobility in many environments."} {"_id": "fa543fa269762e795082f51625b91d7860793d90", "title": "X-band 200W AlGaN/GaN HEMT for high power application", "text": "A 200-Watt GaN high electron mobility transistor (HEMT) has been developed for X-band applications. The device consists of four dies of 0.35 um-gate GaN HEMTs with 16 mm gate periphery, together with input and output 2-stage impedance transformers, assembled into a low thermal resistance package. The developed GaN HEMT provides 204 W output power and 12 dB small-signal gain at 9.3 GHz with a power-added efficiency of 32% under pulsed conditions at a duty cycle of 10% with a pulse width of 100 usec."} {"_id": "a8c88986ef5e223261e4689040975a2ccff24017", "title": "Implementation of Sugeno FIS in model reference adaptive system adaptation scheme for speed sensorless control of PMSM", "text": "Model reference adaptive system (MRAS) is one of the popular methods to observe the speed and angle information of the permanent magnet synchronous motor (PMSM). This paper proposes a new adaptation scheme for MRAS to replace the conventional PI controller, based on the Popov hyperstability theorem. In this project, the speed of the PMSM is controlled using the field oriented control (FOC) method, and the MRAS is used to observe the speed and the rotor position of the motor. The reference model for the MRAS is the PMSM itself, while the adjustable model is the current model in the rotor reference frame. The Sugeno Fuzzy Logic Inference System (FIS) is implemented in the adaptation scheme in order to tune the error between the reference and the adjustable model. The proposed method is shown to be capable of tracking the motor speed efficiently over a wide range of operating speeds."} {"_id": "49472d1bd25a3cf8426ea86aac29d8159a825728", "title": "Analysis of Oblique Aerial Images for Land Cover and Point Cloud Classification in an Urban Environment", "text": "In addition to aerial imagery, point clouds are important remote sensing data in urban environment studies.
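Since the MRAS abstract above replaces a PI adaptation law with a Sugeno FIS, a zero-order Sugeno evaluation may help fix ideas: rule firing strengths weight constant consequents, and the output is their weighted average. The membership shapes and consequents below are illustrative, not the paper's tuned values.

```python
# Sketch: zero-order Sugeno FIS mapping a speed error to a correction signal.
import numpy as np

def tri(x, a, b, c):                 # triangular membership function
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def sugeno(err):
    # (firing strength over the speed error, constant consequent) per rule
    rules = [(tri(err, -2, -1, 0), -0.5),   # negative error -> decrease
             (tri(err, -1,  0, 1),  0.0),   # near zero      -> hold
             (tri(err,  0,  1, 2),  0.5)]   # positive error -> increase
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float((w * z).sum() / (w.sum() + 1e-9))  # weighted-average defuzzification

print(sugeno(0.6))   # partially fires "hold" and "increase" rules
```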
It is essential to extract semantic information from both images and point clouds for such purposes; thus, this study aims to automatically classify 3-D point clouds generated using oblique aerial imagery (OAI)/vertical aerial imagery (VAI) into various urban object classes, such as roof, facade, road, tree, and grass. A multicamera airborne imaging system that can simultaneously acquire VAI and OAI is suggested. The acquired small-format images contain only three RGB spectral bands and are used to generate photogrammetric point clouds through a multiview-stereo dense matching technique. To assign each 3-D point cloud to a corresponding urban object class, we first analyzed the original OAI through object-based image analyses. A rule-based hierarchical semantic classification scheme that utilizes spectral information and geometry- and topology-related features was developed, in which the object height and gradient features were derived from the photogrammetric point clouds to assist in the detection of elevated objects, particularly for the roof and facade. Finally, the photogrammetric point clouds were classified into the aforementioned five classes. The classification accuracy was assessed on the image space, and four experimental results showed that the overall accuracy is between 82.47% and 91.8%. In addition, visual and consistency analyses were performed to demonstrate the proposed classification scheme's feasibility, transferability, and reliability, particularly for distinguishing elevated objects from OAI, which has a severe occlusion effect, image-scale variation, and ambiguous spectral characteristics."} {"_id": "cfb353785d6eb6049e772655553593d95fa090a3", "title": "Exploring the Placement and Design of Word-Scale Visualizations", "text": "We present an exploration and a design space that characterize the usage and placement of word-scale visualizations within text documents. Word-scale visualizations are a more general version of sparklines-small, word-sized data graphics that allow meta-information to be visually presented in-line with document text. In accordance with Edward Tufte's definition, sparklines are traditionally placed directly before or after words in the text. We describe alternative placements that permit a wider range of word-scale graphics and more flexible integration with text layouts. These alternative placements include positioning visualizations between lines, within additional vertical and horizontal space in the document, and as interactive overlays on top of the text. Each strategy changes the dimensions of the space available to display the visualizations, as well as the degree to which the text must be adjusted or reflowed to accommodate them. We provide an illustrated design space of placement options for word-scale visualizations and identify six important variables that control the placement of the graphics and the level of disruption of the source text. We also contribute a quantitative analysis that highlights the effect of different placements on readability and text disruption. Finally, we use this analysis to propose guidelines to support the design and placement of word-scale visualizations."} {"_id": "ff206b8619bc9fa1587c431b329d2fcfb94eb47c", "title": "A deep learning-based method for relative location prediction in CT scan images", "text": "Relative location prediction in computed tomography (CT) scan images is a challenging problem. 
In this paper, a regression model based on one-dimensional convolutional neural networks is proposed to determine the relative location of a CT scan image both robustly and precisely. A public dataset is employed to validate the performance of the proposed method using 5-fold cross-validation. Experimental results demonstrate the excellent performance of the proposed model compared with state-of-the-art techniques, achieving a median absolute error of 1.04 cm and a mean absolute error of 1.69 cm."} {"_id": "efc9001f54de9c3e984d62fa7e0e911f43eda266", "title": "Parametric Shape-from-Shading by Radial Basis Functions", "text": "In this paper, we present a new method of shape from shading by using radial basis functions to parameterize the object depth. The radial basis functions are deformed by adjusting their centers, widths, and weights such that the intensity errors are minimized. The initial centers and widths are arranged hierarchically to speed up convergence and to stabilize the solution. Although the smoothness constraint is used, it can be eventually dropped out without causing instabilities in the solution. An important feature of our parametric shape-from-shading method is that it offers a unified framework for integration of multiple sensory information. We show that knowledge about surface depth and/or surface normals anywhere in the image can be easily incorporated into the shape from shading process. It is further demonstrated that even qualitative knowledge can be used in shape from shading to improve 3D reconstruction. Experimental comparisons of our method with several existing ones are made by using both synthetic and real images. Results show that our solution is more accurate than the others."} {"_id": "a9ad85feef3ee89492631245a29621454e3d798a", "title": "Practical Deep Stereo (PDS): Toward applications-friendly deep stereo matching", "text": "End-to-end deep-learning networks recently demonstrated extremely good performance for stereo matching. However, existing networks are difficult to use for practical applications since (1) they are memory-hungry and unable to process even modest-size images, (2) they have to be trained for a given disparity range. The Practical Deep Stereo (PDS) network that we propose addresses both issues: First, its architecture relies on novel bottleneck modules that drastically reduce the memory footprint in inference, and additional design choices allow it to handle greater image sizes during training. This results in a model that leverages large image context to resolve matching ambiguities. Second, a novel sub-pixel cross-entropy loss combined with a MAP estimator makes this network less sensitive to ambiguous matches, and applicable to any disparity range without re-training. We compare PDS to state-of-the-art methods published over the recent months, and demonstrate its superior performance on the FlyingThings3D and KITTI sets."} {"_id": "6180a8a082c3d0e85dcb9cec3677923ff7633bb9", "title": "Digital Infrastructures : The Missing IS Research Agenda", "text": "Since the inauguration of information systems research (ISR) two decades ago, the information systems (IS) field\u2019s attention has moved beyond administrative systems and individual tools. Millions of users log onto Facebook, download iPhone applications, and use mobile services to create decentralized work organizations. Understanding these new dynamics will necessitate the field paying attention to digital infrastructures as a category of IT artifacts.
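A regressor of the kind named in the CT slice-location abstract above can be assembled from 1-D convolutions followed by a bounded output. A hedged PyTorch sketch; the layer sizes, the per-slice input featurization, and the L1 objective are assumptions for illustration.

```python
# Sketch: 1-D CNN regressing a relative axial location in [0, 1] per CT slice.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),          # relative location, bounded to [0, 1]
)

x = torch.randn(8, 1, 256)                   # 8 slices, 256-d feature vectors each
target = torch.rand(8)                       # stand-in relative locations
loss = nn.L1Loss()(model(x).squeeze(1), target)  # mean absolute error objective
print(float(loss))
```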
A state-of-the-art review of the literature reveals a growing interest in digital infrastructures but also confirms that the field has yet to put infrastructure at the centre of its research endeavor. To assist this shift we propose three new directions for IS research: (1) theories of the nature of digital infrastructure as a separate type of IT artifact, sui generis; (2) digital infrastructures as relational constructs shaping all traditional IS research areas; (3) paradoxes of change and control as salient IS phenomena. We conclude with suggestions for how to study longitudinal, large-scale sociotechnical phenomena while striving to remain attentive to the limitations of the traditional categories that have guided IS research."} {"_id": "247da68527908544124a1f35b3be527ee0e473ac", "title": "The role of working memory in processing L2 input: Insights from eye-tracking", "text": "Our study investigated how attention paid to a target syntactic construction, causative 'had', is related to the storage capacity and attention regulation function of working memory (WM), and how these WM abilities moderate changes in knowledge of the target construction under different input conditions. 80 Sri Lankan learners of English were exposed to examples of the target construction in explicit and implicit learning conditions and their eye movements were tracked as they read the input. Correlational and multiple regression analyses indicated a very strong relationship between WM abilities and gains in the knowledge of the target construction. WM scores were closely associated with gains in receptive knowledge in all input conditions, but they had a weaker link to the improvement of productive knowledge in the implicit learning conditions. The amount of attention paid to input was also strongly related to WM abilities."} {"_id": "c65945c08b7fd77ffd2c53369e8928699c3993e7", "title": "Comparing Alzheimer\u2019s and Parkinson\u2019s diseases networks using graph communities structure", "text": "Recent advances in large-dataset analysis offer new insights into modern biology, allowing system-level investigation of pathologies. Here we describe a novel computational method that exploits the ever-growing amount of \u201comics\u201d data to shed light on Alzheimer\u2019s and Parkinson\u2019s diseases. Neurological disorders exhibit a huge number of molecular alterations due to a complex interplay between genetic and environmental factors. Classical reductionist approaches are focused on a few elements, providing a narrow overview of the etiopathogenic complexity of multifactorial diseases. On the other hand, high-throughput technologies allow the evaluation of many components of biological systems and their behaviors. Analysis of Parkinson\u2019s Disease (PD) and Alzheimer\u2019s Disease (AD) from a network perspective can highlight proteins or pathways that are common but differently represented, and that can discriminate between the two pathological conditions, thus highlighting similarities and differences. In this work we propose a strategy that exploits network community structure identified with a state-of-the-art network community discovery algorithm called InfoMap, which takes advantage of information theory principles. We used two similarity measurements to quantify functional and topological similarities between the two pathologies.
We built a Similarity Matrix to highlight similar communities and we analyzed statistically significant GO terms found in clustered areas of the matrix and in network communities. Our strategy allowed us to identify common known and unknown processes including DNA repair, RNA metabolism and glucose metabolism not detected with simple GO enrichment analysis. In particular, we were able to capture the connection between mitochondrial dysfunction and metabolism (glucose and glutamate/glutamine). This approach allows the identification of communities present in both pathologies, which highlights common biological processes. Conversely, the identification of communities without any counterpart can be used to investigate processes that are characteristic of only one of the two pathologies. In general, the same strategy can be applied to compare any pair of biological networks."} {"_id": "7dcb74aa3a2ac4285481dafdbfe2b52197cdd707", "title": "Design of Multimode Net-Type Resonators and Their Applications to Filters and Multiplexers", "text": "Net-type resonators with dual- and tri-mode electrical behaviors are presented and analyzed theoretically in this paper. The proposed net-type resonator is constructed from a short-ended and several open-ended transmission-line sections, which gives it the advantages of small size and more flexible resonant frequency allocation, and it is therefore particularly suitable for applications in the design of microwave devices. To verify the usefulness of the proposed resonators, three experimental examples, including a dual-mode bandpass filter, a dual-passband filter, and a triplexer, have been designed and fabricated with microstrip technology. Each of the designed circuits occupies a very small size and has good upper stopband performance. All measured results are in good agreement with the full-wave simulation results."} {"_id": "1d4cbc24ab1b3056d6acd9e097e4f6b24ba64267", "title": "The coming paradigm shift in forensic identification science.", "text": "Converging legal and scientific forces are pushing the traditional forensic identification sciences toward fundamental change. The assumption of discernible uniqueness that resides at the core of these fields is weakened by evidence of errors in proficiency testing and in actual cases. Changes in the law pertaining to the admissibility of expert evidence in court, together with the emergence of DNA typing as a model for a scientifically defensible approach to questions of shared identity, are driving the older forensic sciences toward a new scientific paradigm."} {"_id": "6a649276903f2d59891e95009b7426dc95eb84cf", "title": "Combining minutiae descriptors for fingerprint matching", "text": "A novel minutiae-based fingerprint matching algorithm is proposed. A minutiae matching algorithm has to solve two problems: correspondence and similarity computation. For the correspondence problem, we assign each minutia two descriptors: texture-based and minutiae-based descriptors, and use an alignment-based greedy matching algorithm to establish the correspondences between minutiae. For the similarity computation, we extract a 17-D feature vector from the matching result, and convert the feature vector into a matching score using a support vector classifier. The proposed algorithm is tested on the FVC2002 databases and compared to all participants in FVC2002. According to equal error rate, the proposed algorithm ranks 1st on DB3, the most difficult database in FVC2002, and on average ranks 2nd on all 4 databases.
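The cross-pathology community comparison in the AD/PD abstract above can be approximated by scoring the overlap between community gene sets, for example with a Jaccard similarity matrix. A toy sketch with stand-in gene sets; the actual study uses InfoMap partitions and GO-term statistics, which are not reproduced here.

```python
# Sketch: Jaccard similarity matrix between community gene sets of two networks.
import numpy as np

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Stand-in communities; real input would come from a community discovery run.
ad_comms = [{"APP", "MAPT", "PSEN1"}, {"TP53", "ATM"}]
pd_comms = [{"SNCA", "PARK7", "PSEN1"}, {"TP53", "BRCA1", "ATM"}]

S = np.array([[jaccard(a, p) for p in pd_comms] for a in ad_comms])
print(S)   # high entries flag communities shared by both pathologies;
           # rows/columns with no high entry flag pathology-specific communities
```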
\u00a9 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."} {"_id": "e83a2fa459ba921fb176beacba96038a502ff64d", "title": "Handbook of Fingerprint Recognition", "text": "7.3 Integration Strategies. Combination strategy: A combination strategy (also called a combination scheme) is a technique used to combine the output of the individual classifiers. The most popular combination strategies at the abstract level are based on majority vote rules, which simply assign an input pattern to the most voted class (see Section 7.2). When two classifiers are combined, either a logical AND or a logical OR operator is typically used. When more than two classifiers are integrated, the AND/OR rules can be combined. For example, a biometric system may work on \u201cfingerprint OR (face AND hand geometry)\u201d; that is, it requires a user to present either a fingerprint or both face and hand geometry for recognition. Class set reduction, logistic regression, and Borda counts are the most commonly used approaches in combining classifiers based on the rank labels (Ho, Hull, and Srihari, 1994). In class set reduction, a subset of the classes is selected with the aim that the subset be as small as possible and still contain the true class. Multiple subsets from multiple modalities are typically combined using either a union or an intersection of the subsets. The logistic regression and Borda count methods are collectively called the class set reordering methods. The objective here is to derive a consensus ranking of the given classes such that the true class is ranked at the top. Rank labels are very useful for integration in an indexing/retrieval system. A biometric retrieval system typically outputs an ordered list of candidates (most likely matches). The top element of this ordered list is the most likely to be a correct match and the bottom of the list is the least likely match. The most popular combination schemes for combining confidence values from multiple modalities are the sum, mean, median, product, minimum, and maximum rules. Kittler et al. (1998) have developed a theoretical framework in an attempt to understand the underlying mathematical basis of these popular schemes. Their experiments demonstrated that the sum or mean scheme typically performs very well in practice. A problem in using the sum rule is that the confidences (or scores) from different modalities should be normalized. This normalization typically involves mapping the confidence measures from different modalities into a common domain. For example, a biometric system may output a distance score (the lower the score, the more similar the patterns) whereas another may output a similarity score (the higher the score, the more similar the patterns), and thus the scores cannot be directly combined using the sum rule. In its simplest form, this normalization may only include inverting the sign of distance scores such that a higher score corresponds to a higher similarity. In a more complex form, the normalization may be non-linear, which can be learned from training data by estimating distributions of confidence values from each modality. The scores are then translated and scaled to have zero mean and unit variance, and then remapped to a fixed interval of (0,1) using a hyperbolic tangent function. Note that it is tempting to parameterize the estimated distributions for normalization.
However, such parameterization of distributions should be used with care, because the error rates of biometric systems are typically very small and a small error in estimating the tails of the distributions may result in a significant change in the error estimates (see Figure 7.3). Another common practice is to compute different scaling factors (weights) for each modality from training data, such that the accuracy of the combined classifier is maximized. This weighted sum rule is expected to work better than the simple sum rule when the component classifiers have different strengths (i.e., different error rates). Figure 7.3. a) Genuine and impostor distributions for a fingerprint verification system (Jain et al., 2000) and a Normal approximation for the impostor distribution. Visually, the Normal approximation seems to be good, but causes a significant decrease in performance compared to the non-parametric estimate as shown in the ROCs in b), where FMR is referred to as FAR (False Acceptance Rate) and (1-FNMR) as Genuine Acceptance Rate. \u00a9Elsevier. Some schemes to combine multiple modalities in biometric systems have also been studied from a theoretical point of view. Through a theoretical analysis, Daugman (1999b) showed that if a strong biometric and a weak biometric are combined with an abstract level combination using either the AND or the OR voting rules, the performance of the combination will be worse than the better of the two individual biometrics. Hong, Jain, and Pankanti\u2019s (1999) theoretical analysis confirmed Daugman\u2019s findings, showing that AND/OR voting strategies can improve performance only when certain conditions are satisfied. Their analysis further showed that a confidence level fusion is expected to significantly improve overall performance even in the case of combining a weak and a strong biometric. Kittler et al. (1998) introduced a sensitivity analysis to explain why the sum (or average) rule outperforms the other rules. They showed that the sum rule is less sensitive than the other similar rules (such as the \u201cproduct\u201d rule) to the error rates of individual classifiers in estimating posterior probabilities (confidence values). They claim that the sum rule is the most appropriate for combining different estimates of the same posterior probabilities (e.g., resulting from different classifier initializations). Prabhakar and Jain (2002) compared the sum and the product rules with the Neyman\u2212Pearson combination scheme and showed that the product rule is worse than the sum rule when combining correlated features, and that both the sum and the product rules are inferior to the Neyman\u2212Pearson combination scheme when combining weak and strong classifiers."} {"_id": "f1672c6b9dc9e8e05d2432efa2a8ea5439b5db8b", "title": "A Real-Time Matching System for Large Fingerprint Databases", "text": "With the current rapid growth in multimedia technology, there is an imminent need for efficient techniques to search and query large image databases.
Because of their unique and peculiar needs, image databases cannot be treated in a similar fashion to other types of digital libraries. The contextual dependencies present in images, and the complex nature of two-dimensional image data, make the representation issues more difficult for image databases. An invariant representation of an image is still an open research issue. For these reasons, it is difficult to find a universal content-based retrieval technique. Current approaches based on shape, texture, and color for indexing image databases have met with limited success. Further, these techniques have not been adequately tested in the presence of noise and distortions. A given application domain offers stronger constraints for improving the retrieval performance. Fingerprint databases are characterized by their large size as well as noisy and distorted query images. Distortions are very common in fingerprint images due to elasticity of the skin. In this paper, a method of indexing large fingerprint image databases is presented. The approach integrates a number of domain-specific high-level features such as pattern class and ridge density at higher levels of the search. At the lowest level, it incorporates elastic structural feature-based matching for indexing the database. With a multilevel indexing approach, we have been able to reduce the search space. The search engine has also been implemented on Splash 2, a field programmable gate array (FPGA)-based array processor, to obtain near-ASIC-level matching speed. Our approach has been tested on locally collected test data and on NIST-9, a large fingerprint database available in the"} {"_id": "6d1582de1c59823954988b4202e874a2d9042fd9", "title": "Review of Farm Management Information Systems ( FMIS )", "text": "There have been considerable advancements in the field of MIS over the years, and it continues to grow and develop in response to the changing needs of the business and marketing environment. Professionals and academicians are contributing and serving the field of MIS through the dissemination of their knowledge and ideas in professional journals. Thus, changes and trends that likely have an impact on MIS concepts, processes, and implementation can be determined by reviewing the articles published in journals. The content of the articles published in journals can also give us an idea about the types of research and themes that are popular during a given period. To see the evolutionary change in the field of MIS, the present study examined the content of articles published in business and marketing journals."} {"_id": "32f96626bd0aefad6417125c8d32428273756b4c", "title": "Is it really about me?: message content in social awareness streams", "text": "In this work we examine the characteristics of social activity and patterns of communication on Twitter, a prominent example of the emerging class of communication systems we call \"social awareness streams.\" We use system data and message content from over 350 Twitter users, applying human coding and quantitative analysis to provide a deeper understanding of the activity of individuals on the Twitter network. In particular, we develop a content-based categorization of the type of messages posted by Twitter users, based on which we examine users' activity.
Our analysis shows two common types of user behavior in terms of the content of the posted messages, and exposes differences between users with respect to these activities."} {"_id": "6ebe87159e496b6ac04f5e320e471fc4d3a4af8a", "title": "DC Motor Speed Control using PID Controllers", "text": "An implementation of PID controllers for the speed control of a DC motor is given in this report. The motor is modeled as a first-order system and its response is studied. The speed control using PI and PID control modes is explained and an implementation of the controller using OP-AMPs is given. The response of the controller to load variations is looked at."} {"_id": "a3e0e71b9718d958f620655b3000103845e0a395", "title": "Echo: An Edge-Centric Code Offloading System With Quality of Service Guarantee", "text": "Code offloading is a promising way to accelerate mobile applications and reduce the energy consumption of mobile devices by shifting some computation to the cloud. However, existing code offloading systems suffer from a long communication delay between mobile devices and the cloud. To address this challenge, in this paper, we consider deploying edge nodes in close proximity to mobile devices and study how they benefit code offloading. We design an edge-centric code offloading system, called Echo, over a three-layer computing hierarchy consisting of mobile devices, the edge, and the cloud. A critical problem that needs to be addressed by Echo is deciding which methods should be offloaded to which computing platform (the edge or the cloud). Different from existing offloading systems that let mobile devices individually make offloading decisions, Echo implements a centralized decision engine at the edge. This edge-centric design can fully exploit the limited hardware resources at the edge to provide offloading services with a quality-of-service guarantee. Furthermore, we propose some novel mechanisms, e.g., lazy object transmission and differential object update, to further improve system performance. The results of a small-scale real deployment and trace-driven simulations show that Echo significantly outperforms existing code offloading systems in both execution time and energy consumption."} {"_id": "4b68a9b4301a37c8b4509d2652c99ac770dd7c03", "title": "A Load Balancing Model based for Cloud Computing using Public Cloud", "text": "It is difficult to maintain all the resources and services on a single system, so a load balancing strategy distributes the load from a single system across multiple systems, providing efficient responses to users and improving user satisfaction. Load balancing in cloud computing has an important impact on performance. To improve user satisfaction there is a need for efficient cloud computing, and for that we need a good load balancing algorithm. This article shows"} {"_id": "699dab450bd2a34594c17a08dd01149fa2610e60", "title": "An international qualitative study of ability and disability in ADHD using the WHO-ICF framework", "text": "This is the third in a series of four cross-cultural empirical studies designed to develop International Classification of Functioning, Disability and Health (ICF, and Children and Youth version, ICF-CY) Core Sets for Attention-Deficit Hyperactivity Disorder (ADHD). To explore the perspectives of individuals diagnosed with ADHD, self-advocates, immediate family members and professional caregivers on relevant areas of impairment and functional abilities typical for ADHD across the lifespan, as operationalized by the ICF(-CY).
A qualitative study using focus group discussions or semi-structured interviews of 76 participants, divided into 16 stakeholder groups, was conducted. Participants from five countries (Brazil, India, Saudi Arabia, South Africa and Sweden) were included. A deductive qualitative content analysis was conducted to extract meaningful functioning and disability concepts from the verbatim material. Extracted concepts were then linked to ICF(-CY) categories by independent researchers using a standardized linking procedure. In total, 82 ICF(-CY) categories were identified, of which 32 were related to activities and participation, 25 to environmental factors, 23 to body functions and 2 to body structures. Participants also provided opinions on the positive sides they experienced with ADHD. A high level of energy and drive, creativity, hyper-focus, agreeableness, empathy, and willingness to assist others were the most consistently reported strengths associated with ADHD. Stakeholder perspectives highlighted the need to appraise ADHD in a broader context, extending beyond diagnostic criteria into many areas of ability and disability as well as environmental facilitators and barriers. This qualitative study, along with three other studies (comprehensive scoping review, expert survey and clinical study), will provide the scientific basis to define ICF(-CY) Core Sets for ADHD, from which assessment tools can be derived for use in clinical and research settings, as well as in health care administration."} {"_id": "38b982afdab1b4810b954b4eec913308b2adcfb6", "title": "Channel Hardening-Exploiting Message Passing (CHEMP) Receiver in Large-Scale MIMO Systems", "text": "In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the H^H H matrix become increasingly weaker compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector (whose signal term becomes H^T H x, where x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of the H^T H matrix. We also propose a simple estimation scheme which directly obtains an estimate of H^T H (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of minimum mean square error (MMSE) detection. This is because the MPD algorithm does not need a matrix inversion. It also achieves significantly better performance compared to MMSE and other message passing detection algorithms that use an MMSE estimate of H. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching.
The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes."} {"_id": "fac41331cff5fa4d91438efee41581e5154886d9", "title": "A unified distance measurement for orientation coding in palmprint verification", "text": "Orientation coding based palmprint verification methods, such as competitive code, palmprint orientation code and robust line orientation code, are state-of-the-art verification algorithms with fast matching speeds. Orientation code makes use of two types of distance measure, SUM_XOR (angular distance) and OR_XOR (Hamming distance), yet little is known about their similarities and differences. This paper proposes a unified distance measure of which both SUM_XOR and OR_XOR can be regarded as special cases, and provides some principles for determining the parameters of the unified distance. Experimental results show that, using the same feature extraction and coding methods, the unified distance measure gets lower equal error rates than the original distance measures. \u00a9 2009 Elsevier B.V. All rights reserved."} {"_id": "029fc5ba929316c5209903b5afd38df3d635e2fc", "title": "Group recommendations with rank aggregation and collaborative filtering", "text": "The majority of recommender systems are designed to make recommendations for individual users. However, in some circumstances the items to be selected are not intended for personal usage but for a group; e.g., a DVD could be watched by a group of friends. In order to generate effective recommendations for a group the system must satisfy, as much as possible, the individual preferences of the group's members.\n This paper analyzes the effectiveness of group recommendations obtained aggregating the individual lists of recommendations produced by a collaborative filtering system. We compare the effectiveness of individual and group recommendation lists using normalized discounted cumulative gain. It is observed that the effectiveness of a group recommendation does not necessarily decrease when the group size grows. Moreover, when individual recommendations are not effective a user could obtain better suggestions looking at the group recommendations. Finally, it is shown that the more alike the users in the group are, the more effective the group recommendations are."} {"_id": "f0883912e7b5fd18b474da7dba08e68f512bc7db", "title": "Lightweight Monitoring of Distributed Streams", "text": "As data becomes dynamic, large, and distributed, there is increasing demand for what have become known as distributed stream algorithms. Since continuously collecting the data at a central server and processing it there is infeasible, a common approach is to define local conditions at the distributed nodes, such that\u2014as long as they are maintained\u2014some desirable global condition holds.\n Previous methods derived local conditions focusing on communication efficiency. While proving very useful for reducing the communication volume, these local conditions often suffer from a heavy computational burden at the nodes. The computational complexity of the local conditions affects both the runtime and the energy consumption. These are especially critical for resource-limited devices like smartphones and sensor nodes. Such devices are becoming more ubiquitous due to the recent trend toward smart cities and the Internet of Things.
To accommodate the high data rates and limited resources of these devices, it is crucial that the local conditions be quickly and efficiently evaluated.\n Here we propose a novel approach, designated CB (for Convex/Concave Bounds). CB defines local conditions using suitably chosen convex and concave functions. Lightweight and simple, these local conditions can be rapidly checked on the fly. CB\u2019s superiority over the state-of-the-art is demonstrated in its reduced runtime and power consumption, by up to six orders of magnitude in some cases. As an added bonus, CB also reduced communication overhead in all the tested application scenarios."} {"_id": "b4894f7d6264b94ded94181d54c7a0c773e3662b", "title": "Comparison of vision-based and sensor-based systems for joint angle gait analysis", "text": "Gait analysis has recently become a popular research field and has been widely applied to the clinical diagnosis of neurodegenerative diseases. Various low-cost sensor-based and vision-based systems have been developed for capturing the hip and knee joint angles. However, the performances of these systems have not been validated and compared between each other. The purpose of this study is to set up an experiment and compare the performances of a sensor-based system with multiple inertial measurement units (IMUs), a vision-based gait analysis system with marker detection, and a markerless vision-based system on capturing the hip and knee joint angles during normal walking. The obtained measurements were validated with the data acquired from goniometers as ground truth measurement. The results indicate that the IMU-based sensor system gives excellent performance with small errors, while the vision systems produce acceptable results with slightly larger errors."} {"_id": "ec59569fdee17844ae071be1536a08f937f08c57", "title": "Speed, Data, and Ecosystems: The Future of Software Engineering", "text": "An evaluation of recent industrial and societal trends revealed three key factors driving software engineering's future: speed, data, and ecosystems. These factors' implications have led to guidelines for companies to evolve their software engineering practices. This article is part of a special issue on the Future of Software Engineering."} {"_id": "96b128e61be883c3740e2376a82908181563a753", "title": "Convolutional Neural Network for Brain MR Imaging Extraction Using Silver-Standards Masks", "text": "Convolutional neural networks (CNN) for medical imaging are constrained by the number of annotated data required for the training stage. Usually, manual annotation is considered to be the \u201cgold-standard\u201d. However, medical imaging datasets with expert manual segmentation are scarce as this step is time-consuming and expensive. Moreover, single-rater manual annotation is often used in data-driven approaches, making the algorithm optimal for one guideline only. In this work, we propose a convolutional neural network (CNN) for brain magnetic resonance (MR) imaging extraction, fully trained with what we refer to as silver-standard masks. To the best of our knowledge, our approach is the first deep learning approach that is fully trained using silver-standard masks, achieving optimal generalization. Furthermore, regarding the Dice coefficient, we outperform the current skull-stripping (SS) state-of-the-art method on three datasets.
Our method consists of 1) a dataset with silver-standard masks as input, 2) a tri-planar method using parallel 2D U-Net-based CNNs, which we refer to as CONSNet, and 3) an auto-context implementation of CONSNet. CONSNet refers to our integrated approach, i.e., training with silver-standard masks and using a CNN architecture. Masks are generated by forming a consensus from a set of publicly available automatic segmentation methods using the simultaneous truth and performance level estimation (STAPLE) algorithm. The CNN architecture is robust, with dropout and batch normalization as regularizers. It was inspired by the 2D U-Net model published by the RECOD group. We conducted our analysis using three publicly available datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results also demonstrate that we have comparable overall performance versus current methods, including recently proposed deep learning-based strategies that were trained using manually annotated masks. Our usage of silver-standard masks, however, reduced the cost of manual annotation, diminished inter-/intra-rater variability, and avoided CNN segmentation super-specialization towards one specific SS method, found when mask generation is done through an agreement among many automatic methods. Moreover, usage of silver-standard masks enlarges the volume of input annotated data since we can generate annotation from unlabelled data. Also, our method has the advantage that, once trained, it takes only a few seconds to process a typical brain image volume using a modern graphics processing unit (GPU). In contrast, many of the other competitive consensus-building methods have processing times on the order of minutes."} {"_id": "4f7886b0a905d9c5c76327044482e60b5799584e", "title": "Evidence for sensory prediction deficits in schizophrenia.", "text": "OBJECTIVE\nPatients with schizophrenia experiencing delusions and hallucinations can misattribute their own actions to an external source. The authors test the hypothesis that patients with schizophrenia have defects in their ability to predict the sensory consequences of their actions.\n\n\nMETHOD\nThe authors measured sensory attenuation of self-produced stimuli by patients with schizophrenia and by healthy subjects.\n\n\nRESULTS\nPatients with schizophrenia demonstrated significantly less sensory attenuation than healthy subjects.\n\n\nCONCLUSIONS\nPatients with a diagnosis of schizophrenia have a dysfunction in their predictive mechanisms."} {"_id": "c69e481c33138db0a5514b2e0d9cb0142b52f9de", "title": "Implementing an Effective Test Automation Framework", "text": "Testing automation tools enable developers and/or testers to easily automate the entire process of testing in software development. Nevertheless, adopting automated testing is not easy and is often unsuccessful due to a lack of key information and skills. In order to help solve such problems, we have designed a new framework to support a clear overview of the test design and/or plan for automating tests in distributed environments.
Those new to testing do not need to delve into complex automation tools or test scripts. This framework allows a programmer or tester to graphically specify the granularity of execution control, and it aids communication between various stakeholders and software developers. It also enables them to work within an automated Continuous Integration environment. In this paper, we describe details of our experiences with and usage of the proposed framework."} {"_id": "0e1c072b035756104151484e6548cac0517cc5f2", "title": "Synonym set extraction from the biomedical literature by lexical pattern discovery", "text": "Although there are a large number of thesauri for the biomedical domain, many of them lack coverage in terms and their variant forms. Automatic thesaurus construction based on patterns was first suggested by Hearst [1], but it is still not clear how to automatically construct such patterns for different semantic relations and domains. In particular it is not certain which patterns are useful for capturing synonymy. The assumption of extant resources such as parsers is also a limiting factor for many languages, so it is desirable to find patterns that do not use syntactical analysis. Finally, to give a more consistent and applicable result it is desirable to use these patterns to form synonym sets in a sound way. We present a method that automatically generates regular expression patterns by expanding seed patterns in a heuristic search and then develops a feature vector based on the occurrence of term pairs in each developed pattern. This allows for a binary classification of term pairs as synonymous or non-synonymous. We then model this result as a probability graph to find synonym sets, which is equivalent to the well-studied problem of finding an optimal set cover. We achieved 73.2% precision and 29.7% recall with our method, outperforming hand-made resources such as MeSH and Wikipedia. We conclude that automatic methods can play a practical role in developing new thesauri or expanding on existing ones, and this can be done with only a small amount of training data and no need for resources such as parsers. We also conclude that the accuracy can be improved by grouping into synonym sets."} {"_id": "5807d13c51281393e1ab06a33eeee83ff2b476fb", "title": "Age differences in the neural systems supporting human allocentric spatial navigation", "text": "Age-related declines in spatial navigation are well-known in human and non-human species. Studies in non-human species suggest that alteration in hippocampal and other neural circuitry may underlie behavioral deficits associated with aging, but little is known about the neural mechanisms of human age-related decline in spatial navigation. The purpose of the present study was to examine age differences in functional brain activation during virtual environment navigation. Voxel-based analysis of activation patterns in young subjects identified activation in the hippocampus and parahippocampal gyrus, retrosplenial cortex, right and left lateral parietal cortex, medial parietal lobe and cerebellum. In comparison to younger subjects, elderly participants showed reduced activation in the hippocampus and parahippocampal gyrus, medial parietal lobe and retrosplenial cortex. Relative to younger participants, elderly subjects showed increased activation in the anterior cingulate gyrus and medial frontal lobe.
These results provide evidence of age-specific neural networks supporting spatial navigation and identify a putative neural substrate for age-related differences in spatial memory and navigational skill."} {"_id": "58df2b3cd8b89f6620e43d322c17928d08a197ba", "title": "Advanced LSTM: A Study About Better Time Dependency Modeling in Emotion Recognition", "text": "Long short-term memory (LSTM) is normally used in recurrent neural networks (RNN) as the basic recurrent unit. However, conventional LSTM assumes that the state at the current time step depends on the previous time step. This assumption constrains the time dependency modeling capability. In this study, we propose a new variation of LSTM, advanced LSTM (A-LSTM), for better temporal context modeling. We employ A-LSTM in a weighted pooling RNN for emotion recognition. The A-LSTM outperforms the conventional LSTM by 5.5% relative. The A-LSTM based weighted pooling RNN can also complement the state-of-the-art emotion classification framework. This shows the advantage of A-LSTM."} {"_id": "131b53ba2c5d20ead6796c306a949cccbef95c1d", "title": "Carpenter: finding closed patterns in long biological datasets", "text": "The growth of bioinformatics has resulted in datasets with new characteristics. These datasets typically contain a large number of columns and a small number of rows. For example, many gene expression datasets may contain 10,000-100,000 columns but only 100-1000 rows. Such datasets pose a great challenge for existing (closed) frequent pattern discovery algorithms, since they have an exponential dependence on the average row length. In this paper, we describe a new algorithm called CARPENTER that is specially designed to handle datasets having a large number of attributes and a relatively small number of rows. Several experiments on real bioinformatics datasets show that CARPENTER is orders of magnitude better than previous closed pattern mining algorithms like CLOSET and CHARM."} {"_id": "f3e92100903dc9a8e3adb6013115df418c41484e", "title": "Incorporating intra-class variance to fine-grained visual recognition", "text": "Fine-grained visual recognition aims to capture discriminative characteristics amongst visually similar categories. State-of-the-art research has significantly improved fine-grained recognition performance by deep metric learning using triplet networks. However, the impact of intra-category variance on recognition performance and robust feature representation has not been well studied. In this paper, we propose to leverage intra-class variance in the metric learning of triplet networks to improve the performance of fine-grained recognition. By partitioning training images within each category into a few groups, we form the triplet samples across different categories as well as different groups, which is called Group Sensitive TRiplet Sampling (GS-TRS). Accordingly, the triplet loss function is strengthened by incorporating intra-class variance with GS-TRS, which may contribute to the optimization objective of the triplet network.
Extensive experiments over the benchmark datasets CompCar and VehicleID show that the proposed GS-TRS significantly outperforms state-of-the-art approaches in both classification and retrieval tasks."} {"_id": "6f046750a4fea7f404ca65002338129dbf6ab66d", "title": "Social-information-processing factors in reactive and proactive aggression in children's peer groups.", "text": "We examined social-information-processing mechanisms (e.g., hostile attributional biases and intention-cue detection deficits) in chronic reactive and proactive aggressive behavior in children's peer groups. In Study 1, a teacher-rating instrument was developed to assess these behaviors in elementary school children (N = 259). Reactive and proactive scales were found to be internally consistent, and factor analyses partially supported convergent and discriminant validities. In Study 2, behavioral correlates of these forms of aggression were examined through assessments by peers (N = 339). Both types of aggression related to social rejection, but only proactively aggressive boys were also viewed as leaders and as having a sense of humor. In Study 3, we hypothesized that reactive aggression (but not proactive aggression) would occur as a function of hostile attributional biases and intention-cue detection deficits. Four groups of socially rejected boys (reactive aggressive, proactive aggressive, reactive-proactive aggressive, and nonaggressive) and a group of average boys were presented with a series of hypothetical video-recorded vignettes depicting provocations by peers and were asked to interpret the intentions of the provocateur (N = 117). Only the two reactive-aggressive groups displayed biases and deficits in interpretations. In Study 4, attributional biases and deficits were found to be positively correlated with the rate of reactive aggression (but not proactive aggression) displayed in free play with peers (N = 127). These studies supported the hypothesis that attributional biases and deficits are related to reactive aggression but not to proactive aggression."} {"_id": "7613af8292d342d8cfcf323d463b414333811fda", "title": "An ensemble learning framework for anomaly detection in building energy consumption", "text": "During building operation, a significant amount of energy is wasted due to equipment and human-related faults. To reduce waste, today\u2019s smart buildings monitor energy usage with the aim of identifying abnormal consumption behaviour and notifying the building manager to implement appropriate energy-saving procedures. To this end, this research proposes a new pattern-based anomaly classifier, the collective contextual anomaly detection using sliding window (CCAD-SW) framework. The CCAD-SW framework identifies anomalous consumption patterns using overlapping sliding windows. To enhance the anomaly detection capacity of the CCAD-SW, this research also proposes the ensemble anomaly detection (EAD) framework. The EAD is a generic framework that combines several anomaly detection classifiers"} {"_id": "3ab3bb02a137ddfc419e7d160f8d1ff470080911", "title": "The Other Side of the Coin: A Framework for Detecting and Analyzing Web-based Cryptocurrency Mining Campaigns", "text": "Mining for cryptocurrencies is usually performed on high-performance single-purpose hardware or GPUs. However, mining can be easily parallelized and distributed over many less powerful systems.
Cryptojacking is a new threat on the Internet and describes code included in websites that uses a visitor's CPU to mine for cryptocurrencies without their consent. This paper introduces MiningHunter, a novel web crawling framework which is able to detect mining scripts even if they obfuscate their malicious activities. We scanned the Alexa Top 1 million websites for cryptojacking, collected more than 13,400,000 unique JavaScript files with a total size of 246 GB and found that 3,178 websites perform cryptocurrency mining without their visitors' consent. Furthermore, MiningHunter can be used to provide an in-depth analysis of cryptojacking campaigns. To show the feasibility of the proposed framework, three such campaigns are examined in detail. Our results provide the most comprehensive analysis to date of the spread of cryptojacking on the Internet."} {"_id": "0e78b20b27d27261f9ae088eb13201f2d5b185bd", "title": "Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning", "text": "Algorithms for feature selection fall into two broad categories: wrappers that use the learning algorithm itself to evaluate the usefulness of features, and filters that evaluate features according to heuristics based on general characteristics of the data. For application to large databases, filters have proven to be more practical than wrappers because they are much faster. However, most existing filter algorithms only work with discrete classification problems. This paper describes a fast, correlation-based filter algorithm that can be applied to continuous and discrete problems. The algorithm often outperforms the well-known ReliefF attribute estimator when used as a preprocessing step for naive Bayes, instance-based learning, decision trees, locally weighted regression, and model trees. It performs more feature selection than ReliefF does\u2014reducing the data dimensionality by fifty percent in most cases. Also, decision and model trees built from the preprocessed data are often significantly smaller."} {"_id": "1b65af0b2847cf6edb1461eda659f08be27bc76d", "title": "Regression Shrinkage and Selection via the Lasso", "text": "We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described."} {"_id": "59fc3d8a65d23466858a5238939a40bfce1a5942", "title": "Algorithms for Infinitely Many-Armed Bandits", "text": "We consider multi-armed bandit problems where the number of arms is larger than the possible number of experiments. We make a stochastic assumption on the mean-reward of a newly selected arm which characterizes its probability of being a near-optimal arm. Our assumption is weaker than in previous works.
We describe algorithms based on upper confidence bounds applied to a restricted set of randomly selected arms and provide upper bounds on the resulting expected regret. We also derive a lower bound which matches (up to a logarithmic factor) the upper bound in some cases."} {"_id": "004888621a4e4cee56b6633338a89aa036cf5ae5", "title": "Wrappers for Feature Subset Selection", "text": "In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes. \u00a9 1997 Elsevier Science B.V."} {"_id": "d70a334b4af474038264e59c227f946cd908a391", "title": "Evaluation on State of Charge Estimation of Batteries With Adaptive Extended Kalman Filter by Experiment Approach", "text": "An accurate State-of-Charge (SoC) estimation plays a significant role in battery systems used in electric vehicles due to the arduous operation environments and the requirement of ensuring safe and reliable operation of batteries. Among the conventional methods to estimate SoC, the Coulomb counting method is widely used, but its accuracy is limited due to the accumulated error. Another commonly used method is model-based online iterative estimation with the Kalman filters, which improves the estimation accuracy to some extent. To improve the performance of Kalman filters in SoC estimation, the adaptive extended Kalman filter (AEKF), which employs the covariance matching approach, is applied in this paper. First, we built an implementation flowchart of the AEKF for a general system. Second, we built an online open-circuit voltage (OCV) estimation approach with the AEKF algorithm so that we can then get the SoC estimate by looking up the OCV-SoC table. Third, we proposed a robust online model-based SoC estimation approach with the AEKF algorithm. Finally, the SoC estimation approaches are evaluated experimentally in terms of SoC estimation accuracy and robustness. The results indicate that the proposed online SoC estimation with the AEKF algorithm performs optimally, and that for different initial error values, the maximum SoC estimation error is less than 2% with closed-loop state estimation characteristics."} {"_id": "51182c66098d97061b9ac2e4d8ccf86cef22f76d", "title": "Movie recommendations based in explicit and implicit features extracted from the Filmtipset dataset", "text": "In this paper, we describe the experiments conducted by the Information Retrieval Group at the Universidad Aut\u00f3noma de Madrid (Spain) in order to better recommend movies for the 2010 CAMRa Challenge edition. Experiments were carried out on the dataset corresponding to the social Filmtipset track.
To obtain the movie recommendations we have used different algorithms based on Random Walks, which are well documented in the literature of collaborative recommendation. We have also included a new proposal in one of the algorithms in order to get better results. The results obtained have been computed by means of the trec_eval standard NIST evaluation procedure."} {"_id": "c12f1c55f8b8cca2dec1cf7cca2ae4408eda2b32", "title": "Classification and Prediction of Future Weather by using Back Propagation Algorithm-An Approach", "text": "Weather forecasting is the process of recording the parameters of weather like wind direction, wind speed, humidity, rainfall, temperature etc. The Back Propagation Algorithm can be applied to these parameters in order to predict the future weather. In the \u201cClassification and Prediction of Future Weather by using Back Propagation Algorithm\u201d technique, one parameter, say temperature, is varied by some unit, and the resulting variation in the other parameters, say humidity and rainfall, is predicted with respect to temperature. In this paper, different models which were used in the past for weather forecasting are discussed. This paper focuses on three parts: the first is the different models that have been used in weather forecasting, the second introduces a new wireless kit used for weather forecasting, and the third shows how the Back Propagation Algorithm can be applied to different parameters of the weather forecast. Keywords--Back Propagation Algorithm, Artificial Neural Network, Wireless kit, Classification and Prediction"} {"_id": "a2bfc30fd47584ba6cb1743d51c0c00b7430dfae", "title": "Review of Video and Image Defogging Algorithms and Related Studies on Image Restoration and Enhancement", "text": "Video and images acquired by a visual system are seriously degraded under hazy and foggy weather, which will affect the detection, tracking, and recognition of targets. Thus, restoring the true scene from such a foggy video or image is of significance. The main goal of this paper was to summarize current video and image defogging algorithms. We first presented a review of the detection and classification method of a foggy image. Then, we summarized existing image defogging algorithms, including image restoration algorithms, image contrast enhancement algorithms, and fusion-based defogging algorithms. We also presented current video defogging algorithms. We summarized objective image quality assessment methods that have been widely used for the comparison of different defogging algorithms, followed by an experimental comparison of various classical image defogging algorithms. Finally, we presented the problems of video and image defogging which need to be further studied. The code of all algorithms will be available at http://www.yongxu.org/lunwen.html."} {"_id": "6494c8b6281f44d0c893e40a61f1da8e5206353a", "title": "An Efficient Oblivious Database for the Public Cloud", "text": "Hardware enclaves such as Intel SGX are a promising technology for increasing the security of databases outsourced to the cloud. These enclaves provide an execution environment isolated from the hypervisor/OS, and encryption of data in memory. However, for applications that use large amounts of memory\u2014including most realistic databases\u2014enclaves do not protect against access pattern leaks, where an attacker observes which locations in memory are accessed.
The na\u00efve way to address this issue, using Oblivious RAM (ORAM) primitives, adds prohibitive overhead. In this paper, we propose ObliDB, a database that co-designs both its data structures (e.g., oblivious B+ trees) and physical operators to accelerate oblivious relational queries, giving up to 329\u00d7 speedup over na\u00efve ORAM. On analytics workloads, ObliDB ranges from competitive to 19\u00d7 faster than previous oblivious systems designed only for analytics, such as Opaque, and comes within 2.6\u00d7 of Spark SQL. Moreover, ObliDB also supports point queries, insertions, and deletions with latencies of 3\u201310ms, which is 7\u201322\u00d7 faster than previously published oblivious data structures, and makes ObliDB suitable for transactional workloads too. To our knowledge, ObliDB is the first general-purpose oblivious database to approach practical performance."} {"_id": "c89e5e22704c6df3f516b5483edaf667aae0e718", "title": "Colors and emotions: preferences and combinations.", "text": "Within three age groups (7-year-old children, 11-year-old children, and adults), preferences for colors and emotions were established by means of two distinct paired-comparison tasks. In a subsequent task, participants were asked to link colors to emotions by selecting an appropriate color. It was hypothesized that the number of times that each color was tied to a specific emotion would be predictable from the separate preferences for colors and emotions. Within age groups, participants had consistent preferences for colors and emotions, but preferences differed from one age group to another. Especially in the youngest group, the pattern of combinations between colors and emotions appeared to be meaningfully related to the preference order for colors and emotions."} {"_id": "f9f5cb17cf072874e438dc8e1d5f03f7b505a356", "title": "Spark: A Big Data Processing Platform Based on Memory Computing", "text": "Spark is a memory-based computing framework which offers better computing ability and fault tolerance, and supports batch, interactive, iterative and streaming computations. In this paper, we analyze Spark's primary framework and core technologies, and point out the advantages and disadvantages of Spark. In the end, we discuss future trends of Spark technologies."} {"_id": "8bcd9f8c4e75a64576d8592713c2aa6b5f1ad2d9", "title": "Towards an automatic early stress recognition system for office environments based on multimodal measurements: A review", "text": "Stress is a major problem of our society, as it is the cause of many health problems and huge economic losses in companies. Continuous high mental workloads and non-stop technological development, which lead to constant change and the need for adaptation, make the problem increasingly serious for office workers. To prevent stress from becoming chronic and provoking irreversible damage, it is necessary to detect it in its early stages. Unfortunately, an automatic, continuous and unobtrusive early stress detection method does not exist yet. The multimodal nature of stress and the research conducted in this area suggest that the developed method will depend on several modalities.
Thus, this work reviews and brings together the recent work carried out in automatic stress detection, covering the measurements performed across the three main modalities, namely, the psychological, physiological and behavioural modalities, along with contextual measurements, in order to give hints about the most appropriate techniques to be used and thereby facilitate the development of such a holistic system."} {"_id": "c2a730c06522395378b655038bab293042b4435d", "title": "An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination", "text": "This paper reviews some main results and progress in distributed multi-agent coordination, focusing on papers published in major control systems and robotics journals since 2006. Distributed coordination of multiple vehicles, including unmanned aerial vehicles, unmanned ground vehicles, and unmanned underwater vehicles, has been a very active research subject studied extensively by the systems and control community. The recent results in this area are categorized into several directions, such as consensus, formation control, optimization, and estimation. After the review, a short discussion section is included to summarize the existing research and to propose several promising research directions along with some open problems that are deemed important for further investigations."} {"_id": "d4528c8ebd5d4b9dab591b8e2288a0a8784b9949", "title": "Music Cognition and the Cognitive Sciences", "text": "Why should music be of interest to cognitive scientists, and what role does it play in human cognition? We review three factors that make music an important topic for cognitive scientific research. First, music is a universal human trait fulfilling crucial roles in everyday life. Second, music has an important part to play in ontogenetic development and human evolution. Third, appreciating and producing music simultaneously engage many complex perceptual, cognitive, and emotional processes, rendering music an ideal object for studying the mind. We propose an integrated status for music cognition in the Cognitive Sciences and conclude by reviewing challenges and big questions in the field and the way in which these reflect recent developments."} {"_id": "2dbe5601ef0e74a51fa3eab18d68ea753501a0df", "title": "Competency based management: a review of systems and approaches", "text": "Purpose \u2013 Aims to review the key concepts of competency management (CM) and to propose a method for developing competency models. Design/methodology/approach \u2013 Examines the CM features of 22 CM systems and 18 learning management systems. Findings \u2013 Finds that the areas of open standards (XML, web services, RDF), semantic technologies (ontologies and the semantic web) and portals with self-service technologies are going to play a significant part in the evolution of CM systems. Originality/value \u2013 Emphasizes the beneficial attributes of CM for private and public organizations."} {"_id": "2e3cde02fad19f48269c3cc3ba7ee7483d98bc1e", "title": "Nonoutsourceable Scratch-Off Puzzles to Discourage Bitcoin Mining Coalitions", "text": "An implicit goal of Bitcoin's reward structure is to diffuse network influence over a diverse, decentralized population of individual participants. Indeed, Bitcoin's security claims rely on no single entity wielding a sufficiently large portion of the network's overall computational power.
Unfortunately, rather than participating independently, most Bitcoin miners join coalitions called mining pools in which a central pool administrator largely directs the pool's activity, leading to a consolidation of power. Recently, the largest mining pool has accounted for more than half of the network's total mining capacity. Relatedly, "hosted mining" service providers offer their clients the benefit of economies of scale, tempting them away from independent participation. We argue that the prevalence of mining coalitions is due to a limitation of the Bitcoin proof-of-work puzzle -- specifically, that it affords an effective mechanism for enforcing cooperation in a coalition. We present several definitions and constructions for "nonoutsourceable" puzzles that thwart such enforcement mechanisms, thereby deterring coalitions. We also provide an implementation and benchmark results for our schemes to show they are practical."} {"_id": "7a6f802c089c339256dcbff9d3039abbe60ea076", "title": "LTE-V for Sidelink 5G V2X Vehicular Communications: A New 5G Technology for Short-Range Vehicle-to-Everything Communications", "text": "This article provides an overview of the long-term evolution-vehicle (LTE-V) standard supporting sidelink or vehicle-to-vehicle (V2V) communications using LTE's direct interface, named PC5. We review the physical layer changes introduced under Release 14 for LTE-V, its communication modes 3 and 4, and the LTE-V evolutions under discussion in Release 15 to support fifth-generation (5G) vehicle-to-everything (V2X) communications and autonomous vehicles' applications. Modes 3 and 4 support direct V2V communications but differ in how they allocate the radio resources. Resources are allocated by the cellular network under mode 3. Mode 4 does not require cellular coverage, and vehicles autonomously select their radio resources using a distributed scheduling scheme supported by congestion control mechanisms. Mode 4 is considered the baseline mode and represents an alternative to 802.11p or dedicated short-range communications (DSRC). In this context, this article also presents a detailed analysis of the performance of LTE-V sidelink mode 4, and proposes a modification to its distributed scheduling."} {"_id": "3a498f33da76cc9eccca1463528b0a3f8c4b25b7", "title": "MatrixWave: Visual Comparison of Event Sequence Data", "text": "Event sequence data analysis is common in many domains, including web and software development, transportation, and medical care. Few have investigated visualization techniques for comparative analysis of multiple event sequence datasets. Grounded in the real-world characteristics of web clickstream data, we explore visualization techniques for comparison of two clickstream datasets collected on different days or from users with different demographics. Through iterative design with web analysts, we designed MatrixWave, a matrix-based representation that allows analysts to get an overview of differences in traffic patterns and interactively explore paths through the website. We use color to encode differences and size to offer context over traffic volume. User feedback on MatrixWave is positive.
Our study participants made fewer errors with MatrixWave and preferred it over the more familiar Sankey diagram."} {"_id": "549867419710842621c14779307b1c7a5debddd6", "title": "Wireless Power Transfer for Vehicular Applications: Overview and Challenges", "text": "The more than century-old gasoline internal combustion engine is a major contributor to greenhouse gases. Electric vehicles (EVs) have the potential to achieve eco-friendly transportation. However, the major limitation in achieving this vision is battery technology, which suffers from drawbacks such as high cost, rare materials, low energy density, and large weight. The problems related to battery technology can be addressed by dynamically charging the EV while on the move. In-motion charging can reduce the battery storage requirement, which could significantly extend the driving range of an EV. This paper reviews recent advances in stationary and dynamic wireless charging of EVs. A comprehensive review of charging pads, power electronics configurations, compensation networks, controls, and standards is presented."} {"_id": "45061dddc7973a6a78cb2b555dd05953f38e06d3", "title": "Using NLP techniques for file fragment classification", "text": "The classification of file fragments is an important problem in digital forensics. The literature does not include comprehensive work on applying machine learning techniques to this problem. In this work, we explore the use of techniques from natural language processing to classify file fragments. We take a supervised learning approach, based on the use of support vector machines combined with the bag-of-words model, where text documents are represented as unordered bags of words. This technique has been repeatedly shown to be effective and robust in classifying text documents (e.g., in distinguishing positive movie reviews from negative ones). In our approach, we represent file fragments as \u201cbags of bytes\u201d with feature vectors consisting of unigram and bigram counts, as well as other statistical measurements (including entropy and others). We made use of the publicly available Garfinkel data corpus to generate file fragments for training and testing. We ran a series of experiments, and found that this approach is effective in this domain as well. \u00a9 2012 O. Zhulyn, S. Fitzgerald & G. Mathews. Published by Elsevier Ltd. All rights reserved."} {"_id": "508eb5a6156b8fa1b4547b611e85969438116fa2", "title": "Perceptual Generative Adversarial Networks for Small Object Detection", "text": "Detecting small objects is notoriously challenging due to their low resolution and noisy representation. Existing object detection pipelines usually detect small objects through learning representations of all the objects at multiple scales. However, the performance gain of such ad hoc architectures is usually limited and rarely pays off the computational cost. In this work, we address the small object detection problem by developing a single architecture that internally lifts representations of small objects to super-resolved ones, achieving similar characteristics as large objects and thus making them more discriminative for detection. For this purpose, we propose a new Perceptual Generative Adversarial Network (Perceptual GAN) model that improves small object detection through narrowing the representation difference of small objects from the large ones.
Specifically, its generator learns to transfer perceived poor representations of the small objects to super-resolved ones that are similar enough to real large objects to fool a competing discriminator. Meanwhile, its discriminator competes with the generator to identify the generated representation and imposes an additional perceptual requirement - generated representations of small objects must be beneficial for detection purposes - on the generator. Extensive evaluations on the challenging Tsinghua-Tencent 100K [45] and Caltech [9] benchmarks demonstrate the superiority of Perceptual GAN in detecting small objects, including traffic signs and pedestrians, over well-established state-of-the-art methods."} {"_id": "8d3fe0ba28864ac905fcdc58ed268bc21a91582c", "title": "Techniques for Machine Learning based Spatial Data Analysis: Research Directions", "text": "Today, machine learning techniques play a significant role in data analysis, predictive modeling and visualization. The main aim of machine learning algorithms is that they first learn from empirical data and can then be utilized in cases for which the modeled phenomenon is hidden or not yet described. Several significant techniques like Artificial Neural Network, Support Vector Regression, k-Nearest Neighbour, Bayesian Classifiers and Decision Trees have been developed in past years to achieve this task. The first and most complicated problem concerns the amount of available data. This paper reviews the state-of-the-art in the domain of spatial data analysis employing machine learning approaches. First, various methods that exist in the literature are summarized. Further, current research directions in this area are described. Based on the research done in past years, problems in existing systems are identified and future research directions are given."} {"_id": "6b2456d4570cf8f8344c46c9315229669578fc3b", "title": "DRCD: a Chinese Machine Reading Comprehension Dataset", "text": "\u91cd\u8996\u786b\u9178\u7684\u7a2e\u985e\u4ee5\u53ca\u5b83\u5011\u5728\u91ab\u5b78\u4e0a\u7684\u50f9\u503c...... \u5728 1746\u5e74,John Roebuck\u5247\u904b\u7528\u9019\u500b\u539f\u5247,\u958b\u5275\u925b\u5ba4\u6cd5,\u4ee5\u66f4\u4f4e\u6210\u672c\u6709\u6548\u5730\u5927\u91cf\u751f\u7522\u786b\u9178\u3002...... ... Others, such as Ibn S\u012bn\u0101, are more concerned about the types of sulfuric acid and their medical value. In 1746, John Roebuck applied this principle to create the lead chamber process to efficiently produce large quantities of sulfuric acid at a lower cost. Question: \u91cd\u8996\u786b\u9178\u7684\u7a2e\u985e\u4ee5\u53ca\u5b83\u5011\u5728\u91ab\u5b78\u4e0a\u7684\u50f9\u503c\u5730\u70ba\u54ea\u4f4d\u91ab\u5e2b? / Which physician valued the types of sulfuric acid and their medical value? Answer: \u4f0a\u672c\u00b7\u897f\u90a3 / Ibn S\u012bn\u0101 Question: \u925b\u5ba4\u6cd5\u65bc\u897f\u5143\u5e7e\u5e74\u958b\u5275? / When was the lead chamber process invented?
Answer: 1746\u5e74 / 1746"} {"_id": "3895b68df599bda93c3dc2e3aca3205bf6ab0783", "title": "Parental delay or refusal of vaccine doses, childhood vaccination coverage at 24 months of age, and the Health Belief Model.", "text": "OBJECTIVE\nWe evaluated the association between parents' beliefs about vaccines, their decision to delay or refuse vaccines for their children, and vaccination coverage of children at age 24 months.\n\n\nMETHODS\nWe used data from 11,206 parents of children aged 24-35 months at the time of the 2009 National Immunization Survey interview and determined their vaccination status at age 24 months. Data included parents' reports of delay and/or refusal of vaccine doses, psychosocial factors suggested by the Health Belief Model, and provider-reported up-to-date vaccination status.\n\n\nRESULTS\nIn 2009, approximately 60.2% of parents with children aged 24-35 months neither delayed nor refused vaccines, 25.8% only delayed, 8.2% only refused, and 5.8% both delayed and refused vaccines. Compared with parents who neither delayed nor refused vaccines, parents who delayed and refused vaccines were significantly less likely to believe that vaccines are necessary to protect the health of children (70.1% vs. 96.2%), that their child might get a disease if they aren't vaccinated (71.0% vs. 90.0%), and that vaccines are safe (50.4% vs. 84.9%). Children of parents who delayed and refused also had significantly lower vaccination coverage for nine of the 10 recommended childhood vaccines including diphtheria-tetanus-acellular pertussis (65.3% vs. 85.2%), polio (76.9% vs. 93.8%), and measles-mumps-rubella (68.4% vs. 92.5%). After adjusting for sociodemographic differences, we found that parents who were less likely to agree that vaccines are necessary to protect the health of children, to believe that their child might get a disease if they aren't vaccinated, or to believe that vaccines are safe had significantly lower coverage for all 10 childhood vaccines.\n\n\nCONCLUSIONS\nParents who delayed and refused vaccine doses were more likely to have vaccine safety concerns and perceive fewer benefits associated with vaccines. Guidelines published by the American Academy of Pediatrics may assist providers in responding to parents who may delay or refuse vaccines."} {"_id": "808f852f6101c674e10cae264df4bc6ef55fe7dc", "title": "An Improved Bank Credit Scoring Model: A Na\u00efve Bayesian Approach", "text": "Credit scoring is a decision tool used by organizations to grant or reject credit requests from their customers. A series of artificial intelligence and traditional approaches have been used to build credit scoring models and evaluate credit risk. Despite being ranked amongst the top 10 algorithms in data mining, the Na\u00efve Bayesian algorithm has not been extensively used in building credit score cards. Using demographic and material indicators as input variables, this paper investigates the ability of the Bayesian classifier to build a credit scoring model in the banking sector."} {"_id": "641ed032fc5b69eb14cdd462410ebe05f9dcbfc7", "title": "Political Issue Extraction Model: A Novel Hierarchical Topic Model That Uses Tweets By Political And Non-Political Authors", "text": "People often use social media to discuss opinions, including political ones. We refer to relevant topics in these discussions as political issues, and the alternative stances towards these topics as political positions.
We present a Political Issue Extraction (PIE) model that is capable of discovering political issues and positions from an unlabeled dataset of tweets. A strength of this model is that it uses Twitter timelines of political and non-political authors, and affiliation information of only political authors. The model estimates word-specific distributions (that denote political issues and positions) and hierarchical author/group-specific distributions (that show how these issues divide people). Our experiments using a dataset of 2.4 million tweets from the US show that this model effectively captures the desired properties (with respect to words and groups) of political discussions. We also evaluate the two components of the model by experimenting with: (a) the use of alternative strategies to classify words, and (b) the value added by incorporating group membership information. Estimated distributions are then used to predict political affiliation with 68% accuracy."} {"_id": "550677a4ce026963d8513f45b6d7ee784e52332b", "title": "Unrealistic optimism about susceptibility to health problems: Conclusions from a community-wide sample", "text": "A mailed questionnaire was used to obtain comparative risk judgments for 32 different hazards from a random sample of 296 individuals living in central New Jersey. The results demonstrate that an optimistic bias about susceptibility to harm\u2014a tendency to claim that one is less at risk than one's peers\u2014is not limited to any particular age, sex, educational, or occupational group. It was found that an optimistic bias is often introduced when people extrapolate from their past experience to estimate their future vulnerability. Thus, the hazards most likely to elicit unrealistic optimism are those associated with the belief (often incorrect) that if the problem has not yet appeared, it is unlikely to occur in the future. Optimistic biases also increase with the perceived preventability of a hazard and decrease with perceived frequency and personal experience. Other data presented illustrate the inconsistent relationships between personal risk judgments and objective risk factors."} {"_id": "75959c002c7c8f1d6a677b0f09d31931a0499d9d", "title": "The cultural construction of self-enhancement: an examination of group-serving biases.", "text": "Self-serving biases, found routinely in Western samples, have not been observed in Asian samples. Yet given the orientation toward individualism and collectivism in these 2 cultures, respectively, it is imperative to examine whether parallel differences emerge when the target of evaluation is the group. It may be that Asians show a group-serving bias parallel to the Western self-serving bias. In 2 studies, group-serving biases were compared across European Canadian, Asian Canadian, and Japanese students. Study 1 revealed that Japanese students evaluated a family member less positively than did both groups of Canadian students. Study 2 replicated this pattern with students' evaluations of their universities. The data suggest that cultural differences in enhancement biases are robust, generalizing to individuals' evaluations of their groups."} {"_id": "75a3e2cc31457039106ceae2b3ee66e9243d4696", "title": "Unrealistic optimism about susceptibility to health problems", "text": "In this study, 100 college students compared their own chances of experiencing 45 different health- and life-threatening problems with the chances of their peers.
They showed a significant optimistic bias for 34 of these hazards, consistently considering their own chances to be below average. Attempts to account for the amount of bias evoked by different hazards identified perceived controllability, lack of previous experience, and the belief that the problem appears during childhood as factors that tend to increase unrealistic optimism. The investigation also examined the importance of beliefs and emotions as determinants of self-reported interest in adopting precautions to reduce one's risk. It found that: (a) beliefs about risk likelihood, beliefs about risk severity, and worry about the risk all made independent contributions to interest in risk reduction; (b) unrealistic optimism undermined interest in risk reduction indirectly by decreasing worry; and (c) beliefs about risk likelihood and severity were not sufficient to explain the amount of worry expressed about different hazards."} {"_id": "bed359015324e4e105e95cce895cc79cae2bc2e7", "title": "Flawed Self-Assessment: Implications for Health, Education, and the Workplace.", "text": "Research from numerous corners of psychological inquiry suggests that self-assessments of skill and character are often flawed in substantive and systematic ways. We review empirical findings on the imperfect nature of self-assessment and discuss implications for three real-world domains: health, education, and the workplace. In general, people's self-views hold only a tenuous to modest relationship with their actual behavior and performance. The correlation between self-ratings of skill and actual performance in many domains is moderate to meager; indeed, at times, other people's predictions of a person's outcomes prove more accurate than that person's self-predictions. In addition, people overrate themselves. On average, people say that they are \"above average\" in skill (a conclusion that defies statistical possibility), overestimate the likelihood that they will engage in desirable behaviors and achieve favorable outcomes, furnish overly optimistic estimates of when they will complete future projects, and reach judgments with too much confidence. Several psychological processes conspire to produce flawed self-assessments. Research focusing on health echoes these findings. People are unrealistically optimistic about their own health risks compared with those of other people. They also overestimate how distinctive their opinions and preferences (e.g., discomfort with alcohol) are among their peers, a misperception that can have a deleterious impact on their health. Unable to anticipate how they would respond to emotion-laden situations, they mispredict the preferences of patients when asked to step in and make treatment decisions for them. Guided by mistaken but seemingly plausible theories of health and disease, people misdiagnose themselves, a phenomenon that can have severe consequences for their health and longevity. Similarly, research in education finds that students' assessments of their performance tend to agree only moderately with those of their teachers and mentors. Students seem largely unable to assess how well or poorly they have comprehended material they have just read. They also tend to be overconfident in newly learned skills, at times because the common educational practice of massed training appears to promote rapid acquisition of skill (as well as self-confidence) but not necessarily the retention of skill.
Several interventions, however, can be introduced to prompt students to evaluate their skill and learning more accurately. In the workplace, flawed self-assessments arise all the way up the corporate ladder. Employees tend to overestimate their skill, making it difficult to give meaningful feedback. CEOs also display overconfidence in their judgments, particularly when stepping into new markets or novel projects, for example, proposing acquisitions that hurt, rather than help, the price of their company's stock. We discuss several interventions aimed at circumventing the consequences of such flawed assessments; these include training people to routinely make cognitive repairs correcting for biased self-assessments and requiring people to justify their decisions in front of their peers. The act of self-assessment is an intrinsically difficult task, and we enumerate several obstacles that prevent people from reaching truthful self-impressions. We also propose that researchers and practitioners should recognize self-assessment as a coherent and unified area of study spanning many subdisciplines of psychology and beyond. Finally, we suggest that policymakers and other people who make real-world assessments should be wary of self-assessments of skill, expertise, and knowledge, and should consider ways of repairing self-assessments that may be flawed."} {"_id": "08aeae7f9899a161db6a78e9566ed8b0df7a97fc", "title": "Judgment under Uncertainty: Heuristics and Biases.", "text": "This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty."} {"_id": "08a2dbd8932d5060a36c93ad18b2857f4bbbe684", "title": "Fault tolerant parallel data-intensive algorithms", "text": "Fault-tolerance is rapidly becoming a crucial issue in high-end and distributed computing, as increasing numbers of cores decrease the mean time to failure of the systems. In this work, we present an algorithm-based fault tolerance solution that handles fail-stop failures for a class of iterative data-intensive algorithms. We intelligently replicate the data to minimize data loss under multiple failures and decrease re-execution during recovery through small modifications to the algorithms. We evaluate our approach using two data mining algorithms, k-means and Apriori. We show that our approach has negligible overhead and allows us to gracefully handle different numbers of failures. In addition, our approach outperforms Hadoop both in the absence and presence of failures."} {"_id": "2ec2038f229c40bd37552c26545743f02fe1715d", "title": "Confidence sets for persistence diagrams", "text": "Persistent homology is a method for probing topological properties of point clouds and functions. The method involves tracking the birth and death of topological features as one varies a tuning parameter.
Features with short lifetimes are informally considered to be \u201ctopological noise,\u201d and those with a long lifetime are considered to be \u201ctopological signal.\u201d In this paper, we bring some statistical ideas to persistent homology. In particular, we derive confidence sets that allow us to separate topological signal from topological noise."} {"_id": "3e27ee7eb45dc2269fe2364f31f88b35fcd16cfc", "title": "Word Semantic Representations using Bayesian Probabilistic Tensor Factorization", "text": "Many forms of word relatedness have been developed, providing different perspectives on word similarity. We introduce a Bayesian probabilistic tensor factorization model for synthesizing a single word vector representation and per-perspective linear transformations from any number of word similarity matrices. The resulting word vectors, when combined with the per-perspective linear transformation, approximately recreate, while also regularizing and generalizing, each word similarity perspective. Our method can combine manually created semantic resources with neural word embeddings to separate synonyms and antonyms, and is capable of generalizing to words outside the vocabulary of any particular perspective. We evaluated the word embeddings with GRE antonym questions; the results achieve state-of-the-art performance."} {"_id": "247d8d94500b9f22125504794169468e1b4a639f", "title": "Facial performance enhancement using dynamic shape space analysis", "text": "The facial performance of an individual is inherently rich in subtle deformation and timing details. Although these subtleties make the performance realistic and compelling, they often elude both motion capture and hand animation. We present a technique for adding fine-scale details and expressiveness to low-resolution art-directed facial performances, such as those created manually using a rig, via marker-based capture, by fitting a morphable model to a video, or through Kinect reconstruction using recent faceshift technology. We employ a high-resolution facial performance capture system to acquire a representative performance of an individual in which he or she explores the full range of facial expressiveness. From the captured data, our system extracts an expressiveness model that encodes subtle spatial and temporal deformation details specific to that particular individual. Once this model has been built, these details can be transferred to low-resolution art-directed performances. We demonstrate results on various forms of input; after our enhancement, the resulting animations exhibit the same nuances and fine spatial details as the captured performance, with optional temporal enhancement to match the dynamics of the actor. Finally, we show that our technique outperforms the current state-of-the-art in example-based facial animation."} {"_id": "3a8aa4cc6142d433ff55bea8a0cb980103ea15e9", "title": "A fast image dehazing algorithm based on negative correction", "text": ""} {"_id": "63405b75344e62004c654de86b937f3c32f16d41", "title": "Improving Distant Supervision with Maxpooled Attention and Sentence-Level Supervision", "text": "We propose an effective multitask learning setup for reducing distant supervision noise by leveraging sentence-level supervision. We show how sentence-level supervision can be used to improve the encoding of individual sentences, and to learn which input sentences are more likely to express the relationship between a pair of entities.
We also introduce a novel neural architecture for collecting signals from multiple input sentences, which combines the benefits of attention and maxpooling. The proposed method increases AUC by 10% (from 0.261 to 0.284), and outperforms recently published results on the FBNYT dataset."} {"_id": "c19e3165c8bbffcb08ad23747debd66eccb225e6", "title": "Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network", "text": "Transferring a huge amount of data between different network locations over the network links depends on the network's traffic capacity and data rate. Traditionally, a mobile device performs vertical handover considering only one criterion, the Received Signal Strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load and an inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in the heterogeneous wireless network. The algorithm covers three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of the handover number probability and the handover failure probability. In addition, it is noticed that the network priority handover decision algorithm produces better results compared to the equal priority and the mobile priority handover decision algorithms. Finally, the simulation results are validated by the analytical model."} {"_id": "eee3f5c7848e296f9b4cf05e7d2ccc8a0a9ec2f0", "title": "A FRAMEWORK FOR DESIGNING MOBILE QURANIC MEMORIZATION TOOL USING MULTIMEDIA INTERACTIVE LEARNING METHOD FOR CHILDREN", "text": "Quran is the fundamental holy book of Islam. Among the most important concerns is to learn it by heart. However, the current method in Quranic schools is becoming less effective for the younger generation. Thus, there is a need to find alternative solutions to help young Muslims memorize the Quran. Mobile learning is an alternative to conventional learning that will support the existing method. Mobile devices have made an immediate impact on teaching and learning practices. However, for mobile learning to be effective for children in memorizing the Quran, it is necessary to find specific design guidelines and pedagogy for this learning method. This paper aims at providing a unifying framework for developing a Quran memorizer application using multimedia interactive methods and learning theories for mobile learning."} {"_id": "d74e38cfe23305d7ccf06899164836d59da0bbb1", "title": "The WHO public health approach to HIV treatment and care: looking back and looking ahead.", "text": "In 2006, WHO set forth its vision for a public health approach to delivering antiretroviral therapy. This approach has been broadly adopted in resource-poor settings and has provided the foundation for scaling up treatment to over 19\u00b75 million people. There is a global commitment to end the AIDS epidemic as a public health threat by 2030 and, to support this goal, there are opportunities to adapt the public health approach to meet the ensuing challenges.
These challenges include the need to improve identification of people with HIV infection through expanded approaches to testing; further simplify and improve treatment and laboratory monitoring; adapt the public health approach to concentrated epidemics; and link HIV testing, treatment, and care to HIV prevention. Implementation of these key public health principles will bring countries closer to the goals of controlling the HIV epidemic and providing universal health coverage."} {"_id": "3f51995bcde8e4e508027e6da7dd780c5645470b", "title": "The effects of students' motivation, cognitive load and learning anxiety in gamification software engineering education: a structural equation modeling study", "text": "Past research has proven the significant effects of game-based learning on learning motivation and academic performance, and described the key factors in game-based design. Nonetheless, research on the correlations among learning motivation, cognitive load, learning anxiety and academic performance in gamified learning environments has been minimal. This study, therefore, aims to develop a Gamification Software Engineering Education Learning System (GSEELS) and evaluate the effects of gamification, learning motivation, cognitive load and learning anxiety on academic performance. Structural Equation Modeling (SEM) is applied to the empirical data. The questionnaire contains: 1. a Gamification Learning Scale; 2. a Learning Motivation Scale; 3. a Cognitive Load Scale; 4. a Learning Anxiety Scale; and 5. an Academic Performance Scale. A total of 107 undergraduates in two classes participated in this study. The Structural Equation Modeling (SEM) analysis covers descriptive statistics, measurement model evaluation, structural model evaluation, and the path directions and relationships between the five variables. The research results support all nine hypotheses, and the research findings also show the effects of cognitive load on learning anxiety, with strong learning motivation resulting from low learning anxiety. As a result, it is further proven in this study that a well-designed GSEELS would affect student learning motivation and academic performance. Finally, the relationship model between gamification learning, learning motivation, cognitive load, learning anxiety and academic performance is elucidated, and four suggestions are proffered for instructors of software engineering education courses and for further research, so as to assist instructors in the application of favorable gamification teaching strategies."} {"_id": "37b3100d4a069513f2639f07fe7f99f96567b675", "title": "Towards a global participatory platform : democratising open data , complexity science and collective intelligence", "text": "The FuturICT project seeks to use the power of big data, analytic models grounded in complexity science, and the collective intelligence they yield for societal benefit. Accordingly, this paper argues that these new tools should not remain the preserve of restricted government, scientific or corporate \u00e9lites, but be opened up for societal engagement and critique. Democratising such assets as a public good requires a sustainable ecosystem enabling different kinds of stakeholder in society, including but not limited to, citizens and advocacy groups, school and university students, policy analysts, scientists, software developers, journalists and politicians.
Our working name for envisioning a sociotechnical infrastructure capable of engaging such a wide constituency is the Global Participatory Platform (GPP). We consider what it means to develop a GPP at the different levels of data, models and deliberation, motivating a framework for different stakeholders to find their ecological niches at different levels within the system, serving the functions of (i) sensing the environment in order to pool data, (ii) mining the resulting data for patterns in order to model the past/present/future, and (iii) sharing and contesting possible interpretations of what those models might mean, and in a policy context, possible decisions. A research objective is also to apply the concepts and tools of complexity science and social science to the project\u2019s own work. We therefore conceive the global participatory platform as a resilient, epistemic ecosystem, whose design will make it capable of self-organization and adaptation to a dynamic environment, and whose structure and contributions are themselves networks of stakeholders, challenges, issues, ideas and arguments whose structure and dynamics can be modelled and analysed."} {"_id": "5e1eabca76e84a0913dabaacc63b1d03400abcb4", "title": "Comparison of supervised learning methods for spike time coding in spiking neural networks", "text": "In this review we focus our attention on the supervised learning methods for spike time coding in Spiking Neural Networks (SNN). This study is motivated by recent experimental results on information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We pose a fundamental question: what paradigms of neural temporal coding can be implemented with the recent learning methods? In order to answer this question, we discuss various approaches to the considered learning task. We briefly describe the particular learning algorithms and report the results of experiments. Finally, we discuss the properties, assumptions and limitations of each method. We complete this review with a comprehensive list of pointers to the literature."} {"_id": "a112270a43307179ac0f97bb34bc459e3a3d0300", "title": "Enhanced parallel cat swarm optimization based on the Taguchi method", "text": "In this paper, we present an enhanced parallel cat swarm optimization (EPCSO) method for solving numerical optimization problems. The parallel cat swarm optimization (PCSO) method is an optimization algorithm designed to solve numerical optimization problems under the conditions of a small population size and a few iteration numbers. The Taguchi method is widely used in industry for optimizing product and process conditions. By adopting the Taguchi method into the tracing mode process of the PCSO method, we propose the EPCSO method with better accuracy and less computational time. In this paper, five test functions are used to evaluate the accuracy of the proposed EPCSO method. The experimental results show that the proposed EPCSO method gets higher accuracies than the existing PSO-based methods and requires less computational time than the PCSO method. We also apply the proposed method to solve the aircraft schedule recovery problem. The experimental results show that the proposed EPCSO method can provide the optimum recovered aircraft schedule in a very short time.
The proposed EPCSO method produces a recovery schedule with the same total delay time, the same number of delayed flights, and the same number of long-delay flights as the method of Liu, Chen, and Chou (2009). The optimal solutions can be found by the proposed EPCSO method in a very short time."} {"_id": "7ff528c57aeaf0afadcd03d4c7cbcb3d8c7eb5d0", "title": "Conducting a meta-ethnography of qualitative literature: Lessons learnt", "text": "BACKGROUND\nQualitative synthesis has become more commonplace in recent years. Meta-ethnography is one of several methods for synthesising qualitative research and is being used increasingly within health care research. However, many aspects of the steps in the process remain ill-defined.\n\n\nDISCUSSION\nWe utilized the seven stages of the synthesis process to synthesise qualitative research on adherence to tuberculosis treatment. In this paper we discuss the methodological and practical challenges faced; of particular note are the methods used in our synthesis, the additional steps that we found useful in clarifying the process, and the key methodological challenges encountered in implementing the meta-ethnographic approach. The challenges included shaping an appropriate question for the synthesis; identifying relevant studies; assessing the quality of the studies; and synthesising findings across a very large number of primary studies from different contexts and research traditions. We offer suggestions that may assist in undertaking meta-ethnographies in the future.\n\n\nSUMMARY\nMeta-ethnography is a useful method for synthesising qualitative research and for developing models that interpret findings across multiple studies. Despite its growing use in health research, further research is needed to address the wide range of methodological and epistemological questions raised by the approach."} {"_id": "f76738a73ec664d5c251a1bfe38b6921700aa542", "title": "Linking CRM Strategy , Customer Performance Measures and Performance in the Hotel Industry", "text": "Customer relationship management (CRM) has been increasingly adopted because of its benefits of greater customer satisfaction and loyalty, which, in turn, leads to enhanced financial and competitive performance. This paper reports on a study that examines the relationship between CRM strategy and performance and determines whether the use of customer performance measures plays a mediating role in the relationship between CRM strategy and performance. This study contributes to the limited literature on CRM strategy since little is known about the use of CRM strategy and customer performance measures and their relation with performance in the hotel industry in Malaysia. Data were collected through a questionnaire survey of hotels in Malaysia. Hierarchical regression analyses on a sample of 95 hotels revealed that only the information technology dimension of CRM strategy has a significant and positive effect on performance. In addition, the hypothesis concerning the role of customer performance measures as a mediator was supported."} {"_id": "3f5876c3e7c5472d86eb94dda660e9e2c40d0474", "title": "Predicting NDUM Student's Academic Performance Using Data Mining Techniques", "text": "The ability to predict students\u2019 academic performance is very important in an institution's educational system. Recently, some researchers have proposed data mining techniques for higher education.
In this paper, we compare two data mining techniques, Artificial Neural Network (ANN) and the combination of clustering and decision tree classification techniques, for predicting and classifying students\u2019 academic performance. The data set used in this research is the student data of the Computer Science Department, Faculty of Science and Defence Technology, National Defence University of Malaysia (NDUM)."} {"_id": "4ce5925c5f1b1ae1b0666623ffd8da39431e52f0", "title": "Speeding up k-means Clustering by Bootstrap Averaging", "text": "K-means clustering is one of the most popular clustering algorithms used in data mining. However, clustering is a time consuming task, particularly with the large data sets found in data mining. In this paper we show how bootstrap averaging with k-means can produce results comparable to clustering all of the data but in much less time. The approach of bootstrap (sampling with replacement) averaging consists of running k-means clustering to convergence on small bootstrap samples of the training data and averaging similar cluster centroids to obtain a single model. We show why our approach should take less computation time and empirically illustrate its benefits. We show that the performance of our approach is a monotonic function of the size of the bootstrap sample. However, knowing the size of the bootstrap sample that yields as good results as clustering the entire data set remains an open and important question."} {"_id": "33b98253407829fae510a74ca1b4b100b4178b39", "title": "An Inoperability Input-Output Model (IIM) for disruption propagation analysis", "text": "Today's Supply Chains (SCs) are global and getting more complex, due to the exigencies of the world economy, making them hard to manage. This is exacerbated by the increasingly frequent occurrence of natural disasters and other high impact disruptions. To stay competitive, companies are therefore seeking ways to better understand the impact of such disruptions to their SCs. In addressing this need, this paper proposes an approach for disruption propagation analysis. We develop a method based on a variation of an Inoperability Input-Output Model (adapted from a Leontief I-O model) to quantify the impact of disruptions across the entire SC. We then analyse the factors that have the most influential impacts on the SCs during disruptions. Initial results show that the trading volume handled by a company/node is an important factor in determining the disruption impact to SCs, besides the number of SC partners (connections) as implied from previous work."} {"_id": "7ceade21aef40866cdbf6207e232b32f72b4e6dd", "title": "CryptVMI: Encrypted Virtual Machine Introspection in the Cloud", "text": "Virtualization techniques are the key in both public and private cloud computing environments. In such environments, multiple virtual instances are running on the same physical machine. The logical isolation between systems makes security assurance weaker than in physically isolated systems. Thus, Virtual Machine Introspection techniques become essential to prevent the virtual system from being vulnerable to attacks. However, this technique breaks down the borders of the segregation between multiple tenants, which should be avoided in a public cloud computing environment. In this paper, we focus on building an encrypted Virtual Machine Introspection system, CryptVMI, to address the above concern, especially in a public cloud system.
Our approach maintains a query handler on the management node to handle encrypted queries from user clients. We pass the query to the corresponding compute node that holds the virtual instance queried. The introspection application deployed on the compute node processes the query and acquires the encrypted results from the virtual instance for the user. This work shows our design and preliminary implementation of this system."} {"_id": "d0b179ecf80d487cbf7342fbd1c0072b1b5c2c23", "title": "Acute Ischemic Stroke Therapy Overview.", "text": "The treatment of acute ischemic stroke has undergone dramatic changes recently subsequent to the demonstrated efficacy of intra-arterial (IA) device-based therapy in multiple trials. The selection of patients for both intravenous and IA therapy is based on timely imaging with either computed tomography or magnetic resonance imaging, and if IA therapy is considered, noninvasive angiography with one of these modalities is necessary to document a large-vessel occlusion amenable to intervention. More advanced computed tomography and magnetic resonance imaging studies are available that can be used to identify a small ischemic core and ischemic penumbra, and this information will contribute increasingly to treatment decisions as the therapeutic time window is lengthened. Intravenous thrombolysis with tissue-type plasminogen activator remains the mainstay of acute stroke therapy within the initial 4.5 hours after stroke onset, despite the lack of Food and Drug Administration approval in the 3- to 4.5-hour time window. In patients with proximal, large-vessel occlusions, IA device-based treatment should be initiated in patients with small/moderate-sized ischemic cores who can be treated within 6 hours of stroke onset. The organization and implementation of regional stroke care systems will be needed to treat as many eligible patients as expeditiously as possible. Novel treatment paradigms can be envisioned combining neuroprotection with IA device treatment to potentially increase the number of patients who can be treated despite long transport times and to ameliorate the consequences of reperfusion injury. Acute stroke treatment has entered a golden age, and many additional advances can be anticipated."} {"_id": "d483efc550245d222da86f0da529ed6ccaa440d4", "title": "Automated Non-Gaussian Clustering of Polarimetric Synthetic Aperture Radar Images", "text": "This paper presents an automatic image segmentation method for polarimetric synthetic aperture radar data. It utilizes the full polarimetric information and incorporates texture by modeling with a non-Gaussian distribution for the complex scattering coefficients. The modeling is based upon the well-known product model, with a Gamma-distributed texture parameter leading to the K-Wishart model for the covariance matrix. The automatic clustering is achieved through a finite mixture model estimated with a modified expectation maximization algorithm. We include an additional goodness-of-fit test stage that allows for splitting and merging of clusters. This not only improves the model fit of the clusters, but also dynamically selects the appropriate number of clusters. The resulting image segmentation depicts the statistically significant clusters within the image. A key feature is that the degree of sub-sampling of the input image will affect the detail level of the clustering, revealing only the major classes or a variable level of detail.
Real-world examples are shown to demonstrate the technique."} {"_id": "4e99f23bc4e6f510750ab9035cd7bf7273053066", "title": "Mining Vessel Tracking Data for Maritime Domain Applications", "text": "The growing number of remote sensing systems and ship reporting technologies (e.g. Automatic Identification System, Long Range Identification and Tracking, radar tracking, Earth Observation) is generating an overwhelming amount of spatio-temporal and geographically distributed data related to vessels and their movements. Research on reliable data mining techniques has proven essential to the discovery of knowledge from such increasingly available information on ship traffic at sea. Data-driven knowledge discovery has very recently demonstrated its value in fields that go beyond the original maritime safety and security remits of such data. They include, but are not limited to, fisheries management, maritime spatial planning, gridding ship emissions, mapping activities at sea, risk assessment of offshore platforms, and trade indicators. The extraction of useful information from maritime Big Data is thus a key element in providing operational authorities, policy-makers and scientists with supporting tools to understand what is happening at sea and improve maritime knowledge. This work provides a survey of the recent JRC research activities relevant to automatic anomaly detection and knowledge discovery in the maritime domain. Data mining, data analytics and predictive analysis examples are introduced using real data. In addition, this paper presents approaches to detect anomalies in reporting messages and unexpected behaviours at sea."} {"_id": "c7bedb7ab92fc89d1035bd288a075d10acbdf7de", "title": "Requirements Engineering with Use Cases - a Basis for Software Development", "text": "Successful development of software systems depends on the quality of the requirements engineering process. Use cases and scenarios are promising vehicles for eliciting, specifying and validating requirements. This thesis investigates the role of use case modelling in requirements engineering and its relation to system verification and validation. The thesis includes studies of concepts and representations in use case modelling. Semantic issues are discussed and notations based on natural and graphical languages are provided, which allow a hierarchical structure and enable representation at different abstraction levels. Two different strategies for integrating use case modelling with system testing are presented and evaluated, showing how use cases can be a basis for test case generation and reliability assessment. An experiment on requirements validation using inspections with perspective-based reading is also reported, where one of the perspectives applies use case modelling. The results of the experiment indicate that a combination of multiple perspectives may not give a higher defect detection rate compared to single-perspective reading. Pilot studies of the transition from use case based requirements to high-level design are described, where use cases are successfully applied for documenting how functional requirements are distributed on architectural elements. The investigation of an industrial requirements engineering process improvement programme is also reported, where the introduction of a release-driven prioritisation method contributed to a measurable improvement in delivery precision and product quality.
The results presented in the thesis provide further support for how to successfully apply requirements engineering with use cases as an important basis for software development."} {"_id": "4e845b9780595ff9f18e0ae1d99459253ae3d2b7", "title": "Skyline: an open source document editor for creating and analyzing targeted proteomics experiments", "text": "SUMMARY\nSkyline is a Windows client application for targeted proteomics method creation and quantitative data analysis. It is open source and freely available for academic and commercial use. The Skyline user interface simplifies the development of mass spectrometer methods and the analysis of data from targeted proteomics experiments performed using selected reaction monitoring (SRM). Skyline supports using and creating MS/MS spectral libraries from a wide variety of sources to choose SRM filters and verify results based on previously observed ion trap data. Skyline exports transition lists to and imports the native output files from Agilent, Applied Biosystems, Thermo Fisher Scientific and Waters triple quadrupole instruments, seamlessly connecting mass spectrometer output back to the experimental design document. The fast and compact Skyline file format is easily shared, even for experiments requiring many sample injections. A rich array of graphs displays results and provides powerful tools for inspecting data integrity as data are acquired, helping instrument operators to identify problems early. The Skyline dynamic report designer exports tabular data from the Skyline document model for in-depth analysis with common statistical tools.\n\n\nAVAILABILITY\nSingle-click, self-updating web installation is available at http://proteome.gs.washington.edu/software/skyline. This web site also provides access to instructional videos, a support board, an issues list and a link to the source code project."} {"_id": "30419576d2ed1a2d699683ac24a4b5ec5b93f093", "title": "Real time action recognition using histograms of depth gradients and random decision forests", "text": "We propose an algorithm which combines the discriminative information from depth images as well as from 3D joint positions to achieve high action recognition accuracy. To avoid the suppression of subtle discriminative information and also to handle local occlusions, we compute a vector of many independent local features. Each feature encodes spatiotemporal variations of depth and depth gradients at a specific space-time location in the action volume. Moreover, we encode the dominant skeleton movements by computing a local 3D joint position difference histogram. For each joint, we compute a 3D space-time motion volume which we use as an importance indicator and incorporate in the feature vector for improved action discrimination. To retain only the discriminant features, we train a random decision forest (RDF). The proposed algorithm is evaluated on three standard datasets and compared with nine state-of-the-art algorithms. Experimental results show that, on average, the proposed algorithm outperforms all other algorithms in accuracy and has a processing speed of over 112 frames/second."} {"_id": "50e983fd06143cad9d4ac75bffc2ef67024584f2", "title": "LIBLINEAR: A Library for Large Linear Classification", "text": "LIBLINEAR is an open source library for large-scale linear classification. It supports logistic regression and linear support vector machines. We provide easy-to-use command-line tools and library calls for users and developers.
Comprehensive documents are available for both beginners and advanced users. Experiments demonstrate that LIBLINEAR is very efficient on large sparse data sets."} {"_id": "75cbc0eec23375df69de6c64e2f48689dde417c5", "title": "Enhanced Computer Vision With Microsoft Kinect Sensor: A Review", "text": "With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers."} {"_id": "aa358f4a0578234e301a305d8c5de8d859083a4c", "title": "Discriminative Orderlet Mining for Real-Time Recognition of Human-Object Interaction", "text": "This paper presents a novel visual representation, called orderlets, for real-time human action recognition with depth sensors. An orderlet is a middle-level feature that captures the ordinal pattern among a group of low-level features. For skeletons, an orderlet captures a specific spatial relationship among a group of joints. For a depth map, an orderlet characterizes a comparative relationship of the shape information among a group of subregions. The orderlet representation has two nice properties. First, it is insensitive to small noise since an orderlet only depends on the comparative relationship among individual features. Second, it is a frame-level representation and is thus suitable for real-time online action recognition. Experimental results demonstrate its superior performance on online action recognition and cross-environment action recognition."} {"_id": "9a76cf35c74eb3d17f03973d32415c9bab816acb", "title": "Pedestrian Motion Tracking and Crowd Abnormal Behavior Detection Based on Intelligent Video Surveillance", "text": "Pedestrian tracking and detection of crowd abnormal activity under dynamic and complex backgrounds using an Intelligent Video Surveillance (IVS) system are beneficial for security in public places. This paper presents a pedestrian tracking method combining Histogram of Oriented Gradients (HOG) detection and a particle filter. This method regards the particle filter as the tracking framework, identifies the target area according to the result of HOG detection and constantly modifies particle sampling. Our method can track pedestrians in dynamic backgrounds more accurately compared with traditional particle filter algorithms. Meanwhile, a method to detect crowd abnormal activity is also proposed based on a model of crowd features using a Mixture of Gaussians (MOG).
This method calculates features of crowd-interest points, then establishes the crowd feature model using MOG, conducts self-adaptive updating, and detects abnormal activity by matching the input feature with the model distribution. Experiments show our algorithm can efficiently detect abnormal velocities and escape panic in crowds with a high detection rate and a relatively low false alarm rate."} {"_id": "5501e5c600a13eafc1fe0f75b6e09a2b40947844", "title": "Harnessing Context Sensing to Develop a Mobile Intervention for Depression", "text": "BACKGROUND\nMobile phone sensors can be used to develop context-aware systems that automatically detect when patients require assistance. Mobile phones can also provide ecological momentary interventions that deliver tailored assistance during problematic situations. However, such approaches have not yet been used to treat major depressive disorder.\n\n\nOBJECTIVE\nThe purpose of this study was to investigate the technical feasibility, functional reliability, and patient satisfaction with Mobilyze!, a mobile phone- and Internet-based intervention including ecological momentary intervention and context sensing.\n\n\nMETHODS\nWe developed a mobile phone application and supporting architecture, in which machine learning models (ie, learners) predicted patients' mood, emotions, cognitive/motivational states, activities, environmental context, and social context based on at least 38 concurrent phone sensor values (eg, global positioning system, ambient light, recent calls). The website included feedback graphs illustrating correlations between patients' self-reported states, as well as didactics and tools teaching patients behavioral activation concepts. Brief telephone calls and emails with a clinician were used to promote adherence. We enrolled 8 adults with major depressive disorder in a single-arm pilot study to receive Mobilyze! and complete clinical assessments for 8 weeks.\n\n\nRESULTS\nPromising accuracy rates (60% to 91%) were achieved by learners predicting categorical contextual states (eg, location). For states rated on scales (eg, mood), predictive capability was poor. Participants were satisfied with the phone application and improved significantly on self-reported depressive symptoms (beta(week) = -.82, P < .001, per-protocol Cohen d = 3.43) and interview measures of depressive symptoms (beta(week) = -.81, P < .001, per-protocol Cohen d = 3.55). Participants also became less likely to meet criteria for major depressive disorder diagnosis (b(week) = -.65, P = .03, per-protocol remission rate = 85.71%). Comorbid anxiety symptoms also decreased (beta(week) = -.71, P < .001, per-protocol Cohen d = 2.58).\n\n\nCONCLUSIONS\nMobilyze! is a scalable, feasible intervention with preliminary evidence of efficacy. To our knowledge, it is the first ecological momentary intervention for unipolar depression, as well as one of the first attempts to use context sensing to identify mental health-related states.
Several lessons learned regarding technical functionality, data mining, and software development process are discussed.\n\n\nTRIAL REGISTRATION\nClinicaltrials.gov NCT01107041; http://clinicaltrials.gov/ct2/show/NCT01107041 (Archived by WebCite at http://www.webcitation.org/60CVjPH0n)."} {"_id": "7563ca67a1b53f8d0e252f90431f226849f7e57c", "title": "Color and Texture Based Identification and Classification of food Grains using different Color Models and Haralick features", "text": "This paper presents a study on the identification and classification of food grains using different color models such as L*a*b, HSV, HSI and YCbCr by combining color and texture features without performing preprocessing. The K-NN and minimum distance classifiers are used to identify and classify the different types of food grains using local and global features. Texture and color features are the important features used in the classification of different objects. Local features like Haralick features are computed from the co-occurrence matrix as texture features, and global features from the cumulative histogram are computed along with color features. The experiment was carried out on different food grain classes. The non-uniformity of the RGB color space is eliminated by the L*a*b, HSV, HSI and YCbCr color spaces. The correct classification results achieved for the different color models are quite good. Keywords: Feature Extraction; co-occurrence matrix; texture information; Global Features; cumulative histogram; RGB, L*a*b, HSV, HSI and YCbCr color models."} {"_id": "2c5fead7074fc2115fe1e15af7b5cd7f875c2aed", "title": "ZIGZAG: An Efficient Peer-to-Peer Scheme for Media Streaming", "text": "We design a peer-to-peer technique called ZIGZAG for single-source media streaming. ZIGZAG allows the media server to distribute content to many clients by organizing them into an appropriate tree rooted at the server. This application-layer multicast tree has a height logarithmic with the number of clients and a node degree bounded by a constant. This helps reduce the number of processing hops on the delivery path to a client while avoiding network bottlenecks. Consequently, the end-to-end delay is kept small. Although one could build a tree satisfying such properties easily, an efficient control protocol between the nodes must be in place to maintain the tree under the effects of network dynamics and unpredictable client behaviors. ZIGZAG handles such situations gracefully, requiring a constant amortized control overhead. In particular, failure recovery can be done regionally with little impact on the existing clients and mostly no burden on the server."} {"_id": "5fc370f6bd36390b73d82e23c10547708e6d7421", "title": "GPSP: Graph Partition and Space Projection based Approach for Heterogeneous Network Embedding", "text": "In this paper, we propose GPSP, a novel Graph Partition and Space Projection based approach, to learn the representation of a heterogeneous network that consists of multiple types of nodes and links. Concretely, we first partition the heterogeneous network into homogeneous and bipartite subnetworks. Then, the projective relations hidden in bipartite subnetworks are extracted by learning the projective embedding vectors. Finally, we concatenate the projective vectors from bipartite subnetworks with the ones learned from homogeneous subnetworks to form the final representation of the heterogeneous network. Extensive experiments are conducted on a real-life dataset.
The results demonstrate that GPSP outperforms the state-of-the-art baselines in two key network mining tasks: node classification and clustering."} {"_id": "abaabb822fa7834c95fd7d4632a64e97d28f0710", "title": "A dexterous gripper for in-hand manipulation", "text": "During the last few decades, robotic grippers have been developed by the research community with the primary objective of solving the grasping complexities of various objects. Due to the increasing demands of industry, many issues are arising and remain unsolved, such as in-hand manipulation and placing objects with an appropriate posture. Operations like twisting and altering the orientation of an object in a hand require significant dexterity of the gripper, which must be achieved from a compact mechanical design in the first place. In this paper, a newly designed gripper is proposed whose primary goal is to solve in-hand manipulation. The gripper is derived from four identical finger modules; each module has four DOFs and consists of a five-bar linkage for grasping, with an additional two-DOF mechanism similar to a ball spline conceived to manipulate objects. Hence, the easily constructible gripper is capable of grasping and manipulating a plurality of objects. As a preliminary inspection, an optimized platform that represents the central concept of the proposed gripper is developed, with the aim of evaluating the kinematic feasibility of manipulating different objects."} {"_id": "930a6ea926d1f39dc6a0d90799d18d7995110862", "title": "Privacy-preserving photo sharing based on a secure JPEG", "text": "Sharing photos online is a common activity on social networks and photo hosting platforms, such as Facebook, Pinterest, Instagram, or Flickr. However, after reports of citizen surveillance by governmental agencies and the scandalous leakage of celebrities' private photos online, people have become concerned about their online privacy and are looking for ways to protect it. Popular social networks typically offer privacy protection solutions only in response to public demand and these are therefore often rudimentary, complex to use, and provide a limited degree of control and protection. Most solutions either allow users to control who can access the shared photos or for how long they can be accessed. In contrast, in this paper, we take a structured privacy by design approach to the problem of online photo privacy protection. We propose a privacy-preserving photo sharing architecture that takes into account content and context of a photo with privacy protection integrated inside the JPEG file itself in a secure way. We demonstrate the proposed architecture with a prototype mobile iOS application called ProShare that offers scrambling as the privacy protection tool for a selected region in a photo, secure access to the protected images, and secure photo sharing on Facebook."} {"_id": "30f1ea3b4194dba7f957fd6bf81bcaf12dca6ff8", "title": "Dynamic Programming for Linear-Time Incremental Parsing", "text": "Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging \u201cequivalent\u201d stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy.
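The stack-merging idea just described (merging \u201cequivalent\u201d stacks based on feature values) can be illustrated with a toy beam-search sketch: states whose scorer-visible features agree are merged, keeping only the best-scoring one. The feature signature below is a made-up stand-in for a real parser's feature set.

```python
from collections import namedtuple

State = namedtuple("State", "stack buffer score")

def signature(state):
    # Only the state fragments the scorer can see; two states agreeing here
    # will score all future actions identically, so only the best is kept.
    return (state.stack[-2:], state.buffer[:1])

def merge_beam(states, beam_size):
    best = {}
    for s in states:
        sig = signature(s)
        if sig not in best or s.score > best[sig].score:
            best[sig] = s                      # merge: keep the higher-scoring state
    return sorted(best.values(), key=lambda s: -s.score)[:beam_size]

beam = [State(("NP",), ("saw", "her"), 1.2),
        State(("NP",), ("saw", "her"), 0.7),   # equivalent to the first; merged away
        State(("VP",), ("saw", "her"), 1.0)]
print(merge_beam(beam, beam_size=8))
```

Because merged states stand for whole families of stacks, the beam covers exponentially many derivations implicitly, which is where the reported speedup with no accuracy loss comes from.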
Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster."} {"_id": "24bb660aaf163802c1f0cdee39046f2aa42d2d47", "title": "Photorealistic Video Super Resolution", "text": "With the advent of perceptual loss functions, new possibilities in super-resolution have emerged, and we currently have models that successfully generate near-photorealistic high-resolution images from their low-resolution observations. Up to now, however, such approaches have been exclusively limited to single image super-resolution. The application of perceptual loss functions to video processing still entails several challenges, mostly related to the lack of temporal consistency of the generated images, i.e., flickering artifacts. In this work, we present a novel adversarial recurrent network for video upscaling that is able to produce realistic textures in a temporally consistent way. The proposed architecture naturally leverages information from previous frames due to its recurrent architecture, i.e., the input to the generator is composed of the low-resolution image and, additionally, the warped output of the network at the previous step. We also propose an additional loss function to further reinforce temporal consistency in the generated sequences. The experimental validation of our algorithm shows the effectiveness of our approach which obtains competitive samples in terms of perceptual quality with improved temporal consistency."} {"_id": "b057d645bef6ab000a6f708e1b4e452437b02b82", "title": "Automatic Conversational Helpdesk Solution using Seq2Seq and Slot-filling Models", "text": "The helpdesk is a key component of any large IT organization, where users can log a ticket about any issue they face related to IT infrastructure, administrative services, human resource services, etc. Normally, users have to assign an appropriate set of labels to a ticket so that it can be routed to the right domain expert who can help resolve the issue. In practice, the number of labels is very large and organized in the form of a tree. It is non-trivial to describe the issue completely and attach appropriate labels unless one knows the cause of the problem and the related labels. Sometimes domain experts discuss the issue with the users and change the ticket labels accordingly, without modifying the ticket description. This results in inconsistent and badly labeled data, making it hard for supervised algorithms to learn from. In this paper, we propose a novel approach of creating a conversational helpdesk system, which will ask relevant questions to the user, for identification of the right category, and will then raise a ticket on the user's behalf. We use an attention-based seq2seq model to assign the hierarchical categories to tickets. We use a slot-filling model to help us decide what questions to ask the user, if the top-k model predictions are not consistent. We also present a novel approach to generate training data for the slot-filling model automatically, based on attention in the hierarchical classification model. We demonstrate via a simulated user that the proposed approach can give us a significant gain in accuracy on ticket data without asking too many questions to users.
Finally, we also show that our seq2seq model is as versatile as state-of-the-art approaches on publicly available datasets."} {"_id": "a656bbbd46eee2b489c84f8b74b14b54c0a15066", "title": "Unequal Wilkinson Power Divider With Reduced Arm Length For Size Miniaturization", "text": "This paper presents a compact unequal Wilkinson power divider (WPD) with a reduced arm length. The unequal power divider consists of four arms and the isolation elements, including a compensating capacitor and an isolation resistor. Compared with the conventional WPD, the electric length \u03b8 of the arms between the input port and the isolation elements is much shorter, and thus, a compact size can be obtained. A theoretical analysis is carried out, and design equations are obtained. In addition, a study is carried out to characterize the isolation bandwidth and output port return loss bandwidth with different values of \u03b8. For validation, two power dividers operating at 1.5 GHz are implemented with various sizes and isolation bandwidths. Around 50% size reduction and good performance are obtained."} {"_id": "b2a453196b6b490a658a947e4eef9ca6436de000", "title": "Analysis of a laboratory scale three-phase FC-TCR-based static VAr compensator", "text": "This paper presents the design and steady-state analysis of a Fixed Capacitor-Thyristor Controlled Reactor (FC-TCR)-based Static VAr Compensator (SVC). Reactive power compensation is demonstrated through the fundamental frequency analysis of the samples acquired from the designed system. The performance of the SVC in the presence of line reactance is also discussed. A National Instruments (NI)-based data acquisition system is used to perform the steady-state analysis. In addition, a few transient responses are also captured using the data acquisition system."} {"_id": "cf5495cc7d7b361ca6577df62db063b0ba449c37", "title": "New clock-gating techniques for low-power flip-flops", "text": "Two novel low-power flip-flops are presented in the paper. The proposed flip-flops use new gating techniques that reduce power dissipation by deactivating the clock signal. The presented circuits overcome the clock duty-cycle limitation of previously reported gated flip-flops.\nCircuit simulations with the inclusion of parasitics show that an appreciable reduction in power dissipation is possible if the input signal has reduced switching activity. A 16-bit counter is presented as a simple low-power application."} {"_id": "fb3967f480c825e9028edc346d2905dc0accce09", "title": "Bone-Conduction-Based Brain Computer Interface Paradigm -- EEG Signal Processing, Feature Extraction and Classification", "text": "The paper presents a novel bone-conduction-based brain-computer interface paradigm. Four sub-threshold acoustic frequency stimulus patterns are presented to the subjects in an oddball paradigm, allowing for the generation of \"aha-responses\" to the attended targets. This allows for successful implementation of the bone-conduction-based brain-computer interface (BCI) paradigm. The concept is confirmed with seven subjects in an online bone-conducted auditory Morse-code pattern spelling BCI paradigm. We also report the brain electrophysiological signal processing and classification steps taken to achieve the successful BCI paradigm.
We also present a finding of response latency variability as a function of stimulus difficulty."} {"_id": "a72933fda89820c319a5b872c062ad2670d2db4a", "title": "Smart Car Parking System Solution for the Internet of Things in Smart Cities", "text": "The Internet of Things (IoT) is able to connect billions of devices and services at any time in any place, with various applications. Recently, the IoT has become an emerging technology. One of the most significant current research topics on the IoT is smart car parking. A modern urban city has over a million cars on its roads but does not have enough parking space. Moreover, most contemporary researchers propose management of the data on the cloud. However, this method may be considered an issue, since the raw data is sent promptly from distributed sensors to the parking area via the cloud and then received back after it is processed. This is an expensive technique in terms of data transmission as well as energy cost and consumption. While the majority of proposed solutions address the problem of finding unoccupied parking space and ignore some other critical issues, such as information about the nearest car park and road traffic congestion, this paper goes beyond and proposes an alternative method. The paper proposes a smart car parking system that will assist users to solve the issue of finding a parking space and to minimise the time spent in searching for the nearest available car park. In addition, it provides users with road traffic congestion status. Moreover, the proposed system collects the raw data locally and extracts features by applying data filtering and fusion techniques to reduce the transmitted data over the network. After that, the transformed data is sent to the cloud for processing and evaluation using machine learning algorithms."} {"_id": "422d9b1a05bc33fcca4b9aa9381f46804c6132fd", "title": "CrowdDB: answering queries with crowdsourcing", "text": "Some queries cannot be answered by machines only. Processing such queries requires human input for providing information that is missing from the database, for performing computationally difficult functions, and for matching, ranking, or aggregating results based on fuzzy criteria. CrowdDB uses human input via crowdsourcing to process queries that neither database systems nor search engines can adequately answer. It uses SQL both as a language for posing complex queries and as a way to model data. While CrowdDB leverages many aspects of traditional database systems, there are also important differences. Conceptually, a major change is that the traditional closed-world assumption for query processing does not hold for human input. From an implementation perspective, human-oriented query operators are needed to solicit, integrate and cleanse crowdsourced data. Furthermore, performance and cost depend on a number of new factors including worker affinity, training, fatigue, motivation and location. We describe the design of CrowdDB, report on an initial set of experiments using Amazon Mechanical Turk, and outline important avenues for future work in the development of crowdsourced query processing systems."} {"_id": "edc2e4e6308d7dfce586cb8a4441c704f8f8d41b", "title": "AIDE: Fast and Communication Efficient Distributed Optimization", "text": "In this paper, we present two new communication-efficient methods for distributed minimization of an average of functions.
The first algorithm is an inexact variant of the DANE algorithm [20] that allows any local algorithm to return an approximate solution to a local subproblem. We show that such a strategy does not affect the theoretical guarantees of DANE significantly. In fact, our approach can be viewed as a robustification strategy, since the method is substantially better behaved than DANE on data partitions arising in practice. It is well known that the DANE algorithm does not match the communication complexity lower bounds. To bridge this gap, we propose an accelerated variant of the first method, called AIDE, that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle. Our empirical results show that AIDE is superior to other communication efficient algorithms in settings that naturally arise in machine learning applications."} {"_id": "757b27a3ceb2293b8284fc24a7084a0c3fc2ae21", "title": "Data Distillation: Towards Omni-Supervised Learning", "text": "We investigate omni-supervised learning, a special regime of semi-supervised learning in which the learner exploits all available labeled data plus internet-scale sources of unlabeled data. Omni-supervised learning is lower-bounded by performance on existing labeled datasets, offering the potential to surpass state-of-the-art fully supervised methods. To exploit the omni-supervised setting, we propose data distillation, a method that ensembles predictions from multiple transformations of unlabeled data, using a single model, to automatically generate new training annotations. We argue that visual recognition models have recently become accurate enough that it is now possible to apply classic ideas about self-training to challenging real-world data. Our experimental results show that in the cases of human keypoint detection and general object detection, state-of-the-art models trained with data distillation surpass the performance of using labeled data from the COCO dataset alone."} {"_id": "bb6ded212cb2767e95f02ecfc9a1d9c10dc14f2e", "title": "Autonomous lift of a cable-suspended load by an unmanned aerial robot", "text": "In this paper, we address the problem of lifting a cable-suspended load from the ground with a quadrotor aerial vehicle. Furthermore, we consider that the mass of the load is unknown. The lift maneuver is a critical step before proceeding with the transportation of a given cargo. However, it has received little attention in the literature so far. To deal with this problem, we break down the lift maneuver into simpler modes which represent the dynamics of the quadrotor-load system at particular operating regimes. From this decomposition, we obtain a series of waypoints that the aerial vehicle has to reach to accomplish the task. We combine geometric control with a least-squares estimation method to design an adaptive controller that follows a prescribed trajectory planned based on the waypoints. The effectiveness of the proposed control scheme is demonstrated by numerical simulations."} {"_id": "ad10d6dec0b5855c3c3e71d334161647d6d4ed9e", "title": "Few-Shot and Zero-Shot Multi-Label Learning for Structured Label Spaces", "text": "Large multi-label datasets contain labels that occur thousands of times (frequent group), those that occur only a few times (few-shot group), and labels that never appear in the training dataset (zero-shot group). Multi-label few- and zero-shot label prediction is mostly unexplored on datasets with large label spaces, especially for text classification.
In this paper, we perform a fine-grained evaluation to understand how state-of-the-art methods perform on infrequent labels. Furthermore, we develop few- and zero-shot methods for multi-label text classification when there is a known structure over the label space, and evaluate them on two publicly available medical text datasets: MIMIC II and MIMIC III. For few-shot labels we achieve improvements of 6.2% and 4.8% in R@10 for MIMIC II and MIMIC III, respectively, over prior efforts; the corresponding R@10 improvements for zero-shot labels are 17.3% and 19%."} {"_id": "c677166592b505b80a487fb88ac5a6996fc47d71", "title": "Decentralized control: An overview", "text": "The paper reviews the past and present results in the area of decentralized control of large-scale complex systems. An emphasis is laid on decentralization, decomposition, and robustness. These methodologies serve as effective tools to overcome specific difficulties arising in large-scale complex systems such as high dimensionality, information structure constraints, uncertainty, and delays. Several prospective topics for future research are introduced in this context. The overview is focused on recent decomposition approaches in interconnected dynamic systems due to their potential in providing the extension of decentralized control into networked control systems."} {"_id": "6b72812c298cf3985c67a5fefff6d125175439c5", "title": "LexIt: A Computational Resource on Italian Argument Structure", "text": "The aim of this paper is to introduce LexIt, a computational framework for the automatic acquisition and exploration of distributional information about Italian verbs, nouns and adjectives, freely available through a web interface at the address http://sesia.humnet.unipi.it/lexit. LexIt is the first large-scale resource for Italian in which subcategorization and semantic selection properties are characterized fully on distributional grounds: in the paper we describe both the process of data extraction and the evaluation of the subcategorization frames extracted with LexIt."} {"_id": "13a375a84a6c414b85477a401541d3e28db1e11a", "title": "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise", "text": "Spatial databases require detecting knowledge from great amounts of data and need to handle clusters of arbitrary shape. Requirements of clustering in data mining: \u2022 Scalability \u2022 Dealing with different types of attributes \u2022 Discovery of clusters with arbitrary shape \u2022 Minimal requirements for domain knowledge to determine input parameters \u2022 Ability to deal with noise and outliers \u2022 Insensitivity to the order of input data \u2022 High dimensionality of data \u2022 Interpretability and usability"} {"_id": "6eb72b8f2c5fb1b996e5502001131b72825f91a4", "title": "Using grid for accelerating density-based clustering", "text": "Clustering analysis is a primary method for data mining. The ever-increasing volumes of data in different applications force clustering algorithms to cope with them. DBSCAN is a well-known algorithm for density-based clustering. It is both effective, as it can detect arbitrarily shaped clusters of dense regions, and efficient, especially when spatial indexes are available to perform the neighborhood queries efficiently. In this paper we introduce a new algorithm GriDBSCAN to enhance the performance of DBSCAN using grid partitioning and merging, yielding high performance with the advantage of a high degree of parallelism.
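A small sketch of why grid partitioning accelerates DBSCAN-style clustering, in the spirit of GriDBSCAN (though not the GriDBSCAN algorithm itself): with square cells of side eps, every eps-neighbor of a point lies in the point's own cell or one of the adjacent cells, so a region query inspects at most nine cells in 2-D instead of the whole dataset.

```python
import numpy as np
from collections import defaultdict

def build_grid(points, eps):
    # Hash each point index into a square cell of side eps.
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[(int(p[0] // eps), int(p[1] // eps))].append(i)
    return grid

def region_query(points, grid, eps, i):
    # All eps-neighbors must lie in the 3x3 block of cells around point i.
    cx, cy = int(points[i][0] // eps), int(points[i][1] // eps)
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if np.linalg.norm(points[i] - points[j]) <= eps:
                    hits.append(j)
    return hits

pts = np.random.default_rng(1).uniform(0, 10, size=(500, 2))
grid = build_grid(pts, eps=0.5)
print(len(region_query(pts, grid, 0.5, 0)), "eps-neighbors of point 0")
```

Because cells are independent, the per-cell work also parallelizes naturally, which is the property the GriDBSCAN abstract emphasizes.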
We verified the correctness of the algorithm theoretically and experimentally, and studied its performance both theoretically and through experiments on real and synthetic data. It proved to run much faster than the original DBSCAN. We compared the algorithm with a similar algorithm, EnhancedDBSCAN, which is also an enhancement to DBSCAN using partitioning. Experiments showed the new algorithm's superiority in performance and degree of parallelism."} {"_id": "818826f356444f3daa3447755bf63f171f39ec47", "title": "Active Learning Literature Survey", "text": "The key idea behind active learning is that a machine learning algorithm can achieve greater accuracy with fewer labeled training instances if it is allowed to choose the data from which it learns. An active learner may ask queries in the form of unlabeled instances to be labeled by an oracle (e.g., a human annotator). Active learning is well-motivated in many modern machine learning problems, where unlabeled data may be abundant but labels are difficult, time-consuming, or expensive to obtain. This report provides a general introduction to active learning and a survey of the literature. This includes a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date. An analysis of the empirical and theoretical evidence for active learning, a summary of several problem setting variants, and a discussion of related topics in machine learning research are also presented."} {"_id": "dfd209e1a6def6edc2db9da45cfd884e9ffacc97", "title": "An improved sampling-based DBSCAN for large spatial databases", "text": "Spatial data clustering is one of the important data mining techniques for extracting knowledge from large amounts of spatial data collected in various applications, such as remote sensing, GIS, computer cartography, environmental assessment and planning, etc. Several useful and popular spatial data clustering algorithms have been proposed in the past decade. DBSCAN is one of them, which can discover clusters of arbitrary shape and can handle noise points effectively. However, DBSCAN requires a large volume of memory because it operates on the entire database. This paper presents an improved sampling-based DBSCAN which can cluster large-scale spatial databases effectively. Experimental results are included to establish that the proposed sampling-based DBSCAN outperforms DBSCAN as well as its other counterparts, in terms of execution time, without losing the quality of clustering."} {"_id": "10e1e88f9c137d8e350bfc6c9f60242a8f3d22b4", "title": "Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications", "text": "Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the order of input records. We present CLIQUE, a clustering algorithm that satisfies each of these requirements. CLIQUE identifies dense clusters in subspaces of maximum dimensionality. It generates cluster descriptions in the form of DNF expressions that are minimized for ease of comprehension. It produces identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution.
Through experiments, we show that CLIQUE efficiently finds accurate clusters in large high-dimensional datasets."} {"_id": "0debd1c0b73fc79dc7a64431b8b6a1fe21dcd9f7", "title": "An improved NSGA-III algorithm for feature selection used in intrusion detection", "text": "Feature selection can improve classification accuracy and decrease the computational complexity of classification. Data features in intrusion detection systems (IDS) always present the problem of imbalanced classification, in which some classes only have a few instances while others have many. This imbalance can obviously limit classification efficiency, but few efforts have been made to address it. In this paper, a scheme for the many-objective problem is proposed for feature selection in IDS, which uses two strategies, namely, a special domination method and a predefined multiple targeted search, for population evolution. It can differentiate traffic not only between normal and abnormal but also by abnormality type. Based on our scheme, NSGA-III is used to obtain an adequate feature subset with good performance. An improved many-objective optimization algorithm (I-NSGA-III) is further proposed using a novel niche preservation procedure. It consists of a bias-selection process that selects the individual with the fewest selected features and a fit-selection process that selects the individual with the maximum sum weight of its objectives. Experimental results show that I-NSGA-III can alleviate the imbalance problem with higher classification accuracy for classes having fewer instances. Moreover, it can achieve both higher classification accuracy and lower computational complexity."} {"_id": "2aa40d3bb71b4e16bb0a63eb4dab586f7c15622f", "title": "A Low Radar Cross Section and Low Profile Antenna Co-Designed With Absorbent Frequency Selective Radome", "text": "A low radar cross section (RCS) and low profile antenna co-designed with an absorbent frequency selective radome (AFSR) is investigated. A pair of circular slot resonators is embedded on the surface of the AFSR to realize a transmission window in the vertical polarization, while a wide absorption band is still maintained in the horizontal polarization. When a patch antenna is etched within the AFSR, where the metal grounds of the patch antenna and AFSR are co-used, a co-designed antenna with low RCS and low profile is thus realized. For demonstration, an AFSR is designed whose transmission window has a minimal insertion loss of 0.45 dB at 8.9 GHz, with two separate absorption bands (a lower absorption band from 4.8 to 7.5 GHz and an upper absorption band from 10 to 13 GHz) in the vertical polarization, and a wide absorption band (from 4.5 to 12.5 GHz) in the horizontal polarization. A patch antenna etched within the AFSR is optimized to operate at 8.9 GHz; it is then simulated and fabricated. The measured results demonstrate that the proposed antenna not only has good radiation patterns, but also obtains significant RCS reduction."} {"_id": "5a4a53339068eebd1544b9f430098f2f132f641b", "title": "Hierarchical Disentangled Representations", "text": "Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation, often by introducing suitable modifications of the objective function.
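A typical example of such an objective modification is a beta-weighted evidence lower bound, where the KL term is reweighted to encourage disentanglement. The sketch below assumes a diagonal-Gaussian posterior and standard-normal prior; it is a generic illustration of the family of objectives being discussed, not the specific generalization formulated in this paper.

```python
import numpy as np

def beta_elbo(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: Gaussian log-likelihood up to additive constants.
    recon = -np.sum((x - x_recon) ** 2, axis=1)
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar), axis=1)
    # beta > 1 penalizes the KL term more, trading reconstruction fidelity
    # for more factorized (disentangled) latent codes.
    return np.mean(recon - beta * kl)

rng = np.random.default_rng(0)
x, x_recon = rng.normal(size=(8, 10)), rng.normal(size=(8, 10))
mu, logvar = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
print(beta_elbo(x, x_recon, mu, logvar))
```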
We synthesize this growing body of literature by formulating a generalization of the evidence lower bound that explicitly represents the trade-offs between sparsity of the latent code, bijectivity of representations, and coverage of the support of the empirical data distribution. Our objective is also suitable for learning hierarchical representations that disentangle blocks of variables whilst allowing for some degree of correlation within blocks. Experiments on a range of datasets demonstrate that learned representations contain interpretable features, are able to learn discrete attributes, and generalize to unseen combinations of factors."} {"_id": "4b19be501b279b7d80d94b2d9d986bf4f8ab4ede", "title": "Stochastic Models, Estimation and Control", "text": ""} {"_id": "8cea85a615bbf1fc9329e8a542a049b721667a61", "title": "A lightweight method for virtual machine introspection", "text": "A method for the introspection of virtual machines is proposed. The main distinctive feature of this method is that it makes it possible to obtain information about the system operation using minimal knowledge about its internal organization. The proposed approach uses rarely changing parts of the application binary interface, such as identifiers and parameters of system calls, calling conventions, and the formats of executable files. The lightweight property of the introspection method is due to the minimization of the knowledge about the system and to its high performance. The introspection infrastructure is based on the QEMU emulator, version 2.8. Presently, monitoring of file operations, processes, and API function calls is implemented. The available introspection tools (RTKDSM, Panda, and DECAF) get data for the analysis using kernel structures. All the data obtained (addresses of structures, etc.) is written to special profiles. Since the addresses and offsets strongly depend not only on the version of the operating system but also on the parameters of its assembly, these tools have to store a large number of profiles. We propose to use parts of the application binary interface because they are rarely modified and it is often possible to use one profile for a family of OSs. The main idea underlying the proposed method is to intercept the system and library function calls and read parameters and returned values. The processor provides special instructions for calling system and user-defined functions. The capabilities of QEMU are extended by an instrumentation mechanism to enable one to track each executed instruction and find the instructions of interest among them. When a system call occurs, the control is passed to the system call detector that checks the number of the call and decides to which module the job should be forwarded. In the case of an API function call, the situation is similar, but the API function detector checks the function address. An introspection tool consisting of a set of modules is developed. These modules are dynamic libraries that are plugged into QEMU. The modules can interact by exchanging data."} {"_id": "cebb7ddfc3664e1f7ebaf40ebbc5712b4e1ecce7", "title": "International experiences with the Hospital Anxiety and Depression Scale--a review of validation data and clinical results.", "text": "More than 200 published studies from most medical settings worldwide have reported experiences with the Hospital Anxiety and Depression Scale (HADS) which was specifically developed by Zigmond and Snaith for use with physically ill patients.
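For readers unfamiliar with the instrument, here is a minimal scoring sketch for the HADS as commonly described: 14 items scored 0 to 3, split into 7-item anxiety and depression subscales (each ranging 0 to 21), with a subscale score of 8 or more often used as a screening threshold. The alternating item assignment and the cutoff follow common usage and are assumptions to verify against the original instrument before any clinical use.

```python
def score_hads(responses):
    # responses: 14 item scores in questionnaire order, each in 0..3.
    assert len(responses) == 14 and all(0 <= r <= 3 for r in responses)
    anxiety = sum(responses[0::2])     # odd-numbered items (1, 3, 5, ...)
    depression = sum(responses[1::2])  # even-numbered items (2, 4, 6, ...)
    flag = lambda s: "possible case" if s >= 8 else "non-case"
    return {"anxiety": (anxiety, flag(anxiety)),
            "depression": (depression, flag(depression))}

print(score_hads([1, 0, 2, 1, 3, 0, 1, 1, 2, 0, 1, 2, 0, 1]))
```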
Although the scale was introduced in 1983, there is still no comprehensive documentation of its psychometric properties. The present review summarizes available data on reliability and validity and gives an overview of clinical studies conducted with this instrument and their most important findings. The HADS gives clinically meaningful results as a psychological screening tool, in clinical group comparisons and in correlational studies with several aspects of disease and quality of life. It is sensitive to changes both during the course of diseases and in response to psychotherapeutic and psychopharmacological intervention. Finally, HADS scores predict psychosocial and possibly also physical outcome."} {"_id": "e893b706c3d9e68fc978ec41fb17d757ec85ee1e", "title": "Addressing the Winograd Schema Challenge as a Sequence Ranking Task", "text": "The Winograd Schema Challenge targets pronominal anaphora resolution problems which require the application of cognitive inference in combination with world knowledge. These problems are easy to solve for humans but most difficult to solve for machines. Computational models that previously addressed this task rely on syntactic preprocessing and incorporation of external knowledge by manually crafted features. We address the Winograd Schema Challenge from a new perspective as a sequence ranking task, and design a Siamese neural sequence ranking model which performs significantly better than a random baseline, even when solely trained on sequences of words. We evaluate against a baseline and a state-of-the-art system on two data sets and show that anonymization of noun phrase candidates strongly helps our model to generalize."} {"_id": "8d1b4a8968630d29046285eca24475518e34eee2", "title": "Inter-AS traffic engineering with SDN", "text": "Egress selection in an Internet Service Provider (ISP) is the process of selecting an egress router to route inter-domain traffic across the ISP such that a traffic engineering objective is achieved. In traditional ISP networks, transit traffic is carried across the network and exits via the egress closest to the source, in an attempt to minimize the network resources used for transit traffic. This exit strategy is known as Hot Potato Routing (HPR). The emerging field of Software-Defined Networking (SDN) has opened up many possibilities and promised to bring new flexibility to the rigid traditional paradigm of networking. In an ISP network, however, completely replacing legacy network devices with SDN nodes is neither simple nor straightforward. This has led to the idea of incremental and selective deployment of SDN nodes in an ISP network. Such a hybrid network gives us control over traffic flows that pass through the SDN nodes without requiring extensive changes to an existing ISP network. In this paper, we look at the problem of choosing an optimal set of egress routers to route inter-domain transit traffic in a hybrid SDN network such that the maximum link utilization of the egress links is minimized. We formulate the optimization problem, show that it is related to the makespan scheduling problem of unrelated parallel machines, and propose heuristics to solve it.
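Since the egress-selection problem is related to makespan scheduling on unrelated machines, a natural heuristic is a greedy longest-demand-first assignment: sort flows by demand and place each on the feasible egress link whose utilization would end up lowest. The sketch below is one plausible such heuristic, with made-up demands, capacities, and feasibility sets; it is not necessarily the heuristic proposed in the paper.

```python
def greedy_egress(flows, capacity, feasible):
    load = {e: 0.0 for e in capacity}
    assign = {}
    # Longest-demand-first reduces the chance that one big flow is forced
    # onto an already-loaded link late in the assignment.
    for f, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        best = min(feasible[f], key=lambda e: (load[e] + demand) / capacity[e])
        assign[f] = best
        load[best] += demand
    max_util = max(load[e] / capacity[e] for e in capacity)
    return assign, max_util

flows = {"f1": 40, "f2": 30, "f3": 30, "f4": 20}          # demand units
capacity = {"e1": 100, "e2": 60}                          # egress link capacities
feasible = {"f1": ["e1", "e2"], "f2": ["e1"],
            "f3": ["e1", "e2"], "f4": ["e2"]}             # reachable egresses per flow
print(greedy_egress(flows, capacity, feasible))
```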
We perform simulations to evaluate our heuristic on a real ISP topology and show that even with a small number of SDN nodes in the network, the maximum link utilization on the egress links can be lower than that of traditional HPR."} {"_id": "7d3eb4456b16c41c56490dcddc3bee3ac700662b", "title": "Graption: A graph-based P2P traffic classification framework for the internet backbone", "text": "Flow-level and payload-based classification methods require per-application training and will thus not detect P2P traffic from emerging protocols. Behavioral-host-based approaches such as BLINC [25] can detect traffic from new protocols [25], but have weak performance when applied at the backbone [26]. In addition, most tools including BLINC [25] require fine-tuning and careful selection of parameters [26]. We discuss the limitations of previous methods in more detail in Section 4. In this paper, we use the network-wide behavior of an application to assist in classifying its traffic. To model this behavior, we use graphs where each node is an IP address, and each edge represents a type of interaction between two nodes. We use the term Traffic Dispersion Graph or TDG to refer to such a graph [19]. Intuitively, with TDGs we enable the detection of network-wide behavior (e.g., highly connected graphs) that is common among P2P applications and different from other traffic (e.g., Web). While we recognize that some previous efforts [6,9] have used graphs to detect worm activity, they have not explored the full capabilities of TDGs for application classification. This paper is an extension of a workshop paper [18] and the differences will be clarified in the related work section (Section 4). We propose a classification framework, dubbed Graption (Graph-based classification), as a systematic way to combine network-wide behavior and flow-level characteristics of network applications. Graption first groups flows using flow-level features, in an unsupervised and agnostic way, i.e., without using application-specific knowledge. It then uses TDGs to classify each group of flows. As a proof of concept, we instantiate our framework and develop a P2P detection method, which we call Graption-P2P. Compared to other methods (e.g., BLINC [25]), Graption-P2P is easier to configure and requires fewer parameters. The highlights of our work can be summarized in the following points: Distinguishing between P2P and client\u2013server TDGs. We use real-world backbone traces and derive graph theoretic metrics that can distinguish between the TDGs formed by client\u2013server (e.g., Web) and P2P (e.g., eDonkey) applications (Section 2.2). Practical considerations for TDGs. We show that even a single backbone link contains enough information to generate TDGs that can be used to classify traffic. In addition, TDGs of the same application seem fairly consistent across time (Section 2.3). High P2P classification accuracy. Our framework instantiation (Graption-P2P) classifies 90% of P2P traffic with 95% accuracy when applied at the backbone. Such traces are particularly challenging for other methods (Section 3.2.2). Comparison with a behavioral-host-based method.
Graption-P2P performs better than BLINC [25] in P2P identification at the backbone. For example, Graption-P2P identifies 95% of BitTorrent traffic while BLINC identifies only 25% (Section 3.3). Identifying the unknown. Using Graption, we identified a P2P overlay of the Slapper worm. The TDG of Slapper was never used to train our classifier. This is a promising result showing that our approach can be used to detect both known and unknown P2P applications (Section 3.4). The rest of the paper is organized as follows. In Section 2 we define TDGs, and identify TDG-based metrics that differentiate between applications. In Section 3 we present the Graption framework and our instantiation, Graption-P2P. In Section 5 we discuss various practical issues. In Section 4 we discuss related work. Finally, in Section 6 we conclude the paper. 2. Studying the TDGs of P2P applications 2.1. Traffic dispersion graphs (TDGs) Definition. Throughout this paper, we assume that packets can be grouped into flows using the standard 5-tuple {srcIP, srcPort, dstIP, dstPort, protocol}. Given a group of flows S, collected over a fixed-length time interval, we define the corresponding TDG to be a directed graph G(V,E), where the set of nodes V corresponds to the set of IP addresses in S, and there is a link (u,v) \u2208 E from u to v if there is a flow f \u2208 S between them. In this paper, we consider bidirectional flows. We define a TCP flow to start on the first packet with the SYN-flag set and the ACK-flag not set, so that the initiator and the recipient of the flow are defined for the purposes of direction. For UDP flows, direction is decided upon the first packet of the flow. Visualization examples. In Fig. 1, we show TDG examples from two different applications. In order to motivate the discussion in the rest of the paper, we show the contrast between a P2P and a client\u2013server TDG. From the figure we see that P2P traffic forms more connected and denser graphs compared to client\u2013server TDGs. In Section 2.2, we show how we can translate the visual intuition of Fig. 1 into quantitative measures that can be used to classify TDGs that correspond to different applications. Data set. To study TDGs, we use three backbone traces from a Tier-1 ISP and the Abilene (Internet2) network. These traces are summarized in Table 1. All data are IP-anonymized and contain traffic from both directions of the link. The TR-PAY1 and TR-PAY2 traces were collected from an OC48 link of a commercial US Tier-1 ISP at the Palo Alto Internet eXchange (PAIX). To the best of our knowledge, these are the most recent backbone traces with payload that are made available to researchers by CAIDA [5]. The TR-ABIL trace is a publicly available data set collected from the Abilene (Internet2) academic network connecting Indianapolis with Kansas City. The Abilene trace consists of five randomly selected five-minute samples taken every day for one month, and covers both day and night hours as well as weekdays and weekends. Extracting ground truth.
We used a Payload-based Classifier (PC) to establish the ground truth of flows."} {"_id": "8ba965f138c1178aef09da3781765e300c325f3d", "title": "Design and development of a low cost EMG signal acquisition system using surface EMG electrode", "text": "The electromyogram (EMG) signal is very small; it requires a system to amplify it for display or for further analysis. This paper presents the development of a low-cost physiotherapy EMG signal acquisition system with two-channel input. In the acquisition system, both input signals are amplified with a differential amplifier and undergo signal pre-processing to obtain the linear envelope of the EMG signal. The acquired EMG signal is then digitized and sent to the computer to be plotted."} {"_id": "0fe5990652d47a4de58500203fd6f00ada7de0ae", "title": "Security and Privacy in the Internet of Vehicles", "text": "The Internet of Vehicles (IoV) is a typical application of the Internet of Things in the field of transportation, which aims at achieving an integrated intelligent transportation system to enhance traffic, avoid accidents, ensure road safety, and improve driving experiences. Due to its characteristics of dynamic topological structures, huge network scale, non-uniform distribution of nodes, and mobility constraints, IoV systems face various types of attacks, such as authentication and identification attacks, availability attacks, confidentiality attacks, routing attacks, data authenticity attacks, etc., which result in several challenging requirements in security and privacy. Security researchers have made numerous efforts to ensure the security and privacy of the Internet of Vehicles in recent years.
This paper aims to review the advances on issues of security and privacy in IoV, including security and privacy requirements, attack types, and the relevant solutions, and to discuss challenges and future trends in this area."} {"_id": "fffc47e080dbbd4d450b6e6dbee4de7e66324e21", "title": "The meaning of compassion fatigue to student nurses: an interpretive phenomenological study", "text": "Background: Compassion fatigue is a form of occupational stress which occurs when individuals are exposed to suffering and trauma on an ongoing basis. The purpose of this study was to explore the experiences of compassion fatigue among student nurses following their first clinical placement in a UK health care setting during 2015. Methods: The aim of this study was to explore students\u2019 thoughts and feelings about compassion fatigue using reflective poems as a source of data. An interpretive phenomenological approach was taken, using a purposeful sampling strategy which aimed to explore the in-depth meaning of the concept as experienced by the students. Results: From this study it is clear that students experience compassion fatigue, and this has a psychological effect on their wellbeing and ability to learn in the clinical practice setting. Reflective poetry writing enabled articulation of feelings which were at times negative and linked to the student\u2019s status as a novice nurse. Conclusions: Students experience compassion fatigue, and educators need to find ways to provide support in both clinical and university settings. Positive practices such as shared reflection and the use of creative teaching methods might be beneficial to support exploration of feelings, build resilience, and develop effective ways of coping."} {"_id": "01413e1fc981a8c041dc236dcee64790e2239a36", "title": "A New Framework for Distributed Submodular Maximization", "text": "A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from a high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near-optimal approximation ratios for many settings in only a constant number of MapReduce rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint."} {"_id": "e89244816864ee8e000b72923b1983af6c65adb8", "title": "Image Retrieval using Harris Corners and Histogram of Oriented Gradients", "text": "Content-based image retrieval is the technique of retrieving images from a database that are visually similar to a given query image. It is an active and emerging research field in computer vision. In our proposed system, an interest-point-based Histogram of Oriented Gradients (HOG) feature descriptor is used to retrieve the relevant images from the database. The dimensionality of the HOG feature vector is reduced by Principal Component Analysis (PCA). To improve the retrieval accuracy of the system, colour moments are used along with the HOG feature descriptor. Interest points are detected using the Harris corner detector in order to extract the image features.
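A sketch of the retrieval pipeline described in this abstract (Harris corners for interest points, HOG patches as descriptors, PCA for dimensionality reduction, and the KD-tree matching mentioned next), using scikit-image and scikit-learn; the patch size, corner count, and component count are illustrative choices, not the paper's settings.

```python
import numpy as np
from skimage import data, color
from skimage.feature import corner_harris, corner_peaks, hog
from sklearn.decomposition import PCA
from sklearn.neighbors import KDTree

def describe(img, patch=32, max_corners=20):
    # Harris corners pick salient interest points; a HOG descriptor is
    # computed on a square patch centered on each corner.
    corners = corner_peaks(corner_harris(img), min_distance=10)[:max_corners]
    descs, half = [], patch // 2
    for r, c in corners:
        win = img[max(r - half, 0):r + half, max(c - half, 0):c + half]
        if win.shape == (patch, patch):            # skip border-clipped patches
            descs.append(hog(win, pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    return np.array(descs)

img = color.rgb2gray(data.astronaut())
descs = describe(img)
pca = PCA(n_components=min(10, len(descs))).fit(descs)
tree = KDTree(pca.transform(descs))                # index the "database" descriptors
dist, idx = tree.query(pca.transform(descs[:1]), k=3)
print(idx)
```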
The KD-tree is used for matching and indexing the features of the query image with the database images."} {"_id": "c8b813d447481d2af75a623ef117d45b6d610901", "title": "HanGuard: SDN-driven protection of smart home WiFi devices from malicious mobile apps", "text": "A new development of smart-home systems is to use mobile apps to control IoT devices across a Home Area Network (HAN). As verified in our study, those systems tend to rely on the Wi-Fi router to authenticate other devices. This treatment exposes them to attacks from malicious apps, particularly those running on authorized phones, which the router does not have information to control. Mitigating this threat cannot solely rely on IoT manufacturers, who may need to change the hardware on the devices to support encryption, increasing the cost of the device, or on software developers, whom we would need to trust to implement security correctly. In this work, we present a new technique to control the communication between the IoT devices and their apps in a unified, backward-compatible way. Our approach, called HanGuard, does not require any changes to the IoT devices themselves, the IoT apps or the OS of the participating phones. HanGuard uses an SDN-like approach to offer fine-grained protection: each phone runs a non-system userspace Monitor app to identify the party that attempts to access the protected IoT device and inform the router through a control plane of its access decision; the router enforces the decision on the data plane after verifying whether the phone should be allowed to talk to the device. We implemented our design over both Android and iOS (> 95% of mobile OS market share) and a popular router. Our study shows that HanGuard is both efficient and effective in practice."} {"_id": "012ad0c26d108dda9b035faa6e9545842c14d305", "title": "Digital Beamforming on Receive: Techniques and Optimization Strategies for High-Resolution Wide-Swath SAR Imaging", "text": "Synthetic Aperture Radar (SAR) is a well-proven imaging technique for remote sensing of the Earth. However, conventional SAR systems are not capable of fulfilling the increasing demands for improved spatial resolution and wider swath coverage. To overcome these inherent limitations, several innovative techniques have been suggested which employ multiple receive-apertures to gather additional information along the synthetic aperture. These digital beamforming (DBF) on receive techniques are reviewed with particular emphasis on the multi-aperture signal processing in azimuth, and a multi-aperture reconstruction algorithm is presented that allows for the unambiguous recovery of the Doppler spectrum. The impact of Doppler aliasing is investigated and an analytic expression for the residual azimuth ambiguities is derived. Further, the influence of the processing on the signal-to-noise ratio (SNR) is analyzed, resulting in a pulse repetition frequency (PRF) dependent factor describing the SNR scaling of the multi-aperture beamforming network. The focus is then turned to a complete high-resolution wide-swath SAR system design example which demonstrates the intricate connection between multi-aperture azimuth processing and the system architecture. In this regard, alternative processing approaches are compared with the multi-aperture reconstruction algorithm. In a next step, optimization strategies such as pattern tapering, prebeamshaping-on-receive, and modified processing algorithms are discussed.
In this context, the analytic expressions for both the residual ambiguities and the SNR scaling factor are generalized to cascaded beamforming networks. The suggested techniques can moreover be extended in many ways. Examples discussed are a combination with ScanSAR burst mode operation and the transfer to multistatic sparse array configurations."} {"_id": "4e23301ba855b5651bd0b152d6a118753fc5465d", "title": "Effects of domain on measures of semantic relatedness", "text": "Measures of semantic relatedness have been used in a variety of applications in information retrieval and language technology, such as measuring document similarity and cohesion of text. Definitions of such measures have ranged from using distance-based calculations over WordNet or other taxonomies to statistical distributional metrics over document collections such as Wikipedia or the Web. Existing measures do not explicitly consider the domain associations of terms when calculating relatedness: This article demonstrates that domain matters. We construct a data set of pairs of terms with associated domain information and extract pairs that are scored nearly identically by a sample of existing semantic-relatedness measures. We show that human judgments reliably score those pairs containing terms from the same domain as significantly more related than cross-domain pairs, even though the semantic-relatedness measures assign the pairs similar scores. We provide further evidence for this result using a machine learning setting by demonstrating that domain is an informative feature when learning a metric. We conclude that existing relatedness measures do not account for domain in the same way or to the same extent as do human judges."} {"_id": "1ae470266136fee5e98e0e62ba888167615a296a", "title": "A Lossy Image Codec Based on Index Coding", "text": "In this paper we propose a new lossy image codec based on index coding. Both J. Shapiro\u2019s embedded zerotree wavelet algorithm and A. Said and W. A. Pearlman\u2019s codetree algorithm (which is the state-of-the-art method today) use spatial orientation tree structures to implicitly locate the significant wavelet transform coefficients. Here a direct approach to find the positions of these significant coefficients is presented. The new algorithm combines the discrete wavelet transform, differential coding, variable-length coding of integers, ordered bit plane transmission, and adaptive arithmetic coding. The encoding can be stopped at any point, which allows a target rate or distortion metric to be met exactly. The bits in the bit stream are generated in the order of importance, yielding a fully embedded code to successively approximate the original image source; thus it\u2019s well suited for progressive image transmission. The decoder can also terminate the decoding at any point, and produce a lower bit rate reconstruction image. Our algorithm is very simple in its form (which will make the encoding and decoding very fast), requires no training of any kind or prior knowledge of image sources, and has a clear geometric structure. Its image coding results are quite competitive with almost all previously reported image compression algorithms on standard test images."} {"_id": "1b300a7858ab7870d36622a51b0549b1936572d4", "title": "Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation", "text": "In this paper, a new dynamic facial expression recognition method is proposed.
Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) a salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in the spatial domain and topological evolution information in the temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and the UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than those of other methods under comparison."} {"_id": "aeaf9157ecd47684f1e3569323a4b16040023cef", "title": "Contextual feature based one-class classifier approach for detecting video response spam on YouTube", "text": "YouTube is one of the largest video sharing websites (with social networking features) on the Internet. The immense popularity of YouTube, anonymity and a low publication barrier have resulted in several forms of misuse and video pollution, such as the uploading of malicious, copyright-violated and spam videos or content. YouTube has a popular and commonly used feature called video response which allows users to post a video response to an uploaded or existing video. Some of the popular videos on YouTube receive thousands of video responses. We observe the presence of opportunistic users posting unrelated, promotional, or pornographic videos (spam videos posted manually or using automated scripts) as video responses to existing videos. We present a method of mining YouTube to automatically detect video response spam. We formulate the problem of video response spam detection as a one-class classification problem (a recognition task) and divide the problem into three sub-problems: promotional video recognition, pornographic or dirty video recognition, and automated script or botnet uploader recognition. We create a sample dataset of target class videos for each of the three sub-problems and identify contextual features (meta-data based or non-content based features) characterizing the target class. Our empirical analysis reveals that certain linguistic features (presence of certain terms in the title or description of the YouTube video), temporal features, popularity based features, and time based features can be used to predict the video type. We identify features with discriminatory powers and use them within a one-class classification framework to recognize video response spam. We conduct a series of experiments to validate the proposed approach and present evidence to demonstrate the effectiveness of the proposed solution with more than 80% accuracy."} {"_id": "f6f7ab33c9112dd1e70e965d67dda84f35212862", "title": "A Generic Toolkit for the Successful Management of Delphi Studies", "text": "This paper presents the case of a non-traditional use of the Delphi method for theory evaluation.
On the basis of experience gained through secondary and primary research, a generic decision toolkit for Delphi studies is proposed, comprising a taxonomy of Delphi design choices, a stage model and critical methodological decisions. These research tools will help to increase confidence when adopting the Delphi alternative and allow for a wider and more comprehensive recognition of the method within both scientific and interpretivist studies."} {"_id": "62a3db8c7ba9d91800f4db689e592e763ecb2291", "title": "A Developmental Model of Critical Thinking", "text": "The critical thinking movement, it is suggested, has much to gain from conceptualizing its subject matter in a developmental framework. Most instructional programs designed to teach critical thinking do not draw on contemporary empirical research in cognitive development as a potential resource. The developmental model of critical thinking outlined here derives from contemporary empirical research on directions and processes of intellectual development in children and adolescents. It identifies three forms of second-order cognition (meta-knowing)--metacognitive, metastrategic, and epistemological--that constitute an essential part of what develops cognitively to make critical thinking possible."} {"_id": "b40a16a78d916eec89bdb51ac7f6b9296d9f7107", "title": "Extension of Soft-Switching Region of Dual-Active-Bridge Converter by a Tunable Resonant Tank", "text": "Hard-switching-induced switching loss can contribute significantly to the power loss of an isolated bidirectional dual-active-bridge (DAB) dc–dc converter operating at high frequency. An $LC$-type series resonant DAB converter based on a switch-controlled-inductor (SCI) is proposed to mitigate the loss arising from hard switching under wide-range variations in output voltage and current. Zero-voltage switching is achieved at the primary side (high voltage), while at the secondary side (low voltage), zero-current switching is preferred to reduce excessive ringing due to circulating current and switching loss. In order to achieve reduced conduction loss, a nominal operating point is chosen where the root-mean-square resonant tank current is the minimum. To validate the proposed topology and modulation scheme, an $LC$-type series resonant DAB converter based on SCI operating at 100 kHz is designed to interface a 400-V dc bus to a supercapacitor-based energy storage. Simulation and experimental results validate the effectiveness of the proposed topology for charging/discharging a supercapacitor with an output voltage variation between 10 and 48 V and a maximum rated power of 480 W. A maximum efficiency of 94.6% is achieved using the proposed topology and modulation scheme."} {"_id": "3a1c06e9157d0709ad477e39101b16c727d485a2", "title": "WordNet Atlas: a web application for visualizing WordNet as a zoomable map", "text": "The English WordNet is a lexical database containing more than 206,900 word-concept pairs and more than 377,500 semantic and lexical links. Trying to visualize them all at once as a node-link diagram results in a representation which can be too big and complex for the user to grasp, and which cannot be easily processed by current web technologies. We propose a visualization technique based on the concept of semantic zooming, usually found in popular web map applications. This technique makes it feasible to handle large and complex graphs, and display them in an interactive representation. 
By zooming, it is possible to switch from an overview visualization, quickly and clearly presenting the global characteristics of the structure, to a detailed one, displaying a classic node-link diagram. WordNet Atlas is a web application that leverages this method to aid the user in the exploration of the WordNet data set, from its global taxonomy structure to the detail of words and synonym sets."} {"_id": "0b5ce6a35b0c7e19c77a4b93cd317e3d3a3e2fa4", "title": "Reduced-Order Model and Stability Analysis of Low-Voltage DC Microgrid", "text": "Depleting fossil fuels, increasing energy demand, and the need for a high-reliability power supply motivate the use of dc microgrids. This paper analyzes the stability of low-voltage dc microgrid systems. Sources are controlled using a droop-based decentralized controller. Various components of the system have been modeled. A linearized system model is derived using a small-signal approximation. The stability of the system is analyzed by identifying the eigenvalues of the system matrix. The sufficiency condition for stable operation of the system is derived. It provides an upper bound on droop constants and is useful during planning and designing of dc microgrids. Furthermore, the sensitivity of system poles to variation in cable resistance and inductance is identified. It is proved that the poles move further inside the negative real plane with a decrease in inductance or an increase in resistance. The method proposed in this paper is applicable to any interconnecting structure of sources and loads. The results obtained by analysis are verified by a detailed simulation study. Root locus plots are included to confirm the movement of system poles. The viability of the model is confirmed by experimental results from a scaled-down laboratory prototype of a dc microgrid developed for the purpose."} {"_id": "bbebb8f7023ebfb2d2aff69a2dbad13e49b6f84d", "title": "Hierarchical Control of Droop-Controlled AC and DC Microgrids—A General Approach Toward Standardization", "text": "AC and dc microgrids (MGs) are key elements for integrating renewable and distributed energy resources as well as distributed energy-storage systems. In the last several years, efforts toward the standardization of these MGs have been made. In this sense, this paper presents the hierarchical control derived from ISA-95 and electrical dispatching standards to endow MGs with smartness and flexibility. The hierarchical control proposed consists of three levels: 1) The primary control is based on the droop method, including an output-impedance virtual loop; 2) the secondary control allows the restoration of the deviations produced by the primary control; and 3) the tertiary control manages the power flow between the MG and the external electrical distribution system. Results from a hierarchically controlled MG are provided to show the feasibility of the proposed approach."} {"_id": "688ef6ec1f26125d84eb14e253ab815e2def5710", "title": "Secondary Load-Frequency Control for MicroGrids in Islanded Operation", "text": "The objective of this paper is to present novel control strategies for MicroGrid operation, especially in islanded mode. 
The control strategies mainly involve the coordination of secondary load-frequency control by a MicroGrid Central Controller that heads a hierarchical control system able to assure stable and secure operation when the islanding of the MicroGrid occurs and in load-following situations in islanded mode."} {"_id": "791c408f97de64fbae0f2223cf433f2ad258f7d7", "title": "Decentralized Control for Parallel Operation of Distributed Generation Inverters Using Resistive Output Impedance", "text": "In this paper, a novel wireless load-sharing controller for islanding parallel inverters in an ac-distributed system is proposed. This paper explores the resistive output impedance of the parallel-connected inverters in an island microgrid. The control loops are devised and analyzed, taking into account the special nature of a low-voltage microgrid, in which the line impedance is mainly resistive and the distance between the inverters makes the control intercommunication between them difficult. In contrast with the conventional droop-control method, the proposed controller uses resistive output impedance, and as a result, a different control law is obtained. The controller is implemented by using a digital signal processor board, which only uses local measurements of the unit, thus increasing the modularity, reliability, and flexibility of the distributed system. Experimental results are provided from two 6-kVA inverters connected in parallel, showing the features of the proposed wireless control."} {"_id": "39d56e05caa642bdb292832fa5a01c5c597a0203", "title": "Eval all, trust a few, do wrong to none: Comparing sentence generation models", "text": "In this paper, we study recent neural generative models for text generation related to variational autoencoders. These models employ various techniques to match the posterior and prior distributions, which is important to ensure a high sample quality and a low reconstruction error. In our study, we follow a rigorous evaluation protocol using a large set of previously used and novel automatic metrics and human evaluation of both generated samples and reconstructions. We hope that it will become the new evaluation standard when comparing neural generative models for text."} {"_id": "8d89c3789e91f0b0d3ffd46bff50dd07618db636", "title": "DEEP LEARNING AND INTELLIGENT AUDIO MIXING", "text": "Mixing multitrack audio is a crucial part of music production. With recent advances in machine learning techniques such as deep learning, it is of great importance to conduct research on the applications of these methods in the field of automatic mixing. In this paper, we present a survey of intelligent audio mixing systems and their recent incorporation of deep neural networks. We propose to the community a research trajectory in the field of deep learning applied to intelligent music production systems. We conclude with a proof of concept based on stem audio mixing as a content-based transformation using a deep autoencoder."} {"_id": "0451c923703472b6c20ff11185001f24b76c48e3", "title": "Coordination and Geometric Optimization via Distributed Dynamical Systems", "text": "Emerging applications for networked and cooperative robots motivate the study of motion coordination for groups of agents. For example, it is envisioned that groups of agents will perform a variety of useful tasks including surveillance, exploration, and environmental monitoring. 
This paper deals with basic interactions among mobile agents such as “move away from the closest other agent” or “move toward the furthest vertex of your own Voronoi polygon.” These simple interactions amount to distributed dynamical systems because their implementation requires only minimal information about neighboring agents. We characterize the close relationship between these distributed dynamical systems and the disk-covering and sphere-packing cost functions from geometric optimization. Our main results are: (i) we characterize the smoothness properties of these geometric cost functions, (ii) we show that the interaction laws are variations of the nonsmooth gradient of the cost functions, and (iii) we establish various asymptotic convergence properties of the laws. The technical approach relies on concepts from computational geometry, nonsmooth analysis, and nonsmooth stability theory."} {"_id": "a26267a6b8dd93104c2efb8cebe5a728112f6bbf", "title": "Online fuzzy c means", "text": "Clustering streaming data presents the problem of not having all the data available at one time. Further, the total size of the data may be larger than will fit in the available memory of a typical computer. If the data is very large, it is a challenge to apply fuzzy clustering algorithms to get a partition in a timely manner. In this paper, we present an online fuzzy clustering algorithm which can be used to cluster streaming data, as well as very large data sets which might be treated as streaming data. Results on several large volumes of magnetic resonance images show that the new algorithm produces partitions which are very close to what you could get if you clustered all the data at one time. So, the algorithm is an accurate approach for online clustering."} {"_id": "00d39f57d9c2f430f1aec24171e2f73d46150e2c", "title": "The architecture of subterranean ant nests: beauty and mystery underfoot", "text": "Over the 100 million years of their evolution, ants have constructed or occupied nests in a wide range of materials and situations. A large number of ant species excavate nests in the soil, and these subterranean nests have evolved into a wide range of sizes and architectures. On the basis of casts made of such nests, this variation and the patterns that govern it are described. The possible functions of architectural features are discussed, as are the behavioral “rules” through which the nests are created by worker ants."} {"_id": "450a4c4840ecf5380dc18ca1dd55618ca6ce4940", "title": "A Reorder Buffer Design for High Performance Processors", "text": "Modern reorder buffers (ROBs) were conceived to improve processor performance by allowing instruction execution out of the original program order and running ahead of sequential instruction code, exploiting existing instruction-level parallelism (ILP). The ROB is a functional structure of a processor execution engine that supports speculative execution, physical register recycling, and precise exception recovery. Traditionally, the ROB is considered a monolithic circular buffer with incoming instructions at the tail pointer after the decoding stage and completing instructions at the head pointer after the commitment stage. The latter stage verifies instructions that have been dispatched, issued, executed, and are not completed speculatively. 
This paper presents a design for a distributed reorder buffer microarchitecture by using small structures near building blocks which work together, using the same tail and head pointer values on all structures for synchronization. The reduction in area, and therefore in power and delay, makes this design suitable for both embedded and high performance microprocessors."} {"_id": "a069865afb026979afc15913ac39266c013fd1da", "title": "Achieving ICS Resilience and Security through Granular Data Flow Management", "text": "Modern Industrial Control Systems (ICS) rely on enterprise-to-plant-floor connectivity. Where the size, diversity, and therefore complexity of ICS increase, operational requirements, goals, and challenges defined by users across various sub-systems follow. Recent trends in Information Technology (IT) and Operational Technology (OT) convergence may cause operators to lose a comprehensive understanding of end-to-end data flow requirements. This presents a risk to system security and resilience. Sensors were once solely applied for operational process use, but now act as inputs supporting a diverse set of organisational requirements. If these are not fully understood, incomplete risk assessment and inappropriate implementation of security controls could occur. In search of a solution, operators may turn to standards and guidelines. This paper reviews popular standards and guidelines, prior to the presentation of a case study and conceptual tool, highlighting the importance of data flows, critical data processing points, and system-to-user relationships. The proposed approach forms a basis for risk assessment and security control implementation, aiding the evolution of ICS security and resilience."} {"_id": "87e064d1f7351c7dab9809fd0248297dcb841774", "title": "Adversarial Frontier Stitching for Remote Neural Network Watermarking", "text": "The state-of-the-art performance of deep learning models comes at a high cost for companies and institutions, due to the tedious data collection and the heavy processing requirements. Recently, Uchida et al. (2017) proposed to watermark convolutional neural networks by embedding information into their weights. While this is clear progress towards model protection, this technique solely allows for extracting the watermark from a network that one accesses locally and entirely. This is a clear impediment, as leaked models can be re-used privately, and thus not released publicly for ownership inspection. Instead, we aim at allowing the extraction of the watermark from a neural network (or any other machine learning model) that is operated remotely, and available through a service API. To this end, we propose to operate on the model’s action itself, tweaking slightly its decision frontiers so that a set of specific queries convey the desired information. In the present paper, we formally introduce the problem and propose a novel zero-bit watermarking algorithm that makes use of adversarial model examples (called adversaries for short). While limiting the loss of performance of the protected model, this algorithm allows subsequent extraction of the watermark using only a few remote queries. 
We experiment with this approach on the MNIST dataset with three types of neural networks, demonstrating that, e.g., watermarking with 100 images incurs a slight accuracy degradation, while being resilient to most removal attacks."} {"_id": "0cf4105ec11fb5846e5ea1b9dea11f8ba16e391f", "title": "Strokelets: A Learned Multi-scale Representation for Scene Text Recognition", "text": "Driven by the wide range of applications, scene text detection and recognition have become active research topics in computer vision. Though extensively studied, localizing and reading text in uncontrolled environments remain extremely challenging, due to various interference factors. In this paper, we propose a novel multi-scale representation for scene text recognition. This representation consists of a set of detectable primitives, termed strokelets, which capture the essential substructures of characters at different granularities. Strokelets possess four distinctive advantages: (1) Usability: automatically learned from bounding box labels, (2) Robustness: insensitive to interference factors, (3) Generality: applicable to different languages, and (4) Expressivity: effective at describing characters. Extensive experiments on standard benchmarks verify the advantages of strokelets and demonstrate the effectiveness of the proposed algorithm for text recognition."} {"_id": "11c94b5b20c0347f4e5e9f70749018b6ca318afc", "title": "Unsupervised Footwear Impression Analysis and Retrieval from Crime Scene Data", "text": "Footwear impressions are one of the most frequently secured types of evidence at crime scenes. For the investigation of crime series, they are among the major investigative notes. In this paper, we introduce an unsupervised footwear retrieval algorithm that is able to cope with unconstrained noise conditions and is invariant to rigid transformations. A main challenge for the automated impression analysis is the separation of the actual shoe sole information from the structured background noise. We approach this issue by the analysis of periodic patterns. Given unconstrained noise conditions, the redundancy within periodic patterns makes them the most reliable information source in the image. In this work, we present four main contributions: First, we robustly measure local periodicity by fitting a periodic pattern model to the image. Second, based on the model, we normalize the orientation of the image and compute the window size for a local Fourier transformation. In this way, we avoid distortions of the frequency spectrum through other structures or boundary artefacts. Third, we segment the pattern through robust point-wise classification, making use of the property that the amplitudes of the frequency spectrum are constant for each position in a periodic pattern. Finally, the similarity between footwear impressions is measured by comparing the Fourier representations of the periodic patterns. We demonstrate robustness against severe noise distortions as well as rigid transformations on a database with real crime scene impressions. Moreover, we make our database available to the public, thus enabling standardized benchmarking for the first time."} {"_id": "18b4db8a705a65c277912fc13647b844c1de54d0", "title": "An approach to online identification of Takagi-Sugeno fuzzy models", "text": "An approach to the online learning of Takagi-Sugeno (TS) type models is proposed in the paper. 
It is based on a novel learning algorithm that recursively updates TS model structure and parameters by combining supervised and unsupervised learning. The rule-base and parameters of the TS model continually evolve by adding new rules with more summarization power and by modifying existing rules and parameters. In this way, the rule-base structure is inherited and updated when new data become available. By applying this learning concept to the TS model we arrive at a new type of adaptive model called the Evolving Takagi-Sugeno model (ETS). The adaptive nature of these evolving TS models in combination with the highly transparent and compact form of fuzzy rules makes them a promising candidate for online modeling and control of complex processes, competitive with neural networks. The approach has been tested on data from an air-conditioning installation serving a real building. The results illustrate the viability and efficiency of the approach. The proposed concept, however, has significantly wider implications in a number of fields, including adaptive nonlinear control, fault detection and diagnostics, performance analysis, forecasting, knowledge extraction, robotics, behavior modeling."} {"_id": "38731066f2e444c69818c8533b219b3db2826f18", "title": "Automatic speech recognition and speech variability: A review", "text": "Major progress is being recorded regularly on both the technology and exploitation of automatic speech recognition (ASR) and spoken language systems. However, there are still technological barriers to flexible solutions and user satisfaction under some circumstances. This is related to several factors, such as the sensitivity to the environment (background noise), or the weak representation of grammatical and semantic knowledge. Current research is also emphasizing deficiencies in dealing with variation naturally present in speech. For instance, the lack of robustness to foreign accents precludes their use by specific populations. Also, some applications, like directory assistance, particularly stress the core recognition technology due to the very high active vocabulary (application perplexity). There are actually many factors affecting the speech realization: regional, sociolinguistic, or related to the environment or the speaker herself. These create a wide range of variations that may not be modeled correctly (speaker, gender, speaking rate, vocal effort, regional accent, speaking style, non-stationarity, etc.), especially when resources for system training are scarce. This paper outlines current advances related to these topics."} {"_id": "dabbe2b9310c03999668ee6dbffb9d710fb3a621", "title": "Accurate Pulmonary Nodule Detection in Computed Tomography Images Using Deep Convolutional Neural Networks", "text": "Early detection of pulmonary cancer is the most promising way to enhance a patient’s chance for survival. Accurate pulmonary nodule detection in computed tomography (CT) images is a crucial step in diagnosing pulmonary cancer. In this paper, inspired by the successful use of deep convolutional neural networks (DCNNs) in natural image recognition, we propose a novel pulmonary nodule detection approach based on DCNNs. We first introduce a deconvolutional structure to Faster Region-based Convolutional Neural Network (Faster R-CNN) for candidate detection on axial slices. Then, a three-dimensional DCNN is presented for the subsequent false positive reduction. 
Experimental results of the LUng Nodule Analysis 2016 (LUNA16) Challenge demonstrate the superior detection performance of the proposed approach on nodule detection (average FROC score of 0.891, ranking first among all submitted results)."} {"_id": "9915efb95d14d3fb45bd9834982b15dda5d4abc8", "title": "Kalman Particle Filter for lane recognition on rural roads", "text": "Despite the availability of lane departure and lane keeping systems for highway assistance, unmarked and winding rural roads still pose challenges to lane recognition systems. To detect an upcoming curve as soon as possible, the viewing range of image-based lane recognition systems has to be extended. This is done by evaluating 3D information obtained from stereo vision or imaging radar in this paper. Both sensors deliver evidence grids as the basis for road course estimation. Besides known Kalman Filter approaches, Particle Filters have recently gained interest since they offer the possibility to employ cues of a road, which cannot be described as measurements needed for a Kalman Filter approach. We propose to combine both principles and their benefits in a Kalman Particle Filter. The comparison between the results gained from this recently published filter scheme and the classical approaches using real-world data proves the advantages of the Kalman Particle Filter."} {"_id": "10b4dd334893c12d81d61cff7cf2e3b2e6b1ef21", "title": "A risk taxonomy proposal for software maintenance", "text": "There can be no doubt that risk management is an important activity in the software engineering area. One proof of this is the large body of work existing in this area. However, when one takes a closer look at it, one perceives that almost all this work is concerned with risk management for software development projects. The literature on risk management for software maintenance is much scarcer. On the other hand, software maintenance projects do present specificities that imply they offer different risks than development. This suggests that maintenance projects could greatly benefit from better risk management tools. One step in this direction would be to help identify potential risk factors at the beginning of a maintenance project. For this, we propose a taxonomy of possible risks for software maintenance projects. The taxonomy was created from: i) an extensive survey of risk management literature, to list known risk factors for software development; and, ii) an extensive survey of maintenance literature, to list known problems that may occur during maintenance."} {"_id": "a6eee06341987faeb8b6135d00d578d0d8893162", "title": "An industrial study on the risk of software changes", "text": "Modelling and understanding bugs has been the focus of much of the Software Engineering research today. However, organizations are interested in more than just bugs. In particular, they are more concerned about managing risk, i.e., the likelihood that a code or design change will cause a negative impact on their products and processes, regardless of whether or not it introduces a bug. In this paper, we conduct a year-long study involving more than 450 developers of a large enterprise, spanning more than 60 teams, to better understand risky changes, i.e., changes for which developers believe that additional attention is needed in the form of careful code or design reviewing and/or more testing. Our findings show that different developers and different teams have their own criteria for determining risky changes. 
Using factors extracted from the changes and the history of the files modified by the changes, we are able to accurately identify risky changes with a recall of more than 67%, and a precision improvement of 87% (using developer-specific models) and 37% (using team-specific models), over a random model. We find that the number of lines and chunks of code added by the change, the bugginess of the files being changed, the number of bug reports linked to a change and the developer experience are the best indicators of change risk. In addition, we find that when a change has many related changes, the reliability of developers in marking risky changes is negatively affected. Our findings and models are being used today in practice to manage the risk of software projects."} {"_id": "c92ceb6b20df814252a2d0afa601ce88ffb73cc8", "title": "Generative Modeling with Conditional Autoencoders: Building an Integrated Cell", "text": "We present a conditional generative model to learn variation in cell and nuclear morphology and the location of subcellular structures from microscopy images. Our model generalizes to a wide range of subcellular localization and allows for a probabilistic interpretation of cell and nuclear morphology and structure localization from fluorescence images. We demonstrate the effectiveness of our approach by producing photorealistic cell images using our generative model. The conditional nature of the model provides the ability to predict the localization of unobserved structures given cell and nuclear morphology."} {"_id": "2011736601756486b7a7b0f6b151222ac65121b4", "title": "Stock Direction Forecasting Techniques: An Empirical Study Combining Machine Learning System with Market Indicators in the Indian Context", "text": "Stock price movement prediction has been one of the most challenging issues in finance since time immemorial. Many researchers in the past have carried out extensive studies with the intention of investigating the approaches that uncover the hidden information in stock market data. As a result, Artificial Intelligence and data mining techniques have come to the forefront because of their ability to map nonlinear data. The study encapsulates market indicators with AI techniques to generate useful extracts to improve decisions under conditions of uncertainty. Three approaches (fundamental model, technical indicators model and hybrid model) have been tested using the standalone and integrated machine learning algorithms, viz. SVM, ANN, GA-SVM, and GA-ANN, and the results of all three approaches have been compared across the four above-mentioned methods. The core objective of this paper is to identify an approach from the above-mentioned algorithms that best predicts Indian stock price movement. It is observed from the results that the use of GA significantly increases the accuracy of ANN and that the use of technical analysis with SVM and ANN is well suited for Indian stocks and can help investors and traders maximize their quarterly profits."} {"_id": "b2c4bfd31c9ff0aded9040ac22db71a32ce8d58b", "title": "Computation of the characteristics of a claw pole alternator through the finite element method", "text": "The paper presents the analysis of a 3-D numerical model developed for a claw pole alternator. The complex structure of the claw-pole magnetic circuit required a 3D FEM model and a double scalar potential magnetic φ-φred formulation, in order to reduce computing time and memory. The no-load and magnetization characteristics and the e.m.f. 
time dependence have been calculated. Working characteristics and the induced voltage in the static winding can be calculated by knowing the 3D distribution of the field in the stationary magnetic regime for successive positions of the rotor relative to the stator."} {"_id": "a80caf1996a7ad2ce43abe6a7a78c7ded4adea8c", "title": "Deep Reinforcement Learning for Optimal Control of Space Heating", "text": "Classical methods to control heating systems are often marred by suboptimal performance, inability to adapt to dynamic conditions and unreasonable assumptions, e.g., the existence of building models. This paper presents a novel deep reinforcement learning algorithm which can control space heating in buildings in a computationally efficient manner, and benchmarks it against other known techniques. The proposed algorithm outperforms rule-based control by 5-10% in a simulation environment for a number of price signals. We conclude that, while not optimal, the proposed algorithm offers additional practical advantages such as faster computation times and increased robustness to non-stationarities in building dynamics."} {"_id": "a16775abc45bfc9fca92f375ccc5032289d893b5", "title": "Star Ratings versus Sentiment Analysis -- A Comparison of Explicit and Implicit Measures of Opinions", "text": "A typical trade-off in decision making is between the cost of acquiring information and the decline in decision quality caused by insufficient information. Consumers regularly face this trade-off in purchase decisions. Online product/service reviews serve as sources of product/service-related information. Meanwhile, modern technology has led to an abundance of such content, which makes it prohibitively costly (if possible at all) to exhaust all available information. Consumers need to decide what subset of available information to use. Star ratings are excellent cues for this decision as they provide a quick indication of the tone of a review. However, there are cases where such ratings are not available or detailed enough. Sentiment analysis (text analytic techniques that automatically detect the polarity of text) can help in these situations with more refined analysis. In this study, we compare sentiment analysis results with star ratings in three different domains to explore the promise of this technique."} {"_id": "2110fe9907b873be02e4a26a01a3b08ca66035a6", "title": "Design and Field Test of a WSN Platform Prototype for Long-Term Environmental Monitoring", "text": "Long-term wildfire monitoring using distributed in situ temperature sensors is an accurate, yet demanding environmental monitoring application, which requires long-life, low-maintenance, low-cost sensors and a simple, fast, error-proof deployment procedure. We present in this paper the most important design considerations and optimizations of all elements of a low-cost WSN platform prototype for long-term, low-maintenance pervasive wildfire monitoring, its preparation for a nearly three-month field test, the analysis of the causes of failure during the test and the lessons learned for platform improvement. The main components of the total cost of the platform (nodes, deployment and maintenance) are carefully analyzed and optimized for this application. The gateways are designed to operate with resources that are generally used for sensor nodes, while the requirements and cost of the sensor nodes are significantly lower. We define and test in simulation and in the field experiment a simple, but effective communication protocol for this application. 
It helps to lower the cost of the nodes and field deployment procedure, while extending the theoretical lifetime of the sensor nodes to over 16 years on a single 1 Ah lithium battery."} {"_id": "585bf9bf946b4cd4571b2fe6f73f5a7ba9a3d601", "title": "Building Natural Language Interfaces to Web APIs", "text": "As the Web evolves towards a service-oriented architecture, application program interfaces (APIs) are becoming an increasingly important way to provide access to data, services, and devices. We study the problem of natural language interfaces to APIs (NL2APIs), with a focus on web APIs for web services. Such NL2APIs have many potential benefits, for example, facilitating the integration of web services into virtual assistants.\n We propose the first end-to-end framework to build an NL2API for a given web API. A key challenge is to collect training data, i.e., NL command-API call pairs, from which an NL2API can learn the semantic mapping from ambiguous, informal NL commands to formal API calls. We propose a novel approach to collect training data for NL2API via crowdsourcing, where crowd workers are employed to generate diversified NL commands. We optimize the crowdsourcing process to further reduce the cost. More specifically, we propose a novel hierarchical probabilistic model for the crowdsourcing process, which guides us to allocate budget to those API calls that have a high value for training NL2APIs. We apply our framework to real-world APIs, and show that it can collect high-quality training data at a low cost, and build NL2APIs with good performance from scratch. We also show that our modeling of the crowdsourcing process can improve its effectiveness, such that the training data collected via our approach leads to better performance of NL2APIs than a strong baseline."} {"_id": "9c375b82db7c42addc406adbed8a796a6ad7fb15", "title": "Contrast-Enhanced Black and White Images", "text": "This paper investigates contrast enhancement as an approach to tone reduction, aiming to convert a photograph to black and white. Using a filter-based approach to strengthen contrast, we avoid making a hard decision about how to assign tones to segmented regions. Our method is inspired by sticks filtering, used to enhance medical images but not previously used in non-photorealistic rendering. We amplify contrast of pixels along the direction of greatest local difference from the mean, strengthening even weak features if they are most prominent. A final thresholding step converts the contrast-enhanced image to black and white. Local smoothing and contrast enhancement balance abstraction and structure preservation; the main advantage of our method is its faithful depiction of image detail. Our method can create a set of effects: line drawing, hatching, and black and white, all having superior detail to previous black and white methods."} {"_id": "409ff05931b5f252935930ecd8de4e62bc0c7d80", "title": "The City Browser: Utilizing Massive Call Data to Infer City Mobility Dynamics", "text": "This paper presents the City Browser, a tool developed to analyze the complexities underlying human mobility at the city scale. The tool uses data generated from mobile phones as a proxy to provide several insights with regard to the commuting patterns of the population within the bounds of a city. The three major components of the browser are the data warehouse, modules and algorithm, and the visualization interface. 
The modules and algorithm component utilizes Call Detail Records (CDRs) stored within the data warehouse to infer mobility patterns that are then communicated through the visualization interface. The modules and algorithm component consists of four modules: the spatial-temporal decomposition module, the home/work capturing module, the community detection module, and the flow estimation module. The visualization interface manages the output of each module to provide a comprehensive view of a city’s mobility dynamics over varying time scales. A case study is presented on the city of Riyadh in Saudi Arabia, where the browser was developed to better understand city mobility patterns."} {"_id": "1a5214cdb88ca0c4f276d8c4e5797d19c662b8a4", "title": "A comprehensive framework for testing graphical user interfaces", "text": "The widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. Although the use of GUIs continues to grow, GUI testing has remained a neglected research area. Since GUIs have characteristics that are different from those of conventional software, such as user events for input and graphical output, techniques developed to test conventional software cannot be directly applied to test GUIs. This thesis develops a unified solution to the GUI testing problem with the particular goals of automation and integration of tools and techniques used in various phases of GUI testing. These goals are accomplished by developing a GUI testing framework with a GUI model as its central component. For efficiency and scalability, a GUI is represented as a hierarchy of components, each used as a basic unit of testing. The framework also includes a test coverage evaluator, test case generator, test oracle, test executor, and regression tester. The test coverage evaluator employs hierarchical, event-based coverage criteria to automatically specify what to test in a GUI and to determine whether the test suite has adequately tested the GUI. The test case generator employs plan generation techniques from artificial intelligence to automatically generate a test suite. A test executor automatically executes all the test cases on the GUI. As test cases are being executed, a test oracle automatically determines the correctness of the GUI. The test oracle employs a model of the expected state of the GUI in terms of its constituent objects and their properties. After changes are made to a GUI, a regression tester partitions the original GUI test suite into valid test cases that represent correct input/output for the modified GUI and invalid test cases that no longer represent correct input/output. The regression tester employs a new technique to reuse some of the invalid test cases by repairing them. A cursory exploration of extending the framework to handle the new testing requirements of web-user interfaces (WUIs) is also done. The framework has been implemented and experiments have demonstrated that the developed techniques are both practical and useful. 
"} {"_id": "2d4abf7523cda78e39029c46b19cbae74e7ee31b", "title": "A Safe, Efficient Regression Test Selection Technique", "text": "Regression testing is an expensive but necessary maintenance activity performed on modified software to provide confidence that changes are correct and do not adversely affect other portions of the software. A regression test selection technique chooses, from an existing test set, tests that are deemed necessary to validate modified software. We present a new technique for regression test selection. Our algorithms construct control flow graphs for a procedure or program and its modified version and use these graphs to select tests that execute changed code from the original test suite. We prove that, under certain conditions, the set of tests our technique selects includes every test from the original test suite that can expose faults in the modified procedure or program. Under these conditions our algorithms are safe. Moreover, although our algorithms may select some tests that cannot expose faults, they are at least as precise as other safe regression test selection algorithms. Unlike many other regression test selection algorithms, our algorithms handle all language constructs and all types of program modifications. We have implemented our algorithms; initial empirical studies indicate that our technique can significantly reduce the cost of regression testing modified software."} {"_id": "6f098bda64fbd59215a3e9686306b4dfb7ed3ac7", "title": "Coverage criteria for GUI testing", "text": "A widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to GUIs. This paper's focus is on coverage criteria for GUIs, important rules that provide an objective measure of test quality. We present new coverage criteria to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any non-trivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. A GUI is decomposed into GUI components, each of which is used as a basic unit of testing. A representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are given to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates the usefulness of the coverage report to guide further testing and an important correlation between event-based coverage of a GUI and statement coverage of its software's underlying code."} {"_id": "833723cbc2d1930d7e002acd882fda73152b213c", "title": "A Safe, Efficient Algorithm for Regression Test Selection", "text": "Regression testing is a necessary but costly maintenance activity aimed at demonstrating that code has not been adversely affected by changes. 
A selective approach to regression testing selects tests for a modified program from an existing test suite. We present a new technique for selective regression testing. Our algorithm constructs control dependence graphs for program versions, and uses these graphs to determine which tests from the existing test suite may exhibit changed behavior on the new version. Unlike most previous techniques for selective retest, our algorithm selects every test from the original test suite that might expose errors in the modified program, and does this without prior knowledge of program modifications. Our algorithm handles all language constructs and program modifications, and is easily automated."} {"_id": "0a37a647a2f8464379a1fe327f93561c90d91405", "title": "An Introduction to Least Commitment Planning", "text": "Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This paper summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation, and ending with one that manages actions with disjunctive preconditions, conditional effects and universal quantification over dynamic universes. Along the way we explain how Chapman's formulation of the Modal Truth Criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to our planner."} {"_id": "b366fce866deceddb9acc46a87481bc6b36b0850", "title": "Polygenic Influence on Educational Attainment: New evidence from The National Longitudinal Study of Adolescent to Adult Health.", "text": "Recent studies have begun to uncover the genetic architecture of educational attainment. We build on this work using genome-wide data from siblings in the National Longitudinal Study of Adolescent to Adult Health (Add Health). We measure the genetic predisposition of siblings to educational attainment using polygenic scores. We then test how polygenic scores are related to social environments and educational outcomes. In Add Health, genetic predisposition to educational attainment is patterned across the social environment. Participants with higher polygenic scores were more likely to grow up in socially advantaged families. Even so, the previously published genetic associations appear to be causal. Among pairs of siblings, the sibling with the higher polygenic score typically went on to complete more years of schooling as compared to their lower-scored co-sibling. We found subtle differences between sibling fixed effect estimates of the genetic effect versus those based on unrelated individuals."} {"_id": "3889a2a0b3a178136aea3c5b91905bd20e765b4f", "title": "A 4-µW 0.8-V rail-to-rail input/output CMOS fully differential OpAmp", "text": "This paper presents an ultra-low-power rail-to-rail input/output operational amplifier (OpAmp) designed in a low-cost 0.18 µm CMOS technology. In this OpAmp, rail-to-rail input operation is enabled by using complementary input pairs with gm control. 
To maximize the output swing, a rail-to-rail output stage is employed. For low-voltage low-power operation, the operating transistors in the input and output stage are biased in the sub-threshold region. The simulated DC open loop gain is 51 dB, and the slew rate is 0.04 V/µs with a 10 pF capacitive load connected to each of the amplifier outputs. For the same load, the simulated unity gain frequency is 131 kHz with a 64° phase margin. A common-mode feed-forward circuit (CMFF) increases CMRR, drastically reducing the variations in the output common mode voltage and keeping the DC gain almost constant. In fact, their relative error remains below 1.2 % for a (−20°C, +120°C) temperature span. In addition, the proposed OpAmp is very simple and consumes only 4 µW at 0.8 V supply."} {"_id": "4b18303edf701e41a288da36f8f1ba129da67eb7", "title": "An embarrassingly simple approach to zero-shot learning", "text": "Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a network with two linear layers, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approach, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of the art on all of them, obtaining a ratio of improvement up to 17%."} {"_id": "516f842467cc89f0d6551823d6aa0af2c3233c75", "title": "Mosaic: quantifying privacy leakage in mobile networks", "text": "With the proliferation of online social networking (OSN) and mobile devices, preserving user privacy has become a great challenge. While prior studies have directly focused on OSN services, we call attention to the privacy leakage in mobile network data. This concern is motivated by two factors. First, the prevalence of OSN usage leaves identifiable digital footprints that can be traced back to users in the real world. Second, the association between users and their mobile devices makes it easier to associate traffic to its owners. These pose a serious threat to user privacy as they enable an adversary to attribute significant portions of data traffic including the ones with NO identity leaks to network users' true identities. To demonstrate its feasibility, we develop the Tessellation methodology. By applying Tessellation on traffic from a cellular service provider (CSP), we show that up to 50% of the traffic can be attributed to the names of users. In addition to revealing the user identity, the reconstructed profile, dubbed "mosaic," associates personal information such as political views, browsing habits, and favorite apps to the users. 
We conclude by discussing approaches for preventing and mitigating the alarming leakage of sensitive user information."} {"_id": "d3d631baf08f6df03bdbadcfdc8938206ea96c5f", "title": "A similarity-based prognostics approach for Remaining Useful Life estimation of engineered systems", "text": "This paper presents a similarity-based approach for estimating the Remaining Useful Life (RUL) in prognostics. The approach is especially suitable for situations in which abundant run-to-failure data for an engineered system are available. Data from multiple units of the same system are used to create a library of degradation patterns. When estimating the RUL of a test unit, the data from it will be matched to those patterns in the library and the actual life of those matched units will be used as the basis of estimation. This approach is used to tackle the data challenge problem defined by the 2008 PHM Data Challenge Competition, in which run-to-failure data of an unspecified engineered system are provided and the RUL of a set of test units will be estimated. Results show that the similarity-based approach is very effective in performing RUL estimation."} {"_id": "7f6061c83dc36633911e4d726a497cdc1f31e58a", "title": "YouTube-8M: A Large-Scale Video Classification Benchmark", "text": "Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparably sized video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ∼8 million videos—500K hours of video—annotated with a vocabulary of 4800 visual entities. To get the videos and their (multiple) labels, we used a YouTube video annotation system, which labels videos with the main topics in them. While the labels are machine-generated, they have high precision and are derived from a variety of human-based signals including metadata and query click signals, so they represent an excellent target for content-based annotation approaches. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pretrained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. The dataset contains frame-level features for over 1.9 billion video frames and 8 million videos, making it the largest public multi-label video dataset. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using the publicly-available TensorFlow framework. We plan to release code for training a basic TensorFlow model and for computing metrics. We show that pre-training on large data generalizes to other datasets like Sports-1M and ActivityNet. We achieve state-of-the-art on ActivityNet, improving mAP from 53.8% to 77.6%. 
We hope that the unprecedented scale and diversity of YouTube-8M will lead to advances in video understanding and representation learning."} {"_id": "63f97f3b4808baeb3b16b68fcfdb0c786868baba", "title": "Double-ridged horn antenna operating in 18–40 GHz range", "text": "The whole design cycle, including design, fabrication and characterization of a broadband double-ridged horn antenna equipped with a parabolic lens and a waveguide adapter is presented in this paper. A major goal of the presented work was to obtain high directivity with a flat phase characteristic within the main radiation lobe in the 18–40 GHz frequency range, so that the antenna can be applied in a free-space material characterization setup."} {"_id": "27dc1f8252c94877c680634ee9119547429ab9b1", "title": "Population Cost Prediction on Public Healthcare Datasets", "text": "The increasing availability of digital health records should ideally improve accountability in healthcare. In this context, the study of predictive modeling of healthcare costs forms a foundation for accountable care, at both the population and individual patient level. In this research we use machine learning algorithms for accurate predictions of healthcare costs on publicly available claims and survey data. Specifically, we investigate the use of regression trees, M5 model trees and random forests to predict healthcare costs of individual patients given their prior medical (and cost) history.\n Overall, three observations showcase the utility of our research: (a) prior healthcare cost alone can be a good indicator for future healthcare cost, (b) the M5 model tree technique led to very accurate future healthcare cost prediction, and (c) although state-of-the-art machine learning algorithms are also limited by skewed cost distributions in healthcare, for a large fraction (75%) of the population, we were able to predict with higher accuracy using these algorithms. In particular, using M5 model trees we were able to accurately predict costs within less than $125 for 75% of the population when compared to prior techniques. Since models for predicting healthcare costs are often used to ascertain overall population health, our work is useful to evaluate future costs for large segments of disease populations with reasonably low error as demonstrated in our results on real-world publicly available datasets."} {"_id": "f5d40bb7a636a042f5c005273ecdae72ee29216c", "title": "Characterization of RF Noise in UTBB FD-SOI MOSFET", "text": "In this paper, we report noise measurements in the RF frequency range for ultrathin body and thin buried oxide fully depleted silicon on insulator (FD-SOI) transistors. We analyze the impact of back and front gate biases on the various noise parameters, along with discussions on the secondary effects in FD-SOI transistors which contribute to the thermal noise. Using calibrated TCAD simulations, we show that the noise figure changes with the substrate doping and buried oxide thickness."} {"_id": "213cb7593934bc675c336f53dd6c61a3c799be80", "title": "Duplicate Record Detection: A Survey", "text": "Often, in the real world, entities have two or more representations in databases. Duplicate records do not share a common key and/or they contain errors that make duplicate matching a difficult task. Errors are introduced as the result of transcription errors, incomplete information, lack of standard formats, or any combination of these factors. 
In this paper, we present a thorough analysis of the literature on duplicate record detection. We cover similarity metrics that are commonly used to detect similar field entries, and we present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database. We also cover multiple techniques for improving the efficiency and scalability of approximate duplicate detection algorithms. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area."} {"_id": "f59e2d2cf33c11bd9b58e157298ece51e28bfbba", "title": "Enhancing bank security system using Face Recognition, Iris Scanner and Palm Vein Technology", "text": "The objective of this paper is to design a bank locker security system which uses Face Recognition, Iris Scanning and Palm Vein Recognition (PVR) for securing valuable belongings. A face recognition system identifies and authenticates the image of an authorized user; it is implemented here using MATLAB software. The images of a person entering a restricted zone are taken by the camera, and the software compares the image with an existing database of valid users. An iris recognition system uses distinctive characteristics present in the human body. This technology is employed for biometric authentication in ATMs, immigration and border control, public safety, hospitality, tourism, etc. This paper presents techniques by which the capability of a palm vein recognition system can be improved using a modified vascular pattern thinning algorithm. Palm Vein Recognition (PVR) is a technology which recognizes the palm vein pattern of an individual and matches it with the data stored in a database for authentication. This is a very reliable technique with very good accuracy, and it is considered among the most secure techniques for security purposes."} {"_id": "c5151f18c2499f1d95522536f167f2fcf75f647f", "title": "Handover decision using fuzzy MADM in heterogeneous networks", "text": "In next-generation heterogeneous wireless networks, a user with a multi-interface terminal may have network access from different service providers using various technologies. It is believed that the handover decision is based on multiple criteria as well as user preference. Various approaches have been proposed to solve the handover decision problem, but the choice of decision method appears to be arbitrary and some of the methods even give disputable results. In this paper, a new handover criterion is introduced along with a new handover decision strategy. In addition, the handover decision is identified as a fuzzy multiple attribute decision making (MADM) problem, and fuzzy logic is applied to deal with the imprecise information of some criteria and user preference. After a systematic analysis of various fuzzy MADM methods, a feasible approach is presented. In the end, examples are provided illustrating the proposed methods, and the sensitivity of the methods is also analysed."} {"_id": "e321ab5d7a98e18253ed7874946a229a10e40f26", "title": "A Hybrid Feature Selection Algorithm For Classification Unbalanced Data Processsing", "text": "The performance and accuracy of a classifier are directly affected by the result of feature selection.
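The method description that follows builds on F-Score feature ranking. As background, here is a sketch of the classical two-class F-Score criterion; the paper's one-class and "improved" variants differ in detail, so this is an assumed baseline formulation, and the toy data are invented.

```python
# Classical two-class F-Score: between-class separation of per-feature means
# divided by within-class variance. Larger scores mean more discriminative
# features; ranking by score gives a simple filter-style feature selection.
import numpy as np

def f_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    pos, neg = X[y == 1], X[y == 0]
    mean, mp, mn = X.mean(0), pos.mean(0), neg.mean(0)
    num = (mp - mean) ** 2 + (mn - mean) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)          # guard against zero variance

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 2] + 0.1 * rng.normal(size=100) > 0).astype(int)  # feature 2 is informative
print(np.argsort(f_scores(X, y))[::-1])  # feature 2 should rank first
```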
Based on one-class F-Score feature selection, improved F-Score feature selection, and a genetic algorithm, combined with machine learning methods such as K-nearest neighbors, support vector machines, random forests, and naive Bayes, a hybrid feature selection algorithm is proposed to handle two-class unbalanced data problems and multi-class problems. Compared with traditional machine learning algorithms, it can search a wider feature space and, following heuristic rules, guide the classifier to deal with the characteristics of unbalanced data sets, which handles the unbalanced classification problem better. The experimental results show that the area under the receiver operating characteristic curve for two-class problems and the accuracy rate for multi-class problems are both improved compared with other models."} {"_id": "a81548c643ffb6d5f5e935880bb9db6290e5a386", "title": "Modular bases for fluid dynamics", "text": "We present a new approach to fluid simulation that balances the speed of model reduction with the flexibility of grid-based methods. We construct a set of composable reduced models, or tiles, which capture spatially localized fluid behavior. We then precompute coupling terms so that these models can be rearranged at runtime. To enforce consistency between tiles, we introduce constraint reduction. This technique modifies a reduced model so that a given set of linear constraints can be fulfilled. Because dynamics and constraints can be solved entirely in the reduced space, our method is extremely fast and scales to large domains."} {"_id": "a0c8b78146a19bfee1296c06eca79dd6c53dd40a", "title": "Conceptual Impact-Based Recommender System for CiteSeerx", "text": "CiteSeer is a digital library for scientific publications written by Computer Science researchers. Users are able to retrieve relevant documents from the database by searching by author name and/or keyword queries. Users may also receive recommendations of papers they might want to read, provided by an existing conceptual recommender system. This system recommends documents based on an automatically constructed user profile. Unlike traditional content-based recommender systems, the documents and the user profile are represented as concept vectors rather than keyword vectors, and papers are recommended based on conceptual matches rather than keyword matches between the profile and the documents. Although the current system provides recommendations that are on-topic, they are not necessarily high-quality papers. In this work, we introduce the Conceptual Impact-Based Recommender (CIBR), a hybrid recommender system that extends the existing conceptual recommender system in CiteSeer by including an explicit quality factor as part of the recommendation criteria. To measure quality, our system considers the impact factor of each paper's authors as measured by the authors' h-index. Experiments to evaluate the effectiveness of our hybrid system show that the CIBR system recommends more relevant papers as compared to the conceptual recommender system."} {"_id": "4c4e26c482da08a749b3e6585cd1cbb51089e8f4", "title": "Design & construction of a Vertical Axis Wind Turbine", "text": "A wind turbine is a device that converts kinetic energy from the wind into electrical power. A Vertical Axis Wind Turbine (VAWT) is a type of wind turbine whose main rotor shaft is set vertically and which can capture wind from any direction.
The aim of this work is to develop a theoretical model for the design and performance of a Darrieus-type vertical axis wind turbine for small-scale energy applications. A small three-bladed prototype turbine was constructed and its performance investigated at low wind velocities. The blades are based on the NACA 0018 airfoil, and light wood is used as the blade material. The full-scale Vertical Axis Wind Turbine is 36 inches in height and 24 inches in diameter, with a blade chord length of 3.937 inches and a blade height of 24 inches. A 100 W, 24 V brushless DC motor is used to measure output power. The rotational speed of the blades and the electric power output at the corresponding speed are measured with a tachometer and a wattmeter. The power curves show the relation between the rotational speed of the turbine and the power produced for a range of wind speeds. This approach points toward vertical axis wind turbines with better performance to meet the increasing power demand."} {"_id": "3a60d77d4bbc7561b011d004adbcb47b17080fbc", "title": "Learning Hierarchical Semantic Image Manipulation through Structured Representations", "text": "Understanding, reasoning, and manipulating semantic concepts of images have been a fundamental research problem for decades. Previous work mainly focused on direct manipulation of the natural image manifold through color strokes, keypoints, textures, and holes-to-fill. In this work, we present a novel hierarchical framework for semantic image manipulation. Key to our hierarchical framework is that we employ structured semantic layout as our intermediate representation for manipulation. Initialized with coarse-level bounding boxes, our structure generator first creates a pixel-wise semantic layout capturing the object shape, object-object interactions, and object-scene relations. Then our image generator fills in the pixel-level textures guided by the semantic layout. Such a framework allows a user to manipulate images at the object level by adding, removing, and moving one bounding box at a time. Experimental evaluations demonstrate the advantages of the hierarchical manipulation framework over existing image generation and context hole-filling models, both qualitatively and quantitatively. Benefits of the hierarchical framework are further demonstrated in applications such as semantic object manipulation, interactive image editing, and data-driven image manipulation."} {"_id": "405b92b42423fb011f5a26a6808471a60040d80a", "title": "A computationally efficient limited memory CMA-ES for large scale optimization", "text": "We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex optimization problems in continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors allows the time and memory complexity of the sampling to be reduced to O(mn), where n is the number of decision variables.
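The O(mn) sampling idea can be sketched as follows: a candidate is drawn by transforming a standard normal vector with m stored direction vectors (rank-one Cholesky-factor updates) rather than a full n-by-n matrix. The fixed coefficient b below is an illustrative placeholder, not the paper's exact update rule.

```python
# Simplified sampling in the spirit of LM-CMA-ES: apply m rank-one
# transformations (I + b * v v^T) to a standard normal vector, each in O(n),
# so a sample costs O(m*n) time and the memory footprint stays O(m*n).
import numpy as np

def sample(mean, sigma, directions, b=0.1, rng=None):
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(mean.size)
    for v in directions:                 # m passes, each O(n)
        z = z + b * v * np.dot(v, z)     # rank-one Cholesky-factor update
    return mean + sigma * z

n, m = 1000, 20
mean = np.zeros(n)
dirs = [np.random.default_rng(i).standard_normal(n) / np.sqrt(n) for i in range(m)]
x = sample(mean, 0.3, dirs)
print(x.shape)  # (1000,) -- no n-by-n covariance matrix is ever formed
```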
When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20 or 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time."} {"_id": "baea5b38ef79158a6f942497b06443ae24f15331", "title": "A Locality Aware Convolutional Neural Networks Accelerator", "text": "The advantages of Convolutional Neural Networks (CNNs) with respect to traditional methods for visual pattern recognition have changed the field of machine vision. The main issue that hinders broad adoption of this technique is the massive computing workload in CNNs that prevents real-time implementation on low-power embedded platforms. Recently, several dedicated solutions have been proposed to improve energy efficiency and throughput; nevertheless, the huge amount of data transfer involved in the processing is still a challenging issue. This work proposes a new CNN accelerator exploiting a novel memory access scheme which significantly improves data locality in CNN-related processing. With this scheme, external memory access is reduced by 50% while achieving similar or even better throughput. The accelerator is implemented using 28 nm CMOS technology. Implementation results show that the accelerator achieves a performance of 102 GOp/s at 800 MHz while occupying 0.303 mm² of silicon area. Power simulation shows that the dynamic power of the accelerator is 68 mW. Its flexibility is demonstrated by running various different CNN benchmarks."} {"_id": "d083aab9b01cbd5ff74970c24cd55dcabf2067f1", "title": "New algorithms for euclidean distance transformation of an n-dimensional digitized picture with applications", "text": "In this paper, we propose a new method to obtain the Euclidean distance transformation and the Voronoi diagram based on the exact Euclidean metric for an n-dimensional picture. We present four algorithms to perform the transformation which are constructed by the serial composition of n-dimensional filters. When performed by a general purpose computer, they are faster than the method by H. Yamada for a two-dimensional picture. Those algorithms require only one n-dimensional array for storing input/output pictures and a single one-dimensional array for a work area, if the input picture need not be preserved."} {"_id": "7a9b632319a9c02abda36ed9665809b2e70c78b0", "title": "A Robust Deep Model for Improved Classification of AD/MCI Patients", "text": "Accurate classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of overfitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multitask learning strategy into the deep learning framework. We applied the proposed method to the ADNI dataset, and conducted experiments for AD and MCI conversion diagnosis.
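Before the results, a minimal sketch of the dropout mechanism being evaluated: hidden units are randomly zeroed during training so that weights cannot co-adapt. The layer sizes and dropout rate below are arbitrary placeholders, not the paper's network.

```python
# Minimal dropout sketch in a classifier head. nn.Dropout zeroes activations
# at random during model.train() and is disabled during model.eval(), which
# is exactly the regularization effect described in the abstract above.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly drops half the hidden units per step
    nn.Linear(128, 3),   # e.g., three progression stages
)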
Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracy by 5.9% on average as compared to classical deep learning methods."} {"_id": "3e1c07970fe976269ac5c9904b7b5651e8786c51", "title": "Simulation Model Verification and Validation: Increasing the Users' Confidence", "text": "This paper sets simulation model verification and validation (V&V) in the context of the process of performing a simulation study. Various different forms of V&V need to take place depending on the stage that has been reached. Since the phases of a study are performed in an iterative manner, so too are the various forms of V&V. A number of difficulties with verifying and validating models are discussed, after which a series of V&V methods are described. V&V is seen as a process of increasing confidence in a model, and not one of demonstrating absolute accuracy."} {"_id": "6a8c5d33968230c9c4bdb207b93da7c71d302ed6", "title": "Reduce The Wastage of Data During Movement in Data Warehouse", "text": "This research paper addresses the handling of data in warehousing so as to reduce the wastage of data during movement and provide better results, a concern that has increasingly become a focal point of the data source business. Data warehousing and on-line analytical processing (OLAP) are vital elements of decision support, which has increasingly become a focal point of the database industry. Many commercial products and services are now available, and all of the primary database management system vendors now have offerings in this area. Decision support places some quite different requirements on database technology compared to traditional on-line transaction processing applications. This article gives a general overview of data warehousing and OLAP technologies, with the emphasis on their latest requirements: tools used to extract, clean, and load information into the back end of a data warehouse; the multidimensional data model typical of OLAP; front-end client tools for querying and data analysis; server extensions for efficient query processing; and tools for data management and for administering the warehouse. In addition to surveying the state of the art, this article also identifies a number of promising research issues, a few of which are related to"} {"_id": "04ce064505b1635583fa0d9cc07cac7e9ea993cc", "title": "A Comparison of Event Models for Naive Bayes Text Classification", "text": "Recent approaches to text classification have used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multi-variate Bernoulli model, that is, a Bayesian Network with no dependencies between words and binary word features (e.g. Larkey and Croft 1996; Koller and Sahami 1997). Others use a multinomial model, that is, a uni-gram language model with integer word counts (e.g. Lewis and Gale 1994; Mitchell 1997). This paper aims to clarify the confusion by describing the differences and details of these two models, and by empirically comparing their classification performance on five text corpora.
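The two event models under comparison can be sketched directly with scikit-learn: a multi-variate Bernoulli model over binary word occurrences versus a multinomial model over integer word counts. The tiny corpus below is invented for illustration.

```python
# The two naive Bayes event models: Bernoulli (word present/absent) versus
# multinomial (word counts). Only the feature representation and the
# likelihood model differ; the classification rule is the same.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

docs = ["good match great game", "great goal good win", "cheap pills buy now", "buy cheap now"]
y = [0, 0, 1, 1]  # 0 = sports, 1 = spam

X_bin = CountVectorizer(binary=True).fit_transform(docs)   # binary occurrences
X_cnt = CountVectorizer().fit_transform(docs)              # integer word counts

print(BernoulliNB().fit(X_bin, y).predict(X_bin))
print(MultinomialNB().fit(X_cnt, y).predict(X_cnt))
```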
We find that the multi-variate Bernoulli model performs well with small vocabulary sizes, but that the multinomial model usually performs even better at larger vocabulary sizes, providing on average a 27% reduction in error over the multi-variate Bernoulli model at any vocabulary size."} {"_id": "08fddf1865e48a1adc21d4875396a754711f0a28", "title": "An Extensive Empirical Study of Feature Selection Metrics for Text Classification", "text": "Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g., Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal perspectives (accuracy, F-measure, precision, and recall) since each is appropriate in different situations. The results reveal that a new feature selection metric we call 'Bi-Normal Separation' (BNS) outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner, who faces a single dataset and seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair; e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin."} {"_id": "20cc59e8879305cbe18409c77464eff272e1cf55", "title": "Language Identification in the Limit", "text": "Language learnability has been investigated. This refers to the following situation: A class of possible languages is specified, together with a method of presenting information to the learner about an unknown language, which is to be chosen from the class. The question is now asked, \"Is the information sufficient to determine which of the possible languages is the unknown language?\" Many definitions of learnability are possible, but only the following is considered here: Time is quantized and has a finite starting time. At each time the learner receives a unit of information and is to make a guess as to the identity of the unknown language on the basis of the information received so far. This process continues forever. The class of languages will be considered learnable with respect to the specified method of information presentation if there is an algorithm that the learner can use to make his guesses, the algorithm having the following property: Given any language of the class, there is some finite time after which the guesses will all be the same and they will be correct.
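The learning model just described can be rendered as a toy identification-by-enumeration learner: languages are finite string sets, the learner sees a text one string at a time, and it always guesses the first language in a fixed enumeration consistent with everything seen so far. The three-language class is invented for illustration.

```python
# Toy identification in the limit: once the data rule out every earlier
# candidate in the enumeration, the guess becomes correct and never changes.
languages = [{"a"}, {"a", "b"}, {"a", "b", "c"}]  # fixed enumeration of the class

def learner(text):
    seen = set()
    for s in text:                      # one unit of information per time step
        seen.add(s)
        guess = next(i for i, L in enumerate(languages) if seen <= L)
        yield guess

text = ["a", "b", "c", "a", "b"]        # a text for language 2: every string appears
print(list(learner(text)))              # [0, 1, 2, 2, 2] -- converges to 2 and stays
```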
In this preliminary investigation, a language is taken to be a set of strings on some finite alphabet. The alphabet is the same for all languages of the class. Several variations of each of the following two basic methods of information presentation are investigated: A text for a language generates the strings of the language in any order such that every string of the language occurs at least once. An informant for a language tells whether a string is in the language, and chooses the strings in some order such that every string occurs at least once. It was found that the class of context-sensitive languages is learnable from an informant, but that not even the class of regular languages is learnable from a text."} {"_id": "32352a889360e365fa242ad3040ccd6c54131d47", "title": "Introduction to Information Retrieval: Index", "text": ""} {"_id": "47f5682448cdc0b650b54e7f59d22d72f4976c2d", "title": "Domain Adaptation for Statistical Classifiers", "text": "The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the \"in-domain\" test data is drawn from a distribution that is related, but not identical, to the \"out-of-domain\" distribution of the training data. We consider the common case in which labeled out-of-domain data is plentiful, but labeled in-domain data is scarce. We introduce a statistical formulation of this problem in terms of a simple mixture model and present an instantiation of this framework to maximum entropy classifiers and their linear chain counterparts. We present efficient inference algorithms for this special case based on the technique of conditional expectation maximization. Our experimental results show that our approach leads to improved performance on three real world tasks on four different data sets from the natural language processing domain."} {"_id": "782546908241748b0529e1a451f15567b31f411d", "title": "Augmenting paper-based reading activity with direct access to digital materials and scaffolded questioning", "text": "Comprehension is the goal of reading. However, students often encounter reading difficulties due to the lack of background knowledge and proper reading strategy. Unfortunately, print text provides very limited assistance to one's reading comprehension through its static knowledge representations such as symbols, charts, and graphs. Integrating digital materials and reading strategy into paper-based reading activities may bring opportunities for learners to make meaning of the print material. In this study, QR codes were adopted in association with mobile technology to deliver supplementary materials and questions to support students' reading. QR codes were printed on paper prints to provide direct access to digital materials and scaffolded questions. Smartphones were used to scan the printed QR codes to fetch predesigned digital resources and scaffolded questions over the Internet. A quasi-experiment was conducted to evaluate the effectiveness of direct access to the digital materials prepared by the instructor using QR codes and that of scaffolded questioning in improving students' reading comprehension.
The results suggested that direct access to digital resources using QR codes does not significantly influence students' reading comprehension; however, the reading strategy of scaffolded questioning significantly improves students' understanding of the text. The survey showed that most students agreed that the integrated print-and-digital-material-based learning system benefits English reading comprehension but may not be as efficient as expected. The implications of the findings shed light on future improvement of the system."} {"_id": "8762619d4be605e21524eeded2cd946dac47fb2a", "title": "Effective Identification of Similar Patients Through Sequential Matching over ICD Code Embedding", "text": "Evidence-based medicine often involves the identification of patients with similar conditions, which are often captured in ICD (International Classification of Diseases; World Health Organization 2013) code sequences. With no satisfying prior solutions for matching ICD-10 code sequences, this paper presents a method which effectively captures the clinical similarity among routine patients who have multiple comorbidities and complex care needs. Our method leverages recent progress in representation learning of individual ICD-10 codes, and it explicitly uses the sequential order of codes for matching. Empirical evaluation on a state-wide cancer data collection shows that our proposed method achieves significantly higher matching performance compared with state-of-the-art methods that ignore the sequential order. Our method better identifies similar patients in a number of clinical outcomes including readmission and mortality outlook. Although this paper focuses on ICD-10 diagnosis code sequences, our method can be adapted to work with other codified sequence data."} {"_id": "65d115a49e84ce1de41ba5c116acd1821c21dc4b", "title": "Anger detection in call center dialogues", "text": "We present a method to classify fixed-duration windows of speech as expressing anger or not, which does not require speech recognition, utterance segmentation, or separating the utterances of different speakers and can, thus, be easily applied to real-world recordings. We also introduce the task of ranking a set of spoken dialogues by decreasing percentage of anger duration, as a step towards helping call center supervisors and analysts identify conversations requiring further action. Our work is among the very few attempts to detect emotions in spontaneous human-human dialogues recorded in call centers, as opposed to acted studio recordings or human-machine dialogues. We show that despite the non-perfect performance (approx. 70% accuracy) of the window-level classifier, its decisions help produce a ranking of entire conversations by decreasing percentage of anger duration that is clearly better than a random ranking, which represents the case where supervisors and analysts randomly select conversations to inspect."} {"_id": "ca7cc812a2fbe60550f18c4033a75296432fa28f", "title": "Kathará: A container-based framework for implementing network function virtualization and software defined networks", "text": "Network Function Virtualization (NFV) and Software-Defined Networking (SDN) are deeply changing the networking field by introducing software at every level, aiming at decoupling the logic from the hardware. Together, they bring several benefits, mostly in terms of scalability and flexibility.
Up to now, SDN has been used to support NFV from the routing and the architectural point of view. In this paper we present Kathará, a container-based framework that allows network operators to deploy Virtual Network Functions (VNFs) through the adoption of emerging data-plane programmable capabilities, such as P4-compliant switches. It also supports the coexistence of SDN and traditional routing protocols in order to set up arbitrarily complex networks. As a side effect, thanks to Kathará, we demonstrate that implementing NFV by means of specific-purpose equipment is feasible and provides a gain in performance while preserving the benefits of NFV. We measure the resource consumption of Kathará and show that it performs better, by several orders of magnitude, than frameworks that implement virtual networks using virtual machines."} {"_id": "8863dd8efc047ee2c8060dd74c24006d7204a9d5", "title": "Joint Learning of CNN and LSTM for Image Captioning", "text": "In this paper, we describe the details of our methods for participation in the subtask of the ImageCLEF 2016 Scalable Image Annotation task: Natural Language Caption Generation. The model we used combines an encoding procedure and a decoding procedure, which include a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) based Recurrent Neural Network. We first train a model on the MSCOCO dataset and then fine-tune the model on different target datasets collected by us to get a more suitable model for the natural language caption generation task. The parameters of the CNN and the LSTM are learned together."} {"_id": "5280846253fd842e97bbe955ccf8caa1e1f8d6c8", "title": "The Stable Model Semantics for Logic Programming", "text": "This paper studies the stable model semantics of logic programs with (abstract) constraint atoms and their properties. We introduce a succinct abstract representation of these constraint atoms in which a constraint atom is represented compactly. We show two applications. First, under this representation of constraint atoms, we generalize the Gelfond-Lifschitz transformation and apply it to define stable models (also called answer sets) for logic programs with arbitrary constraint atoms. The resulting semantics turns out to coincide with the one defined by Son et al., which is based on a fixpoint approach. One advantage of our approach is that it can be applied, in a natural way, to define stable models for disjunctive logic programs with constraint atoms, which may appear in the disjunctive head as well as in the body of a rule. As a result, our approach to the stable model semantics for logic programs with constraint atoms generalizes a number of previous approaches. Second, we show that our abstract representation of constraint atoms provides a means to characterize dependencies of atoms in a program with constraint atoms, so that some standard characterizations and properties relying on these dependencies in the past for logic programs with ordinary atoms can be extended to logic programs with constraint atoms."} {"_id": "3eefbcf161b5b95b34980d17c042c7cb4fd76864", "title": "Improving late life depression and cognitive control through the use of therapeutic video game technology: A proof-of-concept randomized trial.", "text": "BACKGROUND\nExisting treatments for depression are known to have only modest effects, are insufficiently targeted, and are inconsistently utilized, particularly in older adults.
Indeed, older adults with impaired cognitive control networks tend to demonstrate poor response to a majority of existing depression interventions. Cognitive control interventions delivered using entertainment software have the potential not only to target the underlying cerebral dysfunction associated with depression, but to do so in a manner that is engaging and engenders adherence to the treatment protocol.\n\n\nMETHODS\nIn this proof-of-concept trial (Clinicaltrials.gov #: NCT02229188), individuals with late life depression (LLD) (n = 22; 60+ years old) were randomized to either problem solving therapy (PST, n = 10) or a neurobiologically inspired digital platform designed to enhance cognitive control faculties (Project: EVO™, n = 12). Given the overlapping functional neuroanatomy of mood disturbances and executive dysfunction, we explored the impact of an intervention targeting cognitive control abilities, functional disability, and mood in older adults suffering from LLD, and how those outcomes compare to a therapeutic gold standard.\n\n\nRESULTS\nEVO participants demonstrated improvements in mood and self-reported function after 4 weeks of treatment similar to those of PST participants. The EVO participants also showed generalization to untrained measures of working memory and attention, as well as negativity bias, a finding not evident in the PST condition. Individuals assigned to EVO demonstrated 100% adherence.\n\n\nCONCLUSIONS\nThis study provides preliminary findings that this therapeutic video game targeting cognitive control deficits may be an efficacious LLD intervention. Future research is needed to confirm these findings."} {"_id": "25e5b745ce0d3518bf16fe28d788f7d4fac9d838", "title": "XYZ Indoor Navigation through Augmented Reality: A Research in Progress", "text": "We present an overall framework of services for indoor navigation, which includes Indoor Mapping, Indoor Positioning, Path Planning, and En-route Assistance. Within this framework we focus on an augmented reality (AR) solution for en-route assistance. AR assists the user walking in a multi-floor building by displaying a directional arrow under a camera view, thus freeing the user from knowing his/her position. Our AR solution relies on geomagnetic positioning and north-oriented space coordinate transformation. Therefore, it can work without infrastructure and without relying on GPS. The AR visual interface and the integration with magnetic positioning are the main novelty of our solution, which has been validated by experiments and shows good performance."} {"_id": "676600ed722d4739d669715c16a1ed2fc117b3d4", "title": "Weakly supervised detection with decoupled attention-based deep representation", "text": "Training object detectors with only image-level annotations is an important problem with a variety of applications. However, due to the deformable nature of objects, a target object delineated by a bounding box always includes irrelevant context and occlusions, which causes large intra-class object variations and ambiguity in object-background distinction. For this reason, identifying the object of interest from a substantial amount of cluttered background is very challenging. In this paper, we propose a decoupled attention-based deep model to optimize region-based object representation. Different from existing approaches posing object representation in a single-tower model, our proposed network decouples object representation into two separate modules, i.e., image representation and attention localization.
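The decoupled design described in this abstract can be sketched as two branches whose outputs are combined by attention-weighted pooling. The layer shapes below are placeholders, not the paper's architecture; the sketch only shows how an attention map can up-weight discriminative locations and suppress background before pooling.

```python
# Two decoupled modules: a feature branch (image representation) and an
# attention branch (localization) whose spatial map re-weights locations
# before global pooling, muting background responses.
import torch
import torch.nn as nn

features = nn.Conv2d(3, 64, 3, padding=1)                               # representation branch
attention = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())  # localization branch

x = torch.rand(2, 3, 32, 32)                    # a batch of region crops
f, a = features(x), attention(x)                # (2,64,32,32), (2,1,32,32)
pooled = (f * a).sum(dim=(2, 3)) / a.sum(dim=(2, 3)).clamp(min=1e-6)    # (2,64)
print(pooled.shape)
```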
The image representation module captures content-based semantic representation, while the attention localization module regresses an attention map which simultaneously highlights the locations of the discriminative object parts and down-weights the irrelevant background presented in the image. The combined representation alleviates the impact of the noisy context and occlusions inside an object bounding box. As a result, object-background ambiguity can be largely reduced and background regions can be suppressed effectively. In addition, the proposed object representation model can be seamlessly integrated into a state-of-the-art weakly supervised detection framework, and the entire model can be trained end-to-end. We extensively evaluate the detection performance on the PASCAL VOC 2007, VOC 2010 and VOC 2012 datasets. Experimental results demonstrate that our approach effectively improves weakly supervised object detection."} {"_id": "3c5dd0defca5232b638f7fc86e2bf7f1e0da4d7d", "title": "The mass appraisal of the real estate by computational intelligence", "text": "Mass appraisal is the systematic appraisal of groups of properties as of a given date using standardized procedures and statistical testing. Mass appraisal is commonly used to compute real estate tax. There are three traditional real estate valuation methods: the sales comparison approach, the income approach, and the cost approach. Mass appraisal models are commonly based on the sales comparison approach. Ordinary least squares (OLS) linear regression is the classical method used to build models in this approach. In this paper, the method is compared with computational intelligence approaches: support vector machine (SVM) regression, multilayer perceptron (MLP), and a committee of predictors. All three predictors are used to build a weighted data-dependent committee. A self-organizing map (SOM) generating clusters of value zones is used to obtain the data-dependent aggregation weights. The experimental investigations performed using data cordially provided by the Register center of Lithuania have shown very promising results. The performance of the computational intelligence-based techniques was considerably higher than that obtained using the official real estate models of the Register center. The performance of the committee using weights based on zones obtained from the SOM was also higher than that of the committee exploiting the real estate value zones provided by the Register center."} {"_id": "d1fb73dc083c916cf8a965434fc684b8f0b8762d", "title": "Beyond the Turing Test", "text": "Alan Turing's renowned test on intelligence, commonly known as the Turing test, is an inescapable signpost in AI. To people outside the field, the test, which hinges on the ability of machines to fool people into thinking that they (the machines) are people, is practically synonymous with the quest to create machine intelligence. Within the field, the test is widely recognized as a pioneering landmark, but is now also seen as a distraction, designed over half a century ago, and too crude to really measure intelligence. Intelligence is, after all, a multidimensional variable, and no one test could possibly ever truly measure it. Moreover, the original test, at least in its standard implementations, has turned out to be highly gameable, arguably an exercise in deception rather than a true measure of anything especially correlated with intelligence.
The much-ballyhooed 2014 Turing test winner Eugene Goostman, for instance, pretends to be a thirteen-year-old foreigner and proceeds mainly by ducking questions and returning canned one-liners; it cannot see, it cannot think, and it is certainly a long way from genuine artificial general intelligence. Our hope is to see a new suite of tests, part of what we have"} {"_id": "3c5b2338f0169d25ba07e3bfcf7ebb2a8c1edcea", "title": "Extending the Business Model Canvas: A Dynamic Perspective", "text": "When designing and assessing a business model, a more visual and practical ontology and framework is necessary. We show how an academic theory such as the Business Model Ontology has evolved into the Business Model Canvas (BMC) that is used by practitioners around the world today. We draw lessons from usage and define three maturity levels. We propose new concepts to help design the dynamic aspects of a business model. On the first level, the BMC supports novice users as they elicit their models; it also helps novices to build coherent models. On the second level, the BMC allows expert users to evaluate the interaction of business model elements by outlining the key threads in the business model's story. On the third level, master users are empowered to create multiple versions of their business models, allowing them to evaluate alternatives and retain the history of the business model's evolution. These new concepts for the BMC, which can be supported by Computer-Aided Design tools, provide a clearer picture of the business model as a strategic planning tool and are the basis for further research."} {"_id": "7dacc2905c536e17ea24bdd07c9e87d699a12975", "title": "Communicating with Cost-based Implicature: a Game-Theoretic Approach to Ambiguity", "text": "A game-theoretic approach to linguistic communication predicts that speakers can meaningfully use ambiguous forms in a discourse context in which only one of several available referents has a costly unambiguous form and in which rational interlocutors share knowledge of production costs. If a speaker produces a low-cost ambiguous form to avoid using the high-cost unambiguous form, a rational listener will infer that the high-cost entity was the intended entity, or else the speaker would not have risked ambiguity. We report data from two studies in which pairs of speakers show alignment of their use of ambiguous forms based on this kind of shared knowledge. These results extend the analysis of cost-based pragmatic inferencing beyond that previously associated only with fixed lexical hosts."} {"_id": "fe2781c806bb6049604b66721fa21814f3711ff0", "title": "Smoothing LUT classifiers for robust face detection", "text": "Look-up table (LUT) classifiers are often used to construct concise classifiers for rapid object detection due to their favorable convergence ability. However, their poor generalization ability imposes restrictions on their applications. A novel improvement to LUT classifiers is proposed in this paper, where the new confidence of each partition is recalculated by smoothing the old confidences within its neighboring partitions. The new confidences are more generalizable than the old ones because each new prediction is supported by more training samples, and the high-frequency components in the old prediction sequences are greatly suppressed through the smoothing operation. Both a weight-sum smoothing method and a confidence smoothing method are introduced here, both of which bring negligible extra computation cost at training time and no extra cost at test time.
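The smoothing operation just described can be sketched as a windowed average over neighboring partition confidences, which is what suppresses the high-frequency components. The window width and the example confidences below are arbitrary, not the paper's settings.

```python
# Illustrative confidence smoothing for a LUT classifier: each partition's new
# confidence is the mean of the old confidences within a small neighborhood,
# so every prediction is effectively supported by more training samples.
import numpy as np

def smooth_confidences(conf: np.ndarray, width: int = 2) -> np.ndarray:
    out = np.empty_like(conf)
    for i in range(len(conf)):
        lo, hi = max(0, i - width), min(len(conf), i + width + 1)
        out[i] = conf[lo:hi].mean()
    return out

conf = np.array([0.9, -0.8, 0.7, 0.6, -0.9, 0.8])   # noisy per-partition confidences
print(smooth_confidences(conf))                      # visibly lower-frequency sequence
```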
Experimental results in the domain of upright frontal face detection, using smoothed LUT classifiers with identical smoothing width and smoothing factor for all partitions based on Haar-like rectangle features, show that smoothed LUT classifiers generalize much better, though they converge somewhat more slowly, than unsmoothed LUT classifiers. Specifically, smoothed LUT classifiers can delicately balance generalization ability and convergence ability through carefully set smoothing parameters."} {"_id": "668c7c978615143545eed2eca8ae343e027a02ca", "title": "Multiple-output ZCS resonant inverter for multi-coil induction heating appliances", "text": "This paper presents a multiple-output resonant inverter for multi-coil systems featuring high efficiency and flexible output power control for modern induction heating appliances. By adopting a matrix structure, the number of controlled devices can be reduced to the square root of the number of controlled induction heating loads. Therefore, a significant component count reduction can be achieved compared to classical multiple-output solutions. The proposed converter is analyzed and verified experimentally, proving the feasibility of the proposal and its benefits compared with previous alternatives."} {"_id": "13a39916d0459879703ce20f960726c5fb7269f1", "title": "ExpNet: Landmark-Free, Deep, 3D Facial Expressions", "text": "We describe a deep learning based method for estimating 3D facial expression coefficients. Unlike previous work, our process does not rely on facial landmark detection methods as a proxy step. Recent methods have shown that a CNN can be trained to regress accurate and discriminative 3D morphable model (3DMM) representations, directly from image intensities. By foregoing landmark detection, these methods were able to estimate shapes for occluded faces appearing in unprecedented viewing conditions. We build on those methods by showing that facial expressions can also be estimated by a robust, deep, landmark-free approach. Our ExpNet CNN is applied directly to the intensities of a face image and regresses a 29D vector of 3D expression coefficients. We propose a unique method for collecting data to train our network, leveraging the robustness of deep networks to training label noise. We further offer a novel means of evaluating the accuracy of estimated expression coefficients: by measuring how well they capture facial emotions on the CK+ and EmotiW-17 emotion recognition benchmarks. We show that our ExpNet produces expression coefficients which better discriminate between facial emotions than those obtained using state-of-the-art facial landmark detectors. Moreover, this advantage grows as image scales drop, demonstrating that our ExpNet is more robust to scale changes than landmark detectors. Finally, our ExpNet is orders of magnitude faster than its alternatives."} {"_id": "bd8c21578c44aeaae394f9414974ca81fb412ec1", "title": "A First Look at Android Malware Traffic in First Few Minutes", "text": "With the advent of the mobile era, mobile terminals are on track to surpass the PC as the most popular computing device. Meanwhile, hackers and virus writers are paying close attention to mobile terminals, especially the Android platform. The growth of malware on the Android system has drawn attention from both academia and the security industry. Recently, mobile network traffic analysis has been used to identify malware.
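The kinds of traffic features analyzed in the rest of this abstract (DNS queries, HTTP packet lengths, downlink/uplink ratio) can be sketched over a list of per-packet records. The record layout below is invented for illustration; real input would be parsed from captured traffic.

```python
# Sketch of simple per-flow traffic features of the kind used for malware
# detection: DNS query names, HTTP packet lengths, and the ratio of downlink
# to uplink traffic volume. The packet dictionaries are hypothetical.
def traffic_features(packets):
    up = sum(p["size"] for p in packets if p["dir"] == "up")
    down = sum(p["size"] for p in packets if p["dir"] == "down")
    return {
        "dns_queries": [p["qname"] for p in packets if p.get("proto") == "dns"],
        "http_lengths": [p["size"] for p in packets if p.get("proto") == "http"],
        "down_up_ratio": down / up if up else float("inf"),
    }

packets = [
    {"dir": "up", "size": 80, "proto": "dns", "qname": "ad.tracker.example"},
    {"dir": "up", "size": 420, "proto": "http"},
    {"dir": "down", "size": 5200, "proto": "http"},
]
print(traffic_features(packets))
```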
However, due to the lack of a large-scale malware repository and of systematic analysis of network traffic features, existing research has mostly remained theoretical. In this paper, we design an Android malware traffic behavior monitoring scheme to capture traffic data generated by malware samples in a real Internet environment. We capture the network traffic from 5560 malware samples in the first 5 minutes, and analyze the major compositions of the traffic data. We discover that HTTP and DNS traffic account for more than 99% of the application-layer traffic. We then present an analysis of related network features: DNS queries, HTTP packet length, the ratio of downlink to uplink traffic volume, HTTP requests, and ad traffic features. Our statistical results illustrate that: (1) more than 70% of malware samples generate malicious traffic in the first 5 minutes, (2) DNS queries and HTTP requests can be used to identify malware, with detection rates of 69.55% and 40.89% respectively, and (3) ad traffic can greatly affect malware detection. We believe our research provides an in-depth analysis of the network behavior of mobile malware."} {"_id": "acb64f17bdb0eacf1daf45a5f33c0e7a8893dbf6", "title": "Continuous Representation of Location for Geolocation and Lexical Dialectology using Mixture Density Networks", "text": "We propose a method for embedding two-dimensional locations in a continuous vector space using a neural network-based model incorporating mixtures of Gaussian distributions, presenting two model variants for text-based geolocation and lexical dialectology. Evaluated over Twitter data, the proposed model outperforms conventional regression-based geolocation and provides a better estimate of uncertainty. We also show the effectiveness of the representation for predicting words from location in lexical dialectology, and evaluate it using the DARE dataset."} {"_id": "f1f20213f5e412e5c88dd3216703ff8181e073b5", "title": "Restoration of Endodontically Treated Molars Using All Ceramic Endocrowns", "text": "Clinical success of endodontically treated posterior teeth is determined by the post-endodontic restoration. Several options have been proposed to restore endodontically treated teeth. Endocrowns represent a conservative and esthetic restorative alternative to full coverage crowns. The preparation consists of a circular equigingival butt-joint margin and a central retention cavity in the entire pulp chamber, constructing both the crown and the core as a single unit. The case reports discussed here involve moderately damaged endodontically treated molars restored using all-ceramic endocrowns fabricated with two different systems, namely CAD/CAM and pressed ceramic."} {"_id": "dd34e51f2069c033df9b904bb8063b420649ed76", "title": "Efficient lineage tracking for scientific workflows", "text": "Data lineage and data provenance are key to the management of scientific data. Not knowing the exact provenance and processing pipeline used to produce a derived data set often renders the data set useless from a scientific point of view. On the positive side, capturing provenance information is facilitated by the widespread use of workflow tools for processing scientific data. The workflow process describes all the steps involved in producing a given data set and, hence, captures its lineage. On the negative side, efficiently storing and querying workflow-based data lineage is not trivial. All existing solutions use recursive queries and even recursive tables to represent the workflows.
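The interval idea referenced in this abstract can be sketched with a DFS labeling: each node gets a (start, end) pair so that, for tree-shaped dependency graphs, "v's data derives from u's step" reduces to interval containment rather than a recursive query. The paper handles general workflow graphs by transforming them first; this sketch covers only the tree case, and the workflow names are invented.

```python
# Interval labeling of a dependency tree: ancestor tests become O(1) interval
# containment checks instead of recursive traversals or recursive SQL.
def label(tree, root):
    intervals, clock = {}, 0
    def dfs(u):
        nonlocal clock
        start = clock; clock += 1
        for c in tree.get(u, []):
            dfs(c)
        intervals[u] = (start, clock); clock += 1
    dfs(root)
    return intervals

def is_ancestor(iv, u, v):              # does v derive from u?
    return iv[u][0] <= iv[v][0] and iv[v][1] <= iv[u][1]

tree = {"ingest": ["clean", "align"], "clean": ["stats"]}
iv = label(tree, "ingest")
print(is_ancestor(iv, "ingest", "stats"), is_ancestor(iv, "align", "stats"))  # True False
```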
Such solutions do not scale and are rather inefficient. In this paper we propose an alternative approach to storing lineage information captured as a workflow process. We use a space- and query-efficient interval representation for dependency graphs and show how to transform arbitrary workflow processes into graphs that can be stored using such a representation. We also characterize the problem in terms of its overall complexity and provide a comprehensive performance evaluation of the approach."} {"_id": "7857cdf46d312af4bb8854bd127e5c0b4268f90c", "title": "A novel tri-state boost PFC converter with fast dynamic performance", "text": "The dynamic response of a boost power factor correction (PFC) converter operating in continuous-conduction mode (CCM) is heavily influenced by the low bandwidth of the voltage control loop. A novel tri-state boost PFC converter operating in pseudo-continuous-conduction mode (PCCM) is proposed in this paper. An additional degree of control freedom introduced by a freewheeling switching control interval helps to achieve PFC control. A simple and fast voltage control loop can be used to maintain a constant output voltage. Furthermore, compared with a boost PFC converter operating in conventional discontinuous-conduction mode (DCM), a boost PFC converter operating in PCCM demonstrates greatly improved current handling capability with reduced current and voltage ripples. Analytical and simulation results for the tri-state boost PFC converter are presented and compared with those of boost PFC converters operating in conventional CCM and DCM. Simulation results show the excellent dynamic performance of the tri-state boost PFC converter."} {"_id": "d904d00d84f5ed130c66c5fe0d641a9022197244", "title": "A NoSQL Data Model for Scalable Big Data Workflow Execution", "text": "While big data workflows have recently been proposed as the next-generation data-centric workflow paradigm to process and analyze data of ever-increasing scale, complexity, and rate of acquisition, a scalable distributed data model is still missing that abstracts and automates data distribution, parallelism, and scalable processing. Meanwhile, although NoSQL has emerged as a new category of data models, these are optimized for storing and querying large datasets, not for ad-hoc data analysis where data placement and data movement are necessary for optimized workflow execution. In this paper, we propose a NoSQL data model that: 1) supports high-performance MapReduce-style workflows that automate data partitioning and data-parallel execution; in contrast to the traditional MapReduce framework, our MapReduce-style workflows are fully composable with other workflows, enabling dataflow applications with a richer structure; 2) automates virtual machine provisioning and deprovisioning on demand according to the sizes of input datasets; and 3) enables a flexible framework for workflow executors that take advantage of the proposed NoSQL data model to improve the performance of workflow execution. Our case studies and experiments show the competitive advantages of our proposed data model.
The proposed NoSQL data model is implemented in a new release of DATAVIEW, one of the most usable big data workflow systems in the community."} {"_id": "12d40aeb4f31d4352264114841ebfb8651729908", "title": "Torque ripple improvement for synchronous reluctance motor using an asymmetric flux barrier arrangement", "text": "An interior permanent-magnet synchronous motor (IPMSM) is a highly efficient motor that operates over a wide speed range; therefore, it is used in many industrial and home appliance applications. However, the torque ripple of synchronous motors such as the IPMSM and the synchronous reluctance motor is very large. The variation of magnetic resistance between the flux barriers and the teeth causes the torque ripple. In this paper, flux barriers are asymmetrically designed so that the relative positions between the outer edges of the flux barriers and the teeth do not correspond. As a result, torque ripple can be reduced dramatically."} {"_id": "ada4c48f083f2f98ee9a345e4bc534425d0a9bfd", "title": "Analysis of Rotor Core Eddy-Current Losses in Interior Permanent-Magnet Synchronous Machines", "text": "This paper presents the results of an investigation focused on the rotor core eddy-current losses of interior permanent-magnet (IPM) synchronous machines. First, analytical insight into the rotor core eddy-current losses of IPM machines is developed. Next, major design parameters that have the most significant impact on the rotor core eddy-current losses of IPM machines are identified. Finite-element analysis results are then presented to compare the predicted eddy-current losses in the machine core of IPM machines with one- and two-layer rotors coupled with concentrated- and distributed-winding stators. It is shown that the lowest total eddy-current losses in the machine core are achieved using a combination of distributed stator windings and two magnet layers per rotor pole, whereas minimizing only the rotor core eddy-current losses favors replacement of the rotor with a single-layer configuration."} {"_id": "29a073c933aaf22dcb2914be06adae6e189736b5", "title": "Optimization of average and cogging torque in 3-phase IPM motor drives", "text": "In this paper, an interior permanent magnet (IPM) brushless DC motor for traction applications is analyzed. The effects of magnetization direction, number of stator slots, and current waveform on torque pulsation are examined. A three-phase, four-pole IPM motor is considered for the study. The finite element method is used to calculate the torque, reluctance torque, back iron flux density, tooth flux density, detent torque, and back-EMF of the motor. It is shown that because of the reluctance torque resulting from rotor saliency, the peak point in the torque-angle curve is shifted to the left. Therefore, it is not possible to find the switching instants just by considering the direction of stator and rotor MMF and keeping the right angle between them. A Matlab program has been developed to find the switching intervals that produce the maximum average torque and minimum cogging torque.
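A toy Python analogue of that switching-interval search is sketched below. The two-term torque model is invented for illustration (the paper derives its curves from finite element analysis): adding a reluctance component shifts the torque-angle peak away from 90 degrees, so the best conduction window must be found numerically.

```python
# Toy search for the conduction window that maximizes average torque. The
# torque-angle curve is a made-up magnet + reluctance model; with the second
# harmonic present, the optimal window is not centered at 90 degrees.
import numpy as np

a = np.linspace(0, np.pi, 2000)
f = np.sin(a) + 0.4 * np.sin(2 * a)        # magnet torque + reluctance torque (toy)
window = 2 * np.pi / 3                     # 120-degree conduction interval

def avg_torque(start):
    mask = (a >= start) & (a <= start + window)
    return f[mask].mean()

starts = np.linspace(0, np.pi - window, 200)
best = starts[int(np.argmax([avg_torque(s) for s in starts]))]
print(f"best window start ~ {np.degrees(best):.1f} deg")  # shifted left of 30 deg
```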
Experimental results to support the simulation findings are included in the paper."} {"_id": "c37456766064c1e1432679454b6ca446b3f53068", "title": "Iron loss analysis of interior permanent-magnet synchronous motors-variation of main loss factors due to driving condition", "text": "In this paper, the authors investigate the iron loss of interior permanent magnet motors driven by pulsewidth modulation (PWM) inverters, based on both experimental results and finite-element analysis. In the analysis, the iron loss of the motor is decomposed into several components according to their origins, for instance, the fundamental field, the carrier of the PWM inverter, slot ripples, and harmonic magnetomotive forces of the permanent magnet, in order to clarify the main loss factors. The Fourier transformation and the finite-element method considering the carrier harmonics are applied to this calculation. The calculated iron loss is compared with the measurement at each driving condition. The measured and the calculated results agree well. It is clarified that the iron loss caused by the carrier of the PWM inverter is the largest component at low-speed conditions under maximum torque control, whereas the loss caused by the harmonic magnetomotive forces of the permanent magnet increases remarkably at high-speed conditions under flux-weakening control"} {"_id": "badedcabfbb8e47eb33238aa44cf2b9c90c3d83a", "title": "A novel wavelet transform technique for on-line partial discharge measurements. 1. WT de-noising algorithm", "text": "Medium and high voltage power cables are widely used in the electrical industry, with substantial growth over the last 20-30 years, particularly in the use of XLPE-insulated systems. Ageing of the cable insulation is becoming an increasing problem that requires the development of reliable methods for on-line condition assessment. For insulation condition assessment of MV and HV cables, partial discharge (PD) monitoring is one of the most effective techniques. However, on-site and on-line PD measurements are affected by electromagnetic interference (EMI) that makes sensitive PD detection very difficult, if not impossible. This paper describes the implementation of wavelet transform techniques to reject noise from on-line partial discharge measurements on cables. A new wavelet threshold determination method is proposed with the technique. With the implementation of this novel de-noising method, PD measurement sensitivity has been greatly improved. In addition, full AC cycle data recovery can be achieved instead of focusing only on recovering individual PD pulses. Other wavelet threshold de-noising methods are discussed and examined under a noisy environment to compare their performance with the new method proposed here. The method described here has been found to be superior to the other wavelet-based methods"} {"_id": "52c5882dc62c319c67a37cf40e56e0905dc4acb5", "title": "Articulatory and spectrum features integration using generalized distillation framework", "text": "It has been shown that significant performance improvements in the automatic speech recognition (ASR) task can be achieved by combining acoustic and articulatory information. In practice, however, articulatory information is not available during recognition, and the general approach is to estimate it from the acoustic signal. In this paper, we propose a different approach based on the generalized distillation framework, where acoustic-articulatory inversion is not necessary.
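The teacher-student training described next can be sketched as a distillation loss that mixes hard labels with the teacher's softened outputs. The temperature and mixing weight below are illustrative placeholders, not the paper's settings.

```python
# Sketch of a distillation objective: the student (acoustic features only) is
# trained on ground-truth labels plus the soft targets of a teacher that was
# trained with both acoustic and articulatory features.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft

s = torch.randn(4, 10, requires_grad=True)   # student outputs (acoustic only)
t = torch.randn(4, 10)                       # teacher outputs (acoustic + articulatory)
y = torch.randint(0, 10, (4,))
print(distillation_loss(s, t, y))
```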
We trained two DNN models: one, called the \u201cteacher\u201d, learns from both acoustic and articulatory features, while the other, called the \u201cstudent\u201d, is trained on acoustic features only; the student's training process is guided by the \u201cteacher\u201d model and reaches a performance that cannot be obtained by regular training, even though no articulatory feature inputs are available at test time. The paper is organized as follows: Section 1 gives the introduction and briefly discusses some related works. Section 2 describes the distillation training process, and Section 3 describes the ASR system used in this paper. Section 4 presents the experiments, and the paper is concluded by Section 5."} {"_id": "40a63746a710baf4a694fd5a4dd8b5a3d9fc2846", "title": "Invertible Conditional GANs for image editing", "text": "Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows determining specific representations of the generated images. In this work, we evaluate encoders to invert the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, reconstructing and modifying real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables re-generating real images with deterministic complex modifications."} {"_id": "f7025aed8f41a0affb25acf3340a06a7c8993511", "title": "Machine Learning Approaches for Mood Classification of Songs toward Music Search Engine", "text": "People often want to listen to music that best fits their current emotion. A grasp of the emotions in songs might be a great help for discovering music effectively. In this paper, we aimed at automatically classifying the moods of songs based on lyrics and metadata, and proposed several methods for supervised learning of classifiers. In the future, we plan to use automatically identified moods of songs as metadata in our music search engine. Mood categories from a well-known contest on Audio Music Mood Classification (MIREX 2007) are applied in our system. The training data is collected from a LiveJournal blog site in which each blog entry is tagged with a mood and a song. Then three kinds of machine learning algorithms are applied for training classifiers: SVM, Naive Bayes and graph-based methods. The experiments showed that artist, sentiment words, and putting more weight on words in the chorus and title parts are effective for mood classification. The graph-based method promises a good improvement if we have rich relationship information among songs."} {"_id": "651e8215e93fd94318997f3be5190f965f147c79", "title": "A Study on the Customer-Based Brand Equity of Online Retailers", "text": "Different from traditional firms, online retailers do not often produce common goods on their own. On the contrary, they sell merchandise that traditional firms produce. Furthermore, online retailers are also different from traditional retailers because the former do not often have frontline employees, physical outlets, or display layouts. Therefore, when building a strong online retailer brand, it is not enough to only consider the traditional dimensions of customer-based brand equity, namely brand awareness, perceived quality, brand associations, and brand loyalty. Some dimensions, i.e.
trust associations and emotional connection, need to be distinguished or included in the context of online business. Besides, the willingness to pay a price premium is often viewed as a good indicator for measuring brand equity, but for online retailers it is conditional on some premises. By structural equation modeling, it was verified that for online retailers, the importance of brand awareness and trust associations decreased but perceived quality retained its strong influence. Brand loyalty was the better indicator for measuring brand equity, and emotional connection became so critical that without it, customers would lack the willingness to pay a price premium."} {"_id": "8629cebb7c574adf40d71d41389f340804c8c81f", "title": "Automatic Fingerprint Recognition Systems", "text": "This article summarizes the major developments in the history of efforts to use fingerprint patterns to identify individuals, from the earliest fingerprint classification systems of Vucetich and Henry in the 1890s through the advent of automated fingerprint identification. By chronicling the history of \u201cmanual\u201d systems for recording, storing, matching, and retrieving fingerprints, the article puts advances in automatic fingerprint recognition in historical context and highlights their historical and social significance."} {"_id": "7d9089cbe958da21cbd943bdbcb996f4499e701b", "title": "Document Modeling with Gated Recurrent Neural Network for Sentiment Classification", "text": "Document level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model to learn vector-based document representation in a unified, bottom-up fashion. The model first learns sentence representation with convolutional neural network or long short-term memory. Afterwards, semantics of sentences and their relations are adaptively encoded in document representation with gated recurrent neural network. We conduct document level sentiment classification on four large-scale review datasets from IMDB and Yelp Dataset Challenge. Experimental results show that: (1) our neural model shows superior performances over several state-of-the-art algorithms; (2) gated recurrent neural network dramatically outperforms standard recurrent neural network in document modeling for sentiment classification."} {"_id": "b294b61f0b755383072ab332061f45305e0c12a1", "title": "Re-embedding words", "text": "We present a fast method for re-purposing existing semantic word vectors to improve performance in a supervised task. Recently, with an increase in computing resources, it became possible to learn rich word embeddings from massive amounts of unlabeled data. However, some methods take days or weeks to learn good embeddings, and some are notoriously difficult to train. We propose a method that takes as input an existing embedding, some labeled data, and produces an embedding in the same space, but with a better predictive performance in the supervised task. We show improvement on the task of sentiment classification with respect to several baselines, and observe that the approach is most useful when the training set is sufficiently small."} {"_id": "74bcf18831294adb82c1ba8028ef314db1327549", "title": "Reference table based k-anonymous private blocking", "text": "Privacy Preserving Record Linkage is an emerging field of research which attempts to deal with the classical linkage problem from a privacy preserving point of view.
In this paper we propose a novel approach for performing Privacy Preserving Blocking in order to minimize the computational cost of Privacy Preserving Record Linkage. We achieve this without compromising privacy by using Nearest Neighbors clustering, a well-known clustering algorithm, and a reference table. A reference table is a publicly known table whose contents are used as intermediate references. The combination of Nearest Neighbors and a reference table offers our approach k-anonymity characteristics."} {"_id": "87cc66a7eae45f9df7fa10c14c77f4178abd2563", "title": "Pedestrian Behaviour Monitoring: Methods and Experiences", "text": "The investigation of pedestrian spatio-temporal behaviour is of particular interest in many different research fields. Disciplines like travel behaviour research and tourism research, social sciences, artificial intelligence, geoinformation and many others have approached this subject from different perspectives. Depending on the particular research questions, various methods of data collection and analysis have been developed and applied in order to gain insight into specific aspects of human motion behaviour and the determinants influencing spatial activities. In this contribution, we provide a general overview of the most commonly used methods for monitoring and analysing human spatio-temporal behaviour. After discussing frequently used empirical methods of data collection and emphasising related advantages and limitations, we present seven case studies concerning the collection and analysis of human motion behaviour for different purposes."} {"_id": "0d779e029fd5fb3271f174c05019b4144ffa46c0", "title": "ANALYSIS OF SEGMENTATION PARAMETERS IN ECOGNITION SOFTWARE USING HIGH RESOLUTION QUICKBIRD MS IMAGERY", "text": "For object-oriented classification approaches, the main step is the segmentation of the imagery. In the eCognition v.4.0.6 software, the segmentation process creates meaningful objects for the following steps. In the experiment, imagery with 2.4 m ground sampling distance (GSD) has been used, and several different parameters, e.g. scale, color/shape and smoothness/compactness, have been tested accordingly. Additionally, segmentation parameters were set to low and high values, and thus the dissimilarity of the segmentation results was examined."} {"_id": "645e505f107a470f347d9521c492f88b1a2e6670", "title": "Towards Query Efficient Black-box Attacks: An Input-free Perspective", "text": "Recent studies have highlighted that deep neural networks (DNNs) are vulnerable to adversarial attacks, even in a black-box scenario. However, most of the existing black-box attack algorithms need to make a huge number of queries to perform attacks, which is not practical in the real world. We note that one of the main reasons for the massive queries is that the adversarial example is required to be visually similar to the original image, but in many cases, what adversarial examples look like does not matter much. This inspires us to introduce a new attack called the input-free attack, under which an adversary can choose an arbitrary image to start with and is allowed to add perceptible perturbations to it. Following this approach, we propose two techniques to significantly reduce the query complexity. First, we initialize an adversarial example with a gray color image on which every pixel has roughly the same importance for the target model.
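A small sketch of this gray-image initialization, together with the region-tiling step the abstract describes next (image size, region size, and perturbation magnitude are all hypothetical):

import numpy as np

def init_and_tile(image_size=299, region=32):
    # Start from a uniform gray image, then build a perturbation by
    # perturbing one small region and tiling it across the input.
    x = np.full((image_size, image_size, 3), 0.5, dtype=np.float32)  # gray start
    delta_small = np.random.uniform(-0.05, 0.05, (region, region, 3))
    reps = int(np.ceil(image_size / region))
    delta = np.tile(delta_small, (reps, reps, 1))[:image_size, :image_size, :]
    return np.clip(x + delta, 0.0, 1.0)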
Then we shrink the dimension of the attack space by perturbing a small region and tiling it to cover the input image. To make our algorithm more effective, we stabilize a projected gradient ascent algorithm with momentum, and also propose a heuristic approach for region size selection. Through extensive experiments, we show that with only 1,701 queries on average, we can perturb a gray image to any target class of ImageNet with a 100% success rate on InceptionV3. Besides, our algorithm has successfully defeated two real-world systems, the Clarifai food detection API and the Baidu Animal Identification API."} {"_id": "029341c7f1ce11696a6bc7e7a716b7876010ebc7", "title": "Model Learning for Look-ahead Exploration in Continuous Control", "text": "We propose an exploration method that incorporates look-ahead search over basic learnt skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learnt and are unrolled forward during look-ahead search. Policy search benefits from temporal abstraction during exploration, though it itself operates over low-level primitive actions, and thus the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, as opposed to guiding exploration.
"} {"_id": "2374a65e335b26c2ae9692b6c2f1408401d86f5b", "title": "A Consolidated Perspective on Multimicrophone Speech Enhancement and Source Separation", "text": "Speech enhancement and separation are core problems in audio signal processing, with commercial applications in devices as diverse as mobile phones, conference call systems, hands-free systems, or hearing aids. In addition, they are crucial preprocessing steps for noise-robust automatic speech and speaker recognition. Many devices now have two to eight microphones. The enhancement and separation capabilities offered by these multichannel interfaces are usually greater than those of single-channel interfaces. Research in speech enhancement and separation has followed two convergent paths, starting with microphone array processing and blind source separation, respectively. These communities are now strongly interrelated and routinely borrow ideas from each other. Yet, a comprehensive overview of the common foundations and the differences between these approaches is lacking at present. In this paper, we propose to fill this gap by analyzing a large number of established and recent techniques according to four transverse axes: 1) the acoustic impulse response model, 2) the spatial filter design criterion, 3) the parameter estimation algorithm, and 4) optional postfiltering. We conclude this overview paper by providing a list of software and data resources and by discussing perspectives and future trends in the field."} {"_id": "df12dcd87b6b98d3e3dc7504c9e4e606ab960d04", "title": "Real-time 3D scene reconstruction with dynamically moving object using a single depth camera", "text": "Online 3D reconstruction of real-world scenes has been attracting increasing interest from both academia and industry, especially with consumer-level depth cameras becoming widely available. Most recent online reconstruction systems take live depth data from a moving Kinect camera and incrementally fuse them into a single high-quality 3D model in real time. Although most real-world scenes have a static environment, the daily objects in a scene often move dynamically, and they are non-trivial to reconstruct, especially when the camera is also moving. To solve this problem, we propose a single depth camera-based real-time approach for the simultaneous reconstruction of a dynamic object and the static environment, and provide solutions for its key issues. In particular, we first introduce a robust optimization scheme which takes advantage of raycasted maps to segment the moving object and the background from the live depth map. The corresponding depth data are then fused to the respective volumes. These volumes are raycasted to extract views of the implicit surface, which can be used as a consistent reference frame for the next iteration of segmentation and tracking. In particular, in order to handle fast motion of the dynamic object and the handheld camera in the fusion stage, we propose a sequential 6D pose prediction method which largely increases the registration robustness and avoids the registration failures that occur in conventional methods.
Experimental results show that our approach can reconstruct moving objects as well as the static environment with rich details, and outperforms conventional methods in multiple aspects."} {"_id": "320473d70ea898ce099ca0dda92af17a34b99599", "title": "Monadic Parsing in Haskell", "text": "This paper is a tutorial on defining recursive descent parsers in Haskell. In the spirit of one-stop shopping, the paper combines material from three areas into a single source. The three areas are functional parsers (Burge, 1975; Wadler, 1985; Hutton, 1992; Fokker, 1995), the use of monads to structure functional programs (Wadler, 1990, 1992a, 1992b), and the use of special syntax for monadic programs in Haskell (Jones, 1995; Peterson et al., 1996). More specifically, the paper shows how to define monadic parsers using do notation in Haskell. Of course, recursive descent parsers defined by hand lack the efficiency of bottom-up parsers generated by machine (Aho et al., 1986; Mogensen, 1993; Gill and Marlow, 1995). However, for many research applications, a simple recursive descent parser is perfectly sufficient. Moreover, while parser generators typically offer a fixed set of combinators for describing grammars, the method described here is completely extensible: parsers are first-class values, and we have the full power of Haskell available to define new combinators for special applications. The method is also an excellent illustration of the elegance of functional programming. The paper is targeted at the level of a good undergraduate student who is familiar with Haskell, and has completed a grammars and parsing course. Some knowledge of functional parsers would be useful, but no experience with monads is assumed. A Haskell library derived from the paper is available on the web from:"} {"_id": "fd47145321e4b34e043104c9eb21c9bc28dfd680", "title": "On the Design of Adaptive Automation for Complex Systems", "text": "This article presents a constrained review of human factors issues relevant to adaptive automation (AA), including designing complex system interfaces to support AA, facilitating human\u2013computer interaction and crew interactions in adaptive system operations, and considering workload associated with AA management in the design of human roles in adaptive systems. Unfortunately, these issues have received limited attention in earlier reviews of AA. This work is aimed at supporting a general theory of human-centered automation advocating humans as active information processors in complex system control loops to support situation awareness and effective performance. The review demonstrates the need for research into user-centered design of dynamic displays in adaptive systems. It also points to the need for discretion in designing transparent interfaces to facilitate human awareness of modes of automated systems. Finally, the review identifies the need to consider critical human\u2013human interactions in designing adaptive systems. This work describes important branches of a developing framework of AA research and contributes to the general theory of human-centered automation."} {"_id": "0cb087738f1e88043e13f0744d91b52aaa47d5f0", "title": "Android based Portable Hand Sign Recognition System", "text": "These days mobile devices like phones or tablets are very common among people of all ages. They are connected to networks and provide seamless communication through internet or cellular services.
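Returning briefly to the monadic-parsing abstract above: the parsers-as-first-class-values idea can be transcribed into a few lines of Python (toy combinators for illustration only; the paper itself works in Haskell with do notation):

def char(c):
    # Parser for a single expected character; a parser maps a string to
    # a list of (value, remaining-input) pairs, empty on failure.
    def parse(s):
        return [(c, s[1:])] if s[:1] == c else []
    return parse

def then(p, f):
    # Monadic bind: run p, feed each result to f, run the resulting parser.
    return lambda s: [r for (v, rest) in p(s) for r in f(v)(rest)]

digit = lambda s: [(s[0], s[1:])] if s[:1].isdigit() else []
pair = then(digit, lambda a: then(char(','), lambda _:
       then(digit, lambda b: lambda s: [((a, b), s)])))
print(pair("1,2"))  # [(('1', '2'), '')]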
These devices can be a big help for people who are not able to communicate properly, and even in emergency conditions. For a disabled person who is not able to speak, or a person who speaks a different language, these devices can be a boon, serving as understanding, translating and speaking systems. This chapter discusses a portable Android-based hand sign recognition system which can be used by disabled people. This chapter shows a part of an on-going project. Computer Vision based techniques were used for image analysis, and PCA was used after image tokenization for recognition. The method was tested on webcam input to make the system more robust."} {"_id": "d987bc0133f3331ef89ea1e50f5c57acfe17db9b", "title": "Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks", "text": "Five billion people in the world lack access to quality surgical care. Surgeon skill varies dramatically, and many surgical patients suffer complications and avoidable harm. Improving surgical training and feedback would help to reduce the rate of complications\u2014half of which have been shown to be preventable. To do this, it is essential to assess operative skill, a process that currently requires experts and is manual, time consuming, and subjective. In this work, we introduce an approach to automatically assess surgeon performance by tracking and analyzing tool movements in surgical videos, leveraging region-based convolutional neural networks. In order to study this problem, we also introduce a new dataset, m2cai16-tool-locations, which extends the m2cai16-tool dataset with spatial bounds of tools. While previous methods have addressed tool presence detection, ours is the first to not only detect presence but also spatially localize surgical tools in real-world laparoscopic surgical videos. We show that our method both effectively detects the spatial bounds of tools as well as significantly outperforms existing methods on tool presence detection. We further demonstrate the ability of our method to assess surgical quality through analysis of tool usage patterns, movement range, and economy of motion."} {"_id": "4a27709545cfa225d8983fb4df8061fb205b9116", "title": "A data-driven approach to predict the success of bank telemarketing", "text": "We propose a data mining (DM) approach to predict the success of telemarketing calls for selling bank long-term deposits. A Portuguese retail bank was addressed, with data collected from 2008 to 2013, thus including the effects of the recent financial crisis. We analyzed a large set of 150 features related with bank client, product and social-economic attributes. A semi-automatic feature selection was explored in the modeling phase, performed with the data prior to July 2012, and allowed a reduced set of 22 features to be selected. We also compared four DM models: logistic regression, decision trees (DT), neural network (NN) and support vector machine. Using two metrics, the area of the receiver operating characteristic curve (AUC) and the area of the LIFT cumulative curve (ALIFT), the four models were tested in an evaluation phase, using the most recent data (after July 2012) and a rolling windows scheme. The NN presented the best results (AUC=0.8 and ALIFT=0.7), allowing 79% of the subscribers to be reached by selecting the better-classified half of the clients.
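The two metrics named above can be sketched as follows; the AUC comes straight from scikit-learn, while the cumulative-LIFT area is our hedged reading of ALIFT, shown here on toy data:

import numpy as np
from sklearn.metrics import roc_auc_score

def alift(y_true, y_score):
    # Area of the cumulative LIFT curve, read as: the fraction of all
    # positives captured when contacting the top-k scored clients,
    # averaged over k. This is an interpretation of the abstract, not
    # a definition taken from the paper.
    order = np.argsort(-np.asarray(y_score))
    hits = np.cumsum(np.asarray(y_true)[order]) / max(1, np.sum(y_true))
    return hits.mean()

y_true = np.array([1, 0, 1, 0, 0, 1, 0, 0])     # toy subscription outcomes
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
print(roc_auc_score(y_true, y_score), alift(y_true, y_score))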
Also, two knowledge extraction methods, a sensitivity analysis and a DT, were applied to the NN model and revealed several key attributes (e.g., Euribor rate, direction of the call and bank agent experience). Such knowledge extraction confirmed the obtained model as credible and valuable for telemarketing campaign managers."} {"_id": "1b02659436090ae5888eafa25f5726a23b87bd64", "title": "Enhancing metacognitive reinforcement learning using reward structures and feedback", "text": "How do we learn to think better, and what can we do to promote such metacognitive learning? Here, we propose that cognitive growth proceeds through metacognitive reinforcement learning. We apply this theory to model how people learn how far to plan ahead and test its predictions about the speed of metacognitive learning in two experiments. In the first experiment, we find that our model can discern a reward structure that promotes metacognitive reinforcement learning from one that hinders it. In the second experiment, we show that our model can be used to design a feedback mechanism that enhances metacognitive reinforcement learning in an environment that hinders learning. Our results suggest that modeling metacognitive learning is a promising step towards promoting cognitive growth."} {"_id": "d51c099b2ef1b8fa131825b582ef23c6c50acc17", "title": "An experimental comparison of ensemble of classifiers for bankruptcy prediction and credit scoring", "text": "In this paper, we investigate the performance of several systems based on ensembles of classifiers for bankruptcy prediction and credit scoring. The obtained results are very encouraging: they improve on the performance obtained using the stand-alone classifiers. We show that the \u201cRandom Subspace\u201d method outperforms the other ensemble methods tested in this paper. Moreover, the best stand-alone method is the multi-layer perceptron neural net, while the best method tested in this work is the Random Subspace of Levenberg\u2013Marquardt neural net. In this work, three financial datasets are chosen for the experiments: Australian credit, German credit, and Japanese credit."} {"_id": "529d9215a7d8dd32bdbca018dab3e839569241a0", "title": "PyThinSearch: A Simple Web Search Engine", "text": "We describe a simple, functioning web search engine for indexing and searching online documents using the Python programming language. Python was chosen because it is an elegant language with simple syntax, is easy to learn and debug, and supports the main operating systems almost evenly. The remarkable characteristics of this program are an adjustable search function that allows users to rank documents with several combinations of score functions, and the focus on anchor text analysis, as we provide four additional schemes to calculate scores based on anchor text. We also provide an additional ranking algorithm based on a link addition process in the network, motivated by PageRank and HITS, as an experimental tool. This algorithm is the original contribution of this paper."} {"_id": "25d7da85858a4d89b7de84fd94f0c0a51a9fc67a", "title": "Selective Search for Object Recognition", "text": "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process.
Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html)."} {"_id": "0cc22d1dab50d9bab9501008e9b359cd9e51872a", "title": "SuperParsing: Scalable Nonparametric Image Parsing with Superpixels", "text": "This paper presents a simple and effective nonparametric approach to the problem of image parsing, or labeling image regions (in our case, superpixels produced by bottom-up segmentation) with their categories. This approach requires no training, and it can easily scale to datasets with tens of thousands of images and hundreds of labels. It works by scene-level matching with global image descriptors, followed by superpixel-level matching with local features and efficient Markov random field (MRF) optimization for incorporating neighborhood context. Our MRF setup can also compute a simultaneous labeling of image regions into semantic classes (e.g., tree, building, car) and geometric classes (sky, vertical, ground). Our system outperforms the state-of-the-art nonparametric method based on SIFT Flow on a dataset of 2,688 images and 33 labels. In addition, we report per-pixel rates on a larger dataset of 15,150 images and 170 labels. To our knowledge, this is the first complete evaluation of image parsing on a dataset of this size, and it establishes a new benchmark for the problem."} {"_id": "f2eced4d8317b8e64482d58cdae5ca95f490eb25", "title": "Model based control of fiber reinforced elastofluidic enclosures", "text": "Fiber-Reinforced Elastofluidic Enclosures (FREEs) are a subset of pneumatic soft robots with an asymmetric, continuously deformable skin that are able to generate a wide range of deformations and forces, including rotation and screw motions. Though these soft robots are able to generate a variety of motions, simultaneously controlling their end effector rotation and position has remained challenging due to the lack of a simple model. This paper presents a model that establishes a relationship between the pressure, the torque due to axial loading, and the axial rotation, to enable model-driven open-loop control for FREEs. The modeling technique relies on describing force equilibrium between the fiber, the fluid, and an elastomer model which is computed via system identification. The model is experimentally tested as these variables are changed, illustrating that it provides good agreement with the real system.
To further illustrate the potential of the model, a precision open-loop control experiment of opening a rotational combination lock is presented."} {"_id": "a55eb76988e0692525259a7bcbaeba36638e3600", "title": "Theory and implementation of dielectric resonator antenna excited by a waveguide slot", "text": "Excitation of dielectric resonator antennas (DRAs) by waveguide slots is proposed as an alternative to traditionally used excitation mechanisms in order to enhance the frequency bandwidth of slotted waveguide radiators and to control the power coupled to the DRA. The analysis is based on the numerical solution of coupled integral equations discretized using the method of moments (MoM). The dielectric resonator (DR) is modeled as a body-of-revolution based on the integral equation formulation for the equivalent electric and magnetic surface current densities. The analysis of an infinite or a semi-infinite waveguide containing longitudinal or transverse narrow slots uses the appropriate dyadic Green's function, resulting in closed-form analytical expressions of the MoM matrix. The scattering parameters for a slotted waveguide loaded with a dielectric resonator antenna disk are calculated and compared with finite-difference time-domain results. Bandwidth enhancement is achieved by the proper selection of the antenna parameters."} {"_id": "672221c45146dbe1c0e2d07132cc17eb20ff965c", "title": "Eight Key Elements of a Business Model", "text": "The growth of e-commerce and mobile services has been creating business opportunities. Among these, mobile marketing is predicted to be a new trend of marketing. As mobile devices are personal tools, the services offered on them ought to be unique; in the last few years, researchers have conducted studies related to the user acceptance of mobile services and produced results. This research aims to develop a model of a mobile e-commerce marketing system, in the form of a computer-based information system (CBIS), that addresses the recommendations or criteria formulated by researchers. In this paper, the criteria formulated by researchers are presented; then each of the criteria is resolved and translated into mobile services. A model of a CBIS, which is an integration of a website and a mobile application, is designed to materialize the mobile services. The model is presented in the form of a business model, system procedures, a network topology, software models of the website and mobile application, and database models."} {"_id": "afed1b1e121ebc62566c1479ed5102b8c61538ac", "title": "The LOCATA Challenge Data Corpus for Acoustic Source Localization and Tracking", "text": "Algorithms for acoustic source localization and tracking are essential for a wide range of applications such as personal assistants, smart homes, tele-conferencing systems, hearing aids, or autonomous systems. Numerous algorithms have been proposed for this purpose which, however, have so far not been evaluated and compared against each other using a common database. The IEEE-AASP Challenge on sound source localization and tracking (LOCATA) provides a novel, comprehensive data corpus for the objective benchmarking of state-of-the-art algorithms on sound source localization and tracking. The data corpus comprises six tasks ranging from the localization of a single static sound source with a static microphone array to the tracking of multiple moving speakers with a moving microphone array.
It contains real-world multichannel audio recordings, obtained by hearing aids, microphones integrated in a robot head, and a planar and a spherical microphone array in an enclosed acoustic environment, as well as positional information about the involved arrays and sound sources, represented by moving human talkers or static loudspeakers."} {"_id": "f5325f71d6ea59b8b120483ef882da3c0ea8a8cd", "title": "Thermal Analysis of the Permanent-Magnet Spherical Motor", "text": "There are many kinds of permanent-magnet (PM) motors, which are widely used in modern industry, but the problem of temperature rise is likely to affect the operating performance and even the life of the motor. The semiclosed structure of the PM spherical motor (PMSM) tends to increase the temperature when it works, so a thermal analysis of the PMSM is considered necessary. Based on double Fourier series decomposition, this paper establishes a three-dimensional (3-D) analytical model of PM eddy current loss in the PMSM, which considers the effect of time and space harmonics. Meanwhile, heat sources such as hysteresis loss and copper loss are discussed. With those heat sources, a 3-D equivalent thermal network model is established. The thermal resistances are calculated. By applying the thermal network model, the influence of different speeds and loads on temperature rise is analyzed, and a steady-state thermal analysis of the motor is performed by using finite element analysis. The structure of the stator is improved and a good heat dissipation effect is achieved. This paper provides the theoretical basis for the design of the ventilation and cooling system of the PMSM."} {"_id": "8ca87d3b45d66d8c3712d532e05b163d1c6c2121", "title": "Different Feature Selection for Sentiment Classification", "text": "Sentiment Analysis (SA) research has increased tremendously in recent times. Sentiment analysis means extracting the opinions of users from review documents. Sentiment classification using Machine Learning (ML) methods faces the problem of the high dimensionality of the feature vector. Therefore, a feature selection method is required to eliminate the irrelevant and noisy features from the feature vector for the efficient working of ML algorithms. Rough set theory provides an important concept for feature reduction called the reduct. The cost of reduct set computation is highly influenced by the attribute size of the dataset, and the problem of finding reducts has been proven to be NP-hard. Different feature selection methods are applied to different datasets. Experimental results show that mRMR is better than IG for sentiment classification, and that a hybrid feature selection method based on RST and Information Gain (IG) is better than the previous methods. The proposed methods are evaluated on four standard datasets, viz. the Movie review and Product (book, DVD, and electronics) review datasets, and experimental results show that the hybrid feature selection method outperforms other feature selection methods for sentiment classification."} {"_id": "98f1fa77d71bc92e283c8911d314ee3817cc3fce", "title": "Introduction to the Special Issue: The Literature Review in Information Systems", "text": "There has been a flowering of scholarly interest in the literature review as a research method in the information systems discipline. We feel privileged to contribute to this conversation and introduce the work of the authors represented in this special issue.
Some of the highlights include three new methods for conducting literature analysis and guidelines, tutorials, and approaches for coping with some of the challenges involved in carrying out a literature review. Of the three \u201cnew method\u201d papers, one (ontological meta-analysis and synthesis) is entirely new, and two (stylized facts and critical discourse analysis) are novel in the information systems context. The other four papers address more general issues: the challenges of effective search strategies when confronted with the burgeoning volume of research available, a detailed tool-supported approach for conducting a rigorous review, a detailed tutorial for conducting a qualitative literature review, and a discussion of quality issues. Collectively, the papers place emphasis beyond the traditional \u201cnarrative synthesis\u201d on the importance of selecting the appropriate approach for the research context and the importance of attention to quality and transparency at all stages of the process, regardless of which approach is adopted."} {"_id": "45a9c00b07129b63515945b60d778134ffc3773f", "title": "Internet Addiction and Relationships with Insomnia, Anxiety, Depression, Stress and Self-Esteem in University Students: A Cross-Sectional Designed Study", "text": "BACKGROUND AND AIMS\nInternet addiction (IA) could be a major concern in university medical students aiming to develop into health professionals. The implications of this addiction, as well as its association with sleep, mood disorders and self-esteem, can hinder their studies, impact their long-term career goals and have wide and detrimental consequences for society as a whole. The objectives of this study were to: 1) Assess potential IA in university medical students, as well as factors associated with it; 2) Assess the relationships between potential IA, insomnia, depression, anxiety, stress and self-esteem.\n\n\nMETHODS\nOur study was a cross-sectional questionnaire-based survey conducted among 600 students of three faculties: medicine, dentistry and pharmacy at Saint-Joseph University. Four validated and reliable questionnaires were used: the Young Internet Addiction Test, the Insomnia Severity Index, the Depression Anxiety Stress Scales (DASS 21), and the Rosenberg Self-Esteem Scale (RSES).\n\n\nRESULTS\nThe average YIAT score was 30 \u00b1 18.474; the potential IA prevalence rate was 16.8% (95% confidence interval: 13.81-19.79%) and it was significantly different between males and females (p-value = 0.003), with a higher prevalence in males (23.6% versus 13.9%). Significant correlations were found between potential IA and insomnia, stress, anxiety, depression and self-esteem (p-value < 0.001); ISI and DASS sub-scores were higher and self-esteem lower in students with potential IA.\n\n\nCONCLUSIONS\nIdentifying students with potential IA is important because this addiction often coexists with other psychological problems. Therefore, interventions should include not only IA management but also associated psychosocial stressors such as insomnia, anxiety, depression, stress, and self-esteem."} {"_id": "c08e84cfc979c5e4534797fb83b82419514e30f6", "title": "Sensorless current estimation and sharing in multiphase input-parallel output-parallel DC-DC converters", "text": "This paper introduces a sensorless current-sharing strategy for multiphase input-parallel output-parallel (IPOP) DC-DC converters. A dual active bridge (DAB) DC-DC converter is chosen as the basic DC-DC converter.
With this strategy, by perturbing the duty cycles in (n-1) out of n phases in turn and measuring the corresponding changes of the duty cycles, the parameter mismatches among phases are estimated. According to the mismatches, a set of variables, which are proportional to the per-phase output currents, is calculated. Then, with a current-sharing regulator, the parameter mismatches are compensated, thus achieving current sharing without current sensing. The strategy is verified through both simulation and experimental implementation with a 30V-to-70V, 272W, 20kHz, three-phase IPOP DC-DC converter."} {"_id": "41a1d968174234a6bc991f7f5ed29ecb49681216", "title": "Neural mechanisms of selective visual attention.", "text": "The two basic phenomena that define the problem of visual attention can be illustrated in a simple example. Consider the arrays shown in each panel of Figure 1. In a typical experiment, before the arrays were presented, subjects would be asked to report letters appearing in one color (targets, here black letters), and to disregard letters in the other color (nontargets, here white letters). The array would then be briefly flashed, and the subjects, without any opportunity for eye movements, would give their report. The display mimics our usual cluttered visual environment: It contains one or more objects that are relevant to current behavior, along with others that are irrelevant. The first basic phenomenon is limited capacity for processing information. At any given time, only a small amount of the information available on the retina can be processed and used in the control of behavior. Subjectively, giving attention to any one target leaves less available for others. In Figure 1, the probability of reporting the target letter N is much lower with two accompanying targets (Figure 1a) than with none (Figure 1b). The second basic phenomenon is selectivity: the ability to filter out unwanted information. Subjectively, one is aware of attended stimuli and largely unaware of unattended ones. Correspondingly, accuracy in identifying an attended stimulus may be independent of the number of nontargets in a display (Figure 1a vs 1c) (see Bundesen 1990, Duncan 1980)."} {"_id": "e5627ed3d4d07b73355cbfd7f54f5e6b696909bd", "title": "Discrete Delaunay: Boundary extraction from voxel objects", "text": "We present a discrete approach for boundary extraction from 3D image data. The proposed technique is based on the duality between the Voronoi graph computed across the digital boundary and the Delaunay triangulation. The originality of the approach is that the algorithms perform only integer arithmetic, and the method does not suffer from the standard rounding problems and numerical instabilities of floating point computations. This method has been applied both on segmented anatomical structures and on manufactured objects presenting corners and edges. The experimental results show that the method allows producing a polygonal boundary representation which is guaranteed to be a 2-manifold. This representation is successfully transformed into a triangular quality mesh which meets all topological and geometrical requirements of applications such as augmented reality or simulation."} {"_id": "3d3c80f0ef6cbbf6e35367b028feacace033affe", "title": "Fundamental Limits of Wideband Localization\u2014 Part I: A General Framework", "text": "The availability of position information is of great importance in many commercial, public safety, and military applications.
The coming years will see the emergence of location-aware networks with submeter accuracy, relying on accurate range measurements provided by wide bandwidth transmissions. In this two-part paper, we determine the fundamental limits of localization accuracy of wideband wireless networks in harsh multipath environments. We first develop a general framework to characterize the localization accuracy of a given node, and then extend our analysis to cooperative location-aware networks in Part II. In this paper, we characterize localization accuracy in terms of a performance measure called the squared position error bound (SPEB), and introduce the notion of equivalent Fisher information (EFI) to derive the SPEB in a succinct expression. This methodology provides insights into the essence of the localization problem by unifying localization information from individual anchors and that from a priori knowledge of the agent's position in a canonical form. Our analysis begins with the received waveforms themselves rather than utilizing only the signal metrics extracted from these waveforms, such as time-of-arrival and received signal strength. Hence, our framework exploits all the information inherent in the received waveforms, and the resulting SPEB serves as a fundamental limit of localization accuracy."} {"_id": "40d2e4e51903c7c6868acdde4335b3c9245a7002", "title": "Coupled Bayesian Sets Algorithm for Semi-supervised Learning and Information Extraction", "text": "Our inspiration comes from NELL (Never Ending Language Learning), a computer program running at Carnegie Mellon University to extract structured information from unstructured web pages. We consider the problem of a semi-supervised learning approach to extract category instances (e.g. country(USA), city(New York)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised approaches using a small number of labeled examples together with many unlabeled examples are often unreliable as they frequently produce an internally consistent, but nevertheless incorrect, set of extractions. We believe that this problem can be overcome by simultaneously learning independent classifiers in a new approach named the Coupled Bayesian Sets algorithm, based on Bayesian Sets, for many different categories and relations (in the presence of an ontology defining constraints that couple the training of these classifiers). Experimental results show that simultaneously learning a coupled collection of classifiers for 11 random categories resulted in much more accurate extractions than training classifiers through the original Bayesian Sets algorithm, Naive Bayes, BaS-all and the Coupled Pattern Learner (the category extractor used in NELL)."} {"_id": "21496f06aaf599cd76397cf7be3987079e23c93d", "title": "Polarity Lexicon Building: to what Extent Is the Manual Effort Worth?", "text": "Polarity lexicons are a basic resource for analyzing the sentiments and opinions expressed in texts in an automated way. This paper explores three methods to construct polarity lexicons: translating existing lexicons from other languages, extracting polarity lexicons from corpora, and annotating sentiments in Lexical Knowledge Bases. Each of these methods requires a different degree of human effort. We evaluate how much manual effort is needed and to what extent that effort pays in terms of performance improvement.
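As a side note on the wideband localization abstract above: the SPEB/EFI relationship it describes is commonly written as below (a sketch consistent with the abstract's definitions, where J_e denotes the equivalent Fisher information matrix for the position p):

\mathrm{SPEB}(\mathbf{p}) \;\triangleq\; \operatorname{tr}\left\{ \mathbf{J}_{\mathrm{e}}^{-1}(\mathbf{p}) \right\},
\qquad
\mathbb{E}\left[ \lVert \hat{\mathbf{p}} - \mathbf{p} \rVert^{2} \right] \;\geq\; \mathrm{SPEB}(\mathbf{p})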
The experimental setup includes generating lexicons for Basque and evaluating them against gold standard datasets in different domains. Results show that extracting polarity lexicons from corpora is the best solution for achieving a good performance with reasonable human effort."} {"_id": "91a9a10e4b4067dd9f7135129727c7530702763d", "title": "Exploring specialized near-memory processing for data intensive operations", "text": "Emerging 3D stacked memory systems provide significantly more bandwidth than current DDR modules. However, general purpose processors do not take full advantage of the resources offered by the memory modules. Taking advantage of the increased bandwidth requires the use of specialized processing units. In this paper, we evaluate the benefits of placing hardware accelerators at the bottom layer of a 3D stacked memory system compared to accelerators that are placed external to the memory stack. Our evaluation of the design using cycle-accurate simulation and RTL synthesis shows that, for important data intensive kernels, near-memory accelerators inside a single 3D memory package provide 3x-13x speedup over a Quad-core Xeon processor. Most of the benefits are from the application of accelerators, as the near-memory configurations provide marginal benefits compared to the same number of accelerators placed on a die external to the memory package. This comparable performance for external accelerators is due to the high bandwidth afforded by the high-speed off-chip links."} {"_id": "5e880d3bd1c4c4635ea7684df47109a33448b4c2", "title": "Towards an Architecture for Knowledge Representation and Reasoning in Robotics", "text": "This paper describes an architecture that combines the complementary strengths of probabilistic graphical models and declarative programming to enable robots to represent and reason with qualitative and quantitative descriptions of uncertainty and domain knowledge. An action language is used for the architecture\u2019s low-level (LL) and high-level (HL) system descriptions, and the HL definition of recorded history is expanded to allow prioritized defaults. For any given objective, tentative plans created in the HL using commonsense reasoning are implemented in the LL using probabilistic algorithms, and the corresponding observations are added to the HL history. Tight coupling between the levels helps automate the selection of relevant variables and the generation of policies in the LL for each HL action, and supports reasoning with violation of defaults, noisy observations and unreliable actions in complex domains. The architecture is evaluated in simulation and on robots moving objects in indoor domains."} {"_id": "4c422d3b8df9b140b04d436d704b062c4f304dec", "title": "A framework to build bespoke auto-tuners with structured Bayesian optimisation", "text": "Due to their complexity, modern computer systems expose many configuration parameters which users must manually tune to maximise the performance of their applications. To relieve users of this burden, auto-tuning has emerged as an alternative in which a black-box optimiser iteratively evaluates configurations to find efficient ones. A popular auto-tuning technique is Bayesian optimisation, which uses the results to incrementally build a probabilistic model of the impact of the parameters on performance. This allows the optimisation to quickly focus on efficient regions of the configuration space.
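For reference, the plain Bayesian-optimisation loop that the dissertation builds on (the off-the-shelf baseline, not BOAT itself) can be sketched as follows, assuming scikit-learn's Gaussian process regressor and an expected-improvement criterion over random candidates:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, best_y):
    # EI for minimization: how much each candidate is expected to
    # improve on the best observed value.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(evaluate, bounds, n_init=5, n_iter=20, seed=0):
    # evaluate: config vector -> measured cost; bounds: [(lo, hi), ...]
    rng = np.random.default_rng(seed)
    lo = [b[0] for b in bounds]; hi = [b[1] for b in bounds]
    X = rng.uniform(lo, hi, (n_init, len(bounds)))
    y = np.array([evaluate(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = rng.uniform(lo, hi, (256, len(bounds)))
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
        X = np.vstack([X, x_next]); y = np.append(y, evaluate(x_next))
    return X[np.argmin(y)], y.min()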
Unfortunately, for many computer systems, either the configuration space is too large to develop a good model, or the time to evaluate performance is too long for many evaluations to be executed. In this dissertation, I argue that by extracting a small amount of domain-specific knowledge about a system, it is possible to build a bespoke auto-tuner with significantly better performance than its off-the-shelf counterparts. This could be done, for example, by a system engineer who has a good understanding of the underlying system behaviour and wants to provide performance portability. This dissertation presents BOAT, a framework to build BespOke Auto-Tuners. BOAT offers a novel set of abstractions designed to make the exposition of domain knowledge easy and intuitive. First, I introduce Structured Bayesian Optimisation (SBO), an extension of the Bayesian optimisation algorithm. SBO can leverage a bespoke probabilistic model of the system's behaviour, provided by the system engineer, to rapidly converge to high performance configurations. The model can benefit from observing many runtime measurements per evaluation of the system, akin to the use of profilers. Second, I present Probabilistic-C++, a lightweight, high performance probabilistic programming library. It allows users to declare a probabilistic model of their system's behaviour and expose it to an SBO. Probabilistic programming is a recent tool from the Machine Learning community making the declaration of structured probabilistic models intuitive. Third, I present a new optimisation scheduling abstraction which offers a structured way to express optimisations which themselves execute other optimisations. For example, this is useful to express Bayesian optimisations, which at each iteration execute a numerical optimisation. Furthermore, it allows users to easily implement decompositions which exploit the loose coupling of subsets of the configuration parameters to optimise them almost independently."} {"_id": "9c4e2205821c519cf72ae3e88837ba0a678f6086", "title": "A Novel Technique for English Font Recognition Using Support Vector Machines", "text": "Font recognition is one of the challenging tasks in Optical Character Recognition. Most of the existing methods for font recognition make use of local typographical features and connected component analysis. In this paper, English font recognition is done based on global texture analysis. The main objective of this proposal is to employ support vector machines (SVM) in identifying various fonts. The feature vectors are extracted by making use of Gabor filters, and the proposed SVM is trained using these features. The method is found to give superior performance over neural networks by avoiding local minima points. The SVM model is formulated and tested, and the results are presented in this paper. It is observed that this method is content independent, and the SVM classifier shows an average accuracy of 93.54%."} {"_id": "25190bd8bc97c78626f5ca0b6f59cf0360c71b58", "title": "Mobile ad hoc networking: imperatives and challenges", "text": "Mobile ad hoc networks (MANETs) represent complex distributed systems that comprise wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary, \u201cad-hoc\u201d network topologies, allowing people and devices to seamlessly internetwork in areas with no pre-existing communication infrastructure, e.g., disaster recovery environments.
The ad hoc networking concept is not a new one, having been around in various forms for over 20 years. Traditionally, tactical networks have been the only communication networking application that followed the ad hoc paradigm. Recently, the introduction of new technologies such as Bluetooth, IEEE 802.11 and HiperLAN is helping enable eventual commercial MANET deployments outside the military domain. These recent evolutions have been generating a renewed and growing interest in the research and development of MANETs. This paper attempts to provide a comprehensive overview of this dynamic field. It first explains the important role that mobile ad hoc networks play in the evolution of future wireless technologies. Then, it reviews the latest research activities in these areas, including a summary of MANETs' characteristics, capabilities, applications, and design constraints. The paper concludes by presenting a set of challenges and problems requiring further research in the future."} {"_id": "cc0439a45f37cb1a5edbb1a9ded69a75a7249597", "title": "Single-Phase Seven-Level Grid-Connected Inverter for Photovoltaic System", "text": "This paper proposes a single-phase seven-level inverter for grid-connected photovoltaic systems, with a novel pulsewidth-modulated (PWM) control scheme. Three reference signals that are identical to each other, with an offset equivalent to the amplitude of the triangular carrier signal, were used to generate the PWM signals. The inverter is capable of producing seven levels of output voltage (Vdc, 2Vdc/3, Vdc/3, 0, -Vdc, -2Vdc/3, -Vdc/3) from the dc supply voltage. A digital proportional-integral current-control algorithm was implemented in a TMS320F2812 DSP to keep the current injected into the grid sinusoidal. The proposed system was verified through simulation and implemented in a prototype."} {"_id": "9ca9f28676ad788d04ba24a51141a9a0a0df4d67", "title": "A new model for learning in graph domains", "text": "In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called the graph neural network (GNN), capable of directly processing graphs. GNNs extend recursive neural networks and can be applied to most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed, and some experiments are discussed which assess the properties of the model."} {"_id": "6b75c572673d1f2af76332dd251ef1ac7bcb59f0", "title": "Wavelet Feature Based Confusion Character Sets for Gujarati Script", "text": "Indic script recognition is a difficult task due to the large number of symbols that result from the concatenation of vowel modifiers to basic consonants, the conjunction of consonants with modifiers, etc. Recognition of Gujarati script is a less studied area, and no attempt has been made so far to constitute confusion sets of Gujarati glyphs. In this paper, we present confusion sets of glyphs in printed Gujarati. Feature vectors made up of Daubechies D4 wavelet coefficients were subjected to two different classifiers, giving more than 96% accuracy for a larger set of symbols.
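A sketch of the wavelet feature extraction the Gujarati abstract describes, assuming the PyWavelets package (where the 4-tap Daubechies D4 filter is named "db2"); this illustrates the idea, not the authors' exact pipeline:

import numpy as np
import pywt  # PyWavelets

def d4_features(glyph_img):
    # Decompose the glyph image with the Daubechies D4 wavelet and
    # flatten approximation + detail subbands into one feature vector.
    coeffs = pywt.wavedec2(glyph_img.astype(np.float32), wavelet="db2", level=2)
    arrays = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    return np.concatenate([a.ravel() for a in arrays])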
A novel application of the GRNN architecture allows fast building of a classifier for the large character data set. The combined approach of wavelet feature extraction and GRNN classification has given the highest recognition accuracy reported on this script."} {"_id": "f407c09ae8d886fc373d3f471c97c22d3ca50580", "title": "Intelligent fuzzy controller of a quadrotor", "text": "The aim of this work is to describe an intelligent system based on fuzzy logic that is developed to control a quadrotor. A quadrotor is a helicopter with four rotors, which make the vehicle more stable but more complex to model and to control. The quadrotor has been used as a testing platform in recent years by various universities and research centres. A quadrotor has six degrees of freedom, three of them regarding the position: height, horizontal and vertical motions; and the other three are related to the orientation: pitch, roll and yaw. A fuzzy controller is designed and implemented to control a simulation model of the quadrotor. The inputs are the desired values of the height, roll, pitch and yaw. The outputs are the powers of each of the four rotors that are necessary to reach the specifications. Simulation results prove the efficiency of this intelligent control strategy."} {"_id": "714b68efe5f81e8ec24701fc222393ed038137ac", "title": "Clearing algorithms for barter exchange markets: enabling nationwide kidney exchanges", "text": "In barter-exchange markets, agents seek to swap their items with one another, in order to improve their own utilities. These swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle. We focus mainly on the upcoming national kidney-exchange market, where patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. With over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.\n The clearing problem involves finding a social welfare maximizing exchange when the maximum length of a cycle is fixed. Long cycles are forbidden, since, for incentive reasons, all transplants in a cycle must be performed simultaneously. Also, in barter-exchanges generally, more agents are affected if one drops out of a longer cycle. We prove that the clearing problem with this cycle-length constraint is NP-hard. Solving it exactly is one of the main challenges in establishing a national kidney exchange.\n We present the first algorithm capable of clearing these markets on a nationwide scale. The key is incremental problem formulation. We adapt two paradigms for the task: constraint generation and column generation. For each, we develop techniques that dramatically improve both runtime and memory usage. We conclude that column generation scales drastically better than constraint generation. Our algorithm also supports several generalizations, as demanded by real-world kidney exchanges.\n Our algorithm replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges. 
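To make the clearing model concrete, below is a minimal sketch of the cycle formulation with the cycle-length cap described above. It brute-forces a toy instance only; the compatibility graph and solver are assumptions, and the paper's actual approach is an incrementally formulated ILP solved with constraint/column generation at nationwide scale.

```python
# Toy sketch of kidney-exchange clearing with a cycle cap of 3 (illustrative
# only; the paper's solver uses incremental ILP formulation, not brute force).
from itertools import combinations

# Hypothetical compatibility graph: arc (u, v) = u's donor can give to v.
arcs = {1: [2], 2: [3], 3: [1, 4], 4: [3], 5: [1]}

def short_cycles(arcs, max_len=3):
    """Enumerate simple directed cycles of length <= max_len, each once."""
    found = []
    def dfs(start, node, path):
        for nxt in arcs.get(node, []):
            if nxt == start and len(path) >= 2:
                found.append(tuple(path))          # canonical: starts at min vertex
            elif nxt > start and nxt not in path and len(path) < max_len:
                dfs(start, nxt, path + [nxt])
    for v in arcs:
        dfs(v, v, [v])
    return found

def clear(arcs, max_len=3):
    """Choose vertex-disjoint cycles maximizing transplants (brute force)."""
    cycles = short_cycles(arcs, max_len)
    best, best_size = [], 0
    for r in range(1, len(cycles) + 1):
        for combo in combinations(cycles, r):
            verts = [v for c in combo for v in c]
            if len(verts) == len(set(verts)) and len(verts) > best_size:
                best, best_size = list(combo), len(verts)
    return best

print(clear(arcs))  # -> [(1, 2, 3)]: cycle (3, 4) conflicts on vertex 3
```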
Match runs are conducted every two weeks, and transplants based on our optimizations have already been performed."} {"_id": "0a7eb04e7100161893e9a81f89445924439c5964", "title": "PeerSpace - An Online Collaborative Learning Environment for Computer Science Students", "text": "The aim of PeerSpace is to promote peer support and peer learning in introductory Computer Science (CS) courses by providing the students with online collaborative tools for convenient synchronous and asynchronous interactions on course-related topics and social matters. This paper presents the development of various social and learning components in PeerSpace that are unique in promoting collaborative learning. Analysis of preliminary results is presented."} {"_id": "138c86b9283e4f26ff1583acdf4e51a5f88ccad1", "title": "Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition", "text": "Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding the scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, the recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information."} {"_id": "321f14b35975b3800de5e66da64dee96071603d9", "title": "Efficient and Robust Feature Selection via Joint \u21132,1-Norms Minimization", "text": "Feature selection is an important component of many machine learning applications. Especially in many bioinformatics tasks, efficient and robust feature selection methods are desired to extract meaningful features and eliminate noisy ones. In this paper, we propose a new robust feature selection method emphasizing joint \u21132,1-norm minimization on both the loss function and the regularization. The \u21132,1-norm-based loss function is robust to outliers in data points, and the \u21132,1-norm regularization selects features across all data points with joint sparsity. An efficient algorithm is introduced with proven convergence. Our regression-based objective makes the feature selection process more efficient. Our method has been applied to both genomic and proteomic biomarker discovery. Extensive empirical studies are performed on six data sets to demonstrate the performance of our feature selection method."} {"_id": "9b505dd5459fb28f0136d3c63793b600042e6a94", "title": "A Multimedia Retrieval Framework Based on Semi-Supervised Ranking and Relevance Feedback", "text": "We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. 
First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency."} {"_id": "0ef550dacb89fb655f252e5b17dbd5d643eb5ac1", "title": "Action recognition in the premotor cortex.", "text": "We recorded electrical activity from 532 neurons in the rostral part of inferior area 6 (area F5) of two macaque monkeys. Previous data had shown that neurons of this area discharge during goal-directed hand and mouth movements. We describe here the properties of a newly discovered set of F5 neurons (\"mirror neurons\", n = 92), all of which became active both when the monkey performed a given action and when it observed a similar action performed by the experimenter. Mirror neurons, in order to be visually triggered, required an interaction between the agent of the action and the object of it. The sight of the agent alone or of the object alone (three-dimensional objects, food) was ineffective. The hand and the mouth were by far the most effective agents. The actions most represented among those activating mirror neurons were grasping, manipulating and placing. In most mirror neurons (92%) there was a clear relation between the visual action they responded to and the motor response they coded. In approximately 30% of mirror neurons the congruence was very strict and the effective observed and executed actions corresponded both in terms of general action (e.g. grasping) and in terms of the way in which that action was executed (e.g. precision grip). We conclude by proposing that mirror neurons form a system for matching observation and execution of motor actions. We discuss the possible role of this system in action recognition and, given the proposed homology between F5 and human Broca's region, we posit that a matching system, similar to that of mirror neurons, exists in humans and could be involved in recognition of actions as well as phonetic gestures."} {"_id": "15b2c44b3868a1055850846161aaca59083e0529", "title": "Learning with Local and Global Consistency", "text": "We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. 
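As a concrete illustration of this kind of smoothing iteration, here is a minimal NumPy sketch of the familiar F <- alpha*S*F + (1-alpha)*Y update over a normalized affinity matrix; the Gaussian kernel width and the alpha value are illustrative assumptions, not settings from the paper.

```python
# Minimal label-spreading sketch: iterate F <- alpha*S@F + (1-alpha)*Y,
# where S = D^-1/2 W D^-1/2 is the normalized affinity matrix.
import numpy as np

def label_spread(X, y, alpha=0.99, sigma=1.0, iters=200):
    """X: (n, d) points; y: -1 for unlabeled, else class index 0..C-1."""
    y = np.asarray(y)
    n, C = len(y), int(y.max()) + 1
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian affinities
    np.fill_diagonal(W, 0.0)                      # no self-affinity
    Dinv = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = Dinv[:, None] * W * Dinv[None, :]         # D^-1/2 W D^-1/2
    Y = np.zeros((n, C))
    Y[y >= 0, y[y >= 0]] = 1.0                    # clamp the labeled points
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y       # smooth + fit tradeoff
    return F.argmax(1)                            # predicted classes
```

The iteration converges to the closed form F* = (I - alpha*S)^-1 Y, so the loop can be replaced by one linear solve on small problems.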
Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data."} {"_id": "ebeb3dc542db09dfca3d5697f6897058eaa3a1f1", "title": "DC and RF breakdown voltage characteristics of SiGe HBTs for WiFi PA applications", "text": "Breakdown voltage and RF characteristics relevant for RF power amplifiers (PA) are presented in this paper. Typically, DC collector-to-emitter breakdown voltage with base open (BVCEO) or DC collector-to-base breakdown with emitter open (BVCBO) has been presented as the metric for the voltage limit of PA devices. In practical PA circuits, the RF envelope voltage can swing well beyond BVCEO without causing a failure. An analysis of output power swing limitations and DC breakdown is presented with attention to biasing and temperature."} {"_id": "6fdade400c600be1247ea41f1ab9ce2e9196d835", "title": "Reduced Prefrontal Connectivity in Psychopathy", "text": "Julian C. Motzkin,1 Joseph P. Newman,2 Kent A. Kiehl,3,4 and Michael Koenigs1 1Department of Psychiatry, University of Wisconsin-Madison, Madison, Wisconsin 53719, 2Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin 53706, 3The MIND Research Network, Albuquerque, New Mexico 87131, and 4Departments of Psychology and Neuroscience, University of New Mexico, Albuquerque, New Mexico 87131"} {"_id": "5174f38080629993bb258497269277b4d2e865f6", "title": "An Unbiased Detector of Curvilinear Structures", "text": "The extraction of curvilinear structures is an important low-level operation in computer vision that has many applications. Most existing operators use a simple model for the line that is to be extracted, i.e., they do not take into account the surroundings of a line. This leads to the undesired consequence that the line will be extracted in the wrong position whenever a line with different lateral contrast is extracted. In contrast, the algorithm proposed in this paper uses an explicit model for lines and their surroundings. By analyzing the scale-space behaviour of a model line profile, it is shown how the bias that is induced by asymmetrical lines can be removed. Furthermore, the algorithm not only returns the precise sub-pixel line position, but also the width of the line for each line point, also with sub-pixel accuracy."} {"_id": "a8718f42e55720b942e11fd1eda73f7c26ad7ef9", "title": "Design and implementation of an Android host-based intrusion prevention system", "text": "Android has a dominant share of the mobile market, and there is a significant rise of mobile malware targeting Android devices. Android malware accounted for 97% of all mobile threats in 2013 [26]. To protect smartphones and prevent privacy leakage, companies have implemented various host-based intrusion prevention systems (HIPS) on their Android devices. In this paper, we first analyze the implementations, strengths and weaknesses of three popular HIPS architectures. We demonstrate a severe loophole and weakness of an existing popular HIPS product which hackers can readily exploit. Then we present a design and implementation of a secure and extensible HIPS platform---\"Patronus.\" Patronus not only provides intrusion prevention without the need to modify the Android system, it can also dynamically detect existing malware based on runtime information. We propose a two-phase dynamic detection algorithm for detecting running malware. 
Our experiments show that Patronus can prevent the intrusive behaviors efficiently and detect malware accurately with a very low performance overhead and power consumption."} {"_id": "50886d25ddd5d0d1982ed94f90caa67639fcf1a1", "title": "Open Source Integrated Planner for Autonomous Navigation in Highly Dynamic Environments", "text": ""} {"_id": "4c2bbcb3e897e927cd390517b2036b0b9123953c", "title": "BeAware! - Situation awareness, the ontology-driven way", "text": "Information overload is a severe problem for human operators of large-scale control systems as, for example, encountered in the domain of road traffic management. Operators of such systems are at risk of lacking situation awareness, because existing systems focus on the mere presentation of the available information on graphical user interfaces\u2014thus endangering the timely and correct identification, resolution, and prevention of critical situations. In recent years, ontology-based approaches to situation awareness featuring a semantically richer knowledge model have emerged. However, current approaches are either highly domain-specific or have, in case they are domain-independent, shortcomings regarding their reusability. In this paper, we present our experience gained from the development of BeAware!, a framework for ontology-driven information systems aiming at increasing an operator's situation awareness. In contrast to existing domain-independent approaches, BeAware!'s ontology introduces the concept of spatio-temporal primitive relations between observed real-world objects, thereby improving the reusability of the framework. To show its applicability, a prototype of BeAware! has been implemented in the domain of road traffic management. An overview of this prototype and lessons learned for the development of ontology-driven information systems complete our contribution."} {"_id": "f8e3eb310101eac0e5175c61bb7bba2b89fcf45e", "title": "Using sonification", "text": "The idea behind sonification is that synthetic non-verbal sounds can represent numerical data and provide support for information processing activities of many different kinds. This article describes some of the ways that sonification has been used in assistive technologies, remote collaboration, engineering analyses, scientific visualisations, emergency services and aircraft cockpits. Approaches for designing sonifications are surveyed, and issues raised by the existing approaches and applications are outlined. Relations are drawn to other areas of knowledge where similar issues have also arisen, such as human-computer interaction, scientific visualisation, and computer music. At the end is a list of resources that will help you delve further into the topic."} {"_id": "73a061bc6fe1afca540f0451dbd07065a1cf8429", "title": "Comparison of Different Classification Techniques on PIMA Indian Diabetes Data", "text": "The development of data-mining applications such as classification and clustering has been applied to large-scale data. In this research, we present a comparative study of different classification techniques using three data mining tools named WEKA, TANAGRA and MATLAB. The aim of this paper is to analyze the performance of different classification techniques for a large data set. 
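As a rough, runnable stand-in for this kind of benchmark (scikit-learn here instead of the WEKA/TANAGRA/MATLAB tools the study actually uses; the file name and the 10-fold protocol are assumptions), one could compare a few of the classifiers enumerated in the next sentence:

```python
# Hypothetical scikit-learn analogue of the comparison; "pima.csv" and the
# cross-validation settings are assumptions, not the paper's exact protocol.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

data = np.loadtxt("pima.csv", delimiter=",")   # 768 rows: 8 inputs + 1 label
X, y = data[:, :8], data[:, 8]

models = {
    "Multilayer Perceptron": MLPClassifier(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Decision tree (C4.5-like)": DecisionTreeClassifier(),  # J48 analogue
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```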
The algorithms or classifiers tested are Multilayer Perceptron, Bayes Network, J48graft (C4.5), Fuzzy Lattice Reasoning (FLR), NaiveBayes, JRip (RIPPER), Fuzzy Inference System (FIS), and Adaptive Neuro-Fuzzy Inference Systems (ANFIS). A fundamental review of the selected techniques is presented for introductory purposes. The diabetes data set, with a total of 768 instances and 9 attributes (8 for input and 1 for output), will be used to test and justify the differences between the classification methods or algorithms. Subsequently, the classification technique that has the potential to significantly improve the common or conventional methods will be suggested for use in large-scale data, bioinformatics or other general applications."} {"_id": "d5621f8c4aa7bf231624489d3525fe4c373b7ddc", "title": "SSD-Assisted Backup and Recovery for Database Systems", "text": "Backup and recovery is an important feature of database systems since it protects data against unexpected hardware and software failures. Database systems can provide data safety and reliability by creating a backup and restoring the backup after a failure. Database administrators can use backup/recovery tools that are provided with database systems or backup/recovery methods with operating systems. However, the existing tools perform time-consuming jobs, and the existing methods may negatively affect run-time performance during normal operation even though high-performance SSDs are used. In this paper, we present an SSD-assisted backup/recovery scheme for database systems. In our scheme, we extend the out-of-place update characteristics of flash-based SSDs for backup/recovery operations. To this end, we exploit the resources (e.g., flash translation layer and DRAM cache with supercapacitors) inside SSDs, and we call our SSD with new backup/recovery features BR-SSD. We design and implement the backup/recovery functionality in the Samsung enterprise-class SSD (i.e., SM843Tn) for more realistic systems. Furthermore, we conduct a case study of BR-SSDs in replicated database systems and modify MySQL with replication to integrate BR-SSDs. The experimental results demonstrate that our scheme provides fast recovery while it does not negatively affect the run-time performance during normal operation."} {"_id": "9effb687054446445d1f827f28b94ef886553c82", "title": "Multimodal analysis of the implicit affective channel in computer-mediated textual communication", "text": "Computer-mediated textual communication has become ubiquitous in recent years. Compared to face-to-face interactions, there is decreased bandwidth in affective information, yet studies show that interactions in this medium still produce rich and fulfilling affective outcomes. While overt communication (e.g., emoticons or explicit discussion of emotion) can explain some aspects of affect conveyed through textual dialogue, there may also be an underlying implicit affective channel through which participants perceive additional emotional information. To investigate this phenomenon, computer-mediated tutoring sessions were recorded with Kinect video and depth images and processed with novel tracking techniques for posture and hand-to-face gestures. Analyses demonstrated that tutors implicitly perceived students' focused attention, physical demand, and frustration. Additionally, bodily expressions of posture and gesture correlated with student cognitive-affective states that were perceived by tutors through the implicit affective channel. 
Finally, posture and gesture complement each other in multimodal predictive models of student cognitive-affective states, explaining greater variance than either modality alone. This approach of empirically studying the implicit affective channel may identify details of human behavior that can inform the design of future textual dialogue systems modeled on naturalistic interaction."} {"_id": "f74219d8448b4f3a05ff82a8acf2e78e5c141c41", "title": "phishGILLNET \u2014 phishing detection methodology using probabilistic latent semantic analysis, AdaBoost, and co-training", "text": "Identity theft is one of the most profitable crimes committed by felons. In the cyber space, this is commonly achieved using phishing. We propose here a robust server-side methodology to detect phishing attacks, called phishGILLNET, which incorporates the power of natural language processing and machine learning techniques. phishGILLNET is a multi-layered approach to detect phishing attacks. The first layer (phishGILLNET1) employs Probabilistic Latent Semantic Analysis (PLSA) to build a topic model. The topic model handles synonymy (multiple words with similar meaning), polysemy (words with multiple meanings), and other linguistic variations found in phishing. Intentionally misspelled words found in phishing are handled using Levenshtein editing and Google APIs for correction. Taking the term-document frequency matrix as input, PLSA finds phishing and non-phishing topics using tempered expectation maximization. The performance of phishGILLNET1 is evaluated using the PLSA fold-in technique, and classification is achieved using Fisher similarity. The second layer of phishGILLNET (phishGILLNET2) employs AdaBoost to build a robust classifier. Using probability distributions of the best PLSA topics as features, the classifier is built using AdaBoost. The third layer (phishGILLNET3) further expands phishGILLNET2 by building a classifier from labeled and unlabeled examples by employing Co-Training. Experiments were conducted using one of the largest public corpora of email data, containing 400,000 emails. Results show that phishGILLNET3 outperforms state-of-the-art phishing detection methods and achieves an F-measure of 100%. Moreover, phishGILLNET3 requires only a small percentage (10%) of the data to be annotated, thus saving significant time and labor and avoiding errors incurred in human annotation."} {"_id": "9903b6801d6cc7687d484f8ec7d8496093cdd24b", "title": "Circle grid fractal pattern for calibration at different camera zoom levels", "text": "Camera calibration patterns and fiducial markers are basic techniques for 3D measurement to create Computer Graphics (CG) and Augmented Reality (AR). A checkerboard is widely used as a calibration pattern, a circle or ring grid provides higher precision [Datta et al. 2009], and matrix codes are common as AR markers."} {"_id": "551212c42795f2613cb09138d594ec3e654042b7", "title": "Biomedical Image Segmentation Using Fully Convolutional Networks on TrueNorth", "text": "With the rapid growth of medical and biomedical image data, energy-efficient solutions for analyzing such image data fast and accurately on platforms with a low power budget are highly desirable. This paper uses segmenting glial cells in brain microscopy images as a case study to demonstrate how to achieve biomedical image segmentation with significant energy saving and minimal compromise in accuracy. 
Specifically, we design, train, implement, and evaluate Fully Convolutional Networks (FCNs) for biomedical image segmentation on IBM's neurosynaptic DNN processor \u2014 TrueNorth (TN). Comparisons of TN with a low-power NVIDIA TX2 mobile GPU platform, in terms of accuracy and energy dissipation, have been conducted. Experimental results show that TN can offer at least two orders of magnitude improvement in energy efficiency compared to the TX2 GPU for the same workload."} {"_id": "6756d3e0669430fa6e006754aecb46084818d6b6", "title": "McRT-STM: a high performance software transactional memory system for a multi-core runtime", "text": "Applications need to become more concurrent to take advantage of the increased computational power provided by chip level multiprocessing. Programmers have traditionally managed this concurrency using locks (mutex-based synchronization). Unfortunately, lock-based synchronization often leads to deadlocks, makes fine-grained synchronization difficult, hinders composition of atomic primitives, and provides no support for error recovery. Transactions avoid many of these problems, and therefore, promise to ease concurrent programming. We describe a software transactional memory (STM) system that is part of McRT, an experimental Multi-Core RunTime. The McRT-STM implementation uses a number of novel algorithms, and supports advanced features such as nested transactions with partial aborts, conditional signaling within a transaction, and object-based conflict detection for C/C++ applications. The McRT-STM exports interfaces that can be used from C/C++ programs directly or as a target for compilers translating higher-level linguistic constructs. We present a detailed performance analysis of various STM design tradeoffs such as pessimistic versus optimistic concurrency, undo logging versus write buffering, and cache-line-based versus object-based conflict detection. We also show a MCAS implementation that works on arbitrary values, coexists with the STM, and can be used as a more efficient form of transactional memory. To provide a baseline we compare the performance of the STM with that of fine-grained and coarse-grained locking using a number of concurrent data structures on a 16-processor SMP system. We also show our STM performance on a non-synthetic workload -- the Linux sendmail application."} {"_id": "9a0b250656df8fd2e5fecb78110668195f60f11c", "title": "Modal analysis and transcription of strokes of the mridangam using non-negative matrix factorization", "text": "In this paper we use a Non-negative Matrix Factorization (NMF) based approach to analyze the strokes of the mridangam, a South Indian hand drum, in terms of the normal modes of the instrument. Using NMF, a dictionary of spectral basis vectors is first created for each of the modes of the mridangam. The composition of the strokes is then studied by projecting them along the direction of the modes using NMF. We then extend this knowledge of each stroke in terms of its basic modes to transcribe audio recordings. Hidden Markov Models are adopted to learn the modal activations for each of the strokes of the mridangam, yielding up to 88.40% accuracy during transcription."} {"_id": "44b7f70a79734c73c8f231f8eb91f724ef97c371", "title": "Blind image quality assessment by relative gradient statistics and adaboosting neural network", "text": "The image gradient is a commonly computed image feature and a potentially predictive factor for image quality assessment (IQA). 
Indeed, it has been successfully used for both full- and no-reference image quality prediction. However, the gradient orientation has not been deeply explored as a predictive source of information for image quality assessment. Here we seek to amend this by studying the quality relevance of the relative gradient orientation, viz., the gradient orientation relative to the surround. We also deploy a relative gradient magnitude feature which accounts for perceptual masking and utilize an AdaBoosting back-propagation (BP) neural network to map the image features to image quality. The generalization of the AdaBoosting BP neural network results in an effective and robust quality prediction model. The new model, called Oriented Gradients Image Quality Assessment (OG-IQA), is shown to deliver highly competitive image quality prediction performance as compared with the most popular IQA approaches. Furthermore, we show that OG-IQA has good database independence properties and a low complexity."} {"_id": "6424add0f4f99cb582ecc50c4a33ae18d9236021", "title": "Unconstrained Monocular 3D Human Pose Estimation by Action Detection and Cross-Modality Regression Forest", "text": "This work addresses the challenging problem of unconstrained 3D human pose estimation (HPE) from a novel perspective. Existing approaches struggle to operate in realistic applications, mainly due to their scene-dependent priors, such as background segmentation and multi-camera networks, which restrict their use in unconstrained environments. We therefore present a framework which applies action detection and 2D pose estimation techniques to infer 3D poses in an unconstrained video. Action detection offers spatiotemporal priors to 3D human pose estimation by both recognising and localising actions in space-time. Instead of holistic features, e.g. silhouettes, we leverage the flexibility of the deformable part model to detect 2D body parts as a feature to estimate 3D poses. A new unconstrained pose dataset has been collected to validate the feasibility of our method, which demonstrated promising results, significantly outperforming the relevant state of the art."} {"_id": "1c0d35e024dbb8a1db0f326fb243a67d158d5f24", "title": "0-Day Vulnerabilities and Cybercrime", "text": "This study analyzes 0-day vulnerabilities in the broader context of cybercrime and economic markets. The work is based on interviews with several leading experts and on field research by the authors. In particular, cybercrime is considered both when it involves traditional criminal activities and when military operations are involved. A description of the different 0-day vulnerability markets - White, Black and Government markets - is provided, and the characteristics of malware factories and their major customers are discussed."} {"_id": "d6e2b45820dfee9ac48926884d19a30ebf33820b", "title": "PreFix: Switch Failure Prediction in Datacenter Networks", "text": "In modern datacenter networks (DCNs), failures of network devices are the norm rather than the exception, and many research efforts have focused on dealing with failures after they happen. In this paper, we take a different approach by predicting failures, so that operators can intervene and \"fix\" the potential failures before they happen. Specifically, in our proposed system, named PreFix, we aim to determine during runtime whether a switch failure will happen in the near future. 
The prediction is based on measurements of the current switch system status and historical switch hardware failure cases that have been carefully labelled by network operators. Our key observation is that failures of the same switch model share some common syslog patterns before failures occur, and we can apply machine learning methods to extract the common patterns for predicting switch failures. Our novel set of features (message template sequence, frequency, seasonality and surge) for machine learning can efficiently deal with the challenges of noise, sample imbalance, and computation overhead. We evaluated PreFix on a data set collected from 9397 switches (3 different switch models) deployed in more than 20 datacenters owned by a top global search engine over a 2-year period. PreFix achieved an average of 61.81% recall and a 1.84 * 10^-5 false positive ratio. It outperforms the other failure prediction methods for computers and ISP devices."} {"_id": "cfcac1ef666fc173eb80f93c9eec220da63c4b5e", "title": "Semantic Feature Selection for Text with Application to Phishing Email Detection", "text": "In a phishing attack, an unsuspecting victim is lured, typically via an email, to a web site designed to steal sensitive information such as bank/credit card account numbers, login information for accounts, etc. Each year Internet users lose billions of dollars to this scourge. In this paper, we present a general semantic feature selection method for text problems based on the statistical t-test and WordNet, and we show its effectiveness on phishing email detection by designing classifiers that combine semantics and statistics in analyzing the text in the email. Our feature selection method is general and useful for other applications involving text-based analysis as well. Our email body-text-only classifier achieves more than 95% accuracy on detecting phishing emails with a false positive rate of 2.24%. Due to its use of semantics, our feature selection method is robust against adaptive attacks and avoids the problem of frequent retraining needed by machine learning classifiers."} {"_id": "951128a02e03a28358aabf1e6df053899ab118ab", "title": "Mining Parallel Corpora from Sina Weibo and Twitter", "text": "Microblogs such as Twitter, Facebook, and Sina Weibo (China's equivalent of Twitter) are a remarkable linguistic resource. In contrast to content from edited genres such as newswire, microblogs contain discussions of virtually every topic by numerous individuals in different languages and dialects and in different styles. In this work, we show that some microblog users post \u201cself-translated\u201d messages targeting audiences who speak different languages, either by writing the same message in multiple languages or by retweeting translations of their original posts in a second language. We introduce a method for finding and extracting this naturally occurring parallel data. Identifying the parallel content requires solving an alignment problem, and we give an optimally efficient dynamic programming algorithm for this. Using our method, we extract nearly 3M Chinese\u2013English parallel segments from Sina Weibo using a targeted crawl of Weibo users who post in multiple languages. Additionally, from a random sample of Twitter, we obtain substantial amounts of parallel data in multiple language pairs. 
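The alignment step mentioned above lends itself to a simple dynamic program; the sketch below is a generic monotone (Needleman-Wunsch-style) aligner with a toy dictionary-overlap score, not the optimally efficient algorithm of the paper, whose recurrence and scoring are tailored to microblog posts.

```python
# Generic monotone segment-alignment DP (a simplified stand-in; the paper's
# algorithm is tailored to microblogs). The toy lexicon-overlap score and
# the gap penalty are assumptions for illustration only.
def sim(src, tgt, lexicon):
    """Fraction of target tokens explained by translating source tokens."""
    translated = {lexicon.get(tok, tok) for tok in src.split()}
    tgt_toks = set(tgt.split())
    return len(translated & tgt_toks) / max(len(tgt_toks), 1)

def align_score(src_segs, tgt_segs, lexicon, gap=-0.2):
    n, m = len(src_segs), len(tgt_segs)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap                      # unmatched source segments
    for j in range(1, m + 1):
        D[0][j] = j * gap                      # unmatched target segments
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1][j - 1] + sim(src_segs[i - 1], tgt_segs[j - 1], lexicon)
            D[i][j] = max(match, D[i - 1][j] + gap, D[i][j - 1] + gap)
    return D[n][m]  # a backtrace over D recovers the actual segment pairing
```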
Evaluation is performed by assessing the accuracy of our extraction approach relative to a manual annotation as well as in terms of utility as training data for a Chinese\u2013English machine translation system. Relative to traditional parallel data resources, the automatically extracted parallel data yield substantial translation quality improvements in translating microblog text and modest improvements in translating edited news content."} {"_id": "abf9ee52b29f109f5dbf6423fbc0d898df802971", "title": "Synthetic aperture radar interferometry-Invited paper", "text": ""} {"_id": "1d710f29ce139296956fe31d6e78ded479fb08bb", "title": "10-kV SiC MOSFET-Based Boost Converter", "text": "10-kV silicon carbide (SiC) MOSFETs are currently being developed by a number of organizations in the U.S. with prospective applications in high-voltage and high-frequency power-electronic systems. The aim of this paper is to demonstrate the high-frequency and high-temperature capability of 10-kV SiC MOSFETs in the application of a dc/dc boost converter. In this study, a 10-kV SiC MOSFET and a junction barrier Schottky (JBS) diode were characterized and modeled in SPICE. Following this, a dc/dc boost converter based on a 10-kV 10-A MOSFET and a 10-kV 5-A JBS diode was designed and tested under continuous operation for frequencies up to 25 kHz. The boost converter had an output voltage of 4 kV, an output power of 4 kW, and operated with a junction temperature of 174\u00b0C for the SiC MOSFET. The fast switching speed, low losses, and high-temperature operation capability of 10-kV SiC MOSFETs demonstrated in the dc/dc boost converter make them attractive for high-frequency and high-voltage power-conversion applications."} {"_id": "69bd231dfa8f4fb88090ce5b0f1d3701b765bf72", "title": "Analytical loss model of power MOSFET", "text": "An accurate analytical model is proposed in this paper to calculate the power loss of a metal-oxide-semiconductor field-effect transistor (MOSFET). The nonlinearity of the capacitors of the devices and the parasitic inductance in the circuit, such as the source inductor shared by the power stage and driver loop, the drain inductor, etc., are considered in the model. In addition, ringing is always observed in switching power supplies but is ignored in traditional loss models. In this paper, the ringing loss is analyzed in a simple way with a clear physical meaning. Based on this model, the circuit power loss can be accurately predicted. Experimental results are provided to verify the model. The simulation results match the experimental results very well, even at 2-MHz switching frequency."} {"_id": "8a69693a73f96a64a4148893fb2b50b219176370", "title": "Design And Application Guide For High Speed MOSFET Gate Drive Circuits", "text": "The main purpose of this paper is to demonstrate a systematic approach to design high performance gate drive circuits for high speed switching applications. It is an informative collection of topics offering \u201cone-stop shopping\u201d to solve the most common design challenges. Thus it should be of interest to power electronics engineers at all levels of experience. The most popular circuit solutions and their performance are analyzed, including the effect of parasitic components, transient and extreme operating conditions. The discussion builds from simple to more complex problems starting with an overview of MOSFET technology and switching operation. 
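For orientation, the dominant loss terms such design work revolves around can be estimated with standard first-order formulas; the sketch below uses illustrative datasheet-style numbers (assumptions, not values from either paper) and ignores the nonlinear-capacitance and ringing effects that the analytical model above accounts for.

```python
# Back-of-envelope MOSFET loss budget using standard first-order formulas
# (conduction, linearized V/I switching overlap, gate charge, Coss); all
# parameter values are illustrative, not taken from the papers.
def mosfet_losses(Vin, Iload, fsw, Rds_on, tr, tf, Qg, Vgs, Coss):
    P_cond = Iload**2 * Rds_on                     # on-state conduction
    # (scale P_cond by the duty cycle when the switch is not always on)
    P_sw   = 0.5 * Vin * Iload * (tr + tf) * fsw   # turn-on/off V/I overlap
    P_gate = Qg * Vgs * fsw                        # dissipated in drive path
    P_coss = 0.5 * Coss * Vin**2 * fsw             # output-capacitance loss
    return P_cond, P_sw, P_gate, P_coss

# Hypothetical 12 V-input, 20 A, 1 MHz VRM-style operating point.
losses = mosfet_losses(12, 20, 1e6, 5e-3, 10e-9, 10e-9, 30e-9, 5, 1e-9)
print("cond/sw/gate/coss [W]:", [round(p, 2) for p in losses])
# -> cond/sw/gate/coss [W]: [2.0, 2.4, 0.15, 0.07]
```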
Design procedures for ground-referenced and high-side gate drive circuits, as well as AC-coupled and transformer-isolated solutions, are described in great detail. A special chapter deals with the gate drive requirements of the MOSFETs in synchronous rectifier applications. Several step-by-step numerical design examples complement the paper."} {"_id": "aaa376cbd21095763093b6a629b41443ed3c0ed8", "title": "A High Efficiency Synchronous Buck VRM with Current Source Gate Driver", "text": "In this paper, a new current source gate drive circuit is proposed for high efficiency synchronous buck VRMs. The proposed circuit achieves quick turn-on and turn-off transition times to reduce switching loss and conduction loss in power MOSFETs. The driver circuit consists of two sets of four control switches and two very small inductors (typically 50 nH-300 nH each at 1 MHz). It drives both the control MOSFET and the synchronous MOSFET in synchronous buck VRMs. An analysis, design procedure, optimization procedure and experimental results are presented for the proposed circuit. Experimental results demonstrate an efficiency of 86.6% at 15 A load and 81.9% at 30 A load for 12 V input and 1.3 V output at 1 MHz."} {"_id": "b51134239fb5d52ca70c5f3aadf84d3ee62ee2d1", "title": "Tapped-inductor buck converter for high-step-down DC-DC conversion", "text": "The narrow duty cycle in the buck converter limits its application for high-step-down dc-dc conversion. With a simple structure, the tapped-inductor buck converter shows promise for extending the duty cycle. However, the leakage inductance causes a huge turn-off voltage spike across the top switch. Also, the gate drive for the top switch is not simple due to its floating source connection. This paper solves all these problems by modifying the tapped-inductor structure. A simple lossless clamp circuit can effectively clamp the switch turn-off voltage spike and totally recover the leakage energy. Experimental results for 12V-to-1.5V and 48V-to-6V dc-dc conversions show significant improvements in efficiency."} {"_id": "44c18fc9582862185a96f7a79f63a788abc856bc", "title": "Robust and fast visual tracking via spatial kernel phase correlation filter", "text": "In this paper, we present a novel robust and fast object tracker called the spatial kernel phase correlation based tracker (SPC). Compared with classical correlation tracking, which uses the full spectrum (both phase and magnitude) in the frequency domain, our SPC tracker adopts only the phase spectrum, using a phase correlation filter to estimate the object's translation. Thanks to the circulant structure and the kernel trick, we can implement dense sampling in order to train a high-quality phase correlation filter. Meanwhile, SPC learns the object's spatial context model by using a new spatial response distribution, achieving superior performance. Given all these elaborate configurations, SPC is more robust to noise and clutter, and achieves more competitive performance in visual tracking. The framework of SPC can be briefly summarized as follows: firstly, the phase correlation filter is trained with all subwindows and convolved with a new image patch; then, the object's translation is calculated by maximizing the spatial response; finally, to adapt to object changes, the phase correlation filter is updated with reliable image patches. Tracking performance is evaluated by the Peak-to-Sidelobe Ratio (PSR), aiming to resolve the drifting problem by adaptive model updating. 
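The phase-correlation core that such a tracker builds on is compact enough to sketch: translation is recovered from the peak of the inverse FFT of the normalized cross-power spectrum, and a PSR-style score measures peak sharpness. The epsilon regularization, and the fact that sidelobe statistics here use the whole map rather than masking a window around the peak, are simplifying assumptions rather than details from the paper.

```python
# Minimal NumPy sketch of phase correlation plus a PSR-style confidence.
import numpy as np

def phase_correlate(patch_a, patch_b, eps=1e-8):
    Fa, Fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
    R = Fa * np.conj(Fb)
    R /= np.abs(R) + eps                 # discard magnitude, keep phase only
    resp = np.real(np.fft.ifft2(R))      # spatial response map
    dy, dx = np.unravel_index(resp.argmax(), resp.shape)
    # Peak-to-Sidelobe Ratio: peak vs. statistics of the rest of the map.
    side = np.delete(resp.ravel(), resp.argmax())
    psr = (resp.max() - side.mean()) / (side.std() + eps)
    return (dy, dx), psr                 # shifts are modulo the patch size
```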
Owing to the Fast Fourier Transform (FFT), the proposed tracker can track the object at about 50 frames/s. Numerical experiments demonstrate that the proposed algorithm performs favorably against several state-of-the-art trackers in speed, accuracy and robustness."} {"_id": "66a7cfaca67cc69b6b08397a884e10ff374d710c", "title": "Menu-Match: Restaurant-Specific Food Logging from Images", "text": "Logging food and calorie intake has been shown to facilitate weight management. Unfortunately, current food logging methods are time-consuming and cumbersome, which limits their effectiveness. To address this limitation, we present an automated computer vision system for logging food and calorie intake using images. We focus on the \"restaurant\" scenario, which is often a challenging aspect of diet management. We introduce a key insight that addresses this problem specifically: restaurant plates are often both nutritionally and visually consistent across many servings. This insight provides a path to robust calorie estimation from a single RGB photograph: using a database of known food items together with restaurant-specific classifiers, calorie estimation can be achieved through identification followed by calorie lookup. As demonstrated on a challenging Menu-Match dataset and an existing third party dataset, our approach outperforms previous computer vision methods and a commercial calorie estimation app. Our Menu-Match dataset of realistic restaurant meals is made publicly available."} {"_id": "a31e3b340f448fe0a276b659a951e39160a350dd", "title": "Modelling User Satisfaction with an Employee Portal", "text": "User satisfaction with general information system (IS) and certain types of information technology (IT) applications has been thoroughly studied in IS research. With the widespread and increasing use of portal technology, however, there is a need to conduct a user satisfaction study on portal use - in particular, the business-to-employee (b2e) portal. In this paper, we propose a conceptual model for determining b2e portal user satisfaction, which has been derived from an extensive literature review of user satisfaction scales and the b2e portal. Nine dimensions of b2e portal user satisfaction are identified and modeled: information content, ease of use, convenience of access, timeliness, efficiency, security, confidentiality, communication, and layout."} {"_id": "7bb529166fac40451bfe0f52f31807231d4ebc8d", "title": "Indoor smartphone localization via fingerprint crowdsourcing: challenges and approaches", "text": "Nowadays, smartphones have become indispensable to everyone, with more and more built-in location-based applications to enrich our daily life. In the last decade, fingerprinting based on RSS has become a research focus in indoor localization, due to its minimal hardware requirements and satisfactory positioning accuracy. However, its time-consuming and labor-intensive site survey is a big hurdle for practical deployments. Fingerprint crowdsourcing has recently been promoted to relieve the burden of site survey by allowing common users to contribute to fingerprint collection in a participatory sensing manner. Despite its promise, new challenges arise in putting fingerprint crowdsourcing into practice. 
This article first identifies two main challenging issues, fingerprint annotation and device diversity, and then reviews the state of the art of fingerprint crowdsourcing-based indoor localization systems, comparing their approaches to cope with the two challenges. We then propose a new indoor subarea localization scheme via fingerprint crowdsourcing, clustering, and matching, which first constructs subarea fingerprints from crowdsourced RSS measurements and relates them to indoor layouts. We also propose a new online localization algorithm to deal with the device diversity issue. Our experimental results show that in a typical indoor scenario, the proposed scheme can achieve a 95 percent hit rate to correctly locate a smartphone in its subarea."} {"_id": "2e054a07a2731da83081c7069f0950bb07ee7490", "title": "Optimizing Soft Real-Time Scheduling Performance for Virtual Machines with SRT-Xen", "text": "Multimedia applications are an important part of today's Internet. However, most current virtualization solutions, including Xen, lack adequate support for soft real-time tasks. Soft real-time applications, e.g. media workloads, are impeded by components of virtualization, such as the increase of scheduling latency. This paper focuses on improving the scheduling scheme to support soft real-time workloads in virtualization systems. In this paper, we present an enhanced scheduler, SRT-Xen. SRT-Xen can improve the soft real-time domain's performance compared with Xen's existing schedulers. It focuses not only on bringing in a new real-time-friendly scheduling framework with corresponding strategies but also on improving the management of the virtual CPUs' queuing in order to implement a fair scheduling mechanism for both real-time and non-real-time tasks. Finally, we use PESQ (Perceptual Evaluation of Speech Quality) and other benchmarks to evaluate and compare SRT-Xen with some other works. The results show that SRT-Xen supports soft real-time domains well without penalizing non-real-time ones."} {"_id": "a39f15d74a578692e65050381d318fecac27e2a4", "title": "Scalable Edge Computing for Low Latency Data Dissemination in Topic-Based Publish/Subscribe", "text": "Advances in Internet of Things (IoT) give rise to a variety of latency-sensitive, closed-loop applications that reside at the edge. These applications often involve a large number of sensors that generate volumes of data, which must be processed and disseminated in real time to potentially a large number of entities for actuation, thereby forming a closed-loop, publish-process-subscribe system. To meet the response time requirements of such applications, this paper presents techniques to realize a scalable, fog/edge-based broker architecture that balances data publication and processing loads for topic-based, publish-process-subscribe systems operating at the edge, and assures the Quality-of-Service (QoS), specified as the 90th percentile latency, on a per-topic basis. The key contributions include: (a) a sensitivity analysis to understand the impact of features such as publishing rate, number of subscribers, per-sample processing interval and background load on a topic's performance; (b) a latency prediction model for a set of co-located topics, which is then used for the latency-aware placement of topics on brokers; and (c) an optimization problem formulation for k-topic co-location to minimize the number of brokers while meeting each topic's QoS requirement. Here, k denotes the maximum number of topics that can be placed on a broker. 
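Ahead of the hardness result and heuristics summarized next, one plausible instance of this kind of latency-aware placement is a first-fit pass over topics, sketched below with a stubbed latency predictor. The function names, the ordering policy, and the predictor interface are all assumptions; the paper's own three heuristics are not reproduced here.

```python
# Hypothetical first-fit heuristic for k-topic co-location: place each topic
# on the first broker where a latency model predicts every co-located topic
# still meets its 90th-percentile QoS target; otherwise open a new broker.
def first_fit(topics, k, predict_p90, qos):
    """topics: ids (e.g. sorted by decreasing load);
    predict_p90(group, t) -> predicted p90 latency of topic t in that group;
    qos: per-topic latency targets."""
    brokers = []                                   # each broker = topic list
    for t in topics:
        for group in brokers:
            trial = group + [t]
            if len(trial) <= k and all(predict_p90(trial, u) <= qos[u]
                                       for u in trial):
                group.append(t)                    # co-locate on this broker
                break
        else:
            brokers.append([t])                    # open a new broker
    return brokers
```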
We show that the problem is NP-hard for k >= 3 and present three load balancing heuristics. Empirical results are presented to validate the latency prediction model and to evaluate the performance of the proposed heuristics."} {"_id": "c97774191be232678a45d343a25fcc0c96c065e7", "title": "Co-Training of Audio and Video Representations from Self-Supervised Temporal Synchronization", "text": "There is a natural correlation between the visual and auditive elements of a video. In this work, we use this correlation in order to learn strong and general features via cross-modal self-supervision with carefully chosen neural network architectures and calibrated curriculum learning. We suggest that this type of training is an effective way of pretraining models for further pursuits in video understanding, as they achieve an average 14.8% improvement over models trained from scratch. Furthermore, we demonstrate that these general features can be used for audio classification and perform on par with state-of-the-art results. Lastly, our work shows that using cross-modal self-supervision for pretraining is a good starting point for the development of multi-sensory models."} {"_id": "4b733a188198dbff57cb8bd1ec996044fe272ce5", "title": "Machine Learning in Genomic Medicine: A Review of Computational Problems and Data Sets", "text": "In this paper, we provide an introduction to machine learning tasks that address important problems in genomic medicine. One of the goals of genomic medicine is to determine how variations in the DNA of individuals can affect the risk of different diseases, and to find causal explanations so that targeted therapies can be designed. Here we focus on how machine learning can help to model the relationship between DNA and the quantities of key molecules in the cell, with the premise that these quantities, which we refer to as cell variables, may be associated with disease risks. Modern biology allows high-throughput measurement of many such cell variables, including gene expression, splicing, and proteins binding to nucleic acids, which can all be treated as training targets for predictive models. With the growing availability of large-scale data sets and advanced computational techniques such as deep learning, researchers can help to usher in a new era of effective genomic medicine."} {"_id": "0b0609478cf9882aac77af83e0c3e0b3838edbe8", "title": "Measuring epistemic curiosity and its diversive and specific components.", "text": "A questionnaire constructed to assess epistemic curiosity (EC) and perceptual curiosity (PC) was administered to 739 undergraduates (546 women, 193 men) ranging in age from 18 to 65. The study participants also responded to the trait anxiety, anger, depression, and curiosity scales of the State-Trait Personality Inventory (STPI; Spielberger et al., 1979) and selected subscales of the Sensation Seeking (SSS; Zuckerman, Kolin, Price, & Zoob, 1964) and Novelty Experiencing (NES; Pearson, 1970) scales. Factor analyses of the curiosity items with oblique rotation identified EC and PC factors with clear simple structure. Subsequent analyses of the EC items provided the basis for developing an EC scale, with Diversive and Specific Curiosity subscales. Moderately high correlations of the EC scale and subscales with other measures of curiosity provided strong evidence of convergent validity. 
Divergent validity was demonstrated by minimal correlations with trait anxiety and the sensation-seeking measures, and essentially zero correlations with the STPI trait anger and depression scales. Male participants had significantly higher scores on the EC scale and the NES External Cognition subscale (effect sizes of r = .16 and .21, respectively), indicating that they were more interested than female participants in solving problems and discovering how things work. Male participants also scored significantly higher than female participants on the SSS Thrill-and-Adventure and NES External Sensation subscales (r = .14 and .22, respectively), suggesting that they were more likely to engage in sensation-seeking activities."} {"_id": "9659f4455ec2779400b0ad761b6997c6271da651", "title": "Gait planning and behavior-based control for statically-stable walking robots", "text": "A substantial portion of the Earth is inaccessible to any sort of wheeled mechanism\u2014natural obstacles like large rocks, loose soil, deep ravines, and steep slopes conspire to render rolling locomotion ineffective. Hills, mountains, shores, seabeds, as well as the moon and other planets present similar terrain challenges. In many of these natural terrains, legs are well-suited. They can avoid small obstacles by making discrete contacts and passing up undesirable footholds. Legged mechanisms can climb over obstacles and step across ditches, surmounting terrain discontinuities of body-scale while staying level and stable. To achieve their potential, legged robots must coordinate their leg motions to climb over, step across and walk in natural terrain. These coordinated motions, which support and propel the robot, are called a gait. This thesis develops a new method of gait planning and control that enables statically-stable walking robots to produce a gait that is robust and productive in natural terrain. Independent task-achieving processes, called gait behaviors, establish a nominal gait, adapt it to the terrain, and react to disturbances like bumps and slips. Gait controlled in this way enabled the robot Dante II to walk autonomously in natural terrain, including the volcanic crater of Mount Spurr. This method extends to other walking robots as demonstrated by a generalized hexapod that performs the variety of gaits seen in six-legged insects, as well as aperiodic free gaits. The ability to change gait patterns on-the-fly with continuous, stable motion is a new development that enables robots to behave more like animals in adapting their gait to terrain. Finally, this thesis describes why walking robots need predictive plans as well as reflexive behaviors to walk effectively in the real world. It presents a method of guiding the behavior of a walking robot by planning distinct attributes of the desired gait. This partitioning of gait planning avoids the complexity of high degree-of-freedom motion planning. The ability to plan and foresee changes in gait improves performance while maintaining robust safety and stability."} {"_id": "3f4d50cfb6cf6ad39600998ed295494a1f4f156b", "title": "Developing a Taxonomy of Dark Triad Triggers at Work \u2013 A Grounded Theory Study Protocol", "text": "In recent years, research and corporate scandals have evidenced the destructive effects of the dark triad at work, consisting of narcissism (extreme self-centeredness), psychopathy (lack of empathy and remorse) and Machiavellianism (a sense of duplicity and manipulativeness). 
The dark triad dimensions have typically been conceptualized as stable personality traits, ignoring the accumulating evidence that momentary personality expressions - personality states - may change due to the characteristics of the situation. The present research protocol describes a qualitative study that aims to identify triggers of dark triad states at work by following a grounded theory approach using semi-structured interviews. By building a comprehensive categorization of dark triad triggers at work, scholars may study these triggers in a parsimonious and structured way, and organizations may derive more effective interventions to buffer or prevent the detrimental effects of dark personality at work."} {"_id": "1e18291a151806c75518d7466d43264cac96864b", "title": "Web Engineering: A New Discipline for Development of Web-Based Systems", "text": "In most cases, development of Web-based systems has been ad hoc, lacking a systematic approach and quality control and assurance procedures. Hence, there is now legitimate and growing concern about the manner in which Web-based systems are developed and their long-term quality and integrity. Web Engineering, an emerging new discipline, advocates a process and a systematic approach to development of high-quality Web-based systems. It promotes the establishment and use of sound scientific, engineering and management principles, and disciplined and systematic approaches to the development, deployment and maintenance of Web-based systems. This paper gives an introductory overview on Web Engineering. It presents the principles and roles of Web Engineering, assesses the similarities and differences between development of traditional software and Web-based systems, identifies key Web engineering activities and reviews some of the ongoing work in this area. 
It also highlights the prospects of Web engineering and the areas that need further study."} {"_id": "6f9f4312876fb26175837c829ff5eb0b4fab6089", "title": "Negotiation and Cooperation in Multi-Agent Environments", "text": "Automated intelligent agents inhabiting a shared environment must coordinate their activities. Cooperation, not merely coordination, may improve the performance of the individual agents or the overall behavior of the system they form. Research in Distributed Artificial Intelligence (DAI) addresses the problem of designing automated intelligent systems which interact effectively. DAI is not the only field to take on the challenge of understanding cooperation and coordination. There are a variety of other multi-entity environments in which the entities coordinate their activity and cooperate. Among them are groups of people, animals, particles, and computers. We argue that in order to address the challenge of building coordinated and collaborative intelligent agents, it is beneficial to combine AI techniques with methods and techniques from a range of multi-entity fields such as game theory, operations research, physics and philosophy. To support this claim, we describe some of our projects where we have successfully taken an interdisciplinary approach. We demonstrate the benefits of applying multi-entity methodologies and show the adaptations, modifications and extensions necessary for solving the DAI problems. This is an extended version of a lecture presented upon receipt of the Computers and Thought Award at the International Joint Conference on Artificial Intelligence in Montreal, Canada."} {"_id": "aef8cbfbf08ed04ba242d15f515d610d315f9904", "title": "Wikipedia Chemical Structure Explorer: substructure and similarity searching of molecules from Wikipedia", "text": "BACKGROUND\nWikipedia, the world's largest and most popular encyclopedia, is an indispensable source of chemistry information. Among other content, it contains entries for over 15,000 chemicals, including metabolites, drugs, agrochemicals and industrial chemicals. To provide easy access to this wealth of information, we decided to develop a substructure and similarity search tool for chemical structures referenced in Wikipedia.\n\n\nRESULTS\nWe extracted chemical structures from entries in Wikipedia and implemented a web system allowing structure and similarity searching on these data. The entire search and visualization system is written in JavaScript and therefore can run locally within a web page and does not require a central server. The Wikipedia Chemical Structure Explorer is accessible on-line at www.cheminfo.org/wikipedia and is also available as an open-source project from GitHub for local installation.\n\n\nCONCLUSIONS\nThe web-based Wikipedia Chemical Structure Explorer provides a useful resource for research as well as for chemical education, enabling both researchers and students to search chemistry content easily and to identify relevant information in Wikipedia. The tool can also help to improve the quality of chemical entries in Wikipedia by providing potential contributors with a regularly updated list of entries with problematic structures. Last but not least, this search system is a nice example of how modern web technology can be applied in the field of cheminformatics. 
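The substructure-search operation itself is easy to illustrate. The site runs in JavaScript, but an equivalent check in Python with RDKit looks like the sketch below; the example molecules are arbitrary, not entries from the Wikipedia set.

```python
# Minimal RDKit sketch of a substructure search of the kind the service
# performs (illustrative molecules only, not the actual Wikipedia data).
from rdkit import Chem

mols = {name: Chem.MolFromSmiles(smi) for name, smi in
        {"aspirin": "CC(=O)Oc1ccccc1C(=O)O", "benzene": "c1ccccc1"}.items()}
query = Chem.MolFromSmarts("c1ccccc1")        # benzene ring as the query

hits = [name for name, mol in mols.items() if mol.HasSubstructMatch(query)]
print(hits)                                   # -> ['aspirin', 'benzene']
```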
Graphical abstract: Wikipedia Chemical Structure Explorer allows substructure and similarity searches on molecules referenced in Wikipedia."} {"_id": "75ff20c21d4ab56917286b429643db4c216f51b5", "title": "Word Embeddings and Their Use In Sentence Classification Tasks", "text": "This paper has two parts. In the first part we discuss word embeddings. We discuss the need for them, some of the methods to create them, and some of their interesting properties. We also compare them to image embeddings and see how word embedding and image embedding can be combined to perform different tasks. In the second part we implement a convolutional neural network trained on top of pre-trained word vectors. The network is used for several sentence-level classification tasks, and achieves state-of-the-art (or comparable) results, demonstrating the great power of pre-trained word embeddings over random ones."} {"_id": "c4f78541eff05e539927d17ece67f239603b18a1", "title": "A critical review of blockchain and its current applications", "text": "Blockchain technology has been known as a digital currency platform since the emergence of Bitcoin, the first and the largest of the cryptocurrencies. Hitherto, it is used for the decentralization of markets more generally, not exclusively for the decentralization of money and payments. The decentralized transaction ledger of blockchain could be employed to register, confirm, and send all kinds of contracts to other parties in the network. In this paper, we thoroughly review state-of-the-art blockchain-related applications that have emerged in the literature. A number of published works were carefully included based on their contributions to the blockchain's body of knowledge. Several remarks are explored and discussed in the last section of the paper."} {"_id": "cfb06ca51d03b7e625678d97d4661db69e2ee534", "title": "Hidden Voice Commands", "text": "Voice interfaces are becoming more ubiquitous and are now the primary input method for many devices. We explore in this paper how they can be attacked with hidden voice commands that are unintelligible to human listeners but which are interpreted as commands by devices. We evaluate these attacks under two different threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle. We show that the adversary can produce difficult-to-understand commands that are effective against existing systems in the black-box model. Under the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that we demonstrate through user testing are not understandable by humans. We then evaluate several defenses, including notifying the user when a voice command is accepted; a verbal challenge-response protocol; and a machine learning approach that can detect our attacks with 99.8% accuracy."} {"_id": "65658c28dfdec0268b4f46c16bb1973581b2ad95", "title": "Supply Chain Sourcing Under Asymmetric Information", "text": "We study a supply chain with two suppliers competing over a contract to supply components to a manufacturer. One of the suppliers is a big company for whom the manufacturer\u2019s business constitutes a small part of his business. The other supplier is a small company for whom the manufacturer\u2019s business constitutes a large portion of his business. We analyze the problem from the perspective of the big supplier and address the following questions: What is the optimal contracting strategy that the big supplier should follow?
How does the information about the small supplier\u2019s production cost affect the profits and contracting decision? How does the existence of the small supplier affect profits? By studying various information scenarios regarding the small supplier\u2019s and the manufacturer\u2019s production cost, we show, for example, that the big supplier benefits when the small supplier keeps its production cost private. We quantify the value of information for the big supplier and the manufacturer. We also quantify the cost (value) of the alternative-sourcing option for the big supplier (the manufacturer). We determine when an alternative-sourcing option has more impact on profits than information. We conclude with extensions and numerical examples to shed light on how system parameters affect this supply chain."} {"_id": "f183f06d55a149a74e0d10b8dd8253383f7c9c7b", "title": "A Dependency Graph Approach for Fault Detection and Localization Towards Secure Smart Grid", "text": "Fault diagnosis in power grids is known to be challenging, due to the massive scale and spatial coupling therein. In this study, we explore multiscale network inference for fault detection and localization. Specifically, we model the phasor angles across the buses as a Markov random field (MRF), where the conditional correlation coefficients of the MRF are quantified in terms of the physical parameters of power systems. Based on the MRF model, we then study decentralized network inference for fault diagnosis, through change detection and localization in the conditional correlation matrix of the MRF. Particularly, based on the hierarchical topology of practical power systems, we devise a multiscale network inference algorithm that carries out fault detection and localization in a decentralized manner. Simulation results are used to demonstrate the effectiveness of the proposed approach."} {"_id": "0848827ba30956e29d7d126d0a05e51660094ebe", "title": "Secure and reconfigurable network design for critical information dissemination in the Internet of battlefield things (IoBT)", "text": "The Internet of things (IoT) is revolutionizing the management and control of automated systems leading to a paradigm shift in areas such as smart homes, smart cities, health care, transportation, etc. The IoT technology is also envisioned to play an important role in improving the effectiveness of military operations in battlefields. The interconnection of combat equipment and other battlefield resources for coordinated automated decisions is referred to as the Internet of battlefield things (IoBT). IoBT networks are significantly different from traditional IoT networks due to the battlefield specific challenges such as the absence of communication infrastructure, and the susceptibility of devices to cyber and physical attacks. The combat efficiency and coordinated decision-making in war scenarios depends highly on real-time data collection, which in turn relies on the connectivity of the network and the information dissemination in the presence of adversaries. This work aims to build the theoretical foundations of designing secure and reconfigurable IoBT networks. 
Leveraging the theories of stochastic geometry and mathematical epidemiology, we develop an integrated framework to study the communication of mission-critical data among different types of network devices and consequently design the network in a cost-effective manner."} {"_id": "82bd131de322e70b8211b18718f58a4c3a5e3ebe", "title": "Multi-task learning of dish detection and calorie estimation", "text": "In recent years, a rise in healthy eating has led to various food management applications, which use image recognition to automatically record meals. However, most image recognition functions in existing applications are not directly useful for multiple-dish food photos and cannot automatically estimate food calories. Meanwhile, methodologies on image recognition have advanced greatly because of the advent of the Convolutional Neural Network, which has improved accuracies of various kinds of image recognition tasks, such as classification and object detection. Therefore, we propose CNN-based food calorie estimation for multiple-dish food photos. Our method estimates food calories while simultaneously detecting dishes by multi-task learning of food calorie estimation and food dish detection with a single CNN. It is expected to achieve high speed and save memory by simultaneous estimation in a single network. Currently, there is no dataset of multiple-dish food photos annotated with both bounding boxes and food calories, so in this work, we use two types of datasets alternately for training a single CNN. For the two types of datasets, we use multiple-dish food photos with bounding boxes attached and single-dish food photos with food calories. Our results show that our multi-task method achieved higher speed and a smaller network size than a sequential model of food detection and food calorie estimation."} {"_id": "884b6944cdf806d11147a8254f16180050133377", "title": "Chernoff-Hoeffding Inequality and Applications", "text": "When dealing with modern big data sets, a very common theme is reducing the set through a random process. These generally work by making \u201cmany simple estimates\u201d of the full data set, and then judging them as a whole. Perhaps magically, these \u201cmany simple estimates\u201d can provide a very accurate and small representation of the large data set. The key tool in showing how many of these simple estimates are needed for a fixed accuracy trade-off is the Chernoff-Hoeffding inequality [2, 6]. This document provides a simple form of this bound, and two examples of its use."}
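(Illustrative aside for the "Chernoff-Hoeffding Inequality and Applications" record above; not from the cited document. A minimal Python sketch of the bound's typical use: inverting the two-sided Hoeffding bound Pr[|mean - mu| >= eps] <= 2*exp(-2*n*eps^2) for [0,1]-bounded samples to get the number of "simple estimates" needed for a given accuracy/confidence trade-off. The function name is my own.)

```python
import math
import random

def hoeffding_sample_count(eps: float, delta: float) -> int:
    """Samples needed so the empirical mean of [0,1]-bounded draws is within
    eps of the true mean with probability >= 1 - delta (Hoeffding bound)."""
    # Solve 2 * exp(-2 * n * eps**2) <= delta for n.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

if __name__ == "__main__":
    eps, delta = 0.01, 0.001
    n = hoeffding_sample_count(eps, delta)
    # "Many simple estimates": estimate the mean of a Bernoulli(0.3) stream.
    est = sum(random.random() < 0.3 for _ in range(n)) / n
    print(f"n = {n}, estimate = {est:.4f} (true mean 0.3)")
```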
{"_id": "0eb5b71df161fcf77024bdb4608337eedc874b98", "title": "The Unbearable Automaticity of Being", "text": "What was noted by E. J. Langer (1978) remains true today: that much of contemporary psychological research is based on the assumption that people are consciously and systematically processing incoming information in order to construe and interpret their world and to plan and engage in courses of action. As did E. J. Langer, the authors question this assumption. First, they review evidence that the ability to exercise such conscious, intentional control is actually quite limited, so that most of moment-to-moment psychological life must occur through nonconscious means if it is to occur at all. The authors then describe the different possible mechanisms that produce automatic, environmental control over these various phenomena and review evidence establishing both the existence of these mechanisms as well as their consequences for judgments, emotions, and behavior. Three major forms of automatic self-regulation are identified: an automatic effect of perception on action, automatic goal pursuit, and a continual automatic evaluation of one's experience. From the accumulating evidence, the authors conclude that these various nonconscious mental systems perform the lion's share of the self-regulatory burden, beneficently keeping the individual grounded in his or her current environment."} {"_id": "6ae1dd01e89d54e18bae39039b09c8f57338c4e6", "title": "Intrinsically Motivated Learning of Hierarchical Collections of Skills", "text": "Humans and other animals often engage in activities for their own sakes rather than as steps toward solving practical problems. Psychologists call these intrinsically motivated behaviors. What we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy. At the core of the model are recent theoretical and algorithmic advances in computational reinforcement learning, specifically, new concepts related to skills and new learning algorithms for learning with skill hierarchies."} {"_id": "3021e3f8b35aa34c6c14481e4bd9e0756cf130b3", "title": "An examination of the determinants of customer loyalty in mobile commerce contexts", "text": "While the importance of customer loyalty has been recognized in marketing literature for at least three decades, the development and empirical validation of a customer loyalty model in a mobile commerce (m-commerce) context had not been addressed. The purpose of our study was to develop and validate such a customer loyalty model. Based on IS and marketing literature, a comprehensive set of constructs and hypotheses were compiled with a methodology for testing them. A questionnaire was constructed and data were collected from 255 users of m-commerce systems in Taiwan. Structural modeling techniques were then applied to analyze the data. The results indicated that customer loyalty was affected by perceived value, trust, habit, and customer satisfaction, with customer satisfaction playing a crucial intervening role in the relationship of perceived value and trust to loyalty. Based on the findings, its implications and limitations are discussed."} {"_id": "70598a94915e4931679593db91d792d3d58b5037", "title": "Principles and practice in reporting structural equation analyses.", "text": "Principles for reporting analyses using structural equation modeling are reviewed, with the goal of supplying readers with complete and accurate information. It is recommended that every report give a detailed justification of the model used, along with plausible alternatives and an account of identifiability. Nonnormality and missing data problems should also be addressed. A complete set of parameters and their standard errors is desirable, and it will often be convenient to supply the correlation matrix and discrepancies, as well as goodness-of-fit indices, so that readers can exercise independent critical judgment.
A survey of fairly representative studies compares recent practice with the principles of reporting recommended here."} {"_id": "f38e9c51e2736b382938131f50757be414af4e35", "title": "154-2008: Understanding Your Customer: Segmentation Techniques for Gaining Customer Insight and Predicting Risk in the Telecom Industry", "text": "The explosion of customer data in the last twenty years has increased the need for data mining aimed at customer relationship management (CRM) and understanding the customer. It is well known that the telecom sector consists of customers with a wide array of customer behaviors. These customers pose different risks, making it imperative to implement different treatment strategies to maximize shareholder profit and improve revenue generation. Segmentation is the process of developing meaningful customer groups that are similar based on individual account characteristics and behaviors. The goal of segmentation is to know your customer better and to apply that knowledge to increase profitability, reduce operational cost, and enhance customer service. Segmentation can provide a multidimensional view of the customer for better treatment targeting. An improved understanding of customer risk and behaviors enables more effective portfolio management and the proactive application of targeted treatments to lift account profitability. In this paper we outline several segmentation techniques using SAS Enterprise Miner\u2122. INTRODUCTION Rapid advances in computer technology and an explosion of data collection systems over the last thirty years make it more critical for businesses to understand their customers. Companies employing data-driven analytical strategies often enjoy a competitive advantage. Many organizations across several industries widely employ analytical models to gain a better understanding of their customers. They use these models to predict a wide array of events such as behavioral risk, fraud, or the likelihood of response. Regardless of the predictive variable, a single model may not perform optimally across the target population because there may be distinct segments with different characteristics inherent in the population. Segmentation may be done judgmentally based on experience, but such segmentation schema is limited to the use of only a few variables at best. True multivariate segmentation with the goal of identifying the different segments in your population is best achieved through the use of cluster analysis. Clustering and profiling of the customer base can answer the following questions: \u2666 Who are my customers? \u2666 How profitable are my customers? \u2666 Who are my least profitable customers? \u2666 Why are my customers leaving? \u2666 What do my best customers look like? This paper discusses the use of SAS Enterprise Miner to segment a population of customers using cluster analysis and decision trees. Focus is placed on the methodology rather than the results to ensure the integrity and confidentiality of customer data. Other statistical strategies are presented in this paper which could be employed in the pursuit of further customer intelligence. It has been said that statistical cluster analysis is as much art as it is science because it is performed without the benefit of well-established statistical criteria."}
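(Illustrative aside for the customer-segmentation record above; the paper works in SAS Enterprise Miner, so this is only a loose Python/scikit-learn analogue of multivariate segmentation via cluster analysis. The feature names and synthetic data are invented.)

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical behavioral features per customer account:
# [monthly_spend, calls_per_day, days_delinquent, tenure_months]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4)) * [50, 3, 10, 24] + [60, 4, 5, 36]

# Standardize so no single feature dominates the distance metric,
# then cluster into k multivariate behavioral segments.
Xz = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Xz)

# Profile each segment by its mean feature values (in original units).
for k in range(5):
    members = kmeans.labels_ == k
    print(f"segment {k}: size={members.sum()}, "
          f"mean={np.round(X[members].mean(axis=0), 1)}")
```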
{"_id": "32e876a9420f7c58a3c55ec703416c7f57a54f4c", "title": "Causality: Models, Reasoning, and Inference", "text": "For most researchers in the ever-growing fields of probabilistic graphical models, belief networks, causal influence and probabilistic inference, ACM Turing award winner Dr. Pearl and his seminal papers on causality are well-known and acknowledged. Representation and determination of Causality, the relationship between an event (the cause) and a second event (the effect), where the second event is understood as a consequence of the first, is a challenging problem. Over the years, Dr. Pearl has written significantly on both the Art and Science of Cause and Effect. In this book on "Causality: Models, Reasoning and Inference", the inventor of Bayesian belief networks discusses and elaborates on his earlier work, including but not limited to Reasoning with Cause and Effect, Causal inference in statistics, Simpson's paradox, Causal Diagrams for Empirical Research, Robustness of Causal Claims, Causes and explanations, and Probabilities of Causation: Bounds and Identification."} {"_id": "a04145f1ca06c61f5985ab22a2346b788f343392", "title": "Information Systems Success: The Quest for the Dependent Variable", "text": "A large number of studies have been conducted during the last decade and a half attempting to identify those factors that contribute to information systems success. However, the dependent variable in these studies\u2014I/S success\u2014has been an elusive one to define. Different researchers have addressed different aspects of success, making comparisons difficult and the prospect of building a cumulative tradition for I/S research similarly elusive. To organize this diverse research, as well as to present a more integrated view of the concept of I/S success, a comprehensive taxonomy is introduced. This taxonomy posits six major dimensions or categories of I/S success\u2014SYSTEM QUALITY, INFORMATION QUALITY, USE, USER SATISFACTION, INDIVIDUAL IMPACT, and ORGANIZATIONAL IMPACT. Using these dimensions, both conceptual and empirical studies are then reviewed (a total of 180 articles are cited) and organized according to the dimensions of the taxonomy. Finally, the many aspects of I/S success are drawn together into a descriptive model and its implications for future I/S research are discussed."} {"_id": "989bbec65334a18201ec19b814b6ba84fbce8790", "title": "Just Feels Good: Customers\u2019 Affective Response to Touch and Its Influence on Persuasion", "text": "Prior research has assumed that touch has a persuasive effect only if it provides attribute or structural information about a product. Under this view, the role of touch as a persuasive tool is limited. The main purpose of this research is to investigate the persuasive influence of touch as an affective tool in the absence of useful product-related information. The authors find that for people who are motivated to touch because it is fun or interesting, a communication that incorporates touch leads to increased affective response and increased persuasion, particularly when the touch provides neutral or positive sensory feedback. People who are not motivated to touch for fun will also be persuaded by a communication that incorporates touch when they are able to make sense of how the touch is related to the message.
The authors explore the effectiveness of different types of touch in generating an affective response, and they replicate the effects on attitudes and behavior in a real-world setting. This research suggests that the marketing implications of touch are more substantial than previously believed. The authors present research implications for direct marketing, product packaging, point-of-purchase displays, and print advertising."} {"_id": "8d79900092c807aa563ad8471908114166225e8d", "title": "Portfolio Choice and the Bayesian Kelly Criterion", "text": "We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: The optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limiting diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover, they also allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information and the financial cost of learning in the Bayesian problem."} {"_id": "7a7db2ccf49a909921711d9e88dbfac66c776167", "title": "Vibration parameter estimation using FMCW radar", "text": "Vibration sensing is essential in many applications. Traditional vibration sensors are contact-based. With the advance of low-cost and highly integrated CMOS radars, another class of non-contact vibration sensors is emerging. In this paper, we present a detailed analysis of obtaining vibration parameters using frequency modulated continuous wave (FMCW) radars. We establish the Cram\u00e9r-Rao lower bounds (CRLB) of the parameter estimation problem and propose an estimation algorithm that achieves the bounds in simulations. These analyses show that vibration sensing using FMCW radars can easily achieve sub-Hertz frequency accuracy and micrometer-level amplitude accuracy."}
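(Illustrative aside for the Bayesian Kelly record above; a toy sketch of a state-dependent betting rule, not the paper's derivation: with a Beta posterior over the success probability of an even-money gamble, bet the Kelly fraction 2p-1 evaluated at the posterior mean, clipped at zero. All names and parameters are my own.)

```python
import random

def bayes_kelly_fraction(alpha: float, beta: float) -> float:
    """Fraction of wealth to bet on an even-money gamble when the success
    probability has a Beta(alpha, beta) posterior: classic Kelly gives
    f = 2p - 1 for p > 1/2, evaluated here at the posterior mean."""
    p_hat = alpha / (alpha + beta)
    return max(0.0, 2.0 * p_hat - 1.0)

def simulate(p_true=0.55, rounds=1000, wealth=1.0):
    a, b = 1.0, 1.0                  # uniform Beta(1,1) prior on p
    for _ in range(rounds):
        f = bayes_kelly_fraction(a, b)
        win = random.random() < p_true
        wealth *= 1.0 + f if win else 1.0 - f
        a, b = (a + 1, b) if win else (a, b + 1)  # conjugate posterior update
    return wealth

random.seed(0)
print(f"final wealth: {simulate():.2f}")
```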
{"_id": "9e832cceb07aa740c720fbdfae2f6b17c286bab8", "title": "Using Multi-Locators to Increase the Robustness of Web Test Cases", "text": "The main reason for the fragility of web test cases is the inability of web element locators to work correctly when the web page DOM evolves. Web element locators are used in web test cases to identify all the GUI objects to operate upon and eventually to retrieve web page content that is compared against some oracle in order to decide whether the test case has passed or not. Hence, web element locators play an extremely important role in web testing, and when a web element locator gets broken, developers have to spend substantial time and effort to repair it. While algorithms exist to produce robust web element locators to be used in web test scripts, no algorithm is perfect and different algorithms are exposed to different fragilities when the software evolves. Based on this observation, we propose a new type of locator, named multi-locator, which selects the best locator among a candidate set of locators produced by different algorithms. Such selection is based on a voting procedure that assigns different voting weights to different locator generation algorithms. Experimental results obtained on six web applications, for which a subsequent release was available, show that the multi-locator is more robust than the single locators (about -30% of broken locators w.r.t. the most robust kind of single locator) and that the execution overhead required by the multiple queries done with different locators is negligible (2-3% at most)."} {"_id": "e64d199cb8d4e053ffb0e28475df2fda140ba5de", "title": "lncDML: Identification of long non-coding RNAs by Deep Metric Learning", "text": "The next-generation sequencing technologies provide a great deal of transcripts for bioinformatics research. Specifically, because of the regulation of long non-coding RNAs (lncRNAs) in various cellular processes, the research on lncRNAs is in full swing. Solving the lncRNA identification problem is the basis for in-depth study of their functions. In this study, we present an approach to identify lncRNAs from large-scale transcripts, named lncDML, which is completely different from previous identification methods. In our model, we extract signal-to-noise ratio (SNR) and k-mer features from transcript sequences. Firstly, we just use the SNR to cluster the original dataset into three parts; this already achieves a preliminary identification effect. Then, abandoning traditional feature selection, we directly measure the relationship between each pair of samples by deep metric learning for each part of the data. Finally, a novel classifier based on complex network is applied to achieve the final identification. The experimental results show that lncDML is a very effective method for identifying lncRNAs."}
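(Illustrative aside for the multi-locator record above; a minimal sketch of weighted voting among locator generation algorithms. The generator names, weights, and abstention handling are assumptions, not the paper's calibrated procedure.)

```python
from collections import defaultdict

def pick_element(candidates, weights):
    """candidates: {generator_name: element_id or None} -- the element each
    locator algorithm resolved to on the current DOM; weights: per-generator
    voting weight. Returns the element with the highest total vote, or None."""
    votes = defaultdict(float)
    for gen, element in candidates.items():
        if element is not None:               # broken locators abstain
            votes[element] += weights.get(gen, 1.0)
    return max(votes, key=votes.get) if votes else None

# Hypothetical run: three locator generators disagree after a DOM change.
candidates = {"id_based": "btn_submit", "xpath_abs": None, "css_struct": "btn_submit2"}
weights = {"id_based": 0.5, "xpath_abs": 0.2, "css_struct": 0.3}
print(pick_element(candidates, weights))      # -> "btn_submit"
```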
{"_id": "26f0cb59a35a6c83e0375ab19ea710e741f907ad", "title": "ERP system implementation in SMEs: exploring the influences of the SME context", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."} {"_id": "f063d23d23ac325a0bc3979f037a197948fa04f3", "title": "Job Resources Boost Work Engagement, Particularly When Job Demands Are High", "text": "This study of 805 Finnish teachers working in elementary, secondary, and vocational schools tested 2 interaction hypotheses. On the basis of the job demands\u2013resources model, the authors predicted that job resources act as buffers and diminish the negative relationship between pupil misbehavior and work engagement. In addition, using conservation of resources theory, the authors hypothesized that job resources particularly influence work engagement when teachers are confronted with high levels of pupil misconduct. In line with these hypotheses, moderated structural equation modeling analyses resulted in 14 out of 18 possible 2-way interaction effects. In particular, supervisor support, innovativeness, appreciation, and organizational climate were important job resources that helped teachers cope with demanding interactions with students."} {"_id": "4729943ac1bb2eaf39d3d734d94b21b55f0920ec", "title": "Superior thermal conductivity of single-layer graphene.", "text": "We report the measurement of the thermal conductivity of a suspended single-layer graphene. The room temperature values of the thermal conductivity in the range of approximately (4.84\u00b10.44)\u00d710\u00b3 to (5.30\u00b10.48)\u00d710\u00b3 W/mK were extracted for a single-layer graphene from the dependence of the Raman G peak frequency on the excitation laser power and the independently measured G peak temperature coefficient. The extremely high value of the thermal conductivity suggests that graphene can outperform carbon nanotubes in heat conduction. The superb thermal conduction property of graphene is beneficial for the proposed electronic applications and establishes graphene as an excellent material for thermal management."} {"_id": "f673d75df44a3126898634cb96344d5fd31b3504", "title": "Asymmetric Cryptography for Mobile Devices", "text": "This paper is meant to give the reader a general overview of the application of asymmetric cryptography in communication, particularly in mobile devices. The basic principles of a cryptosystem are addressed, as well as the idea of symmetric and asymmetric cryptography. The functional principles of RSA encryption and the Diffie-Hellman key exchange scheme, as well as the general idea of digital signatures, are briefly described. Furthermore, some challenges and solution approaches for the application of asymmetric encryption in low-power mobile devices are named. In closing, some of the future developments in cryptography are briefly described."}
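(Illustrative aside for the asymmetric-cryptography record above; a textbook Diffie-Hellman exchange at toy scale, not the paper's material. The 32-bit prime is for readability only and offers no security; real deployments use standardized groups with primes of 2048 bits or more.)

```python
import secrets

# Toy public parameters: a small prime modulus and generator, illustration only.
p = 0xFFFFFFFB   # 4294967291, the largest 32-bit prime -- trivially breakable
g = 5

# Each side keeps a random exponent private and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)   # Alice -> Bob
B = pow(g, b, p)   # Bob -> Alice

# Both arrive at the same shared secret without ever transmitting it:
# (g^b)^a = (g^a)^b = g^(ab) mod p.
assert pow(B, a, p) == pow(A, b, p)
print(hex(pow(B, a, p)))
```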
{"_id": "497ce53f8f12f2117d36b5a61b3fc142f0cb05ee", "title": "Interpreting Semantic Relations in Noun Compounds via Verb Semantics", "text": "We propose a novel method for automatically interpreting compound nouns based on a predefined set of semantic relations. First we map verb tokens in sentential contexts to a fixed set of seed verbs using WordNet::Similarity and Moby\u2019s Thesaurus. We then match the sentences with semantic relations based on the semantics of the seed verbs and grammatical roles of the head noun and modifier. Based on the semantics of the matched sentences, we then build a classifier using TiMBL. The performance of our final system at interpreting NCs is 52.6%."} {"_id": "42ce0afbe27913f3da97d4d08730b2fcaf15e18d", "title": "An Empirical Study on Android-Related Vulnerabilities", "text": "Mobile devices are used more and more in everyday life. They are our cameras, wallets, and keys. Basically, they embed most of our private information in our pocket. For this and other reasons, mobile devices, and in particular the software that runs on them, are considered first-class citizens in the software-vulnerabilities landscape. Several studies investigated the software-vulnerabilities phenomenon in the context of mobile apps and, more generally, mobile devices. Most of these studies focused on vulnerabilities that could affect mobile apps, while just a few investigated vulnerabilities affecting the underlying platform on which mobile apps run: the Operating System (OS). Also, these studies have been run on a very limited set of vulnerabilities. In this paper we present the largest study to date investigating Android-related vulnerabilities, with a specific focus on the ones affecting the Android OS. In particular, we (i) define a detailed taxonomy of the types of Android-related vulnerability, (ii) investigate the layers and subsystems of the Android OS affected by vulnerabilities, and (iii) study the survivability of vulnerabilities (i.e., the number of days between the vulnerability introduction and its fix). Our findings could help OS and app developers in focusing their verification & validation activities, and researchers in building vulnerability detection tools tailored for the mobile world."} {"_id": "74289572067a8ba3dbe1abf84d4a352b8bb4740f", "title": "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness", "text": "We introduce a new family of fairness definitions that interpolate between statistical and individual notions of fairness, obtaining some of the best properties of each.
We show that checking whether these notions are satisfied is computationally hard in the worst case, but give practical oracle-efficient algorithms for learning subject to these constraints, and confirm our findings with experiments."} {"_id": "0f161ac594fe478e4192b2c2a8a5fed7d0d1837e", "title": "A Simple Introduction to Maximum Entropy Models for Natural Language Processing", "text": "Many problems in natural language processing can be viewed as linguistic classification problems in which linguistic contexts are used to predict linguistic classes. Maximum entropy models offer a clean way to combine diverse pieces of contextual evidence in order to estimate the probability of a certain linguistic class occurring with a certain linguistic context. This report demonstrates the use of a particular maximum entropy model on an example problem, and then proves some relevant mathematical facts about the model in a simple and accessible manner. This report also describes an existing procedure called Generalized Iterative Scaling, which estimates the parameters of this particular model. The goal of this report is to provide enough detail to re-implement the maximum entropy models described in Ratnaparkhi, Reynar and Ratnaparkhi, and Ratnaparkhi, and also to provide a simple explanation of the maximum entropy formalism. Introduction: Many problems in natural language processing (NLP) can be re-formulated as statistical classification problems, in which the task is to estimate the probability of class a occurring with context b, or p(a, b). Contexts in NLP tasks usually include words, and the exact context depends on the nature of the task; for some tasks, the context b may consist of just a single word, while for others, b may consist of several words and their associated syntactic labels. Large text corpora usually contain some information about the cooccurrence of a's and b's, but never enough to completely specify p(a, b) for all possible (a, b) pairs, since the words in b are typically sparse. The problem is then to find a method for using the sparse evidence about the a's and b's to reliably estimate a probability model p(a, b). Consider the Principle of Maximum Entropy (Jaynes, Good), which states that the correct distribution p(a, b) is that which maximizes entropy, or uncertainty, subject to the constraints which represent evidence, i.e., the facts known to the experimenter. Jaynes discusses its advantages in making inferences on the basis of partial information: we must use that probability distribution which has maximum entropy subject to whatever is known; this is the only unbiased assignment we can make, and to use any other would amount to arbitrary assumption of information which by hypothesis we do not have. More explicitly, if A denotes the set of possible classes and B denotes the set of possible contexts, p should maximize the entropy H(p)."}
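(Illustrative aside for the maximum entropy record above; a tiny conditional maxent model p(a|b) proportional to exp(sum_i lambda_i * f_i(a, b)) with indicator features, trained by plain gradient ascent as a simple stand-in for Generalized Iterative Scaling. The corpus and features are invented.)

```python
import math
from collections import defaultdict

# Tiny corpus of (context word b, class a) pairs, invented for illustration.
data = [("bank", "N"), ("bank", "N"), ("run", "V"), ("run", "N"), ("the", "DET")]
classes = sorted({a for _, a in data})

def features(a, b):
    # Binary indicator features f_i(a, b); here simply (class, word) pairs.
    return [(a, b)]

lam = defaultdict(float)   # one weight lambda_i per feature

def p_cond(b):
    # p(a | b) = exp(sum_i lam_i f_i(a, b)) / Z(b)
    scores = {a: math.exp(sum(lam[f] for f in features(a, b))) for a in classes}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

# Gradient ascent on the log-likelihood: observed minus expected feature
# counts (a simple stand-in for the Generalized Iterative Scaling updates).
for _ in range(200):
    grad = defaultdict(float)
    for b, a in data:
        probs = p_cond(b)
        for f in features(a, b):
            grad[f] += 1.0                    # observed feature count
        for a2 in classes:
            for f in features(a2, b):
                grad[f] -= probs[a2]          # expected feature count
    for f, g in grad.items():
        lam[f] += 0.1 * g

print({a: round(p, 2) for a, p in p_cond("run").items()})
```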
{"_id": "78a11b7d2d7e1b19d92d2afd51bd3624eca86c3c", "title": "Improved Deep Metric Learning with Multi-class N-pair Loss Objective", "text": "Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example while not interacting with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called multi-class N-pair loss. The proposed objective function firstly generalizes triplet loss by allowing joint comparison among more than one negative example \u2013 more specifically, N-1 negative examples \u2013 and secondly reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1)\u00d7N. We demonstrate the superiority of our proposed loss to the triplet loss as well as other competing loss functions for a variety of tasks on several visual recognition benchmarks, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification."} {"_id": "85f9204814b3dd7d2c9207b25297f90d463d4810", "title": "A second order accurate projection method for the incompressible Navier-Stokes equations on non-graded adaptive grids", "text": "We present an unconditionally stable second order accurate projection method for the incompressible Navier\u2013Stokes equations on non-graded adaptive Cartesian grids. We employ quadtree and octree data structures as an efficient means to represent the grid. We use the supra-convergent Poisson solver of [C.-H. Min, F. Gibou, H. Ceniceros, A supra-convergent finite difference scheme for the variable coefficient Poisson equation on fully adaptive grids, CAM report 05-29, J. Comput. Phys. (in press)], a second order accurate semi-Lagrangian method to update the momentum equation, an unconditionally stable backward difference scheme to treat the diffusion term and a new method that guarantees the stability of the projection step on highly non-graded grids. We sample all the variables at the grid nodes, producing a scheme that is straightforward to implement. We propose two and three-dimensional examples to demonstrate second order accuracy for the velocity field and the divergence free condition in the L1 and L\u221e norms."} {"_id": "7fe85d798f7c5ef9be79426ac9878b78a0cd0e83", "title": "A Monte Carlo Model of Light Propagation in Tissue", "text": "The Monte Carlo method is rapidly becoming the model of choice for simulating light transport in tissue. This paper provides all the details necessary for implementation of a Monte Carlo program. Variance reduction schemes that improve the efficiency of the Monte Carlo method are discussed. Analytic expressions facilitating convolution calculations for finite flat and Gaussian beams are included. Useful validation benchmarks are presented."}
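(Illustrative aside for the multi-class N-pair loss record above; a NumPy sketch of the loss L_i = log(1 + sum_{j != i} exp(a_i . p_j - a_i . p_i)) over a batch of N anchor-positive pairs, one pair per class. The shapes and the absence of embedding normalization are my assumptions.)

```python
import numpy as np

def n_pair_loss(anchors: np.ndarray, positives: np.ndarray) -> float:
    """Multi-class N-pair loss over a batch of N (anchor, positive) pairs,
    one pair per class: each anchor treats the other N-1 positives in the
    batch as its negatives.  anchors, positives: (N, D) embeddings."""
    sim = anchors @ positives.T                 # (N, N) similarity matrix
    diff = sim - np.diag(sim)[:, None]          # a_i.p_j - a_i.p_i
    np.fill_diagonal(diff, -np.inf)             # exclude the positive itself
    return float(np.mean(np.log1p(np.exp(diff).sum(axis=1))))

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
p = a + 0.1 * rng.normal(size=(8, 16))          # positives near their anchors
print(n_pair_loss(a, p))
```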
{"_id": "ea573d7a9bb99d769427e83b196c0122a39cafc9", "title": "A design and development of an intelligent jammer and jamming detection methodologies using machine learning approach", "text": "Nowadays, the utilization of mobile phones has increased rapidly. Due to this evolution, schools, mosques, prisons, etc., require silence and security. This is achieved by using mobile phone jammers. An intelligent mobile jammer that allows only emergency calls is designed using a microcontroller. Here, the jammer utilizes successive approximation to reduce the transmission power; however, it requires a few modifications. Therefore, in this paper, an improved successive approximation based on a divide-and-conquer algorithm is proposed in order to design the improved intelligent mobile jammer and reduce transmission power. Subsequently, the proposed jammer is analysed based on different scenarios and frequency bands to illustrate its performance effectiveness. Furthermore, the normal activities are distinguished from jamming by using a machine learning-based detection system according to the various parameters. Finally, the proposed algorithm is compared with conventional algorithms to demonstrate its performance efficiency in terms of detection accuracy."} {"_id": "23b7d6a9fce5732ca5c5e11a3f42e17860ef05ad", "title": "Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition", "text": "Recurrent Neural Networks (RNNs) and their variants, such as Long-Short Term Memory (LSTM) networks, and Gated Recurrent Unit (GRU) networks, have achieved promising performance in sequential data modeling. The hidden layers in RNNs can be regarded as the memory units, which are helpful in storing information in sequential contexts. However, when dealing with high dimensional input data, such as video and text, the input-to-hidden linear transformation in RNNs brings high memory usage and huge computational cost. This makes the training of RNNs very difficult. To address this challenge, we propose a novel compact LSTM model, named TR-LSTM, by utilizing the low-rank tensor ring decomposition (TRD) to reformulate the input-to-hidden transformation. Compared with other tensor decomposition methods, TR-LSTM is more stable. In addition, TR-LSTM can be trained end-to-end and also provides a fundamental building block for RNNs in handling large input data. Experiments on real-world action recognition datasets have demonstrated the promising performance of the proposed TR-LSTM compared with the tensor-train LSTM and other state-of-the-art competitors."} {"_id": "a99f1f749481e44abab0ba9a8b7c1d3572a2e465", "title": "Quo Vadis, Atlas-Based Segmentation?", "text": ""} {"_id": "36091ff6b5d5a53d9641f5c3388b8c31b9ad4b49", "title": "Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos", "text": "A major challenge in computer vision is scaling activity understanding to the long tail of complex activities without requiring collecting large quantities of data for new actions. The task of video retrieval using natural language descriptions seeks to address this through rich, unconstrained supervision about complex activities. However, while this formulation offers hope of leveraging underlying compositional structure in activity descriptions, existing approaches typically do not explicitly model compositional reasoning. In this work, we introduce an approach for explicitly and dynamically reasoning about compositional natural language descriptions of activity in videos. We take a modular neural network approach that, given a natural language query, extracts the semantic structure to assemble a compositional neural network layout and corresponding network modules. We show that this approach is able to achieve state-of-the-art results on the DiDeMo video retrieval dataset."}
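(Illustrative aside for the TR-LSTM record above; a NumPy sketch of the tensor-ring format itself, not of the TR-LSTM layer: an order-d tensor is stored as d small cores G_k of shape (r, n_k, r), and an entry is recovered as the trace of the product of the corresponding core slices. Ranks and shapes are arbitrary.)

```python
import numpy as np

def tr_entry(cores, index):
    """Entry T[i1, ..., id] of a tensor stored in tensor-ring format.
    cores[k] has shape (r_k, n_k, r_{k+1}) with r_{d+1} == r_1, and
    T[i1..id] = trace(G1[:, i1, :] @ G2[:, i2, :] @ ... @ Gd[:, id, :])."""
    mat = cores[0][:, index[0], :]
    for core, i in zip(cores[1:], index[1:]):
        mat = mat @ core[:, i, :]
    return np.trace(mat)

rng = np.random.default_rng(0)
shape, rank = (4, 5, 6), 3                   # full tensor would hold 120 entries
cores = [rng.normal(size=(rank, n, rank)) for n in shape]
params = sum(c.size for c in cores)          # 36 + 45 + 54 = 135 here; the
print(params, tr_entry(cores, (1, 2, 3)))    # savings grow with order and size
```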
{"_id": "b3d7371522a7a68137df2cb005ca9683f3436bd7", "title": "Multivalued logics: a uniform approach to reasoning in artificial intelligence", "text": "This paper describes a uniform formalization of much of the current work in artificial intelligence on inference systems. We show that many of these systems, including first-order theorem provers, assumption-based truth maintenance systems (ATMSs), and unimplemented formal systems such as default logic or circumscription, can be subsumed under a single general framework. We begin by defining this framework, which is based on a mathematical structure known as a bilattice. We present a formal definition of inference using this structure and show that this definition generalizes work involving ATMSs and some simple nonmonotonic logics. Following the theoretical description, we describe a constructive approach to inference in this setting; the resulting generalization of both conventional inference and ATMSs is achieved without incurring any substantial computational overhead. We show that our approach can also be used to implement a default reasoner, and discuss a combination of default and ATMS methods that enables us to formally describe an \u201cincremental\u201d default reasoning system. This incremental system does not need to perform consistency checks before drawing tentative conclusions, but can instead adjust its beliefs when a default premise or conclusion is overturned in the face of convincing contradictory evidence. The system is therefore much more computationally viable than earlier approaches. Finally, we discuss the implementation of our ideas. We begin by considering general issues that need to be addressed when implementing a multivalued approach such as that we are proposing, and then turn to specific examples showing the results of an existing implementation. This single implementation is used to solve a digital simulation task using first-order logic, a diagnostic task using ATMSs as suggested by de Kleer and Williams, a problem in default reasoning as in Reiter\u2019s default logic or McCarthy\u2019s circumscription, and to solve the same problem more efficiently by combining default methods with justification information. All of these applications use the same general-purpose bilattice theorem prover and differ only in the choice of bilattice being considered."} {"_id": "13cb5d5f0b04de165ef47b5117fc3a4b74d12b89", "title": "Automics: souvenir generating photoware for theme parks", "text": "Automics is a photo-souvenir service which utilises mobile devices to support the capture, sharing and annotation of digital images amongst groups of visitors to theme parks. The prototype service mixes individual and group photo-capture with existing in-park, on-ride photo services, to allow users to create printed photo-stories. Herein we discuss initial fieldwork in theme parks that grounded the design of Automics, our development of the service prototype, and its real-world evaluation with theme park visitors. We relate our findings on user experience of the service to the literature on mobile photoware, finding implications for the design of souvenir services."} {"_id": "5704912d313452373d7ce329253ef398c0e4d6de", "title": "A DIRT-T Approach to Unsupervised Domain Adaptation", "text": "Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high capacity, then feature distribution matching is a weak constraint; and 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain.
In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improves the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks."} {"_id": "8658dbfb4bc0f8474a513adf0b51b1cfc2419a02", "title": "SCRUM: An extension pattern language for hyper productive software development", "text": "The patterns of the SCRUM development method are presented as an extension pattern language to the existing organizational pattern languages. In the last few years, the SCRUM development method has rapidly gained recognition as an effective tool for hyper-productive software development. However, when SCRUM patterns are combined with other existing organizational patterns, they lead to highly adaptive, yet well-structured software development organizations. Also, decomposing SCRUM into patterns can guide adoption of only those parts of SCRUM that are applicable to a specific situation."} {"_id": "4fbf7c49ae0fddd9fdb4c94d381d36afd2ab4637", "title": "Goal-based trajectory analysis for unusual behaviour detection in intelligent surveillance", "text": "In a typical surveillance installation, a human operator has to constantly monitor a large array of video feeds for suspicious behaviour. As the number of cameras increases, information overload makes manual surveillance increasingly difficult, adding to other confounding factors such as human fatigue and boredom. The objective of an intelligent vision-based surveillance system is to automate the monitoring and event detection components of surveillance, alerting the operator only when unusual behaviour or other events of interest are detected. While most traditional methods for trajectory-based unusual behaviour detection rely on low-level trajectory features such as flow vectors or control points, this paper builds upon a recently introduced approach that makes use of higher-level features of intentionality. Individuals in the scene are modelled as intentional agents, and unusual behaviour is detected by evaluating the explicability of the agent\u2019s trajectory with respect to known spatial goals. The proposed method extends the original goal-based approach in three ways: first, the spatial scene structure is learned in a training phase; second, a region transition model is learned to describe normal movement patterns between spatial regions; and third, classification of trajectories in progress is performed in a probabilistic framework using particle filtering. Experimental validation on three published third-party datasets demonstrates the validity of the proposed approach."} {"_id": "abe0bd94e134c7ac0b1c78922ed17cf3bec08d5e", "title": "A stochastic model of human-machine interaction for learning dialog strategies", "text": "In this paper, we propose a quantitative model for dialog systems that can be used for learning the dialog strategy.
We claim that the problem of dialog design can be formalized as an optimization problem with an objective function reflecting different dialog dimensions relevant for a given application. We also show that any dialog system can be formally described as a sequential decision process in terms of its state space, action set, and strategy. With additional assumptions about the state transition probabilities and cost assignment, a dialog system can be mapped to a stochastic model known as a Markov decision process (MDP). A variety of data-driven algorithms for finding the optimal strategy (i.e., the one that optimizes the criterion) is available within the MDP framework, based on reinforcement learning. For an effective use of the available training data, we propose a combination of supervised and reinforcement learning: the supervised learning is used to estimate a model of the user, i.e., the MDP parameters that quantify the user\u2019s behavior. Then a reinforcement learning algorithm is used to estimate the optimal strategy while the system interacts with the simulated user. This approach is tested for learning the strategy in an air travel information system (ATIS) task. The experimental results we present in this paper show that it is indeed possible to find a simple criterion, a state space representation, and a simulated user parameterization in order to automatically learn a relatively complex dialog behavior, similar to one that was heuristically designed by several research groups."} {"_id": "22ca497e24466737981f9ca1690d6c712b7e1276", "title": "Reinforcement Learning for Spoken Dialogue Systems", "text": "Recently, a number of authors have proposed treating dialogue systems as Markov decision processes (MDPs). However, the practical application of MDP algorithms to dialogue systems faces a number of severe technical challenges. We have built a general software tool (RLDS, for Reinforcement Learning for Dialogue Systems) based on the MDP framework, and have applied it to dialogue corpora gathered from two dialogue systems built at AT&T Labs. Our experiments demonstrate that RLDS holds promise as a tool for \u201cbrowsing\u201d and understanding correlations in complex, temporally dependent dialogue corpora."} {"_id": "5c8bb027eb65b6d250a22e9b6db22853a552ac81", "title": "Learning from delayed rewards", "text": ""}
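(Illustrative aside for the two dialog-as-MDP records above; a tabular Q-learning sketch on an invented two-slot dialog MDP with a simulated user, in the spirit of combining a user model with reinforcement learning. States, actions, rewards, and probabilities are all illustrative.)

```python
import random

# Invented toy dialog MDP: state = number of slots filled; actions:
# ASK = request the next slot, PRESENT = show results and end the dialog.
N_SLOTS, ASK, PRESENT = 2, 0, 1
Q = {(s, a): 0.0 for s in range(N_SLOTS + 1) for a in (ASK, PRESENT)}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Simulated user: asking fills the next slot 80% of the time and costs
    one turn; presenting ends the dialog, succeeding only when all slots
    are filled."""
    if action == ASK:
        if state < N_SLOTS and random.random() < 0.8:
            state += 1
        return state, -1.0, False
    return state, (20.0 if state == N_SLOTS else -10.0), True

random.seed(0)
for _ in range(5000):
    s, done = 0, False
    while not done:
        a = random.choice((ASK, PRESENT)) if random.random() < eps \
            else max((ASK, PRESENT), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, ASK)], Q[(s2, PRESENT)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

print({k: round(v, 1) for k, v in Q.items()})  # learned: ask until filled
```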
{"_id": "f44a610e28e174f48220cac09579a3aa337e672a", "title": "An Efficient Algorithm for Fractal Analysis of Textures", "text": "In this paper we propose a new and efficient texture feature extraction method: the Segmentation-based Fractal Texture Analysis, or SFTA. The extraction algorithm consists of decomposing the input image into a set of binary images from which the fractal dimensions of the resulting regions are computed in order to describe segmented texture patterns. The decomposition of the input image is achieved by the Two-Threshold Binary Decomposition (TTBD) algorithm, which we also propose in this work. We evaluated SFTA for the tasks of content-based image retrieval (CBIR) and image classification, comparing its performance to that of other widely employed feature extraction methods such as Haralick and Gabor filter banks. SFTA achieved higher precision and accuracy for CBIR and image classification. Additionally, SFTA was at least 3.7 times faster than Gabor and 1.6 times faster than Haralick with respect to feature extraction time."} {"_id": "2152e1fda8b19f5480ca094cb08f1ebac1f5ff9b", "title": "Voice-based assessments of trustworthiness, competence, and warmth in blind and sighted adults", "text": "The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i.e., voice pitch). Groups of 27 congenitally or early-blind adults and 23 sighted controls assessed the trustworthiness, competence, and warmth of men and women speaking a series of vowels, whose voice pitches had been experimentally raised or lowered. Blind and sighted listeners judged both men's and women's voices with lowered pitch as being more competent and trustworthy than voices with raised pitch. In contrast, raised-pitch voices were judged as being warmer than were lowered-pitch voices, but only for women's voices. Crucially, blind and sighted persons did not differ in their voice-based assessments of competence or warmth, or in their certainty of these assessments, whereas the association between low pitch and trustworthiness in women's voices was weaker among blind than sighted participants. This latter result suggests that blind persons may rely less heavily on nonverbal cues to trustworthiness compared to sighted persons. Ultimately, our findings suggest that robust perceptual associations that systematically link voice pitch to the social and personal dimensions of a speaker can develop without visual input."} {"_id": "1028b3f1808b5bcc72b018d157db952bb8282205", "title": "Place navigation impaired in rats with hippocampal lesions", "text": "Electrophysiological studies have shown that single cells in the hippocampus respond during spatial learning and exploration1\u20134, some firing only when animals enter specific and restricted areas of a familiar environment. Deficits in spatial learning and memory are found after lesions of the hippocampus and its extrinsic fibre connections5,6, following damage to the medial septal nucleus which successfully disrupts the hippocampal theta rhythm7, and in senescent rats which also show a correlated reduction in synaptic enhancement on the perforant path input to the hippocampus8. We now report, using a novel behavioural procedure requiring search for a hidden goal, that, in addition to a spatial discrimination impairment, total hippocampal lesions also cause a profound and lasting place-navigational impairment that can be dissociated from correlated motor, motivational and reinforcement aspects of the procedure."}
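(Illustrative aside for the SFTA record above; a standalone NumPy box-counting estimator of fractal dimension for a binary image, i.e., the kind of per-region measurement SFTA builds on. The TTBD decomposition itself is not reproduced.)

```python
import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary image as the slope of
    log N(s) versus log(1/s), where N(s) counts s-by-s boxes containing
    at least one foreground pixel."""
    n = min(binary.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        h, w = binary.shape[0] // s, binary.shape[1] // s
        blocks = binary[:h * s, :w * s].reshape(h, s, w, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should come out near dimension 2.
img = np.ones((256, 256), dtype=bool)
print(round(box_counting_dimension(img), 2))
```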
{"_id": "1045aca116f8830e364147de75285e86f9a24474", "title": "Performance anomaly of 802.11b", "text": "We analyze the performance of the IEEE 802.11b wireless local area networks. We have observed that when some mobile hosts use a lower bit rate than the others, the performance of all hosts is considerably degraded. Such a situation is a common case in wireless local area networks in which a host far away from an Access Point is subject to significant signal fading and interference. To cope with this problem, the host changes its modulation type, which degrades its bit rate to some lower value. Typically, 802.11b products degrade the bit rate from 11 Mb/s to 5.5, 2, or 1 Mb/s when repeated unsuccessful frame transmissions are detected. In such a case, a host transmitting, for example, at 1 Mb/s reduces the throughput of all other hosts transmitting at 11 Mb/s to a low value below 1 Mb/s. The basic CSMA/CA channel access method is at the root of this anomaly: it guarantees an equal long-term channel access probability to all hosts. When one host captures the channel for a long time because its bit rate is low, it penalizes other hosts that use the higher rate. We analyze the anomaly theoretically by deriving simple expressions for the useful throughput, validate them by means of simulation, and compare with several performance measurements."} {"_id": "3bf64462fc3558ab7e9329d084a1af4cf0c87ebf", "title": "Deadline Guaranteed Service for Multi-Tenant Cloud Storage", "text": "It is imperative for cloud storage systems to be able to provide deadline guaranteed services according to service level agreements (SLAs) for online services. In spite of many previous works on deadline-aware solutions, most of them focus on scheduling work flows or resource reservation in datacenter networks but neglect the server overload problem in cloud storage systems that prevents providing the deadline guaranteed services. In this paper, we introduce a new form of SLAs, which enables each tenant to specify a percentage of its requests it wishes to serve within a specified deadline. We first identify the multiple objectives (i.e., traffic and latency minimization, resource utilization maximization) in developing schemes to satisfy the SLAs. To satisfy the SLAs while achieving the multi-objectives, we propose a Parallel Deadline Guaranteed (PDG) scheme, which schedules data reallocation (through load re-assignment and data replication) using a tree-based bottom-up parallel process. The observation from our model also motivates our deadline strictness clustered data allocation algorithm that maps tenants with similar SLA strictness into the same server to enhance SLA guarantees. We further enhance PDG in supplying SLA guaranteed services through two algorithms: i) a prioritized data reallocation algorithm that deals with request arrival rate variation, and ii) an adaptive request retransmission algorithm that deals with SLA requirement variation. Our trace-driven experiments on a simulator and Amazon EC2 show the effectiveness of our schemes for guaranteeing the SLAs while achieving the multi-objectives."} {"_id": "1284e72c31b94a6a1936430a9aaeb84edbc445ed", "title": "High Quality Uniform Random Number Generation Using LUT Optimised State-transition Matrices", "text": "This paper presents a family of uniform random number generators designed for efficient implementation in lookup table (LUT) based FPGA architectures. A generator with a period of 2^k-1 can be implemented using k flip-flops and k LUTs, and provides k random output bits each cycle. Each generator is based on a binary linear recurrence, with a state-transition matrix designed to make best use of all available LUT inputs in a given FPGA architecture, and to ensure that the critical path between all registers is a single LUT.
This class of generator provides a higher sample rate per area than LFSR and Combined Tausworthe generators, and operates at similar or higher clock-rates. The statistical quality of the generators increases with k, and they can be configured to pass all common empirical tests such as Diehard, Crush and the NIST cryptographic test suite. Theoretical properties such as global equidistribution can also be calculated, and best and average case statistics shown. Due to the large number of random bits generated per cycle these generators can be used as a basis for generators with even higher statistical quality, and an example involving combination through addition is demonstrated."} {"_id": "3735317a7435296a01ff4b7571d6b08fca98b298", "title": "Adversarial Text Generation Without Reinforcement Learning", "text": "Generative Adversarial Networks (GANs) have experienced a recent surge in popularity, performing competitively in a variety of tasks, especially in computer vision. However, GAN training has shown limited success in natural language processing. This is largely because sequences of text are discrete, and thus gradients cannot propagate from the discriminator to the generator. Recent solutions use reinforcement learning to propagate approximate gradients to the generator, but this is inefficient to train. We propose to utilize an autoencoder to learn a low-dimensional representation of sentences. A GAN is then trained to generate its own vectors in this space, which decode to realistic utterances. We report both random and interpolated samples from the generator. Visualization of sentence vectors indicates our model correctly learns the latent space of the autoencoder. Both human ratings and BLEU scores show that our model generates realistic text against competitive baselines."} {"_id": "3cc515487ff15e7ce57840f87243e7c0748f5d89", "title": "Fast bayesian matching pursuit", "text": "A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both an approximate MMSE estimate of the parameter vector and a set of high posterior probability mixing parameters. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and MAP model selection. The set of high probability mixing parameters not only provides MAP basis selection, but also yields relative probabilities that reveal potential ambiguity in the sparse model."} {"_id": "4a2d7bf9937793a648a43c93029353ade10e64da", "title": "Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines", "text": "Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude.
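The LUT-optimised generators above are binary linear recurrences: each clock cycle multiplies the k-bit state by a fixed state-transition matrix over GF(2) and outputs all k state bits. The paper derives matrices matched to a given FPGA's LUT inputs with maximal period 2^k-1; the sketch below only illustrates the general mechanism, with an arbitrary random matrix (an assumption, so a maximal period is not guaranteed):

```python
import numpy as np

K = 8
rng = np.random.default_rng(1)

# An arbitrary K x K transition matrix over GF(2); the paper instead derives
# matrices tuned to LUT inputs and guaranteeing period 2^K - 1.
M = rng.integers(0, 2, size=(K, K), dtype=np.uint8)
state = np.ones(K, dtype=np.uint8)   # any nonzero seed

def step(s):
    # One clock cycle: s' = M s over GF(2); all K state bits are output.
    return (M @ s) % 2

for _ in range(4):
    state = step(state)
    print("".join(map(str, state)))
```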
Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values.\n We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers."} {"_id": "041f167030571f5f156c8407bab2eab3006842e0", "title": "Biomedical Event Extraction using Abstract Meaning Representation", "text": "We propose a novel, Abstract Meaning Representation (AMR) based approach to identifying molecular events/interactions in biomedical text. Our key contributions are: (1) an empirical validation of our hypothesis that an event is a subgraph of the AMR graph, (2) a neural network-based model that identifies such an event subgraph given an AMR, and (3) a distant supervision based approach to gather additional training data. We evaluate our approach on the 2013 Genia Event Extraction dataset (Kim et al., 2013) and show promising results."} {"_id": "64a1e4d18c2fab4904f85b3ee1a9af4263dae348", "title": "Deep generative models: Survey", "text": "Generative models have found their way to the forefront of deep learning over the last decade and so far, it seems that the hype will not fade away any time soon. In this paper, we give an overview of the most important building blocks of the most recent revolutionary deep generative models such as RBM, DBM, DBN, VAE and GAN. We will also take a look at three state-of-the-art generative models, namely PixelRNN, DRAW and NADE. We will delve into their unique architectures, the learning procedures and their potential and limitations. We will also review some of the known issues that arise when trying to design and train deep generative architectures using shallow ones and how different models deal with these issues. This paper is not meant to be a comprehensive study of these models, but rather a starting point for those who bear an interest in the field."} {"_id": "0e27539f23b8ebb0179fcecfeb11167d3f38eeaf", "title": "Critical event prediction for proactive management in large-scale computer clusters", "text": "As the complexity of distributed computing systems increases, systems management tasks require significantly higher levels of automation; examples include diagnosis and prediction based on real-time streams of computer events, setting alarms, and performing continuous monitoring. The core of autonomic computing, a recently proposed initiative towards next-generation IT-systems capable of 'self-healing', is the ability to analyze data in real-time and to predict potential problems.
The goal is to avoid catastrophic failures through prompt execution of remedial actions. This paper describes an attempt to build a proactive prediction and control system for large clusters. We collected event logs containing various system reliability, availability and serviceability (RAS) events, and system activity reports (SARs) from a 350-node cluster system for a period of one year. The 'raw' system health measurements contain a great deal of redundant event data, which is either repetitive in nature or misaligned with respect to time. We applied a filtering technique and modeled the data into a set of primary and derived variables. These variables were used in probabilistic networks for establishing event correlations through prediction algorithms. We also evaluated the role of time-series methods, rule-based classification algorithms and Bayesian network models in event prediction. Based on historical data, our results suggest that it is feasible to predict system performance parameters (SARs) with a high degree of accuracy using time-series models. Rule-based classification techniques can be used to extract machine-event signatures to predict critical events with up to 70% accuracy."} {"_id": "4b0e6f6c63ba66b21ad92e8b14139a8b59e9877e", "title": "Small business credit scoring: a comparison of logistic regression, neural network, and decision tree models", "text": "The paper compares the models for small business credit scoring developed by logistic regression, neural networks, and CART decision trees on a Croatian bank dataset. The models obtained by all three methodologies were estimated, then validated on the same hold-out sample, and their performance was compared. There is an evident significant difference among the best neural network model, decision tree model, and logistic regression model. The most successful neural network model was obtained by the probabilistic algorithm. The best model extracted the most important features for small business credit scoring from the observed data."} {"_id": "ac3306493b0b314b5e355bbc1ac44289bcaf1acd", "title": "SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine", "text": "We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet\u2019s URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances.
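The claim in the critical-event-prediction abstract above, that SAR performance parameters can be forecast accurately with time-series models, is easy to illustrate with a tiny autoregressive predictor; the synthetic 'load' series and the model order below are assumptions for demonstration, not the paper's data or exact method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a system-activity-report (SAR) metric over time.
t = np.arange(200)
load = 0.5 * np.sin(t / 10.0) + 0.01 * t + rng.normal(0, 0.05, t.size)

# Fit an order-p autoregressive model by least squares, then forecast one step.
p = 5
X = np.array([load[i:i + p] for i in range(t.size - p)])
y = load[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

forecast = load[-p:] @ coef
print(f"one-step forecast: {forecast:.3f}, last observed: {load[-1]:.3f}")
```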
This suggests that the proposed dataset could well serve as a benchmark for question-answering."} {"_id": "3913d2e0a51657a5fe11305b1bcc8bf3624471c0", "title": "Learning Structured Representation for Text Classification via Reinforcement Learning", "text": "Representation learning is a fundamental problem in natural language processing. This paper studies how to learn a structured representation for text classification. Unlike most existing representation models that either use no structure or rely on pre-specified structures, we propose a reinforcement learning (RL) method to learn sentence representation by discovering optimized structures automatically. We demonstrate two attempts to build structured representation: Information Distilled LSTM (ID-LSTM) and Hierarchically Structured LSTM (HS-LSTM). ID-LSTM selects only important, task-relevant words, and HS-LSTM discovers phrase structures in a sentence. Structure discovery in the two representation models is formulated as a sequential decision problem: current decision of structure discovery affects following decisions, which can be addressed by policy gradient RL. Results show that our method can learn task-friendly representations by identifying important words or task-relevant structures without explicit structure annotations, and thus yields competitive performance."} {"_id": "1605b94e1c2d8a019674c9ee05d88893c7038b27", "title": "Privacy Paradox Revised: Pre-Existing Attitudes, Psychological Ownership, and Actual Disclosure", "text": "Prior research has pointed to discrepancies between users' privacy concerns and disclosure behaviors, denoted as the privacy paradox, and repeatedly highlighted the importance of finding explanations for this dichotomy. In this regard, three approaches have been proposed by prior literature: (1) use of actual disclosure behavior rather than behavioral intentions, (2) systematic distinction between pre-existing attitudes and situation-specific privacy considerations, and (3) limited and irrational cognitive processes during decision-making. The current research proposes an experiment capable of testing these three assumptions simultaneously. More precisely, the authors aim to explore the contextual nature of privacy-related decisions by systematically manipulating (1) individuals\u2019 psychological ownership with regard to their own private information, and (2) individuals\u2019 affective states, while measuring (3) pre-existing attitudes as well as situation-specific risk and benefit perceptions, and (4) intentions as well as actual disclosure. Thus, the proposed study strives to uniquely add to the understanding of the privacy paradox."} {"_id": "f924aae98bc6d0119035712d3a37f388975a55a3", "title": "Anomaly Detection in Noisy Images", "text": "Finding rare events in multidimensional data is an important detection problem that has applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, or safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class.
In some cases, such events may have never been observed, so the only information that is available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data, or fixed by a domain expert. Sometimes, the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions, data exhibits more complex interdependencies, and there is redundancy that could be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. Therefore, we pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how this method can be accelerated using graphical processing units (GPU). Then, we propose a new method for finding defective components on railway tracks using cameras mounted on a train. We describe how to extract features and use a combination of classifiers to solve this problem. Then, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem naturally fits in the multitask learning framework. The first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work."} {"_id": "68914d04bf225449408d86536fcbae7f285a0f63", "title": "Building population mapping with aerial imagery and GIS data", "text": "Geospatial distribution of population at a scale of individual buildings is needed for analysis of people\u2019s interaction with their local socio-economic and physical environments.
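The dissertation abstract above poses curvilinear anomaly detection as basis pursuit under sparsity constraints, solved with an iterative shrinkage algorithm. A generic iterative shrinkage-thresholding (ISTA) sketch conveys the core update; the random dictionary below merely stands in for the shearlet frame and is purely illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 norm: shrink coefficients toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.1, n_iter=200):
    # Iterative shrinkage for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256))           # stand-in dictionary, not a shearlet frame
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [3.0, -2.0, 1.5]
b = A @ x_true + rng.normal(0, 0.01, 64)

x_hat = ista(A, b)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.5)[0])
```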
High resolution aerial images are capable of capturing urban complexities and are considered a potential source for mapping urban features at this fine scale. This paper studies population mapping for individual buildings by using aerial imagery and other geographic data. Building footprints and heights are first determined from aerial images, digital terrain and surface models. City zoning maps allow the classification of the buildings as residential and non-residential. The use of additional ancillary geographic data further filters residential utility buildings out of the residential area and identifies houses and apartments. In the final step, census block population, which is publicly available from the U.S. Census, is disaggregated and mapped to individual residential buildings. This paper proposes a modified building population mapping model that takes into account the effects of different types of residential buildings. Detailed steps are described that lead to the identification of residential buildings from imagery and other GIS data layers. Estimated building populations are evaluated per census block with reference to the known census records. This paper presents and evaluates the results of building population mapping in areas of West Lafayette, Lafayette, and Wea Township, all in the state of Indiana, USA."} {"_id": "bb3f4ad0d4392689fa1a7e4696579173c443dc58", "title": "Toward a Universal Cortical Algorithm: Examining Hierarchical Temporal Memory in Light of Frontal Cortical Function", "text": "One aspect of frontal cortical function that has been given less attention than it is due, both in this review and in the literature in general, is the biological basis of the generation and manipulation of abstract frontal representations. Response properties of some PFC cells have been shown to correspond with abstract rules (Wallis et al, 2001), and models such as those of Norman & Shallice (1980, 1986; Shallice, 1982; Shallice & Burgess, 1991, 1996), Grafman (2002) and Fuster (1997) have emphasized the requirement that frontal cortex dynamically generate and manipulate abstract"} {"_id": "dd5b30cbb7c07cbc9643f9dfe124c344b26a03bd", "title": "An Adaption of BIOASQ Question Answering dataset for Machine Reading systems by Manual Annotations of Answer Spans", "text": "BIOASQ Task B Phase B challenge focuses on extracting answers from snippets for a given question. The dataset provided by the organizers contains answers, but not all their variants. Hence, a manual annotation was performed to extract all forms of correct answers. This article shows the impact of using all occurrences of correct answers for training on the evaluation scores, which improve significantly."} {"_id": "7c331f755de54f5d37d699b6192ca2f2468c8d9b", "title": "QoE in Video Transmission: A User Experience-Driven Strategy", "text": "The increasing popularity of video (i.e., audio-visual) applications or services over both wired and wireless links has prompted growing interest in the investigation of quality of experience (QoE) in online video transmission. Conventional video quality metrics, such as peak-signal-to-noise-ratio and quality of service, only focus on the reception quality from the system perspective. As a result, they cannot represent the true visual experience of an individual user.
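The final disaggregation step in the building-population abstract above reduces to splitting each census block's population across its residential buildings in proportion to a capacity weight. The sketch below uses plain footprint-times-height volume as the weight; the paper's modified model additionally differentiates residential building types, which this toy version omits:

```python
# Dasymetric disaggregation sketch: one census block, volume-weighted split.
buildings = [
    {"id": "house_1",   "footprint_m2": 120.0, "height_m": 6.0},
    {"id": "house_2",   "footprint_m2": 150.0, "height_m": 6.0},
    {"id": "apartment", "footprint_m2": 400.0, "height_m": 21.0},
]
block_population = 96   # known from public census data

volumes = [b["footprint_m2"] * b["height_m"] for b in buildings]
total = sum(volumes)
for b, v in zip(buildings, volumes):
    share = block_population * v / total
    print(f"{b['id']:>9}: {share:6.1f} residents")
```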
Instead, the QoE introduces a user experience-driven strategy which puts special emphasis on the contextual and human factors in addition to the transmission system. This advantage has raised the popularity and widespread usage of QoE in video transmission. In this paper, we present an overview of selected issues pertaining to QoE and its recent applications in video transmission, with consideration of the compelling features of QoE (i.e., context and human factors). The selected issues include QoE modeling with influence factors in the end-to-end chain of video transmission, QoE assessment (including subjective test and objective QoE monitoring) and QoE management of video transmission over different types of networks. Through the literature review, we observe that the context and human factors in QoE-aware video transmission have attracted significant attention over the past two to three years. A vast number of high quality works were published in this area, and will be highlighted in this survey. In addition to a thorough summary of recent progress, we also present an outlook of future developments on QoE assessment and management in video transmission, especially focusing on the context and human factors that have not been addressed yet and the technical challenges that have not been completely solved so far. We believe that our overview and findings can provide a timely perspective on the related issues and the future research directions in QoE-oriented services over video communications."} {"_id": "6182186784d792ed09c4129924a46f9e88869407", "title": "Grounded Theory", "text": "Grounded theory was originally expounded by Barney Glaser and Anselm Strauss in their 1967 book The Discovery of Grounded Theory (Glaser and Strauss 1967). Reacting against what they saw as the dominance of hypothetico-deductive, theory-testing approaches, Glaser and Strauss proposed grounded theory as a way of building theory systematically using data obtained from social research. Since its first appearance, grounded theory has gone on to become \u2018currently the most widely used and popular qualitative research method across a wide range of disciplines and subject areas\u2019 (Bryant and Charmaz 2007: 1)."} {"_id": "931bf857a5d9dbf13ff8da107f5d3075d63a925d", "title": "Optimal assay design for determining the in vitro sensitivity of ring stage Plasmodium falciparum to artemisinins.", "text": "Recent reports demonstrate that failure of artemisinin-based antimalarial therapies is associated with an altered response of early blood stage Plasmodium falciparum. This has led to increased interest in the use of pulse assays that mimic clinical drug exposure for analysing artemisinin sensitivity of highly synchronised ring stage parasites. We report a methodology for the reliable execution of drug pulse assays and detail a synchronisation strategy that produces well-defined tightly synchronised ring stage cultures in a convenient time-frame."} {"_id": "2d1cfc9e81fb159967c2be8446a8e3e7b50fe36b", "title": "An MDP-based Recommender System", "text": "Typical Recommender systems adopt a static view of the recommendation process and treat it as a prediction problem. We argue that it is more appropriate to view the problem of generating recommendations as a sequential decision problem and, consequently, that Markov decision processes (MDP) provide a more appropriate model for Recommender systems.
MDPs introduce two benefits: they take into account the long-term effects of each recommendation, and they take into account the expected value of each recommendation. To succeed in practice, an MDP-based Recommender system must employ a strong initial model; and the bulk of this paper is concerned with the generation of such a model. In particular, we suggest the use of an n-gram predictive model for generating the initial MDP. Our n-gram model induces a Markov chain model of user behavior whose predictive accuracy is greater than that of existing predictive models. We describe our predictive model in detail and evaluate its performance on real data. In addition, we show how the model can be used in an MDP-based Recommender system."} {"_id": "3107cb3f3f39eb6fcf6435daaef636db35950e4f", "title": "From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews", "text": "Recommending products to consumers means not only understanding their tastes, but also understanding their level of experience. For example, it would be a mistake to recommend the iconic film Seven Samurai simply because a user enjoys other action movies; rather, we might conclude that they will eventually enjoy it---once they are ready. The same is true for beers, wines, gourmet foods---or any products where users have acquired tastes: the `best' products may not be the most 'accessible'. Thus our goal in this paper is to recommend products that a user will enjoy now, while acknowledging that their tastes may have changed over time, and may change again in the future. We model how tastes change due to the very act of consuming more products---in other words, as users become more experienced. We develop a latent factor recommendation system that explicitly accounts for each user's level of experience. We find that such a model not only leads to better recommendations, but also allows us to study the role of user experience and expertise on a novel dataset of fifteen million beer, wine, food, and movie reviews."} {"_id": "599ebeef9c9d92224bc5969f3e8e8c45bff3b072", "title": "Item-based top-N recommendation algorithms", "text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems---a personalized information filtering technology used to identify a set of items that will be of interest to a certain user. User-based collaborative filtering is the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. Unfortunately, the computational complexity of these methods grows linearly with the number of customers, which in typical commercial applications can be several millions. To address these scalability concerns model-based recommendation techniques have been developed. These techniques analyze the user--item matrix to discover relations between the different items and use these relations to compute the list of recommendations. In this article, we present one such class of model-based recommendation algorithms that first determines the similarities between the various items and then uses them to identify the set of items to be recommended. The key steps in this class of algorithms are (i) the method used to compute the similarity between the items, and (ii) the method used to combine these similarities in order to compute the similarity between a basket of items and a candidate recommender item.
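The two key steps just named for the item-based algorithms, (i) computing item-item similarities and (ii) combining them against a basket, can be sketched on a toy binary user-item matrix. Cosine similarity and plain sum-aggregation are assumed here; the paper studies several variants of both steps:

```python
import numpy as np

# Toy users x items purchase matrix.
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 0, 1]], dtype=float)

# Step (i): cosine similarity between item column vectors.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)

# Step (ii): score candidate items for a basket by summing similarities.
basket = [0, 1]                  # items already in the user's basket
scores = S[basket].sum(axis=0)
scores[basket] = -np.inf         # never re-recommend basket items
print("recommendation order:", np.argsort(-scores))
```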
Our experimental evaluation on eight real datasets shows that these item-based algorithms are up to two orders of magnitude faster than the traditional user-neighborhood based recommender systems and provide recommendations with comparable or better quality."} {"_id": "6bd87d13b5633495f93699e80220492035a92716", "title": "Dynamic Conversion Behavior at E-Commerce Sites", "text": "This paper develops a model of conversion behavior (i.e., converting store visits into purchases) that predicts each customer\u2019s probability of purchasing based on an observed history of visits and purchases. We offer an individual-level probability model that allows for different forms of customer heterogeneity in a very flexible manner. Specifically, we decompose an individual\u2019s conversion behavior into two components: one for accumulating visit effects and another for purchasing threshold effects. Each component is allowed to vary across households as well as over time. Visit effects capture the notion that store visits can play different roles in the purchasing process. For example, some visits are motivated by planned purchases, while others are associated with hedonic browsing (akin to window shopping); our model is able to accommodate these (and several other) types of visit-purchase relationships in a logical, parsimonious manner. The purchasing threshold captures the psychological resistance to online purchasing that may grow or shrink as a customer gains more experience with the purchasing process at a given website. We test different versions of the model that vary in the complexity of these two key components and also compare our general framework with popular alternatives such as logistic regression. We find that the proposed model offers excellent statistical properties, including its performance in a holdout validation sample, and also provides useful managerial diagnostics about the patterns underlying online buyer behavior."} {"_id": "92cc12f272ff55795c29cd97dc8ee17a5554308e", "title": "Content-Based, Collaborative Recommendation", "text": "The problem of recommending items from some fixed database has been studied extensively, and two main paradigms have emerged. In content-based recommendation one tries to recommend items similar to those a given user has liked in the past, whereas in collaborative recommendation one identifies users whose tastes are similar to those of the given user and recommends items they have liked. Our approach in Fab has been to combine these two methods. Here, we explain how a hybrid system can incorporate the advantages of both methods while inheriting the disadvantages of neither. In addition to what one might call the \u201cgeneric advantages\u201d inherent in any hybrid system, the particular design of the Fab architecture brings two additional benefits. First, two scaling problems common to all Web services are addressed\u2014an increasing number of users and an increasing number of documents. Second, the system automatically identifies emergent communities of interest in the user population, enabling enhanced group awareness and communications. Here we describe the two approaches for content-based and collaborative recommendation, explain how a hybrid system can be created, and then describe Fab, an implementation of such a system. For more details on both the implemented architecture and the experimental design the reader is referred to [1].
The content-based approach to recommendation has its roots in the information retrieval (IR) community, and employs many of the same techniques. Text documents are recommended based on a comparison between their content and a user profile. Data"} {"_id": "a93bee60173411d5cf5d917ecd7355ab7cfee40e", "title": "Splitting approaches for context-aware recommendation: an empirical study", "text": "User and item splitting are well-known approaches to context-aware recommendation. To perform item splitting, multiple copies of an item are created based on the contexts in which it has been rated. User splitting performs a similar treatment with respect to users. The combination of user and item splitting, UI splitting, splits both users and items in the data set to boost context-aware recommendations. In this paper, we perform an empirical comparison of these three context-aware splitting approaches (CASA) on multiple data sets, and we also compare them with other popular context-aware collaborative filtering (CACF) algorithms. To evaluate those algorithms, we propose new evaluation metrics specific to contextual recommendation. The experiments reveal that CASA typically outperform other popular CACF algorithms, but there is no clear winner among the three splitting approaches. However, we do find some underlying patterns or clues for the application of CASA."} {"_id": "3828a3f60ca9477d3e130f1fd7dfc9d600ef72c8", "title": "Asynchronous Distributed Semi-Stochastic Gradient Optimization", "text": "Lemma 2: At a specific stage $s$ and for a worker $p$, let $g^*_i = \nabla f_i(w) - \nabla f_i(\tilde{w}) + \nabla F(\tilde{w})$, $i \in D_p$; then the following inequality holds: $E_p[E_i \|g^*_i\|^2] \le 2L[F(\tilde{w}) - F(w^*)]$. Proof: $E_i \|g^*_i\|^2 = E_i \|\nabla f_i(\tilde{w}) - \nabla f_i(w) - \nabla F(\tilde{w})\|^2 = E_i \|\nabla f_i(\tilde{w}) - \nabla f_i(w)\|^2 - 2 E_i \langle \nabla F(\tilde{w}), \nabla f_i(\tilde{w}) - \nabla f_i(w) \rangle + \|\nabla F(\tilde{w})\|^2 \le 2L (F_p(\tilde{w}) - F_p(w) - \langle \nabla F_p(w), \tilde{w} - w^* \rangle) - 2 \langle \nabla F(\tilde{w}), \nabla F_p(\tilde{w}) - \nabla F_p(w) \rangle + \|\nabla F(\tilde{w})\|^2$, where the inequality is the result of applying Lemma 1. Further, taking expectation on both sides w.r.t. worker $p$, we get"} {"_id": "986094a13766cfe7d751d5a47553dfe3ff196186", "title": "Teaching-to-Learn and Learning-to-Teach for Multi-label Propagation", "text": "Multi-label propagation aims to transmit the multi-label information from labeled examples to unlabeled examples based on a weighted graph. Existing methods ignore the specific propagation difficulty of different unlabeled examples and conduct the propagation in an imperfect sequence, leading to the error-prone classification of some difficult examples with uncertain labels. To address this problem, this paper associates each possible label with a \u201cteacher\u201d, and proposes a \u201cMulti-Label Teaching-to-Learn and Learning-to-Teach\u201d (ML-TLLT) algorithm, so that the entire propagation process is guided by the teachers and manipulated from simple examples to more difficult ones. In the teaching-to-learn step, the teachers select the simplest examples for the current propagation by investigating both the definitiveness of each possible label of the unlabeled examples, and the dependencies between labels revealed by the labeled examples.
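Item splitting, as used in the splitting-approaches abstract above, is mechanically simple: every (item, context) combination becomes a distinct item id, after which any ordinary context-free collaborative filtering model can be trained on the transformed ratings. A minimal sketch with hypothetical ids and context values (user splitting treats user ids the same way, and UI splitting applies both):

```python
# Each rating is (user, item, context, rating); ids and contexts are invented.
ratings = [
    ("u1", "movie7", "weekend", 5),
    ("u1", "movie7", "weekday", 2),
    ("u2", "movie7", "weekend", 4),
]

# Item splitting: fuse the context into the item id.
split_ratings = [(u, f"{i}@{c}", r) for (u, i, c, r) in ratings]
print(split_ratings)
# [('u1', 'movie7@weekend', 5), ('u1', 'movie7@weekday', 2),
#  ('u2', 'movie7@weekend', 4)]
```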
In the learning-to-teach step, the teachers reversely learn from the learner\u2019s feedback to properly select the simplest examples for the next propagation. Thorough empirical studies show that due to the optimized propagation sequence designed by the teachers, ML-TLLT yields generally better performance than seven state-of-the-art methods on the typical multi-label benchmark datasets."} {"_id": "d47f5143e566db54100a2546c8869465c57251f4", "title": "A 1-V, 16.9 ppm/$^{\\circ}$C, 250 nA Switched-Capacitor CMOS Voltage Reference", "text": "An ultra low-power, precise voltage reference using a switched-capacitor technique in 0.35-\u03bcm CMOS is presented in this paper. The temperature dependence of the carrier mobility and channel length modulation effect can be effectively minimized by using 3.3 and 5 V N-type transistors to operate in the saturation and subthreshold regions, respectively. In place of resistors, a precise reference voltage with flexible trimming capability is achieved by using capacitors. When the supply voltage is 1 V and the temperature is 80\u00b0C, the supply current is 250 nA. The line sensitivity is 0.76%/V; the PSRR is -41 dB at 100 Hz and -17 dB at 10 MHz. Moreover, the occupied die area is 0.049 mm2."} {"_id": "9f458230b385d2fb0124e59663059d41e10686ac", "title": "Morphological Segmentation with Window LSTM Neural Networks", "text": "Morphological segmentation, which aims to break words into meaning-bearing morphemes, is an important task in natural language processing. Most previous work relies heavily on linguistic preprocessing. In this paper, we instead propose novel neural network architectures that learn the structure of input sequences directly from raw input words and are subsequently able to predict morphological boundaries. Our architectures rely on Long Short Term Memory (LSTM) units to accomplish this, but exploit windows of characters to capture more contextual information. Experiments on multiple languages confirm the effectiveness of our models on this task."} {"_id": "01a8909330cb5d4cc37ef50d03467b1974d6c9cf", "title": "An overview of 3D object grasp synthesis algorithms", "text": "This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing papers focus on reviewing the mechanics of grasping and the finger-object contact interactions [7] or robot hand design and their control [1]. Robot grasp synthesis algorithms have been reviewed in [63], but since then important progress has been made toward applying learning techniques to the grasping problem. This overview focuses on analytical as well as empirical grasp synthesis approaches."} {"_id": "c200b0b6a80ad58f8b0fd7b461ed75d54fa0ae6d", "title": "Assessing the effects of service quality and justice on customer satisfaction and the continuance intention of mobile value-added services: An empirical test of a multidimensional model", "text": "Understanding the antecedents and consequences of customer satisfaction in the mobile communications market is important. This study explores the effects of service quality and justice on customer satisfaction, which, in turn, affects continuance intention of mobile services. Service quality, justice and customer satisfaction were measured by multiple dimensions.
A research model was developed based on this multidimensional approach and was empirically examined with data collected from about one thousand users of mobile value-added services in China. Results show that all three dimensions of service quality (interaction quality, environment quality and outcome quality) have significant and positive effects on cumulative satisfaction while only one dimension of service quality (interaction quality) has a significant and positive effect on transaction-specific satisfaction. Besides procedural justice, the other two dimensions of justice (distributive justice and interactional justice) significantly influence both transaction-specific satisfaction and cumulative satisfaction. Furthermore, both types of customer satisfaction have significant and positive effects on continuance intention. Implications for research and practice are discussed. With the rapid advancements of mobile network technologies, provision of various kinds of value-added services by mobile service providers is on the rise around the world. As the market becomes more and more mature, value-added services become more homogeneous and the competition for acquiring new customers and retaining existing customers becomes more intense. In this environment, customer satisfaction is a critical factor for mobile service providers to maintain or improve their market share and profitability. Prior studies have found that customer satisfaction contributes to a firm's profitability and customer retention [33,35]. In a reorganization of the communications industry in China between 2008 and 2009, the original six mobile network operators were reduced to three. Meanwhile, the availability of third-generation telecommunications technologies suggested that more mobile value-added services would be provided to the customers. A recent value-added services survey report on mobile communications conducted by Analysys in 2010 predicted that the competition among existing mobile network operators would become fierce after the reorganization of the industry and the introduction of third-generation services. Thus, for these mobile network operators, in order to retain customers, enhancing customer satisfaction is an urgent task to tackle. Moreover, as new mobile value-added services are released, service providers need to focus on whether these new services appeal to customers and on the willingness of customers to continue to use the services. Therefore, understanding the \u2026"} {"_id": "84a60e1b5ec7aa08059bb0f67e81591a0c5753fc", "title": "A Low-Noise Amplifier With Tunable Interference Rejection for 3.1- to 10.6-GHz UWB Systems", "text": "An ultrawideband common-gate low noise amplifier with tunable interference rejection is presented. The proposed LNA embeds a tunable active notch filter to eliminate the interferer at 5-GHz WLAN and employs a common-gate input stage and dual-resonant loads for wideband implementation. This LNA has been fabricated in a 0.18-\u00bfm CMOS process. The measured maximum power gain is 13.2 dB and noise figure is 4.5-6.2 dB with bandwidth of 3.1-10.6 GHz. The interferer rejection is 8.2 dB compared to the maximum gain and 7.6 dB noise figure at 5.2 GHz, respectively. The measured input P1dB is -11 dBm at 10.3 GHz.
It consumes 12.8 mA from a 1.8-V supply voltage."} {"_id": "b2232ccfec21669834f420f106aab6fea3988d69", "title": "The effect of small disjuncts and class distribution on decision tree learning", "text": ""} {"_id": "857d94ececa3cf5bbb792097212b1fc13b6a52e4", "title": "A strategic framework for good governance through e-governance optimization: A case study of Punjab in India", "text": "Purpose \u2013 The purpose of this paper is to attempt to find out whether the new information and communication technologies can make a significant contribution to the achievement of the objective of good governance. The study identifies the factors responsible for creating a conducive environment for effective and successful implementation of e-governance for achieving good governance and the possible barriers in the implementation of e-governance applications. Based on the comprehensive analysis it proposes a strategic policy framework for good governance in Punjab in India. Punjab is a developed state ranked amongst some of the top states of India in terms of per capita income and infrastructure. Design/methodology/approach \u2013 The study designs a framework for good governance by getting the shared vision of all stakeholders about providing good quality administration and governance in the Indian context through \u201cParticipatory Stakeholder Assessment\u201d. The study uses descriptive statistics, perception gap, ANOVA and factor analysis to identify the key factors for good governance, the priorities of public regarding e-services, the policy makers\u2019 perspectives regarding good governance to be achieved through e-governance. Findings \u2013 The study captures the good governance factors mainly contributing to the shared vision. The study further highlights that most Indian citizens in Punjab today believe in the power of information and communication technology (ICT) and want to access e-governance services. Major factors causing pain and harassment to the citizens in getting the services from various government departments include: unreasonable delay, multiple visits even for small services; poor public infrastructure and its maintenance in government offices. In the understanding of citizens the most important factors for the success of e-governance services are: overall convenience and experience of the citizens; reduction in the corruption levels by improvement in the transparency of government functioning and awareness about the availability of service amongst general masses. Originality/value \u2013 The present study has evolved a shared vision of all stakeholders on good governance in the Indian context.
It has opened up many new possibilities for governments, not only to use ICTs and help them in prioritizing the governance areas for focused attention, but also to understand the mindset of the modern citizenry, their priorities and what they consider to be good governance. The study will help policy makers focus on these factors for enhancing speedy delivery of prioritized services and promote good governance in developing countries similar to India."} {"_id": "046c6c8e15d9b9ecd73b5d2ce125db20bbcdec4b", "title": "Deanonymizing mobility traces: using social network as a side-channel", "text": "Location-based services, which employ data from smartphones, vehicles, etc., are growing in popularity. To reduce the threat that shared location data poses to a user's privacy, some services anonymize or obfuscate this data. In this paper, we show these methods can be effectively defeated: a set of location traces can be deanonymized given an easily obtained social network graph. The key idea of our approach is that a user may be identified by those she meets: a "contact graph" identifying meetings between anonymized users in a set of traces can be structurally correlated with a social network graph, thereby identifying anonymized users. We demonstrate the effectiveness of our approach using three real world datasets: University of St Andrews mobility trace and social network (27 nodes each), SmallBlue contact trace and Facebook social network (125 nodes), and Infocom 2006 bluetooth contact traces and conference attendees' DBLP social network (78 nodes). Our experiments show that 80% of users are identified precisely, while only 8% are identified incorrectly, with the remainder mapped to a small set of users."} {"_id": "4bbc651a3f7debf39c8e9fa1d7877c5898761b4a", "title": "A topology control approach for utilizing multiple channels in multi-radio wireless mesh networks", "text": "We consider the channel assignment problem in a multi-radio wireless mesh network that involves assigning channels to radio interfaces for achieving efficient channel utilization. We propose the notion of a traffic-independent base channel assignment to ease coordination and enable dynamic, efficient and flexible channel assignment. We present a novel formulation of the base channel assignment as a topology control problem, and show that the resulting optimization problem is NP-complete. We then develop a new greedy heuristic channel assignment algorithm (termed CLICA) for finding connected, low interference topologies by utilizing multiple channels. Our extensive simulation studies show that the proposed CLICA algorithm can provide large reduction in interference (even with a small number of radios per node), which in turn leads to significant gains in both link layer and multihop performance in 802.11-based multi-radio mesh networks."} {"_id": "3c6e10d69ae189a824d9f61bc27066228e97cc1a", "title": "Pattern frequency representation for time series classification", "text": "The paper presents a new method for data transformation. The obtained data format enhances the efficiency of time series classification. The transformation is realized in four steps, namely: time series segmentation, segment feature representation, segment binning and pattern frequency representation. The method is independent of the nature of the data, and it works well with temporal data from diverse areas.
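The four-step transformation just described can be sketched end to end; fixed-length windows, a single per-segment mean feature, and quartile bins are illustrative choices here, not settings prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 12, 240)) + rng.normal(0, 0.1, 240)

# 1) segmentation: fixed-length windows (an illustrative choice)
segments = series.reshape(-1, 8)
# 2) per-segment feature: here simply the segment mean
feats = segments.mean(axis=1)
# 3) binning: quantile bins turn each segment into a discrete symbol
bins = np.quantile(feats, [0.25, 0.5, 0.75])
symbols = np.digitize(feats, bins)
# 4) pattern frequency representation: a fixed-size histogram of symbols
hist = np.bincount(symbols, minlength=4) / symbols.size
print(hist)   # feature vector usable by any standard classifier
```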
Its effectiveness is demonstrated by classifying different real datasets, and the results are compared with those of other classification methods."} {"_id": "12bb048034a2842b8197580606ea51021f154962", "title": "Unsupervised document classification using sequential information maximization", "text": "We present a novel sequential clustering algorithm which is motivated by the Information Bottleneck (IB) method. In contrast to the agglomerative IB algorithm, the new sequential (sIB) approach is guaranteed to converge to a local maximum of the information, as required by the original IB principle. Moreover, the time and space complexity are significantly improved, being typically linear in the data size. We apply this algorithm to unsupervised document classification. In our evaluation, on small and medium size corpora, the sIB is found to be consistently superior to all the other clustering methods we examine, typically by a significant margin. Moreover, the sIB results are comparable to those obtained by a supervised Naive Bayes classifier. Finally, we propose a simple procedure for trading cluster's recall to gain higher precision, and show how this approach can extract clusters which match the existing topics of the corpus almost perfectly."} {"_id": "4e03cadf3095f8779eaf878f0594e56ad88788e2", "title": "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon", "text": ""} {"_id": "76295bf84f26477457bd78250d0d9f6f9bb3de12", "title": "Contextual RNN-GANs for Abstract Reasoning Diagram Generation", "text": ""} {"_id": "79d3792a6aea9934bb46c11dfec287d1775d6c9b", "title": "An Energy-Efficient and Wide-Range Voltage Level Shifter With Dual Current Mirror", "text": "This brief presents an energy-efficient level shifter (LS) to convert a subthreshold input signal to an above-threshold output signal. In order to achieve a wide range of conversion, a dual current mirror (CM) structure consisting of a virtual CM and an auxiliary CM is proposed. The circuit has been implemented and optimized in SMIC 40-nm technology. The postlayout simulation demonstrates that the new LS can achieve voltage conversion from 0.2 to 1.1 V.
Moreover, at the target voltage of 0.3 V, the proposed LS exhibits an average propagation delay of 66.48 ns, a total energy per transition of 72.31 fJ, and a static power consumption of 88.4 pW, demonstrating an improvement of $6.0\\times $ , $13.1\\times $ , and $89.0\\times $ , respectively, when compared with Wilson CM-based LS."} {"_id": "4d392448890a55d19c4212a9dd137ec2c2335a79", "title": "A 60 V Auto-Zero and Chopper Operational Amplifier With 800 kHz Interleaved Clocks and Input Bias Current Trimming", "text": "An auto-zero and chopper operational amplifier with a 4.5-60 V supply voltage range is realized, using a 0.18 \u03bcm CMOS process augmented by 5 V CMOS and 60 V DMOS transistors. It achieves a maximum offset voltage drift of 0.02 \u03bcV/\u00b0C, a minimum CMRR of 145 dB, a noise PSD of 6.8 nV/\u221aHz, and a 3.1 MHz unity gain bandwidth, while dissipating 840 \u03bcA of current. Up-modulated chopper ripple is suppressed by auto-zeroing. Furthermore, glitches from the charge injection of the input switches are mitigated by employing six parallel input stages with 800 kHz interleaved clocks. This moves the majority of the glitch energy up to 4.8 MHz, while leaving little energy at 800 kHz. As a result, the requirements on an external low-pass glitch filter are relaxed, and a wider usable signal bandwidth can be obtained. Maximum input bias current due to charge injection mismatch is reduced from 1.5 nA to 150 pA by post production trimming with an on-chip charge mismatch compensation circuit."} {"_id": "95b55614898b172b58087d63f0d52bee463948a0", "title": "Laser hair removal.", "text": "The extended theory of selective photothermolysis enables the laser surgeon to target and destroy hair follicles, thereby leading to hair removal. Today, laser hair removal (LHR) is the most commonly requested cosmetic procedure in the world and is routinely performed by dermatologists, other physicians, and non-physician personnel with variable efficacy. The ideal candidate for LHR is fair skinned with dark terminal hair; however, LHR can today be successfully performed in all skin types. Knowledge of hair follicle anatomy and physiology, proper patient selection and preoperative preparation, principles of laser safety, familiarity with the various laser/light devices, and a thorough understanding of laser-tissue interactions are vital to optimizing treatment efficacy while minimizing complications and side effects."} {"_id": "dd7d87105c49a3a93b863d82e6d4a5ab3eb024f3", "title": "Psychological Well-Being in Adult Life", "text": ""} {"_id": "b069df19dec08644b9534a6229cec4199d69828e", "title": "ENN: Extended Nearest Neighbor Method for Pattern Recognition [Research Frontier]", "text": "This article introduces a new supervised classification method - the extended nearest neighbor (ENN) - that predicts input patterns according to the maximum gain of intra-class coherence.
Unlike the classic k-nearest neighbor (KNN) method, in which only the nearest neighbors of a test sample are used to estimate a group membership, the ENN method makes a prediction in a "two-way communication" style: it considers not only who are the nearest neighbors of the test sample, but also who consider the test sample as their nearest neighbors. By exploiting the generalized class-wise statistics from all training data by iteratively assuming all the possible class memberships of a test sample, the ENN is able to learn from the global distribution, therefore improving pattern recognition performance and providing a powerful technique for a wide range of data analysis applications."} {"_id": "50ed709761f57895b50346a8249814a6f66f6c89", "title": "Fast and Accurate Template Matching Using Pixel Rearrangement on the GPU", "text": "A GPU (Graphics Processing Unit) is a specialized processor for graphics processing. GPUs have the ability to perform high-speed parallel processing using their many processing cores. To utilize the powerful computing ability, GPUs are widely used for general purpose processing. The main contribution of this paper is to show a new template matching algorithm using pixel rearrangement. Template Matching is a technique for finding small parts of an image which match a template image. The feature of our proposed algorithm is that using pixel rearrangement, multiple low-resolution images are generated and template matching for the low-resolution images is performed to reduce the computing time. Also, we implemented our algorithm on a GPU system. The experimental results show that, for an input image with size of 4096 $\times$ 4096 and a template image with size of 256 $\times$ 256, our implementation can achieve a speedup factor of approximately 78 times over the conventional sequential implementation."} {"_id": "5676987f4b421f6ef9380d889d32b36e1e2179b6", "title": "Visual Digital Signature Scheme: A New Approach", "text": "A digital signature is an important public-key primitive that performs the function of conventional handwritten signatures for entity authentication, data integrity, and non-repudiation, especially within the electronic commerce environment. Currently, most conventional digital signature schemes are based on hard mathematical problems. These mathematical algorithms require computers to perform the heavy and complex computations to generate and verify the keys and signatures. In 1995, Naor and Shamir proposed visual cryptography (VC) for binary images. VC has high security and requires simple computations. The purpose of this paper is to provide an alternative to the current digital signature technology. In this paper, we introduce a new digital signature scheme based on the concept of a non-expansion visual cryptography. A visual digital signature scheme is a method to enable visual verification of the authenticity of an image in an insecure environment without the need to perform any complex computations. Our proposed scheme generates visual shares and manipulates them using simple Boolean OR operations rather than generating and computing large and long random integer values as in the conventional digital signature schemes currently in use."} {"_id": "05df100ebcf58826324641a20fd14eb838439a6c", "title": "Analyzing Stability in Wide-Area Network Performance", "text": "The Internet is a very large scale, complex, dynamical system that is hard to model and analyze.
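A brute-force sketch of the ENN rule summarized above: each candidate label for the test sample is tried in turn, and the label maximizing the overall class-wise kNN coherence is kept (the pre-insertion coherence is identical for all candidates, so maximizing coherence equals maximizing the gain). This is an illustrative O(n^2) rendering, not the paper's formulation in full generality:

```python
import numpy as np

def enn_predict(X, y, z, k=3):
    # Try each candidate label for z; keep the one maximizing the summed
    # class-wise coherence T_c = fraction of class-c samples whose k nearest
    # neighbours (in the augmented set) also carry label c.
    classes = np.unique(y)

    def coherence(Xa, ya):
        total = 0.0
        for c in classes:
            idx = np.where(ya == c)[0]
            hits = 0
            for i in idx:
                d = np.linalg.norm(Xa - Xa[i], axis=1)
                d[i] = np.inf                 # exclude the sample itself
                nn = np.argsort(d)[:k]
                hits += np.sum(ya[nn] == c)
            total += hits / (len(idx) * k)
        return total

    scores = {c: coherence(np.vstack([X, z]), np.append(y, c)) for c in classes}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(enn_predict(X, y, np.array([3.5, 3.5])))   # expected: 1
```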
In this paper, we develop and analyze statistical models for the observed end-to-end network performance based on extensive packet-level traces (consisting of approximately 1.5 billion packets) collected from the primary Web site for the Atlanta Summer Olympic Games in 1996. We find that observed mean throughputs for these transfers measured over 60 million complete connections vary widely as a function of end-host location and time of day, confirming that the Internet is characterized by a large degree of heterogeneity. Despite this heterogeneity, we find (using best-fit linear regression techniques) that we can express the throughput for Web transfers to most hosts as a random variable with a log-normal distribution. Then, using observed throughput as the control parameter, we attempt to quantify the spatial (statistical similarity across neighboring hosts) and temporal (persistence over time) stability of network performance. We find that Internet hosts that are close to each other often have almost identical probability distributions of throughput. We also find that throughputs to individual hosts often do not change appreciably for several minutes. Overall, these results indicate that there is promise in protocol mechanisms that cache and share network characteristics both within a single host and amongst nearby hosts."} {"_id": "070096ce36bba240b39b5ddb7bc6071311478843", "title": "Learning to Grade Short Answer Questions using Semantic Similarity Measures and Dependency Graph Alignments", "text": "In this work we address the task of computer-assisted assessment of short student answers. We combine several graph alignment features with lexical semantic similarity measures using machine learning techniques and show that the student answers can be more accurately graded than if the semantic measures were used in isolation. We also present a first attempt to align the dependency graphs of the student and the instructor answers in order to make use of a structural component in the automatic grading of student answers."} {"_id": "21dd2790b76a57b42191b19a54505837f3969141", "title": "Tuned Models of Peer Assessment in MOOCs", "text": "In massive open-access online courses (MOOCs), peer grading serves as a critical tool for scaling the grading of complex, open-ended assignments to courses with tens or hundreds of thousands of students. But despite promising initial trials, it does not always deliver accurate results compared to human experts. In this paper, we develop algorithms for estimating and correcting for grader biases and reliabilities, showing significant improvement in peer grading accuracy on real data with 63,199 peer grades from Coursera\u2019s HCI course offerings \u2014 the largest peer grading networks analysed to date. We relate grader biases and reliabilities to other student factors such as engagement and performance, as well as commenting style. We also show that our model can lead to more intelligent assignment of graders to gradees."} {"_id": "3b073bf632aa91628d134a828911ff82706b8a32", "title": "The critical importance of retrieval for learning.", "text": "Learning is often considered complete when a student can produce the correct answer to a question. In our research, students in one condition learned foreign language vocabulary words in the standard paradigm of repeated study-test trials.
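Returning to the peer-assessment record above: the bias-correction idea lends itself to a small sketch. The real model is Bayesian and also estimates per-grader reliabilities; the toy version below only models observed grade = true score + grader bias + noise, fit by alternating averaging, and is our simplification rather than the paper's algorithm.

```python
import numpy as np

def fit_bias_model(grades, num_iters=50):
    """grades: list of (grader_id, gradee_id, grade) triples."""
    graders = {g for g, _, _ in grades}
    gradees = {u for _, u, _ in grades}
    bias = {g: 0.0 for g in graders}
    score = {u: np.mean([x for _, uu, x in grades if uu == u]) for u in gradees}
    for _ in range(num_iters):
        for g in graders:   # grader bias: mean residual over that grader's gradings
            bias[g] = float(np.mean([x - score[u] for gg, u, x in grades if gg == g]))
        for u in gradees:   # gradee score: mean bias-corrected grade received
            score[u] = float(np.mean([x - bias[g] for g, uu, x in grades if uu == u]))
    return bias, score

# usage: grader "a" systematically grades 1 point high
bias, score = fit_bias_model([("a", "u1", 9), ("b", "u1", 8), ("a", "u2", 6), ("b", "u2", 5)])
```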
In three other conditions, once a student had correctly produced the vocabulary item, it was repeatedly studied but dropped from further testing, repeatedly tested but dropped from further study, or dropped from both study and test. Repeated studying after learning had no effect on delayed recall, but repeated testing produced a large positive effect. In addition, students' predictions of their performance were uncorrelated with actual performance. The results demonstrate the critical role of retrieval practice in consolidating learning and show that even university students seem unaware of this fact."} {"_id": "50fcb0e5f921357b2ec96be9a75bfd3169e8f8da", "title": "Personalized Online Education - A Crowdsourcing Challenge", "text": "Interest in online education is surging, as dramatized by the success of Khan Academy and recent Stanford online courses, but the technology for online education is in its infancy. Crowdsourcing mechanisms will likely be essential in order to reach the full potential of this medium. This paper sketches some of the challenges and directions we hope HCOMP researchers will address."} {"_id": "7545f90299a10dae1968681f6bd268b9b5ab2c37", "title": "Powergrading: a Clustering Approach to Amplify Human Effort for Short Answer Grading", "text": "We introduce a new approach to the machine-assisted grading of short answer questions. We follow past work in automated grading by first training a similarity metric between student responses, but then go on to use this metric to group responses into clusters and subclusters. The resulting groupings allow teachers to grade multiple responses with a single action, provide rich feedback to groups of similar answers, and discover modalities of misunderstanding among students; we refer to this amplification of grader effort as \u201cpowergrading.\u201d We develop the means to further reduce teacher effort by automatically performing actions when an answer key is available. We show results in terms of grading progress with a small \u201cbudget\u201d of human actions, both from our method and an LDA-based approach, on a test corpus of 10 questions answered by 698 respondents."} {"_id": "e638bad00cbe2467dacc1b69876c21b776ad8d3b", "title": "EARLY DEVELOPMENTS OF A PARALLELLY ACTUATED HUMANOID, SAFFIR", "text": "This paper presents the design of our new 33-degree-of-freedom, full-size humanoid robot, SAFFiR (Shipboard Autonomous Fire Fighting Robot). The goal of this research project is to realize a high-performance mixed force and position controlled robot with parallel actuation. The robot has two 6 DOF legs and arms, a waist, neck, and 3 DOF hands/fingers. The design is characterized by a central lightweight skeleton actuated with modular ballscrew-driven, force-controllable linear actuators arranged in a parallel fashion around the joints. Sensory feedback on board the robot includes an inertial measurement unit, force and position output of each actuator, as well as 6-axis force/torque measurements from the feet. The lower body of the robot has been fabricated and a rudimentary walking algorithm implemented while fabrication of the upper body is being completed.
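The powergrading pipeline summarized above reduces to a cluster-then-grade loop. Here is a minimal sketch using scikit-learn, with a plain TF-IDF representation standing in for the paper's trained similarity metric (an assumption on our part).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_answers(answers, num_clusters=10):
    """Group short answers so a teacher can grade each group with one action."""
    X = TfidfVectorizer(stop_words="english").fit_transform(answers)
    labels = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(X)
    groups = {}
    for answer, label in zip(answers, labels):
        groups.setdefault(label, []).append(answer)
    return groups  # one grading action per group, propagated to all members
```

A single accept/reject on a cluster then amplifies to every response it contains, which is the "powergrading" effect the abstract describes.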
Preliminary walking experiments show that parallel actuation successfully minimizes the loads through individual actuators."} {"_id": "526f6208e5d0c9ef3eaa491b6a650110876ab574", "title": "QuizRDF: search technology for the semantic Web", "text": "An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as "low threshold, high ceiling" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available."} {"_id": "cb615377f990c0b43cfa6fae3755d5a0da418b2f", "title": "Squadron: Incentivizing Quality-Aware Mission-Driven Crowd Sensing", "text": "Recent years have witnessed the success of mobile crowd sensing systems, which outsource sensory data collection to the public crowd equipped with various mobile devices in a wide spectrum of civilian applications. We envision that crowd sensing could also be very useful in a whole host of mission-driven scenarios, such as peacekeeping operations, non-combatant evacuations, and humanitarian missions. However, the power of crowd sensing cannot be fully unleashed in mission-driven crowd sensing (MiCS) systems unless workers are effectively incentivized to participate. Therefore, in this paper, taking into consideration workers' diverse quality of information (QoI), we propose Squadron, a quality-aware incentive mechanism for MiCS systems. Squadron adopts the reverse auction framework. It approximately minimizes the platform's total payment for worker recruiting in a computationally efficient manner, and recruits workers who can potentially provide high-quality data. Furthermore, it also satisfies the desirable properties of truthfulness and individual rationality. Through rigorous theoretical analysis, as well as extensive simulations, we validate these desirable properties of Squadron.
Therefore, we propose Expected Local Improvement (ELI), an automated method which selects states at which to query humans for a new action. We evaluate ELI on a variety of simulated domains adapted from the literature, including domains with over a million actions and domains where the simulated experts change over time. We find that ELI demonstrates excellent empirical performance, even in settings where the synthetic \u201cexperts\u201d are quite poor."} {"_id": "f6bb1c45e63783785e97ef99b4fe718847d10261", "title": "Bad Subsequences of Well-Known Linear Congruential Pseudo-Random Number Generators", "text": "We present a spectral test analysis of full-period subsequences with small step sizes generated by well-known linear congruential pseudorandom number generators. Subsequences may occur in certain simulation problems or as a method to get parallel streams of pseudorandom numbers. Applying the spectral test, it is possible to find bad subsequences with small step sizes for almost all linear pseudorandom number generators currently in use."} {"_id": "fd4f24af30d64ca6375016249dc145b1f114ddc9", "title": "An Introduction to Temporal Graphs: An Algorithmic Perspective", "text": "A temporal graph is, informally speaking, a graph that changes with time. When time is discrete and only the relationships between the participating entities may change and not the entities themselves, a temporal graph may be viewed as a sequence $G_1, G_2, \\ldots, G_l$ of static graphs over the same (static) set of nodes $V$. Though static graphs have been extensively studied, for their temporal generalization we are still far from having a concrete set of structural and algorithmic principles. Recent research shows that many graph properties and problems become radically different and usually substantially more difficult when an extra time dimension is added to them. Moreover, there is already a rich and rapidly growing set of modern systems and applications that can be naturally modeled and studied via temporal graphs. This further motivates the need for the development of a temporal extension of graph theory. We survey here recent results on temporal graphs and temporal graph problems that have appeared in the Computer Science community."} {"_id": "c4622d4a8d582c887904a9d0f2714a1dae794c1b", "title": "Analytical Solution of Air-Gap Field in Permanent-Magnet Motors Taking Into Account the Effect of Pole Transition Over Slots", "text": "We present an analytical method to study magnetic fields in permanent-magnet brushless motors, taking into consideration the effect of stator slotting. We concentrate particularly on the instantaneous field distribution in the slot regions where the magnet pole transition passes over the slot opening. The accuracy of the flux density vector distribution in such regions plays a critical role in the prediction of the magnetic forces, i.e., the cogging torque and unbalanced magnetic pull. However, the currently available analytical solutions for calculating air-gap fields in permanent magnet motors can estimate only the distribution of the flux density component in the radial direction. Magnetic field and forces computed by the new analytical method agree well with those obtained by the finite-element method.
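The sequence-of-snapshots definition of a temporal graph given above maps directly onto code. Below is a small sketch (our own, with a deliberately simple journey model in which one edge may be crossed per time step) of a time-respecting reachability check over undirected snapshots.

```python
def has_temporal_path(snapshots, src, dst):
    """snapshots: list of edge sets [(u, v), ...], one per discrete time step.
    Returns True if dst is reachable from src crossing at most one edge
    per time step, in snapshot order."""
    reached = {src}
    for edges in snapshots:
        step = {v for u, v in edges if u in reached}
        step |= {u for u, v in edges if v in reached}   # undirected edges
        reached |= step
        if dst in reached:
            return True
    return dst in reached

# G1 connects a-b, G2 connects b-c: a can reach c, but not in reverse order
print(has_temporal_path([{("a", "b")}, {("b", "c")}], "a", "c"))  # True
print(has_temporal_path([{("b", "c")}, {("a", "b")}], "a", "c"))  # False
```

The asymmetry in the usage example is exactly the kind of behavior that makes temporal graph problems "radically different" from their static counterparts.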
The analytical method provides a useful tool for design and optimization of permanent-magnet motors."} {"_id": "1e627fb686eaacaf66452f9ecad066a4311abfb4", "title": "Learning to learn with the informative vector machine", "text": "This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MT-IVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than random sub-sampling on an artificial dataset and more effective than the traditional IVM in a speaker-dependent phoneme recognition task."} {"_id": "7a36bcc25b605394c8a61d39b6e4187653aacf98", "title": "Test setup for multi-finger gripper control based on robot operating system (ROS)", "text": "This paper presents the concept for a test setup to prototype control algorithms for a multi-finger gripper. The human-robot interface has to provide enough degrees of freedom (DOF) to intuitively control an advanced gripper; to accomplish this task, a simple sensor glove equipped with flex and force sensors has been prepared for this project. The software architecture has to support both real hardware and simulation, as well as flexible communication standards; therefore, the ROS architecture was employed. The paper presents some preliminary results for using the sensor glove and a simulated model of the three-finger gripper."} {"_id": "a63c3f53584fd50e27ac0f2dcbe28c7361b5adff", "title": "Integrated Phased Array Systems in Silicon", "text": "Silicon offers a new set of possibilities and challenges for RF, microwave, and millimeter-wave applications. While the high cutoff frequencies of the SiGe heterojunction bipolar transistors and the ever-shrinking feature sizes of MOSFETs hold a lot of promise, new design techniques need to be devised to deal with the realities of these technologies, such as low breakdown voltages, lossy substrates, low-Q passives, long interconnect parasitics, and high-frequency coupling issues. As an example of complete system integration in silicon, this paper presents the first fully integrated 24-GHz eight-element phased array receiver in 0.18-\u03bcm silicon-germanium and the first fully integrated 24-GHz four-element phased array transmitter with integrated power amplifiers in 0.18-\u03bcm CMOS. The transmitter and receiver are capable of beam forming and can be used for communication, ranging, positioning, and sensing applications."} {"_id": "64da24aad2e99514ab26d093c19cebec07350099", "title": "Low cost Ka-band transmitter for CubeSat systems", "text": "CubeSat platforms grow increasingly popular in commercial ventures as alternative solutions for global Internet networks, deep space exploration, and aerospace research endeavors. Many technology companies and system engineers plan to implement small satellite systems as part of global Low Earth Orbit (LEO) inter-satellite constellations. High-performing, low-cost hardware is of key importance in driving these efforts.
This paper presents the heterodyne architecture and performance of a Ka-Band Integrated Transmitter Assembly (ITA) module, which could be implemented in nano/microsatellite or other satellite systems as a low-cost solution for high data rate space communication systems. The module converts a 0.9 to 1.1 GHz IF input signal to deliver linear transmission of +29 dBm in the 26.7 to 26.9 GHz frequency range with a built-in phase-locked oscillator, integrated transmitter, polarizer, and lens-corrected antenna."} {"_id": "ed202833d8c1f5c432c1e24abf3945c2e0ef91c5", "title": "Identifying Harm Events in Clinical Care through Medical Narratives", "text": "Preventable medical errors are estimated to be among the leading causes of injury and death in the United States. To prevent such errors, healthcare systems have implemented patient safety and incident reporting systems. These systems enable clinicians to report unsafe conditions and cases where patients have been harmed due to errors in medical care. These reports are narratives in natural language and, while they provide detailed information about the situation, it is non-trivial to perform large-scale analysis for identifying common causes of errors and harm to the patients. In this work, we present a method for identifying harm events in patient care and categorizing the harm event types based on their severity level. We show that our method, which is based on convolutional and recurrent networks with an attention mechanism, is able to significantly improve over existing methods on two large-scale datasets of patient reports."} {"_id": "891df42f3b1284e93128e5de23bcdf0b329700f4", "title": "Quality of life in the anxiety disorders: a meta-analytic review.", "text": "There has been significant interest in the impact of anxiety disorders on quality of life. In this meta-analytic review, we empirically evaluate differences in quality of life between patients with anxiety disorders and nonclinical controls. Thirty-two patient samples from 23 separate studies (N=2892) were included in the analysis. The results yielded a large effect size indicating poorer quality of life among anxiety disorder patients vs. controls and this effect was observed across all anxiety disorders. Compared to control samples, no anxiety disorder diagnosis was associated with significantly poorer overall quality of life than was any other anxiety disorder diagnosis. Examination of specific domains of QOL suggests that impairments may be particularly prominent among patients with post-traumatic stress disorder. QOL domains of mental health and social functioning were associated with the highest levels of impairment among anxiety disorder patients. These findings are discussed in the context of future research on the assessment of quality of life in the anxiety disorders."} {"_id": "56ca21e44120ee31dcc2c7fd963a3567a037ca6f", "title": "An Investigation into the Use of Common Libraries in Android Apps", "text": "The packaging model of Android apps requires the entire code necessary for the execution of an app to be shipped into one single apk file. Thus, an analysis of Android apps often visits code which is not part of the functionality delivered by the app. Such code is often contributed by the common libraries which are used pervasively by all apps. Unfortunately, Android analyses, e.g., for piggybacking detection and malware detection, can produce inaccurate results if they do not take into account the case of library code, which constitutes noise in app features.
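The library-noise problem just described is usually handled by filtering known common-library packages out of an app's code before feature extraction. A toy sketch of that filter follows; the package prefixes are illustrative placeholders, not the paper's harvested whitelist.

```python
# Hypothetical whitelist entries; a real one would come from a harvested
# common-library dataset like the 1,113-library set described below.
COMMON_LIBRARY_PREFIXES = (
    "com/google/ads/",
    "com/facebook/",
    "org/apache/http/",
)

def is_library_class(class_path: str) -> bool:
    return class_path.startswith(COMMON_LIBRARY_PREFIXES)

def app_only_classes(all_classes):
    """Keep only classes belonging to the app's own functionality."""
    return [c for c in all_classes if not is_library_class(c)]

print(app_only_classes(["com/example/app/Main", "com/facebook/SessionTracker"]))
# -> ['com/example/app/Main']
```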
Despite some efforts to investigate Android libraries, the momentum of Android research has not yet produced a complete set of common libraries to further support in-depth analysis of Android apps. In this paper, we leverage a dataset of about 1.5 million apps from Google Play to harvest potential common libraries, including advertisement libraries. With several steps of refinement, we finally collect by far the largest set of 1,113 libraries supporting common functionality and 240 libraries for advertisement. We use the dataset to investigate several aspects of Android libraries, including their popularity and their proportion in Android app code. Based on these datasets, we have further performed several empirical investigations to confirm the motivation behind our work."} {"_id": "6b13a159caebd098aa0f448ae24e541e58319a64", "title": "Core, animal reminder, and contamination disgust: Three kinds of disgust with distinct personality, behavioral, physiological, and clinical correlates", "text": "We examined the relationships between sensitivity to three kinds of disgust (core, animal-reminder, and contamination) and personality traits, behavioral avoidance, physiological responding, and anxiety disorder symptoms. Study 1 revealed that these disgusts are particularly associated with neuroticism and behavioral inhibition. Moreover, the three disgusts showed a theoretically consistent pattern of relations on four disgust-relevant behavioral avoidance tasks in Study 2. Similar results were found in Study 3 such that core disgust was significantly related to increased physiological responding during exposure to vomit, while animal-reminder disgust was specifically related to physiological responding during exposure to blood. Lastly, Study 4 revealed that each of the three disgusts showed a different pattern of relations with fear of contamination, fear of animals, and fear of blood\u2013injury-relevant stimuli. These findings provide support for the convergent and divergent validity of core, animal-reminder, and contamination disgust. These findings also highlight the possibility that the three kinds of disgust may manifest as a function of different psychological mechanisms (i.e., oral incorporation, mortality defense, disease avoidance) that may give rise to different clinical conditions. However, empirical examination of the mechanisms that underlie the three disgusts will require further refinement of the psychometric properties of the disgust scale."} {"_id": "38732356b452e098d30026d036622461c6e8a3f5", "title": "A primer on spatial modeling and analysis in wireless networks", "text": "The performance of wireless networks depends critically on their spatial configuration, because received signal power and interference are governed by the distances between numerous transmitters and receivers. This is particularly true in emerging network paradigms that may include femtocells, hotspots, relays, white space harvesters, and meshing approaches, which are often overlaid with traditional cellular networks. These heterogeneous approaches to providing high-capacity network access are characterized by randomly located nodes, irregularly deployed infrastructure, and uncertain spatial configurations due to factors like mobility and unplanned user-installed access points.
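To make this notion of spatial randomness concrete before the primer's argument continues, here is a toy simulation (our illustration, not from the primer itself) of transmitters scattered as a homogeneous Poisson point process and the resulting signal-to-interference ratio at the origin, assuming a pure power-law path loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_points(intensity, radius):
    """Homogeneous PPP on a disk: Poisson count, then uniform placement."""
    n = rng.poisson(intensity * np.pi * radius**2)
    r = radius * np.sqrt(rng.uniform(size=n))     # sqrt gives uniform area density
    theta = rng.uniform(0, 2 * np.pi, size=n)
    return r * np.cos(theta), r * np.sin(theta)

def sir_at_origin(alpha=4.0, intensity=1e-3, radius=500.0):
    x, y = poisson_points(intensity, radius)
    d = np.hypot(x, y)
    p = d ** (-alpha)                 # path-loss-only received powers
    serving = np.argmin(d)            # nearest transmitter serves the origin
    return p[serving] / (p.sum() - p[serving])

print(sir_at_origin())  # varies run to run: the point of stochastic geometry
```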
This major shift is just beginning, and it requires new design approaches that are robust to spatial randomness, just as wireless links have long been designed to be robust to fading. The objective of this article is to illustrate the power of spatial models and analytical techniques in the design of wireless networks, and to provide an entry-level tutorial."} {"_id": "72d078429890dcf213d5e959d21fb84adc99d4df", "title": "Digital Literacy: A Conceptual Framework for Survival Skills in the Digital Era", "text": "Digital literacy involves more than the mere ability to use software or operate a digital device; it includes a large variety of complex cognitive, motor, sociological, and emotional skills, which users need in order to function effectively in digital environments. The tasks required in this context include, for example, \u201creading\u201d instructions from graphical displays in user interfaces; using digital reproduction to create new, meaningful materials from existing ones; constructing knowledge from a nonlinear, hypertextual navigation; evaluating the quality and validity of information; and having a mature and realistic understanding of the \u201crules\u201d that prevail in cyberspace. This newly emerging concept of digital literacy may be used as a measure of the quality of learners\u2019 work in digital environments, and can provide scholars and developers with a more effective means of communication in designing better user-oriented environments. This article proposes a holistic, refined conceptual framework for digital literacy, which includes photo-visual literacy; reproduction literacy; branching literacy; information literacy; and socioemotional literacy."} {"_id": "5ed7b57af3976e635a08f4dc13bb2a2aca760dbf", "title": "A 0.003 mm$^{2}$ 10 b 240 MS/s 0.7 mW SAR ADC in 28 nm CMOS With Digital Error Correction and Correlated-Reversed Switching", "text": "This paper describes a single-channel, calibration-free Successive-Approximation-Register (SAR) ADC with a resolution of 10 bits at 240 MS/s. A DAC switching technique and an addition-only digital error correction technique based on the non-binary search are proposed to tackle the static and dynamic non-idealities attributed to capacitor mismatch and insufficient DAC settling. The conversion speed is enhanced, and the power and area of the DAC are also reduced by 40% as a result. In addition, a switching scheme lifting the input common mode of the comparator is proposed to further enhance the speed. Moreover, the comparator employs multiple feedback paths for an enhanced regeneration strength to alleviate the metastability problem. Occupying an active area of 0.003 mm$^{2}$ and dissipating 0.68 mW from a 1 V supply at 240 MS/s in 28 nm CMOS, the proposed design achieves an SNDR of 57 dB with low-frequency inputs and 53 dB at the Nyquist input. This corresponds to a conversion efficiency of 4.8 fJ/c.-s. and 7.8 fJ/c.-s., respectively. The DAC switching technique improves the INL and DNL from +1.15/-1.01 LSB and +0.92/-0.28 LSB to within +0.55/-0.45 LSB and +0.45/-0.23 LSB, respectively.
This ADC is at least 80% smaller and 32% more power efficient than reported state-of-the-art ADCs of similar resolutions and Nyquist bandwidths larger than 75 MHz."} {"_id": "39339e14ae221cd154354ec1d30d23c8681348c5", "title": "Adhesive capsulitis: review of imaging findings, pathophysiology, clinical presentation, and treatment options", "text": "Adhesive capsulitis, commonly referred to as \u201cfrozen shoulder,\u201d is a debilitating condition characterized by progressive pain and limited range of motion about the glenohumeral joint. It is a condition that typically affects middle-aged women, with some evidence for an association with endocrinological, rheumatological, and autoimmune disease states. Management tends to be conservative, as most cases resolve spontaneously, although a subset of patients progress to permanent disability. Conventional arthrographic findings include decreased capsular distension and volume of the axillary recess when compared with the normal glenohumeral joint, in spite of the fact that fluoroscopic visualization alone is rarely carried out today in favor of magnetic resonance imaging (MRI). MRI and MR arthrography (MRA) have, in recent years, allowed for the visualization of several characteristic signs seen with this condition, including thickening of the coracohumeral ligament, axillary pouch and rotator interval joint capsule, in addition to the obliteration of the subcoracoid fat triangle. Additional findings include T2 signal hyperintensity and post-contrast enhancement of the joint capsule. Similar changes are observable on ultrasound. However, the use of ultrasound is most clearly established for image-guided injection therapy. More aggressive therapies, including arthroscopic release and open capsulotomy, may be indicated for refractory disease, with arthroscopic procedures favored because of their less invasive nature and relatively high success rate."} {"_id": "21d6baca37dbcb35cf263dd93ce306f6013a1ff5", "title": "Issues in building general letter to sound rules", "text": "In general text-to-speech systems, it is not possible to guarantee that a lexicon will contain all words found in a text; therefore, some system for predicting pronunciation from the word itself is necessary. Here we present a general framework for building letter to sound (LTS) rules from a word list in a language. The technique can be fully automatic, though a small amount of hand seeding can give better results. We have applied this technique to English (UK and US), French and German. The generated models achieve 75%, 58%, 93% and 89% words correct, respectively, for held out data from the word lists. To test our models on more typical data we also analyzed general text, to find which words do not appear in our lexicon. These unknown words were used as a more realistic test corpus for our models. We also discuss the distribution and type of such unknown words."} {"_id": "d7e58d05232ed02d4bffba943e4523706971913b", "title": "Telecommunication Fraud Detection Using Data Mining techniques", "text": "This document presents the final report of the thesis \u201cTelecommunication Fraud Detection Using Data Mining Techniques\u201d, where a study is made of the effect of the unbalanced data generated by the telecommunications industry on the construction and performance of classifiers that allow the detection and prevention of fraud.
In this context, an unbalanced data set is characterized by an uneven class distribution where the number of fraudulent instances (positive) is substantially smaller than the number of normal instances (negative). This will result in a classifier which is most likely to classify data as belonging to the normal class rather than to the fraud class. At first, an overall inspection is made of the data characteristics and of the Naive Bayes model, the classifier selected for anomaly detection in these experiments. After the characteristics are presented, a feature engineering stage is done with the intent of extending the information contained in the data, creating a deeper relation between the data itself and the model characteristics. A previously proposed solution, which consists of undersampling the most abundant class (normal) before building the model, is presented and tested. In the end, the new proposals are presented. The first proposal is to study the effects of changing the intrinsic class distribution parameter in the Naive Bayes model and evaluate its performance. The second proposal consists of estimating margin values that, when applied to the model output, attempt to recover more positive instances from previously negative classifications. All of these suggested models are validated in a Monte Carlo experiment, using data with and without the engineered features."} {"_id": "8124c8f871c400dcbdba87aeb16938c86b068688", "title": "An Analysis of Buck Converter Efficiency in PWM / PFM Mode with Simulink", "text": "This technical paper presents a study of the efficiency comparison between PWM and PFM control modes in DC-DC buck converters. Matlab Simulink models are built to facilitate the analysis of various effects on power loss and converting efficiency, including different load conditions, gate switching frequency, setting of voltage and current thresholds, etc. From the efficiency vs. load graph, a best switching frequency is found that achieves good efficiency throughout the wide load range. This simulation point is then compared to theoretical predictions, justifying the effectiveness of computer-based simulation. Efficiencies at the two different control modes are compared to verify the improvement of the PFM scheme."} {"_id": "25523cc0bbe43885f0247398dcbf3aecf9538ce2", "title": "Robust Unit Commitment Problem with Demand Response and Wind Energy", "text": "To improve the efficiency in power generation and to reduce greenhouse gas emissions, both Demand Response (DR) strategy and intermittent renewable energy have been proposed or applied in electric power systems. However, the uncertainty and the generation pattern in wind farms and the complexity of demand side management pose huge challenges in power system operations. In this paper, we analytically investigate how to integrate DR and wind energy with fossil fuel generators to (i) minimize power generation cost and (ii) fully take advantage of wind energy with managed demand to reduce greenhouse emissions. We first build a two-stage robust unit commitment model to obtain day-ahead generator schedules where wind uncertainty is captured by a polyhedron. Then, we extend our model to include DR strategy such that both price levels and generator schedule will be derived for the next day. For these two NP-hard problems, we derive their mathematical properties and develop a novel and analytical solution method.
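The two imbalance remedies described in the fraud-detection thesis above (undersampling the majority class and shifting the Naive Bayes class prior) are both one-liners in practice. Here is a small illustrative sketch using scikit-learn; the balanced 50/50 prior is an assumption for demonstration, not the thesis's tuned value.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def undersample_majority(X, y, rng=np.random.default_rng(0)):
    """Drop majority-class rows until classes balance (assumes y == 1 is the
    rare fraud class, y == 0 the abundant normal class)."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep_neg = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep_neg])
    return X[idx], y[idx]

# Alternatively, keep all the data but override the intrinsic class prior:
model = GaussianNB(priors=[0.5, 0.5])   # instead of the skewed empirical prior
```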
Our computational study on an IEEE 118 system with 36 units shows that (i) the robust unit commitment model can significantly reduce total cost and fully make use of wind energy, and (ii) the cutting plane method is computationally superior to known algorithms."} {"_id": "2e478ae969db8ff0b23e309a9ce46bcaf20c36b3", "title": "Learner Modeling for Integration Skills", "text": "Complex skill mastery requires not only acquiring individual basic component skills, but also practicing integrating such basic skills. However, traditional approaches to knowledge modeling, such as Bayesian knowledge tracing, only trace knowledge of each decomposed basic component skill. This risks early assertion of mastery or ineffective remediation that fails to address skill integration. We introduce a novel integration-level approach to model learners' knowledge and provide fine-grained diagnosis: a Bayesian network based on a new kind of knowledge graph with progressive integration skills. We assess the value of such a model from multifaceted aspects: performance prediction, parameter plausibility, expected instructional effectiveness, and real-world recommendation helpfulness. Our experiments based on a Java programming tutor show that the proposed model significantly improves on two popular multiple-skill knowledge tracing models on all four aspects."} {"_id": "0b44fcbeea9415d400c5f5789d6b892b6f98daff", "title": "Building a Large Annotated Corpus of English: The Penn Treebank", "text": "In this paper, we review our experience with constructing one such large annotated corpus--the Penn Treebank, a corpus consisting of over 4.5 million words of American English. During the first three-year phase of the Penn Treebank Project (1989-1992), this corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure. (University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-93-87, available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/237)"} {"_id": "21ed4912935c2ce77515791acbccce527e7266ff", "title": "INDUCING THE MORPHOLOGICAL LEXICON OF A NATURAL LANGUAGE FROM UNANNOTATED TEXT", "text": "This work presents an algorithm for the unsupervised learning, or induction, of a simple morphology of a natural language. A probabilistic maximum a posteriori model is utilized, which builds hierarchical representations for a set of morphs, which are morpheme-like units discovered from unannotated text corpora. The induced morph lexicon stores parameters related to both the \u201cmeaning\u201d and \u201cform\u201d of the morphs it contains. These parameters affect the role of the morphs in words. The model is implemented in a task of unsupervised morpheme segmentation of Finnish and English words.
Very good results are obtained for Finnish and almost as good results are obtained in the English task."} {"_id": "5c2e8ef6835ca92476fa6d703e7cc8b1955f108f", "title": "Unsupervised Learning of the Morphology of a Natural Language", "text": "This study reports the results of using minimum description length (MDL) analysis to model unsupervised learning of the morphological segmentation of European languages, using corpora ranging in size from 5,000 words to 500,000 words. We develop a set of heuristics that rapidly develop a probabilistic morphological grammar, and use MDL as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or not. The resulting grammar matches well the analysis that would be developed by a human morphologist. In the final section, we discuss the relationship of this style of MDL grammatical analysis to the notion of evaluation metric in early generative grammar."} {"_id": "0c5043108eda7d2fa467fe91e3c47d4ba08e0b48", "title": "Unsupervised Discovery of Morphemes", "text": "We present two methods for unsupervised segmentation of words into morpheme-like units. The model utilized is especially suited for languages with a rich morphology, such as Finnish. The first method is based on the Minimum Description Length (MDL) principle and works online. In the second method, Maximum Likelihood (ML) optimization is used. The quality of the segmentations is measured using an evaluation method that compares the segmentations produced to an existing morphological analysis. Experiments on both Finnish and English corpora show that the presented methods perform well compared to a current state-of-the-art system."} {"_id": "16524adee515692a50dd67a170b8e605e4b00b29", "title": "Conceptual spaces - the geometry of thought", "text": "In his book \u201cConceptual Spaces: The Geometry of Thought\u201d, Peter G\u00e4rdenfors [1] presents a pioneering theory for representing conceptual knowledge, the basic construct of human thinking and reasoning [4]. The conceptual level is not seen as an alternative to traditional approaches of knowledge representation in artificial intelligence, namely symbolic or subsymbolic methods. Instead, it is meant to complement both approaches. The book is highly recommendable and worth reading, as it not only tackles many fundamental problems of knowledge representation, such as grounding [3], concept formation and similarity comparisons [2], but also outlines novel and enlightening ideas on how to overcome them. The book introduces the notion of a conceptual space as a framework for representing knowledge at the conceptual level. It is motivated by contrasting it to other levels of knowledge representation: The highest level, the symbolic level, conceptualizes the human brain as a computing device. Knowledge is represented based on a language consisting of a set of symbols. Logical operations can be performed on these symbols to infer new knowledge. Human reasoning is modeled as a symbol manipulation process. Classical, symbolic artificial intelligence does not support central cognitive processes very well, such as the acquisition or formation of new concepts and similarity comparisons. The lowest level, the subsymbolic knowledge representation, is oriented towards the neuro-biological structure of the human brain. Concepts are implicitly represented via activation patterns within the neural network. Learning is modeled by modifying the activation of neurons.
Explicit representation of knowledge and concepts is not possible. At the intersection between the symbolic and the subsymbolic level, G\u00e4rdenfors introduces the conceptual level. The theory of conceptual spaces is based on semantic spaces with a geometric structure: A conceptual space is formed by a set of quality dimensions. One or several quality dimensions model one domain. An important example used throughout the book is the color domain represented by the quality dimensions hue, saturation and brightness. Conceptual spaces have a cognitive foundation because domains can be grounded in qualities perceivable by the human sensory apparatus. Concepts are represented as conceptual regions described by their properties on the quality dimensions. The geometric structure of conceptual spaces makes it possible to determine distances and therefore provides an inherent similarity measure, taking the distance in the conceptual space as an indicator of semantic similarity. The notion of similarity is an important construct for modeling categorization and concept formation. Using similarity for reasoning can also reflect the vagueness typical of human reasoning. The strong focus on the cognitive foundation makes the book particularly valuable. It contains many challenging claims which are related to various disciplines by giving evidence from a wide range of literature. This shows the huge and highly interdisciplinary background of the author. Unfortunately, G\u00e4rdenfors describes his theory only at a very abstract level and forbears from describing algorithms for the formalization of his theory. The realization of a computational model for conceptual spaces bears many practical problems which still have to be solved. Moreover, no empirical evidence is given for his pioneering, sometimes revolutionary ideas. However, these shortcomings should be considered as challenges to solve in the future. The target audience of the book is highly interdisciplinary: since G\u00e4rdenfors tackles the problem of cognitive knowledge representation from a psychological and computer science perspective as well as from a philosophical, neuroscience and linguistic point of view, this book is worth reading for researchers from many different areas. It is required reading for researchers in cognitive science or artificial intelligence interested in knowledge representation. The book has a clear structure and is very well written. The convincing examples throughout the book illustrate the findings very well and make it easy to understand. Therefore I would also deem G\u00e4rdenfors\u2019 book to be suitable for students as introductory literature to various problem fields in cognitive science. It gives readers from related areas the chance to look beyond their own field and get to know an interdisciplinary way of thinking. The book certainly meets the expectations of the highly interdisciplinary research area cognitive science."} {"_id": "4964f9a7437070858337393a1111032efc1c2039", "title": "Radio Interface Technologies for Cooperative Transmission in 3GPP LTE-Advanced", "text": "This paper presents an overview of radio interface technologies for cooperative transmission in 3GPP LTE-Advanced, i.e., coordinated multi-point (CoMP) transmission, enhanced inter-cell interference coordination (eICIC) for heterogeneous deployments, and relay transmission techniques.
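The geometric similarity idea reviewed above can be rendered in a few lines of code. The sketch below is our toy illustration (not from the book): concepts as points over the hue/saturation/brightness quality dimensions, distance as an inverse indicator of semantic similarity, with an exponential decay in the spirit of Shepard-style similarity functions that Gärdenfors builds on. The concept coordinates are made up for the example.

```python
import math

# quality dimensions for the color domain: (hue in degrees, saturation, brightness)
concepts = {
    "ripe_banana": (55.0, 0.90, 0.90),
    "lemon":       (50.0, 0.95, 0.95),
    "eggplant":    (280.0, 0.80, 0.40),
}

def distance(p, q):
    # hue is circular, so take the shorter way around the color circle
    dh = min(abs(p[0] - q[0]), 360 - abs(p[0] - q[0])) / 180.0
    return math.sqrt(dh**2 + (p[1] - q[1])**2 + (p[2] - q[2])**2)

def similarity(a, b):
    return math.exp(-distance(concepts[a], concepts[b]))  # exponential decay

print(similarity("ripe_banana", "lemon") > similarity("ripe_banana", "eggplant"))  # True
```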
This paper covers not only the technical components in the 3GPP specifications that have already been released, but also those that were discussed in the Study Item phase of LTE-Advanced, and those that are currently being discussed in 3GPP for potential specification in future LTE releases. Keywords: LTE-Advanced, CoMP, eICIC, relay"} {"_id": "0e9a674280b2dabe36e540c20ce5a7a9e10361f7", "title": "Enhancing network performance in Distributed Cognitive Radio Networks using single-agent and multi-agent Reinforcement Learning", "text": "Cognitive Radio (CR) is a next-generation wireless communication system that enables unlicensed users to exploit underutilized licensed spectrum to optimize the utilization of the overall radio spectrum. A Distributed Cognitive Radio Network (DCRN) is a distributed wireless network established by a number of unlicensed users in the absence of fixed network infrastructure such as a base station. Context awareness and intelligence are the capabilities that enable each unlicensed user to observe and carry out its own action as part of the joint action on its operating environment for network-wide performance enhancement. These capabilities can be applied in various application schemes in CR networks such as Dynamic Channel Selection (DCS), congestion control, and scheduling. In this paper, we apply Reinforcement Learning (RL), including single-agent and multi-agent approaches, to achieve context awareness and intelligence. Firstly, we show that the RL approach achieves a joint action that provides better network-wide performance with respect to DCS in DCRNs. The multi-agent approach is shown to provide higher levels of stability compared to the single-agent approach. Secondly, we show that RL achieves a high level of fairness. Thirdly, we show the effects of network density and various essential parameters in RL on the network-wide performance."} {"_id": "abbb5854f5583703fc41112d40d7fe13742de81d", "title": "The integration of the internal and external milieu in the insula during dynamic emotional experiences", "text": "Whilst external events trigger emotional responses, interoception (the perception of internal physiological states) is fundamental to core emotional experience. By combining high resolution functional neuroimaging with concurrent physiological recordings, we investigated the neural mechanisms of interoceptive integration during free listening to an emotionally salient audio film. We found that cardiac activity, a key interoceptive signal, was robustly synchronised across participants and centrally represented in the posterior insula. Effective connectivity analysis revealed that the anterior insula, specifically tuned to the emotionally salient moments of the audio stream, serves as an integration hub of interoceptive processing: interoceptive states represented in the posterior insula are integrated with exteroceptive representations by the anterior insula to highlight these emotionally salient moments. Our study demonstrates for the first time the insular hierarchy for interoceptive processing during natural emotional experience.
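The Dynamic Channel Selection application named in the DCRN abstract above reduces, in its simplest single-agent form, to a stateless Q-learning loop. The following sketch is our own toy stand-in, with hypothetical per-channel idle probabilities in place of a real spectrum environment.

```python
import random

NUM_CHANNELS = 5
q = [0.0] * NUM_CHANNELS
idle_prob = [0.2, 0.5, 0.9, 0.4, 0.7]   # hypothetical environment model

def select_channel(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-known channel, sometimes explore."""
    if random.random() < epsilon:
        return random.randrange(NUM_CHANNELS)
    return max(range(NUM_CHANNELS), key=lambda c: q[c])

def run(trials=10_000, alpha=0.05):
    for _ in range(trials):
        c = select_channel()
        reward = 1.0 if random.random() < idle_prob[c] else 0.0  # idle -> success
        q[c] += alpha * (reward - q[c])          # stateless Q-learning update
    return max(range(NUM_CHANNELS), key=lambda c: q[c])

print(run())  # typically settles on channel 2, the most often idle one
```

The multi-agent variant in the paper has each unlicensed user run such a learner so that their joint action improves network-wide performance; this sketch shows only the single-agent core.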
These findings provide an ecologically-valid framework for elucidating the neural underpinnings of emotional deficits in neuropsychiatric disorders."} {"_id": "bdb6d9baefb2a972f4ed094ae40f04e55a850f2b", "title": "User Experience Design and Agile Development: From Theory to Practice", "text": "We used existing studies on the integration of user experience design and agile methods as a basis to develop a framework for integrating UX and Agile. We performed a field study in an ongoing project of a medium-sized company in order to check if the proposed framework fits in the real world, and how some aspects of the integration of UX and Agile work in a real project. This led us to conclusions that situate contributions from practice to theory and back again. The framework is briefly described in this paper and consists of a set of practices, artifacts, and techniques derived from the literature. By combining theory and practice we were able to confirm some thoughts and identify some gaps\u2014both in the company process and in our proposed framework\u2014and drew our attention to new issues that need to be addressed. We believe that the most important issues in our case study are that UX designers cannot collaborate closely with developers because they are working on multiple projects, and that UX designers cannot work up front because they are too busy with too many projects at the same time."} {"_id": "a0495616ea12b3306e3e6d25093d6dc7642e7410", "title": "Turbo-Like Beamforming Based on Tabu Search Algorithm for Millimeter-Wave Massive MIMO Systems", "text": "For millimeter-wave (mmWave) massive multiple-input-multiple-output (MIMO) systems, codebook-based analog beamforming (including transmit precoding and receive combining) is usually used to compensate for the severe attenuation of mmWave signals. However, conventional beamforming schemes involve complicated search among predefined codebooks to find the optimal pair of analog precoder and analog combiner. To solve this problem, by exploring the idea of the turbo equalizer together with the tabu search (TS) algorithm, we propose a Turbo-like beamforming scheme based on TS, which is called Turbo-TS beamforming in this paper, to achieve near-optimal performance with low complexity. Specifically, the proposed Turbo-TS beamforming scheme is composed of the following two key components: 1) Based on the iterative information exchange between the base station (BS) and the user, we design a Turbo-like joint search scheme to find the near-optimal pair of analog precoder and analog combiner; and 2) inspired by the idea of the TS algorithm developed in artificial intelligence, we propose a TS-based precoding/combining scheme to intelligently search for the best precoder/combiner in each iteration of the Turbo-like joint search with low complexity. Analysis shows that the proposed Turbo-TS beamforming can considerably reduce the searching complexity, and simulation results verify that it can achieve near-optimal performance."} {"_id": "37ac5eaad66955ded22bbb50603f9d1a4f15f3d6", "title": "Multilingual representations for low resource speech recognition and keyword search", "text": "This paper examines the impact of multilingual (ML) acoustic representations on Automatic Speech Recognition (ASR) and keyword search (KWS) for low resource languages in the context of the OpenKWS15 evaluation of the IARPA Babel program. The task is to develop Swahili ASR and KWS systems within two weeks using as little as 3 hours of transcribed data.
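To make the tabu-search component of the Turbo-TS scheme above concrete, here is a generic tabu-search sketch over precoder/combiner index pairs. It is our illustration of the basic TS mechanism, not the paper's exact procedure: the score function, neighborhood, and tabu-list length are all assumptions.

```python
import random

def tabu_search(score, num_f, num_w, iters=100, tabu_len=10):
    """Maximize score(pair) over precoder index f and combiner index w."""
    best = cur = (random.randrange(num_f), random.randrange(num_w))
    tabu = [cur]
    for _ in range(iters):
        # neighbors: change one index of the current pair at a time
        neighbors = [(f, cur[1]) for f in range(num_f)] + \
                    [(cur[0], w) for w in range(num_w)]
        candidates = [p for p in neighbors if p not in tabu] or neighbors
        cur = max(candidates, key=score)          # best non-tabu neighbor
        tabu = (tabu + [cur])[-tabu_len:]         # fixed-length tabu list
        if score(cur) > score(best):
            best = cur
    return best

# usage with a toy effective-channel-gain table for 16x16 codebooks:
gain = [[random.random() for _ in range(16)] for _ in range(16)]
print(tabu_search(lambda p: gain[p[0]][p[1]], 16, 16))
```

The tabu list is what lets the search escape local optima in the codebook pair space instead of cycling, which is the property the paper leverages for its low-complexity search.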
Multilingual acoustic representations proved to be crucial for building these systems under strict time constraints. The paper discusses several key insights on how these representations are derived and used. First, we present a data sampling strategy that can speed up the training of multilingual representations without appreciable loss in ASR performance. Second, we show that fusion of diverse multilingual representations developed at different LORELEI sites yields substantial ASR and KWS gains. Speaker adaptation and data augmentation of these representations improve both ASR and KWS performance (up to 8.7% relative). Third, incorporating un-transcribed data through semi-supervised learning improves WER and KWS performance. Finally, we show that these multilingual representations significantly improve ASR and KWS performance (9% relative for WER and 5% for MTWV) even when forty hours of transcribed audio in the target language is available. Multilingual representations significantly contributed to the LORELEI KWS systems winning the OpenKWS15 evaluation."} {"_id": "1aa0522b561b2732e5a206dfa37ccdd62c3890c5", "title": "A geometry-based soft shadow volume algorithm using graphics hardware", "text": "Most previous soft shadow algorithms have either suffered from aliasing, been too slow, or been limited to a restricted set of shadow casters and/or receivers. Therefore, we present a strengthened soft shadow volume algorithm that deals with these problems. Our critical improvements include robust penumbra wedge construction, geometry-based visibility computation, and simplified computation through a four-dimensional texture lookup. This enables us to implement the algorithm using programmable graphics hardware, and it results in images that most often are indistinguishable from images created as the average of 1024 hard shadow images. Furthermore, our algorithm can use both arbitrary shadow casters and receivers. Also, one version of our algorithm completely avoids sampling artifacts, which is rare for soft shadow algorithms. As a bonus, the four-dimensional texture lookup allows for small textured light sources, and even video textures can be used as light sources. Our algorithm has been implemented in pure software, and also using the GeForce FX emulator with pixel shaders. Our software implementation renders soft shadows at 0.5--5 frames per second for the images in this paper. With actual hardware, we expect that our algorithm will render soft shadows in real time. An important performance measure is bandwidth usage. For the same image quality, an algorithm using the accumulated hard shadow images uses almost two orders of magnitude more bandwidth than our algorithm."} {"_id": "7c70c29644ff1d6dd75a8a4dd0556fb8cb13549b", "title": "Polymer\u2013Ceramic Composites for Microwave Applications: Fabrication and Performance Assessment", "text": "We present a novel technique to fabricate conformal and pliable substrates for microwave applications including systems-on-package. The produced materials are fabricated by combining ceramic powders with polymers to generate a high-contrast substrate that is concurrently pliable (bendable). Several such polymer-ceramic substrates are fabricated and used to examine the performance of a patch antenna and a coupled line filter. This paper presents the substrate mixing method while measurements are given to evaluate the loss performance of the substrates.
Overall, the fabricated composites lead to flexible substrates with a permittivity of up to \u03b5r = 20 and sufficiently low loss."} {"_id": "90541ff36e8e9592614224276b69e057c2c03c22", "title": "Body Image and Personality in Aesthetic Plastic Surgery: A Case-Control Study", "text": "Introduction: The amount of research on the relationship between plastic surgery and psychological features, such as personality disorders and indexes of body image dissatisfaction, has increased significantly in recent years. Aim: The aim of the present study was to examine these psychological features among Italian patients who underwent aesthetic plastic surgery, testing the mediating role of the mass media influence on the relationship between them and the choice of aesthetic plastic surgery. The Personality Diagnostic Questionnaire 4+ (PDQ-4+) and the Body Uneasiness Test (BUT) were administered to patients who underwent aesthetic plastic surgery (N = 111) and participants who had no history of aesthetic plastic surgical procedures (N = 149). Results: Results showed that aesthetic patients reported higher indexes of body image disturbance than controls. No significant differences between aesthetic participants and controls were found in any of the three cluster B personality disorders. Moreover, the effect of body image dissatisfaction on the choice to undergo aesthetic plastic surgery was partially mediated by the influence of mass media. Conclusions: In conclusion, the present study confirmed the importance of body dissatisfaction as a predictor of the choice to undergo aesthetic surgery and highlighted the influence of media messages regarding physical appearance."} {"_id": "225c570344c8230b9d3e8be3a5d8fd3d3a9ba267", "title": "A DTC-Based Subsampling PLL Capable of Self-Calibrated Fractional Synthesis and Two-Point Modulation", "text": "We present an analog subsampling PLL based on a digital-to-time converter (DTC), which operates with almost no performance gap (176/198 fs RMS jitter) between the integer and the worst case fractional operation, achieving -246.6 dB FOM in the worst case fractional mode. The PLL is capable of two-point, 10 Mbit/s GMSK modulation with -40.5 dB EVM around a 10.24 GHz fractional carrier. The analog nonidealities (DTC gain, DTC nonlinearity, modulating VCO bank gain, and nonlinearity) are calibrated in the background while the system operates normally. This results in ~15 dB fractional spur improvement (from -41 dBc to -56.5 dBc) during synthesis and ~15 dB EVM improvement (from -25 dB to -40.5 dB) during modulation. The paper provides an overview of the mechanisms that contribute to performance degradation in DTC-based PLL/phase modulators and presents ways to mitigate them. We demonstrate state-of-the-art performance in nanoscale CMOS for fractional-N synthesis and phase modulation."} {"_id": "c4df7d84eb47e416af043dc20d053d3cb45e9571", "title": "Broadcast Encryption", "text": "We introduce new theoretical measures for the qualitative and quantitative assessment of encryption schemes designed for broadcast transmissions. The goal is to allow a central broadcast site to broadcast secure transmissions to an arbitrary set of recipients while minimizing key-management-related transmissions. We present several schemes that allow a center to broadcast a secret to any subset of privileged users out of a universe of size so that coalitions of users not in the privileged set cannot learn the secret.
The most interesting scheme requires every user to store keys and the center to broadcast messages regardless of the size of the privileged set. This scheme is resilient to any coalition of users. We also present a scheme that is resilient with probability against a random subset of users. This scheme requires every user to store keys and the center to broadcast messages. Preliminary version appeared in Advances in Cryptology CRYPTO \u201993 Proceedings, Lecture Notes in Computer Science, Vol. 773, 1994, pp. 480\u2013491. Dept. of Computer Science, School of Mathematics, Tel Aviv University, Tel Aviv, Israel, and Algorithmic Research Ltd. E-mail: fiat@math.tau.ac.il. Incumbent of the Morris and Rose Goldman Career Development Chair, Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel. Research supported by an Alon Fellowship and a grant from the Israel Science Foundation administered by the Israeli Academy of Sciences. E-mail: naor@wisdom.weizmann.ac.il."} {"_id": "751f61473d563f9018fbfac57cba5619cd39de4c", "title": "A 45nm 4Gb 3-Dimensional Double-Stacked Multi-Level NAND Flash Memory with Shared Bitline Structure", "text": "Recently, 3-dimensional (3D) memories have regained attention as a potential future memory solution featuring low cost, high density and high performance. We present a 3D double stacked 4Gb MLC NAND flash memory device with shared bitline structure, with a cell size of 0.0021 \u00b5m2/bit per unit feature area. The device is designed to support 3D stacking and fabricated by S3 and 45nm floating-gate CMOS technologies."} {"_id": "7f4773a3edd05f27e543b89986f8bfd34ed29bfa", "title": "Alobar holoprosencephaly, mobile proboscis and trisomy 13 in a fetus with maternal gestational diabetes mellitus: a 2D ultrasound diagnosis and review of the literature", "text": "Alobar holoprosencephaly is a rare and severe brain malformation due to early arrest in brain cleavage and rotation. We report a congenitally anomalous fetus with alobar holoprosencephaly, prenatally diagnosed by two-dimensional (2D) sonography at 40\u00a0weeks of gestation. The mother was affected by gestational diabetes mellitus and was obese (BMI\u00a0>\u00a030\u00a0kg/m2). 2D ultrasound depicted the cerebral malformation, cyclopia, proboscis, cardiac defects (atrial septal defect, hypoplastic left heart, anomalous communication between right ventricle and aorta) and extremity defects. The newborn died just after delivery. External examination confirmed a mobile proboscis-like nose in the normal nose position. The fetus had both claw hands. Both the right and left feet were equine. Autopsy confirmed the ultrasound diagnosis and chromosome analysis revealed trisomy 13 (47,XY,+13). Fetopathologic examination showed cyclopia, proboscis and alobar holoprosencephaly of the fetus, which was consistent with Patau syndrome. The teratogenic effect of diabetes on the fetus has been described, but no previous clinical case of a congenitally anomalous fetus with trisomy 13 and maternal gestational diabetes has been reported. This case report is the first to describe 2D ultrasound diagnosis of alobar holoprosencephaly and trisomy 13 with maternal gestational diabetes mellitus."} {"_id": "4b4edd81ae68c42c1a570bee7fd1a1956532320a", "title": "Cloud-assisted humanoid robotics for affective interaction", "text": "In recent years, humanoid robots have received great attention and are gradually moving into household and personal service settings.
Recent progress in cloud computing, big data, and machine learning provides strong support for robotics research, and robots with affective interaction capabilities have broad market potential and research value. In this paper, we propose a cloud-assisted humanoid robotics architecture for affective interaction, and introduce the essential composition, design, and implementation of its components. Finally, an actual robot emotional interaction test platform validates the feasibility and extensibility of the proposed architecture."} {"_id": "384ac22ddf645108d085f6f9ec6d359813776a80", "title": "Paragraph: Thwarting Signature Learning by Training Maliciously", "text": "Defending a server against Internet worms and defending a user\u2019s email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class. Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms."} {"_id": "f62b9c6ef565e820d21dfa64e4aed00323a50417", "title": "Diagnostic Performance of a Smartphone\u2010Based Photoplethysmographic Application for Atrial Fibrillation Screening in a Primary Care Setting", "text": "BACKGROUND\nDiagnosing atrial fibrillation (AF) before ischemic stroke occurs is a priority for stroke prevention in AF. Smartphone camera-based photoplethysmographic (PPG) pulse waveform measurement discriminates between different heart rhythms, but its ability to diagnose AF in real-world situations has not been adequately investigated. We sought to assess the diagnostic performance of a standalone smartphone PPG application, Cardiio Rhythm, for AF screening in a primary care setting.\n\n\nMETHODS AND RESULTS\nPatients with hypertension, with diabetes mellitus, and/or aged \u226565\u00a0years were recruited. A single-lead ECG was recorded by using the AliveCor heart monitor with tracings reviewed subsequently by 2 cardiologists to provide the reference standard. PPG measurements were performed by using the Cardiio Rhythm smartphone application. AF was diagnosed in 28 (2.76%) of 1013 participants. The diagnostic sensitivity of the Cardiio Rhythm for AF detection was 92.9% [95% CI 77-99%] and was higher than that of the AliveCor automated algorithm (71.4% [95% CI 51-87%]). The specificities of Cardiio Rhythm and the AliveCor automated algorithm were comparable (97.7% [95% CI 97-99%] versus 99.4% [95% CI 99-100%]).
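All of the figures quoted in this screening abstract are simple functions of the 2x2 confusion table; the sketch below (not the study's analysis code) recomputes them from hypothetical counts that are consistent with the reported prevalence and rates.

```python
# Diagnostic performance from a 2x2 screening table.
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # P(test+ | AF present)
        "specificity": tn / (tn + fp),   # P(test- | AF absent)
        "ppv":         tp / (tp + fp),   # P(AF present | test+)
        "npv":         tn / (tn + fn),   # P(AF absent | test-)
    }

# Counts consistent with the abstract: 28 AF cases among 1013 participants,
# 26 detected (sensitivity 92.9%), 23 false positives (specificity 97.7%).
m = screening_metrics(tp=26, fp=23, fn=2, tn=962)
for name, value in m.items():
    print(f"{name}: {value:.3f}")   # 0.929 / 0.977 / 0.531 / 0.998
```

Note how low prevalence (2.76%) drags the positive predictive value down to about 53% even at 97.7% specificity, which is exactly the pattern reported next.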
The positive predictive value of the Cardiio Rhythm was lower than that of the AliveCor automated algorithm (53.1% [95% CI 38-67%] versus 76.9% [95% CI 56-91%]); both had a very high negative predictive value (99.8% [95% CI 99-100%] versus 99.2% [95% CI 98-100%]).\n\n\nCONCLUSIONS\nThe Cardiio Rhythm smartphone PPG application provides an accurate and reliable means to detect AF in patients at risk of developing AF and has the potential to enable population-based screening for AF."} {"_id": "aafef584b6c5b3e3275e42902c75ee6dfdf20942", "title": "An indicator of research front activity: Measuring intellectual organization as uncertainty reduction in document sets", "text": "When using scientific literature to model scholarly discourse, a research specialty can be operationalized as an evolving set of related documents. Each publication can be expected to contribute to the further development of the specialty at the research front. The specific combinations of title words and cited references in a paper can then be considered as a signature of the knowledge claim in the paper: new words and combinations of words can be expected to represent variation, while each paper is at the same time selectively positioned into the intellectual organization of a field using context-relevant references. Can the mutual information among these three dimensions\u2014title words, cited references, and sequence numbers\u2014be used as an indicator of the extent to which intellectual organization structures the uncertainty prevailing at a research front? The effect of the discovery of nanotubes (1991) on the previously existing field of fullerenes is used as a test case. Thereafter, this method is applied to science studies with a focus on scientometrics using various sample delineations. An emerging research front about citation analysis can be indicated."} {"_id": "8563b6545a8ff8d17a74da1f70f57c4a7d9a38bc", "title": "DPP-Net: Device-Aware Progressive Search for Pareto-Optimal Neural Architectures", "text": "Recent breakthroughs in Neural Architecture Search (NAS) have achieved state-of-the-art performances in applications such as image classification and language modeling. However, these techniques typically ignore device-related objectives such as inference time, memory usage, and power consumption. Optimizing neural architecture for device-related objectives is immensely crucial for deploying deep networks on portable devices with limited computing resources. We propose DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures, optimizing for both device-related (e.g., inference time and memory usage) and device-agnostic (e.g., accuracy and model size) objectives. DPP-Net employs a compact search space inspired by current state-of-the-art mobile CNNs, and further improves search efficiency by adopting progressive search (Liu et al. 2017). Experimental results on CIFAR-10 demonstrate the effectiveness of Pareto-optimal networks found by DPP-Net, for three different devices: (1) a workstation with Titan X GPU, (2) NVIDIA Jetson TX1 embedded system, and (3) mobile phone with ARM Cortex-A53. Compared to CondenseNet and NASNet (Mobile), DPP-Net achieves better performances: higher accuracy & shorter inference time on various devices.
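The "Pareto-optimal" selection at the heart of device-aware NAS like DPP-Net reduces to keeping the candidate networks that no other candidate beats on every objective at once. A minimal sketch of such a filter, with invented measurements and minimize-everything objectives (error, latency, parameter count), is:

```python
# Keep the Pareto-optimal candidates under minimize-all objectives.
def pareto_front(candidates):
    # candidates: list of (name, objective_tuple); smaller is better everywhere.
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [(n, obj) for n, obj in candidates
            if not any(dominates(other, obj) for _, other in candidates if other != obj)]

# Hypothetical (error %, latency ms, params M) measurements for four networks.
nets = [("net-a", (5.8, 12.0, 3.2)),
        ("net-b", (6.1, 8.0, 2.1)),
        ("net-c", (5.8, 15.0, 3.2)),   # dominated by net-a
        ("net-d", (7.0, 9.0, 2.5))]    # dominated by net-b
print(pareto_front(nets))              # net-a and net-b survive
```

In a progressive search the surviving front is what gets expanded at the next architecture depth, which is how device measurements steer the search rather than merely ranking its output.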
Additional experimental results show that models found by DPP-Net also achieve considerably good performance on ImageNet."} {"_id": "85947d646623ef7ed96dfa8b0eb705d53ccb4efe", "title": "Network forensic frameworks: Survey and research challenges", "text": "Network forensics is the science that deals with capture, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper makes an exhaustive survey of various network forensic frameworks proposed to date. A generic process model for network forensics is proposed which is built on various existing models of digital forensics. Definition, categorization and motivation for network forensics are clearly stated. The functionality of various Network Forensic Analysis Tools (NFATs) and network security monitoring tools, available for forensics examiners is discussed. The specific research gaps existing in implementation frameworks, process models and analysis tools are identified and major challenges are highlighted. The significance of this work is that it presents an overview on network forensics covering tools, process models and framework implementations, which will be very much useful for security practitioners and researchers in exploring this upcoming and young discipline. \u00a9 2010 Elsevier Ltd. All rights reserved."} {"_id": "4c18d626c7200957a36df64a5baefb8c9f8cda5a", "title": "What does media use reveal about personality and mental health? An exploratory investigation among German students", "text": "The present study aimed to investigate the relationship between personality traits, mental health variables and media use among German students. The data of 633 participants were collected. Results indicate a positive association between general Internet use, general use of social platforms and Facebook use, on the one hand, and self-esteem, extraversion, narcissism, life satisfaction, social support and resilience, on the other hand. Use of computer games was found to be negatively related to these personality and mental health variables. The use of platforms that focus more on written interaction (Twitter, Tumblr) was assumed to be negatively associated with positive mental health variables and significantly positively with depression, anxiety, and stress symptoms. In contrast, Instagram use, which focuses more on photo-sharing, correlated positively with positive mental health variables. Possible practical implications of the present results for mental health, as well as the limitations of the present work, are discussed."} {"_id": "2c7bfc8d75dab44aeab34b1bf5243b192112f502", "title": "Failure as a Service ( FaaS ) : A Cloud Service for Large-Scale , Online Failure Drills", "text": "Cloud computing is pervasive, but cloud service outages still take place. One might say that the computing forecast for tomorrow is \u201ccloudy with a chance of failure.\u201d One main reason why major outages still occur is that there are many unknown large-scale failure scenarios in which recovery might fail. We propose a new type of cloud service, Failure as a Service (FaaS), which allows cloud services to routinely perform large-scale failure drills in real deployments."} {"_id": "3832b8446bf3256148868ce62d84546e48c0a7be", "title": "Young adults' experiences of seeking online information about diabetes and mental health in the age of social media", "text": "BACKGROUND\nThe Internet is a primary source of health information for many.
Since the widespread adoption of social media, user-generated health-related content has proliferated, particularly around long-term health issues such as diabetes and common mental health disorders (CMHDs).\n\n\nOBJECTIVE\nTo explore perceptions and experiences of engaging with health information online in a sample of young adults familiar with social media environments and variously engaged in consuming user-generated content.\n\n\nMETHODS\nForty semi-structured interviews were conducted with young adults, aged 18-30, with experience of diabetes or CMHDs. Data were analysed following a thematic networks approach to explore key themes around online information-seeking and content consumption practices.\n\n\nRESULTS\nAlthough participants primarily discussed well-rehearsed approaches to health information-seeking online, particularly reliance on search engines, their accounts also reflected active engagement with health-related content on social media sites. Navigating between professionally produced websites and user-generated content, many of the young adults seemed to appreciate different forms of health knowledge emanating from varied sources. Participants described negotiating health content based on social media practices and features and assessing content heuristically. Some also discussed habitual consumption of content related to their condition as integrated into their everyday social media use.\n\n\nCONCLUSION\nTechnologies such as Facebook, Twitter and YouTube offer opportunities to consume and assess content which users deem relevant and useful. As users and organizations continue to colonize social media platforms, opportunities are increasing for health communication and intervention. However, how such innovations are adopted is dependent on their alignment with users' expectations and consumption practices."} {"_id": "0a45ca7d9c6ac32eeec03ce9a6d8344d4d9aaf1c", "title": "Web-Based Virtual Learning Environments: A Research Framework and a Preliminary Assessment of Effectiveness in Basic IT Skills Training", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."} {"_id": "391dd4ca2ee3b2995260425d09dd97d4d9aaac29", "title": "Computer Self-Efficacy: Development of a Measure and Initial Test", "text": "This paper discusses the role of individuals\u2019 beliefs about their abilities to competently use computers (computer self-efficacy) in the determination of computer use. A survey of Canadian managers and professionals was conducted to develop and validate a measure of computer self-efficacy and to assess both its impacts and antecedents. Computer selfefficacy was found to exert a significant influence on individuals\u2019 expectations of the outcomes of using computers, their emotional reactions to computers (affect and anxiety), well as their actual computer use. An individual\u2019s self-efficacy and outcome expectations were found to be positively influenced by the encouragement of others in their work group, as well as others\u2019 use of computers. 
Thus, self-efficacy represents an important individual trait, which moderates organizational influences (such as encouragement and support) on an individual\u2019s decision to use computers. Understanding self-efficacy, then, is important to the successful implementation of systems in organizations. Introduction: Understanding the factors that influence an individual\u2019s use of information technology has been a goal of MIS research since the mid-1970s, when organizations and researchers began to find that adoption of new technology was not living up to expectations. Lucas (1975, 1978) provides some of the earliest evidence of the individual or behavioral factors that influenced IT adoption. The first theoretical perspective to gain widespread acceptance in this research was the Theory of Reasoned Action (Fishbein and Ajzen, 1975). This theory maintains that individuals would use computers if they could see that there would be positive benefits (outcomes) associated with using them. This theory is still widely used today in the IS literature and has demonstrated validity. However, there is also a growing recognition that additional explanatory variables are needed (e.g., Thompson, et al., 1991; Webster and Martocchio, 1992). One such variable, examined in this research, comes from the writings of Albert Bandura and his work on Social Cognitive Theory (Bandura, 1986). Self-efficacy, the belief that one has the capability to perform a particular behavior, is an important construct in social psychology. Self-efficacy perceptions have been found to influence decisions about what behaviors to undertake (e.g., Bandura, et al., 1977; Betz and Hackett, 1981), the effort exerted and persistence in attempting those behaviors (e.g., Barling and Beattie, 1983; Brown and Inouye, 1978), the emotional responses (including stress and anxiety) of the individual performing the behaviors (e.g., Bandura, et al., 1977; Stumpf, et al., 1987), and the actual performance attainments of the individual with respect to the behavior (e.g., Barling and Beattie, 1983; Collins, 1985; Locke, et al., 1984; Schunk, 1981; Wood and Bandura, 1989). These effects have been shown for a wide variety of behaviors in both clinical and managerial settings. Within a management context, self-efficacy has been found to be related to attendance (Frayne and Latham, 1987; Latham and Frayne, 1989), career choice and development (Betz and Hackett, 1981; Jones, 1986), research productivity (Taylor, et al., 1984), and sales performance (Barling and Beattie, 1983). Several more recent studies (Burkhardt and Brass, 1990; Gist, et al., 1989; Hill, et al., 1986; 1987; Webster and Martocchio, 1992; 1993) have examined the relationship between self-efficacy with respect to using computers and a variety of computer behaviors. These studies found evidence of a relationship between self-efficacy and registration in computer courses at universities (Hill, et al., 1987), adoption of high technology products (Hill, et al., 1986) and innovations (Burkhardt and Brass, 1990), as well as performance in software training (Gist, et al., 1989; Webster and Martocchio, 1992; 1993). All of the studies argue the need for further research to explore fully the role of self-efficacy in computing behavior. This paper describes the first study in a program of research aimed at understanding the impact of self-efficacy on individual reactions to computing technology.
The study involves the development of a measure for computer self-efficacy and a test of its reliability and validity. The measure was evaluated by examining its performance in a nomological network, through structural equations modeling. A research model for this purpose was developed with reference to literature from social psychology, as well as prior IS research. The paper is organized as follows. The next section presents the theoretical foundation for this research. The third section discusses the development of the self-efficacy measure. The research model is described and the hypotheses are presented in the following section. Then, the research methodology is outlined, and the results of the analyses are presented. The paper concludes with a discussion of the implications of these findings and the strengths and limitations of the research. Theoretical Background: Social Cognitive Theory. Social Cognitive Theory (Bandura, 1977; 1978; 1982; 1986) is a widely accepted, empirically validated model of individual behavior. It is based on the premise that environmental influences such as social pressures or unique situational characteristics, cognitive and other personal factors including personality as well as demographic characteristics, and behavior are reciprocally determined. Thus, individuals choose the environments in which they exist"} {"_id": "53393cbe2147c22e111dc3a4667bb1a1dbf537f6", "title": "Judo strategy. The competitive dynamics of Internet time.", "text": "Competition on the Internet is creating fierce battles between industry giants and small-scale start-ups. Smart start-ups can avoid those conflicts by moving quickly to uncontested ground and, when that's no longer possible, turning dominant players' strengths against them. The authors call this competitive approach judo strategy. They use the Netscape-Microsoft battles to illustrate the three main principles of judo strategy: rapid movement, flexibility, and leverage. In the early part of the browser wars, for instance, Netscape applied the principle of rapid movement by being the first company to offer a free stand-alone browser. This allowed Netscape to build market share fast and to set the market standard. Flexibility became a critical factor later in the browser wars. In December 1995, when Microsoft announced that it would \"embrace and extend\" competitors' Internet successes, Netscape failed to give way in the face of superior strength. Instead it squared off against Microsoft and even turned down numerous opportunities to craft deep partnerships with other companies. The result was that Netscape lost deal after deal when competing with Microsoft for common distribution channels. Netscape applied the principle of leverage by using Microsoft's strengths against it. Taking advantage of Microsoft's determination to convert the world to Windows or Windows NT, Netscape made its software compatible with existing UNIX systems. While it is true that these principles can't replace basic execution, say the authors, without speed, flexibility, and leverage, very few companies can compete successfully on Internet time."} {"_id": "54f35b4edba6ddee8ce2eac489bde78308e3e708", "title": "Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis", "text": "We present a rule-based system for computer-aided circuit analysis. The set of rules, called EL, is written in a rule language called ARS.
Rules are implemented by ARS as pattern-directed invocation demons monitoring an associative data base. Deductions are performed in an antecedent manner, giving EL's analysis a catch-as-catch-can flavour suggestive of the behavior of expert circuit analyzers. We call this style of circuit analysis propagation of constraints. The system threads deduced facts with justifications which mention the antecedent facts and the rule used. These justifications may be examined by the user to gain insight into the operation of the set of rules as they apply to a problem. The same justifications are used by the system to determine the currently active data-base context for reasoning in hypothetical situations. They are also used by the system in the analysis of failures to reduce the search space. This leads to effective control of combinatorial search which we call dependency-directed backtracking."} {"_id": "7d98dce77cce2d0963a3b6566f5c733ad4343ce4", "title": "Gender Differences in the Perception and Use of E-Mail: An Extension to the Technology Acceptance Model", "text": "This study extends Davis' (1989) TAM model and Straub's (1994) SPIR addendum by adding gender to an IT diffusion model. The Technology Acceptance Model (TAM) has been widely studied in IS research as an explanation of the use of information systems across IS-types and nationalities. While this line of research has found significant cross-cultural differences, it has ignored the effects of gender, even though in socio-linguistic research, gender is a fundamental aspect of culture. Indeed, sociolinguistic research has shown that men tend to focus discourse on hierarchy and independence while women focus on intimacy and solidarity. This literature provides a solid grounding for conceptual extensions to the IT diffusion research and the Technology Acceptance Model. Testing gender differences that might relate to beliefs and use of computer-based media, the present study sampled 392 female and male responses via a cross-sectional survey instrument. The sample drew from comparable groups of knowledge workers using E-mail systems in the airline industry in North America, Asia, and Europe. Study findings indicate that women and men differ in their perceptions but not use of E-mail. These findings suggest that researchers should include gender in IT diffusion models along with other cultural effects. Managers and co-workers, moreover, need to realize that the same mode of communication may be perceived differently by the sexes, suggesting that more favorable communications environments might be created, environments that take into account not only organizational contextual factors, but also the gender of users. The creation of these environments involves not only the actual deployment of communication media, but also organizational training on communications media."} {"_id": "032c741edc5acd8fdb7b22e02a39a83c3882211a", "title": "EDUCATIONAL DATA MINING", "text": "Computer-based learning systems can now keep detailed logs of user-system interactions, including key clicks, eye-tracking, and video data, opening up new opportunities to study how students learn with technology. Educational Data Mining (EDM; Romero, Ventura, Pechenizkiy, & Baker, 2010) is concerned with developing, researching, and applying computerized methods to detect patterns in large collections of educational data \u2013 patterns that would otherwise be hard or impossible to analyze due to the enormous volume of data they exist within.
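Returning to the EL/ARS abstract above: the propagation-of-constraints style it describes can be imitated in miniature, with rules that fire whenever enough device quantities are known and that record a justification for each deduced fact, as a dependency-directed backtracker would require. The circuit, rule names, and data structures here are invented for illustration.

```python
# Antecedent (forward) reasoning over a series circuit: V = 10 V across
# R1 = 2 ohm and R2 = 3 ohm in series. Each fact carries a justification.
facts = {"V_total": (10.0, "given"), "R1": (2.0, "given"), "R2": (3.0, "given")}

def assert_fact(name, value, antecedents, rule):
    if name not in facts:
        facts[name] = (value, f"{rule} from {antecedents}")

def propagate():
    changed = True
    while changed:
        before = len(facts)
        if "R1" in facts and "R2" in facts:
            assert_fact("R_series", facts["R1"][0] + facts["R2"][0],
                        ["R1", "R2"], "series-rule")
        if "V_total" in facts and "R_series" in facts:
            assert_fact("I", facts["V_total"][0] / facts["R_series"][0],
                        ["V_total", "R_series"], "ohm-rule")
        if "I" in facts and "R2" in facts:
            assert_fact("V_R2", facts["I"][0] * facts["R2"][0],
                        ["I", "R2"], "ohm-rule")
        changed = len(facts) > before

propagate()
for name, (value, why) in facts.items():
    print(f"{name} = {value}  [{why}]")   # e.g. I = 2.0 [ohm-rule from ...]
```

The recorded justifications are what make dependency-directed backtracking possible: on a contradiction, the system can trace back to exactly the antecedent assumptions involved instead of blindly undoing recent choices.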
Data of interest is not restricted to interactions of individual students with an educational system (e.g., navigation behavior, input to quizzes and interactive exercises) but might also include data from collaborating students (e.g., text chat), administrative data (e.g., school, school district, teacher), and demographic data (e.g., gender, age, school grades). Data on student affect (e.g., motivation, emotional states) has also been a focus, which can be inferred from physiological sensors (e.g., facial expression, seat posture and perspiration). EDM uses methods and tools from the broader field of Data Mining (Witten & Frank, 2005), a sub-field of Computer Science and Artificial Intelligence that has been used for purposes as diverse as credit card fraud detection, analysis of gene sequences in bioinformatics, or the analysis of purchasing behaviors of customers. Distinguishing EDM features are its particular focus on educational data and problems, both theoretical (e.g., investigating a learning hypothesis) and practical (e.g., improving a learning tool). Furthermore, EDM makes a methodological contribution by developing and researching data mining techniques for educational applications. Typical steps in an EDM project include data acquisition, data preprocessing (e.g., data \u201ccleaning\u201d), data mining, and validation of results."} {"_id": "d172ef0dafee17f08d9ef7867337878ed5cd2971", "title": "Why Do Men Get More Attention? Exploring Factors Behind Success in an Online Design Community", "text": "Online platforms are an increasingly popular tool for people to produce, promote or sell their work. However, recent studies indicate that social disparities and biases present in the real world might transfer to online platforms and could be exacerbated by seemingly harmless design choices on the site (e.g., recommendation systems or publicly visible success measures). In this paper we analyze an exclusive online community of teams of design professionals called Dribbble and investigate apparent differences in outcomes by gender. Overall, we find that men produce more work, and are able to show it to a larger audience thus receiving more likes. Some of this effect can be explained by the fact that women have different skills and design different images. Most importantly however, women and men position themselves differently in the Dribbble community. Our investigation of users\u2019 position in the social network shows that women have more clustered and gender homophilous following relations, which leads them to have smaller and more closely knit social networks. Overall, our study demonstrates that looking behind the apparent patterns of gender inequalities in online markets with the help of social networks and product differentiation helps us to better understand gender differences in success and failure."} {"_id": "6e30fa90d8c628dd10083eac76cfa5aff209d0ed", "title": "Removing ECG noise from surface EMG signals using adaptive filtering", "text": "Surface electromyograms (EMGs) are valuable in the pathophysiological study and clinical treatment for dystonia. These recordings are often critically contaminated by cardiac artefact. The objective of this study was to evaluate the performance of an adaptive noise cancellation filter in removing electrocardiogram (ECG) interference from surface EMGs recorded from the trapezius muscles of patients with cervical dystonia.
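As background to the adaptive noise cancellation just described (the abstract continues below with the recursive-least-squares filter the authors evaluate), here is a minimal NumPy sketch of RLS cancellation using a reference channel: the reference is filtered to predict the artefact in the primary recording, and the prediction error is the cleaned signal. The filter order, forgetting factor, and synthetic signals are all illustrative.

```python
import numpy as np

def rls_cancel(primary, reference, order=8, lam=0.99, delta=0.01):
    """Remove the component of `primary` predictable from `reference` (RLS)."""
    w = np.zeros(order)              # adaptive filter weights
    P = np.eye(order) / delta        # large initial inverse-correlation estimate
    x = np.zeros(order)              # regressor of recent reference samples
    cleaned = np.zeros_like(primary)
    for n in range(len(primary)):
        x = np.roll(x, 1); x[0] = reference[n]
        k = P @ x / (lam + x @ P @ x)        # gain vector
        e = primary[n] - w @ x               # a-priori error = cleaned sample
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        cleaned[n] = e
    return cleaned

# Synthetic test: EMG-like noise plus a scaled, delayed "ECG" interference.
rng = np.random.default_rng(0)
t = np.arange(4000) / 1000.0
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15          # crude spiky cardiac reference
emg = 0.3 * rng.standard_normal(t.size)
contaminated = emg + 0.8 * np.roll(ecg, 5)
out = rls_cancel(contaminated, ecg)
print(np.std(contaminated - emg), "->", np.std(out[500:] - emg[500:]))
```

The fast convergence the abstract attributes to RLS is visible here: the residual artefact shrinks within a few hundred samples even though the interference is delayed relative to the reference.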
Performance of the proposed recursive-least-square adaptive filter was first quantified by coherence and signal-to-noise ratio measures in simulated noisy EMG signals. The influence of parameters such as the signal-to-noise ratio, forgetting factor, filter order and regularization factor was assessed. Fast convergence of the recursive-least-square algorithm enabled the filter to track complex dystonic EMGs and effectively remove ECG noise. This adaptive filter procedure proved a reliable and efficient tool to remove ECG artefact from surface EMGs with mixed and varied patterns of transient, short and long lasting dystonic contractions."} {"_id": "bd4308ebc656cee9cd50ba3347f6662d0192f0a8", "title": "Forecasting the behavior of multivariate time series using neural networks", "text": "This paper presents a neural network approach to multivariate time-series analysis. Real world observations of flour prices in three cities have been used as a benchmark in our experiments. Feedforward connectionist networks have been designed to model flour prices over the period from August 1972 to November 1980 for the cities of Buffalo, Minneapolis, and Kansas City. Remarkable success has been achieved in training the networks to learn the price curve for each of these cities and in making accurate price predictions. Our results show that the neural network approach is a leading contender with the statistical modeling approaches."} {"_id": "479fee2481047995353df4ba11a163c77d2eb57b", "title": "Dose-response relationship in music therapy for people with serious mental disorders: systematic review and meta-analysis.", "text": "Serious mental disorders have considerable individual and societal impact, and traditional treatments may show limited effects. Music therapy may be beneficial in psychosis and depression, including treatment-resistant cases. The aim of this review was to examine the benefits of music therapy for people with serious mental disorders. All existing prospective studies were combined using mixed-effects meta-analysis models, allowing us to examine the influence of study design (RCT vs. CCT vs. pre-post study), type of disorder (psychotic vs. non-psychotic), and number of sessions. Results showed that music therapy, when added to standard care, has strong and significant effects on global state, general symptoms, negative symptoms, depression, anxiety, functioning, and musical engagement. Significant dose-effect relationships were identified for general, negative, and depressive symptoms, as well as functioning, with explained variance ranging from 73% to 78%. Small effect sizes for these outcomes are achieved after 3 to 10 sessions, large effects after 16 to 51 sessions. The findings suggest that music therapy is an effective treatment which helps people with psychotic and non-psychotic severe mental disorders to improve global state, symptoms, and functioning. Slight improvements can be seen with a few therapy sessions, but longer courses or more frequent sessions are needed to achieve more substantial benefits."} {"_id": "192ece7d63279a6ec150ca426ff461eff53ff7db", "title": "Javalanche: efficient mutation testing for Java", "text": "To assess the quality of a test suite, one can use mutation testing - seeding artificial defects (mutations) into the program and checking whether the test suite finds them.
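The mutate-and-re-test loop named in the Javalanche abstract's opening sentence can be sketched in a few lines. The mutation operator (swapping an arithmetic operator) and the toy test suite below are invented for illustration; real tools such as Javalanche of course generate mutants from compiled code rather than from a hand-listed table.

```python
import operator

def original(a, b):          # program under test: integer addition
    return a + b

# Each mutant replaces the "+" with another arithmetic operator.
mutants = {"plus->minus": operator.sub, "plus->times": operator.mul}

tests = [((2, 3), 5), ((0, 7), 7), ((-1, 1), 0)]   # the suite being assessed

def suite_passes(fn):
    return all(fn(*args) == expected for args, expected in tests)

assert suite_passes(original)                       # sanity: suite is green
killed = sum(not suite_passes(m) for m in mutants.values())
print(f"mutation score: {killed}/{len(mutants)}")   # 2/2: suite kills all mutants
```

A mutant that no test can ever kill because it is behaviorally identical to the original is an "equivalent mutant"; assessing mutation impact to weed these out is exactly the problem the abstract says Javalanche addresses.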
Javalanche is an open source framework for mutation testing Java programs with a special focus on automation, efficiency, and effectiveness. In particular, Javalanche assesses the impact of individual mutations to effectively weed out equivalent mutants; it has been demonstrated to work on programs with up to 100,000 lines of code."} {"_id": "a9c2a6b2ada15fa042c61e65277272ad746d7bb5", "title": "Embedded Trench Redistribution Layers (RDL) by Excimer Laser Ablation and Surface Planer Processes", "text": "This paper reports the demonstration of 2-5 \u00b5m embedded trench formation in dry film polymer dielectrics such as Ajinomoto build-up film (ABF) and Polyimide without using a chemical mechanical polishing (CMP) process. The trenches in these dielectrics were formed by excimer laser ablation, followed by metallization of trenches by copper plating processes and overburden removal with a surface planer tool. The materials and processes integrated in this work are scalable to large panel fabrication at much higher throughput, for interposers and high density fan-out packaging at lower cost and higher performance than silicon interposers."} {"_id": "831ed2a5f40861866b4ebfe60257b997701e38e2", "title": "ESPRIT-estimation of signal parameters via rotational invariance techniques", "text": "High-resolution signal parameter estimation is a problem of significance in many signal processing applications. Such applications include direction-of-arrival (DOA) estimation, system identification, and time series analysis. A novel approach to the general problem of signal parameter estimation is described. Although discussed in the context of direction-of-arrival estimation, ESPRIT can be applied to a wide variety of problems including accurate detection and estimation of sinusoids in noise. It exploits an underlying rotational invariance among signal subspaces induced by an array of sensors with a translational invariance structure. The technique, when applicable, manifests significant performance and computational advantages over previous algorithms such as MEM, Capon's MLM, and MUSIC."} {"_id": "e1d0880dbb789448c14f9d656a4c88ec3da8f8c4", "title": "Assessment of Knowledge , Perception and Practice of Maternal Nutrition Among Pregnant Mother Attending Antenatal Care in Selected Health", "text": "Nutrition during preconception as well as throughout pregnancy has a major impact on the outcome of pregnancy. Therefore, this study was designed to assess knowledge, perception and practices of maternal nutrition among pregnant women during antenatal care in selected health centers of Horo Guduru Wollega zone. A facility-based cross-sectional study design was conducted on a total of 405 pregnant mothers from January to June, 2017. Semi-structured interviews and questionnaires were used to collect information in the areas of socio-demographics, knowledge, perception, and practices towards maternal nutrition among pregnant mothers. Statistical Package for Social Sciences (SPSS) version 20.0 was used to perform descriptive statistics. The results indicate that 63.5%, 70.6%, and 74.6% of pregnant mothers had good knowledge, perception, and practices, respectively, while 36.5%, 29.4%, and 25.4% had poor knowledge, perception, and practices, respectively. This study clearly indicated that less than half of the pregnant mothers attending antenatal care in the study area had poor knowledge, perception and practices.
Therefore, nutrition education should be intensified to improve the overall knowledge, perception, and practices of pregnant mothers towards maternal nutrition in different villages, health centers, health posts and hospitals."} {"_id": "4ec6bd672aaaa075b42a751099eb9317857e6e0c", "title": "Novelty and diversity metrics for recommender systems: choice, discovery and relevance", "text": "There is an increasing realization in the Recommender Systems (RS) field that novelty and diversity are fundamental qualities of recommendation effectiveness and added-value. We identify, however, a gap in the formalization of novelty and diversity metrics \u2013 and a consensus around them \u2013 comparable to the recent proposals in IR diversity. We study a formal characterization of different angles that RS novelty and diversity may take from the end-user viewpoint, aiming to contribute to a formal definition and understanding of different views and meanings of these magnitudes under common groundings. Building upon this, we derive metric schemes that take item position and relevance into account, two aspects not generally addressed in the novelty and diversity metrics reported in the RS literature."} {"_id": "915dca0156f3eb8d1ee6e66796da770f02cd3b88", "title": "Comparative evaluation of soft-switching concepts for bi-directional buck+boost dc-dc converters", "text": "Soft-switching techniques are an enabling technology to further reduce the losses and the volume of automotive dc-dc converters, utilized to interconnect the high voltage battery or ultra-capacitor to the dc-link of a Hybrid Electrical Vehicle (HEV) or a Fuel Cell Vehicle (FCV). However, as the performance indices of a power electronics converter, such as efficiency and power density, are competing and moreover dependent on the underlying specifications and technology node, a comparison of different converter topologies naturally demands detailed analytical models. Therefore, to investigate the performance of the ARCP, CF-ZVS-M, SAZZ and ZCT-QZVT soft-switching converters, the paper discusses in detail the advantages and drawbacks of each concept, and the impact of the utilized semiconductor technology and silicon area on the converter efficiency. The proposed analytical models that correlate semiconductor, capacitor and inductor losses with the component volume furthermore allow for a comparison of power density and to find the \u03b7-\u03c1-Pareto-Front of the CF-ZVS-M converter."} {"_id": "beb0ad490e55296b7b79ecb029c7b59c01a0e524", "title": "Development of a robotic finger with an active dual-mode twisting actuation and a miniature tendon tension sensor", "text": "A robot finger is developed with enhanced grasping force and speed using active dual-mode twisting actuation, which is a type of twisted string actuation. This actuation system has two twisting modes (Speed Mode and Force Mode) with different radii of the twisted string, and the twisting mode can be changed automatically. Therefore, the actuator operates like an automatic transmission having two transmission ratios, and the robot finger can generate a large grasping force or fast motion depending on the twisting mode. In addition, a miniature tendon tension sensor is developed and embedded in the fingertip for automatic mode change and grasping force control. A relation between the fingertip force and the tension of the tendon is derived by kinematic and kinetic analyses of the robot finger.
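The kind of fingertip-force/tendon-tension relation mentioned at the end of the robotic-finger abstract above can be illustrated with a toy planar two-joint model (not the paper's actual derivation): the tendon's moment arms convert tension into joint torques, and the static balance tau = J(q)^T F converts those torques into a fingertip force. All dimensions below are invented.

```python
import numpy as np

# Planar 2-joint tendon-driven finger: one tendon with fixed moment arms.
l1, l2 = 0.04, 0.03          # link lengths [m]
r1, r2 = 0.008, 0.006        # tendon moment arms at the two joints [m]

def fingertip_force(q1, q2, tension):
    tau = tension * np.array([r1, r2])           # joint torques from the tendon
    j11 = -l1 * np.sin(q1) - l2 * np.sin(q1 + q2)
    j12 = -l2 * np.sin(q1 + q2)
    j21 =  l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    j22 =  l2 * np.cos(q1 + q2)
    J = np.array([[j11, j12], [j21, j22]])       # planar fingertip Jacobian
    return np.linalg.solve(J.T, tau)             # solve tau = J^T F for F

print(fingertip_force(np.deg2rad(30), np.deg2rad(45), tension=20.0))  # [Fx, Fy] N
```

Inverting this relation is what lets an embedded tendon tension sensor, as in the abstract, serve as an indirect grasping-force sensor at the fingertip.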
The performance of the robotic finger and the tension sensor is verified by experimental results."} {"_id": "30c8d01077ab942802320899eb0b49c4567591d3", "title": "Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM", "text": "Sentiment analysis on large-scale social media data is important to bridge the gaps between social media content and real world activities including political election prediction, individual and public emotional status monitoring and analysis, and so on. Although textual sentiment analysis has been well studied based on platforms such as Twitter and Instagram, analysis of the role of extensive emoji use in sentiment analysis remains limited. In this paper, we propose a novel scheme for Twitter sentiment analysis with extra attention on emojis. We first learn bi-sense emoji embeddings under positive and negative sentimental tweets individually, and then train a sentiment classifier by attending on these bi-sense emoji embeddings with an attention-based long short-term memory network (LSTM). Our experiments show that the bi-sense embedding is effective for extracting sentiment-aware embeddings of emojis and outperforms the state-of-the-art models. We also visualize the attentions to show that the bi-sense emoji embedding provides better guidance on the attention mechanism to obtain a more robust understanding of the semantics and sentiments."} {"_id": "65f55691cc3bad6ca224e2144083c9deb2b2cd1d", "title": "A Look into 30 Years of Malware Development from a Software Metrics Perspective", "text": "During the last decades, the problem of malicious and unwanted software (malware) has surged in numbers and sophistication. Malware plays a key role in most of today\u2019s cyber attacks and has consolidated as a commodity in the underground economy. In this work, we analyze the evolution of malware since the early 1980s to date from a software engineering perspective. We analyze the source code of 151 malware samples and obtain measures of their size, code quality, and estimates of the development costs (effort, time, and number of people). Our results suggest an exponential increment of nearly one order of magnitude per decade in aspects such as size and estimated effort, with code quality metrics similar to those of regular software. Overall, this supports otherwise confirmed claims about the increasing complexity of malware and its production progressively becoming an industry."} {"_id": "7f9f42fb6166b66ae0c26a3d373edc91b369796d", "title": "Maturity Models for Systems Thinking", "text": "Recent decades have seen a rapid increase in the complexity of goods, products, and services that society has come to demand. This has necessitated a corresponding growth in the requirements demanded of organizational systems and the people who work in them. The competence a person requires to be effective in working in such systems has become an area of increased interest to scholars and practitioners in many disciplines. How can we assess the degree to which a person is executing the competencies required to do good systems work? Several industries now utilize maturity models in the attempt to evaluate and cultivate people\u2019s ability to effectively execute complex tasks. This paper will examine current thought regarding the value and pitfalls of maturity models.
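Returning to the bi-sense emoji abstract above: the core idea is two embeddings per emoji (one learned in positive contexts, one in negative ones) with attention choosing how to blend them given the textual context. A NumPy sketch of that attention step, with toy dimensions and randomly drawn vectors standing in for trained embeddings and an LSTM state, is:

```python
import numpy as np

def attend_bisense(context, pos_emb, neg_emb):
    """Blend an emoji's positive- and negative-sense embeddings by attention."""
    senses = np.stack([pos_emb, neg_emb])          # (2, d) sense matrix
    logits = senses @ context                      # relevance of each sense
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                       # softmax over the two senses
    return weights, weights @ senses               # attended emoji representation

rng = np.random.default_rng(1)
d = 8
pos, neg = rng.standard_normal(d), rng.standard_normal(d)
context = pos + 0.1 * rng.standard_normal(d)       # context close to positive sense
w, emoji_vec = attend_bisense(context, pos, neg)
print("sense weights (pos, neg):", np.round(w, 3))  # mass shifts to positive sense
```

In the full model the context vector would come from the LSTM over the tweet text, so the same emoji can contribute a positive or a negative representation depending on its surroundings.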
It will identify principles and exemplars that could guide the development of a Maturity Model of Systems Thinking Competence (MMSTC) for the varied roles people inhabit in systems contexts."} {"_id": "50b5a0a6ceb1c8d1dbdaba6e843b4c6947f02d62", "title": "Learning Attractor Dynamics for Generative Memory", "text": "A central challenge faced by memory systems is the robust retrieval of a stored pattern in the presence of interference due to other stored patterns and noise. A theoretically well-founded solution to robust retrieval is given by attractor dynamics, which iteratively clean up patterns during recall. However, incorporating attractor dynamics into modern deep learning systems poses difficulties: attractor basins are characterised by vanishing gradients, which are known to make training neural networks difficult. In this work, we avoid the vanishing gradient problem by training a generative distributed memory without simulating the attractor dynamics. Based on the idea of memory writing as inference, as proposed in the Kanerva Machine, we show that a likelihood-based Lyapunov function emerges from maximising the variational lower-bound of a generative memory. Experiments show that it converges to correct patterns upon iterative retrieval and achieves competitive performance as both a memory model and a generative model."} {"_id": "962db795fbd438100f602bcc4e566ec08d868d55", "title": "Artificial intelligence theory (Basic concepts)", "text": "Taking the bionic approach as a basis, the article discusses the main concepts of the theory of artificial intelligence as a field of knowledge, which studies the principles of creation and functioning of intelligent systems based on multidimensional neural-like growing networks. The general theory of artificial intelligence includes the study of neural-like elements and multidimensional neural-like growing networks, temporary and long-term memory, study of the functional organization of the \u201cbrain\u201d of the artificial intelligent systems, of the sensor system, modulating system, motor system, conditioned and unconditioned reflexes, reflex arc (ring), motivation, purposeful behavior, of \u201cthinking\u201d, \u201cconsciousness\u201d, \u201csubconscious and artificial personality developed as a result of training and education\u201d."} {"_id": "64ccd7ef913372962a2dc04a0fb18957fd347f74", "title": "Dual-band bandpass filter using helical resonators", "text": "This paper demonstrates non-uniform pitch helical resonators for realization of dual-band bandpass filters. By modifying the pitch and coil diameter of conventional uniform pitch helical resonators, the desired dual-band frequencies can be obtained. The external coupling and inter-resonator coupling structure are also illustrated. A non-uniform pitch helical resonator was designed and fabricated with unloaded quality-factor (>900) for both passbands. A 2nd order Butterworth filter that has passbands at 840 MHz and 2590 MHz has been designed. The simulation results show that the filter has a good selectivity with 6% and 4.3% fractional bandwidth at the passbands, respectively. The good power handling capability also suggests that the filter is applicable for microcell/picocell LTE base stations."} {"_id": "1c8e97e9b8dce97b164be9379461fed9eb23cf3b", "title": "Automation of Attendances in Classrooms using RFID", "text": "This paper presents the design and implementation of an automated students\u2019 attendance management system, taking into consideration easy access and time saving.
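As background to the attractor-memory abstract above: the classical way to get the iterative "clean-up" it refers to is a Hopfield-style network, in which stored patterns are fixed points of a sign-update dynamic. The paper trains a generative memory instead of simulating these dynamics, but a minimal sketch of the classical mechanism is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_patterns = 64, 3
patterns = rng.choice([-1.0, 1.0], size=(num_patterns, n))

# Hebbian storage: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def retrieve(probe, steps=20):
    s = probe.copy()
    for _ in range(steps):           # iterative clean-up toward an attractor
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

noisy = patterns[0].copy()
flip = rng.choice(n, size=8, replace=False)
noisy[flip] *= -1                    # corrupt 8 of 64 bits
print("overlap after retrieval:", float(retrieve(noisy) @ patterns[0]) / n)
```

The vanishing-gradient difficulty the abstract mentions arises precisely because, near such fixed points, repeated updates change the state less and less, so backpropagating through the unrolled dynamics yields almost no training signal.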
The system will be used in the Faculty of Computing and Information Technology (FCIT) at King Abdulaziz University (KAU). Radio Frequency Identification (RFID) and wireless technology are two technologies which will be applied as infrastructure in the indoor environment. University Based Services (UBS) and tag IDs of the RFID are used in this paper to determine the attendance list for academic staff. Once an academic staff member enters a classroom, he or she will be able to register students\u2019 presence. Therefore, an academic system for identifying and tracking attendance at the computing college at KAU will be described, and hybrid RFID and Wireless LAN (WLAN) technologies will be implemented in the FCIT academic environment."} {"_id": "94638a5fdc084b2d7a9b3500b6e42cd30226ab50", "title": "Got Flow?: Using Machine Learning on Physiological Data to Classify Flow", "text": "As information technologies (IT) are both drivers of highly engaging experiences and sources of disruptions at work, the phenomenon of flow - defined as \"the holistic sensation that people feel when they act with total involvement\" [5, p. 36] - has been suggested as a promising vehicle to understand and enhance user behavior. Despite the growing relevance of flow at work, contemporary measurement approaches of flow are of a subjective and retrospective nature, limiting our possibilities to investigate and support flow in a reliable and timely manner. Hence, we require objective and real-time classification of flow. To address this issue, this article combines recent theoretical considerations from psychology and experimental research on the physiology of flow with machine learning (ML). The overall aim is to build classifiers to distinguish flow states (i.e., low and high flow). Our results indicate that flow-classifiers can be derived from physiological signals. Cardiac features seem to play an important role in this process resulting in an accuracy of 72.3%. Hereby, our findings may serve as a foundation for future work aiming to build flow-aware IT-systems."} {"_id": "5204a0c2464bb64971dfb045e833bb0ca4f118fd", "title": "Measuring Universal Intelligence in Agent-Based Systems Using the Anytime Intelligence Test", "text": "This paper aims to quantify and analyze the intelligence of artificial agent collectives. A universal metric is proposed and used to empirically measure intelligence for several different agent decision controllers. Accordingly, the effectiveness of various algorithms is evaluated on a per-agent basis over a selection of abstracted, canonical tasks of different algorithmic complexities. Results reflect the different settings over which cooperative multiagent systems can be significantly more intelligent per agent than others. We identify and discuss some of the factors influencing the collective performance of these systems."} {"_id": "754e5f3c10fca0f55916be86e0c545504b06590f", "title": "Creating Constructivist Learning Environments on the Web: The Challenge in Higher Education", "text": "Australian universities have traditionally relied on government funding to support undergraduate teaching. As the government has adopted the \u2018user-pays\u2019 principle, universities have been forced to look outside their traditional market to expand the undergraduate, post-graduate and international offerings.
Alternate delivery methods in many universities have utilised web-based instruction as a basis for this move because of three perceptions: access by the target market is reasonably significant, it is a cost-effective method of delivery, and it provides global access. Since the mid sixties, the trend for both on-campus teaching and teaching at a distance has been to use behaviourist instructional strategies for subject development, which rely on the development of a set of instructional sequences with predetermined outcomes. These models, whilst applicable in a behaviourist environment, are not serving instructional designers well when the theoretical foundation for the subject outcomes is based on a constructivist approach to learning, since the constructivist group of theories places less emphasis on the sequence of instruction and more emphasis on the design of the learning environment (Jonassen, 1994, p. 35). In a web-based environment this proves to be even more challenging. This paper will review current research in design goals for web-based constructivist learning environments, and a move towards the development of models. The design of two web-based subjects will be explored in the context of the design goals developed by Duffy and Cunningham (1996, p. 177) who have produced some basic assumptions that they call \u201cmetaphors we teach by\u201d. The author seeks to examine the seven goals for their relevance to the instructional designer through the examination of their relevance to the web-based subjects, both of which were framed in constructivist theory."} {"_id": "92e99b0a460c731a5ac4aeb6dba5d9234b797795", "title": "Robust road marking detection using convex grouping method in around-view monitoring system", "text": "As the around-view monitoring (AVM) system becomes one of the essential components for advanced driver assistance systems (ADAS), many applications using AVM such as parking guidance systems are actively being developed. As a key step for such applications, detecting road markings robustly is a very important issue to be solved. However, compared to the lane marking detection methods, detection of non-lane markings, such as text marks painted on the road, has been less studied so far. While some methods for detecting non-lane markings exist, many of them are restricted to roadways only, or work poorly on AVM images. In this paper, we propose an algorithm which can robustly detect non-lane road markings on AVM images. We first propose a difference-of-Gaussian based method for extracting a connected component set, followed by a novel grouping method for grouping connected components based on a convexity condition. For a classification task, we exploit the Random Forest classifier. We demonstrate the robustness and detection accuracy of our methods through various experiments by using the dataset collected from various environments."} {"_id": "23ebda99aa7020e703be88b82ad255376c6d8878", "title": "Focused Crawling: A New Approach to Topic-Specific Web Resource Discovery", "text": "The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents.
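A difference-of-Gaussian extraction step like the one named in the road-marking abstract above can be sketched with SciPy: band-pass the image with two Gaussian blurs, threshold, and label connected components as marking candidates. The later convex-grouping and Random Forest stages are omitted, and the synthetic frame and all parameters here are invented.

```python
import numpy as np
from scipy import ndimage

def dog_components(image, sigma_small=1.0, sigma_large=3.0, thresh=0.1):
    """Bright-blob candidates via difference of Gaussians + connected components."""
    dog = (ndimage.gaussian_filter(image, sigma_small)
           - ndimage.gaussian_filter(image, sigma_large))
    mask = dog > thresh                  # painted markings are brighter than asphalt
    labels, count = ndimage.label(mask)  # each candidate gets an integer label
    return labels, count

# Synthetic AVM-like frame: dark road with two bright painted strokes.
img = 0.05 * np.random.default_rng(2).random((80, 80))
img[20:24, 10:60] = 1.0                  # a lane-like stripe
img[50:70, 40:44] = 1.0                  # a vertical mark
labels, count = dog_components(img)
print("connected components found:", count)
```

The resulting component set is what a grouping stage would then assemble into whole symbols before classification.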
Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date. To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, and a distiller that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though they are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs. It discovers largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of Web documents on specific topics, using modest desktop hardware. \u00a9 1999 Published by Elsevier Science B.V. All rights reserved."} {"_id": "2599131a4bc2fa957338732a37c744cfe3e17b24", "title": "A Training Algorithm for Optimal Margin Classifiers", "text": "A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms."} {"_id": "04ada80ffb2d8a90b6366ef225a75edd6a2be0e0", "title": "Searching the world wide Web", "text": "The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the \"indexable Web,\" the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages."} {"_id": "3022cfe4decc9ea7dd95af935764204924c2f9a1", "title": "WebWatcher : A Tour Guide for the World Wide Web", "text": "We explore the notion of a tour guide software agent for assisting users browsing the World Wide Web.
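Operationally, a focused crawler of the kind described above keeps a priority frontier ordered by the classifier's relevance estimate and expands the most promising pages first. A schematic with a stubbed relevance function and a toy link graph (both placeholders for the paper's classifier/distiller and a real fetcher) is:

```python
import heapq

def focused_crawl(seeds, relevance, extract_links, budget=100, threshold=0.5):
    """Best-first crawl: expand promising pages first, prune off-topic ones."""
    frontier = [(-relevance(url), url) for url in seeds]
    heapq.heapify(frontier)
    visited, harvested = set(), []
    while frontier and len(visited) < budget:
        neg_score, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        if -neg_score >= threshold:          # on-topic: keep it, follow its links
            harvested.append(url)
            for link in extract_links(url):
                if link not in visited:
                    heapq.heappush(frontier, (-relevance(link), link))
    return harvested

# Toy web: relevance is a lookup table; extract_links consults a link graph.
graph = {"seed": ["a", "b"], "a": ["c"], "b": [], "c": []}
scores = {"seed": 0.9, "a": 0.8, "b": 0.2, "c": 0.7}
print(focused_crawl(["seed"], scores.get, lambda u: graph.get(u, [])))
```

Because off-topic pages are never expanded, the crawl boundary stays concentrated around the topic, which is the source of the hardware and bandwidth savings the abstract reports.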
A Web tour guide agent provides assistance similar to that provided by a human tour guide in a museum: it guides the user along an appropriate path through the collection, based on its knowledge of the user's interests, of the location and relevance of various items in the collection, and of the way in which others have interacted with the collection in the past. This paper describes a simple but operational tour guide, called WebWatcher, which has given over 5000 tours to people browsing CMU's School of Computer Science Web pages. WebWatcher accompanies users from page to page, suggests appropriate hyperlinks, and learns from experience to improve its advice-giving skills. We describe the learning algorithms used by WebWatcher, experimental results showing their effectiveness, and lessons learned from this case study in Web tour guide agents."} {"_id": "aa618ec4197c1ee34c0cc42b0624c499e119d28a", "title": "Application-specific slicing for MVNO and traffic characterization [Invited]", "text": "In this paper, we apply the concept of a software-defined data plane to defining new services for mobile virtual network operators (MVNOs) to empower them with the capability of tailoring fine-grained subscription plans that can meet users' demands. For this purpose, we have recently proposed the concept of application- and/or device-specific slicing that classifies application- and/or device-specific traffic into slices and introduced various applications of our proposed system [Optical Fiber Communication Conf. (OFC), 2016, paper Th3I.4]. This paper reports the prototype implementation of the proposal into a real MVNO connecting customized smartphones so we can identify applications/devices from the given traffic with 100% accuracy. We first characterize the traffic patterns of popular applications to build a detailed understanding of how network resources are utilized for real users and applications and what their demands on different network resources are. Then, we classify traffic according to the devices and find that the flow characteristics of LTE RAN slices are very similar to those of 3G. We discover that most of the bandwidth is consumed by those flows with durations of several hundred seconds, and the flow size is larger than 5 Mbytes in both LTE and 3G MVNO networks."} {"_id": "fc595b4410d0e1ecdedfb8f1c68bbd5dfd2d06be", "title": "A Solution to Single Point of Failure Using Voter Replication and Disagreement Detection", "text": "This paper suggests a method, called distributed voting, to overcome the problem of the single point of failure in a TMR system used in robotics and industrial control applications. It uses time redundancy and is based on TMR with a disagreement detector feature. This method masks faults occurring in the voter so that the TMR system can continue its function properly. The method has been evaluated by injecting faults into Virtex-II Pro and Virtex-4 Xilinx FPGAs. An analytical evaluation is also performed. The results of both evaluation approaches show that the proposed method can improve the reliability and the mean time to failure (MTTF) of a TMR system by at least a factor of (2 - R_V(t)), where R_V(t) is the reliability of the voter."} {"_id": "813375e15680be8741907064ee740ce58cb94b53", "title": "Error detecting and error correcting codes", "text": "When a message is transmitted, it has the potential to get scrambled by noise. This is certainly true of voice messages, and is also true of the digital messages that are sent to and from computers.
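The distributed-voting abstract above builds on TMR with disagreement detection; a toy bit-wise version of the underlying 2-of-3 majority vote, with the paper's time-redundant voter replication omitted, looks like this:

```python
# Bit-wise 2-of-3 majority voting with a disagreement flag (toy illustration).
def tmr_vote(a, b, c):
    voted = (a & b) | (a & c) | (b & c)   # majority of three replicated outputs
    disagreement = not (a == b == c)      # flags a (single) faulty module
    return voted, disagreement

out, flag = tmr_vote(0b1011, 0b1011, 0b1111)
print(bin(out), "disagreement detected:", flag)   # 0b1011 True
```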
Now even sound and video are being transmitted in this manner. A digital message is a sequence of 0\u2019s and 1\u2019s which encodes a given message. More data will be added to a given binary message to help detect if an error has been made in the transmission of the message; adding such data is called an error-detecting code. More data may also be added to the original message so that errors made in transmission may be detected, and also so that the original message can be figured out from the possibly corrupt message that was received. This type of code is an error-correcting code."} {"_id": "52d6080eec86f20553e1264a4adb63630808773c", "title": "Millibot Trains for Enhanced Mobility", "text": "The objective of this work is to enhance the mobility of small mobile robots by enabling them to link into a train configuration capable of crossing relatively large obstacles. In particular, we are building on Millibots, semiautonomous, tracked mobile sensing/communication platforms at the 5-cm scale previously developed at Carnegie Mellon University. The Millibot Train concept provides couplers that allow the Millibot modules to engage/disengage under computer control and joint actuators that allow lifting of one module by another and control of the whole train shape in two dimensions. A manually configurable train prototype demonstrated the ability to climb standard stairs and vertical steps nearly half the train length. A fully functional module with powered joints has been developed and several have been built and tested. Construction of a set of six modules is well underway and will allow testing of the complete train in the near future. This paper focuses on the development, design, and construction of the electromechanical hardware for the Millibot Train."} {"_id": "d6a8b32bc767dff6919c6c9aa4b758a8af75c31c", "title": "Bach 10 Dataset - A Versatile Polyphonic Music Dataset", "text": "This is a polyphonic music dataset which can be used for versatile research problems, such as Multi-pitch Estimation and Tracking, Audio-score Alignment, Source Separation, etc. This dataset consists of the audio recordings of each part and the ensemble of ten pieces of four-part J.S. Bach chorales, as well as their MIDI scores, the ground-truth alignment between the audio and the score, the ground-truth pitch values of each part and the ground-truth notes of each piece. The audio recordings of the four parts (Soprano, Alto, Tenor, and Bass) of each piece are performed by violin, clarinet, saxophone, and bassoon, respectively."} {"_id": "9346a6fc28681eaa56fcf71816bc2070d786062e", "title": "Variational Sequential Monte Carlo", "text": "Many recent advances in large scale probabilistic inference rely on variational methods. The success of variational approaches depends on (i) formulating a flexible parametric family of distributions, and (ii) optimizing the parameters to find the member of this family that most closely approximates the exact posterior. In this paper we present a new approximating family of distributions, the variational sequential Monte Carlo (VSMC) family, and show how to optimize it in variational inference. VSMC melds variational inference (VI) and sequential Monte Carlo (SMC), providing practitioners with flexible, accurate, and powerful Bayesian inference. The VSMC family is a variational family that can approximate the posterior arbitrarily well, while still allowing for efficient optimization of its parameters.
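As a concrete instance of the error-detecting and error-correcting codes discussed above, here is a minimal Hamming(7,4) sketch: three parity bits protect four data bits, and the syndrome locates any single flipped bit.

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits; correct 1-bit errors.
def hamming74_encode(d):                 # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                    # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                    # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                    # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def hamming74_correct(c):                # c = received 7-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-indexed position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1             # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]      # recovered data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                             # corrupt one bit in transit
assert hamming74_correct(code) == [1, 0, 1, 1]
```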
We demonstrate its utility on state space models, stochastic volatility models for financial data, and deep Markov models of brain neural circuits."} {"_id": "68cc677342506ef6c0928994e7ece63d432c34fb", "title": "LoRa-based localization systems for noisy outdoor environment", "text": "LoRa-based (Long Range Communication based) localization systems make use of RSSI (Received Signal Strength Indicator) to locate an object properly in an outdoor environment for IoT (Internet of Things) applications in smart environment and urban networking. The accuracy of localization is highly degraded, however, by noisy environments (e.g., electronic interference, blocking). To address this issue, this paper proposes two new localization algorithms to improve localization accuracy. Furthermore, the paper develops a winner selection strategy to select a winner from the traditional localization algorithm and the two new algorithms to minimize localization errors. Our simulations and real experiments found that the two new algorithms can significantly improve localization accuracy. Additionally, the winner selection strategy effectively selects a winner from the three algorithms to improve localization accuracy further. Using our proposed algorithms in some real experiments, the localization error is only a few meters in a real measurement field longer than 100 m."} {"_id": "51c2940efef5ea99ac6dee9ae9841eebffe0dc7d", "title": "Document-level Sentiment Inference with Social, Faction, and Discourse Context", "text": "We present a new approach for document-level sentiment inference, where the goal is to predict directed opinions (who feels positively or negatively towards whom) for all entities mentioned in a text. To encourage more complete and consistent predictions, we introduce an ILP that jointly models (1) sentence- and discourse-level sentiment cues, (2) factual evidence about entity factions, and (3) global constraints based on social science theories such as homophily, social balance, and reciprocity. Together, these cues allow for rich inference across groups of entities, including for example that CEOs and the companies they lead are likely to have similar sentiment towards others. We evaluate performance on new, densely labeled data that provides supervision for all pairs, complementing previous work that only labeled pairs mentioned in the same sentence. Experiments demonstrate that the global model outperforms sentence-level baselines, by providing more coherent predictions across sets of related entities."} {"_id": "68c29b7bf1811f941040bba6c611753b8d756310", "title": "Frequency-based anomaly detection for the automotive CAN bus", "text": "The modern automobile is controlled by networked computers. The security of these networks was historically of little concern, but researchers have in recent years demonstrated their many vulnerabilities to attack. As part of a defence against these attacks, we evaluate an anomaly detector for the automotive controller area network (CAN) bus. The majority of attacks are based on inserting extra packets onto the network. But most normal packets arrive at a strict frequency. This motivates an anomaly detector that compares current and historical packet timing. We present an algorithm that measures inter-packet timing over a sliding window. The average times are compared to historical averages to yield an anomaly signal. We evaluate this approach over a range of insertion frequencies and demonstrate the limits of its effectiveness.
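For intuition about the RSSI-based localization the LoRa abstract targets, a common baseline (not the paper's two new algorithms) converts RSSI to range with a log-distance path-loss model and multilaterates by least squares; all model constants below are assumptions.

```python
# RSSI -> range via log-distance path loss, then linearized multilateration.
import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-40.0, n=2.7):
    # Model: RSSI = RSSI_1m - 10*n*log10(d); constants are assumed, not measured.
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

def locate(anchors, rssis):
    d = np.array([rssi_to_distance(r) for r in rssis])
    (x0, y0), d0 = anchors[0], d[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 + yi**2 - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]   # gateway positions, meters
rssis = [-75.0, -82.0, -80.0, -88.0]                 # noisy measurements
print(locate(anchors, rssis))                        # estimated (x, y)
```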
We also show how a similar measure of the data contents of packets is not effective for identifying anomalies. Finally, we show how a one-class support vector machine can use the same information to detect anomalies with high confidence."} {"_id": "0299e59b518fcc29f08c88701e64a3e1621172fe", "title": "Graph2Seq: Scalable Learning Dynamics for Graphs", "text": "Neural networks have been shown to be an effective tool for learning algorithms over graph-structured data. However, graph representation techniques\u2014that convert graphs to real-valued vectors for use with neural networks\u2014are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but these methods have difficulty scaling and generalizing to graphs with different sizes and shapes. We present GRAPH2SEQ, a new technique that represents vertices of graphs as infinite time-series. By not limiting the representation to a fixed dimension, GRAPH2SEQ scales naturally to graphs of arbitrary sizes and shapes. GRAPH2SEQ is also reversible, allowing full recovery of the graph structure from the sequences. By analyzing a formal computational model for graph representation, we show that an unbounded sequence is necessary for scalability. Our experimental results with GRAPH2SEQ show strong generalization and new state-of-the-art performance on a variety of graph combinatorial optimization problems."} {"_id": "c43d8a3d36973e3b830684e80a035bbb6856bcf7", "title": "Image Super-Resolution Using Very Deep Residual Channel Attention Networks", "text": "Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods."} {"_id": "dd7993922ac2accbb5de67c8fe1dbc6f53d632c1", "title": "Riderless bicycle with gyroscopic balancer controlled by FSMC and AFSMC", "text": "A riderless bicycle has been developed with a gyroscopic balancer controlled by a Fuzzy Sliding Mode Controller (FSMC) and an Adaptive Fuzzy Sliding Mode Controller (AFSMC). The FSMC controller was first implemented because it has better performance on controlling nonlinear systems than PID control. The FSMC can also reduce the chattering phenomenon caused by SMC and the effect of linearizing a nonlinear system. Compared with other balancers, the gyroscopic balancer has a couple of advantages, such as faster system response, lower mass ratio of balancer to bicycle, and relatively larger moment.
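A hedged sketch of the sliding-window inter-packet timing detector described in the CAN abstract above; the window length and deviation threshold are illustrative assumptions.

```python
# Per-CAN-ID sliding-window inter-arrival-time detector (sketch).
from collections import defaultdict, deque

class TimingAnomalyDetector:
    def __init__(self, window=20, tolerance=0.5):
        self.tolerance = tolerance                 # fractional deviation allowed
        self.last_seen = {}
        self.gaps = defaultdict(lambda: deque(maxlen=window))
        self.baseline = {}                         # learned mean gap per CAN ID

    def learn(self, can_id, mean_gap):
        self.baseline[can_id] = mean_gap

    def observe(self, can_id, timestamp):
        """Return True when the current window's timing looks anomalous."""
        if can_id in self.last_seen:
            self.gaps[can_id].append(timestamp - self.last_seen[can_id])
        self.last_seen[can_id] = timestamp
        gaps = self.gaps[can_id]
        if can_id not in self.baseline or len(gaps) < gaps.maxlen:
            return False
        mean_gap = sum(gaps) / len(gaps)
        deviation = abs(mean_gap - self.baseline[can_id]) / self.baseline[can_id]
        return deviation > self.tolerance          # e.g., halved gap => insertion

det = TimingAnomalyDetector()
det.learn(0x120, 0.010)   # this ID normally arrives every 10 ms
```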
To demonstrate the attributes stated above, we designed and conducted experiments, including balancing an unmoving bicycle, an unmoving bicycle with external impacts, and a bicycle moving forward and turning. The experimental results show that the bicycle can overcome jolts, uneven terrain, and external disturbances. Furthermore, since the experimental results are consistent with those of the simulation, they validate the derived bicycle dynamics model with the gyroscopic balancer and prove its robustness. However, the system's ability to resist continuous disturbance is not strong enough because of the limitation on the tilt angle of the gyroscopic balancer. Hence, we modified the control strategy by using AFSMC despite the fact that the FSMC performed better than PID control. The simulations in Section IV show that the AFSMC has better performance at resisting continuous disturbances than the FSMC does. Furthermore, its ability to balance an unmoving or moving bicycle is no worse than that of the FSMC. Thus, the AFSMC is employed to replace the FSMC. The adaptive law and estimation law of the AFSMC are designed based on a Lyapunov function to ensure the stability of the system. Experiments of the bicycle controlled by AFSMC are currently being conducted."} {"_id": "9c0e62bec042abcdc8bce4449c2c085b03d6a567", "title": "Can visual fixation patterns improve image fidelity assessment?", "text": "This paper presents the results of a computational experiment designed to investigate the extent to which metrics of image fidelity can be improved through knowledge of where humans tend to fixate in images. Five common metrics of image fidelity were augmented using two sets of fixation data, one set obtained under task-free viewing conditions and another set obtained when viewers were asked to judge image quality. The augmented metrics were then compared to subjective ratings of the images. The results show that most metrics can be improved using eye fixation data, but a greater improvement was found using fixations obtained in the task-free condition."} {"_id": "6a1da83440c7685f5a03e7bda17be9025e0892e3", "title": "Semantic Match Consistency for Long-Term Visual Localization", "text": "Robust and accurate visual localization across large appearance variations due to changes in time of day, seasons, or changes of the environment is a challenging problem which is of importance to application areas such as navigation of autonomous robots. Traditional feature-based methods often struggle in these conditions due to the significant number of erroneous matches between the image and the 3D model. In this paper, we present a method for scoring the individual correspondences by exploiting semantic information about the query image and the scene. In this way, erroneous correspondences tend to get a low semantic consistency score, whereas correct correspondences tend to get a high score. By incorporating this information in a standard localization pipeline, we show that the localization performance can be significantly improved compared to the state-of-the-art, as evaluated on two challenging long-term localization benchmarks."} {"_id": "665cf8ff3a221a3468c6d4fc34c903cd42f236ae", "title": "Stabilization and Path Following of a Spherical Robot", "text": "In this paper, we present a spherical mobile robot, BYQ_III, for planetary surface exploration and security tasks.
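The augmentation idea in the fixation study above can be illustrated by weighting a simple fidelity metric with a fixation density map; the synthetic map below stands in for recorded eye-tracking data.

```python
# Plain vs. fixation-weighted MSE: errors in fixated regions count for more.
import numpy as np

def weighted_mse(ref, test, fixation_map):
    w = fixation_map / fixation_map.sum()       # normalize to a density
    return float((w * (ref.astype(float) - test.astype(float)) ** 2).sum())

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
test = ref + rng.normal(0, 5, (64, 64))          # uniform distortion
fix = np.zeros((64, 64)); fix[24:40, 24:40] = 1  # viewers fixate the center
print(weighted_mse(ref, test, fix))              # emphasizes fixated errors
```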
The driving torque for the rolling robot is generated by a new type of mechanism equipped with a counter-pendulum. This robot is nonholonomic in nature, and underactuated. In this paper, the three-dimensional (3-D) nonlinear dynamic model is developed, then decoupled into the longitudinal and lateral motions by linearization. Two sliding-mode controllers are proposed to asymptotically stabilize the tracking errors in lean angle and spinning angular velocity, respectively, and indirectly to stabilize the desired path curvature, because the robot steers only by leaning itself to a predefined angle. For the task of path following, a path curvature controller, based on a geometrical notion, is employed. Stability and performance analyses are performed, and the effectiveness of the controllers is shown by numerical simulations. To the best of the authors' knowledge, similar results have not been obtained by previous spherical robot control systems based on the dynamics. The work is of significance for understanding and developing this type of motion planning and control for nonholonomic systems."} {"_id": "a3f4b702a3b273a3b9e77b286f8ce32091271201", "title": "Dynamics simulation toolbox for industrial robot manipulators", "text": "A new robot toolbox for dynamics simulation based on a MATLAB Graphical User Interface (GUI) is developed for educational purposes. It is built on the previous version, named ROBOLAB, which performed only kinematic analysis of robot manipulators. The toolbox presented in this paper provides interactive real-time simulation and visualization of the industrial robot manipulator\u2019s dynamics based on Lagrange-Euler and Newton-Euler formulations. Since most of the industrial robot manipulators are produced with six degrees of freedom (DOF), such as the PUMA 560, the Fanuc ArcMate 120iB, and the Stanford Arm, the library of the toolbox includes sixteen fundamental 6-DOF robot manipulators with Euler wrist. The software can be used to interactively perform robotics analysis and off-line programming of robot dynamics, such as forward and inverse dynamics, as well as to interactively teach and simulate the basic principles of robot dynamics in a very realistic way. To better demonstrate the user-friendly features of the toolbox, a simulation of the NS robot manipulator (Stanford Arm) is provided as an example. \u00a9 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 319-330, 2010; published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20262"} {"_id": "670f313246b21c6f5fa9558b6b559f7fb086aa30", "title": "Energy-Efficient Data Collection in UAV Enabled Wireless Sensor Network", "text": "In wireless sensor networks, utilizing the unmanned aerial vehicle (UAV) as a mobile data collector for the sensor nodes (SNs) is an energy-efficient technique to prolong the network lifetime. In this letter, considering a general fading channel model for the SN-UAV links, we jointly optimize the SNs\u2019 wake-up schedule and the UAV\u2019s trajectory to minimize the maximum energy consumption of all SNs, while ensuring that the required amount of data is collected reliably from each SN. We formulate our design as a mixed-integer non-convex optimization problem. By applying the successive convex optimization technique, an efficient iterative algorithm is proposed to find a sub-optimal solution.
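For the sliding-mode controllers mentioned in the spherical-robot abstract above, a single control step has this generic shape; the gains and the tanh smoothing of sign() are assumptions for illustration, not the paper's design.

```python
# One generic sliding-mode control sample for a tracking error e, de.
import math

def smc_step(e, de, lam=2.0, k=5.0, phi=0.05):
    s = de + lam * e                   # sliding surface s = 0 defines the goal
    return -k * math.tanh(s / phi)     # smoothed switching term limits chattering

# Example: 0.1 rad lean-angle error with a small error rate.
u = smc_step(e=0.1, de=-0.02)
print(f"commanded torque: {u:.3f}")
```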
Numerical results show that the proposed scheme achieves significant network energy saving as compared to benchmark schemes."} {"_id": "29cb1e90d1e08f0990925c8d4e8d00fa5fa49100", "title": "Towards Reading Hidden Emotions: A Comparative Study of Spontaneous Micro-Expression Spotting and Recognition Methods", "text": "Micro-expressions (MEs) are rapid, involuntary facial expressions which reveal emotions that people do not intend to show. Studying MEs is valuable as recognizing them has many important applications, particularly in forensic science and psychotherapy. However, analyzing spontaneous MEs is very challenging due to their short duration and low intensity. Automatic ME analysis includes two tasks: ME spotting and ME recognition. For ME spotting, previous studies have focused on posed rather than spontaneous videos. For ME recognition, the performance of previous studies is low. To address these challenges, we make the following contributions: (i)\u00a0We propose the first method for spotting spontaneous MEs in long videos (by exploiting feature difference contrast). This method is training free and works on arbitrary unseen videos. (ii)\u00a0We present an advanced ME recognition framework, which outperforms previous work by a large margin on two challenging spontaneous ME databases (SMIC and CASMEII). (iii)\u00a0We propose the first automatic ME analysis system (MESR), which can spot and recognize MEs from spontaneous video data. Finally, we show our method outperforms humans in the ME recognition task by a large margin, and achieves comparable performance to humans at the very challenging task of spotting and then recognizing spontaneous MEs."} {"_id": "fa7bf12a18250018c6b38dcd5ef879a7a05a4c09", "title": "Is endoscopic polypectomy an adequate therapy for malignant colorectal adenomas? Presentation of 114 patients and review of the literature.", "text": "PURPOSE\nThis study was designed to evaluate the outcome of endoscopic polypectomy of malignant polyps with and without subsequent surgery based on histologic criteria.\n\n\nMETHODS\nConsecutive patients with invasive carcinoma in colorectal polyps endoscopically removed between 1985 and 1996 were retrospectively studied. Patients with complete resection, grading G1 or G2, and absence of vascular invasion were classified as \"low risk.\" The other patients were classified as \"high risk.\" Available literature was reviewed by applying similar classification criteria.\n\n\nRESULTS\nA total of 114 patients (59 males; median age, 70 (range, 20-92) years) were included. Median polyp size was 2.5 (0.4-10) cm. After polypectomy, of 54 patients with low-risk malignant polyps, 13 died of unrelated causes after a median of 76 months, 5 had no residual tumor at surgery, and 33 were alive and well during a median follow-up of 69 (range, 9-169) months. Of 60 patients with high-risk malignant polyps, 52 had surgery (residual carcinoma 27 percent). Five of the eight patients who did not undergo surgery had an uneventful follow-up of a median of 57 (range, 47-129) months. Patients in the high-risk group were significantly more likely to have an adverse outcome than those in the low-risk group (P < 0.0001). Review of 20 studies including 1,220 patients with malignant polyps revealed no patient with low-risk criteria who had an adverse outcome.\n\n\nCONCLUSIONS\nFor patients with low-risk malignant polyps, endoscopic polypectomy alone seems to be adequate.
In high-risk patients, the risk of adverse outcome should be weighed against the risk of surgery."} {"_id": "77768638f4f400272b6e5970596b127663471538", "title": "A scoping review of scoping reviews: advancing the approach and enhancing the consistency", "text": "BACKGROUND\nThe scoping review has become an increasingly popular approach for synthesizing research evidence. It is a relatively new approach for which a universal study definition or definitive procedure has not been established. The purpose of this scoping review was to provide an overview of scoping reviews in the literature.\n\n\nMETHODS\nA scoping review was conducted using the Arksey and O'Malley framework. A search was conducted in four bibliographic databases and the gray literature to identify scoping review studies. Review selection and characterization were performed by two independent reviewers using pretested forms.\n\n\nRESULTS\nThe search identified 344 scoping reviews published from 1999 to October 2012. The reviews varied in terms of purpose, methodology, and detail of reporting. Nearly three-quarters of the reviews (74.1%) addressed a health topic. Study completion times varied from 2 weeks to 20 months, and 51% utilized a published methodological framework. Quality assessment of included studies was infrequently performed (22.38%).\n\n\nCONCLUSIONS\nScoping reviews are a relatively new but increasingly common approach for mapping broad topics. Because of variability in their conduct, there is a need for their methodological standardization to ensure the utility and strength of evidence."} {"_id": "d6fa2444818889afc4ab22d2e868b3c52de8ec38", "title": "A short literature review on reward-based crowdfunding", "text": "In this short article, we discuss a popular online fundraising method named crowdfunding. One of the most useful types is reward-based crowdfunding, where the returns are tangible products. We review related research from three streams: conceptual research, empirical research, and modelling research. Some possible research directions are also discussed in the paper."} {"_id": "f852227c81240ae7af03a7716053b08ae45e0bdf", "title": "Brain2Object: Printing Your Mind from Brain Signals with Spatial Correlation Embedding", "text": "Electroencephalography (EEG) signals are known to manifest differential patterns when individuals visually concentrate on different objects (e.g., a car). In this work, we present an end-to-end digital fabrication system, Brain2Object, to print the 3D object that an individual is observing by solely decoding visually-evoked EEG brain signal streams. We propose a unified training framework which combines multi-class Common Spatial Pattern and deep Convolutional Neural Networks to support the backend computation. Specifically, a Dynamical Graph Representation of EEG signals is learned for accurately capturing the structured spatial correlations of EEG channels in an adaptive manner. A user-friendly interface is developed as the system front end. Brain2Object presents a streamlined end-to-end workflow which can serve as a template for deeper integration of BCI technologies to assist with our routine activities. The proposed system is evaluated extensively using offline experiments and through an online demonstrator. For the former, we use a rich, widely used public dataset and a limited but locally collected dataset. The experimental results show that our approach consistently outperforms a wide range of baseline and state-of-the-art approaches.
The proof-of-concept corroborates the practicality of our approach and illustrates the ease with which such a system could be deployed."} {"_id": "cce2dc7712e0b1fe8ae1fd0ea3148a5ab3cc3c07", "title": "Analyzing hidden populations online: topic, emotion, and social network of HIV-related users in the largest Chinese online community", "text": "BACKGROUND\nTraditional survey methods are limited in the study of hidden populations due to their hard-to-access properties, including the lack of a sampling frame, sensitivity issues, reporting errors, small sample sizes, etc. The rapid growth of online communities, whose members interact with others via the Internet, has generated large amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. In this study, we try to understand the multidimensional characteristics of a hidden population by analyzing the massive data generated in the online community.\n\n\nMETHODS\nBy carefully designing crawlers, we retrieved a complete dataset from the \"HIV bar,\" the largest bar related to HIV on the Baidu Tieba platform, for all records from January 2005 to August 2016. Through natural language processing and social network analysis, we explored the psychology, behavior, and demands of the online HIV population and examined the network community structure.\n\n\nRESULTS\nIn HIV communities, the average topic similarity among members is positively correlated to network efficiency (r\u2009=\u20090.70, p\u2009<\u20090.001), indicating that the closer the social distance between members of the community, the more similar their topics. The proportion of negative users in each community is around 60%, weakly correlated with community size (r\u2009=\u20090.25, p\u2009=\u20090.002). It is found that users who suspect an initial HIV infection or have recently engaged in high-risk behaviors tend to seek help and advice on the social networking platform, rather than immediately going to a hospital for blood tests.\n\n\nCONCLUSIONS\nOnline communities have generated copious amounts of data offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. It is recommended that support through online services for HIV/AIDS consultation and diagnosis be improved to avoid privacy concerns and social discrimination in China."} {"_id": "1b9d022273780c5b0b7522555bd0e2c626a38e77", "title": "Locally Adaptive Color Correction for Underwater Image Dehazing and Matching", "text": "Underwater images are known to be strongly deteriorated by a combination of wavelength-dependent light attenuation and scattering. This results in complex color casts that depend both on the scene depth map and on the light spectrum. Color transfer, which is a technique of choice to counterbalance color casts, assumes stationary casts, defined by global parameters, and is therefore not directly applicable to the locally variable color casts encountered in underwater scenarios. To fill this gap, this paper introduces an original fusion-based strategy to exploit color transfer while tuning the color correction locally, as a function of the light attenuation level estimated from the red channel. The Dark Channel Prior (DCP) is then used to restore the color-compensated image, by inverting the simplified Koschmieder light transmission model, as for outdoor dehazing. Our technique enhances image contrast in a quite effective manner and also supports accurate transmission map estimation.
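The multi-class Common Spatial Pattern stage that Brain2Object builds on reduces, in the two-class case, to a generalized eigendecomposition of class covariance matrices; a hedged sketch with synthetic data (shapes and sizes are illustrative):

```python
# Two-class CSP: spatial filters from a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(X):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; extreme eigenvalues separate the classes.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, picks].T                  # (n_filters, n_channels)

rng = np.random.default_rng(1)
a = rng.normal(size=(30, 8, 256))            # 30 trials, 8 channels, 256 samples
b = rng.normal(size=(30, 8, 256)) * 1.5
W = csp_filters(a, b)
features = np.log(np.var(W @ a[0], axis=1))  # classic log-variance CSP features
print(features)
```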
Our extensive experiments also show that our color correction strongly improves the effectiveness of local keypoint matching."} {"_id": "20f4a023782606b96d295f466e18485b0582cb5f", "title": "Overview of the Stereo and Multiview Video Coding Extensions of the H.264/MPEG-4 AVC Standard", "text": "Significant improvements in video compression capability have been demonstrated with the introduction of the H.264/MPEG-4 advanced video coding (AVC) standard. Since developing this standard, the Joint Video Team of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) has also standardized an extension of that technology that is referred to as multiview video coding (MVC). MVC provides a compact representation for multiple views of a video scene, such as multiple synchronized video cameras. Stereo-paired video for 3-D viewing is an important special case of MVC. The standard enables inter-view prediction to improve compression capability, as well as supporting ordinary temporal and spatial prediction. It also supports backward compatibility with existing legacy systems by structuring the MVC bitstream to include a compatible \u201cbase view.\u201d Each other view is encoded at the same picture resolution as the base view. In recognition of its high-quality encoding capability and support for backward compatibility, the stereo high profile of the MVC extension was selected by the Blu-Ray Disc Association as the coding format for 3-D video with high-definition resolution. This paper provides an overview of the algorithmic design used for extending H.264/MPEG-4 AVC towards MVC. The basic approach of MVC for enabling inter-view prediction and view scalability in the context of H.264/MPEG-4 AVC is reviewed. Related supplemental enhancement information (SEI) metadata is also described. Various \u201cframe compatible\u201d approaches for support of stereo-view video as an alternative to MVC are also discussed. A summary of the coding performance achieved by MVC for both stereo- and multiview video is also provided. Future directions and challenges related to 3-D video are also briefly discussed."} {"_id": "3b13533495ec04b7b263d9bdf82372959c9d87e6", "title": "Overview of the High Efficiency Video Coding (HEVC) Standard", "text": "High Efficiency Video Coding (HEVC) is currently being prepared as the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of the HEVC standardization effort is to enable significantly improved compression performance relative to existing standards, in the range of 50% bit-rate reduction for equal perceptual video quality. This paper provides an overview of the technical features and characteristics of the HEVC standard."} {"_id": "5c1e5e87a7cb833b046222bf631f9063c9926680", "title": "H.264/AVC over IP", "text": "H.264 is the ITU-T\u2019s new, nonbackward compatible video compression Recommendation that significantly outperforms all previous video compression standards. It consists of a video coding layer (VCL) which performs all the classic signal processing tasks and generates bit strings containing coded macroblocks, and a network adaptation layer (NAL) which adapts those bit strings in a network-friendly way. The paper describes the use of H.264 coded video over best-effort IP networks, using RTP as the real-time transport protocol.
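A rough sketch of the two stages named in the underwater abstract above: a red-channel compensation (one common formulation, not necessarily the paper's fusion-based scheme) followed by a Dark Channel Prior transmission estimate; the gain alpha and patch size are assumptions.

```python
# Red-channel compensation + DCP transmission estimate (simplified, unoptimized).
import numpy as np

def compensate_red(img, alpha=1.0):
    """img: float RGB in [0,1]. Boost red where attenuation left it weak."""
    r, g = img[..., 0], img[..., 1]
    comp = r + alpha * (g.mean() - r.mean()) * (1 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(comp, 0, 1)
    return out

def dark_channel(img, patch=15):
    m = img.min(axis=2)                       # min over color channels
    pad = patch // 2
    padded = np.pad(m, pad, mode="edge")
    h, w = m.shape
    return np.min([padded[i:i+h, j:j+w]       # min over the patch neighborhood
                   for i in range(patch) for j in range(patch)], axis=0)

def transmission(img, omega=0.95):
    return 1 - omega * dark_channel(img)      # Koschmieder-model inversion input

img = np.random.default_rng(2).random((64, 64, 3))
t = transmission(compensate_red(img))
print(t.min(), t.max())
```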
After the description of the environment, the error-resilience tools of H.264 and the draft specification of the RTP payload format are introduced. Next, the performance of several possible VCL- and NAL-based error-resilience tools of H.264 is verified in simulations."} {"_id": "0c8cac66c4ce059493b7b9de0abee703e44d5563", "title": "Comparison of the Coding Efficiency of Video Coding Standards\u2014Including High Efficiency Video Coding (HEVC)", "text": "The compression capability of several generations of video coding standards is compared by means of peak signal-to-noise ratio (PSNR) and subjective testing results. A unified approach is applied to the analysis of designs, including H.262/MPEG-2 Video, H.263, MPEG-4 Visual, H.264/MPEG-4 Advanced Video Coding (AVC), and High Efficiency Video Coding (HEVC). The results of subjective tests for WVGA and HD sequences indicate that HEVC encoders can achieve equivalent subjective reproduction quality as encoders that conform to H.264/MPEG-4 AVC when using approximately 50% less bit rate on average. The HEVC design is shown to be especially effective for low bit rates, high-resolution video content, and low-delay communication applications. The measured subjective improvement somewhat exceeds the improvement measured by the PSNR metric."} {"_id": "2d22229794c2bd70c5bd1b1e4f004eb5864627a9", "title": "The Quadtree and Related Hierarchical Data Structures", "text": "A tutorial survey is presented of the quadtree and related hierarchical data structures. They are based on the principle of recursive decomposition. The emphasis is on the representation of data used in applications in image processing, computer graphics, geographic information systems, and robotics. There is a greater emphasis on region data (i.e., two-dimensional shapes) and to a lesser extent on point, curvilinear, and three-dimensional data. A number of operations in which such data structures find use are examined in greater detail."} {"_id": "08ae3f221339feb8230ae15647df51e7a5b5a13a", "title": "Software developers' perceptions of productivity", "text": "The better the software development community becomes at creating software, the more software the world seems to demand. Although there is a large body of research about measuring and investigating productivity from an organizational point of view, there is a paucity of research about how software developers, those at the front-line of software construction, think about, assess, and try to improve their productivity. To investigate software developers' perceptions of software development productivity, we conducted two studies: a survey with 379 professional software developers to help elicit themes and an observational study with 11 professional software developers to investigate emergent themes in more detail. In both studies, we found that developers perceive their days as productive when they complete many or big tasks without significant interruptions or context switches. Yet, the observational data we collected shows our participants performed significant task and activity switching while still feeling productive.
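The PSNR measure underpinning the codec comparison above is straightforward to compute for 8-bit frames; matching PSNR at roughly half the bit rate is what a ~50% coding-efficiency gain means in practice.

```python
# PSNR = 10*log10(MAX^2 / MSE) for 8-bit frames.
import numpy as np

def psnr(ref, test, max_val=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(3)
frame = rng.integers(0, 256, (144, 176))                        # QCIF-sized luma
decoded = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255)
print(f"{psnr(frame, decoded):.2f} dB")
```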
We analyze such apparent contradictions in our findings and use the analysis to propose ways to better support software developers in reflecting on and improving their productivity, through the development of new tools and the sharing of best practices."} {"_id": "485f15282478c0f72b359d3e1eb72aaaaea540d2", "title": "\"The part of me that you bring out\": ideal similarity and the Michelangelo phenomenon.", "text": "This work examines the Michelangelo phenomenon, an interpersonal model of the means by which people move closer to (vs. further from) their ideal selves. The authors propose that partner similarity--similarity to the ideal self, in particular--plays an important role in this process. Across 4 studies employing diverse designs and measurement techniques, they observed consistent evidence that when partners possess key elements of one another's ideal selves, each person affirms the other by eliciting important aspects of the other's ideals, each person moves closer to his or her ideal self, and couple well-being is enhanced. Partner similarity to the actual self also accounts for unique variance in key elements of this model. The associations of ideal similarity and actual similarity with couple well-being are fully attributable to the Michelangelo process, to partner affirmation and target movement toward the ideal self. The authors also performed auxiliary analyses to rule out several alternative interpretations of these findings."} {"_id": "0ef948b3139a41d3e291e20d67ed2bdbba290f65", "title": "Wrong, but useful: regional species distribution models may not be improved by range\u2010wide data under biased sampling", "text": "Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor (\"prior\") in regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whichever prior was used, making the use of priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche.
However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data-poor regions."} {"_id": "03f723759694daf290c723154fbe627cf90fb8f2", "title": "Autonomous Flight for Detection, Localization, and Tracking of Moving Targets With a Small Quadrotor", "text": "In this letter, we address the autonomous flight of a small quadrotor, enabling tracking of a moving object. The 15-cm diameter, 250-g robot relies only on onboard sensors (a single camera and an inertial measurement unit) and computers, and can detect, localize, and track moving objects. Our key contributions include the relative pose estimate of a spherical target as well as the planning algorithm, which considers the dynamics of the underactuated robot, the actuator limitations, and the field of view constraints. We show simulation and experimental results to demonstrate feasibility and performance, as well as robustness to abrupt variations in target motion."} {"_id": "2d76d34ee18340eeced16f7735405b97d3980c5f", "title": "Inferring Dockless Shared Bike Distribution in New Cities", "text": "Recently, dockless shared bike services have achieved great success and reinvented bike sharing business in China. When expanding bike sharing business into a new city, most start-ups wish to find out how to cover the whole city with a suitable bike distribution. In this paper, we study the problem of inferring bike distribution in new cities, which is challenging. As no dockless bikes are deployed in the new city, we propose to learn insights on bike distribution from cities populated with dockless bikes. We exploit multi-source data to identify important features that affect bike distributions and develop a novel inference model combining Factor Analysis and Convolutional Neural Network techniques. The extensive experiments on real-life datasets show that the proposed solution provides significantly more accurate inference results compared with competitive prediction methods."} {"_id": "ca5e8a9f1a3447b735af9fc71c489b0b62a50501", "title": "Design and Analysis of a 21\u201329-GHz Ultra-Wideband Receiver Front-End in 0.18-$\\mu$ m CMOS Technology", "text": "This paper reports the design and analysis of a 21-29-GHz CMOS low-noise amplifier (LNA), balun, and mixer in a standard 0.18-\u03bcm CMOS process for ultra-wideband automotive radar systems. To verify the proposed LNA, balun, and mixer architectures, a simplified receiver front-end comprising an LNA, a double-balanced Gilbert-cell-based mixer, and two Marchand baluns was implemented. The wideband Marchand baluns can convert the single RF and local oscillator (LO) signals to nearly perfect differential signals over the 21-29-GHz band. The performance of the mixer is improved with the current-bleeding technique and a parallel resonant inductor at the differential outputs of the RF transconductance stage. Over the 21-29-GHz band, the receiver front-end exhibits an excellent noise figure of 4.6\u00b10.5 dB, conversion gain of 23.7\u00b11.4 dB, RF port reflection coefficient lower than -8.8 dB, LO-IF isolation lower than -47 dB, LO-RF isolation lower than -55 dB, and RF-IF isolation lower than -35.5 dB. The circuit occupies a chip area of 1.25\u00d71.06 mm2, including the test pads.
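A hedged sketch of a presence-only SDM in the spirit of the study above: presence points plus random background points, fit with an elastic-net-penalized logistic regression (a common stand-in for Maxent-style point-process fitting). All data here are synthetic; a real model would use environmental raster layers.

```python
# Elastic-net "SDM" on synthetic presence/background data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
presence = rng.normal(loc=[1.0, 0.5], scale=0.5, size=(100, 2))   # env covariates
background = rng.uniform(-3, 3, size=(1000, 2))                   # pseudo-absences
X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

sdm = LogisticRegression(penalty="elasticnet", l1_ratio=0.5,
                         solver="saga", C=1.0, max_iter=5000)
sdm.fit(X, y)
suitability = sdm.predict_proba(background)[:, 1]   # relative suitability scores
print(suitability[:5])
```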
The dc power dissipation is only 39.2 mW."} {"_id": "ec094cc1b52188a80d16d6466444a498ca6c9161", "title": "Load-Adaptive Modulation of a Series-Resonant Inverter for All-Metal Induction Heating Applications", "text": "Conventional induction heating (IH) systems have been developed to heat pots of ferromagnetic materials, because the small resistance of nonferromagnetic materials induces a large resonant current in the power switch of a series-resonant IH inverter. However, heating capability for various materials is most important for improving the functionality and usability of IH products. In this paper, a load-adaptive modulation (LAM) method is proposed to heat pots made from nonferromagnetic and ferromagnetic materials. The LAM method can change the magnitude of the input voltage of the IH coil and the operating frequency of the series-resonant IH inverter according to the resistance of the pot. The operational principle and the design method are analyzed to implement the proposed LAM method and its power control. The validity of the LAM method is experimentally verified using a 2-kW prototype series-resonant IH inverter."} {"_id": "c91081bb16bfe2ed30a05761e01eccd84b05c537", "title": "BIM: Enabling Sustainability and Asset Management through Knowledge Management", "text": "Building Information Modeling (BIM) is the use of virtual building information models to develop building design solutions and design documentation and to analyse construction processes. Recent advances in IT have enabled advanced knowledge management, which in turn facilitates sustainability and improves asset management in the civil construction industry. There are several important qualifiers and some disadvantages of the current suite of technologies. This paper outlines the benefits, enablers, and barriers associated with BIM and makes suggestions about how these issues may be addressed. The paper highlights the advantages of BIM, particularly the increased utility and speed, enhanced fault finding in all construction phases, and enhanced collaboration and visualisation of data. The paper additionally identifies a range of issues concerning the implementation of BIM, as follows: IP, liability, risks, contracts, and the authenticity of users. Implementing BIM requires investment in new technology, skills training, and the development of new ways of collaboration, and raises Trade Practices concerns. However, when these challenges are overcome, BIM as a new information technology promises a new level of collaborative engineering knowledge management that facilitates sustainability and asset management in design, construction, asset management practices, and eventually decommissioning for the civil engineering industry."} {"_id": "4c5815796c29d44c940830118339e276f741d34a", "title": "Robot Collisions: A Survey on Detection, Isolation, and Identification", "text": "Robot assistants and professional coworkers are becoming a commodity in domestic and industrial settings. In order to enable robots to share their workspace with humans and physically interact with them, fast and reliable handling of possible collisions on the entire robot structure is needed, along with control strategies for safe robot reaction. The primary motivation is the prevention or limitation of possible human injury due to physical contacts.
In this survey paper, based on our early work on the subject, we review, extend, compare, and evaluate experimentally model-based algorithms for real-time collision detection, isolation, and identification that use only proprioceptive sensors. This covers the context-independent phases of the collision event pipeline for robots interacting with the environment, as in physical human\u2013robot interaction or manipulation tasks. The problem is addressed for rigid robots first and then extended to the presence of joint/transmission flexibility. The basic physically motivated solution has already been applied to numerous robotic systems worldwide, ranging from manipulators and humanoids to flying robots, and even to commercial products."} {"_id": "6b30fce40a8a5baae028c50914093d32ba37a60c", "title": "Empathy and social functioning in late adulthood.", "text": "OBJECTIVES\nBoth cognitive and affective empathy are regarded as essential prerequisites for successful social functioning, and recent studies have suggested that cognitive, but not affective, empathy may be adversely affected as a consequence of normal adult aging. This decline in cognitive empathy is of concern, as older adults are particularly susceptible to the negative physical and mental health consequences of loneliness and social isolation.\n\n\nMETHOD\nThe present study compared younger (N = 80) and older (N = 49) adults on measures of cognitive empathy, affective empathy, and social functioning.\n\n\nRESULTS\nWhilst older adults' self-reported and performance-based cognitive empathy was significantly reduced relative to younger adults, there were no age-related differences in affective empathy. Older adults also reported involvement in significantly fewer social activities than younger adults, and cognitive empathy functioned as a partial mediator of this relationship.\n\n\nCONCLUSION\nThese findings are consistent with theoretical models that regard cognitive empathy as an essential prerequisite for good interpersonal functioning. However, the cross-sectional nature of the study leaves open the question of causality for future studies."} {"_id": "59a50612fbdfa3d91eb15f344f726c5b96783803", "title": "Study of the therapeutic effects of a hippotherapy simulator in children with cerebral palsy: a stratified single-blind randomized controlled trial.", "text": "OBJECTIVE\nTo investigate whether hippotherapy (when applied by a simulator) improves postural control and balance in children with cerebral palsy.\n\n\nDESIGN\nStratified single-blind randomized controlled trial with an independent assessor. Stratification was made by Gross Motor Function Classification System levels, and allocation was concealed.\n\n\nSUBJECTS\nChildren between 4 and 18 years old with cerebral palsy.\n\n\nINTERVENTIONS\nParticipants were randomized to an intervention (simulator ON) or control (simulator OFF) group after informed consent was obtained. Treatment was provided once a week (15 minutes) for 10 weeks.\n\n\nMAIN MEASURES\nGross Motor Function Measure (dimension B for balance and the Total Score) and Sitting Assessment Scale were carried out at baseline (prior to randomization), at the end of the intervention, and 12 weeks after completing the intervention.\n\n\nRESULTS\nThirty-eight children participated. The groups were balanced at baseline.
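The proprioceptive, model-based detection surveyed above is often realized with a generalized-momentum observer; reduced to one joint with toy dynamics, the residual r tracks the external torque without acceleration measurements. The gains, dynamics, and threshold below are illustrative assumptions.

```python
# 1-DOF generalized-momentum residual observer for collision detection (sketch).
def simulate(n=2000, dt=0.001, I=1.0, b=0.1, K=50.0, threshold=0.5):
    qd, r, integ = 0.0, 0.0, 0.0
    detections = []
    for k in range(n):
        tau_cmd = 0.2                                # constant drive torque
        tau_ext = 1.0 if 1000 <= k < 1200 else 0.0   # simulated collision window
        qdd = (tau_cmd + tau_ext - b * qd) / I       # toy joint dynamics
        qd += qdd * dt
        p = I * qd                                   # generalized momentum
        integ += (tau_cmd - b * qd + r) * dt
        r = K * (p - integ)                          # residual converges to tau_ext
        detections.append(abs(r) > threshold)
    return detections

hits = simulate()
print("collision flagged in window:", any(hits[1000:1200]))   # True
print("false alarms before impact:", any(hits[:1000]))        # False
```

Here the residual obeys dr/dt = K(tau_ext - r), so r follows the external torque with time constant 1/K, which is what makes the method sensor-light and real-time capable.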
Sitting balance (measured by dimension B of the Gross Motor Function Measure) improved significantly in the treatment group (effect size = 0.36; 95% CI 0.01-0.71) and the effect size was greater in the severely disabled group (effect size = 0.80; 95% CI 0.13-1.47). The improvements in sitting balance were not maintained over the follow-up period. Changes in the total score of the Gross Motor Function Measure and the Sitting Assessment Scale were not significant.\n\n\nCONCLUSION\nHippotherapy with a simulator can improve sitting balance in cerebral palsy children who have higher levels of disability. However, this did not lead to a change in the overall function of these children (Gross Motor Function Classification System level V)."} {"_id": "44032090cbe8fe945da0d605c5ffe85059692dc5", "title": "Pricing in Agent Economies Using Multi-Agent Q-Learning", "text": "This paper investigates how adaptive software agents may utilize reinforcement learning algorithms such as Q-learning to make economic decisions such as setting prices in a competitive marketplace. For a single adaptive agent facing fixed-strategy opponents, ordinary Q-learning is guaranteed to find the optimal policy. However, for a population of agents each trying to adapt in the presence of other adaptive agents, the problem becomes non-stationary and history dependent, and it is not known whether any global convergence will be obtained, and if so, whether such solutions will be optimal. In this paper, we study simultaneous Q-learning by two competing seller agents in three moderately realistic economic models. This is the simplest case in which interesting multi-agent phenomena can occur, and the state space is small enough so that lookup tables can be used to represent the Q-functions. We find that, despite the lack of theoretical guarantees, simultaneous convergence to self-consistent optimal solutions is obtained in each model, at least for small values of the discount parameter. In some cases, exact or approximate convergence is also found even at large discount parameters. We show how the Q-derived policies increase profitability and damp out or eliminate cyclic price \u201cwars\u201d compared to simpler policies based on zero lookahead or short-term lookahead. In one of the models (the \u201cShopbot\u201d model) where the sellers' profit functions are symmetric, we find that Q-learning can produce either symmetric or broken-symmetry policies, depending on the discount parameter and on initial conditions."} {"_id": "560bc93783070c023a5406f6b64e9963db7a76a8", "title": "Generating Synthetic Missing Data: A Review by Missing Mechanism", "text": "The performance evaluation of imputation algorithms often involves the generation of missing values. Missing values can be inserted in only one feature (univariate configuration) or in several features (multivariate configuration) at different percentages (missing rates) and according to distinct missing mechanisms, namely, missing completely at random, missing at random, and missing not at random. Since the missing data generation process defines the basis for the imputation experiments (configuration, missing rate, and missing mechanism), it is essential that it is appropriately applied; otherwise, conclusions derived from ill-defined setups may be invalid. The goal of this paper is to review the different approaches to synthetic missing data generation found in the literature and discuss their practical details, elaborating on their strengths and weaknesses. 
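A toy reconstruction of the simultaneous Q-learning pricing setup described in the agent-economy abstract above: two sellers, three price levels, and a winner-take-most demand model. All economic parameters are simplified assumptions, not the paper's models.

```python
# Two sellers learn prices with Q-learning; state = opponent's last price.
import random

PRICES = [1.0, 1.5, 2.0]
COST = 0.5

def profit(p_self, p_other):
    if p_self < p_other:  return p_self - COST        # undercutting wins the sale
    if p_self == p_other: return (p_self - COST) / 2  # split the market
    return 0.0

def train(episodes=50000, alpha=0.1, gamma=0.5, eps=0.1):
    Q = [[[0.0] * len(PRICES) for _ in range(len(PRICES))] for _ in range(2)]
    last = [0, 0]
    for _ in range(episodes):
        acts = []
        for i in range(2):                            # epsilon-greedy actions
            row = Q[i][last[1 - i]]
            a = random.randrange(len(PRICES)) if random.random() < eps \
                else max(range(len(PRICES)), key=row.__getitem__)
            acts.append(a)
        for i in range(2):                            # simultaneous updates
            s, a, s2 = last[1 - i], acts[i], acts[1 - i]
            rwd = profit(PRICES[acts[i]], PRICES[acts[1 - i]])
            Q[i][s][a] += alpha * (rwd + gamma * max(Q[i][s2]) - Q[i][s][a])
        last = acts
    return Q

Q = train()
print("seller 0 greedy price per opponent state:",
      [PRICES[max(range(len(PRICES)), key=Q[0][s].__getitem__)] for s in range(3)])
```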
Our analysis revealed that creating missing at random and missing not at random scenarios in datasets comprising qualitative features is the most challenging issue in the related work and, therefore, should be the focus of future work in the field."} {"_id": "63138eee4572ffee1d3395866da69c61be453a26", "title": "Driver fatigue detection based on eye tracking and dynamic template matching", "text": "A vision-based real-time driver fatigue detection system is proposed for driving safety. The driver's face is located, from color images captured in a car, by using the characteristics of skin color. Then, edge detection is used to locate the regions of eyes. In addition to being used as the dynamic templates for eye tracking in the next frame, the obtained eye images are also used for fatigue detection in order to generate warning alarms for driving safety. The system is tested on a Pentium III 550 CPU with 128 MB RAM. The experimental results seem quite encouraging and promising. The system can reach 20 frames per second for eye tracking, and the average correct rate for eye location and tracking can achieve 99.1% on four test videos. The correct rate for fatigue detection is 100%, but the average precision rate is 88.9% on the test videos."} {"_id": "47b8c91964f4f4f0c8b63a718cb834ae7fbd200f", "title": "Effectiveness of Credit Management System on Loan Performance : Empirical Evidence from Micro Finance Sector in Kenya", "text": "Microfinance institutions in Kenya experience high levels of non-performing loans. This trend threatens the viability and sustainability of MFIs and hinders the achievement of their goals. This study was aimed at assessing the effectiveness of credit management systems on loan performance of microfinance institutions. Specifically, we sought to establish the effect of credit terms, client appraisal, credit risk control measures, and credit collection policies on loan performance. We adopted a descriptive research design. The respondents were the credit officers of the MFIs in Meru town. Collection policy was found to have a higher effect on loan repayment with =12.74, P=0.000 at 5% significance level. Further research is recommended on the effectiveness of credit referencing on loan performance of MFIs. This study is informative in terms of public policy adjustments and firm-level competences required for better operation of MFIs, and it also contributes to the credit management literature."} {"_id": "89286c76c6f17fd50921584467fe44f793de3d8a", "title": "Fast Cryptographic Computation on Intel\u00ae Architecture Processors Via Function Stitching", "text": "Cryptographic applications often run more than one independent algorithm, such as encryption and authentication. This fact provides a high level of parallelism which can be exploited by software and converted into instruction level parallelism to improve overall performance on modern super-scalar processors. We present fast and efficient methods of computing such pairs of functions on IA processors using a method called \"function stitching\". Instead of computing pairs of functions sequentially as is done today in applications/libraries, we replace the function calls by a single call to a composite function that implements both algorithms. The execution time of this composite function can be made significantly shorter than the sums of the execution times for the individual functions and, in many cases, close to the execution time of the slower function.
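Two of the missing-data mechanisms reviewed in the amputation abstract above can be generated in a few lines; here MCAR masks values uniformly at random, while MAR ties missingness in one feature to the observed rank of another. The rates and rank-based MAR scheme are illustrative choices.

```python
# Synthetic MCAR and MAR amputation of a numeric dataset.
import numpy as np

rng = np.random.default_rng(5)

def amputate_mcar(X, rate=0.2):
    X = X.astype(float).copy()
    X[rng.random(X.shape) < rate] = np.nan          # uniform, value-independent
    return X

def amputate_mar(X, target=0, driver=1, rate=0.2):
    """Missingness in X[:, target] driven by the *observed* X[:, driver]."""
    X = X.astype(float).copy()
    ranks = np.argsort(np.argsort(X[:, driver])) / (len(X) - 1)
    p = rate * 2 * ranks                            # higher driver => more missing
    X[rng.random(len(X)) < p, target] = np.nan
    return X

data = rng.normal(size=(1000, 3))
print(np.isnan(amputate_mcar(data)).mean())         # ~0.2 overall
print(np.isnan(amputate_mar(data)[:, 0]).mean())    # ~0.2 in the target column
```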
Function stitching is best done at a very fine grain, interleaving the code for the individual algorithms at an instruction-level granularity. This results in excellent utilization of the execution resources in the processor core with a single thread. We show how stitching pairs of functions together in a fine-grained manner results in excellent performance on IA processors. Currently, applications perform the functions sequentially. We demonstrate performance gains of 1.4X-1.9X with stitching over the best sequential function performance. We show performance results achieved by this method on the Intel \u00ae processors based on the Westmere architecture."} {"_id": "150b61c6111ae16526ae19b6288b53d1373822ae", "title": "Dual Quaternion Variational Integrator for Rigid Body Dynamic Simulation", "text": "We introduce a symplectic dual quaternion variational integrator (DQVI) for simulating single rigid body motion in all six degrees of freedom. A dual quaternion is used to represent rigid body kinematics, and a one-step Lie group variational integrator is used to conserve the geometric structure, energy and momentum of the system during the simulation. The combination of these two becomes the first Lie group variational integrator for rigid body simulation without decoupling translations and rotations. The Newton-Raphson method is used to solve the recursive dynamic equation. This method is suitable for real-time rigid body simulations with high precision under large time steps. DQVI respects the symplectic structure of the system with excellent long-term conservation of geometric structure, momentum and energy. It also allows the reference point and 6-by-6 inertia matrix to be arbitrarily defined, which is very convenient for a variety of engineering problems."} {"_id": "19f5d33e6814ddab2d17a97a77bb6525db784d35", "title": "Artificial error generation for translation-based grammatical error correction", "text": "Automated grammatical error correction for language learners has attracted a lot of attention in recent years, especially after a number of shared tasks that have encouraged research in the area. Treating the problem as a translation task from 'incorrect' into 'correct' English using statistical machine translation has emerged as a state-of-the-art approach, but it requires vast amounts of corrected parallel data to produce useful results. Because manual annotation of incorrect text is laborious and expensive, we can generate artificial error-annotated data by injecting errors deliberately into correct text and thus produce larger amounts of parallel data with much less effort. In this work, we review previous work on artificial error generation and investigate new approaches using random and probabilistic methods for constrained and general error correction. Our methods use error statistics from a reference corpus of learner writing to generate errors in native text that look realistic and plausible in context. We investigate a number of aspects that can play a part in the error generation process, such as the origin of the native texts, the amount of context used to find suitable insertion points, the type of information encoded by the error patterns and the output error distribution. In addition, we explore the use of linguistic information for characterising errors and train systems using different combinations of real and artificial data.
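The probabilistic injection described above can be sketched in a few lines; the error-pattern table and rates below are hypothetical stand-ins for statistics that would be estimated from a learner corpus:

```python
import random

# Hypothetical error statistics, e.g. estimated from learner writing:
# maps a correct token to (error form, relative frequency) pairs.
ERROR_PATTERNS = {
    "the": [("a", 0.6), ("", 0.4)],     # article substitution / deletion
    "have": [("has", 1.0)],             # agreement error
}

def inject_errors(tokens, error_rate=0.1, rng=random):
    """Replace some native-text tokens with plausible learner errors."""
    out = []
    for tok in tokens:
        patterns = ERROR_PATTERNS.get(tok.lower())
        if patterns and rng.random() < error_rate:
            forms, weights = zip(*patterns)
            err = rng.choices(forms, weights=weights)[0]
            if err:                     # the empty string encodes a deletion
                out.append(err)
        else:
            out.append(tok)
    return out
```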
Results of our experiments show that the use of artificial errors can improve system performance when they are used in combination with real learner errors, in line with previous research. These improvements are observed for both constrained and general correction, for which probabilistic methods produce the best results. We also demonstrate that systems trained on a combination of real and artificial errors can beat other highly-engineered systems and be more robust, showing that performance can be improved by focusing on the data rather than tuning system parameters. Part of our work is also devoted to the proposal of the I-measure, a new evaluation scheme that scores corrections in terms of improvement on the original text and solves known issues with existing evaluation measures."} {"_id": "aa8b058674bc100899be204364e5a2505afba126", "title": "Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection", "text": "Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predict saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low-resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the-art approaches in terms of nearly all compared evaluation metrics."} {"_id": "3f9f0d50e4addcc3b7f9ecf177f1f902308697a2", "title": "2-MHz GaN PWM isolated SEPIC converters", "text": "eGaN HEMTs with the ZVS technique have emerged in high-frequency converters to reduce switching loss. This paper investigates an eGaN HEMT ZVS isolated SEPIC converter running up to 2 MHz. Furthermore, an inductor optimization method is proposed to ensure ZVS and reduce the reverse conduction loss of the synchronous rectifier (SR). The efficiency curve is optimized based on the trade-off between the switching and conduction losses. Moreover, an embedded inductor is proposed to reduce the conduction loss of the inductor. Finally, two 2 MHz eGaN HEMT prototypes, one with a commercial and one with an embedded inductor, with 18\u201336 V input and 5 V/2 A output, are built to verify the effectiveness of the proposed methods.
The converters achieve a peak efficiency of over 88.2% and a power density of 52.9 W/in3, outperforming state-of-the-art power module products."} {"_id": "1a91432b291e06ff70dfb4e1295e93fd0d190472", "title": "Vehicle Tracking, Monitoring and Alerting System: A Review", "text": "The goal of this paper is to review past work on vehicle tracking, monitoring and alerting systems, to categorize the various methodologies, and to identify new trends. Vehicle tracking, monitoring and alerting is a challenging problem. Various challenges are encountered in vehicle tracking, monitoring and alerting due to the difficulty of obtaining accurate real-time vehicle locations and of building a reliable alerting system. GPS (Global Positioning System) is the most widely used technology for tracking vehicles and keeping them under regular monitoring. The objective of a tracking system is to manage and control transport using a GPS transceiver to determine the current location of a vehicle. In a number of systems, RFID (Radio Frequency Identification) is chosen as one of the technologies implemented for bus monitoring. GSM (Global System for Mobile Communication) is the most widely used technology for alerting. An alerting system is essential for providing the location of and information about a vehicle to the passenger, owner or user."} {"_id": "0f90fc86d84ad8757d9d5f007255917c86723bd3", "title": "Recognition of Emotional Face Expressions and Amygdala Pathology *", "text": "The amygdala is often damaged in patients with temporal lobe epilepsy, either because of the primary epileptogenic disease (e.g. sclerosis or encephalitis) or because of secondary effects of surgical interventions (e.g. lobectomy). In humans, the amygdala has been associated with a range of important emotional and social functions, in particular with deficits in emotion recognition from faces. Here we review data from recent neuropsychological research illustrating the amygdala's role in processing facial expressions. We describe behavioural findings subsequent to focal lesions and possible factors that may influence the nature and severity of deficits in patients. Both bilateral and unilateral amygdala damage can impair the recognition of facial emotions, especially fear, but such deficits are not always present and not always specific. Moreover, not all components of emotion or fear processing are impaired after amygdala damage. Dissociations have been reported between deficits in the recognition of emotions from the faces of other people, and intact ability in the production of emotions in one's own facial expression [...]"} {"_id": "6023fc75489419de1d7c15a9b8cf01f27bf31efc", "title": "FPGA Implementations of Neural Networks - A Survey of a Decade of Progress", "text": "The first successful FPGA implementation [1] of artificial neural networks (ANNs) was published a little over a decade ago. It is timely to review the progress that has been made in this research area. This brief survey provides a taxonomy for classifying FPGA implementations of ANNs. Different implementation techniques and design issues are discussed. Future research trends are also presented."} {"_id": "9a972b5919264016faf248b6e14ac51194ff45b2", "title": "Pedestrian detection in crowded scenes", "text": "In this paper, we address the problem of detecting pedestrians in crowded real-world scenes with severe overlaps.
Our basic premise is that this problem is too difficult for any type of model or feature alone. Instead, we present an algorithm that integrates evidence in multiple iterations and from different sources. The core part of our method is the combination of local and global cues via probabilistic top-down segmentation. Altogether, this approach allows examining and comparing object hypotheses with high precision down to the pixel level. Qualitative and quantitative results on a large data set confirm that our method is able to reliably detect pedestrians in crowded scenes, even when they overlap and partially occlude each other. In addition, the flexible nature of our approach allows it to operate on very small training sets."} {"_id": "36cd88ed2c17a596001e9c7d89533ac46c28dec0", "title": "Object Detection Using the Statistics of Parts", "text": "In this paper we describe a trainable object detector and its instantiations for detecting faces and cars at any size, location, and pose. To cope with variation in object orientation, the detector uses multiple classifiers, each spanning a different range of orientation. Each of these classifiers determines whether the object is present at a specified size within a fixed-size image window. To find the object at any location and size, these classifiers scan the image exhaustively. Each classifier is based on the statistics of localized parts. Each part is a transform from a subset of wavelet coefficients to a discrete set of values. Such parts are designed to capture various combinations of locality in space, frequency, and orientation. In building each classifier, we gathered the class-conditional statistics of these part values from representative samples of object and non-object images. We trained each classifier to minimize classification error on the training set by using Adaboost with Confidence-Weighted Predictions (Schapire and Singer, 1999). In detection, each classifier computes the part values within the image window and looks up their associated class-conditional probabilities. The classifier then makes a decision by applying a likelihood ratio test. For efficiency, the classifier evaluates this likelihood ratio in stages. At each stage, the classifier compares the partial likelihood ratio to a threshold and makes a decision about whether to cease evaluation\u2014labeling the input as non-object\u2014or to continue further evaluation. The detector orders these stages of evaluation from a low-resolution to a high-resolution search of the image. Our trainable object detector achieves reliable and efficient detection of human faces and passenger cars with out-of-plane rotation."} {"_id": "469f5b07c8927438b79a081efacea82449b338f8", "title": "Real-Time Object Detection for \"Smart\" Vehicles", "text": "This paper presents an efficient shape-based object detection method based on Distance Transforms and describes its use for real-time vision on-board vehicles. The method uses a template hierarchy to capture the variety of object shapes; efficient hierarchies can be generated offline for given shape distributions using stochastic optimization techniques (i.e. simulated annealing). Online, matching involves a simultaneous coarse-to-fine approach over the shape hierarchy and over the transformation parameters. Very large speedup factors are typically obtained when comparing this approach with the equivalent brute-force formulation; we have measured gains of several orders of magnitude.
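The Distance Transform matching used by this shape-based detector can be prototyped in a few lines. Below is a sketch using scipy, with the binary edge image and the template point list assumed to be given; the template hierarchy and coarse-to-fine search are omitted:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edge_image, template_points):
    """Lower is better: mean distance from each template edge point to the
    nearest image edge, read off a precomputed distance transform."""
    # distance_transform_edt measures distance to the nearest zero pixel,
    # so invert the edge map: edge pixels become zeros
    dt = distance_transform_edt(edge_image == 0)
    ys, xs = template_points[:, 0], template_points[:, 1]  # int (y, x) pairs
    return dt[ys, xs].mean()
```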
We present experimental results on the real-time detection of traffic signs and pedestrians from a moving vehicle. Because of the highly time-sensitive nature of these vision tasks, we also discuss some hardware-specific implementations of the proposed method as far as SIMD parallelism is concerned."} {"_id": "4e3a22ed94c260b9143eee9fdf6d5d6e892ecd8f", "title": "A Performance Evaluation of Local Descriptors", "text": ""} {"_id": "ac3f8372b9d893dbdb7e4b9cd3df5ed825ffb548", "title": "Twitter Sentiment Analysis: Lexicon Method, Machine Learning Method and Their Combination", "text": "This paper covers the two approaches for sentiment analysis: i) the lexicon-based method; ii) the machine learning method. We describe several techniques to implement these approaches and discuss how they can be adopted for sentiment classification of Twitter messages. We present a comparative study of different lexicon combinations and show that enhancing sentiment lexicons with emoticons, abbreviations and social-media slang expressions increases the accuracy of lexicon-based classification for Twitter. We discuss the importance of feature generation and feature selection processes for machine learning sentiment classification. To quantify the performance of the main sentiment analysis methods over Twitter we run these algorithms on a benchmark Twitter dataset from the SemEval-2013 competition, task 2-B. The results show that the machine learning method based on SVM and Naive Bayes classifiers outperforms the lexicon method. We present a new ensemble method that uses a lexicon-based sentiment score as an input feature for the machine learning approach. The combined method proved to produce more precise classifications. We also show that employing a cost-sensitive classifier for highly unbalanced datasets yields an improvement of sentiment classification performance up to 7%."} {"_id": "36203a748889758656f2f8bbdcd1c2cc236f410f", "title": "A Balancing Control Strategy for a One-Wheel Pendulum Robot Based on Dynamic Model Decomposition: Simulations and Experiments", "text": "A dynamics-based posture-balancing control strategy for a new one-wheel pendulum robot (OWPR) is proposed and verified. The OWPR model includes a rolling wheel, a robot body with a steering axis, and a pendulum for lateral balancing. In constructing the dynamic model, three elements are generalized in comparison with existing robotic systems: the mass and inertia of the robot body, the \u201cI\u201d-type pendulum, and the steering motion. The dynamics of the robot are derived using a Lagrangian formulation to represent the torques of the wheel during the rolling, yawing, and pitching of the robot body, in terms of the control inputs. The OWPR dynamics are decomposed into state-space models for lateral balancing and steering, and the corresponding controller is designed to be adaptive to changes in the state variables. Simulations and experimental studies are presented that demonstrate and verify the efficiency of the proposed models and the control algorithm."} {"_id": "d2c25b7c43abe6ab9162a01d63ab3bde7572024b", "title": "Implementation of the State of Charge Estimation with Adaptive Extended Kalman Filter for Lithium-Ion Batteries by Arduino", "text": "This study considers the use of Arduino to achieve state of charge (SOC) estimation of lithium-ion batteries by adaptive extended Kalman filter (AEKF). To implement a SOC estimator for the lithium-ion battery, we adopt a first-order RC equivalent circuit as the equivalent circuit model (ECM) of the battery.
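As background for such a SOC estimator, the first-order RC equivalent circuit can be simulated directly. Below is a sketch with illustrative parameter names, using the standard exact discretization of the RC branch; the AEKF itself is not shown, and a real OCV(SOC) curve would come from the identification experiments:

```python
import numpy as np

def simulate_ecm(i_load, dt, capacity_As, r0, r1, c1, ocv_of_soc, soc0=1.0):
    """Discrete-time first-order RC battery model (sketch).
    i_load > 0 means discharge; ocv_of_soc is a callable OCV(SOC) curve;
    capacity_As is the cell capacity in ampere-seconds."""
    soc, v_rc = soc0, 0.0
    a = np.exp(-dt / (r1 * c1))          # RC branch relaxation over one step
    terminal_v = []
    for i in i_load:
        soc -= i * dt / capacity_As      # coulomb counting
        v_rc = a * v_rc + r1 * (1 - a) * i
        terminal_v.append(ocv_of_soc(soc) - r0 * i - v_rc)
    return np.array(terminal_v)
```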
The parameters of the ECM will be identified through the designed experiments, approximated by piecewise linear functions, and then built into the Arduino. The AEKF algorithm will also be programmed into the Arduino to estimate the SOC. To verify the accuracy of the SOC estimation, some lithium-ion batteries are tested at room temperature. Experimental results show that the absolute value of the steady-state SOC estimation error is small."} {"_id": "5a4b1c95e098da013cbcd4149f177174765f4881", "title": "Multi-view Scene Flow Estimation: A View Centered Variational Approach", "text": "We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images\u2019 coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem."} {"_id": "4a28c3f80d341f9de6a8f2030d70cf11d5f531f2", "title": "B-ROC Curves for the Assessment of Classifiers over Imbalanced Data Sets", "text": "The class imbalance problem appears to be ubiquitous to a large portion of the machine learning and data mining communities. One of the key questions in this setting is how to evaluate the learning algorithms in the case of class imbalances. In this paper we introduce the Bayesian Receiver Operating Characteristic (B-ROC) curves as a set of tradeoff curves that combine, in an intuitive way, the variables that are most relevant to the evaluation of classifiers over imbalanced data sets. This presentation is based on section 4 of (C\u00e1rdenas, Baras, & Seamon 2006). Introduction The term class imbalance refers to the case when in a classification task, there are many more instances of some classes than others. The problem is that under this setting, classifiers in general perform poorly because they tend to concentrate on the large classes and disregard the ones with few examples. Given that this problem is prevalent in a wide range of practical classification problems, there has been recent interest in trying to design and evaluate classifiers faced with imbalanced data sets (Japkowicz 2000; Chawla, Japkowicz, & Ko\u0142cz 2003; Chawla, Japkowicz, & Ko\u0142cz 2004). A number of approaches on how to address these issues have been proposed in the literature. Ideas such as data sampling methods, one-class learning (i.e. recognition-based learning), and feature selection algorithms, appear to be the most active research directions for learning classifiers.
On the other hand, the issue of how to evaluate binary classifiers in the case of class imbalances appears to be dominated by the use of ROC curves (Ferri et al. 2004; 2005) (and to a lesser extent, by error curves (Drummond & Holte 2001)). The class imbalance problem is of particular importance in intrusion detection systems (IDSs). In this paper we present and expand some of the ideas introduced in our research for the evaluation of IDSs (C\u00e1rdenas, Baras, & Seamon 2006). In particular we claim that for heavily imbalanced data sets, ROC curves cannot provide the necessary intuition for the choice of the operational point of the classifier and therefore we introduce the Bayesian-ROCs (B-ROCs). Furthermore we demonstrate how B-ROCs can deal with the uncertainty of class distributions by displaying the performance of the classifier under different conditions. Finally, we also show how B-ROCs can be used for comparing classifiers without any assumptions of misclassification costs. Performance Tradeoffs Before we present our formulation we need to introduce some notation and definitions. Assume that the input to the classifier is a feature-vector x. Let C be an indicator random variable denoting whether x belongs to class zero: C = 0 (the majority class) or class one: C = 1 (the minority class). The output of the classifier is denoted by A = 1 if the classifier assigns x to class one, and A = 0 if the classifier assigns x to class zero. Finally, the class imbalance problem is quantified by the probability of a positive example p = Pr[C = 1]. Most classifiers subject to the class imbalance problem are evaluated with the help of ROC curves. ROC curves are a tool to visualize the tradeoff between the probability of false alarm PFA \u2261 Pr[A = 1|C = 0] and the probability of detection PD \u2261 Pr[A = 1|C = 1]. Of interest to us in the intrusion detection community is that classifiers with ROC curves achieving traditionally \u201cgood\u201d operating points such as (PFA = 0.01, PD = 1) would still generate a huge amount of false alarms in realistic scenarios. This effect is due in part to the class imbalance problem, since one of the causes for the large amount of false alarms that IDSs generate is the enormous difference between the large amount of normal activity compared to the small amount of intrusion events. The reasoning is that because the likelihood of an attack is very small, even if an IDS fires an alarm, the likelihood of having an intrusion remains relatively small. That is, when we compute the posterior probability of intrusion given that the IDS fired an alarm (a quantity known as the Bayesian detection rate, or the positive predictive value (PPV)), we obtain: PPV \u2261 Pr[C = 1|A = 1] = pPD / (pPD + (1 \u2212 p)PFA) (1) Therefore, if the rate of incidence of an attack is very small, for example on average only 1 out of 10^5 events is an attack"}
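Equation (1) is easy to evaluate numerically; the sketch below reproduces the base-rate effect just described, with illustrative operating-point values:

```python
def ppv(p_attack, p_d, p_fa):
    """Bayesian detection rate Pr[C=1 | A=1], Eq. (1) above."""
    return (p_attack * p_d) / (p_attack * p_d + (1 - p_attack) * p_fa)

# With a base rate of 1 attack in 10^5 events, even a detector with
# PD = 1.0 and PFA = 0.01 yields a tiny posterior:
print(ppv(1e-5, 1.0, 0.01))   # ~0.000999, i.e. >99.9% of alarms are false
```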
{"_id": "5c7efb061bd14cd8e41ed7314967b4e13347bbad", "title": "Data-driven models for monthly streamflow time series prediction", "text": "C. L. Wu and K. W. Chau*, Dept. of Civil and Structural Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, People\u2019s Republic of China. *Email: cekwchau@polyu.edu.hk ABSTRACT Data-driven techniques such as Auto-Regressive Moving Average (ARMA), K-Nearest-Neighbors (KNN), and Artificial Neural Networks (ANN) are widely applied to hydrologic time series prediction. This paper investigates different data-driven models to determine the optimal approach to predicting monthly streamflow time series. Four sets of data from different locations in the People\u2019s Republic of China (Xiangjiaba, Cuntan, Manwan, and Danjiangkou) are used in the investigation. Correlation integral and False Nearest Neighbors (FNN) are first employed for Phase Space Reconstruction (PSR). Four models, ARMA, ANN, KNN, and Phase Space Reconstruction-based Artificial Neural Networks (ANN-PSR), are then compared by one-month-ahead forecasts using Cuntan and Danjiangkou data. The KNN model performs the best among the four models, but exhibits only weak superiority to ARMA. Further analysis demonstrates that a low correlation between model inputs and outputs could be the main reason restricting the power of ANN. A Moving Average Artificial Neural Network (MA-ANN), using the moving average of the streamflow series as inputs, is also proposed in this study. The results show that the MA-ANN significantly improves forecast accuracy compared with the original four models. This is mainly due to the improvement of the correlation between inputs and outputs produced by the moving average operation. The optimal memory lengths of the moving average were three and six for Cuntan and Danjiangkou respectively, when the optimal model inputs are recognized as the previous twelve months."} {"_id": "f8c90c6549b97934da4fcdafe0012cea95cc443c", "title": "State-of-the-Art Predictive Maintenance Techniques*", "text": "This paper discusses the limitations of time-based equipment maintenance methods and the advantages of predictive or online maintenance techniques in identifying the onset of equipment failure. The three major predictive maintenance techniques, defined in terms of their source of data, are described as follows: 1) the existing sensor-based technique; 2) the test-sensor-based technique (including wireless sensors); and 3) the test-signal-based technique (including the loop current step response method, the time-domain reflectometry test, and the inductance-capacitance-resistance test). Examples of detecting blockages in pressure sensing lines using existing sensor-based techniques and of verifying calibration using existing-sensor direct current output are given. Three Department of Energy (DOE)-sponsored projects, whose aim is to develop online and wireless hardware and software systems for performing predictive maintenance on critical equipment in nuclear power plants, DOE research reactors, and general industrial applications, are described."} {"_id": "601e0049eccf0d5e69c9d246b051f4c290d1e26b", "title": "Towards Accurate Duplicate Bug Retrieval Using Deep Learning Techniques", "text": "Duplicate Bug Detection is the problem of identifying whether a newly reported bug is a duplicate of an existing bug in the system and retrieving the original or similar bugs from the past. This is required to avoid costly rediscovery and redundant work.
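Retrieval in such systems typically reduces to nearest-neighbor search over learned report embeddings. Here is a minimal sketch of that step, assuming some encoder (such as the Siamese networks discussed below) has already produced fixed-size vectors for the stored reports and the new one:

```python
import numpy as np

def retrieve_duplicates(query_vec, report_vecs, k=5):
    """Rank stored bug reports by cosine similarity to a new report.
    The encoder producing the vectors is assumed to exist elsewhere;
    this shows only the retrieval step."""
    q = query_vec / np.linalg.norm(query_vec)
    R = report_vecs / np.linalg.norm(report_vecs, axis=1, keepdims=True)
    sims = R @ q                       # cosine similarity to every report
    return np.argsort(-sims)[:k]       # indices of the top-k candidates
```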
In typical software projects, the number of duplicate bugs reported may run into the order of thousands, making manual intervention expensive in terms of cost and time. This makes the problem of duplicate or similar bug detection an important one in the Software Engineering domain. However, an automated solution to this problem is not yet sufficiently accurate in practice, in spite of many reported approaches using various machine learning techniques. In this work, we propose a retrieval and classification model using Siamese Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM) for accurate detection and retrieval of duplicate and similar bugs. We report an accuracy close to 90% and a recall rate close to 80%, which makes the practical use of such a system possible. We describe our model in detail along with related discussions from the Deep Learning domain. By presenting the detailed experimental results, we illustrate the effectiveness of the model in practical systems, including for repositories for which supervised training data is not available."} {"_id": "19bd90c739c93ebb5186f6718e0a70dc8f3b01af", "title": "Insights from Price's equation into evolutionary epidemiology", "text": "The basic reproduction number, denoted by R0, is one of the most important quantities in epidemiological theory ([11], [23]). It is defined as the expected number of new infections generated by an infected individual in an otherwise wholly susceptible population ([2], [12], [23]). Part of the reason why R0 plays such a central role in this body of theory undoubtedly stems from its relatively simple and intuitively sensible interpretation as a measure of pathogen reproduction. If R0 is less than unity then we expect the pathogen to die out, since each infected individual fails to generate at least one other infection during the lifetime of the infection. Given that R0 is a measure of pathogen reproductive success, it is not surprising that this quantity has also come to form the basis of most evolutionary considerations of host-pathogen interactions ([1], [18]). For example, mathematical models for numerous epidemiological settings have been used to demonstrate that natural selection is often expected to favour the pathogen strain that results in the largest value of R0 ([6], [18]). In more complex epidemiological settings such optimization criteria typically cannot be derived and instead a game-theoretic approach is taken ([5]). In this context a measure of the fitness of a rare mutant pathogen strain is used to characterize the evolutionarily stable strain (i.e., the strain that, if present within the population in sufficient numbers, cannot be displaced by any mutant strain that arises). Typically R0 again plays a central role as the measure of mutant fitness in such invasion analyses ([10], [18], [30]). In this chapter we consider an alternative approach for developing theory in evolutionary epidemiology. Rather than using the total number of new infections generated by an infected individual (i.e., R0) as a measure of pathogen fitness, we use the instantaneous rate of change of the number of infected hosts instead (see also [3], [18]). This shifts the focus from a consideration of pathogen reproductive success per generation to pathogen reproductive success per unit time.
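The threshold interpretation of R0 can be checked with a toy SIR simulation; this standard textbook sketch is illustrative and not taken from the chapter itself:

```python
def sir_step(s, i, beta, gamma, dt):
    """One Euler step of the SIR model; R0 = beta/gamma sets the threshold."""
    new_inf = beta * s * i * dt     # new infections per unit time
    new_rec = gamma * i * dt        # recoveries per unit time
    return s - new_inf, i + new_inf - new_rec

s, i = 0.999, 0.001
beta, gamma = 0.3, 0.2              # R0 = 1.5 > 1: the epidemic takes off
for _ in range(10000):
    s, i = sir_step(s, i, beta, gamma, dt=0.01)
print(round(1 - s, 3))              # final epidemic size; ~0 when R0 < 1
```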
One very useful result of this change in focus is that we can then model the time dynamics of evolutionary change in the pathogen population simultaneously with the epidemiological dynamics, rather than simply characterizing the evolutionary equilibria that are expected. Even more importantly, however, this seemingly slight change"} {"_id": "fb9339830e2d471e8b4c46caf465b39027c73164", "title": "Context Prediction for Unsupervised Deep Learning on Point Clouds", "text": "Point clouds provide a flexible and natural representation usable in countless applications such as robotics or self-driving cars. Recently, deep neural networks operating on raw point cloud data have shown promising results on supervised learning tasks such as object classification and semantic segmentation. While massive point cloud datasets can be captured using modern scanning technology, manually labelling such large 3D point clouds for supervised learning tasks is a cumbersome process. This necessitates effective unsupervised learning methods that can produce representations such that downstream tasks require significantly fewer annotated samples. We propose a novel method for unsupervised learning on raw point cloud data in which a neural network is trained to predict the spatial relationship between two point cloud segments. While solving this task, representations that capture semantic properties of the point cloud are learned. Our method outperforms previous unsupervised learning approaches in downstream object classification and segmentation tasks and performs on par with fully supervised methods."} {"_id": "548aa59905693f84e95515c6134cd5352a97ca76", "title": "Scatterplots: Tasks, Data, and Designs", "text": "Traditional scatterplots fail to scale as the complexity and amount of data increases. In response, there exist many design options that modify or expand the traditional scatterplot design to meet these larger scales. This breadth of design options creates challenges for designers and practitioners who must select appropriate designs for particular analysis goals. In this paper, we help designers in making design choices for scatterplot visualizations. We survey the literature to catalog scatterplot-specific analysis tasks. We look at how data characteristics influence design decisions. We then survey scatterplot-like designs to understand the range of design options. Building upon these three organizations, we connect data characteristics, analysis tasks, and design choices in order to generate challenges, open questions, and example best practices for the effective design of scatterplots."} {"_id": "5d2548457d10e5e5072f28fc8b7a6f1638fdef14", "title": "The design of scaffolds for use in tissue engineering. Part I. Traditional factors.", "text": "In tissue engineering, a highly porous artificial extracellular matrix or scaffold is required to accommodate mammalian cells and guide their growth and tissue regeneration in three dimensions. However, existing three-dimensional scaffolds for tissue engineering proved less than ideal for actual applications, not only because they lack mechanical strength, but they also do not guarantee interconnected channels. In this paper, the authors analyze the factors necessary to enhance the design and manufacture of scaffolds for use in tissue engineering in terms of materials, structure, and mechanical properties and review the traditional scaffold fabrication methods. 
Advantages and limitations of these traditional methods are also discussed."} {"_id": "a4803dd805a60c7c161eb7ed60e52ed9a425b65f", "title": "From Polynesian healers to health food stores: changing perspectives of Morinda citrifolia (Rubiaceae).", "text": "Morinda citrifolia L. (noni) is one of the most important traditional Polynesian medicinal plants. Remedies from isolated Polynesian cultures, such as that of Rotuma, illustrate traditional indications that focus upon leaves, roots, bark, and green fruit, primarily for topical ailments. Anecdotally collected Hawaiian remedies that employ noni fruit illustrate changing usage patterns, with shifts in recent times to preparation of juice made of ripe or decaying fruit. Ralph M. Heinicke promoted a wide range of claims about noni, and these seem to have fueled much of the current commercial interest in the plant. Recent studies of the proliferation of commercial products have shown that noni product manufacturers are promoting a range of therapeutic claims. These claims are based upon traditional Polynesian uses, Heinicke's ideas, and fragments of recent scientific studies, including the activity of noni in the treatment of cancer. A review is provided of recent studies of the potential anticancer activity of noni fruit. While noni's anticancer potential is still being explored, it continues to be widely used by Polynesians and non-Polynesians alike for both traditional and newly hypothesized indications."} {"_id": "8e27d3a78ea387200405ec026619201dbd8bd5b3", "title": "How Alluring Are Dark Personalities? The Dark Triad and Attractiveness in Speed Dating", "text": "Dark Triad traits (narcissism, psychopathy, and Machiavellianism) are linked to the pursuit of short-term mating strategies, but they may have differential effects on actual mating success in naturalistic scenarios: Narcissism may be a facilitator for men\u2019s short-term mating success, while Machiavellianism and psychopathy may be detrimental. To date, little is known about the attractiveness of Dark Triad traits in women. In a speed-dating study, we assessed participants\u2019 Dark Triad traits, Big Five personality traits, and physical attractiveness in N=90 heterosexual individuals (46 women and 44 men). Each participant rated each partner\u2019s mate appeal for short- and long-term relationships. Across both sexes, narcissism was positively associated with mate appeal for short- and long-term relationships. Further analyses indicated that these associations were due to the shared variance among narcissism and extraversion in men and narcissism and physical attractiveness in women, respectively. In women, psychopathy was also positively associated with mate appeal for short-term relationships. Regarding mating preferences, narcissism was found to involve greater choosiness in the rating of others\u2019 mate appeal (but not actual choices) in men, while psychopathy was associated with greater openness towards short-term relationships in women."} {"_id": "e18fa8c8f402c483b2c3eaaa89192fe99e80abd5", "title": "Evaluating Sentiment Analysis in the Context of Securities Trading", "text": "There are numerous studies suggesting that published news stories have an important effect on the direction of the stock market, its volatility, the volume of trades, and the value of individual stocks mentioned in the news.
There is even some published research suggesting that automated sentiment analysis of news documents, quarterly reports, blogs and/or Twitter data can be productively used as part of a trading strategy. This paper presents just such a family of trading strategies, and then uses this application to re-examine some of the tacit assumptions behind how sentiment analyzers are generally evaluated, regardless of the context of their application. This discrepancy comes at a cost."} {"_id": "124a9d3fa4569b2ebda1ca8726166a635f31a36d", "title": "Feature selection methods for conversational recommender systems", "text": "This paper focuses on question selection methods for conversational recommender systems. We consider a scenario where, given an initial user query, the recommender system may ask the user to provide additional features describing the searched products. The objective is to generate questions/features that a user would likely reply to, and that, if replied to, would effectively reduce the result size of the initial query. Classical entropy-based feature selection methods are effective in terms of result size reduction, but they select questions uncorrelated with user needs and therefore unlikely to be replied to. We propose two feature-selection methods that combine feature entropy with an appropriate measure of feature relevance. We evaluated these methods in a set of simulated interactions where a probabilistic model of user behavior is exploited. The results show that these methods outperform entropy-based feature selection."} {"_id": "38b0877fa6ac3ebfbb29d74f761fea394ee190f3", "title": "Lazy Decision Trees", "text": "Lazy learning algorithms, exemplified by nearest-neighbor algorithms, do not induce a concise hypothesis from a given training set; the inductive process is delayed until a test instance is given. Algorithms for constructing decision trees, such as C4.5, ID3, and CART, create a single \u201cbest\u201d decision tree during the training phase, and this tree is then used to classify test instances. The tests at the nodes of the constructed tree are good on average, but there may be better tests for classifying a specific instance. We propose a lazy decision tree algorithm, LazyDT, that conceptually constructs the \u201cbest\u201d decision tree for each test instance. In practice, only a path needs to be constructed, and a caching scheme makes the algorithm fast. The algorithm is robust with respect to missing values without resorting to the complicated methods usually seen in induction of decision trees. Experiments on real and artificial problems are presented."} {"_id": "4e6cad4d8616c88856792688228a4c52cec9bace", "title": "A consensus-based approach for platooning with inter-vehicular communications", "text": "Automated and coordinated vehicle driving (platooning) is gaining more and more attention today, and it represents a challenging scenario that relies heavily on wireless Inter-Vehicular Communication (IVC). In this paper, we propose a novel controller for vehicle platooning based on consensus. As opposed to current approaches, where the logical control topology is fixed a priori and the control law designed consequently, we design a system whose control topology can be reconfigured depending on the actual network status. Moreover, the controller does not require the vehicles to be radar-equipped and automatically compensates for outdated information caused by network delays.
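A consensus-style spacing controller of the kind described above can be sketched as follows; the gains, the spacing policy, and the time-varying adjacency matrix A are illustrative choices, not the paper's exact control law:

```python
import numpy as np

def consensus_accel(pos, vel, desired_gap, A, kp=1.0, kd=1.5):
    """Acceleration command for each vehicle from a consensus law (sketch).
    Vehicles are indexed 0 (leader, front) to n-1 (tail), so the target
    spacing between i and j is desired_gap * (i - j).
    A[i, j] > 0 iff vehicle i currently hears vehicle j (reconfigurable)."""
    n = len(pos)
    acc = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0:
                gap_err = pos[j] - pos[i] - desired_gap * (i - j)
                acc[i] += A[i, j] * (kp * gap_err + kd * (vel[j] - vel[i]))
    return acc
```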
We define the control law and analyze it both analytically and by simulation, showing its robustness in different network scenarios. We consider three different wireless network settings: uncorrelated Bernoullian losses, correlated losses using a Gilbert-Elliott channel, and a realistic traffic scenario with interference caused by other vehicles. Finally, we compare our strategy with another state-of-the-art controller. The results show the ability of the proposed approach to maintain a stable string of vehicles even in the presence of strong interference, delays, and fading conditions, providing higher comfort and safety for platoon drivers."} {"_id": "3c6bf8d1a83e7c78dc5ff43e0b9e776753ec96ae", "title": "Thinking Positively - Explanatory Feedback for Conversational Recommender Systems", "text": "When it comes to buying expensive goods people expect to be skillfully steered through the options by well-informed sales assistants that are capable of balancing the user\u2019s many and varied requirements. In addition users often need to be educated about the product-space, especially if they are to come to understand what is available and why certain options are being recommended by the sales-assistant. The same issues arise in interactive recommender systems, our online equivalent of a sales assistant, and explanation in recommender systems, as a means to educate users and justify recommendations, is now well accepted. In this paper we focus on a novel approach to explanation. Instead of attempting to justify a particular recommendation, we focus on how explanations can help users to understand the recommendation opportunities that remain if the current recommendation should not meet their requirements. We describe how this approach to explanation is tightly coupled with the generation of compound critiques, which act as a form of feedback for the users. And we argue that these explanation-rich critiques have the potential to dramatically improve recommender performance and usability."} {"_id": "9a423eb7c6f3b8082bfc99ce5811a10416a1daff", "title": "A Survey of Definitions and Models of Exploratory Search", "text": "Exploratory search has an unclear and open-ended definition. The complexity of the task and the difficulty of defining this activity are reflected in the limits of existing evaluation methods for exploratory search systems. In order to improve them, we intend to design an evaluation method based on a user-centered model of exploratory search. In this work, we identified and defined the characteristics of exploratory search and used them as an information seeking model evaluation grid. We tested this analytic grid on two information seeking models: Ellis' and Marchionini's models. The results show that Marchionini's model does not match our evaluation method's requirements, whereas Ellis' model could be adapted to better suit exploratory search."} {"_id": "9cd69fa7f185ab9be4b452dadd44cbce1e8fe579", "title": "Efficient FIR Filter Design Using Modified Carry Select Adder & Wallace Tree Multiplier", "text": "An area-power-delay-efficient design of an FIR filter is described in this paper. In the proposed multiplier unit, high speed is achieved using XOR-XNOR column-by-column reduction compressors instead of compressors using full adders. The carry propagation delay and area of the carry select adder are reduced by splitting the adder into equal bit groups. The proposed carry select adder unit consumes less power than the conventional carry select adder unit.
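The carry select principle referenced here is easy to simulate in software. Below is a bit-level sketch (little-endian bit lists) in which each group is summed for both possible carry-ins, as the hardware would do in parallel, and the real incoming carry simply selects one result via a mux:

```python
def ripple_add(a_bits, b_bits, carry_in):
    """Ripple-carry addition over little-endian bit lists."""
    out, c = [], carry_in
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ c)
        c = (a & b) | (c & (a ^ b))
    return out, c

def carry_select_add(a_bits, b_bits, group=4):
    """Carry-select adder: each group is added twice (carry-in 0 and 1);
    the actual carry then just picks one precomputed result."""
    out, c = [], 0
    for k in range(0, len(a_bits), group):
        a, b = a_bits[k:k + group], b_bits[k:k + group]
        s0, c0 = ripple_add(a, b, 0)   # result assuming carry-in 0
        s1, c1 = ripple_add(a, b, 1)   # result assuming carry-in 1
        out += s1 if c else s0         # mux on the incoming carry
        c = c1 if c else c0
    return out, c
```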
With the proposed multiplier and carry select adder units, the designed FIR filter consumes 55% less power than the conventional filter, without a significant increase in area. The power and delay comparison is performed for both the existing and the proposed FIR filter design methods. The design is implemented using 0.18\u03bcm technology. Index Terms FIR filter design, FDA Tool, Low Power VLSI, Wallace tree multiplication."} {"_id": "2d4204a473d03ca1cd87e894e3d5772ae0364052", "title": "An efficient and generic reversible debugger using the virtual machine based approach", "text": "The reverse execution of programs is a function where programs are executed backward in time. A reversible debugger is a debugger that provides such a functionality. In this paper, we propose a novel reversible debugger that enables reverse execution of programs written in the C language. Our approach takes the virtual machine based approach. In this approach, the target program is executed on a special virtual machine. Our contribution in this paper is two-fold. First, we propose an approach that can address the problems of (1) compatibility and (2) efficiency that exist in previous works. By compatibility, we mean that previous debuggers are not generic, i.e., they support only a special language or special intermediate code. Second, our approach provides two execution modes: the native mode, where the debuggee is directly executed on a real CPU, and the virtual machine mode, where the debuggee is executed on a virtual machine. Currently, our debugger provides four types of trade-off settings (designated by unit and optimization) to consider trade-offs between granularity, accuracy, overhead and memory requirement. The user can choose the appropriate setting flexibly during debugging without finishing and restarting the debuggee."} {"_id": "623dc089f4736e1ddeee35b50f0ae3f6efa15c78", "title": "Real-time Empirical Mode Decomposition for EEG signal enhancement", "text": "Electroencephalography (EEG) recordings are used for brain research. However, in most cases, the recordings not only contain brain waves, but also artifacts of physiological or technical origins. A recent approach used for signal enhancement is Empirical Mode Decomposition (EMD), an adaptive data-driven technique which decomposes non-stationary data into so-called Intrinsic Mode Functions (IMFs). Once the IMFs are obtained, they can be used for denoising and detrending purposes. This paper presents a real-time implementation of an EMD-based signal enhancement scheme. The proposed implementation is used for removing noise, for suppressing muscle artifacts, and for detrending EEG signals in an automatic manner and in real-time. The proposed algorithm is demonstrated by application to a simulated and a real EEG data set from an epilepsy patient. Moreover, by visual inspection and in a quantitative manner, it is shown that after the EMD in real-time, the EEG signals are enhanced."} {"_id": "904179d733b8ff792586631d50b3bd64f42d6b7d", "title": "Span, CRUNCH, and Beyond: Working Memory Capacity and the Aging Brain", "text": "Neuroimaging data emphasize that older adults often show greater extent of brain activation than younger adults for similar objective levels of difficulty.
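An offline prototype of the EMD-based enhancement scheme described in the EEG abstract above can be written in a few lines. This is a crude sketch assuming the third-party PyEMD package is available; which IMFs to discard is a tuning decision, not a rule from the paper:

```python
import numpy as np
from PyEMD import EMD   # assumed dependency (the "EMD-signal" package)

def emd_denoise(signal, drop_first=1, drop_last=1):
    """Decompose into IMFs, then discard the highest-frequency IMFs
    (treated as noise) and the slowest components (treated as trend)."""
    imfs = EMD().emd(signal)
    keep = imfs[drop_first: len(imfs) - drop_last]
    return np.sum(keep, axis=0)
```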
A possible interpretation of this finding is that older adults need to recruit neuronal resources at lower loads than younger adults, leaving no resources for higher loads, and thus leading to performance decrements [Compensation-Related Utilization of Neural Circuits Hypothesis; e.g., Reuter-Lorenz, P. A., & Cappell, K. A. Neurocognitive aging and the compensation hypothesis. Current Directions in Psychological Science, 17, 177\u2013182, 2008]. The Compensation-Related Utilization of Neural Circuits Hypothesis leads to the prediction that activation differences between younger and older adults should disappear when task difficulty is made subjectively comparable. In a Sternberg memory search task, this can be achieved by assessing brain activity as a function of load relative to the individual's memory span, which declines with age. Specifically, we hypothesized a nonlinear relationship between load and both performance and brain activity and predicted that asymptotes in the brain activation function should correlate with performance asymptotes (corresponding to working memory span). The results suggest that age differences in brain activation can be largely attributed to individual variations in working memory span. Interestingly, the brain activation data show a sigmoid relationship with load. Results are discussed in terms of Cowan's [Cowan, N. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87\u2013114, 2001] model of working memory and theories of impaired inhibitory processes in aging."} {"_id": "10254c472248b5831352eaa2b542fbba455a4f0e", "title": "Dry and Noncontact EEG Sensors for Mobile Brain\u2013Computer Interfaces", "text": "Dry and noncontact electroencephalographic (EEG) electrodes, which do not require gel or even direct scalp coupling, have been considered as an enabler of practical, real-world, brain-computer interface (BCI) platforms. This study compares wet electrodes to dry and through-hair, noncontact electrodes within a steady-state visual evoked potential (SSVEP) BCI paradigm. The construction of a dry contact electrode, featuring fingered contact posts and active buffering circuitry, is presented. Additionally, the development of a new, noncontact, capacitive electrode that utilizes a custom integrated, high-impedance analog front-end is introduced. Offline tests on 10 subjects characterize the signal quality from the different electrodes and demonstrate that acquisition of small-amplitude SSVEP signals is possible, even through hair, using the new integrated noncontact sensor. Online BCI experiments demonstrate that the information transfer rate (ITR) with the dry electrodes is comparable to that of wet electrodes, completely without the need for gel or other conductive media. In addition, data from the noncontact electrode, operating on top of hair, show a maximum ITR in excess of 19 bits/min at 100% accuracy (versus 29.2 bits/min for wet electrodes and 34.4 bits/min for dry electrodes), a level that has never been demonstrated before.
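ITR figures like those quoted above follow from the standard Wolpaw formula. Here is a sketch; the target count and trial duration are chosen only to illustrate how a roughly 19 bits/min figure can arise at 100% accuracy, and are not the paper's exact parameters:

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate, a standard BCI metric."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)             # the penalty terms vanish at p = 1
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_seconds

# e.g., a hypothetical 12-target SSVEP speller, 100% accuracy, ~11 s/selection
print(round(itr_bits_per_min(12, 1.0, 11.2), 1))   # ~19.2 bits/min
```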
The results of these experiments show that both dry and noncontact electrodes, with further development, may become a viable tool for both future mobile BCI and general EEG applications."} {"_id": "8353ee33dc6aec01813543c87dc8e5191c5c89fb", "title": "ECG measurement on a chair without conductive contact", "text": "For the purpose of long-term, everyday electrocardiogram (ECG) monitoring, we present a convenient method of ECG measurement without direct conductive contact with the skin while subjects sat on a chair wearing normal clothes. Measurements were made using electrodes attached to the back of a chair, high-input-impedance amplifiers mounted on the electrodes, and a large ground-plane placed on the chair seat. ECGs were obtained by the presented method for several types of clothing and compared to ECGs obtained from conventional measurement using Ag-AgCl electrodes. Motion artifacts caused by usual desk work were investigated. This study shows the feasibility of the method for long-term, convenient, everyday use."} {"_id": "069443f2bbeb2deedefb600c82ed59cde2137e60", "title": "Analysis of emotion recognition using facial expressions, speech and multimodal information", "text": "The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although several approaches have been proposed to recognize human emotions based on facial expressions or speech, relatively limited work has been done to fuse these two, and other, modalities to improve the accuracy and robustness of the emotion recognition system. This paper analyzes the strengths and the limitations of systems based only on facial expressions or acoustic information. It also discusses two approaches used to fuse these two modalities: decision level and feature level integration. Using a database recorded from an actress, four emotions were classified: sadness, anger, happiness, and neutral state. By the use of markers on her face, detailed facial motions were captured with motion capture, in conjunction with simultaneous speech recordings. The results reveal that the system based on facial expressions gave better performance than the system based on just acoustic information for the emotions considered. Results also show the complementarity of the two modalities and that when these two modalities are fused, the performance and the robustness of the emotion recognition system improve measurably."} {"_id": "0bc46478051356455facc79f216a00b896c2dc5f", "title": "ORTHONORMAL BASES OF COMPACTLY SUPPORTED WAVELETS", "text": "We construct orthonormal bases of compactly supported wavelets, with arbitrarily high regularity. The order of regularity increases linearly with the support width. We start by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction. The construction then follows from a synthesis of these different approaches."} {"_id": "969c055f67efe1296bd9816c4effb5d4dfd83948", "title": "Emotion Detection from Speech to Enrich Multimedia Content", "text": "The paper describes an experimental study on the detection of emotion from speech. As computer-based characters such as avatars and virtual chat faces become more common, the use of emotion to drive the expression of the virtual characters becomes more important.
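Returning to the wavelet construction reviewed above: its practical payoff, perfect reconstruction from a compactly supported orthonormal basis, can be demonstrated with the PyWavelets package (an assumed dependency; "db4" is one member of the Daubechies family):

```python
import numpy as np
import pywt   # PyWavelets, assumed installed

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 5 * t)

# 'db4': Daubechies wavelet with 4 vanishing moments and compact support
coeffs = pywt.wavedec(x, "db4", level=4)   # multiresolution decomposition
x_rec = pywt.waverec(coeffs, "db4")        # reconstruction from coefficients
print(np.allclose(x, x_rec))               # True: orthonormal, no loss
```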
The study utilizes a corpus containing emotional speech with 721 short utterances expressing four emotions: anger, happiness, sadness, and the neutral (unemotional) state, which were captured manually from movies and teleplays. We introduce a new concept to evaluate emotions in speech. Emotions are so complex that most speech sentences cannot be precisely assigned to a particular emotion category; however, most emotional states nevertheless can be described as a mixture of multiple emotions. Based on this concept we have trained SVMs (support vector machines) to recognize utterances within these four categories and developed an agent that can recognize and express emotions."} {"_id": "f2e808509437b0474a6c9a258cd06c6f3b42754b", "title": "The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults.", "text": "The objective of this study was to refine the APACHE (Acute Physiology, Age, Chronic Health Evaluation) methodology in order to more accurately predict hospital mortality risk for critically ill hospitalized adults. We prospectively collected data on 17,440 unselected adult medical/surgical intensive care unit (ICU) admissions at 40 US hospitals (14 volunteer tertiary-care institutions and 26 hospitals randomly chosen to represent intensive care services nationwide). We analyzed the relationship between the patient's likelihood of surviving to hospital discharge and the following predictive variables: major medical and surgical disease categories, acute physiologic abnormalities, age, preexisting functional limitations, major comorbidities, and treatment location immediately prior to ICU admission. The APACHE III prognostic system consists of two options: (1) an APACHE III score, which can provide initial risk stratification for severely ill hospitalized patients within independently defined patient groups; and (2) an APACHE III predictive equation, which uses the APACHE III score and reference data on major disease categories and treatment location immediately prior to ICU admission to provide risk estimates for hospital mortality for individual ICU patients. A five-point increase in the APACHE III score (range, 0 to 299) is independently associated with a statistically significant increase in the relative risk of hospital death (odds ratio, 1.10 to 1.78) within each of 78 major medical and surgical disease categories. The overall predictive accuracy of the first-day APACHE III equation was such that, within 24 h of ICU admission, 95 percent of ICU admissions could be given a risk estimate for hospital death that was within 3 percent of that actually observed (r2 = 0.41; receiver operating characteristic = 0.90). Recording changes in the APACHE III score on each subsequent day of ICU therapy provided daily updates in these risk estimates. When applied across the individual ICUs, the first-day APACHE III equation accounted for the majority of variation in observed death rates (r2 = 0.90, p less than 0.0001)."} {"_id": "25573459892caddfbe8d7478f3db11dc5e537f3a", "title": "Recent advances on active noise control: open issues and innovative applications", "text": "Yoshinobu Kajikawa, Woon-Seng Gan and Sen M. Kuo (2012).
Recent advances on active noise control: open issues and innovative applications. APSIPA Transactions on Signal and Information Processing, 1, e3. DOI: 10.1017/ATSIP.2012.4."} {"_id": "8e3bfff28337c4249c1e98973b4df8f95a205dce", "title": "Software Defined Networking Meets Information Centric Networking: A Survey", "text": "Information centric networking (ICN) and software-defined networking (SDN) are two emerging networking paradigms that promise to solve different aspects of networking problems. ICN is a clean-slate design for accommodating the ever-increasing growth of Internet traffic by regarding content as the network primitive, adopting in-network caching, and name-based routing, while SDN focuses on agile and flexible network management by decoupling network control logic from data forwarding. ICN and SDN have gained significant research attention separately in most of the previous work. However, the features of ICN have profound impacts on the design and operation of SDN, such as in-network caching and data-centric security. Conversely, SDN provides a powerful tool for experimenting and deploying ICN architectures and can greatly facilitate ICN functionality and management. In this paper, we point out the necessity of surveying the scattered works on integrating SDN and ICN (SD-ICN) for improving operational networks. Specifically, we analyze the strengths and opportunities of SD-ICN, and discuss the SDN enablers for deploying ICN architectures. In addition, we review and classify the recent work on improving network management with SD-ICN and discuss the potential security benefits of SD-ICN. Finally, a number of open issues and future trends are highlighted."} {"_id": "050c6fa2ee4b3e0a076ef456b82b2a8121506060", "title": "Data-driven 3D Voxel Patterns for object category recognition", "text": "Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask and 3D pose, as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6% on the difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D."} {"_id": "0a24f049590c014d5b40660449503368bcedc921", "title": "Belief space planning assuming maximum likelihood observations", "text": "We cast the partially observable control problem as a fully observable underactuated stochastic control problem in belief space and apply standard planning and control techniques.
One of the difficulties of belief space planning is modeling the stochastic dynamics resulting from unknown future observations. The core of our proposal is to define deterministic belief-system dynamics based on an assumption that the maximum likelihood observation (calculated just prior to the observation) is always obtained. The stochastic effects of future observations are modeled as Gaussian noise. Given this model of the dynamics, two planning and control methods are applied. In the first, linear quadratic regulation (LQR) is applied to generate policies in the belief space. This approach is shown to be optimal for linear-Gaussian systems. In the second, a planner is used to find locally optimal plans in the belief space. We propose a replanning approach that is shown to converge to the belief space goal in a finite number of replanning steps. These approaches are characterized in the context of a simple nonlinear manipulation problem where a planar robot simultaneously locates and grasps an object."} {"_id": "d13aa46e8069f67d48cf773bf70d241ead6ee7e6", "title": "Very High Resolution Spaceborne SAR Tomography in Urban Environment", "text": "Synthetic aperture radar tomography (TomoSAR) extends the synthetic aperture principle into the elevation direction for 3-D imaging. It uses stacks of several acquisitions from slightly different viewing angles (the elevation aperture) to reconstruct the reflectivity function along the elevation direction by means of spectral analysis for every azimuth-range pixel. The new class of meter-resolution spaceborne SAR systems (TerraSAR-X and COSMO-Skymed) offers a tremendous improvement in tomographic reconstruction of urban areas and man-made infrastructure. The high resolution fits well with the inherent scale of buildings (floor height, distance of windows, etc.). This paper demonstrates the tomographic potential of these SARs and the achievable quality on the basis of TerraSAR-X spotlight data of an urban environment. A new Wiener-type regularization to the singular-value decomposition method (equivalent to a maximum a posteriori estimator) for TomoSAR is introduced and is extended to the differential case (4-D, i.e., space-time). Different model selection schemes for the estimation of the number of scatterers in a resolution cell are compared and proven to be applicable in practice. Two parametric estimation algorithms of the scatterers' elevation and their velocities are evaluated. First 3-D and 4-D reconstructions of an entire building complex (including its radar reflectivity) with a very high level of detail from spaceborne SAR data by pixelwise TomoSAR are presented."} {"_id": "0adeeb60165f8ac498cf52840dc2c436c11a14c2", "title": "Softmax Discriminant Classifier", "text": "A simple but effective classifier, which is called softmax discriminant classifier or SDC for short, is presented. Based on the softmax discriminant function, SDC assigns the label information to a new testing sample by a nonlinear transformation of the distance between the testing sample and the training samples. Experimental results on some well-known data sets demonstrate the feasibility and effectiveness of the proposed algorithm."} {"_id": "cc2be11675137e4ea60bf6c89a108518c53b48bb", "title": "An improved power quality induction heater using zeta converter", "text": "This paper presents an induction heater (IH) with power factor correction (PFC) for domestic induction heating applications.
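One plausible reading of the softmax discriminant rule described in the SDC abstract above, sketched in Python: each class is scored by a log-sum-exp (softmax-style) transformation of negative scaled distances between the test sample and that class's training samples, and the highest-scoring class wins. The exact discriminant function and the bandwidth parameter lam are illustrative assumptions, not the paper's definition.

```python
# A sketch of a softmax-style, distance-based discriminant (SDC-like rule).
import numpy as np

def sdc_predict(X_train, y_train, X_test, lam=1.0):
    labels = np.unique(y_train)
    scores = np.empty((X_test.shape[0], labels.size))
    for k, c in enumerate(labels):
        Xc = X_train[y_train == c]                       # training samples of class c
        d = np.linalg.norm(X_test[:, None, :] - Xc[None, :, :], axis=2)
        scores[:, k] = np.log(np.exp(-lam * d).sum(axis=1))  # soft "nearest" distance
    return labels[scores.argmax(axis=1)]

# toy usage on two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(sdc_predict(X, y, np.array([[0.2, 0.1], [2.8, 3.1]])))  # [0 1]
```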
A half-bridge voltage-fed series resonant inverter (VFSRI) based IH is fed from the front-end PFC-corrected zeta converter. The PFC-zeta converter operating in continuous inductor conduction mode (CICM) is used to regulate the DC link voltage. The switch current stress of the PFC-zeta converter is controlled by operating the zeta converter in CICM using a well-established internal current loop and outer voltage loop approach. The PFC-zeta converter based voltage-fed IH converts the intermediate DC bus voltage to a high-frequency (30 kHz) voltage suitable for domestic induction heating. A 2 kW PFC-zeta converter based IH is designed and its performance is simulated in MATLAB with power quality indices within the IEC 61000-3-2 standard."} {"_id": "6c234f7838cc57e71646ff927a55ae5afefa4a98", "title": "Semi-Supervised Nonnegative Matrix Factorization", "text": "Nonnegative matrix factorization (NMF) is a popular method for low-rank approximation of a nonnegative matrix, providing a useful tool for representation learning that is valuable for clustering and classification. When a portion of the data is labeled, the performance of clustering or classification is improved if the information on class labels is incorporated into NMF. To this end, we present semi-supervised NMF (SSNMF), where we jointly incorporate the data matrix and the (partial) class label matrix into NMF. We develop multiplicative updates for SSNMF to minimize a sum of weighted residuals, each of which involves the nonnegative 2-factor decomposition of the data matrix or the label matrix, sharing a common factor matrix. Experiments on document datasets and EEG datasets from the BCI competition confirm that our method improves clustering as well as classification performance, compared to the standard NMF, stressing that semi-supervised NMF yields semi-supervised feature extraction."} {"_id": "289b34d437ccb17fe2543b33ad7243a9be644898", "title": "The role of debriefing in simulation-based learning.", "text": "The aim of this paper is to critically review what is felt to be important about the role of debriefing in the field of simulation-based learning, how it has come about and developed over time, and the different styles or approaches that are used and how effective the process is. A recent systematic review of high-fidelity simulation literature identified feedback (including debriefing) as the most important feature of simulation-based medical education.1 Despite this, there are surprisingly few papers in the peer-reviewed literature to illustrate how to debrief, how to teach or learn to debrief, what methods of debriefing exist and how effective they are at achieving learning objectives and goals. This review is by no means a systematic review of all the literature available on debriefing, and contains information from both peer-reviewed and non-peer-reviewed sources such as meeting abstracts and presentations from within the medical field and other disciplines versed in the practice of debriefing such as military, psychology, and business. It also contains many examples of what expert facilitators have learned over years of practice in the area.
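The SSNMF abstract above describes a joint factorization in which the data matrix and a partially observed label matrix share one factor. A minimal numpy sketch under that reading follows; the multiplicative updates come from the standard ratio rule for the weighted objective ||X - AS||^2 + lam * ||L o (Y - BS)||^2 (o = elementwise mask), and may differ in detail from the paper's exact updates.

```python
# Semi-supervised NMF sketch: X ~ A S and masked labels Y ~ B S share S.
import numpy as np

def ssnmf(X, Y, L, rank, lam=1.0, iters=200, eps=1e-9):
    m, n = X.shape
    c = Y.shape[0]
    rng = np.random.default_rng(0)
    A = rng.random((m, rank)); B = rng.random((c, rank)); S = rng.random((rank, n))
    for _ in range(iters):
        A *= (X @ S.T) / (A @ S @ S.T + eps)
        LY = L * Y                                    # mask out unlabeled columns
        B *= (LY @ S.T) / ((L * (B @ S)) @ S.T + eps)
        S *= (A.T @ X + lam * (B.T @ LY)) / (A.T @ A @ S + lam * (B.T @ (L * (B @ S))) + eps)
    return A, B, S

# toy usage: 30 samples, 5 features, 3 classes, labels observed for half the data
rng = np.random.default_rng(1)
X = rng.random((5, 30))
Y = np.eye(3)[rng.integers(0, 3, 30)].T
L = np.zeros_like(Y); L[:, :15] = 1.0
A, B, S = ssnmf(X, Y, L, rank=4)
print(round(float(np.abs(X - A @ S).mean()), 4))      # reconstruction error
```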
We feel this would be of interest to novices in the field as an introduction to debriefing, and to experts to illustrate the gaps that currently exist, which might be addressed in further research within the medical simulation community and in collaborative ventures between other disciplines experienced in the art of debriefing."} {"_id": "bafc689320ef11455b8df7cdefe3694bfaac9072", "title": "The use of the interactive whiteboard for creative teaching and learning in literacy and mathematics: a case study", "text": "This paper considers the ways in which the interactive whiteboard may support and enhance pedagogic practice through whole-class teaching within literacy and numeracy. Data collected from observations of whole-class lessons, alongside individual interviews and focus group discussions with class teachers and Initial Teacher Education students, has provided opportunities to consider the potential of such technology to facilitate a more creative approach to whole-class teaching. The data suggests that, in the first instance, the special features of information and communications technology such as interactivity, \u2018provisionality,\u2019 speed, capacity and range enhance the delivery and pace of the session. This research seems to indicate that it is the skill and the professional knowledge of the teacher who mediates the interaction, and facilitates the development of pupils\u2019 creative responses at the interface of technology, which is critical to the enhancement of the whole-class teaching and learning processes. Introduction The globalising phenomenon of information and communication technologies (ICT) is a distinct characteristic of modern times. The speed and immediacy of ICT, coupled with opportunities for increased information flow through multiple routes of communication, suggest that we are living in a time of unprecedented change, with ICT affecting the way we live and function as individuals and as a society (Castells, 2004). Within the context of education there are some technologies that appear to have attracted more interest than others; however, the degree to which they have been successfully integrated into the classroom environment has been varied. In recent years, there has been a growing level of interest in the electronic or interactive whiteboard (IWB), well documented by the educational press. Such technology is generally comprised of a triangulation between data projector, computer and an electronic screen. This allows an individual to interact with software at the front of a class rather than from the computer. Effectively, the computer screen is projected onto the electronic whiteboard and presented to the class with the teacher, or perhaps student, selecting, activating and interacting with the programs. At a time in England when the government has promoted whole-class interactive teaching, particularly within Literacy and Numeracy, access to IWB technology through targeted government funding is also increasing, and the IWB is steadily becoming a feature of most numeracy and literacy lessons.
In January 2004, Charles Clarke, the Secretary of State for Education in England, announced that, in addition to the \u00a325 million previously made available to schools in September 2003, a further \u00a325 million would be released for the purchase of IWBs. This technology is therefore likely to become a key resource in most schools. Introduction of new technologies such as this within the classroom context raises questions regarding the ways in which pedagogic practice may be supported and enhanced; this being the focus of this study, specifically, the links between three areas: whole-class direct teaching, creativity and the integration of technology. As IWBs are becoming more familiar within the educational market, additional challenges, which imply new priorities, have arisen from recent demands for \u2018a much stronger emphasis on creative and cultural education and a new balance in teaching and in the curriculum.\u2019 (The National Advisory Committee on Creative and Cultural Education [NACCCE], 1999) The teacher educator is therefore faced with a complex set of demands that require resolution in terms of pedagogic practice. The aim of this study is to investigate how IWBs can provide opportunities for creativity in teaching and learning, particularly within whole-class lessons, by drawing upon observations of, and discussions with, classroom practitioners and ITE students. In so doing, the study will contribute to debates within educational fields (Bleakley, 2004; Craft, 2005; NACCCE, 1999) regarding the notion of creativity in primary teaching and learning. Current developments in creativity, interactivity and whole-class direct teaching The current focus on direct whole-class teaching, particularly in mathematics, developed in response to concerns in England about the level of children\u2019s performance in English and mathematics compared to those in other countries. In particular, in the Third International Mathematics and Science Study of 1999 (Mullis et al, 2000) England was placed low in basic number skills although as Muijs and Reynolds (2001) note, faring better in geometry and problem solving. Research conducted by Professor David Reynolds, chair of the newly established Numeracy Task Force, singled out a substantial amount of direct whole-class teaching as a key feature of mathematics sessions in the top-attaining countries (Reynolds & Farrell, 1996), these findings echoing those of previous studies (Galton & Croll, 1980; Rosenshine, 1979), and more recently, Bierhoff (1996), Bierhoff and Prais (1995) and Burghes (1995).
A whole-class approach, it was claimed, enables the teacher to interact more with each pupil; adapt activities quickly in response to pupils\u2019 responses; use errors and misconceptions as a teaching point for the whole class and keep pupils on task for longer periods of time (Muijs & Reynolds, 2001). However, Muijs and Reynolds also noted that this approach was not necessarily the best to use in all circumstances. Brophy and Good (1986) are cited as finding this method more suited for teaching rules, procedures and basic skills, especially to younger pupils. Less structured and teacher-directed approaches, Muijs and Reynolds (2001) suggest, would be more appropriate when the aims of the lesson are more complex or open-ended (eg, developing students\u2019 thinking skills). With the IWB featuring widely in whole-class teaching, there is a concern that its full interactive potential may not be explored through this structured, teacher-directed approach as the teaching and modelling of rules, procedures and basic skills is likely to take precedence over more complex and cognitively demanding activities. Where whole-class interactive teaching in mathematics lessons is concerned, the Numeracy Task Force stressed that \u2018it is the quality of the whole class teaching which determines its success\u2019 (Department for Education and Employment [DfEE], 1999) rather than the whole-class approach per se. They note that whole-class teaching should not be viewed as a purely transmission-based approach, with children given instruction regarding knowledge and understanding in a product-driven mode of teaching. Rather, it should be seen as an interactive, process-oriented approach to learning with the quality of the interaction within the classroom being of prime importance, maximised effectively and efficiently through good whole-class teaching. The NNS goes on to state that \u2018interaction is a two-way process in which pupils are expected to play an active part\u2019 and that \u2018high quality interactive teaching is oral, interactive and lively\u2019, which is fundamental to a social constructivist view of learning (Vygotsky, 1978). This may be a laudable claim; however, Muijs and Reynolds (2001) caution that with direct whole-class teaching, pupils may find it easy to adopt a more passive role, becoming too dependent on the teacher and failing to develop independent learning skills. Nevertheless, it may be within a more social, interactive and lively learning environment that the IWB could be seen to make a valuable contribution. At present, there is a limited amount of research available that focuses specifically upon the IWB and associated pedagogy; however, Smith, Hardman and Higgins (2006) have undertaken a substantial study involving observations made in primary schools of 184 lessons in literacy and numeracy conducted over a 2-year period. The study suggests that, although the use of IWBs engages the pupils and sessions are generally faster in
their pace of delivery, the underlying pedagogy of whole-class teaching appears to remain unaffected, with teacher-led recitation and emphasis upon recall dominating proceedings. Previous research has highlighted the motivational impact of IWBs upon pupils, with the large screen, the multimedia capability and the element of fun enhancing the presentational aspects of a lesson (Glover & Miller, 2001; Levy, 2002). Essentially, there appears to be the potential for enhancements in whole-class teaching and learning through the use of IWBs if pedagogic practice were to adapt and change through creative and innovative use of the particular features of this new technology. Creativity. Creativity has been described as \u2018the word of the moment\u2019 (Bruce, 2004, p. vi"} {"_id": "dbf38bd3fb7ae7134e740e36b028eff71f03ec00", "title": "Multi Objective Segmentation for Vehicle License Plate Detection with Immune-based Classifier: A General Framework", "text": "Vehicle License Plate Recognition (VLPR) is an important system for harmonious traffic. Moreover, this system is helpful in many fields and places, such as private and public entrances, parking lots, border control and theft control. This paper presents a new framework for a Sudanese VLPR system. The proposed framework uses Multi Objective Particle Swarm Optimization (MOPSO) and Connected Component Analysis (CCA) to extract the license plate. Horizontal and vertical projection will be used for character segmentation and the final recognition stage is based on the Artificial Immune System (AIS). A new dataset that contains samples for the current shape of Sudanese license plates will be used for training and testing the proposed framework."} {"_id": "8ae6dfc9226d49d9eab49dd7e21e04b660c4569f", "title": "Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network", "text": "Recently, the Generative Adversarial Network (GAN) has found wide applications in style transfer, image-to-image translation and image super-resolution. In this paper, a color-depth conditional GAN is proposed to concurrently resolve the problems of depth super-resolution and color super-resolution in 3D videos. Firstly, given the low-resolution depth image and low-resolution color image, a generative network is proposed to leverage the mutual information of the color image and depth image to enhance each other, in consideration of the geometric structural dependency of color and depth in the same scene. Secondly, three loss functions, including data loss, total variation loss, and 8-connected gradient difference loss are introduced to train this generative network in order to keep generated images close to the real ones, in addition to the adversarial loss. Experimental results demonstrate that the proposed approach produces a high-quality color image and depth image from a low-quality image pair, and it is superior to several other leading methods. Besides, the proposed method can be applied to other tasks, such as simultaneous image smoothing and edge detection."} {"_id": "19517a10e0f26729fbc2b1a858e56f8af860eac3", "title": "Aerial Mapping of Forests Affected by Pathogens Using UAVs, Hyperspectral Sensors, and Artificial Intelligence", "text": "The environmental and economic impacts of exotic fungal species on natural and plantation forests have been historically catastrophic. Recorded surveillance and control actions are challenging because they are costly, time-consuming, and hazardous in remote areas. Prolonged periods of testing and observation of site-based tests have limitations in verifying the rapid proliferation of exotic pathogens and deterioration rates in hosts.
Recent remote sensing approaches have offered fast, broad-scale, and affordable surveys as well as additional indicators that can complement on-ground tests. This paper proposes a framework that consolidates site-based insights and remote sensing capabilities to detect and segment deteriorations by fungal pathogens in natural and plantation forests. This approach is illustrated with an experimentation case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW), Australia. The method integrates unmanned aerial vehicles (UAVs), hyperspectral image sensors, and data processing algorithms using machine learning. Imagery is acquired using a Headwall Nano-Hyperspec\u00ae camera, orthorectified in Headwall SpectralView\u00ae, and processed in the Python programming language using eXtreme Gradient Boosting (XGBoost), Geospatial Data Abstraction Library (GDAL), and Scikit-learn third-party libraries. In total, 11,385 samples were extracted and labelled into five classes: two classes for deterioration status and three classes for background objects. Insights reveal individual detection rates of 95% for healthy trees, 97% for deteriorated trees, and a global multiclass detection rate of 97%. The methodology is versatile and can be applied to additional datasets taken with different image sensors, and to the processing of large datasets with freeware tools."} {"_id": "f950f8af6437540bf301c91a0c26b8015ac6c439", "title": "Classification and Regression using SAS", "text": "K-Nearest Neighbor (KNN) classification and regression are two widely used analytic methods in predictive modeling and data mining fields. They provide a way to model highly nonlinear decision boundaries, and to fulfill many other analytical tasks such as missing value imputation, local smoothing, etc. In this paper, we discuss ways in SAS\u00ae to conduct KNN classification and KNN regression. Specifically, PROC DISCRIM is used to build multi-class KNN classification and PROC KRIGE2D is used for KNN regression tasks. Technical details, such as tuning parameter selection, are discussed. We also discuss tips and tricks in using these two procedures for KNN classification and regression. Examples are presented to demonstrate the full process flow in applying KNN classification and regression in real-world business projects."} {"_id": "7114f7bb38914958f07cfa98fb29afcd94c626ad", "title": "Cardiac pulse detection in BCG signals implemented on a regular classroom chair integrated to an emotional and learning model for personalization of learning resources", "text": "Emotions are related to learning processes, and physiological signals can be used to detect them. In this work, a model for the dynamic personalization of a learning environment is presented. In the model, a specific combination of emotions and cognition processes are connected and integrated with the concept of \u2018flow\u2019, since it may provide a way to improve learning abilities. Physiological signals can be used to relate temporal emotions and the subject's learning processes; the cardiac pulse is a reliable signal that carries useful information about the subject's emotional condition, which is detected using a classroom chair adapted with non-invasive Electro-Mechanical Film (EMFi) sensors and an acquisition system that generates a ballistocardiogram (BCG), which is analyzed by a developed algorithm to obtain cardiac pulse statistics.
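The KNN paper above is specific to SAS procedures; for readers outside SAS, the same classification and regression workflow can be sketched with scikit-learn (an assumption here, not the paper's tooling), including the tuning-parameter (k) selection the paper discusses.

```python
# KNN classification and regression, analogous to PROC DISCRIM / PROC KRIGE2D usage.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Multi-class KNN classification; choose k by cross-validation.
X, y = load_iris(return_X_y=True)
for k in (1, 5, 15):
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
    print(f"k={k}: accuracy={acc:.3f}")

# KNN regression doubles as local smoothing / missing-value imputation.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, (200, 1))
t = np.sin(x).ravel() + 0.2 * rng.normal(size=200)
reg = KNeighborsRegressor(n_neighbors=10).fit(x, t)
print(reg.predict([[5.0]]))  # close to sin(5.0)
```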
A study including different data obtained from the chair's sensors was carried out and interpreted in terms of emotions, which together with a cognitive model is used for the personalization of content in a learning scenario. Such a model establishes a relation between the learner's knowledge, the content of a course or a learning activity and the emotional condition of each student."} {"_id": "877091fdec0c58d28b7d81094bc73e135a63fe60", "title": "A quantitative analysis of the speedup factors of FPGAs over processors", "text": "The speedup over a microprocessor that can be achieved by implementing some programs on an FPGA has been extensively reported. This paper presents an analysis, both quantitative and qualitative, at the architecture level of the components of this speedup. Obviously, the spatial parallelism that can be exploited on the FPGA is a big component. By itself, however, it does not account for the whole speedup. In this paper we experimentally analyze the remaining components of the speedup. We compare the performance of image processing application programs executing in hardware on a Xilinx Virtex E2000 FPGA to that on three general-purpose processor platforms: MIPS, Pentium III and VLIW. The question we set out to answer is what is the inherent advantage of a hardware implementation over a von Neumann platform. On the one hand, the clock frequency of general-purpose processors is about 20 times that of typical FPGA implementations. On the other hand, the iteration-level parallelism on the FPGA is one to two orders of magnitude that on the CPUs. In addition to these two factors, we identify the efficiency advantage of FPGAs as an important factor and show that it ranges from 6 to 47 on our test benchmarks. We also identify some of the components of this factor: the streaming of data from memory, the overlap of control and data flow and the elimination of some instructions on the FPGA. The results provide a deeper understanding of the tradeoff between system complexity and performance when designing Configurable SoC as well as designing software for CSoC. They also help understand the one to two orders of magnitude in speedup of FPGAs over CPUs after accounting for clock frequencies."} {"_id": "e960842f887e5abdf1a29cf0ef3d3f7c86991adc", "title": "Computational Complexity of Linear Large Margin Classification With Ramp Loss", "text": "Minimizing the binary classification error with a linear model leads to an NP-hard problem. In practice, surrogate loss functions are used, in particular loss functions leading to large margin classification such as the hinge loss and the ramp loss. The intuitive large margin concept is theoretically supported by generalization bounds linking the expected classification error to the empirical margin error and the complexity of the considered hypotheses class. This article addresses the fundamental question about the computational complexity of determining whether there is a hypotheses class with a hypothesis such that the upper bound on the generalization error is below a certain value. Results of this type are important for model comparison and selection. This paper takes a first step and proves that minimizing a basic margin-bound is NP-hard when considering linear hypotheses and the \u03c1-margin loss function, which generalizes the ramp loss.
This result directly implies the hardness of ramp loss minimization."} {"_id": "648b03cfa3fa899019ebc418923fa7afdcd64828", "title": "Effectiveness of Chinese massage therapy (Tui Na) for chronic low back pain: study protocol for a randomized controlled trial", "text": "BACKGROUND\nLow back pain is a common, disabling musculoskeletal disorder in both developing and developed countries. Although often recommended, the potential efficacy of massage therapy in general, and Chinese massage (tuina) in particular, for relief of chronic low back pain (CLBP) has not been fully established due to inadequate sample sizes, low methodological quality, and subclinical dosing regimens of trials to date. Thus, the purpose of this randomized controlled trial (RCT) is to evaluate the comparative effectiveness of tuina massage therapy versus conventional analgesics for CLBP.\n\n\nMETHODS/DESIGN\nThe present study is a single center, two-arm, open-label RCT. A total of 150 eligible CLBP patients will be randomly assigned to either a tuina treatment group or a conventional drug control group in a 1:1 ratio. Patients in the tuina group receive a 20-minute, 4-step treatment protocol which includes both structural and relaxation massage, administered in 20 sessions over a period of 4 weeks. Patients in the conventional drug control group are instructed to take a specific daily dose of ibuprofen. The primary outcome measure is the change from baseline back pain and function, measured by the Roland-Morris Disability Questionnaire, at two months. Secondary outcome measures include the visual analogue scale, Japanese orthopedic association score (JOAS), and McGill pain questionnaire.\n\n\nDISCUSSION\nThe design and methodological rigor of this trial will allow for collection of valuable data to evaluate the efficacy of a specific tuina protocol for treating CLBP. This trial will therefore contribute to providing a solid foundation for clinical treatment of CLBP, as well as future research in massage therapy.\n\n\nTRIAL REGISTRATION\nThis trial was registered with ClinicalTrials.gov of the National Institutes of Health on 22 October 2013 (NCT01973010)."} {"_id": "7826bc81ffed9c1f342616df264c92d6f732f4dd", "title": "Autonomic Communications in Software-Driven Networks", "text": "Autonomic communications aim to provide quality-of-service in networks using self-management mechanisms. They inherit many characteristics from autonomic computing, in particular, when communication systems are running as specialized applications in software-defined networking (SDN) and network function virtualization (NFV)-enabled cloud environments. This paper surveys autonomic computing and communications in the context of software-driven networks, i.e., networks based on SDN/NFV concepts. Autonomic communications create new challenges in terms of security, operations, and business support. We discuss several goals, research challenges, and development issues on self-management mechanisms and architectures in software-driven networks. This paper covers multiple perspectives of autonomic communications in software-driven networks, such as automatic testing, integration, and deployment of network functions.
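For concreteness, the ramp loss family in the complexity result above can be written as a clipped hinge. A minimal sketch in the rho-margin form follows (the paper's exact parameterization may differ); the clipping at 1 is what makes the loss bounded and non-convex, which is the source of the hardness.

```python
# Ramp (clipped hinge) loss in rho-margin form.
import numpy as np

def ramp_loss(margins, rho=1.0):
    # margins = y * (w . x): positive for correctly classified points
    return np.clip(1.0 - margins / rho, 0.0, 1.0)

z = np.array([-2.0, 0.0, 0.5, 1.0, 3.0])
print(ramp_loss(z))  # [1.  1.  0.5 0.  0. ]
```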
We also focus on self-management and optimization, which make use of machine learning techniques."} {"_id": "377fa49c9ead7b9604993e984d5ab14888188b1d", "title": "Coordinated Control of the Energy Router-Based Smart Home Energy Management System", "text": "Home area energy networks will be an essential part of the future Energy Internet in terms of energy saving, demand-side management and stability improvement of the distribution network, while an energy router will be the perfect choice to serve as an intelligent and multi-functional energy interface between the home area energy network and power grid. This paper elaborates on the design, analysis and implementation of coordinated control of the low-voltage energy router-based smart home energy management system (HEMS). The main contribution of this paper is to develop a novel solution to make the energy router technically feasible and practical for the HEMS to make full use of the renewable energy sources (RESs), while remaining \u201coperational friendly and beneficial\u201d to the power grid. The behaviors of the energy router-based HEMS in correlation with the power grid are investigated, then the coordinated control scheme composed of a reference voltage and current compensation strategy and a fuzzy logic control-based power management strategy is developed. The system model is built on the MATLAB/Simulink platform, and simulation results demonstrate that the presented control scheme is a strong performer in making full use of the RES generation for the HEMS while maintaining the operational stability of the whole system, as well as in collaboration with the power grid to suppress the impact of RES output fluctuations and load consumption variations."} {"_id": "e027edb12295a264dee7370a57c8a22e3106474a", "title": "The brain's conversation with itself: neural substrates of dialogic inner speech.", "text": "Inner speech has been implicated in important aspects of normal and atypical cognition, including the development of auditory hallucinations. Studies to date have focused on covert speech elicited by simple word or sentence repetition, while ignoring richer and arguably more psychologically significant varieties of inner speech. This study compared neural activation for inner speech involving conversations ('dialogic inner speech') with single-speaker scenarios ('monologic inner speech'). Inner speech-related activation differences were then compared with activations relating to Theory-of-Mind (ToM) reasoning and visual perspective-taking in a conjunction design. Generation of dialogic (compared with monologic) scenarios was associated with a widespread bilateral network including left and right superior temporal gyri, precuneus, posterior cingulate and left inferior and medial frontal gyri. Activation associated with dialogic scenarios and ToM reasoning overlapped in areas of right posterior temporal cortex previously linked to mental state representation. Implications for understanding verbal cognition in typical and atypical populations are discussed."} {"_id": "9fb5d6f97ba6ffa35a7bb8a013474f991249a813", "title": "Judgments of genuine, suppressed, and faked facial expressions of pain.", "text": "The process of discriminating among genuine, suppressed, and faked expressions of pain was examined. Untrained judges estimated the severity of pain being experienced when viewing videotaped facial expressions of chronic pain patients undergoing a painful diagnostic test or dissimulating reactions.
Verbal feedback as to whether pain was experienced also was provided, so as to be either consistent or inconsistent with the facial expression. Judges were able to distinguish genuine pain faces from baseline expressions but, relative to genuine pain faces, attributed more pain to faked faces and less pain to suppressed ones. Advance warning of deception did not improve discrimination but led to a more conservative or nonempathic judging style. Verbal feedback increased or decreased judgments, as appropriate, but facial information consistently was assigned greater weight. An augmenting model of the judgment process that attaches considerable importance to the context in which information is provided was supported."} {"_id": "adf0d7a1967a14be8f881ab71449015aff720755", "title": "SRFeat: Single Image Super-Resolution with Feature Discrimination", "text": "Generative adversarial networks (GANs) have recently been adopted to single image super-resolution (SISR) and showed impressive results with realistically synthesized high-frequency textures. However, the results of such GAN-based approaches tend to include less meaningful high-frequency noise that is irrelevant to the input image. In this paper, we propose a novel GAN-based SISR method that overcomes the limitation and produces more realistic results by attaching an additional discriminator that works in the feature domain. Our additional discriminator encourages the generator to produce structural high-frequency features rather than noisy artifacts as it distinguishes synthetic and real images in terms of features. We also design a new generator that utilizes long-range skip connections so that information between distant layers can be transferred more effectively. Experiments show that our method achieves the state-of-the-art performance in terms of both PSNR and perceptual quality compared to recent GAN-based methods."} {"_id": "103f1b70ccc04372da643f3ae16acbfd975ee5d3", "title": "Time Series Featurization via Topological Data Analysis: an Application to Cryptocurrency Trend Forecasting", "text": "We propose a novel methodology for feature extraction from time series data based on topological data analysis. The proposed procedure applies a dimensionality reduction technique via principal component analysis to the point cloud of the Takens\u2019 embedding from the observed time series and then evaluates the persistence landscape and silhouettes based on the corresponding Rips complex. We define a new notion of Rips distance function that is especially suited for persistent homology built on Rips complexes and prove stability theorems for it. We use these results to demonstrate in turn some stability properties of the topological features extracted using our procedure with respect to additive noise and sampling. We further apply our method to the problem of trend forecasting for cryptocurrency prices, where we manage to achieve significantly lower error rates than more standard, non-TDA-based methodologies in complex pattern classification tasks. We expect our method to provide new insight into feature engineering for granular, noisy time series data."} {"_id": "2621b8f63247ea5af03f4ea0e83c3b528238c4a1", "title": "Evaluating STT-RAM as an energy-efficient main memory alternative", "text": "In this paper, we explore the possibility of using STT-RAM technology to completely replace DRAM in main memory. Our goal is to make STT-RAM performance comparable to DRAM while providing substantial power savings.
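The first stage of the TDA pipeline above, before any persistence computation, is a Takens delay embedding of the observed series followed by PCA. A minimal numpy/scikit-learn sketch of that stage (the Rips complex and persistence landscape steps would use a dedicated TDA library and are omitted here):

```python
# Takens delay embedding + PCA: the point-cloud construction stage of the pipeline.
import numpy as np
from sklearn.decomposition import PCA

def takens_embedding(x, dim=3, delay=1):
    n = len(x) - (dim - 1) * delay
    return np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.randn(t.size)       # stand-in for a price series

cloud = takens_embedding(x, dim=8, delay=5)          # point cloud in R^8
reduced = PCA(n_components=3).fit_transform(cloud)   # input for a Rips filtration
print(reduced.shape)
```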
Towards this goal, we first analyze the performance and energy of STT-RAM, and then identify key optimizations that can be employed to improve its characteristics. Specifically, using partial write and row buffer write bypass, we show that STT-RAM main memory performance and energy can be significantly improved. Our experiments indicate that an optimized, equal capacity STT-RAM main memory can provide performance comparable to DRAM main memory, with an average 60% reduction in main memory energy."} {"_id": "a95436fb5417f16497d90cd2aeb11a0e2873f55f", "title": "Architecting phase change memory as a scalable dram alternative", "text": "Memory scaling is in jeopardy as charge storage and sensing mechanisms become less reliable for prevalent memory technologies, such as DRAM. In contrast, phase change memory (PCM) storage relies on scalable current and thermal mechanisms. To exploit PCM's scalability as a DRAM alternative, PCM must be architected to address relatively long latencies, high energy writes, and finite endurance.\n We propose, crafted from a fundamental understanding of PCM technology parameters, area-neutral architectural enhancements that address these limitations and make PCM competitive with DRAM. A baseline PCM system is 1.6x slower and requires 2.2x more energy than a DRAM system. Buffer reorganizations reduce this delay and energy gap to 1.2x and 1.0x, using narrow rows to mitigate write energy and multiple rows to improve locality and write coalescing. Partial writes enhance memory endurance, providing 5.6 years of lifetime. Process scaling will further reduce PCM energy costs and improve endurance."} {"_id": "71c2deb5c3b4b0fd1ed68bdda534ec7ea76e845b", "title": "A novel nonvolatile memory with spin torque transfer magnetization switching: spin-ram", "text": "A novel nonvolatile memory utilizing spin torque transfer magnetization switching (STS), abbreviated spin-RAM hereafter, is presented for the first time. The spin-RAM is programmed by magnetization reversal through an interaction of a spin momentum-torque-transferred current and a magnetic moment of memory layers in magnetic tunnel junctions (MTJs), and therefore an external magnetic field is unnecessary, unlike a conventional MRAM. This new programming mode has been accomplished owing to our tailored MTJ, which has an oval shape of 100 nm \u00d7 150 nm. The memory cell is based on a 1-transistor and 1-MTJ (1T1J) structure. The 4-kbit spin-RAM was fabricated on a 4-level-metal, 0.18 \u03bcm CMOS process. In this work, a write speed as fast as 2 ns and a write current as low as 200 \u03bcA were successfully demonstrated. It has been proved that spin-RAM possesses outstanding characteristics such as high speed, low power and high scalability for the next-generation universal memory."} {"_id": "f03cb33bac64a0d23008c8970d1ced993354ebc6", "title": "Ultra-Thin Phase-Change Bridge Memory Device Using GeSb", "text": "An ultra-thin phase-change bridge (PCB) memory cell, implemented with doped GeSb, is shown with <100 \u03bcA RESET current. The device concept provides for simplified scaling to small cross-sectional area (60 nm\u00b2) through ultra-thin (3 nm) films; the doped GeSb phase-change material offers the potential for both fast crystallization and good data retention."} {"_id": "fae8a785260ac5c34be82fca92a4abef4c30d655", "title": "Phase-change random access memory: A scalable technology", "text": "
Nonvolatile RAM using resistance contrast in phase-change materials [or phase-change RAM (PCRAM)] is a promising technology for future storage-class memory. However, such a technology can succeed only if it can scale smaller in size, given the increasingly tiny memory cells that are projected for future technology nodes (i.e., generations). We first discuss the critical aspects that may affect the scaling of PCRAM, including materials properties, power consumption during programming and read operations, thermal cross-talk between memory cells, and failure mechanisms. We then discuss experiments that directly address the scaling properties of the phase-change materials themselves, including studies of phase transitions in both nanoparticles and ultrathin films as a function of particle size and film thickness. This work in materials directly motivated the successful creation of a series of prototype PCRAM devices, which have been fabricated and tested at phase-change material cross-sections with extremely small dimensions as low as 3 nm \u00d7 20 nm. These device measurements provide a clear demonstration of the excellent scaling potential offered by this technology, and they are also consistent with the scaling behavior predicted by extensive device simulations. Finally, we discuss issues of device integration and cell design, manufacturability, and reliability."} {"_id": "1a124ed5d7c739727ca60cf11008edafa9e3ecf2", "title": "SamzaSQL: Scalable Fast Data Management with Streaming SQL", "text": "As the data-driven economy evolves, enterprises have come to realize a competitive advantage in being able to act on high volume, high velocity streams of data. Technologies such as distributed message queues and stream processing platforms that can scale to thousands of data stream partitions on commodity hardware are a response. However, the programming API provided by these systems is often low-level, requiring substantial custom code that adds to the programmer learning curve and maintenance overhead. Additionally, these systems often lack SQL querying capabilities that have proven popular on Big Data systems like Hive, Impala or Presto. We define a minimal set of extensions to standard SQL for data stream querying and manipulation. These extensions are prototyped in SamzaSQL, a new tool for streaming SQL that compiles streaming SQL into physical plans that are executed on Samza, an open-source distributed stream processing framework. We compare the performance of streaming SQL queries against native Samza applications and discuss usability improvements. SamzaSQL is a part of the open source Apache Samza project and will be available for general use."} {"_id": "8619ce028bd112548f83d3f36290c1b05a8a694e", "title": "Neuro-fuzzy rule generation: survey in soft computing framework", "text": "The present article is a novel attempt at providing an exhaustive survey of neuro-fuzzy rule generation algorithms. Rule generation from artificial neural networks is gaining in popularity in recent times due to its capability of providing some insight to the user about the symbolic knowledge embedded within the network. Fuzzy sets are an aid in providing this information in a more human-comprehensible or natural form, and can handle uncertainties at various levels. The neuro-fuzzy approach, symbiotically combining the merits of connectionist and fuzzy approaches, constitutes a key component of soft computing at this stage.
To date, there has been no detailed and integrated categorization of the various neuro-fuzzy models used for rule generation. We propose to bring these together under a unified soft computing framework. Moreover, we include both rule extraction and rule refinement in the broader perspective of rule generation. Rules learned and generated for fuzzy reasoning and fuzzy control are also considered from this wider viewpoint. Models are grouped on the basis of their level of neuro-fuzzy synthesis. Use of other soft computing tools like genetic algorithms and rough sets is emphasized. Rule generation from fuzzy knowledge-based networks, which initially encode some crude domain knowledge, is found to result in more refined rules. Finally, a real-life application to medical diagnosis is provided."} {"_id": "436c3119d16ce2e3c243ffe7a4a1a5dc40b128aa", "title": "Dialog Act Modeling for Conversational Speech", "text": "We describe an integrated approach for statistical modeling of discourse structure for natural conversational speech. Our model is based on 42 'dialog acts' which were hand-labeled in 1155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We developed several models and algorithms to automatically detect dialog acts from transcribed or automatically recognized words and from prosodic properties of the speech signal, and by using a statistical discourse grammar. All of these components were probabilistic in nature and estimated from data, employing a variety of techniques (hidden Markov models, N-gram language models, maximum entropy estimation, decision tree classifiers, and neural networks). In preliminary studies, we achieved a dialog act labeling accuracy of 65% based on recognized words and prosody, and an accuracy of 72% based on word transcripts. Since humans achieve 84% on this task (with chance performance at 35%) we find these results encouraging."} {"_id": "b8ec319b1f5223508267b1d5b677c0796d25ac13", "title": "Learning by Association \u2014 A Versatile Semi-Supervised Training Method for Neural Networks", "text": "In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. Associations are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN."} {"_id": "c26e4b8cd5b8f99718e637fa42d1458275bea0a2", "title": "Plead or Pitch? The Role of Language in Kickstarter Project Success", "text": "We present an analysis of over 26,000 projects from Kickstarter, a popular crowdfunding platform. Specifically, we focus on the language used in project pitches, and how it impacts project success/failure.
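The dialog-act abstract above combines per-utterance evidence (word and prosody models) with a statistical discourse grammar. A minimal sketch of the HMM decoding step under that description, assuming a bigram grammar over acts and a precomputed per-utterance log-likelihood matrix (toy numbers, not the paper's trained models):

```python
# Viterbi decoding of dialog acts under a bigram discourse grammar.
import numpy as np

def viterbi(log_pi, log_trans, log_evidence):
    T, K = log_evidence.shape
    delta = log_pi + log_evidence[0]              # best score ending in each act
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans       # scores[i, j]: act i -> act j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_evidence[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy usage: two acts (statement=0, question=1), three utterances
log_pi = np.log([0.7, 0.3])
log_trans = np.log([[0.8, 0.2], [0.6, 0.4]])
log_evidence = np.log([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print(viterbi(log_pi, log_trans, log_evidence))   # most likely act sequence
```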
We train a series of discriminative models for binary success/failure prediction of projects with increasing complexity of linguistic features, and show that successful project pitches are, on average, more emotive, thoughtful and colloquial. In addition, we present an analysis of pledge rewards in Kickstarter, and how these differ across categories and for successful versus unsuccessful projects."} {"_id": "238e8b5689d0b49c218e235a3e38d59c57ae6503", "title": "Mobility Management for Femtocells in LTE-Advanced: Key Aspects and Survey of Handover Decision Algorithms", "text": "Support of femtocells is an integral part of the Long Term Evolution - Advanced (LTE-A) system and a key enabler for its wide adoption on a broad scale. Femtocells are short-range, low-power and low-cost cellular stations which are installed by consumers in an unplanned manner. Even though current literature includes various studies towards understanding the main challenges of interference management in the presence of femtocells, little light has been shed on the open issues of mobility management (MM) in the two-tier macrocell-femtocell network. In this paper, we provide a comprehensive discussion on the key aspects and research challenges of MM support in the presence of femtocells, with emphasis on the phases of a) cell identification, b) access control, c) cell search, d) cell selection/reselection, e) handover (HO) decision, and f) HO execution. A detailed overview of the respective MM procedures in the LTE-A system is also provided to better comprehend the solutions and open issues posed in real-life systems. Based on the discussion for the HO decision phase, we subsequently survey and classify existing HO decision algorithms for the two-tier macrocell-femtocell network, depending on the primary HO decision criterion used. For each class, we overview up to three representative algorithms and provide detailed flowcharts to describe their fundamental operation. A comparative summary of the main decision parameters and key features of selected HO decision algorithms concludes this work, providing insights for future algorithmic design and standardization activities."} {"_id": "2cf9714cb82974c85c99a5f3bfe5cd79de52bd69", "title": "Directing exploratory search: reinforcement learning from user interactions with keywords", "text": "Techniques for both exploratory and known-item search tend to direct users only to more specific subtopics or individual documents, as opposed to allowing them to direct the exploration of the information space. We present an interactive information retrieval system that combines Reinforcement Learning techniques along with a novel user interface design to allow active engagement of users in directing the search. Users can directly manipulate document features (keywords) to indicate their interests and Reinforcement Learning is used to model the user by allowing the system to trade off between exploration and exploitation. This gives users the opportunity to more effectively direct their search nearer, further and following a direction.
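The Kickstarter study above trains discriminative models of increasing linguistic complexity. Below is a minimal sketch of the simplest member of that family, a bag-of-words logistic regression over pitch text; the pitches and labels are toy stand-ins (scikit-learn assumed), not the paper's corpus or feature set.

```python
# Binary success/failure prediction from pitch language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pitches = ["help us build a community garden for everyone",
           "give me money",
           "a thoughtful handcrafted journal, made with love",
           "need cash fast"]
success = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(pitches, success)
print(model.predict(["a heartfelt project for our neighborhood"]))
```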
A task-based user study conducted with 20 participants comparing our system to a traditional query-based baseline indicates that our system significantly improves the effectiveness of information retrieval by providing access to more relevant and novel information without having to spend more time acquiring the information."} {"_id": "be4e6bbf2935d999ce4def488ed537dfd50a2ee3", "title": "Investigations on speaker adaptation of LSTM RNN models for speech recognition", "text": "Recently, Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) acoustic models have demonstrated superior performance over deep neural network (DNN) models in speech recognition and many other tasks. Although a lot of work has been reported on DNN model adaptation, very little has been done on LSTM model adaptation. In this paper we present our extensive studies of speaker adaptation of LSTM-RNN models for speech recognition. We investigated different adaptation methods combined with KL-divergence based regularization, where and which network component to adapt, supervised versus unsupervised adaptation and asymptotic analysis. We made a few distinct and important observations. In a large vocabulary speech recognition task, by adapting only 2.5% of the LSTM model parameters using 50 utterances per speaker, we obtained 12.6% WERR on the dev set and 9.1% WERR on the evaluation set over a strong LSTM baseline model."} {"_id": "32f9fad37daf596563bcf0f5251e7f43c1f0ca58", "title": "A 3D printed low profile magnetic dipole antenna", "text": "An electrically small, low profile magnetic dipole antenna is presented. The proposed antenna is composed of multiple small twisted loops connected in parallel. The simulated radiation efficiency is 87% at the resonant frequency of 222.5 MHz. The corresponding electrical size ka is 0.276 with a height of 0.0016\u03bb. The prototype is built using selective laser sintering technology and silver paste painting. The measured result is discussed."} {"_id": "910c864568bd83719ee5050a9d734be9ba439cd5", "title": "Applying connectionist modal logics to distributed knowledge representation problems", "text": "Neural-Symbolic Systems concern the integration of the symbolic and connectionist paradigms of Artificial Intelligence. Distributed knowledge representation is traditionally seen under a symbolic perspective. In this paper, we show how neural networks can represent distributed symbolic knowledge, acting as multi-agent systems with learning capability (a key feature of neural networks). We apply the framework of Connectionist Modal Logics to well-known testbeds for distributed knowledge representation formalisms, namely the muddy children and the wise men puzzles. Finally, we sketch a full solution to these problems by extending our approach to deal with knowledge evolution over time."} {"_id": "45864034eba454b115a0cd91e175104010cd14ad", "title": "Epigenetics and Signaling Pathways in Glaucoma", "text": "Glaucoma is the most common cause of irreversible blindness worldwide. This neurodegenerative disease becomes more prevalent with aging, but predisposing genetic and environmental factors also contribute to increased risk. Emerging evidence now suggests that epigenetics may also be involved, which provides potential new therapeutic targets. These three factors work through several pathways, including TGF-\u03b2, MAP kinase, Rho kinase, BDNF, JNK, PI-3/Akt, PTEN, Bcl-2, Caspase, and Calcium-Calpain signaling.
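The KL-divergence regularization named in the LSTM adaptation abstract above is commonly implemented as an interpolated cross-entropy: the adapted model's outputs are pulled toward the targets and toward the unadapted, speaker-independent model. A minimal numpy sketch under that common reading (the interpolation weight rho and the exact form are assumptions, not the paper's recipe):

```python
# KL-regularized adaptation loss: (1 - rho) * CE(targets) + rho * CE(SI outputs).
import numpy as np

def kld_adapt_loss(p, p_si, y_onehot, rho=0.5, eps=1e-12):
    ce = -(y_onehot * np.log(p + eps)).sum(axis=1)   # cross-entropy to targets
    kl = -(p_si * np.log(p + eps)).sum(axis=1)       # cross-entropy to SI model
    return float(((1 - rho) * ce + rho * kl).mean())

p = np.array([[0.7, 0.2, 0.1]])                      # adapted model outputs
p_si = np.array([[0.5, 0.3, 0.2]])                   # speaker-independent outputs
y = np.array([[1.0, 0.0, 0.0]])
print(kld_adapt_loss(p, p_si, y))
```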
Together, these pathways result in the upregulation of proapoptotic gene expression, the downregulation of neuroprotective and prosurvival factors, and the generation of fibrosis at the trabecular meshwork, which may block aqueous humor drainage. Novel therapeutic agents targeting these pathway members have shown preliminary success in animal models and even human trials, demonstrating that they may eventually be used to preserve retinal neurons and vision."} {"_id": "24d66ec9dd202a6ea02b8723ae9d2fd7ffd32a4a", "title": "BING: Binarized Normed Gradients for Objectness Estimation at 300fps", "text": "Training a generic objectness measure to produce a small set of candidate object windows has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and for computational reasons, we propose to resize the window to 8 \u00d7 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1,000 proposals. By increasing the number of proposals and the color spaces used for computing BING features, our performance can be further improved to 99.5% DR."} {"_id": "9cf389e03a5a23c1db4b14b849552777f238ee50", "title": "A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods.", "text": "In this initial pilot study, a controlled clinical comparison was made of attention performance in children with attention deficit-hyperactivity disorder (ADHD) in a virtual reality (VR) classroom. Ten boys diagnosed with ADHD and ten normal control boys participated in the study. Groups did not significantly differ in mean age, grade level, ethnicity, or handedness. No participants reported simulator sickness following VR exposure. Children with ADHD exhibited more omission errors, commission errors, and overall body movement than normal control children in the VR classroom. Children with ADHD were more impacted by distraction in the VR classroom. VR classroom measures were correlated with traditional ADHD assessment tools and the flatscreen CPT. Of note, the small sample size incorporated in each group and higher WISC-III scores of normal controls might have some bearing on the overall interpretation of results. These data suggested that the Virtual Classroom had good potential for controlled performance assessment within an ecologically valid environment and appeared to parse out significant effects due to the presence of distraction stimuli."} {"_id": "6ae8321f2863b79b37e2f0e0d131dc1a80829cf7", "title": "Does Twitter language reliably predict heart disease? A commentary on Eichstaedt et al.
(2015a)", "text": "We comment on Eichstaedt et\u00a0al.'s (2015a) claim to have shown that language patterns among Twitter users, aggregated at the level of US counties, predicted county-level mortality rates from atherosclerotic heart disease (AHD), with \"negative\" language being associated with higher rates of death from AHD and \"positive\" language associated with lower rates. First, we examine some of Eichstaedt et al.'s apparent assumptions about the nature of AHD, as well as some issues related to the secondary analysis of online data and to considering counties as communities. Next, using the data files supplied by Eichstaedt et al., we reproduce their regression- and correlation-based models, substituting mortality from an alternative cause of death-namely, suicide-as the outcome variable, and observe that the purported associations between \"negative\" and \"positive\" language and mortality are reversed when suicide is used as the outcome variable. We identify numerous other conceptual and methodological limitations that call into question the robustness and generalizability of Eichstaedt et al.'s claims, even when these are based on the results of their ridge regression/machine learning model. We conclude that there is no good evidence that analyzing Twitter data in bulk in this way can add anything useful to our ability to understand geographical variation in AHD mortality rates."} {"_id": "458dd9078a518a859e9ae1051b28a19f8dfa72de", "title": "Cascading Bandits for Large-Scale Recommendation Problems", "text": "Most recommender systems recommend a list of items. The user examines the list, from the first item to the last, and often chooses the first attractive item and does not examine the rest. This type of user behavior can be modeled by the cascade model. In this work, we study cascading bandits, an online learning variant of the cascade model where the goal is to recommend K most attractive items from a large set of L candidate items. We propose two algorithms for solving this problem, which are based on the idea of linear generalization. The key idea in our solutions is that we learn a predictor of the attraction probabilities of items from their features, as opposing to learning the attraction probability of each item independently as in the existing work. This results in practical learning algorithms whose regret does not depend on the number of items L. We bound the regret of one algorithm and comprehensively evaluate the other on a range of recommendation problems. The algorithm performs well and outperforms all baselines."} {"_id": "2e9de0de9aa8ab46a9d3e20fe21472104f42cbbe", "title": "SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent", "text": "TheSGD-QN algorithm is a stochastic gradient descent algorithm that m akes careful use of secondorder information and splits the parameter update into inde pendently scheduled components. Thanks to this design, SGD-QN iterates nearly as fast as a first-order stochastic gradient escent but requires less iterations to achieve the same accuracy. 
This algorithm won the \u201cWild Track\u201d of the first PASCAL Large Scale Learning Challenge (Sonnenburg et al., 2008)."} {"_id": "33ffb3959534d798c12f69391b380b22e7492e3e", "title": "Autonomous human-robot proxemics: socially aware navigation based on interaction potential", "text": "To enable situated human\u2013robot interaction (HRI), an autonomous robot must both understand and control proxemics\u2014the social use of space\u2014to employ natural communication mechanisms analogous to those used by humans. This work presents a computational framework of proxemics based on data-driven probabilistic models of how social signals (speech and gesture) are produced (by a human) and perceived (by a robot). The framework and models were implemented as autonomous proxemic behavior systems for sociable robots, including: (1) a sampling-based method for robot proxemic goal state estimation with respect to human\u2013robot distance and orientation parameters, (2) a reactive proxemic controller for goal state realization, and (3) a cost-based trajectory planner for maximizing automated robot speech and gesture recognition rates along a path to the goal state. Evaluation results indicate that the goal state estimation and realization significantly improve upon past work in human\u2013robot proxemics with respect to \u201cinteraction potential\u201d\u2014predicted automated speech and gesture recognition rates as the robot enters into and engages in face-to-face social encounters with a human user\u2014illustrating their efficacy to support richer robot perception and autonomy in HRI."} {"_id": "6716a4e9ebb330a08e3cb19b0b574b0e7b429166", "title": "Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces", "text": "The aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved with the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive system capable of immersing the trainees into a virtual environment where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training.
This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user\u2019s \u201csense of touch\u201d within the immersive virtual environment is simulated through an Immersion CyberForce\u00ae hand-based force-feedback device. Preliminary testing of the proposed system seems encouraging."} {"_id": "274c938a70de1afb9ef1489cab2186c1d699725e", "title": "Electricity Price Forecasting With Extreme Learning Machine and Bootstrapping", "text": "Artificial neural networks (ANNs) have been widely applied in electricity price forecasts due to their nonlinear modeling capabilities. However, it is well known that traditional training methods for ANNs, such as the back-propagation (BP) approach, are normally slow and can become trapped in local optima. In this paper, a fast electricity market price forecast method is proposed based on a recently emerged learning method for single hidden layer feed-forward neural networks, the extreme learning machine (ELM), to overcome these drawbacks. The new approach also improves price interval forecast accuracy by incorporating a bootstrapping method for uncertainty estimation. Case studies based on chaotic time series and Australian National Electricity Market price series show that the proposed method can effectively capture the nonlinearity from the highly volatile price data series with much less computation time compared with other methods. The results show the great potential of this proposed approach for online accurate price forecasting for the spot market prices analysis."} {"_id": "99e2a37d09479b29f14b34c81f4f493f40920b7f", "title": "E-TOURISM: AN INNOVATIVE APPROACH FOR THE SMALL AND MEDIUM-SIZED TOURISM ENTERPRISES (SMTES) IN KOREA", "text": "This paper deals with e-tourism, innovation and growth. The Internet is revolutionising the distribution of tourism information and sales. The Korean small and medium-sized tourism enterprises (SMTEs) with well-developed and innovative Web sites can now have \u201cequal Internet access\u201d to international tourism markets. This paper examines problems and solutions related to electronic commerce in the tourism industry and suggests recommendations for successful e-commerce strategies in tourism to be applied by the industry and the government in Korea. Introduction The definitions of tourism innovation (e.g. product, service and technological innovations) remain unclear, with the possible exception of the Internet. New technologies can make an essential contribution to tourism development. For tourism businesses, the Internet offers the potential to make information and booking facilities available to large numbers of tourists at relatively low costs. It also provides a tool for communication between tourism suppliers, intermediaries, as well as end-consumers. OECD (2000) revealed that the advent of Internet-based electronic commerce offers considerable opportunities for firms to expand their customer base, enter new product markets and rationalise their business. WTO (2001) also indicated that electronic business offers SMEs the opportunity to undertake their business in new and more cost-effective ways. According to WTO, the Internet is revolutionising the distribution of tourism information and sales.
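The ELM training described in the electricity-price abstract above reduces to a single least-squares solve over a randomly projected hidden layer; a minimal sketch, assuming a sigmoid activation and a ridge term (choices the abstract does not pin down):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, ridge=1e-3, seed=0):
    """Minimal extreme learning machine: random hidden layer + least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations (sigmoid assumed)
    # Output weights via ridge-regularized least squares -- the only "training" step.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: fit a noisy sine wave as a stand-in for a volatile price series.
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.randn(200)
W, b, beta = elm_train(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # small training MSE
```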
An increasing proportion of Internet users are buying on-line and tourism will gain a larger and larger share of the online commerce market. Obviously, the Internet is having a major impact as a source of information for tourism. However, the SMTEs are facing more stringent impediments to the adoption of new information technology, in particular, e-business. Part of the problem relates to the scale and affordability of information technology, as well as the ease of implementation within rapidly growing and changing organisations. In addition, new solutions configured for large, stable, and internationally-oriented firms do not fit well with small, dynamic, and locally-based tourism firms. Despite these challenges, SMTEs with well-developed and innovative Web sites can now have \u201cequal Internet access\u201d to international tourism markets. This implies equal access to telecom infrastructure, as well as to marketing management and education. According to a UN report (2001), \u201cit is not the cost of being there, on the on-line market place, which must be reckoned with, but the cost of not being there.\u201d It is certain that embracing digital communication and information technology is no longer an option, but a necessity. Thus, one of the most important characteristics of"} {"_id": "c78537e92ccc4b14939a97ef5345cd7dbacc1c25", "title": "The Effectiveness of Intrinsic and Extrinsic Motivations : A Study of Malaysian Amway Company \u2019 s Direct Sales Forces", "text": "This research utilizes survey questionnaire data collected in May/June 2010 from 200 members of Amway Company\u2019s direct sales force in the Klang Valley area (Malaysia) to analyze the effectiveness of intrinsic and extrinsic motivation in influencing job satisfaction. The data are analyzed using the well-established methods of correlation analysis, regression analysis, independent-sample t-tests, and one-way ANOVA. There are four major findings. First, there is a relationship between intrinsic and extrinsic motivation and job satisfaction. According to the correlation values, intrinsic motivation tends to contribute more to job satisfaction than extrinsic motivation. Second, there are significant and positive relationships between intrinsic and extrinsic motivation and job satisfaction. Both intrinsic and extrinsic motivation are identified as predictors of job satisfaction; in other words, both contribute significantly to better job satisfaction. Third, there is no gender difference in intrinsic and extrinsic motivation. Hence, gender is not a factor that affects either form of motivation. Lastly, the results indicate that intrinsic and extrinsic motivation differ across age groups. Therefore, age is a factor that influences both intrinsic and extrinsic motivation. Last but not least, the results demonstrate the effectiveness of intrinsic and extrinsic motivation in influencing job satisfaction among Amway Company\u2019s direct sales force, and suggest that establishing appropriate intrinsic motivators may promote higher job satisfaction."} {"_id": "59d312a4ef11d1fa2b001650decb664a64fcb0a7", "title": "A robust adaptive mixing control for improved forward flight of a tilt-rotor UAV", "text": "This work presents the modeling and control of a tilt-rotor UAV, with controlled tail surfaces, for path tracking with improved forward flight.
The dynamic model is obtained using the Euler-Lagrange formulation, considering the aerodynamic forces and torques exerted on the horizontal and vertical stabilizers and on the fuselage. For control design purposes, the equations of motion are linearized around different operation points to cover a large range of forward velocity. Based on these linearized dynamic models, a mixed H2/H\u221e robust controller is designed for each operation point. An adaptive mixing scheme is then used to perform an on-line smooth gain-scheduling between them. Simulation results show the efficiency of the control strategy when the UAV is commanded to accelerate forward and perform a circular trajectory subject to a wind disturbance."} {"_id": "7e9100e625ec05c95adc8f05927bfca43258eee7", "title": "Fatigue in soccer: a brief review.", "text": "This review describes when fatigue may develop during soccer games and the potential physiological mechanisms that cause fatigue in soccer. According to time-motion analyses and performance measures during match-play, fatigue or reduced performance seems to occur at three different stages in the game: (1) after short-term intense periods in both halves; (2) in the initial phase of the second half; and (3) towards the end of the game. Temporary fatigue after periods of intense exercise in the game does not appear to be linked directly to muscle glycogen concentration, lactate accumulation, acidity or the breakdown of creatine phosphate. Instead, it may be related to disturbances in muscle ion homeostasis and an impaired excitation of the sarcolemma. Soccer players' ability to perform maximally is inhibited in the initial phase of the second half, which may be due to lower muscle temperatures compared with the end of the first half. Thus, when players perform low-intensity activities in the interval between the two halves, both muscle temperature and performance are preserved. Several studies have shown that fatigue sets in towards the end of a game, which may be caused by low glycogen concentrations in a considerable number of individual muscle fibres. In a hot and humid environment, dehydration and a reduced cerebral function may also contribute to the deterioration in performance. In conclusion, fatigue or impaired performance in soccer occurs during various phases in a game, and different physiological mechanisms appear to operate in different periods of a game."} {"_id": "16326dd240c9a3ffd02237497ac868b6edb3147b", "title": "Combining Gradient and Albedo Data for Rotation Invariant Classification of 3D Surface Texture", "text": "We present a new texture classification scheme which is invariant to surface rotation. Many texture classification approaches have been presented in the past that are image-rotation invariant. However, image rotation is not necessarily the same as surface rotation. We have therefore developed a classifier that uses invariants that are derived from surface properties rather than image properties. Previously we developed a scheme that used surface gradient (normal) fields estimated using photometric stereo. In this paper we augment these data with albedo information and also employ an additional feature set: the radial spectrum. We used 30 real textures to test the new classifier. A classification accuracy of 91% was achieved when albedo and gradient 1D polar and radial features were combined. The best performance was achieved by using 2D albedo and gradient spectra.
The classification accuracy is 99%."} {"_id": "318344b8015a92f23e508d6476f8243c74ff02ee", "title": "A Software-Based Sonar Ranging Sensor for Smart Phones", "text": "We live in a 3-D world. However, the smart phones that we use every day are incapable of sensing depth, without the use of custom hardware. By creating new depth sensors, we can provide developers with the tools that they need to create immersive mobile applications that take advantage of the 3-D nature of our world. In this paper, we propose a new sonar sensor for smart phones. This sonar sensor does not require any additional hardware, and utilizes the phone's microphone and rear speaker. The sonar sensor calculates distances by measuring the elapsed time between the initial pulse and its reflection. We evaluate the accuracy of the sonar sensor by using it to measure the distance from the phone to an object. We found that we were able to measure the distances of objects accurately with an error bound of 12 cm."} {"_id": "1adc43eb3cb3ee8b4f2289ed3533b6f550a3fefa", "title": "Recent Advances in Augmented Reality", "text": "The field of Augmented Reality (AR) has existed for just over one decade, but the growth and progress in the past few years has been remarkable. In 1997, the first author published a survey [3] (based on a 1995 SIGGRAPH course lecture) that defined the field, described many problems, and summarized the developments up to that point. Since then, the field has grown rapidly. In the late 1990\u2019s, several conferences specializing in this area were started, including the International Workshop and Symposium on Augmented Reality [29], the International Symposium on Mixed Reality [30], and the Designing Augmented Reality Environments workshop. Some well-funded interdisciplinary consortia were formed that focused on AR, notably the Mixed Reality Systems Laboratory [50] in Japan and Project ARVIKA [61] in Germany. A freely-available software toolkit (the ARToolkit) for rapidly building AR applications is now available [2]. Because of this wealth of new developments, an updated survey is needed to guide and encourage further research in this exciting area."} {"_id": "3244243bd0ab1790dfda1128390fd56674c24389", "title": "Exploring the Benefits of Augmented Reality Documentation for Maintenance and Repair", "text": "We explore the development of an experimental augmented reality application that provides benefits to professional mechanics performing maintenance and repair tasks in a field setting. We developed a prototype that supports military mechanics conducting routine maintenance tasks inside an armored vehicle turret, and evaluated it with a user study. Our prototype uses a tracked headworn display to augment a mechanic's natural view with text, labels, arrows, and animated sequences designed to facilitate task comprehension, localization, and execution. A within-subject controlled user study examined professional military mechanics using our system to complete 18 common tasks under field conditions. These tasks included installing and removing fasteners and indicator lights, and connecting cables, all within the cramped interior of an armored personnel carrier turret. An augmented reality condition was tested against two baseline conditions: the same headworn display providing untracked text and graphics and a fixed flat panel display representing an improved version of the laptop-based documentation currently employed in practice.
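The ranging principle in the sonar-sensor abstract above is plain time-of-flight arithmetic; a minimal sketch, assuming sound travels at 343 m/s and that pulse and echo are timestamped on the same clock:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def distance_from_echo(t_pulse: float, t_echo: float) -> float:
    """Range from time-of-flight: the pulse travels to the object and back,
    so the one-way distance is half the round-trip time times the speed of sound."""
    elapsed = t_echo - t_pulse
    if elapsed <= 0:
        raise ValueError("echo must arrive after the pulse")
    return SPEED_OF_SOUND * elapsed / 2.0

# A 5.8 ms round trip corresponds to roughly one meter.
print(distance_from_echo(0.0, 0.0058))  # ~0.995 m
```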
The augmented reality condition allowed mechanics to locate tasks more quickly than when using either baseline, and in some instances, resulted in less overall head movement. A qualitative survey showed that mechanics found the augmented reality condition intuitive and satisfying for the tested sequence of tasks."} {"_id": "56695dbdf9c49a4d44961bd97b16e3153144f906", "title": "ENHANCING THE TOURISM EXPERIENCE THROUGH MOBILE AUGMENTED REALITY: CHALLENGES AND PROSPECTS", "text": "The paper discusses the use of Augmented Reality (AR) applications for the needs of tourism. It describes the technology\u2019s evolution from pilot applications into commercial mobile applications. We address the technical aspects of mobile AR applications development, emphasizing on the technologies that render the delivery of augmented reality content possible and experientially superior. We examine the state of the art, providing an analysis concerning the development and the objectives of each application. Acknowledging the various technological limitations hindering AR\u2019s substantial end-user adoption, the paper proposes a model for developing AR mobile applications for the field of tourism, aiming to release AR\u2019s full potential within the field."} {"_id": "87806d658ee71aff6c595e412d0a96187a2dffa6", "title": "A head-mounted three dimensional display", "text": "The fundamental idea behind the three-dimensional display is to present the user with a perspective image which changes as he moves. The retinal image of the real objects which we see is, after all, only two-dimensional. Thus if we can place suitable two-dimensional images on the observer's retinas, we can create the illusion that he is seeing a three-dimensional object. Although stereo presentation is important to the three-dimensional illusion, it is less important than the change that takes place in the image when the observer moves his head. The image presented by the three-dimensional display must change in exactly the way that the image of a real object would change for similar motions of the user's head. Psychologists have long known that moving perspective images appear strikingly three-dimensional even without stereo presentation; the three-dimensional display described in this paper depends heavily on this \"kinetic depth effect.\""} {"_id": "d1baa7bf1dd81422c86954447b8ad570539f93be", "title": "Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR", "text": "Although Augmented Reality technology was first developed over forty years ago, there has been little survey work giving an overview of recent research in the field. This paper reviews the ten-year development of the work presented at the ISMAR conference and its predecessors with a particular focus on tracking, interaction and display research. It provides a roadmap for future augmented reality research which will be of great value to this relatively young field, and also for helping researchers decide which topics should be explored when they are beginning their own studies in the area."} {"_id": "11385b86db4e49d85abaf59940c536aa17ff44b0", "title": "Dynamic program slicing methods", "text": "A dynamic program slice is that part of a program that \u2018\u2018affects\u2019\u2019 the computation of a variable of interest during program execution on a specific program input. 
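To make the dynamic-slice definition above concrete, a toy example helps; this is purely illustrative, since real dynamic slicers work on instrumented execution traces rather than source comments:

```python
# Toy program to illustrate a dynamic slice (illustrative only, not a slicing tool).
def f(n):
    x = 0          # (1)
    y = 0          # (2)
    if n > 0:      # (3)
        x = n * 2  # (4)
    else:
        y = n * 3  # (5)
    return x       # (6)

# A static slice on the returned x keeps lines (1), (3), (4), (6): line (5) can
# never affect x. A dynamic slice for the input n = -1 and variable x at (6) is
# smaller still: the branch at (3) took the else path, so (4) never executed,
# and only (1), (3), (6) affected the value of x on this run.
print(f(-1))  # 0
```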
Dynamic program slicing refers to a collection of program slicing methods that are based on program execution and may significantly reduce the size of a program slice because run-time information, collected during program execution, is used to compute program slices. Dynamic program slicing was originally proposed only for program debugging, but its application has been extended to program comprehension, software testing, and software maintenance. Different types of dynamic program slices, together with algorithms to compute them, have been proposed in the literature. In this paper we present a classification of existing dynamic slicing methods and discuss the algorithms to compute dynamic slices. In the second part of the paper, we compare the existing methods of dynamic slice computation. \u00a9 1998 Elsevier Science B.V. All rights reserved."} {"_id": "63ad6c788f36e38e615abaadf29279c8e42f0742", "title": "Group Linguistic Bias Aware Neural Response Generation", "text": "For practical chatbots, one of the essential factors for improving user experience is the capability of customizing the talking style of the agents, that is, to make chatbots provide responses meeting users\u2019 preferences for language styles, topics, etc. To address this issue, this paper proposes to incorporate linguistic biases, which are implicitly involved in the conversation corpora generated by human groups on Social Network Services (SNS), into the encoder-decoder based response generator. By attaching a specially designed neural component to dynamically control the impact of linguistic biases in response generation, a Group Linguistic Bias Aware Neural Response Generation (GLBA-NRG) model is eventually presented. The experimental results on the dataset from the Chinese SNS show that the proposed architecture outperforms the current response generating models by producing both meaningful and vivid responses with customized styles."} {"_id": "ce1c2fa34fa42b1e6f2daa47b6accb5f78c2706b", "title": "Entity resolution using inferred relationships and behavior", "text": "We present a method for entity resolution that infers relationships between observed identities and uses those relationships to aid in mapping identities to underlying entities. We also introduce the idea of using graphlets for entity resolution. Graphlets are collections of small graphs that can be used to characterize the \u201crole\u201d of a node in a graph. The idea is that graphlets can provide a richer set of features to characterize identities. We validate our method on standard author datasets, and we further evaluate our method using data collected from Twitter. We find that inferred relationships and graphlets are useful for entity resolution."} {"_id": "a934b9519dd6df9378b4f0c8fff29baebf36d6e5", "title": "A Hidden Semi-Markov Model-Based Speech Synthesis System", "text": "Recently, a statistical speech synthesis system based on the hidden Markov model (HMM) has been proposed. In this system, spectrum, excitation, and duration of human speech are modeled simultaneously by context-dependent HMMs, and speech parameter vector sequences are generated from the HMMs themselves. This system defines a speech synthesis problem in a generative model framework and solves it using the maximum likelihood (ML) criterion. However, there is an inconsistency: although state duration models are explicitly used in the synthesis part of the system, they have not been incorporated in its training part.
This inconsistency may degrade the naturalness of synthesized speech. In the present paper, a statistical speech synthesis system based on a hidden semi-Markov model (HSMM), which can be viewed as an HMM with explicit state duration models, is developed and evaluated. The use of HSMMs allows us to incorporate the state duration models explicitly not only in the synthesis part but also in the training part of the system and resolves the inconsistency in the HMM-based speech synthesis system. Subjective listening test results show that the use of HSMMs improves the naturalness of synthesized speech. key words: hidden Markov model, hidden semi-Markov model, HMM-based speech synthesis"} {"_id": "30ffc7c6aab3bbd1f5af69fb97a7d151509d0a52", "title": "Estimating mobile application energy consumption using program analysis", "text": "Optimizing the energy efficiency of mobile applications can greatly increase user satisfaction. However, developers lack viable techniques for estimating the energy consumption of their applications. This paper proposes a new approach that is lightweight in terms of its developer requirements and provides fine-grained estimates of energy consumption at the code level. It achieves this using a novel combination of program analysis and per-instruction energy modeling. In evaluation, our approach is able to estimate energy consumption to within 10% of the ground truth for a set of mobile applications from the Google Play store. Additionally, it provides useful and meaningful feedback to developers that helps them to understand application energy consumption behavior."} {"_id": "855972b98b09ffb4ada4c3b933d2c848e8e72d6d", "title": "Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities", "text": "This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents some representative Cloud platforms, especially those developed in industry, along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the 3rd generation Aneka enterprise Grid technology; reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment along with pointers to future community research; and concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision."} {"_id": "1f377bd3499defddad2a1b039febcb6d034a1168", "title": "Cogeneration of mechanical, electrical, and software designs for printable robots from structural specifications", "text": "Designing and fabricating new robotic systems is typically limited to experts, requiring an engineering background, expensive tools, and considerable time. In contrast, to facilitate everyday users in developing custom robots for personal use, this work presents a new system to easily create printable foldable robots from high-level structural specifications. A user merely needs to select electromechanical components from a library of basic building blocks and pre-designed mechanisms, then connect them to define custom robot assemblies.
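The per-instruction energy modeling mentioned in the energy-estimation abstract above can be sketched as a weighted count over an execution trace; the opcode names and nanojoule costs below are hypothetical, not the paper's calibrated model:

```python
# Hypothetical per-instruction energy costs in nanojoules -- illustrative numbers
# only, not measured values from any real device or from the paper.
COST_NJ = {"load": 1.2, "store": 1.4, "alu": 0.3, "branch": 0.5, "call": 2.0}

def estimate_energy(trace):
    """Sum per-instruction costs over an execution trace of opcode names."""
    return sum(COST_NJ[op] for op in trace)

# A short made-up trace for one loop iteration:
trace = ["load", "alu", "alu", "store", "branch"]
print(f"{estimate_energy(trace):.1f} nJ per iteration")  # 3.7 nJ
```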
The system then generates complete mechanical drawings suitable for fabrication, instructions for the assembly of electronics, and software to control and drive the final robot. Several robots designed in this manner demonstrate the ability and versatility of this process."} {"_id": "30445e21ab927234034306acb15e1cf3a4142e0c", "title": "Characterization of silicone rubber based soft pneumatic actuators", "text": "Conventional pneumatic actuators have been a popular choice due to their decent force/torque output. Nowadays, a new generation of pneumatic actuators made out of highly compliant elastomers, which we call soft pneumatic actuators (SPAs), is drawing increasing attention due to their ease of fabrication, high customizability and innate softness. However, no effective method has been presented to characterize and understand these actuators, for example to measure the force and torque output, range of motion and speed of actuation. In this work, we present two types of SPAs: bending and rotary actuators. In addition, we have developed two measurement setups to characterize actuators of different geometries. The measured force/torque outputs of different actuators are presented and analyzed. Step responses to given pressure inputs are presented and discussed. A simple model is presented to provide physical insight into the observed behavior of the soft actuators. This work provides the basis for designing customized SPAs with application-specific requirements."} {"_id": "cb4c33639dbfdee7254dfeb486b7764b2e9aa358", "title": "Numerical Recipes 3rd Edition: The Art of Scientific Computing", "text": "numerical recipes 3rd edition the art of scientific computing"} {"_id": "19b6edcfd9b82d5d3538d8950092fe87061bf35a", "title": "A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets", "text": "Traditional rough set
theory is mainly used to extract rules from and reduce attributes in databases in which attributes are characterized by partitions, while covering rough set theory, a generalization of traditional rough set theory, does the same yet characterizes attributes by covers. In this paper, we propose a way to reduce the attributes of covering decision systems, which are databases characterized by covers. First, we define consistent and inconsistent covering decision systems and their attribute reductions. Then, we state the sufficient and the necessary conditions for reduction. Finally, we use a discernibility matrix to design algorithms that compute all the reducts of consistent and inconsistent covering decision systems. Numerical tests on four public data sets show that the proposed attribute reductions of covering decision systems accomplish better classification performance than those of traditional rough sets. \u00a9 2007 Elsevier Inc. All rights reserved."} {"_id": "c1972f1c721f122185cf893ed8530f2425fe32b1", "title": "Recognizing tactic patterns in broadcast basketball video using player trajectory", "text": "The explosive growth of sports fandom inspires much research on manifold sports video analyses and applications. The audience, sports fans, and even professionals require more than traditional highlight extraction or semantic summarization. Computer-assisted sports tactic analysis is inevitably in urgent demand. Recognizing tactic patterns in broadcast basketball video is a challenging task due to its complicated scenes, varied camera motion, frequent occlusions between players, etc. In basketball games, a screen means that an offensive player performs a blocking move by standing beside or behind a defender to free a teammate to shoot, to receive a pass, or to drive in for scoring. In this paper, we propose a screen-strategy recognition system capable of detecting and classifying screen patterns in basketball video. The proposed system automatically detects the court lines for camera calibration, tracks players, and discriminates the offensive/defensive team. Player trajectories are calibrated to the real-world court model for screen pattern recognition. Our experiments on broadcast basketball videos show promising results. Furthermore, the extracted player trajectories and the recognized screen patterns visualized on a court model indeed assist the coach/players or the fans in comprehending the tactics executed in basketball games informatively and efficiently. \u00a9 2012 Elsevier Inc. All rights reserved."} {"_id": "13dd9ab6da60e3fff3a75e2b22017da771c80da1", "title": "Integration of it governance and security risk management: A systematic literature review", "text": "GRC is an umbrella acronym covering the three disciplines of governance, risk management and compliance. In this context, IT GRC is the subset of GRC dealing with the IT aspects of GRC. The main challenge of GRC is to have an approach to the three domains that is as integrated as possible. The objective of our paper is to study one facet of IT GRC: the links and integration between IT governance and risk management, which we consider to be the least integrated today. To do so, the method followed in this paper is a systematic literature review, in order to identify the existing research works in this field.
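For intuition, the discernibility-matrix construction used in the covering rough set abstract above has a simple classical (partition-based) analogue; the sketch below shows only that classical notion, not the covering generalization, and the toy data are made up:

```python
from itertools import combinations

def discernibility_matrix(objects, attrs, decision):
    """Classical discernibility matrix: for each pair of objects with different
    decisions, record the attributes on which they differ. Any reduct must
    intersect every non-empty entry."""
    matrix = {}
    for i, j in combinations(range(len(objects)), 2):
        if decision[i] != decision[j]:
            matrix[(i, j)] = {a for a in attrs if objects[i][a] != objects[j][a]}
    return matrix

objects = [{"color": "red", "size": "big"},
           {"color": "red", "size": "small"},
           {"color": "blue", "size": "big"}]
decision = ["yes", "no", "no"]
print(discernibility_matrix(objects, ["color", "size"], decision))
# {(0, 1): {'size'}, (0, 2): {'color'}} -> {color, size} is the only reduct here.
```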
The resulting contribution of the paper is a set of recommendations established for practitioners and for researchers on how to better deal with the integration between IT governance and risk management."} {"_id": "83e351642cc1ff946af9caed3bb33f65953a6156", "title": "Emotional intelligence: the most potent factor in the success equation.", "text": "Star performers can be differentiated from average ones by emotional intelligence. For jobs of all kinds, emotional intelligence is twice as important as a person's intelligence quotient and technical skills combined. Excellent performance by top-level managers adds directly to a company's \"hard\" results, such as increased profitability, lower costs, and improved customer retention. Those with high emotional intelligence enhance \"softer\" results by contributing to increased morale and motivation, greater cooperation, and lower turnover. The author discusses the five components of emotional intelligence, its role in facilitating organizational change, and ways to increase an organization's emotional intelligence."} {"_id": "67aafb27e6b0d970f1d1bab9b523b1d6569609d9", "title": "A comparative analysis of biclustering algorithms for gene expression data", "text": "The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of other algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem by evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters."} {"_id": "8f8c19ed9cf6ae02e3dbf1b7dadfd8f6660a6119", "title": "Digital signature based on PlayGamal algorithm", "text": "To keep the security and integrity of digital messages, cryptographic methods can be applied. A digital signature is a vital piece of digital data, so its secrecy must be maintained during the distribution and storage of the data. This study aims to combine a modern cryptographic method with the traditional ElGamal algorithm; the combination is called the PlayGamal cipher algorithm. Moreover, the method will be implemented as an application for digital signatures.
The proposed method will be compared with the existing ElGamal algorithm to evaluate its performance during the encryption and decryption processes."} {"_id": "ea5cf5a6d9dcde96999e215f3f0196cd468f2bf0", "title": "Chiron: a parallel engine for algebraic scientific workflows", "text": "Large-scale scientific experiments based on computer simulations are typically modeled as scientific workflows, which eases the chaining of different programs. These scientific workflows are defined, executed, and monitored by scientific workflow management systems (SWfMS). As these experiments manage large amounts of data, it becomes critical to execute them in high-performance computing environments, such as clusters, grids, and clouds. However, few SWfMS provide parallel support. The ones that do so are usually labor-intensive for workflow developers and have limited primitives to optimize workflow execution. To address these issues, we developed a workflow algebra to specify and enable the optimization of parallel execution of scientific workflows. In this paper, we show how the workflow algebra is efficiently implemented in Chiron, an algebra-based parallel scientific workflow engine. Chiron has a unique native distributed provenance mechanism that enables runtime queries in a relational database. We developed two studies to evaluate the performance of our algebraic approach implemented in Chiron; the first study compares Chiron with different approaches, whereas the second one evaluates the scalability of Chiron. By analyzing the results, we conclude that Chiron is efficient in executing scientific workflows, with the benefits of declarative specification and runtime provenance support. Copyright \u00a9 2013 John Wiley & Sons, Ltd."} {"_id": "934e0abba8b3cff110d6272615909440f5b92763", "title": "Change from baseline and analysis of covariance revisited.", "text": "The case for preferring analysis of covariance (ANCOVA) to the simple analysis of change scores (SACS) has often been made. Nevertheless, claims continue to be made that analysis of covariance is biased if the groups are not equal at baseline. If the required equality were in expectation only, this would permit the use of ANCOVA in randomized clinical trials but not in observational studies. The discussion is related to Lord's paradox. In this note, it is shown, however, that it is not a necessary condition for groups to be equal at baseline, not even in expectation, for ANCOVA to provide unbiased estimates of treatment effects. It is also shown that although many situations can be envisaged where ANCOVA is biased, it is very difficult to imagine circumstances under which SACS would then be unbiased and a causal interpretation could be made."} {"_id": "fac9e24c9dc285b71439c483698e423202bfeb43", "title": "Determinants of Behavioral Intention to Mobile Banking Case From Yemen", "text": "Nowadays, new tools and technologies are emerging rapidly. They are often used cross-culturally before being tested for suitability and validity. However, they must be validated to ensure that they work with all users, not just some of them. Mobile banking (as a new technology tool) has been introduced on the assumption that it performs well with respect to authentication among all members of society. Our research aimed to evaluate user acceptance of mobile banking authentication, through the Technology Acceptance Model (TAM), in Arab countries, namely Yemen. The results confirm the previous studies that have shown the importance of perceived ease of use and perceived usefulness.
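For context, the classical ElGamal signature scheme that PlayGamal builds on can be sketched in a few lines; the parameters below are toy-sized and insecure, and this is not the modified PlayGamal scheme itself:

```python
import hashlib
import math
import random

# Toy ElGamal signatures over a small prime -- for illustration only.
p, g = 467, 2          # small public parameters (assumed for the example)
x = 127                # private key
y = pow(g, x, p)       # public key

def h(msg: bytes) -> int:
    """Hash the message into Z_{p-1}."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % (p - 1)

def sign(msg: bytes):
    while True:
        k = random.randrange(2, p - 1)      # ephemeral key, coprime to p-1
        if math.gcd(k, p - 1) == 1:
            break
    r = pow(g, k, p)
    s = ((h(msg) - x * r) * pow(k, -1, p - 1)) % (p - 1)
    return r, s

def verify(msg: bytes, r: int, s: int) -> bool:
    if not 0 < r < p:
        return False
    # Valid iff g^H(m) == y^r * r^s (mod p).
    return pow(g, h(msg), p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(b"hello")
print(verify(b"hello", r, s), verify(b"tampered", r, s))  # True False
```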
Furthermore, perceived ease of use plays a determinant role. Keywords: technology acceptance models; mobile banking."} {"_id": "d29ef3a422b589c33e123d37d66c97316f2c1c92", "title": "Reliability and validity of the Evaluation Tool of Children's Handwriting-Cursive (ETCH-C) using the general scoring criteria.", "text": "OBJECTIVES\nTo determine the reliability and aspects of validity of the Evaluation Tool of Children's Handwriting-Cursive (ETCH-C; Amundson, 1995), using the general scoring criteria, when assessing children who use alternative writing scripts.\n\n\nMETHOD\nChildren in Years 5 and 6 with handwriting problems and a group of matched control participants from their respective classrooms were assessed with the ETCH-C twice, 4 weeks apart.\n\n\nRESULTS\nTotal Letter scores were most reliable; more variability should be expected for Total Word scores. Total Numeral scores showed unacceptable reliability levels and are not recommended. We found good discriminant validity for Letter and Word scores and established cutoff scores to distinguish children with and without handwriting dysfunction (Total Letter <90%, Total Word <85%).\n\n\nCONCLUSION\nThe ETCH-C, using the general scoring criteria, is a reliable and valid test of handwriting for children using alternative scripts."} {"_id": "76d4f741a0321bad1f080a6c4d41996a381d3c80", "title": "Design, Analysis, and Optimization of Ironless Stator Permanent Magnet Machines", "text": "This paper presents a methodology for the design, analysis, and graphical optimization of ironless brushless permanent magnet machines primarily for generator applications. Magnetic flux in this class of electromagnetic machine tends to be 3-D due to the lack of conventional iron structures and the absence of a constrained magnetic flux path. The proposed methodology includes comprehensive geometric, magnetic and electrical dimensioning followed by detailed 3-D finite element (FE) modeling of a base machine for which parameters are determined. These parameters are then graphically optimized within sensible volumetric and electromagnetic constraints to arrive at improved design solutions. This paper considers an ironless machine design to validate the 3-D FE model and to optimize power conversion for the case of a low-speed, ironless stator generator. The machine configuration investigated in this paper has a concentric arrangement of the rotor and the stator, solenoid-shaped coils, and a simple mechanical design considered for ease of manufacture and maintenance. Using performance and material effectiveness as the overriding optimization criteria, this paper suggests optimal design configurations featuring two different winding arrangements, i.e., radially and circumferentially mounted. Performance and material effectiveness of the studied ironless stator designs are compared to published ironless machine configurations."} {"_id": "852c633882927affd1a951e81e6e30251bb40867", "title": "Radiation Efficiency Measurement Method for Passive UHF RFID Dipole Tag Antennas", "text": "Concurrently with the continuously developing radio frequency identification (RFID) technology, new types of tag antenna materials and structures are emerging to fulfill the requirements encountered within the new application areas. In this work, a radiation efficiency measurement method is developed and verified for passive ultra-high frequency (UHF) RFID dipole tag antennas.
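The quantity being measured in the RFID abstract above is the standard antenna radiation efficiency; as a reference point, under a series-resistance model it reduces to a one-line ratio (the example values below are made up, and the paper's actual measurement procedure is not reproduced here):

```python
def radiation_efficiency(R_rad: float, R_loss: float) -> float:
    """Textbook definition for a series-resistance antenna model:
    eta = P_rad / (P_rad + P_loss) = R_rad / (R_rad + R_loss)."""
    return R_rad / (R_rad + R_loss)

# A sewn dipole whose conductive thread adds 20 ohms of loss to a 73-ohm
# radiation resistance (hypothetical values) radiates ~78% of the accepted power.
print(radiation_efficiency(73.0, 20.0))  # ~0.785
```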
In addition, the measurement method is applied to measure the radiation efficiency of sewn dipole tag antennas for wearable body-centric wireless communication applications. The information acquired from the measurements can be used to characterize the losses of tag antenna materials and structures and to further improve and optimize tag antenna performance and reliability."} {"_id": "1886edb4e771c1c0aa7bae360d7f3de23ac4ac8e", "title": "Failure Trends in a Large Disk Drive Population", "text": "It is estimated that over 90% of all new information produced in the world is being stored on magnetic media, most of it on hard disk drives. Despite their importance, there is relatively little published work on the failure patterns of disk drives, and the key factors that affect their lifetime. Most available data are either based on extrapolation from accelerated aging experiments or from relatively modest sized field studies. Moreover, larger population studies rarely have the infrastructure in place to collect health signals from components in operation, which is critical information for detailed failure analysis. We present data collected from detailed observations of a large disk drive population in a production Internet services deployment. The population observed is many times larger than that of previous studies. In addition to presenting failure statistics, we analyze the correlation between failures and several parameters generally believed to impact longevity. Our analysis identifies several parameters from the drive\u2019s self-monitoring facility (SMART) that correlate highly with failures. Despite this high correlation, we conclude that models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures. Surprisingly, we found that temperature and activity levels were much less correlated with drive failures than previously reported."} {"_id": "517c5cd1dbafb3cfa0eea4fc78d0b5cd085209b2", "title": "DRAM errors in the wild: a large-scale field study", "text": "Errors in dynamic random access memory (DRAM) are a common form of hardware failure in modern compute clusters. Failures are costly both in terms of hardware replacement costs and service disruption. While a large body of work exists on DRAM in laboratory conditions, little has been reported on real DRAM failures in large production clusters. In this paper, we analyze measurements of memory errors in a large fleet of commodity servers over a period of 2.5 years. The collected data covers multiple vendors, DRAM capacities and technologies, and comprises many millions of DIMM days.\n The goal of this paper is to answer questions such as the following: How common are memory errors in practice? What are their statistical properties? How are they affected by external factors, such as temperature and utilization, and by chip-specific factors, such as chip density, memory technology and DIMM age?\n We find that DRAM error behavior in the field differs in many key aspects from commonly held assumptions. For example, we observe DRAM error rates that are orders of magnitude higher than previously reported, with 25,000 to 70,000 errors per billion device hours per Mbit and more than 8% of DIMMs affected by errors per year. We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which previous work suspects to be the dominant error mode.
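The error-rate unit quoted above (errors per billion device-hours per Mbit) is a FIT-like normalization that is easy to reproduce; the fleet numbers in this example are invented:

```python
def errors_per_billion_device_hours(errors: int, devices: int,
                                    hours: float, mbits: float) -> float:
    """Normalize an observed error count to errors per 10^9 device-hours per Mbit,
    the unit used in the DRAM field study above."""
    return errors / (devices * hours * mbits) * 1e9

# Made-up fleet: 10,000 DIMMs of 1024 Mbit observed for one year (8760 h)
# with 2.5 million errors -> ~27,900 per billion device-hours per Mbit,
# which falls inside the 25,000-70,000 range reported in the study.
print(errors_per_billion_device_hours(2_500_000, 10_000, 8760.0, 1024.0))
```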
We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a surprisingly small effect on error behavior in the field, when taking all other factors into account. Finally, contrary to common fears, we do not observe any indication that newer generations of DIMMs have worse error behavior."} {"_id": "663e064469ad91e6bda345d216504b4c868f537b", "title": "A scalable, commodity data center network architecture", "text": "Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance.\n In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP."} {"_id": "07add9c98a979e732cfa215c901adb1975f3f43a", "title": "The case for RAMClouds: scalable high-performance storage entirely in DRAM", "text": "Disk-oriented approaches to online storage are becoming increasingly problematic: they do not scale gracefully to meet the needs of large-scale Web applications, and improvements in disk capacity have far outstripped improvements in access latency and bandwidth. This paper argues for a new approach to datacenter storage called RAMCloud, where information is kept entirely in DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers. We believe that RAMClouds can provide durable and available storage with 100-1000x the throughput of disk-based systems and 100-1000x lower access latency. The combination of low latency and large scale will enable a new breed of data-intensive applications."} {"_id": "26b730317a906882754a1e25c263c12eb2613132", "title": "Sketching in circuits: designing and building electronics on paper", "text": "The field of new methods and techniques for building electronics is quickly growing - from research in new materials for circuit building, to modular toolkits, and more recently to untoolkits, which aim to incorporate more off-the-shelf parts. However, the standard media for circuit design and construction remain the breadboard, protoboard, and printed circuit board (PCB). As an alternative, we introduce a method in which circuits are hand-made on ordinary paper substrates, connected with conductive foil tape and off-the-shelf circuit components, with the aim of supporting the durability, scalability, and accessibility needs of novice and expert circuit builders alike.
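The scalability claim in the data center networking paper above follows from a simple count: a fat-tree built from identical k-port switches supports k^3/4 hosts at full bisection bandwidth, which is easy to verify:

```python
def fat_tree_hosts(k: int) -> int:
    """A fat-tree of identical k-port switches supports k^3 / 4 hosts at full
    bisection bandwidth: k pods, each with (k/2)^2 hosts, plus (k/2)^2 core switches."""
    assert k % 2 == 0, "port count must be even"
    return k ** 3 // 4

for k in (4, 24, 48):
    print(k, fat_tree_hosts(k))  # 4 -> 16, 24 -> 3456, 48 -> 27648
```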
We also used electrified notebooks to investigate how the circuit design and build process would be affected by the constraints and affordances of the bound book. Our ideas and techniques were evaluated through a series of workshops, through which we found our methods supported a wide variety of approaches and results - both technical and expressive - to electronics design and construction."} {"_id": "f285a3075faa90ce6c1a76719cb5867406d3e07a", "title": "Entailment-based Fully Automatic Technique for Evaluation of Summaries", "text": "We propose a fully automatic technique for evaluating text summaries without the need to prepare the gold standard summaries manually. Standard and popular summary evaluation techniques or tools are not fully automatic; they all need some manual process or a manual reference summary. Using recognizing textual entailment (TE), automatically generated summaries can be evaluated completely automatically without any manual preparation process. We use a TE system based on a combination of lexical entailment module, lexical distance module, Chunk module, Named Entity module and syntactic text entailment (TE) module. The documents are used as the text (T) and the summaries of these documents are taken as the hypothesis (H). Therefore, the more of the document's information is entailed by its summary, the better the summary. Compared with the ROUGE 1.5.5 evaluation scores over the TAC 2008 (formerly DUC, conducted by NIST) dataset, the proposed evaluation technique predicts the ROUGE scores with an accuracy of 98.25% with respect to ROUGE-2 and 95.65% with respect to ROUGE-SU4."} {"_id": "f2561f0f08ed82921a0d6bb9537adb47b67b8ba5", "title": "Hemispheric asymmetry reduction in older adults: the HAROLD model.", "text": "A model of the effects of aging on brain activity during cognitive performance is introduced. The model is called HAROLD (hemispheric asymmetry reduction in older adults), and it states that, under similar circumstances, prefrontal activity during cognitive performances tends to be less lateralized in older adults than in younger adults. The model is supported by functional neuroimaging and other evidence in the domains of episodic memory, semantic memory, working memory, perception, and inhibitory control. Age-related hemispheric asymmetry reductions may have a compensatory function or they may reflect a dedifferentiation process. They may have a cognitive or neural origin, and they may reflect regional or network mechanisms. The HAROLD model is a cognitive neuroscience model that integrates ideas and findings from psychology and neuroscience of aging."} {"_id": "361367838ee5d9d5c9a77c69c1c56b1c309ab236", "title": "Salient Object Detection: A Survey", "text": "Detecting and segmenting salient objects in natural scenes, often referred to as salient object detection, has attracted a lot of interest in computer vision. While many models have been proposed and several applications have emerged, a deep understanding of the achievements and open issues is still lacking. We aim to provide a comprehensive review of the recent progress in salient object detection and situate this field among other closely related areas such as generic scene segmentation, object proposal generation, and saliency for fixation prediction. Covering 228 publications, we survey i) roots, key concepts, and tasks, ii) core techniques and main modeling trends, and iii) datasets and evaluation metrics in salient object detection.
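Two of the evaluation metrics that recur in the salient object detection literature surveyed here, mean absolute error and the F-measure with the conventional beta^2 = 0.3 weighting, are simple enough to state in code. The sketch below is a generic illustration under those assumptions, not the survey's reference implementation.

```python
import numpy as np

def mae(saliency: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a [0,1] saliency map and a binary mask."""
    return float(np.abs(saliency - gt).mean())

def f_beta(saliency: np.ndarray, gt: np.ndarray, thresh: float = 0.5,
           beta2: float = 0.3) -> float:
    """F-measure after thresholding; beta^2 = 0.3 is the conventional choice."""
    pred = saliency >= thresh
    tp = float(np.logical_and(pred, gt > 0).sum())
    precision = tp / max(pred.sum(), 1)
    recall = tp / max((gt > 0).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

saliency = np.random.rand(64, 64)              # toy saliency map
gt = (saliency > 0.7).astype(float)            # toy ground-truth mask
print(mae(saliency, gt), f_beta(saliency, gt))
```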
We also discuss open problems such as evaluation metrics and dataset bias in model performance and suggest future research directions."} {"_id": "208a59ad8612c7ac0ee76b7eb55d6b17a237ce32", "title": "How to stop spread of misinformation on social media: Facebook plans vs. right-click authenticate approach", "text": "One of the key features of social networks is that users are able to share information, and through cascades of sharing information, this information may reach a large number of individuals. The high availability of user-provided contents on online social media facilitates the aggregation of people around shared beliefs, interests, worldviews and narratives. With a lack of means to verify information, social media has been accused of becoming a hotbed for sharing of misinformation. Facebook, as one of the largest social networking services, has been facing widespread criticism on how its newsfeed algorithm is designed, thus amplifying dissemination of misinformation. In late 2016, Facebook revealed plans to address fake news on Facebook newsfeeds. In this work, we study the methods Facebook has proposed to combat the spread of misinformation and compare them with our previously proposed approach called \u2018Right-click Authenticate\u2019. By analyzing the Business Process Modeling and Notation of both approaches, this paper suggests some key weaknesses and improvements social media companies need to consider when tackling the spread of misinformation online."} {"_id": "833de6c09b38a679ed870ad3a7ccfafc8de010e1", "title": "Instantaneous ego-motion estimation using multiple Doppler radars", "text": "The estimation of the ego-vehicle's motion is a key capability for advanced driver assistance systems and mobile robot localization. The following paper presents a robust algorithm using radar sensors to instantly determine the complete 2D motion state of the ego-vehicle (longitudinal, lateral velocity and yaw rate). It evaluates the relative motion between at least two Doppler radar sensors and their received stationary reflections (targets). Based on the distribution of their radial velocities across the azimuth angle, non-stationary targets and clutter are excluded. The ego-motion and its corresponding covariance matrix are estimated. The algorithm does not require any preprocessing steps such as clustering or clutter suppression and does not contain any model assumptions. The sensors can be mounted at any position on the vehicle. A common field of view is not required, avoiding target association in space. As an additional benefit, all targets are instantly labeled as stationary or non-stationary."} {"_id": "4b2c2633246ba1fafe8dedace9b168ed27f062f3", "title": "The capacity of low-density parity-check codes under message-passing decoding", "text": "In this paper we present a general method for determining the capacity of low-density parity-check codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.)
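For the binary erasure channel, determining the capacity of a regular ensemble under message passing reduces to a one-line density-evolution recursion. The sketch below is a minimal illustration of that special case, not the paper's general algorithm; the (3,6)-regular threshold of roughly 0.429 is a well-known reference value.

```python
def bec_density_evolution(eps: float, dv: int, dc: int,
                          iters: int = 2000, tol: float = 1e-12) -> float:
    """Residual erasure probability after message-passing decoding on the BEC."""
    x = eps
    for _ in range(iters):
        x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

def threshold(dv: int, dc: int) -> float:
    """Largest erasure rate for which decoding drives the erasures to ~zero."""
    lo, hi = 0.0, 1.0
    for _ in range(40):                  # bisection on the channel parameter
        mid = (lo + hi) / 2
        if bec_density_evolution(mid, dv, dc) < 1e-9:
            lo = mid
        else:
            hi = mid
    return lo

print(threshold(3, 6))   # ~0.4294 for the (3,6)-regular ensemble
```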
Conversely, transmitting at rates above this capacity, the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary symmetric channel and a binary message passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined. Index Terms \u2013 low-density parity-check codes, turbo codes, message-passing decoders, iterative decoding, belief-propagation, turbo decoding"} {"_id": "67861b521b2d3eaf70e8e3ba1cb7b7d66b7fdacd", "title": "Model-Based Wheel Slip Detection for Outdoor Mobile Robots", "text": "This paper introduces a model-based approach to estimating longitudinal wheel slip and detecting immobilized conditions of autonomous mobile robots operating on outdoor terrain. A novel tire traction/braking model is presented and used to calculate vehicle dynamic forces in an extended Kalman filter framework. Estimates of external forces and robot velocity are derived using measurements from wheel encoders, IMU, and GPS. Weak constraints are used to constrain the evolution of the resistive force estimate based upon physical reasoning. Experimental results show the technique accurately and rapidly detects robot immobilization conditions while providing estimates of the robot's velocity during normal driving. Immobilization detection is shown to be robust to uncertainty in tire model parameters. Accurate immobilization detection is demonstrated in the absence of GPS, indicating the algorithm is applicable for both terrestrial applications and space robotics."} {"_id": "b8e70e21db2918c9932f0ca27c98ee7da168223f", "title": "Robot Modeling and Control [Book Review]", "text": "The field of robotics began in the 1960s and 1970s with the design and control of general-purpose robotic manipulators. During the 1980s, the expense of integrating robotics into manufacturing lines and the difficulty of performing seemingly easy tasks, such as manipulating flexible objects or washing a window, led to disenchantment among many researchers and funding agencies. Since the 1990s the popularity of robotics has resurged with many new exciting areas of application, particularly in the medical, defense, and service sectors. Robotics has grown from a field that is primarily concerned with problems such as the manipulation of rigid bodies in well-structured environments to a field that has tackled challenging problems such as extraterrestrial exploration and remote minimally invasive surgery. I began my involvement in the field of robotics as a graduate student studying control theory. My project was to design a control algorithm to induce stable walking in a biped robot. As with many control problems, before formulating a solution I had to determine a suitable model for the plant. Since I was an electrical engineering student with little background in the kinematics and dynamics of rigid-body mechanical systems, this task was daunting.
With some guidance from a fellow student, I fumbled my way through Lagrange\u2019s method to derive the robot\u2019s equations of motion. I had little idea at the time why forming the Lagrangian and taking certain derivatives resulted in the right equations. For insight I was pointed to various texts on classical mechanics. While these texts provided the desired insight, I was still left with questions: Given an arbitrary system, how do I systematically construct the Lagrangian so that I can derive the system\u2019s equations of motion? What are the most effective techniques for robot control? What special aspects of the robot\u2019s dynamics should be considered in developing controllers? Five years after my introduction to the field of robotics, I now know that research in the field goes well beyond the dynamics and control of traditional manipulators. The concepts required to analyze and control manipulators, however, are still needed in modern robotics research. These fundamental concepts, which are addressed by many introductory texts on robotics, are the following: 1) Forward kinematics: Given joint displacements, determine the position and orientation of the endeffector with respect to a specified reference frame. 2) Inverse kinematics: Given the position and orientation of the end-effector frame relative to a specified reference frame, determine the joint variables that give rise to this configuration. 3) Differential kinematics: Given the manipulator\u2019s joint rates, determine the linear and angular velocities of the end-effector frame relative to a fixed reference frame. 4) Statics: Given the forces and moments applied to the manipulator, determine the corresponding joint forces and torques required to maintain equilibrium in the presence of applied forces and moments. 5) Dynamics: Determine the equations of motion of the system. Specifically, determine the differential equations relating the applied forces and torques to the joint variables, joint rates, and joint accelerations. 6) Trajectory generation: Given initial and final configurations of the manipulator, find a trajectory of the joint variables connecting these two configurations. 7) Control: Given desired joint trajectories or forces between the end-effector and the environment, determine joint torques and forces that effect these joint trajectories and forces."} {"_id": "217a6a618e87c6a709bff0698bc15cf4bfa789a4", "title": "K-modestream algorithm for clustering categorical data streams", "text": "Clustering categorical data streams is a challenging problem because new data points are continuously added to the existing database at a rapid pace and there is no natural order among the categorical values. Recently, some algorithms have been discussed to tackle the problem of clustering the categorical data streams. However, in all these schemes the user needs to pre-specify the number of clusters, which is not trivial and renders them inefficient in the data-stream environment. In this paper, we propose a new clustering algorithm, named k-modestream, which follows the k-modes algorithm paradigm to dynamically cluster categorical data streams. It automatically computes the number of clusters and their initial modes simultaneously at regular time intervals.
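Since k-modestream follows the k-modes paradigm named above, a minimal sketch of the two ingredients it inherits, the matching dissimilarity and the per-attribute mode, may be useful; the paper's automatic cluster-count and seeding logic is not reproduced here.

```python
from collections import Counter
from typing import List, Sequence

def mismatch(a: Sequence[str], b: Sequence[str]) -> int:
    """k-modes dissimilarity: number of attributes on which two records differ."""
    return sum(x != y for x, y in zip(a, b))

def mode_of(cluster: List[Sequence[str]]) -> List[str]:
    """Per-attribute most frequent value, i.e. the cluster 'mode'."""
    return [Counter(column).most_common(1)[0][0] for column in zip(*cluster)]

def assign(records, modes):
    """One k-modes assignment step: each record goes to its nearest mode."""
    return [min(range(len(modes)), key=lambda j: mismatch(r, modes[j]))
            for r in records]

data = [("red", "small"), ("red", "large"), ("blue", "small"), ("blue", "large")]
modes = [list(data[0]), list(data[2])]
print(assign(data, modes), mode_of(data))
```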
We analyse the time complexity of our scheme and perform various experiments using the synthetic and real world datasets to evaluate its efficacy."} {"_id": "46b7a7fe4ad9aa4a734c827409a940ff050e3940", "title": "A 3.5\u20136.8GHz wide-bandwidth DTC-assisted fractional-N all-digital PLL with a MASH \u0394\u03a3 TDC for low in-band phase noise", "text": "We present a digital-to-time converter (DTC)-assisted fractional-N wide-bandwidth all-digital PLL (ADPLL). It employs a MASH \u0394\u03a3 time-to-digital converter (TDC) to achieve low in-band phase noise, and a wide-tuning range digitally-controlled oscillator (DCO). Fabricated in 40nm CMOS, the ADPLL consumes 10.7 mW while outputting 1.73 to 3.38 GHz (after a \u00f72 division) and achieves better than -109 dBc/Hz in-band phase noise and 420 fs rms integrated jitter."} {"_id": "e8e77ec157375343667e002c78d3d3bf73e5fd8a", "title": "Measuring Semantic Similarity in the Taxonomy of WordNet", "text": "This paper presents a new model to measure semantic similarity in the taxonomy of WordNet, using edge-counting techniques. We weigh up our model against a benchmark set by human similarity judgment, and achieve a much improved result compared with other methods: the correlation with average human judgment on a standard 28 word pair dataset is 0.921, which is better than anything reported in the literature and also significantly better than average individual human judgments. As this set has been effectively used for algorithm selection and tuning, we also cross-validate on an independent 37 word pair test set (0.876) and present results for the full 65 word pair superset (0.897)."} {"_id": "1e83848317597969dd07906fe7c3dceddd1737f3", "title": "A Hardware-Assisted Realtime Attack on A5/2 Without Precomputations", "text": "A5/2 is a synchronous stream cipher that is used for protecting GSM communication. Recently, some powerful attacks [2,10] on A5/2 have been proposed. In this contribution we enhance the ciphertext-only attack [2] by Barkan, Biham, and Keller by designing special-purpose hardware for generating and solving the required systems of linear equations. For realizing the LSE solver component, we use an approach recently introduced in [5,6] describing a parallelized hardware implementation of the Gauss-Jordan algorithm. Our hardware-only attacker immediately recovers the initial secret state of A5/2, which is sufficient for decrypting all frames of a session using a few ciphertext frames without any precomputations and memory. More precisely, in contrast to [2] our hardware architecture directly attacks the GSM speech channel (TCH/FS and TCH/EFS). It requires 16 ciphertext frames and completes the attack in about 1 second. With minor changes also input from other GSM channels (e.g., SDCCH/8) can be used to mount the attack."} {"_id": "703b5e7a9a7f4b567cbaec329adce0df504c98fe", "title": "Patterns of Internet and Traditional News Media Use in a Networked Community", "text": "
The growing popularity of the World Wide Web as a source of news raises questions about the future of traditional news media. Is the Web likely to become a supplement to newspapers and television news, or a substitute for these media? Among people who have access to newspapers, television, and the World Wide Web, why do some prefer to use the Web as a source of news, while others prefer traditional news media? Drawing from a survey of 520 undergraduate students at a large public university where Internet use is woven into the fabric of daily life, this study suggests that use of the Web as a news source is positively related with reading newspapers but has no relationship with viewing television news. Members of this community use the Web mainly as a source of entertainment. Patterns of Web and traditional media exposure are examined in light of computer anxiety, desire for control, and political knowledge. This study suggests that even when computer skills and Internet access become more widespread in the general population, use of the World Wide Web as a news source seems unlikely to substantially diminish the use of traditional news media. Will the World Wide Web become a supplement or substitute for traditional news media? As the World Wide Web became popularly accessible only with the advent of Mosaic browser software in 1993, it is still unclear at this early stage of the Web's development how dominant it is likely to become as a provider of daily news to the general public. Yet the rapid spread of on-line news outlets raises profound questions about the future of traditional newspapers and television news programs. The number of newspapers around the world being published \u2026"} {"_id": "46ed0e6077fe5d2a571c99f1dc50f2cd697a6815", "title": "Smart home automation system using Bluetooth technology", "text": "In this paper a low cost and user friendly remote controlled home automation system is presented using an Arduino board, a Bluetooth module, a smartphone, an ultrasonic sensor and a moisture sensor. A smartphone application is used in the suggested system which allows users to control up to 18 devices, including home appliances and sensors, using Bluetooth technology. Nowadays, most conventional home automation systems are designed for special purposes, while the proposed system is a general-purpose home automation system that can easily be implemented in an existing home. The suggested system has more features than conventional home automation systems: an ultrasonic sensor is used for water level detection and a soil moisture sensor for automatic plant irrigation. This paper also describes the hardware and software architecture of the system, future work and scope. The proposed prototype of home automation system is implemented and tested on hardware and it gave the exact and expected results."} {"_id": "dab8b00e5619ceec615b179265cd6d315a97911d", "title": "A two-stage training deep neural network for small pedestrian detection", "text": "In the present paper, we propose a deep network architecture in order to improve the accuracy of pedestrian detection. The proposed method contains a proposal network and a classification network that are trained separately.
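As the continuation below notes, the scores of the two separately trained networks are combined into a final detection score. Here is a toy sketch of one plausible fusion rule; the geometric mean and the numbers are assumptions for illustration, since the abstract does not specify the combination formula.

```python
import numpy as np

def fuse_scores(proposal_scores: np.ndarray, cls_scores: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Combine per-box scores from the two stages (assumed geometric mean)."""
    return proposal_scores ** alpha * cls_scores ** (1.0 - alpha)

proposal = np.array([0.9, 0.6, 0.8])    # first-stage proposal confidences
classifier = np.array([0.7, 0.9, 0.2])  # second-stage classifier confidences
print(fuse_scores(proposal, classifier))
```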
We use a single shot multibox detector (SSD) as a proposal network to generate the set of pedestrian proposals. The proposal network is fine-tuned from a pre-trained network on several pedestrian data sets with a large input size (512 \u00d7 512 pixels) in order to improve detection accuracy for small pedestrians. Then, we use a classification network to classify pedestrian proposals. We then combine the scores from the proposal network and the classification network to obtain better final detection scores. Experiments were conducted on the Caltech test set; compared to other state-of-the-art pedestrian detection methods, the proposed method obtains better results for small pedestrians (30 to 50 pixels in height), with an average miss rate of 42%."} {"_id": "27630637ca68ce01400c8c2407f731ab761ac7da", "title": "Why Have Child Maltreatment and Child Victimization Declined ?", "text": "Various forms of child maltreatment and child victimization declined as much as 40\u201370% from 1993 until 2004, including sexual abuse, physical abuse, sexual assault, homicide, aggravated assault, robbery, and larceny. Other child welfare indicators also improved during the same period, including teen pregnancy, teen suicide, and children living in poverty. This article reviews a wide variety of possible explanations for these changes: demography, fertility and abortion legalization, economic prosperity, increased incarceration of offenders, increased agents of social intervention, changing social norms and practices, the dissipation of the social changes from the 1960s, and psychiatric pharmacology. Multiple factors probably contributed. In particular, economic prosperity, increasing agents of social intervention, and psychiatric pharmacology have advantages over some of the other explanations in accounting for the breadth and timing of the improvements."} {"_id": "31918003360c352fb0750040d163f287894ab547", "title": "Development Process for AUTOSAR-based Embedded System", "text": "Automotive embedded systems have developed rapidly in recent years with the advent of smart cars, electric cars, and so on. They include various value-added systems, for example IPA (Intelligent Parking Assistance), BSW (Blind Spot Warning), LDWS (Lane Departure Warning System), and LKS (Lane Keeping System) - these are ADAS (Advanced Driver Assistance Systems). AUTOSAR (AUTomotive Open System Architecture) is the most notable industrial standard for developing automotive embedded software. AUTOSAR is a partnership of automotive manufacturers and suppliers working together to develop and establish an open industry standard for automotive E/E architectures. In this paper, we will introduce AUTOSAR briefly and demonstrate the result of automotive software LDWS (Lane Detection & Warning System) development."} {"_id": "bfb9ec14d76fa64dab58ba8ee8e7271d0c587d47", "title": "Physical and psychological factors predict outcome following whiplash injury", "text": "Predictors of outcome following whiplash injury are limited to socio-demographic and symptomatic factors, which are not readily amenable to secondary and tertiary intervention. This prospective study investigated the predictive capacity of early measures of physical and psychological impairment on pain and disability 6 months following whiplash injury.
Motor function (ROM; kinaesthetic sense; activity of the superficial neck flexors (EMG) during cranio-cervical flexion), quantitative sensory testing (pressure, thermal pain thresholds, brachial plexus provocation test), sympathetic vasoconstrictor responses and psychological distress (GHQ-28, TSK, IES) were measured in 76 acute whiplash participants. The outcome measure was Neck Disability Index scores at 6 months. Stepwise regression analysis was used to predict the final NDI score. Logistic regression analyses predicted membership to one of the three groups based on final NDI scores (<8 recovered, 10-28 mild pain and disability, >30 moderate/severe pain and disability). Higher initial NDI score (1.007-1.12), older age (1.03-1.23), cold hyperalgesia (1.05-1.58), and acute post-traumatic stress (1.03-1.2) predicted membership to the moderate/severe group. Additional variables associated with higher NDI scores at 6 months on stepwise regression analysis were: ROM loss and diminished sympathetic reactivity. Higher initial NDI score (1.03-1.28), greater psychological distress (GHQ-28) (1.04-1.28) and decreased ROM (1.03-1.25) distinguished subjects with persistent milder symptoms from those who fully recovered. These results demonstrate that both physical and psychological factors play a role in recovery or non-recovery from whiplash injury. This may assist in the development of more relevant treatment methods for acute whiplash."} {"_id": "1642e4592f53b2bd50c8dae0428f703d15a09c12", "title": "Differential privacy for location pattern mining", "text": "One main concern for individuals to participate in the data collection of personal location history records is the disclosure of their location and related information when a user queries for statistical or pattern mining results derived from these records. In this paper, we investigate how to achieve the privacy goal that the inclusion of one's location history in a statistical database with location pattern mining capabilities does not substantially increase one's privacy risk. In particular, we propose a differentially private pattern mining algorithm for interesting geographic location discovery using a region quadtree spatial decomposition to preprocess the location points, followed by applying a density-based clustering algorithm. A differentially private region quadtree is used for both de-noising the spatial domain and identifying the likely geographic regions containing the interesting locations. Then, a differential privacy mechanism is applied to the algorithm outputs, namely: the interesting regions and their corresponding stay point counts. The quadtree spatial decomposition enables one to obtain a localized reduced sensitivity to achieve the differential privacy goal and accurate outputs. Experimental results on synthetic datasets are used to show the feasibility of the proposed privacy preserving location pattern mining algorithm."} {"_id": "08791420bc8894d1762a44188bc4727479a8de2c", "title": "From categories to subcategories: Large-scale image classification with partial class label refinement", "text": "The number of digital images is growing extremely rapidly, and so is the need for their classification. But, as more images of pre-defined categories become available, they also become more diverse and cover finer semantic differences. Ultimately, the categories themselves need to be divided into subcategories to account for that semantic refinement.
Image classification in general has improved significantly over the last few years, but it still requires a massive amount of manually annotated data. Subdividing categories into subcategories multiplies the number of labels, aggravating the annotation problem. Hence, we can expect the annotations to be refined only for a subset of the already labeled data, and exploit coarser labeled data to improve classification. In this work, we investigate how coarse category labels can be used to improve the classification of subcategories. To this end, we adopt the framework of Random Forests and propose a regularized objective function that takes into account relations between categories and subcategories. Compared to approaches that disregard the extra coarse labeled data, we achieve a relative improvement in subcategory classification accuracy of up to 22% in our large-scale image classification experiments."} {"_id": "a7c1534f09943be868088a6ac5854830898347c6", "title": "Hemisphere lens-loaded Vivaldi antenna for time domain microwave imaging of concealed objects", "text": "A hemisphere lens-loaded Vivaldi antenna for microwave imaging applications is designed and tested in this paper. The proposed antenna is designed to work in the wide frequency band of 1\u201314 GHz, and is fabricated on the FR-4 substrate. The directivity of the proposed Vivaldi antenna is enhanced using a hemispherical dielectric lens, which is fixed in the end-fire direction of the antenna. The proposed antenna is well suited for microwave imaging applications because of the wide frequency range and high directivity. The design of the antenna is carried out using the CST microwave studio, and various parameters such as the return loss, the radiation pattern, the directivity, and input impedance are optimized. The maximum improvement of 4.19 dB in the directivity is observed with the designed hemisphere lens. The antenna design is validated by fabricating and testing it in an anechoic environment. Finally, the designed antenna is utilized to establish a setup for measuring the scattering coefficients of various objects and structures in the frequency band of 1\u201314 GHz. The two-dimensional (2D) microwave images of these objects are successfully obtained from the measured wide-band scattering data using a novel time domain inverse scattering approach, which shows the applicability of the proposed antenna."} {"_id": "48947a9ce5e37003008a38fbfcdb2a317421b7e4", "title": "ELM-ART: An Adaptive Versatile System for Web-based Instruction", "text": "This paper discusses the problems of developing versatile adaptive and intelligent learning systems that can be used in the context of practical Web-based education. We argue that versatility is an important feature of successful Web-based education systems. We introduce ELM-ART, an intelligent interactive educational system to support learning programming in LISP. ELM-ART provides all learning material online in the form of an adaptive interactive textbook. Using a combination of an overlay model and an episodic student model, ELM-ART provides adaptive navigation support, course sequencing, individualized diagnosis of student solutions, and example-based problem-solving support. Results of an empirical study show different effects of these techniques on different types of users during the first lessons of the programming course.
ELM-ART demonstrates how some interactive and adaptive educational components can be implemented in a WWW context and how multiple components can be naturally integrated together in a single system."} {"_id": "34329c7ec1cc159ed5efa84cb38ea9bbec335d19", "title": "A probabilistic model for retrospective news event detection", "text": "Retrospective news event detection (RED) is defined as the discovery of previously unidentified events in a historical news corpus. Although both the contents and time information of news articles are helpful to RED, most research focuses on the utilization of the contents of news articles. Little work has been carried out on finding better uses of time information. In this paper, we explore both directions based on the following two characteristics of news articles. On the one hand, news articles are always triggered by events; on the other hand, similar articles reporting the same event often redundantly appear on many news sources. The former hints at a generative model of news articles, and the latter provides a data-enriched environment to perform RED. In light of these characteristics, we propose a probabilistic model to incorporate both content and time information in a unified framework. This model gives new representations of both news articles and news events. Furthermore, based on this approach, we build an interactive RED system, HISCOVERY, which provides additional functions to present events, Photo Story and Chronicle."} {"_id": "0621213a012d169cb7c2930354c6489d6a89baf8", "title": "A cross-collection mixture model for comparative text mining", "text": "In this paper, we define and study a novel text mining problem, which we refer to as Comparative Text Mining (CTM). Given a set of comparable text collections, the task of comparative text mining is to discover any latent common themes across all collections as well as summarize the similarity and differences of these collections along each common theme. This general problem subsumes many interesting applications, including business intelligence and opinion summarization. We propose a generative probabilistic mixture model for comparative text mining. The model simultaneously performs cross-collection clustering and within-collection clustering, and can be applied to an arbitrary set of comparable text collections. The model can be estimated efficiently using the Expectation-Maximization (EM) algorithm. We evaluate the model on two different text data sets (i.e., a news article data set and a laptop review data set), and compare it with a baseline clustering method also based on a mixture model. Experiment results show that the model is quite effective in discovering the latent common themes across collections and performs significantly better than our baseline mixture model."} {"_id": "3bb6fa7b7f22de6ccc6cca3036a480a5bc839a4d", "title": "Analysis of first-order anti-aliasing integration sampler", "text": "Performance of the first-order anti-aliasing integration sampler used in software-defined radio (SDR) receivers is analyzed versus all practical nonidealities. The nonidealities that are considered in this paper are transconductor finite output resistance, switch resistance, nonzero rise and fall times of the sampling clock, charge injection, clock jitter, and noise. It is proved that the filter is quite robust to all of these nonidealities except for transconductor finite output resistance.
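For intuition about why such an integration sampler filters aliases at all: an ideal windowed integrator has a sinc-shaped magnitude response whose nulls fall exactly at the frequencies that would fold onto DC. A small numerical sketch, assuming an ideal transconductor and an illustrative 1 GS/s rate:

```python
import numpy as np

def integration_sampler_mag(f: np.ndarray, Ti: float) -> np.ndarray:
    """|H(f)| of an ideal integrate-and-dump window of length Ti (gain ignored).

    np.sinc(x) = sin(pi*x)/(pi*x), so the nulls sit at multiples of 1/Ti,
    which are the alias frequencies for a sampling rate fs = 1/Ti.
    """
    return np.abs(np.sinc(f * Ti))

fs = 1e9                                   # assumed 1 GS/s sampling rate
f = np.linspace(1e6, 5e9, 5001)
H = integration_sampler_mag(f, 1 / fs)
for k in (1, 2, 3):                        # rejection near the alias frequencies
    print(f"|H| near {k}*fs: {H[np.argmin(np.abs(f - k * fs))]:.2e}")
```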
Furthermore, the linearity and noise performance are ultimately limited by the design of a low-noise and highly linear transconductor."} {"_id": "746706121f51103c52350e9efa347db858ee78b7", "title": "If You Measure It, Can You Improve It?: Exploring The Value of Energy Disaggregation", "text": "Over the past few years, dozens of new techniques have been proposed for more accurate energy disaggregation, but the jury is still out on whether these techniques can actually save energy and, if so, whether higher accuracy translates into higher energy savings. In this paper, we explore both of these questions. First, we develop new techniques that use disaggregated power data to provide actionable feedback to residential users. We evaluate these techniques using power traces from 240 homes and find that they can detect homes that need feedback with as much as 84% accuracy. Second, we evaluate whether existing energy disaggregation techniques provide power traces with sufficient fidelity to support the feedback techniques that we created and whether more accurate disaggregation results translate into more energy savings for the users. Results show that feedback accuracy is very low even while disaggregation accuracy is high. These results indicate a need to revisit the metrics by which disaggregation is evaluated."} {"_id": "8cafd353218e8dbd3e2d485bc0079f7d1b3dc39a", "title": "Supervised Word Sense Disambiguation using Python", "text": "In this paper, we discuss the problem of Word Sense Disambiguation (WSD) and one approach to solving the lexical sample problem. We use training and test data from SENSEVAL-3 and implement methods based on Naïve Bayes calculations, cosine comparison of word-frequency vectors, decision lists, and Latent Semantic Analysis. We also implement a simple classifier combination system that combines these classifiers into one WSD module. We then demonstrate the effectiveness of our WSD module by participating in the Multilingual Chinese-English Lexical Sample Task from SemEval-2007."} {"_id": "a4bd8fb3e27e41a5afc13b7c19f783bf17fcd7b9", "title": "Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets", "text": "With the ever-growing amount of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure a consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified, practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification and word similarity prediction."} {"_id": "420bf47ca523f4fe0772d9d13e1bc7f93d187186", "title": "Speech Recognition Engineering Issues in Speech to Speech Translation System Design for Low Resource Languages and Domains", "text": "Engineering automatic speech recognition (ASR) for speech to speech (S2S) translation systems, especially targeting languages and domains that do not have readily available spoken language resources, is immensely challenging for a number of reasons.
In addition to contending with the conventional data-hungry speech acoustic and language modeling needs, these designs have to accommodate varying requirements imposed by the domain needs and characteristics, target device and usage modality (such as phrase-based, or spontaneous free form interactions, with or without visual feedback) and huge spoken language variability arising due to socio-linguistic and cultural differences of the users. This paper, using case studies of creating speech translation systems between English and languages such as Pashto and Farsi, describes some of the practical issues and the solutions that were developed for multilingual ASR development. These include novel acoustic and language modeling strategies such as language adaptive recognition, active-learning based language modeling, class-based language models that can better exploit resource poor language data, efficient search strategies, including N-best and confidence generation to aid multiple hypotheses translation, use of dialog information and clever interface choices to facilitate ASR, and audio interface design for meeting both usability and robustness requirements"} {"_id": "0d636d8241b3c578513f0f52448198c035bb717e", "title": "Effects of prior hamstring strain injury on strength, flexibility, and running mechanics.", "text": "BACKGROUND\nPrevious studies have shown evidence of residual scar tissue at the musculotendon junction following a hamstring strain injury, which could influence re-injury risk. The purpose of this study was to investigate whether bilateral differences in strength, neuromuscular patterns, and musculotendon kinematics during sprinting are present in individuals with a history of unilateral hamstring injury, and whether such differences are linked to the presence of scar tissue.\n\n\nMETHODS\nEighteen subjects with a previous hamstring injury (>5 months prior) participated in a magnetic resonance (MR) imaging exam, isokinetic strength testing, and a biomechanical assessment of treadmill sprinting. Bilateral comparisons were made for peak knee flexion torque, angle of peak torque, and the hamstrings:quadriceps strength ratio, as well as muscle activations and peak hamstring stretch during sprinting. MR images were used to measure the volumes of the proximal tendon/aponeurosis of the biceps femoris, with asymmetries considered indicative of scar tissue.\n\n\nFINDINGS\nA significantly enlarged proximal biceps femoris tendon volume was measured on the side of prior injury. However, no significant differences between the previously injured and uninjured limbs were found in strength measures, peak hamstring stretch, or muscle activation patterns. Further, the degree of asymmetry in tendon volume was not correlated to any of the functional measures.\n\n\nINTERPRETATION\nInjury-induced changes in morphology do not seem discernable from strength measures, running kinematics, or muscle activation patterns. Further research is warranted to ascertain whether residual scarring alters localized musculotendon mechanics in a way that may contribute to the high rates of muscle re-injury that are observed clinically."} {"_id": "125afe2892d2fe9960bdcf9469d7ee504af26d18", "title": "Gray-level grouping (GLG): an automatic method for optimized image contrast enhancement - part II: the variations", "text": "This is Part II of the paper, \"Gray-Level Grouping (GLG): an Automatic Method for Optimized Image Contrast Enhancement\". 
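As a point of reference for the contrast-enhancement discussion that continues below, here is plain global histogram equalization, the kind of conventional technique GLG is compared against; the GLG grouping procedure itself is considerably more involved and is not reproduced here.

```python
import numpy as np

def histogram_equalization(img: np.ndarray) -> np.ndarray:
    """Conventional global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first occupied gray level
    lut = np.clip(np.round(255.0 * (cdf - cdf_min) /
                           max(cdf[-1] - cdf_min, 1)), 0, 255).astype(np.uint8)
    return lut[img]

img = (np.random.rand(64, 64) * 80 + 60).astype(np.uint8)   # low-contrast input
out = histogram_equalization(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```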
Part I of this paper introduced a new automatic contrast enhancement technique: gray-level grouping (GLG). GLG is a general and powerful technique, which can be conveniently applied to a broad variety of low-contrast images and outperforms conventional contrast enhancement techniques. However, the basic GLG method still has limitations and cannot enhance certain classes of low-contrast images well, e.g., images with a noisy background. The basic GLG also cannot fulfill certain special application purposes, e.g., enhancing only part of an image which corresponds to a certain segment of the image histogram. In order to break through these limitations, this paper introduces an extension of the basic GLG algorithm, selective gray-level grouping (SGLG), which groups the histogram components in different segments of the grayscale using different criteria and, hence, is able to enhance different parts of the histogram to various extents. This paper also introduces two new preprocessing methods to eliminate background noise in noisy low-contrast images so that such images can be properly enhanced by the (S)GLG technique. The extension of (S)GLG to color images is also discussed in this paper. SGLG and its variations extend the capability of the basic GLG to a larger variety of low-contrast images, and can fulfill special application requirements. SGLG and its variations not only produce results superior to conventional contrast enhancement techniques, but are also fully automatic under most circumstances, and are applicable to a broad variety of images."} {"_id": "c2c5206f6a539b02f5d5a19bdb3a90584f7e6ba4", "title": "Affective Computing: A Review", "text": "Affective computing is currently one of the most active research topics and is attracting increasingly intensive attention. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interfaces, etc. Affective computing draws on a multidisciplinary background spanning psychology, cognitive science, physiology, and computer science. This paper focuses on the several issues involved implicitly in the whole interactive feedback loop. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are also discussed."} {"_id": "7083777989197f87e3933d26d614d15ea05c3fe6", "title": "Advances in Instance Selection for Instance-Based Learning Algorithms", "text": "The basic nearest neighbour classifier suffers from the indiscriminate storage of all presented training instances. With a large database of instances, classification response time can be slow. When noisy instances are present, classification accuracy can suffer. Drawing on the large body of relevant work carried out in the past 30 years, we review the principal approaches to solving these problems. By deleting instances, both problems can be alleviated, but the criterion used is typically assumed to be all-encompassing and effective over many domains. We argue against this position and introduce an algorithm that rivals the most successful existing algorithm. When evaluated on 30 different problems, neither algorithm consistently outperforms the other: consistency is very hard. To achieve the best results, we need to develop mechanisms that provide insights into the structure of class definitions.
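One classical deletion criterion from the instance-selection literature reviewed here is Wilson-style editing, which drops instances misclassified by their own neighbours; the sketch below illustrates that criterion, not the algorithm the paper introduces.

```python
from collections import Counter
import numpy as np

def edited_nearest_neighbours(X: np.ndarray, y: np.ndarray, k: int = 3):
    """Keep only instances whose k nearest neighbours agree with their label."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # exclude the instance itself
        neighbours = np.argsort(d)[:k]
        majority = Counter(y[neighbours].tolist()).most_common(1)[0][0]
        if majority == y[i]:
            keep.append(i)
    return X[keep], y[keep]

X = np.array([[0, 0], [0.1, 0], [0, 0.1], [1, 1], [0.9, 1], [1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(edited_nearest_neighbours(X, y, k=3)[1])
```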
We discuss the possibility of these mechanisms and propose some initial measures that could be useful for the data miner."} {"_id": "ff5c3b48e4a2c46de00839c86b3322735d42a907", "title": "Effects of sleep deprivation on performance: a meta-analysis.", "text": "To quantitatively describe the effects of sleep loss, we used meta-analysis, a technique relatively new to the sleep research field, to mathematically summarize data from 19 original research studies. Results of our analysis of 143 study coefficients and a total sample size of 1,932 suggest that overall sleep deprivation strongly impairs human functioning. Moreover, we found that mood is more affected by sleep deprivation than either cognitive or motor performance and that partial sleep deprivation has a more profound effect on functioning than either long-term or short-term sleep deprivation. In general, these results indicate that the effects of sleep deprivation may be underestimated in some narrative reviews, particularly those concerning the effects of partial sleep deprivation."} {"_id": "2bd233eb65e0a5383c9f2fc92bd08fab9a847c97", "title": "An investigation of the impacts of some external contextual factors on ERP systems success assessment: a case of firms in Baltic-Nordic region", "text": "Enterprise Resource Planning (ERP) systems are among the largest information systems (IS) investments made by firms, and the use of such systems is spreading globally. Many researchers have discussed their adoption and implementation, but few have investigated the impact of external contextual factors on the success of such technologies in adopting firms. This study aims to fill this gap in research by examining the effects of three external contextual factors, i.e., industry type, industry climate, and national economic climate on ERP success assessment. We obtained data from Estonia and Finland and our analysis shows that industry and national economic climates have significant relationships with ERP success."} {"_id": "c876a24ffe1dac77397218275cfda7cc3dce6ac9", "title": "An Open Smart City IoT Test Bed: Street Light Poles as Smart City Spines: Poster Abstract", "text": "Street light poles will be a key enabler for a smart city's hardware infrastructure, thanks to their ubiquity throughout the city as well as access to power. We propose an IoT test bed around light poles for the city, with a modular hardware and software architecture to enable experimentation with various technologies."} {"_id": "6115c7df49cdd3a4e6dd86ebd8b6e75b26d27f26", "title": "RFBoost: An improved multi-label boosting algorithm and its application to text categorisation", "text": "The AdaBoost.MH boosting algorithm is considered to be one of the most accurate algorithms for multilabel classification. AdaBoost.MH works by iteratively building a committee of weak hypotheses of decision stumps. In each round of AdaBoost.MH learning, all features are examined, but only one feature is used to build a new weak hypothesis. This learning mechanism may entail a high degree of computational time complexity, particularly in the case of a large-scale dataset. This paper describes a way to manage the learning complexity and improve the classification performance of AdaBoost.MH. We propose an improved version of AdaBoost.MH, called RFBoost. The weak learning in RFBoost is based on filtering a small fixed number of ranked features in each boosting round rather than using all features, as AdaBoost.MH does.
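The idea of restricting each boosting round to a handful of ranked features can be sketched as follows; the presence-based decision stumps and the toy data are assumptions for illustration, not RFBoost's exact weak learner.

```python
import numpy as np

def best_stump_on_subset(X, y, w, candidate_features):
    """One boosting round searched over a small ranked feature subset.

    X: (n, d) binary term-presence matrix; y: (n,) labels in {-1, +1};
    w: (n,) boosting example weights. Scanning only `candidate_features`
    instead of all d features is the complexity-reduction idea sketched here.
    """
    best_feature, best_err = None, np.inf
    for f in candidate_features:
        pred = np.where(X[:, f] > 0, 1, -1)  # presence-based decision stump
        err = w[pred != y].sum()
        err = min(err, 1.0 - err)            # allow the flipped stump too
        if err < best_err:
            best_feature, best_err = f, err
    return best_feature, best_err

rng = np.random.default_rng(0)
X = (rng.random((100, 500)) > 0.9).astype(int)
y = np.where(X[:, 42] > 0, 1, -1)            # feature 42 is informative
w = np.full(100, 1 / 100)
print(best_stump_on_subset(X, y, w, candidate_features=[3, 42, 7, 99]))
```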
We propose two methods for ranking the features: One Boosting Round and Labeled Latent Dirichlet Allocation (LLDA), a supervised topic model based on Gibbs sampling. Additionally, we investigate the use of LLDA as a feature selection method for reducing the feature space based on the maximal conditional probabilities of words across labels. Our experimental results on eight well-known benchmarks for multi-label text categorisation show that RFBoost is significantly more efficient and effective than the baseline algorithms. Moreover, the LLDA-based feature ranking yields the best performance for RFBoost."} {"_id": "f61dd368316f45bc45c740aba9570b93ea48c40a", "title": "Novel use of electronic whiteboard in the operating room increases surgical team compliance with pre-incision safety practices.", "text": "BACKGROUND\nDespite evidence that use of a checklist during the pre-incision time out improves patient morbidity and mortality, compliance with performing the required elements of the checklist has been low. In an effort to improve compliance, a standardized time out interactive Electronic Checklist System [iECS] was implemented in all hospital operating room (OR) suites at 1 institution. The purpose of this 12-month prospective observational study was to assess whether an iECS in the OR improves and sustains improved surgical team compliance with the pre-incision time out.\n\n\nMETHODS\nDirect observational analyses of preprocedural time outs were performed on 80 cases 1 month before, and 1 and 9 months after implementation of the iECS, for a total of 240 observed cases. Three observers, who achieved high interrater reliability (kappa = 0.83), recorded a compliance score (yes, 1; no, 0) on each element of the time out. An element was scored as compliant if it was clearly verbalized by the surgical team.\n\n\nRESULTS\nPre-intervention observations indicated that surgical staff verbally communicated the core elements of the time out procedure 49.7 \u00b1 12.9% of the time. After implementation of the iECS, direct observation of 80 surgical cases at 1 and 9 months indicated that surgical staff verbally communicated the core elements of the time out procedure 81.6 \u00b1 11.4% and 85.8 \u00b1 6.8% of the time, respectively, resulting in a statistically significant (P < .0001) increase in time out procedural compliance.\n\n\nCONCLUSION\nImplementation of a standardized iECS can dramatically increase compliance with preprocedural time outs in the OR, an important and necessary step in improving patient outcomes and reducing preventable complications and deaths."} {"_id": "3a326b2a558501a64aab85d51fcaf52e70b86019", "title": "Self-Adaptive Skin Segmentation in Color Images", "text": "In this paper, we present a new method for skin detection and segmentation, relying on spatial analysis of skin-tone pixels. Our contribution lies in introducing self-adaptive seeds, from which the skin probability is propagated using the distance transform. The seeds are determined from a local skin color model that is learned on-line from a presented image, without requiring any additional information. This is in contrast to the existing methods that need a skin sample for the adaptation, e.g., acquired using a face detector.
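A rough sketch of the seed-and-propagate idea just described, using SciPy's Euclidean distance transform; the exponential decay and the parameter values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy import ndimage

def propagate_skin_probability(skin_prob: np.ndarray,
                               seed_thresh: float = 0.9,
                               tau: float = 20.0) -> np.ndarray:
    """Seed on high-confidence skin pixels, then spread the evidence spatially."""
    seeds = skin_prob >= seed_thresh
    if not seeds.any():
        return np.zeros_like(skin_prob)
    # Distance (in pixels) from every pixel to its nearest seed pixel.
    dist = ndimage.distance_transform_edt(~seeds)
    return np.exp(-dist / tau)               # closer to a seed -> more skin-like

prob = np.random.rand(32, 32)                # stand-in skin-color probabilities
print(propagate_skin_probability(prob).max())
```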
In our experimental study, we obtained an F-score of over 0.85 on the ECU benchmark, which is highly competitive with several state-of-the-art methods."} {"_id": "c4cf7f57600ef62cbd1179986f4ddea822c7b157", "title": "Mobile Robot Control on a Reference Path", "text": "In this paper a control design of a nonholonomic mobile robot with a differential drive is presented. On the basis of the robot kinematics equations, a robot control is designed in which the robot is controlled to follow an arbitrary reference path with a predefined velocity profile. The designed control algorithm proved stable and robust to the errors in robot initial positions, to input and output noises and to other disturbances. The obtained control law is demonstrated on a simple trajectory example; however, for more general applicability, a time-optimal motion planning algorithm considering acceleration constraints is presented as well."} {"_id": "528dd75a9b3f56a8fb03072d41565056a7e1c1e0", "title": "An investigation of factors associated with the health and well-being of HIV-infected or HIV-affected older people in rural South Africa", "text": "BACKGROUND\nDespite the severe impact of HIV in sub-Saharan Africa, the health of older people aged 50+ is often overlooked owing to the dearth of data on the direct and indirect effects of HIV on older people's health status and well-being. The aim of this study was to examine correlates of health and well-being of HIV-infected older people relative to HIV-affected people in rural South Africa, defined as participants with an HIV-infected adult child or the death of an adult child due to an HIV-related cause.\n\n\nMETHODS\nData were collected within the Africa Centre surveillance area using instruments adapted from the World Health Organization (WHO) Study on global AGEing and adult health (SAGE). A stratified random sample of 422 people aged 50+ participated. We compared the health correlates of HIV-infected to HIV-affected participants using ordered logistic regressions. Health status was measured using three instruments: disability index, quality of life and composite health score.\n\n\nRESULTS\nMedian age of the sample was 60 years (range 50-94). Women, whether HIV-infected (aOR 0.15, 95% confidence interval (CI) 0.08-0.29) or HIV-affected (aOR 0.20, 95% CI 0.08-0.50), were significantly less likely than men to be in good functional ability. Women's adjusted odds of being in good overall health state were similarly lower than men's, while income and household wealth status were stronger correlates of quality of life. HIV-infected participants reported better functional ability, quality of life and overall health state than HIV-affected participants.\n\n\nDISCUSSION AND CONCLUSIONS\nThe enhanced healthcare received as part of anti-retroviral treatment as well as the considerable resources devoted to HIV care appear to benefit the overall well-being of HIV-infected older people; whereas similar resources have not been devoted to the general health needs of HIV uninfected older people. Given increasing numbers of older people, policy and programme interventions are urgently needed to holistically meet the health and well-being needs of older people beyond the HIV-related care system."} {"_id": "ce1889c543f2a85e7a98c020efa265cdad8a7647", "title": "A Machine Learning Approach to Anomaly Detection", "text": "Much of the intrusion detection research focuses on signature (misuse) detection, where models are built to recognize known attacks.
However, signature detection, by its nature, cannot detect novel attacks. Anomaly detection focuses on modeling the normal behavior and identifying significant deviations, which could be novel attacks. In this paper we explore two machine learning methods that can construct anomaly detection models from past behavior. The first method is a rule learning algorithm that characterizes normal behavior in the absence of labeled attack data. The second method uses a clustering algorithm to identify outliers."} {"_id": "d6b2180dd2a401d252573883a5ab6880b13a3031", "title": "Construction and Evaluation of a User Experience Questionnaire", "text": "An end-user questionnaire to measure user experience quickly in a simple and immediate way while covering a preferably comprehensive impression of the product user experience was the goal of the reported construction process. An empirical approach for the item selection was used to ensure practical relevance of items. Usability experts collected terms and statements on user experience and usability, including \u2018hard\u2019 as well as \u2018soft\u2019 aspects. These statements were consolidated and transformed into a first questionnaire version containing 80 bipolar items. It was used to measure the user experience of software products in several empirical studies. Data were subjected to a factor analysis which resulted in the construction of a 26 item questionnaire including the six factors Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty. Studies conducted for the original German questionnaire and an English version indicate a satisfactory level of reliability and construct validity."} {"_id": "817a6ce83b610ca538d84b96f40328e8a98946f9", "title": "Transformational change and business process reengineering (BPR): Lessons from the British and Dutch public sector", "text": "Available online 18 May 2011"} {"_id": "0c1edc0fb1a2bbed050403f465c1bfb5fdd507e5", "title": "Clustering of multivariate time series data using particle swarm optimization", "text": "Particle swarm optimization (PSO) is a practical and effective optimization approach that has been applied recently for data clustering in many applications. While various non-evolutionary optimization and clustering algorithms have been applied for clustering multivariate time series in some applications such as customer segmentation, they usually provide poor results due to their dependency on the initial values and their poor performance in manipulating multiple objectives. In this paper, a particle swarm optimization algorithm is proposed for clustering multivariate time series data. Since the time series data sometimes do not have the same length and they usually have missing data, the regular Euclidean distance and dynamic time warping can not be applied for such data to measure the similarity. Therefore, a hybrid similarity measure based on principal component analysis and Mahalanobis distance is applied in order to handle such limitations. The comparison between the results of the proposed method with the similar ones in the literature shows the superiority of the proposed method."} {"_id": "5cb52b24aa172d7a043f618068e2dbd9c5038df9", "title": "Nonlinear Dynamics of Spring Softening and Hardening in Folded-MEMS Comb Drive Resonators", "text": "This paper studies analytically and numerically the spring softening and hardening phenomena that occur in electrostatically actuated microelectromechanical systems comb drive resonators utilizing folded suspension beams. 
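Spring softening and hardening of the kind studied here are commonly illustrated with a Duffing-type model, for which the method of multiple scales yields the classic backbone relation; the sketch below is that generic illustration with arbitrary numbers, not the paper's full comb-drive derivation.

```python
import numpy as np

def backbone_frequency(A: np.ndarray, w0: float, beta: float) -> np.ndarray:
    """First-order multiple-scales backbone for x'' + w0^2*x + beta*x^3 = 0:
    w(A) ~ w0 + 3*beta*A**2 / (8*w0). beta > 0 bends the resonance upward
    (hardening); beta < 0 bends it downward (softening)."""
    return w0 + 3.0 * beta * A ** 2 / (8.0 * w0)

A = np.linspace(0.0, 1.0, 5)                  # modal amplitude (arbitrary units)
w0 = 2 * np.pi * 10e3                         # assumed 10 kHz resonance
print(backbone_frequency(A, w0, beta=+1e9))   # hardening branch
print(backbone_frequency(A, w0, beta=-1e9))   # softening branch
```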
An analytical expression for the electrostatic force generated between the combs of the rotor and the stator is derived and takes into account both the transverse and longitudinal capacitances present. After formulating the problem, the resulting stiff differential equations are solved analytically using the method of multiple scales, and a closed-form solution is obtained. Furthermore, the nonlinear boundary value problem that describes the dynamics of inextensional spring beams is solved using straightforward perturbation to obtain the linear and nonlinear spring constants of the beam. The analytical solution is verified numerically using a Matlab/Simulink environment, and the results from both analyses exhibit excellent agreement. Stability analysis based on phase plane trajectory is also presented and fully explains previously reported empirical results that lacked sufficient theoretical description. Finally, the proposed solutions are, once again, verified with previously published measurement results. The closed-form solutions provided are easy to apply and enable predicting the actual behavior of resonators and gyroscopes with similar structures."} {"_id": "11338e3919ae988af93c7dd4729db856c23173b1", "title": "Automatic Detection and Classification of Brain Hemorrhages", "text": "Computer-aided diagnosis systems have been the focus of many research endeavors. They are based on the idea of processing and analyzing images of different parts of the human body for a quick and accurate diagnosis. In this paper, the aforementioned approach is followed to detect whether a brain hemorrhage exists or not in Computed Tomography (CT) scans of the brain. Moreover, the type of the hemorrhage is identified. The implemented system consists of several stages that include image preprocessing, image segmentation, feature extraction, and classification. The results of the conducted experiments are very promising. A recognition rate of 100% is attained for detecting whether a brain hemorrhage exists or not. For the hemorrhage type classification, more than 92% accuracy is achieved. Key\u2013Words: brain hemorrhage, brain ct scans, machine learning, image processing, image segmentation"} {"_id": "ad7d3f6817b6cc96781d3785c7e45e0843a9d39d", "title": "Analysis of a linear series-fed rectangular microstrip antenna array", "text": "A new method is proposed to analyze a linear series-fed microstrip antenna array which maintains accuracy while not requiring a full-wave analysis of the entire structure. The method accounts for mutual coupling between patches as well as the patch excitations by the feed lines."} {"_id": "090d25f94cb021bdd3400a2f547f989a6a5e07ec", "title": "Direct least squares fitting of ellipses", "text": "This work presents a new efficient method for fitting ellipses to scattered data. Previous algorithms either fitted general conics or were computationally expensive. By minimizing the algebraic distance subject to the constraint 4ac - b^2 = 1, the new method incorporates the ellipticity constraint into the normalization factor. The new method combines several advantages: (i) It is ellipse-specific so that even bad data will always return an ellipse; (ii) It can be solved naturally by a generalized eigensystem and (iii) it is extremely robust, efficient and easy to implement. We compare the proposed method to other approaches and show its robustness on several examples in which other non-ellipse-specific approaches would fail or require computationally expensive iterative refinements.
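To make the constrained minimization above concrete, here is a minimal numpy sketch of the ellipse-specific fit: the algebraic distance is minimized subject to 4ac - b^2 = 1, which turns the problem into a generalized eigensystem. This is a compact reading of the published method, without its numerical refinements.

```python
import numpy as np

def fit_ellipse(x, y):
    # Design matrix for the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                        # scatter matrix
    C = np.zeros((6, 6))               # constraint matrix encoding 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # S a = lambda C a  is equivalent to  (S^-1 C) a = (1/lambda) a
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S, C))
    # The unique positive real eigenvalue corresponds to the ellipse solution
    k = np.argmax(np.where(np.isreal(eigvals), eigvals.real, -np.inf))
    a = eigvecs[:, k].real
    return a / np.sqrt(a @ C @ a)      # scale so the constraint equals 1
```

For noisy but roughly elliptical point sets this returns the conic coefficients (a, b, c, d, e, f) directly, with no iteration.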
Source code for the algorithm is supplied and a demonstration is available online."} {"_id": "4c4387afaeadda64d8183d7aba19574a9b757a6a", "title": "SUITOR: an attentive information system", "text": "Attentive systems pay attention to what users do so that they can attend to what users need. Such systems track user behavior, model user interests, and anticipate user desires and actions. Because the general class of attentive systems is broad \u2014 ranging from human butlers to web sites that profile users \u2014 we have focused specifically on attentive information systems, which observe user actions with information resources, model user information states, and suggest information that might be helpful to users. In particular, we describe an implemented system, Simple User Interest Tracker (Suitor), that tracks computer users through multiple channels \u2014 gaze, web browsing, application focus \u2014 to determine their interests and to satisfy their information needs. By observing behavior and modeling users, Suitor finds and displays potentially relevant information that is both timely and non-disruptive to the users' ongoing activities."} {"_id": "002aaf4412f91d0828b79511f35c0863a1a32c47", "title": "A real-time face tracker", "text": "We present a real-time face tracker in this paper. The system has achieved a rate of 30+ frames/second using an HP-9000 workstation with a framegrabber and a Canon VC-C1 camera. It can track a person's face while the person moves freely (e.g., walks, jumps, sits down and stands up) in a room. Three types of models have been employed in developing the system. First, we present a stochastic model to characterize skin-color distributions of human faces. The information provided by the model is sufficient for tracking a human face in various poses and views. This model is adaptable to different people and different lighting conditions in real-time. Second, a motion model is used to estimate image motion and to predict the search window. Third, a camera model is used to predict and to compensate for camera motion. The system can be applied to tele-conferencing and many HCI applications including lip-reading and gaze tracking. The principle in developing this system can be extended to other tracking problems such as tracking the human hand."} {"_id": "36bb4352891209ba0a7df150c74cd4db6d603ca5", "title": "Single Image Super-Resolution via Multiple Mixture Prior Models", "text": "Example learning-based single image super-resolution (SR) is a promising method for reconstructing a high-resolution (HR) image from a single-input low-resolution (LR) image. Many popular SR approaches are either time- or space-intensive, which limits their practical applications. Hence, some research has focused on a subspace view and delivered state-of-the-art results. In this paper, we utilize an effective way with mixture prior models to transform the large nonlinear feature space of LR images into a group of linear subspaces in the training phase. In particular, we first partition image patches into several groups by a novel selective patch processing method based on the difference curvature of LR patches, and then learn the mixture prior models in each group. Moreover, different prior distributions have various effectiveness in SR, and in this case, we find that the student-t prior shows stronger performance than the well-known Gaussian prior.
In the testing phase, we adopt the learned multiple mixture prior models to map the input LR features into the appropriate subspace, and finally reconstruct the corresponding HR image in a novel mixed matching way. Experimental results indicate that the proposed approach is both quantitatively and qualitatively superior to some state-of-the-art SR methods."} {"_id": "7cecad0af237dcd7760364e1ae0d03dea362d7cd", "title": "Pharmacogenetics and adverse drug reactions", "text": "Polymorphisms in the genes that code for drug-metabolising enzymes, drug transporters, drug receptors, and ion channels can affect an individual's risk of having an adverse drug reaction, or can alter the efficacy of drug treatment in that individual. Mutant alleles at a single gene locus are the best studied individual risk factors for adverse drug reactions, and include many genes coding for drug-metabolising enzymes. These genetic polymorphisms of drug metabolism produce the phenotypes of \"poor metabolisers\" or \"ultrarapid metabolisers\" of numerous drugs. Together, such phenotypes make up a substantial proportion of the population. Pharmacogenomic techniques allow efficient analysis of these risk factors, and genotyping tests have the potential to optimise drug therapy in the future."} {"_id": "794435fe025ac480dbdb6218866e1d30d5a786c8", "title": "High performance work systems: the gap between policy and practice in health care reform.", "text": "PURPOSE\nStudies of high-performing organisations have consistently reported a positive relationship between high performance work systems (HPWS) and performance outcomes. Although many of these studies have been conducted in manufacturing, similar findings of a positive correlation between aspects of HPWS and improved care delivery and patient outcomes have been reported in international health care studies. The purpose of this paper is to bring together the results from a series of studies conducted within Australian health care organisations. First, the authors seek to demonstrate the link found between high performance work systems and organisational performance, including the perceived quality of patient care. Second, the paper aims to show that the hospitals studied do not have the necessary aspects of HPWS in place and that there has been little consideration of HPWS in health system reform.\n\n\nDESIGN/METHODOLOGY/APPROACH\nThe paper draws on a series of correlation studies using survey data from hospitals in Australia, supplemented by qualitative data collection and analysis. To demonstrate the link between HPWS and perceived quality of care delivery the authors conducted regression analysis with tests of mediation and moderation to analyse survey responses of 201 nurses in a large regional Australian health service and explored HRM and HPWS in detail in three case-study organisations. To achieve the second aim, the authors surveyed human resource and other senior managers in all Victorian health sector organisations and reviewed policy documents related to health system reform planned for Australia.\n\n\nFINDINGS\nThe findings suggest that there is a relationship between HPWS and the perceived quality of care that is mediated by human resource management (HRM) outcomes, such as psychological empowerment. It is also found that health care organisations in Australia generally do not have the necessary aspects of HPWS in place, creating a policy and practice gap.
Although the chief executive officers of health service organisations reported high levels of strategic HRM, the human resource and other managers reported a distinct lack of HPWS from their perspectives. The authors discuss why health care organisations may have difficulty in achieving HPWS.\n\n\nORIGINALITY/VALUE\nLeaders in health care organisations should focus on ensuring human resource management systems, structures and processes that support HPWS. Policy makers need to consider HPWS as a necessary component of health system reform. There is a strong need to reorient organisational human resource management policies and procedures in public health care organisations towards high performing work systems."} {"_id": "da2a956676b59b5237bde62a308fa604215d6a55", "title": "Analysis and design of HBT Cherry-Hooper amplifiers with emitter-follower feedback for optical communications", "text": "In this article, the large-signal, small-signal, and noise performance of the Cherry-Hooper amplifier with emitter-follower feedback are analyzed from a design perspective. A method for choosing the component values to obtain a low group delay distortion or Bessel transfer function is given. The design theory is illustrated with an implementation of the circuit in a 47-GHz SiGe process. The amplifier has 19.7-dB gain, 13.7-GHz bandwidth, and \u00b110-ps group delay distortion. The amplifier core consumes 34 mW from a -3.3-V supply."} {"_id": "d5673c53b3643372dd8d35136769ecd73a6dede3", "title": "A Deep Learning Framework for Smart Street Cleaning", "text": "Conventional street cleaning methods include street sweepers going to various spots in the city and manually verifying if the street needs cleaning and taking action if required. However, this method is not optimized and demands a huge investment in terms of time and money. This paper introduces an automated framework which addresses the street cleaning problem in a better way by making use of modern equipment with cameras and computational techniques to analyze, find and efficiently schedule clean-up crews for the areas requiring more attention. Deep learning-based neural network techniques can be used to achieve better accuracy and performance for object detection and classification than conventional machine learning algorithms for large volumes of images. The proposed framework for street cleaning leverages the deep learning algorithm pipeline to analyze the street photographs and determines if the streets are dirty by detecting litter objects. The pipeline further determines the degree to which the streets are littered by classifying the litter objects detected in earlier stages. The framework also provides information on the cleanliness status of the streets on a dashboard updated in real-time. Such a framework can prove effective in reducing resource consumption and overall operational cost involved in street cleaning."} {"_id": "189a391b217387514bfe599a0b6c1bbc1ccc94bb", "title": "A New Paradigm for Collision-free Hashing: Incrementality at Reduced Cost", "text": "We present a simple, new paradigm for the design of collision-free hash functions. Any function emanating from this paradigm is incremental. (This means that if a message x which I have previously hashed is modified to x' then, rather than having to re-compute the hash of x' from scratch, I can quickly \"update\" the old hash value to the new one, in time proportional to the amount of modification made in x to get x'.)
Also any function emanating from this paradigm is parallelizable, which is useful for hardware implementation. We derive several specific functions from our paradigm. All use a standard hash function, assumed ideal, and some algebraic operations. The first function, MuHASH, uses one modular multiplication per block of the message, making it reasonably efficient, and significantly faster than previous incremental hash functions. Its security is proven, based on the hardness of the discrete logarithm problem. A second function, AdHASH, is even faster, using additions instead of multiplications, with security proven given either that approximation of the length of shortest lattice vectors is hard or that the weighted subset sum problem is hard. A third function, LtHASH, is a practical variant of recent lattice-based functions, with security proven based, again, on the hardness of shortest lattice vector approximation."} {"_id": "185aa5c29a1bce3cd5c806cd68c6518b65ba3d75", "title": "Bayesian Rose Trees", "text": "Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms."} {"_id": "940173d9b880defde6c5f171579cddbba8288dc6", "title": "Digital Business Strategy and Value Creation: Framing the Dynamic Cycle of Control Points", "text": "Within changing value networks, the profits and competitive advantages of participation reside dynamically at control points that are the positions of greatest value and/or power. The enterprises that hold these positions have a great deal of control over how the network operates, how the benefits are redistributed, and how this influences the execution of a digital business strategy. This article is based on a field study that provides preliminary, yet promising, empirical evidence that sheds light on the dynamic cycle of value creation and value capture points in digitally enabled networks in response to triggers related to technology and business strategy. We use the context of the European and U.S. broadcasting industry.
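A toy sketch of the MuHASH construction from the incremental-hashing abstract above may help: each (index, block) pair is hashed into a multiplicative group, the digest is the product of the group elements, and an edit to one block is folded in with a single multiplication by the quotient of the new and old elements. The tiny Mersenne prime modulus and the SHA-256 stand-in for the ideal hash are assumptions of this sketch, far below the parameters the security proof requires.

```python
import hashlib

P = (1 << 127) - 1  # toy prime modulus; real use needs a far larger group

def block_element(i: int, block: bytes) -> int:
    # Ideal-hash stand-in mapping (index, block) to a nonzero group element
    h = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(h, "big") % P or 1

def muhash(blocks) -> int:
    acc = 1
    for i, b in enumerate(blocks):
        acc = acc * block_element(i, b) % P
    return acc

def update(digest: int, i: int, old: bytes, new: bytes) -> int:
    # Incremental step: divide out the old block, multiply in the new one
    inv = pow(block_element(i, old), P - 2, P)  # Fermat inverse, P prime
    return digest * inv % P * block_element(i, new) % P
```

A quick check of incrementality: update(muhash([b"a", b"b"]), 1, b"b", b"c") equals muhash([b"a", b"c"]).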
Specifically, we illustrate how incremental innovations may shift value networks from static, vertically integrated networks to more loosely coupled networks, and how cross-boundary industry disruptions may then, in turn, shift those to two-sided markets. Based on our analysis we provide insights and implications for digital business strategy research and practice."} {"_id": "f39d45aeaf8a5ace793fb8c8495eb0c7598f5e23", "title": "Attitude Detection for One-Round Conversation: Jointly Extracting Target-Polarity Pairs", "text": "We tackle Attitude Detection, which we define as the task of extracting the replier's attitude, i.e., a target-polarity pair, from a given one-round conversation. While previous studies considered Target Extraction and Polarity Classification separately, we regard them as subtasks of Attitude Detection. Our experimental results show that treating the two subtasks independently is not the optimal solution for Attitude Detection, as achieving high performance in each subtask is not sufficient for obtaining correct target-polarity pairs. Our jointly trained model AD-NET substantially outperforms the separately trained models by alleviating the target-polarity mismatch problem. Moreover, we propose a method utilising the attitude detection model to improve retrieval-based chatbots by re-ranking the response candidates with attitude features. Human evaluation indicates that with attitude detection integrated, the new responses to the sampled queries are statistically significantly more consistent, coherent, engaging and informative than the original ones obtained from a commercial chatbot."} {"_id": "b81ef3e2185b79843fce53bc682a6f9fa44f2485", "title": "Robust misinterpretation of confidence intervals.", "text": "Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students, all in the field of psychology, were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data."} {"_id": "a9267dd3407e346aa08cd4018fbde92018230eb3", "title": "Low-complexity image denoising based on statistical modeling of wavelet coefficients", "text": "We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder.
We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on wavelet coefficient variances and estimate them using an approximate maximum a posteriori probability rule. Then we apply an approximate minimum mean squared error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature."} {"_id": "aead2b2d26ab62f6229048b912b7ad313187008c", "title": "Crowdlet: Optimal worker recruitment for self-organized mobile crowdsourcing", "text": "In this paper, we advocate Crowdlet, a novel self-organized mobile crowdsourcing paradigm, in which a mobile task requester can proactively exploit a massive crowd of encountered mobile workers in real time for quick and high-quality results. We present a comprehensive system model of Crowdlet that defines task, worker arrival and worker ability models. Further, we introduce a service quality concept to indicate the expected service gain that a requester can enjoy when he recruits an encountered worker, by jointly taking into account worker ability, real-timeness and task reward. Based on the models, we formulate an online worker recruitment problem to maximize the expected sum of service quality. We derive an optimal worker recruitment policy through the dynamic programming principle, and show that it exhibits a nice threshold-based structure. We conduct extensive performance evaluation based on real traces, and numerical results demonstrate that our policy can achieve superior performance and improve more than 30% performance gain over classic policies. Besides, our Android prototype shows that Crowdlet is cost-efficient, requiring less than 7 seconds and 6 joules in terms of time and energy cost for policy computation in most cases."} {"_id": "f39322a314bf96dbaf482b7f86fdf99853fca468", "title": "The population genetics of the Jewish people", "text": "Adherents to the Jewish faith have resided in numerous geographic locations over the course of three millennia. Progressively more detailed population genetic analysis carried out independently by multiple research groups over the past two decades has revealed a pattern for the population genetic architecture of contemporary Jews descendant from globally dispersed Diaspora communities. This pattern is consistent with a major, but variable component of shared Near East ancestry, together with variable degrees of admixture and introgression from the corresponding host Diaspora populations. By combining analysis of monoallelic markers with recent genome-wide variation analysis of simple tandem repeats, copy number variations, and single-nucleotide polymorphisms at high density, it has been possible to determine the relative contribution of sex-specific migration and introgression to map founder events and to suggest demographic histories corresponding to western and eastern Diaspora migrations, as well as subsequent microevolutionary events. These patterns have been congruous with the inferences of many, but not of all historians using more traditional tools such as archeology, archival records, linguistics, comparative analysis of religious narrative, liturgy and practices.
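Looking back at the wavelet-denoising abstract above: with a locally estimated signal variance, the approximate MMSE restoration reduces to a Wiener-style shrinkage, w_hat = s^2 / (s^2 + n^2) * y per coefficient. The sliding-window variance estimate below is a plausible stand-in for the MAP variance estimate, not the paper's exact rule.

```python
import numpy as np

def mmse_shrink(subband, noise_var, win=5):
    # Estimate a local signal variance in a sliding window, then apply
    # the Wiener-style shrinkage to each noisy wavelet coefficient.
    # Assumes noise_var > 0 (it is typically estimated from the image).
    pad = win // 2
    padded = np.pad(subband, pad, mode="reflect")
    out = np.empty_like(subband, dtype=float)
    h, w = subband.shape
    for y in range(h):
        for x in range(w):
            block = padded[y:y + win, x:x + win]
            sig_var = max(block.var() - noise_var, 0.0)
            out[y, x] = subband[y, x] * sig_var / (sig_var + noise_var)
    return out
```

Flat regions (local variance near the noise floor) are shrunk toward zero, while edges and textures, where the local variance dominates, pass through nearly unchanged.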
Importantly, the population genetic architecture of Jews helps to explain the observed patterns of health and disease-relevant mutations and phenotypes which continue to be carefully studied and catalogued, and represent an important resource for human medical genetics research. The current review attempts to provide a succinct update of the more recent developments in a historical and human health context."} {"_id": "2ab66c0f02521f4e5d42a6b29b08eb815504818b", "title": "Learning deep physiological models of affect", "text": "More than 15 years after the early studies in Affective Computing (AC), [1] the problem of detecting and modeling emotions in the context of human-computer interaction (HCI) remains complex and largely unexplored. The detection and modeling of emotion is, primarily, the study and use of artificial intelligence (AI) techniques for the construction of computational models of emotion. The key challenges one faces when attempting to model emotion [2] are inherent in the vague definitions and fuzzy boundaries of emotion, and in the modeling methodology followed. In this context, open research questions are still present in all key components of the modeling process. These include, first, the appropriateness of the modeling tool employed to map emotional manifestations and responses to annotated affective states; second, the processing of signals that express these manifestations (i.e., model input); and third, the way affective annotation (i.e., model output) is handled. This paper touches upon all three key components of an affective model (i.e., input, model, output) and introduces the use of deep learning (DL) [3], [4], [5] methodologies for affective modeling from multiple physiological signals."} {"_id": "b3e6312fb37a99336f7780b1c69badd899858b41", "title": "Three Implementations of SquishQL, a Simple RDF Query Language", "text": "RDF provides a basic way to represent data for the Semantic Web. We have been experimenting with the query paradigm for working with RDF data in semantic web applications. Query of RDF data provides a declarative access mechanism that is suitable for application usage and remote access. We describe work on a conceptual model for querying RDF data that refines ideas first presented at the W3C workshop on Query Languages [14] and the design of one possible syntax, derived from [7], that is suitable for application programmers. Further, we present experience gained in three implementations of the query language."} {"_id": "060d1b412dc3fb7d5f5d952adc8a4a8ecc4bd3fa", "title": "Introduction to Statistical Learning Theory", "text": "The goal of statistical learning theory is to study, in a statistical framework, the properties of learning algorithms. In particular, most results take the form of so-called error bounds. This tutorial introduces the techniques that are used to obtain such results."} {"_id": "9b9c9cc72ebc16596a618d5b78972437c9c569f6", "title": "Fast Texture Transfer", "text": ""} {"_id": "5de1dec847e812b7d7f2776a756ee17b2d1791cd", "title": "Social Media: The Good, the Bad, and the Ugly", "text": "Companies, when strategizing, are looking for innovative ways to gain a competitive advantage over their competitors. One way in which they compete is through the adoption of social media. Social media has evolved over the years and, as a result, new concepts and applications are developed which promise to provide business value to a company. However, despite the usefulness of social media, many businesses fail to reap its full benefits.
The current literature shows evidence of a lack of a strategically designed process for companies to successfully implement social media. The purpose of this study is to suggest a framework which provides the necessary alignment of social media goals with business objectives. From the literature review, a social media strategy framework was derived to offer an effective step-by-step approach to the development and implementation of social media goals aligned with a company\u2019s business objectives. The contribution of this study is the development of a social media strategy framework that can be used by organisations for business value."} {"_id": "a70ba6daebb31461480fe3a369af6c7658ccdec0", "title": "Web Mining for Information Retrieval", "text": "The present era is engulfed in data and it is quite difficult to sift through the mass of unstructured data in order to mine relevant information. In this paper we present a basic outline of web mining techniques which have changed the scenario of today's world and help in the retrieval of information. Web mining refers to the mining of interesting patterns from the vast pool of the web by using a set of tools and techniques. It focuses on different aspects of web mining, referred to as Web Content Mining, Web Structure Mining and Web Usage Mining. An overview of these mining techniques which help in retrieving desired information is covered along with the approaches used and algorithms employed."} {"_id": "6de1fbfe86ddfef72d5fcfdb6e7e140be0ab195e", "title": "Argumentation Mining: Where are we now, where do we want to be and how do we get there?", "text": "This paper gives a short overview of the state-of-the-art and goals of argumentation mining and it provides ideas for further research. Its content is based on two invited lectures on argumentation mining respectively at the FIRE 2013 conference at the India International Center in New Delhi, India and a lecture given as SICSA distinguished visitor at the University of Dundee, UK in the summer of 2014."} {"_id": "9abba0edb3d631b6cca51a346d494b911e5342fa", "title": "Unsupervised Feature Learning in Time Series Prediction Using Continuous Deep Belief Network", "text": "A continuous Deep Belief Network (cDBN) with two hidden layers is proposed in this paper, focusing on the problem of weak feature learning ability when dealing with continuous data. In cDBN, the input data is trained in an unsupervised way by using a continuous version of the transfer functions, contrastive divergence is designed into the hidden-layer training process to raise convergence speed, an improved dropout strategy is then implemented in unsupervised training to realize feature learning by de-cooperating between the units, and then the network is fine-tuned using the back-propagation algorithm. Besides, hyper-parameters are analysed through stability analysis to assure the network can find the optimum. Finally, the experiments on the Lorenz chaos series, the CATS benchmark and other real-world tasks like CO2 and waste-water parameter forecasting show that cDBN has the advantage of higher accuracy, simpler structure and faster convergence speed than other methods.
28"} {"_id": "63cffc8d517259e5b4669e122190dfe515ee5508", "title": "Influence of Some Parameters on the Effectiveness of Induction Heating", "text": "We have investigated the effectiveness of heating conductive plates by induction in low frequencies analytically and numerically, in relation to the shape of the plate, the area of the region exposed to the magnetic flux, and the position of the exposed region with respect to the center of the plate. We considered both uniform and nonuniform magnetic fields. For plates with equivalent area exposed to the same uniform magnetic flux, the one with the most symmetrical shape is heated most effectively. If a coil is employed for the excitation, the results depend on the shape of the plate, the shape of the coil section, and the coil lift-off. When only the central region of a plate is exposed to a variable magnetic flux, there is a specific value of the exposed area for which the power dissipated in the plate material reaches a maximum. The most effective heating of a plate partially exposed occurs when the axis of the exciting coil is at the plate center."} {"_id": "4ad7f58bde3358082cc922da3e726571fab453e7", "title": "A Model of a Localized Cross-Border E-Commerce", "text": "By the explosive growth of B2B e-commerce transactions in international supply chains and the rapid increase of business documents in Iran\u2019s cross-border trading, effective management of trade processes over borders is vital in B2B e-commerce systems. Structure of the localized model in this paper is based on three major layers of a B2B e-commerce infrastructure, which are messaging layer, business process layer and content layer. For each of these layers proper standards and solutions are chosen due to Iran\u2019s e-commerce requirements. As it is needed to move smoothly towards electronic documents in Iran, UNedocs standard is suggested to support the contents of both paper and electronic documents. The verification of the suggested model is done by presenting a four phase scenario through case study method. The localized model in this paper tries to make a strategic view of business documents exchange in trade processes, and getting closer to the key target of regional single windows establishment in global trade e-supply chains."} {"_id": "3ffce42ed3d7ac5963e03d4b6e32460ef5b29ff7", "title": "Object modelling by registration of multiple range images", "text": "We study the problem of creating a complete model of a physical object. Although this may be possible using intensity images, we use here range images which directly provide access t o three dimensional information. T h e first problem that we need t o solve is t o find the transformation between the different views. Previous approaches have either assumed this transformation t o be known (which is extremely difficult for a complete model), or computed it with feature matching (which is not accurate enough for integration). In this paper, we propose a new approach which works on range d a t a directly, and registers successive views with enough overlapping area t o get an accurate transformation between views. This is performed by minimizing afunctional which does not require point t o point matches. We give the details of the registration method and modeling procedure, and illustrate them on real range images of complex objects. 1 Introduction Creating models of physical objects is a necessary component machine of biological vision modules. 
Such models can then be used in object recognition, pose estimation or inspection tasks. If the object of interest has been precisely designed, then such a model exists in the form of a CAD model. In many applications, however, it is either not possible or not practical to have access to such CAD models, and we need to build models from the physical object. Some researchers bypass the problem by using a model which consists of multiple views ([4], [a]), but this is not always enough. If one needs a complete model of an object, the following steps are necessary: 1. data acquisition, 2. registration between views, 3. integration of views. By view we mean the 3D surface information of the object from a specific point of view. While the integration process is very dependent on the representation scheme used, the precondition for performing integration consists of knowing the transformation between the data from different views. The goal of registration is to find such a transformation, which is also known as the correspondence problem. This problem has been at the core of many previous research efforts: Bhanu [a] developed an object modeling system for object recognition by rotating the object through known angles to acquire multiple views. Chien et al. [3] and Ahuja and Veenstra [1] used orthogonal views to construct octree object models. With these methods, \u2026"} {"_id": "883b2b981dc04139800f30b23a91b8d27be85b65", "title": "Rigid 3D geometry matching for grasping of known objects in cluttered scenes", "text": "In this paper, we present an efficient 3D object recognition and pose estimation approach for grasping procedures in cluttered and occluded environments. In contrast to common appearance-based approaches, we rely solely on 3D geometry information. Our method is based on a robust geometric descriptor, a hashing technique and an efficient, localized RANSAC-like sampling strategy. We assume that each object is represented by a model consisting of a set of points with corresponding surface normals. Our method simultaneously recognizes multiple model instances and estimates their pose in the scene. A variety of tests shows that the proposed method performs well on noisy, cluttered and unsegmented range scans in which only small parts of the objects are visible. The main procedure of the algorithm has a linear time complexity resulting in a high recognition speed which allows a direct integration of the method into a continuous manipulation task. The experimental validation with a 7-degree-of-freedom Cartesian impedance-controlled robot shows how the method can be used for grasping objects from a complex random stack. This application demonstrates how the integration of computer vision and soft robotics leads to a robotic system capable of acting in unstructured and occluded environments."} {"_id": "9bc8aaaf23e2578c47d5d297d1e1cbb5b067ca3a", "title": "Combining Scale-Space and Similarity-Based Aspect Graphs for Fast 3D Object Recognition", "text": "This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin-picking.
A hierarchical view-based approach that addresses typical problems of previous methods is applied: It handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance, and an orientation accuracy of up to 0.35 degree in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the restriction of the pose range depending on the application. Typical runtimes are in the range of a few hundred ms."} {"_id": "dbd66f601b325404ff3cdd7b9a1a282b2da26445", "title": "T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-Less Objects", "text": "We introduce T-LESS, a new public dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects. Initial evaluation results indicate that the state of the art in 6D object pose estimation has ample room for improvement, especially in difficult cases with significant occlusion. The T-LESS dataset is available online at cmp.felk.cvut.cz/t-less."} {"_id": "a8e5c255bcc474194777f884ee4caec841269200", "title": "Secure LSB steganography for colored images using character-color mapping", "text": "Steganography is the science of embedding secret messages inside other media files in a way that completely hides the existence of the secret message. Steganography can be applied to text, audio, image, and video file types. In this study, we propose a new steganography approach for digital images in which the RGB coloring model was used. The efficiency of the proposed approach has been tested and evaluated.
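Since the character-color mapping itself is not spelled out in the steganography abstract above, the sketch below shows only the plain LSB substitution layer such schemes build on: message bits replace the least-significant bits of consecutive RGB channel values. Pixel layout and capacity handling are assumptions of this illustration.

```python
def embed_lsb(pixels, message: bytes):
    # pixels: flat sequence of 0-255 channel values (R, G, B, R, G, B, ...)
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for cover image")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit      # overwrite the least-significant bit
    return out

def extract_lsb(pixels, n_bytes: int):
    # Reassemble bytes from the least-significant bits, MSB-first per byte
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(
        sum(b << (7 - i) for i, b in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes)
    )
```

Changing only the lowest bit of each channel keeps the visual distortion below one intensity level per channel, which is why plain inspection rarely reveals the payload.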
The experimental results show that the proposed approach produces high-quality stego images that resist visual and statistical attacks."} {"_id": "ffa25b893ea72ffb077158eb750df827d154b5b6", "title": "TicTac: Accelerating Distributed Deep Learning with Communication Scheduling.", "text": "State-of-the-art deep learning systems rely on iterative distributed training to tackle the increasing complexity of models and input data. The iteration time in these communication-heavy systems depends on the computation time, communication time and the extent of overlap of computation and communication. In this work, we identify a shortcoming in systems with graph representation for computation, such as TensorFlow and PyTorch, that results in high variance in iteration time \u2014 random order of received parameters across workers. We develop a system, TicTac, to improve the iteration time by fixing this issue in distributed deep learning with Parameter Servers while guaranteeing near-optimal overlap of communication and computation. TicTac identifies and enforces an order of network transfers which improves the iteration time using prioritization. Our system is implemented over TensorFlow and requires no changes to the model or developer inputs. TicTac improves the throughput by up to 37.7% in inference and 19.2% in training, while also reducing the straggler effect by up to 2.3\u00d7. Our code is publicly available."} {"_id": "73f35cb89d62a1ee7ace46d047956a2a0353719a", "title": "An ultra-low power (ULP) zero-current-detector (ZCD) circuit for switching inductor converter applied in energy harvesting system", "text": "In this paper, we present an ultra-low power (ULP) zero-current-detector (ZCD) circuit using a sub-threshold design method and a discontinuous operating time to decrease power consumption, which can be widely used in switching inductor converters such as boost, buck or buck-boost converters. The designed ZCD circuit consumes only 40 nA quiescent current to realize nearly 70 dB DC gain and 650 kHz gain-bandwidth (GBW) when it operates. The main comparator of the ZCD circuit has high sensitivity and very low delay time to turn off the synchronous switches of the DC-DC converter on time. The average quiescent current consumption is less than 1.6 nA because the operating time is only 40 \u03bcs under the minimum switching period of 1 ms. Finally, the designed ZCD circuit is used in a boost converter for vibrational energy harvesting applications."} {"_id": "6925dc5d9aed7a3d9640179aca0355c6259db260", "title": "Text Classification Using the N-Gram Graph Representation Model Over High Frequency Data Streams", "text": "A prominent challenge in our information age is classification over high frequency data streams. In this research, we propose an innovative and highly accurate text stream classification model that is designed in an elastic distributed way and is capable of servicing text loads with fluctuating frequency. In this classification model, text is represented as N-Gram Graphs and the classification process takes place using text pre-processing, graph similarity and feature classification techniques following the supervised machine learning approach. The work involves the analysis of many variations of the proposed model and its parameters, such as various representations of text as N-Gram Graphs, graph comparison metrics and classification methods, in order to arrive at the most accurate setup.
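As a rough illustration of the N-Gram Graph representation named above: nodes are character n-grams and weighted edges record co-occurrence within a sliding window, so classification can rest on graph similarity rather than on bag-of-words features. The window size and similarity choice below are common defaults from the n-gram-graph literature, assumed rather than taken from this paper.

```python
from collections import defaultdict

def ngram_graph(text, n=3, window=3):
    # Edges connect n-grams that co-occur within `window` positions
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    edges = defaultdict(int)
    for i, g in enumerate(grams):
        for h in grams[i + 1:i + 1 + window]:
            edges[(g, h)] += 1
    return edges

def containment_similarity(g1, g2):
    # Fraction of g1's edges that also appear in g2
    return sum(1 for e in g1 if e in g2) / len(g1) if g1 else 0.0
```

A class can then be modeled as the merged graph of its training documents, with an unseen document assigned to the class whose graph it is most contained in.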
To deal with the scalability, the availability and the timely response in the case of high frequency text, we employ the Beam programming model. Using the Beam programming model, the classification process occurs as a sequence of distinct tasks and facilitates the distributed implementation of the most computationally demanding tasks of the inference stage. The proposed model and the various parameters that constitute it are evaluated experimentally, and the high frequency stream is emulated using two public datasets (20NewsGroup and Reuters-21578) that are commonly used in the literature for text classification."} {"_id": "33fcc94696ac3ba51e21d61647ff66b0c104e759", "title": "PIP Distance: A Unitary-invariant Metric for Understanding Functionality and Dimensionality of Vector Embeddings", "text": "In this paper, we present a theoretical framework for understanding vector embedding, a fundamental building block of many deep learning models, especially in NLP. We discover a natural unitary-invariance in vector embeddings, which is required by the distributional hypothesis. This unitary-invariance states the fact that two embeddings are essentially equivalent if one can be obtained from the other by performing a relative-geometry preserving transformation, for example a rotation. This idea leads to the Pairwise Inner Product (PIP) loss, a natural unitary-invariant metric for the distance between two embeddings. We demonstrate that the PIP loss captures the difference in functionality between embeddings. By formulating the embedding training process as matrix factorization under noise, we reveal a fundamental bias-variance tradeoff in dimensionality selection. With tools from perturbation and stability theory, we provide an upper bound on the PIP loss using the signal spectrum and noise variance, both of which can be readily inferred from data. Our framework sheds light on many empirical phenomena, including the existence of an optimal dimension, and the robustness of embeddings against over-parametrization. The bias-variance tradeoff of PIP loss explicitly answers the fundamental open problem of dimensionality selection for vector embeddings."} {"_id": "8405b58be29ddc4f4b57557ecf72b4acd374c9f4", "title": "Haptic Feedback in Room-Scale VR", "text": "Virtual reality (VR) is now becoming a mainstream medium. Current systems like the HTC Vive offer accurate tracking of the HMD and controllers, which allows for highly immersive interactions with the virtual environment. The interactions can be further enhanced by adding feedback. As an example, a controller can vibrate when it is close to a grabbable ball. As such interactions are not exhaustively researched, we conducted a user study. Specifically, we examine: 1. grabbing and throwing with controllers in a simple basketball game. 2. the influence of haptic and optical feedback on performance, presence, task load, and usability. 3. the advantages of VR over desktop for point-cloud editing. Several new techniques emerged from the point-cloud editor for VR. The bi-manual pinch gesture, which extends the handlebar metaphor, is a novel viewing method used to translate, rotate, and scale the point-cloud. Our new rendering technique uses the geometry shader to draw sparse point clouds quickly. The selection volumes at the controllers are our new technique to efficiently select points in point clouds. The resulting selection is visualized in real time. The results of the user study show that: 1. grabbing with a controller button is intuitive but throwing is not.
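The PIP loss named in the embedding abstract above has a direct one-line form: compare the pairwise-inner-product matrices of the two embeddings, which makes the metric invariant to any unitary transform of either embedding. A minimal numpy rendering:

```python
import numpy as np

def pip_loss(e1, e2):
    # e1, e2: (vocab, dim) embedding matrices over the same vocabulary;
    # the two dimensions may differ. E @ E.T collects all pairwise inner
    # products, so rotating an embedding leaves the loss unchanged.
    return float(np.linalg.norm(e1 @ e1.T - e2 @ e2.T, ord="fro"))
```

Because pip_loss(e1, e1 @ Q) is zero for any orthogonal Q, the metric compares what the embeddings encode rather than the arbitrary coordinate frame they are expressed in.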
Releasing a button is a bad metaphor for releasing a grabbed virtual object in order to throw it. 2. any feedback is better than none. Adding haptic, optical, or both feedback types to the grabbing improves the user performance and presence. However, only sub-scores like accuracy and predictability are significantly improved. Usability and task load are mostly unaffected by feedback. 3. the point-cloud editing is significantly better in VR with the bi-manual pinch gesture and selection volumes than on the desktop with the orbiting camera and lasso selections."} {"_id": "d3c86e4f26b379a091cb915c73480d8db5d8fd9e", "title": "Strong intuitionistic fuzzy graphs", "text": "We introduce the notion of strong intuitionistic fuzzy graphs and investigate some of their properties. We discuss some propositions of self complementary and self weak complementary strong intuitionistic fuzzy graphs. We introduce the concept of intuitionistic fuzzy line graphs."} {"_id": "4a13a1e30069148bf24bdd697ecc139a82f3e19a", "title": "Analysis of packet loss and delay variation on QoE for H.264 and WebM/VP8 Codecs", "text": "The popularity of multimedia services over the Internet has increased in recent years. These services include Video on Demand (VoD) and mobile TV, which are growing rapidly, and user expectations of video quality are gradually increasing. Different video codecs are used for encoding and decoding. Recently Google has introduced the VP8 codec, which is an open source compression format. It was introduced to compete with the existing popular codec H.264/AVC, developed by the ITU-T Video Coding Expert Group (VCEG), as by 2016 there will be a license fee for H.264. In this work we compare the performance of H.264/AVC and WebM/VP8 in an emulated environment. NetEm is used as an emulator to introduce delay/delay variation and packet loss. We have evaluated the user perception of impaired videos using the Mean Opinion Score (MOS) by following the International Telecommunication Union (ITU) Recommendation Absolute Category Rating (ACR) and analyzed the results using statistical methods. It was found that both video codecs exhibit similar performance under packet loss, but in the case of delay variation the H.264 codec shows better results when compared to WebM/VP8. Moreover, along with the MOS ratings, we also studied how user feelings and online video watching experience impact perception."} {"_id": "d54672065689c9128354d54aac722bbdf450e406", "title": "3D depth estimation from a holoscopic 3D image", "text": "This paper presents an innovative technique for 3D depth estimation from a single holoscopic 3D image (H3D). The image is captured with a single-aperture holoscopic 3D camera, which mimics a fly's eye technique to acquire an optical 3D model of a true 3D scene. The proposed method works by extracting optimal viewpoint images from an H3D image, using the shift and integration function to up-sample the extracted viewpoints, and then performing block matching to match corresponding features between the stereoscopic 3D images. Finally, the 3D depth is estimated through a smoothing and optimizing process to produce a final 3D depth map.
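For the block-matching step just described, a brute-force sum-of-absolute-differences search along one axis is the simplest workable form; the block size and disparity range below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def sad_disparity(left, right, block=8, max_disp=32):
    # Brute-force SAD block matching between two grayscale viewpoint images
    h, w = left.shape
    disp = np.zeros((h - block, w - block), dtype=np.int32)
    for y in range(h - block):
        for x in range(w - block):
            patch = left[y:y + block, x:x + block].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The raw disparity map from such a search is noisy, which is why a smoothing and optimization pass, as the abstract describes, is applied before the final depth map is produced.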
The proposed method estimates the full 3D depth information from a single H3D image, which makes it reliable and suitable for emerging autonomous robotic applications."} {"_id": "e3ab8a2d775e74641b4612fac4f33b5b1895d8ea", "title": "User practice in password security: An empirical study of real-life passwords in the wild", "text": "Due to the public's increasing awareness of password security and the little attention paid to the characteristics of real-life passwords, it is natural to examine the current state of real-life password characteristics, and to explore how password characteristics change over time and how earlier password practice is understood in the current context. In this work, we attempt to present an in-depth and comprehensive understanding of user practice in real-life passwords, and to see whether the previous observations can be confirmed or reversed, based on large-scale measurements rather than anecdotal experiences or user surveys. Specifically, we measure password characteristics on over 6 million passwords, in terms of password length, password composition, and password selection. We then make informed comparisons of the findings between our investigation and previously reported results. Our general findings include: (1) average password length is at least 12% longer than previous results, and 85% of our passwords are between 8 and 11 characters long;"} {"_id": "df45ccf1189bc212cacb7e12c796f9d16bbeb4bf", "title": "WAVELET BASED WATERMARKING ON DIGITAL IMAGE", "text": "Safeguarding creative content and intellectual property in a digital form has become increasingly difficult as technologies, such as the internet, broadband availability and mobile access advance. It has grown to be progressively easier to copy, modify and redistribute digital media, resulting in great declines in business profits. Digital watermarking is a technique which has been proposed as a possible solution to this problem. Digital watermarking is a technology used to identify the creator, owner, or distributor of a given video or image by embedding copyright marks into the digital content; hence digital watermarking is a powerful tool used to check copyright violation. In this paper a robust watermarking technique based on DWT (Discrete Wavelet Transform) is presented. In this technique the insertion and extraction of the watermark in the grayscale image is found to be simpler than in other transform techniques. This paper explains the digital watermarking technique on digital images based on the discrete wavelet transform by analyzing various values of PSNR\u2019s and MSE\u2019s."} {"_id": "095836eda402aae1cf22e515fef662dc5a1bb2cb", "title": "On the robustness of power systems: Optimal load-capacity distributions and hardness of attacking", "text": "We consider a power system with N transmission lines whose initial loads (i.e., power flows) L1,\u2026, LN and capacities C1,\u2026, CN are independent and identically distributed with the joint distribution PLC(x, y) = P [L \u2264 x, C \u2264 y]; the capacity Ci defines the maximum flow allowed on line i. We survey some results on the robustness of this power system against random attacks (or, failures) that target a p-fraction of the lines, under a democratic fiber bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines.
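The equal-redistribution rule above is easy to simulate, and a small Monte Carlo run makes the cascade dynamics visible. The sketch below attacks a p-fraction of lines, redistributes the shed load uniformly, and iterates until no further line is overloaded; the load and capacity inputs are placeholders for whatever distribution PLC one wishes to study.

```python
import numpy as np

def surviving_lines(loads, caps, p, seed=0):
    rng = np.random.default_rng(seed)
    n = len(loads)
    alive = np.ones(n, dtype=bool)
    attacked = rng.choice(n, size=int(p * n), replace=False)
    alive[attacked] = False
    shed = loads[attacked].sum()              # load to redistribute
    while True:
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            return 0                          # total system failure
        extra = shed / idx.size               # equal share per live line
        failed = idx[loads[idx] + extra > caps[idx]]
        if failed.size == 0:
            return idx.size                   # cascade has stopped
        alive[failed] = False
        shed += loads[failed].sum()           # their nominal load is shed too
```

For example, with loads uniform on [0, 1] and caps = 1.5 * loads, sweeping p reveals the abrupt breakdown threshold that makes the load-capacity distribution question above interesting.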
We then consider the case where an adversary can launch a targeted attack, and present several results on the hardness of attacking optimally."} {"_id": "44638b2ab0c1f926b0d79dda8d3508f00b979be9", "title": "Process mining: a two-step approach to balance between underfitting and overfitting", "text": "Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such \u201coverfitting\u201d by generalizing the model to allow for more behavior. This generalization is often driven by the representation language and very crude assumptions about completeness. As a result, parts of the model are \u201coverfitting\u201d (allow only for what has actually been observed) while other parts may be \u201cunderfitting\u201d (allow for much more behavior without strong support for it). None of the existing techniques enables the user to control the balance between \u201coverfitting\u201d and \u201cunderfitting\u201d. To address this, we propose a two-step approach. First, using a configurable approach, a transition system is constructed. Then, using the \u201ctheory of regions\u201d, the model is synthesized. The approach has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches."} {"_id": "55b0e1c5e1ef6060b9fdcb5f644b01d89afe5b27", "title": "A Framework for Clustering Massive Text and Categorical Data Streams", "text": "Many applications such as news group filtering, text crawling, and document organization require real time clustering and segmentation of text data records. The categorical data stream clustering problem also has a number of applications to the problems of customer segmentation and real time trend analysis. We will present an online approach for clustering massive text and categorical data streams with the use of a statistical summarization methodology. We present results illustrating the effectiveness of the technique."} {"_id": "7adc7cbdaf862d23b34e973b05dcbae8ede47bb1", "title": "Build watson: An overview of DeepQA for the Jeopardy! Challenge", "text": "Computer systems that can directly and accurately answer people's questions over a broad domain of human knowledge have been envisioned by scientists and writers since the advent of computers themselves. Open domain question answering holds tremendous promise for facilitating informed decision making over vast volumes of natural language content. Applications in business intelligence, healthcare, customer support, enterprise knowledge management, social computing, science and government would all benefit from deep language processing. The DeepQA project (www.ibm.com/deepqa) is aimed at illustrating how the advancement and integration of Natural Language Processing (NLP), Information Retrieval (IR), Machine Learning (ML), massively parallel computation and Knowledge Representation and Reasoning (KR&R) can greatly advance open-domain automatic Question Answering. An exciting proof-point in this challenge is to develop a computer system that can successfully compete against top human players at the Jeopardy!
quiz show (www.jeopardy.com). Attaining champion-level performance at Jeopardy! requires a computer to rapidly answer rich open-domain questions, and to predict its own performance on any given category/question. The system must deliver high degrees of precision and confidence over a very broad range of knowledge and natural language content and with a 3-second response time. To do this, DeepQA generates, evidences, and evaluates many competing hypotheses. A key to success is automatically learning and combining accurate confidences across an array of complex algorithms and over different dimensions of evidence. Accurate confidences are needed to know when to \"buzz in\" against your competitors and how much to bet. While critical for winning at Jeopardy!, high precision and accurate confidence computations are just as critical for providing real value in business settings where helping users focus on the right content sooner and with greater confidence can make all the difference. The need for speed and high precision demands a massively parallel compute platform capable of generating, evaluating and combining thousands of hypotheses and their associated evidence. In this talk I will introduce the audience to the Jeopardy! Challenge and describe our technical approach and our progress on this grand-challenge problem."} {"_id": "61f064c24cb9776a408b8bf4cfbcb5a5105ac31a", "title": "Multi-type attributes driven multi-camera person re-identification", "text": "One of the major challenges in person Re-Identification (ReID) is the inconsistent visual appearance of a person. Current works on visual feature and distance metric learning have made significant progress, but still suffer from limited robustness to pose variations, viewpoint changes, etc., and high computational complexity. This makes person ReID across multiple cameras still challenging. This work is motivated to learn mid-level human attributes which are robust to visual appearance variations and could be used as efficient features for person matching. We propose a weakly supervised multi-type attribute learning framework which considers the contextual cues among attributes and progressively boosts the accuracy of attributes only using a limited number of labeled data. Specifically, this framework involves a three-stage training. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. Then it is fine-tuned on another dataset only labeled with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. The predicted attributes, namely deep attributes, exhibit promising generalization ability across different datasets. By directly using the deep attributes with simple Cosine distance, we have obtained competitive accuracy on four person ReID datasets. Experiments also show that a simple distance metric learning modular further boosts our method, making it outperform many recent approaches."} {"_id": "49c5b67d22a3d8929913efeb3621fb7f5c9160ef", "title": "Fractional Order Fuzzy Control of Hybrid Power System with Renewable Generation Using Chaotic PSO", "text": "This paper investigates the operation of a hybrid power system through a novel fuzzy control scheme.
The hybrid power system employs various autonomous generation systems like wind turbine, solar photovoltaic, diesel engine, fuel-cell, and aqua electrolyzer. Other energy storage devices like the battery, flywheel and ultra-capacitor are also present in the network. A novel fractional order (FO) fuzzy control scheme is employed and its parameters are tuned with a particle swarm optimization (PSO) algorithm augmented with two chaotic maps to achieve improved performance. This FO fuzzy controller shows better performance than both the classical PID and the integer-order fuzzy PID controllers in linear and nonlinear operating regimes. The FO fuzzy controller also shows stronger robustness against system parameter variation and rate-constraint nonlinearity than the other controller structures. Robustness is highly desirable in such a scenario, since many components of the hybrid power system may be switched on/off or may run at lower/higher power output at different time instants."} {"_id": "740ffcfc4607b60d55a0d6108ca6ee3b4b5b0f16", "title": "Image Enhancement Network Trained by Using HDR images", "text": "In this paper, a novel image enhancement network is proposed, where HDR images are used for generating training data for our network. Most conventional image enhancement methods, including Retinex-based methods, do not consider restoring pixel values lost to clipping and quantization. In addition, recently proposed CNN-based methods still have a limited scope of application or limited performance, due to their network architectures. In contrast, the proposed method has higher performance and a simpler network architecture than existing CNN-based methods. Moreover, the proposed method enables us to restore lost pixel values. Experimental results show that the proposed method can provide higher-quality images than conventional image enhancement methods, including a CNN-based method, in terms of TMQI and NIQE."} {"_id": "87822f10ef2f797c65eba3185a81cd541f8dad6b", "title": "Topographical and temporal diversity of the human skin microbiome.", "text": "Human skin is a large, heterogeneous organ that protects the body from pathogens while sustaining microorganisms that influence human health and disease. Our analysis of 16S ribosomal RNA gene sequences obtained from 20 distinct skin sites of healthy humans revealed that physiologically comparable sites harbor similar bacterial communities. The complexity and stability of the microbial community are dependent on the specific characteristics of the skin site. This topographical and temporal survey provides a baseline for studies that examine the role of bacterial communities in disease states and the microbial interdependencies required to maintain healthy skin."} {"_id": "e11086c453c6be433084a45d1cc2aeb403d73f23", "title": "A Formal Analysis of the ISO 9241-210 Definition of User Experience", "text": "User Experience (UX) is a major concept in HCI and a variety of different UX definitions have been suggested within the scientific community. An ISO UX definition has been presented to standardize the term from an industry perspective. We introduce methods from formal logic in order to formalize and analyze the ISO UX definition with regard to consistency and ambiguities and present recommendations for an improved version.
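A minimal sketch of the chaos-augmented PSO tuning described in the fractional-order fuzzy control abstract above: the stochastic factors of the velocity update are drawn from logistic chaotic maps instead of a uniform RNG. The cost function, bounds, and coefficient values are illustrative assumptions; the paper tunes FO fuzzy controller parameters against a plant model.

```python
import numpy as np

def chaotic_pso(cost, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """PSO whose stochastic factors come from two logistic chaotic maps
    (z <- 4z(1-z)) rather than a uniform RNG -- one simple way to realise
    the two-chaotic-map augmentation described above."""
    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, (n, dim))     # particle positions
    v = np.zeros((n, dim))
    z1, z2 = 0.7, 0.3                        # chaotic map states
    pbest = x.copy()
    pcost = np.apply_along_axis(cost, 1, x)
    g = pbest[pcost.argmin()].copy()         # global best
    for _ in range(iters):
        z1, z2 = 4 * z1 * (1 - z1), 4 * z2 * (1 - z2)
        v = w * v + c1 * z1 * (pbest - x) + c2 * z2 * (g - x)
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g

# e.g. chaotic_pso(lambda p: float(np.sum(p**2)), dim=4) tunes a 4-parameter
# controller against a stand-in quadratic cost.
```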
Although this kind of formalization is not common within the CHI community, we show that quasi-formal methods provide an alternative way of analyzing widely discussed HCI terms, such as UX, and of deepening their understanding."} {"_id": "b88678762b44ea53db95365fd62e6f0426b195a8", "title": "THE IMPACT OF CHANGE MANAGEMENT IN ERP SYSTEM : A CASE STUDY OF MADAR", "text": "This paper discusses Enterprise Resource Planning (ERP) change management. A review of the research literature is presented that focuses on ERP change management factors. The literature is further classified and the major outcomes of each study are addressed in this paper. The discussion is supported by a practical ERP project called Madar. The paper investigates and identifies the reasons for resistance to ERP diffusion and why individuals within an organization resist change. It also suggests strategies to minimize, if not completely overcome, this resistance."} {"_id": "bd7eb1e0dfb280ffcccbb07c9c5f6bfc3b79f6c3", "title": "Walking to public transit: steps to help meet physical activity recommendations.", "text": "BACKGROUND\nNearly half of Americans do not meet the Surgeon General's recommendation of > or =30 minutes of physical activity daily. Some transit users may achieve 30 minutes of physical activity daily solely by walking to and from transit. This study estimates the total daily time spent walking to and from transit and the predictors of achieving 30 minutes of physical activity daily by doing so.\n\n\nMETHODS\nTransit-associated walking times for 3312 transit users were examined among the 105,942 adult respondents to the 2001 National Household Travel Survey, a telephone-based survey sponsored by the U.S. Department of Transportation to assess American travel behavior.\n\n\nRESULTS\nAmericans who use transit spend a median of 19 minutes daily walking to and from transit; 29% achieve > or =30 minutes of physical activity a day solely by walking to and from transit. In multivariate analysis, rail users, minorities, people in households earning <$15,000 a year, and people in high-density urban areas were more likely to spend > or =30 minutes walking to and from transit daily.\n\n\nCONCLUSIONS\nWalking to and from public transportation can help physically inactive populations, especially low-income and minority groups, attain the recommended level of daily physical activity. Increased access to public transit may help promote and maintain active lifestyles. Results from this study may contribute to health impact assessment studies (HIA) that evaluate the impact of proposed public transit systems on physical activity levels, and thereby may influence choices made by transportation planners."} {"_id": "47165ad181b3da3a615e6e8bc8509fcfc53f49c4", "title": "Cross-Modal Correlation Learning by Adaptive Hierarchical Semantic Aggregation", "text": "With the explosive growth of web data, effective and efficient technologies are urgently needed for retrieving semantically relevant contents of heterogeneous modalities. Previous studies devote efforts to modeling simple cross-modal statistical dependencies, and globally projecting the heterogeneous modalities into a measurable subspace. However, global projections cannot appropriately adapt to diverse contents, and the naturally existing multilevel semantic relation in web data is ignored. We study the problem of semantic coherent retrieval, where documents from different modalities should be ranked by the semantic relevance to the query.
Accordingly, we propose TINA, a correlation learning method based on adaptive hierarchical semantic aggregation. First, by joint modeling of content and ontology similarities, we build a semantic hierarchy to measure multilevel semantic relevance. Second, with a set of local linear projections and probabilistic membership functions, we propose two paradigms for local expert aggregation, i.e., local projection aggregation and local distance aggregation. To learn the cross-modal projections, we optimize the structural risk objective function that involves semantic coherence measurement, local projection consistency, and the complexity penalty of local projections. Compared to existing approaches, a better bias-variance tradeoff is achieved by TINA in real-world cross-modal correlation learning tasks. Extensive experiments on the widely used NUS-WIDE and ICML-Challenge datasets for image-text retrieval demonstrate that TINA better adapts to the multilevel semantic relation and content divergence, and, thus, outperforms the state of the art with better semantic coherence."} {"_id": "74257c2a5c9633565c3becdb9139789bcf14b478", "title": "IT Control in the Australian Public Sector: An International Comparison", "text": "Despite widespread adoption of IT control frameworks, little academic empirical research has been undertaken to investigate their use. This paper reports upon research to benchmark the maturity levels of 15 key IT control processes from the Control Objectives for Information and Related Technology (COBIT) in public sector organisations across Australia. It also makes a comparison against a similar benchmark for a mixed sector group from a range of nations, a mixed sector group from Asian-Oceanic nations, and for public sector organisations for all geographic areas. The Australian data were collected in a mail survey of the 387 non-financial public sector organisations identified as having more than 50 employees, which returned a 27% response rate. Patterns seen in the original international survey undertaken by the IS Audit and Control Association in 2002 were also seen in the Australian data. However, the Australian public sector performed better than sectors in all the international benchmarks for the 15 most important IT processes."} {"_id": "9583ac53a19cdf0db81fef6eb0b63e66adbe2324", "title": "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent", "text": "We study the resilience to Byzantine failures of distributed implementations of Stochastic Gradient Descent (SGD). So far, distributed machine learning frameworks have largely ignored the possibility of failures, especially arbitrary (i.e., Byzantine) ones. Causes of failures include software bugs, network asynchrony, biases in local datasets, as well as attackers trying to compromise the entire system. Assuming a set of n workers, up to f being Byzantine, we ask how resilient SGD can be, without limiting the dimension or the size of the parameter space. We first show that no gradient aggregation rule based on a linear combination of the vectors proposed by the workers (i.e., current approaches) tolerates a single Byzantine failure. We then formulate a resilience property of the aggregation rule capturing the basic requirements to guarantee convergence despite f Byzantine workers. We propose Krum, an aggregation rule that satisfies our resilience property, which we argue is the first provably Byzantine-resilient algorithm for distributed SGD.
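A minimal sketch of the Krum aggregation rule just described: each worker's vector is scored by the summed squared distance to its n − f − 2 closest peers, and the lowest-scoring vector is selected. The numpy formulation and shapes are illustrative; the scoring rule itself follows the abstract.

```python
import numpy as np

def krum(gradients, f):
    """Select the worker gradient whose summed squared distance to its
    n - f - 2 nearest peer gradients is smallest (requires n >= 2f + 3)."""
    g = np.asarray(gradients, dtype=float)               # shape (n, d)
    n = len(g)
    d2 = ((g[:, None, :] - g[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    scores = []
    for i in range(n):
        nearest = np.sort(np.delete(d2[i], i))[: n - f - 2]
        scores.append(nearest.sum())
    return g[int(np.argmin(scores))]
```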
We also report on experimental evaluations of Krum."} {"_id": "0760550d3830230a05191766c635cec80a676b7e", "title": "Deep learning with Elastic Averaging SGD", "text": "We study the problem of stochastic optimization for deep learning in the parallel computing environment under communication constraints. A new algorithm is proposed in this setting where the communication and coordination of work among concurrent processes (local workers) is based on an elastic force which links the parameters they compute with a center variable stored by the parameter server (master). The algorithm enables the local workers to perform more exploration, i.e. the algorithm allows the local variables to fluctuate further from the center variable by reducing the amount of communication between local workers and the master. We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to improved performance. We propose synchronous and asynchronous variants of the new algorithm. We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized method ADMM. We show that the stability of EASGD is guaranteed when a simple stability condition is satisfied, which is not the case for ADMM. We additionally propose the momentum-based version of our algorithm that can be applied in both synchronous and asynchronous settings. The asynchronous variant of the algorithm is applied to train convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Experiments demonstrate that the new algorithm accelerates the training of deep architectures compared to DOWNPOUR and other common baseline approaches and furthermore is very communication efficient."} {"_id": "0e9bac6a2b51e93e73f7f5045d4252972db10b5a", "title": "Large-scale matrix factorization with distributed stochastic gradient descent", "text": "We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm. We first develop a novel \"stratified\" SGD variant (SSGD) that applies to general loss-minimization problems in which the loss function can be expressed as a weighted sum of \"stratum losses.\" We establish sufficient conditions for convergence of SSGD using results from stochastic approximation theory and regenerative process theory. We then specialize SSGD to obtain a new matrix-factorization algorithm, called DSGD, that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. DSGD can handle a wide variety of matrix factorizations. We describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms."} {"_id": "1109b663453e78a59e4f66446d71720ac58cec25", "title": "OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks", "text": "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries.
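A minimal sketch of the elastic-force update at the heart of the EASGD abstract above: each local worker takes a gradient step plus a pull toward the center variable, and the center drifts toward the workers. The learning rate and elastic coefficient are illustrative assumptions.

```python
import numpy as np

def easgd_round(xs, center, grads, eta=0.05, rho=1.0):
    """One synchronous EASGD round: local parameters are pulled toward the
    center with elastic coefficient alpha = eta * rho, and the center moves
    toward the (pre-update) local parameters."""
    alpha = eta * rho
    diffs = [x - center for x in xs]                  # elastic displacements
    new_xs = [x - eta * g - alpha * d
              for x, g, d in zip(xs, grads, diffs)]   # local worker updates
    new_center = center + alpha * sum(diffs)          # center update
    return new_xs, new_center
```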
Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat."} {"_id": "50629d7d6afd7577ccfd92b35c7e15f79ad4b180", "title": "ON MILLIMETER-WAVE IMAGING OF CONCEALED OBJECTS : APPLICATION USING BACK-PROJECTION ALGORITHM", "text": "Millimeter-wave (MMW) imaging is a powerful tool for the detection of objects concealed under clothing. Several factors, including the kind of object, the variety and thickness of the covering materials, and the accuracy of near-field scattered-data imaging, affect the success of detection. To address these considerations, this paper presents two-dimensional (2D) images of different targets hidden under various fabric sheets. The W-band inverse synthetic aperture radar (ISAR) data of various target-covering situations are acquired and imaged by applying both the focusing operator based inversion algorithm and the spherical back-projection algorithm. Results of these algorithms are demonstrated and compared to each other to assess the performance of MMW imaging in detecting concealed objects of both metallic and dielectric types."} {"_id": "34e4016ad6362dc1f85c4a755b2b47c4795fbe4d", "title": "A study of prefix hijacking and interception in the internet", "text": "There have been many incidents of prefix hijacking in the Internet. The hijacking AS can blackhole the hijacked traffic. Alternatively, it can transparently intercept the hijacked traffic by forwarding it onto the owner. This paper presents a study of such prefix hijacking and interception with the following contributions: (1). We present a methodology for prefix interception, (2). We estimate the fraction of traffic to any prefix that can be hijacked and intercepted in the Internet today, (3). The interception methodology is implemented and used to intercept real traffic to our prefix, (4). We conduct a detailed study to detect ongoing prefix interception.\n We find that: Our hijacking estimates are in line with the impact of past hijacking incidents and show that ASes higher up in the routing hierarchy can hijack a significant amount of traffic to any prefix, including popular prefixes. A less apparent result is that the same holds for prefix interception too. Further, our implementation shows that intercepting traffic to a prefix in the Internet is almost as simple as hijacking it. Finally, while we fail to detect ongoing prefix interception, the detection exercise highlights some of the challenges posed by the prefix interception problem."} {"_id": "5ef3fe3b0e2c8e5f06c0bf45e503d18371b31946", "title": "Development of a Dynamic Simulation Tool for the ExoMars Rover", "text": "Future planetary missions, including the 2011 European Space Agency (ESA) ExoMars mission, will require rovers to travel further, faster, and over more demanding terrain than has been encountered to date. To improve overall mobility, advances need to be made in autonomous navigation, power collection, and locomotion.
In this paper we focus on the locomotion problem and discuss the development of a planetary rover chassis simulation tool that allows us to study key locomotion test cases such as slope climbing in loose soil. We have also constructed rover wheels and obtained experimental data to validate the wheel-soil interaction module. The main conclusion is that to fully validate such a complex simulation, experimental data from a full rover chassis is required. This is a first step in an on-going effort to validate the simulation with experimental data obtained from a full rover prototype."} {"_id": "5cbed8c666ab7cb0836c753877b867d9ee0b14dd", "title": "Accurate Angle Estimator for High-Frame-Rate 2-D Vector Flow Imaging", "text": "This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360\u00b0 range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, uniform behavior of the angle estimation is observed over the full 360\u00b0 range, with a median angle bias of 1.01\u00b0 and a median angle SD of 1.8\u00b0. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5\u00b0 with SDs around 1\u00b0. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles."} {"_id": "5254a6bc467b49f27c00f4654d03dc5d69d9d38d", "title": "Predictive Data Mining for Diagnosis of Thyroid Disease using Neural Network", "text": "This paper presents a systematic approach for early diagnosis of thyroid disease using the back-propagation algorithm in a neural network. The back-propagation algorithm is widely used in this field. An ANN based on back-propagation of error has been developed for early prediction of disease. The ANN was subsequently trained with experimental data, and testing was carried out using data that was not used during the training process. Results show that the outcome of the ANN is in good agreement with experimental data; this indicates that the developed neural network can be used as an alternative for early prediction of disease. Keywords\u2014Back propagation, decision tree, gradient descent, prediction, Supervised Learning"} {"_id": "62b45eb6deedc89f3d3ef428aa6be2c4bba4eeb4", "title": "A Robust Single-Phase PLL System With Stable and Fast Tracking", "text": "Phase, frequency, and amplitude of single-phase voltages are the most important and basic information required for single-phase grid-connected applications. This paper proposes a method for instantly and robustly estimating the phase, frequency, and amplitude of frequency-varying single-phase signals for such applications, which is a phase-locked loop (PLL) method based on a structure.
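A minimal numpy sketch of the back-propagation training loop on which the thyroid-diagnosis abstract above relies. The one-hidden-layer architecture, learning rate, and squared-error gradient are illustrative assumptions; the paper does not publish its exact network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """Gradient-descent back-propagation for a one-hidden-layer classifier;
    X is (n, d) features, y is (n, 1) binary disease labels."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                  # forward pass
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1 - out)  # backprop of squared error
        dW2 = h.T @ d_out
        d_h = (d_out @ W2.T) * h * (1 - h)
        dW1 = X.T @ d_h
        W1 -= lr * dW1 / len(X)              # gradient-descent step
        W2 -= lr * dW2 / len(X)
    return W1, W2
```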
The proposed method has the following attractive features: 1) the estimation system results in a nonlinear system, but it can be stabilized; 2) all subsystems constructing the system can be easily designed; 3) the \"two-phase signal generator\" can autotune a single system parameter in response to the varying frequency of the injected signal; 4) high-order \"PLL-Controllers\" allowing fast tracking can be stably used; and 5) even in hostile environments, favorable instant estimates can be obtained. This paper proposes the estimation method and verifies its usefulness by extensive numerical experiments."} {"_id": "c7d05ca5bddd91d1d929aa95b45c0b6b29ec3c11", "title": "MixedFusion: Real-Time Reconstruction of an Indoor Scene with Dynamic Objects", "text": "Real-time indoor scene reconstruction aims to recover the 3D geometry of an indoor scene in real time with a sensor scanning the scene. Previous works on this topic consider purely static scenes, but in this paper, we focus on the more challenging case where the scene contains dynamic objects, for example, moving people and floating curtains, which are quite common in reality and thus need to be handled. We develop an end-to-end system using a depth sensor to scan a scene on the fly. By proposing a Sigmoid-based Iterative Closest Point (S-ICP) method, we decouple the camera motion and the scene motion from the input sequence and segment the scene into static and dynamic parts accordingly. The static part is used to estimate the camera rigid motion, while for the dynamic part, graph node-based motion representation and model-to-depth fitting are applied to reconstruct the scene motions. With the camera and scene motions reconstructed, we further propose a novel mixed voxel allocation scheme to handle static and dynamic scene parts with different mechanisms, which helps to gradually fuse a large scene with both static and dynamic objects. Experiments show that our technique successfully fuses the geometry of both the static and dynamic objects in a scene in real time, which extends the usage of the current techniques for indoor scene reconstruction."} {"_id": "ab577cd7b229744d6a153895cbe73e9f0480e631", "title": "Faceted Search", "text": "We live in an information age that requires us, more than ever, to represent, access, and use information. Over the last several decades, we have developed a modern science and technology for information retrieval, relentlessly pursuing the vision of a \u201cmemex\u201d that Vannevar Bush proposed in his seminal article, \u201cAs We May Think.\u201d Faceted search plays a key role in this program. Faceted search addresses weaknesses of conventional search approaches and has emerged as a foundation for interactive information retrieval. User studies demonstrate that faceted search provides more effective information-seeking support to users than best-first search. Indeed, faceted search has become increasingly prevalent in online information access systems, particularly for e-commerce and site search. In this lecture, we explore the history, theory, and practice of faceted search. Although we cannot hope to be exhaustive, our aim is to provide sufficient depth and breadth to offer a useful resource to both researchers and practitioners. Because faceted search is an area of interest to computer scientists, information scientists, interface designers, and usability researchers, we do not assume that the reader is a specialist in any of these fields.
Rather, we offer a self-contained treatment of the topic, with an extensive bibliography for those who would like to pursue particular aspects in more depth."} {"_id": "b3f1d0a8b2ea51baa44461bddd627cf316628b7e", "title": "Classifying distinct basal cell carcinoma subtype by\u00a0means of dermatoscopy and reflectance confocal microscopy.", "text": "BACKGROUND\nThe current guidelines for the management of basal cell carcinoma (BCC) suggest a different therapeutic approach according to histopathologic subtype. Although dermatoscopic and confocal criteria of BCC have been investigated, no specific studies were performed to evaluate the distinct reflectance confocal microscopy (RCM) aspects of BCC subtypes.\n\n\nOBJECTIVES\nTo define the specific dermatoscopic and confocal criteria for delineating different BCC subtypes.\n\n\nMETHODS\nDermatoscopic and confocal images of histopathologically confirmed BCCs were retrospectively evaluated for the presence of predefined criteria. Frequencies of dermatoscopic and confocal parameters are provided. Univariate and adjusted odds ratios were calculated. Discriminant analyses were performed to define the independent confocal criteria for distinct BCC subtypes.\n\n\nRESULTS\nEighty-eight BCCs were included. Dermatoscopically, superficial BCCs (n=44) were primarily typified by the presence of fine telangiectasia, multiple erosions, leaf-like structures, and revealed cords connected to the epidermis and epidermal streaming upon RCM. Nodular BCCs (n=22) featured the classic dermatoscopic features and well-outlined large basaloid islands upon RCM. Infiltrative BCCs (n=22) featured structureless, shiny red areas, fine telangiectasia, and arborizing vessels on dermatoscopy and dark silhouettes upon RCM.\n\n\nLIMITATIONS\nThe retrospective design.\n\n\nCONCLUSION\nDermatoscopy and confocal microscopy can reliably classify different BCC subtypes."} {"_id": "062c1c1b3e280353242dd2fb3c46178b87cb5e46", "title": "Fitted Natural Actor-Critic: A New Algorithm for Continuous State-Action MDPs", "text": "In this paper we address reinforcement learning problems with continuous state-action spaces. We propose a new algorithm, fitted natural actor-critic (FNAC), that extends the work in [1] to allow for general function approximation and data reuse. We combine the natural actor-critic architecture [1] with a variant of fitted value iteration using importance sampling. The method thus obtained combines the appealing features of both approaches while overcoming their main weaknesses: the use of a gradient-based actor readily overcomes the difficulties found in regression methods with policy optimization in continuous action-spaces; in turn, the use of a regression-based critic allows for efficient use of data and avoids convergence problems that TD-based critics often exhibit. We establish the convergence of our algorithm and illustrate its application in a simple continuous space, continuous action problem."} {"_id": "087350c680e791093624be7d1094c4cef73c2214", "title": "Learning Discriminative Affine Regions via", "text": "We present an accurate method for estimation of the affine shape of local features. The method is trained in a novel way, exploiting the recently proposed HardNet triplet loss. The loss function is driven by patch descriptor differences, avoiding problems with symmetries. Moreover, such a training process does not require precisely geometrically aligned patches. The affine shape is represented in a way amenable to learning by stochastic gradient descent.
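A minimal sketch of the HardNet-style hardest-in-batch triplet margin loss that the abstract above says drives AffNet training. The batch layout (matching descriptor pairs on the diagonal) and the margin value are illustrative assumptions.

```python
import numpy as np

def hardnet_triplet_loss(anchors, positives, margin=1.0):
    """anchors[i] and positives[i] are descriptors of the same patch; the
    negative for pair i is the closest non-matching descriptor in the batch."""
    d = np.linalg.norm(anchors[:, None, :] - positives[None, :, :], axis=-1)
    pos = np.diag(d)                          # distances to the true matches
    masked = d + np.eye(len(d)) * 1e6         # exclude the positive pairs
    hardest_neg = np.minimum(masked.min(0), masked.min(1))
    return np.maximum(0.0, margin + pos - hardest_neg).mean()
```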
When plugged into a state-of-the-art wide baseline matching algorithm, the performance on standard datasets improves in both the number of challenging pairs matched and the number of inliers. Finally, AffNet, in combination with the Hessian detector and the HardNet descriptor, improves the bag-of-visual-words-based state of the art on Oxford5k and Paris6k by a large margin: 4.5 and 4.2 mAP points, respectively. The source code and trained networks are available at https://github.com/ducha-aiki/affnet"} {"_id": "81f40dec4eb9ba43fe0d55b9accb75626c9745e4", "title": "Evaluating the Effectiveness of Serious Games for Cultural Awareness: The Icura User Study", "text": "There is an increasing awareness of the potential of serious games for education and training in many disciplines. However, research still witnesses a lack of methodologies, guidelines and best practices on how to develop effective serious games and how to integrate them into the actual learning and training processes. This process of integration heavily depends on providing and spreading evidence of the effectiveness of serious games. This paper reports a user study to evaluate the effectiveness of Icura, a serious game about Japanese culture and etiquette. The evaluation methodology extends the set of instruments used in previous studies by evaluating the effects of the game on raising awareness, by avoiding the selective attention bias and by assessing the medium-term retention. With this research we aim to provide a handy toolkit for evaluating the effectiveness of serious games for cultural awareness and heritage."} {"_id": "f97f0902698abff8a2bc3488e8cca223e5c357a1", "title": "Feature selection via sensitivity analysis of SVM probabilistic outputs", "text": "Feature selection is an important aspect of solving data-mining and machine-learning problems. This paper proposes a feature-selection method for Support Vector Machine (SVM) learning. Like most feature-selection methods, the proposed method ranks all features in decreasing order of importance so that more relevant features can be identified. It uses a novel criterion based on the probabilistic outputs of SVM. This criterion, termed Feature-based Sensitivity of Posterior Probabilities (FSPP), evaluates the importance of a specific feature by computing the aggregate value, over the feature space, of the absolute difference of the probabilistic outputs of SVM with and without the feature. The exact form of this criterion is not easily computable and approximation is needed. Four approximations, FSPP1-FSPP4, are proposed for this purpose. The first two approximations evaluate the criterion by randomly permuting the values of the feature among samples of the training data. They differ in their choices of the mapping function from standard SVM output to its probabilistic output: FSPP1 uses a simple threshold function while FSPP2 uses a sigmoid function. The second two directly approximate the criterion but differ in the smoothness assumptions of the criterion with respect to the features. The performance of these approximations, used in an overall feature-selection scheme, is then evaluated on various artificial problems and real-world problems, including datasets from the recent Neural Information Processing Systems (NIPS) feature selection competition. FSPP1-3 consistently show good performance, with FSPP2 being the best overall by a slight margin.
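A minimal sketch of the permutation-style approximation of the FSPP criterion described above: a feature's importance is the aggregate absolute change in the classifier's probabilistic output when that feature's values are permuted across the training samples. The scikit-learn-style predict_proba interface is an assumption.

```python
import numpy as np

def fspp_rank(model, X, rng=np.random.default_rng(0)):
    """Rank features by the mean absolute change in predicted probability
    when each feature is permuted -- a permutation-based stand-in for the
    FSPP1/FSPP2 approximations sketched above."""
    base = model.predict_proba(X)[:, 1]
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
        scores.append(np.abs(model.predict_proba(Xp)[:, 1] - base).mean())
    return np.argsort(scores)[::-1]            # most important feature first
```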
The performance of FSPP2 is competitive with some of the best performing feature-selection methods in the literature on the datasets that we have tested. Its associated computations are modest and hence it is suitable as a feature-selection method for SVM applications."} {"_id": "f424355ca1351cadc4d0f84d362933da6cf1eea7", "title": "BzTree: A High-Performance Latch-free Range Index for Non-Volatile Memory", "text": "Storing a database (rows and indexes) entirely in non-volatile memory (NVM) potentially enables both high performance and fast recovery. To fully exploit parallelism on modern CPUs, modern main-memory databases use latch-free (lock-free) index structures, e.g. Bw-tree or skip lists. To achieve high performance, NVM-resident indexes also need to be latch-free. This paper describes the design of the BzTree, a latch-free B-tree index designed for NVM. The BzTree uses a persistent multi-word compare-and-swap operation (PMwCAS) as a core building block, enabling an index design that has several important advantages compared with competing index structures such as the Bw-tree. First, the BzTree is latch-free yet simple to implement. Second, the BzTree is fast, showing up to 2x higher throughput than the Bw-tree in our experiments. Third, the BzTree does not require any special-purpose recovery code. Recovery is near-instantaneous and only involves rolling back (or forward) any PMwCAS operations that were in-flight during failure. Our end-to-end recovery experiments of BzTree report an average recovery time of 145 \u03bcs. Finally, the same BzTree implementation runs seamlessly on both volatile RAM and NVM, which greatly reduces the cost of code maintenance. PVLDB Reference Format: Joy Arulraj, Justin Levandoski, Umar Farooq Minhas, Per-Ake Larson. BzTree: A High-Performance Latch-free Range Index for Non-Volatile Memory. PVLDB, 11(4): xxxx-yyyy, 2018. DOI: https://doi.org/10.1145/3164135.3164147"} {"_id": "3206fe6bbad88896b8608fafe7c9295f2504b745", "title": "DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time", "text": "We present the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGBD scans captured from commodity sensors. Our DynamicFusion approach reconstructs scene geometry whilst simultaneously estimating a dense volumetric 6D motion field that warps the estimated geometry into a live frame. Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused, and displays the updated model in real time. Because we do not require a template or other prior scene model, the approach is applicable to a wide range of moving objects and scenes."} {"_id": "1d13f96a076cb0e3590f13209c66ad26aa33792f", "title": "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation", "text": "Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms \u2014 especially the collaborative filtering (CF)-based approaches with shallow or deep models \u2014 usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models.
However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users\u2019 historical behaviors, and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) shed light on this problem, making it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines."} {"_id": "896191c053a5c98f2478cb20246ae447ebcd6e38", "title": "Optimal Mass Transport: Signal processing and machine-learning applications", "text": "Transport-based techniques for signal and data analysis have recently received increased interest. Given their ability to provide accurate generative models for signal intensities and other data distributions, they have been used in a variety of applications, including content-based retrieval, cancer detection, image superresolution, and statistical machine learning, to name a few, and they have been shown to produce state-of-the-art results. Moreover, the geometric characteristics of transport-related metrics have inspired new kinds of algorithms for interpreting the meaning of data distributions. Here, we provide a practical overview of the mathematical underpinnings of mass transport-related methods, including numerical implementation, as well as a review, with demonstrations, of several applications. Software accompanying this article is available from [43]."} {"_id": "ef8d41dace32a312ac7e29688df2702089f4ba2d", "title": "Design and Experiment of a Conformal Monopulse Antenna for Passive Radar Applications", "text": "This paper proposes a design scheme for a wideband monopulse antenna system for the passive radar seeker (PRS) of an anti-radiation missile (ARM). To save installation space in the PRS, conically conformal log-periodic antennas (LPDAs) are employed to form the monopulse antenna system. The specific implementation scheme of the PRS based on the conically conformal monopulse antennas was designed and analyzed. A practical monopulse antenna system composed of two log-periodic antennas was designed and fabricated over the operating frequency range of 1 GHz to 8 GHz. A load resistor with impedance equal to 50 \u03a9 is used to reduce the return loss at low frequencies. The experimental results demonstrate that acceptable wideband impedance matching and port isolation were achieved for each fabricated antenna.
Furthermore, the wideband sum and difference radiation patterns were formed, which validates the proposed conformal monopulse antenna design scheme for the PRS application."} {"_id": "234904ac1dbbeb507f8c63abf327fe47d023810c", "title": "Prediction of Web Users Browsing Behavior : A Review", "text": "In Web Usage Mining (WUM), classification techniques are useful for web user prediction, where a user\u2019s browsing behavior is predicted. In Web prediction, classification is done first to predict the set of Web pages a user may visit next, based on the history of previously visited pages. Predicting users\u2019 behavior effectively is important in various applications, such as recommendation systems and smart phones. Various prediction systems are currently available based on Markov models, Association Rule Mining (ARM), classification techniques, etc. A Markov prediction model cannot predict for a session that was not previously observed in the training set, although its prediction time is constant. In ARM, the Apriori algorithm requires considerable time to generate item sets. This paper presents a review of various prediction models used to predict web users\u2019 browsing behavior. Key Words\u2014 WUM, classification, Browsing behavior, prediction, ARM, session"} {"_id": "5c785d21421bfed0beba8de06a13fae6fe318e50", "title": "Testing and evaluation of a wearable augmented reality system for natural outdoor environments", "text": "This paper describes the performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive \u2018heads-up\u2019 visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier\u2019s view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10mrad) using these vision-based methods."} {"_id": "a3721e874b0d80c21ac0db5034a4917f8ef94e5e", "title": "Position Paper: Physical Unclonable Functions for IoT Security", "text": "Devices in the Internet of Things (IoT) introduce unique security challenges due to their operating conditions and device limitations. Existing security solutions based on classical cryptography have significant drawbacks in IoT devices, primarily due to the possibility of physical and side channel attacks. As an alternative approach, this position paper advocates the use of physical unclonable functions (PUFs) as a security primitive for developing security solutions for IoT devices.
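A minimal sketch of the first-order Markov next-page predictor that the web-usage-mining review above takes as a baseline; it also exhibits the limitation the review notes, returning no prediction for pages never observed in training. The session format is an illustrative assumption.

```python
from collections import Counter, defaultdict

def build_markov(sessions):
    """Count page -> next-page transitions over training sessions."""
    trans = defaultdict(Counter)
    for s in sessions:
        for a, b in zip(s, s[1:]):
            trans[a][b] += 1
    return trans

def predict_next(trans, page):
    """Most likely next page, or None if the page was never seen -- the
    coverage limitation of Markov predictors noted above."""
    counts = trans.get(page)
    return counts.most_common(1)[0][0] if counts else None

# e.g. model = build_markov([["home", "search", "item"], ["home", "item"]])
#      predict_next(model, "home")  # -> "item"
```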
Preliminary work on developing a PUF based mutual authentication protocol is also presented."} {"_id": "7e561868b2da2575b09ffb453ace9590cdcf4f9a", "title": "Predicting at-risk novice Java programmers through the analysis of online protocols", "text": "In this study, we attempted to quantify indicators of novice programmer progress in the task of writing programs, and we evaluated the use of these indicators for identifying academically at-risk students. Over the course of nine weeks, students completed five different graded programming exercises in a computer lab. Using an instrumented version of BlueJ, an integrated development environment for Java, we collected novice compilations and explored the errors novices encountered, the locations of these errors, and the frequency with which novices compiled their programs. We identified which frequently encountered errors and which compilation behaviors were characteristic of at-risk students. Based on these findings, we developed linear regression models that allowed prediction of students' scores on a midterm exam. However, the models derived could not accurately predict the at-risk students. Although our goal of identifying at-risk students was not attained, we have gained insights regarding the compilation behavior of our students, which may help us identify students who are in need of intervention."} {"_id": "234a741a8bcdf25f5924101edc1f67f164f5277b", "title": "Cues to deception.", "text": "Do people behave differently when they are lying compared with when they are telling the truth? The combined results of 1,338 estimates of 158 cues to deception are reported. Results show that in some ways, liars are less forthcoming than truth tellers, and they tell less compelling tales. They also make a more negative impression and are more tense. Their stories include fewer ordinary imperfections and unusual contents. However, many behaviors showed no discernible links, or only weak links, to deceit. Cues to deception were more pronounced when people were motivated to succeed, especially when the motivations were identity relevant rather than monetary or material. Cues to deception were also stronger when lies were about transgressions."} {"_id": "7ac64e54d377d2e765158cb545df5013e92905da", "title": "Deception and design: the impact of communication technology on lying behavior", "text": "Social psychology has demonstrated that lying is an important, and frequent, part of everyday social interactions. As communication technologies become more ubiquitous in our daily interactions, an important question for developers is to determine how the design of these technologies affects lying behavior. The present research reports the results of a diary study, in which participants recorded all of their social interactions and lies for seven days. The data reveal that participants lied most on the telephone and least in email, and that lying rates in face-to-face and instant messaging interactions were approximately equal. This pattern of results suggests that the design features of communication technologies (e.g., synchronicity, recordability, and copresence) affect lying behavior in important ways, and that these features must be considered by both designers and users when issues of deception and trust arise. The implications for designing applications that increase, decrease or detect deception are discussed."} {"_id": "c3245983b40b3c9b90f62f50bbe99372c04e973d", "title": "Psychological aspects of natural language. 
use: our words, our selves.", "text": "The words people use in their daily lives can reveal important aspects of their social and psychological worlds. With advances in computer technology, text analysis allows researchers to reliably and quickly assess features of what people say as well as subtleties in their linguistic styles. Following a brief review of several text analysis programs, we summarize some of the evidence that links natural word use to personality, social and situational fluctuations, and psychological interventions. Of particular interest are findings that point to the psychological value of studying particles\u2014parts of speech that include pronouns, articles, prepositions, conjunctives, and auxiliary verbs. Particles, which serve as the glue that holds nouns and regular verbs together, can serve as markers of emotional state, social identity, and cognitive styles."} {"_id": "f8a5ada24bb625ce5bbcdb9004da911d7fa845b2", "title": "Testing the Interactivity Model: Communication Processes, Partner Assessments, and the Quality of Collaborative Work", "text": "Judee K. Burgoon is Professor of Communication and Director of the Center for the Management of Information at the University of Arizona. She is the author or coauthor of seven books and nearly two hundred articles and chapters on topics related to interpersonal and nonverbal communication, deception and credibility, mass media, and new communication technologies. Her current research focuses on interactivity, adaptation, and attunement in communication."} {"_id": "0d202e5e05600222cc0c6f53504e046996fdd3c8", "title": "Male and Female Spoken Language Differences : Stereotypes and Evidence", "text": "Male speech and female speech have been observed to differ in their form, topic, content, and use. Early writers were largely introspective in their analyses; more recent work has begun to provide empirical evidence. Men may be more loquacious and directive; they use more nonstandard forms, talk more about sports, money, and business, and more frequently refer to time, space, quantity, destructive action, perceptual attributes, physical movements, and objects. Women are often more supportive, polite, and expressive, talk more about home and family, and use more words implying feeling, evaluation, interpretation, and psychological state. A comprehensive theory of \"genderlect\" must include information about linguistic features under a multiplicity of conditions."} {"_id": "416f27b7ed8764a5d609965ba9a08b67affdd1b8", "title": "Study of Deep Learning Techniques for Side-Channel Analysis and Introduction to ASCAD Database", "text": "To provide assurance of the resistance of a system against side-channel analysis, several national or private schemes today promote an evaluation strategy, common in classical cryptography, which focuses on the most powerful adversary, who may train to learn the dependency between the device behaviour and the sensitive data values. Several works have shown that this kind of analysis, known as Template Attacks in the side-channel domain, can be rephrased as a classical Machine Learning classification problem with a learning phase. Following the current trend in the latter area, recent works have demonstrated that deep learning algorithms are very efficient for conducting security evaluations of embedded systems and have many advantages compared to other methods.
Unfortunately, their hyper-parametrization has often been kept secret by the authors, who only discussed the main design principles and the attack efficiencies. This is clearly an important limitation of previous works since (1) the latter parametrization is known to be a challenging question in Machine Learning and (2) it does not allow for the reproducibility of the presented results. This paper aims to address these limitations in several ways. First, complementing recent works, we propose a comprehensive study of deep learning algorithms when applied in the context of side-channel analysis and we discuss the links with the classical template attacks. Secondly, we address the question of the choice of the hyper-parameters for the class of multi-layer perceptron networks and convolutional neural networks. Several benchmarks and rationales are given in the context of the analysis of a masked implementation of the AES algorithm. To enable perfect reproducibility of our tests, this work also introduces an open platform including all the sources of the target implementation together with the campaign of electromagnetic measurements exploited in our benchmarks. This open database, named ASCAD, has been specified to serve as a common basis for further works on this subject. Our work confirms the conclusions made by Cagli et al. at CHES 2017 about the high potential of convolutional neural networks. Interestingly, it shows that the approach followed to design the algorithm VGG-16 used for image recognition seems also to be sound when it comes to fixing an architecture for side-channel analysis."} {"_id": "193debca0be1c38dabc42dc772513e6653fd91d8", "title": "Mnemonic Descent Method: A Recurrent Process Applied for End-to-End Face Alignment", "text": "Cascaded regression has recently become the method of choice for solving non-linear least squares problems such as deformable image alignment. Given a sizeable training set, cascaded regression learns a set of generic rules that are sequentially applied to minimise the least squares problem. Despite the success of cascaded regression for problems such as face alignment and head pose estimation, there are several shortcomings arising in the strategies proposed thus far. Specifically, (a) the regressors are learnt independently, (b) the descent directions may cancel one another out and (c) handcrafted features (e.g., HoGs, SIFT etc.) are mainly used to drive the cascade, which may be sub-optimal for the task at hand. In this paper, we propose a combined and jointly trained convolutional recurrent neural network architecture that allows the training of an end-to-end system that attempts to alleviate the aforementioned drawbacks. The recurrent module facilitates the joint optimisation of the regressors by assuming the cascades form a nonlinear dynamical system, in effect fully utilising the information between all cascade levels by introducing a memory unit that shares information across all levels. The convolutional module allows the network to extract features that are specialised for the task at hand and are experimentally shown to outperform hand-crafted features.
We show that the application of the proposed architecture for the problem of face alignment results in a strong improvement over the current state-of-the-art."} {"_id": "33ef870128a5a7ea3c98274f0753324083b955aa", "title": "BigDataScript: a scripting language for data pipelines", "text": "MOTIVATION\nThe analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability.\n\n\nRESULTS\nWe introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code.\n\n\nAVAILABILITY AND IMPLEMENTATION\nBigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript."} {"_id": "d174b88552cf2531e10af49e0d182802f0c8d604", "title": "Implicit and Explicit Stereotyping of Adolescents", "text": "Although adolescents are commonly assumed to be rebellious, risky and moody, two experiments demonstrate for the first time that these beliefs operate both explicitly and implicitly as stereotypes. In Experiment 1, participants (a) explicitly endorsed adolescent stereotypes and (b) implicitly associated adolescent stereotyped words more rapidly with the adolescent than the adult social category. Individual differences in the explicit endorsement of adolescent stereotypes predicted explicit perceptions of the rebelliousness of a 17-year-old but not a 71-year-old, although individual differences in implicit stereotyping did not. Identification with adults was associated with greater implicit stereotyping but not explicit stereotyping. In Experiment 2, subliminal exposure to adolescent stereotyped words increased subsequent perceptions of the rebelliousness of a 17-year-old but not a 71-year-old. Although individual differences in implicit adolescent stereotyping did not predict explicit evaluations of adolescents, stereotypes of adolescents nevertheless influenced explicit evaluations unconsciously and unintentionally."} {"_id": "7c17025c540b88df14da35229618b5e896ab9528", "title": "Rain Streak Removal Using Layer Priors", "text": "This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. 
While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We give an overview of our method and demonstrate its effectiveness over prior work on a number of examples."} {"_id": "6a584f3fbc4c5333825ce0a8e8a30776097c81c5", "title": "The Microfinance Promise", "text": "Contributions to this research were made by a member of The Financial Access Initiative."} {"_id": "4279602fdbe037b935b63bcadbbfbe503a7c30ee", "title": "You are what you like! Information leakage through users' Interests", "text": "Suppose that a Facebook user, whose age is hidden or missing, likes Britney Spears. Can you guess his/her age? Knowing that most Britney fans are teenagers, it is fairly easy for humans to answer this question. Interests (or \u201clikes\u201d) of users are among the most readily available pieces of on-line information. In this paper, we show how these seemingly harmless interests (e.g., music interests) can leak privacy-sensitive information about users. In particular, we infer their undisclosed (private) attributes using the public attributes of other users sharing similar interests. In order to compare user-defined interest names, we extract their semantics using an ontologized version of Wikipedia and measure their similarity by applying a statistical learning method. Besides self-declared interests in music, our technique does not rely on any further information about users such as friend relationships or group memberships. Our experiments, based on more than 104K public profiles collected from Facebook and more than 2000 private profiles provided by volunteers, show that our inference technique efficiently predicts attributes that are very often hidden by users. To the best of our knowledge, this is the first time that user interests are used for profiling, and more generally, semantics-driven inference of private data is addressed."} {"_id": "47b4d1e152efebb00ac9f948d610e6c6a27d34ea", "title": "Automated Method for Discrimination of Arrhythmias Using Time, Frequency, and Nonlinear Features of Electrocardiogram Signals", "text": "We developed an automated approach to differentiate between different types of arrhythmic episodes in electrocardiogram (ECG) signals, because, in real-life scenarios, a software application does not know in advance the type of arrhythmia a patient experiences. Our approach has four main stages: (1) Classification of ventricular fibrillation (VF) versus non-VF segments\u2014including atrial fibrillation (AF), ventricular tachycardia (VT), normal sinus rhythm (NSR), and sinus arrhythmias, such as bigeminy, trigeminy, quadrigeminy, couplet, triplet\u2014using four image-based phase plot features, one frequency domain feature, and the Shannon entropy index. (2) Classification of AF versus non-AF segments. (3) Premature ventricular contraction (PVC) detection on every non-AF segment, using a time domain feature, a frequency domain feature, and two features that characterize the nonlinearity of the data. (4) Determination of the PVC patterns, if present, to categorize distinct types of sinus arrhythmias and NSR. 
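An illustrative sketch of the staged classification idea in the arrhythmia paper above: a decision tree for stage 1 (VF versus non-VF) and an SVM for a later stage (PVC detection). The features here are random placeholders standing in for the phase-plot, frequency-domain, and entropy features the paper extracts.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 6))                # 6 stand-in features per ECG segment
y_vf = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic VF / non-VF labels
y_pvc = (X[:, 2] > 0.5).astype(int)          # synthetic PVC labels

stage1 = DecisionTreeClassifier(max_depth=4).fit(X, y_vf)
non_vf = X[stage1.predict(X) == 0]           # only non-VF segments move on
stage3 = SVC(kernel='rbf').fit(X, y_pvc)     # PVC detector for a later stage
print(stage1.score(X, y_vf), stage3.score(X, y_pvc))
```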
We used the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, Creighton University\u2019s VT arrhythmia database, the MIT-BIH atrial fibrillation database, and the MIT-BIH malignant ventricular arrhythmia database to test our algorithm. Binary decision tree (BDT) and support vector machine (SVM) classifiers were used in both stage 1 and stage 3. We also compared our proposed algorithm\u2019s performance to other published algorithms. Our VF detection algorithm was accurate, as in balanced datasets (and unbalanced, in parentheses) it provided an accuracy of 95.1% (97.1%), sensitivity of 94.5% (91.1%), and specificity of 94.2% (98.2%). The AF detection was accurate, as the sensitivity and specificity in balanced datasets (and unbalanced, in parentheses) were found to be 97.8% (98.6%) and 97.21% (97.1%), respectively. Our PVC detection algorithm was also robust, as the accuracy, sensitivity, and specificity were found to be 99% (98.1%), 98.0% (96.2%), and 98.4% (99.4%), respectively, for balanced and (unbalanced) datasets."} {"_id": "95a8087f81f7c60544c4e9cfec67f4c1513ff789", "title": "An Algebra and Equivalences to Transform Graph Patterns in Neo4j", "text": "Modern query optimizers of relational database systems embody more than three decades of research and practice in the area of data management and processing. Key advances include algebraic query transformation, intelligent search space pruning, and modular optimizer architectures. Surprisingly, many of these contributions seem to have been overlooked in the emerging field of graph databases so far. In particular, we believe that query optimization based on a general graph algebra and its equivalences can greatly improve on the current state of the art. Although some graph algebras have already been proposed, they have often been developed in a context in which a relational database system is used as a backend to process graph data. As a consequence, these algebras are typically tightly coupled to the relational algebra, making them unsuitable for native graph databases. While we support the approach of extending the relational algebra, we argue that graph-specific operations should be defined at a higher level, independent of the database backend. In this paper, we introduce such a general graph algebra and corresponding equivalences. We demonstrate how it can be used to optimize Cypher queries in the setting of the Neo4j native graph database."} {"_id": "3c203e4687fab9bacc2adcff2cbcab2b6d7937dd", "title": "Information theory and neural coding", "text": "Information theory quantifies how much information a neural response carries about the stimulus. This can be compared to the information transferred in particular models of the stimulus\u2013response function and to maximum possible information transfer. Such comparisons are crucial because they validate assumptions present in any neurophysiological analysis. Here we review information-theory basics before demonstrating its use in neural coding. We show how to use information theory to validate simple stimulus\u2013response models of neural coding of dynamic stimuli. Because these models require specification of spike timing precision, they can reveal which time scales contain information in neural coding. This approach shows that dynamic stimuli can be encoded efficiently by single neurons and that each spike contributes to information transmission. 
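A small sketch of the kind of computation the neural coding review above is concerned with: estimating the mutual information I(S;R) between a discrete stimulus and a discretized response from their joint histogram. Direct plug-in estimation like this is biased for small samples, which is one reason the review stresses careful validation.

```python
import numpy as np

rng = np.random.default_rng(8)
stimulus = rng.integers(0, 4, 5000)                   # 4 stimulus classes
response = (stimulus + rng.integers(0, 2, 5000)) % 4  # noisy stand-in "neural" response

joint = np.histogram2d(stimulus, response, bins=(4, 4))[0] / 5000.0
ps, pr = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
nz = joint > 0
# I(S;R) = sum p(s,r) * log2( p(s,r) / (p(s) p(r)) ) over nonzero cells
mi_bits = np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz]))
print(f"I(S;R) = {mi_bits:.3f} bits")   # ~1 bit for this toy channel
```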
We argue, however, that the data obtained so far do not suggest a temporal code, in which the placement of spikes relative to each other yields additional information."} {"_id": "e835b786e552f45da8c8fa7af0dfc3c571f4150c", "title": "ANE: Network Embedding via Adversarial Autoencoders", "text": "Network embedding is an important method to learn low-dimensional representations of vertices in a network, whose goal is to capture and preserve the highly non-linear network structures. Here, we propose an Adversarial autoencoders based Network Embedding method (ANE for short), which utilizes the recently proposed adversarial autoencoders to perform variational inference by matching the aggregated posterior of low-dimensional representations of vertices with an arbitrary prior distribution. This framework introduces adversarial regularization to autoencoders and is able to attach the latent representations of similar vertices to each other, thus preventing the manifold fracturing problem that is typically encountered in the embeddings learnt by autoencoders. Experiments demonstrate the effectiveness of ANE on link prediction and multi-label classification on three real-world information networks."} {"_id": "6be966504d72b77886af302cf7e28a7d5c14e624", "title": "Multiview projectors/cameras system for 3D reconstruction of dynamic scenes", "text": "Active vision systems are usually limited to either partial or static scene reconstructions. In this paper, we propose to acquire the entire 3D shape of a dynamic scene. This is performed using a system of multiple projectors and cameras that allows the entire shape of the object to be recovered within a single scan at each frame. Like previous approaches, a static and simple pattern is used to avoid interference of multiple patterns projected on the same object. In this paper, we extend the technique to capture a dense entire shape of a moving object with accuracy and high video frame rate. To achieve this, we mainly propose two additional steps; one is checking the consistency between the multiple cameras and projectors, and the other is an algorithm for light sectioning based on a plane parameter optimization. In addition, we also propose efficient noise reduction and mesh generation algorithms which are necessary for practical applications. In the experiments, we show that we can successfully reconstruct dense entire shapes of moving objects. Results are illustrated on real data from a system composed of six projectors and six cameras that was actually built."} {"_id": "65c0d042d2ee7e4b71992e97f8bb42f028facac6", "title": "Using Machine Learning to Break Visual Human Interaction Proofs (HIPs)", "text": "Machine learning is often used to automatically solve human tasks. In this paper, we look for tasks where machine learning algorithms are not as good as humans with the hope of gaining insight into their current limitations. We studied various Human Interactive Proofs (HIPs) on the market, because they are systems designed to tell computers and humans apart by posing challenges presumably too hard for computers. We found that most HIPs are pure recognition tasks which can easily be broken using machine learning. The harder HIPs use a combination of segmentation and recognition tasks. From this observation, we found that building segmentation tasks is the most effective way to confuse machine learning algorithms. 
This has enabled us to build effective HIPs (which we deployed in MSN Passport), as well as design challenging segmentation tasks for machine learning algorithms."} {"_id": "29d583c9ed11377d02158150b61f2c4ce9ad5fb1", "title": "Area-efficient cross-coupled charge pump for on-chip solar cell", "text": "In this paper, an area-efficient cross-coupled charge pump for a standalone chip using an on-chip solar cell is proposed. The proposed cross-coupled charge pump outputs a higher voltage than the conventional cross-coupled charge pump for the same area by adding capacitors to drive the MOS transistors. The proposed circuit is fabricated in a 0.18um standard CMOS process, and outputs a voltage 80mV higher than that of the conventional circuit with a 500mV input and a 100kHz clock frequency."} {"_id": "745782902e97be8fbacd1e05d283f11104e2fec6", "title": "An introduction to ROC analysis", "text": "Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research."} {"_id": "509d6923c3d18a78b49a8573189ef9a1a5d56158", "title": "Sentiment Intensity Ranking among Adjectives Using Sentiment Bearing Word Embeddings", "text": "Identification of intensity ordering among polar (positive or negative) words which have the same semantics can lead to a fine-grained sentiment analysis. For example, master, seasoned and familiar point to different intensity levels, though they all convey the same meaning (semantics), i.e., expertise: having a good knowledge of. In this paper, we propose a semi-supervised technique that uses sentiment bearing word embeddings to produce a continuous ranking among adjectives that share common semantics. Our system demonstrates a strong Spearman\u2019s rank correlation of 0.83 with the gold standard ranking. We show that sentiment bearing word embeddings facilitate a more accurate intensity ranking system than other standard word embeddings (word2vec and GloVe). Word2vec is the state of the art for the intensity ordering task."} {"_id": "4e8160b341556d11ea11a636f4e85f221e80df92", "title": "Does the Fragrance of Essential Oils Alleviate the Fatigue Induced by Exercise? A Biochemical Indicator Test in Rats", "text": "Objective\nTo study the effect of the essential oils of Citrus sinensis L., Mentha piperita L., Syzygium aromaticum L., and Rosmarinus officinalis L. on physical exhaustion in rats.\n\n\nMethods\nForty-eight male Wistar rats were randomly divided into a control group, a fatigue group, an essential oil mixture (EOM) group, and a peppermint essential oil (PEO) group. Loaded swimming to exhaustion was used as the rat fatigue model. Two groups were nebulized with EOM and PEO after swimming, and the others were nebulized with distilled water. 
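A small worked example of the ROC analysis introduced above: classifier scores are swept over all thresholds to trace the curve, and the area under it (AUC) summarizes ranking quality. The data here are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
y_true = np.repeat([0, 1], 100)
# Positives score higher on average -> an informative but imperfect classifier.
scores = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC =", roc_auc_score(y_true, scores))
# Each (fpr, tpr) pair is one operating point; plotting them gives the ROC graph.
```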
After continuous inhalation for 3 days, the swimming time, blood glucose, blood lactic acid (BLA), blood urea nitrogen (BUN), superoxide dismutase (SOD), glutathione peroxidase (GSH-PX), and malondialdehyde (MDA) in blood were determined.\n\n\nResults\nWhile an increased time to exhaustion and SOD activity were apparent in both the EOM and PEO groups, the BLA and MDA were lower in both groups, in comparison with the fatigue group, and the changes in the EOM group were more dramatic. Additionally, the EOM group also showed a marked rise in blood glucose and marked decreases in BUN and GSH-PX.\n\n\nConclusion\nThe results suggested that the inhalation of an essential oil mixture could powerfully relieve exercise-induced fatigue."} {"_id": "bdd46459102967fed8c9ce41f81ce4d18b33c38e", "title": "Addressing Function Approximation Error in Actor-Critic Methods", "text": "In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested."} {"_id": "3b8e1b8b85639079dbad489b23e28437f4c565ad", "title": "Design and Optimization of OpenFOAM-based CFD Applications for Modern Hybrid and Heterogeneous HPC Platforms", "text": "The progress of high performance computing platforms is dramatic, and most of the simulations carried out on these platforms result in improvements on one level, yet expose shortcomings of current CFD packages' capabilities. Therefore, hardware-aware design and optimizations are crucial towards exploiting modern computing resources. This thesis proposes optimizations aimed at accelerating numerical simulations, which are illustrated in OpenFOAM solvers. A hybrid MPI and GPGPU parallel conjugate gradient linear solver has been designed and implemented to solve the sparse linear algebraic kernel that derives from two CFD solvers: icoFoam, which is an incompressible flow solver, and laplacianFoam, which solves the Poisson equation, for, e.g., thermal diffusion. A load-balancing step is applied using heterogeneous decomposition, which decomposes the computations taking into account the performance of each computing device and seeking to minimize communication. In addition, we implemented the recently developed pipeline conjugate gradient as an algorithmic improvement, and parallelized it using MPI, GPGPU, and a hybrid technique. 
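A sketch of the clipped double-Q target described in the actor-critic paper above: the bootstrapped target uses the minimum of two target critics to curb overestimation. The "critics" here are toy linear functions; in the real algorithm they are neural networks updated toward this target.

```python
import numpy as np

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=4), rng.normal(size=4)  # toy target-critic weights

def q1(s, a): return float(np.dot(w1, np.append(s, a)))
def q2(s, a): return float(np.dot(w2, np.append(s, a)))

def td_target(reward, next_state, next_action, gamma=0.99, done=False):
    # Taking the min over the two critics is the overestimation-control step.
    q_min = min(q1(next_state, next_action), q2(next_state, next_action))
    return reward + (0.0 if done else gamma * q_min)

print(td_target(reward=1.0, next_state=np.ones(3), next_action=0.5))
```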
While many questions of ultimately attainable per node performance and multi-node scaling remain, the experimental results show that the hybrid implementation of both solvers significantly outperforms state-of-the-art implementations of a widely used open source package."} {"_id": "52b102620fff029b80b3193bec147fe6afd6f42e", "title": "Benchmark of a large scale database for facial beauty prediction", "text": "Due to its indefinite evaluation criterion, facial beauty analysis poses challenges in pattern recognition and biometric recognition. Many methods have been designed to solve this problem, but various limitations remain. Most of the available research results in this field are achieved on small-scale facial beauty databases, which makes it difficult to model the structural information of facial beauty. Moreover, the majority of existing facial beauty prediction algorithms require burdensome landmarking or expensive optimization procedures. In this paper, we have established a large scale facial beauty database named LSFBD, which is a new database for facial beauty analysis. The LSFBD database contains 20000 labeled images, comprising 10000 unconstrained male subjects and 10000 unconstrained female subjects, with all images rated through a well-designed procedure reporting average scores and standard deviations. In this database, manually adding the extreme beauty images makes the distribution more reasonable and the results more accurate. CRBM and Eigenfaces results are presented in the paper as benchmarks for evaluating the task of facial beauty prediction."} {"_id": "4d9583991eeae36366cd61e6011df7ada7413f0d", "title": "The Distributed Computing Paradigms: P2P, Grid, Cluster, Cloud, and Jungle", "text": "Distributed computing uses many systems to solve large-scale problems. The growth of high-speed broadband networks in developed and developing countries, the continual increase in computing power, and the rapid growth of the Internet have changed the way society manages information and information services. Historically, the state of computing has gone through a series of platform and environmental changes. Distributed computing holds great promise for using computer systems effectively. As a result, supercomputer sites and datacenters have changed from providing high performance floating point computing capabilities to concurrently servicing huge numbers of requests from billions of users. Distributed computing systems use multiple computers to solve large-scale problems over the Internet, becoming data-intensive and network-centric. The applications of distributed computing have become increasingly wide-spread. In distributed computing, the main emphasis is on large-scale resource sharing and achieving the best performance. In this article, we have reviewed the work done in the area of distributed computing paradigms. The main emphasis is on the evolving area of cloud computing."} {"_id": "d2331d22cab942a5f15c907effe1eaedd5d77305", "title": "Guide to Health Informatics", "text": "From the very earliest moments in the modern history of the computer, scientists have dreamed of creating an \"electronic brain\". Of all the modern technological quests, this search to create artificially intelligent (AI) computer systems has been one of the most ambitious and, not surprisingly, controversial. It also seems that very early on, scientists and doctors alike were captivated by the potential such a technology might have in medicine (e.g. 
Ledley and Lusted, 1959). With intelligent computers able to store and process vast stores of knowledge, the hope was that they would become perfect 'doctors in a box', assisting or surpassing clinicians with tasks like diagnosis. With such motivations, a small but talented community of computer scientists and healthcare professionals set about shaping a research program for a new discipline called Artificial Intelligence in Medicine (AIM). These researchers had a bold vision of the way AIM would revolutionise medicine, and push forward the frontiers of technology. AI in medicine at that time was a largely US-based research community. Work originated out of a number of campuses, including MIT-Tufts, Pittsburgh, Stanford and Rutgers. The field attracted many of the best computer scientists and by any measure their output in the first decade of the field remains a remarkable achievement. In reviewing this new field in 1984, Clancey and Shortliffe provided the following definition: 'Medical artificial intelligence is primarily concerned with the construction of AI programs that perform diagnosis and make therapy recommendations. Unlike medical applications based on other programming methods, such as purely statistical and probabilistic methods, medical AI programs are based on symbolic models of disease entities and their relationship to patient factors and clinical manifestations.' Much has changed since then, and today the importance of diagnosis as a task requiring computer support in routine clinical situations receives much less emphasis (Durinck et al., 1994). The strict focus on the medical setting has now broadened across the healthcare spectrum, and instead of AIM systems, it is more typical to describe them as clinical decision support systems (CDSS). Intelligent systems today are thus found supporting medication prescribing, in clinical laboratories and educational settings, for clinical surveillance, or in data-rich areas like the intensive care setting. While there certainly have been ongoing challenges in developing such systems, they actually have proven their reliability and accuracy on repeated occasions (Shortliffe, 1987). Much of the difficulty experienced in introducing them has \u2026"} {"_id": "76456aac10e7ed56f1188cc28ea5f525c959896e", "title": "A sock puppet detection algorithm on virtual spaces", "text": "In virtual spaces, some individuals use multiple usernames or imitate and forge other users (usually called \u2018\u2018sock puppets\u2019\u2019) to communicate with others. Those sock puppets are fake identities through which members of an Internet community praise or create the illusion of support for a product or one\u2019s own work, pretending to be a different person. A fundamental problem is how to identify these sock puppets. In this paper, we propose a sock puppet detection algorithm which combines authorship-identification techniques and link analysis. First, we propose a social network model in which links between two IDs are built if they have similar attitudes toward most topics in which both participate; then, the edges are pruned according to a hypothesis test that considers the impact of their writing styles; finally, link-based community detection is performed on the pruned network. Compared to traditional methods, our approach has three advantages: (1) it conforms to the practical meaning of a sock puppet community; (2) it can be applied in online situations; (3) it increases the efficiency of link analysis. 
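A toy rendering of the three-step pipeline just described for sock puppet detection: build links between IDs with similar topic attitudes, prune edges by a test on writing-style distance, then run community detection. The similarity matrix and the style "test" are random stand-ins for the paper's attitude model and hypothesis test.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(7)
n = 30
attitude_sim = rng.random((n, n))   # stand-in: pairwise attitude similarity
style_dist = rng.random((n, n))     # stand-in: pairwise writing-style distance

G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        if attitude_sim[i, j] > 0.8:      # step 1: similar attitudes -> link
            if style_dist[i, j] < 0.5:    # step 2: prune by style "test"
                G.add_edge(i, j)

communities = greedy_modularity_communities(G)  # step 3: candidate puppet groups
print([sorted(c) for c in communities if len(c) > 1])
```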
In the experimental work, we evaluate our method using real datasets and compare our approach with several previous methods; the results confirm the above advantages."} {"_id": "97728f1466f435b7145637acb4893c6acaa2291a", "title": "EdgeChain: An Edge-IoT Framework and Prototype Based on Blockchain and Smart Contracts", "text": "The emerging Internet of Things (IoT) is facing significant scalability and security challenges. On the one hand, IoT devices are \u201cweak\u201d and need external assistance. Edge computing provides a promising direction addressing the deficiency of centralized cloud computing in scaling massive number of devices. On the other hand, IoT devices are also relatively \u201cvulnerable\u201d facing malicious hackers due to resource constraints. The emerging blockchain and smart contracts technologies bring a series of new security features for IoT and edge computing. In this paper, to address the challenges, we design and prototype an edge-IoT framework named \u201cEdgeChain\u201d based on blockchain and smart contracts. The core idea is to integrate a permissioned blockchain and the internal currency or \u201ccoin\u201d system to link the edge cloud resource pool with each IoT device\u2019s account and resource usage, and hence the behavior of the IoT devices. EdgeChain uses a credit-based resource management system to control how much resource IoT devices can obtain from edge servers, based on pre-defined rules on priority, application types and past behaviors. Smart contracts are used to enforce the rules and policies to regulate the IoT device behavior in a non-deniable and automated manner. All the IoT activities and transactions are recorded into the blockchain for secure data logging and auditing. We implement an EdgeChain prototype and conduct extensive experiments to evaluate the ideas. The results show that while gaining the security benefits of blockchain and smart contracts, the cost of integrating them into EdgeChain is within a reasonable and acceptable range."} {"_id": "16463c13e27f35d326056bd84364e02182a978a4", "title": "\"Now, i have a body\": uses and social norms for mobile remote presence in the workplace", "text": "As geographically distributed teams become increasingly common, there are more pressing demands for communication work practices and technologies that support distributed collaboration. One set of technologies that are emerging on the commercial market is mobile remote presence (MRP) systems, physically embodied videoconferencing systems that remote workers use to drive through a workplace, communicating with locals there. Our interviews, observations, and survey results from people who had 2-18 months of MRP use showed how remotely-controlled mobility enabled remote workers to live and work with local coworkers almost as if they were physically there. The MRP supported informal communications and connections between distributed coworkers. We also found that the mobile embodiment of the remote worker evoked orientations toward the MRP both as a person and as a machine, leading to formation of new usage norms among remote and local coworkers."} {"_id": "4a861d29f36d2e4f03477c5df2730c579d8394d3", "title": "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting", "text": "The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. 
Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting."} {"_id": "4f9e6b13e22ae8f3d77b1f5d1c946179e3abfd64", "title": "Combining Instance-Based and Model-Based Learning", "text": "This paper concerns learning tasks that require the prediction of a continuous value rather than a discrete class. A general method is presented that allows predictions to use both instance-based and model-based learning. Results with three approaches to constructing models and with eight datasets demonstrate improvements due to the composite method."} {"_id": "6e820cf11712b9041bb625634612a535476f0960", "title": "An Analysis of Transformations", "text": "In the analysis of data it is often assumed that observations y1, y2, ..., yn are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters \u03b8. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality, homoscedasticity and additivity to the transformation are separated. The relation of the present methods to earlier procedures for finding transformations is discussed. The methods are illustrated with examples."} {"_id": "a241a7e26d6baf2c068601813216d3cc09e845ff", "title": "Deep Learning for Time Series Modeling CS 229 Final Project Report", "text": "Demand forecasting is crucial to electricity providers because their ability to produce energy exceeds their ability to store it. Excess demand can cause \u201cbrownouts,\u201d while excess supply ends in waste. In an industry worth over $1 trillion in the U.S. alone [1], almost 9% of GDP [2], even marginal improvements can have a huge impact. Any plan toward energy efficiency should include enhanced utilization of existing production. Energy loads provide an interesting topic for Machine Learning techniques due to the availability of large datasets that exhibit fundamental nonlinear patterns. Using data from the Kaggle competition \u201cGlobal Energy Forecasting Competition 2012 Load Forecasting\u201d [3] we sought to use deep learning architectures to predict energy loads across different network grid areas, using only time and temperature data. Data included hourly demand for four and a half years from 20 different geographic regions, and similar hourly temperature readings from 11 zones. 
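A worked example of the power-transformation analysis described in "An Analysis of Transformations" above: scipy's boxcox estimates the transformation parameter lambda by maximum likelihood, which is the same likelihood-based machinery the paper develops.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.lognormal(mean=1.0, sigma=0.6, size=500)  # skewed, strictly positive data

y_transformed, lam = stats.boxcox(y)
print("estimated lambda:", lam)           # near 0 => roughly a log transform
print("skewness before/after:", stats.skew(y), stats.skew(y_transformed))
```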
For most of our analysis we focused on short-term load forecasting because this will aid online operational scheduling."} {"_id": "e9e46148e6f9b115f22767703ca88b55ef10d24e", "title": "When moderation is mediated and mediation is moderated.", "text": "Procedures for examining whether treatment effects on an outcome are mediated and/or moderated have been well developed and are routinely applied. The mediation question focuses on the intervening mechanism that produces the treatment effect. The moderation question focuses on factors that affect the magnitude of the treatment effect. It is important to note that these two processes may be combined in informative ways, such that moderation is mediated or mediation is moderated. Although some prior literature has discussed these possibilities, their exact definitions and analytic procedures have not been completely articulated. The purpose of this article is to define precisely both mediated moderation and moderated mediation and provide analytic strategies for assessing each."} {"_id": "a1c5a6438d3591819e730d8aecb776a52130c33d", "title": "Compact ultra-wide stopband lowpass filter using transformed stepped impedance hairpin resonator", "text": "A compact microstrip lowpass filter (LPF) with an ultra-wide stopband using a transformed stepped impedance hairpin resonator is proposed. The transformed resonator consists of a stepped impedance hairpin resonator and an embedded hexagon stub loaded coupled-line structure. Without enlarging the size, the embedded structure is introduced to get a broad stopband. A prototype LPF has been simulated, fabricated and measured, and the measurements are in good agreement with simulations. The implemented lowpass filter exhibits an ultra-wide stopband up to 12.01fc with a rejection level of 14 dB. In addition, the proposed filter features a size of 0.071\u03bbg \u00d7 0.103\u03bbg, where \u03bbg is the guided wavelength at the cutoff frequency of 1.45 GHz."} {"_id": "63c30ec269e7e02f97e2a20a9d68e268b4405a5d", "title": "Are Coherence Protocol States Vulnerable to Information Leakage?", "text": "Most commercial multi-core processors incorporate hardware coherence protocols to support efficient data transfers and updates between their constituent cores. While hardware coherence protocols provide immense benefits for application performance by removing the burden of software-based coherence, we note that understanding the security vulnerabilities posed by such oft-used, widely-adopted processor features is critical for secure processor designs in the future. In this paper, we demonstrate a new vulnerability exposed by cache coherence protocol states. We present novel insights into how adversaries could cleverly manipulate the coherence states on shared cache blocks, and construct covert timing channels to illegitimately communicate secrets to the spy. We demonstrate 6 different practical scenarios for covert timing channel construction. In contrast to prior works, we assume a broader adversary model where the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use the memory deduplication feature to implicitly force the creation of shared physical pages. We demonstrate how adversaries can manipulate combinations of coherence states and data placement in different caches to construct timing channels. We also explore how adversaries could exploit multiple caches and their associated coherence states to improve transmission bandwidth with symbols encoding multiple bits. 
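A minimal product-of-coefficients sketch of moderated mediation as discussed in the article above: the X-to-M path is moderated by W, so the indirect effect of X on Y through M, (a1 + a3*w) * b1, depends on the moderator value w. Data are simulated, and inference (e.g., bootstrap confidence intervals) is omitted; this shows one common estimation strategy, not the article's full treatment.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
X, W = rng.normal(size=n), rng.normal(size=n)
M = 0.5 * X + 0.3 * W + 0.4 * X * W + rng.normal(size=n)   # moderated a-path
Y = 0.6 * M + 0.2 * X + rng.normal(size=n)                 # b-path plus direct effect

m_fit = sm.OLS(M, sm.add_constant(np.column_stack([X, W, X * W]))).fit()
y_fit = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()

a1, a3 = m_fit.params[1], m_fit.params[3]   # X and X*W coefficients
b1 = y_fit.params[2]                        # M coefficient
for w in (-1.0, 0.0, 1.0):  # conditional indirect effect at chosen moderator values
    print(f"w={w:+.1f}: indirect effect = {(a1 + a3 * w) * b1:.3f}")
```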
Our experimental results on commercial systems show that the peak transmission bandwidths of these covert timing channels can vary between 700 and 1100 Kbits/sec. To the best of our knowledge, our study is the first to highlight the vulnerability of hardware cache coherence protocols to timing channels, which can help computer architects craft effective defenses against exploits on such critical processor features."} {"_id": "218f9e89f2f585b184d928b08636d300e1307c7f", "title": "Seahawk: Stack Overflow in the IDE", "text": "Services, such as Stack Overflow, offer a web platform to programmers for discussing technical issues, in form of Question and Answers (Q&A). Since Q&A services store the discussions, the generated crowd knowledge can be accessed and consumed by a large audience for a long time. Nevertheless, Q&A services are detached from the development environments used by programmers: Developers have to tap into this crowd knowledge through web browsers and cannot smoothly integrate it into their workflow. This situation hinders part of the benefits of Q&A services. To better leverage the crowd knowledge of Q&A services, we created Seahawk, an Eclipse plugin that supports an integrated and largely automated approach to assist programmers using Stack Overflow. Seahawk formulates queries automatically from the active context in the IDE, presents a ranked and interactive list of results, lets users import code samples from discussions through drag & drop, and persistently links Stack Overflow discussions and source code to support teamwork. Video Demo URL: http://youtu.be/DkqhiU9FYPI"} {"_id": "9b75d440a545a915aac025029cab9c7352173021", "title": "Cooperative Control for A Hybrid Rehabilitation System Combining Functional Electrical Stimulation and Robotic Exoskeleton", "text": "Functional electrical stimulation (FES) and robotic exoskeletons are two important technologies widely used for physical rehabilitation of paraplegic patients. We developed a hybrid rehabilitation system (FEXO Knee) that combined FES and an exoskeleton for swinging movement control of human knee joints. This study proposed a novel cooperative control strategy, which could realize arbitrary distribution of torque generated by FES and the exoskeleton, and guarantee harmonic movements. The cooperative control adopted feedforward control for FES and feedback control for the exoskeleton. A parameter regulator was designed to update key parameters in real time to coordinate the FES controller and the exoskeleton controller. Two muscle groups (quadriceps and hamstrings) were stimulated to generate active torque for the knee joint in synchronization with torque compensation from the exoskeleton. The knee joint angle and the interactive torque between the exoskeleton and shank were used as feedback signals for the control system. A central pattern generator (CPG) was adopted to act as a phase predictor to deal with phase conflicts of motor patterns and realize synchronization between the two different bodies (shank and exoskeleton). Experimental evaluation of the hybrid FES-exoskeleton system was conducted on five healthy subjects and four paraplegic patients. 
Experimental results and statistical analysis showed good control performance of the cooperative control on torque distribution, trajectory tracking, and phase synchronization."} {"_id": "859e592b643384133bdb6cffb0990c87ecf06d93", "title": "Uniform circular array with integrated microstrip tapered baluns", "text": "This paper presents the design, simulation, and measurement results of a 4-element uniform circular array (UCA) using dipole antenna elements with integrated microstrip tapered baluns to convert between balanced and unbalanced signals. These baluns are compact, integrated into the array support structure, and can provide a balanced signal over a relatively wide bandwidth. Simulation results are shown for the amplitude and phase balance of the balun from 1.6 GHz to 2.2 GHz. The array was fabricated and its manifold was measured in an anechoic chamber at a frequency of 1.7 GHz. The gain and phase responses of the array are presented and are compared to a simulated ideal dipole UCA in free-space."} {"_id": "883b30ddaceb39b8b7c2e2e4ab1b6a92adaeb8c0", "title": "N-Party Encrypted Diffie-Hellman Key Exchange Using Different Passwords", "text": "We consider the problem of password-authenticated group Diffie-Hellman key exchange among N parties, N\u22121 clients and a single server, using different passwords. Most password-authenticated key exchange schemes in the literature have focused on an authenticated key exchange using a shared password between a client and a server. With rapid changes in the modern communication environment, such as ad-hoc networks and ubiquitous computing, it is necessary to construct a secure end-to-end channel between clients, which is a quite different paradigm from the existing ones. To achieve this end-to-end security, only a few schemes in the three-party setting have been presented, where two clients exchange a key using their own passwords with the help of a server. However, up until now, no formally treated and round-efficient protocols that enable group members to generate a common session key with clients\u2019 distinct passwords have been suggested. In this paper we securely and efficiently extend the three-party case to the N-party case with a formal proof of security. Two provably secure N-party EKE protocols are suggested: N-party EKE-U in the unicast network and N-party EKE-M in the multicast network. The proposed N-party EKE-M is provably secure and provides forward secrecy."} {"_id": "e48e480c16f419ec3a3c7dd7a9be7089e3dd1675", "title": "Iterative unified clustering in big data", "text": "We propose a novel iterative unified clustering algorithm for data with both continuous and categorical variables, in the big data environment. Clustering is a well-studied problem and finds several applications. However, none of the big data clustering works discuss the challenge of mixed attribute datasets, with both categorical and continuous attributes. We study an application in the health care domain, namely Case-Based Reasoning (CBR), which refers to solving new problems based on solutions to similar past problems. This is particularly useful when there is a large set of clinical records with several types of attributes, from which similar patients need to be identified. We go one step further and include the genomic components of patient records to enhance the CBR discovery. 
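A toy demonstration of the unauthenticated group Diffie-Hellman algebra underlying protocols like the N-party EKE schemes above: the shared secret g^(x1*x2*x3) mod p can be reached by exponentiating in any order. This sketch shows only the algebra; the paper's contribution is the password-authenticated, round-efficient protocol around it, which is not reproduced here.

```python
import secrets

p = (1 << 127) - 1   # Mersenne prime 2^127-1; toy size, never use in practice
g = 3

x = [secrets.randbelow(p - 2) + 1 for _ in range(3)]  # one secret per party

# Party 0 starts from g; each subsequent party raises the running value
# to its own secret exponent.
running = g
for xi in x:
    running = pow(running, xi, p)

# The same key via a different exponentiation order -- the commutativity
# that makes group DH work.
alt = pow(g, x[2], p)
alt = pow(alt, x[0], p)
alt = pow(alt, x[1], p)
assert running == alt
print("shared key established:", running == alt)
```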
Thus, our contributions in this paper span big data algorithmic research and make a key contribution to the domain of health care information technology research. First, our clustering algorithm deals with both continuous and categorical variables in the data; second, our clustering algorithm is iterative: it finds the clusters that are not well formed and iteratively drills down to form well-defined clusters at the end of the process; third, we provide a novel approach to CBR across clinical and genomic data. Our research has implications for clinical trials and facilitating precision diagnostics in large and heterogeneous patient records. We present extensive experimental results to show the efficacy of our approach."} {"_id": "5f7526b8e47fbb274ae6b662c5eeb96e89f4b33f", "title": "PLEdestrians: A Least-Effort Approach to Crowd Simulation", "text": "We present a new algorithm for simulating large-scale crowds at interactive rates based on the Principle of Least Effort. Our approach uses an optimization method to compute a biomechanically energy-efficient, collision-free trajectory that minimizes the amount of effort for each heterogeneous agent in a large crowd. Moreover, the algorithm can automatically generate many emergent phenomena such as lane formation, crowd compression, edge and wake effects and others. We compare the results from our simulations to data collected from prior studies in pedestrian and crowd dynamics, and provide visual comparisons with real-world video. In practice, our approach can interactively simulate large crowds with thousands of agents on a desktop PC and naturally generates a diverse set of emergent behaviors."} {"_id": "29be05f17c8906d70659fe1110758a59d39d2a08", "title": "Android taint flow analysis for app sets", "text": "One approach to defending against malicious Android applications has been to analyze them to detect potential information leaks. This paper describes a new static taint analysis for Android that combines and augments the FlowDroid and Epicc analyses to precisely track both inter-component and intra-component data flow in a set of Android applications. The analysis takes place in two phases: given a set of applications, we first determine the data flows enabled individually by each application, and the conditions under which these are possible; we then build on these results to enumerate the potentially dangerous data flows enabled by the set of applications as a whole. This paper describes our analysis method, implementation, and experimental results."} {"_id": "de33698f7b2264bf7313f43d1de8c2d19e2a2f7a", "title": "An Optimization Approach for Utilizing Cloud Services for Mobile Devices in Cloud Environment", "text": "Mobile cloud computing has emerged aiming at assisting mobile devices in processing computationally or data intensive tasks using cloud resources. This paper presents an optimization approach for utilizing cloud services for mobile clients in the mobile cloud, which considers the benefits of both mobile device users and cloud datacenters. The mobile cloud service provisioning optimization is conducted in parallel under deadline, budget, and energy expenditure constraints. The mobile cloud provider runs multiple VMs to execute jobs for mobile device users; the cloud providers want to maximize revenue and minimize electrical cost. The mobile device user makes a suitable payment to the cloud datacenter provider for available cloud resources to optimize the benefit. 
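A baseline sketch for clustering mixed continuous/categorical data of the kind the clustering paper above targets: one-hot encode the categoricals, scale the continuous features, then run k-means. This is a common simplification, not the paper's iterative drill-down algorithm; the columns are hypothetical clinical attributes.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "age": rng.integers(20, 90, 200),               # continuous
    "lab_value": rng.normal(5.0, 1.5, 200),         # continuous
    "diagnosis": rng.choice(["A", "B", "C"], 200),  # categorical
})

prep = ColumnTransformer([
    ("num", StandardScaler(), ["age", "lab_value"]),
    ("cat", OneHotEncoder(), ["diagnosis"]),
])
model = make_pipeline(prep, KMeans(n_clusters=4, n_init=10, random_state=0))
labels = model.fit_predict(df)
print(np.bincount(labels))  # cluster sizes
```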
The paper proposes a distributed optimization algorithm for utilizing cloud services for mobile devices. The experiments test the convergence of the proposed algorithm and compare it with other related work. They study the impacts of job arrival rate, deadline, and mobility speed on energy consumption ratio, execution success ratio, resource allocation efficiency, and cost. The results show that the proposed algorithm outperforms related work in terms of performance metrics such as allocation efficiency."} {"_id": "39e8237f35361b64ac6de9d4f3c93f409f73dac0", "title": "Inner Product Similarity Search using Compositional Codes", "text": "This paper addresses the nearest neighbor search problem under inner product similarity and introduces a compact code-based approach. The idea is to approximate a vector using the composition of several elements selected from a source dictionary and to represent this vector by a short code composed of the indices of the selected elements. The inner product between a query vector and a database vector is efficiently estimated from the query vector and the short code of the database vector. We show the superior performance of the proposed group M-selection algorithm that selects M elements from M source dictionaries for vector approximation in terms of search accuracy and efficiency for compact codes of the same length via theoretical and empirical analysis. Experimental results on large-scale datasets (1M and 1B SIFT features, 1M linear models and Netflix) demonstrate the superiority of the proposed approach."} {"_id": "ae6c24aa50eb2b5d6387a761655091b88d414359", "title": "Lexical input as related to children's vocabulary acquisition: effects of sophisticated exposure and support for meaning.", "text": "A corpus of nearly 150,000 maternal word-tokens used by 53 low-income mothers in 263 mother-child conversations in 5 settings (e.g., play, mealtime, and book readings) was studied. Ninety-nine percent of maternal lexical input consisted of the 3,000 most frequent words. Children's vocabulary performance in kindergarten and later in 2nd grade related more to the occurrence of sophisticated lexical items than to quantity of lexical input overall. Density of sophisticated words heard and the density with which such words were embedded in helpful or instructive interactions, at age 5 at home, independently predicted over a third of the variance in children's vocabulary performance in both kindergarten and 2nd grade. These two variables, with controls for maternal education, child nonverbal IQ, and amount of child's talk produced during the interactive settings, at age 5, predicted 50% of the variance in children's 2nd-grade vocabulary."} {"_id": "c90fc38e3c38b7feb40e33a23d643be2d7e9fdaa", "title": "Automotive radar target characterization from 22 to 29 GHz and 76 to 81 GHz", "text": "The radar signatures of automotive targets were measured from 22 to 29 GHz and 76 to 81 GHz inside an anechoic chamber using a vector network analyzer. Radar cross section maps of a sedan, truck, van, pedestrian, motorcycle, and bicycle as a function of effective radar sensor bandwidth, center frequency, and polarization are presented."} {"_id": "70d2d4b07b5c65ef4866c7fd61f9620bffa01e29", "title": "A model for smart agriculture using IoT", "text": "Climate and rainfall have been erratic over the past decade. Because of this, in the recent era climate-smart methods, collectively called smart agriculture, have been adopted by many Indian farmers. 
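A sketch of the compact-code inner-product idea above: a database vector is approximated by the sum of one element chosen from each of M source dictionaries, and its inner product with a query is estimated from M table lookups. The greedy encoding here is a stand-in for the paper's group M-selection algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, K = 16, 4, 32                      # dim, dictionaries, elements per dictionary
dicts = rng.normal(size=(M, K, d)) / M   # source dictionaries
x = rng.normal(size=d)                   # database vector

# Greedy encoding: pick from each dictionary the element best matching the residual.
code, residual = [], x.copy()
for m in range(M):
    idx = int(np.argmax(dicts[m] @ residual))
    code.append(idx)
    residual = residual - dicts[m, idx]

q = rng.normal(size=d)                     # query vector
tables = np.einsum('d,mkd->mk', q, dicts)  # per-dictionary lookup tables
estimate = sum(tables[m, code[m]] for m in range(M))
print("true:", float(q @ x), "estimated:", float(estimate))
```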
Smart agriculture is an automated and directed information technology implemented with the IoT (Internet of Things). The IoT is developing rapidly and is widely applied in all wireless environments. In this paper, the integration of sensor technology and wireless networks in IoT technology is studied and reviewed based on the actual situation of the agricultural system. A Remote Monitoring System (RMS) combining internet and wireless communications is proposed. The major objective is to collect real-time data on the agricultural production environment and provide easy access to agricultural facilities such as alerts through the Short Messaging Service (SMS) and advice on weather patterns, crops, etc."} {"_id": "ea88b58158395aefbb27f4706a18dfa2fd7daa89", "title": "\"It Won't Happen To Me!\": Self-Disclosure in Online Social Networks", "text": "Despite the considerable amount of self-disclosure in Online Social Networks (OSN), the motivation behind this phenomenon is still little understood. Building on the Privacy Calculus theory, this study fills this gap by taking a closer look at the factors behind individual self-disclosure decisions. In a Structural Equation Model with 237 subjects we find Perceived Enjoyment and Privacy Concerns to be significant determinants of information revelation. We confirm that the privacy concerns of OSN users are primarily determined by the perceived likelihood of a privacy violation and much less by the expected damage. These insights provide a solid basis for OSN providers and policy-makers in their effort to ensure healthy disclosure levels that are based on objective rationale rather than subjective misconceptions."} {"_id": "7427a00058b9925fc9c23379217cba637cb15f99", "title": "Finite-time posture control of a unicycle robot", "text": "This paper deals with the problem of posture control for a unicycle (single-wheel) robot by designing and analyzing a finite-time posture control strategy. The unicycle robot consists of a lower rotating wheel, an upper rotating disk, and a robot body. The rotating wheel enables the unicycle robot to move forward and backward to obtain pitch (longitudinal) balance. The rotating disk is used for obtaining roll (lateral) balance. Therefore, the unicycle robot can be viewed as a mobile inverted pendulum for the pitch axis and a reaction-wheel pendulum for the roll axis. The dynamics models for unicycle robots at roll and pitch axes are derived using the Lagrange equation. According to the dynamics models, two finite-time controllers are designed: a finite-time roll controller and a finite-time pitch controller. Moreover, the stability of the unicycle robot with the proposed finite-time posture control strategy is also analyzed. Finally, simulations are carried out to illustrate the effectiveness of the finite-time posture control strategy for the unicycle robot at the roll and pitch axes."} {"_id": "7e8bc9da7071d6a2e4f2a5437d7071868913f8b2", "title": "Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques", "text": "Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses, and material damage. Traditional studies of road traffic accidents in urban zones demand a very high investment of time and money; additionally, the results are often not current. 
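A generic illustration of finite-time stabilization of the kind used in the unicycle posture controller above: for dx/dt = u with u = -k * |x|^a * sign(x) and 0 < a < 1, the state reaches zero in finite time rather than only asymptotically. The gains and the first-order plant are illustrative assumptions; the paper's roll/pitch dynamics are far richer.

```python
import numpy as np

k, a, dt = 2.0, 0.5, 1e-3
x, t = 1.0, 0.0
while abs(x) > 1e-9 and t < 10.0:
    u = -k * abs(x) ** a * np.sign(x)
    x += dt * u          # Euler integration of dx/dt = u
    t += dt
# Theory predicts a settling time of |x0|^(1-a) / (k*(1-a)) = 1.0 s here.
print(f"converged at t = {t:.3f} s")
```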
However, nowadays in many countries, crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article we identified the zones, roads, and specific times in Mexico City (CDMX) in which the largest numbers of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports. Data mining techniques were applied with the help of Weka: Expectation Maximization (EM) was selected to obtain the ideal number of clusters for the data, and k-means was used as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS. Keywords\u2014Data mining, K-means, road traffic accidents, Waze, Weka."} {"_id": "9dbfcf610da740396b2b9fd75c7032f0b94896d7", "title": "Using Program Analysis to Improve Database Applications", "text": "Applications that interact with database management systems (DBMSs) are ubiquitous. Such database applications are usually hosted on an application server and perform many small accesses over the network to a DBMS hosted on the database server to retrieve data for processing. For decades, the database and programming systems research communities have worked on optimizing such applications from different perspectives: database researchers have built highly efficient DBMSs, and programming systems researchers have developed specialized compilers and runtime systems for hosting applications. However, there has been relatively little work that optimizes database applications by considering these specialized systems in combination and looking for optimization opportunities that span across them. In this article, we highlight three projects that optimize database applications by looking at both the programming system and the DBMS in a holistic manner. By carefully revisiting the interface between the DBMS and the application, and by applying a mix of declarative database optimization and modern program analysis techniques, we show that a speedup of multiple orders of magnitude is possible in real-world applications."} {"_id": "7cc21bcce77dacd1ea7646244043261167f2dcd0", "title": "Exploring Models and Data for Remote Sensing Image Caption Generation", "text": "Inspired by recent developments in artificial satellites, remote sensing images have attracted extensive attention. Recently, notable progress has been made in scene classification and target detection. However, it is still not clear how to describe the remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe the remote sensing images with accurate and flexible sentences. First, some annotated instructions are presented to better describe the remote sensing images considering the special characteristics of remote sensing images. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing captioning. Extensive experiments on the proposed data set demonstrate that the content of the remote sensing image can be completely described by generating language descriptions. 
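A sketch of the clustering step in the traffic-accident article above: an EM-based model (a Gaussian mixture) is used to choose the number of clusters, here via BIC as one standard criterion, and k-means then groups the points. The coordinates are synthetic stand-ins for geolocated Waze accident reports.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
pts = np.vstack([rng.normal(c, 0.05, size=(80, 2))        # 3 synthetic hotspots
                 for c in ([19.43, -99.13], [19.36, -99.18], [19.48, -99.10])])

bics = {k: GaussianMixture(k, random_state=0).fit(pts).bic(pts)
        for k in range(1, 8)}
best_k = min(bics, key=bics.get)           # EM step: pick k with the lowest BIC
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(pts)
print("chosen k:", best_k, "cluster sizes:", np.bincount(labels))
```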
The data set is available at https://github.com/201528014227051/RSICD_optimal."} {"_id": "fdc3948f5fec24eb7cd4178aee9732ab284f1f1c", "title": "Hybrid Multi-Mode Narrow-Frame Antenna for WWAN/LTE Metal-Rimmed Smartphone Applications", "text": "A hybrid multi-mode narrow-frame antenna for WWAN/LTE metal-rimmed smartphone applications is proposed in this paper. The ground clearance is only 5 mm \u00d7 45 mm, which is promising for narrow-frame smartphones. The metal rim with a small gap is connected to the system ground by three grounded patches. This proposed antenna can excite three coupled-loop modes and one slot mode. By incorporating these four modes, the proposed antenna can provide coverage for GSM850/900, DCS/PCS/UMTS2100, and LTE2300/2500 operations. Detailed design considerations of the proposed antenna are described, and both experimental and simulated results are also presented."} {"_id": "b7706677ff24a509651bdcd43076aadcad6e378c", "title": "An integrated fuzzy MCDM approach for supplier evaluation and selection", "text": "A fuzzy multi-criteria group decision making approach that makes use of quality function deployment (QFD), fusion of fuzzy information and the 2-tuple linguistic representation model is developed for supplier selection. The proposed methodology seeks to establish the relevant supplier assessment criteria while also considering the impacts of inner dependence among them. Two interrelated house of quality matrices are constructed, and fusion of fuzzy information and the 2-tuple linguistic representation model are employed to compute the weights of supplier selection criteria and subsequently the ratings of suppliers. The proposed method is apt to manage non-homogeneous information in a decision setting with multiple information sources. The decision framework presented in this paper employs the ordered weighted averaging (OWA) operator, and the aggregation process is based on combining information by means of fuzzy sets on a basic linguistic term set. The proposed framework is illustrated through a case study conducted in a private hospital in Istanbul."} {"_id": "8bbbdff11e88327816cad3c565f4ab1bb3ee20db", "title": "Automatic Semantic Face Recognition", "text": "Recent expansion in surveillance systems has motivated research in soft biometrics that enable the unconstrained recognition of human faces. Comparative soft biometrics show superior recognition performance compared with categorical soft biometrics and have been the focus of several studies which have highlighted their ability for recognition and retrieval in constrained and unconstrained environments. These studies, however, only addressed face recognition for retrieval using human-generated attributes, posing a question about the feasibility of automatically generating comparative labels from facial images. In this paper, we propose an approach for the automatic comparative labelling of facial soft biometrics. Furthermore, we investigate unconstrained human face recognition using these comparative soft biometrics in a human-labelled gallery (and vice versa). 
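A toy sketch of automatic comparative label generation as studied in the face recognition paper above: given per-face attribute strength estimates (here random stand-ins for a regressor's outputs), a pair of faces is labelled "more", "less", or "same" per attribute by thresholding the difference. The attribute list and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
attributes = ["young", "masculine", "pale skin", "thick eyebrows"]
face_a, face_b = rng.random(4), rng.random(4)   # stand-in attribute scores

def comparative_labels(a, b, eps=0.1):
    out = []
    for name, va, vb in zip(attributes, a, b):
        if abs(va - vb) < eps:
            out.append((name, "same"))
        else:
            out.append((name, "more" if va > vb else "less"))
    return out   # labels describe face A relative to face B

print(comparative_labels(face_a, face_b))
```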
Using a subset from the LFW dataset, our experiments show the efficacy of the automatic generation of comparative facial labels, highlighting the potential extensibility of the approach to other face recognition scenarios and larger ranges of attributes."} {"_id": "f23dfa84771615d4b4ea733e325a44922d3c9711", "title": "Emergency situation awareness from twitter for crisis management", "text": "This paper describes ongoing work with the Australian Government to detect, assess, summarise, and report messages of interest for crisis coordination published by Twitter. The developed platform and client tools, collectively termed the Emergency Situation Awareness - Automated Web Text Mining (ESA-AWTM) system, demonstrate how relevant Twitter messages can be identified and utilised to inform the situation awareness of an emergency incident as it unfolds.\n A description of the ESA-AWTM platform is presented detailing how it may be used for real life emergency management scenarios. These scenarios are focused on general use cases to provide: evidence of pre-incident activity; near-real-time notification of an incident occurring; first-hand reports of incident impacts; and gauging the community response to an emergency warning. Our tools have recently been deployed in a trial for use by crisis coordinators."} {"_id": "04cf655bfc5da8ae164626034734e1d409adf5ed", "title": "Avoiding and Treating Blindness From Fillers: A Review of the World Literature.", "text": "BACKGROUND\nAs the popularity of soft tissue fillers increases, so do the reports of adverse events. The most serious complications are vascular in nature and include blindness.\n\n\nOBJECTIVE\nTo review the cases of blindness after filler injection, to highlight key aspects of the vascular anatomy, and to discuss prevention and management strategies.\n\n\nMETHODS\nA literature review was performed to identify all the cases of vision changes from filler in the world literature.\n\n\nRESULTS\nNinety-eight cases of vision changes from filler were identified. The sites that were high risk for complications were the glabella (38.8%), nasal region (25.5%), nasolabial fold (13.3%), and forehead (12.2%). Autologous fat (47.9%) was the most common filler type to cause this complication, followed by hyaluronic acid (23.5%). The most common symptoms were immediate vision loss and pain. Most cases of vision loss did not recover. Central nervous system complications were seen in 23.5% of the cases. 
No treatments were found to be consistently successful in treating blindness.\n\n\nCONCLUSION\nAlthough the risk of blindness from fillers is rare, it is critical for injecting physicians to have a firm knowledge of the vascular anatomy and to understand key prevention and management strategies."} {"_id": "b65f3cd5431f91cca97469996f08d2c139d8ef6a", "title": "Filler Rhinoplasty Evaluated by Anthropometric Analysis.", "text": "BACKGROUND\nThere are no reports of objectively evaluating the efficacy of filler rhinoplasty by anthropometric techniques.\n\n\nOBJECTIVE\nTo objectively demonstrate the effectiveness of filler rhinoplasty by anthropometric analysis.\n\n\nMATERIALS AND METHODS\nA total of 242 patients who revisited the clinic within 2 months of undergoing hyaluronic acid filler rhinoplasty were analyzed based on the injection site, injected volume, and the change in anthropometry.\n\n\nRESULTS\nAmong the 242 patients, 112 (46.3%) were in the nasal dorsum augmentation group, 8 (3.3%) were in the tip rotation group, and 122 (50.4%) were in the whole nose augmentation group. Average injection volume was 1 \u00b1 0.4 mL for nasal dorsum and 0.9 \u00b1 0.3 mL for tip rotation, whereas 1.6 \u00b1 0.5 mL was used for whole nose augmentation. On follow-up, the radix height, nasofrontal angle, and nasolabial angle (NLA) had increased by 78.3%, 5.7 \u00b1 4.1\u00b0, and 9.4 \u00b1 4.5\u00b0, respectively, whereas the modified nasofacial angle had decreased by 1.9 \u00b1 2.9\u00b0. Three cases (1.2%) of vascular complications were encountered.\n\n\nCONCLUSION\nFiller rhinoplasty is a simple and effective treatment modality producing outcomes comparable with surgical augmentation rhinoplasty. Among various anthropometric measurements, the nasal radix height was the most useful for evaluating dorsum augmentation, whereas the NLA was the best for nasal tip rotation."} {"_id": "407ee40d3f6168411e4b66eec003e416f6195466", "title": "The anatomical origin and course of the angular artery regarding its clinical implications.", "text": "BACKGROUND\nThe purposes of this study were to determine the morphological features and conceptualize the anatomical definition of the angular artery (AA) as an aid to practical operations in the clinical field.\n\n\nMATERIALS AND METHODS\nThirty-one hemifaces from 17 Korean cadavers and 26 hemifaces from 13 Thai cadavers were dissected.\n\n\nRESULTS\nThe topography of the AA was classified into 4 types according to its course: Type I (persistent pattern), in which the AA traverses the lateral side of the nose (11%); Type II (detouring pattern), in which the AA traverses the cheek and tear trough area (18%); Type III (alternative pattern), in which the AA traverses the medial canthal area through a branch of the ophthalmic artery (22.8%); and Type IV (latent pattern), in which the AA is absent (26.3%).\n\n\nCONCLUSION\nThe findings of this study will contribute toward improved outcomes for cosmetic surgery involving the injection of facial filler by enhancing the understanding of AA anatomy."} {"_id": "766b55aa9c8915a9f786f0fd9f0e79f2e9bf57dc", "title": "Ischemic oculomotor nerve palsy and skin necrosis caused by vascular embolization after hyaluronic acid filler injection: a case report.", "text": "Hyaluronic acid filler injection is widely used for soft tissue augmentation. However, there can be disastrous complications caused by direct vascular embolization. 
We present a case of ischemic oculomotor nerve palsy and skin necrosis after hyaluronic acid filler injection on the glabella. Blepharoptosis, exotropia and diplopia developed suddenly after the injection, and skin necrosis gradually occurred. Symptoms and signs of oculomotor nerve palsy continuously improved with steroid therapy. Skin defects healed with minimal scars through intensive wound care. Percutaneous filler injection of periorbital areas should be performed carefully by experienced surgeons, and the possibility of embolization should be considered promptly if symptoms develop."} {"_id": "efa97923a4a9174c0f1c6f5b16c4019e1fcb3e21", "title": "Complications of injectable fillers, part I.", "text": "Dermal filling has rapidly become one of the most common procedures performed by clinicians worldwide. The vast majority of treatments are successful and patient satisfaction is high. However, complications, both mild and severe, have been reported and result from injection of many different types of dermal fillers. In this Continuing Medical Education review article, the author describes common technical errors, the signs and symptoms of both common and rare complications, and management of sequelae in clear, easily adaptable treatment algorithms."} {"_id": "2f725ea2e93ea4814d8b83ce6f9a82141cedbd4e", "title": "Developing a Research Agenda for Human-Centered Data Science", "text": "The study and analysis of large and complex data sets offer a wealth of insights in a variety of applications. Computational approaches provide researchers access to broad assemblages of data, but the insights extracted may lack the rich detail that qualitative approaches have brought to the understanding of sociotechnical phenomena. How do we preserve the richness associated with traditional qualitative methods while utilizing the power of large data sets? How do we uncover social nuances or consider ethics and values in data use? These and other questions are explored by human-centered data science, an emerging field at the intersection of human-computer interaction (HCI), computer-supported cooperative work (CSCW), human computation, and the statistical and computational techniques of data science. This workshop, the first of its kind at CSCW, seeks to bring together researchers interested in human-centered approaches to data science to collaborate, define a research agenda, and form a community."} {"_id": "03e6e9198bffe536fe6ccd1edb91ccf0a68b254d", "title": "MEASURING THE SIMILARITY OF TWO IMAGE SEQUENCES", "text": "We propose a novel similarity measure of two image sequences based on shapeme histograms. The idea of shapeme histogram has been used for single image/texture recognition, but is used here to solve the sequence-to-sequence matching problem. We develop techniques to represent each sequence as a set of shapeme histograms, which captures different variations of the object appearances within the sequence. These shapeme histograms are computed from the set of 2D invariant features that are stable across multiple images in the sequence, and therefore minimize the effects of both background clutter and 2D pose variations. We define the sequence similarity measure as the similarity of the most similar pair of images from both sequences. This definition maximizes the chance of matching between two sequences of the same object, because it requires only part of the sequences being similar. 
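A minimal sketch of the sequence-similarity definition just described in the shapeme-histogram abstract: two sequences are as similar as their most similar pair of per-image histograms. Histogram intersection is one plausible choice of histogram similarity here, and the random "histograms" are stand-ins for real shapeme features.

# Sketch: max-over-pairs sequence similarity between two sets of normalized
# histograms, one histogram per image in each sequence.
import numpy as np

def hist_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())  # histograms assumed L1-normalized

def sequence_similarity(seq_a, seq_b):
    # Similarity of the most similar pair of images across the two sequences.
    return max(hist_intersection(a, b) for a in seq_a for b in seq_b)

rng = np.random.default_rng(1)
def fake_sequence(n, bins=32):
    h = rng.random((n, bins))
    return h / h.sum(axis=1, keepdims=True)

print(round(sequence_similarity(fake_sequence(5), fake_sequence(7)), 3))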
We also introduce a weighting scheme to conduct an implicit feature selection process during the matching of two shapeme histograms. Experiments on clustering image sequences of tracked objects demonstrate the efficacy of the proposed method."} {"_id": "6ad2b6ceb9d4102ddbad7f853748add299c5d58b", "title": "Design and implementation of a 1.3 kW, 7-level flying capacitor multilevel AC-DC converter with power factor correction", "text": "This work presents a 1.3 kW, single phase, AC-DC converter with power factor correction based on a 7-level flying capacitor multilevel converter. The topology features a tradeoff of active circuit complexity for dramatic reduction in the magnetic component size, while maintaining a high efficiency. In this work, we demonstrate these features through theoretical analysis as well as a hardware prototype. It has been experimentally demonstrated that the prototype can operate at a universal AC input from 90\u2013230 VRMS and frequencies from 47\u201363 Hz with an output voltage of 400 V, achieving a box power density of 1.21 W/cm3 (19.8 W/in3) and a peak efficiency of 97.6%. This prototype is the first successful demonstration of a 7-level flying capacitor multilevel boost topology as an AC-DC converter with fully implemented digital PFC control and self-powered start-up from a universal AC input."} {"_id": "68302d9c362fd66d6f3e3aca0d1a0303cd241598", "title": "Gender Identity and Adjustment in Middle Childhood", "text": "Gender identity is a central construct in many accounts of psychosocial development, yet it has been defined in diverse ways. Kohlberg (1966) and Zucker et al. (1993) viewed gender identity as knowing that one is a member of one sex rather than the other; Kagan (1964) regarded gender identity as the degree to which one perceives the self as conforming to cultural stereotypes for one's gender; Bem (1981) saw gender identity as the degree to which one internalizes societal pressures for gender conformity; Green (1974) and Spence (1985) viewed gender identity as a fundamental sense of acceptance of, and of belonging to, one's gender. It is conceivable that all of the foregoing (and still other) conceptualizations of gender identity have merit but that different varieties or facets of gender identity serve different psychological functions or affect adjustment in different ways. Thus, it may be fruitful to regard gender identity as a multidimensional construct and to define gender identity as the collection of thoughts and feelings one has about one's gender category and one's membership in it. A recent study by Egan and Perry (2001) was built on this premise. Egan and Perry proposed that gender identity is composed of five major components: (a) membership knowledge (knowledge of membership in a gender category); (b) gender typicality (the degree to which one feels one is a typical member of one's gender category); (c) gender contentedness (the degree to which one is happy with one's gender assignment); (d) felt pressure for gender conformity (the degree to which one feels pressure from parents, peers, and self for conformity to gender stereotypes); and (e) intergroup bias (the extent to which one believes one's own sex is superior to the other). Egan and Perry (2001) measured the last four of these components of gender identity in preadolescent children and found the components to be relatively independent, to be fairly stable over a school year, and to relate to adjustment (i.e., self-esteem and peer acceptance) in different ways. 
Gender typicality and gender contentedness were favorably related to adjustment, whereas felt pressure and intergroup bias were negatively associated with adjustment. Links between the gender identity constructs and the adjustment indexes remained significant when children's perceptions of self-efficacy for a wide variety of sex-typed activities were statistically controlled. This suggests that the gender identity constructs carry implications for adjustment beyond self-perceptions of specific sex-linked competencies. The purposes of the \u2026"} {"_id": "4b631afce2f2a8b9b4c16c5e13c3765e75a38e54", "title": "Load Forecasting via Deep Neural Networks", "text": "Nowadays, electricity plays a vital role in national economic and social development. Accurate load forecasting can help power companies to secure electricity supply and scheduling and reduce wastes since electricity is difficult to store. In this paper, we propose a novel Deep Neural Network architecture for short term load forecasting. We integrate multiple types of input features by using appropriate neural network components to process each of them. We use Convolutional Neural Network components to extract rich features from historical load sequence and use Recurrent Components to model the implicit dynamics. In addition, we use Dense layers to transform other types of features. Experimental results on a large data set containing hourly loads of a North China city show the superiority of our method. Moreover, the proposed method is quite flexible and can be applied to other time series prediction tasks."} {"_id": "fb996b851971db84c56bcb8d0d96a2e04b21417f", "title": "3D-Printing of Meso-structurally Ordered Carbon Fiber/Polymer Composites with Unprecedented Orthotropic Physical Properties", "text": "Here we report the first example of a class of additively manufactured carbon fiber reinforced composite (AMCFRC) materials which have been achieved through the use of a latent thermal cured aromatic thermoset resin system, through an adaptation of direct ink writing (DIW) 3D-printing technology. We have developed a means of printing high performance thermoset carbon fiber composites, which allow the fiber component of a resin and carbon fiber fluid to be aligned in three dimensions via controlled micro-extrusion and subsequently cured into complex geometries. Characterization of our composite systems clearly show that we achieved a high order of fiber alignment within the composite microstructure, which in turn allows these materials to outperform equivalently filled randomly oriented carbon fiber and polymer composites. Furthermore, our AM carbon fiber composite systems exhibit highly orthotropic mechanical and electrical responses as a direct result of the alignment of carbon fiber bundles in the microscale which we predict will ultimately lead to the design of truly tailorable carbon fiber/polymer hybrid materials having locally programmable complex electrical, thermal and mechanical response."} {"_id": "754a984815c7fab512e651fc6c5d6aa4864f559e", "title": "Ten-year research update review: child sexual abuse.", "text": "OBJECTIVE To provide clinicians with current information on prevalence, risk factors, outcomes, treatment, and prevention of child sexual abuse (CSA). To examine the best-documented examples of psychopathology attributable to CSA. METHOD Computer literature searches of and for key words. 
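An aside on the load-forecasting abstract above: a sketch of the hybrid architecture it describes, with convolutional layers over the historical load sequence, a recurrent layer for the implicit dynamics, and dense layers for the other features. Layer sizes, the GRU choice, and the auxiliary-feature dimension are assumptions, not the paper's configuration.

# Sketch: CNN over the load sequence + recurrent dynamics + dense path for
# auxiliary features (e.g. weather, calendar), fused for a next-hour forecast.
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    def __init__(self, seq_len=168, aux_dim=8):
        super().__init__()
        self.conv = nn.Sequential(  # feature extraction from the load sequence
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU())
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.aux = nn.Sequential(nn.Linear(aux_dim, 32), nn.ReLU())
        self.head = nn.Linear(64 + 32, 1)  # predicted next-hour load

    def forward(self, load_seq, aux_feats):
        x = self.conv(load_seq.unsqueeze(1))   # (B, 32, T)
        _, h = self.rnn(x.transpose(1, 2))     # final hidden state: (1, B, 64)
        z = torch.cat([h.squeeze(0), self.aux(aux_feats)], dim=1)
        return self.head(z)

model = LoadForecaster()
print(model(torch.randn(4, 168), torch.randn(4, 8)).shape)  # torch.Size([4, 1])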
All English-language articles published after 1989 containing empirical data pertaining to CSA were reviewed. RESULTS CSA constitutes approximately 10% of officially substantiated child maltreatment cases, numbering approximately 88,000 in 2000. Adjusted prevalence rates are 16.8% and 7.9% for adult women and men, respectively. Risk factors include gender, age, disabilities, and parental dysfunction. A range of symptoms and disorders has been associated with CSA, but depression in adults and sexualized behaviors in children are the best-documented outcomes. To date, cognitive-behavioral therapy (CBT) of the child and a nonoffending parent is the most effective treatment. Prevention efforts have focused on child education to increase awareness and home visitation to decrease risk factors. CONCLUSIONS CSA is a significant risk factor for psychopathology, especially depression and substance abuse. Preliminary research indicates that CBT is effective for some symptoms, but longitudinal follow-up and large-scale \"effectiveness\" studies are needed. Prevention programs have promise, but evaluations to date are limited."} {"_id": "ea741073b635819de3aec7db648b84eb4abe976a", "title": "Appropriateness of plantar pressure measurement devices: a comparative technical assessment.", "text": "Accurate plantar pressure measurements are mandatory in both clinical and research contexts. Differences in accuracy, precision and reliability of the available devices have so far prevented the onset of standardization processes or the definition of reliable reference datasets. In order to comparatively assess the appropriateness of the most used pressure measurement devices (PMD) on the market, in 2006 the Institute the author is working for approved a two-year scientific project aimed to design, validate and implement dedicated testing methods for both in-factory and on-the-field assessment. A first testing phase was also performed which finished in December 2008. Five commercial PMDs using different technologies (resistive, elastomer-based capacitive, air-based capacitive) were assessed and compared with respect to absolute pressure measurements, hysteresis, creep and COP estimation. The static and dynamic pressure tests showed very high accuracy of capacitive, elastomer-based technology (RMSE<0.5%), and quite a good performance of capacitive, air-based technology (RMSE<5%). High accuracy was also found for the resistive technology by TEKSCAN (RMSE<2.5%), even though a complex ad hoc calibration was necessary."} {"_id": "4b119dc8a4e38824d73a9f88179935a96e001aaf", "title": "Finite Precision Error Analysis of Neural Network Hardware Implementations", "text": "For network learning, at least 14-16 bits of precision must be used for the weights to avoid having the training process divert too much from the trajectory of the high-precision computation. The paper is devoted to the derivation of finite precision error analysis techniques for neural network implementations, especially analysis of the back-propagation learning of MLPs. This analysis technique is proposed to be more versatile and to prepare the ground for a wider variety of neural network algorithms: recurrent neural networks, competitive learning networks, etc. 
All these networks share similar computational mechanisms as those used in back-propagation learning. For the forward retrieving operations, it is shown that 8-bit weights are sufficient to maintain the same performance as using high-precision computation. Due to the nature of the XOR problem, a soft convergence is good enough for the termination of training. Therefore, at the predicted point of 12-13 bits of weights, the squared difference curve dives. Another interesting observation is worthwhile to mention: the total finite precision error in a single iteration of weight updating is mainly generated in the final jamming operators in the computation of the output delta, hidden delta, and weight update. Therefore, even though it is required to have at least 13 to 16 bits assigned to the computation of the weight update and stored as the total weight value, the number of weight bits in the computation of the forward retrieving and hidden delta steps of learning can be as low as 8 bits without excessive degradation of learning convergence and accuracy."} {"_id": "662089d17a0f4518b08b76274aa842134990134f", "title": "Rotation Invariant Multi-View Color Face Detection Based on Skin Color and Adaboost Algorithm", "text": "As the training of Adaboost is complicated or does not work well for multi-view face detection with large in-plane rotation angles, this paper proposes a rotation invariant multi-view color face detection method combining skin color segmentation and a multi-view Adaboost algorithm. First, possible face regions are quickly detected using a skin color table, and skin-colored background regions adhering to the face are separated by color clustering. After region merging, the candidate face region is obtained. Then the in-plane face orientation is calculated by the K-L transform, and finally the rotation-corrected candidate face region is scanned by a multi-view Adaboost classifier to locate the face accurately. The experiments show that the method can effectively detect multi-view faces with large in-plane rotation, which conventional Adaboost cannot do. It can be effectively applied to multi-view, multi-face images with complex backgrounds."} {"_id": "021f37e9da69ea46fba9d2bf4e7ca3e8ba7b3448", "title": "Amorphous Silicon Solar Vivaldi Antenna", "text": "An ultrawideband solar Vivaldi antenna is proposed. Cut from amorphous silicon cells, it maintains a peak power at 4.25 V, which overcomes a need for lossy power management components. The wireless communications device can yield solar energy or function as a rectenna for dual-source energy harvesting. The solar Vivaldi performs with 0.5-2.8 dBi gain from 0.95-2.45 GHz, and in rectenna mode, it covers three bands for wireless energy scavenging."} {"_id": "505869faa8fa30b4d767a5de452fa0057d364c5d", "title": "Automating metadata generation: the simple indexing interface", "text": "In this paper, we focus on the development of a framework for automatic metadata generation. The first step towards this framework is the definition of an Application Programmer Interface (API), which we call the Simple Indexing Interface (SII). The second step is the definition of a framework for implementation of the SII. Both steps are presented in some detail in this paper. 
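An aside on the finite-precision abstract above: a toy simulation of its central trade-off, quantizing trained weights to n bits of a fixed-point grid and comparing forward-pass outputs against the full-precision reference. The 2-2-1 tanh network, weight range, and bit widths are illustrative assumptions, not the paper's setup.

# Sketch: uniform fixed-point weight quantization and its forward-pass error.
import numpy as np

def quantize(w, bits, w_max=2.0):
    step = 2 * w_max / (2 ** bits)          # uniform fixed-point grid
    return np.clip(np.round(w / step) * step, -w_max, w_max)

def forward(x, w1, b1, w2, b2):
    h = np.tanh(x @ w1 + b1)
    return np.tanh(h @ w2 + b2)

rng = np.random.default_rng(0)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
w1, b1 = rng.normal(size=(2, 2)), rng.normal(size=2)
w2, b2 = rng.normal(size=(2, 1)), rng.normal(size=1)

ref = forward(x, w1, b1, w2, b2)            # "high precision" reference
for bits in (4, 8, 16):
    out = forward(x, quantize(w1, bits), quantize(b1, bits),
                  quantize(w2, bits), quantize(b2, bits))
    print(bits, float(np.abs(out - ref).max()))  # error shrinks as bits grow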
We also report on empirical evaluation of the metadata that the SII and supporting framework generated in a real-life context."} {"_id": "b17d5cf76dc8a51b44de13f4ad02e801c489775f", "title": "Four common anatomic variants that predispose to unfavorable rhinoplasty results: a study based on 150 consecutive secondary rhinoplasties.", "text": "A retrospective study was conducted of 150 consecutive secondary rhinoplasty patients operated on by the author before February of 1999, to test the hypothesis that four anatomic variants (low radix/low dorsum, narrow middle vault, inadequate tip projection, and alar cartilage malposition) strongly predispose to unfavorable rhinoplasty results. The incidences of each variant were compared with those in 50 consecutive primary rhinoplasty patients. Photographs before any surgery were available in 61 percent of the secondary patients; diagnosis in the remaining individuals was made from operative reports, physical diagnosis, or patient history. Low radix/low dorsum was present in 93 percent of the secondary patients and 32 percent of the primary patients; narrow middle vault was present in 87 percent of the secondary patients and 38 percent of the primary patients; inadequate tip projection was present in 80 percent of the secondary patients and 31 percent of the primary patients; and alar cartilage malposition was present in 42 percent of the secondary patients and 18 percent of the primary patients. In the 150-patient secondary group, the most common combination was the triad of low radix, narrow middle vault, and inadequate tip projection (40 percent of patients). The second largest group (27 percent) had shared all four anatomic points before their primary rhinoplasties. Seventy-eight percent of the secondary patients had three or all four anatomic variants in some combination; each secondary patient had at least one of the four traits; 99 percent had two or more. Seventy-eight percent of the primary patients had at least two variants, and 58 percent had three or more. Twenty-two percent of the primary patients had none of the variants and therefore would presumably not be predisposed to unfavorable results following traditional reduction rhinoplasty. This study supports the contention that four common anatomic variants, if unrecognized, are strongly associated with unfavorable results following primary rhinoplasty. It is important for all surgeons performing rhinoplasty to recognize these anatomic variants to avoid the unsatisfactory functional and aesthetic sequelae that they may produce by making their correction a deliberate part of each preoperative surgical plan."} {"_id": "732dfd5dd20af6c5d78878c841e49322e1c4d607", "title": "Categorization of Anomalies in Smart Manufacturing Systems to Support the Selection of Detection Mechanisms", "text": "An important issue in anomaly detection in smart manufacturing systems is the lack of consistency in the formal definitions of anomalies, faults, and attacks. The term anomaly is used to cover a wide range of situations that are addressed by different types of solutions. In this letter, we categorize anomalies in machines, controllers, and networks along with their detection mechanisms, and unify them under a common framework to aid in the identification of potential solutions. 
The main contribution of the proposed categorization is that it allows the identification of gaps in anomaly detection in smart manufacturing systems."} {"_id": "6bc4b1376ec2812b6d752c4f6bc8d8fd0512db91", "title": "Multimodal Machine Learning: A Survey and Taxonomy", "text": "Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research."} {"_id": "2a3e10b0ff84f36ae642b9431c9be408703533c7", "title": "NCDawareRank: a novel ranking method that exploits the decomposable structure of the web", "text": "Research about the topological characteristics of the hyperlink graph has shown that the Web possesses a nested block structure, indicative of its innate hierarchical organization. This crucial observation opens the way for new approaches that can usefully regard the Web as a Nearly Completely Decomposable (NCD) system. In recent years, such approaches gave birth to various efficient methods and algorithms that exploit NCD from a computational point of view and manage to considerably accelerate the extraction of the PageRank vector. However, very little has been done towards the qualitative exploitation of NCD.\n In this paper we propose NCDawareRank, a novel ranking method that uses the intuition behind NCD to generalize and refine PageRank. NCDawareRank considers both the link structure and the hierarchical nature of the Web in a way that preserves the mathematically attractive characteristics of PageRank and at the same time manages to successfully resolve many of its known problems, including Web Spamming Susceptibility and Biased Ranking of Newly Emerging Pages. Experimental results show that NCDawareRank is more resistant to direct manipulation, alleviates the problems caused by the sparseness of the link graph and assigns more reasonable ranking scores to newly added pages, while maintaining the ability to be easily implemented on a large scale and in a computationally efficient manner."} {"_id": "de1eac60ef1c6af41746430469fe69f6e8a3d258", "title": "Motion Detection Techniques Using Optical Flow", "text": "Motion detection is very important in image processing. One way of detecting motion is using optical flow. Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. 
The method used for finding the optical flow in this project assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. This technique is later used in developing software for motion detection which has the capability to carry out four types of motion detection. The motion detection software presented in this project can also highlight motion regions, measure the motion level, and count the number of objects. Many objects such as vehicles and humans in video streams can be recognized by applying the optical flow technique. Keywords\u2014Background modeling, Motion detection, Optical flow, Velocity smoothness constant, motion trajectories."} {"_id": "33871c9019e44864e79e27255a676f48eb4c4f8f", "title": "What\u2019s good for the goose is good for the GANder: Comparing Generative Adversarial Networks for NLP", "text": "Generative Adversarial Nets (GANs), which use discriminators to help train a generative model, have been successful particularly in computer vision for generating images. However, there are many restrictions in their applications to natural language tasks\u2013mainly, it is difficult to back-propagate through discrete-valued random variables. Yet recent publications have applied GAN with promising results. Sequence GAN (Yu et al. 2017) introduces a solution by modeling the data generator as a reinforcement learning (RL) policy to overcome the generator differentiation problem, with the RL reward signals produced by the discriminator after it judges complete sequences. However, problems with this model persist, as the GAN training objective is inherently unstable, producing a large variation of results that make it difficult to fool the discriminator. Maximum-Likelihood Augmented Discrete GAN (Che et al. 2017) suggests a new low-variance objective for the generator, using a normalized reward signal from the discriminator that corresponds to log-likelihood. Our project explores both proposed implementations: we produce experimental results on both synthetic and real-world discrete datasets to explore the effectiveness of GAN over strong baselines."} {"_id": "a84b3a4a97b5af474fe81af7f8e98048f59f5d2f", "title": "Design and implementation of intrusion detection system using convolutional neural network for DoS detection", "text": "Nowadays, the network is one of the essential parts of life, and many primary activities are performed using the network. Network security therefore plays an important role in administering and monitoring the operation of the system. The intrusion detection system (IDS) is a crucial module to detect and defend against malicious traffic before the system is affected. Such a system can extract information from the network and quickly trigger a reaction, providing real-time protection for the protected system. However, detecting malicious traffic is very complicated because of its large volume and many variants. Also, detection accuracy and execution time are challenges for some detection methods. In this paper, we propose an IDS platform based on a convolutional neural network (CNN), called IDS-CNN, to detect DoS attacks. Experimental results show that our CNN-based DoS detection obtains a high accuracy of up to 99.87%. 
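An aside on the optical-flow abstract above: a sketch of the smoothness-constrained flow computation it describes, in the form of the classical Horn-Schunck iteration (brightness constancy plus a smoothness term, solved by Jacobi-style updates). The toy frames, alpha, and iteration count are assumptions.

# Sketch: Horn-Schunck optical flow with a smoothness constraint.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, iters=100):
    Ix = np.gradient(im1, axis=1)     # spatial brightness gradients
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1                    # temporal gradient
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    avg = np.array([[0, .25, 0], [.25, 0, .25], [0, .25, 0]])  # 4-neighbour mean
    for _ in range(iters):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        d = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * d            # pull flow toward brightness constancy
        v = v_bar - Iy * d            # while staying close to the local mean
    return u, v

# Toy pair: a bright blob shifted one pixel to the right.
f0 = np.zeros((32, 32)); f0[14:18, 14:18] = 1.0
f1 = np.roll(f0, 1, axis=1)
u, v = horn_schunck(f0, f1)
print(round(float(u[15, 15:19].mean()), 2))  # positive => rightward motion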
Moreover, comparisons with other machine learning techniques including KNN, SVM, and Na\u00efve Bayes demonstrate that our proposed method outperforms traditional ones."} {"_id": "44a81a98cb3d25aafc1fa96372f0a68486d648a2", "title": "Is multiagent deep reinforcement learning the answer or the question? A brief survey", "text": "Deep reinforcement learning (DRL) has achieved outstanding results in recent years. This has led to a dramatic increase in the number of applications and methods. Recent works have explored learning beyond single-agent scenarios and have considered multiagent scenarios. Initial results report successes in complex multiagent domains, although there are several challenges to be addressed. In this context, first, this article provides a clear overview of current multiagent deep reinforcement learning (MDRL) literature. Second, it provides guidelines to complement this emerging area by (i) showcasing examples of how methods and algorithms from DRL and multiagent learning (MAL) have helped solve problems in MDRL and (ii) providing general lessons learned from these works. We expect this article will help unify and motivate future research to take advantage of the abundant literature that exists in both areas (DRL and MAL) in a joint effort to promote fruitful research in the multiagent community."} {"_id": "460054a0540e6ad59711d16f2323743a055f7be5", "title": "Identifiable Phenotyping using Constrained Non-Negative Matrix Factorization", "text": "This work proposes a new algorithm for automated and simultaneous phenotyping of multiple co-occurring medical conditions, also referred to as comorbidities, using clinical notes from the electronic health records (EHRs). A basic latent factor estimation technique of non-negative matrix factorization (NMF) is augmented with domain specific constraints to obtain sparse latent factors that are anchored to a fixed set of chronic conditions. The proposed anchoring mechanism ensures a one-to-one identifiable and interpretable mapping between the latent factors and the target comorbidities. Qualitative assessment of the empirical results by clinical experts suggests that the proposed model learns clinically interpretable phenotypes while being predictive of 30-day mortality. The proposed method can be readily adapted to any non-negative EHR data across various healthcare institutions."} {"_id": "69296a15df81fd853e648d160a329cbd9c0050c8", "title": "Integrating perceived playfulness into expectation-confirmation model for web portal context", "text": "This paper investigated the value of including \"playfulness\" in expectation-confirmation theory (ECT) when studying continued use of a web site. Original models examined cognitive beliefs and effects that influence a person's intention to continue to use an information system. Here, an extended ECT model (with an additional relationship between perceived playfulness and satisfaction) was shown to provide a better fit than a simple path from perceived usefulness to satisfaction. The results indicated that perceived playfulness, confirmation to satisfaction, and perceived usefulness all contributed significantly to the users' intent to reuse a web site. 
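An aside on the NMF-phenotyping abstract above: a sketch of the underlying non-negative factorization step using scikit-learn's NMF on a toy patient-by-term count matrix. The paper's anchoring constraints (tying each factor to one chronic condition) are not implemented here; inspecting the top-weighted terms per factor only hints at that interpretability goal.

# Sketch: basic NMF on a non-negative clinical count matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(100, 30)).astype(float)   # patients x clinical terms

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)      # patient loadings on 5 latent "phenotypes"
H = model.components_           # phenotype-to-term weights

for k, row in enumerate(H):
    top_terms = np.argsort(row)[::-1][:3]
    print(f"factor {k}: top term indices {top_terms.tolist()}")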
Thus, we believe that the extended ECT model is an appropriate tool for the study of web site effects."} {"_id": "e481de52378f366d75fa78cff438d1f37842f0aa", "title": "Survey of review spam detection using machine learning techniques", "text": "Online reviews are often the primary factor in a customer\u2019s decision to purchase a product or service, and are a valuable source of information that can be used to determine public opinion on these products or services. Because of their impact, manufacturers and retailers are highly concerned with customer feedback and reviews. Reliance on online reviews gives rise to the potential concern that wrongdoers may create false reviews to artificially promote or devalue products and services. This practice is known as Opinion (Review) Spam, where spammers manipulate and poison reviews (i.e., making fake, untruthful, or deceptive reviews) for profit or gain. Since not all online reviews are truthful and trustworthy, it is important to develop techniques for detecting review spam. By extracting meaningful features from the text using Natural Language Processing (NLP), it is possible to conduct review spam detection using various machine learning techniques. Additionally, reviewer information, apart from the text itself, can be used to aid in this process. In this paper, we survey the prominent machine learning techniques that have been proposed to solve the problem of review spam detection and the performance of different approaches for classification and detection of review spam. The majority of current research has focused on supervised learning methods, which require labeled data, a scarcity when it comes to online review spam. Research on methods for Big Data is of interest, since there are millions of online reviews, with many more being generated daily. To date, we have not found any papers that study the effects of Big Data analytics for review spam detection. The primary goal of this paper is to provide a strong and comprehensive comparative study of current research on detecting review spam using various machine learning techniques and to devise methodology for conducting further investigation."} {"_id": "592a6d781309423ceb95502e92e577ef5656de0d", "title": "Incorporating Structural Alignment Biases into an Attentional Neural Translation Model", "text": "Neural encoder-decoder models of machine translation have achieved impressive results, rivalling traditional translation models. However, their modelling formulation is overly simplistic, and omits several key inductive biases built into traditional models. In this paper we extend the attentional neural translation model to include structural biases from word based alignment models, including positional bias, Markov conditioning, fertility and agreement over translation directions. We show improvements over a baseline attentional model and standard phrase-based model over several language pairs, evaluating on difficult languages in a low resource setting."} {"_id": "6eedf0a4fe861335f7f7664c14de7f71c00b7932", "title": "Neural Turing Machines", "text": "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. 
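At this point in the Neural Turing Machines abstract, a small sketch may help: a content-based attentional read over an external memory, where a key is scored against every memory row by cosine similarity, sharpened by a softmax, and used for a weighted read. Every step is differentiable; the memory size and key-strength value are illustrative assumptions.

# Sketch: content-based addressing, one ingredient of an NTM-style memory read.
import numpy as np

def content_read(memory, key, beta=5.0):
    # memory: (N, M) rows; key: (M,); beta: key strength (sharpening factor).
    eps = 1e-8
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    w = np.exp(beta * sims)
    w /= w.sum()                # soft address over the N memory rows
    return w @ memory, w        # read vector and attention weights

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 4))
read, w = content_read(M, M[3] + 0.05 * rng.standard_normal(4))
print(w.argmax(), np.round(w, 2))   # attention mass concentrates on row 3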
Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples."} {"_id": "74fdaeb2678aba886c3d899f66b4197b901483d7", "title": "Deep Neural Networks in Machine Translation: An Overview", "text": "Deep neural networks (DNNs) are widely used in machine translation (MT). This article gives an overview of DNN applications in various aspects of MT."} {"_id": "9999de91aae516680e2364b6d5bfeeec4c748616", "title": "Syntactically Guided Neural Machine Translation", "text": "We investigate the use of hierarchical phrase-based SMT lattices in end-to-end neural machine translation (NMT). Weight pushing transforms the Hiero scores for complete translation hypotheses, with the full translation grammar score and full ngram language model score, into posteriors compatible with NMT predictive probabilities. With a slightly modified NMT beam-search decoder we find gains over both Hiero and NMT decoding alone, with practical advantages in extending NMT to very large input and output vocabularies."} {"_id": "e810ddd9642db98492bd6a28b08a8655396c1555", "title": "Facing facts: neuronal mechanisms of face perception.", "text": "The face is one of the most important stimuli carrying social meaning. Thanks to the fast analysis of faces, we are able to judge physical attractiveness and features of their owners' personality, intentions, and mood. From one's facial expression we can gain information about danger present in the environment. It is obvious that the ability to process efficiently one's face is crucial for survival. Therefore, it seems natural that in the human brain there exist structures specialized for face processing. In this article, we present recent findings from studies on the neuronal mechanisms of face perception and recognition in the light of current theoretical models. Results from brain imaging (fMRI, PET) and electrophysiology (ERP, MEG) show that in face perception particular regions (i.e. FFA, STS, IOA, AMTG, prefrontal and orbitofrontal cortex) are involved. These results are confirmed by behavioral data and clinical observations as well as by animal studies. The developmental findings reviewed in this article lead us to suppose that the ability to analyze face-like stimuli is hard-wired and improves during development. Still, experience with faces is not sufficient for an individual to become an expert in face perception. This thesis is supported by the investigation of individuals with developmental disabilities, especially with autistic spectrum disorders (ASD)."} {"_id": "6ceea52b14f26eb39dc7b9e9da3be434f9c3c173", "title": "Differential neuronal plasticity in mouse hippocampus associated with various periods of enriched environment during postnatal development", "text": "Enriched environment (EE) is characterized by improved conditions for enhanced exploration, cognitive activity, social interaction and physical exercise. It has been shown that EE positively regulates the remodeling of neural circuits, memory consolidation, long-term changes in synaptic strength and neurogenesis. However, the fine mechanisms by which environment shapes the brain at different postnatal developmental stages and the duration required to induce such changes are still a matter of debate. In EE, large groups of mice were housed in bigger cages and were given toys, nesting materials and other equipment that promote physical activity to provide a stimulating environment. 
Weaned mice were housed in EE for 4, 6 or 8\u00a0weeks and compared with matched control mice that were raised in a standard environment. To investigate the differential effects of EE on immature and mature brains, we also housed young adult mice (8\u00a0weeks old) for 4\u00a0weeks in EE. We studied the influence of onset and duration of EE housing on the structure and function of hippocampal neurons. We found that: (1) EE enhances neurogenesis in juvenile, but not young adult mice; (2) EE increases the number of synaptic contacts at every stage; (3) long-term potentiation (LTP) and spontaneous and miniature activity at the glutamatergic synapses are affected differently by EE depending on its onset and duration. Our study provides an integrative view of the role of EE during postnatal development in various mechanisms of plasticity in the hippocampus including neurogenesis, synaptic morphology and electrophysiological parameters of synaptic connectivity. This work provides an explanation for discrepancies found in the literature about the effects of EE on LTP and emphasizes the importance of environment on hippocampal plasticity."} {"_id": "49f39786b87dddbf088ae202f2bdad46668387e3", "title": "Flash memory cells-an overview", "text": "The aim of this paper is to give a thorough overview of Flash memory cells. Basic operations and charge-injection mechanisms that are most commonly used in actual Flash memory cells are reviewed to provide an understanding of the underlying physics and principles in order to appreciate the large number of device structures, processing technologies, and circuit designs presented in the literature. New cell structures and architectural solutions have been surveyed to highlight the evolution of the Flash memory technology, oriented to both reducing cell size and upgrading product functions. The subject is of extreme interest: new concepts involving new materials, structures, principles, or applications are being continuously introduced. The worldwide semiconductor memory market seems ready to accept many new applications in fields that are not specific to traditional nonvolatile memories."} {"_id": "a20882df2f267c12e541d441281249dc2aee5fd0", "title": "On the Global Convergence of Majorization Minimization Algorithms for Nonconvex Optimization Problems", "text": "In this paper, we study the global convergence of majorization minimization (MM) algorithms for solving nonconvex regularized optimization problems. MM algorithms have received great attention in machine learning. However, when applied to nonconvex optimization problems, the convergence of MM algorithms is a challenging issue. We introduce the theory of the Kurdyka-\u0141ojasiewicz inequality to address this issue. In particular, we show that many nonconvex problems enjoy the Kurdyka-\u0141ojasiewicz property and establish the global convergence result of the corresponding MM procedure. We also extend our result to a well-known method called CCCP (the concave-convex procedure)."} {"_id": "2ffe2450366f9d6ff704964297ea624ce30b0f4d", "title": "An Overview on Mobile Data Mining", "text": "In the early days, mobile phones were considered to perform only telecommunication operations. This scenario of mobile phones as communication devices changed with the emergence of a new class of mobile devices called smart phones. These smart phones, in addition to being used as communication devices, are capable of doing things that a computer does. 
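An aside on the MM-convergence abstract above: a sketch of one concrete majorization-minimization loop on a nonconvex problem, namely least squares with an l_p (p<1) penalty whose |x|^p terms are majorized at each iterate by quadratics, giving closed-form reweighted ridge updates. This particular problem instance and the epsilon-smoothing are my assumptions, not the paper's examples.

# Sketch: MM for min_x 0.5*||Ax-b||^2 + lam * sum |x_i|^p with p < 1.
# Majorizer of |x|^p at x_t: (p/2)*|x_t|^(p-2)*x^2 + const (valid for p in (0,2]).
import numpy as np

def mm_lp(A, b, lam=0.1, p=0.5, iters=50, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # unregularized start
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = (p / 2.0) * (np.abs(x) + eps) ** (p - 2)
        x = np.linalg.solve(AtA + 2 * lam * np.diag(w), Atb)  # surrogate minimum
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[[1, 4]] = [2.0, -1.5]
b = A @ x_true + 0.05 * rng.standard_normal(40)
print(np.round(mm_lp(A, b), 2))   # near-sparse estimate close to x_true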
In recent times smart phones have become more and more powerful in both computing and storage. The data generated by smart phones provide a means to gain new knowledge about various aspects such as usage and the movement of the user. This paper provides an introduction to Mobile Data Mining and its types."} {"_id": "1f8560da84454fe3128c76d043d84e0a9a749dcd", "title": "Cognitive Neuroscience: implications for education?", "text": "Research into the functioning of the human brain, particularly during the past decade, has greatly enhanced our understanding of cognitive behaviours which are fundamental to education: learning, memory, intelligence, emotion. Here, we argue the case that research findings from cognitive neuroscience hold implications for educational practice. In doing so we advance a bio-psycho-social position that welcomes multi-disciplinary perspectives on current educational challenges. We provide some examples of research implications which support conventional pedagogic wisdom, and others which are novel and perhaps counter-intuitive. As an example, we take a model of adaptive plasticity that relies on stimulus reinforcement and examine possible implications for pedagogy and curriculum depth. In doing so, we reject some popular but over-simplistic applications of neuroscience to education. In sum, the education profession could benefit from embracing rather than ignoring cognitive neuroscience. Moreover, educationists should be actively contributing to the research agenda of future brain research."} {"_id": "9ebe089caca6d78ff525856c7a828884724b9039", "title": "Bayesian Reinforcement Learning in Factored POMDPs", "text": "Bayesian approaches provide a principled solution to the exploration-exploitation trade-off in Reinforcement Learning. Typical approaches, however, either assume a fully observable environment or scale poorly. This work introduces the Factored Bayes-Adaptive POMDP model, a framework that is able to exploit the underlying structure while learning the dynamics in partially observable systems. We also present a belief tracking method to approximate the joint posterior over state and model variables, and an adaptation of the Monte-Carlo Tree Search solution method, which together are capable of solving the underlying problem near-optimally. Our method is able to learn efficiently given a known factorization or also learn the factorization and the model parameters at the same time. We demonstrate that this approach is able to outperform current methods and tackle problems that were previously infeasible."} {"_id": "0f7129cb21af9e7a894b1e32a4b68dd3b80c80f8", "title": "Training techniques to improve endurance exercise performances.", "text": "In previously untrained individuals, endurance training improves peak oxygen uptake (VO2peak), increases capillary density of working muscle, raises blood volume and decreases heart rate during exercise at the same absolute intensity. In contrast, sprint training has a greater effect on muscle glyco(geno)lytic capacity than on muscle mitochondrial content. Sprint training invariably raises the activity of one or more of the muscle glyco(geno)lytic or related enzymes and enhances sarcolemmal lactate transport capacity. Some groups have also reported that sprint training transforms muscle fibre types, but these data are conflicting and not supported by any consistent alteration in sarcoplasmic reticulum Ca2+ ATPase activity or muscle physicochemical H+ buffering capacity. 
While the adaptations to training have been studied extensively in previously sedentary individuals, far less is known about the responses to high-intensity interval training (HIT) in already highly trained athletes. Only one group has systematically studied the reported benefits of HIT before competition. They found that \u22656 HIT sessions were sufficient to maximally increase peak work rate (W(peak)) values and simulated 40 km time-trial (TT(40)) speeds of competitive cyclists by 4 to 5% and 3.0 to 3.5%, respectively. Maximum 3.0 to 3.5% improvements in TT(40) cycle rides at 75 to 80% of W(peak) after HIT consisting of 4- to 5-minute rides at 80 to 85% of W(peak) supported the idea that athletes should train for competition at exercise intensities specific to their event. The optimum reduction or 'taper' in intense training to recover from exhaustive exercise before a competition is poorly understood. Most studies have shown that 20 to 80% single-step reductions in training volume over 1 to 4 weeks have little effect on exercise performance, and that it is more important to maintain training intensity than training volume. Progressive 30 to 75% reductions in pool training volume over 2 to 4 weeks have been shown to improve swimming performances by 2 to 3%. Equally rapid exponential tapers improved 5 km running times by up to 6%. We found that a 50% single-step reduction in HIT at 70% of W(peak) produced peak improvements of approximately 6% in simulated 100 km time-trial performances after 2 weeks. It is possible that the optimum taper depends on the intensity of the athletes' preceding training and their need to recover from exhaustive exercise to compete. How the optimum duration of a taper is influenced by preceding training intensity and percentage reduction in training volume warrants investigation."} {"_id": "254149281ea88ee3cbd8d629993cd18ab077dada", "title": "Multi-View Inpainting for Image-Based Scene Editing and Rendering", "text": "We propose a method to remove objects such as people and cars from multi-view urban image datasets, enabling free-viewpoint IBR in the edited scenes. Our method combines information from multi-view 3D reconstruction with image inpainting techniques, by formulating the problem as an optimization of a global patch-based objective function. We use Image-Based Rendering (IBR) techniques to reproject information from neighboring views, and 3D multi-view stereo reconstruction to perform multi-view coherent initialization for inpainting of pixels not filled by reprojection. Our algorithm performs multi-view consistent inpainting for color and 3D by blending reprojections with patch-based image inpainting. We run our algorithm on casually captured datasets, and Google StreetView data, removing objects such as cars, people and pillars, showing that our approach produces results of sufficient quality for free-viewpoint IBR on \"cleaned up\" scenes, as well as IBR scene editing, such as limited motion of real objects."} {"_id": "8368d2fc947cf6ac46a1d251d1895f2f87c7d498", "title": "Habanero-Java: the new adventures of old X10", "text": "In this paper, we present the Habanero-Java (HJ) language developed at Rice University as an extension to the original Java-based definition of the X10 language. HJ includes a powerful set of task-parallel programming constructs that can be added as simple extensions to standard Java programs to take advantage of today's multi-core and heterogeneous architectures. 
The language puts a particular emphasis on the usability and safety of parallel constructs. For example, no HJ program using async, finish, isolated, and phaser constructs can create a logical deadlock cycle. In addition, the future and data-driven task variants of the async construct facilitate a functional approach to parallel programming. Finally, any HJ program written with async, finish, and phaser constructs that is data-race free is guaranteed to also be deterministic.\n HJ also features two key enhancements that address well-known limitations in the use of Java in scientific computing --- the inclusion of complex numbers as a primitive data type, and the inclusion of array-views that support multidimensional views of one-dimensional arrays. The HJ compiler generates standard Java class-files that can run on any JVM for Java 5 or higher. The HJ runtime is responsible for orchestrating the creation, execution, and termination of HJ tasks, and features both work-sharing and work-stealing schedulers. HJ is used at Rice University as an introductory parallel programming language for second-year undergraduate students. A wide variety of benchmarks have been ported to HJ, including a full application that was originally written in Fortran 90. HJ has a rich development and runtime environment that includes integration with DrJava, the addition of a data race detection tool, and service as a target platform for the Intel Concurrent Collections coordination language."} {"_id": "0d2c4723e9e5925cde74bd879611fda6f6e3980b", "title": "BigBench: towards an industry standard benchmark for big data analytics", "text": "There is a tremendous interest in big data by academia, industry and a large user base. Several commercial and open source providers unleashed a variety of products to support big data storage and processing. As these products mature, there is a need to evaluate and compare the performance of these systems.\n In this paper, we present BigBench, an end-to-end big data benchmark proposal. The underlying business model of BigBench is a product retailer. The proposal covers a data model and synthetic data generator that addresses the variety, velocity and volume aspects of big data systems containing structured, semi-structured and unstructured data. The structured part of the BigBench data model is adopted from the TPC-DS benchmark, which is enriched with semi-structured and unstructured data components. The semi-structured part captures registered and guest user clicks on the retailer's website. The unstructured data captures product reviews submitted online. The data generator designed for BigBench provides scalable volumes of raw data based on a scale factor. The BigBench workload is designed around a set of queries against the data model. From a business perspective, the queries cover the different categories of big data analytics proposed by McKinsey. From a technical perspective, the queries are designed to span three different dimensions based on data sources, query processing types and analytic techniques.\n We illustrate the feasibility of BigBench by implementing it on the Teradata Aster Database. 
The test includes generating and loading a 200 Gigabyte BigBench data set and testing the workload by executing the BigBench queries (written using Teradata Aster SQL-MR) and reporting their response times."} {"_id": "be410f931d5369a5bdadff8f94ddea7fb25869ce", "title": "Colostrum avoidance, prelacteal feeding and late breast-feeding initiation in rural Northern Ethiopia.", "text": "OBJECTIVE\nTo identify specific cultural and behavioural factors that might be influenced to increase colostrum feeding in a rural village in Northern Ethiopia to improve infant health.\n\n\nDESIGN\nBackground interviews were conducted with six community health workers and two traditional birth attendants. A semi-structured tape-recorded interview was conducted with twenty mothers, most with children under the age of 5 years. Variables were: parental age and education; mother's ethnicity; number of live births and children's age; breast-feeding from birth through to weaning; availability and use of formula; and descriptions of colostrum v. other stages of breast milk. Participant interviews were conducted in Amharic and translated into English.\n\n\nSETTING\nKossoye, a rural Amhara village with high prevalence rates of stunting: inappropriate neonatal feeding is thought to be a factor.\n\n\nSUBJECTS\nWomen (20-60 years of age) reporting at least one live birth (range: 1-8, mean: \u223c4).\n\n\nRESULTS\nColostrum (inger) and breast milk (yetut wotet) were seen as different substances. Colostrum was said to cause abdominal problems, but discarding a portion was sufficient to mitigate this effect. Almost all (nineteen of twenty) women breast-fed and twelve (63 %) reported ritual prelacteal feeding. A majority (fifteen of nineteen, 79 %) reported discarding colostrum and breast-feeding within 24 h of birth. Prelacteal feeding emerged as an additional factor to be targeted through educational intervention.\n\n\nCONCLUSIONS\nTo maximize neonatal health and growth, we recommend culturally tailored education delivered by community health advocates and traditional health practitioners that promotes immediate colostrum feeding and discourages prelacteal feeding."} {"_id": "450a6b3e27869595df328a4c2df8f0c37610a669", "title": "High gain antipodal tapered slot antenna with sine-shaped corrugation and Fermi profile substrate slotted cut-out for MMW 5G", "text": "A high gain, high-efficiency antipodal tapered slot antenna with sine-shaped corrugation and a Fermi profile substrate cut-out has been developed for 5G millimeter wave (MMW) communications. A parametric study of a new substrate cut-out with a Fermi profile demonstrates a reduced sidelobe level in the E-plane and H-plane as well as an antenna efficiency averaging 88% over the 20-40 GHz band. A low-cost printed circuit board (PCB) is processed simply with a CNC milling machine to fabricate the proposed antenna with the Fermi profile substrate cut-out. The measured reflection coefficient is found to be less than -14 dB over a frequency range of 20-40 GHz. Furthermore, the measured gain of the proposed antenna is 17 dB at 30 GHz, and the measured radiation pattern and gain are almost constant within the wide bandwidth from 30-40 GHz. 
Therefore, this antenna is proposed for use in an H-plane array structure, such as in point-to-point communication systems and switched-beam systems."} {"_id": "48f3552e17bcebb890e4b1f19c9a2c1fa362800f", "title": "The staircase to terrorism: a psychological exploration.", "text": "To foster a more in-depth understanding of the psychological processes leading to terrorism, the author conceptualizes the terrorist act as the final step on a narrowing staircase. Although the vast majority of people, even when feeling deprived and unfairly treated, remain on the ground floor, some individuals climb up and are eventually recruited into terrorist organizations. These individuals believe they have no effective voice in society, are encouraged by leaders to displace aggression onto out-groups, and become socialized to see terrorist organizations as legitimate and out-group members as evil. The current policy of focusing on individuals already at the top of the staircase brings only short-term gains. The best long-term policy against terrorism is prevention, which is made possible by nourishing contextualized democracy on the ground floor."} {"_id": "b3a18280f63844e2178d8f82bc369fcf3ae6d161", "title": "Reducing gender bias in word embeddings", "text": "Word embedding is a popular framework that represents text data as vectors of real numbers. These vectors capture semantics in language, and are used in a variety of natural language processing and machine learning applications. Despite these useful properties, word embeddings derived from ordinary language corpora necessarily exhibit human biases [6]. We measure direct and indirect gender bias for occupation word vectors produced by the GloVe word embedding algorithm [9], then modify this algorithm to produce an embedding with less bias to mitigate amplifying the bias in downstream applications utilizing this embedding."} {"_id": "79ccfb918bf0b6ccbe83b552e1b8266eb05e8f53", "title": "Text Mining of Tweet for Sentiment Classification and Association with Stock Prices", "text": "Nowadays, social media and networking services act as key platforms for sharing information and opinions. Many people share ideas and express their viewpoints and opinions on various topics of interest. Social media text carries rich information about companies, their products and the various services they offer. In this research we focus on exploring the association between the sentiment of social media text and a company's stock price. Tweets of several companies have been extracted and sentiment classification has been performed using Na\u00efve Bayes and SVM classifiers. To perform the classification, N-gram based feature vectors are constructed using important words of the tweets. Further, the pattern of association between the number of positive or negative tweets and stock prices has been explored. Motivated by such an association, tweet-related features such as the numbers of positive, negative and neutral tweets and the total number of tweets are used to predict the stock market status using a Support Vector Machine classifier."} {"_id": "45f6e665105fe0b3c311ab4f0bf5f6bf8738f242", "title": "History and State of the Art in Commercial Electric Ship Propulsion, Integrated Power Systems, and Future Trends", "text": "Electric propulsion has emerged as one of the most efficient propulsion arrangements for several vessel types over the last decades. 
Even though examples can be found in history at the end of the 19th century and further into the 20th century, the modern use of electric propulsion started in the 1980s along with the development of semiconductor switching devices to be used in high power drives (dc drives and later ac-to-ac drives). This development enabled full rpm control of propellers and thrusters, thereby simplifying the mechanical structure. However, the main reason for using electric propulsion in commercial ship applications is the potential for fuel savings compared to equivalent mechanical alternatives, except for icebreakers, where the performance of an electric-powered propeller is superior to a combustion-engine-powered propeller. The fuel saving potential lies in the fact that the applicable vessels have a highly varying operation profile and are seldom run at full power. This favors the power plant principle, in which electric power can be produced at any time with optimum running of prime movers, e.g., diesel engines, by turning units on and off depending on the power demand for propulsion and other vessel loads. Icebreakers were among the first vessels to take advantage of this technology, later followed by cruise vessels and offshore drilling vessels operating with dynamic positioning (DP). The converter technology was rapidly developing, and soon the dc drives were replaced with ac drives. In the same period, electric propulsion emerged as the basic standard for large cruise liners and DP-operated drilling vessels, but it also found its way into other segments such as shuttle tankers, ferries, and other special vessels. At the same time, podded propulsion was introduced, where the electric motor is mounted directly on the propeller shaft in a submerged 360\u00b0 steerable pod, adding better efficiency, improved maneuvering, and reduced installation space/cost to the benefits of electric propulsion. The future trends are now focusing on further optimization of efficiency by allowing multiple energy sources, independent operation of individual power producers, and energy storage for various applications, such as power backup, peak shaving, or emission-free operation (short voyages)."} {"_id": "80bb5c9119ff4fc2374103b4f3d6a8f614b3c2ed", "title": "Shining the Floodlights on Mobile Web Tracking \u2014 A Privacy Survey", "text": "We present the first published large-scale study of mobile web tracking. We compare tracking across five physical and emulated mobile devices with one desktop device as a benchmark. Our crawler is based on FourthParty; however, our architecture avoids clearing state, which has the benefit of continual observation of (and by) third parties. We confirm many intuitive predictions and report a few surprises. The lists of top third-party domains across different categories of devices are substantially similar; we found surprisingly few mobile-specific ad networks. The use of JavaScript by tracking domains increases gradually as we consider more powerful devices. We also analyze cookie longevity by device. Finally, we analyze a curious phenomenon of cookies that are used to store information about the user\u2019s browsing history on the client. Mobile tracking appears to be an under-researched area, and this paper is only a first step. 
We have made our code and data available at http://webtransparency.org/ for others to build on."} {"_id": "c1b5a34aa3a8052378b2a4a33c98c02eed24ff1a", "title": "Automated Trajectory Planner of Industrial Robot for Pick-and-Place Task", "text": "Industrial robots, due to their great speed, precision and cost\u2010effectiveness in repetitive tasks, now tend to be used in place of human workers in automated manufacturing systems. In particular, they perform the pick\u2010and\u2010place operation, a non\u2010value\u2010added activity which at the same time cannot be eliminated. Hence, minimum time is an important consideration for economic reasons in the trajectory planning system of the manipulator. The trajectory should also be smooth to handle parts precisely in applications such as semiconductor manufacturing, processing and handling of chemicals and medicines, and fluid and aerosol deposition. In this paper, an automated trajectory planner is proposed to determine a smooth, minimum\u2010time and collision\u2010free trajectory for the pick\u2010and\u2010place operations of a 6\u2010DOF robotic manipulator in the presence of an obstacle. Subsequently, it also proposes an algorithm for the jerk\u2010bounded Synchronized Trigonometric S\u2010curve Trajectory (STST) and the \u2018forbidden\u2010sphere\u2019 technique to avoid the obstacle. The proposed planner is demonstrated with suitable examples and comparisons. The experiments show that the proposed planner is capable of providing a smoother trajectory than the cubic-spline-based trajectory."} {"_id": "4811ccadaf63fa92ab04b0080403084098fa4c02", "title": "Progressive muscle relaxation reduces migraine frequency and normalizes amplitudes of contingent negative variation (CNV)", "text": "Central information processing, visible in evoked potentials like the contingent negative variation (CNV), is altered in migraine patients, who exhibit higher CNV amplitudes and a reduced habituation. Both characteristics were shown to be normalized under different prophylactic migraine treatment options, whereas Progressive Muscle Relaxation (PMR) has not yet been examined. We investigated the effect of PMR on the clinical course and CNV in migraineurs in a quasi-randomized, controlled trial. Thirty-five migraine patients and 46 healthy controls were examined. Sixteen migraineurs and 21 healthy participants underwent a 6-week PMR-training, with CNV measures taken before and after the training as well as three months after its completion. The remaining participants served as controls. The clinical course was analyzed with two-way analyses of variance (ANOVA) with repeated measures. Pre-treatment CNV differences between migraine patients and healthy controls were examined with t-tests for independent measures. The course of the CNV-parameters was examined with three-way ANOVAs with repeated measures. After PMR-training, migraine patients showed a significant reduction of migraine frequency. Prior to the PMR-training, migraine patients exhibited higher amplitudes in the early component of the CNV (iCNV) and the overall CNV (oCNV) than healthy controls, but no differences regarding habituation. After completion of the PMR-training, migraineurs showed a normalization of the iCNV amplitude, but neither of the oCNV nor of the habituation coefficient. The results confirm the clinical efficacy of PMR for migraine prophylaxis. The pre-treatment measure confirms altered cortical information processing in migraine patients. 
The changes in the iCNV after PMR-training point to central nervous mechanisms of the PMR effect, which may be mediated by serotonin metabolism."} {"_id": "10eb7bfa7687f498268bdf74b2f60020a151bdc6", "title": "Visualizing Data using t-SNE", "text": "We present a new technique called \u201ct-SNE\u201d that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets."} {"_id": "666b639aadcd2a8a11d24b36bae6a4f07e802b34", "title": "Tailoring Continuous Word Representations for Dependency Parsing", "text": "Word representations have proven useful for many NLP tasks, e.g., Brown clusters as features in dependency parsing (Koo et al., 2008). In this paper, we investigate the use of continuous word representations as features for dependency parsing. We compare several popular embeddings to Brown clusters, via multiple types of features, in both news and web domains. We find that all embeddings yield significant parsing gains, including some recent ones that can be trained in a fraction of the time of others. Explicitly tailoring the representations for the task leads to further improvements. Moreover, an ensemble of all representations achieves the best results, suggesting their complementarity."} {"_id": "7f0eb2cb332a8ce5fafaa7c280b5c5ab9c7ca95a", "title": "A Universal Part-of-Speech Tagset", "text": "To facilitate future research in unsupervised induction of syntactic structure and to standardize best-practices, we propose a tagset that consists of twelve universal part-of-speech categories. In addition to the tagset, we develop a mapping from 25 different treebank tagsets to this universal set. As a result, when combined with the original treebank data, this universal tagset and mapping produce a dataset consisting of common parts-of-speech for 22 different languages. 
We highlight the use of this resource via three experiments that (1) compare tagging accuracies across languages, (2) present an unsupervised grammar induction approach that does not use gold standard part-of-speech tags, and (3) use the universal tags to transfer dependency parsers between languages, achieving state-of-the-art results."} {"_id": "013cd20c0eaffb9cab80875a43086e0c3224fe20", "title": "Representation Learning: A Review and New Perspectives", "text": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."} {"_id": "0183b3e9d84c15c7048e6c2149ed86257ccdc6cb", "title": "Dependency-Based Word Embeddings", "text": "While continuous word embeddings are gaining popularity, current models are based solely on linear contexts. In this work, we generalize the skip-gram model with negative sampling introduced by Mikolov et al. to include arbitrary contexts. In particular, we perform experiments with dependency-based contexts, and show that they produce markedly different embeddings. The dependency-based embeddings are less topical and exhibit more functional similarity than the original skip-gram embeddings."} {"_id": "d0399694b31c933d66cd2a6896afbd5aa7bc097d", "title": "Intentional rounding: facilitators, benefits and barriers.", "text": "AIMS AND OBJECTIVES\nTo describe the implementation, practice and sustainability of Intentional Rounding (IR) within two diverse settings (aged care and maternity).\n\n\nBACKGROUND\nThe profile of patients in hospitals has changed over time, with conditions generally being more severe, placing heavy demands on nurses' time. Routine non-urgent care is often provided only when there is time. IR has been found to increase both patient and staff satisfaction, also resulting in improved patient outcomes such as reduced falls and call bell use. IR is also used as a time management tool for safe and reliable provision of routine care.\n\n\nMETHODS\nThis descriptive qualitative research study comprised three focus groups in a metropolitan hospital.\n\n\nRESULTS\nFifteen nurses participated in three focus groups. Seven main themes emerged from the thematic analysis of the verbatim transcripts: implementation and maintenance, how IR works, roles and responsibilities, context and environment, benefits, barriers and legal issues.\n\n\nCONCLUSION\nIR was quickly incorporated into normal practice, with clinicians being able to describe the main concepts and practices. IR was seen as a management tool facilitating accountability, with continuity of management support being essential for sustainability. 
Clinicians reported increases in patient and staff satisfaction, and the opportunity to provide patient education. While patient type and acuity, ward layout and staff experience affected the practice of IR, the principles of IR are robust enough to allow for differences in the ward specialty and patient type. However, care must be taken when implementing IR to reduce the risk of alienating experienced staff. Incorporation of IR charts into the patient health care record is recommended.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nEngaging all staff, encouraging ownership and stability of management are key factors in the successful implementation and maintenance of IR. IR is flexible and robust enough to accommodate different patient types and acuity."} {"_id": "d424f718d4a56318106f7c935aedd5a66bb7112a", "title": "Large-scale genome-wide association analysis of bipolar disorder identifies a new susceptibility locus near ODZ4", "text": "We conducted a combined genome-wide association study (GWAS) of 7,481 individuals with bipolar disorder (cases) and 9,250 controls as part of the Psychiatric GWAS Consortium. Our replication study tested 34 SNPs in 4,496 independent cases with bipolar disorder and 42,422 independent controls and found that 18 of 34 SNPs had P < 0.05, with 31 of 34 SNPs having signals with the same direction of effect (P = 3.8 \u00d7 10\u22127). An analysis of all 11,974 bipolar disorder cases and 51,792 controls confirmed genome-wide significant evidence of association for CACNA1C and identified a new intronic variant in ODZ4. We identified a pathway comprised of subunits of calcium channels enriched in bipolar disorder association intervals. Finally, a combined GWAS analysis of schizophrenia and bipolar disorder yielded strong association evidence for SNPs in CACNA1C and in the region of NEK4-ITIH1-ITIH3-ITIH4. Our replication results imply that increasing sample sizes in bipolar disorder will confirm many additional loci."} {"_id": "503f347333b09c13f32fd8db7296d21aad032c86", "title": "Realization of a Filter with Helical Components *", "text": "In the VHF range, high-quality narrow-band filters with reasonable physical dimensions are extremely difficult to realize. Excessive pass-band insertion loss accompanies filters employing lumped constant elements, and unreasonable size is a natural consequence of coaxial-resonator filters. Harmonic crystal filters are inadequate because of the unpredictable amount of spurious modes above the harmonic frequency; and hence they can only be used for very low percentage bandwidths. A solution to the above problems is provided by the use of helical resonators for high-quality filters. This paper describes the helical resonator, measurement of its Q, and coupling to the resonator. A procedure for the design and construction of a filter using helical circuits is then presented. Finally, an example is given illustrating the design method, and several photographs of helical-resonator filters are shown."} {"_id": "3c30b496bc5f680d46094ea108feb4368ade4d07", "title": "Accuracy in Random Number Generation", "text": "The generation of continuous random variables on a digital computer encounters a problem of accuracy caused by approximations and discretization error. These in turn impose a bias on the simulation results. An ideal discrete approximation of a continuous distribution and a measure of error are proposed. Heuristic analysis of common methods for transforming uniform deviates to other continuous random variables is discussed. 
Comments and recommendations are made for the design of algorithms to reduce the bias and avoid overflow problems."} {"_id": "0c2c405ed2e9fa48c4b782ab8a4da1c06c1b132e", "title": "Low power 3\u20132 and 4\u20132 adder compressors implemented using ASTRAN", "text": "This paper presents two adder compressor architectures addressing high speed and low power. Adder compressors are used to implement arithmetic circuits such as multipliers and digital signal processing units like the Fast Fourier Transform (FFT). To address the objective of high speed and low power, it is well known that optimization efforts should be applied at all abstraction levels. In this paper, optimizations at the logic, electrical and physical levels are combined. At the logic level, the circuit is optimized by using multiplexers instead of XOR gates to reduce delay, power and area. At the electrical level, this work presents an architecture that generates the XOR and XNOR signals simultaneously, which reduces internal glitches and hence dynamic power as well. Finally, at the physical level, an automatic layout generation tool (ASTRAN) is used to produce the adder compressor layouts. This tool has proved to reduce power consumption and delay due to the smaller input capacitances of the complex gates generated compared to manually designed layouts."} {"_id": "12b7cc398d67b2a17ace0b0b79363e9a646f8bcb", "title": "Review and Comparative Study of Clustering Techniques", "text": "Clustering is an automatic learning technique which aims at grouping a set of objects into clusters so that objects in the same cluster are as similar as possible, whereas objects in different clusters are as dissimilar as possible. Document clustering aims to group, in an unsupervised way, a given document set into clusters such that documents within each cluster are more similar to each other than to those in different clusters. Cluster analysis aims to organize a collection of patterns into clusters based on similarity. This paper presents a survey of various clustering techniques. These techniques can be divided into several categories: partitional algorithms, hierarchical algorithms, and density-based algorithms. A comparison of the various algorithms is presented, showing how hierarchical clustering can perform better than the other techniques."} {"_id": "25fdcb2969d0a6ee660f2c98391725ff98867f3f", "title": "Defect detection and identification in textile fabrics using Multi Resolution Combined Statistical and Spatial Frequency Method", "text": "In the textile industry, reliable and accurate quality control and inspection are essential. Presently, this is still accomplished manually based on human experience, which is time consuming and also prone to errors. Hence, automated visual inspection systems have become mandatory in the textile industry. This paper presents a novel algorithm for fabric defect detection using the Multi Resolution Combined Statistical and Spatial Frequency Method. Defect detection consists of two phases: training and testing. In the training phase, the reference fabric images are cropped into non-overlapping sub-windows. By applying MRCSF, the features of the textile fabrics are extracted and stored in the database. During the testing phase, the same procedure is applied to the test fabric and the features are compared with the database information. Based on the comparison results, each sub-window is categorized as defective or non-defective. 
The classification rate obtained through MATLAB simulation was found to be 99%."} {"_id": "b6dc8b5e100482058323194c82aa30a1cf0f97fc", "title": "Architecture for the Internet of Things (IoT): API and Interconnect", "text": "The proposed architecture is structured into a secure API, a backbone, and separate device networks with a standard interface to the backbone. The API decouples innovation of services and service logic from protocols and network elements. It also enables service portability between systems, i.e., a service may be allocated to end-systems or servers, with possible relocation and replication throughout its lifecycle. Ubiquitous service provisioning depends on interoperability, not only for provisioning of a standard QoS controlled IP bearer, but also for cross domain naming, security, mobility, multicast, location, routing and management, including fair compensation for utility provisioning. The proposed architecture not only includes these critical elements but also caters for multi-homing, mobile networks with dynamic membership, and third party persistent storage based on indirection."} {"_id": "ef8f45bcfbc98dbd7b434a2342adbd66c3182b54", "title": "Organophosphate Pesticide Exposure and Attention in Young Mexican-American Children: The CHAMACOS Study", "text": "BACKGROUND\nExposure to organophosphate (OP) pesticides, well-known neurotoxicants, has been associated with neurobehavioral deficits in children.\n\n\nOBJECTIVES\nWe investigated whether OP exposure, as measured by urinary dialkyl phosphate (DAP) metabolites in pregnant women and their children, was associated with attention-related outcomes among Mexican-American children living in an agricultural region of California.\n\n\nMETHODS\nChildren were assessed at ages 3.5 years (n = 331) and 5 years (n = 323). Mothers completed the Child Behavior Checklist (CBCL). We administered the NEPSY-II visual attention subtest to children at 3.5 years and Conners' Kiddie Continuous Performance Test (K-CPT) at 5 years. The K-CPT yielded a standardized attention deficit/hyperactivity disorder (ADHD) Confidence Index score. Psychometricians scored behavior of the 5-year-olds during testing using the Hillside Behavior Rating Scale.\n\n\nRESULTS\nPrenatal DAPs (nanomoles per liter) were nonsignificantly associated with maternal report of attention problems and ADHD at age 3.5 years but were significantly related at age 5 years [CBCL attention problems: \u03b2 = 0.7 points; 95% confidence interval (CI), 0.2-1.2; ADHD: \u03b2 = 1.3; 95% CI, 0.4-2.1]. Prenatal DAPs were associated with scores on the K-CPT ADHD Confidence Index > 70th percentile [odds ratio (OR) = 5.1; 95% CI, 1.7-15.7] and with a composite ADHD indicator of the various measures (OR = 3.5; 95% CI, 1.1-10.7). Some outcomes exhibited evidence of effect modification by sex, with associations found only among boys. There was also limited evidence of associations between child DAPs and attention.\n\n\nCONCLUSIONS\nIn utero DAPs and, to a lesser extent, postnatal DAPs were associated adversely with attention as assessed by maternal report, psychometrician observation, and direct assessment. These associations were somewhat stronger at 5 years than at 3.5 years and were stronger in boys."} {"_id": "921cefa2e8363b3be6d7522ad82a506a2a835f01", "title": "A Survey on Query Processing and Optimization in Relational Database Management System", "text": "The query processor and optimizer is an important component in today\u2019s relational database management system. 
This component is responsible for translating a user query \u2013 usually written in a non-procedural language like SQL \u2013 into an efficient query evaluation program that can be executed against the database. In this paper, we identify many of the common issues, themes, and approaches that extend this work and the settings in which each piece of work is most appropriate. Our goal with this paper is to be a \u201cvalue-add\u201d over the existing papers on the material, providing not only a brief overview of each technique, but also a basic framework for understanding the field of query processing and optimization in general."} {"_id": "2a361cc98ff78de97fcd2f86e9ababb95c922eae", "title": "A randomized controlled trial of postcrisis suicide prevention.", "text": "OBJECTIVE\nThis study tested the hypothesis that professionals' maintenance of long-term contact with persons who are at risk of suicide can exert a suicide-prevention influence. This influence was hypothesized to result from the development of a feeling of connectedness and to be most pertinent to high-risk individuals who refuse to remain in the health care system.\n\n\nMETHODS\nA total of 3,005 persons hospitalized because of a depressive or suicidal state, populations known to be at risk of subsequent suicide, were contacted 30 days after discharge about follow-up treatment. A total of 843 patients who had refused ongoing care were randomly divided into two groups; persons in one group were contacted by letter at least four times a year for five years. The other group (the control group) received no further contact. A follow-up procedure identified patients who died during the five-year contact period and during the subsequent ten years. Suicide rates in the contact and no-contact groups were compared.\n\n\nRESULTS\nPatients in the contact group had a lower suicide rate in all five years of the study. Formal survival analyses revealed a significantly lower rate in the contact group (p=.04) for the first two years; differences in the rates gradually diminished, and by year 14 no differences between groups were observed.\n\n\nCONCLUSIONS\nA systematic program of contact with persons who are at risk of suicide and who refuse to remain in the health care system appears to exert a significant preventive influence for at least two years. Diminution of the frequency of contact and discontinuation of contact appear to reduce and eventually eliminate this preventive influence."} {"_id": "3c5a8d0742e9bc24f53e3da8ae52659d3fe2c5fd", "title": "An AAL system based on IoT technologies and linked open data for elderly monitoring in smart cities", "text": "The growing average age of the urban population, with an increasing number of 65+ citizens, is calling for cities to provide global services specifically geared to elderly people. In this context, collecting data on elderly people's environment and habits and making them available in a structured way to third parties for analysis is the first step towards the realization of innovative user-centric services. This paper presents a city-wide general IoT-based sensing infrastructure and a data management layer providing some REST and Linked Open Data Application Programming Interfaces (APIs) that collect and present data related to elderly people. 
In particular, this architecture is used by the H2020 City4Age project to help geriatricians identify the onset of Mild Cognitive Impairment (MCI)."} {"_id": "08a6e999532544e83618c16a96f6d4c7356bc140", "title": "Biofloc technology in aquaculture: Beneficial effects and future challenges", "text": ""} {"_id": "3b0cfc0cc3fc3bee835147786c749de5ab1f8ed2", "title": "Streaming Big Data Processing in Datacenter Clouds", "text": "Today, we live in a digital universe in which information and technology are not only around us but also play important roles in dictating the quality of our lives. As we delve deeper into this digital universe, we're witnessing explosive growth in the variety, velocity, and volume of data being transmitted over the Internet. A zettabyte of data passed through the Internet in the past year; IDC predicts that this digital universe will explode to an unimaginable eight Zbytes by 2015. These data are and will be generated mainly from Internet search, social media, mobile devices, the Internet of Things, business transactions, next-generation radio astronomy telescopes, high-energy physics synchrotrons, and content distribution. Government and business organizations are now overflowing with data, easily aggregating to terabytes or even petabytes of information."} {"_id": "ec04f476a416172988ba822de313eecd72f2e223", "title": "Integration of alternative sources of energy as current sources", "text": "This work proposes an integration of alternative energy sources in which the different generating modules are connected as current sources. These sources were configured to inject current into a common DC bus. This feature avoids current circulation among sources due to voltage differences, reducing the stress in the individual generators. A battery bank forms part of the system, used as an energy accumulator for periods of primary energy depletion. The bus voltage is controlled by modulation of a secondary load in parallel with the main load. A DC-DC Boost converter connects each primary source to a common bus, except for the storage system, which uses a DC-DC Buck-Boost converter. Finally, a mathematical analysis supported by simulation results is presented to show the sources' behavior and overall performance when operated as current sources."} {"_id": "91e82ab0f5d0b8d2629084641de607578057c98e", "title": "Isolation and characterization of microsatellite loci in the black pepper, Piper nigrum L. (Piperaceae)", "text": "The black pepper, Piper nigrum L., which originated in India, is the world\u2019s most important commercial spice. Brazil has a germplasm collection of this species preserved at the Brazilian Agricultural Research Corporation (Embrapa\u2014Eastern Amazonia), where efforts are being made to generate information on the patterns of genetic variation and develop strategies for conservation and management of black pepper. Molecular markers of the SSR type are powerful tools for the description of material preserved in genetic resources banks, due to characteristics such as high levels of polymorphism, codominance and Mendelian segregation. Given this, we developed nine microsatellite markers from an enriched library of Piper nigrum L. Twenty clonal varieties from the Brazilian germplasm collection were analyzed, and observed and expected heterozygosity values ranged over 0.11\u20131.00 and 0.47\u20130.87, respectively. 
The nine microsatellite loci characterized here will contribute to studies of genetic diversity and conservation of Piper nigrum L."} {"_id": "3215cd14ee1a559eec1513b97ab1c7e6318bd5af", "title": "Global Linear Neighborhoods for Efficient Label Propagation", "text": "Graph-based semi-supervised learning improves classification by combining labeled and unlabeled data through label propagation. It was shown that the sparse representation of a graph by weighted local neighbors provides a better similarity measure between data points for label propagation. However, selecting local neighbors can lead to disjoint components and incorrect neighbors in the graph, and thus fail to capture the underlying global structure. In this paper, we propose to learn a nonnegative low-rank graph to capture global linear neighborhoods, under the assumption that each data point can be linearly reconstructed from weighted combinations of its direct neighbors and reachable indirect neighbors. The global linear neighborhoods utilize information from both direct and indirect neighbors to preserve the global cluster structures, while the low-rank property retains a compressed representation of the graph. An efficient algorithm based on a multiplicative update rule is designed to learn a nonnegative low-rank factorization matrix minimizing the neighborhood reconstruction error. Large-scale simulations and experiments on UCI datasets and high-dimensional gene expression datasets showed that label propagation based on global linear neighborhoods captures the global cluster structures better and achieves more accurate classification results."} {"_id": "1e88f725cdcf0ca38678267b88b41ab55fb968f8", "title": "The standard operating procedure of the DOE-JGI Microbial Genome Annotation Pipeline (MGAP v.4)", "text": "The DOE-JGI Microbial Genome Annotation Pipeline performs structural and functional annotation of microbial genomes that are further included into the Integrated Microbial Genome comparative analysis system. MGAP is applied to assembled nucleotide sequence datasets that are provided via the IMG submission site. Dataset submission for annotation first requires project and associated metadata description in GOLD. The MGAP sequence data processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNA features, as well as CRISPR elements. Structural annotation is followed by assignment of protein product names and functions."} {"_id": "9beb7cea71ef7da73852465734f8303df1e1c970", "title": "Feature-level fusion of palmprint and palm vein for person identification based on a \u201cJunction Point\u201d representation", "text": "The issue of how to represent the palm features for effective classification is still an open problem. In this paper, we propose a novel palm representation, the \"junction points\" (JP) set, which is formed by the two sets of line segments extracted from the registered palmprint and palm vein images, respectively. Unlike the existing approaches, the JP set, containing position and orientation information, is a more compact feature that significantly reduces the storage requirement. We compare the proposed JP approach with the line-based methods on a large dataset. 
Experimental results show that the proposed JP approach provides a better representation and achieves a lower error rate in palm verification."} {"_id": "75578b7c5f015ca57f4a7d6ecfef902e993b1c7d", "title": "Multilevel Analysis Techniques and Applications", "text": "The purpose of this series is to present methodological techniques to investigators and students from all functional areas of business, although individuals from other disciplines will also find the series useful. Each volume in the series will focus on a specific method. The goal is to provide an understanding and working knowledge of each method with a minimum of mathematical derivations. Proposals are invited from all interested authors. Each proposal should consist of the following: (i) a brief description of the volume's focus and intended market, (ii) a table of contents with an outline of each chapter, and (iii) a curriculum vita."} {"_id": "7e49a6f11a8843b2ff5bdbf7cf95617c6219f757", "title": "Multi-Modal Fusion for Moment in Time Video Classification", "text": "Action recognition in videos remains a challenging problem in the machine learning community. Particularly challenging is the differing degree of intra-class variation between actions: while background information is enough to distinguish certain classes, many others are abstract and require fine-grained knowledge for discrimination. To approach this problem, in this work we evaluate different modalities on the recently published Moments in Time dataset, a collection of one million videos of short length."} {"_id": "bc7bcec88a3b5aeb7e3ecb0e1e584f6b29b9fc09", "title": "Band-Notched Small Square-Ring Antenna With a Pair of T-Shaped Strips Protruded Inside the Square Ring for UWB Applications", "text": "A novel printed monopole antenna with constant gain over a wide bandwidth for ultra-wideband applications with desired notch-band characteristic is presented. The proposed antenna consists of a square-ring radiating patch with a pair of T-shaped strips protruded inside the square ring and a coupled T-shaped strip and a ground plane with a protruded strip, which provides a wide usable fractional bandwidth of more than 130% (3.07-14.6 GHz). By using the square-ring radiating patch with a pair of T-shaped strips protruded inside it, the frequency bandstop performance is generated, and we can control its characteristics such as band-notch frequency and its bandwidth by electromagnetically adjusting coupling between a pair of T-shaped strips protruded inside the square ring. The designed antenna has a small size of 12 \u00d7 18 mm2, or about 0.15\u03bb \u00d7 0.25\u03bb at 4.2 GHz, while showing the band-rejection performance in the frequency band of 5.05-5.95 GHz."} {"_id": "56d986c576d37c2f9599cabb8ba5a59660971045", "title": "P300 brain computer interface: current challenges and emerging trends", "text": "A brain-computer interface (BCI) enables communication without movement based on brain signals measured with electroencephalography (EEG). BCIs usually rely on one of three types of signals: the P300 and other components of the event-related potential (ERP), steady-state visual evoked potential (SSVEP), or event-related desynchronization (ERD). 
Although P300 BCIs were introduced over twenty years ago, the past few years have seen a strong increase in P300 BCI research. This closed-loop BCI approach relies on the P300 and other components of the ERP, based on an oddball paradigm presented to the subject. In this paper, we overview the current status of P300 BCI technology, and then discuss new directions: paradigms for eliciting P300s; signal processing methods; applications; and hybrid BCIs. We conclude that P300 BCIs are quite promising, as several emerging directions have not yet been fully explored and could lead to improvements in bit rate, reliability, usability, and flexibility."} {"_id": "79b7d4de7b1d6c41ad3cf0a7996ce79acf98c8dd", "title": "Event Recognition in Videos by Learning from Heterogeneous Web Sources", "text": "In this work, we propose to leverage a large number of loosely labeled web videos (e.g., from YouTube) and web images (e.g., from Google/Bing image search) for visual event recognition in consumer videos without requiring any labeled consumer videos. We formulate this task as a new multi-domain adaptation problem with heterogeneous sources, in which the samples from different source domains can be represented by different types of features with different dimensions (e.g., the SIFT features from web images and space-time (ST) features from web videos) while the target domain samples have all types of features. To effectively cope with the heterogeneous sources where some source domains are more relevant to the target domain, we propose a new method called Multi-domain Adaptation with Heterogeneous Sources (MDA-HS) to learn an optimal target classifier, in which we simultaneously seek the optimal weights for different source domains with different types of features as well as infer the labels of unlabeled target domain data based on multiple types of features. We solve our optimization problem by using the cutting-plane algorithm based on group-based multiple kernel learning. Comprehensive experiments on two datasets demonstrate the effectiveness of MDA-HS for event recognition in consumer videos."} {"_id": "067f9352c60a9e4cf98c58a4ddf0984d31cf9b46", "title": "Planning optimal paths for multiple robots on graphs", "text": "In this paper, we study the problem of optimal multi-robot path planning (MPP) on graphs. We propose two multiflow-based integer linear programming (ILP) models that compute minimum last arrival time and minimum total distance solutions for our MPP formulation, respectively. The resulting algorithms from these ILP models are complete and guaranteed to yield true optimal solutions. In addition, our flexible framework can easily accommodate other variants of the MPP problem. Focusing on the time-optimal algorithm, we evaluate its performance, both as a stand-alone algorithm and as a generic heuristic for quickly solving large problem instances. Computational results confirm the effectiveness of our method."} {"_id": "34df85f4db9d1389c63da17f3ffbb7af1ed2ea0c", "title": "Coordinating Hundreds of Cooperative, Autonomous Vehicles in Warehouses", "text": "The Kiva warehouse management system creates a new paradigm for pick-pack-and-ship warehouses that significantly improves worker productivity. The Kiva system uses movable storage shelves that can be lifted by small, autonomous robots. By bringing the product to the worker, productivity is increased by a factor of two or more, while simultaneously improving accountability and flexibility. 
A Kiva installation for a large distribution center may require 500 or more vehicles. As such, the Kiva system represents the first commercially available, large-scale autonomous robot system. The first permanent installation of a Kiva system was deployed in the summer of 2006."} {"_id": "679efbf29c911f786c84dc210f7cc5a8fc166b70", "title": "Multi-agent Path Planning and Network Flow", "text": "This paper connects multi-agent path planning on graphs (roadmaps) to network flow problems, showing that the former can be reduced to the latter, therefore enabling the application of combinatorial network flow algorithms, as well as general linear program techniques, to multi-agent path planning problems on graphs. Exploiting this connection, we show that when the goals are permutation invariant, the problem always has a feasible solution path set with a longest finish time of no more than n+V \u2212 1 steps, in which n is the number of agents and V is the number of vertices of the underlying graph. We then give a complete algorithm that finds such a solution in O(nVE) time, with E being the number of edges of the graph. Taking a further step, we study time and distance optimality of the feasible solutions, show that they have a pairwise Pareto optimal structure, and again provide efficient algorithms for optimizing two of these practical objectives."} {"_id": "0c35a65a99af8202fe966c5e7bee00dea7cfcbf8", "title": "Experiences with an Interactive Museum Tour-Guide Robot", "text": "This article describes the software architecture of an autonomous, interactive tour-guide robot. It presents a modular and distributed software architecture, which integrates localization, mapping, collision avoidance, planning, and various modules concerned with user interaction and Web-based telepresence. At its heart, the software approach relies on probabilistic computation, on-line learning, and any-time algorithms. It enables robots to operate safely, reliably, and at high speeds in highly dynamic environments, and does not require any modifications of the environment to aid the robot\u2019s operation. Special emphasis is placed on the design of interactive capabilities that appeal to people\u2019s intuition. The interface provides new means for human-robot interaction with crowds of people in public places, and it also provides people all around the world with the ability to establish a \u201cvirtual telepresence\u201d using the Web. To illustrate our approach, we report results obtained in mid-1997, when our robot \u201cRHINO\u201d was deployed for a period of six days in a densely populated museum. The empirical results demonstrate reliable operation in public environments. The robot successfully raised the museum\u2019s attendance by more than 50%. In addition, thousands of people all over the world controlled the robot through the Web. 
We conjecture that these innovations transcend to a much larger range of application domains for service robots."} {"_id": "13dcb5efadfc4575abb8ee5ef6f52d565e08de1d", "title": "Principles of Robot Motion: Theory, Algorithms, and Implementations [Book Review]", "text": ""} {"_id": "82c0086755479360935ec73add346854df4d1304", "title": "What If You Can't Trust Your Network Card?", "text": "In the last few years, many different attacks against computing platforms targeting hardware or low-level firmware have been published. Such attacks are generally quite hard to detect and to defend against, as they target components that are out of the scope of the operating system and may not have been taken into account in the security policy enforced on the platform. In this paper, we study the case of remote attacks against network adapters. In our case study, we assume that the target adapter is running a flawed firmware that an attacker may subvert remotely by sending packets on the network to the adapter. We study possible detection techniques and their efficiency. We show that, depending on the architecture of the adapter and the interface provided by the NIC to the host operating system, building an efficient detection framework is possible. We explain the choices we made when designing such a framework that we called NAVIS and give details on our proof-of-concept implementation."} {"_id": "07ca378eacd8241cc956478d2619279895dae925", "title": "A Scalable Multiphysics Network Simulation Using PETSc DMNetwork", "text": "A scientific framework for simulations of large-scale networks, such as is required for the analysis of critical infrastructure interaction and interdependencies, is needed for applications on exascale computers. Such a framework must be able to manage heterogeneous physics and unstructured topology, and must be reusable. To this end we have developed DMNetwork, a class in PETSc that provides data and topology management and migration for network problems, along with multiphysics solvers to exploit the problem structure. It eases the application development cycle by providing the necessary infrastructure through simple abstractions to define and query the network. This paper presents the design of the DMNetwork, illustrates its user interface, and demonstrates its ability to solve large network problems through the numerical simulation of a water pipe network with more than 2 billion variables on extreme-scale computers using up to 30,000 processor cores."} {"_id": "f26e5e0e58b4b05d4632b4a3418c3f9305d091bf", "title": "3D Integration technology: Status and application development", "text": "As predicted by the ITRS roadmap, semiconductor industry development dominated by shrinking transistor gate dimensions alone will not be able to overcome the performance and cost problems of future IC fabrication. 
Today, 3D integration based on through-silicon vias (TSVs) is a well-accepted approach to overcome the performance bottleneck and simultaneously shrink the form factor. Several full 3D process flows have been demonstrated; however, there are still no microelectronic products based on 3D TSV technologies in the market \u2014 except CMOS image sensors. 3D chip stacking of memory and logic devices without TSVs is already widely introduced in the market. Applying TSV technology for memory on logic will increase the performance of these advanced products and simultaneously shrink the form factor. In addition to enabling further improvement of transistor integration densities, 3D integration is a key technology for integration of heterogeneous technologies. Miniaturized MEMS/IC products represent a typical example of such heterogeneous systems, demanding smart system integration rather than extremely high transistor integration densities. The European 3D technology platform that has been established within the EC-funded e-CUBES project is focusing on the requirements coming from heterogeneous systems. The selected 3D integration technologies are optimized concerning the availability of devices (packaged dies, bare dies or wafers) and the requirements of performance and form factor. There are specific technology requirements for the integration of MEMS/NEMS devices which differ from 3D integrated ICs (3D-IC). While 3D-ICs typically show a need for high interconnect densities and conductivities, TSV technologies for the integration of MEMS to ICs may result in lower electrical performance but have to fulfill other requirements, e.g., mechanical stability issues. 3D integration of multiple MEMS/IC stacks was successfully demonstrated for the fabrication of miniaturized sensor systems (e-CUBES), such as for automotive, health & fitness and aeronautic applications."} {"_id": "44562e355ef846d094aea9218e1b88fb56196f9e", "title": "Taxonomy of attacks on industrial control protocols", "text": "Industrial control systems (ICS) are highly distributed information systems used to control and monitor critical infrastructures such as nuclear plants, power generation and distribution plants, oil and gas, and many other facilities. The main architectural principles of ICS are real-time response, high availability and reliability. For these specific purposes, several protocols have been designed to ensure the control and supervision operations. Modbus and DNP3 are the most used protocols in the ICS world due to their compliance with real-time needs. With increasing connectivity to the Internet for business reasons, ICS adopted Internet-based technologies, and most communication protocols were redesigned to work over IP. This openness exposed the ICS components as well as communication protocols to cyber-attacks with a higher risk than attacks on traditional IT systems. In order to facilitate the risk assessment of cyber-attacks on ICS protocols, we propose a taxonomy model of different identified attacks on Modbus and DNP3. The model is based on the threat origin, threat type, attack type, attack scenario, vulnerability type and the impact of the attack. 
We populate this taxonomy model with identified attacks on Modbus and DNP3 from previous academic and industrial works."} {"_id": "ee0db22a39d309330afdf03bd6b8e5426a1f1504", "title": "Neural-Network-Based Adaptive Leader-Following Control for Multiagent Systems With Uncertainties", "text": "A neural-network-based adaptive approach is proposed for the leader-following control of multiagent systems. The neural network is used to approximate the agent's uncertain dynamics, and the approximation error and external disturbances are counteracted by employing the robust signal. When there is no control input constraint, it can be proved that all the following agents can track the leader's time-varying state with the tracking error as small as desired. Compared with the related work in the literature, the uncertainty in the agent's dynamics is taken into account; the leader's state could be time-varying; and the proposed algorithm for each following agent is only dependent on the information of its neighbor agents. Finally, the satisfactory performance of the proposed method is illustrated by simulation examples."} {"_id": "66479c2251088dae51c228341c26164f21250593", "title": "Some mathematical notes on three-mode factor analysis.", "text": ""} {"_id": "5a94ab41593ed7e01fddabc6375f4a78862e1444", "title": "Ee364a: Convex Optimization I Final Exam", "text": "This is a 24-hour take-home final. Please turn it in at Bytes Cafe in the Packard building, 24 hours after you pick it up. You may use any books, notes, or computer programs, but you may not discuss the exam with anyone until March 16, after everyone has taken the exam. The only exception is that you can ask us for clarification, via the course staff email address. We've tried pretty hard to make the exam unambiguous and clear, so we're unlikely to say much. Please make a copy of your exam, or scan it, before handing it in. Please attach the cover page to the front of your exam. Assemble your solutions in order (problem 1, problem 2, problem 3, ...), starting a new page for each problem. Put everything associated with each problem (e.g., text, code, plots) together; do not attach code or plots at the end of the final. We will deduct points from long, needlessly complex solutions, even if they are correct. Our solutions are not long, so if you find that your solution to a problem goes on and on for many pages, you should try to figure out a simpler one. We expect neat, legible exams from everyone, including those enrolled Cr/N. When a problem involves computation, you must give all of the following: a clear discussion and justification of exactly what you did, the source code that produces the result, and the final numerical results or plots. Files containing problem data can be found in the usual place. Please respect the honor code. Although we allow you to work on homework assignments in small groups, you cannot discuss the final with anyone, at least until everyone has taken it. All problems have equal weight. Some are easy. Others, not so much. Be sure you are using the most recent version of CVX, CVXPY, or Convex.jl. Check your email often during the exam, just in case we need to send out an important announcement. Some problems involve applications.
But you do not need to know anything about the problem area to solve the problem; the problem statement contains everything you need."} {"_id": "9e56528c948645c366044c05d5a17354745a6d52", "title": "Classical Statistics and Statistical Learning in Imaging Neuroscience", "text": "Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques."} {"_id": "6537b42e6f3ce6eb6ef7d775e22c0817bf74be15", "title": "A Mining and Visualizing System for Large-Scale Chinese Technical Standards", "text": "A technical standard is an established norm or requirement in regard to technical systems. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices. Every year, more than 10K standards are released by standardization organizations, such as the International Organization for Standardization (ISO), the American National Standards Institute (ANSI), and the China National Institute of Standardization (CNIS). Among these, China publishes more than 4K national and industrial standards per year. Confronted with this large number of standards, how to manage and make full use of them efficiently and effectively has become a major issue to be addressed. In this paper, we introduce a mining and visualizing system for large-scale Chinese technical standards from spatial and temporal perspectives. Specifically, we propose an s-index metric to measure the contribution of a drafting unit. Moreover, we develop a multidimensional mining and visual analysis system for standardization experts and users to explore regional differences and temporal trends of standards in China. Our system has run on 95K Chinese standards spanning from 2001 to 2016 and provided online service. The extensive experiments and case study demonstrate the effectiveness of our proposed system."} {"_id": "c8d3dd12a002cd0bc21e39ae41d7c7b5f99981b0", "title": "Foot pressure distribution and contact duration pattern during walking at self-selected speed in young adults", "text": "Foot load observed as pressure distribution is examined in relation to foot and ankle function within the gait cycle. This load defines the modes of healthy and pathological gait. Determination of the patterns of healthy, i.e. \u201cnormal\u201d, walking is the basis for classification of pathological modes. Eleven healthy participants were examined in this initial study. Participants walked barefoot over a pressure plate at their self-selected speed.
Maximal pressure values were recorded in the heel, the first, second and third metatarsal joints, and the hallux region. The longest contact duration was recorded in the metatarsal region."} {"_id": "632a9af82186d130712086ff9b019b5dafaf02e5", "title": "SPROUT: Lazy vs. Eager Query Plans for Tuple-Independent Probabilistic Databases", "text": "A paramount challenge in probabilistic databases is the scalable computation of confidences of tuples in query results. This paper introduces an efficient secondary-storage operator for exact computation of queries on tuple-independent probabilistic databases. We consider the conjunctive queries without self-joins that are known to be tractable on any tuple-independent database, and queries that are not tractable in general but become tractable on probabilistic databases restricted by functional dependencies. Our operator is semantically equivalent to a sequence of aggregations and can be naturally integrated into existing relational query plans. As a proof of concept, we developed an extension of the PostgreSQL 8.3.3 query engine called SPROUT. We study optimizations that push or pull our operator or parts thereof past joins. The operator employs static information, such as the query structure and functional dependencies, to decide which constituent aggregations can be evaluated together in one scan and how many scans are needed for the overall confidence computation task. A case study on the TPC-H benchmark reveals that most TPC-H queries obtained by removing aggregations can be evaluated efficiently using our operator. Experimental evaluation on probabilistic TPC-H data shows substantial efficiency improvements when compared to the state of the art."} {"_id": "ba32fded0293e79885f5cee8cbca79c345474d8b", "title": "R-ADMAD: high reliability provision for large-scale de-duplication archival storage systems", "text": "Data de-duplication has become a commodity component in data-intensive systems, and it is required that these systems provide high reliability comparable to other systems. Unfortunately, by storing duplicate data chunks just once, a de-duplicated system improves storage utilization at the cost of error resilience or reliability. In this paper, R-ADMAD, a high-reliability provision mechanism, is proposed. It packs variable-length data chunks into fixed-size objects, exploits ECC codes to encode the objects, and distributes them among the storage nodes in a redundancy group, which is dynamically generated according to current status and actual failure domains. Upon failures, R-ADMAD employs a distributed and dynamic recovery process. Experimental results show that R-ADMAD can provide the same storage utilization as RAID-like schemes, but reliability comparable to replication-based schemes with much more redundancy. The average recovery time of R-ADMAD based configurations is about 2-6 times shorter than that of RAID-like schemes. Moreover, R-ADMAD can provide dynamic load balancing even without the involvement of the overloaded storage nodes."} {"_id": "ddcd8c6b235d3d3e1f00e628b76ccc736254effb", "title": "Fast hyperparameter selection for graph kernels via subsampling and multiple kernel learning", "text": "Model selection is one of the most computationally expensive tasks in a machine learning application. When dealing with kernel methods for structures, the choice with the largest impact on the overall performance is the selection of the feature bias, i.e. the choice of the concrete kernel for structures.
Each kernel in turn exposes several hyper-parameters which also need to be fine-tuned. Multiple Kernel Learning offers a way to approach this computational bottleneck by generating a combination of different kernels under different parametric settings. However, this solution still requires the computation of many large kernel matrices. In this paper we propose a method to efficiently select a small number of kernels on a subset of the original data, gaining a dramatic reduction in the runtime without a significant loss of predictive performance."} {"_id": "faeba02f45e7be56ad4ada7687fbd55657b9cc95", "title": "Houston, we have a problem...: a survey of actual problems in computer games development", "text": "This paper presents a survey of problems found in the development process of electronic games. These problems were collected mainly from game postmortems and specialized literature on game development, allowing a comparison with respect to well-known problems in the traditional software industry."} {"_id": "c57f871287d75f50225f51d9313af30c748dcd65", "title": "Deep Kernelized Autoencoders", "text": "In this paper we introduce the deep kernelized autoencoder, a neural network model that allows an explicit approximation of (i) the mapping from an input space to an arbitrary, user-specified kernel space and (ii) the back-projection from such a kernel space to input space. The proposed method is based on traditional autoencoders and is trained through a new unsupervised loss function. During training, we optimize both the reconstruction accuracy of input samples and the alignment between a kernel matrix given as prior and the inner products of the hidden representations computed by the autoencoder. Kernel alignment provides control over the hidden representation learned by the autoencoder. Experiments have been performed to evaluate both reconstruction and kernel alignment performance. Additionally, we applied our method to emulate kPCA on a denoising task, obtaining promising results."} {"_id": "2c521847f2c6801d8219a1a2e9f4e196798dd07d", "title": "An Efficient Algorithm for Media-based Surveillance System (EAMSuS) in IoT Smart City Framework", "text": ""} {"_id": "d55b63e1b7ad70e3b37d5089585a7423cd245fde", "title": "An Innovative Approach to Investigate Various Software Testing Techniques and Strategies", "text": "Software testing is a way of finding errors in a system. It helps us to identify and debug mistakes, errors, faults and failures of a system. Many testing techniques and strategies have emerged since the concept of software development emerged. The aim of testing is to make the quality of software as high as possible. In this paper we discuss the most widely used techniques and strategies: where they can be used, how they can be used, how they work, and how they differ from each other. They are the following. Techniques: Black Box Testing, White Box Testing, and Grey Box Testing. Strategies: Unit Testing, System Testing, and Acceptance Testing."} {"_id": "2dccec3c1a8a17883cece784e8f0fc0af413eb83", "title": "Online Dominant and Anomalous Behavior Detection in Videos", "text": "We present a novel approach for video parsing and simultaneous online learning of dominant and anomalous behaviors in surveillance videos. Dominant behaviors are those occurring frequently in videos and hence usually do not attract much attention. They can be characterized by different complexities in space and time, ranging from a scene background to human activities.
In contrast, an anomalous behavior is defined as having a low likelihood of occurrence. We do not employ any models of the entities in the scene in order to detect these two kinds of behaviors. In this paper, video events are learnt at each pixel without supervision using densely constructed spatio-temporal video volumes. Furthermore, the volumes are organized into large contextual graphs. These compositions are employed to construct a hierarchical codebook model for the dominant behaviors. By decomposing spatio-temporal contextual information into unique spatial and temporal contexts, the proposed framework learns the models of the dominant spatial and temporal events. Thus, it is ultimately capable of simultaneously modeling high-level behaviors as well as low-level spatial, temporal and spatio-temporal pixel level changes."} {"_id": "699c6a8a8220f77423a881528f907ee5399ced1f", "title": "Intelligent Condition Based Monitoring Using Acoustic Signals for Air Compressors", "text": "Intelligent fault diagnosis of machines for early recognition of faults saves industry from heavy losses occurring due to machine breakdowns. This paper proposes a process with a generic data mining model that can be used for developing acoustic signal-based fault diagnosis systems for reciprocating air compressors. The process includes details of data acquisition, sensitive position analysis for deciding suitable sensor locations, signal pre-processing, feature extraction, feature selection, and a classification approach. This process was validated by developing a real-time fault diagnosis system on a reciprocating-type air compressor having 8 designated states, including one healthy state and 7 faulty states. The system was able to accurately detect all the faults by analyzing acoustic recordings taken from just a single position. Additionally, a thorough analysis has been presented where the performance of the system is compared while varying feature selection techniques, the number of selected features, and multiclass decomposition algorithms meant for binary classifiers."} {"_id": "c0e97ca70fe29db4ceb834464576b699ef8874b1", "title": "Recurrent-OctoMap: Learning State-Based Map Refinement for Long-Term Semantic Mapping With 3-D-Lidar Data", "text": "This letter presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term three-dimensional (3-D) Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding of single frames, rather than 3-D refinement of semantic maps (i.e. fusing semantic observations). The most widely used approach for 3-D semantic map refinement is \u201cBayes update,\u201d which fuses the consecutive predictive probabilities following a Markov-chain model. Instead, we propose a learning approach to fuse the semantic features, rather than simply fusing predictions from a classifier. In our approach, we represent and maintain our 3-D map as an OctoMap, and model each cell as a recurrent neural network, to obtain a Recurrent-OctoMap. In this case, the semantic mapping process can be formulated as a sequence-to-sequence encoding\u2013decoding problem. Moreover, in order to extend the duration of observations in our Recurrent-OctoMap, we developed a robust 3-D localization and mapping system for successively mapping a dynamic environment using more than two weeks of data, and the system can be trained and deployed with arbitrary memory length. We validate our approach on the ETH long-term 3-D Lidar dataset.
The experimental results show that our proposed approach outperforms the conventional \u201cBayes update\u201d approach."} {"_id": "4622265755b2d4683e57c32d638bab841a4d5b45", "title": "Learning to generate chairs with convolutional neural networks", "text": "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task."} {"_id": "5a01793b222acb13543d48b74cb8799b08e8f581", "title": "Designing augmented reality interfaces", "text": "Most interactive computer graphics appear on a screen separate from the real world and the user's surroundings. However this does not always have to be the case. In augmented reality (AR) interfaces, three-dimensional virtual images appear superimposed over real objects. AR applications typically use head-mounted or handheld displays to make computer graphics appear in the user's environment."} {"_id": "13f3045805cbd0a58f3abf6c9ad52515d6c10aeb", "title": "Comics on the Brain: Structure and Meaning in Sequential Image Comprehension", "text": "Just as syntax differentiates coherent sentences from scrambled word strings, sequential images must also use a cognitive system to distinguish coherent narratives from random strings of images. We conducted experiments analogous to two classic psycholinguistic studies to examine structure and semantics in sequential images. We compared Normal comic strips with both structure and meaning to sequences with Semantics Only, Structure Only, or randomly Scrambled panels. Experiment 1 used a target-monitoring paradigm, and found that RTs were slowest to target panels in Scrambled sequences, intermediate in Structure Only and Semantics Only sequences, and fastest in Normal sequences. Experiment 2 measured ERPs to the same strips. The largest N400 appeared in Scrambled and Structure Only sequences, intermediate in Semantics Only sequences, and smallest in Normal sequences. Together, these findings suggest that sequential image comprehension is guided by an interaction between structure and meaning, broadly analogous to syntax and semantics in language."} {"_id": "1d3ddcefe4d5fefca04fe730ca73312e2c588b3b", "title": "A comparative analysis of machine learning techniques for student retention management", "text": "Student retention is an essential part of many enrollment management systems. It affects university rankings, school reputation, and financial wellbeing. Student retention has become one of the most important priorities for decision makers in higher education institutions. Improving student retention starts with a thorough understanding of the reasons behind the attrition. Such an understanding is the basis for accurately predicting at-risk students and appropriately intervening to retain them.
In this study, using five years of institutional data along with several data mining techniques (both individual models and ensembles), we developed analytical models to predict and to explain the reasons behind freshman student attrition. The comparative analyses showed that the ensembles performed better than individual models, while the balanced dataset produced better prediction results than the unbalanced dataset."} {"_id": "60cae7e13210bbba9efc8ad42b75d491b7cc8c84", "title": "A hybrid data analytic approach to predict college graduation status and its determinative factors", "text": "Purpose \u2013 The prediction of graduation rates of college students has become increasingly important to colleges and universities across the USA and the world. Graduation rates, also referred to as completion rates, directly impact university rankings and represent a measurement of institutional performance and student success. In recent years, there has been a concerted effort by federal and state governments to increase the transparency and accountability of institutions, making \u201cgraduation rates\u201d an important and challenging university goal. In line with this, the main purpose of this paper is to propose a hybrid data analytic approach which, owing to its generic nature, can be flexibly implemented not only in the USA but also at various colleges across the world to help predict the graduation status of undergraduate students. It is also aimed at providing a means of determining and ranking the critical factors of graduation status. Design/methodology/approach \u2013 This study focuses on developing a novel hybrid data analytic approach to predict the degree completion of undergraduate students at a four-year public university in the USA. Via the deployment of the proposed methodology, the data were analyzed using three popular data mining classification methods (i.e. decision trees, artificial neural networks, and support vector machines) to develop predictive degree completion models. Finally, a sensitivity analysis is performed to identify the relative importance of each predictor factor driving the graduation. Findings \u2013 The sensitivity analysis determined the most critical factors in predicting graduation rates to be fall-term grade-point average, housing status (on campus or commuter), and which high school the student attended. The least influential factors of graduation status are ethnicity, whether or not a student had work study, and whether or not a student applied for financial aid. All three data analytic models yielded high accuracies ranging from 71.56 to 77.61 percent, which validates the proposed model. Originality/value \u2013 This study presents uniqueness in that it presents an unbiased means of determining the driving factors of college graduation status with a flexible and powerful hybrid methodology to be implemented at other similar decision-making settings."} {"_id": "1b3b22b95ab55853aff3ea980a5b4a76b7537980", "title": "Improved Use of Continuous Attributes in C4.5", "text": "A reported weakness of C4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes. An MDL-inspired penalty is applied to such tests, eliminating some of them from consideration and altering the relative desirability of all tests.
Empirical trials show that the modifications lead to smaller decision trees with higher predictive accuracies. Results also confirm that a new version of C4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits."} {"_id": "9fa05d7a03de28f94596f0fa5e8f107cfe4d38d7", "title": "Self-organized computation with unreliable, memristive nanodevices", "text": "Nanodevices have terrible properties for building Boolean logic systems: high defect rates, high variability, high death rates, drift, and (for the most part) only two terminals. Economical assembly requires that they be dynamical. We argue that strategies aimed at mitigating these limitations, such as defect avoidance/reconfiguration, or applying coding theory to circuit design, present severe scalability and reliability challenges. We instead propose to mitigate device shortcomings and exploit their dynamical character by building self-organizing, self-healing networks that implement massively parallel computations. The key idea is to exploit memristive nanodevice behavior to cheaply implement adaptive, recurrent networks, useful for complex pattern recognition problems. Pulse-based communication allows the designer to make trade-offs between power consumption and processing speed. Self-organization sidesteps the scalability issues of characterization, compilation and configuration. Network dynamics supplies a graceful response to device death. We present simulation results of such a network\u2014a self-organized spatial filter array\u2014that demonstrate its performance as a function of defects and device variation. 1. Nanoelectronics and computing paradigms Nanodevices are crummy (\u2018crummy\u2019 was introduced into the technical lexicon by Moore and Shannon (1954)). High defect rates, high device variability, device ageing, and limitations on device complexity (e.g., two-terminal devices are much easier to build) are to be expected if we intend to mass produce nanoelectronic systems economically. Not only that, it is almost axiomatic among many researchers that such systems will be built from simple structures, such as crossbars, composed of nanodevices that must be configured to implement the desired functionality (Heath et al 1998, Williams and Kuekes 2000, Kuekes and Williams 2002, DeHon 2003, Stan et al 2003, Ziegler and Stan 2003, Snider et al 2004, DeHon 2005, Ma et al 2005, Snider 2005, Snider et al 2005, Strukov and Likharev 2005). So we are faced with the challenge of computing with devices that are not only crummy, but dynamical as well. Can reliable Boolean logic systems be built from such crummy devices? Yes, but we argue that at some point, as device dimensions scale down, the overhead and complexity become so costly that performance and density improvements will hit a barrier. In the next section we discuss two frequently proposed strategies for implementing Boolean logic with crummy, dynamical devices\u2014reconfiguration and coding theory\u2014and argue that each has severe scalability problems. This is not to suggest that logic at the nanoscale is not worth pursuing. It clearly is, and semiconductor manufacturers have the economic motivation to continue scaling down as aggressively as their profits permit. Rather we are suggesting that the \u2018killer app\u2019 for nanoelectronics lies elsewhere.
An alternative computational paradigm\u2014adaptive, recurrent networks\u2014is computationally powerful and requires only two types of components, which we call \u2018edges\u2019 and \u2018nodes\u2019. Edge-to-node ratios are typically high, hundreds to thousands, and edges are, unfortunately, difficult to implement efficiently. This difficulty has made these networks extremely unpopular; software implementations are impossibly slow, and hardware implementations require far too much area. In this paper we propose using memristive nanodevices to implement edges, conventional analog and digital electronics to implement nodes, and pairs of bipolar pulses, called \u2018spikes\u2019, to implement communication. The tiny size of the nanodevices implementing edges would allow, for the first time, a practical hardware implementation of adaptive, recurrent networks. We suggest that such networks are a better architectural fit to nanoelectronics than Boolean logic circuits. They are robust in the presence of device variations and defects; they degrade gracefully as devices become defective over time, and can even \u2018self-heal\u2019 in response to internal change; they can be implemented with simple, crossbar-like structures. Just as importantly, they can be structured to self-organize their computations, sidestepping scalability problems with device characterization, compilation and configuration. Such systems can contain large numbers of defective components, but we will not need to discover where they are\u2014in fact, we will not care where they are. The system can adapt and \u2018rewire\u2019 itself around them. 2. Boolean logic is hard with crummy, dynamical devices 2.1. Reconfiguration to the rescue? Several researchers have proposed using a combination of device characterization, defect avoidance, and configuration to handle initial static defects (Stan et al 2003, DeHon 2003, Snider et al 2004, Strukov and Likharev 2005, Snider and Williams 2007). The strategy is a three-pass algorithm: (1) Characterization. Analyze every nanowire and nanodevice in the system and compile a list of resources which are defective (stuck open or stuck closed, shorted, broken, out-of-spec, etc). Such analysis algorithms were used in the Teramac system (Culbertson et al 1997). (2) Defect avoidance. Give the list of defects from pass 1 to a compiler which maps a desired circuit onto the defective fabric, routing around defective components (Culbertson et al 1997, Snider et al 2004, Strukov and Likharev 2005, Snider and Williams 2007). (3) Configuration. Give the mapping determined in pass 2 to a controller that electrically configures each of the mapped components. Since every chip will have a unique set of defects, the above process must be applied to each and every chip. This presents some interesting challenges for manufacturing, since the time required to perform the above steps will contribute to production cost. Characterization (pass 1) is problematic due to device variability\u2014the functional state of a device (or wire) is not necessarily discrete (working versus nonworking) but can lie on a continuum. And characterizing, say, 10^12 nanodevices (DeHon et al 2005) in a reasonable amount of time is not likely to be trivial, especially given the bottleneck of pins on the chip.
It is not clear if existing characterization algorithms could be applied to systems like this, or how well they would scale. Compilation (pass 2) also presents considerable risk. Compiling circuits onto configurable chips (such as FPGAs) today is a time-consuming process, due to the NP-hard placement and routing problems that lie at the compiler\u2019s core. Even circuits comprising only a few tens of thousands of gates can require several minutes to compile, depending on the degree of optimization needed\u2014and that\u2019s assuming a defect-free target, where a single compilation can be used to manufacture thousands of parts. One proposal for minimizing this problem requires an initial \u2018ideal\u2019 compilation onto a hypothetical defect-free fabric, laying out components a little more sparsely than optimal. This would be done only once for each (circuit type, chip type) combination, so one could afford to spend enormous amounts of compute time on this to arrive at this ideal configuration. The configuration of an individual chip on the production line would then be viewed as a \u2018perturbation\u2019 of the ideal configuration, with resource allocation shifted as necessary to avoid defects. One might even combine the characterization pass with this pass for further speed improvements. This strategy might be workable. But it is not clear how well this would scale, or how robust this would be in the presence of defect clustering. Configuration (pass 3) is the most straightforward, with the most significant risk being configuration time restrictions due to the pin bottleneck and power dissipation. Note that the above approach of characterize, compile, configure does not handle device death. What happens when a nanodevice stops working and the chip starts computing nonsense? If the nanodevices are reconfigurable, the system can be stopped and reconfigured to work around the newly formed defects. But that assumes additional circuitry to detect the malfunction (e.g. self-checking circuits (Wakerly 1978)), and a companion host processor capable of implementing the three passes in order to reconfigure the chip. Such a processor would have to be fast, reliable (which probably means it would not be implemented with nanodevices), and likely would require a significant amount of memory. Implementing such a coprocessor would seem to negate the benefits that one was presumably getting from the nanoscale circuit in the first place. 2.2. Coding theory to the rescue? Coding theory has been used for decades to robustly transmit information over noisy channels by adding a small amount of redundant information. Can coding theory do the same for logic circuits by adding a small number of redundant gates or components in order to achieve reliable operation? von Neumann (1956), looking forward to nanoscale computation, was perhaps the first to address this question. His approach used a code that replicated each logic gate, and combined the replicated gate outputs in a clever way so that the entire system achieved a desired level of reliability. Although his primary concern was correcting errors induced by transient faults, the approach could also compensate for initially defective devices or \u2018device deaths\u2019 as long as the failing devices were randomly distributed and the number did not exceed a threshold (the trade-off being that device deaths would reduce the system\u2019s tolerance of transient faults). The overhead for his scheme could be enormous, though. 
Replication factors for Boolean logic went to infinity as fault rates approached about 1%. This bound has been improved by later researchers, with"} {"_id": "ad6f202e6129ba1e2d743dd7500da6bc9e15bf44", "title": "Controllability of complex networks via pinning.", "text": "We study the problem of controlling a general complex network toward an assigned synchronous evolution by means of a pinning control strategy. We define the pinning controllability of the network in terms of the spectral properties of an extended network topology. The roles of the control and coupling gains, as well as of the number of pinned nodes, are also discussed."} {"_id": "84875ee58006260d3956f1bac933f1bf7851fdd6", "title": "Feature Extraction with Convolutional Neural Networks for Handwritten Word Recognition", "text": "In this paper, we show that learning features with convolutional neural networks is better than using hand-crafted features for handwritten word recognition. We consider two kinds of systems: a grapheme based segmentation and a sliding window segmentation. In both cases, the combination of a convolutional neural network with an HMM outperforms a state-of-the-art HMM system based on explicit feature extraction. The experiments are conducted on the Rimes database. The systems obtained with the two kinds of segmentation are complementary: when they are combined, they outperform the systems in isolation. The system based on grapheme segmentation yields a lower recognition rate but is very fast, which is suitable for specific applications such as document classification."} {"_id": "c36eb6887c13a3fbb9ec01cd5a7501c4e063de73", "title": "Design and implementation of 0.7 to 7 GHz broadband double-ridged horn antenna", "text": "In this paper, simulation and measurement results for a 0.7-7 GHz double-ridged guide horn (DRGH) antenna with a coaxial input feed section are presented. This antenna, due to the large frequency band required by standards, is appropriate for use as a transmitter in electromagnetic compatibility (EMC) testing and antenna measurement. A step-by-step method for designing the DRGH antenna is given. A suitable taper for the ridges in the horn is designed and its impedance variations along the horn are shown. In addition, a new structure for the electric field probe in the feed section is introduced, by which the lower cutoff frequency is shifted down to 0.7 GHz. A sensitivity analysis is done on the parameters of the proposed structure. Other parameters of the feed section are also investigated and optimized values are obtained. Finally, the proposed antenna has been fabricated, and measurement results show good agreement with the simulation."} {"_id": "b446e79a70cb0dcbecdfd2f8c11d2ac1bec4c2f7", "title": "An empirical study of the mechanisms of mindfulness in a mindfulness-based stress reduction program.", "text": "S. L. Shapiro and colleagues (2006) have described a testable theory of the mechanisms of mindfulness and how it affects positive change. They describe a model in which mindfulness training leads to a fundamental change in relationship to experience (reperceiving), which leads to changes in self-regulation, values clarification, cognitive and behavioral flexibility, and exposure. These four variables, in turn, result in salutogenic outcomes. Analyses of responses from participants in a mindfulness-based stress-reduction program did not support the mediating effect of changes in reperceiving on the relationship of mindfulness with those four variables.
However, when mindfulness and reperceiving scores were combined, partial support was found for the mediating effect of the four variables on measures of psychological distress. Issues arising in attempts to test the proposed theory are discussed, including the description of the model variables and the challenges to their assessment."} {"_id": "a01c855b24fea6fe8ecd229e9a3b536760c689e4", "title": "Second screen applications and tablet users: constellation, awareness, experience, and interest", "text": "This study investigates how tablet users incorporate multiple media in their television viewing experience. Three patterns are found: (a) only focusing on television, (b) confounding television viewing with other screen media (e.g. laptop, tablet) and (c) confounding television viewing with various media, including print and screen media. Furthermore, we question how the incorporation of screen media in this experience affects the practice of engaging in digital commentary on television content. Also, we inquire into the uptake and interest in so-called 'second screen applications'. These applications allow extensions of the primary screen experience on secondary screens (e.g. tablet). The results, based on a sample of 260 tablet users, indicate that there is only a modest uptake and interest in using secondary screens to digitally share opinions. However, the use of second screen interaction with television content is not discarded: although there is still little awareness and experience, we notice a moderate interest in these apps."} {"_id": "0b41a405b329198428b1ce947b86a0797b59289e", "title": "Muscle simulation for facial animation in Kong: Skull Island", "text": "For Kong: Skull Island, Industrial Light & Magic created an anatomically motivated facial simulation model for Kong that includes the facial skeleton and musculature. We applied a muscle simulation framework that allowed us to target facial shapes while maintaining desirable physical properties to ensure that the simulations stayed on-model. This allowed muscle simulations to be used as a powerful tool for adding physical detail to and improving the anatomical validity of both blendshapes and blendshape animations in order to achieve more realistic facial animation with less hand sculpting."} {"_id": "a668a6ca3fc8a83436fc28a65a890daf6cd59762", "title": "Autonomous semantic mapping for robots performing everyday manipulation tasks in kitchen environments", "text": "In this work we report on our efforts to equip service robots with the capability to acquire 3D semantic maps. The robot autonomously explores indoor environments through the calculation of next best view poses, from which it assembles point clouds containing spatial and registered visual information. We apply various segmentation methods in order to generate initial hypotheses for furniture drawers and doors. The acquisition of the final semantic map makes use of the robot's proprioceptive capabilities and is carried out through the robot's interaction with the environment. We evaluated the proposed integrated approach in the real kitchen in our laboratory by measuring the quality of the generated map in terms of the map's applicability for the task at hand (e.g. resolving counter candidates by our knowledge processing system)."} {"_id": "b1dd6f03a5d7a09850542453695b42665b445fa1", "title": "Conscious intention and brain activity", "text": "The problem of free will lies at the heart of modern scientific studies of consciousness.
An influential series of experiments by Libet has suggested that conscious intentions arise as a result of brain activity. This contrasts with traditional concepts of free will, in which the mind controls the body. A more recent study by Haggard and Eimer has further examined the relation between intention and brain processes, concluding that conscious awareness of intention is linked to the choice or selection of a specific action, and not to the earliest initiation of action processes. The exchange of views in this paper further explores the relation between conscious intention and brain activity."} {"_id": "275445087a85bf6d3b0f101a60b0be7ab4ed520f", "title": "Identifying emotional states using keystroke dynamics", "text": "The ability to recognize emotions is an important part of building intelligent computers. Emotionally-aware systems would have a rich context from which to make appropriate decisions about how to interact with the user or adapt their system response. There are two main problems with current system approaches for identifying emotions that limit their applicability: they can be invasive and can require costly equipment. Our solution is to determine user emotion by analyzing the rhythm of their typing patterns on a standard keyboard. We conducted a field study where we collected participants' keystrokes and their emotional states via self-reports. From this data, we extracted keystroke features, and created classifiers for 15 emotional states. Our top results include 2-level classifiers for confidence, hesitance, nervousness, relaxation, sadness, and tiredness with accuracies ranging from 77 to 88%. In addition, we show promise for anger and excitement, with accuracies of 84%."} {"_id": "e97736b0920af2b5729bf62172a2f20a80dd5666", "title": "Efficient deep learning for stereo matching with larger image patches", "text": "Stereo matching plays an important role in many applications, such as Advanced Driver Assistance Systems, 3D reconstruction, navigation, etc. However, it is still an open problem with many difficulties. The most difficult cases are often occlusions, object boundaries, and low or repetitive textures. In this paper, we propose a method for the stereo matching problem. We propose an efficient convolutional neural network to measure how likely two patches are to match, and use the similarity as their stereo matching cost. The cost is then refined by stereo methods, such as semiglobal matching, subpixel interpolation, median filtering, etc. Our architecture uses large image patches, which makes the results more robust to texture-less or repetitive-texture areas. We evaluate our approach on the KITTI 2015 dataset, obtaining an error rate of 4.42% while needing only 0.8 seconds per image pair."} {"_id": "537fcac9d4b94cd54d989cbc690d18a4a02898fb", "title": "Loading of the knee joint during activities of daily living measured in vivo in five subjects.", "text": "Detailed knowledge about loading of the knee joint is essential for preclinical testing of implants, validation of musculoskeletal models and biomechanical understanding of the knee joint. The contact forces and moments acting on the tibial component were therefore measured in 5 subjects in vivo by an instrumented knee implant during various activities of daily living.
Average peak resultant forces, in percent of body weight, were highest during stair descending (346% BW), followed by stair ascending (316% BW), level walking (261% BW), one legged stance (259% BW), knee bending (253% BW), standing up (246% BW), sitting down (225% BW) and two legged stance (107% BW). Peak shear forces were about 10-20 times smaller than the axial force. Resultant forces acted almost vertically on the tibial plateau even during high flexion. The highest moments acted in the frontal plane, with a typical peak-to-peak range of -2.91% BWm (adduction moment) to 1.61% BWm (abduction moment) throughout all activities. Peak flexion/extension moments ranged between -0.44% BWm (extension moment) and 3.16% BWm (flexion moment). Peak external/internal torques lay between -1.1% BWm (internal torque) and 0.53% BWm (external torque). The knee joint is highly loaded during daily life. In general, resultant contact forces during dynamic activities were lower than the ones predicted by many mathematical models, but lay in a similar range as measured in vivo by others. Some of the observed load components were much higher than those currently applied when testing knee implants."} {"_id": "46fb1214b2303c61d1684c167b1add1996e15313", "title": "TrendLearner: Early Prediction of Popularity Trends of User Generated Content", "text": "Accurately predicting the popularity of user generated content (UGC) is very valuable to content providers, online advertisers, as well as social media and social network researchers. However, it is also a challenging task due to the plethora of factors that affect content popularity in social systems. We here focus on the problem of predicting the popularity trend of a piece of UGC (object) as early as possible, as a step towards building more accurate popularity prediction methods. Unlike previous work, we explicitly address the inherent tradeoff between prediction accuracy and remaining interest in the object after prediction, since, to be useful, accurate predictions should be made before interest has been exhausted. Moreover, given the heterogeneity in popularity dynamics across objects in most UGC applications, this tradeoff has to be solved on a per-object basis, which makes the prediction task harder. We propose to tackle this problem with a novel two-step learning approach in which we: (1) extract popularity trends from previously uploaded objects, and then (2) predict trends for newly uploaded content. The first step exploits a time series clustering algorithm to represent each trend by a time series centroid. We propose to treat the second step as a classification problem. First, we extract a set of features of the target object corresponding to the distances of its early popularity curve to the previously identified centroids. We then combine these features with content features (e.g., incoming links, category), using them to train classifiers for prediction. Our experimental results for YouTube datasets show that we can achieve Micro and Macro F1 scores between 0.61 and 0.71 (a gain of up to 38% when compared to alternative approaches), with up to 68% of the views still remaining for 50% or 21% of the videos, depending on the dataset.
We also show that our approach can be applied to produce predictions of content popularity at a future date that are much more accurate than recently proposed regression-based and state-space-based models, with accuracy improvements of at least 33% and 59%, on average."} {"_id": "f2c6c4bc71db8f2f18f81373c65f48d80720d95e", "title": "Blur detection for digital images using wavelet transform", "text": "With the prevalence of digital cameras, the number of digital images increases quickly, which raises the demand for image quality assessment in terms of blur. Based on the edge type and sharpness analysis, using the Haar wavelet transform, a new blur detection scheme is proposed in this paper, which can determine whether an image is blurred or not and to what extent an image is blurred. Experimental results demonstrate the effectiveness of the proposed scheme."} {"_id": "3b02ec4f4b77368f7e34f86f9a49a0d5902c45e0", "title": "Time-driven activity-based costing.", "text": "In the classroom, activity-based costing (ABC) looks like a great way to manage a company's limited resources. But executives who have tried to implement ABC in their organizations on any significant scale have often abandoned the attempt in the face of rising costs and employee irritation. They should try again, because a new approach sidesteps the difficulties associated with large-scale ABC implementation. In the revised model, managers estimate the resource demands imposed by each transaction, product, or customer, rather than relying on time-consuming and costly employee surveys. This method is simpler since it requires, for each group of resources, estimates of only two parameters: how much it costs per time unit to supply resources to the business's activities (the total overhead expenditure of a department divided by the total number of minutes of employee time available) and how much time it takes to carry out one unit of each kind of activity (as estimated or observed by the manager). This approach also overcomes a serious technical problem associated with employee surveys: the fact that, when asked to estimate time spent on activities, employees invariably report percentages that add up to 100. Under the new system, managers take into account time that is idle or unused. Armed with the data, managers then construct time equations, a new feature that enables the model to reflect the complexity of real-world operations by showing how specific order, customer, and activity characteristics cause processing times to vary. This Tool Kit uses concrete examples to demonstrate how managers can obtain meaningful cost and profitability information, quickly and inexpensively. Rather than endlessly updating and maintaining ABC data, they can now spend their time addressing the deficiencies the model reveals: inefficient processes, unprofitable products and customers, and excess capacity."} {"_id": "023c3af96af31d7c2a97c9b39028984f6bea8423", "title": "Visualization in Meteorology\u2014A Survey of Techniques and Tools for Data Analysis Tasks", "text": "This article surveys the history and current state of the art of visualization in meteorology, focusing on visualization techniques and tools used for meteorological data analysis.
We examine characteristics of meteorological data and analysis tasks, describe the development of computer graphics methods for visualization in meteorology from the 1960s to today, and visit the state of the art of visualization techniques and tools in operational weather forecasting and atmospheric research. We approach the topic from both the visualization and the meteorological side, showing visualization techniques commonly used in meteorological practice, and surveying recent studies in visualization research aimed at meteorological applications. Our overview covers visualization techniques from the fields of display design, 3D visualization, flow dynamics, feature-based visualization, comparative visualization and data fusion, uncertainty and ensemble visualization, interactive visual analysis, efficient rendering, and scalability and reproducibility. We discuss demands and challenges for visualization research targeting meteorological data analysis, highlighting aspects in demonstration of benefit, interactive visual analysis, seamless visualization, ensemble visualization, 3D visualization, and technical issues."} {"_id": "8e38ef4ab1097b38e3a5ac1c2b20a5044b1d2e90", "title": "Combining perspiration- and morphology-based static features for fingerprint liveness detection", "text": "It has been shown that, by employing fake fingers, the existing fingerprint recognition systems may be easily deceived. So, there is an urgent need for improving their security. Software-based liveness detection algorithms typically exploit morphological and perspiration-based characteristics separately to measure the vitality. Both such features provide discriminant information about live and fake fingers, so it is reasonable to also investigate their joint contribution. In this paper, we combine a set of the most robust morphological and perspiration-based measures. The effectiveness of the proposed approach has been assessed through a comparison with several state-of-the-art techniques for liveness detection. Experiments have been carried out, for the first time, by adopting standard databases. They have been taken from the Liveness Detection Competition 2009, whose data were acquired using three different optical sensors. Further, we have analyzed how the performance of our algorithm changes when the material employed for the spoof attack is not available during the training of the system."} {"_id": "c145ac5130bfce3de23de25d6618ea282c67c075", "title": "Recognition of Pollen-Bearing Bees from Video Using Convolutional Neural Network", "text": "In this paper, the recognition of pollen-bearing honey bees from videos of the entrance of the hive is presented. This computer vision task is a key component for the automatic monitoring of honeybees in order to obtain large-scale data on their foraging behavior and task specialization. Several approaches are considered for this task, including baseline classifiers, shallow Convolutional Neural Networks, and deeper networks from the literature. The experimental comparison is based on a new dataset of images of honeybees that was manually annotated for the presence of pollen.
The proposed approach, based on Convolutional Neural Networks, is shown to outperform the other approaches in terms of accuracy. Detailed analysis of the results and the influence of the architectural parameters, such as the impact of dedicated color-based data augmentation, provide insights into how to apply the approach to the target application."} {"_id": "2591d2d773f38b5c8d17829a3938c85edf2009a6", "title": "Psychophysiological contributions to phantom limbs.", "text": "Recent studies of amputees reveal a remarkable diversity in the qualities of experiences that define the phantom limb, whether painless or painful. This paper selectively reviews evidence of peripheral, central and psychological processes that trigger or modulate a variety of phantom limb experiences. The data show that pain experienced prior to amputation may persist in the form of a somatosensory memory in the phantom limb. It is suggested that the length and size of the phantom limb may be a perceptual marker of the extent to which sensory input from the amputation stump has re-occupied deprived cortical regions originally subserving the amputated limb. A peripheral mechanism involving a sympathetic-efferent somatic-afferent cycle is presented to explain fluctuations in the intensity of paresthesias referred to the phantom limb. While phantom pain and other sensations are frequently triggered by thoughts and feelings, there is no evidence that the painful or painless phantom limb is a symptom of a psychological disorder. It is concluded that the experience of a phantom limb is determined by a complex interaction of inputs from the periphery and widespread regions of the brain subserving sensory, cognitive, and emotional processes."} {"_id": "346e6803d8adff413a9def0768637db533fe71b1", "title": "Experimental personality designs: analyzing categorical by continuous variable interactions.", "text": "Theories hypothesizing interactions between a categorical and one or more continuous variables are common in personality research. Traditionally, such hypotheses have been tested using nonoptimal adaptations of analysis of variance (ANOVA). This article describes an alternative multiple regression-based approach that has greater power and protects against spurious conclusions concerning the impact of individual predictors on the outcome in the presence of interactions. We discuss the structuring of the regression equation, the selection of a coding system for the categorical variable, and the importance of centering the continuous variable. We present in detail the interpretation of the effects of both individual predictors and their interactions as a function of the coding system selected for the categorical variable. We illustrate two- and three-dimensional graphical displays of the results and present methods for conducting post hoc tests following a significant interaction. The application of multiple regression techniques is illustrated through the analysis of two data sets. We show how multiple regression can produce all of the information provided by traditional but less optimal ANOVA procedures."} {"_id": "08fe9658c086b842980e86c66bde3cef95bb6bec", "title": "Deformable part models are convolutional neural networks", "text": "Deformable part models (DPMs) and convolutional neural networks (CNNs) are two widely used tools for visual recognition. They are typically viewed as distinct approaches: DPMs are graphical models (Markov random fields), while CNNs are \u201cblack-box\u201d non-linear classifiers.
In this paper, we show that a DPM can be formulated as a CNN, thus providing a synthesis of the two ideas. Our construction involves unrolling the DPM inference algorithm and mapping each step to an equivalent CNN layer. From this perspective, it is natural to replace the standard image features used in DPMs with a learned feature extractor. We call the resulting model a DeepPyramid DPM and experimentally validate it on PASCAL VOC object detection. We find that DeepPyramid DPMs significantly outperform DPMs based on histogram of oriented gradients (HOG) features and slightly outperform a comparable version of the recently introduced R-CNN detection system, while running significantly faster."} {"_id": "1060ff9852dc12e05ec44bee7268efdc76f7535d", "title": "Collection flow", "text": "Computing optical flow between any pair of Internet face photos is challenging for most current state-of-the-art flow estimation methods due to differences in illumination, pose, and geometry. We show that flow estimation can be dramatically improved by leveraging a large photo collection of the same (or similar) object. In particular, consider the case of photos of a celebrity from Google Image Search. Any two such photos may have different facial expression, lighting and face orientation. The key idea is that instead of computing flow directly between the input pair (I, J), we compute versions of the images (I', J') in which facial expressions and pose are normalized while lighting is preserved. This is achieved by iteratively projecting each photo onto an appearance subspace formed from the full photo collection. The desired flow is obtained through concatenation of flows (I \u2192 I') o (J' \u2192 J). Our approach can be used with any two-frame optical flow algorithm, and significantly boosts the performance of the algorithm by providing invariance to lighting and shape changes."} {"_id": "205f65b295c80131dbf8f17874dd362413e5d7fe", "title": "Learning Dense Correspondence via 3D-Guided Cycle Consistency", "text": "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. 
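A minimal sketch of the 4-cycle consistency check just described, under simplifying assumptions: flows are composed with nearest-neighbor lookup rather than differentiable bilinear warping, and the zero flows stand in for the predicted and rendered fields.

```python
# Sketch of the correspondence-flow 4-cycle loss (simplified; the actual
# system trains a ConvNet end-to-end with differentiable warping).
import numpy as np

def compose(flow_ab, flow_bc):
    """Compose dense flows A->B and B->C into A->C (nearest neighbor)."""
    h, w, _ = flow_ab.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xb = np.clip(np.round(xs + flow_ab[..., 0]).astype(int), 0, w - 1)
    yb = np.clip(np.round(ys + flow_ab[..., 1]).astype(int), 0, h - 1)
    return flow_ab + flow_bc[yb, xb]

# Predicted flows synthetic1->real1, real1->real2, real2->synthetic2, and
# the rendered ground-truth flow synthetic1->synthetic2 (all dummies here).
h, w = 32, 32
f_s1_r1 = np.zeros((h, w, 2)); f_r1_r2 = np.zeros((h, w, 2))
f_r2_s2 = np.zeros((h, w, 2)); gt_s1_s2 = np.zeros((h, w, 2))

cycle = compose(compose(f_s1_r1, f_r1_r2), f_r2_s2)
loss = np.mean(np.sum((cycle - gt_s1_s2) ** 2, axis=-1))  # cycle-consistency loss
print(loss)
```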
We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks."} {"_id": "54a1b59a47f38bef6ad4cd2d56ea4553746b0b22", "title": "Descriptor Matching with Convolutional Neural Networks: a Comparison to SIFT", "text": "Latest results indicate that features learned via convolutional neural networks outperform previous descriptors on classification tasks by a large margin. It has been shown that these networks still work well when they are applied to datasets or recognition tasks different from those they were trained on. However, descriptors like SIFT are not only used in recognition but also for many correspondence problems that rely on descriptor matching. In this paper we compare features from various layers of convolutional neural nets to standard SIFT descriptors. We consider a network that was trained on ImageNet and another one that was trained without supervision. Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching."} {"_id": "7b1e5e9f85b6d9735ccd63a7bacd4e1bcfa589bb", "title": "Mirrored Light Field Video Camera Adapter", "text": "This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible. Mirrors of different shape and orientation reflect the scene into an upwards-facing camera to create an array of virtual cameras with overlapping field of view at specified depths, and deliver video frame rate light fields. We describe the design, construction, decoding and calibration processes of our mirror-based light field camera adapter in preparation for an open-source release to benefit the robotic vision community. The latest report, computer-aided design models, diagrams and code can be obtained from the following repository: https://bitbucket.org/acrv/mirrorcam."} {"_id": "635268baf38ac5058e0886b8b235e3f98c9f93c1", "title": "UPTIME: Ubiquitous pedestrian tracking using mobile phones", "text": "The task of tracking a pedestrian is valuable for many applications including walking distance estimation for the purpose of pervasive healthcare, museum and shopping mall guides, and locating emergency responders. In this paper, we show how accurate and ubiquitous tracking of a pedestrian can be performed using only the inertial sensors embedded in his/her mobile phone. Our work depends on performing dead reckoning to track the user's movement. The main challenge that needs to be addressed is handling the noise of the low-cost, low-quality inertial sensors in cell phones. Our proposed system combines two novel contributions: a step count estimation technique and a gait-based, accurate, variable step size detection algorithm. The step count estimation technique is based on a lightweight finite state machine approach that leverages orientation-independent features. In order to capture the varying stride length of the user, based on his changing gait, we employ a multi-class hierarchical Support Vector Machine classifier. Combining the estimated number of steps with an accurate estimate of the individual stride length, we achieve ubiquitous and accurate tracking of a person in indoor environments. We implement our system on different Android-based phones and compare it to the state-of-the-art techniques in indoor and outdoor testbeds with arbitrary phone orientation. 
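A minimal stand-in for the step-count stage described above: the paper uses a finite state machine over orientation-independent features, while this sketch simply counts peaks of the total acceleration magnitude, a common simplification. The sampling rate, thresholds, and fixed stride length are assumptions.

```python
# Peak-counting step detector on synthetic accelerometer data.
import numpy as np
from scipy.signal import find_peaks

fs = 50.0                                        # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
acc = 9.81 + 1.5 * np.sin(2 * np.pi * 2.0 * t)   # ~2 steps per second

# One peak per step; enforce a refractory period of 0.3 s between steps.
peaks, _ = find_peaks(acc, height=10.5, distance=int(0.3 * fs))
steps = len(peaks)
stride = 0.7                                     # assumed fixed stride (m)
print("steps:", steps, "distance (m):", steps * stride)   # steps: 20
```

The variable-stride classifier in the paper would replace the fixed `stride` constant with a per-gait estimate.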
Our results in two different testbeds show that we can provide an accurate step count estimation with an error of 5.72%. In addition, our gait type classifier has an accuracy of 97.74%. This leads to a combined tracking error of 6.9% while depending only on the inertial sensors and turning off the GPS sensor completely. This highlights the ability of the system to provide ubiquitous, accurate, and energy-efficient tracking."} {"_id": "7dbca41518638170b0be90e85b7ac3365b337d44", "title": "A Noise\u2010aware Filter for Real\u2010time Depth Upsampling", "text": "A new generation of active 3D range sensors, such as time-of-flight cameras, enables recording of full-frame depth maps at video frame rate. Unfortunately, the captured data are typically starkly contaminated by noise and the sensors feature only a rather limited image resolution. We therefore present a pipeline to enhance the quality and increase the spatial resolution of range data in real-time by upsampling the range information with the data from a high resolution video camera. Our algorithm is an adaptive multi-lateral upsampling filter that takes into account the inherent noisy nature of real-time depth data. Thus, we can greatly improve reconstruction quality, boost the resolution of the data to that of the video sensor, and prevent unwanted artifacts like texture copy into geometry. Our technique has been crafted to achieve improvement in depth map quality while maintaining high computational efficiency for a real-time application. By implementing our approach on the GPU, the creation of a real-time 3D camera with video camera resolution is feasible."} {"_id": "38d309e86c2bcb00bfe9b7f9c6740f061f4ef27c", "title": "Analyzing of two types water cooling electric motors using computational fluid dynamics", "text": "The focus of this work is the analysis of the water flow inside a water-cooled electric motor frame. The aim is to compare the load losses and the avoidance of hot spots in two types of water-cooled frames. The total losses of the electrical machine were considered as the thermal load. The electric motor is a newly designed electrically excited synchronous machine for the automotive industry. Computational fluid dynamics (CFD) was used for the development of this work."} {"_id": "2b905881fe991ce7b6f59b00872d696902906db2", "title": "Imperative Functional Programming", "text": "We present a new model, based on monads, for performing input/output in a non-strict, purely functional language. It is composable, extensible, efficient, requires no extensions to the type system, and extends smoothly to incorporate mixed-language working and in-place array updates."} {"_id": "98adc92ea6225c3a0ba6f204c531f408fedbc281", "title": "Cyber Security Analysis of Substation Automation System", "text": "The automation of substations is increasing in the modern world. The implementation of a SCADA system is necessary for Substation Automation (SA) systems. Generally, Substation Automation Systems use Intelligent Electronic Devices (IEDs) for monitoring, control, and protection of the substation. Standard protocols used by Substation Automation systems are IEC 60870-5-104, DNP, IEC 61850, and IEC 60870-5-101. In this paper, the Modbus protocol is used as the communication protocol. Cyber attacks are a critical issue in SCADA systems. 
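To make the protocol surface concrete, here is a sketch that builds a raw Modbus/TCP "read holding registers" request (function code 0x03) from the standard MBAP framing; the transaction id, unit id, and register addresses are placeholders, and no real device is contacted.

```python
# Constructing a Modbus/TCP read-holding-registers request frame.
import struct

def read_holding_registers_request(transaction_id: int, unit_id: int,
                                   start_addr: int, quantity: int) -> bytes:
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)   # function, addr, count
    mbap = struct.pack(">HHHB", transaction_id, 0x0000,     # protocol id 0
                       len(pdu) + 1, unit_id)               # length incl. unit id
    return mbap + pdu

frame = read_holding_registers_request(1, 1, 0x0000, 4)
print(frame.hex())   # 000100000006010300000004
```

The lack of authentication visible in this framing is precisely what makes Modbus-based SCADA deployments a cyber security concern.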
This paper deals with the monitoring of a substation and the cyber security analysis of SCADA systems."} {"_id": "823964b144009f7c395cd09de9a70fe06542cc84", "title": "Overview of Current Development in Electrical Energy Storage Technologies and the Application Potential in Power System Operation", "text": "Electrical power generation is changing dramatically across the world because of the need to reduce greenhouse gas emissions and to introduce mixed energy sources. The power network faces great challenges in transmission and distribution to meet demand with unpredictable daily and seasonal variations. Electrical Energy Storage (EES) is recognized as an underpinning technology with great potential for meeting these challenges, whereby energy is stored in a certain state, according to the technology used, and is converted to electrical energy when needed. However, the wide variety of options and complex characteristic matrices make it difficult to appraise a specific EES technology for a particular application. This paper intends to mitigate this problem by providing a comprehensive and clear picture of the state-of-the-art technologies available, and where they would be suited for integration into a power generation and distribution system. The paper starts with an overview of the operation principles, technical and economic performance features and the current research and development of important EES technologies, sorted into six main categories based on the types of energy stored. Following this, a comprehensive comparison and an application potential analysis of the reviewed technologies are presented."} {"_id": "7ad3675c38070d41b5e4c96fef1a6c80ca481f51", "title": "Xface: MPEG-4 based open source toolkit for 3D Facial Animation", "text": "In this paper, we present our open source, platform independent toolkit for developing 3D talking agents, namely Xface. It relies on the MPEG-4 Face Animation (FA) standard. The toolkit currently incorporates three pieces of software. The core Xface library is for developers who want to embed 3D facial animation into their software as well as researchers who want to focus on related topics without the hassle of implementing a full framework from scratch. The XfaceEd editor provides an easy to use interface to generate MPEG-4 ready meshes from static 3D models. Last, XfacePlayer is a sample application that demonstrates the toolkit in action. All the pieces are implemented in the C++ programming language and rely only on operating system independent libraries. The main design principles for Xface are ease of use and extendibility."} {"_id": "49ba47eaa12cecca72bf7729d3bbdb9d5c38db24", "title": "Classifying imbalanced data using a bagging ensemble variation (BEV)", "text": "In many applications, data collected are highly skewed where data of one class clearly dominates data from the other classes. Most existing classification systems that perform well on balanced data give very poor performance on imbalanced data, especially for the minority class data. Existing work on improving the quality of classification on imbalanced data includes over-sampling, under-sampling, and methods that make modifications to the existing classification systems. This paper discusses the BEV system for classifying imbalanced data. The system is developed based on the ideas from the "Bagging" classification ensemble. 
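One plausible reading of the BEV idea, stated as an assumption rather than the paper's exact procedure: partition the majority class into disjoint bags, pair each bag with all minority samples, train one classifier per bag, and combine by majority vote.

```python
# Sketch of a bagging-ensemble variation for imbalanced data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bev_fit_predict(X_maj, X_min, X_test, n_bags=5):
    bags = np.array_split(np.random.permutation(len(X_maj)), n_bags)
    votes = []
    for idx in bags:
        X = np.vstack([X_maj[idx], X_min])            # bag + ALL minority data
        y = np.r_[np.zeros(len(idx)), np.ones(len(X_min))]
        clf = DecisionTreeClassifier().fit(X, y)
        votes.append(clf.predict(X_test))
    return (np.mean(votes, axis=0) >= 0.5).astype(int)  # majority vote

X_maj = np.random.randn(500, 4) + 1.0                  # synthetic skewed data
X_min = np.random.randn(25, 4) - 1.0
X_test = np.random.randn(10, 4)
print(bev_fit_predict(X_maj, X_min, X_test))
```

Each base learner sees a roughly balanced training set, yet no synthetic minority samples are ever created.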
The motivation behind the scheme is to maximally use the minority class data without creating synthetic data or making changes to the existing classification systems. Experimental results using real-world imbalanced data show the effectiveness of the system."} {"_id": "20d16d229ed5fddcb9ac1c3a7925582c286d3927", "title": "Defining clusters from a hierarchical cluster tree: the Dynamic Tree Cut package for R", "text": "SUMMARY\nHierarchical clustering is a widely used method for detecting clusters in genomic data. Clusters are defined by cutting branches off the dendrogram. A common but inflexible method uses a constant height cutoff value; this method exhibits suboptimal performance on complicated dendrograms. We present the Dynamic Tree Cut R package that implements novel dynamic branch cutting methods for detecting clusters in a dendrogram depending on their shape. Compared to the constant height cutoff method, our techniques offer the following advantages: (1) they are capable of identifying nested clusters; (2) they are flexible: cluster shape parameters can be tuned to suit the application at hand; (3) they are suitable for automation; and (4) they can optionally combine the advantages of hierarchical clustering and partitioning around medoids, giving better detection of outliers. We illustrate the use of these methods by applying them to protein-protein interaction network data and to a simulated gene expression data set.\n\n\nAVAILABILITY\nThe Dynamic Tree Cut method is implemented in an R package available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/BranchCutting."} {"_id": "1742c355b5b17f5e147661bbcafcd204f253cf0e", "title": "Automatic Stance Detection Using End-to-End Memory Networks", "text": "We present an effective end-to-end memory network model that jointly (i) predicts whether a given document can be considered as relevant evidence for a given claim, and (ii) extracts snippets of evidence that can be used to reason about the factuality of the target claim. Our model combines the advantages of convolutional and recurrent neural networks as part of a memory network. We further introduce a similarity matrix at the inference level of the memory network in order to extract snippets of evidence for input claims more accurately. Our experiments on a public benchmark dataset, FakeNewsChallenge, demonstrate the effectiveness of our approach."} {"_id": "3088a18108834988640a7176d2fd50e711d40146", "title": "Transfer learning from multiple source domains via consensus regularization", "text": "Recent years have witnessed an increased interest in transfer learning. Despite the vast amount of research performed in this field, there are remaining challenges in applying the knowledge learnt from multiple source domains to a target domain. First, data from multiple source domains can be semantically related, but have different distributions. It is not clear how to exploit the distribution differences among multiple source domains to boost the learning performance in a target domain. Second, many real-world applications demand this transfer learning to be performed in a distributed manner. To meet these challenges, we propose a consensus regularization framework for transfer learning from multiple source domains to a target domain. In this framework, a local classifier is trained by considering both local data available in a source domain and the prediction consensus with the classifiers from other source domains. 
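A sketch of what such a consensus-regularized objective could look like; the notation (KL penalty, weight lam) is an assumption for illustration, not the paper's exact formulation. Each source domain adds, to its ordinary supervised loss, a penalty that pushes its predictions on unlabeled target samples toward the averaged predictions of the other sources.

```python
# Consensus-regularized objective for one source-domain classifier.
import numpy as np

def consensus_loss(local_nll, p_local, p_others, lam=0.1):
    """local_nll: scalar supervised loss on this source's labeled data.
    p_local:  (n, c) local predictions on unlabeled target samples.
    p_others: (n, c) averaged predictions of the other source classifiers."""
    eps = 1e-12
    kl = np.sum(p_local * (np.log(p_local + eps) - np.log(p_others + eps)), axis=1)
    return local_nll + lam * kl.mean()

p1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # this source's target predictions
p2 = np.array([[0.7, 0.3], [0.3, 0.7]])   # consensus of the other sources
print(consensus_loss(0.35, p1, p2))
```

Note that only the prediction matrices cross domain boundaries, which matches the privacy argument made next.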
In addition, the training algorithm can be implemented in a distributed manner, in which all the source domains are treated as slave nodes and the target domain is used as the master node. To combine the training results from multiple source domains, the nodes only need to share some statistical data rather than the full contents of their labeled data. This can modestly relieve the privacy concerns and avoid the need to upload all data to a central location. Finally, our experimental results show the effectiveness of our consensus regularization learning."} {"_id": "06eda0aa078454c99eb6be43d3a301c9b8d5d1fa", "title": "Image Similarity Using Mutual Information of Regions", "text": "Mutual information (MI) has emerged in recent years as an effective similarity measure for comparing images. One drawback of MI, however, is that it is calculated on a pixel by pixel basis, meaning that it takes into account only the relationships between corresponding individual pixels and not those of each pixel\u2019s respective neighborhood. As a result, much of the spatial information inherent in images is not utilized. In this paper, we propose a novel extension to MI called regional mutual information (RMI). This extension efficiently takes neighborhood regions of corresponding pixels into account. We demonstrate the usefulness of RMI by applying it to a real-world problem in the medical domain\u2014 intensity-based 2D-3D registration of X-ray projection images (2D) to a CT image (3D). Using a gold-standard spine image data set, we show that RMI is a more robust similarity measure for image registration than MI."} {"_id": "d1f711ab0fc172f9d2cbb430fdc814be3abd8832", "title": "A novel data glove for fingers motion capture using inertial and magnetic measurement units", "text": "A novel data glove with embedded low-cost MEMS inertial and magnetic measurement units is proposed for finger motion capture. Each unit consists of a tri-axial gyroscope, a tri-axial accelerometer and a tri-axial magnetometer. The sensor board and processor board are compactly designed, which are small enough to fit the size of our fingers. The data glove is equipped with fifteen units to measure each joint angle of the fingers. A calibration approach is then put forward to improve the accuracy of measurements by both offline and online procedures, and a fast estimation method is used to determine the orientations of the fifteen units simultaneously. The proposed algorithm is easy to implement, and more precise and efficient measurements can be obtained as compared with existing methods. Finger motion capture experiments are conducted to acquire the characteristics of the fingers and to teleoperate robotic hands, which proves the effectiveness of the data glove."} {"_id": "6ac2443b7fa7de3b779137a5cbbf1a023d6b2a68", "title": "Interpretation and trust: designing model-driven visualizations for text analysis", "text": "Statistical topic models can help analysts discover patterns in large text corpora by identifying recurring sets of words and enabling exploration by topical concepts. However, understanding and validating the output of these models can itself be a challenging analysis task. In this paper, we offer two design considerations - interpretation and trust - for designing visualizations based on data-driven models. Interpretation refers to the facility with which an analyst makes inferences about the data through the lens of a model abstraction. Trust refers to the actual and perceived accuracy of an analyst's inferences. 
These considerations derive from our experiences developing the Stanford Dissertation Browser, a tool for exploring over 9,000 Ph.D. theses by topical similarity, and a subsequent review of existing literature. We contribute a novel similarity measure for text collections based on a notion of "word-borrowing" that arose from an iterative design process. Based on our experiences and a literature review, we distill a set of design recommendations and describe how they promote interpretable and trustworthy visual analysis tools."} {"_id": "806a8739d8cb68c3e4695d37ae6757d29e317c23", "title": "Exploring simultaneous keyword and key sentence extraction: improve graph-based ranking using wikipedia", "text": "Summarization and Keyword Selection are two important tasks in the NLP community. Although both aim to summarize the source articles, they are usually treated separately by using sentences or words. In this paper, we propose a two-level graph-based ranking algorithm to generate summarization and extract keywords at the same time. Previous works have reached a consensus that important sentences are composed of important keywords. In this paper, we further study the mutual impact between them through context analysis. We use Wikipedia to build a two-level concept-based graph, instead of a traditional term-based graph, to express their homogenous relationship and heterogeneous relationship. We run PageRank and HITS rank on the graph to adjust both homogenous and heterogeneous relationships. A more reasonable relatedness value can thus be obtained for key sentence selection and keyword selection. We evaluate our algorithm on the TAC 2011 data set. The traditional term-based approach achieves a score of 0.255 in ROUGE-1 and a score of 0.037 in ROUGE-2, and our approach improves them to 0.323 and 0.048, respectively."} {"_id": "3d92f8806d45727ad476751748bbb7ceda65c685", "title": "English, Devnagari and Urdu Text Identification", "text": "In a multi-lingual multi-script country like India, a single text line of a document page may contain words of two or more scripts. For the Optical Character Recognition of such a document page it is necessary to identify different scripts from the document. In this paper, an automatic technique for word-wise identification of English, Devnagari and Urdu scripts from a single document is proposed. Here, at first, the document is segmented into lines and then the lines are segmented into possible words. Using characteristics of different scripts, the identification scheme is developed."} {"_id": "04afd5f18d3080c57d4b304dfbd1818da9a02e8e", "title": "Models and Issues in Data Stream Systems", "text": "In this overview paper we motivate the need for and research issues arising from a new model of data processing. In this model, data does not take the form of persistent relations, but rather arrives in multiple, continuous, rapid, time-varying data streams. In addition to reviewing past work relevant to data stream systems and current projects in the area, the paper explores topics in stream query languages, new requirements and challenges in query processing, and algorithmic issues."} {"_id": "e988f7ad852cc0918b521431d0ec9e0e792bde2a", "title": "Accuracy analysis of kinect depth data", "text": "This paper presents an investigation of the geometric quality of depth data obtained by the Kinect sensor. Based on the mathematical model of depth measurement by the sensor, a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. 
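As background for the error analysis just mentioned (stated here as an assumption, not quoted from the paper): the Kinect measures depth by triangulation from disparity, for which the standard error-propagation model predicts depth error growing quadratically with distance.

```latex
% Standard triangulation error model: depth z from disparity d,
% focal length f, baseline b, disparity noise \sigma_d.
z = \frac{f\,b}{d}, \qquad
\sigma_z = \left|\frac{\partial z}{\partial d}\right|\sigma_d
         = \frac{z^2}{f\,b}\,\sigma_d .
```

This quadratic growth is consistent with the few-millimetre-to-centimetre range of random errors reported next.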
Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimetres up to about 4 cm at the maximum range of the sensor. The accuracy of the data is also found to be influenced by the low resolution of the depth measurements."} {"_id": "d61f07f93bc6ecd70f2fc37c9f43748d714b3dfb", "title": "Automatic Mapping Clinical Notes to Medical Terminologies", "text": "Automatic mapping of key concepts from clinical notes to a terminology is an important task to achieve for extraction of the clinical information locked in clinical notes and patient reports. The present paper describes a system that automatically maps free text into a medical reference terminology. The algorithm utilises Natural Language Processing (NLP) techniques to enhance a lexical token matcher. In addition, this algorithm is able to identify negative concepts as well as performing term qualification. The algorithm has been implemented as a web-based service running at a hospital to process real-time data, and demonstrated that it worked within acceptable time and accuracy limits. However, broader acceptability of the algorithm will require comprehensive evaluations."} {"_id": "064fb3a6f2666e17f6d411c0a731d56aae0a785e", "title": "Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering", "text": "Building a successful recommender system depends on understanding both the dimensions of people\u2019s preferences as well as their dynamics. In certain domains, such as fashion, modeling such preferences can be incredibly difficult, due to the need to simultaneously model the visual appearance of products as well as their evolution over time. The subtle semantics and non-linear dynamics of fashion evolution raise unique challenges especially considering the sparsity and large scale of the underlying datasets. In this paper we build novel models for the One-Class Collaborative Filtering setting, where our goal is to estimate users\u2019 fashion-aware personalized ranking functions based on their past feedback. To uncover the complex and evolving visual factors that people consider when evaluating products, our method combines high-level visual features extracted from a deep convolutional neural network, users\u2019 past feedback, as well as evolving trends within the community. Experimentally we evaluate our method on two large real-world datasets from Amazon.com, where we show it to outperform state-of-the-art personalized ranking measures, and also use it to visualize the high-level fashion trends across the 11-year span of our dataset."} {"_id": "0842636e2efd5a0c0f34ae88785af29612814e17", "title": "Joint Deep Modeling of Users and Items Using Reviews for Recommendation", "text": "A large amount of information exists in reviews written by users. This source of information has been ignored by most of the current recommender systems while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. 
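A simplified sketch of the two parallel text towers just described; here the coupling is a plain dot product rather than the shared factorization-machine-style layer the abstract goes on to describe, and all sizes are illustrative assumptions.

```python
# Two parallel review-text towers coupled for rating prediction.
import torch
import torch.nn as nn

class TextTower(nn.Module):
    def __init__(self, vocab=5000, emb=64, ch=32, out=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, ch, kernel_size=3, padding=1)
        self.fc = nn.Linear(ch, out)

    def forward(self, tokens):                      # tokens: (N, T)
        h = self.emb(tokens).transpose(1, 2)        # (N, emb, T)
        h = torch.relu(self.conv(h)).max(dim=2).values   # max over time
        return self.fc(h)                           # (N, out)

user_tower, item_tower = TextTower(), TextTower()
u = user_tower(torch.randint(0, 5000, (8, 100)))    # reviews by the user
i = item_tower(torch.randint(0, 5000, (8, 100)))    # reviews of the item
rating_pred = (u * i).sum(dim=1)                    # simplified coupling
print(rating_pred.shape)                            # torch.Size([8])
```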
A shared layer is introduced on the top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets."} {"_id": "4c3103164ae3d2e79c9e1d943d77b7dfdf609307", "title": "Ratings meet reviews, a combined approach to recommend", "text": "Most existing recommender systems focus on modeling the ratings while ignoring the abundant information embedded in the review text. In this paper, we propose a unified model that combines content-based filtering with collaborative filtering, harnessing the information of both ratings and reviews. We apply topic modeling techniques on the review text and align the topics with rating dimensions to improve prediction accuracy. With the information embedded in the review text, we can alleviate the cold-start problem. Furthermore, our model is able to learn latent topics that are interpretable. With these interpretable topics, we can explore the prior knowledge on items or users and recommend completely "cold" items. Empirical studies on 27 classes of real-life datasets show that our proposed model leads to significant improvements compared with strong baseline methods, especially for datasets which are extremely sparse, where rating-only methods cannot make accurate predictions."} {"_id": "6dedbf5566c1e12f61ec7731aa5f7635ab28dc74", "title": "Nonlinear model predictive control for aerial manipulation", "text": "This paper presents a nonlinear model predictive controller to follow desired 3D trajectories with the end effector of an unmanned aerial manipulator (i.e., a multirotor with a serial arm attached). To the knowledge of the authors, this is the first time that such a controller runs online and on board a limited computational unit to drive a kinematically augmented aerial vehicle. Besides the trajectory following target, we explore the possibility of accomplishing other tasks during flight by taking advantage of the system redundancy. We define several tasks designed for aerial manipulators and show in simulation case studies how they can be achieved by either a weighting strategy, within a main optimization process, or a hierarchical approach consisting of nested optimizations. Moreover, experiments are presented to demonstrate the performance of such a controller in a real robot."} {"_id": "5cb265c890c8ddf5a5bf06762b87f7074f4ff9ea", "title": "The generic model query language GMQL - Conceptual specification, implementation, and runtime evaluation", "text": "The generic model query language GMQL is designed to query collections of conceptual models created in arbitrary graph-based modelling languages. Querying conceptual models means searching for particular model subgraphs that comply with a predefined pattern query. Such a query specifies the structural and semantic properties of the model subgraphs to be retrieved. We derive the query language from the literature and formally specify its syntax and semantics. We conduct an analysis of GMQL's theoretical and practical runtime performance, concluding that it returns query results within satisfactory time. Given its generic nature, GMQL contributes to a broad range of different model analysis scenarios ranging from business process compliance management to model translation and business process weakness detection. 
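The core matching idea behind such pattern queries can be illustrated generically with subgraph isomorphism over a typed model graph; this is not GMQL's actual syntax, just a sketch of the underlying operation, with toy node kinds assumed.

```python
# Finding model subgraphs that comply with a structural pattern.
import networkx as nx
from networkx.algorithms import isomorphism

model = nx.DiGraph()                        # a toy process model
model.add_node("a", kind="event"); model.add_node("b", kind="task")
model.add_node("c", kind="task")
model.add_edge("a", "b"); model.add_edge("b", "c")

pattern = nx.DiGraph()                      # pattern: a task followed by a task
pattern.add_node("x", kind="task"); pattern.add_node("y", kind="task")
pattern.add_edge("x", "y")

gm = isomorphism.DiGraphMatcher(
    model, pattern,
    node_match=isomorphism.categorical_node_match("kind", None))
print(list(gm.subgraph_isomorphisms_iter()))   # [{'b': 'x', 'c': 'y'}]
```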
As GMQL returns results with acceptable runtime performance, it can be used to query large collections of hundreds or thousands of conceptual models containing not only process models, but also data models or organizational charts. In this paper, we furthermore evaluate GMQL against the backdrop of existing query approaches, thereby carving out its advantages and limitations as well as pointing toward future research."} {"_id": "09661a6bb7578979e42c75d6ce382baba64d4981", "title": "Proving Theorems about LISP Functions", "text": "We describe some simple heuristics combining evaluation and mathematical induction which we have implemented in a program that automatically proves a wide variety of theorems about recursive LISP functions. The method the program uses to generate induction formulas is described at length. The theorems proved by the program include that REVERSE is its own inverse and that a particular SORT program is correct. Appendix B contains a list of the theorems proved by the program."} {"_id": "f59a6c57fe1735d0c36b24437d6110f7ef21a27d", "title": "Ferrite-magnet spoke-type IPMSM with W-shaped magnet placement", "text": "Rare earth magnets that have a large energy product are generally used in high-efficiency interior permanent-magnet synchronous motors (IPMSMs). However, efficient rare-earth-free motors are needed because of problems such as the high prices of and export restrictions on rare earth materials. This paper introduces a ferrite-magnet spoke-type IPMSM that employs a W-shaped magnet placement in order to achieve high efficiency. The effectiveness of the W-shaped magnet placement is verified from the results of two-dimensional finite element analysis and experiments on a prototype."} {"_id": "79890a7d61b082947e1300f60231336a53cc285c", "title": "Deep Joint Rain Detection and Removal from a Single Image", "text": "In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. 
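One consistent way to write the rain image model described above, with notation assumed for illustration: observed image O, background B, streak layer S, binary streak-location map R, per-layer streaks S_t for overlapping directions, and a global atmospheric term A for accumulation.

```latex
% Basic model with a binary rain-location map R:
O = B + S\,R,
% extended with s overlapping streak layers S_t and an atmospheric
% component A (weight \alpha) that models rain-streak accumulation:
O = \alpha \Big( B + \sum_{t=1}^{s} S_t\,R \Big) + (1 - \alpha)\,A .
```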
In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture."} {"_id": "6a6502cc5aef5510122c7b7b9e00e489ef06c2ca", "title": "5G Spectrum: is china ready?", "text": "With a considerable share of the world's mobile users, China has been actively promoting research on 5G, in which the spectrum issue is of great interest. New 5G characteristics put forward many requirements for spectrum in terms of total amount and candidate bands, as well as new challenges for spectrum usage methods and management. Based on China's current situation, this article first discusses the 5G vision, spectrum demands, and potential candidate bands. Furthermore, it is indicated that spectrum sharing will bring many benefits for 5G systems, and different sharing scenarios are summarized. Finally, based on the current framework of spectrum management in China, potential services classification and spectrum assessment are proposed to accommodate new 5G requirements."} {"_id": "16bae0f9f6a2823b2a65f6296494eea41f5c9859", "title": "Scalable Semantic Web Data Management Using Vertical Partitioning", "text": "Efficient management of RDF data is an important factor in realizing the Semantic Web vision. Performance and scalability issues are becoming increasingly pressing as Semantic Web technology is applied to real-world applications. In this paper, we examine the reasons why current data management solutions for RDF data scale poorly, and explore the fundamental scalability limitations of these approaches. We review the state of the art for improving performance for RDF databases and consider a recent suggestion, \u201cproperty tables.\u201d We then discuss practically and empirically why this solution has undesirable features. As an improvement, we propose an alternative solution: vertically partitioning the RDF data. We compare the performance of vertical partitioning with prior art on queries generated by a Web-based RDF browser over a large-scale (more than 50 million triples) catalog of library data. Our results show that a vertically partitioned schema achieves similar performance to the property table technique while being much simpler to design. Further, if a column-oriented DBMS (a database architected specially for the vertically partitioned case) is used instead of a row-oriented DBMS, another order of magnitude performance improvement is observed, with query times dropping from minutes to several seconds."} {"_id": "64b0625cec2913ca30a059ffb5f06c750b86c881", "title": "A robust fault modeling and prediction technique for heterogeneous network", "text": "This paper proposes a modelling technique for fault identification in a heterogeneous network. A heterogeneous network is the integration of wired networks, cellular networks, wireless local area networks (WLANs), ad hoc networks, etc. There may be several causes and sub-causes of faults in each type of network. These causes of faults must be mitigated in order to have a secure and reliable system. The paper introduces a model-based approach that starts with the enumeration of possible causes of each fault over the heterogeneous network. A framework is then created from these causes, which diagrammatically describes the causes and sub-causes responsible for the occurrence of a fault. 
This paper suggests a conceptual fault cause tree model based on the probabilistic framework for ranking the faults and their possible causes. An effective mitigation strategy is required to mitigate the causes and sub-causes. Once mitigation of the cause creating a fault is achieved, the system is analyzed and tested for accuracy and reliability. The proposed technique ensures that all sub-causes, even at the lowest level of abstraction, are taken into consideration in making the system more robust against the particular fault. The proposed model is also used to analyze the fault probability of a heterogeneous network."} {"_id": "118737f1a4429535e22dd438402cc50f4e9d1644", "title": "One-against-all multi-class SVM classification using reliability measures", "text": "Support vector machines (SVMs) were originally designed for binary classification. To extend them to the multi-class scenario, a typical conventional way is to decompose an M-class problem into a series of two-class problems, for which one-against-all is the earliest and one of the most widely used implementations. However, certain theoretical analysis reveals a drawback, i.e., the competence of each classifier is totally neglected when the results of classification from the multiple classifiers are combined for the final decision. To overcome this limitation, this paper introduces reliability measures into the multi-class framework. Two measures are designed: static reliability measure (SRM) and dynamic reliability measure (DRM). SRM works on a collective basis and yields a constant value regardless of the location of the test sample. DRM, on the other hand, accounts for the spatial variation of the classifier's performance. Based on these two reliability measures, a new decision strategy for the one-against-all method is proposed, which is tested on benchmark data sets and demonstrates its effectiveness."} {"_id": "a44afaa688d50effa9968cceb292c81b5bb2616d", "title": "Modeling a DC Power System in Hardware-in-the-Loop Environment", "text": "Abstract (in Finnish)."} {"_id": "ebe24a632c42c36dc87d49c66164cc6097168101", "title": "Motion Capture and Retargeting of Fish by Monocular Camera", "text": "Accurate motion capture and flexible retargeting of underwater creatures such as fish remain difficult due to the long-lasting challenges of marker attachment and feature description for soft bodies in the underwater environment. Despite limited research progress in recent years, real-time fish motion retargeting with desirable motion patterns remains elusive. Strongly motivated by our ambitious goal of achieving high-quality data-driven fish animation with a light-weight, mobile device, this paper develops a novel framework of motion capture and retargeting for fish. We capture the motion of actual fish with a monocular camera, without the use of any markers. The elliptical Fourier coefficients are then integrated into the contour-based feature extraction process to analyze the fish swimming patterns. This novel approach can obtain the motion information in a robust way, with a smooth medial axis as the descriptor for the soft fish body. For motion retargeting, we propose a two-level scheme to properly transfer the captured motion into new models, such as 2D meshes (with texture) generated from pictures or 3D models designed by artists, regardless of different body geometry and fin proportions among various species. Both the motion capture and retargeting processes function in real time. 
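A minimal sketch of contour-based shape description of the kind used above, with the classical complex Fourier descriptors as a simplified stand-in for elliptical Fourier coefficients; the contour and normalizations are toy assumptions.

```python
# Complex Fourier descriptors of a closed contour (simplified stand-in
# for elliptical Fourier coefficients).
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=10):
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as complex signal
    Z = np.fft.fft(z - z.mean())                   # subtract mean: translation invariance
    Z = Z / np.abs(Z[1])                           # divide by first harmonic: scale invariance
    return np.abs(Z[1:n_coeffs + 1])               # drop phase: rotation invariance

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
fish_like = np.c_[np.cos(theta) * (1 + 0.3 * np.cos(2 * theta)),  # toy fish-like outline
                  0.5 * np.sin(theta)]
print(fourier_descriptors(fish_like, n_coeffs=5))
```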
Hence, the system can simultaneously create fish animation with variation, while obtaining video sequences of real fish by a monocular camera."} {"_id": "613b6abf4e3557cb3e17eb96c137ce9b7788907c", "title": "Joint shot boundary detection and key frame extraction", "text": "Representing a video by a set of key frames is useful for efficient video browsing and retrieval. But key frame extraction remains a challenge in the computer vision field. In this paper, we propose a joint framework to integrate both shot boundary detection and key frame extraction, wherein three probabilistic components are taken into account, i.e. the prior of the key frames, the conditional probability of shot boundaries and the conditional probability of each video frame. Thus key frame extraction is treated as a Maximum A Posteriori estimation problem, which can be solved by an alternating strategy. Experimental results show that the proposed method preserves the scene-level structure and extracts key frames that are representative and discriminative."} {"_id": "218f2a4d1174f5d9aed2383ea600869c186cfd08", "title": "Design and analysis of cross-fed rectangular array antenna; an X-band microstrip array antenna, operating at 11 GHz", "text": "In this paper, an X-Band Microstrip Patch antenna, termed the Cross-fed rectangular array antenna, is presented with enhanced radiation efficiency. The proposed antenna is designed and simulated using the FEKO 5.5 suite. It is then fabricated on a 40 \u00d7 40 mm2 Rogers RT-Duroid 5880LZ dielectric material board. This antenna is composed of a rectangular patch array in a cross fashion with four patches, each with a dimension of 12.3 mm \u00d7 9.85 mm, excited using a wire-port. The antenna operates at a frequency of 11.458 GHz in the X-Band (8-12 GHz). It achieves a stable radiation efficiency of 80.71% with a gain of 7.4 dB at the operating frequency. It is thus inferred that this antenna can be used for X-Band applications such as maritime navigation and airborne radars."} {"_id": "1a86eb42952412ee02e3f6da06f874f1946eff6b", "title": "Deep Cross-Modal Projection Learning for Image-Text Matching", "text": "The key point of image-text matching is how to accurately measure the similarity between visual and textual inputs. Despite the great progress of associating the deep cross-modal embeddings with the bi-directional ranking loss, developing the strategies for mining useful triplets and selecting appropriate margins remains a challenge in real applications. In this paper, we propose a cross-modal projection matching (CMPM) loss and a cross-modal projection classification (CMPC) loss for learning discriminative image-text embeddings. The CMPM loss minimizes the KL divergence between the projection compatibility distributions and the normalized matching distributions defined with all the positive and negative samples in a mini-batch. The CMPC loss attempts to categorize the vector projection of representations from one modality onto another with the improved norm-softmax loss, for further enhancing the feature compactness of each class. Extensive analysis and experiments on multiple datasets demonstrate the superiority of the proposed approach."} {"_id": "a3f550d415313eee87764fcc5c44b94cc82313c4", "title": "Efficient Ordered Combinatorial Semi-Bandits for Whole-Page Recommendation", "text": "The Multi-Armed Bandit (MAB) framework has been successfully applied in many web applications. 
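As background for the combinatorial extension described next, here is a minimal Bernoulli Thompson-sampling sketch for the basic K-armed case; the click-through rates and horizon are synthetic.

```python
# Bernoulli Thompson sampling with Beta posteriors.
import numpy as np

rng = np.random.default_rng(0)
true_ctr = np.array([0.05, 0.08, 0.12])     # unknown click-through rates
alpha = np.ones(3); beta = np.ones(3)       # Beta(1, 1) priors

for _ in range(5000):
    arm = np.argmax(rng.beta(alpha, beta))  # sample a plausible CTR, act greedily
    reward = rng.random() < true_ctr[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))   # concentrates on arm 2
```

The combinatorial setting below replaces the single argmax with selecting and ordering S of K actions, which is where the flow-network machinery comes in.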
However, many complex real-world applications that involve multiple content recommendations cannot fit into the traditional MAB setting. To address this issue, we consider an ordered combinatorial semi-bandit problem where the learner recommends S actions from a base set of K actions, and displays the results in S (out of M) different positions. The aim is to maximize the cumulative reward with respect to the best possible subset and positions in hindsight. By the adaptation of a minimum-cost maximum-flow network, a practical algorithm based on Thompson sampling is derived for the (contextual) combinatorial problem, thus resolving the problem of computational intractability. The method can potentially work with whole-page recommendation and any probabilistic model; to illustrate its effectiveness, we focus on Gaussian process optimization and a contextual setting where click-through rate is predicted using logistic regression. We demonstrate the algorithms\u2019 performance on synthetic Gaussian process problems and on large-scale news article recommendation datasets from Yahoo! Front Page Today Module."} {"_id": "d79ec6fd2beba6fa8a884e8b3e48bd0691ce186f", "title": "Overweight among primary school-age children in Malaysia.", "text": "This study is a secondary data analysis from the National Health Morbidity Survey III, a population-based study conducted in 2006. A total of 7,749 children between 7 and 12 years old were recruited into the study. This study seeks to report the prevalence of overweight (including obesity) children in Malaysia using international cut-off points and to identify its associated key social determinants. The results show that the overall prevalence of overweight children in Malaysia was 19.9%. Urban residents, males, Chinese children, the wealthy, and those with overweight or educated guardians showed a higher prevalence of overweight. In multivariable analysis, a higher likelihood of being overweight was observed among those with advancing age (OR=1.15), urban residents (OR=1.16, 95% CI: 1.01-1.36), the Chinese (OR=1.45, 95% CI: 1.19-1.77), boys (OR=1.23, 95% CI: 1.08-1.41), and those who came from higher-income families. In conclusion, one out of five 7-12-year-old children in Malaysia were overweight. Locality of residence, ethnicity, gender, guardian education, and an overweight guardian were likely to be the predictors of this alarming issue. Societal and public health efforts are needed in order to reduce the burden of disease associated with obesity."} {"_id": "dbe39712e69a62f94cb309513d85e71930c5c293", "title": "Mesh Location in Open Ventral Hernia Repair: A Systematic Review and Network Meta-analysis", "text": "There is no consensus on the ideal location for mesh placement in open ventral hernia repair (OVHR). We aim to identify the mesh location associated with the lowest rate of recurrence following OVHR using a systematic review and meta-analysis. A search was performed for studies comparing at least two of four locations for mesh placement during OVHR (onlay, inlay, sublay, and underlay). Outcomes assessed were hernia recurrence and surgical site infection (SSI). Pairwise meta-analysis was performed to compare all direct treatment comparisons of mesh locations. A multiple treatment meta-analysis was performed to compare all mesh locations in the Bayesian framework. Sensitivity analyses were planned for the following: studies with a low risk of bias, incisional hernias, by hernia size, and by mesh type (synthetic or biologic). 
Twenty-one studies were identified (n\u00a0=\u00a05,891). Sublay placement of mesh was associated with the lowest risk for recurrence [OR 0.218 (95\u00a0% CI 0.06\u20130.47)] and was the best of the four treatment modalities assessed [Prob (best)\u00a0=\u00a094.2\u00a0%]. Sublay was also associated with the lowest risk for SSI [OR 0.449 (95\u00a0% CI 0.12\u20131.16)] and was the best of the four treatment modalities assessed [Prob (best)\u00a0=\u00a077.3\u00a0%]. When only assessing studies at low risk of bias, of incisional hernias, and using synthetic mesh, the probability that sublay had the lowest rate of recurrence and SSI was high. Sublay mesh location has lower complication rates than other mesh locations. While additional randomized controlled trials are needed to validate these findings, this network meta-analysis suggests the probability of sublay being the best location for mesh placement is high."} {"_id": "dc160709bbe528b506a37ead334f60d258413357", "title": "Learned Step Size Quantization", "text": "We present here Learned Step Size Quantization, a method for training deep networks such that they can run at inference time using low precision integer matrix multipliers, which offer power and space advantages over high precision alternatives. The essence of our approach is to learn the step size parameter of a uniform quantizer by backpropagation of the training loss, applying a scaling factor to its learning rate, and computing its associated loss gradient by ignoring the discontinuity present in the quantizer. This quantization approach can be applied to activations or weights, using different levels of precision as needed for a given system, and requiring only a simple modification of existing training code. As demonstrated on the ImageNet dataset, our approach achieves better accuracy than all previously published methods for creating quantized networks on several ResNet network architectures at 2-, 3-, and 4-bit precision."} {"_id": "aabe608eef164a4940d962aeffebafb96ebbeb81", "title": "Design of EMG Acquisition Circuit to Control an Antagonistic Mechanism Actuated by Pneumatic Artificial Muscles PAMs", "text": "A pneumatically actuated antagonistic pair of muscles with joint mechanism (APMM) is developed for bionic and biomimetic applications, emulating biological muscles by realizing various kinds of locomotion based on the normal electrical activity of biological muscles. This paper aims to compare the response of the antagonistic pair of muscles mechanism (APMM), based on pneumatic artificial muscles (PAMs), to an EMG signal acquired through a designed circuit and a laboratory EMG acquisition kit. The response is represented as a joint rotary displacement generated by the contraction and extension of the pneumatic artificial muscles. A statistical study was done to prove the efficiency of the designed circuit in driving the response of the APMM. The statistical results showed no significant difference in the voltage data between the EMG signals acquired by the reference kit and by the designed circuit. An excellent correlation between the EMG control signal and the response of the APMM, expressed as angular displacement, is discussed and statistically analyzed. 
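A sketch of a typical digital stage for turning raw EMG into an actuator command of the kind described above: rectify, low-pass to get the envelope, then normalize. The sampling rate, filter order, and cutoff are assumptions, not values from the paper.

```python
# EMG envelope extraction driving a normalized valve command.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                    # assumed EMG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
emg = np.random.randn(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t))  # synthetic EMG

b, a = butter(4, 5.0 / (fs / 2), btype="low")  # 5 Hz envelope filter
envelope = filtfilt(b, a, np.abs(emg))         # full-wave rectify + low-pass
valve_cmd = np.clip(envelope / envelope.max(), 0.0, 1.0)  # normalized command
print(valve_cmd[:5])
```

Driving the two PAMs of an antagonistic pair with complementary commands (one contracts as the other relaxes) is what produces the joint rotary displacement measured in the experiments.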
Index Terms\u2014Pneumatic Artificial Muscles, Biomechatronics, Electromyogram EMG, Pneumatic Proportional Directional Control Valve, Signal Processing, Bionic."} {"_id": "798957b4bbe99fcf9283027d30e19eb03ce6b4d5", "title": "Dependency Parsing and Domain Adaptation with LR Models and Parser Ensembles", "text": "We present a data-driven variant of the LR algorithm for dependency parsing, and extend it with a best-first search for probabilistic generalized LR dependency parsing. Parser actions are determined by a classifier, based on features that represent the current state of the parser. We apply this parsing framework to both tracks of the CoNLL 2007 shared task, in each case taking advantage of multiple models trained with different learners. In the multilingual track, we train three LR models for each of the ten languages, and combine the analyses obtained with each individual model with a maximum spanning tree voting scheme. In the domain adaptation track, we use two models to parse unlabeled data in the target domain to supplement the labeled out-of-domain training set, in a scheme similar to one iteration of co-training."} {"_id": "6c4c0ac83373d18267779dc461ddc769ad8d0133", "title": "Analysis of Blockage Effects on Urban Cellular Networks", "text": "Large-scale blockages such as buildings affect the performance of urban cellular networks, especially at higher frequencies. Unfortunately, such blockage effects are either neglected or characterized by oversimplified models in the analysis of cellular networks. Leveraging concepts from random shape theory, this paper proposes a mathematical framework to model random blockages and analyze their impact on cellular network performance. Random buildings are modeled as a process of rectangles with random sizes and orientations whose centers form a Poisson point process on the plane. The distribution of the number of blockages in a link is proven to be a Poisson random variable with parameter dependent on the length of the link. Our analysis shows that the probability that a link is not intersected by any blockages decays exponentially with the link length. A path loss model that incorporates the blockage effects is also proposed, which matches experimental trends observed in prior work. The model is applied to analyze the performance of cellular networks in urban areas with the presence of buildings, in terms of connectivity, coverage probability, and average rate. Our results show that the base station density should scale superlinearly with the blockage density to maintain the network connectivity. Our analyses also show that while buildings may block the desired signal, they may still have a positive impact on the SIR coverage probability and achievable rate since they can block significantly more interference."} {"_id": "e4f5ea444c790ed90ecaba667c7ed6db21549b61", "title": "Depth enhanced visual-inertial odometry based on Multi-State Constraint Kalman Filter", "text": "There have been increasing demands for developing robotic systems combining cameras and inertial measurement units in navigation tasks, due to their low-cost, lightweight and complementary properties. In this paper, we present a Visual Inertial Odometry (VIO) system which can utilize sparse depth to estimate 6D pose in GPS-denied and unstructured environments. The system is based on the Multi-State Constraint Kalman Filter (MSCKF), which benefits from a low computation load when compared to optimization-based methods, especially on resource-constrained platforms. 
Features are enhanced with depth information, forming 3D landmark position measurements in space, which reduces the uncertainty of the position estimate. We derive a measurement model to assess compatibility with both 2D and 3D measurements. In experiments, we evaluate the performance of the system in different in-flight scenarios, including a cluttered room and an industrial environment. The results suggest that the estimator is consistent, substantially improves accuracy compared with the original monocular MSCKF, and achieves accuracy competitive with other approaches."} {"_id": "b4951eb36bb0a408b02fadc12c0a1d8e680b589f", "title": "Web scraping technologies in an API world", "text": "Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover for all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is set on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well-known in clinical microbiology and similar domains, do not offer programmatic interfaces yet. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to cope with gene set enrichment analysis."} {"_id": "0823b293d13a5efaf9c3f37109a4a6018d05d074", "title": "Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks", "text": "To describe the log-likelihood computation in our model, let us consider a two-scale pyramid for the moment. Given a (vectorized) j \u00d7 j image I, denote by l = d(I) the coarsened image, and h = I \u2212 u(d(I)) to be the high pass. In this section, to simplify the computations, we use a slightly different u operator than the one used to generate the images displayed in Figure 3 of the paper. Namely, here we take d(I) to be the mean over each disjoint block of 2 \u00d7 2 pixels, and take u to be the operator that removes the mean from each 2 \u00d7 2 block. Since u has rank 3d/4, in this section, we write h in an orthonormal basis of the range of u, so that the (linear) mapping from I to (l, h) is unitary. We now build a probability density p on R^{d2} by p(I) = q0(l, h)q1(l) = q0(d(I), h(I))q1(d(I)); in a moment we will carefully define the functions qi. For now, suppose that qi \u2265 0, \u222b q1(l) dl = 1, and for each fixed l, \u222b q0(l, h) dh = 1. Then we can check that p has unit integral: \u222b p(I) dI = \u222b\u222b q0(l, h) q1(l) dh dl = 1, since the mapping from I to (l, h) is unitary."} {"_id": "11da2d589485685f792a8ac79d4c2e589e5f77bd", "title": "Show and tell: A neural image caption generator", "text": "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. 
In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art."} {"_id": "2b329183e93cb8c1c20c911c765d9a94f34b5ed5", "title": "Generative Adversarial Networks", "text": "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."} {"_id": "3da5e164744eb111739aa6cf08bbdadaf5d53703", "title": "Rules and Ontologies for the Semantic Web", "text": "Rules and ontologies play a key role in the layered architecture of the Semantic Web, as they are used to ascribe meaning to, and to reason about, data on the Web. While the Ontology Layer of the Semantic Web is quite developed, and the Web Ontology Language (OWL) has been a W3C recommendation for several years, the rules layer is far less developed and an active area of research; a number of initiatives and proposals have been made so far, but no standard has been released yet. Many rule engine implementations exist which deal with Semantic Web data in one way or another. This article gives a comprehensive, although not exhaustive, overview of such systems, describes their supported languages, and sets them in relation to theoretical approaches for combining rules and ontologies as foreseen in the Semantic Web architecture. In the course of this, we identify desired properties and common features of rule languages and evaluate existing systems against their support. 
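The minimax game in the GAN abstract above maps directly onto a short alternating training loop. Here is a hedged PyTorch sketch: the toy 2-D data, the small MLPs, and the batch sizes are our own illustrative choices, and we use the common non-saturating generator loss (maximize log D(G(z))) rather than the literal minimax objective.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in "data distribution": a shifted Gaussian blob.
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(1000):
    # Discriminator step: push D(real) -> 1 and D(G(z)) -> 0.
    x, z = real_batch(), torch.randn(64, 16)
    loss_d = bce(D(x), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool D into predicting "real" for generated samples.
    z = torch.randn(64, 16)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

At the game's equilibrium described in the abstract, D outputs 1/2 everywhere and G's samples are indistinguishable from the data.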
Furthermore, we review technical problems underlying the integration of rules and ontologies, and classify representative proposals for theoretical integration approaches into different categories."} {"_id": "aed69092966e15473d8ebacb217335b49391d172", "title": "Process Mining in IT Service Management: A Case Study", "text": "The explosion of process-related data in today's organizations has raised interest in exploiting the data in order to know in depth how business processes are being carried out. To face this need, process mining has emerged as a way to analyze the behavior of an organization by extracting knowledge from process-related data. In this paper, we present a case study of process mining in a real IT service management scenario. We describe an exploratory analysis of real-life event logs generated between 2012 and 2015, in four different processes designed within an IT platform. More specifically, we analyze the way the different requests and incidents registered in an organization are handled."} {"_id": "d7c3c875f2f0c8ff1e8361802eca52c7b1d481c5", "title": "Learning to Detect Anomalies in Surveillance Video", "text": "Detecting anomalies in surveillance videos, that is, finding events or objects with low probability of occurrence, is a practical and challenging research topic in the computer vision community. In this paper, we put forward a novel unsupervised learning framework for anomaly detection. At the feature level, we propose a Sparse Semi-nonnegative Matrix Factorization (SSMF) to learn local patterns at each pixel, and a Histogram of Nonnegative Coefficients (HNC) can be constructed as a local feature, which is more expressive than previously used features like Histogram of Oriented Gradients (HOG). At the model level, we learn a probability model which takes the spatial and temporal contextual information into consideration. Our framework is totally unsupervised, requiring no human-labeled training data. With more expressive features and a more complicated model, our framework can accurately detect and localize anomalies in surveillance video. We carried out extensive experiments on several benchmark video datasets for anomaly detection, and the results demonstrate the superiority of our framework to state-of-the-art approaches, validating the effectiveness of our framework."} {"_id": "7e403d160f3db4a5d631ac450abcba190268c0e6", "title": "SAIL: single access point-based indoor localization", "text": "This paper presents SAIL, a Single Access Point Based Indoor Localization system. Although there have been advances in WiFi-based positioning techniques, we find that existing solutions either require a dense deployment of access points (APs), manual fingerprinting, energy-hungry WiFi scanning, or sophisticated AP hardware. We design SAIL using a single commodity WiFi AP to avoid these restrictions. SAIL computes the distance between the client and an AP using the propagation delay of the signal traversing between the two, combines the distance with smartphone dead-reckoning techniques, and employs geometric methods to ultimately yield the client's location using a single AP. SAIL combines physical layer (PHY) information and human motion to compute the propagation delay of the direct path by itself, eliminating the adverse effect of multipath and yielding sub-meter distance estimation accuracy. Furthermore, SAIL systematically addresses some of the common challenges towards dead-reckoning using smartphone sensors and achieves 2-5x accuracy improvements over existing techniques. 
We have implemented SAIL on commodity wireless APs and smartphones. Evaluation in a large-scale enterprise environment with 10 mobile users demonstrates that SAIL can capture the user's location with a mean error of 2.3m using just a single AP."} {"_id": "83d323a5bb26b706d4f6d24eb27411a7e7ff57e6", "title": "Protective action of green tea catechins in neuronal mitochondria during aging.", "text": "Mitochondria are central players in the regulation of cell homeostasis. They are essential for energy production but, at the same time, reactive oxygen species accumulate as byproducts of the electron transport chain, causing mitochondrial damage. In the central nervous system, senescence and neurodegeneration occur as a consequence of mitochondrial oxidative insults and impaired electron transfer. The accumulation of several oxidation products in neurons during aging prompts the idea that consumption of antioxidant compounds may delay neurodegenerative processes. Tea, one of the most consumed beverages in the world, presents benefits to human health that have been associated with its abundance in polyphenols, mainly catechins, that possess powerful antioxidant properties in vivo and in vitro. In this review, the focus will be placed on the effects of green tea catechins in neuronal mitochondria. Although these compounds reach the brain in small quantities, there are several possible targets, signaling pathways and molecular machinery impinging on the mitochondria that will be highlighted. Accumulated evidence thus far seems to indicate that catechins help prevent neurodegeneration and delay brain function decline."} {"_id": "3fefeab49d4d387eefabe170413748a028a66347", "title": "Inferring Unusual Crowd Events From Mobile Phone Call Detail Records", "text": "The pervasiveness and availability of mobile phone data offer the opportunity of discovering usable knowledge about crowd behaviors in urban environments. Cities can leverage such knowledge in order to provide better services (e.g., public transport planning, optimized resource allocation) and make cities safer. Call Detail Record (CDR) data represents a practical data source to detect and monitor unusual events considering the high level of mobile phone penetration, compared with GPS-equipped and open devices. In this paper, we provide a methodology that is able to detect unusual events from CDR data that typically has low accuracy in terms of space and time resolution. Moreover, we introduce a concept of unusual event that involves a large number of people who exhibit unusual mobility behavior. Our careful consideration of the issues that come from coarse-grained CDR data ultimately leads to a completely general framework that can detect unusual crowd events from CDR data effectively and efficiently. Through extensive experiments on real-world CDR data for a large city in Africa, we demonstrate that our method can detect unusual events with 16% higher recall and over 10 times higher precision, compared to state-of-the-art methods. We implement a visual analytics prototype system to help end users analyze detected unusual crowd events to best suit different application scenarios. 
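To make the routine-versus-unusual distinction in the CDR abstract above concrete, here is a minimal numpy sketch that flags cell/hour pairs whose call volume deviates strongly from that cell's typical level for the same hour of day. The array layout and the simple z-score rule are our own illustrative assumptions, not the paper's method.

```python
import numpy as np

def unusual_events(counts, z_thresh=3.0):
    """counts: (n_cells, n_hours) array of call volumes per cell tower.
    Returns a boolean mask of (cell, hour) pairs flagged as unusual."""
    n_cells, n_hours = counts.shape
    hod = np.arange(n_hours) % 24                 # hour of day for each column
    flags = np.zeros_like(counts, dtype=bool)
    for h in range(24):
        cols = counts[:, hod == h]                # same hour across days
        mu = cols.mean(axis=1, keepdims=True)     # per-cell routine level
        sd = cols.std(axis=1, keepdims=True) + 1e-9
        flags[:, hod == h] = np.abs(cols - mu) / sd > z_thresh
    return flags
```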
To the best of our knowledge, this is the first work on the detection of unusual events from CDR data with consideration of its temporal and spatial sparseness and the distinction between users' unusual activities and daily routines."} {"_id": "bf09dc10f2d36313a8025e09832c6a7812c2f4f8", "title": "Mining Cluster-Based Temporal Mobile Sequential Patterns in Location-Based Service Environments", "text": "Research on Location-Based Services (LBS) has been emerging in recent years due to a wide range of potential applications. One of the active topics is the mining and prediction of mobile movements and associated transactions. Most existing studies focus on discovering mobile patterns from the whole logs. However, this kind of pattern may not be precise enough for predictions since the differentiated mobile behaviors among users and temporal periods are not considered. In this paper, we propose a novel algorithm, namely, Cluster-based Temporal Mobile Sequential Pattern Mine (CTMSP-Mine), to discover the Cluster-based Temporal Mobile Sequential Patterns (CTMSPs). Moreover, a prediction strategy is proposed to predict the subsequent mobile behaviors. In CTMSP-Mine, user clusters are constructed by a novel algorithm named Cluster-Object-based Smart Cluster Affinity Search Technique (CO-Smart-CAST) and similarities between users are evaluated by the proposed measure, Location-Based Service Alignment (LBS-Alignment). Meanwhile, a time segmentation approach is presented to find segmenting time intervals where similar mobile characteristics exist. To the best of our knowledge, this is the first work on the mining and prediction of mobile behaviors with consideration of user relations and temporal properties simultaneously. Through experimental evaluation under various simulated conditions, the proposed methods are shown to deliver excellent performance."} {"_id": "635ce4a260d78b0539eecd45dc582db23835205d", "title": "The dependency paradox in close relationships: accepting dependence promotes independence.", "text": "Using multiple methods, this investigation tested the hypothesis that a close relationship partner's acceptance of dependence when needed (e.g., sensitive responsiveness to distress cues) is associated with less dependence, more autonomous functioning, and more self-sufficiency (as opposed to more dependence) on the part of the supported individual. In two studies, measures of acceptance of dependency needs and independent functioning were obtained through couple member reports, by observing couple members' behaviors during laboratory interactions, by observing responses to experimentally manipulated partner assistance provided during an individual laboratory task, and by following couples over a period of 6 months to examine independent goal striving as a function of prior assessments of dependency acceptance. Results provided converging evidence in support of the proposed hypothesis. Implications of the importance of close relationships for optimal individual functioning are discussed."} {"_id": "5f5de869662a075da3b9998e7ff9206b3e502860", "title": "Semantic Segmentation of Mixed Crops using Deep Convolutional Neural Network", "text": "Estimation of in-field biomass and crop composition is important for both farmers and researchers. Using close-up, high-resolution images of the crops, crop species can be distinguished using image processing. 
In the current study, deep convolutional neural networks for semantic segmentation (or pixel-wise classification) of cluttered classes in RGB images were explored in the case of catch crops and volunteer barley cereal. The dataset consisted of RGB images from a plot trial using oil radish as catch crops in barley. The images were captured using a high-end consumer camera mounted on a tractor. The images were manually annotated in 7 classes: oil radish, barley, weed, stump, soil, equipment and unknown. Data augmentation was used to artificially increase the dataset by transposing and flipping the images. A modified version of the VGG-16 deep neural network was used. First, the last fully-connected layers were converted to convolutional layers and the depth was modified to cope with our number of classes. Second, a deconvolutional layer with a stride of 32 was added between the last fully-connected layer and the softmax classification layer to ensure that the output layer has the same size as the input. Preliminary results using this network show a pixel accuracy of 79% and a frequency-weighted intersection over union of 66%. These preliminary results indicate great potential in deep convolutional networks for segmentation of plant species in cluttered RGB images."} {"_id": "149f3dfe7542baffa670ea0351e14499e5d189c0", "title": "Recent Research in Cooperative Control of Multivehicle", "text": "This paper presents a survey of recent research in cooperative control of multivehicle systems, using a common mathematical framework to allow different methods to be described in a unified way. The survey has three primary parts: an overview of current applications of cooperative control, a summary of some of the key technical approaches that have been explored, and a description of some possible future directions for research. Specific technical areas that are discussed include formation control, cooperative tasking, spatiotemporal planning, and consensus. DOI: 10.1115/1.2766721"} {"_id": "649f417531ac7b1408b80fb35125319f86d00f79", "title": "\"Green\" electronics: biodegradable and biocompatible materials and devices for sustainable future.", "text": "\"Green\" electronics represents not only a novel scientific term but also an emerging area of research aimed at identifying compounds of natural origin and establishing economically efficient routes for the production of synthetic materials that have applicability in environmentally safe (biodegradable) and/or biocompatible devices. The ultimate goal of this research is to create paths for the production of human- and environmentally friendly electronics in general and the integration of such electronic circuits with living tissue in particular. Researching into the emerging class of \"green\" electronics may help fulfill not only the original promise of organic electronics, that is, to deliver low-cost and energy-efficient materials and devices, but also achieve unimaginable functionalities for electronics, for example benign integration into life and the environment. This Review will highlight recent research advancements in this emerging group of materials and their integration in unconventional organic electronic devices."} {"_id": "2451ea08eb78e24371de8128c14a6680b2a628fa", "title": "Stability of Causal Inference", "text": "We consider the sensitivity of causal identification to small perturbations in the input. 
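The crop-segmentation network described above (VGG-16 with its fully connected layers converted to convolutions, a 7-class output depth, and a stride-32 deconvolution back to input size) follows the familiar FCN-32s pattern. Below is a hedged PyTorch sketch of that pattern; the class count of 7 matches the abstract, but the exact kernel sizes, channel widths, and padding are our own assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class FCN32(nn.Module):
    """VGG-16 backbone (downsamples by 32); the former FC layers become
    convolutions whose final depth equals the number of classes, and a
    stride-32 transposed convolution restores the input resolution."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = vgg16(weights=None).features
        self.head = nn.Sequential(
            nn.Conv2d(512, 4096, kernel_size=7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(4096, 4096, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(4096, n_classes, kernel_size=1),
        )
        self.up = nn.ConvTranspose2d(n_classes, n_classes,
                                     kernel_size=64, stride=32, padding=16)

    def forward(self, x):
        return self.up(self.head(self.features(x)))  # per-pixel class logits

logits = FCN32()(torch.randn(1, 3, 224, 224))         # -> (1, 7, 224, 224)
```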
A long line of work culminating in papers by Shpitser and Pearl (2006) and Huang and Valtorta (2008) led to a complete procedure for the causal identification problem. In our main result in this paper, we show that the identification function computed by these procedures is in some cases extremely unstable numerically. Specifically, the \u201ccondition number\u201d of causal identification can be of the order of \u03a9(exp(n^{0.49})) on an identifiable semi-Markovian model with n visible nodes. That is, in order to give an output accurate to d bits, the empirical probabilities of the observable events need to be obtained to accuracy d + \u03a9(n^{0.49}) bits."} {"_id": "40bda71f8adc847c40b89e06febd6b037c765077", "title": "A Fuzzy Local Search Classifier for Intrusion Detection", "text": "In this paper, we propose a fuzzy local search (FLS) method for intrusion detection. The FLS system is a fuzzy classifier, whose knowledge base is modeled as fuzzy \"if-then\" rules and improved by a local search metaheuristic. The proposed method is implemented and tested on the benchmark KDD'99 intrusion dataset. The results are encouraging and demonstrate the benefits of our approach."} {"_id": "61c4c83ed02ed2477190611f69cb86f89e8ff0ab", "title": "A Study on Watermarking Schemes for Image Authentication", "text": "The digital revolution in digital image processing has made it possible to create, manipulate and transmit digital images in a simple and fast manner. The adverse effect of this is that the same image processing techniques can be used by hackers to tamper with any image and use it illegally. This has made digital image safety and integrity a top-priority issue in today\u2019s information explosion. Watermarking is a popular technique that is used for copyright protection and authentication. This paper presents an overview of the various concepts and research works in the field of image watermark authentication. In particular, the concept of content-based image watermarking is reviewed in detail."} {"_id": "c8e424defb590f6b3eee659eb097ac978bf49348", "title": "The role of academic emotions in the relationship between perceived academic control and self-regulated learning in online learning", "text": "Self-regulated learning is recognized as a critical factor for successful online learning, and students\u2019 perceived academic control and academic emotions are important antecedents of self-regulated learning. Because emotions and cognition are interrelated, investigating the joint relationship between perceived academic control and academic emotions on self-regulated learning would be valuable for understanding the process of self-regulated learning. Therefore, this study examined the role of academic emotions (enjoyment, anxiety, and boredom) in the relationship between perceived academic control and self-regulated learning in online learning. A path model was proposed to test the mediating and moderating effects of academic emotions. Data were collected from 426 Korean college students registered in online courses, and a path analysis was conducted. The results demonstrated that enjoyment mediated the relationship between perceived academic control and self-regulated learning, but the moderating effect of enjoyment was not significant. Boredom and anxiety did not have significant mediating effects on self-regulated learning, whereas they showed significant moderating effects in the relationship between perceived academic control and self-regulated learning. 
The role of academic emotions in learning and their implications for facilitating students\u2019 self-regulated learning in online learning were discussed based on the findings."} {"_id": "82b2c431035e5c0faa20895fe9f002327c0994bd", "title": "IoT based smart emergency response system for fire hazards", "text": "The Internet of Things pertains to connecting currently unconnected things and people. It ushers in a new era of transforming existing systems to improve the cost-effective quality of services for society. To support the smart city vision, urban IoT designs exploit value-added services for citizens as well as the administration of the city, using the most advanced communication technologies. To make emergency response real-time, IoT enhances the way first responders operate and provides emergency managers with the necessary up-to-date information and communication to make use of those assets. IoT mitigates many of the challenges of emergency response, including present problems like weak communication networks and information lag. In this paper, an emergency response system for fire hazards is designed using a standardized IoT structure. To implement the proposed scheme, a low-cost Espressif Wi-Fi module ESP-32, a flame detection sensor, a smoke detection sensor (MQ-5), a flammable gas detection sensor and one GPS module are used. The sensors detect the hazard and alert the local emergency rescue organizations, such as fire departments and police, by sending the hazard location to the cloud service through which all are connected. The overall network utilizes MQTT, a lightweight, data-oriented publish-subscribe messaging protocol, for fast and reliable communication. Thus, an intelligent integrated system is designed with the help of IoT."} {"_id": "2f32f535c8a0acf1d55247c49564d4d3f8a2c5c5", "title": "Machinery equipment early fault detection using Artificial Neural Network based Autoencoder", "text": "Early fault detection for machinery equipment is still an open challenge. The objective of this paper is to introduce a parametric method, an Artificial Neural Network based Autoencoder, implemented to perform early fault detection for machinery equipment. The performance of this method is then compared to one of the industry's state-of-the-art nonparametric methods, called Similarity Based Modeling. The comparison is done by analyzing the implementation results on both artificial and real-case datasets. Root Mean Square Error (RMSE) is applied to measure the performance. Based on the results of the research, both of these methods are effective at pattern recognition and able to identify data anomalies, in this case faults."} {"_id": "93788e92e5a411f3e5ed16b09400dcdbcec3b4f0", "title": "Anomaly Detection in Video Surveillance via Gaussian Process", "text": "In this paper, we propose a new approach for anomaly detection in video surveillance. This approach is based on a nonparametric Bayesian regression model built upon Gaussian process priors. It establishes a set of basic vectors describing motion patterns from low-level features via online clustering, and then constructs a Gaussian process regression model to approximate the distribution of motion patterns in kernel space. We analyze different anomaly measure criteria derived from the Gaussian process regression model and compare their performance. 
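For the Gaussian-process model just described, one plausible anomaly measure is the negative log predictive density of a new observation under the GP posterior. The scikit-learn sketch below is hedged: the toy 1-D "motion pattern" data and the kernel choice are our own illustrative assumptions, and the paper compares several such criteria rather than prescribing this one.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-in for motion-pattern features: position -> typical speed.
x_train = np.linspace(0, 10, 100)[:, None]
y_train = np.sin(x_train).ravel() + 0.05 * np.random.randn(100)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(x_train, y_train)

def anomaly_score(x_new, y_new):
    """Negative log predictive density under the fitted GP posterior."""
    mu, sd = gpr.predict(np.atleast_2d(x_new), return_std=True)
    return 0.5 * ((y_new - mu) / sd) ** 2 + np.log(sd) + 0.5 * np.log(2 * np.pi)

print(anomaly_score(5.0, -3.0))   # an implausible observation scores high
```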
To reduce"} {"_id": "2be9abe785b6159df65063dd80a6a72e29fa6d23", "title": "Forensic Analysis and Anonymisation of Printed Documents", "text": "Contrary to popular belief, the paperless office has not yet established itself. Printer forensics is therefore still an important field today to protect the reliability of printed documents or to track criminals. An important task in this field is identifying the source device of a printed document. There are many forensic approaches that try to determine the source device automatically and with commercially available recording devices. However, it is difficult to find intrinsic signatures that are robust against a variety of influences of the printing process and at the same time can identify the specific source device. In most cases, the identification rate only reaches the printer model level. For this reason we reviewed document colour tracking dots, an extrinsic signature embedded in nearly all modern colour laser printers. We developed a refined and generic extraction algorithm, found a new tracking dot pattern and decoded pattern information. Throughout, we propose reusing document colour tracking dots in combination with passive printer forensic methods. From a privacy perspective, we additionally investigated anonymisation approaches to defeat arbitrary tracking. Finally, we present our toolkit deda, which implements the entire workflow of extracting, analysing and anonymising a tracking dot pattern."} {"_id": "c18172a7d11d1fd3467040bea0bc9a38c1b26a6d", "title": "AtDELFI: automatically designing legible, full instructions for games", "text": "This paper introduces a fully automatic method for generating video game tutorials. The AtDELFI system (Automatically DEsigning Legible, Full Instructions for games) was created to investigate procedural generation of instructions that teach players how to play video games. We present a representation of game rules and mechanics using a graph system as well as a tutorial generation method that uses said graph representation. We demonstrate the concept by testing it on games within the General Video Game Artificial Intelligence (GVG-AI) framework; the paper discusses tutorials generated for eight different games. Our findings suggest that a graph representation scheme works well for simple arcade-style games such as Space Invaders and Pacman, but it appears that tutorials for more complex games might require higher-level understanding of the game than just single mechanics."} {"_id": "25b9dd71c8247820ccc51f238abd215880b973e5", "title": "Text Categorization Using Weight Adjusted k-Nearest Neighbor Classification", "text": "Categorization of documents is challenging, as the number of discriminating words can be very large. We present a nearest neighbor classification scheme for text categorization in which the importance of discriminating words is learned using mutual information and weight adjustment techniques. The nearest neighbors for a particular document are then computed based on the matching words and their weights. We evaluate our scheme on both synthetic and real-world documents. Our experiments with synthetic data sets show that this scheme is robust under different emulated conditions. 
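A minimal sketch in the spirit of the weight-adjusted k-nearest-neighbor scheme above: per-word weights are derived from mutual information between word presence and class, and neighbors are found by weighted dot-product similarity. The binary document-term representation and this exact weighting are simplifications of ours, not the paper's precise weight-adjustment procedure.

```python
import numpy as np

def mutual_information_weights(X, y):
    """X: binary doc-term matrix (n_docs, n_words); y: integer class labels.
    Returns a weight per word from the MI between word presence and class."""
    weights = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        mi = 0.0
        for w in (0, 1):
            for c in np.unique(y):
                p_wc = np.mean((X[:, j] == w) & (y == c))
                p_w, p_c = np.mean(X[:, j] == w), np.mean(y == c)
                if p_wc > 0:
                    mi += p_wc * np.log(p_wc / (p_w * p_c))
        weights[j] = mi
    return weights

def knn_predict(X_train, y_train, x, w, k=5):
    sims = (X_train * w) @ (x * w)       # weighted similarity to each doc
    top = np.argsort(-sims)[:k]          # k most similar training docs
    return np.bincount(y_train[top]).argmax()
```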
Empirical results on real-world documents demonstrate that this scheme outperforms state-of-the-art classification algorithms such as C4.5, RIPPER, Rainbow, and PEBLS."} {"_id": "95d2a3c89bd97436aac9c72affcd0edc5c7d2e58", "title": "Multiple HOG templates for gait recognition", "text": "In the gait recognition field, template-based approaches such as Gait Energy Image (GEI) and Chrono-Gait Image (CGI) can achieve good recognition performance with low computational cost. Meanwhile, CGI can preserve temporal information better than GEI. However, they pay less attention to the local shape features. To preserve temporal information and generate more abundant local shape features, we generate multiple HOG templates by extracting Histogram of Oriented Gradients (HOG) of GEI and CGI templates. Experiments show that compared with several published approaches, our proposed multiple HOG templates achieve better performance for gait recognition."} {"_id": "235723a15c86c369c99a42e7b666dfe156ad2cba", "title": "Improving predictive inference under covariate shift by weighting the log-likelihood function", "text": "A class of predictive densities is derived by weighting the observed samples in maximizing the log-likelihood function. This approach is effective in cases such as sample surveys or design of experiments, where the observed covariate follows a different distribution than that in the whole population. Under misspecification of the parametric model, the optimal choice of the weight function is asymptotically shown to be the ratio of the density function of the covariate in the population to that in the observations. This is the pseudo-maximum likelihood estimation of sample surveys. The optimality is defined by the expected Kullback\u2013Leibler loss, and the optimal weight is obtained by considering the importance sampling identity. Under correct specification of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be optimal asymptotically. For moderate sample size, the situation is in between the two extreme cases, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples as well as Monte-Carlo simulations are shown for polynomial regression. A connection with the robust parametric estimation is discussed."} {"_id": "2b3818c141da414cf9e783c8b2d4928019cb70fd", "title": "Discriminative learning for differing training and test distributions", "text": "We address classification problems for which the training instances are governed by a distribution that is allowed to differ arbitrarily from the test distribution---problems also referred to as classification under covariate shift. We derive a solution that is purely discriminative: neither training nor test distribution are modeled explicitly. We formulate the general problem of learning under covariate shift as an integrated optimization problem. We derive a kernel logistic regression classifier for differing training and test distributions."} {"_id": "d5a55a548fc7cd703c1dd8d867ca1eb6b0c0764c", "title": "Can Movies and Books Collaborate? Cross-Domain Collaborative Filtering for Sparsity Reduction", "text": "The sparsity problem in collaborative filtering (CF) is a major bottleneck for most CF methods. 
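The covariate-shift prescription in the weighting abstract above, namely weighting each observation by the population-to-sample density ratio of the covariate, takes only a few lines to exercise. In this sketch the two densities are assumed known for illustration, whereas in practice they would be estimated; the logistic model is our own stand-in for the parametric model being fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical covariate densities: population (test) vs. observed sample.
def p_test(x):  return np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
def p_train(x): return np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)                    # observed covariates
y = (x + 0.5 * x ** 2 + rng.normal(0, 1, 500) > 0).astype(int)

# Weighted maximum likelihood with the asymptotically optimal weights.
w = p_test(x) / p_train(x)
clf = LogisticRegression().fit(x[:, None], y, sample_weight=w)
```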
In this paper, we consider a novel approach for alleviating the sparsity problem in CF by transferring user-item rating patterns from a dense auxiliary rating matrix in other domains (e.g., a popular movie rating website) to a sparse rating matrix in a target domain (e.g., a new book rating website). We do not require that the users and items in the two domains be identical or even overlap. Based on the limited ratings in the target matrix, we establish a bridge between the two rating matrices at a cluster-level of user-item rating patterns in order to transfer more useful knowledge from the auxiliary task domain. We first compress the ratings in the auxiliary rating matrix into an informative and yet compact cluster-level rating pattern representation referred to as a codebook. Then, we propose an efficient algorithm for reconstructing the target rating matrix by expanding the codebook. We perform extensive empirical tests to show that our method is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary tasks, as compared to many state-of-the-art CF methods."} {"_id": "5efbd02d154fb76506caa9cf3e9333c89f24ccc9", "title": "A new approach to identify power transformer criticality and asset management decision based on dissolved gas-in-oil analysis", "text": "Dissolved gas analysis (DGA) of transformer oil is one of the most effective power transformer condition monitoring tools. There are many interpretation techniques for DGA results. However, all of these techniques rely on personnel experience more than on standard mathematical formulation, and a significant number of DGA results fall outside the proposed codes of the current methods and cannot be diagnosed by these methods. To overcome these limitations, this paper introduces a novel approach using Gene Expression Programming (GEP) to help standardize DGA interpretation techniques, identify transformer criticality ranking based on DGA results and propose a proper maintenance action. DGA has been performed on 338 oil samples that have been collected from different transformers of different ratings and different life spans. Traditional DGA interpretation techniques are used in analyzing the results to measure their consistency. These data are then used to develop the new GEP model. Results show that all current traditional techniques do not necessarily lead to the same conclusion for the same oil sample. The new approach using GEP is easy to implement and it does not call for any expert personnel to interpret the DGA results and to provide a proper asset management decision on the transformer based on DGA analysis."} {"_id": "e1e146ddf8214fcab965dabeb9908d097912a77f", "title": "Geolocation and Assisted GPS", "text": "Currently in development, numerous geolocation technologies can pinpoint a person's or object's position on the Earth. Knowledge of the spatial distribution of wireless callers will facilitate the planning, design, and operation of next-generation broadband wireless networks. Mobile users will gain the ability to get local traffic information and detailed directions to gas stations, restaurants, hotels, and other services. Police and rescue teams will be able to quickly and precisely locate people who are lost or injured but cannot give their precise location. Companies will use geolocation-based applications to track personnel, vehicles, and other assets. 
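Returning to the codebook-transfer idea described above: the codebook is a cluster-level summary of the dense auxiliary rating matrix. The rough sketch below uses plain k-means on rows and columns as a stand-in for the paper's co-clustering, then takes block means over the resulting user/item clusters; this is an approximation of ours, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(R, k_users=8, k_items=8):
    """Compress a dense auxiliary rating matrix R (users x items) into a
    k_users x k_items cluster-level rating-pattern codebook."""
    u_lab = KMeans(n_clusters=k_users, n_init=10).fit_predict(R)
    i_lab = KMeans(n_clusters=k_items, n_init=10).fit_predict(R.T)
    B = np.zeros((k_users, k_items))
    for a in range(k_users):
        for b in range(k_items):
            block = R[np.ix_(u_lab == a, i_lab == b)]
            B[a, b] = block.mean() if block.size else R.mean()
    return B  # the sparse target matrix is then reconstructed by expanding B
```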
The driving force behind the development of this technology is a US Federal Communications Commission (FCC) mandate stating that by 1 October 2001 all wireless carriers must provide the geolocation of an emergency 911 caller to the appropriate public safety answering point (see http://www.fcc.gov/e911/). Location technologies requiring new, modified, or upgraded mobile stations must determine the caller's longitude and latitude within 50 meters for 67 percent of emergency calls, and within 150 meters for 95 percent of the calls. Otherwise, they must do so within 100 meters and 300 meters, respectively, for the same percentage of calls. Currently deployed wireless technology can locate 911 calls within an area no smaller than 10 to 15 square kilometers. An obvious way to satisfy the FCC requirement is to incorporate Global Positioning System (GPS) receivers into mobile phones. GPS consists of a constellation of 24 satellites, equally spaced in six orbital planes 20,200 kilometers above the Earth, that transmit two specially coded carrier signals: L1 frequency for civilian use, and L2 for military and government use. GPS receivers process the signals to compute position in 3D\u2014latitude, longitude, and altitude\u2014within a radius of 10 meters or better. Accuracy has increased substantially since the US government turned off Selective Availability, the intentional degradation of GPS signals, in May 2000. Because no return channel links GPS receivers to satellites, any number of users can get their positions simultaneously. GPS signals also resist interference and jamming. To operate properly, however, conventional GPS receivers need a clear view of the skies and signals from at least four satellites, requirements that exclude operation in buildings or other RF-shadowed environments. Further, it takes a GPS receiver starting \"cold\"\u2014without any knowledge about the GPS constellation's state\u2014as long as several minutes to achieve the mobile station location fix, a considerable delay for \u2026"} {"_id": "e91ac1c4539b7c9162d9907bc6f655a9bd43e705", "title": "DASCo: dynamic adaptive streaming over CoAP", "text": "This paper presents Dynamic Adaptive Streaming over CoAP (DASCo), a solution for adaptive media streaming in the Internet of Things (IoT) environment. DASCo combines DASH (Dynamic Adaptive Streaming over HTTP), the widespread open standard for HTTP-compliant streaming, with the Constrained Application Protocol (CoAP), the vendor-independent web transfer protocol designed for resource-constrained devices. The proposed solution uses DASH formats and mechanisms to make media segments available for consumers, and exploits CoAP to deliver media segments to consumers\u2019 applications. Consequently, the DASCo player offers native interoperability with IoT devices that are accessed via CoAP; thus, it allows easy access to data collected by different sensors in order to enrich multimedia services. In the paper, we present an overview of the constraints of the default CoAP implementation with respect to media streaming, and propose guidelines for the development of an adaptive streaming service over CoAP. Moreover, we discuss the features of CoAP that can be investigated when designing an efficient adaptive algorithm for DASCo. The presented experimental results show poor DASCo performance when default values of the CoAP transmission parameters are used. However, adjusting the parameters according to the network conditions considerably improves DASCo efficiency. 
Furthermore, in bad network conditions the enhanced DASCo is characterized by a more stable download rate compared to DASH. This feature is important in the context of dynamic media adaptation, since it allows an adaptation algorithm to better match the media bit rate to download conditions."} {"_id": "fe46824b50fba1f9b7b16c8e71c00a72933353c8", "title": "Dynamic time warping for gesture-based user identification and authentication with Kinect", "text": "The Kinect has primarily been used as a gesture-driven device for motion-based controls. To date, Kinect-based research has predominantly focused on improving tracking and gesture recognition across a wide base of users. In this paper, we propose to use the Kinect for biometrics; rather than accommodating a wide range of users we exploit each user's uniqueness in terms of gestures. Unlike pure biometrics, such as iris scanners, face detectors, and fingerprint recognition, which depend on irrevocable biometric data, the Kinect can provide additional revocable gesture information. We propose a dynamic time-warping (DTW) based framework applied to the Kinect's skeletal information for user access control. Our approach is validated in two scenarios: user identification, and user authentication on a dataset of 20 individuals performing 8 unique gestures. We obtain overall Equal Error Rates (EER) of 4.14% and 1.89% in user identification and user authentication, respectively, for a single gesture, and consistently outperform related work on this dataset. Given the natural noise present in the real-time depth sensor, this yields promising results."} {"_id": "71e2307d30ea55c3a70e4220c65a29826f78dccc", "title": "A Multi-disciplinary Approach to Marketing Leveraging Advances in", "text": "For decades, neuroscience has greatly contributed to our foundational understanding of human behavior. More recently, the findings and methods of neuroscience have been applied to study the process of decision-making in order to offer advanced insights into the neural mechanisms that influence economic and consumer choices. In this thesis, I will address how customized marketing strategies can be enriched through the integration of consumer neuroscience, an integrative field anchored in the biological, cognitive and affective mechanisms of consumer behavior. By recognizing and utilizing these multidisciplinary interdependencies, marketers can enhance their advertising and promotional mix to elicit desired neural and affective consumer responses and measure these reactions in order to enhance purchasing decisions. The principal objective of this thesis is to present a comprehensive review of consumer neuroscience and to elucidate why it is an increasingly important area of study within the framework of human behavior. I will also describe how the insights gained from this emerging field can be leveraged to optimize marketing activities. Finally, I propose an experiment that illuminates key research questions, which may have considerable impact on the discipline of consumer neuroscience as well as the marketing industry."} {"_id": "127f0fd91bc7d5cee164beb3ec89e0f5071a3e8d", "title": "AutoMoDe: A novel approach to the automatic design of control software for robot swarms", "text": "We introduce AutoMoDe: a novel approach to the automatic design of control software for robot swarms. 
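The DTW core of the gesture-authentication framework described above is compact enough to sketch directly. The dynamic program below is the textbook algorithm; the per-frame skeletal feature vectors and the idea of an accept/reject threshold per enrolled user are assumptions of ours consistent with, but not copied from, the paper.

```python
import numpy as np

def dtw(a, b):
    """O(len(a) * len(b)) dynamic time warping distance between two
    sequences of feature vectors (one row per frame)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Authentication sketch: accept a gesture attempt if its DTW distance to the
# enrolled template is below a per-user threshold (tuned, e.g., at the EER).
template = np.random.rand(30, 60)   # hypothetical enrolled skeletal sequence
attempt = np.random.rand(28, 60)
accepted = dtw(attempt, template) < 50.0
```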
The core idea in AutoMoDe recalls the approach commonly adopted in machine learning for dealing with the bias\u2013variance tradeoff: to obtain suitably general solutions with low variance, an appropriate design bias is injected. AutoMoDe produces robot control software by selecting, instantiating, and combining preexisting parametric modules\u2014the injected bias. The resulting control software is a probabilistic finite state machine in which the topology, the transition rules and the values of the parameters are obtained automatically via an optimization process that maximizes a task-specific objective function. As a proof of concept, we define AutoMoDe-Vanilla, which is a specialization of AutoMoDe for the e-puck robot. We use AutoMoDe-Vanilla to design the robot control software for two different tasks: aggregation and foraging. The results show that the control software produced by AutoMoDe-Vanilla (i) yields good results, (ii) appears to be robust to the so-called reality gap, and (iii) is naturally human-readable."} {"_id": "103476735a92fced52ac096f301b07fda3ee42b7", "title": "BiDirectional optical communication with AquaOptical II", "text": "This paper describes AquaOptical II, a bidirectional, high data-rate, long-range, underwater optical communication system. The system uses the software radio principle. Each AquaOptical II modem can be programmed to transmit user-defined waveforms and record the received waveforms for detailed analysis. This allows for the use of many different modulation schemes. We describe the hardware and software architecture we developed for these goals. We demonstrate bidirectional communication between two AquaOptical II modems in a pool experiment. During the experiment AquaOptical II achieved a signal-to-noise ratio of 5.1 over a transmission distance of 50 m at pulse widths of 1 \u00b5s, 500 ns, and 250 ns. When using discrete pulse interval modulation (DPIM), this corresponds to bit rates of 0.57 Mbit/s, 1.14 Mbit/s, and 2.28 Mbit/s, respectively."} {"_id": "0eb463bc1079db2ba081f20af675cb055ca16d6f", "title": "Semantic Data Models", "text": "Semantic data models have emerged from a requirement for more expressive conceptual data models. Current generation data models lack direct support for relationships, data abstraction, inheritance, constraints, unstructured objects, and the dynamic properties of an application. Although the need for data models with richer semantics is widely recognized, no single approach has won general acceptance. This paper describes the generic properties of semantic data models and presents a representative selection of models that have been proposed since the mid-1970s. In addition to explaining the features of the individual models, guidelines are offered for the comparison of models. The paper concludes with a discussion of future directions in the area of conceptual data modeling."} {"_id": "ab7f7d3af0989135cd42a3d48348b0fdaceba070", "title": "External localization system for mobile robotics", "text": "We present a fast and precise vision-based software system intended for multiple-robot localization. The core component of the proposed localization system is an efficient method for black-and-white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. 
With off-the-shelf computational equipment and a low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that allows one to calculate its precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish its source code, so it can be used as an enabling technology for various mobile robotics problems."} {"_id": "e19e6fdec87e5cdec2116fd44cd458b6ecece408", "title": "Online Product Quantization", "text": "Approximate nearest neighbor (ANN) search has achieved great success in many tasks. However, existing popular methods for ANN search, such as hashing and quantization methods, are designed for static databases only. They cannot handle well databases whose data distribution evolves dynamically, due to the high computational effort of retraining the model on the new database. In this paper, we address the problem by developing an online product quantization (online PQ) model and incrementally updating the quantization codebook to accommodate the incoming streaming data. Moreover, to further alleviate the issue of large-scale computation for the online PQ update, we design two budget constraints for the model to update part of the PQ codebook instead of all of it. We derive a loss bound which guarantees the performance of our online PQ model. Furthermore, we develop an online PQ model over a sliding window with both data insertion and deletion supported, to reflect the real-time behavior of the data. The experiments demonstrate that our online PQ model is both time-efficient and effective for ANN search in dynamic large-scale databases compared with baseline methods, and the idea of partial PQ codebook update further reduces the update cost."} {"_id": "51f478339cc86f20fff222baf17be5bab7f29bc3", "title": "Survey on Collaborative Filtering, Content-based Filtering and Hybrid Recommendation System", "text": "Recommender systems, or recommendation systems, are a subset of information filtering systems that are used to anticipate the 'evaluation' or 'preference' that a user would give to an item. In recent years, e-commerce applications have been widely using recommender systems. Generally, the most popular e-commerce domains are music, news, books, research articles, and products. Recommender systems are also available for business experts, jokes, restaurants, financial services, life insurance and Twitter followers. Recommender systems have evolved in parallel with the web. Initially, recommender systems were based on demographic, content-based filtering and collaborative filtering. Currently, these systems incorporate social information to enhance the quality of the recommendation process. To improve the recommendation process further, future recommender systems will use personal, implicit and local information from the Internet. 
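As background for the online variant described in the PQ abstract above, here is a minimal static product-quantization sketch: vectors are split into m subspaces and a separate k-means codebook is learned per subspace, so each vector compresses to m small codes. The online PQ of the paper additionally updates these codebooks incrementally as data stream in, which this sketch does not reproduce; dimensions are assumed divisible by m.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_pq(X, m=4, k=256):
    """Learn one k-word codebook per subspace of the d-dim vectors in X."""
    d = X.shape[1] // m
    return [KMeans(n_clusters=k, n_init=4).fit(X[:, i*d:(i+1)*d])
            for i in range(m)]

def encode(codebooks, X):
    """Map each vector to m codebook indices (the PQ code)."""
    d = X.shape[1] // len(codebooks)
    return np.stack([cb.predict(X[:, i*d:(i+1)*d])
                     for i, cb in enumerate(codebooks)], axis=1)
```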
This paper provides an overview of recommender systems that covers collaborative filtering, content-based filtering and hybrid approaches."} {"_id": "6e6eccaabebdaf463e118d9798251d6ce86c0d8f", "title": "Lidar-histogram for fast road and obstacle detection", "text": "Detection of traversable road regions, positive and negative obstacles, and water hazards is a fundamental task for autonomous driving vehicles, especially in off-road environments. This paper proposes an efficient method, called Lidar-histogram. It can be used to integrate the detection of traversable road regions, obstacles and water hazards into one single framework. The weak assumption of the Lidar-histogram is that a decent-sized area in front of the vehicle is flat. The Lidar-histogram is derived from an efficient organized map of the Lidar point cloud, called Lidar-imagery, to index, describe and store Lidar data. The organized point-cloud map can be easily obtained by indexing the original unordered 3D point cloud to a Lidar-specific 2D coordinate system. In the Lidar-histogram representation, the 3D traversable road plane in front of the vehicle can be projected as a straight line segment, and the positive and negative obstacles are projected above and below the line segment, respectively. In this way, the problem of detecting traversable road and obstacles is converted into a simple linear classification task in 2D space. Experiments have been conducted in different kinds of off-road and urban scenes, and we have obtained very promising results."} {"_id": "c5d34128e995bd0039f74eba2255c82950ca4c46", "title": "Automated personalized feedback for physical activity and dietary behavior change with mobile phones: a randomized controlled trial on adults.", "text": "BACKGROUND\nA dramatic rise in health-tracking apps for mobile phones has occurred recently. Rich user interfaces make manual logging of users' behaviors easier and more pleasant, and sensors make tracking effortless. To date, however, feedback technologies have been limited to providing overall statistics, attractive visualization of tracked data, or simple tailoring based on age, gender, and overall calorie or activity information. There is a lack of systems that can perform automated translation of behavioral data into specific actionable suggestions that promote a healthier lifestyle without any human involvement.\n\n\nOBJECTIVE\nMyBehavior, a mobile phone app, was designed to process tracked physical activity and eating behavior data in order to provide personalized, actionable, low-effort suggestions that are contextualized to the user's environment and previous behavior. This study investigated the technical feasibility of implementing an automated feedback system, the impact of the suggestions on user physical activity and eating behavior, and user perceptions of the automatically generated suggestions.\n\n\nMETHODS\nMyBehavior was designed to (1) use a combination of automatic and manual logging to track physical activity (eg, walking, running, gym), user location, and food, (2) automatically analyze activity and food logs to identify frequent and nonfrequent behaviors, and (3) use a standard machine-learning, decision-making algorithm, called multi-armed bandit (MAB), to generate personalized suggestions that ask users to either continue, avoid, or make small changes to existing behaviors to help users reach behavioral goals. We enrolled 17 participants, all motivated to self-monitor and improve their fitness, in a pilot study of MyBehavior. 
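A crude, hedged rendering of the Lidar-histogram intuition from the abstract above: under the flat-ground assumption, road points concentrate around a line in the forward-distance/height plane, with positive obstacles above it and negative obstacles below. A least-squares line fit stands in here for the paper's actual histogram-based line estimation, and the band width is an invented parameter.

```python
import numpy as np

def lidar_histogram_classify(points, band=0.15):
    """points: (N, 3) array of x (forward), y (left), z (up) Lidar returns.
    Labels each point 0 (road), 1 (positive obstacle), 2 (negative obstacle)."""
    x, z = points[:, 0], points[:, 2]
    coeff = np.polyfit(x, z, 1)            # crude stand-in for the road line
    resid = z - np.polyval(coeff, x)       # height relative to the road line
    return np.where(np.abs(resid) < band, 0,
                    np.where(resid > 0, 1, 2))
```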
In a randomized two-group trial, investigators randomly assigned participants to receive either MyBehavior's personalized suggestions (n=9) or nonpersonalized suggestions (n=8), created by professionals, from a mobile phone app over 3 weeks. Daily activity level and dietary intake were monitored from logged data. At the end of the study, an in-person survey was conducted that asked users to subjectively rate their intention to follow MyBehavior suggestions.\n\n\nRESULTS\nIn qualitative daily diary, interview, and survey data, users reported MyBehavior suggestions to be highly actionable and stated that they intended to follow the suggestions. MyBehavior users walked significantly more than the control group over the 3 weeks of the study (P=.05). Although some MyBehavior users chose lower-calorie foods, the between-group difference was not significant (P=.15). In a poststudy survey, users rated MyBehavior's personalized suggestions more positively than the nonpersonalized, generic suggestions created by professionals (P<.001).\n\n\nCONCLUSIONS\nMyBehavior is a simple-to-use mobile phone app with preliminary evidence of efficacy. To the best of our knowledge, MyBehavior represents the first attempt to create personalized, contextualized, actionable suggestions automatically from self-tracked information (ie, manual food logging and automatic tracking of activity). Lessons learned about the difficulty of manual logging and usability concerns, as well as future directions, are discussed.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02359981; https://clinicaltrials.gov/ct2/show/NCT02359981 (Archived by WebCite at http://www.webcitation.org/6YCeoN8nv)."} {"_id": "5e5fa1de11a715d65008074a24060368b221f376", "title": "Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study.", "text": "Qualitative content analysis and thematic analysis are two commonly used approaches in data analysis of nursing research, but boundaries between the two have not been clearly specified. In other words, they are being used interchangeably and it seems difficult for the researcher to choose between them. In this respect, this paper describes and discusses the boundaries between qualitative content analysis and thematic analysis and presents implications to improve the consistency between the purpose of related studies and the method of data analyses. This is a discussion paper, comprising an analytical overview and discussion of the definitions, aims, philosophical background, data gathering, and analysis of content analysis and thematic analysis, and addressing their methodological subtleties. It is concluded that in spite of many similarities between the approaches, including cutting across data and searching for patterns and themes, their main difference lies in the opportunity for quantification of data. This means that measuring the frequency of different categories and themes is possible in content analysis, with caution, as a proxy for significance."} {"_id": "28bd4f88a9f2830d97fbdd1120ab3e9a810adf18", "title": "A hybrid bipartite graph based recommendation algorithm for mobile games", "text": "With the rapid development of mobile games, mobile game recommendation has become a core technique for mobile game marketplaces. This paper proposes a bipartite graph based recommendation algorithm, PKBBR (Prior Knowledge Based for Bipartite Graph Rank). 
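MyBehavior's suggestion engine, as described in the abstract above, is a multi-armed bandit with candidate suggestions as arms and "the user followed the suggestion" as reward. The sketch below uses the simplest epsilon-greedy variant purely for illustration; the paper states it uses a MAB but not necessarily this policy.

```python
import numpy as np

class EpsilonGreedyBandit:
    """Each arm is a candidate suggestion; reward is 1 if followed, else 0."""
    def __init__(self, n_arms, eps=0.1):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)   # running mean reward per suggestion
        self.eps = eps

    def select(self):
        if np.random.rand() < self.eps:
            return np.random.randint(len(self.values))   # explore
        return int(np.argmax(self.values))               # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedyBandit(n_arms=5)
arm = bandit.select()          # pick a suggestion to show the user
bandit.update(arm, reward=1)   # user followed it
```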
We model the user's interest in mobile games with a bipartite graph structure and use users' mobile game behavior to describe the edge weights of the graph, then incorporate users' prior knowledge into the projection procedure of the bipartite graph to enrich the information among the nodes. Because popular games have a great influence on the mobile game marketplace, we design a hybrid recommendation algorithm that incorporates popularity recommendation based on users' behaviors. The experimental results show that this hybrid method achieves better performance than other approaches."} {"_id": "1b52a22614d1dd0c95072bd6bb7bab3bee75f550", "title": "Active Object Categorization on a Humanoid Robot", "text": "We present a Bag of Words-based active object categorization technique implemented and tested on a humanoid robot. The robot is trained to categorize objects that are handed to it by a human operator. The robot uses hand and head motions to actively acquire a number of different views. A view planning scheme using entropy minimization reduces the number of views needed to achieve a valid decision. Categorization results are significantly improved by active elimination of background features using robot arm motion. Our experiments cover both categorization when the object is handed to the robot in a fixed pose at training and testing, and object-pose-independent categorization. Results on a 4-class object database demonstrate the classification efficiency, a significant gain from multi-view compared to single-view classification, and the advantage of view planning. We conclude that humanoid robotic systems can be successfully applied to actively categorize objects, a task with many potential applications ranging from edutainment to active surveillance."} {"_id": "d8adb97dbc276c4135601bb7ffcd03b7271f5a6e", "title": "Real Time Text Mining on Twitter Data", "text": "Social media constitute a challenging new source of information for intelligence gathering and decision making. Twitter is one of the most popular social media sites and often becomes the primary source of information. Twitter messages are short and well suited for knowledge discovery. Twitter provides both researchers and practitioners a free Application Programming Interface (API) which allows them to gather and analyse large data sets of tweets. Twitter data are not solely tweet texts, as Twitter\u2019s API provides much additional information for interesting analysis studies. The paper concisely describes the process of data gathering and the main areas of data mining, knowledge discovery and data visualization from Twitter data. In this paper, we create a Twitter app from which we fetch real-time tweets on a particular topic, store them in R, and then apply several text mining steps to pre-process the tweet text, and then we"} {"_id": "e02010ed24b9a1aea265ac84c3e5d103c071dfe2", "title": "A Low-Power Speech Recognizer and Voice Activity Detector Using Deep Neural Networks", "text": "This paper describes digital circuit architectures for automatic speech recognition (ASR) and voice activity detection (VAD) with improved accuracy, programmability, and scalability. Our ASR architecture is designed to minimize off-chip memory bandwidth, which is the main driver of system power consumption. 
A SIMD processor with 32 parallel execution units efficiently evaluates feed-forward deep neural networks (NNs) for ASR, limiting memory usage with a sparse quantized weight matrix format. We argue that VADs should prioritize accuracy over area and power, and introduce a VAD circuit that uses an NN to classify modulation frequency features with 22.3-$\\mu \\text{W}$ power consumption. The 65-nm test chip is shown to perform a variety of ASR tasks in real time, with vocabularies ranging from 11 words to 145 000 words and full-chip power consumption ranging from 172 $\\mu \\text{W}$ to 7.78 mW."} {"_id": "72b169c5539cfca8ac315638ff36eb04997f8350", "title": "A federated Edge Cloud-IoT architecture", "text": "The provisioning of powerful decentralized services in the Internet of Things (IoT), as well as the integration of innovative IoT platforms, requires ubiquity, reliability, high performance, efficiency, and scalability. Towards the accomplishment of the above requirements, contemporary technological trends have led to IoT and Cloud convergence. This combination gave rise to federated Cloud-IoT architectures. The Cloud-IoT architectural vision paves the way for the vital step beyond IoT cognitive capabilities: the IoT world leveraging cloud principles. For such innovative, high-performance solutions to be deployed successfully, the introduction and exploitation of 5G technological principles will be key. In this direction, this paper presents a federated architecture composed of distributed Edge Cloud-IoT platforms that enables semantic-based service combination and provisioning. 5G technological features such as low latency, zero losses, and high bitrates are studied and proposed to empower the federation of the Edge Cloud-IoT architectures. A smart health use case is presented for the evaluation of the proposed solution."} {"_id": "130dab15d243e5569925aa8d2eafb080078baf79", "title": "GPCAD: a tool for CMOS op-amp synthesis", "text": "We present a method for optimizing and automating component and transistor sizing for CMOS operational amplifiers. We observe that a wide variety of performance measures can be formulated as posynomial functions of the design variables. As a result, amplifier design problems can be formulated as a geometric program, a special type of convex optimization problem for which very efficient global optimization methods have recently been developed. The synthesis method is therefore fast, and determines the globally optimal design; in particular the final solution is completely independent of the starting point (which can even be infeasible), and infeasible specifications are unambiguously detected. After briefly introducing the method, which is described in more detail by M. Hershenson et al., we show how the method can be applied to six common op-amp architectures, and give several example designs."} {"_id": "ccb2b479b2b430e284e1c3afb1f9362cd1c95119", "title": "Applying graph-based anomaly detection approaches to the discovery of insider threats", "text": "The ability to mine data represented as a graph has become important in several domains for detecting various structural patterns. One important area of data mining is anomaly detection, but little work has been done in terms of detecting anomalies in graph-based data.
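The GPCAD record above reduces op-amp sizing to a geometric program. The sketch below shows the general shape of such a formulation using cvxpy's log-log convex mode (solve with gp=True); the monomial "gain" and "bandwidth" models and every constant are toy assumptions for illustration, not the paper's device models or its custom solver.

```python
import cvxpy as cp  # cvxpy supports geometric programs via solve(gp=True)

w = cp.Variable(pos=True)   # hypothetical transistor width
l = cp.Variable(pos=True)   # hypothetical channel length
i = cp.Variable(pos=True)   # hypothetical bias current

area = w * l                          # monomial
power = 3.0 * i                       # supply voltage folded into the constant
gain = (w / (l * i)) ** 0.5           # toy monomial model of DC gain
bandwidth = i / (w * l)               # toy monomial model of bandwidth

problem = cp.Problem(
    cp.Minimize(area + 10.0 * power),            # posynomial objective
    [gain >= 50.0,                               # spec: minimum gain
     bandwidth >= 1e-5,                          # spec: minimum bandwidth
     w >= 1.0, l >= 0.5],                        # geometry limits
)
problem.solve(gp=True)                           # global optimum, per GP theory
print(w.value, l.value, i.value)
```

Because the problem is a geometric program, the solver's answer is the global optimum regardless of starting point, which is exactly the property the abstract emphasizes.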
In this paper we present graph-based approaches to uncovering anomalies in applications containing information representing possible insider threat activity: e-mail, cell-phone calls, and order processing."} {"_id": "b81c9a3f774f839a79c0c6e40449ae5d5f49966a", "title": "Talla at SemEval-2017 Task 3: Identifying Similar Questions Through Paraphrase Detection", "text": "This paper describes our approach to the SemEval-2017 shared task of determining question-question similarity in a community question-answering setting (Task 3B). We extracted both syntactic and semantic similarity features between candidate questions, performed pairwise-preference learning to optimize for ranking order, and then trained a random forest classifier to predict whether the candidate questions were paraphrases of each other. This approach achieved a MAP of 45.7% out of a maximum achievable 67.0% on the test set."} {"_id": "789156726070dd03a8d2175157161c35c5a5b290", "title": "DDD: A New Ensemble Approach for Dealing with Concept Drift", "text": "Online learning algorithms often have to operate in the presence of concept drifts. A recent study revealed that different diversity levels in an ensemble of learning machines are required in order to maintain high generalization on both old and new concepts. Inspired by this study and based on a further study of diversity with different strategies to deal with drifts, we propose a new online ensemble learning approach called Diversity for Dealing with Drifts (DDD). DDD maintains ensembles with different diversity levels and is able to attain better accuracy than other approaches. Furthermore, it is very robust, outperforming other drift handling approaches in terms of accuracy when there are false positive drift detections. In all the experimental comparisons we have carried out, DDD always performed at least as well as other drift handling approaches under various conditions, with very few exceptions."} {"_id": "4aa9f5150b46320f534de4747a2dd0cd7f3fe292", "title": "Semi-supervised Sequence Learning", "text": "We present two approaches that use unlabeled data to improve sequence learning with recurrent networks. The first approach is to predict what comes next in a sequence, which is a conventional language model in natural language processing. The second approach is to use a sequence autoencoder, which reads the input sequence into a vector and predicts the input sequence again. These two algorithms can be used as a \u201cpretraining\u201d step for a later supervised sequence learning algorithm. In other words, the parameters obtained from the unsupervised step can be used as a starting point for other supervised training models. In our experiments, we find that long short term memory recurrent networks after being pretrained with the two approaches are more stable and generalize better. With pretraining, we are able to train long short term memory recurrent networks up to a few hundred timesteps, thereby achieving strong performance in many text classification tasks, such as IMDB, DBpedia and 20 Newsgroups."} {"_id": "e833ac2e377ab0bc4a6a7f236c4b6c32f3d2b8c1", "title": "Crime Scene Reconstruction: Online Gold Farming Network Analysis", "text": "Many online games have their own ecosystems, where players can purchase in-game assets using game money. Players can obtain game money through active participation or \u201creal money trading\u201d through official channels: converting real money into game money.
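As a concrete companion to the sequence-autoencoder pretraining described in the Semi-supervised Sequence Learning record above, here is a toy PyTorch sketch: an LSTM encoder summarizes the token sequence into a state, and an LSTM decoder is trained to reproduce the same sequence. Vocabulary size, dimensions, and the shifted-input teacher forcing are illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 128   # illustrative sizes

class SeqAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens):
        x = self.embed(tokens)
        _, state = self.encoder(x)          # sequence summarised in (h, c)
        dec_in = torch.roll(x, 1, dims=1)   # toy teacher forcing: shift right
        y, _ = self.decoder(dec_in, state)  # decode conditioned on the summary
        return self.out(y)

model = SeqAutoencoder()
tokens = torch.randint(0, VOCAB, (8, 20))   # batch of 8 sequences, length 20
logits = model(tokens)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   tokens.reshape(-1))
loss.backward()   # after pretraining, reuse encoder weights in a classifier
```

The point of the exercise, per the abstract, is that the pretrained encoder weights serve as a starting point for a later supervised sequence model.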
The unofficial market for real money trading gave rise to gold farming groups (GFGs), a phenomenon with serious impact in the cyber and real worlds. GFGs in massively multiplayer online role-playing games (MMORPGs) are some of the most interesting underground cyber economies because of the massive nature of the game. To detect GFGs, there have been various studies using behavioral traits. However, they can only detect gold farmers, not entire GFGs with internal hierarchies. Even worse, GFGs continuously develop techniques to hide, such as forming front organizations, concealing cyber-money, and changing trade patterns when online game service providers ban GFGs. In this paper, we analyze the characteristics of the ecosystem of a large-scale MMORPG, and devise a method for detecting GFGs. We build a graph that characterizes virtual economy transactions, and trace abnormal trades and activities. We derive features from the trading graph and physical networks used by GFGs to identify them in their entirety. Using their structure, we provide recommendations to defend effectively against GFGs while not affecting the existing virtual ecosystem."} {"_id": "096e07ced8d32fc9a3617ff1f725efe45507ede8", "title": "Learning methods for generic object recognition with invariance to pose and lighting", "text": "We assess the applicability of several popular learning methods for the problem of recognizing generic visual categories with invariance to pose, lighting, and surrounding clutter. A large dataset comprising stereo image pairs of 50 uniform-colored toys under 36 azimuths, 9 elevations, and 6 lighting conditions was collected (for a total of 194,400 individual images). The objects were 10 instances of 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. Five instances of each category were used for training, and the other five for testing. Low-resolution grayscale images of the objects with various amounts of variability and surrounding clutter were used for training and testing. Nearest neighbor methods, support vector machines, and convolutional networks, operating on raw pixels or on PCA-derived features were tested. Test error rates for unseen object instances placed on uniform backgrounds were around 13% for SVM and 7% for convolutional nets. On a segmentation/recognition task with highly cluttered images, SVM proved impractical, while convolutional nets yielded 16.7% error. A real-time version of the system was implemented that can detect and classify objects in natural scenes at around 10 frames per second."} {"_id": "0addfc35fc8f4419f9e1adeccd19c07f26d35cac", "title": "A discriminatively trained, multiscale, deformable part model", "text": "This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem.
However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose."} {"_id": "11540131eae85b2e11d53df7f1360eeb6476e7f4", "title": "Learning to Forget: Continual Prediction with LSTM", "text": "Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive forget gate that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way."} {"_id": "a44ebe793947d04d9026a42f9a28c962ff66c5c6", "title": "The anatomy of the posterior aspect of the knee. An anatomic study.", "text": "BACKGROUND\nThe orthopaedic literature contains relatively little quantitative information regarding the anatomy of the posterior aspect of the knee. The purpose of the present study was to provide a detailed description of, and to propose a standard nomenclature for, the anatomy of the posterior aspect of the knee.\n\n\nMETHODS\nDetailed dissection of twenty nonpaired, fresh-frozen knees was performed. Posterior knee structures were measured according to length, width, and/or distance to reproducible osseous landmarks.\n\n\nRESULTS\nThe semimembranosus tendon had eight attachments distal to the main common tendon. The main components were a lateral expansion to the oblique popliteal ligament; a direct arm, which attached to the tibia; and an anterior arm. The oblique popliteal ligament, the largest posterior knee structure, formed a broad fascial sheath over the posterior aspect of the knee and measured 48.0 mm in length and 9.5 mm wide at its medial origin and 16.4 mm wide at its lateral attachment. It had two lateral attachments, one to the meniscofemoral portion of the posterolateral joint capsule and one to the tibia, along the lateral border of the posterior cruciate ligament facet. The semimembranosus also had a distal tibial expansion, which formed a posterior fascial layer over the popliteus muscle. A thickening of the posterior joint capsule, the proximal popliteus capsular expansion, which in this study averaged 40.5 mm in length, connected the posteromedial knee capsule at its attachment at the intercondylar notch to the medial border of the popliteus musculotendinous junction. The plantaris muscle, popliteofibular ligament, fabellofibular ligament, and semimembranosus bursa were present in all specimens.\n\n\nCONCLUSIONS\nThe anatomy of the posterior aspect of the knee is quite complex. 
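For reference alongside the "Learning to Forget" record above, the forget-gate cell update it proposes is commonly written as follows (standard modern LSTM notation, a sketch rather than the paper's original symbols; \sigma is the logistic function and \odot the elementwise product):

```latex
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)   % forget gate: ~1 keeps state, ~0 resets it
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)   % input gate
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)   % output gate
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)
h_t = o_t \odot \tanh(c_t)
```

Without the forget gate the first term is effectively c_{t-1} itself, so on continual, unsegmented input streams the cell state can grow without bound, which is exactly the failure mode the abstract describes; learning f_t lets the cell reset its own internal resources at appropriate times.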
This study provides information that can lead to further biomechanical, radiographic imaging, and clinical studies of the importance of these posterior knee structures."} {"_id": "b3eea1328c10455faa9b49c1f4aec7cd5a0b2d1a", "title": "The discrete wavelet transform: wedding the a trous and Mallat algorithms", "text": ""} {"_id": "29b6452a3787022942da7670c197ff088c64562b", "title": "Benefit of Direct Charge Measurement (DCM) on Interconnect Capacitance Measurement", "text": "This paper discusses the application of direct charge measurement (DCM) to characterizing on-chip interconnect capacitance. Measurement equipment and techniques are leveraged from flat panel display testing. An on-chip active device is not essential for a DCM test structure, and parallel measurements are easy to implement. Femtofarad measurement sensitivity is achieved without any on-chip active device. Measurement results for silicon and glass substrates, including parallel measurements, are presented."} {"_id": "07b4bd19a4450cda062d88a2fd0447fbc0875fd0", "title": "Adaptive Sharing for Online Social Networks: A Trade-off Between Privacy Risk and Social Benefit", "text": "Online social networks such as Facebook allow users to control which friend sees what information, but it can be a laborious process for users to specify every receiver for each piece of information they share. Therefore, users usually group their friends into social circles, and select the most appropriate social circle to share particular information with. However, social circles are not formed for setting privacy policies, and even the most appropriate social circle still cannot adapt to the changes of users' privacy requirements influenced by the changes in context. This problem drives the need for better privacy control which can adaptively filter the members in a selected social circle to satisfy users' requirements while maintaining users' social needs. To enable such adaptive sharing, this paper proposes a utility-based trade-off framework that models users' concerns (i.e., potential privacy risks) and incentives of sharing (i.e., potential social benefits), and quantifies users' requirements as a trade-off between these two types of utilities. By balancing these two metrics, our framework suggests a subset of a selected circle that aims to maximise users' overall utility of sharing. Numerical simulation results compare the outcome of three sharing strategies in randomly changing contexts."} {"_id": "28133b781cfa109f9943f821ed31736f01d84afe", "title": "Evaluating Reinforcement Learning Algorithms in Observational Health Settings", "text": "Omer Gottesman, Fredrik Johansson, Joshua Meier, Jack Dent, Donghun Lee, Srivatsan Srinivasan, Linying Zhang, Yi Ding, David Wihl, Xuefeng Peng, Jiayu Yao, Isaac Lage, Christopher Mosch, Li-wei H. Lehman, Matthieu Komorowski, Aldo Faisal, Leo Anthony Celi, David Sontag, and Finale Doshi-Velez Paulson School of Engineering and Applied Sciences, Harvard University Institute for Medical Engineering and Science, MIT T.H.
Chan School of Public Health, Harvard University Department of Statistics, Harvard University Laboratory for Computational Physiology, Harvard-MIT Health Sciences & Technology, MIT Department of Surgery and Cancer, Faculty of Medicine, Imperial College London Department of Bioengineering, Imperial College London Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center MIT Critical Data"} {"_id": "35e500a7392bcff30045f5b21c1255c2aa2e7647", "title": "Personality traits and group-based information behaviour: an exploratory study", "text": "Although the importance of individual characteristics and psychological factors has been conceptualized in many models of information seeking behaviour (e.g., Case 2007; Wilson 1999), research into personality issues has only recently attracted attention in information science. This may be explained by the work by Heinstr\u00f6m (2002; 2003; 2006a; 2006b; 2006c), but may also be explained by the emerging interest and research into affective dimensions of information behaviour (Nahl and Bilal 2007). Hitherto, the research focus in information science"} {"_id": "92c7e2171731d0a768e900150ef46402284830fd", "title": "Design of Neuro-Fuzzy System Controller for DC ServomotorBased Satellite Tracking System", "text": "A parabolic dish antenna positioning system is a control unit which directs the antenna to a desired target automatically. In all its aspects, automation has become widespread and promises to represent the future for satellite tracking antenna systems. The term tracking is used to mean that the control system should utilize suitable algorithms to automate the process of pointing the antenna to a selected satellite, thereby establishing the desired line of sight (LOS). This paper aims to present and discuss the results obtained from the development of a DC servomotor-based Neuro-Fuzzy System Controller (NFSC) algorithm which can be applied in the positioning of parabolic reflector antennas. The advantage of using the NFSC method is that it has a high degree of nonlinearity tolerance and learning ability, and solves problems that are difficult to address with conventional techniques such as the Proportional Integral Derivative (PID) strategy. The architecture of the proposed antenna control system employs the Adaptive Neuro-Fuzzy Inference System (ANFIS) design environment of the MATLAB/SIMULINK package. The results obtained indicated that the proposed NFSC was able to achieve the desired output DC motor position with reduced rise time and overshoot. Future work is to apply the controller in real time to achieve automatic satellite tracking with a parabolic antenna using data derived from the signal strength."} {"_id": "52efd857b500531e040a9366b3eb5bb0ea543979", "title": "Google effects on memory: cognitive consequences of having information at our fingertips.", "text": "The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can \"Google\" the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it.
The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves."} {"_id": "38bef5fc907299a1a9b91346eec334b259d5b3eb", "title": "Rigid scene flow for 3D LiDAR scans", "text": "The perception of the dynamic aspects of the environment is a highly relevant precondition for the realization of autonomous robot systems acting in the real world. In this paper, we propose a novel method for estimating dense rigid scene flow in 3D LiDAR scans. We formulate the problem as an energy minimization problem, where we assume local geometric constancy and incorporate regularization for smooth motion fields. Analyzing the dynamics at point level helps in inferring the fine-grained details of motion. We show results on multiple sequences of the KITTI odometry dataset, where we seamlessly estimate multiple motions pertaining to different dynamic objects. Furthermore, we test our approach on a dataset with pedestrians to show how our method adapts to a case with non-rigid motion. For comparison we use the ground truth from KITTI and show how our method outperforms different ICP-based methods."} {"_id": "351e1e61e38a127f1dd68d74a6dfb0398e8cb84e", "title": "Dialogue-act-driven Conversation Model : An Experimental Study", "text": "The utility of additional semantic information for the task of next utterance selection in an automated dialogue system is the focus of study in this paper. In particular, we show that additional information available in the form of dialogue acts \u2013 when used along with context given in the form of dialogue history \u2013 improves performance irrespective of whether the underlying model is generative or discriminative. In order to show the model agnostic behavior of dialogue acts, we experiment with several well-known models such as sequence-to-sequence encoder-decoder model, hierarchical encoder-decoder model, and Siamese-based models with and without hierarchy; and show that in all models, incorporating dialogue acts improves the performance by a significant margin. We, furthermore, propose a novel way of encoding dialogue act information, and use it along with a hierarchical encoder to build a model that can use the sequential dialogue act information in a natural way. Our proposed model achieves an MRR of about 84.8% for the task of next utterance selection on a newly introduced DailyDialog dataset, and outperforms the baseline models. We also provide a detailed analysis of results including key insights that explain the improvement in MRR because of dialogue act information."} {"_id": "8ecd08b194529ff6b8e8fc80b5c7a72504a059d3", "title": "Evaluation of Objective Quality Measures for Speech Enhancement", "text": "In this paper, we evaluate the performance of several objective measures in terms of predicting the quality of noisy speech enhanced by noise suppression algorithms. The evaluation considered a wide range of distortions introduced by four types of real-world noise at two signal-to-noise ratio levels and by four classes of speech enhancement algorithms: spectral subtractive, subspace, statistical-model-based, and Wiener algorithms. The subjective quality ratings were obtained using the ITU-T P.835 methodology designed to evaluate the quality of enhanced speech along three dimensions: signal distortion, noise distortion, and overall quality. This paper reports on the evaluation of correlations of several objective measures with these three subjective rating scales.
Several new composite objective measures are also proposed by combining the individual objective measures using nonparametric and parametric regression analysis techniques."} {"_id": "a46c5572506e5b015174332445cded7f888d137d", "title": "A Proposed Digital Forensics Business Model to Support Cybercrime Investigation in Indonesia", "text": "Digital forensics will always involve at least a human who performs the activities, digital evidence as the main object, and a process as a reference for the activities followed. Existing frameworks have not provided a description of the interactions between humans, between humans and digital evidence, or between humans and the process itself. A business model approach can provide insight into the interactions in question. In this case, what has been generated by the author in the previous study through a business model of the digital chain of custody becomes the first step in constructing a business model of digital forensics. In principle, the proposed business model already accommodates the major components of digital forensics (human, digital evidence, process) and also considers the interactions among the components. The suggested business model contains several basic principles as described in The Regulation of Chief of Indonesian National Police (Perkap) No 10/2010. This will support law enforcement in dealing with cybercrime cases that are increasingly frequent and sophisticated, and can be a reference for each institution and organization to implement digital forensics activities."} {"_id": "e3ff5de6f800fde3368eb78c61463c6cb3302f59", "title": "Compact slow-wave millimeter-wave bandpass filter (BPF) using open-loop resonator", "text": "A compact slow-wave millimeter-wave bandpass filter for wireless communication systems is developed and proposed in this paper. The filter is based on an open-loop resonator that is composed of a microstrip line with both ends loaded with folded open stubs. The folded arms of the open stubs are utilized to increase the loading capacitance to ground. In addition, the folded arms of the open stubs are also used to introduce cross couplings. The cross couplings can create transmission zeros at the edges of the desired passband. As a result, the filter can exhibit a passband with sharp rejection skirts. The filter is designed to have a fractional bandwidth of about 3.3% at a center frequency of 30.0 GHz. The filter design is realized using Rogers RT6002 substrate with a dielectric constant of 2.94 and a thickness of 0.127 mm. The filter design is successfully fabricated using the printed circuit board (PCB) technology. The fabricated filter is measured and an excellent agreement between the simulated and measured results is achieved. The fabricated filter has the advantages of compact size and excellent performance."} {"_id": "c896eb2c37475a147d31cd6f0c19fce60e8e6b43", "title": "RDF-4X: a scalable solution for RDF quads store in the cloud", "text": "Resource Description Framework (RDF) represents a flexible and concise model for representing the metadata of resources on the web. Over the past years, with the increasing amount of RDF data, efficient and scalable RDF data management has become a fundamental challenge to achieve the Semantic Web vision. However, multiple approaches for RDF storage have been suggested, ranging from simple triple stores to more advanced techniques like vertical partitioning on the predicates or centralized approaches.
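The composite measures just mentioned in the speech-enhancement record are built by regressing subjective ratings on the individual objective measures. A minimal parametric sketch with synthetic data (the measure weights and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
objective = rng.normal(size=(200, 4))    # 4 objective measures on 200 files
# synthetic "subjective ratings" loosely driven by the measures
subjective = objective @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(0, 0.2, 200)

X = np.column_stack([objective, np.ones(len(objective))])   # add intercept
coef, *_ = np.linalg.lstsq(X, subjective, rcond=None)       # parametric fit
composite = X @ coef                                        # composite measure
r = np.corrcoef(composite, subjective)[0, 1]                # correlation with ratings
print(coef, r)
```

The fitted correlation r is the figure of merit: a composite measure is kept only if it correlates with the subjective scale better than its individual components do.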
Unfortunately, it is still a challenge to store a huge quantity of RDF quads due, in part, to the demands of query processing on RDF data. This paper proposes a scalable solution for RDF data management that uses Apache Accumulo. We focus on introducing storage methods and indexing techniques that scale to billions of quads across multiple nodes, while providing fast and easy access to the data through conventional query mechanisms such as SPARQL. Our performance evaluation shows that in most cases our approach works well against large RDF datasets."} {"_id": "30725559faf4f883c8bc9ed0af4aad6f6e9c9b10", "title": "Global, Dense Multiscale Reconstruction for a Billion Points", "text": "We present a variational approach for surface reconstruction from a set of oriented points with scale information. We focus particularly on scenarios with nonuniform point densities due to images taken from different distances. In contrast to previous methods, we integrate the scale information in the objective and globally optimize the signed distance function of the surface on a balanced octree grid. We use a finite element discretization on the dual structure of the octree minimizing the number of variables. The tetrahedral mesh is generated efficiently with a lookup table which allows to map octree cells to the nodes of the finite elements. We optimize memory efficiency by data aggregation, such that robust data terms can be used even on very large scenes. The surface normals are explicitly optimized and used for surface extraction to improve the reconstruction at edges and corners."} {"_id": "41e0436a617bda62ef6d895d58f1c4347a90dc89", "title": "Teachers\u2019 Performance Evaluation in Higher Educational Institution using Data Mining Technique", "text": "Educational Data Mining (EDM) is an evolving field exploring pedagogical data by applying different machine learning techniques/tools. It can be considered an interdisciplinary research field which provides intrinsic knowledge of the teaching and learning process for effective education. The main objective of any educational institution is to provide quality education to its students. One way to achieve the highest level of quality in a higher education system is by discovering knowledge that predicts teachers\u2019 performance. This study presents an efficient system model for evaluation and prediction of teachers\u2019 performance in higher institutions of learning using data mining technologies. To achieve the objectives of this work, a two-layered classifier system was designed; it consists of an Artificial Neural Network (ANN) and a Decision Tree. The classifier system was tested successfully using case study data from a Nigerian university in the South West of Nigeria. The data consist of teachers' academic qualifications, their experience, and the grades of students in courses they taught, among others. The selected attributes were evaluated using two feature selection methods in order to get a subset of attributes that would make for a compact and accurate predictive model. The WEKA machine learning tool was used for the mining. The results show that, among the six attributes used, Working Experience and Rank are rated as the best two attributes that contributed most to the performance of teachers in this study. Also, considering the time taken to build the models and performance accuracy level, the C4.5 decision tree outperformed the other two algorithms (ID3 and MLP) with a good performance of 83.5% accuracy and an acceptable kappa statistic of 0.743.
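A small illustration of the decision-tree stage of the teacher-evaluation study just described; scikit-learn's CART implementation is used here as a stand-in for WEKA's C4.5 (J48), and the features and labels are synthetic assumptions, not the study's actual encoding.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# six ordinal "teacher" attributes, e.g. rank, experience, qualification, ...
X = rng.integers(0, 5, size=(300, 6))
# synthetic performance label dominated by the first two attributes,
# mirroring the study's finding that Experience and Rank matter most
y = (X[:, 0] + X[:, 1] + rng.integers(0, 3, 300) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_)
```

On data generated this way, the tree's feature importances concentrate on the first two columns, which is the kind of attribute ranking the feature-selection step in the study is meant to surface.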
This means that the C4.5 decision tree is the algorithm best suited for predicting teachers\u2019 performance relative to the other two algorithms in this work."} {"_id": "2564e203c97df4e62348fa116ffa3b18f5151278", "title": "Parsimonious module inference in large networks.", "text": "We investigate the detectability of modules in large networks when the number of modules is not known in advance. We employ the minimum description length principle which seeks to minimize the total amount of information required to describe the network, and avoid overfitting. According to this criterion, we obtain general bounds on the detectability of any prescribed block structure, given the number of nodes and edges in the sampled network. We also find that the maximum number of detectable blocks scales as sqrt[N], where N is the number of nodes in the network, for a fixed average degree \u27e8k\u27e9. We also show that the simplicity of the minimum description length approach yields an efficient multilevel Monte Carlo inference algorithm with a complexity of O(\u03c4N log N), if the number of blocks is unknown, and O(\u03c4N) if it is known, where \u03c4 is the mixing time of the Markov chain. We illustrate the application of the method on a large network of actors and films with over 10(6) edges, and a dissortative, bipartite block structure."} {"_id": "ea5b1f3c719cd4ddd4c78a0da1501e36d87d9782", "title": "Discriminative Optimization: Theory and Applications to Computer Vision", "text": "Many computer vision problems are formulated as the optimization of a cost function. This approach faces two main challenges: designing a cost function with a local optimum at an acceptable solution, and developing an efficient numerical method to search for this optimum. While designing such functions is feasible in the noiseless case, the stability and location of local optima are mostly unknown under noise, occlusion, or missing data. In practice, this can result in undesirable local optima or not having a local optimum in the expected place. On the other hand, numerical optimization algorithms in high-dimensional spaces are typically local and often rely on expensive first or second order information to guide the search. To overcome these limitations, we propose Discriminative Optimization (DO), a method that learns search directions from data without the need for a cost function. DO explicitly learns a sequence of updates in the search space that leads to stationary points that correspond to the desired solutions. We provide a formal analysis of DO and illustrate its benefits in the problem of 3D registration, camera pose estimation, and image denoising. We show that DO outperformed or matched state-of-the-art algorithms in terms of accuracy, robustness, and computational efficiency."} {"_id": "93b8a3e1771b42b46eaa802a85ddcfba3cdf7c1e", "title": "Pervasive Motion Tracking and Muscle Activity Monitor", "text": "This paper introduces a novel human activity monitoring system combining Inertial Measurement Units (IMU) and Mechanomyographic (MMG) muscle sensing technology. While other work has recognised and implemented systems for combined muscle activity and motion recording, it has focused on muscle activity through EMG sensors, which have limited use outside controlled environments. MMG is a low frequency vibration emitted by skeletal muscle whose measurement does not require gel or adhesive electrodes, potentially offering much more efficient implementation for pervasive use.
We have developed a combined IMU and MMG sensor, the fusion of which provides a new dimension in human activity monitoring and bridges the gap between human dynamics and muscle activity. Results show that synergy between motion and muscle detection is not only viable but also straightforward and inexpensive, and can be packaged in a lightweight wearable unit."} {"_id": "d3bed9e9e7cb89236a5908ed902d006589e5f07b", "title": "Recognition for Handwritten English Letters : A Review", "text": "Character recognition is one of the most interesting and challenging research areas in the field of image processing. English character recognition has been extensively studied in the last half century. Nowadays different methodologies are in widespread use for character recognition. Document verification, digital libraries, reading bank deposit slips, reading postal addresses, extracting information from cheques, data entry, and applications for credit cards, health insurance, loans, and tax forms are application areas of digital document processing. This paper gives an overview of research work carried out for the recognition of handwritten English letters. In handwritten text there is no constraint on the writing style. Handwritten letters are difficult to recognize due to diverse human handwriting styles and variations in the angle, size, and shape of letters. Various approaches to handwritten character recognition are discussed here along with their performance."} {"_id": "9313629756a2f76b6f43c6fbc8c8e452a2fc6588", "title": "Anomaly Detection for Astronomical Data", "text": "Modern astronomical observatories can produce massive amounts of data that are beyond researchers' capability to even glance at. These scientific observations present both great opportunities and challenges for astronomers and machine learning researchers. In this project we address the problem of detecting anomalies/novelties in these large-scale astronomical data sets. Two types of anomalies, the point anomalies and the group anomalies, are considered. The point anomalies include individual anomalous objects, such as single stars or galaxies that present unique characteristics. The group anomalies include anomalous groups of objects, such as unusual clusters of galaxies that are close together. They both have great value for astronomical studies, and our goal is to detect them automatically in unsupervised ways. For point anomalies, we adopt the subspace-based detection strategy and propose a robust low-rank matrix decomposition algorithm for more reliable results. For group anomalies, we use hierarchical probabilistic models to capture the generative mechanism of the data, and then score the data groups using various probability measures. Experimental evaluation on both synthetic and real world data sets shows the effectiveness of the proposed methods. On a real astronomical data set, we obtained several interesting anecdotal results. Initial inspections by the astronomers confirm the usefulness of these machine learning methods in astronomical research."} {"_id": "14956e13069de62612d63790860f41e9fcc9bf70", "title": "Graph SLAM based mapping for AGV localization in large-scale warehouses", "text": "The operation of industrial Automated Guided Vehicles (AGVs) today requires designated infrastructure and readily available maps for their localization. In logistics, high effort and investment are necessary to enable the introduction of AGVs.
Within the SICK AG coordinated EU-funded research project PAN-Robots we aim to reduce the installation time and costs dramatically by semi-automated plant exploration and localization based on natural landmarks. In this paper, we present our current mapping and localization results based on measurement data acquired at the site of our project partner Coca-Cola Iberian Partners in Bilbao, Spain. We evaluate our solution in terms of accuracy of the map, i.e. comparing landmark position estimates with a ground truth map of millimeter accuracy. Localization results are shown for both artificial landmarks and natural landmarks (grid maps), using the same graph-based optimization solution."} {"_id": "03184ac97ebf0724c45a29ab49f2a8ce59ac2de3", "title": "Evaluation of output embeddings for fine-grained image classification", "text": "Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with fine-grained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results."} {"_id": "453b92fb760b2b6865e96670003061097953b5b3", "title": "Advice weaving in AspectJ", "text": "This paper describes the implementation of advice weaving in AspectJ. The AspectJ language picks out dynamic join points in a program's execution with pointcuts and uses advice to change the behavior at those join points. The core task of AspectJ's advice weaver is to statically transform a program so that at runtime it will behave according to the AspectJ language semantics. This paper describes the 1.1 implementation, which is based on transforming bytecode. We describe how AspectJ's join points are mapped to regions of bytecode, how these regions are efficiently matched by AspectJ's pointcuts, and how pieces of advice are efficiently implemented at these regions. We also discuss both run-time and compile-time performance of this implementation."} {"_id": "7081388f372231e13080f129423f6605d0c5168f", "title": "Scalable production of large quantities of defect-free few-layer graphene by shear exfoliation in liquids.", "text": "To progress from the laboratory to commercial applications, it will be necessary to develop industrially scalable methods to produce large quantities of defect-free graphene. Here we show that high-shear mixing of graphite in suitable stabilizing liquids results in large-scale exfoliation to give dispersions of graphene nanosheets.
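The compatibility-learning idea in the fine-grained zero-shot record above can be sketched as a bilinear score F(x, y) = x^T W y trained with a ranking loss, so that the correct class embedding outscores mismatching ones. Everything below (dimensions, the synthetic data, and the hinge update) is an illustrative assumption, not the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_cls, n_cls = 64, 32, 10
W = np.zeros((d_img, d_cls))                 # bilinear compatibility matrix
class_emb = rng.normal(size=(n_cls, d_cls))  # e.g. attribute or word vectors
A = rng.normal(size=(d_img, d_cls))          # hidden image-generating map (toy)

def train_step(x, y_true, lr=0.01):
    global W
    scores = x @ W @ class_emb.T             # F(x, y) for every class
    masked = scores.copy(); masked[y_true] = -1e9
    y_wrong = int(np.argmax(masked))         # hardest mismatching class
    if 1.0 - scores[y_true] + scores[y_wrong] > 0:   # hinge violation
        W += lr * np.outer(x, class_emb[y_true] - class_emb[y_wrong])

for _ in range(2000):                        # synthetic training stream
    y = int(rng.integers(n_cls))
    x = A @ class_emb[y] + 0.1 * rng.normal(size=d_img)
    train_step(x, y)

test_x = A @ class_emb[3]
print(int(np.argmax(test_x @ W @ class_emb.T)))   # typically 3 after training
```

The zero-shot property comes from the factorization: at test time, embeddings of classes never seen during training can simply be appended to class_emb and scored with the same W.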
X-ray photoelectron spectroscopy and Raman spectroscopy show the exfoliated flakes to be unoxidized and free of basal-plane defects. We have developed a simple model that shows exfoliation to occur once the local shear rate exceeds 10(4) s(-1). By fully characterizing the scaling behaviour of the graphene production rate, we show that exfoliation can be achieved in liquid volumes from hundreds of millilitres up to hundreds of litres and beyond. The graphene produced by this method performs well in applications from composites to conductive coatings. This method can be applied to exfoliate BN, MoS2 and a range of other layered crystals."} {"_id": "75412b7dbc418aa56710e1b6e4b61da794d5adc7", "title": "Large-area synthesis of high-quality and uniform graphene films on copper foils.", "text": "Graphene has been attracting great interest because of its distinctive band structure and physical properties. Today, graphene is limited to small sizes because it is produced mostly by exfoliating graphite. We grew large-area graphene films of the order of centimeters on copper substrates by chemical vapor deposition using methane. The films are predominantly single-layer graphene, with a small percentage (less than 5%) of the area having few layers, and are continuous across copper surface steps and grain boundaries. The low solubility of carbon in copper appears to help make this growth process self-limiting. We also developed graphene film transfer processes to arbitrary substrates, and dual-gated field-effect transistors fabricated on silicon/silicon dioxide substrates showed electron mobilities as high as 4050 square centimeters per volt per second at room temperature."} {"_id": "7ad70a564c4c8898f61eddbcecf9e90f71e9d4b6", "title": "Measurement of the elastic properties and intrinsic strength of monolayer graphene.", "text": "We measured the elastic properties and intrinsic breaking strength of free-standing monolayer graphene membranes by nanoindentation in an atomic force microscope. The force-displacement behavior is interpreted within a framework of nonlinear elastic stress-strain response, and yields second- and third-order elastic stiffnesses of 340 newtons per meter (N m(-1)) and -690 Nm(-1), respectively. The breaking strength is 42 N m(-1) and represents the intrinsic strength of a defect-free sheet. These quantities correspond to a Young's modulus of E = 1.0 terapascals, third-order elastic stiffness of D = -2.0 terapascals, and intrinsic strength of sigma(int) = 130 gigapascals for bulk graphite. These experiments establish graphene as the strongest material ever measured, and show that atomically perfect nanoscale materials can be mechanically tested to deformations well beyond the linear regime."} {"_id": "ddeb1c116cecd14f1065c76599509de094e18420", "title": "Raman spectrum of graphene and graphene layers.", "text": "Graphene is the two-dimensional building block for carbon allotropes of every other dimensionality. We show that its electronic structure is captured in its Raman spectrum that clearly evolves with the number of layers. The D peak second order changes in shape, width, and position for an increasing number of layers, reflecting the change in the electron bands via a double resonant Raman process. The G peak slightly down-shifts. 
This allows unambiguous, high-throughput, nondestructive identification of graphene layers, which is critically lacking in this emerging research area."} {"_id": "85d3432db02840f07f7ce8603507cb6125ac1334", "title": "Synthesis of graphene-based nanosheets via chemical reduction of exfoliated graphite oxide", "text": "Reduction of a colloidal suspension of exfoliated graphene oxide sheets in water with hydrazine hydrate results in their aggregation and subsequent formation of a high-surface-area carbon material which consists of thin graphene-based sheets. The reduced material was characterized by elemental analysis, thermo-gravimetric analysis, scanning electron microscopy, X-ray photoelectron spectroscopy, NMR spectroscopy, Raman spectroscopy, and by electrical conductivity measurements."} {"_id": "3364a3131285f309398372ebd2831f2fddb59f26", "title": "Microphone Array Processing for Distant Speech Recognition: From Close-Talking Microphones to Far-Field Sensors", "text": "Distant speech recognition (DSR) holds the promise of the most natural human computer interface because it enables man-machine interactions through speech, without the necessity of donning intrusive body- or head-mounted microphones. Recognizing distant speech robustly, however, remains a challenge. This contribution provides a tutorial overview of DSR systems based on microphone arrays. In particular, we present recent work on acoustic beam forming for DSR, along with experimental results verifying the effectiveness of the various algorithms described here; beginning from a word error rate (WER) of 14.3% with a single microphone of a linear array, our state-of-the-art DSR system achieved a WER of 5.3%, which was comparable to that of 4.2% obtained with a lapel microphone. Moreover, we present an emerging technology in the area of far-field audio and speech processing based on spherical microphone arrays. Performance comparisons of spherical and linear arrays reveal that a spherical array with a diameter of 8.4 cm can provide recognition accuracy comparable or better than that obtained with a large linear array with an aperture length of 126 cm."} {"_id": "c47b829a6a19f405e941e32ce19ea5e662967519", "title": "The Impact on Family Functioning of Social Media Use by Depressed Adolescents: A Qualitative Analysis of the Family Options Study", "text": "BACKGROUND\nAdolescent depression is a prevalent mental health problem, which can have a major impact on family cohesion. In such circumstances, excessive use of the Internet by adolescents may exacerbate family conflict and lack of cohesion. The current study aims to explore these patterns within an intervention study for depressed adolescents.\n\n\nMETHOD\nThe current study draws upon data collected from parents within the family options randomized controlled trial that examined family based interventions for adolescent depression (12-18\u2009years old) in Melbourne, Australia (2012-2014). Inclusion in the trial required adolescents to meet diagnostic criteria for a major depressive disorder via the Structured Clinical Interview for DSM-IV Childhood Disorders. The transcripts of sessions were examined using qualitative thematic analysis.
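As a minimal illustration of the beamforming family surveyed in the distant-speech-recognition record above, here is a toy delay-and-sum beamformer: each microphone channel is advanced by its known propagation delay and the aligned channels are averaged. Known integer-sample delays and wrap-around at the edges are simplifying assumptions; real systems estimate fractional delays from the look direction.

```python
import numpy as np

def delay_and_sum(channels, delays, fs):
    """channels: (M, N) microphone signals; delays: per-channel delay in seconds."""
    M, N = channels.shape
    out = np.zeros(N)
    for m in range(M):
        shift = int(round(delays[m] * fs))   # integer-sample approximation
        out += np.roll(channels[m], -shift)  # advance each channel into alignment
    return out / M                           # coherent average boosts the source

fs = 16000
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440 * t)         # toy far-field source
delays = [0.0, 0.001, 0.002]                 # assumed propagation delays
mics = np.stack([np.roll(source, int(d * fs)) for d in delays])
enhanced = delay_and_sum(mics, delays, fs)
print(np.allclose(enhanced, source))         # True in this wrap-around toy case
```

With uncorrelated noise added per channel, the same averaging attenuates the noise by roughly a factor of M in power while leaving the aligned speech intact, which is the basic mechanism behind the WER gains the abstract reports.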
The transcribed sessions consisted of 56\u2009h of recordings in total from 39 parents who took part in the interventions.\n\n\nRESULTS\nThe thematic analysis explored parental perceptions of their adolescent's use of social media (SM) and access to Internet content, focusing on the possible relationship between adolescent Internet use and the adolescent's depressive disorder. Two overarching themes emerged as follows: the sense of loss of parental control over the family environment and parents' perceived inability to protect their adolescent from material encountered on the Internet and social interactions via SM.\n\n\nCONCLUSION\nParents within the context of family based treatments felt that prolonged exposure to SM exposed their already vulnerable child to additional stressors and risks. The thematic analysis uncovered a sense of parental despair and lack of control, which is consistent with their perception of SM and the Internet as relentless and threatening to their parental authority and family cohesion."} {"_id": "1eab69b7a85f87eb265945bebddd6ac0e1e08be3", "title": "A Type-Based Compiler for Standard ML", "text": "Compile-time type information should be valuable in efficient compilation of statically typed functional languages such as Standard ML. But how should type-directed compilation work in real compilers, and how much performance gain will type-based optimizations yield? In order to support more efficient data representations and gain more experience about type-directed compilation, we have implemented a new type-based middle end and back end for the Standard ML of New Jersey compiler. We describe the basic design of the new compiler, identify a number of practical issues, and then compare the performance of our new compiler with the old non-type-based compiler. Our measurement shows that a combination of several simple type-based optimizations reduces heap allocation by 36% and improves the already-efficient code generated by the old non-type-based compiler by about 19% on a DECstation 500."} {"_id": "c4108ffeed1fb1766578f60c10e9ebfc37ea13c0", "title": "Using Neural Networks to Generate Inferential Roles for Natural Language", "text": "Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's \"inferential role.\" We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set.
On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition."} {"_id": "32805881c575cd7a55a42b3ad0b00a112d6620eb", "title": "Double Thompson Sampling for Dueling Bandits", "text": "In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for dueling bandit problems. As its name suggests, D-TS selects both the first and the second candidates according to Thompson Sampling. Specifically, D-TS maintains a posterior distribution for the preference matrix, and chooses the pair of arms for comparison according to two sets of samples independently drawn from the posterior distribution. This simple algorithm applies to general Copeland dueling bandits, including Condorcet dueling bandits as a special case. For general Copeland dueling bandits, we show that D-TS achieves O(K^2 log T) regret. Moreover, using a back substitution argument, we refine the regret to O(K log T + K^2 log log T) in Condorcet dueling bandits and most practical Copeland dueling bandits. In addition, we propose an enhancement of D-TS, referred to as D-TS+, to reduce the regret in practice by carefully breaking ties. Experiments based on both synthetic and real-world data demonstrate that D-TS and D-TS+ significantly improve the overall performance, in terms of regret and robustness."} {"_id": "5c8a746146dba2dacb1741b8d9362d37c09f61ce", "title": "A Proposal for a Set of Level 3 Basic Linear Algebra Subprograms", "text": "This paper describes a proposal for Level 3 Basic Linear Algebra Subprograms (Level 3 BLAS). The Level 3 BLAS are targeted at matrix-matrix operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers, especially those with hierarchical memory and parallel processing capability."} {"_id": "3a8d076b5baa28e28160a1229ac3cae0b9e192f3", "title": "Accurate vertical road profile estimation using v-disparity map and dynamic programming", "text": "Detecting obstacles on the road is crucial for advanced driver assistance systems. Obstacle detection on the road can be greatly facilitated if we have a vertical road profile. Therefore, in this paper, we present a novel method that can estimate an accurate vertical road profile of the scene from the stereo images. Unlike conventional stereo-based road profile estimation methods that heavily rely on a parametric model of the road surface, our method can obtain a road profile for an arbitrarily complicated road. To this end, an energy function that includes the stereo matching fidelity and spatio-temporal smoothness of the road profile is presented, and thus the road profile is extracted by maximizing the energy function via dynamic programming. The experimental results demonstrate the effectiveness of the proposed method."} {"_id": "9b9c8419c237ceb2cabeb7778dd425317fc0095f", "title": "Position control of Robotino mobile robot using fuzzy logic", "text": "The appearance of fuzzy logic theory led to new trends in the robotics field and has helped solve various typical problems. In fact, fuzzy logic is a good alternative for increasing the capabilities of autonomous mobile robots in an unknown dynamic environment by integrating human experience.
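The D-TS record above describes the algorithm concretely enough to sketch: keep Beta posteriors over pairwise win probabilities, draw one sample of the preference matrix to pick the first candidate (by Copeland score) and an independent second sample to pick its challenger. The simplified NumPy version below omits the paper's confidence-interval pruning and the D-TS+ tie-breaking; the environment model is an invented utility-based one.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
wins = np.ones((K, K))                         # Beta(1,1) priors: wins[i, j] = i beat j
u = rng.normal(size=K)                         # hidden utilities (toy environment)
true_p = 1.0 / (1.0 + np.exp(u[None, :] - u[:, None]))   # P(i beats j)

for t in range(5000):
    theta1 = rng.beta(wins, wins.T)            # first posterior sample
    np.fill_diagonal(theta1, 0.5)
    first = int(np.argmax((theta1 > 0.5).sum(axis=1)))   # Copeland winner of sample 1

    theta2 = rng.beta(wins, wins.T)            # independent second sample
    challenger = theta2[:, first].copy()
    challenger[first] = -np.inf                # simplification: force a distinct arm
    second = int(np.argmax(challenger))        # best challenger under sample 2

    if rng.random() < true_p[first, second]:   # run the duel, update the posterior
        wins[first, second] += 1
    else:
        wins[second, first] += 1

means = wins / (wins + wins.T)
print("estimated Copeland winner:", int(np.argmax((means > 0.5).sum(axis=1))))
```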
The paper presents the design and implementation of position control using fuzzy logic for an omnidirectional mobile robot. The position control of the mobile robot was tested remotely using the Festo Robotino platform and the Matlab environment. Obstacle avoidance and mobile robot odometry were also studied in this work."} {"_id": "e911518c14dffe2ce9d25a5bdd8d0bf20df1069f", "title": "Phase III clinical trial of thalidomide plus dexamethasone compared with dexamethasone alone in newly diagnosed multiple myeloma: a clinical trial coordinated by the Eastern Cooperative Oncology Group.", "text": "PURPOSE\nTo determine if thalidomide plus dexamethasone yields superior response rates compared with dexamethasone alone as induction therapy for newly diagnosed multiple myeloma.\n\n\nPATIENTS AND METHODS\nPatients were randomly assigned to receive thalidomide plus dexamethasone or dexamethasone alone. Patients in arm A received thalidomide 200 mg orally for 4 weeks; dexamethasone was administered at a dose of 40 mg orally on days 1 to 4, 9 to 12, and 17 to 20. Cycles were repeated every 4 weeks. Patients in arm B received dexamethasone alone at the same schedule as in arm A.\n\n\nRESULTS\nTwo hundred seven patients were enrolled: 103 were randomly assigned to thalidomide plus dexamethasone and 104 were randomly assigned to dexamethasone alone; eight patients were ineligible. The response rate with thalidomide plus dexamethasone was significantly higher than with dexamethasone alone (63% v 41%, respectively; P = .0017). The response rate allowing for use of serum monoclonal protein levels when a measurable urine monoclonal protein was unavailable at follow-up was 72% v 50%, respectively. The incidence rates of grade 3 or higher deep vein thrombosis (DVT), rash, bradycardia, neuropathy, and any grade 4 to 5 toxicity in the first 4 months were significantly higher with thalidomide plus dexamethasone compared with dexamethasone alone (45% v 21%, respectively; P < .001). DVT was more frequent in arm A than in arm B (17% v 3%); grade 3 or higher peripheral neuropathy was also more frequent (7% v 4%, respectively).\n\n\nCONCLUSION\nThalidomide plus dexamethasone demonstrates significantly superior response rates in newly diagnosed myeloma compared with dexamethasone alone. However, this must be balanced against the greater toxicity seen with the combination."} {"_id": "6ceacd889559cfcf0009e914d47f915167231846", "title": "The impact of visual attributes on online image diffusion", "text": "Little is known about how visual content affects popularity on social networks, despite images now being ubiquitous on the Web and currently accounting for a considerable fraction of all content shared. Existing art on image sharing focuses mainly on non-visual attributes. In this work we take a complementary approach, and investigate resharing from a mainly visual perspective. Two sets of visual features are proposed, encoding both aesthetic properties (brightness, contrast, sharpness, etc.) and semantic content (concepts represented by the images). We collected data from a large image-sharing service (Pinterest) and evaluated the predictive power of different features on popularity (number of reshares). We found that visual properties have low predictive power compared with that of social cues.
However, after factoring out social influence, visual features show considerable predictive power, especially for images with higher exposure, with over 3:1 accuracy odds when classifying highly exposed images between very popular and unpopular."} {"_id": "5b5ee84ac8ab484c33b3b152c9af5a0715217e53", "title": "Finding Structural Similarity in Time Series Data Using Bag-of-Patterns Representation", "text": "For more than a decade, time series similarity search has been given a great deal of attention by data mining researchers. As a result, many time series representations and distance measures have been proposed. However, most existing work on time series similarity search focuses on finding shape-based similarity. While some of the existing approaches work well for short time series data, they typically fail to produce satisfactory results when the sequence is long. For long sequences, it is more appropriate to consider similarity based on higher-level structures. In this work, we present a histogram-based representation for time series data, similar to the \u201cbag of words\u201d approach that is widely accepted by the text mining and information retrieval communities. We show that our approach outperforms the existing methods in clustering, classification, and anomaly detection on several real datasets."} {"_id": "2343c9765be95ae7a156a75825321ba00940ff83", "title": "An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments", "text": "Grabbing and manipulating virtual objects is an important user interaction for immersive virtual environments. We present implementations and discussion of six techniques which allow manipulation of remote objects. A user study of these techniques was performed which revealed their characteristics and deficiencies, and led to the development of a new class of techniques. These hybrid techniques provide distinct advantages in terms of ease of use and efficiency because they consider the tasks of grabbing and manipulation separately. CR Categories and Subject Descriptors: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality; I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques."} {"_id": "24be6d5e698e32a72b709cb3eb50f9da0f616027", "title": "Maladaptive and adaptive emotion regulation through music: a behavioral and neuroimaging study of males and females", "text": "Music therapists use guided affect regulation in the treatment of mood disorders. However, self-directed uses of music in affect regulation are not fully understood. Some uses of music may have negative effects on mental health, as can non-music regulation strategies, such as rumination. Psychological testing and functional magnetic resonance imaging (fMRI) were used to explore music listening strategies in relation to mental health. Participants (n = 123) were assessed for depression, anxiety and Neuroticism, and uses of Music in Mood Regulation (MMR). Neural responses to music were measured in the medial prefrontal cortex (mPFC) in a subset of participants (n = 56). Discharge, using music to express negative emotions, related to increased anxiety and Neuroticism in all participants and particularly in males. Males high in Discharge showed decreased activity of mPFC during music listening compared with those using less Discharge. Females high in Diversion, using music to distract from negative emotions, showed more mPFC activity than females using less Diversion.
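The bag-of-patterns abstract above can be illustrated with a short sketch: slide a window over the series, discretize each window into a symbolic word (a simplified SAX step: z-normalize, piecewise-average, quantize against fixed Gaussian breakpoints), and histogram the words. The window size, word length, and four-symbol alphabet are illustrative assumptions, not the paper's exact settings.

```python
# Minimal bag-of-patterns sketch for a univariate time series.
import numpy as np
from collections import Counter

def bag_of_patterns(series, window=32, word_len=4):
    breakpoints = np.array([-0.67, 0.0, 0.67])         # 4-symbol Gaussian cuts
    bag = Counter()
    for start in range(len(series) - window + 1):
        seg = series[start:start + window]
        seg = (seg - seg.mean()) / (seg.std() + 1e-8)  # z-normalize the window
        paa = seg.reshape(word_len, -1).mean(axis=1)   # piecewise aggregate
        word = "".join("abcd"[np.searchsorted(breakpoints, v)] for v in paa)
        bag[word] += 1                                 # count the symbolic word
    return bag

t = np.linspace(0, 8 * np.pi, 512)
print(bag_of_patterns(np.sin(t)).most_common(3))
```

Two series can then be compared by any histogram distance over their bags, which is what makes the representation suitable for structural rather than shape-based similarity.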
These results suggest that the use of the Discharge strategy can be associated with maladaptive patterns of emotional regulation, and may even have long-term negative effects on mental health. This finding has real-world applications in psychotherapy and particularly in clinical music therapy."} {"_id": "4c648fe9b7bfd25236164333beb51ed364a73253", "title": "Presentation Attack Detection Methods for Face Recognition Systems: A Comprehensive Survey", "text": "The vulnerability of face recognition systems to presentation attacks (also known as direct attacks or spoof attacks) has received a great deal of interest from the biometric community. The rapid evolution of face recognition systems into real-time applications has raised new concerns about their ability to resist presentation attacks, particularly in unattended application scenarios such as automated border control. The goal of a presentation attack is to subvert the face recognition system by presenting a facial biometric artifact. Popular face biometric artifacts include a printed photo, the electronic display of a facial photo, replaying video using an electronic display, and 3D face masks. These have demonstrated a high security risk for state-of-the-art face recognition systems. However, several presentation attack detection (PAD) algorithms (also known as countermeasures or antispoofing methods) have been proposed that can automatically detect and mitigate such targeted attacks. The goal of this survey is to present a systematic overview of the existing work on face presentation attack detection that has been carried out. This paper describes the various aspects of face presentation attacks, including different types of face artifacts, state-of-the-art PAD algorithms and an overview of the respective research labs working in this domain, vulnerability assessments and performance evaluation metrics, the outcomes of competitions, the availability of public databases for benchmarking new PAD algorithms in a reproducible manner, and finally a summary of the relevant international standardization in this field. Furthermore, we discuss the open challenges and future work that need to be addressed in this evolving field of biometrics."} {"_id": "990c5f709007ffc72234fc9f491065c891bf888f", "title": "Deep Fusion Net for Multi-atlas Segmentation: Application to Cardiac MR Images", "text": "Atlas selection and label fusion are two major challenges in multi-atlas segmentation. In this paper, we propose a novel deep fusion net to better address these challenges. Deep fusion net is a deep architecture formed by concatenating a feature extraction subnet and a non-local patch-based label fusion (NL-PLF) subnet in a single network. This network is trained end-to-end for automatically learning deep features that achieve optimal performance in an NL-PLF framework. The learned deep features are further utilized in defining a similarity measure for atlas selection. Experimental results on Cardiac MR images for left ventricular segmentation demonstrate that our approach is effective both in atlas selection and multi-atlas label fusion, and achieves state-of-the-art performance."} {"_id": "2788637dafcdc6c440b1ab91756c601523ee45ca", "title": "A New Distance Measure for Face Recognition System", "text": "This paper proposes a new powerful distance measure called Normalized Unmatched Points (NUP). This measure can be used in a face recognition system to discriminate facial images. It works by counting the number of unmatched pixels between query and database images.
A face recognition system has been proposed which makes use of this proposed distance measure for making the matching decision. This system has been tested on four publicly available databases, viz. the ORL, YALE, BERN and CALTECH databases. Experimental results show that the proposed measure achieves recognition rates of more than 98.66% for the first five likely matched faces. It is observed that the NUP distance measure performs better than other existing similar variants on these databases."} {"_id": "a38e54b0425a6074c53f0d0d365158613b4edea2", "title": "Fusing Iris, Palmprint and Fingerprint in a Multi-biometric Recognition System", "text": "This paper presents a trimodal biometric recognition system based on iris, palmprint and fingerprint. Wavelet transform and Gabor filter are used to extract features in different scales and orientations from iris and palmprint. Minutiae extraction and alignment is used for fingerprint matching. Six different fusion algorithms including score-based, rank-based and decision-based methods are used to combine the results of the three modules. We also propose a new rank-based fusion algorithm, Maximum Inverse Rank (MIR), which is robust with respect to variations in scores and to bad rankings from a module. CASIA datasets for iris, palmprint and fingerprint are used in this study. The experiments show the effectiveness of our fusion method and our trimodal biometric recognition system in comparison to existing multi-modal recognition systems."} {"_id": "f3f050cee9bc594ac12801b9fb700330918bf42a", "title": "Dermoscopy in differentiating palmar syphiloderm from palmar papular psoriasis.", "text": "Palmar syphiloderm is one of the most common presentations of secondary syphilis and its recognition is of utmost importance in order to promptly identify such a disease and initiate appropriate workup/management. However, the differential diagnosis with palmar papular psoriasis often poses some difficulties, with consequent possible diagnostic errors/delays and prescription of improper therapies. In this report, we underline the role of dermoscopy as a supportive tool to facilitate the non-invasive recognition of palmar syphiloderm and its distinction from palmar papular psoriasis."} {"_id": "5ec89f73a8d1e817ebf654f91318d28c9cfebead", "title": "Semantically Guided Depth Upsampling", "text": "We present a novel method for accurate and efficient upsampling of sparse depth data, guided by high-resolution imagery. Our approach goes beyond the use of intensity cues only and additionally exploits object boundary cues through structured edge detection and semantic scene labeling for guidance. Both cues are combined within a geodesic distance measure that allows for boundary-preserving depth interpolation while utilizing local context. We model the observed scene structure by locally planar elements and formulate the upsampling task as a global energy minimization problem. Our method determines globally consistent solutions and preserves fine details and sharp depth boundaries.
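A minimal sketch of the Normalized Unmatched Points idea from the face recognition abstract above: count query pixels that find no intensity match within a small neighborhood of the database image, and normalize by image size. The neighborhood radius and intensity tolerance are illustrative assumptions, not the paper's exact parameters.

```python
# NUP-style distance sketch between two grayscale images (illustrative).
import numpy as np

def nup_distance(query, db, radius=1, tol=10):
    h, w = query.shape
    pad = np.pad(db.astype(int), radius, mode="edge")  # tolerate small shifts
    unmatched = 0
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            if np.all(np.abs(patch - int(query[y, x])) > tol):
                unmatched += 1            # no matching pixel in neighborhood
    return unmatched / (h * w)            # normalize to [0, 1]

a = np.random.default_rng(1).integers(0, 256, (16, 16))
print(nup_distance(a, a))                 # identical images -> 0.0
print(nup_distance(a, 255 - a))           # inverted image -> close to 1.0
```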
In our experiments on several public datasets at different levels of application, we demonstrate superior performance of our approach over the state-of-the-art, even for very sparse measurements."} {"_id": "20ba6183545930316f0db323ff40de6f86381c0e", "title": "A master-slave blockchain paradigm and application in digital rights management", "text": "To address the flaws of current blockchain platforms, namely their heavyweight design, large ledger capacity, and time-consuming data synchronization, in this paper we propose a new master-slave blockchain paradigm (MSB) for pervasive computing that allows general PCs and mobile devices such as smartphones or PADs to participate in mining and verification. The traditional blockchain model is separated into two layers, a master-node layer and a layer of slave agents. We then propose two approaches within the MSB blockchain, a partial-computing model (P-CM) and a non-computing model (NCM). Finally, extensive simulations show that the proposed master-slave blockchain scheme is feasible, extensible and suitable for pervasive computing, especially in 5G environments, and can be applied in DRM-related applications."} {"_id": "c35d770942f22b4a1595a225c518335e3acdd494", "title": "Analysis of GDI Technique for Digital Circuit Design", "text": "Power dissipation of digital circuits can be reduced by 15%-25% by using appropriate logic restructuring, and by 40%-60% by lowering switching activity. Here, the Gate Diffusion Input (GDI) technique, which is based on a Shannon expansion, is analyzed for minimizing the power consumption and delay of static digital circuits. Compared to other currently used logic design styles, this technique allows lower power consumption and reduced propagation delay in low-power combinational digital circuits with a minimum number of transistors. In this paper, basic building blocks of digital systems and a few combinational circuits are analyzed using GDI and other CMOS techniques. All circuits are designed at 180nm technology in CADENCE and simulated using the VIRTUOSO SPECTRE simulator at 100 MHz. A comparative analysis has been performed between GDI and other parallel design styles for a ripple adder, a CLA adder and a bit-magnitude comparator. Simulation results show that the GDI technique saves 53.3%, 55.6% and 75.6% power in the ripple adder, CLA adder and bit-magnitude comparator respectively compared to CMOS, while delay is reduced by 25.2%, 3.4% and 6.9%. The analysis concludes that GDI is a high-speed, low-power design technique."} {"_id": "115e07fd6a7910895e7a3d81ed3121d89262b40b", "title": "Cooperative Inverse Reinforcement Learning", "text": "For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans. We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partial-information game with two agents, human and robot; both are rewarded according to the human\u2019s reward function, but the robot does not initially know what this is.
In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions that are more effective in achieving value alignment. We show that computing optimal joint policies in CIRL games can be reduced to solving a POMDP, prove that optimality in isolation is suboptimal in CIRL, and derive an approximate CIRL algorithm."} {"_id": "925cc228f463ef493b3ab036b659df0d292bb087", "title": "On the Folly of Rewarding A, While Hoping for B", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."} {"_id": "5be817083fbefe6bc087ec5b5069da92d5f98b88", "title": "Network Intrusion Detection System using attack behavior classification", "text": "Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion detection is a major component in computer security systems that can be classified as Host-based Intrusion Detection Systems (HIDS), which protect a certain host or system, and Network-based Intrusion Detection Systems (NIDS), which protect a network of hosts and systems. This paper addresses probe attacks, or reconnaissance attacks, which try to collect any possible relevant information in the network. Network probe attacks have two types: host sweep and port scan attacks. Host sweep attacks determine the hosts that exist in the network, while port scan attacks determine the available services that exist in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: packet capture engine, preprocessor, pattern recognition, classification, and monitoring and alert module. We have tested the system in a real environment where it has shown good capability in detecting attacks. In addition, the system has been tested using the DARPA 1998 dataset with a 100% recognition rate. In fact, our system can recognize attacks in constant time."} {"_id": "13c0c418df650ad94ac368c81e2133ec9e166381", "title": "Mid-level deep pattern mining", "text": "Mid-level visual element discovery aims to find clusters of image patches that are both representative and discriminative. In this work, we study this problem from the perspective of pattern mining while relying on the recently popularized Convolutional Neural Networks (CNNs). Specifically, we find that for an image patch, activations extracted from the first fully-connected layer of a CNN have two appealing properties which enable their seamless integration with pattern mining. Patterns are then discovered from a large number of CNN activations of image patches through well-known association rule mining. When we retrieve and visualize image patches with the same pattern (See Fig. 1), surprisingly, they are not only visually similar but also semantically consistent. We apply our approach to scene and object classification tasks, and demonstrate that our approach outperforms all previous works on mid-level visual element discovery by a sizeable margin with far fewer elements being used. Our approach also outperforms or matches recent works using CNNs for these tasks.
Source code of the complete system is available online."} {"_id": "d98fa0e725ebc6e69a16219de9ccf90c07175998", "title": "Broadband Reflectarray Antenna Using Subwavelength Elements Based on Double Square Meander-Line Rings", "text": "A linearly polarized broadband reflectarray is presented employing a novel single-layer subwavelength phase-shifting element. The size of the element is a fifth of a wavelength at the center frequency of 10 GHz and the element consists of double concentric square rings of meander lines. By changing the length of the meander line, a 420\u00b0 phase variation range is achieved at the center frequency. This characteristic makes the proposed configuration unique, as most of the reported subwavelength reflectarray elements can only realize a phase range far less than 360\u00b0. In addition, the slope of the phase response remains almost constant from 9 to 11 GHz, demonstrating a broadband property. A $48 \times 48$-element reflectarray antenna is simulated, fabricated, and measured. Good agreement is obtained between simulated and measured results. A measured 1.5-dB gain bandwidth of 18% and 56.5% aperture efficiency is achieved."} {"_id": "37d0f68ed0d6a1293d260cc2a43be462b553c0f6", "title": "Does Computer-Assisted Learning Improve Learning Outcomes ? Evidence from a Randomized Experiment in Migrant Schools in Beijing", "text": "The education of the poor and disadvantaged population has been a long-standing challenge to the education system in both developed and developing countries. Although computer-assisted learning (CAL) has been considered one alternative to improve learning outcomes in a cost-effective way, the empirical evidence of its impacts on improving learning outcomes is mixed. This paper intends to explore the nature of the effects of CAL on student academic and non-academic outcomes for underserved populations in a developing country. To meet this goal, we exploit a randomized field experiment of a CAL program involving over 4000 third-grade students, mostly from poor migrant families, in 43 migrant schools in Beijing. The main intervention is a math CAL program that is held out of regular school hours. The program is tailored to the regular school math curriculum and is remedial in nature. Our results show that the CAL program improved the student standardized math scores by 0.14 standard deviations and most of the program effect took place within two months after the start of the program. Low-performing students and those with less-educated parents benefited more from the program. Moreover, CAL also significantly increased the levels of self-efficacy of the students and their interest in learning. We observed
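The mid-level deep pattern mining abstract two records back describes turning CNN activations into transactions for association rule mining; a minimal sketch of that step follows. The random stand-in activations, top-k binarization, pairwise itemsets, and support threshold are illustrative assumptions; the paper uses real fully-connected-layer activations and full association rule mining.

```python
# Sketch: binarize activation vectors into transactions, mine frequent pairs.
import numpy as np
from collections import Counter
from itertools import combinations

rng = np.random.default_rng(0)
acts = rng.random((200, 64))                 # stand-in CNN activations
txns = [frozenset(np.argsort(-row)[:8])      # transaction = top-8 unit indices
        for row in acts]

pairs = Counter(p for t in txns for p in combinations(sorted(t), 2))
frequent = [(p, c) for p, c in pairs.items() if c >= 20]   # min support = 20
print(sorted(frequent, key=lambda pc: -pc[1])[:5])
```

Patches whose transactions contain the same frequent itemset would then form one candidate mid-level element.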
at most a moderate program spillover in Chinese test scores. Our findings are robust to the Hawthorne effect and CAL program spillovers that might potentially bias the estimates of the program effects. Key Words: Education; Development; Computer Assisted Learning; Random Assignment; Test Scores; China; Migration. JEL Codes: I20; I21; I28; O15"} {"_id": "218acaf6ea938025ee99676606a2ac5f8dd888ed", "title": "A network forensics tool for precise data packet capture and replay in cyber-physical systems", "text": "Network data packet capture and replay capabilities are basic requirements for forensic analysis of faults and security-related anomalies, as well as for testing and development. Cyber-physical networks, in which data packets are used to monitor and control physical devices, must operate within strict timing constraints, in order to match the hardware devices' characteristics. Standard network monitoring tools are unsuitable for such systems because they cannot guarantee to capture all data packets, may introduce their own traffic into the network, and cannot reliably reproduce the original timing of data packets. Here we present a high-speed network forensics tool specifically designed for capturing and replaying data traffic in Supervisory Control and Data Acquisition systems. Unlike general-purpose \"packet capture\" tools, it does not affect the observed network's data traffic and guarantees that the original packet ordering is preserved. Most importantly, it allows replay of network traffic precisely matching its original timing. The tool was implemented by developing novel user interface and back-end software for a special-purpose network interface card. Experimental results show a clear improvement in data capture and replay capabilities over standard network monitoring methods and general-purpose forensics solutions."} {"_id": "3230af0ced496c3eb59e4abc4df15f2572b50d03", "title": "Meta-learning with backpropagation", "text": "This paper introduces gradient descent methods applied to meta-learning (learning how to learn) in neural networks. Meta-learning has been of interest in the machine learning field for decades because of its appealing applications to intelligent agents, non-stationary time series, autonomous robots, and improved learning algorithms. Many previous neural network-based approaches toward meta-learning have been based on evolutionary methods. We show how to use gradient descent for meta-learning in recurrent neural networks. Based on previous work on Fixed-Weight Learning Neural Networks, we hypothesize that any recurrent network topology and its corresponding learning algorithm(s) is a potential meta-learning system. We tested several recurrent neural network topologies and their corresponding forms of Backpropagation for their ability to meta-learn. One of our systems, based on the Long Short-Term Memory neural network, developed a learning algorithm that could learn any two-dimensional quadratic function (from a set of such functions) after only 30 training examples."} {"_id": "fcaefc4836a276009f85a527f866b59611cbd9f1", "title": "Swarm-Based Spatial Sorting", "text": "Purpose \u2013 To present an algorithm for spatially sorting objects into an annular structure.
Design/Methodology/Approach \u2013 A swarm-based model that requires only stochastic agent behaviour coupled with a pheromone-inspired \u201cattraction-repulsion\u201d mechanism. Findings \u2013 The algorithm consistently generates high-quality annular structures, and is particularly powerful in situations where the initial configuration of objects is similar to those observed in nature. Research limitations/implications \u2013 Experimental evidence supports previous theoretical arguments about the nature and mechanism of spatial sorting by insects. Practical implications \u2013 The algorithm may find applications in distributed robotics. Originality/value \u2013 The model offers a powerful minimal algorithmic framework, and also sheds further light on the nature of attraction-repulsion algorithms and underlying natural processes."} {"_id": "5d263170dd6f4132da066645ac4c352e783f6471", "title": "A High Efficiency MAC Protocol for WLANs: Providing Fairness in Dense Scenarios", "text": "Collisions are a main cause of throughput degradation in wireless local area networks. The current contention mechanism used in IEEE 802.11 networks is called carrier sense multiple access with collision avoidance (CSMA/CA). It uses a binary exponential backoff technique to randomize each contender's transmission attempts, effectively reducing the collision probability. Nevertheless, CSMA/CA relies on a random backoff that, while effective and fully decentralized, is in principle unable to completely eliminate collisions, therefore degrading the network throughput as more contenders attempt to share the channel. To overcome these situations, carrier sense multiple access with enhanced collision avoidance (CSMA/ECA) is able to create a collision-free schedule in a fully decentralized manner using a deterministic backoff after successful transmissions. Hysteresis and fair share are two extensions of CSMA/ECA to support a large number of contenders in a collision-free schedule. CSMA/ECA offers better throughput than CSMA/CA and short-term throughput fairness. This paper describes CSMA/ECA and its extensions. In addition, it provides the first evaluation results of CSMA/ECA with non-saturated traffic, channel errors, and its performance when coexisting with CSMA/CA nodes. Furthermore, it describes the effects of imperfect clocks on CSMA/ECA and presents a mechanism to mitigate the impact of channel errors and the addition/withdrawal of nodes on collision-free schedules. Finally, the experimental results on throughput and lost frames from a CSMA/ECA implementation using commercial hardware and open-source firmware are presented."} {"_id": "0f00a04c4a8c92e070b50ab411df4cd31d2cbe97", "title": "Face recognition with one training image per person", "text": "At present there are many methods that could deal well with frontal view face recognition. However, most of them cannot work well when there is only one training image per person. In this paper, an extension of the eigenface technique, i.e., (PC)\u00b2A, is proposed. (PC)\u00b2A combines the original face image with its horizontal and vertical projections and then performs principal component analysis on the enriched version of the image.
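The CSMA/ECA abstract above hinges on one change to the contention rule: after a success, a station picks a deterministic backoff instead of a random one, so successful stations settle into a collision-free schedule. A toy slotted-time sketch follows; the node count, contention window, and slot model are illustrative assumptions, not the 802.11 timing model.

```python
# Toy slotted simulation: CSMA/CA random backoff vs. CSMA/ECA
# deterministic backoff after success (illustrative parameters).
import random

def simulate(n_nodes=8, cw=16, slots=10000, eca=True):
    random.seed(1)
    backoff = [random.randrange(cw) for _ in range(n_nodes)]
    successes = collisions = 0
    for _ in range(slots):
        ready = [i for i, b in enumerate(backoff) if b == 0]
        if len(ready) == 1:                       # clean transmission
            successes += 1
            backoff[ready[0]] = cw // 2 - 1 if eca else random.randrange(cw)
        elif len(ready) > 1:                      # collision: all randomize
            collisions += 1
            for i in ready:
                backoff[i] = random.randrange(cw)
        for i in range(n_nodes):                  # other nodes count down
            if i not in ready:
                backoff[i] -= 1
    return successes, collisions

print("CSMA/CA  (successes, collisions):", simulate(eca=False))
print("CSMA/ECA (successes, collisions):", simulate(eca=True))
```

With the deterministic backoff equal to half the contention window, up to cw/2 successful stations can occupy distinct slots of the cycle, which is the collision-free schedule the abstract describes.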
It requires less computational cost than the standard eigenface technique and experimental results show that on a gray-level frontal view face database where each person has only one training image, (PC)\u00b2A achieves 3%-5% higher accuracy than the standard eigenface technique while using 10%-15% fewer eigenfaces."} {"_id": "44d811b066187865cce71d9acecdda8c3f9c8eee", "title": "Travel time estimation of a path using sparse trajectories", "text": "In this paper, we propose a citywide and real-time model for estimating the travel time of any path (represented as a sequence of connected road segments), based on the GPS trajectories of vehicles received in current time slots and over a period of history, as well as map data sources. Though this is a strategically important task in many traffic monitoring and routing systems, the problem has not been well solved yet given the following three challenges. The first is the data sparsity problem, i.e., many road segments may not be traveled by any GPS-equipped vehicles in the present time slot. In most cases, we cannot find a trajectory exactly traversing a query path either. Second, for the fragments of a path covered by trajectories, there are multiple ways of using (or combining) the trajectories to estimate the corresponding travel time. Finding an optimal combination is a challenging problem, subject to a tradeoff between the length of a path and the number of trajectories traversing the path (i.e., support). Third, we need to instantly answer users' queries which may occur in any part of a given city. This calls for an efficient, scalable and effective solution that can enable citywide and real-time travel time estimation. To address these challenges, we model different drivers' travel times on different road segments in different time slots with a three-dimensional tensor. Combined with geospatial, temporal and historical contexts learned from trajectories and map data, we fill in the tensor's missing values through a context-aware tensor decomposition approach. We then devise and prove an objective function to model the aforementioned tradeoff, with which we find the optimal concatenation of trajectories for an estimate through a dynamic programming solution. In addition, we propose using frequent trajectory patterns (mined from historical trajectories) to scale down the candidates of concatenation and a suffix-tree-based index to manage the trajectories received in the present time slot. We evaluate our method based on extensive experiments, using GPS trajectories generated by more than 32,000 taxis over a period of two months. The results demonstrate the effectiveness, efficiency and scalability of our method beyond baseline approaches."} {"_id": "b8084d5e193633462e56f897f3d81b2832b72dff", "title": "DeepID3: Face Recognition with Very Deep Neural Networks", "text": "The state-of-the-art of face recognition has been significantly advanced by the emergence of deep learning. Very deep neural networks recently achieved great success on general object recognition because of their superb learning capacity. This motivates us to investigate their effectiveness on face recognition. This paper proposes two very deep neural network architectures, referred to as DeepID3, for face recognition. These two architectures are rebuilt from stacked convolution and inception layers proposed in VGG net [10] and GoogLeNet [16] to make them suitable for face recognition.
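A minimal sketch of the projection-combined eigenface idea from the (PC)\u00b2A abstract above: enrich each face image with a map built from its horizontal and vertical intensity projections, then run standard PCA on the enriched images. The exact combination formula and the weight alpha here are illustrative assumptions rather than the paper's definition.

```python
# Sketch: projection-enriched images followed by ordinary PCA (eigenfaces).
import numpy as np

def enrich(img, alpha=0.25):
    h = img.mean(axis=1, keepdims=True)      # horizontal projection (per row)
    v = img.mean(axis=0, keepdims=True)      # vertical projection (per column)
    proj = (h @ v) / img.mean()              # outer-product projection map
    return (img + alpha * proj) / (1 + alpha)

def pca_basis(imgs, n_comp=5):
    X = np.stack([enrich(im).ravel() for im in imgs])
    X = X - X.mean(axis=0)                   # center, then SVD = PCA
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_comp]                       # eigenfaces of enriched images

faces = [np.random.default_rng(i).random((32, 32)) for i in range(10)]
print(pca_basis(faces).shape)                # (5, 1024)
```

With a single image per person, the projections inject extra, stable structure into each training sample, which is what lets PCA extract a more useful basis than from the raw image alone.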
Joint face identification-verification supervisory signals are added to both intermediate and final feature extraction layers during training. An ensemble of the proposed two architectures achieves 99.53% LFW face verification accuracy and 96.0% LFW rank-1 face identification accuracy, respectively. A further discussion of the LFW face verification result is given at the end."} {"_id": "7b93d0791cdf8342be6c398db831321759180d86", "title": "Incredible high performance SAW resonator on novel multi-layered substrate", "text": "High Q and low TCF are essential characteristics for SAW resonators in realizing high-performance filters and duplexers for new LTE frequency bands. The authors focus on acoustic energy confinement in the SAW propagation surface, and electrical characteristics have been investigated theoretically and experimentally. A new multi-layered substrate structure has realized remarkably high performance, with three times the Q and one fifth the TCF of a conventional structure. A Band 25 duplexer using the new structure has been developed successfully with low loss and good temperature stability."} {"_id": "36604c12e02c8146f8cb486f02f7ba9545004669", "title": "Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth", "text": "When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies."} {"_id": "fcdf8fee148179f7bf26a8254cb82c86321811d2", "title": "Understanding Individual Neuron Importance Using Information Theory", "text": "In this work, we characterize the outputs of individual neurons in a trained feed-forward neural network by entropy, mutual information with the class variable, and a class selectivity measure based on Kullback-Leibler divergence.
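The reverse classification accuracy abstract above can be sketched end to end with trivial stand-in models: train a reverse classifier on the new image using its predicted segmentation as pseudo ground truth, evaluate it on references with known ground truth, and take the best Dice score as the accuracy proxy. The threshold "segmenter" and synthetic images below are illustrative assumptions, not the paper's classifiers or data.

```python
# RCA sketch with a threshold-based reverse classifier and Dice overlap.
import numpy as np

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def reverse_classifier(image, pseudo_gt):
    # "Train" on the new image: a threshold separating the two classes.
    thresh = (image[pseudo_gt].mean() + image[~pseudo_gt].mean()) / 2
    return lambda img: img > thresh

rng = np.random.default_rng(0)
refs = []
for _ in range(5):                        # reference images with ground truth
    gt = np.zeros((32, 32), bool)
    gt[8:24, 8:24] = True
    refs.append((gt * 1.0 + rng.normal(0, 0.3, gt.shape), gt))

new_img = refs[0][0] + rng.normal(0, 0.1, (32, 32))
predicted = new_img > 0.5                 # the segmentation to be evaluated
rc = reverse_classifier(new_img, predicted)
rca_score = max(dice(rc(img), gt) for img, gt in refs)
print("predicted accuracy proxy (best Dice on references):",
      round(float(rca_score), 3))
```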
By cumulatively ablating neurons in the network, we connect these information-theoretic measures to the impact their removal has on classification performance on the test set. We observe that, looking at the neural network as a whole, none of these measures is a good indicator for classification performance, thus confirming recent results by Morcos et al. However, looking at specific layers separately, both mutual information and class selectivity are positively correlated with classification performance. We thus conclude that it is ill-advised to compare these measures across layers, and that different layers may be most appropriately characterized by different measures. We then discuss pruning neurons from neural networks to reduce computational complexity of inference. Drawing from our results, we perform pruning based on information-theoretic measures on a fully connected feed-forward neural network with two hidden layers trained on the MNIST dataset and compare the results to a recently proposed pruning method. We furthermore show that the common practice of re-training after pruning can partly be obviated by a surgery step called bias balancing, without incurring significant performance degradation."} {"_id": "3b2e70869975c3c27c2338788b67371e758cc221", "title": "FLEXIBLE MANUFACTURING SYSTEMS MODELLING AND PERFORMANCE EVALUATION USING AUTOMOD", "text": "In recent times, flexible manufacturing systems have emerged as a powerful technology to meet continuously changing customer demands. An increase in the performance of flexible manufacturing systems is expected as a result of integration of shop floor activities such as machine and vehicle scheduling. The authors attempt to integrate machine and vehicle scheduling with an objective to minimize the makespan using Automod. Automod is a discrete event simulation package used to model and simulate a wide variety of issues in automated manufacturing systems. The key issues related to the design and operation of automated guided vehicles, such as flow path layout, number of vehicles and traffic control problems, are considered in the study. Performance measures like throughput, machine and vehicle utilization are studied for different job dispatching and vehicle assignment rules in different flexible manufacturing system configurations."} {"_id": "2ab9b5fb21b3e4f1de78494380135df535dd75e0", "title": "Fusing Social Media Cues: Personality Prediction from Twitter and Instagram", "text": "Incorporating users\u2019 personality traits has been shown to be instrumental in many personalized retrieval and recommender systems. Analysis of users\u2019 digital traces has become an important resource for inferring personality traits. To date, the analysis of users\u2019 explicit and latent characteristics is typically restricted to a single social networking site (SNS). In this work, we propose a novel method that integrates text, image, and users\u2019 meta features from two different SNSs: Twitter and Instagram.
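A minimal sketch of the per-neuron information measures used in the neuron importance abstract above: quantize a neuron's activations and estimate its output entropy and its mutual information with the class label from empirical counts. The bin count and the synthetic activations are illustrative assumptions.

```python
# Entropy and mutual information of a neuron's (quantized) output.
import numpy as np

def neuron_scores(acts, labels, bins=10):
    edges = np.histogram_bin_edges(acts, bins)
    q = np.digitize(acts, edges[1:-1])            # bin index in 0..bins-1
    joint = np.zeros((bins, labels.max() + 1))
    for a, y in zip(q, labels):
        joint[a, y] += 1
    joint /= joint.sum()                          # empirical joint p(a, y)
    pa = joint.sum(1, keepdims=True)              # marginal over activations
    py = joint.sum(0, keepdims=True)              # marginal over labels
    nz = joint > 0
    mi = (joint[nz] * np.log2(joint[nz] / (pa @ py)[nz])).sum()
    entropy = -(pa[pa > 0] * np.log2(pa[pa > 0])).sum()
    return entropy, mi

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
informative = y + rng.normal(0, 0.5, 1000)        # correlates with the label
noise = rng.normal(0, 1, 1000)                    # carries no label information
print("informative neuron (H, MI):", neuron_scores(informative, y))
print("noise neuron       (H, MI):", neuron_scores(noise, y))
```

Note that the noise neuron can still have high entropy while its mutual information stays near zero, which is exactly why the two measures rank neurons differently.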
Our preliminary results indicate that the joint analysis of users\u2019 simultaneous activities in two popular SNSs seems to lead to a consistent decrease of the prediction errors for each personality trait."} {"_id": "ada07d84bf881daa4a7e692670e61ad766f692f3", "title": "Current sharing in three-phase LLC interleaved resonant converter", "text": "In this paper, a novel approach for multi-phase interleaved LLC resonant converters is presented. The proposed solution, based on the use of three LLC modules with star connection of the transformer primary windings, allows a drastic reduction of the output current ripple and consequently of the output filter capacitor size. Unlike other multi-phase solutions, which are greatly susceptible to resonant component tolerances causing current imbalance, the proposed topology exhibits an inherent current-sharing capability. Moreover, a closed-loop phase-shift control is introduced to additionally compensate for current mismatch and completely balance the current supplied by each module. The benefit of such a solution on the reduction of output current ripple, and the phase-shift control interaction and effect on load-step variations, are also investigated. Measurements on a prototype complement the simulations to validate the assertions and proposals."} {"_id": "8914b7cba355d22425f22a2fa5b109ba82806f45", "title": "Ratiometric Artifact Reduction in Low Power Reflective Photoplethysmography", "text": "This paper presents effective signal-processing techniques for the compensation of motion artifacts and ambient light offsets in a reflective photoplethysmography sensor suitable for wearable applications. A ratiometric comparison of infrared (IR) and red absorption characteristics cancels out noise that is multiplicative in nature, and amplitude modulation of pulsatile absorption signals enables rejection of additive noise. A low-power, discrete-time pulse-oximeter platform is used to capture IR and red photoplethysmograms so that the data used for analysis have noise levels representative of what a true body sensor network device would experience. The proposed artifact rejection algorithm is designed for real-time implementation with a low-power microcontroller while being robust enough to compensate for varying levels of ambient light as well as reducing the effects of motion-induced artifacts. The performance of the system is illustrated by its ability to extract a typical plethysmogram heart-rate waveform while the sensor is subjected to a range of physical disturbances."} {"_id": "d5083647572808fcea765d4721c59d3a42e3980c", "title": "A learning environment for augmented reality mobile learning", "text": "There are many tools that enable the development of augmented reality (AR) activities; some of them can even create and generate an AR experience where the incorporation of 3D objects is simple. However, AR tools used in education differ from general tools limited to the reproduction of virtual content. The purpose of this paper is to present a learning environment based on augmented reality, which can be used both by teachers to develop quality AR educational resources and by students to acquire knowledge in an area.
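The ratiometric step in the photoplethysmography abstract above can be demonstrated numerically: an artifact that scales both the red and infrared channels multiplicatively cancels in their ratio, while the pulsatile component survives because the two wavelengths absorb differently. The signal and artifact models below are illustrative assumptions.

```python
# Numeric demo: multiplicative motion artifact cancels in the red/IR ratio.
import numpy as np

t = np.linspace(0, 10, 2000)
pulse = 1 + 0.02 * np.sin(2 * np.pi * 1.2 * t)    # ~72 bpm pulsatile wave
motion = 1 + 0.3 * np.sin(2 * np.pi * 0.25 * t)   # shared motion artifact
red = 0.8 * pulse ** 0.6 * motion                  # both channels see the same
ir = 1.0 * pulse * motion                          # multiplicative disturbance

ratio = red / ir                                   # motion term divides out
print("relative variation in IR   :", float(np.std(ir / ir.mean())))
print("relative variation in ratio:", float(np.std(ratio / ratio.mean())))
```

The residual variation in the ratio comes only from the pulsatile term (the two channels have different absorption exponents here), which is the component the sensor actually wants to measure.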
Common problems teachers have in applying AR have been taken into account by producing an authoring tool for education which includes the following characteristics: (1) the ability to incorporate diverse multimedia resources such as video, sound, images and 3D objects in an easy manner, (2) the ability to incorporate tutored descriptions into the elements being displayed (thus, the student is provided with additional information, a description or a narrative about the resource, while the content is adapted and personalized), (3) the possibility for the teacher to incorporate multiple choice questions (MCQ) into the virtual resource (useful for instant feedback on understanding: the student learns which points of the topic matter most, and the teacher can assess whether the student distinguishes different concepts) and (4) a library of virtual content where all resources are available in a simple and transparent way for any user. In this study, ARLE is used to add AR technologies into notes or books created by the teacher, thereby supplementing both the theoretical and practical content without any programming skills needed on the designers' behalf. In addition to presenting the system architecture and examples of its educational use, a survey concerning the use of AR amongst teachers in Spain has been conducted."} {"_id": "0a802c57c83655a1f2221d135c7bd6a1cbdd06fe", "title": "Qualitative Simulation", "text": "Qualitative simulation is a key inference process in qualitative causal reasoning. However, the precise meaning of the different proposals and their relation with differential equations is often unclear. In this paper, we present a precise definition of qualitative structure and behavior descriptions as abstractions of differential equations and continuously differentiable functions. We present a new algorithm for qualitative simulation that generalizes the best features of existing algorithms, and allows direct comparisons among alternate approaches. Starting with a set of constraints abstracted from a differential equation, we prove that the QSIM algorithm is guaranteed to produce a qualitative behavior corresponding to any solution to the original equation. We also show that any qualitative simulation algorithm will sometimes produce spurious qualitative behaviors: ones which do not correspond to any mechanism satisfying the given constraints. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions."} {"_id": "4491bc44446ed65d19bfd41320ecf2aed39d190b", "title": "Anticipatory affect: neural correlates and consequences for choice.", "text": "'Anticipatory affect' refers to emotional states that people experience while anticipating significant outcomes. Historically, technical limitations have made it difficult to determine whether anticipatory affect influences subsequent choice. Recent advances in the spatio-temporal resolution of functional magnetic resonance imaging, however, now allow researchers to visualize changes in neural activity seconds before choice occurs. We review evidence that activation in specific brain circuits changes during anticipation of monetary incentives, that this activation correlates with affective experience and that activity in these circuits may influence subsequent choice.
Specifically, an activation likelihood estimate meta-analysis of cued response studies indicates that nucleus accumbens (NAcc) activation increases during gain anticipation relative to loss anticipation, while anterior insula activation increases during both loss and gain anticipation. Additionally, anticipatory NAcc activation correlates with self-reported positive arousal, whereas anterior insula activation correlates with both self-reported negative and positive arousal. Finally, NAcc activation precedes the purchase of desirable products and choice of high-risk gambles, whereas anterior insula activation precedes the rejection of overpriced products and choice of low-risk gambles. Together, these findings support a neurally plausible framework for understanding how anticipatory affect can influence choice."} {"_id": "9da1d06e9afe37b3692a102022f561e2b6b25eaf", "title": "Ernest: Efficient Performance Prediction for Large-Scale Advanced Analytics", "text": "Recent workload trends indicate rapid growth in the deployment of machine learning, genomics and scientific workloads on cloud computing infrastructure. However, efficiently running these applications on shared infrastructure is challenging, and we find that choosing the right hardware configuration can significantly improve performance and cost. The key to addressing the above challenge is the ability to predict the performance of applications under various resource configurations so that we can automatically choose the optimal one. Our insight is that a number of jobs have predictable structure in terms of computation and communication. Thus we can build performance models based on the behavior of the job on small samples of data and then predict its performance on larger datasets and cluster sizes. To minimize the time and resources spent in building a model, we use optimal experiment design, a statistical technique that allows us to collect as few training points as required. We have built Ernest, a performance prediction framework for large-scale analytics, and our evaluation on Amazon EC2 using several workloads shows that our prediction error is low while having a training overhead of less than 5% for long-running jobs."} {"_id": "914cfd6fb61d75608592164193f90776bef58a7e", "title": "Bootstrapping the Linked Data Web", "text": "Most knowledge sources on the Data Web were extracted from structured or semi-structured data. Thus, they encompass solely a small fraction of the information available on the document-oriented Web. In this paper, we present BOA, an iterative bootstrapping strategy for extracting RDF from unstructured data. The idea behind BOA is to use the Data Web as background knowledge for the extraction of natural language patterns that represent predicates found on the Data Web. These patterns are used to extract instance knowledge from natural language text. This knowledge is finally fed back into the Data Web, thereby closing the loop. We evaluate our approach on two data sets using DBpedia as background knowledge. Our results show that we can extract several thousand new facts in one iteration with very high accuracy.
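The Ernest abstract above fits a small interpretable model to timings from sample runs; a minimal sketch follows. The feature set (serial, parallel, log-tree and linear communication terms) is one commonly associated with Ernest-style models, and the run configurations and timings are illustrative assumptions.

```python
# Fit an Ernest-style performance model from a few small sample runs.
import numpy as np

def features(scale, machines):
    return np.array([1.0,                  # fixed serial overhead
                     scale / machines,     # parallelizable computation
                     np.log2(machines),    # tree-aggregation communication
                     machines])            # all-to-one communication

runs = [(0.125, 2), (0.125, 4), (0.25, 4), (0.25, 8), (0.5, 8)]
times = np.array([3.1, 1.9, 3.4, 2.2, 3.8])      # measured sample timings
A = np.stack([features(s, m) for s, m in runs])
coef, *_ = np.linalg.lstsq(A, times, rcond=None) # least-squares fit

print("predicted time at full scale on 64 machines:",
      round(float(features(1.0, 64) @ coef), 2))
```

Optimal experiment design, as the abstract notes, would choose which (scale, machines) points to measure so that the fit above needs as few runs as possible.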
Moreover, we provide the first repository of natural language representations of predicates found on the Data Web."} {"_id": "befb418f58475b7ebafd1826ba2e60d354eecc80", "title": "Combined Spectral and Spatial Modeling of Corn Yield Based on Aerial Images and Crop Surface Models Acquired with an Unmanned Aircraft System", "text": "Precision Farming (PF) management strategies are commonly based on estimations of within-field yield potential, often derived from remotely-sensed products, e.g., Vegetation Index (VI) maps. These well-established means, however, lack important information, like crop height. Combinations of VI-maps and detailed 3D Crop Surface Models (CSMs) enable advanced methods for crop yield prediction. This work utilizes an Unmanned Aircraft System (UAS) to capture standard RGB imagery datasets for corn grain yield prediction at three early- to mid-season growth stages. The imagery is processed into simple VI-orthoimages for crop/non-crop classification and 3D CSMs for crop height determination at different spatial resolutions. Three linear regression models are tested on their prediction ability using site-specific (i) unclassified mean heights, (ii) crop-classified mean heights and (iii) a combination of crop-classified mean heights with corresponding crop coverages. The models show coefficients of determination R\u00b2 of up to 0.74, whereas model (iii) performs best with imagery captured at the end of stem elongation and intermediate spatial resolution (0.04 m\u00b7px\u22121). Following these results, combined spectral and spatial modeling, based on aerial images and CSMs, proves to be a suitable method for mid-season corn yield prediction."} {"_id": "dc6e656ebdaf65c07f09ba93e6b1308f564978d5", "title": "You are what you eat? Vegetarianism, health and identity.", "text": "This paper examines the views of 'health vegetarians' through a qualitative study of an online vegetarian message board. The researcher participated in discussions on the board, gathered responses to questions from 33 participants, and conducted follow-up e-mail interviews with 18 of these participants. Respondents were predominantly from the United States, Canada and the UK. Seventy per cent were female, and ages ranged from 14 to 53 years, with a median of 26 years. These data are interrogated within a theoretical framework that asks, 'what can a vegetarian body do?' and explores the physical, psychic, social and conceptual relations of participants. This provides insights into the identities of participants, and how diet and identity interact. It is concluded that vegetarianism is both a diet and a bodily practice with consequences for identity formation and stabilisation."} {"_id": "6b85beb00b43e6ea6203a0afa66f15f3cefb48c6", "title": "Digital Material Fabrication Using Mask-Image-Projection-based Stereolithography", "text": "Purpose \u2013 The purpose of this paper is to present a mask-image-projection-based Stereolithography process that can combine two base materials with various concentrations and structures to produce a solid object with desired material characteristics. Stereolithography is an additive manufacturing process in which liquid photopolymer resin is cross-linked and converted to solid. The fabrication of digital material requires frequent resin changes during the building process. The process presented in this paper attempts to address the related challenges in achieving such fabrication capability.
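Model (iii) in the corn-yield abstract above is an ordinary least-squares regression on crop-classified mean height combined with crop coverage; a minimal sketch with synthetic plot data (an illustrative assumption) follows.

```python
# OLS sketch: yield ~ crop-classified mean height combined with coverage.
import numpy as np

rng = np.random.default_rng(0)
height = rng.uniform(0.5, 2.5, 40)           # crop-classified mean height (m)
coverage = rng.uniform(0.4, 1.0, 40)         # crop coverage fraction
yield_t = 2.0 + 3.5 * height * coverage + rng.normal(0, 0.5, 40)

X = np.column_stack([np.ones(40), height * coverage])
beta, *_ = np.linalg.lstsq(X, yield_t, rcond=None)
pred = X @ beta
ss_res = ((yield_t - pred) ** 2).sum()
ss_tot = ((yield_t - yield_t.mean()) ** 2).sum()
print("coefficient of determination R^2 =", round(1 - ss_res / ss_tot, 3))
```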
Design/methodology/approach \u2013 A two-channel system design is presented for the multi-material mask-image-projection-based Stereolithography process. In such a design, a coated thick film and linear motions in two axes are used to reduce the separation force of a cured layer. The material cleaning approach to thoroughly remove resin residue on built surfaces is presented for the developed process. Based on a developed testbed, experimental studies were conducted to verify the effectiveness of the presented process on digital material fabrication. Findings \u2013 The proposed two-channel system can reduce the separation force of a cured layer by an order of magnitude in the bottom-up projection system. The developed two-stage cleaning approach can effectively remove resin residue on built surfaces. Several multi-material designs have been fabricated to highlight the capability of the developed multi-material mask-image-projection-based Stereolithography process. Research limitations/implications \u2013 A proof-of-concept testbed has been developed. Its building speed and accuracy can be further improved. Our tests were limited to the same type of liquid resins. In addition, the removal of trapped air is a challenge in the presented process. Originality/value \u2013 This paper presents a novel and pioneering approach towards digital material fabrication based on the Stereolithography process. This research contributes to additive manufacturing development by significantly expanding the selection of base materials in fabricating solid objects with desired material characteristics."} {"_id": "9425c683ce13c482201ab23dab29bc34e7046229", "title": "Calibrating Large-area Mask Projection Stereolithography for Its Accuracy and Resolution Improvements", "text": "Solid freeform fabrication (SFF) processes based on mask image projection such as digital micro-mirror devices (DMD) have the potential to be fast and inexpensive. More and more research and commercial systems have been developed based on such digital devices. However, a digital light processing (DLP) projector based on DMD has limited resolution and certain image blurring. In order to use a DLP projector in large-area mask projection stereolithography, it is critical to plan mask images in order to achieve high accuracy and resolution. Based on our previous work on optimized pixel blending, we present a calibration method for capturing the non-uniformity of a projection image by a low-cost off-the-shelf DLP projector. Our method is based on two calibration systems, a geometric calibration system that can calibrate the position, shape, size, and orientation of a pixel and an energy calibration system that can calibrate the light intensity of a pixel. Based on both results, the light intensity at various grayscale levels can be approximated for each pixel. Developing a library of such approximation functions is critical for the optimized pixel blending to generate a better mask image plan. Experimental results verify our calibration results."} {"_id": "58a64cc6a1dd8269ab19b9de271e202ab3e6de92", "title": "Projection micro-stereolithography using digital micro-mirror dynamic mask", "text": "We present in this paper the development of a high-resolution projection micro-stereolithography (P\u00b5SL) process by using the Digital Micromirror Device (DMD\u2122, Texas Instruments) as a dynamic mask.
This unique technology provides a parallel fabrication of complex three-dimensional (3D) microstructures used for micro electro-mechanical systems (MEMS). Based on the understanding of underlying mechanisms, a process model has been developed with all critical parameters obtained from experimental measurement. By coupling the experimental measurement and the process model, the photon-induced curing behavior of the resin has been quantitatively studied."} {"_id": "9e0c729e1596cd3567945db84e440fbfdeb75d9a", "title": "A Layerless Additive Manufacturing Process based on CNC Accumulation", "text": "Most current additive manufacturing processes are layer-based, that is, building a physical model layer by layer. By converting 3-dimensional geometry into 2-dimensional contours, the layer-based approach can dramatically simplify the process planning steps. However, there are also drawbacks associated with the layer-based approach, such as inconsistent material properties between various directions. In a recent NSF workshop on additive manufacturing, it was suggested to investigate alternative non-layer-based approaches. In this paper, we present an additive manufacturing process without planar layers. In the developed testbed, an additive tool based on a fiber optic cable and a UV-LED has been developed. By merging such tools inside a liquid resin tank, we demonstrate its capability of building various 2D and 3D structures. The technical challenges related to the development of such a process are discussed. Some potential applications including part repairing and building around inserts have also been demonstrated."} {"_id": "0ab4c2b3a97de19f37df76662029ae5886e4eb22", "title": "Single\u2010cell sequencing maps gene expression to mutational phylogenies in PDGF\u2010 and EGF\u2010driven gliomas", "text": "Glioblastoma multiforme (GBM) is the most common and aggressive type of primary brain tumor. Epidermal growth factor (EGF) and platelet-derived growth factor (PDGF) receptors are frequently amplified and/or possess gain-of-function mutations in GBM. However, clinical trials of tyrosine-kinase inhibitors have shown disappointing efficacy, in part due to intra-tumor heterogeneity. To assess the effect of clonal heterogeneity on gene expression, we derived an approach to map single-cell expression profiles to sequentially acquired mutations identified from exome sequencing. Using 288 single cells, we constructed high-resolution phylogenies of EGF-driven and PDGF-driven GBMs, modeling transcriptional kinetics during tumor evolution. Descending the phylogenetic tree of a PDGF-driven tumor corresponded to a progressive induction of an oligodendrocyte progenitor-like cell type, expressing pro-angiogenic factors. In contrast, phylogenetic analysis of an EGFR-amplified tumor showed an up-regulation of pro-invasive genes. An in-frame deletion in a specific dimerization domain of the PDGF receptor correlates with an up-regulation of growth pathways in a proneural GBM and enhances proliferation when ectopically expressed in glioma cell lines. In-frame deletions in this domain are frequent in public GBM data."} {"_id": "af8f8e292ab7b3a80cca31a9e12ef01d2608a9ca", "title": "A very-high output impedance current mirror for very-low voltage biomedical analog circuits", "text": "In this paper, we present the design of a new very-high output impedance CMOS current mirror with enhanced output voltage compliance.
The proposed current mirror uses MOS current dividers to sample the output current, and a feedback action is used to force it to be equal to the input current, yielding very high impedance with a very large output voltage range. The proposed implementation yields an increase of the output impedance by a factor of about gm·ro compared with that of the super-Wilson current mirror, thus offering a potential solution to mitigate the effect of the low output impedance of ultra-deep submicron CMOS transistors used in sub-1-V current mirrors and current sources. An NMOS version of the proposed current mirror circuit was implemented using the STMicroelectronics 1-V 90-nm CMOS process and simulated using Spectre to validate its performance. The output current is mirrored with a transfer error lower than 1% down to an output voltage as low as 80 mV for an input current of 5 µA, and 111 mV when the input current is increased to 50 µA."} {"_id": "2a9d09d8e2390c92cdaa5c8b98d6dd4cb394f638", "title": "Dune: Safe User-level Access to Privileged CPU Features", "text": "Dune is a system that provides applications with direct but safe access to hardware features such as ring protection, page tables, and tagged TLBs, while preserving the existing OS interfaces for processes. Dune uses the virtualization hardware in modern processors to provide a process, rather than a machine, abstraction. It consists of a small kernel module that initializes virtualization hardware and mediates interactions with the kernel, and a user-level library that helps applications manage privileged hardware features. We present the implementation of Dune for 64-bit x86 Linux. We use Dune to implement three user-level applications that can benefit from access to privileged hardware: a sandbox for untrusted code, a privilege separation facility, and a garbage collector. The use of Dune greatly simplifies the implementation of these applications and provides significant performance advantages."} {"_id": "015d1aabae32efe2c30dfa32d8ce71d01bcac9c5", "title": "Applications of approximation algorithms to cooperative games", "text": "The Internet, which is intrinsically a common playground for a large number of players with varying degrees of collaborative and selfish motives, naturally gives rise to numerous new game theoretic issues. Computational problems underlying solutions to these issues, achieving desirable economic criteria, often turn out to be NP-hard. It is therefore natural to apply notions from the area of approximation algorithms to these problems. The connection is made more meaningful by the fact that the two areas of game theory and approximation algorithms share common methodology: both heavily use machinery from the theory of linear programming. Various aspects of this connection have been explored recently by researchers [8, 10, 15, 20, 21, 26, 27, 29]. In this paper we will consider the problem of sharing the cost of a jointly utilized facility in a \"fair\" manner. Consider a service providing company whose set of possible customers, also called users, is U. For each set S ⊆ U, C(S) denotes the cost incurred by the company to serve the users in S. The function C is known as the cost function. For concreteness, assume that the company broadcasts news of common interest, such as financial news, on the net. Each user, i, has a utility, u0i, for receiving the news. This utility u0i is known only to user i. User i enjoys a benefit of u0i − xi if she gets the news at the price xi. If she does not get the news then her benefit is 0.
Each user is assumed to be selfish, and hence in order to maximize benefit, may misreport her utility as some other number, say ui. For the rest of the discussion, the utility of user i will mean the number ui. A cost sharing mechanism determines which users receive the broadcast and at what price. The mechanism is strategyproof if the dominant strategy of each user is to reveal the"} {"_id": "182b53b6823605f2a9b6fa6135227a303493b4c4", "title": "Automatic generation of destination maps", "text": "Destination maps are navigational aids designed to show anyone within a region how to reach a location (the destination). Hand-designed destination maps include only the most important roads in the region and are non-uniformly scaled to ensure that all of the important roads from the highways to the residential streets are visible. We present the first automated system for creating such destination maps based on the design principles used by mapmakers. Our system includes novel algorithms for selecting the important roads based on mental representations of road networks, and for laying out the roads based on a non-linear optimization procedure. The final layouts are labeled and rendered in a variety of styles ranging from informal to more formal map styles. The system has been used to generate over 57,000 destination maps by thousands of users. We report feedback from both a formal and informal user study, as well as provide quantitative measures of success."} {"_id": "08f4d8e7626e55b7c4ffe1fd12eb034bc8022a43", "title": "Aspect Extraction with Automated Prior Knowledge Learning", "text": "Aspect extraction is an important task in sentiment analysis. Topic modeling is a popular method for the task. However, unsupervised topic models often generate incoherent aspects. To address the issue, several knowledge-based models have been proposed to incorporate prior knowledge provided by the user to guide modeling. In this paper, we take a major step forward and show that in the big data era, without any user input, it is possible to learn prior knowledge automatically from a large amount of review data available on the Web. Such knowledge can then be used by a topic model to discover more coherent aspects. There are two key challenges: (1) learning quality knowledge from reviews of diverse domains, and (2) making the model fault-tolerant to handle possibly wrong knowledge. A novel approach is proposed to solve these problems. Experimental results using reviews from 36 domains show that the proposed approach achieves significant improvements over state-of-the-art baselines."} {"_id": "e340a3b93204c030a8b4db2c24a9ef2628cde140", "title": "Application Protocols and Wireless Communication for IoT: A Simulation Case Study Proposal", "text": "Current Internet of Things (IoT) solutions require support at different network layers, from higher-level applications to lower-level media-based support. This paper presents some of the main application requirements for IoT, characterizing architecture, Quality of Service (QoS) features, security mechanisms, discovery service resources and web integration options, and the protocols that can be used to provide them (e.g. CoAP, XMPP, DDS, MQTT-SN, AMQP). As examples of lower-level requirements and protocols, several wireless network characteristics (e.g. ZigBee, Z-Wave, BLE, LoRaWAN, SigFox, IEEE 802.11af, NB-IoT) are presented.
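The cost-sharing record above breaks off mid-definition, but its setup (reported utilities, a cost function C(S), a strategyproof mechanism choosing recipients and prices) is concrete enough to sketch. Below is a minimal Moulin-style equal-split mechanism in Python; the fixed broadcast cost and the equal-share rule are illustrative assumptions, not the paper's construction:

```python
def equal_split_mechanism(utilities, cost):
    """Moulin-style iterative mechanism: offer everyone an equal share
    of C(S); drop users whose reported utility is below their share;
    repeat until the remaining users all accept. `utilities` maps
    user -> reported u_i, `cost` maps a frozenset of users -> C(S)."""
    served = set(utilities)
    while served:
        share = cost(frozenset(served)) / len(served)
        drop = {i for i in served if utilities[i] < share}
        if not drop:                    # stable: everyone accepts the price
            return served, {i: share for i in served}
        served -= drop
    return set(), {}

# Toy broadcast cost: fixed cost 9 to serve any non-empty set of users.
users = {"a": 7.0, "b": 5.0, "c": 2.0}
print(equal_split_mechanism(users, lambda s: 9.0 if s else 0.0))
```

With these numbers, user c is priced out, and a and b each pay 4.5; under-reporting a utility above one's share never helps, which is the strategyproofness property the abstract names.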
The variety of possible application scenarios and the heterogeneity of enabling technologies, combined with a large number of sensors and devices, suggest the need for simulation and modeling tactics to describe how the previous requirements can be met. As a potential solution, the creation of simulation models and the usage of the OMNET++ simulation tool to enable meaningful IoT simulation are discussed. The analysis of the behavior of IoT applications is proposed for two use cases: Wireless Sensor Networks (WSN) for home and industrial automation, and Low Power Wide Area (LPWA) networks for smart meters, smart buildings, and smart cities."} {"_id": "405872b6c6c1a53c2ede41ccb7c9de6d4207d9be", "title": "Integrating Surface and Abstract Features for Robust Cross-Domain Chinese Word Segmentation", "text": "Current character-based approaches are not robust for cross-domain Chinese word segmentation. In this paper, we alleviate this problem by deriving a novel enhanced character-based generative model with a new abstract aggregate candidate feature, which indicates if the given candidate prefers the corresponding position-tag of the longest dictionary-matching word. Since the distribution of the proposed feature is invariant across domains, our model thus possesses better generalization ability. Open tests on CIPS-SIGHAN-2010 show that the enhanced generative model achieves robust cross-domain performance for various OOV coverage rates and obtains the best performance on three out of four domains. The enhanced generative model is then further integrated with a discriminative model which also utilizes dictionary information. This integrated model is shown to be either superior or comparable to all other models reported in the literature on every domain of this task."} {"_id": "724e3c4e98bc9ac3281d5aef7d53ecfd4233c3fc", "title": "Neural Networks and Graph Algorithms with Next-Generation Processors", "text": "The use of graphical processors for distributed computation revolutionized the field of high performance scientific computing. As the Moore's Law era of computing draws to a close, the development of non-Von Neumann systems, neuromorphic processing units and quantum annealers, is again redefining new territory for computational methods. While these technologies are still in their nascent stages, we discuss their potential to advance computing in two domains: machine learning, and solving constraint satisfaction problems. Each of these processors utilizes a fundamentally different theoretical model of computation. This raises questions about how to best use them in the design and implementation of applications. While many processors are being developed with a specific domain target, the ubiquity of spin-glass models and neural networks provides an avenue for multi-functional applications. This hints at the future infrastructure needed to integrate many next-generation processing units into conventional high-performance computing systems."} {"_id": "ce484d7a889fb3b0fa455086af0bfa453c5ec3db", "title": "Identification of extract method refactoring opportunities for the decomposition of methods", "text": "The extraction of a code fragment into a separate method is one of the most widely performed refactoring activities, since it allows the decomposition of large and complex methods and can be used in combination with other code transformations for fixing a variety of design problems.
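The enhanced character-based model in the segmentation abstract above hinges on one feature: whether a candidate character's position tag agrees with the longest dictionary word matched there. A minimal sketch of such a feature (the lexicon, the B/M/E/S tag set, and the matching window are illustrative assumptions):

```python
def dict_position_tag(sentence, idx, dictionary, max_len=6):
    """Return the position tag (B/M/E/S) that the longest dictionary
    word covering character `idx` would assign to it; 'S' if only a
    single-character match (or none) exists."""
    best = None                              # (length, start) of longest match
    for start in range(max(0, idx - max_len + 1), idx + 1):
        for end in range(min(len(sentence), start + max_len), idx, -1):
            word = sentence[start:end]
            if word in dictionary and (best is None or end - start > best[0]):
                best = (end - start, start)
    if best is None or best[0] == 1:
        return "S"                           # single character word
    length, start = best
    offset = idx - start
    return "B" if offset == 0 else ("E" if offset == length - 1 else "M")

lexicon = {"北京", "北京大学", "大学"}
print([dict_position_tag("北京大学", i, lexicon) for i in range(4)])
# ['B', 'M', 'M', 'E'] -- the 4-character word wins over its sub-words
```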
Despite the significance of Extract Method refactoring towards code quality improvement, there is limited support for the identification of code fragments with distinct functionality that could be extracted into new methods. The goal of our approach is to automatically identify Extract Method refactoring opportunities which are related with the complete computation of a given variable (complete computation slice) and the statements affecting the state of a given object (object state slice). Moreover, a set of rules regarding the preservation of existing module decomposition dependencies is proposed that excludes refactoring opportunities corresponding to slices whose extraction could possibly cause a change in program behavior. The proposed approach has been evaluated regarding its ability to capture slices of code implementing a distinct functionality, its ability to resolve existing design flaws, its impact on the cohesion of the decomposed and extracted methods, and its ability to preserve program behavior. Moreover, precision and recall have been computed employing the refactoring opportunities found by in"} {"_id": "e7922d53216a4c234f601049ec3326a6ea5d5c7c", "title": "WHY PEOPLE STAY: USING JOB EMBEDDEDNESS TO PREDICT VOLUNTARY TURNOVER", "text": "A new construct, entitled job embeddedness, is introduced. Assessing factors from on and off the job, it includes an individual's (a) links to other people, teams and groups, (b) perception of their fit with their job, organization and community and (c) what they say they would have to sacrifice if they left their job. A measure of job embeddedness is developed with two samples. The results show that job embeddedness predicts the key outcomes of both intent to leave and voluntary turnover, and explains significant incremental variance over and above job satisfaction, organizational commitment, job alternatives and job search. Implications for theory and practice are discussed."} {"_id": "6c93f95929fa900bb2eafbd915d417199bc0a9dc", "title": "Real-Time Biologically Inspired Action Recognition from Key Poses Using a Neuromorphic Architecture", "text": "Intelligent agents, such as robots, have to serve a multitude of autonomous functions. Examples are, e.g., collision avoidance, navigation and route planning, active sensing of its environment, or the interaction and non-verbal communication with people in the extended reach space. Here, we focus on the recognition of the action of a human agent based on a biologically inspired visual architecture for analyzing articulated movements. The proposed processing architecture builds upon coarsely segregated streams of sensory processing along different pathways which separately process form and motion information (Layher et al., 2014). Action recognition is performed in an event-based scheme by identifying representations of characteristic pose configurations (key poses) in an image sequence. In line with perceptual studies, key poses are selected unsupervised utilizing a feature-driven criterion which combines extrema in the motion energy with the horizontal and the vertical extendedness of a body shape. Per-class representations of key pose frames are learned using a deep convolutional neural network consisting of 15 convolutional layers. The network is trained using the energy-efficient deep neuromorphic networks (Eedn) framework (Esser et al., 2016), which realizes the mapping of the trained synaptic weights onto the IBM Neurosynaptic System platform (Merolla et al., 2014).
After the mapping, the trained network achieves real-time capabilities for processing input streams, classifying input images at about 1,000 frames per second while the computational stages consume only about 70 mW of energy (without spike transduction). Particularly regarding mobile robotic systems, a low energy profile might be crucial in a variety of application scenarios. Cross-validation results are reported for two different datasets and compared to state-of-the-art action recognition approaches. The results demonstrate that (I) the presented approach is on par with other key-pose-based methods described in the literature, which select key pose frames by optimizing classification accuracy, (II) compared to training on the full set of frames, representations trained on key pose frames result in a higher confidence in class assignments, and (III) key pose representations show promising generalization capabilities in a cross-dataset evaluation."} {"_id": "22b65eebec33b89ec912054b4c4ec3d963960ab0", "title": "Hopelessness Depression: A Theory-Based Subtype of Depression", "text": "We present a revision of the 1978 reformulated theory of helplessness and depression and call it the hopelessness theory of depression. Although the 1978 reformulation has generated a vast amount of empirical work on depression over the past 10 years and recently has been evaluated as a model of depression, we do not think that it presents a clearly articulated theory of depression. We build on the skeletal logic of the 1978 statement and (a) propose a hypothesized subtype of depression, hopelessness depression, (b) introduce hopelessness as a proximal sufficient cause of the symptoms of hopelessness depression, (c) deemphasize causal attributions because inferred negative consequences and inferred negative characteristics about the self are also postulated to contribute to the formation of hopelessness and, in turn, the symptoms of hopelessness depression, and (d) clarify the diathesis-stress and causal mediation components implied, but not explicitly articulated, in the 1978 statement. We report promising findings for the hopelessness theory and outline the aspects that still need to be tested."} {"_id": "9b8d134417c5db4a6c3dce5784a0407c2d97495d", "title": "SSS: a hybrid architecture applied to robot navigation", "text": "This paper describes a new three-layer architecture, SSS, for robot control. It combines a servo-control layer, a \"subsumption\" layer, and a symbolic layer in a way that allows the advantages of each technique to be fully exploited. The key to this synergy is the interface between the individual subsystems. We describe how to build situation recognizers that bridge the gap between the servo and subsumption layers, and event detectors that link the subsumption and symbolic layers. The development of such a combined system is illustrated by a fully implemented indoor navigation example. The resulting real robot, called \"TJ\", is able to automatically map office building environments and smoothly navigate through them at the rapid speed of 2.6 feet per second."} {"_id": "8de1c724a42d204c0050fe4c4b4e81a675d7f57c", "title": "Deep Face Recognition: A Survey", "text": "Face recognition made tremendous leaps in the last five years with a myriad of systems proposing novel techniques substantially backed by deep convolutional neural networks (DCNN).
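The key-pose criterion described in the action-recognition record above (extrema of motion energy combined with the horizontal and vertical extendedness of the body shape) can be prototyped in a few lines of numpy; the way the two cues are combined below is an assumption, since the abstract does not give the exact weighting:

```python
import numpy as np

def select_key_poses(masks):
    """masks: (T, H, W) binary silhouettes. Motion energy = summed
    frame difference; extendedness = bounding-box width + height.
    Candidates sit at local extrema of motion energy and are ranked
    by extendedness (illustrative combination)."""
    masks = masks.astype(float)
    energy = np.abs(np.diff(masks, axis=0)).sum(axis=(1, 2))   # (T-1,)
    extent = []
    for m in masks[1:]:
        ys, xs = np.nonzero(m)
        extent.append((xs.ptp() + ys.ptp()) if xs.size else 0.0)
    extent = np.asarray(extent)
    interior = np.arange(1, len(energy) - 1)
    is_ext = ((energy[interior] >= energy[interior - 1]) &
              (energy[interior] >= energy[interior + 1])) | \
             ((energy[interior] <= energy[interior - 1]) &
              (energy[interior] <= energy[interior + 1]))
    cands = interior[is_ext]
    return cands[np.argsort(-extent[cands])]    # most extended first

rng = np.random.default_rng(0)
demo = (rng.random((10, 32, 32)) > 0.7).astype(np.uint8)
print(select_key_poses(demo)[:3])
```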
Although face recognition performance skyrocketed using deep learning on classic datasets like LFW, leading to the belief that this technique reached human performance, it still remains an open problem in unconstrained environments, as demonstrated by the newly released IJB datasets. This survey aims to summarize the main advances in deep face recognition and, more generally, in learning face representations for verification and identification. The survey provides a clear, structured presentation of the principal, state-of-the-art (SOTA) face recognition techniques appearing within the past five years in top computer vision venues. The survey is broken down into multiple parts that follow a standard face recognition pipeline: (a) how SOTA systems are trained and which public data sets they have used; (b) face preprocessing (detection, alignment, etc.); (c) architecture and loss functions used for transfer learning; (d) face recognition for verification and identification. The survey concludes with an overview of the SOTA results at a glance along with some open issues currently overlooked by the community."} {"_id": "3371575d11fbf83577b5adf2a0994c1306aebd09", "title": "Beyond the right hemisphere: brain mechanisms mediating vocal emotional processing", "text": "Vocal perception is particularly important for understanding a speaker's emotional state and intentions because, unlike facial perception, it is relatively independent of speaker distance and viewing conditions. The idea, derived from brain lesion studies, that vocal emotional comprehension is a special domain of the right hemisphere has failed to receive consistent support from neuroimaging. This conflict can be reconciled if vocal emotional comprehension is viewed as a multi-step process with individual neural representations. This view reveals a processing chain that proceeds from the ventral auditory pathway to brain structures implicated in cognition and emotion. Thus, vocal emotional comprehension appears to be mediated by bilateral mechanisms anchored within sensory, cognitive and emotional processing systems."} {"_id": "289bdc364e2b8b03d0e52609dc6665a5f9d056c4", "title": "Generating English from Abstract Meaning Representations", "text": "We present a method for generating English sentences from Abstract Meaning Representation (AMR) graphs, exploiting a parallel corpus of AMRs and English sentences. We treat AMR-to-English generation as phrase-based machine translation (PBMT). We introduce a method that learns to linearize tokens of AMR graphs into an English-like order. Our linearization reduces the amount of distortion in PBMT and increases generation quality. We report a Bleu score of 26.8 on the standard AMR/English test set."} {"_id": "5a778a2a32f35a96f4dfb0f22d1415eff321e7ad", "title": "A Novel Distributed SDN-Secured Architecture for the IoT", "text": "Due to their rapid evolution, mobile devices demand more dynamic and flexible networking services. A major challenge of future mobile networks is the increased mobile traffic. With recent upcoming technologies of network programmability like Software-Defined Networking (SDN), these may be integrated to create a new communication platform for the Internet of Things (IoT). In this work, we present how to determine the effectiveness of an approach to building a new secured network architecture based on SDN and clusters. Our proposed scheme is a starting point for some experiments providing perspective over SDN deployment in a cluster environment.
With this aim in mind, we suggest a routing protocol that manages routing tasks over Cluster-SDN. By using network virtualization and OpenFlow technologies to generate virtual nodes, we simulate a prototype system controlled by SDN. With our testbed, we are able to manage 500 things. We can analyze every OpenFlow message, and we have discovered that, with a particular flow, the things can exchange information contrary to the usual routing principle."} {"_id": "eb60bcac9bf7668cc4318995fc8b9b7ada46c090", "title": "Logistic Regression and Collaborative Filtering for Sponsored Search Term Recommendation", "text": "Sponsored search advertising is largely based on bidding on individual terms. The richness of natural languages permits web searchers to express their information needs in myriad ways. Advertisers have difficulty discovering all the terms that are relevant to their products or services. We examine the performance of logistic regression and collaborative filtering models on two different data sources to predict terms relevant to a set of seed terms describing an advertiser's product or service."} {"_id": "a54eae23ac844c3833b872edd40a571f2fc1b2f3", "title": "Removal of High-Density Salt-and-Pepper Noise in Images With an Iterative Adaptive Fuzzy Filter Using Alpha-Trimmed Mean", "text": "Suppression of impulse noise in images is an important problem in image processing. In this paper, we propose a novel adaptive iterative fuzzy filter for denoising images corrupted by impulse noise. It operates in two stages: detection of noisy pixels with an adaptive fuzzy detector, followed by denoising using a weighted mean filter on the “good” pixels in the filter window. Experimental results demonstrate the algorithm to be superior to state-of-the-art filters. The filter is also shown to be robust to very high levels of noise, retrieving meaningful detail at noise levels as high as 97%."} {"_id": "f77f1d1d274042105cceebf17a57e680dfe1ce03", "title": "Ant colony optimization for routing and load-balancing: survey and new directions", "text": "Although an ant is a simple creature, collectively a colony of ants performs useful tasks such as finding the shortest path to a food source and sharing this information with other ants by depositing pheromone. In the field of ant colony optimization (ACO), models of the collective intelligence of ants are transformed into useful optimization techniques that find applications in computer networking. In this survey, the problem-solving paradigm of ACO is explicated and compared to traditional routing algorithms along the issues of routing information, routing overhead and adaptivity. The contributions of this survey include 1) providing a comparison and critique of the state-of-the-art approaches for mitigating stagnation (a major problem in many ACO algorithms), 2) surveying and comparing three major research efforts in applying ACO to routing and load-balancing, and 3) discussing new directions and identifying open problems. The approaches for mitigating stagnation discussed include: evaporation, aging, pheromone smoothing and limiting, privileged pheromone laying and pheromone-heuristic control. The survey on ACO in routing/load-balancing includes comparison and critique of ant-based control and its ramifications, AntNet and its extensions, as well as ASGA and SynthECA.
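Two of the stagnation counter-measures the ACO survey above catalogues, evaporation and pheromone limiting, reduce to a one-line trail update. A minimal sketch (the clamping bounds follow the Max-Min Ant System convention, an assumption here):

```python
import numpy as np

def update_pheromone(tau, deposits, rho=0.1, tau_min=0.01, tau_max=5.0):
    """Evaporation: tau <- (1 - rho) * tau + deposits; then pheromone
    limiting clamps trails into [tau_min, tau_max] so no edge ever
    fully dominates or vanishes -- the stagnation fix."""
    tau = (1.0 - rho) * tau + deposits
    return np.clip(tau, tau_min, tau_max)

tau = np.full((4, 4), 1.0)             # pheromone on a 4-node graph
best_tour = [0, 2, 1, 3]
deposits = np.zeros_like(tau)
for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
    deposits[a, b] = deposits[b, a] = 1.0 / len(best_tour)
print(update_pheromone(tau, deposits))
```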
Discussions on new directions include ongoing work of the authors in applying multiple ant colony optimization to load-balancing."} {"_id": "9931b8ea6594b97c7dfca93936a2d95a38167046", "title": "It Makes Sense: A Wide-Coverage Word Sense Disambiguation System for Free Text", "text": "Word sense disambiguation (WSD) systems based on supervised learning achieved the best performance in SensEval and SemEval workshops. However, there are few publicly available open source WSD systems. This limits the use of WSD in other applications, especially for researchers whose research interests are not in WSD. In this paper, we present IMS, a supervised English all-words WSD system. The flexible framework of IMS allows users to integrate different preprocessing tools, additional features, and different classifiers. By default, we use linear support vector machines as the classifier with multiple knowledge-based features. In our implementation, IMS achieves state-of-the-art results on several SensEval and SemEval tasks."} {"_id": "b4f943584acf7694cc369a42a75ba804f8493ed3", "title": "NASARI: a Novel Approach to a Semantically-Aware Representation of Items", "text": "The semantic representation of individual word senses and concepts is of fundamental importance to several applications in Natural Language Processing. To date, concept modeling techniques have in the main based their representation either on lexicographic resources, such as WordNet, or on encyclopedic resources, such as Wikipedia. We propose a vector representation technique that combines the complementary knowledge of both these types of resource. Thanks to its use of explicit semantics combined with a novel cluster-based dimensionality reduction and an effective weighting scheme, our representation attains state-of-the-art performance on multiple datasets in two standard benchmarks: word similarity and sense clustering. We are releasing our vector representations at http://lcl.uniroma1.it/nasari/."} {"_id": "b54315a22b825e9ca1b59aa1d3fac98ea4925941", "title": "De-Conflated Semantic Representations", "text": "One major deficiency of most semantic representation techniques is that they usually model a word type as a single point in the semantic space, hence conflating all the meanings that the word can have. Addressing this issue by learning distinct representations for individual meanings of words has been the subject of several research studies in the past few years. However, the generated sense representations are either not linked to any sense inventory or are unreliable for infrequent word senses. We propose a technique that tackles these problems by de-conflating the representations of words based on the deep knowledge that can be derived from a semantic network. Our approach provides multiple advantages in comparison to the previous approaches, including its high coverage and the ability to generate accurate representations even for infrequent word senses. We carry out evaluations on six datasets across two semantic similarity tasks and report state-of-the-art results on most of them."} {"_id": "fb166f1e77428a492ea869a8b79df275dd9669c2", "title": "Neural Sequence Learning Models for Word Sense Disambiguation", "text": "Word Sense Disambiguation models exist in many flavors. Even though supervised ones tend to perform best in terms of accuracy, they often lose ground to more flexible knowledge-based solutions, which do not require training by a word expert for every disambiguation target.
To bridge this gap we adopt a different perspective and rely on sequence learning to frame the disambiguation problem: we propose and study in depth a series of end-to-end neural architectures directly tailored to the task, from bidirectional Long Short-Term Memory to encoder-decoder models. Our extensive evaluation over standard benchmarks and in multiple languages shows that sequence learning enables more versatile all-words models that consistently lead to state-of-the-art results, even against word experts with engineered features."} {"_id": "0aa9b26b407a36ed62d19c8c1c1c6a26d75991af", "title": "Your Exploit is Mine: Automatic Shellcode Transplant for Remote Exploits", "text": "Developing a remote exploit is not easy. It requires a comprehensive understanding of a vulnerability and delicate techniques to bypass defense mechanisms. As a result, attackers may prefer to reuse an existing exploit and make necessary changes over developing a new exploit from scratch. One such adaptation is the replacement of the original shellcode (i.e., the attacker-injected code that is executed as the final step of the exploit) in the original exploit with a replacement shellcode, resulting in a modified exploit that carries out the actions desired by the attacker as opposed to the original exploit author. We call this a shellcode transplant. Current automated shellcode placement methods are insufficient because they over-constrain the replacement shellcode, and so cannot be used to achieve shellcode transplant. For example, these systems consider the shellcode as an integrated memory chunk and require that the execution path of the modified exploit must be the same as the original one. To resolve these issues, we present ShellSwap, a system that uses symbolic tracing, with a combination of shellcode layout remediation and path kneading, to achieve shellcode transplant. We evaluated the ShellSwap system on a combination of 20 exploits and 5 pieces of shellcode that are independently developed and different from the original exploit. Among the 100 test cases, our system successfully generated 88% of the exploits."} {"_id": "8a1ff425abea99ca21bfdf9f6b7b7254e36e3242", "title": "Manufacturing Problem Solving in a Job Shop — Research in Layout Planning", "text": "For ensuring efficient operation of a job shop, it is important to minimize waste, which adds no value to the final product. For a job shop, minimizing movement is considered the highest priority for waste prevention. For this reason, the layout for a job shop should be designed in such a way as to ensure the lowest possible cost of production by reducing non-value-added activities, such as movement of work-in-process. An effective and efficient way of layout planning for a job shop is key to solving movement inefficiencies and facilitating communication and interaction between workers and supervisors. This involves relocation of equipment and machinery to streamline materials flow. The primary objective of relocation is to avoid flow conflicts, reduce process time, and increase efficiency of labor usage. Proximity of the most frequently used machines minimizes the movement cost significantly, which eventually minimizes the cost of production. This paper describes the research done in process flow improvements in a job shop manufacturing steel components. The literature focused mainly on mathematical modeling with assumptions that are not applicable for a typical small-scale job shop operation.
However, this was overcome by collecting material movement data over three months and analyzing the information using a From-To chart. By analyzing the chart, the actual loads between departments for the operation period were tabulated against the available plant space. From this information, the inter-departmental flow was shown by a model. This provides the basic layout pattern, which was then improved. A second step was to determine the cost of this layout by multiplying the material handling cost by the number of loads moved between each pair of departments. As a recommendation for solving the problem, two layout models have been developed for ensuring the lowest movement cost. Transportation is considered one of the seven wastes in lean manufacturing, and effective layout planning is considered a key to overcoming this kind of waste. It is stated, “Double handling and excessive movements are likely to cause damage and deterioration with the distance of communication between processes” [1]. Therefore, layout planning has a clear impact on the quality and quantity of the final products by reducing waste and improving efficiency."} {"_id": "11d063b26b99d63581bec6566a77f7751b4be2b3", "title": "High-power GaN MMIC PA Over 40–4000MHz", "text": "We report a high-performance GaN MMIC power amplifier operating from 40MHz to 4,000MHz. The MMIC achieved 80W pulsed (100µs pulse width and 10% duty cycle) output power (P5dB) with 54% efficiency at 40MHz, 50W with about 30% efficiency across most of the mid band, and gradually decreases to 30W with 22% efficiency at 4000MHz. Power gain is 25dB across the 40-4000MHz band. This ultra-wideband performance is achieved by both tailoring the device output impedance and using a unique wide-band circuit-matching topology. Detailed design techniques of both the device and the matching circuit will be presented."} {"_id": "5a6eec6f2b3bdb918b400214fb9c41645eb81e0a", "title": "Team assembly mechanisms determine collaboration network structure and team performance.", "text": "Agents in creative enterprises are embedded in networks that inspire, support, and evaluate their work. Here, we investigate how the mechanisms by which creative teams self-assemble determine the structure of these collaboration networks. We propose a model for the self-assembly of creative teams that has its basis in three parameters: team size, the fraction of newcomers in new productions, and the tendency of incumbents to repeat previous collaborations. The model suggests that the emergence of a large connected community of practitioners can be described as a phase transition. We find that team assembly mechanisms determine both the structure of the collaboration network and team performance for teams derived from both artistic and scientific fields."} {"_id": "1f6235cedd6b0023b0eec7e11c226e89b4515bd2", "title": "Internet of Things Business Models", "text": "While almost all businesses are aware of the potential gains that the Internet of Things (IoT) has to offer, they are unsure of how to approach it. This article proposes a business model that builds on Holler et al. (2014) [1]. The model consists of three dimensions: “Who, Where, and Why”. “Who” describes collaborating partners, which builds the “Value Network”. “Where” describes sources of value co-creation rooted in the layer model of digitized objects, and “Why” describes how partners benefit from collaborating within the value network.
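The layout-evaluation step the job-shop study describes, multiplying the From-To loads by handling cost and inter-departmental distance and summing, is directly computable; the loads and distances below are illustrative:

```python
import numpy as np

def layout_cost(loads, distances, unit_cost=1.0):
    """Total material-handling cost: sum over department pairs of
    (loads moved) x (distance between locations) x (cost per load-unit)."""
    return float(unit_cost * (loads * distances).sum())

# From-To chart: loads[i, j] = loads moved from dept i to dept j per period.
loads = np.array([[0, 40, 5],
                  [10, 0, 60],
                  [0, 25, 0]])
# Rectilinear distances for two candidate layouts of the three departments.
layout_a = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])
layout_b = np.array([[0, 2, 1], [2, 0, 1], [1, 1, 0]])
print(layout_cost(loads, layout_a), layout_cost(loads, layout_b))
# 145.0 vs 190.0 -> pick the cheaper arrangement
```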
With the intention of addressing “How”, the proposed framework integrates the IoT strategy category, tactics, and value chain elements. The framework is then validated through case studies of some successful players who are either Awardees of the IoT Award 2014 or the ICT Award 2015 of Hong Kong."} {"_id": "077492a77812a68c86b970557e97a452a6689427", "title": "Automatic 3D face reconstruction from single images or video", "text": "This paper presents a fully automated algorithm for reconstructing a textured 3D model of a face from a single photograph or a raw video stream. The algorithm is based on a combination of Support Vector Machines (SVMs) and a Morphable Model of 3D faces. After SVM face detection, individual facial features are detected using a novel regression- and classification-based approach, and probabilistically plausible configurations of features are selected to produce a list of candidates for several facial feature positions. In the next step, the configurations of feature points are evaluated using a novel criterion that is based on a Morphable Model and a combination of linear projections. To make the algorithm robust with respect to head orientation, this process is iterated while the estimate of pose is refined. Finally, the feature points initialize a model-fitting procedure of the Morphable Model. The result is a high resolution 3D surface model."} {"_id": "d77d179169dc0354a31208cfa461267469215045", "title": "Polymer Nanoparticles for Smart Drug Delivery", "text": "In recent decades, polymers have been widely used as biomaterials due to favorable properties such as good biocompatibility, easy design and preparation, a variety of structures and interesting bio-mimetic character. Especially in the field of smart drug delivery, polymers have played a significant role because they can deliver therapeutic agents directly into the intended site of action, with superior efficacy. The ideal requirements for designing a nano-particulate delivery system are effective control of particle size and surface character, and enhanced permeation, flexibility, solubility and release of therapeutically active agents, in order to attain targeted and specific activity at a predetermined rate and time. Smart drug delivery systems have been successfully enabled by advances in polymer science in the bio-nanotechnology field. Recently, these advances have been found in various medical applications for nano-scale structures in smart drug delivery. Smart drug delivery systems should possess some important features such as a pre-scheduled rate, self-control, targeting, a predetermined time and monitoring of the delivery. Such systems bring polymer nanoparticles to a better stage in a therapy regimen. They are drug carriers of natural, semi-synthetic, and synthetic polymeric nature in the nano-scale to micro-scale range. The polymeric particles are collectively named spheres and capsules. Most polymeric nanoparticles with surfactants offer stability for various forms of active drugs and have useful smart-release properties. Numerous biological applications have been reported for nano-scale to micro-scale sized particles, such as site-targeted, controlled, and enhanced bioavailability of hydrophobic drugs [1-4]. Due to nanoparticle size, drug targeting in various applications, such as cancer targeting, has been shown to be promising [5].
Moreover, polymeric particles have proved their effectiveness in stabilizing and protecting drug molecules such as proteins, peptides, or DNA from various environmental degradation hazards [2-4, 6, 7]. These polymers thus afford potential for protein and gene delivery. Numerous methods have been available to fabricate"} {"_id": "20ff5fc3d1628db26a1d4936eaa7b0cdad8eeae8", "title": "Deep neural networks for recognizing online handwritten mathematical symbols", "text": "This paper presents an application of deep learning to recognize online handwritten mathematical symbols. Recently, various deep learning architectures such as convolutional neural networks (CNN), deep neural networks (DNN) and long short-term memory (LSTM) RNNs have been applied to fields such as computer vision, speech recognition and natural language processing, where they have been shown to produce state-of-the-art results on various tasks. In this paper, we apply max-out-based CNN and BLSTM to image patterns created from online patterns and to the original online patterns, respectively, and combine them. We also compare them with traditional recognition methods, namely MRF and MQDF, by carrying out experiments on the CROHME database."} {"_id": "0955315509ac15bb4f825dbcd1e51423c3781ce4", "title": "The HAWKwood Database", "text": "We present a database consisting of wood pile images, which can be used as a benchmark to evaluate the performance of wood pile detection and surveying algorithms. We distinguish six database categories which can be used for different types of algorithms. Images of real and synthetic scenes are provided, consisting of 7655 images divided into 354 data sets. Depending on the category, the data sets either include ground truth data or forestry-specific measurements with which algorithms may be compared."} {"_id": "6681c5cecb6efb15f170786e04e05fc77820be50", "title": "Temperature-aware microarchitecture: Modeling and implementation", "text": "With cooling costs rising exponentially, designing cooling solutions for worst-case power dissipation is prohibitively expensive. Chips that can autonomously modify their execution and power-dissipation characteristics permit the use of lower-cost cooling solutions while still guaranteeing safe temperature regulation. Evaluating techniques for this dynamic thermal management (DTM), however, requires a thermal model that is practical for architectural studies. This paper describes HotSpot, an accurate yet fast and practical model based on an equivalent circuit of thermal resistances and capacitances that correspond to microarchitecture blocks and essential aspects of the thermal package. Validation was performed using finite-element simulation. The paper also introduces several effective methods for DTM: \"temperature-tracking\" frequency scaling, \"migrating computation\" to spare hardware units, and a \"hybrid\" policy that combines fetch gating with dynamic voltage scaling.
The latter two achieve their performance advantage by exploiting instruction-level parallelism, showing the importance of microarchitecture research in helping control the growth of cooling costs. Modeling temperature at the microarchitecture level also shows that power metrics are poor predictors of temperature, that sensor imprecision has a substantial impact on the performance of DTM, and that the inclusion of lateral resistances for thermal diffusion is important for accuracy."} {"_id": "80898c89f32975f33e42a680a4d675df63a8a3e5", "title": "A 3 ppm 1.5 × 0.8 mm² 1.0 µA 32.768 kHz MEMS-Based Oscillator", "text": "This paper describes the first 32 kHz low-power MEMS-based oscillator in production. The primary goal is to provide a small form-factor oscillator (1.5 × 0.8 mm²) for use as a crystal replacement in space-constrained mobile devices. The oscillator generates an output frequency of 32.768 kHz and its binary divisors down to 1 Hz. The frequency stability over the industrial temperature range (-40 °C to 85 °C) is ±100 ppm as an oscillator (XO) or ±3 ppm with optional calibration as a temperature compensated oscillator (TCXO). Supply currents are 0.9 µA for the XO and 1.0 µA for the TCXO at supply voltages from 1.4 V to 4.5 V. The MEMS resonator is a capacitively-transduced tuning fork at 524 kHz. The circuitry is fabricated in 180 nm CMOS and includes a low-power sustaining circuit, fractional-N PLL, temperature sensor, digital control, and low-swing driver."} {"_id": "982ffaf04a681c98c2d314a1100dd705a950850e", "title": "ITIL in small to medium-sized enterprises software companies: towards an implementation sequence", "text": "The Information Technology Infrastructure Library (ITIL) framework is a set of comprehensive publications providing descriptive guidance on the management of IT processes, functions, roles and responsibilities related to Information Technology Service Management. However, in spite of its repercussion and popularity, the ITIL framework does not suggest an implementation order for its processes. This decision constitutes the first challenge that an organization must overcome when starting an ITIL implementation, the enterprise size being one of the leading factors to be considered in the decision-making process. In the scenario of Small and Medium Enterprises dedicated to producing software, this paper is devoted to investigating which processes are most often selected to start the implementation of ITIL in these organizations. This is done by means of two different instruments: firstly, a systematic literature review on the topic, and secondly, a survey conducted among experts and practitioners. Results show in both cases that the Incident Management process should be the first process when implementing the ITIL framework."} {"_id": "24a9e0ac9bbc708efd14512becc5c4514d8f042d", "title": "Efficiently inferring community structure in bipartite networks", "text": "Bipartite networks are a common type of network data in which there are two types of vertices, and only vertices of different types can be connected. While bipartite networks exhibit community structure like their unipartite counterparts, existing approaches to bipartite community detection have drawbacks, including implicit parameter choices, loss of information through one-mode projections, and lack of interpretability.
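For a single block, the HotSpot equivalent-circuit idea above reduces to a first-order RC update; a minimal Euler-integration sketch with illustrative constants (the real model couples many blocks and package layers):

```python
def rc_thermal_step(temp, power, r_th, c_th, t_amb=45.0, dt=1e-3):
    """One Euler step of C * dT/dt = P - (T - T_amb) / R for a single
    microarchitecture block; R in K/W, C in J/K, dt in seconds."""
    return temp + dt / c_th * (power - (temp - t_amb) / r_th)

temp = 45.0
for _ in range(200_000):             # 200 s of 15 W dissipation
    temp = rc_thermal_step(temp, power=15.0, r_th=2.0, c_th=30.0)
print(round(temp, 1))                # approaches T_amb + P*R = 75.0
```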
Here we solve the community detection problem for bipartite networks by formulating a bipartite stochastic block model, which explicitly includes vertex type information and may be trivially extended to k-partite networks. This bipartite stochastic block model yields a projection-free and statistically principled method for community detection that makes clear assumptions and parameter choices and yields interpretable results. We demonstrate this model's ability to efficiently and accurately find community structure in synthetic bipartite networks with known structure and in real-world bipartite networks with unknown structure, and we characterize its performance in practical contexts."} {"_id": "e5e4c3a891997f93f539c421cae73dab078cdb79", "title": "Fast Fourier Transforms for Nonequispaced Data", "text": "Two groups of algorithms are presented generalizing the fast Fourier transform (FFT) to the case of non-integer frequencies and nonequispaced nodes on the interval [-π, π]. These schemes are based on combinations of certain analytical considerations with the classical fast Fourier transform, and generalize both the forward and backward FFTs. The number of arithmetic operations required by each of the algorithms is proportional to N log N + N log(1/ε), where ε is the desired precision of computations and N is the number of nodes. Several related algorithms are also presented, each of which utilizes a similar set of techniques from analysis and linear algebra. These include an efficient version of the Fast Multipole Method in one dimension and fast algorithms for the evaluation, integration and differentiation of Lagrange polynomial interpolants. Several numerical examples are used to illustrate the efficiency of the approach, and to compare the performances of the two sets of nonuniform FFT algorithms."} {"_id": "4f1b7c199a02b8efc5e77b9a5b6283095fa038b5", "title": "Value of Information Systems and Products: Understanding the Users' Perspective and Values", "text": "Developers aim at providing value through their systems and products. However, value is not only financial, but depends on usage and users' perceptions of value. In this paper, we clarify the concept of value from the users' perspective and the role of user involvement in providing value. First, theories and approaches of psychology, marketing and human-computer interaction are reviewed. Secondly, the concept of 'user values' is suggested to clarify the concept of value from the user's point of view, and a category framework of user values is presented to make them more concrete and easier to identify. Thirdly, the activities and methods for adopting user values in development work are discussed. The analysis of the literature shows that value has been considered in multiple ways in development. However, users' perspectives have received less attention.
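It helps to pin down the object the nonequispaced-FFT abstract above is computing: the nonequispaced discrete Fourier transform, which a direct O(NM) sum evaluates exactly and which the fast algorithms approximate to precision ε in O(N log N + N log(1/ε)). A numpy reference sketch:

```python
import numpy as np

def ndft(coeffs, nodes):
    """Evaluate f(x_j) = sum_k c_k exp(i k x_j) at arbitrary nodes
    x_j in [-pi, pi); k runs over -N/2 .. N/2-1. Direct O(N*M) sum,
    the exact baseline that fast NUFFT algorithms approximate."""
    n = len(coeffs)
    k = np.arange(-(n // 2), n - n // 2)
    return np.exp(1j * np.outer(nodes, k)) @ coeffs

rng = np.random.default_rng(1)
c = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x = rng.uniform(-np.pi, np.pi, size=5)        # nonequispaced nodes
print(np.round(ndft(c, x), 3))
```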
In conclusion, we draw future research directions for value-centered design and propose that user involvement is essential in identifying user values, interpreting the practical meaning of the values and implementing them in development work."} {"_id": "e80f6871f78da0e5c4ae7c07235db27cbadaffe0", "title": "Quadtree Convolutional Neural Networks", "text": "This paper presents a Quadtree Convolutional Neural Network (QCNN) for efficiently learning from image datasets representing sparse data such as handwriting, pen strokes, freehand sketches, etc. Instead of storing the sparse sketches in regular dense tensors, our method decomposes and represents the image as a linear quadtree that is only refined in the non-empty portions of the image. The actual image data corresponding to non-zero pixels is stored in the finest nodes of the quadtree. Convolution and pooling operations are restricted to the sparse pixels, leading to better efficiency in computation time as well as memory usage. Specifically, the computational and memory costs in QCNN grow linearly in the number of non-zero pixels, as opposed to traditional CNNs where the costs are quadratic in the number of pixels. This enables QCNN to learn from sparse images much faster and process high resolution images without the memory constraints faced by traditional CNNs. We study QCNN on four sparse image datasets for sketch classification and simplification tasks. The results show that QCNN can obtain comparable accuracy with large reduction in computational and memory costs."} {"_id": "9a2b2bfa324041c1fc1e84598d6f14956c67c825", "title": "A 0.3-µm CMOS 8Gb/s 4-PAM Serial Link Transceiver", "text": "An 8-Gb/s 0.3-µm CMOS transceiver uses multilevel signaling (4-PAM) and transmit pre-shaping in combination with receive equalization to reduce ISI due to channel low-pass effects. High on-chip frequencies are avoided by multiplexing and demultiplexing the data directly at the pads. Timing recovery takes advantage of a novel frequency acquisition scheme and a linear PLL with a loop bandwidth >30MHz, phase margin >48° and capture range of 20MHz without a frequency acquisition aid. The transmitted 8-Gbps data is successfully detected by the receiver after a 10-m coaxial cable. The 2mm × 2mm chip consumes 1.1W at 8Gbps with a 3-V supply."} {"_id": "bbf70ffe55676b34c43b585e480e8343943aa328", "title": "SDN/NFV-enabled satellite communications networks: Opportunities, scenarios and challenges", "text": "In the context of next generation 5G networks, the satellite industry is clearly committed to revisit and revamp the role of satellite communications. As major drivers in the evolution of (terrestrial) fixed and mobile networks, Software Defined Networking (SDN) and Network Function Virtualisation (NFV) technologies are also being positioned as central technology enablers towards improved and more flexible integration of satellite and terrestrial segments, providing satellite networks further service innovation and business agility through advanced network resource management techniques. Through the analysis of scenarios and use cases, this paper provides a description of the benefits that SDN/NFV technologies can bring into satellite communications towards 5G. Three scenarios are presented and analysed to delineate different potential improvement areas pursued through the introduction of SDN/NFV technologies in the satellite ground segment domain.
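The storage idea behind the QCNN record above, refining a quadtree only where the image is non-empty, is easy to make concrete; a minimal decomposition sketch for power-of-two square images (the convolution and pooling on the tree are omitted):

```python
import numpy as np

def build_quadtree(img, x=0, y=0, size=None):
    """Linear quadtree over a binary image: empty quadrants become a
    single leaf, non-empty ones are refined down to pixel level where
    the actual values live (as QCNN does for sparse sketches)."""
    size = img.shape[0] if size is None else size
    block = img[y:y + size, x:x + size]
    if not block.any():
        return ("empty", x, y, size)
    if size == 1:
        return ("leaf", x, y, block[0, 0])
    h = size // 2
    return ("node", [build_quadtree(img, x + dx, y + dy, h)
                     for dy in (0, h) for dx in (0, h)])

img = np.zeros((8, 8), dtype=np.uint8)
img[1, 2] = img[6, 5] = 1                   # a sparse "sketch"
tree = build_quadtree(img)
print(tree[0], len(tree[1]))                # root subdivides into 4
```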
Within each scenario, a number of use cases are developed to gain further insight into specific capabilities and to identify the technical challenges stemming from them."} {"_id": "2b354f4ad32a03914dd432658150ab2419d4ff0f", "title": "Multi-Sensor Conflict Measurement and Information Fusion", "text": "In sensing applications where multiple sensors observe the same scene, fusing sensor outputs can provide improved results. However, if some of the sensors are providing lower-quality outputs, e.g. when one or more sensors has a poor signal-to-noise ratio (SNR) and therefore provides very noisy data, the fused results can be degraded. In this work, a multi-sensor conflict measure is proposed which estimates multi-sensor conflict by representing each sensor output as interval-valued information and examining the sensor output overlaps on all possible n-tuple sensor combinations. The conflict is based on the sizes of the intervals and how many sensors' output values lie in these intervals. In this work, conflict is defined in terms of how little the outputs from multiple sensors overlap. That is, high degrees of overlap mean low sensor conflict, while low degrees of overlap mean high conflict. This work is a preliminary step towards a robust conflict and sensor fusion framework. In addition, a sensor fusion algorithm is proposed based on a weighted sum of sensor outputs, where the weights for each sensor diminish as the conflict measure increases. The proposed methods can be utilized to (1) assess a measure of multi-sensor conflict, and (2) improve sensor output fusion by lessening the weighting for sensors with high conflict. Using this measure, a simulated example is given to explain the mechanics of calculating the conflict measure, and stereo camera 3D outputs are analyzed and fused. In the stereo camera case, the sensor output is corrupted by additive impulse noise, DC offset, and Gaussian noise. Impulse noise is common in sensors due to intermittent interference, a DC offset represents a sensor bias or registration error, and Gaussian noise represents a sensor output with low SNR. The results show that sensor output fusion based on the conflict measure shows improved accuracy over a simple averaging fusion strategy."} {"_id": "521c5092f7ff4e1fdbf607e2f54e8ce26b51bd27", "title": "Active Imitation Learning via Reduction to I.I.D. Active Learning", "text": "In standard passive imitation learning, the goal is to learn a target policy by passively observing full execution trajectories of it. Unfortunately, generating such trajectories can require substantial expert effort and be impractical in some cases. In this paper, we consider active imitation learning with the goal of reducing this effort by querying the expert about the desired action at individual states, which are selected based on answers to past queries and the learner's interactions with an environment simulator. We introduce a new approach based on reducing active imitation learning to i.i.d. active learning, which can leverage progress in the i.i.d. setting. Our first contribution is to analyze reductions for both non-stationary and stationary policies, showing that the label complexity (number of queries) of active imitation learning can be substantially less than passive learning.
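The conflict measure and fusion rule in the multi-sensor record above can be prototyped directly: interval overlap defines per-sensor conflict, and fusion weights shrink as conflict grows. A minimal pairwise version (the paper examines all n-tuple combinations, so pairwise averaging is a simplifying assumption):

```python
import numpy as np

def interval_conflict(intervals):
    """intervals: list of (lo, hi) sensor outputs. Conflict of sensor i =
    mean fraction of non-overlap with every other sensor's interval."""
    conf = []
    for i, (lo_i, hi_i) in enumerate(intervals):
        others = [iv for j, iv in enumerate(intervals) if j != i]
        overlaps = [max(0.0, min(hi_i, hi_j) - max(lo_i, lo_j)) /
                    max(hi_i - lo_i, 1e-12) for lo_j, hi_j in others]
        conf.append(1.0 - float(np.mean(overlaps)))
    return np.asarray(conf)

def fuse(intervals):
    """Weighted sum of interval midpoints; weights diminish with conflict."""
    conf = interval_conflict(intervals)
    w = 1.0 / (1.0 + conf)
    mids = np.array([(lo + hi) / 2 for lo, hi in intervals])
    return float(np.dot(w, mids) / w.sum()), conf

# three agreeing sensors and one corrupted by a DC offset
print(fuse([(9.8, 10.2), (9.9, 10.3), (9.7, 10.1), (14.0, 14.4)]))
```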
Our second contribution is to introduce a practical algorithm inspired by the reductions, which is shown to be highly effective in four test domains compared to a number of alternatives."} {"_id": "3fe3343d1f908270f067acebc0463590d284abf3", "title": "Judgment under emotional certainty and uncertainty: the effects of specific emotions on information processing.", "text": "The authors argued that emotions characterized by certainty appraisals promote heuristic processing, whereas emotions characterized by uncertainty appraisals result in systematic processing. The 1st experiment demonstrated that the certainty associated with an emotion affects the certainty experienced in subsequent situations. The next 3 experiments investigated effects on processing of emotions associated with certainty and uncertainty. Compared with emotions associated with uncertainty, emotions associated with certainty resulted in greater reliance on the expertise of a source of a persuasive message in Experiment 2, more stereotyping in Experiment 3, and less attention to argument quality in Experiment 4. In contrast to previous theories linking valence and processing, these findings suggest that the certainty appraisal content of emotions is also important in determining whether people engage in systematic or heuristic processing."} {"_id": "f26a8dcfbaf9f46c021c41a3545fcfa845660c47", "title": "Human Pose Regression by Combining Indirect Part Detection and Contextual Information", "text": "In this paper, we propose an end-to-end trainable regression approach for human pose estimation from still images. We use the proposed Soft-argmax function to convert feature maps directly to joint coordinates, resulting in a fully differentiable framework. Our method is able to learn heat map representations indirectly, without additional steps of artificial ground truth generation. Consequently, contextual information can be included in the pose predictions in a seamless way. We evaluated our method on two very challenging datasets, the Leeds Sports Poses (LSP) and the MPII Human Pose datasets, reaching the best performance among all the existing regression methods and comparable results to the state-of-the-art detection-based approaches."} {"_id": "50e1460abd160b92b38f206553f7917cf6470324", "title": "Tachyon: Memory Throughput I/O for Cluster Computing Frameworks", "text": "As ever more big data computations start to be in-memory, I/O throughput dominates the running times of many workloads. For distributed storage, the read throughput can be improved using caching; however, the write throughput is limited by both disk and network bandwidth due to data replication for fault-tolerance. This paper proposes a new file system architecture to enable frameworks to both read and write reliably at memory speed, by avoiding synchronous data replication on writes."} {"_id": "20ad0ba7e187e6f335a08c59a4e53da4e4b027ec", "title": "Automatic Acquisition of Hyponyms from Large Text Corpora", "text": "We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest.
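The Soft-argmax at the core of the pose-regression record above converts a heat map to coordinates differentiably: softmax the map, then take the expectation of the pixel grid. A numpy sketch for a single 2D joint (beta is an assumed sharpening constant):

```python
import numpy as np

def soft_argmax(heatmap, beta=10.0):
    """Differentiable argmax: softmax(beta * h) gives a distribution
    over pixels; the expected (x, y) under it is the joint estimate.
    Larger beta sharpens the distribution toward the hard argmax."""
    h, w = heatmap.shape
    p = np.exp(beta * (heatmap - heatmap.max()))
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((p * xs).sum()), float((p * ys).sum())

hm = np.zeros((16, 16))
hm[4, 11] = 1.0                      # peak at x=11, y=4
hm[5, 11] = 0.8
print(soft_argmax(hm))               # close to (11.0, 4.1)
```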
We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested."} {"_id": "3d378bdbf3564e341bec66c1974899b01e82507f", "title": "Open domain event extraction from twitter", "text": "Tweets are the most up-to-date and inclusive stream of information and commentary on current events, but they are also fragmented and noisy, motivating the need for systems that can extract, aggregate and categorize important events. Previous work on extracting structured representations of events has focused largely on newswire text; Twitter's unique characteristics present new challenges and opportunities for open-domain event extraction. This paper describes TwiCal, the first open-domain event-extraction and categorization system for Twitter. We demonstrate that accurately extracting an open-domain calendar of significant events from Twitter is indeed feasible. In addition, we present a novel approach for discovering important event categories and classifying extracted events based on latent variable models. By leveraging large volumes of unlabeled data, our approach achieves a 14% increase in maximum F1 over a supervised baseline. A continuously updating demonstration of our system can be viewed at http://statuscalendar.com; our NLP tools are available at http://github.com/aritter/twitter_nlp."} {"_id": "87086ba81c59a04d657107b7895d8ab38e4b6464", "title": "TwiNER: named entity recognition in targeted twitter stream", "text": "Many private and/or public organizations have been reported to create and monitor targeted Twitter streams to collect and understand users' opinions about the organizations. A targeted Twitter stream is usually constructed by filtering tweets with user-defined selection criteria, e.g., tweets published by users from a selected region, or tweets that match one or more predefined keywords. The targeted stream is then monitored to collect and understand users' opinions. There is an emerging need for early crisis detection and response with such targeted streams. Such applications require a good named entity recognition (NER) system for Twitter, which is able to automatically discover emerging named entities that are potentially linked to the crisis. In this paper, we present a novel 2-step unsupervised NER system for targeted Twitter streams, called TwiNER. In the first step, it leverages the global context obtained from Wikipedia and Web N-Gram corpus to partition tweets into valid segments (phrases) using a dynamic programming algorithm. Each such tweet segment is a candidate named entity. It is observed that the named entities in the targeted stream usually exhibit a gregarious property, due to the way the targeted stream is constructed. In the second step, TwiNER constructs a random walk model to exploit the gregarious property in the local context derived from the Twitter stream. The highly-ranked segments have a higher chance of being true named entities. We evaluated TwiNER on two sets of real-life tweets simulating two targeted streams. Evaluated using labeled ground truth, TwiNER achieves performance comparable to conventional approaches in both streams.
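The hyponym-acquisition abstract above rests on easily recognizable lexico-syntactic patterns. A minimal sketch of that style of matching follows; the two patterns, the crude head-noun heuristic, and the example sentence are assumptions rather than the paper's full pattern inventory.

```python
import re

# Two classic lexico-syntactic patterns for hyponymy
# ("NP such as NP, NP and NP", "NP including NP"); a toy subset.
PATTERNS = [
    re.compile(r'(\w+(?: \w+)?) such as ((?:\w+(?:, )?)+(?: and \w+)?)'),
    re.compile(r'(\w+(?: \w+)?) including ((?:\w+(?:, )?)+(?: and \w+)?)'),
]

def extract_hyponyms(sentence):
    """Return (hypernym, hyponym) pairs matched by the toy patterns."""
    pairs = []
    for pat in PATTERNS:
        for m in pat.finditer(sentence):
            hypernym = m.group(1).split()[-1]          # head noun, crudely
            for hyponym in re.split(r', | and ', m.group(2)):
                if hyponym:
                    pairs.append((hypernym, hyponym.strip()))
    return pairs

print(extract_hyponyms(
    "She plays stringed instruments such as guitar, mandolin and banjo."))
# [('instruments', 'guitar'), ('instruments', 'mandolin'), ('instruments', 'banjo')]
```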
Various settings of TwiNER have also been examined to verify our global context + local context combo idea."} {"_id": "b62c81269e1d13bcae9c15db335887db990e5860", "title": "Generalized Expectation Criteria for Semi-Supervised Learning with Weakly Labeled Data", "text": "In this paper, we present an overview of generalized expectation criteria (GE), a simple, robust, scalable method for semi-supervised training using weakly-labeled data. GE fits model parameters by favoring models that match certain expectation constraints, such as marginal label distributions, on the unlabeled data. This paper shows how to apply generalized expectation criteria to two classes of parametric models: maximum entropy models and conditional random fields. Experimental results demonstrate accuracy improvements over supervised training and a number of other state-of-the-art semi-supervised learning methods for these models."} {"_id": "351743198513dedd4f7d59b8d694fd15caa9a6c2", "title": "Towards Instrument Segmentation for Music Content Description: a Critical Review of Instrument Classification Techniques", "text": "A system capable of describing the musical content of any kind of soundfile or soundstream, as it is supposed to be done in MPEG7-compliant applications, should provide an account of the different moments where a certain instrument can be listened to. This segmentation according to instrument taxonomies must be solved with different strategies than segmentation according to perceptual features. In this paper we concentrate on reviewing the different techniques that have been so far proposed for automatic classification of musical instruments. Although the ultimate goal should be the segmentation of complex sonic mixtures, it is still far from being solved. Therefore, the practical approach is to reduce the scope of the classification systems to only deal with isolated, and out-of-context, sounds. There is an obvious tradeoff in endorsing this strategy: we gain simplicity and tractability, but we lose contextual and time-dependent cues that can be exploited as relevant features for classifying the sounds."} {"_id": "e6c3c1ab62e14c5e23eca9c8db08a2c2e06e2469", "title": "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems", "text": "Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that shares many similarities with evolutionary computation techniques. However, the PSO is driven by the simulation of a social psychological metaphor motivated by collective behaviors of birds and other social organisms instead of the survival of the fittest individual. Inspired by the classical PSO method and quantum mechanics theories, this work presents novel quantum-behaved PSO (QPSO) approaches using a mutation operator with a Gaussian probability distribution. The application of the Gaussian mutation operator instead of random sequences in QPSO is a powerful strategy to improve the QPSO performance in preventing premature convergence to local optima. In this paper, new combinations of QPSO and Gaussian probability distribution are employed in well-studied continuous optimization problems of engineering design. Two case studies are described and evaluated in this work. Our results indicate that Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence and, in most cases, they outperform the results presented in the literature.
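A rough sketch of a Gaussian quantum-behaved PSO in the spirit of the abstract above. It follows the common QPSO update X = p +/- beta*|mbest - X|*ln(1/u) and injects Gaussian noise into the local attractor as the mutation; the constants, the noise scale, and the sphere objective are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy unconstrained objective
    return float(np.sum(x ** 2))

def qpso(f, dim=5, n=20, iters=200, beta=0.75):
    """Rough QPSO sketch with a Gaussian mutation on the attractor.

    mbest is the mean of the personal bests; adding N(0,1) noise to the
    local attractor p is one common way to realize the Gaussian mutation
    the abstract refers to. All constants here are assumptions.
    """
    X = rng.uniform(-5, 5, (n, dim))
    pbest = X.copy()
    pcost = np.array([f(x) for x in X])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)
        for i in range(n):
            phi = rng.uniform(0, 1, dim)
            p = phi * pbest[i] + (1 - phi) * g
            p += 0.1 * rng.standard_normal(dim)       # Gaussian mutation
            u = rng.uniform(1e-12, 1, dim)
            sign = np.where(rng.uniform(size=dim) < 0.5, 1.0, -1.0)
            X[i] = p + sign * beta * np.abs(mbest - X[i]) * np.log(1 / u)
            c = f(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i].copy(), c
        g = pbest[pcost.argmin()].copy()
    return g, f(g)

print(qpso(sphere))   # converges near the origin on this toy problem
```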
"} {"_id": "9f8bea863cdf2bb4c0fb118aa0d8b58c73c3fa54", "title": "Positive Youth Development, Participation in Community Youth Development Programs, and Community Contributions of Fifth-Grade Adolescents: Findings From the First Wave Of the 4-H Study of Positive Youth Development", "text": "The 4-H Study of Positive Youth Development (PYD), a longitudinal investigation of a diverse sample of 1,700 fifth graders and 1,117 of their parents, tests developmental contextual ideas linking PYD, youth contributions, and participation in community youth development (YD) programs, representing a key ecological asset. Using data from Wave 1 of the study, structural equation modeling procedures provided evidence for five first-order latent factors representing the \u201cFive Cs\u201d of PYD (competence, confidence, connection, character, and caring), and for their convergence on a second-order, PYD latent construct. A theoretical construct, youth \u201ccontribution,\u201d was also created and examined. Both PYD and YD program participation independently related to contribution. The importance of longitudinal analyses for extending the present results is discussed."} {"_id": "e70f05f9a8b13e2933eee86a3bcebbc8403c672b", "title": "A Flexible AC Distribution System Device for a Microgrid", "text": "This paper presents a flexible ac distribution system device for microgrid applications. The device aims to improve the power quality and reliability of the overall power distribution system that the microgrid is connected to. The control design employs a new model predictive control algorithm which allows faster computational time for large power systems by optimizing the steady-state and the transient control problems separately. Extended Kalman filters are also employed for frequency tracking and to extract the harmonic spectra of the grid voltage and the load currents in the microgrid. The design concept is verified through different test case scenarios to demonstrate the capability of the proposed device and the results obtained are discussed."} {"_id": "e08332bc8f664e3d55363b50d1b932fbfa717986", "title": "A Hierarchical Game Framework for Resource Management in Fog Computing", "text": "Supporting real-time and mobile data services, fog computing has been considered a promising technology to overcome long and unpredictable delays in cloud computing. However, as resources in fog nodes (FNs) are owned by independent users or infrastructure providers, the authorized data service subscribers (ADSSs) cannot connect to and access data services from the FNs directly, but can only request data service from the data service operators (DSOs) in the cloud. Accordingly, in fog computing, the DSOs are required to communicate with FNs and allocate resources from the FNs to the ADSSs. The DSOs provide virtualized data services to the ADSSs, and the FNs, motivated by the DSOs, provide data services in the physical network. Nevertheless, with fog computing added as the intermediate layer between the cloud and users, there are challenges such as the resource allocation in the virtualized network between the DSOs and ADSSs, the asymmetric information problem between DSOs and ADSSs, and the resource matching from the FNs to the ADSSs in the physical network. In this article, we propose a three-layer hierarchical game framework to solve the challenges in fog computing networks.
In the proposed framework, we apply the Stackelberg sub-game for the interaction between DSOs and ADSSs, moral hazard modeling for the interaction between DSOs and FNs, and the student project allocation matching sub-game for the interaction between FNs and ADSSs. The purpose is to obtain stable and optimal utilities for each DSO, FN, and ADSS in a distributed fashion."} {"_id": "b919bdba5cbafcb555b65ccbd5874b77f36c10e1", "title": "Social Learning Theory and Developmental Psychology : The Legacies of", "text": "Social learning theory began as an attempt by Robert Sears and others to meld psychoanalytic and stimulus-response learning theory into a comprehensive explanation of human behavior, drawing on the clinical richness of the former and the rigor of the latter. Albert Bandura abandoned the psychoanalytic and drive features of the approach, emphasizing instead cognitive and information-processing capacities that mediate social behavior. Both theories were intended as a general framework for the understanding of human behavior, and their developmental aspects remain to be worked out in detail. Nevertheless, Bandura has provided a strong theoretical beginning: The theory appears to be capable of accounting well for existing developmental data as well as guiding new investigation."} {"_id": "a2e905bcb84a1f38523f0c95f29be05e2179019b", "title": "Compact Half Diamond Dual-Band Textile HMSIW On-Body Antenna", "text": "A novel wearable dual-band textile antenna, designed for optimal on-body performance in the 2.4 and 5.8 GHz Industrial, Scientific and Medical bands, is proposed. By using brass eye-lets and a combination of conducting and non-conductive textile materials, a half-mode substrate integrated waveguide cavity with ground plane is realized that is very compact and flexible, while still directing radiation away from the wearer. Additional miniaturization is achieved by adding a row of shorting vias and slots. Besides excellent free-space performance in the 2.4 and 5.8 GHz bands, respectively, with measured impedance bandwidth of 4.9% and 5.1%, maximal measured free-space gain of 4.1 and 5.8 dBi, and efficiency of 72.8% and 85.6%, very stable on-body performance is obtained, with minimal frequency detuning when deploying the antenna on the human body and when bent around cylinders with radii of 75 and 40 mm. At 2.45 and 5.8 GHz, respectively, the measured on-body gain is 4.4 and 5.7 dBi, with sufficiently small calculated SAR values of 0.55 and 0.90 W/kg. These properties make the proposed antenna excellently suited for wearable on-body systems."} {"_id": "f2217c96f4a628aac892b89bd1f0625b09b1c577", "title": "A machine learning approach for medication adherence monitoring using body-worn sensors", "text": "One of the most important challenges in chronic disease self-management is medication non-adherence, which has irrevocable outcomes. Although many technologies have been developed for medication adherence monitoring, the reliability and cost-effectiveness of these approaches are not well understood to date. This paper presents a medication adherence monitoring system that tracks user activity with wrist-band wearable sensors. We develop machine learning algorithms that track wrist motions in real-time and identify medication intake activities. We propose a novel data analysis pipeline to reliably detect medication adherence by examining single-wrist motions.
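The Stackelberg interaction between DSOs and ADSSs mentioned above can be pictured with a toy leader-follower pricing game: the leader posts a price, each follower best-responds with a demand, and the leader searches its strategy grid. The utility forms and all numbers below are invented for illustration; the paper's actual sub-game is richer.

```python
import numpy as np

# Toy Stackelberg pricing game in the spirit of the DSO-ADSS interaction:
# each follower (an ADSS) maximizes u(d) = a*log(1+d) - price*d, whose
# best response is d = a/price - 1 (clipped at zero); the leader (a DSO)
# then picks the price maximizing its profit given those responses.

def follower_demand(price, a):
    return max(a / price - 1.0, 0.0)          # argmax of a*log(1+d) - price*d

def leader_profit(price, followers, cost=0.2):
    demand = sum(follower_demand(price, a) for a in followers)
    return (price - cost) * demand

followers = [1.0, 1.5, 2.0]                    # invented ADSS utility parameters
prices = np.linspace(0.25, 2.0, 200)           # leader's strategy grid
best = max(prices, key=lambda p: leader_profit(p, followers))
print(float(best), leader_profit(best, followers))
```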
Our system achieves an accuracy of 78.3% in adherence detection without the need for medication pillboxes and with only one sensor worn on either of the wrists. The accuracy of our algorithm is only 7.9% lower than a system with two sensors that track motions of both wrists."} {"_id": "1ce0260db2f3278c910c9e0eb518309f43885a91", "title": "Cost Effective Genetic Algorithm for Workflow Scheduling in Cloud Under Deadline Constraint", "text": "Cloud computing is becoming an increasingly admired paradigm that delivers high-performance computing resources over the Internet to solve large-scale scientific problems, but it still has various challenges that need to be addressed to execute scientific workflows. Existing research has mainly focused on minimizing finishing time (makespan) or cost while meeting the quality of service requirements. However, most of it does not consider essential characteristics of the cloud and major issues, such as virtual machine (VM) performance variation and acquisition delay. In this paper, we propose a meta-heuristic cost effective genetic algorithm that minimizes the execution cost of the workflow while meeting the deadline in a cloud computing environment. We develop novel schemes for the encoding, population initialization, crossover, and mutation operators of the genetic algorithm. Our proposal considers all the essential characteristics of the cloud as well as VM performance variation and acquisition delay. Performance evaluation on some well-known scientific workflows, such as Montage, LIGO, CyberShake, and Epigenomics, of different sizes shows that our proposed algorithm performs better than the current state-of-the-art algorithms."} {"_id": "959536fe0fe4c278136777aa54d3c0032279ece3", "title": "Measuring #GamerGate: A Tale of Hate, Sexism, and Bullying", "text": "2009-2014 PhD in Computer Science, University of California Santa Barbara, Santa Barbara, CA. Dissertation title: \u201cStepping Up the Cybersecurity Game: Protecting Online Services from Malicious Activity\u201d 2014 MSc in Computer Science, University of California Santa Barbara, Santa Barbara, CA. 2006-2009 Laurea Specialistica in Computer Engineering (MSc equivalent), Universit\u00e0 degli Studi di Genova, Genova, Italy. Thesis title: \u201cA Distributed System for Intrusion Prevention\u201d 2003-2006 Laurea Triennale in Computer Engineering (BSc equivalent), Universit\u00e0 degli Studi di Genova, Genova, Italy. Thesis title: \u201cComputer Security in a Linux System\u201d (in Italian) 1998-2003 Liceo Classico A. D\u2019Oria, High School, Genova, Italy. Focus on humanities"} {"_id": "b62309d46935fa19e604e7b8b3c03eb9c93defc8", "title": "A fast-transient LDO based on buffered flipped voltage follower", "text": "In this work, the analysis of the flipped voltage follower (FVF) based single-transistor-control (STC) LDO is given. Two evolved versions of FVF, cascaded FVF (CAFVF) and level shifted FVF (LSFVF), are studied. Then, a buffered FVF (BFVF) for LDO application is proposed, combining the virtues of both CAFVF and LSFVF structures.
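A toy version of the deadline-constrained, cost-minimizing GA from the workflow-scheduling abstract above. The chromosome assigns each task a VM type and fitness is cost plus a deadline penalty; the serial task chain, VM catalog, deadline, and GA settings are all invented for the sketch.

```python
import random

random.seed(1)

TASKS = [4.0, 2.0, 6.0, 3.0, 5.0]          # base task runtimes (invented)
VMS = [(1.0, 1.0), (2.0, 2.6), (4.0, 7.0)] # (speed, price per time unit)
DEADLINE = 12.0

def evaluate(chrom):
    """Cost of a VM assignment, heavily penalized if the deadline is missed."""
    runtimes = [t / VMS[v][0] for t, v in zip(TASKS, chrom)]
    cost = sum(r * VMS[v][1] for r, v in zip(runtimes, chrom))
    makespan = sum(runtimes)                # tasks assumed to run in a chain
    return cost + 1000.0 * max(0.0, makespan - DEADLINE)

def ga(pop_size=30, gens=100, pmut=0.2):
    pop = [[random.randrange(len(VMS)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate)
        elite = pop[: pop_size // 2]                      # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < pmut:                    # point mutation
                child[random.randrange(len(TASKS))] = random.randrange(len(VMS))
            children.append(child)
        pop = elite + children
    best = min(pop, key=evaluate)
    return best, evaluate(best)

print(ga())   # cheapest VM mix found that still meets the 12-unit deadline
```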
It alleviates the minimum loading requirement of FVF and simulation results show that it has faster transient response and better load regulation."} {"_id": "be8a69da1ee8c63b567df197bf0afa1e2d46ffdc", "title": "Double hierarchy hesitant fuzzy linguistic term set and MULTIMOORA method: A case study to evaluate the implementation status of haze controlling measures", "text": "In recent years, hesitant fuzzy linguistic term sets (HFLTSs) have been studied by many scholars and are gradually maturing. However, some shortcomings of HFLTSs have also emerged. To describe the complex linguistic terms or linguistic term sets more accurately and reasonably, in this paper, we introduce the novel concepts named double hierarchy linguistic term set (DHLTS) and double hierarchy hesitant fuzzy linguistic term set (DHHFLTS). The operational laws and properties of the DHHFLTSs are developed as well. Afterwards, we investigate the multiple criteria decision making model with double hierarchy hesitant fuzzy linguistic information. We develop a double hierarchy hesitant fuzzy linguistic MULTIMOORA (DHHFL-MULTIMOORA) method to solve it. Furthermore, we apply the DHHFL-MULTIMOORA method to deal with a practical case about selecting the optimal city in China by evaluating the implementation status of haze controlling measures. Some comparisons between the DHHFL-MULTIMOORA method and the hesitant fuzzy linguistic TOPSIS method are provided to show the advantages of the proposed method."} {"_id": "53e3e04251b4b9d54b6a2f6cee4e7c89e4a978f3", "title": "Behavioral activation and inhibition systems and the severity and course of depression.", "text": "Theorists have proposed that depression is associated with abnormalities in the behavioral activation (BAS) and behavioral inhibition (BIS) systems. In particular, depressed individuals are hypothesized to exhibit deficient BAS and overactive BIS functioning. Self-reported levels of BAS and BIS were examined in 62 depressed participants and 27 nondepressed controls. Clinical functioning was assessed at intake and at 8-month follow-up. Relative to nondepressed controls, depressed participants reported lower BAS levels and higher BIS levels. Within the depressed group, lower BAS levels were associated with greater concurrent depression severity and predicted worse 8-month outcome. Levels of both BIS and BAS showed considerable stability over time and clinical state. Overall, results suggest that BAS dysregulation exacerbates the presentation and course of depressive illness."} {"_id": "7d9b03aae8a4efb22f94488866bbba8448c631ef", "title": "Direct voltage control of DC-DC boost converters using model predictive control based on enumeration", "text": "This paper presents a model predictive control (MPC) approach for the dc-dc boost converter. Based on a hybrid model of the converter suitable for both continuous and discontinuous conduction mode, an objective function is formulated, which is to be minimized. The proposed MPC scheme, utilized as a voltage-mode controller, achieves regulation of the output voltage to its reference, without requiring a subsequent current control loop.
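Enumeration-based MPC as in the boost-converter abstract above amounts to simulating every admissible switch sequence over a short horizon and applying the first input of the cheapest one. The sketch below uses a crude averaged model with a diode clamp; all constants, weights, and the horizon are assumptions, not the paper's hybrid model.

```python
import itertools
import numpy as np

L, C, R, VIN, TS = 1e-3, 1e-4, 10.0, 5.0, 1e-5   # invented converter constants

def step(x, s):
    """One Euler step of a crude averaged boost model for switch state s."""
    il, vo = x                                   # inductor current, output voltage
    dil = (VIN - (1 - s) * vo) / L
    dvo = ((1 - s) * il - vo / R) / C
    il2 = max(il + TS * dil, 0.0)                # crude diode clamp: no negative current
    return np.array([il2, vo + TS * dvo])

def mpc_move(x, vref, horizon=5):
    best_cost, best_u = float("inf"), 0
    for seq in itertools.product((0, 1), repeat=horizon):   # enumerate sequences
        xs, cost = x.copy(), 0.0
        for s in seq:
            xs = step(xs, s)
            cost += (xs[1] - vref) ** 2 + 0.01 * xs[0] ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u                                # receding horizon: apply first input

x = np.array([0.0, 5.0])
for _ in range(2000):                            # regulate toward a 12 V reference
    x = step(x, mpc_move(x, 12.0))
print(x)                                         # output voltage approaches 12 V
```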
Simulation and experimental results are provided to demonstrate the merits of the proposed control methodology, which include fast transient response and robustness."} {"_id": "4ad67ec5310f6149df6be2af686d85933539a063", "title": "A Logit Model of Brand Choice Calibrated on Scanner Data", "text": "Optical scanning of the Universal Product Code in supermarkets provides a new level of detail and completeness in household panel data and makes possible the construction of more comprehensive brand choice models than hitherto. A multinomial logit model calibrated on 32 weeks of purchases of regular ground coffee by 100 households shows high statistical significance for the explanatory variables of brand loyalty, size loyalty, presence/absence of store promotion, regular shelf price and promotional price cut. The model is quite parsimonious in that the coefficients of these variables are modeled to be the same for all coffee brand-sizes. Considering its parsimony, the calibrated model predicts the share of purchases by brand-size surprisingly well in a hold-out sample of 100 households over the 32-week calibration period and a subsequent 20-week forecast period. Discrepancies in prediction are conjectured to be due in part to missing variables. Three short-term market response measures are calculated from the model: regular shelf price elasticity of share, percent share increase from a promotion with a median price cut, and promotional price cut elasticity of share. Responsiveness varies across brand-sizes in a systematic way with large share brand-sizes showing less responsiveness in percentage terms. On the basis of the model a quantitative picture emerges of groups of loyal customers who are relatively insensitive to marketing actions and a pool of switchers who are quite responsive."} {"_id": "314aeefaf90a3322e9ca538c6b5f8a02fbb256bc", "title": "Global H\u221e Consensus of Multi-Agent Systems with Lipschitz Nonlinear Dynamics", "text": "This paper addresses the global consensus problems of a class of nonlinear multi-agent systems with Lipschitz nonlinearity and directed communication graphs, by using a distributed consensus protocol based on the relative states of neighboring agents. A two-step algorithm is presented to construct a protocol, under which a Lipschitz multi-agent system without disturbances can reach global consensus for a strongly connected directed communication graph. Another algorithm is then given to design a protocol which can achieve global consensus with a guaranteed H\u221e performance for a Lipschitz multi-agent system subject to external disturbances. The case with a leader-follower communication graph is also discussed. Finally, the effectiveness of the theoretical results is demonstrated through a network of single-link manipulators."} {"_id": "3e08a3912ebe494242f6bcd772929cc65307129c", "title": "Few-Shot Image Recognition by Predicting Parameters from Activations", "text": "In this paper, we are interested in the few-shot learning problem. In particular, we focus on a challenging scenario where the number of categories is large and the number of examples per novel category is very limited, e.g. 1, 2, or 3. Motivated by the close relationship between the parameters and the activations in a neural network associated with the same category, we propose a novel method that can adapt a pre-trained neural network to novel categories by directly predicting the parameters from the activations.
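The multinomial logit brand-choice model above is, computationally, a softmax over linear utilities with coefficients shared across brand-sizes. A toy sketch follows, with invented coefficients and alternatives.

```python
import numpy as np

# Toy multinomial logit: utility is linear in loyalty, promotion, price,
# and price cut, with one shared coefficient per variable across
# brand-sizes, mirroring the parsimony the abstract describes.
BETA = {"loyalty": 3.0, "promo": 1.2, "price": -2.0, "cut": 1.5}

def choice_shares(alts):
    u = np.array([
        BETA["loyalty"] * a["loyalty"] + BETA["promo"] * a["promo"]
        + BETA["price"] * a["price"] + BETA["cut"] * a["cut"]
        for a in alts
    ])
    e = np.exp(u - u.max())          # softmax = multinomial logit shares
    return e / e.sum()

alternatives = [
    {"loyalty": 0.8, "promo": 0, "price": 2.49, "cut": 0.0},   # usual brand
    {"loyalty": 0.1, "promo": 1, "price": 2.29, "cut": 0.3},   # promoted rival
    {"loyalty": 0.1, "promo": 0, "price": 1.99, "cut": 0.0},   # cheap brand
]
print(choice_shares(alternatives))   # predicted purchase probabilities
```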
Zero training is required in adaptation to novel categories, and fast inference is realized by a single forward pass. We evaluate our method on few-shot image recognition on the ImageNet dataset, where it achieves state-of-the-art classification accuracy on novel categories by a significant margin while keeping comparable performance on the large-scale categories. We also test our method on the MiniImageNet dataset and it strongly outperforms the previous state-of-the-art methods."} {"_id": "5c782aafeecd658558b64acacad18cbefba86f2e", "title": "Field test of autonomous loading operation by wheel loader", "text": "The authors have been conducting research on an autonomous system for loading operation by wheel loader. Experimental results at a field test site using a full-size model (length: 6.1 m) are described in this paper. The basic structure of the system consists of three subsystems: measuring and modeling of the environment, task planning, and motion control. The experimental operation includes four cycles of scooping and loading onto a dump truck. The experimental results prove that the developed system performed the autonomous operation smoothly and completed the mission."} {"_id": "a14f69985e19456681bc874310e7166528637bed", "title": "Feature-Oriented Software Product Lines", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance."} {"_id": "58fa283251f2f1fb5932cd9f5619b7299c29afba", "title": "DaTuM: Dynamic tone mapping technique for OLED display power saving based on video classification", "text": "The adoption of the latest OLED (organic light emitting diode) technology does not change the fact that the screen is still one of the most energy-consuming modules in modern smartphones. In this work, we found that video streams from the same video category share many common power consumption features on OLED screens. Therefore, we are able to build a Hidden Markov Model (HMM) classifier to categorize videos based on OLED screen power characteristics. Using this HMM classifier, we propose a video classification based dynamic tone mapping (DTM) scheme, namely, DaTuM, to remap output color range and minimize the power-hungry color compositions on OLED screens for power saving. Experiments show that the DaTuM scheme reduces OLED screen power by 17.8% on average with minimum display quality degradation. Compared to a DTM scheme based on the official category info provided by the video sources and one state-of-the-art scheme, DaTuM substantially enhances OLED screens' power efficiency and display quality controllability."} {"_id": "bd5756b67109fd5c172b885921952b8f5fd5a944", "title": "Coordinate Noun Phrase Disambiguation in a Generative Parsing Model", "text": "In this paper we present methods for improving the disambiguation of noun phrase (NP) coordination within the framework of a lexicalised history-based parsing model. As well as reducing noise in the data, we look at modelling two main sources of information for disambiguation: symmetry in conjunct structure, and the dependency between conjunct lexical heads.
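One crude reading of "predicting parameters from activations" in the few-shot abstract above is to set a novel category's classifier weight to the normalized mean activation of its support examples and classify by cosine similarity. The sketch below does exactly that; the dimensions and data are invented, and the paper learns a more general activation-to-parameter mapping than this mean-based special case.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2(v):
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-9)

dim, base_classes = 64, 10
W = l2(rng.standard_normal((base_classes, dim)))      # stand-in pre-trained weights

novel_support = rng.standard_normal((3, dim)) + 2.0   # 3 shots of one novel class
w_novel = l2(novel_support.mean(axis=0))              # "predicted" parameters
W_ext = np.vstack([W, w_novel])                       # extend the classifier

query = l2(rng.standard_normal(dim) + 2.0)            # query from the novel class
print(int(np.argmax(W_ext @ query)))                  # 10 -> the appended class
```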
Our changes to the baseline model result in an increase in NP coordination dependency f-score from 69.9% to 73.8%, which represents a relative reduction in f-score error of 13%."} {"_id": "40414d71c9706806960f6551a1ff53cc87488899", "title": "The max-min hill-climbing Bayesian network structure learning algorithm", "text": "We present a new algorithm for Bayesian network structure learning, called Max-Min Hill-Climbing (MMHC). The algorithm combines ideas from local learning, constraint-based, and search-and-score techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. In our extensive empirical evaluation, MMHC outperforms, on average and across various metrics, several prototypical and state-of-the-art algorithms, namely the PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search. These are the first empirical results simultaneously comparing most of the major Bayesian network algorithms against each other. MMHC offers certain theoretical advantages, specifically over the Sparse Candidate algorithm, corroborated by our experiments. MMHC and detailed results of our study are publicly available at http://www.dsl-lab.org/supplements/mmhc_paper/mmhc_index.html."} {"_id": "e17511e875e49ddd6b7ed14428771513a4c6155a", "title": "Ordering-Based Search: A Simple and Effective Algorithm for Learning Bayesian Networks", "text": "One of the basic tasks for Bayesian networks (BNs) is that of learning a network structure from data. The BN-learning problem is NP-hard, so the standard solution is heuristic search. Many approaches have been proposed for this task, but only a very small number outperform the baseline of greedy hill-climbing with tabu lists; moreover, many of the proposed algorithms are quite complex and hard to implement. In this paper, we propose a very simple and easy-to-implement method for addressing this task. Our approach is based on the well-known fact that the best network (of bounded in-degree) consistent with a given node ordering can be found very efficiently. We therefore propose a search not over the space of structures, but over the space of orderings, selecting for each ordering the best network consistent with it. This search space is much smaller, makes more global search steps, has a lower branching factor, and avoids costly acyclicity checks. We present results for this algorithm on both synthetic and real data sets, evaluating both the score of the network found and the running time. We show that ordering-based search outperforms the standard baseline, and is competitive with recent algorithms that are much harder to implement."} {"_id": "1e42647ecb5c88266361c2e6ef785eeadf8dc9c3", "title": "Learning Bayesian Networks: The Combination of Knowledge and Statistical Data", "text": ""} {"_id": "6a3d5f585c64fc1903ea573232d82240b0d11e62", "title": "Game private networks performance: analytical models for very-large scale simulation", "text": "WTFast's Gamers Private Network (GPN\u00ae) is a client/server solution that makes online games faster. GPN\u00ae connects online video-game players with a common game service across a wide-area network. Online games are interactive competitions by individual players who compete in a virtual environment. Response time, latency, and latency predictability are key to GPN\u00ae's success and run up against the vast complexity of internet-wide systems.
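The ordering-based search abstract above exploits the fact that, given a node ordering, the best bounded-in-degree parent set can be chosen per node independently from its predecessors. Below is a minimal sketch with a BIC-style score on toy binary data; the score, in-degree bound, and adjacent-swap moves are simplifying assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, (500, 4))
X[:, 2] = X[:, 0] ^ X[:, 1]                 # node 2 truly depends on 0 and 1
n, d = X.shape

def family_score(child, parents):
    """Bernoulli log-likelihood of child given parent configs, BIC penalty."""
    cfg = X[:, parents] @ (2 ** np.arange(len(parents))) if parents else np.zeros(n, int)
    ll = 0.0
    for c in np.unique(cfg):
        rows = X[cfg == c, child]
        p = np.clip(rows.mean(), 1e-6, 1 - 1e-6)
        ll += rows.size * (rows.mean() * np.log(p) + (1 - rows.mean()) * np.log(1 - p))
    return ll - 0.5 * np.log(n) * (2 ** len(parents))

def best_network(order, max_parents=2):
    # Given an ordering, each node's best parent set is chosen independently.
    total = 0.0
    for i, v in enumerate(order):
        choices = [ps for k in range(max_parents + 1)
                   for ps in itertools.combinations(order[:i], k)]
        total += max(family_score(v, list(ps)) for ps in choices)
    return total

order, score = list(range(d)), best_network(list(range(d)))
improved = True
while improved:                              # hill-climb over orderings
    improved = False
    for i in range(d - 1):
        cand = order[:i] + [order[i + 1], order[i]] + order[i + 2:]
        s = best_network(cand)
        if s > score:
            order, score, improved = cand, s, True
print(order, round(score, 1))
```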
We have built an experimental network of virtualized GPN\u00ae components so as to carefully measure the statistics of latency for distributed Minecraft games and to do so in a controlled laboratory environment. This has led to a better understanding of the coupling between parameters such as the number of players, the subset of players that are idle or active, the volume of packets exchanged, the size of packets, latency to and from the game servers, and time-series for most of those parameters. In this paper we present a mathematical model of those game and network system parameters and show how it leads to: (1) realistic simulation of each of those network or game parameters, without relying on the experimental setup; (2) very large-scale numerical simulation of the game setup so as to explore various internet-wide performance scenarios that (a) are impossible to isolate from internet \u201cnoise\u201d in their real environment, and (b) would require vast supercomputing resources if they were to be simulated exhaustively. We motivate all elements of our mathematical model and estimate the savings in computational costs they will bring for very large-scale simulation of the GPN\u00ae. Such simulations will improve quality of service for GPN\u00ae systems and their reliability."} {"_id": "dc4fe639012048e98eb14500e41b04275a43b192", "title": "Recursive circle packing problems", "text": "This paper presents a class of packing problems where circles may be placed either inside or outside other circles, the whole set being packed in a rectangle. This corresponds to a practical problem of packing tubes in a container; before being inserted in the container, tubes may be put inside other tubes in a recursive fashion. A variant of the greedy randomized adaptive search procedure is proposed for tackling this problem, and its performance assessed in a set of benchmark instances."} {"_id": "8d701bc4b2853739de4e752d879296608119a65c", "title": "A Hybrid Fragmentation Approach for Distributed Deductive Database Systems", "text": "Fragmentation of base relations in distributed database management systems increases the level of concurrency and therefore system throughput for query processing. Algorithms for horizontal and vertical fragmentation of relations in relational, object-oriented and deductive databases exist; however, hybrid fragmentation techniques based on variable bindings appearing in user queries and query-access-rule dependency are lacking for deductive database systems. In this paper, we propose a hybrid fragmentation approach for distributed deductive database systems. Our approach first considers the horizontal partition of base relations according to the bindings imposed on user queries, and then generates vertical fragments of the horizontally partitioned relations and clusters rules using affinity of attributes and access frequency of queries and rules. The proposed fragmentation technique facilitates the design of distributed deductive database systems."} {"_id": "b7975962664a04a39a24f1f068f5aaadfb5c208d", "title": "Risks and benefits of social media for children and adolescents.", "text": "Resources \u2022 AAP Internet safety resources site, http://safetynet.aap.org \u2022 CDC Social Media Tools Guidelines & Best Practices site, http://www.cdc.gov/SocialMedia/Tools/guidelines/ \u2022 Common Sense Media (2009). Is technology changing childhood? A national poll on teens and social networking. Retrieved from http://www.commonsensemedia.org/teensocial-media.
FACEBOOK, TWITTER, AND YouTube bring benefits to children and teenagers, including enhancing communication, broadening social connections, and learning technical skills, but can also expose them to risks, such as cyberbullying, \u201cFacebook depression,\u201d and \u201csexting,\u201d according to a new report by the American Academy of Pediatrics (AAP; O'Keeffe, Clarke-Pearson, & Council on Communications and Media, 2011). The new report outlines the latest research on one of the most common activities among today's children and teenagers and describes how health care providers can help families understand these sites and encourage their healthy use. The report considers any site that allows social interaction as a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. The abundance of these sites has grown exponentially in recent years. According to a poll cited in the report, more than half of American teenagers log onto their favorite social media site at least once a day, whereas 22% do so at least 10 times a day; 75% of teenagers now own cell phones, with 54% of them using them for texting, 24% for instant messaging, and 25% for social media access. According to the authors of the new report, the increase in social media has been so rapid and its presence in children's everyday life is now so pervasive that, for some teens, social media is the primary way they interact socially, and a large part of this generation's social and emotional development is occurring while on the Internet and on cell phones. Because of their limited capacity for self-regulation and susceptibility to peer pressure,"} {"_id": "de5ad10b0513a2641a6c13e3fe777f4f42c21bbe", "title": "Raspberry Pi based interactive home automation system through E-mail", "text": "Home automation is becoming more and more popular day by day due to its numerous advantages. This can be achieved by local networking or by remote control. This paper aims at designing a basic home automation application on Raspberry Pi by reading the subject of an e-mail; the algorithm for this has been developed in Python, the default programming environment provided by Raspberry Pi. Results show the efficient implementation of the proposed algorithm for home automation. LEDs were used to indicate the switching action."} {"_id": "8da810635b63657edac71799bdb5fdb981fffd6e", "title": "On the privacy-utility tradeoff in participatory sensing systems", "text": "The ubiquity of sensor-equipped mobile devices has enabled citizens to contribute data via participatory sensing systems. This emergent paradigm comes with various applications to improve users' quality of life. However, the data collection process may compromise the participants' privacy when reporting data tagged or correlated with their sensitive information. Therefore, anonymization and location cloaking techniques have been designed to provide privacy protection, yet at some cost to data utility, which is a major concern for queriers. Different from past works, we assess simultaneously the two competing goals of ensuring the queriers' required data utility and protecting the participants' privacy. First, we introduce a trustworthy entity to the traditional participatory sensing system.
Also, we propose a general privacy-preserving mechanism that runs on this entity to release a distorted version of the sensed data in order to minimize the leakage of the associated private information. We demonstrate how to identify a near-optimal solution to the privacy-utility tradeoff by maximizing a privacy score while considering a utility metric set by data queriers (service providers). Furthermore, we tackle the challenge of data with large alphabets by investigating quantization techniques. Finally, we evaluate the proposed model on three different real datasets while varying the prior knowledge and the obfuscation type. The obtained results demonstrate that, for different applications, a limited distortion may ensure the participants' privacy while maintaining about 98% of the required data utility."} {"_id": "11b76a652c367ae549bcd5b82ab4a89629f937f4", "title": "On the statistics of natural stochastic textures and their application in image processing", "text": "Statistics of natural images has become an important subject of research in recent years. The highly kurtotic, non-Gaussian statistics known to be characteristic of many natural images are exploited in various image processing tasks. In this paper, we focus on natural stochastic textures (NST) and substantiate our finding that NST have Gaussian statistics. Using the well-known statistical self-similarity property of natural images, exhibited even more profoundly in NST, we exploit a Gaussian self-similar process known as the fractional Brownian motion, to derive a fBm-PDE-based single-image super-resolution scheme for textured images. Using the same process as a prior, we also apply it in denoising of NST."} {"_id": "9fdb60d4b2db75838513f3409e83ebc25cc02e3d", "title": "JSpIRIT: a flexible tool for the analysis of code smells", "text": "Code smells are a popular mechanism to identify structural design problems in software systems. Since it is generally not feasible to fix all the smells arising in the code, some of them are often postponed by developers to be resolved in the future. One reason for this decision is that the improvement of the code structure, to achieve modifiability goals, requires extra effort from developers. Therefore, they might not always spend this additional effort, particularly when they are focused on delivering customer-visible features. This postponement of code smells is seen as a source of technical debt. Furthermore, not all the code smells may be urgent to fix in the context of the system's modifiability and business goals. While there are a number of tools to detect smells, they do not allow developers to discover the most urgent smells according to their goals. In this article, we present a flexible tool to prioritize technical debt in the form of code smells. The tool is flexible to allow developers to add new smell detection strategies and to prioritize smells, and groups of smells, based on the configuration of their manifold criteria. To illustrate this flexibility, we present an application example of our tool. The results suggest that our tool can be easily extended to be aligned with the developer's goals."} {"_id": "074c2c9a0cd64fe036748bb0e2bb2a32d1e612e4", "title": "Asymmetrical duty cycle permits zero switching loss in PWM circuits with no conduction loss penalty", "text": "Operating a bridge-type PWM (pulse width modulation) switch mode power converter with asymmetrical duty ratios can eliminate switching losses with no increase in conduction loss.
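The smell-prioritization idea in the JSpIRIT abstract above can be pictured as a configurable weighted ranking. Everything below, the smells, criteria, and weights, is invented for illustration; it is not the tool's API.

```python
# Toy criteria-based ranking of code smells: each detected smell is
# scored by a weighted sum of configurable criteria, so the most urgent
# technical debt surfaces first. Weights stand in for developer goals.
WEIGHTS = {"change_frequency": 0.5, "fan_in": 0.3, "size": 0.2}

smells = [
    {"name": "GodClass:OrderManager", "change_frequency": 0.9, "fan_in": 0.8, "size": 0.9},
    {"name": "FeatureEnvy:Invoice.total", "change_frequency": 0.2, "fan_in": 0.4, "size": 0.1},
    {"name": "LongMethod:Report.render", "change_frequency": 0.6, "fan_in": 0.2, "size": 0.7},
]

def urgency(smell):
    return sum(w * smell[c] for c, w in WEIGHTS.items())

for s in sorted(smells, key=urgency, reverse=True):
    print(f"{urgency(s):.2f}  {s['name']}")
```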
This circuit topology combines the best features of resonant (zero switching loss) and switch mode (low conduction loss) circuits. Design equations are presented for just such a circuit."} {"_id": "503035e205654af479d5be4c4b700dcae5a66913", "title": "Reproducibility of quantitative tractography methods applied to cerebral white matter", "text": "Tractography based on diffusion tensor imaging (DTI) allows visualization of white matter tracts. In this study, protocols to reconstruct eleven major white matter tracts are described. The protocols were refined by several iterations of intra- and inter-rater measurements and identification of sources of variability. Reproducibility of the established protocols was then tested by raters who did not have previous experience in tractography. The protocols were applied to a DTI database of adult normal subjects to study size, fractional anisotropy (FA), and T2 of individual white matter tracts. Distinctive features in FA and T2 were found for the corticospinal tract and callosal fibers. Hemispheric asymmetry was observed for the size of white matter tracts projecting to the temporal lobe. This protocol provides guidelines for reproducible DTI-based tract-specific quantification."} {"_id": "771a1d5d3375b1b04cb22161193e0b72b98c6088", "title": "Threshold-Dependent Camouflaged Cells to Secure Circuits Against Reverse Engineering Attacks", "text": "With current tools and technology, someone who has physical access to a chip can extract the detailed layout of the integrated circuit (IC). By using advanced visual imaging techniques, reverse engineering can reveal details that are meant to be kept secret, such as a secure protocol or novel implementation that offers a competitive advantage. A promising solution to defend against reverse engineering attacks is IC camouflaging. In this work, we propose a new camouflaging technique based on the threshold voltage of the transistors. We refer to these cells as threshold dependent camouflaged cells. Our work differs from current commercial solutions in that the latter use look-alike cells, with the assumption that it is difficult for the reverse engineer to identify the cell's functionality. Yet, if a structural distinction between cells exists, then these are still vulnerable, especially as reverse engineers use more advanced and precise techniques. On the other hand, the proposed threshold dependent standard cells are structurally identical regardless of the cells' functionality. Detailed circuit simulations of our proposed threshold dependent camouflaged cells demonstrate that they can be used to cost-effectively and robustly camouflage large netlists. Corner analysis of process, temperature, and supply voltage (PVT) variations shows that our cells operate as expected over all PVT corners simulated."} {"_id": "5b59feb5ac67ae6f852b84337179da51202764dc", "title": "Yacc is dead", "text": "We present two novel approaches to parsing context-free languages. The first approach is based on an extension of Brzozowski\u2019s derivative from regular expressions to context-free grammars. The second approach is based on a generalization of the derivative to parser combinators. The payoff of these techniques is a small (less than 250 lines of code), easy-to-implement parsing library capable of parsing arbitrary context-free grammars into lazy parse forests. Implementations for both Scala and Haskell are provided.
Preliminary experiments with S-Expressions parsed millions of tokens per second, which suggests this technique is efficient enough for use in practice. 1 Top-down motivation: End cargo cult parsing \u201cCargo cult parsing\u201d is a plague upon computing. Cargo cult parsing refers to the use of \u201cmagic\u201d regular expressions\u2014often cut and pasted directly from Google search results\u2014to parse languages which ought to be parsed with context-free grammars. Such parsing has two outcomes. In the first case, the programmer produces a parser that works \u201cmost of the time\u201d because the underlying language is fundamentally irregular. In the second case, some domain-specific language ends up with a mulish syntax, because the programmer has squeezed the language through the regular bottleneck. There are two reasons why regular expressions are so abused while context-free languages remain forsaken: (1) regular expression libraries are available in almost every language, while parsing libraries and toolkits are not, and (2) regular expressions are \u201cWYSIWYG\u201d\u2014the language described is the language that gets matched\u2014whereas parser-generators are WYSIWYGIYULR(k)\u2014\u201cwhat you see is what you get if you understand LR(k).\u201d To end cargo-cult parsing, we need a new approach to parsing that: 1. handles arbitrary context-free grammars; 2. parses efficiently on average; and 3. can be implemented as a library with little effort. The end goals of the three conditions are simplicity, feasibility and ubiquity. The \u201carbitrary context-free grammar\u201d condition is necessary because programmers will be less inclined to use a tool that forces them to learn or think about LL/LR arcana. It is hard for compiler experts to imagine, but the constraints on LL/LR grammars are (far) beyond the grasp of the average programmer. [Footnote: The term \u201ccargo cult parsing\u201d is due to Larry Wall, 19 June 2008, Google Tech Talk.] [Of course, arbitrary context-free grammars bring ambiguity, which means the parser must be prepared to return a parse forest rather than a parse tree.] The \u201cefficient parsing\u201d condition is necessary because programmers will avoid tools branded as inefficient (however justly or unjustly this label has been applied). Specifically, a parser needs to have roughly linear behavior on average. Because ambiguous grammars may yield an exponential number of parse trees, parse trees must be produced lazily, and each parse tree should be paid for only if it is actually produced. The \u201ceasily implemented\u201d condition is perhaps most critical. It must be the case that a programmer could construct a general parsing toolkit if their language of choice doesn\u2019t yet have one. If this condition is met, it is reasonable to expect that proper parsing toolkits will eventually be available for every language. When proper parsing toolkits and libraries remain unavailable for a language, cargo cult parsing prevails. This work introduces parsers based on the derivative of context-free languages and upon the derivative of parser combinators. Parsers based on derivatives meet all of the aforementioned requirements: they accept arbitrary grammars, they produce parse forests efficiently (and lazily), and they are easy to implement (less than 250 lines of Scala code for the complete library). Derivative-based parsers also avoid the precompilation overhead of traditional parser generators; this cost is amortized (and memoised) across the parse itself.
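For readers new to the derivative machinery this text builds on, here is a tiny Brzozowski-derivative matcher for regular expressions, the base case that the paper extends to context-free grammars and parser combinators. The tuple encoding of regexes is this sketch's own choice.

```python
# Brzozowski derivative: D_c(L) = { w | cw in L }, and w is in L iff the
# empty string is in the derivative of L by each character of w in turn.
# Regexes are tuples: ('empty',), ('eps',), ('chr', c), ('alt', r, s),
# ('seq', r, s), ('star', r).

def nullable(r):
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag in ('empty', 'chr'):
        return False
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])        # seq

def deriv(r, c):
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'chr':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'seq':                                # D(rs) = D(r)s | [r nullable] D(s)
        d = ('seq', deriv(r[1], c), r[2])
        return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d
    return ('seq', deriv(r[1], c), r)               # D(r*) = D(r) r*

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

ab_star = ('star', ('seq', ('chr', 'a'), ('chr', 'b')))   # (ab)*
print(matches(ab_star, 'abab'), matches(ab_star, 'aba'))  # True False
```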
In addition, derivative-based parsers can be modified mid-parse, which makes it conceivable that a language could modify its own syntax at compile- or run-time. 2 Bottom-up motivation: Generalizing the derivative Brzozowski defined the derivative of regular expressions in 1964 [1]. This technique was lost to the \u201csands of time\u201d until Owens, Reppy and Turon recently revived it to show that derivative-based lexer-generators were easier to write, more efficient and more flexible than the classical regex-to-NFA-to-DFA generators [15]. (Derivative-based lexers allow, for instance, both complement and intersection.) Given the payoff for regular languages, it is natural to ask whether the derivative, and its benefits, extend to context-free languages, and transitively, to parsing. As it turns out, they do. We will show that context-free languages are closed under the derivative\u2014the critical property needed for parsing. We will then show that context-free parser combinators are also closed under a generalization of the derivative. The net impact is that we will be able to write a derivative-based parser combinator library in under 250 lines of Scala, capable of producing a lazy parse forest for any context-free grammar. [Footnote: The second author on this paper, an undergraduate student, completed the implementation for Haskell in less than a week.]"} {"_id": "5118a15588e670031f40b72f418e21a8f47d8727", "title": "Sketches by Paul the Robot", "text": "In this paper we describe Paul, a robotic installation that produces observational sketches of people. The sketches produced have been considered of interest by fine art professionals in recent art fairs and exhibitions, as well as by the public at large. We identify factors that may account for the perceived qualities of the produced sketches. A technical overview of the system is also presented."} {"_id": "2c8bde8c34468774f2a070ac3e14a6b97f877d13", "title": "A Hierarchical Neural Model for Learning Sequences of Dialogue Acts", "text": "We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model."} {"_id": "3ebc2b0b4168b20f920074262577a3b8977dce83", "title": "Delta-Sigma FDC Based Fractional-N PLLs", "text": "Fractional-N phase-locked loop frequency synthesizers based on time-to-digital converters (TDC-PLLs) have been proposed to reduce the area and linearity requirements of conventional PLLs based on delta-sigma modulation and charge pumps (\u0394\u03a3-PLLs). Although TDC-PLLs with good performance have been demonstrated, TDC quantization noise has so far kept their phase noise and spurious tone performance below that of the best comparable \u0394\u03a3-PLLs. An alternative approach is to use a delta-sigma frequency-to-digital converter (\u0394\u03a3 FDC) in place of a TDC to retain the benefits of TDC-PLLs and \u0394\u03a3-PLLs.
This paper proposes a practical \u0394\u03a3 FDC based PLL in which the quantization noise is equivalent to that of a \u0394\u03a3-PLL. It presents a linearized model of the PLL, design criteria to avoid spurious tones in the \u0394\u03a3 FDC quantization noise, and a design methodology for choosing the loop parameters in terms of standard PLL target specifications."} {"_id": "3d2626090650f79d62b6e17f1972f3abe9c4bdcc", "title": "Distributed generation intelligent islanding detection using governor signal clustering", "text": "One of the major protection concerns with distribution networks comprising distributed generation is the unintentional islanding phenomenon. An expert diagnosis system is needed to distinguish network cut-off from normal occurrences. An important part of a synchronous generator is the automatic load-frequency controller (ALFC). In this paper, a new approach based on clustering of the input signal to the governor is introduced. A self-organizing map (SOM) neural network is used to identify and classify islanding and non-islanding phenomena. Simulation results show that the input signal to the governor has different characteristics under islanding conditions and other disturbances. In addition, the SOM is able to identify and classify phenomena satisfactorily. Using the proposed method, islanding can be detected after 200 ms."} {"_id": "669dbee1fd07c8193c3c0c7b4c00767e3d311ab5", "title": "Function-Hiding Inner Product Encryption Is Practical", "text": "In a functional encryption scheme, secret keys are associated with functions and ciphertexts are associated with messages. Given a secret key for a function f and a ciphertext for a message x, a decryptor learns f(x) and nothing else about x. Inner product encryption is a special case of functional encryption where both secret keys and ciphertexts are associated with vectors. The combination of a secret key for a vector x and a ciphertext for a vector y reveals \u3008x,y\u3009 and nothing more about y. An inner product encryption scheme is function-hiding if the keys and ciphertexts reveal no additional information about both x and y beyond their inner product. In the last few years, there has been a flurry of works on the construction of function-hiding inner product encryption, starting with the work of Bishop, Jain, and Kowalczyk (Asiacrypt 2015) to the more recent work of Tomida, Abe, and Okamoto (ISC 2016). In this work, we focus on the practical applications of this primitive. First, we show that the parameter sizes and the run-time complexity of the state-of-the-art construction can be further reduced by another factor of 2, though we compromise by proving security in the generic group model. We then show that function privacy enables a number of applications in biometric authentication, nearest-neighbor search on encrypted data, and single-key two-input functional encryption for functions over small message spaces. Finally, we evaluate the practicality of our encryption scheme by implementing our function-hiding inner product encryption scheme. Using our construction, encryption and decryption operations for vectors of length 50 complete in a tenth of a second in a standard desktop environment."} {"_id": "16207aded3bbeb6ad28aceb89e46c5921372fb13", "title": "Characterising the Behaviour of IEEE 802.11 Broadcast Transmissions in Ad Hoc Wireless LANs", "text": "This paper evaluates the performance of IEEE 802.11 broadcast traffic under both saturation and nonsaturation conditions.
The evaluation highlights some important characteristics of IEEE 802.11 broadcast traffic as compared to corresponding unicast traffic. Moreover, it underlines the inaccuracy of the broadcast saturation model proposed by Ma and Chen due to the absence of the backoff counter freeze process when the channel is busy. Computer simulations are used to validate the accuracy of the new model and demonstrate the importance of capturing the freezing of the backoff counter in the analytical study of IEEE 802.11 broadcast."} {"_id": "538578f986cf3a5d888b4a1d650f875d4710b0a7", "title": "Brain development during childhood and adolescence: a longitudinal MRI study", "text": "Pediatric neuroimaging studies, up to now exclusively cross-sectional, identify linear decreases in cortical gray matter and increases in white matter across ages 4 to 20. In this large-scale longitudinal pediatric neuroimaging study, we confirmed linear increases in white matter, but demonstrated nonlinear changes in cortical gray matter, with a preadolescent increase followed by a postadolescent decrease. These changes in cortical gray matter were regionally specific, with developmental curves for the frontal and parietal lobe peaking at about age 12 and for the temporal lobe at about age 16, whereas cortical gray matter continued to increase in the occipital lobe through age 20."} {"_id": "dd7019c0444ce0765cb7f4af6a5a5023b0c7ba12", "title": "Risk Factors in Adolescence: The Case of Gambling, Videogame Playing, and the Internet", "text": "It has been noted that adolescents may be more susceptible to pathological gambling. Not only is it usually illegal, but it appears to be related to high levels of problem gambling and other delinquent activities such as illicit drug taking and alcohol abuse. This paper examines risk factors not only in adolescent gambling but also in videogame playing (which shares many similarities with gambling). There appear to be three main forms of adolescent gambling that have been widely researched. Adolescent gambling activities and general risk factors in adolescent gambling are provided. As well, the influence of technology on adolescents in the form of both videogames and the Internet is examined. It is argued that technologically advanced forms of gambling may be highly appealing to adolescents."} {"_id": "4775dff7528d0952a75af40ffe9900ed3c8874d8", "title": "Automated Data Structure Generation: Refuting Common Wisdom", "text": "Common wisdom in the automated data structure generation community states that declarative techniques have better usability than imperative techniques, while imperative techniques have better performance. We show that this reasoning is fundamentally flawed: if we go to the declarative limit and employ constraint logic programming (CLP), the CLP data structure generation has orders of magnitude better performance than comparable imperative techniques. Conversely, we observe and argue that when it comes to realistically complex data structures and properties, the CLP specifications become more obscure, indirect, and difficult to implement and understand than their imperative counterparts. We empirically evaluate three competing generation techniques, CLP, Korat, and UDITA, to validate these observations on more complex and interesting data structures than any prior work in this area.
We explain why these observations are true, and discuss possible techniques for attaining the best of both worlds."} {"_id": "77e10f17c7eeebe37f7ca046dfb616c8ac994995", "title": "Simple Bayesian Algorithms for Best Arm Identification", "text": "This paper considers the optimal adaptive allocation of measurement effort for identifying the best among a finite set of options or designs. An experimenter sequentially chooses designs to measure and observes noisy signals of their quality with the goal of confidently identifying the best design after a small number of measurements. I propose three simple Bayesian algorithms for adaptively allocating measurement effort. One is Top-Two Probability sampling, which computes the two designs with the highest posterior probability of being optimal, and then randomizes to select among these two. Another is a variant of top-two sampling, which considers not only the probability that a design is optimal but also the expected amount by which its quality exceeds that of other designs. The final algorithm is a modified version of Thompson sampling that is tailored for identifying the best design. I prove that these simple algorithms satisfy a strong optimality property. In a frequentist setting where the true quality of the designs is fixed, one hopes the posterior definitively identifies the optimal design, in the sense that the posterior probability assigned to the event that some other design is optimal converges to zero as measurements are collected. I show that under the proposed algorithms this convergence occurs at an exponential rate, and the corresponding exponent is the best possible among all allocation rules."} {"_id": "eb65824356478ad59c927a565de2e467a3f0c67a", "title": "Cloudlet-based just-in-time indexing of IoT video", "text": "As video cameras proliferate, the ability to scalably capture and search their data becomes important. Scalability is improved by performing video analytics on cloudlets at the edge of the Internet, and only shipping extracted index information and meta-data to the cloud. In this setting, we describe interactive data exploration (IDE), which refers to human-in-the-loop content-based retrospective search using predicates that may not have been part of any prior indexing. We also describe a new technique called just-in-time indexing (JITI) that improves response times in IDE."} {"_id": "537abf102ed05a373eb227151b7423306c9e5d3c", "title": "Intrinsically Motivated Affordance Discovery and Modeling", "text": "In this chapter, we argue that a single intrinsic motivation function for affordance discovery can guide long-term learning in robot systems. To these ends, we provide a novel definition of \u201caffordance\u201d as the latent potential for the closed-loop control of environmental stimuli perceived by sensors. Specifically, the proposed intrinsic motivation function rewards the discovery of such control affordances. We will demonstrate how this function has been used by a humanoid robot to learn a number of general purpose control skills that address many different tasks. These skills, for example, include strategies for finding, grasping, and placing simple objects.
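A minimal sketch of the Top-Two Probability sampling idea described above, for Bernoulli designs with Beta posteriors. The Monte Carlo estimate of each design's posterior probability of being optimal, the tuning constant `beta`, and the true design qualities are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.6, 0.62])   # hypothetical design qualities
a = np.ones(4)   # Beta posterior: 1 + observed successes per design
b = np.ones(4)   # Beta posterior: 1 + observed failures per design
beta = 0.5       # probability of measuring the top-one rather than the top-two

for t in range(2000):
    # Monte Carlo estimate of P(design i is optimal) under the current posterior.
    draws = rng.beta(a, b, size=(512, 4))
    p_best = np.bincount(draws.argmax(axis=1), minlength=4) / 512
    top_one, top_two = np.argsort(p_best)[-1], np.argsort(p_best)[-2]
    # Randomize between the two most promising designs.
    arm = top_one if rng.random() < beta else top_two
    reward = rng.random() < true_means[arm]
    a[arm] += reward
    b[arm] += 1 - reward

print("posterior means:   ", np.round(a / (a + b), 3))
print("estimated P(best): ", np.round(p_best, 3))
print("identified best arm:", int(np.argmax(a / (a + b))))
```

The randomization between the top two designs is what forces enough measurements of the runner-up to drive the posterior probability of a wrong identification toward zero.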
We further show how this same intrinsic reward function is used to direct the robot to build stable models of when the environment affords these skills."} {"_id": "1ca0911ee19bade27650860ebd904a7dc32c13cb", "title": "An affective computational model for machine consciousness", "text": "In the past, several models of consciousness have become popular and have led to the development of models for machine consciousness with varying degrees of success and challenges for simulation and implementation. Moreover, affective computing attributes that involve emotions, behavior and personality have not been the focus of models of consciousness as they lacked motivation for deployment in software applications and robots. The affective attributes are important factors for the future of machine consciousness with the rise of technologies that can assist humans. Personality and affection hence can give an additional flavor for the computational model of consciousness in humanoid robotics. Recent advances in areas of machine learning with a focus on deep learning can further help in developing aspects of machine consciousness in areas that can better replicate human sensory perceptions such as speech recognition and vision. With such advancements, one encounters further challenges in developing models that can synchronize different aspects of affective computing. In this paper, we review some existing models of consciousness and present an affective computational model that would enable the human touch and feel for robotic systems."} {"_id": "1678a55524be096519b3ea71c9680ba8041a761e", "title": "Mixture Densities, Maximum Likelihood and the EM Algorithm", "text": "The problem of estimating the parameters which determine a mixture density has been the subject of a large, diverse body of literature spanning nearly ninety years. During the last two decades, the method of maximum likelihood has become the most widely followed approach to this problem, thanks primarily to the advent of high speed electronic computers. Here, we first offer a brief survey of the literature directed toward this problem and review maximum-likelihood estimation for it. We then turn to the subject of ultimate interest, which is a particular iterative procedure for numerically approximating maximum-likelihood estimates for mixture density problems. This procedure, known as the EM algorithm, is a specialization to the mixture density context of a general algorithm of the same name used to approximate maximum-likelihood estimates for incomplete data problems. We discuss the formulation and theoretical and practical properties of the EM algorithm for mixture densities, focussing in particular on mixtures of densities from exponential families."} {"_id": "8a1a3f4dcb4ae461c3c0063820811d9c37d8ec75", "title": "Embedded image coding using zerotrees of wavelet coefficients", "text": "The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the \u201cnull\u201d image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly.
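The classic special case of the EM procedure surveyed above, a two-component univariate Gaussian mixture with closed-form M-step updates, can be sketched in a few lines; the data and initialization here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 200)])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

pi = np.array([0.5, 0.5])        # mixing weights
mu = np.array([-1.0, 1.0])       # component means
sigma = np.array([1.0, 1.0])     # component standard deviations

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * np.stack([normal_pdf(x, mu[k], sigma[k]) for k in range(2)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form weighted updates (the exponential-family case).
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", np.round(pi, 3), "means:", np.round(mu, 3), "stds:", np.round(sigma, 3))
```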
Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: 1) a discrete wavelet transform or hierarchical subband decomposition, 2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, 3) entropy-coded successive-approximation quantization, and 4) universal lossless data compression which is achieved via adaptive arithmetic coding."} {"_id": "b82ab6b6885bdd1b06cafd2b3ca892b75352a96b", "title": "A 600MS/s 30mW 0.13\u00b5m CMOS ADC array achieving over 60dB SFDR with adaptive digital equalization", "text": "At high conversion speed, time interleaving provides a viable way of achieving analog-to-digital conversion with low power consumption, especially when combined with the successive-approximation-register (SAR) architecture that is known to scale well in CMOS technology. In this work, we showcase a digital background-equalization technique to treat the path-mismatch problem as well as individual ADC nonlinearities in time-interleaved SAR ADC arrays. This approach was first introduced to calibrate the linearity errors in pipelined ADCs [1], and subsequently extended to treat SAR ADCs [2]. In this prototype, we demonstrate the effectiveness of this technique in a compact SAR ADC array, which achieves 7.5 ENOB and a 65dB SFDR at 600MS/s while dissipating 23.6mW excluding the on-chip DLL, and exhibiting one of the best conversion FOMs among ADCs with similar sample rates and resolutions [3]."} {"_id": "ac22f98cd13926ed25619c6620990c5c875b3948", "title": "LateralPaD: A surface-haptic device that produces lateral forces on a bare finger", "text": "The LateralPaD is a surface haptic device that generates lateral (shear) force on a bare finger, by vibrating the touch surface simultaneously in both out-of-plane (normal) and in-plane (lateral) directions. A prototype LateralPaD has been developed in which the touch surface is glass, and piezoelectric actuators drive normal and lateral resonances at the same ultrasonic frequency (~22.3 kHz). The force that develops on the finger can be controlled by modulating the relative phase of the two resonances. A 2DOF load cell setup is used to characterize the dependence of induced lateral force on vibration amplitude, relative phase, and applied normal force. A Laser Doppler Vibrometer (LDV) is used to measure the motion of the glass surface as well as of the fingertip. Together, these measurements yield insight into the mechanism of lateral force generation. We show evidence for a mechanism dependent on tilted impacts between the LateralPaD and fingertip."} {"_id": "ca5766b91da4903ad6f6d40a5b31a3ead1f7f6de", "title": "A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution", "text": "We address the problem of image upscaling in the form of single image super-resolution based on a dictionary of low- and high-resolution exemplars.
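The embedded, successive-approximation aspect of EZW described above can be illustrated with a toy bitplane coder: magnitude bits are emitted most-significant plane first, so truncating the stream anywhere still decodes to a coarser approximation of the same coefficients. The zerotree prediction and adaptive arithmetic coding of the real algorithm are deliberately omitted, and emitting all sign bits up front is a toy choice.

```python
import numpy as np

def encode_bitplanes(coeffs, n_planes=8):
    """Emit sign bits, then magnitude bits from the MSB plane down."""
    mags = np.abs(coeffs).astype(np.int64)
    bits = []
    bits.extend(int(c < 0) for c in coeffs)            # sign bits first (toy choice)
    for plane in range(n_planes - 1, -1, -1):          # MSB plane -> LSB plane
        bits.extend(int(m >> plane) & 1 for m in mags)
    return bits

def decode_bitplanes(bits, n_coeffs, n_planes=8):
    """Decode as many complete bitplanes as the (possibly truncated) stream holds."""
    signs = np.array(bits[:n_coeffs])
    mags = np.zeros(n_coeffs, dtype=np.int64)
    pos = n_coeffs
    for plane in range(n_planes - 1, -1, -1):
        plane_bits = bits[pos:pos + n_coeffs]
        if len(plane_bits) < n_coeffs:                 # truncated stream: stop early
            break
        mags = mags | (np.array(plane_bits) << plane)
        pos += n_coeffs
    return np.where(signs == 1, -mags, mags)

coeffs = np.array([120, -87, 33, -9, 4, -2, 1, 0])     # stand-in wavelet coefficients
stream = encode_bitplanes(coeffs)
print("full decode:     ", decode_bitplanes(stream, len(coeffs)))
print("truncated decode:", decode_bitplanes(stream[:len(stream) // 2], len(coeffs)))
```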
Two recently proposed methods, Anchored Neighborhood Regression (ANR) and Simple Functions (SF), provide state-of-the-art quality performance. Moreover, ANR is among the fastest known super-resolution methods. ANR learns sparse dictionaries and regressors anchored to the dictionary atoms. SF relies on clusters and corresponding learned functions. We propose A+, an improved variant of ANR, which combines the best qualities of ANR and SF. A+ builds on the features and anchored regressors from ANR but instead of learning the regressors on the dictionary it uses the full training material, similar to SF. We validate our method on standard images and compare with state-of-the-art methods. We obtain improved quality (i.e. 0.2-0.7dB PSNR better than ANR) and excellent time complexity, rendering A+ the most efficient dictionary-based super-resolution method to date."} {"_id": "b118b38c180925f9e5bfda98cc12d03f536c12d4", "title": "Experience Prototyping", "text": "In this paper, we describe \"Experience Prototyping\" as a form of prototyping that enables design team members, users and clients to gain first-hand appreciation of existing or future conditions through active engagement with prototypes. We use examples from commercial design projects to illustrate the value of such prototypes in three critical design activities: understanding existing experiences, exploring design ideas and in communicating design concepts."} {"_id": "211efdc4db415ad42c7c5eb9b0457c84ad23ef30", "title": "Supernumerary nostril: a case report", "text": "BACKGROUND\nSupernumerary nostril is a congenital anomaly that contains an additional nostril with or without accessory cartilage. These rare congenital nasal deformities result from embryological defects. Since 1906, when Lindsay (Trans Pathol Soc Lond. 57:329-330, 1906) published the first report of bilateral supernumerary nostrils, only 34 cases have been reported in the English literature.\n\n\nCASE PRESENTATION\nA 1-year-old female baby was brought to our department for the treatment of an accessory opening above the left nostril which had been present since her birth. Medical history was non-specific and her birth was normal. The supernumerary nostril was about 0.2\u00a0cm in diameter and connected to the left nostril. The right one was normal. A minimal procedure was performed for the anomaly. After 1\u00a0year, rhinoplasty was performed for the nostril asymmetry.\n\n\nCONCLUSIONS\nAt the 1\u00a0year follow-up, the functional and cosmetic result was satisfactory. In this case, early preoperative diagnosis is important. It is also desirable to perform corrective surgery as soon as possible for the patient's psychosocial growth."} {"_id": "4424a9788e70ef4a1d447f6e28c1fc0c7cbdefc0", "title": "ADvanced IMage Algebra (ADIMA): a novel method for depicting multiple\nsclerosis lesion heterogeneity, as demonstrated by quantitative MRI", "text": "BACKGROUND\nThere are modest correlations between multiple sclerosis (MS) disability and white matter lesion (WML) volumes, as measured by T2-weighted (T2w) magnetic resonance imaging (MRI) scans (T2-WML).
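A rough sketch of the anchored-regression idea behind ANR/A+ as described above: one ridge regressor per dictionary atom, with the neighborhood drawn from the full training material in the A+ spirit, and test features routed to their nearest anchor. The synthetic data, the correlation-based neighborhood rule, and all sizes are illustrative simplifications of the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, d_lr, d_hr = 2000, 12, 24     # low-res / high-res feature dimensions
n_anchors, k, lam = 16, 128, 0.1       # anchors, neighborhood size, ridge weight

X = rng.normal(size=(n_train, d_lr))                       # LR features (synthetic)
W_true = rng.normal(size=(d_lr, d_hr))
Y = X @ W_true + 0.05 * rng.normal(size=(n_train, d_hr))   # HR targets (synthetic)

# Anchors: a random subset of normalized training features, standing in for
# learned dictionary atoms.
anchors = X[rng.choice(n_train, n_anchors, replace=False)]
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)

regressors = []
for a in anchors:
    # Neighborhood of the anchor over the full training material (A+ style).
    idx = np.argsort(-(X @ a))[:k]
    Xn, Yn = X[idx], Y[idx]
    # Closed-form ridge regression: (Xn^T Xn + lam*I)^-1 Xn^T Yn
    P = np.linalg.solve(Xn.T @ Xn + lam * np.eye(d_lr), Xn.T @ Yn)
    regressors.append(P)

x_test = rng.normal(size=d_lr)
nearest = np.argmax(anchors @ x_test)          # route feature to its anchor
y_pred = x_test @ regressors[nearest]
print("predicted HR feature vector shape:", y_pred.shape)
```

Precomputing one small regressor per anchor is what makes this family fast at test time: upscaling reduces to a nearest-anchor lookup and one matrix-vector product per patch.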
This may partly reflect pathological heterogeneity in WMLs, which is not apparent on T2w scans.\n\n\nOBJECTIVE\nTo determine if ADvanced IMage Algebra (ADIMA), a novel MRI post-processing method, can reveal WML heterogeneity from proton-density weighted (PDw) and T2w images.\n\n\nMETHODS\nWe obtained conventional PDw and T2w images from 10 patients with relapsing-remitting MS (RRMS) and ADIMA images were calculated from these. We classified all WML into bright (ADIMA-b) and dark (ADIMA-d) sub-regions, which were segmented. We obtained conventional T2-WML and T1-WML volumes for comparison, as well as the following quantitative magnetic resonance parameters: magnetisation transfer ratio (MTR), T1 and T2. Also, we assessed the reproducibility of the segmentation for ADIMA-b, ADIMA-d and T2-WML.\n\n\nRESULTS\nOur study's ADIMA-derived volumes correlated with conventional lesion volumes (p < 0.05). ADIMA-b exhibited higher T1 and T2, and lower MTR than the T2-WML (p < 0.001). Despite the similarity in T1 values between ADIMA-b and T1-WML, these regions were only partly overlapping with each other. ADIMA-d exhibited quantitative characteristics similar to T2-WML; however, they were only partly overlapping. Mean intra- and inter-observer coefficients of variation for ADIMA-b, ADIMA-d and T2-WML volumes were all < 6 % and < 10 %, respectively.\n\n\nCONCLUSION\nADIMA enabled the simple classification of WML into two groups having different quantitative magnetic resonance properties, which can be reproducibly distinguished."} {"_id": "5807664af8e63d5207f59fb263c9e7bd3673be79", "title": "Hybrid speech recognition with Deep Bidirectional LSTM", "text": "Deep Bidirectional LSTM (DBLSTM) recurrent neural networks have recently been shown to give state-of-the-art performance on the TIMIT speech database. However, the results in that work relied on recurrent-neural-network-specific objective functions, which are difficult to integrate with existing large vocabulary speech recognition systems. This paper investigates the use of DBLSTM as an acoustic model in a standard neural network-HMM hybrid system. We find that a DBLSTM-HMM hybrid gives equally good results on TIMIT as the previous work. It also outperforms both GMM and deep network benchmarks on a subset of the Wall Street Journal corpus. However, the improvement in word error rate over the deep network is modest, despite a great increase in frame-level accuracy. We conclude that the hybrid approach with DBLSTM appears to be well suited for tasks where acoustic modelling predominates. Further investigation needs to be conducted to understand how to better leverage the improvements in frame-level accuracy towards better word error rates."} {"_id": "240cc2dbe027400957ed1f8cf8fb092a533c406e", "title": "Design of Multiple-Level Hybrid Classifier for Intrusion Detection System", "text": "As the number of networked computers grows, intrusion detection is an essential component in keeping networks secure. However, constructing and maintaining a misuse detection system is very labor-intensive since attack scenarios and patterns need to be analyzed and categorized, and the corresponding rules and patterns need to be carefully hand-coded. Thus, data mining can be used to ease this inconvenience. This paper proposes a multiple-level hybrid classifier, an intrusion detection system that uses a combination of tree classifiers and clustering algorithms to detect intrusions.
Performance of this new algorithm is compared to other popular approaches such as MADAM ID and 3-level tree classifiers, and significant improvement has been achieved from the viewpoint of both high intrusion detection rate and reasonably low false alarm rate"} {"_id": "379c379e40e6bc5d0f7b8b884a6153585d55b9ca", "title": "Project Lachesis: Parsing and Modeling Location Histories", "text": "A datatype with increasing importance in GIS is what we call the location history\u2013a record of an entity\u2019s location in geographical space over an interval of time. This paper proposes a number of rigorously defined data structures and algorithms for analyzing and generating location histories. Stays are instances where a subject has spent some time at a single location, and destinations are clusters of stays. Using stays and destinations, we then propose two methods for modeling location histories probabilistically. Experiments show the value of these data structures, as well as the possible applications of probabilistic models of location histories."} {"_id": "351a37370f395c3254242a4d8395a89b7072c179", "title": "Extending feature usage: a study of the post-adoption of electronic medical records", "text": "In order to study the factors that influence how professionals use complex systems, we create a tentative research model that builds on PAM and TAM. Specifically we include PEOU and the construct \u2018Professional Association Guidance\u2019. We postulate that feature usage is enhanced when professional associations influence PU by highlighting additional benefits. We explore the theory in the context of post-adoption use of Electronic Medical Records (EMRs) by primary care physicians in Ontario. The methodology can be extended to other professional environments and we suggest directions for future research."} {"_id": "08e8ee6fd84dd21028e7f42aa68b333ce4bf13c0", "title": "Information theory-based shot cut/fade detection and video summarization", "text": "New methods for detecting shot boundaries in video sequences and for extracting key frames using metrics based on information theory are proposed. The method for shot boundary detection relies on the mutual information (MI) and the joint entropy (JE) between the frames. It can detect cuts, fade-ins and fade-outs. The detection technique was tested on the TRECVID2003 video test set having different types of shots and containing significant object and camera motion inside the shots. It is demonstrated that the method detects both fades and abrupt cuts with high accuracy. The information theory measure provides us with better results because it exploits the inter-frame information in a more compact way than frame subtraction. It was also successfully compared to other methods published in literature. The method for key frame extraction uses MI as well. We show that it captures satisfactorily the visual content of the shot."} {"_id": "3e15fb211f94b2e85d07e60f0aa641797b062e01", "title": "Small Discrete Fourier Transforms on GPUs", "text": "Efficient implementations of the Discrete Fourier Transform (DFT) for GPUs provide good performance with large data sizes, but are not competitive with CPU code for small data sizes. On the other hand, several applications perform multiple DFTs on small data sizes. In fact, even algorithms for large data sizes use a divide-and-conquer approach, where eventually small DFTs need to be performed. We discuss our DFT implementation, which is efficient for multiple small DFTs. 
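The mutual-information measure used by the shot-boundary method described earlier can be computed directly from a joint gray-level histogram of consecutive frames; within a shot MI stays high, and it drops sharply at a cut. The random frames below are illustrative stand-ins for real video, and the simple two-frame comparison glosses over the thresholding strategy of a full detector.

```python
import numpy as np

def mutual_information(f0, f1, levels=64):
    """MI between the gray levels of two frames, from their joint histogram."""
    joint, _, _ = np.histogram2d(f0.ravel(), f1.ravel(),
                                 bins=levels, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()                    # joint pmf of co-occurring levels
    px = pxy.sum(axis=1, keepdims=True)          # marginal of frame f0
    py = pxy.sum(axis=0, keepdims=True)          # marginal of frame f1
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(4)
frame_a = rng.integers(0, 256, size=(120, 160))
frame_b = np.clip(frame_a + rng.integers(-5, 6, size=frame_a.shape), 0, 255)  # same shot
frame_c = rng.integers(0, 256, size=(120, 160))                               # abrupt cut

print("MI within shot :", round(mutual_information(frame_a, frame_b), 3))
print("MI across a cut:", round(mutual_information(frame_a, frame_c), 3))
```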
One feature of our implementation is the use of the asymptotically slow matrix multiplication approach for small data sizes, which improves performance on the GPU due to its regular memory access and computational patterns. We combine this algorithm with the mixed radix algorithm for 1-D, 2-D, and 3-D complex DFTs. We also demonstrate the effect of different optimization techniques. When GPUs are used to accelerate a component of an application running on the host, it is important that decisions taken to optimize the GPU performance not affect the performance of the rest of the application on the host. One feature of our implementation is that we use a data layout that is not optimal for the GPU so that the overall effect on the application is better. Our implementation performs up to two orders of magnitude faster than cuFFT on an NVIDIA GeForce 9800 GTX GPU and up to one to two orders of magnitude faster than FFTW on a CPU for multiple small DFTs. Furthermore, we show that our implementation can accelerate the performance of a Quantum Monte Carlo application for which cuFFT is not effective. The primary contributions of this work lie in demonstrating the utility of the matrix multiplication approach and also in providing an implementation that is efficient for small DFTs when a GPU is used to accelerate an application running on the host."} {"_id": "175963e3634457ee55f6d82cb4fafe7bd7469ee2", "title": "Groundnut leaf disease detection and classification by using back probagation algorithm", "text": "Many studies show that the quality of agricultural products may be reduced by many causes. One of the most important factors contributing to low yield is disease attack. Plant diseases are caused by agents such as fungi, bacteria and viruses. Leaf disease completely destroys the quality of the leaf. A common groundnut disease is cercospora, which appears in the early stage of the groundnut leaf. The upgraded processing pattern comprises four leading steps. Initially, a color transformation structure for the input RGB image is formed; the RGB image is converted into HSV because RGB serves only for color generation and description. The next step is plane separation. Next, color features are extracted. Finally, detection of leaf disease is performed using a back-propagation algorithm."} {"_id": "4a7c79583ffe7a695385d2c4c55f27b645a397ba", "title": "Development of Robotic Gloves for Hand Rehabilitation Post-Stroke", "text": "This paper presents the hardware and software developments that have been achieved and implemented in robotic gloves for hand rehabilitation post-stroke. Practical results are shown that prove the functionality of the robotic gloves in common operating conditions. This work focused on the development of a lightweight and low-cost robotic glove that patients can wear and use to recover hand functionality. We developed several structures for the robotic glove, as an exoskeleton and as a soft robotic glove. The paper presents a comparison of these structures."} {"_id": "15b9bb158ef57a73c94cabcf4a95679f7d2f818e", "title": "Customised Visualisations of Linked Open Data", "text": "This paper tackles customised visualisations of Linked Open Data (LOD). The work is part of ongoing research on interfaces and tools for helping non-programmers to easily present, analyse and support sense making over large semantic datasets. The customisation is a key aspect of our work.
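A bare-bones version of the back-propagation classifier that the groundnut-leaf abstract above relies on might look as follows. The 6-dimensional "color features" and the two classes are placeholders for statistics extracted from segmented leaf regions; the architecture and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, h, c = 400, 6, 16, 2                 # samples, inputs, hidden units, classes
X = rng.normal(size=(n, d))                # stand-in HSV color features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels
Y = np.eye(c)[y]                           # one-hot targets

W1 = rng.normal(0, 0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, size=(h, c)); b2 = np.zeros(c)
lr = 0.1

for epoch in range(300):
    # Forward pass.
    a1 = np.tanh(X @ W1 + b1)
    logits = a1 @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)      # softmax probabilities
    # Backward pass: gradient of mean cross-entropy loss.
    g_logits = (p - Y) / n
    gW2 = a1.T @ g_logits; gb2 = g_logits.sum(axis=0)
    g_a1 = g_logits @ W2.T
    g_z1 = g_a1 * (1 - a1 ** 2)            # tanh derivative
    gW1 = X.T @ g_z1; gb1 = g_z1.sum(axis=0)
    # Gradient descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("training accuracy:", round(float((p.argmax(axis=1) == y).mean()), 3))
```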
Producing effective customised visualisations is still difficult due to the complexity of the existing tools and the limited set of options they offer, especially to those with little knowledge in LOD and semantic data models. How can we give users full control on the primitives of the visualisation and their properties, without requiring them to master Semantic Web technologies or programming languages? The paper presents a conceptual model that can be used as a reference to build tools for generating customisable infoviews. We used it to conduct a survey on existing tools in terms of customisation capabilities. Starting from the feedback collected in this phase, we will implement these customisation features into some prototypes we are currently experimenting on."} {"_id": "8dbb9432e7e7e5c11bbe7bdbbe8eb9a14811c970", "title": "Enhancing representation learning with tensor decompositions for knowledge graphs and high dimensional sequence modeling", "text": "The capability of processing and digesting raw data is one of the key features of a humanlike artificial intelligence system. For instance, real-time machine translation should be able to process and understand spoken natural language, and autonomous driving relies on the comprehension of visual inputs. Representation learning is a class of machine learning techniques that autonomously learn to derive latent features from raw data. These new features are expected to represent the data instances in a vector space that facilitates the machine learning task. This thesis studies two specific data situations that require efficient representation learning: knowledge graph data and high dimensional sequences. In the first part of this thesis, we first review multiple relational learning models based on tensor decomposition for knowledge graphs. We point out that relational learning is in fact a means of learning representations through one-hot mapping of entities. Furthermore, we generalize this mapping function to consume a feature vector that encodes all known facts about each entity. It enables the relational model to derive the latent representation instantly for a new entity, without having to re-train the tensor decomposition. In the second part, we focus on learning representations from high dimensional sequential data. Sequential data often pose the challenge that they are of variable lengths. Electronic health records, for instance, could consist of clinical event data that have been collected at subsequent time steps. But each patient may have a medical history of variable length. We apply recurrent neural networks to produce fixed-size latent representations from the raw feature sequences of various lengths. By exposing a prediction model to these learned representations instead of the raw features, we can predict the therapy prescriptions more accurately as a means of clinical decision support. We further propose Tensor-Train recurrent neural networks. We give a detailed introduction to the technique of tensorizing and decomposing large weight matrices into a few smaller tensors. We demonstrate the specific algorithms to perform the forward-pass and the back-propagation in this setting. Then we apply this approach to the input-to-hidden weight matrix in recurrent neural networks. This novel architecture can process extremely high dimensional sequential features such as video data. The model also provides a promising solution to processing sequential features with high sparsity. 
This is, for instance, the case with electronic health records, since they are often of a categorical nature and have to be binary-coded. We incorporate a statistical survival model with this representation learning model, which shows superior prediction quality."} {"_id": "6754f7a897bd44364d181fa6946014c9a9165fe4", "title": "Analyzing and Managing Role-Based Access Control Policies", "text": "Today more and more security-relevant data is stored on computer systems; security-critical business processes are mapped to their digital counterparts. This situation applies to various domains such as the health care industry, digital government, and financial service institutes, requiring that different security requirements must be fulfilled. Authorisation constraints can help the policy architect design and express higher-level organisational rules. Although the importance of authorisation constraints has been addressed in the literature, there does not exist a systematic way to verify and validate authorisation constraints. In this paper, we specify both non-temporal and history-based authorisation constraints in the Object Constraint Language (OCL) and first-order linear temporal logic (LTL). Based upon these specifications, we attempt to formally verify role-based access control policies with the help of a theorem prover and to validate policies with the USE system, a validation tool for OCL constraints. We also describe an authorisation engine, which supports the enforcement of authorisation constraints."} {"_id": "fafe0f7a8f25a06f21962ca5f45361ec75373051", "title": "Greedy Layerwise Learning Can Scale to ImageNet", "text": "Shallow supervised 1-hidden layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but lack their representational power. Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, we focus on problems where deep learning is reported as critical for success. We thus study CNNs on image classification tasks using the large-scale ImageNet dataset and the CIFAR-10 dataset. Using a simple set of ideas for architecture and training we find that solving sequential 1-hidden-layer auxiliary problems leads to a CNN that exceeds AlexNet performance on ImageNet. Extending this training methodology to construct individual layers by solving 2- and 3-hidden-layer auxiliary problems, we obtain an 11-layer network that exceeds several members of the VGG model family on ImageNet, and can train a VGG-11 model to the same accuracy as end-to-end learning. To our knowledge, this is the first competitive alternative to end-to-end training of CNNs that can scale to ImageNet. We illustrate several interesting properties of these models theoretically and conduct a range of experiments to study the properties this training induces on the intermediate representations."} {"_id": "cdb25e4df6913bb94edcd1174d00baf2d21c9a6d", "title": "Rethinking the Value of Network Pruning", "text": "Network pruning is widely used for reducing the heavy computational cost of deep models. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. In this work, we make a rather surprising observation: fine-tuning a pruned model only gives comparable or even worse performance than training that model with randomly initialized weights.
Our results have several implications: 1) training a large, over-parameterized model is not necessary to obtain an efficient final model, 2) learned \u201cimportant\u201d weights of the large model are not necessarily useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited weights, is what leads to the efficiency benefit in the final model, which suggests that some pruning algorithms could be seen as performing network architecture search."} {"_id": "c4d5b0995a97a3a7824e51e9f71aa1fd700d27e1", "title": "Compact wideband directional antenna with three-dimensional structure for microwave-based head imaging systems", "text": "Microwave-based head imaging systems demand wideband antennas with compact size. To ensure high scattered signals from the intended target point, the antennas need to have directional radiation patterns. Designing an antenna by addressing all these issues is challenging as the head imaging system utilizes low microwave frequencies (around 1-3 GHz) to maintain high signal penetration in the lossy human head. This paper reports a coplanar waveguide-fed design that overcomes all these limitations. The antenna is constructed in a three-dimensional manner by combining a dipole antenna with a folded parasitic structure printed on two substrate blocks. The antenna mechanism is explained with the description of its design parameters. The antenna covers 101% fractional bandwidth with a stable gain, good radiation efficiency and strongly directional radiation characteristics. The overall volume of the antenna is 0.24 \u00d7 0.1 \u00d7 0.05 \u03bbm where \u03bbm is the wavelength at the lowest frequency of operation."} {"_id": "7b4bb79c57bba1bc722a589a43fae2d488bf7e8b", "title": "A spiral antenna over a high-impedance surface consisting of fan-shaped patch cells", "text": "Radiation characteristics of a spiral antenna over a high-impedance surface (HIS) are analyzed. The HIS consists of fan-shaped patch cells. The fan-shaped patches are arranged homogeneously in the circumferential direction but are set non-homogeneously in the radial direction. The analysis is performed using the method of moments. Radiation characteristics of a spiral antenna with a perfect electric conductor (PEC) reflector are analyzed first. It is reaffirmed that wideband radiation characteristics, including input impedance and axial ratio, deteriorate. Subsequently, radiation characteristics of a spiral antenna with a fan-shaped HIS reflector are analyzed. It is revealed that the degradation of input impedance and axial ratio is mitigated by replacing the PEC reflector with the fan-shaped HIS reflector."} {"_id": "bb1d6215f0cfd84b5efc7173247b016ade4c976e", "title": "Autoencoders, Unsupervised Learning, and Deep Architectures", "text": "To better understand deep architectures and unsupervised learning, uncluttered by hardware details, we develop a general autoencoder framework for the comparative study of autoencoders, including Boolean autoencoders. We derive several results regarding autoencoders and autoencoder learning, including results on learning complexity, vertical and horizontal composition, and fundamental connections between critical points and clustering.
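The magnitude-pruning step at the center of the pipeline discussed above reduces to a thresholded mask; a minimal sketch follows, with the scratch re-initialization line mirroring the comparison the abstract highlights. The layer size and sparsity level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def magnitude_prune(weights, sparsity):
    """Return (pruned_weights, mask) keeping only the largest-|w| fraction."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)  # 0.9 -> drop smallest 90%
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

W = rng.normal(size=(256, 256))                # a trained layer's weights (stand-in)
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print("kept fraction of weights:", round(float(mask.mean()), 3))

# Fine-tuning would continue training from W_pruned under the fixed mask.
# The surprising baseline in the abstract instead re-trains the pruned
# architecture from a fresh random initialization, reusing only the mask:
W_scratch = rng.normal(size=W.shape) * mask
```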
Possible implications for the theory of deep architectures are discussed."} {"_id": "071a6cd442706e424ea09bc8852eaa2e901c72f3", "title": "Accelerating Differential Evolution Using an Adaptive Local Search", "text": "We propose a crossover-based adaptive local search (LS) operation for enhancing the performance of the standard differential evolution (DE) algorithm. Incorporating LS heuristics is often very useful in designing an effective evolutionary algorithm for global optimization. However, determining a single LS length that can serve for a wide range of problems is a critical issue. We present a LS technique to solve this problem by adaptively adjusting the length of the search, using a hill-climbing heuristic. The emphasis of this paper is to demonstrate how this LS scheme can improve the performance of DE. Experimenting with a wide range of benchmark functions, we show that the proposed new version of DE, with the adaptive LS, performs better than, or at least comparably to, the classic DE algorithm. Performance comparisons with other LS heuristics and with some other well-known evolutionary algorithms from the literature are also presented."} {"_id": "5880b9bc3f75f4649b8ec819c3f983a14fca9927", "title": "Hybrid Recommender Systems: Survey and Experiments", "text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering."} {"_id": "69a5f151f68f04a5c471c31f8f0209126002b479", "title": "Recommender Systems Research: A Connection-Centric Survey", "text": "Recommender systems attempt to reduce information overload and retain customers by selecting a subset of items from a universal set based on user preferences. While research in recommender systems grew out of information retrieval and filtering, the topic has steadily advanced into a legitimate and challenging research area of its own. Recommender systems have traditionally been studied from a content-based filtering vs. collaborative design perspective. Recommendations, however, are not delivered within a vacuum, but rather cast within an informal community of users and social context. Therefore, ultimately all recommender systems make connections among people and thus should be surveyed from such a perspective. This viewpoint is under-emphasized in the recommender systems literature. We therefore take a connection-oriented perspective toward recommender systems research. We posit that recommendation has an inherently social element and is ultimately intended to connect people either directly as a result of explicit user modeling or indirectly through the discovery of relationships implicit in extant data.
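A compact sketch of DE/rand/1/bin combined with a simple adaptive hill-climbing local search, loosely following the accelerated-DE idea above. The success/failure step-adaptation rule and the sphere objective are generic stand-ins for the paper's crossover-based scheme.

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):
    return float((x ** 2).sum())        # toy objective to minimize

dim, pop_size, F, CR = 10, 30, 0.5, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fit = np.array([sphere(x) for x in pop])
step = 1.0                              # adaptive local-search step length

for gen in range(200):
    for i in range(pop_size):
        # DE/rand/1 mutation from three distinct other individuals.
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        # Binomial crossover with at least one mutated gene.
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        if sphere(trial) < fit[i]:
            pop[i], fit[i] = trial, sphere(trial)
    # Adaptive local search around the current best individual.
    best = fit.argmin()
    candidate = pop[best] + rng.normal(0, step, size=dim)
    if sphere(candidate) < fit[best]:
        pop[best], fit[best] = candidate, sphere(candidate)
        step *= 1.2                     # success: search wider
    else:
        step *= 0.9                     # failure: search closer

print("best objective value:", fit.min())
```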
Thus, recommender systems are characterized by how they model users to bring people together: explicitly or implicitly. Finally, user modeling and the connection-centric viewpoint raise broadening and social issues\u2014such as evaluation, targeting, and privacy and trust\u2014which we also briefly address."} {"_id": "6c49508db853e9b167b6d894518c034076993953", "title": "Method to find community structures based on information centrality.", "text": "Community structures are an important feature of many social, biological, and technological networks. Here we study a variation on the method for detecting such communities proposed by Girvan and Newman and based on the idea of using centrality measures to define the community boundaries [M. Girvan and M. E. J. Newman, Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)]. We develop a hierarchical clustering algorithm that consists of iteratively finding and removing the edge with the highest information centrality. We test the algorithm on computer generated and real-world networks whose community structure is already known or has been studied by means of other methods. We show that our algorithm, although it runs to completion in time O(n^4), is very effective especially when the communities are very mixed and hardly detectable by the other methods."} {"_id": "719542dc73ea92a52a6add63f873811b69773875", "title": "Collaborative Recommendation via Adaptive Association Rule Mining", "text": "Collaborative recommender systems allow personalization for e-commerce by exploiting similarities and dissimilarities among users' preferences. We investigate the use of association rule mining as an underlying technology for collaborative recommender systems. Association rules have been used with success in other domains. However, most currently existing association rule mining algorithms were designed with market basket analysis in mind. Such algorithms are inefficient for collaborative recommendation because they mine many rules that are not relevant to a given user. Also, it is necessary to specify the minimum support of the mined rules in advance, often leading to either too many or too few rules; this negatively impacts the performance of the overall system. We describe a collaborative recommendation technique based on a new algorithm specifically designed to mine association rules for this purpose. Our algorithm does not require the minimum support to be specified in advance. Rather, a target range is given for the number of rules, and the algorithm adjusts the minimum support for each user in order to obtain a ruleset whose size is in the desired range. Rules are mined for a specific target user, reducing the time required for the mining process. We employ associations between users as well as associations between items in making recommendations. Experimental evaluation of a system based on our algorithm reveals performance that is significantly better than that of traditional correlation-based approaches."} {"_id": "17f221aff2b4f2ddf1886274fbfa7bf6610c2039", "title": "Information rules - a strategic guide to the network economy", "text": ""}
{"_id": "a47b437c7d349cf529c86719625e0ca65696ab66", "title": "Fuzzy Cognitive Map Learning Based on Nonlinear Hebbian Rule", "text": "Fuzzy Cognitive Map (FCM) is a soft computing technique for modeling systems. It combines synergistically the theories of neural networks and fuzzy logic. The methodology of developing FCMs is easily adaptable but relies on human experience and knowledge, and thus FCMs exhibit weaknesses and dependence on human experts. The critical dependence on the expert's opinion and knowledge, and the potential convergence to undesired steady states are deficiencies of FCMs. In order to overcome these deficiencies and improve the efficiency and robustness of FCM, a possible solution is the utilization of learning methods. This research work proposes the utilization of the unsupervised Hebbian algorithm to nonlinear units for training FCMs. Using the proposed learning procedure, the FCM modifies its fuzzy causal web as causal patterns change and as experts update their causal knowledge."} {"_id": "666ff37c3339236f0a7f206955ffde382ecc2893", "title": "Emergence of 3D Printed Fashion: Navigating the Ambiguity of Materiality Through Collective Design", "text": "The emergence of 3D printing technology is being embraced by an increasing number of fashion designers. Due to the nascent and evolving nature of the technology, however, there is significant ambiguity around this technology\u2019s implications for the practices of fashion design. Based on the theoretical perspective of sociomateriality and the concept of translation, and drawing on archives, interviews, and other forms of qualitative data, this study\u2019s preliminary findings show that fashion designers navigate this ambiguity by pursuing a collective design process with diverse stakeholders to actively perceive and leverage the affordances and constraints of the technology. The ongoing interaction among a network of heterogeneous actors gives rise to innovative perceptions, practices, and products, which collectively shape the emergence of the field of 3D printed fashion."} {"_id": "cc92d8f5e97c5df3dd03df0f437d5061c709f65c", "title": "Addressing cold start in recommender systems: a semi-supervised co-training algorithm", "text": "Cold start is one of the most challenging problems in recommender systems. In this paper we tackle the cold-start problem by proposing a context-aware semi-supervised co-training method named CSEL. Specifically, we use a factorization model to capture fine-grained user-item context.
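The FCM-with-Hebbian-learning setup above can be sketched as follows; the inference step is the standard FCM update, while the Oja-like weight update is a generic nonlinear Hebbian form used for illustration rather than the paper's exact rule. The 5-concept map and its initial weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
n_concepts = 5
W = rng.uniform(-0.5, 0.5, size=(n_concepts, n_concepts))
np.fill_diagonal(W, 0.0)                     # no self-causation

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

A = rng.uniform(0.0, 1.0, size=n_concepts)   # initial concept activations
eta = 0.05                                   # learning rate

for step in range(100):
    A_next = sigmoid(A + W.T @ A)            # standard FCM inference step
    # Oja-style Hebbian update for edge i -> j: grow with co-activation of
    # source and target, decay with the squared target activation so the
    # causal weights stay bounded.
    W += eta * (np.outer(A, A_next) - W * (A_next ** 2)[None, :])
    np.fill_diagonal(W, 0.0)
    A = A_next

print("steady-state activations:", np.round(A, 3))
print("weight range after learning: [%.3f, %.3f]" % (W.min(), W.max()))
```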
Then, in order to build a model that is able to boost the recommendation performance by leveraging the context, we propose a semi-supervised ensemble learning algorithm. The algorithm constructs different (weak) prediction models using examples with different contexts and then employs the co-training strategy to allow each (weak) prediction model to learn from the other prediction models. The method has several distinct advantages over the standard recommendation methods for addressing the cold-start problem. First, it defines a fine-grained context that is more accurate for modeling the user-item preference. Second, the method can naturally support supervised learning and semi-supervised learning, which provides a flexible way to incorporate the unlabeled data.\n The proposed algorithms are evaluated on two real-world datasets. The experimental results show that with our method the recommendation accuracy is significantly improved compared to the standard algorithms and the cold-start problem is largely alleviated."} {"_id": "049f4f88fdb42c91796ab93598fc4e23f3cbdebe", "title": "Learning Extractors from Unlabeled Text using Relevant Databases", "text": "Supervised machine learning algorithms for information extraction generally require large amounts of training data. In many cases where labeling training data is burdensome, there may, however, already exist an incomplete database relevant to the task at hand. Records from this database can be used to label text strings that express the same information. For tasks where text strings do not follow the same format or layout, and additionally may contain extra information, labeling the strings completely may be problematic. This paper presents a method for training extractors which fill in missing labels of a text sequence that is partially labeled using simple high-precision heuristics. Furthermore, we improve the algorithm by utilizing labeled fields from the database. In experiments with BibTeX records and research paper citation strings, we show a significant improvement in extraction accuracy over a baseline that only relies on the database for training data."} {"_id": "72abc9fe3b65a753153916a39c72b91d7328f42b", "title": "Cloaker: Hardware Supported Rootkit Concealment", "text": "Rootkits are used by malicious attackers who desire to run software on a compromised machine without being detected. They have become stealthier over the years as a consequence of the ongoing struggle between attackers and system defenders. In order to explore the next step in rootkit evolution and to build strong defenses, we look at this issue from the point of view of an attacker. We construct Cloaker, a proof-of-concept rootkit for the ARM platform that is non-persistent and only relies on hardware state modifications for concealment and operation. A primary goal in the design of Cloaker is to not alter any part of the host operating system (OS) code or data, thereby achieving immunity to all existing rootkit detection techniques which perform integrity, behavior and signature checks of the host OS. Cloaker also demonstrates that a self-contained execution environment for malicious code can be provided without relying on the host OS for any services. Integrity checks of hardware state in each of the machine's devices are required in order to detect rootkits such as Cloaker.
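A minimal co-training loop in the spirit of the semi-supervised method above: two learners on different feature views pseudo-label their most confident unlabeled examples for each other. The data, the two views, and the confidence threshold are illustrative; CSEL additionally builds its weak predictors from factorization models over user-item contexts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n, d = 600, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 5] > 0).astype(int)

views = [X[:, :5], X[:, 5:]]                 # two disjoint feature views
labeled = np.zeros(n, dtype=bool)
labeled[:40] = True                          # small labeled seed set
y_work = y.copy()                            # labels are read only where `labeled`

models = [LogisticRegression(), LogisticRegression()]
for _ in range(10):
    for model, view in zip(models, views):
        model.fit(view[labeled], y_work[labeled])
    # Each model pseudo-labels its most confident unlabeled points, which
    # become training data for both models in the next round.
    for model, view in zip(models, views):
        idx = np.where(~labeled)[0]
        if idx.size == 0:
            break
        proba = model.predict_proba(view[idx])
        confident = proba.max(axis=1) > 0.95
        y_work[idx[confident]] = model.classes_[proba[confident].argmax(axis=1)]
        labeled[idx[confident]] = True

accuracy = (models[0].predict(views[0]) == y).mean()
print("view-1 model accuracy after co-training:", round(float(accuracy), 3))
```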
We present a framework for the Linux kernel that incorporates integrity checks of hardware state performed by device drivers in order to counter the threat posed by rootkits such as Cloaker."} {"_id": "54eef481dffdb85a141e29146a17aaae44ef39ec", "title": "Detecting Driver Drowsiness Based on Sensors: A Review", "text": "In recent years, driver drowsiness has been one of the major causes of road accidents and can lead to severe physical injuries, deaths and significant economic losses. Statistics indicate the need for a reliable driver drowsiness detection system which could alert the driver before a mishap happens. Researchers have attempted to determine driver drowsiness using the following measures: (1) vehicle-based measures; (2) behavioral measures and (3) physiological measures. A detailed review on these measures will provide insight on the present systems, issues associated with them and the enhancements that need to be done to make a robust system. In this paper, we review these three measures as to the sensors used and discuss the advantages and limitations of each. The various ways through which drowsiness has been experimentally manipulated are also discussed. We conclude that by designing a hybrid drowsiness detection system that combines non-intrusive physiological measures with other measures one would accurately determine the drowsiness level of a driver. A number of road accidents might then be avoided if an alert is sent to a driver that is deemed drowsy."} {"_id": "6e56d2fbf2e6c3f3151a9946fd03c1b36ae0b007", "title": "Comparative study of different methods of social network analysis and visualization", "text": "A social network is a social structure made of individuals or organizations that are linked by one or more specific types of interdependency, such as friendship, kinship, terrorist relations, conflict, financial exchange, disease transmission (epidemiology), airline routes etc. Social network analysis is an approach to the study of human or organizational social interactions. It can be used to investigate kinship patterns, community structure or the organization of other formal and informal social networks. Recently, email networks have been a popular source of data for both analysis and visualization of the social network of a person or organization. In this paper, three types of visualization techniques to analyze social networks have been considered - force-directed layout, spherical layout and clustered layout. Each method was evaluated with various data sets from an organization. Force-directed layout is used to view the total network structure (overview). Spherical layout is a 3D method to reveal communication patterns and relationships between different groups. Clustered layout visualizes a large amount of data in an efficient and compact form. It gives a hierarchical view of the network. Among the three methods, the clustered layout is the best at handling large networks and group relationships. Finally, a comparative study of these three methods has been given."} {"_id": "08ff9a747e1b8c851403461f3e5c811a97eda4f8", "title": "A Systematic Analysis of XSS Sanitization in Web Application Frameworks", "text": "While most research on XSS defense has focused on techniques for securing existing applications and re-architecting browser mechanisms, sanitization remains the industry-standard defense mechanism. By streamlining and automating XSS sanitization, web application frameworks stand in a good position to stop XSS but have received little research attention.
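The force-directed layout compared in the study above is readily available in networkx; a quick sketch follows, with a community-detection step standing in for the custom clustered layout. The karate-club graph is a stand-in for an email network.

```python
import networkx as nx

# Force-directed (spring) layout: nodes repel, edges pull, as in the
# overview visualization discussed above.
G = nx.karate_club_graph()                   # stand-in for an email network
pos = nx.spring_layout(G, seed=42)           # Fruchterman-Reingold-style forces
print("node 0 position:", pos[0])

# A hierarchical, clustered view would first group nodes (e.g. with
# community detection) and then lay out each cluster separately.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print("detected cluster sizes:", [len(c) for c in communities])
```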
In order to drive research on web frameworks, we systematically study the security of the XSS sanitization abstractions frameworks provide. We develop a novel model of the web browser and characterize the challenges of XSS sanitization. Based on the model, we systematically evaluate the XSS abstractions in 14 major commercially-used web frameworks. We find that frameworks often do not address critical parts of the XSS conundrum. We perform an empirical analysis of 8 large web applications to extract the requirements of sanitization primitives from the perspective of real-world applications. Our study shows that there is a wide gap between the abstractions provided by frameworks and the requirements of applications."} {"_id": "42141ca5134cca10e129e587a0061c819b67e69b", "title": "An integrated approach of diet and exercise recommendations for diabetes patients", "text": "Diabetes is one of the fastest growing diseases all over the world. A controlled diet and proper exercise are considered a treatment to control diabetes. However, food and exercise suggestions in existing solutions do not consider integrated knowledge from the personal profile, preferences, current vital signs, diabetes domain, food domain and exercise domain. Furthermore, there is a strong correlation between diet and exercise. We have implemented an ontology-based integrated approach to combine knowledge from various domains to generate diet and exercise suggestions for diabetics. The solution is developed as a Semantic Healthcare Assistant for Diet and Exercise (SHADE). For each domain (person, diabetes, food and exercise) we have defined a separate ontology along with rules, and an integrated ontology then combines these individual ontologies. Finally, diet recommendations are presented in the form of various alternative menus such that each menu is a healthy and balanced diet."} {"_id": "7e9bfb62ba48bbd8d9c13ef1dc7b93fcc58efea8", "title": "Low-profile dual-band circularly polarized microstrip antenna for GNSS applications", "text": "This paper presents the design of a micro-strip circularly polarized antenna intended for the Global Navigation Satellite Systems (GNSS). The presented device is composed of a micro-strip slotted patch antenna printed on a Rogers RO3006 substrate, a foam layer 2 mm thick and a wideband commercial 3-dB SMT coupler. The combined full-wave antenna results with the measured S-parameters of the coupler show very good performance in terms of antenna matching and axial ratio over large bandwidths."} {"_id": "cc2fd4427572cc0ce8d172286277a88272c96d03", "title": "The PLEX Cards and its techniques as sources of inspiration when designing for playfulness", "text": "Playfulness can be observed in all areas of human activity. It is an attitude of making activities more enjoyable. Designing for playfulness involves creating objects that elicit a playful approach and provide enjoyable experiences. In this paper, we introduce the design and evaluation of the PLEX Cards and its two related idea generation techniques. The cards were created to communicate the 22 categories of a playful experiences framework to designers and other stakeholders who wish to design for playfulness. We have evaluated the helpfulness of both the cards and their associated techniques in two studies.
The results show that the PLEX Cards and its associated techniques are valuable sources of inspiration when designing for playfulness."} {"_id": "684250ab1e1459197f4e37304e4103cd6848a826", "title": "Cloud Federation: Effects of Federated Compute Resources on Quality of Service and Cost*", "text": "Cloud Federation is one concept to confront challenges that still persist in Cloud Computing, such as vendor lock-in or compliance requirements. The lack of a standardized meaning for the term Cloud Federation has led to multiple conflicting definitions and an unclear prospect of its possible benefits. Taking a client-side perspective on federated compute services, we analyse how choosing a certain federation strategy affects Quality of Service and cost of the resulting service or application. Based on a use case, we experimentally prove our analysis to be correct and describe the different trade-offs that exist within each of the strategies."} {"_id": "267411fb2e73d4f9cbef8bebc3bb6140ea7dd43c", "title": "The nucleotide sequence of a HMW glutenin subunit gene located on chromosome 1A of wheat (Triticum aestivum L.).", "text": "A cloned 8.2 kb EcoRI fragment has been isolated from a genomic library of DNA derived from Triticum aestivum L. cv. Cheyenne. This fragment contains sequences related to the high molecular weight (HMW) subunits of glutenin, proteins considered to be important in determining the elastic properties of gluten. The cloned HMW subunit gene appears to be derived from chromosome 1A. The nucleotide sequence of this gene has provided new information on the structure and evolution of the HMW subunits. However, hybrid-selection translation experiments suggest that this gene is silent."} {"_id": "213a25383e5b9c4db9dc6908e1be53d66ead82ca", "title": "dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning", "text": "Learning graph representations is a fundamental task aimed at capturing various properties of graphs in vector space. The most recent methods learn such representations for static networks. However, real world networks evolve over time and have varying dynamics. Capturing such evolution is key to predicting the properties of unseen networks. To understand how the network dynamics affect the prediction performance, we propose an embedding approach which learns the structure of evolution in dynamic graphs and can predict unseen links with higher precision. Our model, dyngraph2vec, learns the temporal transitions in the network using a deep architecture composed of dense and recurrent layers. We motivate the need of capturing dynamics for prediction on a toy data set created using stochastic block models. We then demonstrate the efficacy of dyngraph2vec over existing state-of-the-art methods on two real world data sets. We observe that learning dynamics can improve the quality of embedding and yield better performance in link prediction."} {"_id": "e0f89f02f6de65dd84da556f4e2db4ced9713f6f", "title": "Operational capabilities development in mediated offshore software services models", "text": "The paper expands theoretical and empirical understanding of capabilities development in the mediated offshore outsourcing model whereby a small or a medium-sized firm delivers offshore software services to a larger information technology firm that in turn contracts and interfaces with the actual end-client onshore firms. 
Such a mediated model has received little prior research attention, although it is common particularly among Chinese firms exporting services to Japan, the largest export market for Chinese software services. We conducted case studies in four China-based software companies to understand the mechanisms used to develop their operational capabilities. We focused on client-specific, process, and human resources capabilities that have been previously associated with vendor success. We found a range of learning mechanisms to build the capabilities in offshore firms. Results show that the development of human resources capabilities was most challenging in the mediated model; yet foundational for the development of the other capabilities. This paper contributes to the information systems literature by improving our understanding of the development of operational capabilities in small- and medium-sized Chinese firms that deploy the mediated model of offshore"} {"_id": "c557a7079a885c1bbc0b21057a0e16f927179ac7", "title": "Unsupervised detection of surface defects: A two-step approach", "text": "In this paper, we focus on the problem of finding anomalies in surface images. Despite enormous research efforts and advances, it remains a challenging open problem. This paper proposes a unified approach for defect detection. Our proposed method consists of two phases: (1) global estimation and (2) local refinement. First, we roughly estimate defects by applying a spectral-based approach in a global manner. We then locally refine the estimated region based on the distributions of pixel intensities derived from defect and defect-free regions. Experimental results show that the proposed method outperforms the previous defect detection methods and gives robust results even in noisy surface defect images."} {"_id": "cc0530e3c7463d51ad51eb90e25ccbd7dda11814", "title": "The Ideology of Interactivity (or Video Games and Taylorization of Leisure)", "text": "Interactivity is one of the key conceptual apparatuses through which video games have been theorized thus far. As many writers have noted, video games are distinct from other forms of media because player actions seem to have direct, immediate consequences in the world depicted onscreen. But in many ways, this \u201cinteractive\u201d feature of video games tends to manifest itself as a relentless series of demands, or a way of disciplining player behavior. In this sense, it seems more accurate to describe the human-machine interface made possible by gaming as an aggressive form of \u201cinterpellation\u201d or hailing. Drawing primarily upon the work of Louis Althusser, I argue that traditional theories of interactivity fail to acknowledge the work of video games\u2014in other words, the extent to which video games define and reconstitute players as subjects of ideology."} {"_id": "0407d76a4f3a4cb35a63734eaea41a31f8008aef", "title": "Generalized Methods for the Design of Quasi-Ideal Symmetric and Asymmetric Coupled-Line Sections and Directional Couplers", "text": "Generalized methods for the design of quasi-ideal symmetric and asymmetric coupled-line sections have been proposed. Two cases of coupled-line sections, in which the inductive coupling coefficient is either greater or smaller than the capacitive coupling coefficient, have been separately considered. To compensate for the coupling coefficients' inequality, a combination of lumped capacitors and short coupled-line sections has been introduced into the coupled-line section.
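The two-phase defect detector above (global spectral estimate, then local refinement from intensity statistics) can be sketched roughly as follows. The spectral-residual saliency step and the thresholds are common choices assumed here for illustration; the paper's exact spectral method may differ.

```python
# Sketch of the two-phase idea under assumed spectral-residual saliency.
import numpy as np

def global_estimate(img: np.ndarray) -> np.ndarray:
    """Phase 1: spectral-residual saliency -> rough binary defect map."""
    spec = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(spec))
    # residual = log amplitude minus its 3x3 local average (box blur via rolls)
    blur = sum(np.roll(np.roll(log_amp, dx, 0), dy, 1)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0
    residual = log_amp - blur
    sal = np.abs(np.fft.ifft2(np.expm1(residual) * np.exp(1j * np.angle(spec)))) ** 2
    return sal > sal.mean() + 3 * sal.std()

def local_refine(img: np.ndarray, rough: np.ndarray) -> np.ndarray:
    """Phase 2: reassign pixels to the closer of the two intensity distributions."""
    mu_def, mu_ok = img[rough].mean(), img[~rough].mean()
    return np.abs(img - mu_def) < np.abs(img - mu_ok)

img = np.random.rand(64, 64); img[20:24, 30:34] += 2.0   # synthetic defect
mask = local_refine(img, global_estimate(img))
```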
The proposed methods allow for designing quasi-ideal coupled-line sections in nearly arbitrarily chosen dielectric media and for arbitrarily chosen coupling. The theoretical analyses have been verified by the design and measurement of three compensated coupled-line directional couplers illustrating the application of the proposed methods in different dielectric structures."} {"_id": "946716c6759498d1ee953ad6d13b052a5052c6a8", "title": "Bullying, cyberbullying, and mental health in young people.", "text": "OBJECTIVE\nTo investigate the factors associated with exposure to in-real-life (IRL) bullying, cyberbullying, and both IRL and cyberbullying and to explore the relationship between these types of bullying and mental health among 13-16-year-old Swedish boys and girls.\n\n\nMETHODS\nData was derived from a cross-sectional web-based study of 13-16-year-old students in northern Sweden (n=1214, response rate 81.9%).\n\n\nRESULTS\nThe combination of IRL- and cyberbullying was the most common type of bullying. A non-supportive school environment and poor body image were related to exposure to bullying for both genders but the relationship was more distinct in girls. All types of bullying were associated with depressive symptoms in both boys and girls and all forms of bullying increased the likelihood of psychosomatic problems in girls.\n\n\nCONCLUSIONS\nCyberbullying can be seen as an extension of IRL bullying. A combination of IRL- and cyberbullying seems to be particularly negative for mental health. Interventions should focus on improved school environment and body image as well as anti-violence programmes. Gender aspects of bullying need to be acknowledged."} {"_id": "173443511c450bd8f61e3d1122982f74c94147ae", "title": "Pivoted Document Length Normalization", "text": "Automatic information retrieval systems have to deal with documents of varying lengths in a text collection. Document length normalization is used to fairly retrieve documents of all lengths. In this study, we observe that a normalization scheme that retrieves documents of all lengths with similar chances as their likelihood of relevance will outperform another scheme which retrieves documents with chances very different from their likelihood of relevance. We show that the retrieval probabilities for a particular normalization method deviate systematically from the relevance probabilities across different collections. We present pivoted normalization, a technique that can be used to modify any normalization function thereby reducing the gap between the relevance and the retrieval probabilities. Training pivoted normalization on one collection, we can successfully use it on other (new) text collections, yielding a robust, collection-independent normalization technique. We use the idea of pivoting with the well known cosine normalization function. We point out some shortcomings of the cosine function and present two new normalization functions: pivoted unique normalization and pivoted byte size normalization."} {"_id": "27d9d71afb06ea5d4295fbdcc7cd67e491d50cc4", "title": "Nearest neighbor pattern classification", "text": "The case of $n$ unity-variance random variables $x_1, x_2, \ldots, x_n$ governed by the joint probability density $w(x_1, x_2, \ldots, x_n)$ is considered, where the density depends on the (normalized) cross-covariances $\rho_{ij} = E[(x_i - \bar{x}_i)(x_j - \bar{x}_j)]$.
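The pivoting idea above tilts an existing normalization factor around a pivot so that retrieval probability tracks relevance probability more closely. A small sketch of the formula, with illustrative slope and pivot values:

```python
# Pivoted normalization factor: (1 - slope) * pivot + slope * old_norm.
def pivoted_norm(old_norm: float, pivot: float, slope: float = 0.75) -> float:
    return (1.0 - slope) * pivot + slope * old_norm

# Example with cosine normalization factors for three documents; the pivot is
# typically the average normalization factor over the collection.
cosine_norms = [8.0, 12.0, 30.0]
pivot = sum(cosine_norms) / len(cosine_norms)
print([round(pivoted_norm(n, pivot), 2) for n in cosine_norms])
# Long documents are penalized less steeply than with pure cosine normalization.
```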
It is shown that the condition holds for an \u201carbitrary\u201d function $f(x_1, x_2, \ldots, x_n)$ of $n$ variables if and only if the underlying density $w(x_1, x_2, \ldots, x_n)$ is the usual $n$-dimensional Gaussian density for correlated random variables. This result establishes a generalized form of Price\u2019s theorem in which: 1) the relevant condition (*) subsumes Price\u2019s original condition; 2) the proof is accomplished without appeal to Laplace integral expansions; and 3) conditions referring to derivatives with respect to diagonal terms $\rho_{ii}$ are avoided, so that the unity variance assumption can be retained. Price\u2019s theorem and its various extensions [1]-[4] have had great utility in the determination of output correlations between zero-memory nonlinearities subjected to jointly Gaussian inputs. In its original form, the theorem considered $n$ jointly normal random variables $x_1, x_2, \ldots, x_n$, with respective means $\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n$ and $n$th-order joint probability density $P(x_1, x_2, \ldots, x_n) = (2\pi)^{-n/2} |M_n|^{-1/2} \exp\left\{ -\frac{1}{2|M_n|} \sum_r \sum_s \Lambda_{rs} (x_r - \bar{x}_r)(x_s - \bar{x}_s) \right\}$ (1), where $|M_n|$ is the determinant of $M_n = [\rho_{rs}]$, $\rho_{rs} = E[(x_r - \bar{x}_r)(x_s - \bar{x}_s)]/\sigma_{x_r}\sigma_{x_s}$ is the correlation coefficient of $x_r$ and $x_s$, and $\Lambda_{rs}$ is the cofactor of $\rho_{rs}$ in $M_n$. From [1], the theorem statement is as follows: \u201cLet there be $n$ zero-memory nonlinear devices specified by the input-output relationship $f_i(x)$, $i = 1, 2, \ldots, n$. Let each $x_i$ be the single input to a corresponding $f_i(x)$"} {"_id": "64b3435826a94ddd269b330e6254579f3244f214", "title": "Matrix computations (3. ed.)", "text": ""} {"_id": "ba6419a7a4404174ba95a53858632617c47cfff0", "title": "Statistical learning theory", "text": ""} {"_id": "37357230347ed6ed8912a09c7ba4c35260e84e4c", "title": "A flexible coupling approach to multi-agent planning under incomplete information", "text": "Multi-agent planning (MAP) approaches are typically oriented at solving loosely coupled problems, being ineffective at dealing with more complex, strongly related problems. In most cases, agents work under complete information, building complete knowledge bases. The present article introduces a general-purpose MAP framework designed to tackle problems of any coupling level under incomplete information. Agents in our MAP model are partially unaware of the information managed by the rest of the agents and share only the critical information that affects other agents, thus maintaining a distributed vision of the task. Agents solve MAP tasks through the adoption of an iterative refinement planning procedure that uses single-agent planning technology. In particular, agents will devise refinements through the partial-order planning paradigm, a flexible framework to build refinement plans leaving unsolved details that will be gradually completed by means of new refinements.
Our proposal is supported with the implementation of a fully operative MAP system, and we report various experiments running our system on different types of MAP problems, from the most strongly related to the most loosely coupled."} {"_id": "9f0f1f373cabdf5f232d490990b5cf03e110349a", "title": "Image Processing Operations Identification via Convolutional Neural Network", "text": "In recent years, image forensics has attracted more and more attention, and many forensic methods have been proposed for identifying image processing operations. Up to now, most existing methods are based on hand-crafted features and consider just one specific operation. In many forensic scenarios, however, multiple classification for various image processing operations is more practical. Besides, it is difficult to obtain effective features by hand for some image processing operations. In this paper, therefore, we propose a new convolutional neural network (CNN) based method to adaptively learn discriminative features for identifying typical image processing operations. We carefully design the high pass filter bank to get the image residuals of the input image, the channel expansion layer to mix up the resulting residuals, the pooling layers, and the activation functions employed in our method. Extensive results show that the proposed method can outperform the currently best method based on hand-crafted features and three related CNN-based methods for image steganalysis and/or forensics, achieving state-of-the-art results. Furthermore, we provide more supplementary results to show the rationality and robustness of the proposed model."} {"_id": "7606f448d1ad155bb6d61aed0aa2e1b22723f4db", "title": "Real Time Identification and Prevention of Doziness using Face Recognition System", "text": "A large number of road accidents in the world occur due to drivers not taking proper breaks while driving long distances. This system detects the fatigue behaviour of the driver well before any accident occurs. In this system, the driver\u2019s facial appearance is captured via a high-speed camera placed in the car in front of the driver. The camera captures frames continuously, and each frame is processed using an image processing algorithm. The mouth and eye regions are separated from each frame and analyzed to conclude whether the driver is dozy or not. The system concludes that the driver is dozy if the eyes are found closed for five consecutive frames. If doziness is detected, an alert signal is triggered. If an accident occurs, a message is sent to the desired number."} {"_id": "9a6258dd41d81db3320cfe0b88911489d1ee23f0", "title": "A Comprehensive Review of Smart Wheelchairs: Past, Present, and Future", "text": "A smart wheelchair (SW) is a power wheelchair (PW) to which computers, sensors, and assistive technology are attached. In the past decade, there has been little effort to provide a systematic review of SW research. This paper aims to provide a complete state-of-the-art overview of SW research trends. We expect that the information gathered in this study will enhance awareness of the status of contemporary PW as well as SW technology and increase the functional mobility of people who use PWs. We systematically present the international SW research effort, starting with an introduction to PWs and the communities they serve.
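The doziness-detection abstract above states the alert rule precisely: the eyes must be found closed in five consecutive frames. A minimal sketch of that logic, with the per-frame eye-state detector as a stand-in assumption:

```python
# Sketch of the consecutive-closed-frame rule from the doziness paper.
CONSECUTIVE_CLOSED_LIMIT = 5

def monitor(frames, eyes_closed) -> bool:
    """eyes_closed(frame) -> True if the eyes are closed in this frame."""
    closed_run = 0
    for frame in frames:
        closed_run = closed_run + 1 if eyes_closed(frame) else 0
        if closed_run >= CONSECUTIVE_CLOSED_LIMIT:
            return True          # trigger the alert signal
    return False

# Toy usage: frames are stand-in booleans already labeled closed/open.
frames = [False, True, True, True, True, True, False]
assert monitor(frames, eyes_closed=lambda f: f) is True
```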
Then, we discuss in detail the SW and associated technological innovations with an emphasis on the most researched areas, generating the most interest for future research and development. We conclude with our vision for the future of SW research and how to best serve people with all types of disabilities."} {"_id": "be52a712b18acf0d33c561c48691b9fe007b05e2", "title": "Electrochemical Sensors for Soil Nutrient Detection: Opportunity and Challenge", "text": "Soil testing is the basis for nutrient recommendation and formulated fertilization. This study presented a brief overview of potentiometric electrochemical sensors (ISE and ISFET) for soil NPK detection. The opportunities and challenges for electrochemical sensors in soil testing were"} {"_id": "1bfce3f5a992223f7d0d0149e401d4b7fb35702d", "title": "Adaptive resource allocation for wildlife protection against illegal poachers", "text": "Illegal poaching is an international problem that leads to the extinction of species and the destruction of ecosystems. As evidenced by dangerously dwindling populations of endangered species, existing anti-poaching mechanisms are insufficient. This paper introduces the Protection Assistant for Wildlife Security (PAWS) application, a joint deployment effort with researchers at Uganda\u2019s Queen Elizabeth National Park (QENP) with the goal of improving wildlife ranger patrols. While previous works have deployed applications with a game-theoretic approach (specifically Stackelberg Games) for counter-terrorism, wildlife crime is an important domain that promotes a wide range of new deployments. Additionally, this domain presents new research challenges and opportunities related to learning behavioral models from collected poaching data. In addressing these challenges, our first contribution is a behavioral model extension that captures the heterogeneity of poachers\u2019 decision making processes. Second, we provide a novel framework, PAWS-Learn, that incrementally improves the behavioral model of the poacher population with more data. Third, we develop a new algorithm, PAWS-Adapt, that adaptively improves the resource allocation strategy against the learned model of poachers. Fourth, we demonstrate PAWS\u2019s potential effectiveness when applied to patrols in QENP, where PAWS will be deployed."} {"_id": "144de7e34f091e9e622114f1cf59c09e26cb32ac", "title": "Automatic knowledge extraction from OCR documents using hierarchical document analysis", "text": "Industries can improve their business efficiency by analyzing and extracting relevant knowledge from large numbers of documents. Extracting knowledge manually from large volumes of documents is labor-intensive, unscalable, and challenging. Consequently there have been a number of attempts to develop intelligent systems to automatically extract relevant knowledge from OCR documents. Moreover, an automatic system can improve the capability of a search engine by providing application-specific domain knowledge. However, extracting information efficiently from OCR documents is challenging due to their highly unstructured format [1, 11, 18, 26]. In this paper, we propose an efficient framework for a knowledge extraction system that takes keyword-based queries and automatically extracts their most relevant knowledge from OCR documents by using text mining techniques. The framework can provide relevance ranking of knowledge to a given query.
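A toy sketch of the relevance-ranking step just described: score OCR'd text passages against a keyword query with TF-IDF cosine similarity. This is a standard text-mining baseline assumed for illustration; the paper's hierarchical document analysis is richer, and the sample passages are invented.

```python
# Keyword-query relevance ranking with TF-IDF cosine similarity (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "turbine blade inspection schedule and tolerances",
    "generator stator winding insulation report",
    "blade coating wear measurements after inspection",
]
query = ["blade inspection"]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(passages)
scores = cosine_similarity(vec.transform(query), doc_matrix)[0]
for score, text in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.2f}  {text}")       # most relevant passage first
```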
We tested the proposed framework on a corpus of documents at GE Power, where each document consists of more than a hundred pages in PDF."} {"_id": "9b84fa56b2eb41a67d32adf06942fcb50665f326", "title": "What is principal component analysis?", "text": "Several measurement techniques used in the life sciences gather data for many more variables per sample than the typical number of samples assayed. For instance, DNA microarrays and mass spectrometers can measure levels of thousands of mRNAs or proteins in hundreds of samples. Such high-dimensionality makes visualization of samples difficult and limits simple exploration of the data. Principal component analysis (PCA) is a mathematical algorithm that reduces the dimensionality of the data while retaining most of the variation in the data set [1]. It accomplishes this reduction by identifying directions, called principal components, along which the variation in the data is maximal. By using a few components, each sample can be represented by relatively few numbers instead of by values for thousands of variables. Samples can then be plotted, making it possible to visually assess similarities and differences between samples and determine whether samples can be grouped. Saal et al. [2] used microarrays to measure the expression of 27,648 genes in 105 breast tumor samples. I will use this gene expression data set, which is available through the Gene Expression Omnibus database (accession no. GSE5325), to illustrate how PCA can be used to represent samples with a smaller number of variables, visualize samples and genes, and detect dominant patterns of gene expression. My aim with this example is to leave you with an idea of how PCA can be used to explore data sets in which thousands of variables have been measured."} {"_id": "b9bb5f4dec8e7cb9974a2eb6ad558d919d0ebe9c", "title": "DsUniPi: An SVM-based Approach for Sentiment Analysis of Figurative Language on Twitter", "text": "The DsUniPi team participated in the SemEval 2015 Task#11: Sentiment Analysis of Figurative Language in Twitter. The proposed approach employs syntactical and morphological features, which indicate sentiment polarity in both figurative and non-figurative tweets. These features were combined with others that indicate presence of figurative language in order to predict a fine-grained sentiment score. The method is supervised and makes use of structured knowledge resources, such as the SentiWordNet sentiment lexicon for assigning sentiment scores to words and WordNet for calculating word similarity. We have experimented with different classification algorithms (Na\u00efve Bayes, decision trees, and SVM), and the best results were achieved by an SVM classifier with linear kernel."} {"_id": "35c867252b0a105339acced5cdde4d2b9fb49b54", "title": "A Critical Review of Marketing Research on Diffusion of New Products", "text": "We critically examine alternate models of the diffusion of new products and the turning points of the diffusion curve. On each of these topics, we focus on the drivers, specifications, and estimation methods researched in the literature. We discover important generalizations about the shape, parameters, and turning points of the diffusion curve and the characteristics of diffusion across early stages of the product life cycle. We point out directions for future research. Because new products affect every aspect of the life of individuals, communities, countries, and economies, the study of the diffusion of innovations is of vital importance.
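The PCA abstract above describes the standard mechanics: center the data and find the directions of maximal variance. A compact sketch via the singular value decomposition, with toy shapes standing in for the samples-by-genes matrix of the gene-expression example:

```python
# PCA via SVD: center, take top right-singular vectors, project samples.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 500))          # 105 samples, 500 variables (toy)

X_centered = X - X.mean(axis=0)          # center each variable
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 2
components = Vt[:k]                       # directions of maximal variance
projected = X_centered @ components.T     # each sample now described by k numbers

explained = (S ** 2) / (S ** 2).sum()
print(f"variance explained by first {k} PCs: {explained[:k].sum():.1%}")
```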
Researchers have studied this topic in various disciplines, including marketing, economics, medicine, agriculture, sociology, anthropology, geography, and technology management. We present a critical review of research on the diffusion of new products primarily in the marketing literature, but also in the economics and geography literature. We use the word product broadly to cover any good, service, idea, or person. We distinguish the term new product from the broader term innovation, which refers to both new product and new method, practice, institution, or social entity. Even though we restrict our review to the marketing literature, which focuses on the diffusion of new products, the implications of our review may hold as well for the study of the diffusion of innovations in other disciplines. The marketing literature on this topic is vast, dating back at least as early as the publication by Fourt and Woodlock (1960). The term diffusion has been used differently in two groups of literatures. Within economics and most nonmarketing disciplines, diffusion is defined as the spread of an innovation across social groups over time (Brown, 1981; Stoneman, 2002). As such, the phenomenon is separate from the drivers, which can be consumer income, the product\u2019s price, word-of-mouth communication, and so on. In marketing and communication, diffusion typically has come to mean the communication of an innovation through the population (Golder and Tellis, 1998; Mahajan, Muller, and Wind, 2000a; Mahajan, Muller, and Bass, 1990; Rogers 1995). In this sense, the phenomenon (spread of a product) is synonymous with its underlying driver (communication). The Webster (2004) definition of the noun \u201cdiffusion\u201d is \u201cthe spread of a cultural or technological practice or innovation from one region or people to another, as by trade or conquest\u201d and the verb \u201cdiffusing\u201d is \u201cpour, spread out or disperse in every direction; spread or scatter widely.\u201d This latter interpretation is synonymous with the term\u2019s use in economics and most other disciplines. In addition, some researchers in marketing have subscribed to the definition used in economics (Bemmaor, 1994; Dekimpe, Parker, and Sarvary, 2000a; Van den Bulte and Stremersch, 2004). Hence, in this review, we define diffusion as the spread of an innovation across markets over time. Researchers commonly measure diffusion using the sales and especially the market penetration of a new product during the early stages of its life cycle. To characterize this phenomenon carefully, we adopt the definitions of the stages and turning points of the product\u2019s life cycle by Golder and Tellis (2004): 1. Commercialization is the date a new product is first sold. 2. Takeoff is the first dramatic and sustained increase in a new product\u2019s sales. 3. Introduction is the period from a new product\u2019s commercialization until its takeoff. 4. Slowdown is the beginning of a period of level, slowly increasing, or temporarily decreasing product sales after takeoff. 5. Growth is the period from a new product\u2019s takeoff until its slowdown. 6. Maturity is the period from a product\u2019s slowdown until sales begin a steady decline. Hence, there are two key turning points in the diffusion curve: takeoff and slowdown. Prior reviews address various aspects of the marketing literature on the diffusion of new products.
For example, Mahajan, Muller, and Bass (1990) provide an excellent overview of the Bass model, its extensions, and some directions for further research. Parker (1994) provides an overview of the Bass model and evaluates the various estimation techniques, forecasting abilities, and specification improvements of the model. Mahajan, Muller, and Bass (1995) summarize the generalizations from applications of the Bass model. An edited volume by Mahajan, Muller, and Wind (2000b) covers in depth various topics in diffusion models, such as specification, estimation, and applications. Sultan, Farley, and Lehmann (1990) and Van den Bulte and Stremersch (2004) meta-analyze the diffusion parameters of the Bass model. The current review differs from prior reviews in two important aspects. First, the prior reviews focus on the S-curve of cumulative sales of a new product, mostly covering growth. This review focuses on phenomena besides the S-curve, such as takeoff and slowdown. Second, the above reviews focus mainly on the Bass model. This review considers the Bass model as well as other models of diffusion and drivers of new product diffusion other than communication. Our key findings, and the most useful part of our study, are the discovery of potential generalizations from past research. For the benefit of readers who are familiar with this topic, we present these generalizations before details of the measures, models, and methods used in past research. (Readers who are unfamiliar with the topic may want to read the Potential Generalizations section last). Therefore, we organize the rest of the chapter as follows. In the next section, we summarize potential generalizations from prior research. In the third section, we point out limitations of past research and directions for future research. In the fourth section, we evaluate key models and drivers of the diffusion curve. In the fifth section, we evaluate models of the key turning points in diffusion: takeoff and slowdown. Potential Generalizations We use the term potential generalizations or regularities to describe empirical findings with substantial support. By substantial, we mean that support comes from reviews or meta-analyses of the literature or individual studies with a large sample of over ten categories or ten countries. Table 2.1 lists the studies on which the potential generalizations are based. This section covers important findings about the shape of the diffusion curve, parameters of the Bass models, the turning points of diffusion, and findings across stages of the diffusion curve. Shape of the Diffusion Curve The most important and most widely reported finding about new product diffusion relates to the shape of the diffusion curve (see Figure 2.1). Numerous studies in a variety of disciplines suggest that (with the exception of entertainment products) the plot of cumulative sales of new products against time is an S-shaped curve (e.g., Mahajan, Muller, and Bass 1990; Mahajan, Muller, and Wind, 2000a). Parameters of the Bass Model Most of the marketing studies use the Bass diffusion model to capture the S-shaped curve of new product sales (see later section for explanation). This model has three key parameters: the coefficient of innovation or external influence (p), the coefficient of imitation or internal influence (q), and the market potential (\u03b1 or m).
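A small sketch of the Bass diffusion model just described, using its three parameters p, q, and m. The discrete-time form below is standard; the parameter values are illustrative, chosen near the mean coefficients reported in the review.

```python
# Discrete-time Bass model: new adopters per period and cumulative adopters.
def bass_adoptions(p: float, q: float, m: float, periods: int):
    cumulative, path = 0.0, []
    for _ in range(periods):
        # adoption hazard = p (external influence) + q * fraction already adopted
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        path.append((new, cumulative))
    return path

for t, (new, cum) in enumerate(bass_adoptions(p=0.03, q=0.38, m=100.0, periods=10), 1):
    print(f"t={t:2d}  new={new:6.2f}  cumulative={cum:6.2f}")
# Cumulative adoption traces the familiar S-shaped diffusion curve.
```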
Table 2.1 (Studies Included for Assessing Potential Generalizations; authors, categories, countries): Gatignon, Eliashberg, and Robertson (1989): 6 consumer durables, 14 European countries; Mahajan, Muller, and Bass (1990): numerous studies; Sultan, Farley, and Lehmann (1990): 213 applications, United States and European countries; Helsen, Jedidi, and DeSarbo (1993): 3 consumer durables, 11 European countries and United States; Ganesh and Kumar (1996): 1 industrial product, 10 European countries, United States, and Japan; Ganesh, Kumar, and Subramaniam (1997): 4 consumer durables, 16 European countries; Golder and Tellis (1997): 31 consumer durables, United States; Putsis et al. (1997): 4 consumer durables, 10 European countries; Dekimpe, Parker, and Sarvary (1998): 1 service, 74 countries; Kumar, Ganesh, and Echambadi (1998): 5 consumer durables, 14 European countries; Golder and Tellis (1998): 10 consumer durables, United States; Kohli, Lehmann, and Pae (1999): 32 appliances, housewares and electronics, United States; Dekimpe, Parker, and Sarvary (2000): 1 innovation, more than 160 countries; Mahajan, Muller, and Wind (2000): numerous studies; Van den Bulte (2000): 31 consumer durables, United States; Talukdar, Sudhir, and Ainslie (2002): 6 consumer durables, 31 countries; Agarwal and Bayus (2002): 30 innovations, United States; Goldenberg, Libai, and Muller (2002): 32 innovations, United States; Tellis, Stremersch, and Yin (2003): 10 consumer durables, 16 European countries; Golder and Tellis (2004): 30 consumer durables, United States; Stremersch and Tellis (2004): 10 consumer durables, 16 European countries; Van den Bulte and Stremersch (2004): 293 applications, 28 countries. Coefficient of Innovation \u2022 The mean value of the coefficient of innovation for a new product lies between 0.0007 and 0.03 (Sultan, Farley, and Lehmann, 1990; Talukdar, Sudhir, and Ainslie, 2002; Van den Bulte and Stremersch, 2004). \u2022 The mean value of the coefficient of innovation for a new product is 0.001 for developed countries and 0.0003 for developing countries (Talukdar, Sudhir, and Ainslie, 2002). \u2022 The coefficient of innovation is higher for European countries than for the United States (Sultan, Farley, and Lehmann, 1990). Coefficient of Imitation \u2022 The mean value of the coefficient of imitation for a new product lies between 0.38 and 0.53 (Sultan, Farley, and Lehmann, 1990; Talukdar, Sudhir, and Ainslie, 2002; Van den Bulte and Stremersch, 2004). \u2022 Indu"} {"_id": "b770794840edef3dc6a36fd9d55f0cf491bec42b", "title": "GPU-based cone beam computed tomography", "text": "The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is the filtered backprojection, which for a volume of size 256^3 takes up to 25 min on a standard system. Recent developments in the area of Graphic Processing Units (GPUs) make it possible to have access to high-performance computing solutions at a low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), which was executed on an NVIDIA GeForce GTX 280.
Our implementation results in improved reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe if differences occur between CPU and GPU-based reconstructions. By using our approach, the computation time for 256^3 volumes is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512^3 volumes is 8.5 s."} {"_id": "e62622831d968d7bf9440c0ea04252e46a649da6", "title": "Achievement Goals in the Classroom : Students ' Learning Strategies and Motivation Processes", "text": "We studied how specific motivational processes are related to the salience of mastery and performance goals in actual classroom settings. One hundred seventy-six students attending a junior high/high school for academically advanced students were randomly selected from one of their classes and responded to a questionnaire on their perceptions of the classroom goal orientation, use of effective learning strategies, task choices, attitudes, and causal attributions. Students who perceived an emphasis on mastery goals in the classroom reported using more effective strategies, preferred challenging tasks, had a more positive attitude toward the class, and had a stronger belief that success follows from one's effort. Students who perceived performance goals as salient tended to focus on their ability, evaluating their ability negatively and attributing failure to lack of ability. The pattern and strength of the findings suggest that the classroom goal orientation may facilitate the maintenance of adaptive motivation patterns when mastery goals are salient and are adopted by students."} {"_id": "104f1e76b653386c38b4a0bd535ad08c4a651832", "title": "Calibrating the COCOMO II Post-Architecture Model", "text": "The COCOMO II model was created to meet the need for a cost model that accounted for future software development practices. This resulted in the formulation of three submodels for cost estimation, one for composing applications, one for early lifecycle estimation and one for detailed estimation when the architecture of the product is understood. This paper describes the calibration procedures for the last model, the Post-Architecture COCOMO II model, from eighty-three observations. The results of the multiple regression analysis and their implications are discussed. Future work includes further analysis of the Post-Architecture model, calibration of the other models, derivation of maintenance parameters, and refining the effort distribution for the model output."} {"_id": "10b37a3018286dc787fd2cf25caae72f83035f10", "title": "Meditation states and traits: EEG, ERP, and neuroimaging studies.", "text": "Neuroelectric and imaging studies of meditation are reviewed. Electroencephalographic measures indicate an overall slowing subsequent to meditation, with theta and alpha activation related to proficiency of practice. Sensory evoked potential assessment of concentrative meditation yields amplitude and latency changes for some components and practices. Cognitive event-related potential evaluation of meditation implies that practice changes attentional allocation. Neuroimaging studies indicate increased regional cerebral blood flow measures during meditation. Taken together, meditation appears to reflect changes in anterior cingulate cortex and dorsolateral prefrontal areas.
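The COCOMO II calibration abstract above centers on the Post-Architecture effort equation: Effort = A x Size^E x product(effort multipliers), with the exponent E derived from scale factors. A compact sketch follows; the constants and sample driver values are illustrative assumptions, not the calibrated results from the paper's eighty-three observations.

```python
# Sketch of the COCOMO II Post-Architecture effort equation (assumed constants).
import math

A, B = 2.94, 0.91            # multiplicative and base exponential constants

def effort_person_months(ksloc: float, scale_factors, effort_multipliers) -> float:
    E = B + 0.01 * sum(scale_factors)          # exponent from the 5 scale factors
    return A * ksloc ** E * math.prod(effort_multipliers)

pm = effort_person_months(
    ksloc=50.0,
    scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],  # e.g., PREC, FLEX, RESL, TEAM, PMAT
    effort_multipliers=[1.10, 0.87, 1.00],         # subset of the 17 cost drivers
)
print(f"estimated effort: {pm:.1f} person-months")
```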
Neurophysiological meditative state and trait effects are variable but are beginning to demonstrate consistent outcomes for research and clinical applications. Psychological and clinical effects of meditation are summarized, integrated, and discussed with respect to neuroimaging data."} {"_id": "197ef20f1c652589da145a625093e8b31082c470", "title": "Weighted Association Rule Mining using weighted support and significance framework", "text": "We address the issues of discovering significant binary relationships in transaction datasets in a weighted setting. The traditional model of association rule mining is adapted to handle weighted association rule mining problems where each item is allowed to have a weight. The goal is to steer the mining focus to those significant relationships involving items with significant weights rather than being flooded in the combinatorial explosion of insignificant relationships. We identify the challenge of using weights in the iterative process of generating large itemsets. The problem of invalidation of the \"downward closure property\" in the weighted setting is solved by using an improved model of weighted support measurements and exploiting a \"weighted downward closure property\". A new algorithm called WARM (Weighted Association Rule Mining) is developed based on the improved model. The algorithm is both scalable and efficient in discovering significant relationships in weighted settings as illustrated by experiments performed on simulated datasets."} {"_id": "9467791cf63d3c8bf6e92c1786c91c9cf86ba256", "title": "A low-power Adaboost-based object detection processor using Haar-like features", "text": "This paper presents an architecture of a low-power real-time object detection processor using Adaboost with Haar-like features. We employ a register array based architecture, and introduce two architectural-level power optimization techniques: a signal gating domain for integral image extraction, and low-power integral image update. The power efficiency of our proposed architecture including nine classifiers is estimated to be 0.64 mW/fps when handling VGA (640 \u00d7 480) 70 fps video."} {"_id": "4a96fbbfba14bb9237948286b9bfacf97f059576", "title": "A study on evolution of email spam over fifteen years", "text": "Email spam is a persistent problem, especially today, with the increasing dedication and sophistication of spammers. Even popular social media sites such as Facebook, Twitter, and Google Plus are not exempt from email spam as they all interface with email systems. With an \u201carms-race\u201d between spammers and spam filter developers, spam has been continually changing over the years. In this paper, we analyze email spam trends on a dataset collected by the Spam Archive, which contains 5.1 million spam emails spread over 15 years (1998-2013). We use statistical analysis techniques on different headers in email messages (e.g. content type and length) and embedded items in the message body (e.g. URL links and HTML attachments). Also, we investigate topic drift by applying topic modeling on the content of email spam. Moreover, we extract sender-to-receiver IP routing networks from email spam and perform network analysis on it.
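The Haar-feature detector above relies on the integral image: each entry holds the sum of all pixels above and to the left, so any rectangle sum costs four lookups. A short, self-contained sketch of that classic construction (pure Python, the hardware paper implements it in a register array):

```python
# Integral image with a zero-padded first row/column for O(1) rectangle sums.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of img[top:top+height, left:left+width] in four lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
assert rect_sum(ii, 0, 0, 2, 2) == 12        # 1 + 2 + 4 + 5
```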
Our results show the dynamic nature of email spam over one and a half decades and demonstrate that the email spam business is not dying but changing to be more capricious."} {"_id": "19db9eb3a43dfbe5d45a70f65ef1fe39b1c1688c", "title": "Applying persuasive design in a diabetes mellitus application", "text": "This paper describes persuasive design methods and compares them to an application currently under development for diabetes mellitus patients. Various elements of persuasion and a categorization of persuasion types are mentioned. Also discussed are principles of how successful persuasion should be designed, as well as the practical applications and ethics of persuasive design. This paper is not striving for completeness of theories on the topic, but uses the theories to evaluate an application intended for diabetes mellitus patients. The results of this comparison can be used to improve the application."} {"_id": "e8e6ef5ad06082d6e57112e5ff8ac0a44ea94527", "title": "Analysis of modulation schemes for Bluetooth-LE module for Internet-of-Things (IoT) applications", "text": "Bluetooth transceivers have been an active area of recent research, being the key component of the physical layer in Bluetooth technology. The low power consumption of Bluetooth low energy (LE) devices compared to conventional Bluetooth devices has enhanced their importance in Internet-of-Things (IoT) applications. Therefore, Bluetooth LE device based solutions need to expand within the IoT network infrastructure. The transceivers in the literature and their modulation schemes are compared and summarized. Energy consumption of modulation schemes in Bluetooth communication is analyzed and compared using the model presented in this work. In this approach, considering both circuit and signal power consumption, the optimum modulation order for minimum energy consumption has been found using numerical calculation and the relation between signal-to-noise ratio (SNR) and channel capacity. Battery life for IoT sensors using Bluetooth LE technology as a wireless link has been analyzed considering multiple transaction times for transmitters having different power consumption. MFSK and the more bandwidth-efficient GFSK are identified as low-energy solutions for all smart devices."} {"_id": "2d6a600e03e2ac7ae18fe2623c770878394814d6", "title": "Wheat glutenin subunits and dough elasticity: findings of the EUROWHEAT project", "text": "*IACR-Long Ashton Research Station, Department of Agricultural Sciences, University of Bristol, Long Ashton BS41 9AF, UK (tel: +1275-392181; fax: +1275-394299; e-mail: peter.shewry@bbsrc.ac.uk); Institut National de la Recherche Agronomique, Centre de Recherches de Nantes, Laboratoire de Biochimie et de Technologie des Prot\u00e9ines, B.P. 71627, Rue de la G\u00e9raudi\u00e8re, Nantes 44316, Cedex 03, France; Dipartimento di Agrobiologia ed Agrochimica, Via San Camillo de Lellis, Viterbo 01100, Lazio, Italy; Institute of Food Research, Norwich Laboratory, Norwich Research Park, Colney Lane, Norwich NR4 7UA, UK"} {"_id": "97ceffb21ea6e2028eafe797d813fa643f255ee5", "title": "Insights into Human Behavior from Lesions to the Prefrontal Cortex", "text": "The prefrontal cortex (PFC), a cortical region that was once thought to be functionally insignificant, is now known to play an essential role in the organization and control of goal-directed thought and behavior. Neuroimaging, neurophysiological, and modeling techniques have led to tremendous advances in our understanding of PFC functions over the last few decades.
It should be noted, however, that neurological, neuropathological, and neuropsychological studies have contributed some of the most essential, historical, and often prescient conclusions regarding the functions of this region. Importantly, examination of patients with brain damage allows one to draw conclusions about whether a brain area is necessary for a particular function. Here, we provide a broad overview of PFC functions based on behavioral and neural changes resulting from damage to PFC in both human patients and nonhuman primates."} {"_id": "b2bad87690428fc5479d2d63ed928ab68449c672", "title": "A lane detection algorithm based on reliable lane markings", "text": "This paper proposes a robust and effective vision-based lane detection approach. First, two binary images are obtained from the region of interest of gray-scale images. The obtained binary images are merged by a novel neighborhood AND operator and then transformed to a bird's eye view (BEV) via inverse perspective mapping. Then, Gaussian probability density functions are fit to the left and right regions of a histogram image acquired from the BEV. Finally, a polynomial lane model is estimated from the identified regions. Experimental results show that the proposed method accurately detects lanes in complex situations including worn-out and curved lanes."} {"_id": "f1aaee5f263dd132f7efbdcb6988ec6dbf8de165", "title": "Socio-cognitive gamification: general framework for educational games", "text": "Gamification of learning material has received much interest from researchers in the past years. This paper aims to further improve the learning experience by applying socio-cognitive gamification to educational games. Dynamic difficulty adjustment (DDA) is a well-known tool in optimizing gaming experience. It is a process to control the parameters in a video game automatically based on user experience in real-time. This method can be extended by using a biofeedback approach, where certain aspects of the player\u2019s ability are estimated based on physiological measurements (e.g. eye tracking, ECG, EEG). Here, we outline the design of a biofeedback-based framework that supports dynamic difficulty adjustment in educational games. It has a universal architecture, so the concept can be employed to engage users in non-game contexts as well. The framework accepts input from the games, from the physiological sensors and from the so-called supervisor unit. This special unit empowers a new social aspect by enabling another user to observe or intervene during the interaction. To explain the game-user interaction itself in educational games we propose a hybrid model."} {"_id": "6143217ceebc10506fd5a8073434cd6f83cf9a33", "title": "EPOpt: Learning Robust Neural Network Policies Using Model Ensembles", "text": "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks \u2013 especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training.
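The lane-base estimation step in the lane detection abstract above (Gaussian fits to the left and right halves of the BEV column histogram) can be sketched roughly as follows. The synthetic image and the weighted-moment Gaussian fit are illustrative assumptions.

```python
# Sketch: fit Gaussians to the two halves of a BEV column histogram.
import numpy as np

bev = np.zeros((80, 200), dtype=np.uint8)   # binary bird's-eye-view image
bev[:, 48:52] = 1                           # left lane marking
bev[:, 148:152] = 1                         # right lane marking

hist = bev.sum(axis=0).astype(float)        # column histogram
mid = hist.size // 2

def gaussian_fit(columns, weights):
    """Weighted mean/std of a histogram region ~ Gaussian parameters."""
    mu = np.average(columns, weights=weights)
    sigma = np.sqrt(np.average((columns - mu) ** 2, weights=weights))
    return mu, sigma

left_mu, _ = gaussian_fit(np.arange(mid), hist[:mid])
right_mu, _ = gaussian_fit(np.arange(mid, hist.size), hist[mid:])
print(f"left lane near column {left_mu:.0f}, right lane near column {right_mu:.0f}")
```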
We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation."} {"_id": "9917363277c783a01bff32af1c27fc9b373ad55d", "title": "DeepLoco: dynamic locomotion skills using hierarchical deep reinforcement learning", "text": "Learning physics-based locomotion skills is a difficult problem, leading to solutions that typically exploit prior knowledge of various forms. In this paper we aim to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge. We adopt a two-level hierarchical control framework. First, low-level controllers are learned that operate at a fine timescale and which achieve robust walking gaits that satisfy stepping-target and style objectives. Second, high-level controllers are then learned which plan at the timescale of steps by invoking desired step targets for the low-level controller. The high-level controller makes decisions directly based on high-dimensional inputs, including terrain maps or other suitable representations of the surroundings. Both levels of the control policy are trained using deep reinforcement learning. Results are demonstrated on a simulated 3D biped. Low-level controllers are learned for a variety of motion styles and demonstrate robustness with respect to force-based disturbances, terrain variations, and style interpolation. High-level controllers are demonstrated that are capable of following trails through terrains, dribbling a soccer ball towards a target location, and navigating through static or dynamic obstacles."} {"_id": "de9b0f81505e536446bfb6a11281d4af7aa1d904", "title": "Terrain-adaptive locomotion skills using deep reinforcement learning", "text": "Reinforcement learning offers a promising methodology for developing skills for simulated characters, but typically requires working with sparse hand-crafted features. Building on recent progress in deep reinforcement learning (DeepRL), we introduce a mixture of actor-critic experts (MACE) approach that learns terrain-adaptive dynamic locomotion skills using high-dimensional state and terrain descriptions as input, and parameterized leaps or steps as output actions. MACE learns more quickly than a single actor-critic approach and results in actor-critic experts that exhibit specialization. Additional elements of our solution that contribute towards efficient learning include Boltzmann exploration and the use of initial actor biases to encourage specialization. Results are demonstrated for multiple planar characters and terrain classes."} {"_id": "0ecd4fdce541317b38124967b5c2a259d8f43c91", "title": "The Arcade Learning Environment: An Evaluation Platform for General Agents", "text": "In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players.
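A bare-bones sketch of the EPOpt-style robust objective described above: sample model parameters from the source-domain ensemble, roll out the current policy, and train only on the worst epsilon-percentile of returns. The rollout function and ensemble sampler below are stand-ins for a full RL pipeline, assumed for illustration.

```python
# Sketch: adversarial (worst-percentile) selection over a model ensemble.
import numpy as np

rng = np.random.default_rng(1)

def rollout_return(policy, model_params) -> float:
    """Stand-in: return of one trajectory under a sampled simulator."""
    return float(policy - (model_params - 1.0) ** 2 + rng.normal(0, 0.1))

def epopt_batch(policy, ensemble_sampler, n_models=100, epsilon=0.1):
    params = [ensemble_sampler() for _ in range(n_models)]
    returns = np.array([rollout_return(policy, p) for p in params])
    cutoff = np.quantile(returns, epsilon)       # epsilon-worst threshold
    return [p for p, r in zip(params, returns) if r <= cutoff]  # train on these

ensemble = lambda: rng.normal(loc=1.0, scale=0.3)   # e.g., uncertain link mass
hard_models = epopt_batch(policy=0.0, ensemble_sampler=ensemble)
print(f"training on {len(hard_models)} adversarially selected models")
```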
ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available."} {"_id": "8618c07a15ee34e8f09fda73cea28a83e5f06804", "title": "Software development governance: A meta-management perspective", "text": "Software development governance is a nascent field of research. Establishing how it is framed early can significantly affect the progress of contributions. This position paper considers the nature and role of governance in organizations and in the software development domain in particular. In contrast to the dominant functional and structural perspectives, an integrated view of governance is proposed as managing the management of a particular domain (that is, a meta-management perspective). Principles are developed and applied to software development governance and illustrated by a case study."} {"_id": "a65047dde59feb81790c304e4d5a9a7c25aa2acb", "title": "Automated extraction of product comparison matrices from informal product descriptions", "text": "Domain analysts, product managers, or customers aim to capture the important features and differences among a set of related products. Case-by-case review of each product description is a laborious and time-consuming task that fails to deliver a condensed view of a family of products. In this article, we investigate the use of automated techniques for synthesizing a product comparison matrix (PCM) from a set of product descriptions written in natural language. We describe a tool-supported process, based on term recognition, information extraction, clustering, and similarities, capable of identifying and organizing features and values in a PCM \u2013 despite the informality and absence of structure in the textual descriptions of products. We evaluate our proposal against numerous categories of products mined from BestBuy. Our empirical results show that the synthesized PCMs exhibit much quantitative, comparable information that can potentially complement or even refine technical descriptions of products. The user study shows that our automatic approach is capable of extracting a significant portion of correct features and correct values. This approach has been implemented in MatrixMiner, a web environment with interactive support for automatically synthesizing PCMs from informal product descriptions. MatrixMiner also maintains traceability with the original descriptions and the technical specifications for further refinement or maintenance by users."} {"_id": "b83396caf4762c906530c9219a9e4dd0658232b0", "title": "A General Lower Bound on the Number of Examples Needed for Learning", "text": "We prove a lower bound of \u03a9((1/\u03b5)ln(1/\u03b4) + VCdim(C)/\u03b5) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and \u03b5 and \u03b4 are the accuracy and confidence parameters.
This improves the previous best lower bound of \u03a9((1/\u03b5)ln(1/\u03b4) + VCdim(C)), and comes close to the known general upper bound of O((1/\u03b5)ln(1/\u03b4) + (VCdim(C)/\u03b5)ln(1/\u03b5)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor."} {"_id": "d5974504d7dadca9aa78df800a924f2ac18f24d6", "title": "On-demand virtualization for live migration in bare metal cloud", "text": "The level of demand for bare-metal cloud services has increased rapidly because such services are cost-effective for several types of workloads, and some cloud clients prefer a single-tenant environment due to the lower security vulnerability of such environments. However, as the bare-metal cloud does not utilize a virtualization layer, it cannot use live migration. Thus, there is a lack of manageability with the bare-metal cloud. Live migration support can improve the manageability of bare-metal cloud services significantly.\n This paper suggests an on-demand virtualization technique to improve the manageability of bare-metal cloud services. A thin virtualization layer is inserted into the bare-metal cloud when live migration is requested. After the completion of the live migration process, the thin virtualization layer is removed from the host. We modified BitVisor [19] to implement on-demand virtualization and live migration on the x86 architecture.\n The elapsed time of on-demand virtualization was negligible. It takes about 20 ms to insert the virtualization layer and 30 ms to remove it. After removing the virtualization layer, the host machine works with bare-metal performance."} {"_id": "4b278c79a7ba22a0eb9f5f967010bf57d6667c37", "title": "Gender identity and sexual orientation in women with borderline personality disorder.", "text": "INTRODUCTION\nIn the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, text revision (DSM-IV-TR) (and earlier editions), a disturbance in \"identity\" is one of the defining features of borderline personality disorder (BPD). Gender identity, a person's sense of self as a male or a female, constitutes an important aspect of identity formation, but this construct has rarely been examined in patients with BPD.\n\n\nAIMS\nIn the present study, the presence of gender identity disorder or confusion was examined in women diagnosed with BPD.\n\n\nMAIN OUTCOME MEASURES\nWe used a validated dimensional measure of gender dysphoria. Recalled gender identity and gender role behavior from childhood was also assessed with a validated dimensional measure, and current sexual orientation was assessed by two self-report measures.\n\n\nMETHODS\nA consecutive series of 100 clinic-referred women (mean age, 34 years) with BPD participated in the study. The women were diagnosed with BPD using the International Personality Disorder Exam-BPD Section.\n\n\nRESULTS\nNone of the women with BPD met the criterion for caseness on the dimensional measure of gender dysphoria.
Women who self-reported either a bisexual or a homosexual sexual orientation had a significantly higher score on the dimensional measure of gender dysphoria than the women who self-reported a heterosexual sexual orientation, and they also recalled significantly more cross-gender behavior during childhood. Results were compared with a previous study on a diagnostically heterogeneous group of women with other clinical problems.\n\n\nCONCLUSION\nThe importance of psychosexual assessment in the clinical evaluation of patients with BPD is discussed."} {"_id": "f4ed7a2c1ce10b08ad338217743d737de64b056b", "title": "Enhanced Classification Model for Cervical Cancer Dataset based on Cost Sensitive Classifier", "text": "Cervical cancer threatens the lives of many women in our world today. In 2014, the number of women infected with this disease in the United States was 12,578, of which 4,115 died, with a death rate of nearly 32%. Cancer data, including cervical cancer datasets, represent a significant challenge for data mining techniques because of the absence of different costs for error cases. The proposed model presents a cost-sensitive classifier that has three main stages: the first stage preprocesses the original data to prepare it for the classification model; the second builds the model based on a decision tree classifier with cost sensitivity; and the final stage evaluates the proposed model on many metrics, in addition to applying cross-validation. The proposed model provides more accurate results in both binary-class and multi-class classification. It has a TP rate of 0.429, compared with 0.160 for a typical decision tree, in the binary-class task."} {"_id": "279beb332fa6e158f32742b7dfafe83f12a97110", "title": "Sequencing technologies \u2014 the next generation", "text": "Demand has never been greater for revolutionary technologies that deliver fast, inexpensive and accurate genome information. This challenge has catalysed the development of next-generation sequencing (NGS) technologies. The inexpensive production of large volumes of sequence data is the primary advantage over conventional methods. Here, I present a technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments. I also outline the broad range of applications for NGS technologies, in addition to providing guidelines for platform selection to address biological questions of interest."} {"_id": "4d843ef246fa08f39d1b1edbc5002dbae35b9334", "title": "Multi-Bias Non-linear Activation in Deep Neural Networks", "text": "As a widely used non-linear activation, Rectified Linear Unit (ReLU) separates noise and signal in a feature map by learning a threshold or bias. However, we argue that the classification of noise and signal not only depends on the magnitude of responses, but also the context of how the feature responses would be used to detect more abstract patterns in higher layers. In order to output multiple response maps with magnitude in different ranges for a particular visual pattern, existing networks employing ReLU and its variants have to learn a large number of redundant filters. In this paper, we propose a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses.
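A brief sketch of the cost-sensitive idea in the cervical cancer abstract above, using a scikit-learn decision tree whose class weights penalize missing a cancer case far more than raising a false alarm. The 1:10 cost ratio and the synthetic data are illustrative assumptions, not the paper's setup.

```python
# Cost-sensitive vs. plain decision tree on imbalanced synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
costly = DecisionTreeClassifier(class_weight={0: 1, 1: 10},  # misses cost 10x
                                random_state=0).fit(X_tr, y_tr)

def tp_rate(model):
    pred = model.predict(X_te)
    positives = (y_te == 1)
    return (pred[positives] == 1).mean()

print(f"TP rate plain: {tp_rate(plain):.3f}  cost-sensitive: {tp_rate(costly):.3f}")
```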
It is placed after the convolution layer to decouple the responses to a convolution kernel into multiple maps by multi-thresholding magnitudes, thus generating more patterns in the feature space at a low computational cost. It provides great flexibility in selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. Such a simple and yet effective scheme achieves state-of-the-art performance on several benchmarks."} {"_id": "5f2a4982a8adef2d1a6d589a155143291d440c0a", "title": "Constraints in the IoT: The World in 2020 and Beyond", "text": "The Internet of Things (IoT), often referred to as the future Internet, is a collection of interconnected devices integrated into the world-wide network that covers almost everything and could be available anywhere. IoT is an emerging technology and aims to play an important role in saving money, conserving energy, eliminating gaps and enabling better monitoring for intensive management on a routine basis. On the other hand, it is also facing certain design constraints such as technical challenges, social challenges, compromised privacy and performance tradeoffs. This paper surveys major technical limitations that are hindering the successful deployment of the IoT, such as standardization, interoperability, networking issues, addressing and sensing issues, power and storage restrictions, privacy and security, etc. This paper categorizes the existing research on the technical constraints that has been published in the recent years. With this categorization, we aim to provide an easy and concise view of the technical aspects of the IoT. Furthermore, we forecast the changes influenced by the IoT. This paper predicts the future and provides an estimation of the world in the year 2020 and beyond. Keywords\u2014Internet of Things; Future Internet; Next generation network issues; World-wide network; 2020"} {"_id": "28f7f43774bce41023f9912a24219e33612a3842", "title": "Don't Get Caught in the Cold, Warm-up Your JVM: Understand and Eliminate JVM Warm-up Overhead in Data-Parallel Systems", "text": "Many widely used, latency sensitive, data-parallel distributed systems, such as HDFS, Hive, and Spark, choose to use the Java Virtual Machine (JVM), despite debate on the overhead of doing so. This paper analyzes the extent and causes of the JVM performance overhead in the above-mentioned systems. Surprisingly, we find that the warm-up overhead, i.e., class loading and interpretation of bytecode, is frequently the bottleneck. For example, even an I/O intensive, 1GB read on HDFS spends 33% of its execution time in JVM warm-up, and Spark queries spend an average of 21 seconds in warm-up. The findings on JVM warm-up overhead reveal a contradiction between the principle of parallelization, i.e., speeding up long running jobs by parallelizing them into short tasks, and amortizing JVM warm-up overhead through long tasks. We solve this problem by designing HotTub, a new JVM that amortizes the warm-up overhead over the lifetime of a cluster node instead of over a single job by reusing a pool of already warm JVMs across multiple applications. The speed-up is significant. 
For example, using HotTub results in up to 1.8X speedups for Spark queries, despite not adhering to the JVM specification in edge cases."} {"_id": "2fb5c1fdfdf999631a30c09a3602956c9de084db", "title": "HellRank: a Hellinger-based centrality measure for bipartite social networks", "text": "Measuring centrality in a social network, especially in bipartite mode, poses many challenges, for example the requirement of full knowledge of the network topology and the difficulty of properly detecting the top-k behaviorally representative users. To overcome the above-mentioned challenges, we propose HellRank, an accurate centrality measure for identifying central nodes in bipartite social networks. HellRank is based on the Hellinger distance between two nodes on the same side of a bipartite network. We theoretically analyze the impact of this distance on a bipartite network and find upper and lower bounds for it. The computation of the HellRank centrality measure can be distributed, by letting each node use only local information on its immediate neighbors. Consequently, one does not need a central entity that has full knowledge of the network topological structure. We experimentally evaluate the performance of the HellRank measure in correlation with other centrality measures on real-world networks. The results show partial ranking similarity between HellRank and the other conventional metrics according to the Kendall and Spearman rank correlation coefficients."} {"_id": "b65ec8eec4933b3a9425d7dc981fdc27e4260077", "title": "The biology of VEGF and its receptors", "text": "Vascular endothelial growth factor (VEGF) is a key regulator of physiological angiogenesis during embryogenesis, skeletal growth and reproductive functions. VEGF has also been implicated in pathological angiogenesis associated with tumors, intraocular neovascular disorders and other conditions. The biological effects of VEGF are mediated by two receptor tyrosine kinases (RTKs), VEGFR-1 and VEGFR-2, which differ considerably in signaling properties. Non-signaling co-receptors also modulate VEGF RTK signaling. Currently, several VEGF inhibitors are undergoing clinical testing in a number of malignancies. VEGF inhibition is also being tested as a strategy for the prevention of angiogenesis, vascular leakage and visual loss in age-related macular degeneration."} {"_id": "45d5d9f461793425dff466556acd87b934a6fa0c", "title": "Work in Progress: K-Nearest Neighbors Techniques for ABAC Policies Clustering", "text": "In this paper, we present an approach based on K-Nearest Neighbors algorithms for policies clustering that aims to reduce the dimensionality of ABAC policies for large-scale systems. Since ABAC considers a very large set of attributes for access decisions, it turns out that using such a model for large-scale systems might be very complicated. To date, researchers have proposed to use data mining techniques to discover roles for RBAC system construction. In this work in progress, we consider the usage of KNN-based techniques for the classification of ABAC policies based on similarity computations of rules in order to enhance ABAC flexibility and to reduce the number of policy rules."} {"_id": "5cacce78a013fd2a5dcf4457787feff42ddaf68e", "title": "Online 3D acquisition and model integration", "text": "This paper presents a system which yields complete 3D models in a fast and inexpensive way. The major building blocks are a high-speed structured light range scanner and a registration module. 
The former generates \u2018raw\u2019 range data whereas the latter performs a fast registration (ICP) and renders the partially integrated model as a preview of the final result. As the scanner uses only a single image to make a reconstruction, it is possible to scan while the object is moved. The use of an adaptive projection pattern gives more robust behaviour. This allows the system to deal more easily with complicated geometry and texture than most other systems. During the scanning process the model is built up incrementally and rendered on the screen. This real-time visual feedback allows the user to check the current state of the 3D model. Holes or other flaws can be detected during the acquisition process itself. This speeds up the model building process, and indirectly solves the problem of view planning. The whole real-time pipeline comprising acquisition, merging and visualization only uses off-the-shelf hardware. A regular desktop PC connected to a camera and LCD projector is turned into a high-speed scanner and modeler. The current implementation has a throughput of approx. 5 fps."} {"_id": "9f8f7e6cc18205b92ecf9e792bfcdb4b6f19cae3", "title": "Opinion Mining in Online Reviews About Distance Education Programs", "text": "The popularity of distance education programs is increasing at a fast pace. On par with this development, online communication in fora, social media and reviewing platforms between students is increasing as well. Exploiting this information to support fellow students or institutions requires extracting the relevant opinions in order to automatically generate reports providing an overview of the pros and cons of different distance education programs. We report on an experiment involving distance education experts with the goal of developing a dataset of reviews annotated with the relevant categories and the aspects in each category discussed in the specific review, together with an indication of the sentiment. Based on this experiment, we present an approach to extract general categories and specific aspects under discussion in a review together with their sentiment. We frame this task as a multi-label hierarchical text classification problem and empirically investigate the performance of different classification architectures to couple the prediction of a category with the prediction of particular aspects in this category. We evaluate different architectures and show that a hierarchical approach leads to superior results in comparison to a flat model which makes decisions independently. This work was performed while the first and last authors were at Bielefeld University."} {"_id": "539a06ab025005ff2ad5d8435515faa058c73b07", "title": "Real-time large-scale dense RGB-D SLAM with volumetric fusion", "text": "We present a new SLAM system capable of producing high quality globally consistent surface reconstructions over hundreds of metres in real-time with only a low-cost commodity RGB-D sensor. By using a fused volumetric surface reconstruction we achieve a much higher quality map than would be achieved using raw RGB-D point clouds. In this paper we highlight three key techniques associated with applying a volumetric fusion-based mapping system to the SLAM problem in real-time. First, the use of a GPU-based 3D cyclical buffer trick to efficiently extend dense every-frame volumetric fusion of depth maps to function over an unbounded spatial region. 
Second, overcoming camera pose estimation limitations in a wide variety of environments by combining both dense geometric and photometric camera pose constraints. Third, efficiently updating the dense map according to place recognition and subsequent loop closure constraints by the use of an \u201cas-rigid-as-possible\u201d space deformation. We present results on a wide variety of aspects of the system and show through evaluation on de facto standard RGB-D benchmarks that our system performs strongly in terms of trajectory estimation, map quality and computational performance in comparison to other state-of-the-art systems."} {"_id": "af1745e54e256351f55da4a4a4bf61f594e7e3a7", "title": "The six determinants of gait and the inverted pendulum analogy: A dynamic walking perspective.", "text": "We examine two prevailing, yet surprisingly contradictory, theories of human walking. The six determinants of gait are kinematic features of gait proposed to minimize the energetic cost of locomotion by reducing the vertical displacement of the body center of mass (COM). The inverted pendulum analogy proposes that it is beneficial for the stance leg to behave like a pendulum, prescribing a more circular arc, rather than a horizontal path, for the COM. Recent literature presents evidence against the six determinants theory, and a simple mathematical analysis shows that a flattened COM trajectory in fact increases muscle work and force requirements. A similar analysis shows that the inverted pendulum fares better, but paradoxically predicts no work or force requirements. The paradox may be resolved through the dynamic walking approach, which refers to periodic gaits produced almost entirely by the dynamics of the limbs alone. Demonstrations include passive dynamic walking machines that descend a gentle slope, and active dynamic walking robots that walk on level ground. Dynamic walking takes advantage of the inverted pendulum mechanism, but requires mechanical work to transition from one pendular stance leg to the next. We show how the step-to-step transition is an unavoidable energetic consequence of the inverted pendulum gait, and gives rise to predictions that are experimentally testable on humans and machines. The dynamic walking approach provides a new perspective, focusing on mechanical work rather than the kinematics or forces of gait. It is helpful for explaining human gait features in a constructive rather than interpretive manner."} {"_id": "3e2dcf63a9f6fac3ac17aaefe2be2572349cc6f5", "title": "Comparing CORBA and Web-Services in view of a Service Oriented Architecture", "text": "The concept of Service Oriented Architecture revolves around registering services as tasks. These tasks are accomplished collectively by various disparate components seamlessly connected to one another. The task of interlinking these components may be considered amongst the most convoluted and difficult tasks currently faced by software practitioners. This paper attempts to show that although middleware technologies alone can be utilized to develop a service oriented architecture, such an architecture would severely lack quality, interoperability and ease of implementation. In order to resolve these complexities and complications, this paper proposes Web Services as an alternative to Middleware for the realization of a fully functional, interoperable and automated SOA that conforms to the characteristics of a SOA. This paper provides an abstract implementation model of a SOA using both middleware and web services. 
It then attempts to point out the implementation and accepted benefits of the latter, especially when legacy applications are involved. Emphasis is laid on the significance of interoperability, since it assists in mobility and other corporate benefits. The paper concludes that when interoperability, along with its benefits of mobility, expansion, costs, simplicity and enterprise integration, is required in the construction of a SOA, then web services should be the definite integration choice. The paper also highlights the importance of object oriented middleware, along with situations in which it might be preferred over web services."} {"_id": "38e83f4fcfca63dfa2db4f48ad75cbbdda948a84", "title": "Gentle Adaboost algorithm for weld defect classification", "text": "In this paper, we present a new strategy for automatic classification of weld defects in radiographs based on the Gentle Adaboost algorithm. Radiographic images were segmented and moment-based features were extracted and given as input to the Gentle Adaboost classifier. The performance of our classification system is evaluated using hundreds of radiographic images. The classifier is trained to classify each defect pattern into one of four classes: Crack, Lack of penetration, Porosity, and Solid inclusion. The experimental results show that the Gentle Adaboost classifier is an efficient automatic weld defect classification algorithm that achieves high accuracy and is faster than the support vector machine (SVM) algorithm on the tested data."} {"_id": "8bc68ff091ee873c797b8b2979139b024527cb59", "title": "Artificial Neural Networks for Misuse Detection", "text": "Misuse detection is the process of attempting to identify instances of network attacks by comparing current activity against the expected actions of an intruder. Most current approaches to misuse detection involve the use of rule-based expert systems to identify indications of known attacks. However, these techniques are less successful in identifying attacks which vary from expected patterns. Artificial neural networks provide the potential to identify and classify network activity based on limited, incomplete, and nonlinear data sources. We present an approach to the process of misuse detection that utilizes the analytical strengths of neural networks, and we provide the results from our preliminary analysis of this approach."} {"_id": "0722af4b124785f40861b527fa494a4d76ac6d70", "title": "A General Greedy Approximation Algorithm with Applications", "text": "Greedy approximation algorithms have been frequently used to obtain sparse solutions to learning problems. In this paper, we present a general greedy algorithm for solving a class of convex optimization problems. We derive a bound on the rate of approximation for this algorithm, and show that our algorithm includes a number of earlier studies as special cases."} {"_id": "d02e1139a93a91ee83becf5fdbdf83b467799e20", "title": "Traffic sign detection and recognition based on random forests", "text": "In this paper we present a new traffic sign detection and recognition (TSDR) method, which is achieved in three main steps. The first step segments the image based on thresholding of HSI color space components. The second step detects traffic signs by processing the blobs extracted by the first step. The last one performs the recognition of the detected traffic signs. The main contributions of the paper are as follows. 
First, we propose, in the second step, to use invariant geometric moments to classify shapes instead of machine learning algorithms. Second, inspired by the existing features, new ones have been proposed for the recognition. The histogram of oriented gradients (HOG) features have been extended to the HSI color space and combined with the local self-similarity (LSS) features to get the descriptor we use in our algorithm. As a classifier, random forest and support vector machine (SVM) classifiers have been tested together with the new descriptor. The proposed method has been tested on both the German Traffic Sign Detection and Recognition Benchmark and the Swedish Traffic Signs data sets. The results obtained are satisfactory when compared to the state-of-the-art methods."} {"_id": "c8ec9df5c699b110ed04d4b29d2de20adc5283db", "title": "Few-shot Learning Using a Small-Sized Dataset of High-Resolution FUNDUS Images for Glaucoma Diagnosis", "text": "Deep learning has recently attracted a lot of attention, mainly thanks to substantial gains in terms of effectiveness. However, there is still room for significant improvement, especially when dealing with use cases that come with a limited availability of data, as is often the case in the area of medical image analysis. In this paper, we introduce a novel approach for early diagnosis of glaucoma in high-resolution FUNDUS images, only requiring a small number of training samples. In particular, we developed a predictive model based on a matching neural network architecture, integrating a high-resolution deep convolutional network that allows preserving the high-fidelity nature of the medical images. Our experimental results show that our predictive model is able to obtain higher levels of effectiveness than vanilla deep convolutional neural networks."} {"_id": "0afcbfcd5f30eb680f7c6868fbddb2f034093918", "title": "NLANGP: Supervised Machine Learning System for Aspect Category Classification and Opinion Target Extraction", "text": "This paper describes our system used in the Aspect Based Sentiment Analysis Task 12 of SemEval-2015. Our system is based on two supervised machine learning algorithms: a sigmoidal feedforward network to train binary classifiers for aspect category classification (Slot 1), and Conditional Random Fields to train classifiers for opinion target extraction (Slot 2). We extract a variety of lexicon and syntactic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances, ranking 1st for three of the evaluations (Slot 1 for both restaurant and laptop domains, and Slot 1 & 2) and 2nd for the Slot 2 evaluation."} {"_id": "120ce727d1754bdd663199fa03b110fd6ceee4bf", "title": "SemEval-2016 Task 5: Aspect Based Sentiment Analysis", "text": "This paper describes the SemEval 2016 shared task on Aspect Based Sentiment Analysis (ABSA), a continuation of the respective tasks of 2014 and 2015. In its third year, the task provided 19 training and 20 testing datasets for 8 languages and 7 domains, as well as a common evaluation procedure. From these datasets, 25 were for sentence-level and 14 for text-level ABSA; the latter was introduced for the first time as a subtask in SemEval. 
The task attracted 245 submissions from 29 teams."} {"_id": "19d748f959a24888f0239facf194085a4bf59f20", "title": "A Logic Programming Approach to Aspect Extraction in Opinion Mining", "text": "Aspect extraction aims to extract fine-grained opinion targets from opinion texts. Recent work has shown that the syntactical approach performs well. In this paper, we show that Logic Programming, particularly Answer Set Programming (ASP), can be used to elegantly and efficiently implement the key components of syntax-based aspect extraction. Specifically, the well-known double propagation (DP) method is implemented using 8 ASP rules that naturally model all key ideas in the DP method. Our experiment on a widely used data set also shows that the ASP implementation is much faster than a Java-based implementation. The syntactical approach has its limitations too. To further improve its performance, we identify a set of general words from WordNet that have little chance of being an aspect and prune them when extracting aspects. The concept of general words and their pruning is concisely captured by 10 new ASP rules, a natural extension of the 8 rules for the original DP method. Experimental results show a major improvement in precision with almost no drop in recall compared with those reported in the existing work on a typical benchmark data set. Logic Programming provides a convenient and effective tool to encode and thus test knowledge needed to improve the aspect extraction methods, so that researchers can focus on the identification and discovery of new knowledge to improve aspect extraction."} {"_id": "4947198a6f4b0e3f9a514bfb4c9d5410427b9e84", "title": "Cortical demyelination and diffuse white matter injury in multiple sclerosis.", "text": "Focal demyelinated plaques in white matter, which are the hallmark of multiple sclerosis pathology, only partially explain the patient's clinical deficits. We thus analysed global brain pathology in multiple sclerosis, focusing on the normal-appearing white matter (NAWM) and the cortex. Autopsy tissue from 52 multiple sclerosis patients (acute, relapsing-remitting, primary and secondary progressive multiple sclerosis) and from 30 controls was analysed using quantitative morphological techniques. New and active focal inflammatory demyelinating lesions in the white matter were mainly present in patients with acute and relapsing multiple sclerosis, while diffuse injury of the NAWM and cortical demyelination were characteristic hallmarks of primary and secondary progressive multiple sclerosis. Cortical demyelination and injury of the NAWM, reflected by diffuse axonal injury with profound microglia activation, occurred on the background of a global inflammatory response in the whole brain and meninges. There was only a marginal correlation between focal lesion load in the white matter and diffuse white matter injury or cortical pathology. Our data suggest that multiple sclerosis starts as a focal inflammatory disease of the CNS, which gives rise to circumscribed demyelinated plaques in the white matter. 
With chronicity, diffuse inflammation accumulates throughout the whole brain, and is associated with slowly progressive axonal injury in the NAWM and cortical demyelination."} {"_id": "12f1075dba87030dd82b1254c2f160f0ab861c2a", "title": "Essential communication practices for Extreme Programming in a global software development team", "text": "We conducted an industrial case study of a distributed team in the USA and the Czech Republic that used Extreme Programming. Our goal was to understand how this globally-distributed team created a successful project in a new problem domain using a methodology that is dependent on informal, face-to-face communication. We collected quantitative and qualitative data and used grounded theory to identify four key factors for communication in globally-distributed XP teams working within a new problem domain. Our study suggests that, if these critical enabling factors are addressed, methodologies dependent on informal communication can be used on global software development projects."} {"_id": "11dd5ff0c75630de85d040ed1bc42eacf168f578", "title": "The application of virtual reality to (chemical engineering) education", "text": "Virtual reality, VR, offers many benefits to technical education, including the delivery of information through multiple active channels, the addressing of different learning styles, and experiential-based learning. This poster presents work performed by the authors to apply VR to engineering education, in three broad project areas: virtual chemical plants, virtual laboratory accidents, and a virtual UIC campus. The first area provides guided exploration of domains otherwise inaccessible, such as the interior of operating reactors and microscopic reaction mechanisms. The second promotes safety by demonstrating the consequences of not following proper lab safety procedures. And the third provides valuable guidance for (foreign) visitors. All programs developed are available on the Web, for free download to any interested parties."} {"_id": "c4f05354ce6776dd1a3a076c9cc60614ee38476e", "title": "Deep EndoVO: A Recurrent Convolutional Neural Network (RCNN) based Visual Odometry Approach for Endoscopic Capsule Robots", "text": "Ingestible wireless capsule endoscopy is an emerging minimally invasive diagnostic technology for inspection of the GI tract and diagnosis of a wide range of diseases and pathologies. Medical device companies and many research groups have recently made substantial progress in converting passive capsule endoscopes into active capsule robots, enabling more accurate, precise, and intuitive detection of the location and size of the diseased areas. Since reliable real-time pose estimation functionality is crucial for actively controlled endoscopic capsule robots, in this study we propose a monocular visual odometry (VO) method for endoscopic capsule robot operations. Our method relies on the application of deep Recurrent Convolutional Neural Networks (RCNNs) for the visual odometry task, where Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are used for feature extraction and inference of dynamics across the frames, respectively. 
Detailed analyses and evaluations made on a real pig stomach dataset prove that our system achieves high translational and rotational accuracies for different types of endoscopic capsule robot trajectories."} {"_id": "76082a00ed6f3e3828f78436723d7fde29faeed4", "title": "Pondering the Concept of Abstraction in (Illustrative) Visualization", "text": "We explore the concept of abstraction as it is used in visualization, with the ultimate goal of understanding and formally defining it. Researchers so far have used the concept of abstraction largely by intuition, without a precise meaning. This lack of specificity left questions on the characteristics of abstraction, its variants, its control, or its ultimate potential for visualization, and in particular illustrative visualization, mostly unanswered. In this paper we thus provide a first formalization of the abstraction concept and discuss how this formalization affects the application of abstraction in a variety of visualization scenarios. Based on this discussion, we derive a number of open questions still waiting to be answered, thus formulating a research agenda for the use of abstraction for the visual representation and exploration of data. This paper, therefore, is intended to provide a contribution to the discussion of the theoretical foundations of our field, rather than attempting to provide a completed and final theory."} {"_id": "357a89799c968d5a363197026561b04c5a496b66", "title": "Haraka - Efficient Short-Input Hashing for Post-Quantum Applications", "text": "Recently, many efficient cryptographic hash function design strategies have been explored, not least because of the SHA-3 competition. These designs are, almost exclusively, geared towards high performance on long inputs. However, various applications exist where the performance on short (fixed length) inputs matters more. Such hash functions are the bottleneck in hash-based signature schemes like SPHINCS or XMSS, which is currently under standardization. Secure functions specifically designed for such applications are scarce. We attend to this gap by proposing two short-input hash functions (or rather simply compression functions). By utilizing AES instructions on modern CPUs, our proposals are the fastest on such platforms, reaching throughputs below one cycle per hashed byte even for short inputs, while still having a very low latency of less than 60 cycles. Under the hood, this result comes with several innovations. First, we study whether the number of rounds for our hash functions can be reduced if only second-preimage resistance (and not collision resistance) is required. The conclusion is: only a little. Second, since their inception, AES-like designs allow for supportive security arguments by means of counting and bounding the number of active S-boxes. However, this ignores powerful attack vectors using truncated differentials, including rebound attacks. We develop a general tool-based method to include arguments against attack vectors using truncated differentials."} {"_id": "4d58f886f5150b2d5e48fd1b5a49e09799bf895d", "title": "Texas 3D Face Recognition Database", "text": "We make the Texas 3D Face Recognition Database available to researchers in three dimensional (3D) face recognition and other related areas. This database contains 1149 pairs of high resolution, pose normalized, preprocessed, and perfectly aligned color and range images of 118 adult human subjects acquired using a stereo camera. 
The images are accompanied by information about the subjects' gender, ethnicity, facial expression, and the locations of 25 manually located anthropometric facial fiducial points. Specific partitions of the data for developing and evaluating 3D face recognition algorithms are also included."} {"_id": "270af733bcf18d9c14230bcffc77d6ae57e2667d", "title": "Maximum Entropy Deep Inverse Reinforcement Learning", "text": "This paper presents a general framework for employing deep architectures, in particular neural networks, to solve the inverse reinforcement learning (IRL) problem. Specifically, we propose to exploit the representational capacity and favourable computational complexity of deep networks to approximate complex, nonlinear reward functions. We show that the Maximum Entropy paradigm for IRL lends itself naturally to the efficient training of deep architectures. At test time, the approach leads to a computational complexity independent of the number of demonstrations. This makes it especially well-suited for applications in life-long learning scenarios commonly encountered in robotics. We demonstrate that our approach achieves performance commensurate with the state-of-the-art on existing benchmarks already with simple, comparatively shallow network architectures, while significantly outperforming the state-of-the-art on an alternative benchmark based on more complex, highly varying reward structures representing strong interactions between features. Furthermore, we extend the approach to include convolutional layers in order to eliminate the dependency on precomputed features of current algorithms and to underline the substantial gain in flexibility in framing IRL in the context of deep learning."} {"_id": "2dfcb6a65c8c97e7cadd2742f1671d773ab596c1", "title": "Distributed Secret Sharing Approach With Cheater Prevention Based on QR Code", "text": "QR barcodes are used extensively due to their beneficial properties, including small tag size, large data capacity, reliability, and high-speed scanning. However, the private data of the QR barcode lacks adequate security protection. In this article, we design a secret QR sharing approach to protect the private QR data with a secure and reliable distributed system. The proposed approach differs from related QR code schemes in that it uses the QR characteristics to achieve secret sharing and can resist the print-and-scan operation. The secret can be split and conveyed with QR tags in the distribution application, and the system can retrieve the lossless secret when authorized participants cooperate. General browsers can read the original data from the marked QR tag via a barcode reader, and this helps reduce the security risk of the secret. Based on our experiments, the new approach is feasible and provides content readability, cheater detectability, and an adjustable secret payload of the QR barcode."} {"_id": "5f6b7fca82ff3947f6cc571073c18c687eaedd0d", "title": "Big Data analytics. Three use cases with R, Python and Spark", "text": "Management and analysis of big data are systematically associated with a distributed data architecture in the Hadoop and now Spark frameworks. 
This article offers an introduction for statisticians to these technologies by comparing the performance obtained by the direct use of three reference environments: R, Python Scikit-learn, and Spark MLlib on three public use cases: character recognition, recommending films, and categorizing products. As the main result, it appears that, while Spark is very efficient for data munging and for recommendation by collaborative filtering (non-negative factorization), current implementations of conventional learning methods (logistic regression, random forests) in MLlib or SparkML do not, or only poorly, compete with the habitual use of these methods (R, Python Scikit-learn) in an integrated or undistributed architecture."} {"_id": "b58f395fd0afa3e309ff108247e03ab6c3f15719", "title": "Effectiveness Evaluation of Rule Based Classifiers for the Classification of Iris Data Set", "text": "In machine learning, classification refers to a step by step procedure for designating a given piece of input data into any one of the given categories. Many classification problems occur and need to be solved. Different types of classification algorithms, such as tree-based and rule-based ones, are widely used. This work studies the effectiveness of rule-based classifiers for classification by taking a sample data set from the UCI machine learning repository using an open source machine learning tool. A comparison of different rule-based classifiers used in data mining and a practical guideline for selecting the most suited algorithm for a classification task are presented, and some empirical criteria for describing and evaluating the classifiers are given."} {"_id": "53c5c2da782debbaa150cbce1ff4909d1d323b1f", "title": "Comparison of high-speed switched reluctance machines with conventional and toroidal windings", "text": "This paper presents designs of 50,000 rpm 6/4 switched reluctance motors (SRMs) with a focus on the comparative study of conventional and toroidal windings. There are four different machines compared in this paper; the first is a conventional SRM, and the other three are toroidal winding machines. The first toroidal SRM (TSRM1) employs the conventional asymmetric converter and the same switching sequence as the conventional SRM (CSRM). Therefore, an equivalent magnetic performance is observed. The second toroidal SRM (TSRM2) introduces a 12-switch converter topology. With a proper coil connection and switching sequence, all the coils are active and contribute to the flux and torque generation at the same time. The analysis shows that for the same amount of copper losses, TSRM2 yields a 50% higher output torque and power at rated speed than CSRM, while TSRM1 only generates half the torque of CSRM. The third toroidal SRM is a resized TSRM2, which is presented with the same envelope dimension as CSRM (same volumetric comparison). 
The comparison shows it is competitive with CSRM, especially as the toroidal winding can achieve a higher filling factor during the winding manufacturing process."} {"_id": "1250ef2f19d0ac751ec7d0e2a22e741ecb40ea92", "title": "Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition", "text": "Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions of LSTM are investigated, considering that a deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed and empirically evaluated on a large vocabulary conversational telephone speech recognition task. Meanwhile, regarding multi-GPU devices, the training process for LSTM networks is introduced and discussed. Experimental results demonstrate that the deep LSTM networks benefit from the depth and yield state-of-the-art performance on this task."} {"_id": "84e65a5bdb735d62eef4f72c2f01af354b2285ba", "title": "Efficient Architecture Search by Network Transformation", "text": "Techniques for automatically designing deep neural network architectures, such as reinforcement learning based approaches, have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to use widely. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saving a large amount of computational cost. We apply our method to explore the architecture space of plain convolutional neural networks (no skip-connections, branching, etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves a 4.23% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters."} {"_id": "63af189a7f392ea29cca5b18a71b9a680aef071b", "title": "Iron Loss Reduction in an Interior PM Automotive Alternator", "text": "This paper examines the iron loss characteristics of a high-flux interior permanent-magnet machine. The machine was designed as a concept demonstrator for a 6-kW automotive alternator and has a wide field-weakening range. Initial experimental results revealed a high iron loss during field-weakening operation. Finite-element analysis was used to investigate the cause of the high iron losses and to predict their magnitude as a function of speed. 
The effects of changes in the machine design were examined in order to reduce iron losses and hence improve the machine performance."} {"_id": "bbe13b72314fffcc2f35b0660195f2f6607c00a0", "title": "Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions", "text": "Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learn across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the-art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset."} {"_id": "ee61d5dbb2ff64995f1aeb81d94c0b55d562b4c9", "title": "A Strategic Analysis of Electronic Marketplaces", "text": ""} {"_id": "9b9bac085208271dfd33fd333dcb76dcde8332b8", "title": "On the Convergence of Adam and Beyond", "text": ""} {"_id": "db4bfae21d57a1583effad1b2952f78daece2454", "title": "New Fast and Accurate Jacobi SVD Algorithm. I", "text": "This paper is the result of concerted efforts to break the barrier between numerical accuracy and run time efficiency in computing the fundamental decomposition of numerical linear algebra \u2013 the singular value decomposition (SVD) of a general dense matrix. It is an unfortunate fact that the numerically most accurate one-sided Jacobi SVD algorithm is several times slower than generally less accurate bidiagonalization based methods such as the QR or the divide and conquer algorithm. Despite its sound numerical qualities, the Jacobi SVD is not included in the state of the art matrix computation libraries and it is even considered obsolete by some leading researchers. Our quest for a highly accurate and efficient SVD algorithm has led us to a new, superior variant of the Jacobi algorithm. The new algorithm has inherited all good high accuracy properties, and it outperforms not only the best implementations of the one-sided Jacobi algorithm but also the QR algorithm. Moreover, it seems that the potential of the new approach is yet to be fully exploited."} {"_id": "54daa2279dd7c6a44b406b8007086474db7f8359", "title": "Predictive Control in Power Electronics and Drives", "text": "Predictive control is a very wide class of controllers that have found rather recent application in the control of power converters. Research on this topic has increased in the last years due to the possibilities of today's microprocessors used for control. This paper presents the application of different predictive control methods to power electronics and drives. A simple classification of the most important types of predictive control is introduced, and each one of them is explained, including some application examples. 
Predictive control presents several advantages that make it suitable for the control of power converters and drives. The different control schemes and applications presented in this paper illustrate the effectiveness and flexibility of predictive control."} {"_id": "da1fa5958aac40af3991eb4bda2ebe4a221be897", "title": "Anomaly Detection using Autoencoders in High Performance Computing Systems", "text": "Anomaly detection in supercomputers is a very difficult problem due to the large scale of the systems and the high number of components. The current state of the art for automated anomaly detection employs Machine Learning methods or statistical regression models in a supervised fashion, meaning that the detection tool is trained to distinguish among a fixed set of behaviour classes (healthy and unhealthy states). We propose a novel approach for anomaly detection in High Performance Computing systems based on a Machine (Deep) Learning technique, namely a type of neural network called an autoencoder. The key idea is to train a set of autoencoders to learn the normal (healthy) behaviour of the supercomputer nodes and, after training, use them to identify abnormal conditions. This is different from previous approaches, which were based on learning the abnormal condition, for which there are much smaller datasets (since it is very hard to identify them to begin with). We test our approach on a real supercomputer equipped with a fine-grained, scalable monitoring infrastructure that can provide large amounts of data to characterize the system behaviour. The results are extremely promising: after the training phase to learn the normal system behaviour, our method is capable of detecting anomalies that have never been seen before with very good accuracy (values ranging between 88% and 96%)."} {"_id": "b1b215e30753f6eaa74409819bd229ec4851ae78", "title": "Real-Time Standard Scan Plane Detection and Localisation in Fetal Ultrasound Using Fully Convolutional Neural Networks", "text": "Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69% and 80%, which is superior to the current state-of-the-art. 
Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71% for cardiac views and 81% for non-cardiac views."} {"_id": "58f713f4c3c2804bebd4792fd7fb9b69336c471f", "title": "Data Security and Privacy Protection Issues in Cloud Computing", "text": "It is well-known that cloud computing has many potential advantages, and many enterprise applications and data are migrating to public or hybrid clouds. But regarding some business-critical applications, organizations, especially large enterprises, still wouldn't move them to the cloud. The market share of cloud computing is still far behind the one expected. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor for adoption of cloud computing services. This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle. Then this paper discusses some current solutions. Finally, this paper describes future research work on data security and privacy protection issues in cloud computing."} {"_id": "559e2ae733f4d231e2739dbc6d8d528af6feddf3", "title": "INTEGRATING MOOCs IN BLENDED COURSES", "text": "Recent studies appreciate that \"MOOCs bring an impetus of reform, research and innovation to the Academy\" [9]. Even though MOOCs are usually developed and delivered as independent online courses, experiments that wrap formal university courses around existing MOOCs are reported by teachers and researchers in different articles [3], [4], [8], [17]. This paper describes a new approach, in which the participation of students in different MOOCs was integrated into a blended course run on a social mobile LMS. The topics of MOOCs delivered on specific platforms and having particular characteristics were connected with the Fall 2013 undergraduate Web Programming course at University Politehnica Timisoara, Romania, facilitated by the first author, with the co-authors providing shadow/peer facilitation to tailor the course scenario. The main parts of this study deal with: a) The reasons to integrate MOOCs in the university course. b) How the course was designed, and how the students\u2019 activities on different MOOC platforms were assessed and integrated in the course scenario. c) The results of a survey that evaluates students\u2019 experiences related to MOOCs: a number of MOOC features were assessed [14]; answers to a few problems are also analysed: Did the participation in MOOCs support students to clarify and expand the course issues? What are students' suggestions for a more active participation in MOOCs? By comparing the learning scenarios of MOOCs with the Web Programming blended course, how can the course and its virtual space be improved? Do students consider that the MOOC phenomenon is important for professional and personal development?"} {"_id": "07c8dc37b1061784f3b55cf3ca5d2bc735e1693c", "title": "SQLrand: Preventing SQL Injection Attacks", "text": "We present a practical protection mechanism against SQL injection attacks. Such attacks target databases that are accessible through a web frontend, and take advantage of flaws in the input validation logic of Web components such as CGI scripts. 
We apply the concept of instruction-set randomization to SQL, creating instances of the language that are unpredictable to the attacker. Queries injected by the attacker will be caught and terminated by the database parser. We show how to use this technique with the MySQL database using an intermediary proxy that translates the random SQL to its standard language. Our mechanism imposes negligible performance overhead on query processing and can be easily retrofitted to existing systems."} {"_id": "0d1f5807a26286f8a486d7b535d5fc16bd37d86d", "title": "Finding application errors and security flaws using PQL: a program query language", "text": "A number of effective error detection tools have been built in recent years to check if a program conforms to certain design rules. An important class of design rules deals with sequences of events associated with a set of related objects. This paper presents a language called PQL (Program Query Language) that allows programmers to express such questions easily in an application-specific context. A query looks like a code excerpt corresponding to the shortest amount of code that would violate a design rule. Details of the target application's precise implementation are abstracted away. The programmer may also specify actions to perform when a match is found, such as recording relevant information or even correcting an erroneous execution on the fly. We have developed both static and dynamic techniques to find solutions to PQL queries. Our static analyzer finds all potential matches conservatively using a context-sensitive, flow-insensitive, inclusion-based pointer alias analysis. Static results are also useful in reducing the number of instrumentation points for dynamic analysis. Our dynamic analyzer instruments the source program to catch all violations precisely as the program runs and to optionally perform user-specified actions. We have implemented the techniques described in this paper and found 206 errors in 6 large real-world open-source Java applications containing a total of nearly 60,000 classes. These errors are important security flaws, resource leaks, and violations of consistency invariants. The combination of static and dynamic analysis proves effective at addressing a wide range of debugging and program comprehension queries. We have found that dynamic analysis is especially suitable for preventing errors such as security vulnerabilities at runtime."} {"_id": "02d0ce2e95891570f11bbcfee607587f3fac9a02", "title": "A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities", "text": "We describe a new technique for finding potential buffer overrun vulnerabilities in security-critical C code. The key to success is to use static analysis: we formulate detection of buffer overruns as an integer range analysis problem. One major advantage of static analysis is that security bugs can be eliminated before code is deployed. We have implemented our design and used our prototype to find new remotely-exploitable vulnerabilities in a large, widely deployed software package. An earlier hand audit missed"} {"_id": "188847872834a63fb435cf3a51eef72046464317", "title": "StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks", "text": "This paper presents a systematic solution to the persistent problem of buffer overflow attacks. Buffer overflow attacks gained notoriety in 1988 as part of the Morris Worm incident on the Internet. 
While it is fairly simple to fix individual buffer overflow vulnerabilities, buffer overflow attacks continue to this day. Hundreds of attacks have been discovered, and while most of the obvious vulnerabilities have now been patched, more sophisticated buffer overflow attacks continue to emerge. We describe StackGuard: a simple compiler technique that virtually eliminates buffer overflow vulnerabilities with only modest performance penalties. Privileged programs that are recompiled with the StackGuard compiler extension no longer yield control to the attacker, but rather enter a fail-safe state. These programs require no source code changes at all, and are binary-compatible with existing operating systems and libraries. We describe the compiler technique (a simple patch to gcc), as well as a set of variations on the technique that trade off between penetration resistance and performance. We present experimental results of both the penetration resistance and the performance impact of this technique. This research is partially supported by DARPA contracts F3060296-1-0331 and F30602-96-1-0302."} {"_id": "3a14ea1fc798843bec6722e6f7997d1ef9714922", "title": "Securing web application code by static analysis and runtime protection", "text": "Security remains a major roadblock to universal acceptance of the Web for many kinds of transactions, especially since the recent sharp increase in remotely exploitable vulnerabilities has been attributed to Web application bugs. Many verification tools are discovering previously unknown vulnerabilities in legacy C programs, raising hopes that the same success can be achieved with Web applications. In this paper, we describe a sound and holistic approach to ensuring Web application security. Viewing Web application vulnerabilities as a secure information flow problem, we created a lattice-based static analysis algorithm derived from type systems and typestate, and addressed its soundness. During the analysis, sections of code considered vulnerable are instrumented with runtime guards, thus securing Web applications in the absence of user intervention. With sufficient annotations, runtime overhead can be reduced to zero. We also created a tool named WebSSARI (Web application Security by Static Analysis and Runtime Inspection) to test our algorithm, and used it to verify 230 open-source Web application projects on SourceForge.net, which were selected to represent projects of different maturity, popularity, and scale. Of these, 69 contained vulnerabilities. After notifying the developers, 38 acknowledged our findings and stated their plans to provide patches. Our statistics also show that static analysis reduced potential runtime overhead by 98.4%."} {"_id": "8a3a53c768c652b76689f9dde500a07f36d740cb", "title": "An investigation into language model data augmentation for low-resourced STT and KWS", "text": "This paper reports on investigations using two techniques for language model text data augmentation for low-resourced automatic speech recognition and keyword search. Low-resourced languages are characterized by limited training materials, which typically results in high out-of-vocabulary (OOV) rates and poor language model estimates. One technique makes use of recurrent neural networks (RNNs) using word or subword units. Word-based RNNs keep the same system vocabulary, so they cannot reduce the OOV rate, whereas subword units can reduce the OOV rate but generate many false combinations. 
A complementary technique is based on automatic machine translation, which requires parallel texts and is able to add words to the vocabulary. These methods were assessed on 10 languages in the context of the Babel program and NIST OpenKWS evaluation. Although improvements vary across languages with both methods, small gains were generally observed in terms of word error rate reduction and improved keyword search performance."} {"_id": "d908528685ce3c64b570c21758ce2d1aae30b4db", "title": "Supervised Dynamic and Adaptive Discretization for Rule Mining", "text": "Association rule mining is a well-researched topic in data mining. However, a common limitation with existing algorithms is that they mainly deal with categorical data. In this work we propose a methodology that allows adaptive discretization and quantitative rule discovery in large mixed databases. More specifically, we propose a top-down, recursive approach to find ranges of values for continuous attributes that result in high confidence rules. Our approach allows any given continuous attribute to be discretized in multiple ways. Compared to a global discretization scheme, our approach makes it possible to capture different intervariable interactions. We applied our algorithm to various synthetic and real datasets, including Intel manufacturing data that motivated our research. The experimental results and analysis indicate that our algorithm is capable of finding more meaningful rules for multivariate data, in addition to being more efficient than the state-of-the-art techniques."} {"_id": "33f57f2f632d89950909b31c75fae7317e6ea0cb", "title": "Modeling social influence through network autocorrelation: constructing the weight matrix", "text": "Many physical and social phenomena are embedded within networks of interdependencies, the so-called \u2018context\u2019 of these phenomena. In network analysis, this type of process is typically modeled as a network autocorrelation model. Parameter estimates and inferences based on autocorrelation models, hinge upon the chosen specification of weight matrix W, the elements of which represent the influence pattern present in the network. In this paper I discuss how social influence processes can be incorporated in the specification of W. Theories of social influence center around \u2018communication\u2019 and \u2018comparison\u2019; it is discussed how these can be operationalized in a network analysis context. Starting from that, a series of operationalizations of W is discussed. Finally, statistical tests are presented that allow an analyst to test various specifications against one another or pick the best fitting model from a set of models. \u00a9 2002 Elsevier Science B.V. All rights reserved."} {"_id": "eb9ab73e669195e1d3e73addc0028ffc08aa8da7", "title": "Application of generalized Bagley-polygon four-port power dividers to designing microwave dual-band bandpass planar filters", "text": "A new type of microwave dual-band bandpass planar filter based on signal-interference techniques is reported. The described filter approach consists of transversal filtering sections made up of generalized Bagley-polygon four-port power dividers. This transversal section, by exploiting feedforward signal-interaction concepts, enables dual-band bandpass filtering transfer functions with several transmission zeros to be obtained. A set of closed formulas and guidelines for the analytical synthesis of the dual-passband transversal filtering section are derived. 
Moreover, its practical usefulness is proven with the development and testing of a 2.75/3.25-GHz dual-band microstrip prototype."} {"_id": "7f270d66e0e82040b82dfcef6ad90a1e78e13f04", "title": "Measuring user acceptance of emerging information technologies: an assessment of possible method biases", "text": "The measurement scales for the perceived usefulness and perceived ease of use constructs introduced by Davis [12] have become widely used for forecasting user acceptance of emerging information technologies. An experiment was conducted to examine whether grouping of items caused artifactual inflation of reliability and validity measures. We found support for our hypothesis that the reliability and validity stemmed not from item grouping but from the constructs of perceived usefulness and perceived ease of use being clearly defined, and the items used to measure each of these constructs clearly capturing the essence of the construct."} {"_id": "8cd7d1461a6a0dad8dc01868e1948cce6ef89273", "title": "Effect of gas type and flow rate on Cu free air ball formation in thermosonic wire bonding", "text": "The development of novel Cu wires for thermosonic wire bonding is time consuming and the effects of shielding gas on the electrical flame off (EFO) process are not fully understood. An online method is used in this study for characterizing Cu free air balls (FABs) formed with different shielding gas types and flow rates. The ball heights before (HFAB) and after deformation (Hdef) are responses of the online method and measured as functions of gas flow rate. Sudden changes in the slopes of these functions, a non-parallelism of the two functions, and a large standard deviation of the HFAB measurements all identify FAB defects. Using scanning electron microscope (SEM) images in parallel with the online measurements, golf-club-shaped and pointed-shaped FABs are found and the conditions at which they occur are identified. In general, FAB defects are thought to be caused by changes in surface tension of the molten metal during EFO due to inhomogeneous cooling or oxidation. It is found that the convective cooling effect of the shielding gas increases with flow rate up to 0.65 l/min, where the bulk temperature of a thermocouple at the EFO site decreases by 19 \u00b0C. Flow rates above 0.7 l/min yield an undesirable EFO process due to an increase in oxidation which can be explained by a change in flow from laminar to turbulent. The addition of H2 to the shielding gas reduces the oxidation of the FAB as well as providing additional thermal energy during EFO. Different Cu wire materials yield different results where some perform better than others."} {"_id": "f3a67725d61ae77afaa2dd5e3763b78800c78321", "title": "Enclosed: a component-centric interface for designing prototype enclosures", "text": "This paper explores the problem of designing enclosures (or physical cases) that are needed for prototyping electronic devices. We present a novel interface that uses electronic components as handles for designing the 3D shape of the enclosure. We use the .NET Gadgeteer platform as a case study of this problem, and implemented a proof-of-concept system for designing enclosures for Gadgeteer components.
We show examples of enclosures designed and fabricated with our system."} {"_id": "4a58e3066f12bb86d7aef2776e9d8a2a4e4daf3e", "title": "Evaluation Techniques for Storage Hierarchies", "text": "This paper introduces an efficient technique called \u201cstack processing\u201d that can be used in the cost-performance evaluation of a large class of storage hierarchies. The technique depends on a classification of page replacement algorithms as \u201cstack algorithms\u201d for which various properties are derived. These properties may be of use in the general areas of program modeling and system analysis, as well as in the evaluation of storage hierarchies. For a better understanding of storage hierarchies, we briefly review some basic concepts of their design."} {"_id": "671697cf84dfbe53a1cb0bed29b9f649c653bbc5", "title": "Multispectral Deep Neural Networks for Pedestrian Detection", "text": "Multispectral pedestrian detection is essential for around-the-clock applications, e.g., surveillance and autonomous driving. We deeply analyze Faster R-CNN for the multispectral pedestrian detection task and then model it into a convolutional network (ConvNet) fusion problem. Further, we discover that ConvNet-based pedestrian detectors trained by color or thermal images separately provide complementary information in discriminating human instances. Thus there is a large potential to improve pedestrian detection by using color and thermal images in DNNs simultaneously. We carefully design four ConvNet fusion architectures that integrate two-branch ConvNets at different DNN stages, all of which yield better performance compared with the baseline detector. Our experimental results on the KAIST pedestrian benchmark show that the Halfway Fusion model that performs fusion on the middle-level convolutional features outperforms the baseline method by 11% and yields a miss rate 3.5% lower than the other proposed architectures."} {"_id": "35e23d6461ac90e0fd9cc7e199096ee380908b61", "title": "Traffic Analysis of Encrypted Messaging Services: Apple iMessage and Beyond", "text": "Instant messaging services are quickly becoming the most dominant form of communication among consumers around the world. Apple iMessage, for example, handles over 2 billion messages each day, while WhatsApp claims 16 billion messages from 400 million international users. To protect user privacy, many of these services typically implement end-to-end and transport layer encryption, which are meant to make eavesdropping infeasible even for the service providers themselves. In this paper, however, we show that it is possible for an eavesdropper to learn information about user actions, the language of messages, and even the length of those messages with greater than 96% accuracy despite the use of state-of-the-art encryption technologies simply by observing the sizes of encrypted packets. While our evaluation focuses on Apple iMessage, the attacks are completely generic and we show how they can be applied to many popular messaging services, including WhatsApp, Viber, and Telegram."} {"_id": "f1d71d041616f9ea2952258da3db522d53163f60", "title": "DroidCat: Effective Android Malware Detection and Categorization via App-Level Profiling", "text": "Most existing Android malware detection and categorization techniques are static approaches, which suffer from evasion attacks, such as obfuscation. By analyzing program behaviors, dynamic approaches are potentially more resilient against these attacks.
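The traffic-analysis abstract above describes inferring message properties from ciphertext sizes alone. The following Python sketch conveys the flavor of such an attack with a nearest-centroid classifier over packet sizes; the sizes and labels are fabricated placeholders, and real attacks use far richer size and timing features.

```python
from collections import defaultdict
import statistics

train = [  # (observed ciphertext payload size in bytes, language label)
    (112, "en"), (118, "en"), (109, "en"),
    (141, "zh"), (150, "zh"), (138, "zh"),
]

by_lang = defaultdict(list)
for size, lang in train:
    by_lang[lang].append(size)
centroids = {lang: statistics.mean(sizes) for lang, sizes in by_lang.items()}

def classify(size):
    # Nearest-centroid guess from a single encrypted packet size.
    return min(centroids, key=lambda lang: abs(centroids[lang] - size))

print(classify(115))  # -> 'en'
print(classify(146))  # -> 'zh'
```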
Yet existing dynamic approaches mostly rely on characterizing system calls, which are subject to system-call obfuscation. This paper presents DroidCat, a novel dynamic app classification technique, to complement existing approaches. By using a diverse set of dynamic features based on method calls and inter-component communication (ICC) Intents without involving permission, app resources, or system calls while fully handling reflection, DroidCat achieves greater robustness than static approaches as well as dynamic approaches relying on system calls. The features were distilled from a behavioral characterization study of benign versus malicious apps. Through three complementary evaluation studies with 34 343 apps from various sources and spanning the past nine years, we demonstrated the stability of DroidCat in achieving high classification performance and superior accuracy compared with the two state-of-the-art peer techniques that represent both static and dynamic approaches. Overall, DroidCat achieved 97% F1-measure accuracy consistently for classifying apps evolving over the nine years, detecting or categorizing malware, 16%\u201327% higher than either of the two baselines compared. Furthermore, our experiments with obfuscated benchmarks confirmed higher robustness of DroidCat over these baseline techniques. We also investigated the effects of various design decisions on DroidCat\u2019s effectiveness and the most important features for our dynamic classification. We found that features capturing app execution structure such as the distribution of method calls over user code and libraries are much more important than typical security features such as sensitive flows."} {"_id": "98e01bbd62e39e2c4bb81d690cb24523ae9c1e9d", "title": "How We Transmit Memories to Other Brains: Constructing Shared Neural Representations Via Communication", "text": "Humans are able to mentally construct an episode when listening to another person's recollection, even though they themselves did not experience the events. However, it is unknown how strongly the neural patterns elicited by mental construction resemble those found in the brain of the individual who experienced the original events. Using fMRI and a verbal communication task, we traced how neural patterns associated with viewing specific scenes in a movie are encoded, recalled, and then transferred to a group of na\u00efve listeners. By comparing neural patterns across the 3 conditions, we report, for the first time, that event-specific neural patterns observed in the default mode network are shared across the encoding, recall, and construction of the same real-life episode. This study uncovers the intimate correspondences between memory encoding and event construction, and highlights the essential role our common language plays in the process of transmitting one's memories to other brains."} {"_id": "7a4a716d8675311985b281f5aa339a27b9055c0f", "title": "Development of a new hybrid ANN for solving a geotechnical problem related to tunnel boring machine performance", "text": "Prediction of tunnel boring machine (TBM) performance parameters can help reduce the risks associated with tunneling projects. This study aims to introduce a new hybrid model, namely the firefly algorithm (FA) combined with an artificial neural network (ANN), for solving problems in the field of geotechnical engineering, particularly for estimation of penetration rate (PR) of TBM.
For this purpose, the results obtained from the field observations and laboratory tests were considered as model inputs to estimate PR of TBMs operated in a water transfer tunnel in Malaysia. Five rock mass and material properties (rock strength, tensile strength of rock, rock quality designation, rock mass rating and weathering zone) and two machine factors (thrust force and revolutions per minute) were used in the new model for predicting PR. The FA was used to optimize the weights and biases of the ANN to obtain a higher level of accuracy. A series of hybrid FA-ANN models using the most influential FA parameters were constructed to estimate PR. For comparison, a simple ANN model was built to predict PR of TBM. This ANN model was refined in several ways, and the best ANN model was chosen for comparison purposes. After implementing the best models for the two methods, the data were divided into five separate categories to minimize the effect of randomness, and the best models were then applied to these new categories. The results demonstrated that the new hybrid intelligent model provides higher predictive performance. Based on coefficients of determination of 0.948 and 0.936 (training and testing) for the FA-ANN model and 0.885 and 0.889 for the ANN model, it was found that the new hybrid model can be introduced as a superior model for solving geotechnical engineering problems."} {"_id": "b9e7cd6f5a330a3466198fa5d20aa134b876f221", "title": "A whole-body control framework for humanoids operating in human environments", "text": "Tomorrow's humanoids will operate in human environments, where efficient manipulation and locomotion skills, and safe contact interactions are critical design factors. We report here our recent efforts into these issues, materialized into a whole-body control framework. This framework integrates task-oriented dynamic control and control prioritization, allowing multiple task primitives to be controlled while complying with physical and movement-related constraints. Prioritization establishes a hierarchy between control spaces, assigning top priority to constraint-handling tasks, while projecting operational tasks in the null space of the constraints, and controlling the posture within the residual redundancy. This hierarchy is directly integrated at the kinematic level, allowing the program to monitor behavior feasibility at runtime. In addition, prioritization allows us to characterize the dynamic behavior of the individual control primitives subject to the constraints, and to synthesize operational space controllers at multiple levels. To complete this framework, we have developed free-floating models of the humanoid and incorporate the associated dynamics and the effects of the resulting support contacts into the control hierarchy. As part of a long term collaboration with Honda, we are currently implementing this framework in the humanoid robot Asimo"} {"_id": "edeab73e5868ab3a5bb63971ee7329aa5c9da90b", "title": "Path diversification for future internet end-to-end resilience and survivability", "text": "Path Diversification is a new mechanism that can be used to select multiple paths between a given ingress and egress node pair using a quantified diversity measure to achieve maximum flow reliability. The path diversification mechanism is targeted at the end-to-end layer, but can be applied at any level for which a path discovery service is available.
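As a concrete reading of the quantified diversity measure mentioned in the path-diversification abstract, the Python sketch below scores a candidate path by the fraction of its links not shared with the primary path and keeps the most diverse alternatives. This is one plausible formalization under stated assumptions, not necessarily the paper's exact metric.

```python
def links(path):
    # Undirected link set of a path given as an ordered node list.
    return {frozenset(e) for e in zip(path, path[1:])}

def diversity(primary, candidate):
    # 1.0 = fully link-disjoint alternative, 0.0 = identical path.
    shared = links(primary) & links(candidate)
    return 1.0 - len(shared) / len(links(candidate))

def pick_diverse(primary, candidates, k=2):
    # Keep the k candidates most diverse from the primary path.
    return sorted(candidates, key=lambda p: diversity(primary, p), reverse=True)[:k]

primary = ["s", "a", "b", "t"]
candidates = [["s", "a", "c", "t"], ["s", "x", "y", "t"], ["s", "a", "b", "t"]]
for p in pick_diverse(primary, candidates):
    print(p, round(diversity(primary, p), 2))   # disjoint path scores 1.0
```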
Path diversification also takes into account service requirements for low-latency or maximal reliability in selecting appropriate paths. Using this mechanism will allow future internetworking architectures to exploit naturally rich physical topologies to a far greater extent than is possible with shortest-path routing or equal-cost load balancing. We describe the path diversity metric and its application at various aggregation levels, and apply the path diversification process to 13 real-world network graphs as well as 4 synthetic topologies to assess the gain in flow reliability. Based on the analysis of flow reliability across a range of networks, we then extend our path diversity metric to create a composite compensated total graph diversity metric that is representative of a particular topology\u2019s survivability with respect to distributed simultaneous link and node failures. We tune the accuracy of this metric after simulating the performance of each topology under a range of failure severities, and present the results. The topologies used are from national-scale backbone networks with a variety of characteristics, which we characterize using standard graph-theoretic metrics. The end result is a compensated total graph diversity metric that accurately predicts the survivability of a given network topology."} {"_id": "20a531fb5b8b7d978f8f24c18c51ff58c949b60d", "title": "Hyperbolic Graph Generator", "text": "Networks representing many complex systems in nature and society share some common structural properties like heterogeneous degree distributions and strong clustering. Recent research on network geometry has shown that those real networks can be adequately modeled as random geometric graphs in hyperbolic spaces. In this paper, we present a computer program to generate such graphs. Besides real-world-like networks, the program can generate random graphs from other well-known graph ensembles, such as the soft configuration model, random geometric graphs on a circle, or Erd\u0151s-R\u00e9nyi random graphs. The simulations show a good match between the expected values of different network structural properties and the corresponding empirical values measured in generated graphs, confirming the accurate behavior of the program."} {"_id": "8fb7639148d92779962be1b3b27761e9fe0a15ee", "title": "Sentiment analysis on Twitter posts: An analysis of positive or negative opinion on GoJek", "text": "Online transportation, such as GoJek, is preferred by many users especially in areas where public transport is difficult to access or when there is a traffic jam. Twitter is a popular social networking site in Indonesia that can generate information from users' tweets. In this study, we propose a system that detects public sentiment based on Twitter posts about online transportation services, especially GoJek.
The system collects tweets, analyzes their sentiment using a support vector machine (SVM), and groups them into positive and negative classes."} {"_id": "5a37e085fd1ce6d8c49609ad5688292b5939d059", "title": "Click Through Rate Prediction for Contextual Advertisment Using Linear Regression", "text": "This research presents an innovative and unique way of solving the advertisement prediction problem, which has been treated as a learning problem over the past several years. Online advertising is a multi-billion-dollar industry and is growing at a rapid pace every year. The goal of this research is to enhance the click-through rate of contextual advertisements using Linear Regression. In order to address this problem, a new technique is proposed in this paper to predict the CTR, which will increase the overall revenue of the system by serving advertisements more suited to viewers with the help of feature extraction and displaying the advertisements based on the publishers' context. The important steps include data collection, feature extraction, CTR prediction and advertisement serving. The statistical results show an efficient outcome, with the LR technique using optimized feature selection fitting the data close to perfection. Keywords: Click Through Rate (CTR), Contextual Advertisements, Machine Learning, Web advertisements, Regression Problem."} {"_id": "62566f0b005f9bf10b3ac6487dcacd21f97265fe", "title": "ICrafter: A Service Framework for Ubiquitous Computing Environments", "text": "In this paper, we propose ICrafter, a framework for services and their user interfaces in a class of ubiquitous computing environments. The chief objective of ICrafter is to let users flexibly interact with the services in their environment using a variety of modalities and input devices. We extend existing service frameworks in three ways. First, to offload services and user input devices, ICrafter provides infrastructure support for UI selection, generation, and adaptation. Second, ICrafter allows UIs to be associated with service patterns for on-the-fly aggregation of services. Finally, ICrafter facilitates the design of service UIs that are portable but still reflect the context of the local environment. In addition, we also focus on the system properties such as incremental deployability and robustness that are critical for ubiquitous computing environments. We describe the goals and architecture of ICrafter, a prototype implementation that validates its design, and the key lessons learnt from our"} {"_id": "61d530578b8b91157cda18c5097ea97ac2f6910e", "title": "A signal detection method for temporal variation of adverse effect with vaccine adverse event reporting system data", "text": "BACKGROUND\nTo identify safety signals by manual review of individual reports in large surveillance databases is time-consuming; such an approach is very unlikely to reveal complex relationships between medications and adverse events. Since the late 1990s, efforts have been made to develop data mining tools to systematically and automatically search for safety signals in surveillance databases. Influenza vaccines present special challenges to safety surveillance because the vaccine changes every year in response to the influenza strains predicted to be prevalent that year.
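For the SVM step just described, a minimal scikit-learn sketch follows: TF-IDF features feeding a linear SVM. The example tweets and labels are invented placeholders, not the study's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "gojek driver arrived fast, great service",
    "love the app, cheap and reliable",
    "order cancelled again, very disappointed",
    "app keeps crashing, worst experience",
]
labels = ["positive", "positive", "negative", "negative"]

# TF-IDF features into a linear SVM, trained end to end as one pipeline.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(tweets, labels)
print(model.predict(["driver was friendly and quick"]))   # -> ['positive']
```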
Therefore, it may be expected that reporting rates of adverse events following flu vaccines (number of reports for a specific vaccine-event combination/number of reports for all vaccine-event combinations) may vary substantially across reporting years. Current surveillance methods seldom consider these variations in signal detection, and reports from different years are typically collapsed together to conduct safety analyses. However, merging reports from different years ignores the potential heterogeneity of reporting rates across years and may miss important safety signals.\n\n\nMETHOD\nReports of adverse events between 1990 and 2013 were extracted from the Vaccine Adverse Event Reporting System (VAERS) database and formatted into a three-dimensional data array with types of vaccine, groups of adverse events and reporting time as the three dimensions. We propose a random effects model to test the heterogeneity of reporting rates for a given vaccine-event combination across reporting years. The proposed method provides a rigorous statistical procedure to detect differences of reporting rates among years. We also introduce a new visualization tool to summarize the result of the proposed method when applied to multiple vaccine-adverse event combinations.\n\n\nRESULT\nWe applied the proposed method to detect safety signals of FLU3, an influenza vaccine containing three flu strains, in the VAERS database. We showed that it had high statistical power to detect the variation in reporting rates across years. The identified vaccine-event combinations with significantly different reporting rates over years suggested potential safety issues due to changes in vaccines, which require further investigation.\n\n\nCONCLUSION\nWe developed a statistical model to detect safety signals arising from heterogeneity of reporting rates of given vaccine-event combinations across reporting years. This method detects variation in reporting rates over years with high power. The temporal trend of reporting rate across years may reveal the impact of vaccine update on occurrence of adverse events and provide evidence for further investigations."} {"_id": "0cbda5365adc971b0d0ed51c0cb4bfcf0013959d", "title": "Measuring effects of music, noise, and healing energy using a seed germination bioassay.", "text": "OBJECTIVE\nTo measure biologic effects of music, noise, and healing energy without human preferences or placebo effects using seed germination as an objective biomarker.\n\n\nMETHODS\nA series of five experiments were performed utilizing okra and zucchini seeds germinated in acoustically shielded, thermally insulated, dark, humid growth chambers. Conditions compared were an untreated control, musical sound, pink noise, and healing energy. Healing energy was administered for 15-20 minutes every 12 hours with the intention that the treated seeds would germinate faster than the untreated seeds. The objective marker was the number of seeds sprouted out of groups of 25 seeds counted at 12-hour intervals over a 72-hour growing period. Temperature and relative humidity were monitored every 15 minutes inside the seed germination containers. A total of 14 trials were run testing a total of 4600 seeds.\n\n\nRESULTS\nMusical sound had a highly statistically significant effect on the number of seeds sprouted compared to the untreated control over all five experiments for the main condition (p < 0.002) and over time (p < 0.000002).
This effect was independent of temperature, seed type, position in room, specific petri dish, and person doing the scoring. Musical sound had a significant effect compared to noise and an untreated control as a function of time (p < 0.03) while there was no significant difference between seeds exposed to noise and an untreated control. Healing energy also had a significant effect compared to an untreated control (main condition, p < 0.0006) and over time (p < 0.0001) with a magnitude of effect comparable to that of musical sound.\n\n\nCONCLUSION\nThis study suggests that sound vibrations (music and noise) as well as biofields (bioelectromagnetic and healing intention) both directly affect living biologic systems, and that a seed germination bioassay has the sensitivity to enable detection of effects caused by various applied energetic conditions."} {"_id": "c55511ba441f4cbbe8ed68d93bedb79c915023f3", "title": "Indoor Localization Algorithm based on Fingerprint Using a Single Fifth Generation Wi-Fi Access Point", "text": "This paper proposes an indoor positioning system (IPS) based on WLAN using a single fifth-generation (5G) Wi-Fi access point. The proposed method uses fingerprint and classification models based on KNN (K-nearest neighbor) and Bayes rule. The fingerprint is formed by beam RSS (Received Signal Strength) samples, collected at selected 2D locations of the indoor environment. Numerical simulations showed that, using the best beam samples, it is possible to locate a stationary user's mobile device with an average error of less than 2.5 m."} {"_id": "6e5e76268f292929ccba794ea4dcbb4c68899df7", "title": "Decision Trees for Mining Data Streams Based on the Gaussian Approximation", "text": "Since the Hoeffding tree algorithm was proposed in the literature, decision trees have become one of the most popular tools for mining data streams. The key point of constructing the decision tree is to determine the best attribute to split the considered node. Several methods to solve this problem have been presented so far. However, they are either based on flawed mathematical justification (e.g., in the Hoeffding tree algorithm) or time-consuming (e.g., in the McDiarmid tree algorithm). In this paper, we propose a new method which significantly outperforms the McDiarmid tree algorithm and has a solid mathematical basis. Our method ensures, with a high probability set by the user, that the best attribute chosen in the considered node using a finite data sample is the same as it would be in the case of the whole data stream."} {"_id": "b22b4817757778bdca5b792277128a7db8206d08", "title": "SCAN: Learning Hierarchical Compositional Visual Concepts", "text": "The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner.
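The KNN fingerprint step in the indoor-localization abstract above can be sketched in a few lines of Python: store beam-RSS vectors per grid cell, then locate a new reading by majority vote over its nearest stored fingerprints. RSS values and cell labels here are invented placeholders.

```python
from collections import Counter
import math

fingerprint = [  # (beam RSS vector in dBm, 2D grid cell label)
    ((-40, -62, -71), "A1"), ((-42, -60, -70), "A1"),
    ((-55, -48, -66), "B2"), ((-57, -50, -64), "B2"),
    ((-70, -69, -45), "C3"), ((-68, -66, -47), "C3"),
]

def knn_locate(rss, k=3):
    # Majority vote over the k nearest stored fingerprints (Euclidean).
    dists = sorted((math.dist(rss, ref), cell) for ref, cell in fingerprint)
    votes = Counter(cell for _, cell in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_locate((-41, -61, -72)))  # -> 'A1'
```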
Unlike state-of-the-art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts."} {"_id": "2c9bd66e9782af58a61196884a512ebbeef16859", "title": "Data fusion schemes for cooperative spectrum sensing in cognitive radio networks", "text": "Cooperative spectrum sensing has proven its efficiency in detecting spectrum holes in a cognitive radio network (CRN) by combining sensing information of multiple cognitive radio users. In this paper, we study different fusion schemes that can be implemented in the fusion center. Simulation comparisons between these schemes based on hard, quantized and soft fusion rules are conducted. It is shown through computer simulation that the soft combination scheme outperforms the hard one at the cost of more complexity; the quantized combination scheme provides a good tradeoff between detection performance and complexity. In the paper, we also analyze a quantized combination scheme based on a three-bit quantization and compare its performance with some hard and soft combination schemes."} {"_id": "2ccbdf9e9546633ee58009e0c0f3eaee75e6f576", "title": "The Meteor metric for automatic evaluation of machine translation", "text": "The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores which correlate well with human judgments of translation quality. Several key design decisions were incorporated into Meteor in support of this goal. In contrast with IBM\u2019s Bleu, which uses only precision-based features, Meteor uses and emphasizes recall in addition to precision, a property that has been confirmed by several metrics as being critical for high correlation with human judgments. Meteor also addresses the problem of reference translation variability by utilizing flexible word matching, allowing for morphological variants and synonyms to be taken into account as legitimate correspondences. Furthermore, the feature ingredients within Meteor are parameterized, allowing for the tuning of the metric\u2019s free parameters in search of values that result in optimal correlation with human judgments. Optimal parameters can be separately tuned for different types of human judgments and for different languages. We discuss the initial design of the Meteor metric, subsequent improvements, and performance in several independent evaluations in recent years."} {"_id": "34ddd8865569c2c32dec9bf7ffc817ff42faaa01", "title": "A Stochastic Approximation Method", "text": "Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = \u03b8 of the equation M(x) = \u03b1, where \u03b1 is a given constant. We give a method for making successive experiments at levels x1, x2,...
in such a way that xn will tend to \u03b8 in probability."} {"_id": "3d07b5087e53c6f7c228b3c7e769494527be228e", "title": "A Study of Translation Edit Rate with Targeted Human Annotation", "text": "We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We show that the single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU. We also define a human-targeted TER (or HTER) and show that it yields higher correlations with human judgments than BLEU\u2014even when BLEU is given human-targeted references. Our results indicate that HTER correlates with human judgments better than HMETEOR and that the four-reference variants of TER and HTER correlate with human judgments as well as\u2014or better than\u2014a second human judgment does."} {"_id": "7c18e6cad2f9e23036ab28689e27d9f1176d65a3", "title": "A process study of computer-aided translation", "text": "We investigate novel types of assistance for human translators, based on statistical machine translation methods. We developed the computer-aided tool Caitra that makes suggestions for sentence completion, shows word and phrase translation options, and allows postediting of machine translation output. We carried out a study of the translation process that involved non-professional translators who were native in either French or English and recorded their interaction with the tool. Users translated 192 sentences from French news stories into English. Most translators were faster and better when assisted by our tool. A detailed examination of the logs also provides insight into the human translation process, such as time spent on different activities and length of pauses."} {"_id": "85b03c6b6c921c674b4c0e463de0d3f8c27ef491", "title": "Miniaturized UWB Log-Periodic Square Fractal Antenna", "text": "In this letter, the log-periodic square fractal geometry is presented for the design of a miniaturized patch antenna for ultra-wideband (UWB) services (3.1-10.6 GHz). A miniaturization factor of 23% is achieved with a constant and stable gain in the desired band. The radiation pattern is broadside, which finds suitable applications in UWB radars and medical imaging. Furthermore, the time-domain performance of the proposed antenna is investigated. A prototype model of the proposed antenna is fabricated and measured as a proof of concept."} {"_id": "6373298f14c7472dbdecc3d77439853e39ec216f", "title": "Analysis, design, and performance evaluation of asymmetrical half-bridge flyback converter for universal-line-voltage-range applications", "text": "The asymmetrical half-bridge (AHB) flyback converter is an attractive topology for operation at higher switching frequencies because it can operate with zero-voltage switching of the primary-side switches and zero-current switching of the secondary-side rectifier. In this paper, a detailed analysis and design procedure of the AHB flyback converter for universal-line-voltage-range applications is presented.
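Returning to "A Stochastic Approximation Method" above: a minimal Python sketch of the iteration it describes, x_{n+1} = x_n - a_n (y_n - alpha) with step sizes a_n = c/n, where y_n is a noisy observation of M(x_n). The regression function and noise model below are stand-ins for illustration.

```python
import random

def noisy_M(x):
    # Stand-in monotone response with observation noise; M(2) = 5.
    return 2.0 * x + 1.0 + random.gauss(0.0, 0.1)

def robbins_monro(alpha, x0=0.0, c=0.5, n_steps=5000):
    x = x0
    for n in range(1, n_steps + 1):
        # Step sizes c/n sum to infinity while their squares stay finite,
        # the classic condition for convergence in probability.
        x -= (c / n) * (noisy_M(x) - alpha)
    return x

random.seed(0)
print(robbins_monro(alpha=5.0))   # converges near x = 2, since M(2) = 5
```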
The performance of the AHB flyback converter is evaluated by loss analysis based on the simulation waveforms obtained in Simplis and experimentally verified on a laboratory prototype of a 65-W (19.5-V, 3.33-A) universal-line-voltage-range adapter."} {"_id": "5f8b4c3cd03421e04f20d3cb98aee745c4d58216", "title": "Resistance to Change: The Moderating Effects of Leader-Member Exchange and Role Breadth Self-Efficacy", "text": "The prevalence of resistance during change initiatives is well recognized in the change management literature. The implementation of the lean production system is no exception. It often requires substantial changes to processes and the way people work. As such, understanding how to manage this resistance is important. One view argues that the extent of resistance during change depends on the characteristics of the change process. This view posits that resistance can be reduced if organizations manage information flow, create room for participation and develop trust in management. In addition, this paper proposes that Leader-Member Exchange (LMX) and Role Breadth Self-Efficacy (RBSE) moderate the effect on employees\u2019 resistance to change."} {"_id": "97225355a9cbc284939fb0193fddfd41ac45ee71", "title": "Real-time wireless vibration monitoring system using LabVIEW", "text": "Vibration analysis provides relevant information about abnormal working conditions of machine parts. Vibration measurement is a prerequisite for vibration analysis, which is used for condition monitoring of machinery. Also, wireless vibration monitoring has many advantages over wired monitoring. This paper presents the implementation of a reliable and low-cost wireless vibration monitoring system. Vibration measurement has been done using a 3-axis digital-output MEMS accelerometer sensor. This sensor can sense vibrations in the range 0.0156 g to 8 g, where 1 g is 9.81 m/s2. The accelerometer is interfaced with an Arduino-derived microcontroller board based on Atmel's ATmega328P microcontroller. The implemented system uses the ZigBee communication protocol (standard IEEE 802.15.4) for wireless communication between the sensor unit and the vibration monitoring unit, implemented with XBee RF modules. National Instruments' LabVIEW software has been used to develop the graphical user interface, data logging and alarm indication on the PC. Experimental results show continuous real-time monitoring of machine vibrations on charts. These results, along with the data-log file, have been used for vibration analysis. This analysis is used to ensure safe working conditions of machinery and to support predictive maintenance."} {"_id": "5badb3cf19a724327f62991e8616772b0222858b", "title": "Big Data Cognitive Storage: An Optimum Way of handling the Educational Data in Indian Scenario", "text": "The concept of big data has been incorporated in a majority of areas. The educational sector has a plethora of data, especially in online education, which plays a vital role in modern education. Moreover, digital learning, which comprises data and analytics, contributes significantly to enhancing teaching and learning. Handling such data, however, can be a costly affair. IBM has introduced the technology "Cognitive Storage", which ensures that the most relevant information is always on hand. This technology governs incoming data, stores it on appropriate media, applies levels of data protection, and sets policies for the lifecycle and retention of different classes of data.
This technology can be very beneficial for online learning in the Indian scenario, where storing more information affordably can support the growth of students\u2019 knowledge."} {"_id": "b521b6c9983b3e74355be32ac3082beb9f2a291d", "title": "Development of an Assessment Model for Industry 4.0: Industry 4.0-MM", "text": "The application of new technologies in the manufacturing environment is ushering a new era referred to as the 4th industrial revolution, and this digital transformation appeals to companies due to various competitive advantages it provides. Accordingly, there is a fundamental need for assisting companies in the transition to Industry 4.0 technologies/practices, and guiding them in improving their capabilities in a standardized, objective, and repeatable way. Maturity Models (MMs) aim to assist organizations by providing comprehensive guidance. Therefore, the literature is reviewed systematically with the aim of identifying existing studies related to MMs proposed in the context of Industry 4.0. Seven identified MMs are analyzed by comparing their characteristics of scope, purpose, completeness, clearness, and objectivity. It is concluded that none of them satisfies all expected criteria. In order to satisfy the need for a structured Industry 4.0 assessment/maturity model, a SPICE-based Industry 4.0-MM is proposed in this study. Industry 4.0-MM has a holistic approach consisting of the assessment of process transformation, application management, data governance, asset management, and organizational alignment areas. The aim is to create a common base for performing an assessment of the establishment of Industry 4.0 technologies, and to guide companies towards achieving a higher maturity stage in order to maximize the economic benefits of Industry 4.0. Hence, Industry 4.0-MM provides standardization in continuous benchmarking and improvement of businesses in the manufacturing industry."} {"_id": "f5fb1d2992900d68ddad8ebb7fa3e1b941b985dd", "title": "Exosomes in tumor microenvironment: novel transporters and biomarkers", "text": "Tumor microenvironment (TME) plays an integral part in the biology of cancer, participating in tumor initiation, progression, and response to therapy. Exosomes are an important part of the TME. Exosomes are small vesicles formed in vesicular bodies with a diameter of 30\u2013100\u00a0nm and a classic \u201ccup\u201d or \u201cdish\u201d morphology. They can contain microRNAs, mRNAs, DNA fragments and proteins, which are shuttled from a donor cell to recipient cells. Exosomes secreted from tumor cells are called tumor-derived (TD) exosomes. There is emerging evidence that TD exosomes can construct a fertile environment to support tumor proliferation, angiogenesis, invasion and premetastatic niche preparation. TD exosomes also may facilitate tumor growth and metastasis by inhibiting immune surveillance and by increasing chemoresistance via removal of chemotherapeutic drugs. Therefore, TD-exosomes might be potential targets for therapeutic interventions via their modification or removal. For example, exosomes can serve as specific delivery vehicles to tumors of drugs, small molecules, or agents of prevention and gene therapy. Furthermore, the biomarkers detected in exosomes of biological fluids imply a potential for exosomes in the early detection and diagnosis, prediction of therapeutic efficacy, and determining prognosis of cancer.
Although exosomes may serve as cancer biomarkers and aid in the treatment of cancer, we have a long way to go before we can further enhance the anti-tumor therapy of exosomes and develop exosome-based cancer diagnostic and therapeutic strategies."} {"_id": "5b47d47a6b9da9bc77da3ab15257f5d30ff3f489", "title": "Weakly supervised named entity classification", "text": "In this paper, we describe a new method for the problem of named entity classification for specialized or technical domains, using distant supervision. Our approach relies on a simple observation: in some specialized domains, named entities are almost unambiguous. Thus, given a seed list of names of entities, it is cheap and easy to obtain positive examples from unlabeled texts using a simple string match. Those positive examples can then be used to train a named entity classifier, by using the PU learning paradigm, which is learning from positive and unlabeled examples. We introduce a new convex formulation to solve this problem, and apply our technique in order to extract named entities from financial reports corresponding to healthcare companies."} {"_id": "c986bb0fbb101a36dbec97e2b49475897298d295", "title": "An Efficient Method of Number Plate Extraction from Indian Vehicles Image", "text": "Automatic Number Plate Recognition (ANPR) is an image-processing technology that identifies vehicles by their number plates without direct human intervention. It is an application of computer vision and an important area of research due to its many applications. The main process of ANPR is divided into four stages. This paper presents a simple and efficient method for the extraction of the number plate from the vehicle image based on morphological operations, thresholding and Sobel edge detection, and connected component analysis."} {"_id": "16c9604d0fc53dc7f21fb31cbc7fab6bd9bdddd6", "title": "Information fusion for wireless sensor networks: Methods, models, and classifications", "text": "Wireless sensor networks produce a large amount of data that needs to be processed, delivered, and assessed according to the application objectives. The way these data are manipulated by the sensor nodes is a fundamental issue. Information fusion arises as a response to process data gathered by sensor nodes and benefits from their processing capability. By exploiting the synergy among the available data, information fusion techniques can reduce the amount of data traffic, filter noisy measurements, and make predictions and inferences about a monitored entity. In this work, we survey the current state-of-the-art of information fusion by presenting the known methods, algorithms, architectures, and models of information fusion, and discuss their applicability in the context of wireless sensor networks."} {"_id": "b1ec21386d1573b9a9ad791c434aca576378ee9b", "title": "Leadership style and patient safety: implications for nurse managers.", "text": "OBJECTIVE\nThe purpose of this study was to explore the relationship between nurse manager (NM) leadership style and safety climate.\n\n\nBACKGROUND\nNursing leaders are needed who will change the environment and increase patient safety. Hospital NMs are positioned to impact day-to-day operations. Therefore, it is essential to inform nurse executives regarding the impact of leadership style on patient safety.\n\n\nMETHODS\nA descriptive correlational study was conducted in 41 nursing departments across 9 hospitals.
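As one concrete instance of the fusion methods in the survey's scope, the sketch below applies classic inverse-variance weighting to redundant sensor readings; the fused estimate has lower variance than any single input. This is a textbook method chosen for illustration, not one singled out by the survey, and the readings and variances are invented placeholders.

```python
measurements = [(21.3, 0.4), (20.8, 0.1), (22.0, 0.9)]  # (value, variance)

# Inverse-variance weighting: noisier sensors contribute less.
weights = [1.0 / var for _, var in measurements]
fused = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
fused_var = 1.0 / sum(weights)

print(round(fused, 2), round(fused_var, 3))  # fused variance < any input's
```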
The Hospital Unit Safety Climate survey and the Multifactor Leadership Questionnaire were completed by 466 staff nurses. Bivariate and regression analyses were conducted to determine how well leadership style predicted safety climate.\n\n\nRESULTS\nTransformational leadership style was demonstrated as a positive contributor to safety climate, whereas laissez-faire leadership style was shown to negatively contribute to unit socialization and a culture of blame.\n\n\nCONCLUSIONS\nNursing leaders must concentrate on developing transformational leadership skills while also diminishing negative leadership styles."} {"_id": "e66e7dbbd10861d346ea00594c0df7a8605c7cf5", "title": "Iterative Attention Mining for Weakly Supervised Thoracic Disease Pattern Localization in Chest X-Rays", "text": "Given image labels as the only supervisory signal, we focus on harvesting, or mining, thoracic disease localizations from chest X-ray images. Harvesting such localizations from existing datasets allows for the creation of improved data sources for computer-aided diagnosis and retrospective analyses. We train a convolutional neural network (CNN) for image classification and propose an attention mining (AM) strategy to improve the model\u2019s sensitivity or saliency to disease patterns. The intuition of AM is that once the most salient disease area is blocked or hidden from the CNN model, it will pay attention to alternative image regions, while still attempting to make correct predictions. However, the model requires to be properly constrained during AM, otherwise, it may overfit to uncorrelated image parts and forget the valuable knowledge that it has learned from the original image classification task. To alleviate such side effects, we then design a knowledge preservation (KP) loss, which minimizes the discrepancy between responses for X-ray images from the original and the updated networks. Furthermore, we modify the CNN model to include multi-scale aggregation (MSA), improving its localization ability on small-scale disease findings, e.g., lung nodules. We experimentally validate our method on the publicly-available ChestXray14 dataset, outperforming a class activation map (CAM)-based approach, and demonstrating the value of our novel framework for mining disease locations."} {"_id": "bcdd2458633e6f9955a2b18846e5c85c7b047e08", "title": "Content-Based Image Retrieval Using Multiple Features", "text": "Algorithms of Content-Based Image Retrieval (CBIR) have been well developed along with the explosion of information. These algorithms are mainly distinguished based on the feature used to describe the image content. In this paper, the algorithms that are based on color feature and texture feature for image retrieval will be presented. A Color Coherence Vector-based image retrieval algorithm is also attempted during the implementation process, but the best result is generated by the algorithm that weights color and texture. An 80% satisfaction rate is achieved."} {"_id": "587f6b97f6c75d7bfaf2c04be8d9b4ad28ee1b0a", "title": "DIFusion: Fast Skip-Scan with Zero Space Overhead", "text": "Scan is a crucial operation in main-memory column-stores. It scans a column and returns a result bit vector indicating which records satisfy a filter predicate. ByteSlice is an in-memory data layout that chops data into multiple bytes and exploits early-stop capability by high-order byte comparisons.
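The early-stop idea just described can be illustrated with a scalar Python sketch: compare values one byte slice at a time, most-significant first, and stop as soon as the slices differ. The real layout evaluates one byte of many codes per SIMD register; this single-value version only shows the control flow.

```python
def bytes_of(v, width=4):
    # Big-endian byte slices of a `width`-byte unsigned code.
    return [(v >> (8 * (width - 1 - i))) & 0xFF for i in range(width)]

def less_than(value, probe, width=4):
    inspected = 0
    for a, b in zip(bytes_of(value, width), bytes_of(probe, width)):
        inspected += 1
        if a != b:               # early stop: lower-order bytes never read
            return a < b, inspected
    return False, inspected      # all slices equal

print(less_than(0x01020304, 0x0A000000))  # (True, 1): decided on byte one
print(less_than(0x01020304, 0x01020305))  # (True, 4): had to scan all bytes
```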
As column widths are usually not multiples of a byte, the last byte of ByteSlice is padded with 0's, wasting memory bandwidth and computation power. To fully leverage the resources, we propose to weave a secondary index into the vacant bits (i.e., bits originally padded with 0's), forming our new layout coined DIFusion (Data Index Fusion). DIFusion enables skip-scan, a new fast scan that inherits the early-stopping capability from ByteSlice and at the same time possesses the data-skipping ability of an index with zero space overhead. Empirical results show that skip-scan on DIFusion outperforms scan on ByteSlice."} {"_id": "6c1671a8163f7a2ce0cd68424a142df0bae40c2e", "title": "Monocyte emigration from bone marrow during bacterial infection requires signals mediated by chemokine receptor CCR2", "text": "Monocytes recruited to tissues mediate defense against microbes or contribute to inflammatory diseases. Regulation of the number of circulating monocytes thus has implications for disease pathogenesis. However, the mechanisms controlling monocyte emigration from the bone marrow niche where they are generated remain undefined. We demonstrate here that the chemokine receptor CCR2 was required for emigration of Ly6Chi monocytes from bone marrow. Ccr2\u2212/\u2212 mice had fewer circulating Ly6Chi monocytes and, after infection with Listeria monocytogenes, accumulated activated monocytes in bone marrow. In blood, Ccr2\u2212/\u2212 monocytes could traffic to sites of infection, demonstrating that CCR2 is not required for migration from the circulation into tissues. Thus, CCR2-mediated signals in bone marrow determine the frequency of Ly6Chi monocytes in the circulation."} {"_id": "7acbb31671647dd86f451ba3a1b895d949b70ff9", "title": "Incremental learning for \u03bd-Support Vector Regression", "text": "The \u03bd-Support Vector Regression (\u03bd-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter \u03bd for controlling the number of support vectors and adjusting the width of the tube automatically. However, compared to \u03bd-Support Vector Classification (\u03bd-SVC) (Sch\u00f6lkopf et\u00a0al., 2000), \u03bd-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line \u03bd-SVC algorithm (AONSVM) to \u03bd-SVR will not generate an effective initial solution. Designing an incremental \u03bd-SVR learning algorithm is thus the main challenge. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of \u03bd-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental \u03bd-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. The experiments on benchmark datasets demonstrate that INSVR can avoid the infeasible updating paths as far as possible, and successfully converges to the optimal solution. The results also show that INSVR is faster than batch \u03bd-SVR algorithms with both cold and warm starts."} {"_id": "d574fe2948052948f8506e2c369cf0558827e260", "title": "Feature combination strategies for saliency-based visual attention systems", "text": "
Bottom-up or saliency-based visual attention allows primates to detect nonspecific conspicuous targets in cluttered scenes. A classical metaphor, derived from electrophysiological and psychophysical studies, describes attention as a rapidly shiftable \u2018\u2018spotlight.\u2019\u2019 We use a model that reproduces the attentional scan paths of this spotlight. Simple multi-scale \u2018\u2018feature maps\u2019\u2019 detect local spatial discontinuities in intensity, color, and orientation, and are combined into a unique \u2018\u2018master\u2019\u2019 or \u2018\u2018saliency\u2019\u2019 map. The saliency map is sequentially scanned, in order of decreasing saliency, by the focus of attention. We here study the problem of combining feature maps, from different visual modalities (such as color and orientation), into a unique saliency map. Four combination strategies are compared using three databases of natural color images: (1) Simple normalized summation, (2) linear combination with learned weights, (3) global nonlinear normalization followed by summation, and (4) local nonlinear competition between salient locations followed by summation. Performance was measured as the number of false detections before the most salient target was found. Strategy (1) always yielded the poorest performance and (2) the best performance, with a threefold to eightfold improvement in time to find a salient target. However, (2) yielded specialized systems with poor generalization. Interestingly, strategy (4) and its simplified, computationally efficient approximation (3) yielded significantly better performance than (1), with up to fourfold improvement, while preserving generality."} {"_id": "b77e603fdfce2e2f5ae40592d07f0cbd1ad91ac1", "title": "Counter-Forensics: Attacking Image Forensics", "text": "This chapter discusses counter-forensics, the art and science of impeding or misleading forensic analyses of digital images. Research on counter-forensics is motivated by the need to assess and improve the reliability of forensic methods in situations where intelligent adversaries make efforts to induce a certain outcome of forensic analyses. Counter-forensics is first defined in a formal decision-theoretic framework. This framework is then interpreted and extended to encompass the requirements of forensic analyses in practice, including a discussion of the notion of authenticity in the presence of legitimate processing, and the role of image models with regard to the epistemic underpinning of the forensic decision problem. A terminology is developed that distinguishes security from robustness properties, integrated from post-processing attacks, and targeted from universal attacks. This terminology is directly applied in a self-contained technical survey of counter-forensics against image forensics, notably techniques that suppress traces of image processing and techniques that synthesize traces of authenticity, including examples and brief evaluations. A discussion of relations to other domains of multimedia security and an overview of open research questions concludes the chapter. 1 Definition of Counter-Forensics This final chapter changes the perspective. It is devoted to digital image counter-forensics, the art and science of impeding and misleading forensic analyses of digital images.
{"_id": "eb2c8a159a41ed86e70aad96d2d7f833633eddac", "title": "Band-pass shielding enclosure for 2.4/5 GHz WLAN applications", "text": "This paper presents a design of a band-pass shielding enclosure for two frequency bands of WLAN. The proposed design employs a compact frequency selective surface to highlight the advantages of these miniaturized structures for the selective shielding of such enclosures that are electrically small. The results indicate that the proposed shielding enclosure has high transmittance in both the pass bands for horizontal and vertical polarizations, while offering a good amount of electromagnetic shielding outside these frequency bands."} {"_id": "7d973f3c1634dadd4543d1132dcaf0160ad7d01f", "title": "Big Data For Development: Applications and Techniques", "text": "With the explosion of social media sites and proliferation of digital computing devices and Internet access, massive amounts of public data are being generated on a daily basis. Efficient techniques/algorithms to analyze this massive amount of data can provide near real-time information about emerging trends and provide early warning in case of an imminent emergency (such as the outbreak of a viral disease). In addition, careful mining of these data can reveal many useful indicators of socioeconomic and political events, which can help in establishing effective public policies. The focus of this study is to review the application of big data analytics for the purpose of human development. The emerging ability to use big data techniques for development (BD4D) promises to revolutionize healthcare, education, and agriculture; facilitate the alleviation of poverty; and help to deal with humanitarian crises and violent conflicts. Besides all the benefits, the large-scale deployment of BD4D is beset with several challenges due to the massive size, fast-changing and diverse nature of big data. The most pressing concerns relate to efficient data acquisition and sharing, establishing of context (e.g., geolocation and time) and veracity of a dataset, and ensuring appropriate privacy. In this study, we provide a review of existing BD4D work to study the impact of big data on the development of society. In addition to reviewing the important works, we also highlight important challenges and open issues."} {"_id": "78b7a8acf5da38537f78fba56aefadab0bf3e7e1", "title": "On-Chip High-Voltage Generation in Integrated Circuits Using an Improved Multiplier Technique", "text": "An improved voltage multiplier technique has been developed for generating +40 V internally in p-channel MNOS integrated circuits to enable them to be operated from standard +5- and -12-V supply rails. With this technique, the multiplication efficiency and current driving capability are both independent of the number of multiplier stages. A mathematical model and simple equivalent circuit have been developed for the multiplier and the predicted performance agrees well with measured results. A multiplier has already been incorporated into a TTL compatible nonvolatile quad-latch, in which it occupies a chip area of 600 \u03bcm x 240 \u03bcm. It is operated with a clock frequency of 1 MHz and can supply a maximum load current of about 10 \u03bcA. The output impedance is 3.2 M\u03a9. INTRODUCTION Although MNOS technology is now well established for fabricating nonvolatile memory circuits, the relatively high potentials necessary to write or erase information, typically 30-40 V, are an obvious disadvantage. In many applications, the need to generate these voltages has prevented the use of MNOS devices being economically viable, especially when only a few bits of nonvolatile data are required. To overcome this problem, a method of on-chip high-voltage generation using a new voltage multiplier technique has been developed, enabling MNOS circuits to be operated with standard supply rails."} {"_id": "ca2b173e1d7a7b906feed1f1da611d883733b61c", "title": "Using Fully Homomorphic Encryption for Statistical Analysis of Categorical, Ordinal and Numerical Data", "text": "In recent years, there has been a growing trend towards outsourcing of computational tasks with the development of cloud services. Gentry\u2019s pioneering work on fully homomorphic encryption (FHE) and successive works have opened a new vista for secure and practical cloud computing. In this paper, we consider performing statistical analysis on encrypted data. To improve the efficiency of the computations, we take advantage of batched computation based on the Chinese-Remainder-Theorem. We propose two building blocks that work with FHE: a novel batch greater-than primitive, and a matrix primitive for encrypted matrices. With these building blocks, we construct secure procedures and protocols for different types of statistics including the histogram (count), contingency table (with cell suppression) for categorical data; k-percentile for ordinal data; and principal component analysis and linear regression for numerical data. To demonstrate the effectiveness of our methods, we ran experiments on five real datasets. For instance, we can compute a contingency table with more than 50 cells from 4000 records in just 5 minutes, and we can train a linear regression model with more than 40k records and dimension as high as 6 within 15 minutes. We show that FHE is not as slow as commonly believed and it becomes feasible to perform a broad range of statistical analysis on thousands of encrypted data records."}
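A side note on the Chinese-Remainder-Theorem batching that the FHE record above relies on: the packing idea itself fits in a few lines of plain Python. This is only a sketch of CRT slot packing on plaintext integers (the moduli and values are made up for illustration); a real FHE scheme batches inside ciphertexts, not on raw integers.

```python
# Minimal sketch of Chinese-Remainder-Theorem batching, in the spirit of the
# FHE record above: several small values are packed into one integer, one
# operation on that integer acts on every slot at once, and the slots are
# recovered by reduction modulo each prime. Illustrative only.

from math import prod

MODULI = [11, 13, 17, 19]  # pairwise-coprime "slot" moduli (assumed here)

def crt_pack(values, moduli=MODULI):
    """Pack one value per modulus into a single integer via CRT."""
    m = prod(moduli)
    x = 0
    for v, mi in zip(values, moduli):
        ni = m // mi
        x += v * ni * pow(ni, -1, mi)  # pow(..., -1, mi) = modular inverse
    return x % m

def crt_unpack(x, moduli=MODULI):
    """Recover the slot values by reducing modulo each prime."""
    return [x % mi for mi in moduli]

packed = crt_pack([3, 5, 7, 2])
packed_sum = packed + crt_pack([1, 1, 1, 1])   # one addition updates all slots
print(crt_unpack(packed_sum))                   # -> [4, 6, 8, 3]
```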
{"_id": "a052348595c07b7da2fc9ef5a0e883f03b6a02b1", "title": "Anomalous Magnetic Field Activity During a Bioenergy Healing Experiment", "text": "A few studies have reported magnetic field changes during bioenergy healing. In a pilot experiment, we examined magnetic field activity during hands-on healing and distant healing of mice with experimentally induced tumors. During healing sessions, we observed distinct magnetic field oscillations adjacent to the mice cages, which were similar in appearance to those reported by Zimmerman (1985). The magnetic field oscillations began as 20\u201330 Hz oscillations, slowing to 8\u20139 Hz, and then to less than 1 Hz, at which point the oscillations reversed and increased in frequency, with an overall symmetrical appearance resembling a \u201cchirp wave.\u201d The waves ranged from 1\u20138 milliGauss peak-to-peak in strength and 60\u2013120 sec in duration. Evidence to date suggests that bioenergy healing may be detectable with DC gaussmeters, although we have not ruled out an artifactual basis for the oscillations reported here."} {"_id": "0e9741bc1e0c80520a8181970cd4f61caa00055a", "title": "Algorithms implementing distributed shared memory", "text": "Four basic algorithms for implementing distributed shared memory are compared. Conceptually, these algorithms extend local virtual address spaces to span multiple hosts connected by a local area network, and some of them can easily be integrated with the hosts' virtual memory systems. The merits of distributed shared memory and the assumptions made with respect to the environment in which the shared memory algorithms are executed are described. The algorithms are then described, and a comparative analysis of their performance in relation to application-level access behavior is presented. It is shown that the correct choice of algorithm is determined largely by the memory access behavior of the applications. Two particularly interesting extensions of the basic algorithms are described, and some limitations of distributed shared memory are noted."} {"_id": "0a482d33dc0c7ca08cc333158c5c0e82d4c04560", "title": "Adapting the Tesseract open source OCR engine for multilingual OCR", "text": "We describe efforts to adapt the Tesseract open source OCR engine for multiple scripts and languages. Effort has been concentrated on enabling generic multi-lingual operation such that negligible customization is required for a new language beyond providing a corpus of text. Although change was required to various modules, including physical layout analysis and linguistic post-processing, no change was required to the character classifier beyond changing a few limits. The Tesseract classifier has adapted easily to Simplified Chinese. Test results on English, a mixture of European languages, and Russian, taken from a random sample of books, show a reasonably consistent word error rate between 3.72% and 5.78%, and Simplified Chinese has a character error rate of only 3.77%."} {"_id": "2be13dc3d2e27bea5b5008a164e8ddac5ff4dcdd", "title": "Multilingual OCR (MOCR): An Approach to Classify Words to Languages", "text": "There are immense efforts to design a complete OCR for most of the world\u2019s leading languages; multilingual documents, however, whether handwritten or printed, remain a challenge. As a unified attempt, Unicode-based OCRs have been studied, mostly with positive outcomes, despite the fact that a large character set slows down recognition significantly. In this paper, we come up with a method to classify words to a language once word segmentation is complete. For this purpose, we identified the characteristics of the writing of several languages and utilized a projection method combined with some other feature extraction methods. In addition, this paper introduces a modified statistical approach to correct skewness before processing a segmented document. The proposed procedure, evaluated on a collection of both handwritten and printed documents, yielded excellent outcomes in assigning words to languages."}
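The projection idea named in the MOCR record above can be sketched as follows; the profile length, the nearest-mean rule, and the toy data are assumptions made for illustration, not the paper's exact feature set.

```python
# A minimal sketch of projection-based word-to-language classification, in
# the spirit of the MOCR record above: a binarized word image is summarized
# by its normalized horizontal projection profile (ink per row), and an
# unseen word is assigned to the language whose mean profile is closest.

import numpy as np

N_BINS = 16  # profile length after resampling (assumed)

def projection_profile(word_img: np.ndarray) -> np.ndarray:
    """Row-wise ink density of a binary image, resampled to N_BINS values."""
    rows = word_img.sum(axis=1).astype(float)
    rows /= rows.sum() or 1.0                      # normalize to a distribution
    idx = np.linspace(0, len(rows) - 1, N_BINS)
    return np.interp(idx, np.arange(len(rows)), rows)

def classify(word_img, centroids: dict) -> str:
    """Assign the word to the language with the nearest mean profile."""
    p = projection_profile(word_img)
    return min(centroids, key=lambda lang: np.linalg.norm(p - centroids[lang]))

# Toy usage: centroids would be learned from segmented training words.
rng = np.random.default_rng(0)
centroids = {"latin": rng.random(N_BINS), "devanagari": rng.random(N_BINS)}
word = (rng.random((40, 120)) > 0.5).astype(int)   # stand-in binary word image
print(classify(word, centroids))
```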
{"_id": "84073be75980a6a8aabc11a00b4017f681362c08", "title": "The OCRopus open source OCR system", "text": "OCRopus is a new, open source OCR system emphasizing modularity, easy extensibility, and reuse, aimed at both the research community and large scale commercial document conversions. This paper describes the current status of the system, its general architecture, as well as the major algorithms currently being used for layout analysis and text line recognition."} {"_id": "ecc5492d00b1c53ed30d830136c300d381ca2770", "title": "Automatic code generation of convolutional neural networks in FPGA implementation", "text": "Convolutional neural networks (CNNs) have gained great success in various computer vision applications. However, state-of-the-art CNN models are computation-intensive and hence are mainly processed on high performance processors like server CPUs and GPUs. Owing to the advantages of high performance, energy efficiency and reconfigurability, Field-Programmable Gate Arrays (FPGAs) have been widely explored as CNN accelerators. In this paper, we propose parallel structures to exploit the inherent parallelism and efficient computation units to perform operations in convolutional and fully-connected layers. Further, an automatic generator is proposed to generate Verilog HDL source code automatically according to a high-level hardware description. Execution time, DSP consumption and performance are analytically modeled based on some critical design variables. We demonstrate the automatic methodology by implementing two representative CNNs (LeNet and AlexNet) and evaluate the execution time models by comparing estimated and measured values. Our results show that the proposed automatic methodology yields hardware designs with good performance and saves much development time."} {"_id": "46cf123a28b30921ac904672fbba9073885fa9dc", "title": "Extending drift-diffusion paradigm into the era of FinFETs and nanowires", "text": "This paper presents a feasibility study showing that the drift-diffusion model can capture the ballistic transport of FinFETs and nanowires with a simple model extension. For FinFETs, Monte Carlo simulation is performed and the ballistic mobility is calibrated to linear & saturation currents. It is validated that the calibrated model works over a wide range of channel length and channel stress. The ballistic mobility model is then applied to a nanowire with 5nm design rules. Finally, the technology scaling trend of the ballistic ratio is explored."} {"_id": "b418c47489e0e51504540f6ac46e530822e759a8", "title": "Secure Communication Method Based on Encryption and Steganography", "text": "Encryption is a traditional method of scrambling data beyond recognition. For years, encryption has been widely used in various areas and domains where secrecy and confidentiality were required. However, there are certain encryption techniques that are not allowed in some countries. In these cases, steganography may come as a solution for hiding data using an apparently inoffensive and innocent carrier. The purpose of steganography is to deliver secret information from a sender to a receiver by scrambling the communication channel, not the information itself. This way, it is very unlikely for someone to identify and extract the secret message without knowing exactly how it was embedded into the carrier. Moreover, this task is more difficult since most steganographic methods rely on file or packet header redundancy, lossy multimedia algorithms or insignificant content alteration, all of which are very hard to detect. In this paper, a combination of encryption with image-based steganography is presented in order to offer robustness similar to that offered by advanced encryption algorithms such as RSA, AES or DES, but with lower resource needs. The most original contribution of this paper consists in how a secret message is embedded into an image carrier, generating 3 different but very similar images. These images are sent to the destination using different channels and then used to recover the original (secret) message."}
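A rough sketch of the encrypt-then-hide pattern the record above describes, with assumptions labelled: a hash-counter XOR keystream stands in for RSA/AES/DES, a flat byte buffer stands in for an image, and a single carrier is used instead of the paper's three similar images.

```python
# Minimal encrypt-then-embed LSB steganography sketch (illustrative only).
# The secret is first encrypted, then spread over the least significant bits
# of an image-like byte buffer, one bit per "pixel" byte.

import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:  # simple hash-counter keystream, not a real cipher
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def embed(pixels: bytearray, secret: bytes, key: bytes) -> None:
    ct = bytes(a ^ b for a, b in zip(secret, keystream(key, len(secret))))
    bits = [(byte >> i) & 1 for byte in ct for i in range(8)]
    assert len(bits) <= len(pixels), "carrier too small"
    for i, bit in enumerate(bits):               # overwrite one LSB per pixel
        pixels[i] = (pixels[i] & 0xFE) | bit

def extract(pixels: bytes, n: int, key: bytes) -> bytes:
    bits = [p & 1 for p in pixels[: n * 8]]
    ct = bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n))
    return bytes(a ^ b for a, b in zip(ct, keystream(key, n)))

carrier = bytearray(range(256)) * 4              # stand-in 1024-byte "image"
embed(carrier, b"meet at dawn", b"shared-key")
print(extract(bytes(carrier), len(b"meet at dawn"), b"shared-key"))
```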
{"_id": "084dd226a724608d29a70101f793ed4e81dddfd7", "title": "Jigsaw: indoor floor plan reconstruction via mobile crowdsensing", "text": "The lack of floor plans is a critical reason behind the current sporadic availability of indoor localization service. Service providers have to go through effort-intensive and time-consuming business negotiations with building operators, or hire dedicated personnel to gather such data. In this paper, we propose Jigsaw, a floor plan reconstruction system that leverages crowdsensed data from mobile users. It extracts the position, size and orientation information of individual landmark objects from images taken by users. It also obtains the spatial relation between adjacent landmark objects from inertial sensor data, then computes the coordinates and orientations of these objects on an initial floor plan. By combining user mobility traces and locations where images are taken, it produces complete floor plans with hallway connectivity, room sizes and shapes. Our experiments on 3 stories of 2 large shopping malls show that the 90-percentile errors of positions and orientations of landmark objects are about 1~2m and 5~9\u00b0, while the hallway connectivity is 100% correct."} {"_id": "cdd34f54a5a719c9474a7c167642fac89bef9893", "title": "Dielectric properties of human normal, malignant and cirrhotic liver tissue: in vivo and ex vivo measurements from 0.5 to 20 GHz using a precision open-ended coaxial probe.", "text": "Hepatic malignancies have historically been treated with surgical resection. Due to the shortcomings of this technique, there is interest in other, less invasive, treatment modalities, such as microwave hepatic ablation. Crucial to the development of this technique is accurate knowledge of the dielectric properties of human liver tissue at microwave frequencies. To this end, we characterized the dielectric properties of in vivo and ex vivo normal, malignant and cirrhotic human liver tissues from 0.5 to 20 GHz. Analysis of our data at 915 MHz and 2.45 GHz indicates that the dielectric properties of ex vivo malignant liver tissue are 19 to 30% higher than those of normal tissue. The differences in the dielectric properties of in vivo malignant and normal liver tissue are not statistically significant (with the exception of effective conductivity at 915 MHz, where malignant tissue properties are 16% higher than normal). Also, the dielectric properties of in vivo normal liver tissue at 915 MHz and 2.45 GHz are 16 to 43% higher than ex vivo. No statistically significant differences were found between the dielectric properties of in vivo and ex vivo malignant tissue (with the exception of effective conductivity at 915 MHz, where malignant tissue properties are 28% higher than normal). We report the one-pole Cole-Cole parameters for ex vivo normal, malignant and cirrhotic liver tissue in this frequency range. We observe that the wideband dielectric properties of in vivo liver tissue are different from those of ex vivo liver tissue, and that the in vivo data cannot be represented in terms of a Cole-Cole model. Further work is needed to uncover the mechanisms responsible for the observed wideband trends in the in vivo liver data."}
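For reference, the one-pole Cole-Cole dispersion model whose fitted parameters the dielectric record above reports has the standard textbook form (generic symbols below, not values from the study):

```latex
% One-pole Cole-Cole model (standard form; symbols generic, not study data):
%   eps_inf : high-frequency permittivity   Delta eps : dispersion magnitude
%   tau     : relaxation time               alpha     : broadening exponent
%   sigma_s : static conductivity           eps_0     : vacuum permittivity
\hat{\varepsilon}(\omega) = \varepsilon_\infty
  + \frac{\Delta\varepsilon}{1 + (j\omega\tau)^{1-\alpha}}
  + \frac{\sigma_s}{j\omega\varepsilon_0}
```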
{"_id": "55686516b40a94165aa9bda3aa0d0d4cda8992fb", "title": "Camera parameters auto-adjusting technique for robust robot vision", "text": "Making a vision system work robustly under dynamic light conditions is still a challenging research focus in the computer/robot vision community. In this paper, a novel camera parameters auto-adjusting technique based on image entropy is proposed. Firstly, image entropy is defined and its relationship with camera parameters is verified by experiments. Then a method to optimize the camera parameters based on image entropy is proposed to make robot vision adaptive to different light conditions. The algorithm is tested using omnidirectional vision in an indoor RoboCup Middle Size League environment and a perspective camera in an ordinary outdoor environment, and the results show that the method is effective and color constancy to some extent can be achieved."}
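The entropy criterion in the camera-parameters record above is compact enough to sketch: compute the Shannon entropy of the gray-level histogram and prefer the setting that maximizes it. The capture function and the settings list below are invented stand-ins for a real camera interface, and the brute-force search stands in for the paper's optimization procedure.

```python
# A minimal sketch of image-entropy-driven camera adjustment, in the spirit
# of the record above: higher histogram entropy ~ more preserved detail.

import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

def best_setting(capture, settings):
    """Pick the setting whose captured frame has maximal entropy."""
    return max(settings, key=lambda s: image_entropy(capture(s)))

# Toy usage with a fake camera: mid exposures avoid crushing or clipping.
rng = np.random.default_rng(1)
scene = rng.integers(0, 256, (120, 160))
fake_capture = lambda exposure: np.clip(scene * exposure, 0, 255).astype(int)
print(best_setting(fake_capture, [0.25, 0.5, 1.0, 2.0, 4.0]))
```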
{"_id": "4e7abdcafcfb593c2db9a69d65b479333cf22961", "title": "Image Processing And Pattern Recognition Fundamentals And Techniques", "text": "It's coming again, the new collection that this site has. To complete your curiosity, we offer the favorite image processing and pattern recognition fundamentals and techniques book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. Well, when you are really dying of image processing and pattern recognition fundamentals and techniques, just pick it. You know, this book is always making the fans to be dizzy if not to find."} {"_id": "da3e09593812d0d4022a05737f870625dd13e661", "title": "Rules for an Ontology-based Approach to Adaptation", "text": "Adaptation addresses the complexity and overload caused by an increasing amount of available resources by the identification and presentation of relevant resources. Drawbacks of existing approaches to this end include the limited degree of customization, difficulties in the acquisition of model information and the lack of control and transparency of the system's adaptive behavior. This paper proposes an approach which addresses these drawbacks to perform meaningful and effective adaptation by the application of semantic Web technologies. This comprises the use of an ontology and adaptation rules for knowledge representation and inference engines for reasoning. The focus of this paper lies in the presentation of adaptation rules."} {"_id": "454c99488333b2720299e1c49c6ded14e1ceb108", "title": "RTL implementation and analysis of fixed priority, round robin, and matrix arbiters for the NoC's routers", "text": "Networks-on-Chip (NoC) is an emerging on-chip interconnection-centric platform that leverages modern high-speed communication infrastructure to improve the performance of many-core System-on-Chip (SoC) designs. The core of each NoC router involves arbiter and multiplexer pairs that need to be carefully co-optimized in order to achieve an overall efficient implementation. Low transmission latency is one of the most important parameters of NoC design. This paper uses parametric Verilog HDL to implement the designs and compares the performance, in terms of power, area, and delay, of different types of arbiters used in NoC routers. The RTL implementation is performed using parametric Verilog HDL, and the analysis in terms of power, area and delay is performed using Xilinx ISE 14.7 and Xpower Analyzer (XPA) with Xpower Estimator (XPE). The target device used for these implementations is a Virtex-6."}
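A behavioral sketch of two of the arbiter types compared in the record above may help; the Python below mirrors the per-cycle grant logic that such Verilog RTL computes (the matrix arbiter is omitted, and this is an illustration, not the paper's implementation).

```python
# Fixed-priority vs. round-robin arbitration, modeled per cycle. Fixed
# priority always favors the lowest index; round robin rotates priority
# after each grant so all requesters are served fairly.

def fixed_priority(requests):
    """Grant the lowest-indexed active request (or None)."""
    for i, r in enumerate(requests):
        if r:
            return i
    return None

class RoundRobin:
    def __init__(self, n):
        self.n, self.last = n, n - 1   # start so port 0 has highest priority

    def grant(self, requests):
        """Scan from the port after the last grant; update pointer on grant."""
        for off in range(1, self.n + 1):
            i = (self.last + off) % self.n
            if requests[i]:
                self.last = i
                return i
        return None

rr = RoundRobin(4)
reqs = [1, 1, 0, 1]
print([fixed_priority(reqs) for _ in range(3)])   # -> [0, 0, 0] (port 0 hogs)
print([rr.grant(reqs) for _ in range(3)])         # -> [0, 1, 3] (rotates)
```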
{"_id": "a90d4559e7d4fb07b1ae495f5062aa83ce84ef4f", "title": "[Translation and validation for Brazil of the body image scale for adolescents--Offer Self-Image Questionnaire (OSIQ)].", "text": "OBJECTIVE\nTo evaluate the semantic and measurement equivalence of the body image sub-scale of the Offer Self Image Questionnaire (OSIQ).\n\n\nMETHODS\nParticipants were 386 teenagers, 10 to 18 years old, both sexes, enrolled in a private school (junior and high school age). Translation, back-translation, technique revision and evaluation were conducted. The Portuguese instrument was evaluated for internal consistency, discriminant and concurrent validity.\n\n\nRESULTS\nInternal consistency showed values from 0.43 to 0.54 and the scale was able to discriminate all groups studied--the whole population, boys and girls, and boys in early adolescence, by nutritional status (p<0.001; p<0.009; p=0.030; p=0.043, respectively). Concurrent analyses showed significant correlation with anthropometric measures only for girls (r=-0.16 and p=0.021; r=-0.19 and p=0.007), early adolescence (r=-0.23 and p=0.008; r=-0.26 and p=0.003) and intermediate adolescence (r=-0.29 and p=0.010), and the retest confirmed reliability by the intraclass correlation coefficient. Although the instrument has proven its ability to discriminate between the groups studied by nutritional state, other results were less satisfactory. More studies are necessary for full transcultural adaptation, including the application of other comparative scales.\n\n\nCONCLUSION\nThe body image sub-scale of the OSIQ was translated, but the results are not promising and require more studies."} {"_id": "07c00639d498de8b3aa0c3ee50dd939cd03fce7d", "title": "Inflammatory reaction to hyaluronic acid: A newly described complication in vocal fold augmentation.", "text": "OBJECTIVES/HYPOTHESIS\nTo establish the rate of inflammatory reaction to hyaluronic acid (HA) in vocal fold injection augmentation, determine the most common presenting signs and symptoms, and propose an etiology.\n\n\nSTUDY DESIGN\nRetrospective chart review.\n\n\nMETHODS\nPatients injected with HA over a 5-year period were reviewed to identify those who had a postoperative inflammatory reaction. Medical records were reviewed for patient demographic information, subjective complaints, Voice Handicap Index-10 (VHI-10) scores, medical intervention, and resolution time. Videolaryngostroboscopy examinations were also evaluated.\n\n\nRESULTS\nA total of 186 patients (245 vocal folds) were injected with HA over a 5-year period, with a postoperative inflammatory reaction rate of 3.8%. The most common complaints in these patients were odynophagia, dysphonia, and dyspnea with vocal fold erythema, edema, and loss of pliability on videolaryngostroboscopy. All patients were treated with corticosteroids. Return of vocal fold vibration ranged from 3 weeks to 26 months, with VHI-10 scores normalizing in 50% of patients.\n\n\nCONCLUSIONS\nThis reaction may be a form of hypersensitivity related to small amounts of protein linked to HA. Alternatively, extravascular compression from the HA could lead to venous congestion of the vocal fold. The possibility of equipment contamination is also being investigated. Further studies are needed to determine the etiology and best treatment.\n\n\nLEVEL OF EVIDENCE\n4. Laryngoscope, 127:445-449, 2017."} {"_id": "83fbcac3e5d959923397aaa07317a14c852b4948", "title": "Global effects of smoking, of quitting, and of taxing tobacco.", "text": "From the Center for Global Health Research, St. Michael\u2019s Hospital and Dalla Lana School of Public Health, University of Toronto, Toronto (P.J.); and the Clinical Trial Service Unit and Epidemiological Studies Unit, Nuffield Department of Population Health, Richard Doll Building, University of Oxford, Oxford, United Kingdom (R.P.). Address reprint requests to Dr. Jha at prabhat.jha@utoronto.ca."} {"_id": "95a6d057b441396420ee46eca84dea47e4bf11e7", "title": "User-centered security: stepping up to the grand challenge", "text": "User-centered security has been identified as a grand challenge in information security and assurance. It is on the brink of becoming an established subdomain of both security and human/computer interface (HCI) research, and an influence on the product development lifecycle. Both security and HCI rely on the reality of interactions with users to prove the utility and validity of their work. As practitioners and researchers in those areas, we still face major issues when applying even the most foundational tools used in either of these fields across both of them. This essay discusses the systemic roadblocks at the social, technical, and pragmatic levels that user-centered security must overcome to make substantial breakthroughs. Expert evaluation and user testing are producing effective usable security today. Principles such as safe staging, enumerating usability failure risks, integrated security, transparent security and reliance on trustworthy authorities can also form the basis of improved systems."} {"_id": "74d369ac9d945959b6afe5e7cb7147ece2f3aceb", "title": "Static and Moving Object Detection Using Flux Tensor with Split Gaussian Models", "text": "In this paper, we present a moving object detection system named Flux Tensor with Split Gaussian models (FTSG) that exploits the benefits of fusing a motion computation method based on spatio-temporal tensor formulation, a novel foreground and background modeling scheme, and a multi-cue appearance comparison. This hybrid system can handle challenges such as shadows, illumination changes, dynamic background, stopped and removed objects. Extensive testing performed on the CVPR 2014 Change Detection benchmark dataset shows that FTSG outperforms state-of-the-art methods."} {"_id": "9c3b6fe16fcf417ee8f87bd2810fa9e0a9896bd4", "title": "A Quick Tour of BabelNet 1.1", "text": "In this paper we present BabelNet 1.1, a brand-new release of the largest \u201cencyclopedic dictionary\u201d, obtained from the automatic integration of the most popular computational lexicon of English, i.e. WordNet, and the largest multilingual Web encyclopedia, i.e. Wikipedia. BabelNet 1.1 covers 6 languages and comes with a renewed Web interface, graph explorer and programmatic API.
BabelNet is available online at http://www.babelnet.org."} {"_id": "42b9487dc0ce33f9ebf83930d709ed73f45399ad", "title": "Cloud Computing: The impact on digital forensic investigations", "text": "Cloud Computing (CC) as a concept and business opportunity is likely to see many organisations experiencing the 'credit crunch' embrace the relatively low-cost option of CC to ensure continued business viability and sustainability. The pay-as-you-go structure of the CC business model is typically suited to SMEs who do not have the resources to completely fulfil their IT requirements. Private end users will also look to utilise the colossal pool of resources that CC offers in an attempt to provide mobile freedom of information. However, as with many opportunities that offer legitimate users enormous benefits, unscrupulous and criminal users will also look to use CC to exploit the loopholes that may exist within this new concept, design and business model. This paper will outline the tasks that the authors undertook for the CLOIDIFIN project and highlight where the impact of CC will adversely affect digital forensic investigations."} {"_id": "c144214851af20f95642f394195d4671fed258a7", "title": "An Improvement to Feature Selection of Random Forests on Spark", "text": "The Random Forests algorithm belongs to the class of ensemble learning methods, which are commonly used in classification problems. In this paper, we studied the problem of adapting the Random Forests algorithm to learn from raw data in a real usage scenario. An improvement, which is stable, strict, highly efficient, data-driven, problem-independent and has no impact on algorithm performance, is proposed to investigate 2 actual issues of feature selection in the Random Forests algorithm. The first one is to eliminate noisy features, which are irrelevant to the classification, and the second one is to eliminate redundant features, which are highly correlated with other features, but useless. We implemented our improvement approach on Spark. Experiments are performed to evaluate our improvement and the results show that our approach has an ideal performance."}
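The two eliminations described in the Random Forests record above amount to an importance filter followed by a correlation filter. Below is a sketch with scikit-learn and NumPy in place of Spark; the 0.01 importance and 0.9 correlation thresholds are assumed values, not the paper's.

```python
# Sketch: drop "noisy" features (near-zero importance), then drop
# "redundant" features (highly correlated with an already-kept feature).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

imp = (RandomForestClassifier(n_estimators=100, random_state=0)
       .fit(X, y).feature_importances_)
kept = [i for i in np.argsort(imp)[::-1] if imp[i] > 0.01]  # drop noisy

selected = []
corr = np.corrcoef(X, rowvar=False)      # feature-feature correlations
for i in kept:                           # greedy scan, most important first
    if all(abs(corr[i, j]) < 0.9 for j in selected):
        selected.append(i)               # keep only if not redundant
print(sorted(selected))
```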
{"_id": "099089dd8cc7a2cba74ac56664e43313b47c7bf3", "title": "Yield Trends Are Insufficient to Double Global Crop Production by 2050", "text": "Several studies have shown that global crop production needs to double by 2050 to meet the projected demands from rising population, diet shifts, and increasing biofuels consumption. Boosting crop yields to meet these rising demands, rather than clearing more land for agriculture, has been highlighted as a preferred solution to meet this goal. However, we first need to understand how crop yields are changing globally, and whether we are on track to double production by 2050. Using \u223c2.5 million agricultural statistics, collected for \u223c13,500 political units across the world, we track four key global crops-maize, rice, wheat, and soybean-that currently produce nearly two-thirds of global agricultural calories. We find that yields in these top four crops are increasing at 1.6%, 1.0%, 0.9%, and 1.3% per year, non-compounding rates, respectively, which is less than the 2.4% per year rate required to double global production by 2050. At these rates global production in these crops would increase by \u223c67%, \u223c42%, \u223c38%, and \u223c55%, respectively, which is far below what is needed to meet projected demands in 2050. We present detailed maps to identify where rates must be increased to boost crop production and meet rising demands."} {"_id": "a17a069a093dba710582e6e3a25abe67e7d74da7", "title": "Characterizing affective instability in borderline personality disorder.", "text": "OBJECTIVE\nThis study sought to understand affective instability among patients with borderline personality disorder by examining the degree of instability in six affective domains. The authors also examined the subjective intensity with which moods are experienced and the association between instability and intensity of affect.\n\n\nMETHOD\nIn a group of 152 patients with personality disorders, subjective affective intensity and six dimensions of affective instability were measured. The mean scores for lability and intensity for each affective domain for patients with borderline personality disorder were compared with those of patients with other personality disorders through analyses that controlled for other axis I affective disorders, age, and sex.\n\n\nRESULTS\nGreater lability in terms of anger and anxiety and oscillation between depression and anxiety, but not in terms of oscillation between depression and elation, was associated with borderline personality disorder. Contrary to expectation, the experience of an increase in subjective affective intensity was not more prominent in patients with borderline personality disorder than in those with other personality disorders.\n\n\nCONCLUSIONS\nBy applying a finer-grained perspective on affective instability than those of previous personality disorder studies, this study points to patterns of affective experience characteristic of patients with borderline personality disorder."} {"_id": "65bf178effbf3abe806926899c79924a80502f8c", "title": "Proximal Algorithms in Statistics and Machine Learning", "text": "In this paper we develop proximal methods for statistical learning. Proximal point algorithms are useful in statistics and machine learning for obtaining optimization solutions for composite functions. Our approach exploits closed-form solutions of proximal operators and envelope representations based on the Moreau, Forward-Backward, Douglas-Rachford and Half-Quadratic envelopes. Envelope representations lead to novel proximal algorithms for statistical optimisation of composite objective functions which include both non-smooth and non-convex objectives. We illustrate our methodology with regularized logistic and Poisson regression and non-convex bridge penalties with a fused lasso norm. We provide a discussion of convergence of non-descent algorithms with acceleration and for non-convex functions. Finally, we provide directions for future research."} {"_id": "0f9c08653e801764fc431f3d5f02c1f1286cc1ab", "title": "Chaotic bat algorithm", "text": "Bat algorithm (BA) is a recent metaheuristic optimization algorithm proposed by Yang. In the present study, we have introduced chaos into BA so as to increase its global search mobility for robust global optimization. Detailed studies have been carried out on benchmark problems with different chaotic maps. Here, four different variants of chaotic BA are introduced and thirteen different chaotic maps are utilized for validating each of these four variants. The results show that some variants of chaotic BAs can clearly outperform the standard BA for these benchmarks. \u00a9 2013 Elsevier B.V. All rights reserved.
Introduction: Many design optimization problems are often highly nonlinear, which can typically have multiple modal optima, and it is thus very challenging to solve such multimodal problems. To cope with this issue, global optimization algorithms are widely attempted; however, traditional algorithms may not produce good results, and latest trends are to use new metaheuristic algorithms [1]. Metaheuristic techniques are well-known global optimization methods that have been successfully applied in many real-world and complex optimization problems [2,3]. These techniques attempt to mimic natural phenomena or social behavior so as to generate better solutions for optimization problems by using iterations and stochasticity [4]. They also try to use both intensification and diversification to achieve better search performance. Intensification typically searches around the current best solutions and selects the best candidate designs, while the diversification process allows the optimizer to explore the search space more efficiently, mostly by randomization [1]. In recent years, several novel metaheuristic algorithms have been proposed for global search. Such algorithms can increase the computational efficiency, solve larger problems, and implement robust optimization codes [5]. For example, Xin-She Yang [6] recently developed a promising metaheuristic algorithm, called bat algorithm (BA). Preliminary studies suggest that the BA can have superior performance over genetic algorithms and particle swarm optimization [6], and it can solve real world and engineering optimization problems [7\u201310]. On the other hand, recent advances in theories and applications of nonlinear dynamics, especially chaos, have drawn more attention in many fields [10]. One of these fields is the applications of chaos in optimization algorithms to replace certain algorithm-dependent parameters [11]. Previously, chaotic sequences have been used to tune parameters in metaheuristic optimization algorithms such as genetic algorithms [12], particle swarm optimization [13], harmony search [14], ant and bee colony optimization [15,16], imperialist competitive algorithm [17], firefly algorithm [18], and simulated annealing [19]. Such a combination of chaos with metaheuristics has shown some promise once the right set of chaotic maps are used. It is still not clear why the use of chaos in an algorithm to replace certain parameters may change the performance; however, empirical studies indeed indicate that chaos can have a high level of mixing capability, and thus it can be expected that when a fixed parameter is replaced by a chaotic map, the solutions generated may have higher mobility and diversity. For this reason, it may be useful to carry out more studies by introducing chaos to other, especially newer, metaheuristic algorithms. Therefore, one of the aims of this paper is to introduce chaos into the standard bat algorithm, and as a result, we propose a chaos-based bat algorithm (CBA). As different chaotic maps may lead to different behavior of the algorithm, we then have a set of chaos-based bat algorithms. In these algorithms, we use different chaotic systems to replace the parameters in BA. Thus different methods that use chaotic maps as potentially efficient alternatives to pseudorandom sequences have been proposed. In order to evaluate the"}
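The mechanism the chaotic-bat record above describes is easy to show in miniature: replace one fixed BA parameter with a chaotic sequence. The sketch below is an assumed, heavily stripped-down toy (logistic map only, loudness only, sphere benchmark), not the paper's four variants over thirteen maps.

```python
# Toy chaos-based bat-style search: the logistic map (chaotic at r = 4)
# drives the loudness A that gates acceptance of candidate moves.

import random

def logistic_map(x0=0.7, r=4.0):
    x = x0
    while True:
        x = r * x * (1.0 - x)        # chaotic sequence in (0, 1)
        yield x

def sphere(v):
    return sum(c * c for c in v)

random.seed(0)
dim, n_bats, iters = 5, 20, 200
bats = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
vel = [[0.0] * dim for _ in range(n_bats)]
best = min(bats, key=sphere)
loudness = logistic_map()            # chaotic parameter instead of a fixed A

for _ in range(iters):
    A = next(loudness)
    for i, b in enumerate(bats):
        f = random.random()          # frequency in [0, 1]
        vel[i] = [v + (x - g) * f for v, x, g in zip(vel[i], b, best)]
        cand = [x + v for x, v in zip(b, vel[i])]
        if random.random() < A and sphere(cand) < sphere(b):
            bats[i] = cand           # accept, gated by chaotic loudness
    best = min(bats + [best], key=sphere)

print(round(sphere(best), 6))
```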
{"_id": "207bcf6dbd11bd6c8ab252750244bf051efd2576", "title": "A novel bat algorithm with habitat selection and Doppler effect in echoes for optimization", "text": "A novel bat algorithm (NBA) is proposed for optimization in this paper, which focuses on further mimicking bats\u2019 behaviors and improving the bat algorithm (BA) in view of biology. The proposed algorithm incorporates bats\u2019 habitat selection and their self-adaptive compensation for the Doppler effect in echoes into the basic BA. The bats\u2019 habitat selection is modeled as a selection between their quantum behaviors and mechanical behaviors. Having considered the bats\u2019 self-adaptive compensation for the Doppler effect in echoes and individual differences in the compensation rate, the echolocation characteristics of bats can be further simulated in NBA. A self-adaptive local search strategy is also embedded into NBA. Simulations and comparisons based on twenty benchmark problems and four real-world engineering designs demonstrate the effectiveness, efficiency and stability of NBA compared with the basic BA and some well-known algorithms, and suggest that improving algorithms on a biological basis can be very efficient. Further research topics are also discussed. 2015 Elsevier Ltd. All rights reserved."} {"_id": "745b88eb437eb59e2a58fe378d287702a6b0d985", "title": "Use of genetic algorithms to solve production and operations management problems: A review", "text": "Operations managers and scholars in their search for fast and good solutions to real-world problems have applied genetic algorithms to many problems. While genetic algorithms are promising tools for problem solving, future research will benefit from a review of the problems that have been solved and the designs of the genetic algorithms used to solve them. This paper provides a review of the use of genetic algorithms to solve operations problems. Reviewed papers are classified according to the problems they solve. The basic design of each genetic algorithm is described, the shortcomings of the current research are discussed and directions for future research are suggested."} {"_id": "1447fe8dc5e08f4a6bad99d2730347dd5a134354", "title": "A swarm optimization algorithm inspired in the behavior of the social-spider", "text": "Swarm intelligence is a research field that models the collective behavior in swarms of insects or animals. Several algorithms arising from such models have been proposed to solve a wide range of complex optimization problems. In this paper, a novel swarm algorithm called the Social Spider Optimization (SSO) is proposed for solving optimization tasks. The SSO algorithm is based on the simulation of the cooperative behavior of social spiders. In the proposed algorithm, individuals emulate a group of spiders which interact with each other based on the biological laws of the cooperative colony. The algorithm considers two different search agents (spiders): males and females. Depending on gender, each individual is conducted by a set of different evolutionary operators which mimic different cooperative behaviors that are typically found in the colony. In order to illustrate the proficiency and robustness of the proposed approach, it is compared to other well-known evolutionary methods.
The comparison examines several standard benchmark functions that are commonly considered within the literature of evolutionary algorithms. The outcome shows a high performance of the proposed method for searching a global optimum with several benchmark functions."} {"_id": "90cbb19ad3e69de3907633fc55123153f980c843", "title": "Bat-Inspired Optimization Approach for the Brushless DC Wheel Motor Problem", "text": "This paper presents a metaheuristic algorithm inspired by evolutionary computation and swarm intelligence concepts and the fundamentals of the echolocation of micro bats. The aim is to optimize the mono- and multiobjective optimization problems related to the brushless DC wheel motor, which has 5 design parameters and 6 constraints for the mono-objective problem and 2 objectives, 5 design parameters, and 5 constraints for the multiobjective version. Furthermore, results are compared with other optimization approaches proposed in the recent literature, showing the feasibility of this newly introduced technique for highly nonlinear problems in electromagnetics."} {"_id": "0fe96806c009e8d095205e8f954d41b2b9fd5dcf", "title": "On-the-Job Learning with Bayesian Decision Theory", "text": "Our goal is to deploy a high-accuracy system starting with zero training examples. We consider an on-the-job setting, where as inputs arrive, we use real-time crowdsourcing to resolve uncertainty where needed and output our prediction when confident. As the model improves over time, the reliance on crowdsourcing queries decreases. We cast our setting as a stochastic game based on Bayesian decision theory, which allows us to balance latency, cost, and accuracy objectives in a principled way. Computing the optimal policy is intractable, so we develop an approximation based on Monte Carlo Tree Search. We tested our approach on three datasets\u2014named-entity recognition, sentiment classification, and image classification. On the NER task we obtained more than an order of magnitude reduction in cost compared to full human annotation, while boosting performance relative to the expert provided labels. We also achieve an 8% F1 improvement over having a single human label the whole set, and a 28% F1 improvement over online learning. \u201cPoor is the pupil who does not surpass his master.\u201d \u2013 Leonardo da Vinci"} {"_id": "09ecdb904eb7ae8a12d0c6c04ae531617a30eafa", "title": "Rethinking serializable multiversion concurrency control", "text": "Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose BOHM, a new concurrency control protocol for main-memory multi-versioned database systems. BOHM guarantees serializable execution while ensuring that reads never block writes. In addition, BOHM does not require reads to perform any bookkeeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. BOHM has all the above characteristics without performing validation-based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that BOHM performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees."}
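The "reads never block writes" property claimed in the BOHM record above comes from multiversioning itself, which a toy version store makes concrete. This sketch shows generic MVCC snapshot visibility only; it does not model BOHM's pessimistic, serializable protocol.

```python
# Generic multiversion store: writers append timestamped versions, and a
# reader at snapshot timestamp ts sees the latest version with ts' <= ts,
# so reads need no locks and no bookkeeping.

import bisect
from collections import defaultdict

class MVStore:
    def __init__(self):
        self.versions = defaultdict(list)   # key -> sorted [(ts, value)]
        self.clock = 0

    def write(self, key, value):
        self.clock += 1                     # writer installs a new version
        self.versions[key].append((self.clock, value))
        return self.clock

    def read(self, key, ts):
        """Latest version with timestamp <= ts (snapshot read)."""
        vs = self.versions[key]
        i = bisect.bisect_right(vs, (ts, chr(0x10FFFF)))
        return vs[i - 1][1] if i else None

db = MVStore()
db.write("x", "v1")
snapshot = db.clock                         # reader starts at ts = 1
db.write("x", "v2")                         # concurrent writer, not blocked
print(db.read("x", snapshot), db.read("x", db.clock))  # -> v1 v2
```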
{"_id": "99d2b42c731269751431393d20f9b9225787cc5e", "title": "Overview of rotary steerable system and its control methods", "text": "Rotary steerable system (RSS) is a new drilling technique adopted in directional drilling. Compared with conventional directional drilling, RSS has the ability to improve drilling efficiency, reduce drilling risk and decrease drilling cost at the same time. This paper summarizes the fundamental structure, classification, directional principle and development process of RSS, and gives a detailed analysis of the control methods in RSS. According to these summaries and analyses, the advantages and disadvantages of RSS at present are given, and some key suggestions about this technique and the related control methods are also proposed in this paper."} {"_id": "8bf73b47f5d3ac82187971311faa371d719fa08f", "title": "Commonsense Reasoning Based on Betweenness and Direction in Distributional Models", "text": "Several recent approaches use distributional similarity for making symbolic reasoning more flexible. While an important step in the right direction, the use of similarity has a number of inherent limitations. We argue that similarity-based reasoning should be complemented with commonsense reasoning patterns such as interpolation and a fortiori inference. We show how the required background knowledge for these inference patterns can be obtained from distributional models."} {"_id": "416377d4a06e73b87f0ff9fbc86340f1228dbf8d", "title": "A Survey on Mobile Banking Applications and the Adopted Models", "text": "Many banks around the world are starting to offer banking services through mobile phones. Therefore, many studies have been proposed to help bankers to increase the number of customers. Despite the effort found in the state of the art, the adoption rate of mobile banking applications has not reached the expected level yet. Therefore, the aim of this study is to analyse the most well-known and accepted models to provide a comprehensive understanding of their impacts toward the adoption of mobile banking applications. Furthermore, this study also aims at exploring the most effective factors that have been used to influence the behavioural intention to adopt mobile banking applications. As a result of this survey study, some critical recommendations were stated."} {"_id": "b924e60dd39e9795fb610d644df908a9a36fe75d", "title": "A review of Augmented Reality and its application in context aware library system", "text": "Augmented Reality has revolutionized the way of looking at actuality. The colossal developments in this area have led to visualizations beyond the desktop. Unreal objects can be augmented on real surfaces. Augmented Reality has converged the knowledge of computer vision, virtual reality, image processing, human-computer interaction and more such areas. It has found applications in various expanses like Games, Entertainment, Military, Navigation and many more.
One such application could be in a Library Management System. Earlier systems for book tracking involved immobile computers that only had the shelf numbers nested with book ids to locate a book. A borrower needed to ask the librarian/support staff to look up the shelf number from the system so as to find a book, or simply do it manually by searching each shelf. The application of Augmented Reality in Library Administration is useful in automating the proximity-based location tracking of a book and providing alternatives to the borrower in the book's context. The borrower is navigated through the library till he/she reaches the shelf where the book is kept. The spine of the book serves as the index of the required book and contextual information is augmented once the book is located. Thus Augmented Reality can prove immensely helpful for Library Administration."} {"_id": "35d6119d3bad46fe9905af2d3f6263dd201eeebd", "title": "A new force-feedback arm exoskeleton for haptic interaction in virtual environments", "text": "The paper presents the mechanical design of the L-EXOS, a new exoskeleton for the human arm. The exoskeleton is a tendon-driven wearable haptic interface with 5 dof, 4 of them actuated, and is characterized by a workspace very close to that of the human arm. The design has been optimized to obtain a solution with reduced mass and high stiffness, by employing special mechanical components and carbon fiber structural parts. The devised exoskeleton is very effective for simulating the touch by hand of large objects or manipulation within the whole workspace of the arm. The main features of the first prototype that has been developed at PERCRO are presented, together with an indication of the achieved and tested performance."} {"_id": "fa9bf738ee1e6b631b670d3473d801d368e8bf5a", "title": "Issues and Challenges in Scrum Implementation", "text": "The aim of this research paper is to bring the challenges and issues in Scrum implementation to light and to propose solutions for them. For this, a survey was conducted in two companies named Digital Prodigy Limited (DPL) and Bentley Systems Pakistan. Participants included experienced and inexperienced Scrum personnel from both companies. The analysis of the survey results exposed several issues that affect Scrum implementation directly or indirectly, resulting in violation of Scrum rules. Quality items pileup, module integration issues, code quality, disruption in team work, mature vs. immature Scrum, sprint duration, lack of Scrum training, release process, backlog management, no technical practices, multiple teams, metrics, risk management, documentation and over-idealism are some examples of these issues. During this study, it was observed that proper training in Scrum can eliminate half of the issues, such as disruption in team work, immature Scrum, sprint duration and backlog management. At the end, suggestions to address these issues are also provided."} {"_id": "fd48c4bfa8f336ad6879f16710d980988ca35b67", "title": "Investigating the global trend of RF power amplifiers with the arrival of 5G", "text": "To satisfy the continuously increasing demand for the high data rates and mobility required by new wireless applications, 5G has gained much attention recently. Radio frequency power amplifiers (RF-PAs), as one of the critical components of the 5G transmitter system, are becoming a hot issue.
In this paper, a statistical analysis of RF PA papers shows that research on RF-PAs in Asia-Pacific and cooperation between different affiliations and countries are gradually becoming more prominent, showing the globalization trend of RF PA research and of the 5G technologies. The decreased research cycle of RF PAs shows that the processes of research on PAs and 5G technologies are speeding up. Some promising RF-PA technologies for 5G wireless communication systems are also discussed."} {"_id": "c78138102d9ad484bc78b6129e3b80ed950fb333", "title": "Grade Prediction with Temporal Course-wise Influence", "text": "There is a critical need to develop new educational technology applications that analyze the data collected by universities to ensure that students graduate in a timely fashion (4 to 6 years); and they are well prepared for jobs in their respective fields of study. In this paper, we present a novel approach for analyzing historical educational records from a large, public university to perform next-term grade prediction; i.e., to estimate the grades that a student will get in a course that he/she will enroll in the next term. Accurate next-term grade prediction holds the promise for better student degree planning, personalized advising and automated interventions to ensure that students stay on track in their chosen degree program and graduate on time. We present a factorization-based approach called Matrix Factorization with Temporal Course-wise Influence that incorporates course-wise influence effects and temporal effects for grade prediction. In this model, students and courses are represented in a latent \u201cknowledge\u201d space. The grade of a student on a course is modeled as the similarity of their latent representations in the \u201cknowledge\u201d space. Course-wise influence is considered as an additional factor in the grade prediction. Our experimental results show that the proposed method outperforms several baseline approaches and infers meaningful patterns between pairs of courses within academic programs."}
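The latent "knowledge"-space formulation in the grade-prediction record above can be sketched as plain SGD matrix factorization. The temporal course-wise influence term is deliberately omitted here, and the tiny grade list is invented for illustration.

```python
# Sketch: student s and course c get factor vectors P[s], Q[c]; the predicted
# grade is a global bias plus their dot product, fit by SGD on known grades.

import random

random.seed(0)
K, LR, REG, EPOCHS = 4, 0.01, 0.05, 200
grades = [(0, 0, 3.7), (0, 1, 3.3), (1, 0, 2.0), (1, 2, 2.7), (2, 1, 4.0)]
n_students = 1 + max(s for s, _, _ in grades)
n_courses = 1 + max(c for _, c, _ in grades)

P = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(n_students)]
Q = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(n_courses)]
mu = sum(g for _, _, g in grades) / len(grades)   # global bias

def predict(s, c):
    return mu + sum(ps * qc for ps, qc in zip(P[s], Q[c]))

for _ in range(EPOCHS):
    for s, c, g in grades:
        err = g - predict(s, c)
        for k in range(K):            # gradient step on both factor vectors
            ps, qc = P[s][k], Q[c][k]
            P[s][k] += LR * (err * qc - REG * ps)
            Q[c][k] += LR * (err * ps - REG * qc)

print(round(predict(0, 2), 2))        # grade estimate for an unseen pair
```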
{"_id": "e15ad6b7fc692aaa5855691599e263a600d85325", "title": "The fragment assembly string graph", "text": "We present a concept and formalism, the string graph, which represents all that is inferable about a DNA sequence from a collection of shotgun sequencing reads collected from it. We give time and space efficient algorithms for constructing a string graph given the collection of overlaps between the reads and, in particular, present a novel linear expected time algorithm for transitive reduction in this context. The result demonstrates that the decomposition of reads into k-mers employed in the de Bruijn graph approach described earlier is not essential, and exposes its close connection to the unitig approach we developed at Celera. This paper is a preliminary piece giving the basic algorithm and results that demonstrate the efficiency and scalability of the method. These ideas are being used to build a next-generation whole genome assembler called BOA (Berkeley Open Assembler) that will easily scale to mammalian genomes."} {"_id": "aa43eeb7511a5d349af0cc27b1a594b4e9f2927c", "title": "Fusion of laser and monocular camera data in object grid maps for vehicle environment perception", "text": "Occupancy grid maps provide a reliable vehicle environmental model and usually process data from range-finding sensors. Object grid maps additionally contain information about the classes of objects, which is crucial for applications like autonomous driving. Unfortunately, they lack the precision of occupancy grid maps, since they mostly process classification results from camera data by projecting the corresponding images onto the ground plane. This paper proposes a modular framework to create precise object grid maps. The presented algorithm creates classical occupancy grid maps and object grid maps. In a combination step, it transforms both maps into the same frame of discernment based on the Dempster-Shafer theory of evidence. This allows fusing the maps into one object grid map, which contains valuable object information and at the same time benefits from the precision of the occupancy grid map."} {"_id": "4dfa7b0c07fcff203306ecf7f9f496d60369e45e", "title": "Semantic Representation", "text": "In recent years, there has been renewed interest in the NLP community in genuine language understanding and dialogue. Thus the long-standing issue of how the semantic content of language should be represented is reentering the communal discussion. This paper provides a brief \u201copinionated survey\u201d of broad-coverage semantic representation (SR). It suggests multiple desiderata for such representations, and then outlines more than a dozen approaches to SR\u2013some longstanding, and some more recent, providing quick characterizations, pros, cons, and some comments on imple-"} {"_id": "a7f47084be0c7c4fac60f7523858c90abf4d0b51", "title": "A 0.002-mm$^{2}$ 6.4-mW 10-Gb/s Full-Rate Direct DFE Receiver With 59.6% Horizontal Eye Opening Under 23.3-dB Channel Loss at Nyquist Frequency", "text": "This paper reports a full-rate direct decision-feedback-equalization (DFE) receiver with circuit techniques to widen the data eye opening with competitive power and area efficiencies. Specifically, a current-reuse active-inductor (AI) linear equalizer is merged into a clocked-one-tap DFE core for joint-elimination of pre-cursor and long-tail post-cursors. Unlike the passive-inductor designs that are bulky and untunable, the AI linear equalizer offers orthogonally tunable low- and high-frequency de-emphasis. The clocked-one-tap DFE resolves the first post-cursor via return-to-zero feedback data patterns for sharper data transition (i.e., horizontal eye opening), and is followed by a D-flip-flop slicer to maximize the data height (i.e., vertical eye opening). A 10-Gb/s DFE receiver was fabricated in 65-nm CMOS. Measured over an 84-cm printed circuit board differential trace with 23.3-dB channel loss at Nyquist frequency (5 GHz), the achieved figure-of-merit is 0.027 pJ/bit/dB (power consumption/date rate/channel loss). At a 10^-12 bit error rate under a 2^7-1 pseudorandom binary sequence, the horizontal and vertical eye openings are 59.6% and 189.3 mV, respectively. The die size is 0.002 mm2."} {"_id": "05991f4ea3bde9f0f993b304b681ecfc78b4cdff", "title": "A Survey on TCP Congestion Control Schemes in Guided Media and Unguided Media Communication", "text": "Transmission Control Protocol (TCP) is a widely used end-to-end transport protocol in the Internet. This end-to-end delivery in wired (guided) as well as wireless (unguided) networks improves the performance of the transport layer or Transmission Control Protocol (TCP), characterized by negligible random packet losses. This paper presents a tentative study of TCP congestion control principles and mechanisms. Modern implementations of TCP contain four intertwined algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery, in addition to the standard algorithms used in common implementations of TCP. This paper describes the performance characteristics of four representative TCP schemes, namely TCP Tahoe, Reno, New Reno and Vegas, under the condition of congested link capacities for wired networks as well as wireless networks."}
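The four intertwined algorithms named in the TCP record above shape the congestion window roughly as follows. This is a textbook Reno-style toy trace; segment-level mechanics, timeouts, Tahoe's restart from one segment, and Vegas' delay-based control are omitted.

```python
# Congestion window per RTT: slow start doubles cwnd until ssthresh,
# congestion avoidance then adds one segment per RTT, and a fast-retransmit
# loss event halves ssthresh and continues from there (Reno fast recovery).

def reno_cwnd_trace(rtts, loss_at=frozenset()):
    cwnd, ssthresh, trace = 1.0, 16.0, []
    for t in range(rtts):
        trace.append(cwnd)
        if t in loss_at:                  # triple-duplicate-ACK loss event
            ssthresh = max(cwnd / 2, 2.0)
            cwnd = ssthresh               # fast recovery (Reno)
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: linear
    return trace

print(reno_cwnd_trace(12, loss_at={7}))
# -> [1, 2, 4, 8, 16, 17, 18, 19, 9.5, 10.5, 11.5, 12.5]
```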
This paper describes the performance characteristics of four representative TCP schemes, namely TCP Tahoe, Reno, New Reno and Vegas under the condition of congested link capacities for wired network as well as wireless network."} {"_id": "951fbb632fd02fd57fb1d864bbd183ebb93172e0", "title": "Computer-aided detection/diagnosis of breast cancer in mammography and ultrasound: a review.", "text": "Breast cancer is the most common form of cancer among women worldwide. Early detection of breast cancer can increase treatment options and patients' survivability. Mammography is the gold standard for breast imaging and cancer detection. However, due to some limitations of this modality such as low sensitivity especially in dense breasts, other modalities like ultrasound and magnetic resonance imaging are often suggested to achieve additional information. Recently, computer-aided detection or diagnosis (CAD) systems have been developed to help radiologists in order to increase diagnosis accuracy. Generally, a CAD system consists of four stages: (a) preprocessing, (b) segmentation of regions of interest, (c) feature extraction and selection, and finally (d) classification. This paper presents the approaches which are applied to develop CAD systems on mammography and ultrasound images. The performance evaluation metrics of CAD systems are also reviewed."} {"_id": "77105d6df4ccd7454bd805575abe05428f36893a", "title": "ULA-OP: an advanced open platform for ultrasound research", "text": "The experimental test of novel ultrasound (US) investigation methods can be made difficult by the lack of flexibility of commercial US machines. In the best options, these only provide beamformed radiofrequency or demodulated echo-signals for acquisition by an external PC. More flexibility is achieved in high-level research platforms, but these are typically characterized by high cost and large size. This paper presents a powerful but portable US system, specifically developed for research purposes. The system design has been based on high-level commercial integrated circuits to obtain the maximum flexibility and wide data access with minimum of electronics. Preliminary applications involving nonstandard imaging transmit/receive strategies and simultaneous B-mode and multigate spectral Doppler mode are discussed."} {"_id": "3e76919e1a27105e76ca9105a9a94f262729eade", "title": "Improved Winding Proposal for Wound Rotor Resolver Using Genetic Algorithm and Winding Function Approach", "text": "Among position sensors, resolvers are superior from reliability point of view. However, obtaining lower output voltage harmonics and simple manufacturing process is a challenge in the design and optimization of resolvers. In this paper, a metaheuristic optimization algorithm is used to minimize total harmonic distortion of the output signals, and consequently the estimated position error in concentrated coil wound field resolvers. Meanwhile, to minimize total coil numbers, manufacturing costs, and complexity of the winding process, modified objective function and constraints are proposed. In this way, a modified winding function method is employed for performance analysis of the axial flux resolver. Then, the merits of the optimized winding configuration with respect to fractional slot concentrated windings are shown. The results of the proposed configurations are verified with a three-dimensional time-stepping finite-element method. Finally, the prototype of the studied axial flux resolver is constructed and tested. 
Good agreement is obtained between simulation and experimental results, confirming the effectiveness of the optimization process."} {"_id": "6662dbe13b725180d7f9ae114b709f8bc7d0ced9", "title": "Lens design as multi-objective optimisation", "text": "This paper demonstrates the computational advantages of a multi-objective framework that can overcome the generic and domain-related challenges in optical system design and optimisation. Non-dominated sorting genetic algorithm-II (Deb, 2003) is employed in this study. The optical systems studied in this paper are Cooke triplets, Petzval lens systems and achromatic doublets. We report the results of four studies. In the first study, we optimise the optical systems using computationally efficient image quality objective functions. Our approach uses only two paraxial rays to estimate the objective functions and thus improves the computational efficiency. This time-saving measure can partially compensate for the typically enormous number of fitness function evaluations required in evolutionary algorithms. The reduction in reliability due to the computations from a single ray pair is compensated by the availability of multiple objective functions that help us to navigate to the optima. In the second study, hybridisation of evolutionary and gradient-based approaches and scaling techniques are employed to speed up convergence and enforce the constraints. The third study shows how recent developments in optical system design research can be better integrated in a multi-objective framework. The fourth study optimises an achromatic doublet with suitable constraints applied to the thicknesses and image distance."} {"_id": "5976d405e242dac207fcd32f37fb1e9044b6f8de", "title": "The Relationship Between Health and Growth: When Lucas", "text": "This paper revisits the relationship between health and growth in light of modern endogenous growth theory. We propose a unified framework that encompasses the growth effects of both the rate of improvement of health and the level of health. Based on cross-country regressions over the period 1960-2000, where we instrument for both variables, we find that a higher initial level and a higher rate of improvement in life expectancy both have a significantly positive impact on per capita GDP growth. Then, restricting attention to OECD countries, we find supportive evidence that only the reduction in mortality below age forty generates productivity gains, which in turn may explain why the positive correlation between health and growth in cross-OECD country regressions appears to have weakened since 1960. JEL classification: E0, I10, O10, O15"} {"_id": "fd94e80dd0f8a05ccf0c7912da2a5b90ed75d210", "title": "Tool for 3D Gazebo Map Construction from Arbitrary Images and Laser Scans", "text": "Algorithms for mobile robots, such as navigation, mapping, and SLAM, require proper modelling of the environment. Our paper presents an automatic tool for creating a realistic 3D landscape in a Gazebo simulation, based on real sensor-based experimental results. The tool provides automatic filtering of an occupancy grid map and its import into the Gazebo framework as a heightmap, and enables configuration of the settings for the created simulation environment. In addition, the tool is capable of creating a 3D Gazebo map from any arbitrary image."} {"_id": "9a83b57a240e950f8080c1ce185de325858d585b", "title": "Multi-View Stereo Revisited", "text": "We present an extremely simple yet robust multi-view stereo algorithm and analyze its properties. 
The algorithm first computes individual depth maps using a window-based voting approach that returns only good matches. The depth maps are then merged into a single mesh using a straightforward volumetric approach. We show results for several datasets, showing accuracy comparable to the best of the current state of the art techniques and rivaling more complex algorithms."} {"_id": "4e088d1c5bc436f1f84997906223e5f24e1df28c", "title": "SPARSKIT: a basic tool kit for sparse matrix computations - Version 2", "text": "This paper presents the main features of a tool package for manipulating and working with sparse matrices. One of the goals of the package is to provide basic tools to facilitate exchange of software and data between researchers in sparse matrix computations. Our starting point is the Harwell/Boeing collection of matrices for which we provide a number of tools. Among other things the package provides programs for converting data structures, printing simple statistics on a matrix, plotting a matrix profile, performing basic linear algebra operations with sparse matrices and so on. *RIACS, Mail Stop 230-5, NASA Ames Research Center, Moffett Field, CA 94035. This work was supported in part by the NAS Systems Division, via Cooperative Agreement NCC 2-387 between NASA and the University Space Research Association (USRA) and in part by the Department of Energy under grant DE-FG02-85ER25001."} {"_id": "1c9aca60f7ac5edcceb73d612806704a7d662643", "title": "Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks", "text": "A key challenge in entity linking is making effective use of contextual information to disambiguate mentions that might refer to different entities in different contexts. We present a model that uses convolutional neural networks to capture semantic correspondence between a mention\u2019s context and a proposed target entity. These convolutional networks operate at multiple granularities to exploit various kinds of topic information, and their rich parameterization gives them the capacity to learn which n-grams characterize different topics. We combine these networks with a sparse linear model to achieve state-of-the-art performance on multiple entity linking datasets, outperforming the prior systems of Durrett and Klein (2014) and Nguyen et al. (2014).1"} {"_id": "4f5cd4c2d81db5c52f952589a8d52bba16962707", "title": "Connecting Language and Knowledge Bases with Embedding Models for Relation Extraction", "text": "This paper proposes a novel approach for relation extraction from free text which is trained to jointly use information from the text and from existing knowledge. Our model is based on two scoring functions that operate by learning low-dimensional embeddings of words and of entities and relationships from a knowledge base. We empirically show on New York Times articles aligned with Freebase relations that our approach is able to efficiently use the extra information provided by a large subset of Freebase data (4M entities, 23k relationships) to improve over existing methods that rely on text features alone."} {"_id": "00ac5fa58d024af5a681e520f49ea0a2dfc6c078", "title": "VisKE: Visual knowledge extraction and question answering by visual verification of relation phrases", "text": "How can we know whether a statement about our world is valid. For example, given a relationship between a pair of entities e.g., `eat(horse, hay)', how can we know whether this relationship is true or false in general. 
Gathering such knowledge about entities and their relationships is one of the fundamental challenges in knowledge extraction. Most previous works on knowledge extraction have focused purely on text-driven reasoning for verifying relation phrases. In this work, we introduce the problem of visual verification of relation phrases and develop a Visual Knowledge Extraction system called VisKE. Given a verb-based relation phrase between common nouns, our approach assesses its validity by jointly analyzing text and images and reasoning about the spatial consistency of the relative configurations of the entities and the relation involved. Our approach involves no explicit human supervision, thereby enabling large-scale analysis. Using our approach, we have already verified over 12000 relation phrases. Our approach has been used to not only enrich existing textual knowledge bases by improving their recall, but also augment open-domain question-answer reasoning."} {"_id": "057ac29c84084a576da56247bdfd63bf17b5a891", "title": "Learning Structured Embeddings of Knowledge Bases", "text": "Many Knowledge Bases (KBs) are now readily available and encompass colossal quantities of information thanks to either a long-term funding effort (e.g. WordNet, OpenCyc) or a collaborative process (e.g. Freebase, DBpedia). However, each of them is based on a different rigid symbolic framework which makes it hard to use their data in other systems. It is unfortunate because such rich structured knowledge might lead to a huge leap forward in many other areas of AI like natural language processing (word-sense disambiguation, natural language understanding, ...), vision (scene classification, image semantic annotation, ...) or collaborative filtering. In this paper, we present a learning process based on an innovative neural network architecture designed to embed any of these symbolic representations into a more flexible continuous vector space in which the original knowledge is kept and enhanced. These learnt embeddings would allow data from any KB to be easily used in recent machine learning methods for prediction and information retrieval. We illustrate our method on WordNet and Freebase and also present a way to adapt it to knowledge extraction from raw text."} {"_id": "c4ed93ddc4be73a902e936cd44ad3f3c0e1f87f3", "title": "Safety of Machine Learning Systems in Autonomous Driving", "text": "Machine Learning, and in particular Deep Learning, are extremely capable tools for solving problems which are difficult or intractable to tackle analytically. Application areas include pattern recognition, computer vision, speech and natural language processing. With the automotive industry aiming for an increasing amount of automation in driving, the problems to solve become increasingly complex, which appeals to the use of supervised learning methods from Machine Learning and Deep Learning. With this approach, solutions to the problems are learned implicitly from training data, and inspecting their correctness is not possible directly. This presents concerns when the resulting systems are used to support safety-critical functions, as is the case with autonomous driving of automotive vehicles. This thesis studies the safety concerns related to learning systems within autonomous driving and applies a safety monitoring approach to a collision avoidance scenario. Experiments are performed using a simulated environment, with a deep learning system supporting perception for vehicle control, and a safety monitor for collision avoidance. 
The related operational situations and safety constraints are studied for an autonomous driving function, with potential faults in the learning system introduced and examined. Also, an example is considered for a measure that indicates trustworthiness of the learning system during operation."} {"_id": "610b86da495e69a27484287eac6e79285513884f", "title": "A Broadband U-Slot Coupled Microstrip-to-Waveguide Transition", "text": "A novel planar broadband microstrip-to-waveguide transition is proposed in this paper. The waveguide concerned can be either a rectangular or a ridged waveguide. The transition consists of an open-circuited microstrip quarter-wavelength resonator and a resonant U-shaped slot on the upper broadside wall of a short-circuited waveguide. A physics-based equivalent-circuit model is also developed for interpreting the working mechanism and providing a coarse model for engineering design. The broadband transition can be regarded as a stacked two-pole resonator filter. Each coupling circuit can be approximately designed separately using the group-delay information at the center frequency. In addition to its broadband attribute, the transition is compact in size, vialess, and is highly compatible with planar circuits. These good features make the new transition very attractive for the system architecture where waveguide devices need to be surface mounted on a multilayered planar circuit. Two design examples are given to demonstrate the usefulness of the transition: one is a broadband ridged-waveguide bandpass filter and the other is a surface-mountable broadband low-temperature co-fired ceramic laminated waveguide cavity filter. Both filters are equipped with the proposed transition for interfacing with microstrip lines, showing promising potential in practical applications."} {"_id": "cd33ec99c3bcba977356bc44efab5cb3ab837938", "title": "Optimized, Direct Sale of Privacy in Personal-Data Marketplaces", "text": "Very recently, we have been witnessing the emergence of a number of start-ups that enable individuals to sell their private data directly to brokers and businesses. While this new paradigm may shift the balance of power between individuals and companies that harvest data, it raises some practical, fundamental questions for users of these services: how they should decide which data must be vended and which data protected, and what a good deal is. In this work, we investigate a mechanism that aims at helping users address these questions. The investigated mechanism relies on a hard-privacy model and allows users to share partial or complete profile data with broker companies in exchange for an economic reward. The theoretical analysis of the trade-off between privacy and money posed by such a mechanism is the object of this work. We adopt a generic measure of privacy, although part of our analysis focuses on some important examples of Bregman divergences. We find a parametric solution to the problem of optimal exchange of privacy for money, and obtain a closed-form expression and characterize the trade-off between profile-disclosure risk and economic reward for several interesting cases."} {"_id": "91f09f9009d5d7df61d7da0bcc590959f5794dde", "title": "Performance analysis of MapReduce programs on Hadoop cluster", "text": "This paper discusses various MapReduce applications like pi, wordcount, grep, Terasort. We have shown experimental results of these applications on a Hadoop cluster. 
In this paper, the performance of the above applications is shown with respect to execution time and number of nodes. We find that as the number of nodes increases, the execution time decreases. This paper is essentially a research study of the above MapReduce applications."} {"_id": "1e02f85c5560f6f57b0a6ddffcd89b83ab6cc01c", "title": "Ten key considerations for the successful implementation and adoption of large-scale health information technology.", "text": "The implementation of health information technology interventions is at the forefront of most policy agendas internationally. However, such undertakings are often far from straightforward as they require complex strategic planning accompanying the systemic organizational changes associated with such programs. Building on our experiences of designing and evaluating the implementation of large-scale health information technology interventions in the USA and the UK, we highlight key lessons learned in the hope of informing the on-going international efforts of policymakers, health directorates, healthcare management, and senior clinicians."} {"_id": "3cf000b5886f1feb1d8cd272deb9a71dd6db2111", "title": "Classification of Age Groups Based on Facial Features", "text": "An age group classification system for gray-scale facial images is proposed in this paper. Four age groups, including babies, young adults, middle-aged adults, and old adults, are used in the classification system. The process of the system is divided into three phases: location, feature extraction, and age classification. Based on the symmetry of human faces and the variation of gray levels, the positions of eyes, noses, and mouths could be located by applying the Sobel edge operator and region labeling. Two geometric features and three wrinkle features from a facial image are then obtained. Finally, two back-propagation neural networks are constructed for classification. The first one employs the geometric features to distinguish whether the facial image is a baby. If it is not, then the second network uses the wrinkle features to classify the image into one of three adult groups. The proposed system was evaluated on 230 facial images on a Pentium II 350 processor with 128 MB RAM. One half of the images are used for training and the other half for testing. It takes 0.235 seconds on average to classify an image. The identification rate achieves 90.52% for the training images and 81.58% for the test images, which is roughly close to human\u2019s subjective justification."} {"_id": "47ae71f741c635905b9ebcd63e20255d31a9ae0f", "title": "Observing Tutorial Dialogues Collaboratively: Insights About Human Tutoring Effectiveness From Vicarious Learning", "text": "The goals of this study are to evaluate a relatively novel learning environment, as well as to seek greater understanding of why human tutoring is so effective. This alternative learning environment consists of pairs of students collaboratively observing a videotape of another student being tutored. Comparing this collaboratively observing environment to four other instructional methods-one-on-one human tutoring, observing tutoring individually, collaborating without observing, and studying alone-the results showed that students learned to solve physics problems just as effectively from observing tutoring collaboratively as the tutees who were being tutored individually. 
We explain the effectiveness of this learning environment by postulating that such a situation encourages learners to become active and constructive observers through interactions with a peer. In essence, collaboratively observing combines the benefit of tutoring with the benefit of collaborating. The learning outcomes of the tutees and the collaborative observers, along with the tutoring dialogues, were used to further evaluate three hypotheses explaining why human tutoring is an effective learning method. Detailed analyses of the protocols at several grain sizes suggest that tutoring is effective when tutees are independently or jointly constructing knowledge: with the tutor, but not when the tutor independently conveys knowledge."} {"_id": "1e8c283cedbbceb2a56bf962bc0a86fd40f1cea6", "title": "Asynchronous Large-Scale Graph Processing Made Easy", "text": "Scaling large iterative graph processing applications through parallel computing is a very important problem. Several graph processing frameworks have been proposed that insulate developers from low-level details of parallel programming. Most of these frameworks are based on the bulk synchronous parallel (BSP) model in order to simplify application development. However, in the BSP model, vertices are processed in fixed rounds, which often leads to slow convergence. Asynchronous executions can significantly accelerate convergence by intelligently ordering vertex updates and incorporating the most recent updates. Unfortunately, asynchronous models do not provide the programming simplicity and scalability advantages of the BSP model. In this paper, we combine the easy programmability of the BSP model with the high performance of asynchronous execution. We have designed GRACE, a new graph programming platform that separates application logic from execution policies. GRACE provides a synchronous iterative graph programming model for users to easily implement, test, and debug their applications. It also contains a carefully designed and implemented parallel execution engine for both synchronous and user-specified built-in asynchronous execution policies. Our experiments show that asynchronous execution in GRACE can yield convergence rates comparable to fully asynchronous executions, while still achieving the near-linear scalability of a synchronous BSP system."} {"_id": "d4baf2e2f2715ef65223d5642f408ecd9d0d5321", "title": "Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees", "text": "Developing classification algorithms that are fair with respect to sensitive attributes of the data is an important problem due to the increased deployment of classification algorithms in societal contexts. Several recent works have focused on studying classification with respect to specific fairness metrics, modeled the corresponding fair classification problem as constrained optimization problems, and developed tailored algorithms to solve them. Despite this, there still remain important metrics for which there are no fair classifiers with theoretical guarantees; primarily because the resulting optimization problem is non-convex. The main contribution of this paper is a meta-algorithm for classification that can take as input a general class of fairness constraints with respect to multiple non-disjoint and multi-valued sensitive attributes, and which comes with provable guarantees. 
In particular, our algorithm can handle non-convex \"linear fractional\" constraints (which include fairness constraints such as predictive parity) for which no prior algorithm was known. Key to our results is an algorithm for a family of classification problems with convex constraints along with a reduction from classification problems with linear fractional constraints to this family. Empirically, we observe that our algorithm is fast, can achieve near-perfect fairness with respect to various fairness metrics, and the loss in accuracy due to the imposed fairness constraints is often small."} {"_id": "41aa1e97416a01ddcbb26e6fdd5402bfe0c6c1ce", "title": "Bridge Management System with Integrated Life Cycle Cost Optimization", "text": ""} {"_id": "0a80cfdcca92f0b6c5f3fd1569c92f3b2d08b4db", "title": "Understanding Instructions on Large Scale for Human-Robot Interaction", "text": "Correctly interpreting human instructions is the first step to human-robot interaction. Previous approaches to semantically parsing the instructions relied on large numbers of training examples with annotation to widely cover all words in a domain. Annotating large enough instructions with semantic forms needs exhaustive engineering efforts. Hence, we propose propagating the semantic lexicon to learn a semantic parser from limited annotations, while the parser still has the ability to interpret instructions on a large scale. We assume that semantically close words have the same semantic form, based on the fact that humans usually use different words to refer to the same object or task. Our approach softly maps the unobserved words/phrases to the semantic forms learned from the annotated corpus through a metric for knowledge-based lexical similarity. Experiments on the collected instructions showed that the semantic parser learned with lexicon propagation outperformed the baseline. Our approach provides an opportunity for robots to understand human instructions on a large scale."} {"_id": "b3d4e09f161207eef2f00160479b75878ff9fbd1", "title": "Evolutionary Bayesian Rose Trees", "text": "We present an evolutionary multi-branch tree clustering method to model hierarchical topics and their evolutionary patterns over time. The method builds evolutionary trees in a Bayesian online filtering framework. The tree construction is formulated as an online posterior estimation problem, which balances both the fitness of the current tree and the smoothness between trees. The state-of-the-art multi-branch clustering method, Bayesian rose trees, is employed to generate a topic tree with a high fitness value. A constraint model is also introduced to preserve the smoothness between trees. A set of comprehensive experiments on real world news data demonstrates that the proposed method better incorporates historical tree information and is more efficient and effective than the traditional evolutionary hierarchical clustering algorithm. In contrast to our previous method [31], we implement two additional baseline algorithms to compare them with our algorithm. We also evaluate the performance of the clustering algorithm based on multiple constraint trees. 
Furthermore, two case studies are conducted to demonstrate the effectiveness and usefulness of our algorithm in helping users understand the major hierarchical topic evolutionary patterns in text data."} {"_id": "75bbd7d888899337f55663f28ced0ad7372d0dd1", "title": "Capacitive Touch Systems With Styli for Touch Sensors: A Review", "text": "This paper presents the latest progress in development and applications of the capacitive touch systems (CTSs) with styli. The CTSs, which include the touch sensor, analog front-end (AFE) integrated circuit (IC), and micro-controller unit, are reviewed along with the passive and active styli. The architecture of the CTS is explained first, followed by an exploration of the touch sensors: (1) types of touch sensors, such as in-cell, on-cell, and add-on types according to the size and material, (2) AFE IC with the driving and sensing methods, and (3) passive and active styli for the CTS. Finally, the future perspectives of the CTS are given from the viewpoint of the technical developments."} {"_id": "6514d7eeb27a47f8b75e157aca98b177c38de4e9", "title": "Linear feedback control : analysis and design with MATLAB", "text": ""} {"_id": "f4702d55b49ba075edc34309423880d091eac2b3", "title": "A Multi-Scale CNN and Curriculum Learning Strategy for Mammogram Classification", "text": "Screening mammography is an important front-line tool for the early detection of breast cancer, and some 39 million exams are conducted each year in the United States alone. Here, we describe a multi-scale convolutional neural network (CNN) trained with a curriculum learning strategy that achieves high levels of accuracy in classifying mammograms. Specifically, we first train CNN-based patch classifiers on segmentation masks of lesions in mammograms, and then use the learned features to initialize a scanning-based model that renders a decision on the whole image, trained end-to-end on outcome data. We demonstrate that our approach effectively handles the \u201cneedle in a haystack\u201d nature of full-image mammogram classification, achieving 0.92 AUROC on the DDSM dataset."} {"_id": "7367b866d1e08a2b4c1143822e076ad0291d2cc5", "title": "QRPp1-4: Characterizing Quality of Time and Topology in a Time Synchronization Network", "text": "As Internet computing gains speed, complexity, and becomes ubiquitous, the need for precise and accurate time synchronization increases. In this paper, we present a characterization of a clock synchronization network managed by Network Time Protocol (NTP), composed by thousands of nodes, including hundreds of Stratum 1 servers, based on data collected recently by a robot. NTP is the most common protocol for time synchronization in the Internet. Many aspects that define the quality of timekeeping are analyzed, as well as topological characteristics of the network. The results are compared to previous characterizations of the NTP network, showing the evolution of clock synchronization in the last fifteen years."} {"_id": "57344d036f349600e918be3c0676bde7f47ad7bf", "title": "Exact Inference Techniques for the Dynamic Analysis of Bayesian Attack Graphs", "text": "Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise valuable network resources. The uncertainty about the attacker\u2019s behaviour and capabilities make Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. 
Previous approaches have focused on the formalization of traditional attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling the static and dynamic network risk assessments. To support the validity of our proposed approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches."} {"_id": "b77f5f07c07b8569596eb03e578a680300016188", "title": "Daily Fluctuations in Smartphone Use, Psychological Detachment, and Work Engagement: The Role of Workplace Telepressure", "text": "Today's work environment is shaped by the electronic age. Smartphones are important tools that allow employees to work anywhere and anytime. The aim of this diary study was to examine daily smartphone use after and during work and their association with psychological detachment (in the home domain) and work engagement (in the work domain), respectively. We explored whether workplace telepressure, which is a strong urge to respond to work-related messages and a preoccupation with quick response times, promotes smartphone use. Furthermore, we hypothesized that employees experiencing high workplace telepressure would have more trouble letting go of the workday during the evening and feel less engaged during their workday to the extent that they use their smartphone more intensively across domains. A total of 116 employees using their smartphones for work-related purposes completed diary questionnaires on five workdays (N = 476 data points) assessing their work-related smartphone use, psychological detachment after work, and engagement during work. Workplace telepressure was measured as a between-individual variable and only assessed at the beginning of the study, as well as relevant control variables such as participants' workload and segmentation preference (a preference for work and home domains to be as segmented as possible). Multilevel path analyses revealed that work-related smartphone use after work was negatively related to psychological detachment irrespective of employees' experienced workplace telepressure, and daily smartphone use during work was unrelated to work engagement. Supporting our hypothesis, employees who reported high telepressure experienced less work engagement on days that they used their smartphone more intensively during work. Altogether, intensive smartphone use after work hampers employees' psychological detachment, whereas intensive smartphone use during work undermines their work engagement only when employees experience high workplace telepressure as well. Theoretical and practical implications of these findings are discussed."} {"_id": "9e88de8a38a53dbbc4ab670b6cdbf5c61ead18c5", "title": "Extending the Definitions of Antenna Gain and Radiation Pattern into the Time Domain", "text": "Many of the classical parameters of frequency domain (CW) antennas, such as gain and radiation pattern, are defined only in the frequency domain, and currently have no meaning in the time domain. The purpose of this note is to extend their definitions into the time domain. We develop here the concept of waveform norms as a mechanism for comparing the radiated field to the driving voltage. 
The concept of gain that we develop then compares a norm of the radiated field to a norm of the derivative of the driving voltage. The transient gain is therefore a function of both the choice of norm, and of the choice of driving waveform. A key feature of our definitions of gain and pattern factor is that they are equivalent in both transmit and receive modes."} {"_id": "ed173a39f4cd980eef319116b6ba39cec1b37c42", "title": "Associative Embedding: End-to-End Learning for Joint Detection and Grouping", "text": "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines; instead, we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets."} {"_id": "8d7a6c618a898ff8c09f0ec495c776a1837242b5", "title": "A quadruped robot with parallel mechanism legs", "text": "Summary form only given. The design and control of quadruped robots have become a fascinating research field because they have better mobility on unstructured terrains. Until now, many kinds of quadruped robots have been developed, such as JROB-1 [1], BISAM [2], BigDog [3], LittleDog [4], HyQ [5] and Cheetah cub [6]. They have shown significant walking performance. However, most of them use serial mechanism legs and have an animal-like structure: the thigh and the crus. To swing the crus in swing phase and support the body's weight in stance phase, a linear actuator is attached on the thigh [2, 3, 5, 6], or instead, a rotational actuator is installed on the knee joint [1, 4]. To make the robot more useful in the wild environment, e.g., for detection or manipulation tasks, the payload capability is very important. To carry sensors or tools, a heavy-load legged robot is necessary. Thus the knee actuator should be lightweight, powerful and easy to maintain. However, this can be very costly and hard to satisfy at the same time."} {"_id": "343f9b0e97a5f980f70f21dba284cd87314cd9b1", "title": "An Overview of Inter-Vehicular Communication Systems, Protocols and Middleware", "text": "Inter-vehicular communication is an important research area that is rapidly growing due to considerable advances in mobile and wireless communication technologies, as well as the growth of microprocessing capabilities inside today\u2019s cars and other moving vehicles. A good amount of research has been done to exploit the different services that can be provided to enhance the safety and comfort of the driver. Additional functions provide the car electronics and the passengers with access to the Internet and other core network resources. This paper provides a survey of the latest advances in the area of inter-vehicular communication (IVC) including vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) functions and services. 
In addition, the paper presents the most important projects and protocols that are involved in IVC systems as well as the different issues and challenges that exist at each layer of the networking model."} {"_id": "b8d9ff98b9fe1fa3951caaa084109f6bfd0de95d", "title": "DERI&UPM: Pushing Corpus Based Relatedness to Similarity: Shared Task System Description", "text": "In this paper, we describe our system submitted for the semantic textual similarity (STS) task at SemEval 2012. We implemented two approaches to calculate the degree of similarity between two sentences. The first approach combines a corpus-based semantic relatedness measure over the whole sentence with the knowledge-based semantic similarity scores obtained for the words falling under the same syntactic roles in both the sentences. We fed all these scores as features to machine learning models to obtain a single score giving the degree of similarity of the sentences. Linear Regression and Bagging models were used for this purpose. We used Explicit Semantic Analysis (ESA) as the corpus-based semantic relatedness measure. For the knowledge-based semantic similarity between words, a modified WordNet-based Lin measure was used. The second approach uses a bipartite-based method over the WordNet-based Lin measure, without any modification. This paper shows a significant improvement in calculating the semantic similarity between sentences by the fusion of the knowledge-based similarity measure and the corpus-based relatedness measure against the corpus-based measure taken alone."} {"_id": "358e8ad4d8c6baab848f3b3d8bd7fc689fbf3ca5", "title": "WebGT: An Interactive Web-Based System for Historical Document Ground Truth Generation", "text": "We present WebGT, the first web-based system to help users produce ground truth data for document images. This user-friendly software system helps historians and computer scientists collectively annotate historical documents. It supports real time collaboration among remote sites independent of the local operating system and also provides several novel semi-automatic tools that have proven effective for annotating degraded documents."} {"_id": "bd61ee05c320ec40d67ed8cf857de2799dbd3174", "title": "Creating Ground Truth for Historical Manuscripts with Document Graphs and Scribbling Interaction", "text": "Ground truth is both indispensable for training and evaluating document analysis methods, and yet very tedious to create manually. This especially holds true for complex historical manuscripts that exhibit challenging layouts with interfering and overlapping handwriting. In this paper, we propose a novel semi-automatic system to support layout annotations in such a scenario based on document graphs and a pen-based scribbling interaction. On the one hand, document graphs provide a sparse page representation that is already close to the desired ground truth and on the other hand, scribbling facilitates an efficient and convenient pen-based interaction with the graph. The performance of the system is demonstrated in the context of a newly introduced database of historical manuscripts with complex layouts."} {"_id": "feaee4dbf7ed20387cc2fc400669aff9d2f99267", "title": "Aletheia - An Advanced Document Layout and Text Ground-Truthing System for Production Environments", "text": "Large-scale digitisation has led to a number of new possibilities with regard to adaptive and learning based methods in the field of Document Image Analysis and OCR. 
For ground truth production of large corpora, however, there is still a gap in terms of productivity. Ground truth is not only crucial for training and evaluation at the development stage of tools but also for quality assurance in the scope of production workflows for digital libraries. This paper describes Aletheia, an advanced system for accurate and yet cost-effective ground truthing of large amounts of documents. It aids the user with a number of automated and semi-automated tools which were partly developed and improved based on feedback from major libraries across Europe and from their digitisation service providers which are using the tool in a production environment. Novel features are, among others, the support of top-down ground truthing with sophisticated split and shrink tools as well as bottom-up ground truthing supporting the aggregation of lower-level elements to more complex structures. Special features have been developed to support working with the complexities of historical documents. The integrated rules and guidelines validator, in combination with powerful correction tools, enables efficient production of highly accurate ground truth."} {"_id": "069c40a8ca5305c9a0734c1f6134eb19a678f4ab", "title": "LabelMe: A Database and Web-Based Tool for Image Annotation", "text": "We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web."} {"_id": "3301691a253f7ffd3ca6b1db89c3b111c73c4d09", "title": "Decoronation - a conservative method to treat ankylosed teeth for preservation of alveolar ridge prior to permanent prosthetic reconstruction: literature review and case presentation.", "text": "Avulsed teeth that are stored extraorally in a dry environment for >60 min generally develop replacement root resorption or ankylosis following their replantation due to the absence of a vital periodontal ligament on their root surface. One negative sequela of such ankylosis is tooth infra-positioning and the local arrest of alveolar bone growth. Removal of an ankylosed tooth may be difficult and traumatic, leading to esthetic bony ridge deformities and interference with optimal prosthetic treatment. Recently, a treatment option for ankylosed teeth named 'de-coronation' has gained interest, particularly in pediatric dentistry with a focus on dental traumatology. 
This article reviews the up-to-date literature that has been published on decoronation with respect to its importance for future prosthetic rehabilitation followed by a case presentation that demonstrates its clinical benefits."} {"_id": "e8f6c316b3f80699e75a94d5c58efe96ea50cd83", "title": "A Component-Based Dynamic Link Support for Safety-Critical Embedded Systems", "text": "Safety-critical embedded systems have to undergo rigorous development process in order to ensure that their function will not compromise humans or environment where they operate. Therefore, they rely on simple and proven-in-use design. However, with growing software complexity, maintenance becomes very important aspect in safety domain. Recent approaches for managing maintenance allow to perform changes on software at design-time, which implies that the whole system has to be rebuilt when the application software changes. In this paper, we describe more flexible solution for updating the application software. We apply the component-based paradigm to construct the application software, i.e. we define a model of a software function that can be dynamically linked with the entire operating system (OS). In order to avoid the usage of the OS-provided support for dynamic linking, we design software functions as position-independent and relocation-free binaries with well-defined interfaces. With the help of component-based paradigm we show how to simplify the link support and make it suitable for safety domain."} {"_id": "5de6c10e37b04dadde40b344c334876d13fcf76e", "title": "Parameter Estimation for Probabilistic Finite-State Transducers", "text": "Weighted finite-state transducers suffer from the lack of a training algorithm. Training is even harder for transducers that have been assembled via finite-state operations such as composition, minimization, union, concatenation, and closure, as this yields tricky parameter tying. We formulate a \u201cparameterized FST\u201d paradigm and give training algorithms for it, including a general bookkeeping trick (\u201cexpectation semirings\u201d) that cleanly and efficiently computes expectations and gradients. 1 Background and Motivation Rational relations on strings have become widespread in language and speech engineering (Roche and Schabes, 1997). Despite bounded memory they are well-suited to describe many linguistic and textual processes, either exactly or approximately. A relation is a set of (input, output) pairs. Relations are more general than functions because they may pair a given input string with more or fewer than one output string. The class of so-called rational relations admits a nice declarative programming paradigm. Source code describing the relation (a regular expression) is compiled into efficient object code (in the form of a 2-tape automaton called a finite-state transducer). The object code can even be optimized for runtime and code size (via algorithms such as determinization and minimization of transducers). This programming paradigm supports efficient nondeterminism, including parallel processing over infinite sets of input strings, and even allows \u201creverse\u201d computation from output to input. Its unusual flexibility for the practiced programmer stems from the many operations under which rational relations are closed. 
It is common to define further useful operations (as macros), which modify existing relations not by editing their source code but simply by operating on them \u201cfrom outside.\u201d \u2217A brief version of this work, with some additional material, first appeared as (Eisner, 2001a). A leisurely journal-length version with more details has been prepared and is available. The entire paradigm has been generalized to weighted relations, which assign a weight to each (input, output) pair rather than simply including or excluding it. If these weights represent probabilities P (input, output) or P (output | input), the weighted relation is called a joint or conditional (probabilistic) relation and constitutes a statistical model. Such models can be efficiently restricted, manipulated or combined using rational operations as before. An artificial example will appear in \u00a72. The availability of toolkits for this weighted case (Mohri et al., 1998; van Noord and Gerdemann, 2001) promises to unify much of statistical NLP. Such tools make it easy to run most current approaches to statistical markup, chunking, normalization, segmentation, alignment, and noisy-channel decoding, including classic models for speech recognition (Pereira and Riley, 1997) and machine translation (Knight and Al-Onaizan, 1998). Moreover, once the models are expressed in the finite-state framework, it is easy to use operators to tweak them, to apply them to speech lattices or other sets, and to combine them with linguistic resources. Unfortunately, there is a stumbling block: Where do the weights come from? After all, statistical models require supervised or unsupervised training. Currently, finite-state practitioners derive weights using exogenous training methods, then patch them onto transducer arcs. Not only do these methods require additional programming outside the toolkit, but they are limited to particular kinds of models and training regimens. For example, the forward-backward algorithm (Baum, 1972) trains only Hidden Markov Models, while (Ristad and Yianilos, 1996) trains only stochastic edit distance. In short, current finite-state toolkits include no training algorithms, because none exist for the large space of statistical models that the toolkits can in principle describe and run. Given output, find input to maximize P (input, output)."} {"_id": "b0d04d05adf63cb635215f142e22f83d48cbb81b", "title": "Data mining in software engineering", "text": "The increased availability of data created as part of the software development process allows us to apply novel analysis techniques on the data and use the results to guide the process\u2019s optimization. In this paper we describe various data sources and discuss the principles and techniques of data mining as applied on software engineering data. Data that can be mined is generated by most parts of the development process: requirements elicitation, development analysis, testing, debugging, and maintenance. Based on this classification we survey the mining approaches that have been used and categorize them according to the corresponding parts of the development process and the task they assist. 
Thus the survey provides researchers with a concise overview of data mining techniques applied to software engineering data, and aids practitioners in the selection of appropriate data mining techniques for their work."} {"_id": "87f920f1800e01e5d1bbdd30f8095a61ddcc7edd", "title": "Signal Processing for FMCW SAR", "text": "The combination of frequency-modulated continuous-wave (FMCW) technology and synthetic aperture radar (SAR) techniques leads to lightweight cost-effective imaging sensors of high resolution. One limiting factor to the use of FMCW sensors is the well-known presence of nonlinearities in the transmitted signal. This results in contrast- and range-resolution degradation, particularly when the system is intended for high-resolution long-range applications, as is the case for SAR. This paper presents a novel processing solution, which solves the nonlinearity problem for the whole range profile. Additionally, the conventional stop-and-go approximation used in pulse-radar algorithms is not valid in FMCW SAR applications under certain circumstances. Therefore, the motion within the sweep needs to be taken into account. Analytical development of the FMCW SAR signal model, starting from the deramped signal and without using the stop-and-go approximation, is presented in this paper. The model is then applied to stripmap, spotlight, and single-transmitter/multiple-receiver digital-beamforming SAR operational mode. The proposed algorithms are verified by processing real FMCW SAR data collected with the demonstrator system built at the Delft University of Technology."} {"_id": "fbfc5fb33652d27ac7e46462388ccf2b48078d62", "title": "Air-Gap Convection in Rotating Electrical Machines", "text": "This paper reviews the convective heat transfer within the air gap of both cylindrical and disk geometry rotating electrical machines, including worked examples relevant to fractional horsepower electrical machines. Thermal analysis of electrical machines is important because torque density is limited by maximum temperature. Knowledge of surface convective heat transfer coefficients is necessary for accurate thermal modeling, for example, using lumped parameter models. There exists a wide body of relevant literature, but much of it has traditionally been in other application areas, dominated by mechanical engineers, such as gas turbine design. Particular attention is therefore given to the explanation of the relevant nondimensional parameters and to the presentation of measured convective heat transfer correlations for a wide variety of situations from laminar to turbulent flow at small and large gap sizes for both radial-flux and axial-flux electrical machines."} {"_id": "362f40b7121ec60791546577c796ac9ec4433c21", "title": "CAPTCHA: Using Hard AI Problems for Security", "text": "We introduce captcha, an automated test that humans can pass, but current computer programs can\u2019t pass: any program that has high success over a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem. We provide several novel constructions of captchas. Since captchas have many applications in practical security, our approach introduces a new class of hard problems that can be exploited for security purposes. Much like research in cryptography has had a positive impact on algorithms for factoring and discrete log, we hope that the use of hard AI problems for security purposes allows us to advance the field of Artificial Intelligence. 
We introduce two families of AI problems that can be used to construct captchas and we show that solutions to such problems can be used for steganographic communication. captchas based on these AI problem families, then, imply a win-win situation: either the problems remain unsolved and there is a way to differentiate humans from computers, or the problems are solved and there is a way to communicate covertly on some channels."} {"_id": "c4475e5c53abedfdb84fd5a654492a9fd33d0610", "title": "Multiple social identities and stereotype threat: imbalance, accessibility, and working memory.", "text": "In 4 experiments, the authors showed that concurrently making positive and negative self-relevant stereotypes available about performance in the same ability domain can eliminate stereotype threat effects. Replicating past work, the authors demonstrated that introducing negative stereotypes about women's math performance activated participants' female social identity and hurt their math performance (i.e., stereotype threat) by reducing working memory. Moving beyond past work, it was also demonstrated that concomitantly presenting a positive self-relevant stereotype (e.g., college students are good at math) increased the relative accessibility of females' college student identity and inhibited their gender identity, eliminating attendant working memory deficits and contingent math performance decrements. Furthermore, subtle manipulations in questions presented in the demographic section of a math test eliminated stereotype threat effects that result from women reporting their gender before completing the test. This work identifies the motivated processes through which people's social identities became active in situations in which self-relevant stereotypes about a stigmatized group membership and a nonstigmatized group membership were available. In addition, it demonstrates the downstream consequences of this pattern of activation on working memory and performance."} {"_id": "b3d506fd3ec836f1499478479960d2998b90f56c", "title": "Multiuser Resource Allocation for Mobile-Edge Computation Offloading", "text": "Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we consider resource allocation in a MECO system comprising multiple users that time share a single edge cloud and have different computation loads. The optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under constraint on computation latency and for both the cases of infinite and finite edge cloud computation capacities. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Computing the threshold requires iterative computation. 
To reduce the complexity, a sub-optimal resource-allocation algorithm is proposed and shown by simulation to have close-to-optimal performance."} {"_id": "7f7f2a84f0fc1393e92fc1a9fc30d92ca034962f", "title": "Trimming and Improving Skip-thought Vectors", "text": "The skip-thought model has been proven to be effective at learning sentence representations and capturing sentence semantics. In this paper, we propose a suite of techniques to trim and improve it. First, we validate a hypothesis that, given a current sentence, inferring the previous and inferring the next sentence provide similar supervision power; therefore, only one decoder for predicting the next sentence is preserved in our trimmed skip-thought model. Second, we present a connection layer between encoder and decoder to help the model to generalize better on semantic relatedness tasks. Third, we found that a good word embedding initialization is also essential for learning better sentence representations. We train our model unsupervised on a large corpus with contiguous sentences, and then evaluate the trained model on 7 supervised tasks, which include semantic relatedness, paraphrase detection, and text classification benchmarks. We empirically show that our proposed model is a faster, lighter-weight, and equally powerful alternative to the original skip-thought model."} {"_id": "64227f6ae36f2fe23b9754d0a0123feac89f7731", "title": "Self-tuning fuzzy PID controller for electro-hydraulic cylinder", "text": "Hydraulic systems are widely used in industrial applications. This is due to their high speed of response, with fast start, stop, and speed reversal possible. The torque to inertia ratio is also large, with resulting high acceleration capability. The nonlinear properties of the hydraulic cylinder make the position tracking control design challenging. This paper presents the development and implementation of a self-tuning fuzzy PID controller for controlling the position variation of an electro-hydraulic actuator. The hydraulic system mathematical model is approximated using a system identification technique. The simulation studies were done using the Matlab Simulink environment. The output performance was compared with a design using a pole-placement controller. The root mean squared errors for both techniques showed that the self-tuning fuzzy PID produced better results than the pole-placement controller."} {"_id": "b3e1f8fda8c9a9d531d6664b106d04f077179a57", "title": "Formula Derivation and Verification of Characteristic Impedance for Offset Double-Sided Parallel Strip Line (DSPSL)", "text": "Double-sided parallel strip line (DSPSL) is a versatile microwave transmission line. With an offset, it allows for large variations of the characteristic impedance and therefore flexibility for the design of novel microwave devices. In this letter, a formula for the characteristic impedance of DSPSL is derived by the fuzzy EM method and verified by commercial software. Very good agreement is observed."} {"_id": "455e11529077ec799484a5250ea8d6dd49a47390", "title": "Two Conceptions of the Physical", "text": "The debate over physicalism in philosophy of mind can be seen as concerning an inconsistent tetrad of theses: (1) if physicalism is true, a priori physicalism is true; (2) a priori physicalism is false; (3) if physicalism is false, epiphenomenalism is true; (4) epiphenomenalism is false. 
This paper argues that one may resolve the debate by distinguishing two conceptions of the physical: on the theory-based conception, it is plausible that (2) is true and (3) is false; on the object-based conception, it is plausible that (3) is true and (2) is false. The paper also defends and explores the version of physicalism that results from this strategy. 1. One way to view the contemporary debate in philosophy of mind over physicalism is to see it as being organized around an inconsistent tetrad of theses. These are: (1) If physicalism is true, a priori physicalism is true. (2) A priori physicalism is false. (3) If physicalism is false, epiphenomenalism is true. (4) Epiphenomenalism is false. It is obvious of course that these theses are inconsistent: (1) and (2) entail that physicalism is false, while (3) and (4) entail that it is true. Barring ambiguity, therefore, one thing we know is that one of the theses is false. On the other hand, each of the theses has powerful considerations, or at least what seem initially to be powerful considerations, in its favor. (1) In support of (1) are considerations of supervenience, articulated most clearly in recent times by Frank Jackson and David Chalmers. A priori physicalism is a thesis with two parts. The first part, the physicalist part, is that the mental supervenes with metaphysical necessity on the physical. The second part, the a priori part, is that mental truths are a priori entailed by physical truths. Many philosophers hold that supervenience stands in need of justification or explanation; Jackson and Chalmers argue that the project of justifying or explaining supervenience just is the project of making it plausible that there is an a priori entailment of the mental by the physical. This suggests that the first part of"} {"_id": "5adb91899113b6cdea63f9c10baf41b154418c2f", "title": "A modular architecture for building automation systems", "text": "The deployment of building automation systems (BAS) makes it possible to increase comfort, safety, and security and to reduce operational cost. Today such systems typically follow a two-layered hierarchical approach. While control networks interconnect distributed sensors, actuators and controllers, a backbone provides the necessary infrastructure for management tasks hosted by configuration and management devices. In addition, devices interconnecting the control network with the backbone and the backbone with further networks (e.g., the Internet) play a strategic role. All BAS devices contributing to a particular functionality differ in their requirements for hardware. This paper discusses requirements for devices used in the building automation domain and presents our work in progress to assemble platforms with different purposes relying on a modular architecture."} {"_id": "d4e5e9978ca8449ec6c4e8a4ebac4cb677bc3f62", "title": "Exploring actor-object relationships for query-focused multi-document summarization", "text": "Most research on multi-document summarization explores methods that generate summaries based on queries regardless of the users’ preferences. We note that different users can generate somewhat different summaries on the basis of the same source data and query. This paper presents our study on how to exploit information regarding how users summarize their texts. Models of different users can be used either separately or in an ensemble-like fashion. Machine learning methods are explored in the construction of the individual models. However, we explore yet another hypothesis. 
We believe that the sentences selected into the summary should be coherent and supplement each other in their meaning. One method to model this relationship between sentences is by detecting actor–object relationships (AOR). The sentences that satisfy this relationship have their importance value enhanced. This paper combines an ensemble summarizing system and AOR to generate summaries. We have evaluated this method on DUC 2006 and DUC 2007 using the ROUGE measure. Experimental results show that the supervised method that exploits the ensemble summarizing system combined with AOR outperforms previous models when considering performance in query-based multi-document summarization tasks. Keywords: User-based summarization · Actor–object relationship · Multi-document summarization · Ensemble summarizing system · Training data construction"} {"_id": "e674619de6d94cdd78158e380ed5ad701f9c19e7", "title": "WhyCon: An Efficient, Marker-based Localization System", "text": "We present an open-source marker-based localization system intended as a low-cost, easy-to-deploy solution for aerial and swarm robotics. The main advantage of the presented method is its high computational efficiency, which allows its deployment on small robots with limited computational resources. Even on low-end computers, the core component of the system can detect and estimate 3D positions of hundreds of black and white markers at the maximum frame-rate of standard cameras. The method is robust to changing lighting conditions and achieves accuracy in the order of millimeters to centimeters. Due to its reliability, simplicity of use and availability as an open-source ROS module (http://purl.org/robotics/whycon), the system is now used in a number of aerial robotics projects where fast and precise relative localization is required."} {"_id": "ed73065187c8e2480fb9883a2b1f197e3e43cb9d", "title": "A Novel Feature Selection Based on One-Way ANOVA F-Test for E-Mail Spam Classification", "text": "Spam is commonly defined as unwanted e-mails, and it has become a global threat against e-mail users. Although the Support Vector Machine (SVM) has been commonly used in e-mail spam classification, the problem of the high dimensionality of the feature space, due to the massive number of e-mails and features, still exists. To address this limitation of SVM by reducing the computational complexity (efficiency) and enhancing the classification accuracy (effectiveness), this study applied a feature selection scheme based on the one-way ANOVA F-test statistic to determine the most important features contributing to e-mail spam classification. This feature selection based on the one-way ANOVA F-test is used to reduce the high dimensionality of the feature space before the classification process. The experiment of the proposed scheme was carried out using the well-known Spambase benchmark dataset to evaluate the feasibility of the proposed method. The comparison is achieved for different datasets, categorization algorithms and success measures. 
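A minimal sketch of the selection-then-classification pipeline this spam-classification abstract describes, using scikit-learn, whose f_classif score is exactly the one-way ANOVA F-test; the choice k=20 and the OpenML copy of Spambase are assumptions made for illustration.

```python
from sklearn.datasets import fetch_openml
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Assumption: the Spambase data is available on OpenML under this name.
X, y = fetch_openml("spambase", version=1, return_X_y=True, as_frame=False)

# f_classif computes the one-way ANOVA F-value of each feature against the
# class labels; keeping only the top-k features shrinks the feature space
# before the SVM sees it. k=20 is an illustrative, untuned choice.
fssvm = make_pipeline(
    SelectKBest(score_func=f_classif, k=20),
    SVC(kernel="rbf", C=1.0),
)

scores = cross_val_score(fssvm, X, y, cv=5)
print("FS-SVM mean accuracy: %.3f" % scores.mean())
```

Because the F-test scores each feature independently of the classifier, the selection step is cheap relative to SVM training, which is where the efficiency gain claimed above comes from.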
In addition, experimental results on the Spambase English dataset showed that the enhanced SVM (FSSVM) significantly outperforms SVM and many other recent spam classification methods in terms of computational complexity and dimensionality reduction."} {"_id": "e15d731fcdf98ec4a4150149a935ddf3ba5549c7", "title": "SEISMIC BEHAVIOR OF INTERMEDIATE BEAMS IN STEEL PLATE SHEAR WALLS", "text": "This paper presents some preliminary results of ongoing research whose objective is to investigate the seismic behavior of intermediate beams (i.e., the beams other than those at the roof and foundation levels) in multi-story steel plate shear walls (SPSWs). Of primary interest is the determination of the strength level needed to avoid the formation of in-span plastic hinges, a relevant practical issue that has not been considered in past investigations. To attain this objective, the seismic response of different SPSW models was analyzed by performing linear and nonlinear analyses. The intermediate beams of the SPSW models were designed to resist: (I) forces imposed by gravity loads only; (II) forces determined by the ASCE 7 load combinations; and (III) forces imposed by fully yielded plates. For comparison purposes, SPSW models designed according to the Canadian Standard CAN/CSA S16-01 were considered as well. It is concluded that intermediate beams designed according to criteria I and II are prone to in-span plastic hinges and to excessive plastic deformations. It was also found that, while in-span plastic hinges do not appear in the intermediate beams of the CAN/CSA S16-01 models, they do appear in the roof and foundation beams of these models when a collapse mechanism develops."} {"_id": "755321ceb4bd63a6bc7d786930b07b73de3c928d", "title": "MBBA: A Multi-Bandwidth Bus Arbiter for Hard Real-Time", "text": "Multi-core architectures are being increasingly used in embedded systems as they offer several advantages: improved hardware integration, low thermal dissipation and reduced energy consumption, while they make it possible to improve the computing power. In order to run real-time software on a multicore architecture, computing the Worst-Case Execution Time of every thread should be achievable. This notably involves bounding memory latencies by employing a predictable bus arbiter. However, state-of-the-art techniques prove ill-suited to scheduling unbalanced workloads in which some threads require more bus bandwidth than others. This paper proposes a new bus arbitration scheme that ensures that the shared bus latencies can be upper bounded. Compared to other schemes that make the bus latencies predictable, like the Round-Robin protocol, our approach defines several levels of bandwidth to meet requirements that may vary from one thread to another. Experimental results (WCET estimates) show that the worst-case bus latency is noticeably shortened, compared to Round-Robin, for the cores with highest priority that get the largest bandwidth. The relevance of the scheme is shown through an example workload composed of various benchmarks."} {"_id": "69adc4f4f6ac4015d9dea2dab71d3e1bade2725f", "title": "LoRa Scalability: A Simulation Model Based on Interference Measurements", "text": "LoRa is a long-range, low-power, low-bit-rate, single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. 
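As a baseline for the Aloha comparison this LoRa abstract goes on to make, the classical pure-Aloha collision model can be computed directly; a minimal sketch, with illustrative traffic parameters only.

```python
import math

def pure_aloha_loss(n_nodes, pkts_per_sec_per_node, airtime_s):
    """Classical pure-Aloha collision model: a packet survives only if no
    other packet starts within one airtime before or after it, so
    P(loss) = 1 - exp(-2 * G), with offered load G = n * lambda * T."""
    G = n_nodes * pkts_per_sec_per_node * airtime_s
    return 1.0 - math.exp(-2.0 * G)

# Illustrative numbers: 1000 nodes, each sending one 1-s-airtime packet
# roughly every 17 minutes (G = 1), giving ~86% loss, i.e. the same order
# as the ~90% pure-Aloha figure quoted in the abstract.
print(pure_aloha_loss(1000, 1 / 1000.0, 1.0))
```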
A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The amount of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data."} {"_id": "8ee4ed49fbf411fb25ba3119c2bc5c5f27a3e93e", "title": "Improvements in muscle symmetry in children with cerebral palsy after equine-assisted therapy (hippotherapy).", "text": "OBJECTIVE\nTo evaluate the effect of hippotherapy (physical therapy utilizing the movement of a horse) on muscle activity in children with spastic cerebral palsy.\n\n\nDESIGN\nPretest/post-test control group.\n\n\nSETTING/LOCATION\nTherapeutic Riding of Tucson (TROT), Tucson, AZ.\n\n\nSUBJECTS\nFifteen (15) children ranging from 4 to 12 years of age diagnosed with spastic cerebral palsy.\n\n\nINTERVENTIONS\nChildren meeting inclusion criteria were randomized to either 8 minutes of hippotherapy or 8 minutes astride a stationary barrel.\n\n\nOUTCOME MEASURES\nRemote surface electromyography (EMG) was used to measure muscle activity of the trunk and upper legs during sitting, standing, and walking tasks before and after each intervention.\n\n\nRESULTS\nAfter hippotherapy, significant improvement in symmetry of muscle activity was noted in those muscle groups displaying the highest asymmetry prior to hippotherapy. No significant change was noted after sitting astride a barrel.\n\n\nCONCLUSIONS\nEight minutes of hippotherapy, but not stationary sitting astride a barrel, resulted in improved symmetry in muscle activity in children with spastic cerebral palsy. These results suggest that the movement of the horse rather than passive stretching accounts for the measured improvements."} {"_id": "96e6a49cf2d1e89c981e3b0bf9fafdcb2dfc35de", "title": "Gain and bandwidth improvement of microstrip antenna using RIS and FPC resonator", "text": "Improvement in gain as well as bandwidth of microstrip antenna using reactive impedance surface and Fabry Perot cavity resonator is proposed. Suspended microstrip antenna (MSA) is designed on reactive impedance surface (RIS) backed FR4 substrate. RIS improves bandwidth, gain and efficiency of antenna due to reduced coupling between MSA and ground plane. This MSA backed RIS is placed in a FPC resonator for further improving the gain of MSA. 1\u00d71 to 4\u00d74 square patch arrays are designed on a dielectric layer which is energized by the radiations from RIS backed MSA. The PRS layer is placed at about \u03bb0/2 from ground. The S11 is < \u22129.5 dB over 5.725\u20136.4 GHz. 
Maximum gain of 15.8 dBi with gain variation < 2 dB is obtained using the 4×4 array. Besides this, cross-polarization and side-lobe levels are < −20 dB. The antenna offers a front-to-back ratio (FBR) > 20 dB and > 70% antenna efficiency. The measured results of the prototype structure are in agreement with the simulation results."} {"_id": "1a4134f85c9b5cb0697a3213308c148e7e53afee", "title": "Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders.", "text": "PURPOSE\nSpoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels.\n\n\nMETHOD\nThe communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years.\n\n\nRESULTS\nThe majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors.\n\n\nCONCLUSION\nThe spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth."} {"_id": "127f3b1106835ccee15d2cdf1e1099cf699247cd", "title": "THE DEMOCRATIZATION OF PERSONAL CONSUMER LOANS? DETERMINANTS OF SUCCESS IN ONLINE PEER-TO-PEER LENDING COMMUNITIES", "text": "Online peer-to-peer (P2P) lending communities enable individual consumers to borrow from, and lend money to, one another directly. We study the borrower- and loan listing-related determinants of funding success in an online P2P lending community by conceptualizing loan decision variables (loan amount, interest rate offered, duration of loan listing) as mediators between borrower attributes such as demographic characteristics, financial strength, and effort prior to making the request, and the likelihood of funding success. Borrower attributes are also treated as moderators of the effects of loan decision variables on funding success. The results of our empirical study, conducted using a database of 5,370 completed P2P loan transactions, provide support for the proposed conceptual framework. Although demographic attributes such as race and gender do affect likelihood of funding success, their effects are very small in comparison to effects of borrowers’ financial strength and their effort when listing and publicizing the loan. These results are substantially different from the documented discriminatory practices of US financial institutions, suggesting that individual lenders lend more fairly when their own investment money is at stake in P2P loans. 
The paper concludes with specific suggestions to borrowers to increase their chances of receiving funding in P2P lending communities, and a discussion of future research opportunities."} {"_id": "5839f91e41d1dfd7506bf6ec904b256939a8af12", "title": "Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis", "text": "The advent of the Social Web has enabled anyone with an Internet connection to easily create and share their ideas, opinions and content with millions of other people around the world. In pace with a global deluge of videos from billions of computers, smartphones, tablets, university projectors and security cameras, the amount of multimodal content on the Web has been growing exponentially, and with that comes the need for decoding such information into useful knowledge. In this paper, a multimodal affective data analysis framework is proposed to extract user opinion and emotions from video content. In particular, multiple kernel learning is used to combine visual, audio and textual modalities. The proposed framework outperforms the state-of-the-art model in multimodal sentiment analysis research with a margin of 10-13% and 3-5% accuracy on polarity detection and emotion recognition, respectively. The paper also proposes an extensive study on decision-level fusion."} {"_id": "b3848d438ae936b4cf4b244345c2cbb7d62461e8", "title": "Clash of the Contagions: Cooperation and Competition in Information Diffusion", "text": "In networks, contagions such as information, purchasing behaviors, and diseases, spread and diffuse from node to node over the edges of the network. Moreover, in real-world scenarios multiple contagions spread through the network simultaneously. These contagions not only propagate at the same time but they also interact and compete with each other as they spread over the network. While traditional empirical studies and models of diffusion consider individual contagions as independent and thus spreading in isolation, we study how different contagions interact with each other as they spread through the network. We develop a statistical model that allows for competition as well as cooperation of different contagions in information diffusion. Competing contagions decrease each other's probability of spreading, while cooperating contagions help each other in being adopted throughout the network. We evaluate our model on 18,000 contagions simultaneously spreading through the Twitter network. Our model learns how different contagions interact with each other and then uses these interactions to more accurately predict the diffusion of a contagion through the network. Moreover, the model also provides a compelling hypothesis for the principles that govern content interaction in information diffusion. Most importantly, we find very strong effects of interactions between contagions. Interactions cause a relative change in the spreading probability of a contagion by 71% on the average."} {"_id": "017372aec4b163ed6300499d40e316d2a0a7a9dd", "title": "Patterns of temporal variation in online media", "text": "Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored.\n We study temporal patterns associated with online content and how the content's popularity grows and fades over time. 
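The time-series clustering this abstract goes on to describe rests on a distance that is invariant to scaling and shifting. One common formulation of such a distance (a sketch consistent with the K-SC description below, not a verbatim reimplementation) minimizes ‖x − α·y_q‖/‖x‖ over integer shifts q, using the closed-form optimal scaling α for each shift.

```python
import numpy as np

def shift_scale_distance(x, y, max_shift=10):
    """Scale- and shift-invariant time-series distance in the spirit of the
    K-SC metric: d(x, y) = min over shift q and scale a of ||x - a*y_q|| / ||x||,
    where y_q is y shifted by q samples. For a fixed shift, the optimal
    scaling has the closed form a = (x . y_q) / ||y_q||^2."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    best = np.inf
    for q in range(-max_shift, max_shift + 1):
        y_q = np.roll(y, q)           # circular shift; zero-padding also works
        denom = y_q.dot(y_q)
        if denom == 0.0:
            continue
        a = x.dot(y_q) / denom        # optimal scaling for this shift
        d = np.linalg.norm(x - a * y_q) / np.linalg.norm(x)
        best = min(best, d)
    return best
```

Because the distance ignores overall volume and alignment, clustering with it groups time series by the shape of their attention curve, which is exactly what the six temporal shapes reported below are about.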
The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content, we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets.\n We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on the Web and broaden the understanding of the dynamics of human attention."} {"_id": "431551e2626973b150d982563a175ef80b56ce98", "title": "An annotation tool for automatically triangulating individuals' psychophysiological emotional reactions to digital media stimuli", "text": "Current affective user experience studies require laborious and time-consuming data analysis, as well as dedicated affective classification algorithms. Moreover, the high technical complexity and lack of general guidelines for developing these affective classification algorithms further limit the comparability of the obtained results. In this paper we target this issue by presenting a tool capable of automatically annotating and triangulating players' physiologically interpreted emotional reactions to in-game events. This tool was initially motivated by an experimental psychology study regarding the emotional habituation effects of audio-visual stimuli in digital games, and we expect it to contribute to similar future studies by providing a deeper and more objective analysis of the affective aspects of user experience. We also hope it will contribute towards the rapid implementation and accessibility of this type of study by open-sourcing it. Throughout this paper we describe the development and benefits presented by our tool, which include: enabling researchers to conduct objective a posteriori analyses without disturbing the gameplay experience, automating the annotation and emotional response identification process, and exporting formatted data for further analysis in third-party statistical software applications."} {"_id": "cded2e16ad67f690c37a7c686ae5431c69caddcb", "title": "A 3-D printed lightweight cavity-backed monocone antenna", "text": "A lightweight exponentially tapered cavity-backed mono-cone antenna made by 3-D printing and conductive paint coating is proposed. To metalize the 3-D printed structure, three layers of conductive paint are used. 
To verify the proposed fabrication method, the 3-D printed antenna was fabricated and its return loss and far-field radiation characteristics were analyzed."} {"_id": "b4f517f2f229a5f9d8b64bf7572c515c6ed414bd", "title": "Symmetrical Dense-Shortcut Deep Fully Convolutional Networks for Semantic Segmentation of Very-High-Resolution Remote Sensing Images", "text": "Semantic segmentation has emerged as a mainstream method in very-high-resolution remote sensing land-use/land-cover applications. In this paper, we first review the state-of-the-art semantic segmentation models in both computer vision and remote sensing fields. Subsequently, we introduce two semantic segmentation frameworks: SNFCN and SDFCN, both of which contain deep fully convolutional networks with shortcut blocks. We adopt an overlay strategy as the postprocessing method. Based on our frameworks, we conducted experiments on two online ISPRS datasets: Vaihingen and Potsdam. The results indicate that our frameworks achieve higher overall accuracy than the classic FCN-8s and SegNet models. In addition, our postprocessing method can increase the overall accuracy by about 1%–2% and help to eliminate “salt and pepper” phenomena and block effects."} {"_id": "51f300c4c0de859f8c0a1598c78e5e71eeef42fd", "title": "Generating Neural Networks with Neural Networks", "text": "Hypernetworks are neural networks that generate weights for another neural network. We formulate the hypernetwork training objective as a compromise between accuracy and diversity, where the diversity takes into account trivial symmetry transformations of the target network. We explain how this simple formulation generalizes variational inference. We use multi-layered perceptrons to form the mapping from the low dimensional input random vector to the high dimensional weight space, and demonstrate how to reduce the number of parameters in this mapping by parameter sharing. We perform experiments and show that the generated weights are diverse and lie on a non-trivial manifold."} {"_id": "b8a618199f53fcabc8b79dcd53930398bdc6bcdb", "title": "OPTIMIZING THE PERFORMANCE OF VEHICLE-TO-GRID (V2G) ENABLED BATTERY ELECTRIC VEHICLES THROUGH A SMART CHARGE SCHEDULING MODEL", "text": "A smart charge scheduling model is presented for potential (1) vehicle-to-grid (V2G) enabled battery electric vehicle (BEV) owners who are willing to participate in the grid ancillary services, and (2) grid operators. Unlike most V2G implementations, which are considered from the perspective of power grid systems, this model includes a communication network architecture for connecting system components that supports both BEV owners and grid operators in efficiently monitoring and managing the charging and ancillary service activities. This model maximizes the net profit to each BEV participant while simultaneously satisfying energy demands for his/her trips. The performance of BEVs using the scheduling model is validated by estimating optimal annual financial benefits under different scenarios. An analysis of popular BEV models revealed that one of the existing BEVs considered in the study can generate an annual regulation profit of $454, $394 and $318 when the average daily driving distance is 20 miles, 40 miles and 60 miles, respectively. All popular BEV models can completely compensate for the energy cost and generate a positive net profit, through the application of the scheduling model presented in this paper, with an annual driving distance of approximately 15,000 miles. 
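A toy illustration of the charge-scheduling idea in this V2G abstract (not the paper's optimization model): buy the energy needed for upcoming trips in the cheapest plugged-in hours. The 6.6-kW charger rate is an assumed Level-2 value, and regulation revenue is left out for brevity.

```python
import numpy as np

def greedy_charge_schedule(prices, hours_available, trip_energy_kwh,
                           charger_kw=6.6):
    """Toy smart-charging heuristic: satisfy the trip's energy demand by
    charging in the cheapest hours during which the BEV is plugged in.
    prices is a per-hour $/kWh vector; hours_available is a boolean mask."""
    prices = np.asarray(prices, dtype=float)
    plugged = np.flatnonzero(hours_available)
    schedule = np.zeros_like(prices)      # kWh bought in each hour
    need = float(trip_energy_kwh)
    for h in plugged[np.argsort(prices[plugged])]:  # cheapest hours first
        buy = min(charger_kw, need)       # at most one hour at charger rate
        schedule[h] = buy
        need -= buy
        if need <= 0:
            break
    return schedule  # total cost = (schedule * prices).sum()
```

The full model described above additionally sells regulation services in lucrative hours, which is what turns the schedule from cost minimization into net-profit maximization.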
Simulation analysis indicated that the extra load distribution from the optimized BEV charging operations was well balanced compared to the unmanaged BEV"} {"_id": "2b21b25e1fdcf9555dc0f26491c2bbc45142a282", "title": "Reading Between the Lines: Overcoming Data Sparsity for Accurate Classification of Lexical Relationships", "text": "The lexical semantic relationships between word pairs are key features for many NLP tasks. Most approaches for automatically classifying related word pairs are hindered by data sparsity because of their need to observe two words co-occurring in order to detect the lexical relation holding between them. Even when mining very large corpora, not every related word pair co-occurs. Using novel representations based on graphs and word embeddings, we present two systems that are able to predict relations between words, even when these are never found in the same sentence in a given corpus. In two experiments, we demonstrate superior performance of both approaches over the state of the art, achieving significant gains in recall."} {"_id": "7d5146e3ddb13a157799f0bdca1ac7d822b1fc82", "title": "MC-HOG Correlation Tracking with Saliency Proposal", "text": "Designing effective features and handling the model drift problem are two important aspects of online visual tracking. For feature representation, gradient and color features are most widely used, but how to effectively combine them for visual tracking is still an open problem. In this paper, we propose a rich feature descriptor, MC-HOG, by leveraging rich gradient information across multiple color channels or spaces. Then MC-HOG features are embedded into the correlation tracking framework to estimate the state of the target. For handling the model drift problem caused by occlusion or distracters, we propose saliency proposals as prior information to provide candidates and reduce background interference. In addition to saliency proposals, a ranking strategy is proposed to determine the importance of these proposals by exploiting the learnt appearance filter, historically preserved object samples and the distracting proposals. In this way, the proposed approach could effectively explore the color-gradient characteristics and alleviate the model drift problem. Extensive evaluations performed on the benchmark dataset show the superiority of the proposed method."} {"_id": "4a6a138799df4f05e3a37dc83a375ea546d1bdd0", "title": "Leveraging stereopsis for saliency analysis", "text": "Stereopsis provides an additional depth cue and plays an important role in the human vision system. This paper explores stereopsis for saliency analysis and presents two approaches to stereo saliency detection from stereoscopic images. The first approach computes stereo saliency based on the global disparity contrast in the input image. The second approach leverages domain knowledge in stereoscopic photography. A good stereoscopic image takes care of its disparity distribution to avoid 3D fatigue. Particularly, salient content tends to be positioned in the stereoscopic comfort zone to alleviate the vergence-accommodation conflict. Accordingly, our method computes stereo saliency of an image region based on the distance between its perceived location and the comfort zone. Moreover, we consider objects popping out from the screen salient as these objects tend to catch a viewer's attention. We build a stereo saliency analysis benchmark dataset that contains 1000 stereoscopic images with salient object masks. 
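A minimal sketch of the first approach in this stereo-saliency abstract, saliency from global disparity contrast; the region-size weighting is one plausible formulation, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def disparity_contrast_saliency(region_disparity, region_size):
    """Sketch of saliency from global disparity contrast: a region stands
    out when its disparity differs from that of (large) other regions.
    Weighting contrasts by the other region's size is an assumption about
    one plausible formulation of 'global' contrast."""
    d = np.asarray(region_disparity, dtype=float)
    w = np.asarray(region_size, dtype=float)
    n = len(d)
    sal = np.zeros(n)
    for i in range(n):
        others = np.arange(n) != i
        sal[i] = np.sum(w[others] * np.abs(d[i] - d[others]))
    return sal / sal.max()   # normalize to [0, 1]; assumes non-uniform input
```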
Our experiments on this dataset show that stereo saliency provides a useful complement to existing visual saliency analysis and our method can successfully detect salient content from images that are difficult for monocular saliency analysis methods."} {"_id": "2b76d1c1daeddaee11963cd9069f5eef940649aa", "title": "The line of action: an intuitive interface for expressive character posing", "text": "The line of action is a conceptual tool often used by cartoonists and illustrators to help make their figures more consistent and more dramatic. We often see the expression of characters, be it the dynamism of a superhero or the elegance of a fashion model, well captured and amplified by a single aesthetic line. Usually this line is laid down in early stages of the drawing and used to describe the body's principal shape. By focusing on this simple abstraction, the person drawing can quickly adjust and refine the overall pose of his or her character from a given viewpoint. In this paper, we propose a mathematical definition of the line of action (LOA), which allows us to automatically align a 3D virtual character to a user-specified LOA by solving an optimization problem. We generalize this framework to other types of lines found in the drawing literature, such as secondary lines used to place arms. Finally, we show a wide range of poses and animations that were rapidly created using our system."} {"_id": "eee9d92794872fd3eecf38f86cd26d605f3eede7", "title": "Multi-view point clouds registration and stitching based on SIFT feature", "text": "In order to solve multi-view point cloud registration in large scenes without feature markers, a new registration and stitching method based on 2D SIFT (Scale-Invariant Feature Transform) features is proposed. First, we use a texture mapping method to generate effective 2D texture images, and then extract and match SIFT features, obtaining accurate key points and the registration relationship between the effective texture images. Second, we map the SIFT key points and registration relationship back onto the 3D point cloud data, obtaining the key points and registration relationships of the multi-view point clouds, from which multi-view stitching can be achieved. Because our algorithm uses texture mapping to generate effective 2D texture images, it can eliminate interference from holes and ineffective points, as well as unnecessary mismatches. Our algorithm stitches using correctly extracted matching point pairs, avoiding the stepwise iteration of the ICP algorithm, so it is simple to compute, and its matching precision and matching efficiency are improved to some extent. We carried out experiments on two-view point clouds in two large indoor scenes; the experimental results verified the validity of our algorithm."} {"_id": "c7d39d8aa2bfb1a7952e2c6fbd5bed237478f1a0", "title": "An IOT based human healthcare system using Arduino uno board", "text": "Internet of Things (IOT) envisions a future of anything, anywhere, by anyone, at any time. Information and communication technologies are helping to create a revolution in digital technology. IOT is known for interconnecting various physical devices with networks. In IOT, various physical devices are embedded with different types of sensors and other devices to exchange data between them. An embedded system is a combination of software and hardware programmed to perform specific functions. These data can be accessed from any part of the world by making use of the cloud. 
This can be used for creating a digital world, smart homes, healthcare systems and real-life data exchange like smart banking. Though the Internet of Things emerged long ago, it has lately been becoming popular and gaining attention. In the healthcare industry, some hospitals have started using sensors embedded in beds to monitor patients' movements and other activities. This paper covers various IOT applications, the role of IOT in healthcare systems, and the challenges of using IOT in healthcare. It also introduces a secure surveillance monitoring system for reading and storing patients' details, using low power to transmit the data."} {"_id": "56debe08d1f3f0a149ef18b86fc2c6be593bdc03", "title": "Beyond Cybersecurity Awareness: Antecedents and Satisfaction", "text": "Organizations develop technical and procedural measures to protect information systems. Relying only on technology-based security solutions is not enough. Organizations must consider technical security solutions along with social, human, and organizational factors. The human element represents the employees (insiders) who use the information systems and other technology resources in their day-to-day operations. ISP awareness is essential to protect organizational information systems. This study adapts the Innovation Diffusion Theory to examine the antecedents of ISP awareness and its impact on the satisfaction with ISP and security practices. A sample of 236 employees in universities in the United States is collected to evaluate the research model. Results indicated that ISP quality, self-efficacy, and technology security awareness significantly impact ISP awareness. The current study presents significant contributions toward understanding the antecedents of ISP awareness and provides a starting point toward including the satisfaction aspect in the information security behavioral domain."} {"_id": "b329e8dc2f97ee604df17b6fa15484363ccb52ab", "title": "Vehicle detection and classification based on convolutional neural network", "text": "Deep learning has emerged as a hot topic due to extensive application and high accuracy. In this paper, this efficient method is used for vehicle detection and classification. We extract visual features from the activations of a deep convolutional network, together with large-scale sparse learning and other distinguishing features, in order to compare their accuracy. When compared to the leading methods in the challenging ImageNet dataset, our deep learning approach obtains highly competitive results. Experiments show that even with limited training data, the features extracted by the deep learning method outperform those generated by traditional approaches."} {"_id": "a697e3eb3c52d2c8833a178fbf99dd4f728bcddd", "title": "Neuroticism as a mediator of treatment response to SSRIs in major depressive disorder.", "text": "BACKGROUND\nSerotonin function has been implicated in both major depressive disorder and neuroticism. In the current investigation, we examined the hypothesis that any change in depression severity is mediated through the reduction of neuroticism, but only for those compounds which target serotonin receptors.\n\n\nMETHODS\nNinety-three outpatients in the midst of a major depressive episode received one of three antidepressant medications, classified into two broad types: selective serotonin reuptake inhibitors (SSRIs) and non-SSRIs (i.e. reversible monoamine oxidase inhibitors [RIMAs] and noradrenergic and dopaminergic reuptake blockers [NDMs]). 
Patients completed the Hamilton Rating Scale for Depression, Beck Depression Inventory II and Revised NEO Personality Inventory prior to and following approximately 16 weeks of treatment. Structural equation modeling was used to test two models: a mediation model, in which neuroticism change is the mechanism by which SSRIs exert a therapeutic effect upon depressive symptoms, and a complication model, in which neuroticism change is a mere epiphenomenon of depression reduction in response to SSRIs.\n\n\nRESULTS\nThe mediation model provided a good fit to the data; the complication model did not. Patients treated with SSRIs demonstrated greater neuroticism change than those treated with non-SSRIs, and greater neuroticism change was associated with greater depressive symptom change. These effects held for both self-reported and clinician-rated depressive symptom severity.\n\n\nLIMITATIONS\nReplication within a randomized control trial with multiple assessment periods is required.\n\n\nCONCLUSION\nNeuroticism mediates changes in depression in response to treatment with SSRIs, such that any treatment effect of SSRIs occurs through neuroticism reduction."} {"_id": "37eacf589bdf404b5ceabb91d1a154268f7eb567", "title": "Voice as sound: using non-verbal voice input for interactive control", "text": "We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model. First the user gives a direction, and then the system performs a certain operation. Our goal is to achieve more direct, immediate interaction, like using a button or joystick, by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance the traditional voice recognition approach."} {"_id": "4999437db9bae36fab1a8336445c0e6b3d1f70c0", "title": "Improving Diversity in Ranking using Absorbing Random Walks", "text": "We introduce a novel ranking algorithm called GRASSHOPPER, which ranks items with an emphasis on diversity. That is, the top items should be different from each other in order to have a broad coverage of the whole item set. Many natural language processing tasks can benefit from such diversity ranking. Our algorithm is based on random walks in an absorbing Markov chain. We turn ranked items into absorbing states, which effectively prevents redundant items from receiving a high rank. We demonstrate GRASSHOPPER’s effectiveness on extractive text summarization: our algorithm ranks between the 1st and 2nd systems on DUC 2004 Task 2; and on a social network analysis task that identifies movie stars of the world."} {"_id": "94c92186f6ab009160f8fd634fb5ddf4d0757a25", "title": "Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples", "text": "In recent years, deep learning has shown performance breakthroughs in many prominent problems. However, this comes with a major concern: deep networks have been found to be vulnerable to adversarial examples. Adversarial examples are modified inputs designed to cause the model to err. In the domains of images and speech, the modifications are untraceable by humans, but affect the model’s output. 
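In the binary domain, as this abstract goes on to explain, a simple variant of the attack injects or appends a crafted byte payload while leaving the program's behavior intact. A purely hypothetical sketch follows: the detector call and payload contents are stand-ins, and the search for effective payload bytes (e.g., gradient-guided selection) is omitted entirely.

```python
def append_payload(in_path: str, out_path: str, payload: bytes) -> None:
    """Hypothetical sketch: append a crafted byte payload past the end of
    the original file, a region many executable formats ignore at run time,
    so the malicious functionality is preserved while the raw-byte view
    seen by an end-to-end detector changes."""
    with open(in_path, "rb") as f:
        data = f.read()
    with open(out_path, "wb") as f:
        f.write(data + payload)

# score = detector(out_path)  # 'detector' is a stand-in for the CNN model,
#                             # re-scored after each candidate payload.
```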
In binary files, small modifications to the byte-code often lead to significant changes in their functionality/validity; therefore, generating adversarial examples is not straightforward. We introduce a novel approach for generating adversarial examples for discrete input sets, such as binaries. We modify malicious binaries by injecting a small sequence of bytes into the binary file. The modified files are detected as benign, while preserving their malicious functionality. We applied this approach to an end-to-end convolutional neural malware detector and demonstrate a high evasion rate. Moreover, we showed that our payload is transferable across different positions within the same file and across different files and file families."} {"_id": "7917b89d0780decf7201aad8db9ed3cb101b24d7", "title": "Intrusion Detection System using K-means, PSO with SVM Classifier: A Survey", "text": "Intrusion detection is a process of identifying attacks. The main aim of an IDS is to identify normal and intrusive activities. In recent years, many researchers have been using data mining techniques for building IDSs. Here we propose a new approach using data mining techniques such as SVM and particle swarm optimization to attain a higher detection rate. The proposed technique has these major steps: preprocessing, training using PSO, and clustering using K-means to generate different training subsets. Then, based on these training subsets, a vector for SVM classification is formed, and in the end, classification using PSO is performed to detect whether an intrusion has happened or not. This paper contains a summarization study and an identification of the drawbacks of formerly surveyed works. Keywords-Intrusion detection system; Neuro-fuzzy; Support Vector Machine (SVM); PSO; K-means"} {"_id": "8fbe4fd165b79e01e9b48f067cfe4d454f4a17e4", "title": "Artificial Life and Real Robots", "text": "The first part of this paper explores the general issues in using Artificial Life techniques to program actual mobile robots. In particular it explores the difficulties inherent in transferring programs evolved in a simulated environment to run on an actual robot. It examines the dual evolution of organism morphology and nervous systems in biology. It proposes techniques to capture some of the search space pruning that dual evolution offers in the domain of robot programming. It explores the relationship between robot morphology and program structure, and techniques for capturing regularities across this mapping. The second part of the paper is much more specific. It proposes techniques which could allow realistic explorations concerning the evolution of programs to control physically embodied mobile robots. In particular we introduce a new abstraction for behavior-based robot programming which is specially tailored to be used with genetic programming techniques. To compete with hand coding techniques it will be necessary to automatically evolve programs that are one to two orders of magnitude more complex than those previously reported in any domain. Considerable extensions to previously reported approaches to genetic programming are necessary in order to achieve this goal."} {"_id": "42fcaf880933b43c7ef1c0cf1b437e46cd9d0a9c", "title": "VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION", "text": "We study the problem of learning domain-invariant representations for time series data while transferring the complex temporal latent dependencies between domains. 
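Returning to the K-means-plus-SVM pipeline surveyed in the intrusion-detection abstract above: a minimal sketch that clusters the training data and trains one SVM per cluster. The PSO components of the surveyed approach are omitted, and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def kmeans_svm_ids(X_train, y_train, X_test, n_clusters=4):
    """Partition the training data with K-means, train one SVM per cluster,
    and classify each test point with the SVM of its nearest cluster.
    Assumes numpy arrays and non-empty clusters (fine for a sketch)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
    svms = {}
    for c in range(n_clusters):
        idx = km.labels_ == c
        if len(np.unique(y_train[idx])) > 1:      # need both classes to fit
            svms[c] = SVC(kernel="rbf").fit(X_train[idx], y_train[idx])
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, c in enumerate(km.predict(X_test)):
        if c in svms:
            preds[i] = svms[c].predict(X_test[i:i + 1])[0]
        else:                                     # single-class cluster:
            preds[i] = y_train[km.labels_ == c][0]  # reuse its only label
    return preds
```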
Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first model to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies improves our model’s ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches."} {"_id": "025fcdec650972d7baae1a5bb507e11ff2965075", "title": "Evaluation of Attention and Relaxation Levels of Archers in Shooting Process using Brain Wave Signal Analysis Algorithms", "text": "Archers' capability for attention and relaxation control during the shooting process was evaluated using EEG technology. Attention and meditation algorithms were used to represent the levels of mental concentration and relaxation. Elite, mid-level, and novice archers were tested for short- and long-distance shooting in the archery field. Single-channel EEG was recorded on the forehead (Fp1) during the shooting process, and attention and meditation levels were computed in real time. Four types of variations were defined based on the increasing and decreasing patterns of attention and meditation levels during the shooting process. Elite archers showed increases in both attention and relaxation, while mid-level archers showed increased attention but decreased relaxation. Elite archers also showed higher levels of attention at the release than mid-level and novice archers. Levels of attention and relaxation and their variation patterns were useful to categorize archers and to provide feedback in training."} {"_id": "4d48312f8602c1a36496e5342d654f6ac958f277", "title": "Analysis of the Electromagnetic Signature of Reinforced Concrete Structures for Nondestructive Evaluation of Corrosion Damage", "text": "This paper presents a nondestructive corrosion damage detection method for reinforced concrete structures based on the analysis of the electromagnetic signature of the steel rebar corrosion. The signature of the corrosion on the scattered field upon microwave illumination is first numerically analyzed. First-order quality factor parameters, the energy and the mean propagation delay, are proposed to quantify the corrosion amount in the structure. To validate the model, low-profile ultra-wide-band antennas (3-12 GHz) are fabricated and measured. Measurements on 12 reinforced concrete samples with induced corrosion are performed, using three different antenna setups. The experiments demonstrate a good correlation between an increase in the corrosion amount and a decrease in the energy and an increase in the time delay of the propagated signal."} {"_id": "424c2e2382a21159db5c5058c5b558f1f534fd37", "title": "A PatchMatch-Based Dense-Field Algorithm for Video Copy–Move Detection and Localization", "text": "We propose a new algorithm for the reliable detection and localization of video copy–move forgeries. Discovering well-crafted video copy–moves may be very difficult, especially when some uniform background is copied to occlude foreground objects. To reliably detect both additive and occlusive copy–moves, we use a dense-field approach, with invariant features that guarantee robustness to several postprocessing operations. 
To limit complexity, a suitable video-oriented version of PatchMatch is used, with a multiresolution search strategy, and a focus on volumes of interest. Performance assessment relies on a new dataset, designed ad hoc, with realistic copy–moves and a wide variety of challenging situations. Experimental results show the proposed method to detect and localize video copy–moves with good accuracy even in adverse conditions."} {"_id": "ae3b5ad72efeb6cdfdd38cb1627a67a12e106962", "title": "The Zachman Framework and the OMG's Model Driven Architecture", "text": "BUSINESS PROCESS TRENDS WHITEPAPER"} {"_id": "414b7477daa7838b6bbd7af659683a965691272c", "title": "Video summarisation: A conceptual framework and survey of the state of the art", "text": "Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying the research literature. 
The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user-based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and providing video summaries that have greater relevance to individual users."} {"_id": "0ee2da37957b9d5b3fcc7827c84ee326cd8cb0c3", "title": "I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience", "text": "Social media technologies collapse multiple audiences into single contexts, making it difficult for people to use the same techniques online that they do to handle multiplicity in face-to-face conversation. This article investigates how content producers navigate ‘imagined audiences’ on Twitter. We talked with participants who have different types of followings to understand their techniques, including targeting different audiences, concealing subjects, and maintaining authenticity. Some techniques of audience management resemble the practices of ‘micro-celebrity’ and personal branding, both forms of strategic self-commodification. Our model of the networked audience assumes a many-to-many communication through which individuals conceptualize an imagined audience evoked through their tweets."} {"_id": "216e92be827a9774130263754f2036aab0808484", "title": "The Benefits of Facebook "Friends:" Social Capital and College Students' Use of Online Social Network Sites", "text": "This study examines the relationship between use of Facebook, a popular online social network site, and the formation and maintenance of social capital. In addition to assessing bonding and bridging social capital, we explore a dimension of social capital that assesses one's ability to stay connected with members of a previously inhabited community, which we call maintained social capital. Regression analyses conducted on results from a survey of undergraduate students (N=286) suggest a strong association between use of Facebook and the three types of social capital, with the strongest relationship being to bridging social capital. In addition, Facebook usage was found to interact with measures of psychological well-being, suggesting that it might provide greater benefits for users experiencing low self-esteem and low life satisfaction."} {"_id": "8392a12398718f73df004d742d8edf60477c723a", "title": "Signals in Social Supernets", "text": "Social network sites (SNSs) provide a new way to organize and navigate an egocentric social network. Are they a fad, briefly popular but ultimately useless? 
Or are they the harbingers of a new and more powerful social world, where the ability to maintain an immense network\u2014a social \"supernet\"\u2014fundamentally changes the scale of human society? This article presents signaling theory as a conceptual framework with which to assess the transformative potential of SNSs and to guide their design to make them into more effective social tools. It shows how the costs associated with adding friends and evaluating profiles affect the reliability of users' self-presentation; examines strategies such as information fashion and risk-taking; and shows how these costs and strategies affect how the publicly-displayed social network aids the establishment of trust, identity, and cooperation\u2014the essential foundations for an expanded social world. Grooming, Gossip, and Online Friending Social ties provide many benefits, including companionship, access to information, and emotional and material support (Granovetter, 1983; Wellman, Garton, & Haythornthwaite, 1997; Wellman & Gulia, 1999). Increasing the number of ties increases access to these benefits, although time and cognitive constraints preclude indefinite expansions of one's personal network. Yet if maintaining ties were to become more temporally efficient and cognitively effective, it should be possible to increase the scale of one's social world\u2014to create a \"supernet\" with many more ties than is feasible without socially assistive tools. The question this article addresses is whether social network sites (SNSs) are a technology that can bring this about. In the wild, apes groom each other, picking through fur to remove parasitic bugs. This behavior helps with hygiene and is relaxing and pleasant for the recipient. Perhaps most importantly, it establishes social bonds: Apes who groom each other are more likely to help each other and not fight. Long grooming sessions are time consuming, however. Since the ape must also spend many hours finding food, sleeping, etc., it is clear that grooming can sustain only a limited number of relationships (Dunbar, 1996). In Grooming, Gossip, and the Evolution of Language, Robin Dunbar (1996) argued eloquently that in human societies, language, especially gossip, has taken over the social function of grooming. Instead of removing lice from each other's hair, people check in with friends and colleagues, ask how they are doing, and exchange a few words about common acquaintances, the news, or the local sports team (Dunbar, 1996, 2004). Language is much more efficient than physical grooming, for one can talk to several people at once. Language also helps people learn about cultural norms, evaluate others' behavior, and keep up with the news and shifting opinions of their surrounding community. It makes reputation possible\u2014individuals benefit from the experience of others in determining who is nice, who does good work, and who should be shunned for their dishonest ways. Using language to maintain ties and manage trust, people can form and manage more complex and extensive social networks. Communication technologies expand human social reach (Horrigan, Boase, Rainie, & Wellman, 2006). Email makes communication more efficient: Sending a message to numerous recipients is as easy as sending it to one, and its asynchrony means that there is little need to coordinate interaction.
Contact management tools, from paper Rolodexes to complex software systems, increase one's ability to remember large numbers of people (Whittaker, Jones, & Terveen, 2002). While these technologies provide some of the support an expanded social world needs, they alone are not sufficient. People need to be able to keep track of ever-changing relationships (Dunbar, 1996; Nardi, Whittaker, Isaacs, Creech, Johnson, & Hainsworth, 2002), to see people within the context of their social relationships (Raub & Weesie, 1990), and, most fundamentally, to know whom to trust (Bacharach & Gambetta, 2001; Good, 2000). Email and contact tools help maintain an expanded collection of individual relationships. Are social network sites the solution for placing these relationships into the greater social context? A page on MySpace, filled with flashing logos, obscure comments, poorly focused yet revealing photos, and laced with twinkling animated gifs, may not look to the casual observer like the harbinger of the next stage in human social evolution. But perhaps it is. SNSs locate people in the context of their acquaintances, provide a framework for maintaining an extensive array of friends and other contacts, and allow for the public display of interpersonal commentary (boyd & Ellison, this issue). At the same time, SNSs are still primitive; it is too early in their development to observe clear evidence that they have transformed society. The goal of this article is to present a theoretical framework with which to a) assess the transformative potential of SNSs and b) develop design guidelines for making them into more effective social tools. The foundation for this analysis is signaling theory, which models why some communications are reliably honest and others are not. The argument begins with an introduction to signaling theory. The next section uses this theory to examine how the fundamental structure of SNSs can bring greater trust and reliability to online self-presentations, how specific site design decisions enhance or weaken their trust-conferring ability, and how seemingly pointless or irrational behaviors, such as online fashion and risk taking, actually signal social information. The final section examines the transformative possibilities of social supernets\u2014not only whether SNSs may bring them about, but if so, in what ways they might change our society. An emphasis of this article is on ways of achieving reliable information about identity and affiliations. There are situations where ephemeral, hidden, or multiple identities are desirable. However, minimal online identity has been easy to create, while it is harder to establish more grounded identities in a fluid and nuanced way. A primary goal of this article is to understand how reliability is encouraged or enforced. For designers of future systems such knowledge is a tool, not a blueprint. Depending on the situation, they should choose the appropriate space between anonymous and whimsical and verified and trustworthy identities and communication."} {"_id": "2311a88803728111ba7bdb327b127ec3f54d282a", "title": "Beyond Bowling Together: SocioTechnical Capital", "text": "Social resources like trust and shared identity make it easier for people to work and play together. Such social resources are sometimes referred to as social capital. Thirty years ago, Americans built social capital as a side effect of participation in civic organizations and social activities, including bowling leagues. Today, they do so far less frequently (Putnam 2000).
HCI researchers and practitioners need to find new ways for people to interact that will generate even more social capital than bowling together does. A new theoretical construct, SocioTechnical Capital, provides a framework for generating and evaluating technology-mediated social relations."} {"_id": "5fc6817421038f21d355af7cee4114155d134f69", "title": "A Scientific Methodology for MIS Case Studies", "text": ""} {"_id": "179aa78159b0d3739e12e80d492b4235df7283c2", "title": "On multiple foreground cosegmentation", "text": "In this paper, we address a challenging image segmentation problem called multiple foreground cosegmentation (MFC), which concerns a realistic scenario in general Web-user photo sets where a finite number of K foregrounds of interest repeatedly occur across the entire photo set, but only an unknown subset of them is presented in each image. This contrasts with the classical cosegmentation problem dealt with by most existing algorithms, which assume a much simpler but less realistic setting where the same set of foregrounds recurs in every image. We propose a novel optimization method for MFC, which makes no assumption on foreground configurations and does not suffer from the aforementioned limitation, while still leveraging all the benefits of having co-occurring or (partially) recurring contents across images. Our method builds on an iterative scheme that alternates between a foreground modeling module and a region assignment module, both highly efficient and scalable. In particular, our approach is flexible enough to integrate any advanced region classifiers for foreground modeling, and our region assignment employs a combinatorial auction framework that enjoys several intuitively good properties such as optimality guarantee and linear complexity. We show the superior performance of our method in both segmentation quality and scalability in comparison with other state-of-the-art techniques on a newly introduced FlickrMFC dataset and the standard ImageNet dataset."} {"_id": "ca6a1718c47f05bf0ad966f2f5ee62ff5d71af11", "title": "Fourier Analysis for Demand Forecasting in a Fashion Company", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and two other techniques based on moving average and exponential smoothing was carried out on a set of 4-year historical sales data of a \u20ac60+ million turnover medium- to large-sized Italian fashion company, which operates in the women\u2019s textiles apparel and clothing sectors.
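The comparison described in the Fourier forecasting record above (whose final sentence continues below) can be illustrated with a minimal sketch. Everything here is invented for illustration: the synthetic weekly data, the period, the window and smoothing constant, and the particular FFT harmonic extrapolation, which is one plausible reading of "the Fourier method," not the paper's procedure.

```python
# A minimal sketch (not the paper's code): compare an FFT-based seasonal
# forecast against moving-average and exponential-smoothing baselines.
import numpy as np

rng = np.random.default_rng(0)
period = 52                                  # assume weekly data, yearly seasonality
t = np.arange(4 * period)                    # a "4-year" history, as in the abstract
sales = 100 + 20 * np.sin(2 * np.pi * t / period) + rng.normal(0, 5, t.size)
train, test = sales[:3 * period], sales[3 * period:]

def fft_forecast(x, horizon, k=3):
    """Keep the k strongest harmonics of the detrended series, extrapolate periodically."""
    coeffs = np.fft.rfft(x - x.mean())
    weak = np.argsort(np.abs(coeffs))[:-k]   # indices of all but the k largest
    coeffs[weak] = 0
    fitted = np.fft.irfft(coeffs, n=x.size) + x.mean()
    return fitted[np.arange(horizon) % x.size]

def moving_average(x, horizon, w=8):
    return np.full(horizon, x[-w:].mean())

def exp_smoothing(x, horizon, alpha=0.3):
    level = x[0]
    for v in x[1:]:
        level = alpha * v + (1 - alpha) * level
    return np.full(horizon, level)

for name, f in [("fft", fft_forecast), ("ma", moving_average), ("ses", exp_smoothing)]:
    mae = np.abs(f(train, test.size) - test).mean()
    print(f"{name}: MAE = {mae:.2f}")
```

On strongly seasonal series like this one, the harmonic fit tracks the seasonal swing that the two flat baselines cannot, which is the qualitative point the abstract makes.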
The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software."} {"_id": "17444ea2d3399870939fb9afea78157d5135ec4a", "title": "Multiview Hessian Regularization for Image Annotation", "text": "The rapid development of computer hardware and Internet technology makes large scale data dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) therefore received intensive attention in recent years and was successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian; however, it is observed that LR biases the classification function toward a constant function that possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR, each of which is obtained from a particular view of instances, and steers the classification function that varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR."} {"_id": "bff381287fcd0ae45cae0ad2fab03bceb04b1428", "title": "PKLot - A robust dataset for parking lot classification", "text": ""} {"_id": "9e92fc38823baa82e3af674d955d3a19a27c69b2", "title": "Irregularities in the Distribution of Primes and Twin Primes", "text": "The maxima and minima of $\langle L(x)\rangle - \pi(x)$, $\langle R(x)\rangle - \pi(x)$, and $\langle L_2(x)\rangle - \pi_2(x)$ in various intervals up to $x = 8 \times 10^{10}$ are tabulated. Here $\pi(x)$ and $\pi_2(x)$ are respectively the number of primes and twin primes not exceeding $x$, $L(x)$ is the logarithmic integral, $R(x)$ is Riemann's approximation to $\pi(x)$, and $L_2(x)$ is the Hardy-Littlewood approximation to $\pi_2(x)$. The computation of the sum of inverses of twin primes less than $8 \times 10^{10}$ gives a probable value $1.9021604 \pm 5 \times 10^{-7}$ for Brun's constant. 1. Approximations to $\pi(x)$. Let $P = \{2, 3, 5, \dots\}$ be the set of primes, and let $\pi(x)$ be the number of primes not exceeding $x$. Two well-known approximations to $\pi(x)$ for $x > 1$ are the logarithmic integral, $(1.1)\; L(x) = \int_0^x \frac{dt}{\log t}$, $(1.2)\; = \gamma + \log(\log x) + \sum_{k=1}^{\infty} \frac{(\log x)^k}{k \cdot k!}$, and Riemann's approximation, $(1.3)\; R(x) = \sum_{k=1}^{\infty} \frac{\mu(k)}{k} L(x^{1/k})$, $(1.4)\; = 1 + \sum_{k=1}^{\infty} \frac{(\log x)^k}{k!\, k\, \zeta(k+1)}$. Note that (1.1) differs by $L(2) = 1.04516378\ldots$
from the frequently used approximation $\int_2^x \frac{dt}{\log t}$. We are interested in the errors $(1.5)\; r_1(x) = \langle L(x)\rangle - \pi(x)$ and $(1.6)\; r_2(x) = \langle R(x)\rangle - \pi(x)$, where $\langle y\rangle$ denotes the integer closest to $y$ (i.e., the integer part of $y + 1/2$)."} {"_id": "d5beb0826660ebc0c3bb96d1dba3f65daed1c377", "title": "Low-Cost Dual-Band Circularly Polarized Switched-Beam Array for Global Navigation Satellite System", "text": "This paper presents the design and development of a dual-band switched-beam microstrip array for global navigation satellite system (GNSS) applications such as ocean reflectometry and remote sensing. In contrast to the traditional Butler matrix, a simple, low cost, broadband and low insertion loss beam switching feed network is proposed, designed and integrated with a dual band antenna array to achieve continuous beam coverage of \u00b125\u00b0 around the boresight at the L1 (1.575 GHz) and L2 (1.227 GHz) bands. To reduce the cost, microstrip lines and PIN diode based switches are employed. The proposed switched-beam network is then integrated with dual-band step-shorted annular ring (S-SAR) antenna elements in order to produce a fully integrated compact-sized switched-beam array. Antenna simulation results show that the switched-beam array achieves a maximum gain of 12 dBic at the L1 band and 10 dBic at the L2 band. In order to validate the concept, a scaled down prototype of the simulated design is fabricated and measured. The prototype operates at twice the original design frequency, i.e., 3.15 GHz and 2.454 GHz, and the measured results confirm that the integrated array achieves beam switching and good performance at both bands."} {"_id": "6d1eb878e1d2530c197c962dd4a61d2aba015261", "title": "A faceted approach to conceptualizing tasks in information seeking", "text": "The nature of the task that leads a person to engage in information interaction, as well as that of information seeking and searching tasks, has been shown to influence individuals\u2019 information behavior. Classifying tasks in a domain has been viewed as a departure point of studies on the relationship between tasks and human information behavior. However, previous task classification schemes either classify tasks with respect to the requirements of specific studies or merely classify a certain category of task. Such approaches do not lead to a holistic picture of task since a task involves different aspects. Therefore, the present study aims to develop a faceted classification of task, which can incorporate work tasks and information search tasks into the same classification scheme and characterize tasks in such a way as to help people make predictions of information behavior. For this purpose, previous task classification schemes and their underlying facets are reviewed and discussed. Analysis identifies essential facets and categorizes them into Generic facets of task and Common attributes of task. Generic facets of task include Source of task, Task doer, Time, Action, Product, and Goal. Common attributes of task include Task characteristics and User\u2019s perception of task. Corresponding sub-facets and values are identified as well.
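The facet scheme just listed maps naturally onto a small data structure. A minimal sketch follows (the record resumes below): the facet names come from the abstract, while the concrete values and the example task are invented for illustration.

```python
# A minimal sketch of the abstract's faceted task classification.
# Facet names follow the abstract; values and the example are invented.
from dataclasses import dataclass, field

@dataclass
class Task:
    # Generic facets of task (from the abstract)
    source: str          # Source of task
    doer: str            # Task doer
    time: str            # Time
    action: str          # Action
    product: str         # Product
    goal: str            # Goal
    # Common attributes of task (from the abstract)
    characteristics: dict = field(default_factory=dict)  # Task characteristics
    perception: dict = field(default_factory=dict)       # User's perception of task

work_task = Task(
    source="assigned", doer="individual", time="one-off",
    action="write report", product="document", goal="inform management",
    characteristics={"complexity": "high"},
    perception={"difficulty": "moderate"},
)
print(work_task)
```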
In this fashion, a faceted classification of task is established which could be used to describe users\u2019 work tasks and information search tasks. This faceted classification provides a framework to further explore the relationships among work tasks, search tasks, and interactive information retrieval and advance adaptive IR systems design. Published by Elsevier Ltd."} {"_id": "80768c58c3e298c17cb39164715cb968cbe7af31", "title": "RCD: Rapid Close to Deadline Scheduling for datacenter networks", "text": "Datacenter-based Cloud Computing services provide a flexible, scalable and yet economical infrastructure to host online services such as multimedia streaming, email and bulk storage. Many such services perform geo-replication to provide necessary quality of service and reliability to users resulting in frequent large inter-datacenter transfers. In order to meet tenant service level agreements (SLAs), these transfers have to be completed prior to a deadline. In addition, WAN resources are quite scarce and costly, meaning they should be fully utilized. Several recently proposed schemes, such as B4 [1], TEMPUS [2], and SWAN [3] have focused on improving the utilization of inter-datacenter transfers through centralized scheduling, however, they fail to provide a mechanism to guarantee that admitted requests meet their deadlines. Also, in a recent study, authors propose Amoeba [4], a system that allows tenants to define deadlines and guarantees that the specified deadlines are met, however, to admit new traffic, the proposed system has to modify the allocation of already admitted transfers. In this paper, we propose Rapid Close to Deadline Scheduling (RCD), a close to deadline traffic allocation technique that is fast and efficient. Through simulations, we show that RCD is up to 15 times faster than Amoeba, provides high link utilization along with deadline guarantees, and is able to make quick decisions on whether a new request can be fully satisfied before its deadline."} {"_id": "f8f02450166292caf154e73fbf307e725af0a06c", "title": "A Lyapunov approach to the stability of fractional differential equations", "text": "Lyapunov stability of fractional differential equations is addressed in this paper. The key concept is the frequency distributed fractional integrator model, which is the basis for a global state space model of FDEs. Two approaches are presented: the direct one is intuitive but it leads to a large dimension parametric problem while the indirect one, which is based on the continuous frequency distribution, leads to a parsimonious solution. Two examples, with linear and nonlinear FDEs, exhibit the main features of this new methodology. \u00a9 2010 Elsevier B.V. All rights reserved."} {"_id": "c20875094dc9c56b83ca22b83cd3eb950c657ad5", "title": "Semi-Supervised Recognition of Sarcasm in Twitter and Amazon", "text": "Sarcasm is a form of speech act in which the speakers convey their message in an implicit way. The inherently ambiguous nature of sarcasm sometimes makes it hard even for humans to decide whether an utterance is sarcastic or not. Recognition of sarcasm can benefit many sentiment analysis NLP applications, such as review summarization, dialogue systems and review ranking systems. In this paper we experiment with semi-supervised sarcasm identification on two very different data sets: a collection of 5.9 million tweets collected from Twitter, and a collection of 66000 product reviews from Amazon.
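Returning to the RCD record above: one toy reading of "close to deadline" allocation is to pack each admitted transfer into the timeslots nearest its deadline, rejecting requests that do not fit rather than reshuffling earlier admissions. The slot granularity, capacities, and admission test below are assumptions for illustration, not the paper's algorithm.

```python
# Toy close-to-deadline allocator, a loose reading of the RCD abstract above.
# Slot granularity, capacity, and the admission test are assumptions.
def allocate(requests, capacity, horizon):
    """requests: list of (volume, deadline_slot); returns one schedule per request."""
    free = [capacity] * horizon              # residual capacity per timeslot
    schedules = []
    for volume, deadline in requests:
        if sum(free[:deadline]) < volume:    # admission test: enough room before deadline?
            schedules.append(None)           # reject without reshuffling earlier transfers
            continue
        plan, remaining = {}, volume
        for slot in range(deadline - 1, -1, -1):  # fill the slots nearest the deadline first
            take = min(free[slot], remaining)
            if take > 0:
                plan[slot] = take
                free[slot] -= take
                remaining -= take
            if remaining == 0:
                break
        schedules.append(plan)
    return schedules

print(allocate([(5, 3), (4, 4), (6, 2)], capacity=3, horizon=5))
```

Packing near the deadline leaves the earliest slots free for future requests, which is one intuition for why such a scheme can admit quickly without re-planning existing transfers.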
Using the Mechanical Turk we created a gold standard sample in which each sentence was tagged by 3 annotators, obtaining F-scores of 0.78 on the product reviews dataset and 0.83 on the Twitter dataset. We discuss the differences between the datasets and how the algorithm uses them (e.g., for the Amazon dataset the algorithm makes use of structured information). We also discuss the utility of Twitter #sarcasm hashtags for the task."} {"_id": "0a8a6dc8f57ed11c0cda81023c30e4deb1f48f9d", "title": "Scheduling and locking in multiprocessor real-time operating systems", "text": "With the widespread adoption of multicore architectures, multiprocessors are now a standard deployment platform for (soft) real-time applications. This dissertation addresses two questions fundamental to the design of multicore-ready real-time operating systems: (1) Which scheduling policies offer the greatest flexibility in satisfying temporal constraints; and (2) which locking algorithms should be used to avoid unpredictable delays? With regard to Question 1, LITMUS^RT, a real-time extension of the Linux kernel, is presented and its design is discussed in detail. Notably, LITMUS^RT implements link-based scheduling, a novel approach to controlling blocking due to non-preemptive sections. Each implemented scheduler (22 configurations in total) is evaluated under consideration of overheads on a 24-core Intel Xeon platform. The experiments show that partitioned earliest-deadline first (EDF) scheduling is generally preferable in a hard real-time setting, whereas global and clustered EDF scheduling are effective in a soft real-time setting. With regard to Question 2, real-time locking protocols are required to ensure that the maximum delay due to priority inversion can be bounded a priori. Several spinlock- and semaphore-based multiprocessor real-time locking protocols for mutual exclusion (mutex), reader-writer (RW) exclusion, and k-exclusion are proposed and analyzed. A new category of RW locks suited to worst-case analysis, termed phase-fair locks, is proposed and three efficient phase-fair spinlock implementations are provided (one with few atomic operations, one with low space requirements, and one with constant RMR complexity). Maximum priority-inversion blocking is proposed as a natural complexity measure for semaphore protocols. It is shown that there are two classes of schedulability analysis, namely suspension-oblivious and suspension-aware analysis, that yield two different lower bounds on blocking. Five"} {"_id": "48a165c4b0fd4081c338ac990612d7fa30304f7f", "title": "Creativity support tools: accelerating discovery and innovation", "text": "How can designers of programming interfaces, interactive tools, and rich social environments enable more people to be more creative more often?"} {"_id": "872085b4f62b02913fdab640547741d07fac3cbb", "title": "Miniaturized rat-race coupler with bandpass response and good stopband rejection", "text": "Coupled-resonator network is applied to the design of a rat-race coupler with bandpass response in this paper. Net-type resonators are used to reduce the circuit size and control the harmonics. As a result, the proposed coupler exhibits not only a compact size but also good stopband rejection. The proposed coupler occupies only one sixth the area of the conventional design, while the rejection level in the stopband is better than 30 dB up to 4.3 f0.
The measured results are in good agreement with the simulated predictions."} {"_id": "0fb1280aa1f2d066512d09e9ea3cfc724e9929cc", "title": "BinMatch: A Semantics-Based Hybrid Approach on Binary Code Clone Analysis", "text": "Binary code clone analysis is an important technique which has a wide range of applications in software engineering (e.g., plagiarism detection, bug detection). The main challenge of the topic lies in the semantics-equivalent code transformation (e.g., optimization, obfuscation) which would alter representations of binary code tremendously. Another challenge is the trade-off between detection accuracy and coverage. Unfortunately, existing techniques still rely on semantics-less code features which are susceptible to the code transformation. Besides, they adopt merely either a static or a dynamic approach to detect binary code clones, which cannot achieve high accuracy and coverage simultaneously. In this paper, we propose a semantics-based hybrid approach to detect binary clone functions. We execute a template binary function with its test cases, and emulate the execution of every target function for clone comparison with the runtime information migrated from that template function. The semantic signatures are extracted during the execution of the template function and emulation of the target function. Lastly, a similarity score is calculated from their signatures to measure their likeness. We implement the approach in a prototype system designated as BinMatch which analyzes IA-32 binary code on the Linux platform. We evaluate BinMatch with eight real-world projects compiled with different compilation configurations and commonly-used obfuscation methods, totally performing over 100 million pairs of function comparison. The experimental results show that BinMatch is robust to the semantics-equivalent code transformation. Besides, it not only covers all target functions for clone analysis, but also improves the detection accuracy compared to the state-of-the-art solutions."} {"_id": "f4ec256be284ff40316f27fa3b07531f407ce9fe", "title": "Broadband antenna array aperture made of tightly couple printed dipoles", "text": "This study reports a novel tight-coupled dipole array antenna to operate with up to six-octave bandwidth and 60 degrees scanning. The array was designed through full-wave EM simulations by employing the current sheet array radiator concept that was advanced by a novel integrated feed network. Several prototypes of planar and conformal arrays across 0.3\u201320 GHz have been fabricated and tested with good agreement observed between all predicted and measured terminal and radiation features. The exemplified arrays have been designed for 1.2\u20136 GHz with relative radiator height 0.12 of maximum operational wavelength."} {"_id": "cd45fe6a1d77e8e17e94f2abb454bcdfe55413bf", "title": "Swing-up and stabilization of a cart-pendulum system under restricted cart track length", "text": "This paper describes the swing-up and stabilization of a cart\u2013pendulum system with a restricted cart track length and restricted control force using generalized energy control methods. Starting from a pendant position, the pendulum is swung up to the upright unstable equilibrium configuration using energy control principles. An \u201cenergy well\u201d is built within the cart track to prevent the cart from going outside the limited length. When sufficient energy is acquired by the pendulum, it goes into a \u201ccruise\u201d mode in which the acquired energy is maintained.
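The swing-up logic in the cart-pendulum record above builds on a standard energy-control rule; a sketch of that generic rule follows (the record's remaining sentences continue below). All parameters are invented, and the paper's specific contributions, the "energy well" and the cruise mode, are not reproduced here.

```python
# Generic energy-pumping swing-up rule (not the paper's full scheme).
# theta is the pendulum angle measured from upright; u is the commanded
# cart input. Mass, length, gain, and saturation values are assumed.
import math

m, l, g = 0.2, 0.5, 9.81      # pendulum mass, length, gravity (assumed values)
k, u_max = 1.5, 10.0          # energy gain and input saturation (assumed)

def swing_up_input(theta, omega):
    # Pendulum energy, normalized so E = 0 at the upright equilibrium
    # and E < 0 while hanging below it.
    E = 0.5 * m * l**2 * omega**2 + m * g * l * (math.cos(theta) - 1.0)
    # Classic energy-control rule: this choice makes dE/dt >= 0 while E < 0,
    # pumping energy toward the upright value.
    u = -k * E * omega * math.cos(theta)
    return max(-u_max, min(u_max, u))       # respect the restricted control force

print(swing_up_input(theta=math.pi - 0.3, omega=1.5))
```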
Finally, when the pendulum is close to the upright configuration, a stabilizing controller is activated around a linear zone about the upright configuration. The proposed scheme has worked well both in simulation and a practical setup and the conditions for stability have been derived using the multiple Lyapunov functions approach. \u00a9 2002 Elsevier Science B.V. All rights reserved."} {"_id": "5332dbbdba1f4918b0fe164036dd76623848e66e", "title": "Depression as a risk factor for coronary artery disease: evidence, mechanisms, and treatment.", "text": "OBJECTIVE\nThe present paper reviews the evidence that depression is a risk factor for the development and progression of coronary artery disease (CAD).\n\n\nMETHODS\nMEDLINE searches and reviews of bibliographies were used to identify relevant articles. Articles were clustered by theme: depression as a risk factor, biobehavioral mechanisms, and treatment outcome studies.\n\n\nRESULTS\nDepression confers a relative risk between 1.5 and 2.0 for the onset of CAD in healthy individuals, whereas depression in patients with existing CAD confers a relative risk between 1.5 and 2.5 for cardiac morbidity and mortality. A number of plausible biobehavioral mechanisms linking depression and CAD have been identified, including treatment adherence, lifestyle factors, traditional risk factors, alterations in autonomic nervous system (ANS) and hypothalamic pituitary adrenal (HPA) axis functioning, platelet activation, and inflammation.\n\n\nCONCLUSION\nThere is substantial evidence for a relationship between depression and adverse clinical outcomes. However, despite the availability of effective therapies for depression, there is a paucity of data to support the efficacy of these interventions to improve clinical outcomes for depressed CAD patients. Randomized clinical trials are needed to further evaluate the value of treating depression in CAD patients to improve survival and reduce morbidity."} {"_id": "82f348ccf3a9eaceb1b88cc6278f4c422ea6b95f", "title": "Risk Assessment Uncertainties in Cybersecurity Investments", "text": "When undertaking cybersecurity risk assessments, it is important to be able to assign numeric values to metrics to compute the final expected loss that represents the risk that an organization is exposed to due to cyber threats. Even if risk assessment is motivated by real-world observations and data, there is always a high chance of assigning inaccurate values due to different uncertainties involved (e.g., evolving threat landscape, human errors) and the natural difficulty of quantifying risk. Existing models empower organizations to compute optimal cybersecurity strategies given their financial constraints, i.e., available cybersecurity budget. Further, a general game-theoretic model with uncertain payoffs (probability-distribution-valued payoffs) shows that such uncertainty can be incorporated in the game-theoretic model by allowing payoffs to be random. This paper extends previous work in the field to tackle uncertainties in risk assessment that affect cybersecurity investments. The findings from simulated examples indicate that although uncertainties in cybersecurity risk assessment lead, on average, to different cybersecurity strategies, they do not play a significant role in the final expected loss of the organization when utilising a game-theoretic model and methodology to derive these strategies. The model determines robust defending strategies even when knowledge regarding risk assessment values is not accurate.
As a result, it is possible to show that the cybersecurity investments\u2019 tool is capable of providing effective decision support."} {"_id": "f31a07a7db9b1796a04e781e550a51f2453649e9", "title": "Wideband and Low Sidelobe Slot Antenna Fed by Series-Fed Printed Array", "text": "A combination of slots and series-fed patch antenna array is introduced leading to an array antenna with wide bandwidth, low sidelobe level (SLL), and high front-to-back ratio (F/B). Three structures are analyzed. The first, the reference structure, is the conventional series-fed microstrip antenna array that operates at 16.26 GHz with a bandwidth of 3%, SLL of -24 dB, and F/B of 23 dB. The second is similar to the reference structure, but a large slot that covers the patch array is removed from the ground plane. This results in a wide operating bandwidth of over 72% for , with a 3 dB gain bandwidth of over 15.9%. The SLL of this structure is improved to more than at 16.26 GHz, and a stable SLL over the 15.5-16.4 GHz band is also exhibited. This structure has a bi-directional radiation pattern. To make the pattern uni-directional and increase the F/B, a third structure is analyzed. In this structure, rather than one single large slot, an array of slots is used and a reflector is placed above the series-fed patch array. This increases the F/B to 40 dB. The simulated and measured results are presented and discussed."} {"_id": "3fde79fafc9edbe0ad4282104e44c468a2bf1af4", "title": "Active Learning for Real Time Detection of Polyps in Videocolonoscopy", "text": ""} {"_id": "2934206d916a04540fb3d437628fe44aea15ef0a", "title": "The challenge of simultaneous object detection and pose estimation: A comparative study", "text": "Detecting objects and estimating their pose remains one of the major challenges of the computer vision research community. There exists a compromise between localizing the objects and estimating their viewpoints. The detector ideally needs to be view-invariant, while the pose estimation process should be able to generalize towards the category-level. This work is an exploration of using deep learning models for solving both problems simultaneously. For doing so, we propose three novel deep learning architectures, which are able to perform a joint detection and pose estimation, where we gradually decouple the two tasks. We also investigate whether the pose estimation problem should be solved as a classification or regression problem, which is still an open question in the computer vision community. We detail a comparative analysis of all our solutions and the methods that currently define the state of the art for this problem. We use PASCAL3D+ and ObjectNet3D datasets to present the thorough experimental evaluation and main results.
With the proposed models we achieve the state-of-the-art performance in both datasets."} {"_id": "0be360a2964c4bb91aaad0cc6d1baa6639746028", "title": "Automatic recognition and analysis of human faces and facial expressions: a survey", "text": "Humans detect and identify faces in a scene with little or no effort. However, building an automated system that accomplishes this task is very difficult. There are several related subproblems: detection of a pattern as a face, identification of the face, analysis of facial expressions, and classification based on physical features of the face. A system that performs these operations will find many applications, e.g., criminal identification, authentication in secure systems, etc. Most of the work to date has been in identification. This paper surveys the past work in solving these problems. The capability of the human visual system with respect to these problems is also discussed. It is meant to serve as a guide for an automated system. Some new approaches to these problems are also briefly discussed. Keywords: face detection; face identification; facial expressions; classification; facial features"} {"_id": "4d899ebf7a3004fe550842830f06b4600d9c6230", "title": "The theory of signal detectability", "text": "The problem of signal detectability treated in this paper is the following: Suppose an observer is given a voltage varying with time during a prescribed observation interval and is asked to decide whether its source is noise or is signal plus noise. What method should the observer use to make this decision, and what receiver is a realization of that method? After giving a discussion of theoretical aspects of this problem, the paper presents specific derivations of the optimum receiver for a number of cases of practical interest. The receiver whose output is the value of the likelihood ratio of the input voltage over the observation interval is the answer to the second question no matter which of the various optimum methods current in the literature is employed, including the Neyman-Pearson observer, Siegert's ideal observer, and Woodward and Davies' \u201cobserver.\u201d An optimum observer required to give a yes or no answer simply chooses an operating level and concludes that the receiver input arose from signal plus noise only when this level is exceeded by the output of his likelihood ratio receiver. Associated with each such operating level are conditional probabilities that the answer is a false alarm and the conditional probability of detection. Graphs of these quantities, called receiver operating characteristic, or ROC, curves are convenient for evaluating a receiver. If the detection problem is changed by varying, for example, the signal power, then a family of ROC curves is generated. Such things as betting curves can easily be obtained from such a family. The operating level to be used in a particular situation must be chosen by the observer. His choice will depend on such factors as the permissible false alarm rate, a priori probabilities, and relative importance of errors. With these theoretical aspects serving as an introduction, attention is devoted to the derivation of explicit formulas for likelihood ratio, and for probability of detection and probability of false alarm, for a number of particular cases. Stationary, band-limited, white Gaussian noise is assumed. The seven special cases which are presented were chosen from the simplest problems in signal detection which closely represent practical situations.
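The likelihood-ratio receiver and ROC curves described in the signal detectability record above (which concludes below) can be illustrated numerically: for a known signal in white Gaussian noise, the log-likelihood ratio is monotone in the correlation of the input with the signal, so thresholding a correlator is the optimum test. The signal shape, noise level, trial counts, and operating levels below are all invented.

```python
# Monte Carlo sketch of a likelihood-ratio (correlator) receiver and its ROC points.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 64, 1.0, 20000
s = np.sin(2 * np.pi * 5 * np.arange(n) / n)      # known signal (invented)

noise_only = rng.normal(0, sigma, (trials, n))
signal_plus = s + rng.normal(0, sigma, (trials, n))

stat_h0 = noise_only @ s          # correlator output under noise only
stat_h1 = signal_plus @ s         # correlator output under signal plus noise

# Each operating level yields one (P_fa, P_d) point on the ROC curve.
for level in np.quantile(stat_h0, [0.9, 0.99, 0.999]):
    p_fa = (stat_h0 > level).mean()
    p_d = (stat_h1 > level).mean()
    print(f"threshold {level:6.2f}: P_fa = {p_fa:.3f}, P_d = {p_d:.3f}")
```

Sweeping the threshold traces the full ROC curve, and varying the signal power generates the family of curves the abstract mentions.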
Two of the cases form a basis for the best available approximation to the important problem of finding probability of detection when the starting time of the signal, signal frequency, or both, are unknown. Furthermore, in these two cases uncertainty in the signal can be varied, and a quantitative relationship between uncertainty and ability to detect signals is presented for these two rather general cases. The variety of examples presented should serve to suggest methods for attacking other simple signal detection problems and to give insight into problems too complicated to allow a direct solution."} {"_id": "5140f1dc83e562de0eb409385480b799e9549d54", "title": "Textural Features for Image Classification", "text": "Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications."} {"_id": "6513888c5ef473bdbb3167c7b52f0985be071f7a", "title": "Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression", "text": "A three-layered neural network is described for transforming two-dimensional discrete signals into generalized nonorthogonal 2-D \u201cGabor\u201d representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations [10], [15], which provide a complete image description in terms of locally windowed 2-D spectral coordinates embedded within global 2-D spatial coordinates. Because intrinsic redundancies within images are extracted, the resulting image codes can be very compact. However, these conjoint transforms are inherently difficult to compute because the elementary expansion functions are not orthogonal. One orthogonalizing approach developed for 1-D signals by Bastiaans [5], based on biorthonormal expansions, is restricted by constraints on the conjoint sampling rates and invariance of the windowing function, as well as by the fact that the auxiliary orthogonalizing functions are nonlocal infinite series. In the present \u201cneural network\u201d approach, based upon interlaminar interactions involving two layers with fixed weights and one layer with adjustable weights, the network finds coefficients for complete conjoint 2-D Gabor transforms without these restrictive conditions.
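The elementary expansion functions of the 2-D Gabor representation discussed above are Gaussian-windowed complex exponentials; a minimal construction of one such function follows (the record continues below). Parameter choices are illustrative only, and the paper's actual contribution, the coefficient-finding network, is not reproduced here.

```python
# Build one elementary 2-D Gabor function: a Gaussian window times a
# complex exponential carrier. Parameters are illustrative, not from the paper.
import numpy as np

def gabor_2d(size, sigma, freq, theta):
    """(size+1) x (size+1) complex Gabor; freq in cycles/pixel, theta = orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier oscillates along orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # Gaussian window
    carrier = np.exp(2j * np.pi * freq * xr)             # complex exponential
    return envelope * carrier

g = gabor_2d(size=32, sigma=4.0, freq=0.15, theta=np.pi / 4)
print(g.shape, np.abs(g).max())   # envelope peaks at the window center
```

Because shifted and modulated copies of this template overlap, the resulting family is nonorthogonal, which is exactly the computational difficulty the abstract's network addresses.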
For arbitrary noncomplete transforms, in which the coefficients might be interpreted simply as signifying the presence of certain features in the image, the network finds optimal coefficients in the sense of minimal mean-squared-error in representing the image. In one algebraically complete scheme permitting exact reconstruction, the network finds expansion coefficients that reduce entropy from 7.57 in the pixel representation to 2.55 in the complete 2-D Gabor transform. In \u201cwavelet\u201d expansions based on a biologically inspired log-polar ensemble of dilations, rotations, and translations of a single underlying 2-D Gabor wavelet template, image compression is illustrated with ratios up to 20:1. Also demonstrated is image segmentation based on the clustering of coefficients in the complete 2-D Gabor transform. This coefficient-finding network for implementing useful nonorthogonal image transforms may also have neuroscientific relevance, because the network layers with fixed weights use empirical 2-D receptive field profiles obtained from orientation-selective neurons in cat visual cortex as the weighting functions, and the resulting transform mimics the biological visual strategy of embedding angular and spectral analysis within global spatial coordinates."} {"_id": "ab0e9c43af9f8ba3dcf77e1a21b9ed21abd82288", "title": "ReTiCaM: Real-time Human Performance Capture from Monocular Video", "text": "We present the first real-time human performance capture approach that reconstructs dense, space-time coherent deforming geometry of entire humans in general everyday clothing from just a single RGB video. We propose a novel two-stage analysis-by-synthesis optimization whose formulation and implementation are designed for high performance. In the first stage, a skinned template model is jointly fitted to background-subtracted input video, 2D and 3D skeleton joint positions found using a deep neural network, and a set of sparse facial landmark detections. In the second stage, dense non-rigid 3D deformations of skin and even loose apparel are captured based on a novel real-time capable algorithm for non-rigid tracking using dense photometric and silhouette constraints. Our novel energy formulation leverages automatically identified material regions on the template to model the differing non-rigid deformation behavior of skin and apparel. The two resulting nonlinear optimization problems per frame are solved with specially-tailored data-parallel Gauss-Newton solvers. In order to achieve real-time performance of over 25 Hz, we design a pipelined parallel architecture using the CPU and two commodity GPUs. Our method is the first real-time monocular approach for full-body performance capture. Our method yields comparable accuracy with off-line performance capture techniques, while being orders of magnitude faster."} {"_id": "3ec569ba6c3f377d94a9fc79e6656d27d099430c", "title": "Comorbidity of anxiety and unipolar mood disorders.", "text": "Research on relationships between anxiety and depression has proceeded at a rapid pace since the 1980s. The similarities and differences between these two conditions, as well as many of the important features of the comorbidity of these disorders, are well understood. The genotypic structure of anxiety and depression is also fairly well documented. Generalized anxiety and major depression share a common genetic diathesis, but the anxiety disorders themselves are genetically heterogeneous.
Sophisticated phenotypic models have also emerged, with data converging on an integrative hierarchical model of mood and anxiety disorders in which each individual syndrome contains both a common and a unique component. Finally, considerable progress has been made in understanding cognitive aspects of these disorders. This work has focused on both the cognitive content of anxiety and depression and on the effects that anxiety and depression have on information processing for mood-congruent material."} {"_id": "752d4887c68149d54f93206e0d20bc5359ccc4d2", "title": "The neuropsychopharmacology of fronto-executive function: monoaminergic modulation.", "text": "We review the modulatory effects of the catecholamine neurotransmitters noradrenaline and dopamine on prefrontal cortical function. The effects of pharmacologic manipulations of these systems, sometimes in comparison with the indoleamine serotonin (5-HT), on performance on a variety of tasks that tap working memory, attentional-set formation and shifting, reversal learning, and response inhibition are compared in rodents, nonhuman primates, and humans using, in a behavioral context, several techniques ranging from microiontophoresis and single-cell electrophysiological recording to pharmacologic functional magnetic resonance imaging. Dissociable effects of drugs and neurotoxins affecting these monoamine systems suggest new ways of conceptualizing state-dependent fronto-executive functions, with implications for understanding the molecular genetic basis of mental illness and its treatment."} {"_id": "82cfd6306399e0e8b96ce6a9f3d4cb55a783984b", "title": "Vicious and virtuous cycles in ERP implementation: a case study of interrelations between critical success factors", "text": ""} {"_id": "3164ff63c7fd8f062e0d3c487059630def97955c", "title": "Nivolumab in previously untreated melanoma without BRAF mutation.", "text": "BACKGROUND\nNivolumab was associated with higher rates of objective response than chemotherapy in a phase 3 study involving patients with ipilimumab-refractory metastatic melanoma. The use of nivolumab in previously untreated patients with advanced melanoma has not been tested in a phase 3 controlled study.\n\n\nMETHODS\nWe randomly assigned 418 previously untreated patients who had metastatic melanoma without a BRAF mutation to receive nivolumab (at a dose of 3 mg per kilogram of body weight every 2 weeks and dacarbazine-matched placebo every 3 weeks) or dacarbazine (at a dose of 1000 mg per square meter of body-surface area every 3 weeks and nivolumab-matched placebo every 2 weeks). The primary end point was overall survival.\n\n\nRESULTS\nAt 1 year, the overall rate of survival was 72.9% (95% confidence interval [CI], 65.5 to 78.9) in the nivolumab group, as compared with 42.1% (95% CI, 33.0 to 50.9) in the dacarbazine group (hazard ratio for death, 0.42; 99.79% CI, 0.25 to 0.73; P<0.001).
The median progression-free survival was 5.1 months in the nivolumab group versus 2.2 months in the dacarbazine group (hazard ratio for death or progression of disease, 0.43; 95% CI, 0.34 to 0.56; P<0.001). The objective response rate was 40.0% (95% CI, 33.3 to 47.0) in the nivolumab group versus 13.9% (95% CI, 9.5 to 19.4) in the dacarbazine group (odds ratio, 4.06; P<0.001). The survival benefit with nivolumab versus dacarbazine was observed across prespecified subgroups, including subgroups defined by status regarding the programmed death ligand 1 (PD-L1). Common adverse events associated with nivolumab included fatigue, pruritus, and nausea. Drug-related adverse events of grade 3 or 4 occurred in 11.7% of the patients treated with nivolumab and 17.6% of those treated with dacarbazine.\n\n\nCONCLUSIONS\nNivolumab was associated with significant improvements in overall survival and progression-free survival, as compared with dacarbazine, among previously untreated patients who had metastatic melanoma without a BRAF mutation. (Funded by Bristol-Myers Squibb; CheckMate 066 ClinicalTrials.gov number, NCT01721772.)."} {"_id": "7238c5082e2d02a1bbcb4b4503e787a271c66379", "title": "Marginal improvement of AUFLS using ROCOF", "text": "This paper proposes a scheme that integrates rate of change of frequency (ROCOF) and under frequency (UF) elements to design a more effective load shedding response. A new load block activation algorithm is also introduced that improves the overall load shedding (LS) performance. The combination of ROCOF and UF criteria is used to overcome the LS block discrimination problem and to provide a flexible and decentralised solution for mitigation. The proposed method is verified through simulations on the IEEE 39-bus test system."} {"_id": "c6f61f344c919493886bf67d0e64e0242ae83547", "title": "Size matters: word count as a measure of quality on wikipedia", "text": "Wikipedia, \"the free encyclopedia\", now contains over two million English articles, and is widely regarded as a high-quality, authoritative encyclopedia. Some Wikipedia articles, however, are of questionable quality, and it is not always apparent to the visitor which articles are good and which are bad. We propose a simple metric -- word count -- for measuring article quality. In spite of its striking simplicity, we show that this metric significantly outperforms the more complex methods described in related work."} {"_id": "3661933b3173af59a20adbf129c0e3efa42c8ae4", "title": "OPRM1 and EGFR contribute to skin pigmentation differences between Indigenous Americans and Europeans", "text": "Contemporary variation in skin pigmentation is the result of hundreds of thousands of years of human evolution in new and changing environments. Previous studies have identified several genes involved in skin pigmentation differences among African, Asian, and European populations. However, none have examined skin pigmentation variation among Indigenous American populations, creating a critical gap in our understanding of skin pigmentation variation. This study investigates signatures of selection at 76 pigmentation candidate genes that may contribute to skin pigmentation differences between Indigenous Americans and Europeans. Analysis was performed on two samples of Indigenous Americans genotyped on genome-wide SNP arrays.
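Returning to the Wikipedia word-count record above: the proposed metric is almost trivially implementable, which is part of the paper's point. A sketch follows; the threshold is invented for illustration, not taken from the paper.

```python
# Word count as a quality signal, per the Wikipedia abstract above.
# The threshold is invented for illustration.
def predict_quality(article_text, threshold=2000):
    return "good" if len(article_text.split()) >= threshold else "questionable"

print(predict_quality("some short stub " * 10))        # questionable
print(predict_quality("substantial prose " * 1500))    # good
```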
Using four tests for natural selection\u2014locus-specific branch length (LSBL), ratio of heterozygosities (lnRH), Tajima\u2019s D difference, and extended haplotype homozygosity (EHH)\u2014we identified 14 selection-nominated candidate genes (SNCGs). SNPs in each of the SNCGs were tested for association with skin pigmentation in 515 admixed Indigenous American and European individuals from regions of the Americas with high ground-level ultraviolet radiation. In addition to SLC24A5 and SLC45A2, genes previously associated with European/non-European differences in skin pigmentation, OPRM1 and EGFR were associated with variation in skin pigmentation in New World populations for the first time."} {"_id": "9e2bc7c8dba23d594f22ccdd73d42f5aafbc5060", "title": "Non-Foster impedance matching of electrically small antennas", "text": "When the size of an antenna is electrically small, the antenna is neither efficient nor a good radiator because most of the input power is stored in the reactive near-field region and little power is radiated in the far-field region. As demonstrated in [1]\u2013[2], the radiation quality factor of small antennas is definitely high. In other words, the input impedance of small antennas is considerably reactive. To reduce the radiation quality factor in the whole or partial frequency range of interest, it is important to increase the radiation resistance and/or reduce the reactance of the antenna. Hence, it is necessary to modify the antenna to reduce the reactance of the antenna and/or add impedance matching networks (MNs) to maximize the transfer of power from a resistive source to the highly reactive antenna."} {"_id": "36bcaabc15e812df931a8dd8f37c7cfcb0a07d54", "title": "A restart CMA evolution strategy with increasing population size", "text": "In this paper we introduce a restart-CMA-evolution strategy, where the population size is increased for each restart (IPOP). By increasing the population size the search characteristic becomes more global after each restart. The IPOP-CMA-ES is evaluated on the test suite of 25 functions designed for the special session on real-parameter optimization of CEC 2005. Its performance is compared to a local restart strategy with constant small population size. On unimodal functions the performance is similar. On multi-modal functions the local restart strategy significantly outperforms IPOP in 4 test cases whereas IPOP performs significantly better in 29 out of 60 tested cases."} {"_id": "1dd26846e8b42e2da92b5977f8a6050f2f92f388", "title": "Robust Curb Detection with Fusion of 3D-Lidar and Camera Data", "text": "Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row.
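Returning to the IPOP-CMA-ES record above: the restart policy it describes is independent of the underlying optimizer. The sketch below shows only the restart wrapper, with a stand-in random-search inner loop instead of real CMA-ES and invented budgets and population sizes.

```python
# Restart wrapper in the spirit of IPOP: double the population each restart.
# The inner optimizer here is a stand-in random search, not real CMA-ES.
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def inner_optimize(f, dim, popsize, iters, rng):
    best_x, best_f = None, np.inf
    mean = rng.normal(0, 1, dim)                 # crude stand-in for a search distribution
    for _ in range(iters):
        pop = mean + rng.normal(0, 0.5, (popsize, dim))
        vals = np.array([f(x) for x in pop])
        i = vals.argmin()
        if vals[i] < best_f:
            best_x, best_f = pop[i], vals[i]
            mean = pop[i]                        # move the distribution toward the best sample
    return best_x, best_f

def ipop(f, dim=5, restarts=4, popsize=8):
    rng = np.random.default_rng(2)
    best = (None, np.inf)
    for _ in range(restarts):
        x, fx = inner_optimize(f, dim, popsize, iters=100, rng=rng)
        if fx < best[1]:
            best = (x, fx)
        popsize *= 2                             # IPOP: grow the population each restart
    return best

print(ipop(sphere)[1])
```

Larger populations make each restart search more globally, which is the abstract's rationale for growing the population rather than restarting at a constant small size.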
After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes."} {"_id": "ba0164fe77d37786eca4cfe1a6fbc020943c91a2", "title": "Successful lean implementation: Organizational culture and soft lean practices", "text": "Lean management (LM) is a managerial approach for improving processes based on a complex system of interrelated socio-technical practices. Recently, debate has centered on the role of organizational culture (OC) in LM. This paper aims to contribute to this debate by examining whether plants that successfully implement LM are characterized by a specific OC profile and extensively adopt soft LM practices. Data were analyzed from the High Performance Manufacturing (HPM) project dataset using a multi-group approach. The results revealed that a specific OC profile characterizes successful lean plants; in particular, when compared to unsuccessful lean plants, they show a higher institutional collectivism, future orientation, a humane orientation, and a lower level of assertiveness. While a high level of institutional collectivism, future orientation, and humane orientation are common features of high performers in general, a low level of assertiveness is typical only of successful lean plants. In addition, successful lean plants use soft LM practices more extensively than unsuccessful lean plants (i.e., lean practices concerning people and relations, such as small group problem solving, employees\u2019 training to perform multiple tasks, supplier partnerships, customer involvement, and continuous improvement), while they do not differ significantly in terms of hard LM practices (i.e., lean technical and analytical tools). For managers, the results indicate that, in order to implement LM successfully, it is fundamental to go beyond LM technicalities by adopting soft practices and nurturing the development of an appropriate OC profile."} {"_id": "c25be11ce09e711411863f503ab588ba15ec73c0", "title": "Narcissism and Romantic Attraction", "text": "A model of narcissism and romantic attraction predicts that narcissists will be attracted to admiring individuals and highly positive individuals and relatively less attracted to individuals who offer the potential for emotional intimacy. Five studies supported this model. Narcissists, compared with nonnarcissists, preferred more self-oriented (i.e., highly positive) and less other-oriented (i.e., caring) qualities in an ideal romantic partner (Study 1). Narcissists were also relatively more attracted to admiring and highly positive hypothetical targets and less attracted to caring targets (Studies 2 and 3). Indeed, narcissists displayed a preference for highly positive-noncaring targets compared with caring but not highly positive targets (Study 4). 
Finally, mediational analyses demonstrated that narcissists' romantic attraction is, in part, the result of a strategy for enhancing self-esteem (Study 5)."} {"_id": "146ec475964d7c61d35b2e6a0eb6de03258edac2", "title": "Creating a Labeled Dataset for Medical Misinformation in Health Forums", "text": "The dissemination of medical misinformation online presents a challenge to human health. Machine learning techniques provide a unique opportunity for decreasing the cognitive load associated with deciding whether any given user comment is likely to contain misinformation, but a paucity of labeled data of medical misinformation makes supervised approaches a challenge. In order to ameliorate this condition, we present a new labeled dataset of misinformative and non-misinformative comments developed over posted questions and comments on a health discussion forum. This required extraction of candidate misinformative entries from the corpus using information retrieval techniques, development of a codex and labeling strategy for the dataset, and the creation of features for use in machine learning tasks. By identifying the nine most descriptive features with regard to classification as misinformative or non-misinformative through the use of Recursive Feature Elimination, we achieved a classification accuracy of 90.1%, where 85.8% of the dataset consists of non-misinformative comments. In our opinion, this dataset and analysis will aid the machine learning community in the development of an online misinformation classification system over user-generated content such as medical forum posts."} {"_id": "de8335a834362c50de9075ccebc20605aa326545", "title": "Evaluating visual analytics with eye tracking", "text": "The application of eye tracking for the evaluation of humans' viewing behavior is a common approach in psychological research. So far, the use of this technique for the evaluation of visual analytics and visualization is less prominent. We investigate recent scientific publications from the main visualization and visual analytics conferences and journals that include an evaluation by eye tracking. Furthermore, we provide an overview of evaluation goals that can be achieved by eye tracking and state-of-the-art analysis techniques for eye tracking data. Ideally, visual analytics leads to a mixed-initiative cognitive system where the mechanism of distribution is the interaction of the user with visualization environments. Therefore, we also include a discussion of cognitive approaches and models to include the user in the evaluation process. Based on our review of the current use of eye tracking evaluation in our field and the cognitive theory, we propose directions of future research on evaluation methodology, leading to the grand challenge of developing an evaluation approach to the mixed-initiative cognitive system of visual analytics."} {"_id": "f058c2376ba396e062cb1b9f5d6f476fd6515927", "title": "Game theoretic approach on Real-time decision making for IoT-based traffic light control", "text": "
Smart traffic light control at intersections is one of the major issues in Intelligent Transportation Systems. In this paper, on the basis of the newly emerging technologies of the Internet of Things, we introduce a new approach for smart traffic light control at intersections. In particular, we first propose a connected intersection system in which all objects, such as vehicles, sensors, and traffic lights, are connected and share information with one another. In this way, the controller is able to collect traffic flow and mobility information at the intersection effectively and in real time. Secondly, we propose optimization algorithms for traffic lights by applying algorithmic game theory. Specifically, two game models (the Cournot model and the Stackelberg model) are proposed to deal with different traffic flow scenarios. In this regard, based on the density of vehicles, the controller makes real-time decisions on the time durations of traffic lights to optimize traffic flow. To evaluate our approach, we have used the NetLogo simulator, an agent-based modeling environment, for designing and implementing a simple working traffic model. The simulation results show that our approach achieves promising performance in various traffic flow situations."} {"_id": "ec375191617e58e0592a8485333753974f9c741a", "title": "Pulsed voltage converter with bipolar output voltages up to 10 kV for Dielectric Barrier Discharge", "text": "For pulsed power applications, special pulsed voltage converters are needed. This paper presents an approach for a pulsed power converter, which generates bipolar output voltages up to 10 kV with extremely fast voltage slopes and high repetition rates. The topology is based on an H-bridge with a 10 kV dc-link. The output voltage and current of the pulsed voltage converter are adapted for the operation with Dielectric Barrier Discharge. To avoid the use of spark gaps, whose limited lifetime would limit the lifetime of the converter, series-stacked MOSFETs are used to realize a switch with a high blocking voltage. A balancing network for the series-stacked MOSFETs is introduced as well as an adequate gate drive circuit. A matching method for the capacitive load is described, to achieve a maximum voltage slope at this capacitive load. To validate the theoretical considerations, a prototype and measurements are presented."} {"_id": "3f11994f22bbea0997f942032bd8c0a6ab3f2ec0", "title": "Odor Recognition in Multiple E-Nose Systems With Cross-Domain Discriminative Subspace Learning", "text": "In this paper, we propose an odor recognition framework for multiple electronic noses (E-noses), i.e., machine olfaction odor perception systems. The proposed transfer-based odor recognition model is called cross-domain discriminative subspace learning (CDSL). General odor recognition with an E-nose is single-domain oriented; that is, recognition algorithms are often modeled and tested on a single-domain data set (i.e., from only one E-nose system). In contrast, we focus on a more realistic scenario: the recognition model is trained on a prepared source domain data set from a master E-nose system ${A}$ , but tested on another target domain data set from a slave system ${B}$ or ${C}$ of the same type as the master system ${A}$ .
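The cross-device discrepancy at the heart of this setting is often quantified with simple statistics before any adaptation is attempted; a hypothetical check using the distance between domain means (a linear-kernel maximum mean discrepancy, used here only as a cheap proxy, not the paper's weighted domain distance):

```python
import numpy as np

def linear_mmd(source, target):
    """Squared distance between the mean feature vectors of two domains --
    a coarse measure of the distribution discrepancy between devices."""
    return float(np.sum((source.mean(axis=0) - target.mean(axis=0)) ** 2))

# Hypothetical usage: rows are odor samples, columns are sensor features.
rng = np.random.default_rng(0)
X_master = rng.normal(0.0, 1.0, size=(200, 16))
X_slave = rng.normal(0.3, 1.2, size=(200, 16))  # shifted by device variance
print(linear_mmd(X_master, X_slave))
```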
The internal device parameter variance between master and slave systems often results in data distribution discrepancy between source domain and target domain, such that a single-domain odor recognition model may not adapt to another domain. Therefore, we propose domain-adaptation-based odor recognition for addressing the realistic recognition scenario across systems. Specifically, the proposed CDSL method has three merits: 1) an intraclass scatter minimization- and an interclass scatter maximization-based discriminative subspace learning is solved on the source domain; 2) a data fidelity and preservation constraint of the subspace is imposed on the target domain without distortion; and 3) a minipatch feature weighted domain distance is minimized for closely connecting the source and target domains. Experiments and comparisons on odor recognition tasks in multiple E-noses demonstrate the efficiency of the proposed method."} {"_id": "7fe690f22bb04949c1b46fb21710e28e5bd1157c", "title": "Real Time Drowsy Driver Identification Using Eye Blink Detection", "text": "HUMAN COMPUTER INTERFACE (HCI) systems are designed for use in assisting people in various aspects. Driving support systems such as navigation systems are becoming more common by the day. The capability of driving support systems to detect the level of a driver\u2019s alertness is very important in ensuring road safety. By observation of blink patterns and eye movements, driver fatigue can be detected early enough to prevent collisions caused by drowsiness. The analysis of face images is widely used in security systems, face recognition, criminal focusing, etc. In this specific study, a non-recursive system has been designed to detect the shutting of the eyes of the person driving an automobile. Real-time video of the driver\u2019s eyes is processed in MATLAB to detect whether the eyes remain closed for more than a fixed duration, thus indicating a condition of fatigue, and to raise an alarm which could prevent a collision. Existing driving support systems have been found lacking in detecting the influence of drugs or alcohol, which poses a great degree of risk to commuters. This study has found that eye blink patterns are starkly different for persons under the influence of drugs and can be easily detected by the proposed system."} {"_id": "ab614b5712d41433e6341fd0eb465258f14d1f23", "title": "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks", "text": "Recurrent neural network (RNN) models are widely used for processing sequential data governed by a latent tree structure. Previous work shows that RNN models (especially Long Short-Term Memory (LSTM) based models) could learn to exploit the underlying tree structure. However, their performance consistently lags behind that of tree-based models. This work proposes a new inductive bias, Ordered Neurons, which enforces an order of updating frequencies between hidden state neurons. We show that the ordered neurons can explicitly integrate the latent tree structure into recurrent models.
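The ordering of update frequencies is implemented in this line of work with a cumulative-softmax ("cumax") activation that splits hidden units into slowly and quickly updating segments; a minimal sketch of that activation (variable names are illustrative, and the published unit derives its two master gates from separate learned projections):

```python
import numpy as np

def cumax(x):
    """Cumulative softmax: a monotonically increasing gate in [0, 1].
    Low-index units stay near 0 and high-index units near 1, so the gate
    defines a soft split point along the hidden dimension."""
    e = np.exp(x - x.max())
    return np.cumsum(e / e.sum())

logits = np.array([0.1, 2.0, -1.0, 0.5])
master_forget = cumax(logits)        # keeps low-ranked (long-term) units intact
master_input = 1.0 - cumax(logits)   # writes new input into high-ranked units
```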
To this end, we propose a new RNN unit: ON-LSTM, which achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference."} {"_id": "899886a4dab243958c343ba2925861a2840190bd", "title": "The Pendubot: a mechatronic system for control research and education", "text": "In this paper we describe the Pendubot, a mechatronic device for use in control engineering education and for research in nonlinear control and robotics. This device is a two-link planar robot with an actuator at the shoulder but no actuator at the elbow. With this system, a number of fundamental concepts in nonlinear dynamics and control theory may be illustrated. The Pendubot complements previous mechatronic systems, such as the Acrobot [3] and the inverted pendulum of Furuta [4]."} {"_id": "a9f9a4dc25479e550ce1e0ddcbaf00743ccafc29", "title": "Extensional Versus Intuitive Reasoning : The Conjunction Fallacy in Probability Judgment", "text": "Perhaps the simplest and the most basic qualitative law of probability is the conjunction rule: The probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the extension (or the possibility set) of the conjunction is included in the extension of its constituents. Judgments under uncertainty, however, are often mediated by intuitive heuristics that are not bound by the conjunction rule. A conjunction can be more representative than one of its constituents, and instances of a specific category can be easier to imagine or to retrieve than instances of a more inclusive category. The representativeness and availability heuristics therefore can make a conjunction appear more probable than one of its constituents. This phenomenon is demonstrated in a variety of contexts including estimation of word frequency, personality judgment, medical prognosis, decision under risk, suspicion of criminal acts, and political forecasting. Systematic violations of the conjunction rule are observed in judgments of lay people and of experts in both between-subjects and within-subjects comparisons. Alternative interpretations of the conjunction fallacy are discussed and attempts to combat it are explored."} {"_id": "2f983533f6dd084b0851652b5991fd6464706ff4", "title": "An Achilles Heel in Signature-Based IDS : Squealing False Positives in SNORT", "text": "We report a vulnerability to network signature-based IDS which we have tested using Snort and we call \u201cSquealing\u201d. This vulnerability has significant implications since it can easily be generalized to any IDS. The vulnerability of signature-based IDS to high false positive rates has been well-documented, but we go further to show (at a high level) how packets can be crafted to match attack signatures such that alarms on a target IDS can be conditioned or disabled and then exploited. This is the first academic treatment of this vulnerability, which has already been reported to the CERT Coordination Center and the National Infrastructure Protection Center. Independently, other tools based on \u201csquealing\u201d are poised to appear that, while validating our ideas, also give cause for concern.
keywords: squealing, false positive, intrusion detection, IDS, signature-based, misuse behavior, network intrusion detection, snort"} {"_id": "363d109c3f00026f9ef904dd8cc3c935ee463b65", "title": "Snort: Lightweight Intrusion Detection for Networks", "text": "Network intrusion detection systems (NIDS) are an important part of any network security architecture. They provide a layer of defense which monitors network traffic for predefined suspicious activity or patterns, and alerts system administrators when potential hostile traffic is detected. Commercial NIDS have many differences, but Information Systems departments must face the commonalities that they share, such as significant system footprint, complex deployment and high monetary cost. Snort was designed to address these issues."} {"_id": "5dbb8f63e9ac926005037debc5496e9949a3885f", "title": "Evaluating intrusion detection systems: the 1998 DARPA off-line intrusion detection evaluation", "text": "An intrusion detection evaluation test bed was developed which generated normal traffic similar to that on a government site containing hundreds of users on thousands of hosts. More than 300 instances of 38 different automated attacks were launched against victim UNIX hosts in seven weeks of training data and two weeks of test data. Six research groups participated in a blind evaluation and results were analyzed for probe, denial-of-service (DoS), remote-to-local (R2L), and user to root (U2R) attacks. The best systems detected old attacks included in the training data, at moderate detection rates ranging from 63% to 93% at a false alarm rate of 10 false alarms per day. Detection rates were much worse for new and novel R2L and DoS attacks included only in the test data. The best systems failed to detect roughly half of these new attacks, which included damaging access to root-level privileges by remote users. These results suggest that further research should focus on developing techniques to find new attacks instead of extending existing rule-based approaches."} {"_id": "8210f25d033de427cfd2e385bc9dd8f9fdca3cfc", "title": "Data Preparation for Mining World Wide Web Browsing Patterns", "text": "The World Wide Web (WWW) continues to grow at an astounding rate in both the sheer volume of traffic and the size and complexity of Web sites. The complexity of tasks such as Web site design, Web server design, and of simply navigating through a Web site have increased along with this growth. An important input to these design tasks is the analysis of how a Web site is being used. Usage analysis includes straightforward statistics, such as page access frequency, as well as more sophisticated forms of analysis, such as finding the common traversal paths through a Web site. Web Usage Mining is the application of data mining techniques to usage logs of large Web data repositories in order to produce results that can be used in the design tasks mentioned above. However, there are several preprocessing tasks that must be performed prior to applying data mining algorithms to the data collected from server logs. This paper presents several data preparation techniques in order to identify unique users and user sessions. Also, a method to divide user sessions into semantically meaningful transactions is defined and successfully tested against two other methods.
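A standard ingredient of such preparation is splitting one user's request stream into sessions with an inactivity timeout; a minimal sketch (the 30-minute threshold is a common convention in the web-mining literature, not necessarily this paper's choice):

```python
from datetime import timedelta

def split_sessions(requests, timeout=timedelta(minutes=30)):
    """Group one user's (timestamp, url) requests into sessions whenever
    the gap between consecutive requests exceeds `timeout`."""
    sessions, current = [], []
    for ts, url in sorted(requests):
        if current and ts - current[-1][0] > timeout:
            sessions.append(current)  # close the previous session
            current = []
        current.append((ts, url))
    if current:
        sessions.append(current)
    return sessions
```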
Transactions identified by the proposed methods are used to discover association rules from real-world data using the WEBMINER system [15]."} {"_id": "55ce99ff2cb04eaed460b8a8ee5c3fd4821e0e0f", "title": "Differentially Private Stochastic Gradient Descent for in-RDBMS Analytics", "text": "This paper studies differential privacy for stochastic gradient descent (SGD) in an in-RDBMS system. While significant progress has been made separately on in-RDBMS SGD and private SGD, none of the major in-RDBMS machine learning frameworks have incorporated differentially private SGD. There are two inter-related issues for this disconnect between research and practice: (1) low model accuracy due to added noise to guarantee privacy, and (2) high development and runtime overhead of the private algorithms. This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address both issues in an integrated manner. In contrast to the white-box approach adopted by previous algorithms, we revisit and use the classical technique of output perturbation. While using output perturbation trivially addresses (2), it gives rise to challenges in addressing (1). We address this challenge by providing a novel analysis of the L2-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We then integrate this algorithm, along with the state-of-the-art differentially private SGD, into Bismarck, an in-RDBMS analytics system. Extensive experiments demonstrate that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and most importantly, for a number of datasets yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms."} {"_id": "db3612fedd16ec37b30b97747ce0d8536c7ce5bd", "title": "Developing Leadership Character in Business Programs", "text": "Our objective is to encourage and enable leadership character development in business education. Building on a model of character strengths and their link to virtues, values, and ethical decision making, we describe an approach to develop leadership character at the individual, group, and organizational levels. We contrast this approach to existing practices that have focused on teaching functional content over character and address how business educators can enable leadership character development through their own behaviors, relationships, and structures. Most important, we provide concrete suggestions on how to integrate a focus on character development into existing business programs, both in terms of individual courses as well as the overall curriculum. We highlight that the development of leadership character must extend beyond student engagement in a course since \u201cit takes a village\u201d to develop character."} {"_id": "d2fbc1a8bcc7c252a70c524cc96c14aa807c2345", "title": "Approximating displacement with the body velocity integral", "text": "In this paper, we present a technique for approximating the net displacement of a locomoting system over a gait without directly integrating its equations of motion. The approximation is based on a volume integral, which, among other benefits, is more open to optimization by algorithm or inspection than is the full displacement integral.
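A compact statement of the idea, under the assumption of a kinematic locomotion model with local connection $A(r)$ over shape variables $r$ (sign conventions vary across the literature): the body-frame displacement over one cycle of a gait $\partial\Omega$ is approximated by a line integral of the connection, which Stokes's theorem converts into an area integral over the region the gait encloses in shape space,

```latex
% Body velocity integral over a cyclic gait, turned into an
% area integral by Stokes's theorem:
\Delta g \;\approx\; \oint_{\partial\Omega} -A(r)\,\mathrm{d}r
        \;=\; \iint_{\Omega} -\mathrm{d}A ,
```

where $\mathrm{d}A$ is the exterior derivative of the connection one-form; as the abstract goes on to note, the quality of this approximation depends on the choice of body coordinates.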
Specifically, we develop the concept of a body velocity integral (BVI), which is computable over a gait as a volume integral via Stokes\u2019s theorem. We then demonstrate that, given an appropriate choice of coordinates, the BVI for a gait approximates the displacement of the system over that gait. This consideration of coordinate choice is a new approach to locomotion problems, and provides significantly improved results over past attempts to apply Stokes\u2019s theorem to gait analysis."} {"_id": "cd3f32418cbacc65357f7436a2d4186c634f024a", "title": "On Selecting the Largest Element in Spite of Erroneous Information", "text": ""} {"_id": "9df2cbb10394319d0466da3e914193876efdaf91", "title": "Octree-based decimation of marching cubes surfaces", "text": "The marching cubes (MC) algorithm is a method for generating isosurfaces. It also generates an excessively large number of triangles to represent an isosurface; this increases the rendering time. This paper presents a decimation method to reduce the number of triangles generated. Decimation is carried out before creating a large number of triangles. Four major steps comprise the algorithm: surface tracking, merging, crack patching and triangulation. Surface tracking is an enhanced implementation of the MC algorithm. Starting from a seed point, the surface tracker visits only those cells likely to compose part of the desired isosurface. The cells making up the extracted surface are stored in an octree that is further processed. A bottom-up approach is taken in merging the cells containing a relatively flat approximating surface. The finer surface details are maintained. Cells are merged as long as the error due to such an operation is within a user-specified error parameter, or a cell acquires more than one connected surface component in it. A crack patching method is described that forces edges of smaller cells to lie along those of the larger neighboring cells. The overall saving in the number of triangles depends both on the specified error value and the nature of the data. Use of the hierarchical octree data structure also presents the potential of incremental representation of surfaces. We can generate a highly smoothed surface representation which can be progressively refined as the user-specified error value is decreased."} {"_id": "363490159a0e757d7d80cb683cee4218afdf4878", "title": "Talking to machines (statistically speaking)", "text": "Statistical methods have long been the dominant approach in speech recognition and probabilistic modelling in ASR is now a mature technology. The use of statistical methods in other areas of spoken dialogue is however more recent and rather less mature. This paper reviews spoken dialogue systems from a statistical modelling perspective. The complete system is first presented as a partially observable Markov decision process. The various sub-components are then exposed by introducing appropriate intermediate variables. Samples of existing work are reviewed within this framework, including dialogue control and optimisation, semantic interpretation, goal detection, natural language generation and synthesis."} {"_id": "3504a8bfb1a35ca115a4829a8afd7e417aba92ac", "title": "Mobile ECG measurement and analysis system using mobile phone as the base station", "text": "In this paper, we introduce an ECG measurement, analysis and transmission system which uses a mobile phone as a base station. 
The system is based on a small-sized mobile ECG recording device which sends measurement data wirelessly to the mobile phone. In the mobile phone, the received data is analyzed, and if abnormalities are found, part of the measurement data is sent to a server for use by medical personnel. The prototype of the system was made with a portable ECG monitor and a Nokia 6681 mobile phone. The results show that modern smart phones are indeed capable of this kind of task. Thus, with their good networking and data processing capabilities, they are a potential part of future wireless health care systems."} {"_id": "5f76a3f006cb4af1394bc852ce77646bada39ee1", "title": "STANDARDISATION AND CLASSIFICATION OF ALERTS GENERATED BY INTRUSION DETECTION SYSTEMS", "text": "Intrusion detection systems are the most popular defence mechanisms used to provide security to IT infrastructures. Organisations need the best performance, so they use multiple IDSs from different vendors. Different vendors use different formats and protocols. A difficulty imposed by this is the generation of numerous false alarms. The major part of this work concentrates on the collection of alerts from different intrusion detection systems to represent them in IDMEF (Intrusion Detection Message Exchange Format) format. Alerts were collected from intrusion detection systems like Snort, OSSEC, Suricata, etc. Later, classification is attempted using machine learning techniques, which helps to mitigate the generation of false positives."} {"_id": "e51662b2b2e1dfc113c931f23524178ae4bc82fc", "title": "3D Surface Reconstruction of Noisy Point Clouds Using Growing Neural Gas: 3D Object/Scene Reconstruction", "text": "With the advent of low-cost 3D sensors and 3D printers, scene and object 3D surface reconstruction has become an important research topic in recent years. In this work, we propose an automatic (unsupervised) method for 3D surface reconstruction from raw unorganized point clouds acquired using low-cost 3D sensors. We have modified the growing neural gas network, which is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation, to perform 3D surface reconstruction of different real-world objects and scenes. Some improvements have been made on the original algorithm considering colour and surface normal information of input data during the learning stage and creating complete triangular meshes instead of basic wire-frame representations. The proposed method is able to successfully create 3D faces online, whereas existing 3D reconstruction methods based on self-organizing maps required post-processing steps to close gaps and holes produced during the 3D reconstruction process. A set of quantitative and qualitative experiments were carried out to validate the proposed method. The method has been implemented and tested on real data, and has been found to be effective at reconstructing noisy point clouds obtained using low-cost 3D sensors."} {"_id": "1254b7f7d7399add50b7ededd7c737db4239874c", "title": "Advanced PROPHET Routing in Delay Tolerant Network", "text": "To solve the routing jitter problem of PROPHET in delay-tolerant networks, an advanced PROPHET routing protocol is proposed in this paper. Average delivery predictabilities are used in advanced PROPHET to avoid routing jitter. Furthermore, we evaluate it in simulation against the original PROPHET routing protocol. The experimental results show that advanced PROPHET achieves higher average delivery rates and shorter average delays.
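For context, the core delivery-predictability updates that PROPHET variants build on can be sketched as follows (the constants are the commonly used defaults from the PROPHET literature, not values from this paper):

```python
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98  # common PROPHET defaults

def on_encounter(P, a, b):
    """Direct update when nodes a and b meet."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1.0 - old) * P_INIT

def transitive(P, a, b, c):
    """If a meets b often and b meets c often, a is a useful relay toward c."""
    P[(a, c)] = max(P.get((a, c), 0.0),
                    P.get((a, b), 0.0) * P.get((b, c), 0.0) * BETA)

def age(P, key, elapsed_units):
    """Predictabilities decay between encounters."""
    P[key] = P.get(key, 0.0) * (GAMMA ** elapsed_units)
```

The jitter addressed by the variant above arises because these per-encounter updates fluctuate; averaging the predictabilities smooths the forwarding decisions.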
Thus, it is fair to say that advanced PROPHET gives better performance than PROPHET."} {"_id": "14b061c4da9baa988f642eae43903dd5ea7ce3a3", "title": "Error control and concealment for video communication: a review", "text": "The problem of error control and concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. This paper reviews the techniques that have been developed for error control and concealment in the past ten to fifteen years. These techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches. Forward error concealment includes methods that add redundancy at the source end to enhance error resilience of the coded bit streams. Error concealment by postprocessing refers to operations at the decoder to recover the damaged areas based on characteristics of image and video signals. Finally, interactive error concealment covers techniques that are dependent on a dialog between the source and destination. Both current research activities and practice in international standards are covered."} {"_id": "586f8952909eb72727eaa9365f62c3f36cd5a9aa", "title": "Regenerative braking strategy for electric vehicles", "text": "Regenerative braking is an effective approach for electric vehicles to extend their driving range. The control strategy of regenerative braking plays an important role in maintaining the vehicle's stability and recovering energy. In this paper, the main properties that influence brake energy regeneration are analyzed. A mathematical model of brake-energy-regenerating electric vehicles is established. By analyzing the charge and discharge characteristics of the battery and motor, a simple regenerative braking strategy is proposed. The strategy takes into account the required braking torque, the available motor braking torque, and the braking torque limit, and it can make the best use of the motor braking torque. Simulation results show higher energy regeneration compared to a parallel strategy when the proposed strategy is adopted."} {"_id": "8688a74162a6b2ca759d94bd2a4f9b28db7fd5b4", "title": "Periodically Controlled Hybrid Systems Verifying A Controller for An Autonomous Vehicle", "text": "This paper introduces Periodically Controlled Hybrid Automata (PCHA) for describing a class of hybrid control systems. In a PCHA, control actions occur roughly periodically, while internal and input actions may occur in the interim, changing the discrete state or the setpoint. Based on periodicity and subtangential conditions, a new sufficient condition for verifying invariance of PCHAs is presented. This technique is used in verifying safety of the planner-controller subsystem of an autonomous ground vehicle, and in deriving geometric properties of planner-generated paths that can be followed safely by the controller under environmental uncertainties."} {"_id": "2f2f0f3f6def111907780d6580f6b0a7dfc9153c", "title": "Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability", "text": "Knowing the quality of reading comprehension (RC) datasets is important for the development of natural-language understanding systems. In this study, two classes of metrics were adopted for evaluating RC datasets: prerequisite skills and readability.
We applied these classes to six existing datasets, including MCTest and SQuAD, and highlighted the characteristics of the datasets according to each metric and the correlation between the two classes. Our dataset analysis suggests that the readability of RC datasets does not directly affect the question difficulty and that it is possible to create an RC dataset that is easy to read but difficult to answer."} {"_id": "0d4fef0ef83c6bad2e14fe4a4880fa153f550974", "title": "Neural Networks for Open Domain Targeted Sentiment", "text": "Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention from a text corpus. The task is typically modeled as a sequence labeling problem, and solved using state-of-the-art labelers such as CRF. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated large potential for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines."} {"_id": "0999fcd42bc502742dbcb25bd41760ff10d15fb0", "title": "On Throughput Efficiency of Geographic Opportunistic Routing in Multihop Wireless Networks", "text": "Geographic opportunistic routing (GOR) is a new routing concept in multihop wireless networks. Instead of picking one node to forward a packet to, GOR forwards a packet to a set of candidate nodes and one node is selected dynamically as the actual forwarder based on the instantaneous wireless channel condition and node position and availability at the time of transmission. GOR takes advantage of the spatial diversity and broadcast nature of wireless communications and is an efficient mechanism to combat unreliable links. The existing GOR schemes typically involve as many available next-hop neighbors as possible in the local opportunistic forwarding, and give the nodes closer to the destination higher relay priorities. In this paper, we focus on realizing GOR\u2019s potential in maximizing throughput. We start with an insightful analysis of various factors and their impact on the throughput of GOR, and propose a local metric named expected one-hop throughput (EOT) to balance the tradeoff between the benefit (i.e., packet advancement and transmission reliability) and the cost (i.e., medium time delay). We identify an upper bound of EOT and prove its concavity. Based on the EOT, we also propose a local candidate selection and prioritization algorithm. Simulation results validate our analysis and show that the metric EOT leads to both higher one-hop and path throughput than the corresponding pure GOR and geographic routing."} {"_id": "75b8c0abfd45fd77d7a61da7d12bdf516e3139c7", "title": "Estimating types in binaries using predictive modeling", "text": "Reverse engineering is an important tool in mitigating vulnerabilities in binaries. As a lot of software is developed in object-oriented languages, reverse engineering of object-oriented code is of critical importance. One of the major hurdles in reverse engineering binaries compiled from object-oriented code is the use of dynamic dispatch.
In the absence of debug information, any dynamic dispatch may seem to jump to many possible targets, posing a significant challenge to a reverse engineer trying to track the program flow. We present a novel technique that allows us to statically determine the likely targets of virtual function calls. Our technique uses object tracelets \u2013 statically constructed sequences of operations performed on an object \u2013 to capture potential runtime behaviors of the object. Our analysis automatically pre-labels some of the object tracelets by relying on instances where the type of an object is known. The resulting type-labeled tracelets are then used to train a statistical language model (SLM) for each type. We then use the resulting ensemble of SLMs over unlabeled tracelets to generate a ranking of their most likely types, from which we deduce the likely targets of dynamic dispatches. We have implemented our technique and evaluated it over real-world C++ binaries. Our evaluation shows that when there are multiple alternative targets, our approach can drastically reduce the number of targets that have to be considered by a reverse engineer."} {"_id": "3b1189bcc031edb119ca0a94cc75080ff9814ca9", "title": "Deep Emotion: A Computational Model of Emotion Using Deep Neural Networks", "text": "Emotions are very important for human intelligence. For example, emotions are closely related to the appraisal of the internal bodily state and external stimuli. This helps us to respond quickly to the environment. Another important perspective in human intelligence is the role of emotions in decision-making. Moreover, the social aspect of emotions is also very important. Therefore, if the mechanism of emotions were elucidated, we could advance toward the essential understanding of our natural intelligence. In this study, a model of emotions is proposed to elucidate the mechanism of emotions through the computational model. Furthermore, from the viewpoint of partner robots, the model of emotions may help us to build robots that can have empathy for humans. To understand and sympathize with people\u2019s feelings, the robots need to have their own emotions. This may allow robots to be accepted in human society. The proposed model is implemented using deep neural networks consisting of three modules, which interact with each other. Simulation results reveal that the proposed model exhibits reasonable behavior as the basic mechanism of emotion."} {"_id": "25ea0f831c214ee211ee55ffd31eab719799861c", "title": "Comorbidity of DSM-IV pathological gambling and other psychiatric disorders: results from the National Epidemiologic Survey on Alcohol and Related Conditions.", "text": "OBJECTIVE\nTo present nationally representative data on lifetime prevalence and comorbidity of pathological gambling with other psychiatric disorders and to evaluate sex differences in the strength of the comorbid associations.\n\n\nMETHOD\nData were derived from a large national sample of the United States. Some 43,093 household and group quarters residents aged 18 years and older participated in the 2001-2002 survey. Prevalence and associations of lifetime pathological gambling and other lifetime psychiatric disorders are presented. The diagnostic interview was the National Institute on Alcohol Abuse and Alcoholism Alcohol Use Disorder and Associated Disabilities Interview Schedule-DSM-IV Version.
Fifteen symptom items operationalized the 10 pathological gambling criteria.\n\n\nRESULTS\nThe lifetime prevalence rate of pathological gambling was 0.42%. Almost three quarters (73.2%) of pathological gamblers had an alcohol use disorder, 38.1% had a drug use disorder, 60.4% had nicotine dependence, 49.6% had a mood disorder, 41.3% had an anxiety disorder, and 60.8% had a personality disorder. A large majority of the associations between pathological gambling and substance use, mood, anxiety, and personality disorders were overwhelmingly positive and significant (p < .05), even after controlling for sociodemographic and socioeconomic characteristics. Male sex, black race, divorced/separated/widowed marital status, middle age, and living in the West and Midwest were associated with increased risk for pathological gambling. Further, associations between alcohol dependence, any drug use disorder, drug abuse, nicotine dependence, major depressive episode, and generalized anxiety disorder and pathological gambling were stronger among women than men (p > .05).\n\n\nCONCLUSION\nPathological gambling is highly comorbid with substance use, mood, anxiety, and personality disorders, suggesting that treatment for one condition should involve assessment and possible concomitant treatment for comorbid conditions."} {"_id": "9201fe9244071f6c4d1bac1b612f2b6aa12ca18f", "title": "Content-based image retrieval by integrating color and texture features", "text": "Content-based image retrieval (CBIR) has been an active research topic in the last decade. Feature extraction and representation is one of the most important issues in CBIR. In this paper, we propose a content-based image retrieval method based on an efficient integration of color and texture features. As its color features, pseudo-Zernike chromaticity distribution moments in opponent chromaticity space are used. As its texture features, a rotation-invariant and scale-invariant image descriptor in the steerable pyramid domain is adopted, which offers an efficient and flexible approximation of early processing in the human visual system. The integration of color and texture information provides a robust feature set for color image retrieval. Experimental results show that the proposed method yields higher retrieval accuracy than some conventional methods even though its feature-vector dimension is no higher than those of the latter methods on different test databases."} {"_id": "10160d18917e9359f5b8222362a564040fc88692", "title": "Rfam 12.0: updates to the RNA families database", "text": "The Rfam database (available at http://rfam.xfam.org) is a collection of non-coding RNA families represented by manually curated sequence alignments, consensus secondary structures and annotation gathered from corresponding Wikipedia, taxonomy and ontology resources. In this article, we detail updates and improvements to the Rfam data and website for the Rfam 12.0 release. We describe the upgrade of our search pipeline to use Infernal 1.1 and demonstrate its improved homology detection ability by comparison with the previous version. The new pipeline is easier for users to apply to their own data sets, and we illustrate its ability to annotate RNAs in genomic and metagenomic data sets of various sizes.
Rfam has been expanded to include 260 new families, including the well-studied large subunit ribosomal RNA family, and for the first time includes information on short sequence- and structure-based RNA motifs present within families."} {"_id": "8db26a22942404bd435909a16bb3a50cd67b4318", "title": "Marginalized Denoising Autoencoders for Domain Adaptation", "text": "Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters \u2014 in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB, significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks."} {"_id": "d3f7ff54b188b724597f143f57114b8eef067572", "title": "Multi-Task Learning for Mental Health using Social Media Text", "text": "We introduce initial groundwork for estimating suicide risk and mental health in a deep learning framework. By modeling multiple conditions, the system learns to make predictions about suicide risk and mental health at a low false positive rate. Conditions are modeled as tasks in a multitask learning (MTL) framework, with gender prediction as an additional auxiliary task. We demonstrate the effectiveness of multi-task learning by comparison to a well-tuned single-task baseline with the same number of parameters. Our best MTL model predicts potential suicide attempt, as well as the presence of atypical mental health, with AUC > 0.8. We also find additional large improvements using multi-task learning on mental health tasks with limited training data."} {"_id": "00b69fcb15b6ddedd6a1b23a0e4ed3afc0b8ac49", "title": "Co-Training for Domain Adaptation", "text": "Domain adaptation algorithms seek to generalize a model trained in a source domain to a new target domain. In many practical cases, the source and target distributions can differ substantially, and in some cases crucial target features may not have support in the source domain. In this paper we introduce an algorithm that bridges the gap between source and target domains by slowly adding to the training set both the target features and instances in which the current algorithm is the most confident. Our algorithm is a variant of co-training [7], and we name it CODA (Co-training for domain adaptation). Unlike the original co-training work, we do not assume a particular feature split. Instead, for each iteration of co-training, we formulate a single optimization problem which simultaneously learns a target predictor, a split of the feature space into views, and a subset of source and target features to include in the predictor. CODA significantly outperforms the state-of-the-art on the 12-domain benchmark data set of Blitzer et al. [4].
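The confidence-driven growth of the training set that such methods rely on can be sketched generically; this is the standard self-labeling loop, not CODA's joint optimization over feature splits, and the classifier choice and promotion schedule are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_src, y_src, X_tgt, rounds=10, per_round=20):
    """Grow the labeled pool by promoting the unlabeled target instances
    the current model is most confident about."""
    X_pool, y_pool, X_rest = X_src, y_src, X_tgt
    clf = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)
    for _ in range(rounds):
        if len(X_rest) == 0:
            break
        conf = clf.predict_proba(X_rest).max(axis=1)
        top = np.argsort(conf)[-per_round:]          # most confident targets
        X_pool = np.vstack([X_pool, X_rest[top]])
        y_pool = np.concatenate([y_pool, clf.predict(X_rest[top])])
        X_rest = np.delete(X_rest, top, axis=0)
        clf = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)
    return clf
```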
Indeed, over a wide range (65 of 84 comparisons) of target supervision CODA achieves the best performance."} {"_id": "2254a9c8e0a3d753ce25d4049e063e0e9611f377", "title": "A Planar Magic-T Using Substrate Integrated Circuits Concept", "text": "In this letter, a slotline to substrate integrated waveguide transition is proposed for the development of substrate integrated circuits. The insertion loss of the back-to-back transition is less than 1 dB from 8.7 to 9.0 GHz. With this transition, a planar magic-T is studied and designed. Measured results indicate that very good performance of the fabricated magic-T is achieved within the experimental frequency range of 8.4-9.4 GHz. The amplitude and phase imbalances are less than 0.2 dB and 1.5\u00b0, respectively."} {"_id": "170ec65f2281e0a691d794b8d1580a41117782bf", "title": "Automatic global vessel segmentation and catheter removal using local geometry information and vector field integration", "text": "Vessel enhancement and segmentation aim at (binary) per-pixel segmentation considering certain local features as probabilistic vessel indicators. We propose a new methodology to combine any local probability map with local directional vessel information. The resulting global vessel segmentation is represented as a set of discrete streamlines populating the vascular structures and providing additional connectivity and geometric shape information. The streamlines are computed by numerical integration of the directional vector field that is obtained from the eigenanalysis of the local Hessian indicating the local vessel direction. The streamline representation allows for sophisticated post-processing techniques using the additional information to refine the segmentation result with respect to the requirements of the particular application such as image registration. We propose different post-processing techniques for hierarchical segmentation, centerline extraction, and catheter removal to be used for X-ray angiograms. We further demonstrate how the global approach is able to significantly improve the segmentation compared to conventional local Hessian-based approaches."} {"_id": "637058be1258f718ba0f7f8ff57a27d47d19c8eb", "title": "Control of VSC connected to the grid through LCL-filter to achieve balanced currents", "text": "Grid-connected voltage source converters (VSCs) are the heart of many applications with power quality concerns due to their reactive power controllability. However, the major drawback is their sensitivity to grid disturbances. Moreover, when VSCs are used in DG applications, voltage unbalance may be intolerable. The current protection may trip due to current unbalance or due to overcurrent. In this paper, a vector current controller for a VSC connected to the grid through an LCL-filter is developed with the main focus on producing symmetrical and balanced currents in case of unbalanced voltage dips. Implementing this controller helps the VSC system not to trip during voltage dips."} {"_id": "5e54f1d05c94357df0af05a7bbc79eaf4bc61144", "title": "Marketing malpractice: the cause and the cure.", "text": "Ted Levitt used to tell his Harvard Business School students, \"People don't want a quarter-inch drill--they want a quarter-inch hole.\" But 35 years later, marketers are still thinking in terms of products and ever-finer demographic segments. The structure of a market, as seen from customers' point of view, is very simple. When people need to get a job done, they hire a product or service to do it for them.
The marketer's task is to understand what jobs periodically arise in customers' lives for which they might hire products the company could make. One job, the \"I-need-to-send-this-from-here-to-there-with-perfect-certainty-as-fast-as-possible\" job, has existed practically forever. Federal Express designed a service to do precisely that--and do it wonderfully again and again. The FedEx brand began popping into people's minds whenever they needed to get that job done. Most of today's great brands--Crest, Starbucks, Kleenex, eBay, and Kodak, to name a few--started out as just this kind of purpose brand. When a purpose brand is extended to products that target different jobs, it becomes an endorser brand. But, over time, the power of an endorser brand will surely erode unless the company creates a new purpose brand for each new job, even as it leverages the endorser brand as an overall marker of quality. Different jobs demand different purpose brands. New growth markets are created when an innovating company designs a product and then positions its brand on a job for which no optimal product yet exists. In fact, companies that historically have segmented and measured markets by product categories generally find that when they instead segment by job, their market is much larger (and their current share much smaller) than they had thought. This is great news for smart companies hungry for growth."} {"_id": "26a6b9ed436ebf5fc7b214af840d625685f89203", "title": "Cubrick : A Scalable Distributed MOLAP Database for Fast Analytics", "text": "This paper describes the architecture and design of Cubrick, a distributed multidimensional in-memory database that enables real-time data analysis of large dynamic datasets. Cubrick has a strictly multidimensional data model composed of dimensions, dimensional hierarchies and metrics, supporting sub-second MOLAP operations such as slice and dice, roll-up and drill-down over terabytes of data. All data stored in Cubrick is chunked in every dimension and stored within containers called bricks in an unordered and sparse fashion, providing high data ingestion ratios and indexed access through every dimension. In this paper, we describe details about Cubrick\u2019s internal data structures, distributed model, query execution engine and a few details about the current implementation. Finally, we present some experimental results found in a first Cubrick deployment inside Facebook."} {"_id": "2d6a270c21cee7305aec08e61b11121467f25b2f", "title": "A comparative study of HTM and other neural network models for online sequence learning with streaming data", "text": "Online sequence learning from streaming data is one of the most challenging topics in machine learning. Neural network models represent promising candidates for sequence learning due to their ability to learn and recognize complex temporal patterns. In this paper, we present a comparative study of Hierarchical Temporal Memory (HTM), a neurally-inspired model, and other feedforward and recurrent artificial neural network models on both artificial and real-world sequence prediction problems. HTM and long short-term memory (LSTM) give the best prediction accuracy. HTM additionally demonstrates many other features that are desirable for real-world sequence learning, such as fast adaptation to changes in the data stream, robustness to sensor noise and fault tolerance.
These features make HTM an ideal candidate for online sequence learning problems."} {"_id": "7a736b7347fc5ea93c196ddfe0630ecddc17d324", "title": "Multirate Multimodal Video Captioning", "text": "Automatically describing videos with natural language is a crucial challenge of video understanding. Compared to images, videos have specific spatial-temporal structure and various modality information. In this paper, we propose a Multirate Multimodal Approach for video captioning. Considering that the speed of motion in videos varies constantly, we utilize a Multirate GRU to capture the temporal structure of videos. It encodes video frames with different intervals and has a strong ability to deal with motion speed variance. As videos contain different modality cues, we design a particular multimodal fusion method. By incorporating visual, motion, and topic information together, we construct a well-designed video representation. Then the video representation is fed into an RNN-based language model for generating natural language descriptions. We evaluate our approach for video captioning on \"Microsoft Research - Video to Text\" (MSR-VTT), a large-scale video benchmark for video understanding. Our approach achieves strong performance in the 2nd MSR Video to Language Challenge."} {"_id": "6b87864dd6846ea3167a3f1dcf8e28a1cbc85000", "title": "Animal-assisted therapy for persons with aphasia: A pilot study.", "text": "This study explored the effects and effectiveness of animal-assisted therapy (AAT) for persons with aphasia. Three men with aphasia from left-hemisphere strokes participated in this study. The men received one semester of traditional therapy followed by one semester of AAT. While both therapies were effective, in that each participant met his goals, no significant differences existed between test results following traditional speech-language therapy versus AAT. Results of a client-satisfaction questionnaire, however, indicated that each of the participants was more motivated, enjoyed the therapy sessions more, and felt that the atmosphere of the sessions was lighter and less stressed during AAT compared with traditional therapy."} {"_id": "709f7a6b870cb07a4eab553adf6345b244913913", "title": "NoSQL Databases and Data Modeling Techniques for a Document-oriented NoSQL Database", "text": "NoSQL databases are an important component of Big Data for storing and retrieving large volumes of data. Traditional Relational Database Management Systems (RDBMS) use the ACID properties for data consistency, whereas NoSQL Databases use a non-transactional approach called BASE. RDBMS scale vertically and NoSQL Databases can scale both horizontally (sharding) and vertically. Four types of NoSQL databases are Document-oriented, Key-Value Pairs, Column-oriented and Graph. Data modeling for Document-oriented databases is similar to data modeling for traditional RDBMS during the conceptual and logical modeling phases. However, for a physical data model, entities can be combined (denormalized) by using embedding. What was once called a foreign key in a traditional RDBMS is now called a reference in a Document-oriented NoSQL database."} {"_id": "13c4ac1bd5511671a658efd8f3ceec572cc53b5f", "title": "Institutionally Distributed Deep Learning Networks", "text": "Deep learning has become a promising approach for automated medical diagnoses. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance.
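One heuristic studied in this setting, cyclical weight transfer (detailed in the abstract as it continues below), can be sketched as a training loop in which only model weights travel between sites; the `model.fit(data, epochs=...)` interface here is an assumption, not a specific framework API:

```python
def cyclical_weight_transfer(model, institution_loaders, cycles=10, epochs_per_visit=1):
    """Pass one model around the institutions in a fixed order, training
    briefly at each site; the weights travel, the patient data never does."""
    for _ in range(cycles):
        for site_data in institution_loaders:  # e.g., one private dataset per hospital
            model.fit(site_data, epochs=epochs_per_visit)
    return model
```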
However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In such cases, sharing a deep learning model is a more attractive alternative. The best method of performing such a task is unclear, however. In this study, we simulate the dissemination of deep learning network models across four institutions using various heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in three independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance (testing accuracy = 77.3%) that was closest to that of centrally hosted patient data (testing accuracy = 78.7%). We also found that there is an improvement in the performance of the cyclical weight transfer heuristic with a high frequency of weight transfer."} {"_id": "d1f7ec3085b71f584ada50e817ef7c14337953e2", "title": "A Chebyshev-Response Step-Impedance Phase-Inverter Rat-Race Coupler Directly on Lossy Silicon Substrate and Its Gilbert Mixer Application", "text": "This paper focuses on the analysis and the design methodology of the step-impedance phase-inverter rat-race coupler on a silicon-based process. The issues of impedance limitation and bandwidth are discussed in detail. Our proposed concept utilizes a high silicon dielectric constant, phase-inverter structure, step-impedance technique, and Chebyshev response to make the rat-race coupler more compact (~ 64% reduction) and highly balanced over a wide operating bandwidth. Moreover, the inter-digital coplanar stripline used in the step-impedance section effectively reduces the characteristic impedance of the transmission line for large size shrinkage and insertion-loss reduction. The demonstrated step-impedance rat-race coupler directly on silicon substrate has 6-7 dB of insertion loss from 5 to 15 GHz and small variations in amplitude/phase balance. Compared with our previous work, the proposed rat-race coupler achieves a 3-dB improvement in the insertion loss. Thus, a 0.13-\u03bcm CMOS Gilbert down-converter with a miniature phase-inverter rat-race coupler at the RF path for wideband single-to-differential signal conversion achieves a noise figure of 16 dB."} {"_id": "033cc3784a60115d758a11a765e764b86aca336c", "title": "The Visual Hull Concept for Silhouette-Based Image Understanding", "text": "Many algorithms for both identifying and reconstructing a 3-D object are based on the 2-D silhouettes of the object. In general, identifying a nonconvex object using a silhouette-based approach implies neglecting some features of its surface as identification clues. The same features cannot be reconstructed by volume intersection techniques using multiple silhouettes of the object. This paper addresses the problem of finding which parts of a nonconvex object are relevant for silhouette-based image understanding. For this purpose, the geometric concept of visual hull of a 3-D object is introduced. The visual hull of a 3-D object S is the closest approximation of S that can be obtained with the volume intersection approach. An equivalent statement, relative to object identification, is that the visual hull of S is the maximal object silhouette-equivalent to S, i.e., the maximal object that can be substituted for S without affecting any silhouette.
Only the parts of the surface of S that also lie on the surface of the visual hull can be reconstructed or identified using silhouette-based algorithms. The visual hull of an object depends not only on the object itself but also on the region allowed to the viewpoint. Two main viewing regions can be considered, resulting in the external and internal visual hull. In the former case, the viewing region is related to the convex hull of S; in the latter, it is bounded by S itself. The internal visual hull also admits an interpretation not related to silhouettes: the features of the surface of S that are not coincident with the surface of the internal visual hull cannot be observed from any viewpoint lying outside the convex hull. After a general discussion of the visual hull and its properties, algorithms for computing the internal and external visual hulls of 2-D objects and 3-D planar face objects are presented and their complexity analyzed. In general, the visual hull of a 3-D planar face object turns out to be bounded by planar and curved patches. A precise statement of the concept of visual hull appears to be novel, as is the problem of its computation."} {"_id": "a98e5856091999922ec7150efefe25bbfeb2aecf", "title": "Big data, smart cities and city planning", "text": "I define big data with respect to its size but pay particular attention to the fact that the data I am referring to is urban data, that is, data for cities that are invariably tagged to space and time. I argue that this sort of data is largely being streamed from sensors, and this represents a sea change in the kinds of data that we have about what happens where and when in cities. I describe how the growth of big data is shifting the emphasis from longer term strategic planning to short-term thinking about how cities function and can be managed, although with the possibility that over much longer periods of time, this kind of big data will become a source for information about every time horizon. By way of conclusion, I illustrate the need for new theory and analysis with respect to 6 months of smart travel card data of individual trips on Greater London's public transport systems."} {"_id": "17650831f1900b849fd1914d02337e1d006aea0c", "title": "Maglev: A Fast and Reliable Software Network Load Balancer", "text": "Maglev is Google\u2019s network load balancer. It is a large distributed software system that runs on commodity Linux servers. Unlike traditional hardware network load balancers, it does not require a specialized physical rack deployment, and its capacity can be easily adjusted by adding or removing servers. Network routers distribute packets evenly to the Maglev machines via Equal Cost Multipath (ECMP); each Maglev machine then matches the packets to their corresponding services and spreads them evenly to the service endpoints. To accommodate high and ever-increasing traffic, Maglev is specifically optimized for packet processing performance. A single Maglev machine is able to saturate a 10 Gbps link with small packets. Maglev is also equipped with consistent hashing and connection tracking features to minimize the negative impact of unexpected faults and failures on connection-oriented protocols. Maglev has been serving Google\u2019s traffic since 2008.
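The consistent hashing mentioned in the Maglev abstract above has a published table-population scheme: each backend fills a prime-sized lookup table by walking its own permutation of the slots. The sketch below follows that scheme in miniature; the table size, hash construction, and backend names are illustrative only.

```python
import hashlib

M = 13  # lookup-table size; the real system uses a much larger prime

def _h(s: str, seed: str) -> int:
    return int(hashlib.md5((seed + s).encode()).hexdigest(), 16)

def maglev_table(backends):
    offset = {b: _h(b, "offset") % M for b in backends}
    skip = {b: _h(b, "skip") % (M - 1) + 1 for b in backends}
    next_j = {b: 0 for b in backends}
    table, filled = [None] * M, 0
    while filled < M:
        for b in backends:           # backends take turns claiming slots
            while True:              # walk b's permutation to a free slot
                slot = (offset[b] + next_j[b] * skip[b]) % M
                next_j[b] += 1
                if table[slot] is None:
                    table[slot] = b
                    filled += 1
                    break
            if filled == M:
                break
    return table

# A packet's connection hash indexes the table to pick its endpoint;
# removing one backend disturbs only a small fraction of slots.
table = maglev_table(["backend-a", "backend-b", "backend-c"])
```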
It has sustained the rapid global growth of Google services, and it also provides network load balancing for Google Cloud Platform."} {"_id": "1c610c122c0c848ba8defbcb40ce9f4dd55de7d9", "title": "Augmented Reality Tracking in Natural Environments", "text": "Tracking, or camera pose determination, is the main technical challenge in creating augmented realities. Constraining the degree to which the environment may be altered to support tracking heightens the challenge. This paper describes several years of work at the USC Computer Graphics and Immersive Technologies (CGIT) laboratory to develop self-contained, minimally intrusive tracking systems for use in both indoor and outdoor settings. These hybrid-technology tracking systems combine vision and inertial sensing with research in fiducial design, feature detection, motion estimation, recursive filters, and pragmatic engineering to satisfy realistic application requirements."} {"_id": "728bbc615aececb184b07899d973fad62269dc21", "title": "Word Epoch Disambiguation: Finding How Words Change Over Time", "text": "In this paper we introduce the novel task of \u201cword epoch disambiguation,\u201d defined as the problem of identifying changes in word usage over time. Through experiments run using word usage examples collected from three major periods of time (1800, 1900, 2000), we show that the task is feasible, and significant differences can be observed between occurrences of words in different periods of time."} {"_id": "15386c1d34870f028927d02d5608581d02e589a1", "title": "How to model fake news", "text": "Over the past three years it has become evident that fake news is a danger to democracy. However, until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for the modelling of fake news and its impact in elections and referendums are introduced. The first approach, based on the idea of a representative voter, is shown to be suitable for obtaining a qualitative understanding of phenomena associated with fake news at a macroscopic level. The second approach, based on the idea of an election microstructure, describes the collective behaviour of the electorate by modelling the preferences of individual voters. It is shown through a simulation study that the mere knowledge that pieces of fake news may be in circulation goes a long way towards mitigating the impact of fake news."} {"_id": "f98990356a62e05af16993a5fc355a7e675a3320", "title": "Penoscrotal plication as a uniform approach to reconstruction of penile curvature.", "text": "OBJECTIVE\nTo present our 4-year experience of using a minimally invasive technique, penoscrotal plication (PSP), as a uniform treatment for men with debilitating penile curvature resulting from Peyronie's disease.\n\n\nPATIENTS AND METHODS\nIn 48 men (median age 58.7 years) with penile curvature, the penis was reconstructed by imbricating the tunica albuginea opposite the curvature with multiple nonabsorbable sutures. All patients, regardless of the degree or direction of curvature, were approached through a small penoscrotal incision made without degloving the penis.
Detailed measurements of penile shaft angle and stretched penile length were recorded and analysed before and after reconstruction, and the numbers of sutures required for correction were documented.\n\n\nRESULTS\nNearly all patients had dorsal and/or lateral deformities that were easily corrected via a ventral penoscrotal incision. The median (range) degree of correction was 28 (18-55) degrees and the number of sutures used was 6 (4-17). Stretched penile length measurements before and after plication showed no significant difference. A single PSP procedure was successful in 45/48 (93%) patients; two were dissatisfied with the correction, one having repeat plication and the other a penile prosthesis; one other required a suture release for pain.\n\n\nCONCLUSIONS\nPSP is safe and effective and should be considered even for cases with severe or biplanar curvature."} {"_id": "d12f2be30367f85eb9ca54313939bf07f6482b0a", "title": "Non-sexually related acute genital ulcers in 13 pubertal girls: a clinical and microbiological study.", "text": "OBJECTIVE\nTo describe the clinical and microbiological features of acute genital ulcers (AGU), which have been reported in virgin adolescents, predominantly in girls.\n\n\nDESIGN\nDescriptive study. We collected data on the clinical features, sexual history, blood cell count, biochemistry, microbiological workup, and 1-year follow-up.\n\n\nSETTING\nDepartments of dermatology of 3 university hospitals in Paris.\n\n\nPATIENTS\nThirteen immunocompetent female patients with a first flare of non-sexually transmitted AGU.\n\n\nMAIN OUTCOME MEASURES\nClinical and microbiological data, using a standardized form.\n\n\nRESULTS\nMean age was 16.6 years (range, 11-19 years). Eleven patients denied previous sexual contact. A fever or flulike symptoms preceded AGU in 10 of the 13 patients (77%), with a mean delay of 3.8 days before the AGU onset (range, 0-10 days). The genital ulcers were bilateral in 10 patients. The final diagnosis was Epstein-Barr virus primary infection in 4 patients (31%) and Beh\u00e7et disease in 1 patient (8%). No other infectious agents were detected in this series.\n\n\nCONCLUSIONS\nWe recommend serologic testing for Epstein-Barr virus with IgM antibodies to viral capsid antigens in non-sexually related AGU in immunocompetent patients. Further microbiological studies are required to identify other causative agents."} {"_id": "1c1406a873ac215ac4ccfdb37d9876e7902e080c", "title": "Tangible Query Interfaces: Physically Constrained Tokens for Manipulating Database Queries", "text": "We present a new approach for using physically constrained tokens to express, manipulate, and visualize parameterized database queries. This method extends tangible interfaces to enable interaction with large aggregates of information. We describe two interface prototypes that use physical tokens to represent database parameters. These tokens are manipulated upon physical constraints, which map compositions of tokens onto interpretations including database queries, views, and Boolean operations. We propose a framework for \u201ctoken + constraint\u201d interfaces, and compare one of our prototypes with a comparable graphical interface in a preliminary user study.
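As a purely hypothetical rendering of the "token + constraint" idea from the tangible query interfaces record above, one can model each physical token as a parameter range and a constraint rack as a rule that composes the tokens it holds into a query; every name below is invented for illustration.

```python
def tokens_to_query(rack):
    """Interpret tokens placed together on one rack as a conjunctive query."""
    clauses = [f"{t['param']} BETWEEN {t['lo']} AND {t['hi']}" for t in rack]
    return "SELECT * FROM listings WHERE " + " AND ".join(clauses)

rack = [
    {"param": "price", "lo": 100_000, "hi": 250_000},  # one physical token
    {"param": "year_built", "lo": 1980, "hi": 2000},   # a second token
]
print(tokens_to_query(rack))
# SELECT * FROM listings WHERE price BETWEEN 100000 AND 250000
#   AND year_built BETWEEN 1980 AND 2000
```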
"} {"_id": "e3ff5f811a2d10aee18edb808d64051cb1a1f642", "title": "OpenIoT: An open service framework for the Internet of Things", "text": "The Internet of Things (IoT) has been a hot topic for the future of computing and communication. It will not only have a broad impact on our everyday life in the near future, but also create a new ecosystem involving a wide array of players such as device developers, service providers, software developers, network operators, and service users. In this paper, we present an open service framework for the Internet of Things, facilitating entrance into the IoT-related mass market, and establishing a global IoT ecosystem with the worldwide use of IoT devices and software. We expect that the open IoT service framework we proposed will play an important role in the widespread adoption of the Internet of Things in our everyday life, not only enhancing our quality of life with a large number of innovative applications and services, but also offering endless opportunities to all of the stakeholders in the world of information and communication technologies."} {"_id": "f0982dfd3071d33296c22a4c38343887dd5b2a9b", "title": "A visual analytics agenda", "text": "Researchers have made significant progress in disciplines such as scientific and information visualization, statistically based exploratory and confirmatory analysis, data and knowledge representations, and perceptual and cognitive sciences. Although some research is being done in this area, the pace at which new technologies and technical talents are becoming available is far too slow to meet the urgent need. The National Visualization and Analytics Center's goal is to advance the state of the science to enable analysts to detect the expected and discover the unexpected from massive and dynamic information streams and databases consisting of data of multiple types and from multiple sources, even though the data are often conflicting and incomplete. Visual analytics is a multidisciplinary field that includes the following focus areas: (i) analytical reasoning techniques, (ii) visual representations and interaction techniques, (iii) data representations and transformations, (iv) techniques to support production, presentation, and dissemination of analytical results. The R&D agenda for visual analytics addresses technical needs for each of these focus areas, as well as recommendations for speeding the movement of promising technologies into practice. This article provides only a concise summary of the R&D agenda. We encourage reading, discussion, and debate as well as active innovation toward the agenda for visual analysis."} {"_id": "5264ae4ea4411426ddd91dc780c2892c3ff933d3", "title": "An Introduction to Variable and Feature Selection", "text": "Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data.
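Of the objectives listed in the variable and feature selection record above, feature ranking is the simplest to illustrate. A minimal univariate ranking by absolute Pearson correlation, not any specific method from the special issue, might look like this:

```python
import numpy as np

def rank_features(X, y):
    """Rank columns of X by absolute Pearson correlation with target y."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]  # most informative feature first

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 3] + rng.normal(scale=0.1, size=200)  # only feature 3 matters
print(rank_features(X, y))  # feature 3 should be ranked first
```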
The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods."} {"_id": "a1420a6c619d2572109abfb4a387f70c2fc998ff", "title": "Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I.", "text": "BACKGROUND\nAs part of an interdisciplinary study of medical injury and malpractice litigation, we estimated the incidence of adverse events, defined as injuries caused by medical management, and of the subgroup of such injuries that resulted from negligent or substandard care.\n\n\nMETHODS\nWe reviewed 30,121 randomly selected records from 51 randomly selected acute care, nonpsychiatric hospitals in New York State in 1984. We then developed population estimates of injuries and computed rates according to the age and sex of the patients as well as the specialties of the physicians.\n\n\nRESULTS\nAdverse events occurred in 3.7 percent of the hospitalizations (95 percent confidence interval, 3.2 to 4.2), and 27.6 percent of the adverse events were due to negligence (95 percent confidence interval, 22.5 to 32.6). Although 70.5 percent of the adverse events gave rise to disability lasting less than six months, 2.6 percent caused permanently disabling injuries and 13.6 percent led to death. The percentage of adverse events attributable to negligence increased in the categories of more severe injuries (Wald test \u03c7\u00b2 = 21.04, P less than 0.0001). Using weighted totals, we estimated that among the 2,671,863 patients discharged from New York hospitals in 1984 there were 98,609 adverse events and 27,179 adverse events involving negligence. Rates of adverse events rose with age (P less than 0.0001). The percentage of adverse events due to negligence was markedly higher among the elderly (P less than 0.01). There were significant differences in rates of adverse events among categories of clinical specialties (P less than 0.0001), but no differences in the percentage due to negligence.\n\n\nCONCLUSIONS\nThere is a substantial amount of injury to patients from medical management, and many injuries are the result of substandard care."} {"_id": "74a08b42f7cd4208dc39e44a74515917febea252", "title": "Developing a car gesture interface for use as a secondary task", "text": "Existing gesture-interface research has centered on controlling the user's primary task. This paper explores the use of gestures to control secondary tasks while the user is focused on driving. Through contextual inquiry, ten iterative prototypes, and a Wizard of Oz experiment, we show that a gesture interface is a viable alternative for completing secondary tasks in the car."} {"_id": "818064efbe870746d3fa6e3b4e208c4f37a6847a", "title": "Modularized Morphing of Neural Networks", "text": "In this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network.
To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process can be formulated as a graph transformation problem. Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified."} {"_id": "a0a9390e14beb38c504473c3adc857f8faeaebd2", "title": "Face Detection using Digital Image Processing", "text": "This paper presents a technique for automatically detecting human faces in digital color images. This is a two-step process which first detects regions containing human skin in the color image and then extracts information from these regions which might indicate the location of a face in the image. The skin detection is performed using a skin filter which relies on color and texture information. The face detection is performed on a grayscale image containing only the detected skin areas. A combination of thresholding and mathematical morphology is used to extract object features that would indicate the presence of a face. The face detection process works predictably and fairly reliably, as test results show."} {"_id": "1c32fb359bef96c6cdbc4b668bb7e365538a5047", "title": "Regularized linear and kernel redundancy analysis", "text": "Redundancy analysis (RA) is a versatile technique used to predict multivariate criterion variables from multivariate predictor variables. The reduced-rank feature of RA captures redundant information in the criterion variables in a most parsimonious way. A ridge type of regularization was introduced in RA to deal with the multicollinearity problem among the predictor variables. The regularized linear RA was extended to nonlinear RA using a kernel method to enhance the predictability. The usefulness of the proposed procedures was demonstrated by a Monte Carlo study and through the analysis of two real data sets."} {"_id": "742ae20de07ae03c907f3c5f68cac26d95a8121d", "title": "A design process for embedding knowledge management in everyday work", "text": "Knowledge Management Software must be embedded in processes of knowledge workers' everyday practice. In order to attain a seamless design, regarding the special qualities and requirements of knowledge work, detailed studies of the existing work processes and analysis of the used knowledge are necessary. Participation of the knowledge owners and future users is an important factor for success of knowledge management systems. In this paper we describe characteristics of knowledge work motivating the usage of participatory design techniques. We suggest a design process for developing or improving knowledge management, which includes ethnographic surveys, user participation in cyclic improvement, scenario-based design, and the use of multiple design artifacts and documents. Finally we explain the benefits of our approach.
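The skin-filter-plus-morphology pipeline in the face detection record above can be approximated with a few array operations. The RGB thresholds below are one common rule of thumb from the skin detection literature, not the paper's exact filter, and the texture cue is omitted.

```python
import numpy as np
from scipy import ndimage

def skin_mask(rgb):
    """Coarse skin segmentation: an RGB threshold rule followed by
    morphological opening to remove isolated false-positive pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = ((r > 95) & (g > 40) & (b > 20)
            & (r - np.minimum(g, b) > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
    return ndimage.binary_opening(mask, structure=np.ones((3, 3)))
```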
The paper is based on a case study we carried out to design and introduce a knowledge management system in a training company."} {"_id": "9699af44d3d0c9a727fe13cab3088ada6ae66876", "title": "The Development Process of the Semantic Web and Web Ontology", "text": "This paper deals with the semantic web and web ontology. The existing ontology development processes are not catered towards casual web ontology development, a notion analogous to standard web page development. Ontologies have become common on the World-Wide Web [2]. Key features of this process include easy and rapid creation of ontological skeletons, searching and linking to existing ontologies and a natural language-based technique to improve presentation of ontologies [6]. Ontologies, however, vary greatly in size, scope and semantics. They can range from generic upper-level ontologies to domain-specific schemas. The success of the Semantic Web is based on the existence of numerous distributed ontologies, using which users can annotate their data, thereby enabling shared machine readable content. This paper elaborates the stages in a casual ontology development process. Keywords: toolkits; ontology; semantic web; language-based; web ontology."} {"_id": "5e47d3fd6447b292f7fbe572fc17ebffa487809f", "title": "Building upon RouteFlow: an SDN development experience", "text": "RouteFlow is a platform for providing virtual IP routing services in OpenFlow networks. During the first year of development, we came across some use cases that might be interesting to pursue in addition to a number of lessons learned worth sharing. In this paper, we will discuss identified requirements and architectural and implementation changes made to shape RouteFlow into a more robust solution for Software-Defined Networking (SDN). This paper addresses topics of interest to the SDN community, such as development issues involving layered applications on top of network controllers, ease of configuration, and network visualization. In addition, we will present the first publicly known use case with multiple, heterogeneous OpenFlow controllers to implement a centralized routing control function, demonstrating how IP routing as a service can be provided for different network domains under a single central control. Finally, performance comparisons and a real testbed were used as means of validating the implementation."} {"_id": "f243961ec694c5510ec0a7f71fd4596c58195bdc", "title": "PhysioDroid: Combining Wearable Health Sensors and Mobile Devices for a Ubiquitous, Continuous, and Personal Monitoring", "text": "Technological advances in the development of mobile devices, medical sensors, and wireless communication systems support a new generation of unobtrusive, portable, and ubiquitous health monitoring systems for continuous patient assessment and more personalized health care. There exists a growing number of mobile apps in the health domain; however, little contribution has been specifically provided, so far, to operate this kind of app with wearable physiological sensors. The PhysioDroid, presented in this paper, provides a personalized means to remotely monitor and evaluate users' conditions. The PhysioDroid system provides ubiquitous and continuous vital signs analysis, such as electrocardiogram, heart rate, respiration rate, skin temperature, and body motion, intended to help empower patients and improve clinical understanding.
The PhysioDroid is composed of a wearable monitoring device and an Android app providing gathering, storage, and processing features for the physiological sensor data. The versatility of the developed app allows its use for both average users and specialists, and the reduced cost of the PhysioDroid puts it within the reach of most people. Two exemplary use cases for health assessment and sports training are presented to illustrate the capabilities of the PhysioDroid. Next technical steps include generalization to other mobile platforms and health monitoring devices."} {"_id": "249b8988c4f43315ccfaef3d1d3ebda67494d2f0", "title": "Wideband planar monopole antennas with dual band-notched characteristics", "text": "Wideband planar monopole antennas with dual band-notched characteristics are presented. The proposed antenna consists of a wideband planar monopole antenna and multiple cup-, cap-, and inverted L-shaped slots, producing band-notched characteristics. In order to generate dual band-notched characteristics, we propose nine types of planar monopole antennas, which have two or three cap (cup or inverted L)-shaped slots in the radiator. This technique is suitable for creating ultra-wideband antennas with narrow frequency notches or for creating multiband antennas."} {"_id": "c0b2c8817eacdeb7809d82e5aa1edb1cd5836938", "title": "Optimal Operation of Distribution Feeders in Smart Grids", "text": "This paper presents a generic and comprehensive distribution optimal power flow (DOPF) model that can be used by local distribution companies (LDCs) to integrate their distribution system feeders into a Smart Grid. The proposed three-phase DOPF framework incorporates detailed modeling of distribution system components and considers various operating objectives. Phase-specific and voltage-dependent modeling of customer loads in the three-phase DOPF model allows LDC operators to determine realistic operating strategies that can improve the overall feeder efficiency. The proposed distribution system operation objective is based on the minimization of the energy drawn from the substation while seeking to minimize the number of switching operations of load tap changers and capacitors. A novel method for solving the three-phase DOPF model by transforming the mixed-integer nonlinear programming problem to a nonlinear programming problem is proposed which reduces the computational burden and facilitates its practical implementation and application. Two practical case studies, including a real distribution feeder test case, are presented to demonstrate the features of the proposed methodology. The results illustrate the benefits of the proposed DOPF in terms of reducing energy losses while limiting the number of switching operations."} {"_id": "01eff5a77d72b34ea2dfab434f82efee91827519", "title": "Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).", "text": "BACKGROUND\nThe Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors.
The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military.\n\n\nMETHODS\nThe HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA.\n\n\nRESULTS\nInvestigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research.\n\n\nCONCLUSION\nThese results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains."} {"_id": "1b645b3a2366baeb6e6b485ec5b414fc0827fb86", "title": "A 0.5-to-2.5 Gb/s Reference-Less Half-Rate Digital CDR With Unlimited Frequency Acquisition Range and Improved Input Duty-Cycle Error Tolerance", "text": "A reference-less highly digital half-rate clock and data recovery (CDR) circuit with improved tolerance to input duty cycle error is presented. Using a chain of frequency dividers, the proposed frequency detector produces a known sub-harmonic tone from the incoming random data. A digital frequency-locked loop uses the extracted tone, and drives the oscillator to any sub-rate of the input data frequency. The early/late outputs of a conventional half-rate bang-bang phase detector are used to determine the duty-cycle error in the incoming random data and adjust the oscillator clock phases to maximize receiver timing margins. Fabricated in 0.13 \u03bcm CMOS technology, the prototype digital CDR operates without any errors from 0.5 Gb/s to 2.5 Gb/s. At 2 Gb/s, the prototype consumes 6.1 mW power from a 1.2 V supply. The proposed clock-phase calibration is capable of correcting up to \u00b120% of input data duty-cycle error."} {"_id": "97325a06de27b59c431a37b03c88f33e3a789f31", "title": "Applying data mining techniques in job recommender system for considering candidate job preferences", "text": "Job recommender systems must attain a high level of accuracy while making predictions relevant to the candidate, as it is a very tedious task to periodically explore the thousands of jobs posted on the web. Although many job recommender systems exist that use different strategies, here efforts have been made to base the job recommendations on the candidate's profile matching while also preserving the candidate's job behavior or preferences. Firstly, rules predicting the general preferences of the different user groups are mined. Then the job recommendations to the target candidate are made on the basis of content-based matching as well as candidate preferences, which are preserved either in the form of mined rules or obtained by the candidate's own applied-jobs history.
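The combination described in the job recommender record above, content-based profile matching adjusted by mined preferences, can be sketched as a scoring function. The cosine matching is standard; the additive preference bonus and its weight are hypothetical simplifications of the rule-based component.

```python
import numpy as np

def recommend(candidate_vec, job_vecs, preferred_ids, top_k=5, bonus=0.1):
    """Rank jobs by cosine similarity to the candidate profile, then
    boost jobs that match the candidate's mined preferences."""
    sims = job_vecs @ candidate_vec / (
        np.linalg.norm(job_vecs, axis=1) * np.linalg.norm(candidate_vec) + 1e-12)
    boost = np.array([bonus if j in preferred_ids else 0.0
                      for j in range(len(job_vecs))])
    return np.argsort(sims + boost)[::-1][:top_k]

rng = np.random.default_rng(2)
jobs = rng.random((20, 8))   # feature vectors for 20 posted jobs
me = rng.random(8)           # candidate profile vector
print(recommend(me, jobs, preferred_ids={3, 7}))
```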
Through this technique, a significant gain in accuracy has been achieved over other basic job recommendation methods."} {"_id": "f394fc319d90a6227ce87504d882245c6c342cd2", "title": "A Simple Flood Forecasting Scheme Using Wireless Sensor Networks", "text": "This paper presents a forecasting model designed using WSNs (Wireless Sensor Networks) to predict floods in rivers using simple and fast calculations to provide real-time results and save the lives of people who may be affected by the flood. Our prediction model uses multiple-variable robust linear regression, which is easy to understand, simple and cost-effective to implement, and fast, with low resource utilization, yet it provides real-time predictions with reliable accuracy, features which are desirable in any real-world algorithm. Our prediction model is independent of the number of parameters, i.e. any number of parameters may be added or removed based on the on-site requirements. When the water level rises, we represent it using a polynomial whose nature is used to determine if the water level may exceed the flood line in the near future. We compare our work with a contemporary algorithm to demonstrate our improvements over it. Then we present our simulation results for the predicted water level compared to the actual water level."} {"_id": "559caee18178518a655004979d2dc26c969d10c6", "title": "Test Case Prioritization Using Requirements-Based Clustering", "text": "The importance of using requirements information in the testing phase has been well recognized by the requirements engineering community, but to date, a vast majority of regression testing techniques have primarily relied on software code information. Incorporating requirements information into the current testing practice could help software engineers identify the source of defects more easily, validate the product against requirements, and maintain software products in a holistic way. In this paper, we investigate whether the requirements-based clustering approach that incorporates traditional code analysis information can improve the effectiveness of test case prioritization techniques. To investigate the effectiveness of our approach, we performed an empirical study using two Java programs with multiple versions and requirements documents. Our results indicate that the use of requirements information during the test case prioritization process can be beneficial."} {"_id": "38a62849b31996ff3d3595e505227973e9e3a562", "title": "Teaching and working with robots as a collaboration", "text": "New applications for autonomous robots bring them into the human environment where they are to serve as helpful assistants to untrained users in the home or office, or work as capable members of human-robot teams for security, military, and space efforts. These applications require robots to be able to quickly learn how to perform new tasks from natural human instruction, and to perform tasks collaboratively with human teammates. Using joint intention theory as our theoretical framework, our approach integrates learning and collaboration through a goal-based task structure. Specifically, we use collaborative discourse with accompanying gestures and social cues to teach a humanoid robot a structurally complex task.
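The polynomial rising-water check described in the flood forecasting record above is straightforward to sketch: fit a low-degree polynomial to recent readings and extrapolate it against the flood line. The degree, horizon, and ordinary (non-robust) least-squares fit here are simplifying assumptions.

```python
import numpy as np

def flood_alert(levels, flood_line, horizon=6, degree=2):
    """Fit a polynomial to recent water-level readings and flag an
    alert if the extrapolated level crosses the flood line."""
    t = np.arange(len(levels))
    coeffs = np.polyfit(t, levels, degree)  # least-squares fit
    future_t = np.arange(len(levels), len(levels) + horizon)
    forecast = np.polyval(coeffs, future_t)
    return bool((forecast > flood_line).any()), forecast

alert, forecast = flood_alert([2.1, 2.3, 2.8, 3.4, 4.1], flood_line=5.0)
print(alert, forecast.round(2))  # a rising trend triggers an early warning
```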
Having learned the representation for the task, the robot then performs it shoulder-to-shoulder with a human partner, using social communication acts to dynamically mesh its plans with those of its partner, according to the relative capabilities of the human and the robot."} {"_id": "87660054e271de3a284cdb654d989a0197bd6051", "title": "Sensitive Lifelogs: A Privacy Analysis of Photos from Wearable Cameras", "text": "While media reports about wearable cameras have focused on the privacy concerns of bystanders, the perspectives of the `lifeloggers' themselves have not been adequately studied. We report on additional analysis of our previous in-situ lifelogging study in which 36 participants wore a camera for a week and then reviewed the images to specify privacy and sharing preferences. In this Note, we analyze the photos themselves, seeking to understand what makes a photo private, what participants said about their images, and what we can learn about privacy in this new and very different context where photos are captured automatically by one's wearable camera. We find that these devices record many moments that may not be captured by traditional (deliberate) photography, with camera owners concerned about impression management and protecting private information of both themselves and bystanders."} {"_id": "27f366b733ba0f75a93c06d5d7f0d1e06b467a4c", "title": "Foundations of Logic Programming", "text": ""} {"_id": "af6bb0fef60a068f6930ab22397b56ac1ad39026", "title": "Evaluating faces on trustworthiness: an extension of systems for recognition of emotions signaling approach/avoidance behaviors.", "text": "People routinely make various trait judgments from facial appearance, and such judgments affect important social outcomes. These judgments are highly correlated with each other, reflecting the fact that valence evaluation permeates trait judgments from faces. Trustworthiness judgments best approximate this evaluation, consistent with evidence about the involvement of the amygdala in the implicit evaluation of face trustworthiness. Based on computer modeling and behavioral experiments, I argue that face evaluation is an extension of functionally adaptive systems for understanding the communicative meaning of emotional expressions. Specifically, in the absence of diagnostic emotional cues, trustworthiness judgments are an attempt to infer behavioral intentions signaling approach/avoidance behaviors. Correspondingly, these judgments are derived from facial features that resemble emotional expressions signaling such behaviors: happiness and anger for the positive and negative ends of the trustworthiness continuum, respectively. The emotion overgeneralization hypothesis can explain highly efficient but not necessarily accurate trait judgments from faces, a pattern that appears puzzling from an evolutionary point of view and also generates novel predictions about brain responses to faces. Specifically, this hypothesis predicts a nonlinear response in the amygdala to face trustworthiness, confirmed in functional magnetic resonance imaging (fMRI) studies, and dissociations between processing of facial identity and face evaluation, confirmed in studies with developmental prosopagnosics. 
I conclude with some methodological implications for the study of face evaluation, focusing on the advantages of formally modeling representation of faces on social dimensions."} {"_id": "631483c15641c3652377f66c8380ff684f3e365c", "title": "Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures", "text": "This paper introduces a novel approach for generating videos called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW). Sync-DRAW can also perform text-to-video generation which, to the best of our knowledge, makes it the first approach of its kind. It combines a Variational Autoencoder (VAE) with a Recurrent Attention Mechanism in a novel manner to create a temporally dependent sequence of frames that are gradually formed over time. The recurrent attention mechanism in Sync-DRAW attends to each individual frame of the video in synchronization, while the VAE learns a latent distribution for the entire video at the global level. Our experiments with Bouncing MNIST, KTH and UCF-101 suggest that Sync-DRAW is efficient in learning the spatial and temporal information of the videos and generates frames with high structural integrity, and can generate videos from simple captions on these datasets."} {"_id": "14e54c29e986977dd0537ef694fad0fa6eb862f6", "title": "How schema and novelty augment memory formation", "text": "Information that is congruent with existing knowledge (a schema) is usually better remembered than less congruent information. Only recently, however, has the role of schemas in memory been studied from a systems neuroscience perspective. Moreover, incongruent (novel) information is also sometimes better remembered. Here, we review lesion and neuroimaging findings in animals and humans that relate to this apparently paradoxical relationship between schema and novelty. In addition, we sketch a framework relating key brain regions in medial temporal lobe (MTL) and medial prefrontal cortex (mPFC) during encoding, consolidation and retrieval of information as a function of its congruency with existing information represented in neocortex. An important aspect of this framework is the efficiency of learning enabled by congruency-dependent MTL-mPFC interactions."} {"_id": "e3530c67ed2294ce1a72b424d5e38c95552ba15f", "title": "Neural scene representation and rendering", "text": "Scene representation\u2014the process of converting visual sensory data into concise descriptions\u2014is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them."} {"_id": "1a74a968e64e4e8381a6a6ddc7f6f7a7599ce4ee", "title": "Detecting sources of computer viruses in networks: theory and experiment", "text": "We provide a systematic study of the problem of finding the source of a computer virus in a network.
We model virus spreading in a network with a variant of the popular SIR model and then construct an estimator for the virus source. This estimator is based upon a novel combinatorial quantity which we term rumor centrality. We establish that this is an ML estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has non-trivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an Internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops in different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding virus sources in networks which are not tree-like."} {"_id": "53d1f9f7c77ffb0af729c173c35f099550f27f6e", "title": "Rumor centrality: a universal source detector", "text": "We consider the problem of detecting the source of a rumor (information diffusion) in a network based on observations about which set of nodes possess the rumor. In a recent work [10], this question was introduced and studied. The authors proposed rumor centrality as an estimator for detecting the source. They establish it to be the maximum likelihood estimator with respect to the popular Susceptible Infected (SI) model with exponential spreading time for regular trees. They showed that as the size of the infected graph increases, for a line (2-regular tree) graph, the probability of source detection goes to 0 while for d-regular trees with d \u2265 3 the probability of detection, say \u03b1_d, remains bounded away from 0 and is less than 1/2. Their results, however, stop short of providing insights for the heterogeneous setting such as irregular trees or the SI model with non-exponential spreading times.\n This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between a multi-type continuous time branching process (an equivalent representation of a generalized Polya's urn, cf. [1]) and the effectiveness of rumor centrality. Through this, it is possible to quantify the detection probability precisely. As a consequence, we recover all the results of [10] as a special case and more importantly, we obtain a variety of results establishing the universality of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution."} {"_id": "71b64035eacf2986ca7c153ebec72eb781a23931", "title": "Learning to Discover Social Circles in Ego Networks", "text": "Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. \u2018circles\u2019 on Google+, and \u2018lists\u2019 on Facebook and Twitter), however they are laborious to construct and must be updated whenever a user\u2019s network grows.
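For a tree, the rumor centrality described in the two records above has a closed form: R(v) = n! / prod_u T_u^v, where T_u^v is the size of the subtree rooted at u when the tree is rooted at v. A log-space sketch (to avoid factorial overflow) on a small adjacency-list tree:

```python
import math

def log_rumor_centrality(tree, root):
    """log R(root) = log n! - sum over nodes of log(subtree size),
    computed with an iterative DFS over {node: [neighbors]}."""
    parent, order, stack = {root: None}, [], [root]
    while stack:
        u = stack.pop()
        order.append(u)
        for w in tree[u]:
            if w != parent[u]:
                parent[w] = u
                stack.append(w)
    sizes = {}
    for u in reversed(order):  # children are processed before parents
        sizes[u] = 1 + sum(sizes[w] for w in tree[u] if w != parent[u])
    n = sizes[root]
    return math.lgamma(n + 1) - sum(math.log(sizes[u]) for u in order)

# The ML source estimate is the node maximizing the score.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
source_estimate = max(tree, key=lambda v: log_rumor_centrality(tree, v))
```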
We define a novel machine learning task of identifying users\u2019 social circles. We pose the problem as a node clustering problem on a user\u2019s ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership in multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter for all of which we obtain hand-labeled ground truth."} {"_id": "9b90cb4aea40677494e4a3913878e355c4ae56e8", "title": "Collective dynamics of \u2018small-world\u2019 networks", "text": "Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks \u2018rewired\u2019 to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them \u2018small-world\u2019 networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices."} {"_id": "00d23e5c06f90bed0c9d4aec22babb2f7488817f", "title": "Link Prediction via Matrix Factorization", "text": "We propose to solve the link prediction problem in graphs using a supervised matrix factorization approach. The model learns latent features from the topological structure of a (possibly directed) graph, and is shown to make better predictions than popular unsupervised scores. We show how these latent features may be combined with optional explicit features for nodes or edges, which yields better performance than using either type of feature exclusively. Finally, we propose a novel approach to address the class imbalance problem which is common in link prediction by directly optimizing for a ranking loss. Our model is optimized with stochastic gradient descent and scales to large graphs. Results on several datasets show the efficacy of our approach."} {"_id": "917ec490106e81e69b3c8a39a04db811079d0b55", "title": "LHCP and RHCP Substrate Integrated Waveguide Antenna Arrays for Millimeter-Wave Applications", "text": "Left-hand and right-hand circularly polarized (LHCP and RHCP) substrate integrated waveguide (SIW) antenna arrays are presented at the 28-GHz band for millimeter-wave (mm-wave) applications. Two types of circularly polarized (CP) antenna elements are designed to achieve respective LHCP and RHCP performance.
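The "regular networks rewired to introduce increasing amounts of disorder" in the small-world record above is the well-known ring-lattice rewiring construction; a compact sketch follows. For brevity it only guards against duplicates among already-placed edges, which suffices for small p.

```python
import random

def small_world(n, k, p, seed=0):
    """Ring lattice: n nodes, each joined to its k nearest neighbors on
    one side; every edge is then rewired with probability p."""
    rng = random.Random(seed)
    edges = {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}
    out = set()
    for (u, v) in edges:
        if rng.random() < p:                 # keep u, pick a fresh endpoint
            w = rng.randrange(n)
            while w == u or (u, w) in out or (w, u) in out:
                w = rng.randrange(n)
            out.add((u, w))
        else:
            out.add((u, v))
    return out

# p = 0 leaves the regular lattice, p = 1 is essentially random; small
# p already gives short path lengths while clustering stays high.
g = small_world(n=20, k=2, p=0.1)
```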
Eight-element LHCP and RHCP antenna arrays have been implemented with feeding networks and measured. Based on the measurement, the LHCP and RHCP antenna arrays have impedance bandwidths of 1.54 and 1.7 GHz within $|S_{11}| < -10\\ \\text{dB}$, whereas the 3-dB axial-ratio bandwidths are 1.1 and 1.3 GHz, respectively. The fabricated LHCP and RHCP antenna arrays achieve gains of up to 13.09 and 13.52 dBi, respectively. Most of the measured results are validated with the simulated ones. The proposed CP antenna arrays can provide low-cost, broadband characteristics, and high-gain radiation performance with CP properties for mm-wave applications."} {"_id": "afdc55d29c3d976b1bbab75fce4f37526ae7977f", "title": "A Review of Substrate Integrated Waveguide End-Fire Antennas", "text": "Substrate integrated waveguide (SIW) is a planar waveguide structure, which offers the advantages of low profile, ease of fabrication, low insertion loss, and compatibility with other planar circuits. SIW end-fire antennas have drawn broad interest due to potential applications in aircraft, missile, and radar systems. However, this planar structure suffers from narrow bandwidth due to severe impedance mismatch at the radiating aperture. Meanwhile, the narrow radiating aperture of SIW end-fire antennas also deteriorates the radiation performance. This paper presents a detailed review of the most recent research efforts concerning the improvement of antenna performances. They are discussed and classified into three different categories from the aspect of polarization properties: horizontally polarized, vertically polarized, and circularly polarized SIW end-fire antennas. Some practical difficulties for the development of SIW end-fire antennas are pointed out and effective approaches are also provided. A wide variety of antenna examples are presented with respect to theoretical and experimental results."} {"_id": "876446e5c9eaaf1b54746270abdb690db98b748f", "title": "Feature Mining for Localised Crowd Counting", "text": "This paper presents a multi-output regression model for crowd counting in public scenes. Existing counting by regression methods either learn a single model for global counting, or train a large number of separate regressors for localised density estimation. In contrast, our single regression model based approach is able to estimate people count in spatially localised regions and is more scalable without the need for training a large number of regressors proportional to the number of local regions. In particular, the proposed model automatically learns the functional mapping between interdependent low-level features and multi-dimensional structured outputs. The model is able to discover the inherent importance of different features for people counting at different spatial locations. Extensive evaluations on an existing crowd analysis benchmark dataset and a new more challenging dataset demonstrate the effectiveness of our approach."} {"_id": "590ed266f720d04e8c5b2af3a5a9d8c86c24880d", "title": "Trajectory Bundling for Animated Transitions", "text": "Animated transition has been a popular design choice for smoothly switching between different visualization views or layouts, in which movement trajectories are created as cues for tracking objects during location shifting. Tracking moving objects, however, becomes difficult when their movement paths overlap or the number of tracking targets increases. We propose a novel design to facilitate tracking moving objects in animated transitions.
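A stripped-down stand-in for the localized counting model in the crowd counting record above is multi-output ridge regression, which maps one feature vector to a vector of per-region counts in closed form. It ignores the structured interdependence the paper models, so it is only a baseline sketch.

```python
import numpy as np

def multi_output_ridge(X, Y, lam=1.0):
    """W = (X^T X + lam*I)^{-1} X^T Y for features X (n x d) and
    per-region count targets Y (n x r)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))                     # low-level frame features
Y = X @ rng.normal(size=(30, 4)) + 0.1 * rng.normal(size=(500, 4))
W = multi_output_ridge(X, Y)
pred = X @ W                                       # counts for 4 regions
```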
Instead of simply animating an object along a straight line, we create \"bundled\" movement trajectories for a group of objects that have spatial proximity and share similar moving directions. To study the effect of bundled trajectories, we untangle variations due to different aspects of tracking complexity in a comprehensive controlled user study. The results indicate that using bundled trajectories is particularly effective when tracking more targets (six vs. three targets) or when the object movement involves a high degree of occlusion or deformation. Based on the study, we discuss the advantages and limitations of the new technique, as well as provide design implications."} {"_id": "5b0644069bd4a2fcf9e3272307f68c2002d2122f", "title": "A Taxonomy of Domain-Specific Aspect Languages", "text": "Domain-Specific Aspect Languages (DSALs) are Domain-Specific Languages (DSLs) designed to express crosscutting concerns. Compared to DSLs, their aspectual nature greatly amplifies the language design space. We structure this space in order to shed light on and compare the different domain-specific approaches to deal with crosscutting concerns. We report on a corpus of 36 DSALs covering the space, discuss a set of design considerations, and provide a taxonomy of DSAL implementation approaches. This work serves as a frame of reference to DSAL and DSL researchers, enabling further advances in the field, and to developers as a guide for DSAL implementations."} {"_id": "04fa47f1d3983bacfea1e3c838cf868f9b73dc58", "title": "Convolutional face finder: a neural architecture for fast and robust face detection", "text": "In this paper, we present a novel face detection approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns, rotated up to \u00b120 degrees in the image plane and turned up to \u00b160 degrees, in complex real-world images. The proposed system automatically synthesizes simple problem-specific feature extractors from a training set of face and nonface patterns, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the face pattern to analyze. The face detection procedure acts like a pipeline of simple convolution and subsampling modules that treat the raw input image as a whole. We therefore show that an efficient face detection system does not require any costly local preprocessing before classification of image areas. The proposed scheme provides a very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases. We present extensive experimental results illustrating the efficiency of the proposed approach on difficult test sets and including an in-depth sensitivity analysis with respect to the degrees of variability of the face patterns."} {"_id": "cf8f95458591e072835c4372c923e3087754a484", "title": "Markov Logic Mixtures of Gaussian Processes: Towards Machines Reading Regression Data", "text": "We propose a novel mixtures of Gaussian processes model in which the gating function is interconnected with a probabilistic logical model, in our case Markov logic networks. In this way, the resulting mixed graphical model, called Markov logic mixtures of Gaussian processes (MLxGP), solves joint Bayesian non-parametric regression and probabilistic relational inference tasks.
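The "pipeline of simple convolution and subsampling modules" in the convolutional face finder record above reduces to two primitives. This numpy sketch uses random (untrained) kernels purely to show the data flow, not the paper's learned feature extractors.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D correlation, adequate for illustration."""
    kh, kw = kernel.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def conv_subsample(img, kernel):
    """One convolution + nonlinearity + 2x2 average subsampling stage."""
    fmap = np.tanh(conv2d_valid(img, kernel))
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    return fmap[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.random.rand(36, 32)                      # raw grayscale window
stage1 = conv_subsample(img, np.random.randn(5, 5) * 0.1)
stage2 = conv_subsample(stage1, np.random.randn(3, 3) * 0.1)
```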
In turn, MLxGP facilitates novel, interesting tasks such as regression based on logical constraints or drawing probabilistic logical conclusions about regression data, thus putting \u201cmachines reading regression data\u201d in reach."} {"_id": "bd151b04739107e09c646933f2abd3ecb3e976c9", "title": "On the Spectral Bias of Neural Networks", "text": "Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we show that deep ReLU networks are biased towards low frequency functions, meaning that they cannot have local fluctuations without affecting their global behavior. Intuitively, this property is in line with the observation that over-parameterized networks find simple patterns that generalize across data samples. We also investigate how the shape of the data manifold affects expressivity by showing evidence that learning high frequencies gets easier with increasing manifold complexity, and present a theoretical understanding of this behavior. Finally, we study the robustness of the frequency components with respect to parameter perturbation, to develop the intuition that the parameters must be finely tuned to express high frequency functions."} {"_id": "3d0dd69d6f49d3b91fad795509c86918b7e4142c", "title": "Structured light in scattering media", "text": "Virtually all structured light methods assume that the scene and the sources are immersed in pure air and that light is neither scattered nor absorbed. Recently, however, structured lighting has found growing application in underwater and aerial imaging, where scattering effects cannot be ignored. In this paper, we present a comprehensive analysis of two representative methods - light stripe range scanning and photometric stereo - in the presence of scattering. For both methods, we derive physical models for the appearances of a surface immersed in a scattering medium. Based on these models, we present results on (a) the condition for object detectability in light striping and (b) the number of sources required for photometric stereo. In both cases, we demonstrate that while traditional methods fail when scattering is significant, our methods accurately recover the scene (depths, normals, albedos) as well as the properties of the medium. These results are in turn used to restore the appearances of scenes as if they were captured in clear air. Although we have focused on light striping and photometric stereo, our approach can also be extended to other methods such as grid coding, gated and active polarization imaging."} {"_id": "e99e5c845d6bda14490a0dacd0d7143c20320e2b", "title": "Conceptualizing Sources in Online News", "text": "This study attempts a new conceptualization of communication \u201csources\u201d by proposing a typology of sources that would apply not only to traditional media but also to new online media. Ontological rationale for the distinctions in the typology is supplemented by psychological evidence via an experiment that investigated the effects of different types of source attributions upon receivers\u2019 perception of online news content. Participants (N = 48) in a 4-condition, between-participants experiment read 6 identical news stories each through an online service. 
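The low-frequency-first behavior claimed in the spectral bias record above can be observed with a toy experiment: fit a two-frequency target with a small ReLU network and watch the residual spectrum. Network width, learning rate, and step counts below are arbitrary illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)[:, None]
y = np.sin(2 * np.pi * 2 * x) + 0.5 * np.sin(2 * np.pi * 20 * x)  # k=2, k=20

W1 = rng.normal(size=(1, 128)); b1 = rng.normal(size=128)
W2 = rng.normal(size=(128, 1)) * 0.1
lr = 1e-3
for step in range(5001):
    h = np.maximum(x @ W1 + b1, 0.0)          # hidden ReLU features
    err = h @ W2 - y
    W2 -= lr * h.T @ err / len(x)             # output-layer gradient step
    g = (err @ W2.T) * (h > 0)                # backprop through ReLU
    W1 -= lr * x.T @ g / len(x); b1 -= lr * g.mean(0)
    if step % 1000 == 0:
        spec = np.abs(np.fft.rfft(err[:, 0])) # residual spectrum
        print(step, "k=2:", round(float(spec[2]), 3),
              "k=20:", round(float(spec[20]), 3))
# Typically the k=2 residual collapses well before the k=20 residual.
```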
Participants were told that the stories were selected by 1 of 4 sources: news editors, the computer terminal on which they were accessing the stories, other audience members (or users) of the online news service, or (using a pseudo-selection task) the individual user (self). After reading each online news story, all participants filled out a paper-and-pencil questionnaire indicating their perceptions of the story they had just read. In confirmation of the distinctions made in the typology, attribution of identical content to 4 different types of online sources was associated with significant variation in news story perception. Theoretical implications of the results as well as the typology are discussed."} {"_id": "823c9dc4ef36dd93079dfd640da59d3d6ff2cc22", "title": "Conceptualising user hedonic experience", "text": "For many years, the approach taken towards technology design focused on supporting the effective, efficient, and satisfying use of it. The way people use technology has shifted from merely using it to enjoying using it. This paper describes an early approach to understanding user experience in the context of technologies (e.g., digital cameras, PDAs, and mobile phones), in more general contexts such as physical activities (e.g., exercising, orienteering, and walking), and in the context of diaries. The focus of this paper is on hedonic user experience; that is, pleasure, enjoyment, excitement, and fun in the context of technology. This study provides insights into factors contributing to and influencing such experiences and the relationships between them."} {"_id": "34245763557c61fb18218543e1fb498970d32d1c", "title": "Factorized Geometrical Autofocus for Synthetic Aperture Radar Processing", "text": "This paper describes a factorized geometrical autofocus (FGA) algorithm, specifically suitable for ultrawideband synthetic aperture radar. The strategy is integrated in a fast factorized back-projection chain and relies on varying track parameters step by step to obtain a sharp image; focus measures are provided by an object function (intensity correlation). The FGA algorithm has been successfully applied on synthetic and real (Coherent All RAdio BAnd System II) data sets, i.e., with false track parameters introduced prior to processing, to set up constrained problems involving one geometrical quantity. Resolution (3 dB in azimuth and slant range) and peak-to-sidelobe ratio measurements in FGA images are comparable with reference results (within a few percent and tenths of a decibel), demonstrating the capacity to compensate for residual space variant range cell migration. The FGA algorithm is finally also benchmarked (visually) against the phase gradient algorithm to emphasize the advantage of a geometrical autofocus approach."} {"_id": "d2491e8f4e6d5a8d1380efda2ec327ed7afefa63", "title": "It's More than Just Sharing Game Play Videos! Understanding User Motives in Mobile Game Social Media", "text": "As mobile gaming has become increasingly popular in recent years, new forms of mobile game social media such as GameDuck that share mobile gameplay videos have emerged. In this work, we set out to understand the user motives of GameDuck by leveraging the Uses and Gratifications Theory. We first explore the major motive themes from users' responses (n=138) and generate motivation survey items. We then identify the key motivators by conducting exploratory factor analysis of the survey results (n=354). 
Finally, we discuss how this new social media relates to existing systems such as Twitch."} {"_id": "8326f86876adc98c72031de6c3d3d3fac0403175", "title": "Enterprise Architecture design for ensuring strategic business IT alignment (integrating SAMM with TOGAF 9.1)", "text": "Strategic business IT (Information Technology) alignment is one of the main objectives achieved through the implementation of Enterprise Architecture (EA) in an organization. EA helps organizations define an architecture of business, information systems, and technology capable of aligning business strategy with the IT organization, through the development of business models, strategies, and processes that are aligned with IT infrastructure, applications, and organizations. A good design of Enterprise Architecture should consider various viewpoints of IT alignment with the organization's business needs. This paper shows how to design an Enterprise Architecture that guarantees strategic business IT alignment, through the integration of SAMM components with the TOGAF 9.1 metamodel."} {"_id": "073daaf4f6fa972d3bdee3c4e4510d21dc934dfb", "title": "Machine learning - a probabilistic perspective", "text": "\u201cKevin Murphy excels at unraveling the complexities of machine learning methods while motivating the reader with a stream of illustrated examples and real-world case studies. The accompanying software package includes source code for many of the figures, making it both easy and very tempting to dive in and explore these methods for yourself. A must-buy for anyone interested in machine learning or curious about how to extract useful knowledge from big data.\u201d John Winn, Microsoft Research"} {"_id": "afe6e5010ae932071eb6288dd0b02347c5a7f6b1", "title": "Algorithms for the variable sized bin packing problem", "text": "In this paper, we consider the variable sized bin packing problem, where the objective is to minimize the total cost of used bins when the cost per unit size of each bin does not increase as the bin size increases. Two greedy algorithms are described and analyzed in three special cases: (a) the sizes of items and bins are divisible, respectively, (b) only the sizes of bins are divisible, and (c) the sizes of bins are not divisible. Here, we say that a list of numbers a_1, a_2, ..., a_m is divisible when a_j exactly divides a_{j-1}, for each 1 < j <= m. In the case of (a), the algorithms give optimal solutions, and in the case of (b), each algorithm gives a solution whose value is less than (11/9)C(B*) + 4/9, where C(B*) is the optimal value. In the case of (c), each algorithm gives a solution whose value is less than (3/2)C(B*) + 1."} {"_id": "aade79855b2c79d77ee48751fb946c35892fed9b", "title": "IoT system for Human Activity Recognition using BioHarness 3 and Smartphone", "text": "This paper presents an Internet of Things (IoT) approach to Human Activity Recognition (HAR) using remote monitoring of vital signs in the context of a healthcare system for self-managed chronic heart patients. Our goal is to create a HAR-IoT system that uses learning algorithms to infer the activity performed within 4 categories (lie, sit, walk, and run) as well as the time spent performing these activities, finally giving feedback during and after the activity. 
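An aside on the bin packing record above: the greedy bounds are easier to picture with a concrete routine. The following Python sketch is not the paper's exact algorithm; it packs items first-fit-decreasing into the largest bin size and then shrinks each bin to the cheapest capacity that still holds its contents, with cost taken to be proportional to the chosen capacity (an illustrative assumption).

```python
# Illustrative sketch, not the paper's algorithm: first-fit-decreasing into the
# largest bins, then shrink each bin to the cheapest capacity that still fits.
def greedy_vsbp(items, bin_sizes):
    """items: item sizes; bin_sizes: available capacities (unlimited supply).
    Assumes every item fits in the largest bin and cost ~ chosen capacity."""
    bins = []                                  # each bin: [capacity, remaining]
    largest = max(bin_sizes)
    for size in sorted(items, reverse=True):   # pack big items first
        for b in bins:
            if b[1] >= size:                   # first open bin that fits
                b[1] -= size
                break
        else:
            bins.append([largest, largest - size])
    for b in bins:                             # shrink to cheapest feasible size
        used = b[0] - b[1]
        b[0] = min(c for c in bin_sizes if c >= used)
    return sum(b[0] for b in bins)             # total cost under the assumption
```

Under the divisibility assumptions of cases (a) and (b) above, this kind of greedy provably stays close to the optimal cost C(B*).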
In this work, we provide a comprehensive insight into the implemented cloud-based system and the conclusions drawn after implementing two different learning algorithms, together with the results of the overall system with a view to larger deployments."} {"_id": "aa2ef24ebbe3f1afcd5d5506ee60508195a46c2b", "title": "Analysis and Design of an Input-Series Two-Transistor Forward Converter For High-Input Voltage Multiple-Output Applications", "text": "In this paper, an input-series two-transistor forward converter is proposed and investigated, which is aimed at high-input voltage multiple-output applications. In this converter, all of the switches are operating synchronously, and the input voltage sharing (IVS) of each series-module is achieved automatically by the coupling of primary windings of the common forward integrated transformer. The active IVS processes are analyzed based on the model of the forward integrated transformer. Through the influence analysis when the mismatches in various series-modules are considered, design principles of the key parameters in each series-module are discussed to suppress the input voltage difference. Finally, a 96-W laboratory-made prototype composed of two forward series-modules is built, and the feasibility of the proposed method and the theoretical analysis are verified by the experimental results."} {"_id": "af390f5ded307d8f548243163178d2db639b6e5c", "title": "Task-Driven Feature Pooling for Image Classification", "text": "Feature pooling is an important strategy to achieve high performance in image classification. However, most pooling methods are unsupervised and heuristic. In this paper, we propose a novel task-driven pooling (TDP) model to directly learn the pooled representation from data in a discriminative manner. Different from the traditional methods (e.g., average and max pooling), TDP is an implicit pooling method which elegantly integrates the learning of representations into the given classification task. The optimization of TDP can equalize the similarities between the descriptors and the learned representation, and maximize the classification accuracy. TDP can be combined with the traditional BoW models (coding vectors) or the recent state-of-the-art CNN models (feature maps) to achieve a much better pooled representation. Furthermore, a self-training mechanism is used to generate the TDP representation for a new test image. A multi-task extension of TDP is also proposed to further improve the performance. Experiments on three databases (Flower-17, Indoor-67 and Caltech-101) well validate the effectiveness of our models."} {"_id": "843ae23cbb088640a1697f887b4de82b8ba7a07d", "title": "Channel Attention and Multi-level Features Fusion for Single Image Super-Resolution", "text": "Convolutional neural networks (CNNs) have demonstrated superior performance in super-resolution (SR). However, most CNN-based SR methods neglect the different importance among feature channels or fail to take full advantage of the hierarchical features. To address these issues, this paper presents a novel recursive unit. Firstly, at the beginning of each unit, we adopt a compact channel attention mechanism to adaptively recalibrate the channel importance of input features. Then, the multi-level features, rather than only deep-level features, are extracted and fused. 
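The channel recalibration step in the super-resolution abstract above follows the general squeeze-and-excitation pattern. Below is a minimal PyTorch sketch of that pattern, offered as an assumption for illustration; the paper's exact "compact" variant is not specified here, and the reduction ratio is a hypothetical choice.

```python
import torch
import torch.nn as nn

# Minimal sketch of squeeze-and-excitation style channel attention, assuming
# the usual global-pool -> bottleneck MLP -> sigmoid gating design.
class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: per-channel statistic
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x):
        w = self.fc(self.pool(x))                      # excitation: channel weights
        return x * w                                   # recalibrate channel importance
```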
Additionally, we find that the model is forced to learn more details when the learnable upsampling method (i.e., transposed convolution) is used only on the residual branch, with bicubic interpolation on the identity branch, instead of using learnable upsampling on both branches. Experiments show that our method achieves competitive results compared with the state-of-the-art methods while maintaining a faster speed."} {"_id": "82566f380f61e835292e483cda84eb3d22e32cd4", "title": "Kolmogorov's theorem and multilayer neural networks", "text": "Taking advantage of techniques developed by Kolmogorov, we give a direct proof of the universal approximation capabilities of perceptron type networks with two hidden layers. From our proof we derive estimates of numbers of hidden units based on properties of the function being approximated and the accuracy of its approximation. Keywords: Feedforward neural networks, Multilayer perceptron type networks, Sigmoidal activation function, Approximations of continuous functions, Uniform approximation, Universal approximation capabilities, Estimates of number of hidden units, Modulus of continuity."} {"_id": "10236985b28470951de73f76d6fba5343d5f788f", "title": "Dynamic Bin Packing for On-Demand Cloud Resource Allocation", "text": "Dynamic Bin Packing (DBP) is a variant of classical bin packing, which assumes that items may arrive and depart at arbitrary times. Existing works on DBP generally aim to minimize the maximum number of bins ever used in the packing. In this paper, we consider a new version of the DBP problem, namely, the MinTotal DBP problem, which aims to minimize the total cost of the bins used over time. It is motivated by the request dispatching problem arising from cloud gaming systems. We analyze the competitive ratios of the modified versions of the commonly used First Fit, Best Fit, and Any Fit packing (the family of packing algorithms that open a new bin only when no currently open bin can accommodate the item to be packed) algorithms for the MinTotal DBP problem. We show that the competitive ratio of Any Fit packing cannot be better than \u03bc + 1, where \u03bc is the ratio of the maximum item duration to the minimum item duration. The competitive ratio of Best Fit packing is not bounded for any given \u03bc. For First Fit packing, if all the item sizes are smaller than 1/\u03b2 of the bin capacity (\u03b2 > 1 is a constant), the competitive ratio has an upper bound of \u03b2/(\u03b2-1)\u00b7\u03bc + 3\u03b2/(\u03b2-1) + 1. For the general case, the competitive ratio of First Fit packing has an upper bound of 2\u03bc + 7. We also propose a Hybrid First Fit packing algorithm that can achieve a competitive ratio no larger than (5/4)\u03bc + 19/4 when \u03bc is not known, and can achieve a competitive ratio no larger than \u03bc + 5 when \u03bc is known."} {"_id": "bd2afb3933cc68270327fbbd7d446fd86b5a7806", "title": "Adaptation of Word Vectors using Tree Structure for Visual Semantics", "text": "We propose a framework of word-vector adaptation, which makes vectors of visually similar concepts close to each other. Here, word vectors are real-valued vector representations of words, e.g., the word2vec representation. Our basic idea is to assume that each concept has some hypernyms that are important in determining its visual features. For example, for a concept Swallow with hypernyms Bird, Animal, and Entity, we believe Bird is the most important, since birds share common visual features such as their feathers. 
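To make the MinTotal objective concrete, here is a hypothetical Python event-loop sketch of First Fit with departures, where cost is the total time bins stay open; the event format and unit capacity are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of First Fit dynamic bin packing with departures: a new
# bin opens only when no open bin can fit the arriving item, and cost is the
# accumulated time during which bins are open (the MinTotal objective).
def first_fit_dynamic(events, capacity=1.0):
    """events: list of (time, 'arrive'/'depart', item_id, size), time-sorted."""
    bins = []                # each bin: dict item_id -> size
    where = {}               # item_id -> bin index
    bin_seconds = 0.0        # total cost: sum over open bins of time open
    last_t = None
    for t, kind, item, size in events:
        if last_t is not None:
            bin_seconds += (t - last_t) * sum(1 for b in bins if b)
        last_t = t
        if kind == 'arrive':
            for i, b in enumerate(bins):
                if sum(b.values()) + size <= capacity:  # first open bin that fits
                    b[item] = size
                    where[item] = i
                    break
            else:
                bins.append({item: size})               # open a new bin
                where[item] = len(bins) - 1
        else:
            bins[where.pop(item)].pop(item)             # item departs its bin
    return bin_seconds
```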
Adapted word vectors are obtained for each word by taking a weighted sum of the given original word vector and its hypernym word vectors. Our weight optimization makes vectors of visually similar concepts close to each other by giving a large weight to such important hypernyms. We apply the adapted word vectors to zero-shot learning on the TRECVID 2014 semantic indexing dataset. We achieved a Mean Average Precision of 0.083, which is, to the best of our knowledge, the best performance achieved without using TRECVID training data."} {"_id": "16ae3881a409835e6e957f96233e7cfbab8481bc", "title": "Humanoid robot HRP-3", "text": "In this paper, the development of humanoid robot HRP-3 is presented. HRP-3, which stands for Humanoid Robotics Platform-3, is a human-size humanoid robot developed as the succeeding model of HRP-2. One of the features of HRP-3 is that its main mechanical and structural components are designed to prevent the penetration of dust or spray. Another is that its wrist and hand are newly designed to improve manipulation. Software for a humanoid robot in a real environment is also improved. We also include information on the mechanical features of HRP-3, together with the newly developed hand, as well as the technologies implemented in the HRP-3 prototype. Electrical features and some experimental results using HRP-3 are also presented."} {"_id": "a94e358289f956ee563242a5236f523548940328", "title": "Modeling and Simulation of Electric and Hybrid Vehicles", "text": "This paper discusses the need for modeling and simulation of electric and hybrid vehicles. Different modeling methods, such as the physics-based Resistive Companion Form technique and the Bond Graph method, are presented with powertrain component and system modeling examples. The modeling and simulation capabilities of existing tools such as Powertrain System Analysis Toolkit (PSAT), ADvanced VehIcle SimulatOR (ADVISOR), PSIM, and Virtual Test Bed are demonstrated through application examples. Since power electronics is indispensable in hybrid vehicles, the issue of numerical oscillations in dynamic simulations involving power electronics is also briefly addressed."} {"_id": "f72831ce1c4c253412310adc8cd602d90cc7973c", "title": "A Matlab-based modeling and simulation package for electric and hybrid electric vehicle design", "text": "This paper discusses a simulation and modeling package developed at Texas A&M University, V-Elph 2.01. V-Elph facilitates in-depth studies of electric vehicle (EV) and hybrid EV (HEV) configurations or energy management strategies through visual programming by creating components as hierarchical subsystems that can be used interchangeably as embedded systems. V-Elph is composed of detailed models of four major types of components: electric motors, internal combustion engines, batteries, and support components, which can be integrated to model and simulate drive trains having all-electric, series hybrid, and parallel hybrid configurations. V-Elph was written in the Matlab/Simulink graphical simulation language and is portable to most computer platforms. This paper also discusses the methodology for designing vehicle drive trains using the V-Elph package. An EV, a series HEV, a parallel HEV, and a conventional internal combustion engine (ICE) driven drive train have been designed using the simulation package. 
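The weighted-sum adaptation just described reduces to a few lines of numpy. In the sketch below the weight values are hypothetical placeholders, whereas the paper learns them so that visually similar concepts end up close together.

```python
import numpy as np

# Minimal sketch of the weighted-sum adaptation; weights here are illustrative
# placeholders, not the learned values from the paper.
def adapt_vector(word_vec, hypernym_vecs, weights):
    """word_vec: (d,) array; hypernym_vecs: list of (d,) arrays;
    weights: one weight for the word itself plus one per hypernym."""
    vecs = [np.asarray(word_vec)] + [np.asarray(v) for v in hypernym_vecs]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the result stays on scale
    return sum(wi * v for wi, v in zip(w, vecs))

# e.g., Swallow adapted toward its visually informative hypernym Bird:
# adapt_vector(v_swallow, [v_bird, v_animal, v_entity], [0.5, 0.4, 0.05, 0.05])
```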
Simulation results such as fuel consumption, vehicle emissions, and complexity are compared and discussed for each vehicle."} {"_id": "682c150f6a1ac49c355ad5ec992d8f3a364d9756", "title": "PSIM-based modeling of automotive power systems: conventional, electric, and hybrid electric vehicles", "text": "Automotive manufacturers have been taking advantage of simulation tools for modeling and analyzing various types of vehicles, such as conventional, electric, and hybrid electric vehicles. These simulation tools greatly assist engineers and researchers in reducing product-development cycle time, improving the quality of the design, and simplifying the analysis without costly and time-consuming experiments. In this paper, a modeling tool that has been developed to study automotive systems using the power electronics simulator (PSIM) software is presented. PSIM was originally designed for simulating power electronic converters and motor drives. This user-friendly simulation package is able to simulate electric/electronic circuits; however, it has no capability for simulating the entire system of an automobile. This paper discusses the validity of PSIM as an automotive simulation tool by creating module boxes for not only the electrical systems, but also the mechanical, energy-storage, and thermal systems of the vehicles. These modules include internal combustion engines, fuel converters, transmissions, torque couplers, and batteries. Once these modules are made and stored in the library, the user can make the car model either a conventional, an electric, or a hybrid vehicle at will, just by dragging and dropping modules onto a blank schematic page."} {"_id": "69c37d0ce5fbf3647eb6b60e23c080cf477183ec", "title": "Principles of object-oriented modeling and simulation with Modelica 2.1", "text": "Why should you wait for days to receive the Principles of Object-Oriented Modeling and Simulation with Modelica 2.1 book that you ordered? Why wait when you can get it faster? You can find the same book that you ordered right here. This is the book that you can receive directly after purchasing. This well-known book is widely sought after, and of course many people will try to own it. Why don't you become the first? Still confused about the way?"} {"_id": "83b2339158240eadced7ca712b07169ae24667ec", "title": "ADVISOR 2.1: a user-friendly advanced powertrain simulation using a combined backward/forward approach", "text": "ADVISOR 2.1 is the latest version of the National Renewable Energy Laboratory\u2019s advanced vehicle simulator. It was first developed in 1994 to support the U.S. Department of Energy hybrid propulsion system program, and is designed to be accurate, fast, flexible, easily sharable, and easy to use. This paper presents the model, focusing on its combination of forward- and backward-facing simulation approaches, and evaluates the model in terms of its design goals. ADVISOR predicts acceleration time to within 0.7% and energy use on the demanding US06 cycle to within 0.6% for an underpowered series hybrid vehicle (0-100 km/h in 20 s). ADVISOR simulates vehicle performance on standard driving cycles between 2.6 and 8.0 times faster than a representative forward-facing vehicle model. Due in large part to ADVISOR\u2019s powerful graphical user interface and Web presence, over 800 users have downloaded ADVISOR from 45 different countries. 
Many of these users have contributed their own component data to the ADVISOR library."} {"_id": "9b35f4dcaa1831e05adda9a8e37e6d590c79609b", "title": "Marital status and health: United States, 1999-2002.", "text": "OBJECTIVE\nThis report presents prevalence estimates by marital status for selected health status and limitations, health conditions, and health risk behaviors among U.S. adults, using data from the 1999-2002 National Health Interview Surveys (NHIS).\n\n\nMETHODS\nData for the U.S. civilian noninstitutionalized population were collected using computer-assisted personal interviews (CAPI). The household response rate for the NHIS was 88.7%. This report is based on a total of 127,545 interviews with sample adults aged 18 years and over, representing an overall response rate of 72.4% for the 4 years combined. Statistics were age-adjusted to the 2000 U.S. standard population. Marital status categories shown in this report are: married, widowed, divorced or separated, never married, and living with a partner.\n\n\nRESULTS\nRegardless of population subgroup (age, sex, race, Hispanic origin, education, income, or nativity) or health indicator (fair or poor health, limitations in activities, low back pain, headaches, serious psychological distress, smoking, or leisure-time physical inactivity), married adults were generally found to be healthier than adults in other marital status categories. Marital status differences in health were found in each of the three age groups studied (18-44 years, 45-64 years, and 65 years and over), but were most striking among adults aged 18-44 years. The one negative health indicator for which married adults had a higher prevalence was overweight or obesity. Married adults, particularly men, had high rates of overweight or obesity relative to adults in other marital status groups across most population subgroups studied. Never married adults were among the least likely to be overweight or obese."} {"_id": "a98d9815e6694971e755b462ad77a6c9d847b147", "title": "Synthetic therapeutic peptides: science and market.", "text": "The decreasing number of approved drugs produced by the pharmaceutical industry, which has been accompanied by increasing expenses for R&D, demands alternative approaches to increase pharmaceutical R&D productivity. This situation has contributed to a revival of interest in peptides as potential drug candidates. New synthetic strategies for limiting metabolism and alternative routes of administration have emerged in recent years and resulted in a large number of peptide-based drugs that are now being marketed. This review reports on the unexpected and considerable number of peptides that are currently available as drugs and the chemical strategies that were used to bring them into the market. As demonstrated here, peptide-based drug discovery could be a serious option for addressing new therapeutic challenges."} {"_id": "e845231c7ac288fdb354d136e9537ba8993fd1f0", "title": "H-Ras transfers from B to T cells via tunneling nanotubes", "text": "Lymphocytes form cell\u2013cell connections by various mechanisms, including intercellular networks through actin-supported long-range plasma membrane (PM) extensions, termed tunneling nanotubes (TNTs). In this study, we tested in vitro whether TNTs form between human antigen-presenting B cells and T cells following cell contact and whether they enable the transfer of PM-associated proteins, such as green fluorescent protein (GFP)-tagged H-Ras (GFP-H-Ras). 
To address this question, we employed advanced techniques, including cell trapping by optical tweezers and live-cell imaging by 4D spinning-disk confocal microscopy. First, we showed that TNTs can form after optically trapped, conjugated B and T cells are pulled apart. Next, we determined by measuring fluorescence recovery after photobleaching that GFP-H-Ras diffuses freely in the membrane of TNTs that form spontaneously between B and T cells during coculturing. Importantly, by 4D time-lapse imaging, we showed that GFP-H-Ras-enriched PM patches accumulate at the junction between TNTs and the T-cell body and subsequently transfer to the T-cell surface. Furthermore, the PM patches adopted by T cells were enriched for another B-cell-derived transmembrane receptor, CD86. As predicted, the capacity of GFP-H-Ras to transfer between B and T cells during coculturing was dependent on its normal post-translational lipidation and consequent PM anchorage. In summary, our data indicate that TNTs connecting B and T cells provide a hitherto undescribed route for the transfer of PM patches containing, for example, H-Ras from B to T cells."} {"_id": "a2ff5117ccd1eb3e42c6a606b8cecb4358d3ec84", "title": "SciMATE: A Novel MapReduce-Like Framework for Multiple Scientific Data Formats", "text": "Despite the popularity of MapReduce, there are several obstacles to applying it for developing scientific data analysis applications. Current MapReduce implementations require that data be loaded into specialized file systems, like the Hadoop Distributed File System (HDFS), whereas with the rapidly growing size of scientific datasets, reloading data into another file system or format is not feasible. We present a framework that allows scientific data in different formats to be processed with a MapReduce-like API. Our system is referred to as SciMATE, and is based on the MATE system developed at Ohio State. SciMATE is developed as a customizable system, which can be adapted to support processing on any of the scientific data formats. We have demonstrated the functionality of our system by creating instances that can process NetCDF and HDF5 formats as well as flat files. We have also implemented three popular data mining applications and have evaluated their execution with each of the three instances of our system."} {"_id": "c10226699acb5a778f3655cf093e8cd65a8adb42", "title": "Assessing the Impact of Real-time Ridesharing on Urban Traffic using Mobile Phone Data", "text": "Recently, smartphone-based technology has enabled ridesharing services to match customers making similar trips in real time for a reduced rate and minimal inconvenience. But what are the impacts of such services on city-wide congestion? The answer lies in whether or not ridesharing adds to vehicle traffic by diverting non-driving trips like walking, transit, or cycling, or reduces vehicle traffic by diverting trips otherwise made in private, single-occupancy cars or taxis. This research explores the impact of rideshare adoption on congestion using mobile phone data. We extract average daily origin-destination (OD) trips from mobile phone records and estimate the proportions of these trips made by auto and other non-auto travelers. Next, we match spatially and temporally similar trips, and assume a range of adoption rates for auto and non-auto users, in order to distill rideshare vehicle trips. 
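The format-agnostic processing described in the SciMATE abstract can be pictured as a MapReduce-like driver over pluggable readers. The adapter interface below is invented for illustration and is not SciMATE's actual API.

```python
from collections import defaultdict

# Hypothetical sketch of a MapReduce-like driver over pluggable format
# adapters, in the spirit of processing NetCDF/HDF5/flat files without
# reloading them into a specialized file system.
def run_job(adapter, mapper, reducer):
    """adapter() yields records from some scientific format."""
    groups = defaultdict(list)
    for record in adapter():                  # format-specific reading hidden here
        for key, value in mapper(record):     # map phase
            groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

def flat_file_adapter(path):
    """One illustrative adapter: whitespace-separated (name, value) lines."""
    def gen():
        with open(path) as f:
            for line in f:
                name, value = line.split()
                yield (name, float(value))
    return gen

# Example usage (assumes a file in the adapter's format exists):
# means = run_job(flat_file_adapter("data.txt"),
#                 mapper=lambda rec: [(rec[0], rec[1])],
#                 reducer=lambda k, vs: sum(vs) / len(vs))
```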
Finally, for several adoption scenarios, we evaluate the impacts on congestion network-wide."} {"_id": "0721f581323fa61548c25669a6963dcf7a74bee5", "title": "An overview of online trust: Concepts, elements, and implications", "text": "Lack of trust has been repeatedly identified as one of the most formidable barriers to people engaging in e-commerce, involving transactions in which financial and personal information is submitted to merchants via the Internet. The future of e-commerce is tenuous without a general climate of online trust. Building consumer trust on the Internet presents a challenge for online merchants and is a research topic of increasing interest and importance. This paper provides an overview of the nature and concepts of trust from multi-disciplinary perspectives, and it reviews relevant studies that investigate the elements of online trust. Also, a framework of trust-inducing interface design features articulated from the existing literature is presented. The design features were classified into four dimensions, namely (1) graphic design, (2) structure design, (3) content design, and (4) social-cue design. By applying the design features identified within this framework to e-commerce web site interfaces, online merchants might then anticipate fostering optimal levels of trust in their customers."} {"_id": "3e8723310b5b6f6ba0d0d61cc970c54ead5a8872", "title": "Exploring compassion: a meta-analysis of the association between self-compassion and psychopathology.", "text": "Compassion has emerged as an important construct in studies of mental health and psychological therapy. Although an increasing number of studies have explored relationships between compassion and different facets of psychopathology, there has as yet been no systematic review or synthesis of the empirical literature. We conducted a systematic search of the literature on compassion and mental health. We identified 20 samples from 14 eligible studies. All studies used the Neff Self Compassion Scale (Neff, 2003b). We employed meta-analysis to explore associations between self-compassion and psychopathology using random effects analyses of Fisher's Z, correcting for attenuation arising from scale reliability. We found a large effect size for the relationship between compassion and psychopathology of r=-0.54 (95% CI=-0.57 to -0.51; Z=-34.02; p<.0001). Heterogeneity was significant in the analysis. There was no evidence of significant publication bias. Compassion is an important explanatory variable in understanding mental health and resilience. Future work is needed to develop the evidence base for compassion in psychopathology, and explore correlates of compassion and psychopathology."} {"_id": "402f6f33fd2db4f17b591002723d793814362b1e", "title": "Autonomous boids", "text": "The classical work on bird-like objects by Reynolds [1] simulates the polarized motion of groups of oriented particles: bird-like objects, or simply boids. To do this, three steering vectors are introduced. Cohesion is the tendency of boids to stay in the center of the flock, alignment smoothes their velocities to similar values, and separation helps them to avoid mutual collisions. 
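The three steering rules just listed are compact enough to sketch directly; the neighbor radius and gains below are hypothetical tuning constants, not values from the paper.

```python
import numpy as np

# Minimal sketch of the three classic boids steering rules for one boid;
# radius and the k_* gains are illustrative tuning constants.
def boid_steering(i, pos, vel, radius=5.0, k_coh=0.01, k_ali=0.1, k_sep=0.05):
    """pos, vel: (n, 2) arrays of positions/velocities; returns steering for boid i."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    nbr = (d < radius) & (d > 0)                         # neighbors, excluding self
    if not nbr.any():
        return np.zeros(2)
    cohesion = pos[nbr].mean(axis=0) - pos[i]            # toward local flock center
    alignment = vel[nbr].mean(axis=0) - vel[i]           # match neighbors' velocity
    separation = np.sum((pos[i] - pos[nbr]) / d[nbr, None] ** 2, axis=0)  # push away
    return k_coh * cohesion + k_ali * alignment + k_sep * separation
```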
If no impetus is introduced, the boids wander somewhat"} {"_id": "9da22d5240223e08cfaefd65cbf0504cb983c284", "title": "Evaluation of Quranic text retrieval system based on manually indexed topics", "text": "This paper investigates the effectiveness of a state-of-the-art information retrieval (IR) system in the verse retrieval problem for Quranic text. The evaluation is based on manually indexed topics of the Quran that provide both the queries and the relevance judgments. Furthermore, the system is evaluated in both Malay and English environments. The performance of the system is measured based on the MAP, the precision at 1, 5 and 10, and the MRR scores. The results of the evaluation are promising, showing that the IR system has much potential for Quranic text retrieval."} {"_id": "6d06e485853d24f28855279bf7b13c45cb3cad31", "title": "Lyapunov-based control for switched power converters", "text": "The fundamental properties, such as passivity or incremental passivity, of the network elements making up a switched power converter are examined. The nominal open-loop operation of a broad class of such converters is shown to be stable in the large via a Lyapunov argument. The obtained Lyapunov function is then shown to be useful for designing globally stabilizing controls that include adaptive schemes for handling uncertain nominal parameters. Numerical simulations illustrate the application of this control approach in DC-DC converters."} {"_id": "788f341d02130e1807edf88c8c64a77e4096437e", "title": "Pancreas Segmentation in Abdominal CT Scan: A Coarse-to-Fine Approach", "text": "Deep neural networks have been widely adopted for automatic organ segmentation from CT-scanned images. However, the segmentation accuracy on some small organs (e.g., the pancreas) is sometimes unsatisfactory, arguably because deep networks are easily distracted by the complex and variable background region which occupies a large fraction of the input volume. In this paper, we propose a coarse-to-fine approach to deal with this problem. We train two deep neural networks using different regions of the input volume. The first one, the coarse-scaled model, takes the entire volume as its input. It is used for roughly locating the spatial position of the pancreas. The second one, the fine-scaled model, only sees a small input region covering the pancreas, thus eliminating the background noise and providing more accurate segmentation, especially around the boundary areas. At the testing stage, we first use the coarse-scaled model to roughly locate the pancreas, then adopt the fine-scaled model to refine the initial segmentation in an iterative manner to obtain increasingly better segmentation. We evaluate our algorithm on the NIH pancreas segmentation dataset with 82 volumes, and outperform the state-of-the-art [18] by more than 4%, measured by the Dice-S\u00f8rensen Coefficient (DSC). In addition, we report 62.43% DSC for our worst case, significantly better than the 34.11% reported in [18]."} {"_id": "e3fc67dfcf8e194f452fd734e4dfd99a53f2afeb", "title": "UNFOLD: a memory-efficient speech recognizer using on-the-fly WFST composition", "text": "Accurate, real-time Automatic Speech Recognition (ASR) requires huge memory storage and computational power. The main bottleneck in state-of-the-art ASR systems is the Viterbi search on a Weighted Finite State Transducer (WFST). The WFST is a graph-based model created by composing an Acoustic Model (AM) and a Language Model (LM) offline. 
Offline composition simplifies the implementation of a speech recognizer, as only one WFST has to be searched. However, the size of the composed WFST is huge, typically larger than a Gigabyte, resulting in a large memory footprint and memory bandwidth requirements.\n In this paper, we take a completely different approach and propose a hardware accelerator for speech recognition that composes the AM and LM graphs on-the-fly. In our ASR system, the fully-composed WFST is never generated in main memory. On the contrary, only the subset required for decoding each input speech fragment is dynamically generated from the AM and LM models. In addition to the direct benefits of this on-the-fly composition, the resulting approach is more amenable to further reduction in storage requirements through compression techniques.\n The resulting accelerator, called UNFOLD, performs the decoding in real-time using the compressed AM and LM models, and reduces the size of the datasets from more than one Gigabyte to less than 40 Megabytes, which can be very important in small form factor mobile and wearable devices.\n Besides, UNFOLD improves energy-efficiency by orders of magnitude with respect to CPUs and GPUs. Compared to a state-of-the-art Viterbi search accelerator, the proposed ASR system provides a 31x reduction in memory footprint and 28% energy savings on average."} {"_id": "548bc4203770450c21133bfb72c58f5fae0fbdf2", "title": "Visual-Inertial-Semantic Scene Representation for 3D Object Detection", "text": "We describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertials afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network. The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: Confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection."} {"_id": "6984f2aadd187051546e8e6ddb505460b1bc327a", "title": "Computational Intelligence in Sports: A Systematic Literature Review", "text": "Recently, data mining studies have been successfully conducted to estimate several parameters in a variety of domains. Data mining techniques have attracted the attention of the information industry and society as a whole, due to the large amount of data and the imminent need to turn it into useful knowledge. However, the effective use of data in some areas is still under development, as is the case in sports, which has seen only slight growth in recent years; consequently, many sports organizations have begun to see that there is a wealth of unexplored knowledge in the data extracted by them. Therefore, this article presents a systematic review of sports data mining. Covering the years 2010 to 2018, 31 studies were found on this topic. 
Based on these studies, we present the current panorama, themes, the databases used, proposals, algorithms, and research opportunities. Our findings provide a better understanding of the potential of sports data mining, besides motivating the scientific community to explore this timely and interesting topic."} {"_id": "e2176d557793b7e2b80d8e5ec945078441356eb8", "title": "Design of a distributed energy-efficient clustering algorithm for heterogeneous wireless sensor networks", "text": "Clustering algorithms are a key technique used to reduce energy consumption. They can increase the scalability and lifetime of the network. Energy-efficient clustering protocols should be designed for the characteristics of heterogeneous wireless sensor networks. We propose and evaluate a new distributed energy-efficient clustering scheme for heterogeneous wireless sensor networks, which is called DEEC. In DEEC, the cluster-heads are elected by a probability based on the ratio between the residual energy of each node and the average energy of the network. The epochs of being cluster-heads for nodes are different according to their initial and residual energy. The nodes with high initial and residual energy will have more chances to be the cluster-heads than the nodes with low energy. Finally, the simulation results show that DEEC achieves a longer lifetime and more effective messages than current important clustering protocols in heterogeneous environments."} {"_id": "b0fa1b8490d9f86de73556edc3a73e32019a9207", "title": "Reading the mind in cartoons and stories: an fMRI study of \u2018theory of mind\u2019 in verbal and nonverbal tasks", "text": "Previous functional imaging studies have explored the brain regions activated by tasks requiring 'theory of mind'--the attribution of mental states. Tasks used have been primarily verbal, and it has been unclear to what extent different results have reflected different tasks, scanning techniques, or genuinely distinct regions of activation. Here we report results from a functional magnetic resonance imaging (fMRI) study involving two rather different tasks, both designed to tap theory of mind. Brain activation during the theory of mind condition of a story task and a cartoon task showed considerable overlap, specifically in the medial prefrontal cortex (paracingulate cortex). These results are discussed in relation to the cognitive mechanisms underpinning our everyday ability to 'mind-read'."} {"_id": "c5098b6da33d0b966cb175a80bab70b829799a6b", "title": "A method for measuring helpfulness in online peer review", "text": "This paper describes an original method for evaluating peer review in online systems by calculating the helpfulness of an individual reviewer's response. We focus on the development of specific and machine-scoreable indicators for quality in online peer review."} {"_id": "c6c1ea09425f79bcef32491b785afc5d604222d0", "title": "Ultrawideband Metasurface Lenses Based on Off-Shifted Opposite Layers", "text": "Metasurfaces have been demonstrated to be a low-cost solution for the development of directive antennas at high frequency. One of the opportunities of metasurfaces is the possibility of producing planar lenses. However, these lenses usually present a narrow band of operation. This limitation on bandwidth is more restrictive when the required range of refractive index is high. Here, we present a novel implementation of metasurfaces with low dispersion that can be employed for the design of ultrawideband planar lenses. 
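The DEEC election rule described above can be sketched in a few lines; the reference probability p_opt below is a hypothetical constant, and the epoch/round bookkeeping of the full protocol is omitted.

```python
import random

# Sketch of a DEEC-style cluster-head election round: each node's election
# probability scales a reference probability p_opt by the ratio of its
# residual energy to the network's average energy.
def elect_cluster_heads(residual_energy, p_opt=0.1):
    """residual_energy: list of node energies; returns indices of elected heads."""
    e_avg = sum(residual_energy) / len(residual_energy)
    heads = []
    for i, e_i in enumerate(residual_energy):
        p_i = min(1.0, p_opt * e_i / e_avg)   # high-energy nodes are more likely
        if random.random() < p_i:
            heads.append(i)
    return heads
```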
The implementation consists of two mirrored metasurfaces that are displaced by exactly half a unit cell. This displacement produces two effects: it increases the equivalent refractive index of the fundamental mode and flattens its frequency dependence."} {"_id": "d5961c904a85aaf89bf1cd85727f4b3e4dc09fb5", "title": "Planning algorithms for s-curve trajectories", "text": "Although numerous studies on s-curve motion profiles have been carried out, to date there has been no systematic investigation of the general model of polynomial s-curve motion profiles. In this paper, the model of polynomial s-curve motion profiles is generalized in a recursive form. Based on that, a general algorithm is proposed to design s-curve trajectories with bounded jerk and time-optimal consideration. In addition, a special strategy for planning s-curve motion profiles using a trigonometric model is also presented. The algorithms are implemented on a linear motor system. Experimental results show the effectiveness and promising applicability of the algorithms in s-curve motion profiling."} {"_id": "28b3bac665673755aca40849ca467ea57b246bd3", "title": "Fibroblast Growth Factor Receptor 2 (FGFR2) Mutation Related Syndromic Craniosynostosis", "text": "Craniosynostosis results from the premature fusion of cranial sutures, with an incidence of 1 in 2,100-2,500 live births. The majority of cases are non-syndromic and involve single suture fusion, whereas syndromic cases often involve complex multiple suture fusion. The fibroblast growth factor receptor 2 (FGFR2) gene is perhaps the most extensively studied gene that is mutated in various craniosynostotic syndromes including Crouzon, Apert, Pfeiffer, Antley-Bixler, Beare-Stevenson cutis gyrata, Jackson-Weiss, Bent Bone Dysplasia, and Saethre-Chotzen-like syndromes. The majority of these mutations are missense mutations that result in constitutive activation of the receptor and downstream molecular pathways. Treatment involves a multidisciplinary approach with ultimate surgical fixation of the cranial deformity to prevent further sequelae. Understanding the molecular mechanisms has allowed for the investigation of different therapeutic agents that can potentially be used to prevent the disorders. Further research efforts are needed to better understand screening and effective methods of early intervention and prevention. Herein, the authors provide a comprehensive update on FGFR2-related syndromic craniosynostosis."} {"_id": "18422d0eca8b06e98807e2663a2d9aed683402b6", "title": "EpiFast: a fast algorithm for large scale realistic epidemic simulations on distributed memory systems", "text": "Large scale realistic epidemic simulations have recently become an increasingly important application of high-performance computing. We propose a parallel algorithm, EpiFast, based on a novel interpretation of the stochastic disease propagation in a contact network. We implement it using a master-slave computation model which allows scalability on distributed memory systems.\n EpiFast runs extremely fast for realistic simulations that involve: (i) large populations consisting of millions of individuals and their heterogeneous details, (ii) dynamic interactions between the disease propagation, the individual behaviors, and the exogenous interventions, as well as (iii) large number of replicated runs necessary for statistically sound estimates about the stochastic epidemic evolution. 
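The trigonometric strategy mentioned in the s-curve abstract can be illustrated with a cosine velocity blend, which has smooth acceleration and bounded jerk; the duration and velocity endpoints below are illustrative parameters, not values from the paper.

```python
import math

# Sketch of the trigonometric s-curve idea: a cosine blend between two
# velocities gives smooth acceleration that vanishes at both endpoints.
def scurve_velocity(t, T, v0, v1):
    """Velocity at time t in [0, T], ramping from v0 to v1 along a cosine blend."""
    s = 0.5 * (1.0 - math.cos(math.pi * t / T))   # smooth 0 -> 1 blend
    return v0 + (v1 - v0) * s

def scurve_acceleration(t, T, v0, v1):
    """Analytic derivative of the blend: zero at both ends, peak at t = T/2."""
    return (v1 - v0) * 0.5 * math.pi / T * math.sin(math.pi * t / T)
```

Choosing T trades the peak acceleration, (v1 - v0)·π/(2T), against motion time, which is the bounded-jerk versus time-optimality tension the abstract refers to.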
We find that EpiFast runs several orders of magnitude faster than another comparable simulation tool while delivering similar results.\n EpiFast has been tested on commodity clusters as well as SGI shared memory machines. For a fixed experiment, if given more computing resources, it scales automatically and runs faster. Finally, EpiFast has been used as the major simulation engine in real studies with rather sophisticated settings to evaluate various dynamic interventions and to provide decision support for public health policy makers."} {"_id": "5c71a81f1b934a9fc7765a225ae4a8833071ad17", "title": "A \"Pile\" Metaphor for Supporting Casual Organization of Information", "text": "A user study was conducted to investigate how people deal with the flow of information in their workspaces. Subjects reported that, in an attempt to quickly and informally manage their information, they created piles of documents. Piles were seen as complementary to the folder filing system, which was used for more formal archiving. A new desktop interface element\u2013the pile\u2013was developed and prototyped through an iterative process. The design includes direct manipulation techniques and support for browsing, and goes beyond physical world functionality by providing system assistance for automatic pile construction and reorganization. Preliminary user tests indicate the design is promising and raise issues that will be addressed in future work."} {"_id": "21d4258394a9c8f0ea15f0792d67f7e645720ff6", "title": "Multiscale Combinatorial Grouping", "text": "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates."} {"_id": "250b1eb62ef3188e7172b63b64b7c9b133b370f9", "title": "Fast Approximate Energy Minimization via Graph Cuts", "text": "In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function\u2019s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an \u03b1-\u03b2 swap: for a pair of labels \u03b1, \u03b2, this move exchanges the labels between an arbitrary set of pixels labeled \u03b1 and another arbitrary set labeled \u03b2. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an \u03b1-expansion: for a label \u03b1, this move assigns an arbitrary set of pixels the label \u03b1. Our second algorithm, which requires the smoothness term to be a metric, generates a labeling such that there is no expansion move that decreases the energy. Moreover, this solution is within a known factor of the global minimum. We experimentally demonstrate the effectiveness of our approach on image restoration, stereo and motion. 
1 Energy minimization in early vision\nMany early vision problems require estimating some spatially varying quantity (such as intensity or disparity) from noisy measurements. Such quantities tend to be piecewise smooth; they vary smoothly at most points, but change dramatically at object boundaries. Every pixel p \u2208 P must be assigned a label in some set L; for motion or stereo, the labels are disparities, while for image restoration they represent intensities. The goal is to find a labeling f that assigns each pixel p \u2208 P a label f_p \u2208 L, where f is both piecewise smooth and consistent with the observed data. These vision problems can be naturally formulated in terms of energy minimization. In this framework, one seeks the labeling f that minimizes the energy E(f) = E_smooth(f) + E_data(f). Here E_smooth measures the extent to which f is not piecewise smooth, while E_data measures the disagreement between f and the observed data. Many different energy functions have been proposed in the literature. The form of E_data is typically"} {"_id": "62a2b1956166ecd5fd8a6b2928f45765f41b76ed", "title": "Real-time object classification in 3D point clouds using point feature histograms", "text": "This paper describes a LIDAR-based perception system for ground robot mobility, consisting of 3D object detection, classification and tracking. The presented system was demonstrated on-board our autonomous ground vehicle MuCAR-3, enabling it to safely navigate in urban traffic-like scenarios as well as in off-road convoy scenarios. The efficiency of our approach stems from the unique combination of 2D and 3D data processing techniques. Whereas fast segmentation of point clouds into objects is done in a 2\u00bdD occupancy grid, classifying the objects is done on raw 3D point clouds. For fast object feature extraction, we advocate the use of statistics of local point cloud properties, captured by histograms over point features. In contrast to most existing work on 3D point cloud classification, where real-time operation is often impossible, this combination allows our system to perform in real-time at 0.1s frame-rate."} {"_id": "76193ed7cea1716ef1d912954b028169359f2a86", "title": "Latent Fingerprint Enhancement via Multi-Scale Patch Based Sparse Representation", "text": "Latent fingerprint identification plays an important role in identifying and convicting criminals in law enforcement agencies. Latent fingerprint images are usually of poor quality with unclear ridge structure and various overlapping patterns. Although significant advances have been achieved in developing automated fingerprint identification systems, it is still challenging to achieve reliable feature extraction and identification for latent fingerprints due to the poor image quality. Prior to feature extraction, fingerprint enhancement is necessary to suppress various noises and improve the clarity of ridge structures in latent fingerprints. Motivated by the recent success of sparse representation in image denoising, this paper proposes a latent fingerprint enhancement algorithm combining the total variation model and multiscale patch-based sparse representation. First, the total variation model is applied to decompose the latent fingerprint into cartoon and texture components. The cartoon component with most of the non-fingerprint patterns is removed as structured noise, whereas the texture component consisting of the weak latent fingerprint is enhanced in the next stage. 
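The energy E(f) = E_smooth(f) + E_data(f) just introduced can be evaluated directly for a Potts-style smoothness term; the numpy sketch below uses a hypothetical weight lam and a 4-connected grid for illustration.

```python
import numpy as np

# Sketch of E(f) = E_data(f) + E_smooth(f) with a Potts smoothness term that
# charges lam for every pair of 4-connected neighbors with different labels.
def labeling_energy(labels, data_cost, lam=1.0):
    """labels: (H, W) int array; data_cost: (H, W, L) per-pixel label costs."""
    h, w = labels.shape
    rows, cols = np.indices((h, w))
    e_data = data_cost[rows, cols, labels].sum()       # disagreement with data
    e_smooth = lam * (
        (labels[:, 1:] != labels[:, :-1]).sum()        # horizontal neighbor pairs
        + (labels[1:, :] != labels[:-1, :]).sum()      # vertical neighbor pairs
    )
    return e_data + e_smooth

# An alpha-expansion move proposes labels' = np.where(mask, alpha, labels) and
# keeps it only if this energy decreases (graph cuts find the best such mask).
```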
Second, we propose a multiscale patch-based sparse representation method for the enhancement of the texture component. Dictionaries are constructed with a set of Gabor elementary functions to capture the characteristics of the fingerprint ridge structure, and multiscale patch-based sparse representation is iteratively applied to reconstruct a high-quality fingerprint image. The proposed algorithm can not only remove the overlapping structured noise, but also restore and enhance the corrupted ridge structures. In addition, we present an automatic method to segment the foreground of the latent image using the sparse coefficients and orientation coherence. Experimental results and comparisons on the NIST SD27 latent fingerprint database are presented to show the effectiveness of the proposed algorithm and its superiority over existing algorithms."} {"_id": "7aa21c9291a655423d971965fc3d46728de2deae", "title": "A study on intelligent personalized push notification with user history", "text": "In mobile applications, push notification is an important function for encouraging users to consume an item (e.g., a movie or a novel). If a mobile application sends too many notifications to users without considering their preferences, it can cause them to turn off notifications or even delete the application. Thus, we develop a framework that selects a set of users who are likely to react to a push notification. To the best of our knowledge, we are the first to apply recommendation techniques (i.e., collaborative filtering and content-based models) to the problem of discovering the targets of push notifications."} {"_id": "c38a77cf264b9fbd806e37a54ab89ae3e628c4e1", "title": "Social media analytics and research test-bed (SMART dashboard)", "text": "We developed a social media analytics and research testbed (SMART) dashboard for monitoring Twitter messages and tracking the diffusion of information in different cities. SMART dashboard is an online geo-targeted search and analytics tool, including an automatic data processing procedure to help researchers to 1) search tweets in different cities; 2) filter noise (such as removing redundant retweets and using machine learning methods to improve precision); 3) analyze social media data from a spatiotemporal perspective, and 4) visualize social media data in various ways (such as weekly and monthly trends, top URLs, top retweets, top mentions, or top hashtags). By monitoring social messages in geo-targeted cities, we hope that the SMART dashboard can assist researchers in investigating and monitoring various topics, such as flu outbreaks, drug abuse, and Ebola epidemics at the municipal level."} {"_id": "8c156e938358dfb24b7c03ed65d19d803411db29", "title": "Racing Bib Numbers Recognition", "text": "Running races, such as marathons, are broadly covered by professional as well as amateur photographers. This leads to a constantly growing number of photos covering a race, making the process of identifying a particular runner in such datasets difficult. Today, such identification is often done manually. In running races, each competitor has an identification number, called the Racing Bib Number (RBN), used to identify that competitor during the race. RBNs are usually printed on a paper or cardboard tag and pinned onto the competitor\u2019s T-shirt during the race. 
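The Gabor dictionaries mentioned above can be pictured as a bank of oriented, frequency-tuned patches; the patch size, frequencies, and orientation count below are illustrative choices, and sparse coding over the dictionary (e.g., with OMP) is left out.

```python
import numpy as np

# Sketch of a dictionary of Gabor elementary functions for ridge-like patches.
def gabor_atom(size, theta, freq, sigma):
    """One real Gabor patch: an oriented cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    atom = envelope * np.cos(2 * np.pi * freq * xr)
    return (atom / np.linalg.norm(atom)).ravel()      # unit-norm column

def gabor_dictionary(size=17, n_orient=8, freqs=(0.08, 0.12, 0.16), sigma=4.0):
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    atoms = [gabor_atom(size, t, f, sigma) for t in thetas for f in freqs]
    return np.stack(atoms, axis=1)                    # shape (size*size, n_atoms)
```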
We introduce an automatic system that receives a set of natural images taken at running sports events and outputs the participants\u2019 RBNs."} {"_id": "a023186eb15325673b25a24115963cd160130787", "title": "An improved Opposition-Based Sine Cosine Algorithm for global optimization", "text": "Real-life optimization problems require techniques that properly explore the search spaces to obtain the best solutions. In this sense, it is common for traditional optimization algorithms to become trapped in local optima. The Sine Cosine Algorithm (SCA) has recently been proposed; it is a global optimization approach based on two trigonometric functions. SCA uses the sine and cosine functions to modify a set of candidate solutions; such operators create a balance between exploration and exploitation of the search space. However, like other similar approaches, SCA tends to get stuck in sub-optimal regions, which is reflected in the computational effort required to find the best values. This situation occurs because the operators used for exploration do not analyze the search space well. This paper presents an improved version of SCA that incorporates opposition-based learning (OBL) as a mechanism for better exploration of the search space, generating more accurate solutions. OBL is a machine learning strategy commonly used to increase the performance of metaheuristic algorithms. OBL considers the opposite position of a solution in the search space. Based on the objective function value, OBL selects the better element between the original solution and its opposite position; this increases the accuracy of the optimization process. The hybridization of concepts from different fields is crucial in intelligent and expert systems; it helps to combine the advantages of algorithms to generate more efficient approaches. The proposed method is an example of this combination; it has been tested over several benchmark functions and engineering problems. Such results support the efficacy of the proposed approach in finding optimal solutions in complex search spaces."} {"_id": "4c74a0be4cf9878ab98912801ea12e7a44ae04b5", "title": "Clustering Algorithms in Biomedical Research: A Review", "text": "Applications of clustering algorithms in biomedical research are ubiquitous, with typical examples including gene expression data analysis, genomic sequence analysis, biomedical document mining, and MRI image analysis. However, due to the diversity of cluster analysis, the differing terminologies, goals, and assumptions underlying different clustering algorithms can be daunting. Thus, determining the right match between clustering algorithms and biomedical applications has become particularly important. This paper is presented to provide biomedical researchers with an overview of the status quo of clustering algorithms, to illustrate examples of biomedical applications based on cluster analysis, and to help biomedical researchers select the most suitable clustering algorithms for their own applications."} {"_id": "8decee97a7238011a732678997e4c3e5749746a2", "title": "Towards an automated system for short-answer assessment using ontology mapping", "text": "A key concern for any e-assessment tool (computer-assisted assessment) is its efficiency in assessing the learner\u2019s knowledge, skill set, and ability. Multiple-choice questions are the most common means of assessment used in e-assessment systems, and are also successful. 
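The opposition step described in the SCA abstract is a one-liner around the usual sine-cosine update. The sketch below assumes the standard SCA position update and box bounds lb/ub; the sampling details follow the commonly used scheme and are not necessarily the paper's exact variant.

```python
import random
import math

# Sketch of one opposition-based step around a sine-cosine update for a
# minimization problem; r1 is the usual decreasing control parameter.
def sca_obl_step(x, best, lb, ub, r1, f):
    """x, best, lb, ub: equal-length lists; f: objective to minimize.
    Returns the better of the SCA move and its OBL opposite."""
    cand = []
    for j in range(len(x)):
        r2 = random.uniform(0, 2 * math.pi)
        r3 = random.uniform(0, 2)
        step = r1 * (math.sin(r2) if random.random() < 0.5 else math.cos(r2))
        v = x[j] + step * abs(r3 * best[j] - x[j])     # sine-cosine update
        cand.append(min(max(v, lb[j]), ub[j]))          # clip to box bounds
    opposite = [lb[j] + ub[j] - cand[j] for j in range(len(cand))]  # OBL point
    return cand if f(cand) <= f(opposite) else opposite
```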
An efficient e-assessment system should use a variety of question types, including short answers, essays, etc., and modes of response to assess the learner\u2019s performance. In this paper, we consider the task of assessing short-answer questions. Several studies have been performed on the evaluation and assessment of short-answer questions, and many products have been deployed for this task as part of e-learning systems. We propose an automated system for assessing short answers using ontology mapping. We also compare our approach with some existing systems and give an overall evaluation of the experimental results, which shows that our approach using ontology mapping gives optimized results."} {"_id": "120f1a81fd4abd089f47a335d0771b4162e851e8", "title": "A Direct Least-Squares (DLS) method for PnP", "text": "In this work, we present a Direct Least-Squares (DLS) method for computing all solutions of the perspective-n-point camera pose determination (PnP) problem in the general case (n \u2265 3). Specifically, based on the camera measurement equations, we formulate a nonlinear least-squares cost function whose optimality conditions constitute a system of three third-order polynomials. Subsequently, we employ the multiplication matrix to determine all the roots of the system analytically, and hence all minima of the LS, without requiring iterations or an initial guess of the parameters. A key advantage of our method is scalability, since the order of the polynomial system that we solve is independent of the number of points. We compare the performance of our algorithm with the leading PnP approaches, both in simulation and experimentally, and demonstrate that DLS consistently achieves accuracy close to the Maximum-Likelihood Estimator (MLE)."} {"_id": "7ec474923d47930b44790e043e070abcb728b73d", "title": "A Soft Robotic Orthosis for Wrist Rehabilitation 1", "text": "In the United States about 795,000 people suffer from a stroke each year [1]. Of those who survive the stroke, the majority experience paralysis on one side of their body, a condition known as hemiparesis. Physical therapy has been shown to restore functionality of the paretic limbs, especially when done early and often [2]. However, the effectiveness of therapy is limited by the availability of therapists and the amount of practice that patients do on their own. Robot-assisted therapy has been explored as a means of guiding patients through the time-intensive exercise regimes that most therapy techniques prescribe. Several wearable, robotic orthoses for the hand and wrist have been developed and are still being developed today [3]. However, few of these existing solutions allow for any significant range of motion, and those that do only offer one degree of freedom. The assisted degree of freedom is almost always flexion/extension, despite the fact that supination/pronation is crucial in nearly all activities of daily living. In addition, current devices are often large, heavy, and uncomfortable for the wearer, presenting significant deterrents to practice. This paper presents a soft wearable device for the wrist that provides motion-specific assistance with rehabilitation for hemiparetic stroke patients at home. Unlike conventional robot-assisted rehabilitation, this pneumatically actuated orthosis is portable, soft, and lightweight, making it more suitable for use outside of the clinic.
In addition, this device supports all the degrees of freedom of the wrist, including supination and pronation, which are critical to many tasks."} {"_id": "201fed356fc9d39d3b9d81b029ba71cc798cc7ec", "title": "Video enhancement using per-pixel virtual exposures", "text": "We enhance underexposed, low dynamic range videos by adaptively and independently varying the exposure at each photoreceptor in a post-process. This virtual exposure is a dynamic function of both the spatial neighborhood and temporal history at each pixel. Temporal integration enables us to expand the image's dynamic range while simultaneously reducing noise. Our non-linear exposure variation and denoising filters smoothly transition from temporal to spatial for moving scene elements. Our virtual exposure framework also supports temporally coherent per-frame tone mapping. Our system outputs restored video sequences with significantly reduced noise, increased exposure time of dark pixels, intact motion, and improved details."} {"_id": "27b2b7b1c70ece4fbdf6c9746cdddb9e616ab841", "title": "Effects of pomegranate juice consumption on oxidative stress in patients with type 2 diabetes: a single-blind, randomized clinical trial.", "text": "Increased free-radical production due to hyperglycemia produces oxidative stress in patients with diabetes. Pomegranate juice (PJ) has antioxidant properties. This study was conducted to determine the effects of PJ consumption on oxidative stress in type 2 diabetic patients. This study was a randomized clinical trial performed on 60 diabetic patients aged 40-65 years. The patients were randomly allocated either to the PJ consumption group or the control group. Patients in the PJ group consumed 200\u2009ml of PJ daily for six weeks. Sex distribution and the mean age were not different between the two groups. After the six-week intervention, oxidized LDL and anti-oxidized LDL antibodies decreased, and total serum antioxidant capacity and arylesterase activity of paraoxonase increased significantly in the PJ-treated group compared to the control group. Our data have shown that six weeks of PJ supplementation could have favorable effects on oxidative stress in patients with type 2 diabetes (T2D)."} {"_id": "425d6a85bc0b81c6e288480f5c6f5dddba0f1089", "title": "MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach", "text": "Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor, as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages.
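The per-pixel virtual exposure idea in the video-enhancement abstract above can be illustrated with a toy temporal-integration filter in which darker pixels receive a longer virtual exposure (heavier temporal smoothing). The weighting rule below is an invented stand-in for the paper's non-linear filters, shown only to make the mechanism concrete.

```python
import numpy as np

def virtual_exposure(frames, lo=0.3, hi=0.9):
    """Toy per-pixel temporal integration: dark pixels get a longer
    'virtual exposure' (heavier temporal smoothing), bright pixels less."""
    acc = frames[0].astype(np.float64)
    for f in frames[1:]:
        f = f.astype(np.float64)
        # Weight on the accumulated history, larger where the pixel is dark.
        alpha = np.clip(1.0 - f / 255.0, lo, hi)
        acc = alpha * acc + (1.0 - alpha) * f
    return acc

frames = [np.random.randint(0, 60, (4, 4), np.uint8) for _ in range(5)]
print(virtual_exposure(frames).round(1))
```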
MAG, on the other hand, achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages."} {"_id": "71fbe95871241feebfba7a32aa9a51d0638c0b39", "title": "A Memory-Network Based Solution for Multivariate Time-Series Forecasting", "text": "Multivariate time series forecasting has been extensively studied over the years, with ubiquitous applications in areas such as finance, traffic, environment, etc. Still, concerns have been raised about traditional methods being incapable of modeling complex patterns or dependencies in real-world data. To address such concerns, various deep learning models, mainly Recurrent Neural Network (RNN) based methods, have been proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time-series forecasting. Furthermore, lack of explainability remains a serious drawback for deep neural network models. Inspired by the Memory Network proposed for solving the question-answering task, we propose a deep-learning-based model named Memory Time-series network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component that are trained jointly. Additionally, the designed attention mechanism enables MTNet to be highly interpretable. We can easily tell which part of the historical data is referenced the most."} {"_id": "5bc848fcbeed1cffb55098c4d7cef4596576e779", "title": "Chapter 17 Wireless Sensor Network Security: A Survey", "text": "As wireless sensor networks continue to grow, so does the need for effective security mechanisms. Because sensor networks may interact with sensitive data and/or operate in hostile unattended environments, it is imperative that these security concerns be addressed from the beginning of the system design. However, due to inherent resource and computing constraints, security in sensor networks poses different challenges than traditional network/computer security. There is currently enormous research potential in the field of wireless sensor network security. Thus, familiarity with the current research in this field will benefit researchers greatly. With this in mind, we survey the major topics in wireless sensor network security, present the obstacles and requirements of sensor security, classify many of the current attacks, and finally list their corresponding defensive measures."} {"_id": "c219fa405ce0734a23b067376481f289789ef197", "title": "A Nonlinear Controller Based on a Discrete Energy Function for an AC/DC Boost PFC Converter", "text": "AC/DC converter systems generally have two stages: an input power factor correction (PFC) boost ac/dc stage that converts input ac voltage to an intermediate dc voltage while reducing the input current harmonics injected into the grid, followed by a dc/dc converter that steps up or down the intermediate dc-bus voltage as required by the output load and provides high-frequency galvanic isolation. Since a low-frequency ripple (second harmonic of the input ac line frequency) exists in the output voltage of the PFC ac/dc boost converter due to the power ripple, the voltage loop in the conventional control system must have a very low bandwidth in order to avoid distortions in the input current waveform. This results in the conventional PFC controller having a slow dynamic response against load variations, with adverse overshoots and undershoots.
This paper presents a new control approach that is based on a novel discrete energy function minimization control law that allows the front-end ac/dc boost PFC converter to operate with a faster dynamic response than conventional controllers while simultaneously maintaining a near-unity input power factor. Experimental results from a 3-kW ac/dc converter built for charging the traction battery of a pure electric vehicle are presented in this paper to validate the proposed control method and its superiority over conventional controllers."} {"_id": "190875cda0d1fb86fc6036a9ad7d46fc1f9fc19b", "title": "Tracking Sentiment in Mail: How Genders Differ on Emotional Axes", "text": "With the widespread use of email, we now have access to unprecedented amounts of text that we ourselves have written. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in many types of mail. We create a large word\u2013emotion association lexicon by crowdsourcing, and use it to compare emotions in love letters, hate mail, and suicide notes. We show that there are marked differences across genders in how they use emotion words in workplace email. For example, women use many words from the joy\u2013sadness axis, whereas men prefer terms from the fear\u2013trust axis. Finally, we show visualizations that can help people track emotions in their emails."} {"_id": "4b07a4d8643f9a33dc2372a8c25a67be046a44b8", "title": "Bidding algorithms for simultaneous auctions", "text": "This paper introduces RoxyBot, one of the top-scoring agents in the First International Trading Agent Competition. A TAC agent simulates one vision of future travel agents: it represents a set of clients in simultaneous auctions, trading complementary (e.g., airline tickets and hotel reservations) and substitutable (e.g., symphony and theater tickets) goods. RoxyBot faced two key technical challenges in TAC: (i) allocation---assigning purchased goods to clients at the end of a game instance so as to maximize total client utility, and (ii) completion---determining the optimal quantity of each resource to buy and sell given client preferences, current holdings, and market prices. For the dimensions of TAC, an optimal solution to the allocation problem is tractable, and RoxyBot uses a search algorithm based on A* to produce optimal allocations. An optimal solution to the completion problem is also tractable, but in the interest of minimizing bidding cycle time, RoxyBot solves the completion problem using beam search, producing approximately optimal completions. RoxyBot's completer relies on an innovative data structure called a priceline."} {"_id": "e95c610484468a9c53fc73b8adfb62bf8690e6ed", "title": "Bandwidth Enhancement of a Single-Feed Circularly Polarized Antenna Using a Metasurface: Metamaterial-based wideband CP rectangular microstrip antenna.", "text": "In this article, the authors use a 7 x 7 rectangular-ring unit metasurface and a single-feed, CP, rectangular, slotted patch antenna to enhance bandwidth. The antenna consists of a rectangular, slotted patch radiator, a metasurface with an array of rectangular-ring cells, and a coaxial feed. Using the metasurface results in a wide CP bandwidth, seven times that of a conventional antenna. The measured 3-dB-AR bandwidth of an antenna prototype printed onto an FR4 substrate is 28.3% (3.62 GHz-4.75 GHz) with a 2:1 voltage standing wave ratio (VSWR) bandwidth of 36%.
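The word\u2013emotion lexicon analysis in the mail-sentiment abstract above boils down to counting lexicon hits per emotion axis. A minimal sketch follows; the tiny LEXICON dictionary is a hypothetical placeholder for the large crowdsourced lexicon the abstract mentions.

```python
import re
from collections import Counter

# Tiny illustrative word-emotion lexicon (real crowdsourced lexicons
# contain thousands of entries across several emotion axes).
LEXICON = {
    "happy": "joy", "delighted": "joy", "grief": "sadness",
    "afraid": "fear", "panic": "fear", "reliable": "trust",
}

def emotion_profile(text):
    """Count emotion-word occurrences, e.g. along the joy-sadness and
    fear-trust axes."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(LEXICON[w] for w in words if w in LEXICON)

print(emotion_profile("I was delighted, though a little afraid at first."))
# Counter({'joy': 1, 'fear': 1})
```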
A 6.0-dBic boresight gain with a variation of 1.5 dB is achieved across the frequency band of 3.35 GHz-4.75 GHz, with a maximum gain of 7.4 dBic at 4.1 GHz. We measure the antenna prototype and compare it with a simulation using CST Microwave Studio. Parametric studies aid in understanding the proposed antenna's operation mechanism."} {"_id": "7e9db0d2a5c60e773cee15cdb07a966ee3a2e839", "title": "Dynamic Online Pricing with Incomplete Information", "text": "Consider the pricing decision for a manager at a large online retailer, such as Amazon.com, that sells millions of products. The pricing manager must decide on real-time prices for each of these products. Due to the large number of products, the manager must set retail prices without complete demand information. A manager can run price experiments to learn about demand and maximize long-run profits. There are two aspects that make online retail pricing different from traditional brick-and-mortar settings. First, due to the number of products, the manager must be able to automate pricing. Second, an online retailer can make frequent price changes. Pricing differs from other areas of online marketing where experimentation is common, such as online advertising or website design, as firms do not randomize prices to different customers at the same time. In this paper we propose a dynamic price experimentation policy where the firm has incomplete demand information. For this general setting, we derive a pricing algorithm that balances earning profit immediately and learning for future profits. The proposed approach marries statistical machine learning and economic theory. In particular, we combine multi-armed bandit (MAB) algorithms with partial identification of consumer demand into a unique pricing policy. Our automated policy solves this problem using a scalable distribution-free algorithm. We show that our method converges to the optimal price faster than standard machine learning MAB solutions to the problem. In a series of Monte Carlo simulations, we show that the proposed approach performs favorably compared to methods in computer science and revenue management."} {"_id": "98a44ee30b38c6f51c27464526224db4f5557bfe", "title": "The biological principles of swarm intelligence", "text": "The roots of swarm intelligence are deeply embedded in the biological study of self-organized behaviors in social insects. From the routing of traffic in telecommunication networks to the design of control algorithms for groups of autonomous robots, the collective behaviors of these animals have inspired many of the foundational works in this emerging research field. For the first issue of this journal dedicated to swarm intelligence, we review the main biological principles that underlie the organization of insects\u2019 colonies. We begin with some reminders about the decentralized nature of such systems and we describe the underlying mechanisms of complex collective behaviors of social insects, from the concept of stigmergy to the theory of self-organization in biological systems. We emphasize in particular the role of interactions and the importance of bifurcations that appear in the collective output of the colony when some of the system\u2019s parameters change.
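The pricing abstract above combines MAB algorithms with demand learning. For reference, here is a standard Thompson-sampling price experiment over a discrete price grid with Bernoulli purchases; it is a generic MAB baseline, not the paper's partial-identification policy, and the price grid and demand curve are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.array([9.0, 11.0, 13.0, 15.0])          # candidate price grid
a, b = np.ones(len(prices)), np.ones(len(prices))   # Beta(1,1) priors

def true_demand(p):
    """Unknown purchase probability, invented for the simulation."""
    return 1.0 / (1.0 + np.exp(0.8 * (p - 12.0)))

for t in range(5000):
    theta = rng.beta(a, b)                     # sample a demand curve
    i = int(np.argmax(prices * theta))         # best sampled revenue
    sale = rng.random() < true_demand(prices[i])
    a[i] += sale                               # posterior update
    b[i] += 1 - sale

print(prices[np.argmax(prices * a / (a + b))])  # revenue-maximizing price
```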
We then propose to categorize the collective behaviors displayed by insect colonies according to four functions that emerge at the level of the colony and that organize its global behavior. Finally, we address the role of modulations of individual behaviors by disturbances (either environmental or internal to the colony) in the overall flexibility of insect colonies. We conclude that future studies about self-organized biological behaviors should investigate such modulations to better understand how insect colonies adapt to uncertain worlds."} {"_id": "e6bdbe24f5cd7195af0bb7db343a77f7534b2491", "title": "Illuminating Light: An Optical Design Tool with a Luminous-Tangible Interface", "text": "We describe a novel system for rapid prototyping of laser-based optical and holographic layouts. Users of this optical prototyping tool \u2013 called the Illuminating Light system \u2013 move physical representations of various optical elements about a workspace, while the system tracks these components and projects back onto the workspace surface the simulated propagation of laser light through the evolving layout. This application is built atop the Luminous Room infrastructure, an aggregate of interlinked, computer-controlled projector-camera units called I/O Bulbs. Philosophically, the work embodies the emerging ideas of the Luminous Room and builds on the notions of \u2018graspable media\u2019. We briefly introduce the I/O Bulb and Luminous Room concepts and discuss their current implementations. After an overview of the optical domain that the Illuminating Light system is designed to address, we present the overall system design and implementation, including that of an intermediary toolkit called voodoo which provides a general facility for object identification and tracking."} {"_id": "cbc3a39442f0c4a501d9ff35d8ecbaa88377692d", "title": "Congruent evidence from \u03b1-tubulin and \u03b2-tubulin gene phylogenies for a zygomycete origin of microsporidia", "text": "The origin of microsporidia and the evolutionary relationships among the major lineages of fungi have been examined by molecular phylogeny using \u03b1-tubulin and \u03b2-tubulin. Chytrids, basidiomycetes, ascomycetes, and microsporidia were all recovered with high support, and the zygomycetes were consistently paraphyletic. The microsporidia were found to branch within zygomycetes, and showed relationships with members of the Entomophthorales and Zoopagales. This provides support for the microsporidia having evolved from within the fungi; however, the tubulin genes are difficult to interpret unambiguously since fungal and microsporidian tubulins are very divergent. Rapid evolutionary rates are characteristic of practically all microsporidian genes studied, so determining their evolutionary history will never be completely free of such difficulties. While the tubulin phylogenies do not provide a decisive conclusion, they do further narrow the probable origin of microsporidia to a zygomycete-like ancestor.
"} {"_id": "114a4222c53f1a6879f1a77f1bae2fc0f8f55348", "title": "The Galois/Counter Mode of Operation (GCM)", "text": ""} {"_id": "b37b6c95bda81d7c1f84edc22da970d8f3a8519e", "title": "Lattice Attacks on Digital Signature Schemes", "text": "We describe a lattice attack on the Digital Signature Algorithm (DSA) when used to sign many messages, mi, under the assumption that a proportion of the bits of each of the associated ephemeral keys, yi, can be recovered by alternative techniques."} {"_id": "c8f6a8f081f49325eb97600eca05620887092d2c", "title": "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems", "text": "By carefully measuring the amount of time required to perform private key operations, attackers may be able to find fixed Diffie-Hellman exponents, factor RSA keys, and break other cryptosystems. Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext. Actual systems are potentially at risk, including cryptographic tokens, network-based cryptosystems, and other applications where attackers can make reasonably accurate timing measurements. Techniques for preventing the attack for RSA and Diffie-Hellman are presented. Some cryptosystems will need to be revised to protect against the attack, and new protocols and algorithms may need to incorporate measures to prevent timing attacks."} {"_id": "0dba88589f12be4b7438da48056c44a97844450e", "title": "The RC5 Encryption Algorithm", "text": "This document describes the RC5 encryption algorithm, a fast symmetric block cipher suitable for hardware or software implementations. A novel feature of RC5 is the heavy use of data-dependent rotations. RC5 has a variable word size, a variable number of rounds, and a variable-length secret key. The encryption and decryption algorithms are exceptionally simple."} {"_id": "826dc5774b2c2430cef0dfc4d18bc35947106c6d", "title": "Knowledge management, flexibility and firm performance: The effects of family involvement", "text": ""} {"_id": "f9166911d7ac1afb5c230813edce76109876276a", "title": "Generation of Test Cases from Software Requirements Using Natural Language Processing", "text": "Software testing plays an important role in early verification of software systems and it enforces quality in the system under development. One of the challenging tasks in software testing is the generation of test cases. There are many existing approaches to generating test cases, such as using use cases, activity diagrams, and sequence diagrams; they have their own limitations, such as the inability to capture test cases for non-functional requirements. Thus these techniques have restricted use in acceptance testing and are not effective for verification and acceptance of large software systems. If software requirements are stated using semi-formal or formal methods, then it is difficult for the testers and other third-party domain experts to test the system. It also requires much expertise in interpreting requirements, and only a limited number of people can understand them. This paper proposes an approach to generate test cases from software requirements expressed in natural language using natural language processing techniques."} {"_id": "d30fca9f090bde034e6e0882502ec0846fe35b45", "title": "Cellular planning for next generation wireless mobile network using novel energy efficient CoMP", "text": "Cellular planning, also called radio network planning, is a crucial stage in the deployment of a wireless network.
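Since the RC5 abstract above highlights data-dependent rotations and the simplicity of the algorithm, a didactic Python sketch of RC5-32 key expansion and block encryption follows (nominal 12 rounds; the key is an arbitrary example). It is for illustration only and should not be used as a vetted cryptographic implementation.

```python
P32, Q32 = 0xB7E15163, 0x9E3779B9   # magic constants for word size w = 32
MASK = 0xFFFFFFFF

def rotl(x, s):
    """32-bit left rotation; the amount uses only the low 5 bits."""
    s &= 31
    return ((x << s) | (x >> (32 - s))) & MASK

def rc5_expand(key: bytes, r=12):
    """RC5-32/r key expansion: mix the secret key into the table S."""
    c = max(1, (len(key) + 3) // 4)
    L = [int.from_bytes(key[i:i + 4].ljust(4, b"\0"), "little")
         for i in range(0, 4 * c, 4)]
    t = 2 * (r + 1)
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):
        A = S[i] = rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = rotl((L[j] + A + B) & MASK, (A + B) & 31)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def rc5_encrypt_block(A, B, S, r=12):
    """Two-word block encryption; rotation amounts depend on the data."""
    A = (A + S[0]) & MASK
    B = (B + S[1]) & MASK
    for i in range(1, r + 1):
        A = (rotl(A ^ B, B) + S[2 * i]) & MASK
        B = (rotl(B ^ A, A) + S[2 * i + 1]) & MASK
    return A, B

S = rc5_expand(b"my secret key!!!")
print(rc5_encrypt_block(0x01234567, 0x89ABCDEF, S))
```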
The enormous increase in data traffic requires augmentation of the coverage, capacity, throughput, quality of service, and optimized cost of the network. The main goal of cellular planning is to enhance the spectral efficiency and hence the throughput of a network. A novel CoMP algorithm is discussed for a two-tier heterogeneous network. The number of clusters is obtained using the variance ratio (V-R) criterion. The cluster centroids obtained using the K-means algorithm provide the base station (BS) deployment positions. Applying CoMP in this network using a dynamic point selection (DPS) approach with sleep-mode power saving provides higher energy efficiency, signal-to-interference plus noise ratio (SINR), and throughput compared to nominal CoMP. CoMP basically describes a scheme in which a group of base stations dynamically co-ordinate and co-operate among themselves to convert interference into a beneficial signal. Network planning using a stochastic method and Voronoi tessellation with a two-tier network has been applied to a dense region of Surat city in the Gujarat state of India. The results show a clear improvement in SINR of 25% and in network energy efficiency of 28% using the proposed CoMP transmission."} {"_id": "7e9d77c808961ff42f65f8bd5e0ade60bfe562d1", "title": "CardBoardiZer: Creatively Customize, Articulate and Fold 3D Mesh Models", "text": "Computer-aided design of flat patterns allows designers to prototype foldable 3D objects made of heterogeneous sheets of material. We found origami designs are often characterized by pre-synthesized patterns and automated algorithms. Furthermore, augmenting a desired model with articulated features requires time-consuming synthesis of interconnected joints. This paper presents CardBoardiZer, a rapid cardboard-based prototyping platform that allows everyday sculptural 3D models to be easily customized, articulated and folded. We develop a building platform to allow the designer to 1) import a desired 3D shape, 2) customize articulated partitions into planar or volumetric foldable patterns, and 3) define rotational movements between partitions. The system unfolds the model into 2D crease-cut-slot patterns ready for die-cutting and folding. In this paper, we developed interactive algorithms and validated the usability of CardBoardiZer using various 3D models. Furthermore, comparisons between CardBoardiZer and Autodesk\u00ae 123D Make demonstrated significantly shorter time-to-prototype and ease of fabrication."} {"_id": "07925910d45761d96269fc3bdfdc21b1d20d84ad", "title": "Deep Learning without Poor Local Minima", "text": "In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth and any widths, with no unrealistic assumptions. As a result, we present an instance for which we can answer the following question: how difficult is it to directly train a deep model in theory?
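The clustering step in the cellular-planning abstract above (variance ratio criterion to choose the number of clusters, then K-means centroids as base-station positions) can be sketched with scikit-learn, whose calinski_harabasz_score implements the variance ratio criterion; the user layout below is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
users = rng.uniform(0, 10, size=(500, 2))   # synthetic user locations (km)

# Pick the number of clusters with the variance ratio criterion,
# then use the centroids as candidate base-station positions.
best_k, best_vrc = None, -np.inf
for k in range(2, 12):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(users)
    vrc = calinski_harabasz_score(users, labels)
    if vrc > best_vrc:
        best_k, best_vrc = k, vrc

bs_positions = KMeans(n_clusters=best_k, n_init=10,
                      random_state=0).fit(users).cluster_centers_
print(best_k, bs_positions.shape)
```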
It is more difficult than for the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice."} {"_id": "689246187740f686a3dc12a41b3e4e54f1399b87", "title": "Aggregating User Input in Ecology Citizen Science Projects", "text": "Camera traps (remote, automatic cameras) are revolutionizing large-scale studies in ecology. The Serengeti Lion Project has used camera traps to produce over 1.5 million pictures of animals in the Serengeti. To analyze these pictures, the Project created Snapshot Serengeti, a citizen science website where volunteers can help classify animals. To increase accuracy, each photo is shown to multiple users and a critical step is aggregating individual classifications. In this paper, we present a new aggregation algorithm which achieves an accuracy of 98.6%, better than many human experts. Our algorithm also requires fewer users per photo than existing methods. The algorithm is intuitive and designed so that nonexperts can understand the end results. Ecology seeks to understand the interrelationships of species with one another and with their environment. Monitoring many species of animals simultaneously has traditionally been very difficult. Camera traps (remote, automatic cameras) are revolutionizing ecological research by providing a non-invasive, cost-effective approach for large-scale monitoring. Ecologists are currently using these traps in the Serengeti National Park, one of the world\u2019s last large intact natural areas, to understand the dynamics of its dozens of large mammals (Swanson et al. 2014). As of November 2013, the ecologists had spent 3 years using more than 200 cameras spread over 1,125 square kilometers to take more than 1.5 million photos. In order to process so many images, the ecologists, along with Zooniverse (a citizen science platform), created Snapshot Serengeti, a web site where over 35,000 volunteers helped classify the species in the photos (Zooniverse 2014a). Since volunteers can make mistakes, each photo is shown to multiple users. A critical step is to combine these classifications into one aggregate classification: e.g., if 4 out of 5 users classify a photo as containing a zebra, we might decide that the photo does indeed contain a zebra. In this paper, we develop an aggregation algorithm for Snapshot Serengeti. Classification aggregation is an active area in machine learning; however, we show that much of the existing literature is based on assumptions which do not apply to Snapshot Serengeti, and we must therefore develop a novel approach. In addition, current machine learning work on classification aggregation often draws on ideas such as expectation maximization and Bayesian reasoning. While powerful, these methods obscure the connection between input and results, making it hard for non-machine learning experts to understand the end results. Thus, our algorithm must be both accurate and intuitive. Our paper proceeds as follows. We begin by discussing Snapshot Serengeti and previous machine learning literature on classifier aggregation.
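For contrast with the aggregation algorithm the Snapshot Serengeti abstract above describes (but does not specify here), a plurality-vote baseline with a review threshold looks like this; the 0.6 threshold is an arbitrary illustrative choice.

```python
from collections import Counter

def aggregate(classifications, min_frac=0.6):
    """Baseline plurality aggregation for one photo: return the species
    chosen by at least `min_frac` of volunteers, else flag for review."""
    votes = Counter(classifications)
    species, n = votes.most_common(1)[0]
    return species if n / len(classifications) >= min_frac else None

print(aggregate(["zebra", "zebra", "wildebeest", "zebra", "zebra"]))  # zebra
print(aggregate(["zebra", "impala", "wildebeest"]))                   # None
```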
We then discuss why much of this existing work is not applicable to Snapshot Serengeti. We next introduce a new classifier aggregation algorithm for Snapshot Serengeti and compare it against the current algorithm. Finally, we conclude and discuss possible future work."} {"_id": "572e1b1f9fc7414f8bfd9aabbb34da67619a25ae", "title": "An approach for offensive text detection and prevention in Social Networks", "text": "Social networks have become places where people from every corner of the world have established a virtual civilization. In this virtual community, people share their views, express their feelings, and post photos, videos, and blogs. Social networking sites like Facebook, Twitter, etc. have given people a platform to share innumerable content with just the click of a button. However, these sites apply no restrictions on the uploaded content, which may contain abusive words or explicit images that are unsuitable for social platforms, and there is no defined mechanism for preventing offensive content from being published on social sites. To solve this problem, we develop a social network prototype that implements our approach for automatic filtering of offensive content in social networks. Many popular social networking sites today do not have a proper mechanism for restricting offensive content; they rely on reporting methods in which users report content as abusive, which requires substantial human effort and time. In this paper, we apply a pattern-matching algorithm to detect offensive keywords in social networking comments and prevent such comments from being published on the social platform. Unlike the conventional method of users reporting abusive content, our approach does not require any human intervention: it detects and blocks offensive words automatically."} {"_id": "c0fba6c576daa416b0db591594681c130f785658", "title": "Major League Soccer Scheduling Problem", "text": "Major League Soccer is the highest-level men's professional soccer league in the United States and Canada, consisting of 20 teams, with 17 in the United States and 3 in Canada. The League is divided into two Conferences, Western and Eastern, with 10 teams each. The regular season of the League runs from early March to late October each year. It is important to look at MLS mainly for three reasons. Firstly, it is a fast-growing league, with only 10 teams competing in 1996 when the League started. Secondly, the popularity of the game has increased dramatically, with the average per-game attendance in MLS increasing by 40% in the past decade, which exceeds that of the NHL and NBA. Lastly, the big TV broadcasting contract signed with ESPN, Fox Sports, and Univision has demonstrated the huge commercial value of the League."} {"_id": "2bb887f21c78de312046dc826564708d489bb77e", "title": "Information Security and Privacy: Rethinking Governance Models", "text": "Concerns about information security and privacy continue to make headlines in the media and rate as serious issues in business and public surveys. Conventional thinking suggests that enhancements to governance models and business practices improve the performance of individual businesses. However, is this enough? Good governance is based on clear rights and responsibilities, and this panel will address the contention that such clarity is lacking in the treatment of personal information.
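The pattern-matching filter in the offensive-text abstract above can be sketched as a single compiled word-boundary pattern over a blocklist. The placeholder keywords below are hypothetical; a real deployment would use a curated lexicon.

```python
import re

BLOCKLIST = {"badword1", "badword2"}   # hypothetical offensive keywords

# Compile one word-boundary pattern so matching is a single scan.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, sorted(BLOCKLIST))) + r")\b",
    re.IGNORECASE,
)

def is_publishable(comment: str) -> bool:
    """Reject the comment before publication if it matches the blocklist."""
    return PATTERN.search(comment) is None

print(is_publishable("a perfectly fine comment"))    # True
print(is_publishable("this contains badword1 ..."))  # False
```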
As a result, certain types of business innovation may be constrained and good practices may remain tick-box exercises, disconnected from wider business objectives. More radical thinking would go beyond incremental improvements to the status quo and recognize the profound challenges to privacy and information security created by digital technology. This panel will bring together a range of research disciplines and senior business representatives to critique current practice and develop a future research agenda."} {"_id": "9ae7cd3170e1feb8aeb80aa38bc813c93c6c0218", "title": "Software Test Automation practices in agile development environment: An industry experience report", "text": "The increased importance of Test Automation in software engineering is very evident considering the number of companies investing in automated testing tools nowadays, with the main aim of preventing defects during the development process. Test Automation is considered an essential activity for agile methodologies, as it is key to speeding up the quality assurance process. This paper presents empirical observations and the challenges of a test team new to agile practices and Test Automation using open-source testing tools integrated in software projects that use the Scrum methodology. The results highlight some important issues for discussion and present the Test Automation practices collected from the experiences and lessons learned."} {"_id": "f11e2579cc3584f28e8b07bab572ac80a1dd7416", "title": "Radial-Flux Permanent-Magnet Hub Drives: A Comparison Based on Stator and Rotor Topologies", "text": "In this paper, the performances of electric vehicle (EV) in-wheel (hub) nonoverlap-winding permanent-magnet motors with different stator and rotor topologies are compared. The calculation of the frequency losses in the design optimization of the hub motors is specifically considered. Also, the effect of the slot and pole number choice that determines the winding factor of the machine is investigated. It is shown that the torque-per-copper-loss performance of the machine cannot be judged solely on the winding factor of the machine. Wide open stator slot machines with rectangular preformed coils and, hence, low manufacturing costs are found to perform better than expected. The design detail and test results of a prototype 10-kW water-cooled EV hub drive are presented. The test results confirm the finite-element-calculated results, specifically in the high-speed region of the drive where the rotor topology affects the constant power speed range."} {"_id": "448fd962a195c6ff196ca5962b3eb7afb896d9e3", "title": "Fast and robust Earth Mover's Distances", "text": "We present a new algorithm for a robust family of Earth Mover's Distances - EMDs with thresholded ground distances. The algorithm transforms the flow-network of the EMD so that the number of edges is reduced by an order of magnitude. As a result, we compute the EMD an order of magnitude faster than the original algorithm, which makes it possible to compute the EMD on large histograms and databases. In addition, we show that EMDs with thresholded ground distances have many desirable properties. First, they correspond to the way humans perceive distances. Second, they are robust to outlier noise and quantization effects. Third, they are metrics.
Finally, experimental results on image retrieval show that thresholding the ground distance of the EMD improves both accuracy and speed."} {"_id": "6a8fe4de9e91fcea62993599a975a3464bc7e3af", "title": "Evaluation methods and decision theory for classification of streaming data with temporal dependence", "text": "Predictive modeling on data streams plays an important role in modern data analysis, where data arrives continuously and needs to be mined in real time. In the stream setting the data distribution is often evolving over time, and models that update themselves during operation are becoming the state-of-the-art. This paper formalizes a learning and evaluation scheme of such predictive models. We theoretically analyze evaluation of classifiers on streaming data with temporal dependence. Our findings suggest that the commonly accepted data stream classification measures, such as classification accuracy and Kappa statistic, fail to diagnose cases of poor performance when temporal dependence is present; therefore, they should not be used as sole performance indicators. Moreover, classification accuracy can be misleading if used as a proxy for evaluating change detectors with datasets that have temporal dependence. We formulate the decision theory for streaming data classification with temporal dependence and develop a new evaluation methodology for data stream classification that takes temporal dependence into account. We propose a combined measure for classification performance that takes into account temporal dependence, and we recommend using it as the main performance measure in classification of streaming data."} {"_id": "6e6dfc5d5f097f9e7d02d3fc0380c1ca49744df2", "title": "Learning Background-Aware Correlation Filters for Visual Tracking", "text": "Correlation Filters (CFs) have recently demonstrated excellent performance in terms of rapidly tracking objects under challenging photometric and geometric variations. The strength of the approach comes from its ability to efficiently learn - on the fly - how the object is changing over time. A fundamental drawback to CFs, however, is that the background of the target is not modeled over time, which can result in suboptimal performance. Recent tracking algorithms have attempted to resolve this drawback by either learning CFs from more discriminative deep features (e.g. DeepSRDCF [9] and CCOT [11]) or learning complex deep trackers (e.g. MDNet [28] and FCNT [33]). While such methods have been shown to work well, they suffer from high complexity: extracting deep features or applying deep tracking frameworks is very computationally expensive. This limits the real-time performance of such methods, even on high-end GPUs. This work proposes a Background-Aware CF based on hand-crafted features (HOG [6]) that can efficiently model how both the foreground and background of the object vary over time. Our approach, like conventional CFs, is extremely computationally efficient, and extensive experiments over multiple tracking benchmarks demonstrate the superior accuracy and real-time performance of our method compared to the state-of-the-art trackers."} {"_id": "d59691a0341246f8c09c655428712ddc86246efc", "title": "Broadband Radial Waveguide Power Combiner with Improved Isolation among Adjacent Output Ports", "text": "An eight-way waveguide-based power combiner/divider is presented and investigated in the frequency range 7.5\u201310.5 GHz. A simple approach is proposed for design purposes.
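The thresholded ground distance from the EMD abstract above is simply min(d, t) applied to the ground-distance matrix. A small sketch follows, assuming the third-party POT (Python Optimal Transport) package for the exact EMD solver; it illustrates the thresholding idea, not the paper's faster flow-network algorithm.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

# Two normalized histograms over bins placed on a line.
bins = np.arange(8, dtype=float).reshape(-1, 1)
a = np.array([0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.5])

M = ot.dist(bins, bins, metric="euclidean")  # ground-distance matrix
t = 2.0
M_thresh = np.minimum(M, t)                  # thresholded ground distance

print(ot.emd2(a, b, M))         # plain EMD cost (6.0 here)
print(ot.emd2(a, b, M_thresh))  # robust, thresholded EMD cost (2.0 here)
```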
The fabricated combiner shows good agreement between the simulated and measured results. Insertion loss is about \u22120.3 dB, return loss is less than \u221215 dB, and isolation between adjacent output ports is better than \u221211 dB at 8.5 GHz and reaches about \u221214 dB at 9.5 GHz."} {"_id": "1b322f89319440f06bb45d9081461978c9c643db", "title": "Telomere measurement by quantitative PCR.", "text": "It has long been presumed impossible to measure telomeres in vertebrate DNA by PCR amplification with oligonucleotide primers designed to hybridize to the TTAGGG and CCCTAA repeats, because only primer dimer-derived products are expected. Here we present a primer pair that eliminates this problem, allowing simple and rapid measurement of telomeres in a closed-tube, fluorescence-based assay. This assay will facilitate investigations of the biology of telomeres and the roles they play in the molecular pathophysiology of diseases and aging."} {"_id": "261e39073725dd16d46aeaeb30b7b7dd3e8b78ee", "title": "Depth map creation and image-based rendering for advanced 3DTV services providing interoperability and scalability", "text": "Due to enormous progress in the areas of auto-stereoscopic 3D displays, digital video broadcast and computer vision algorithms, 3D television (3DTV) has reached a high technical maturity and many people now believe in its readiness for marketing. Experimental prototypes of entire 3DTV processing chains have been demonstrated successfully during the last few years, and the Moving Picture Experts Group (MPEG) of ISO/IEC has launched related ad hoc groups and standardization efforts envisaging the emerging market segment of 3DTV. In this context the paper discusses an advanced approach for a 3DTV service, which is based on the concept of video-plus-depth data representations. It particularly considers aspects of interoperability and multi-view adaptation for the case that different multi-baseline geometries are used for multi-view capturing and 3D display. Furthermore it presents algorithmic solutions for the creation of depth maps and depth image-based rendering related to this framework of multi-view adaptation. In contrast to other proposals, which are more focused on specialized configurations, the underlying approach provides a modular and flexible system architecture supporting a wide range of multi-view structures."} {"_id": "14726825de6b438920bd2f021a5e8b7323046f00", "title": "Generalizing Skills with Semi-Supervised Reinforcement Learning", "text": "Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user.
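The qPCR telomere assay above is commonly analyzed as a telomere/single-copy-gene (T/S) ratio via a delta-delta-Ct calculation. The abstract itself does not give the formula, so the sketch below shows the conventional computation as an assumed downstream analysis.

```python
def t_s_ratio(ct_telo, ct_scg, ref_ct_telo, ref_ct_scg):
    """Relative telomere length as a T/S ratio via delta-delta-Ct:
    T/S = 2**-((Ct_T - Ct_S)_sample - (Ct_T - Ct_S)_reference)."""
    d_sample = ct_telo - ct_scg          # sample delta-Ct
    d_ref = ref_ct_telo - ref_ct_scg     # reference-DNA delta-Ct
    return 2.0 ** -(d_sample - d_ref)

# Sample amplifies the telomere product one cycle earlier than the
# reference DNA -> roughly twice the relative telomere content.
print(t_s_ratio(14.2, 22.0, 15.2, 22.0))  # 2.0
```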
On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semisupervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of \u201clabeled\u201d MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of \u201cunlabeled\u201d MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent\u2019s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward."} {"_id": "e4945a6b2bea1efc1f6fcfa160e476f74a037b69", "title": "Virtual InfiniBand clusters for HPC clouds", "text": "High Performance Computing (HPC) employs fast interconnect technologies to provide low communication and synchronization latencies for tightly coupled parallel compute jobs. Contemporary HPC clusters have a fixed capacity and static runtime environments; they cannot elastically adapt to dynamic workloads, and provide a limited selection of applications, libraries, and system software. In contrast, a cloud model for HPC clusters promises more flexibility, as it provides elastic virtual clusters to be available on-demand. This is not possible with physically owned clusters.\n In this paper, we present an approach that makes it possible to use InfiniBand clusters for HPC cloud computing. We propose a performance-driven design of an HPC IaaS layer for InfiniBand, which provides throughput and latency-aware virtualization of nodes, networks, and network topologies, as well as an approach to an HPC-aware, multi-tenant cloud management system for elastic virtualized HPC compute clusters."} {"_id": "06ee3e95ce74711830a9752768c37c3e69585d7d", "title": "Smart Cities' Data: Challenges and Opportunities for Semantic Technologies", "text": "How can we innovate smart systems for smart cities, to make data available homogeneously, inexpensively, and flexibly while supporting an array of applications that have yet to exist or be specified?"} {"_id": "758e3fbd07ef858b33420aa981ae741a3f0d9a78", "title": "cocor: A Comprehensive Solution for the Statistical Comparison of Correlations", "text": "A valid comparison of the magnitude of two correlations requires researchers to directly contrast the correlations using an appropriate statistical test. In many popular statistics packages, however, tests for the significance of the difference between correlations are missing. To close this gap, we introduce cocor, a free software package for the R programming language. The cocor package covers a broad range of tests including the comparisons of independent and dependent correlations with either overlapping or nonoverlapping variables. The package also includes an implementation of Zou's confidence interval for all of these comparisons. 
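One of the classical tests covered by packages like cocor (described above) is the comparison of two correlations from independent samples via Fisher's z transformation. A minimal sketch, assuming SciPy for the normal tail probability:

```python
from math import atanh, sqrt
from scipy.stats import norm

def compare_independent_r(r1, n1, r2, n2):
    """Fisher z test for the difference between two correlations
    measured in independent samples of sizes n1 and n2."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * norm.sf(abs(z))      # two-sided p-value
    return z, p

z, p = compare_independent_r(0.52, 120, 0.31, 150)
print(round(z, 3), round(p, 4))  # about 2.06, 0.04
```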
The platform-independent cocor package enhances the R statistical computing environment and is available for scripting. Two different graphical user interfaces (a plugin for RKWard and a web interface) make cocor a convenient and user-friendly tool."} {"_id": "49254a9de8b95df38f3db57228fe242ab7084816", "title": "Eye localization from thermal infrared images", "text": "By using the knowledge of facial structure and temperature distribution, this paper proposes an automatic eye localization method from long-wave infrared thermal images, both with and without eyeglasses. First, with the help of a support vector machine classifier, three gray-projection features are defined to determine whether a subject is wearing eyeglasses. For subjects with eyeglasses, the locations of valleys in the projection curve are used to perform eye localization. For subjects without eyeglasses, a facial structure consisting of 15 sub-regions is proposed to extract Haar-like features. Eight classifiers are learned from the features selected by the Adaboost algorithm for the left and right eye, respectively. A vote strategy is employed to find the most likely eyes. To evaluate the effectiveness of our approach, experiments are performed on the NVIE and Equinox databases. The eyeglass detection results on the NVIE and Equinox databases are 99.36% and 95%, respectively, which demonstrates the effectiveness and robustness of our eyeglass detection method. Eye localization results of within-database experiments and cross-database experiments on these two databases are very comparable with the previous results in this field, verifying the effectiveness and the generalization ability of our approach."} {"_id": "0788cda105da9853627d3e1ec8d01e01f7239c30", "title": "Parallel Coordinate Descent for L1-Regularized Loss Minimization", "text": "We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict near-linear speedups, up to a problem-dependent limit. We present a comprehensive empirical study of Shotgun for Lasso and sparse logistic regression. Our theoretical predictions on the potential for parallelism closely match behavior on real data. Shotgun outperforms other published solvers on a range of large problems, proving to be one of the most scalable algorithms for L1-regularized loss minimization."} {"_id": "2414283ed14ebb0eec031bb75cd25fbad000687e", "title": "Distributed large-scale natural graph factorization", "text": "Natural graphs, such as social networks, email graphs, or instant messaging patterns, have become pervasive through the internet. These graphs are massive, often containing hundreds of millions of nodes and billions of edges. While some theoretical models have been proposed to study such graphs, their analysis is still difficult due to the scale and nature of the data.\n We propose a framework for large-scale graph decomposition and inference. To resolve the scale, our framework is distributed so that the data are partitioned over a shared-nothing set of machines. We propose a novel factorization technique that relies on partitioning a graph so as to minimize the number of neighboring vertices rather than edges across partitions. Our decomposition is based on a streaming algorithm. It is network-aware as it adapts to the network topology of the underlying computational hardware.
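The coordinate updates that Shotgun (above) parallelizes are the usual soft-thresholding steps of coordinate descent for the Lasso. A sequential reference version, written for clarity rather than speed, might look like this:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Sequential coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1.
    Shotgun runs updates like these for many coordinates in parallel."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ w                          # running residual
    for _ in range(n_iter):
        for j in range(d):
            r += X[:, j] * w[j]            # remove coordinate j
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * w[j]            # add it back
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50); w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.01 * rng.normal(size=200)
print(np.round(lasso_cd(X, y, lam=5.0)[:5], 2))
```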
We use local copies of the variables and an efficient asynchronous communication protocol to synchronize the replicated values in order to perform most of the computation without having to incur the cost of network communication. On a graph of 200 million vertices and 10 billion edges, derived from an email communication network, our algorithm retains convergence properties while allowing for almost linear scalability in the number of computers."} {"_id": "2a65a1a126f843f0e3600ba80da50bc6d4c32855", "title": "The Matrix Cookbook", "text": "Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rish\u00f8j, Christian Schr\u00f6ppel, Douglas L. Theobald, Esben Hoegh-Rasmussen, Glynne Casteel, Jan Larsen, Jun Bin Gao, J\u00fcrgen Struckmeier, Kamil Dedecius, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Miguel Bar\u00e3o, Ole Winther, Pavel Sakov, Stephan Hattinger, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like to thank The Oticon Foundation for funding our PhD studies."} {"_id": "51e93552fe55be91a5711ff2aabc04b742503e68", "title": "ICA with Reconstruction Cost for Efficient Overcomplete Feature Learning", "text": "F. Here, C_x denotes the covariance matrix of the data; E is the matrix whose columns are the eigenvectors of C_x, and D is the diagonal matrix of the eigenvalues of C_x. This result also means that when the data is not whitened, the reconstruction cost becomes a weighted linear regression in the space rotated by the eigenvectors and scaled by the eigenvalues. The cost is therefore weighted heavily in the principal vector directions and less so in the other directions. Also, note that D^{-1/2}E^T is the well-known PCA whitening matrix. The optimization therefore builds in an inverse whitening matrix, such that W will have to learn the whitening matrix to cancel out ED^{1/2}.
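The whitening identity reconstructed in the RICA excerpt above is easy to verify numerically: the PCA whitening matrix D^{-1/2}E^T cancels the built-in factor ED^{1/2} and whitens the covariance. A small check with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 5))  # correlated data

C = np.cov(X, rowvar=False)          # C_x
d, E = np.linalg.eigh(C)             # eigenvalues (D) and eigenvectors (E)
W_pca = np.diag(d ** -0.5) @ E.T     # PCA whitening matrix D^{-1/2} E^T

# The whitening matrix cancels the inverse whitening factor E D^{1/2}.
print(np.allclose(W_pca @ (E @ np.diag(d ** 0.5)), np.eye(5)))  # True
# And it indeed whitens: the covariance of W_pca x is the identity.
print(np.allclose(W_pca @ C @ W_pca.T, np.eye(5)))              # True
```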
The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes."} {"_id": "40c5050e470fa0890e85487e4679197e07a91c09", "title": "pVM: persistent virtual memory for efficient capacity scaling and object storage", "text": "Next-generation byte-addressable nonvolatile memories (NVMs), such as phase change memory (PCM) and Memristors, promise fast data storage, and more importantly, address DRAM scalability issues. State-of-the-art OS mechanisms for NVMs have focused on improving the block-based virtual file system (VFS) to manage both persistence and the memory capacity scaling needs of applications. However, using the VFS for capacity scaling has several limitations, such as the lack of automatic memory capacity scaling across DRAM and NVM, inefficient use of the processor cache and TLB, and high page access costs. These limitations reduce application performance and also impact applications that use NVM for persistent object storage with flat namespaces, such as photo stores, NoSQL databases, and others.\n To address such limitations, we propose persistent virtual memory (pVM), a system software abstraction that provides applications with (1) automatic OS-level memory capacity scaling, (2) flexible memory placement policies across NVM, and (3) fast object storage. pVM extends the OS virtual memory (VM) instead of building on the VFS and abstracts NVM as a NUMA node with support for NVM-based memory placement mechanisms. pVM inherits benefits from the cache and TLB-efficient VM subsystem and augments these further by distinguishing between persistent and nonpersistent capacity use of NVM. Additionally, pVM achieves fast persistent storage by further extending the VM subsystem with consistent and durable OS-level persistent metadata. Our evaluation of pVM with memory capacity-intensive applications shows a 2.5x speedup and up to 80% lower TLB and cache misses compared to VFS-based systems. pVM's object store provides 2x higher throughput compared to the block-based approach of the state-of-the-art solution and up to a 4x reduction in the time spent in the OS."} {"_id": "877aff9bd05de7e9d82587b0e6f1cda28fd33171", "title": "Long-Term Visual Localization Using Semantically Segmented Images", "text": "Robust cross-seasonal localization is one of the major challenges in long-term visual navigation of autonomous vehicles. In this paper, we exploit recent advances in semantic segmentation of images, i.e., where each pixel is assigned a label related to the type of object it represents, to attack the problem of long-term visual localization. We show that semantically labeled 3D point maps of the environment, together with semantically segmented images, can be efficiently used for vehicle localization without the need for detailed feature descriptors (SIFT, SURF, etc.). Thus, instead of depending on hand-crafted feature descriptors, we rely on the training of an image segmenter. The resulting map takes up much less storage space compared to a traditional descriptor-based map.
A particle filter based semantic localization solution is compared to one based on SIFT-features, and even with large seasonal variations over the year, we perform on par with the larger and more descriptive SIFT-features, and are able to localize with an error below 1 m most of the time."} {"_id": "967852b62d5ef4c0f65f7f5568c5ed8661b77460", "title": "Sexual Conflict in Human Mating", "text": "Despite interdependent reproductive fates that favor cooperation, males and females exhibit many psychological and behavioral footprints of sexually antagonistic coevolution. These include strategies of deception, sexual exploitation, and sexual infidelity as well as anti-exploitation defenses such as commitment skepticism and emotions such as sexual regret and jealousy. Sexual conflict pervades the mating arena prior to sexual consummation, after a mating relationship has formed, and in the aftermath of a breakup. It also permeates many other social relationships in forms such as daughter-guarding, conflict in opposite-sex friendships, and workplace sexual harassment. As such, sexual conflict constitutes not a narrow or occasional flashpoint but rather persistent threads that run through our intensely group-living species."} {"_id": "a7ebc07154979549def2c39e33ad711ae5f41c65", "title": "Evaluating the relationship between white matter integrity, cognition, and varieties of video game learning.", "text": "BACKGROUND\nMany studies are currently researching the effects of video games, particularly in the domain of cognitive training. Great variability exists among video games, however, and few studies have attempted to compare different types of video games. Little is known, for instance, about the cognitive processes or brain structures that underlie learning of different genres of video games.\n\n\nOBJECTIVE\nTo examine the cognitive and neural underpinnings of two different types of game learning in order to evaluate their common and separate correlates, with the hopes of informing future intervention research.\n\n\nMETHODS\nParticipants (31 younger adults and 31 older adults) completed an extensive cognitive battery and played two different genres of video games, one action game and one strategy game, for 1.5 hours each. DTI scans were acquired for each participant, and regional fractional anisotropy (FA) values were extracted using the JHU atlas.\n\n\nRESULTS\nBehavioral results indicated that better performance on tasks of working memory and perceptual discrimination was related to enhanced learning in both games, even after controlling for age, whereas better performance on a perceptual speed task was uniquely related to enhanced learning of the strategy game. DTI results indicated that white matter FA in the right fornix/stria terminalis was correlated with action game learning, whereas white matter FA in the left cingulum/hippocampus was correlated with strategy game learning, even after controlling for age.\n\n\nCONCLUSION\nAlthough cognition, to a large extent, was a common predictor of both types of game learning, regional white matter FA could separately predict action and strategy game learning. 
Given the neural and cognitive correlates of strategy game learning, strategy games may provide a more beneficial training tool for adults suffering from memory-related disorders or declines in processing speed, particularly older adults."} {"_id": "82b90cbbfc440d3b932df2e732d37aa86af138e9", "title": "Empirical vulnerability analysis of automated smart contracts security testing on blockchains", "text": "The emerging blockchain technology supports a decentralized computing paradigm shift and is a rapidly approaching phenomenon. While blockchain is thought of primarily as the basis of Bitcoin, its application has grown far beyond cryptocurrencies due to the introduction of smart contracts. Smart contracts are self-enforcing pieces of software, which reside and run over a hosting blockchain. Using blockchain-based smart contracts for secure and transparent management to govern interactions (authentication, connection, and transaction) in Internet-enabled environments, mostly IoT, is a niche area of research and practice. However, writing trustworthy and safe smart contracts can be tremendously challenging because of the complicated semantics of the underlying domain-specific languages and their testability. There have been high-profile incidents that indicate blockchain smart contracts could contain various code-security vulnerabilities, instigating financial harms. When it involves the security of smart contracts, developers embracing the ability to write the contracts should be capable of testing their code, for diagnosing security vulnerabilities, before deploying them to the immutable environments on blockchains. However, there are only a handful of security testing tools for smart contracts. This implies that the existing research on automatic smart contracts security testing is not adequate and remains at a very early stage of infancy. With a specific goal to more readily realize the application of blockchain smart contracts in security and privacy, we should first understand their vulnerabilities before widespread implementation. Accordingly, the goal of this paper is to carry out a far-reaching experimental assessment of current static smart contracts security testing tools, for the most widely used blockchain, the Ethereum, and its domain-specific programming language, Solidity, to provide the first body of knowledge for creating more secure blockchain-based software."} {"_id": "3502439dbd2bdfb0a1b452f90760d8ff1fbb2c73", "title": "Convolutional Sequence Learning", "text": "We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. 
We also describe how to scale inference to ten million queries per day on a single GPU server."} {"_id": "e17c0ca1e9e5658c54d0c556644bb0c8b5a606c7", "title": "Parenting cognitions \u2192 parenting practices \u2192 child adjustment? The standard model.", "text": "In a large-scale (N = 317) prospective 8-year longitudinal multiage, multidomain, multivariate, multisource study, we tested a conservative three-term model linking parenting cognitions in toddlerhood to parenting practices in preschool to classroom externalizing behavior in middle childhood, controlling for earlier parenting practices and child externalizing behavior. Mothers who were more knowledgeable and satisfied, and who attributed successes in their parenting to themselves, when their toddlers were 20 months of age engaged in increased supportive parenting during joint activity tasks 2 years later when their children were 4 years of age, and 6 years after that, their 10-year-olds were rated by teachers as having fewer classroom externalizing behavior problems. This developmental cascade of a \"standard model\" of parenting applied equally to families with girls and boys, and the cascade from parenting attributions to supportive parenting to child externalizing behavior obtained independent of 12 child, parent, and family covariates. Conceptualizing socialization in terms of cascades helps to identify points of effective intervention."} {"_id": "226ac4e235990a4bfe0e4c77695e036d66f6c4d9", "title": "Attribute-Based Access Control Scheme in Federated IoT Platforms", "text": "The Internet of Things (IoT) introduced the possibility to connect electronic things from everyday life to the Internet, while making them ubiquitously available. With advanced IoT services, based on a trusted federation among heterogeneous IoT platforms, new security problems (including authentication and authorization) emerge. This contribution aims at describing the main facets of the preliminary security architecture envisaged in the context of the symbIoTe project, recently launched by the European Commission under the Horizon 2020 EU program. Our approach features distributed and decoupled mechanisms for authentication and authorization services in complex scenarios embracing heterogeneous and federated IoT platforms, by leveraging Attribute-Based Access Control and token-based authorization techniques."} {"_id": "46f0d44599188dadf831a3c0e486b2f0391d0dec", "title": "On the privacy of anonymized networks", "text": "The proliferation of online social networks, and the concomitant accumulation of user data, give rise to hotly debated issues of privacy, security, and control. One specific challenge is the sharing or public release of anonymized data without accidentally leaking personally identifiable information (PII). Unfortunately, it is often difficult to ascertain that sophisticated statistical techniques, potentially employing additional external data sources, are unable to break anonymity. In this paper, we consider an instance of this problem, where the object of interest is the structure of a social network, i.e., a graph describing users and their links. Recent work demonstrates that anonymizing node identities may not be sufficient to keep the network private: the availability of node and link data from another domain, which is correlated with the anonymized network, has been used to re-identify the anonymized nodes. 
This paper is about conditions under which such a de-anonymization process is possible.\n We attempt to shed light on the following question: can we assume that a sufficiently sparse network is inherently anonymous, in the sense that even with unlimited computational power, de-anonymization is impossible? Our approach is to introduce a random graph model for a version of the de-anonymization problem, which is parameterized by the expected node degree and a similarity parameter that controls the correlation between two graphs over the same vertex set. We find simple conditions on these parameters delineating the boundary of privacy, and show that the mean node degree need only grow slightly faster than log n with network size n for nodes to be identifiable. Our results have policy implications for sharing of anonymized network information."} {"_id": "465fd6e50dc5c3c2583bc1d7a7b3b424e0101b9f", "title": "New technology and health care costs--the case of robot-assisted surgery.", "text": "These technologies can lead to increases in costs, either because they are simply more expensive than previous treatments or because their introduction leads to an expansion in the types and numbers of patients treated. We examined these patterns as they apply to the case of robot-assisted surgery. Robotic surgical devices allow a surgeon at a console to operate remote-controlled robotic arms, which may facilitate the performance of laparoscopic procedures. Laparoscopic surgery, in turn, is associated with shorter hospital stays than open surgery, as well as with less postoperative pain and scarring and lower risks of infection and need for blood transfusion. Robotic technology has been adopted rapidly over the past 4 years in both the United States and Europe. The number of robot-assisted procedures that are performed worldwide has nearly tripled since 2007, from 80,000 to 205,000. Between 2007 and 2009, the number of da Vinci systems, the leading robotic technology, installed in U.S. hospitals grew by approximately 75%, from almost 800 to around 1400, and the number installed in other countries doubled, from 200 to nearly 400, according to Intuitive Surgical, da Vinci\u2019s manufacturer. A wide range of procedures are now performed by means of robot-assisted surgery. Some of these procedures were already being performed laparoscopically before robots were introduced; the introduction of robotic technology affects expenditures associated with such procedures primarily by increasing the cost per procedure. For procedures that were more often performed as open surgeries, the introduction of robots may affect both the cost and the volume of surgeries performed. Robotic surgical systems have high fixed costs, with prices ranging from $1 million to $2.5 million for each unit. Surgeons must perform 150 to 250 procedures to become adept in their use. The systems also require costly maintenance and demand the use of additional consumables (single-use robotic appliances). The use of robotic systems may also require more operating time than alternatives. 
In the case of procedures that had previously been performed as open surgery, however, some of the new costs will"} {"_id": "bbb31f0a7c620cb168c9860167eae36e7936382d", "title": "Minimally supervised question classification on fine-grained taxonomies", "text": "This article presents a minimally supervised approach to question classification on fine-grained taxonomies. We have defined an algorithm that automatically obtains lists of weighted terms for each class in the taxonomy, thus identifying which terms are highly related to the classes and are highly discriminative between them. These lists have then been applied to the task of question classification. Our approach is based on the divergence of probability distributions of terms in plain text retrieved from the Web. A corpus of questions with which to train the classifier is therefore not necessary. As the system is based purely on statistical information, it does not require additional linguistic resources or tools. The experiments were performed on English questions and their Spanish translations. The results reveal that our system surpasses current supervised approaches in this task, obtaining a significant improvement in the experiments carried out."} {"_id": "3e4c19f558862a92c8e7485758b5809a0b8338db", "title": "Crowdsourcing Information Systems \u2013 A Systems Theory Perspective", "text": "Crowdsourcing harnesses the potential of large and open networks of people. It is a relatively new phenomenon and has attracted substantial interest in practice. Related research, however, lacks a theoretical foundation. We propose a system-theoretical perspective on crowdsourcing systems to address this gap and illustrate its applicability by using it to classify crowdsourcing systems. By deriving two principal dimensions from theory, we identify four fundamental types of crowdsourcing systems that help to distinguish important features of such systems. We analyse their respective characteristics and discuss implications and requirements for various aspects related to the design of such systems. Our results demonstrate that systems theory can inform the study of crowdsourcing systems. The identified system types and the implications on their design may prove useful for researchers to frame future studies and for practitioners to identify the right crowdsourcing systems for a particular purpose."} {"_id": "f5ceb1d4403a710fa2e0002755f286756b407c16", "title": "Methods of mapping from phase to sine amplitude in direct digital synthesis", "text": "There are many methods for performing functional mapping from phase to sine amplitude (e.g., ROM look-up, coarse/fine segmentation into multiple ROMs, Taylor series, CORDIC algorithm). The spectral purity of the conventional direct digital synthesizer (DDS) is also determined by the resolution of the values stored in the sine table ROM. Therefore, it is desirable to increase the resolution of the ROM. Unfortunately, larger ROM storage means higher power consumption, lower reliability, lower speed, and greatly increased costs. Different memory compression and algorithmic techniques and their effect on distortion and trade-offs are investigated in detail. A computer program has been created to simulate the effects of the memory compression and algorithmic techniques on the output spectrum of the DDS. 
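As a concrete illustration of the conventional ROM look-up approach named above, the sketch below implements a phase accumulator driving an amplitude-quantized sine table; the bit widths and the frequency control word are arbitrary assumptions chosen for illustration, and the spur behavior of real designs depends on the exact truncation scheme.

```python
import numpy as np

def dds_output(fcw, n_samples, acc_bits=32, addr_bits=10, amp_bits=12):
    """Phase-accumulator DDS with a quantized sine look-up table.

    fcw: frequency control word; output frequency = fcw / 2**acc_bits * f_clk.
    The accumulator keeps acc_bits of phase, but only the top addr_bits
    address the sine ROM, and ROM samples are quantized to amp_bits.
    Both truncations create the spurious tones studied here.
    """
    rom_len = 1 << addr_bits
    scale = (1 << (amp_bits - 1)) - 1
    # Sine ROM: amplitude-quantized samples of one full period.
    rom = np.round(scale * np.sin(2 * np.pi * np.arange(rom_len) / rom_len))

    acc = 0
    out = np.empty(n_samples)
    for i in range(n_samples):
        acc = (acc + fcw) & ((1 << acc_bits) - 1)   # phase accumulator wraps modulo 2**acc_bits
        addr = acc >> (acc_bits - addr_bits)        # phase truncation to the ROM address
        out[i] = rom[addr]
    return out / scale

# Spurious responses can then be read off the spectrum of the output:
sig = dds_output(fcw=0x02345679, n_samples=4096)
spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(sig * np.hanning(len(sig)))) + 1e-12)
```

Swapping the `rom` table for a compressed or algorithmic approximation is exactly the kind of change whose worst-case spur such a simulation can compare.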
For each memory compression and algorithmic technique, the worst-case spurious response is calculated using the computer program."} {"_id": "17d8fa4be0e351e62e6f1c72296e1f6136d3b9df", "title": "Why does deep and cheap learning work so well?", "text": "We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through \u201ccheap learning\u201d with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various \u201cno-flattening theorems\u201d showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer."} {"_id": "fc64015f62631b642e0162860be9d753cc4a1b0a", "title": "Auto-Calibration Around-View Monitoring System", "text": "This work establishes an auto-calibration around-view monitoring system to avoid inconsistent performance caused by deviations in camera installation. First, a vehicle is parked at a specific area which has a special pattern on the ground. Second, the captured images from four embedded cameras are compared with the reference pattern images to obtain the deviations in angle, X-axis and Y-axis. Finally, these deviations are used to compensate the coefficients of transformation matrices associated with view-point conversion and image stitching. Particularly, the adaptive edge recovery scheme is developed to help on precise alignments. The experimental results show that the proposed system implemented at the TI DaVinci DSP platform can accomplish auto-calibration of a vehicle within 60 seconds and have deviation rates smaller than 38% at stitched regions and 25% at the other regions. Additionally, \u00b12\u00b0 deviation errors in angle, X-axis and Y-axis are allowed. Particularly, the proposed system can generate precise 2-meter around-view pictures at a vehicle speed of 30 km/h. Therefore, the auto-calibration system proposed herein can be widely applied to vehicle manufacture, verification and repair."} {"_id": "dd9d624bc79f0c08378fedb6ddbe772581fa0d45", "title": "Incivility in nursing education: An intervention.", "text": "Incivility in nursing education is an unfortunate phenomenon affecting nursing students in all aspects of their educational experience. Students and their instructors are often ill-equipped to deal with academic incivility and their lack of ability to handle such behaviors has proven detrimental to the future of the nursing profession. Nursing instructors need tools to help educate nursing students on how to recognize uncivil behaviors within themselves as well as others and ways to combat them. This research project addressed these aspects of academic incivility and implemented an e-learning module that was developed to educate students on incivility. 
The data was collected through a pre-test, post-test model with resulting statistical analysis using McNemar's test. Results showed the nursing students obtained increased self-efficacy with regard to their ability to define, detect, and combat academic incivility after viewing the e-learning module. In conclusion, the successful implementation of the e-learning module provides further incentive for schools of nursing to consider implementing incivility education in their curriculums."} {"_id": "48a7b3c5d2592ee6e5760f2dd3d771f2b701b682", "title": "Single phase based on ups applied Z-source inverter by using matlab / simulink", "text": "Uninterruptible power supplies (UPSs) are widely used to supply critical loads, such as airline computers and life-support systems in hospitals, providing protection against power failure or anomalies of power-line voltage. In general, there are two types of traditional single-phase UPSs. The first one couples a battery bank to a half- or full-bridge inverter with a low-frequency transformer. In this type of UPS, the ac output voltage is higher than that of the battery bank; thus, a step-up transformer is required to boost the voltage. The second one couples a battery bank to a dc/dc booster with a half- or full-bridge inverter. In this type of UPS, the additional booster is needed, leading to high cost and low efficiency. The control of the switches in the booster also complicates the system. Due to the presence of the step-up transformer, the inverter current is much higher than the load current, causing high current stress on the switches of the inverter. The dead time in the pulse-width-modulation (PWM) signals, needed to prevent the upper and lower switches at the same phase leg from shooting through, has to be provided in the aforementioned two types of UPSs, and it distorts the waveform of the ac output voltage. In order to overcome the above problems of the traditional UPSs, a new UPS topology is proposed using a voltage-source inverter and a Z-source inverter. With this new topology, the proposed UPS offers advantages over the traditional UPSs: the dc/dc booster and the inverter are combined into a single-stage power conversion, and the distortion of the ac output-voltage waveform is reduced in the absence of dead time in the PWM signals. The effectiveness of the proposed method is verified by using MATLAB/SIMULINK software."} {"_id": "4d0f1ef8e1ae14af8db252bb9057da17c55bef21", "title": "SAR automatic target recognition based on a visual cortical system", "text": "The human visual system is the most complex and accurate system. In order to extract better features from Synthetic Aperture Radar (SAR) targets, a SAR automatic target recognition (ATR) algorithm based on the human visual cortical system is proposed. This algorithm contains three stages: (1) image preprocessing (we use a Kuan filter to do the enhancement and an adaptive Intersecting Cortical Model (ICM) to do the segmentation); (2) feature extraction using a sparse autoencoder; (3) classification using a softmax regression classifier. Experimental results on the MSTAR public data show a better recognition performance."} {"_id": "8f50f6a53ac6c81ce583d098c5339058e7343794", "title": "Trust and mobile commerce in North America", "text": "Mobile Commerce (mCommerce) activities include the act of shopping and buying on mobile devices, along with the more recent emergence of mobile payment systems. 
Within North America, mCommerce activities have become increasingly popular and will likely continue on this upwards trend as mobile devices further proliferate markets. Historically, one common issue with the adoption and use of eCommerce systems (e.g., commerce activities on personal computers) is trust. Yet we know little of how trust and other social factors may affect mCommerce usage, a new genre of commerce activities that explicitly occur on mobile devices. To help address this problem, we have conducted two studies that explore users' mCommerce activities. The first is a diary and interview study of mCommerce shoppers who have already adopted the technology and shop on their mobile devices regularly. Our study explores typical mCommerce routines and behaviors along with issues of trust, given trust's long-standing concern for eCommerce. The second is a diary and interview study of new and existing users of mobile payment services in North America. Participants used a variety of services, including Google Wallet, Amazon Payments, LevelUp, Square and company apps geared towards payments (e.g., Starbucks). Our results show that when it comes to shopping on mobile devices, people have few trust concerns. Yet when mobile devices are used for payments within physical stores, trust issues emerge along with prepurchase anxiety and mental model challenges. We discuss these results and show the value in adapting and developing new trust mechanisms for mCommerce."} {"_id": "8318fa48ed23f9e8b9909385d3560f029c623171", "title": "Implementing linearizability at large scale and low latency", "text": "Linearizability is the strongest form of consistency for concurrent systems, but most large-scale storage systems settle for weaker forms of consistency. RIFL provides a general-purpose mechanism for converting at-least-once RPC semantics to exactly-once semantics, thereby making it easy to turn non-linearizable operations into linearizable ones. RIFL is designed for large-scale systems and is lightweight enough to be used in low-latency environments. RIFL handles data migration by associating linearizability metadata with objects in the underlying store and migrating metadata with the corresponding objects. It uses a lease mechanism to implement garbage collection for metadata. We have implemented RIFL in the RAMCloud storage system and used it to make basic operations such as writes and atomic increments linearizable; RIFL adds only 530 ns to the 13.5 \u03bcs base latency for durable writes. We also used RIFL to construct a new multi-object transaction mechanism in RAMCloud; RIFL's facilities significantly simplified the transaction implementation. The transaction mechanism can commit simple distributed transactions in about 20 \u03bcs and it outperforms the H-Store main-memory database system for the TPC-C benchmark."} {"_id": "6d9dc7c078bc32f2b7c17637ecc60e5ffe410897", "title": "Penile septoplasty for congenital ventral penile curvature: results in 51 patients.", "text": "PURPOSE\nThe technique most widely used to correct congenital ventral penile curvature is still corporoplasty as originally described by Nesbit. 
We present results in patients treated with a variation of the Nesbit corporoplasty used specifically for congenital ventral penile curvature.\n\n\nMATERIALS AND METHODS\nFrom June 2000 to June 2007 we treated 51 patients with congenital ventral penile curvature using modified corporoplasty (septoplasty), consisting of accessing the bed of the penile dorsal vein and excising 1 or more diamonds of tunica albuginea from it, extending in wedge-like formation 4 to 5 mm deep into the septum, until the penis is completely straightened. Patient history, clinical findings, self-photography results and the International Index of Erectile Function score were assessed. Curvature grade is expressed using the equation 180 degrees - X, where X represents the deviation in degrees from the penis axis. Mean preoperative ventral curvature was 131.4 degrees (median 135, range 145 to 110). Of the patients 13 also had erectile dysfunction.\n\n\nRESULTS\nAt followup, postoperative mean ventral curvature was 178.3 degrees (median 179.1, range 180 to 175). A total of 49 patients stated that they were completely satisfied. Penile shortening was 5 to 15 mm. Compared to preoperative values there were marked improvements in the International Index of Erectile Function score in the various groups. No major postoperative complications developed. In 4 patients wound healing occurred by secondary intention.\n\n\nCONCLUSIONS\nThis technique provides excellent straightening of the curved penis. By avoiding isolation of the whole dorsal neurovascular bundle, there is no risk of neurovascular lesions. Suture perception is minimized."} {"_id": "f9f92fad17743dd14be7b8cc05ad0881b67f32c2", "title": "Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory", "text": "Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., source domain and target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing for discriminating power transfer and standard learning machine propagation across two domains. In the first one, a cross-domain Mahalanobis distance is learned by combining three goals: reducing the distribution difference between different domains, preserving the geometry of target domain data, and aligning the geometry of source domain data with label information. Furthermore, we devote our efforts to solving complex domain adaptation problems and go beyond linear cross-domain metric learning by extending the first method to a multiple kernel learning framework. A convex combination of multiple kernels and a linear transformation are adaptively learned in a single optimization, which greatly benefits the exploration of prior knowledge and the description of data characteristics. Comprehensive experiments in three real-world applications (face recognition, text classification, and object categorization) verify that the proposed methods outperform state-of-the-art metric learning and domain adaptation methods."} {"_id": "1b51a9be75c5b4a02aecde88a965e32413efd5a3", "title": "On multi-view feature learning", "text": "Sparse coding is a common approach to learning local features for object recognition. 
Recently, there has been an increasing interest in learning features from spatio-temporal, binocular, or other multi-observation data, where the goal is to encode the relationship between images rather than the content of a single image. We provide an analysis of multi-view feature learning, which shows that hidden variables encode transformations by detecting rotation angles in the eigenspaces shared among multiple image warps. Our analysis helps explain recent experimental results showing that transformation-specific features emerge when training complex cell models on videos. Our analysis also shows that transformation-invariant features can emerge as a by-product of learning representations of transformations."} {"_id": "213d7af7107fa4921eb0adea82c9f711fd105232", "title": "Reducing the dimensionality of data with neural networks.", "text": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."} {"_id": "3da3aa90c3e013ade289122168bb09ad3dd264e4", "title": "Learning Object Affordances: From Sensory--Motor Coordination to Imitation", "text": "Affordances encode relationships between actions, objects, and effects. They play an important role on basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step to understand the world properties and develop social skills. We present a general model for learning object affordances using Bayesian networks integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interacting with objects. We illustrate the benefits of the acquired knowledge in imitation games."} {"_id": "4c46347fbc272b21468efe3d9af34b4b2bad6684", "title": "Deep learning via Hessian-free optimization", "text": "We develop a 2nd-order optimization method based on the \u201cHessian-free\u201d approach, and apply it to training deep auto-encoders. Without using pre-training, we obtain results superior to those reported by Hinton & Salakhutdinov (2006) on the same tasks they considered. Our method is practical, easy to use, scales nicely to very large datasets, and isn\u2019t limited in applicability to autoencoders, or any specific model class. We also discuss the issue of \u201cpathological curvature\u201d as a possible explanation for the difficulty of deep learning and how 2nd-order optimization, and our method in particular, effectively deals with it."} {"_id": "2e5fd5433d93350d84b57cc865713bc7e237769e", "title": "Hierarchical Singular Value Decomposition of Tensors", "text": "We define the hierarchical singular value decomposition (SVD) for tensors of order d \u2265 2. This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in d = 2), and we prove these. 
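Since the hierarchical SVD collapses to the ordinary SVD for d = 2, the matrix case gives a compact picture of the truncation being generalized: keep the leading k singular triplets and you obtain a best rank-k approximation. The sketch below is only that d = 2 baseline, not the hierarchical algorithm itself.

```python
import numpy as np

def svd_truncate(A, k):
    """Best rank-k approximation of a matrix in the Frobenius norm
    (the d = 2 special case of the hierarchical SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep the k leading singular triplets; discard the rest.
    return U[:, :k] * s[:k] @ Vt[:k, :]

A = np.random.default_rng(1).normal(size=(200, 120))
A_k = svd_truncate(A, k=10)
# By Eckart-Young, this error equals the norm of the discarded singular values.
err = np.linalg.norm(A - A_k)
```

The hierarchical format then applies such truncations to matricizations of the tensor along a dimension tree, which is where the O((d-1)k^3 + dnk) storage bound quoted next comes from.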
In particular, one can find low rank (almost) best approximations in a hierarchical format (H-Tucker) which requires only O((d \u2212 1)k^3 + dnk) parameters, where d is the order of the tensor, n the size of the modes and k the (hierarchical) rank. The H-Tucker format is a specialization of the Tucker format and it contains as a special case all (canonical) rank k tensors. Based on this new concept of a hierarchical SVD we present algorithms for hierarchical tensor calculations allowing for a rigorous error analysis. The complexity of the truncation (finding lower rank approximations to hierarchical rank k tensors) is in O((d \u2212 1)k^4 + dnk^2) and the attainable accuracy is just 2\u20133 digits less than machine precision."} {"_id": "935b70f2e223b2229187422843ae0efb0116f033", "title": "Contact-less Transfer of Energy by means of a Rotating Transformer", "text": "This paper examines the electrical properties of a rotating transformer used for contact-less transfer of energy to rotating equipment. Two winding layouts are analysed theoretically and experimentally. The reluctance modelling provides a deep understanding of how the geometry of the core and windings affect the electrical behaviour of the component. Theoretical calculations, measured results and finite element analysis are used to compare the proposed layouts. Basic design guidelines are given to adjust the leakage and magnetising inductances. The selection of a phase-shifted full bridge converter is suggested for applications at the power level of 1 kW."} {"_id": "51797b3d2256cee87cc07a6f0ae6fbd312047401", "title": "Anarchy, stability, and utopia: creating better matchings", "text": "Historically, the analysis of matching has centered on designing algorithms to produce stable matchings as well as on analyzing the incentive compatibility of matching mechanisms. Less attention has been paid to questions related to the social welfare of stable matchings in cardinal utility models. We examine the loss in social welfare that arises from requiring matchings to be stable, the natural equilibrium concept under individual rationality. While this loss can be arbitrarily bad under general preferences, when there is some structure to the underlying graph corresponding to natural conditions on preferences, we prove worst case bounds on the price of anarchy. Surprisingly, under simple distributions of utilities, the average case loss turns out to be significantly smaller than the worst-case analysis would suggest. Furthermore, we derive conditions for the existence of approximately stable matchings that are also close to socially optimal, demonstrating that adding small switching costs can make socially (near-)optimal matchings stable. Our analysis leads to several concomitant results of interest on the convergence of decentralized partner-switching algorithms, and on the impact of heterogeneity of tastes on social welfare."} {"_id": "5a3a21efea38628cf437378931ddfd60c79d74f0", "title": "Recovering human body configurations: combining segmentation and recognition", "text": "The goal of this work is to detect a human figure in an image and localize its joints and limbs along with their associated pixel masks. In this work we attempt to tackle this problem in a general setting. The dataset we use is a collection of sports news photographs of baseball players, varying dramatically in pose and clothing. The approach that we take is to use segmentation to guide our recognition algorithm to salient bits of the image. 
We use this segmentation approach to build limb and torso detectors, the outputs of which are assembled into human figures. We present quantitative results on torso localization, in addition to shortlisted full body configurations."} {"_id": "5116ed56e2f22cc8739cba7ff9c1bf32bfdf29b1", "title": "1.1 Definition of adjustable autonomy and human-centered autonomous", "text": "We expect a variety of autonomous systems, from rovers to life-support systems, to play a critical role in the success of manned Mars missions. The crew and ground support personnel will want to control and be informed by these systems at varying levels of detail depending on the situation. Moreover, these systems will need to operate safely in the presence of people and cooperate with them effectively. We call such autonomous systems human-centered in contrast with traditional \u201cblack-box\u201d autonomous systems. Our goal is to design a framework for human-centered autonomous systems that enables users to interact with these systems at whatever level of control is most appropriate whenever they so choose, but minimize the necessity for such interaction. This paper discusses on-going research at the NASA Ames Research Center and the Johnson Space Center in developing human-centered autonomous systems that can be used for a manned Mars mission."} {"_id": "18f17d72bc033dce8cc0ac5e6cf7f647cddbdc43", "title": "Fat grafting in facial rejuvenation.", "text": "Patients with significant facial atrophy and age-related loss of facial fat generally achieve suboptimal improvement from both surface treatments of facial skin and surgical lifts. Restoring lost facial volume by fat grafting is a powerful technique that is now acknowledged by most plastic surgeons and other physicians engaged in treating the aging face as one of the most important advances in aesthetic surgery. Properly performed, the addition of fat to areas of the face that have atrophied because of age or disease can produce a significant and sustained improvement in appearance that is unobtainable by other means."} {"_id": "2a761bb8cf2958eb5e5424b00f71692abc7191a4", "title": "A UWB based Localization System for Indoor Robot Navigation", "text": "For robots to become more popular for domestic applications, the shortcomings of current indoor navigation technologies have to be overcome. In this paper, we propose the use of UWB-IR for indoor robot navigation. Various parts of an actual implementation of a UWB-IR based robot navigation system such as system architecture, RF sub-system design, antennas and localization algorithms are discussed. It is shown that by properly addressing the various issues, a localization error of less than 25 cm can be achieved at all points within a realistic indoor localization space."} {"_id": "e0ca033ad5d8a351a48c6a8fb4ff235ace26f362", "title": "Blockchain-Free Cryptocurrencies : A Framework for Truly Decentralised Fast Transactions", "text": "The \u201cblockchain\u201d distributed ledger pioneered by Bitcoin is effective at preventing double-spending, but inherently attracts (1) \u201cuser cartels\u201d and (2) incompressible delays, as a result of linear verification and a winner-takes-all incentive lottery. We propose to forgo the \u201cblocks\u201d and \u201cchain\u201d entirely, and build a truly distributed ledger system based on a lean graph of cross-verifying transactions, which now become the main and only objects in the system. 
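To make the "lean graph of cross-verifying transactions" idea concrete, the toy sketch below records each transaction together with the parent transactions it verifies, so validity rests on walking ancestry rather than on a chain of blocks. The data structure and the parent-count choice are illustrative assumptions, not the scheme's actual rules.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    tx_id: str
    parents: tuple  # ids of transactions this one cross-verifies

ledger: dict[str, Tx] = {}

def add_tx(tx_id: str, parents: tuple) -> Tx:
    """Append a transaction that verifies (and thus builds on) its parents."""
    if any(p not in ledger for p in parents):
        raise ValueError("a transaction may only verify already-known transactions")
    tx = Tx(tx_id, parents)
    ledger[tx_id] = tx
    return tx

def ancestry(tx_id: str) -> set:
    """All transactions a given transaction transitively rests upon."""
    seen, stack = set(), [tx_id]
    while stack:
        for p in ledger[stack.pop()].parents:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# Genesis plus two rounds of cross-verification:
add_tx("genesis", ())
add_tx("a", ("genesis",))
add_tx("b", ("genesis",))
add_tx("c", ("a", "b"))  # "c" enshrines both "a" and "b" into its ancestry
assert ancestry("c") == {"genesis", "a", "b"}
```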
A fully distributed consensus mechanism, based on progressive proofs of work with predictable incentives, ensures rapid convergence even across a large network of unequal participants, who all get rewarded for working at their own pace. Graph-based affirmation fosters snappy response through automatic scaling, while application-agnostic design supports all modern cryptocurrency features such as multiple denominations, swaps, securitisation, scripting, smart contracts, etc. We prove theoretically, and experimentally verify, our proposal to show it achieves a crucial \u201cconvergence\u201d property \u2014 meaning that any valid transaction entering the system will quickly become enshrined into the ancestry upon which all future transactions will rest."} {"_id": "b38e0fb48dbdf8624a91bb2b2ec24dbcb476fc40", "title": "Fuzzy Adaptive Sliding Mode Control of 6 DOF Parallel Manipulator with Electromechanical Actuators in Cartesian Space Coordinates", "text": "This paper proposes a fuzzy adaptive sliding mode controller in Cartesian space coordinates for trajectory tracking of the 6 DOF parallel manipulator, considering the dynamics of its electromechanical actuators. The 6-DOF sensors may be very expensive or impossible to find at the desired accuracies and, especially at constant speeds, will contain errors; therefore, it is better to use LVDT position sensors and then solve the forward kinematics problem using an iterative artificial neural network strategy, instead of using numerical methods such as the Newton-Raphson method, which impose a heavy computational load on the system. This controller consists of a sliding mode control and adaptive learning algorithms to adjust uncertain parameters of the system, which relax the requirement of bounding parameter values and do not depend upon any parameter initialization conditions; fuzzy approximators to estimate the plant\u2019s unknown nonlinear functions; and robustifying control terms to compensate for approximation errors, so that closed-loop stability and finite-time convergence of tracking errors can be guaranteed. Simulation results demonstrate that the proposed control strategy can achieve favorable control performance with regard to uncertainties, nonlinearities and external disturbances."} {"_id": "69155d0852cf46e45c52c0e0b6e965b77ff1c0be", "title": "Spatial Planning: A Configuration Space Approach", "text": "This paper presents algorithms for computing constraints on the position of an object due to the presence of other objects. This problem arises in applications that require choosing how to arrange or how to move objects without collisions. The approach presented here is based on characterizing the position and orientation of an object as a single point in a configuration space, in which each coordinate represents a degree of freedom in the position or orientation of the object. The configurations forbidden to this object, due to the presence of other objects, can then be characterized as regions in the configuration space, called configuration space obstacles. The paper presents algorithms for computing these configuration space obstacles when the objects are polygons or polyhedra."} {"_id": "5b4bc6f02be76fe9ed542dd23c4c6d274fec8ae4", "title": "Design and Describe REST API without Violating REST: A Petri Net Based Approach", "text": "As the REST architectural style gains popularity in the web service community, there is growing concern and debate on how to design RESTful web services (REST API) in a proper way. 
We attribute this problem to the lack of a standard model and language to describe a REST API that respects all the REST constraints. As a result, many web services that claim to be REST APIs are not hypermedia driven as prescribed by REST. This situation may lead to REST APIs that are not as scalable, extensible, and interoperable as promised by REST. To address this issue, this paper proposes REST Chart as a model and language to design and describe REST APIs without violating the REST constraints. REST Chart models a REST API as a special type of Colored Petri Net whose topology defines the REST API and whose token markings define the representational state space of user agents using that API. We demonstrate REST Chart with an example REST API. We also show how REST Chart can support efficient content negotiation and reuse hybrid representations to broaden design choices. Furthermore, we argue that the REST constraints, such as hypermedia-driven interaction and statelessness, can either be enforced naturally or checked automatically in REST Chart."} {"_id": "e2ea20d8cb365cec59b19b621dae3e26c85b53f9", "title": "Power electronics for renewable energy systems: Wind turbine and photovoltaic systems", "text": "The use of renewable energy sources is increasing because of the depletion of natural resources and the rising pollution level from energy production. Wind energy and solar energy are the most widely used among the renewable energy sources. Power electronics is needed in almost all kinds of renewable energy systems. It controls the renewable source and interfaces with the load effectively, whether grid-connected or working in stand-alone mode. In this paper, overviews of wind and photovoltaic energy systems are introduced. Next, the power electronic circuits behind the most common wind and photovoltaic configurations are discussed. Finally, their controls and important requirements for grid connection are explained."} {"_id": "b2e3f06a00d4e585498ac4449d86a01c3c71680f", "title": "The LightCycler: a microvolume multisample fluorimeter with rapid temperature control.", "text": "Experimental and commercial microvolume fluorimeters with rapid temperature control are described. Fluorescence optics adopted from flow cytometry were used to interrogate 1-10-microL samples in glass capillaries. Homogeneous temperature control and rapid change of sample temperatures (10 degrees C/s) were obtained by a circulating air vortex. A prototype 2-color, 32-sample version was constructed with a xenon arc for excitation, separate excitation and emission paths, and photomultiplier tubes for detection. The commercial LightCycler, a 3-color, 24-sample instrument, uses a blue light-emitting diode for excitation, paraxial epi-illumination through the capillary tip and photodiodes for detection. Applications include analyte quantification and nucleic acid melting curves with fluorescent dyes, enzyme assays with fluorescent substrates and techniques that use fluorescence resonance energy transfer. Microvolume capability allows analysis of very small or expensive samples. As an example of one application, rapid cycle DNA amplification was continuously monitored by three different fluorescence techniques, which included using the double-stranded DNA dye SYBR Green I, a dual-labeled 5'-exonuclease hydrolysis probe, and adjacent fluorescein- and Cy5-labeled hybridization probes. 
Complete amplification and analysis requires only 10-15 min."} {"_id": "310745d7eaaf877acb5b3a7dc3778d91d2172633", "title": "The Internet of things.", "text": "Keynote: David Rose, \u201cThe New Vanguard for Business: Connectivity, Design, and the Internet of Things.\u201d The Internet of Things is the hottest topic of the moment, a shift predicted to be as momentous as the impact of the internet itself. The internet has allowed us to share ideas and data largely input by humans. What about a world where data from objects as diverse as umbrellas, fridges, and gas tanks all flows through the internet?"} {"_id": "93852ab424a74b893a163df1223d968fb69805b6", "title": "Kinetic Energy of Wind-Turbine Generators for System Frequency Support", "text": "As wind power penetration increases and fossil plants are retired, it is feared that there will be insufficient kinetic energy (KE) from the plants to support the system frequency. This paper shows the fear is groundless because the high inertias (H \u2245 4 seconds) of wind turbine-generators (WTGs) can be integrated to provide frequency support during a generation outage."} {"_id": "61578923fde1cafdc1e6824f72a31869889eb8d2", "title": "Enhancing financial performance with social media: An impression management perspective", "text": "The growing plethora of social media outlets has sparked both opportunity and concern in how organizations manage their corporate image. While previous research has examined the various problems associated with negative, word-of-mouth transference of information occurring simultaneously throughout many networks in social media, this paper seeks to address social media usage in impression management (IM). Specifically, we seek to answer two questions: Do IM direct-assertive strategies in social media impact a firm\u2019s financial performance? And which social media strategies impact a firm\u2019s financial performance? To analyze these questions in depth, we use text mining to collect and analyze text from a variety of social network platforms, including blogs, forums, and corporate websites, to assess how such IM strategies impact financial performance. Our results provide text mining validation that social media have a positive impact on IM. We also provide further understanding of how social media strengthen organizations\u2019 communication with various internal and external stakeholders. Lastly, we provide future research ideas concerning social media\u2019s usage in IM."} {"_id": "6f56eab88b70a4dd7a11255aaef97c0f87c919db", "title": "Low power/low voltage high speed CMOS differential track and latch comparator with rail-to-rail input", "text": "A new CMOS differential latched comparator suitable for low-voltage, low-power applications is presented. The circuit consists of a constant-gm rail-to-rail common-mode operational transconductance amplifier followed by a regenerative latch in a track and latch configuration to achieve a relatively constant delay. The use of a track and latch minimizes the total number of gain stages required for a given resolution. Potential offset from the constant-gm differential input stage, estimated as the main source of offset, can be minimized by proper choice of transistor sizes. Simulation results show that the circuit requires less than 86 \u03bcA with a supply voltage of 1.65 V in a standard CMOS 0.18 \u03bcm digital process. 
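Spelling out the quoted comparator figures, the static power budget follows directly from the supply current and voltage (a back-of-the-envelope check on the "low power" claim, not a simulated number):

```latex
P = I_{\mathrm{supply}} \cdot V_{DD} \le 86\,\mu\mathrm{A} \times 1.65\,\mathrm{V} \approx 142\,\mu\mathrm{W}
```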
The average delay is less than 1 ns and is approximately independent of the common-mode input voltage."} {"_id": "677c44572cd6ac6ccab36d74c8246c4d8785434f", "title": "From semantics to syntax and back again: Argument structure in the third year of life", "text": "An essential part of the human capacity for language is the ability to link conceptual or semantic representations with syntactic representations. On the basis of data from spontaneous production, it has been suggested that young children acquire such links on a verb-by-verb basis, with little in the way of a general understanding of linguistic argument structure. Here, we suggest that a receptive understanding of argument structure--including principles linking syntax and conceptual/semantic structure--appears earlier. In a forced-choice pointing task we have shown that toddlers in the third year of life can map a single scene (involving a novel causative action paired with a novel verb) onto two distinct syntactic frames (transitive and intransitive). This suggests that even before toddlers begin generalizing argument structure in their own speech, they have some representation of conceptual/semantic categories, syntactic categories, and a system that links the two."} {"_id": "6cf2afb2c5571b594038751b0cc76f548711c350", "title": "Physical Discipline and Behavior Problems in African American, European American, and Hispanic Children: Emotional Support as a Moderator", "text": "Using data collected over a 6-year period on a sample of 1,039 European American children, 550 African American children, and 401 Hispanic children from the Children of the National Longitudinal Survey of Youth, this study assessed whether maternal emotional support of the child moderates the relation between spanking and behavior problems. Children were 4\u20135 years of age in the first of 4 waves of data used (1988, 1990, 1992, 1994). At each wave, mothers reported their use of spanking and rated their children\u2019s behavior problems. Maternal emotional support of the child was based on interviewer observations conducted as part of the Home Observation for Measurement of the Environment. For each of the 3 racial-ethnic groups, spanking predicted an increase in the level of problem behavior over time, controlling for income-needs ratio and maternal emotional support. Maternal emotional support moderated the link between spanking and problem behavior. Spanking was associated with an increase in behavior problems over time in the context of low levels of emotional support, but not in the context of high levels of emotional support. This pattern held for all 3 racial-ethnic groups."} {"_id": "ac7257f58991d63ec9541b632a3c19d7c3db4275", "title": "Depressive symptomatology among university students in Denizli, Turkey: prevalence and sociodemographic correlates.", "text": "AIM\nTo determine overall and subgroup prevalence of depressive symptomatology among university students in Denizli, Turkey during the 1999-2000 academic year, and to investigate whether sociodemographic factors were associated with depressive symptoms in university students.\n\n\nMETHODS\nA stratified probability sample of 504 Turkish university students (296 male, 208 female) was used in a cross-sectional study. Data were obtained by a self-administered questionnaire, including questions on sociodemographic characteristics and problem areas. The revised Beck Depression Inventory (BDI) was used to determine depressive symptoms of the participants. 
BDI scores of 17 or higher were categorized as depressive for logistic regression analysis. Student's t-test and linear regression were used for continuous data analysis.\n\n\nRESULTS\nOut of all participants, 26.2% had a BDI score of 17 or higher. The prevalence of depressive symptoms increased to 32.1% among older students, 34.7% among students with low socioeconomic status, 31.2% among seniors, and 62.9% among students with poor school performance. The odds ratio of depressive symptoms was 1.84 (95% confidence interval [CI], 1.03-3.28) in students with low socioeconomic status and 7.34 (95% CI, 3.36-16.1) in students with poor school performance in the multivariate logistic model. The participants identified several problem areas: lack of social activities and shortage of facilities on the campus (69.0%), poor quality of the educational system (54.8%), economic problems (49.3%), disappointment with the university (43.2%), and friendship problems (25.9%).\n\n\nCONCLUSIONS\nConsidering the high frequency of depressive symptoms among Turkish university students, a student counseling service offering mental health assistance is necessary. This service should especially find ways to reach out to poor students and students with poor school performance."} {"_id": "5754524110610b04cfe12d42433d92d72397a19a", "title": "Momentum Investment Strategies of Mutual Funds, Performance Persistence, and Survivorship Bias", "text": "We show that the persistent use of momentum investment strategies by mutual funds has important implications for the performance persistence and survivorship bias controversies. Using mutual fund portfolio holdings from a database free of survivorship bias, we find that the best performing funds during one year are the best performers during the following year, with the exception of 1981, 1983, 1988, and 1989. This pattern corresponds exactly to the pattern exhibited by the momentum effect in stock returns, first documented by Jegadeesh and Titman (1993) and recently studied by Chan, Jegadeesh, and Lakonishok (1996). Our evidence points not only to the momentum effect in stock returns, but to the persistent, active use of momentum strategies by mutual funds as the reasons for performance persistence. Moreover, essentially no persistence remains after controlling for the one-year momentum effect in stock returns. We also explain why most recent studies have found that survivorship bias is a relatively small concern. Funds that were the best performers during one year are the worst performers during the following year whenever the momentum effect in stock returns is absent, and these funds tend to disappear with a frequency not appreciably lower than that of consistently poor performers. Therefore, the pool of non-surviving funds is more representative of the cross-section of all funds than previously thought. Specifically, we find a difference of only 20 basis points per year in risk-adjusted pre-expense returns between the average fund and the average surviving fund."} {"_id": "d1d9acd4a55c9d742a8b6736928711c3cfbe6526", "title": "Local Feature Based Multiple Object Instance Identification Using Scale and Rotation Invariant Implicit Shape Model", "text": "In this paper, we propose a Scale and Rotation Invariant Implicit Shape Model (SRIISM), and develop a local feature matching based system using the model to accurately locate and identify large numbers of object instances in an image. 
Due to repeated instances and cluttered background, conventional methods for multiple object instance identification suffer from poor identification results. In the proposed SRIISM, we model the joint distribution of object centers, scale, and orientation computed from local feature matches in Hough voting, which is not only invariant to scale changes and rotation of objects, but also robust to false feature matches. In the multiple object instance identification system using SRIISM, we apply a fast 4D bin search method in Hough space with complexity O(n), where n is the number of feature matches, in order to segment and locate each instance. Furthermore, we apply maximum likelihood estimation (MLE) for accurate object pose detection. In the evaluation, we created datasets simulating various industrial applications such as pick-and-place and inventory management. Experiment results on the datasets show that our method outperforms conventional methods in both accuracy (5%-30% gain) and speed (2x speed up)."} {"_id": "1cf74b1f4fbd05250701c86a45044534a66c7f5e", "title": "Robust Object Detection with Interleaved Categorization and Segmentation", "text": "This paper presents a novel method for detecting and localizing objects of a visual category in cluttered real-world scenes. Our approach considers object categorization and figure-ground segmentation as two interleaved processes that closely collaborate towards a common goal. As shown in our work, the tight coupling between those two processes allows them to benefit from each other and improve the combined performance. The core part of our approach is a highly flexible learned representation for object shape that can combine the information observed on different training examples in a probabilistic extension of the Generalized Hough Transform. The resulting approach can detect categorical objects in novel images and automatically infer a probabilistic segmentation from the recognition result. This segmentation is then in turn used to again improve recognition by allowing the system to focus its efforts on object pixels and to discard misleading influences from the background. Moreover, the information from where in the image a hypothesis draws its support is employed in an MDL based hypothesis verification stage to resolve ambiguities between overlapping hypotheses and factor out the effects of partial occlusion. An extensive evaluation on several large data sets shows that the proposed system is applicable to a range of different object categories, including both rigid and articulated objects. In addition, its flexible representation allows it to achieve competitive object detection performance already from training sets that are between one and two orders of magnitude smaller than those used in comparable systems."} {"_id": "1fe6afba10f5e1168d63b7eca0483f9c57337d57", "title": "Real-time object detection and localization with SIFT-based clustering", "text": "Keywords: Pick-and-place applications; Machine vision for industrial applications; SIFT. This paper presents an innovative approach for detecting and localizing duplicate objects in pick-and-place applications under extreme conditions of occlusion, where standard appearance-based approaches are likely to be ineffective. The approach exploits SIFT keypoint extraction and mean shift clustering to partition the correspondences between the object model and the image onto different potential object instances with real-time performance. 
Then, the hypotheses of the object shape are validated by a projection with a fast Euclidean transform of some delimiting points onto the current image. Moreover, in order to improve the detection in the case of reflective or transparent objects, multiple object models (of both the same and different faces of the object) are used and fused together. Many measures of efficacy and efficiency are provided on random disposals of heavily-occluded objects, with a specific focus on real-time processing. Experimental results on different and challenging kinds of objects are reported. Information technologies have in recent decades become a fundamental aid in automating everyday life and industrial processes. Among the many different disciplines contributing to this process, machine vision and pattern recognition have been widely used for industrial applications and especially for robot vision. A typical need is to automate the pick-and-place process of picking up objects, possibly performing some tasks, and then placing them down on a different location. Most pick-and-place systems are basically composed of robotic systems and sensors. The sensors are in charge of driving the robot arms to the right 3D location and possibly orientation of the next object to be picked up, according to the robot's degrees of freedom. Object picking can be very complicated if the scene is not well structured and constrained. The automation of object picking by using cameras, however, requires detecting and localizing objects in the scene; these are crucial tasks for several other computer vision applications, such as image/video retrieval [1,2], or automatic robot navigation [3]. This paper describes a new complete approach for pick-and-place processes with the following challenging requirements: 1. Different types of objects: the approach should work with every type of object of different dimension and complexity, with reflective surfaces or semi-transparent parts, such as in the case of pharmaceutical and cosmetic objects, often reflective or included in transparent flowpacks; 2. \u2026"} {"_id": "0f04a0b658f00f329687d8ba94d9fca25269b4b7", "title": "Extensibility, Safety and Performance in the SPIN Operating System", "text": "This paper describes the motivation, architecture and performance of SPIN, an extensible operating system. SPIN provides an extension infrastructure together with a core set of extensible services that allow applications to safely change the operating system's interface and implementation. Extensions allow an application to specialize the underlying operating system in order to achieve a particular level of performance and functionality. SPIN uses language and link-time mechanisms to inexpensively export fine-grained interfaces to operating system services. Extensions are written in a type-safe language and are dynamically linked into the operating system kernel. This approach offers extensions rapid access to system services while protecting the operating system code executing within the kernel address space. SPIN and its extensions are written in Modula-3 and run on DEC Alpha workstations."} {"_id": "00bbfde6af97ce5efcf86b3401d265d42a95603d", "title": "Feature hashing for large scale multitask learning", "text": "Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. 
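(Editorial aside on the feature-hashing abstract above: a minimal Python sketch of the hashing trick it builds on, assuming a signed hash so that collisions cancel in expectation; the md5-based hash and all names here are illustrative choices, not the paper's implementation.)

```python
import hashlib

def hashed_features(tokens, dim=2**10):
    """Map sparse tokens into a fixed-size vector via the hashing trick.

    Each token is hashed to a bucket in [0, dim); a second bit of the same
    hash chooses a sign, so colliding tokens tend to cancel in expectation.
    """
    vec = [0.0] * dim
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        idx = h % dim                                # bucket index
        sign = 1.0 if (h >> 64) & 1 == 0 else -1.0  # sign bit
        vec[idx] += sign
    return vec

# Multitask use case: prefixing tokens with a task id lets many tasks share
# one fixed-size parameter vector.
print(sum(abs(v) for v in hashed_features(["task1:word", "task1:other"])))
```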
In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case --- multitask learning with hundreds of thousands of tasks."} {"_id": "1805f1ba9bc1ae4f684ac04d3118cf6043587dfc", "title": "A new approach to unwrap a 3-D fingerprint to a 2-D rolled equivalent fingerprint", "text": "For many years, fingerprints have been captured by pressing a finger against a paper or hard surface. This touch-based fingerprint acquisition introduces some problems such as distortions and deformations in the acquired images, which arise due to the contact of the fingerprint surface with the sensor platen, and degrade the recognition performance. A new touch-less fingerprint technology has been recently introduced to the market, which can address the problems with the contact-based fingerprint systems. In this paper, we propose a new algorithm for unwrapping the acquired 3-D scan of the subject's finger into a 2-D rolled equivalent image. Therefore, the resulting image can be matched with the conventional 2-D scans; it also can be used for matching unwrapped 3-D fingerprints among themselves with the 2-D fingerprint matching algorithms. The algorithm is based on curvature analysis of the 3-D surface. The quality of the resulting image is evaluated and analyzed using NIST fingerprint image software."} {"_id": "73b911968bff8e855c22c7fcbd5f17d1c95456f5", "title": "Functional Search-based Testing from State Machines", "text": "The application of metaheuristic search techniques in test data generation has been extensively investigated in recent years. Most studies, however, have concentrated on the application of such techniques in structural testing. The use of search-based techniques in functional testing is less frequent, the main cause being the implicit nature of the specification. This paper investigates the use of search-based techniques for functional testing, having the specification in the form of a state machine. Its purpose is to generate input data for chosen paths in a state machine, so that the parameter values provided to the methods satisfy the corresponding guards and trigger the desired transitions. A general form of a fitness function for an individual path is presented and this approach is empirically evaluated using three search techniques: simulated annealing, genetic algorithms and particle swarm optimization."} {"_id": "d6c1438b62796662a099d653da6235ecefdf07ed", "title": "Ensemble Pruning Using Reinforcement Learning", "text": "Multiple Classifier systems have been developed in order to improve classification accuracy using methodologies for effective classifier combination. Classical approaches use heuristics, statistical tests, or a meta-learning level in order to find out the optimal combination function. We study this problem from a Reinforcement Learning perspective. In our modeling, an agent tries to learn the best policy for selecting classifiers by exploring a state space and considering a future cumulative reward from the environment. 
We evaluate our approach by comparing with state-of-the-art combination methods and obtain very promising results."} {"_id": "0c56c03a65a9c38561b2d6fd01e55dc80df357c3", "title": "Improved ID3 algorithm", "text": "As the classical decision tree classification algorithm, ID3 is known for its high classification speed, strong learning ability, and easy construction. When used for classification, however, it tends to choose attributes with many values, which limits its practicality. To solve this problem, this paper proposes a decision tree algorithm based on attribute importance. The improved algorithm uses attribute importance to increase the information gain of attributes with fewer values, and ID3 is compared with the improved ID3 by an example. The experimental analysis of the data shows that the improved ID3 algorithm can get more reasonable and more effective rules."} {"_id": "adb705bd2b16d1f839d803ae5a94dc555f4e61ff", "title": "Convolutional Neural Network Information Fusion based on Dempster-Shafer Theory for Urban Scene Understanding", "text": "Dempster-Shafer theory provides a sensor fusion framework that autonomously accounts for obstacle occlusion in dynamic, urban environments. However, to discern static and moving obstacles, the Dempster-Shafer approach requires manual tuning of parameters dependent on the situation and sensor types. The proposed methodology utilizes a deep fully convolutional neural network to improve the robust performance of the information fusion algorithm in distinguishing static and moving obstacles from navigable space. The image-like spatial structure of probabilistic occupancy allows a semantic segmentation framework to discern classes for individual grid cells. A subset of the KITTI LIDAR tracking dataset in combination with semantic map data was used for the information fusion task. The probabilistic occupancy grid output of the Dempster-Shafer information fusion algorithm was provided as input to the neural network. The network then learned an offset from the original DST result to improve semantic labeling performance. The proposed framework outperformed the baseline approach in the mean intersection over union metric reaching 0.546 and 0.531 in the validation and test sets respectively. However, little improvement was achieved in discerning moving and static cells due to the limited dataset size. To improve model performance in future work, the dataset will be expanded to facilitate more effective learning, and temporal data will be fed through individual convolutional networks prior to being merged in channels as input to the main network."} {"_id": "b8811144fb24a25335bf30dedfb70930d2f67055", "title": "MARVEL: A Large-Scale Image Dataset for Maritime Vessels", "text": ""} {"_id": "d96c02619add7c6a98ceb245758b10440dc3d5ad", "title": "Baseline Mechanisms for Enterprise Governance of IT in SMEs", "text": "Information Technology (IT) has become fundamental for most organizations since it is vital to their sustainability, development and success. This pervasive use led organizations to a critical dependency on IT. Despite the benefits, it exposes organizations to several risks. Hence, a significant focus on Enterprise Governance of IT (EGIT) is required. EGIT involves the definition and implementation of processes, structures and relational mechanisms to support the business/IT alignment and the creation of business value from IT investments. 
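(Editorial aside on the improved-ID3 abstract above: a minimal sketch of the standard entropy and information-gain computation that ID3 maximises. The attribute-importance reweighting itself is not specified in the abstract, so it is only noted in a comment; the toy data and names are illustrative.)

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label="label"):
    """Gain of splitting `rows` (list of dicts) on attribute `attr`."""
    base = entropy([r[label] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

rows = [
    {"outlook": "sunny", "windy": "no", "label": "play"},
    {"outlook": "sunny", "windy": "yes", "label": "stay"},
    {"outlook": "rain", "windy": "no", "label": "play"},
    {"outlook": "rain", "windy": "yes", "label": "stay"},
]
# Plain ID3 picks argmax gain; the improved variant would reweight the gain
# by an attribute-importance factor to avoid favouring many-valued attributes.
print(information_gain(rows, "windy"))  # 1.0: windy perfectly splits labels
```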
These mechanisms act as a means to direct and operationalize IT-related decision-making. However, identifying the appropriate mechanisms is a complex task since there are internal and external contingency factors that influence the design and implementation of EGIT. Small and Medium Enterprises (SMEs) are considered key elements to promote economic growth, job creation, social integration and innovation. The use of IT in these organizations can have severe consequences for survival and growth in highly competitive markets, becoming critical in this globalization era. Several studies were developed to investigate and identify EGIT mechanisms in different contingencies but very few focused on the organization-size criterion. However, SMEs have particular characteristics that require further investigation. Therefore, our goal in this research is to evaluate EGIT mechanisms in an SME context and identify a baseline of mechanisms for SMEs using semi-structured interviews with IT experts with experience in SMEs. The article ends by presenting contributions, limitations and future work."} {"_id": "c720bd17e837415de3e602ee844288546eb576fa", "title": "Operability of Folded Microstrip Patch-Type Tag Antenna in the UHF RFID Bands Within 865-928 MHz", "text": "The emerging use of passive radio frequency identification (RFID) systems at ultra-high-frequency (UHF) spectrum requires application specific tag antenna designs for challenging applications. Many objects containing conductive material need novel tag antenna designs for reliable identification. The operability of folded microstrip patch-type tag antenna for objects containing conductive material is analyzed and tested within the UHF RFID bands used at the moment mainly in Europe and in North and South America. First, the operability of the tag antenna design and the effects of conductive material are modeled with simulations based on the finite element method (FEM). The performance of the tag antenna design affixed to a package containing metallic foil is verified with read range measurements. The results show that the antenna design is operable in both of the UHF RFID bands within 865-928 MHz"} {"_id": "8e6d963a9c1115de61e7672c6d4c54c781b3e54d", "title": "Lessons from the journey: a query log analysis of within-session learning", "text": "The Internet is the largest source of information in the world. Search engines help people navigate the huge space of available data in order to acquire new skills and knowledge. In this paper, we present an in-depth analysis of sessions in which people explicitly search for new knowledge on the Web based on the log files of a popular search engine. We investigate within-session and cross-session developments of expertise, focusing on how the language and search behavior of a user on a topic evolves over time. In this way, we identify those sessions and page visits that appear to significantly boost the learning process. Our experiments demonstrate a strong connection between clicks and several metrics related to expertise. Based on models of the user and their specific context, we present a method capable of automatically predicting, with good accuracy, which clicks will lead to enhanced learning. 
Our findings provide insight into how search engines might better help users learn as they search."} {"_id": "afdb7a95e28c0706bdb5350a4cf46cce94e3de97", "title": "Dual-Band Multi-Pole Directional Filter for Microwave Multiplexing Applications", "text": "A novel microstrip directional filter for multiplexing applications is presented. This device uses composite right-left-handed transmission lines and resonators to achieve a dual-band frequency response. In addition, by cascading two or more stages using dual-frequency immittance inverters, multi-pole configurations can be obtained. Simulation and experimental results are presented with good agreement."} {"_id": "919f3ad43b48142ca52c979a956f0799e11d0f6c", "title": "Multi-domain spoken language understanding with transfer learning", "text": "This paper addresses the problem of multi-domain spoken language understanding (SLU) where domain detection and domain-dependent semantic tagging problems are combined. We present a transfer learning approach to the multi-domain SLU problem in which multiple domain-specific data sources can be incorporated. To implement multi-domain SLU with transfer learning, we introduce a triangular-chain structured model. This model effectively learns multiple domains in parallel, and allows use of domain-independent patterns among domains to create a better model for the target domain. We demonstrate that the proposed method outperforms baseline models on dialog data for multi-domain SLU problems."} {"_id": "2b028c2cc8864ead78775d3a1c0efabe202f86c3", "title": "Can Computers Overcome Humans? Consciousness Interaction and its Implications", "text": "Can computers overcome human capabilities? This is a paradoxical and controversial question, particularly because there are many hidden assumptions. This article focuses on that issue, putting in evidence some misconceptions related to future generations of machines and the understanding of the brain. It will discuss to what extent computers might reach human capabilities, and how this would be possible only if the computer were a conscious machine. However, it will be shown that if the computer were conscious, an interference process due to consciousness would affect the information processing of the system. Therefore, it might be possible to build conscious machines that overcome human capabilities, but these would have limitations similar to those of humans. In other words, trying to overcome human capabilities with computers implies the paradoxical conclusion that a computer will never overcome human capabilities at all, or, if it does, it should no longer be considered a computer."} {"_id": "9c37e8d5bb8fcd6d5b24edb18e5ce422b459e856", "title": "High-Speed Parallel Decimal Multiplication with Redundant Internal Encodings", "text": "Decimal multiplication is one of the most important decimal arithmetic operations, with a growing demand in the area of commercial, financial, and scientific computing. In this paper, we propose a parallel decimal multiplication algorithm with three components, which are a partial product generation, a partial product reduction, and a final digit-set conversion. First, a redundant number system is applied to recode not only the multiplier, but also multiples of the multiplicand in signed-digit (SD) numbers. Furthermore, we present a multioperand SD addition algorithm to reduce the partial product array. 
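(Editorial aside on the decimal-multiplier abstract above: a toy sketch of one possible decimal signed-digit recoding. The paper's exact digit set and recoding rules are not given in the abstract, so this is only an assumed, value-preserving variant that keeps every recoded digit small.)

```python
def sd_recode(digits):
    """Recode decimal digits (least significant first) into signed digits.

    Each step writes d + carry_in as d' + 10*carry_out with d' in [-4, 5],
    so the numeric value is preserved once the final carry is appended.
    """
    out, carry = [], 0
    for d in digits:
        t = d + carry
        if t > 5:
            out.append(t - 10)
            carry = 1
        else:
            out.append(t)
            carry = 0
    out.append(carry)
    return out

def sd_value(digits):
    return sum(d * 10**i for i, d in enumerate(digits))

n = [9, 7, 8]  # 879, least significant digit first
assert sd_value(sd_recode(n)) == sd_value(n)
# Small recoded digits keep the required multiples of the multiplicand cheap.
print(sd_recode(n))  # [-1, -2, -1, 1]: -1 - 20 - 100 + 1000 = 879
```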
Finally, a digit-set conversion algorithm with a hybrid prefix network to decrease the number of the logic gates on the critical path is discussed. An analysis of the timing delay and an HDL model synthesized under 90 nm technology show that by considering the tradeoff of designs among three components, the overall delay of the proposed 16 \u00d7 16-digit multiplier takes about 11 percent less timing delay with 2 percent less area compared to the current fastest design."} {"_id": "a9001afcff3aee4e0e0b13289d06e3f3c403eafd", "title": "Survey of Large-Scale MIMO Systems", "text": "The escalating teletraffic growth imposed by the proliferation of smartphones and tablet computers outstrips the capacity increase of wireless communications networks. Furthermore, it results in substantially increased carbon dioxide emissions. As a powerful countermeasure, in the case of full-rank channel matrices, MIMO techniques are potentially capable of linearly increasing the capacity or decreasing the transmit power upon commensurately increasing the number of antennas. Hence, the recent concept of large-scale MIMO (LS-MIMO) systems has attracted substantial research attention and been regarded as a promising technique for next-generation wireless communications networks. Therefore, this paper surveys the state of the art of LS-MIMO systems. First, we discuss the measurement and modeling of LS-MIMO channels. Then, some typical application scenarios are classified and analyzed. Key techniques of both the physical and network layers are also detailed. Finally, we conclude with a range of challenges and future research topics."} {"_id": "31bd199555b926c6f985d0bbf4c71f5c46b5a078", "title": "Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more", "text": "Physical database design : the database professional's guide to exploiting indexes, views, storage, and more / Sam Lightstone, Toby Teorey, and Tom Nadeau. p. cm. (The Morgan Kaufmann series in database management systems). Includes bibliographical references and index."} {"_id": "1dba04ee74e7aa5db2b3264ce473e215c332251f", "title": "Area and length minimizing flows for shape segmentation", "text": "A number of active contour models have been proposed that unify the curve evolution framework with classical energy minimization techniques for segmentation, such as snakes. The essential idea is to evolve a curve (in two dimensions) or a surface (in three dimensions) under constraints from image forces so that it clings to features of interest in an intensity image. 
The evolution equation has been derived from first principles as the gradient flow that minimizes a modified length functional, tailored to features such as edges. However, because the flow may be slow to converge in practice, a constant (hyperbolic) term is added to keep the curve/surface moving in the desired direction. We derive a modification of this term based on the gradient flow derived from a weighted area functional, with an image-dependent weighting factor. When combined with the earlier modified length gradient flow, we obtain a partial differential equation (PDE) that offers a number of advantages, as illustrated by several examples of shape segmentation on medical images. In many cases the weighted area flow may be used on its own, with significant computational savings."} {"_id": "a63bcdb0c3cd459d59700160bfa9d600f3d8aebe", "title": "Research Advances in Cloud Computing", "text": "Serverless computing has emerged as a new compelling paradigm for the deployment of applications and services. It represents an evolution of cloud programming models, abstractions, and platforms, and is a testament to the maturity and wide adoption of cloud technologies. In this chapter, we survey existing serverless platforms from industry, academia, and open-source projects, identify key characteristics and use cases, and describe technical challenges and open problems."} {"_id": "84b8ffa0e39a4149bd62122a84c081a7ad06b056", "title": "Primed for Violence : The Role of Gender Inequality in Predicting Internal Conflict", "text": "We know, most notably through Ted Gurr\u2019s research, that ethnic discrimination can lead to ethnopolitical rebellion\u2013intrastate conflict. I seek to discover what impact, if any, gender inequality has on intrastate conflict. Although democratic peace scholars and others highlight the role of peaceful domestic behavior in predicting state behavior, many scholars have argued that a domestic environment of inequality and violence (structural and cultural violence) results in a greater likelihood of violence at the state and the international level. This project contributes to this line of inquiry and further tests the grievance theory of intrastate conflict by examining the norms of violence that facilitate a call to arms. And in many ways, I provide an alternative explanation for the significance of some of the typical economic measures (the greed theory) based on the link between discrimination, inequality, and violence. I test whether states characterized by higher levels of gender inequality are more likely to experience intrastate conflict. 
Ultimately, the basic link between gender inequality and intrastate conflict is confirmed: states characterized by gender inequality are more likely to experience intrastate conflict, 1960\u20132001."} {"_id": "53b3fd527da1cc4367319da03b517636e33528de", "title": "Sales Taxes and Internet Commerce", "text": "We estimate the sensitivity of Internet retail purchasing to sales taxes using data from the eBay marketplace. Our first approach exploits the fact that seller locations are revealed only after buyers have expressed interest in an item by clicking on its listing. We use millions of location \u201csurprises\u201d to estimate price elasticities with respect to the charged sales tax. We then use aggregated data to estimate cross-state substitution parameters, and substitution between offline and online purchases, relying both on cross-sectional variation in state and local sales taxes, and on changes in these rates over time. We find substantial sensitivity to sales taxes. Using our item-level approach, we find an average price elasticity of around -2 for interested buyers. Using our aggregate approach, we find that a one percentage point increase in a state\u2019s sales tax increases online purchases by state residents by just under two percent, but decreases their online purchases from home-state retailers by 3-4 percent. (JEL: D12, H20, H71, L81)"} {"_id": "9d2cdcd0009490d45218b4e9c42f087b27bc9fcb", "title": "Internet Web servers: workload characterization and performance implications", "text": "This paper presents a workload characterization study for Internet Web servers. Six different data sets are used in the study: three from academic environments, two from scientific research organizations, and one from a commercial Internet provider. These data sets represent three different orders of magnitude in server activity, and two different orders of magnitude in time duration, ranging from one week of activity to one year. The workload characterization focuses on the document type distribution, the document size distribution, the document referencing behavior, and the geographic distribution of server requests. Throughout the study, emphasis is placed on finding workload characteristics that are common to all the data sets studied. Ten such characteristics are identified. The paper concludes with a discussion of caching and performance issues, using the observed workload characteristics to suggest performance enhancements that seem promising for Internet Web servers."} {"_id": "13cf6e3658598e92a24feb439e532894e4aa68e3", "title": "TIARA: a visual exploratory text analytic system", "text": "In this paper, we present a novel exploratory visual analytic system called TIARA (Text Insight via Automated Responsive Analytics), which combines text analytics and interactive visualization to help users explore and analyze large collections of text. 
Given a collection of documents, TIARA first uses topic analysis techniques to summarize the documents into a set of topics, each of which is represented by a set of keywords. In addition to extracting topics, TIARA derives time-sensitive keywords to depict the content evolution of each topic over time. To help users understand the topic-based summarization results, TIARA employs several interactive text visualization techniques to explain the summarization results and seamlessly link such results to the original text. We have applied TIARA to several real-world applications, including email summarization and patient record analysis. To measure the effectiveness of TIARA, we have conducted several experiments. Our experimental results and initial user feedback suggest that TIARA is effective in aiding users in their exploratory text analytic tasks."} {"_id": "1b255cda9c92a5fbaba319aeb4b4ec532693c2a4", "title": "Dynamic topic models", "text": "A family of probabilistic time series models is developed to analyze the time evolution of topics in large document collections. The approach is to use state space models on the natural parameters of the multinomial distributions that represent the topics. Variational approximations based on Kalman filters and nonparametric wavelet regression are developed to carry out approximate posterior inference over the latent topics. In addition to giving quantitative, predictive models of a sequential corpus, dynamic topic models provide a qualitative window into the contents of a large document collection. The models are demonstrated by analyzing the OCR'ed archives of the journal Science from 1880 through 2000."} {"_id": "31b52afe4db00f97a703d534803165bbd55359c2", "title": "EventRiver: Visually Exploring Text Collections with Temporal References", "text": "Many text collections with temporal references, such as news corpora and weblogs, are generated to report and discuss real life events. Thus, event-related tasks, such as detecting real life events that drive the generation of the text documents, tracking event evolutions, and investigating reports and commentaries about events of interest, are important when exploring such text collections. To incorporate and leverage human efforts in conducting such tasks, we propose a novel visual analytics approach named EventRiver. EventRiver integrates event-based automated text analysis and visualization to reveal the events motivating the text generation and the long term stories they construct. On the visualization, users can interactively conduct tasks such as event browsing, tracking, association, and investigation. A working prototype of EventRiver has been implemented for exploring news corpora. A set of case studies, experiments, and a preliminary user test have been conducted to evaluate its effectiveness and efficiency."} {"_id": "0c1960cbedb9be693e96cc5c3c8889dc5879332c", "title": "Evolutionary hierarchical dirichlet processes for multiple correlated time-varying corpora", "text": "Mining cluster evolution from multiple correlated time-varying text corpora is important in exploratory text analytics. In this paper, we propose an approach called evolutionary hierarchical Dirichlet processes (EvoHDP) to discover interesting cluster evolution patterns from such text data. We formulate the EvoHDP as a series of hierarchical Dirichlet processes (HDP) by adding time dependencies to the adjacent epochs, and propose a cascaded Gibbs sampling scheme to infer the model. 
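(Editorial aside on the topic-modelling abstracts above (TIARA, dynamic topic models, EvoHDP): a minimal sketch of the shared "topics as ranked keywords" step, using scikit-learn's plain LDA as a stand-in for the papers' own models; the library choice and toy corpus are assumptions, not anything from the papers.)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock fund returns momentum market",
    "fund momentum returns persistence market",
    "fingerprint image minutiae ridge sensor",
    "fingerprint ridge image enhancement sensor",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)                     # term-count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Summarise each topic by its highest-weight terms, as TIARA-style systems do.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:3]
    print(f"topic {k}:", [terms[i] for i in top])
```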
This approach can discover different evolving patterns of clusters, including emergence, disappearance, evolution within a corpus and across different corpora. Experiments over synthetic and real-world multiple correlated time-varying data sets illustrate the effectiveness of EvoHDP on discovering cluster evolution patterns."} {"_id": "1441c41d266ce48a2041bd4da0468eec961ddf4f", "title": "The Word Tree, an Interactive Visual Concordance", "text": "We introduce the Word Tree, a new visualization and information-retrieval technique aimed at text documents. A Word Tree is a graphical version of the traditional \"keyword-in-context\" method, and enables rapid querying and exploration of bodies of text. In this paper we describe the design of the technique, along with some of the technical issues that arise in its implementation. In addition, we discuss the results of several months of public deployment of word trees on Many Eyes, which provides a window onto the ways in which users obtain value from the visualization."} {"_id": "84badeea160675906c0d3f2a30e286cf60d79d41", "title": "Total variation regularization of local-global optical flow", "text": "More data fidelity terms in variational optical flow methods improve the estimation's robustness. A robust and anisotropic smoother enhances the specific fill-in process. This work presents a combined local-global (CLG) approach with total variation regularization. The combination of bilateral filtering and anisotropic (image driven) regularization is used to control the propagation phenomena. The resulting method, CLG-TV, is able to compute larger displacements in a reasonable time. The numerical scheme is highly parallelizable and runs in real-time on current generation graphics processing units."} {"_id": "73b671a99c6404ebcf1bf664cb9e3ab539570d4a", "title": "Learning intermediate object affordances: Towards the development of a tool concept", "text": "Inspired by the extraordinary ability of young infants to learn how to grasp and manipulate objects, many works in robotics have proposed developmental approaches to allow robots to learn the effects of their own motor actions on objects, i.e., the objects' affordances. While holding an object, infants also promote its contact with other objects, resulting in object-object interactions that may afford effects not possible otherwise. Depending on the characteristics of both the held object (intermediate) and the acted object (primary), systematic outcomes may occur, leading to the emergence of a primitive concept of tool. In this paper we describe experiments with a humanoid robot exploring object-object interactions in a playground scenario and learning a probabilistic causal model of the effects of actions as functions of the characteristics of both objects. The model directly links the objects' 2D shape visual cues to the effects of actions. Because no object recognition skills are required, generalization to novel objects is possible by exploiting the correlations between the shape descriptors. We show experiments where an affordance model is learned in a simulated environment, and is then used on the real robotic platform, showing generalization abilities in effect prediction. 
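(Editorial aside on the Word Tree abstract above: a minimal keyword-in-context sketch, the raw material a word tree aggregates into branches; the function name and toy sentence are illustrative.)

```python
def kwic(text, keyword, width=3):
    """Keyword-in-context lines: each hit with `width` words of context."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if w.lower() == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            hits.append(f"{left} [{w}] {right}")
    return hits

text = ("the cat sat on the mat and the cat ran to the door "
        "while the dog sat on the rug")
for line in kwic(text, "the"):
    print(line)
# A word tree would merge the shared right-hand contexts into branches.
```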
We argue that, despite the fact that during exploration no concept of tool is given to the system, this very concept may emerge from the knowledge that intermediate objects lead to significant effects when acting on other objects."} {"_id": "2d662a458148afa4a10add1e6f9815969bd1adc6", "title": "Stereovision-Based Object Segmentation for Automotive Applications", "text": "Obstacle detection and classification in a complex urban area are highly demanding, but desirable for pedestrian protection, stop & go, and enhanced parking aids. The most difficult task for the system is to segment objects from varied and complicated background. In this paper, a novel position-based object segmentation method has been proposed to solve this problem. According to the method proposed, object segmentation is performed in two steps: in depth map (X-Z plane) and in layered images (X-Y planes). The stereovision technique is used to reconstruct image points and generate the depth map. Objects are detected in the depth map. Afterwards, the original edge image is separated into different layers based on the distance of detected objects. Segmentation performed in these layered images can be easier and more reliable. It has been proved that the proposed method offers robust detection of potential obstacles and accurate measurement of their location and size."} {"_id": "37b9b5a5eb63349a3e6f75d5c4c061d7dbc87f4e", "title": "Attacks on Copyright Marking Systems", "text": "In the last few years, a large number of schemes have been proposed for hiding copyright marks and other information in digital pictures, video, audio and other multimedia objects. We describe some contenders that have appeared in the research literature and in the field; we then present a number of attacks that enable the information hidden by them to be removed or otherwise rendered unusable. 1 Information Hiding Applications The last few years have seen rapidly growing interest in ways to hide information in other information. A number of factors contributed to this. Fears that copyright would be eroded by the ease with which digital media could be copied led people to study ways of embedding hidden copyright marks and serial numbers in audio and video; concern that privacy would be eroded led to work on electronic cash, anonymous remailers, digital elections and techniques for making mobile computer users harder for third parties to trace; and there remain the traditional \u2018military\u2019 concerns about hiding one\u2019s own traffic while making it hard for the opponent to do likewise. The first international workshop on information hiding [2] brought these communities together and a number of hiding schemes were presented there; more have been presented elsewhere. We formed the view that useful progress in steganography and copyright marking might come from trying to attack all these first-generation schemes. In the related field of cryptology, progress was iterative: cryptographic algorithms were proposed, attacks on them were found, more algorithms were proposed, and so on. Eventually, theory emerged: fast correlation attacks on stream ciphers and differential and linear attacks on block ciphers now help us understand the strength of cryptographic algorithms in much more detail than before. Similarly, many cryptographic protocols were proposed and almost all the early candidates were broken, leading to concepts of protocol robustness and techniques for formal verification [6]. 
So in this paper, we first describe the copyright protection context in which most recent schemes have been developed; we then describe a selection of these schemes and present a number of attacks, which break most of them. We finally make some remarks on the meaning of robustness in the context of steganography in general and copyright marking in particular. 1.1 Copyright Protection Issues Digital recording media offer many new possibilities but their uptake has been hindered by widespread fears among intellectual property owners such as Hollywood and the rock music industry that their livelihoods would be threatened if users could make unlimited perfect copies of videos, music and multimedia works. One of the first copy protection mechanisms for digital media was the serial copy management system (SCMS) introduced by Sony and Philips for digital audio tapes in the eighties [31]. The idea was to allow consumers to make a digital audio tape of a CD they owned in order to use it (say) in their car, but not to make a tape of somebody else\u2019s tape; thus copies would be limited to first generation only. The implementation was to include a Boolean marker in the header of each audio object. Unfortunately this failed because the hardware produced by some manufacturers did not enforce it. More recently the Digital Video Disk, also known as Digital Versatile Disk (DVD), consortium called for proposals for a copyright marking scheme to enforce serial copy management. The idea is that the DVD players sold to consumers will allow unlimited copying of home videos and time-shifted viewing of TV programmes, but cannot easily be abused for commercial piracy [19, 44]. The proposed implementation is that videos will be unmarked, or marked \u2018never copy\u2019, or \u2018copy once only\u2019; compliant players would not record a video marked \u2018never copy\u2019 and when recording one marked \u2018copy once only\u2019 would change its mark to \u2018never copy\u2019. Commercially sold videos would be marked \u2018never copy\u2019, while TV broadcasts and similar material would be marked \u2018copy once only\u2019 and home videos would be unmarked. Electronic copyright management schemes have also been proposed by European projects such as Imprimatur and CITED [45, 66, 67], and American projects such as the one proposed by the Working Group on Intellectual Property Rights [69]."} {"_id": "9ebe574f95efdb5868a60764bb2f46e2783a00a6", "title": "DPIL@FIRE2016: Overview of the Shared task on Detecting Paraphrases in Indian language", "text": "This paper explains the overview of the shared task \"Detecting Paraphrases in Indian Languages\" (DPIL) conducted at FIRE 2016. Given a pair of sentences in the same language, participants are asked to detect the semantic equivalence between the sentences. The shared task is proposed for four Indian languages, namely Tamil, Malayalam, Hindi and Punjabi. 
The dataset created for the shared task has been made available online and it is the first open-source paraphrase detection corpus for Indian languages."} {"_id": "048dbd4acf1c34fc87bee4b67adf965e90acb38c", "title": "A comparison of trading algorithms based on machine learning classifiers : application on the S & P 500", "text": "In this paper we analyse whether different machine learning classifiers can be useful when it comes to financial data prediction. We first measure the accuracy and the volatility of our algorithm on the stocks included in the S&P 500 stock index. We then back-test it against the same benchmark for two different periods. Overall, our results show that an algorithm that trades just according to the predictions of the classifiers is not profitable even if the accuracy obtained by the algorithm is higher than the accuracy that should be obtained according to the random walk theory. However, it can boost the results obtained by other strategies, maintaining the same level of volatility. JEL classification: C4, C8, G14."} {"_id": "6bf703874706ba2b5cc1b6e49f9d1ae59ff1ce5f", "title": "Visual Perception of Parallel Coordinate Visualizations", "text": "Parallel coordinates is a visualization technique that provides an unbiased representation of high-dimensional data. The parallel configuration of axes treats data dimensions uniformly and is well suited for exploratory visualization. However, first-time users of parallel coordinate visualizations can find the representation confusing and difficult to understand. We used eye tracking to study how parallel coordinate visualizations are perceived, and compared the results to the optimal visual scan path required to complete the tasks. The results indicate that even first-time users quickly learn how to use parallel coordinate visualizations, pay attention to the correct task-specific areas in the visualization, and become rapidly proficient with it."} {"_id": "37e1fc37a3ee90f24d85ad6fd3e5c51d3f5ab4fd", "title": "Attentive Explanations: Justifying Decisions and Pointing to the Evidence", "text": "Deep models are the de facto standard in visual decision problems due to their impressive performance on a wide array of visual tasks. However, they are frequently seen as opaque and are unable to explain their decisions. In contrast, humans can justify their decisions with natural language and point to the evidence in the visual world which supports their decisions. We propose a method which incorporates a novel explanation attention mechanism; our model is trained using textual rationales, and infers latent attention to visually ground explanations. We collect two novel datasets in domains where it is interesting and challenging to explain decisions. First, we extend the visual question answering task to not only provide an answer but also visual and natural language explanations for the answer. Second, we focus on explaining human activities in a contemporary activity recognition dataset. We extensively evaluate our model, both on the justification and pointing tasks, by comparing it to prior models and ablations using both automatic and human evaluations."} {"_id": "6ccaddf9d418f32f0dc2be6140ee6902c7d7e5ab", "title": "A chaincode based scheme for fingerprint feature extraction", "text": "A feature extraction method using the chaincode representation of fingerprint ridge contours is presented for use by Automatic Fingerprint Identification Systems. 
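(Editorial aside on the chaincode abstract above: a minimal sketch of the classic 8-direction Freeman chain code on which such contour representations are based; the paper's own encoding details may differ.)

```python
# 8-connected Freeman chain code: direction 0 = East, counting counter-clockwise.
MOVES = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(contour):
    """Encode a pixel contour (list of (x, y), unit steps) as Freeman symbols."""
    return [MOVES.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(contour, contour[1:])]

contour = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # east, east, north, north
print(chain_code(contour))  # [0, 0, 2, 2]
```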
The representation allows efficient image quality enhancement and detection of fine minutiae feature points. For image enhancement a given fingerprint image is first binarized after a quick averaging to generate its chaincode representation. The direction field is estimated from a set of selected chaincodes. The original gray scale image is then enhanced using connected component analysis and a dynamic filtering scheme that takes advantage of the knowledge gained from the estimated direction flow of the contours. For feature extraction, the enhanced fingerprint image is carefully binarized using a local-global binarization algorithm for generating the chaincode representation. The minutiae are generated using a sophisticated ridge contour following procedure. Visual inspection of the experiment images shows that the method is very effective."} {"_id": "a2c145dbb30c942f136b2525a0132bf72d51b0c2", "title": "Model-Based Stabilisation of Deep Reinforcement Learning", "text": "Though successful in high-dimensional domains, deep reinforcement learning exhibits high sample complexity and suffers from stability issues as reported by researchers and practitioners in the field. These problems hinder the application of such algorithms in real-world and safety-critical scenarios. In this paper, we take steps towards stable and efficient reinforcement learning by following a model-based approach that is known to reduce agent-environment interactions. Namely, our method augments deep Q-networks (DQNs) with model predictions for transitions, rewards, and termination flags. Having the model at hand, we then conduct a rigorous theoretical study of our algorithm and show, for the first time, convergence to a stationary point. En route, we provide a counterexample showing that \u2019vanilla\u2019 DQNs can diverge, confirming practitioners\u2019 and researchers\u2019 experiences. Our proof is novel in its own right and can be extended to other forms of deep reinforcement learning. In particular, we believe exploiting the relation between reinforcement (with deep function approximators) and online learning can serve as a recipe for future proofs in the domain. Finally, we validate our theoretical results in 20 games from the Atari benchmark. Our results show that following the proposed model-based learning approach not only ensures convergence but leads to a reduction in sample complexity and superior performance. Introduction Model-free deep reinforcement learning methods have recently demonstrated impressive performance on a range of complex learning tasks (Hessel et al. 2018; Lillicrap et al. 2017; Jaderberg et al. 2017). Deep Q-Networks (DQNs) (Mnih et al. 2015), in particular, stand out as a versatile and effective tool for a wide range of applications. DQNs offer an end-to-end solution for many reinforcement learning problems by generalizing tabular Q-learning to high-dimensional input spaces. Unfortunately, DQNs still suffer from a number of important drawbacks that hinder wide-spread adoption. There is a general lack of theoretical understanding to guide the use of deep Q-learning algorithms. Combining non-linear function approximation with off-policy reinforcement learning is known to lead to possible divergence issues, even in very simple tasks (Boyan and Moore 1995). Despite improvements introduced by deep Q-networks (Mnih et al. 
2015), Q-learning approaches based on deep neural networks still exhibit occasional instability during the learning process, see for example (van Hasselt, Guez, and Silver 2016). In this paper, we aim to address the instability of current deep RL approaches by augmenting model-free value learning with a predictive model. We propose a hybrid model and value learning objective, which stabilises the learning process and reduces sample complexity when compared to model-free algorithms. We attribute this reduction to the richer feedback signal provided by our method compared to standard deep Q-networks. In particular, our model provides feedback at every time step based on the current prediction error, which in turn eliminates one source of sample inefficiency caused by sparse rewards. These conclusions are also in accordance with previous research conducted on linear function approximation in reinforcement learning (Parr et al. 2008; Sutton et al. 2012; Song et al. 2016). While linear function approximation results provide a motivation for model-based stabilisation, such theories fail to generalise to deep non-linear architectures (Song et al. 2016). To close this gap in the literature and demonstrate stability of our method, we prove convergence in the general deep RL setting with deep value function approximators. Theoretically analysing deep RL algorithms, however, is challenging due to nonlinear functional dependencies and objective non-convexities prohibiting convergence to globally optimal policies. As such, the best one would hope for is a stationary point (e.g., a first-order one) that guarantees vanishing gradients. Even understanding gradient behavior for deep networks in RL can be difficult due to exploration-based policies and replay memory considerations. To alleviate some of these problems, we map deep reinforcement learning to a mathematical framework explicitly designed for understanding the exploration/exploitation trade-off. Namely, we formalise the problem as an online learning game the agent plays against an adversary (i.e., the environment). Such a link allows us to study a more general problem combining notions of regret, reinforcement learning, and optimisation. Given such a link, we prove that for any $\epsilon > 0$, regret vanishes as $O(T^{1-2\epsilon} \ln T)$ with $T$ being the total number of rounds. This, in turn, guarantees convergence to a stationary point. 
The agent\u2019s action elicits from the environment a reward signal rt \u2208 R indicating instantaneous reward, a terminal flag ft \u2208 {0, 1} indicating a terminal event that restarts the environment, and a transition to a successor state St+1 \u2208 S. We assume that the sets S, A, and R are discrete. The reward rt is sampled from the conditional probability distribution P (rt|St,at). Similarly, ft \u223c P (F (ft|St,at) with P (F ) : S \u00d7 A \u00d7 {0, 1} \u2192 [0, 1], where a terminal event (ft = 1) restarts the environment according to some initial state distribution S0 \u223c P (S) 0 (S0). The state transition to a successor state is determined by a stochastic state transition function according to St+1 \u223c P (St+1|St,at). The agent\u2019s goal is to maximise future cumulative reward E P (S) 0 ,\u03c0,P (R),P (F ),P (S) [ \u221e \u2211 t=0 ( t \u220f t\u2032=0 (1\u2212 ft\u2032\u22121) ) \u03b3rt ] with respect to the policy \u03c0. An important quantity in RL are Q-values Q(St,at), which are defined as the expected future cumulative reward value when executing action at in state St and subsequently following policy \u03c0. Qvalues enable us to conveniently phrase the RL problem as max\u03c0 \u2211 S p (S) \u2211 a \u03c0(a|S)Q(S,a), where p(S) is the stationary state distribution obtained when executing \u03c0 in the environment (starting from S0 \u223c P (S) 0 (S0)). Deep Q-Networks: Value-based reinforcement learning approaches identify optimal Q-values directly using parametric function approximators Q\u03b8(S,a), where \u03b8 \u2208 \u0398 represents the parameters (Watkins 1989; Busoniu et al. 2010). Optimal Q-value estimates then correspond to an optimal policy \u03c0(a|S) = \u03b4a,argmaxa\u2032Q\u03b8(S,a\u2032). Deep Q-networks (Mnih et al. 2015) learn a deep neural network based Qvalue approximation by performing stochastic gradient descent on the following training objective: L(\u03b8) = ES,a,r,f,S\u2032 [( r + (1\u2212 f)\u03b3max a\u2032 Q\u03b8\u2212(S \u2032,a\u2032) \u2212Q\u03b8(S,a) )2] . (1) The expectation ranges over transitions S,a, r, f,S\u2032 sampled from an experience replay memory (S\u2032 denotes the state at the next time step). Use of this replay memory, together with the use of a separate target network Q\u03b8\u2212 (with different parameters \u03b8\u2212 \u2208 \u0398) for calculating the bootstrap values maxa\u2032 Q\u03b8\u2212(S \u2032,a\u2032), helps stabilise the learning process. Background: Online Learning In this paper, we employ a special form of regret minimisation games that we briefly review here. A regret minimisation game is a triple \u3008\u0398,F , T \u3009, where \u0398 is a non-empty decision set, F is the set of moves of the adversary which contains bounded convex functions from R to R and T is the total number of rounds. The game commences in rounds, where at round j = 1, . . . , T , the agent chooses a prediction \u03b8j \u2208 \u0398 and the adversary a loss function Lj \u2208 F . At the end of the round, the adversary reveals its choice and the agent suffers a loss Lj(\u03b8j). In this paper, we are concerned with the full-information case where the agent can observe the complete loss function Lj at the end of each round. 
The goal of the game is for the agent to make successive predictions to minimise cumulative regret defined as:"} {"_id": "dad06d4dba532b59bd5d56b6be27c7ee4f0b6f1c", "title": "Membrane Bioreactor (MBR) Technology for Wastewater Treatment and Reclamation: Membrane Fouling", "text": "The membrane bioreactor (MBR) has emerged as an efficient, compact technology for municipal and industrial wastewater treatment. The major drawback impeding wider application of MBRs is membrane fouling, which significantly reduces membrane performance and lifespan, resulting in a significant increase in maintenance and operating costs. Finding sustainable membrane-fouling mitigation strategies in MBRs has been one of the main concerns over the last two decades. This paper provides an overview of membrane fouling and of studies conducted to identify mitigation strategies for fouling in MBRs. Classes of foulants, including biofoulants, organic foulants and inorganic foulants, as well as factors influencing membrane fouling, are outlined. Recent research on fouling control, including the addition of coagulants and adsorbents, the combination of aerobic granulation with MBRs, the introduction of granular materials with air scouring in the MBR tank, and quorum quenching, is presented. The addition of coagulants and adsorbents yields a significant reduction in membrane fouling, but further research is needed to establish optimum dosages of the various coagulants/adsorbents. Similarly, the integration of aerobic granulation with MBRs, which targets biofoulants and organic foulants, shows outstanding filtration performance, a significant reduction in fouling rate, and excellent nutrient removal. However, further research is needed on the enhancement of long-term granule integrity. Quorum quenching also offers strong potential for fouling control, but pilot-scale testing is required to explore the feasibility of full-scale application."} {"_id": "123ae35aa7d6838c817072032ce5615bb891652d", "title": "BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1", "text": "We introduce BinaryNet, a method which trains DNNs with binary weights and activations when computing the parameters' gradients. We show that it is possible to train a Multi-Layer Perceptron (MLP) on MNIST and ConvNets on CIFAR-10 and SVHN with BinaryNet and achieve nearly state-of-the-art results. At run time, BinaryNet drastically reduces memory usage and replaces most multiplications by 1-bit exclusive-not-or (XNOR) operations, which might have a big impact on both general-purpose and dedicated deep learning hardware. We wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST MLP 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for BinaryNet is available."}
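The weight-binarization half of the BinaryNet idea can be sketched with the commonly used straight-through estimator: the forward pass uses sign(w) ∈ {−1, +1}, while the backward pass lets gradients flow to retained real-valued weights. This is an illustrative PyTorch sketch, not the authors' released code; `BinarizeSTE` and `BinaryLinear` are names of our own, and activation binarization would follow the same pattern.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(x) in {-1, +1}. Backward: straight-through
    estimator, passing gradients where |x| <= 1 and zero elsewhere."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    def forward(self, x):
        # Real-valued weights are retained for the update step; only
        # their binarized copy enters the multiply-accumulate, which is
        # what enables XNOR-based arithmetic at run time.
        w_bin = BinarizeSTE.apply(self.weight)
        return nn.functional.linear(x, w_bin, self.bias)
```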
{"_id": "83f651d94997b3a2327fe52bc4fd9436a71957d0", "title": "Lagrangian Texture Advection: Preserving both Spectrum and Velocity Field", "text": "Texturing an animated fluid is a useful way to augment the visual complexity of pictures without increasing the simulation time. But texturing flowing fluids is a complex issue, as it creates conflicting requirements: we want to keep the key texture properties (features, spectrum) while advecting the texture with the underlying flow, which distorts it. In this paper, we present a new, Lagrangian, method for advecting textures: the advected texture is computed only locally and follows the velocity field at each pixel. The texture retains its local properties, including its Fourier spectrum, even though it is accurately advected. Due to its Lagrangian nature, our algorithm can operate on very large, potentially infinite scenes in real time. Our experiments show that it is well suited to a wide range of input textures, including, but not limited to, noise textures."} {"_id": "7cf6f7da2e932da9a73b218c65f0b0264dd25479", "title": "Producing radiologist-quality reports for interpretable artificial intelligence", "text": "Current approaches to explaining the decisions of deep learning systems for medical tasks have focused on visualising the elements that contributed to each decision. We argue that such approaches are not enough to “open the black box” of medical decision-making systems, because they are missing a key component that has been used as a standard communication tool between doctors for centuries: language. We propose a model-agnostic interpretability method that involves training a simple recurrent neural network model to produce descriptive sentences clarifying the decision of deep learning classifiers. We test our method on the task of detecting hip fractures from frontal pelvic x-rays. This process requires minimal additional labelling, despite producing text containing elements that the original deep learning classification model was not specifically trained to detect. The experimental results show that: 1) the sentences produced by our method consistently contain the desired information, 2) the generated sentences are preferred by doctors over current tools that create saliency maps, and 3) the combination of visualisations and generated text is better than either alone."} {"_id": "91ab3ca9fe3add7ce48dc2f97bb9435db1a991b9", "title": "On Divergences and Informations in Statistics and Information Theory", "text": "The paper deals with the f-divergences of Csiszar, generalizing the discrimination information of Kullback, the total variation distance, the Hellinger divergence, and the Pearson divergence. All basic properties of f-divergences, including relations to the decision errors, are proved in a new manner, replacing the classical Jensen inequality by a new generalized Taylor expansion of convex functions. Some new properties are proved too, e.g., relations to statistical sufficiency and deficiency. The generalized Taylor expansion also shows very easily that all f-divergences are average statistical informations (differences between prior and posterior Bayes errors), mutually differing only in the weights imposed on various prior distributions. The statistical information introduced by De Groot and the classical information of Shannon are shown to be extremal cases corresponding to α = 0 and α = 1 in the class of the so-called Arimoto α-informations introduced in this paper for 0
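For reference, the f-divergence family discussed in the last abstract has a standard textbook definition, restated below for discrete distributions; this is background we are supplying, not a quotation from the paper.

```latex
% f-divergence of P from Q, for convex f with f(1) = 0:
D_f(P \,\|\, Q) = \sum_{x} Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right)
% Special cases named in the abstract:
%   Kullback's discrimination information: f(t) = t \log t
%   Total variation distance:              f(t) = \tfrac{1}{2}\,|t - 1|
%   Pearson (chi-squared) divergence:      f(t) = (t - 1)^2
%   Squared Hellinger divergence:          f(t) = (\sqrt{t} - 1)^2
```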